Robots with human behavior

It is often said, and it is true, that we live in a “digital world”. Computer and software systems have grown so complex as to become impenetrable to most people. But it is no less true that our world remains human and that the digital world is a manufactured domain: we created it, we decide how to use it, and we can set it aside, or read the code, analyze the problems and make changes. Unlike human beings, the artifacts of the digital world, the robots, neither live nor die; they can “know” facts about life and reason about them, but they cannot relate life to themselves. How, then, can they grow, as the book under review promises?

Mark H. Lee’s fondness for automata goes back a long way, above all for what they imitate of human behavior: their capacity to adapt, epitomized by feedback systems able to correct, for example, the course of a ship against the onslaught of wind and waves. Intrigued by the relationship between computer, brain and machine ever since his doctoral thesis on sensorimotor control and coordination, Lee has devoted himself to problems that combine engineering and human behavior, such as speech coding, color vision processing and autonomous control. In this mature work he summarizes extensive research carried out in four projects: two funded by the UK Engineering and Physical Sciences Research Council, and another two by the European Commission.
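
The feedback principle Lee admires can be made concrete in a few lines. Below is a minimal sketch of a proportional-derivative course controller for a ship’s heading; the gains, the disturbance model and the toy dynamics are illustrative assumptions, not anything taken from the book.

```python
# Minimal sketch of heading feedback control, the principle Lee highlights.
# All numbers (gains, disturbance, time step) are illustrative assumptions.

KP, KD = 0.8, 0.3      # proportional and derivative gains (assumed values)
DT = 0.1               # control-loop time step in seconds

def rudder_command(error, prev_error):
    """PD law: steer against the heading error and its rate of change."""
    derivative = (error - prev_error) / DT
    return KP * error + KD * derivative

heading, target = 0.0, 10.0       # degrees; wind and waves push the ship off course
prev_error = target - heading
for _ in range(200):
    error = target - heading
    rudder = rudder_command(error, prev_error)
    disturbance = 0.5             # constant push from wind and waves (assumed)
    heading += (rudder - disturbance) * DT   # toy ship dynamics
    prev_error = error

# A PD controller leaves a small steady-state offset against a constant
# disturbance; adding an integral term would remove it.
print(f"final heading: {heading:.2f} deg (target {target} deg)")
```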

One of the projects financed by the European Commission gave rise to the iCub humanoid robot, copies of which are used in some thirty robotics research laboratories around the world. The iCub system requires a humanoid morphology in order to construct an interpretation of its environment that is compatible with the understanding of human cognitive agents. With that idea it was designed, modeled on the anatomy of a three-year-old child. The iCub robot (acronym for “Cognitive Universal Body”) is the mental creature of Giorgio Metta and his collaborators, devised at the Italian Institute of Technology in Genoa in 2010. The one-meter-tall artifact is an open-source robot with 53 degrees of freedom, all of them powered by electric motors.

The humanoid is equipped with three sensory modalities: vision, touch and proprioception, the last being the sense that reports the positions of the body’s joints and perceives where the different parts of the skeletal structure lie in space. At every joint there are precision angle encoders that supply the signals for proprioceptive sensation. When gravity or other external forces come into play, the motors can generate counteracting and correcting forces, but the position sense provided by the proprioceptive sensors always reports accurately the spatial location of the body parts.
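
To see how joint encoders yield a “position sense”, consider a minimal forward-kinematics sketch for a two-link planar arm, a deliberate simplification of iCub’s 53 degrees of freedom; the link lengths and angles here are assumed values.

```python
import math

# Minimal sketch: proprioception as forward kinematics on a 2-link planar arm.
# Link lengths are assumed values; iCub's real kinematic chain is far richer.
L1, L2 = 0.15, 0.12   # upper-arm and forearm lengths in meters (assumed)

def fingertip_position(theta1, theta2):
    """Given encoder readings (joint angles in radians), return the
    fingertip's (x, y) position in the shoulder's frame of reference."""
    elbow_x = L1 * math.cos(theta1)
    elbow_y = L1 * math.sin(theta1)
    tip_x = elbow_x + L2 * math.cos(theta1 + theta2)
    tip_y = elbow_y + L2 * math.sin(theta1 + theta2)
    return tip_x, tip_y

# However gravity loads the arm, the encoders still report the true angles,
# so the computed position remains accurate.
print(fingertip_position(math.radians(30), math.radians(45)))
```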

It also has tactile sensors installed in the skin, especially on the hands, arms and parts of the torso, which perceive contact with objects and play an important role in reaching and grasping, among other actions. The robot’s hands are quite advanced, with good finger control and a tactile feedback loop, but they do not come close to the prehensile sensitivity of the human hand. For vision, it carries charge-coupled device (CCD) color cameras in both eyes, which can scan the environment at the speed of the human eye. The human visual system, however, differs from the way images are formed in a digital camera.
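
The tactile feedback loop mentioned above can be sketched in a few lines: squeeze until the skin sensors report firm contact, then hold. The sensor and motor interfaces and the thresholds below are invented for illustration; they are not iCub’s actual API.

```python
# Hypothetical sketch of a tactile grasp loop. `read_pressure` and
# `close_fingers` stand in for whatever sensor/motor interface a real
# robot exposes -- they are assumptions, not iCub's actual API.

TARGET_PRESSURE = 0.6   # normalized contact pressure to hold an object (assumed)
STEP = 0.02             # how much to close the fingers per iteration (assumed)

def grasp(read_pressure, close_fingers, max_steps=100):
    """Close the hand until the tactile sensors report a firm contact."""
    for _ in range(max_steps):
        if read_pressure() >= TARGET_PRESSURE:
            return True    # firm contact reached: stop squeezing
        close_fingers(STEP)
    return False           # object missed or too small to trigger the sensors

# Toy stand-ins so the sketch runs on its own.
pressure = 0.0
def read_pressure():
    return pressure
def close_fingers(step):
    global pressure
    pressure += step * 0.8   # toy model: closing raises contact pressure

print(grasp(read_pressure, close_fingers))   # -> True
```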

But iCub’s Achilles heel lies in its immobility. The robot has limbs, but lacks the hardware to roam on its own. That means the experiments have been carried out entirely within an egocentric space, a space delimited by the horizon of vision and reach. Phenomena and stimuli outside that egosphere can go unnoticed and will therefore be meaningless to it. Newborn children start out in much the same way, but as they move, first crawling and then walking, they carry their egocentric space to other places, until they build a new kind of space, an allocentric space.
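
Computationally, the step from egocentric to allocentric space is a change of reference frame: once a robot knows its own pose in the world, a body-centered observation can be re-expressed in world coordinates. A minimal 2-D sketch follows; the pose and observation values are assumed.

```python
import math

# Minimal sketch: turning an egocentric observation into an allocentric one.
# A mobile robot that knows its pose (x, y, heading) in the world can map a
# body-centered sighting into world coordinates; all values are assumed.

def ego_to_allo(robot_x, robot_y, robot_heading, obs_x, obs_y):
    """Rotate by the robot's heading, then translate by its position."""
    c, s = math.cos(robot_heading), math.sin(robot_heading)
    world_x = robot_x + c * obs_x - s * obs_y
    world_y = robot_y + s * obs_x + c * obs_y
    return world_x, world_y

# An object seen 1 m straight ahead, by a robot at (2, 3) facing 90 degrees,
# sits at (2, 4) in the shared, allocentric map.
print(ego_to_allo(2.0, 3.0, math.radians(90), 1.0, 0.0))
```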

Two years later, in 2012, another team of artificial intelligence experts took a giant step forward: they discovered how computers could learn for themselves which features to use for automatic image recognition, overcoming a barrier that had hindered the progress of robotics. Intelligent systems could now learn without manual intervention from their designers. This area, which draws on learning methods and masses of data, has been given the name of deep learning [see “Deep Learning,” by Yoshua Bengio; Investigación y Ciencia, August 2016]. A greater involvement of robots in daily life could now be counted on. Over the years, deep learning has had exceptional success in vision and imaging, speech and language processing, pattern and data analytics, and entertainment.
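
The essence of that breakthrough is that convolutional layers learn their own image features instead of having them hand-coded. A minimal sketch of such a network in PyTorch follows; the layer sizes are arbitrary choices for illustration, not the architecture that made the 2012 breakthrough.

```python
import torch
import torch.nn as nn

# Minimal sketch of a convolutional classifier: the convolution filters are
# learned from data, not hand-designed. Layer sizes are arbitrary assumptions.
class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learned edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learned part detectors
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a fake batch of four 32x32 RGB images.
logits = TinyConvNet()(torch.randn(4, 3, 32, 32))
print(logits.shape)   # torch.Size([4, 10])
```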

Looking to the future, superintelligence has been proposed as a form of intelligence capable of ceaseless self-improvement. Linked to it is the idea of the singularity, the point at which autonomous artificial intelligence systems become so powerful as to be uncontrollable. In a kind of clever chain reaction, they would make humans redundant and cast them out of this world. That fiction of a world dominated by robots as a superior digital evolutionary force enjoyed a certain prestige some time ago. Lee does not share that fear. He believes such a future is not viable because, among other reasons, superintelligence presupposes artificial intelligence, which in turn depends on human beings; moreover, we know of no mechanisms for it. Nor does it seem likely that superintelligent systems would take over the Internet without humans stepping in to correct the situation: the Network’s systems have human controls and points of intervention. On the other hand, it is presumable, and desirable, that the design of humanoid robots will transcend specific tasks so that they act by themselves and carry open-ended learning programs not limited to particular functions, which will require building adaptive mechanisms and general learning methods to address unforeseen situations and solve new problems.

One source of inspiration will be neuroscience. Cortical maps show how learning structures might be built, and many applications in robotics have already been drawn from the sensory structure of the ocular system, the attention mechanisms of the superior colliculus and the sensory system of touch. Lee also points to psychology. In the past, the brain was the model for artificial intelligence to imitate; now and in the future, that role could be played by developmental psychology, which rests on a vast arsenal of data and ideas about human behavior.

Robotics is advancing steadily on several flanks. The latest spectacular milestone, in 2021, after the book’s publication, has been the synchronized navigation of schools of robotic fish. Synchronized swimming is one of the most important lessons in a school of fish: coordination helps them find food and escape predators. Such a feat had never before been achieved in robots. But in January 2021 a fleet of seven fish-like robots was built that could swim in a circle without bumping into one another. The researchers developed a series of algorithms for the hardware to coordinate collective behaviors, from swimming in a circle to dispersing at the edges of the pool.
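
Collective behaviors of this kind typically emerge from simple local rules that each robot applies to its neighbors. The following dispersion rule is a sketch in that spirit, not the researchers’ published algorithms; the sensing radius and step size are assumed.

```python
import random

# Minimal sketch of a local dispersion rule: each robot moves away from the
# average position of its nearby neighbors. It illustrates the flavor of
# rule-based collective behavior; it is not the researchers' actual algorithm.

NEIGHBOR_RADIUS = 2.0   # how far a robot can "see" its neighbors (assumed)
SPEED = 0.1             # step size per update (assumed)

def disperse_step(positions):
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Find neighbors within sensing range.
        near = [(nx, ny) for j, (nx, ny) in enumerate(positions)
                if j != i and (nx - x) ** 2 + (ny - y) ** 2 < NEIGHBOR_RADIUS ** 2]
        if near:
            # Move directly away from the neighbors' centroid.
            cx = sum(nx for nx, _ in near) / len(near)
            cy = sum(ny for _, ny in near) / len(near)
            dx, dy = x - cx, y - cy
            norm = (dx * dx + dy * dy) ** 0.5 or 1.0
            x, y = x + SPEED * dx / norm, y + SPEED * dy / norm
        new_positions.append((x, y))
    return new_positions

# Seven robots start bunched together and spread out over 50 updates.
swarm = [(random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)) for _ in range(7)]
for _ in range(50):
    swarm = disperse_step(swarm)
print(swarm)
```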
