They chose a physical robot with four coupled degrees of freedom that could record action-sensation pairs as it moved through 1,000 random trajectories.
This step is not unlike a babbling baby observing its hands.
They used deep learning to train a self-model from scratch, very much in line with what DeepMind did with AlphaGo. In the video, you can see the robot's performance on two separate tasks: a pick-and-place task and a handwriting task.
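As a toy illustration of the idea (not the authors' architecture), a self-model of this kind can be sketched as a small network trained on babbled action-sensation pairs. Everything below is an assumption for the sketch: a planar four-joint arm stands in for the real robot, the "sensation" is the end-effector position, and the self-model is a one-hidden-layer network trained with plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the robot: forward kinematics of a planar 4-joint arm.
# (Hypothetical geometry; the paper's robot and sensors differ.)
LINK_LENGTHS = np.array([1.0, 0.8, 0.6, 0.4])

def end_effector(angles):
    """Map 4 joint angles to the end-effector (x, y) position."""
    cum = np.cumsum(angles, axis=-1)
    x = (LINK_LENGTHS * np.cos(cum)).sum(axis=-1)
    y = (LINK_LENGTHS * np.sin(cum)).sum(axis=-1)
    return np.stack([x, y], axis=-1)

# "Motor babbling": 1,000 random action-sensation pairs.
actions = rng.uniform(-np.pi, np.pi, size=(1000, 4))
sensations = end_effector(actions)

# One-hidden-layer self-model: predicts sensation from action.
W1 = rng.normal(0, 0.1, (4, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, (64, 2)); b2 = np.zeros(2)

def forward(a):
    h = np.tanh(a @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred, target):
    return ((pred - target) ** 2).mean()

_, pred0 = forward(actions)
initial_loss = mse(pred0, sensations)

# Full-batch gradient descent on the babbled data.
lr = 0.05
for step in range(2000):
    h, pred = forward(actions)
    err = 2 * (pred - sensations) / pred.size   # dLoss/dPred
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)            # tanh derivative
    gW1 = actions.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(actions)
final_loss = mse(pred, sensations)
```

The key point the sketch shows is that the self-model is task-agnostic: once it predicts sensations from actions, the same model can be queried by a planner for pick-and-place, handwriting, or any other downstream task.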
Now its creators are dreaming of more elevated endeavours:
Self-imaging will be key to allowing robots to move away from the confinements of so-called narrow AI toward more general abilities. We conjecture that this separation of self and task may have also been the evolutionary origin of self-awareness in humans.
Let’s stay tuned.
(1) Kwiatkowski, Robert, and Hod Lipson. 'Task-Agnostic Self-Modeling Machines'. Science Robotics, vol. 4, no. 26, Jan. 2019, eaau9354. doi:10.1126/scirobotics.aau9354.