Can a robot create a self-model with no prior knowledge?

There might be something going on here, or not.

Researchers from Columbia Engineering are trying to “explain away” consciousness with a robot designed to answer the question: Can a robot create a self-model with no prior knowledge?

They chose a physical robot arm with four coupled degrees of freedom, able to record action-sensation pairs as it moved through 1,000 random trajectories.

This step is not unlike a babbling baby observing its hands.
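To make the babbling step concrete, here is a minimal sketch in Python of what recording action-sensation pairs over random trajectories could look like. The planar four-joint arm and its `forward_kinematics` function are purely illustrative assumptions, not the actual robot or sensors used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_kinematics(angles):
    """Toy stand-in for the robot's sensed outcome: the end-effector
    position of a planar 4-joint chain with unit-length links.
    (Hypothetical; the real arm and sensors are not described here.)"""
    cum = np.cumsum(angles)
    return np.array([np.cos(cum).sum(), np.sin(cum).sum()])

# "Motor babbling": execute random joint commands and record each
# (action, sensation) pair, analogous to the 1,000 random trajectories.
actions = rng.uniform(-np.pi, np.pi, size=(1000, 4))
sensations = np.array([forward_kinematics(a) for a in actions])

dataset = list(zip(actions, sensations))  # raw material for the self-model
```

The resulting dataset of paired commands and observed outcomes is all the self-modeling step needs as input.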

They used deep learning to train a self-model from scratch, very much in line with what DeepMind did with AlphaGo. In the video, you can see the robot's performance on two separate tasks: a pick-and-place task and a handwriting task.
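The idea of learning a self-model from scratch can be sketched as fitting a network that predicts sensation from action on the babbling data. Everything below is an illustrative assumption, not the paper's architecture: a toy "sensation" target and a one-hidden-layer network trained with plain gradient descent stand in for the real robot and its deep self-model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical babbling data: random 4-joint commands and a toy "sensed"
# outcome (sum of sines of the joint angles); purely illustrative.
actions = rng.uniform(-1.0, 1.0, size=(1000, 4))
sensations = np.sin(actions).sum(axis=1, keepdims=True)

# Small network trained from scratch to predict sensation from action --
# a minimal stand-in for the deep self-model in the paper.
W1 = rng.normal(0, 0.5, (4, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(500):
    h = np.tanh(actions @ W1 + b1)        # forward pass
    pred = h @ W2 + b2
    err = pred - sensations
    loss = float(np.mean(err ** 2))       # mean-squared prediction error
    if epoch == 0:
        initial_loss = loss
    # backpropagation of the MSE gradient
    g_pred = 2 * err / len(actions)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)
    gW1 = actions.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Once trained, such a model lets the robot plan in imagination, querying the self-model instead of its motors, which is what makes it reusable across tasks like pick-and-place and handwriting.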

Now the robot's creators are dreaming of more elevated endeavours:

Self-imaging will be key to allowing robots to move away from the confinements of so-called narrow AI toward more general abilities. We conjecture that this separation of self and task may have also been the evolutionary origin of self-awareness in humans.

Let’s stay tuned.


(1) Kwiatkowski, Robert, and Hod Lipson. 'Task-Agnostic Self-Modeling Machines.' Science Robotics, vol. 4, no. 26, Jan. 2019, p. eaau9354. doi:10.1126/scirobotics.aau9354.
