Friday 4 March 2011

Making sense of robots: the hermeneutic challenge

One of the challenges of the artificial culture project that we knew we would face from the start is that of making sense of the free running experiments in the lab. One of the project investigators - philosopher Robin Durie - called this the hermeneutic challenge. In the project proposal Robin wrote:
what means will we be able to develop by which we can identify/recognise meaningful/cultural behaviour [in the robots]; and, then, what means might we go on to develop for interpreting or understanding this behaviour and/or its significance?
Now, more than 3 years on, we come face to face with that question. Let me clarify: we are not - or at least not yet - claiming to have identified or recognised emerging robot culture. We do, however, more modestly claim to have demonstrated new behavioural patterns (memes) that emerge and - for a while at least - are dominant. It's an open-ended evolutionary process in which the dominant 'species' of memes come and go. Maybe these clusters of closely related memes could be labelled behavioural traditions?

Leaving that speculation aside, a more pressing problem in recent months has been to try to understand how and why certain behavioural patterns emerge at all. Let me explain. We typically seed each robot with a behavioural pattern; it is literally a sequence of movements. Think of it as a dance. But we choose these initial dances arbitrarily - movements that describe a square or triangle for instance - without any regard whatsoever for whether these movement sequences are easy or hard for the robots to imitate.

Not surprisingly then, the initial dances quickly mutate to different patterns, sometimes more complex and sometimes less. But what is it about the robot's physical shape, its sensorium, and the process of estimation inherent in imitation that gives rise to these mutations? Let me explain why this is important. Our robots and you, dear reader, have one thing in common: you both have bodies. And bodies bring limitations: firstly, your body doesn't allow you to make any movement imaginable - only those that your shape, structure and muscles allow; and secondly, if you try to watch and imitate someone else's movements you have to guess some of what they're doing (because you don't have a perfect 360-degree view of them). That's why your imitated copy of someone else's behaviour is always a bit different. Exactly the same limitations give rise to variation in the robots' imitated behaviours.
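The two limitations above - partial observation and imperfect estimation - can be sketched as a crude model of imitation. Everything here (the segment encoding, the noise levels, the fraction of the dance the observer manages to see) is an illustrative assumption, not a description of the robots' actual controllers.

```python
import random

def imitate(dance, angle_noise=10.0, dist_noise=1.0, seen_fraction=0.8, rng=None):
    """Crude model of embodied imitation: the observer sees only part of the
    dance (partial view) and estimates each segment with error (noisy
    sensing), so the copy is always a mutated version of the original."""
    rng = rng or random.Random()
    # Partial observation: only a fraction of the segments are seen.
    seen = dance[: max(1, round(len(dance) * seen_fraction))]
    # Noisy estimation: each seen segment is perturbed; distances stay >= 0
    # because a body cannot move a negative distance.
    return [(a + rng.gauss(0, angle_noise), max(0.0, d + rng.gauss(0, dist_noise)))
            for a, d in seen]

original = [(90.0, 10.0)] * 4                     # a square seed dance
copy = imitate(original, rng=random.Random(42))   # a mutated child meme
```

Iterating this - copies of copies - is what produces the drifting lineages of memes described below.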

Now it may seem a relatively trivial matter to watch the robots imitate each other and then figure out how the mutations in successive copies (and copies of copies) are determined by the robots' shape, sensors and programming. But it's not, and we find ourselves having to devise new ways of visualising the experimental data in order to make sense of what's going on. The picture below is one such visualisation*; it's actually a family tree of memes, with parent memes at the top and child memes (i.e. copies) shown branching below parents.

Unlike a human family tree, each child meme has only one parent. In this 'memeogram' there are two memes at the start: meme 1 is a triangle movement pattern, and meme 2 is a square movement pattern. In this experiment there are 4 robots, and it's easy to see here that the triangle meme dominates - it and its descendants are seen much more often.

The diagram also shows which child-memes are high quality copies of their parents - these are shown in brown with bold arrows connecting them to their parent-memes. This allows us to easily see clusters of similar memes, for instance in the bottom-left there are 7 closely related and very similar memes (numbered 36, 37, 46, 49, 50, 51 and 55). Does this cluster represent a dominant 'species' of memes?
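One way to make the notion of a cluster precise is to treat high-fidelity copy links as edges in a graph and take connected components. The sketch below does this for an invented edge list that loosely echoes the seven-meme cluster mentioned above; the edges themselves are illustrative assumptions, not the experiment's data.

```python
from collections import defaultdict

def meme_clusters(high_fidelity_edges):
    """Group memes into clusters: two memes belong to the same cluster if they
    are connected by a chain of high-quality parent-child copy edges."""
    graph = defaultdict(set)
    for parent, child in high_fidelity_edges:
        graph[parent].add(child)
        graph[child].add(parent)
    seen, clusters = set(), []
    for node in graph:
        if node in seen:
            continue
        # Depth-first traversal to collect one connected component.
        stack, cluster = [node], set()
        while stack:
            meme = stack.pop()
            if meme in cluster:
                continue
            cluster.add(meme)
            stack.extend(graph[meme] - cluster)
        seen |= cluster
        clusters.append(cluster)
    return clusters

# Invented high-fidelity edges for illustration only.
edges = [(36, 37), (36, 46), (46, 49), (49, 50), (50, 51), (51, 55)]
clusters = meme_clusters(edges)
```

On this toy input the seven memes form a single cluster - a candidate 'species' in the sense asked about above.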


*created by Mehmet Erbas, and posted here with his permission.

1 comment:

Frances Griffiths said...

The behavioural patterns chosen for the robots are guided by our knowledge of what robots are able to do. The robots are able to observe and move, so we choose behaviours that use these abilities. The robots cannot smell, so we don't choose a behaviour based on smell (although we could have chosen to use robots that can smell). If we assume that biological and social evolution to some extent happen concurrently, then we are starting with a model fairly late in bio-social evolution, where key abilities are established and change little - or very slowly - relative to the speed of change of behaviours.
Comment by Frances Griffiths, member of the artificial culture team
