Friday, 4 March 2011
Making sense of robots: the hermeneutic challenge
"what means will we be able to develop by which we can identify/recognise meaningful/cultural behaviour [in the robots]; and, then, what means might we go on to develop for interpreting or understanding this behaviour and/or its significance?"

Now, more than 3 years on, we come face to face with that question. Let me clarify: we are not - or at least not yet - claiming to have identified or recognised emerging robot culture. We do, however, more modestly claim to have demonstrated new behavioural patterns (memes) that emerge and - for a while at least - are dominant. It's an open-ended evolutionary process in which the dominant 'species' of memes come and go. Maybe these clusters of closely related memes could be labelled behavioural traditions?
Leaving that speculation aside, a more pressing problem in recent months has been to try to understand how and why certain behavioural patterns emerge at all. Let me explain. We typically seed each robot with a behavioural pattern: literally a sequence of movements. Think of it as a dance. But we choose these initial dances arbitrarily - movements that describe a square or a triangle, for instance - without any regard for whether these movement sequences are easy or hard for the robots to imitate.
Not surprisingly then, the initial dances quickly mutate into different patterns, sometimes more complex and sometimes less. But what is it about a robot's physical shape, its sensorium, and the process of estimation inherent in imitation that gives rise to these mutations? Let me explain why this is important. Our robots and you, dear reader, have one thing in common: you both have bodies. And bodies bring limitations: firstly, because your body doesn't allow you to make any movement imaginable - only those that your shape, structure and muscles allow - and secondly, because if you try to watch and imitate someone else's movements you have to guess some of what they're doing (you don't have a perfect 360-degree view of them). That's why your imitated copy of someone else's behaviour is always a bit different. Exactly the same limitations give rise to variation in the robots' imitated behaviours.
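For readers who find code clearer than prose, the mechanism just described can be sketched as a toy simulation. To be clear, this is an illustrative sketch and not the project's actual robot code: the move vocabulary, the error probability and the fidelity measure are all invented for the example. The point is only that imperfect estimation during copying is enough to make dances drift, generation by generation.

```python
import random

# Invented move vocabulary for this sketch only.
MOVES = ["fwd", "back", "left", "right"]

def imitate(dance, p_error=0.2, rng=random):
    """Copy a dance move by move, occasionally mis-reading a move -
    standing in for the observer's imperfect, partial view."""
    return [rng.choice(MOVES) if rng.random() < p_error else move
            for move in dance]

def fidelity(parent, child):
    """Fraction of moves copied correctly (same-length dances)."""
    return sum(a == b for a, b in zip(parent, child)) / len(parent)

rng = random.Random(1)                 # seeded for a reproducible run
seed = ["fwd", "left"] * 3             # an arbitrary 'triangle-ish' seed dance
lineage = [seed]
for _ in range(5):                     # copies of copies of copies...
    lineage.append(imitate(lineage[-1], rng=rng))
```

Run repeatedly, the later dances in `lineage` typically differ from the seed, and from each other, purely because each copy is made from a slightly mis-read parent.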
Now it may seem a relatively trivial matter to watch the robots imitate each other and then figure out how the mutations in successive copies (and copies of copies) are determined by the robots' shape, sensors and programming. But it's not, and we find ourselves having to devise new ways of visualising the experimental data in order to make sense of what's going on. The picture below is one such visualisation*; it's actually a family tree of memes, with parent memes at the top and child memes (i.e. copies) shown branching below parents.
The diagram also shows which child-memes are high quality copies of their parents - these are shown in brown with bold arrows connecting them to their parent-memes. This allows us to easily see clusters of similar memes, for instance in the bottom-left there are 7 closely related and very similar memes (numbered 36, 37, 46, 49, 50, 51 and 55). Does this cluster represent a dominant 'species' of memes?
*created by Mehmet Erbas, and posted here with his permission.
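One way to make the family-tree idea concrete is a small sketch of how such a lineage might be represented, and how clusters joined by high-quality copy edges (the 'bold arrows' above) could be extracted. The meme ids below echo those in the diagram, but the parent links and fidelity values are invented for illustration; this is not the project's analysis code, and the 0.9 threshold is an arbitrary choice for the example.

```python
# meme id -> (parent meme id, copy fidelity); parent is None for a seed meme.
# Ids echo the diagram; links and fidelity values are invented here.
lineage = {
    1:  (None, 1.0),
    36: (1,  0.55),   # a low-fidelity copy founds a new branch
    37: (36, 0.92), 46: (37, 0.97), 49: (46, 0.94),
    50: (49, 0.96), 51: (50, 0.91), 55: (51, 0.93),
    60: (1,  0.40),   # another mutated offshoot
}

def clusters(tree, threshold=0.9):
    """Group memes joined by high-fidelity copy edges, using a tiny
    union-find over the parent links."""
    rep = {}                     # meme -> current representative
    def find(m):
        while rep.get(m, m) != m:
            m = rep[m]
        return m
    for child, (parent, q) in tree.items():
        rep.setdefault(child, child)
        if parent is not None and q >= threshold:
            rep.setdefault(parent, parent)
            rep[find(child)] = find(parent)   # merge child's group into parent's
    groups = {}
    for m in tree:
        groups.setdefault(find(m), set()).add(m)
    return [g for g in groups.values() if len(g) > 1]
```

With these made-up values, `clusters(lineage)` recovers a single group of seven closely related memes, analogous to the bottom-left cluster in the diagram.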
"Culture" in the Artificial Culture Project
This seems to me to raise a series of questions about the basic role of the concept of "culture" in our project.
The first of these is the extent to which "culture" is functioning as an "empty signifier" in the project at the moment; & whether, in fact, the ultimate issue for our research is not culture per se, but rather the process by means of which embodied variations are transmitted through a "community" of embodied agents.
Whilst this question is of some interest to cultural theorists, I don't think it would represent a central area of concern for research in this field. So what sorts of questions, & what sorts of research issues, might cultural theorists who came to our project find interesting?
My hunch is that many cultural theorists would be most interested in three aspects of our work:
(1) the actual activities of the robots themselves, & the meme/gene co-evolution element of our work
(2) the way that the concept of "memes" has functioned in our research (this being a very contested notion in cultural theory)
(3) the evolution of our behaviour as researchers, relative to, & based on our interactions with, the robots.
This third aspect would sit within the ethnographic dimension of our work. It's the kind of thing that someone like Bruno Latour, as a representative of the area of cultural theory known as Science & Technology Studies, would be interested in. And, I think, it's a fine example of what Andy Pickering, in his work in STS, calls the "mangle of practice".
I think it would be interesting to set in parallel the reflection on "machinic" creativity within the robot society; creativity within the hermeneutic dimension of the project (creativity in interpreting the results); & the creativity of the "culture" of the Artificial Culture research project, or research team.
One further way in which this last might be of interest is as a case study in creative, trans-disciplinary, research working.
Monday, 28 February 2011
The nature of the social agent
I found this a helpful way of thinking about the robots.
Paper details: Carley K and Allen N. The nature of the social agent. Journal of Mathematical Sociology. 1994. 19 (4) 221-262.
Saturday, 26 February 2011
Medicine in Society: a complex mix
Tuesday, 1 February 2011
Robot Imitation: What do children think?
One of our main research problems was whether we as humans can identify emergent patterns of behaviour within a swarm of robots. To assist in this interpretation, I showed a speeded-up video of e-puck imitation http://www.youtube.com/watch?v=hygWbKcAaTs to a group of ten children (aged 7-8) and asked them what they thought was happening in the video. I deliberately did not ask whether they could 'spot any patterns', as I felt that this was a leading question.
The majority response was that 'the robots are making triangles'. Only one child stated that 'they are copying each other'. I then showed the children the Player/Stage video, first without tracks and subsequently with tracks. Whilst they were watching the version with tracks, one child remarked: 'I think the robot people made the robots to make shapes but these robots can't do it very properly so maybe the robot is broken. I think you need to take the robots back for the robot scientists to fix them'.
Even though the children were engaged in watching the video - which indicates that they were not bored - their responses did not imply that any patterns were recognised. What does this mean for our research? Are children perhaps not the best candidates for pattern-spotting? Or maybe there are no patterns for children to spot?