Friday, 11 March 2011

Creative Commons Licence

To enable other people to take up the ideas on our project blog in their own work, and yet attribute their source, we can place a Creative Commons Attribution Licence on this blog. Further information about this licence can be found at:
http://creativecommons.org/licenses/
This seems to be in keeping with our aim for open science and our desire for engagement with other people, both scientists and non-scientists, about our research.

Friday, 4 March 2011

Making sense of robots: the hermeneutic challenge

One of the challenges of the artificial culture project that we knew we would face from the start is that of making sense of the free-running experiments in the lab. One of the project investigators - philosopher Robin Durie - called this the hermeneutic challenge. In the project proposal Robin wrote:
what means will we be able to develop by which we can identify/recognise meaningful/cultural behaviour [in the robots]; and, then, what means might we go on to develop for interpreting or understanding this behaviour and/or its significance?
Now, more than 3 years on, we come face to face with that question. Let me clarify: we are not - or at least not yet - claiming to have identified or recognised emerging robot culture. We do, however, more modestly claim to have demonstrated new behavioural patterns (memes) that emerge and - for a while at least - are dominant. It's an open-ended evolutionary process in which the dominant 'species' of memes come and go. Maybe these clusters of closely related memes could be labelled behavioural traditions?

Leaving that speculation aside, a more pressing problem in recent months has been to try to understand how and why certain behavioural patterns emerge at all. Let me explain. We typically seed each robot with a behavioural pattern; it is literally a sequence of movements. Think of it as a dance. But we choose these initial dances arbitrarily - movements that describe a square or triangle, for instance - without any regard whatsoever for whether these movement sequences are easy or hard for the robots to imitate.
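
For the curious, here is a minimal sketch of one way such a seed dance could be encoded - as a list of (turn angle, distance) segments. The representation and function names are illustrative assumptions for this post, not our actual controller code.

```python
# Illustrative only: one plausible encoding of a seed "dance" as a list of
# (turn_deg, distance) segments. The real robot controllers may use a
# different representation entirely.

def square_dance(side=1.0):
    """A square: four equal sides with 90-degree turns between them."""
    return [(90.0, side)] * 4

def triangle_dance(side=1.0):
    """An equilateral triangle: three sides with 120-degree exterior turns."""
    return [(120.0, side)] * 3

# Each robot is seeded with one such dance before the experiment starts;
# the choice of square or triangle is arbitrary, made without regard to how
# easy the shape is for the robots to imitate.
seed_memes = {1: triangle_dance(), 2: square_dance()}
```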

Not surprisingly then, the initial dances quickly mutate into different patterns, sometimes more complex and sometimes less. But what is it about the robot's physical shape, its sensorium, and the process of estimation inherent in imitation that gives rise to these mutations? Let me explain why this is important. Our robots and you, dear reader, have one thing in common: you both have bodies. And bodies bring limitations: firstly because your body doesn't allow you to make any movement imaginable - only ones that your shape, structure and muscles allow - and secondly because if you try to watch and imitate someone else's movements you have to guess some of what they're doing (because you don't have a perfect 360-degree view of them). That's why your imitated copy of someone else's behaviour is always a bit different. Exactly the same limitations give rise to variation in imitated behaviours in the robots.
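
To make the point about estimation concrete, here is a hedged sketch of how observation noise alone can mutate a dance during imitation. The Gaussian noise model and its magnitudes are invented for illustration; they stand in for, rather than reproduce, the robots' actual sensing and imitation algorithms.

```python
import random

def imitate(dance, angle_noise_deg=10.0, distance_noise=0.1):
    """Return an imperfect copy of a dance.

    dance is a list of (turn_deg, distance) segments, as in the sketch above.
    Each segment is re-estimated with Gaussian noise, standing in for limited
    sensing (no 360-degree view of the demonstrator) and the imitator's own
    bodily constraints.
    """
    return [
        (turn + random.gauss(0.0, angle_noise_deg),
         max(0.0, dist + random.gauss(0.0, distance_noise)))
        for turn, dist in dance
    ]

# A chain of copies-of-copies drifts a little further from the seed pattern
# with every generation.
meme = [(120.0, 1.0)] * 3          # the triangle seed
for generation in range(5):
    meme = imitate(meme)
```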

Now it may seem a relatively trivial matter to watch the robots imitate each other and then figure out how the mutations in successive copies (and copies of copies) are determined by the robots' shape, sensors and programming. But it's not, and we find ourselves having to devise new ways of visualising the experimental data in order to make sense of what's going on. The picture below is one such visualisation*; it's actually a family tree of memes, with parent memes at the top and child memes (i.e. copies) shown branching below parents.

Unlike a human family tree, each child meme has only one parent. In this 'memeogram' there are two memes at the start, numbered 1 and 2: meme 1 is a triangle movement pattern and meme 2 is a square movement pattern. In this experiment there are 4 robots, and it's easy to see here that the triangle meme dominates - it and its descendants appear much more often.

The diagram also shows which child-memes are high quality copies of their parents - these are shown in brown with bold arrows connecting them to their parent-memes. This allows us to easily see clusters of similar memes, for instance in the bottom-left there are 7 closely related and very similar memes (numbered 36, 37, 46, 49, 50, 51 and 55). Does this cluster represent a dominant 'species' of memes?


*created by Mehmet Erbas, and posted here with his permission.
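
As a rough illustration of the bookkeeping behind such a memeogram, the sketch below builds the parent-child tree from a handful of imitation records, counts descendants to gauge which seed meme dominates, and groups memes joined by high-fidelity copies. The records, meme IDs and the 0.8 fidelity threshold are invented for illustration; they are not the experiment's data.

```python
# Illustrative bookkeeping behind a memeogram. Each record is
# (child_id, parent_id, fidelity), with fidelity a copy-quality score in
# [0, 1]. The records below are invented, not real experimental data.
from collections import defaultdict

records = [
    (3, 1, 0.90), (4, 2, 0.50), (5, 3, 0.85), (6, 3, 0.40), (7, 5, 0.90),
]

children = defaultdict(list)
for child, parent, fidelity in records:
    children[parent].append(child)

def descendants(meme_id):
    """All memes descended from meme_id (each child has exactly one parent)."""
    out = []
    for c in children[meme_id]:
        out.append(c)
        out.extend(descendants(c))
    return out

# Dominance: which seed meme (1 = triangle, 2 = square) has more descendants?
print({seed: len(descendants(seed)) for seed in (1, 2)})   # {1: 4, 2: 1}

# Clusters of closely related memes: memes joined by high-fidelity copies
# (the bold links in the figure), using an arbitrary threshold of 0.8.
HIGH_FIDELITY = 0.8
links = defaultdict(set)
for child, parent, fidelity in records:
    if fidelity >= HIGH_FIDELITY:
        links[parent].add(child)
        links[child].add(parent)

def cluster_of(meme_id, seen=None):
    """The set of memes reachable from meme_id via high-fidelity links."""
    seen = set() if seen is None else seen
    seen.add(meme_id)
    for other in links[meme_id]:
        if other not in seen:
            cluster_of(other, seen)
    return seen

print(cluster_of(3))   # {1, 3, 5, 7} with the invented records above
```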

"Culture" in the Artificial Culture Project

Can the Artificial Culture Project "teach" anything about culture to those working in "cultural studies"?

This seems to me to raise a series of questions about the basic role of the concept of "culture" in our project.

The first of these is the extent to which "culture" is functioning as an "empty signifier" in the project at the moment; & whether, in fact, the ultimate issue for our research is not culture per se, but rather the process by means of which embodied variations are transmitted through a "community" of embodied agents.

Whilst this question is of some interest to cultural theorists, I don't think it would represent a central area of concern for research in this field. So what sorts of questions, & what sorts of research issues, might cultural theorists who came to our project find interesting?

My hunch is that many cultural theorists would be most interested in three aspects of our work:
(1) the actual activities of the robots themselves, & the meme/gene co-evolution element of our work
(2) the way that the concept of "memes" has functioned in our research (this being a very contested notion in cultural theory)
(3) the evolution of our behaviour as researchers, relative to, & based on our interactions with, the robots.

This third aspect would sit within the ethnographic dimension of our work. It's the kind of thing that someone like Bruno Latour, as a representative of the area of cultural theory known as Science & Technology Studies, would be interested in. And, I think, it's a fine example of what Andy Pickering, in his work in STS, calls the "mangle of practice".

I think it would be interesting to draw parallels between the reflection on "machinic" creativity within the robot society; creativity within the hermeneutic dimension of the project (creativity in interpreting the results); & the creativity of the "culture" of the Artificial Culture research project, or research team.

One further way in which this last might be of interest is as a case study in creative, trans-disciplinary research practice.

Medicine in Society: a complex mix

I was honoured to give my inaugural lecture as Professor of Medicine in Society on January 18th 2011 at Warwick Medical School, University of Warwick. The lecture considers the artificial culture project (towards the end) and is available to read at: http://blogs.warwick.ac.uk/fegriffiths/

Monday, 28 February 2011

The nature of the social agent

A classic paper by Kathleen Carley and Allen Newell classifies different types of social agent, as a useful starting point for social simulation. Based on their classification our robots seem to be cognitive agents in real-time interaction. The interaction at present is imitation. Through this imitation the robots might evolve in terms of their individual behaviours (as agents) as the context evolves (context includes the other agents - the other robots - and the physical environment). Carley and Newell's classification suggests that as the robots evolve and become emotional cognitive agents, the processing capability required of each robot can become less. If interaction leads to the development of social structure, social goals and then culture, the environment becomes increasingly enriched.
I found this a helpful way of thinking about the robots.
Paper details: Carley K and Newell A. The nature of the social agent. Journal of Mathematical Sociology. 1994; 19(4): 221-262.
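
Purely as an aide-memoire, the toy sketch below records the two dimensions of the classification as summarised in this post; the labels and their ordering follow my summary rather than the paper's own tables, so treat them as an approximation.

```python
# Toy aide-memoire only: the two dimensions of the classification as
# summarised in this post (not the paper's exact tables), ordered from
# simpler to richer.
AGENT_TYPES = ["cognitive agent", "emotional cognitive agent"]
SITUATIONS = ["real-time interaction", "social structure",
              "social goals", "culture"]

# Where this post places the artificial culture robots at present.
robots_now = ("cognitive agent", "real-time interaction")
```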

Saturday, 26 February 2011

Medicine in Society: a complex mix

I had the honour of giving my inaugural lecture as Professor of Medicine in Society on January 18th 2011 at Warwick Medical School, University of Warwick. During the lecture I reflected on how the artificial culture project differs from the research I do most of the time on health and health care. Much of my research is enmeshed with its locality and time, whatever the extent of the locality and however long the time. The artificial culture project attempts to step outside the constraints of time and locality in building a robot society. Full text to follow.

Tuesday, 1 February 2011

Robot Imitation: What do children think?

One of our main research problems was whether we as humans can identify emergent patterns of behaviour within a swarm of robots. To assist in this interpretation, I showed a video of e-puck imitation http://www.youtube.com/watch?v=hygWbKcAaTs (speeded up) to a group of ten children (aged 7-8) and asked them what they thought was happening in the video. I specifically did not ask whether they could ‘spot any patterns’, as I felt that this would be a leading question.

The majority response was that ‘the robots are making triangles’. Only one child stated that ‘they are copying each other’. I then showed the children the Player/Stage video, first without tracks and subsequently with tracks. Whilst they were watching the version with tracks, one child remarked: ‘I think the robot people made the robots to make shapes but these robots can’t do it very properly so maybe the robot is broken. I think you need to take the robots back for the robot scientists to fix them’.

Even though the children were engaged in watching the video, which suggests they were not bored, their responses did not indicate that any patterns had been recognised. What does this mean for our research? Are children not the best candidates for pattern spotting? Or maybe there are simply no patterns for the children to spot.