Exploring forms and norms – “Terminal” installation at the Sackler Gallery, DC

One of the things I sometimes consider on this blog is how design and interactive art can help us to explore our relationships to technology and how we see the world. “Exploring forms and norms” is an occasional series of posts on this topic.

By now the phrase “we’re all connected” has become almost synonymous with network maps showing the links between different people or nodes. Whether the maps show who interacts with whom within an organisation or which scientists around the world collaborate, network diagrams start to add a systems perspective to our own interactions.

However, one thing these network maps don’t really describe is the consequence of all this connection – the sometimes subtle causes and effects of our inter-relatedness. If that department over there is closed down or these two friends of friends meet, then so what for me – or anyone else in the system?

The ultimate link fest! Image credit: https://www.flickr.com/photos/chanceprojects/4388266976/


Emergence in communities – from quorum sensing in bacteria to human cohorts

I got into an interesting conversation recently at a conference about what “emergence” looks like in practice. It’s one of those words that’s increasingly used to describe the power of communities to self-organise (e.g. “emergence over authority” is one of the chapters of Whiplash by Joi Ito of the MIT Media Lab). And yet I hadn’t fully appreciated how emergence plays out in groups. At least, until I realised that emergence is what I was working on as a graduate student – without ever describing it in those terms.

My biochemistry research focused on quorum sensing in bacteria – a mechanism by which a group of bacteria of the same species coordinate to produce a compound at high population density that’s not seen when the same bugs exist at a lower density. The specific compound produced varies with the type of bacteria – sometimes it’s a pigment, other times an antibiotic or a specific set of enzymes. But essentially, quorum sensing is about how bacteria communicate as a group to decide when to make this population-dependent chemical. So I often joke that my interests in collaborative behaviour took the long route from studying uni-cellular organisms to multi-cellular ones!

Emergence in bacteria – a lightbulb moment!
Image credit: https://www.flickr.com/photos/ajc1/252308050/


Instant replay: how a study of online communities helps to re-run scenarios in order to understand popularity

In the last post, we looked at how groups behave when binary decisions are involved. There, it’s assumed that each individual in the group has their own threshold that needs to be exceeded before they’ll take action. Granovetter’s riot model provided a useful place to start thinking about why the interactions amongst individuals in a group are important, but the simple threshold model doesn’t take into account more complex dynamics that are involved in social relationships or how these might affect decision making.

One noted social phenomenon is cumulative advantage (also known as preferential attachment). Here, once a few people express their liking for something, it will become more popular still, and any differences between the popular choice and any less popular alternatives will be amplified. Cumulative advantage tells us that it’s the number of people that like something that’s important to its success – not necessarily any intrinsic qualities of the object itself. As Duncan J. Watts argues in his book Everything is Obvious, this goes against our common sense feeling that it must be that a popular item has some special, distinguishing features.

If we had the chance to repeat life multiple times, we could test which of the two ideas was actually true. If it’s the intrinsic features that are important, we’d expect that every time we replayed history, the same item would emerge on top. But if cumulative advantage is at work, then different items might emerge as favourites each time.
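This “replaying history” idea is easy to sketch as a simulation. Below is a minimal Pólya-urn-style model in Python (the function name and parameters are mine, purely for illustration): two items of identical quality compete, and each new person simply picks an item with probability proportional to its current likes.

```python
import random

def replay(n_choices=1000, seed=None):
    """One 'run of history': two identical items compete under
    cumulative advantage (a simple Polya urn)."""
    rng = random.Random(seed)
    likes = [1, 1]  # both items start with one like each
    for _ in range(n_choices):
        # the chance of picking an item is proportional to its current likes
        pick = 0 if rng.random() < likes[0] / sum(likes) else 1
        likes[pick] += 1
    return likes

# Replay history ten times: same two identical items, different outcomes
for seed in range(10):
    a, b = replay(seed=seed)
    print(f"run {seed}: item A = {a}, item B = {b}")
```

Because the two items are identical by construction, any gap between them in a given run is pure cumulative advantage – and which one ends up on top changes from run to run.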

But of course, we only get to play history once. And this is where the internet becomes an incredibly useful tool for studying network effects – the large numbers of users and the ability to create different online environments allow hypotheses to be tested in multiple parallel situations, as if history were being allowed to play out several times. This is nicely demonstrated by Watts and his collaborators in a fascinating 2006 study of an online community of people interested in listening to music (no, not YouTube!).

Do you like what you hear? Image credit: Photo by Flickr user Mark JP http://www.flickr.com/photos/pyth0ns/6757854133/

In the experiment, participants from a teen social network were recruited to a site called Music Lab, specifically created for the study. Each visitor was assigned to one of two conditions – “independent” or “social influence”. In both cases they were asked to listen to and rate songs and given the opportunity to download them. In the social influence condition participants could also see how many times others had downloaded the songs – the “social” aspect.

Over 14,000 people took part in the experiment. The researchers assigned them to one of 9 different “worlds” – 8 of which displayed the social feedback about downloads to members. All worlds featured the same 48 songs and started with download counts at zero. As songs were downloaded, this social data fed only into the specific world where the song was accessed, so that each social world provided an independent repetition of the study. The remaining world, which showed no social feedback, provided a control for quality – since participants there couldn’t see what anyone else had downloaded, it was assumed that the songs that became popular might be the ones that were intrinsically better.
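The design is easier to picture with a toy simulation (my own sketch, with made-up parameters – the real study’s mechanics were more subtle). Each social world starts from zero downloads and evolves independently; in a social world a visitor’s choice is weighted by the current download counts, while in the independent condition every song is equally likely to be picked.

```python
import random

N_SONGS = 48      # as in the study
N_WORLDS = 8      # social-influence worlds; a 9th "independent" world has no feedback
VISITORS_PER_WORLD = 700

def run_world(social, seed):
    """Simulate one world. If `social`, a visitor's choice is weighted by
    the current download counts (plus one, so every song can still be
    picked); otherwise the choice is uniform at random."""
    rng = random.Random(seed)
    downloads = [0] * N_SONGS
    for _ in range(VISITORS_PER_WORLD):
        if social:
            weights = [d + 1 for d in downloads]  # feedback: popular songs look more attractive
        else:
            weights = [1] * N_SONGS               # independent condition: no feedback shown
        song = rng.choices(range(N_SONGS), weights=weights)[0]
        downloads[song] += 1
    return downloads

# Each social world is a separate "replay of history"
tops = []
for w in range(N_WORLDS):
    downloads = run_world(social=True, seed=w)
    tops.append(downloads.index(max(downloads)))
print("Most-downloaded song in each social world:", tops)
```

Even in this crude version, the song that tops the charts typically varies from world to world, despite every world starting from identical conditions.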

So what happened?

Where downloads were shown, the social input did influence what other users downloaded, and popular songs became more popular than anything in the independent, non-social conditions. What proved to catch on in one social world was also quite different to what was popular in another. So social influence increases not just inequality in decision making (“the rich get richer”), but also adds an element of unpredictability.

In a twist that will interest online marketers, the whole experiment was also repeated with a different layout of the songs on the website – displaying them as a ranked list in one scenario but as a random grid in another. The ranked list provides a clearer signal about the preferences of other users and, unsurprisingly, resulted in even more inequality and unpredictability about which songs would end up topping the ratings.

These results are all the more striking because the experiment likely represented a toned-down version of the social signals that operate in the real world, where marketing tactics and even discussion amongst users might also be at play. Finally, just in case you’re wondering whether the experiment merely revealed some quirks of teenagers’ music tastes, the study was also repeated with adult participants, with similar results.

So, after considering riots and music preferences we’re starting to get a feel for the importance of capturing the relationships between individuals if you want to understand group behaviour. Next, we’ll move onto thinking about the role of influencers in prompting changes in behaviour.

References/further reading

Everything is Obvious – once you know the answer – Duncan J. Watts – Chapter 3 – The Wisdom (and Madness) of Crowds

“Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market” – Salganik, Dodds and Watts (2006). Science, Vol. 311, 854–856.

Preferential attachment (cumulative advantage) – this has been used to explain links to pages on the Internet and differences in citations of scholarly articles.

Do you wanna riot? Thinking about group behaviour from the perspective of individual preferences

Over the Christmas and New Year break, I started to read Duncan Watts’ “Everything is obvious – once you know the answer”. It’s an interesting discussion of why the thing we call common sense is often inadequate for explaining why we behave the way we do. And it’s provided a really useful starting point for me to ponder network effects.

Watts describes how those who seek to explain group behaviour are faced with the “micro-macro” problem. This is the need to rationalise the “macro” actions of large groups of people — why certain people become celebrities or what people want to buy — in terms of the “micro” activities of the individuals within a study’s population.

One theoretical way to tackle this is the representative-agent approach – create a fictitious individual who is designed to represent the behaviour of the population as a whole and then use this theoretical character to attempt to predict how the population would react under different circumstances. While it seems like a simple, intuitive solution – and it’s one that’s been used in fields from economics to sociology and political science – this theoretical approach is often inadequate to make sense of group dynamics. Watts uses sociologist Mark Granovetter’s 1978 threshold model to highlight why.

The example used by Granovetter to illustrate his model is whether or not a given population of individuals would take part in a riot. The model relies on a couple of assumptions. Firstly, that individuals within a population are being asked to make a binary decision – whether or not to join the riot (though it could equally be whether to pass on a rumour, or whether to leave a social gathering). Secondly, each individual has a threshold at which they will change their behaviour, set by a personal assessment of the costs and benefits of doing so. Opting for one course of action is at least in part influenced by other people – e.g. the more others join in the riot, the less “costly” or risky it seems to join in oneself.

Bear debates whether he should start a riot of his own...  Image credit: Photo by Flickr user Jenny Downing: http://www.flickr.com/photos/7941044@N06/5764351769

Watts explains Granovetter’s paper by asking the reader to imagine two towns, each with a population of 100, where there may be good cause for the individuals to want to riot. In town A, each individual has a threshold to take part in violence that ranges from 0-99, with no one having the same threshold as anyone else. In this instance, a rowdy instigator with a threshold of 0 would start the riot. Then a neighbour with a threshold of 1 would decide to join in. This would prompt the next person who had a threshold of 2 to participate and so on, in a domino effect that would lead to widespread unrest.

Compare this to town B, where the thresholds of the population are the same, except that no one has a threshold of 1 — instead, 2 people have a threshold of 2. In this instance, the rowdy instigator would be a lone vandal because the domino effect would not be triggered, since no one had a threshold of 1 to follow the instigator’s lead and subsequently set off the cascade.
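The two towns can be checked with a few lines of Python (a toy sketch – the function is mine, not Granovetter’s): in each round, everyone whose threshold is met by the current number of rioters joins in, and the process stops when no one new joins.

```python
def riot_size(thresholds):
    """Granovetter cascade: people join once the number already
    rioting meets or exceeds their personal threshold."""
    rioting = 0
    while True:
        joiners = sum(1 for t in thresholds if t <= rioting)  # everyone whose threshold is met
        if joiners == rioting:
            return rioting  # no one new joins: the cascade has stopped
        rioting = joiners

town_a = list(range(100))                      # one person at each threshold 0, 1, 2, ..., 99
town_b = [0, 2, 2] + list(range(3, 100))       # no one at threshold 1; two people at threshold 2
print(riot_size(town_a))  # 100: the full domino effect
print(riot_size(town_b))  # 1: the instigator riots alone
```

Removing a single person from the middle of the chain is enough to turn a town-wide riot into a lone act of vandalism – exactly the kind of outcome a representative agent can never capture.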

While it may sound like just an interesting thought experiment, this model is useful precisely because it permits an explanation of why two populations with seemingly similar people and circumstances can produce two strikingly different outcomes. The representative agent model could only account for these differences by looking for some critical factor that distinguished between the two populations – was one of the instigators particularly persuasive, or had one of the towns been suffering from hardship for longer? That doesn’t really work in the riot scenario described above, as the average townsfolk are pretty much identical. Hence, any approach such as the representative agent model that relies solely on a theoretical individual’s behaviour is inadequate to explain what actually happened: it doesn’t capture any of the effects of interactions between individuals in the two towns.

But just how helpful is the threshold model? For starters, to use such a model in a predictive manner, you’d need to know the thresholds of everyone in the population you were studying, which in reality is rarely the case. Theoretically, you could study something such as the adoption of birth control in villages in developing countries, and use the recorded effects from one village to calculate the likely threshold distribution present there. You could then extrapolate this to other villages, potentially adjusting the implementation of similar schemes based on what you learn. But it’s clear that this involves many assumptions.

Secondly, the model only deals with binary decisions, whereas in reality many choices involve a more complex array of options. And finally, there’s no acknowledgement of the strength of relationships between individuals, or whether it matters that those relationships are reciprocated. If you’re my friend and you take part in the riot, does that lower my threshold for taking part too? In the next post, we’ll look at another study of group dynamics, one that attempts to take into account social influence in a different way.

References/further reading

Everything is Obvious – once you know the answer – Duncan J. Watts – Chapter 3 – The Wisdom (and Madness) of Crowds

Mark Granovetter – Wikipedia article. Granovetter was also behind the idea of the strength of weak ties.

Threshold models of collective behaviour – Mark Granovetter, The American Journal of Sociology (1978), Vol. 83, No. 6, 1420–1443.