Instant replay: how a study of online communities helps to re-run scenarios in order to understand popularity

In the last post, we looked at how groups behave when binary decisions are involved. There, each individual in the group is assumed to have their own threshold that needs to be exceeded before they'll take action. Granovetter's riot model provided a useful place to start thinking about why the interactions amongst individuals in a group matter, but the simple threshold model doesn't capture the more complex dynamics involved in social relationships, or how these might affect decision making.

One noted social phenomenon is cumulative advantage (also known as preferential attachment). Here, once a few people express their liking for something, it becomes more popular still, and any differences between the popular choice and less popular alternatives are amplified. Cumulative advantage tells us that it's the number of people that like something that's important to its success – not necessarily any intrinsic qualities of the object itself. As Duncan J. Watts argues in his book Everything is Obvious, this goes against our common sense feeling that a popular item must have some special, distinguishing features.

If we had the chance to repeat life multiple times, we could test which of the two ideas was actually true. If it’s the intrinsic features that are important, we’d expect that every time we replayed history, the same item would emerge on top. But if cumulative advantage is at work, then different items might emerge as favourites each time.
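To make the idea of "replaying history" concrete, here's a minimal sketch of a cumulative-advantage process (my own toy illustration in Python, not code from any actual study): each new chooser picks an item with probability proportional to how often it has already been chosen, even though every item is identical to start with.

```python
import random

def replay(n_items=10, n_choosers=1000, seed=None):
    """One 'replay of history': each chooser picks an item with probability
    proportional to (1 + the number of times it has been chosen so far)."""
    rng = random.Random(seed)
    counts = [1] * n_items              # every item starts with the same small head start
    for _ in range(n_choosers):
        pick = rng.choices(range(n_items), weights=counts)[0]
        counts[pick] += 1
    return counts.index(max(counts))    # index of the item that ended up most popular

# Re-run "history" several times with identical items.
winners = [replay(seed=s) for s in range(5)]
print(winners)   # the winning item typically differs from run to run
```

Run it a few times and a different item usually comes out on top – exactly the signature we'd expect if cumulative advantage, rather than intrinsic quality, is driving popularity.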

But of course, we only get to play history once. And this is where the internet becomes an incredibly useful tool for studying network effects – the large numbers of users and the ability to create different online environments allow hypotheses to be tested in multiple parallel situations, as if history were being allowed to play out several times. This is nicely demonstrated by Watts and his collaborators in a fascinating 2006 study of an online community of people interested in listening to music (no, not YouTube!).

Do you like what you hear?
Image credit: Photo by Flickr user Mark JP http://www.flickr.com/photos/pyth0ns/6757854133/

In the experiment, participants from a teen social network were recruited to a site called Music Lab, created specifically for the study. Each visitor was assigned to one of two conditions – "independent" or "social influence". In both, they were asked to listen to and rate songs and were given the opportunity to download them. In the social influence condition, participants could also see how many times others had downloaded each song – the "social" aspect.

Over 14,000 people took part in the experiment. The researchers assigned them to one of nine different "worlds" – eight of which displayed social feedback about downloads to members. All worlds featured the same 48 songs and started with download counts at zero. As songs were downloaded, this social data accumulated only in the specific world where the song was accessed, so that each world provided an independent repetition of the study. The remaining world showed no social feedback and provided a control for quality – since its participants couldn't see what anyone else had downloaded, it was assumed that the songs that became popular there might be the ones that were intrinsically better.

So what happened?

Where downloads were shown, the social input did influence what other users downloaded, and popular songs became more popular than anything in the independent, non-social condition. What caught on in one social world was also quite different from what proved popular in another. So social influence increases not just inequality in decision making ("the rich get richer"), but also adds an element of unpredictability.
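To get an intuition for how this plays out, here's a toy simulation in the same spirit (again my own sketch, not the study's actual design or analysis): all songs are given identical intrinsic appeal, social worlds boost a song's chance of being downloaded according to its previous downloads, and a crude Gini-style score stands in for the paper's inequality measure.

```python
import random

def run_world(n_songs=48, n_listeners=1500, social=True, rng=None):
    """Simulate one 'world': each listener downloads a song with probability
    proportional to its appeal, boosted by prior downloads when social=True."""
    rng = rng or random.Random()
    appeal = [1.0] * n_songs                 # assume identical intrinsic appeal
    downloads = [0] * n_songs
    for _ in range(n_listeners):
        weights = [a + (d if social else 0) for a, d in zip(appeal, downloads)]
        downloads[rng.choices(range(n_songs), weights=weights)[0]] += 1
    return downloads

def gini(xs):
    """Rough inequality score: 0 = perfectly even shares, near 1 = winner-takes-all."""
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    return 2 * sum((i + 1) * x for i, x in enumerate(xs)) / (n * total) - (n + 1) / n

social_worlds = [run_world(social=True, rng=random.Random(s)) for s in range(8)]
independent = run_world(social=False, rng=random.Random(99))

print("independent-world inequality:", round(gini(independent), 2))
print("social-world inequality:", [round(gini(w), 2) for w in social_worlds])
print("top song in each social world:", [w.index(max(w)) for w in social_worlds])
```

Typically the social worlds end up far more unequal than the independent one, and each social world crowns a different top song – despite every song starting out identical.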

Of particular interest to anyone in online marketing, the whole experiment was also repeated to compare the layout of the songs on the website – a ranked list in one scenario versus a random grid in another. The ranked list provides a clearer signal about other users' preferences and, unsurprisingly, resulted in even more inequality and unpredictability about which songs would end up topping the ratings.

The results are all the more dramatic because the experiment likely represented a toned-down version of the social signals that operate in the real world, where marketing tactics and even discussion amongst users might also be at play. Finally, just in case you're wondering whether the experiment merely revealed some quirks of teenagers' music tastes, the study was also repeated with adult participants, with similar results.

So, after considering riots and music preferences, we're starting to get a feel for the importance of capturing the relationships between individuals if you want to understand group behaviour. Next, we'll move on to thinking about the role of influencers in prompting changes in behaviour.

References/further reading

Everything is Obvious: Once You Know the Answer – Duncan J. Watts – Chapter 3, The Wisdom (and Madness) of Crowds

“Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market” – Salganik, Dodds and Watts (2006). Science, Vol 311, 854-856.

Preferential attachment (cumulative advantage) – this has been used to explain links to pages on the Internet and differences in citations of scholarly articles.
