New Experiments Question the Power of Social Proof on the Web

In a lot of my presentations and research, I’ve talked about social proof, and I’ve hypothesized that it has an effect on social and viral behavior online, but I had never actually proven it. So a few weeks ago, I began a series of experiments designed to test the assumption that the effects of social proof and social conformity can be exploited on the web.

In the first two experiments, I split-tested ReTweet buttons with different ReTweet counts shown on blog posts. First I compared "0 Tweets" with "776 Tweets." The results were exactly the opposite of what I expected. After 36 hours, the button showing no Tweets had been clicked more than twice as often as the other button. The sample size was large enough for the difference to be statistically significant, at a 96% confidence level.
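For readers curious where a confidence level like that comes from: a common way to check a split test is a two-proportion z-test on the click-through rates of the two variations. Here's a minimal sketch; the click and impression counts are hypothetical, not the actual numbers from these experiments.

```python
import math

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Return (z, confidence) for the difference between two click-through rates."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled click-through rate under the null hypothesis of no difference
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    # Two-sided confidence that the observed difference is real
    confidence = math.erf(abs(z) / math.sqrt(2))
    return z, confidence

# Hypothetical example: variation A was clicked twice as often as B.
# This gap is large enough that confidence comes out above 0.95,
# i.e. a statistically significant difference.
z, conf = two_proportion_z(40, 1000, 20, 1000)
```

With smaller gaps or fewer impressions, the confidence drops quickly, which is why several of the tests below come back inconclusive.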

While discussing these results with Alison, she suggested that they may have been due to a “first post” effect, where people want to be the first to share a piece of content. So I tested a button showing “15 Tweets” against one showing “776 Tweets.”

While the post I used for this test was more popular, the results of the experiment showed a far less significant difference between the two buttons. The “15 Tweets” button performed marginally better, but the low confidence value means there is probably no meaningful difference between the two buttons.

The results of the first two tests had me questioning whether social proof has the effect online that I thought it did. My next step was to test the Feedburner subscriber-count RSS button, which I believed was perhaps more likely to exhibit traditional social conformity effects.

I began by testing a button displaying "12 Subscribers" against one that displayed "62172 Subscribers." The higher variation was clicked a slight 0.13% more, but again this experiment's confidence level is too low for the result to be significant.

Finally, I decided to test the "first post" effect on the RSS button by comparing a "0 Subscribers" button against the "62172 Subscribers" button. Again, the 62172 version did a little better, but the difference failed to reach statistical significance.

In spite of the insignificant results I found in 3 of 4 tests, I believe my findings are interesting for a few reasons. First, and perhaps most importantly, they represent a first step towards "contagiousness testing," which would allow marketers to apply split and multivariate testing methods to content virality.

A longer-running test may reveal that showing a higher subscriber count on an RSS button does lead to a small but significant click-through increase.

These 4 experiments also suggest that there may be a powerful “first post” effect that marketers can leverage in certain situations. I plan to do more research into this in the future.