This is the sixth in a weekly series of posts on various aspects of social software design I find interesting; here is the full list. Today we have a very special guest post.
Today’s Social Software Sunday is written by a good friend of mine, Jonathan Morgan. Jonathan is a graduate student at the University of Washington in Seattle and a member of the dub group. He studies online collaboration and recently worked on the design of ConsiderIt, the crowdsourced deliberation platform that powers the Living Voters Guide. He also blogs irregularly at newsfromconstantinople.com.
There’s a tendency among designers of social media platforms to believe that they can learn anything they would ever want to know about their users by looking at easily quantifiable things. Want to know whether your new question feature is popular? Check the logs to see how many people are using it. Want to know whether your site is sticky? See how long new users stay when they arrive on the front page for the first time.
Questions like these are easy to answer through aggregated behavioral metrics like hits, clicks, links and log-ons, or through demographic data gleaned from IPs, webforms and on-site surveys. However, relying solely on such 'big data' approaches to user research glosses over a lot of important information about how people actually use your site. There are other, tougher sorts of questions that are harder to answer with quantitative methods alone, yet are critical for the effective design and evaluation of social media: How are people using your comment board? What kinds of questions get the most responses on your new Q&A feature? What kinds of profile information do people share about themselves on your social networking site? What kinds of features will best support high-volume users?
Neglecting these kinds of questions and the user research methods that have been developed to answer them can get you into trouble for several reasons:
- Different user groups will use the same features for different purposes. Most social media platforms serve different purposes for different people. Some people use Twitter to advertise themselves to the world, while others use it to maintain an ambient awareness of the trending topics in their field. Still others actually use it to communicate with a distributed group of friends and colleagues (despite claims by some bloggers that it’s purely a marketing vehicle), and many people fall somewhere in between. Not all of these user groups will benefit from every potential new feature, and some people might even find that certain features make it harder to do what they want with Twitter. Likewise, not everyone is looking for the same kind of experience when they ask or answer a question on Quora. As designers, we need to be aware of the different user groups that are out there, how they use our software, and which users and uses we want to explicitly support.
- The features that get the most traffic aren’t necessarily the ones your users value most. Facebook is probably the most visible example of how to make sweeping design changes that royally piss off your user base. They may be somewhat bulletproof (for now…) when it comes to feeling the adverse impacts of their constant fiddling with our privacy settings, but you can’t count on your own site being so indispensable. Due diligence calls for at the very least understanding the functions or features that your users hold dearest, and often these come down to deeply held values (like privacy and trust) that you can’t easily detect with an algorithm or represent in a graph.
- Users will use your site the way they want to, despite your best intentions. The main advantage of social media is their flexibility, which sometimes means that people come up with novel ways to use them that their designers never intended. MySpace wasn’t designed to provide cheap web hosting for up-and-coming garage bands, but once they started losing people to Facebook, they came up with a variety of features that supported this use.
The actual content of user-generated content–the things people say, upload, tag, bump and ‘like’–is often treated as a black box by those who design and evaluate social media, unless it happens to be a kind of content that is easily machine-tractable. One simple example of a feature that probably wouldn’t have happened without someone actually paying attention to how people use their service is Twitter’s @replies feature (since renamed @mentions). @replies would never have been implemented if someone at Twitter hadn’t looked at some tweets and realized that a lot of people were putting @username in front of messages directed at particular other users. Twitter then facilitated that use by hyperlinking the replies and adding notifications to let people know they had been mentioned in a tweet.*
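The kind of observation behind @replies can be approximated programmatically once you know what to look for. As a minimal sketch (the messages and the `@username` convention here are illustrative, not real Twitter data), you could scan a sample of messages for a leading @-mention and measure how widespread the practice is:

```python
import re
from collections import Counter

# Hypothetical sample of raw messages; a real analysis would run over logs.
messages = [
    "@alice thanks for the link!",
    "just had the best coffee ever",
    "@bob are you going to the meetup?",
    "new blog post is up",
    "@alice @bob let's sync up tomorrow",
]

# A message "directed at" someone starts with one or more @username tokens.
reply_pattern = re.compile(r"^(@\w+\s*)+")

replies = [m for m in messages if reply_pattern.match(m)]
share = len(replies) / len(messages)

# Tally which users receive the most directed messages.
mentions = Counter(
    name for m in replies for name in re.findall(r"@(\w+)", m)
)

print(f"{len(replies)} of {len(messages)} messages look like directed replies")
print(mentions.most_common())
```

A spike in a metric like `share` is exactly the kind of signal that justifies reading the underlying messages by hand before committing to a feature.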
But I think that the case of Twitter is an exception. In many cases, the deepest analysis that user-generated content like the text of a tweet or status update ever gets from SM platform designers is an automatic scan for keywords and links related to current events, trends, etc. To some extent, this aversion to actually looking at this messy human-created content is understandable. For one thing, the volume of text entered into a platform like Twitter, Quora or Facebook is staggering, and human speech, even text-based speech, is highly variable and contextualized. After all, people didn’t put it up there for you to glean actionable design insights from. Most of your users probably aren’t going to post direct statements about your interface like “If only this damn comment box allowed rich text formatting I would use this service much more often and be willing to pay for the privilege even” in your interface itself.
But it’s still a shame, because directly examining what people say and do is one of the best ways of understanding their motivation for saying or doing it. And understanding your users’ motivations, as everyone knows by now, is invaluable for deciding what functionality to add, what interface tweaks to make, or why no one seems to use your new ‘poke’ feature.
You can get at some of these kinds of questions through traditional qualitative methods such as interviews, open-ended surveys, usability tests and focus groups, but these take time and money–making them hard to justify within companies working in the web application world of limited startup funding and rapid deployment cycles. These methods also have the disadvantage of providing findings that seem hard to generalize and turn into concrete design recommendations, since these findings are often anecdotal and contextual, and the sample size is usually small.
While these disadvantages are too often overstated, a more fundamental difficulty of applying these methods to the use of social media is that they elicit information from the user outside of the normal context of use. Because social media are by definition communication platforms, methods that focus on single-user interactions with an interface (like usability testing) or ask users to describe their online experiences and behaviors after they’ve logged out (like interviews and focus groups) can’t always answer your ‘why’ questions any better than quantitative metrics can.
But there are other ways of making sense of user-generated content and gleaning design insights from it. Content analysis, for example, is a lightweight, flexible method for breaking down data that’s too complex to be parsed automatically into manageable categories and making comparisons between them. Content analysis has been used in academic disciplines like communication, political science, sociology, psychology and health sciences for decades, and is commonly used in human-computer interaction research today to complement quantitative methods like social network analysis and behavior trace logs.
Content analysis ‘coding’ can be very quick and dirty and still yield interesting results: check out this study of Twitter that classifies tweets according to their purpose. It can also be performed in a more or less structured way: simply having the temerity to actually read through some posts, comments, uploads or tags and picking examples of interesting behaviors to share with the rest of your team can be quite illuminating. On the other hand, there are also more elaborate content analysis coding schemes out there that require some training to employ consistently, but which can allow you to identify and tally certain kinds of user behaviors, infer the motivations behind those behaviors, and even run stats.
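A quick-and-dirty coding pass like the one described above can be sketched in a few lines. Everything here is hypothetical: the coding scheme, its trigger keywords, and the sample posts are all stand-ins, and in practice a scheme would be developed iteratively by reading the data, with ambiguous items coded by hand rather than by keyword match:

```python
from collections import Counter

# Hypothetical coding scheme: category name -> trigger keywords.
coding_scheme = {
    "question":     ["?", "how do", "anyone know"],
    "self-promo":   ["my new", "check out my"],
    "conversation": ["@"],
}

def code_post(post):
    """Assign the first matching category, else 'other'."""
    text = post.lower()
    for category, cues in coding_scheme.items():
        if any(cue in text for cue in cues):
            return category
    return "other"

# Illustrative sample of posts to code.
posts = [
    "anyone know a good book on grounded theory?",
    "check out my new portfolio site",
    "@sam great talk yesterday",
    "rainy day, staying in",
]

tally = Counter(code_post(p) for p in posts)
print(tally.most_common())
```

Even a crude tally like this gives you something to compare across user groups or over time, and the posts that land in 'other' are often the most interesting ones to read closely.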
However you do it, content analysis methods can facilitate the identification and measurement of socially meaningful behavioral cues that shed light on how groups of users interact with and through technology. Content analysis is an effective method for surfacing user wants and needs and for testing specific design decisions, and it should be part of the methodological toolkit of any researcher or practitioner tasked with evaluating social media.
*Note: I don’t claim to know exactly how Twitter became aware of the @username phenomenon, although I would be fascinated to find out. Perhaps they use the service themselves, or got direct requests for the functionality from users rather than poring over tweet logs. Regardless, read this interesting post on the Twitter blog for a good example of how the good folks at Twitter make usage and user feedback drive design–they obviously take a mixed-methods approach to user research. Not all designers have the benefit of being in the community they’re designing for though (Remember: you are not your user!), and you can’t rely on users to always tell you directly what they like about your platform or why they like it–so observation is key.
To read more of Jonathan’s writings, you can visit his blog or Twitter page. To be notified of the next Social Software Sunday piece as it’s posted, you can subscribe to the RSS feed, follow me on Twitter or subscribe via email: