Anil Dash asks a question in his blog post:
how are privacy settings on social networks different than DRM restrictions placed on media content files from companies? Is it because I’m not a corporation? Is it because the DRM technology is provided by Flickr or Facebook instead of by Apple’s iTunes or Microsoft’s Windows Media? Is it because I only (theoretically) grant permissions to dozens or hundreds of people, instead of millions?
This intersects nicely with the work I’ve been doing on pinpointing why social design needs to be its own separate specialty with its own rules and literature. Let me take a quick stab at answering his question:
- We think differently about different social relationships. We literally use different parts of our brains for different types of social reasoning. For individuals, we invoke a much deeper Theory of Mind construct that affects our behaviour. With DRM, we engage the cost/benefit portion of our brain and basically treat the opposing party as if it were an impersonal force. With social relationships, we think not only about how our actions affect us, but also about how they affect the other person: “What would Sally think of me if I did this?”, “What would Sally think I wanted if I did this?” and so on.
- Social mechanisms scale poorly. Different social mechanisms behave radically differently if you make the scale much larger or smaller. Part of why DRM fails at large scales is simply that it only takes one bad apple to “release” a piece of information before it is freed. That social networking data is relatively secure is an artifact of the small scale it operates on. If you look at what happens when social networking suddenly goes “large scale”, Ashley Dupré or Todd Palin for example, you can see that it’s even less effective at protecting media than traditional mechanisms.
- You can punish those who misbehave with social media. Social media works because you can push enforcement into the social layer. If people misbehave, you can actually punish them in real life. As a result, the rules for good behaviour can be negotiated at the social level. With DRM, the social layer is so weak that you can’t do any real form of enforcement, which is why media companies have tried using the technological layer (DRM), the formal layer (courts), or the societal layer (appeals to morality). I’m going to write about this in much more detail in an upcoming blog post.
- Social media is of limited utility. Let’s face it, the number of people who want to but can’t see your Flickr photos is close enough to zero that no one is going to bother to go to the effort of revealing them.
- The dark side of DRM is visible; the dark side of social media is invisible. Piracy now happens out in the open, so we get a generally accurate picture of how it’s practised and what its extent is. Violations of privacy in social media still happen in a shadowy underground, so we tend to ignore them out of ignorance. In my fieldwork on just how people use social media in less than savory ways, it’s actually quite surprising how prevalent and casual privacy violation can be, yet it’s not talked about nearly as much.
In short, I think this really highlights the importance of context in discussions about social design. It’s not enough merely to look at the software and expect that functionality maps onto results in a clean manner. The software is only a small part of a much larger design.