Let's talk about what happens with DRM in the real world. Photograph: Iain Masterton/Alamy

What happens with digital rights management in the real world?

DRM is one of the most salient, and least understood, facts about technology in the contemporary world

I've been writing about "digital rights management" (DRM) for years in this column, but here I am, about to write about it again. That's because DRM – sometimes called "copy protection software" or "digital restrictions management" – is one of the most salient, and least understood, facts about technology in the contemporary world.

When you get into a discussion about DRM, you often find yourself arguing about whether and when copying and sharing should be allowed. Forget that for now. It's beside the point, for reasons that will shortly be clear. Instead, let's talk about the cold, hard legal, technical, marketplace and normative realities of DRM. Let's talk about what happens with DRM in the real world.

In the real world, "bare" DRM doesn't really do much. Before governments enacted laws making it illegal to compromise DRM (even if no copyright infringement took place), DRM didn't survive contact with the market for long. That's because, technologically, DRM doesn't make any sense. For DRM to work, you have to send a scrambled message (say, a movie) to your customer, then give your customer a program to unscramble it. Anyone who wants to can become your customer simply by downloading your player or buying your device – and "anyone" in this case includes the most skilled technical people in the world. From there, your adversary's job is to figure out where in the player you've hidden the key that unscrambles the message (the movie, the ebook, the song, etc). Once she does that, she can make her own player that unscrambles your files. And unless it's illegal to do this, she can sell her app or device, which will be better than yours, because it will do a bunch of things you don't want it to do: let your customers use the media they buy on whatever devices they own, share it with friends, play it in other countries, sell it on as a used good, and so on.
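
To make the structural problem concrete, here's a deliberately toy sketch in Python. The XOR "scrambling", the key value and the function names are all invented for illustration – real DRM uses real ciphers – but the shape of the problem is identical: the player must carry the key, so whoever has the player has the key.

```python
# Toy illustration of why "bare" DRM can't work: the unscrambling key has
# to ship inside every copy of the player, so it is never really secret.

SECRET_KEY = 0x5A  # the key the vendor "hides" inside the player binary

def scramble(plaintext: bytes) -> bytes:
    """What the vendor does before sending the movie/ebook/song."""
    return bytes(b ^ SECRET_KEY for b in plaintext)

def official_player(scrambled: bytes) -> bytes:
    """The vendor's player: unscrambles, displays, then discards."""
    return bytes(b ^ SECRET_KEY for b in scrambled)

def adversary_player(scrambled: bytes) -> bytes:
    """The adversary's player: having pulled SECRET_KEY out of the official
    player with a debugger, it unscrambles identically -- but can also save,
    share or convert the output, because only the key was ever 'secret'."""
    extracted_key = SECRET_KEY
    return bytes(b ^ extracted_key for b in scrambled)

movie = scramble(b"the feature presentation")
assert official_player(movie) == adversary_player(movie)
```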

The only reason to use DRM is because your customers want to do something and you don't want them to do it. If someone else can offer your customers a player that does the stuff you hate and they love, they'll buy it. So your DRM vanishes.

A good analogue to this is inkjet cartridges. Printer companies make a lot more money when you buy your ink from them, because they can mark it up like crazy (millilitre for millilitre, HP ink costs more than vintage Champagne). So they do a bunch of stuff to stop you from refilling your cartridges and putting them in your printer. Nevertheless, you can easily and legally buy cheap, refilled and third-party cartridges for your printer. Same for phone unlocking: obviously phone companies keep you as a customer longer and make more money if you have to throw away your phone when you change carriers, so they try to lock the phone you buy with your plan to their networks. But phone unlocking is legal in the UK, so practically every newsagent and dry cleaner in my neighbourhood will unlock your phone for a fiver (you can also download free programs from the net to do this if you are willing to trade hassle for money).

The technical and commercial forces that gave us phone unlocking and cartridge refilling are the same forces that would make DRM a total non-starter, except for a pesky law.

Enter the DMCA

Back in 1995, Bill Clinton's copyright tsar Bruce Lehman – a copyright lawyer, late of Microsoft – wrote a white paper proposing a new regulatory framework for the internet. It was bonkers. Under Lehman's plan, every copy of every work would have to be explicitly permitted and a licence fee collected. That means your computer would have to check for permission and pay a tiny royalty every time it copied a file from the modem's buffer into memory, and from memory into the graphics card.
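
To see the scale of the absurdity, here's a hypothetical back-of-the-envelope sketch. The pipeline stages and the per-copy fee are invented for illustration; the point is that ordinary playback involves a cascade of transient internal copies, each of which would become a billable event under the proposal.

```python
# Back-of-the-envelope sketch of the Lehman white paper's logic: every
# transient internal copy needs permission and a royalty.

COPIES_PER_FRAME = [
    "network buffer -> kernel buffer",
    "kernel buffer -> application memory",
    "application memory -> decoder buffer",
    "decoder buffer -> graphics card framebuffer",
]
ROYALTY_PER_COPY = 0.001  # hypothetical fee per licensed copy, in dollars

def royalties_for_one_frame() -> float:
    """Tally the royalty events a single video frame would trigger."""
    owed = 0.0
    for copy in COPIES_PER_FRAME:
        owed += ROYALTY_PER_COPY  # each transient copy is a billable event
    return owed

# At 24 frames per second, a two-hour film is 172,800 frames:
frames = 24 * 60 * 60 * 2
print(f"Royalties for one viewing: ${royalties_for_one_frame() * frames:,.2f}")
```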

Lehman submitted his paper to then-Vice President Al Gore, who was holding hearings on the demilitarisation of the internet – the National Information Infrastructure (NII) or "information superhighway" hearings. To his credit, Al Gore rejected the Lehman plan and sent him packing.

Lehman's next stop was Geneva, where he convinced the UN's World Intellectual Property Organisation (WIPO) to enact key measures from his plan in international treaties (the WIPO Copyright Treaty and WIPO Performances and Phonograms Treaty). Then he got the US Congress to pass a law to comply with the treaty – the Digital Millennium Copyright Act (DMCA) – that snuck much of the stuff that Gore had rejected into US law.

The DMCA is a long and complex instrument, but what I'm talking about here is section 1201: the notorious "anti-circumvention" provisions. They make it illegal to circumvent an "effective means of access control" that restricts a copyrighted work. The companies that make DRM and the courts have interpreted this very broadly, enjoining people from publishing information about vulnerabilities in DRM, from publishing the secret keys hidden in the DRM, from publishing instructions for getting around the DRM – basically, anything that could conceivably give aid and comfort to someone who wanted to do something that the manufacturer or the copyright holder forbade.

Significantly, in 2000, a US federal court found (in Universal City Studios, Inc v Reimerdes) that breaking DRM was illegal even if you were trying to do something that would otherwise be legal. In other words, if your ebook has a restriction that stops you reading it on Wednesdays, you can't break that restriction, even if it would otherwise be legal to read the book on Wednesdays.

In the USA, the First Amendment to the Constitution gives broad protection to free expression, and prohibits the government from making laws that abridge Americans' free speech rights. Here, the Reimerdes case set another bad precedent: it moved computer code from the realm of protected expression into a grey zone where it may or may not be protected.

In 1997's Bernstein v United States, a US federal court found that code was protected expression. Bernstein was a turning point in the history of computers and the law: it concerned a UC Berkeley mathematician named Daniel Bernstein, who challenged the American prohibition on producing cryptographic tools that could scramble messages with such efficiency that the police could not unscramble them. The US National Security Agency (NSA) classed such programs as "munitions" and severely restricted their use and publication. Bernstein published his encryption programs on the internet, and successfully defended his right to do so by citing the First Amendment. When the court agreed, the NSA's ability to control civilian use of strong cryptography was destroyed. Ever since, our computers have had the power to keep secrets that none may extract except with our permission – that's why the NSA and GCHQ's secret anti-security initiatives, Bullrun and Edgehill, targeted vulnerabilities in operating systems, programs and hardware. They couldn't defeat the maths (though they also tried to subvert the maths, getting the US National Institute of Standards and Technology to adopt a weak random-number generator, Dual_EC_DRBG).
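
To see what "the power to keep secrets" means in practice, here's a minimal sketch using the third-party Python `cryptography` package (installable with `pip install cryptography`). The message is invented; the API calls are real. Whoever holds the key reads the message; whoever doesn't faces the underlying maths.

```python
# Minimal demonstration of the post-Bernstein reality: civilian code can
# scramble messages that no eavesdropper can practically unscramble.

from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()  # stays with the owner
ciphertext = Fernet(key).encrypt(b"meet me at the usual place")

# Anyone holding the key recovers the message...
assert Fernet(key).decrypt(ciphertext) == b"meet me at the usual place"

# ...anyone without it gets nothing, no matter who they are.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)  # wrong key
except InvalidToken:
    print("wrong key: the ciphertext stays scrambled")
```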

Ever since Reimerdes, it's been clear that DRM isn't the right to prevent piracy: it's the right to make up your own copyright laws. The right to invent things that people aren't allowed to do – even though the law permits it – and to embed these prohibitions in code that is illegal to violate. Reimerdes also showed us that DRM is the right to suppress speech: the right to stop people from uttering code or keys or other expressions if there is some chance that these utterances will interfere with your made-up copyright laws.

Understanding security

The entertainment industry calls DRM "security" software, because it makes them secure from their customers. But security is not a matter of abstract absolutes; it requires a context. You can't be "secure" in general – you can only be secure from some risk. For example, having food makes you secure from hunger, but puts you at risk of obesity-related illness.

DRM is designed on the presumption that users don't want it, and that if they could turn it off, they would. You only need DRM to stop users from doing things they want to do; if the thing the DRM restricts is something no one wants to do anyway, you don't need it. You don't need a lock on a door that no one ever wants to open.

DRM assumes that the computer's owner is its adversary. For DRM to work, there has to be no obvious way to remove, interrupt or fool it. For DRM to work, it has to reside in a computer whose operating system is designed to obfuscate some of its files and processes: to deliberately hoodwink the computer's owner about what the computer is doing. If you ask your computer to list all the running programs, it has to hide the DRM program from you. If you ask it to show you the files, it has to hide the DRM files from you. Anything less and you, as the computer's owner, would kill the program and delete its associated files at the first sign of trouble.
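
Here's a conceptual sketch, in Python, of what that hiding amounts to: a "directory listing" doctored so the DRM's own files never appear to the computer's owner. The file names are hypothetical, and the `$SYS$` prefix anticipates the real-world Sony rootkit discussed below.

```python
# Conceptual sketch of an OS that lies to its owner about what it contains.

import os
import tempfile

def honest_listing(path: str) -> list[str]:
    """What the filesystem actually contains."""
    return sorted(os.listdir(path))

def compromised_listing(path: str) -> list[str]:
    """What the owner is shown: the same listing, minus the DRM's files."""
    return [name for name in honest_listing(path)
            if not name.startswith("$SYS$")]

with tempfile.TemporaryDirectory() as d:
    for name in ("holiday.jpg", "$SYS$drm_agent.dat"):
        open(os.path.join(d, name), "w").close()
    print(honest_listing(d))       # ['$SYS$drm_agent.dat', 'holiday.jpg']
    print(compromised_listing(d))  # ['holiday.jpg']
```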

An increase in the security of the companies you buy your media from means a decrease in your own security. When your computer is designed to treat you as an untrusted party, you are at serious risk: anyone who can get malicious software onto your computer need only take advantage of its intentional capacity to disguise its operation from you, making it much harder for you to know when and how you've been compromised.

DRM in the era of mass surveillance

Here's another thing about security: it's a process, not a product (hat tip to Bruce Schneier!). There's no test to know whether a system is secure or not; by definition, all you can do to test a system's security is tell people how it works and ask them to tell you what's wrong with it. Designing a security system without public review is a fool's errand, ensuring that you've designed a system that is secure against people stupider than you, and no one else.

Every security system relies on reports of newly discovered vulnerabilities as a means of continuously improving. The forces that work against security systems – scripts that automate attacks, theoretical advances, easy-to-follow guides that can be readily googled – are always improving so any system that does not benefit from its own continuous improvement becomes less effective over time. That is, the pool of adversaries capable of defeating the system goes up over time, and the energy they must expend to do so goes down over time, unless vulnerabilities are continuously reported and repaired.
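
A toy model makes the point. The rates below are invented, but the asymmetry isn't: a system whose flaws are reported and repaired holds steady, while one whose flaws go unreported (because, say, disclosure is illegal) accumulates them, so attacking it gets steadily cheaper.

```python
# Toy model: vulnerability accumulation with and without legal disclosure.
# The numeric rates are invented for illustration.

DISCOVERY_RATE = 3  # new exploitable flaws found per year
PATCH_RATE = 3      # flaws fixed per year -- but only if reporting is allowed

def open_flaws_after(years: int, reporting_allowed: bool) -> int:
    flaws = 0
    for _ in range(years):
        flaws += DISCOVERY_RATE
        if reporting_allowed:
            flaws = max(0, flaws - PATCH_RATE)
    return flaws

for years in (1, 5, 10):
    print(years, open_flaws_after(years, True), open_flaws_after(years, False))
# With reporting, the count stays near zero. Without it, the count grows
# every year: attackers keep every old exploit plus each year's new ones.
```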

Here is where DRM and your security work at cross-purposes. The DMCA's injunction against publishing weaknesses in DRM means that its vulnerabilities remain unpatched for longer than in comparable systems that are not covered by the DMCA. That means that any system with DRM will on average be more dangerous for its users than one without DRM.

The DMCA has spread to other territories, thanks to those WIPO treaties. In the UK, we got DMCA-like laws through the EUCD, the EU Copyright Directive. Canada got them through Bill C-11. Pretty much any place that's industrialised and wants to trade with the rest of the world now has a prohibition on weakening DRM. Many of these laws – including the DMCA – have provisions that supposedly protect legitimate security research, but in practice these are so narrow, and the penalties for violating the DMCA are so terrible, that hardly anyone tries to avail themselves of them.

For example, in 2005, Sony BMG Music shipped a DRM system known as the "Sony rootkit" on 51m audio CDs. When one of these CDs was inserted into a PC, it automatically and undetectably altered the operating system so that it could no longer see files or programs whose names began with "$SYS$". The rootkit infected millions of computers, including over 200,000 US military and government networks, before its existence became public. Several large and respected security organisations later said they had known about the Sony rootkit months before the disclosure, but did not publish for fear of punishment under the DMCA. Meanwhile, virus-writers immediately began renaming their programs to begin with $SYS$, because those files would be invisible to virus-checkers if they landed on a computer that Sony had compromised.
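
Sketched in Python, the virus-writers' countermeasure was almost embarrassingly simple. The file names here are hypothetical; the mechanism is the one described above: a scanner can only flag what the compromised operating system lets it see.

```python
# Sketch of how malware exploited the Sony rootkit's file-hiding.

def virus_scanner(visible_files: list[str]) -> list[str]:
    """A scanner can only flag files the operating system shows it."""
    return [f for f in visible_files if "payload" in f]

def rootkit_view(all_files: list[str]) -> list[str]:
    """On a rootkit-compromised machine, $SYS$ files simply vanish."""
    return [f for f in all_files if not f.startswith("$SYS$")]

print(virus_scanner(rootkit_view(["report.doc", "payload.exe"])))
# ['payload.exe'] -- caught

# The virus-writer's one-line countermeasure: adopt the hidden prefix.
print(virus_scanner(rootkit_view(["report.doc", "$SYS$payload.exe"])))
# [] -- now invisible to the scanner
```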

Snowden, the DMCA and the future of security

The revelations of the NSA whistleblower Edward Snowden have changed the global conversation about privacy and security. According to a Pew study from last autumn, most American internet users are now taking steps to make their computers more secure and to keep their private information private.

It's hard to overstate how remarkable this is (I devoted an entire column to it in December). For the entire history of the technology industry, there was no appreciable consumer demand for security and privacy. There was no reason to believe that spending money making a product more secure would translate into enough new users to pay for the extra engineering work it entailed.

With the shift in consciousness redounding from the Snowden files, we have, for the first time ever, the potential for commercial success based on claims of security. That's good news indeed – because computer security is never a matter of individual action. It doesn't matter how carefully you handle your email if the people you correspond with are sloppy with their copies of your messages. It's a bit like public health: it's important to make sure you have clean drinking water, but if your neighbours don't pay attention to their water and all get cholera, your own water supply's purity won't keep you safe.

But there can be no real security in a world where it is illegal to tell people when the computers in their lives are putting them in danger. In other words, there can be no real security in a world where the DMCA and its global cousins are still intact.

Party like it's 1997

Which brings us back to Bernstein. In 1997, a US federal court in California decided that code was expressive speech and that laws prohibiting its publication were unconstitutional. In 2000, the Reimerdes court found that this protection did not extend to code that violated the DMCA.

It's been a long time since anyone asked a judge to reconsider the questions raised in Reimerdes. In 2000, a judge decided that the issue wasn't about free speech, but rather a fight between companies who "invested huge sums" in movies and people who believed that "information should be available without charge to anyone clever enough to break into the computer systems." The judge was wrong then, and the wrongness has only become more glaring since.

No court case is ever a sure thing, but I believe that there's a good chance that a judge in 2014 might answer the DMCA/free speech question very differently. In 14 years, the case for code as expressive speech has only strengthened, and the dangers of censoring code have only become more apparent.

If I were a canny entrepreneur with a high appetite for risk – and a reasonable war chest for litigation – I would be thinking very seriously about how to build a technology that adds legal features to a DRM-enfeebled system (say, iTunes, Netflix or Amazon video), features that all my competitors are too cowardly to contemplate. The potential market for devices that do legal things people want to do is titanic, and a judgment that went the right way on this would eliminate a serious existential threat to computer security – which, these days, is a synonym for security itself.

And once anti-circumvention is a dead letter in America, it can't survive long in the rest of the world. For one thing, a product like a notional iTunes/Amazon/Netflix video unlocker would leak across national borders very easily, making non-US bans demonstrably pointless. For another, most countries that have anti-circumvention on the books got there under pressure from the US Trade Representative; if the US drops anti-circumvention, the trading partners it arm-twisted into the same position won't be far behind.

I've talked to some lawyers who are intimate with all the relevant cases and none of them told me it was a lost cause (on the other hand, none of them said it was a sure thing, either). It's a risky proposition, but something must be done. You see, contrary to what the judge in Reimerdes said in 2000, this has nothing to do with whether information is free or not – it's all about whether people are free.
