Article 13: even worse than the US DMCA takedown system
We have entered the last few days before the key European Parliament vote on the proposed copyright directive, which is due to take place on 12 September. Various “new” proposals for the most contentious elements – notably Article 11 and Article 13 – are floating around from the lead MEP, Axel Voss, and others.
One thing that is not changing is Voss’s view that online services should be held liable for the material that is available from their servers, and a concomitant requirement for them to sign licences to cover that material. Leaving aside the fact that this is the wrong approach, since most of the online platforms concerned are simply conduits, it’s worth considering the practical consequences of forcing companies to license material in this way.
Voss carefully avoids all mention of upload filters in his proposed text, and even went so far as to tweet: “The new proposal for #copyrightdirective does not forsee any measures/„upload filter“ …. Now I expect everyone who was against the previous proposal because of this to support the new proposal.” But a moment’s reflection shows why the new formulation is just as bad as its predecessors.
The fragmented copyright landscape, and the fact that every possible medium is involved – text, music, audio, images, maps, diagrams, photos, video, film, software, 3D models etc – means that under Voss’s proposal, online sites will be required to sign up to hundreds, possibly thousands, of different licences. That will impose a huge administrative burden on companies, making the EU less attractive to startups, and hobbling existing digital services in the region as they struggle to negotiate and implement such deals.
Licensing agreements are very diverse. Some will probably require licensees to monitor uploads to allow the licensor to calculate how much is due. Others may specify exclusions to the deal, which will require items to be blocked. Moreover, online platforms are likely to block items that are not explicitly covered by licences, since they will not want the threat of costly lawsuits, caused by inadvertently allowing commercial copyright material to be posted online, hanging over them.
All of these factors will lead to a requirement for extremely complex and constantly updated upload filters to monitor, check and block massive quantities of material coming from members of the public. These filters will be software-based, because the volumes involved make it impossible for human scrutiny to be applied. In other words, Voss’s latest proposal may not call for automated upload filters, but it will inevitably lead to them being used, since there is no alternative.
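The block-by-default logic described above can be sketched in a few lines. This is a hypothetical illustration, not any platform’s real system; every name and category in it is invented for the example:

```python
# Hypothetical sketch of the decision logic an upload filter would be
# forced into under a licensing mandate. All names are illustrative.

def filter_upload(fingerprint, licensed_catalogue, excluded_items):
    """Decide what happens to an upload, given the platform's licences."""
    if fingerprint in excluded_items:
        return "block"          # explicitly carved out of a licence deal
    if fingerprint in licensed_catalogue:
        return "allow_and_log"  # licensor needs usage data to calculate royalties
    # Anything not explicitly covered by a licence is blocked, because the
    # platform cannot risk a lawsuit over material it has no deal for.
    return "block"

print(filter_upload("song-123", {"song-123"}, set()))    # allow_and_log
print(filter_upload("home-video", {"song-123"}, set()))  # block
```

Note that the safe default for the platform is to block: an original home video that matches no licence is treated exactly like infringing material.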
The question then becomes: what will be the real-world consequences of such automated upload filters? Fortunately, we don’t need to depend upon vague surmises here. Automated systems designed to spot copyright material are already deployed by companies as a result of the US Digital Millennium Copyright Act. The global nature of the Internet means that their effects are also felt in the EU.
For example, a couple of weeks ago, around a dozen articles and campaign sites criticising the plans for reforming copyright in the EU disappeared from Google’s search engine. Although the sites and Web pages themselves were still online, their absence from Google search results meant that for all intents and purposes they had vanished from the public Web. It turned out that a service called “Topple Track” had sent a notice to Google claiming erroneously that the Internet addresses were infringing on copyright material – a song in this case.
The Topple Track service is run by a company called Symphonic Distribution. The Electronic Frontier Foundation reported that the company says it is “one of the leading Google Trusted Copyright Program members”. This is a scheme that allows certain companies to remove sites from Google’s search index directly and without oversight, based purely on their claim that copyrights are being infringed. The EFF quoted Symphonic Distribution’s boast:
Once we identify pirated content we send out automated DMCA takedown requests to Google to remove the URLs from their search results and/or the website operators. Links and files are processed and removed as soon as possible because of Topple Track’s relationship with Google and file sharing websites that are most commonly involved in the piracy process.
That text seems to have been removed from Symphonic Distribution’s site, probably because the 23 Internet addresses allegedly infringing on a song include maps of India, an article about “Labor Recruitment and its Regulation in the US-Mexico-Central America Corridor”, and an EU site on road safety. In other words, the incident is a perfect demonstration of all that is wrong with automated takedowns, and with the practice of allowing external companies to remove content without checks.
The EFF reports that Symphonic Distribution says the mistakes were caused by “bugs within the system that resulted in many whitelisted domains receiving these notices unintentionally”. And that is the problem with automated filters: there are always bugs in the system, because all software has bugs. In this case, those bugs resulted in a wide range of unrelated and legitimate material vanishing from Google’s search index. If Article 13 with its licensing requirements becomes law, other bugs will similarly result in legal material being blocked erroneously by uncomprehending algorithms.
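It does not take an exotic failure for a whitelist to misfire. The sketch below shows one plausible bug of this kind; it is purely illustrative and is not Topple Track’s actual code:

```python
# Hypothetical illustration of how a whitelist check can fail and let
# takedown notices go out against legitimate domains. Not real code from
# any takedown service -- just one plausible failure mode.

from urllib.parse import urlparse

WHITELIST = {"europa.eu", "eff.org"}

def is_whitelisted_buggy(url):
    # Bug: compares the full URL string against bare domain names, so a
    # whitelisted domain like "europa.eu" never matches a real URL.
    return url in WHITELIST

def is_whitelisted_fixed(url):
    # Correct check: compare the parsed hostname (and its subdomains)
    # against the whitelist.
    host = urlparse(url).hostname or ""
    return host in WHITELIST or any(host.endswith("." + d) for d in WHITELIST)

url = "https://europa.eu/road-safety"
print(is_whitelisted_buggy(url))  # False -> a takedown notice fires anyway
print(is_whitelisted_fixed(url))  # True
```

A one-line comparison bug like this is enough to strip whole swathes of legitimate pages from a search index before any human notices.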
But it’s not just bugs that are the issue. Even if the software were perfect, deciding whether material is legal or not is simply too difficult for automated systems. Another recent incident in Europe exemplifies the threat here. It involves a German music professor’s project to digitise materials that are unequivocally in the public domain. Dr. Ulrich Kaiser uploaded a short video about his work to YouTube. Here’s what happened:
In this video, I explained my project, while examples of the [public domain] music played in the background. Less than three minutes after uploading, I received a notification that there was a ContentID claim against my video. ContentID is a system, developed by YouTube, which checks user uploaded videos against databases of copyrighted content in order to curb copyright infringement. This system took millions of dollars to develop and is often pointed to as a working example of upload filters by rights holders and lawmakers who wish to make such technology mandatory for every website which hosts user content online.
The fact that a claim was made against public domain music in this way surprised the professor, and he went on to investigate the problem:
I decided to open a different YouTube account “Labeltest” to share additional excerpts of copyright-free music. I quickly received ContentID notifications for copyright-free music by Bartok, Schubert, Puccini and Wagner. Again and again, YouTube told me that I was violating the copyright of these long-dead composers, despite all of my uploads existing in the public domain. I appealed each of these decisions, explaining that 1) the composers of these works had been dead for more than 70 years, 2) the recordings were first published before 1963, and 3) these takedown requests did not provide justification in their property rights under the German Copyright Act.
What is particularly troubling is Google’s response when Kaiser contacted the company about these repeated, erroneous notices: “thank you for contacting Google Inc. Please note that due to the large number of enquiries, e-mails received at this e-mail address firstname.lastname@example.org cannot be read and acknowledged”. That raises an important point. Even if there is some kind of appeals process designed to rectify upload filter mistakes, there is no guarantee it will work effectively, or at all. The fact that Google is unable to cope with the volume of enquiries from the public does not augur well for future appeals systems run by companies with far fewer technical and financial resources than the US giant.
Yet another recent case confirms the problems here. The Dutch pro-choice organisation Women on Waves was told that it had violated YouTube’s “community guidelines” and therefore its account was taken down not once, but three times. Each time, the account was eventually re-instated. However, this was not because YouTube’s appeals procedure worked, but rather:
because Women on Waves has a network of journalists and high profile followers that could draw attention to the ban and force YouTube to act. This might have worked now, and it might have worked for Women on Waves, but it can’t, and shouldn’t, be relied on to work in every case and for everyone.
Again, this is what is likely to happen with Article 13’s upload filters blocking legitimate material: if you can make enough noise, you will probably be able to get the block lifted. But for the vast majority of people, without the benefit of “a network of journalists and high profile followers”, the risk is that their uploads will stay blocked, and that they will give up trying to overturn that decision, or to post more of their material. Online culture and freedom of expression in the EU will be poorer as a result.
Bad as the cases above seem, Article 13 will be significantly worse in one important respect. The US DMCA created a system for automated takedown of materials that have been posted online. Article 13 will lead to automated upload filtering that blocks material even before it is posted.
Under the DMCA, the public has a chance to spot that material has been removed. People can then fight to get it put back. Article 13 will work invisibly in the background, silently blocking material without anyone knowing other than the person uploading the file, who will probably just accept that result, however unreasonable.
No amount of re-drafting can save this flawed and dangerous idea that will undermine the Internet as we know it in the EU. Article 13 and its pernicious mix of compulsory licensing and fallible filtering must be dropped completely.
Featured image from The Juice Media.