Thursday, 20 September 2018

Article 13: Making Copyright Unfit for the Digital Age


The previous examination of Article 13’s deep flaws noted how it tilted the copyright playing-field against ordinary citizens. This is probably an inevitable result of the blinkered view of the powerful lobbyists and key politicians who have shaped the proposed law.

Despite its name, the “Directive of the European Parliament and of the Council on copyright in the Digital Single Market” is really about online videos in general, and YouTube in particular. Reading the comments from rightholders, the constant refrain is about the mythical “value gap” – in reality, an innovation gap. And when politicians try to justify the unworkable upload filter required by Article 13, they typically point to YouTube’s Content ID system to “prove” that such things are not only possible, but already exist. However, Content ID is a one-off example that cost Google over $60 million to create, and required 50,000 hours of coding. Moreover, it displays all the flaws discussed in the previous article: things like overblocking and the inability to distinguish cases covered by copyright exceptions. If anything, it is a perfect demonstration of all that is wrong with filters.

Supporters of the Copyright Directive and Article 13 don’t worry about such details, because they just want an example – any example – of an upload filter. But in constantly pointing to Content ID – or, very occasionally, to Audible Magic’s system – they overlook one crucial aspect of Article 13: the fact it would apply to everything that is covered by copyright, even creative expressions that have nothing in common with videos or music.

For example, Article 13 applies to all the text and images that appear as parts of trillions of social media posts. Every single sentence, and every single image would have to be checked against a database of textual and graphical material in case posts include elements claimed by rightholders. The impossibility of doing this exactly – how can every sentence, or part of every sentence, be compared against every entry in the list of material to be blocked? – means that online platforms will inevitably err on the side of caution. As a result, memes will be blocked because they use pre-existing images. Similarly, witty transformations of copyright material will be blocked because the algorithm will note similarities, and flag them up as likely to be infringing.
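To see why similarity-based filters inevitably catch transformative works, consider a minimal sketch. Everything here is invented for illustration – the "claimed" line, the parody, and the 0.85 threshold are hypothetical, not drawn from any real filtering product – but the underlying problem is real: a filter can only measure how similar two texts are, not whether the similarity is lawful parody or unlawful copying.

```python
from difflib import SequenceMatcher

# Hypothetical blocklist entry claimed by a rightholder.
CLAIMED = "winter is coming and the night is dark and full of terrors"

# A transformative, likely-lawful parody of the same line.
parody = "winter is coming and the night is dark and full of terriers"

THRESHOLD = 0.85  # assumed cut-off; a real filter would tune this value

def flagged(upload: str, claimed: str, threshold: float = THRESHOLD) -> bool:
    """Return True if the upload is 'too similar' to claimed material."""
    return SequenceMatcher(None, upload.lower(), claimed.lower()).ratio() >= threshold

# The parody differs by a single word, so a similarity score
# cannot tell it apart from verbatim copying.
print(flagged(parody, CLAIMED))  # True (similarity is roughly 0.97)
```

Lowering the threshold lets more parodies through but also more verbatim copies; raising it blocks more parodies. No setting of that one number encodes the legal distinction between infringement and a copyright exception.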

Another problem with applying upload filters to text and images is that there is no equivalent to Content ID that can be used. Indeed, it is impossible that such a monolithic filtering system could ever be constructed, since commercial text and images exist in vastly greater numbers than commercial videos or music. The largest music streaming service, Apple Music, offers around 40 million songs; by contrast, over 750 million Facebook comments are posted every single day – roughly 270 billion a year. Most of those are non-commercial, but huge numbers are posted by companies.

Commercial text and images also exist in many different forms – novels, articles, poems, advertisements, short stories, news stories, blog posts, tweets, paintings, prints, signs, cartoons and so on. However flawed Content ID may be, at least it possesses the virtue of existing. There are no single-solution equivalents for text and images that politicians can point to, and it is highly unlikely there ever will be. At best, there will be many fragmentary solutions for different domains, which implies online platforms will need to run multiple checks on every upload. Only the very largest platforms will be able to afford the necessary resources and licences. Startups, especially the European ones, will be shut out of the market completely.

There are other problems arising from the Copyright Directive’s all-encompassing nature. For example, whereas videos, music, text and images have been around for decades, digital representations of 3D models, which are protected by copyright law, are still comparatively new. As a consequence, relatively little research has been done on copyright infringement in this area. There are certainly no handy tools available that online platforms can license and deploy to filter uploads of 3D models. So how do EU politicians expect Internet sites to comply – by writing their own 3D model upload filters? That’s a multi-year research project at the very least.

The area of 3D models may be slightly esoteric, and thus of limited impact. But the same cannot be said for software code, which powers huge swathes of the modern world, and is also covered by copyright. The completely general nature of Article 13 means that all major commercial upload sites would be forced to check any code that is uploaded for infringements – not just of code, but also of text, images, sounds and videos, all of which regularly appear in code repositories. Once more, there is no general filtering technology that can cope with such a demanding task, nor will there ever be.

Moreover, the world of software is probably unique in the copyright world because of the massive scale of consensual code sharing. That’s because open source has essentially won, and is the dominant software development methodology in many sectors. As a result, software programs that re-use code from other projects are probably the rule now. So how will upload filters distinguish between this legitimate copying of software code, and the kind that is unauthorised? Simply looking at the software licences is not enough, because it is the complex interaction of licensing that determines what is, and what is not, legal.
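The point that licences interact, rather than apply file by file, can be sketched in a few lines. The compatibility table below is a toy – real compatibility rules involve licence versions, linking exceptions and patent clauses, and the function name is invented for this example – but it captures why no per-file fingerprint can decide legality: the very same snippet is lawful in one destination project and unlawful in another.

```python
# Toy, incomplete compatibility table: (source licence, project licence) -> allowed?
# Real-world rules are far more nuanced than any such lookup table.
COMPATIBLE = {
    ("MIT", "MIT"): True,
    ("MIT", "GPL-3.0"): True,            # permissive MIT code may go into a GPL project
    ("GPL-3.0", "MIT"): False,           # GPL code may not be relicensed as MIT
    ("GPL-2.0-only", "GPL-3.0"): False,  # these two GPL versions are mutually incompatible
}

def reuse_allowed(source_licence: str, project_licence: str) -> bool:
    """Can code under source_licence lawfully be reused in a
    project distributed under project_licence?"""
    return COMPATIBLE.get((source_licence, project_licence), False)

# Identical code, opposite answers, depending on where it lands:
print(reuse_allowed("MIT", "GPL-3.0"))  # True
print(reuse_allowed("GPL-3.0", "MIT"))  # False
```

An upload filter sees only the bytes being uploaded. It cannot see which project the code is destined for, so it cannot even in principle compute the answer this function needs.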

There’s another major issue for software code thanks to Article 13. As the previous CopyBuzz analysis noted, under the proposed law there is nothing to stop malicious actors making false claims of copyright ownership to online services. Imagine, now, that a false claim were made against key open source software – for example, a vital element in the Android operating system, which has Linux at its heart. Until the false claim was challenged and overturned, online platforms would be obliged to block all uploads and downloads of that code wherever it appears, or otherwise risk legal penalties. Because of the way software works as an interdependent mosaic of code elements, removing just a few lines of programming could lead to major software projects becoming dysfunctional, with serious economic consequences, and even risks to health and safety if critical systems start to fail.
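The "interdependent mosaic" effect can be made concrete with a small dependency-graph sketch. The package names below are hypothetical stand-ins, not real Android components, but the traversal logic is the standard way to answer the question: if one component is blocked by a (possibly false) copyright claim, what else stops working?

```python
from collections import deque

# Tiny, hypothetical dependency graph: package -> packages it depends on.
DEPENDS_ON = {
    "android-framework": ["linux-kernel", "bionic-libc"],
    "bionic-libc": ["linux-kernel"],
    "messaging-app": ["android-framework"],
    "health-monitor": ["messaging-app"],
    "linux-kernel": [],
}

def broken_by(blocked: str) -> set:
    """Everything that fails, directly or transitively, if `blocked`
    is taken down in response to a copyright claim."""
    broken = set()
    queue = deque([blocked])
    while queue:
        pkg = queue.popleft()
        for dependant, deps in DEPENDS_ON.items():
            if pkg in deps and dependant not in broken:
                broken.add(dependant)
                queue.append(dependant)
    return broken

# Blocking one low-level component knocks out everything built on it.
print(sorted(broken_by("linux-kernel")))
# ['android-framework', 'bionic-libc', 'health-monitor', 'messaging-app']
```

In this toy graph a single false claim against the lowest-level component takes down every package above it, including the hypothetical "health-monitor" at the top of the stack – which is exactly the health-and-safety scenario described above.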

These potentially massive problems would arise because the people drafting Article 13 clearly thought only in terms of videos and music. But instead of making that explicit, both the draft Directive and the Report of the European Parliament’s Legal Affairs Committee speak of copyright works in general, thus impacting all the other, quite different, fields. In those areas, however, Article 13’s ideas and methods are simply not transferable or feasible.

Finally, it’s worth noting that there is one other way in which the tunnel vision of lobbyists and politicians has resulted in a bad law that will have negative effects on the online world in the EU. Film, video and recorded music have traditionally required expensive technological resources and complex distribution networks for them to be created and delivered to their audiences. This is reflected in the thinking behind Article 13, which implicitly and incorrectly frames the situation as Google using YouTube as an alternative distribution network to “cheat” traditional copyright companies of their profits.

This flawed narrative of a zero-sum game, where either the US tech giants win, and traditional copyright industries lose, or the EU’s copyright industries are given legislative weapons so that they can “beat” Google and Facebook, permeates the thinking behind the Copyright Directive. It is also almost universal among the mainstream media. In its coverage of the recent vote in which the European Parliament rejected the text of the JURI committee, The New York Times summarised the result as “Tech Giants Win a Battle Over Copyright Rules in Europe”. Even the Guardian wrote: “YouTube and Facebook escape billions in copyright payouts after EU vote”, as if this were just about money.

It’s not, as the massive mobilisation by email, calls, tweets and petitions of thousands of ordinary people in the EU clearly proves. This is about the hugely harmful effects that Article 13 would have on them, the users of the Internet. This is the missing story: that the world has changed from the 20th-century model of passive consumption of text, images, sound and films by the public, to one of active engagement where ordinary people are discovering they can be creators too.

This third force – alongside traditional copyright companies, and the new Internet giants – has been completely ignored so far in the debate about Article 13. But in doing so, the politicians risk failing in their stated goal of fashioning “a European copyright fit for the digital age”. This new epoch is the age of mass participation, and of the general public using the Internet as a canvas for their newly liberated creativity. Article 13’s upload filters will inevitably throttle and kill that creativity, which is why they must be removed completely from the Copyright Directive.

Featured image by Joe Haupt.

Writer (Rebel Code), journalist and blogger on openness, the commons, copyright, patents and digital rights.