Interview with Bernd Porr
Bernd Porr is a lecturer at the University of Glasgow. He straddles the two worlds of science and the arts. He invented the fastest walking robot in the world, numerous artificial intelligence (AI) learning algorithms (that is, sets of rules), deep brain stimulation devices, and models of the emotional system of the brain. He also conducts hearing aid research, and is co-founder of a startup producing data acquisition equipment. Alongside his many-stranded academic research, Porr worked in theatre for more than 10 years as a lighting designer, wrote/produced radio programmes/radio plays, and has been making fiction films since 2008.
GM: First, could you give a quick introduction to how today’s much-hyped AI systems work in practice?
BP: AI is divided into open-loop and closed-loop systems. In the open loop you simply have some kind of "black box" and you show the box an image. The box says: "that's a cat", but actually the image is a dog, so you punish the box, and hopefully the next time the box gives the right answer. If not, you punish it again. Otherwise you give the box a carrot as a reward. And that's basically AI. Essentially you have some kind of error signal. If it's positive, then you tell the network [inside the black box], OK, this was great; if not, you tell the network it was bad, and it does something else. And you do this [process] in AI nowadays roughly 5 billion times. Hopefully, at the end the network does the right thing.
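The reward/punishment loop BP describes can be sketched with a toy supervised learner. This is a minimal perceptron in plain Python, an illustrative stand-in for the deep networks he refers to (the data points and labels here are invented for the example):

```python
# Toy "open loop" learner: show the box an input, compare its answer
# to the true label, and nudge the weights whenever it is wrong
# (the "punishment"). A correct answer leaves the weights alone.
def train(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = y - guess          # the error signal: 0 means "correct"
            if error != 0:             # wrong answer -> adjust the weights
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
    return w, b

# Two linearly separable toy classes ("cat" = 1, "dog" = 0).
X = [[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]]
y = [1, 1, 0, 0]
w, b = train(X, y)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

Real deep networks replace the single weight vector with many stacked layers and the simple error signal with backpropagated gradients, but the punish-until-correct loop is the same idea.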
The closed loop stuff was what Google’s DeepMind did with Atari video games. The network looked at the video image of this very old Atari video game, and then took an action to win the game. If the network lost, it was punished. It then ran the game 5 billion times, and the network learned to solve it. Basically it was all happening in this black box, and it was becoming better and better. But it is still very error-prone, and obviously takes a long time to train. So there is no magic behind this, I would say.
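The closed-loop setting can likewise be sketched in miniature. DeepMind's Atari system was a deep Q-network learning from screen pixels; the toy below is tabular Q-learning on an invented five-cell corridor game (reach the rightmost cell for a reward), which shows the same act-observe-reward loop without the deep network:

```python
import random

# Toy "closed loop" learner: a 5-cell corridor; reaching the rightmost
# cell pays a reward of +1, every other step pays 0. The agent tries
# actions, observes the outcome, and updates its value table (Q).
random.seed(0)
N_STATES, ACTIONS = 5, [-1, +1]        # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        a = random.choice(ACTIONS) if random.random() < 0.2 else \
            max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward only at the goal
        # Q-learning update: move the estimate towards reward + discounted
        # best value of the next state.
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy should always move right.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

The Atari version differs in scale, not in kind: the state is the video frame, the value table becomes a deep network, and the game is replayed millions of times rather than 500.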
GM: We know that Article 13 will inevitably lead to upload filters. The volume of digital material that will need to be filtered is beyond human capabilities, which means that upload filters must be automated. Speaking as an engineer, how would you go about using AI to achieve what Article 13 demands – the recognition and blocking of unauthorised uploads?
BP: In terms of technology, I would buy computing time somewhere in the cloud, and run the AI algorithm there. Deep learning [AI] is basically able to detect images, and [some claim] the error rate is now just around 2%. But I just had a look at more realistic articles, which say the error rate is 10%.
In order to train this AI system, you need to have massive amounts of computing power. If you run this service for three years, say, Amazon would charge about £100,000 for that. You have to have more or less all the images of the world in your database. As an engineer, I would not know how to do that, because I would need to scavenge the Web, and then train the network on this.
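The database-matching task BP describes is, in a heavily simplified form, what perceptual hashing does: reduce each image to a short fingerprint and compare fingerprints by Hamming distance. A minimal average-hash sketch (the 4x4 "images" below are invented nested-list stand-ins for real pixel data; production systems such as Content ID are far more sophisticated):

```python
def average_hash(img):
    """Fingerprint a grayscale image (a list of rows of pixel values):
    1 where a pixel is brighter than the mean, else 0."""
    pixels = [p for row in img for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits between two fingerprints."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# A "copyrighted" image, a slightly brightened copy, and an unrelated one.
original = [[10, 10, 200, 200], [10, 10, 200, 200],
            [200, 200, 10, 10], [200, 200, 10, 10]]
copy_    = [[p + 5 for p in row] for row in original]
other    = [[200, 10, 200, 10], [10, 200, 10, 200],
            [200, 10, 200, 10], [10, 200, 10, 200]]

h_orig = average_hash(original)
# A small Hamming distance flags a near-duplicate; a large one does not.
is_match = lambda img, threshold=2: hamming(average_hash(img), h_orig) <= threshold
```

Even this toy shows why the database is the hard part: the fingerprinting itself is cheap, but you can only flag what you have already fingerprinted, which is why BP says you would need "all the images of the world".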
Images are being created all the time, so the cost of this would easily go into the millions. For me as an engineer, I think there's just no way of doing it, from a cost perspective and also in terms of time. The training of the [AI] networks takes a very long time – it takes millions and millions of iterations to get to a very low error rate. Every time new films or images are generated, the training has to be updated.
The computing cost of this would be absolutely massive, and I don't think it's possible for anybody except Google, who already have a system [Content ID]. I think anybody wanting to comply with [Article 13] basically has to go to Google and license it. Google will have a total monopoly regarding this.
I just had a look at Amazon Web Services – they’re very advanced in offering packaged deals for image detection [in the cloud]. Even they would not be able to do [what Article 13 requires]. If they were able to do it, I’m pretty sure they would be offering the service already. So for me as an engineer, if I worked in a company, I would just say: well, we have to subcontract Google to do that. There’s no other way around this.
GM: What about the limitations of this kind of AI-based approach – for example, the well-known issue of false positives?
BP: I think that’s a major problem. The issue is that deep learning has amazing learning algorithms, but deep learning is not just one network layer, but a black box feeding into the next one – it iterates. Nobody knows what these black boxes do when they get these very diffuse rewards and punishments from the outside. So it’s very hard to figure out what decision mechanisms they use. There might be absolutely weird false positives coming up. I’m very sceptical regarding these approaches, but the problem is deep learning has been extremely hyped up, and there is very little criticism of it.
If you were a lawyer, especially in Germany, you could send what in Germany is called an Abmahnung [a demand for payment for having found a violation of the law, for example copyright law]. You could send this letter to everybody whose site has an upload button, because there will definitely be errors made in the detection process. Even Google will make these errors.
GM: That sounds like the problem of copyright trolls demanding money from sites could be even more of a problem than it is today. If Article 13 is passed, what do you think the overall result will be?
BP: Especially in jurisdictions where people can be sued, I think that sites will no longer allow uploading anything, because of the [legal] risk. So I think the danger is that a lot of smaller companies will just block any upload completely. For me, the main worry is that upload filter systems will default to the assumption that you are a criminal if you upload anything there. I think it will cause mayhem.
Featured image by Bernd Porr.