
Could a computer algorithm be put on trial?



Introduction and existential threats

Should we hold algorithms and artificial intelligence accountable for their actions? That's the question posed by a provocative piece of performance art by Helen Knowles, an MFA student at Goldsmiths, University of London, in which a computer algorithm goes on trial for manslaughter.

In the fictional plot, an algorithm called 'Superdebthunterbot' is used by a debt collection agency that has just bought the debts of students across the UK. The algorithm targets job adverts at the students to reduce the number of defaulters, but two students die after taking part in a risky medical trial it advertised to them. Is the algorithm culpable?

There is, of course, one problem with the thesis. Algorithms don't have legal status. "It's possible that a computer algorithm could be put on trial," says Dr Kevin Curran, Technical Expert at the IEEE and reader in computer science at Ulster University, but he raises an excellent question for anyone thinking of getting litigious. "No computer algorithm has opened a bank account yet, so what would you sue?"


Who to sue?

The answer of who to sue if something goes wrong with an algorithm seems simple enough. "The most pragmatic and reasonable approach is to sue the humans who deployed the algorithm," says Curran, but it's not as straightforward as that.

"Take the instance of an automated driverless car causing a death," says Curran. "Does the lawsuit pursue the dealership, the car manufacturer, or the third-party who developed the algorithm that was deemed to be at fault?" Cue new kinds of lawyers; with super-complex incidents put in front of judges, there's bound to be a growing need for lawyers skilled in the role of automation and its relation to legal accountability.

"Algorithms are essentially sets of rules that computers must follow when processing data – so, in legal terms, they're much like any other software," says Richard Kemp, founder of Kemp IT Law and one of the world's top IT lawyers. "Just as if CAD software is used to design a building that falls down and injures people, the designers of the defective software may be liable, so it's possible that the designers of faulty algorithms may also have to accept legal responsibility."

Technology is rarely just a tool.


Algorithmic angst

Technology isn't neutral. Apps constantly make algorithm-based decisions that affect their users. Facebook's algorithm decides what news you read and which of your friends' updates you get to see – in effect, it decides what values its users are exposed to. This isn't unusual. The values of the Silicon Valley elite are often baked into the platforms and products created in that one tiny corner of a diverse world.
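In code, 'deciding what you see' usually comes down to a scoring function. The toy sketch below is an invented example of feed ranking – the weights and field names are assumptions, not Facebook's actual system – but it shows that choosing the weights is an editorial act.

WEIGHTS = {"friend_closeness": 3.0, "engagement": 2.0, "recency": 1.0}

def score(post):
    # Rank a post by a weighted sum; the weights encode the values.
    return sum(WEIGHTS[k] * post[k] for k in WEIGHTS)

posts = [
    {"id": "news_story",   "friend_closeness": 0.1, "engagement": 0.9, "recency": 0.8},
    {"id": "friend_photo", "friend_closeness": 0.9, "engagement": 0.3, "recency": 0.5},
]
feed = sorted(posts, key=score, reverse=True)
print([p["id"] for p in feed])  # re-weighting 'engagement' would reorder the feed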

For example, some love the part-time, casual labour patterns created by the likes of Uber, Lyft and Airbnb – the 'sharing', 'gig' or 'peer-to-peer' economy. Others hate them for creating millions of largely worthless, zero-hours jobs. But can anyone deny that Uber is akin to a political movement, or that Airbnb is contributing to housing problems in some places? Both algorithm-driven apps are wrapped up in moral and ethical dilemmas, whether the coders and programmers behind them like it or not.

Existential threats

However, that's nothing compared to the threat from artificial intelligence (AI). Could AI represent an existential threat to humanity? "A world where big data is constantly whirring away in the background in every part of our lives poses big risks around security, individual liberties and state powers," says Kemp. "All these things in another form are going through the UK parliament at the moment with the Investigatory Powers Bill … AI, big data and algorithms just make them more pervasive."

Top Image Credit: Helen Knowles & Liza Brett


Faulty algorithms

Apps and social media platforms can purposely or accidentally help shape societies, but such concerns are of a very different nature to mistakes made by programmers and coders now shaping the algorithmic economy.


"When it comes to systems that monitor vital signs, like automated diabetes pumps or drug injection systems, then a flaw could prove disastrous," says Curran, who notes that AI is used in aviation autopilot systems, in Segways, and even during a Google search, natural language and speech processing, and translation.

Mistakes could be costly – a wrongly transcribed word could lead to an accidental libelling of someone, while a mistranslated word could cause a huge legal misunderstanding.


Lesser of two evils

However, we're not just talking about mistakes. "If we think about it, amoral computer programmers are trying to solve moral dilemmas with algorithms," says Curran, but he's quick to jump to the defence of technological progress. "We also have to believe that sophisticated AI is just as likely to be nurturing and ultimately beneficial to mankind in so many ways."

More than a million people die on the world's roads every year, and unemployment and poverty remain huge problems. If algorithms can alleviate those ills – or try to – shouldn't they be forgiven in advance for the odd mistake or unintended consequence?

"Pervasive software, algorithms and big data are a key feature of the Fourth Industrial Revolution – the deep digital transformation affecting our world that is just starting," says Kemp. "So you can add 'lesser of two evil' moral choice issues around smart cities, IoT, robotics and even 'designer beings' to driverless car dilemmas."

Where do algorithms go to die?

An algorithm found not to be fit for purpose, blamed for an accident, or considered politically irresponsible can, of course, be made to disappear. "An AI algorithm could be banned or deleted," says Curran. "If a system flaw in an algorithm was brought to light, then the organisation deploying that algorithm should seek to rectify the situation … in reality, the system would be tweaked."

After all, isn't that how the tech world works? Invent something, code it, beta test, sell it to the public, then iron out the bugs in the software. Even the very biggest, long-established tech companies like Apple approach their global businesses in this bizarrely risky way.

"Outside of aviation and medical products, technology experts tend not to follow stringent testing methodologies, but lazily rely on fixing problems as they arise," says Curran, "but a mis-configured service in a fast-moving truck could lead to death."


Blame the programmers

That said, tech companies will have to take responsibility for what they create. After all, in the programmable economy of the near-future, there's an obvious group to blame: the programmers of the algorithms on whose decisions lives will rest.

"The motivation to build rigorous and secure systems should be there because it is quite possible that all involved in its design could be held liable if a defect caused or contributed to a collision," says Curran, who thinks that there needs to be guards put in place to protect the public from algorithms deployed in software that could lead to catastrophic events.

He adds: "If computer programmers eventually play a bigger role in the way driverless cars move than drivers do, it is likely manufacturers will build the cost of litigation and insurance into their trucks."

In the algorithm-driven, automated world now being created by programmers, there has to be a massive cultural shift towards more rigorous testing, with more responsibility taken by the tech industry. "There's always someone behind the artificial intelligence, algorithm or program," says Kemp.

Algorithms embody their coders' values – and usually those values amount to maximising profit. With autonomous vehicles now chosen as the tech industry's next big project, the media is only going to become more obsessed with driverless cars having accidents – and that means the algorithms behind the wheel will come under intense scrutiny.
