Watching them, watching us: Can we trust Big Tech to regulate itself?

Tay started life as an innocent experiment; in under a day it had turned to misogyny and racism. Microsoft’s artificial intelligence chatbot, released in March 2016, was intended to teach the tech giant how machines can interpret natural language and understand how humans really talk to each other. Instead, Tay became an example of how AI can go wrong.

Early automated 140-character exchanges between Tay and intrepid Twitter users were jovial enough. But the AI behind the bot, like most current systems, learns from the data it is trained on. Swamped with messages from the murkier corners of the internet, Tay emulated that behaviour, treating it as the way people talk.



The example is largely trivial, but it is indicative of AI’s potential pitfalls. “The most immediate negative societal impact AI could have is that we come to place more trust in it than is merited,” says Christopher Markou, who is researching AI law at the University of Cambridge. “Right now everyone wants to jump on the AI train, and there are fortunes to be made in just about every business sector by throwing some AI at a particular problem.”

Various approaches and applications are being developed by computer scientists, from those at Google, Facebook and Amazon to smaller companies and university research labs around the world. Methods include computer vision systems that can recognise what is in front of them in real time, such as those in driverless cars and robots; natural language processing, used by Tay and other bots; and, most prominently, machine learning.

Illustration by Matt Murphy

Evolving from pattern recognition, machine learning systems try to emulate the brain’s processes with artificial neural networks. They analyse data, learn from it and refine their decision-making through repetition. Arguably the most famous machine learning system to date is Google DeepMind’s AlphaGo, which completed a historic victory against the world champion of the board game Go after being trained on some 30 million moves.
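That loop of analysing examples and refining decisions can be sketched in a few lines of code. The snippet below is a minimal illustration only, using invented data and a tiny off-the-shelf network rather than anything resembling AlphaGo: each training round lets the network adjust its internal weights, and its decisions on the example data improve with repetition.

```python
# A minimal, made-up sketch of "learning by repetition": a tiny neural network
# repeatedly adjusts its internal weights to make better decisions on example
# data. This is an illustration only, not how AlphaGo itself was built.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Toy problem: label 200 random points by whether they fall inside a circle.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)

# One hidden layer of 16 artificial neurons; warm_start lets each call to
# fit() continue training from the weights learned in the previous round.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=100, warm_start=True,
                    random_state=0)

for training_round in range(1, 6):
    net.fit(X, y)  # another pass over the same examples
    print(f"round {training_round}: accuracy {net.score(X, y):.2f}")
```

Real systems differ mostly in scale: AlphaGo’s networks were trained on millions of recorded human moves and then refined further by playing against themselves.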

Below: Google DeepMind’s AlphaGo in action

While researchers push the systems to new heights, the use of automated code that makes decisions is expanding into real-world scenarios. “Algorithms are being used across industrial sectors for financial trading, recruiting decisions (hiring, firing, and promotions), and setting insurance premiums,” says Sandra Wachter, a postdoctoral researcher in data ethics and algorithms at the Oxford Internet Institute and a former member of The Alan Turing Institute. “Algorithms help decide whether you are a desirable candidate, eligible for a loan or a mortgage, or should be admitted to university.” The widely recognised issue with these systems, Wachter continues, is how difficult they are for humans to understand. She describes them as “opaque”, and Carnegie Mellon’s Dean Pomerleau, among many others, has referred to machine learning systems as “black boxes”.

The widely recognised issue with these systems is how difficult they are for humans to understand

Ultimately, the decisions made by AI and machine learning systems can be impossible to interpret. This becomes problematic when they move from the lab to the real world. Microsoft’s researchers can attest to this: Tay was taken offline two days after being unleashed on the world.

In July 2015, technology’s biggest minds took a stand against rogue AI applications. Physicist Stephen Hawking, billionaire entrepreneur Elon Musk and Apple co-founder Steve Wozniak were among more than 1,000 experts calling for regulation of AI weapons. In an open letter, the learned group called for a ban on autonomous weapon systems that could start a “global AI arms race”. The letter acknowledged that while such systems – which could be used for “assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group” – don’t exist yet, they should be controlled by laws. The same questions are being raised about less lethal AI applications.

In Europe, AI and algorithmic regulation is already underway. In 2018, new data protection rules under the General Data Protection Regulation (GDPR) will be enforced across EU member states and will, it is argued, effectively create a ‘right to explanation’. The law states that people should have a right not to have decisions about their lives made “based solely on automated [data] processing”.

“We’re still at what can be described as a formative stage when it comes to the law,” says Roger Bickerstaff, a partner at international law firm Bird & Bird. “What we need to do is look at it from two angles: firstly, what changes to the law need to be made to facilitate AI; beyond that, the managerial and control elements that the law might need to look at.” In anticipation of self-driving cars taking to the roads on a mass scale, the UK government has introduced one of the first pieces of legislation to directly address AI. Under the country’s Vehicle Technology and Aviation Bill, those travelling in autonomous vehicles will be able to claim compensation if a crash occurs while the car is in control. Theoretically, at this stage, insurance firms could attempt to recover costs from the car’s manufacturer when it is to blame.

No-one in their right mind would say we should let Facebook, Google, Baidu, or Uber police themselves

The UK government is taking AI’s potential seriously. A report from the Government Office for Science recognises that public trust in AI needs to be built and that there will be “barriers to acceptance”. The 2015 report said a public debate about how AI is used needs to cover three areas: what happens when mistakes occur; how to understand decision-making; and how much trust should be placed in automated systems. Separately, as part of its Digital Strategy, the government has opened a “major review” into how AI can boost the economy.

Aiming to be at the centre of discussions around AI and the law is a cabal of the world’s biggest technology companies. IBM, Facebook, Microsoft, Google, DeepMind, Apple, Elon Musk’s OpenAI and Amazon have all joined forces in an umbrella group called the Partnership on AI. Along with the American Civil Liberties Union and academics, the organisation, formed in September 2016, says it will create best practices around AI use, but will not act as a lobby group opposing governments that try to impose regulation on AI.

“There’s no explicit attempt at the notion of self-regulation to repel government intrusion,” Eric Horvitz, a managing director at Microsoft’s research division, told reporters in a phone call at the launch of the Partnership on AI. However, months after its formation, the group has yet to publish its research objectives or any documentation on best practices, and when approached it said it had nothing to add to this story. Markou argues that even if the large technology companies were to advocate self-regulation of AI, it shouldn’t be allowed: “We’ve seen what happened with Wall Street in 2008 and no-one in their right mind would say we should let Facebook, Google, Baidu, or Uber police themselves.”

The collaborative approach to AI standards promoted by the Partnership is, then, likely to feed into the creation of laws around AI. Luciano Floridi, a professor of philosophy and ethics of information at the Oxford Internet Institute, says algorithms should be regulated and that regulation “should be a matter of public-private coordination”. “It could work in all contexts; the question is whether it could work without stifling innovation. That’s why it is delicate work,” he says. Wachter, Floridi’s colleague, argues that trade secrets will make companies reluctant to disclose details of their computer systems publicly.

The Partnership on AI website

When it comes to whether AI should replace humans in making decisions within governments, councils and other public bodies, Joanna Bryson, a reader at the University of Bath and an affiliate of Princeton University’s Center for Information Technology Policy, says AI systems should augment human decisions rather than replace them. “I think they do need to understand you shouldn’t use artificial intelligence to replace humans but to make the humans that we have better,” she says. “The more different kinds of minds you have, the better you are. So this is like a different kind of diversity.”

Not all artificial intelligences are created equal, however, and the underlying algorithms, datasets and code behind AI applications can produce very different levels of performance. According to Chris Urmson, who until last year was technical lead of Google’s self-driving car unit, during millions of miles of autonomous driving on public roads between 2009 and 2015 its cars were involved in 14 accidents – but in each case, human drivers in other vehicles were the cause. Conversely, in San Francisco, when Uber defied state regulations and experimented with its new self-driving system on the city’s streets, one of its autonomous Volvos jumped a red light near the Museum of Modern Art.

“What’s the driving test standard for an autonomous car?” Bickerstaff asks. “It’s clearly not the same as a human driver.” He says that for self-driving vehicles to provide societal value they must be capable of being safer than human drivers, and for this, regulation and standards are required.

Across the vast majority of AI methods one thing is common: data. Discrimination and bias in the data used to train machine learning systems is one area being studied. “Getting balanced training data is not a trivial issue,” says Emiel van Miltenburg, a PhD student at VU University Amsterdam who has looked at stereotyping in the language of large datasets used to build AI systems. “An innocent example that I’ve found is that a recent system mistakenly identifies cellos and violins as guitars, because there are more guitars than cellos in the training data,” he says. “So, guessing ‘guitar’ instead of ‘cello’ is a very good strategy.”

The same problem applies to data about people: a ProPublica report found that automated software used in the US to predict the likelihood of reoffending was biased against African Americans. Van Miltenburg says regulation can help to force businesses and researchers to consider how ethical their AI is. But to regulate AI, it needs to be clear what the AI is doing. Eric Price, a computer scientist at the University of Texas at Austin, argues that, at present, not enough is known about how machine learning black boxes work to enforce regulations. Along with Google researchers, Price has attempted to create AI systems that are able to reduce bias.
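Van Miltenburg’s guitar-and-cello point is easy to reproduce in miniature. The sketch below uses invented numbers, not his data: a simple classifier trained on 50 guitars for every cello learns that guessing “guitar” is usually a winning strategy, and on a cello-only test set it gets most of them wrong.

```python
# A toy illustration (invented data, not van Miltenburg's) of how class
# imbalance skews predictions: with 50 guitars for every cello in the
# training set, guessing "guitar" becomes the model's default answer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical two-number descriptions of each instrument (say, body size and
# pitch range). The two classes overlap, but guitars vastly outnumber cellos.
guitars = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(5000, 2))
cellos = rng.normal(loc=[1.0, 1.0], scale=1.0, size=(100, 2))

X = np.vstack([guitars, cellos])
y = np.array([0] * len(guitars) + [1] * len(cellos))  # 0 = guitar, 1 = cello

model = LogisticRegression().fit(X, y)

# Test on cellos only: a large share are still labelled "guitar", because the
# imbalanced training data made that guess a winning strategy.
test_cellos = rng.normal(loc=[1.0, 1.0], scale=1.0, size=(1000, 2))
share_wrong = (model.predict(test_cellos) == 0).mean()
print(f"Cellos misclassified as guitars: {share_wrong:.0%}")
```

The mistakes come from the skew in the training data rather than from the instruments themselves, which is why balanced, representative datasets matter so much for fairness.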

Everyone spoken to for this story agreed that regulation of AI is needed – at least in some scenarios. And autonomous cars could be among the first. “If the standards are too low then it will bring AI into disrepute,” Bickerstaff adds. Research from Wachter goes one step further, suggesting a third-party regulator, made up of law, ethics and computer science experts, to keep algorithms and the companies creating them in check. “Such a setup would allow the body to act on the behalf of the persons that feel that they have been treated unfairly by an algorithm,” she says. “In addition, such a watchdog could have auditing powers to inspect data controllers and evaluate whether processing is legal and fair, even when a complaint has not already been lodged by an individual.”

Markou agrees that regulation should happen and says, despite its potential, AI should be treated the same as any other new technology. AIs are nothing more than tools, he argues. “We regulate all sorts of tools, whether they are power drills, pressure washers, cars, or the composition of the chemicals that make up the parts of your computer, tablet, smartphone, whatever. It would be a mistake, certainly at this stage, to imbue AI with some sort of magical capabilities that should prevent us from being as thorough and comprehensive with how we mitigate its harms as we do with just about any other thing that attracts regulatory attention.”

Matt Burgess is a staff writer at Wired UK.

