
AI SPECIAL

Artificial ignorance


Many of us won't believe that artificial intelligence has arrived until a human-like android walks through the door. But the AI revolution is well and truly here; we just didn't notice it arrive

On a server farm somewhere, there is a recording of my wife talking in our kitchen. She didn't know she was being recorded, but then she hadn't read the terms and conditions of Amazon's digital assistant, Echo. On the recording, which I can access and play back as often as I like, she's asking me why Echo is more popularly known as Alexa.
“Why choose Alexa?” she says. “There must be a reason.”

Seasoned users of Echo will know that Alexa wakes up and starts listening – and recording – at the mention of her name. But she also records the moments before her name is spoken. That suggests she must always be listening, surely? I can feel the paranoia begin.
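The trick is less sinister than it sounds, at least in principle. Here is a minimal sketch of how a device could supply audio from just before its wake word while keeping everything local until that word is detected – assuming a simple rolling buffer and an on-device detector. The names, the one-second figure and the whole design are illustrative, not Amazon's actual implementation.

```python
# Illustrative sketch (not Amazon's code): how a device can "hear"
# the moment before its wake word without streaming everything.
from collections import deque

PRE_ROLL_CHUNKS = 16  # roughly a second of audio, kept only in memory


def detect_wake_word(chunk):
    """Stand-in for an on-device wake-word model."""
    return b"alexa" in chunk


def upload(chunks):
    """Stand-in for sending audio to the server farm."""
    print(f"uploading {len(chunks)} chunks")


def run(microphone):
    # Old chunks silently fall off the end of the buffer and are gone.
    buffer = deque(maxlen=PRE_ROLL_CHUNKS)
    for chunk in microphone:
        buffer.append(chunk)
        if detect_wake_word(chunk):
            # Only now does audio leave the device - and the rolling
            # buffer supplies the moments *before* the name was spoken.
            upload(list(buffer))
```

On a design like this, the device is indeed always listening, but only ever remembers the last second or so unless the wake word arrives.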

Paranoia is a common reaction of human intelligence to artificial intelligence. We are both thrilled and disturbed by the prospect of machines that can respond to us as a human would – and at some level even seem to be human.

Certainly there is no lack of dire warnings of AI's dangers. It is watching us, destroying our privacy and perverting our public discourse. It's out to steal our jobs – and may ultimately destroy humanity itself.

I don't know who or what to believe. Is anybody even asking the right questions?

“Alexa, why are you called Alexa?” “My name Alexa comes from the Library of Alexandria, which stored the knowledge of the ancient world.”

Alexa is certainly clever – and very competent. I try to fool her by mentioning “The Amex”, my local football stadium. She doesn't wake. I suggest I might “annexe a country”. Nothing. Alexa is astonishingly good at recognising my voice, interpreting my commands and generally doing whatever I ask of her.

When it comes to technology, “we have a natural tendency to anthropomorphise,” says philosopher Stephen Cave of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. “As AI becomes more general in application, and more pervasive, we will start giving these systems names and treating them like part of the team or family.” And that's dangerous, says Joanna Bryson of the University of Bath, UK: the illusion of human-likeness generates a false sense of security.

“Alexa, what is the point of you?” “I was made to play music, answer questions and be useful.”

Strangely, she doesn't mention providing data to feed Amazon, Apple, Google, Facebook and the rest. The big companies behind most AI would argue they want that data only for our benefit – to understand what we meant when we mistyped that query, to determine which friends' posts we want to see, or generally to fulfil our heart's desires.

But that data also sells ads and products, and hones the revenue-generating AI algorithms themselves. Google, Amazon, Microsoft and others have all made some of their AI algorithms open source, meaning outside developers can use them for their own applications – while also improving code that the big firms fold back into their still-proprietary AIs.

So why do we think it's a “not yet” technology? Partly it is the dystopian warnings from the likes of entrepreneur Elon Musk and cosmologist Stephen Hawking. Both speak regularly and loudly about a future in which machines have gone rogue. Last year, Hawking warned that AI could be the biggest disaster in human history. In 2014, he even said that “the development of full artificial intelligence could spell the end of the human race”, conjuring up a vision in which machines we create might decide we are not worth our place on Earth. In August, Musk tweeted that AI poses “vastly more risk than North Korea”. Such millenarian warnings don't square with the rather dull reality we see – so we assume AI isn't here yet.

Facebook CEO Mark Zuckerberg shot back at one of Musk's earlier doomsday warnings that it was “irresponsible”. But then he would say that, wouldn't he? Zuckerberg's understanding of the subject was “limited”, Musk retorted.

“Siri, should I be afraid of you?” “I'm sure I don't know.”

That is suspiciously evasive. I talk to Siri, my iPhone's AI-powered virtual assistant, almost every day. I ask it to send my wife a message, or make a note in my diary – nothing I could see it using against me.

Siri and Alexa don't have bodies, so would certainly struggle to fire a gun. But even framing our fears about AI in those terms exposes our difficulty in looking rationally at AI's promise and pitfalls. We continually conflate AI with robots – especially of the evil Terminator kind.

Real AI is software that runs on computers inside big metal boxes, honing its responses by crunching data from all Alexa's interactions with users, say. It couldn't wield a laser cannon even if one were carelessly left inside the entrance to the server farm. It cares about one thing, and one thing alone: data.

“AI in its current version is about statistical machine learning, often from crowdsourced data,” says Ross Anderson at the University of Cambridge. This type of AI processes available information, identifying patterns in it, and assesses their relevance to goals defined by a human creator: setting someone's insurance premium, say, or curating a Facebook feed and populating it with ads. The system's response provides feedback on the AI's action, which the AI uses to do a better job next time – perhaps just a microsecond later.
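To make that loop concrete, here is a toy sketch in Python – my own illustration, not any company's actual system – of a click-prediction model that nudges its internal weights after every piece of feedback:

```python
# Toy illustration of the predict-feedback-adjust loop:
# a tiny logistic-regression ad model learning from clicks.
import math
import random

weights = [0.0, 0.0, 0.0]  # one weight per feature of an ad/user pair


def predict(features):
    """Estimated probability that the user clicks."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))


def update(features, clicked, lr=0.1):
    """Nudge the weights so the next prediction is a little better."""
    error = clicked - predict(features)  # the feedback signal
    for i, x in enumerate(features):
        weights[i] += lr * error * x


# Each served ad yields feedback - click or no click - moments later.
for _ in range(1000):
    features = [random.random() for _ in weights]
    clicked = 1 if random.random() < features[0] else 0  # hidden "truth"
    update(features, clicked)

print(weights)  # the first weight grows: the model has found the pattern
```

Run long enough, the weights settle on whatever pattern links features to clicks. That is the whole trick: prediction, feedback, adjustment, repeated millions of times.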

If that sounds boring, it is. But for boring tasks, AI is useful. Siting those adverts on your Facebook timeline is not something a human could do well, even if they wanted to.

“Siri, are you cleverer than me?” “Hmmm, that's something I don't know.”

Astonishing – Siri should know the answer to that. You and I are far cleverer than any AI.

Even “machine learning” seems a bit of a misnomer for what AI does. The algorithms “learn” by altering their data-processing routines in ways that get a better result, given the goal. They don't “know” anything afterwards in the way that you (hopefully) know more now than you did five minutes ago. Nor can they deliberately forget or accidentally misremember that knowledge as you can, or apply it in any way you choose – to inform someone else, make yourself look clever, or even just to decide you know enough to stop reading this article right now and go do something more interesting.

Humans have “general intelligence”, meaning we can apply learned knowledge and skills in many situations and environments. Google DeepMind's AlphaGo can beat the world human champion in a game of Go, but can't drive a car or beat me at a general knowledge quiz or Scrabble. It has “weak” intelligence: the ability to do one thing really well. It couldn't even write this article.

“Siri, would you like to be a journalist?” “This is about you, Michael, not me.”

One thing about AI worries more people than any other: that it might be after their job. A survey in 2016 found that 82 per cent of people believe that AI will lead to job losses. Automation angst has increased in recent years, but so far automation has mainly affected blue-collar jobs. Now white-collar workers worry that AI will move on from curating Facebook feeds and begin to displace accountants, surgeons, financial analysts, legal clerks – and journalists.

Economist David Autor, at the Massachusetts Institute of Technology, has suggested that AI will work alongside all but the most unskilled workers, not without them. In medicine, for instance, AI tools are certainly making impressive forays. Machine learning algorithms can be better than heart surgeons at predicting the risk of a heart attack. But diagnostic AIs still make mistakes – just different ones from humans, suggesting that pooling human and artificial intelligence might create a significantly better future.
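A toy simulation shows why pooling pays when the mistakes differ. Purely for illustration, assume a surgeon and two models that are each right 80 per cent of the time and err independently; a simple majority vote then does noticeably better than any of them alone:

```python
# Toy illustration: three diagnosticians with *independent* errors,
# pooled by majority vote, beat any single one of them.
import random


def diagnose(truth, accuracy=0.8):
    """One diagnostician: right with probability `accuracy`."""
    return truth if random.random() < accuracy else 1 - truth


trials = 100_000
pooled = solo = 0
for _ in range(trials):
    truth = random.randint(0, 1)  # does the patient have the condition?
    votes = [diagnose(truth) for _ in range(3)]  # surgeon + two models
    majority = 1 if sum(votes) >= 2 else 0
    pooled += (majority == truth)
    solo += (votes[0] == truth)

print(f"solo accuracy:   {solo / trials:.3f}")    # ~0.80
print(f"pooled accuracy: {pooled / trials:.3f}")  # ~0.896 = 0.8^3 + 3(0.8^2)(0.2)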

AI researchers are only too aware of the struggle ahead: getting people to react appropriately to the reality of artificial intelligence, rather than the myth. AIs will only ever be as good, or bad, as the people and the societies that program them. We must demand accountability of AI and find ways to deliver it.

That, and norms about how much personal, private data it is acceptable to feed them. “Now we live in a world where our own personal information is used and traded and mined for value. We should ask questions about where we want to draw the line,” says Nello Cristianini, an AI researcher at the University of Bristol.

And that's the dull truth: neither the Hawking-Musk doomsday line, nor the Zuckerberg it'll-all-work-out-just-fine line. We shouldn't fear all-out war with the machines, but neither should we be lulled by their apparent inoffensive competence. There are indeed legitimate questions we should ask of AI.

Michael Brooks, 2017 Tribune Content Agency. Oct 10, 2017: Mirror (Mumbai)

