
The rise of ‘pseudo-AI’: how tech firms quietly use humans to do bots’ work

It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said.

This practice was brought to the fore this week in a Wall Street Journal article highlighting the hundreds of third-party app developers that Google allows to access people’s inboxes.

In the case of the San Jose-based company Edison Software, artificial intelligence engineers went through the personal email messages of hundreds of users – with their identities redacted – to improve a “smart replies” feature. The company’s privacy policy did not mention that humans would read users’ emails.

The third parties highlighted in the WSJ article are far from the first to rely on the practice. In 2008, SpinVox, a company that converted voicemails into text messages, was accused of using humans in overseas call centres rather than machines to do its work.

In 2016, Bloomberg highlighted the plight of the humans spending 12 hours a day pretending to be chatbots for calendar scheduling services such as X.ai and Clara. The job was so mind-numbing that human employees said they were looking forward to being replaced by bots.

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

“I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”
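Posting such work to Mechanical Turk takes only a few API calls. The sketch below, using Amazon’s boto3 SDK, shows roughly how a receipt image could be farmed out for human transcription; the form URL, reward and other parameters are illustrative assumptions, and none of this is Expensify’s actual pipeline.

```python
import boto3

# Sketch only: how any company could post a receipt to Mechanical Turk
# for human transcription. All parameters are illustrative assumptions.
mturk = boto3.client("mturk", region_name="us-east-1")

# An ExternalQuestion points workers at a requester-hosted form
# (a hypothetical URL here) that displays the receipt image.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/transcribe?receipt=r-12345</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Transcribe a receipt",
    Description="Type the merchant, date and total from a receipt image.",
    Keywords="transcription, receipt, data entry",
    Reward="0.05",                    # in US dollars, passed as a string
    MaxAssignments=1,                 # one worker per receipt
    AssignmentDurationInSeconds=300,  # five minutes to finish the task
    LifetimeInSeconds=86400,          # the task stays listed for a day
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```

At a few cents per task, this is part of why human transcription can undercut the cost of building a reliable OCR pipeline.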

Even Facebook, which has invested heavily in AI, relied on humans to power M, its virtual assistant for Messenger.

In some cases, humans are used to train the AI system and improve its accuracy. A company called Scale offers a bank of human workers to provide training data for self-driving cars and other AI-powered systems. “Scalers” will, for example, look at camera or sensor feeds and label cars, pedestrians and cyclists in the frame. With enough of this human calibration, the AI will learn to recognise these objects itself.
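Concretely, the output of that labelling work is nothing more exotic than structured annotations attached to each frame. Below is a minimal sketch of what one labelled frame might look like; the field names are hypothetical, not Scale’s actual API.

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    label: str   # e.g. "car", "pedestrian", "cyclist"
    x: int       # top-left corner of the box, in pixels
    y: int
    width: int
    height: int

@dataclass
class LabelledFrame:
    frame_id: str             # which camera or sensor frame this is
    annotator_id: str         # the human worker who drew the boxes
    boxes: list[BoundingBox]  # one box per object the worker found

# One worker's output for one frame becomes one training example.
frame = LabelledFrame(
    frame_id="cam0_000142",
    annotator_id="worker_7",
    boxes=[
        BoundingBox("car", 312, 180, 96, 54),
        BoundingBox("cyclist", 508, 201, 30, 62),
    ],
)
```

Millions of such records, aggregated across workers, form the supervised training set from which the model learns to draw the boxes itself.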

In other cases, companies fake it until they make it, telling investors and users they have developed a scalable AI technology while secretly relying on human intelligence.

Alison Darcy, a psychologist and founder of Woebot, a mental health support chatbot, describes this as the “Wizard of Oz design technique”.

“You simulate what the ultimate experience of something is going to be. And a lot of time when it comes to AI, there is a person behind the curtain rather than an algorithm,” she said, adding that building a good AI system required a “ton of data” and that sometimes designers wanted to know if there was sufficient demand for a service before making the investment.
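Mechanically, a Wizard of Oz prototype is just a routing layer: the front end looks like an AI endpoint, but every request lands in a human operator’s queue. Here is a minimal sketch of the pattern, with all names hypothetical.

```python
import queue

# Requests waiting for a human, each paired with a slot for the reply.
operator_inbox: queue.Queue = queue.Queue()

def smart_reply(user_message: str) -> str:
    """The endpoint users see; it looks automated."""
    reply_slot: queue.Queue = queue.Queue(maxsize=1)
    operator_inbox.put((user_message, reply_slot))  # hand off to a human
    return reply_slot.get()  # block until the operator answers

def operator_loop() -> None:
    """What a human operator runs, out of the user's sight."""
    while True:
        message, reply_slot = operator_inbox.get()
        answer = input(f"User said {message!r}. Your reply: ")
        reply_slot.put(answer)
```

Run the operator loop in a separate thread and the user-facing API never changes; “prototyping the AI with human beings” then amounts to swapping smart_reply’s human hand-off for a model call once enough data has accumulated.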

This approach was not appropriate in the case of a psychological support service like Woebot, she said.

“As psychologists we are guided by a code of ethics. Not deceiving people is very clearly one of those ethical principles.”

Research has shown that people tend to disclose more when they think they are talking to a machine, rather than a person, because of the stigma associated with seeking help for one’s mental health.

A team from the University of Southern California tested this with a virtual therapist called Ellie. They found that veterans with post-traumatic stress disorder were more likely to divulge their symptoms when they knew that Ellie was an AI system versus when they were told there was a human operating the machine.

Others think companies should always be transparent about how their services operate.

“I don’t like it,” said LaPlante of companies that pretend to offer AI-powered services but actually employ humans. “It feels dishonest and deceptive to me, neither of which is something I’d want from a business I’m using.

“And on the worker side, it feels like we’re being pushed behind a curtain. I don’t like my labour being used by a company that will turn around and lie to their customers about what’s really happening.”

This ethical quandary also raises its head with AI systems that pretend to be human. One recent example of this is Google Duplex, a robot assistant that makes eerily lifelike phone calls complete with “ums” and “ers” to book appointments and make reservations.

After an initial backlash, Google said its AI would identify itself to the humans it spoke to.

“In their demo version, it feels marginally deceptive in a low-impact conversation,” said Darcy. Although booking a table at a restaurant might seem like a low-stakes interaction, the same technology could be much more manipulative in the wrong hands.

What would happen if you could make lifelike calls simulating the voice of a celebrity or politician, for example?

“There’s already major fear around AI and it’s not really helping the conversation when there’s a lack of transparency,” Darcy said.

from: https://www.theguardian.com/technology/2018/jul/06/artificial-intelligence-ai-humans-bots-tech-companies

Being human: how realistic do we want robots to be?

With Google’s AI assistant able to make phone calls and androids populating households in games and films, the line between machine and man is getting scarily blurred.

Unnerving … an android detective in Detroit: Become Human. Photograph: Sony

As our dependence on technology builds and the privacy-destroying, brain-hacking consequences of that start to come to light, we are seeing the return of a science-fiction trope: the rise of the robots. A new wave of television shows, films and video games is grappling with the question of what will happen if we develop the technology to create machines in our own image.

Westworld posits that if we could develop realistic androids, we would want to rape and murder them for fun. In Blade Runner 2049, they have replaced humans as sex workers and manual labourers. In the recently released video game Detroit: Become Human, androids are nannies, carers and even pop stars, omnipresent in the home and in city life.

The current wave of android fiction centres on what happens when the line between human and machine becomes blurred. At what point do robots deserve rights: when they reach a certain level of intelligence, or when they develop the capacity for emotion, creativity or free will?

In the cold war, when we believed that machines might kill us any minute in the shape of nuclear bombs, our nightmare robots were relentless killing machines such as The Terminator or RoboCop – or the pitiless military droids that hunt down the last remnants of humanity in Metalhead, a recent episode of Black Mirror. Now that technology has enmeshed itself in our lives, it is dawning on us that machines can take over in another way – by encroaching on our humanity.

Omnipresent … in Detroit: Become Human androids are nannies, carers and pop stars. Photograph: Quantic Dream/Sony

Just a few weeks ago, Google demonstrated that its home-assistant robot is capable of holding an unsettlingly natural conversation with a human being over the phone to book a haircut or make a restaurant reservation, complete with “ums” and “ahs” to make the listener believe they are talking to a real person.

We are increasingly worried about what will happen if machines become just like us. Adam Williams, lead writer on the game Detroit: Become Human, thinks that the development of human-like emotion is more unsettling than the idea of straightforward robot antagonism. “It’s a more subtle threat to the sanctity of the human category,” he says. “Emotion is something we reserve for ourselves: depth of feeling is what we use to justify the primacy of human life. If a machine is capable of feeling, that doesn’t make it dangerous in a Terminator-esque fashion, but in the abstract sense of impinging on what we think of as classically human.”

In the game, household androids that have been mistreated by humans start rebelling, eventually banding together to demand rights. It is not an original premise, but video games now look so lifelike that it is a good litmus test for how comfortable you feel with the idea of a human-like android. The game’s characters, played by human actors, look almost indistinguishable from real people.

Anouk van Maris, a robot cognition specialist who is researching ethical human-robot interaction, has found that comfort levels with robots vary greatly depending on location and culture. “It depends on what you expect from it. Some people love it, others want to run away as soon as it starts moving,” she says. “The advantage of a robot that looks human-like is that people feel more comfortable with it being close to them, and it is easier to communicate with it. The big disadvantage is that you expect it to be able to do human things and it often can’t.”

Azuma … a holographic home assistant by Japanese firm Gatebox AI.

In Japan, where animist beliefs perhaps make people more comfortable with the idea that a spirit can reside in something that isn’t human, robots are already being used as shop assistants, in care homes and in schools. Japan is the world leader in robotics and demand is high for robots that could help fill a shortfall in nursing care. The country is home to the creepy Erica, one of the most realistic female humanoid robots yet built, and Gatebox AI’s Azuma, a holographic girl in a jar that combines Alexa-like home-assistant functionality with a cute anime look and a simulated, deferential personality.

In Europe, by contrast, people are generally uncomfortable with the idea of an android performing roles that require interaction with humans. “In one study, people were asked if there was a robot interacting with children, whether it would be ethically acceptable if the children got attached to that robot,” says Van Maris. “Only 40% thought that was acceptable.” It is telling that US companies design their home-assistant robots to look like black boxes and sound like computers.

“A machine can exhibit human-like qualities and not be considered particularly controversial if it doesn’t look human,” says Williams. “That’s what is intriguing. What scares people about that Google Assistant phone call is that it sounds human. The fact that it can construct the conversation is not what scares people – it’s the fact they can’t distinguish it from a real person.”

Blurring the line … Ex Machina. Photograph: Allstar/Film 4

Some robotics experts, including the University of Edinburgh’s Robert Fisher, see the concept of human-like robots as ill-advised. “I don’t think artificial intelligence will ever be like humans,” Fisher says. “We put ourselves and them in a difficult situation by trying to pretend they are human, or make them look like us. Maybe it is better not to do that in the first place. Sex robots is perhaps the only case where there is a reason for them to look human.”

On the evidence of Westworld, Detroit: Become Human and Ex Machina – none of which paints the most optimistic portrait of human-android relations – perhaps we will all be better off if our future robot assistants are more like Wall-E or Star Wars’ R2-D2 than Star Trek’s Data or Blade Runner’s Pris.

from: https://www.theguardian.com/technology/2018/jun/27/being-human-realistic-robots-google-assistant-androids