AI is moving too fast, and that's a good thing

By John Pavlus | 6 minute read

2019 was a great year for seeing what AI could do. Waymo deployed self-driving taxis to actual paying customers in Arizona. Bots from OpenAI and DeepMind beat the top professionals in two major esports games. A deep-learning algorithm performed as well as doctors—and sometimes better—at spotting lung cancer tumors in medical imaging.
But as for what AI should do, 2019 was terrible. Amazon’s facial recognition software? Racist, according to MIT researchers, who reported that the tech giant’s algorithms misidentify nearly a third of dark-skinned women’s faces (while demonstrating near-perfect accuracy for light-skinned men’s). Emotion detection, used by companies such as WeSee and HireVue to perform threat assessments and screen job applicants? Hogwash, says the Association for Psychological Science. Even the wonky field of natural language processing took a hit, when a state-of-the-art system called GPT-2—capable of generating hundreds of words of convincing text after only a few phrases of prompting—was deemed too risky to release by its own creators, OpenAI, which feared it could be used “maliciously” to propagate fake news, hate speech, or worse.
2019, in other words, was the year that two things became unavoidably clear about the rocket ship of innovation called artificial intelligence. One: It’s accelerating faster than most of us expected. Two: It’s got some serious screws loose.
That’s a scary realization to have, given that we’re collectively strapped into this rocket instead of watching it from a safe distance. But AI’s anxiety-inducing progress has an upside: For perhaps the first time, the unintended consequences of a disruptive technology are visible in the moment, instead of years or even decades later. And that means that while we may be moving too quickly for comfort, we can actually grab the throttle—and steer.
[Illustration: Harry Campbell]

It’s easy to forget that before 2012, the technology we now call AI—deep learning with artificial neural networks—for all practical purposes didn’t exist. The concept of using layers of digital connections (organized in a crude approximation of biological brain tissue) to learn pattern-recognition tasks was decades old, but largely stuck in an academic rut. Then, in September 2012, a neural network designed by students of University of Toronto professor and future “godfather of deep learning” Geoffrey Hinton unexpectedly smashed records on a highly regarded computer-vision challenge called ImageNet. The test asks software to correctly identify the content of millions of images: say, a picture of a parrot, or a guitar. The students’ neural net made half as many errors as the runner-up.
Suddenly, deep learning “worked.” Within five years, Google and Microsoft had hired scores of deep-learning experts and were dubbing themselves “AI first” companies. And it wasn’t just Big Tech: A 2018 global survey of more than 2,000 companies by consulting firm McKinsey found that more than three-quarters of them had already incorporated AI or were undergoing pilot programs to do so.
It took modern smartphones 10 years to “eat the world,” as Andreessen Horowitz analyst Benedict Evans famously put it; the web, about 20. But in just five years, AI has gone from laboratory curiosity to economic force—contributing an estimated $2 trillion to global GDP in 2017 and 2018, according to accounting firm PricewaterhouseCoopers. “We’re certainly working on a compressed time cycle when it comes to the speed of AI evolution,” says R. David Edelman, a former technology adviser to President Barack Obama who now leads AI policy research at MIT.
This unprecedented pace has helped drive both the advances and the agita around artificial intelligence. Between 54% and 75% of the general public, according to a 2019 survey of Americans by the global marketing consultancy Edelman in collaboration with the World Economic Forum, believes that AI will hurt the poor and benefit the wealthy, increase social isolation, and lead to a “loss of human intellectual capabilities.” A third of respondents even think that deepfakes—creepily convincing phony videos of celebrities, government officials, or everyday people, generated by deep-learning networks—could contribute to “an information war that, in turn, might lead to a shooting war.”
So how should society respond, without resorting to what MIT’s Edelman (no relation to the consultancy) calls “latter-day Luddism”? After all, a smash-the-looms approach didn’t work for the actual Luddites, who tried it during the Industrial Revolution. But the opposite—a blind faith that innovation will eventually work its own kinks out—won’t do either. (Exhibit A: the entire planet, after 100 years of carbon-spewing vehicles.)
This is where the rocket-ship throttle comes into play. The standard timeline of technological innovation follows what’s known as an “S curve”: a slow start, followed by a rising slope as the tech catches ...