Artificial intelligence is expected to be an increasingly important policy issue. (Bigstock Illustration / Andrey Popov)
President Donald Trump today signed an executive order that puts the White House Office of Management and Budget in charge of drawing up a roadmap for how federal agencies use artificial intelligence software.
The roadmap, due for publication in 180 days, will cover AI applications used by the federal government for purposes other than defense or national security. The Department of Defense and the U.S. intelligence community already have drawn up a different set of rules for using AI.
Today’s order could well be the Trump administration’s final word on a technology marked by rapid innovation — and more than a little controversy.
Future regulations could have an outsized impact on Amazon and Microsoft, two of the largest developers of AI technologies. The sharpest debates have focused on facial recognition software, but there are also issues relating to algorithmic bias, data privacy and transparency.
Melissa McCarthy and Bobby Cannavale star in “Superintelligence.” (New Line Cinema / HBO Max / WarnerMedia Photo)
Seattle, Microsoft and the field of artificial intelligence come in for their share of the spotlight in “Superintelligence” — an HBO Max movie starring Melissa McCarthy as the rom-com heroine, and comedian James Corden as the world’s new disembodied AI overlord.
But how much substance is there behind the spotlight? Although the action is set in Seattle, much of the principal filming was actually done in Georgia. And the scientific basis of the plot — which involves an AI trying to decide whether or not to destroy the planet — is, shall we say, debatable.
Fortunately, we have the perfect team to put “Superintelligence” to the test, as a set-in-Seattle movie as well as a guide to the capabilities of artificial intelligence.
An ethicist says there shouldn’t be any stigma about senior sex with robots. (Bigstock Photo / Digitalista)
Are sex robots just what the doctor ordered for the over-65 set?
In a newly published research paper, a bioethicist at the University of Washington argues that older people, particularly those who are disabled or socially isolated, are an overlooked market for intimate robotic companionship — and that there shouldn’t be any shame over seeking it out.
To argue otherwise would be a form of ageism, says Nancy Jecker, a professor of bioethics and humanities at the UW School of Medicine.
“Designing and marketing sex robots for older, disabled people would represent a sea change from current practice,” she said today in a news release. “The reason to do it is to support human dignity and to take seriously the claims of those whose sexuality is diminished by disability or isolation. Society needs to make reasonable efforts to help them.”
Jecker’s argument, laid out in the Journal of Medical Ethics, reawakens a debate that has raged at least since a bosomy robot made her debut in Fritz Lang’s 1927 film, “Metropolis.” In a 2007 book titled “Love and Sex With Robots,” computer chess pioneer David Levy argued that robot sex would become routine by 2050.
Over the past decade or so, the sex robot trade has advanced somewhat, with computerized dolls that would typically appeal to randy guys. At the same time, researchers have acknowledged that demographic trends mean the world’s growing over-65 population may well need to turn to robotic caregivers and companions.
Jecker says sex should be part of the equation for those robots — especially when human-to-human sex is more difficult due to disabilities, or the mere fact that an older person’s parts don’t work as well as they once did. Manufacturers should think about tailoring robot partners for an older person’s tastes, she says.
How much would it take to raise a robot butler? (Shade Lite / Bigstock.com Illustration)
What rights does a robot have? If our machines become intelligent in the science-fiction way, that’s likely to become a complicated question — and the humans who nurture those robots just might take their side.
Ted Chiang, a science-fiction author of growing renown with long-lasting connections to Seattle’s tech community, doesn’t back away from such questions. They spark the thought experiments that generate award-winning novellas like “The Lifecycle of Software Objects,” and inspire Hollywood movies like “Arrival.”
Can science fiction have an impact in the real world, even at times when the world seems as if it’s in the midst of a slow-moving disaster movie? Absolutely, Chiang says.
“Art is one way to make sense of a world which, on its own, does not make sense,” he says in the latest episode of our Fiction Science podcast, which focuses on the intersection between science and fiction. “Art can impose a kind of order onto things. … It doesn’t offer a cure-all, because I don’t think there’s going to be any easy cure-all, but I think art helps us get by in these stressful times.”
COVID-19 provides one illustration. Chiang would argue that our response to the coronavirus pandemic has been problematic in part because it doesn’t match what we’ve seen in sci-fi movies.
“The greatest conflict that we see generated is from people who don’t believe in it vs. everyone else,” he said. “That might be the product of the fact that it is not as severe. If it looked like various movie pandemics, it’d probably be hard for anyone to deny that it was happening.”
This pandemic may well spark a new kind of sci-fi theme.
“It’s worth thinking about, that traditional depictions of pandemics don’t spend much time on people coming together and trying to support each other,” Chiang said. “That is not typically a theme in stories about disaster or enormous crisis. I guess the narrative is usually, ‘It’s the end of civilization.’ And people have not turned on each other in that way.”
Artificial intelligence is another field where science fiction often gives people the wrong idea. “When we talk about AI in science fiction, we’re talking about something very different than what we mean when we say AI in the context of current technology,” Chiang said.
Chiang isn’t speaking here merely as an author of short stories, but as someone who joined the Seattle tech community three decades ago to work at Microsoft as a technical writer. During his first days in Seattle, his participation in 1989’s Clarion West Science Fiction and Fantasy Writers’ Workshop helped launch his second career as a fiction writer.
In our interview, Chiang didn’t want to say much about the technical-writing side of his career, but his expertise showed through in our discussion about real AI vs. sci-fi AI.
“When people talk about AI in the real world … they’re talking about a certain type of software that is usually like a superpowered version of applied statistics,” Chiang said.
That’s a far cry from the software-enhanced supervillains of movies like “Terminator” or “The Matrix,” or the somewhat more sympathetic characters in shows like “Westworld” and “Humans.”
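Chiang’s “superpowered version of applied statistics” description can be made concrete with a toy sketch. The code below is my illustration, not Chiang’s: it fits a straight line to data points by ordinary least squares, the same learn-a-pattern-from-data idea that today’s far larger machine-learning models scale up.

```python
# A toy illustration of the "applied statistics" view of real-world AI:
# even a one-variable least-squares fit "learns" a pattern from data,
# in the same spirit as much larger machine-learning models.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Train" on points that lie on y = 2x + 1, then "predict" a new value.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(slope * 10 + intercept)  # prints 21.0
```

The “superpowered” part is scale: modern systems fit millions of parameters to millions of examples, but the underlying move is the same.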
In Chiang’s view, most depictions of sci-fi AI fall short even by science-fiction standards.
“A lot of stories imagine something which is a product like a robot that comes in a box, and you flip it on, and suddenly you have a butler — a perfectly competent and loyal and obedient butler,” he noted. “That, I think, jumps over all these steps, because butlers don’t just happen.”
In “The Lifecycle of Software Objects,” Chiang imagines a world in which it takes just as long to raise a robot as it does to raise a child. That thought experiment sparks all kinds of interesting all-too-human questions: What if the people who raise such robots want them to be something more than butlers? Would they stand by and let their sci-fi robot progeny be treated like slaves, even like sex slaves?
“Maybe they want that robot, or conscious software, to have some kind of autonomy,” Chiang said. “To have a good life.”
Chiang’s latest collection of short stories, “Exhalation,” extends those kinds of thought experiments to science-fiction standbys ranging from free will to the search for extraterrestrial intelligence.
Both those subjects come into play in what’s certainly Chiang’s best-known novella, “Story of Your Life,” which was first published in 1998 and adapted to produce the screenplay for “Arrival” in 2016. Like so many of Chiang’s other stories, “Story of Your Life” takes an oft-used science-fiction trope — in this case, first contact with intelligent aliens — and adds an unexpected but insightful and heart-rending twist.
“Exhalation” is the latest collection of Ted Chiang’s science-fiction short stories. (Knopf Doubleday)
Chiang said that the success of the novella and the movie hasn’t led to particularly dramatic changes in the story of his own life, but that it has broadened the audience for the kinds of stories he tells.
“My work has been read by people who would not describe themselves as science-fiction readers, by people who don’t usually read a lot of science fiction, and that’s been amazing. That’s been really gratifying,” he said. “It’s not something that I ever really expected.”
During our podcast chat, Chiang indulged in yet another thought experiment: Could AI replace science-fiction writers?
Chiang’s answer? It depends.
“If we could get software-generated novels that were coherent, but not necessarily particularly good, I think there would be a market for them,” he said.
But Chiang doesn’t think that would doom human authors.
“For an AI to generate a novel that you think of as really good, that you feel like, ‘Oh, wow, this novel was both gripping and caused me to think about my life in a new way’ — that, I think, is going to be very, very hard,” he said.
Ted Chiang only makes it look easy.
Cosmic Log Used Book Club
So what’s Chiang reading? It’s definitely not an AI-generated novel.
“I recently enjoyed the novel ‘The Devourers’ by Indra Das,” Chiang said. “It’s a novel about — you might call them werewolves, or maybe just ‘shape-shifter’ would be a more accurate term. But it’s about shape-shifters or werewolves in pre-colonial India, in medieval India. It’s a setting that I haven’t seen a lot of in fiction, and really, it’s an interesting take on the werewolf or shape-shifter mythos.”
“The Devourers” by Indra Das is set in India. (Del Rey)
Based on that recommendation, we’re designating “The Devourers” as November’s selection for the Cosmic Log Used Book Club. Since 2002, the CLUB Club has recognized books with cosmic themes that could well be available at your local library or used-book store.
“I had been very skeptical about the idea of a TV series that was going to be a sequel to ‘Watchmen,’ ” Chiang said. “When I first heard about it, I thought, ‘That sounds like a bad idea.’ But I heard good things about it, and I gave it a try, and it surprised me with how interesting it was. For people who haven’t seen that, I definitely recommend checking it out.”
Ted Chiang and other Arthur C. Clarke Foundation awardees will take part in the 2020 Clarke Conversation on Imagination at 9 a.m. PT Nov. 12. Register via the foundation’s website and Eventbrite to get in on the interactive video event.
Neuralink co-founder Elon Musk holds a brain implant device between his fingers. (Neuralink via YouTube)
With grudging assistance from a trio of pigs, Neuralink co-founder Elon Musk showed off the startup’s state-of-the-art neuron-reading brain implant and announced that the system has received the Food and Drug Administration’s preliminary blessing as an experimental medical device.
During today’s demonstration at Neuralink’s headquarters in Fremont, Calif., it took a few minutes for wranglers to get the swine into their proper positions for what Musk called his “Three Little Pigs demonstration.”
One of the pigs was in her natural state, and roamed unremarkably around her straw-covered pen. Musk said the second pig had been given a brain implant that was later removed, showing that the operation could be reversed safely.
After some difficulty, a third pig named Gertrude was brought into her pen. As she rooted around in the straw, a sequence of jazzy electronic beeps played through the sound system. Musk said the tones were sounded whenever nerves in the pig’s snout triggered electrical impulses that were picked up by her brain implant.
“The beeps you’re hearing are real-time signals from the Neuralink in Gertrude’s head,” he said.
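The kind of real-time sonification heard in the demo can be sketched in a few lines. The mapping below is invented for illustration — Neuralink hasn’t published its method — but it shows one simple way electrode readings could be turned into beeps: flag readings that cross a spike threshold and pitch the tone by spike strength.

```python
# Hypothetical sketch of spike-to-tone sonification. The threshold and
# pitch-scaling values here are invented for illustration only.

def spikes_to_tones(voltages, threshold=0.5, base_hz=440.0, step_hz=40.0):
    """Map a stream of electrode readings to beep frequencies.

    Readings at or above the threshold count as spikes; stronger spikes
    map to higher-pitched tones.
    """
    tones = []
    for v in voltages:
        if v >= threshold:
            # Scale pitch with how far the reading exceeds the threshold.
            tones.append(base_hz + step_hz * round((v - threshold) * 10))
    return tones

# Three readings, two of which cross the spike threshold.
print(spikes_to_tones([0.2, 0.6, 0.9]))
```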
Eventually, Neuralink’s team plans to place the implants in people, initially to see if those who have become paralyzed due to spinal cord injuries can regain motor functions through thought alone.
Musk said the implant received a Breakthrough Device designation from the FDA last month. That doesn’t yet clear the way for human clinical trials, but it does put Neuralink on the fast track for consultation with the FDA’s experts during preparations for such trials.
Neuralink has received more than $150 million in funding, with roughly two-thirds of that support coming from Musk himself. Today he said the venture had about 100 employees. He expects that number to grow. “Over time, there might be 10,000 or more people at Neuralink,” he said.
CSIRO Data61’s Brett Wood checks the team’s Titan robot and piggyback drone just before a robot run in the Urban Circuit of DARPA’s Subterranean Challenge. (GeekWire Photo / Alan Boyle)
SATSOP, Wash. — Amid the ruins of what was meant to be a nuclear power plant, a robot catches a whiff of carbon dioxide — and hundreds of feet away, its master perks up his ears.
“I think I’ve got gas sensing,” Fletcher Talbot, the designated human operator for Team CSIRO Data61 in DARPA’s Subterranean Challenge, told teammates who were bunkered with him in the bowels of the Satsop nuclear reactor site near Elma.
Moments after Talbot fed the coordinates into a computer, a point appeared on the video scoreboard mounted on a wall of the bunker. “Hey, nice,” one member of the team said, and the whole squad broke into a short burst of applause.
Then it was back to the hunt.
The robot’s discovery marked one small step in the Subterranean Challenge, a multimillion-dollar competition aimed at promoting the development of autonomous robots to seek out and identify victims amid the rubble of an urban disaster area, or hazards hidden in the alleys of a hostile cityscape.
White House chief technology officer Michael Kratsios speaks at last year’s Web Summit in Portugal. (Web Summit via YouTube)
One year after the White House kicked off the American AI Initiative, its effects on research and development in the burgeoning field of artificial intelligence are just beginning to sink in.
And Michael Kratsios, the White House’s chief technology officer, says those effects are sure to be felt in Seattle — where industry leaders including Amazon and Microsoft, and leading research institutions including the University of Washington and the Allen Institute for Artificial Intelligence, are expanding the AI frontier.
“We made the big step of announcing, a couple of weeks ago, a doubling of AI R&D over two years,” he told me this week in an interview marking the anniversary of the American AI Initiative.
Eric Horvitz, the director of Microsoft Research Labs, discusses trends in artificial intelligence during the annual meeting of the American Association for the Advancement of Science in Seattle. (GeekWire Photo / Alan Boyle)
Artificial intelligence is often portrayed as a rising competitor for human intelligence, in settings ranging from human-vs.-machine card games to the “Terminator” movie series. But according to Eric Horvitz, the director of Microsoft Research Labs, the hottest trends in AI have more to do with creating synergies between humans and machines.
The RoboTHOR 2020 Challenge will test how well computer models for visual identification and navigation translate into real-world robotic performance. (AI2 Illustration / Winson Han)
Computer vision and navigation have improved by leaps and bounds, thanks to artificial intelligence, but how well do the computer models work in the real world?
That’s the challenge that Seattle’s Allen Institute for Artificial Intelligence is setting for AI researchers over the next few months, with geek fame and glory as the prize.
Ani Kembhavi, a research scientist at AI2, says RoboTHOR focuses on the next step. “If you can train a deep-learning, computer vision model to do something in an embodied environment … how well would this model work when deployed in an actual robot?” he told GeekWire.
A diagram from Amazon’s patent application shows a customer issuing a command to open up one of the doors on a storage compartment vehicle. (Amazon Illustration via USPTO)
Amazon is already testing robots that deliver packages, but a newly issued patent covers a far more ambitious scheme, involving storage compartment vehicles that can roam the sidewalks to make multiple deliveries along their routes.
If the plan is fully implemented, it could address the “last mile” or “final 50 feet” challenge for delivery systems by having customers come out to the sidewalk, tap the required security code on their smartphones, and open up the right doors to grab the items they’ve ordered.
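The compartment-access flow described in the patent application can be sketched simply. Everything below — the class name, the fields, the single-use-code policy — is a hypothetical illustration of the concept, not Amazon’s implementation:

```python
# Hypothetical sketch of the patent's "last 50 feet" flow: a customer taps
# a security code, and the vehicle opens only the compartment holding that
# customer's order. All names and policies here are invented for illustration.

class StorageCompartmentVehicle:
    def __init__(self):
        # Map each one-time security code to the compartment door it unlocks.
        self._codes = {}

    def load(self, door_id, security_code):
        """Stock a compartment and register the code that opens it."""
        self._codes[security_code] = door_id

    def open_door(self, security_code):
        """Return the matching door for a valid code; None otherwise."""
        # pop() makes each code single-use: a second attempt finds nothing.
        return self._codes.pop(security_code, None)

scv = StorageCompartmentVehicle()
scv.load(door_id=3, security_code="824197")
print(scv.open_door("824197"))  # valid code opens door 3
print(scv.open_door("824197"))  # reusing the code fails
```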
There’s no guarantee that we’ll see treaded SCVs roaming the sidewalks anytime soon. Amazon says its patent applications explore the full possibilities of new technologies — but those inventions don’t always get turned into new products and services as described in the applications. Sometimes the inventions never see the light of day. (Just ask Jeff Bezos about the airbag-cushioned smartphone he invented.)