A newly published report on the state of artificial intelligence says the field has reached a turning point where attention must be paid to the everyday applications of AI technology, and to the ways in which that technology is being abused.
The project’s first report, published in 2016, downplayed concerns that AI would lead to a Terminator-style rise of the machines and warned that fear and suspicion about AI would impede efforts to ensure the safety and reliability of AI technologies. At the same time, it acknowledged that the effects of AI and automation could lead to social disruption.
WellSaid Labs will have a lot more to say in the years ahead, thanks to $10 million in new investment that’ll be used to beef up the Seattle startup’s efforts to put a widening chorus of AI-generated synthetic voices to work.
ManipulaTHOR adds a highly articulated robotic arm to the institute’s AI2-THOR artificial intelligence platform — which should provide lots more capability for testing the software for robots even before they’re built.
AI2-THOR was programmed to find its way through virtual versions of indoor environments, such as kitchens and bathrooms. It could use computer vision to locate everyday objects, but the model didn’t delve deeply into the mechanics of moving those objects. Instead, it just levitated them, as if by video-game magic.
Now AI2-THOR is getting real.
“Imagine a robot being able to navigate a kitchen, open a refrigerator and pull out a can of soda,” AI2 CEO Oren Etzioni said in a news release. “This is one of the biggest and yet often overlooked challenges in robotics, and AI2-THOR is the first to design a benchmark for the task of moving objects to various locations in virtual rooms, enabling reproducibility and measuring progress.”
Two and a half years after the death of Microsoft co-founder Paul Allen, his legacy in science and philanthropy is still being reshaped — and this time, the reshaping involves two of his deepest passions: conservation and computation.
Some aspects of that reorganization have stirred controversy, but Hilf said the transition to an expanded AI2 should be straightforward.
“All of the AI products and the teams that are currently managed by Vulcan will transfer in to that new entity and expand the mission of AI2,” he said. “It’s really bringing together Paul’s vision for AI, improving life on Earth, human lives, and leveraging AI2’s mission of ‘AI for the Common Good.’”
The projects include EarthRanger, which uses sensors and software to track endangered species and fight illegal poaching; Skylight, which monitors maritime traffic to head off illegal fishing; Vulcan’s climate modeling group, which is developing more accurate climate projections; and the Center for Machine Learning, which applies AI to a wide range of environmental challenges.
“There are definitely days where I came out of the writing, and looked around and realized that I was back in the real world — and was occasionally sad about it, because there are really useful things in ‘Machinehood’ that I wish we had today,” Divya says in the latest episode of our Fiction Science podcast.
President Donald Trump today signed an executive order that puts the White House Office of Management and Budget in charge of drawing up a roadmap for how federal agencies use artificial intelligence software.
The roadmap, due for publication in 180 days, will cover AI applications used by the federal government for purposes other than defense or national security. The Department of Defense and the U.S. intelligence community already have drawn up a different set of rules for using AI.
Today’s order could well be the Trump administration’s final word on a technology marked by rapid innovation — and more than a little controversy.
Seattle, Microsoft and the field of artificial intelligence come in for their share of the spotlight in “Superintelligence” — an HBO Max movie starring Melissa McCarthy as the rom-com heroine, and comedian James Corden as the world’s new disembodied AI overlord.
But how much substance is there behind the spotlight? Although the action is set in Seattle, much of the principal filming was actually done in Georgia. And the scientific basis of the plot — which involves an AI trying to decide whether or not to destroy the planet — is, shall we say, debatable.
Fortunately, we have the perfect team to put “Superintelligence” to the test, as a set-in-Seattle movie as well as a guide to the capabilities of artificial intelligence.
What rights does a robot have? If our machines become intelligent in the science-fiction way, that’s likely to become a complicated question — and the humans who nurture those robots just might take their side.
Ted Chiang, a science-fiction author of growing renown with long-lasting connections to Seattle’s tech community, doesn’t back away from such questions. They spark the thought experiments that generate award-winning novellas like “The Lifecycle of Software Objects,” and inspire Hollywood movies like “Arrival.”
Can science fiction have an impact in the real world, even at times when the world seems as if it’s in the midst of a slow-moving disaster movie? Absolutely, Chiang says.
“Art is one way to make sense of a world which, on its own, does not make sense,” he says in the latest episode of our Fiction Science podcast, which focuses on the intersection between science and fiction. “Art can impose a kind of order onto things. … It doesn’t offer a cure-all, because I don’t think there’s going to be any easy cure-all, but I think art helps us get by in these stressful times.”
COVID-19 provides one illustration. Chiang would argue that our response to the coronavirus pandemic has been problematic in part because it doesn’t match what we’ve seen in sci-fi movies.
“The greatest conflict that we see generated is from people who don’t believe in it vs. everyone else,” he said. “That might be the product of the fact that it is not as severe. If it looked like various movie pandemics, it’d probably be hard for anyone to deny that it was happening.”
This pandemic may well spark a new kind of sci-fi theme.
“It’s worth thinking about, that traditional depictions of pandemics don’t spend much time on people coming together and trying to support each other,” Chiang said. “That is not typically a theme in stories about disaster or enormous crisis. I guess the narrative is usually, ‘It’s the end of civilization.’ And people have not turned on each other in that way.”
Artificial intelligence is another field where science fiction often gives people the wrong idea. “When we talk about AI in science fiction, we’re talking about something very different than what we mean when we say AI in the context of current technology,” Chiang said.
In Chiang’s view, most depictions of sci-fi AI fall short even by science-fiction standards.
“A lot of stories imagine something which is a product like a robot that comes in a box, and you flip it on, and suddenly you have a butler — a perfectly competent and loyal and obedient butler,” he noted. “That, I think, jumps over all these steps, because butlers don’t just happen.”
In “The Lifecycle of Software Objects,” Chiang imagines a world in which it takes just as long to raise a robot as it does to raise a child. That thought experiment sparks all kinds of interesting all-too-human questions: What if the people who raise such robots want them to be something more than butlers? Would they stand by and let their sci-fi robot progeny be treated like slaves, even like sex slaves?
“Maybe they want that robot, or conscious software, to have some kind of autonomy,” Chiang said. “To have a good life.”
Chiang’s latest collection of short stories, “Exhalation,” extends those kinds of thought experiments to science-fiction standbys ranging from free will to the search for extraterrestrial intelligence.
Both those subjects come into play in what’s certainly Chiang’s best-known novella, “Story of Your Life,” which was first published in 1998 and adapted to produce the screenplay for “Arrival” in 2016. Like so many of Chiang’s other stories, “Story of Your Life” takes an oft-used science-fiction trope — in this case, first contact with intelligent aliens — and adds an unexpected but insightful and heart-rending twist.
Chiang said that the success of the novella and the movie hasn’t led to particularly dramatic changes in the story of his own life, but that it has broadened the audience for the kinds of stories he tells.
“My work has been read by people who would not describe themselves as science-fiction readers, by people who don’t usually read a lot of science fiction, and that’s been amazing. That’s been really gratifying,” he said. “It’s not something that I ever really expected.”
During our podcast chat, Chiang indulged in yet another thought experiment: Could AI replace science-fiction writers?
Chiang’s answer? It depends.
“If we could get software-generated novels that were coherent, but not necessarily particularly good, I think there would be a market for them,” he said.
But Chiang doesn’t think that would doom human authors.
“For an AI to generate a novel that you think of as really good, that you feel like, ‘Oh, wow, this novel was both gripping and caused me to think about my life in a new way’ — that, I think, is going to be very, very hard,” he said.
Ted Chiang only makes it look easy.
Cosmic Log Used Book Club
So what’s Chiang reading? It’s definitely not an AI-generated novel.
“I recently enjoyed the novel ‘The Devourers’ by Indra Das,” Chiang said. “It’s a novel about — you might call them werewolves, or maybe just ‘shape-shifter’ would be a more accurate term. But it’s about shape-shifters or werewolves in pre-colonial India, in medieval India. It’s a setting that I haven’t seen a lot of in fiction, and really, it’s an interesting take on the werewolf or shape-shifter mythos.”
Based on that recommendation, we’re designating “The Devourers” as November’s selection for the Cosmic Log Used Book Club. Since 2002, the CLUB Club has recognized books with cosmic themes that could well be available at your local library or used-book store.
“I had been very skeptical about the idea of a TV series that was going to be a sequel to ‘Watchmen,’ ” Chiang said. “When I first heard about it, I thought, ‘That sounds like a bad idea.’ But I heard good things about it, and I gave it a try, and it surprised me with how interesting it was. For people who haven’t seen that, I definitely recommend checking it out.”
According to Zarkadakis, one of the most important fixes will be for governments to earn back the trust of the people they govern.
“We should have a more participatory form of government, rather than the one we have now,” Zarkadakis told me from his home base in London. “A mixture, if you like, of more direct democracy and representational democracy. And that’s where this idea of citizen assemblies comes about.”
He delves into his prescription for curing liberal democracy — and the precedents that can be drawn from science fiction — in the latest episode of the Fiction Science podcast. Check out the entire show via your favorite podcast channel, whether that’s Anchor, Apple, Spotify, Google, Breaker, Overcast, Pocket Casts or RadioPublic.
The process involves recruiting small groups of ordinary citizens and getting them up to speed on a pressing social issue. In Zarkadakis’ case, the issue had to do with the policies and ethical considerations surrounding brain science. Over a series of deliberations, the groups worked out recommendations on research policies, free of the political maneuvering that usually accompanies such debates.
One of the key challenges involved how to connect regular citizens with expert knowledge. It struck Zarkadakis that machine-based expert systems — for example, IBM’s Watson, the question-answering computer that bested human champs on the “Jeopardy” game show — could help guide citizen assemblies through complex issues such as climate change, health care and education.
“Those algorithms are very powerful,” he said. “They collect a lot of data, and they have a lot of collateral damage. They just want to sell ads. Now, can we do something about it? I think we can, of course. We can use this technology for other purposes. We can use this technology, for example, to build algorithms with different goals.”
Rewriting the formula for how personal data can be used is a big part of Zarkadakis’ prescription. In the book, he proposes the development of data trusts that put consumers in control of their own data — and put a price tag on the use of such data by businesses.
Is the market for an individual’s data lucrative enough to sustain the sellers? That was one of the questions my Fiction Science co-host, sci-fi author Dominica Phetteplace, asked Zarkadakis.
“Interestingly, they put up a collateral for that loan that wasn’t the airplanes. It wasn’t the slots they have on various air fields around the world. It was the loyalty program, a database,” he said.
Speaking of science fiction, the sky’s not the limit for Zarkadakis’ ideas: Early on, he planned to devote a chapter of “Cyber Republic” to the idea of creating decentralized, crypto-savvy cooperatives to govern future space settlements.
“My publisher dissuaded me from including the chapter in the book,” he said with a chuckle. “I didn’t want to argue the point too much, so I said, OK, fine, we’ll keep it on Earth and keep it earthly for this time.”
Instead, Zarkadakis laid out the idea in a pair of postings to his personal blog. He’s also working on a science-fiction novel that capitalizes on his familiarity with the ins and outs of AI and robotics — and who knows? In that novel, he just might address the invention of democracy for intelligent machines.
I reminded him that happy endings aren’t guaranteed, whether we’re talking about science fiction or real-world governance. The example I had in mind was the scene from “Star Wars, Episode III: Revenge of the Sith,” where Natalie Portman’s character watches the birth of the Galactic Empire and remarks: “So this is how liberty dies: with thunderous applause.”
“Both of those novels are interesting, because they imagine future human colonies on the moon, very near, but in very different ways as well,” Zarkadakis said. “It’s always interesting to read science fiction when you are interested in politics.”
Will citizen assemblies and data trusts end up being consigned to the realm of science fiction, along with Heinlein’s lunar revolutionaries and Le Guin’s anarcho-syndicalists? Zarkadakis, for one, hopes not. The way he sees it, we’re already stuck in a bad science-fiction plot.
“We are living actually in a nightmare right now, as far as I’m concerned,” Zarkadakis said. “And I believe that one of the reasons why this is happening is because the public was not involved in the conversation, and therefore there was not acceptance by the public of those measures. To cut a long story short, I believe that this needs to change.”