President Donald Trump today signed an executive order that puts the White House Office of Management and Budget in charge of drawing up a roadmap for how federal agencies use artificial intelligence software.
The roadmap, due for publication in 180 days, will cover AI applications used by the federal government for purposes other than defense or national security. The Department of Defense and the U.S. intelligence community already have drawn up a different set of rules for using AI.
Today’s order could well be the Trump administration’s final word on a technology marked by rapid innovation — and more than a little controversy.
Seattle, Microsoft and the field of artificial intelligence come in for their share of the spotlight in “Superintelligence” — an HBO Max movie starring Melissa McCarthy as the rom-com heroine, and comedian James Corden as the world’s new disembodied AI overlord.
But how much substance is there behind the spotlight? Although the action is set in Seattle, much of the principal filming was actually done in Georgia. And the scientific basis of the plot — which involves an AI trying to decide whether or not to destroy the planet — is, shall we say, debatable.
Fortunately, we have the perfect team to put “Superintelligence” to the test, as a set-in-Seattle movie as well as a guide to the capabilities of artificial intelligence.
What rights does a robot have? If our machines become intelligent in the science-fiction way, that’s likely to become a complicated question — and the humans who nurture those robots just might take their side.
Ted Chiang, a science-fiction author of growing renown with long-lasting connections to Seattle’s tech community, doesn’t back away from such questions. They spark the thought experiments that generate award-winning novellas like “The Lifecycle of Software Objects,” and inspire Hollywood movies like “Arrival.”
Can science fiction have an impact in the real world, even at times when the world seems as if it’s in the midst of a slow-moving disaster movie? Absolutely, Chiang says.
“Art is one way to make sense of a world which, on its own, does not make sense,” he says in the latest episode of our Fiction Science podcast, which focuses on the intersection between science and fiction. “Art can impose a kind of order onto things. … It doesn’t offer a cure-all, because I don’t think there’s going to be any easy cure-all, but I think art helps us get by in these stressful times.”
COVID-19 provides one illustration. Chiang would argue that our response to the coronavirus pandemic has been problematic in part because it doesn’t match what we’ve seen in sci-fi movies.
“The greatest conflict that we see generated is from people who don’t believe in it vs. everyone else,” he said. “That might be the product of the fact that it is not as severe. If it looked like various movie pandemics, it’d probably be hard for anyone to deny that it was happening.”
This pandemic may well spark a new kind of sci-fi theme.
“It’s worth thinking about, that traditional depictions of pandemics don’t spend much time on people coming together and trying to support each other,” Chiang said. “That is not typically a theme in stories about disaster or enormous crisis. I guess the narrative is usually, ‘It’s the end of civilization.’ And people have not turned on each other in that way.”
Artificial intelligence is another field where science fiction often gives people the wrong idea. “When we talk about AI in science fiction, we’re talking about something very different than what we mean when we say AI in the context of current technology,” Chiang said.
In Chiang’s view, most depictions of sci-fi AI fall short even by science-fiction standards.
“A lot of stories imagine something which is a product like a robot that comes in a box, and you flip it on, and suddenly you have a butler — a perfectly competent and loyal and obedient butler,” he noted. “That, I think, jumps over all these steps, because butlers don’t just happen.”
In “The Lifecycle of Software Objects,” Chiang imagines a world in which it takes just as long to raise a robot as it does to raise a child. That thought experiment sparks all kinds of interesting all-too-human questions: What if the people who raise such robots want them to be something more than butlers? Would they stand by and let their sci-fi robot progeny be treated like slaves, even like sex slaves?
“Maybe they want that robot, or conscious software, to have some kind of autonomy,” Chiang said. “To have a good life.”
Chiang’s latest collection of short stories, “Exhalation,” extends those kinds of thought experiments to science-fiction standbys ranging from free will to the search for extraterrestrial intelligence.
Both those subjects come into play in what’s certainly Chiang’s best-known novella, “Story of Your Life,” which was first published in 1998 and adapted to produce the screenplay for “Arrival” in 2016. Like so many of Chiang’s other stories, “Story of Your Life” takes an oft-used science-fiction trope — in this case, first contact with intelligent aliens — and adds an unexpected but insightful and heart-rending twist.
Chiang said that the success of the novella and the movie hasn’t led to particularly dramatic changes in the story of his own life, but that it has broadened the audience for the kinds of stories he tells.
“My work has been read by people who would not describe themselves as science-fiction readers, by people who don’t usually read a lot of science fiction, and that’s been amazing. That’s been really gratifying,” he said. “It’s not something that I ever really expected.”
During our podcast chat, Chiang indulged in yet another thought experiment: Could AI replace science-fiction writers?
Chiang’s answer? It depends.
“If we could get software-generated novels that were coherent, but not necessarily particularly good, I think there would be a market for them,” he said.
But Chiang doesn’t think that would doom human authors.
“For an AI to generate a novel that you think of as really good, that you feel like, ‘Oh, wow, this novel was both gripping and caused me to think about my life in a new way’ — that, I think, is going to be very, very hard,” he said.
Ted Chiang only makes it look easy.
Cosmic Log Used Book Club
So what’s Chiang reading? It’s definitely not an AI-generated novel.
“I recently enjoyed the novel ‘The Devourers’ by Indra Das,” Chiang said. “It’s a novel about — you might call them werewolves, or maybe just ‘shape-shifter’ would be a more accurate term. But it’s about shape-shifters or werewolves in pre-colonial India, in medieval India. It’s a setting that I haven’t seen a lot of in fiction, and really, it’s an interesting take on the werewolf or shape-shifter mythos.”
Based on that recommendation, we’re designating “The Devourers” as November’s selection for the Cosmic Log Used Book Club. Since 2002, the CLUB Club has recognized books with cosmic themes that could well be available at your local library or used-book store.
“I had been very skeptical about the idea of a TV series that was going to be a sequel to ‘Watchmen,’ ” Chiang said. “When I first heard about it, I thought, ‘That sounds like a bad idea.’ But I heard good things about it, and I gave it a try, and it surprised me with how interesting it was. For people who haven’t seen that, I definitely recommend checking it out.”
According to Zarkadakis, one of the most important fixes will be for governments to earn back the trust of the people they govern.
“We should have a more participatory form of government, rather than the one we have now,” Zarkadakis told me from his home base in London. “A mixture, if you like, of more direct democracy and representational democracy. And that’s where this idea of citizen assemblies comes about.”
He delves into his prescription for curing liberal democracy — and the precedents that can be drawn from science fiction — in the latest episode of the Fiction Science podcast. Check out the entire show via your favorite podcast channel, whether that’s Anchor, Apple, Spotify, Google, Breaker, Overcast, Pocket Casts or RadioPublic.
The process involves recruiting small groups of ordinary citizens, and getting them up to speed on a pressing social issue. In Zarkadakis’ case, the issue had to do with the policies and ethical considerations surrounding brain science. During a series of deliberations, the groups worked out a series of recommendations on research policies, free of the political maneuvering that usually accompanies such debates.
One of the key challenges involved how to connect regular citizens with expert knowledge. It struck Zarkadakis that machine-based expert systems — for example, IBM’s Watson, the question-answering computer that bested human champs on the “Jeopardy” game show — could help guide citizen assemblies through the intricacies of complex issues such as climate change, health care and education.
“Those algorithms are very powerful,” he said. “They collect a lot of data, and they have a lot of collateral damage. They just want to sell ads. Now, can we do something about it? I think we can, of course. We can use this technology for other purposes. We can use this technology, for example, to build algorithms with different goals.”
Rewriting the formula for how personal data can be used is a big part of Zarkadakis’ prescription. In the book, he proposes the development of data trusts that put consumers in control of their own data — and put a price tag on the use of such data by businesses.
Is the market for an individual’s data lucrative enough to sustain the sellers? That was one of the questions my Fiction Science co-host, sci-fi author Dominica Phetteplace, asked Zarkadakis.
“Interestingly, they put up a collateral for that loan that wasn’t the airplanes. It wasn’t the slots they have on various air fields around the world. It was the loyalty program, a database,” he said.
Speaking of science fiction, the sky’s not the limit for Zarkadakis’ ideas: Early on, he planned to devote a chapter of “Cyber Republic” to the idea of creating decentralized, crypto-savvy cooperatives to govern future space settlements.
“My publisher dissuaded me from including the chapter in the book,” he said with a chuckle. “I didn’t want to argue the point too much, so I said, OK, fine, we’ll keep it on Earth and keep it earthly for this time.”
Instead, Zarkadakis laid out the idea in a pair of postings to his personal blog. He’s also working on a science-fiction novel that capitalizes on his familiarity with the ins and outs of AI and robotics — and who knows? In that novel, he just might address the invention of democracy for intelligent machines.
I reminded him that happy endings aren’t guaranteed, whether we’re talking about science fiction or real-world governance. The example I had in mind was the scene from “Star Wars: Episode III – Revenge of the Sith,” where Natalie Portman’s character watches the birth of the Galactic Empire and remarks: “So this is how liberty dies: with thunderous applause.”
“Both of those novels are interesting, because they imagine future human colonies on the moon, very near, but in very different ways as well,” Zarkadakis said. “It’s always interesting to read science fiction when you are interested in politics.”
Will citizen assemblies and data trusts end up being consigned to the realm of science fiction, along with Heinlein’s lunar revolutionaries and Le Guin’s anarcho-syndicalists? Zarkadakis, for one, hopes not. The way he sees it, we’re already stuck in a bad science-fiction plot.
“We are living actually in a nightmare right now, as far as I’m concerned,” Zarkadakis said. “And I believe that one of the reasons why this is happening is because the public was not involved in the conversation, and therefore there was not acceptance by the public of those measures. To cut a long story short, I believe that this needs to change.”
BlackSky, a satellite data venture with offices in Seattle, says it’s won a U.S. Air Force contract to track the effects of the coronavirus pandemic on military interests worldwide.
The contract calls for BlackSky to monitor U.S. military bases overseas and assess the status of supply chains, using its AI-enabled Spectra geospatial data analysis platform.
Spectra can analyze satellite data as well as news feeds and social media postings to identify anomalies worth following up on with additional imagery or investigation. The data inputs include imagery from BlackSky’s own satellite constellation as well as from other sources.
The approach was demonstrated for GeekWire back in May, when BlackSky executives showed how satellite images could be compared to detect an unusual rise or fall in, say, the number of cars parked in a lot outside a given installation. That could point to places where social distancing is decreasing or increasing.
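The kind of comparison described above can be illustrated with a toy example. The sketch below flags a day’s car count as anomalous when it deviates sharply from the historical baseline for that lot; the function name, sample data and three-sigma threshold are all hypothetical, chosen only to show the idea — this is not BlackSky’s actual Spectra code.

```python
# Toy anomaly check on daily car counts, in the spirit of the
# satellite-image comparison described above. Purely illustrative;
# names, data and threshold are hypothetical.
from statistics import mean, stdev

def flag_anomaly(history, latest, threshold=3.0):
    """Return True if `latest` deviates from the historical baseline
    by more than `threshold` standard deviations."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return latest != baseline
    return abs(latest - baseline) / spread > threshold

# Example: a parking lot that normally holds about 100 cars.
counts = [98, 102, 97, 101, 99, 103, 100]
print(flag_anomaly(counts, 35))   # sharp drop -> True
print(flag_anomaly(counts, 101))  # within normal range -> False
```

In a real pipeline, the counts themselves would come from an image-recognition model run over successive satellite passes; the statistical check is only the last, simplest step.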
Spectra can also analyze activity at airports, loading docks, maintenance facilities, fuel storage depots and other key installations to assess how supply chains might be affected by pandemic-related bottlenecks.
Such analyses can be compared with reported infection numbers coming from local governments, and integrated into computer models to predict the risk to deployed Air Force personnel and the surrounding communities.
How do you know when a region’s economy has recovered from the coronavirus pandemic? You could wait for the verdict from the unemployment figures, gather reports from individual businesses and scan news reports about business reopenings. You could count how many cars show up in the parking lots of factories and shopping centers. Or you could just let Spectra do all of that.
Seattle-based BlackSky’s Spectra geospatial data platform can combine satellite imagery and other data inputs to generate insights that are greater than the sum of their parts. It’ll even use AI-enabled image recognition to count the cars.
As the COVID-19 crisis progresses, Spectra is learning to recognize the early signs of recovery, or the telltale signs of a resurgence.
“That’s what BlackSky is really all about: How can we inform you that something is happening, or something is going to happen, before you hear it from anywhere else?” said Patrick O’Neil, director of machine learning and artificial intelligence.
One month after the debut of the COVID-19 Open Research Dataset, or CORD-19, the database of coronavirus-related research papers has doubled in size — and has given rise to more than a dozen software tools to help researchers navigate the hundreds of studies that are being published every day about the pandemic.
In a roundup published on the arXiv preprint server this week, researchers from Seattle’s Allen Institute for Artificial Intelligence, Microsoft Research and other partners in the project say CORD-19’s collection has risen from about 28,000 papers to more than 52,000. Every day, several hundred more papers are being published, in peer-reviewed journals and on preprint servers such as bioRxiv and medRxiv.
CORD-19 aims to make sense of them all, using the Semantic Scholar academic search engine developed by the Allen Institute for AI, also known as AI2.
A consortium of tech leaders — including Seattle’s Allen Institute for Artificial Intelligence, Microsoft and Facebook CEO Mark Zuckerberg’s charity — today unveiled an AI-enabled database that’s meant to give researchers quicker, surer access to resources relating to coronavirus and how to stop it.
The COVID-19 Open Research Dataset, or CORD-19, was created in response to a request from the White House’s Office of Science and Technology Policy. It takes advantage of AI tools to organize more than 24,000 articles about the COVID-19 disease and the SARS-CoV-2 coronavirus that causes it.
“We think that AI has an important part to play in solving this problem,” said Doug Raymond, general manager for the Semantic Scholar academic search engine at the Allen Institute for Artificial Intelligence, also known as AI2.
AI2’s CEO, Oren Etzioni, said his team leapt at the opportunity to participate in CORD-19. “We hesitated all of negative-two seconds,” he joked.