A newly published report on the state of artificial intelligence says the field has reached a turning point where attention must be paid to the everyday applications of AI technology — and to the ways in which that technology is being abused.
The project’s first report, published in 2016, downplayed concerns that AI would lead to a Terminator-style rise of the machines and warned that fear and suspicion about AI would impede efforts to ensure the safety and reliability of AI technologies. At the same time, it acknowledged that the effects of AI and automation could lead to social disruption.
Bert and Ernie, Scooter and Kermit may have started out as warm and fuzzy Muppet characters, but now they’re part of Amazon’s team of warehouse robots as well.
Amazon showed off the latest members of its mechanical menagerie today in a blog post that focuses on how it’s using robotic research to improve workplace safety for its human employees.
For example, a type of robot nicknamed Ernie is designed to take boxy product containers known as totes off shelves at different heights, and then use its robotic arm to deliver the totes to warehouse employees at a standard height. The goal is to reduce the amount of reaching up or bending down that workers have to do.
“We’re known for being passionate about innovating for customers, but being able to innovate with robotics for our employees is something that gives me an extra kick of motivation each day,” Kevin Keck, worldwide director of advanced technology at Amazon, said in the blog post. “The innovation with a robot like Ernie is interesting because while it doesn’t make the process go any faster, we’re optimistic, based on our testing, it can make our facilities safer for employees.”
ManipulaTHOR adds a highly articulated robotic arm to the institute’s AI2-THOR artificial intelligence platform — which should provide lots more capability for testing the software for robots even before they’re built.
AI2-THOR was programmed to find its way through virtual versions of indoor environments, such as kitchens and bathrooms. It could use computer vision to locate everyday objects, but the model didn’t delve deeply into the mechanics of moving those objects. Instead, it just levitated them, as if by video-game magic.
Now AI2-THOR is getting real.
“Imagine a robot being able to navigate a kitchen, open a refrigerator and pull out a can of soda,” AI2 CEO Oren Etzioni said in a news release. “This is one of the biggest and yet often overlooked challenges in robotics, and AI2-THOR is the first to design a benchmark for the task of moving objects to various locations in virtual rooms, enabling reproducibility and measuring progress.”
President Donald Trump today signed an executive order that puts the White House Office of Management and Budget in charge of drawing up a roadmap for how federal agencies use artificial intelligence software.
The roadmap, due for publication in 180 days, will cover AI applications used by the federal government for purposes other than defense or national security. The Department of Defense and the U.S. intelligence community already have drawn up a different set of rules for using AI.
Today’s order could well be the Trump administration’s final word on a technology marked by rapid innovation — and more than a little controversy.
Seattle, Microsoft and the field of artificial intelligence come in for their share of the spotlight in “Superintelligence” — an HBO Max movie starring Melissa McCarthy as the rom-com heroine, and comedian James Corden as the world’s new disembodied AI overlord.
But how much substance is there behind the spotlight? Although the action is set in Seattle, much of the principal filming was actually done in Georgia. And the scientific basis of the plot — which involves an AI trying to decide whether or not to destroy the planet — is, shall we say, debatable.
Fortunately, we have the perfect team to put “Superintelligence” to the test, as a set-in-Seattle movie as well as a guide to the capabilities of artificial intelligence.
Are sex robots just what the doctor ordered for the over-65 set?
In a newly published research paper, a bioethicist at the University of Washington argues that older people, particularly those who are disabled or socially isolated, are an overlooked market for intimate robotic companionship — and that there shouldn’t be any shame over seeking it out.
To argue otherwise would be a form of ageism, says Nancy Jecker, a professor of bioethics and humanities at the UW School of Medicine.
“Designing and marketing sex robots for older, disabled people would represent a sea change from current practice,” she said today in a news release. “The reason to do it is to support human dignity and to take seriously the claims of those whose sexuality is diminished by disability or isolation. Society needs to make reasonable efforts to help them.”
Jecker’s argument, laid out in the Journal of Medical Ethics, reawakens a debate that has raged at least since a bosomy robot made her debut in Fritz Lang’s 1927 film, “Metropolis.” In a 2007 book titled “Love and Sex With Robots,” computer chess pioneer David Levy argued that robot sex would become routine by 2050.
Over the past decade or so, the sex robot trade has advanced somewhat, with computerized dolls that would typically appeal to randy guys. At the same time, researchers have acknowledged that the world’s growing over-65 population may well need to turn to robotic caregivers and companions, due to demographic trends.
Jecker says sex should be part of the equation for those robots — especially when human-to-human sex is more difficult due to disabilities, or the mere fact that an older person’s parts don’t work as well as they once did. Manufacturers should think about tailoring robot partners for an older person’s tastes, she says.
What rights does a robot have? If our machines become intelligent in the science-fiction way, that’s likely to become a complicated question — and the humans who nurture those robots just might take their side.
Ted Chiang, a science-fiction author of growing renown with long-lasting connections to Seattle’s tech community, doesn’t back away from such questions. They spark the thought experiments that generate award-winning novellas like “The Lifecycle of Software Objects,” and inspire Hollywood movies like “Arrival.”
Can science fiction have an impact in the real world, even at times when the world seems as if it’s in the midst of a slow-moving disaster movie? Absolutely, Chiang says.
“Art is one way to make sense of a world which, on its own, does not make sense,” he says in the latest episode of our Fiction Science podcast, which focuses on the intersection between science and fiction. “Art can impose a kind of order onto things. … It doesn’t offer a cure-all, because I don’t think there’s going to be any easy cure-all, but I think art helps us get by in these stressful times.”
COVID-19 provides one illustration. Chiang would argue that our response to the coronavirus pandemic has been problematic in part because it doesn’t match what we’ve seen in sci-fi movies.
“The greatest conflict that we see generated is from people who don’t believe in it vs. everyone else,” he said. “That might be the product of the fact that it is not as severe. If it looked like various movie pandemics, it’d probably be hard for anyone to deny that it was happening.”
This pandemic may well spark a new kind of sci-fi theme.
“It’s worth thinking about, that traditional depictions of pandemics don’t spend much time on people coming together and trying to support each other,” Chiang said. “That is not typically a theme in stories about disaster or enormous crisis. I guess the narrative is usually, ‘It’s the end of civilization.’ And people have not turned on each other in that way.”
Artificial intelligence is another field where science fiction often gives people the wrong idea. “When we talk about AI in science fiction, we’re talking about something very different than what we mean when we say AI in the context of current technology,” Chiang said.
In Chiang’s view, most depictions of sci-fi AI fall short even by science-fiction standards.
“A lot of stories imagine something which is a product like a robot that comes in a box, and you flip it on, and suddenly you have a butler — a perfectly competent and loyal and obedient butler,” he noted. “That, I think, jumps over all these steps, because butlers don’t just happen.”
In “The Lifecycle of Software Objects,” Chiang imagines a world in which it takes just as long to raise a robot as it does to raise a child. That thought experiment sparks all kinds of interesting all-too-human questions: What if the people who raise such robots want them to be something more than butlers? Would they stand by and let their sci-fi robot progeny be treated like slaves, even like sex slaves?
“Maybe they want that robot, or conscious software, to have some kind of autonomy,” Chiang said. “To have a good life.”
Chiang’s latest collection of short stories, “Exhalation,” extends those kinds of thought experiments to science-fiction standbys ranging from free will to the search for extraterrestrial intelligence.
Both those subjects come into play in what’s certainly Chiang’s best-known novella, “Story of Your Life,” which was first published in 1998 and adapted to produce the screenplay for “Arrival” in 2016. Like so many of Chiang’s other stories, “Story of Your Life” takes an oft-used science-fiction trope — in this case, first contact with intelligent aliens — and adds an unexpected but insightful and heart-rending twist.
Chiang said that the success of the novella and the movie hasn’t led to particularly dramatic changes in the story of his own life, but that it has broadened the audience for the kinds of stories he tells.
“My work has been read by people who would not describe themselves as science-fiction readers, by people who don’t usually read a lot of science fiction, and that’s been amazing. That’s been really gratifying,” he said. “It’s not something that I ever really expected.”
During our podcast chat, Chiang indulged in yet another thought experiment: Could AI replace science-fiction writers?
Chiang’s answer? It depends.
“If we could get software-generated novels that were coherent, but not necessarily particularly good, I think there would be a market for them,” he said.
But Chiang doesn’t think that would doom human authors.
“For an AI to generate a novel that you think of as really good, that you feel like, ‘Oh, wow, this novel was both gripping and caused me to think about my life in a new way’ — that, I think, is going to be very, very hard,” he said.
Ted Chiang only makes it look easy.
Cosmic Log Used Book Club
So what’s Chiang reading? It’s definitely not an AI-generated novel.
“I recently enjoyed the novel ‘The Devourers’ by Indra Das,” Chiang said. “It’s a novel about — you might call them werewolves, or maybe just ‘shape-shifter’ would be a more accurate term. But it’s about shape-shifters or werewolves in pre-colonial India, in medieval India. It’s a setting that I haven’t seen a lot of in fiction, and really, it’s an interesting take on the werewolf or shape-shifter mythos.”
Based on that recommendation, we’re designating “The Devourers” as November’s selection for the Cosmic Log Used Book Club. Since 2002, the CLUB Club has recognized books with cosmic themes that could well be available at your local library or used-book store.
“I had been very skeptical about the idea of a TV series that was going to be a sequel to ‘Watchmen,’ ” Chiang said. “When I first heard about it, I thought, ‘That sounds like a bad idea.’ But I heard good things about it, and I gave it a try, and it surprised me with how interesting it was. For people who haven’t seen that, I definitely recommend checking it out.”
With grudging assistance from a trio of pigs, Neuralink co-founder Elon Musk showed off the startup’s state-of-the-art neuron-reading brain implant and announced that the system has received the Food and Drug Administration’s preliminary blessing as an experimental medical device.
During today’s demonstration at Neuralink’s headquarters in Fremont, Calif., it took a few minutes for wranglers to get the swine into their proper positions for what Musk called his “Three Little Pigs demonstration.”
One of the pigs was in her natural state and roamed unremarkably around her straw-covered pen. Musk said the second pig had been given a brain implant that was later removed, showing that the operation could be reversed safely.
After some difficulty, a third pig named Gertrude was brought into her pen. As she rooted around in the straw, a sequence of jazzy electronic beeps played through the sound system. Musk said the tones were sounded whenever nerves in the pig’s snout triggered electrical impulses that were picked up by her brain implant.
“The beeps you’re hearing are real-time signals from the Neuralink in Gertrude’s head,” he said.
Eventually, Neuralink’s team plans to place the implants in people, initially to see if those who have become paralyzed due to spinal cord injuries can regain motor functions through thought alone.
Musk said the implant received a Breakthrough Device designation from the FDA last month. That doesn’t yet clear the way for human clinical trials, but it does put Neuralink on the fast track for consultation with the FDA’s experts during preparations for such trials.
Neuralink has received more than $150 million in funding, with roughly two-thirds of that support coming from Musk himself. Today he said the venture had about 100 employees. He expects that number to grow. “Over time, there might be 10,000 or more people at Neuralink,” he said.
SATSOP, Wash. — Amid the ruins of what was meant to be a nuclear power plant, a robot catches a whiff of carbon dioxide — and hundreds of feet away, its master perks up his ears.
“I think I’ve got gas sensing,” Fletcher Talbot, the designated human operator for Team CSIRO Data61 in DARPA’s Subterranean Challenge, told teammates who were bunkered with him in the bowels of the Satsop nuclear reactor site near Elma.
Moments after Talbot fed the coordinates into a computer, a point appeared on the video scoreboard mounted on a wall of the bunker. “Hey, nice,” one member of the team said, and the whole squad broke into a short burst of applause.
Then it was back to the hunt.
The robot’s discovery marked one small step in the Subterranean Challenge, a multimillion-dollar competition aimed at promoting the development of autonomous robots to seek out and identify victims amid the rubble of an urban disaster area, or hazards hidden in the alleys of a hostile cityscape.
One year after the White House kicked off the American AI Initiative, its effects on research and development in the burgeoning field of artificial intelligence are just beginning to sink in.
And Michael Kratsios, the White House’s chief technology officer, says those effects are sure to be felt in Seattle — where industry leaders including Amazon and Microsoft, and leading research institutions including the University of Washington and the Allen Institute for Artificial Intelligence, are expanding the AI frontier.