Categories
Fiction Science Club

AI goes full circle from fiction to science and back again

Artificial intelligence has had an effect on nearly every facet of modern life — ranging from diagnosing diseases, to applying for a job, to deciding which movie to watch. Now it’s reaching back into the realm where our notions about AI were born decades ago: science fiction.

“AI is just becoming more and more prominent in science fiction, which I think is just a reflection of the times we’re in right now,” says Allan Kaster, who has been editing annual collections of sci-fi stories for 15 years. “It’s getting harder and harder to see a story that doesn’t include some sort of AI.”

Kaster, who heads up a sci-fi publishing house called Infinivox, discusses the connections between real-world science and fiction in the latest episode of the Fiction Science podcast.

GeekWire

Federally funded lab enlists AI to safeguard security

Bringing artificial intelligence to bear on issues relating to nuclear weapons might sound like the stuff of a scary sci-fi movie — but at the Department of Energy’s Pacific Northwest National Laboratory, it’s just one of the items on the to-do list.

One of PNNL’s research priorities is to identify and combat complex threats to national security, and AI can help meet that priority by detecting attempts to acquire nuclear weapons or associated technology.

Nuclear proliferation detection is one of the potential applications that could get an assist from the Center for AI @PNNL, a newly announced effort to coordinate research that makes use of AI tools — including the generative AI tools that have captured the attention of the tech world over the past year or two.

“For decades we’ve been doing artificial intelligence,” center director Court Corley, PNNL’s chief scientist for AI, told me in a recent interview. “What we’re seeing now, though, is an exceptional phase shift in where AI is being used and how it’s being used.”

Fiction Science Club

How AI and quantum physics link up to consciousness

Will artificial intelligence serve humanity — or will it spawn a new species of conscious digital beings with their own agenda?

It’s a question that has sparked scores of science-fiction plots, from “Colossus: The Forbin Project” in 1970, to “The Matrix” in 1999, to this year’s big-budget tale about AI vs. humans, “The Creator.”

The same question has also been lurking behind the OpenAI leadership struggle — in which CEO Sam Altman won out over the nonprofit board members who fired him a week earlier.

If you had to divide the AI community into go-fast and go-slow camps, those board members would be on the go-slow side, while Altman would favor going fast. And there have been rumblings about the possibility of a “breakthrough” at OpenAI that would set the field going very fast — potentially too fast for humanity’s good.

Is the prospect of AI becoming sentient and taking matters into its own hands something we should be worried about? That’s just one of the questions covered by veteran science writer George Musser in a newly published book titled “Putting Ourselves Back in the Equation.”

Musser interviewed AI researchers, neuroscientists, quantum physicists and philosophers to get a reading on the quest to unravel one of life’s deepest mysteries: What is the nature of consciousness? And is it a uniquely human phenomenon?

His conclusion? There’s no reason why the right kind of AI couldn’t be as conscious as we are. “Almost everyone who thinks about this, in all these different fields, says if we were to replicate a neuron in silicon — if we were to create a neuromorphic computer that would have to be very, very true to the biology — yes, it would be conscious,” Musser says in the latest episode of the Fiction Science podcast.

But should we be worried about enabling the rise of future AI overlords? On that existential question, Musser’s view runs counter to the usual sci-fi script.

GeekWire

AI influencers are worried about AI’s influence

What do you get when you put two of Time magazine’s 100 most influential people on artificial intelligence together in the same lecture hall? If the two influencers happen to be science-fiction writer Ted Chiang and Emily Bender, a linguistics professor at the University of Washington, you get a lot of skepticism about the future of generative AI tools such as ChatGPT.

“I don’t use it, and I won’t use it, and I don’t want to read what other people do using it,” Bender said Nov. 10 at a Town Hall Seattle forum presented by Clarion West.

Chiang, who writes essays about AI and works intelligent machines into some of his fictional tales, said it’s becoming too easy to think that AI agents are thinking.

“I feel confident that they’re not thinking,” he said. “They’re not understanding anything, but we need another way to make sense of what they’re doing.”

What’s the harm? One of Chiang’s foremost fears is that the thinking, breathing humans who wield AI will use it as a means to control other humans. In a recent Vanity Fair interview, he compared our increasingly AI-driven economy to “a giant treadmill that we can’t get off” — and during Friday’s forum, Chiang worried that the seeming humanness of AI assistants could play a role in keeping us on the treadmill.

“If people start thinking that Alexa, or something like that, deserves any kind of respect, that works to Amazon’s advantage,” he said. “That’s something that Amazon would try and amplify. Any corporation, they’re going to try and make you think that a product is a person, because you are going to interact with a person in a certain way, and they benefit from that. So, this is a vulnerability in human psychology which corporations are really trying to exploit.”

AI tools including ChatGPT and DALL-E typically produce text or imagery by breaking down huge databases of existing works, and putting the elements together into products that look as if they were created by humans. The artificial genuineness is the biggest reason why Bender stays as far away from generative AI as she can.

“The papier-mâché language that comes out of these systems isn’t representing the experience of any entity, any person. And so I don’t think it can be creative writing,” she said. “I do think there’s a risk that it is going to be harder to make a living as a writer, as corporations try to say, ‘Well, we can get the copy…’ or similarly in art, ‘We can get the illustrations done much cheaper by taking the output of the system that was built with stolen art, visual or linguistic, and just repurposing that.’”

GeekWire

Olis raises $4M for tech that keeps robots on track

Seattle-based Olis Robotics has raised $4.1 million to explore new markets for tools that make it possible to monitor and control industrial robots remotely and securely.

The funding round was led by PSL Ventures, Olis Robotics said today in a news release. Additional backing came from Tectonic Ventures, Ubiquity Ventures and several strategic angel investors — including Daniel Theobald, a pioneer in the field who played key roles in founding MassRobotics and Vecna Robotics.

Olis’ flagship product, Olis Connect, helps operators monitor and manage their machines remotely from anywhere via any browser-capable device. If a robot encounters a problem, Olis Connect sends out an alert via a secure connection to the operator’s device without connecting to the cloud — which is an added safeguard in environments where cybersecurity is a concern. Operators can then use the system remotely to execute error recovery actions, such as releasing the robot’s grip on a part, or moving the robot from its error position.

“Robot downtime can cost a large plant over $1 million per hour. When every minute counts, you need to leverage remote tools to react as quickly as possible no matter where you are,” Olis Robotics CEO Fredrik Ryden explained. “Our technology is ingeniously simple to use and intensely practical in terms of its impact.”

GeekWire

Microsoft’s AI chatbot gets into some ugly arguments

It turns out we’re not the only ones getting into fact-checking fights with Bing Chat, Microsoft’s much-vaunted AI chatbot.

Last week, GeekWire’s Todd Bishop recounted an argument with the ChatGPT-based conversational search engine over his previous reporting on Porch Group’s growth plans. Bing Chat acknowledged that it gave Bishop the wrong target date for the company’s timeline to double its value. “I hope you can forgive me,” the chatbot said.

Since then, other news reports have highlighted queries that prompted wrong and sometimes even argumentative responses from Bing Chat.

GeekWire

Report says AI’s promises and perils are getting real

A newly published report on the state of artificial intelligence says the field has reached a turning point where attention must be paid to the everyday applications of AI technology — and to the ways in which that technology is being abused.

The report, titled “Gathering Strength, Gathering Storms,” was issued today as part of the One Hundred Year Study on Artificial Intelligence, or AI100, which is envisioned as a century-long effort to track progress in AI and guide its future development.

AI100 was initiated by Eric Horvitz, Microsoft’s chief scientific officer, and hosted by the Stanford University Institute for Human-Centered Artificial Intelligence. The project is funded by a gift from Horvitz, a Stanford alumnus, and his wife, Mary.

The project’s first report, published in 2016, downplayed concerns that AI would lead to a Terminator-style rise of the machines and warned that fear and suspicion about AI would impede efforts to ensure the safety and reliability of AI technologies. At the same time, it acknowledged that the effects of AI and automation could lead to social disruption.

This year’s update, prepared by a standing committee in collaboration with a panel of 17 researchers and experts, says AI’s effects are increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

GeekWire

Can Amazon’s robots make work safer for humans?

Bert and Ernie, Scooter and Kermit may have started out as warm and fuzzy Muppet characters, but now they’re part of Amazon’s team of warehouse robots as well.

Amazon showed off the latest members of its mechanical menagerie today in a blog post that focuses on how it’s using robotic research to improve workplace safety for its human employees.

For example, a type of robot nicknamed Ernie is designed to take boxy product containers known as totes off shelves at different heights, and then use its robotic arm to deliver the totes to warehouse employees at a standard height. The goal is to reduce the amount of reaching up or bending down that workers have to do.

“We’re known for being passionate about innovating for customers, but being able to innovate with robotics for our employees is something that gives me an extra kick of motivation each day,” Kevin Keck, worldwide director of advanced technology at Amazon, said in the blog post. “The innovation with a robot like Ernie is interesting because while it doesn’t make the process go any faster, we’re optimistic, based on our testing, it can make our facilities safer for employees.”

Today’s inside look at the research being done at labs in the Seattle area, the Boston area and northern Italy comes in the wake of a couple of reports criticizing Amazon’s workplace safety record.

GeekWire

ManipulaTHOR lends virtual robots a hand (and an arm)

You can lead a virtual robot to a refrigerator, but you can’t make it pull out a drink. This is the problem that Seattle’s Allen Institute for Artificial Intelligence, also known as AI2, is addressing with a new breed of virtual robotic agent called ManipulaTHOR.

ManipulaTHOR adds a highly articulated robotic arm to the institute’s AI2-THOR artificial intelligence platform — which should provide lots more capability for testing the software for robots even before they’re built.

AI2-THOR was programmed to find its way through virtual versions of indoor environments, such as kitchens and bathrooms. It could use computer vision to locate everyday objects, but the model didn’t delve deeply into the mechanics of moving those objects. Instead, it just levitated them, as if by video-game magic.

Now AI2-THOR is getting real.

“Imagine a robot being able to navigate a kitchen, open a refrigerator and pull out a can of soda,” AI2 CEO Oren Etzioni said in a news release. “This is one of the biggest and yet often overlooked challenges in robotics, and AI2-THOR is the first to design a benchmark for the task of moving objects to various locations in virtual rooms, enabling reproducibility and measuring progress.”

GeekWire

Trump signs order guiding federal adoption of AI

President Donald Trump today signed an executive order that puts the White House Office of Management and Budget in charge of drawing up a roadmap for how federal agencies use artificial intelligence software.

The roadmap, due for publication in 180 days, will cover AI applications used by the federal government for purposes other than defense or national security. The Department of Defense and the U.S. intelligence community already have drawn up a different set of rules for using AI.

Today’s order could well be the Trump administration’s final word on a technology marked by rapid innovation — and more than a little controversy.

Future regulations could have an outsized impact on Amazon and Microsoft, two of the largest developers of AI technologies. The sharpest debates have focused on facial recognition software, but there are also issues relating to algorithmic bias, data privacy and transparency.