Will artificial intelligence serve humanity — or will it spawn a new species of conscious digital beings with their own agenda?
It’s a question that has sparked scores of science-fiction plots, from “Colossus: The Forbin Project” in 1970, to “The Matrix” in 1999, to this year’s big-budget tale about AI vs. humans, “The Creator.”
The same question has also been lurking behind the OpenAI leadership struggle — in which CEO Sam Altman won out over the nonprofit board members who fired him a week earlier.
If you had to divide the AI community into go-fast and go-slow camps, those board members would be on the go-slow side, while Altman would favor going fast. And there have been rumblings about the possibility of a “breakthrough” at OpenAI that would set the field going very fast — potentially too fast for humanity’s good.
Is the prospect of AI becoming sentient and taking matters into its own hands something we should be worried about? That’s just one of the questions covered by veteran science writer George Musser in a newly published book titled “Putting Ourselves Back in the Equation.”
Musser interviewed AI researchers, neuroscientists, quantum physicists and philosophers to get a reading on the quest to unravel one of life’s deepest mysteries: What is the nature of consciousness? And is it a uniquely human phenomenon?
His conclusion? There’s no reason why the right kind of AI couldn’t be as conscious as we are. “Almost everyone who thinks about this, in all these different fields, says if we were to replicate a neuron in silicon — if we were to create a neuromorphic computer that would have to be very, very true to the biology — yes, it would be conscious,” Musser says in the latest episode of the Fiction Science podcast.
But should we be worried about enabling the rise of future AI overlords? On that existential question, Musser’s view runs counter to the usual sci-fi script.