
Scientists back AI principles for biomolecular design

More than 90 researchers — including a Nobel laureate — have signed on to a call for the scientific community to follow a set of safety and security standards when using artificial intelligence to design synthetic proteins.

The community statement on the responsible development of AI for protein design is being unveiled today in Boston at Winter RosettaCon 2024, a conference focusing on biomolecular engineering. The statement follows up on an AI safety summit that was convened last October by the Institute for Protein Design at the University of Washington School of Medicine.

“I view this as a crucial step for the scientific community,” the institute’s director, David Baker, said in a news release. “The responsible use of AI for protein design will unlock new vaccines, medicines and sustainable materials that benefit the world. As scientists, we must ensure this happens while also minimizing the chance that our tools could ever be misused to cause harm.”


Quindar raises $6M to automate satellite management

A space-centric startup called Quindar says it has closed an oversubscribed $6 million funding round that will give a boost to its cloud-hosted, AI-supported software platform for satellite management.

The seed extension round was led by the Seattle-area venture capital firm Fuse, with continuing investment from Y Combinator and Funders Club. The round builds upon last year’s initial seed round and brings total funding to date to $8.5 million, Quindar CEO and co-founder Nate Hamet told GeekWire in an email.

“The infusion of funding will propel our mission to manage satellites as efficiently as servers by utilizing AI-driven insights and operations to revolutionize the industry’s approach to spacecraft management,” Quindar said today in an announcement about the investment round.
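
Quindar hasn’t published its internals, but the “satellites as servers” idea maps onto a familiar pattern from server-fleet operations: automated health checks over streaming telemetry. Below is a minimal sketch of that pattern in Python; the battery-voltage channel, window size and threshold are illustrative assumptions, not details of Quindar’s platform.

```python
from collections import deque
from statistics import mean, stdev

def make_anomaly_detector(window=60, threshold=3.0, min_baseline=10):
    """Flag telemetry samples that drift beyond `threshold` standard
    deviations of a rolling baseline -- the kind of per-channel check a
    fleet-management platform might run continuously for each satellite."""
    history = deque(maxlen=window)

    def check(sample):
        anomalous = False
        if len(history) >= min_baseline:
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(sample - mu) > threshold * sigma
        history.append(sample)
        return anomalous

    return check

# Hypothetical usage: a battery-voltage channel with one bad reading.
check_voltage = make_anomaly_detector()
for v in [28.1, 28.0, 28.2, 28.1, 27.9, 28.0, 28.1, 28.2, 28.0, 28.1, 24.5]:
    if check_voltage(v):
        print(f"anomaly: {v} V")  # in production this would alert an operator
```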


Federally funded lab enlists AI to safeguard security

Bringing artificial intelligence to bear on issues relating to nuclear weapons might sound like the stuff of a scary sci-fi movie — but at the Department of Energy’s Pacific Northwest National Laboratory, it’s just one of the items on the to-do list.

One of PNNL’s research priorities is to identify and combat complex threats to national security, and AI can help address that priority by detecting attempts to acquire nuclear weapons or associated technology.

Nuclear proliferation detection is one of the potential applications that could get an assist from the Center for AI @PNNL, a newly announced effort to coordinate research that makes use of AI tools — including the generative AI tools that have captured the attention of the tech world over the past year or two.

“For decades we’ve been doing artificial intelligence,” center director Court Corley, PNNL’s chief scientist for AI, told me in a recent interview. “What we’re seeing now, though, is an exceptional phase shift in where AI is being used and how it’s being used.”


Pacific Northwest National Lab creates a new AI center

The Department of Energy’s Pacific Northwest National Laboratory is shining a brighter spotlight on artificial intelligence by creating the Center for AI @PNNL, but don’t expect the lab’s researchers to build a better chatbot.

Instead, the center is meant to advance AI applications that boost PNNL’s capabilities in its traditional focus areas, including scientific discovery, national security and energy resilience.

“The creation of the Center for AI @PNNL will leverage and amplify these capabilities for even greater impact in service of our nation,” PNNL Director Steven Ashby said today in a news release.

Today’s announcement coincides with the annual Conference on Neural Information Processing Systems, or NeurIPS, which is taking place this week in New Orleans. The center’s creation serves as further evidence that AI tools are rapidly transforming a wide range of scientific and technical fields.

“The time is right for PNNL to focus its AI-related efforts,” said Court Corley, PNNL’s chief scientist for AI and director of the new center. “The field is moving at light speed, and we need to move quickly to keep PNNL at the frontier.”


How AI and quantum physics link up to consciousness

Will artificial intelligence serve humanity — or will it spawn a new species of conscious digital beings with their own agenda?

It’s a question that has sparked scores of science-fiction plots, from “Colossus: The Forbin Project” in 1970, to “The Matrix” in 1999, to this year’s big-budget tale about AI vs. humans, “The Creator.”

The same question has also been lurking behind the OpenAI leadership struggle — in which CEO Sam Altman won out over the nonprofit board members who fired him a week earlier.

If you had to divide the AI community into go-fast and go-slow camps, those board members would be on the go-slow side, while Altman would favor going fast. And there have been rumblings about the possibility of a “breakthrough” at OpenAI that would set the field going very fast — potentially too fast for humanity’s good.

Is the prospect of AI becoming sentient and taking matters into its own hands something we should be worried about? That’s just one of the questions covered by veteran science writer George Musser in a newly published book titled “Putting Ourselves Back in the Equation.”

Musser interviewed AI researchers, neuroscientists, quantum physicists and philosophers to get a reading on the quest to unravel one of life’s deepest mysteries: What is the nature of consciousness? And is it a uniquely human phenomenon?

His conclusion? There’s no reason why the right kind of AI couldn’t be as conscious as we are. “Almost everyone who thinks about this, in all these different fields, says if we were to replicate a neuron in silicon — if we were to create a neuromorphic computer that would have to be very, very true to the biology — yes, it would be conscious,” Musser says in the latest episode of the Fiction Science podcast.
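
Musser’s “replicate a neuron in silicon” scenario is easier to picture with a concrete model. The sketch below simulates a single leaky integrate-and-fire neuron, the entry-level abstraction in neuromorphic computing. It is far simpler than the “very, very true to the biology” emulation Musser describes, and its parameters are purely illustrative.

```python
def simulate_lif(currents, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward
    rest, integrates input current, and emits a spike (then resets) when
    it crosses threshold. Neuromorphic chips implement large arrays of
    units like this directly in hardware."""
    v, spike_times = v_reset, []
    for t, i_in in enumerate(currents):
        v += dt * (-v / tau + i_in)   # leak + integrate
        if v >= v_thresh:             # fire and reset
            spike_times.append(t)
            v = v_reset
    return spike_times

# Constant drive yields a regular spike train (here, roughly every 20 steps).
print(simulate_lif([0.08] * 100))
```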

But should we be worried about enabling the rise of future AI overlords? On that existential question, Musser’s view runs counter to the usual sci-fi script.


AI influencers are worried about AI’s influence

What do you get when you put two of Time magazine’s 100 most influential people on artificial intelligence together in the same lecture hall? If the two influencers happen to be science-fiction writer Ted Chiang and Emily Bender, a linguistics professor at the University of Washington, you get a lot of skepticism about the future of generative AI tools such as ChatGPT.

“I don’t use it, and I won’t use it, and I don’t want to read what other people do using it,” Bender said Nov. 10 at a Town Hall Seattle forum presented by Clarion West.

Chiang, who writes essays about AI and works intelligent machines into some of his fictional tales, said it’s becoming too easy to think that AI agents are thinking.

“I feel confident that they’re not thinking,” he said. “They’re not understanding anything, but we need another way to make sense of what they’re doing.”

What’s the harm? One of Chiang’s foremost fears is that the thinking, breathing humans who wield AI will use it as a means to control other humans. In a recent Vanity Fair interview, he compared our increasingly AI-driven economy to “a giant treadmill that we can’t get off” — and during Friday’s forum, Chiang worried that the seeming humanness of AI assistants could play a role in keeping us on the treadmill.

“If people start thinking that Alexa, or something like that, deserves any kind of respect, that works to Amazon’s advantage,” he said. “That’s something that Amazon would try and amplify. Any corporation, they’re going to try and make you think that a product is a person, because you are going to interact with a person in a certain way, and they benefit from that. So, this is a vulnerability in human psychology which corporations are really trying to exploit.”

AI tools including ChatGPT and DALL-E typically produce text or imagery by breaking down huge databases of existing works and recombining the elements into products that look as if they were created by humans. That artificial genuineness is the biggest reason why Bender stays as far away from generative AI as she can.
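
To make that description concrete, the toy bigram model below captures the basic generative loop: estimate a distribution over the next token from the preceding text, sample from it, and repeat. Real systems such as ChatGPT learn those distributions with neural networks trained on vast corpora rather than lookup tables, so treat this as a deliberately simplified sketch.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Tally word bigrams -- a toy stand-in for the next-token distributions
# a large language model learns from vast amounts of existing text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start="the", max_words=9):
    words = [start]
    while len(words) < max_words:
        options = follows.get(words[-1])
        if not options:                       # no observed continuation
            break
        words.append(random.choice(options))  # sample the next 'token'
    return " ".join(words)

print(generate())  # e.g. "the dog sat on the mat and the cat"
```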

“The papier-mâché language that comes out of these systems isn’t representing the experience of any entity, any person. And so I don’t think it can be creative writing,” she said. “I do think there’s a risk that it is going to be harder to make a living as a writer, as corporations try to say, ‘Well, we can get the copy…’ or similarly in art, ‘We can get the illustrations done much cheaper by taking the output of the system that was built with stolen art, visual or linguistic, and just repurposing that.’”


AI-savvy writers do a reality check on techno-optimism

How will “The Techno-Optimist Manifesto,” venture capitalist Marc Andreessen’s paean to economic growth and artificial intelligence, play to a wider audience? The reviews are in from two award-winning writers who are familiar with the impact of generative AI on creative professions.

“I think it’s mostly nonsense,” science-fiction writer Ted Chiang said Oct. 19 at the GeekWire Summit in Seattle.

Chiang, a longtime Seattle-area resident, is best known as the author of “Story of Your Life,” the novella that was adapted into the Oscar-nominated 2016 movie “Arrival.” But he’s also won acclaim as a commentator on AI’s effects for The New Yorker and other publications. Last month, Time magazine included Chiang among the 100 most influential people in AI.

The other writer on the SIFF Cinema stage was Eric Heisserer, the screenwriter who turned Chiang’s story into the script for “Arrival.” Heisserer witnessed the debate over generative AI and the future of work up close as a member of the negotiating committee for the Writers Guild of America during its recent strike against Hollywood studios.

Both Chiang and Heisserer say AI is too often unjustly portrayed as a high-tech panacea. That claim came through loud and clear in Andreessen’s manifesto, which called AI a “universal problem solver.”

“Technology can solve certain problems, but I think the biggest problems that we face are not problems that have technological solutions,” Chiang said in response. “Climate change probably does not have a technological solution. Wealth inequality does not have a technological solution. Most of these are problems of political will. … And so Marc Andreessen’s manifesto is a prime example of ignoring all of these other realities.”


Neurophos raises $7M to create exotic chips for AI

A semi-stealthy startup called Neurophos says it’s raised $7 million in seed funding to support the development of a chip that makes use of metamaterials for heavy-duty AI applications. And although the company’s HQ is in Austin, Texas, it has plenty of connections to Seattle-area tech leaders.

Founded in 2020, Neurophos was one of the first companies to receive pre-seed support from MetaVC Partners, a metamaterials-centric venture fund backed by Microsoft co-founder Bill Gates and former Microsoft executive Nathan Myhrvold. Neurophos’ co-founder and CEO, Patrick Bowen, previously contributed his expertise to Seattle-area metamaterials ventures such as Kymeta and Lumotive.

Tom Driscoll — the founder and chief technology officer of yet another Gates-backed metamaterials venture, Kirkland, Wash.-based Echodyne — is listed on Neurophos’ website as its CTO and co-founder. Kymeta’s former CEO, Nathan Kundtz, is listed as a board member.

The aforementioned ventures all rely on the exotic properties of metamaterials — electronic arrays that are structured to bend light across a range of wavelengths, in a variety of ways, without the need for moving parts. Bowen told me that such properties could reduce the size and energy requirements of photonic chips tailor-made for artificial intelligence platforms like ChatGPT.
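
The pitch for photonic AI hardware rests on a simple observation: neural-network inference is dominated by matrix-vector multiplies, and an optical array can, in principle, perform one in a single pass of light. The sketch below shows the digital version of that workload; the layer size is arbitrary, and nothing here reflects Neurophos’ actual design.

```python
import numpy as np

# A neural-network layer is, at bottom, a matrix-vector multiply plus a
# nonlinearity. Photonic approaches aim to do the multiply in the optical
# domain -- the input vector encoded in light, the weights in a tunable
# optical array -- in one pass, rather than as millions of digital
# multiply-accumulate operations.
rng = np.random.default_rng(0)
W = rng.normal(size=(512, 512))   # layer weights (what the optics would encode)
x = rng.normal(size=512)          # input activations (the light signal)

y = np.maximum(W @ x, 0.0)        # one layer: multiply-accumulate + ReLU
print(f"{W.size:,} multiply-accumulates for a single small 512x512 layer")
```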


‘The Creator’: Get a reality check on AI at the movies

Over the next 50 years, will humanity become too attached to the artificial-intelligence agents that dictate the course of our lives? Or is forming a deep attachment the only way we’ll survive?

Those are the sorts of questions raised by “The Creator,” Hollywood’s latest take on the potential for a robo-apocalypse. It’s a subject that has inspired a string of Terminator and Matrix movies as well as real-world warnings from the likes of Elon Musk and the late Stephen Hawking.

How close does “The Creator” come to the truth about AI’s promise and peril? We conducted a reality check with a panel of critics who are familiar with AI research and the ways in which that research percolates into popular culture. Their musings are the stuff of the latest episode of the Fiction Science podcast.

Semi-spoiler alert: We’ve tried to avoid giving away any major plot points, but if you’re obsessive about spoilers, turn away now — and come back after you’ve seen “The Creator.”


How satellites and AI work together to monitor the planet

Geospatial data analysis promises to revolutionize the way agriculture, urban planning and disaster relief are done — and thanks to a variety of projects that make use of artificial intelligence, Microsoft and Seattle’s Allen Institute for AI are part of that revolution.

The Allen Institute for AI, also known as AI2, recently rolled out Satlas, a new software platform for exploring global geospatial data generated from satellite imagery. Meanwhile, Microsoft’s AI for Good Lab is working with public and private institutions in Colombia on Project Guacamaya, which uses AI tools to monitor and understand conditions in the Amazon Rainforest.
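
For a flavor of what per-pixel analysis of satellite bands looks like, here is a classic hand-crafted example: the Normalized Difference Vegetation Index (NDVI), computed from red and near-infrared reflectance. Models like those behind Satlas learn far richer features from the same kind of multispectral rasters; the tile below is random data, purely for illustration.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: healthy vegetation reflects
    strongly in near-infrared and weakly in red, so NDVI approaches 1 over
    dense plant cover and 0 (or below) over bare ground and water."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Fake 4x4 image tile: per-pixel reflectance values in [0, 1].
rng = np.random.default_rng(42)
nir, red = rng.random((4, 4)), rng.random((4, 4))
veg_mask = ndvi(nir, red) > 0.3   # crude "vegetation present" threshold
print(f"{int(veg_mask.sum())} of 16 pixels flagged as vegetated")
```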