
Amazon’s Alexa will ride on NASA’s Orion moon ship

Alexa, when are we arriving at the moon?

Putting Amazon’s AI-enabled voice assistant on a moon-bound spaceship may sound like science fiction (hello, HAL!). But it’s due to become science fact later this year when a radiation-hardened console rides along in NASA’s Orion deep-space capsule for the Artemis 1 round-the-moon mission.

There’ll be no humans aboard for the test flight, which will mark the first launch of NASA’s heavy-lift Space Launch System rocket. Instead, Alexa’s voice and Echo’s pulsing blue ring will interact with operators at Houston’s Mission Control for a technology demonstration created by Lockheed Martin, Amazon and Cisco.

The project is known as Callisto — a name that pays tribute to the mythological nymph who was a follower of the Greek goddess Artemis.

“Callisto will demonstrate a first-of-its-kind technology that could be used in the future to enable astronauts to be more self-reliant as they explore deep space,” Lisa Callahan, Lockheed Martin’s vice president and general manager of commercial civil space, said in a news release.

Including Alexa on the mission is particularly meaningful for Aaron Rubenson, vice president of Amazon Alexa. “The Star Trek computer was actually a key part of the original inspiration for Alexa — this notion of an ambient intelligence that is there when you need it … but then also fades into the background when you don’t need it,” he said during a teleconference.


AI makes customer service bots sound more like humans

Earlier this year, Seattle-based WellSaid Labs helped create an AI disk jockey with a voice that sounds like it’s coming from a flesh-and-blood DJ. Now WellSaid’s lifelike voice bots could be coming to a customer-service line near you.

California-based Five9 says it will incorporate WellSaid’s voice synthesis technology into its Virtual Voiceover menu of synthetic voices suitable for self-service contact centers. The new capabilities will be provided to users of the Five9 Inference Studio 7 platform at no additional cost, with wide availability planned in early 2022.

WellSaid Labs, a three-year-old startup fostered at the Allen Institute for Artificial Intelligence’s AI2 Incubator, takes advantage of artificial intelligence to produce natural-sounding synthetic voices like the ones that Five9 will give its Intelligent Virtual Agents, or IVAs.

“In our experience, the more lifelike an IVA can sound, the better the reception it will receive from the customer who is speaking with it,” Callan Schebella, Five9’s executive vice president for product management, said today in a news release. “We’re continually looking for the latest and greatest technologies to enhance the Studio platform, and we are excited to partner with WellSaid to bring this new innovation to our customers.”


This AI has the upbeat sound of a DJ down pat

As he turns from a Foo Fighters tune to the Smashing Pumpkins, Andy sounds just like your typical alternative-rock DJ — but his tag line is positively inhuman.

“Ever feel like your day just needs a shot of pick-me-up? Well, that’s what we’re here for — to help turn that frown upside down and crank the dial to 11,” he says. “Yes, I may be a robot, but I still love to rock.”

The robot reference isn’t just a nod to his canned DJ cliches: In a sense, Andy really is a robot — as in ANDY, or Artificial Neural Disk JockeY. And thanks to Seattle-based WellSaid Labs and Super Hi-Fi, an AI-centric production company in Los Angeles, ANDY could soon be coming to a streaming music service near you.


WellSaid gets a $10M boost for its synthetic voices

WellSaid Labs will have a lot more to say in the years ahead, thanks to $10 million in new investment that’ll be used to beef up the Seattle startup’s efforts to put a widening chorus of AI-generated synthetic voices to work.

The Series A funding round — led by Fuse, an early-stage venture capital firm that counts Seattle Seahawks star linebacker Bobby Wagner among its partners — follows up on $2 million in seed funding that WellSaid raised in 2019 when it was spun out from Seattle’s Allen Institute for Artificial Intelligence.

One of the investors in that earlier seed round, Voyager Capital, contributed to the newly announced Series A funding. So did Qualcomm Ventures and Good Friends.

WellSaid CEO Matt Hocking said the new funding will go toward growing the text-to-speech startup, which currently has a dozen employees and plenty of customers.


How Alexa is changing its tone during the pandemic

Manoj Sindhwani is vice president of Alexa Speech at Amazon. (AWS Video)

Alexa, do I need a coronavirus test?

That’s a query that almost certainly was not in the repertoire for Amazon’s voice assistant six months ago. But the ins and outs of the coronavirus outbreak are changing Alexa’s work habits, said Manoj Sindhwani, Amazon’s vice president of Alexa Speech.

“We’re certainly seeing certain shifts,” Sindhwani told GeekWire this week. “You see a lot of people asking about COVID-19. People are not asking about ‘How long will it take for me to get to work.’ ”

Get the full story on GeekWire.


WellSaid creates well-spoken robot voices

A screenshot illustrates how WellSaid’s voice synthesis platform could be used. (WellSaid Illustration)

We’ve got Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa and Google Assistant — so do we really need more synthesized voices to do our bidding?

Absolutely, say the founders of WellSaid Labs, a startup that’s being spun out from Seattle’s Allen Institute for Artificial Intelligence (also known as AI2).

“We’re just solving a different problem,” co-founder and chief technology officer Michael Petrochuk told GeekWire. “Alexa and Google Home are trying to solve the problem of clearly, slowly communicating — pronouncing everything the same way, in a monotone format so it could be understood by everyone.”

WellSaid, in contrast, is developing a stable of AI-powered voices customized for different contexts and sounding so lifelike that you wouldn’t believe they’re robots. During a recent video demonstration for a roomful of AI aficionados, most folks guessed that the images were generated by an algorithm, but not the voices.

Get the full story on GeekWire.