A few weeks ago, at Google I/O, company CEO Sundar Pichai demoed an incredible advance in AI and voice technology. The demo involved a new AI assistant making phone calls to two businesses — a hairdresser and a restaurant — to make an appointment and book a reservation, respectively. You can see the original demo below, but for those of you who don’t have access to video, the calls are impressive because the AI responds (and sounds) far more human than your typical voice assistant.
It pauses. It says “Umm,” the way a human might while considering options. When the reservation-taker at the restaurant misunderstands the AI’s request, the AI doesn’t start spouting nonsense. It doesn’t even flail. Both demos are impressive, but the restaurant demo is, I think, the one that made people sit up and blink. There were also oddities in the recordings, though, that I noticed even at the time. Now Axios has written a bit more on the event, including some questions that Google apparently doesn’t want to answer. First, here’s the video:
Note the oddities:
Axios called over two dozen hair salons, including some in Mountain View, and every business promptly gave its name when contacted. This is standard procedure. It’s extremely rare to call a place of business and not be given the name of the business, the name of the speaker, or both. Even something as simple as “This is Lisa, how may I help you?” tells the person on the other end of the line that they’ve reached a company rather than a personal phone. Yet in Google’s recordings, neither business identifies itself, even though Pichai specifically states: What you’re going to hear is the Google assistant actually calling a real salon to schedule an appointment for you. Let’s listen.
Google has clammed up on the topic. It won’t say whether the calls were edited, even just to remove business names and identifying information, and it refuses to disclose anything else about the demo at all.
Does It Matter?
There are two ways to look at this situation. On the one hand, the call technology was impressive as hell. The AI did respond with believable pauses. It didn’t error out when presented with flustered communication. Assuming that the AI was actually generating those responses itself, that’s still an impressive achievement in voice replication (and we’ve covered Google’s work in this area before).
But Sundar Pichai didn’t pitch this demo as a genuinely impressive advance in voice technology. He was demoing (or claiming to demo) a situation in which an AI assistant could respond to unexpected conversational prompts, confusion, and real-world scenarios. And it’s entirely possible that the reason Google partially or completely staged the demo (and therefore lied in its framing of the presentation) is that it can’t guarantee the person on the other end of the phone will have an accent its AI can understand. It may not be able to guarantee that the conversation won’t take a turn its AI can’t handle. And it may not be able to promise that its AI will understand the information provided, depending on how that information is verbally ordered. This type of “fuzzy” processing is something human brains are very, very good at, and it’s an area where AI has generally struggled.
But here’s one thing we do know. When companies pull off major breakthroughs, especially in an area like artificial assistant technology, they typically can’t wait to show them off in as many scenarios as possible. When Apple launched Siri or Microsoft launched Cortana, they showed off their respective capabilities at great length. To this day, Microsoft pushes regular Cortana news about the features and capabilities of the platform.
All we have on Google Duplex, in contrast, is one canned AI demo shown at a Google event, under conditions that raise real questions about whether the event was staged. Google’s refusal to answer questions about its demo doesn’t leave it looking good here. If Google actually staged this event, then we withdraw our earlier remarks. The company didn’t pass the Turing Test at all. It just demonstrated a theoretical scenario that might one day lead to such an achievement.