Siri in iOS 12: Apple’s Big Leap

I think Apple made a big leap with Siri at WWDC this year, and it’s notable for being a different leap than we were expecting. Some are reading it as disappointing, others as revolutionary. Why are some people totally jazzed? Take a look at this quick Siri action in Drafts:

And check out me trying to resize a couple images with voice in Siri and Google Assistant:

Siri has numerous problems today: voice recognition, limited functionality, and unpredictable results are the big ones I can think of right now. These are all areas in which Siri lags behind its only real competitor, Google Assistant. Apple could have focused on improving voice recognition or making general queries more robust, but instead they focused on raw functionality and user control. And they did this by leveraging their greatest asset: third party apps and a developer base keen on keeping up to date with iOS’s latest and greatest tech.

Apple is not going to catch up with Google in terms of general query processing. They’re getting better, but answers to “let me Google that” sorts of questions will simply never be as good as Google’s unless Apple builds its own search engine…and travels back in time to start building it 10 years ago. This is not a space they can win, and I think they know that, so they didn’t address it up front at WWDC.

Instead, Apple has a rich library of third party apps that people rely on every day, and they found a way to make Siri work better with those apps than anything possible on Google Assistant. By opening Siri up to perform any action from any app at any time, they have created separation between themselves and the competition for the first time in years. Come this fall, Siri will be able to do literally thousands more things than it can today. With very little work on developers’ part, they will be able to integrate with Siri, and I would expect most of your favorite apps to be updated right as iOS 12 launches. Not only that, power users will have tools like Shortcuts and Drafts to build their own custom actions that they can trigger with Siri as well. If Siri started with something like 6 intents in iOS 10 and expanded to 8 or 9 in iOS 11, it’s shooting up to infinity in iOS 12. The sky appears to be the limit.

Now this initial implementation is a little less Siri-like than what we’ve seen before. So far Apple’s attitude towards Siri has been “users shouldn’t have to set it up, and they should be able to speak to it with natural language,” but that’s less the case with these new changes. Instead of saying what you want and having Siri parse the request for you, you’ll have to say very specific things. So if you set up a Siri action to post to Instagram as “post this,” you won’t be able to say “post this to Instagram” or “post a photo to Instagram,” because those aren’t your exact phrase. That’s a limitation, but it is precisely what is enabling Siri to grow so tremendously in functionality in just one year. Also, as my friend Dave on Twitter suggests, this could be a first step towards Siri doing more automatic parsing for us in a year or two.
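For developers curious what that integration looks like, here’s a minimal sketch of how an app might expose an action to Siri with the iOS 12 shortcut APIs. The activity type and titles below are hypothetical placeholders, and real custom intents would typically be defined in an Intents definition file; this only illustrates the phrase mechanism described above, where the user records their own trigger phrase rather than Siri parsing free-form speech:

```swift
import Foundation
import Intents

// Hypothetical example: an app donates a "post this" action so Siri can
// surface it as a shortcut. The user records their own trigger phrase
// (in Settings, or via INUIAddVoiceShortcutViewController);
// suggestedInvocationPhrase is only a hint shown at recording time.
let activity = NSUserActivity(activityType: "com.example.photoapp.post")
activity.title = "Post this"
activity.isEligibleForSearch = true
activity.isEligibleForPrediction = true          // new in iOS 12
activity.suggestedInvocationPhrase = "Post this" // hint, not a parsed command

// In a view controller, assigning the activity and making it current
// donates the shortcut each time the user actually performs the action:
// viewController.userActivity = activity
```

Because each donation is tied to one exact recorded phrase, Siri can trigger it with no natural-language parsing at all, which is why near-arbitrary app actions become possible in one release.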

Siri needs to continue to improve in many ways. It should recognize multiple voices on HomePod, it should be able to set more than one timer at a time, it should be much better at parsing the words we say to it, and it should be more consistent across all devices. It must improve in these areas as well, but those are all things Apple needs to do to play catch-up; none of them will separate Apple from the pack. So while I hope to see improvements in those areas soon (don’t make us wait for iOS 13), I’m impressed with Apple for finding a way to make their voice assistant take the lead in some areas, not just play catch-up.