As research teams at Google, Microsoft, Facebook, IBM, and even Amazon have broken new ground in artificial intelligence in recent years, Apple has always seemed to be the odd one out. The company appeared too closed off to meaningfully integrate AI into its software: it wasn't part of the research community, and it didn't offer developer tools for others to bring AI to its systems.
That's changing. Through a slew of updates and announcements today at its annual developer conference, Apple made it clear that the machine learning found everywhere else in Silicon Valley is foundational to its software too, and that it's giving developers the power to use AI in their own iOS apps as well.
Developers, developers, developers
The biggest news today for developers looking to build AI into their iOS apps was barely mentioned on stage. It's a new set of machine learning models and application programming interfaces (APIs) built by Apple, called Core ML. Developers can use these tools to build image recognition into their photo apps, or have a chatbot understand what you're telling it with natural language processing. Apple has initially released four of these models for image recognition, as well as APIs for both computer vision and natural language processing. These tools run locally on the user's device, meaning data stays private and never needs to be processed in the cloud. This idea isn't new: even data hoarders like Google have realized the value of letting users keep and process data on their own devices.
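To make that concrete, here is a minimal sketch of what on-device image classification with Core ML and the Vision API can look like in Swift. It assumes one of Apple's published image-recognition models (Resnet50 is used here purely as an illustration) has been added to an Xcode project, which auto-generates the corresponding Swift class; the classify function name is hypothetical.

```swift
import UIKit
import CoreML
import Vision

// A sketch of on-device image classification with Core ML and Vision.
// Assumes Apple's Resnet50 model file has been added to the project,
// which generates the Resnet50 class automatically (illustrative choice).
func classify(_ image: CGImage) {
    // Wrap the Core ML model so the Vision framework can drive it.
    guard let model = try? VNCoreMLModel(for: Resnet50().model) else { return }

    // The request runs entirely on the device; the image never leaves it.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    // Vision handles scaling and cropping the image to the model's input size.
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```

Because the model and the inference both live on the phone, the photo being classified never has to be uploaded anywhere, which is exactly the privacy argument Apple is making for Core ML.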