
How Core ML will change mobile app development

by Basil Shikin on Jul 10, 2017

ARKit may have been the flashy new technology highlighted at this year’s WWDC, but it was Core ML that really showed Apple’s vision of smarter software. Apple has historically been a conservative company, never jumping into new technologies until they’re mature. With the introduction of Core ML, Apple has solidified machine learning as the future of app development and not just a fad.

It shouldn’t come as a surprise that Apple sees machine learning as integral to its products and services. Apple purchased Siri back in 2010 for a reported $200 million, and the voice assistant has since grown into an essential part of its software experience. Siri is now on iPhones, iPads, Macs, AirPods, and soon, the HomePod. Although the launch of Core ML seems like the first time Apple has made machine learning easily accessible to developers, the company already offered high-level APIs for speech recognition with SiriKit, which was introduced in iOS 10.

With the introduction of Core ML, Apple has taken away the arduous process of building platform-level plumbing for deploying machine learning models, letting apps leverage the full computational resources of millions of existing devices.

What’s possible with Core ML

With iOS 11 and Core ML, Apple is making it easier than ever for developers to integrate their own machine learning models into apps, in addition to using ready-made pre-trained models. Developers will be able to build tools into apps that anticipate users’ needs. This will be useful for offering contextual suggestions, like suggesting an event to add to your calendar when a friend mentions going to a party in a chat app.
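As a rough sketch, running an image-classification model with Core ML and the Vision framework in iOS 11 takes only a few lines of Swift. Here, "MobileNet" stands in for whatever model you add to your Xcode project (Xcode generates a Swift class for it automatically), so treat the names as placeholders:

import UIKit
import CoreML
import Vision

// A minimal sketch of on-device image classification with Core ML + Vision.
// "MobileNet" is a placeholder for any image model added to the project.
func classify(image: UIImage) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: MobileNet().model) else { return }

    // The request runs the model and hands back ranked classification labels.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    // Vision takes care of scaling and cropping the image to the model's input size.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}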

Core ML will influence how applications interact with Siri, the camera, Apple’s predictive QuickType keyboard, and other services. For example, Apple also introduced the Vision framework, which enables face detection and tracking, barcode and QR code scanning, object detection, and much more. This will allow any app to track users’ faces even more effectively than Snapchat and Facebook do today.
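To give a flavor of the Vision framework itself, here is a minimal sketch of its built-in face detection; the function and variable names are placeholders:

import UIKit
import Vision

// A minimal sketch of Vision's built-in face detection (iOS 11).
func detectFaces(in photo: UIImage) {
    guard let cgImage = photo.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        for face in faces {
            // boundingBox is in normalized coordinates (0 to 1, lower-left origin).
            print("Face at \(face.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}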

These examples are just scratching the surface of what’s possible with Core ML, and only time will tell what types of apps developers will come up with.

Machine learning for everyone

Core ML will bring the power of artificial intelligence beyond the iPhone to all of Apple’s products. Apps will become smarter across iPhones, iPads, and Macs. The Apple Watch will learn to give more contextually relevant notifications.

Like ARKit, Core ML is designed to run on existing devices, which means users won’t have to upgrade to expensive and specialized hardware in order to enjoy the benefits of device-based machine learning. Millions of iPhone users will be able to immediately benefit from apps that have integrated machine learning.

Fast and secure

While more powerful machine learning frameworks exist, Core ML is designed to run locally on a user’s device. This means user data never leaves the device, improving application security. It also means apps won’t rely on an internet connection. Since Apple controls both its hardware and its software, Core ML will undoubtedly be highly optimized for mobile performance.
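To illustrate what running locally means in practice, here is a minimal sketch that loads a compiled model straight from the app bundle and queries it without ever touching the network. The model name "Sentiment" and the "text"/"label" feature names are hypothetical:

import Foundation
import CoreML

// A minimal sketch of querying a compiled Core ML model directly, entirely on-device.
// "Sentiment.mlmodelc" and the "text"/"label" feature names are hypothetical.
func predictSentiment(for text: String) -> String? {
    guard let url = Bundle.main.url(forResource: "Sentiment", withExtension: "mlmodelc"),
          let model = try? MLModel(contentsOf: url) else { return nil }

    // Wrap the input in the feature-provider type Core ML expects.
    guard let input = try? MLDictionaryFeatureProvider(dictionary: ["text": text]),
          let output = try? model.prediction(from: input) else { return nil }

    // Nothing above makes a network request; all inference happens on the device.
    return output.featureValue(for: "label")?.stringValue
}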

What used to take months can now be accomplished in any iOS app within days, freeing up developers to work on their apps instead of building machine learning capabilities from the ground up.

It’s still early for device-based machine learning, but Core ML clearly has the ability to change how we interact with our devices and apps. And with Apple’s massive market share, machine learning is here to stay. To get started with integrating Core ML into your app, be sure to check out Apple’s documentation.

Basil Shikin is AppLovin’s VP of Engineering.

We’re hiring! Apply here.