After more than two hours of my life spent watching the Apple keynote, I felt somewhat disappointed: every year at its developer conference Apple delivers astonishing news to devs, enabling them to exploit new and unforeseen ideas. Not this time. For the first time since the lottery began, I thanked God for not being selected for a ticket.
My disappointment was so deep that I couldn't even read any blog post about WWDC. As an iOS developer, I felt betrayed. I wrote to some friends while traveling home and they gave me a very different perspective: they were thrilled by the news delivered; moreover, they were happy to finally hear something concrete after “years of bullshit about platform and things devs can do with iOS, blah, blah, ..”.
Then I understood.
It may well be that, while WWDC is a developer conference, the Apple Keynote was not intended for developers.
Yesterday, having cooled down, I decided to give Tim & folks a second shot and watched the “Platform State of the Union” keynote. It was good, almost great. Not the kind of mind-blowing news we are accustomed to from Apple, but it was definitely a session explaining what iOS 11 means to the developer community. Let me recap a few highlights:
This has been one of Apple's biggest bets on developers: innovating the way a dev is introduced to coding. And they did it: Swift Playgrounds is a REPL (Read-Eval-Print-Loop) console that can be used to train both young kids and experienced developers in the Swift language. And yesterday it became even bigger: Playgrounds can directly connect to Bluetooth drones, robots and IoT devices to control them. Manufacturers are developing templates for Swift Playgrounds that let devs start writing working code in no time. Pay attention here: Playgrounds is rapidly becoming a lightweight IDE for Swift development.
Playground is going to be an IDE with a marketplace of templates. Cool.
The biggest announcement was “the source editor has been completely rewritten from the ground up”. Period. Everything else is a variation on this huge innovation:
- issue reporting has been improved and now supports applying multiple fixes at the same time
- opening files is 3x faster, scrolling reaches 60fps and build time has been considerably reduced thanks to tool management optimizations
- source code is tokenized, and the editor performs a semantic analysis of your code, suggesting the best action to take or contextual refactorings such as “extract method,” “extract class” and so on
- refactoring now works correctly (with a helpful code folding preview feature) throughout the project, for both Objective-C and Swift
- wireless deployment: you can deploy an app from Xcode to any iOS device registered in your account
- the Main Thread Checker prevents your code from calling UI APIs from threads other than the main one
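The kind of bug the Main Thread Checker flags is easy to write by accident. A minimal sketch (the URL and label are illustrative, not from the talk):

```swift
import UIKit

final class DownloadViewController: UIViewController {
    @IBOutlet var statusLabel: UILabel!

    func refresh() {
        let url = URL(string: "https://example.com/status")!
        URLSession.shared.dataTask(with: url) { data, _, _ in
            // Wrong: this completion handler runs on a background queue,
            // so a direct UIKit call here is what the checker flags:
            // self.statusLabel.text = "Done"

            // Right: hop back to the main queue before touching the UI.
            DispatchQueue.main.async {
                self.statusLabel.text = "Done"
            }
        }.resume()
    }
}
```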
The upcoming major release of the Swift language brings some improvements (notably to the String API) along with a native JSON parser/serializer.
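The new serialization support centers on Swift 4's Codable protocol; a minimal sketch (the struct below is illustrative):

```swift
import Foundation

// Swift 4's Codable protocol: the compiler synthesizes JSON
// (de)serialization for plain structs like this one.
struct Conference: Codable {
    let name: String
    let year: Int
}

let wwdc = Conference(name: "WWDC", year: 2017)
let data = try! JSONEncoder().encode(wwdc)                            // struct -> JSON bytes
let decoded = try! JSONDecoder().decode(Conference.self, from: data)  // and back
```

No hand-written parsing code, no third-party library: conformance to Codable is enough for the round trip.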
iOS 11 introduces a new set of APIs for drag and drop between apps: the implementation already works out of the box for text and URLs and can be easily adopted through the new interaction and coordinator delegates.
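As a sketch, the receiving side of the new API comes down to attaching a UIDropInteraction and implementing its delegate (the view controller below is a hypothetical example for dropped text):

```swift
import UIKit

// A minimal sketch: a view controller that accepts text dropped
// from another app via the new UIDropInteraction API.
final class NotesViewController: UIViewController, UIDropInteractionDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addInteraction(UIDropInteraction(delegate: self))
    }

    func dropInteraction(_ interaction: UIDropInteraction,
                         canHandle session: UIDropSession) -> Bool {
        // Only accept sessions that carry plain text.
        return session.canLoadObjects(ofClass: NSString.self)
    }

    func dropInteraction(_ interaction: UIDropInteraction,
                         sessionDidUpdate session: UIDropSession) -> UIDropProposal {
        return UIDropProposal(operation: .copy)
    }

    func dropInteraction(_ interaction: UIDropInteraction,
                         performDrop session: UIDropSession) {
        session.loadObjects(ofClass: NSString.self) { strings in
            // What to do with the dropped text is app-specific.
            print(strings)
        }
    }
}
```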
The main (astonishing) news is that iOS now offers a set of Vision and Natural Language Processing APIs available locally, without needing to rely on cloud services (Google, are you listening?). The Vision APIs offer a set of ready-made algorithms:
- Face and landmark detection
- Rectangle detection
- Text Detection
- Object Tracking and Classification
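As a sketch, the first of these — face detection — boils down to a single Vision request (assuming `cgImage` holds the photo to analyze):

```swift
import Vision

// A minimal sketch of on-device face detection with the new
// Vision framework: one request, one handler, no cloud round trip.
func detectFaces(in cgImage: CGImage) throws {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox is expressed in normalized coordinates (0...1).
            print("Face at \(face.boundingBox)")
        }
    }
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```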
Apple's proposed architecture is layered, starting from Accelerate and Metal at the bottom, up to CoreML (Machine Learning) and domain-specific frameworks such as the Vision APIs and the NLP APIs already available in iOS.
Going deeper, CoreML ships with ready-made implementations of state-of-the-art algorithms.
But best of all is CoreML's ability to import custom machine learning models trained with Caffe, one of the industry-standard deep learning frameworks.
Moreover, the importer also handles automatic hardware selection depending on the model and can split network layers between CPU and GPU. It would be great to extend this idea to the cloud/server side and, given a model, be able to split its computation between cloud and devices. This could lead to a whole new class of applications where not only performance but also data security is guaranteed, by keeping non-anonymized layers on the device and sending only anonymized data to the cloud.
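Once a Caffe model has been converted to the .mlmodel format, Xcode generates a Swift class for it that plugs directly into Vision. A minimal sketch — `FlowerClassifier` is a hypothetical generated model class, not one Apple ships:

```swift
import CoreML
import Vision

// A minimal sketch of running an imported CoreML model on an image.
// `FlowerClassifier` stands in for whatever class Xcode generates
// from your converted .mlmodel file (the name is an assumption).
func classify(_ cgImage: CGImage) throws {
    let model = try VNCoreMLModel(for: FlowerClassifier().model)

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let best = (request.results as? [VNClassificationObservation])?.first
        else { return }
        print("\(best.identifier): \(best.confidence)")
    }

    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```

Note that the Swift code never mentions CPU or GPU: the hardware selection and layer placement described above happen inside the framework.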
Finally, Apple showed that all these features coming with iOS 11 can be combined to build state-of-the-art solutions, such as an Augmented Reality platform. ARKit is a framework capable of estimating distances, detecting planes and measuring ambient light, providing astonishing results in augmented reality applications: objects are placed correctly (precision is 5%) and blended seamlessly into the surrounding environment. The biggest news is that the technology needed for AR is already widely distributed across the last two generations of Apple devices, making ARKit an interesting solution with wide adoption from day one.
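Getting those capabilities running is deliberately simple. A minimal sketch, assuming an ARSCNView already exists in the UI:

```swift
import ARKit

// A minimal sketch: start a world-tracking AR session with
// horizontal plane detection on an existing ARSCNView.
func startAR(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal
    sceneView.session.run(configuration)
}

// ARKit also estimates ambient light, so virtual objects can be
// lit consistently with the real scene around them.
func ambientIntensity(of session: ARSession) -> CGFloat? {
    return session.currentFrame?.lightEstimate?.ambientIntensity
}
```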
At the end of the day, this WWDC started as a non-developer conference but quickly moved into the main topics: away from the hype, yet certainly of huge relevance for the coming years.
We could also discuss the new MacBook, the iMac Pro and the upcoming app improvements for the iPad Pro, but that is a story for another day.