This is a slight aside, but could someone more familiar with the field than me explain what the Neural Engine is for? Does any software currently use it, or is it something that Apple are trying to push? Is it designed to build ML models, or to apply them to data?
And perhaps more basically, what for? How much machine learning is done on user machines, as opposed to renting some cloud time? What are they anticipating will be done?
I ask these questions out of genuine curiosity; I assume I'm missing something, but this just seems like a rather wild divergence from what (very little) I knew of the ML field.
Apple has long favored running ML on device, as opposed to Google's approach of sending everything to the cloud, then mining all your data, selling it, tracking it, etc. One example is the Photos app, which does feature detection on-device, while Google Photos does it in the cloud.
It’s accessible through the CoreML framework. I know nothing about this and very little about ML in general, but I found an interesting GitHub page written by a consultant who seems to know a fair amount about it.
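To make that concrete, here's a minimal, untested sketch of what using Core ML from an app looks like, via the companion Vision framework. "MyClassifier" is a hypothetical model name; Xcode generates a Swift class like this for any .mlmodel file added to a project.

```swift
import CoreML
import Vision

// Sketch: run an image classifier bundled with the app.
// "MyClassifier" is a placeholder for an Xcode-generated model class.
func classify(_ image: CGImage) throws {
    let model = try VNCoreMLModel(for: MyClassifier().model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
    try VNImageRequestHandler(cgImage: image).perform([request])
}
```

Note that apps don't target the Neural Engine directly: Core ML decides at runtime whether a given model runs on the CPU, the GPU, or the ANE, so the hardware is mostly invisible to developers.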