Your next phone may have an ARM machine learning processor

ARM doesn’t build any chips itself, but its designs are at the core of virtually every CPU in modern smartphones, cameras and IoT devices.

So far, the company’s partners have shipped more than 125 billion ARM-based chips. After moving into GPUs in recent years, the company today announced that it will also offer its partners machine learning and dedicated object detection processors.

Project Trillium, as the overall effort is called, is meant to make ARM’s machine learning (ML) chips the de facto ML platform for mobile and IoT.

For this first launch, ARM is announcing two products: an ML processor for general AI workloads and a next-generation object detection chip that specializes in spotting faces, people and gestures in video at up to full-HD resolution and 60 frames per second.

This is actually ARM’s second-generation object detection chip. The first generation ran in Hive’s smart security camera.

As Jem Davies, ARM fellow and general manager for machine learning, and Rene Haas, president of the company’s IP Products Group, told me, the company decided to build these chips from scratch.

“We could have produced things on what we already had, but decided we needed a new design,” Davies told me.

“Many of our market segments are power constrained, so we needed that new design to be power efficient.”

The team could have looked at its existing GPU architecture and expanded on that, but Davies noted that, for the most part, GPUs aren’t great at managing their memory budget, and machine learning workloads often rely on efficiently moving data in and out of memory.

ARM stresses that these new chips are meant for running machine learning models at the edge (inference), not for training them.

The promise is that they will be highly efficient (3 teraops per watt) while still offering mobile performance of 4.6 teraops, and the company expects that number to go up with additional optimizations.
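As a quick sanity check on those figures, here is a back-of-the-envelope sketch, assuming the quoted 4.6 teraops is sustained at the full 3-teraops-per-watt efficiency (real-world draw will vary with workload and thermals):

```python
# Back-of-the-envelope check of ARM's quoted figures.
# Assumption: the 4.6-teraops throughput is sustained at the full
# 3-teraops-per-watt efficiency, which real silicon may not hit.
throughput_tops = 4.6            # quoted mobile performance, in teraops
efficiency_tops_per_watt = 3.0   # quoted efficiency

implied_power_watts = throughput_tops / efficiency_tops_per_watt
print(f"Implied power draw: {implied_power_watts:.2f} W")  # about 1.53 W
```

In other words, the quoted numbers imply a power budget of roughly a watt and a half, which is the kind of envelope a phone or IoT device can actually sustain.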

Finding the right balance between performance and battery life is at the heart of much of what ARM does, of course, and Davies and Haas believe the team found the right mix here.

ARM expects that many OEMs will use the object detection and ML chips together. The object detection chip could run a first pass, for example, detecting faces or objects in an image and passing their locations on to the ML chip, which then handles the actual face or image recognition.
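In software terms, that pairing would look like a simple two-stage pipeline. The sketch below is purely illustrative; detect_regions() and recognize() are hypothetical driver calls standing in for whatever API an OEM would actually expose, not ARM's own interfaces. It shows only the hand-off pattern:

```python
from dataclasses import dataclass

# Illustrative stand-ins for the two chips; detect_regions() and
# recognize() are hypothetical driver calls, not ARM's actual API.

@dataclass
class Box:
    x: int
    y: int
    w: int
    h: int

class DetectionChip:
    def detect_regions(self, frame):
        # First pass: return bounding boxes for faces/people/gestures.
        # Stubbed here as a single face-sized region.
        return [Box(x=120, y=80, w=64, h=64)]

class MLChip:
    def recognize(self, frame, box):
        # Second pass: run actual recognition, but only on the region
        # the detection chip flagged. Stubbed as a fixed label.
        return "face"

def process_frame(frame, detector, recognizer):
    """Cheap detection first; expensive recognition only on hits."""
    return [(box, recognizer.recognize(frame, box))
            for box in detector.detect_regions(frame)]

print(process_frame(frame=None, detector=DetectionChip(), recognizer=MLChip()))
# -> [(Box(x=120, y=80, w=64, h=64), 'face')]
```

The design point is that the ML chip only ever sees the regions the detection chip flagged, rather than scanning every pixel of every frame, which is where the power savings would come from.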

“OEMs have ideas, they have prototype applications and they are just waiting for us to provide that performance to them,” Davies said.

ARM’s canonical example for this is an intelligent augmented reality scuba mask (Davies is a certified diver, in case you were wondering).

This mask could tell you which fish you are seeing as you are bobbing in the warm waters of Kauai, for example.

But the more realistic scenario is probably an IoT solution that uses video to watch over a busy intersection, where you want to know whether roads are blocked or whether it’s time to empty a trash can that has been getting a lot of use lately.

“The idea here to note is that this is fairly sophisticated work that’s all taking place locally,” Haas said, and added that while there is a fair amount of buzz around devices that can make decisions, those decisions are often being made in the cloud, not locally.

ARM thinks that there are plenty of use cases for machine learning at the edge, be that on a phone, in an IoT device or in a car.

Indeed, Haas and Davies expect that we’ll see quite a few of these chips in cars going forward.

While the likes of Nvidia are putting supercomputers into cars to power autonomous driving, ARM believes its chips are great for doing object detection in a smart mirror, for example, where there are heat and space constraints.

At the other end of the spectrum, ARM is also marketing these chips to display manufacturers that want to be able to tune videos and make them look better based on an analysis of what’s happening on the screen.

“We believe this is genuinely going to unleash a whole bunch of capabilities,” said Haas.

We’ve recently seen a number of smartphone manufacturers build their own AI chips. That includes Google’s Pixel Visual Core for working with images, the iPhone X’s Neural Engine and the likes of Huawei’s Kirin 970. For the most part, those are all home-built chips. ARM, of course, wants a piece of this business.
