- 27 July 2018
- Benny Har-Even
Recently, we ran a webinar entitled, “Enabling efficient implementation of neural networks in smart cameras”. If you missed it, it’s worth checking out, as we take a close look at the smart camera market and the need to embed neural network accelerators in edge devices.
For those with an eye on the industry, this move to placing intelligence in edge devices is notable, as only a few years ago it seemed ‘the cloud’ was everything. It was the buzzword that never seemed to go away, and with good reason: off-loading computation to an external service makes many tasks cheaper to perform, and with a large number of established cloud service providers in the market, there’s no shortage of choice. So, surely that’s all wrapped up? Why are we even talking about placing processing back into the edge device? Isn’t that a backward step? Can’t we rely on the cloud for everything?
Bandwidth, privacy, latency
Well, it’s not as simple as that. Many applications can’t function unless they’re connected to the cloud, and they generate a lot of data, which requires a lot of bandwidth. That places a strain on networks, especially when it might turn out that we don’t need all of that data (for example, why send pictures of empty roads?). It’s far more efficient to deal with it at the source.
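To make the bandwidth point concrete, here is a minimal sketch of the kind of filtering an edge device could do locally, uploading a frame only when the scene has actually changed. The function name, thresholds, and toy frames are all illustrative assumptions, not any particular camera’s API:

```python
import numpy as np

def should_upload(prev_frame: np.ndarray, frame: np.ndarray,
                  threshold: float = 0.02) -> bool:
    """Return True only when enough pixels changed between frames.

    A hypothetical edge-side filter: frames of an unchanged scene
    (e.g. an empty road) are dropped locally instead of being
    streamed to the cloud.
    """
    # Widen to int16 so the subtraction of uint8 frames can't wrap.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.mean(diff > 25)   # fraction of pixels that moved
    return bool(changed > threshold)

# Static scene: nothing worth sending.
empty_road = np.zeros((120, 160), dtype=np.uint8)
print(should_upload(empty_road, empty_road))   # False

# A vehicle enters the frame: worth uploading.
with_car = empty_road.copy()
with_car[40:80, 60:120] = 200
print(should_upload(empty_road, with_car))     # True
```

In a real deployment this check would of course be a neural network rather than a pixel difference, but the principle is the same: decide at the source, and only spend bandwidth on frames that matter.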
Then there are the privacy concerns. The term ‘hacker’ may be, well, a bit hackneyed, but our smart speakers and smart cameras are capturing a huge amount of data that is inherently private and sensitive. By analysing that data locally and sending only what is needed to the cloud, you remove a huge attack surface for those looking to snoop or steal.
Even with the low latencies promised by 5G, for devices that rely on near-instant decision making, the cloud will introduce an unacceptable delay. If your drone is travelling through a crowded environment, it needs to process and negotiate obstacles pretty much immediately; if it can’t, it won’t be able to travel at speed. Then there’s the classic current use case: autonomous cars. It’s a no-brainer that when someone has just stepped out in front of the vehicle, it can’t wait for a response from a cloud that might or might not be reachable – it needs to make a decision instantaneously.
In the webinar, we touch upon some of the use cases for AI in smart cameras, and it got me thinking about some of the other interesting possibilities. There certainly seems to be no stopping AI from infiltrating every area of our lives. It’s even possible that AI will one day be able to write this post (no doubt some would argue that this would be an improvement!).
Already, companies such as Reuters are investing heavily in AI-supported journalism through a tool called Lynx Insight – though the key word here is ‘supported’. The AI doesn’t write the stories; it churns through large data sets, spots something unusual or interesting, and flags that information to a journalist – noting, for example, that a stock price has moved sharply, or other changes in a given market. After all, neural networks are highly adept at spotting these sorts of patterns faster than a human, but only a human can explain why they matter.
Moving specifically to smart cameras, there are interesting use cases in a number of categories, both commercial and consumer.
Starting with the former, you have cameras that can do the things you would expect, such as identifying license plates. The next step would be for the camera to recognise the whole car, or even its occupants, automatically – ideal for security at an airport. Indeed, it’s already possible for a person to be picked out of a huge crowd by smart camera analysis, as this individual in China discovered.
There’s also the ability to recognise an abandoned package – noting when something has been placed and then suspiciously discarded – again with obvious benefits for security in a busy public place such as an airport.
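A simple way to picture the abandoned-package logic: assuming an object tracker upstream that reports when an item first appeared and whether the person who left it is still in frame (all names and thresholds below are hypothetical, purely to illustrate the rule), the flagging step itself can be tiny:

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    """What a hypothetical upstream tracker might report per object."""
    object_id: int
    first_seen: float     # seconds since the stream started
    owner_nearby: bool    # is the person who placed it still in frame?

def is_abandoned(obj: TrackedObject, now: float,
                 dwell_limit: float = 60.0) -> bool:
    """Flag a stationary object whose owner has been gone too long."""
    return (not obj.owner_nearby) and (now - obj.first_seen) > dwell_limit

bag = TrackedObject(object_id=7, first_seen=100.0, owner_nearby=True)
print(is_abandoned(bag, now=200.0))   # False: the owner is still there

bag.owner_nearby = False
print(is_abandoned(bag, now=130.0))   # False: only 30 s unattended
print(is_abandoned(bag, now=200.0))   # True: 100 s unattended
```

The hard part, of course, is the tracking and re-identification that feeds a rule like this; the rule itself is the easy bit.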
Retail analytics is another important area. Take Amazon’s checkout-free stores (as in there are no checkouts, not that the products are free), where shoppers can ‘grab and go’: they pick up what they want and simply walk out. Cameras identify each shopper, who is then automatically charged for what they take.
Then there are fast-food restaurants in China that use cameras to make menu suggestions based on your age and gender, and systems that will enable planners to optimise a store layout by tracking customer movements. You could even recognise a VIP or high-spending customer and offer them personal service and attention. Suddenly, the personalised advertising scene in Minority Report doesn’t seem so far-fetched.
Keeping a lookout
You won’t be able to escape cameras even in the car. In fact, cockpit cameras are only going to become more common, with a key role to play in Advanced Driver Assistance Systems (ADAS), and specifically in driver monitoring. Gaze tracking is used to ensure the driver is awake and paying attention to the road; if the driver appears intoxicated, the system could disable the vehicle or, if safe to do so, engage an autonomous driving mode. It will also help with the smooth handover between autonomous and driver-controlled states by assessing the attentiveness of the driver.
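As a sketch of how a driver-monitoring system might act on gaze-tracking output, escalation could be as simple as the rule below. The state names and timing thresholds are purely illustrative assumptions, not any real ADAS specification:

```python
def monitor_gaze(seconds_off_road: float) -> str:
    """Escalate as the driver's gaze stays off the road.

    Hypothetical tiers: a brief glance away is fine, a longer one
    triggers a warning, and a sustained lapse hands over to the
    assistance system (if it is safe to do so).
    """
    if seconds_off_road < 2.0:
        return "ok"
    if seconds_off_road < 5.0:
        return "audible_warning"
    return "engage_assist"

print(monitor_gaze(1.0))   # ok
print(monitor_gaze(3.0))   # audible_warning
print(monitor_gaze(8.0))   # engage_assist
```

The interesting engineering is in estimating `seconds_off_road` robustly from the camera feed, which is exactly where an on-device neural network accelerator earns its keep.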
And then there’s the home. For many people the home is the place where adding intelligence at the edge will have the most tangible impact.
As the technology becomes more affordable, more of us will become accustomed to having cameras in the home for security and peace of mind. However, the way many of these work now is relatively primitive. They can alert us when they detect motion, but the camera itself – or rather the supporting software – has no understanding of what it sees. The new generation of cameras will be able to recognise family members and do ‘smart stuff’, such as sending a notification when someone has returned home or departed, or warning when the kids are running amok! We will also see that intelligence applied to the footage, so you can ask about a specific incident using your voice and the software will show you the relevant moments. For example, you could say, “let me know if the kids aren’t home by four.”
However, as this review proves, it’s still early days for this sort of intelligence: it can take a couple of weeks for a camera to learn someone’s face and start providing useful information. You also have to pay significant fees for a cloud service to enable those ‘smarts’. Wouldn’t it be better if that work could be done locally? It might cut off a potential revenue stream, but if much or most of the processing could be done on the device it would be far more efficient, saving time and money in terms of both bandwidth and power.
Entirely AI-powered cameras such as Google Clips have received mixed reviews, but it’s early days for the technology
We also have devices such as the Amazon Echo Look, which uses a camera to analyse your clothing and make recommendations based on machine learning. Then there are clip-on cameras with the ‘intelligence’ entirely built in, designed to recognise when you or your family are doing something interesting and take pictures automatically: literally, an AI-powered camera. Again, the review notes that it doesn’t work very well, but this is just the start: with better, speedier, more power-efficient designs, it’s surely only a matter of time before the algorithms improve and the technology achieves better results.
There is a plethora of creative uses to which neural networks can be applied. They now identify people and objects in photos as a matter of course. Taken any pictures of a cat or a dog? Type ‘cat’ or ‘dog’ into your phone’s photo app search bar and see what you get. And while the cameras in our phones are getting inherently better, with more sensitivity to light and better processing, apps such as Phancer will take your regular photos and elevate them to DSLR-level results. More photography ‘cheats’, such as the fake bokeh effects that many high-end phones now offer, are possible thanks to the power of neural networks.
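That photo-search trick boils down to indexing classifier output so that queries can run locally. A toy sketch, assuming a classifier elsewhere has already produced labels for each photo (the function names and photo IDs are made up for illustration):

```python
from collections import defaultdict

# Hypothetical label index of the kind a phone's photo app might keep:
# map each lower-cased label to the photos it was detected in.
index: defaultdict[str, list[str]] = defaultdict(list)

def add_photo(photo_id: str, labels: list[str]) -> None:
    """Record the labels a classifier assigned to one photo."""
    for label in labels:
        index[label.lower()].append(photo_id)

def search(query: str) -> list[str]:
    """Case-insensitive lookup; no cloud round trip needed."""
    return index.get(query.lower(), [])

add_photo("IMG_0001", ["cat", "sofa"])
add_photo("IMG_0002", ["dog", "park"])
print(search("cat"))    # ['IMG_0001']
print(search("bird"))   # []
```

The classification itself is the expensive step, and running it on-device rather than in the cloud is precisely the job of an embedded neural network accelerator.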
It’s very apparent, then, that the use cases for neural networks in edge devices, and specifically cameras, are wide-ranging, but that it’s very much early days for the technology. Imagination certainly stands ready for this new age, and its PowerVR Series2NX hardware is an ideal solution for delivering it, offering very high performance at very low power consumption. To find out more, we’d definitely recommend checking out the webinar, and be sure to take a look at our blog posts on the Series2NX architecture and the two recently released cores, the PowerVR AX2185 and the PowerVR AX2145.