Welcome to the second in a series of articles where we explore how Imagination and Humanising Autonomy, a UK-based AI behaviour company, are teaming up to deliver practical, real-world AI-driven active safety. This time, we talk to Ron Pelley, Vice President, Commercial at Humanising Autonomy, about their mission, mantras, and the unlikely meeting that started it all.
You can follow the rest of the series here: Part 1
Imagination and Humanising Autonomy recently partnered to take Humanising Autonomy's behavioural AI software and validate it on Imagination's low-power discrete neural network accelerators (NNAs) for use in automotive. Over the last few months, the validation has been hugely successful and is paving the way for commercial success across the automotive sector. With that in mind, we spoke to Ron Pelley, Vice President, Commercial at Humanising Autonomy, about the company's mission, its mantras, and the unlikely meeting that started it all.
Tell us a little bit more about Humanising Autonomy, where did it all begin?
Ron: Humanising Autonomy was founded in 2017 by our three founders: Maya Pindeus, Leslie Nooteboom and Raunaq Bose, who met whilst studying at Imperial College London. They shared an interest in how humans interact with the urban landscape and how that interaction might look in a future with autonomous vehicles.
Their initial work was on a system called Blink, which went on to be nominated for a prestigious START Award. The system was designed around gestures: a pedestrian could signal to an autonomous vehicle that they were going to cross the street, or ask the vehicle to stop.
A technical demonstration of the Blink sensor in action.
They wanted to take this concept and push it even further. What if the system could be trained to understand human beings so well that they would not need to make a conscious effort to signal the vehicle to stop? Instead, it could read their intent and identify their actions to improve safety for both pedestrians and other road users.
After 2017, there was a great deal of interest in Humanising Autonomy's work from across the automotive industry and we secured over £5m in seed funding. With that funding, the team grew and I joined the company to lead the commercialisation of our software.
Since then, it has been an upward journey: as a company, we have gone from strength to strength and have continued to see interest within the automotive space, across both autonomous and human-driven vehicles, as well as beyond automotive entirely.
What are the fundamental principles behind Humanising Autonomy’s behavioural AI software?
Ron: Humanising Autonomy's software is image-based. We use computer vision algorithms and techniques to detect and classify objects, including pedestrians and other road users. Where we then differentiate is in our approach to AI, which includes some well-understood techniques but also draws on behavioural science. Our team includes people with strong backgrounds in behavioural psychology, which allows the software to detect and derive insights from both observable and inferable behaviours.
Essentially, what we are doing is going beyond physics. Other systems might look at a group of pedestrians at a crossing and conclude only that any of them may cross; out of that group of maybes, we provide a much more accurate prediction of who actually will.
This combination of observable physical actions and more subtle, inferable cues creates much safer and smoother systems. In the autonomous vehicle (AV) space, false positives are a particular challenge: vehicles either stop and start too frequently because they are over-sensitive in the name of safety, or they end up unable to respond correctly. In benchmarking comparisons, Humanising Autonomy's software has delivered markedly better results with fewer false positives.
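To make the idea of going beyond physics concrete, here is a minimal, hypothetical sketch of such a two-stage approach: detect pedestrians first, then score each one's crossing intent by fusing weighted observable and inferable cues. The class names, function names and cue weights below are purely illustrative assumptions, not Humanising Autonomy's actual code or API.

```python
# Illustrative sketch only: detection followed by behavioural intent scoring.
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class Detection:
    box: tuple          # (x1, y1, x2, y2) in image pixels
    confidence: float   # detector score for the "pedestrian" class


def detect_pedestrians(frame: np.ndarray) -> List[Detection]:
    """Stand-in for any off-the-shelf pedestrian detector."""
    # A production system would run a trained detection network here;
    # we return one fixed example detection for illustration.
    return [Detection(box=(120, 80, 180, 260), confidence=0.94)]


def crossing_intent(frame: np.ndarray, det: Detection) -> float:
    """Toy intent score that fuses observable and inferable cues.

    A real model would infer cues such as head orientation, gait and
    distraction from the image crop; fixed values here only illustrate
    weighting several behavioural signals into one probability.
    """
    cues = {
        "facing_road": 0.8,          # observable: body/head oriented towards the kerb
        "moving_towards_kerb": 0.6,  # observable: recent motion towards the road
        "distracted_by_phone": 0.1,  # inferable: distraction lowers crossing likelihood
    }
    weights = {
        "facing_road": 0.5,
        "moving_towards_kerb": 0.4,
        "distracted_by_phone": -0.3,
    }
    score = sum(weights[name] * value for name, value in cues.items())
    return float(np.clip(score, 0.0, 1.0)) * det.confidence


frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder camera frame
for det in detect_pedestrians(frame):
    print(f"pedestrian at {det.box}: crossing intent {crossing_intent(frame, det):.2f}")
```

In practice the cues themselves would come from learned models rather than hand-set values; the sketch only shows how detection and behavioural inference fit together.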
What has it been like working with Imagination?
Ron: Working with Imagination has been great. When Imagination approached us, it was clear that we both had the same end goal in mind, but also questions about how to get there. By teaming up, we reached that end goal and, I think, we both learned a lot along the way.
Working with a larger organisation like Imagination is always a great way to pool resources and achieve joint goals. As well as the two sides being a good fit in managing the project, the real heroes in all of this, the software and hardware engineers, have worked brilliantly together, bouncing knowledge off each other and delivering the project quickly and to a high standard.
Looking beyond the confines of the car, where else could the solution help people stay safe?
Ron: Urban mobility is the area where we have seen early interest and success. With the current popularity of ADAS features in vehicles and the ever-closer dream of a self-driving mass-market vehicle, we have a long list of interested parties. The beauty of our software echoes back to what I said earlier: it can improve safety wherever humans and machines interact.
That could be an active building site, where the software ensures the right PPE is being worn and people are staying in safe areas, or a static camera on an industrial tool or machine that understands an operator's observable and inferable behaviours to improve safety or productivity. All you need is a camera sensor and a low-power compute solution such as a neural network accelerator, and it can be deployed anywhere humans and machines interact.
The last four years have been a strong upward curve for Humanising Autonomy and, based on the work we have done with Imagination and the excitement in the wider industry for our work, I see no sign of that slowing down!
Download to learn how bandwidth-saving Imagination Tensor Tiling (ITT) technology inside the IMG Series4 provides real-world benefits for accelerating neural network models.