Why IMG CXT ray tracing is a game-changer for mobile

In early November 2021 we announced a significant milestone not just in the history of Imagination GPUs but, in our opinion, in the history of GPUs…

Why we’re taking corporate social responsibility seriously at Imagination

There’s been a recent focus in the media on the importance of corporate social responsibility (CSR) – but what is CSR, and why is it important? CSR…

Learning from the best: How nature can inspire AI research

Why should current-day AI researchers look to biology and the brain? We explore this question in our latest AI research blog post.

Imagination’s neural network accelerator and Visidon’s denoising algorithm prove to be perfect partners

This blog post is the result of a collaboration between Visidon, headquartered in Finland, and Imagination, based in the UK. Visidon is recognised as an…

How Imagination is steering the automotive industry to a safer future

If you’re in the market for a brand-new vehicle, there’s a good chance it will feature a digital display, providing a rich, next-gen feel to the user…

Imagination boosts recruitment drive with new Cambridge office

Cambridge, one of the most famous cities in England, is well known for many things, such as its ancient university, its classical architecture, punting on…

International Women in Engineering Day 2021: You’re my hero

This year’s International Women in Engineering Day theme is heroes, celebrating the best, brightest and bravest women in engineering. When I was thinking…

Why Ethernet should be the connectivity backbone of every car

It’s no exaggeration to say that connectivity is the lifeblood that enables all our everyday tech to function, and that extends to connectivity inside our…

Imagination and Humanising Autonomy Part 2: The humans behind the autonomy

Welcome to the second in a series of articles where we explore how Imagination and Humanising Autonomy, a UK-based AI behaviour company, are teaming up to…

High-Fidelity Conversion of Floating-Point Networks for Low-Precision Inference using Distillation

A major challenge for fast, low-power inference on neural network accelerators is model size. Recent years have seen a trend towards deeper…