Imagination Blog

All Models Are Wrong, But Some Are Useful: Lessons from Everyday Life

Written by Andrea Battistella | Jan 5, 2026 10:56:07 AM

Have you ever checked a weather forecast, packed an umbrella, and then spent the day under clear blue skies? Or trusted your navigation app to save time, only to end up stuck behind a tractor? These moments are frustrating—but they illustrate a fundamental truth:

All models are wrong, but some are useful.

This principle, coined by statistician George Box, applies everywhere—from predicting the weather to designing next-generation technology. Models simplify reality. They can’t capture every variable, every nuance, every surprise. And yet, despite being “wrong”, they remain indispensable because they help us make better decisions.

Why We Model: Two Purposes, Two Mindsets

Not all models serve the same purpose. Broadly, they fall into two categories:

  • Models for Exploration
    These help us understand possibilities, test ideas, and compare options early in the design process. They don’t need perfect accuracy—they need speed and flexibility to guide innovation.
  • Models for Anticipation (Shift-Left)
    These aim to predict outcomes earlier, moving validation and risk assessment to the left in the development timeline. Here, accuracy matters more because decisions have downstream impact.
“Shift-left”: moving development tasks, validation, and risk assessment earlier in the development process, so issues are anticipated and caught before they become costly.

Both are useful—but in different ways. Exploration models spark creativity; anticipation models reduce surprises. And in practice, the distinction isn’t always clear-cut—some models serve both purposes.

The Everyday Analogy: Weather Forecasting

Consider weather forecasting. Behind that simple “20% chance of rain” lies a staggering amount of science and computation. Meteorologists use models that approximate atmospheric behaviour based on physics, historical data, and real-time sensors. These models crunch billions of data points to predict tomorrow’s sky.

But the atmosphere is chaotic. Tiny changes can cascade into big differences. That’s why forecasts are often wrong in detail. You might get rain when none was predicted—or sunshine when storms were forecast.

Does that make the forecast useless? Absolutely not. It’s still valuable because it helps you plan. You might carry an umbrella, reschedule a picnic, or choose indoor activities. The model doesn’t need to be perfect; it just needs to be useful.

The Trade-Off Triangle: Speed, Cost, Accuracy

Here’s where it gets interesting. If you demanded a perfectly accurate forecast, you’d need:

  • More sensors across the globe to capture every microclimate.
  • Supercomputers running complex simulations for hours.
  • Massive data storage and processing costs.

The result? A forecast that’s incredibly precise, but delivered too late to be useful and at a cost no one can afford.

This is the trade-off triangle: speed, cost, and accuracy. You can’t maximise all three. If you want speed, you sacrifice accuracy. If you want accuracy, you sacrifice speed and cost-efficiency.
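The accuracy-versus-cost leg of the triangle has a neat numerical illustration. The sketch below is a generic Monte Carlo estimate of pi (a hypothetical toy, unrelated to any Imagination model): statistical error shrinks roughly as 1/sqrt(samples), so buying 10x more accuracy costs roughly 100x more compute.

```python
import random
import time

def estimate_pi(samples, seed=0):
    """Monte Carlo estimate of pi; error shrinks roughly as 1/sqrt(samples)."""
    rng = random.Random(seed)
    # Count random points in the unit square that land inside the quarter circle.
    hits = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / samples

# 100x more samples buys ~10x more accuracy, at ~100x the compute cost.
for n in (1_000, 100_000, 1_000_000):
    start = time.perf_counter()
    est = estimate_pi(n)
    elapsed = time.perf_counter() - start
    print(f"{n:>9} samples: pi estimate {est:.4f}  ({elapsed:.3f}s)")
```

The same diminishing-returns curve shows up in any simulation: past a certain point, each extra digit of accuracy costs more than it is worth.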

And speed isn’t just about how fast the model runs. It’s also about how quickly you can build the model. A model that takes months to develop might miss the window for strategic decisions. In technology, time-to-model is as critical as time-to-result.

From Weather to Technology: Modelling at Imagination

At Imagination, we face the same reality. Some of our models guide decisions about architecture, performance, and feasibility; others support a "shift-left" approach, bringing validation and risk assessment earlier in the development process. By anticipating potential issues sooner, all of these models help us, and our customers, make informed choices that reduce surprises and streamline the path to robust solutions.

The economics are stark. When issues are identified before designs are set in stone or products are built, solutions are often simple and inexpensive: adjusting a parameter, refining an assumption, or tweaking a design. If the same issues are discovered much later, after substantial development, the resources needed to address them multiply dramatically. Early detection not only speeds up development but also protects budgets and timelines, making the modelling process even more valuable.

Our models aim for maximum usefulness given finite time and resources, and usefulness depends on purpose. Our portfolio therefore includes a variety of model types for different needs, both internal and external:

RSIM (Research Simulator)

Fast and lightweight, RSIM is an internal tool that our engineers use for early exploration. It helps us validate architectural concepts and functional behaviour without waiting for full implementation. Flexibility and speed are critical here—both in execution and in how quickly we can develop the model to test a new idea.

FSIM (Functional Simulator)

FSIM goes deeper, enabling functional correctness checks across complex scenarios. It offers instruction-level accuracy (more detail than RSIM) but not hardware accuracy, making it well suited to anticipating DDK and software development.

Our customers use FSIM to get immediate feedback on shader execution and to eliminate lengthy debug cycles. It helps secure functionality in the early stages of driver development and supports control stream debugging.

CSIM (HW-Accurate Simulator)

CSIM provides bit-level and hardware accuracy. It’s essential for verifying the hardware and for identifying hardware/software integration issues earlier in the flow, but comes at a cost: longer development time and slower execution. This is where the trade-off triangle really bites.

Our customers use CSIM to understand the behaviour of their final silicon block at the register level, which in turn supports reliable verification. It enables detailed hardware-software co-simulation and "shift-left" development, helping products reach the market on time with a great software experience.

PerfSIM (Performance Simulator)

PerfSIM focuses on IP-level performance metrics, helping us predict throughput, latency, and bottlenecks with the highest possible accuracy. It is a key tool for performance analysis, reducing surprises late in the design cycle. This model is under development and at present available only as an internal tool, but we expect it to play a critical role in our customers' SoC prototyping efforts in the future.

VPSIM (Virtual Prototyping Simulator)

Finally, VPSIM is our integration environment, where the models above (FSIM, CSIM, and PerfSIM) are wrapped and connected to abstract or more accurate system prototypes for complete system validation and full software stack testing.

Our VPSIM solution is compatible with both QEMU and SystemC TLM, providing comprehensive tool support across open communities (such as RISC-V) as well as the EDA sector, and it can be used in conjunction with Imagination's developer tools. Imagination has extensive experience delivering and supporting these models in a variety of customer scenarios.

Each of these models sits at a different point on the speed–cost–accuracy triangle. RSIM prioritises speed and flexibility for exploration. CSIM and PerfSIM lean towards accuracy for anticipation. FSIM strikes a balance. Together, they form a modelling ecosystem that supports both innovation and risk reduction, and they are available for almost all Imagination GPU configurations.

Closing Thought

Next time you check the weather, think about the invisible trade-offs behind that forecast. Then think about your own models. Are they useful? Are they timely? Are they cost-effective? If the answer is yes, then you’re doing it right.

Because in modelling—whether for the skies or for silicon—the ultimate measure isn’t perfection. It’s impact.

Looking to accelerate your project? Get in touch for further details about our modelling solutions.