Scientists Create Digital Twin of Earth with 1-Kilometer Resolution Combining Weather and Climate Models
- MM24 News Desk
- Nov 14
- 4 min read

Researchers led by Daniel Klocke of the Max Planck Institute for Meteorology in Germany have developed what climate scientists call the "holy grail": an Earth model with 1.25-kilometer resolution that uses 672 million calculated cells and 20,480 GH200 superchips to simulate 145.7 days in a single day of computing.
The breakthrough combines "fast" weather systems with "slow" climate processes, calculating nearly 1 trillion degrees of freedom across the JUPITER and Alps supercomputers in Germany and Switzerland.
Weather forecasting is notoriously wonky, and climate modeling even more so. But our increasing ability to predict what the natural world will throw at us is largely thanks to two things: better models and more computing power.
Now, a new paper from researchers led by Daniel Klocke of the Max Planck Institute for Meteorology in Germany describes what some in the climate modeling community have called the "holy grail" of their field: a near kilometer-scale model that combines weather forecasting with climate modeling, reported Universe Today.
Technically, the scale of the new model isn't quite 1 kilometer per modeled patch; it's 1.25 kilometers. But really, who's counting at that point? An estimated 336 million cells cover all the land and sea on Earth, and the authors added the same number of "atmospheric" cells directly above the ground-based ones, for a total of 672 million calculated cells.
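For the curious, a quick back-of-envelope check shows roughly where those numbers come from. This is a simple approximation in Python, treating the cells as flat 1.25-kilometer squares rather than the icosahedral grid the model actually uses:

```python
import math

# Rough sanity check of the quoted cell counts (approximate: assumes flat,
# square 1.25 km x 1.25 km cells tiling the whole globe).
earth_radius_km = 6_371.0
surface_area_km2 = 4 * math.pi * earth_radius_km**2   # ~510 million km^2

cell_edge_km = 1.25
cell_area_km2 = cell_edge_km**2                        # ~1.56 km^2 per cell

surface_cells = surface_area_km2 / cell_area_km2       # ground/ocean cells
total_cells = 2 * surface_cells                        # plus one atmospheric cell above each

print(f"surface cells ~ {surface_cells:,.0f}")         # ~326 million
print(f"total cells   ~ {total_cells:,.0f}")           # ~653 million, close to the quoted 672 million
```

The small gap between this estimate and the paper's 336 million surface cells comes from the grid geometry; the point is simply that the quoted figures are about what a 1.25-kilometer mesh over the whole planet demands.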
For each of those cells, the authors ran a series of interconnected models to reflect Earth's primary dynamic systems. They broke them into two categories—"fast" and "slow."
The "fast" systems include the energy and water cycles—which basically means the weather. In order to clearly track them, a model needs extremely high resolution, like the 1.25 kilometers the new system is capable of. For this model, the authors used the ICOsahedral Nonhydrostatic (ICON) model that was developed by the German Weather Service and the Max Planck Institute for Meteorology.
"Slow" processes, on the other hand, include the carbon cycle and changes in the biosphere and ocean geochemistry. These reflect trends over the course of years or even decades, rather than a few minutes it takes a thunderstorm to move from one 1.25-kilometer cell to another.
Combining these fast and slow processes is the real breakthrough of the paper, as the authors themselves are happy to point out. Typical models that incorporate all of these complex systems are only computationally tractable at resolutions coarser than 40 kilometers, according to Universe Today.
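To get a feel for what that coupling means in practice, here is a deliberately toy sketch. The physics is made up and this is not ICON's actual numerics; it only illustrates the idea that the fast weather state gets sub-stepped many times inside each step of the slow carbon cycle, with each side seeing the other's latest state:

```python
import numpy as np

# Toy fast/slow coupling (hypothetical, illustrative placeholder dynamics).
FAST_DT_S = 60.0             # one-minute "weather" step (illustrative)
SLOW_DT_S = 86_400.0         # one-day "carbon cycle" step (illustrative)
SUBSTEPS = int(SLOW_DT_S / FAST_DT_S)

def step_weather(weather, carbon, dt):
    # Placeholder: fast state relaxes toward a value influenced by carbon.
    return weather + dt * 1e-5 * (carbon - weather)

def step_carbon(carbon, weather, dt):
    # Placeholder: slow state drifts in response to the current weather state.
    return carbon + dt * 1e-9 * weather

weather = np.ones(1_000)          # stand-in for millions of high-resolution cells
carbon = np.full(1_000, 400.0)

for day in range(5):                      # a few slow steps
    for _ in range(SUBSTEPS):             # many fast steps inside each slow step
        weather = step_weather(weather, carbon, FAST_DT_S)
    carbon = step_carbon(carbon, weather, SLOW_DT_S)
```

The hard part, of course, is doing something like this for real at 1.25-kilometer resolution without the slow components dragging the whole calculation to a crawl.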
So how did they do it? By combining some seriously in-depth software engineering with plenty of the newest computer chips money can buy.
The model used as the basis for much of this work was originally written in Fortran, the bane of anyone who has ever tried to modernize code written before 1990. Since it was first developed, it had become bogged down with extras that made it difficult to use on any modern computational architecture.
So the authors turned to a framework called Data-Centric Parallel Programming (DaCe), which restructures how a program's data is laid out and moved so that the same code can run efficiently on modern hardware.
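To give a flavor of what DaCe looks like, here is a minimal sketch modeled on the project's standard "axpy" example rather than the ICON port itself: ordinary NumPy-style Python is annotated with `@dace.program`, and the framework compiles it into an optimized dataflow program that can be mapped to CPUs or GPUs.

```python
import numpy as np
import dace

# Symbolic array size, resolved when the program is first called.
N = dace.symbol("N")

@dace.program
def axpy(a: dace.float64, x: dace.float64[N], y: dace.float64[N]):
    # Classic a*x + y update, written as plain array code.
    y[:] = a * x + y

x = np.random.rand(1024)
y = np.random.rand(1024)
axpy(2.0, x, y)   # compiled to a dataflow graph on first call, then runs the generated code
```

The appeal for a project like this is that the science code stays readable while the data-movement and hardware-specific tuning happen underneath, which matters when the starting point is decades-old Fortran.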
That modern hardware took the form of JUPITER and Alps, two supercomputers located in Germany and Switzerland respectively, both built around Nvidia's new GH200 Grace Hopper superchip.
In these chips, a GPU (like the type used to train AI, in this case called Hopper) is paired with an Arm-based CPU designed by Nvidia (called Grace).
This division of computational responsibilities allowed the authors to run the "fast" models on the GPUs, matching their relatively rapid update speeds, while the slower carbon cycle models ran on the CPUs in parallel.
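Conceptually, that split looks something like the sketch below. Plain Python threads stand in for the real GPU/CPU dispatch, and the placeholder functions are made up; the point is only that the fast and slow components advance concurrently and exchange state at the end of each coupling window:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def fast_weather_step(state):
    # In the real model, this work runs as GPU kernels on the Hopper side.
    return state + 0.001 * np.sin(state)

def slow_carbon_step(state):
    # In the real model, this work runs on the Grace CPU alongside the GPU.
    return state + 1e-6 * state

weather_state = np.zeros(1_000)
carbon_state = np.full(1_000, 400.0)

with ThreadPoolExecutor(max_workers=2) as pool:
    for _ in range(100):  # one coupling window per iteration
        fast_future = pool.submit(fast_weather_step, weather_state)
        slow_future = pool.submit(slow_carbon_step, carbon_state)
        weather_state = fast_future.result()
        carbon_state = slow_future.result()
```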
Splitting up the computational work like that allowed them to use 20,480 GH200 superchips to simulate 145.7 days of Earth's behavior in a single day of computing. To do so, the model tracked nearly 1 trillion "degrees of freedom", which, in this context, means the total number of values it had to calculate. No wonder this model needed a supercomputer to run.
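Some rough arithmetic, using the rounded figures quoted above purely for illustration, puts those numbers in perspective:

```python
# Illustrative arithmetic only, based on the rounded figures in the article.
cells = 672_000_000            # calculated cells in the coupled model
dof = 1_000_000_000_000        # "degrees of freedom": total values computed

values_per_cell = dof / cells
print(f"~ {values_per_cell:,.0f} values per cell")      # ~1,500 (vertical levels x variables, roughly)

sim_days_per_day = 145.7       # simulated days per wall-clock day
days_per_sim_year = 365.25 / sim_days_per_day
print(f"~ {days_per_sim_year:.1f} wall-clock days per simulated year")  # ~2.5

chips = 20_480                 # GH200 superchips used
print(f"~ {dof / chips:,.0f} values per chip")           # ~49 million
```

In other words, a full simulated year of the planet at this resolution takes roughly two and a half days of wall-clock time on tens of thousands of the fastest chips available.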
Unfortunately, that also means models of this complexity aren't coming to your local weather station anytime soon. Computational power like that isn't easy to come by, and the big tech companies are more likely to spend it squeezing every last bit out of generative AI, whatever the consequences for climate modeling.
The practical implications of this breakthrough extend beyond mere technical achievement. Current climate models operating at 40-kilometer resolution struggle to capture local weather phenomena like thunderstorms, urban heat islands, or coastal effects. The new 1.25-kilometer resolution changes that equation dramatically.
With such fine-grained detail, the model can track individual storm cells, predict flash flooding in specific valleys, and model how cities generate their own microclimates. This level of precision could revolutionize disaster preparedness, agricultural planning, and infrastructure development in an era of increasing climate volatility.
The "fast-slow" integration represents another conceptual leap. Previous models typically chose between short-term weather accuracy and long-term climate trends. By running both simultaneously, this digital twin can show how immediate weather patterns influence decade-long climate shifts—and vice versa. Understanding these interconnections proves crucial for anticipating tipping points in Earth's climate system.
However, the computational requirements present significant barriers to widespread adoption. Running 20,480 superchips simultaneously consumes enormous amounts of electricity—ironically contributing to the very climate challenges the model aims to understand. The research team acknowledges this paradox but argues the insights gained justify the energy expenditure.
The model's architecture using Nvidia's GH200 Grace Hopper chips reflects how AI hardware development inadvertently benefits climate science. These chips were designed primarily for training large language models, but their parallel processing capabilities prove equally valuable for simulating Earth's interconnected systems.
But, at the very least, the fact that the authors were able to pull off this impressive computational feat deserves some praise and recognition—and hopefully one day we'll get to a point where those kinds of simulations become commonplace.


