China ‘revives’ a 50-year-old technology that uses 200 times less energy than digital computing

While the world races to build bigger data centres and ever-larger AI models, Chinese researchers have chosen a different path: dusting off an old analogue computing concept and wiring it directly into a new kind of AI chip that promises huge energy savings.

A chip that thinks in voltages, not in zeros and ones

Researchers at Peking University say they have built an analogue AI chip that runs certain tasks up to 12 times faster than advanced digital processors, while using around 200 times less energy.

This new Chinese chip shows that "old-school" computation can beat modern digital processors on speed and, above all, on energy consumption.

Instead of shuffling binary digits through billions of transistors, this chip treats numbers as continuous physical quantities, such as voltages and currents. The physics of the circuit itself performs the computation.

This approach might sound like a throwback to the 1960s, when analogue computers were used to model aircraft wings or electrical grids. The twist is that the Chinese team has tailored the idea to modern AI workloads, and shrunk it onto a chip designed for tasks like recommendation systems and image processing.

Why AI’s energy bill has become a problem

Today’s AI models lean on digital accelerators such as Nvidia’s GPUs. These chips excel at matrix operations, which underpin neural networks and recommendation engines. Yet their biggest bottleneck is increasingly simple: moving data around.

Every time data has to travel from memory to a GPU core and back, energy is burned and time is lost. At the scale of cloud computing, this back-and-forth becomes a huge source of heat, cost and carbon emissions.

Datacentres already account for a sizeable slice of global electricity demand, and large AI models are pushing that share upwards year after year.

The Peking University team attacks this head-on with “in-memory analogue computing”. The idea: store data and process it in almost the same physical place. Instead of dragging numbers across a chip, the electrical state of the memory elements themselves participates in the calculation.

Less movement means fewer losses and lower temperatures. That, in turn, allows more computation within the same power and cooling budget.

Calculating like in the 1970s, but tuned for 2026

How analogue computing works in this context

Traditional digital processors break every operation into sequential steps. Even when they run in parallel, the underlying logic still advances in discrete ticks. Analogue circuits operate differently. Once configured, they let currents and voltages settle into a state that directly encodes the result of a calculation.

  • Digital chips: perform many tiny arithmetic operations one after another, controlled by a clock.
  • Analogue chips: map a maths problem onto a physical system and let the system “relax” into the answer.
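One common way analogue in-memory designs realise this idea is a resistive crossbar, where matrix entries are stored as conductances and the circuit's physics performs an entire matrix-vector multiply at once. The paper does not specify this exact circuit, so the sketch below is only an illustration of the principle, simulated in NumPy:

```python
import numpy as np

# Illustrative sketch (not the Peking University circuit): in a resistive
# crossbar, weights are conductances G (siemens) and inputs are voltages v.
# Ohm's law gives a current G[i, j] * v[j] through each element, and
# Kirchhoff's current law sums them on each output wire: I = G @ v.
rng = np.random.default_rng(0)
G = rng.uniform(0.1, 1.0, size=(3, 4))   # conductance matrix (the "weights")
v = rng.uniform(0.0, 0.5, size=4)        # input voltages

# The physics performs all multiply-accumulates simultaneously:
currents = G @ v

# A digital chip reaches the same result through explicit, clocked loops:
digital = np.array([sum(G[i, j] * v[j] for j in range(4)) for i in range(3)])

assert np.allclose(currents, digital)
```

The point of the analogy: the digital version needs one clock tick per multiply-accumulate, while the crossbar settles into the full result in a single physical step.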

For decades, engineers shunned analogue computers because they were considered noisy, hard to program and difficult to scale. Modern fabrication processes and better circuit design now make it possible to tame much of that instability.

In the Chinese chip, the analogue elements handle heavy linear algebra operations, while digital logic still manages control, interfacing and error handling. The result is a hybrid system that borrows the best from both eras.

A very specific math trick baked into hardware

The breakthrough centres on a technique called non-negative matrix factorisation (NMF). This method breaks a large table of numbers into two smaller tables, with the constraint that all entries stay positive. It is widely used to tease out hidden patterns in data.

In AI, NMF helps identify user preferences, topics in documents, or underlying features in images. On a digital processor, running NMF at scale can demand huge memory bandwidth and long training times.
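To make the iterative cost concrete, here is a minimal digital baseline for NMF using the classic Lee–Seung multiplicative updates (a standard software algorithm, not the paper's analogue method). Each loop iteration is the kind of repeated matrix work the analogue circuit collapses into one physical relaxation:

```python
import numpy as np

# Minimal NMF sketch: factor a non-negative matrix V (m x n) into
# W (m x k) @ H (k x n), keeping all entries non-negative throughout.
rng = np.random.default_rng(42)
W_true = rng.random((6, 2))
H_true = rng.random((2, 5))
V = W_true @ H_true              # synthetic low-rank, non-negative data

k = 2                            # number of hidden features/topics
W = rng.random((6, k)) + 0.1
H = rng.random((k, 5)) + 0.1

eps = 1e-9                       # guards against division by zero
for _ in range(1000):            # many small iterative steps on a CPU/GPU
    H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update keeps H >= 0
    W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update keeps W >= 0

error = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the updates only multiply by non-negative ratios, positivity is preserved automatically, which is exactly the constraint that makes NMF's hidden factors interpretable.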

The Peking University chip does something unusual: it implements the core NMF operation directly in analogue hardware. Where a digital GPU might need thousands of iterative steps, the analogue circuit can, in effect, perform many of those steps as a single physical process.

Transforming NMF from a software algorithm into a physical phenomenon inside a circuit drastically cuts energy use and latency for these tasks.

From Netflix-style recommendations to image compression

Real-world tests, not just lab demos

According to the study in Nature Communications, the team led by Sun Zhong tested the chip on workloads that resemble those used by major tech platforms.

One experiment focused on recommendation systems, similar in principle to those used by Netflix, Amazon or Yahoo. These systems learn which items users might like, based on large matrices of user–item interactions.

On comparable data sizes, the analogue chip reached its answers far faster than digital rivals and drew only a fraction of the power. The authors claim energy consumption dropped by roughly two orders of magnitude.

The team also applied the chip to image compression. During tests, it reconstructed pictures with visual quality close to that produced by high-precision digital methods, while halving the required storage. For streaming platforms or image-heavy apps, that kind of gain influences both user experience and infrastructure costs.
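The storage saving follows directly from the shape of the factorisation: instead of keeping the full matrix, you keep the two smaller factors. A back-of-envelope check with illustrative numbers (not figures from the paper):

```python
# Illustrative arithmetic: storing the NMF factors W (m x k) and H (k x n)
# instead of the full m x n matrix cuts storage whenever k is small enough.
m, n, k = 512, 512, 128          # hypothetical image-block size and rank

full = m * n                      # values in the original matrix
factored = m * k + k * n          # values in W plus values in H

ratio = factored / full           # here exactly 0.5: half the storage
print(f"{factored} vs {full} values -> {ratio:.0%} of original storage")
```

With square matrices, the ratio is simply `2k/n`, so halving storage corresponds to choosing a rank of one quarter the matrix dimension; the reconstruction quality then depends on how well a rank-`k` model captures the image.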

Why this matters for companies running AI at scale

AI giants currently throw vast fleets of GPUs at recommendation engines, ranking systems and ad targeting. These models rarely make headlines like chatbots do, but they crunch staggering volumes of data every second.

| AI task | Typical hardware today | Potential analogue benefit |
| --- | --- | --- |
| Recommendations (video, shopping) | GPU / TPU clusters | Big reduction in power and cooling costs |
| Large-scale image storage | Digital compression chips / CPUs | Smaller files, lower bandwidth, faster retrieval |
| User behaviour analysis | CPU + GPU analytics platforms | Faster pattern extraction on huge datasets |

If even a slice of those workloads moves to highly efficient analogue accelerators, cloud providers could run the same services with fewer racks and smaller power contracts. That prospect will not go unnoticed by telecom operators and hyperscalers under pressure to cut emissions.

China’s strategic angle on AI hardware

This research also fits a wider pattern: China backing alternative hardware routes at a time when access to leading-edge digital chips is politically sensitive. Sanctions have limited Chinese firms’ ability to buy some top-tier GPUs, especially for AI training.

By pushing analogue and neuromorphic approaches, Chinese labs can chase performance using designs that are less dependent on the very latest digital processes. If they manage to turn lab prototypes into manufacturable products, the country gains more autonomy over its AI infrastructure.

That does not mean digital GPUs become obsolete. Instead, future Chinese systems may pair more conventional processors with specialised analogue co-processors for targeted tasks, such as NMF-based recommendation pipelines.

Limits, risks and what still needs fixing

Noise, accuracy and programming headaches

Analogue computing comes with trade-offs. Physical signals are inherently noisy. Temperature changes, device ageing and tiny variations in manufacturing all affect accuracy. For some AI tasks, a bit of fuzziness is acceptable. For financial models or safety-critical systems, it is far less welcome.

Engineers also face a programming challenge. Developers are comfortable with Python, CUDA and TensorFlow. Mapping algorithms onto analogue circuits requires a different mindset and new tools. Without usable software stacks, even the most efficient chip risks gathering dust.

  • Calibration: analogue chips need regular tuning to keep results stable.
  • Error correction: extra digital logic may be required, eating into gains.
  • Generality: a chip optimised for NMF might not suit every AI algorithm.

There is also a business risk. Building new fabs or retooling existing lines for analogue-heavy designs costs money. Vendors will look for clear demand before committing to mass production.

Where this could show up in everyday life

Despite those hurdles, there are clear scenarios where such chips fit naturally. A video platform wanting to recommend shows to hundreds of millions of users every evening could place analogue accelerators next to its content databases, cutting both energy bills and latency.

Large e-commerce firms might use similar chips to constantly update product rankings in near real time, accommodating shifting trends without burning through power budgets. Even regional datacentres, in places where the grid is fragile or energy prices are volatile, could deploy analogue modules to keep services running within a tighter power envelope.

For readers less familiar with the jargon, it helps to think of NMF as a sophisticated way of “unmixing” data. Imagine a playlist containing many genres. NMF tries to find hidden themes — rock, jazz, pop influences — and represent each song as a blend of those themes. The analogue chip essentially does this unmixing through carefully designed electric behaviour rather than long lines of code.

If teams in China and elsewhere manage to generalise this strategy to other core AI operations, we may see a wave of specialised, physics-driven chips that sit alongside GPUs. Rather than always pushing for more digital transistors, tomorrow’s AI hardware might lean more on clever uses of the old analogue tricks hiding in plain sight in circuit theory textbooks from half a century ago.
