Simple because it matters.
Digitalisation & Technology, 18 November 2024
There seem to be no limits to the development of AI, at least in terms of functionality. The technology's high energy requirements, however, are a cause for concern and have sparked controversy; some AI providers are even considering building nuclear power plants. New technologies that aim to radically reduce the energy consumption of AI systems could yet make this unnecessary.
The enormous consumption of resources by AI systems is one of the big criticisms of ChatGPT & Co. Especially when training the various Large Language Models (LLMs), the high-performance data centres consume vast amounts of electricity and, because of the waste heat, large quantities of water for cooling. The high electricity consumption, in particular, comes at a bad time.
The global climate crisis demands the decarbonisation of the world's energy sector. The transition to renewable energies, however, is a long-term process that will take many more years, and electricity is currently in short supply in many countries. The boom in AI systems, with its hunger for resources, is putting additional strain on an already tight supply.
The International Energy Agency (IEA) sounded the alarm long ago: in 2022, data centres worldwide already consumed around 460 terawatt hours of electricity, and that was before the AI boom had properly begun, since OpenAI only released ChatGPT to the public in November 2022. How electricity demand has developed since then can only be estimated; the IEA expects consumption to double by 2026.
The major AI providers, such as Google and Microsoft, know the high energy demand from their own experience and are making their own plans. In the USA, the first bottlenecks are already appearing, for example in Northern Virginia's Data Center Alley, because the expansion of renewable energies cannot keep pace with rising demand. In response, the IT giants are planning to build their own nuclear power plants.
Google wants to build mini nuclear power plants (small modular reactors, SMRs) to ensure that its growing demand for energy does not jeopardise the climate targets it has set itself. Its competitor Microsoft is even taking a step that could damage its own image: a reactor at the decommissioned Three Mile Island nuclear power plant is to be restarted. As a reminder, in 1979 this plant was the site of the most serious accident in the history of the US commercial nuclear power industry.
These two examples alone show how important it is not only to make AI systems increasingly powerful, but above all to make them more efficient.
And there is definitely potential for efficiency. Two developments in different areas give us reason to hope that we will be able to continue using AI systems in the future without having to build new nuclear power plants.
Put simply, AI hardware consumes a great deal of energy because data is constantly being exchanged between processing and storage components: it is transferred from memory to the processing unit and then back again.
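Just how lopsided this trade-off is can be shown with a rough back-of-envelope calculation. The figures in the following Python sketch are not from the Minnesota study; they are commonly cited per-operation energy estimates for a 45 nm chip (Horowitz, ISSCC 2014) and serve only to illustrate the order of magnitude:

# Rough comparison: energy to compute on data vs. energy to move it.
# Per-operation figures are illustrative estimates for a 45 nm process
# (Horowitz, ISSCC 2014), not measurements from the CRAM study.

PJ_FP32_MULTIPLY = 3.7      # one 32-bit floating-point multiply, in picojoules
PJ_DRAM_READ_32BIT = 640.0  # fetching one 32-bit word from DRAM, in picojoules

ratio = PJ_DRAM_READ_32BIT / PJ_FP32_MULTIPLY
print(f"Fetching an operand from memory costs roughly {ratio:.0f} times "
      f"as much energy as multiplying it.")

With a gap of around two orders of magnitude per operand, it becomes clear why eliminating the data transfer itself, rather than building ever faster processors, is such a promising lever.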
Researchers at the University of Minnesota Twin Cities have now developed a technology that makes this exchange unnecessary. In hardware called Computational Random-Access Memory (CRAM), the previously separate components are merged into one: data processing takes place directly in the memory.
In scientific tests, CRAM technology proved to be 2,500 times more energy efficient and 1,700 times faster than a conventional system with separate components.
This is the state of development: the researchers are already in contact with leading companies in the semiconductor industry to scale up CRAM technology and make it marketable. Development is well advanced, as the technology is the result of more than 20 years of research and interdisciplinary collaboration.
Language models and other artificial intelligence applications rely on floating-point operations, which are very computationally intensive. They are also familiar as a measure of supercomputer performance, expressed in FLOPS (Floating Point Operations Per Second). The researchers at BitEnergy AI instead use much simpler integer additions in their algorithm 'linear-complexity multiplication' (L-Mul). With this mathematical method, the researchers claim to have achieved an energy saving of 95 per cent in experiments.
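The exact algorithm is described in the BitEnergy AI preprint; the following Python sketch is not L-Mul itself, but a minimal illustration of the underlying principle, a classic logarithmic-multiplication trick in the spirit of Mitchell's algorithm: because an IEEE 754 float stores something close to the logarithm of its value, adding the raw integer bit patterns of two floats, minus a constant for the exponent bias, approximates their product, turning an expensive multiply into a cheap integer addition:

import struct

def float_to_bits(x: float) -> int:
    # Reinterpret a 32-bit float as its raw integer bit pattern.
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(b: int) -> float:
    # Reinterpret a 32-bit integer pattern as a float.
    return struct.unpack("<f", struct.pack("<I", b))[0]

BIAS = 127 << 23  # float32 exponent bias, aligned with the exponent field

def approx_mul(x: float, y: float) -> float:
    # Adding the bit patterns roughly adds the logarithms of the values;
    # subtracting the bias once yields an approximation of x * y.
    # (Valid for positive floats whose product stays in range.)
    return bits_to_float(float_to_bits(x) + float_to_bits(y) - BIAS)

for a, b in [(3.0, 4.0), (1.5, 2.5), (0.1, 250.0)]:
    print(f"{a} * {b}: exact = {a * b:.4f}, approximate = {approx_mul(a, b):.4f}")

The small deviations (for example, 3.5 instead of 3.75 for 1.5 * 2.5) come from treating the mantissa as if it were part of the logarithm; the published L-Mul algorithm refines this idea with a cheap correction term so that the accuracy remains sufficient for language models.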
This is the state of development: the research work on the L-Mul algorithm has so far only been published as a preprint. Publication in a scientific journal, with the peer review of the results that this entails, is still pending. In addition, special hardware that can fully exploit the potential of L-Mul is still needed before the method is ready for the market.
The global trend towards greater sustainability will continue in the face of climate change; even the great hype around artificial intelligence will not change that. On the contrary: the best AI systems are of little use if the climate passes its tipping points.
Energy is already in short supply in many countries, and energy-intensive industries are under scrutiny. It is still in their hands to become both future-proof and sustainable by investing more in efficiency. This raises a question:
What could be achieved with the resources earmarked for the construction of nuclear power plants if they were instead invested in AI efficiency technologies?
Text: Falk Hedemann
Your opinion
If you would like to share your opinion on this topic with us, please send us a message to: next@ergo.de