Using AI to power the cleantech transition

In May 2024, Europe became the first jurisdiction globally to establish a regulatory framework for Artificial Intelligence (AI), known as the AI Act.

The AI Act’s purpose is to regulate the development and use of AI systems by setting out obligations for AI developers and AI users, ranging from transparency requirements to obtaining a ‘European conformity’ (CE) marking. It classifies AI systems following a risk-based approach, with requirements and obligations tailored to each tier (minimal, limited, and high risk), and prohibits certain types of AI that pose unacceptable risk. Non-compliance with the AI Act can result in regulatory fines of up to 7% of a company’s total worldwide annual turnover.

While the AI Act implements a meaningful risk assessment framework for advancing AI development and adoption in Europe, its provisions regarding the environmental performance of AI systems remain modest given the increasing energy demands of these technologies. For example, according to the International Energy Agency, an average ChatGPT request requires 2.9 watt-hours of electricity, compared to 0.3 watt-hours for a Google search [1]. The AI Act requires providers of general-purpose AI (GPAI) systems, including generative systems like ChatGPT, to disclose the known or estimated energy consumption of the model. In the long run, however, this measure alone is not enough to address the significant environmental impact of consumer-focused mass adoption of generative AI systems.

We caught up with Bernhard Petermeier, Partner at xista science ventures, on all things AI and how AI can help in the wide deployment of cleantech solutions.

How would you define the relationship between the cleantech transition and AI?

I think of AI as a broad force reshaping almost every industry in both a horizontal and a vertical way. Cleantech is no exception and will benefit widely from this transformation. Over the last decades, cleantech's core industries (broadly Agriculture, Energy, Chemicals, Resources, Transportation, Waste) have undergone a digital transformation, with software deployed in a central strategic position across sectors and business functions. On such well-prepared ground, the paradigm shift in how software itself works (by way of AI rolling out across software verticals) can be absorbed far faster.

Horizontal shifts are introduced through integrating AI capabilities into various operational business functions. This directly impacts resource management and efficiency, which is a critical competitive factor for any business. Many of these gains have positive effects on sustainability and the innovation ecosystem, but primarily, I'd argue, the big gains are economic, bringing their own social challenges.

What's more cleantech-specific are the typically narrower vertical, industry-specific applications of AI. Here I see a great number of applications around reducing the waste of energy and natural resources, inventing new materials and processes, reversing environmental damage, mitigating natural disasters, and gradually transforming or substituting bits and pieces of traditional industry value chains. AI is a remarkably sharp tool for transforming resource-hungry industries, products, and services by providing data-driven suggestions or making decisions. For example, one of xista science ventures’ portfolio companies is using AI for supply chain and ESG monitoring. That's a perfect use for AI in a field that was traditionally opaque and is now shifting into the center of attention through policies like the EU supply chain law.

What is the difference between AI and machine learning (ML) in the cleantech transition?

I think the differentiation between AI and ML is a techno-centric exercise that rather distracts from the bigger picture. Let me put it this way: there are modern methods of building software – some of them classify as AI, some as ML – which are not governed by the traditional algorithmic programming approach and are gaining a lot of ground in terms of capabilities. What's central are these new capabilities and what they can do for cleantech; equally important, but often neglected, is understanding their shortcomings and limitations. Let me give you a few examples.

What we can do now with Reinforcement Learning is take a huge variety of measured variables into account and learn an optimal responding action, even though the governing laws of the underlying process are not precisely known. Any application that needs to optimize the outputs of a device, process, or dynamic system based on its inputs can profit from that. This is an improvement over classical multivariable feedback control, which was largely based on a precise understanding of the underlying differential equations. Applications that will benefit massively include robotics, but also bioreactor control and process control in manufacturing generally. Better yield, lower cost, and better scalability are the consequences. The shortcomings are robustness and explainability, which is a problem for controlling complex critical infrastructure like energy networks, for example. Furthermore, there are new security vulnerabilities and attack vectors for neural networks, and studying these adversarial machine learning methods is more important than ever to harness the advantages.
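
To make the idea concrete, here is a minimal, purely illustrative sketch of the pattern Petermeier describes: a tabular Q-learning agent that learns to hold a toy "reactor" temperature at a setpoint without ever being given a model of the plant's dynamics. All names, constants, and dynamics are hypothetical, not drawn from the interview.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 21          # discretized temperature bins
ACTIONS = [-1, 0, +1]  # heater action: cool, hold, heat
SETPOINT = 10          # target temperature bin

def plant_step(state, action):
    """Hidden dynamics the agent never sees: drift plus an unmodeled disturbance."""
    drift = -1 if rng.random() < 0.3 else 0
    nxt = int(np.clip(state + action + drift, 0, N_STATES - 1))
    reward = -abs(nxt - SETPOINT)  # penalize deviation from the setpoint
    return nxt, reward

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration rate

for episode in range(500):
    s = int(rng.integers(N_STATES))
    for t in range(50):
        # epsilon-greedy: mostly exploit the current policy, sometimes explore
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(Q[s].argmax())
        s2, r = plant_step(s, ACTIONS[a])
        # Q-learning update: no model of the dynamics is ever required
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print("learned action per temperature bin:",
      [ACTIONS[int(i)] for i in Q.argmax(axis=1)])
```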

Pattern recognition and advances in geometric machine learning are helping to increase our knowledge of chemical, material, and biological processes and compounds. This will lead to new substitutes for unsustainable materials and chemicals, and drastically reduce the time taken to find viable formulas, hence short-cutting development timelines. Classification algorithms can help to improve waste separation, quality control, and manufacturing processes. Natural processes and human actions can be better predicted, enhancing planning and optimizing outcomes. Traditionally resource-intensive simulation outcomes can be predicted with high accuracy, improving the design process for engineers. There are countless things we can do better, faster, and cheaper once a model is trained. However, the boundaries of applying AI are where transparency, certainty, and stability are required. These limitations are ingrained in current AI and ML methods, and it's important to keep them in mind to avoid taking involuntary risks.
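
The point about predicting simulation outcomes maps onto what is often called a surrogate model. Below is a minimal sketch under simplified assumptions: the expensive_simulation function is a stand-in analytic function rather than a real solver, and everything here is illustrative, not from the interview.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def expensive_simulation(x):
    """Placeholder for a slow solver mapping design parameters to a scalar result."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 2] ** 2

# Offline: run the "real" simulation once on a sample of design points
X_train = rng.uniform(0, 1, size=(2000, 3))
y_train = expensive_simulation(X_train)

surrogate = GradientBoostingRegressor().fit(X_train, y_train)

# Online: near-instant predictions for new designs, at some cost in accuracy
X_new = rng.uniform(0, 1, size=(5, 3))
print("surrogate:", surrogate.predict(X_new).round(3))
print("simulator:", expensive_simulation(X_new).round(3))
```

The trade-off is exactly the one named above: the surrogate buys speed at the price of certainty, so its predictions need validation wherever stability matters.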

How can we harness AI to enable the cleantech transition?

I believe many AI applications will find their way into the cleantech sector, no matter what. Horizontal AI applications are largely introduced by the competitive forces of markets. Survival of the fittest has been a strong driver in applying cutting-edge efficiency tools to stay ahead of the competition. So, in terms of the investment and market side, these applications are very well taken care of by traditional incentives.

What needs further consideration are vertical AI applications that might not have an immediate competitive edge over their non-sustainable counterparts. I think it is particularly important to introduce the right incentive structures to guarantee that innovation does not stall in this area and that there are some early market-pull mechanisms. We need to ensure that AI-for-cleantech innovators can benefit from regulatory support that nurtures competitive sustainability in the mid to long term. For the regulatory support to be effective, it needs to have a strategic focus while maintaining a certain technology openness. On the market side, for example, sustainability criteria for procurement, guarantees, and subsidies can help.

Where I struggle on the investment side, especially with regard to SFDR Article 9 funds ('dark green' funds), is that value chains can be long and complex. I fear that Article 9 investors are placing too much emphasis on end-of-value-chain innovations, focusing on impact that is easy to identify and attribute for their fund's shop window. What this could lead to is the typical case of good intentions gone wrong, resulting in an imbalance in innovation financing at the expense of enabling technologies. This is how Goodhart's law (i.e., when a measure becomes a target, it ceases to be a good measure) starts to take hold and ESG criteria cease to be a good measure of sustainability.

I can give you a simple example to illustrate my point. I recently invested in a startup that is developing autonomous underwater vehicles with the capability to detect biofouling on ship hulls (i.e., the accumulation of microorganisms, plants, algae, or small animals on wet surfaces). Cleaning off biofouling can reduce greenhouse gas emissions of the shipping industry by more than 40%, as it reduces drag on the vessels and consequently lowers fuel consumption. However, my portfolio company is the part of the value chain that acquires and processes data. Providing the intel for precise spot cleaning merely unlocks the cost-effectiveness of the business case – it's only indirectly responsible for the reduction in carbon emissions. Final removal is executed by robots from other companies, which is why many Article 9 investors were not interested in investing.

How should cleantech investors and innovators think about implementing AI?

There is a wealth of investment opportunities at the intersection of AI and cleantech, but identifying genuine value can be challenging for investors at times. Similarly, entrepreneurs seem to feel pressure to implement AI in their product to convey a particular notion of innovation. As a former scientist and a geek to this day, I love technology, but I think it's important that we step back from the very techno-centric AI mindset to a more value-driven discussion and reasoning. What I mean is that a nuanced and sensible assessment of the advantages and shortcomings of employing or investing in AI methods for particular applications is needed. Unfortunately, in a time of hype, the loudest voices are not necessarily the best to listen to, and many stakeholders have a vested interest in driving the AI narrative deeper and wider, disregarding current limitations and shortcomings.

What I would recommend is treating AI for what it is: a mighty instrument for certain types of problems, but never an end in itself. Currently we see much of the opposite, for example generative AI being treated as a mystical tool capable of expert output. Themes where AI can play to its strengths are prediction and optimization, especially where many variables come into play and a wealth of historic data is available: predicting energy production under varying conditions, optimizing storage solutions, or suggesting mitigations for fluctuations and vulnerabilities, for example. AI algorithms can forecast weather patterns, allowing for better planning and maximizing the utilization of renewable resources. Discovering novel formulas and molecules might also be possible if the training dataset is relevant enough.
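
As a concrete illustration of the "prediction with many variables and historic data" theme, here is a minimal sketch that fits a model to synthetic weather-to-output data. The features, the hidden relationship, and all numbers are invented for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000

# Hypothetical historic measurements: irradiance, cloud cover, temperature, wind
X = np.column_stack([
    rng.uniform(0, 1000, n),  # irradiance (W/m^2)
    rng.uniform(0, 1, n),     # cloud cover fraction
    rng.uniform(-5, 35, n),   # air temperature (deg C)
    rng.uniform(0, 20, n),    # wind speed (m/s)
])
# Invented "true" relationship plus noise, standing in for real plant data
y = (0.8 * X[:, 0] * (1 - 0.7 * X[:, 1])
     - 2 * np.maximum(X[:, 2] - 25, 0)
     + rng.normal(0, 20, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.3f}")
```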

What are some emerging AI trends that could significantly impact the future of cleantech?

The development speed of AI methods is unprecedented, and I can only wonder how fast researchers are coming up with new mind-blowing capabilities. Therefore, it's very hard to make predictions that will age well. Structurally, we see a further rise of large general-purpose models that function as platforms for a wave of narrow vertical AI tools. For example, Microsoft has recently shown Aurora, a new large-scale foundation model of the atmosphere. Building and training these models takes huge computational and financial resources together with giant data sources; it's not the typical startup competition ground. The open-source community, however, constantly shows that the competitive advantage of large models might not be as profound as anticipated. In any case, I would argue that access to a high-quality, proprietary dataset will continue to be a competitive advantage, and we will see many smaller players with groundbreaking products.

One particularly exciting use of AI is so-called ‘physics-informed neural networks,’ or PINNs for short. These models embed the governing physics equations directly into their training objective, often alongside data obtained by solving those equations with traditional methods. Many engineering disciplines have benefitted from the emergence of numerical simulations, which provide accurate results but are very cumbersome to set up and run. PINNs could be a much-needed addition to traditional simulations and might provide a boost to engineering in the coming decades, just as traditional numerics did in the past. For example, I have recently invested in a company training PINNs on computational fluid dynamics simulation data. Environmental simulations for the urban environment play a key role in understanding the impact of rapid urbanization on the climate of our cities. This company makes climate simulations accessible, affordable, and simple to use, so that architects and city planners receive instant feedback on ESG effects while planning buildings and urban environments.
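
To show what "physics in the training objective" means in practice, here is a deliberately tiny PINN sketch in PyTorch: it solves du/dx = -u with u(0) = 1 (exact solution e^(-x)) by penalizing the equation's residual at random collocation points. Real engineering PINNs apply the same idea to PDEs such as the Navier–Stokes equations; this toy is illustrative only.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x = torch.rand(64, 1, requires_grad=True)  # collocation points in [0, 1]
    u = net(x)
    # du/dx via autodiff; create_graph=True keeps the loss differentiable
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    physics_loss = ((du + u) ** 2).mean()                         # residual of du/dx = -u
    boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # enforce u(0) = 1
    loss = physics_loss + boundary_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test = torch.tensor([[0.5]])
print(f"PINN u(0.5) = {net(x_test).item():.4f}, "
      f"exact = {torch.exp(torch.tensor(-0.5)).item():.4f}")
```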

Lastly, what excites me is that science and research itself will be significantly transformed. Literature reviews and drafting papers are tedious but crucial parts of every knowledge worker's path to better outcomes. Current large language models are prone to hallucination and are relatively energy-hungry. There are attempts to overcome these shortcomings, either by altering the LLMs themselves or by intertwining them with other models. Connecting LLMs with neuro-symbolic AI, for example, could improve their handling of facts, reasoning, and memory. An engineer or scientist supported by such tools could multiply their productive output and permanently increase the clock speed of new innovations.

This interview is part of our ongoing series Voices of Innovation, where we convene cleantech investors to discuss challenges, opportunities and trends of the cleantech transition in Europe.

[1] International Energy Agency, Electricity 2024: Analysis and Forecast to 2026, https://iea.blob.core.windows.net/assets/6b2fd954-2017-408e-bf08-952fdd62118a/Electricity2024-Analysisandforecastto2026.pdf
