2025 - AVEVA World - San Francisco - Power & Utilities
Interactive Community Session | Power and Utilities: the role of AI and advanced analytics.
AI-infused solutions can turbocharge industries’ progress towards efficiency and sustainability. Although AI has been helping to lower carbon emissions for many years, we have only scratched the surface of its potential in the overall area of sustainability. This interactive session explores AVEVA's approach to AI and how we are actively working with governments and our industrial clients to continually refine AI models, enhancing efficiency and supporting the energy transition. Following a presentation by Jim Chappell, the audience will have the opportunity to discuss use cases, challenges, and how to get started quickly.
Industry
Power and Utilities
Company
AVEVA
Speaker
Ann Moore
Ann’s role at AVEVA is Industry Principal - Power & Utilities. Previously, Ann was the Business Development Executive, Director-Industry and Market Principals, then Director-Regional Development, Greater China and India at OSIsoft/AVEVA. Prior to OSIsoft, Ann was with SDG&E for many years. Ann holds a master’s degree from the University of Michigan.
Company
AVEVA
Speaker
Jim Chappell
With over 30 years of experience in the industrial software sector, Jim Chappell is currently AVEVA’s global vice president and head of Artificial Intelligence (AI). Prior to this, he led the Asset Performance Management (APM) suite of products and related engineering/analytics services for Schneider Electric. He was also a founding partner and managing officer of InStep Software, a global leader in industrial AI-driven Predictive Analytics and Big Data software, which was acquired by Schneider Electric in 2014. Jim holds a B.S. in Nuclear Engineering from Rensselaer Polytechnic Institute (RPI) in Troy, NY, an M.S. in Nuclear Engineering from the Naval Nuclear Power School in Orlando, FL, and an M.B.A. from Chaminade University in Honolulu, Hawaii. In addition, he graduated from the Civil Engineer Corps Officer's School (CECOS) in Port Hueneme, CA. He also held a top-secret clearance while an officer in the U.S. Nuclear Navy.
Session Code
SESS-135
Transcript
So, what is the role of AI in power and utilities? It's an exciting space, but it's also been around for a while. AI in power has been around, I would say, since the late eighties and nineties, when expert systems, which are one type of AI, were being developed.
And then in the late nineties and early two thousands, machine learning really started picking up with predictive maintenance and other uses. Power generation was one of the early adopters, along with oil and gas, and then it progressed beyond that.
And that's really where AVEVA came into play with AI: predictive maintenance, in the relatively early two thousands, in power generation and oil and gas, and then it progressed substantially beyond there.
And it kept moving. We kept doing more sophisticated things with root cause analysis, prescriptive guidance, fault diagnostics, forecasting, and things like that. And then over the last five years or so, we've really put a big focus on AI across all of our products and capabilities: design, operate (operations and maintenance), and optimize.
And then all the way leading up to our industrial AI assistant on our Connect platform, which is that generative AI, large language model interaction. It really humanizes the interaction with the data, and you can ask lots of types of questions and so forth. And things will keep going. But if you look at the design space, there are so many things that we've done and are continuing to do. Just to point out one of them: in unified engineering, our E3D design system, we put in a type of genetic algorithm, and it can do automated 3D pipe routing based on constraints, minimizing the number of bends and optimizing the throughput. And that's just one example. We're continuing to take that farther, and we're going to do prescriptive design, which will be released later.
And we're integrating our industrial AI assistant natively into E3D, which is on-prem, allowing it to interact not just with the help file, which is part of it, but you can also tell it to do stuff: do something for me, and it'll take charge. So it's really changing the way humans interact with software.
And then in the operations space, we've been doing that for a while. We have Vision AI natively integrated into our HMI/SCADA systems, continuing all the way to autonomous operations, and we'll talk about that a little later. So it's really going to the cutting edge.
And then in the optimized space, again, that's where we started.
And we're going all the way to set point optimizations and things like that. So, continuing to improve and grow. So, design, operate, optimize. We started with predictive maintenance back when predictive maintenance really began in the power industry.
So with data and AI in power, data is the foundation, and then you apply AI. And there are so many things you can do with it, even outside of AVEVA, all over the grid side: improving grid resilience, load and demand forecasting, managing renewables and all that distributed generation coming onto the grid (super complicated), AI-driven state estimation, or AI-driven synchrophasors (PMUs), which allow you to operate with less margin on the grid.
And by doing that, you can put more power across the grid, because you're using some of that reserve capacity since things are so much more accurate. So you can leverage the existing physical infrastructure you have to transmit more power.
But data and AI, that's where it's at. You have good data. You have easily accessed data. You have enough data, and you apply the AI to it.
So what I'd like to do now is walk you through the progression, one thread of it, starting with one of the first AI applications in power, predictive maintenance: how it's evolved and become more sophisticated, some examples of where things are today, and where things are headed with AI in power and utilities.
So reliability centered maintenance, the RCM strategy, started to become a big thing in the 2000s. Instead of saying, okay, I'm going to go clean the filter or lube the bearings every three months whether they need it or not, you start basing it on data. It becomes more data driven.
And so a study was done, and it said eighteen percent of all failures are time based, calendar based, which is the foundation of preventative maintenance.
But eighty two percent of all failures are random.
And so, that's where AI comes in.
So if you look at the pyramid on the right, at the bottom you've got reactive maintenance, run to failure. Things are low cost, low impact, and there's plenty on the shelf, so when it breaks, just replace it. That's a valid strategy. Preventative maintenance is your calendar based maintenance.
That's how it's been done for a long, long time. Then condition based maintenance: that's where you start using PI tags and calculations to trigger an event. This pressure is going up.
This temperature is going up. Let me go take a look. It's an alert. And that was a big step forward.
And then predictive maintenance. That's the AI driven level, using anomaly detection and machine learning. And then at the top of the pyramid is risk based maintenance, and that's where things are headed, and we'll talk about that as well.
So in this evolution, it starts with historical and real time data, and that's the foundation. It's in your historian, in a data repository or data lake, or in your cloud platform. You take that historical and real time data, and then you apply predictive analytics to it, the anomaly detection.
Now you get early warning, which could be days, weeks, maybe months ahead of a SCADA alert or a control system alarm. And so it gives you time to react. When that happens, great. But the next step is: what do I do?
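As a rough illustration of the anomaly detection idea, here is a minimal sketch: fit a model of normal behavior from known-good operating data, then flag readings whose residual drifts outside the normal band, often well before the raw value crosses a fixed alarm limit. The sensor pairing (bearing temperature versus load), the numbers, and the simple one-variable linear model are all hypothetical stand-ins for the multi-sensor models a real predictive analytics product would use.

```python
import statistics

def train_normal_model(pairs):
    """Fit a least-squares line y ~ a*x + b from known-good operating data,
    plus the residual spread observed under normal conditions."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    a = (sum((x - mx) * (y - my) for x, y in pairs)
         / sum((x - mx) ** 2 for x, _ in pairs))
    b = my - a * mx
    resid = [y - (a * x + b) for x, y in pairs]
    return a, b, statistics.stdev(resid) if n > 2 else 1.0

def is_anomalous(model, x, y, k=3.0):
    """Flag a reading whose residual exceeds k standard deviations of normal,
    often long before y itself crosses a fixed SCADA alarm limit."""
    a, b, sigma = model
    return abs(y - (a * x + b)) > k * max(sigma, 1e-9)

# Hypothetical data: bearing temperature (y) tracks load (x) under normal ops.
normal = [(load, 40.0 + 0.5 * load + noise)
          for load, noise in zip(range(50, 100), [0.3, -0.2, 0.1, -0.4, 0.2] * 10)]
model = train_normal_model(normal)

print(is_anomalous(model, 80, 80.1))  # consistent with the learned relation
print(is_anomalous(model, 80, 86.0))  # well above expected: early warning
```

The point is that 86 degrees might sit far below any fixed alarm threshold, yet it is anomalous *for this load*, which is exactly the multi-sensor context argument made above.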
And that's prescriptive. You use AI there to say, what are the contributors to the anomaly? Is it an outlet pressure problem? Is it a vibration problem? An electrical problem?
And then, what's the likely root cause, and what do I do to fix it?
And then, prognostics, how bad will it get or what's the forecast?
What's the remaining useful life of this asset? Can I make it to the next planned maintenance outage, or should I do an immediate shutdown? So that's forecasting, where you're using types of deep learning and statistical methods to extrapolate out.
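To make prognostics concrete, here is a toy remaining-useful-life estimate: extrapolate a degradation trend out to a failure threshold. A simple least-squares line stands in for the deep learning and statistical forecasting methods mentioned above, and all the numbers (a vibration signal, an alarm threshold) are hypothetical.

```python
def remaining_useful_life(history, threshold):
    """Estimate time steps until a degradation signal (e.g. vibration level)
    crosses a failure threshold, by linear extrapolation of the trend.

    history: list of (t, value) samples, most recent last.
    Returns steps remaining, or None if the signal is not degrading."""
    n = len(history)
    mt = sum(t for t, _ in history) / n
    mv = sum(v for _, v in history) / n
    slope = (sum((t - mt) * (v - mv) for t, v in history)
             / sum((t - mt) ** 2 for t, _ in history))
    if slope <= 0:
        return None  # flat or improving: no projected failure
    _, v_last = history[-1]
    return (threshold - v_last) / slope

# Hypothetical: vibration creeping up 0.1 mm/s per day; alarm threshold 12 mm/s.
trend = [(day, 6.0 + 0.1 * day) for day in range(30)]
days_left = remaining_useful_life(trend, threshold=12.0)
print(round(days_left, 1))  # prints 31.0 -- about a month of margin
```

That "about a month" number is what feeds the decision above: make it to the planned outage, or shut down now.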
So let me give you some examples, because it's all real world. We've been doing this a long time.
Predictive analytics and predictive maintenance started with heavy rotating equipment, and a lot of you, I'm sure, are familiar with that. Here's an example of a combustion turbine where the blade path temperature was increasing. The spread was increasing, and it turned out a transition piece was failing. There's a picture of the crack.
And had it gone on, it would have separated. It would have caused major damage to the turbine. It would have been a lot of unplanned downtime. And so the customer calculated themselves the amount of avoided cost that this resulted in.
It was over four million USD.
And it was due to things like: they didn't lose production, they didn't have to bring in a lot of people on overtime, and it didn't damage the turbine. Also, the slow degradation of this kind of failure results in an efficiency loss over time; that didn't go on for too long, because they were able to detect it. So over four million dollars. And we've had customers report, and show us how they calculated, over thirty to forty million dollars in avoided costs; some have reported just huge numbers over time when they deployed it enterprise wide.
So, that's predictive analytics on assets. That's been done. That's how it started.
It's still super valuable and valid today. As a matter of fact, what's happening is more industries are doing it. It's moved into manufacturing and chemicals and mining, and it's really just become pervasive. So more and more industries are doing predictive analytics, following the lead of power and oil and gas.
But then they put it in operations: predictive analytics in operations. Here's an example of a customer running a plant who had to do a shutdown; they had a hydrogen leak. Anyway, they fixed that and came back up, and the operator inadvertently set the steam extraction two hundred degrees too low. Steam extraction off the steam turbine is where they take steam for ancillary processes.
And so what happened is it was two hundred degrees too low, but no alarm is going to fire, because that's a valid operational state in general. Two hundred degrees lower is fine for some plants, but it's not valid for them. And the AI said, wait, you usually don't operate like that, because it sees it in the context of all the other sensors it's monitoring as part of the model.
So you don't normally operate like that. Yes, two hundred degrees lower is okay, but not for you. And so it alerted them, and they went, whoa.
Whoa. That's not good. And they fixed it. But then the calculation is: what if they had continued to operate that way?
Well, their heat rate is way off, so they're going to be burning a lot more fuel and emitting a lot more carbon. And they use attemperation to cool it off, which would have consumed a hundred thousand gallons of demineralized water. But even more significant is two thousand metric tons of carbon dioxide per week. And the customer said it probably would have been five or six weeks before anybody noticed it, just because it wasn't an alarm and they weren't looking at it.
You set it, you forget it. And stuff like that happens all the time. So that's AI in operations.
But it's also for green energy: wind farms.
Here's an example. There was a cocked roller, rotated a hundred and eighty degrees, and the turbine was going to seize up and fail, but they were able to catch it. That's just one turbine, though. There was a study that showed there are almost two and a half failures per wind turbine per year, statistically.
So if you have a wind farm of a hundred turbines, statistically you'll have roughly two hundred and fifty failures or major issues a year, many of which could be avoided. What if you can find all those things way before they become big and impactful? They could be blade pitch angle problems, where maybe it's not a failure but you massively reduce your efficiency, or an electrical problem, other types of vibration problems, yaw problems, and so forth. If you can catch them all, you can send crews out.
And the crews can be super efficient, because they'll have a long list of problems they need to fix, you only send them out once, and they fix issues very early, so you're not losing your power and having to backfill or buy power off the grid. That really keeps things super efficient and reliable for the wind farm. That's just one example of many where predictive analytics helps in green energy as well. Same with solar fields and with alternative energies like blue hydrogen and green hydrogen, where you can also help predict what's going on, oftentimes in combination with physics simulation.
So, I mentioned integrating with simulation. What if you were to integrate physics based, first principles simulation with AI? Some amazing things can happen, and there are way too many things we're doing with that to go over all of them, but I'd like to discuss one of them, called predictive asset optimization, or PAO.
That's what we call it. If you remember the pyramid, I said I would talk about what's beyond predictive maintenance: risk based maintenance.
And this is where you're moving into risk based maintenance, and you're balancing that risk versus cost. So, should I stop and fix it now?
Or should I wait for the next planned maintenance outage? Well, if I stop and fix it now, I'm going to incur an extra outage. It's going to be very expensive.
My efficiency is going to drop. But when I come back up, I'm going to be running at a higher level of efficiency until the outage, maybe three months down the line. Or do I just say, alright, I'm going to run at a lower efficiency until the next outage, and I don't have to incur the extra shutdown? What's better?
Well, you don't know. The only way you're going to know is to simulate it, with AI in the simulation, and now you can get much better insight into making that choice. And we've had cases where the answer was super surprising.
In some cases, it was more efficient to shut down; in others, it was more efficient to run at a lower efficiency. It's something you would think you could do on gut feel. In many cases, no.
But also: can I operate differently?
Maybe there's something operationally, because this whole O&M thing needs to merge together, and this helps do that. Maybe I can operate differently and improve my efficiency enough to make it to the next planned outage, or something like that.
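A minimal sketch of that risk-versus-cost comparison, with entirely hypothetical numbers: each strategy is scored over the horizon to the next planned outage. The real version would get its outage and daily-loss figures from physics simulation plus AI rather than from constants.

```python
def total_cost(outage_cost, loss_per_day, days_at_loss):
    """Cost of one maintenance strategy over the horizon to the planned outage."""
    return outage_cost + loss_per_day * days_at_loss

horizon = 90  # hypothetical days until the next planned maintenance outage

strategies = {
    # Extra shutdown now: pay the outage, then run clean for the full horizon.
    "shut down now": total_cost(outage_cost=500_000, loss_per_day=0, days_at_loss=0),
    # Run degraded: no extra outage, but lose efficiency every day until then.
    "run degraded": total_cost(outage_cost=0, loss_per_day=8_000, days_at_loss=horizon),
    # Operate differently (e.g. shift load elsewhere): smaller daily penalty.
    "operate differently": total_cost(outage_cost=0, loss_per_day=3_000, days_at_loss=horizon),
}

best = min(strategies, key=strategies.get)
for name, cost in strategies.items():
    print(f"{name}: ${cost:,}")
print("cheapest:", best)
```

With these made-up figures the third option wins, but flip the daily penalty or the outage cost and the answer flips too, which is exactly why gut feel fails and simulation is needed.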
So let's get specific. These were actual cases where we applied PAO. The first is heat exchanger fouling. The customer was running a once through steam generator to extract oil from sands, and they were having fouling of the tubes, so they weren't transferring heat as well as they should.
Therefore, it's generating less steam, and they're having to burn more fuel to generate the same amount of steam. So what do they do? The fouling was progressing at a rate.
And the more steam you try to produce, the faster it's going to foul. So do they shut down now, or do they wait for the next planned outage?
Or can they operate differently? In that case, the answer was they could operate differently. They could increase duct burner firing, which injects heat upstream, by ten percent. It wasn't enough to tip the scales entirely, but it would allow them to generate enough steam to meet demand for the next few months until the outage. Plus, they had another unit sending steam into a common header, and they could increase that a little bit as well. And that way, they didn't have to get hit with an extra outage.
But that wasn't a question of "do I wait or do I shut down now"; it was "can I actually operate differently".
Another one is gas turbine maintenance. A typical thing with gas turbines is the air filter. You've got to replace the air filter.
When do you do it? What's the optimal time to replace that filter? Because when you do it, you're shutting down; you're not producing.
And if you shut down too early, you're going to have to do it again too soon. But if you wait too long, the scales tip and your efficiency goes way down. So what's the best time to do it? That's what this answers.
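As a back-of-the-envelope illustration of that trade-off (not AVEVA's actual method), suppose the fuel penalty from filter fouling grows linearly with time since the last replacement, while each replacement carries a fixed outage cost. Minimizing the average daily cost then yields an optimal replacement interval. All numbers are hypothetical.

```python
def avg_daily_cost(cycle_days, replacement_cost=50_000, fouling_rate=10.0):
    """Average cost per day if the filter is replaced every `cycle_days`.
    The daily fuel penalty grows linearly with fouling, so the cumulative
    penalty over one cycle is quadratic (integral of rate*t dt)."""
    fuel_penalty = fouling_rate * cycle_days ** 2 / 2
    return (replacement_cost + fuel_penalty) / cycle_days

# Search the first year of candidate intervals for the cheapest cadence.
best = min(range(1, 365), key=avg_daily_cost)
print(best)  # prints 100, i.e. sqrt(2 * replacement_cost / fouling_rate)
```

Replacing too often pays the outage cost too frequently; waiting too long lets the quadratic fuel penalty dominate, which is the "tip the scales" effect described above.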
The other question is: when you do it, did it work? Can you prove that it was the best time? That was always a big problem.
Nobody would ever go back and look at that. Now, with this type of system, with simulation along with the AI, you can go back and look at maintenance optimization. Did it work? What were the lessons learned?
So those are some examples of AI and simulation happening in the power and energy industries.
Moving even farther is autonomous operations.
So, this is full blown, no human in the loop. Or it could be run as guidance and the human makes the change, the operational change.
But what this is, and we're actually partnered with NVIDIA on the solution, is not just autonomous operations for steady state; it's autonomous operations for transient conditions. It's like those self driving taxis you're seeing outside with Waymo and all that.
That's really the goal here. If you have a startup or a shutdown, there are just so many variables happening. What do you do? How can you handle that?
Or if you have a major change in feed level or a major disruption to plant operations, a lot of times it'll take experienced operators six to eight hours to get things to stabilize. In some cases, they'll shut down the unit and bring it back up, because that's the only way they can stabilize it. Here, this could do it in seconds. The way it works is it takes our dynamic simulation.
It uses synthetic data, not historian data, because you can never have enough data in the historian to cover all the operational scenarios that are possible, good and bad. And that's what you need to train the reinforcement learning engine on; that's where we're partnered with NVIDIA. So we train this engine in a way that turns it into a brain. It knows every operational scenario, good and bad.
And as you're training it, it has a feedback loop that says good, good, bad, bad, bad. So it remembers all the good, and the best, and it knows what those set points should be. You're just blasting it with data for weeks on end to train it. Once it's trained, it can be deployed in the cloud or on-prem; it's very flexible.
It can also handle batch operations and improve quality, because you can stabilize for consistency, since you can predict it that much better. So it can do that set point optimization.
So, autonomous operations. This is something that's in the labs. It works. It's being deployed at a couple of customers right now, we have more lined up, and we'll be turning it into a commercial solution as well.
These are cutting edge things.
I mentioned our industrial AI assistant.
This is where you ask it a question. We have all that PI data, we have the Wonderware data, we've got all kinds of data in our cloud, and you can ask it a question to query it. And not just time series data; you can ask about engineering data or PDF manuals. You ask it a complicated question, and it'll bring back whatever data you need and serve it up to you. So that's at the data and information level.
But we're moving toward, and this is going to be released in the next month or so, more on the functional level: do something for me. We're starting with create a dashboard. Create a dashboard with a pie chart, with trends, with these types of tags.
And it takes those types of tags and figures out exactly which tags they are, using things like inferencing and semantic search; it figures out what the related tags are and then puts them on the dashboard, and of course, you can edit it from there. But it's very human. It's natural language. So that's at the functional level.
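A toy sketch of that tag-resolution step: score historian tags against a natural-language request and return the best matches. Simple word overlap stands in for real embedding-based semantic search, and the tag catalog and tag names are made up for illustration.

```python
def similarity(query, description):
    """Toy stand-in for embedding-based semantic search: Jaccard word overlap.
    A real assistant would compare dense vector embeddings instead."""
    q = set(query.lower().split())
    d = set(description.lower().split())
    return len(q & d) / len(q | d)

# Hypothetical historian tag catalog: tag name -> human-readable description.
tags = {
    "U1.TURB.BRG1.TEMP": "unit 1 turbine bearing 1 temperature",
    "U1.TURB.SPEED": "unit 1 turbine shaft speed",
    "U1.GEN.MW": "unit 1 generator active power output",
}

def find_tags(request, top=2):
    """Rank catalog tags by relevance to a natural-language request."""
    ranked = sorted(tags, key=lambda t: similarity(request, tags[t]), reverse=True)
    return ranked[:top]

print(find_tags("trend the turbine bearing temperature"))
# -> ['U1.TURB.BRG1.TEMP', 'U1.TURB.SPEED']
```

The user never types a tag name; the assistant maps "turbine bearing temperature" to the right historian tags and builds the dashboard from there.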
And then ultimately, and we're working on this, if you saw the keynote yesterday morning with the R&D session, it showed some demos of agents that we're building. We want this unified user experience. Why should you have to learn all these different pieces of software, with menus and drop downs and sometimes scripting? Just tell it what you want it to do, at least at the fundamental level.
Advanced users can still do all the advanced stuff. But on a fundamental level, tell it what you want it to do, and it'll do it. That's what we're working on with these agents.
And it's in combination with generative AI. So with generative AI plus agentic AI, we're going to have more and more agents behind the scenes running autonomously, doing stuff for you. Many will be AI; some may not even be AI, they could be different types of technologies. But it's a bunch of agents, and the experience is going to be very different, because it'll be much more unified and interactive.
But we're working on it, one step at a time, and the things you saw yesterday are in the labs. They're functional, but they're still being developed.
And AI helps drive net zero. Of course, AI has its own carbon footprint. And typically, when you think about that, you're thinking the large language models and the big training and all that, and that's a huge, huge footprint. Most of what I talked about today with AI has a small carbon footprint. It doesn't use that much energy.
But the benefits from it vastly outweigh the energy it uses. And so it helps drive net zero by improving efficiency: if you're more efficient, you're burning less fuel and reducing your greenhouse gas emissions.
And tomorrow, it also helps drive down the cost of renewables, like I talked about before. If you can drive down the cost of green energy and improve its reliability to where it's equal to or better than traditional sources of power, it'll be a no-brainer for everybody. That's the big drive to net zero. And once you achieve net zero, or a larger dominance of net zero, that's going to be the energy used to power AI.
And AI, regardless of how much energy it uses, will have a zero carbon footprint. So it's a self-fulfilling prophecy.
And with that, thank you very much.