2025 - AVEVA World - San Francisco - Innovation (CONNECT, AI, etc.)
From Data to Decisions: How AVEVA AI is Shaping the Future of Industry
For two decades AVEVA has delivered proven Industrial AI solutions at scale. Our broad, AI-infused software portfolio continues to evolve across operations and engineering with capabilities at the edge, in the cloud, and through advanced hybrid solutions. We don't do AI for AI's sake; we leverage it to make our industrial solutions better. This session will provide an overview of the breadth of AVEVA AI today, our vision and strategy, and specifically where we're continuing to advance with generative and agentic AI.
· Cutting-edge AI-infused products and solutions to help achieve your operational and sustainability goals
· Multiple types of AI working together to create new capabilities
· Industrial AI Assistant on CONNECT leveraging GenAI and intelligent agents to drive a new level of capability in the cloud
Company
AVEVA
Speaker
Jim Chappell
With over 30 years of experience in the industrial software sector, Jim Chappell is currently AVEVA's global vice president and head of Artificial Intelligence (AI). Prior to this, he led the Asset Performance Management (APM) suite of products and related engineering/analytics services for Schneider Electric. He was also a founding partner and managing officer of InStep Software, a global leader in industrial AI-driven predictive analytics and big data software, which was acquired by Schneider Electric in 2014. Jim holds a B.S. in Nuclear Engineering from Rensselaer Polytechnic Institute (RPI) in Troy, NY, an M.S. in Nuclear Engineering from the Naval Nuclear Power School in Orlando, FL, and an M.B.A. from Chaminade University in Honolulu, Hawaii. In addition, he graduated from the Civil Engineer Corps Officers School (CECOS) in Port Hueneme, CA. He also held a top-secret clearance while an officer in the U.S. Nuclear Navy.
Company
AVEVA
Speaker
Lori Warda
Lori Warda is a seasoned professional with over 25 years of experience in industrial software. Currently serving as a Product Director for AVEVA’s AI team, Lori has navigated a diverse career path from software development to managerial roles. She has worked on numerous data integration and visualization projects and products. She holds a Bachelor of Science in Electrical Engineering from the University of Notre Dame and an MBA from DePaul University.
Session Code
SESS-101
Transcript
I'm Jim Chappell, and Lori Warda and I are going to talk about how AI is shaping the future of industry. And it really is. It starts with data.
You apply AI, you apply analytics, and that helps in decision making. It can help drive decisions, but it goes a lot further than that: it can help all the way through to autonomous operations.
But before I get into all that, I want to define what industrial AI is. You know, what makes it different? Well, it's domain-focused AI. We're not a generic player at AVEVA. We're an industrial software company, and we do industrial AI.
And so it solves problems. It makes our solutions and products and data more effective, better, more performant. But it's also not just one thing. AI is a science made up of many types of technologies: various types of machine learning, plus expert systems. The types of machine learning include deep learning, generative AI, reinforcement learning, neural nets, genetic algorithms, all types of things. And they can work together, and they can improve things. As they work together more and more on a data platform that connects everything, you're going to help with that learning and collaboration and innovation.
And that's going to drive the buzzword: radical collaboration.
And so that's truly where we're headed. It used to be that AI would do one thing and do it very well: predictive maintenance or vision AI or schedule AI. But now it's starting to integrate and do more complicated things, driving multiple facets of the business, all working together.
And so, it's not just a technology, it's lots of different things. At AVEVA, we've been doing this for over twenty years, with strong domain expertise in the industrial space, end to end, all types of industries. Today, we have nineteen AI-infused products, with more under development. And our strategy is simple.
First, we infuse AI into our products, and we've been doing that for a long time. Some products are built around AI, and that's the focus. Others are, you know, SCADA systems or engineering design systems, but AI makes them better.
Then we've been integrating, say for the last five years or so, multiple types of AI together to make it more powerful, more intelligent. And ultimately, we want to apply multiple types of AI and analytics to the data to achieve that industrial intelligence. We do this across the design space, engineering design, operations and maintenance, as well as optimization.
And so, just looking at our evolution of AI: we started back in 2004 building predictive analytics and released it in 2006. That was focused around predictive maintenance and reliability-centered maintenance. But we've been adding to it, adding lots of capabilities to it. And you see a big acceleration around 2019, 2020, where there was a big shift and we really started focusing on AI across all of our businesses, all areas of AVEVA: engineering design, operations and maintenance, data, and optimization.
And so we've been accelerating. That white line is now, and we're going to be releasing quite a bit this year in AI and beyond. If you look at the design space, we've got a number of capabilities in E3D where we've infused things, and we're going to look at some examples here today. We have plans to drive it a lot farther.
With Operate, we've had Vision AI in our SCADA systems and machine learning in SCADA and other things and we're going to drive it a lot farther all the way to autonomous operations.
And same with optimization. We're going to optimize those set points and create super sophisticated autonomous operation solutions, all the while continuing to advance our existing capabilities and portfolio.
So that's the roadmap. That's the high level. Now let's take a look at some specifics.
So again, design, operate, optimize. Let's start with design, engineering design.
And we have so much to talk about in this space, things we're working on that are coming. But just to point out a couple of them, we have what's called the intelligent point cloud.
What that is, is a great way to turn a brownfield into a digital twin. You can do a 3D laser scan of a facility, and it looks really cool, really powerful, but it's dumb. And so you need intelligence. The intelligent point cloud looks at all these pixels, all these dots, and then says what's related to each other as an instance.
And then, what type of instance is it? Is it a pipe, a valve, a pump, a turbine? And then, specifically, which piece of pipe, which valve, which turbine, which pump? And if it's a container, is it a tank, is it a pipe, is it a valve? You know, it can color-code it.
And then once you know specifically what item it is, you can tag it. You can relate it to the engineering data, such as our AIM system.
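To make that first "what's related to each other" step concrete, here is a toy sketch, not AVEVA's implementation, of grouping 3D scan points into instances by simple distance-based region growing. The point format and the `radius` threshold are assumptions for illustration.

```python
def cluster_points(points, radius=1.0):
    """Group 3-D scan points into instances by proximity (region growing):
    points within `radius` of any member join that member's cluster."""
    r2 = radius * radius
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        stack, cluster = [seed], [seed]
        while stack:
            i = stack.pop()
            # collect all still-unvisited points within range of point i
            near = [j for j in unvisited
                    if sum((points[i][k] - points[j][k]) ** 2
                           for k in range(3)) <= r2]
            for j in near:
                unvisited.discard(j)
            stack.extend(near)
            cluster.extend(near)
        clusters.append(sorted(cluster))
    return clusters
```

A real point cloud would then feed each cluster to a classifier (pipe, valve, pump) before tagging; this sketch only does the grouping.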
And then you can pull it all together. It's a really powerful capability to move brownfields into the digital world. Then we have generative design AI, which we released last December as part of Unified Engineering E3D.
And it's done some amazing things. We're starting with automated 3D pipe routing, and you may have seen Rob McGreevey on the main stage this morning giving a demonstration. It can really transform engineering design.
We're going to move to HVAC, and then to structural elements. But basically, you give it the start and end points of the pipes, and then you give it a plane. You say, I want most of the piping up high, or wherever you want it, so that maybe people walking down below aren't tripping over pipes.
And then, obviously it's going to avoid any clashes. It's not going to hit structural elements or other pipes or anything else.
And then it's going to optimize itself to minimize the number of bends, the number of elbows, and the overall length of the pipes. You can reduce head loss and optimize the efficiency of whatever's being transported through the pipe.
And so, it gives you options and several things to choose from and then of course you can manually edit it from there or you can have it run some other scenarios.
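As a rough illustration of the routing idea, not the product's algorithm, here is a small grid-based search in Python that avoids obstacles and trades off pipe length against the number of elbows via a bend penalty. The 2-D grid, the unit step cost, and the `bend_penalty` weight are all assumptions.

```python
import heapq
import itertools

def route_pipe(grid, start, goal, bend_penalty=3):
    """Dijkstra over (cell, direction) states on a 2-D grid: obstacles are
    cells == 1; cost = number of steps + a penalty per direction change."""
    rows, cols = len(grid), len(grid[0])
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    tie = itertools.count()                     # heap tie-breaker
    frontier = [(0, next(tie), start, None)]    # (cost, tie, cell, direction)
    best, parent = {}, {}
    while frontier:
        cost, _, cell, d = heapq.heappop(frontier)
        if cell == goal:
            path, key = [cell], (cell, d)       # walk parents back to start
            while key in parent:
                key = parent[key]
                path.append(key[0])
            return path[::-1], cost
        if best.get((cell, d), float("inf")) < cost:
            continue
        for nd in moves:
            r, c = cell[0] + nd[0], cell[1] + nd[1]
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                # every step costs 1; changing direction adds an elbow penalty
                nc = cost + 1 + (bend_penalty if d is not None and nd != d else 0)
                if nc < best.get(((r, c), nd), float("inf")):
                    best[((r, c), nd)] = nc
                    parent[((r, c), nd)] = (cell, d)
                    heapq.heappush(frontier, (nc, next(tie), (r, c), nd))
    return None, float("inf")
```

Raising `bend_penalty` biases the search toward longer but straighter runs, which is the length-versus-elbows trade-off described above, here in two dimensions rather than a full 3D model.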
And so, there are so many things like that. Obviously the next part of it, or another part we're working on, is predictive design. What comes next? You know, flange, valve, flange is a simple example, but more complicated ones would let you take it much farther.
Now, let's look at the operation space.
We've been doing HMI/SCADA, and we've been a leader in that space for 30, 35 years. One of the early products was InTouch, and Wonderware InTouch is still a dominant player in market share. But it runs on prem. So how are we going to improve that with AI?
Well, what we've done, and again, this is in the labs, but it is functional and we're working on it, is to integrate and embed our Industrial AI Assistant, which runs on Connect, our Connect platform, with generative AI and a large language model, into InTouch, so that you can ask it questions. And now it's like, what is InTouch? What is it good for? It can integrate with the help files to get people started.
It lowers the barrier to entry. If you don't know how to use the software, you don't have to go through a big training regime to learn it. How do I do things? How do I create a view? And then it's, okay, well, create a view for me.
So now we're taking it to the next level. Not only are you asking at an informational level how to do something, you're actually asking it to create something for you.
And it does. It interacts with the API. So, okay, now put a graphic on the screen for me, and it will.
And so it works with different functions. It's changing the user experience of software.
It's making it so that you don't have to learn how to operate software with the menus and the drop-downs and the clicks and the scripting or whatever you might need to do. It does it for you. Maybe you don't like the background: change the background to white. Whatever.
These are very functional and explicit commands, and over time it'll get more implicit about your desires. But this is the beginning of a new user experience for software. And you can ask it more complicated questions as well. You know, how do you do limits on points and tags within InTouch?
That is putting that type of user experience into operational software that's been around and continues to be a dominant player in the market. And of course, we've done Vision AI integrated with System Platform and OMI as well. Now, moving into the optimize space. As you saw from the timeline, we've been doing that for twenty years or so with predictive maintenance. And it all starts with historical and real-time data. You know, your data historian, whether it's your PI data, your Wonderware historian, or any historian for that matter. And the real-time data: the IoT, the SCADA data, the control system data that's coming in. That's the foundation.
And then you use that to create a predictive model. It's multivariate, and it allows you to look for deviations and anomalies that you can't really figure out by yourself, because there are so many variables and nuances. It allows you to find issues that would otherwise go undetected days, weeks, maybe even months ahead of a SCADA or control system alarm or any type of alert. And time is money: that gives you time to take action, take precautions, schedule maintenance or do emergency maintenance.
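The core idea of such a model can be sketched in a few lines. This is a deliberately tiny stand-in, one sensor predicted from one correlated sensor via least squares, with a z-score test on the residual, not the product's multivariate engine; the sensor pairing and the threshold are assumptions.

```python
def fit_line(x, y):
    """Ordinary least squares y ≈ a*x + b over the training window."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def detect_anomalies(train_x, train_y, live_x, live_y, z=3.0):
    """Flag live points whose deviation from the learned sensor-to-sensor
    relationship exceeds z standard deviations of the training residuals."""
    a, b = fit_line(train_x, train_y)
    resid = [yi - (a * xi + b) for xi, yi in zip(train_x, train_y)]
    n = len(resid)
    mu = sum(resid) / n
    sigma = (sum((r - mu) ** 2 for r in resid) / n) ** 0.5
    return [abs(yi - (a * xi + b) - mu) > z * sigma
            for xi, yi in zip(live_x, live_y)]
```

A real system does this jointly over many sensors at once, which is what lets it catch deviations no single-sensor alarm would see.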
But then we've taken it a lot farther. We've made it prescriptive, using AI to find the outliers and the root cause. What sensors contributed to the problem? Is it an outlet pressure problem?
Is it a vibration problem? You know, what is it? And it could be multiple sensors that contributed. Then we use fault diagnostics to figure out the likely root cause, and we have a library of prescriptive actions that you should take to begin the repair or the maintenance.
And then, prognostics. This is where deep learning and statistical methods come in, so that you can forecast the remaining useful life of an asset.
Can I make it to the next planned maintenance outage, or should I schedule an emergency outage now? All of these things we've been doing, adding to them, becoming more and more sophisticated. In this case, it's part of a reliability-centered maintenance program, but it's that hybrid: we have these types of capabilities on prem and in the cloud.
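As a toy illustration of the prognostic question above (real remaining-useful-life models use deep learning and richer statistics, as described), here is a linear degradation-trend extrapolation in Python. The health indicator, its units, and the failure threshold are invented for the example.

```python
def remaining_useful_life(times, health, threshold):
    """Fit a linear degradation trend to a health indicator and extrapolate
    to the failure threshold; returns time remaining after the last sample."""
    n = len(times)
    mt, mh = sum(times) / n, sum(health) / n
    slope = (sum((t - mt) * (h - mh) for t, h in zip(times, health))
             / sum((t - mt) ** 2 for t in times))
    if slope >= 0:
        return float("inf")          # no degradation trend detected
    intercept = mh - slope * mt
    t_fail = (threshold - intercept) / slope   # when trend crosses threshold
    return max(0.0, t_fail - times[-1])
```

The output is exactly the "can I make it to the next planned outage?" number: compare the returned time against the time until that outage.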
And so now, take the same type of AI and add simulation to it. One of the things that you may or may not have seen on the timeline was something called predictive asset optimization, PAO. This is where we use AI and simulation to look at the risk-based aspects of things. So, yeah, maybe I know I can make it to the next planned maintenance outage, but should I?
Or should I shut down now? Is there a way I could operate differently? You know, various things like that. Another type of AI plus simulation is gray-box modeling.
What that is, is you take a physics-based model, you know, algorithms and physics: that's the white-box model, as the term goes. The black-box model is the AI model. Put them together and you can do some really cool stuff. A carbon capture unit is a good example, because it takes a while to run. It may take ten, twenty, thirty minutes to converge.
Whereas the AI model can do it in a second. And so you can run the overall process in real time. Some of the components run physics-based and feed into the AI model; the AI model runs super fast and feeds back into other physics-based components.
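Here's a minimal sketch of the surrogate idea behind gray-box modeling: sample a slow "white box" physics function offline, then answer in real time from a fast interpolating "black box". The capture-rate formula is invented purely for illustration; it is not AVEVA's model.

```python
def physics_capture_rate(flow):
    """Stand-in 'white box': imagine a rigorous simulation that takes
    minutes to converge. The formula here is purely illustrative."""
    return 0.9 * flow - 0.001 * flow ** 2

def build_surrogate(f, lo, hi, n=50):
    """Run the slow model offline on a grid, return a fast 'black box'
    that answers by piecewise-linear interpolation."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ys = [f(x) for x in xs]                      # expensive offline runs
    def surrogate(x):
        x = min(max(x, lo), hi)                  # clamp to sampled range
        i = min(int((x - lo) / (hi - lo) * (n - 1)), n - 2)
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return ys[i] + t * (ys[i + 1] - ys[i])
    return surrogate
```

In a real gray-box setup the surrogate would be a trained ML model, and its fast answers would feed back into the remaining physics-based components each time step, exactly the loop described above.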
Then with autonomous operations, this is where we have a partnership with NVIDIA. And so we're taking autonomous operations to the next level. We've been doing steady state autonomous for quite some time and that's very good, very powerful. Tweaking set points automatically to maintain a process.
Here, we're doing it through transients as well: startups, shutdowns, major disruptions, changing feed levels. All these different types of things are handled through autonomous operations. And the key is how you train the model. Here's an example where you're changing the feed level and getting random outputs, so it's going to give bad reward states.
It's going to say bad feedback, bad feedback, until eventually they converge and it gets happy, hence the happy face. And when you do this with many variables on a huge scale, you're going to create terabytes of data that you're training into your reinforcement learning brain, covering all operational states.
And there's no way you can have that amount of data in a historian, because you're never going to see all those operational states. So we use synthetic data. We use our dynamic simulator, a rigorous first-principles model that does all these transients and dynamic processes, along with NVIDIA's reinforcement learning engine running on NVIDIA GPUs, to do things like minimize production impact by stabilizing things quicker and reducing startup and shutdown time, or, for batch processes, maintaining quality much better and much more consistently through all these dynamics and changes.
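That training loop can be caricatured with tabular Q-learning on a one-variable synthetic process: a toy "simulator" generates the transitions, the reward punishes deviation from a setpoint, and the learned policy nudges the process back. Everything here, states, setpoint, hyperparameters, is an illustrative assumption, many orders of magnitude smaller than the simulator-plus-GPU system described.

```python
import random

SETPOINT, ACTIONS = 5, (-1, 0, 1)

def simulate(level, action):
    """Synthetic-data generator standing in for a rigorous dynamic simulator."""
    nxt = max(0, min(10, level + action))
    return nxt, -abs(nxt - SETPOINT)     # worse reward farther from setpoint

def train(episodes=3000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(11) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.randrange(11)            # random start covers all states
        for _ in range(20):
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda a: q[(s, a)]))
            s2, r = simulate(s, a)
            # standard Q-learning update toward reward + discounted best future
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q

def policy(q, s):
    """Greedy set-point adjustment learned from the synthetic transitions."""
    return max(ACTIONS, key=lambda a: q[(s, a)])
```

The "bad feedback until it converges" phase is exactly the early episodes here; after enough synthetic transitions the greedy policy steers the level back toward the setpoint from either side.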
So, what's next? Where are we going from here?
Well, if you look at our Connect platform, we have the Industrial AI Assistant, which was released last year. You can ask it questions of all types. It can deal with complicated questions, and the answers may involve some engineering data and some time series data, event data, maybe content from a PDF maintenance manual or design specs, whatever.
Ask it the questions and it'll give you an answer. But now we're taking it to the next level, and this is coming in the next month or so: similar to what you saw in InTouch, on our Connect platform you can actually ask it to do something for you and it'll do it. The functional level. We're starting with dashboards.
Build me a dashboard with a pie chart and trends and this title. Very explicit. We're using explicit intents. That's coming out first. Then we're going to do implicit.
We're going to do: build me an energy management dashboard on unit one, and it figures out what sensors to use through things like inferencing and semantic relationships. It's going to figure out what needs to get related and put it up there.
And it's going to create the dashboard for you, even better in the future. But first, we've got to start with explicit intents. Also in development, and in fact tomorrow morning you're going to see some of this in a session, is agentic AI, where agents run behind the Industrial AI Assistant to do specific processes. When you do this, it creates a new unified user experience, because when you have all these agents, you don't care about the software itself; you care about what you want to do and what you're trying to solve. And that's where agentic AI really comes into play.
And so, these agents are going to be running behind the scenes.
To go a little bit deeper, just briefly, into agentic AI: you start with the human-like interface. We have the capability to deal with voice or text, and that's the, you know, ChatGPT-like thing, except we built our own because we wanted it focused on the industrial space. We built our own orchestrator, as they say. It runs on Azure, and we use Azure OpenAI services with the GPT large language model.
But we built it specifically for the industrial space.
And that's part of the patent that we've filed on this. It works with intelligent tools that work with industrial data. It doesn't deal with financial data or HR data or sales data; it doesn't know anything about those. It's specifically for industrial data. So it's looking at IoT data and various other types of SCADA data and events and MES and documentation, engineering design information, 2D and 3D designs.
But it's also working with the functionality. And when you add agents into that, the agents and the tools work with this data and with the functionality. That generative AI together with agentic AI creates a new experience, a new power. You don't have to be a data scientist to get a lot of these benefits, specifically designed for the industrial world. Let me give you a quick example.
This is a monitoring agent. And again, this is something that's in the labs. It's functional. We can actually create a monitoring agent on the fly.
Here, we are looking at data from four air-cooled condensers of a power plant being contextualized and visualized in Connect. The Unit 1 operator, concerned about a drop in the plant's performance, wants to assess the fouling impact on Unit 1's condenser and asks the Industrial AI Assistant whether any agents are monitoring this performance, to diagnose issues further.
The AI Assistant fetches the list of available agents deployed at the power plant and tells the operator there isn't one monitoring Unit 1's condenser. It also asks the operator whether one should be created and deployed on Unit 1. The operator asks the AI Assistant to create the condenser monitoring agent and train it to monitor active power of the unit and also turbine exhaust pressure, based on relevant parameters which could affect its overall performance.
The AI system then initiates the agent creation process, which can also be visualized for transparency.
The terminal logs show the condenser monitoring agent being created autonomously, using the relevant dataset, with evaluation of the underlying prediction model's losses.
The operator then asks for deployment of the agent and to run the model at thirty-minute intervals. The assistant also updates the deployed agent list and shows the activity details for reference.
Two weeks later, the operator visits Unit 1 and asks the AI Assistant to show the results of the deployed condenser monitoring agent.
The monitoring agent reveals degradation in performance of the condenser of Unit 1 due to fouling, which is shown here as the red line on the charts.
To determine whether to diagnose the issue right away or wait until the next maintenance event, the operator asks the AI Assistant to calculate the power revenue lost due to the condenser fouling and determine the cleaning payback time.
The agent generates a dashboard chart for the operator to analyse, after which the operator asks the AI Assistant to summarize the condenser issue and the inspection procedures needed to raise a maintenance work order.
The AI Assistant then generates the procedures for condenser installation, inspection, and maintenance and provides a link to the manual in the Connect dashboard.
So what you saw was an air-cooled condenser where you had turbine exhaust pressure and you had active power, and they saw there was a problem. They know it's had fouling issues and performance issues; stuff gets dirty. And so what it did was create a monitoring agent, an AI model, on the fly.
It said: create the model, use these types of sensors in the model. But it didn't name the tags, it didn't talk about the asset; it figured all that out through things like inferencing and semantic relationships. Then it created the model, training it on historical data. Initially it defaults everything, trains it, deploys it, and then they used it to monitor.
And when you saw the white versus the blue, the turbine exhaust pressure was increasing versus what was expected. And that expected value is in terms of all the different sensors at once, not just one sensor alone. That told you there was a problem.
And then of course it went through and wrote up the issue, and you could send it to an asset management system, you know, an EAM or some kind of maintenance system, and get dispatch crews to start working on it.
So, I've talked about a lot of different stuff. A lot of new things, some things that we're working on. But to bring it all together, I'm going to turn it over to Lori to talk about the Industrial AI Assistant a little deeper and what's coming new with that. Lori?
Great. Thanks, Jim. Hopefully, you all saw the keynote this morning and got to see some cameo appearances by our AI assistant. We're real excited to be bringing this to Connect Visualization.
Some of the key components of the AI Assistant include natural language interface, the ability to ask questions in natural language without needing to know much about the underlying system, the asset names, the stream names.
Generative AI is really good at content search and summarization. So being able to get summarized contents from documents, bring back nicely formatted pieces of information, help you find things real easily.
We're using citations to show our work as part of our commitment to transparency and traceability. And I think Eric hit on this really well this morning when he talked about working together with AI. We're not trying to replace the human. We're trying to help the human do their job better. So we bring these citations so they can make sure that the correct information was used to retrieve the answer.
And then the capability to generate charts in line so you can get some information the mins, the maxes, the averages and also see right in line a chart. You can see where it's trending and have the ability to click on the link that's above that little picture there and see a bigger trend and be able to see your information really quickly and easily without having to do a lot of pointing and clicking and searching.
Here's some sample questions and some ideas of some of the information that you could currently access within the AI Assistant. We have basic questions, like what are the temperatures in my wind turbine? So if you just ask that question, we don't have stream names. We don't have asset names.
We didn't even give it a time range, right? So it's looking at that information for "now". Maybe it'll say, this is what's been happening today, and give you a little trend.
We can take it a little further and do a comparison. Was my temperature higher this week or last week? And it will go and retrieve both sets of data for you and do a comparison and give you a nice summary of what's been happening. You can search for your saved content, your saved visualizations.
Maybe you're looking for a dashboard that has information about utilization or information about generation. You can ask, and it will search based on the name of the item you're looking for and bring you the possible matches of content and give you an easy link to look for it. So again, no browsing or searching. Just ask for what you're looking for and easy access to it.
We can take it a little bit further and look at things like our MES production events and utilization events, asking for information about what's been going on. We can also search for documents by name or by association to an asset. And then if you find the document you're looking for, you can ask more detailed questions about the content within the document. You can also ask questions about the content across all of your documents that are stored and indexed in the system.
You can also look for things like three d models or two d drawings. So it really brings this together and makes it a lot easier, reduces the training, and makes it easier for people to get up to speed and find that information they need to make decisions quickly.
So as an example, I'm going to show generating a dashboard. It's slightly different than the one that was on the stage this morning, but we just start by saying, can you create a dashboard for GEO one temperature and wind speed? Didn't have to give it a whole lot of information, and it goes off. It finds what it thinks is the right information, and it's going to generate a dashboard for me with two charts.
It knew to separate them based on the units of measure. So the temperature is a separate chart. The wind speed is a separate chart. I can ask a follow-up question to change the name of the title of the dashboard.
So I asked it to give it a more specific title. You can also make adjustments to colors or axis type, or change it to a different type of chart if you want to see a column chart or a bar chart or something like that. So it changed my dashboard title. And as Jim already mentioned, right now you have to give some pretty specific details.
You have to ask very specifically for what you want. But over time, we'll be able to handle more implicit requests and understand better with less information; we'll be able to generate that content for you using language.
So next, I'd like to talk a little bit about the architecture and how this all works. We start on the left. We're embedded. We're a part of Connect Visualization.
So the chat interface is a window within that tool, and you can type in a question in natural language. For example: what was the average output of Hornsea Wind Farm last week? The chat interface passes that over to our AI orchestrator, and the AI orchestrator starts by passing that information securely over to the large language model. The large language model processes it, figures out the intents, and passes that back to the AI orchestrator.
Then the AI orchestrator requests the proper tools and agents to access the information needed to answer that question. So for example, here we know that last week means the last seven days, that we need to find an asset called Hornsea Wind Farm, and that we're looking for streams called output or related to the output of a wind farm. We have a hybrid semantic index capability that not only looks for keywords but also knows how to look at context and meaning.
So it takes advantage of all of that and finds the right information for you, retrieves the data back to the orchestrator. The orchestrator will then send it back again securely.
And through the Azure OpenAI interface, we are not leaking any data; your data will not be used for training or anything like that. Then it gives you back a nicely formatted response. It might be a list, it might be a paragraph, or it might be some charts. So all of this comes together within our Connect platform, securely, using your Connect user security profile.
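That round trip, question → intent extraction → tool calls → formatted answer, can be sketched end to end. Here the LLM step is mocked with a trivial parser and the data is invented; only the flow mirrors the architecture described.

```python
# Invented sample data standing in for the historian/tool layer.
DATA = {"Hornsea Wind Farm": {"output": [410, 395, 420, 405, 415, 400, 425]}}

def mock_llm_extract_intent(question):
    """Stand-in for the secure LLM call that turns natural language into a
    structured intent (asset, metric, time window)."""
    intent = {"metric": "output", "window_days": 7, "asset": None}
    for asset in DATA:                        # crude asset-name lookup
        if asset.lower() in question.lower():
            intent["asset"] = asset
    return intent

def orchestrate(question):
    intent = mock_llm_extract_intent(question)         # 1. LLM: intents
    series = DATA[intent["asset"]][intent["metric"]]   # 2. tools: retrieval
    window = series[-intent["window_days"]:]
    avg = sum(window) / len(window)                    # 3. compute the answer
    return (f"Average {intent['metric']} of {intent['asset']} over the "
            f"last {intent['window_days']} days: {avg:.1f}")
```

In the real system the intent extraction, the hybrid semantic index lookup, and the formatting all happen behind the Connect security boundary; this sketch only shows who calls whom.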
So let's look at another example.
And this is our unified engineering.
Is the video playing? Here we go. So this is showing where we start with help, asking how to do something. We start by asking it to tell me about a symbol and a symbol template.
And it gives us an answer. And you'll notice in here that we have images embedded in that answer. So as we started working with this engineering content, we realized we needed pictures in addition to the text. So we've incorporated that.
Now we're asking a follow-up question to get a little more information about how we can create these symbols and what we can do with them. And now we're going to transition from asking how to do something to actually asking it to do it for us. So we're going to ask it to create a design for us, and it's going to invoke those commands from within the software to start the process.
We're going to ask a follow-up question to give us a little more information and open up the window properly for us.
And based on those commands, the software knows what to do and gets the process started and opens up your window.
And next, we'll be looking at a preliminary design with some piping.
So when you're looking at this, one of the other cool things you can do is take this design and ask it to highlight certain components of it. Here, we're asking it to highlight the pipes that are less than a hundred millimeters.
So after asking the question, it processes it. And then momentarily, we'll see that it highlights in yellow the pipes that match the criteria that we just set.
And not only can we have it highlight and interact with the picture, but we can also ask it to give us a list. So next, we're going to say, please give me the list of those pipes that match the criteria. And we didn't have to re-enter the criteria. It remembered.
So it's conversational, and it's remembering what you've asked. And then it gave us that list of matching pipes. So this is just the start.
This is gonna be released later this year. And, hopefully, it gives you a good idea of some of the things we can automate and some of the processes that we can assist the human with and give them the capacity to be their own CEO and make the harder decisions and spend their time doing the harder things instead of those basic repetitive tasks.
Next, I'd like to take a little more of a look into how we're doing that document ingestion, the way we're able to process the help files and the text documents. Right now, we have two sources of documents, but we'll be expanding that in the future. We have our Connect help files, including the images, as I was showing in that previous demo. Then we have text-based documents, things like PDFs that might be a maintenance manual or a procedure or some other type of document that you need access to within the system.
We have an AI assisted ingestion process that takes these documents.
It extracts all of the text and uses contextual chunking to bring appropriate groupings of that text together, along with metadata like page numbers. It groups the text into topics and subjects that will be searchable later. Once these chunks are created, we use embedding generation to put them into a semantic index for later access. And once all of these documents are processed and indexed, our API or the Industrial AI Assistant can request information from them. So it's a really cool process, and it's a great use of AI to put a lot of the information in these documents at your fingertips in a very secure and safe way.
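The ingestion pipeline just described can be outlined as: chunk the extracted text while keeping metadata such as page numbers, "embed" each chunk, and store it in an index that the assistant can search later. This is a toy sketch under stated assumptions: a real system uses a learned embedding model and a vector database, whereas a bag-of-words set stands in here.

```python
# Hedged sketch of ingest -> chunk -> embed -> index -> search. The embedding
# is a toy word-set stand-in, not a real semantic embedding.

def chunk(pages):
    """pages: {page_number: text} -> list of chunks carrying their metadata."""
    return [{"page": p, "text": t} for p, t in sorted(pages.items())]

def embed(text):
    return set(text.lower().split())    # stand-in for an embedding vector

def build_index(pages):
    return [dict(c, vec=embed(c["text"])) for c in chunk(pages)]

def search(index, query, top_k=1):
    """Rank chunks by overlap with the query (stand-in for cosine similarity)."""
    qv = embed(query)
    scored = sorted(index, key=lambda c: len(qv & c["vec"]), reverse=True)
    return scored[:top_k]

index = build_index({
    1: "install the turbine blade",
    2: "preventive maintenance schedule for the yaw system",
})
hit = search(index, "preventive maintenance")[0]
print(hit["page"])    # the chunk from page 2 ranks first
```

Keeping the page number with each chunk is what makes the later citations possible: every retrieved answer can point back to where it came from.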
So now I'd like to show a little more of an example of how that's working in practice. First, we just ask, is there a maintenance manual for Hornsea Wind Farm? Based on the name of the document and its association with the asset we're looking for, it finds the document for me and gives me a link, so I can just click on that link and open the document in our document viewer.
We quickly see that it's a hundred and sixty-four pages, which is a pretty long document. As maybe a new employee, I'm interested in getting information about preventive maintenance. So I type into the little search within the document viewer to see where preventive maintenance is mentioned. As you can see, there are a whole bunch of matches. So instead of clicking through the links one by one, I'm going to ask, what does this maintenance manual suggest for preventive maintenance?
It's going to search through all of that indexed content and, rather than just giving me a list of mentions, it's actually going to give me a summary, a nicely formatted response with lists and types of information. I'm going to follow up and ask, what are the most important items for preventive maintenance? And again, instead of just giving me a list, it's going to try to prioritize and give me the things that seem to be most critical.
Again, I get a nicely formatted response, and I notice a mention of the yaw system and brake pads. So I follow up with one more question and ask if there's anything specific about the yaw system.
And again, it's going to look at all of that indexed content and give me a nicely formatted response.
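The document Q&A flow above can be sketched as: retrieve the most relevant indexed chunks, then compose an answer with page citations instead of a bare list of matches, and refuse when nothing relevant is found. Both the retrieval scoring and the "summarization" (here just formatted bullets) are toy stand-ins; manual contents and page numbers are invented for illustration.

```python
# Hedged sketch of grounded document Q&A with citations and a no-data refusal.

def answer(index, question, top_k=2):
    """Return a formatted, cited answer from indexed chunks, or a refusal."""
    qv = set(question.lower().split())
    overlap = lambda c: len(qv & set(c["text"].lower().split()))
    scored = sorted(index, key=overlap, reverse=True)
    hits = [c for c in scored[:top_k] if overlap(c) > 0]
    if not hits:
        # Grounding rule: never invent an answer when the data isn't there.
        return "Sorry, I can't find that information."
    bullets = "\n".join(f"- {c['text']} (p. {c['page']})" for c in hits)
    return f"Based on the manual:\n{bullets}"

manual = [
    {"page": 12, "text": "inspect yaw system brake pads monthly"},
    {"page": 40, "text": "preventive maintenance requires lubrication of bearings"},
]
print(answer(manual, "what about the yaw system brake pads"))
```

The page citation on each bullet is what lets the human in the loop verify the source, which the next section returns to.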
So we think this will be really useful. You know, help files, learning how to build things, drawing things, etcetera. There's a lot of really good applications for this technology.
I want to address a topic that I think comes up almost every time we do a presentation, and that's, you know, how are we being responsible with AI? How are we preventing problems? How are we keeping your data secure? AVEVA has worked really hard. We've gathered a bunch of experts to really develop good practices and make sure that we're delivering this in a way that's secure and safe and responsible for the end customer. And the three areas we like to talk about are guardrails, grounding, and security.
For guardrails, when we're talking to the LLM, first we set the tone. We say, you're a helpful AI assistant. You only really answer questions about industrial topics. So, if somebody comes in and asks a question about their sports team or the news, it's going to say, I can't help you.
It's really focused on your industrial operations. We also want to make sure that we don't answer questions that we don't have the data for locally. So if you ask a question and there's not data or we can't find it, we don't make up an answer. We just say, sorry, I can't find the information. And maybe you can ask it a different way or give more details.
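The two guardrails just described, scoping the assistant to industrial topics via the system prompt and refusing rather than inventing when local data is missing, can be sketched as follows. The topic check here is a crude keyword test standing in for what the LLM and prompt actually do; the prompt wording, term list, and data store are all illustrative.

```python
# Hedged sketch of guardrail behavior: off-topic refusal and no-data refusal.
# The keyword topic check is a toy stand-in for LLM-based scoping.

SYSTEM_PROMPT = (
    "You are a helpful AI assistant. You only answer questions about "
    "industrial operations, using data the user has access to."
)

INDUSTRIAL_TERMS = {"turbine", "pump", "pipe", "maintenance", "sensor"}

def guarded_answer(question: str, local_data: dict) -> str:
    words = set(question.lower().split())
    if not words & INDUSTRIAL_TERMS:
        # Guardrail 1: stay scoped to industrial topics.
        return "I can only help with questions about your industrial operations."
    key = next((w for w in words if w in local_data), None)
    if key is None:
        # Guardrail 2: no grounding data means refuse, never fabricate.
        return "Sorry, I can't find that information. Try rephrasing or adding details."
    return local_data[key]

data = {"turbine": "Turbine T-1 is running at 92% capacity."}
print(guarded_answer("How did my sports team do?", data))
print(guarded_answer("What is the turbine status?", data))
```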
For grounding, I think the most important thing is that we're getting the answers from your Connect data. So if the data is not in your data store, if it's not within your environment, we're not going to give you an answer. We're not going to make something up, and we're not going to search the internet.
Every answer is traceable. This is where we're doing the citations. So again, the human in the loop can confirm that the right assets, the right streams, the right time frame were considered when you were giving this answer, or the right document, or the right visualization. So you have control. You can see how that answer was derived and make sure that it's what you wanted to see.
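Traceability of this kind amounts to returning provenance alongside every answer: which assets, streams, and time range were considered. A minimal sketch, assuming a simple local store; all field names, asset names, and values are hypothetical.

```python
# Hedged sketch: each answer carries the provenance it was derived from, so a
# human reviewer can confirm the right data was considered. Names are invented.

from dataclasses import dataclass, field

@dataclass
class GroundedAnswer:
    text: str
    assets: list = field(default_factory=list)
    streams: list = field(default_factory=list)
    time_range: tuple = ("", "")

def answer_with_provenance(question: str, store: dict) -> GroundedAnswer:
    # Answer only from the local store; never invent or search the web.
    if "turbine" not in store:
        return GroundedAnswer(text="Sorry, I can't find that information.")
    rec = store["turbine"]
    return GroundedAnswer(
        text=f"Average output was {rec['avg_mw']} MW.",
        assets=["Hornsea Turbine 4"],
        streams=[rec["stream"]],
        time_range=rec["range"],
    )

store = {"turbine": {"avg_mw": 3.2, "stream": "power.output",
                     "range": ("2025-01-01", "2025-01-31")}}
ans = answer_with_provenance("average turbine output?", store)
print(ans.assets, ans.streams, ans.time_range)
```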
And last, on security and privacy, we always follow the Connect user context. So if you don't have access to that information, you will not get answers that contain that data. And when we're talking to the large language model, we're using a secure link through Azure OpenAI. Very important: your data is safe, and we're confident that your data is not being used for training. You can feel really good that we're looking out for your best interests.
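Following the user context boils down to filtering data by the signed-in user's permissions before anything reaches the model, so the assistant cannot answer with data the user isn't entitled to see. A minimal sketch; the permission strings and record shapes are hypothetical.

```python
# Hedged sketch of user-context filtering: only records the user may access
# are ever passed on to the LLM. Asset names are illustrative.

def visible_records(user_permissions: set, records: list) -> list:
    """Keep only records whose asset the user is permitted to access."""
    return [r for r in records if r["asset"] in user_permissions]

records = [
    {"asset": "SiteA/Turbine1", "value": 3.1},
    {"asset": "SiteB/Turbine9", "value": 2.7},
]
print(visible_records({"SiteA/Turbine1"}, records))   # only the SiteA record
```

Filtering before retrieval, rather than redacting the model's output afterward, is the safer design: data the user cannot see never enters the prompt at all.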
Last slide, because I think we're running out of time: where we are with our release roadmap.
Available now as a preview in Connect Visualization: anybody who has a Connect Visualization account can get the AI assistant. Just talk to your account manager. And most of the capabilities we've shown today are available now.
Releasing soon or very, very soon, we have a describe capability, which allows you to interact with metadata. So maybe you have a limit. You can say, how long was my wind turbine operating under preferred performance? Or how long have I been over a certain temperature limit?
Questions about specific documents will also be releasing very soon, and the generative dashboards capability should be out in production next week. So very exciting stuff.
Some of the things we're looking forward to releasing later this year include agents for ad hoc analytics. I'm sure you've all heard a lot about agentic AI, and we're really excited to deliver some great customer value using that technology. Jim also showed some proof of concept around monitoring, and we're going to be using agents to help assist with digital twin building, for those of you who were in Todd's presentation before this. We'll also be expanding what types of data can be used by the AI assistant and embedding it into the Connect Visualization experiences.
And with that, that concludes my presentation. We've got some recommended sessions if you're interested in learning more about what we're doing with AVEVA AI. Tomorrow afternoon in the Connect Lounge, which I think is on this level, I'll be presenting another AI assistant demo with some different examples. So if you're interested in learning a little more, come see me tomorrow. Thanks very much, and enjoy the rest of your conference.