2025 - AVEVA World - San Francisco - Energy (Oil, Gas & New Energies)
Panel: Advanced Analytics, AI & Generative AI
This panel will explore perspectives, challenges, opportunities and current status on Industrial Data Management (sometimes referred to as a digital twin). The panelists will discuss their definitions, perspectives, and the status of their companies' efforts/progress, including thoughts on the challenges, opportunities and recommended approaches. Moderator: Cindy Crow, AVEVA. Panelists: Richard Stinson, Team Lead Real Time Monitoring and Analytics, TC Energy; Ibrahim Itani, Head of Analytics and Innovation, Marathon Petroleum; Julian Debard, Global Director of Energy, Databricks; Matt Oberdorfer, CEO, EOT.AI
Industry
Oil Gas and Energy
Company
AVEVA
Speaker
Cindy Crow
Company
TC Energy
Speaker
Richard Stinson
Company
Databricks
Speaker
Julian Debard
Company
EOT.AI
Speaker
Matt Oberdorfer
Company
Marathon Petroleum Company
Speaker
Ibrahim Itani
Session Code
SESS-105
Transcript
My name is Cindy Crow. I'm the oil and gas industry principal for Upstream mainly.
I've been in the business for over forty years, and I love to learn.
And that's what we're here today for. We used to have these user group meetings, and so what we've decided is to try to take the most highly asked topics and try to make them into panel discussions. So, I want to encourage you. This is all for you, as well for us. I've been honored by the four gentlemen on stage to be here to talk to you about the things that they're aware of and the things that they're doing in our business. But I also want to encourage you to ask the questions about things you have in your business.
So, maybe if you are going to ask a question, maybe you say something that you could share with the audience, you know, some little nugget you've learned as you go through the process yourself in these. I know that not all of you have done that, though. Please ask all the questions you like, okay?
But, today is about generative AI, AI machine learning, and the path that we're all taking along this. You know, me personally, when you think about it, we use it every day in some regard because our cell phones and our apps all have some little chat bot on there. Right? But those are not the same as industrial chat bots. So, let's think about how we can all learn to better our industry and ourselves through this process. I'm going to ask each of them to introduce themselves and tell a little bit about themselves and about the company that they work for, and some of the challenges that they've seen as well as the successes. So with that, I'm going to turn it over to Ibrahim, and I'll let you start.
I sat at the last end, so pick me first.
Ibrahim Itani, I work for Marathon Petroleum.
I'm not sure if you are aware of that brand or not. Marathon Petroleum is the largest refiner in the US for hydrocarbons.
And so we refine almost twenty percent of the products that you all use at home or in your daily life. Even if you drive a Tesla, the rubber on your wheels comes from petrochemicals.
So we refine almost three point five million barrels of crude oil every day. We are also in the transport business. So we have a huge logistical network. We have like eleven thousand miles of pipelines.
We have rail. We have Blue Marine, Brown Marine. So we are kind of in many places. We also have downstream systems.
A little bit about myself. I come from a very probably techy background. I'm a computer engineer by profession. And I've been doing analytics for probably the last twenty two, twenty three years.
And today, I lead the analytics platform division at Marathon Petroleum. So we don't have fancy titles, but if you want an equivalent, it is like a CDAO for Marathon Petroleum.
All right. Yeah. So Richard Stinson, here representing TC Energy. So TC Energy, a little bit about us. We're one of North America's largest transportation companies of natural gas.
And the way I like to say it is nearly one in every three molecules of natural gas that gets transported to you all every day in all of North America touches our system at one point or another.
To do that, we've got over fifty eight thousand miles of pipeline.
We have over one thousand compression units spread out over three sixty five or so different stations. So it's something different. Some of the tracks that you've seen this week are a plant or a couple of plants. We essentially have three sixty five plants.
They were installed anywhere back from the 1940s to this year, right? And so thinking about the complexity of monitoring remotely all of these units, different control systems, different vintages, different sizes, different makes and models.
So that's what we've we've got. You know, we also are in some of the other energy sectors, but I primarily am focused on the natural gas side of things.
My role is I have the responsibility of leading the team that does the real time monitoring and analytics day in and day out of mainly our compression fleet. So those thousand-plus units that I'm talking about, you know, it's over eleven million horsepower. I have a really small specialized team. So we have to utilize the technology, you know, maintain our diligence and standardization, kind of all that the technology enables us to do.
A little bit about my background, probably a little different than the others on stage here, is mechanical engineering by training.
Over the last four years, I've been running the team that I'm doing now, but before that, spent about a decade at a turbine OEM.
You know, started out in the field, field support, a little bit of sales. And then, you know, now I'm here. Kind of excited to to learn what you all have to talk about and learn some of your guys' issues and see where we can help out.
Thanks.
My name is Matt Oberdorfer and I'm the founder and CEO of EOT.AI. At EOT.AI, we focus on and help companies to stop pretending it's the twentieth century by basically bridging the gap between OT and IT.
We are a software company. We have development centers in India and Europe. You know, we started in two thousand eighteen. Today we have offices in Houston, in London, and in the United States.
What we really do is to create products that literally are spanning between OT and IT. And we've built a number of products that are based on the AVEVA platform, integrating with different AVEVA products.
And we actually on top of that do some AI machine learning.
And it sounds fancy because it is.
We are also a Databricks partner. Here my friend Julian on the right side will talk obviously about Databricks. But we basically, from a focus perspective, look at how you can use the OT side, structure kind of an operational abstraction layer on top of OT, and drive that towards IT. So our mission is really to empower the operational side to drive innovation.
Because let's face it, the knowledge about the assets, the knowledge about refineries, transportation of gas, or oil, or whatever, typically is on the operational side. Right? So how can we actually take advantage of that? We have to empower really the operators, the people with that knowledge to drive operation and bring it over to the IT side. So that's kind of where our software comes in, and that's what we do.
Cool. Thanks, Matt. Yeah. Good morning, everyone, good afternoon.
Thanks for being here. And thanks to Cindy and AVEVA World, right, for having us on stage and talking about the important stuff, which is how do we move to the twenty-first century in the industry space as well. Right? So my name is Julian Debard. Not unlike Richard, I actually started as a field engineer. I was an offshore field engineer in the oil and gas space and grew up through the ladders of the companies, and I spent eighteen years in operations and business management.
And eighteen years into my career, I was really wondering what is the next technology that's going to have a true impact on the future energy systems of the world. So I really didn't know what to choose, right? Do I focus on carbon capture? Do I focus on hydrogen?
Do I focus in making oil more efficient or gas more efficient?
And I really didn't know what was a silver bullet, and the answer is because there is not one silver bullet, we need all of this. But I realized that the technology that was a good lever for all of those other technologies was digital. So I really pivoted my career five years ago, where I moved to the more digital space and moved to AWS. And I've been with Databricks for three months.
The reason I introduced myself this way is that a few years ago, I had really no idea about data and AI. And I wish I had the opportunity to ask all the questions that make sense on how do we translate what all these data and AI guys talk about into the real life of how do we improve our industry and our efficiency. So please ask all the questions.
I wish I had that opportunity a few years ago. It took me a few years to learn. I still don't have all the answers, but I'll certainly try, right? So at Databricks, we're only a twelve year old company, but we've grown super fast, right? So we are now at twelve thousand customers.
We're running at three billion dollars in annual revenue.
And in November, we had our Series J round, where we raised ten billion dollars plus four billion dollars from bankers. So while we're still calling ourselves a start-up, and we certainly want to act that way and be agile, we have a very strong backbone, and that also means a lot of people do believe in what we do, right? So through your questions, I hope to detail a bit more what we do at Databricks.
But clearly, in the industry, all the cloud providers are investors in our company because they truly believe that what we have is game changing. So look, it's super exciting. If you are in this room, it means that you are at this intersection between energy and utilities on one side and data and AI on the other. Energy and utilities is probably the biggest challenge of our generation, if not humankind, when it comes to developing the energy systems of the future. And like I said in my introduction, data and AI is the single lever that can really help all of the technologies across that value chain. So looking forward to your discussions.
Thank you all. We're gonna start. I'm gonna ask our operations companies to begin and share a story around some of the implementations that they've had with advanced analytics and AI, and what was a great story, and maybe something they might have taken away or a little nugget that we might all take away today. So I'll start with Richard since I picked on you, Ibrahim.
Alright. No. Appreciate that. So, maybe I'll take a step back and talk about a bit of the journey of how we've got there with the program that we have.
So, our asset surveillance system that we're using really started fifteen to eighteen years ago. We built a platform with a really young, really brilliant team, and kind of were following the trends at the time, and a few years in, we were really struggling to see the value in it. Right? And so, you know, from what they learned from that, from attending conferences like this, learning in rooms like this, they actually kind of got the go-ahead to burn it down and start over.
I think that's something important that we need to remember just because of where you're at today doesn't mean that you can't be somewhere different tomorrow or starting on that journey tomorrow.
So burned it down, kinda started over, and really focused on using the PI suite of tools as really the base for our digital twin, and building analytics and visualizations on top of that.
We've become, you know, quite successful. I think we heard from one of the industry professionals who gave a talk earlier this week that it was our twentieth or twenty-first presentation that we've given at AVEVA World or PI World. So I encourage you all: lots of great videos out there. I don't have time to go through the entire journey, but we've talked about it quite a bit. And it's been really exciting, and I think we're at the forefront of another journey.
But, you know, essentially our main focus is anomaly detection, something that we've really focused on. Once we have that base, right, building those analytics, it's really a stacked complexity approach.
So we don't need to jump to the most complicated, highest level of analytics to solve every problem. Right? We need to find, you know, what value is it that we're chasing, and how do we accomplish that value, really in the the simplest, most effective, quickest way we can, and if that's not working, be able to quickly advance our analytics.
So like I said, we've called that the stacked complexity approach.
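As an illustration of that first rung (a generic sketch, not TC Energy's actual analytic; the function name, window size, and threshold are made-up assumptions), the simplest useful anomaly detector is often just a rolling z-score on a sensor tag:

```python
import numpy as np

def rolling_zscore_anomalies(values, window=20, threshold=3.0):
    """Flag points deviating more than `threshold` standard deviations
    from the trailing-window mean. Deliberately simple: the point of a
    stacked-complexity approach is to escalate to heavier models only
    when a rung like this stops delivering the value you're chasing."""
    values = np.asarray(values, dtype=float)
    flags = np.zeros(len(values), dtype=bool)
    for i in range(window, len(values)):
        w = values[i - window:i]
        std = w.std()
        if std > 0 and abs(values[i] - w.mean()) > threshold * std:
            flags[i] = True
    return flags

# Toy sensor trace: steady around 100 with one injected spike at index 50.
rng = np.random.default_rng(0)
signal = np.concatenate([100.0 + rng.normal(0, 0.5, 50), [115.0], np.full(10, 100.0)])
flags = rolling_zscore_anomalies(signal)  # flags[50] should be True
```

Only when a rung like this demonstrably misses value would the next level, pattern models and then agentic tooling, be justified.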
I think that next level of complexity is where we're at today. And, you know, maybe instead of a win, I'll focus on what we're looking forward to. And kind of the next piece is being able to take over a decade of all of this knowledge, having a program where we're using different levels of analytics. We've tracked how well they've succeeded in gaining us that value.
We've taken that input of what change those have driven and what actions we were able to take from that, and now we're moving into the agentic space and taking these agents to learn from that. I mentioned we have a lot of equipment that we monitor, very varied. I think we've standardized enough to be able to do that, but we're being constantly challenged, right, to grow beyond our rotating equipment to our measurement sites, to our balance-of-plant equipment. And when we get that ask, we're not also given double, triple the team to do so.
So we want to layer on the next level of analytics in the AI space to help us take all of that digitized tribal knowledge that we've collected over the past decade plus, read through it, and very quickly gain valuable insights. That will allow my small specialized team to be validating some of that work and moving forward, getting a lot more value with the same people. We'd really be treating those AI agents kind of as our coworkers, able to read through and understand all of this knowledge that we've collected and built over the years much quicker than, you know, even the most dedicated and driven intern can.
Right?
So that's kind of where we're going. You know, I think some of the things that we've learned that made us burn it down: not overcomplicating it. You know, really make sure that you know that value that you're looking to reach, whatever that is for you and for your company. And don't go backwards, developing a technology and then trying to find value from it.
It's really where is your value? What technology is gonna support that? And then even more importantly, making sure you have the right people and process to support that technology and drive towards value. You kind of need all three pieces to make that happen.
Very good.
Ibrahim. Good. Yeah. Good points, Richard.
I can certainly relate to what's happening at TC Energy, because I think we went through the same journey as well. Where we differ is on the approach itself. So what we have seen, and probably this applies for many industrial and/or manufacturing type companies, is that we are all process-driven companies. Everything that we do has a process, a start and an end, people responsible who own the process itself. So over time, we have reached a level of operational excellence that people are proud about.
But at the same time, that remains siloed. So for example, at Marathon, we have fifteen different refineries. You can say that these are fifteen different businesses at the same time because probably they have their own P and L as well. So people are worried about what's happening in this specific geography or domain or sub domain or function.
So this, in a way, probably kept things in silos and limited collaboration, so people couldn't learn from what's going on. And that's not because of egos, quite honestly, because at a certain point, technology was a limit. How are you able to bring the data together? How are you going to work on the conflicts in the data? And at the same time, how are you going to put, like, an uber-process in place to put the data together and process it? Right?
So now if you look at technology, technology is becoming ubiquitous at this stage. It's available. Right? You can use technology to do whatever you want, but we still neglect how people, tools, and systems, right, come together to make this happen.
So from a productivity point of view, we have plenty of examples of how we use statistical analysis or AI or any of the new agent-based systems, using foundational models, whether small or large, it doesn't matter at that stage, to benefit the organization. Just to give you examples: you've seen many of the demos about agents using documents to look at standards and make them available for people to use in the field. We use computer vision to detect flares and spills.
Probably my colleague will agree with me: our license to operate is safety. It's not how good we are at providing products. If we are not safe in what we do, whether it's people, process, environmental, so you check all of those, we will lose our license to operate.
So for us, this is extremely important. So we do use technology today so that, you know, we are able to monitor and check on things at a major level and in areas that used to be hard for us to reach. So for example, let's say that you have a pump on a pipeline in the middle of nowhere. If we see that the pressure went down on that pipeline, you know that probably there is a leak somewhere. So usually you dispatch someone to drive four hours with a truck to go and see.
They'd be checking points where we have sensors on them: is there a leak or not? So now we utilize remote sensing technologies to do all of that. So here, first of all, you're not dispatching someone and only knowing four hours from now whether there is a leak or not, and you're also not dispatching heavy machinery, which we have to send because they have to fix it if it's there.
And this puts our people in harm's way when we dispatch them. So we use technology to minimize the impact on people, on the environment, wherever we can. Another technology that we use: we do aerial right-of-way detection. Before, you know, we used to fly planes with a pilot, and they take video images.
They bring it back to our lab or their lab. They analyze the data after that, two, three, four days, and they say, you know what, there is something that needs to be fixed on the right of way, because, you know, it has to be clear from source to destination. So now we are doing that analysis in almost near real time, so that, you know, people will be able to go and act on it fast.
Right? So now imagine that you have a pipeline that's close to, let's say, a school, or it's close to, like, a train station or what have you. Right? So unless you act fast enough, you are putting a lot of things into jeopardy.
Internally, I mean, the oil and gas business really runs on statistics, because we always forecast what we need to produce based on input that we get. But now we take it to a much bigger level than forecasting. Instead of doing it for a short period of time, we start working on market macros to see where we can take the product and predict what will be coming from macroeconomics, geopolitics. And probably if you just open the news, you know how that impacts production.
Right? So we are using all of those technologies. But where we are going is a process of, like, hyper-collaboration that will cut across systems, people, geographies, and a multitude of sites all at the same time.
So, I mean, what did we learn? You ask, you know, is there something that we learned from or something that did not work? So quite honestly, things fail, and if you are in the AI business, and I've been doing it for quite a while, not everything will go to production. So you have to go in knowing that there is a lot of trial and error that has to take place.
Even if it works, it might not go to production. Right? Because you don't have control over the whole process. But any attempt is a learning attempt.
So the next iteration of whatever we do is going to be more solid. We learn from what we have done before. We enhance. We redo. So our chances of succeeding always improve.
So there are a lot of partnerships. I'm not sure about this room, but probably we are, like, the oldest customer of AVEVA. Data is really, I would say, the most important thing that we have. If we don't have data coming from our sensors, we don't know what to do.
So we've been a client of AVEVA for thirty-two years. And we probably have all of their software: Wonderware, everywhere, and something in between. So we are proud about, you know, being able to be in the middle of something that everybody needs, which is energy, and at the same time in the middle of the intersection of technology and advancement in technology. Because without it, we will not be able to squeeze the lemon more than what it is, because of the performance excellence that I started from.
Maybe I took too long.
No. Not at all. I just wanted to say, he's a little humble. They just won an innovation award from the AFPM. They'll be awarded next week for some of their work around standard chat RSP. Can you say a little bit more about that?
Yeah. Yeah. So that's part of our kind of environmental drive for excellence. So this has just been announced.
We are going to receive the award on May fifteenth. I'm not sure if you guys know AFPM; it's kind of the largest entity that works on oil and gas. It's a consortium.
And we use GenAI tools, actually. So just a little bit of background: if you think about a manufacturing site, for us, we call it a refinery.
So we have fifteen of those. Each refinery has tons of standard documents for equipment. So if you want to have a scale, we have almost four point five million tags in our operation.
And imagine how many pieces of equipment those connect to. And each one of those pieces of equipment has historical and versioned information and documentation that a human brain cannot really even get, like, one per thousand probably of the information. And there's kind of legacy knowledge, and people move on, and you have a lot of notes and stuff. So we put a system in place that is really kind of based on a large language model, to bring this information together and make it dynamic for people to ask, to chat with it, basically, and put things in context and give references as well.
So this is one application from a suite of applications. Probably we won the award because we were the first maybe to do it. Right? Now probably if you walk the floor, maybe six out of seven vendors have a similar application right there.
But remember that unless you kind of try to do something new, you're still going to get the same results. And this is what excited people when we did it. We have many new things that we are doing now, way beyond that. So we're going now into the multimodal agentic space.
Probably we'll speak about that in a minute, which is also worth, talking about. So thank you for the mention. Sure.
I'm gonna switch a little bit. Matt and Julian, if you all would talk about, you know, you don't need to say the customer name, but talk about some implementation you are proud of and thought went really well. I'll let Julian start. Okay.
Yeah. So at the end of the day, and back to your question earlier as well, what works is keep it simple. Right? What doesn't work is making it complicated. We're pretty good at making things complicated.
We've just talked about Gen AI and agentic AI. They are fantastic tools. There's still a lot of our use cases that can be fixed with traditional AI.
So some of the use cases that we see working very well are with these customers that keep it simple.
I'll get a bit more into how you do that from a data and AI standpoint. But an example is a large tire manufacturer in France. Some of you will know the name and it's pretty public information that we work with them.
But what they've done simply was connect twenty of their plants to a single data catalog where AI was able to compare these plants on how much energy they were consuming.
And similar plants with similar machines and with similar outputs were sometimes having a 10x difference in their energy consumption. And it didn't take long for the AI to be able to point to the right equipment on why that was happening, right? They were not having the same maintenance schedule, for example.
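A toy sketch of that kind of cross-plant comparison (the plant names, figures, and the `energy_outliers` helper are all hypothetical, not the manufacturer's actual system): once similar plants report into one shared catalog, a gross energy-intensity outlier is easy to surface.

```python
import statistics

# Hypothetical per-plant figures pulled from a shared data catalog:
# energy consumed (MWh) versus units of comparable output.
plants = {
    "plant_a": {"energy_mwh": 1200.0, "output_units": 10000},
    "plant_b": {"energy_mwh": 1150.0, "output_units": 9800},
    "plant_c": {"energy_mwh": 11500.0, "output_units": 10100},  # ~10x intensity
}

def energy_outliers(plants, ratio=5.0):
    """Flag plants whose energy-per-unit exceeds `ratio` times the fleet median."""
    intensity = {p: d["energy_mwh"] / d["output_units"] for p, d in plants.items()}
    median = statistics.median(intensity.values())
    return [p for p, v in intensity.items() if v > ratio * median]

print(energy_outliers(plants))  # -> ['plant_c']
```

Flagging the outlier is the easy part; the follow-up that the panel describes, tracing it back to maintenance schedules or specific equipment, is where the domain knowledge and richer models come in.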
So these are simple approaches of things we can do, right? How do we derive value? If you are a field engineer like I was twenty years ago, you go and try to fix a piece of equipment, and you probably go to that site three or four times, because first you go find out what's wrong.
Then you come back, you spend a few days finding out what are the spare parts that I'm going to be needing and the tools.
Then you go back, you realize that you don't have all the skills to do that. So you need someone else to help you. So you come back and you find the schedule and you see how it takes time, right?
Today with AI and Gen AI, when a piece of equipment has not even failed, but you realize through predictive maintenance that it needs maintenance, in a blink of an eye you know what spare parts you should be taking, who are the people in your company that have the skills to do that, what is the route to take to go there, what is the weather forecast so you go at the right time, right? And when you're on-site, with natural language you could be literally asking your machines and your data, how do I do that? I see this piece leaking, what should I do?
And by digging into your proprietary data and proprietary knowledge, the system is able to tell you this is what you should be doing. Tighten that bushing and it should solve it. And if it doesn't, do this and that, right?
So these are examples of the things that work. And on the exact same example, and I think you guys touched on it, if your approach is to think, oh, agentic AI is the latest shiny stuff, I need to find a use case to use it, it's probably not going to work.
You need to start from the use case, what your problem is, and try to fix it. How do you do that?
Keeping it simple, you keep your data in order. That's really what we say at Databricks, right?
The AI challenge is not really an AI challenge. The AI models that we use today were started twenty years ago, maybe more.
What we can really do today is train them and make them connect data sets that are larger and larger and larger. So what you need to do is have your data estate in order, where you can easily access it. And the last point I'll make is don't jump on the last shiny object or last shiny solution or the last shiny software and plan all your data estate and AI on this, because the only thing that's certain is that there will be disruption and there are going to be better tools coming in. So set up your data whereby when there is a new tool coming, when there is a new large language model coming, you can very easily start using it, as opposed to having to re-migrate all your data to something else, right? I'll stop there.
Oh, my turn. Please. So I'm going to do something very pragmatic and give you a couple of examples of use cases.
And I just want to highlight what Ibrahim and Richard said about, you know, you kind of scratch everything and start from scratch, and the concept of failing fast. Right? One of the things is, in this whole AI space, you gotta take the risk to try something out and freaking fail and do it again. Right? Now in terms of use cases, what we see, just as an example: our software today is used by a lot of Houston-based companies, including BPX, NOV, Hill Corp. These are all oil and gas companies that use our stuff in production.
So we have seen many use cases. Right? And they typically fall in two categories. Right?
Old school is reactive stuff. Detect an anomaly, react. Something went off, there's a leak, there's whatever. And you gotta do something about it.
The other one is preventive stuff. We predict the future. Right? Now what we see is that it's kind of slowly shifting as the AI and machine learning models become more and more sophisticated, you can actually move from the reactive anomaly detection to more like, oh, we know that anomaly is gonna appear in the future because we already know the history of it.
Right? So if you, in your mind, kind of just think about that as reactive and that as kind of proactive, we see kind of a shift over there. Now, as Julian said, it never starts with the AI model.
We have, for instance, a very simple table that says, okay, these are the top failures of artificial lifts, or, you know, a gas pipeline, or whatever.
So the top problems, top failures. And from the use case, you have to match what is the right machine learning or AI model. You know, is it always LSTM, which is long short-term memory? Is it a recurrent neural network?
Is it ARIMA? There are so many different types of models that you can actually apply. Now, for what we have here in the AVEVA world, there's a lot of time series data, right, that comes from equipment. Most traditional neural networks don't even handle time series.
Right? So for these use cases, you have to have very specialized neural networks that actually can do that. Most of them are recurrent. So you have a neural network, and the output goes back into the input to actually make it happen within the approach.
So machine learning models, the classic ones as well as the newer ones, are great, but it never starts with, oh, we have some shiny new stuff, like Julian said, and then let's find a use case for that. It's the other way around. The use case always leads.
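To make the "recurrent" idea concrete, here is a bare-bones sketch (untrained toy weights, not any vendor's model): the defining feature is that each hidden state is computed from the current input and the previous hidden state, which is what lets the network carry time-series context forward.

```python
import numpy as np

def rnn_forward(sequence, W_x, W_h, b):
    """Minimal recurrent forward pass: the hidden state h is fed back
    into the next step, so later states depend on earlier inputs."""
    h = np.zeros(W_h.shape[0])
    states = []
    for x in sequence:
        h = np.tanh(W_x @ np.atleast_1d(x) + W_h @ h + b)
        states.append(h.copy())
    return states

rng = np.random.default_rng(1)
hidden = 4  # toy hidden-state size
W_x = rng.normal(size=(hidden, 1))   # input-to-hidden weights
W_h = rng.normal(size=(hidden, hidden))  # hidden-to-hidden (recurrent) weights
b = np.zeros(hidden)
states = rnn_forward([0.1, 0.5, -0.2], W_x, W_h, b)
```

LSTMs add gating on top of this loop to keep long-range context stable, while ARIMA takes a purely statistical route; which one fits is exactly the use-case-first question the panelists describe.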
Regarding Gen AI. Right?
So what Ibrahim said about document scanning and all that stuff. What we found is a very simple Gen AI use case that goes beyond the typical chatbot where you're like, how many, you know, oil wells do we have? You know, where everybody's kind of rolling their eyes like, we knew that, it's on a freaking HMI board anyway, or everybody in the company should know that. Something I think that most people know about but kind of overlook is log files. Like if something goes wrong, you know, and you go into an application, let it be PI or something else, you have tons of log files. Right? And they're all text.
If you think about it, right, LLMs take text. So if you train LLMs, you can train them on log files.
And it could be any application, could be our applications. And we, that's actually part of what we do. We have what we call the knowledge builder. You can actually take our tools and train them on log files.
But then you can say, hey, why did it fail? When did it fail? What were the root causes? Was it a disconnect of data?
Was there a downturn in the data quality, the data volume? All these things where you typically have to kind of scroll through log files and try to figure out when stuff happened. Let an LLM do that. That's a perfect case.
What Ibrahim said also: if you have documentation that's PDF files, images, tables, LLMs can learn that. So if you have equipment, you have the data, you have the log files, and you have that coming in with RAG models and agentic RAG approaches for the LLM, then you not only get a trained LLM that knows old stuff, but it can answer questions about stuff that happens right now. So that's a use case that works.
And so again, even in that use case, the use case leads to the LLM, not the other way around. Just because we can have, you know, a Gen AI cool thing, that's not how you should get started. And I'll leave it at that. Back to you.
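As a toy illustration of the log-file idea (the log lines and the `retrieve` helper are invented; a production agentic RAG setup would use embeddings and an actual LLM where this sketch uses plain keyword overlap for the retrieval step):

```python
# Hypothetical historian/application log lines.
LOGS = [
    "2024-03-01 02:14 ERROR historian: connection to node7 lost",
    "2024-03-01 02:15 WARN ingest: data volume dropped 80% on feed A",
    "2024-03-01 02:16 INFO scheduler: nightly rollup completed",
    "2024-03-01 02:31 ERROR historian: reconnect to node7 failed",
]

def retrieve(question, logs, top_k=2):
    """Rank log lines by word overlap with the question; in a real RAG
    pipeline the top lines would be handed to an LLM as context."""
    q = set(question.lower().split())
    ranked = sorted(logs, key=lambda line: -len(q & set(line.lower().split())))
    return ranked[:top_k]

context = retrieve("why did the connection to node7 fail", LOGS)
# context now holds the two most relevant lines to put in the LLM prompt.
```

The retrieval step is what lets the model answer about "stuff that happens right now": the prompt is built from the freshest log lines rather than from whatever the model was trained on.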
Thank you. Well, now you've heard a little bit about the rest of the team here. Sorry.
Someone caught my attention over there.