XR training and the future of industrial work
Posted: May 02, 2025

Attempts to meld virtual spaces and physical reality so far have struggled to take hold. For all the ingenuity behind projects like Google Glass, the metaverse, and the Apple Vision Pro, they’ve often felt like technologies in search of a purpose. Professor Nick Kelling, an engineer turned researcher, may have found one. Rebecca and Joe spoke to him about the limitations and possibilities of extended reality in the industrial sector.
REBECCA AHRENS
I’m Rebecca Ahrens.
JOE RENSHAW
I’m Joe Renshaw.
REBECCA AHRENS
And this is the Our Industrial Life podcast.
[Intro music]
REBECCA AHRENS
Alright. So Joe, have you heard of simulation theory?
JOE RENSHAW
I don't think so.
REBECCA AHRENS
The theory is basically that any civilization that becomes advanced enough to create a realistic simulation of the world—in particular, of conscious beings—will do so if it has the capability. And those simulations—since they'll still be simulations of conscious beings—will likewise have the capability, and will in turn create simulations of their own. So you get this propagation of simulated realities such that, for any given conscious being, the likelihood that you are in a simulation far exceeds the likelihood that you're in what's called “base reality.”
JOE RENSHAW
Wow.
REBECCA AHRENS
You’ve not heard this before?
JOE RENSHAW
No, I was, I was not familiar with that. I don't know if I'm more relaxed about the state of the world or less right now.
REBECCA AHRENS
It might be because I live in the Bay Area. It's like a very in vogue way to conceptualize things.
JOE RENSHAW
Maybe it's not quite so in vogue in the suburbs of Cambridge, where I am at the moment.
REBECCA AHRENS
Perhaps not. I mean, I hear it getting thrown around recently a lot, too. Just any time something kind of bewildering happens in the news, people are like: oh, evidence that we are indeed in a simulation.
JOE RENSHAW
Yeah, it does sound like a combination of the Matrix and the multiverse.
REBECCA AHRENS
Yes. There's actually a really good movie called World on a Wire, and it's basically about a research institution that discovers that there are these layers upon layers, like nested dolls, of virtual reality. All the conscious beings—they're called “identity units” in the movie—believe that they are in a real world, but in fact they're just the simulation of some higher reality. And the implication is that it goes on forever.
So anyway, today we aren't going to be talking about anything quite that radical, but we are going to be talking about simulation and virtual reality and its effects on the mind and things like learned behavior.
JOE RENSHAW
That's right. Yeah. You and I recently sat down with Professor Nick Kelling, who's a professor of psychology at the University of Houston-Clear Lake. And we spoke to him about his research into VR, AR, XR—all that stuff—and the technology's potential, particularly in the industrial sector.
REBECCA AHRENS
Right. We wanted to understand how things like simulation and VR might become a critical part of things like job training within the industrial sector in the coming years. So what is the state of the technology right now? And what are its limitations when it comes to training?
JOE RENSHAW
So without further ado, here's our conversation with Professor-slash-“identity unit” Nick Kelling.
Interview part 1
REBECCA AHRENS
Professor Nicholas Kelling, welcome to Our Industrial Life.
NICK
Thank you. It's a pleasure to be here.
REBECCA AHRENS
So I wanted to start by just getting a sense of your research background. How did you become interested in studying extended-reality technologies, and particularly their effect on human performance?
NICK
Sure. So, actually, I originally started out as an engineer and, when looking at design and such like that, one of the most fascinating components of that, to me, was the interaction that we were fostering with the development of technologies. And so understanding how the human and the machine interact with each other became very critical. And so there was really a research interest for me there.
I then kind of made the transfer over to human factors. And from there it really became clear to me that one of the most intriguing aspects that we have with technology is that it's always so rapid that it's hard for research to keep up when we're really pushing the edge on these technologies. Sometimes, I think, we simply get ahead of our skis a little bit. And we develop technologies and we're looking for a purpose for them.
And so looking at something like XR technologies and VR and AR, we had a situation where we really had exponential growth in the capacity and the availability of systems, but it kind of stayed in games. It was this, you know, entertainment experience. And so we were trying to figure out: well, where else can we use this kind of technology to address issues that we have? And one of the areas that interested me was training.
JOE RENSHAW
And associated with all this tech are a bunch of different terms that I'm not sure I fully understand. So what is the difference between augmented reality, virtual reality, mixed reality...
REBECCA AHRENS
...extended reality: so many realities to keep track of!
NICK
[Laughter] It's...it's a lot. It's a whole multiverse that we have, apparently! To kind of give a global view, we kind of think of it as a spectrum.
So on one end, we have virtual reality, which I think more people have experience with now. They've kind of seen it or experienced it. In VR, essentially, we are completely controlling the environment they're in, right? You're giving them a completely different visual scene. This is done through headsets. And you, as a developer or whatever, can control that entire environment. You could be on a nice beach in Tahiti, you could be in space—you can control it however you want.
Then on the other end of the spectrum, we think of augmented reality. And that's really trying to find a way to add information to the reality that we normally exist in. And that can be headset based, right, with things like the Apple Vision Pro or the HoloLens. And it's really looking at how we essentially supplement context or information in reality, using some external means, right? Some other kind of visual or auditory means.
Now that can also be done through phones, right? We've probably all seen, you know, the QR codes or maps that you can do. You can hold your phone over and it will pop up things, videos or other things. That also falls under that umbrella of augmented reality.
Then we have mixed reality, which is kind of our middle point. And that really is the idea of somehow combining both: some level of the reality we normally exist in, and also, you know, this somewhat artificial or virtual one. That could be done in a couple of different ways. One that's gaining a lot of prominence is the idea of, for example, creating a scenario where somebody is examining a washer and dryer and wants to see it in their house.
We can also use it in more advanced ways, in which if we're looking at designing a control panel, for example, well, physically, we could just have a blank control panel in front of them, a table even. And then we can just overlay what that would look like under design X or design Y, so you can still interact with it. You can still do things. And although the physical representation of space is there, the interaction becomes virtual.
And so that's what we kind of think of as this mixed reality. It's kind of the blending of the two.
And then extended reality is just the umbrella. We kind of take a big pencil and just circle everything that we do that kind of falls into either augmenting or creating a virtual environment. All that falls under this XR, extended reality, umbrella.
JOE RENSHAW
So I was going to ask that—I can obviously see the benefit of VR training in kind of high-stakes environments: pilots, surgeons, whatever. But I guess I'm curious about the effectiveness of it in less high-stakes environments, you know, or in a more general environment. And whether it's less effective, or whether it could be more effective, than normal reality.
NICK
It's a great question, right? Because a lot of times we think of, you know, using these advanced technologies in these kinds of high-risk situations.
The other benefit of it that I think exists universally is, one: access. Right? The ability to train anywhere, in any environment is groundbreaking, right? The idea that, from a workforce standpoint, I could train somebody on a system that they've never seen before, that they can't just, you know, get in their car and drive and go see it, is huge. Especially in environments in which we're collaborating at distance more. Those are situations in which I think VR also has a really good niche that really isn't matched by anything else.
You know, the idea that, whether it be onboarding a new employee on system X, Y, or Z, or even just integrating a new control system or a new version, you can have the individual experience it and train on it before it's implemented is great from a training aspect. You're not wasting time if you have to bring in a new control system or launch a completely different interface. We could actually have people working in it before it's done, not just in the sense of getting user feedback, but being able to utilize it day one when it gets implemented, reducing downtime. Those are all universal aspects that I think are untapped at this point, but that really have some big benefits.
REBECCA AHRENS
Can you just say a little bit more? I'm struggling to, like, visualize an example where VR in particular is useful beyond, like, other types of virtual training. Like, you could do, you could imagine, like a 2D or 3D training module that you could still do from wherever, but doesn't require that more, like, immersive experience.
Do you have an example of the kind of thing you're talking about?
NICK
Yeah, I think, especially in systems where, even if we have a control panel, sometimes it's hard to connect and create a mental model of what that control panel is doing. Right? So the idea that, you know, it's controlling some tank valves or, you know, a piping assembly or electrical distribution: being able to present that in a way that shows the direct interaction, because you can control the environment in a virtual sense. Essentially, it's not just having that control panel, it's having the plant image that shows exactly what it's doing.
REBECCA AHRENS
Like, you could put someone in the control room and have them see: oh, when I turn this knob, I look up and I see something happens to the machine. Or if I flip this switch, I see the thing happen to the machine, you know, on the other side of my virtual glass window, something like that.
NICK
Yeah.
REBECCA AHRENS
Yeah. Okay.
NICK
Yeah.
And even beyond that: I've seen some really unique work that is looking at audio representation. We spend so much time thinking about the visual aspect of VR and AR, but there's also the auditory. Even the sounds a machine makes: that's feedback, right? And so the idea that you know it's in the, you know, bottom right corner that this machine is making a certain noise, we can represent that in VR.
That's a bit harder when it's on the screen. And so those kinds of aspects are again one of those things that we just have more control over now. And you can start integrating that and kind of creating those environments.
REBECCA AHRENS
And so can you talk a little bit about this concept of immersion and the relationship that it might have to your area of research in particular, which has to do with learning and training and that kind of thing?
NICK
Yeah. So we think of three major elements when we're dealing with these kinds of environments. We think of immersion, which is the idea of how well it represents that environment. So for example, if it's a space environment, how well does it represent the way space works? Or if I'm looking at a situation of training, or learning, how well does it represent how that task actually exists, or that job actually exists, in the world? And that really gets to be a question of what the technology can do. Right? We're getting headsets that can do near-retinal image quality. You know, the processing in computer graphics is amazing in terms of the way water looks and all these other kinds of things. That's really a measure of the technology.
Then we have presence, which is another term that we use. Presence is a measure of whether that person has a feeling of being there. Now we're talking about something that's a bit more subjective, more the cognitive, psychological aspects of, you know, whether they're convinced they're there. If they're on a plank, you know, 400 stories up on a building, standing outside, does it increase heart rate? Right? Does it, you know, create fear?
REBECCA AHRENS
Right. And it might require more than just really accurate visual input, right? You might also need some haptic experience.
JOE RENSHAW
A big fan.
REBECCA AHRENS
Yeah.
JOE RENSHAW
A big fan blowing wind at you.
REBECCA AHRENS
Yeah. Yeah a big fan.
NICK
Right. Exactly, exactly! And so how do you convince them that they feel like they're there so you can get more natural reactions and involvement in that?
And then we have, especially in training and education environments, this idea of flow, which is really looking at whether, when we're doing the task or the job or that interaction, it really gets the person engaged in the same way as if they were doing that kind of task in real life, without that assistance. So those are the three terms.
And so it's really a multi-tiered problem where you're trying to get through all of them. Just because you make something look super realistic, you can actually push people away on presence, because it almost gets to this, “well, I know it's not this good,” or, you know, “I know I'm not in space.” How do you still fight those kinds of things?
JOE RENSHAW
And where does the common complaint that people have with VR—of it giving you headaches or making you feel sick—like, where does that fit into this picture?
NICK
This was, I think, a really big challenge when we first started looking at VR. Because there's such a disconnect, in that we were really looking at the visual aspects of VR. And again, with limitations at the time, for computer graphics especially, it was hard to do all the technical stuff.
And so it was causing what we would refer to as VR simulator sickness. And it really was a disconnect in the capacity of the system: especially when turning your head really quickly, it was hard for the computer to keep up.
Those aspects have gotten immensely better. We usually don't see those as much. But the challenge we're trying to get through now is that we're not used to, and have not evolved to be used to, having a screen so close to our face. You can even imagine just working at your desktop at work: every once in a while you just kind of look away and blink to, you know, reset your visual system so it's not getting fatigued. But now we're in VR, where there is no looking away. How do you manage to keep people involved in something that long when they don't necessarily have those classical reliefs that we would use in other situations? And so again, those are new problems. You know, as the technology gets better, we've solved some and we've made some new ones. That's the whack-a-mole game that sometimes we're playing with it.
Interview part 2
REBECCA AHRENS
So when it comes to learning and training specifically, given what you've just been saying about certain issues or challenges or limitations, what would you say you've been finding overall in some of the studies you've been conducting? Where are you noticing patterns of limitations, or are you noticing patterns? Are there specific things that it seems like VR is not helpful for? And then conversely, are there things that it does seem really well suited for, maybe better than doing that kind of training in reality?
NICK
There are a lot of situations that are just seemingly tailor-made for VR training. And they're really in environments where there's a high level of risk or they're very complex environments, in which that ability to control every aspect is just immensely powerful.
REBECCA AHRENS
We've had flight simulators forever for a good reason.
NICK
Exactly! We can think of it differently. Before, in classic training, you know, you would kind of build up complexity levels and things like that. But that's sometimes a challenge on the other end, where if we piecemeal it together, it sometimes makes it much harder later on in the training program to integrate those pieces.
And so sometimes there are situations in complex environments where kind of throwing somebody in the deep end, right, can be beneficial if it's structured well, where they can see where all the interactions exist in the task they're doing. So, you know, somebody—a firefighter who's breaching a building searching for survivors—how do you deal with both the searching part? Right? How do I find people who are at risk? And also the situational awareness of knowing where the dangers are that I've got to recognize as I navigate through this situation. Do I have to worry about a part of the building collapsing? Do I have to worry about these aspects? Integrating those later on can sometimes be a challenge.
So where we can use VR is being able to step up complexity in a much more controlled way rather than just separating parts of the task. And so that's where I think VR has a really big advantage, because we just have complete control over that situation.
On the other side, we also have a limitation that makes that a challenge, in that haptics—the idea of how we physically interact—is just fairly hard to represent in all the different ways that we, you know, interact in our environments. And I like using the example that, you know, sometimes you've just got to close the door with your elbow, right? You're carrying too many things.
REBECCA AHRENS
Or your foot or whatever, yeah.
NICK
Or your foot. Yeah. And we can't do that in VR. You have these two controllers, classically, or even gloves, and you lose all of those other ways that we interact.
So one of the biggest things, especially in training, is that instead of focusing so much on the experience in terms of, “was this a fun, engaging game?” we have to think about performance, right? How do we make sure that we are training that task in the exact same way that we experience it in real life?
To give an example from some of the research that we've been doing: think of a task in which you're deciding which box an object goes in. You can imagine, if you're working for online shopping company X, you're deciding what box this widget goes in to ship it. And you're trying to make it optimal. It's got to be just big enough, but, you know, not too big.
That's a fairly simple task. And even just doing that task between real life, actually having them physically put things into boxes, and doing it virtually, we find behavioral differences, especially around the controllers. One thing that we've recognized is that people essentially attack the problem differently. In real life, they'll take a handful of objects, you know, just kind of scoop them all up, and then, one by one, sort them.
In VR, at best, because of the controllers, they can only pick up two. So there's this change in how we're doing the task. It seems like when we have it in real life, they're focusing on the speed of the task; they can do that task faster. If you have them do it in VR, they tend to do it slower, but more accurately. So now we have this interesting tradeoff: what do we want in training? Should we be encouraging them and training them to do one at a time or two at a time? Do we do it the more efficient way?
The limitation changes the training dynamic, and that's something that we're now really being concerned about in VR and AR training.
JOE RENSHAW
Could you talk about your study, the “Don't Drop the Ball” paper? I thought that was a really cool study.
NICK
So yes, absolutely. We had participants basically try and throw a ball into a bucket. You know, just like the carnival game, where you have a bucket maybe five meters away or ten meters away, and you're just throwing it in.
We actually had them do that in real life, and then we had them do it in VR. And we matched it as closely as we could: you know, millimeter accuracy. It looked really good. You know, the balls were the same.
But we found huge performance differences in that task. The limitation really is that we get so much information from the weight of the ball, from picking it up and how we throw it, and from how that adjusts the next throw, that it's much harder to adjust based on the visual information alone, when we don't have that additional natural feedback.
And that's really hard to recreate in VR. You could be, you know, 80%, 90% correct in being able to throw it in a bucket in real life, and then it's, you know, 15% in VR, even though all the physics are the same and everything else.
So that really kind of speaks to that gap that we're really trying to address, especially moving it into training aspects.
REBECCA AHRENS
Yeah. I mean, you know, when you're living in the physical world, you sort of take for granted how much you rely on the intuition that you've developed over time of the physics of the world, right?
Like, people pick up something, you're not doing a calculation: Okay, how big is this? How massive is it? How much force am I going to need to use to throw this? You rely on, like, how heavy does it feel in my hand? And then you kind of adjust how much force you're going to give it based on that.
And correct me if I'm wrong, but it sounds like you're saying that while the visual side of VR has been advancing really rapidly, the haptic side, like the weightiness of the ball, trying to mimic that in a virtual space, has really been lagging. And that's putting certain limitations on what kind of training would be useful in a VR environment.
NICK
Yeah, I think you're exactly right. And I think the other danger is that, at least in something like throwing, we know that the weight is used; inherently it makes sense. But what about situations where we don't really know what unique aspects are being integrated that we're not capturing in VR? And then we apply that in training, especially in situations where the main benefit of VR is being able to simulate really dangerous environments, for example emergency action procedures in a plant fire. Now when they're in the real environment and that situation happens, they're not using all the cues that might be available to them.
And so that's really the danger sometimes when we're trying to train in these scenarios: we don't necessarily know all the inputs that people use. We can try and extrapolate them, and, like I said, sometimes it makes sense. But there are a lot of cases in which we integrate information that we wouldn't have thought of, right? It inherently is just the process by which we learn. But if we don't know about it, we don't capture it, we don't represent it, and then we lose out on it.
I think one of the interesting questions that has come up, and that I think has a lot of application industry-side, is: when do we use VR, when do we use AR, when are the traditional 2D monitor displays better? When are 3D displays better?
There has to be some really clean, kind of, guidance, hopefully, that we can say, hey, in this kind of situation, in this task or if this is what you're trying to decide, you know, a 2D display might be more effective. In this situation, a 3D model that you can just spin on the screen—you know, kind of the AutoCAD style or whatever the case—might be better. And then here are the situations in which VR might be good, if we're, you know, looking at large scale. Or AR might be better because we have to keep it grounded into this reality or we're just trying to assist a little bit.
So coming up with the idea of when is it best to use, you know, display X, Y or Z I think is really where we're at the point of finding that utility for it, right? That the VR and AR and XR are far more capable than just using it for video games and entertainment. I think there is a true utilization for it that remains untapped. But we also haven't really spent the time trying to figure out what does it really do well, and how do we keep up with balancing the technological limitations that exist and its capacity?
JOE RENSHAW
You sound bullish, for an academic at least, on the potential of VR, and I'm wondering how you see it potentially helping, or not, to address this kind of chronic skills shortage that the industrial sector is grappling with?
NICK
Yeah. I mean, I think, hopefully, I have a cautious optimism about it.
JOE RENSHAW
There he is, the academic!
NICK
[Laughter] Yeah! I think there are some really cool applications that could be done, especially in situations where we have as-needed training. That, really, is something I think AR and VR have the capability and the possibility to aid in. A lot of times we think of these as whole solutions, right? We're going to train an entire job in AR and VR. But based on previous research, and looking at even training emergency tasks, right, there are these one-offs: situations where, “oh, we really need this kind of knowledge to solve this one problem,” right? And maybe that problem will come up again next week, and maybe it won't come up for another year. That's where I can see, especially for AR, there really being a nice utilization that gives it purpose.
And I think that's what I'm hoping to see out of future development: that we're thoughtful about its purpose. So if we have a situation where maybe somebody has a knowledge gap, we could use something like, you know, a headset to get them spun up, or at least, in the worst case, walk them through a process just to make sure it gets done and accomplished this one time. And then, you know, utilizing it to catalog: does that mean there's a training difference that needs to be addressed? Or is this just a rare occasion? We really want to use this technology for this. How do we do that well? And again, not just using technology to solve it, but also thinking about the problem space: is AR the best way to do it? Or is it, you know, the old-school manual on the shelf that you could pull down and go through a checklist, right? Those kinds of situations. But I think there is an advantage there. I just think, you know, we need to put some real good thought into it.
REBECCA AHRENS
What do you think—I'm just fantasizing here, but roll with me for a second.
NICK
All for it.
REBECCA AHRENS
Often in conversations about, you know, the skills gap or worker shortages, one of the things that gets brought up is: well, Bill has been working at this plant for 45 years, and he knows all of the manual-type stuff. But he also has this sort of spidey sense, like something you mentioned before. He can hear in the machine: hmm, that whirring is a little bit different than it should be, and I know that that means blah is about to happen.
Is there a world in which you could use VR to capture some of that more intuitive knowledge? Because you can't really replicate something like a specific shift in the way a machine hums in a manual, but you might be able to do that in VR.
And so then, since it's something that happens rarely, somebody new coming in could go into the VR environment and hear exactly that shift in buzzing or whirring that Bill could recognize, you know, in a second. But if you've never experienced it, you wouldn't necessarily recognize it.
NICK
Yeah, I think it's a fantastic idea.
I mean, I could see there really being an interest, especially as we focus more on the performance aspect, right? It's not just doing the job; being able to use VR to capture those cues is, I think, a fascinating opportunity, because we can alter the environment. Right? We can start to dial things in. I could see situations in which we try to capture what that kind of spidey sense is, what the exact cues are that may be being used.
That would be really hard with any other mechanism, right? We could observe them doing the job, but as an expert, it'll be seamless, right? You won't necessarily notice anything. I could see a situation in VR where we can start playing with the cues they're getting, to see what that magic connection or magic recipe is: how do they make that call? Right? Because that was just an intuitive call. I could certainly see it that way. That'd be a really interesting approach.
I think the other aspect that we don't necessarily think of, and that we're just now starting to explore, is adapting what people see based on the struggles they're having. So you can imagine a situation where, let's say, you have an individual who's early-career, who's working on this problem. With AR, you can present all the information you want, all over the place.
So you could have a situation where they can see the entire plant operating in front of them. Right? They could see every pipe that's pumping X, Y, and Z. But that may be overwhelming to them, right, when they're trying to do task Y or solve problem X. Having the system then reduce down the information is something that we've been trying to look at: adapting based not only on performance, if they're not doing the task well, but also on other physiological measures. Right? If you can determine a level of stress, right?
And so, especially in a situation early on in training, where they may become overwhelmed and stressed and start making more mistakes: if you peel back some of that information and don't give it to them until they need it, you can start to mitigate some of those aspects potentially. So that's another way to, not necessarily speed up the learning process, but at least make it more effective. So that when they get done with that initial training, they're better prepared.
JOE RENSHAW
What you were saying about using AR out in the field also kind of blew my mind a bit, because when we talk about virtual reality, I at least picture someone in a room with some goggles on. And it doesn't even matter what's in that room.
But the idea that you could send someone with maybe less comprehensive desk training out into the field and have them have extra information and guidance provided to them in front of their very eyes I think is also, yeah, I mean, it sounds very promising. I think you're right to be cautiously optimistic.
NICK
[Laughing] Thank you!
REBECCA AHRENS
Well, Professor Kelling, thank you so much for coming on the show. We really appreciate talking to you. And there's always a lot of hype with new technologies: AI, you know, metaverse, VR. And it's good to get, like, a little bit more grounded and understand like, okay what is actually going on? What are people thinking? What could this be used for? So thank you for being our guide.
NICK
Thank you for having me. This has been an awesome time.
[Outro music]
JOE RENSHAW
You can follow Nick on LinkedIn and you can find some of his research on the UHCL website.
You can also find us on OurIndustrialLife.com. Every two weeks, we publish a new batch of stories. Our latest batch includes pieces about the Chinese nuclear industry, how the wind turbine got its shape, and the surprising role of chemical giants in stopping the spread of malaria.