Safety on Purpose
Podcast where safety meets leadership, culture, and human connection. Hosted by Joe Garcia—speaker, culture advocate, and safety leader—this show dives beyond checklists and compliance to explore what really keeps people safe: purpose-driven leadership, trust, communication, and mindset.
Safety on Purpose
Tech, AI & Human Factors
Leaders connect technology, AI, and human factors to move from reactive safety to proactive, people-centered prevention. We show how wearables, predictive analytics, and simple design choices reduce risk while building trust and stronger culture.
• safety wearables and real-time alerts preventing incidents
• dashboards as decision tools, not scoreboards
• privacy, trust, and data overload risks
• AI pattern recognition, risk scoring, and simulations
• leading indicators replacing lagging metrics
• bias, transparency, and ethical guardrails
• human factors principles for usable systems
• field case: heat stress prevention and trust gains
• five leadership practices for adoption and resilience
• 48-hour challenge to engage the frontline
If you found this episode valuable, subscribe, share, and leave a review
Hosted by: Joe Garcia, Safety Leader & Culture Advocate
New Episodes Every Other Tuesday
Safety on Purpose
Follow & Connect:
🔸 Instagram
🔸 LinkedIn: Joe Garcia
🔸 Spotify | Apple Podcasts: Search "Safety on Purpose"
Welcome to Safety on Purpose, the show where we connect safety, leadership, and culture. I'm Joe Garcia. Today we tackle a triple header: tech, AI, and human factors. From smart PPE and drones to chatbots predicting risk, tech is exploding. But none of it matters unless we remember the human at the center of all of it. Buckle up for a tour of what's now, what's next, and how leaders can harness innovation without losing human connection.

When we talk about technology, there's a wide range of different technologies we could cover, so let's focus on a few things: safety wearables, the real-time data we get from them, this new frontier of safety, and why it all matters. The traditional approach to safety has largely been reactive. We investigate after something goes wrong. But technology, particularly safety wearables and real-time data, is transforming that model into something far more proactive, predictive, and personalized.

What exactly are safety wearables? Safety wearables are smart devices worn by workers, often on hard hats or helmets, vests, or wrists, that monitor a range of metrics like heart rate, fatigue, and even hydration level. They can check movement and posture, look for exposure to heat, gas, or harmful loud noises, and help track location and proximity to hazards or machinery. These devices capture live, continuous data about what workers are experiencing in real time on the job.

Now let's talk about the power of this real-time data. Instead of waiting for an incident report or an injury to take action, safety teams can get instant alerts when workers are under stress, like heat stress, fall detection, or exposure to hazardous gases. All of this data can be compiled into what we call dashboards.
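For the technically inclined, the instant-alert idea above boils down to comparing live sensor readings against thresholds. Here's a minimal sketch in Python; the field names, worker IDs, and threshold values are made-up examples, not any vendor's actual API or recommended limits.

```python
# Minimal sketch of a wearable heat-stress alert check.
# Field names and thresholds are hypothetical examples only.

HEAT_STRESS_CORE_TEMP_C = 38.0   # example core-temperature alert threshold
HEAT_STRESS_HEART_RATE = 160     # example sustained heart-rate threshold

def check_heat_stress(reading: dict) -> list[str]:
    """Return a list of alert messages for one wearable reading."""
    alerts = []
    if reading["core_temp_c"] >= HEAT_STRESS_CORE_TEMP_C:
        alerts.append(f"Heat stress: pull {reading['worker_id']} from hot zone")
    if reading["heart_rate"] >= HEAT_STRESS_HEART_RATE:
        alerts.append(f"Elevated heart rate: check on {reading['worker_id']}")
    return alerts

# One simulated reading coming off a wearable
reading = {"worker_id": "W-107", "core_temp_c": 38.3, "heart_rate": 142}
for alert in check_heat_stress(reading):
    print(alert)
```

The point is that the alert fires on the live reading, before an injury report exists; a real system would add debouncing and human review, but the proactive shape is the same.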
We can use dashboards to monitor trends across teams and locations, and they give us real-time data in the time and place we need it to make decisions about what we need to change or look at. Supervisors can then adjust conditions in the moment, like pulling someone from a hot zone before heat stress sets in. This data allows for early intervention, reducing risk before it turns into an incident.

So let's talk about going from data to action. Real-time data only becomes valuable when it's translated into action. The best implementations empower supervisors to make on-the-fly decisions to protect the workers they're overseeing, inform future planning like staffing, equipment use, and training, and drive continuous improvement by identifying hidden trends or recurring risk patterns.

Now, what should we watch for? While promising, this tech still raises questions we have to consider. The big one is privacy. How is the data being used, who's actually seeing it, and where does it go? Then there's trust. Do workers feel this is for their safety or for control? Do they really believe it's going to help keep them safe, or is it just there to monitor what they're doing? And there's overload. Too much data can overwhelm if it's not focused and actionable. If you get too much of it, do you actually know what to do with it? Can you use it to build what you're trying to build, or is it just too much of a good thing?

And where is this going? As AI and machine learning integrate with wearables, we'll see even more predictive modeling of injuries and risks, customized safety interventions based on individual and task-specific data, and integration with broader operational systems to create smart, responsive workplaces. Technology and real-time data won't replace human judgment, but they will augment it.
The key is to combine these tools with a culture that's rooted in trust, empathy, and shared responsibility, so tech becomes an enabler, not just a tracker.

Alright, let's talk about a few more things: AI, predictive analytics, and how they're shaping the future of safety. First, what exactly is predictive analytics in safety? Predictive analytics is the practice of using data, statistical algorithms, and machine learning techniques to identify the likelihood of future outcomes based on historical and real-time data. In a safety context, it means using technology to anticipate incidents before they occur, rather than just reacting to them after the fact. With the power of AI, these capabilities become smarter, faster, and more nuanced, allowing safety professionals to make data-informed decisions that protect people more effectively.

So how can AI enhance predictive safety? Artificial intelligence supercharges predictive analytics in three ways. Number one, pattern recognition. AI can sift through massive amounts of data, like incident reports, near misses, environmental readings, those wearables we were talking about earlier, and behavioral observations, and it can detect patterns humans might miss. For example, repeated micro-events like small slips or near misses in a specific area of your site could indicate a larger brewing problem. Number two, risk scoring. AI models can assign real-time risk scores to specific jobs, tasks, or even individual workers based on fatigue levels, environmental conditions, historical performance, and training history. This enables proactive interventions like reassigning tasks, scheduling breaks, or increasing supervision. Number three, scenario simulation. AI can simulate what-if scenarios to help safety teams understand how changes in procedure, equipment, or even staffing might affect risk before implementing them in the real world.
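To make the risk-scoring idea concrete, here is a tiny illustrative sketch: several normalized factors combined with weights into one score that can trigger an intervention. The factor names, weights, and threshold are invented for illustration; a real model would be trained on data, not hand-weighted.

```python
# Illustrative task risk score: a weighted sum of factors scaled 0-1.
# Weights, factors, and the intervention threshold are hypothetical.

WEIGHTS = {
    "fatigue": 0.35,        # hours into shift, sleep debt, etc.
    "environment": 0.30,    # heat index, noise, air quality
    "history": 0.20,        # past incidents/near misses on this task
    "training_gap": 0.15,   # overdue or missing training
}

def risk_score(factors: dict[str, float]) -> float:
    """Combine normalized (0-1) factors into a single 0-1 risk score."""
    return round(sum(WEIGHTS[name] * factors[name] for name in WEIGHTS), 3)

# A fatigued crew working in high heat, but on a familiar, well-trained task
task = {"fatigue": 0.8, "environment": 0.9, "history": 0.2, "training_gap": 0.1}
score = risk_score(task)
if score >= 0.5:   # example threshold for proactive intervention
    print(f"High risk ({score}): schedule a break or add supervision")
```

Notice the output isn't a verdict; it's a prompt for the supervisor to act, which is exactly the "decision support, not replacement" framing above.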
What exactly can artificial intelligence predict? AI-driven analytics can predict the likelihood of injury for specific job roles or individuals, fatigue-related incidents based on shift patterns or biometric data, machine failure or asset wear before a breakdown occurs, hot zones for environmental risks like heat, noise, and air quality, and of course noncompliance trends and behavioral drift over time.

So why does this all matter? Traditional safety systems rely heavily on lagging indicators, like recordable incidents or lost-time injuries, which tell you what happened after the fact. AI and predictive analytics shift the paradigm to leading indicators, helping organizations reduce incidents before they happen, target resources more effectively, focus safety efforts on high-risk areas, and move from reactive compliance to strategic prevention.

Let's talk about some challenges and what to watch out for. With all its promise, there are important ethical and operational considerations. There can be bias in the data. AI learns from past data, and if that data reflects systemic bias, the predictions will too. There are, again, like I mentioned before, privacy concerns. Predictive analytics based on biometric or behavioral data must be handled with transparency and care. And there's overreliance. AI is a decision-support tool, not a replacement for human insight and leadership.

What can't AI replace? It's important to note that AI doesn't build culture. It doesn't coach workers. It doesn't lead people. The future of safety still depends on people, especially leaders who can interpret the data, communicate effectively, build trust, and respond with empathy. AI and predictive analytics represent a game-changing opportunity to transform safety into a smarter, more strategic discipline. But the real power comes when technology meets culture, when we use insight not to control, but to care more deeply, act more proactively, and lead more effectively.
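A leading indicator can be as simple as counting near misses before they become an incident. Here's a short sketch of the "repeated micro-events in one area" pattern check mentioned earlier; the event records and the flagging threshold are made-up examples.

```python
# Sketch of a leading-indicator check: cluster near misses by area
# and flag any area that crosses a threshold before an injury occurs.
# Records and the threshold of 3 are hypothetical examples.

from collections import Counter

def flag_hot_areas(near_misses: list[dict], threshold: int = 3) -> list[str]:
    """Return areas whose near-miss count meets or exceeds the threshold."""
    counts = Counter(event["area"] for event in near_misses)
    return [area for area, n in counts.items() if n >= threshold]

events = [
    {"area": "loading dock", "type": "slip"},
    {"area": "loading dock", "type": "slip"},
    {"area": "mezzanine", "type": "trip"},
    {"area": "loading dock", "type": "near miss"},
]
print(flag_hot_areas(events))
```

Three small slips at the loading dock get flagged here while the recordable-injury count is still zero; that is the shift from lagging to leading indicators in miniature.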
Autonomous robots can handle repetitive or hazardous tasks. And then, of course, remote virtual reality training brings high-risk scenarios into low-risk environments. So what are the human factors here? Tech removes exposure, but it requires new skills: remote decision making, situational awareness, and trust in the machine. You can use a drone, let's say, to find corrosion faster than a human could, but the engineers still need to validate that the information is correct before the shutdown can take place. So again, keep in mind, technology is just a tool. We're using it to help us find the problem quicker, not to be the solution.

Let's talk about human factors 101 in a high-tech world. Human factors is the science of understanding how people interact with systems, tools, environments, and technology, with the goal of optimizing human performance and reducing errors. In a high-tech world, this discipline is more critical than ever. As automation, AI, and complex systems become embedded in work environments, the interface between human and machine becomes a major source of both strength and vulnerability.

Let's talk about the key principles of human factors. Number one, design for people. Systems should be built around human strengths and limitations, not the other way around. Number two, reduce complexity. Simplifying processes helps people make better decisions under stress. Think about it. Put yourself in those frontline workers' shoes. If you have to make a decision on your feet, are you less stressed when you understand something more, or less? Number three, account for fatigue, stress, and cognitive load. Mental and physical states impact performance. There's no doubt about it, especially in high-pressure environments. Number four, prioritize usability. Tools, dashboards, and wearables must be easy to use for everybody using them.
The frontline workers have to be able to understand how these tools work; they need to be accessible, easy to use, and user-friendly. And finally, understand real-world conditions. Procedures that work in theory may fail in the messy, unpredictable reality of the field.

So why does this matter in a high-tech world? More tech equals more potential for human error. As systems get smarter, the need for human-centered design increases, because the consequences of a mismatch can be serious. Blaming people for system failures ignores the real issue: poor integration between human capabilities and technology. In safety, this means rethinking investigations, training, and system design, shifting from "who messed up?" to "why did the system allow this to happen?" In the race toward digital transformation, don't lose sight of the human at the center of all this. Technology may be changing fast, but people are still people, and the safest systems are the ones designed with that in mind.

Let me give you a field example. It was a summer project: high heat index, large crew. They wanted to use some special cooling technology, so they set up real-time dashboards showing heat stress risk. The foreman got phone alerts when a worker's core temperature spiked, they set up cooling tents, and hydration reminders were triggered automatically. What did this result in? In six months, a 47% reduction in heat-related first aid cases, zero recordable heat illnesses, and workers reported higher trust: "They're watching out for us, not watching us." The biggest thing here is that they trusted the technology. They believed in what they were doing because it helped keep them safe. They didn't feel like they were being controlled.

Now let's talk about some pitfalls of where we might end up with technology, AI, and all of this. Again, I keep mentioning this: privacy is going to be the biggest problem. Who owns the data? Where is it going? Is there bias?
Algorithms trained on historical data may overlook underrepresented groups. And then again, we've talked about this before: overreliance. If that app or technology fails, can the team still recognize the risk? And then there's cost versus culture. Shiny tech can hide a toxic culture. You need to fix that culture before you do anything else.

Let's talk about five leadership practices for tech adoption. Number one, co-design with the workers. Pilot new tech with the frontline and gather their feedback. They're at the core of everything we're doing. Get them involved. Get them in the middle of the decision-making process. If they're going to be the ones using the technology, wearing the technology, get their opinion, get their advice. Don't just say, "Hey, this is what we're doing. Get used to it." No, bring them in at the beginning so they can be a part of the whole process. Number two, translate data into stories. Dashboards don't inspire anybody, and they never will. But take the data you get from those dashboards and translate it into stories. Translate those numbers, figures, pie graphs, charts, whatever you want, into stories the people in the field can relate to. Show them how this impacts them. Number three, train for the why. People adopt tech they understand, so explain how it works and how it benefits them. Number four, monitor trust indicators. Take a survey and ask the field and the frontline this question: do you feel the technology helps you, or is it just surveilling you? And number five, plan for failure. Nothing ever works 100% of the time when it comes to technology. Something is going to fail eventually, so have a plan for when it does. What are we going to do? How are we going to move forward? What can we do until we get this fixed?

All right, now let me challenge you out there.
So in the next 48 hours, I want to challenge you to do a couple of different things. Number one, identify one tech tool that you actually use today. Then go ask two frontline workers what's helpful and what's frustrating about that tech. Then share one improvement idea with your leadership team. Technology is an amplifier. If your culture is strong, tech is going to make it stronger. If your culture is weak, tech is going to expose those cracks and weaknesses. You've got to lead with purpose. Keep the human at the center and let tech do what tech does best: support smarter, safer workers. If you found this episode valuable, subscribe, share, and leave a review. I'm Joe Garcia. Thank you for choosing to lead safety on purpose.