January 21, 2026

00:42:46

AI Today (Aired 01-21-26) From Black Box to Glass Box: How to Build Real Trust in Artificial Intelligence

Show Notes

In this new episode of AI Today, Dr. Allen Badeau sits down with Todd Thomas, bestselling author of Hyperscale and founder and CEO of Woodchuck AI, to explore one of the most urgent challenges in modern technology: trust in artificial intelligence. As AI becomes embedded in business, finance, healthcare, and security decisions, leaders are asking a critical question: how can we trust systems we don’t fully understand?

Todd breaks down the concept of Explainable AI (XAI) and why transforming the “black box” into a “glass box” is essential for transparency, accountability, and confidence in AI-driven outcomes.


Episode Transcript

[00:00:00] Speaker A: Welcome to AI Today, where we break down technology shaping our world and translate complexity into clarity. I'm your host, Dr. Allen Badeau. And today we are joined by Todd Thomas, an influential voice in artificial intelligence, energy, sustainability and entrepreneurship. Todd is a bestselling author of Hyperscale and the Unleashing Abundant Energy trilogy. That's not easy to say. And the founder and CEO of Woodchuck AI, my favorite name for any company moving forward. Todd, your career really has focused on harnessing emerging technologies to drive efficiency, transparency, of course, and real-world impact across a lot of different industries. And as AI has become embedded in almost everything, one concern consistently rises to the surface, and that's trust. Today we're going to really dive into that. We see a lot of different faces of it. A lot of members of our audience are business owners, and they struggle trying to define those kinds of things. So, from your perspective, why do leaders really struggle with trust and the recommendations that we get from the various different AI solutions?

[00:02:02] Speaker B: I think leaders often struggle to trust AI recommendations due to what's called the black box problem. It's a lack of transparency into how the AI arrives at a decision. And oftentimes this can be compounded by concerns over data quality, algorithmic bias, and regulatory compliance. And then of course, there's always a fear of relinquishing human judgment. That's especially true when you're talking about high-stakes decisions.

[00:02:29] Speaker A: Yeah, and that's one of the things we are trying so hard to get out in the open. When folks are using those tools they can get online, or ones bundled with other software they purchase, or they go to a doctor who's using it in the background, understanding that trust, and how and why those decisions are made, should be very important to everybody. And that black box is a little bit scary for folks. So when you start looking at explainable AI and really helping people understand the ghost in the box and how it's making those kinds of decisions, what's some advice that you would give them to help them deal with that?

[00:03:18] Speaker B: So explainable AI is fantastic. Some people just refer to it as XAI. But the great thing about explainable AI is it really does provide clear reasoning behind the AI's output, and it phrases it in a way that's human-understandable. It offers insights into which data inputs and which model features were most influential in the decision. And it allows users to verify the logic, to detect bias, and really to build confidence in the system's reliability. Some people refer to that as transforming the black box into the glass box.
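To make the "which inputs were most influential" idea concrete, here is a minimal sketch of glass-box feature attribution, assuming a simple logistic regression model, where each feature's contribution to the log-odds is exactly its coefficient times its value. The feature names and data are illustrative, not from any product discussed in the episode.

```python
# Minimal illustration of "glass box" feature attribution for a linear model.
# For logistic regression, each feature's contribution to the log-odds is
# coefficient * feature value, so the explanation is exact, not approximate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["amount_vs_typical", "new_country", "hour_of_day"]

# Toy training data: flag transactions that are large AND from a new country.
X = rng.normal(size=(500, 3))
y = ((X[:, 0] > 0.5) & (X[:, 1] > 0)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(x):
    """Print per-feature contributions to the model's log-odds for one input."""
    contributions = model.coef_[0] * x
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda pair: abs(pair[1]), reverse=True)
    for name, c in ranked:
        print(f"{name:>20}: {c:+.2f}")

explain(np.array([2.1, 1.0, -0.3]))  # a large purchase from a new country
```

For non-linear models the same idea is usually delivered through attribution tools such as SHAP, but the linear case above shows the principle with no extra machinery.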
[00:03:57] Speaker A: Yeah, and that's a great way to put it, because I think there is a little bit of confusion between just explainability and transparency. Right. Folks have a hard time looking at it from that perspective, and the glass box piece in some ways is even more important than the explainability piece. From your perspective, whether it's security, whether it's risk, whether it's even going down to a bank and getting a loan, why is transparency in AI so important?

[00:04:28] Speaker B: Well, I think particularly for security and risk management, transparency is critical for accountability, for auditability, and for rapid response. If an AI flags a critical threat or denies a claim, decision makers need to understand why, to ensure the system isn't misidentifying legitimate activity or perpetuating an unfair bias. So transparency really allows necessary system adjustments, it validates compliance with regulations, and it builds stakeholder trust.

[00:05:05] Speaker A: Yeah. And if you're a small business owner and you're using some of these AI tools, and really it's the first time maybe you've integrated them into your systems, what are some of the gotchas that you tell people to watch out for?

[00:05:21] Speaker B: Well, you want to use common sense. The power of XAI is it really does translate the decision and give it to you in a way that's easily understandable. So, for example, a system might give you a clear explanation, like: this transaction was flagged because the purchase was made from a new country, or the amount exceeded the user's typical daily spend by 200 to 300%. These are concrete, verifiable explanations that allow leadership to act confidently and quickly. So when you're looking at AI tools, I would certainly suggest XAI, but make sure you have one that really does provide a clear explanation that gives your users the ability to make quick decisions.
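Todd's flag explanations ("new country", "exceeded typical daily spend by 200 to 300%") translate naturally into reason codes. Below is a hedged sketch of such an explanation layer; the Transaction fields and the 3x threshold are illustrative assumptions, not a real fraud system's design.

```python
# Illustrative reason-code generator: turns raw transaction signals into the
# kind of plain-language explanation described in the conversation.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    home_countries: set        # countries the cardholder normally uses
    typical_daily_spend: float

def explain_flags(txn: Transaction) -> list[str]:
    """Return human-readable reasons a transaction was flagged."""
    reasons = []
    if txn.country not in txn.home_countries:
        reasons.append(f"purchase was made from a new country ({txn.country})")
    if txn.typical_daily_spend and txn.amount > 3 * txn.typical_daily_spend:
        pct = 100 * (txn.amount / txn.typical_daily_spend - 1)
        reasons.append(f"amount exceeded typical daily spend by {pct:.0f}%")
    return reasons

txn = Transaction(amount=900.0, country="BR",
                  home_countries={"US"}, typical_daily_spend=250.0)
print(explain_flags(txn))
# ['purchase was made from a new country (BR)',
#  'amount exceeded typical daily spend by 260%']
```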
[00:06:08] Speaker A: Yeah, and we know it. You've seen it before. I've seen it. It takes a while to really build that kind of trust in whatever system it is, and when you put AI in, that makes it even more difficult. Right. So what are some good examples where, over the long term, leaders have really transformed their belief in some of these systems, or even gone the opposite way, where something happened and it immediately caused them to get rid of those kinds of systems? What are some examples that you've seen?

[00:06:47] Speaker B: Well, I think we're seeing more and more in daily operations. Everyone is using AI tools, and we're using AI to do a lot of the mundane, routine audits of tons and tons of data, to prioritize decision making and to make recommendations. And so when you get a really nice explainable AI and it tells you why that decision was made, if that explanation doesn't make sense to you, that's a nice red flag that can cause you to look at the underlying criteria. So I think there are some really good questions for a business owner. How does the AI explain its decisions? You really want to focus on clarity and accessibility of the explanations. Is it human-readable reports, not just mathematical metrics? Mathematical metrics are hard for humans to understand, so you want really clear, human-understandable recommendations. You also want to know what data was used to train and test the model, and how that data quality is maintained. There's often bias in an algorithm, and bias in an algorithm impacts reliability, and that usually goes back to the underlying data. You can also ask: what are the known failure modes, or what are the accuracy rates for different user groups? So you can find out, is this particular AI the right one for our use case? And another really important question is: is the system auditable, and what mechanisms are in place for model governance and retraining? You need to do this to ensure compliance and continuous improvement. AI is a living process, right? Its algorithms are constantly taking in more information and learning. So how do you make sure it is using the correct data to learn, for correct, positive, continuous improvement? We've all seen examples of AI that has gone off on a tangent and caused damage. So there needs to be human oversight, making sure that your continuous improvement is heading you in the direction you need to go.

[00:09:06] Speaker A: Yeah, and that's one of the bigger things I think we're seeing with vendors too, because now with agents becoming more popular and seemingly everywhere, that becomes even more important for business owners and leaders to understand. And so as we continue this, well, I don't know if I'd even call it a modernization, almost a migration in some cases, but really a cultural shift in using these types of tools, what sort of things are you pinging vendors on to see and make sure that their systems can do what they say they can do?

[00:09:51] Speaker B: Well, that's a great question, and it comes back to a couple of things we've already touched on. Is it auditable, what are the mechanisms in place for model governance, and how do you make sure you do not have an algorithmic bias? The explainable AI can help you with that. But ultimately, if you're the one responsible for the output from this AI, whether it's agentic AI or not, you need to be monitoring it. There needs to be human oversight, monitoring the results, monitoring the direction your algorithm is going. So AI is a fantastic tool for taking over mundane and repetitive processes, but there needs to be human oversight of the output being generated, particularly when it comes to higher-risk or higher-impact decisions. Ultimately, AI can help you and it can do all the grunt work, but if it's high risk, high impact, you need human oversight to make those final decisions.

[00:10:57] Speaker A: Yeah, I agree 100%. And it seems like sometimes it's very easy for folks to say, oh, the AI has gotten it right before, there's no reason it's not going to get it right for this decision. And they will just rely on that and allow it to make the decision. So it kind of feels like the human becomes the robot if they're not careful. And I know we're going to talk about this more as folks are trying to scale and trying to do things faster. But folks are going to have to wait, because we'll be right back. Coming up, we're going to talk about how AI is helping organizations scale when human teams simply can't keep pace. So stick with us. We'll be right back.
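On Todd's point in this segment about making sure continuous learning is heading in the right direction: one simple oversight mechanism is a statistical drift check that compares live inputs against the training distribution and escalates to a human when they diverge. A minimal sketch, assuming a two-sample Kolmogorov-Smirnov test; the alpha and the data are illustrative.

```python
# Simple data-drift monitor: compare live feature values against the training
# distribution with a two-sample Kolmogorov-Smirnov test, and escalate to a
# human reviewer when the distributions diverge.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(train_values, live_values, alpha=0.01):
    """Return (drifted?, KS statistic, p-value) for one feature."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha, statistic, p_value

rng = np.random.default_rng(1)
training = rng.normal(loc=0.0, scale=1.0, size=5000)
live = rng.normal(loc=0.6, scale=1.0, size=500)   # shifted inputs: drift

drifted, stat, p = check_drift(training, live)
if drifted:
    print(f"Drift detected (KS={stat:.2f}, p={p:.1e}); route to human review")
```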
[00:12:13] Speaker A: Welcome back to AI Today. Want more of what you're watching? Stay connected to AI Today and every NOW Media TV favorite, live or on demand, anytime you like. Download the free NOW Media TV app on Roku or iOS and unlock non-stop bilingual programming in English and in Spanish on the move. You can also catch the podcast version right from our website at nowmedia.tv. From business and news to lifestyle, culture and beyond, NOW Media TV is streaming around the clock, ready whenever you are. So welcome back to AI Today. I'm Dr. Allen Badeau, and we are continuing our conversation with Todd Thomas. We talked in the previous segment about some gotchas, some things to watch out for. And in this segment we're really going to work through some operational realities, in that humans and human teams simply can't scale fast enough to meet modern security and business demands. We see it all over the place, whether it's security operations centers or just operations teams being spread too thin. It oftentimes feels impossible with all the threats and the data volume that we have, and sometimes people feel like they're alone. AI is a true force multiplier. Now, Todd, when we look at all of these different activities taking place, and we've seen the rush of folks trying to deploy different types of products while watching out for some of those things, why is it really unrealistic to expect human teams by themselves to keep up with some of these modern threats that we're seeing?

[00:14:08] Speaker B: Technology accelerates at an accelerating rate, and that acceleration is self-perpetuating. Modern threats are characterized by their speed, their volume and their sophistication. So human teams struggle to keep up because the attack surface is vast, the number of alerts is overwhelming, and the pace of new vulnerabilities and attack techniques is too fast for manual analysis and response.

[00:14:35] Speaker A: Yeah, and we've all seen those SOC reports and the false positives that our folks get, and it really can be demoralizing in some cases that you just can't overcome that backlog. So even as we deploy some of these technologies and folks get more comfortable with them, starting with repetitive tasks, and without necessarily replacing people, what are some of the best places where AI can really augment the human in those environments?

[00:15:13] Speaker B: So AI is fantastic at repetitive tasks. AI automates the grunt work of security, such as sifting through millions of logs, prioritizing alerts, and performing initial triage and correlation. Letting the AI take care of that grunt work frees up human analysts to focus on complex, strategic and cognitive tasks that require human judgment, creativity, and deeper investigation. AI augments the human. It really doesn't replace them.

[00:15:46] Speaker A: Yeah, and I try to tell people that your relationship, and it is a relationship, with these AI tools is going to evolve and change over time. So when you talk to folks about using AI, how can you help them change their viewpoint from, oh, it's just another tool, to, it's really a teaming?

[00:16:14] Speaker B: So AI is such a powerful tool. I mean, we've used lots of different emerging technologies and new tools over the years, but AI is really a game changer because it can be so interactive. So AI really can become a teammate.
And it shifts from being a passive product that generates data to an active collaborator that provides real-time guidance, suggests next steps and even executes pre-approved defensive actions. So the relationship changes from analyst overseeing software to analyst collaborating with AI, leading to faster, more consistent and more effective defense.

[00:16:51] Speaker A: Yeah, and perspective, right? Perspective is always going to be important, because AI can run so many different simulations, it can do so many different things all at the same time and give you a different point of view that you should be bringing into your analysis process. And the smaller the team: I've seen where they've got a couple of SMEs and some AI, and they can do a tremendous amount of work. But then maybe it's skill set, knowledge, or some other pieces that go along with that. So from your experience, what are some good case studies where you have seen a small team with AI do a tremendous amount of work that used to take a lot of people?

[00:17:44] Speaker B: Sure. We can always look at the financial world and financial institutions, and I think the scenario you're describing, where a small security team manages a global network of thousands of endpoints, is becoming more and more common. Using AI-driven extended detection and response systems, they can automatically correlate alerts from endpoint, network and cloud sources. So when a low-and-slow phishing attack is launched, the AI can immediately spot the subtle multi-stage behavior that a human wouldn't catch by looking at individual alerts. The AI can automatically contain that threat on the infected endpoints, allowing small human teams to focus solely on the high-level remediation and policy review.

[00:18:33] Speaker A: Yeah, and I think that goes to a broader discussion that folks are really struggling with now. Is it more important to get somebody who is a subject matter expert in the technology or the vertical they're working in, or to understand how to work with AI and be able to apply it across vertical markets? You don't have to be a subject matter expert anymore, because AI can help you get through that. So when you go out and try to staff projects, what sort of things do you look for, and what advice can you give to the business owners watching, to say you should look at these things that maybe you haven't looked at before?

[00:19:19] Speaker B: I think leaders really need to look at staffing and AI together, not as competitors, but as a collaboration. As we've touched on earlier, leaders should view AI as a force multiplier. I think you used that phrase earlier, but it really can be a powerful force multiplier that allows their existing staff to achieve exponentially greater results. So instead of asking the question, how many people can AI replace? That's the wrong question. The question should be, how can AI enable my current team to handle 10 times larger workloads, or to protect against 10 times more sophisticated threats? This does involve investing in upskilling your staff to really turn them into AI pilots and AI trainers. And then this allows them to focus on threat hunting and complex incident response rather than simple alert management.
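The correlation Todd describes, stitching low-level alerts from endpoint, network and cloud sources into one multi-stage incident, can be pictured with a toy grouping rule: cluster alerts that share a host and fall inside a time window. Real XDR products are far more sophisticated; this is only an illustration, and the alert data is invented.

```python
# Toy alert correlation: group alerts that share a host and occur within a
# short window, so "low and slow" multi-stage behavior surfaces as one
# incident instead of scattered individual alerts.
from collections import defaultdict

WINDOW_MINUTES = 30  # crude fixed bucketing, purely for illustration

alerts = [
    {"t": 0,  "host": "wks-17", "source": "email",    "event": "phishing link clicked"},
    {"t": 9,  "host": "wks-17", "source": "endpoint", "event": "unsigned binary executed"},
    {"t": 22, "host": "wks-17", "source": "network",  "event": "beacon to rare domain"},
    {"t": 40, "host": "srv-02", "source": "cloud",    "event": "failed login burst"},
]

incidents = defaultdict(list)
for alert in sorted(alerts, key=lambda a: a["t"]):
    key = (alert["host"], alert["t"] // WINDOW_MINUTES)
    incidents[key].append(alert)

for (host, _), related in incidents.items():
    if len(related) > 1:  # multi-stage behavior on one host: escalate
        print(f"Incident on {host}: " + " -> ".join(a["event"] for a in related))
```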
[00:20:14] Speaker A: Yeah, and one of the things I try to get people to do is really shift their mindsets. Instead of, oh, AI is coming in and it's going to replace me, turn that around and say, look at what superpower I have now. Look how much more work I can do. I can really tackle a lot of things that I never could do before. It's a dynamic, and I think we're seeing a little more of it as folks become more familiar and trust it a little better. But that is a cultural shift, and it's really changing the dynamics of cybersecurity and AI, right? Because those are coming together, and I think in probably three to five years, it's going to be one and the same, almost. What are your thoughts around that relationship?

[00:21:05] Speaker B: Well, I think that's absolutely coming. And the companies that recognize that early and embrace it are going to have a big advantage over those that resist it and push it off. AI is here. It's an incredibly powerful tool. And if you use it well, it absolutely, as we've touched on, is a force multiplier. You can do more work, you can do better work, and you can save your human brain power for complex, non-repetitive problems. The force multiplier, I think that's the best phrase for it. Embrace AI and use it to increase the amount of work and the quality of work that your teams can do.

[00:21:47] Speaker A: Yeah. And so Todd, for our viewers, what's the easiest way for them to get a hold of you?

[00:21:53] Speaker B: You can reach us at Woodchuck AI. I'm also on LinkedIn; I'm just Todd Thomas on LinkedIn. Pretty easy to get ahold of.

[00:22:03] Speaker A: Yeah, that's great. Our viewers love getting a hold of you on LinkedIn, so be careful. So up next we're going to talk about why speed matters, really more than perfection, when incidents strike an enterprise. So stay with us. We'll be right back.

[00:22:55] Speaker A: Welcome back to AI Today. I'm here with Todd Thomas, and we are talking everything AI. As we continue, we talked a little bit about staffing last time, about being more mindful that using AI in team settings can really accelerate a team's ability to do more work. We're going to shift a little, though, and talk about response times, because in today's environment, delays can be huge. You know, I was on a call, a fraud call, and the person on the phone was talking live to a help desk person who had no AI supporting her. It took about 20 minutes, and in between he watched his bank balance just crash. And that just sucks the life out of you, right? So, Todd, can you help folks think about speed? Why does speed matter? Why does response time matter, especially more today than any other time we've seen?

[00:24:02] Speaker B: So in the digital age, threats like cyber attacks, system failures and fraudulent transactions are proliferating at machine speed.
The bad guys are using AI, and they're doing it at speed. The window between detection and catastrophic damage, just like your example, has shrunk dramatically. So slow, manual processes are simply no match for the velocity and scale of modern threats. And this leads to increased financial loss, regulatory penalties and, honestly, reputational damage.

[00:24:35] Speaker A: Yeah, and that reputation piece, I think, is what folks sometimes forget about, because it takes a lifetime to build your reputation as an enterprise, but depending on how you respond to these things now, that can all be gone in minutes. Not even hours and days: minutes. So looking at your ability to respond to threats faster, and at how that interaction with humans goes, from that traditional relationship to where it is now, how can AI as a team member really accelerate that response and help folks out?

[00:25:18] Speaker B: I think AI can help you in three key areas. The first is near-instantaneous analysis. AI models can process and correlate so much data so quickly: logs, network traffic, user behavior, across complex systems. And it can do that in milliseconds, identifying anomalies that would take a human analyst hours or even days to identify. The next big area is pre-programmed action. Once a threat is identified and categorized, AI can be pre-authorized to execute defensive actions, such as isolating a compromised network segment, rolling back a configuration change or blocking an IP address. And that can be done without human intervention, removing the time lag of human decision making and approval. And probably the third area is continuous learning. AI systems constantly learn from every incident, so they're improving their detection accuracy and response efficiency with each cycle, making subsequent responses even faster.

[00:26:23] Speaker A: Yeah, and one of the things I think is a lot of fun is that as folks are using it and, like I said, becoming more confident in what it can actually do, they seem to allow the AI to do a little bit more each time, which is great, right? Because, as you said, it continuously learns, it has that ability. And I've used them to, say, spawn a new agent to assist with a bottleneck or something like that. However, we've all had that experience where maybe it's gone too far. What kind of safeguards do you encourage folks to put in place, or ones that you're familiar with, to ensure that it doesn't overreact in those kinds of events?

[00:27:12] Speaker B: So there are several critical safeguards you can put in place for effective AI response. The first one is confidence thresholds. You can configure AI with a probability score threshold and only allow it to take action if it's highly confident, and you can define what highly confident means to you. Maybe it's a 95% threshold in threat classification, and lower-confidence alerts are routed to a human analyst for review. Another nice area is fail-safe protocols. Systems often operate on a principle of containment before action, so initial automated actions are designed to minimize harm or to isolate while a human review is initiated. And destructive actions, like data deletion, are rarely fully automated. You're going to use human controls for that. And that kind of leads into the next safeguard, which I would call human-in-the-loop design. For high-stakes or irreversible actions, the AI will present its analysis and proposed action to a human operator for final approval, maintaining human oversight. And then another area is continual simulation and testing. The AI's response policies should be rigorously tested in simulated environments, sandboxes, right? And this should be done before deployment to ensure they perform as intended. But they should also be continually tested in simulations; you shouldn't stop testing after initial deployment.
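Todd's confidence-threshold and fail-safe safeguards are easy to picture in code: act automatically only above a configurable score, and never automate the destructive step. A minimal sketch follows; the 95% figure echoes his example, while the action names and structure are invented for illustration.

```python
# Confidence-threshold routing with a human-in-the-loop fallback.
# High-confidence threats get a pre-approved containment action; anything
# below the bar, or anything destructive, goes to a human queue.
AUTO_THRESHOLD = 0.95          # "highly confident", as defined by the business
DESTRUCTIVE = {"delete_data"}  # never fully automated

def route(alert_score: float, proposed_action: str) -> str:
    if proposed_action in DESTRUCTIVE:
        return "human_review"             # fail-safe: containment only
    if alert_score >= AUTO_THRESHOLD:
        return f"auto:{proposed_action}"  # pre-authorized defensive action
    return "human_review"                 # low confidence: escalate

print(route(0.98, "isolate_segment"))  # auto:isolate_segment
print(route(0.80, "block_ip"))         # human_review
print(route(0.99, "delete_data"))      # human_review
```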
[00:28:48] Speaker A: Yeah, and that's definitely key, especially if you want to make sure drift doesn't happen and your models continuously perform the way you want them to. That's important. That also ties into the relationship between the human and the AI, because there are some that would never allow AI to make any decision whatsoever, and others that are a little more forward-leaning. If somebody came to you and said, where's the point at which I should allow the AI to make a decision versus the humans, what do you tell them?

[00:29:26] Speaker B: You need to look at your particular experiences and your user experiences and set up common-sense protocols that work for you. Again, turning to financial services, it's always a great area for common scenarios. One that's pretty powerful is credit card fraud detection and prevention. You can imagine a customer's account is compromised and a fraudulent transaction begins. A traditional system might flag the transaction only hours later, after the bank's batch processing has already occurred. An AI-powered system can monitor those transactions in real time and instantly detect an anomalous pattern, say, five transactions in five different countries within a minute. A human would have difficulty seeing that in real time. The AI can automatically see it, automatically score the activity as high risk, and immediately block the card before the attacker can execute further major purchases. This action prevents significant financial loss for both the bank and the customer. And it can also be flagged for further human review, but after that credit card has already been blocked.
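The "five transactions in five different countries within a minute" pattern is a classic velocity check. Here is a toy sketch of that real-time rule; the window size and country limit are illustrative assumptions, not parameters of any real banking system.

```python
# Toy real-time velocity check: block the card when too many distinct
# countries appear inside a short sliding window, then queue human review.
from collections import deque

WINDOW_SECONDS = 60
MAX_COUNTRIES = 3  # illustrative limit

class VelocityMonitor:
    def __init__(self):
        self.recent = deque()  # (timestamp, country) pairs inside the window

    def observe(self, ts: float, country: str) -> bool:
        """Return True if the card should be blocked immediately."""
        self.recent.append((ts, country))
        # Drop observations that have aged out of the sliding window.
        while self.recent and ts - self.recent[0][0] > WINDOW_SECONDS:
            self.recent.popleft()
        countries = {c for _, c in self.recent}
        return len(countries) >= MAX_COUNTRIES

monitor = VelocityMonitor()
for ts, country in [(0, "US"), (12, "RO"), (25, "NG"), (31, "BR")]:
    if monitor.observe(ts, country):
        print(f"t={ts}s: block card now, flag for human review")
        break
```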
[00:30:36] Speaker A: Yeah, and that's a perfect example of using AI to make a decision that prevents fraud faster than a human can, but not letting the AI dictate all the other things that go along with it. To put myself in the shoes of the caller I mentioned earlier: if the AI had just said, yes, it's a real customer, it's a real person, and shut the account down in three minutes instead of a 20-minute call, he might have felt a lot better after he got off that phone. So I want the audience to stay with us, because coming up, we are going to continue exploring AI and how it can rebuild trust in technology when confidence has been shaken. And we've seen an awful lot of that recently. So you're going to want to stay around for this, because I think you're going to hear some things you don't want to miss. Stay with us. We'll be right back.

Welcome back to AI Today. Don't miss a second of this show or any of your NOW Media TV favorites, streaming live and on demand, whenever and wherever you want. Grab the free NOW Media TV app on Roku or your iOS device and enjoy instant access to our lineup of bilingual programming in both English and in Spanish. Prefer podcasts? Well, you can listen to AI Today anytime on NOW Media TV, on the website at www.nowmedia.tv. Covering business, breaking news, lifestyle, culture, and of course, this show, NOW Media TV is available 24/7, so the stories you care about are always within reach.

So welcome back to AI Today. As we close out today's conversation with Mr. Todd Thomas, we want to address one of the most fragile elements of working with any sort of new system, and that's trust. The problem we have is that, just like a customer who has been done wrong by a store, with AI we see the same sort of thing: if the AI fails one time, it's very, very difficult to regain trust in that system. That's just how it is and how we see those sorts of things. When used responsibly and transparently, it works out great. But there are times it doesn't. And so, Todd, this fragile part of the relationship, we've never seen something quite like it before, where I guess the closest thing might be your phone. Why is trust such a fragile piece of today's AI technology landscape?

[00:33:57] Speaker B: So we've touched on a lot of the reasons earlier in the show. Trust is fragile because modern technology is often opaque, the black box problem that we started with. It's also highly interconnected, so an issue can lead to complex cascading failures. And it's rapidly evolving, so failures, data breaches and misuse can have widespread, immediate and high-stakes consequences, making recovery of public confidence extremely difficult.

[00:34:28] Speaker A: Yeah, and I think some of the challenge, too, is that, like with everything, there's a temptation to oversell: it can do everything. Some folks have their first experiences with Alexa or some of these other devices that really aren't AI in the first place, and they carry those beliefs with them. I think that balance sometimes goes over the cliff. And as folks become more aware of what real AI is and how it really can be used, they have a hurdle, a deficit that they've got to dig out of from the start. So as you're talking to folks about using AI for complex systems, where reliability and accountability are really going to be driven by its performance, how do you get them into the rhythm and belief that it's going to work, and that you're not overselling it?

[00:35:28] Speaker B: So AI really helps maintain reliability by proactively monitoring systems for anomalies, predicting potential failures before they occur, and automating corrective actions. It enhances accountability by meticulously logging every decision and interaction, providing auditable trails that can be used to understand why a system behaved in a certain way.
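That audit-trail point is worth making concrete: logging supports accountability only if each record ties the inputs, model version, confidence, explanation and action together. A minimal sketch of such a record; the field names are illustrative, not a standard.

```python
# Minimal audit-trail record for an AI decision: enough context to answer
# "why did the system behave that way?" long after the fact.
import json, time, uuid

def log_decision(inputs: dict, model_version: str, score: float,
                 reasons: list, action: str, path="decisions.log"):
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,  # which model made the call
        "inputs": inputs,                # what it saw
        "score": score,                  # how confident it was
        "reasons": reasons,              # the human-readable explanation
        "action": action,                # what was actually done
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")  # append-only JSON lines

log_decision({"amount": 900.0, "country": "BR"}, "fraud-model-2026.01",
             0.97, ["new country", "260% over typical spend"], "card_blocked")
```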
[00:35:54] Speaker A: Yeah, and I think, as we've seen, what's really fun is that now we're using AI to monitor AI in some cases, or AI to monitor some of these other complex systems that are out there. And I think that makes some people feel fine, and I think it scares some other people. So from your perspective, do you feel like AI is ready for the challenge of supporting oversight on some of these systems?

[00:36:26] Speaker B: So I think AI can play a critical role in verification and validation. AI overseeing AI, as you put it: AI can be used to model intended behavior and then continuously compare real-time system performance against that model. Tools like the explainable AI we talked about earlier provide insights into AI-driven decisions, proving that the system is following the prescribed logic and not introducing unintended biases or errors.

[00:36:57] Speaker A: Yeah, and what's really interesting, the dynamic, it always comes back to the dynamic, is that when somebody is first deploying it and the AI gets the wrong answer: oh, it's broken. It's awful. I knew it wasn't going to work. If they turn that around and it becomes, like you said earlier, a relationship, a part of the team, then you've got to train it and you've got to help it, just like everybody else. But once it's trained on your data and ready to go, it's going to make a huge difference in time, efficiencies, all of those other things, and even the decision-making processes of our other staff members. Once that trust is broken, though, it just seems like it's out the window. So how do you get folks to come back and see the light, that it was a one-time event, it's new, whatever that is, to help them buy back into it?

[00:37:54] Speaker B: So for a common example of how AI can help restore trust after an incident, we can look at large-scale IT infrastructure or cloud computing. After a major service outage or incident, AI-powered root cause analysis tools can sift through petabytes of log data far faster than humans can, allowing teams to quickly and accurately identify the exact point of failure and the contributing factors. The organization can then communicate a precise fix and a clear plan for prevention, which is essential for rapidly restoring customer trust. Manual human processes just can't match that speed.

[00:38:39] Speaker A: Yeah, and I think what's interesting, too, is that that speed is really driving a lot of things that come after it, as you were saying: the post-mortem discussions, and getting to solutions a lot faster. That's the amazing part of these technologies today. Now, we've never been in a time where you're trying to sell to your customers, but your staff is also a customer now and has to buy in to the solution. So what advice would you give a CEO who's trying to deploy something for the first time and wants to build that confidence, but has some challenges with their staff? What advice would you give them to help folks understand it and buy in?

[00:39:28] Speaker B: So I think leaders should really communicate three themes that we've talked about throughout the show today: transparency, utility and control. Transparency: clearly explain what the AI is doing and why it's being used.

[00:39:43] Speaker A: Right.

[00:39:44] Speaker B: For example, this AI is sorting our data to speed up customer service. Then talk about utility. Focus on the tangible benefits to the consumer of the use. For example, this AI reduces waiting times by 50%. And then highlight control. That's maybe the biggest one.
Assure users that there is human oversight and an accessible, easy-to-use path for review or appeal if the AI makes an error. If you really focus on those three things, transparency, utility, control, I think you can build confidence in your system.

[00:40:21] Speaker A: Yeah, that's great. And I know you talk about some of these things in the book that you have out, Todd, but why don't you tell folks about Hyperscale and why it's going to be a great read for them.

[00:40:35] Speaker B: Thanks so much. I appreciate the chance to do that. Yeah, I'm really happy with Hyperscale. It just came out, and it went to number one on Amazon, so I'm pretty excited about that. It's Hyperscale: AI, Data Centers and the Next Great Expansion of Global Energy Capacity. Hyperscale talks a lot about the uses of AI, and honestly, part of the book sounds a little science-fictiony, but it is science, not fiction. Just as we've talked about today, AI is here today, making measurable, meaningful impacts today. And the other side of it is the data centers and the energy required to power all of this AI that's becoming more and more commonplace. That's really what the book dives into. So you can pick it up on Amazon, and please do. Hopefully you'll enjoy it.

[00:41:26] Speaker A: Yeah, I encourage everybody to go get it. It's going to answer a lot of the questions I get in email about, oh, they're building a data center here, or they want to do something else. So it's a very valuable read for folks. And, Todd, thank you for being on the show. I appreciate it. I think you've added a lot of clarity and depth to the discussions and the questions that the viewing audience has, and I think the conversation was fantastic. So thank you. We have heard from Todd about AI reinforcing how decisions get made, and that you can't replace the humans as part of that, and that maintaining speed and trust and resilience is the most important thing. And to our viewers: the future of AI belongs to those who are going to use it responsibly. We heard Todd talk about transparency and intention and those kinds of things. Keep that in mind as you make your decisions about how to use it, who you're going to use it with, and what you're going to do with the data. We're going to talk more about the data in the near future. But thank you all for being here. I'm Dr. Allen Badeau. This has been AI Today. We'll see you next week.
