Episode Transcript
[00:00:00] Speaker A: Welcome to Power CEOs: The Truth Behind the Business meets AI Today. I'm Jen Gode, your fearless host, joined by the host of AI Today, Dr. Alan Badot. Welcome to the show.
[00:00:40] Speaker B: Hi, Jen. Good to be here.
[00:00:42] Speaker A: Well, we wanted to bring you this joint conversation because it's real, it's relevant, and it is the hot topic of conversation: AI in business. Last week was CES, and we heard a lot about AI. We heard about AI in business. We heard about more than just a chatbot. We heard about physical AI and a lot of different changes.
So I figured that the best way for us to unpack that, Alan, is for you and me to just go there and talk about what's happening. So what's actually changing for CEOs in 2026? CES made it clear AI has moved from chat to control systems. What does that mean in plain English for a CEO?
[00:01:27] Speaker B: Well, I think they have to start looking at more than just chatbots and the things they can do with basic software on the desktop and the laptop. They've got to look at the ways, and the impacts, of taking AI out into the field. That really is going to be the driving force around a lot of this technology. How does it integrate with wearable devices? How does it integrate with robotics and all of these autonomous systems? And how does it push that whole intelligence discussion from the cloud to the real world?
[00:02:10] Speaker A: Yeah, physical AI. Everyone keeps talking about physical AI, industrial AI.
What should leaders actually picture showing up in their business in the near term when we're talking about physical AI in real-world application?
[00:02:28] Speaker B: Yeah, I think it's going to boil down to what you and I always talk about: the crossover of AI with the mobile device, AI with a new system, AI with vehicles, whatever that is. That's becoming even more significant than it was before, because it's going to start to drive margins, it's going to start to drive investment. There are a lot of things it's really going to influence that folks are just not ready for.
[00:03:02] Speaker A: You know, you're absolutely right. And let's be frank: people are still ducking their heads in the sand, pretending this is a next-year problem, or not-my-industry, or someone else has to deal with it. But the reality is I'm seeing it in real time in deals. Deals are falling apart, exits are falling apart at the 11th hour, really because they didn't think about compliance or regulatory, and they said, that's not a now problem, I'll fix it later.
So it's really starting to show up for the CEOs who haven't thought about that or haven't kept up with their systems. And I think it's really interesting what we have seen since you and I have been on the air, what is it, two or three years we've been doing this?
What we've seen is AI stop being just about productivity or just a search function. It became operational infrastructure: tools to help people work smarter and faster, but also systems. It became part of our operational systems and updates.
So while we're on that topic, if a CEO does nothing with AI, or with the updated compliance, regulatory, and privacy requirements in 2026, what is the risk they're taking on without realizing it?
[00:04:20] Speaker B: Well, I think the risk is this: we stressed early last year that the risk of doing nothing is going out of business.
Now I think the risk of doing nothing is going out of business even faster than it was before, because these are things you just cannot ignore anymore. It used to be, oh, I'll focus on this part of the business and it won't impact me, because there's not going to be AI there. Well, that has gone out the window.
That is an awful assumption. Let's just say that.
[00:04:58] Speaker A: So let me ask you this, because I do a lot of AI strategy and a lot of business consulting with founders, and one of the misconceptions that has come up time and time again is the idea that tech is just here to support humans. They're still thinking about AI like a spreadsheet, an Excel file, or a Word document.
But to me, and I'm going to answer the question, the biggest risk people are taking without realizing it is this: AI is integrated into all of your systems now. Whether you're using a Microsoft or a Google product, or QuickBooks, or something else, everybody has integrated AI into their systems.
But if we're not leveraging it and understanding where the risks are and what we have to be doing on our end as the user, it can easily replace our judgment if we're careless. And quite frankly, that is a huge risk; I agree with you that it will put you out of business even faster. To me, the scariest part is people pretending this is going away or that it's not here.
So that risk of allowing our tools to replace our judgment is one of the riskiest issues that I see in 2026.
[00:06:23] Speaker B: Yeah, I definitely agree with that, because we've seen it before. It doesn't even have to be the CEO; it can be the CTO or somebody who just doesn't have experience with AI.
They are abdicating their responsibility by just listening to the AI, because, you know, the AI is usually right. And I think that is going to exacerbate the problem as we continue on with these types of systems. Folks didn't feel comfortable with large language models, for goodness' sake. Now it's in control systems, your Word document, your phone, your car, everything.
And it's going to make it worse.
[00:07:07] Speaker A: Oh, I agree with you. So let's put this under the microscope while we're on the topic. What's the biggest disconnect you're seeing between what people think AI can do and what it can safely do today? What's real right now versus what's hype that's going to break under scale? It might work in a one-off pilot, but it can't yet work when integrated across the whole organization.
[00:07:34] Speaker B: Yeah, I think it comes down to the complexity of what they're trying to solve. People still have a false understanding that AI can solve system-of-systems problems: if I put enough AI against it and enough compute behind it, I'm going to be able to solve all those types of system problems. And fundamentally that hasn't changed: AI is just not good at solving everything. It cannot solve everything. You have to have other technologies to help it and to supplement it. And now that it's showing up in more systems, people think, oh, the AI is going to be able to answer all my questions because now it's in a system. No, that doesn't solve the problem; it's going to make it worse. I think it's that lack of understanding that is going to cause people even more problems, because not only are we putting AI into our day-to-day lives, but now we're putting it into operational systems where lives start to matter. And I hate to say it, but it's going to take a negative event, where somebody just listened to the AI and turned something off and it wound up turning off half the power grid somewhere. Something like that is going to happen.
[00:09:00] Speaker A: Yeah, I actually agree with you, and I think it's a culmination of all these things. It's: I put my head in the sand, but now I can't avoid it because it's in everything. I'm going to abdicate my decision making and my judgment, and it's going to make a decision that is wrong based on whatever data I've got that I haven't cleaned up, that I haven't checked, and I didn't check its output. And I think it will be something of an almost catastrophic nature, either for that business or for the general public.
And I think it's the snowball effect that we've seen building over time as people choose to put their heads in the sand, or pretend it's going to go away, or say it doesn't impact them. No, folks, it's here, and it's impacting everyone. It's embedded in all the things we utilize during the day. It's listening, always; learning, always.
So it's really important that we take ownership and don't let it replace our judgment, because it's really important that we are the ones making the decisions, especially as entrepreneurs.
So let me ask you: what decision, in your mind, is the most important decision for CEOs to make this year about AI that they're currently avoiding?
[00:10:16] Speaker B: Whew. You know, we always talk about people, and it's really going to be driven by people. I think CEOs are going to be forced to make decisions on who the right people are working for their company, and whether they have the right skills from an AI perspective. They were able to skirt around it before because they really were just dabbling a little bit. You're not going to be able to do that in 2026. You're just not. Re-examining your staffing profile and their skill sets, whether they're aligned to your strategic direction and the products and services you want to sell: you're not going to have a choice. As these become everyday table stakes that you have to use, you've got to have people who are willing to use them, understand them, and can really balance between the technology and the thought process. And that's going to be the struggle.
[00:11:21] Speaker A: And I would just add to that, Alan, I believe it's the culture of innovation that needs to be in each and every one of our businesses. We have to have a safe space for our people to learn the new skill set. If we are not providing that as leaders, then we are failing our people and not setting our integrations up for adoption. The decision is this: smart CEOs are fixing their data, fixing their workflows, building accountability, and shopping for the software that meets their strategic needs.
And they're not thinking about AI integration or tools; they're thinking about AI adoption. Deciding what should never be automated, which is part of your strategy; deciding who's going to be doing what and where we're going to keep the human in the loop. I think many are skipping that step, but it's going to make or break our business decisions this year. It's deciding what we are not going to allow it to do, and then having that culture of innovation and training our people, just like you said. We do have to take a brief break because we're coming up on a commercial, but you're going to want to stick around, because we are going to dive even deeper into this topic so you know exactly what smart CEOs are doing now to set themselves up for success and how to future-proof your business, after these important messages.
Welcome back to this amazing collaboration between Power CEOs: The Truth Behind the Business, with me, Jen Gode, and AI Today with my amazing co-facilitator, Dr. Alan Badot. We were talking about some of the bombs that were dropped, figuratively speaking, at CES: physical AI, and what we need to do differently this year. If you're just tuning in, smart CEOs are fixing their data, fixing their workflows, setting up accountability, and setting a culture of innovation that allows for training of their team. So they're not just strategizing and planning for an AI tool; they're planning for AI adoption over integration, because that's how we know we're going to be set for success. The other thing is they are paying attention and making sure that AI is not replacing human judgment, because that's a very slippery slope and a scary place to be.
So if you're just tuning in, you're going to want to go to Now Media TV, click on Shows, and catch the first segment, because we're just going to dive right back in. I mean, this has been really fantastic.
And so we were talking before the break a lot about what was coming, but I want to ask this question, Alan, because it's coming up in literally every strategy call and business strategy call that I'm having with my clients. What is the hidden cost of putting off cleaning our data, of putting off AI integration for another year, and also the hidden cost of not understanding what we're hoping to achieve with our AI adoption?
[00:14:50] Speaker B: Yeah, I would say those hidden costs used to be really around the technology: oh, I'm ignoring the training piece, or I'm ignoring the infrastructure costs that go along with it. That is not the case anymore. We have gone way beyond that. The hidden costs now are your ability to grow, your ability to create new products, your ability to really innovate and lead your market space. Those are costs that are hard to quantify right now, but those are the realities. No longer are you going to be able to just ignore them, try to promote your IT folks into AI spots, or ignore some of the pilots and some of those new breakthroughs in your area. Now it's really going to drive your long-term plans, because if you ignore those pieces and then try to catch up at the rate we're moving, you can't. It's just not possible. When technology is changing so quickly and you're integrating things so quickly, your ability to catch up as an organization is out the window. If you're not doing things right now, you're in trouble.
[00:16:17] Speaker A: Yeah, I would agree. It used to be that, hey, you need to get on the train or you're going to fall behind gradually. That is not the case. You're falling behind exponentially with the rate of technology, and you're not just falling behind from a tool standpoint or a productivity standpoint. You're actually starting to fall behind structurally now, because companies are rearranging how they do business using what I'm going to call virtual employees, AI agents. And we can talk about AI agents and automations in a moment.
But you're basically falling behind all your competition structurally. And I agree with you, Alan: if you want to productize something and everyone else is leveraging AI to productize faster and hit market fit faster, more effectively and more efficiently, there's no way for you to catch up, let alone keep up or get ahead.
And the other thing is, costs continue to rise. If you keep putting off cleaning your data, structuring your data, and thinking about compliance and privacy, it snowballs and becomes a bigger and bigger project over time. These things are not commoditized and going down in cost; they're going up. And I'm going to give you an example. There's a client I started working with maybe third quarter of last year, and they were storing the exact same data in four or five different places, to the tune of multiple terabytes. That cost adds up when you're talking about not one location but 576 locations internationally. It is a tremendous drain on financial resources, and the fix was simply cleaning the data. Yes, it costs to do the project: to clean the data, to have it stored in one place with one redundancy, to pull it out of all the different clouds, consolidate it, and clean it. But it saved them lots of money. So you have to ask yourself, okay, what have I been doing until now that's now broken? Because the reality is the cost of cloud storage has gone up, and it continues to rise. It's not going down anytime soon, folks.
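To make the duplicated-storage point concrete, here is a rough back-of-the-envelope sketch. All the figures (copies, terabytes per location, price per terabyte-month) are illustrative assumptions, not the client's actual numbers:

```python
# Illustrative math for duplicated cloud storage across many locations.
# Every constant below is a hypothetical assumption for the sketch.

TB_PER_LOCATION = 2        # assumed terabytes of the dataset per location
LOCATIONS = 576            # international locations, as in the example
COST_PER_TB_MONTH = 23.0   # assumed $/TB-month for warm cloud storage

def monthly_storage_cost(copies: int) -> float:
    """Monthly spend when `copies` redundant copies exist at every location."""
    return copies * TB_PER_LOCATION * LOCATIONS * COST_PER_TB_MONTH

before = monthly_storage_cost(5)   # same data stored in ~5 places
after = monthly_storage_cost(2)    # one primary copy plus one redundancy
print(f"before: ${before:,.0f}/mo, after: ${after:,.0f}/mo, "
      f"saved: ${before - after:,.0f}/mo")
```

The exact prices don't matter; the point is that savings scale with both the number of redundant copies and the number of locations, which is why consolidation pays for itself.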
So that's another thing that's interesting. And before we leave the conversation about cost, let's talk about how to think about cost when looking at AI. Because I am astounded: probably 90% of the people who contact me about AI strategy, and even business strategy for that matter, when they talk about an AI integration, haven't even thought about what kind of metrics they're going to put to it. They haven't thought about what an ROI even looks like. They just think it's a magic button that's going to print money. But they don't even know what the process or system they're trying to automate costs to begin with. Would you touch on that just a little bit? What are some of the things that the CEOs, the investors, and the executives watching need to think about before they start shopping for a tool?
[00:19:23] Speaker B: Yeah, I think people, CEOs, and even technologists are thinking of AI like it's your normal software upgrade or your software modernization strategy. And it's not. It is a merger, and you touched on it earlier. It is a merger of cultures: you've got an AI culture and you've got a human culture. How you strategize and how you plan to spend your money should be treated like a merger between two organizations, as opposed to, oh, I'm just going to upgrade my Word or my email or my QuickBooks, because those are fundamentally different, and how you handle them is fundamentally different. And as you're building that strategy, you can't do it every three years, where you say, I'm going to modernize my software, and in two years revisit it. It does not work that way. It's a continuous evaluation, just like those early-stage mergers, and that's how you have to treat them, because your cycles of development are accelerated way beyond what a lot of folks thought they would be. The training your employees need comes way faster than it used to. It's not a yearly two-hour session anymore; it's a deep dive into the technology, quarterly in some cases. And all of that has to be taken into account.
[00:20:53] Speaker A: Yeah. And I can't stress enough that culture of innovation. When you talk about treating this like a merger or an acquisition, it is absolutely that. What do we need to consider to have a full adoption process that's going to be successful? Part of that is understanding the baseline. Okay, I'm going to apply this to, say, lead generation and customer acquisition, so it's going to impact marketing and sales. Right? So what does that cost us now? The time cost, the labor cost, the fiscal cost: what are the human, relational, and financial capital needs to run that particular system today? Then, if I'm going to do an AI integration and my expectation is a 30% more effective, more efficient process, this is the dollar amount now and the hour amount now, and this is what I would expect the pilot to deliver when we run it. And if we're not meeting those expectations, don't just do a blanket rollout. Ask yourself: can this be optimized? What do I need to do differently in the planning process? So you have an expectation: okay, I want to save 30% every time I run that process. If the process costs a thousand dollars between labor and other resources now, 30% means I want it to cost only $700 once that's done, and then kind of reverse-engineer from that. If we are not doing that, we're thinking about this entirely differently than how we do business, and that's not going to work in an AI-empowered age. And it's really funny, because literally, if you're watching this and you're a founder and you're going, well, I didn't know about that: almost all founders I work with, almost every CEO I've worked with, technical teams included, don't have metrics on the systems they have currently, let alone on what they expect the AI integration to be.
So you're not alone if this is new to you or you haven't thought about it this way. But you must think about it this way, because that's the only way to really understand: is this a successful pilot, and is it scalable?
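The reverse-engineering described above can be sketched as a simple pilot scorecard: baseline the process cost first, set a savings target, then judge the pilot against that target. The function name and numbers are illustrative, not a prescribed methodology:

```python
# Sketch of the pilot ROI check described above: baseline the process cost,
# set a target savings, then decide whether to scale or keep optimizing.
# All figures are illustrative assumptions.

def pilot_meets_target(baseline_cost: float,
                       pilot_cost: float,
                       target_savings: float = 0.30) -> bool:
    """True if the pilot hit the expected savings rate (default 30%)."""
    return pilot_cost <= baseline_cost * (1 - target_savings)

# Jen's example: a $1,000 process with a 30% savings target
# should cost at most $700 per run after the AI integration.
print(pilot_meets_target(1000.0, 700.0))   # hits the 30% target: scale
print(pilot_meets_target(1000.0, 850.0))   # only 15% saved: optimize first
```

The discipline matters more than the formula: without a measured baseline, there is no way to tell a successful pilot from an expensive one.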
So I would love you to weigh in on that. We're getting close to the end of our segment, but share your thoughts, because you and I have been on some of these calls together.
[00:23:16] Speaker B: Yeah. It's amazing to me that it's very easy for CEOs to take people out of the business and say, you know, I've got two companies I'm bringing together, I want to cut my IT budget by 50%, and I'm going to take this many people out.
But then they stop there. With AI, you can look at it from the same perspective, and maybe you can even take a little more out, because you are superhumanizing the whole cognitive part of how a human makes those kinds of decisions. But they don't look at it that way. They look at it as a software integration, and it's not. Until some of those folks change their mindset, they are going to continue to struggle. The problem is you can only struggle so long, and then you're not struggling anymore because you're out of business.
[00:24:04] Speaker A: That's right. So let me ask you, I like to close on an action step in my show: what's the first AI decision every CEO watching this should make?
[00:24:13] Speaker B: Well, I think the first decision they should make is how they're going to align their pilot projects with their strategy. And if you don't have a strategy for AI, that means don't do AI, because you're not ready for it. They've got to start aligning those to their business strategies.
[00:24:33] Speaker A: I agree. For me it would be: where is AI creating leverage, and where is it going to create a liability?
Because it's that downside protection that a lot of the founders I work with have forgotten about in this equation. We do have to take a brief break, but we'll be right back after these messages, and we're going to dive into AI agents: when automation becomes a leadership issue. We're going to talk about how we're doing. Are we deploying them too early? Are we thinking this through? Are we making dangerous decisions, or are we sitting on the sidelines when we should be engaging with them? More after these important messages.
Welcome back to Power CEOs: The Truth Behind the Business meets AI Today. I'm Jen Gode, here with Dr. Alan Badot for this joint conversation. It is so incredibly important: AI is here, it's a reality, and it is evolving. What do we need to think about? Before the break, Alan, we teased this: AI agents, and when automation becomes a leadership issue.
So let me ask you question number one, because this is something that happens: companies see the shiny object, they go buy it, they immediately implement it, and all hell breaks loose. So talk to me: why do most companies deploy agents too early?
[00:26:18] Speaker B: Well, I think it's because they haven't looked at what kind of agent they actually want.
The challenge, and we talk about this all the time, is aligning your AI projects to your corporate strategy: how you're going to move forward, where you're going to implement things, and the costs associated with them. And now they think one agent is one agent, that all agents are going to be the same. And that is not true.
Oh, it's not true at all. It's scary how not true it is. And the problem is that now everybody has agents, and they think, oh, I can use my Salesforce agent to do the same thing as another agent, or vice versa.
That's what gets them in trouble, because now they're trying to go way beyond what those agents are designed for, and they don't work. And we're back to early 2025: oh, I'm doing all these pilots and nothing works. It's because they haven't looked at the agents in enough depth to understand their true capabilities. Until they do that, they're going to struggle in a lot of cases.
[00:27:32] Speaker A: You're absolutely right, and this hasn't changed: garbage in equals garbage out. Messy decisions mean you're scaling the mess faster.
Agents are scaling what you're doing. If you haven't dialed in the most effective, efficient processes, made sure your agent is following them and making the decisions you want it to make, and put the human in the loop, you're basically just exponentially accelerating a mess that you'll have to clean up. So, really quickly: what's the difference between a helpful agent and a dangerous one?
[00:28:06] Speaker B: Well, a dangerous one is one that's just been deployed and nobody is really watching. From my perspective, that is the ultimate risk for a company, a government agency, whoever is using them, because not fully understanding their capabilities is such a challenge for organizations. They think, oh, if I just tell it to do this, then I'll be okay. But you have forgotten that you need to tell it all the other things not to do, because these systems are curious. You're giving it a goal, it's trying to work towards that goal, and all it cares about is getting there; it doesn't care how. And if you ignore that piece of it, the how is going to come and bite you in the behind.
[00:28:53] Speaker A: You're absolutely right, and there's so much evidence of this. We've seen it across the news for the last couple of years; you and I have talked about it a lot. But really, the difference between something that's helpful and something that's not is guardrails.
And I would add clear accountability: who owns oversight of your agent? Because just like other aspects of our organization, if you don't have one clear owner, then when things go wrong, everybody points at somebody else and nobody takes ownership. And it's really very scary.
You need one clear owner of the agent's responsibility in that loop so that you can mitigate the risk. Is there anything else, Alan, that comes to mind? Because I'm immediately going to go into what should never be fully automated.
[00:29:46] Speaker B: Well, yeah, that's exactly what comes to mind, because now in some cases you're putting an agent inside a physical system, whether it's an autonomous vehicle or any sort of autonomous system, and that starts to become a big problem, because now you're asking, okay, who actually is responsible if something goes wrong? You can't give away responsibility.
Is it the engineer who trained the system? Is it the actual owner of the system? Is it the medical doctor who didn't necessarily make the decision but is expected to use it? There are so many questions there that it just comes back to the same thing: the more important the system, and the greater its impact, the more oversight you have to have on that system. And people are forgetting that.
[00:30:44] Speaker A: Yep, I would completely concur. And my "yes, and" to that is: there are some things we should not even be considering fully automating.
One of them to me is money movement.
Let's give the example. I don't remember, was it McDonald's? Refresh my memory. Where the system just gave somebody 250 chicken nuggets they didn't order, something like that, and they had to shut the system down. Imagine if that were your money movement, and all of a sudden it shipped $250,000 out of your business account to some random somewhere, just because it didn't have any guardrails. So I think money movement should not be fully automated.
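One simple guardrail pattern for money movement is a hard approval gate: the agent can propose a transfer, but anything over a limit or to an unknown payee is routed to a human. This is a hypothetical sketch (the limit, payee list, and function names are all made up for illustration), not a production payments design:

```python
# Hypothetical human-in-the-loop guardrail for money movement.
# The agent may only auto-execute small transfers to known payees;
# everything else is routed to a human approver. All values illustrative.

AUTO_LIMIT = 500.0                          # per-transfer cap for the agent
KNOWN_PAYEES = {"acme-supplies", "payroll"}  # pre-approved payee list

def route_transfer(amount: float, payee: str) -> str:
    """Return 'auto' if the agent may execute, else 'human-review'."""
    if amount <= AUTO_LIMIT and payee in KNOWN_PAYEES:
        return "auto"
    return "human-review"

print(route_transfer(120.0, "payroll"))           # small, known payee
print(route_transfer(250_000.0, "random-vendor"))  # the nightmare case: held
```

The point of the sketch is that the guardrail is deterministic code outside the AI: the agent cannot talk its way past the threshold.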
Customer disputes should never be fully automated. Look at the companies that let all their customer service reps go and are now rehiring them, because nobody wants to speak to a bot that's just going to quote policy at them in a dispute.
And I would also say, at the end of the day, legal disputes and commitments should always have an attorney look at what the AI did. AI has done great things, but it doesn't know exactly which jurisdiction applies, so you may not be getting exactly what you think you are, even with the clearest of commands and prompts.
[00:31:58] Speaker B: That's right. And with customer relationships, you've got to be careful. Like Air Canada, with their AI system: it wrote new policy that was not there. Those are the kinds of things you have to worry about. And I would add the whole medical diagnosis piece, that entire process. It's scary enough for a lot of folks, and if they think a doctor is not making their decision, imagine the ramifications that come along with that, and the responsibilities and everything else that falls from it. That's a nightmare.
[00:32:36] Speaker A: I would agree. So, to summarize this for all of you who are watching: where do humans have to stay involved, even if it slows things down? We don't want to go all the way on efficiency, because you run into some of these challenges Alan and I were just talking about. But to me, anywhere that trust is earned or lost needs a human touch. Because we don't trust computers, and we don't trust AI.
We trust humans. We do business with humans we know, like, and trust. So anywhere that trust is being earned, or has the potential to be lost, you need a human in the loop. Because the reality is speed doesn't matter if confidence disappears; that's your customer base gone in an instant, which kills your business. So that, to me, is absolutely an additional place where humans must stay involved. Anything else we didn't touch on here, Alan? We talked about the legal and the medical.
[00:33:37] Speaker B: I would say, like you were saying around the generalization piece: no matter how confident the AI sounds, no matter what it is telling you, there are certain things you just have to double-check or triple-check. You may have a gut feeling, and AIs can't have gut feelings. In that whole decision-making process, it's okay to have AI give you its perspective, but that's where it should stop. We've got to remember who is human and who is AI, because if we lose that, then we're no better than the AI.
[00:34:17] Speaker A: That's right. It's really funny that you said it's okay to get perspective. How do you train an AI on an SOP when gut instinct is a piece of it? You can't, exactly. So it's missing a component of decision making.
So let's talk about this.
One of the challenges for people who integrated too quickly, without setting up the right guardrails, is that AI is now scaling bad decisions. Agents are scaling bad decisions.
So talk to me about this. Okay: we're hearing you, Jen and Alan, loud and clear. We've got a clear human in the loop. And, oh, we have a problem, it's making the wrong decision. What do we do first? How do we fix this problem?
[00:35:06] Speaker B: Well, I think that used to be a little bit easier to answer, because AI wasn't so prolific in a business. Now it's almost like you'd better have a crisis management team on speed dial, because it is no longer a technology problem. In most of those cases, it is a relationship problem or a customer problem, and that's not something that technology is going to be able to solve. And that is where we are getting to, because, as we always talked about, there's FOMO, fear of missing out: I'm going to accelerate, I'm going to do all these things, I'm going to get ahead of the game.
You know, sometimes it's okay to be second, because there are a lot of lessons learned that folks can bring to bear just by watching everything else that's going on. You're going to see a lot of things that you're not going to want to repeat, and AI is a perfect example of that.
[00:36:11] Speaker A: I love bleeding edge. I love being first. But in a sandbox, not in a go-live scenario.
And so that's one of the ways to mitigate this. If you're thinking about this and now you're scared, don't say, well, I'm not going to implement, because that's the wrong answer. You will be left behind and be out of business if you don't think about this. But start in a sandbox.
Fix the problems first. If something's not giving the outcome that you want, don't scale or multiply that bad result. Fix the decision logic. Why is it coming to this erroneous decision? What is happening? Fix that in the sandbox.
Because all AI does right now is multiply what already exists. If we put it in the system, it's going to multiply what's in that system. So test it in that sandbox, that safe area, first, and then go. And then I have one question that I'm going to leave you all with today, because we unfortunately do have to break for commercial.
The one question that reveals readiness for agents: what happens when it's wrong?
If you don't have an answer for this, you are not ready.
Alan has said this in a couple of different ways, and I have said this in a couple of ways, but the reality is, no matter what you're doing, especially with AI agents or anything you're doing with technology: what happens when it's wrong? If you can't answer that, stop. Do not press go. If you don't know how to figure out the answer, contact me, contact Alan. We're both on LinkedIn. We are on Facebook. I have a group called Power CEOs that Alan is going to be in as well.
Connect with us, ask us the questions. We're happy to answer those questions for you, but stop. Do not press go until you can answer that question. We will be right back after these important messages.
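The readiness question the hosts keep returning to, "what happens when it's wrong?", can be made concrete in code. This is a minimal sketch under assumptions: the `Decision` class, the confidence threshold, and the escalation message are all illustrative, not any real agent framework. The point is structural: every agent action has a named human fallback path before anything ships.

```python
# Hypothetical sketch: every agent decision is routed through a handler
# that either executes or escalates to a human owner. Nothing proceeds
# silently when confidence is low.

from dataclasses import dataclass


@dataclass
class Decision:
    action: str        # what the agent wants to do
    confidence: float  # the agent's own confidence score, 0.0 to 1.0


def handle(decision: Decision, confidence_floor: float = 0.85) -> str:
    """Route an agent decision: act, or hand it to a human. Never a dead end."""
    if decision.confidence >= confidence_floor:
        return f"EXECUTE: {decision.action}"
    # This branch is the answer to "what happens when it's wrong?":
    # a human owner reviews it, on record, before anything happens.
    return (f"ESCALATE to human owner: {decision.action} "
            f"(confidence {decision.confidence:.2f})")


print(handle(Decision("refund $40 to customer", 0.95)))
print(handle(Decision("rewrite refund policy", 0.60)))
```

If you cannot fill in the escalation branch for your own system, that is the signal the hosts describe: you are not ready to press go.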
Welcome back to Power CEOs meets AI Today. You are on Now Media. If you're just tuning in, you can catch the podcast version and the prior segments at NowMedia TV: just click on Shows, scroll down to the show that you want to watch, and catch up. It has been epic. We have talked about what's happening with AI in 2026. What does every leader need to know about AI?
What's coming down the pipe, what's here now, what's real. We dove deep into agents and the do's and don'ts of that. And before the break, we ended with the one question that tells whether you're ready for agents or not, and that's: what happens when it's wrong? If you can't answer that, stop. Do not press go. Do not collect $200. You are not ready to put an agent into that system if you cannot answer that question and have clear ownership and oversight of that agent. But we're going to close today's episode, Alan, I think, with something that is very strong, real, and relevant. Want to talk about privacy, compliance, and the cost of "later"?
You know, privacy is a hot topic right now. Very hot topic. And people think it's a legal issue, but it's not just a legal issue.
So can you talk to me a little bit about that? Because the reality is tech engineers develop because they can. They're not thinking about the risk, they're just seeing what can they do. Because they're truly scientists at heart. They're innovators. What can we do?
Legal shows up after the damage is already done. So there has to be something in between that process. So why isn't privacy just a legal issue? And what do we need to know about it? What can we do about it?
[00:40:08] Speaker B: The biggest thing is, yeah, you're exactly right. It's not just a legal decision. It is an architectural decision: how you are fundamentally designing your system from a security perspective, from a data management perspective, from a privacy perspective. All of those have to be baked in at the beginning, because you cannot put them back in the box afterwards. That's the biggest challenge that folks have: they think it's an afterthought. And it cannot be an afterthought, because the systems are changing so rapidly, the software, the tools, the numerics around that, that if it's not fundamentally in there from the beginning, you're trying to bolt those things on after the fact, especially when you're in the airplane. I always use the analogy: oh, I'm flying the airplane while I'm putting the wings on.
[00:41:08] Speaker A: That doesn't work.
[00:41:10] Speaker B: And especially from a compliance standpoint, and especially with the complexity of the data challenges that we're seeing now, it just is. You're asking for a disaster.
[00:41:21] Speaker A: Well, and I just want to highlight entrepreneurs, founders, what is it that we do?
We jump out of the plane and build the parachute on the way down. That's how we operate. But we cannot operate in that manner when it comes to privacy, data management, and compliance. So if you're listening and you're a founder: listen, I hear you, I feel you. I am a founder too. I am an entrepreneur at heart.
But this is something that we have to truly strategize and think through. And 95% of your AI implementation and integration happens in the planning phase before we even think about a tool.
It's all about designing the system properly, so we protect our clients' privacy and our own, we manage our data in a way that's ethical and compliant, and we think about what's going to be important. Because let me tell you, trust is everything. People are losing trust left and right right now when it comes to AI integrations, and in business in general. And trust is very hard to regain once it is lost. So please take heed, and bake this in at the beginning. So where, Alan, would you say that companies lose control of their data? Where are the most common places?
[00:42:41] Speaker B: Well, I would say usually it's really in the training phase. And I'll give you an example.
You know, I got one of the NVIDIA DGX Sparks, and it's about this big, and I can run a petabyte of data through it. It's a petabyte, and it sounds so exciting. Five years ago, a petabyte was huge, and now I've got one on my desk, it's this big, right? And now I think I can put all this data through it, and I can do more, and I can store more, and I can compute more.
And yeah, that just means that you've made your life a lot harder. If your data is wrong, if your algorithms are wrong, if your strategy for handling security is wrong, you've just generated tenfold what you have to go through and what you have to try to fix.
And from a data perspective, they always think: oh, I can clean it, I can do this, I can do that with it. And no. If you had a hard time cleaning it before the AI, imagine after the AI. Now you're really in trouble.
[00:43:51] Speaker A: Caused a problem.
[00:43:52] Speaker B: That's right. And that's usually where you know that folks still have this reliance that their data is good, their data is clean and they've got a lot of data. And in an AI age, and it's, it's, it's only worse. It's much, much worse. And then you use that to train your models and stuff. It's just, you know, like you said, it's going to snowball and it's going to explode.
[00:44:16] Speaker A: Yeah. And I want to hit some other leakage points that I've seen as well. I agree with you on the training phase, but another place that companies lose control of their data is their vendors. How many times do we click agree, agree, agree? You can't just blanket-click agree to everything that comes across, because that's you agreeing that they can use your data, and the data you have stored in their systems, for training and other things. Another place I see it is when you're doing an integration, an API-to-API connection. Well, sometimes financial data is in one place and protected medical information is in another. Do you really want those things talking to each other without putting a control in place? If it's not mapped, it's not controlled. So before you just slap an API on something, think about what you're actually letting it use or see, and whether you're in compliance. This is exceptionally important in medical, legal, financial, any of these heavily regulated industries.
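The "if it's not mapped, it's not controlled" rule can be sketched as a field-level allowlist applied before any record crosses an API boundary. Everything here is illustrative: the field names, the record, and the `ALLOWED_FIELDS` set are made-up examples, not a real integration. The technique is simply to declare which fields may leave, and drop the rest by default.

```python
# Hypothetical sketch: only explicitly mapped fields cross from the
# medical/finance system into the billing API. Unmapped fields, including
# protected health information, are filtered out by default.

ALLOWED_FIELDS = {"invoice_id", "amount_due", "due_date"}


def export_record(record: dict) -> dict:
    """Pass only explicitly mapped fields; everything else stays behind."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}


patient_invoice = {
    "invoice_id": "INV-204",
    "amount_due": 150.0,
    "due_date": "2026-02-01",
    "diagnosis_code": "J45.909",  # protected health info: never leaves
}

print(export_record(patient_invoice))  # diagnosis_code is filtered out
```

The design choice is deny-by-default: a new field added upstream is invisible to the integration until someone consciously maps it, which is exactly the control point the hosts are describing.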
[00:45:22] Speaker B: Yeah. And one of the things I tell people, Jen, is that you have to treat these agents as insider threats.
You know, the agent doesn't live next door to you, so you're not going to hurt its feelings by treating it as an insider threat, assuming that every piece of information you give it could end up somewhere you don't want it to be. If you start from the premise that it's going to leak something, that it's going to do something with that data that you don't want it to do, then you're much more protective, and you are forced to build in compliance and security at the beginning. And folks don't think about it like that.
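"Treat the agent as an insider threat" translates, in practice, to deny-by-default access: the agent can touch only what was explicitly granted. This is a minimal sketch under assumptions; the `GRANTS` table and scope names are invented for illustration and stand in for whatever permission system your stack actually uses.

```python
# Hypothetical sketch of deny-by-default agent permissions. Note what is
# absent from the grant list: no write scopes, no customer PII, no
# financial data. An unknown agent gets nothing at all.

GRANTS = {
    "support-agent": {"read:faq", "read:order_status"},
}


def agent_can(agent: str, scope: str) -> bool:
    """Deny by default: an agent may use only explicitly granted scopes."""
    return scope in GRANTS.get(agent, set())


assert agent_can("support-agent", "read:faq")
assert not agent_can("support-agent", "read:medical_records")  # never granted
assert not agent_can("unknown-agent", "read:faq")              # unknown agent: nothing
```

Starting from an empty grant set forces the "could this leak?" conversation for every scope you add, which is the insider-threat mindset Alan describes.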
[00:46:00] Speaker A: You absolutely have to. Thank you for that. And can we touch on unused data for a minute? A lot of people have all this unused data, or old data, just sitting there, and they think it's an asset. But really it's not. It's a risk, a future risk. Because if you decide to put an integration on that Google Drive that has stuff in there you're really not sure about, because you've just kept adding to it for the past 18, 25 years, you could have things exposed that shouldn't be in there. So how should leaders think about unused data? What should we be doing, or what can we start to do, to clean that up, if you will?
[00:46:43] Speaker B: Yeah, I think, you know, there are certain regulations for regulated businesses about how long you have to keep some data. I get that, I get all that.
Seven years in some cases, right, if it's a legal thing or medical. I get that, that's great. But man, do you really need 20-year-old data just because that's where you started? Do you really need something that old? Heck, even two-year-old data you start to have a hard time justifying, because the technology has changed and your customers have changed. So how relevant is it, really? It's probably not.
So get rid of it. I mean, throw it out. Okay, cleanse it the right way, but don't let it just sit there, because it's potentially personal information, potentially financial information, as you said. It really is a liability, and it gives hackers and attackers one more attack vector, one more way to get access to something that is going to cause you heartburn.
Don't keep it. I mean, that's the strategy I like to use. Don't keep it if you don't need it.
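"Don't keep it if you don't need it" can start as something as small as a retention sweep. This is a sketch under assumptions: the seven-year window echoes the retention period mentioned above, but regulated data may have its own schedule, and the function only lists candidates for human review rather than deleting anything.

```python
# Hypothetical sketch: list files whose last modification falls outside a
# retention window. Review the list, confirm nothing is under a legal hold,
# then delete. The window below is an example, not legal advice.

from datetime import datetime, timedelta
from pathlib import Path

RETENTION = timedelta(days=7 * 365)  # e.g. a seven-year retention window


def stale_files(root: Path, now=None) -> list:
    """Return files last modified before the retention cutoff."""
    now = now or datetime.now()
    cutoff = (now - RETENTION).timestamp()
    return [p for p in root.rglob("*")
            if p.is_file() and p.stat().st_mtime < cutoff]
```

A periodic run of something like this turns "we'll clean it up someday" into a standing list of liabilities with dates attached, which is usually what it takes to actually throw data out.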
[00:47:53] Speaker A: I love that strategy. And we are getting to the end of our episode today, so I want to quickly summarize what Alan just said and what we've been talking about this segment. What does privacy by design look like in your business? It looks like collecting less: you're collecting only what's relevant. Storing less: you're only storing what's relevant to what you're going to use. And limiting access: we didn't talk about that today, but we've talked about it before. Limit who can access this, and go for that simple win. And how does compliance become an advantage? Because when you have compliance and privacy baked in, you build trust, and trust closes deals. Enterprise buyers notice. They know if you are security compliant or not, and they'll ask to see those certificates. If you're working with a vendor who can't provide that, then you're not really secure.
So it's important that you know this. And then I'm going to go with the one practice to stop immediately, because we believe in action steps here and doing the thing. On the data side: stop hoarding data "just in case."
The risk always shows up before the value does if you've got data that you're not sure how you're going to use. So let's stop doing that. What's your one lightning-round practice to stop immediately, Alan?
[00:49:08] Speaker B: Well, I think the hoarding data one is perfect. You stole that one. Because we used to say, oh, I've got all this data that I can train on. How about now you say: I've got relevant data that I'm using to train on, and I don't need the costs that go along with the rest. That's perfect. That was my go-to.
[00:49:30] Speaker A: All right, well, we agree on the action step: stop hoarding data. Only keep what's real and relevant; clean out the rest. Unfortunately, all good things come to an end, including this show. But you can catch my show on Mondays, Power CEOs, the Truth Behind the Business, and Alan's show. When can we catch your show, Alan?
[00:49:46] Speaker B: On Wednesday evenings at 5 Central and.
[00:49:49] Speaker A: Then you can catch both of us on LinkedIn. Be sure to join the Power CEO's Facebook group because Alan will be popping in there as well. And we will have something fun for you on LinkedIn. So make sure you click follow and notify us when Alan and I post and look for a group there that's going to share all things relevant about business and AI. As always, we want you to win today, win this week, and we'll see you same time, same station next week. Have an amazing rest of your day. It.