January 23, 2025

00:51:28

AI Today (Aired 01-22-2025) : AI Trends 2025: Ethics, Business, & Technology—What’s Next for AI?

Show Notes

Kick off 2025 with Dr. Alan Badot and Jen Gaudet as they explore top AI trends—ethics, branding, business, and emerging tech.


Episode Transcript

[00:00:32] Speaker B: Hello, I'm Dr. Alan Badot, and welcome to AI Today, where we explore AI's extraordinary potential and its ethical challenges. Thank you for being here. As we continue to look at the heart of AI, we're trying to help small businesses and everyday folks use it to enrich their lives, give them a perspective on how to use AI to their advantage, and prepare them for the bad side of AI. That is definitely one of the topics we're going to explore this week. We're going to deep dive into how hackers with AI are becoming more of a challenge and a threat to everything we do, from driving cars and the power grid to our software, our emails, our bank accounts, our phones, everything. I think that's going to help a lot of folks as they prepare to combat these threats and take as much action as they can to protect themselves and their identities. As someone who has had his identity stolen over six times by various foreign actors, and who has been caught up in plenty of these other events, it does scare me a little to consider the power of a traditional hacker armed with AI. Now, I am a certified ethical hacker. I've done this for many years, and my ability to use AI to do things it should not do is definitely dwarfed by the professionals out there using it every single day to take over a system, an email, or your banking account information. That's something I want folks to get from our discussion. This week we've got a really fantastic panel that we're going to be talking to.
Whether it's the policy needed to help protect us, the pure software element and the cyber threats associated with different systems and environments, or hearing from one of our guests about closed systems, and how these air-gapped systems are protected from traditional attack mechanisms while you still have to worry about the insider threat. One of the things AI has allowed folks to do is gain access to biometric information and passwords, or, for instance, deepfake a voice or an image. So it's not as simple as it used to be. It used to be: don't answer the phone, don't do this, don't do that. Well, it's a lot harder now if somebody has spoofed your phone number and the voice on the other end sounds like your significant other, your spouse, your children, your boss. Those are the events folks need to be prepared for. Now, I've talked about a lot of large language models and a lot of other AI capabilities that are out there, and when you start to couple artificial intelligence with cybersecurity, that is one of those cross-domain unifying events where you are scaling a capability significantly beyond what a human can do alone, and that should worry folks. Now, I don't want to scare everybody, and I'm not an alarmist when it comes to things like that. However, I am a realist, and there are demonstrated attacks on folks that have really scared a lot of the cybersecurity experts. A good example is a bank teller in Canada. I won't say her name, and of course I won't say the bank.
But she was on a Zoom call with who she thought was her boss, her boss's boss, and other folks she works with. They all had their images on there, and they all sounded exactly right. They told her, we want you to deposit X amount of money into this other account, and we have to expedite it, so do it as soon as possible. Unfortunately, everybody on that Zoom call was a deepfake. She thought to herself, that's really unusual, it's kind of strange. But my boss told me to do this, my boss's boss approved it. So she went ahead and sent a significant amount of money to an account, and like I said, it was a deepfake. That was an eye-opener for a lot of folks about how large language models and AI can be adapted at, as I always say, the speed of now, because these models have capabilities a lot of folks don't realize, and they really can impact our everyday lives. You hear about it every single day; there's a new event where somebody has hacked a new system, whether it's your mortgage provider or your cable provider, or you go to a hotel and their data has been stolen. The biggest tranche of emails I get could go in one bucket: folks are worried about their identities being compromised, and about how much data these companies ask for now while seeming not to have the protections in place to really safeguard our information. That's a concern for a lot of folks. And as we walk through the show this week, I want you to think about the impacts of having your data stolen, and what you would do to try to protect it.
Having other information compromised, whether it's a driver's license, an email, a passport: what would you do to prepare yourselves? Really, the best advice I can provide folks is this: anytime I hear something has taken place, the first thing I do is go out and change all of my passwords on all of my accounts, whether or not I have been affected. Because more than likely I have been. There's just so much information out there, so much that is shared, that it's very easy for that data to be compromised. If you assume it's compromised and you are proactive as best you can be, that's really the best way to combat those types of things. Have a couple of emails that you routinely change so that you're continuously trying to stay ahead of that curve. Meaning: assume they have it, assume it's for sale on the deep web or the dark web, and the best thing you can do is go change those credentials, because that's going to protect you as you move forward. That's the easiest way to mitigate those situations. Do not wait until you get a free version of whatever protection the hacked party is going to give you access to. I have lost count of how many free accounts at identity protection agencies I have received, because quite honestly, there was a time it seemed like it was happening about every month, from the mortgage to the cars to whatever it was. So keep that in mind as you go forward. With that, I hope you enjoy this week's show. It's really going to be exciting. It's one of my fun topics that I like to talk about.
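The host's assume-breach advice can even be automated. As a minimal sketch: Have I Been Pwned's Pwned Passwords range API lets you check whether a password has appeared in known breach dumps without ever transmitting the password, via k-anonymity. You SHA-1 the password, send only the first five hex characters, and match the remaining suffix locally against the returned list. The password below is purely illustrative; the network call itself is left out so the sketch stays self-contained.

```python
import hashlib

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    # k-anonymity split: only the 5-char prefix is ever sent to
    # https://api.pwnedpasswords.com/range/<prefix>; the 35-char
    # suffix is matched locally against the response body.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_response: str) -> int:
    # range_response holds one "SUFFIX:COUNT" pair per line, as the
    # range endpoint returns it.
    for line in range_response.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = sha1_prefix_suffix("correct horse battery staple")
print(prefix, len(suffix))  # 5-char prefix to send, 35-char suffix kept local
```

If the suffix shows up in the response with a nonzero count, that password has been seen in a breach and should be retired, which is exactly the "assume it's already for sale" posture described above.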
I will get a little more involved in it as we move forward over the next few weeks, because I do think it's important for folks to realize it and embrace it. I think you're going to like our guest panel this week. You're going to gain some knowledge, you're going to understand how to better protect yourselves, and you're going to start asking a lot more questions of our leaders, as well as the business leaders out there, saying, why are we doing some of the things that we're doing? Stay with us; we'll have our first guest in the next segment. And with that, we'll go to a quick commercial break. Welcome back to AI Today. I'm your host, Dr. Alan Badot. We've been talking about cybersecurity threats and how artificial intelligence is being used to facilitate some of the actions of the hacking community. It's really exploded recently, as we've seen. It's a huge problem for a lot of folks. I get hundreds of emails every week asking what we can do to protect ourselves and make sure our businesses are protected. And unfortunately I don't have a fantastic answer for them. I give them the rundown on the normal things we can do, but really, hackers are gaining an advantage. One of the critical things folks are really concerned about, though, is what our next guest is going to talk about. It's a pleasure to have Mr. Ben Niver back from DBT Aero. He's the chief of staff over there. Ben is a great friend of the show; he's been on before and talked about a lot of things when it comes to the aerospace industry and how AI is helping facilitate it and give DBT Aero a significant advantage.
Ben's got a fantastic background, with 25 years of experience across multiple countries, industries, and segments. Ben, great to have you back with us again. [00:12:33] Speaker C: Great to see you. Thanks for having me. Always appreciate the conversation. [00:12:37] Speaker B: Yeah, and an exciting one this week, because we hear about the power grid, and we hear in the media about what sort of protections we need and what we should worry about when it comes to systems, including the non-traditional systems we see talked about in the aerospace industry. The FAA, radars, and all those other systems are high-priority targets. From your perspective, Ben, what are some of the things that the government and private industry are doing to try to protect those systems from these next-generation hacking events? [00:13:31] Speaker C: Well, a very good friend of mine used to be the chief information security officer in the White House, and he can assure you they're trying to protect everything, with various levels of success depending on what they directly control and what they don't. There are a number of strategies they follow when they're looking at protecting systems. For some of the most critical systems they use air-gap strategies. In other words, those systems either work through a completely private network or don't talk to a network at all. So everything going into such a system has to be certified and delivered, basically on a disk or a USB key, so they can be sure that what's on that key is clean, and then they run their update.
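The certify-before-update workflow Ben describes can be sketched in miniature. Real avionics loaders verify vendor public-key signatures; the HMAC below is a simpler stand-in that shows the shape of the check, and the key, payload, and tag values are all illustrative, not anything a real vendor ships.

```python
import hashlib
import hmac

def update_is_authentic(payload: bytes, tag: bytes, key: bytes) -> bool:
    # Recompute the HMAC-SHA256 tag shipped alongside the update and
    # compare in constant time: a payload tampered with on the USB key
    # cannot carry a matching tag without the vendor's secret key.
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"vendor-signing-key"            # illustrative only
payload = b"nav-database-cycle-2501"   # stands in for the update image
tag = hmac.new(key, payload, hashlib.sha256).digest()  # shipped with it

print(update_is_authentic(payload, tag, key))         # True
print(update_is_authentic(payload + b"!", tag, key))  # False
```

The design point matches the air-gap story: because the check runs on the aircraft at load time, even a compromised courier disk fails validation rather than reaching the system.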
There's always the chance, as we saw with the air traffic control system a number of years ago, that somebody makes a fairly minor programming error. Those sorts of things do happen. But that wasn't an AI system hacking; it was simple human error in code that caused some loss of data. They didn't actually lose the data, they were able to recover it from backup, but for the pilots filing their flight plans and looking at live data to plan their flights, there was a lot of information missing for a brief period of time. A lot of those systems are air-gapped. A lot of those systems use advanced security to check the code, to make sure the code hasn't changed. When you get to our product, an aircraft, it really is an air-gapped system. When you look at a GPS in an aircraft, there isn't a way to remotely update a lot of those systems. You have to bring the update to the airplane, mechanically plug something into the aircraft, then run the update, and the software actually checks for certain validity keys in that software to be sure it's the right software, that it was created by Jeppesen or whoever it is, and that it's correct and clean. So yeah, you're right, but the challenge is you've got to protect everything. There are priorities, but the way aviation works, you can't really afford a chink in the armor. If they get in one place, they can create a lot of havoc, even though they can't get in other places. [00:16:05] Speaker B: That's right. [00:16:06] Speaker C: It's a very broad defense. [00:16:08] Speaker B: Yeah. And I think that goes to a larger discussion I have with a lot of folks: it's still driven by humans, and you have to be just as concerned about the insider threat, the person actually doing it, as about the AI itself. You know, I go back to my favorite Christmas movie. I know I'm going to get a lot of emails when I say this, but it was Die Hard 2, remember?
Them taking over the radar and everything else at the Dulles airport. And that was, oh my goodness, 20 years ago. [00:16:41] Speaker C: Right. [00:16:41] Speaker B: And it's those sorts of events that folks need to be more concerned about, I think, for these types of systems. Ben, when you talk about air-gapped systems, I think that's a really important key; you mentioned it with the airplanes. How are you all at DBT Aero looking at vulnerabilities, cyber threats, and insider threats, if we even want to get into that a little bit? [00:17:16] Speaker C: Well, I won't go into too many specifics, as you can imagine, but we have very talented IT. We have some people who are very, very experienced at securing data. So we take, again, a multi-level approach. We know the people we're issuing usernames and passwords to. We monitor, to some extent, the transactions they perform that have to do with our systems. We don't look at what they do outside, and a lot of people who work with us do other things. So we look at what they're doing within DBT Aero, in our collaborative database. We know where they go. We can tell if they're copying large quantities of data. As for email, at the moment we're in the process of significantly upgrading its security. But the email we have right now is secure, and it does protect us, to the broader extent of keeping people out unless they're supposed to be in. The IT guys can look at what's happening within the email system and identify whether somebody's doing something unusual. Now, we don't use a very advanced AI. We're a fairly small company.
But I do know, and have worked with in previous jobs, companies that specialize in network security operations, SOCs, security operations centers. And they use a huge amount of AI to fight AI. They're looking at patterns, they're looking at patterns of traffic at volume, and they're trying to understand whether something unusual is occurring within the first few seconds of that occurrence, so they can immediately block it, determine whether it's valid or not, and then either allow it or not and take the appropriate action. So yes, AI as a tool for evil is very dangerous, but it's also very powerful as a tool for good. It can look at a very large amount of data and highlight anomalies, even when those anomalies might have occurred in the past but looked only slightly different, and a human scanning an error log or a network log would never have picked them out. So there's a lot of that activity here at DBT Aero. To answer your question specifically: we monitor the systems we provide to our staff, and we encrypt what we believe needs to be encrypted. There is email with outside parties; we do have an encryption solution we can share with outside parties so that we can encrypt end to end with them. But in a lot of cases the outside party's information doesn't need to be encrypted, so we don't go through the hassle of getting them into the encryption system, where they then have to remember to use it when they're talking to us. So we try to focus our efforts on protecting our data in our collaborative database and the other places where we're storing information, so we can be very good at that. [00:20:30] Speaker B: Yeah, that's great. And I do want to say something to our audience. I did a lot of research on this over the last week and a half or so, and have followed it for a while.
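The pattern-of-traffic check Ben attributes to SOC tooling can be reduced to a toy example. A minimal sketch, assuming per-minute request counts as the only input: flag any minute whose volume sits far above the mean. Production systems use far richer features and learned models, but the underlying idea of scoring deviation from a baseline is the same, and the traffic numbers here are invented for illustration.

```python
from statistics import mean, stdev

def flag_anomalies(counts: list[int], threshold: float = 2.5) -> list[int]:
    # Flag any interval whose volume sits more than `threshold` standard
    # deviations above the mean -- the crudest form of the
    # pattern-of-traffic check a SOC automates at far larger scale.
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # perfectly flat traffic: nothing stands out
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > threshold]

# Requests per minute; minute 7 is a burst worth a second look.
traffic = [102, 98, 110, 95, 104, 99, 101, 970, 103, 97]
print(flag_anomalies(traffic))  # [7]
```

An alert on minute 7 would then feed the block-or-allow decision described above, ideally within seconds of the spike starting.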
I have not been able to find any verifiable evidence that somebody has been able to use the Wi-Fi on a commercial airliner to hack into an aircraft system. I know that would scare everybody, and every time somebody says they have, it can't be reproduced. I have not found one strong case where that has ever happened. Ben, is there anything you have ever heard of around that which supports or refutes what I just said? [00:21:20] Speaker C: Well, it supports it. Those are air-gapped systems. The public Wi-Fi in an aircraft is not used by the aircraft; the aircraft uses a completely different system. There have been numerous cases of people purposely, maliciously activating hotspots that look like the public system. They do it in airports, they do it all over, but specifically in aviation there have been numerous cases of someone activating a hotspot, and that way, if the people logging into that hotspot are not using a virtual private network solution, a VPN, the packets traveling through the hotspot can be viewed. They can collect the packets, look at what's in them, or at least look at where they came from and where they're going. Now, whenever I'm in public and I have to use a network, I'll turn on a VPN, a virtual private network. That way, all they can see is that there is traffic from my phone to some server that the VPN provides. The packet is completely encrypted, so they can't read the header of my communications to a bank; it's all encrypted inside the packet. And yeah, if they're wandering around with a high-end data system, they might spend a few years trying to break the packets open, but generally speaking, it takes too long for it to be of any value to anybody. But you're right: from a Wi-Fi point of view, no one has hacked the plane.
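Ben's point about what a rogue hotspot can and cannot see through a VPN tunnel can be illustrated with a toy encapsulation. The chained-SHA-256 XOR keystream below is deliberately not a real cipher (a VPN negotiates something like AES-GCM), and every name in it is illustrative. The observer sees only the readable outer destination, while the real destination and payload ride inside the opaque blob.

```python
import hashlib
import json

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream from chained SHA-256 -- a stand-in for the real
    # cipher a VPN negotiates. Do not use for actual traffic.
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying it twice restores the data.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"session-key-agreed-with-vpn"   # illustrative
inner = json.dumps({"dst": "bank.example.com", "body": "balance?"}).encode()

# What the hotspot observes: a readable outer destination (the VPN
# server) plus an opaque blob hiding the real destination and payload.
envelope = {"outer_dst": "vpn.example.net", "payload": xor_cipher(inner, key)}

print(envelope["outer_dst"])                          # the only address visible
print(xor_cipher(envelope["payload"], key) == inner)  # True
```

Without the tunnel, `inner` crosses the hotspot in the clear, which is exactly the bank-transaction-over-a-fake-hotspot risk discussed here.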
They just create all kinds of interesting conversation amongst passengers, or problems where you log into someone's hotspot, do a bank transaction because you think it's the public network, and that person now has a lot more information about you and your bank than you wanted them to have. [00:23:12] Speaker B: That's right, yeah. [00:23:13] Speaker C: Things to be careful of. Go ahead. [00:23:14] Speaker B: Yes, that's fantastic, Ben. Really quick: where can people find you? [00:23:20] Speaker C: The easiest way is ben.niver@dbtaero.com, and of course I'm on LinkedIn under Ben Niver, so I'd love to have people look us up, or go to the DBT Aero page on LinkedIn and follow us. We have plenty of interesting information to look at. [00:23:42] Speaker B: Yeah, that's fantastic. Thanks, Ben. As always, I appreciate you being here with us. Great conversation. I know that as we see more of these hacking events, this is going to be a continued part of our conversation, and we're definitely going to deep dive into it more later on, especially as these things start to happen even more frequently. Thank you again for that. And with that, please stick around for our next segment; we're going to be talking more about cybersecurity events and other hacking activities that are taking place. Stick with us, and we'll be right back after this short commercial break. Thank you. Welcome back to AI Today. I'm your host, Dr. Alan Badot. Thank you for being here. This week we're talking about hackers using artificial intelligence to gain an advantage and wreak havoc on our lives. We are very, very lucky to welcome back Mr. Bruce Schneier.
He is a renowned security technologist and author, a faculty associate at the Harvard Kennedy School's Carr Center for Human Rights Policy, and really one of the foremost experts in this field. So I would really suggest to our audience that you hang on every word he says, because what we are going to talk about is really going to impact an awful lot of what you do to protect yourself in the future. Bruce, thank you for being here. It's great to have you back. [00:25:57] Speaker D: Thanks for having me again. [00:26:00] Speaker B: So, Bruce, I can tell you our audience is ticked off, right? I mean, they're tired of having their identity stolen. They're tired of getting letters in the mail that say this account has been compromised, that other account has been compromised. And as I was doing a little research for the show this week, I saw that you were way ahead of this, trying to warn people of the power that AI would provide to hackers. It was your article on the coming AI hackers that really got my attention; I think that was in 2021. From your perspective, what are some of the biggest impacts that hackers using AI are going to have on our societal systems, such as finance, taxation, elections, you name it? And what can people do about it? [00:27:09] Speaker D: Well, we don't know. And the interesting question to ask is, will AI benefit the attackers or the defenders more? It's really easy to imagine ways that attackers can use AI to break into systems. It's equally easy to imagine ways defenders can use AI to defend systems. The RSA conference was a couple of months ago, and there's a show floor full of companies that are using AI technologies. Most of it's marketing nonsense, but some of it's real.
And we're going to see this shift in the attack-defense balance as AI tools come online for both the attackers and the defenders. There will be a short-term shift as different people and groups start using these tools, and there'll be some long-term change in the balance, which we don't know yet. On the attack side, we can imagine AIs writing better phishing emails. But so what? Phishing emails are already good. We can imagine AIs finding vulnerabilities in software. You can also imagine AIs fixing vulnerabilities in software, right? Having the actual manufacturer use the same AI to find the vulnerabilities before the software is shipped. None of this is new. In 2016, DARPA held an AI capture-the-flag contest. Now, capture the flag happens at hacker cons around the world: human teams compete in a simulated network to defend their piece and attack other pieces. It's been done for years. In 2016, DARPA held a contest of AIs doing it, with 2016 AIs, and there were regional competitions. I think the top eight faced off at DEF CON that year. An AI out of Carnegie Mellon won, and it became a commercial product. That kind of thing is continuing; that research is ongoing, AIs both on the attack and on the defense. [00:29:13] Speaker B: Yes. [00:29:14] Speaker D: DARPA never redid their challenge. The kind of scary news I'm going to tell you is that China does that every year. It's called the Robot Hacking Games, and we don't have a lot of details of what's done there. But that AI capture-the-flag, attack and defend, is something they are pushing research in at a rate that we're not in the United States. So that's a little scary right there. [00:29:39] Speaker B: That is. [00:29:39] Speaker D: But we don't know. My guess is that in the near term AI helps the defender more, right? You're already being attacked at computer speeds. AI gives you the ability to defend at computer speeds.
And there are some really clever startups integrating AI technology into cybersecurity. Adoption is going to be a problem; all the inertia we see in organizations is going to be a problem. But my guess is that for the next five or so years, AI helps the defender more than the attacker. So that's good news in kind of a sea of bad news. [00:30:15] Speaker B: Yeah, you're right about that, because the bad news just keeps hitting. It seems every day somebody's been hacked. During the commercial break, I went and looked at how many emails and letters I've received. I got six emails that said. [00:30:33] Speaker D: Right, but AI is not doing that. That was true five years ago, that was true ten years ago. That's the sorry state of software, which really has more to do with the economics of software. We're not willing to pay for secure software. The market doesn't reward it, regulation doesn't require it, so we don't get it. AI doesn't fix that problem. AI changes some of the economics, but if you want that fixed, that's a policy fix. That is not a tech fix. [00:30:59] Speaker B: Yes, that's exactly right, and that's where I was going to go from that perspective. You see things on different government policy committees and different activities taking place. We are definitely behind, whether it's our adversaries or even countries we're friendly with. What do we need to do better as a government to really start to get a handle on some of these things? [00:31:32] Speaker D: Well, what you're asking is very general. And my feeling is that the fact that regulation has pretty much abdicated the Internet has not been good for the Internet, that the free-for-all of the market doesn't do what's best for society; it does what's best for a bunch of tech billionaires.
And it used to be that was okay, that it was a decent proxy; that's becoming less and less true. So more regulation, even though it stymies innovation, actually is better, because when innovation can kill you, you want to slow it down a little bit. It's not turning the Internet into the Food and Drug Administration, but it is doing some things to ensure that what we're building is what's good for society. [00:32:23] Speaker B: Yeah. And the emails I get every week, too: folks are telling me, I went to sign up for a loan and they're asking me for ten times more information than they used to ask for. They're trying to get. [00:32:37] Speaker D: Because they probably have it anyway. They probably don't need it. [00:32:39] Speaker B: Exactly. [00:32:40] Speaker D: Yeah. So I don't know what's going on there, but all right. [00:32:42] Speaker B: Yeah. And they feel like their cyber identity, that information that is out there, they're being forced to put more of it out there. It's easier for everybody to get a hold of, and it's really frustrating for a lot of folks. [00:32:58] Speaker C: Yeah. [00:32:59] Speaker D: That's always been true. I mean, recently AT&T lost pretty much everybody's information, which is, like, everybody. And Equifax was 2017, which was everybody's information. This stuff happens again and again; it's not new. So yes, all this information is not under our control, and there are no regulations on the entities that have our information. It's not like a fiduciary; it's not like your doctor or your attorney or your accountant, where there are actual rules. But blame the rules here, don't blame the tech. If the rules were in place, the companies would have better tech. We have the technology.
We just don't have the incentive to deploy it and use it, and AI is not going to change that. AI will change things around the edges. I mean, I'm a little bit excited right now, because I think it'll be more positive than negative, but it's not going to change the incentive structure. [00:33:57] Speaker B: Yeah, and I agree with you. I'm more excited than worried. I've warned people before, but I'm actually not trying to scare them this week; I'm just trying to inform them and really get them excited about these things and how they can use them. From a business perspective, what is the number one thing you would tell a small business to do to try to protect themselves or their customers' information? [00:34:23] Speaker D: This is the problem: if you're a small business, there's not much you can do yourself, really. You have to trust others. So: moving out into the cloud, finding some managed service provider that will handle it. The problems are now bigger than you as a small business. I mean, it's like hiring your own private doctor; you're not going to do that. You're going to find a good medical center and go there. And we're in a world now where the expertise necessary to secure these systems is beyond the reach of small businesses. So you need to find a protector. It's a very feudal world out there. [00:34:57] Speaker B: It is. That's exactly right. And I talk about the convergence of technologies all the time, and AI and cybersecurity, I think that's one of those areas, a marriage made in heaven right there. [00:35:11] Speaker D: I don't know about that, but go ahead, whatever. [00:35:14] Speaker B: It's something, right? It's something. But really, into the future,
you're going to see our cybersecurity experts have an AI sitting right next to them, or on an interface that helps them throughout the process. [00:35:29] Speaker D: Every expert will have that. Attorneys will have that, accountants will have that. I mean, you will have that. We will all have experts, and it'll be like having a human expert. [00:35:39] Speaker B: Right? [00:35:39] Speaker D: I mean, they give you advice. Sometimes it's good, sometimes it's bad; you have to interpret it. But it scales. It'll scale in a way that will give expert advice to lots of people who couldn't afford it before. [00:35:51] Speaker B: Yeah, I think it's very exciting. So, Bruce, what's the easiest way for folks to get a hold of you if they have any questions? I get a ton of them, and I hope you get a bunch of them too, because it really is going to benefit our folks to get your perspective on those sorts of things. And what can we expect from you in the next couple of months? Anything exciting for us to read? [00:36:15] Speaker D: So first, all my stuff is on schneier.com. I don't do social media. I'm not on Facebook, I'm not on Twitter, I'm not on Instagram or TikTok or whatever the kids are using these days. I have a blog, I have an email newsletter, I have a website. Schneier.com is where everything is, and everything I write is on there. So whenever there's something new, that's where it is. [00:36:36] Speaker B: Yeah, that's fantastic. Thanks again, Bruce, for being here. For our audience: I really encourage you, go out and get the newsletter and read it. There's so much valuable information on your site, Bruce, that it really is fantastic, and every time I look, I see something else I want to read and dive into with you. So thank you again for being here. I appreciate it. With that,
We'll be right back after a short commercial break. [00:37:03] Speaker D: Thank you. Don't go away. [00:37:37] Speaker B: Back to AI Today. I'm your host, Dr. Alan Badot. Thank you for being here. As we've heard from our other guests, we're talking about various topics about, you know, how hackers and how they're using AI against us in a lot of different, a lot of different ways. And you know, this week I'm extraordinarily lucky to have, you know, Dr. Daron Acemoglu, an economist, an MIT professor, and also an AI expert when it comes to really how we are applying it to our social, our economics and, you know, really the impacts across our entire lives. Daron, thank you for being here. As always, excited to have this discussion with you. [00:38:25] Speaker A: Thank you, Alan. It's my pleasure. [00:38:28] Speaker B: So, so Daron, one of the things, you know, and I've seen some, some things recently that you have said that I, I couldn't agree more with, about how AI is not a major mover for everyday citizens. However, it seems like the hackers have a purpose and now they have a really great tool that they've been able to use. What are your thoughts on, on policy? People are ticked off, and they're ticked off having everything stolen from them. What do we got to do to put a, put a stop to this or at least slow it down? [00:39:04] Speaker A: Look, I think AI has tremendous potential, and we will only waste it if we hype it up and use it incorrectly and also leave it in the hands of people who want to use it for malicious purposes. AI is an informational tool. It can take information, it can present information and it can manipulate information. That manipulative role is what a lot of people know how to monetize. If you want to organize a cyber attack, great. The better the tool you have in your hands, the more effective you're going to be.
But if you also want to mislead people for other reasons, for example, you want to make people think that your supplement is going to cure all sorts of diseases. Well, the more you can convince people using that informational tool, the better. But also, if you think about it, there are many things we can do much better with an informational tool during our day-to-day life. I can get much more reliable information. I can find out things that I can't do right now. For example, how to fix a tire or a short circuit in, in the house. So an informational tool used correctly could be very useful for a variety of purposes. So if you multiply this to all the use cases in the production process, that's fantastic. But here's the problem with the hype. If you hype it up, everybody's going to invest right away because they don't want to be the ones left behind. And they don't know what to do with it. They're going to use it for all sorts of purposes for which it's not well designed, at least at the moment. They're going to automate a lot of things that shouldn't be automated. They're going to waste a lot of money, invest in all this technology they don't know what to do with. And of course we're going to put trillions of dollars of GPU capacity, computing power capacity and energy into these models because we want to expand them before we know what to do with them. So the hype is actually quite dangerous, except again, to the cyber attackers and the manipulators. [00:41:11] Speaker B: Yeah, yeah. I agree 100%. It is a concern when you look at using AI for things that it's not designed for. And I always tell people, you've got to have a human in the loop even when you deploy some of the things that it is designed for, because you never know what's going to happen.
I think from a policy perspective, and this has always concerned me, and I raised a flag, you know, even when I was supporting the, the government doing a lot of different things when it came to AI: how was it regulated? And it always seems like policy is lagging behind technology, which is not unusual. However, it seems like it's a little bit even farther behind when it comes to AI. What can we do about that? [00:42:00] Speaker A: Well, look, regulation is hard. Regulating anything is hard. Did we start regulating pharmaceuticals right away? No. It was quite a while after people realized that you could do a lot of damage with the wrong drugs or the wrong chemical compounds that the government jumped in and started passing some comprehensive regulation on what could be approved by the FDA, and the process and all of that. AI is much more complex. It isn't surprising that we have fallen behind, especially during a time when president after president, administration after administration emphasized deregulation. But also it's an outcome that's supported by billions of dollars of lobbying. Tech companies don't want to be regulated, and they are now some of the biggest lobbyists in the world. So lobbying requires, sorry, regulation requires a sort of a clear-headed approach. What is it that we want from these tools? What are the dangers? How can we steer it in a better direction? And I think you put your finger on exactly the right thing. We want a human in the loop precisely because in many of the use cases at the moment, there's a lot of uncertainty. The technology can misfire. But more broadly, I think if you approach AI as an informational tool, it would be most useful if we put it at the service of humans. So if we actually put it at the service of humans, of course humans are going to be in the loop, because they are the ones who are using AI. [00:43:48] Speaker B: Yeah, yeah.
And I think from that human-in-the-loop perspective, you know, I've talked to the audience about a couple of examples that I know have scared a lot of folks, especially on the financial side. And I just saw recently where, you know, certain banks and the amount of money that they lost in a certain, you know, a certain quarter. I'm trying to be very broad and not get myself into any trouble, but it's, it's really concerning to me that with the deepfakes and the number of hacks and, you know, the impact on a citizen's livelihood, you know, these financial institutions are really treating cyber hacks and that kind of information very differently than they have your traditional credit card fraud that has taken place in the past. And I think that is, that is another area that, you know, I get emails all the time about. What can I do? How can I, how can I get some help? I don't have a great answer for them, and maybe, maybe you do. Maybe you'll have a better answer for them. But what can they do? [00:44:54] Speaker A: Well, I mean, I'm not sure that I do have a great answer. But with all technologies, if you want to use them for automation, the degree to which they can become truly autonomous is going to depend on their reliability. Once robots reached a sophistication level that they could actually do the welding tasks, then you didn't need to have a human babysitting them. Today, a lot of things that are related to IT security on your computer happen without you approving each step, because that has a very high level of fidelity. AI is not there yet. It is likely to misfire, give you incorrect answers or miss certain patterns. And that raises the stakes if you want to use it for security, if you want to actually make sure that it's doing what it's supposed to do. And it increases the reason for having humans in the loop, and also the right regulatory approach, so that it doesn't completely lead to a meltdown. [00:46:05] Speaker B: Yeah, yeah.
From, from your perspective, Daron, do you think. Well, let me, let me put it a little bit differently. Where do you think we will be in the next year when it comes to policy? Do you think, with, you know, constant bickering and the things that are taking place, you know, do you think that we can get there in a time frame that will actually get something on paper that'll be meaningful, realistic, something that can be applied, that can actually start to stem some of these, you know, these activities that are taking place? Or are we still going to be in the Wild West, you know, next year? [00:46:44] Speaker A: I, I'm not super optimistic, because right now we just entered a presidential election cycle, so bipartisan bills are much harder today than they were, say, a year ago. And everybody knows the current situation of Congress, where it is very difficult to bring people from the two sides of the aisle together. On the other hand, I do see that lawmakers have become much more informed about AI-related things than they were about five years ago. And moreover, when I've spoken to people from both parties, I see some similar concerns about manipulation, about what social media is doing, mental health, joblessness, inequality. So it is possible, I believe, to have comprehensive legislation about AI. But I don't think it's going to happen very quickly. [00:47:40] Speaker B: Yeah, that's a, that's, that's a concern that I have, I have as well. And it's, I know our audience has the same concern. They're frustrated. They don't know what to do. They're scared at the same time because they're hearing alarmists talk about AI taking over the world and, and, you know, turning into the Terminator and those kind of things. I try to, I try to dispel that as much as I can, and I know you do as well. And it's just, it's just really frustrating for folks, you know, and, you know, when you don't understand the technology, sometimes you, you always think the worst.
And I think that's where we're at today. And, you know, I think we just have to, we just have to keep pushing forward and try to do the best that we can when it comes to that. You know, I really hope that we get to at least where the European Union is. And, you know, they had a great comprehensive document that they put out. It was a first. It's got a long way to go, but at least they had something down on paper that was really strong. [00:48:41] Speaker A: Well, absolutely. This is all correct. And you know why European lawmakers acted and why today American policymakers are more concerned about these issues: because they keep hearing people's concerns. So the more Americans raise these issues with their representatives, the more likely it is that we get something out of this. And exactly like you said, people having the right emphasis, the right recognition of what the problems are is central. The two poles of this are either sort of blind techno-optimism, everything's going to work out, you know, OpenAI and Google are going to come out with the perfect technologies and we're all going to be amazingly prosperous as a result, or killer robots are coming. Neither of these two poles is helpful. It's somewhere in the middle. [00:49:34] Speaker B: That's right. That's right. And I. [00:49:36] Speaker A: Sorry about that. My cough. [00:49:37] Speaker B: That's okay. That's okay. And I think. I think that is. That is really the perfect, you know, way to. To describe it. It's always somewhere in the middle. But I'll tell you, when you mess with people's money and they're scared about that, that is usually when they get the loudest. [00:49:59] Speaker A: Absolutely. [00:50:00] Speaker B: Yep. So, Daron, what's the easiest way for folks to. To get a hold of you if they have any other questions? [00:50:08] Speaker A: I think email or Twitter would be best. [00:50:13] Speaker B: And I know you have. You have a lot of things that are coming out.
You know, I see a couple of things a week, some new articles, some new, you know, interviews that you do. So I really encourage folks, you know, the easiest thing to do is put Daron's name in Google and you'll get notifications on when you can read some of his stuff or see new material that's coming out there. I really hope everybody does that. Daron, thank you again for being here. Great discussion. And I know we'll have more conversations about this as we, as we move forward and people get more mad. I think that's when it's going to be. That's when it's going to be interesting to see. So, for our audience, thank you for being here this week. I hope you learned something. I hope you're excited. I hope it gets you fired up a little bit. I'm fired up. I'm tired of having my identity stolen. I know you all are, too. Look forward to having a discussion with you next week. We'll have another great topic and another great panel of guests. So thank you and have a wonderful, wonderful week. Thanks. [00:51:17] Speaker A: Thanks, Alan. This has been a NOW Media Networks feature presentation. All rights reserved.