E255 - Don Morron, Founder and CEO of Highland Tech, AI Agents for Enterprise
[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.
[00:12] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast, where we discuss data privacy issues with industry leaders around the world, with information that businesses need to know.
[00:24] Now I have a very special guest on the show. He is an expert in business automation as it relates to AI agents.
[00:33] Don Morron, he is the founder of Highland Tech. Welcome.
[00:38] Don Morron: Thanks Debbie. Glad to be here.
[00:40] Debbie Reynolds: Yeah, happy to have you on the show. We met on LinkedIn and you actually are a podcaster as well. So you are the host of AI PhySec.
[00:51] Don Morron: It's short for physical security.
[00:54] Debbie Reynolds: Well, why don't you tell me your trajectory in tech? I actually really like your podcast. It has a really great vibe and pretty cool music too.
[01:03] And I also love your beard. People can't see that, but he has an awesome beard, so go on LinkedIn and check that out. But tell me about your trajectory and your career in tech and how you became the founder of Highland Tech.
[01:15] Don Morron: Yeah, sure. Love when imaginations can run amok on audio only, right? So visualize some awesome people here today to talk about some awesome things.
[01:26] I am a fellow podcaster. I've got the first and only podcast in the physical security industry dedicated to AI.
[01:33] And if you think about physical security, imagine, there's actually an industry for this, which is pretty wild: the industry for physical security technology and physical security experts.
[01:45] And so physical security technology is basically like commercial cameras,
[01:50] commercial electronic door access controls, where we use a key fob or card to get through a door.
[01:54] Life safety systems, like fire alarm systems, things like this can also be encompassed in this industry.
[02:01] And then you've got these physical security experts. A lot of these people are like ex military,
[02:05] FBI,
[02:07] CIA.
[02:08] They typically range into like the chief security officer type of role. But from the physical security standpoint,
[02:16] there's also some intersection of cybersecurity within the business that we're in, because
[02:23] everything we do is physical security, right? So there are Internet of Things or edge devices that have to have cybersecurity considerations.
[02:32] And so there's a blend of those two.
[02:35] Yeah, and I've served that industry now for 13 years and started the podcast about two years ago.
[02:41] Had some really excellent guests on that have ranged from consultants to CEOs of companies, to enterprise end users to several PhDs that focus on data science and applications of that within the physical security industry.
[02:58] And yeah, I recently went public with a business that I founded that focuses on solving repetitive and mundane tasks using AI agents.
[03:09] And certainly data comes up all the time. It's a huge topic around AI because it's garbage in, garbage out.
[03:16] And so what we do is we build these agents to automate business workflows for those repetitive and mundane tasks. As you can imagine, businesses are full of those.
[03:25] And we've had things like business process automation and robotic process automation for a long time.
[03:31] But those can be very expensive. And even from a BPA standpoint, this Boolean logic that says if this happens, then that happens, is an impossible feat to fully achieve within business, because business can move in different ways: up, down, sideways.
[03:48] And so what happened with the advent of generative AI is that now there are flexibility and adaptability opportunities that have taken business process automation to the next level. And RPA being enhanced with machine learning has been great.
[04:02] But again,
[04:03] you have to teach machine learning, and it tends to be more narrow in scope, as opposed to using pre-trained algorithms that,
[04:10] quite frankly, democratize automation like we've never had it before. So I saw a business opportunity. That's what we focus on.
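The contrast Don draws between rigid Boolean BPA rules and the flexibility generative AI adds can be sketched roughly like this. This is a hypothetical illustration, not Highland Tech's actual stack; the function names and the `call_llm` hook are placeholders:

```python
# A classic BPA rule: brittle Boolean logic ("if this happens, then
# that happens") that misses anything its author didn't anticipate.
def route_ticket_bpa(subject: str) -> str:
    if "invoice" in subject.lower():
        return "billing"
    if "password" in subject.lower():
        return "it_support"
    return "general"  # everything unanticipated falls through

# The generative-AI version: a model classifies free-form intent, so
# phrasings the rule author never wrote down can still route correctly.
# `call_llm` is a placeholder for whatever model API is in use.
def route_ticket_llm(subject: str, call_llm) -> str:
    prompt = (
        "Classify this support ticket as one of: billing, it_support, "
        f"general.\nTicket: {subject}\nAnswer with one word."
    )
    return call_llm(prompt).strip().lower()
```

The Boolean version routes "I cannot sign in to my account" to `general` because the word "password" never appears; the model-backed version can still recognize the intent.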
[04:18] We serve the physical security industry primarily, just because my entire network's there, but we also serve other industries like professional services,
[04:27] some healthcare, and then supply chain too.
[04:31] Debbie Reynolds: Well, wow, I didn't know about the physical security part, but let's talk about that. I have a friend that's kind of in physical security, but more as it relates to biometrics.
[04:40] Not exactly what you do, but I want your thoughts about how physical security has changed, especially as we're seeing more emerging tech being implemented in the physical security space.
[04:53] Don Morron: Yeah, so computer vision is combining the best of deep learning algorithms with large language models to enhance and really support investigations in the security world. And so deep learning will pick up on great expansive object detection.
[05:14] So today we can say I'm looking for a female,
[05:17] black shirt,
[05:19] et cetera. And you know, I can pick up on these markers to identify people. I can pick up like I'm looking for a car or that I'm looking for a truck, specifically a red truck.
[05:29] And deep learning has enabled us to do that from an object detection standpoint. But what it doesn't tell you is what else is happening in that scene.
[05:37] And normally what happens is you can't be proactive, because you're searching for it after the fact. And then you get this video and you send it off to police or insurance companies to help you deal with it.
[05:49] But when you have security operations teams, their job is to mitigate risk,
[05:54] to reduce risk. If they're in a retail environment, they want to reduce shrink, or theft.
[05:58] And so what generative AI and LLMs have allowed new emerging tech to do is combine the best of deep learning and computer vision with large language models to help these security operators understand what's happening in a scene.
[06:15] So it's not that it just detects a person or a slip and fall,
[06:19] which deep learning can do, but it might help the operator understand the scene. Like to say, hey, what happened in the scene that could have caused this slip and fall?
[06:27] And therefore it enhances and speeds up the investigation time, or you can search for things using natural language. Right. So, you know, not all object detection is captured by deep learning. Once the algorithm is made, the algorithm's made.
[06:41] Debbie Reynolds: Right.
[06:41] Don Morron: If it's only looking for people, cars, or animals and the variations of those, that's its scope.
[06:46] But what happens when that scope changes? Maybe I'm looking for a person that has an umbrella, or I'm looking for a person that's walking their dog. There's a combination of things that language vision models,
[07:00] these language vision models, can help with in investigations. And that's what we're seeing as an emerging trend right now.
[07:06] Debbie Reynolds: I guess one of the other things that interests me,
[07:08] and I want your thoughts about this:
[07:11] I've seen the technology really advance exponentially,
[07:16] being able to get a lot of this information,
[07:19] kind of like you said, combinations of information,
[07:23] information in parallel,
[07:25] parallel paths, and also more real time as opposed to kind of lagging information. But I want your thoughts about that.
[07:34] Don Morron: Yeah. So when it comes to getting real-time versus lagging information, I mean, I've found it's a tough problem to solve. It's something that we found an agent probably could do for us:
[07:44] it takes inputs and helps summarize what's happening, and I don't have to worry about what the latest is. I have that struggle myself. If you're trying to keep up, the AI space is certainly no stranger to news.
[07:56] And then the person that's trying to be in the know, to stay abreast of all of the news and how it impacts them and their organization,
[08:03] it's really tough. Right. So maybe it's something we can make an agent for.
[08:07] But I think I myself have had that struggle. Like, how do you stay up to date with a space that evolves every week? I think it takes a culmination of thought leaders sharing these things, because some of the information is gated and not available.
[08:26] Some of the best stuff is in podcasts like this, when you have a conversation with people within their domain who can share insights that you can't necessarily use Perplexity or an AI search tool to figure out.
[08:39] Debbie Reynolds: Right.
[08:39] Don Morron: And also, when you use those search tools, they're usually limited to Reddit threads and things that are fair use, as opposed to what Google might enhance with their search capabilities, which we've all been waiting for. Gemini does okay, but it could do much better.
[08:57] So yeah, I find that staying current with this ever-evolving AI community is extremely difficult. I went to the two top AI engineering conferences last year in San Francisco and New York City,
[09:08] and I just got back from the Microsoft AI Tour here in Atlanta. And yeah, as a human trying to consume that much information, probably at least on a bi-weekly basis I hit information overload.
[09:22] But what happens if you're curious, even if you get the TL;DR, basically the high points? People make these reports, so you can sign up for AI reports or whatever for your domain.
[09:32] I'm sure somebody has curated a list of the highlights.
[09:37] But curiosity strikes, and I say I want to know more about that, because that pertains to me and my business, or things I'm trying to explore and learn about.
[09:46] And so I think there will always be an avenue of having to continue to learn.
[09:50] It's just how do we consume that information? Where do we consume it from?
[09:55] There's a lot of clickbait things out there.
[09:58] Certainly I understand, even being a media person, that a lot of the information you're getting is helping to mold people's perspective around something very important. So even in my own media source, we just try to bring on experts that are certainly passionate about one thing, and we spend an hour talking about that one thing, maybe across 10 to 15 questions, because they have a history leading up to being really,
[10:23] really good at that one area. Right. So what I hope people take away from that, like any other information they get, is to say,
[10:33] hey, this is a good nugget; maybe I want to validate it myself, and if it's validated to be true, then I have a good nugget to use for myself or my business.
[10:46] Debbie Reynolds: So how does privacy play into your work?
[10:51] Don Morron: Yeah. So at Highland Tech, like I mentioned, we build agents to automate workflows. And privacy is definitely top of mind, right? We're a Microsoft, IBM, and Google reseller,
[11:03] and so we try to push those products as we can, because they build with privacy in mind.
[11:09] But that doesn't necessarily imply that it's already ready out of the box. And from my perspective,
[11:17] so you have internal and you have external customers that interact with these agents. External customers are your paying customers, right? It could be your vendors,
[11:25] Internal is your employees.
[11:26] And when I think about privacy, I think about, first off, internally, and we'll use that category: the data that exists in my business ecosystem. Microsoft calls it the Microsoft Graph.
[11:41] How is that documentation categorized, from private to public to confidential?
[11:48] That is something that existed way before AI agents.
[11:52] And it's actually even more so important now that AI agents are here because an AI agent in some respects could get access to everything if you allow it to.
[12:03] But the good news is, for companies like Microsoft,
[12:06] assuming those permissions are already in place that restrict your read/write privileges on what's private, public, confidential, whatever,
[12:14] the agent will already know that you don't have access to that information.
[12:17] So let's say you get access to an HR agent,
[12:21] and that HR agent is enriched with all this company knowledge of HR. It's got the vacation policy,
[12:28] sorry, HR policy, vacation time,
[12:31] different things that are really useful to all employees. But then that HR agent also has information to help HR managers, which could be more private information.
[12:43] The permissions that those employees already have within their tenant would decide what the agent would divulge.
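The permission model Don describes, where the agent only divulges what the requesting user could already read, might look like this in miniature. This is an illustrative sketch, not the Microsoft Graph API; `Document`, `can_read`, and the role names are all invented:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    classification: str  # "public", "private", or "confidential"
    allowed_roles: set = field(default_factory=set)

def can_read(user_roles: set, doc: Document) -> bool:
    # Public documents are readable by everyone; anything else requires
    # an overlap between the user's roles and the document's ACL.
    if doc.classification == "public":
        return True
    return bool(user_roles & doc.allowed_roles)

def hr_agent_context(user_roles: set, corpus: list) -> list:
    # Filter BEFORE anything reaches the model: the agent inherits the
    # requesting user's permissions instead of its own blanket access.
    return [d.title for d in corpus if can_read(user_roles, d)]

corpus = [
    Document("Vacation policy", "public"),
    Document("Salary bands", "confidential", {"hr_manager"}),
]
```

An ordinary employee asking this hypothetical HR agent would only surface the vacation policy; an HR manager would surface both documents. The key design point is that the filter runs before retrieval, not as a post-hoc redaction.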
[12:51] And so coming off the Microsoft tour,
[12:54] it's definitely evident to me that if people are trying to scale agents in an enterprise, they definitely have to make sure that those document protocols, if you will, between what's public, private, or confidential, et cetera, are already in place.
[13:10] And also that their data is clean; you need to remove data that's not needed.
[13:15] And I use this, actually I posted about this on LinkedIn yesterday.
[13:19] And a lot of people feel that agents are these AIs that could go rogue, that can create problems. There's some truth to that. But what happens when you combine the best of business process automation with large language models and agents?
[13:34] And when you think about agents, they have to do autonomous planning and
[13:38] decision making, right?
[13:40] So we can decide with a combination of those three things.
[13:43] When we have business process automation, Microsoft calls it conversational orchestration.
[13:47] So I can decide how this agent interacts with my external customers or internal employees,
[13:53] how it interacts with those stakeholders, in this conversational orchestration. So I'm leading the horse to water, so to speak.
[14:00] But then where do I bring in generative AI to help in that conversation dialogue,
[14:06] which would live within the confines of that automation workflow?
[14:11] And then when I want to include an agent, that's when the agent starts to make these autonomous decisions and planning.
[14:19] And I can decide where I put that in, not the whole thing.
[14:23] Debbie Reynolds: Right.
[14:23] Don Morron: The bad news is when you have the agent just do everything: you get different experiences in how you interact with the agent and in what the agent decides to do next, and there's not any control over that.
[14:33] And so what we're seeing is that the best,
[14:36] most practical agents today combine BPA with LLMs, and agents where autonomy is needed.
[14:45] And so from a privacy standpoint, I go back to that. The good news is that when you use tools like Microsoft, they're already grounded in your company data. If you've already got all of your data categorized with the right permissions, you don't have to worry about data going haywire to the wrong hands.
[15:01] And when we engineer the agents, we engineer them in mind that they're not just going autonomous as like their main goal.
[15:09] Debbie Reynolds: Right.
[15:09] Don Morron: They have to go through some steps before they get to any autonomous action.
[15:14] And then the last bit on the external customers.
[15:17] Yeah. If you're working in healthcare or in finance, so whether it's HIPAA or Sarbanes Oxley respectively, you've got some data concerns for, for customers. And those industries actually have the most impact to agents from like a financial gain and just value to the customer.
[15:40] But they also had the most risk because of the data that they're ingesting.
[15:45] And so what you'll definitely find within those. And Google just came out with announcement where I think you'll find that many of those companies will be hosting their own AIs and they will not be going to API calls.
[15:57] So. So I think there's, there's that, but there's going to need some specialized approaches for certain industries.
[16:03] But otherwise, where agents are helping is with things that are lower impact today. And when they're high impact, they can get a person involved to help carry the rest of the work through.
[16:14] Debbie Reynolds: Right.
[16:16] Don Morron: We call that human in the loop.
[16:16] And so low impact could be, for example: I have an agent. Let's say you're in this meeting today,
[16:22] you're busy,
[16:23] and someone sends you an email. You can have the agent auto-respond to your email based on whatever that person sent you. Low impact; it's just trying to help the customer see that there's a conversation going.
[16:33] Okay, fine.
[16:34] But then it can take them into the service queue or to some other department, based on whatever they're looking for. Low impact.
[16:41] But now if I have a high-impact agent that needs to order inventory for me,
[16:46] I'd rather that agent suggest an inventory list to an inventory manager, and the inventory manager checks that list and then says, yep, this is what we need to order. But the agent could have done 60,
[17:00] maybe 70% of the work.
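The low-impact versus high-impact split Don describes, with a human signing off on the consequential actions, can be sketched like this. It's a hypothetical illustration; the names and impact labels are invented, not a real framework:

```python
# Low-impact actions run autonomously; high-impact actions become
# suggestions that wait for a human decision.
def handle_action(action: dict, approve) -> str:
    """`approve` is a callable standing in for a human reviewer."""
    if action["impact"] == "low":
        return f"executed: {action['name']}"
    # High impact: the agent drafts (say, 60-70% of the work) and a
    # person carries it the rest of the way.
    if approve(action):
        return f"executed after approval: {action['name']}"
    return f"held for review: {action['name']}"

reply = handle_action({"name": "auto_reply_email", "impact": "low"},
                      approve=lambda a: False)
order = handle_action({"name": "order_inventory", "impact": "high"},
                      approve=lambda a: True)  # manager signs off
```

The auto-reply executes without ever consulting the reviewer; the inventory order only executes because the (here simulated) manager approved it.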
[17:02] Debbie Reynolds: Right? So yeah, I think two of the things that you brought up are always a concern for certain companies, certain organizations. And one of them is really having that data classification in place,
[17:16] which unfortunately a lot of companies don't do, because a lot of security in the past was kind of security by obscurity, where it was like, oh well, no one cares what marketing is doing, or only those people are in HR.
[17:30] So the fact that you now have these autonomous agents that can have access, that can be looking out for things, that have those open access channels, I think creates a concern there.
[17:42] And then also, most companies don't even know what they have. Right.
[17:47] So I think just having that planning and that thought process about what exactly you want an agent to do and what your data risks are, before you kind of move into that.
[17:59] It's probably a good first step. What do you think?
[18:02] Don Morron: 100%. And it's going to get more important that people get their data classification right. Have you heard of Model Context Protocol yet?
[18:13] Debbie Reynolds: No.
[18:14] Don Morron: So this is new in the AI development community, and it's got such high promise.
[18:21] But from a data security standpoint, there's also such high fear around it.
[18:27] So the way agents work,
[18:30] part of the agent DNA is they have three parts. One is memory: short- and long-term context.
[18:37] They've also got reasoning capabilities,
[18:40] so quote-unquote thinking capabilities, where the agent breaks down whatever it needs to do into small pieces in order to reduce hallucinations. And in some of their benchmarking, and these are just generalities,
[18:51] they hallucinate like sub-1% right now. That'll be much higher when they're actually put to work. But the idea is these things just perform better than standard LLMs that just predict, or tokenize, the next word in a sentence.
[19:07] Now, last, you have actions. Actions are like the holy grail, because think about it: you have an employee that remembers everything and is super smart, can reason through a lot of things, but does nothing with that information.
[19:20] That's like the most useless employee. That's like someone who would crush trivia night,
[19:24] but they have a mediocre job during the day. It's like you didn't use your intelligence to climb in life. It's a waste, right?
[19:31] So that's why actions are the number one component of the agent DNA that brings them to life and makes them useful.
[19:39] And so when we think about how we give them actions today is we make API calls.
[19:44] And so if you think about a CRM, for example, chock-full of data,
[19:49] that CRM has, who knows, 60, a hundred endpoints. And for the agent to do something within the CRM, every endpoint means an API call with different action capabilities:
[20:00] is it getting information, is it pushing information, et cetera.
[20:03] And then every one of those has to have an API schema written for it.
[20:09] And at scale, for an agent to be useful, that's manual.
[20:14] And it requires people today.
[20:17] And that does not feel like something that, in the world of AI, we should have to deal with. Now, insert Model Context Protocol. Model Context Protocol was made by Anthropic, and basically, the way I understand it, it hosts all of those endpoints.
[20:33] So in that same example, the CRM,
[20:36] it could host those endpoints in a Model Context Protocol server that connects to a large language model. That way, when the user wants to talk to the AI, they just say, hey, go update this lead record, instead of me as the agent developer having to create the API call for every single one of those interactions.
[20:56] The LLM is going to write the API schema in real time for every single one of those. So apart from the continuous actions, the thousand, ten thousand actions per day that we should still do a dedicated API call for, the Model Context Protocol will now take care of everything else.
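A toy sketch of the pattern Don is describing: tools registered once in a server-side catalog, then chosen and invoked dynamically from a model's structured output, rather than a developer hand-wiring every endpoint. This is not the real Model Context Protocol wire format (which is a JSON-RPC specification from Anthropic); every name here is illustrative:

```python
# Tools are registered once in a catalog the model can see; at request
# time the model emits a structured choice (tool name + arguments) and
# the runtime dispatches it.
TOOLS = {}

def tool(name):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("update_lead")
def update_lead(lead_id: str, status: str) -> str:
    return f"lead {lead_id} set to {status}"  # stand-in for a CRM call

def dispatch(model_choice: dict) -> str:
    # In a real system the LLM produces this dict from the advertised
    # tool schemas; here it is hand-crafted for illustration.
    return TOOLS[model_choice["tool"]](**model_choice["args"])
```

So "hey, go update this lead record" becomes `dispatch({"tool": "update_lead", "args": {...}})`, with the catalog, not per-endpoint glue code, defining what the agent can do. That breadth is also exactly the data-security worry Don raises next: whatever is in the catalog, the agent can call.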
[21:11] And that's amazing, because now agents are going to blow up. They're going to be in everything because of this type of protocol. And Google just came out with theirs, called Agent2Agent, which is maybe their version of that.
[21:22] It just got released this week. But from a data security standpoint, that's also terrifying.
[21:28] Because now what happens is that agent, unless we give it guardrails, has full access to my entire CRM in that example. Now, a small business owner doesn't really care, right?
[21:37] Because it's just them, the owner; maybe they have 10 employees, sellers, whatever, who get access to the CRM. I don't really care, you all see what customers we have.
[21:45] But to an enterprise company that has 2,000 sellers,
[21:50] and that maybe works in different domains, or they have different teams,
[21:55] or there are certain dashboards that are available only to the CRO or to the VP of Sales or a regional VP of Sales.
[22:02] Maybe it's activities of other sellers.
[22:04] Debbie Reynolds: Right.
[22:04] Don Morron: This is private information. Maybe that's a problem.
[22:08] And so
[22:10] I think that's the issue right now: MCP provides a lot of promise, but there are also some issues. They're going to figure it out, though.
[22:19] Microsoft has started adopting MCP,
[22:22] I think Amazon has,
[22:24] of course, Anthropic has,
[22:26] and they're going to figure that problem out pretty quickly. So what we're going to see as a result is this explosion of agents. But from a data privacy and security standpoint,
[22:35] it's going to be definitely a big topic of discussion, I think.
[22:39] Debbie Reynolds: I think so. And part of that is so funny. So I did a speaking engagement for the National Association of Corporate Boards recently, and at that conference, Amazon AWS had actually done a tabletop exercise for boards,
[22:54] where a company wants to deploy AI agents and things like that. And so we were kind of talking through the steps, the inventory, things we had to do.
[23:03] And one of the things that I said is very true for AI projects is that you really have to think about the data first.
[23:11] Right. Because if the data is trash,
[23:14] what you get is trash. Right. If you don't know what those guardrails are to begin with,
[23:19] the agent will go off the rails, or will do what you told it to do but not understand what that impact will be. So trying to really focus on what those limitations should be, what those guardrails should be, and also making sure your data is governed properly,
[23:35] which unfortunately for a lot of people it isn't, because they weren't ever using data in this way.
[23:41] Debbie Reynolds: Right.
[23:41] Debbie Reynolds: Or using technology in this way. So it creates an issue. What do you think?
[23:46] Don Morron: Yeah, I had a podcast and it was titled Data First, AI Second.
[23:50] Debbie Reynolds: Very good.
[23:51] Don Morron: So I agree,
[23:53] and you're a hundred percent right. One of the things I've noticed too, is that when you have domain experience,
[23:58] if you're building agents,
[24:00] it's way more useful than when you're a catch-all. Right? So at Highland Tech, I'm using my physical security industry experience to serve the physical security industry,
[24:09] but outside of that, we can serve other industries. But what I've noticed is that when you're building these things, like you mentioned: guardrails, and is the data good or not?
[24:19] Debbie Reynolds: Right.
[24:19] Don Morron: Is it trash? Well, when you have domain experience, you can look at it and say, this is trash, or we need to add some guardrails here.
[24:27] Debbie Reynolds: Right.
[24:28] Don Morron: But the thing is, whether it's trash or you need guardrails, having domain experience allows you to short-cycle that, because otherwise you're depending on the customer to decide what those things are.
[24:38] Debbie Reynolds: Right.
[24:39] Don Morron: And if they don't have that level of expertise, or even the agent builder doesn't push people hard enough to figure that out on their own,
[24:49] then what ends up happening is you have an agent that is potentially a risk.
[24:55] Again, it depends on what it needs to do; the task sets up the impact, which basically defines the risk. I gave the example earlier about the email;
[25:03] that's kind of low risk, it's not going to do much.
[25:05] Take the inventory suggestion from earlier, and let's say,
[25:11] I don't know, let's say its job is to go off to the Internet and make some suggestions, and your job is to buy electrical equipment, but instead it's also going to add beauty supplies.
[25:20] You know, that's a little extreme. But the point is, if you have domain experience, you know how to keep the guardrails up so that the agents perform and aren't going to go sideways on performance,
[25:34] but also so that the data makes sense. I'm working with a manufacturer that wants to build an agent that builds ad hoc reports for their customers, based on a SQL database that is huge.
[25:48] And you see, I've worked in the security industry for 13 years,
[25:52] so I know the type of data that they have. And when I read the data and we test the agent in a sandbox and the agent gives me the report, I can evaluate that report from both the consumer manufacturer's and the security integrator's standpoint, to say, is that useful?
[26:09] And if I don't have domain experience, I've got to include the customer that I'm selling this agent to, to define that for me. And that's not a good experience for them.
[26:18] Debbie Reynolds: Right.
[26:18] Don Morron: I can immediately say, yeah, this is good enough,
[26:21] and then short-cycle the QA time with domain experience, as opposed to not having domain experience.
[26:29] So some of what I'm trying to say is that domain experience is super important with this.
[26:34] I've seen other companies that have tried to build agents at scale, and I think the issue they run into is they're serving too many verticals
[26:41] Debbie Reynolds: Yeah.
[26:42] Don Morron: As their primary source of business.
[26:45] And if you don't have those domain experiences on staff,
[26:48] then it just becomes a lot harder to solve the problem with enough performance and enough speed.
[26:56] Debbie Reynolds: Yeah. And then also knowing what should be automated.
[27:00] I think that's kind of a point of wisdom that you get from someone with domain experience, because the agent will automate as much as you let it. Right.
[27:10] But you may decide, like your example about the agent suggesting that someone order supplies, or order paper. I love that example, actually, because I think that's the right way you should be using agents, as opposed to saying,
[27:27] we don't need John anymore because this thing is going to order paper. The agent doesn't know that you have a storage space with tons of paper, so you don't necessarily need to order it.
[27:38] But it wouldn't know that. So just thinking through those scenarios, I think, is very helpful.
[27:45] Don Morron: Absolutely.
[27:46] Debbie Reynolds: Yeah. Yeah. So what's happening in the world right now that's concerning you most in AI or cybersecurity or privacy?
[27:55] Don Morron: Deepfakes.
[27:56] Debbie Reynolds: Yeah.
[27:58] Don Morron: Deepfakes are a problem. And I looked into this more deeply several months ago, so things could have changed. But the deepfakes are getting so good, and the AIs that are being used to detect those deepfakes are not progressing at the same level.
[28:14] At least it seems that way. I could be wrong. If anyone listening to this has any other information, let me know.
[28:19] But, like,
[28:20] that's an issue.
[28:21] Debbie Reynolds: Right.
[28:21] Don Morron: Because you can see what's happening in the world of face swaps and things like that. Or take even OpenAI's latest release, where it can make an AI image of me that looks just like me.
[28:33] But then what happens is, as I ask it to make other images, it still looks just like me.
[28:38] And before,
[28:39] the model would give you random images of people, and it wouldn't look like you. So even from a consumer standpoint,
[28:47] I can take that image.
[28:48] Okay. Of Debbie,
[28:50] make a couple of images of Debbie,
[28:53] go into, like, a Runway AI or another video generation tool that will create a video of Debbie doing whatever I say Debbie's doing.
[29:02] And that's a problem. I wouldn't say it's great.
[29:05] Debbie Reynolds: Right.
[29:05] Don Morron: I'm not saying it's totally convincing, but what I am saying is, for the low, low price of like 20, 30 bucks a month, someone can make a deepfake video of you.
[29:13] They can also record your video.
[29:16] I made an avatar of me with about two minutes of recording, and with some of these other tools, you can do it in 30 seconds. You can have the AI learn your voice.
[29:26] So now, let's say on this conversation I'm recording you on my phone. I'm not doing that, by the way. But if I was, I could take that and upload it, because I've definitely got more than two minutes of conversation with you.
[29:37] Debbie Reynolds: Right?
[29:37] Don Morron: And clip out only the parts where Debbie's talking,
[29:40] upload that into this AI voice avatar,
[29:45] and now I've got the voice. I can create a transcription that matches with the video of you doing something maybe wrong, and then send that to the police and be like, look,
[29:55] Debbie was down at the 7-Eleven
[29:57] stealing some Slurpees. And this is her, like, yelling at the people. This is her voice, you know?
[30:01] Debbie Reynolds: Right.
[30:02] Don Morron: I've got it on camera. And I can maybe edit it to where it looks like it's on camera.
[30:04] Debbie Reynolds: Right.
[30:05] Don Morron: People can do this today with tools that cost them less than a hundred bucks a month.
[30:11] Now, there's a lot of other things involved in editing, and how convincing is it? So it's not like they can just figure it out with one prompt. They probably have to play around with these things for a little bit.
[30:21] But just the fact that you could do that and for the low, low price of a subscription is a real issue.
[30:27] Now, deepfakes scale into even the security industry.
[30:32] For example, biometrics. When you enter a high-security building, whether it's a data center,
[30:38] a health hospital,
[30:39] a large financial institution,
[30:42] the biometrics are picking up on both a 2D and 3D model of people. Old biometrics would tend to pick up on a 2D version of your face, where your face would map to a matrix.
[30:53] Okay.
[30:54] And I just interviewed a company at a security show where they said, we now layer 3D modeling on top of that.
[31:01] That way, if someone shows a picture of your face, even a high-res picture, and puts it up to the biometric, they're not spoofing the biometric with their cheap version of a deepfake.
[31:10] Debbie Reynolds: Right?
[31:11] Don Morron: Okay,
[31:12] fair.
[31:14] But what happens if, as an alternative example, someone takes a video of you, or a video of somebody who looks similar to you, and decides to use AI to face-swap it because they want to get you in trouble?
[31:28] And you know as well as I know that security footage today of someone stealing from a convenience store or somewhere else, it's not great.
[31:36] So the pixelation is so bad that even if the face swap isn't fantastic,
[31:42] if the footage already has your general characteristics, that's close enough once the faces are swapped.
[31:48] Now you have something to deal with. It doesn't mean you're going to get convicted, but it means you've got something to deal with. So imagine if someone wanted to use that against you.
[31:55] They can do that for the low, low price of a subscription today. And that's terrifying.
[31:59] Now imagine what the Dark Web has.
[32:02] Imagine what real criminal enterprises, million-dollar organized crime operations, can do.
[32:10] That is their sophistication level. I would not be surprised if many of these criminal organizations have their own data scientists on staff who are exploring deepfakes more and more.
[32:21] So that's my answer to that: deepfakes are a real problem, and it's the one thing that keeps me up at night.
[32:28] Well, not literally, but I know that only AI can detect deepfakes; people will not be able to. We're probably already at the point today where you look at an AI image and you can't tell if that's a real person who exists right now.
[32:41] Now that will extend into a 3D environment. 3D modeling is getting better with AI models;
[32:48] spatial AI is the term.
[32:50] And as that improves, those realistic images you see today that you can't tell apart from a person, video will be that way too, and you'll see it come to cinemas.
[33:03] I think Google just said that they created
[33:07] theater-grade, professional AI video that's going to be doing some of the Wizard of Oz work, or something like that.
[33:15] And so the AI is going to get so good at video generation that it will be indistinguishable from people.
[33:21] And so, yeah, that kind of stuff
[33:23] is a little concerning, but I guess I'm along for the ride.
[33:27] Debbie Reynolds: Yeah,
I agree wholeheartedly. I feel like deepfake technology was being developed out of plain sight for many years.
[33:38] So it was getting really, really good all the time, and we just kind of caught up with it. And so I agree, the detection part is lagging,
[33:48] I feel right now for sure.
[33:50] Well,
[33:51] if it were the world according to you and we did everything you said, what would be your wish for either privacy,
[34:00] cybersecurity, or anywhere in the world,
[34:03] whether that be human behavior, technology, or regulation?
[34:08] Don Morron: My wish would be that people would understand and fully embrace that AI is a new cognitive revolution, and that they would not wait.
[34:23] I would figure out how you can get involved.
[34:26] Don't push back. I know it's easy to push back. It feels uncomfortable. Change feels uncomfortable. But so did tractors when they showed up and the farmers wanted nothing to do with them.
[34:37] But that's not where we're at today, right? We're talking about an intelligence boom. I think the good news is that we'll find an abundance of intelligence that will take care of a lot of things that we don't really like to do anyways.
[34:49] The bad news is, I think people's adaptation
[34:53] will be an issue if they don't proactively do something about it. So what I would say to people is to self-reflect.
[35:01] To self-reflect is to self-correct. And if you self-reflect now on what's happening in the AI world and figure out how you can upskill and reskill, you will be in a better position, for you and your family and your friends, and you can encourage that among them in this next revolution.
[35:19] We can see it happening. It's everywhere. We can see the money that's being spent, we can see the technology advancements and how fast even hardware compute has changed. Quantum computing feels like it's right around the corner,
[35:32] at least commercialized versions of that.
[35:35] And so, yeah, I just think that unless you're close to retirement, and if you're not, you need to be looking at this and figuring out how you can exist in this new world that we're in.
[35:47] What I would hate to see,
[35:49] what I would not wish for people, is that they do the opposite: being negative about it, pushing against it.
[35:56] And if it feels uncomfortable,
[35:59] that might be a good sign that you're trying to change.
[36:03] And look at me, I'm a good example of that, right? I worked in the security industry, a completely different domain. I had no background in data science, and I went back to school and got an 80-hour certificate from MIT in low-code machine learning and AI. Then I started a podcast and had all kinds of conversations, from easy to tough, constantly reading the AI space, watching the AI
[36:30] space, and traveling to AI community events. It's not easy for me. Now, mine's a little extreme because I'm trying to make a business out of it. But I would encourage people to at least sign up for the TLDR reports to get some news in your inbox and start looking at it,
[36:46] start using some of the tools.
[36:48] You could also build retrieval agents, which are like the most basic form of agents today from a consumer standpoint. They're called custom GPTs in ChatGPT,
[36:58] Gems in
[36:59] Google Gemini,
[37:01] and, I think not Perplexity, but Anthropic has its own version. So start making your own little retrieval agents and start really leaning in, because I think leaning out is not going to be good for you, and I think it's hard to argue against that.
[37:15] Right. I'm rooting for humanity.
[37:17] And what I feel, and this is something I shared with a colleague last week when we were at a trade show,
[37:23] is that I am so excited about what's happening with AI not because of replacement of jobs,
[37:30] but because I think it's going to get us back to what people were meant to do. Maybe it will take away all of these repetitive and mundane tasks.
[37:38] So the new value of the human is who creates the best relationships,
[37:44] who creates the best emotional connections, who's the most spiritual, who's the most earthbound, right? Like Native Americans being in touch with spirituality at one point.
[37:54] That wasn't that long ago,
[37:56] but somehow over the last couple hundred years, the proliferation of technology, or the last hundred years, or even fifty, we've lost it. Social media was supposed to connect us, but it's disconnected us.
[38:06] And so I think what will happen with AI is that it will automate things. It'll be up to us to lean into our human traits.
[38:15] Don't get lazy. This is not the time to get lazy, not the time to just say, AI, clean my house, because I've got robots walking around my house.
[38:21] I think it's the time to say, how can I elevate myself in my career, and how can I double down on the human-only traits and get better? And I'm really excited about that.
[38:29] I love the opportunity where people are spiritually connected and have real friends, and we're not really competing too much or being selfish with our aspirations. We actually genuinely support people in the growth of their careers rather than making it an agenda point every time.
[38:45] And so that's just a really great future to me. So what I wish,
Debbie, is that people would lean into AI, upskill and reskill, and really learn and accept the fact that this change is here and they have no other choice.
[38:59] Debbie Reynolds: I agree with that. So I support your wish. To me, it feels almost like when we went from mainframes to personal computing.
[39:08] I remember people who folded their arms and said, I want to do everything on paper and I want to do everything manually. But there was so much data, so much information.
[39:18] We were forced into the computer age because you just could not do it that way. I feel like AI is the same way where things are accelerating so fast that you're going to be forced to be able to use these tools in order to keep up.
[39:32] Don Morron: Absolutely.
[39:33] Debbie Reynolds: Well, thank you so much. People, definitely, definitely check out Don and his podcast. Super cool. Very,
[39:40] very nice beard. I love it.
[39:43] You're doing great work and thank you so much for being on the show. I really appreciate it.
[39:48] Don Morron: Pleasure. Thank you.
[39:49] Debbie Reynolds: All right, talk to you soon.