E285 - Michael Simon, Law+Data, LLC (Privacy and AI)

[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.

[00:12] Hello,

[00:12] my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks privacy podcast, where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.

[00:25] Now,

[00:26] I have a very special guest on the show, a longtime friend, actually,

[00:30] Michael Simon. He is the attorney owner of Law + Data, LLC

[00:35] Also, I'm going to throw some other stuff in here about you, Michael.

[00:40] You are a privacy expert. You're witty, extremely funny. As people will find out, you're into AI ethics and governance.

[00:51] You are a Fellow at ForHumanity. ForHumanity was doing things about AI auditing before it became the hot rah-rah thing, which is cool.

[01:00] Also, for the American Bar Association, you're the co-leader of the Business Law Consumer Privacy subcommittee, and an ediscovery expert.

[01:09] Welcome, welcome.

[01:10] Michael Simon: Well, thank you. You can even throw all my dog rescue stuff in there. But we'd be here all day on that and people will be like, dogs? Did we tune into this to listen to dogs?

[01:19] So just contact me on the side. I'll get you a rescue dog if you're looking for one. Yeah,

[01:23] the offer stands.

[01:25] Debbie Reynolds: That's an excellent shout out. That's an excellent shout out. Well, your career trajectory has been really fascinating.

[01:31] First of all, you're incredibly well versed in this universe of technology and law.

[01:38] You always pose very interesting questions. There's never a boring thing that you ever post, which I actually love,

[01:45] but you always inspire a lot of dialogue and communication with the things that you post.

[01:51] And, yeah, I just love to hear your mind about the things that you're interested in. But, yeah, tell me a little bit more about you and why you're interested in this area.

[02:01] Michael Simon: I went through some of your podcasts and I realized I had to have an origin story.

[02:05] And now the pressure's on. I gotta have an interesting one. And I'm not sure I do.

[02:09] I feel like, you know,

[02:11] if this was the Marvel Universe, I would have been bitten by a radioactive snark or something.

[02:16] I don't know. I'm not sure where it all comes from. But, you know, I had a younger brother who was,

[02:21] well, he was one of the original hackers out there. There were always computers in the house, sometimes being put to good use. Let's just say sometimes. And so I grew up with that.

[02:31] And I guess that gave me the ability to work with computers and understand them and understand data and, you know, that seems like something we need very much right now in the legal profession.

[02:44] And it relates to some of the stuff, to where you and I met in the ediscovery world, with our mutual friend Paul Staradon. We all met there,

[02:53] and we learned some of the early lessons about dealing with data and ediscovery in some ways.

[02:59] So I don't know if that's a great origin story, but I used to have on my LinkedIn, before wiser heads prevailed and made me change it to something serious, my little blurb: nerd turned lawyer turned nerd turned lawyer-nerd.

[03:14] That was kind of my whole journey there. So eventually I was told, no, you need to get people to, like, hire you. And they're gonna look at that and go, no.

[03:23] Debbie Reynolds: Well, I've been really fascinated. So I'm a technologist, so for me,

[03:28] I'm like the center of the wheel as it relates to data. And I kind of have gone in all different directions.

[03:34] But I think right now we're seeing, like, such a.

[03:40] I don't know, a mashup, for lack of a better word,

[03:43] of like,

[03:44] law and data and AI and privacy. And so things are kind of mushed together because I think a lot of times those conversations have happened in their own silos, and now they're crossing together and they're converging.

[04:01] But I want your thoughts.

[04:03] Michael Simon: Yeah. Well, first of all, I should make sure I say I am really thrilled to be here. And I probably should have come on sooner. I listened to the one with my fellow ForHumanity folks, with Dr.

[04:12] Miller, and she was so thrilled to be here. I think I've known you too long, though, to do, like, the total fangirl thing. But I am thrilled to be here.

[04:20] I am thrilled. I want to make sure I note that. And then I'll say, if we're going to talk about a mashup, the good news is at least you don't have to use a less polite word that also has four letters and "up" as the second word, because it seems like sometimes we get that way with the law.

[04:33] We need to get a lot better at this.

[04:36] We really do.

[04:38] And I just got back from an ABA event in Cork. Maybe I shouldn't have played the monster role so much, but I was telling a lot of attorneys, we need to get better at this.

[04:49] We need to understand what we're dealing with with AI. We need to understand the implications. We need to understand how to use it better.

[04:56] Even some of the lessons from ediscovery we need to use better. So I

[05:00] try and help my clients with privacy issues and AI issues. But I also try and do my best with the profession, to see how we can do better with things.

[05:10] Debbie Reynolds: What's your thoughts about law,

[05:13] legal and privacy? So here's my thought and I want your thoughts. So a lot of times when people say, oh,

[05:20] privacy, all we need are better regulations, or, you know. So obviously regulations are being trotted out and different things; there's nothing wrong with regulation.

[05:29] But I feel like organizations that do privacy right need to be more proactive as opposed to lurching from one new law to the next. Right. Because I feel like law isn't a shield, it doesn't prevent things from happening.

[05:44] And so you kind of need that more proactive stance. And I feel like to me that's where one way that AI and privacy parallel in my view, where it's best addressed more proactively as opposed to like, oh my God, something bad happened, you know, Timmy's law or something,

[06:02] you know. So what, what do you think?

[06:05] Michael Simon: That's an interesting take. I had never thought about it that way. I have to admit. I will say this. Looking at the regulations, yeah, we're layering on a lot of levels here.

[06:14] But in some ways part of the problem has been that the regulations have not been in many ways very effective.

[06:20] You look at where we start: 1973, the Fair Information Practice Principles,

[06:25] federal privacy laws.

[06:27] Back then, they're all notice and consent based. You look at COPPA, you look at HIPAA, you look at even the bulk of the CCPA, the California Consumer Privacy Act. They're notice and consent,

[06:40] and then the rest follow, as we love to say in privacy, our little insider term, the failed Washington bill model, because everything other than California is based upon this Washington State bill that didn't pass three times.

[06:52] And that became the template for Virginia, Colorado, Connecticut.

[06:57] Again, notice and consent, notice and consent. As long as I can work with my clients to make sure we got the right notice at the right time to the right people and they click the right box and we're good.

[07:08] You know, we gotta make sure we've got it at all the various handoffs. And when we're selling or sharing, we're doing the right thing. But as long as you get that, you're good.

[07:16] We're seeing a move away from that now. We're seeing things like Maryland's MODPA. There's also an amendment for Oregon, the Oregon Consumer Privacy Act; it was HB 2008:

[07:32] no selling or sharing certain types of data. MODPA says you can't sell sensitive data; the OCPA now says you can't sell or share precise geolocation data. You can't sell data of kids.

[07:48] These are becoming new absolute prohibitions. And I think they're in response to the fact that notice and consent is really broken.

[07:56] It just doesn't work anymore. It broke during the Internet and social media age, where we invented that terribly cynical statement: you know, if the product is free, you're the real product.

[08:10] People were willing to trade for that. Hey, just get me on Facebook. Awesome. They can collect whatever they want from me. It's free.

[08:17] And with AI, it becomes an even bigger, more extended problem. So I do think we're moving away from that, and I would ask you to give these new developments more of a chance.

[08:26] But I will also say that I do agree with you at the same time that we need to be better about this just in terms of how we value our privacy and how companies take this seriously.

[08:41] We often see repeat offenders. Well, I'll stop there. I'm saying too much. I'm going on a monologue, and so I'll stop.

[08:47] Debbie Reynolds: You're monologuing.

[08:48] Michael Simon: You knew that of me. You knew I'd do this.

[08:50] Debbie Reynolds: You're monologuing. Oh, my gosh.

[08:52] Michael Simon: Yeah, you caught me there. There we go. Yeah. The real question, when I talk about the origin story, is: is it a hero origin story or is it a villain

[09:00] one? We don't know yet, do we?

[09:01] Debbie Reynolds: So,

[09:04] I want to talk a little bit about what you say about the prohibitions. And this body of,

[09:13] I don't know what to call it, the amalgamation of thought around prohibitions,

[09:21] to me, in my view,

[09:23] is around data that now sits in a bucket called sensitive data.

[09:30] And so location data traveled into that bucket.

[09:35] And part of that location data traveling into that bucket was Roe v. Wade being struck down.

[09:43] But now we're seeing a lot more effort put into that as a result of what's happening nationally with immigration.

[09:52] I just want your thoughts about that. So to me, making location data sensitive data has kind of awakened the beast,

[10:02] so to speak, around sensitive data. We've all known that things like biometrics were considered sensitive, but now location is considered sensitive too. And I'm interested in what will end up in that sensitive bucket in the future.

[10:15] But what are your thoughts?

[10:18] Michael Simon: I got a lot of thoughts. Let's see if I can do them quick.

[10:21] I agree. I think we're going to see a lot of things end up in that sensitive bucket. And I'm keeping with the mashup; I like that word. We can do that.

[10:30] I'm not going to pronounce amalgamation right enough times, so I'm just going to go with mashup. I like that.

[10:36] Of course, the biggest area of sensitive data has got to be kids.

[10:40] We're all worried about our kids using these things.

[10:43] I'm lucky to be through that now. We managed to get our daughter through the social media era largely unscathed. Though if you ask her, she might disagree. But that's really got to be the number one.

[10:55] And that's where we're seriously seeing the most action this year. I just described 2026, again at this event, as the year of hell for children's data.

[11:06] If we're going to go with a song mashup like I do in my LinkedIn stuff, extra points,

[11:10] folks, if you call in and identify the singer. At the same time,

[11:14] I will posit one cautionary note about sensitive data, because it seems like a lot of things are being put into it. Professor Dan Solove, who we all know, a big name who writes amazing stuff, wrote a paper last year basically saying everything is sensitive data.

[11:31] And the reason everything is sensitive data is that you have inferences.

[11:37] And what is AI? What is AI doing really when it works with us and it gets information about us? It's creating inferences.

[11:45] We have these amazing systems that can generate inferences from almost nothing. I saw something a couple of days ago where researchers proved that basically de-identified, pseudonymized data, speaking of words that are hard to pronounce,

[12:02] these are concepts that practically don't even exist anymore, because AI can take even non-personal data from you and figure out who you are and some of your most sensitive information from that.

[12:15] So we're finding ourselves putting more and more things into that sensitive data bucket and trying to protect it.

[12:23] And yet at the same time, AI is making it harder to protect. So again,

[12:29] where do we go? I don't have any easy answers. I wish I did. But we are going to need to confront this issue,

[12:36] I guess to give a summary, make it more coherent.

[12:39] The protection of personal data was not enough, so we created this extra special category sensitive data. And where do we go from there when that starts eroding? I don't know.

[12:49] Debbie Reynolds: Fascinating. I did a video a couple years ago about inference and the problem with inference.

[12:56] I want your thoughts here. So this is the problem with inference that people aren't really talking about. So the way the laws are written it's like, okay, you have data,

[13:08] you give data to somebody. Right.

[13:10] Or you share data with someone, and then this company or organization has obligation to protect it in a certain way. Right. And so inference is the data sort of about the data.

[13:23] It's almost like the metadata of AI and different things. Right.

[13:27] But a person doesn't own their inference.

[13:34] Right. So the inference about the person belongs to the company that made the inference, and so you don't own it. So even if you opted out or asked things to be deleted, the only thing that they're deleting or you're opting out of is the stuff that you gave them initially.

[13:50] Michael Simon: Right.

[13:51] Debbie Reynolds: But we know that these systems are powered by these inferences, and we know that decisions are being made about people from these inferences.

[13:59] So I feel like there's a whole universe of things that the laws aren't looking at,

[14:08] and part of it is the legal right of the individual about those inferences. But what are your thoughts?

[14:15] Michael Simon: You're absolutely right. And let's be clear. Let's escalate this. You know, when we're talking about decisions being made about people,

[14:21] these are decisions on, like, whether you get a job, whether you get a loan, whether you get to live somewhere,

[14:28] whether you're going to jail.

[14:29] These are big decisions. So we're not talking small stuff here. I know a lot of people going, maybe it changes what I'll see on Facebook. No, this is AI powering some really important stuff.

[14:39] The law needs to get there, and we need to help the law get there. You know, I see one potentially good step. Connecticut has massively amended the Connecticut Data Privacy Act with SB 1295, which

[14:55] becomes effective July 1,

[14:58] and that includes the right to access any inferences that have been generated about you.

[15:06] So that's at least a start that gets you past just the data they may have obtained from you.

[15:12] I don't think it has a particularly good handle on being able to do anything with that. I'd have to double check on whether you have the right to delete.

[15:19] I think it might be in there. It's hard to remember everything in these laws. We've got lots of them now. But yeah, we.

[15:26] These are the kinds of conversations we need to be having.

[15:29] Debbie Reynolds: I agree with that. There's a story that someone had posted, and I want your reaction to this.

[15:35] So there was a woman,

[15:37] I can't remember what state she was from, some police force in a different state said that she had done some fraud there in that state.

[15:47] They arrested her and had her sent to that state to be tried, and she was in jail for six months before they found out that she had never been in that state.

[15:58] Michael Simon: Yep.

[15:58] Debbie Reynolds: She had records showing she had never, ever been in the state. And literally they flew her to the state. She was in jail for six months. She lost her house, she lost her job.

[16:08] She lost her dog. Yeah, she lost her dog.

[16:12] Right. And for them to just say, oh, well, oops, sorry. To me, there are two problems. One is trying to pretend like AI knows everything and is right, and that's one thing.

[16:25] And then one is like, what happened to good old-fashioned police work?

[16:29] What do you think?

[16:30] Michael Simon: I think it was North Dakota that grabbed her from Tennessee, if I remember, because I saw that story too.

[16:37] Whatever happened to good old-fashioned police work? Well, you know,

[16:42] We make it so much, quote, easier to do this kind of thing.

[16:47] This is one of the things we discuss at ForHumanity. We've known we've had bias problems with some technologies for a long, long time. Anybody who thinks facial recognition is a particularly reliable technology,

[17:00] when just a few years ago, somebody ran a test to see if it could identify the members of Congress and it did a really, really good job of identifying all the white men,

[17:11] everybody else, not so much. If they were women, it did a terrible job; if they were minorities, a terrible job; if they were women and minorities,

[17:19] forget it. The systems are not always as good as we think they are, or more importantly, as they're sold to be, and the mythology arises around them. But we've also seen this kind of thing where people start relying on this and law enforcement is just one area.

[17:37] I'll tell you another story. This one's a little bit less well known. From about seven, eight years ago, just as everybody was jumping up and down, celebrating as genetic info was being used in California to catch a killer from a cold case from 20 years ago.

[17:52] Yeah, great. Let's use this to catch all the killers. There was a filmmaker in New Orleans who was detained by the New Orleans police as a suspect in a murder.

[18:03] Again,

[18:04] a far-flung Midwestern state where he had never been.

[18:09] But there was some genetic match from a test one of his relatives had done, because they had signed up for Ancestry.com or something similar, 23andMe or the like. And the police used the family match thing, and it showed that there was some potential distant match.

[18:27] And hey, the guy was a maker of horror films, so maybe he liked killing people in real life. And he got stuck for like three days in the New Orleans jail, being interrogated.

[18:38] We're looking at this stuff and going, hey, this has got to work. But it doesn't. And again, as a society, we've got to be way more careful.

[18:46] If I can just finish up this monologue real quick.

[18:49] There was a great article in the Atlantic by Rafi Krikorian, the chief technology officer of Mozilla. It talks about,

[18:59] the title is,

[19:02] My Tesla was driving itself perfectly until it crashed.

[19:06] The danger of almost-perfect tech.

[19:08] And it talks about how he had a Tesla, like, I guess everybody in Silicon Valley does. I think it's part of the uniform, right? And he had his kids in the car, and he'd been using the automated driving around town for years,

[19:21] and all of a sudden, his Tesla just crashed into a wall. Boom. And the issue was, it gave him, like, two seconds to respond before it started acting weird and crashed into the wall.

[19:31] And two seconds is not enough for anyone who's not superhuman to take control of a car and deal with a situation where your car has suddenly decided to drive itself into the wall.

[19:43] And he goes through and says, look, the main lesson to be learned here is we're at the point where the technology doesn't fail all the time. It actually does pretty well, until suddenly, boom, it doesn't.

[19:55] And we create this reliance. We fool ourselves. We lull ourselves into thinking we don't have to be prepared, we don't have to be ready, or that we could be ready in an amount of time that no one could be ready.

[20:07] And he even links in, towards the end, this thing where lawyers are using AI to create fake cases and briefs and arguments and the like, because we're starting to rely upon this stuff.

[20:19] So taking it back to ediscovery,

[20:21] one of the things we had then, with technology that was, let's face it, pretty new and not really all that great: we had confidence scores.

[20:29] If you ran AI on the thousands or millions of documents you had,

[20:35] you would get a score in terms of what percentage it matched. I worked for companies that did that. You worked for companies that did that. And so you go, okay, this is 95%.

[20:46] This is 33%. All right? We're not going to rely on the 33%. Even the 95%. We can test that. Is it good? Is it bad?

[20:53] And now what we've got is the market demands technology that pretends it's awesome.

[20:59] Claude and OpenAI and Perplexity, they'll all tell you, you're a genius, right?

[21:05] We're all brilliant and wonderful. And none of the stuff we're coming up with is garbage or a ridiculous hallucination.

[21:12] And so we're ending up with this technology that is creating this false confidence.

[21:17] And we do things where we throw people in jail for six months,

[21:21] where we let our cars drive into walls, where we file a brief without checking it and it turns out that all the cases in it or quotes in it are made up.

[21:31] We have to be, again, we gotta be better about this and make sure we understand the limits of this stuff.

[21:38] Again, sorry, long answer. I keep doing this.

[21:40] Debbie Reynolds: No, that's a great answer.

[21:42] I want your thoughts now on trying to use or lean on shades of products liability law around these technologies. I don't know if you have a thought about that, but I think it's a fascinating segue in terms of how people are thinking about harm and these different tools.

[22:05] Michael Simon: It's complicated. I've seen a lot of writing about it and of course, and I will make a disclaimer here, I do not have a history as a products liability lawyer,

[22:15] so I understand it from the outside, but my outsider understanding is that software is not a product.

[22:22] So then the question becomes, okay, if your Tesla drives itself into a wall,

[22:27] was that software or was that a car, a product?

[22:32] We have to work that out. So we're going to have to resolve the issue of who is responsible for these things. And we see attempts to do that when the EU AI Act creates these developer, deployer, importer, user sorts of chains.

[22:50] We see that in the present state of the Colorado AI Act, where you've got duties that the developer has to the deployer, and the deployer has to make certain disclosures.

[23:01] Now, I don't know if anybody else noticed, but

[23:03] a couple days ago,

[23:05] the governor of Colorado released the second commission report on rewriting that act, and it completely strips all of that out. It takes a huge step backwards.

[23:17] So, yeah, whether we're dealing with these issues, I don't know. But I do want to pose something back to you, another thought that we should probably surface when we get to liability, which, when it comes down to it, in the end,

[23:32] you know, the real big issue is can this stuff be insured?

[23:35] You know, will the insurers accept it? And I think, and maybe you're seeing it, I think we're starting to see a real pushback from insurers, particularly those who have gotten so badly burnt over cyber insurance.

[23:49] They're not making this mistake again, I think. What do you think?

[23:52] Debbie Reynolds: I think you're right. Yeah, it's a tough problem.

[23:56] I don't know. When we're talking about whether something is a product or not,

[23:59] I think from a human perspective,

[24:02] a human will say, this is like semantics, right? But I mean, it makes a huge difference in what redress is available to the individual. And then, I agree, I think insurance companies are definitely wising up.

[24:15] But then also,

[24:17] I think companies that are deploying these things,

[24:20] they're also likely getting burned as well,

[24:24] because they can't shift the blame.

[24:28] They can say, oh, well, the AI made me do it. But ultimately,

[24:31] laws are about people and also about companies,

[24:35] right? And so you can't trot OpenAI out into court, or your chatbot, and say, well, they made me do it, right? So,

[24:43] take the example of the airline in Canada, where the person was talking to the chatbot about bereavement flight refunds, and the chatbot said, yeah, okay, we'll give you the refund.

[24:54] And then the person went back and talked to the company, and they're like, no, no, no, we can't refund you for this bereavement flight. And then the judge said, well, the chatbot actually created a contract with the person,

[25:08] so you have to honor this contract.

[25:10] So I think it's bonkers that, at this point, we can't

[25:14] figure out who's responsible. And then also, back to your point about this guy whose car ran into a wall.

[25:21] It's like you're dealing with people.

[25:23] You're dealing with the lives of humans. And so the gravity of the error needs to be taken into account as well. But what do you think?

[25:33] Michael Simon: I agree.

[25:34] First of all, I'm glad. Now I get to check off the bucket list item where I got you talking. So that's good. I didn't want to be the only one talking on this.

[25:41] You do, so you ask the question. No, no, no, no. We've got to have a discussion. You know, the article actually talks about that. It talks about two things. One, Tesla will go to great lengths to use their logging capabilities, of almost everything happening in their cars, to try and blame the person.

[25:57] Oh, you had time to take over, you should have seen this. Again, expecting you to react within split seconds. But they've also lost some cases; the article cites a $243 million case they lost,

[26:10] which is, I think, real money, even to Elon Musk. Maybe I'm wrong. I haven't done the math there. At the same time, you know, we've also seen Google and Character.AI having to settle lawsuits over teenage suicides due to chatbots.

[26:25] You know, it's getting to the point we're seeing the actual behavior that's starting to change as a result of this.

[26:32] At the same time, to me, the most fascinating part of the Air Canada case that you bring up is not the amount at stake. That was just literally $600, and it's $600 Canadian, which I think American is like, what now?

[26:45] 25 bucks or something?

[26:47] Oh, my Canadian friends are going to call me on this. But the real fascinating thing is the people side of it. And so the key to me in that case is that Air Canada immediately tried to say, it's not our fault.

[27:00] This chatbot is its own entity.

[27:02] It's responsible for itself. They're trying to give it a form of personhood, or quasi-personhood, the legal personhood that the company also has, and say, you know what, we're just gonna let that be its own separate thing.

[27:17] Well,

[27:17] let's all be real glad that at least in that instance, that the battle didn't go that way, because if it does,

[27:23] we're all in really serious trouble. I mean, corporations have a legal personhood because they've taken on certain legal obligations as well. You can sue a company, you can find them.

[27:36] They have a registered agent. You know, they have people at the board level and the C level who, if they do certain things, can be responsible if the company can't deal with it.

[27:47] In fact, let me take a step back. David Gunkel, who is a professor at NIU, who is just brilliant, and if you don't get him on this podcast, you're missing something.

[27:57] You really should have him on, because he writes the definitive stuff about AI legal personhood,

[28:04] and he has posited the issue as a person, at least at law,

[28:09] is an individual who has certain rights, duties, obligations and responsibilities.

[28:15] And so when we give that quasi-personhood to corporations,

[28:19] they have certain rights, duties, but also obligations and responsibilities.

[28:24] If you then declare AI and say, well, AI is quasi-people,

[28:29] without thinking through the rest of that, then they now have rights without any duties and obligations and responsibilities, because it's just software. And how do you sue a software bot?

[28:42] And if you win, what do you get?

[28:45] So,

[28:46] yeah, again, this is an area that we're just going to have to figure out and figure out real fast.

[28:52] Debbie Reynolds: Personhood. That's fascinating. I definitely have to check him out for sure.

[28:57] Now, what is happening? What are you seeing on your radar right now that's, like, concerning you the most?

[29:03] Michael Simon: You should know by now that we could be here for hours.

[29:07] I'm going to go with what I was kind of jumping up and down about at this latest event.

[29:13] Not literally, although almost. It is, again, this AI and personhood thing, but also kind of imputed personhood. We're treating AI like it is people, even though it isn't people.

[29:27] Jack Balkin, who is this absolutely renowned professor of constitutional law at Yale Law School, calls it the homunculus fallacy.

[29:37] Now, for those of you who are not D&D nerds and don't know what a homunculus is, I think it's four hit dice plus four hit points or something in the D&D world.

[29:46] But in reality, a homunculus is a magical small person or fake person.

[29:52] And the reason, he says, is that people tend to think of AI as if there's some little person behind it or inside it.

[29:59] And that affects how people treat this stuff. I have been in far too many rooms with far too many lawyers who want to talk about what AI wants and what AI means and what AI is trying to do.

[30:13] And it's like, no,

[30:15] it's not a person,

[30:16] it's math.

[30:17] AI is math. And again, I think our start in the ediscovery world, when the math was so obvious,

[30:22] maybe that gave us a certain immunity to thinking of AI as people. Maybe we were lucky that way.

[30:28] Because when you start thinking of it as people,

[30:30] you can start making a lot of mistakes, particularly if that person is, as was described with the cars and the legal cases, so confident about what it does, doesn't tell you when it's not confident, and seems to be perfect until suddenly it's not.

[30:48] So we end up making these mistakes. The fact that there are over 1,000 citations to various cases in the AI Hallucination Case database, and growing. It was like 800 two weeks ago.

[31:04] Now it's over a thousand.

[31:06] This is a bad sign.

[31:08] The fact that I have to write snarky LinkedIn articles, with all due respect,

[31:14] maybe we should put that in quotes, about one of the most renowned jurists in America,

[31:19] who in this Heppner case that everyone's writing about and quoting over and over again in the opinion,

[31:27] simply assumed that "talking," "having a conversation," "communicating" with AI was somehow the equivalent of talking with an actual person.

[31:38] I saw someone,

[31:39] a lawyer, who read that case and wrote a great big splashy LinkedIn post saying it was just like if you walked into the coffee shop and had a conversation with the barista.

[31:51] No. For people who don't know the case, I'll do it super quick.

[31:56] Defendant, white-collar criminal case,

[31:59] used Claude to figure out case strategy and then sent the stuff to his lawyer. Government wants to declare it non privileged, and the judge said it's not privileged because Claude is not a lawyer.

[32:11] And he was communicating with Claude, and so therefore he waived the privilege. Well, it's not a question of whether Claude's a lawyer or not. It's a question of whether Claude's a person.

[32:22] And you're not disclosing it to a person, you're using a tool. The analysis should be, did you take reasonable steps to protect it?

[32:30] Now, he does get into that, and there's different issues about that.

[32:33] But again,

[32:34] it's not like you're walking into a coffee shop and having a conversation with the barista. It's like walking into a coffee shop and having a conversation with the espresso machine.

[32:45] That doesn't by itself waive privilege. It may cause people to worry about you. You know, our first question should be, hey, dude, are you okay? You're discussing legal strategy with an espresso machine.

[32:56] Do we need... is there family we need to notify, medications that you may have run out of? But the second question would be,

[33:03] was that reasonable protection of that information,

[33:08] or was there someone who could have overheard it or found it or whatever?

[33:12] But again, we need to analyze these types of legal issues. And it's not just privilege, it's things like agentic AI. You know, we're treating them all like agents. And yet if you look at the three Restatements of agency, which is a thing, these are ancient books.

[33:28] I think the first one's written in hieroglyphics on papyrus. They say, on page one, that an agent is a person who acts for a principal, another person.

[33:38] We have this problem in IP, where AI is creating things that have to be created by a person.

[33:44] And so when we start treating AI as a person, even though legally it's not,

[33:49] we can find ourselves in situations, again, where we make mistakes, where we create bad laws,

[33:55] or in the case of what we almost ended up with in the Air Canada case,

[33:59] creating a way that you can shuffle off liability onto something that has no rights, duties, or obligations,

[34:06] or only has rights, but has no responsibilities, duties, and obligations.

[34:12] Debbie Reynolds: Maybe the discussion should be,

[34:16] what is a human?

[34:18] You know,

[34:19] we should define what humans are and how humans are different than AI. Right? There's something I always tell people when I do these speeches

[34:29] and talk to companies about AI. I always say, AI is not wise,

[34:34] so it cannot be wise.

[34:35] So, for example,

[34:38] AI doesn't know that.

[34:41] My example is this: we as humans know that chocolate doesn't go into chicken salad. And how do we know that?

[34:50] So we weren't taught it in school. No one told us that. Right.

[34:53] But just living and understanding over the years,

[34:57] that concept would never come together in your head. You would never think to put those two together. Right.

[35:04] Michael Simon: I have a feeling there's a dinner party story where those guests never came back, did they, Debbie? So I have a feeling you're just not telling us something here.

[35:12] Debbie Reynolds: Right, right. But AI doesn't know that, and it can't know things that humans can know. Right. So knowledge is not just what's written on a page or in a book. It's also experience over time.

[35:27] Right. And so that's something that you can never emulate in an artificial way. But what are your thoughts?

[35:34] Michael Simon: Absolutely. I mean, there's another big case that everybody's talking about, where this Japanese insurer with an office here in the US is suing OpenAI for helping this pro se plaintiff practice law.

[35:46] Again, lots of air quotes here today.

[35:49] And there are a couple of allegations in there. One says this information was entered into ChatGPT.

[35:56] So OpenAI knew this information?

[36:00] No.

[36:01] Again, what does math know? Math knows nothing. And I think what you're describing, Debbie, comes down to a real simple word that then triggers another simple word, both of which are very fundamental and important words.

[36:14] AI has no judgment.

[36:16] There is no judgment there that AI can make. It cannot make a human judgment.

[36:20] In fact, there is also, from the early 1970s, a famous IBM slide that has been circulating around the Internet. I don't know if it's real or not, but it's real enough, I guess, for these days.

[36:32] And it says AI has no judgment, and therefore it can never be held accountable.

[36:38] So we cannot let it make a decision.

[36:40] You know, obviously, we're way past that now.

[36:44] But, you know, getting back to lawyers, getting back to privacy people, getting back to all of us,

[36:49] we need to make sure that we're putting the people in the position, not the AI,

[36:54] as being what's accountable,

[36:56] because the AI can't be accountable.

[36:58] And again, particularly for lawyers, that's our thing. If we're not accountable to our clients, to the triers of fact, to the public,

[37:07] then what's the point? You know, everybody likes to talk about how AI is going to replace lawyers. Well,

[37:12] if we're not accountable. If we can't provide that judgment,

[37:16] what's the point of us holding on to all of this? Really?

[37:19] Debbie Reynolds: Yeah. Like, what's the point of taking the bar exam and getting a license, right? Or even for a doctor. It's like, I don't need to go to medical school, I don't need to be a resident, and I don't need to practice surgery.

[37:30] I guess they think it's like The Matrix, where you download an application and all of a sudden you're this brilliant surgeon, right?

[37:37] Michael Simon: Yeah. And you know kung fu too. And orthopedic surgery.

[37:40] Awesome, huh?

[37:42] Come at me, bro. I will decide which one I'll use on you. It is, it is a strange, strange time.

[37:47] We have these devices, we have this software, this thing. It's not a person. It's a thing that sure acts like people, that can sure do a great job of convincing us it's people, that can tap into our emotions.

[38:02] When we read the stories in the New York Times, Kashmir Hill writes about people marrying their AI, or people upset that OpenAI changed platforms and their AI boyfriend or girlfriend disappeared.

[38:15] It's like, wow.

[38:17] Yeah, it's a great time to be able to deal with these intellectually, I guess. But at the same time, maybe emotionally, it's more than a bit scary, don't you think?

[38:27] Debbie Reynolds: Well, it is. It is frightening. One thing,

[38:30] I just want your thoughts about this. You probably get a chuckle out of this.

[38:34] I have been adamant against humanoid or human-looking robots forever. First of all, I think they're creepy. But then I think what it's trying to do is play on the emotions of a person about these robots.

[38:52] Right? So I don't want like this huge robot walking around, tiptoeing around my house, folding clothes and making eggs.

[39:02] Michael Simon: Especially if it takes an hour to make the eggs.

[39:05] Debbie Reynolds: Right. But I think part of that, going back to chatbots, people making it seem as though it's human,

[39:13] trying to make it seem like it has emotions and feelings and different things like that, it's masking the fact that, like you said, it does not have any accountability for what it's actually doing.

[39:25] And so the judgment has to come from the person.

[39:29] But I feel like we're abdicating our human judgment to tools, and we should not do that, because these tools don't have that.

[39:38] Michael Simon: Yeah, yeah, we're going to be disappointed. It's going to create more problems for us and it's going to create more disappointments for us. And again, it's this stuff that we think is perfect, that tells us it's perfect, because the people who build it want it to seem perfect, because the market demands that.

[39:53] You know,

[39:54] there are too many stories from people who can talk about the initial tests of AI systems and chatbots that would express doubts and not be certain, and the public hated them.

[40:05] As soon as they could create one that said, oh, that's an excellent idea, putting chocolate in chicken salad, it suddenly became a much better commercial product.

[40:15] Yeah, I mean,

[40:17] wow. I mean, humanoid robots take it to the final step, don't they? I mean, first of all, we can say we have it: the Data Diva endorses the uncanny valley.

[40:28] We heard it here, folks, all right, we know it.

[40:31] But you know, we see this with these systems. They know how to hack us, they know what we care about.

[40:37] It is really easy. And people could be listening to this going like, wow, that guy is so arrogant. He's such a jerk. He thinks he's immune to it. No, it is hard even for me, and I am totally skeptical.

[40:48] And I have been using this stuff and helping to build it at times during that, what'd you call it, fascinating journey, maybe from the outside, where I spent time working with companies to build AI. And I still have to be careful that I don't get fooled and think of it as a person,

[41:05] my friend,

[41:06] because it's not.

[41:07] And I feel like, maybe like all the sci-fi TV shows and movies, we should make sure it has red eyes. You know, I think that was in Star Trek: Lower Decks: as long as the AI has blue eyes, that's good.

[41:20] The good robots have eyes that glow blue. As soon as they change to red, it's evil.

[41:25] In fact, there was an episode like that in Star Trek where the crew's like, oh, this AI was evil, but now it has blue eyes. So it's good.

[41:32] Yeah,

[41:33] eventually we're going to get around to this. Eventually we're going to figure out, eventually the cases are going to get filed. Eventually it's going to get worked out. And the companies that create this and the companies that build this are going to find themselves paying out big bucks.

[41:48] They're going to find themselves with bad publicity,

[41:51] they're going to find themselves uninsurable. The only question in my mind is how many people are going to get hurt before we get there? How many people are going to get a knock on their door in Tennessee, and it's the North Dakota police come to take them away for six months?

[42:08] How many people are going to find out that their 14-year-old killed themselves because an AI chatbot helped him plan his suicide and told him it was a great idea?

[42:19] Debbie Reynolds: Yeah.

[42:20] Michael Simon: How many lawyers are going to get in trouble? We're going to see lawyers starting to lose their licenses. Maybe I should have gone with this first; it's kind of a bad follow-up to teenagers getting killed.

[42:32] But how many lawyers are going to lose their license? How soon are we going to come to deal with this in a serious way and resolve the legal issues?

[42:41] Maybe before some of these disasters happen.

[42:45] We can hope,

[42:46] right?

[42:47] Debbie Reynolds: Yeah, totally.

[42:49] I love a scene from 2001: A Space Odyssey, the Kubrick film,

[42:55] where they decided they were going to plot against the machine, HAL,

[43:01] and they had gone into a different room, but HAL could read their lips.

[43:07] And so I feel like that's what we are. It's like we don't know what we're dealing with really.

[43:12] No, we really, really don't. What do you think?

[43:15] Michael Simon: You know, there are certainly days it feels like I'm stuck in the pod and HAL's not going to open those pod bay doors. And then I realize I left my helmet back in the airlock.

[43:26] Yeah.

[43:28] You would hope that lawyers would have at least a certain degree of skepticism and cynicism over this. There's a guy, Dr. Larry Richard, fascinating guy, great guy. If you ever can,

[43:39] you should have him on this too.

[43:41] He is a big-time lawyer who came from a family of psychologists, so now a lawyer turned psychologist,

[43:48] probably also a masochist, because he only studies lawyers. And he has done these amazing surveys and studies of highly successful lawyers and their personality traits and how they differ from other people's. Over and over again: extremely high levels of autonomy.

[44:04] We make up our own decisions, we do our own things, we have that judgment,

[44:08] people reward us for that and people expect it. But also very high degrees of skepticism, even cynicism.

[44:15] Hey, to all the lawyers out there listening to this in podcast land: you need to turn that on, guys and gals.

[44:22] We need to have that. Because forgetting that this stuff is just math and treating it like your paralegal, your associate, your friend, your therapist,

[44:31] it's not going well. At the very least, we've got to understand, because, hey, whatever the reason that happened with HAL, I think the whole explanation of it in the next book and movie, from 2001 to 2010, not nearly as good a movie, not a classic, but decent,

[44:48] was that they found out that HAL had been given programming that it couldn't resolve. It was to protect the mission at all costs.

[44:56] And what did it end up doing? Well, the easiest thing to do when the astronauts were not acting the way it wanted was to just kill them and carry on with the mission.

[45:06] You know,

[45:07] that's not judgment. That's way worse than putting chocolate in chicken salad; that's the math working things out the wrong way.

[45:16] And so we need to remember that, that this stuff isn't people.

[45:20] Whether they make it look like people, whether they give it a nice pretty face like in Ex Machina. I watched that on the airplane recently, by the way. That's a bad movie to watch on the airplane when it gets to the really bad scenes at the end and you realize there are kids looking over your shoulder, like,

[45:34] oh, I need to turn this off. Because they didn't even give the robots red eyes. They should have had red eyes. But even if they did give them blue eyes, it's math.

[45:42] We gotta remember that, you know?

[45:45] Debbie Reynolds: Definitely.

[45:46] Well, Michael, if it were the world according to you and we did everything that you said, what would be your wish for privacy anywhere in the world? Whether that be human behavior,

[45:57] technology, or regulation?

[46:00] Michael Simon: Wow,

[46:00] that's one I should have prepared for.

[46:05] You know, I want to say the lawyer in me defaults to saying some sort of regulation, but you're right, regulations aren't going to fix it all.

[46:13] I think it would be,

[46:16] and this is, you know, if we're going to go with wishes, I can go with things that'll never happen: for people to understand exactly what the consequences are of giving away this information,

[46:27] of being the product, of what is happening to them.

[46:32] To be aware of the fact that social media companies have run studies showing just how miserable social media will make you.

[46:40] In fact, sometimes they run the studies and then, like, stop them barely partway through, because they realize the evidence they're creating against themselves.

[46:49] Meta has run studies where they picked random people and said, we're going to make those people happy with Facebook, and we're going to make these people miserable with Facebook.

[46:56] Guess which ones really used Facebook a lot more. It doesn't take a lot of guesses. People need to really understand, because there is this attitude of, oh, I've got nothing to hide,

[47:07] they can't manipulate me. No, that's the first step. And at the same time, we keep saying in privacy, well, privacy is an important value, and companies have to honor that or they suffer in the marketplace.

[47:19] Really?

[47:20] Really. How much did, you know, paying $5 billion to the FTC six years ago hurt Meta? How much do all these stories of companies getting hit with this really hurt them?

[47:33] And the answer is not much.

[47:35] And so I guess that would be my hope that people truly understand that there are larger forces and larger things happening with decisions about you.

[47:48] And that this,

[47:49] your information being protected, it being respected, and your rights to it being actuated, is really, actually important.

[47:58] Then maybe we'll get somewhere.

[48:00] Debbie Reynolds: That's a good wish. I support that. I support that. Excellent.

[48:05] Michael Simon: I have your endorsement. Cool.

[48:09] Debbie Reynolds: Well, thank you so much. Thank you so much, Michael, for being here. It's tremendous. Definitely follow Michael on LinkedIn. You post incredible things, always inquisitive things, something that keeps your wheels spinning,

[48:25] has you thinking, puts your thinking cap on.

[48:28] Michael Simon: I appreciate that. And I'm so glad to be here. Yeah, yeah. I've got a Facebook page, a LinkedIn page. We've got a newsletter for the firm. At the very least, look, I can promise this.

[48:37] If you've made it this far, you should hopefully have determined this. It will never be boring.

[48:42] I will never be boring. That is my only pledge.

[48:45] So that is my pledge to you.

[48:46] Debbie Reynolds: It's true.

[48:47] Michael Simon: And it is. It has been great to be here. Thank you for saying all these lovely things about me.

[48:52] You know, we'll count some of them as AI hallucinations maybe, but no, you are lovely. Fantastic. So I'm so glad to be here and I'm sorry I waited so long.

[49:02] Debbie Reynolds: That's okay. Better late than never. And it was worth the wait. All right. All right. Talk to you soon.

[49:09] Michael Simon: Thank you, my friend.

[49:09] Debbie Reynolds: Okay then. Bye-bye.
