E263 - Karen Smiley, Author, Founder and Owner, She Writes AI, LLC
[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.
[00:12] Hello, my name is Debbie Reynolds. They call me The Data Diva. This is the Data Diva Talks Privacy podcast where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.
[00:25] Now I have a very special guest on the show.
[00:29] I have Karen Smiley, who is the founder and owner of She Writes AI, LLC.
[00:37] Welcome.
[00:38] Karen Smiley: Thank you, Debbie. So happy to be here.
[00:41] Debbie Reynolds: Well, wow, what can I say about Karen? So we met on LinkedIn.
[00:48] You had written something about AI and music.
[00:52] I think it was a series or something like a serial thing where you had written a couple different things.
[00:57] And I think it was on your Substack, if I'm not mistaken.
[01:00] First of all, it was unbelievably detailed and just the amount of research that you do is incredible.
[01:08] And I had the pleasure of being quoted, I believe, in something that you wrote there. And so I followed your work very closely. Everyone, please follow Karen on LinkedIn and on her Substack.
[01:21] The types of things that you write are incredible, like the detail,
[01:25] the thought that goes into it.
[01:28] All those things are unbelievable. And also you have a book out.
[01:32] Karen Smiley: Yes. Very excited about that.
[01:34] Yeah. So the series that you were talking about. When I first switched from corporate life to writing about AI and ethics as my new focus, this chapter of my life,
[01:44] I looked at music because I've always been a music lover. And I felt like what I've been seeing in the music industry, with things around the use of AI and copyright and the way that musicians' data and content is being used,
[01:58] I felt like they were breaking some ground that maybe could help to set some precedents that would help the rest of the world and other domains. And so I began writing about that and started looking really at the ethical principles around it.
[02:09] And I think that's where I came across a section there where I referenced something you had written. And I think that was how we really got connected, even though I'd been following you for a while before that.
[02:20] But yeah, the music series was really interesting. And it's, I guess, my habit now to do research. I'd like to know for sure that what I'm saying is something that is backed up, and I think it's only fair to share that with people, to share the references.
[02:36] And so I did quite a lot of referencing on the music series, but that's a habit I developed while I was an industrial researcher for 18 years.
[02:46] So that came naturally, both reading a lot and writing. And giving people credit, I think, is super important. And so I try to always do that. And I've done that in the book as well.
[02:55] The book has over 200 references about everyday ethical AI and how it affects families and small businesses.
[03:05] Debbie Reynolds: Now,
[03:06] what got you interested in talking about AI and this topic? And you really go deep on different areas that you look into. But what is your path? How did you come to be what you are right now?
[03:19] Karen Smiley: Oh, that's a great question.
[03:21] I'll go back to when I was in college and that's where I first discovered computers. And I really got hooked on using software to analyze data. And my first job there was working in a health operations research group.
[03:36] And we were using computers to look at data from ambulance runs, trying to figure out if there were some factors that were affecting whether or not patients would survive, how they matched up paramedics and ambulances to different patients who put in calls.
[03:51] And I just thought that was so interesting to be able to try to find insights like that from data. And so I've really been working in data and software ever since then across a lot of different industries.
[04:01] So I've worked in aerospace,
[04:04] in fleet logistics, datacom telecom,
[04:07] worked with power grids and industrial automation and robotics,
[04:12] and went back to aerospace for a little while doing some machine learning research. And then most recently I was a director of AI for a company working on edge operating systems and edge cloud hybrid systems that exchange data from the devices to a cloud and communicated the other direction as well.
[04:31] And that's all been really interesting and a lot of fun, but it's always been about analyzing data and looking for useful things that we can do with it.
[04:41] Debbie Reynolds: That's unbelievable. Oh my goodness. Can't believe that.
[04:44] Wow. So you and I will have tons to talk about.
[04:47] I would love for you to tell me a little bit about,
[04:50] you know, edge and AI. So to me, this is so fascinating. So I do a lot of work in IoT as well,
[04:57] with privacy. And so you know, the ubiquity of these devices and what companies can do with them and like the power of them. Edge computing, I think is really important.
[05:12] And then now that everyone's gaga about AI and they're trying to throw AI into everything, I just want your thoughts about how like IoT and edge computing and AI are colliding right now.
[05:25] Karen Smiley: Yeah, it's a great question. And looking at edge devices, you know, the datacom company that I worked for, they were making what they called a network sniffer,
[05:34] which was a piece of hardware with software in it that was custom designed to be able to watch the traffic that was flying by on the Internet and decode it and look for issues and to help to troubleshoot network communication problems.
[05:47] So this idea of having edge devices and the whole world of embedded devices and what's different about them is something that I've been interested in and engaged with for a long time.
[06:00] And then when I got to the multinational corporation, they were mostly a hardware company,
[06:08] but they were, as people put it at the time, learning how to spell software.
[06:13] And so they were moving from the world of having these devices which were not yet smart, and putting smarts into them so that they could do calculations, do analysis in the field, at the edge, in the factory, in different scenarios.
[06:30] And that is where the more you can do there,
[06:35] the more resilience you can get from your system in a lot of cases.
[06:39] And that's a really valuable thing.
[06:41] And so by having the power of what we didn't call a cloud back then, having the power of the cloud to pull data from across different systems, from across different factories, from across different solar plants, and being able to analyze that data and optimize it and come up with something that's better,
[07:00] that saves energy or makes products better, that's always been really interesting to me. And so in my last role, we were working on something that was called a digital feedback loop.
[07:12] And the company that I worked in has made these operating systems for embedded edge devices, IoT devices for years.
[07:20] And they've had their operating system in equipment that went on the Mars rover and things like that, in space. And so they're very much focused in that area. What we were doing with the digital feedback loop was enabling the devices that ran those real-time edge operating systems to be able to offload data to the cloud in a safe, secure, and very performant way,
[07:48] because these devices are real time.
[07:50] And then to also be able to interact from the cloud with the devices, to send them software updates or what they call over the air updates,
[07:58] or to be able to send them commands and tell them to do something, or to reconfigure them and interact with them in that way. So to try to get the benefits of both the edge resilience and the power of the cloud, to be able to look at, for instance,
[08:14] not just the software in a car, but to manage a fleet of cars.
[08:18] But the other part of it, and this is where I think it's been getting more and more interesting in recent years, is looking at putting Intelligence, putting AI even inside these edge devices.
[08:29] And I think that's the direction that we're going to be seeing a lot more of. You hear a lot of the large chip providers and such talking about this. But for instance, Apple has been talking recently about putting this edge intelligence in, where there would be a machine learning model that would run on your phone and it wouldn't send your data to the cloud.
[08:52] It would be able to do the analysis right on the phone. And the great thing about that is one, it can be faster, two, it can work when you don't have a cloud connection,
[09:00] and three, it can help keep your data more private.
[09:02] And those are all really good things.
[09:05] There was also another announcement that came out recently.
[09:09] For instance, I think it was Meta. They announced ExecuTorch, which is their new evolution of their machine learning libraries, but it's meant to run, to execute, inside an edge device.
[09:22] And so I think we're going to be seeing some uptake on that.
[09:25] So I think in general,
[09:27] yeah, there are a lot of advantages to having the processing done at the edge and especially when it comes to privacy, which I know is your favorite topic.
[09:37] Debbie Reynolds: That's fascinating. Yeah. So to me, I just feel like the future will be around what you can do with these more decentralized devices. Right. Because a lot of the data breaches and the privacy issues that we have is because our data is shared so widely.
[10:01] And a lot of times, you know, what I call it is,
[10:01] you have to go to the mothership to get an answer. You have to give all your information to a particular organization, where,
[10:10] because there is so much more power in your smartphone or in these devices, it may be a situation, well, it may be a risky thing because you're still sharing data with the mothership, but it may be able to be used in a way to reduce the risk because instead of giving all your data to these companies,
[10:29] maybe all you're doing is answering a question or brokering an answer to a question which is a lot less risky.
[10:36] Fascinating. Oh my God, I had no idea about that.
[10:40] Well, as a person who's dealt with, you know, hardware, software, systems, data, I can totally see why you're interested in AI and ethics. Right. Because now we're moving, like the example that you gave about, you know, having devices be able to get information in real time.
[11:01] We're moving from a situation where we maybe had lagging insights, like we have to wait a long time to get certain insights. And so now we have the power of compute where we can get,
[11:13] you know, real time or near real time information.
[11:16] And because of AI, how it's being leveraged, depending on the models, the strengths and the weaknesses,
[11:24] that information may not be as accurate as it should be. And I know people are really excited about these innovations. But tell me a little bit about why ethics in AI is something that really fascinates you.
[11:41] Karen Smiley: Yeah, I think it's been a simmering interest for a while, but it's really just started breaking through in a bigger way more recently. But one of the,
[11:48] I would say,
[11:49] maybe formative experiences that I had is that I've been working as a woman in tech for a long time and mostly being vastly outnumbered, and so I've been sensitive to issues around inclusion for quite a while.
[12:04] And I think ethics is very much like that in the sense of most people fundamentally want to do the right thing, but they maybe just don't always realize what's going on under the hood that is maybe not inclusive or not ethical.
[12:21] And there's just a lack of awareness. And it's one of those things where once I've seen it, I can't unsee it.
[12:28] But even working with data all the time, even back in college when we were working with the ambulance data,
[12:34] there were concerns even back then about patient confidentiality and doing the right thing and realizing that the algorithms that we were coming up with could potentially influence whether or not somebody survived their next ambulance trip.
[12:48] And so there's a sense of responsibility there that I think I've had in the background for a long time. So, you know, I love the technology and love what it can do, but I also want to make sure that it gets used responsibly.
[13:01] And this is something that has really grown
[13:03] as I've gotten more into it, starting with some basic data analysis but moving more into machine learning. I took my first courses in AI when I got my master's, so that's a long time ago.
[13:13] And I've just become more aware that these techniques are very powerful. The analogy I use in my book is that AI technologies are like a chainsaw, right? They're power tools.
[13:24] They are very powerful. They can cut through problems that are simply not possible to handle with hand tools or smaller tools, but they're also dangerous. They have the ability to not just solve a problem, but to also cause a problem and cause a big one.
[13:41] And so being careful about having the safety equipment, having the training and keeping the equipment well maintained, it's all relevant to AI technologies as well. And so looking carefully at this and as I started to learn more about AI and actually applying it, started realizing things like where does the data come from?
[14:03] In the cases I worked with in industrial research, it was always: this is data that's from a customer, or that was collected from the equipment, or we would write a physics-based simulator that would generate the data.
[14:14] So there was always a sensitivity there about where the data came from, having permission to use it. And when I started hearing more about what some of the generative AI companies were doing, and realizing that they weren't at all being careful or respectful about where the data was coming from or how that data was processed,
[14:35] and when I started hearing more about the exploitation of the workers and of the resources that are used for building the data centers and the way that our resources are being used for operating the data centers,
[14:47] it all just seemed to crystallize for me in this area of ethics. This is something that I care passionately about and really want to make a difference in. And I think in a lot of cases people just aren't aware of it.
[15:00] They open a chat prompt and don't realize what had to happen for that prompt to be available and not realizing that some of the companies are trying to do the right thing and trying to be ethical and others just don't care.
[15:15] They're really just going for the money.
[15:17] And I think that shows up with the people I've talked to. So as part of what I'm doing with my Substack, which you mentioned earlier,
[15:24] I have this interview and podcast series where I talk to people about how they're using AI and how their data is being used by different AI systems and tools.
[15:35] And sometimes in those conversations I can kind of see the light bulbs go off. Like they just didn't realize what was happening when I asked them different questions.
[15:46] And then it's like I said, once they see it, they can't really unsee it.
[15:50] That's what I would like. One of the reasons that I wrote the book was to try to help people to see what's behind the chat box, what's inside the smart system, and how it's affecting them and their daily lives.
[16:03] We hear so much hype. And one thing I love about your writing and your podcast, Debbie, is that you don't go for the hype.
[16:11] You try to tell it straight. And this is my attempt, I think, to tell it straight and to help people debunk it, and to hear not just the rich, eight-figure tech bro voices saying get on board or you're going to get left behind, but more, this is how real people are using it.
[16:29] And this is how it helps some people, and this is how they are being in some cases, harmed by it and just helping everybody to be aware of this is what's going on, and this is reality.
[16:41] And to identify problems, because the first step in solving any problem is to identify it. And people have to know what their problems are so that we can work together as a society to improve on them.
[16:53] Debbie Reynolds: I agree.
[16:54] I agree wholeheartedly. As you were talking, you just had my wheels turning. You just do this to me all the time. Like when you say something, it's like, oh, I have so many thoughts, so many things I want to say.
[17:04] Talk a little bit about ethics,
[17:06] just in general.
[17:08] So ethics is a word that's used a lot. I feel like a lot of people don't really understand what ethics is or what it means because I know sometimes when I talk to people about ethics, they're like, well, we just need more regulation.
[17:20] But, like, laws are not ethics and not all laws are ethical.
[17:27] Yes. So, yeah, tell me,
[17:30] let's expand on ethics and what that means in AI.
[17:34] Karen Smiley: Yeah,
[17:34] I think that's also something that people don't necessarily understand what ethics is. And I do have a section in the book where I try to explain basically what ethics is and what it's not.
[17:46] And then,
[17:47] okay, now what does this mean in terms of AI ethics? But like you said, it's not just what's legal, and regulation alone isn't going to make something ethical.
[17:57] As you said, there are a lot of laws in our history which were not ethical.
[18:02] And so it's not just that. It's more of a sense of moral values and people doing the right thing because it's the right thing, and not just because there's a business case that says we have to, or a law or a regulation that says we have to.
[18:18] And so when it comes to AI ethics, this is one of the areas that I really started to say, well, what does it mean for AI to be ethical? Or what's happening that isn't ethical?
[18:30] And I picked up really on five themes that I wanted to focus on in the book as far as the key concerns around ethics. And so one of them is just the way that our resources on the planet are being used for data centers.
[18:49] And data centers have obviously been around for a long time,
[18:52] but the demand that's being driven now by AI is on a bit of a hockey stick and it's growing significantly in a large part due to generative AI. So they're definitely related and so just looking at the different environmental impacts, the resources,
[19:10] the way that the minerals are mined for making the chips, and how it's exploitative of the folks in the Global South, where the minerals mostly come from, and exploiting their labor for classifying the data and for tagging it,
[19:25] and the way that the data is being sourced, whether it is scraped or obtained by permission, by giving what CIPRI calls the three Cs, the consent, credit and compensation, to the people who created that data and giving them control over it.
[19:42] And the other aspects are just things like the biases that get built in.
[19:48] We see a lot of harm from biases that are built into models, and partly it's because our society is biased, and so that's where the data came from.
[19:56] So if you build models on that without considering bias,
[20:00] then you're likely to get models that will just reinforce and worsen the biases.
[20:05] And so part of being ethical about AI to me, is being conscious about addressing that. And it's not easy,
[20:13] but AI itself is not easy. Right?
[20:15] We can do hard things.
[20:17] And so this is something that we really need to be acting responsibly. When we develop AI, we have to pay attention to biases and mitigating them.
[20:26] And just looking at the effect that AI has on people's lives and livelihoods, it can have some tremendous benefits for some people.
[20:33] For instance, I'm hearing from a lot of people who are neurodivergent that LLMs are able to help them access and organize and publish ideas that they would have trouble expressing otherwise, and ideas that the world needs to hear.
[20:48] But on the other hand, there are artists who are losing their livelihoods with very little time to adapt or adjust. But,
[20:56] you know, there's a story about the biases. There was a study that came out of Lehigh University,
[21:02] and they were looking at the way that the most common LLMs handle financial decisions and recommendations, and they found that, yes, in fact, they are biased very badly. But they also found that if they gave those LLMs specific instructions to ignore, for instance, race or gender,
[21:21] then the models were able to give relatively unbiased results. So it's not that these problems can't be solved,
[21:28] but what I'd like to see us get to is where we don't have to remember to tell the LLM not to be biased.
[21:37] Right? That's something that they should be able to build into the tool.
[21:41] But the person, the company, that's designing that tool has to think about that and make addressing bias a priority.
[21:49] And those are all aspects of what some people call responsible AI. But to me, it's really about ethics and treating people right.
[21:58] Debbie Reynolds: Right.
[22:00] One of the big problems that I had, I do a lot of media and presentations and stuff. And so one of the historical problems I've always had is to find images to use in presentations that were diverse at all.
[22:18] Right. So if you were typing CEO into search, even in search, everyone would be white and there'd be no women.
[22:28] If I typed in women,
[22:30] the women weren't doing any leadership. They were, like, walking dogs or putting on makeup with their friends. And so I found all these things appalling. Right. So when I would go into a chatbot and ask it to do an image for me,
[22:47] I would just say, do an image for me of people working in an office.
[22:50] And so it's typically an image of a guy standing over everyone, pointing, telling people what to do,
[22:58] and everyone is white, or whatever. And then I say, well,
[23:02] give me a diverse image. I just say diverse. Let's see what they come up with. Right. So then it's like the same scene, but then it's like one woman at the table.
[23:10] And then I say, well, give me someone.
[23:12] Make it people of different,
[23:14] like, nationalities or whatever. And then maybe someone who's like, you can't tell their nationality, but they're not white.
[23:21] Or I'll say, okay,
[23:22] put a Black person in it. I just have to, like, really go there to say those things. And so it's just really interesting. For me, because I don't really fit into a box,
[23:35] I'm almost swimming upstream when I'm using these models.
[23:38] But thankfully,
[23:39] because I'm a data person, I know how to, like, get out what I need. But that's something that other people may not have experience with. But that's just an example. And so the example I gave was just about an image in a presentation.
[23:52] But these things can have,
[23:54] depending on how it's used, it could be like, okay, well, this person, we're not going to give them a loan because they're not like this other group that gets loans. Or,
[24:03] this person needs to go to jail for a longer time because they're from a zip code where other people have gone to jail, and stuff like that. So thinking about what those impacts are is very important, because a lot of these things that we're using now are being used,
[24:21] being repurposed for these other use cases. What are your thoughts?
[24:25] Karen Smiley: Yeah, those are great points. I've talked with a couple of people that I've interviewed. One was an entrepreneur, she coaches solopreneurs and she was trying to generate an image that showed CEOs and like you were saying, she was trying to get an image that wasn't just white men in suits in an office because the CEOs that she coaches,
[24:47] the founders that she coaches, don't fit that.
[24:50] And another person I interviewed, Liz Sunshine, she's a photographer in Australia, documentary fashion photographer. And she was trying to get images of older women in Australia and was not getting anything that was representative.
[25:05] And so in some ways, like I said, machine learning models can only generate based on the data that they've been fed. And most of the machine learning models and LLMs that we have for different modes of data, for images and voices and music and text,
[25:22] are very heavily biased in their data sets. They've been trained mostly on the Western world, English speaking and primarily white people.
[25:33] And so it's not really surprising that that's what comes out because that's the data that it was fed and there weren't in most cases actions taken to try to balance it and to provide something that's more representative.
[25:48] But yeah, we see this in a lot of places that the models for detecting faces. For instance,
[25:54] one of the young writers on Substack, I love her work. She's a high schooler in Georgia. Not Georgia the state, Georgia the country.
[26:01] And she wrote an article about face recognition and how it performs so, so much worse on people with darker skin than it does on white people. And it's worse for women than for men.
[26:14] And the consequences of having your face misrecognized can be really serious.
[26:19] It can be something as simple as it not recognizing dark skin when you put a hand out to turn on an automatic faucet. That's one thing.
[26:26] But in policing or in other areas, it's a whole lot more serious to have that kind of bias show up.
[26:32] And so it's something where we need people to be more aware and more responsible when they develop these tools, but also when they use them. Be aware that if you generate an image of CEOs,
[26:44] if it's just white men, stop and think about it and ask it.
[26:48] But ideally we wouldn't have to ask it. We would get unbiased representative images out of the tool in the first place. But we're not there yet. So in the meantime, it's on us as the people that use the tools to try to use them in more ethical ways, more responsible ways.
[27:05] Debbie Reynolds: I Have another story, this one. I mean, literally my mouth dropped to the floor when I saw this. I was on a podcast with someone.
[27:12] We had done, like, a promo image of my picture and the interviewer's picture. And he was white and I'm Black, obviously. And so I fed the image into one of the models and I said, make me an image, like a promo image, of this.
[27:28] It's not a photograph, but just kind of a representation of us. And so it got the name of the seminar right. It got his look, the look of him. The representation of him was somewhat similar to him,
[27:42] but the representation of me was a white woman.
[27:45] Karen Smiley: Oh, my God.
[27:47] Debbie Reynolds: Like, literally, I gave you my picture and you put a white woman there. I was like, I could not believe it. So I'll find that picture and show it to you.
[27:56] But I was like, I was so done. I was so done when I saw that.
[28:01] One of the things that is very interesting and interplay between privacy and AI is that a lot of what privacy or data protection tries to do is create a level of transparency and how someone's data is used.
[28:20] And AI is not transparent in that way. And so there's this huge tension there. And we're seeing this play out all over the world, whether it be in, you know, regulated places or lawsuits or different things.
[28:35] So this tends to be an issue that is a big issue of friction between the two. But I want your thoughts.
[28:44] Karen Smiley: Yeah, definitely. Transparency is probably the number one wish that I heard from the 70-plus people that I've interviewed. They wish that AI companies would be more transparent about what they're doing, how they're going to use data, and where they got the data that they used for training their tool.
[29:02] And that's super important.
[29:05] You asked earlier about AI ethics and what ethics are.
[29:08] That's something else that I summarize in the book. There are these different analyses that have been done of different AI guidelines and such,
[29:16] and transparency always comes up as one of them. And some people call it a principle. I tend to think of transparency more as an enabler because if you don't have transparency and you don't know if people were sourcing data ethically, if you don't know if they're having it labeled or annotated by workers that they're treating fairly.
[29:38] If you don't have transparency into seeing that, it's very hard to know if the other ethical principles are being upheld or not.
[29:45] So I can see it as a principle of its own, but I see it really, it's more fundamental, more foundational, that if you don't have that,
[29:53] you really don't have an ethical system.
[29:56] Debbie Reynolds: And I feel like a lot of times when people talk about transparency in AI,
[30:03] it's like a user or a company,
[30:08] what they think they want to know is what are the mechanics,
[30:13] what's the sausage making like inside the model? And to me, I don't think that's really what transparency should be.
[30:21] I feel like someone has to be responsible or accountable for what goes into the models, and someone needs to be responsible and accountable for what goes out, especially how they use what the output is.
[30:34] But I want your thoughts.
[30:36] Karen Smiley: Yeah. A lot of people,
[30:37] I think, are wishing for explainability.
[30:40] Explainability is one of the principles that a lot of the frameworks do identify, what they call explainable AI, which is what you're describing.
[30:49] I know what went into that. How did you come out with this recommendation that this person should or should not be approved for a mortgage? Just explain to me how that happened.
[30:57] And we actually ran into this when I was working in industrial research as well.
[31:01] We were working on a system called Asset Health, and it was for trying to predict when equipment was going to fail so that you could replace it before it failed.
[31:10] And this is especially important for the power grid because you don't want to lose power.
[31:15] And one of the things that we did, and I find that in general, machine learning models are always best if you've got people that know the technology and people that know the domain.
[31:25] So we were working directly with people who, for instance, knew how to service and diagnose power transformers that were failing.
[31:34] And one thing we found very early when trying to develop some initial models for this was that the experts always wanted to know,
[31:41] how did you come up with that? Especially if the machine learning model has a recommendation that's different from their 30 years of experience,
[31:49] why should they trust the model instead of trusting their own judgment? And it's not that they weren't open to something that they might have overlooked or that the machine learning model was able to detect that they didn't realize,
[32:01] but they wanted to understand it and to be able to believe it because they have a lot of experience that the machine learning model didn't capture. There are almost always context factors that maybe if you're not considering the right context factors, maybe you aren't getting the answer right.
[32:18] For instance, you could have a power transformer in the desert or one in the northern reaches of Canada, and the weather and climate are going to have an impact. And if you haven't factored those into your model, your model's going to be wrong, whereas the human would know that and take that into account.
[32:35] So I think when people are looking for explainability,
[32:38] that's certainly understandable, especially where it's going to have an effect on people's lives.
[32:43] The example of the mortgage approval being one, but there are so many others as well. If you're recommending anything in policing or in health care.
[32:51] You know, a lot of the models that were trained for healthcare,
[32:55] even for recommending medications and treatments for women, didn't necessarily involve women in their studies. And so if you don't know what the model was trained on, if you don't know whether or not the study included women, then you don't know if the recommendations being applied to women is really correct or not.
[33:14] And so I think that's the explainability aspect, and they're absolutely right about it. Transparency, to me, is a little bit different. I don't have to know how you built the model, but I need to know that you've covered the important questions.
[33:26] And I think that's part of it.
[33:28] Everybody shouldn't have to become a data scientist to be able to use AI safely and effectively. It's like a car, right?
[33:38] We all shouldn't have to be a mechanic to be able to drive a car safely. We need to know the rules of the road. We need to know how to maintain our car and keep it functioning.
[33:48] Put air in the tires, give it gas, give it oil,
[33:52] get the oil changed. But we don't have to be a mechanic. We don't need to understand everything that's under the hood, but we do need to know enough to ask good questions.
[34:00] And I think with AI, it's the same thing people can learn to ask good questions about where did the data come from and was this considered? And that's enough to make a difference.
[34:13] Debbie Reynolds: I think that's true. So what's happening in the world right now that's concerning you most in this era in terms of ethics and AI?
[34:24] Karen Smiley: Good question, I think.
[34:28] Debbie Reynolds: And it doesn't have to be ethics either. It could be anything.
[34:32] Karen Smiley: No, ethics is definitely one of the things that I spend the most time thinking about and trying to think about what we can do. So what I hear a lot from people and what I tend to agree with is that we're moving very fast in the world of AI, and the companies that are moving the fastest often aren't incentivized to act responsibly.
[34:57] and to think of the longer term.
[35:02] They're going after short-term gains and not really looking at the longer-term or systemic consequences. And I don't know that there's a good answer for that. The answers that I keep hearing are, well, we need regulations.
[35:15] Well, regulations always lag, and we need to think about how to contain the harm, not stopping progress, but limiting the damage to people in the meantime until we've got systems that are safe and trustworthy.
[35:33] Or, as one of my friends, a software architect that I admire, put it when we talked about it, we shouldn't actually be trying to get a system that we can trust or a company that we can trust.
[35:42] We should have systems that don't require trust.
[35:46] And I think that's an interesting perspective as well. So I do want to call that out.
[35:52] Debbie Reynolds: Wow, you're giving me a lot to think about. Right. I think that's true. I guess one of the funny things around regulation and where people say, well, we don't need regulation or we don't want rules because that's going to stifle innovation.
[36:06] And so when people say that, I'm like, the fact that we have stop signs and stop lights and lines printed on the street didn't stop innovation in the automobile industry.
[36:19] As a matter of fact, the industry wouldn't be as powerful today if we didn't have those things. Like, what would your life be like if you were driving to the grocery store and there were no rules of the road?
[36:31] Right. And so I think that's what we're really missing right now.
[36:35] What do you think?
[36:35] Karen Smiley: That's a great, that's a great analogy. I like that.
[36:38] Debbie Reynolds: If it were the world,
[36:39] according to you,
[36:41] what would be your wish for either privacy, AI, AI ethics, or whatever in the world, whether that be regulation,
[36:51] human behavior, or technology?
[36:54] Karen Smiley: Wow, that's the million, billion, trillion dollar question. Right.
[37:00] I think in an ideal world we would all have ethical,
[37:07] responsibly developed tools to use, and we wouldn't need to worry about compromising our values in order to gain the benefits of the tools, or to gain the accessibility, or to help overcome any obstacles or impediments. We would just be able to choose from
[37:27] ethical tools that don't harm the world, don't harm the people in the world,
[37:33] don't inflict biases or other damages on people. We would just have those choices. But the real question, I think, is how do we get to that world where we have those ethical choices?
[37:45] And I don't know the answer. But I think part of it is that there's a tendency in the world, I think, to just say, oh, well, the ship has sailed.
[37:54] It's too late. You know,
[37:55] the horse is out, it's too late to close the barn door, that sort of thing.
[37:59] And I think we have more power than we realize, and we shouldn't be so quick to give it up. I would like to see more people say, well, no, wait.
[38:09] I see it now and I'm not going to unsee it.
[38:13] I'm going to put pressure on the companies to do the right thing, to act ethically, to compensate or credit and get consent from the creators of the data. I would like to see that happen.
[38:27] I would like to see companies respect the privacy of people's data and not say, well, I'm going to use this picture from your camera roll and, oh, by the way, now I'm going to take every picture in the camera roll and do something with it or things like that.
[38:42] I would like us to have good choices, ethical choices and realistic choices, so that we can make truly informed decisions. I think the idea of informed consent right now is really not possible.
[38:58] Nobody reads them. Well, a very small percentage of people, maybe you and I and some others, read those terms and conditions and understand the privacy implications, but most people don't, because they're not written for them to read.
[39:11] So I would like to see that transparency and I would like to see companies rewarded and incentivized to do the right thing and to provide us with ethical, responsible choices.
[39:27] And, to my friend Mike's point, maybe it's not something that requires trust. And that's even better than saying, oh, yes, you can trust us, and then having that trust breached.
[39:37] Debbie Reynolds: I agree. I love the fact that you said that, that your friend Mike said that.
[39:44] Karen Smiley: Mike Jonas. He was the architect that I worked with at Wind River on the digital feedback loop and the embedded systems. And we had a conversation more recently about trust. And that was his take, that we shouldn't have to trust.
[40:00] We should be able to have control without giving trust.
[40:04] Debbie Reynolds: I'm with Mike on that.
[40:08] Totally. I'm totally with him on that. Well, thank you so much for being on the show. This is great. And definitely, people check out Karen and her substack and her book.
[40:18] So, Karen,
[40:19] tell me the name of your book and what your book is about.
[40:23] Karen Smiley: Yeah, the book is called Everyday Ethical AI: A Guide for Families and Small Businesses.
[40:29] And the goal of the book is really to provide guidance for everybody who's using AI or having their data used by an AI system and to give them some awareness of the risks that they may be taking that they don't know that they're taking,
[40:46] and to give them some actions about things they can actually do to try to protect themselves and their families and their businesses.
[40:55] Debbie Reynolds: Very good. Well, I'm excited about the book, and actually you asked me to write the foreword for the book, so it'll be really cool to see it when it's out.
[41:05] I am thrilled that you asked me to write the foreword. I really like your work and your brilliance. So definitely, people, check out the book.
[41:12] Karen Smiley: Awesome. Thank you so much.
[41:14] Yes. There was something else that I wanted to mention earlier when I was going through my career, but one of the other things that I'm doing, in addition to writing,
[41:22] is helping a friend start up a new company in the field of space, in what they call space situational awareness.
[41:32] And this has to do with the fact that space is getting more crowded and there's more risk of satellite collisions. And one of the problems, I come back to data again,
[41:41] One of the problems with space situational awareness is that we don't have good data.
[41:47] Most of the data comes from the ground.
[41:50] What they talk about with collisions is that we know our data is inaccurate, and we end up having to tell a satellite to move out of the way even if the chance of a collision is estimated to be only 1 in 10,000,
[42:04] because we just don't know and can't afford to take that risk of satellites colliding and creating debris and escalating the problem. So one thing that we're working on is how to get better data,
[42:16] not just from the ground, but from in space itself, putting some intelligence into satellites, into existing satellites, and the ones that are going to be going up in the near future.
[42:27] So I think that's a really neat area where you talk about the edge systems, and I can't seem to stay away from that.
[42:35] Debbie Reynolds: That's so cool. Well, I feel like a lot of the risks that people don't think about could be on the edge.
[42:44] Right. And so I don't think people are looking there as closely as they need to be. So it's very good that you're looking at that area, and that fascinates me. So I like satellites and stuff like that.
[42:55] It is.
[42:56] Karen Smiley: It's a really neat area and I'm really enjoying getting back into it. The founder is Amanda Chadley. She's an aerospace engineer and so she has all this expertise about satellites and I'm learning so much from her and about this whole area of space.
[43:09] It's a different part of aerospace that I hadn't worked in before, so I'm really enjoying that. And the idea that we could do something that would actually help to make space safer, by having better data and by doing more embedded analytics inside the satellite to help it avoid collisions, just feels like a really cool mission to me. And so I've really been having a lot of fun supporting her on that.
[43:32] Debbie Reynolds: Oh, that's so cool.
[43:34] That's so cool. So very, very, very nice. Well, thank you so much for being on the show. It's incredible.
[43:41] I always love to chat with you and love to follow what you're doing. People, definitely follow Karen on LinkedIn. She has a Substack. Look out for her book. It's amazing, and yeah, we'll talk soon.
[43:55] Karen Smiley: All right, thank you so much, Debbie.
[43:57] Debbie Reynolds: All right, you're welcome.