E238 - Temi Odesanya, Director, Responsible AI and Automation, Thomson Reuters

[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.

[00:12] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.

[00:25] Now I have a very special guest on the show, Temi Odesanya. She is the Director of Responsible AI and Automation for Thomson Reuters. Welcome.

[00:38] Temi Odesanya: Thank you, Debbie. I'm excited to be here.

[00:41] Debbie Reynolds: Well, this is exciting. This is very exciting for me to have you on the show. So you and I were at a conference in Spain, and then once we got back to the US, we had a call and it was great.

[00:54] So I'm really happy that you agreed to actually do this podcast and we were able to connect after that conference. Same here.

[01:01] Temi Odesanya: I know, I was also trying to get to you, but anytime I was able to talk to you, somebody else would steal you away.

[01:08] At least I'm glad we got to connect afterwards. So.

[01:11] Debbie Reynolds: Yeah. Well, you have a fascinating background. First of all, that conference we went to was amazing. It was just women leaders from all different types of industries. It was very unique and it was great.

[01:22] I really enjoyed it. So tell me a little bit about your background, your trajectory. You're fascinating. I love the work you do in technology. So give me a background of how you got into artificial intelligence and automation.

[01:36] Temi Odesanya: Thank you for the comment. So, yeah, I got into it over 10 years ago. It started when my family all flew to the Netherlands, to Leeuwarden, to work on managing electronic LED boards for my family's business.

[01:54] And that's where my interest in technology was piqued. So, you know, I don't know if you're very familiar with how African businesses are run, where the kids have to take over the company, but that hasn't happened in my case.

[02:05] But yes, this actually piqued my interest and I went into software engineering, deploying open-source enterprise resource planning tools. Thereafter I was responsible for managing the web analytics platform back then, which was to provide insight about traffic to our customers as well as the vendors and advertisers who displayed on those busy roads.

[02:33] So I was very fascinated with that. Back then it was data science, before everyone started to call it AI. So I was very fascinated by data science. I took two postgraduate degrees in Canada, as well as got a scholarship to study data science in Italy too.

[02:49] And since then I've worn different hats: I've been a data scientist in the banking sector, worked on emerging technology programs, and dabbled in and out of technology. But my background in privacy started when I was a data scientist analyzing customer data and I saw the amount of data that we had,

[03:11] both public and internal to the company. A little part of me was scared, but the other part of me was very fascinated. And over time, as I navigated various industries and led different teams, I saw the importance of data governance, particularly when you're having to retain customer data,

[03:29] having to consume it, store it, as well as delete it. Compliance then became something I was very fascinated by, and that's why I landed at Thomson Reuters.

[03:38] Debbie Reynolds: So yeah,

[03:39] that is fascinating. I just have been so admiring of you and your career and your path. You're just so curious. I think that's amazing. That's the most important thing that you can be as a technologist for sure.

[03:53] But tell me a little bit about this; it's a question that I get asked a lot, and you're someone who's a data scientist and has a long career in technology.

[04:03] A lot of people don't understand how privacy and artificial intelligence connect and how AI can make privacy more challenging. So can you talk a little bit about that?

[04:15] Temi Odesanya: Right now, the overall umbrella is responsible AI, but within responsible AI sit privacy, risk management, data governance, model governance, legal, and a whole lot more. So privacy connects because you're always using people's data, like their sensitive data, their PII, and also public data.

[04:39] And sometimes we forget that we are actually impacting people's lives. And so when we use that data without the right knowledge, without understanding the limitations, and without accounting for the bias in the assumptions that you've made about people and the inaccuracies that you've made about people, then at the end of the day,

[05:01] the decision or automated decision-making that it causes is faulty.

[05:07] From a personal experience, I was just sharing with a friend that I was flagged for money laundering, and I keep saying I wish I had enough money to be flagged for money laundering. But I was actually flagged for money laundering by the decision-making tool that was used when I was trying to close on my property, and as a result I had to pay fines.

[05:26] But then it was a battle that I couldn't fight, and I had to look for another broker. So many times, we need to understand that when you're using people's data, whether you're scraping it for data processes or you're using it to build AI models, this is the data that you need to get consent for. This is also the data that you need to be aware of.

[05:48] And this is also the data that you need to protect, because that's how people come to trust you and your brand.

[05:55] Debbie Reynolds: Yeah, let's talk a little bit about, I guess, harm and safety. I guess those are two issues I want to talk about with privacy. So the harm, and I think you just talked about it, which was someone taking a model or information and making an incorrect assumption about someone that could possibly be harmful to their,

[06:19] you know, life or liberty or something that they want to do. And then also, to me, another issue that I don't think we talk about enough in privacy is that, you know, divulging someone's private information could create a harmful situation for them, like maybe stalking or different things like that.

[06:38] So just tell me your thoughts about kind of that human impact of potential privacy violations or what we're doing with automation and data that can create those problems.

[06:53] Temi Odesanya: So I don't know if you saw the news about the I-XRAY app.

[06:58] That was very, very fascinating, because I saw people laughing, but then some people were also not scared about it, and I'm like, okay, this is a problem. So just for the audience who's not aware of what this app is, the way it works is by using the Meta smart glasses,

[07:13] and it has the ability to

[07:16] take a live video stream to Instagram from the glasses. Then a computer program monitors all of the faces that are there, searches through databases, like public databases, and tries to identify whose face that is, as well as their public data online.

[07:36] So going back to your question about the harm of this, so imagine the misuse of this tool and this data leakage that is happening.

[07:46] Although this is not publicly released, and this is just to heighten, you know, the conversation about harm, I still believe that people are starting to use that particular kind of data. There have been news articles about governments also using facial recognition tools on the streets to profile people.

[08:05] And nowadays there are heightened cybersecurity issues in which this data is released. So the harm that people can cause with these tools, I don't even think we can fathom what that looks like, because, as a Christian, it says the heart of man is desperately wicked.

[08:21] You know, and imagine someone taking that information about your children, putting that online, being able to stalk people. Back when I was still very actively dating,

[08:33] I would take each person's Twitter account and I would scan it for sentiment, because I wanna know if you're angry. I wanna know if you're…

[08:42] And people were not aware that I was doing this, but I was actually doing this with their public information, and I was also able to do it for my friends.

[08:49] And then we would find, like, their mom's mom, their grandmas, and things like that. So at the end of the day, what I'm trying to say is that you can cause different harms depending on the purpose of what you're trying to use it for.

[09:04] You can start on a noble course, but then it pivots into something that you can't figure out.

[09:10] And looking at the way news spreads, looking at deepfakes and things like that, we've seen situations where nude pictures of teenagers were exposed and that child committed suicide. Right.

[09:23] So it goes across the board, whether you're a child, a parent, an individual, a sister or a brother. This can actually affect anybody.

[09:32] Debbie Reynolds: I guess the analogy I like to use is, let's say you go into a grocery store and you step on the mat that opens the door, right? And so it opens the door for you, but then the next person behind you steps on the mat and it doesn't open the door for them.

[09:49] And it's like some people have no idea. First of all, these are the issues that we're talking about: harm, automated decision-making creating harmful situations, and safety.

[10:01] I feel like some people who are in the technology space either don't think about it, or they feel like, well, this can't possibly happen to me, or this has never happened to me, so I'm not really considering the harm.

[10:13] But when you're developing tools and products that are supposed to be used by everybody, you really should be thinking about not just the benefits of the technology, but also the downsides, right?

[10:25] Temi Odesanya: Yeah. And just to narrow that down, it's a complex issue because what is harmful to me culturally might not be harmful to you culturally. So I think organizations need to understand the environments that they play in and be willing to adapt to the requirements to ensure that the tools or the products are safe for people.

[10:51] And when it comes to technology,

[10:55] we all know that it's a capitalist world, you know, capitalism, and people care about the bottom line. At the end of the day,

[11:03] if the metrics change to include responsible practices,

[11:09] as well as, you know, balancing that with making money or being profitable, I think the world would be a better place. So once we start to encourage and educate people about the need for responsible practices while pursuing their businesses, then people will include this.

[11:28] So when we talk about privacy by design and ethics by design and things like that, it'll automatically be fed into the processes, the data, the thinking, the thoughts around these business cases moving forward.

[11:43] Debbie Reynolds: What is happening in the world right now that's concerning you about artificial intelligence or privacy?

[11:50] Temi Odesanya: A couple of things. I already shared one, which is the use of personal data. But the other thing, which I think we chatted about the last time we spoke, is the disconnect between advancing technology, the pace at which it's changing, and the ability of privacy laws and regulations to keep up.

[12:09] What I've experienced, and again this is my personal experience, is that I've seen public calls for policies that are not practical, and I've also seen regulations that are,

[12:21] well, definitely not realistic. And I've also seen the flip side, where it's practical, it's realistic, it's achievable, and it's novel to have. Right. I believe that we all need to come together to educate ourselves on the possibilities of AI and what it can do, in order to come up with what these regulations should look like.

[12:39] And the other thing that bothers me, specifically as a Black person, is meaningful consent for the use of my data. Right. I'm finding out that my data is being used in different places.

[12:52] And we all know that history is not entirely truthful, and this has been built into the AI models that everyone interacts with on a daily basis, knowingly and unknowingly. This is something that I believe we still need to keep having conversations about.

[13:10] Also,

[13:12] and I'm also going through it myself, individuals and professionals have to be aware of the impact of what they do, as well as the legal and technical aspects of how these things can be achieved.

[13:27] So as a responsible AI professional,

[13:29] I need to understand what risk is. I need to understand how to handle risk management. I need to understand what privacy is. I don't want to just be a data scientist and have no idea what privacy is.

[13:39] I don't have to be a privacy expert, but I need to be aware of what that is and how I need to include it in my project, in my process.

[13:49] And this is not solely the responsibility of a technical team; as an organization, we need to start seeing cross-pollination of skills and cross-functional teams working on data science projects.

[14:01] Debbie Reynolds: I agree with that.

[14:03] So I think companies do need to change. Obviously people will have siloed types of work, but when it comes to data and how data flows work within an organization, there needs to be more multidisciplinary focus, more data sharing, and more knowledge sharing among those groups.

[14:27] Because it's hard to protect data if you only know a little bit about why companies are handling data and what they're doing with it.

[14:36] What are your thoughts on AI in terms of how data flows within organizations? Because I think it's impossible to handle it if everyone's in their own silo and they're not really thinking across the organization about how that data flows and why it flows in those ways.

[14:55] And then having those people, those champions within those silos so that they know, hey,

[15:02] we're handling data or we're doing something new that we wouldn't do before. And here's the concern that I have and how you can partner with these data teams on those issues.

[15:14] I want your thoughts. You had mentioned privacy by design, and I guess I'm gonna throw out something. Maybe it's like ethics by design or responsible AI by design.

[15:24] I think that especially when we talk about privacy, a lot of people talk about laws and regulation, and there's nothing wrong with that. But I feel like we need more foundational guardrails around how we handle data, things that regulation can't really handle.

[15:44] Right. Because, especially in the U.S., regulations typically don't happen until something bad happens. Right. So something bad happens and it's, oh, let's create Johnny's Law. Oh my God, what happened to Johnny?

[15:55] Right. You know, I think companies need to have more foundational ways to prevent issues before they become legal issues. But I want your thoughts on that.

[16:08] Temi Odesanya: So everybody says data is the new oil. Right. And right now everyone is talking about AI and I think we're not really talking about data and data governance first as a foundation before building AI tools or AI models.

[16:26] You're very right.

[16:27] We need to look at data governance as a practice first. So, you know, data minimization, data retention, data quality, all of those things need to be addressed. Now, depending on how big your organization is, you might not be able to boil the ocean, but at least you can start with your critical data elements,

[16:53] which is the data that brings in financial rewards or is something very important to your company, however you define it. Once those things, those metrics, those qualities, are addressed,

[17:09] then comes,

[17:11] I would say, responsible AI by design. So before I create a model, I fill out a data impact assessment,

[17:21] an assessment that contains a lot of questions around data governance, model governance, privacy, IP, risk, and the rest.

[17:30] Once those things are filled out and the right compliance team addresses them and works with the business function to define how to balance the project with responsible practices, then you start to develop the models.

[17:45] And before you deploy these models, it's having the right documentation, having the right model risk assessment done, the right monitoring plan, and then, you know,

[17:59] having to answer questions about things like consent, and having to look at anonymization of data in your models before you actually deploy. Whether you are using a third-party tool or you build an internal tool, these things have to be checked as a process,

[18:19] as a guardrail. And better still, some of these controls can be integrated into your model registry or wherever you build your models and your data, so it's not a manual process done

[18:32] every time; the data is synced and streamlined across systems, and that way it becomes natural for people, it's innate for people, to follow these rules. So I believe that as organizations start to enforce and emphasize the need for these practices before models are developed, like having the right sandbox for people to play in,

[18:59] to implement some of those things before they go to production, that will be helpful. Bringing in the right people will be helpful. Having these conversations when those use cases are being developed, or even being thought of, is a way to,

[19:14] I'll say, imprint this into the culture. That's one way I recommend companies start.

[19:23] Debbie Reynolds: I feel like we need to change from the big data era. Remember the big data era? We were like, let's collect as much as possible because data is so valuable. And a lot of companies ended up with a lot of tech debt as a result of that, and then a lot of garbage that they couldn't do anything with.

[19:40] Right. So if you're collecting data and you don't know what it's collected for, you may not be able to get the right insights that you need or that you want to get.

[19:49] So it's about thinking more, being more circumspect about what data you actually have, and not trying to throw everything into a bucket and hoping, you know, something comes out right. I mean, my analogy is, you wouldn't take everything in your refrigerator and throw it into a pot and try to make a meal.

[20:07] Right?

[20:08] Temi Odesanya: That's right, that's right. And back to your point, there's the rise of test data management tools, where you don't actually have to start collecting all of this data to validate your use case.

[20:20] You can create synthetic data to validate your use case, at least to see what type of data points you need if you don't have them, and then determine how you need to get that.

[20:30] So I think that's something that companies need to start looking at. You don't have to scrape people's data online.

[20:38] Or another thing that we talked about, where we were seeing,

[20:44] I think LinkedIn was guilty of this, using people's posts to train AI models. And who would have thought they were collecting all of that data? But I'm sure they just, maybe, and this is me just giving a scenario,

[20:57] maybe they're like, we scoop everything off LinkedIn, put it in our backup, and then when we need it, we just go get it out. So yeah, you're right. You have to be strategic about what data points you need, why you need them, and what you need them for.

[21:11] Debbie Reynolds: Just to dovetail on your LinkedIn example, I'll throw another one in from a different industry, which is automobiles. So General Motors had a situation where they were collecting people's data, and the people didn't feel like they really understood why their data was being collected, and they were getting their insurance canceled and different things like that.

[21:31] And there was a huge consumer backlash, and GM decided they wanted to stop sending data to a certain, you know, company. I think there's a lawsuit happening now in Texas or something about that.

[21:44] But I think the thing that we're seeing now is that consumers are waking up to these issues and they're really, you know, lashing out. They're like, hey, I didn't consent.

[21:58] Right. They didn't consent. So even, you know, LinkedIn or GM, they maybe haven't done anything legally wrong. Right. Maybe they followed the law, or maybe they took advantage of places where there are gaps in the law.

[22:14] But the fact of the matter is consumers are really upset. And so being able to really communicate this in ways that consumers really understand, and giving them choice, I think is going to be important in the future.

[22:31] Because if consumers don't trust you, they're not going to give you good data. Right?

[22:35] Temi Odesanya: Yeah. And going back to your point, this is one of the reasons I like Apple. So I'm taking, I think I shared with you, the IAPP CIPM course now. I've been busy, so I've reduced the pace.

[22:50] But we were asked to read Apple's privacy policy. And then I discovered that Apple actually has this feature,

[22:59] you can check on your iPhone, that tells you which apps are using your data and how many times they accessed it. And so, a funny story: I always use Google Maps, but this one day it was raining and it was crappy.

[23:13] So I opened,

[23:16] was it Waze I opened, or Lyft? I can't recall the app, but I think it was Waze. I opened Waze and I immediately closed it. Like, immediately. I didn't use it.

[23:27] Tell me why, 48 hours later, I saw that Waze had picked up my location multiple times, and I'm like, I didn't even use the app, right? So this actually led me to going through all my apps and turning off location, turning off the services, things like that.

[23:43] I think, again, this example goes back to your point of how companies are using data without even needing to justify it legally. Ethically, are you supposed to use that data?

[23:56] Are you following the principles set forth by your organization? What are the norms, you know, that you set forth in your organization, and are your employees adhering to them?

[24:08] Because it's one thing to have it, and it's a different thing for people not to follow it.

[24:12] Debbie Reynolds: So, yeah,

[24:14] Yeah, let's talk about ethics. Ethics is interesting because ethics, in my view, is sort of not legal, right? It's kind of your guidelines, your morals, your values, things like that.

[24:30] And, you know, not all laws are ethical either. I'll just throw that out there.

[24:36] Temi Odesanya: I agree.

[24:38] Debbie Reynolds: But how do you go about communicating the importance of ethics when organizations want to use AI?

[24:50] Temi Odesanya: Okay, so before you decide to do business with anybody, you have to understand what their values are, right? So for me, before I even develop a friendship with anybody, or buy a tool, or do anything, it's like, okay, what values do I tie to this, and what benefit does it bring me?

[25:11] How does it serve me? So going back to your question, for organizations it's: as a company,

[25:17] what's our mission?

[25:20] What's our vision? What's our brand identity?

[25:27] What are our principles? Because

[25:33] some people's mission might contradict the use cases, right? And vice versa. So a good example: creating war robots for NASA might align with their mission, but for, let's say, a health company, having to create war robots goes against your mission and your values and everything, right?

[25:56] So I always ask: what's your mission? What are your values? What are your principles? How do you want to do business? What is your identity? And how does your use case align to that first? What are your customers' expectations thereafter?

[26:10] What does the law say? So there are times where you have to forgo a feature because of what the regulation says, or there are times you have to forgo a feature because of what your mission and your brand stand for.

[26:25] Right. So we start on that note, and once we're able to get that out of the way, it's then saying, okay, for every use case, here are the criteria that matter to us, here's the risk we're willing to accept, which again ties back to vision and mission, and here's how we want to deploy it.

[26:44] So, you know, when you look at success metrics too, how do those tie back to what we stand for as a company? Right. So I'll give an example. There was a particular educational company where one of the executives approached me and said that they use facial recognition during exams to identify who's cheating and who's not cheating.

[27:13] And they also use that data to recommend educational courses and materials. And at the end of the day, when they tried to scale in certain regions, they couldn't, because the regulations were against it.

[27:28] And I told them, I said, well, in Africa, for example, I haven't seen a strong regulation on facial recognition. In Europe, I've seen one. Fine. We need to change your business model.

[27:40] However, do we really need facial recognition for some use cases that you're selling? Right.

[27:47] Are you really sure that that feature is the most prominent feature that can make you money? Or are there other ways to go about assessing whether people are cheating or not?

[27:59] Or do we even need to change your business model for certain types of products in that market? Right. These are conversations that I would say need to be had. Sometimes it can be complex, but if you start from the foundation of why you exist and why you're doing business,

[28:16] then, you know, that would help. And there are some founders who have unethical practices, and we see them share those online.

[28:25] So you should expect that if you're going to create products or work on projects for that type of company,

[28:31] you're going to have to do unethical things. And it's up to you to decide if you want to work on that or if you don't want to work on that.

[28:39] Debbie Reynolds: Yeah, I'm glad to see there's more conversation around responsible AI and ethics because I don't know how you feel, but I feel as though you can innovate without hurting people.

[28:53] So I don't think trying to create safety or a safe space for people will stop innovation. I don't know. That's just my thought. What do you think?

[29:03] Temi Odesanya: So,

[29:05] I'm taking a Stanford live class on ethical AI, and there was a use case in it which is very popular, the Omelas use case. I don't know if you're familiar with it: a society is thriving, everybody's joyful, but the prosperity of that society depends on the suffering of a child.

[29:27] And the question was, if you feed that child,

[29:31] the society is going to be affected. So some people were okay with it. Some people acted like they didn't know. Some people left the community because they couldn't bear to see that child suffering.

[29:40] And so the conversation around that use case was, do you think this is fair? And how would you address that? And I said, this is real life.

[29:50] This is life right now, right?

[29:54] There are technologies you can deploy that would not have any consequence. So for example,

[30:02] the purpose of a technology might not have any consequence.

[30:07] However, the evolution of that technology might then bring consequences later. So a good example: think of a case management system or medical records, right? I'm collecting all your medical records for the purpose of storing them for you and for them to be accessed by medical professionals.

[30:25] Now, the purpose is not harmful. However, when it now comes to innovating that feature to make it personalized,

[30:34] then you start to hear things like, now I'm using that data. And in using that data, there are risks, such as leakage of that information,

[30:44] the risk around bias, or even the risk around inaccuracies when handling that data. So back to your question. Yes,

[30:53] you can deploy a technology without affecting people. However, it doesn't hold true for all types of technology, all types of use cases, that there won't be risk. And if those risks materialize, they become issues, and those issues have consequences.

[31:10] So it's a very, very complex

[31:14] use case, a complex discussion, because like I said, what is harmful to me might not be harmful to you. So, you know, another quick example: we can all eat healthy food, but is that food really healthy for you?

[31:29] Right?

[31:30] So again, it depends on how the use case is used, where it's used, and who is using it. It might not be harmful, but depending on the purpose, it becomes harmful.

[31:41] Debbie Reynolds: Yeah, that's a great analogy. I hadn't heard that one before. I'll probably use it. I'm going to steal it from you. I'll give you credit.

[31:49] Temi Odesanya: I picked some from you too.

[31:51] Debbie Reynolds: Oh, thank you. Thank you. So if it were the world according to you, Temi, and we did everything that you said, what would be your wish for privacy or artificial intelligence anywhere in the world?

[32:03] Whether that be regulation,

[32:06] human behavior or technology.

[32:09] Temi Odesanya: So a couple of

[32:10] wishes. If wishes were horses, right? One is better accountability and protection of vulnerable groups, especially kids, especially minorities.

[32:23] I think for everybody, we should all feel protected.

[32:28] Particularly heightened safeguards for our sensitive data. I don't want my health records spilling out in public. Right. I don't want, for example, to go through a divorce and see my divorce papers online.

[32:42] Right. All my genetic data being leaked online.

[32:47] People need to feel protected.

[32:49] People need to build trust.

[32:53] I am hoping that trust will become the future oil if it's not currently the new oil, instead of just data.

[33:02] And the other thing is,

[33:05] personally, I've become more aware of what I share with the public. So where I go, what I eat,

[33:13] where I visit,

[33:14] I'm being more sensitive about what I post so that I don't make myself a target for people. It's something that I want everyone to be aware of. And then lastly,

[33:30] it's being able to actually protect people when their data is leaked online. Because I've seen situations where somebody's data gets leaked and the whole world comes crashing down on that person, because the judgments start flowing.

[33:46] The person didn't purposely release their records, but they're online. We have to address why they're online, not criticize the person for what they did in private that is now online.

[33:58] Right. So that's something I'm hoping will play out in the future, as well as very realistic regulations that protect everybody.

[34:11] That is something I'm looking forward to, and organizations employing responsible practices when defining use cases or building products.

[34:22] Debbie Reynolds: Those are all good wishes. Those are all great wishes.

[34:27] Temi Odesanya: Well, wishes are awesome. We're going to see what happens in the future.

[34:33] Debbie Reynolds: Definitely. Well, thank you so much.

[34:35] Temi Odesanya: How about you, though? I didn't hear yours. Yes.

[34:38] Debbie Reynolds: Tell me you're turning the tables on me now.

[34:41] I want to see privacy as a fundamental human right in the US and it's not yet. And I think that will close some of the gaps that we see between a human versus consumer type of issue where some companies feel like, well, I can do something harmful to you because you're not my consumer.

[35:00] Right. You don't consume my product. So I don't really care about your data. So I think that's a huge gap. And then,

[35:08] you know, I want companies that are doing these more emerging technologies.

[35:15] They are very good at touting the benefits, but they don't really talk a lot about the downsides. So it's about balancing what those benefits are versus the risks, and not having it be like, oh, you're trying to stop innovation.

[35:29] I think,

[35:31] for example,

[35:32] with the automobile industry, the fact that we have stoplights and stop signs and paint on the roads and guardrails, that did not stop the innovation in the automobile industry. Right.

[35:45] I think that's what we're talking about when we talk about regulation. We're not talking about you can't do certain innovation. We're saying do it in a way that actually helps people and not harm people.

[35:56] And so that's my goal.

[35:58] Temi Odesanya: Yeah, I do agree with you, and I'm hoping that becomes true legally. So we'll see. We'll see.

[36:08] Debbie Reynolds: Yeah. Well, thank you so much for being on the show. This is great. As I expected.

[36:13] It's great to actually get your point of view, and I love what you're doing, and hopefully we can find ways to collaborate in the future.

[36:21] Temi Odesanya: I can't wait. But thank you for inviting me here. This has been good. I've also reflected on some of the things that I've said. I'm also going to listen to it because I have to uphold what I practice too.

[36:33] So. Yeah.

[36:36] Debbie Reynolds: Oh my goodness. Well, thank you so much. And we'll talk soon.

[36:39] Temi Odesanya: All right. Thank you, Debbie.

[36:41] Debbie Reynolds: You're welcome.

[36:41] Temi Odesanya: Have a great day. Okay.

