E281 - Mojisola Abi Sowemimo, Data Privacy and AI Governance Expert
[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.
[00:11] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast, where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.
[00:24] I have a very special guest on the show today, Mojisola Abi Sowemimo.
[00:29] She is a data privacy expert and an AI governance expert. Welcome.
[00:34] Mojisola Abi Sowemimo: Thank you, Debbie. I'm glad to be on this show today.
[00:36] Debbie Reynolds: I was excited when I saw you pop up on my timeline. I was like, oh, my goodness, this is somebody I don't know.
[00:43] How do I not know this woman?
[00:45] Mojisola Abi Sowemimo: But I know about you. I follow you, and I follow your trends in your podcast. So I'm a huge fan. I'm glad to be on.
[00:53] Debbie Reynolds: Well, your profile caught my attention,
[00:56] obviously from privacy people, and I like to hear privacy folks that are talking about AI governance, and that's something that you speak about quite a bit.
[01:05] But your journey, how did you get into privacy and what was that journey like?
[01:12] Mojisola Abi Sowemimo: That's a great question. Thank you. So my journey to privacy is something I find really exciting because when I think about it, I get excited all over again.
[01:20] My background is in process improvement, project management,
[01:23] business process improvement.
[01:24] And the more I worked on projects that involved personal data,
[01:29] the more I saw that there were different hurdles to be crossed or covered whenever any process that involved personal data was being talked about or being improved upon.
[01:42] And sometimes you'd have the team say, oh, this project is involving these processes, and you have to reach out to the risk department or the risk management team.
[01:50] And that stirred up a curiosity in me to know why there were these extra steps. And then I got to find out about data privacy and why these processes had to be extra secure, to make sure that we don't have unnecessary access to personal data and all that.
[02:07] So then I said, okay, I need to know more about this type of processes and niche.
[02:13] So I got certified in GDPR. My first IAPP certification was the EU certification, which covers the GDPR.
[02:21] And the more I learned about it, the more I got intrigued about it. And that way I was able to cross over from my process improvement background and project management background to data privacy.
[02:33] So sometimes when I work on data privacy projects, my business process improvement lenses come up and I'm thinking about how to streamline a process or what have you.
[02:41] So that's been my journey.
[02:43] Debbie Reynolds: That's so cool. Privacy wasn't really its own industry,
[02:53] it wasn't its own segment for a really long time, outside of people who were doing a lot of cross-border data transfers. So the fact that I see so many people interested in this field of study, I think, is really fascinating.
[03:09] Also I feel, and I don't know how you feel.
[03:12] To me, one of the things that really got me interested in it,
[03:18] well, I have a selfish reason,
[03:19] so I want to know what my rights are. So that's how I got into it. I'm like, wait a minute, like what are my rights? You know.
[03:25] But then also I feel like at the end of the day, in addition to helping companies, you're actually protecting people.
[03:33] So to me, that really made me feel good about doing this work. But I want your thoughts.
[03:39] Mojisola Abi Sowemimo: I agree with you on that. Like I say to myself, and like I tell audiences, personal data today is the new gold.
[03:49] There's so much you can do with someone's personal data and when you're entrusted with that responsibility,
[03:54] you have to be accountable for it.
[03:56] I like to work with organizations to ensure that they are accountable for the personal data that they have in their care, in their trust.
[04:04] Mojisola Abi Sowemimo: And I see, for example, when you talk about the privacy notice that companies put on their website,
[04:09] I see that as like a love letter or promissory note to their customers and their clients about what they plan to do with their personal data, and they need to be held to the standard of it.
[04:19] And I agree with you on the fact that it also covers administering people's rights, making them aware of what they can ask organizations for. For example, what do you do with my data?
[04:33] What part of my data do you have in your custody?
[04:36] Holding organizations accountable for that involves developing a process behind the scenes.
[04:41] So that way, when that opportunity comes, they're able to come out and say, yes, we do this and that with your data, and this is what you can do if you want to know more.
[04:51] So I do agree with you on that: giving people the ability to exercise their rights.
[05:00] Yes.
[05:01] Debbie Reynolds: I have never heard anyone call it a love letter.
[05:06] I think if companies thought of it that way, it would be easier for them. Because really, I think the difference now is that before, companies felt like, as long as I gave you a product or service,
[05:20] I didn't owe you anything else beyond that. So what these laws and regulations, and also consumer pressure, are saying is, hey, you know, our data is very valuable,
[05:32] right?
[05:33] I want to do business with you and I want to know that doing business with you is not going to create more risk or more harm for me.
[05:42] And then for me, on the business side, it should be more of a bottom-line issue, because customers who don't feel comfortable with companies either don't give them accurate data,
[05:54] right.
[05:55] to protect themselves, or they move somewhere else. We're seeing that a lot. For example, many years ago Apple rolled out their App Tracking Transparency feature, which in effect opted a lot of people out of advertising because they now had to opt in, and Apple actually had their biggest quarter ever right after they did that.
[06:18] So to me I feel like that's telling the marketplace, hey, people really care about their data and they want to be protected.
[06:26] Not just the fact that you can send them a fair credit reporting notice when you have a breach. It's like it goes farther than that.
[06:33] Mojisola Abi Sowemimo: I do think it goes further than that. And now, a company's ability to demonstrate that it can secure people's personal data is more of a competitive advantage today, because people are paying attention.
[06:48] They're paying attention based on the different news going around, like breaches happening and personal data ending up on the dark market. It's making people pay attention, so they want to know what you're doing with the data.
[07:00] So having that ability to be transparent about what you do with your data, be accountable about what you're doing with the data helps to build that brand loyalty among consumers.
[07:11] So that way they feel comfortable to do business with you, because they know what you're doing with their data, and they trust you that you will handle it with care and protect it as much as you should.
[07:22] So I do agree with you that it helps: it builds competitive advantage, it builds brand loyalty, and it also helps to strengthen the brand, because really, there's no value you can place on your brand.
[07:38] You can't quantify it. And there's so much more to get from your brand, because in the future, as you grow, your business data might not speak for you, but your brand quality will speak for you.
[07:53] The integrity that you've built over time with your consumers will speak for you, speak volumes for you.
[07:59] Debbie Reynolds: Very good.
[08:01] I want to talk with you about something that is becoming very apparent to me. I've been talking about it for a while, but now so many companies are trying to rush into implementing artificial intelligence,
[08:19] and they're starting to hit this issue. On one side, a lot of companies feel like, well, let's rush into AI; we don't want to hinder innovation, so we don't want any laws or regulation around AI. That part I understand.
[08:35] the part that people don't understand,
[08:37] and I want your thoughts here,
[08:52] is that when these companies rush into AI, they're thinking, oh, I don't have any AI obligation, there are no regulations there. And then they hit the data privacy wall.
[08:52] Right? Because it's like, okay, now the thing that you're doing with data,
[08:56] regardless of what you call the technology,
[08:58] you're still handling someone's personal information and you still have obligations there. And I think that's the part that companies get wrong about AI, especially when we're seeing certain jurisdictions trying to say, we don't want governance because we don't want to curb innovation in this space.
[09:20] It is difficult,
[09:22] in my view,
[09:23] historically for innovation truly to take off if there aren't guardrails.
[09:31] So, you know, on that note, we actually had a podcast guest, the Privacy Commissioner of New Zealand.
[09:38] Mojisola Abi Sowemimo: Yes, I listened to that podcast. It was a great podcast.
[09:41] Debbie Reynolds: Yeah. But I love what she said, which was the reason why we can go fast in cars because of brakes.
[09:47] And I was like, wow, I didn't even think of it that way.
[09:50] I didn't even think of it that way. So it's like this: the innovation, and the speed that you want with that innovation,
[09:57] in order for it to go fast, you have to think about those safety things. But what are your thoughts?
[10:03] Mojisola Abi Sowemimo: Yes, and I completely agree with you. I believe that that's where organizations get it wrong.
[10:09] And the reason I say so is because people have this assumption, and I say people because people are the ones behind the organizations, that because they're using artificial intelligence, they don't need to follow the normal protocol of developing a privacy framework for their operations,
[10:29] which is completely wrong. And it's wrong because you are dealing with personal data.
[10:36] You use personal data to train your AI models. Your output, be it a product or a service that you're deploying AI for, is going to be using or interacting with personal data somehow, some way. Develop a framework internally before you deploy the systems.
[10:53] It ensures that you already have a governance structure in place that helps ensure that everyone is held accountable and that you're transparent in your dealings.
[11:03] Take, for example,
[11:04] before everything became saturated with AI, before the advent of AI, when we had normal business processes: we've been advocating for organizations to have privacy programs.
[11:15] Debbie Reynolds: Right?
[11:16] Mojisola Abi Sowemimo: And the privacy programs, what they do is to help you to create a framework where your use of personal data is documented.
[11:24] You classify your data,
[11:30] identify and analyze your risks.
[11:30] Debbie Reynolds: Right.
[11:30] Mojisola Abi Sowemimo: ensure that you mitigate them, with a process in place so that risks are identified and handled in a timely manner. And also, you implement the privacy by design principle.
[11:41] And then also, you ensure that you have your regulatory mapping going on. That's a very high-level overview of the privacy pillars that organizations had to have in place before the advent of AI.
[11:53] Now, with AI,
[11:54] it doesn't take away that framework. It only emphasizes the fact that you need to make sure that this framework is robust enough to handle the additional risks that come on board.
[12:07] Right.
[12:07] And if you notice most of these AI regulations and laws that are in place today,
[12:13] they're emphasizing similar things,
[12:16] similar things, in the sense that you need to be able to identify your AI systems and how you're using AI.
[12:22] And the reason for this is because everyone has to be held accountable.
[12:25] The organization has to be held accountable for the data that has been entrusted to you. How are you using it? Do you have consent for the data you're using?
[12:33] This is the time when you cannot push aside a privacy framework or program; you need to have one. And we're not just talking about the EU AI Act. There has to be a governance structure internally,
[12:44] so that way the organization is demonstrating transparency.
[12:49] When you have your AI models drift or hallucinate, there's a track that you can follow to identify and rectify them. Even better, the goal here is to actually prevent them from happening rather than be reactive.
[13:02] Right. Being proactive rather than reactive always goes a long way to ensure that there are fewer incidents, and that you have a mechanism for all this.
[13:12] So if anything, this is the time to ensure that there are enough structures in place to demonstrate responsible AI.
[13:22] It's all about building that trust in your product,
[13:25] in your organization, with your consumers and your clients.
[13:29] And the AI regulations and laws that we see today all emphasize the same things, really. Basically: are you monitoring your use cases? Are you evaluating your impact on individuals?
[13:39] Are you able to manage complaints? Do you have a structure in place to be able to ensure that you're able to demonstrate compliance to all these requirements by the different AI rules and regulations?
[13:49] And for the privacy profession, I think this is an exciting time to be in privacy, because we on the privacy team are like translators of risk today between the legal teams and the technology teams.
[14:05] Because an engineer,
[14:07] you can ask an engineer to develop an AI governance structure, and it could be a good attempt at doing it,
[14:12] but they're not able to see the different facets that a privacy specialist can see, with experience and with an understanding of the requirements for the organization.
[14:25] This goes beyond the legal team, it goes beyond the technical team; it's now a privacy function. And that's why I always recommend that when organizations are trying to develop an AI governance structure,
[14:40] they don't ignore the privacy part of it, because it's a very relevant part. It's the part that ensures that you have your assessments in place: your vendor assessments, your AI risk assessments.
[14:52] It also helps to ensure that you're AI-regulation ready, in the sense that you're in compliance with the EU AI Act, if you fall under EU jurisdiction, and also with other emerging US state AI laws.
[15:06] This is a function for the privacy team.
[15:08] The privacy team leads that and works with other teams. Collaborating with other teams, with the legal team, with the technology team, and others, will ensure that everyone's accountable for their different tasks and aspects of it.
[15:22] It takes a whole village, but the privacy team has that unique role that you can't just delegate to another team.
[15:31] That's my perspective on that and that's what I strongly advocate for when I'm reaching out to clients and providing them with recommendations and moving forward with the AI governance.
[15:42] Debbie Reynolds: I agree with that. I think the change that has happened in organizations is that they really need that person in privacy to be that communicator that can help bridge a lot of the silos in the organization because we're all using data and we're using it in different ways.
[16:02] And someone just working on maybe one discrete project may not understand the risk that they may be creating somewhere else or they don't understand.
[16:13] Like for example, let's say there are teams that have segments of data about a person, but then you combine it together and now you have a problem right where none of those teams saw that problem because that problem didn't exist in their silo.
[16:27] But now, together, the organization is accountable. They can't just say accounting did this and marketing did that. No, the organization has to be accountable. So you need to be able to see all the different areas where you're handling data and what that downstream risk will be.
[16:45] I want your thoughts, like, what's happening in the world right now in privacy or tech that's concerning you, like something that you see, like, oh my gosh, like, oh, this is going to be an issue coming down the line.
[16:59] What do you think?
[17:00] Mojisola Abi Sowemimo: I see a couple of things, to be honest, and I'll take them one by one.
[17:04] The one I see as a bit concerning is what you mentioned earlier on, about companies that believe they don't need an AI governance structure in place. I just chuckle, but not in a good way, because it's like,
[17:19] you can tell that something bad is going to happen; you just don't know when it's going to happen. If you feel like you don't need a governance structure in place, you don't understand that creating that structure today is not only going to
[17:38] help you ensure that you are in compliance with all these AI laws and the emerging ones that are coming up; it also helps you understand what you need to do to improve upon what you have today.
[17:54] Debbie Reynolds: Right.
[17:55] Mojisola Abi Sowemimo: And when you have a structure in place,
[17:57] it's demonstrating accountability,
[17:59] it's helping you to identify risk. Because most of the pitfalls that we have today in the deployment of AI systems is not really in the technology,
[18:09] but mostly in the governance of it.
[18:11] Right.
[18:12] For example,
[18:14] recently there was a case about AI-powered assistive technology that doctors used in surgery.
[18:23] For some reason, there was a model drift and the AI system did not provide the right information to the doctors and there were botched surgeries.
[18:32] Now, if you look at it, this system had a model drift, and it could have been avoided if
[18:38] there had been enough testing before deployment. AI governance covers all of this. You create a structure where, before any AI system is deployed to production, there's enough testing, and there are enough guardrails in place to ensure that there's human oversight, and that the desired controls are put in place to ensure that such situations don't happen.
[19:04] Now think about the consequences of it happening, and think about what could have been done to prevent it. Having robust privacy governance, with accountability and system controls in place, could have prevented that situation from occurring.
[19:21] Now, a botched surgery is not a good thing.
[19:25] It has a lot of implications, not only for the medical team, but for the patients,
[19:30] for the technology that was used, for so many factors.
[19:33] So if we look at the bigger frame, the bigger picture, if we look at the cascading effects of this,
[19:41] then you realize the importance of it.
[19:43] Then you think about it, that this needs to be improved upon. And what makes me even more scared today is the fact that agentic AI is actually being deployed, which is even worse than generative AI.
[19:56] Worse in the sense that agentic AI not only provides solutions; it makes decisions, and it can interact with databases.
[20:04] It can do so much more than generative AI can do. And the question now is,
[20:09] how many organizations have a robust governance structure in place,
[20:14] have enough of an accountability framework in place to ensure that these agentic AIs, when deployed, are not going around messing things up and causing catastrophic effects on businesses.
[20:28] Now that gets me so very worried.
[20:31] And it's something that is easily solvable.
[20:35] It's solvable because there are steps to follow to ensure that you have a process in place to avoid that from happening. And not only are there steps to follow, there are different frameworks, you name it, that you can follow to ensure that you prevent these agentic AIs from causing catastrophic effects.
[20:52] There are also internal processes that need to be developed as well.
[20:55] So it's a process, and I highly recommend that organizations not rush out to deploy agentic AIs before they do their due diligence of developing a process where everyone can be held accountable for their different parts.
[21:14] Identifying these processes, identifying owners to them, developing that structure,
[21:19] having continuous monitoring in place,
[21:22] ensuring that you have enough testing, and so many other factors, are very, very essential, and they are not optional today. They're mandatory.
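The steps she lists here, identified owners, continuous monitoring, and testing before go-live, could be sketched as a simple pre-deployment readiness gate. This is purely illustrative: the control names below are assumptions for the sketch, not drawn from any specific governance framework.

```python
# Hypothetical pre-deployment readiness gate for an agentic AI system.
# The control names are illustrative, not from any real framework.

REQUIRED_CONTROLS = {
    "process_owner_assigned",        # an identified owner for each process
    "continuous_monitoring_enabled", # monitoring in place before go-live
    "pre_deployment_testing_passed", # enough testing, as discussed above
    "human_oversight_defined",       # a named human checkpoint
}

def ready_to_deploy(controls_in_place: set[str]) -> tuple[bool, set[str]]:
    """Return (may_deploy, missing_controls) for a proposed launch."""
    missing = REQUIRED_CONTROLS - controls_in_place
    return (not missing, missing)
```

The point of returning the missing set, rather than just a yes/no, is that the gate doubles as the accountability record: it tells the owning teams exactly which control was skipped.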
[21:31] And I hope that organizations wake up to the fact that this is something that needs to be done. We need to be concerned more about that than deploying these AI systems into production.
[21:44] If we can handle that,
[21:45] and then make sure that we have that in place before we deploy these AI systems, that would go a very long way to not only ensure that there are no negative consequences,
[21:58] but also to demonstrate trust and confidence to your consumers, who will be the ones ultimately impacted by the deployment of these systems.
[22:08] Debbie Reynolds: I have a thought that I want to run past you.
[22:14] And so I've been thinking about this for quite a while now. And I think, you know, I also work on Internet of Things as well,
[22:22] and this is an issue that has come up in Internet of Things. And now I see this being turbocharged in AI governance. And so,
[22:31] and your example about someone using AI in a surgical setting, and the drift of the model: this is the issue.
[22:42] So the issue that I have,
[22:44] and I want your thoughts, is that I think the way that people have traditionally thought about tools when they came into the enterprise is that someone would do an assessment.
[22:55] They would decide what people need to do with the tool.
[22:58] They would, like, set it and then forget it. Right. And so the problem here, and the same problem we have with IoT,
[23:05] is that the thing that you bought six months ago is different today than it was six months ago.
[23:12] And so you can't just set it and forget it. So that example about drift that you gave, it's like, okay, in olden times,
[23:21] when you,
[23:22] the tool that you had really didn't change,
[23:24] maybe let's say, for instance, you had an application. I used to do a lot of enterprise installation,
[23:31] implementation, things like that. And so typically a tool changed majorly maybe every three years.
[23:39] Okay.
[23:40] But now we're having tools,
[23:42] especially now they're connected to the Internet, they're updated over the air,
[23:46] especially these AI models. Like, they're changing in weeks,
[23:52] weeks and months. And they're doing, they're adding new features. And also in addition to that,
[23:59] these companies now, they're collecting data or they're creating data that was never created before.
[24:05] So the way that they thought about governance before is not sufficient for today. But I want your thoughts.
[24:11] Mojisola Abi Sowemimo: Yeah, I agree with you. The way they thought about governance then is different from what we need today, because of the different modern technologies that we have today.
[24:18] And so, for example,
[24:21] to handle model drift, in a previous organizational governance structure, you probably could have had continuous monitoring done once every three months or once every six months. Now,
[24:32] for continuous monitoring, if it were up to me, I'd say I want it to be weekly or bi-weekly, to be honest with you, because there are so many changes happening so fast that it's difficult to catch up with it.
[24:41] If you don't have a robust program in place, then to prevent any negative impact of any model drift, what would be recommended now is having a continuous monitoring process in place, a robust one where no one can drop the ball.
[24:56] And to do that, you need to develop a process that will be a closed loop, so that you have more than one or two persons, more than one or two teams, responsible and being part of the process, to ensure that the process you have in place,
the AI system you have in place, is being monitored. And that will be a collaborative effort, not just from the privacy team. It would involve the technology team as well, and all the teams that were involved in the deployment and the creation of that AI system, because they're the ones that have the information on what the baseline was before the deployment.
[25:29] They're the ones that see the data on a regular basis, the trends, and the updates that are required for that AI system, and many other factors, without going into the technical parts.
[25:40] So this collaborative effort brings all these specialties together.
[25:43] So that way the continuous monitoring process that you have in place is robust to identify any update, any feature update, any change or update that has to be implemented on that AI system.
[25:56] Developing the continuous monitoring process is actually a very involved process; we're just talking at a very high level now. But it's a process that shouldn't be neglected or overlooked, because what happens is the AI model drifts and then it's not performing as it should.
[26:11] And then that would have a negative impact on the delivery you were expecting from that system.
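As a very rough illustration of the closed-loop check she describes, here is a sketch that compares a model's recent confidence scores against a baseline recorded at deployment. The baseline value, threshold, and alerting function are all assumptions made for the example; a real monitoring program would watch many more signals than one summary statistic.

```python
import statistics

# Illustrative values: the mean confidence score recorded at deployment,
# and a tolerance for how far the weekly mean may move before alerting.
BASELINE_MEAN = 0.72
DRIFT_THRESHOLD = 0.10

def alert_owners(message: str) -> None:
    # Stand-in for paging the privacy and technology teams in a closed loop.
    print(message)

def weekly_drift_check(recent_scores: list[float]) -> bool:
    """Return True, and alert the owning teams, if the model looks drifted."""
    current_mean = statistics.fmean(recent_scores)
    drifted = abs(current_mean - BASELINE_MEAN) > DRIFT_THRESHOLD
    if drifted:
        alert_owners(f"Drift: baseline {BASELINE_MEAN:.2f}, "
                     f"current {current_mean:.2f}")
    return drifted
```

The closed-loop part is the alert routing: more than one team receives it, so, as she puts it, no one can drop the ball.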
[26:16] And if you now have an agentic AI model,
[26:19] it's not only just going to use that outcome and give you that information,
[26:23] it would use that information it gathers to update other databases.
[26:27] It might be interacting with other systems as well. So you'd have the defective or wrong information being transmitted into other databases,
[26:37] being shared with other systems.
[26:40] And these other databases and systems would use that information to make their own decisions as well.
[26:46] That's why I said earlier on that it's a catastrophic effect if you went ahead and did that.
[26:51] So if you can look at what could happen and what you need to do to prevent it from happening,
[26:59] you will more likely do what you need to do now, because correcting the damage that would be done takes far more resources,
[27:06] more in every aspect: financial, time, the specialties and professionals needed to rectify the damage done. And also the brand, at the end of the day:
[27:16] how can you redeem your brand from anything negative that could occur from that?
[27:21] That takes a lot of work, because the outlets are ready to release any news that an AI system had a defect or had a problem and caused this or that negative impact. The money you spend redeeming your brand, the money you spend rectifying problems,
[27:36] God forbid anything fatal happens, it's not quantifiable, it's a lot.
[27:41] So that's what I recommend in your scenario: to have a complete, robust continuous monitoring exercise, and testing also. And also, before these AI systems are able to make decisions, especially if you have an agentic AI system,
[27:56] I would recommend, based on the scenario that you have in place, identifying:
[28:01] is it okay for this agentic AI to go ahead and update this database, or to update that system?
[28:07] I recommend having controls in place.
[28:11] Controls in place such that there's human oversight before it goes on to the next step. There should be human oversight to ensure that this is accurate before it can proceed.
[28:20] I really recommend that human oversight,
[28:22] A lot of AI regulations actually emphasize that as well.
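The human-oversight control she recommends, a check before an agentic system proceeds to the next step, might look something like this sketch. The action type and the reviewer callback are hypothetical stand-ins, not any real framework's API.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str   # e.g. "update patient record in database X"
    risk_level: str    # "low", "medium", or "high"

def execute_with_oversight(action: ProposedAction, human_approves) -> str:
    """Gate an agentic action behind an explicit human decision.

    `human_approves` stands in for a real review step: a person confirms
    the action is accurate before the agent proceeds to the next stage.
    """
    if not human_approves(action):
        return f"blocked: reviewer declined '{action.description}'"
    return f"executed: {action.description}"
```

A cautious reviewer policy might auto-decline anything above low risk, e.g. `execute_with_oversight(action, lambda a: a.risk_level == "low")`; the design point is simply that the agent cannot reach the "executed" branch without a human decision in between.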
[28:26] Debbie Reynolds: That's true, that's true. I have always been concerned because I feel like there are harms that can happen to people based on AI systems for which there is no adequate redress for the person.
[28:38] Like you're talking about these botched surgeries that people have had.
[28:42] You can't un-botch the surgery, really. Right.
[28:45] So I think,
[28:46] you know, those types of things are very important.
[28:49] I would like for you to travel with me to the philosophical plane and I wanted to ask your thoughts about human in the loop.
[29:00] And so this is what concerns me about that term.
[29:03] It just never really sat well with me. And I understand what people mean when they say that, but to me, when you say a human in the loop,
[29:11] it does not communicate the person's rights, nor their responsibility, in the loop. Right? That's like saying, okay, people are in a car. So are you a passenger in the car?
[29:26] Are you driving the car? Do you own the car? You know what I'm saying? So I think human in the loop. For me now, when I hear that, it's like almost like fingernails on the chalkboard.
[29:37] So it's like, well, what does that mean exactly? Some people are like, oh yeah, we do human in the loop. Like, whose rights are you ***?
[29:43] But what are your thoughts?
[29:44] Mojisola Abi Sowemimo: You have human in the loop, you have human on the loop, and there's yet another one again.
[29:50] So I think that because AI is picking everyone's interest now,
[29:56] using terminologies that are quite vague will not serve us well at this particular time, because one of the requirements of deploying AI systems is transparency.
[30:08] I think we should also be transparent in the terminologies that we use, and also in the frameworks and governance practices that we develop. Human in the loop, I believe, means there's a human being who is informed of the decisions, and human on the loop is when, before the agentic
[30:25] AI or the AI system makes any decision,
[30:27] a human is involved to verify and allow it to go to the next stage.
[30:33] I believe that depending on the risk that an AI system has,
[30:37] the risks that could occur, or the risk scenarios that could be involved in an AI system's functions, will determine whether you have a human in the loop or on the loop.
[30:50] However,
[30:51] I'll recommend human in the loop.
[30:54] I recommend these as requirements for any type of AI system that you are deploying: a human being has to be informed, and before a decision is made, a human being has to be consulted to allow it to go to the next stage.
[31:07] I recommend that. But the degree to which you'd have this involvement is what differs.
[31:14] However, personally I would recommend human in the loop, and I would recommend clearer terminology for these concepts, please. However,
[31:21] I would recommend that we have this for every AI system that's being deployed,
[31:25] regardless of the risk. Because if you don't have that at the start, over time you would eventually have to have it, because you'll update your AI systems, there'll be improvements to them, and you'll add features.
[31:36] And as time goes on,
[31:38] the risk that you have for that AI system would increase.
[31:41] And then if you had just human on the loop only before, you'd have to move to human in the loop, because the risk identified with that AI system has increased or has changed.
[31:53] So you need to ensure that there's human oversight,
[31:56] period.
[31:57] And so it's difficult to give one certain recommendation,
[32:02] as a blanket recommendation, for all AI systems.
[32:05] It's all on a case-by-case basis. But generally, human oversight is not to be compromised. That's my personal opinion on that.
[32:14] What are your thoughts on that?
[32:16] Debbie Reynolds: Oh my gosh, I agree.
[32:20] Typically, when people say human in the loop, I hear a lot of people who develop these technologies say, yeah, yeah, yeah, we do human in the loop or whatever.
[32:29] And they say it very matter-of-factly. Right.
[32:31] But for me, I think the human needs to be in the lead,
[32:35] right?
[32:36] Mojisola Abi Sowemimo: Yes. Right.
[32:37] Debbie Reynolds: 'Cause, just like your example about the bot surgery, some humans should have had some oversight over this.
[32:47] We all know the models drift. They don't always do what you want them to do.
[32:52] And then also, the person that goes to court, they're not going to sue the AI, they're going to sue the company.
[32:58] Right. They're going to be mad at the doctors and what they did. So you can't abdicate your responsibility to a technology, because when you purchase a technology like AI, you're buying risk.
[33:11] Mojisola Abi Sowemimo: Right.
[33:13] Debbie Reynolds: Your goal, your task, or your responsibility is to manage the risk that you have purchased.
[33:19] Right.
[33:20] So that you can get the benefit of the tools without creating problems for yourself and for the individual.
[33:28] Mojisola Abi Sowemimo: Right.
[33:32] Debbie Reynolds: So, Moji, if it were the world according to you.
[33:36] Mojisola Abi Sowemimo: Yes.
[33:36] Debbie Reynolds: And we did everything that you said. It's your wish. Right.
[33:39] What would be your wish for privacy anywhere in the world, Whether that be human behavior,
[33:45] technology,
[33:46] or regulation?
[33:47] Mojisola Abi Sowemimo: My wish for the world would be for privacy to be something that everyone is aware of, and everyone has that knowledge.
[33:55] And people are aware of the regulations. They might not be aware of the details, but they'd be aware that there's some regulation that takes care of this somewhere.
[34:03] And also that human beings are aware that organizations need to be held responsible for what they do with their data,
[34:10] and that organizations are able to ensure that they build trust with their consumers about what they do with their data, and that they are actually accountable and transparent with it and do what they say they do with it.
[34:21] And also that we don't let technology dictate how we run our lives,
[34:26] but that we actually develop structures in our personal lives and in business,
[34:32] structures where we are able to control that technology. Because right now, if you look at it,
[34:40] the technology is almost controlling us. The technology has been deployed faster than the rules are being developed.
[34:46] So we need to make sure that we don't put the cart before the horse. We put the horse before the cart, so that way we're able to be in control of our data and what happens to it and what gets done with it.
[34:58] Yeah, basically.
[35:00] Debbie Reynolds: Oh, I love it. I'll take that. I like that wish. I like that wish. Well, thank you so much for being on the show. I really appreciate it.
[35:08] Mojisola Abi Sowemimo: Thank you for having me.
[35:09] Debbie Reynolds: Great. Thank you for indulging me on my philosophical plane.
[35:14] Mojisola Abi Sowemimo: My pleasure. My pleasure. I'm so glad to be on your show. I really appreciate you having me here. Thank you.
[35:20] Debbie Reynolds: All right, well, we'll be able to talk soon, but I really appreciate you being on the show again. Thank you.
[35:25] Mojisola Abi Sowemimo: Thank you, Debbie. Thank you. I wish you all the best. Thanks. Bye-bye.