E245 - Onur Korucu, a Non-Executive Director, Managing Partner, Advisory Board Member, IAPP

[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.

[00:12] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.

[00:24] Now,

[00:25] I have a very special guest on the show all the way from Dublin, Onur Korucu.

[00:31] She is the managing partner and advisory board member at GovernID. Welcome.

[00:39] Onur Korucu: Oh, thank you very much, Debbie. It's lovely to join you today and hopefully we will share some really brave insights about privacy, data security,

[00:50] AI parts. We'll see that. But I'm really so happy to join you today.

[00:54] Debbie Reynolds: Yes. So I'm very excited to have you on the show.

[00:58] You and I met because we were on a panel together at a conference in Spain,

[01:05] it's called Ecosystem 2030.

[01:08] So we got a chance to meet a couple of times before we ended up in Spain. And then when we were in Spain, it was great to be able to see you and collaborate with you.

[01:15] So I would love for you to tell your story in privacy. It's quite a journey. And also I wanted to give you a shout-out: in 2024 you received the IAPP Global Vanguard Award.

[01:29] Onur Korucu: Yeah, it's big and fancy news, isn't it? Everybody likes to hear that, so I'm really happy about that, of course. And yes, as you mentioned, we met in Spain, which was, how can I say, kind of perfect, you know, the weather.

[01:43] And we shared our privacy forecasting especially,

[01:47] we talked about AI governance, privacy, the new tech that is coming, and what we as privacy professionals will do, not just in Europe, not just in the US,

[01:57] but around the world. So it was so lovely, you know, joining another conversation with you today.

[02:02] So, Debbie, you know that I am both a lawyer and an engineer. So today I will just try to share my insights sometimes, as a lawyer, from the law and legislation side.

[02:11] But I really like to put on my other hat, my engineering hat. So I will also try to take some questions from the technical part. I like to, you know, play both roles.

[02:21] Hopefully it will be useful for our audience.

[02:25] Debbie Reynolds: Absolutely. So I do want to talk a little bit about your technology background and how that helped you move into privacy, how that connects together.

[02:34] Onur Korucu: Yeah, Debbie, actually,

[02:36] whenever I give any interviews or, you know,

[02:39] speeches, I always try to mention that cybersecurity is my first love. I cannot say privacy is my first love; my first love is always cybersecurity.

[02:47] Because mainly I am a computer and software engineer.

[02:52] So I joined Big Four companies, PwC and KPMG, in the cybersecurity and IT audit area first of all. And then, when I started my career in Istanbul, they put privacy legislation into force.

[03:05] So we understood that without privacy and data protection understanding, cybersecurity is so crippled, and IT audit and technology or digital compliance are really crippled too, so I just tried to bring all of them together.

[03:19] So that is why I got my LL.M. degree in law. And after all, I just tried to make myself, especially my knowledge, like Voltron.

[03:29] So bringing the different, you know, strengths together is what's meaningful. If you cannot bind these different disciplines together, it means you don't know anything about the security part. It's a big umbrella, and I just try to put all the different ingredients into one pot.

[03:45] And especially you know that AI,

[03:49] we just try to integrate AI into cybersecurity, into privacy, into compliance, into risk management.

[03:57] So we have a new player in the game right now. So yes, my first love is cybersecurity, my second love is privacy. But right now I am really focusing on AI and emerging tech.

[04:11] Debbie Reynolds: I want your thoughts on this.

[04:12] So I think definitely there's a symbiotic relationship between privacy and cybersecurity. Right. So we know that they're not the same, but we know that they intersect and they rely on one another for sure.

[04:27] But how do you think AI has changed how people think about cyber and privacy, and how have they been helped or hurt as a result of AI becoming more democratized?

[04:43] Onur Korucu: Yeah, that's a big question, because we have had lots of arguments,

[04:48] discussions, and we have shared our knowledge about that, because suddenly lots of cybersecurity experts are calling themselves AI security experts, and lots of privacy experts are calling themselves AI governance experts right now.

[05:04] But AI is so new; even today we have no idea what kinds of new threats, what kinds of new risks are coming together with this kind of very dynamic technology.

[05:15] Because we are generally used to managing things like cloud computing or, you know, blockchain, these kinds of slightly more stable technologies. But AI is not like that, because they invented AI in the '50s, after the Second World War, and it was general AI.

[05:32] So they just try to manage them,

[05:35] govern everything in the software development way, like the SDLC lifecycle model. But after all, it turned into generative AI. So every day it's just exponentially creating new risks, new threats in our life.

[05:48] And sometimes we cannot even imagine these kinds of, you know, enhancing models or new technologies in our life, in business life, and especially in our daily life too.

[06:01] So we are experiencing this worldwide technology revolution. Emerging tech is not just AI, IoT is one of them. Quantum computing is a big part of it.

[06:13] My last master's degree was at the University of Cambridge, and when we started our lessons, our first lecture was about Cambridge Analytica. They tried to give us some different perspectives on how data can be used in beneficial and evil ways and how this data can be manipulated.

[06:36] So I really like to say, again and again, that data is the new oil. That is why lots of Middle Eastern countries who have the money, who have the power,

[06:46] are putting their forces and money and investments into the technology and AI area, because the future of technology features AI and these kinds of generative and emerging technologies.

[07:00] So there are lots of things, and of course we will talk about them, but the advancements and the challenges come together, not just one of them; they're coming into our life together.

[07:10] Debbie Reynolds: I agree with that. What's your thoughts on AI governance? So I think what's happening with AI and this is my opinion and I want your thoughts.

[07:19] So one good thing that I see happening in AI is that the European Union has its AI Act. And so I'm glad that's an established law in the EU. And then in the US, I think we have seen a situation where we've traditionally not been super hot on regulation of new tech,

[07:39] where they feel like, okay, we need to have this tech develop and then maybe think about regulation later.

[07:45] I want your thoughts about the future of AI governance because governance isn't necessarily regulation, right? So I think it's going to become a much bigger issue because regardless of regulation here or there, there has to be more of a governance framework that companies need to think about because AI can be very wild.

[08:06] But when companies think about handling data and information, they really need to have some type of structure. What do you think?

[08:15] Onur Korucu: Actually, you know, when we talk about AI governance, they put lots of different privacy experts and security experts on the panel and they ask us lots of AI governance questions.

[08:27] They created lots of different communities. The IAPP especially prepared a certification about that. If you ask me,

[08:35] I am a really good member and a chapter chair for the IAPP. But I am sorry to say it is at a very early stage and we are still learning about it.

[08:45] If somebody comes and says, "I am an expert in AI governance," don't believe him or her, it doesn't matter who, because it is really at such an early stage right now.

[08:57] Every day we are just trying to understand what kinds of risks are coming, especially from the technological backgrounds and the software part.

[09:07] I have worked on both sides. I started my career in professional services companies, and then I jumped to Microsoft, where I managed different Microsoft consulting teams for data privacy, cybersecurity, and of course the emerging tech area too.

[09:21] We created lots of different kinds of risk management approaches, GRC approaches, or, you know, digitalization approaches for different countries and regions, because we generally try to understand the local needs and of course the local requirements coming from the legislation.

[09:40] And we just try to adapt everything onto the dashboards. This is the engineering part, but you need the lawyers and the engineers to work together. But when we talk about AI governance, people, especially on the privacy part, the lawyers, thought that with a risk management approach they could just prepare policies,

[10:01] procedures and white papers, then give numbers to these risks, and if they share that kind of numeric information with the companies, they can solve all the questions and problems.

[10:14] But it doesn't work like that.

[10:16] You know what, if you have a conversation with digital experts or a digital risk management team, anyone from professional services:

[10:27] IT governance and digital governance are not so new.

[10:31] We have been doing that for many, many years, and the risk management approach, oh my God, in ISO 27001 and all the ISO standards you are generally touching these kinds of baseline risk management approaches.

[10:44] So it's not new for us. But in AI governance they are just trying to find their way, and they just use the legacy methods like risk management.

[10:52] This is very normal. But after all I think the main point in here,

[10:56] this is a technology at the end of the day, and they never invented this technology to be governed. This is a savage technology, and every day it wants to be fed with data.

[11:07] And after all, it just tries to enhance itself bigger and bigger, just trying to take more territory in the digital era.

[11:16] I am talking about AI, you know. So after all, we just try to put different approaches or blockers in front of this technology.

[11:23] Especially in the EU, of course, because the EU is the lawmaker of this century.

[11:28] And we just try to give some basic guidelines: you can put in place these policies, prepare these white papers, then give numbers to this risk area and, voilà, you are meeting all these requirements.

[11:42] Oh my God, it is not working like that. So that is why I'm an advisor in the DPC area right now; here we are just trying to share our sectoral knowledge and give some different kinds of threats from the data security part and, of course, the AI security part too.

[12:00] We are just trying to forecast.

[12:01] I'm not saying I'm really good and understand every part of it. We just try to forecast the next two years.

[12:08] So I am waiting for different guidelines, not just in the EU, in the US too. Because in the healthcare sector their threats and risks are totally different. In the hospitality sector they've got totally different environments.

[12:22] And in the energy sector they are generally using SCADA systems, and if you integrate AI with SCADA systems, oh my God, it turns into a huge penetration area with lots of jumping points.

[12:34] We generally say jumping points in cybersecurity.

[12:39] So that is why we need lots of different guidelines to understand what kind of threat environment is coming. And the ecosystem is important.

[12:47] So I am waiting to see these guidelines from the lawmaker part.

[12:53] Hopefully we will see that. Because, after all, lots of people have got microphones and they are just sharing their knowledge. But if you put a good and proper security expert there, for example, and they listen to lawyers talking about the AI part, they are just laughing.

[13:08] It doesn't make sense; sometimes it's not a meaningful conversation.

[13:13] I generally just try to be a little bit itchy and, you know, how can I say, a bit disruptive in the conversation. And I ask my privacy expert friends: please prepare all these privacy and AI governance policies, procedures, terms and conditions.

[13:33] And just wait.

[13:35] If you can manage and govern AI this way, I will clap my hands and eat all my words after all. But it doesn't work like that. The world is not turning this way right now.

[13:46] So we need to be careful. And maybe, of course, you will join me on this, Debbie, because for maybe five or six years we have always been saying to each other: break the barrier between the security and privacy teams, you need to collaborate.

[14:00] But today it's not just the security and privacy teams; you need to break down all these barriers, walls, whatever you have in your company.

[14:09] You need it. You need software management, you need lawyers, legislation teams, the DPO office, HR, whatever you have. Because everybody is individually using all these chatbots and AI in their life.

[14:24] We need to train our colleagues, our employees, whoever you have in your company; you need to train all of these kinds of assets and resources in your company.

[14:36] Because AI is perfect, but only if you can manage and control it. After all, it's two-way traffic: it's always hungry and it always asks for more data.

[14:47] And we just talk about ethical concerns, bias, discrimination.

[14:52] You know what I keep trying to tell all these people on the panels: we created all this dirty data.

[14:59] Nobody else created it; we created it over many years. We just kept putting different labels on people: nationalities,

[15:05] skin colors, religion, whatever they've got. Think about it, the sky is the limit, because people generally find different labels for different groups and minorities.

[15:16] So, after all, today we are just trying to clean up all this information, or maybe we can say data.

[15:23] So we saw lots of very spectacular examples, from Gemini or from Microsoft too. When they asked these chatbots, oh, please create me a Santa Claus picture, or a picture of the founding fathers, yes, they created people from very different parts of the world, different colors, different shapes.

[15:42] When people asked for Nazi soldiers, they generally gave pictures of gay Nazi soldiers. Oh my God. It is nuts. It is not unbiased, it is nuts.

[15:53] We just try to clear all this discrimination.

[15:55] So fact is fact: we created very bad, very dirty data. We never respected each other, we never tried to understand or see the other side, we never, how can I say, tolerated people.

[16:09] So, after all, today AI just tries to reflect ourselves back at us, and we are just complaining about it. It's not unfair.

[16:18] So that is why we need to break down all barriers. Whatever we have, we need to work together.

[16:25] We need to understand it. Not about gender, not about skin color, not about religion. This is not about that. This is just a technology and we need to be just one part,

[16:37] sometimes against it,

[16:39] sometimes working together.

[16:41] So hopefully I don't want to fight with AI. Most probably we will lose.

[16:45] We will lose. So I want to be on the same team together with AI. And I want to believe that all our lawyers, experts, engineers, software developers, hackers,

[16:57] we can work on the same team together with AI, not against AI.

[17:01] Debbie Reynolds: Hopefully. I agree with that, and I'm glad that you brought that up.

[17:06] So let's say someone is in an organization and they're very siloed. The way that organizations have traditionally been created has been very siloed. But because of AI and a lot of these transformative technologies, it is going to require a lot of us to get out of our silos, to break those walls, and be able to collaborate better.

[17:28] How would you recommend that someone, let's say an organization in a very siloed state of mind,

[17:34] what recommendation would you give for them to really be able to break down those walls and those barriers?

[17:39] Onur Korucu: Yeah, a very hard question, Debbie. You know that we have generally been trying to find the answers for more than 10 years together, but let's take a look together. Actually, the problem is generally just trying to break all the walls between the privacy and security teams.

[17:56] They are speaking the same language in the same room, but nobody can understand each other. We generally have these kinds of problems.

[18:03] So understanding the different definitions and terminology is important.

[18:08] Because today, especially in the engineering career, we generally describe some technological advancements, but lawyers, even when they are working with technology legislation, are sometimes unfamiliar with this kind of terminology and understanding.

[18:24] So we need to read and understand it. It's an ongoing process; nobody can say, oh, I learned everything about it. For example, from my side, I try to give one or two hours daily to reading different white papers,

[18:39] seeing what kinds of sources are coming from different communities.

[18:42] Because different perspectives give you different angles.

[18:45] We are not perfect, we are just human beings, just trying to understand different angles. If you do it with integrity, if you can create this integrity in your mind,

[18:57] it's generally easier to understand. So resisting technology, resisting legislation, resisting risks never comes with benefits.

[19:08] So just trying to understand it.

[19:10] Never say no before you understand anything. So this is one of the important things. But after all, I am on three different advisory boards in different companies, and when you join any arguments,

[19:25] especially if the advisory board members come from the EU and the US,

[19:31] their focus, their priority, their mindset is generally totally different.

[19:36] Because in Europe, people focus so much on legislation and just try to behave or manage by the book.

[19:45] I totally understand them. But this is just one part of the geography of the world. And the US,

[19:52] even China, even India or the eastern countries, have totally different dynamics.

[19:58] So we need to understand them too.

[20:01] Because maybe they don't have the best practices, but sometimes they can find very handy, very useful methods. So adaptation is everything.

[20:10] This year, my keyword, my magic word for myself and my business:

[20:14] adaptation.

[20:16] If you can adapt to legislation, you can survive. So adaptation is everything.

[20:21] The world is changing every day. Europe is not the same Europe if you compare it with 5 years ago. The US is

[20:27] not the same US if you compare it with even 2 years ago.

[20:31] Bureaucracy, politics and governments' priorities, everything is changing. So if you want to break down all these barriers,

[20:38] we need to understand that this is multidisciplinary and multinational work, because technology has no gender,

[20:47] no religion,

[20:48] no nationality, no flags. It's just one regime: technology.

[20:54] So don't think just, oh, this is Europe, we need to follow these requirements by the book, we cannot monetize with these kinds of methods; then, after all, the US people never want to listen to you. At Microsoft I experienced that: you generally have three-hour meetings and you finish with no outcomes.

[21:13] So understand that money is money and the world turns around it; especially the biggest giant companies turn around money and monetizing their products or services.

[21:26] So we just try to govern it and comply with the different local or international legislation.

[21:33] So don't make the risks or the threats or the legislation your only priority.

[21:40] Understand the different groups. And if you can understand and prioritize their requirements in the right steps, then, after all, it means that you are managing your company perfectly. It generally works this way.

[21:56] So maybe I can say: understand,

[21:59] listen,

[22:00] prioritize. What is first comes first, because first is first. You cannot change it.

[22:08] If today you just focus on the AI governance part, just focus on that, Debbie, every day we can come together and speak about AI governance, but you know what?

[22:18] For example, Apple is just saying to us: oh, we've got Apple Intelligence right now, it's a black-box AI system, and I don't care so much about what you are talking about.

[22:28] But please, keep talking. We are innovating over here.

[22:31] So if you want to say something, yeah, say something. But we are making money with innovation. So we have all this trouble like that.

[22:39] Debbie Reynolds: That's true. So what's happening in privacy right now that's concerning you anywhere in the world?

[22:45] Onur Korucu: Yeah, maybe I can give an example on this point, because I read one white paper today from the ICO. The ICO is the Information Commissioner's Office in the UK, and it's a really important community and organization for us, because, you know, after Brexit there was a big earthquake around Europe and they had to put in place different legislation requirements.

[23:12] They rolled out a document tackling an issue we have all heard about: AI accuracy and those infamous hallucinations.

[23:24] If you have ever had ChatGPT confidently tell you the sky is green, you know exactly what I mean. The sky is green; after all, it's like they are creating dirty data.

[23:35] So here's the twist.

[23:38] ICO isn't nitpicking over whether AI models accidentally process personal data.

[23:44] They leave that debate to Hamburg's data protection authority, for example. Instead, they zeroed in on a more compelling idea: purpose. What is the AI model meant to do for you?

[23:56] Think of this, you are baking cookies.

[23:59] You wouldn't hold yourself to the standards of a Michelin-starred restaurant. The same principle applies to AI. If a generative AI model is designed for speed or for creative, imaginative, slightly vague content, why insist it be perfectly accurate?

[24:18] Sometimes the magic of AI lies in its unpredictability. It is like a brainstorming buddy on steroids,

[24:25] for instance the tools we use every day, grammar checkers for example, or translators. Even content creators don't need to be infallible.

[24:34] They are here to spark ideas,

[24:37] simplify tasks, and help us think differently. Over-regulating these tools in the name of accuracy could smother their potential.

[24:46] So, to wrap up: this ICO approach is refreshing. The balance acknowledges the need for creativity and innovation while still protecting data rights and promoting accountability.

[24:59] So the next time you are marveling at, or cursing, AI, remember it's all about purpose, transparency and, of course, a dash of human oversight.

[25:09] So if you ask me what my main problem with AI and our approaches is: of course transparency and accuracy, and of course, I think, the ethical concerns, because especially last year we talked about the ethical parts mainly in universities.

[25:28] But you know what, Debbie? They don't have any idea about that. Every day we've got new problems coming from copyrights or deepfakes or intellectual property. And AI is a part of the art world right now.

[25:44] AI is a part of hospitals too. Most probably we will have lots of AI surgery five years from now, ten years from now.

[25:52] So ethical concerns are really important, because they shaped all this AI innovation in a human being's shape. They never try to give it a robot shape; they're always giving it a human shape. So it's a big dilemma, a confusion for us.

[26:07] Because think about it: we've got mothers, parents, grandmothers, grandfathers. If their telephone rings and somebody pretends to be my voice to them, saying grandmother, I need this, or grandfather, I need this kind of money from you,

[26:28] they never question it. And if they put my skin and my face, my body model, on a robot with AI, most probably they will feel like, oh, this is our granddaughter.

[26:42] So we can, you know, behave like that. Lots of people are facing these dilemmas, and these tech giants, all these companies, are just trying to do this to us.

[26:54] They are putting all this technology in human shape.

[26:58] So I really like to give this example, the Terminator example, isn't it? It is coming this way. They are showing different videos where they push the robots with some hard metal bars or something,

[27:13] and it's just trying to keep its balance, and you feel it in your heart: oh my God, what are you doing? You are harassing these robots. But it's like your toaster or kettle.

[27:23] It's like that. For example, if I threw my kettle around at home,

[27:28] nobody would care about it. People would just say, what is this crazy woman doing right now? But if you give them a human shape,

[27:35] all these robots or AI models, we feel they have human feelings. And you're just trying to, how can I say,

[27:45] empathize. Yeah, making empathy for these machines.

[27:49] So Debbie,

[27:51] I think the future is coming very dangerously. They are just tickling and gaming our mindsets too.

[27:57] So I think the biggest problem in data protection, privacy, AI, innovation, whatever we are calling it, I think ethical concerns are the main point for me for the next year.

[28:09] Debbie Reynolds: I agree with that. I think ethics is a huge issue, right? Because ethics typically, hopefully, come before laws and regulation, right? So we as nations, as people, as humans need to understand what we think those ethical guidelines should be and really challenge the companies that are developing these tools around what those ethics are.

[28:30] I think I agree with your,

[28:32] your idea, and I think my concern is very similar to yours. I'm concerned that people will abdicate their human judgment and responsibility to AI, right? So let's say someone is using an artificial intelligence tool in hiring, and someone has a bad outcome, and they ask the company, so how did you come to this decision?

[28:56] And they're like, well, I don't know, the AI just did this, right? So that's not a good answer.

[29:02] Onur Korucu: Yes, yes, yes, yes. I totally, totally agree with you. We've got the same problem in the cybersecurity area too, because we try to explain to people that there are three different models for AI:

[29:14] human in the loop, human out of the loop, and human on the loop. Right now we are living in the model where humans are the decision makers.

[29:24] But after all, one day it's coming.

[29:26] And there are lots of examples of this hybrid model too.

[29:31] Sometimes AI is just making the decisions, and when you question it,

[29:36] you cannot get any proper answer. Nobody can do this reverse engineering, because generally, in the SDLC process,

[29:44] if you are doing reverse engineering you can get an answer: oh, there is malicious code in here, so if we change this core code, we can manipulate the data.

[29:54] But right now, with generative AI, it's just learning by itself. And when you question it, well, maybe you saw these examples from Bing.

[30:03] One of the engineers just asked the chatbot, without any background, maybe we can say: oh, do you remember I asked you this question and you gave me these answers?

[30:17] So today I want to ask you this. And there was no such earlier question.

[30:21] So the chatbot just started questioning itself: oh my God, I can't remember the question.

[30:27] So why am I here right now if I cannot remember this question? What should I do? It's like, how can I say, an existential question, that kind of psychological and philosophical question.

[30:41] So it's just burning itself up. Think about it, Debbie: they are just trying to act like human beings. So there are lots of perfect examples like that, and if you do reverse engineering, there is no answer.

[30:55] You cannot understand what kind of pattern they used to make these decision-making processes.

[31:01] Debbie Reynolds: I think that's true. I think we're talking about the anthropomorphizing,

[31:05] trying to pretend like AI is human. And to me a lot of that gets into,

[31:11] just as you said, it's kind of psychological manipulation. Right. And so let's talk about just cyber threats. You know, I'm concerned especially with things like deep fakes.

[31:20] The fact that deepfakes can be very realistic in terms of voice and, like you say, the way someone appears, and it's being used to great effect.

[31:31] You know, we're seeing a huge rise in people being scammed because of some of these technologies. But I want your thoughts on that: how do we face that issue, actually?

[31:45] Onur Korucu: I mean, it's one of the most popular questions on the panels. When you join, generally one of the lawyers, you know, raises their hand and asks us: are these deepfakes personal information or not?

[31:58] You know what, even the DPC, even the data protection commission people, generally try to avoid this question, because there is no proper answer from the legislation part. They are just trying to understand whether this kind of deepfake core information is personal information or not.

[32:16] If it's personal information,

[32:17] oh my God, most probably more than a million cases will be opened next year, because the sky is the limit. All the professionals and celebrities,

[32:28] I don't know, the well-known people, the public figures, everybody has different video fakes. And it's so common right now, especially around the, you know, social media world.

[32:39] So if anyone

[32:40] gets the courage and just says in the public arena, it's personal information, this is PII, so go and, you know, claim it, oh my God, I cannot imagine it.

[32:51] But from my perspective, video and visual understanding is one of the biggest areas, especially in the AI part, and Sony and these kinds of giant brands are putting really good money especially into this vision area.

[33:08] So we are calling it futuristic. Maybe you remember the Superman movie or these kinds of Marvel movies: they use these green screen backgrounds and perfect imitation and manipulation with AI, and we generally really like to watch it.

[33:22] Nobody can tell whether it's real people or not, because, maybe you remember, because of the mustache Superman got a different kind of face shape and they just tried to continue the filming.

[33:34] They don't want to stop that in Hollywood. So right now they have enhanced it even more; they even want to manipulate us.

[33:43] They can prepare different podcasts with Debbie and Onur. They can just put up all the videos, and you know what?

[33:50] Even if we say this is not true, we didn't do this one, most probably they'll never believe us. They'll most probably go and believe the other side. So deepfakes are perfect.

[34:01] We cannot stop this innovation. Especially in the US they are doing it, because in the EU part we've got really strong intellectual property and copyrights. There are lots of people doing their PhDs on this side, and you can understand it if you compare the science fiction productions.

[34:18] That is why in Europe you are not seeing and experiencing so many perfect science fiction productions.

[34:24] So this is another sector, and big money is turning around it, you know that. So I never believe we can stop or block this kind of innovation.

[34:33] Maybe we can manage how people can understand which one is real and which one is really fake, because this is important.

[34:41] That is why I'm saying that Cambridge Analytica and these kinds of cases generally open because of manipulation of the public, manipulation of people in one community.

[34:51] So with these kinds of deepfakes, especially with the elderly people, the senior people, you can manipulate them easily.

[34:58] So this is another really big weapon, not

[35:02] just for the, you know, companies, but for governments too. And I know lots of governments are supporting these kinds of groups, technology groups, too.

[35:12] So there is really big bureaucracy turning around this topic again. So if you ask me as a cybersecurity engineer, I think if any company can invent or protect systems with an application that compares, oh, this is the real one, this is the fake one, in a very simplistic way,

[35:34] they will win this race, because there is no such comparison application or platform right now. So we need this kind of solution. But the problem is, I generally work together with scale-ups and startups, and I worked at Microsoft on this part.

[35:51] And if you are a perfect scale-up or startup, these big tech giant companies try to understand what you are doing. If they like your solution, they want to buy you.

[36:03] But if they think you are a threat to their future, if they are trying to invent or enhance something in their own systems, they are killing you. They're buying you and killing you.

[36:15] So this is a big handshake:

[36:18] governments are generally giving this backing to all these big tech giant companies. And don't imagine that these tech companies ever move,

[36:28] or ever take any action, without the government's approval. So this is a big game. We need to understand the big picture.

[36:35] We will see lots of deepfakes, because deepfakes are just a reflection of this AI innovation, especially in the art sector,

[36:46] especially in the public sector,

[36:50] maybe the hallucination sector, and somebody likes it.

[36:54] So this is a big money issue too. So we will see it; we cannot stop it. But hopefully we can create some good models from the security perspective. I'm not talking about the privacy perspective.

[37:07] Because if you want to protect yourself,

[37:10] you need to take yourself out of all this social media, all this Internet area. You need, you know, to delete and remove yourself from the Internet. Nobody wants that. We are making our money,

[37:22] we are building this reputation, with this kind of communication and integration with the Internet platforms and social media and lots of these kinds of visual programs and platforms, whatever you want to call them.

[37:35] So that is why they can use our image for any deepfake today.

[37:42] No legislation is blocking it. They are just questioning it. But there are no blockers.

[37:48] If you ask me for my individual answer, they don't want to stop it either. They just try to control it a little bit;

[37:56] they don't want to stop it. So I think we will see lots of deepfakes,

[38:01] more sophisticated deepfakes and videos. We'll see if there is any difference.

[38:06] Debbie Reynolds: Right, Right, right.

[38:08] So if it were the world according to you, Onur, and we did everything you said, what would be your wish for privacy anywhere in the world, whether that be regulation,

[38:16] human behavior or technology?

[38:18] Onur Korucu: Debbie, actually my main concern is always generally fair and transparent innovation.

[38:27] Because legislation in this world is generally very static, and technology is really so dynamic. We cannot manage dynamic technologies with very static legislation like 10 years ago, like 20 years ago.

[38:43] So we need to be ready for other bullets, and there are no legislative silver bullets for technology.

[38:51] So we need to be a little bit flexible with these new technologies. And on the privacy part: privacy is privacy, human rights are human rights.

[39:00] So without automation, without any AI help, AI-touched, you know,

[39:06] whatever platform, solution, application or tool we are calling it, we cannot be ready for the future.

[39:13] Without this kind of automation, there is so much data

[39:17] that human beings are not capable enough to manage it all.

[39:22] And please, my,

[39:24] my lovely lawyer friends, understand me.

[39:27] You can be so capable, creating perfect white papers, policies, procedures, terms and conditions, agreements, SLAs, whatever. But technology never stops; understand that it just enhances itself, it just renews itself.

[39:42] So be ready for that, and please automate everything, touch it with the technology.

[39:49] Never resist it.

[39:50] After all the world is changing.

[39:53] Legislation platforms will change too. I believe we will see in the future that they will prepare some dynamic legislation, I mean legislation that can enhance itself every day.

[40:05] Because if AI is coming into the legislation world too, most probably AI will just explore and check around the world what kinds of threats and risks are coming, what kinds of new cases are opening.

[40:17] And it will just try to embed, to add, all these new points into the legislation, so legislation can turn into a dynamic model. Hopefully one day, and on that day we can fight, or we can work together, with technology.

[40:32] But before that, I mean that in this world we are really so slow if you compare us with the technology.

[40:38] So my advice is: don't resist it, adapt to it. Adaptation is everything.

[40:45] Debbie Reynolds: I agree with that. Adaptation is really important. I think we're going to really have to learn.

[40:49] We're going to learn how to exercise that muscle in the future for sure.

[40:53] Onur Korucu: Yeah.

[40:54] Debbie Reynolds: Well, thank you so much for being on the show. This has been great. I really appreciate seeing you and I'm really happy that we were able to chat today.

[41:01] Onur Korucu: Oh, thank you very much for having me, Debbie. It's always a pleasure talking, chatting, being involved in anything with you. Hopefully in the future we will have this kind of conversation again. AI is just running, so we just try to catch it.

[41:14] It's just saying to us: catch me if you can.

[41:19] Debbie Reynolds: I love that. Catch me if you can. Catch me if you can. All right, well, thank you so much and I'm sure we'll be able to talk soon.

[41:27] Onur Korucu: Yeah, hopefully. Hopefully, Debbie. And I want to say good evening, good morning, good afternoon to whoever is listening to us in different parts of the world.

[41:37] Debbie Reynolds: Thank you. And we'll talk soon.

[41:39] Onur Korucu: Talk soon. Bye bye. Bye bye.
