E244 – Aleksandr Tiulkanov, Digital Ethics Researcher, Legal Expert, and Technology Policy Advisor
[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.
[00:12] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.
[00:24] Now I have a very special guest on the show, Aleksandr Tiulkanov.
[00:30] He is the AI and data governance advisor at Responsible Innovations in Strasbourg.
[00:38] Welcome.
[00:39] Aleksandr Tiulkanov: Yeah, hello. Hello. I'm very happy to be here.
[00:42] Debbie Reynolds: I'm excited to have you here. You're also a LinkedIn top voice.
[00:48] Aleksandr Tiulkanov: Not to mention that.
[00:50] I mean, I did put out content steadily over some amount of time, especially my newsletter, where I touched upon topics related to privacy, AI, and AI governance, and somehow I got noticed.
[01:10] So this seems to be it: consistency and interesting content.
[01:15] Debbie Reynolds: Your content is solid.
[01:17] Your content makes me laugh. You always put a little humor in there, and it's hilarious. So I would love for you to tell your journey in privacy and how you became what you are now in your privacy advisory and consulting practice.
[01:32] Aleksandr Tiulkanov: Sure.
[01:33] So I started out as a generalist lawyer,
[01:36] commercially focused. So I was advising organizations on their commercial transactions, negotiating them, litigating sometimes. So this is where I started. But then I pivoted towards information technology,
[01:52] governance and regulation. And that was when I got my second legal degree from Edinburgh.
[01:59] And I was focusing on innovation, technology and the law there; that was the name of my program at that point. That was when I started. It was 2015.
[02:11] And we didn't have any specifically AI-focused regulation back then, I don't think anywhere in the world; there were elements that were covered, you know, especially by data protection law.
[02:28] Right. And this is where I started. So I started with advising on data protection related projects. GDPR was coming up, so I changed my job. I went to work at Deloitte,
[02:43] Deloitte Legal, working with tech companies,
[02:48] or just companies that have some workflows related to transferring data,
[02:55] then amassing, collecting data and then using it, in particular to train AI models. Right. So this is where the focus started for me.
[03:07] Eventually, gradually, I went into that line of work, so from data protection related projects and advising on data workflows into advising on AI governance and commercialization.
[03:25] So how we could develop models, how companies would, could develop and do this in a collaborative fashion, you know, combining some data sets and coming up with useful insights thanks to the models which they are building as products.
[03:39] Right. So eventually I went into more core policy work. So I drafted some laws and proposals, analyzed them,
[03:49] steered some working groups around them. And it so happened that while I initially started my career in Russia, I got invited and had an opportunity to work at the Council of Europe in Strasbourg in France.
[04:05] So I moved from Moscow to France and to Strasbourg specifically in 2021.
[04:12] And since then I am based in Strasbourg. I worked at the Council of Europe
[04:21] especially at the time when they had already completed their feasibility study on a potential convention on AI, right, and they were preparing to negotiate it.
[04:32] Debbie Reynolds: So.
[04:32] Aleksandr Tiulkanov: So I did not participate in the negotiations as such, but I was supporting the secretariat of the organization,
[04:40] the hierarchy, with policy advice and analysis,
[04:44] which to some extent was taken into account,
[04:48] of course, in the convention. But I would not, of course, exaggerate my humble contributions. To some extent I could do something to inform the policymakers, and also the interaction with digital companies.
[05:05] So between the Council of Europe and companies which operate in the digital services market, there is some interaction. So I helped steer that to some extent.
[05:16] And then after working at the Council, I started my own consulting work.
[05:23] So my own consulting enterprise.
[05:26] So this is how I ended up advising both private sector and public sector clients. This is what I do now on AI governance, focusing very strongly on the European frameworks, be it the AI Act and everything related to it.
[05:46] Because, in particular as regards this European regulation, I was following it from the very beginning, and I was happy to see that those more academic-type discussions we had in Edinburgh in 2015 have now translated into real policy.
[06:06] Talk about what would be the real effect of those policies we were thinking about and what academics were discussing, right, some years ago.
[06:18] Now we have the actual, you know,
[06:21] initially draft regulation, and now we already have the regulation, which is coming into effect across several deadlines. So we have this.
[06:32] And yeah, it was very interesting to see the transformation of the European legislation, I mean, in between how it came out as the first proposal from the European Commission, right,
[06:44] and then what happened during the discussions and debates in the European Parliament and in the Council,
[06:52] when it all ended in this trilogue, where the different European institutions finally reached a consensus on what would be the final text.
[07:04] And now we are at the stage,
[07:06] we see already some guidelines appearing. And the important and interesting thing for me was also to now become part of technical standards setting on this, because under the EU AI Act we actually have a mandate from the European Commission to the European standard setting bodies, which are in our case
[07:32] CEN and CENELEC. So there is a joint technical committee there, and I am there as an independent expert, alongside other colleagues,
[07:42] independent experts, representatives of companies,
[07:45] academia, civil society organizations. So we are trying to come up with, you know, actual technical specifications,
[07:54] how to implement those essential requirements which we have in the Act. Because this is the framework in Europe in product safety law; what not everybody fully grasps from the beginning is that the EU AI Act is actually a product safety law.
[08:12] So it's part of product safety in Europe and as such it's supposed to follow the same process as for other product safety laws,
[08:23] which is that the legislator formulates essential requirements, the results to be achieved, right, and then in standards we develop solutions which operators on the market could then take up.
[08:40] They are not required to take up those specific solutions because standards are voluntary.
[08:45] So nobody requires you to take up and follow to the letter the European standards we're building. But the benefit of doing so would be the presumption of conformity. And so if you are following the technical,
[09:00] you know, solutions and specifications which are provided, you know,
[09:05] in those standards,
[09:07] then you will be granted a presumption of conformity. It's only a presumption, but still it's very important for those organizations who are, you know, interested in having this work done systematically in the organization and having some more certainty about whether they are doing the right thing.
[09:27] So for them it's of course beneficial to follow the European standards in this sense. We are still at a stage where we are developing those standards, having in mind that the deadlines for organizations to comply with the Act are still a little bit in the future, but not so long in the future.
[09:51] So we don't have a lot of time to complete this ongoing work.
[09:59] And what we are doing there is basically based on the request from the Commission, which explains that, yeah, there are these requirements around, for example, data governance, requirements around the management of the quality of the AI-enabled products which enter the European market.
[10:19] We have to come up with those specifications which would support those sets of requirements, primarily for those companies which will be providing those systems on the market. So providers.
[10:32] But deployers are also,
[10:35] I mean, benefiting from that, because the way we try to structure this,
[10:42] in accordance with the Act,
[10:44] is that some product features and some documentation requirements are to be delivered by providers in order to enable deployers to comply with the obligations that are being imposed on them in the regulation.
[11:03] And this,
[11:04] this standard setting bit is relatively new to me. So I started with this kind of work in standards only last year.
[11:13] But I've gained some knowledge by this point to better understand how all of this works. And of course it's different from the more core legal work I did before, right,
[11:26] in policy, or in drafting legislation, or advising companies on implementing policy directly. Now it's a bit different, because I see that there's great value in standards as well. And the processes are a bit different, the approach is a bit different.
[11:46] It's not something you would be familiar with as a lawyer, right, unless you have familiarized yourself with how it all works, or maybe you have previously worked in some other standards organizations, in which case, of course, you would have this understanding.
[12:07] For me, it was interesting and new, and I love to learn interesting things and I like to share my learnings with others. This is what I'm trying to do with my newsletter, with my posts on LinkedIn.
[12:24] I hope it's useful to the public. And yeah, I'm always happy to see when it is the case,
[12:33] you know,
[12:34] very useful.
[12:35] Debbie Reynolds: And I work in standards as well.
[12:36] I enjoy standards because it does get more down to the actionable,
[12:43] practical ways and the way people implement things. And then also I feel like it's a bit less political in some way.
[12:55] With standards, you're saying, okay, if you want to achieve this, here are the steps you need to take or the things you need to think about, and really being able to align that.
[13:04] I work in standards as well, at IEEE,
[13:07] so I'm excited to see a lot of people who are in data protection,
[13:11] really helping in that standards way. But tell me what's happening in the world today that's concerning you about data privacy, data protection or artificial intelligence?
[13:24] Aleksandr Tiulkanov: Yeah, I mean,
[13:25] especially like, of course I'm biased towards the European frameworks. Right. So this is what is the most interesting thing for me, not necessarily for everybody, but I tend to speak about what interests me.
[13:39] Right. So I mean, we have an overlap in particular in Europe in between what is already regulated by existing data protection law, primarily by gdpr.
[13:52] Of course, European Union institutions have their own,
[13:56] you know, regulation, the EUDPR, which is more or less, you know, a repetition of what GDPR says, with some aspects which would be pertinent to the Union institutions. But anyway, there are things there in,
[14:14] in those regulations which address, of course,
[14:17] things which are relevant across the AI system life cycle. Right. So in standards,
[14:24] as you well know, you know, we're talking about trying to take into account any potential hazards and any potential risks that may come about because an organization might have missed something at any particular stage, be it, you know,
[14:42] during even the inception or development of the system, or later on when it's already deployed, right, and operating and maybe even self-learning, which is a whole new concept as compared to those other, older legacy systems, for which it was not the case that they would adapt during operation.
[15:06] Now it is the case, and I think it complicates things to some extent as regards compliance with data protection requirements, right? So there come the questions, which basically center around several topics: how do we use personal data, in particular when we are training AI models,
[15:31] right. So,
[15:32] and to what extent is it permissible, in particular, to use sensitive data or, in EU-speak,
[15:41] special categories of personal data, right?
[15:45] Such as ethnic origin,
[15:47] sexual behavior and such,
[15:49] and, you know, other such especially sensitive information, to make sure that the models, for whichever purpose they are intended, then produce outputs, be it predictions or generated outputs, which would be representative of the reality where these systems operate.
[16:11] And this means like when you are using some data for training,
[16:17] it may not necessarily be fully representative of that reality, right.
[16:23] during operation. And that means you have to address any potentially harmful biases, for example, right? That may be in the data sets which you are using for training. Or maybe you are a company which is operating or using data to train AI systems in one part of the world,
[16:44] but then you want to expand and you want to explore other jurisdictions, other markets, right? So how do you deploy your system to make sure that in the other target markets the system operates with an acceptable accuracy rate and acceptable robustness and such other trustworthiness characteristics which are required?
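As a purely illustrative aside, the kind of check this implies can be sketched in a few lines of Python. This is not something prescribed by the AI Act, the GDPR, or any standard; the column names, the threshold, and the evaluation set are hypothetical assumptions for the example, and in practice the groups to check and the acceptable levels would come from the provider's own risk assessment.

```python
# Illustrative sketch only: before deploying a model to a new market or
# population, check that its accuracy stays acceptable for every group.
# Column names ("market", "label") and the 0.90 threshold are hypothetical.
import pandas as pd
from sklearn.metrics import accuracy_score

def per_group_accuracy(eval_df: pd.DataFrame,
                       predictions: pd.Series,
                       group_col: str = "market",
                       label_col: str = "label",
                       min_acceptable: float = 0.90) -> pd.DataFrame:
    """Compute accuracy per group and flag groups below the chosen threshold."""
    rows = []
    for group, subset in eval_df.groupby(group_col):
        acc = accuracy_score(subset[label_col], predictions.loc[subset.index])
        rows.append({"group": group,
                     "accuracy": acc,
                     "acceptable": acc >= min_acceptable})
    return pd.DataFrame(rows)

# Hypothetical usage:
# preds = pd.Series(model.predict(eval_df[feature_cols]), index=eval_df.index)
# print(per_group_accuracy(eval_df, preds))
```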
[17:09] So then, for example, in the Act we have this specific provision on how a developer of a system may use those special categories of sensitive data to address this potential bias, right?
[17:27] But at the same time we have the GDPR, where there are provisions, or rather restrictions, on what you actually cannot do with those types of data, and people are interested in to what extent there is a mismatch or tension between the two.
[17:44] And I hear this a lot, that there is a lot of tension or something like that between those two regulations. But when you look in a little bit more detail at what is going on there,
[17:57] and this is what I try to argue, I mean, I wouldn't necessarily see a tension between those two regulations, on this particular matter at any rate,
[18:09] because what GDPR provides is a baseline like what you need to do in terms of processing data,
[18:17] especially also during the AI training stage. But then the act says, okay, and now after you have done all of that,
[18:26] you should also do this, this and this in addition,
[18:31] if you want to use those types of sensitive data. Right. So it's quite complementary to what the GDPR already requires. Right. So this is one of the topics of potential misunderstanding, you know, in,
[18:46] in the market.
[18:48] And I try to address this misunderstanding. Although there is also a legitimate concern which people raise:
[18:56] okay, basically in the Act we are addressing requirements primarily to those providers who develop high-risk AI systems. So it's a subset of AI systems. So those provisions which are there in the Act will not necessarily be relevant for those systems which are not classified as high risk.
[19:22] Right. But if we still want to address undesired bias in such a system during development,
[19:28] what do we do? And this becomes a little bit more complicated, because, I mean, there are still provisions in GDPR under which it could still be possible to use this kind of sensitive data to address bias,
[19:42] but they are not specific, so they do not take into account those challenges which developers face at this stage. Right. So there is still a possibility to rely on some legal basis to process this data and to rely on some exemptions from the general prohibition to process special categories,
[20:05] but there are fewer opportunities, because the Act itself serves as one of the possible grounds, as one of the European regulations which under the GDPR would allow you, as a, you know, data controller,
[20:27] to argue that there is substantial public interest,
[20:33] right, in doing this and in addressing bias in AI systems, especially when they are deployed at scale and influence the market and potentially affect a lot of people. Right. So there is a real public interest in doing this.
[20:48] But if you are not within the scope of the high-risk AI provisions under the Act,
[20:56] this means that if you don't have explicit consent from the person, or you don't have,
[21:02] I don't know, if you're not using some data which has been manifestly made public, or some other, I would say, very exotic grounds or exemptions for this kind of operation, then you have to find, if at all it is possible, some legal basis in another EU law or member state law to address this and to use those categories of data,
[21:29] those special categories, for mitigating bias. So there are real challenges, but again, there is no tension between those two acts. It's just that we need to know better how they interact.
[21:42] And this is just one,
[21:43] one of,
[21:44] you know, the, the concerns.
[21:47] Debbie Reynolds: Yeah, I agree.
[21:48] Right.
[21:53] So I think part of the way that people think about it as being a tension is that they feel like regulation is slowing down innovation. Right. It's kind of like a speed bump or something that's making people slow down.
[22:07] And so I like the way that the European Union is looking at data and data protection from a human centric point of view, as opposed to let corporations do whatever they want and then let the chips fall where they may.
[22:23] I think that's very dangerous, especially in an AI setting where, to me, for some of the harms that the EU AI Act is trying to prevent or is talking about,
[22:35] you know, there could be no adequate redress. Right.
[22:41] For some of the harms that will happen. So to say, let's take off all the guardrails
[22:47] and do whatever we want with AI will create a lot of harm,
[22:52] especially for jurisdictions like ours that don't really have regulation around AI at all. It's kind of like a very empty cycle. Right. So it's like, okay, let's develop,
[23:05] let's not care about what happens to people's data.
[23:08] And if something does happen,
[23:10] there really is no framework or no way to truly get redress from that because it really isn't transparent. And I agree with you that the GDPR and the EU AI act work together really well if people really think about it that way.
[23:29] But then also,
[23:31] I think one thing that people don't realize about AI systems is that you need to be careful about what goes into those systems because it creates a situation where it may not be transparent to the makers, the users or the people what's happening with their data.
[23:49] And then also these models, it may not be feasible to take that data out. So being able to be selective about what goes in, I think is very important.
[24:01] And those preparatory steps are really important in terms of safety. But I want your thoughts.
[24:08] Aleksandr Tiulkanov: Yeah, I agree with you, absolutely. And this is why we have, again,
[24:13] in the context of GDPR and the act, we have those safeguards built in,
[24:20] be it the need for human oversight. Right. For example, when they're using automated systems of any kind to come up with individual decisions which would affect particular people. Right. Or when exactly we are considering, like what data is being provided when the system is operational and how do we achieve the principles of,
[24:48] for example, data minimization. Right. So that we don't take on the data we don't actually need,
[24:55] you know, for a particular use case or operation or workflow.
[25:00] Right. So this is all I think is better understood through the prism of the principles which GDPR imposes on data controllers. Right. And of course,
[25:15] during the training,
[25:17] it will be one data controller who will be controlling the data at the input stage, during training, and in operation there will be another data controller who will be deciding, like, yeah, exactly,
[25:28] we are going to use this particular system to achieve this particular goal. And then they also have to decide what system would be most appropriate to achieve that goal, and to what extent
[25:43] they can also influence maybe the design of the system in advance,
[25:50] thinking far ahead to when they'll be using it.
[25:55] If it is possible to customize how the system is built,
[26:00] they would be very interested in maybe even making sure that some data which enters the system is not processed at all, because it's not relevant for their particular purpose, for their particular use case.
[26:14] So it will not be considered. And then again,
[26:19] coming back to what I spoke about, human oversight: basically, before the Act entered into force,
[26:29] what we had in GDPR were, for example, safeguards around fully automated decision making, right? So
[26:34] the GDPR has provided conditions upon which it would be possible for a company or organization to use this kind of technology to arrive at an individual decision about a person using their personal data,
[26:53] right, in a fully automated fashion. So normally it's not something which is considered low risk; it's considered a high-risk activity, so there should be safeguards. Because AI systems have their limitations,
[27:07] they do not usually understand context, all the,
[27:11] you know, surrounding information which people are able to take into account when arriving at particular, you know, individual decisions. So,
[27:20] and there are of course limitations with respect to accuracy and other characteristics. So this basically requires providing some safeguards. So in GDPR, how it went,
[27:32] basically the legislator said, okay, if you ensure that there is meaningful human intervention in the decision cycle, then you will be exempt from some requirements. But this would mean like,
[27:48] you know,
[27:49] a compromise because then the,
[27:52] the company or organization which uses those systems will maybe not be using them to the fullest extent of their possibilities,
[28:00] right?
[28:00] But then this enables it: if you are going to use the system, there are such and such conditions under which you can do so. And if you want to avoid those requirements, then you have to put in a human, not in a perfunctory kind of way, not someone who would formally rubber-stamp whatever comes out of there,
[28:25] right? But someone who would, in a meaningful manner,
[28:28] review whatever has been output and then take into account other context, take into account maybe some considerations from the person themselves, right? So they may need to have a say in this and eventually may need to contest the decision, right?
[28:48] And to contest a decision which has been automatically generated to a large extent, or highly influenced by automation,
[28:56] you will have to think, as an organization, about to what extent the design of the system allows it. Because often those systems are indeed black boxes. And there are some explainability techniques,
[29:12] but sometimes it comes down to just post factum explanations, which are not really explanations; it's just a kind of justification of how you could look at what has been generated.
[29:29] And this is not really helping anybody who wants to contest the decision on substance, like why exactly these or those factors were taken into account. And if I was, for example,
[29:43] subjected to this treatment, right, I need to understand the basis for this, to contest it, right, in a court of justice or something like that. Right. I need to understand.
[29:53] And what the Act added to the equation here, to the calculation here, is basically this, at least for high-risk AI systems.
[30:03] So not for all AI, but for high-risk AI, they require human oversight, although not necessarily at each decision cycle. So it is not required by the Act that you have to intervene at each decision cycle.
[30:23] But what is required is that you have a process where you are monitoring the system.
[30:29] You can intervene, you can decide not to use an AI system if it's not making sense in a particular case. And you have kind of a strategic oversight over both the system and the externalities
[30:43] which fall, you know, on some third parties, on affected persons as such. So we started from what we originally had in GDPR, something which in theory is called a human-in-the-loop situation,
[30:58] right, but again not in a rubber-stamping way, in a meaningful way,
[31:04] and we have come to requirements about human on the loop and human in command. This is what the High-Level Expert Group on AI came up with as a summarization of what is there in the literature around this topic.
[31:22] So there are three approaches of this kind, you know: human in the loop, human on the loop, human in command.
[31:28] And basically, yeah, the Act has made use of those concepts, which previously were academic; now they are part of law and we have those things as legal requirements now.
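To make the in-the-loop versus on-the-loop distinction a bit more concrete, here is a purely illustrative sketch, not anything mandated by the GDPR or the AI Act; the class, the function names, and the threshold are invented for the example.

```python
# Illustrative only: two patterns for human involvement in an automated
# decision workflow. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    model_output: str   # e.g. "approve" / "refuse"
    confidence: float

def human_in_the_loop(decision: Decision, reviewer_verdict: str) -> str:
    """Every individual decision passes through meaningful human review;
    the reviewer can overturn the model output before it takes effect."""
    return reviewer_verdict or decision.model_output

def human_on_the_loop(decisions: list, alert_threshold: float = 0.6) -> list:
    """Decisions are applied automatically, but a human monitors the stream,
    is alerted on low-confidence cases, and can intervene or halt the system."""
    return [d for d in decisions if d.confidence < alert_threshold]
```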
[31:44] Debbie Reynolds: So in your work, when you're doing training or educating people on AI literacy,
[31:51] what are maybe some of the things that companies or organizations struggle with the most? Like what is the hardest part for them to understand about AI literacy?
[32:02] Aleksandr Tiulkanov: I mean, I'm mostly doing courses which are focused on governance, risk and compliance professionals. Right. So it's a little bit easier in this sense for me because GRC people like, I know the tribe, I am part of the tribe, so I know their pains and like their way of thinking about things.
[32:23] Yeah, of course this is also a bias to see risks rather than opportunities. And I recognize that.
[32:30] Yeah. So it's a little bit easier,
[32:33] you know, to wrap your head around it if you are already biased to see risks everywhere. Then people may just be needing a framework to address this in a structured manner.
[32:47] Right. And this is where standards are very helpful, because, you know,
[32:53] implementing a standard in the organization allows it, in a structured way, to address how exactly to start with responsible AI in the organization. Right. So not necessarily with the Act,
[33:08] but with any kind of structured approach towards responsible AI. And you could use, for example, the ISO/IEC 42001 standard as one example to develop your responsible AI program.
[33:24] And this idea is, to some people, refreshingly new: that yeah, there are indeed some frameworks which help you to get your basics straight.
[33:40] Even though, for example,
[33:42] this particular standard which I referred to from ISO will not necessarily cover all regulatory requirements in Europe. Right. But it will give you a solid foundation to build upon.
[33:55] You will have the understanding, for example, of the context of the organization and where your AI systems operate, because this is a requirement under the standard, under a management system standard in principle: to understand the context,
[34:12] to set up the accountability and responsibilities. Right.
[34:18] To come up with an implementation plan and to find out what are the stakeholder requirements, who are the stakeholders,
[34:31] how we are making sure that their needs are taken into account,
[34:36] to what extent those requirements are also legal requirements. Right. So maybe there is something you must necessarily do and some others will be based on your interaction with stakeholders, be it clients, be it affected persons.
[34:54] So this is very helpful and I think, yeah, this is what people are usually struggling with.
[35:01] They lack a structured approach or like guidance to this structured approach. And this is usually very helpful.
[35:11] When I'm interacting at a high level with C-level executives and board-level executives, this of course means that I'm focusing on more strategically posed questions, and sometimes they just initially need to wrap their heads around the general overall situation and what this or that regulation means for them and for their particular organization and for the particular projects which they are pursuing and for their strategy,
[35:48] right, for their business strategy and for their products. And this then of course transforms into very specific questions where they are struggling to understand the consequences for their business.
[36:04] Right. So what does it actually mean for us, you know, what we can do, what we cannot do. Right. Because they are not specialists in this. Right. So of course they need specialist advice and guidance for their decision making to be more informed.
[36:21] So depending on the audience, there are questions at different level.
[36:27] For some it's on an implementation level on understanding the structured approach towards complying with some regulation or just implementing best practices, as is basically envisaged by some management system standard.
[36:44] And maybe they will want to pursue a certification. Right. And maybe this is also driven by client requirements, because we know that, for example,
[36:57] large players on the market are now, in some cases, even requiring you to be certified against a particular standard regarding AI management before you provide them
[37:12] with your services in some sensitive areas, areas relevant to both business risk and risk to affected persons, fundamental rights risk, basically. So it depends on, yeah, on the audience.
[37:29] Debbie Reynolds: I agree.
[37:31] If it were the world according to you,
[37:33] Aleksandr, and we did everything that you said, what will be your wish for data privacy, data protection or AI anywhere in the world, whether that be regulation,
[37:46] human behavior or technology?
[37:50] Aleksandr Tiulkanov: Okay. So I would just say that before we jump into the question of how we immediately make use of this or that particular technology,
[38:05] which is often the case when people get overhyped about something very new, I would like people to pause and think about whether this particular technology, or the approach
[38:23] which means you have to use that particular technology, is the best one for their particular situation, for their particular use case. And this means, yeah, sometimes you will discover that, for example, the process you want to automate,
[38:39] right, maybe it does not make sense in the first place. Right. So maybe what you want to do is to review the process you are going to automate.
[38:51] And maybe the process will not make sense in general.
[38:56] It will appear to lead to some unwanted consequences; it's just that something happened and everybody got used to doing things in this way.
[39:07] Right.
[39:08] For example, we end up still discussing, even in regulation, the use of technology such as polygraphs, something which is in essence unscientific and brings a lot of harm to people.
[39:24] Right. So before we jump into trying to see how we could make things better with that inherently faulty process,
[39:33] we want to avoid it altogether. And that would be relevant for any kind of process at all. And this is what would be my wish. Yes. So we would first challenge our assumptions of whether it makes sense at all to use any particular technology,
[39:53] whether the one you are considering is the best one for the job. Right. And only then if you have answered in the positive. So yes, the process makes sense.
[40:05] It helps us arrive at the outcomes we want.
[40:11] And yes,
[40:12] this particular technology,
[40:15] based on the analysis seems to be the best for the job,
[40:20] then we address the specific risks and opportunities related to that technology from all aspects, be it privacy, be it data protection or any other aspect.
[40:33] Debbie Reynolds: So I agree with that. And I think I have a similar experience with GDPR as well, where I feel like a lot of companies were very up in arms or concerned about that regulation.
[40:49] But as they've looked at what they were trying to do,
[40:52] maybe what they were trying to do was more narrow. Right. So not everything they were doing was in scope;
[40:59] the regulation didn't apply to a lot of what they wanted to do,
[41:03] even though they were concerned about it. And so back to the point that you made earlier about the EU AI Act, where a lot of companies
[41:12] may be concerned about that regulation, but when you look more deeply, they may not even be doing things with AI that are in those high-risk categories.
[41:23] Right. So just really looking at it more holistically and thinking about what they're trying to do in terms of what their purpose is will really help them hone down on what their real risk is and what they need to do with their next steps.
[41:39] Aleksandr Tiulkanov: Right.
[41:40] Debbie Reynolds: Well, thank you so much. It's so great to be able to have you on the show, and I totally hope that people follow you on LinkedIn. Your newsletter is great, I love your humor, and the things that you put out are fantastic.
[41:53] But I'm so happy that we were able to chat today.
[41:57] Aleksandr Tiulkanov: Absolutely, my pleasure. Thanks, Debbie. Thanks for inviting me.
[42:02] Debbie Reynolds: It's my pleasure. And hopefully we get a chance to collaborate in the future.
[42:06] Aleksandr Tiulkanov: Absolutely. Would love to.
[42:08] Debbie Reynolds: Yeah, we'll talk to you soon. Thank you.
[42:11] Aleksandr Tiulkanov: Yeah, thanks a lot and see you soon. Bye-bye.