E267 - Federico Marengo, Associate Partner at White Label Consultancy (Italy)
[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.
[00:12] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.
[00:25] I have a very special guest on the show today, Federico Marengo. He is an associate partner at White Label Consultancy, based in Italy. Welcome.
[00:37] Federico Marengo: Thank you very much, and thank you, Debbie, for the invitation. As Debbie said, I am Federico Marengo, associate partner at White Label Consultancy.
[00:46] At our firm we provide privacy, cybersecurity and AI governance services, and I am the partner who builds and manages our AI governance offering.
[01:01] Debbie Reynolds: So let's talk a little bit about your path into privacy. And so this is exciting. You and I, we have had chances to chat on LinkedIn.
[01:12] You always have really sharp insights. But I would love to know like your pathway into this type of work.
[01:18] Federico Marengo: Yeah, absolutely. Well, first of all, I am Argentinian. I was born in Argentina and did my studies there, so I am a lawyer. I worked seven or eight years for the judiciary, but at some point I felt I needed a change.
[01:35] So I had the opportunity to move to Europe. I did an LLM in the UK in 2017 and then continued my studies with a PhD. But at the same time, I realized that academia was not for me.
[01:52] For those who haven't pursued that level of study, a PhD is a postgraduate degree where you basically do research; it's mostly, though not always, focused on academia, on research and teaching.
[02:10] So I was fortunate to realize that it was not what I wanted to do with my life.
[02:17] While I was doing the PhD, I started to move into industry as a consultant. My PhD was in legal studies, but at the intersection of privacy and AI.
[02:31] And this was where I started working most heavily with AI, because I had a background in law but also in privacy.
[02:40] So I was working sometimes part time, and then as a full-time resource, for different organizations as an external consultant and also as a DPO. Then at the beginning of 2022 I joined White Label Consultancy as a senior consultant.
[03:00] This was mostly focused on privacy. I was working as an external DPO for companies, but also as an operational resource, so whenever companies needed to expand or augment their teams, I provided support. Then in 2024,
[03:19] given my background and the increasing level of engagement I had in AI, again at the intersection of privacy and AI, because we were also supporting clients on AI, I moved to the UK.
[03:36] I started working for Informa as AI Governance Manager, leading the development and rollout of the AI governance program. Informa is the largest B2B event producer and is also the owner of Taylor and Francis, just to give some background on the company.
[03:55] In this role I was involved in the design, thinking about how the AI governance program should be developed, but also, very importantly, how it should be aligned with the existing frameworks.
[04:12] Because AI is not a technology that develops in a vacuum; it works in the context of the organization. So there was huge work in trying to integrate and reuse existing tools, processes and guidance, and to upgrade them to also cover AI aspects.
[04:33] And finally, in September this year, I rejoined White Label Consultancy. I left my position in London, moved back to Italy and restarted working for White Label Consultancy, but now in a different capacity: driving the AI governance offerings and supporting customers in the implementation of their AI governance programs, which is a really exciting field. We receive a lot of requests from companies wanting to adopt AI, but in a responsible manner, which is really good.
[05:12] So we are working on that. This is how my journey went and how I am here today.
[05:19] Debbie Reynolds: I think it's really interesting that you work at the intersection of privacy and AI, especially because companies have been investing really heavily in AI and looking at it very closely.
[05:34] A lot of people think they're experts at artificial intelligence, so it's really great to see that you have deep knowledge in that area.
[05:42] You said something really interesting: that AI doesn't develop in a vacuum.
[05:47] Part of that development is the context in which organizations want to use it. But I want your thoughts on automated decision making.
[06:00] So, automated decision making: let's talk about how it intersects with privacy. We know that the GDPR mentions automated decision making, and since AI has come out that's become part of the conversation, even though not all automated decisions are made with AI.
[06:20] How do privacy and AI intersect, in your view?
[06:25] Federico Marengo: Well, this is a good question, and there is no simple answer, because the touch points between AI and privacy are multiple. There are so many touch points, in particular for those of us who are privacy professionals by nature or who have developed our careers in privacy.
[06:46] We see that most AI systems process personal data at some point.
[06:52] And this intersection is clear: whenever a system is processing personal data, the privacy team will sooner or later intervene.
[07:00] So we see the interaction with privacy first in the development of AI systems, because AI systems need data, need information, to be trained. We know by now that the more information, the better the system may perform.
[07:17] So AI intersects with privacy because you need good data governance practices and a good privacy framework and controls in order to make the training and development of AI systems safe and secure.
[07:39] But also moving forward: let's imagine that you have your data governance practices in place and you have built your models. Then you start to commercialize them, or to use them internally, depending on the organization.
[07:55] Privacy and AI also intersect in the deployment of AI systems. Let's think about an important system that is present in every single company: for instance, an HR management platform.
[08:12] When candidates apply for jobs, it is highly likely that the first interaction will be with an AI system, which makes an evaluation and produces a score for the individual. This score tells the recruiter or the business associate the probability with which the candidate matches the job description.
[08:40] And this is so even if it is not an automated decision per se, because the decision in general is taken by a human. I think many companies have now understood that automated decision making should cover only a very, very tiny fraction of cases.
[08:56] But even in this case, which we could call an automated recommender system, because instead of a decision it makes a recommendation, the interaction of privacy and AI is clear. An AI system, an algorithm, is evaluating the candidate's history and trying to see the extent to which the CV, which is personal information, matches the information from the job description.
[09:23] And we can think about hundreds of examples here.
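A score of the kind described here is, at its simplest, a similarity measure between two texts. Below is a minimal, purely hypothetical sketch using bag-of-words cosine similarity; production matching systems use far richer models, and every name in it is an illustrative assumption, not anything from the conversation.

```python
import math
from collections import Counter

def cosine_match(cv_text: str, job_text: str) -> float:
    """Toy bag-of-words cosine similarity between a CV and a job description."""
    cv, job = Counter(cv_text.lower().split()), Counter(job_text.lower().split())
    shared = set(cv) & set(job)
    dot = sum(cv[w] * job[w] for w in shared)
    norm = (math.sqrt(sum(v * v for v in cv.values()))
            * math.sqrt(sum(v * v for v in job.values())))
    return dot / norm if norm else 0.0

# The recruiter sees a score, not a decision: a human still reviews it.
print(round(cosine_match("python privacy law experience", "privacy law role"), 2))
```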
[09:27] Now we are starting to see, and this is what I think may happen in the future, that AI may develop as an independent function.
[09:34] It might be the case, especially for some companies that are AI-heavy.
[09:39] But for now, what we see is that in most cases the AI governance function is being built within the privacy function.
[09:57] There are many reasons for this,
[10:00] and one of these reasons is that in most cases the AI systems will process personal data.
[10:08] Debbie Reynolds: I think what artificial intelligence is bringing now is something companies hadn't really thought about: tracking data all the way through its life cycle in a company. It's not just the collection of data, which is where a lot of the laws focus.
[10:25] The way businesses thought about data was mostly about how they captured it at the beginning,
[10:31] not necessarily how it flowed through their systems, what the secondary uses are, or what the end-of-life data use is. But I want your thoughts.
[10:42] Federico Marengo: Absolutely. Not to repeat myself, but even if companies have excellent data management practices, they also have to consider how this data flows within the organization in order to produce rational decisions.
[11:00] But I think what we also see right now, and what is a high concern for many companies, is the emergence of generative AI.
[11:10] The automated decision making you mentioned before is often based on predictive AI. To explain the difference between predictive AI and generative AI: predictive AI, or traditional AI, is AI that produces a score or a classification, for instance classifying an email as spam or not spam, or matching faces in facial recognition.
[11:33] But what is problematic for companies is generative AI. These systems became widely known after the emergence of ChatGPT in November 2022, if I am correct, and this led to, let's say, the democratization of the use of AI within organizations, because after that moment every single employee has access to an AI system with which they can query, ask questions, input personal data and process personal data using AI.
[12:16] And you can imagine the heightened risks of using AI without any control, without any safeguards. So I think this is a huge source of concern for companies right now.
[12:28] Debbie Reynolds: I think so. And also, people are using AI in ways that companies may not know about. I call it shadow AI, where maybe you have employees processing personal data or company data without anyone knowing. That's a whole other risk companies have.
[12:48] Federico Marengo: In the past we used to see shadow IT. Shadow IT is a term widely known
[12:56] by any professional working in the field. But now, with the emergence of these tools, mushrooming as they are, with products being delivered every single day and with free offerings, every employee is able to access and explore them, which is welcome, because exploration and testing are what make employees get better.
[13:23] But at the same time,
[13:25] this exploration and testing, without any education,
[13:29] without any follow-up or control, can lead to very dangerous or problematic situations. And I think it's...
[13:38] Debbie Reynolds: ...an exciting time to be in technology and in the data space, especially as we're seeing so many shifts around the world, not only in the rapid development of the technology, but also in the regulation flowing in different jurisdictions, and in how companies think about those things.
[13:59] What are your thoughts about impact assessments? I like to ask people from Europe this question because, for example, in the US we have a lot of states passing comprehensive privacy laws, and some of those regulations require impact assessments.
[14:19] For the US, that's kind of a new thing.
[14:22] People in Europe have been doing impact assessments for a really long time. So for people who are maybe just getting into impact assessments, what are some of the things they really need to be thinking about?
[14:34] It may not be obvious to them. Maybe the obstacles are getting the information, figuring out the best way to assess, the best way to document what you assess.
[14:43] What are your thoughts?
[14:44] Federico Marengo: Well,
[14:45] this is always spot on, because every time I talk with American colleagues, they are aware of the risks and of the existence of impact assessments. But
[14:57] I think there are two main differences. First, in some jurisdictions it's not really mandatory. So,
[15:03] from my experience, many American companies do it when they operate globally, since they have to conduct it anyway, so they harmonize their approach. And second, the standard against which we tend to evaluate in Europe is sometimes deeper.
[15:19] But I'm fully convinced that this will change as things go on. On the assessment side, as to what would be, let's say, best practices, we've been supporting many companies on this.
[15:38] When it comes to the evaluation of AI systems, or what we usually call an AI impact assessment, I think the first thing to do is: don't panic. And the second is: don't over-engineer, don't reinvent the wheel.
[15:54] I would say that most AI use cases are not high risk, in the sense of what we consider high risk per the regulation. It's a new technology now, but in the future it will be standard technology.
[16:11] If I could give advice to companies or to colleagues listening to this podcast, the first thing I would do is look at your impact assessments: first, see what impact assessments you currently have.
[16:29] I imagine that many companies may have privacy impact assessments, vendor impact assessments, security impact assessments. So take a look at these assessments, see how they work and what types of questions they have, and try to upgrade them with AI considerations.
[16:49] By AI considerations I mean adding a section, or just some questions, in relation to AI.
[16:56] As for what to ask, what types of questions you may need, this will depend to a large extent on the company and on its capacity to process the information. We can provide the list of questions, but if you lack the skill set to evaluate the answers or to risk-assess them, it doesn't make a lot of sense.
[17:23] But some types of questions could be, for instance: What type of AI is being employed in this system?
[17:34] What is the intended purpose of this AI system? What benefits does this AI system bring?
[17:42] What is the level of accuracy of this AI system, and have you evaluated it?
[17:48] Or, for instance, for high-risk applications like HR systems:
[17:53] What is the disparity, what is the bias? Have you conducted a bias audit of the system?
[18:01] That last one is currently mandatory in some places, like New York.
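To make the upgrade concrete, here is a minimal, purely hypothetical sketch of such an AI addendum, kept as structured data so it can be merged into an existing privacy, vendor or security questionnaire. The question texts mirror the ones listed above; all identifiers, tags and the triage function are illustrative assumptions, not any specific framework.

```python
# Hypothetical "AI addendum" for an existing impact assessment,
# kept as plain data so it can be merged into existing tooling.
AI_ADDENDUM = [
    {"id": "ai-01", "question": "What type of AI is employed (predictive, generative, other)?"},
    {"id": "ai-02", "question": "What is the intended purpose of the AI system?"},
    {"id": "ai-03", "question": "What benefits does the AI system bring, and to whom?"},
    {"id": "ai-04", "question": "What is the system's level of accuracy, and how was it evaluated?"},
    {"id": "ai-05", "question": "Has a bias audit been conducted, and what disparity was found?",
     "applies_to": {"hr", "credit"}},  # only asked for high-risk use cases
]

def questions_for(use_case: str) -> list[dict]:
    """Return the addendum questions relevant to a given use-case tag."""
    return [q for q in AI_ADDENDUM
            if "applies_to" not in q or use_case in q["applies_to"]]

if __name__ == "__main__":
    for q in questions_for("hr"):  # an HR screening tool gets all five
        print(q["id"], "-", q["question"])
```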
[18:06] And then, don't be shy: ask colleagues how they are doing it, because many colleagues are also experimenting and facing similar challenges.
[18:21] And the last thing is something for us as a community of colleagues: we also need to upskill ourselves.
[18:33] We always speak about rolling out training within the organization, upskilling employees and increasing their literacy on how to prompt and on the risks. But it is also on us as privacy professionals to upskill ourselves and to understand how this technology works.
[18:53] What are the different types of technology?
[18:56] What are the typical use cases? How does AI enter the organization?
[19:02] What are the typical risks of developing AI systems?
[19:08] And finally, I guess this is more than ever a multidisciplinary field, where privacy is not the only or the most important component; it is just one of many.
[19:23] Because if you are building a system, you will need support from the data science team, the product development team, the security team. And considering generative AI more broadly,
[19:36] many of the risks, many of the incidents you have seen in the headlines, were not about privacy. If we think about the Deloitte case with the Australian government,
[19:57] that was a hallucination case where there was no personal information
[20:02] and no confidential information involved; it was simply a lack of human intervention, of human review of the outputs of the AI system. So we need to acknowledge that the risks emerging
[20:15] from the development and use of AI are broader than pure privacy risks and also extend to the organizational level. We need to consider not only the risks to the individual, and not only the risks to the data, for instance in terms of security,
[20:34] confidentiality and integrity, but also the risks to the organization itself.
[20:41] And finally, I would say, ESG risks: risks concerning environmental considerations, or concerning the governance of the company.
[20:53] Debbie Reynolds: I always say that not all data is regulated, but all data should be governed.
[20:59] I want to talk a little bit about biometrics, and I'll give you an example of something I think happens a lot within companies as they move into more advanced technology
[21:14] without realizing that their risk is also increasing. Let's say a company had a time clock where people punched in with a card on a mechanical machine. Then the company decided, well, let's upgrade to this new biometric system where people put their thumbprint in and we record this information.
[21:34] This is a very common thing, and we've seen a lot of litigation over the years with cases like this, where companies
[21:41] went from a manual or analog process to digital, collecting more data, or more sensitive data, than before, without really understanding how they need to change the way they operate and think about those risks.
[21:58] But what are your thoughts?
[22:00] Federico Marengo: Well, I think that here maybe you have even more experience than me, in particular because of the Illinois legislation, BIPA, which is among the most stringent regulations in the world in terms of biometrics.
[22:15] But it is true, and maybe some people don't recognize this, that biometric data is a special type of data, in the sense that, differently from my name,
[22:33] my surname, my email, my address or many other identifiers,
[22:41] it cannot be changed. This is why this category of data is so important and special: you cannot modify it.
[22:52] The biometric template of my face or, as you mentioned, my fingerprints cannot be modified. The same goes for genetic data, although genetic data is not biometric per se.
[23:10] So if there is a data breach concerning biometric data, the person whose data it is will be affected forever, because there is no way to change it.
[23:23] In terms of the shift that many companies and organizations are making towards more automated systems, and towards the use of biometrics in particular for time recording or attendance,
[23:39] I see this as a natural progression, in the sense that it is true that this technology is
[23:48] sometimes more convenient for its operator to use.
[23:54] But at least here in Europe, it raises some concerns, and in many places you need to evaluate whether there is a less intrusive mechanism to produce the same result without collecting these biometrics.
[24:15] In practice, this means that if you can record attendance with a card instead of using biometrics, you should make a great effort to justify why the use of biometrics is better than a card.
[24:30] There might be some reasons; for instance,
[24:32] in highly secured areas, it might be that it is more secure to use biometrics.
[24:40] But this requires an effort from the proponent to justify the decision and document it. Because at some point,
[24:48] as you see in many cases, there might be litigation. What usually happens is that either there is a data breach, and the authorities learn that you are using this data
[25:01] and start asking questions, or a data subject
[25:06] makes a data subject request, a request for information about their data, and then makes a complaint to the authorities.
[25:13] And once the authorities are involved,
[25:16] they will start asking questions, and one of these questions will be: why are you using this data when you can use other, less privacy-intrusive means for the same purpose?
[25:29] And just to wrap up, there is another huge difference, because all of this concerns only authentication,
[25:39] that is, determining that a person is who they claim to be. But there is another type of use of biometric data,
[25:49] which is the identification of an individual, for instance facial recognition in public spaces.
[25:57] This raises a completely different type of concern, though in some cases it is justified, for instance for security.
[26:09] In Europe right now these cases are covered by the EU AI Act, so both the developers of these technologies and the deployers, the users, will need to justify the use of the technology and conduct the necessary assessments.
[26:26] Debbie Reynolds: I think that you're really giving us a very fulsome view of what organizations need to think about when they're considering adopting emerging technologies.
[26:38] So: what's happening in the world right now,
[26:41] whether it's happening now or something you see on the horizon, that concerns you most, either around privacy or artificial intelligence?
[26:49] Federico Marengo: I don't think I'm concerned about any particular development right now.
[26:57] We can discuss some concerning cases.
[27:02] For instance, when a company collects the biometrics of the iris, of people's eyes, in exchange for cryptocurrency,
[27:13] or when a company scrapes data or photos to build a database of individuals; these are the more problematic cases.
[27:23] But what I see now, in particular in connection with privacy and AI, is that many companies are struggling with a challenge. On the one hand,
[27:38] they are boosting adoption and encouraging employees to use AI, to explore and to develop more skills,
[27:47] mostly for two reasons: to increase productivity and save time, and sometimes to reduce costs for the company.
[27:56] And on the other hand, they are trying to control the risks, to put safeguards and measures around the responsible use of AI.
[28:10] This scenario is where most of the requests we receive come from: how can we adopt a framework, or adopt measures, that allow employees to unleash the potential of AI, explore new possibilities,
[28:30] understand how it works and improve their work, but at the same time do it in the right way, avoiding data leakage,
[28:41] avoiding providing data to third parties to train models, and avoiding any other risks you may imagine emerging from AI. I think this is a trend
[28:54] that we see, and I expect it will continue until we all have a harmonized level of knowledge and standardized practices.
[29:09] It's like what happened some years ago with the GDPR here in Europe: professionals and companies were shocked by
[29:20] the difficulties of adopting and implementing the GDPR, and now it's standard practice, it's the default. And this is regardless of whether companies are fully compliant or not compliant at all.
[29:36] At least there is a wide and full understanding that privacy is a concern,
[29:45] that privacy is an issue. Then you can decide whether to comply or not,
[29:50] or to what extent you can comply, because it is true that it's very taxing for companies.
[29:55] But I expect the same will happen with AI.
[29:59] Being fully compliant is sometimes not possible,
[30:04] so what I see as a good sign is that companies are making efforts to improve their practices and to be as compliant as they can be.
[30:14] Debbie Reynolds: I think that's true. And one of the good things about the GDPR is that
[30:22] it's been very influential around the world. Since then,
[30:26] as you've seen, a lot of jurisdictions have passed laws that maybe don't mirror the GDPR completely,
[30:33] but they've taken enough bits and pieces of it that if you understand the GDPR, it helps you understand them. One other thing I'm seeing, and I want your thoughts, is that even jurisdictions or companies that don't necessarily have to comply with the GDPR
[30:51] are using it as a baseline so that they can align. If a client asks them for something,
[30:59] it wouldn't be that much work for them to find a way to align with that business. What are your thoughts?
[31:06] Federico Marengo: Yes, and this is what I've seen most of the time, in particular when we are speaking about global companies. Depending on the markets they operate in, they make a conscious selection, a rational choice.
[31:23] Europe is definitely a highly regulated jurisdiction, and if this is an important market for them,
[31:36] they will try to comply. And if they try to comply,
[31:41] most of the time it's easier to apply these measures globally rather than making segmentations.
[31:50] I see companies trying to build separate programs, but this is practically difficult; it's harder to make these differentiations. In some specific areas you can do it: for instance, with cookies you can
[32:08] set geographical rules so that cookies are automatically blocked or not, depending on the jurisdiction. But these are specific cases. For organizational measures, for instance rolling out training or not, you definitely roll it out to the whole company and then expect the best from the teams.
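The geographical cookie rules mentioned here often boil down to a region-to-default mapping. A minimal, purely hypothetical sketch follows; the region groupings and category names are illustrative assumptions, not any specific consent platform's API.

```python
# Hypothetical geo-based cookie defaults: opt-in regions block
# non-essential cookies until consent; elsewhere they run by default.
OPT_IN_REGIONS = {"EU", "EEA", "UK"}
NON_ESSENTIAL = {"analytics", "advertising"}

def cookie_allowed(category: str, region: str, has_consent: bool) -> bool:
    """Decide whether a cookie category may be set for a visitor."""
    if category not in NON_ESSENTIAL:   # strictly necessary cookies always run
        return True
    if region in OPT_IN_REGIONS:        # GDPR-style: blocked until opt-in
        return has_consent
    return True                         # opt-out jurisdictions: allowed by default

print(cookie_allowed("analytics", "EU", has_consent=False))   # False
print(cookie_allowed("analytics", "US", has_consent=False))   # True
```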
[32:30] And I'm starting to see the same in terms of AI.
[32:36] Take the US, for instance:
[32:38] many companies, many developers of AI systems, are American companies.
[32:45] If Europe is a strategic market for them,
[32:52] they will align their practices to Europe. If it is not, they will just say, okay, we either run the risk or we don't provide our products or services in the EU.
[33:04] But if they do provide their systems or place their services on the European market,
[33:13] it will be easier for them to comply with the AI Act globally than to comply only in specific parts of the business, because some parts of the regulation concern organizational measures, like establishing a quality management system,
[33:32] and these need to be implemented at the corporate level in general, rather than per jurisdiction. I expect the approach we've seen in privacy will be mirrored in AI in the future.
[33:50] This is also what happened in security.
[33:53] Large global organizations
[33:59] tend to be more mature in terms of privacy, AI and security, so implementing requirements globally will pay off in the long run.
[34:10] Debbie Reynolds: I agree with that completely.
[34:14] So if it were the world according to you, Federico, and we did everything that you said, what would be your wish for privacy anywhere in the world,
[34:24] whether that be regulation,
[34:26] human behavior or technology?
[34:29] Federico Marengo: Well, even though I work in privacy and AI and am now mostly focused on AI, I still work a lot in privacy, so I understand the challenges.
[34:43] I think that, in terms of privacy,
[34:46] companies might benefit from a distinction between global or large organizations and small organizations,
[34:58] rather than the application of the requirements across the board.
[35:05] Small SMEs sometimes definitely cannot comply. And the same may happen with AI: small organizations may not be able to comply with all the requirements.
[35:19] Now, it is true, and I concede, that from the individual's or the citizen's standpoint,
[35:23] it doesn't matter whether personal data is processed by a large organization or by a small organization;
[35:30] for the individual, the risks are the same regardless of who is processing. So I concede that. But I work for organizations, and I understand that it can be challenging for them.
[35:46] Then, more specifically, what do I expect?
[35:51] On the one hand, this is an expectation and a wish, both at the same time: that we as a community of professionals will work harder in this area, implementing best practices, and also improve
[36:10] our skill set to better provide our services, and make AI something not to be feared.
[36:21] In a few years AI will be a technology like the Internet was 20 or 30 years ago;
[36:28] there might be some differences, but we are going to incorporate these tools and they will be part of our toolset as normal technology.
[36:38] So I expect that we as a community
[36:41] will upskill ourselves and make good use of this technology.
[36:47] In any case, I'm more than happy to support colleagues and your listeners if they have any doubts in this regard.
[36:56] Debbie Reynolds: Those are great wishes.
[36:59] It's true: I think the size of an organization matters, and also what they're dealing with. I've seen a lot of companies panic about the EU AI Act even though they weren't doing any type of processing that would have been dangerous in any way.
[37:17] Federico Marengo: Yeah, definitely. And just a comment:
[37:21] we get a lot of questions from GCs and heads of legal saying, we want to comply with the EU AI Act, we need you to prepare a program to support us in complying with the AI Act.
[37:34] And sometimes, if they are not developers of the high-risk systems
[37:41] that the EU AI Act heavily regulates,
[37:45] they don't need to worry too much.
[37:48] So this is also part of the education that we need to continue doing: understanding the requirements.
[37:57] Specifically on the EU AI Act, it is not a regulation to fear.
[38:05] I understand that for some companies it will be very taxing, in particular if they are in the listed sectors, but for the vast majority of companies it is not a regulation that will impact them heavily.
[38:21] Debbie Reynolds: I agree.
[38:22] Oh my goodness.
[38:24] Well, thank you so much, Federico.
[38:27] I really enjoyed talking with you today. Thank you for taking the time out, and hopefully we'll have a chance to chat again later.
[38:35] Federico Marengo: Thank you very much, Debbie. Thank you for having me on this amazing podcast; I've been following it for many years. Thank you very much to all your listeners.
[38:44] You have built an excellent community, and I'm very happy to be part of it.
[38:48] Debbie Reynolds: Oh, thank you so much. And we'll talk soon. Thank you.
[38:52] Federico Marengo: Absolutely. Thank you very much, Debbie. Let's stay in touch.
[38:55] Debbie Reynolds: Okay, Bye bye.
[38:56] Federico Marengo: Bye bye.