E250 - Marianne Mazaud, Co-Founder of AI on Us, an International Executive Summit Focused on Responsible Artificial Intelligence

[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.

[00:13] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast, where we discuss data privacy issues with industry leaders around the world, with information that businesses need to know.

[00:26] Now I have a very special guest on the show all the way from France, Marianne Mazaud, Co-Founder and General Director, AI ON US, The International Summit on Responsible AI for Executives.

[00:40] And the other co-founder is Thomas Lazopone.

[00:44] So I'm really happy to have you here today. And actually, just as a backstory, you and I met on LinkedIn, and you invited me to come be a speaker.

[00:54] Really happy to be able to meet you in person, but this has been really cool. I mean, first of all, you're very organized, which I love, and you're very passionate about this topic.

[01:05] But tell me a little bit more about you and how you got involved with this effort on AI.

[01:13] Marianne Mazaud: Yeah. So thank you so much for having me today, Debbie. I'm so happy and honored to be on your podcast, actually.

[01:19] So my name is Marianne.

[01:21] After completing a preparatory program and graduating from Neoma Business School, I spent over a decade leading creative performance marketing teams internationally across the luxury, beauty, tech, and mobile game sectors, from Biarritz to New York to Barcelona.

[01:37] I designed and rolled out high conversion campaigns for more than 25 global brands and public figures.

[01:44] My daily work at the time centered on real-time creative performance analysis, testing new concepts, understanding what triggers a click, a download, or engagement.

[01:55] And AI at the time was already part of the picture, even though I didn't fully understand how it worked.

[02:04] First through content curation, because most of our work was through social media, Google and Xandr networks. And obviously we all know that AI algorithms directly affect the performance of campaigns.

[02:17] But we were also using additional tools for behavioral targeting and psychographic prediction. And then in 2022, eventually, with the rise of generative and creative AI like ChatGPT,

[02:33] everything changed for us. At first it really represented a massive opportunity in terms of creativity and workflows.

[02:40] But with big opportunities often comes risk:

[02:44] fake ads skyrocketed, deepfakes, image manipulation,

[02:48] celebrity impersonation.

[02:49] And even though this technology could have helped us scale production and boost creative performance,

[02:55] at the time, because we were also working with some IPs and celebrity brands, it appeared too risky to implement GenAI in our creatives.

[03:07] I think the example of Taylor Swift speaks for itself.

[03:11] She was overwhelmed by fake content.

[03:13] And eventually her team had to shut down her Twitter account in September 2024, not being able to protect the brand, right? And so I think, for me, this was a bit of a turning point.

[03:25] So in 2023, in between, I decided to go back to school.

[03:30] I basically earned a second master's degree, this time in AI.

[03:35] So I learned to code, and I spent an entire year working on building detection pipelines for celebrity brands to identify fake content.

[03:43] So I basically built three different systems: one dedicated to detecting deepfakes, another for synthetic voices, and a third for analyzing text and metadata.
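[Editor's note: for readers curious what such a three-detector setup can look like, here is a minimal, hypothetical Python sketch. The detector internals, keywords and thresholds are illustrative assumptions, not the actual pipeline described above.]

"""Minimal sketch of a multi-detector fake-content pipeline (hypothetical)."""
from dataclasses import dataclass, field

@dataclass
class Content:
    frames: list = field(default_factory=list)    # video frames (arrays in practice)
    audio: bytes = b""                            # raw audio track
    text: str = ""                                # caption / post text
    metadata: dict = field(default_factory=dict)  # account age, EXIF, source, etc.

def deepfake_score(frames) -> float:
    # Placeholder: a real detector would score facial artifacts per frame
    # with a trained vision model and return the max over frames.
    return 0.12 if frames else 0.0

def synthetic_voice_score(audio) -> float:
    # Placeholder: a real detector would classify spectrogram windows.
    return 0.08 if audio else 0.0

def text_metadata_score(text, metadata) -> float:
    # Placeholder heuristic: impersonation keywords plus a very new account.
    suspicious = any(k in text.lower() for k in ("giveaway", "official", "crypto"))
    new_account = metadata.get("account_age_days", 9999) < 30
    return 0.5 * suspicious + 0.4 * new_account

def assess(content: Content, flag_threshold: float = 0.6) -> dict:
    scores = {
        "deepfake": deepfake_score(content.frames),
        "synthetic_voice": synthetic_voice_score(content.audio),
        "text_metadata": text_metadata_score(content.text, content.metadata),
    }
    # Flag when any single modality is confident, or the average is elevated.
    flagged = (max(scores.values()) >= flag_threshold
               or sum(scores.values()) / len(scores) >= 0.4)
    return {"scores": scores, "flagged": flagged}

if __name__ == "__main__":
    post = Content(text="Official crypto giveaway!",
                   metadata={"account_age_days": 3})
    print(assess(post))  # flagged: True (text/metadata score 0.9)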

[03:54] But the further I advanced on this initiative, the clearer it became to me: technology alone isn't enough.

[04:01] And so we need guardrails, we need dialogue and standards.

[04:05] And beyond misinformation,

[04:08] AI raises real structural concerns, right? Algorithmic biases are creating new forms of discrimination across domains like healthcare, hiring,

[04:19] and education.

[04:19] And that's why, with Thomas, I co-founded AI ON US,

[04:23] a strategic platform and the first international executive summit for responsible AI, to which we have the honor of welcoming you. And it will take place in Biarritz this October.

[04:34] Debbie Reynolds: Yeah, that's tremendous.

[04:36] So it's obvious to me that you are deeply passionate about not only technology, but people, and how technology can help people and not hurt people. But why do you think it's important to focus on responsible AI?

[04:53] Marianne Mazaud: Well, first of all, I think there is an erosion of trust that, you know, we are all going through worldwide.

[05:01] And according to KPMG's 2025 global study,

[05:07] based on responses from 48,000 people across 47 countries,

[05:11] 54% say that they don't trust AI. And this number drops to 39% in advanced economies,

[05:19] but rises to 57% in emerging ones. And I think it highlights a growing imbalance between tech adoption and its governance.

[05:29] I also think, and that's what the report lists, that this distrust stems from real concerns. There are concerns around cybersecurity,

[05:37] algorithmic biases, as I was explaining before, misinformation,

[05:40] loss of human oversight, and job displacement. I think it's pretty clear to me that the public is demanding more transparency,

[05:48] more safeguards, more control,

[05:50] and above all, more inclusion.

[05:52] And these are basically the aspects that responsible AI frameworks will tackle, accompanying global leaders as they implement and operationalize them.

[06:04] Debbie Reynolds: So what things are you seeing now? Obviously,

[06:07] artificial intelligence is an umbrella for so many different use cases and ways that people can implement the technology.

[06:16] What are you seeing?

[06:18] Maybe something that's concerning you most, something new that's emerging, a use case you've seen that's concerning, or a way of using the technology that has caught your attention recently.

[06:32] Marianne Mazaud: That is a very, very broad question.

[06:36] Actually, I can take a concrete example, because I was attending the All Tech Is Human gathering in London.

[06:44] Eric Shodruck presented a beautiful video about the disability technology project they've been conducting.

[06:52] And they were basically focusing on disabled people who can't use their hands. And most of those disabled people are actually involved in the project.

[07:01] And so it was really interesting to think about how tech nowadays is sometimes just simply not inclusive. We all use devices that you have to access through scrolling and typing, which means that obviously you have to have proper use of your hands.

[07:15] Right.

[07:16] So why not build technology that can also serve disabled people?

[07:21] And during this gathering they were also tackling children's access to technology. And one of the aspects that really stuck in my mind is that they were saying that, the same way they did it for the disability project,

[07:39] children should also be at the table in those conversations. Or, let's say, we should also try to build more AI,

[07:45] not just for adults.

[07:47] Because even though ChatGPT, for example, was never designed for kids in the first place, we all know that nowadays kids have access to those tools. So ensuring that there is,

[07:56] let's say, safe access for children is very crucial. But there are many other aspects that we could tackle here. Another thing: we often hear about sovereignty challenges when it comes to AI in healthcare.

[08:13] And I think the reason it is so important is that we need to ensure the technologies we develop are, again, inclusive, in the sense that if you have an AI model that has been trained on a database of US citizens, it will never apply the same way to European people.

[08:28] Right. Because we all have different genes and potentially different medical backgrounds. So I think, as we all develop AI capabilities globally,

[08:39] we really need to take into account the specificities of each region and the purpose of those AI tools, especially if they fall under high-risk system categories, which include education, healthcare, and HR, just to name some.

[08:53] So, yeah, inclusivity,

[08:55] in my perspective, is so crucial for AI if we want to make it effective and also foster trust in those technologies.
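[Editor's note: a minimal Python sketch of the audit this point implies: reporting a model's accuracy per regional subgroup rather than only overall, so a model trained on one population cannot silently underperform on another. The toy model, data and threshold below are illustrative assumptions.]

"""Per-region performance audit for a classifier (hypothetical toy example)."""
from collections import defaultdict

def subgroup_accuracy(records, predict):
    """records: iterable of (features, label, region); predict: features -> label."""
    hits, totals = defaultdict(int), defaultdict(int)
    for features, label, region in records:
        totals[region] += 1
        hits[region] += int(predict(features) == label)
    return {region: hits[region] / totals[region] for region in totals}

if __name__ == "__main__":
    # Toy model "trained" on US-like data: predicts positive when feature > 0.5.
    predict = lambda x: int(x > 0.5)
    records = [
        (0.9, 1, "US"), (0.2, 0, "US"), (0.7, 1, "US"),
        (0.6, 0, "EU"), (0.4, 1, "EU"), (0.8, 0, "EU"),  # shifted population
    ]
    per_region = subgroup_accuracy(records, predict)
    print(per_region)  # {'US': 1.0, 'EU': 0.0} -- overall accuracy of 0.5 hides this
    # A healthy audit would alarm whenever any region falls below a set floor.
    if min(per_region.values()) < 0.8:
        print("warning: a subgroup is below the performance floor")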

[09:05] Debbie Reynolds: Absolutely. So I agree with you wholeheartedly, and I like the fact that you gave really great examples that maybe a lot of people don't think about. One thing that I think about with AI that is very different from a lot of other technologies is that you don't have to be a user of AI for AI to impact your life.

[09:26] Right. And so that gets into things like automated decision making,

[09:30] which could be done with or without AI. But also, because of this rush by companies and organizations toward fast adoption of AI, they need to really think about those cases where decisions may be being made about people with AI that those people don't know about, and things like that.

[09:49] Obviously in Europe you all have the AI Act.

[09:53] In the European Union you also have the GDPR.

[09:57] So you know, to me those are kind of like puzzle pieces that fit together. But I want your thoughts about privacy and how that plays into risk in AI.

[10:09] Marianne Mazaud: Right.

[10:10] Okay.

[10:10] And I'm going to mention a report here.

[10:14] You're the expert here, so you know better than anyone how AI transforms data, and how hard it is to track, just by the nature of the technology itself.

[10:24] There is a very interesting report that was published by PricewaterhouseCoopers in 2024 called the Voice of the Consumer.

[10:32] And they basically interviewed 20,000 consumers across 31 countries. And 83% of the respondents consider data protection a core pillar of trust.

[10:42] And we all know that if people don't trust a brand, they stop purchasing.

[10:47] So in my opinion, there is a big challenge around data privacy that goes beyond regulations, whether we're based in Europe or somewhere else. And we also know that the landscape is very much fragmented and that we nowadays have contrasting approaches when it comes to innovation in Europe versus the US;

however, it's very important to take into account that the way companies handle people's data will directly affect their reputation,

[11:13] the trust from their community,

[11:16] and, in the long run, the performance of those brands.

[11:19] Debbie Reynolds: I agree with that, and I tell companies that all the time. I say: if people don't trust you, they're either not going to give you their data or their business, or they're going to give you data that's incorrect.

[11:32] Right. So you're making bad decisions and drawing bad insights because they don't trust you. But I want to go back to something you said earlier,

[11:39] which I'd love for you to expand on, and that is that companies really need to be thinking beyond regulation.

[11:46] So to me, regulation is a floor, not a ceiling. And I think what we're seeing as companies try to rapidly adopt artificial intelligence and different technologies is that

[11:58] sometimes they go so fast that they're not really thinking it through. And this goes back to trust as well: am I handling someone's data in a way that makes them trust me? Am I respecting their expectations of me? And things like that.

[12:15] But I want your thoughts on that.

[12:17] Marianne Mazaud: Right, okay, interesting. Thank you for asking this.

[12:20] Well, I do think that responsible AI and competitiveness go hand in hand,

[12:28] especially when we study next generations' insights and so on. And I do think that innovation and regulation can go hand in hand.

[12:38] There is actually a report that is very well known, from leaders in responsible AI in Australia, that explains that up to 80% of AI projects fail. And often they fail due to avoidable issues like biased data, structural gaps, or the lack of a legal framework from the start.

[12:55] And frameworks like the EU AI Act aim to correct these pitfalls in the EU market. For example,

[13:01] Article 5 of the AI Act now places joint responsibility on both developers and deployers of high-risk AI systems.

[13:10] So they must assess and mitigate risk: for example, ensure transparency, particularly for synthetic content like deepfakes,

[13:17] maintain robust data governance, and provide documentation to demonstrate compliance. And if you are not compliant, you can expose yourself to fines and product bans.

[13:29] But to go beyond that: I don't think that compliance is just about avoiding penalties.

[13:35] There is a recent report called the Return on Investment of Ethics, published by The Digital Economist in June 2025, that is very interesting. It basically crosses return on investment with AI ethics.

[13:49] And it shows that companies using ethical AI from the start are on average 20% more efficient,

[13:56] that they face 35% fewer legal disputes, and that they are therefore able to reduce unexpected compliance costs by 40%.

[14:05] So what I want to say is that when done right, ethical design, transparency and inclusion become key differentiators, right? Not burdens.

[14:14] And that's also what AI ON US is working on,

[14:18] from executive briefings to immersive workshops and the signing of an ethical AI charter, providing a structured and operational pathway to building AI systems that are responsible, compliant and high performing.

[14:30] However, I do agree with you: there are a lot of gray areas around the AI laws. These are texts that are evolving, and they are not always easy to implement. And that is why we have this beautiful committee of 25 AI experts, which you are part of, working over months to produce best practices and help global leaders develop frameworks that are aligned with the reality of the market.

[14:55] Python is a universal language, right?

[14:58] But as we said before, the regulatory landscape is very fragmented and AI laws are not universal. So beyond those aspects, I think there is, in reality, a bigger need for standardization.

[15:12] Debbie Reynolds: I agree.

[15:14] I want your thoughts about the differences, or maybe the tension, between the EU and its activity toward regulating AI, where in the US we're almost trying not to regulate AI,

[15:36] so it's almost the exact polar opposite. But tell me about that tension, and I'll give you my thoughts on what I'm seeing from this side of the pond.

[15:46] Marianne Mazaud: Right.

[15:47] Well, I think there are two types of reactions to what's going on.

[15:52] I think it adds a burden, a layer of complexity, for small companies that are developing AI,

[16:01] meaning that they are often in need of more resources, and having to comply with those legislations adds a layer of complexity, especially for companies that potentially started designing their AI models before the EU AI Act began applying in February 2025.

[16:20] So I think there is a lot of back and forth for those companies, and we all know that when you have to redesign, rethink or amend an AI architecture or an AI model after creation, it can be extremely costly.

[16:34] And it also means losing a lot of time redoing all those steps.

[16:39] There are definitely frictions with that.

[16:43] And it's the same in the US:

[16:46] a lot of innovators or startuppers don't see regulation as something that helps them deploy AI efficiently, but, on the contrary, as something that will keep them from being able to innovate.

[16:59] And I believe there is this overall feeling of: how fast can we go if we have all of those rules to comply with, when in front of us we have markets like the US that are much more liberal?

[17:16] Right? Meaning that you guys have the chance to basically design and test new product features, launch them on the market,

[17:23] see what goes wrong, and then retroactively amend and polish what needs to be amended.

[17:32] On the other hand, again, I think the position of Europe is interesting in the sense that, instead of repairing, they're trying to prevent emerging harms and emerging risks from AI.

[17:43] So there's definitely a workflow that requires a different approach to innovation.

[17:51] And I do believe that the reason we created AI ON US is that we still need a framework that allows companies to innovate and remain creative.

[18:03] So as I was saying before, I don't think that AI regulation should be seen as a hurdle,

[18:09] but as an opportunity to structure for trust and market competitiveness. It's also a way to position yourself strategically, as the Bird & Bird report mentioned, which is basically a simplified guide to the EU AI Act.

[18:24] Cutting corners in AI can cost you, while a responsible, ethical and well-governed approach brings far greater returns.

[18:32] So yeah,

[18:34] Again, to deliver on this vision, businesses need a structured framework, one that empowers them to stay innovative and competitive in Europe and globally. And I do believe that Europe is the platform of choice to meet tomorrow's AI challenges.

[18:47] And that's what AI ON US is designed to be: the go-to forum for aligning legal, technological and social innovation.

[18:55] Debbie Reynolds: Yeah,

[18:56] so you mentioned the summit,

[18:57] so let's talk a little bit about that: the International Summit on Responsible AI for Executives, AI ON US.

[19:04] Why was it important for you to do this type of summit? And what benefits will the attendees get from this summit?

[19:14] Marianne Mazaud: Right,

[19:15] so really, the place we're coming from is to reconcile regulation and innovation.

[19:24] And the summit is designed as a direct response to executive concerns.

[19:29] And that is also why, supported by the French Ministry for Europe and Foreign Affairs,

[19:34] we offer a very unique, actionable and immersive approach to AI risk management.

[19:41] So participants have the opportunity to engage in a unique experience featuring executive briefings, delivered in the morning,

[19:50] which our team of 25 AI experts has been working on for over six months, and an EU AI Act simulation. And we decided intentionally to focus on the EU AI Act because it is often the piece of legislation used as a guide worldwide; it is the most advanced one, the most constraining one.

[20:09] So by being compliant with the EU AI Act,

[20:12] you should usually be compliant in most regions. However, we'll also look into what's going on in other regions and produce recommendations on which AI laws or frameworks you can refer to.

[20:26] For example, if you intend to operationalize your activities in the LATAM market, it might be good to look into the Brazilian AI law, which is the most advanced one in that region.

[20:39] On site, we also offer participants the chance to sign the first international charter for inclusive AI, the Arborus Charter. It was presented at the AI Action Summit as one of the deliverables.

[20:49] But we don't just let people sign it and leave the room. We then invite them to take part in an ethical innovation sprint, during which we will deliver and present the AI playbook that you and our committee of experts have been working on.

[21:07] Yeah, and that's the depth of this work. The theme of the summit is turning AI risk into performance through compliance, inclusion and analytics.

[21:18] And so, in terms of deliverables, besides attending six hours of executive briefings

[21:23] and taking part in two immersive workshops, we'll also produce an AI playbook, as I was saying before,

[21:30] and that AI playbook will combine a structured map of AI risks and biases. All of our teams are building on the taxonomy produced by MIT FutureTech, led by Peter Slattery,

[21:43] an international benchmark of AI laws and standards,

[21:46] but also recommendations when it comes to setting up robust vendor management strategies: how do you build contracts, how do you ensure that you can conduct an ongoing assessment of your AI vendors, keep an eye on how the models are trained and evolving, and avoid that forever lock-in situation where you feel legally attached to a vendor forever.
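[Editor's note: a minimal sketch of what such an ongoing vendor assessment could look like in Python. The criteria, weights and thresholds are illustrative assumptions, not the summit's methodology.]

"""Weighted AI-vendor scorecard for periodic review (hypothetical criteria)."""
CRITERIA = {                      # weight per criterion (sums to 1.0)
    "model_transparency": 0.25,   # training-data and model-update disclosures
    "data_governance": 0.25,      # where data lives, retention, processing terms
    "exit_portability": 0.30,     # export formats, contract exit clauses
    "incident_response": 0.20,    # breach/bias notification commitments
}

def score_vendor(ratings: dict) -> float:
    """ratings: criterion -> analyst rating on a 0-5 scale; returns weighted 0-5 score."""
    return sum(CRITERIA[criterion] * ratings[criterion] for criterion in CRITERIA)

if __name__ == "__main__":
    acme_ai = {"model_transparency": 4, "data_governance": 5,
               "exit_portability": 2, "incident_response": 4}
    print(f"weighted score: {score_vendor(acme_ai):.2f} / 5")  # 3.65
    # Low portability is exactly the lock-in risk described above:
    if acme_ai["exit_portability"] <= 2:
        print("warning: lock-in risk -- negotiate exit terms before renewal")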

[22:14] And day one,

[22:15] as I said before, will feature a workshop on the EU AI Act.

[22:19] Why it is so important to us is because Alexander Chulkhanov has developed a scenario that is inspired by reality.

[22:27] So we're going to split the 100 participants into different teams, and each team will represent the board of a fictional company.

[22:35] Alexander will unfold a scenario where each company faces market changes

[22:41] and law enforcement audits; they will have to make decisions, and at the end of the workshop they'll find out whether their company fails or succeeds.

[22:50] And on day two, we also go beyond that mapping of AI risks

[22:56] and impact, and the standards when it comes to AI laws, et cetera. We'll look into the regulatory gaps, and as I was saying before, that is also a key point, because there are a lot of gray areas,

[23:08] and sometimes some misunderstanding of how to translate those AI laws into science, into Python, into AI models.

[23:17] So that is why we'll talk about standardization, and that's when the committee will formulate best practices. All of those best practices will also be combined in the AI playbook, and the AI playbook will also include additional tools

[23:32] and resources that participants can use when they're back at their day-to-day work. Another key point is that during the immersive workshop of the second day, the responsible sprint, we let participants use different open-source tools that they can use at their company with their internal teams, to help them map AI risks and also assess vendor management and choose vendors wisely. Among those tools,

[23:59] I would like to mention the Rez AI tool, which was created by Iman Jujoshi.

[24:07] He's also a peer of Peter Slattery; he worked with him on the AI risk taxonomy, and the tool is actually connected to the AI risk taxonomy from MIT FutureTech.

[24:14] And there is another tool as well that we're going to offer participants, again open source, so they can use it when they're back at their day-to-day work.

[24:23] It's called Open Ethics Label, and it helps, basically,

[24:26] as I was saying before, with choosing AI vendors wisely. So the idea really is that during those two days they receive information and best practices that they have to digest, but we also ask them to participate, to test the frameworks we are offering, with the ultimate goal of them being able to take those learnings back to their company,

[24:51] share the AI playbook with their internal team, and also potentially rerun the exact same exercise that we're going to conduct at the summit.

[25:00] Debbie Reynolds: That's tremendous work because I know a lot of times when you have these types of summits,

[25:06] there may not be a great takeaway, or sometimes people feel like they're just in rooms being talked at, as opposed to having an interactive, immersive experience where you as a group are actually collaboratively creating content and information together.

[25:22] So I think it's great.

[25:24] Marianne Mazaud: Thank you.

[25:25] Yeah,

Yeah. And again, I mean, you're part of this, you know, Debbie. So, I mean,

[25:31] congratulations to you too. And that is also why it was important for us to include the Arborus Charter in our summit. Because the Arborus Charter, at the end of the day, is the first step towards the GEEIS-AI label: gender equality and inclusion in AI.

[25:47] It's a standard that is implemented internationally and in Europe.

[25:52] And the Arborus Charter has already been signed by 150 companies across 37 countries,

[25:56] some of them being L'Oréal, Orange, Tadam. So actually multinational companies.

[26:01] But why is it symbolic? Because we believe that it offers structure, clarity and a common foundation to translate CSR intentions into tangible actions.

[26:12] Debbie Reynolds: Yeah,

[26:13] very good, very good.

[26:15] So if it were the world according to you, Marianne, and we did everything you said, what would be your wish for AI anywhere in the world, whether that be regulation,

[26:27] human behavior or technology?

[26:30] Marianne Mazaud: Well, I mean, for me, like whether we talk about regulations or standardization,

[26:35] as long as we put mindful intentions in what we develop,

[26:40] it doesn't matter what form it takes. It's just a matter of finding a sweet spot in really accompanying global leaders in developing responsible AI best practices.

[26:52] I think it's key for them to remain competitive on the market, to remain connected to their communities.

[26:58] But that's also what we want for society: we want to develop tools that make sense, that help us become a better version of ourselves, not the contrary.

[27:08] So yeah, I just wish, basically, that we raise more awareness around those topics. I think they are sometimes not easy to understand, and the direct impact of AI can sometimes seem quite far away from us.

[27:22] Because it remains a tool, it's part of a computer. So how does it actually impact my life negatively, or how could it?

[27:30] That is why one of our teams, the trust and safety team, is actually doing amazing work crossing product feature analysis with impact on human rights. I think it's also important to give high-level information to those global leaders so they can actually understand what the negative, direct effects of their models could be.

[27:50] Because it is not always very clear and obvious to everyone.

[27:55] And then I wish that people don't fall into fear, but push themselves to try tools, experiment, iterate. It's only by adopting tools that you can understand how they work and finally protect yourself.

[28:11] One good example: we are using AI assistants. I'm the first;

[28:17] I'm very fond of ChatGPT and other tools like that. I use them in my daily work.

[28:22] However, being aware of how those tools are designed and the way they are set up is extremely important to ensure that you leverage them to increase your strategic insight, not diminish it.

[28:39] What I mean by that is that there is a very good study that was conducted by Microsoft and Advait Sarkar,

[28:50] in 2025 or 2024, I can't remember. Anyway, they basically studied a group of students and workers using and not using AI assistants, and they actually came

[29:02] to different conclusions that are very interesting. First of all, they found that people who are confident are more willing to question the outputs that an AI assistant can provide.

[29:15] However, at the same time, if you are someone confident who is used to challenging the outputs, once you start integrating those tools into your day-to-day work, over time you might stop challenging the outputs, because you start trusting them fully; they're integrated into your day-to-day work and it feels normal to use them.

[29:36] So you kind of lose that critical reflex. A second thing that was quite interesting as well is that they compared, on creative projects, the answers from people using AI assistants versus workers not using them, and they found that the outputs were always more creative when people don't use AI assistants.

[29:57] So what is happening is that nowadays we have models that are so widely used worldwide, and that have been gathering so much information on the way people think, that we start facing what they describe as an emerging risk: the echo chamber effect.

[30:17] Meaning that the tool is so powerful that it can start predicting and standardizing our human thinking. And by doing so,

[30:24] it takes out the diversity of outputs, and the creativity, that we could usually find. So I think it's really important to understand how those tools work and to ensure that we shift the cognitive effort. You know, before, we had to read tons of documents and summarize them, and it took us ages.

[30:43] Now we can automate that part. But maybe the actual work is not summarizing the document, right? Because AI can do it. The work is going to be around: okay,

[30:52] what do I want to find?

[30:53] How can I ask that question?

[30:55] How can I verify the veracity of the answer? So prioritizing a model where you can check the reasoning is extremely important. And it goes without saying that if you are a corporation,

[31:06] using AI assistants internally can be quite challenging, because of all those open-source questions and data privacy questions that you are quite the expert on.

[31:15] Debbie Reynolds: That's amazing. Thank you so much. I'm so happy to be able to collaborate with you and have you on the show today.

[31:22] If there are executives who are interested in being part of the International Summit on Responsible AI for Executives, what's the best way for them to reach out to you?

[31:32] Marianne Mazaud: Well, they can go on the website,

[31:34] www.AI-on-earth.com. They can check the deliverables there, they can book a 15-minute call with me, and then finally we can help them, ensuring that the summit answers their needs, and guide them through the package that best suits them.

[31:52] Debbie Reynolds: Very good. Well, I'm excited we were able to do the show today and I'll be excited to see you in France.

[31:58] Marianne Mazaud: Me too, Debbie. We can't wait to have you here with us. And really, it's amazing to have you as part of the initiative. I remember you were one of the first speakers I contacted when working on the initiative.

[32:11] You said yes. I didn't feel like I had to convince you. I know that we share the same values, but really I'm very fond of your work and it's really an honor for me to be on your podcast today.

[32:21] So thank you.

[32:22] Debbie Reynolds: Thank you, thank you, thank you.

[32:24] Well, this is amazing, and I know that the audience will love this show as much as I do. Definitely keep up the great work, because I think this is super important, and I think the foundation that you're laying now will be very important for decades to come.

[32:39] So keep up the good work.

[32:41] Marianne Mazaud: Thank you so much.

[32:42] Debbie Reynolds: All right, talk to you soon.

[32:44] Marianne Mazaud: Talk to you soon. Bye.
