E276 - Willem Koenders, Global Leader in Data Strategy and Author of “The Data Product Playbook”
[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.
[00:12] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.
[00:25] Now I have a very special guest on the show from the Washington, D.C. area,
[00:30] Willem Koenders.
[00:32] He is an author.
[00:35] His book is The Data Product Playbook, and he's also an associate principal at ZS Associates. Welcome.
[00:46] Willem Koenders: Yes, ma'am. Thanks for having me, Debbie. Nice to meet you.
[00:48] Debbie Reynolds: Yeah,
[00:49] great to meet you.
[00:51] I love data people. And so I saw you post about your book and governance, and that always attracts my attention. So I thought, wait a minute, this is really cool.
[01:02] These are great topics. I like the way that you try to go in depth and then as you know, the hardest part is trying to explain it so the people in different areas understand what you mean about data and data products and things like that.
[01:15] But why don't you give me your background in data and how you came to be the author of The Data Product Playbook?
[01:23] Willem Koenders: Yeah, for sure. So,
[01:25] you know, I started my career a decade and a half ago in Europe. I'm originally from the Netherlands, so I started my career there, spending a few years starting out in core strategy consulting.
[01:39] So it wasn't quite in data yet. It was really looking at the biggest strategic issues that some of these big companies were dealing with. But I very quickly found out that, at least for the clients I happened to be working with, the data theme was becoming ever more important to their overall competitiveness.
[01:58] And so I had the good fortune, I would say, to work with a really great series of Chief Data Officers in those first five or six years, when that kind of role started to become a real thing in the financial sector and in healthcare.
[02:14] And so after two and a half years, I came to the United States,
[02:19] have been based on and off here, as you said, in the Washington, D.C. area, but also spent some time on projects in Asia. I actually spent a few years in Latin America,
[02:28] and even in North Africa a little bit. But most of my time has been here in the US. And like you say, one of the things I enjoy the most is helping to take some of these topics and make them
[02:42] practical, so that people can understand them and implement them. How I came to be the author of The Data Product Playbook is that I feel the data product concept is not necessarily new,
[03:00] if you really kind of see what's under the hood and how you can understand it,
[03:04] but it really creates a narrative around how to bring together people from the business side, from the technology side, from legal, audit, and privacy teams, and rally them around the concept of an identifiable data product, instead of operating just at the highest level of the organization.
[03:22] If I connect a few dots, those might be some of those connections. So that's kind of how and why.
[03:30] Debbie Reynolds: Fascinating. Well,
[03:32] let's dig into data products specifically.
[03:35] So what are the things that people misunderstand about data products? So I have my own theory. I feel like some people try to transpose analog thinking of a product to digital products and they're different.
[03:51] So it's not like you're buying a toy at a store, right? That's a product too, but it's not a data product. And in terms of how companies implement products and how people use products, that has a huge implication on people's understanding.
[04:06] But I want your thoughts.
[04:07] Willem Koenders: So I would first go to the core, the heart of the definition of what separates a data product from everything else, right? From other things that have data, but that aren't necessarily a data product.
[04:19] And I kind of like what you just mentioned, the thing you buy in a store. Because I think the part that really goes to the heart of a data product is the product-oriented mindset, right?
[04:29] And that is to say that you take a look at a certain cluster, a certain logical grouping of data and you see it as a product.
[04:36] And if you see something as a product, that means a few things, but more than anything,
[04:41] it means that you're thinking about it being sold, right? Or being used by consumers. Take the toy you mentioned in the store: it doesn't have value in and of itself, right?
[04:54] It has value because you're thinking about the kids,
[04:57] whoever would buy it and what would they be doing with it. And so at the heart of a data product is that you're very radically recognizing and identifying those use cases and you're integrating that into the design of the data product.
[05:11] And I would say not just once, like not just once during a design phase, but you continue to do that, right? So like even today's data product might not be the data product that the organization needs next year or the year after.
[05:24] That's why you have to have a recurring process to continue to make sure that it's fit for purpose and driving the right type of value. Being very radical and simple about that, frankly, is more important than the more technical nuances of what a data product is,
[05:40] in my view at least. So that's a big deal.
[05:43] Debbie Reynolds: Yeah,
[05:44] that's a great, great way to describe it. So let's talk about organizations.
[05:49] So I feel like I just want your thoughts. This is my feeling about organizations.
[05:54] I feel like a lot of organizations operate like Santa's workshop, where everyone has their own little thing that they're trying to do. And so when you're thinking about data and data products,
[06:08] there has to be more dialogue,
[06:11] conversation, collaboration,
[06:13] and that a lot of times that's not naturally occurring in an organization.
[06:18] But how does that impact a company obtaining and implementing a data product?
[06:24] Willem Koenders: Yeah, let's run with your example of Santa's workshop. In a way, data products really strike a balance between allowing people to drive certain initiatives in their respective teams, domains,
[06:40] and areas of the business, versus just letting them do whatever, however, without a set of standards. That's another core dimension of data products. To make that a little bit specific,
[06:54] you could take an example out of any sort of sector or team, but let's stay with Santa's workshop.
[07:01] If you remember the example again: maybe some of the elves are making wooden toys, right? Maybe some of them are making,
[07:07] I don't know, cars for little kids or something. So they are different. They're going to have different consumers, a different pace. But at the same time, if both of them require a certain amount of wood, you don't want them to separately go out and procure that.
[07:21] You don't want them to separately think about,
[07:24] like, what would be safety standards, for example, for kids. Like, you don't want them to, like, reinvent this every single time a new toy is being created. And I think something similar is the case for data products.
[07:34] You really need to enable and keep the data very close to the people that produce, change, and use it. Because if you take that away and centralize it, we've seen it in the past:
[07:43] timelines are going to be longer, you're going to have a disconnect between the use of the data and how it's being managed, and you're going to see all sorts of shadow IT pop up to address these things.
[07:56] But at the same time, there's a very powerful set of things you can do centrally to help enable these folks in their respective decentralized settings. Right. A couple of things come to mind.
[08:06] Think about even just a blueprint. Let's say you're an AWS shop or a Microsoft shop, it doesn't actually matter, you can be pretty specific about
[08:16] the different components around
[08:18] data ingestion, transformation, cataloging, metadata management, lineage, and observability.
[08:24] You can be very specific about how you do that in the context of a specific ecosystem. That's one thing you can help these decentralized teams understand.
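To make the blueprint idea concrete, here is a minimal sketch of what such a published component checklist could look like in code. Every name here (the class, the component list, the example product) is an illustrative assumption, not a reference to any specific platform or to the book itself.

```python
from dataclasses import dataclass, field

# Illustrative sketch: a central team publishes a blueprint so that decentralized
# teams assemble data products from the same standard components.
@dataclass
class DataProductBlueprint:
    name: str
    # The components named above: ingestion, transformation, cataloging,
    # metadata management, lineage, observability.
    required_components: list[str] = field(default_factory=lambda: [
        "ingestion", "transformation", "cataloging",
        "metadata_management", "lineage", "observability",
    ])

    def missing_components(self, implemented: set[str]) -> list[str]:
        """Return the blueprint components a team has not yet covered."""
        return [c for c in self.required_components if c not in implemented]

# Example: a hypothetical data product that has not yet wired up lineage
# or observability.
blueprint = DataProductBlueprint(name="customer_360")
print(blueprint.missing_components(
    {"ingestion", "transformation", "cataloging", "metadata_management"}))
# -> ['lineage', 'observability']
```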
[08:33] The other one that I would mention maybe goes a little bit towards the theme of,
[08:38] I think, a lot of your episodes: data privacy, for example. Data products really need to incorporate in this definition as well: what is this data?
[08:47] What is it supposed to be used for? And critically, what should it not be used for?
[08:51] Especially if you open this up to different AI algorithms as well. Right. All of these things become exponentially more important.
[08:58] And so you don't want everybody who just wanders into Santa's workshop to do whatever they want. But you do want to give them maximal freedom and flexibility to help drive their own business objectives.
[09:10] Debbie Reynolds: I'm glad you brought up privacy, for obvious reasons. As you were talking, I thought about two different things.
[09:19] And I want your thoughts.
[09:21] One is that I feel like sometimes organizations don't know how to ask the right questions so that they can figure out what data product they need.
[09:30] And then also sometimes companies, they think of privacy as an afterthought or something more reactionary, where really a good data product will embed some of those principles early on. But what are your thoughts?
[09:45] Willem Koenders: Absolutely. It goes to one of my favorite things to talk about with different organizations, which is data governance, data management, and data privacy by design. Instead of going into different organizations to try to help resolve the issues of the past,
[10:02] and I'm sure this is going to be necessary for a long time still, but instead of trying to focus on that, you're trying to make sure you get it right when it's being created.
[10:10] Because the ROI on getting it right in the beginning is, in a literal sense, a hundred times higher than trying to resolve it after the fact,
[10:19] when the issues come about. And you're absolutely right: again, the core, the heart of a data product is what is this data,
[10:28] how can it be used and how can it drive value?
[10:30] And then, by design, you should be able to wrap that in the right type of access controls. So if something is truly public, and those exist, right,
[10:40] you have data products whose data is actually public even outside of a company, or at least internal to a company, then it should be freely accessible. It would still be good to see,
[10:50] if only to track impact, who is accessing it.
[10:52] But more often than not, especially when you talk about data products that contain customer 360 details or transactional details, and this becomes exponentially more important when we're talking about healthcare, the financial sector, and things like that,
[11:07] you need to do that by design. What is the access mechanism for the data that sits within a given data product? There should be a certain classification in place for the data product as a whole, but then also for the attributes within it.
[11:24] And then think about least privilege: when somebody gets access to the data, how can you make sure you're giving them exactly the data that they need, but no more?
[11:35] Right? All of these things, it really helps if you get them right at the heart of the data product. Think about a marketplace or a catalog or any sort of place where the data product is captured and described.
[11:49] If you can capture some of those use cases there as well, that really helps to make sure you get the use of the data right from a privacy and compliance perspective.
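As a purely hypothetical illustration of the classification and least-privilege ideas above, the sketch below checks a requester's purpose against a data product's declared uses and returns only the attributes their role is cleared for. Every name, role, and policy in it is invented for the example.

```python
# Hypothetical data product descriptor with product-level and attribute-level
# classification plus declared purposes; none of these names are real.
PRODUCT = {
    "name": "customer_360",
    "classification": "confidential",
    "allowed_purposes": {"churn_analysis", "support"},
    "attributes": {  # attribute-level classification
        "customer_id": "internal",
        "email": "confidential",
        "lifetime_value": "internal",
    },
}

ROLE_CLEARANCE = {
    "analyst": {"internal"},
    "privacy_officer": {"internal", "confidential"},
}

def request_access(role: str, purpose: str) -> list[str]:
    """Least privilege: grant only cleared attributes, only for declared purposes."""
    if purpose not in PRODUCT["allowed_purposes"]:
        raise PermissionError(f"'{purpose}' is not a declared use of this product")
    clearance = ROLE_CLEARANCE.get(role, set())
    return [a for a, level in PRODUCT["attributes"].items() if level in clearance]

print(request_access("analyst", "churn_analysis"))
# -> ['customer_id', 'lifetime_value']  (email stays hidden: exactly what's needed, no more)
```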
[12:00] Debbie Reynolds: Yeah,
[12:01] that's brilliant. And I feel like that's the step that companies miss a lot of times, for two reasons. One is that many products in the past hadn't really thought about privacy.
[12:16] So there was no way to embed it, because it was just never thought of that way.
[12:22] Now and in the future,
[12:24] that's becoming such a heavy thing for companies that they want help. And so getting that help,
[12:30] as you say, it's a hundred times easier to prevent that problem at the beginning than to try to solve it at the end. But I want your thoughts about connecting privacy and cyber risk to data products, and about how real delivery teams deal with this on a day-to-day basis.
[12:51] Willem Koenders: They go very much hand in hand. Right? Think about the definition real quick. If you think about data privacy,
[12:56] it's about data about individuals that may or may not be used for certain specific purposes, depending on the region you're in, the country you're in, the regulatory framework you're under.
[13:09] And cybersecurity is about making sure that you protect it, right? Especially from bad actors, so they can't access it.
[13:15] But one of my favorite things to talk about with organizations is to recognize that there's a certain set of minimal data foundations that actually helps enable both of these things,
[13:26] and not even just these things, but a lot of offensive kinds of objectives and use cases as well.
[13:31] And so what is that core?
[13:33] That core really is: what is the data that we have, and what could it be used for? You have certain data, you know where it is, it has a certain risk classification.
[13:44] And based on that, you can configure both the cybersecurity measures that are required to make sure there's no unwarranted access and that it's protected as required, while at the same time allowing the right actors, the people who should or could be using it,
[13:59] access to the same data.
[14:02] In terms of how we're seeing that, and I describe this in a chapter in the book as well, when you create a data product, you want to make sure you have some sort of engagement very early on with the respective teams,
[14:18] right, with the cybersecurity team and with the data privacy team, so that they can actually tell you, because they have this,
[14:25] they already have it. They can tell you what the policies and standards are; they can tell you what good looks like. And a lot of times, especially in the larger companies, it just takes time to even figure out who these people are and navigate the organization.
[14:38] The business pressure is just too high. People just move, they just put it together, and they worry about some of these other things later, and sometimes too late. So if you have a minimally required lifecycle in place that enables these individuals, from a cybersecurity,
[14:54] from a legal and privacy perspective, to kind of be involved when the actual design is being done,
[14:59] that's really helpful, right? And that can be very productive.
[15:02] If I may add a few words in the direction of cybersecurity, audit, and privacy professionals as well,
[15:09] my personal experience is that they'd like to be involved earlier. They like to be at the table earlier, to help make sure that stuff doesn't happen in the first place.
[15:18] They spend, depending on the organization, upwards of 80 or 90% of their time on fixing the mess, and that's what they're associated with. But they would love to be involved earlier.
[15:27] And I think the majority of those folks, some are still struggling with that, but the majority know how to strike a balance between being reasonable, it needs to be feasible, and not just preventing any possible risk that could possibly happen.
[15:41] You can't rule out any and every risk, but there are very reasonable measures you can take to protect against most of them.
[15:50] If I might just throw my 2 cents in there.
[15:54] Debbie Reynolds: Very good,
[15:55] very good.
[15:57] I want your thoughts on artificial intelligence and privacy.
[16:02] and how it relates to data products. I feel like a lot of people don't understand the interplay there, like how AI impacts privacy in a way that maybe prior types of technologies haven't in the same way.
[16:21] Because I feel like some people don't understand that correlation.
[16:25] Willem Koenders: I would maybe call out two things here, and one is just the scale and how fast it can go, like the automation behind it.
[16:32] And the second is the sometimes black box,
[16:36] unpredictable nature of what's going to happen. The first one is related to a discussion that I've been seeing at a lot of organizations, where they're trying to understand what's even the difference between AI governance and data governance, right?
[16:50] Are they really different? Are they the same?
[16:52] And at the heart of it, they're not necessarily different, in the sense that both really talk about: do I have the right data for a set of use cases, for a set of purposes?
[17:02] But one of the key differences is that AI just goes so much faster. Take the simple example of a recruiting process where you're evaluating resumes. If you have a single individual scanning these resumes, and that individual has certain biases in his or her mind,
[17:21] that's not okay anyway; that's not okay if an individual does it.
[17:25] But the impact,
[17:27] let's say, of an hour of that individual scanning two or three resumes, versus an AI algorithm scanning 15,000 of them, is much bigger. And so the impact is so much more rapid.
[17:38] And I think that's why a lot of AI governance isn't necessarily different when it comes to defining what an appropriate use case is.
[17:48] But you need to be way more careful, because again, in the example of the one individual,
[17:54] over time you might be able to check it, and the scope is relatively limited, still bad, but the AI is exponentially worse.
[18:02] And then the second part: privacy frameworks and requirements really talk about intent, right? The use of data. You are gathering data and you're using it for certain purposes.
[18:13] And the same thing here for AI,
[18:16] especially for some of the unsupervised types of algorithms and models,
[18:25] are you really sure what the mechanism is, how certain predictions are being made? How customer segments are decided,
[18:34] how different prices are quoted to different people, whatever the use case ends up being. And so the at-times less transparent nature of how an AI model comes up
[18:45] with certain recommendations, which with the latest wave of language models is even more the case, makes it harder to be very specifically clear about
[18:58] how you are actually using the data. And so there is an even greater focus required on: are we really understanding what's happening in the black box of this model?
[19:08] And that is a little bit of the part where AI governance
[19:12] goes beyond data governance, right? Because data governance really talks about data and how it is supposed to be used, whereas AI governance also takes in things like bias. You want to make sure that you stick to the right best practices for how a model is created and tracked over time.
[19:27] That's at least some of it.
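To make the intent and purpose point concrete, here is a minimal, hedged sketch of gating an AI use case on a data product's declared "use for" and "do not use for" purposes before any model sees the data. The dataset name, purposes, and policy shape are all invented for illustration; nothing here comes from the book.

```python
# Hypothetical purpose-limitation gate, checked before data reaches a model.
PRODUCT_POLICY = {
    "dataset": "applicant_resumes",
    "use_for": {"recruiter_search"},
    "do_not_use_for": {"automated_screening", "price_discrimination"},
}

def check_ai_use(dataset: str, purpose: str) -> None:
    """Raise if a proposed AI purpose is barred, or simply never declared."""
    if purpose in PRODUCT_POLICY["do_not_use_for"]:
        raise PermissionError(f"'{dataset}' is explicitly barred from '{purpose}'")
    if purpose not in PRODUCT_POLICY["use_for"]:
        raise PermissionError(f"'{purpose}' is not a declared use of '{dataset}'")

check_ai_use("applicant_resumes", "recruiter_search")  # passes silently

try:
    check_ai_use("applicant_resumes", "automated_screening")
except PermissionError as e:
    print(e)  # -> 'applicant_resumes' is explicitly barred from 'automated_screening'
```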
[19:28] Debbie Reynolds: I'm seeing that also. In my view, I think, too, that a lot of times in privacy, well, a couple of things:
[19:36] a lot of times people are thinking about, okay,
[19:38] you know, a company and you give them your data and then they're supposed to let you know how the data is used and things like that.
[19:46] Whereas with, say, AI, for example,
[19:49] Well, maybe you didn't even give them your data. They have it some kind of way, right. They use it in whatever way that they want and then they're creating like derivative information,
[19:59] right. Where a person wouldn't necessarily have access to what that is,
[20:03] but there may be decisions made about them based on that data. So we're seeing a lot of that loop.
[20:10] Willem Koenders: No, totally. The thing I want to mention is the whole part about consent management. Even understanding downstream how the data is being used is one thing, but actually taking that understanding and making sure that you accurately capture and ask for the right consent, at the right moment, in a way that's understandable,
[20:27] in a way that people actually, truly understand, I think is critical. So what you're saying, yes, I definitely recognize that.
[20:34] Yep.
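A minimal sketch of what purpose-scoped consent capture could look like, with an invented record shape: each decision is logged with a purpose and a timestamp, so any downstream use can be verified against the user's most recent answer.

```python
from datetime import datetime, timezone

# Hypothetical consent log; the field names are assumptions for illustration.
consent_log: list[dict] = []

def record_consent(user_id: str, purpose: str, granted: bool) -> None:
    """Capture who consented to what, and when."""
    consent_log.append({
        "user_id": user_id,
        "purpose": purpose,
        "granted": granted,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

def has_consent(user_id: str, purpose: str) -> bool:
    """Honor the most recent decision for this user and purpose."""
    decisions = [c for c in consent_log
                 if c["user_id"] == user_id and c["purpose"] == purpose]
    return decisions[-1]["granted"] if decisions else False

record_consent("u-123", "marketing_email", True)
record_consent("u-123", "marketing_email", False)  # the user later withdraws
print(has_consent("u-123", "marketing_email"))     # -> False
```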
[20:35] Debbie Reynolds: I'm curious about your thoughts. I don't know if you have an opinion about this; if you don't, it's okay. But there's a lot of talk in the news about Moltbook, where you have all these AI agents and stuff talking to each other, and people are going bonkers about this thing
[20:53] from a governance perspective. I don't know if you had an opinion about it or a thought about it at all.
[20:59] Willem Koenders: Well, the first thing I would still like to know, I'm trying to figure out for that particular example, is how much of it really is actual individual,
[21:06] relatively independent agents that are interacting. I would love to understand how much of it is just folks actually putting in the content using, let's say, an agent or a language model.
[21:19] But I mean, this is the big future, right? So far, up until now, I think the vast majority of use cases are relatively simple,
[21:28] single-prompt AI applications, right? You do a prompt, it comes back to you, you give it more feedback. I think very soon you're getting into this potential wild west of agentic capabilities where they all talk to each other, right?
[21:43] And that is where the governance becomes super, super critical.
[21:48] If I might call out maybe one positive side: an organization we're working with is looking at a very complicated, end-to-end, worldwide physical supply chain that they've been managing for decades,
[22:06] in a certain way. It's very complicated: dozens of facilities where they produce stuff, thousands of inputs that go in there, and an enormous wild growth of dashboards that they use to make decisions, right?
[22:18] Where do they source,
[22:20] what do they source from where, what do they pay for it, et cetera. What's really exciting there is that you can start using some of these foundational capabilities, the ones that are also critical for that privacy and protection, around the data definitions, right?
[22:38] How should you interpret them? What's the appropriate use case?
[22:41] Because you can't create a semantic layer without that. No bot is going to look at this whole historic data landscape and just get it exactly right. It's a recipe for disaster.
[22:52] And so there's, I think, a connection into this agentic future where a lot of these agents are talking to each other. And in this example,
[23:02] they are seeing supply chain flows, they're seeing price developments, they're interpreting MI based on the data definitions that have been captured. And instead of just a dashboard being populated, they're making decisions.
[23:14] They're deciding which ship goes where, with what on board, at what price.
[23:19] And that's with a human in the loop.
[23:21] But that's also where, in the same scenario, you can see why governance becomes so critical. If you don't have the definitions... Even these agentic bots need to have the right approvals: what data are you allowed to see, what decisions are you allowed to make,
[23:36] what are the parameters within which these decisions can be made? And none of that is possible if you don't have the right type of data ownership and data stewardship in place.
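As a hedged sketch of those approvals and parameters, with every name and threshold invented for the example: before an agent's proposed decision executes, it is checked against the agent's approved data scope and decision bounds, and anything outside those bounds is escalated to a human.

```python
# Hypothetical guardrail for an agentic supply-chain decision.
AGENT_POLICY = {
    "allowed_datasets": {"shipping_schedules", "spot_prices"},
    "max_order_value_usd": 250_000,  # above this, a human must be in the loop
}

def review_agent_decision(datasets_used: set[str], order_value_usd: float) -> str:
    """Approve, escalate, or reject an agent's proposed action."""
    if not datasets_used <= AGENT_POLICY["allowed_datasets"]:
        return "rejected: agent touched data outside its approved scope"
    if order_value_usd > AGENT_POLICY["max_order_value_usd"]:
        return "escalate: human approval required"
    return "approved"

print(review_agent_decision({"shipping_schedules"}, 80_000))   # approved
print(review_agent_decision({"shipping_schedules"}, 900_000))  # escalate
print(review_agent_decision({"crm_customer_pii"}, 10_000))     # rejected
```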
[23:45] Debbie Reynolds: And I think a lot of data people,
[23:48] a lot of us have grown up in an era where that was vital, that was foundational, right? So you have to have some person who can tell you what is right and what is wrong about a process.
[24:06] So it's like stewardship in some way or guiding the ship in some way, as opposed to being on a ship where you don't know where it's going, you're hoping that it's going to get you to the right place.
[24:18] Willem Koenders: That's right. That's right. And, I don't know, I hope I'm right about the world we're working in, but I'm very positive about the different ways that some of these capabilities are influencing each other.
[24:28] Because AI requires better data, right? AI readiness is something we keep hearing now everywhere. The data needs to be there, available,
[24:35] and high quality for the AI bots to work. But the reverse is true as well: data is rapidly improving because AI is making it better, right? You're interpreting it better.
[24:45] You can fix the holes and do the data quality work. None of these things that, I'm sure, in your past you've seen done manually. I mean, at least I bear the scars of years and years spent deep,
[24:56] deep in spreadsheets, creating definitions and rules and whatnot.
[25:01] No child of ours should have to do that again.
[25:05] Debbie Reynolds: I agree with you completely. What's happening in the world in general that you're seeing that maybe concerns you, as it relates to privacy or data products?
[25:16] Willem Koenders: I don't know if it concerns me necessarily; maybe it was concerning me, but I was reassured listening to Jensen Huang a little while back. Because if you think about AI, and especially if you almost step out of the corporate reality into more of a sci-fi context,
[25:35] where some of these AI capabilities, these agentic capabilities, become so strong that they become really dangerous, right? That they could get out of control in a way.
[25:46] And if you take that pretty far, then you can be pretty pessimistic, right? I think Elon Musk has gone on record saying something like, I'm 70, 80% positive, but 20, 30% is pretty bad. To me,
[25:58] that's a really bad outcome. And then I was listening to Jensen Huang, who was basically laying out: look, these AI capabilities, these AI powers,
[26:08] they're going to be extremely fragmented, right? There's going to be power everywhere: in your phone, in your glasses, in your front door.
[26:15] There are going to be malicious AIs, right? You can see it today. I got some emails the other week where I was legitimately not sure whether they were spam or not.
[26:25] It has become so much easier for them to do that. And even the hacking attacks that can be done by an agent, or several agents spun up at
[26:34] scale. But at the same time,
[26:37] what I took away from Jensen Huang's comment was that, look, the other side moves just as fast, right? The protection against all of this moves just as fast.
[26:47] And so you're going to be in a situation where some sort of AI is trying to get the better of you, but you're going to have different bits of AI with you who protect you, right?
[26:54] Whether it's actual ChatGPT-like personal assistants that help you as you live, maybe as you walk into different opportunities, or protection on your computer, et cetera. And I think,
[27:06] So that was worrying me before, like, are we able to control that?
[27:10] Especially some of these cases that I'd seen in the past, around vulnerable individuals being subjected to these very sophisticated malware attacks. It was painful just to read about.
[27:23] And I was just like, how are we going to control that in the future?
[27:26] I'm just a little bit more optimistic now that the other side will move at the same pace. So I don't know if that helps you or makes you more worried, but it's what I was thinking about.
[27:35] I don't know.
[27:36] Debbie Reynolds: I don't consider myself necessarily a pessimist,
[27:41] but, you know, I am a technologist, definitely.
[27:45] I love technology, but I don't love everything people do with technology.
[27:50] So these stories about, you know, agents taking over and doing different things,
[27:55] that's very interesting to me. So I'm definitely keeping an eye on that. But then you read these stories, and I'm sure you've seen this one maybe about, like,
[28:03] the Louvre.
[28:05] Someone broke into their security system because the password was "Louvre."
[28:11] Willem Koenders: Really? I did not know. Oh, my God.
[28:16] Debbie Reynolds: I'll send you the article. And still, for like 30 years running, the most popular password has been 1, 2, 3, 4, 5, right?
[28:26] So I just feel like on a technology plane, we're going super fast,
[28:33] but on a human plane, we're still in romper room.
[28:38] Willem Koenders: Yeah.
[28:39] Debbie Reynolds: You know, so I'm very scared about that. What are your thoughts?
[28:43] Willem Koenders: The whole run on passwords has gotten me thinking a little bit in the past as well. But it seems to be that the people
[28:49] we were laughing about, who were writing their passwords on a piece of paper, seem to be the safest now.
[28:55] Debbie Reynolds: Yeah, totally.
[28:57] Willem Koenders: Nobody's gonna find those.
[28:59] I mean, I don't know, if they break in, they'll get stuff anyway, I guess. But every single password I have is, let's say, randomized. I'm sure there's potentially some letter overlap, but they're all randomized in their own way.
[29:10] And there's not even a common method to it; they're as genuinely random as can be. And with everything these days, if you want to pump up the tire of your bike, you need to have a login for it.
[29:21] Everything is a password these days. You have like two or three hundred passwords. And so I have submitted to having a personal password manager. And that has worried me; it still worries me, actually, because on the one hand,
[29:34] I'm doing all the right things, in the sense that if
[29:36] they get any single password, they will not get remotely close to any of the others.
[29:41] However, there's a single point of failure, right? If anybody gets access to that one particular master key,
[29:48] that part is worrying me. And so there's a part of me that would love,
[29:52] that even has in the past appreciated, using biometrics exclusively for almost anything.
[30:00] But it remains tough, right? Because then your biometrics become an actual key. You need to store them centrally, which is obviously scary for other reasons. So I don't know, maybe I'm right there with you.
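As a small aside on the "genuinely random, no common method" approach described above, here is a minimal sketch using Python's standard secrets module; the length and character set are arbitrary choices for illustration.

```python
import secrets
import string

# Cryptographically secure randomness from the standard library.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 20) -> str:
    """Generate a password with no shared pattern across accounts."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())  # different every run, no common method to guess
```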
[30:12] Debbie Reynolds: Well, I know, because I used to use voice calling. I don't do it that often,
[30:16] but you know, this is a feature that you've had on your phone forever, and I know a lot of people want to use it. Well, first of all, let me say for the record, and I'm sure people have heard me say this: voice is like the worst biometric you could possibly use.
[30:30] So I totally don't advise companies to use voice as a biometric, especially now that it's so easy to clone someone's voice.
[30:36] Willem Koenders: It is, right? Yeah.
[30:37] Debbie Reynolds: But I remember one time I was trying to use voice and I had a cold and it did not understand anything that I said. It was like calling the wrong people and all this stuff.
[30:48] And it's like those things happen,
[30:51] you know what I mean?
[30:52] So I'm always concerned that we're putting so much faith in, like you say, a single point of failure. Let's say if I didn't know the person's phone number or couldn't call it up some other way, you're kind of stuck if you can't think of an alternative.
[31:10] Willem Koenders: I think so. The part that makes me reassured is that, for those who are minimally tech savvy, we're all used to it.
[31:20] You have two-factor authentication for effectively everything. And if there's anything that's remotely different, they're going to send you an additional code, because you're logging in from a new place, or there was a payment on your transactions, all these kinds of things.
[31:34] If you're minimally tech savvy, I actually appreciate it a lot. But then there are at least two types of scenarios which make me a bit, how do you say that, a bit anxious.
[31:44] And one, it's probably better that I don't use the name of the bank that I happen to be with, but a while back, I'm in the US now,
[31:51] I lived in the Netherlands before, and for some reason there was a temporary block on something, and I couldn't get access to my account.
[32:00] And I was trying to get back into it, and there was absolutely no way for me to call them and answer questions like in the past.
[32:06] What they actually ended up having me do was send an envelope with a printed-out letter requesting a certain action to be taken, with my signature on it, a copy of my passport, and one or two other details.
[32:19] And in my mind I was just thinking: if anybody gets this envelope, they have absolutely everything. This process is the dumbest thing ever. And so that scenario is where some companies, often I think because they just have legacy systems,
[32:34] aren't able to make this easy for you. And because they don't make it easy, I was locked out for a year, pretty much, because I would have had to fly back in person and go arrange this.
[32:44] So that's where you actually open yourself up to other risks. And I think the second scenario is just people who aren't minimally tech savvy. Older folks, for example. There are so many examples where they can't tell the difference anymore between the right code
[32:59] and a spoofed kind of message with a code they need to give. And they are very, very convincing when the phone call comes in and they're asking for something.
[33:08] I have at least three examples in my personal group of people that have fallen for these things, and those don't just make me anxious, they make me very angry too.
[33:17] But that's very hard to guard against, because it has become so common, all day, every day,
[33:23] everywhere, to give a code and to log in.
[33:27] But yeah, what do we do?
[33:29] It's true.
[33:31] Debbie Reynolds: So if it were the world according to you, Willem, and we did everything you said, what would be your wish for privacy anywhere in the world, whether that be technology,
[33:43] human behavior or regulation?
[33:46] Willem Koenders: If it were my desired world, there are maybe one or two things. I would love for a lot of these things to get to people's fingertips.
[33:59] I at least feel that actually exercising your privacy rights is something we still need to get better at. If you think about, for example, the European Union: the intention is there, right?
[34:11] They wanted to protect the rights of the individual. And then a lot of companies came out with these things online, where we have to accept the cookies and you have to click through these different pages of
[34:24] privacy rights. But then if you actually try to exercise them, if you try to really understand them, it's very hard. I have some other examples as well: if you try to find the exact privacy settings in your Facebook account, it's very hard to figure out where
[34:36] they are. And I would love for these to become simplified and closer to the respective individuals, so that they can almost just talk to an interface.
[34:44] They can have their rights explained to them, they can have what their data is being used for explained, and they should be able to make relatively easy changes.
[34:51] Like, I'm okay with this, I'm not okay with that.
[34:55] And you might suspect differently, in the sense that I happen to spend a lot of time in data governance, but I'm fine with a lot of companies having my data and using it,
[35:03] because in a lot of ways, I'm a person for convenience. If I can log in quicker, if it's protected, if the advertisements are relevant to me, in general, I like all of that stuff,
[35:11] as long as it's done in the right, protected kind of way.
[35:16] And then, maybe a little bit closer to what we do:
[35:19] what I happen to be focusing on day in, day out is this whole data-as-an-asset principle. I really would love for a lot of companies that are out there serving customers, clients, patients,
[35:30] animals, et cetera, to really look at the data to help inform how they can serve them best. That's just the continued desire, and there's a way to go, especially with every new wave of AI promising to make things better.
[35:46] Debbie Reynolds: Yeah,
[35:47] I agree with that. So I've always said that I think companies,
[35:51] their best bet is to try to align their data uses with ways that benefit the individual.
[35:57] So you've got to figure out what that person wants, and if it benefits them, then you're in a pretty good situation.
[36:05] A lot of times where companies go off the rails is when they do something with data where it doesn't benefit the person,
[36:13] and may actually hurt or harm the person, because they're not really thinking about that. But I agree with that completely.
[36:20] Willem Koenders: Yep.
[36:21] Debbie Reynolds: It's been a pleasure to have you on the show and talk with you about data products. Folks, definitely check out Willem and check out his book. Super cool. And then you have this cool graphic that you did recently on LinkedIn about understanding data and data products.
[36:38] It's really cool.
[36:39] Willem Koenders: Awesome. I really appreciate it. Appreciate your time. Appreciate the, you know, different episodes that you've been putting out there. It's really been super cool to listen to them.
[36:48] Debbie Reynolds: Aw, thank you. Thank you. Well, I'm sure we'll talk soon. Thank you.
[36:53] Willem Koenders: Awesome. Thank you, ma'am.