E146 - Cari Miller, Sr. Principal, Practice Lead, Responsible AI Governance & Research, The Center for Inclusive Change

47:12

SUMMARY KEYWORDS

people, data, ai, companies, employee, problem, disability, lifecycle, systems, question, employment, governance, audit, law, employer, job, adp, probabilistic, technology, privacy

SPEAKERS

Cari Miller, Debbie Reynolds

Debbie Reynolds 00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds. They call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a special guest on the show all the way from Delaware. Cari Miller is the Senior Principal Practice Lead of Responsible AI Governance Research for the Center for Inclusive Change. Welcome.

Cari Miller 00:43

Thank you for having me on with "The Data Diva". I'm okay, I'm gonna calm down. But I'm just, I'm on with "The Data Diva", and I'm so excited. Sorry, I'm really gonna calm down.

Debbie Reynolds 00:55

Oh my goodness, it's a pleasure to have you on the show. You and I have chatted back and forth on LinkedIn, and I said you should come on the show, especially because there is so much brewing and so much talk in the news right now about AI, and you work in very interesting areas of AI. So let's start by having you tell me about your journey and how you got into Responsible AI. I know you had asked me before we started what my first job as a teenager was, which was working in a daycare center; kids are so bad. But obviously, when you were growing up, you probably weren't dreaming of working in Responsible AI. So tell me, how did you end up here?

Cari Miller 01:42

No, but I was playing office; I did play office a lot with my sister. My first job out of college, I got into a marketing firm, and one of their products was a proprietary e-commerce platform. It was multi-tenant, kind of like the Shopify platform, where multiple tenants were on the platform. And that was my first front-row seat, really, because this was in the late 90s, to the explosion of the Internet and all of that good stuff. So I got to watch all of that unfold. And you know, in the early days, it was just very deterministic; what you told it to do, it did. And then we got into, well, if we put these products in this position, people will buy them more, and you started to see how this was going to go. And then advertising came around with social media, and you could see how algorithms were starting to place products in certain people's paths. So you know, you didn't place the dog food in the path of a cat lover. It was just very fascinating to me; I was always into marketing, with my master's degree in marketing. But that company, and those companies I was working for, were, let's just say, allergic to estrogen. And so one day I quit. And I thought, you know what, kids need to understand this stuff. So I followed a passion project and opened a hands-on science center, and I loved it; it was so cool to see kids get involved in STEM. Then eight months later, the pandemic hit, and I was like, oh, that's not going to work. So I started my dissertation, my doctorate degree, and I was like, all right, next plan. And that's where I really got deep down into the algorithms. I took everything from my 20-year career in strategy and everything I was doing in digital transformation, which was a lot of data work, and I just sort of parlayed that into my doctorate and started to look at, okay, what is going on with algorithms. I was very attracted to sociotechnical algorithms, because they're so sensitive, and they really can mess with a person's life, you know. And so here we are. I kind of drifted into the employment side of algorithms, and ad tech; those are my two favorite areas to dig into. But the employment side is where I'm doing my doctorate degree right now. That's my journey. And I love it.

Debbie Reynolds 04:24

Wow, that's amazing. I love to hear about people who are able to take what they're learning or what they're interested in and make something new out of it. You're definitely charting a path for other people, including us estrogen folks who want to go into technology fields. I want to dig into the employee data landscape around AI. This is such a sensitive topic, and I feel like it's going to be a hot issue in the US because of the California CCPA, where on January 1, 2023, the employee rights kicked in for California, and I think a lot of other employers around the world are looking at this. The issue in the US, and I know a lot of my European friends would be horrified by this because we've talked about it before, is that as an employee, you really don't have a whole lot of rights. Once you become an employee, your emails and anything you do with your employer are not private, and a lot more of people's personal data is getting into employers' hands, especially because of some of the post-Dobbs stuff, where people may have to divulge more information around their health and some of the things that they're doing. But I want your thoughts on this: a lot of times, because people in HR positions knew they didn't necessarily have to share the information around what they collected, they collected a lot of stuff they probably shouldn't have collected; it's kind of a bad system. So tell me a little bit about that story; give us a deep dive into what's happening in AI around employees and employers.

Cari Miller 06:25

Yeah, so I have a tendency right now to focus on the buyer side of the equation, which is exactly what you're talking about, the employer side. I feel like the industry, and when I say the industry, I mean my people, the responsible, ethical AI people who are watching the space with a critical eye, they're very focused on the developers and making sure the developers are making ethical choices. Good. Lots of people are looking at that. I'm worried about the buyers. So I'm in the space that you're talking about, and there's a lot there; you're exactly right, I'm seeing exactly what you're saying. They collected a lot of data they probably shouldn't have. There are not a lot of rules around it. There's not a lot of protection for employees, if any. And it's a problem. So I'm going to go one further with this. I'm in the process of writing what I call the full-spectrum employee data taxonomy. When I say full spectrum, I mean a full, robust spectrum of employee data. You mentioned a few pieces, but let me tell you what's in this taxonomy. For me, it starts with PII, personally identifiable information. That's what's in the California law, I think. Am I right about that?

Debbie Reynolds 07:42

Yes.

Cari Miller 07:43

Cross-check me on that; there's a lot of stuff going on in the States. So it's PII, you know, Social Security numbers and birth dates and things like that, down to individual addresses and stuff that's really sensitive. Then my next category I call sensitive, voluntary information, and this is a really big category: payroll data, your benefits selections, which will tell them whether you have a family member or not, whether you might have a disability, your beneficiary information. Now, if you're providing them with your kids' Social Security numbers, what are they doing with that? How are they protecting that data? Your attendance record, your medical leave, your disability accommodations, even feedback from polls and surveys, how you feel today, all of that kind of stuff falls into this category. Now, you mentioned some health information. There are chatbots out there, particularly some that will let employees ask questions to try to navigate, like, which benefits package should I select? I was recently reading some stuff on this, and the chatbot provider was like, yeah, we'll save this information for you for a while, and you can have people look at it. It's like, wait, wait, I'm going to ask a question about whether I might get pregnant this year, and you're going to keep that and give it to who? Can I have a say over that? And there's no law, you know. So that's a category, but it gets better. Then there are credentials, not just what school you went to, but your third-party training, whether or not you took the sexual assault prevention training, what assessment scores you got. Then there's your position history: what was the job description, what was your review data in that job, disciplinary data. Then you go into the company data and enterprise data, and I'm talking about stuff like metadata out of the CRM system: which files did you touch? How many orders did you enter today? How many orders did you pick and pack out of the warehouse? What were your sales goals? Did you meet those sales goals? What approval authorities did you have? If you're a purchasing manager, could you purchase up to $500,000? This is all data, and here's why this bothers me. And then there's external data, like, are they scraping your social media? That's a whole other kettle of fish.

Debbie Reynolds 10:10

Well, before you go on, I have one more category of data: the other crap that hiring managers and managers put into people's files, the stuff that shouldn't be in there. Yeah, I'm sorry. Go ahead.
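Pulling together the categories Cari and Debbie just walked through, here is a minimal sketch of how a full-spectrum employee data taxonomy might be organized in code. The category names and example fields below are illustrative assumptions for this episode's discussion, not Cari Miller's published taxonomy.

```python
# Illustrative sketch of a full-spectrum employee data taxonomy.
# Category names and example fields are assumptions, not a published standard.
EMPLOYEE_DATA_TAXONOMY: dict[str, list[str]] = {
    "pii": [
        "social_security_number", "birth_date", "home_address",
    ],
    "sensitive_voluntary": [
        "payroll", "benefits_selections", "beneficiaries", "dependents",
        "attendance", "medical_leave", "disability_accommodations",
        "survey_feedback",
    ],
    "credentials": [
        "education", "third_party_training", "assessment_scores",
    ],
    "position_history": [
        "job_descriptions", "review_data", "disciplinary_actions",
    ],
    "enterprise_metadata": [
        "crm_files_touched", "orders_entered", "pick_and_pack_counts",
        "sales_goal_attainment", "approval_authority_limits",
    ],
    "external": [
        "scraped_social_media",
    ],
    "manager_notes": [
        "unstructured_file_comments",  # the "other crap" in people's files
    ],
}

if __name__ == "__main__":
    # Print a quick inventory of categories and example field counts
    for category, fields in EMPLOYEE_DATA_TAXONOMY.items():
        print(f"{category}: {len(fields)} example fields")
```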

Cari Miller 10:25

So here's my problem with this right now. The EEOC covers discrimination, and it's aimed at the employer, and it's only discrimination, right? Some of this stuff hits your well-being, your inclusion, your belonging, your autonomy, your flexibility; I mean, it just goes beyond discrimination sometimes, and the law is aimed at the employer. But my problem is ADP, Workday, these big companies have the data. And they're buying up other little companies. You know, ADP is a payroll processor, but they bought up a little reward and recognition company, and then they bought a little review company, and then they bought a little candidate assessment company, and now they have a full spectrum of data. What are they doing with that? If they hit a plateau in their revenue, and their shareholders are like, hey, let's see a little pickup here in the profit, the first thing they're going to do is turn to their best resource, which is their data. They're going to come up with some new products; they're going to take all that data, mash it together, and come up with a new product that the data was never intended to be used for. That bugs the heck out of me. There are no laws against that. I don't know what happens, and there's no rule that says you can't pick up data from three jobs ago. What does that look like? They're like the next potential Google or Facebook, but in the hiring or employment industry. It bothers me.

Debbie Reynolds 12:08

Yeah. Actually, ADP is in hot water now. Basically, ADP, because they process so many payments, may know your work history if your companies use ADP, so they know a lot about these people. What they started to do is create products, as you said, for companies, and part of that data included things like someone's race and whether they had disabilities. And employers could actually decide they didn't want to hire or interview people based on their race or their disability. So they're in hot water for that. And I think that goes directly to your point, where you amass so much data and then try to use it for purposes it wasn't intended for.

Cari Miller 13:09

Yeah. And what's interesting is, when you read through their ethical AI, responsible AI posturing, both of them, ADP and Workday, will say, we are responsible, we try to make sure that these things don't get into our algorithms. But when you use a neural network, you don't always get to know what's going on inside of that thing. So it could have picked up some piece of data somewhere that was like, oh, by the way, we didn't tell it you had a disability, but it did find out you were using assistive technology, and that was an anomaly, and that meant you weren't in the norm. Done. Right.

Debbie Reynolds 13:48

Right. So it's not even just the data itself; the information may be inferred from all the little pieces that they have, right?

Cari Miller 13:58

That's right. Yeah, that's the story. I don't know how we get the lawmakers to get their heads around all this. It takes a lot to understand all this, right? I mean, you didn't just fall off the turnip truck and understand all this three, five, ten years ago when you started, right?

Debbie Reynolds 14:17

Oh, absolutely not, right. I'm always concerned about the over-collection of data, because you can get the wrong insight, right? You can make the wrong decision based on over-collection of data and looking at it in a different way. As an example, one of the whistleblowers for Cambridge Analytica said they had a thing called the KitKat project. With the data they were able to scrape and get from people, looking at their likes and stuff, they found out that people who liked anti-Semitic messages also liked KitKat bars. So does that mean that if you like a KitKat bar, you're anti-Semitic? Using data in that way, correlation is not causation, and having people misuse data like that is problematic.

Cari Miller 15:12

Yeah, and whatever spits out of the machine, people are very much like, oh, well, that's what it said, so that's what we'll do. Nobody questions this stuff; it's like, that's what it says. Because in the early days, that's what we meant for it to do; when it said 10 plus 10 equals 20, we knew that's what it was, so why would you question it? Now they don't understand that these are probabilistic machines, not deterministic machines; it's different.

Debbie Reynolds 15:43

Well, explain that, probabilistic versus deterministic.

Cari Miller 15:47

Yeah. Deterministic is when you code the machine to do exactly what you tell it to do, and 100% of the time it fills the Coke bottle to 90% of capacity, within an acceptable tolerance, every time; that's deterministic, you've determined exactly what you want it to do. Probabilistic is a probability: it's probably in this range, it's probably right with a 95% accuracy rate, and then there's 5% that might not be like this. And sometimes it's not 95%; sometimes it's 60% or 75%. Like when we talk about speech processing, there was a Google speech tool that didn't process African American women's speech well. It was like 60%; was it 60? Do you remember?

Debbie Reynolds 16:38

I can't remember the percentage, but I remember that it was pretty low.

Cari Miller 16:41

Compared to, like, a white guy, you know. Well, that's probabilistic. It doesn't perform in a very determined way; it's, you know, just kind of variable. So that's a problem. And that's what sociotechnical means; when we say sociotechnical, that's another good word to define. These are systems that are probabilistic because they're based on society. It's social, technical calculations.
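To make Cari's deterministic-versus-probabilistic distinction concrete, here is a minimal sketch in Python. The function names, the 90% bottle fill, and the accuracy figures for the two speaker groups are made-up illustrations, not figures from the episode.

```python
import random

def deterministic_fill(capacity_ml: float) -> float:
    """Deterministic: the same input always yields the same output.
    The machine fills every bottle to exactly 90% of capacity."""
    return capacity_ml * 0.90

def probabilistic_transcribe(word: str, accuracy: float) -> str:
    """Probabilistic: the output is a best guess that is correct only
    some fraction of the time (the model's accuracy for this speaker)."""
    return word if random.random() < accuracy else "<misrecognized>"

if __name__ == "__main__":
    print(deterministic_fill(500.0))  # always 450.0, every single run

    # Illustrative accuracy gap between two speaker groups (made-up numbers)
    for group, accuracy in {"speaker_group_a": 0.95, "speaker_group_b": 0.60}.items():
        hits = sum(probabilistic_transcribe("hello", accuracy) == "hello"
                   for _ in range(10_000))
        print(group, hits / 10_000)  # roughly 0.95 vs 0.60, varies per run
```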

Debbie Reynolds 17:13

Yeah, I guess the problem is, if you're not testing on all different types of people in society, the people who are underrepresented can be harmed. Just like you were saying, that tool was probably tested on one category of people, and then they try to make it fit everybody else. It just doesn't quite work that way.

Cari Miller 17:36

Yeah, that's one of the problems. And then the other is, like you said, overcollection of data, and all of a sudden you have causation versus...

Debbie Reynolds 17:43

Yeah. Now we're in a space where people are just gaga over AI, as if it's the hot new thing, even though it's been around for a while. I think the reason people are talking about it so much, and I hadn't thought about this before, is that corporations have been using AI for quite some time, so it's not a new thing for them. But with ChatGPT and some of these tools coming out, what they did was have AI cross over to the consumer market, where average people can reach out and touch it and play with it. I think that's one reason why people are getting all crazy about it. But there has to be some type of responsibility here, some type of guardrails. We're seeing the EU take a lead on the regulation of AI, and I think that will influence other jurisdictions to look at it more closely. But tell me why we need Responsible AI, why we shouldn't just go crazy about AI like everyone else.

Cari Miller 18:47

Yeah, ChatGPT. Did you see the meme where it's the Microsoft paperclip helper, Clippy, and like in Scooby-Doo, they're pulling the mask off, and it's like, I knew ChatGPT was just the paperclip helper? Yeah, AI has been around for a really long time; we just kind of hid it inside the products. But somehow ChatGPT has gotten center stage; it just kills me. And yeah, the guardrails are so critically important, and I absolutely applaud what the EU is trying to do. They're not getting it completely right, but God love them, they're trying as hard as they possibly can, and thank goodness for that. Because the problem is systemic bias is everywhere. So even when you do what you're saying, you collect as much data as you can and make sure it's representative, that's what we call it, representative data, so we've made sure we've captured all races, ethnicities, religions, we've got everything represented, even then it can sometimes be a problem. You put it into a model, and if you've configured the model in a bad way, it picks up a proxy piece of data, like someone's school; school is a great example of a proxy data point that can throw a model off. So you have to have governance structures over top of what you're doing. And that means having a review committee that says, wait, is this an ethical choice we're making? Let me see that weighting system we're going to put in place; why are we doing that? Do we even need this data? Why did we collect shoe size if we're just asking for resume data? You have to have a governing body over top of what you're doing that is separate, you know, forest and trees. It's a good practice: you have someone separate, they ask questions, you defend, and you have a healthy debate. And so they set guardrails. Employment tech is a great example of that, and I'll give you one. Chatbots are an interesting new phenomenon, and they're used a lot in employment. Let's say we use a chatbot for understanding company practices or policies. I hate to use this phrase, but a younger demographic is very comfortable with that; in fact, they don't even want you to, don't you dare, phone call them, they're just going to chat with you. That's fine. A less technical, older demographic, or someone who's less comfortable with a computer, is going to say, if you make me get my fingers on that chatbot, I will leave this job right now. And that's a great example of governance, where governance steps in and says, you know what, we're going to give you the chatbot, and we're going to leave the 800 number in place, so you can call us if you have a question. That's what governance does. That's a guardrail, and it's pretty simple. We just put a guardrail in place so we don't lose people. That's it.

Debbie Reynolds 22:19

Yeah. Right. I think what you're explaining is that governance doesn't have to be difficult, right? You have to think through it and figure out what makes the most sense for you.

Cari Miller 22:33

Yeah, that was a very simple example; it can get very hairy and very, very tough. But yes.

Debbie Reynolds 22:41

I want to talk with you a little bit about New York, and something that they're doing there: in some instances, they're requiring companies to do audits of AI. So companies that have been accustomed to doing whatever the heck they want to do with AI systems are now going to have people auditing them. What are your thoughts about how this happens, and what's the best practice around it?

Cari Miller 23:12

Oh, New York. I mean, they did a process where they did involve people; I don't know how it got off the rails a little bit, and I don't know that it's going to have the effect that they want it to have. But you know, honestly, it's a start, and that's better than not having anything at all; you have strategic choices, you can do nothing or you can do something, so at least we're doing something. It's just a little wonkier than I would like it to be. I think it's challenging in how they did it, because they're very focused on just that four-fifths rule applied to the output. And when we look at Responsible AI, there's a lot more to it than just that output. You can alienate people on the front end of it through disability; it's not just discrimination, it can also be a well-being issue. At the end of the day, they kind of ignored some of those broader-spectrum pieces. Also, I don't think the employers in the City of New York are quite prepared, so I think we needed to do a little more education on the whats and the whys and wherefores.
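For context on the four-fifths rule Cari mentions, here is a minimal sketch of how an adverse impact ratio on a tool's output is conventionally computed. The applicant counts and group names are made up for illustration; this is not the New York audit methodology itself.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the tool advanced."""
    return selected / applicants

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag any group whose selection rate falls below 80% (four-fifths)
    of the highest group's rate, the conventional adverse impact test."""
    best = max(rates.values())
    return {group: (rate / best) < 0.8 for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes, not real data
    rates = {
        "group_a": selection_rate(selected=60, applicants=100),  # 0.60
        "group_b": selection_rate(selected=30, applicants=100),  # 0.30
    }
    print(four_fifths_flags(rates))  # group_b ratio 0.50 < 0.80 -> flagged True
```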

Debbie Reynolds 24:35

Yeah. I admire them for being able to step out and at least acknowledge that they think something needs to happen here. And I think a lot of cities and States and municipalities are looking to one another to see what works and what doesn't work, so it doesn't hurt for them to try; I totally agree with that. What is happening in the world right now that is concerning you, something you see developing where you're like, oh my goodness, I don't like this, I don't like what's happening here?

Cari Miller 25:05

Well, we've covered the spectrum of data on the employee side. I just think every day that goes by, there's more and more data being generated on employees, and there's no governor on it. I'm worried that the processors just continue to amass gigantic amounts of data on people, with no rules as to what they can do with it. In a very similar vein, I have the same concerns with ed tech, education technologies, K-12 and higher ed; I have the exact same concerns with them, and that really, really bothers me. FERPA, which is the law that says you have access to your data as a student, was done in the 70s. I mean, had the fax machine even come around yet? It's just out of date. And I tried to get data on my own kids, and it was like, well, we don't really have that, the processor has it, and the processor's like, ask the school, and I just can't get it. And I know it's tons and tons of data, and I don't know what they're doing with it. I do know that it's an investor's paradise, where they just aggregate up all these companies and create new products. I don't know how long they keep the data; there are no rules about deleting data. And COVID forced them to move online, so now it's like, what did you do for your summer vacation? And any kid that has a rough home life might say, well, we had to go down to the jail and get my dad out because he was caught for crack again; all of that's now in their system. What are they doing with that? Shouldn't that be deleted? And I know that when the laws changed with Roe v. Wade, with the systems that monitor what kids are Googling and stuff, in the States that have the most restrictive laws on the books, kids were getting flagged and in trouble with their school administrators for some of that stuff, right or wrong or indifferent. That's how that stuff plays out. Yeah, it just bothers me.

Debbie Reynolds 27:26

Well, not only that, these AI systems monitor what's happening in the background of your video. So a Coke can in the background, or whatever is happening back there, the AI is trying to describe to the system what's happening, and they may have flags put on that. There was one lady whose kid got written up by the school because the kid was doing school at home. As you know, your home is not like an office, right? People are running around doing different things. I guess this mom was running around trying to get dressed, and they sanctioned him because his mom was in the background, walking around, probably half clothed or whatever. It's her house. How can you be upset at the student for something that's going on in the background of their own house? And now there are pictures of his mom on a video somewhere, you know?

Cari Miller 28:30

Yeah, that reminds me of one of the technologies I was just writing about; this blows my mind. It's a placement agency for high-end coders, the top 1%. Their catch was, look, we don't make them do timesheets; we just automate that, so you don't have to worry about it. As a buyer, you're like, oh, that's really good, because nobody wants to do that, and it's time-wasting, so, cool. Well, the way that they captured what they should bill was they literally took pictures from the coders' webcams, like every 10 minutes, to see if they were at their desks, and if they weren't there, they didn't get paid. Well, it's knowledge work; you might have to get up, have a phone call, walk around, and think about the problem you're trying to solve. And when you look at the Glassdoor reviews, they just blow your mind; these people hated life. I mean, it's so demeaning, some of the stuff that these companies come up with. There are no rules about that, no law; just, sure, tape it. And again, like you're saying, if there was anything going on in the background, that's all captured in the picture too. Just crazy.

Debbie Reynolds 29:40

I guess I'm concerned about the future. I definitely don't want to be a Debbie Downer about the future, but I'm concerned that people somehow mythologize AI as if it can do more than it can. Some of the stuff that we're talking about, in my view, involves human judgment calls that should be made by people, as opposed to something that you would defer to AI. What are your thoughts?

Cari Miller 30:10

Yeah, that's totally right. What's really interesting to me, in the research that I've done for my doctorate, is that I looked at employment technology across the entire employee lifecycle. We always hear about the hiring part, where it's like, okay, there's discrimination in hiring, and all that stuff is talked about a lot. But I wondered what else was going on around the rest of the lifecycle, and what I found was astounding to me. As you move through the lifecycle, the tone of the AI changes, in the harm that it creates and the opportunities that it creates, because there are both. At the beginning of the lifecycle, it's mostly about discrimination. At the end of the lifecycle, in separation, it's mostly about discrimination. In the middle of the lifecycle, where you're doing monitoring, surveillance, job design, task allocation, things like that, it becomes an inclusion, belonging, and wellbeing issue. So it moves away from discrimination and becomes more of a psychological health issue. AI in the middle of the employment lifecycle turns your culture toxic if you're not watching what you're doing. And what is fascinating to me about it is exactly what you just said: what it's doing is replacing humans in doing human jobs. And there's a right way and a wrong way to do that. I'm not saying don't use this technology, I'm not saying throw the baby out with the bathwater, but that's where governance comes in. So when you want to bring the technology in, bring together a group of people who are going to have to use it, get their input, and put the right guardrails around it. Make sure there's a human reviewer who catches the output. Make sure they understand the explainability: what is that system doing? Why did it do it that way? Was there anything that we should think about? Maybe, oh yeah, that piece of data probably didn't work right. Make sure that it goes through that filter, and then use it, because it can speed things up, it can make life a little better. But you have to put, let's call them the guardrails, you have to put that stuff in there. And it's not hard; really, it's just good management, frankly, at the end of the day. So yeah, I completely agree with you. It's not Debbie Downer stuff. I mean, it is cool, and it's exciting. But do the do, man, just do the do. Don't ignore it. Don't jam it in and act like, oh, this is going to work great. Like, no. Since when does anything ever work great?

Debbie Reynolds 33:11

I feel like people for whom AI works really well may not see or understand the downfalls or the problems that AI has, so I'll give you an example. Let's say you walk to the grocery store, and it has that mat you walk on that opens the door. Okay, so it opens the door for you; you're fine. What about a person like me? I step on the mat, and it doesn't open the door for me. So you may say, well, the door's fine, it opens for me, but if it doesn't open for me, that's a problem. And that's an AI issue, I think, that we have: for some people, these tools work really well because they were probably built by people who are like them, and for other people, they don't work as well. So I'll give you an example that came up recently. We transcribe all the podcasts, and one of the AI tools that I use is very, very good on white men, so those transcripts turn out great, and for almost everybody else, it's not as good, regardless of how well they speak English; it just seems to be a pattern that we see. I know a lot of these tools are built by people who are probably like some of the people who are on the show, and it works better for them. But I think these tools need to be tested more and built by more diverse people. What are your thoughts?

Cari Miller 34:44

I think you're spot on. And this brings me to For Humanity; I'm on the board of For Humanity, which is a nonprofit organization. I think we have 81 people in 81 countries working with us at this point. We're a crowd-sourced organization; we write audit criteria, primarily for AI systems. The goal is that once countries establish their rules and policies and laws saying, hey, by the way, this AI needs to be audited in order to be allowed to be in operation, they would use our audit criteria, kind of like an accounting audit: an independent auditor would come in and say, check, check, check, check, oh, you missed it here, you've got to fix that. So we write audit criteria, and we just finished audit criteria for disability, accessibility, and inclusion, and it's for exactly the purpose you just described; it goes through all the requirements. We did this in collaboration with PEAT, which is the Partnership on Employment & Accessible Technology, funded by the Office of Disability Employment Policy at the Department of Labor. So we had a good set of people looking at this. And I would say the overarching thing that we put inside this criteria is the need for the inclusion of people with disabilities throughout the development lifecycle and the deployment lifecycle. It's just critical. Their common refrain is nothing about us without us. And in For Humanity, we don't just limit that to people with disabilities; including all people, all representations, is important, because I don't know what my grandmother would say versus what my daughter would say about certain things. You have to have their voices in there, and that's very central to the audit criteria that we have. A lot of our auditors will go in and do pre-audit work, helping companies get ready to be able to pass an audit. But yeah, you're spot on; there are some very unique things to consider with disability and AI, especially when it comes to natural language and computer vision. Those are super sensitive.

Debbie Reynolds 37:19

A friend of our family is disabled, with some type of visual or physical impairment. And because he uses certain accessibility functions on his computer, he started getting advertisements for health stuff based on that. He was horrified; he was so upset about this. And I was like, oh my God, I can't believe people are actually doing this. That's AI run amok, right?

Cari Miller 37:49

Yeah. And that's cookie-based, but imagine if he was applying for a job and they had him do one of those interviews where you talk to the screen, and there's nobody there, and they're like, here are your questions, just answer them to the screen. Those systems are evaluating your facial expressions; if you're vision impaired or hearing impaired or have speech issues, you're not going to make the cut most of the time on those types of systems. I mean, you can ask for an accommodation and not use that system, which brings up a whole other set of issues. It's really very problematic.

Debbie Reynolds 38:31

How do we stop it? I don't think we can stop the AI runaway train, but how can we make it more tolerable for people? What is your big, big advice, I guess, to companies that see the new shiny object and want to implement it? What would you tell those companies that want to do this?

Cari Miller 38:55

I know, it sucks, because I think it's a marathon, not a sprint. You've been learning for a long time; I've been learning for a long time. They need to start learning. You've got to get moving, you know, crawl, walk, run; none of us started off in a flat-out sprint, we all started by crawling. And it's like, wait, did she say drift? What is drift? I want to know what drift is. You just have to start learning; I don't know how else to put it. Because you and I just had a very robust conversation about a lot of different things, and you can't get there until you're willing to start that journey of learning. Crawl, walk, run, my concern.

Debbie Reynolds 39:38

I have lots of concerns in this area, but.

Cari Miller 39:43

You do.

Debbie Reynolds 39:44

We're still at a place where people think it's okay to use 12345 as a password, and now computing is getting very complex. Now people are going to have to learn how to talk to AI and talk to computer systems in a way that they never had to before. Google makes it extremely easy for people to search for stuff: you put a word in there, it tries to correct your spelling, and it gives you a million hits, and then you just kind of go your own way. But now, when people are using these AI systems, they have to learn how to actually ask good questions and understand how to get the right response out of these tools. So what are your thoughts here?

Cari Miller 40:29

I know; I just saw the other day that someone's title was, I hope I get this right, something like a structured question engineer or structured requests engineer; basically, someone whose whole job was to make sure she was asking the question of the search engine the right way. And it's just like, are you kidding me? A prompt engineer; that's a job now?

Debbie Reynolds 40:57

Yeah. You know, it's a very high-paying job. I've seen some up to $350,000 a year. Yeah.

Cari Miller 41:06

And see, that's a whole job. I can barely get Google to do what I want it to do sometimes, you know, and that's a whole job. So that tells you something, because, yeah, I'm closer to the 12345 end of the spectrum with passwords than I am to being a prompt engineer. So I hear you. I don't know how we cure it, but it's not by sitting around on our butts not learning, I can tell you that.

Debbie Reynolds 41:33

I agree. So if it were the world according to Cari, and we did everything you say, what would be your wish for Data Privacy or anything in the world related to data?

Cari Miller 41:45

Well, gosh, there's so much. That sounds like a trick question; you know that, right? Are we here till midnight, or what? I think some of it starts with data minimization. We've got to rein it in a little bit. Do we really need all this stuff? Come on, guys. I know when I started out, back in the late 90s, I was like, oh yeah, put the data in there, I want it. Now I'm just like, I don't know what to do with half this stuff; what am I doing here? It's just ridiculous. So data minimization would be a good start, necessity assessments and things like that. Then it really does come down to going data point by data point, in terms of privacy and deleting, you know, making rules about how long you should keep it and what it's worth over time, because it does degrade over time; it's not meaningful over time, so get rid of it. And usage: people talk a lot about security, keeping it safe, and that's a big industry, but when I talk to data security people and I ask them, do you know anything about where it's being used, they're like a deer in headlights. That whole conversation is like, what? No, we just make sure no one's getting to it. Okay, so usage. And the deletion part of it, that's got to be a bigger conversation, and we've got to get our hands around that, which is the privacy part. And again, my focus has been employment data and ed tech data. I know there's health data, housing data, a whole world of other areas I could get into, but.

Debbie Reynolds 43:33

Well, you're going to be a really busy lady with ed tech and with all your data work.

Cari Miller 43:39

I know; I've got to pick a lane, but I can't, you know; they both tug at my heart.

Debbie Reynolds 43:46

Well, now Utah and California are passing laws to raise the age of what they consider minors to 18, which is huge, from 13 to 18. So the people doing the checkbox exercise of, oh, do you promise you're over 13, now have to do more digging. And a lot of these things are trying to be opt-in as well. So I think it's just going to make the whole ed tech space, the whole child privacy space in the US, really heat up. Let's hope. I think it will. I've had some chats already with some big firms around the world, and they're concerned this is going to make a lot of work for them, right? Where before it was like, okay, you promised that you're over 13, and off you go, now it's like, oh my goodness, because there are a lot more people between 13 and 18 on the Internet doing different things. So these companies have to do more work there, and that's going to be a huge change for them.

Cari Miller 44:52

You know, it's funny you say that, because what bothers me, going back to your last question, is not having a national Data Privacy law, very similar to what you're describing with California and Utah. As each State comes out, they're like, oh, I'm going to do mine this way, I'm going to do mine that way. I don't know how companies deal with that; it seems ridiculous. You're just going to put people out of business; you can't possibly keep up with all that. All it's doing is keeping OneTrust in business, you know? I don't know how you deal with that. And it's not fair to the companies. It's also not fair to the consumers, because I'm like, what, do I move to Utah? You know, come on, Delaware, what are we doing here?

Debbie Reynolds 45:38

It absolutely is; it's terrible, right? I first started seeing this around the time when California became the first State to have a data breach notification law, and over the years, every State now has one, but they're different in every State. So now we're going to do the same thing with this privacy law, and it's getting crazy already. You'd have to print out a spreadsheet with, like, 50 columns, and they'd all be different, right?

Cari Miller 46:07

Right. That's just not helpful. You would think the national lawmakers would figure that out, but we don't know. Next topic.

Debbie Reynolds 46:22

We can only hope and pray and push forward with that, that's for sure. Well, thank you so much for being on the show. This is great.

Cari Miller 46:30

Thank you for having me.

Debbie Reynolds 46:31

Oh, yeah. Well, you're doing such unique work in this space, and I'm sure we'll hear a lot more from you and all the great work that you're doing with For Humanity. Ryan has been on the show, along with quite a few other people from For Humanity. So it's great that you guys are doing that work.

Cari Miller 46:50

Thank you. I got to talk to "The Data Diva".

Debbie Reynolds 46:55

Super wonderful. Thank you so much.
