E147 - Igor Barshteyn, Senior Manager, Information Security & Compliance Expert

42:07

SUMMARY KEYWORDS

ai, people, data, privacy, systems, model, companies, cybersecurity, neural network, trained, users, personally identifiable information, case, security, deployed, training, prompt, technology, act, debbie

SPEAKERS

Debbie Reynolds, Igor Barshteyn

Debbie Reynolds 00:00

Personal views and opinions expressed by our podcast guests are their own and are not legal advice or official statements by their organizations. Hello, my name is Debbie Reynolds; they call me "The Data Diva". This is "The Data Diva" Talks Privacy podcast, where we discuss Data Privacy issues with industry leaders around the world with information that businesses need to know now. I have a special guest on the show, all the way from Asheville, North Carolina, Igor Barshteyn. He is a security leader and architect who works on protecting data and managing information security and privacy risks. Welcome.

Igor Barshteyn 00:45

Thank you for having me on, Debbie. Glad to be here.

Debbie Reynolds 00:48

Well, this is exciting for me. We met on LinkedIn; you always have such interesting comments. And I can tell, based on the way that you comment, you respond to things; you really have a deep passion and deep knowledge around data security and data protection. But I want to get your thoughts on your trajectory in your career and why privacy has become an interest for you.

Igor Barshteyn 01:18

Sure, thanks, Debbie. Well, my trajectory is I come from an IT background. I've worked for 16 years in information technology, starting out at the service desk and working my way up through senior levels, Information Technology Management. And more recently, I've branched out into information security. And as I work more and more in information security, I started realizing that Data Privacy is really kind of the catch-all right now that's even in closing information security within itself, right? So Data Privacy is really the why of we're trying to protect information, whereas information security is more of the how. And more recently, as I've worked in both information security and Data Privacy, I've seen that you know, there are new technologies emerging, such as artificial intelligence, that are incredibly important and will affect most of our lives if not everyone's life. And I think that Data Privacy is going to be extremely important with how artificial intelligence is deployed, how it's used, and how people are protected from negative outcomes from using that technology.

Debbie Reynolds 02:26

Very good. Yeah. So as we see in the news, AI has been around for a while. People have been talking about it, but I think it's gone prime time, especially with ChatGPT out. And so the thing with ChatGPT that happened is that AI broke out of the corporate business space and moved over into the consumer space. So people can reach out and touch it and play with it and do different things. So people are excited. And they're also sounding the alarm, in some ways, around AI and what's happening, but give me your thoughts on the whirlwind of what's happening with AI right now and your security and privacy concerns there.

Igor Barshteyn 03:10

I think that you know, I'm both excited and somewhat terrified. I'm excited by the possibilities that technologies like chat GPT offer us, but I'm also somewhat terrified at the speed of, you know, the rollout of these technologies at scale; there was a data breach with ChatGPT, where some users saw other users’ chat histories. And I've seen a lot of press lately. You know, there are concerns, for example, that corporate sensitive data is leaking out as people are using ChatGPT into the model, right, because the model is continuously being trained on user inputs. As each version gets released and updated. It includes data from previous interactions with users. So that data is getting kind of eaten up by the model. And there's of course, the classic cybersecurity concerns that come with these systems as well. Because with as with any AI system that's deployed on the Internet, there's the core model, that's the neural network that powers, you know, the texts that ChatGPT creates. But then there's also all these other system components around it that, you know, serve the text to use store your chat history. And I think, more recently, that incident with open AI getting breached was more in kind of the ancillary systems around the core neural network. That said, there are also concerns with the core neural networks. There are well-published techniques that exist already, where using structured API calls, people can systemically systematically, you know, put queries to the lat large language model and pull out training data. And there are concerns that some of that training data, in particular, with large language models whose creators have not disclosed which data they've trained it on. On some of that training, data may contain no personally identifiable information. There was a scandal with ClearviewAI, as you probably remember, where it was trained on personally identifiable information, including photos of people that they had not consented to share. And I think, you know, in Open AI’s case in particular, ironically named Open AI has been very, very closed with respect to the technical details of how they train the model, what they trained it on, and you know, what the architecture of the model is. So I think there's a lot to be concerned about. And very recently, there has been an open letter by the Future of Life Institute that's been signed, which has asked for a pause in massive at-scale deployment of AI systems to the public. And I'm actually one of the signatories to that as well. I think we just need to take a pause, take a take a breather, and understand what are the implications of what we're doing? And how much data is getting ingested from users in real time, as well as how, what kind of controls can we put in around the training data that's being used to train these large models?

Debbie Reynolds 06:13

Very good. Well, I'm glad you brought up the letter. So I guess my concern with the letter is that people aren't gonna stop. Absolutely. So it's not gonna stop. So what are we going to do? Because we know it's not going to stop. So what do you think?

Igor Barshteyn 06:34

Well, I think there's a lot of regulation coming down the pipeline. In particular, there is the EU AI Act, which is currently in kind of the last stages of debate in the EU Parliament, and in the EU Commission. And that is expected to be a sweeping kind of landmark legislation that sets the standard worldwide for the regulation of AI systems, similar to how GDPR was the first of its kind legislation that really set the standard for privacy law. And you know, to which all privacy law that's been subsequently passed in the States and elsewhere is similar. I think the EU AI Act will kind of set the stage and set the tone for how AI systems are regulated. And privacy is actually a key tenant of that act. So I think, again, the final text is still being debated, you can look it up online, but regulation would be welcomed, which has some mandatory penalties and teeth for privacy violations. And another little-known fact is that in most of the current US State-based privacy laws, there are six States right now, including Iowa, which just signed it into law yesterday that have privacy laws. So of those six States, five, except for Utah, all have clauses that give people the right to opt out of automated processing of their personally identifiable information; I think it will be an interesting thing to see as more and more AI systems get deployed in business and in government and make decisions that are really impactful on people's lives, how those privacy clauses are going to affect the deployment of those systems. And what mechanisms will people have to really opt out and verify that they've opted out? I think the problem with large neural networks, in general, is that they're largely a black box right now, even to the developers. No one can really explain how they work. There's, there's some work being done in Explainable AI right now, to be able to kind of filter down all of the complex processes that are taking place inside the neural network. But once it ingests data and isn't, is trained on it, the data is inside that network in a very inscrutable way. You can't just pluck it out, you know, or delete it from that network without having to retrain the whole model, which is incredibly expensive, because companies spend millions of dollars and compute and electricity to train these models because they're very computationally intensive to train. But once the model is trained, really, the only safeguards or modifications you can take can make to it is to kind of bolt-on additional controls on the outside and filters. The model itself remains unchanged as it is very expensive to change. So you can't just say I want somebody to delete all my personal information from a neural network because that's just not technically possible. So it will be interesting to see again how the privacy law, you know, comes into play with people's right to opt-out and people's right to deletion from these neural networks and AI systems.

Debbie Reynolds 09:47

Yeah, well, there's an interesting dichotomy or something that's happening. It's very different in the EU and the US. So I think a lot of these AI systems and AI discussions will help highlight a lot of the differences we have in our privacy laws. We have a lot of gaps there, especially in the US. One of the major gaps in the US is that data scraping is legal in the US. So companies can scape scrape data off the Internet in the US. And so companies like ChatGPT are taking full advantage of the fact that that can happen. Obviously, people, if they feel like they're in intellectual property, or they're some trademark issue, they can contact, you know, Open AI, whoever about that. But that's like such that's like an ant trying to fight a gorilla at some point because the inertia of what's happening right now is so great. But tell me a little bit about that. Now, I know that you have mentioned ClearView AI. So I follow that case very closely. So ClearView AI actually got booted out of a couple of countries in Europe; if I'm not mistaken for their use, they are still being used in the US. They were penalized based on the Biometric Information Privacy Act in Illinois, and this was around the sale of their product to law enforcement and then law enforcement using that to potentially harm or target people. So I think that the issue wasn't necessarily that they were scraping data is that they were selling the stuff that they scraped, and then people were harmed as a result of that. What are your thoughts?

Igor Barshteyn 11:39

Well, I think we do have some key differences between the EU and the US in our approach. So the EU AI act is currently written and does have mandatory teeth in it. And there are penalties written in for companies failing to protect, to make their AI systems fair, safe and effective, you know, protect their users Data Privacy, failure to provide an explanation of how the AI system works to the users, right and failure to provide kind of human oversight, no and human recourse if in case a decision has been made, that impacts your life. I think in the US, the differences we have recently introduced, for example, the NIST AI risk management framework, which embodies all of the same principles. And you know, it's a great framework for companies and businesses to use to make sure the AI systems they deploy are safe. However, the key difference is that framework is entirely voluntary. Unlike in the EU, the other thing that's been deployed recently is the AI Bill of Rights and big kudos to the Biden administration for putting that together. And again, that outlines those very same principles that humans have rights when faced with AI systems, again, right to safe and effective systems, right against algorithmic discrimination, a right to Data Privacy, a right to notice an explanation,, and a right to human alternatives and fall back on case decisions are made that are, you know, need to be re-adjudicated. However, again, that AI Bill of Rights, it's a blueprint for an AI Bill of Rights to be technically clear, and it's entirely voluntary, as well. So I think, you know, generally, you know, in the US, there is a climate towards, you know, we'll rush ahead with innovation. And we can, you know, embody these principles, but we're going to count on the companies that are developing these things to be self-regulated, right? We're not going to mandate anything, you know, anything with penalties for these companies. And we do have an algorithmic Accountability Act in Congress right now. But it's stalled in Congress. And you know, right now, there's a lot of debate and agreement has not yet been reached. But that act does have some mandatory penalties and teeth to it. So far, though, as of today, we have entirely voluntary frameworks in the US.

Debbie Reynolds 14:04

Yeah, it's a foreign element. Yeah. We have such a patchwork of laws. And a lot of our laws are very consumer-facing or consumer-based, and not every human is a consumer. So if you're not consuming, you really can't take a part of some of these rights that we're talking about. But I think there's a reason why open AI is a Europe or didn't start in Europe. So we know that a lot of these data-hungry innovations, a lot of these start in the West because of our regulatory landscape. And I think that these companies can dance among the raindrops and be able to put out these innovations, even if these things end up in court which may take years. So people are cashing checks today, right and left, while they're putting out their innovations and things like that. Absolutely. What are your concerns about not having anything comprehensive in terms of a law in the US around not just AI but privacy rights in general?

Igor Barshteyn 15:10

Well, I think it's, you know, it's fraught like any efforts at industries self-regulation have been recently, you know, industries that self-regulate tend to have negative outcomes in the long term. Recent disasters in the industrial space have shown us this, you know, the recent rail derailment or Deepwater Horizon, and many more that, you know, come to mind. I think we shouldn’t, and we can't count on the industry to regulate itself because, in industry, business takes precedence, right? They're in the business of doing business. And, you know, I think, some public participation and some public input in the form of government oversight is absolutely necessary to make sure we don't go astray in the way that we develop and deploy AI. And I don't think that voluntary schemes are enough. It's great that we have these principles outlined. But we need to make sure that companies are actually complying with those principles and not just saying they're complying, you know, just for public relations purposes. To your point about court cases, there's an ongoing court case right now in which I think Microsoft and Open AI are some of the litigants in that they're fighting a lawsuit copyright lawsuit for their AI system, which helped code applications because their AI systems are producing code from open source software, that even though it's distributed under open licenses, it is still copywriting. And they're arguing that AI systems, regurgitating what may be copyrighted code is not itself subject to copyright enforcement. There's an interesting decision by the US Patent and Trademark Office along similar lines, which I think is being used to argue this is that AI-generated work cannot be copyrighted. Presently, because there's no human agency involved. So there's no human creator. And there's been some, some very loud cases in the press around that as well.

Debbie Reynolds 17:25

This is getting very interesting and very expensive to litigate, I'm sure, around these areas because I know, the problem is, in the absence of clear law, these companies are trying to go to court to get some type of case precedent. And a lot of times, the judges and juries who are listening to these cases don't really understand the technology. So the press is just all over the board. So they don't really follow a certain you just never know what these court cases will be. And so we end up with even more of a patchwork of these ARMA laws, but they also these cases that are going all different types of directions.

Igor Barshteyn 18:07

And I've read an article recently; I think that open AI and similar big AI companies are also involved in heavy lobbying efforts to make sure that the AI act in the EU is as watered down as possible, so that they can still continue doing business there. I think, as I mentioned in a recent LinkedIn post, you know, it will be telling if Open AI all of a sudden disappears from the EU landscape if the EU AI Act is passed because, you know, they've been very closed, you know, closed around what data they've been training on. And that will be kind of telling that if the UI comes into force as written, they disappear. Because that will mean that they have indeed trained our system on personally identifiable information or, you know, until it closes intellectual property and other such data, which shouldn't really be part of the training for that system.

Debbie Reynolds 19:02

Yeah, I guess that's interesting. I think Italy is trying to ban or did ban ChatGPT. I don't know how long that's going to last. People really want it. They're going to create VPNs and plug into other countries; I'm sure to be able to use these types of tools. But you know, also, I guess it when we think about AI, or privacy and data security issues. For me, there are twofold one is the input information people put into the model. And the other, as he chatted about, is the data that gets sucked into the model. A lot of times, people, and we see this a lot with data breaches, companies think their data is secure, but some very talented cyber criminals can find gaps in their system. And so having these types of technologies, they want to go out, actively sucking in data from everywhere, they're probably getting stuff that people probably thought was secure and thought was private and wasn’t. What is your thought?

Igor Barshteyn 20:03

So a couple of points on that I think cybersecurity is essential to these AI systems. And I in, You know, it's in the UAE, I act as a requirement, you know, around building and deploying these systems and also in the AI Bill of Rights, and in the risk management framework. These are all components. So, absolutely, governments and regulators understand that cybersecurity is crucial; I think the concern around ingesting private data as you're interacting with the system. I think that's valid. But I think there's been some developments very recently that give us hope, right? Whenever you're interacting with any online platform, you're really trusting the vendor, right? You're interacting with ChatGPT; you're trusting open AI to maintain its security up to the level which is required to protect your data. Recently, Meta released an AI model called Llama, which was trained exclusively on open-source materials. And it was released to a limited set of academic users for research purposes only. One of the things that have been in the news recently is that the llama model has leaked via 4chan and BitTorrent onto the Internet. And what has happened since then is there's been an explosion of open source effort to retool that model and to, you know, develop it further. And most recently, people have managed to get that very complex language model, which is large, in terms of, you know, how many parameters it has, but much smaller than ChatGPT. It's been retooled to run on systems as low power as a Raspberry Pi, people's Pixel phones, and people's home computers. So in that sense, AI is getting democratized, it's falling, it's coming away from the vendor and more into the hands of individual users. And I think that process will actually continue as open-source work continues in this direction. And when you are running an AI model on your own computer and interacting with it, you are not sending data across the Internet to the vendor, and you don't have to trust the vendor. So I think we'll see more development in that area as well, where we don't have to trust the vendor anymore because these models have really escaped into the wild.

Debbie Reynolds 22:17

That's a fascinating thought. So let's continue there. I think it's pretty interesting. So I think one of the reasons, well, first of all, I'm a tech nerd geek, I really like ChatGPT, depending on you know, my use cases for it. Obviously, I'm probably using it for things that aren't advertised. That's because I know what it can do, you know, it's really fun for me to play around with it. But I think one of the reasons why these big companies are jumping on ChatGPT and different things and focusing on search is because they think that people, once they get into these models, won't search as much. So if they can tie these AI models to search, then you can go to the Internet, go to whatever, and use those models where, for me, I think the use cases for these generative AI models are much more interesting outside of search like I don't care about search, frankly, because I know how to search I don't need AI helped me to do surfing, but also what that leads to just like you said to in my view, it's like the idea that people have their own little ChatGPT. So Igor will have his, I have mine, and I can use it for my own purpose. And that's kind of what I've always wanted, you know, I don't want to have to go out to the Internet; I love to be able to have something that helps me search my own records so that I'm not trying to remember, oh, what's the name of this file? Or where was it and different things? So tell me a little bit about that. Because I think that that could actually really work, especially as we go into Web 3.0 and people go into more decentralized computing.

Igor Barshteyn 23:56

I think that you know, when we decentralized in general, we are then keeping our personal data, right, really within our own possession. And, you know, I think in general, we've seen a movement over the last couple of decades towards centralizing everything in the cloud, right? So everything is kind of becoming a single point of failure, right? Everybody is on Amazon Web Services. Everybody is on Microsoft Azure. And when Amazon Web Services goes down in a region, many companies, just, you know, are out of business until that is repaired. And I think a lot of people's personal data as well as getting consolidated into this giant, you know, kind of agglomerations of data where it's no longer really in their own control. So I think a movement towards decentralization of computing power and decentralization of data is coming. I especially as we see more failures when the world becomes more geopolitically unstable. And there's more kind of international competition in terms of hacking and cyber warfare; these central agglomerations of services and data Make great targets for adversarial nation-States or other non-State actors. And I think that as things become a little bit more precarious in the world, we're going to see that, you know, kind of fragmentation of data and compute power back to individual users and back to small-medium businesses, kind of to avoid those negative consequences. And with that, people will regain ownership of their data, instead of having to kind of pay almost in a feudal model, right to these giant, you know, robber baron, you know, data companies and web services companies that have taken and consolidated and sucked all that data up and kind of control it now. And you have to pay them rent to get access to your own stuff.

Debbie Reynolds 25:47

A lot of that centralization that happened with those models is because these big companies, they have the technology, we didn't at that time, say 15 years ago, you know, phones weren't as powerful devices weren't as powerful. But it is no longer the case where people can do pretty sophisticated computing on their phones and on their laptops in their computer. So people don't need that socialization in the same way. And I think that's why all these big companies are jumping on this AI bandwagon because they're like, come back to the mothership and give us your data. And we're like, I don't want that; I want my own thing that I'm doing. I’ll give you an example. So, you know, many moons ago, I used to have a little app that I use on my computer to sort photos. Once most of those photos, things went away. Almost all of them went into the cloud. And so I don't want my stuff in the cloud. So I've never put photos there. But now, I've started to see applications, and they're like, okay, you install this on your computer, you can like organize your photos on your computer. Like that's what I always wanted. I never wanted my stuff in the cloud.

Igor Barshteyn 27:02

Yeah, me neither.

Debbie Reynolds 27:03

Because, yeah, I'm like, I don't; why would I want that? That doesn't make sense. Because I want to be able to have it work. I don't want to necessarily have to always be connected to the Internet. I want to be able to have things locally so that I can look at it and just from a security perspective, so for me, I feel like some of this more democratization and this decentralization, hopefully, will create more security because people are putting less stuff into a big bucket of information. What are your thoughts?

Igor Barshteyn 27:35

I think it's a double-edged sword. I think, again, I fully support the drive toward decentralization and democratization of technology and AI, and data. But I think we need to make sure that there's good security awareness amongst individual users as well; I think, right now, where we really are missing a beat is that you know, individual users don't read privacy policies as a matter of course, and they're not really security aware as well. And I think basic security awareness training and education really shouldn't be a required subject in schools today. And I think North Dakota actually recently passed a law where they've mandated basic cybersecurity education at the school level. And I think that's a great step forward, I think, with the democratization of data and technology and compute power and AI, back to the individual that needs to be kind of tempered with and strengthened by a good base of security knowledge to make sure that people can secure their systems. Because if there's a lot of little systems that are out there belonging to individual users that are still vulnerable to attack, it will take it's a lot more effort for potential hackers and adversaries to find them. But they'll still be vulnerable. So I think education is a key component here.

Debbie Reynolds 29:01

My concern is to have people who are clicking on that phishing link or have passwords that are 12345. And now we're going to an exponential complication with AI, like, how are we going to bridge that education gap where I feel like we haven't hit the mark in my view yet or just the regular things that people deal with? And then now we're bringing in a spaceship. So it's like, okay, you can barely drive a car. So now, here's a spaceship. So what do you think?

Igor Barshteyn 29:35

I think, you know, interestingly, AI could probably help us in that regard. So as AI becomes more democratized, and people are using it more on their own systems, systems that are artificially intelligent based will actually help us to secure our own systems as well. And I think this is a great threat to many cybersecurity jobs, but I think, you know, for the individual users who are relying on the privacy of their own data in their own computer, I think AI-based systems will be a great win because you have to be really technical to secure your own system properly. And you can rely on AI, which takes a lot of these technical concepts and automates them to help you in that regard. So I think these spaceships can not only be difficult to drive, as you say, but I think you can have spaceships that are flying around your main spaceship and protecting it from hackers and adversaries.

Debbie Reynolds 30:37

Very good. One thing that occurs to me, if a company were to install its own internal GPT model, perhaps cybersecurity folks can prompt in ways that they can find vulnerabilities.

Igor Barshteyn 30:54

I think prompt engineering, as it's called right now, especially adversarial prompt engineering is a rapidly growing field. So there are jobs right now that are very highly compensated for people who are very skilled at prompting these large language models to get the desired results. And out in the wild. There's also whole communities on Reddit and other places that, you know, are trying to break the safety and ethics controls of the deployed AI systems like Chad GPT. So I think prompt engineering is going to grow for the next few years, at least. And I think it would be good for companies to, you know, look for that talent, you know, from the community of these kinds of early prompt engineering hackers. If they plan to deploy an AI system and maybe insource some of that talent, bring it in-house, and make sure that whatever system you're building is well vetted against adversarial prompts or other types of adversarial inputs that can break the system.

Debbie Reynolds 31:58

Yeah, it occurred to me, as I'm sure you know this, I've had experience with companies who try to implement Enterprise Search. And a lot of times when they do that, they're like, yeah, let's start everything. And they're like, well, that I thought that was private, like, why is this in here? So it's like unraveling a sweater, you keep pulling the thread, and then all types of crazy things happen. So maybe some of these AI technologies will help people be able to see things that maybe shouldn't be in public searches within organizations.

Igor Barshteyn 32:34

Well, I think here, we fall back to tried and true principles of cybersecurity companies that are deploying things like search or, you know, chatbots, and, you know, they shouldn't just send all their data into the chatbot and train it. Data classification remains key. And I think, at a lot of businesses, you know, in my professional experience, I've seen that data classification as an afterthought for compliance purposes. But I think it's going to become more and more important to really know the sensitivity level of all of your data, whether it's structured database data or, you know, unstructured files, you know, on a million shares throughout your company, you really need to know where your data is, and what kind of sensitive information it contains, before you start, you know, ingesting it into an AI system. So I think, you know. Generally, the job prospects for cybersecurity professionals who are working on kind of a strategic level will remain good because we will need to have people that can direct the effort and make sure that they're doing it in a wise fashion. But I think, again, for cybersecurity operations professionals, like people in security operations centers, AI is going to bring a lot of perhaps unwelcome change in the next few years.

Debbie Reynolds 33:49

You strike me as a unicorn of sorts. So you have very deep skills and understanding on the cybersecurity side, and you have a deep understanding on the privacy side; I can tell you're passionate about that. And so a lot of companies, they have all these silos of people, like I only do legal or only do finance or only do cyber or stuff like that. So how can organizations bridge those gaps, and build more knowledge internally, about how cyber and privacy need to work together?

Igor Barshteyn 34:29

Sure, thank you, Debbie, for the compliment. So I've been blessed to work in small and medium enterprises, where you have to wear multiple hats and you have to be good at Data Privacy and cybersecurity. And, you know, regulation, compliance, right? I think in bigger organizations; I think the best approach as AI kind of becomes more and more commonplace is to assign kind of AI specialists to each to each particular team and silo. So somebody who is aware of, you know, kind of the safety and security risks of AI if deployed improperly. And I think that the network of specialists needs to coordinate with a central team or department or committee that oversees the deployment of these systems, right? And that committee can have on its staff, you know, information security professionals, Data, Privacy professionals, governance professionals, as well. So I think there should be kind of a cross-functional web and a network that permeates the entire organizational structure that can that is tasked with, you know, securing, and vetting and governing AI systems as they get deployed, because AI systems, you know, affect many business processes all at once, when they're deployed into business.

Debbie Reynolds 35:54

It occurs to me I was just talking to a friend of mine in the UK, and we were talking about how enterprises need to change as a result of all the technological changes. And also, AI raises privacy risks because now you're doing all types of things with data that you probably weren't before. But one of the things that you said to me, which I want your thoughts on, is that companies need to change in terms of how they train people. So in order to have people be up to the task and being able to deal with these systems, there has to be constant and iterative training of them so that they are learning as they're putting on these new capabilities, where I feel like some companies in the past have been like, okay, go out to the market, let's hire John, John knows the skill, he comes into the company, and then he never gets trained or anything else. It's like, just do this one thing that you know you do. And that's where I feel like companies aren't gonna thrive in the future if they can't find a way to constantly be training and upskilling their teams so that they can do this cross-functional work that they can grow with the tools.

Igor Barshteyn 37:15

I think you make a great point, Debbie; I think we can borrow the model that companies who do cybersecurity right are already using. They have an induction security awareness training. And they have mandated annual or sometimes even more often cybersecurity training. That's for all of their staff. There's awareness training for kind of the broad, nontechnical group of staff. And then there is technical cybersecurity training that's a little bit more focused for the people that directly interact with IT systems. I think, you know if we borrow that model, and we extend it to Data Privacy, I think there should be induction level like, you know when you're joining a company, Data Privacy awareness training, I think that's just as important if not more so, especially in recent days in cybersecurity. And I think there should also be annual repeat training for all the teams across the company for awareness. And with teams that work with datasets, and with IT systems and with AI systems specifically, there should be kind of expanded, more in-depth, technical Data Privacy training as well.

Debbie Reynolds 38:25

I agree with that wholeheartedly. So if it were the world, according to Igor, what would your wish be for privacy? Whether it be regulation, technology, or human behavior? What are your thoughts?

Igor Barshteyn 38:46

If I could have one wish, that would be to ask everybody to read the End User License Agreements and the privacy policies. I think, right now, the way that they're written, it's, they're written in such a way as to make them difficult to read. And a lot of the social and economic and other problems we're seeing in relation to social media, in relation to mis and disinformation. We're seeing a lot of this because people fundamentally aren't aware of what they're giving up when they're using a lot of these online or even offline services and hardware. Right? People need to really understand what rights they're signing away to their data. When they choose to use a particular product or service. People need to do a kind of internal risk management process when they're buying a Wi-Fi-connected refrigerator. They need to think about, first of all, why is it Wi-Fi connected. Do I really need a Wi-Fi-connected refrigerator? And I think there needs to be some critical thinking as well about what is the purpose somebody would make a Wi-Fi refrigerator. Is it really to help me, or is it to establish yet another data collection point in my house? So I think, you know, that critical attitude and approach to technology that somebody chooses to introduce in their life is key; we're really, you know, drawn by convenience and, you know, shiny new technologies. But, you know, in my house, for example, the only Internet-connected devices are my computer and my phone. And they're locked down, as, you know, as possible. So I think that attitude, if I had one wish, you know, amongst the general public, would really, really help resolve a lot of the problems we're seeing in cybersecurity, Data Privacy, and all the social upheaval we're seeing around these new technologies today.

Debbie Reynolds 40:44

That's wise advice. I completely agree. Be curious, don't give up your privacy for a temporary short-term thing, right? So yeah, don't give up your thumbprint or your iris scan for a $10 coupon or something like that.

Igor Barshteyn 41:02

Consider if you really want to have an Internet connected lock on your house or a doorbell camera, and what kind of risks that introduces, I think, generally a healthy level of, you know, background paranoia when it comes to technology, which serves a lot of people.

Debbie Reynolds 41:16

Well I agree wholeheartedly. Well, thank you so much for being on the show. This is such a pleasure, so smart. So I always enjoy our chat, and our commentary that we do on LinkedIn; you're such a wise person. So I recommend everyone definitely connect. Follow, Igor. He's doing some really smart things, and you're putting out a lot of good resources for everyone as well.

Igor Barshteyn 41:43

Thank you, Debbie. I appreciate it very much. Thank you for having me on today.

Debbie Reynolds 41:46

You're welcome. Talk to you soon. All right.

Previous
Previous

E148 - Isabella De Michelis, CEO and Founder, ErnieApp, Your Privacy Knowledge Manager

Next
Next

E146 - Cari Miller, Sr. Principal, Practice Lead, Responsible AI Governance & Research The Center for Inclusive Change