E280 - Federica Fornaciari, Full Professor and Academic Program Director for the MA in Strategic Communications at National University
[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.
[00:11] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast, where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.
[00:24] Now, I have a very special guest on the show, Federica Fornaciari.
[00:29] She is a Full Professor and the Academic Program Director for the Master's in Strategic Communications at National University.
[00:38] Welcome.
[00:39] Federica Fornaciari: Thank you for having me, Debbie.
[00:41] Debbie Reynolds: Yeah, that was a mouthful.
[00:46] Federica Fornaciari: Right?
[00:47] Debbie Reynolds: Well, we have had the pleasure of meeting and talking, and I thought the work that you do is incredibly important,
[00:56] very specific in terms of strategic communications
[01:01] and how people think about privacy. And I thought you'd be a fascinating guest for the show.
[01:07] But tell me about your journey. First of all, communication people fascinate me, because I think people undervalue the importance of communication and the different ways that people communicate and understand things.
[01:22] That just fascinates me about you. But tell me a bit about your background.
[01:26] Federica Fornaciari: Thank you. Yeah, I agree, communication is often something that we take for granted. We think of it as a baseline.
[01:34] But I appreciate that.
[01:37] I've always been fascinated by communication as well. How I arrived at thinking about privacy and AI and ethics really started with media.
[01:51] So I grew up in Italy, and I was observing there how media narratives shape public understanding of social events, of political issues.
[02:00] And I was fascinated by how communication creates narratives that frame issues and events and realities, prioritizes certain values, and has that power to guide public perception. And wherever there is power, it can be used responsibly or it can be misused.
[02:21] Right. So that's really how I became fascinated by communication. And I started digging in. And when I moved to the United States, I got my master's in journalism and mass communication, and then I decided to keep going, and I got my PhD in communication with a concentration in electronic security and privacy at the University of Illinois at Chicago.
[02:42] And my concentration was funded by a National Science Foundation grant, the Integrative Graduate Education and Research Traineeship program. Through that, I started digging into how the cultural understanding of privacy is shaped and reshaped through media narratives, how we can shape and reshape public perception based on the frames we have for thinking about privacy, and how that emerges in connection to newer technologies.
[03:15] In my dissertation, I looked at how privacy has been shaped in media narratives in the United States versus Europe through decades of technological innovation.
[03:29] Right after my PhD, I started expanding my focus, you know, towards media representation,
[03:37] media trust.
[03:39] And now I'm looking at ethics,
[03:43] artificial intelligence, obviously, and how these newer tools keep reshaping how we understand privacy and what we value most. I've been at National University for almost 10 years now.
[03:56] My focus now is on responsible AI integration in the curriculum. As I help students navigate the challenges of AI, I help them develop literacy: understanding the implications of using these systems, generative AI systems, et cetera.
[04:15] So not just how to use them properly, but also understanding the implications when it comes to privacy, when it comes to data collection, et cetera.
[04:24] I'm also part of the AI Strategy group at my institution, so all kinds of directions there.
[04:30] Debbie Reynolds: But that's a lot.
[04:33] Something you said piqued my interest, and I'm sure this is a really big topic:
[04:40] being in the US now and being born in Europe, you've been immersed in both cultures, and I like to talk to people who understand both.
[04:53] And so the topic you raised about media narratives in Europe versus the US and privacy, I would love your thoughts on that, because I'm just fascinated by it.
[05:03] Federica Fornaciari: Oh, yeah,
[05:05] it is a huge topic.
[05:08] There are so many ramifications there. But the main difference,
[05:14] which is really fascinating, is the focus on privacy as a fundamental right, which happens primarily in Europe,
[05:25] versus privacy as a commodity,
[05:27] which has kept surfacing in the United States. And it has evolved, because in my dissertation I looked at media narratives going back more than a century.
[05:39] Debbie Reynolds: Right.
[05:40] Federica Fornaciari: So if we go back to 1890, when Warren and Brandeis published their landmark article on the right to privacy in the Harvard Law Review,
[05:50] the focus there was that privacy is a fundamental right: the right to be let alone, et cetera, tied to dignity, tied to autonomy.
[06:00] But as the decades went by, privacy has become something more transactional, especially in America, in US media narratives.
[06:12] So it's not dignity or autonomy anymore. It's increasingly becoming a commodity, something that can be traded for convenience, for access.
[06:25] So as you can imagine, and we can see it in the regulatory systems when comparing the United States and Europe,
[06:33] when privacy becomes a transactional value,
[06:37] policy shifts. So instead of asking,
[06:40] how do we protect these rights,
[06:43] we ask, what is fair compensation for the use of data? This in a way explains why Europe has the GDPR,
[06:52] why Europe has the AI Act and much more robust protections,
[06:59] while in the US we really have a patchwork of laws and regulations here and there, mostly in the hands of the states, because there is this frame that emphasizes individual responsibility.
[07:12] Here, privacy is really a personal responsibility. The focus in media narratives is more on data breaches and maybe on corporate misuse, and we're talking about individual consumer rights.
[07:27] Privacy in these narratives can be managed through settings, through consent,
[07:34] and it has to do with market choices.
[07:37] If we look at Europe, we're talking more about a fundamental human right.
[07:43] So we're talking about something that's tied to dignity, that's tied to democratic protection.
[07:48] And so the media narratives are more likely to talk about legal protection as an institutional responsibility.
[07:57] Debbie Reynolds: Right.
[07:58] Federica Fornaciari: So in Europe we are shedding light more on civil liberties and power. And that's why there is a different regulatory framework:
[08:09] because the cultural values that are emphasized are different.
[08:15] Debbie Reynolds: Right.
[08:16] And I agree with that. Then there's the gap that I see, and I want your thoughts on the difference between thinking about privacy as a fundamental right as opposed to a consumer right:
[08:31] if you're not consuming, you don't have the same rights.
[08:35] So there's a gap there, because a fundamental right covers any human being,
[08:42] whereas with a consumer right, if you're not the purchaser of the product,
[08:47] then you don't really have the agency to fight for or protect your privacy. So the example I give is,
[08:57] let's say someone is in a household
[08:59] where the mom bought a smart speaker, but there are other people in the house whose voices are captured by the smart speaker.
[09:07] If the other people in the house are upset about what's being captured there, they can't really do anything, because the company says, well, you didn't purchase the product, so we don't really have an obligation to you, even though your data is being collected.
[09:21] But what are your thoughts?
[09:23] Federica Fornaciari: Yeah, absolutely. That's a great way to look at it, because the European perspective centers on citizen protection: you're protected as a human being.
[09:34] Debbie Reynolds: Right.
[09:35] Federica Fornaciari: Whereas the focus on economic competitiveness and the focus on the market,
[09:40] on the economy, the monetary, and profit,
[09:45] which is more likely to be the approach in the United States, does reduce the individual to a consumer. It puts them in a box, and privacy becomes something that can be measured and traded and used.
[10:08] It is almost understood as equal to data, like a combination of data points that becomes a person.
[10:16] Debbie Reynolds: Right.
[10:16] Federica Fornaciari: But it's not really the holistic understanding of human beings. It's really just putting people in their market box, in a way.
[10:28] Debbie Reynolds: I agree with that completely. I want to talk a little bit about artificial intelligence, and let's talk about it in terms of literacy as a form of privacy protection.
[10:40] This is a fascinating concept to me.
[10:42] Federica Fornaciari: You know, that's another thing that fascinates me as well, and worries me as well, because when we look at large language models, when we look at different generative AI tools and AI tools in general, the focus is often on efficiency, on how good and how elaborate the product we can get from these tools is.
[11:07] And there's not much conversation about the implications of all the data that we are sharing there. We're focusing on that fancy interface and the output that we get from it.
[11:19] And even here, to talk about media narratives again, the narrative is more about which is the best system, which system provides the best product, the best output.
[11:31] There's not much conversation around what those companies do with our data and the power that they have, based on the very detailed profiling that they can create.
[11:44] So this is, I believe, one of my missions at National University, especially with my students,
[11:52] trying to help them understand the implications of the data they are sharing.
[11:57] So when they are looking at different AI tools,
[12:04] they shouldn't just think in terms of the output they want to get,
[12:09] but think through what they're inputting into those systems.
[12:09] We turn to AI for financial advice, we turn to AI for parenting advice, we turn to AI for all kinds of questions that provide these systems with very detailed information about our individuality.
[12:30] Debbie Reynolds: Right.
[12:31] Federica Fornaciari: So we need to understand what are the implications and ramifications of the data that we're putting there.
[12:38] Debbie Reynolds: Right.
[12:38] Federica Fornaciari: Because those systems are often not very transparent. So we don't know exactly what happens to our data.
[12:44] We don't know how the data is collected, or how much data is collected.
[12:48] We are exposed to so many tools that perhaps you and I will read the user agreement, but most people won't have the time and cognitive space to process all this information,
[13:03] you know, and it's not a fair trade either. Right.
[13:07] We're focusing on taking advantage of the tool without realizing that there is a company behind the tool, and that company will have a lot of leverage over how they're going to use the data.
[13:20] And oftentimes, especially in a climate of, I'd say, not complete deregulation but very low regulation, they can decide what to do with that data.
[13:31] And we're not even getting into data storage and the implications that data breaches can have there.
[13:38] Debbie Reynolds: Right.
[13:39] Federica Fornaciari: So that's a whole different can of worms, too.
[13:42] Debbie Reynolds: Yeah, it is. And as we see the battle between
[13:46] different nations in the race for supremacy in AI,
[13:52] I'm seeing people put language in contracts saying, we comply with all applicable laws. And it's like, well, what laws?
[14:02] Federica Fornaciari: Right, exactly.
[14:03] Debbie Reynolds: There really aren't any laws yet around AI use. So I think that's another reason,
[14:11] and I want your thoughts here, why privacy becomes very important,
[14:15] because as you say,
[14:17] people are sharing more of their personal information and they may not understand the ramifications of that.
[14:24] But without regulation around artificial intelligence,
[14:30] a lot of this does fall back on privacy and data protection. Because it's like, okay, now that we've given you this data, what is your stewardship? What is your responsibility?
[14:43] And part of that, unfortunately, in the US is that
[14:46] you can give your privacy away and not have adequate redress. So to me, laws are more reactive as opposed to proactive.
[14:58] And so I think we need more proactive approaches,
[15:02] whether that be how a company handles data. And then also there is a responsibility for a person, like you say, to be literate about what can happen to their data,
[15:13] good or bad,
[15:14] when they put their information into those systems.
[15:18] Federica Fornaciari: Yeah, absolutely. It's very layered. And I really like how one of your podcast guests talked about,
[15:27] I think, the debate between innovation and privacy. There is also this frame that if you want innovation, you've got to give your privacy away,
[15:38] which is not necessarily true. Because the analogy I believe your guest used was,
[15:44] nobody would drive a very fast car if it didn't have very reliable brakes. So having guardrails in place doesn't necessarily go against innovation. They can go hand in hand.
[15:56] And so, focusing on regulation:
[16:02] having responsive regulation would certainly be important.
[16:07] But as you said, regulations and laws are often responsive rather than proactive. So we also need very robust ethical systems, an ethical compass we can rely on, to ensure that certain values are embedded in the algorithms and that companies are not going to take advantage of what they can,
[16:31] of the deregulatory environment, of the regulation that is currently in place.
[16:37] But obviously we cannot just put our lives in the hands of corporations, thinking that they're going to do good for us. So self-protection and
[16:47] literacy are key components there, because it's a very delicate process and we're putting a lot of power in the hands of those corporations. And, you know, when power and money are involved,
[17:01] that's very problematic.
[17:04] Debbie Reynolds: I want to talk a little bit about deepfakes and their implications for trust and privacy harm. Deepfakes have concerned me for a very long time,
[17:15] even before people got super excited about artificial intelligence, because even when people weren't paying attention, the technology was rapidly getting better and better.
[17:27] And we see that especially because so much money is being poured into these AI technologies.
[17:34] The sophistication of these technologies means they can produce things like deepfakes at scale and do it cheaply.
[17:44] I remember back in the day, a long, long time ago, there was a woman saying that her husband was harassing her by creating videos that looked like her.
[17:53] I guess he had some type of connection to the entertainment industry, and they've used that type of technology in the movies for a long time. But it was technology that was hard to get.
[18:07] It was hard to find people who could actually do it and to transmit that data in an effective way. So now those barriers have fallen almost completely away.
[18:19] Now it's fast to do, easy to do, cheap to do. But that creates privacy harm and trust issues,
[18:25] people mistrusting or misusing this data,
[18:29] this data manipulation. But I want your thoughts there.
[18:32] Federica Fornaciari: Yeah, that's one of my biggest concerns as well, because as you said,
[18:36] the technology is very cheap and widely available,
[18:40] or really cheap only if we don't count the cost in terms of privacy,
[18:45] all of the data that we're giving away. And everyone has access to these tools, so they are very powerful tools.
[18:54] And we can say that technology is not good or bad in itself; it's the uses people make of it that are good or bad.
[19:03] The same technology can be used for doing something fantastic,
[19:10] I don't know, for a grassroots campaign,
[19:12] for instance, or for nonprofit organizations that,
[19:17] with very little funding, can create fantastic campaigns. So we can use this technology for good and create synthetic media for positive outcomes. But if those technologies are in the hands of the wrong person,
[19:35] they can be very powerful tools for misinformation,
[19:40] for deception, for damaging reputations,
[19:44] for taking advantage of more vulnerable populations who may not understand what these technologies can do. So deepfakes are certainly an incredibly concerning possibility now.
[19:59] And we can certainly develop tools that can detect deepfakes.
[20:05] Sure.
[20:06] Again, the problem here is that tools that detect deepfakes are responsive rather than proactive, so they are always lagging a little behind the technology that allows the creation of deepfakes.
[20:23] We can certainly develop literacy to be able to detect what might be a deepfake, whether an image has been manipulated, whether a video seems real or has been manipulated.
[20:38] But the problem there is not just whether we're going to be able to detect those deepfakes, right?
[20:45] Because sure, if we invest in education,
[20:48] we're probably going to be able to develop enough literacy for that. The problem there is really the erosion of trust.
[20:55] Because if every piece of information that we look at, every image, every video that we look at,
[21:02] we have to investigate whether it's true or not, then we are likely to lose track of what truth really is and perhaps not even care anymore. And so if we lose trust in our institutions, if we lose trust in the media, if we lose trust in the possibility of information that is real and reliable,
[21:28] then one of the most important pillars of democracy is eroding as well. So that's really the bigger picture there that we need to think about.
[21:38] Debbie Reynolds: That's so true. Very true. And I guess one of the things that I'm concerned about is that a lot of companies are trying to use things like biometrics for authentication of people.
[21:51] And those things can be spoofed as well.
[21:56] So you're using something that you think is a higher technology,
[21:59] but if it is incorrect, it can create tremendous harm to an individual, and they have very little redress.
[22:09] I'd like to talk to you a little bit about education.
[22:11] Ed tech, educational technology and AI integration, interests me a lot,
[22:18] because as we all know, or we should know,
[22:21] children are a special category of humans and they need more protection. But I'm concerned about the fast and loose way that we're rapidly trying to integrate artificial intelligence into tools that handle children's data.
[22:38] And again, to me, with a lot of AI we need to be thinking about ethics, we need to be thinking about bias, and we need to be thinking about prevention as opposed to cure.
[22:53] Because some of these things, in my view,
[22:56] may lead to situations,
[22:59] especially with children,
[23:00] for which there is no adequate redress.
[23:03] Right.
[23:04] So thinking about that proactively, I think is important. But what are your thoughts?
[23:08] Federica Fornaciari: Yeah, absolutely.
[23:10] It's very concerning, especially for children and other vulnerable populations. The elderly and the younger generations are often the ones most exposed to harm and danger there. Again, I believe in emphasizing the risks and ensuring that whoever uses the technology understands all the implications.
[23:35] When we talk about large language models, we need to talk about the bias that can end up in the output, and the fact that we call it artificial intelligence. That's a framing in itself, because using the word intelligence,
[23:52] most people will assume the system is really intelligent.
[23:58] Though we should really call it a fancy mathematical formula, because that's really what it is. Large language models and AI tools don't understand the output that they give out.
[24:13] They just string words together based on probability,
[24:18] drawing on the training data they've used and using an algorithm to create the output. So if there was bias in the data the system was trained on, it is likely to carry through to the output.
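To make "stringing words together based on probability" concrete, here is a minimal, hypothetical sketch of next-token sampling. The candidate words and weights below are invented for illustration; real models learn probabilities over enormous vocabularies rather than using a hand-written table.

```python
import random

# Toy next-token distribution after a prompt such as "The surgeon said...".
# These candidate words and probabilities are invented for illustration.
next_token_probs = {"he": 0.6, "she": 0.3, "they": 0.1}

def sample_next(probs):
    """Weighted random choice over candidate next tokens."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# If the training data skewed toward one continuation, sampling
# reproduces (and over many generations can amplify) that skew.
print(sample_next(next_token_probs))
```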
[24:33] And often in large databases there are different biases. There is bias in society,
[24:40] and so a system trained on a specific database is likely to amplify the bias that's already present in that database,
[24:50] thereby creating more bias in the output. So if someone who doesn't know much about large language models goes in there and asks ChatGPT a question,
[25:03] these models are trained to mimic human interaction very well.
[25:08] Debbie Reynolds: Right?
[25:09] Federica Fornaciari: So we may almost think,
[25:12] if we don't understand how these systems work,
[25:15] that behind the system there is someone
[25:20] putting the information together for us. And so there is a level of trust there that perhaps shouldn't be there.
[25:29] So it's important that users understand how the systems work,
[25:35] how they can perpetuate bias, and how the information is put together,
[25:44] besides thinking about the risks that we run when we put a lot of information in at the input phase.
[25:54] So one way to go about it is obviously educating users.
[26:00] But when we're talking about very vulnerable populations,
[26:03] yes, we are educating children, but we should also expect those systems to have values embedded in the algorithm and in the dataset that is used for training.
[26:16] So making sure that creators and programmers embed values into those systems is also important,
[26:29] especially when systems are going to be used by more vulnerable populations, like children, like the elderly,
[26:36] like people who have mental health issues, et cetera.
[26:41] you know, and if we think about the ethical frameworks, we have to think about respecting human rights, so protecting privacy and personal data.
[26:50] We have to think about avoiding discrimination and unfair treatment.
[26:56] We have to think about respecting civil liberties.
[26:59] We have to think about monitoring algorithms for discrimination.
[27:05] We have to think about transparency and accountability,
[27:09] you know, disclosing AI use, providing explanations for how certain decisions were made in the system and talking about safety and reliability,
[27:21] making sure that the systems are carefully tested, making sure that there's protection from manipulation and hacking.
[27:28] There are so many layers here, so many implications that we need to take into consideration, especially when we are in a very low-regulation environment.
[27:41] Debbie Reynolds: I want your thoughts about inference.
[27:43] So inference concerns me a great deal.
[27:47] I guess people don't talk about it as much as they should, in my view.
[27:51] And I think a lot of that goes toward more of a media,
[27:57] a marketing narrative. So I'll give an example.
[28:01] Let's say an algorithm tells a company that sells shoes that people in Chicago like blue suede shoes. So maybe they say, well, we're going to ship more blue suede shoes to Chicago,
[28:14] because that's what we think people like. And that doesn't harm anybody, right?
[28:20] They made an inference, and then they took action on it, right?
[28:24] Now if you take that same thinking into the medical field, and you have a symptom, a doctor might say, well,
[28:35] we think the most likely thing is that you have X, whatever X ailment is,
[28:41] because they were trying to go to the most probable thing.
[28:46] What they did is they missed something, and they were treating you for something that you didn't have.
[28:52] And now you could be in a worse medical state, because they were just looking at the most probable thing they thought it was, as opposed to what it actually was.
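As a toy, back-of-the-envelope illustration of why acting only on the most probable inference is dangerous, consider the following sketch; the numbers are invented for illustration and are not medical statistics.

```python
# Invented prior probabilities for the cause of a symptom.
priors = {"common benign cause": 0.95, "serious illness": 0.05}

# A decision rule that always picks the single most likely explanation
# never selects the rare one, no matter how many patients it sees.
diagnosis = max(priors, key=priors.get)
print(diagnosis)  # -> "common benign cause", every single time

# Across 1,000 patients with this symptom, roughly 50 who actually
# have the serious illness would go unflagged by this rule.
print(1000 * priors["serious illness"])  # -> 50.0
```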
[29:05] And just to give you an example, this is something that happened to a family member of mine. Unfortunately, she was a 16-year-old who said she had
[29:16] stomach aches that wouldn't go away. And they thought, well, at her age it couldn't be anything serious.
[29:23] And they ignored
[29:25] the situation until it was too late. They found out she had cancer, and she actually died of it. So this is the problem that I have with inference and with using and trusting these tools too much,
[29:39] especially when people don't fit into whatever that probabilistic thinking predicts. What are your thoughts?
[29:51] Federica Fornaciari: Yeah, that's a very complex situation. Absolutely. Using algorithms to make inferences is problematic in every domain, for sure:
[30:06] behavioral advertising,
[30:09] targeted advertising,
[30:11] but even more so when we are talking about medical issues. There is a very dangerous line that we are crossing, because it should be,
[30:27] especially in the medical system, the doctors who really take a holistic view of the symptoms and all the information available, and who look at the data.
[30:43] I mean,
[30:45] relying on a technological tool
[30:48] for inferences, without spending the time,
[30:57] as a human, to look at all the input and all the information available, is really like allowing an intern to make the final decision on treatment.
[31:17] AI tools should really be treated as an intern: putting together data, looking at symptoms, looking at all the possibilities in play.
[31:30] But then it should always, always be the most experienced medical doctor there,
[31:37] looking at the final output.
[31:40] But the problem there is, you know, we're always putting efficiency and profit over human life.
[31:49] And so if inference and AI tools allow a doctor to treat 20 patients in the time they would have been able
[31:59] to treat two or three,
[32:01] then you can see that the possibility for profit becomes so predominant that it almost overcomes
[32:13] what the medical system should do, which is putting life first.
[32:18] Debbie Reynolds: Right.
[32:19] Federica Fornaciari: And I'm not pointing fingers at doctors; it's the whole system that is trying to squeeze doctors to provide more output than they are equipped to give. Because we all have 24 hours in our day.
[32:37] So it's really a systemic problem, not just an individual problem, either.
[32:42] Debbie Reynolds: Totally, I agree.
[32:44] I know a lot of times we say the term human in the loop, but I feel like in the loop sounds very passive.
[32:52] So I always say human in the lead;
[32:55] that's what we need, right?
[32:58] Federica Fornaciari: Yeah,
[32:59] yeah. And again, you know, it goes back to thinking about the values that we embed there. Like, what is the most important thing there? Like, are we talking about customers or are we talking about human beings?
[33:13] Right.
[33:14] Debbie Reynolds: It's so true. Very true.
[33:16] Well, if it were the world according to you, and we did everything you said, what would be your wish for privacy anywhere in the world, whether that be human behavior,
[33:27] regulation or technology.
[33:30] Federica Fornaciari: You know, I think that it should be a combination of all of the above,
[33:37] but because laws are often lagging behind, perhaps it should be a technology-first step.
[33:48] So technology should have embedded values, an ethical compass built into it.
[34:00] That would be the first component,
[34:03] and then definitely a human component of working on literacy. But we cannot expect everybody to be able to catch up. So technology should be the main piece,
[34:16] and then the human side and the laws should work around it as well.
[34:21] But I think the model that Europe is putting together, with the GDPR and with AI governance,
[34:30] is something we should draw some inspiration from. The whole world should.
[34:35] Debbie Reynolds: I agree with that completely. Oh, my gosh. Well, thank you so much. This has been a great conversation, and I applaud you for your work on this, because communication is so key right now in terms of getting the message out and understanding how messages are formed and how people take action based on the information that they receive.
[34:56] So this is like a vital piece of work that you're doing. So thank you for that.
[35:01] Federica Fornaciari: Thank you for having me, Debbie. It's been a pleasure.
[35:04] Debbie Reynolds: Excellent. Excellent. Well, we'll talk soon for sure. I look forward to us being able to collaborate in the future.
[35:11] Federica Fornaciari: Yeah, sounds good. Absolutely. Thank you. Thank you. Have a great day.