E262 - Nicola Fabiano, Lawyer | Data Protection-Data Governance-Cybersecurity Advisor | Author, (Italy)

[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.

[00:12] Hello, my name is Debbie Reynolds. They call me the Data Diva.

[00:16] This is the Data Diva Talks Privacy podcast, where we discuss data privacy issues with industry leaders around the world, with information that businesses need to know. Now I have a very special guest all the way from Italy, Nicola Fabiano.

[00:32] He is a lawyer,

[00:33] data protection, data governance and cybersecurity advisor.

[00:38] And he is also the author of the book Artificial Intelligence, Neural Networks and Privacy: Striking a Balance between Innovation, Knowledge and Ethics in the Digital Age. Welcome.

[00:51] Nicola Fabiano: Oh, hi, Debbie. Thank you very much for the invitation. How are you doing? I'm well, I'm well.

[00:58] Debbie Reynolds: Well,

[00:59] as you say, it's hot in Italy, it's hot in Chicago in the U.S. right? Yeah.

[01:04] Happy to have you on the show. I think we've been chatting about it for a while, so I'm glad we were able to connect and get this together.

[01:13] Tell me a bit about your background. I find what you do and the things you say very interesting. And so that's one of the reasons why I reached out to you.

[01:23] And so just give me an idea of, like, how you got into this field and tell me what made you want to write this book.

[01:33] Nicola Fabiano: Of course,

[01:34] I am an Italian lawyer; I am entitled to represent clients before the high courts.

[01:41] And I am also an advisor on data protection, privacy, cybersecurity, and artificial intelligence.

[01:53] My journey into privacy and cybersecurity began quite organically. I mean, through my legal education and early recognition of the transformative impact technology would have on fundamental rights.

[02:12] After earning my law degree magna cum laude from the University of Bari, I don't want to say when, but several years ago, I completed my postgraduate specialization in civil law at the University of Camerino.

[02:32] And so I found myself increasingly drawn to the intersection of law and emerging technologies.

[02:43] But I have always had a passion for technology and IT in general.

[02:50] Before graduating, my father gave me an IBM XT 286 personal computer and I started playing, and I underline "playing", with it, learning more about the technical aspects.

[03:09] Then the pivotal moment came in the early 2000s, when I began writing scientific articles on electronic signatures and worked as an external expert and consultant for a national consumer association on European projects focused on Internet safety for minors, until 2007 or 2008.

[03:42] That was the period.

[03:44] This experience exposed me to the complex challenges of protecting vulnerable populations online while striking a balance between innovation and fundamental rights.

[04:01] It was during this period that I realized traditional legal frameworks were struggling to keep pace with technological advancement.

[04:14] Then my formal entry into cybersecurity occurred in 2012, when I completed Auditor/Lead Auditor training for ISO/IEC 27001.

[04:32] This technical grounding combined with my legal expertise positioned me uniquely to bridge the gap between legal compliance and practical security implementation.

[04:46] The European GDPR preparation phase further accelerated my specialization. I was honored to win a place on the Council of Europe's course on data protection and privacy rights in 2017,

[05:02] which solidified my commitment to this field.

[05:06] Okay, then. Some time later I was appointed by the government of the Republic of San Marino. San Marino is a very small republic inside Italy; basically, it is a mountain. At the top of the mountain there is an ancient city, and around the mountain there are several buildings. That is the Republic of San Marino, which is famous for its independence.

[05:37] So the government of the Republic appointed me to draft a GDPR-oriented bill on personal data protection.

[05:49] I prepared the bill and it was immediately approved by the Parliament of the Republic, and I was then appointed President of the first Data Protection Authority of the Republic of San Marino; I was its first president.

[06:08] A position that I held until November 2021.

[06:14] What truly distinguished my approach was the development of the Data Protection and Privacy Relationships Model, DAPPREMO, which I created in 2020.

[06:29] It is contained in another book of mine, which I wrote and published in 2020. The last chapter of that book is dedicated to DAPPREMO, which, I repeat, is the acronym of the Data Protection and Privacy Relationships Model. DAPPREMO is a framework I created to systematically address the complexity of modern privacy challenges.

[06:59] It represents a relational model whose architecture bears striking similarities to neural network structures, grounded in higher mathematics, particularly set theory and fiber bundles.

[07:19] So this is what I did. DAPPREMO was a landmark for me and represented a very important point in my studies. And I have continued working on it until today, because I developed an artificial intelligence system, the evolution of DAPPREMO, which allows us to find the ignored or unknown points in all the cases we face related to data protection or privacy. Usually we are ready to go deep into a case, but we cannot see some relevant points around it. This model allows people to discover those points. I find this model very interesting, and so I have continued working on it until now.

[08:45] My engagement with the neural networks community was formalized when I became a member of the International Neural Network Society (INNS), which is the oldest association on artificial intelligence in the world,

[09:02] and recently participated in the International Joint Conference on Neural Networks in Rome,

[09:09] which was an extraordinary opportunity to explore the state of the art in neural network research and engage with the international community of scholars and practitioners and scientists.

[09:25] So, in short, that is my background.

[09:32] I serve as both a practicing privacy lawyer and academic researcher,

[09:38] contributing to IEEE working groups on ethical AI and robotics,

[09:45] lecturing internationally and providing strategic guidance to organizations navigating the increasingly complex landscape of data protection and cybersecurity compliance.

[09:57] And last but not least,

[10:01] my book, the one that you presented: it is a work that summarizes some of the thoughts about artificial intelligence and privacy that I have collected over the last years. So that is it, in short.

[10:21] Debbie Reynolds: Thank you for that.

[10:22] So when I think of artificial intelligence,

[10:27] first of all, we know that artificial intelligence as a field is not new, but what is new is some of the innovations that we're seeing in artificial intelligence. So tell me a bit about how that plays in. I think of artificial intelligence as an umbrella over many different types of artificial intelligence.

[10:52] And then you have neural networks, if you can explain that. And how does that interplay with privacy?

[11:00] Nicola Fabiano: Well,

[11:01] it's quite a complicated question, and you are right. From my perspective, it's wrong to talk about artificial intelligence in general. I can understand it when we are chatting over a coffee, but if we want a more structured definition, we cannot talk about artificial intelligence in general. This is one of the points that interested me and that I explored in depth, because it's the crucial point.

[11:42] What is artificial intelligence? Can we discuss artificial intelligence as such? From my perspective, the answer is no, following a person from the USA whom I consider a genius, Stuart Russell: in one of his books with Peter Norvig, they stated that it's impossible to define artificial intelligence because of its many declinations. There are variables that don't allow us to define artificial intelligence correctly.

[12:29] The best definition relates to an artificial intelligence system, and this is the reason why, in Europe, the AI Act defines "artificial intelligence system" and not "artificial intelligence".

[12:46] This point was particularly discussed and criticized because, before the final version of the AI Act, there were three different definitions of artificial intelligence.

[13:05] You know, in Europe, legislation comes from three institutional bodies: the European Commission, the Council, and the European Parliament. And all three of these institutions defined "artificial intelligence system" in three different ways. So at the beginning we had three different definitions of an artificial intelligence system.

[13:36] Then the OECD intervened with its definition of an artificial intelligence system.

[13:43] And in the end the European Parliament adopted part of this definition, so now we have the definition of an artificial intelligence system in the EU AI Act. So I think that it's correct to talk about an artificial intelligence system.

[14:02] Artificial intelligence in general is an umbrella: under the umbrella we can find machine learning, deep learning, and other techniques, I mean scientific ways to create models and work in these fields. But if we want to define artificial intelligence correctly, we have to talk about an artificial intelligence system. This is one of the main points, and for me it is the crucial point from which we can start to talk about artificial intelligence in general.

[14:43] You know, we have GPAI, general-purpose artificial intelligence, and other definitions. In these years, people and scientists have created a lot of acronyms for several phrases: general-purpose artificial intelligence, the code of practice, and so on and so on. In the end, I think the correct way is to talk about an artificial intelligence system.

[15:15] Although we have legislation in Europe, and the European Commission claims to be the first in the world to have created legislation on AI, which is true, we should also be aware that we did not enter artificial intelligence just now; we have already been in an artificial intelligence context, in intelligent times, for several years.

[15:55] But now people are discovering instruments like LLMs: ChatGPT, Claude, Copilot, Gemini, and so on, and people believe that now is the time of artificial intelligence. But artificial intelligence started many years ago, in the 1950s.

[16:20] Scientists like John McCarthy were already talking about artificial intelligence back then. So now we are in the final stretch of popularization.

[16:33] I mean, users are discovering the advantages of some artificial intelligence systems, but really we have been inside artificial intelligence for many, many years.

[16:55] And the final consideration is that this year, 2025, is the year of agentic AI, just to mention some instruments that will then be developed until their final version.

[17:12] I also have some forecasts for 2045 and 2050.

[17:19] But this is almost a joke. If you want to hear something about my forecasts, feel free to ask me and I will tell you more about them.

[17:35] We will have a hybrid physical-digital reality. But this is another point.

[17:44] Debbie Reynolds: Yeah,

[17:45] part of the title of your book is around neural networks and privacy. So tell me, what is the interplay there? First of all, can you define neural networks? And then, how does privacy play into that?

[17:59] Nicola Fabiano: Well, the first part of the book is reserved for that. You know, I have to underline that my previous book, which contains DAPPREMO, the model I mentioned before, describes the privacy world, the privacy context. It's accessible to everyone, because it describes what privacy is.

[18:31] I remember a friend of mine, a doctor, who said to me, "Ah, finally I understood what privacy is by reading your book." Because I chose to write not a technical book, but a book accessible to everyone. And everyone should understand what this phenomenon is.

[19:03] So when I started thinking about the new book on AI, I considered it necessary to create a bridge between the previous book and this last one, to allow people to understand the way to pass from the previous description of data protection and privacy to the new reality where we face artificial intelligence. So the first part of this book is dedicated to a new vision of privacy, to what I consider the most relevant aspects of the connection between data protection, privacy, and artificial intelligence.

[20:03] And then the second part of the book is dedicated to explaining what artificial intelligence is and some specific points or challenges related to it.

[20:20] In the final part of the book, in the last chapter, I propose a new approach, a multi-layer approach, because I am convinced that we cannot address any aspect related to artificial intelligence or data protection by looking at one layer only. We are in a context where we find a lot of layers, and we have to consider all the layers to have a global view of the phenomenon.

[21:08] And I think that it's also relevant to consider a multi-professional approach, because we need lawyers, we need scientists, we need psychologists, we need engineers, and so on.

[21:29] So in the last chapter I tried to explain this proposal of mine, aimed at showing how we can have a correct approach to the artificial intelligence phenomenon. And when I start talking about artificial intelligence, I also propose a taxonomy of artificial intelligence, because from the scientific point of view it is very relevant to understand the taxonomy before starting to talk about artificial intelligence.

[22:10] So these are two pillars in my book.

[22:13] And then I try to present some challenges, some aspects related to international standards, because people don't know that we have not only regulations but also technical standards.

[22:37] I explain that we can fully understand this phenomenon only if we consider simultaneously the regulations, the standards, ethics, and the other measures issued by institutional authorities like AI authorities, the European Data Protection Board, the European Data Protection Supervisor, and so on. If we have a complete view of all of these measures, regulations, and technical standards, we can fully understand the phenomenon. This is my personal view.

[23:26] Debbie Reynolds: So you are in Italy, you're in the European Union,

[23:31] you've watched as the EU AI Act has come into force.

[23:37] And actually one of the words in the title of your book is around innovation. So I want to ask you a question about that.

[23:43] Nicola Fabiano: So.

[23:46] Debbie Reynolds: There's a lot of talk about AI and the differences, say, between the US and Europe. A lot of times the conversation is that we don't want regulation for AI because we think that's going to stop innovation.

[24:03] And I don't agree with that. But I want your thoughts.

[24:07] Nicola Fabiano: I think, from my perspective, that this is a much-debated issue around artificial intelligence. Europe already has a regulation on AI, but in the USA there is a lot of technical activity; there is more inspiration about artificial intelligence, more activity than in Europe. In Europe we have legislation, yes, but it's not the solution.

[24:53] Recently, much of what I hear from Europe, from the European institutions, is talk about digital sovereignty, but from my perspective that's not the correct way, because you cannot talk about digital sovereignty when you do not have the instruments to practice technical innovation.

[25:21] Because yes, regulation: in Europe we have over 100 regulations on digital matters. It's crazy. Sincerely, I would need an algorithm, I would need software, to search for the correct rule among these more than 100 regulations. And this is not the way to support technical evolution.

[25:57] Innovation is a very good point, but regulation is not the solution; it's not the only solution. In the USA, I see that you are ready, you are very proactive. In Rome I talked with people from the USA, people from Boston, people from universities, and they are very proactive. They are ready to work; they put their hands on the keyboard and they work on AI. In Europe,

[26:38] I don't know who is working seriously on AI, and from my perspective we do not have solid infrastructure, we do not have solid cloud services. There are some of these in Europe, but few compared to the USA.

[27:01] So the services that already exist in the USA are powerful.

[27:09] So my question, my doubt, is: what will happen? What will the future be? I do not have a crystal ball, but I think that there will be some reflection, some more time to think about artificial intelligence, which is currently the phenomenon most clicked on and most talked about.

[27:48] And I think that we cannot talk about, and we cannot work on, artificial intelligence without infrastructure, without services.

[27:59] So the difference, from my perspective, is that yes, we have a regulation, but I very much appreciate how USA scientists work on this topic, as opposed to what has happened in Europe. This is my short view, but we could discuss this for days and weeks, because it is a very critical point.

[28:28] Debbie Reynolds: I agree it's a critical point. I feel that,

[28:33] I don't know, I have a lot of thoughts about this and I want your thoughts as well.

[28:37] One is that regardless of who is regulating and who isn't,

[28:44] there are people like you and me that work in standards and those standards are shared internationally and there is collaboration there for the most part. And so I'm happy to see that work continue.

[28:57] And I think, in my view, a lot of that is non-political, at least some of it.

[29:01] So I think that,

[29:03] you know, that is a really important part of the evolution of artificial intelligence.

[29:10] But then also,

[29:11] I don't know, in addition to the fight over whether to regulate AI or not, and who has AI and who hasn't, you know, I want your thoughts about the AI race that they say we're in now.

[29:24] So the US has now said they want to win the AI race against China. But I want your thoughts.

[29:34] Nicola Fabiano: But you should consider that the EU regulation is not there to regulate artificial intelligence or innovation or technical solutions; it is there to regulate the internal market, which is a very different thing.

[29:53] Oh, if you want to have a very good and strong approach, you should consider the topic itself and not the effects on the internal market.

[30:04] I also understand the political approach, which is related to defending the internal market in Europe in general.

[30:14] But I think that we should also consider the single topic, because, you know, in Italy there is a bill on artificial intelligence, and you might ask: why, when you already have a regulation? The European regulation applies to all 27 member states, so why an internal regulation on AI that could overlap with the EU regulation? Now, this is a very relevant question, but if you read the bill, you understand that there are interests related to projects and so on. So it's a kind of political solution, but it's not a technical solution to regulate artificial intelligence.

[31:14] From a technical perspective, I think that artificial intelligence cannot be regulated. But I understand the point of view of the European politicians, who want to show their faces and say, "Oh, we are the first in the world to have regulated artificial intelligence," even though it does not really regulate the substance of artificial intelligence.

[31:47] Debbie Reynolds: So what is happening in the world right now that's concerning you, around either privacy or AI or neural networks?

[31:59] Nicola Fabiano: Good question. I think it's a strange time.

[32:09] My most pressing concern is what I term the accountability gap in our current privacy and cybersecurity landscape,

[32:21] which has become even more critical as we approach what I describe in my recent book as the brain-computer convergence.

[32:34] This gap manifests in three interconnected dimensions that I define as the technical accountability gap, the jurisdictional accountability gap, and the generational and cognitive accountability gap.

[32:54] The first one, the technical accountability gap, is related to modern AI systems: in particular, large language models (LLMs) and deep learning algorithms operate as black boxes.

[33:11] And this is a very relevant point; I describe it in my latest book in relation to explainable AI (XAI).

[33:21] Because we need very transparent artificial intelligence systems. People should know what the processes are; instead, today we are working with black boxes whose decision-making processes are not transparent or explainable.

[33:47] As I detail extensively in my book, this is one relevant point: it creates a fundamental tension with privacy principles like data minimization, purpose limitation, and individual rights.

[34:09] The emergence of neuromorphic computing, which I explored during the conference in Rome, promises brain-like efficiency but introduces new complexities in understanding how these systems process personal information.

[34:33] So the question is,

[34:35] how can we ensure accountability when we cannot explain how personal data influences automated decisions?

[34:47] This challenge becomes exponentially more complex when hybrid intelligence systems combine artificial processing with biological input through sophisticated brain-computer interfaces.

[35:10] This is the first point. The second point is the jurisdictional accountability gap.

[35:18] As someone who has worked extensively with both European and international frameworks,

[35:25] I'm deeply concerned about the fragmentation of global privacy and cybersecurity standards.

[35:33] While the EU's GDPR set a global benchmark and the AI Act represents groundbreaking regulation, the emergence of various state-level US privacy laws and different Asian frameworks creates a complex patchwork.

[35:56] This fragmentation doesn't just create compliance burdens; it also creates opportunities for bad actors to exploit regulatory arbitrage, making it difficult for well-intentioned organizations to implement coherent global privacy strategies.

[36:22] And the third point is the generational and cognitive accountability gap. I think that perhaps most concerning is the disconnect between how different generations perceive privacy, particularly as we approach the era of brain-computer convergence.

[36:51] Through my work on projects protecting minors online and my research on synthetic human profiles, explored in my book, I've observed that digital natives often have fundamentally different privacy expectations and behaviors than the frameworks designed to protect them assume.

[37:19] The gap becomes critical when we consider the scenarios I outline in my book: I described the typical day of the "transparent" person, living under comprehensive surveillance, and then the typical day of the "digital hermit", mastering disconnection. As brain-computer interfaces enable direct thought-to-digital communication and shared cognitive experiences, we must confront fundamental questions about mental privacy, neuro-rights, and what constitutes authentic human experience versus artificially enhanced cognition.

[38:18] These gaps are interconnected and mutually reinforcing.

[38:25] Technical opacity makes jurisdictional oversight more difficult,

[38:31] while regulatory fragmentation impedes the development of technical standards.

[38:39] Meanwhile, generational differences in privacy expectations and the approaching reality of cognitive enhancement technologies undermine both regulatory legitimacy and technical adoption.

[38:58] Thus, in the end, the urgency of addressing this accountability gap has intensified with recent developments in AI and the approaching reality of neuromorphic computing systems that could be deployed in implantable medical devices, autonomous systems, and even educational applications.

[39:27] This is, in short, my view. And as I mentioned, I don't remember exactly, but probably last year, I was one of the people who signed the open letter calling for a pause in the development of artificial intelligence.

[39:52] I see people very excited about new experiences with artificial intelligence systems, but I think we should also ask: what are the effects on humans? What are the effects on data protection?

[40:13] What are the effects on privacy? What I mean is that, for example, most people do not understand, do not know, that by working with LLMs they are training them. And some of the companies show particular attention to data protection. Let me take the opportunity to mention Anthropic with Claude, which from my perspective is one of the most interesting systems because, judging by what they declare publicly on their website, they pay attention in a serious way.

[41:12] And from the user side, when I work with Claude, I see that it works with chats, and if you end one of the chats you are working in, you cannot retrieve information from that chat in another chat, because there is no interaction between the chats. This is a typical solution to prevent issues related to data protection or privacy. If you try to ask about something you were discussing in another chat, the system answers that it doesn't know what is in the other chat, because there is this separation. I don't know if it can be considered a privacy-by-design solution, but it's a measure that other solutions do not have.

[42:23] And so I think that we should pay more attention to the effects of artificial intelligence, because when we are particularly excited about a solution, we are happy that it allows us to work in a very short time, but we should not forget what the effects of this solution can be. This is one of the points that I think we should consider.

[42:58] Debbie Reynolds: So you brought up a couple of very interesting points.

[43:01] One definitely about neural networks and also neuro-rights. I think that is going to be a big battleground in the future for human brain-computer interfaces.

[43:13] And then also the feature that you talked about in Claude: they have a feature like that in ChatGPT, but it's an on-off feature, so you can toggle it on or off. Obviously they want you to turn it on, right, so that your chats can reference other chats. But at least you have the choice. I think the move in these AI tools, in terms of features and functionality, is to get more information to create more personalization. Some people like that and some people don't, so there has to be a balance there around privacy, or at least choices. Or, like you say, a company may say they like Claude better because, from a confidentiality standpoint, they don't want those chats remembered or referenced in other places. So I think it's definitely a good way to go.

[44:04] My last question, Nicola: if it were the world according to you, what would be your wish for artificial intelligence, neural networks, or privacy anywhere in the world, whether that be human behavior, technology, or regulation?

[44:22] Nicola Fabiano: Oh well, basically you posed two different questions. The first one, if I understood well, is related to neural networks and neuro-rights, and I think that neural networks is one of the relevant fields I intend to follow in the future.

[44:48] We should pay attention and follow the development of this field, because when we talk about artificial intelligence we are basically talking about technical systems, but behind them there is something more structured, and this is the scientific and technical approach that should be used, like the one related to neural network systems. It's a very fascinating aspect, and I have promised myself to follow this field in the next months.

[45:33] But people should also know the history of artificial intelligence, because artificial intelligence is tied to the history of neural networks. Indeed, the association I mentioned earlier is the International Neural Network Society, because in the 90s the focus was on neural networks, not artificial intelligence; then the expression changed to artificial intelligence. So currently we should pay attention both to the development of neural network systems and to what I define as neuro-rights.

[46:20] Neuro-rights, for me, are all the human rights related to the application of neural systems.

[46:31] We know the experiments carried out by Elon Musk with Neuralink, and there is a very relevant alert, or warning, about these experiments and the use of artificial intelligence in general in the health sector.

[46:51] Because people should know, before the treatment, what the impacts on data protection and privacy are. I understand that science is relevant, and I understand that medicine is relevant for the general health of people, but we should also understand what the effects on humans are. It's a crucial point, and this is why I talked about neuro-rights.

[47:30] Debbie Reynolds: Yeah.

[47:30] Nicola Fabiano: Then, the future. I don't know.

[47:36] I envision a hybrid physical-digital reality, and I envision a future where privacy and cybersecurity frameworks globally recognize and protect human dignity as the core principle, regardless of jurisdiction, technology, or business model.

[48:00] This means moving beyond compliance-based approaches to embrace what I call "dignity by design": ensuring that every technological system and data processing activity enhances, rather than diminishes, human agency and autonomy.

[48:23] And concluding: as we approach the brain-computer convergence era, as I detail in my book, this principle becomes ever more critical.

[48:38] Well, when sophisticated brain-computer interfaces enable direct thought-to-digital communication and shared cognitive experiences, we must ensure that enhanced humans retain their fundamental dignity and autonomy.

[48:59] So I think that beyond 2045, in the neural integration era, looking at the scenarios detailed in my book, where brain-computer interfaces enable capabilities like external memory storage that feels as natural as recalling our own experiences, enhanced processing speed for complex calculations, and even the temporary sharing of perspectives between individuals, privacy and cybersecurity frameworks must evolve to protect the most intimate aspects of human experience.

[49:45] This future will require new categories of rights and protections: neural privacy rights, cognitive enhancement equity principles, and frameworks for consensual thought-sharing that preserve individual autonomy.

[50:04] Ultimately, my vision is of a world where privacy and cybersecurity enable human flourishing rather than constrain it,

[50:17] where our digital and neural technologies enhance our capacity for creativity, connection, and contribution while protecting our fundamental human dignity.

[50:31] This isn't just a technical or legal challenge; it's a civilizational one that will define the kind of society we leave for future generations.

[50:46] This is my view, in short.

[50:50] Debbie Reynolds: I like it when you said "dignity by design". I hadn't heard anyone say that, but that's a very apt way to talk about it.

[50:59] Well, thank you so much, Nicola. I really appreciate you being on the show.

[51:03] Anyone taking a look, definitely follow him on LinkedIn, and also take a look at his book, Artificial Intelligence, Neural Networks and Privacy: Striking a Balance between Innovation, Knowledge and Ethics in the Digital Age.

[51:17] Thank you again.

[51:18] Nicola Fabiano: Thank you. Thank you very much, Debbie. Thank you.

[51:20] Debbie Reynolds: All right. And I'll talk to you soon. Thank you so much. Bye.

