E252 - J Mark Bishop, Professor of Cognitive Computing (Emeritus), Goldsmiths, University of London, and Scientific Adviser to FACT360, United Kingdom

[00:00] Debbie Reynolds: The personal views expressed by our podcast guests are their own and are not legal advice or official statements by their organizations.

[00:14] Hello, my name is Debbie Reynolds. They call me the Data Diva. This is the Data Diva Talks Privacy podcast where we discuss data privacy issues with industry leaders around the world with information that businesses need to know.

[00:27] Now I have a very special guest on the show, all the way from London, one of my favorite cities in the world,

[00:35] J. Mark Bishop. He's Professor of Cognitive Computing (Emeritus) at Goldsmiths,

[00:42] University of London, and Scientific Adviser to FACT360.

[00:47] Welcome.

[00:48] J Mark Bishop: Hi, nice to be here.

[00:50] Debbie Reynolds: Well, I'm happy that you agreed to be on the show.

[00:54] I really enjoy your writings and the things that you put out on LinkedIn. Very thought provoking and I think,

[01:01] you know, as you've seen with people getting super excited about generative AI,

[01:07] now everyone thinks they're an AI expert, but we actually have an AI expert on the show, like Mark.

[01:13] So if you could please tell me your journey in technology,

[01:17] the things you're interested in, and how you got here, how you became an expert in this field, and the work that you do advising people on AI.

[01:28] J Mark Bishop: Okay, well, thank you for the question.

[01:30] It's a nice question. I think I've got a slightly unusual route in that I don't come from a family that's had a long history of children going to university.

[01:38] So when I went to study for my first degree, which was in cybernetics and computer science,

[01:44] I was very focused on getting a job in industry as soon as that degree was finished and that's exactly what I did, and I got a very well paid job and was quite content in my role as a software engineer.

[01:57] But at the time I was sharing a house with a number of other ex-students, and one of them started doing a PhD at the university. In my ignorance, I'd never even heard of PhDs before, never mind ever thought of applying for one myself.

[02:11] But Nick, as he was called, and I had had some interesting chats, and he said, oh yeah, you might want to consider doing this.

[02:17] And around the time I was commuting, I don't know how far it would be in kilometers, but I was commuting about a hundred miles daily to my place of work.

[02:27] And as a relatively young 21-year-old, I was probably driving a little faster than I ought to have been. And over that year that I worked, I had three very near escapes from serious accidents that were entirely my fault.

[02:41] And after the third such incident, that thought about possibly doing a PhD began to raise its head. I thought maybe it might be fun to be a student again.

[02:49] So I went for an interview and, to my amazement, they offered me a funded PhD place at Reading in the Department of Cybernetics.

[02:56] And as a child I'd read a lot. I was very interested in science fiction.

[03:00] And the idea of building a thing that could think and understand and have feelings was something that had always been kind of dear to my heart. I've read a lot of science fiction stories where this trope is used a lot.

[03:11] And my supervisor,

[03:13] this is going back to the very early 1980s, a guy called Mike Usher, he was very interested in what we now know as neural networks. So he said, Mark, would you like to do a PhD in neural computing?

[03:23] And absolutely, I thought, that sounds like exactly what I want to do. I'm going to build a model of the brain. And if brains can think, obviously neural networks must be able to think.

[03:32] It was just a matter of getting the right neural network, the right learning algorithms and a fast enough computer, and Bob's your uncle, you've got a thinking machine. And that's what I thought at the start of that PhD program.

[03:42] So I enrolled and my first shock about one year into the grad program was I went to give a talk on my research in the computer science department. If you recall, I was doing my PhD in the cybernetics department, a discipline which incidentally, historically is long related to the history of neural computing,

[03:59] with thinkers like Warren McCulloch, Walter Pitts,

[04:02] Frank Rosenblatt, all identifying as cyberneticians. Anyway, so I went to the next-door computer science department to give a talk on my early research.

[04:11] And to my astonishment, instead of people being interested, I was met with derision. Literally. The postdocs were literally laughing at me. Because this was, I found out afterwards, right at the height of the so called first AI winter with respect to neural computing.

[04:26] So I was fairly forcefully told, as male postgraduates are often quite wont to do, in no uncertain language: didn't you know Minsky and Papert showed neural computing was a load of rubbish back in the 60s?

[04:37] What are you doing wasting your time on that? You should join us. Doing some proper AI research in rule based systems, which was the approach they were using in the computer science department at that time.

[04:46] Anyway, being a bit of a stubborn old boot, the more they derided my chosen field of study, the more interested in it I became. So I carried on and completed my PhD which focused on neural computing and what I called anarchic methods of building intelligent systems.

[05:02] And I chose the word because it's the opposite, if you like, of rule based. If you think of anarchy, a society without rules. I was interested in approaches to building intelligent machines that didn't rely on explicit rules.

[05:12] And the one interesting thing, really in retrospect, that dropped out of that... well, there were a couple of interesting things that dropped out of the PhD. One was getting a firm grounding in basic neural network theory, which has stood me in good stead.

[05:24] The other was I developed what we now know as the world's first swarm intelligence algorithm. And that's the idea that if you model artificial creatures that look a bit like bees or ants, you can get some very interesting optimization behavior out of the interactions of lots and lots of very simple entities.

[05:39] And stochastic diffusion search, as we now call it, transpired

[05:43] to be the very first swarm intelligence optimization algorithm that was used.
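For readers curious how a swarm of very simple agents can converge on a solution, here is a minimal illustrative sketch of stochastic diffusion search in Python. It is a reconstruction of the general idea, not Bishop's original formulation: the string-search task, agent count and iteration count are assumptions chosen purely for clarity.

```python
import random

def sds_string_search(search_space, model, n_agents=100, n_iters=200, seed=0):
    """Minimal stochastic diffusion search: locate `model` inside `search_space`.

    Each agent holds a hypothesis (a candidate start position). In the test
    phase it checks ONE randomly chosen character of the model against the
    search space; in the diffusion phase, inactive agents copy hypotheses
    from randomly polled active agents, or pick a fresh random hypothesis.
    """
    rng = random.Random(seed)
    max_pos = len(search_space) - len(model)
    hypotheses = [rng.randint(0, max_pos) for _ in range(n_agents)]
    active = [False] * n_agents

    for _ in range(n_iters):
        # Test phase: cheap, partial evaluation of each hypothesis.
        for i, h in enumerate(hypotheses):
            j = rng.randrange(len(model))
            active[i] = (search_space[h + j] == model[j])

        # Diffusion phase: inactive agents recruit from active ones.
        for i in range(n_agents):
            if not active[i]:
                partner = rng.randrange(n_agents)
                if active[partner]:
                    hypotheses[i] = hypotheses[partner]
                else:
                    hypotheses[i] = rng.randint(0, max_pos)

    # The largest cluster of agents indicates the best-supported position.
    best = max(set(hypotheses), key=hypotheses.count)
    return best, hypotheses.count(best) / n_agents

if __name__ == "__main__":
    text = "the quick brown fox jumps over the lazy dog"
    # Should converge on position 35, the start of "lazy".
    print(sds_string_search(text, "lazy"))
```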

[05:47] So I ended up spending a lot of my professional life, at Reading and latterly at Goldsmiths, looking at the theoretical analysis and the practical applications of this algorithm that I developed as part of my doctoral work.

[06:00] So that's been a large part of my life work, if you like.

[06:03] But then the really big thing, looking back on my career that happened is that I got exposed to a philosophical argument that kind of undermined the idea that we can build genuinely intelligent machines.

[06:15] And that came about when I went to a conference on neural computing at Oxford University.

[06:22] And the conference was memorable for many reasons. The first one,

[06:26] I'd never been to an academic conference before that was hugely oversubscribed. So the main room, which must have held a couple of thousand, I imagine, was completely full.

[06:33] They had two overspill theaters, a first and a second overspill theater, and I managed to get into the second overspill theater, watching proceedings on a big video screen. And what was the reason for the excitement?

[06:45] Well, the reason for the excitement was it was one of the first presentations in the UK of the work by a group from America called the PDP group, the Parallel Distributed Processing group, who, in retrospect we know, had rediscovered an algorithm for training multilayer neural networks that circumvented some of the criticisms that Minsky and Papert so elegantly outlined in the 1960s in their book Perceptrons.

[07:07] So this generated a huge amount of interest, and the interest was drawn because, on the one hand, interest in symbolic rule-based systems was fading, in part due to the critiques outlined in the UK in the Lighthill Report,

[07:20] to do with things like combinatorial explosion; and on the other hand, the basic intuition that so appeals to so many people, that if we can simulate the brain, we must be able to simulate the mind,

[07:32] and therefore neural computing surely is an interesting area to go into. This had never really died away. And so when people heard that the PDP group had got a new learning algorithm that could be used to train multilayer neural networks, the excitement was palpable, and delegates came from all over to hear the very first presentations on this.

[07:48] But what was critical for me is that also at that meeting I saw two philosophers, both American: Dan Dennett and John Searle. And I'd never seen philosophers in the flesh before, coming from an engineering background.

[08:01] And the first thing that shocked me was that philosophers can be extremely pugilistic in their exchanges. It was a kind of exchange that I'd never seen before. It was very fierce and kind of personal.

[08:12] I'd just never come across this. It's not what we were used to in engineering conferences. And the second is that they spent a lot of time talking about an argument that John Searle had put out a year or two earlier called the Chinese Room argument, which, if it's correct, purports to show that no formal system,

[08:27] that is, a system that follows a sequence of rules like any computer program, can ever genuinely understand.

[08:35] Searle was defending this position and Dan Dennett was, as actively as he could, trying to undermine Searle. And I got to grips with the argument and I thought, wow, I think John Searle's onto something there.

[08:47] Anyway, I sort of let that lie in the background as I finished my PhD, then got, first of all, some postdoc positions at Reading, and then I eventually got a position on the faculty of the Cybernetics department, where I enjoyed a wonderful number of years working with some great people.

[09:03] And in the course of that, at Reading University at the time, it was allowable to do a master's for free if you were on the faculty of any department.

[09:10] Having had that gentle introduction to philosophy through exposure to John Searle and Dan Dennett, I thought, wow, wouldn't it be great if I did a master's in philosophy? So I enrolled on a master's in philosophy, and instead of just doing the compulsory modules, I decided I liked philosophy so much I would do everything.

[09:25] So I did all the modules in philosophy that I possibly could and really enjoyed it. And in the course of that, I got chatting to one of the philosophers there, Professor John Preston, as he is now.

[09:36] And we decided to write a retrospective 21 years after Searle first published the Chinese Room, where we would invite 10 essays from the leading cognitive scientists and 10 from the leading philosophers to debate how well John Searle's Chinese Room argument has stood the test of time.

[09:55] And that book came out in 2002 with Oxford University Press. It was called Views into the Chinese Room, and, in my arrogance if you like, I contributed a chapter to it.

[10:05] Searle's work purports to show that no formal system can ever genuinely understand in the way that humans understand.

[10:12] Effectively, they can't understand at all, if Searle's correct. My contribution to the book showed, through a reductio ad absurdum argument called Dancing with Pixies, that no formal system or computational system could ever be conscious, phenomenally conscious,

[10:26] could ever feel the sensation of what it's like to be slapped around the face or see the ineffable red of a rose.

[10:35] Unless panpsychism was true,

[10:37] and everything in the world was conscious of every possible experience.

[10:41] For a scientist, that's not a very satisfactory position. So we're led to the other horn of the reductio, and that means to reject the idea that computers can be conscious.

[10:50] So in that book we examined the Chinese Room; I put forth my argument for why computers can't be conscious. And that led to kind of a second career, if you like.

[10:59] My first career being to develop the stochastic diffusion search swarm intelligence algorithm that was part of my doctoral work. The second approach then became to try and undermine, or to argue against, the mountain of hype that even in the 90s was there, saying that computers genuinely understand, that one day they will have a mind and these things could be conscious.

[11:21] So I've been sort of kicking against the ******, if you like, trying to argue against that position for over 35 years now. And there's just a small number of us who take a skeptical line on AI.

[11:32] And it's kind of ironic because in 2010 I was elected to chair the UK Society for AI, called the AISB.

[11:39] That's the Society for the Study of Artificial Intelligence and the Simulation of Behaviour, which is the oldest such society in the world, and the largest such society by membership, certainly in the UK. And I was elected to chair that.

[11:51] So there's a certain irony being the chair of a society devoted to the study of AI from someone who was kind of pretty skeptical about what machines could do.

[12:00] And then I left the Society after four years, after hosting their most successful conference ever. My other interest around this time was in advanced cognitive science, hence how I got the chair at Goldsmiths.

[12:12] And I was very interested in modern cognitive science, because it transpires that people who argue very forcibly, almost religiously in my opinion, that computers really think and can be conscious,

[12:22] are basing their ideas on a philosophy that started in the 1960s with Hilary Putnam, called functionalism. And of course, as we all know, Putnam did an about-face on his ideas about functionalism, most famously in a book called Representation and Reality, where he argues very convincingly that functionalism reduces to behaviorism.

[12:40] And it's extremely strange that people in AI were basing their work on really out of date cognitive science. And there was lots of very exciting new cognitive science coming around.

[12:50] It interests me to this day: the idea that the mind doesn't float free of the body, as functionalists believe it does, but is deeply informed by our body. So for the embodied cognitive scientists, our cognition is contingent on our lived body:

[13:06] if you like, the microtubules in the neurons, the neurons in the brain, the brain in the body,

[13:11] then the body in our social environment and our physical environment, all those things come together to bring forth our cognition and experience of the world. And it's just nonsense to think we can abstract cognition from the body that we're in.

[13:26] So that's embodied cognitive science, if you like, a hardcore version of it. And then I also got interested in the related fields of enactive cognitive science, ecological cognitive science and embedded cognitive science.

[13:36] So all those things are coming through. And in that 2014 AISB 50th anniversary conference that I hosted before resigning from the Society,

[13:44] we tried to focus on those things which were really exciting me.

[13:48] And then after leaving the AISB, a firm called Tungsten in the UK offered me, at that point in time, what seemed a very large amount of money to set up a research centre in AI at the University of London called TCIDA, the Tungsten Centre for Intelligent Data Analytics.

[14:05] And I was given free rein to recruit who I wanted. I had at one time about 12 or 15 people working at the center. So a moderate sized centre.

[14:13] It was great for me as an academic because it meant I was able to buy myself out of all teaching, not that I didn't enjoy teaching, but the admin that goes with it can be a pain after a while, and focus entirely on research.

[14:25] So I had a wonderful number of years at the centre until a change of CEO at Tungsten meant that the desire to fund a centre in data analytics was no longer top of their priorities and once Tungsten withdrew their funding,

[14:40] to my astonishment, all my staff left the university to set up a spin-out to develop some of the ideas we'd been working on in the centre, and that company turned into FACT360, which is the company I still advise now.

[14:53] And then, having been left as a director of a centre with no staff and no funding, I had the department say, well, you'll have to go back to teaching undergraduate Java or whatever the heck the language was at the time.

[15:04] And I thought, no, that's not for me, I'm going to take early retirement. So at that point in time I left academia on an early retirement package and have since felt free to work with FACT360.

[15:17] Sadly, around this time my mum developed advanced Alzheimer's, so I spend a lot of my time caring for my mother these days. But I juggle my duties now between writing academic books.

[15:27] I've got a trilogy coming out on autonomous driving, the first volume of which should be published third quarter of this year.

[15:33] The trilogy is called Driving Intelligence, and the first volume's called the Green Book: Routes to Autonomy.

[15:40] The second volume will be called the Amber Book. You can see where this is going, a traffic-light theme coming through here. And then the third volume will be the Red Book. The Amber Book looks at some of the problems that have emerged, some of the crashes that have happened in autonomous driving,

[15:53] trying to find out technically what causes them. And then the third book looks at the philosophical problematics around the very notion of autonomous driving. So that trilogy is almost finished.

[16:03] It's written with a co-author called Gabriel Zebrafish, and I keep myself busy. I've got some other books on the go as well. So I'm quite busy as a writer, and I get asked to give the odd plenary and keynote around the world, which is always nice when that happens as well.

[16:18] So that's, I think, as brief as I can make it, a potted history of how I've arrived at speaking with you today.

[16:25] Debbie Reynolds: That is tremendous. Thank you so much for that history.

[16:29] So, as someone who's very steeped in technology and data for many decades,

[16:36] what do you make of all this hype now around generative AI and AI in general? What are your thoughts there?

[16:44] J Mark Bishop: Well, as you'll have gathered from that somewhat rambling introduction from myself a minute ago, I'm a little bit skeptical.

[16:53] I think these things can be great tools, but then we get people like Sam Altman and Elon Musk predicting AGI, that's artificial general intelligence. In other words, getting a computer system to be as good as or better than human performance in all possible intellectual activities.

[17:12] I'm skeptical that any computational system, let alone large language models or GenAI, will ever achieve that. And the reasons for my skepticism I outlined in a paper provocatively called Artificial Intelligence is Stupid and Causal Reasoning Won't Fix It.

[17:27] And I use that term stupid very, very precisely. It kind of riffs off the everyday notion that we all think we know what a stupid thing or person is, but actually I'm using it in

[17:38] a very precise philosophical sense. It's definition three in the

[17:43] Oxford English Dictionary, and that's of an insentient entity; that's the third notion of what stupid might mean. And it's important to me that, you know, AI systems aren't conscious in this sense.

[17:56] They're definitely stupid. There's no sentience there. And it's very important for me: there's something that an amoeba has when it negotiates a sugar gradient to find an optimal supply of nutrient for it to live, something that is ontologically sentient in a way that a rock or a computer system isn't.

[18:16] And there's a way in which even the simplest thing, like an amoeba, has a much richer... well, it has an understanding of its own environment in a sense in which a computer system has no understanding.

[18:27] And to cash out what I mean by that,

[18:31] I can do that in a number of ways. The first way, if you imagine you've got a computer system on a production line that's manufacturing chocolate bars.

[18:38] I don't know whether you like chocolate bars. I certainly do, my daughter does. But imagine they're whizzing past a little camera that's sensing these bars and you've got a system that's counting them.

[18:46] That system has no more idea it's counting chocolate bars than elephants or space rockets or bananas or anything. It's got no idea it's doing anything. It's just like a glorified calculator.

[18:59] Press one thing and something else happens as a result.

[19:02] Seems to me that all computer systems are like that. The only intrinsic meaning, the only meaning these systems have is when we as humans put a computer system to some use.

[19:12] Now, I can flesh this out for you in a myriad of different ways, but one of the best is by introducing to you the idea of isomorphic computing.

[19:21] Please stop me, by the way, if I'm going off on one and this isn't of interest to your listeners.

[19:25] Debbie Reynolds: No, it's good. This is great.

[19:27] J Mark Bishop: Great. Isomorphic computing is when you have precisely the same source code. So you've got a computer running in a little box, and you've got precisely the same computer program executing. Just by the way that you wire the outputs of that computer system to the world, you can make it do very different things.

[19:44] So the best example of this that I know of is that you can get a program to play what in England we might call noughts and crosses, but I think in the US you call tic-tac-toe.

[19:53] Any program that will play tic-tac-toe optimally,

[19:57] I can then use, unchanged, to play a game called number whist, which is a card game.

[20:03] Now semantically these are totally different entities. When I play number whist with my 11 year old daughter, she thinks I'm playing number whist and she conceives of it, thinks about it in a certain way.

[20:14] When we're playing noughts and crosses, we're doing something totally different. Isn't it weird that exactly the same program,

[20:20] just by wiring the outputs in certain ways, can be used to play both games? So clearly, if that's the case, there's no sense in which this computer can be said to know it's playing number whist or tic-tac-toe, because the same program can play both,

[20:35] which are ontologically very different experiences for a human.

[20:39] The only meaning of that computation arises when we as humans use that box to play either number whist or tic-tac-toe. That's where the meaning is: in the interaction of the human and the computer.

[20:52] There's no intrinsic meaning, no observer independent meaning in the computational box itself.
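To make the isomorphic-computing point concrete, here is a small illustrative sketch in Python: one decision core that knows nothing about either game, wired to the world in two different ways. The core heuristic is deliberately crude (not an optimal player), and a "pick numbers that sum to 15" game stands in for the card game in the anecdote via the classic magic-square correspondence; both choices are assumptions made purely for illustration.

```python
# A core "player" over nine abstract positions 0..8. It knows nothing about
# boards or cards; it just claims an unclaimed position (centre first, then
# corners, then edges - a crude heuristic, NOT an optimal player).
PREFERENCE = [4, 0, 2, 6, 8, 1, 3, 5, 7]

def core_move(mine: set, theirs: set) -> int:
    taken = mine | theirs
    return next(p for p in PREFERENCE if p not in taken)

# Wiring 1: interpret positions as squares on a noughts-and-crosses board.
def as_board_square(p: int) -> str:
    return f"row {p // 3}, column {p % 3}"

# Wiring 2: interpret positions via a 3x3 magic square as the numbers 1..9,
# for a "pick numbers that sum to 15" game (a stand-in for the card game in
# the anecdote; every winning line on the board corresponds to a sum of 15).
MAGIC = [2, 7, 6, 9, 5, 1, 4, 3, 8]

def as_number_card(p: int) -> int:
    return MAGIC[p]

mine, theirs = set(), {0}
p = core_move(mine, theirs)
print("Same decision, two games:")
print("  noughts and crosses ->", as_board_square(p))
print("  number game         -> take the number", as_number_card(p))
```

The meaning ("a board square" versus "a number card") lives entirely in the wiring and in the humans using it, exactly as described above; the core function is indifferent to both.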

[21:00] The idea that these things can have any internal idea of meaning and internal genuine semantics about the world just seems a non starter for so many reasons in my mind.

[21:09] And also I notice that people who tend to argue the contrary position, that computer systems do understand,

[21:16] do have semantics,

[21:17] really tend to be very interested in science fiction and never to have grown out of it. I mean, I used to like science fiction when I was 13 or 14, but thankfully my literary tastes have moved on a little bit over the years.

[21:26] And the people that I meet who are most passionate about this have a very rich diet of usually not the best science fiction, and have got caught up with this meme that computation instantiates everything there is about mind, this functionalist idea.

[21:39] So yeah, I'm skeptical that GenAI is going to get us to AGI. And that's not to say it isn't useful. I use GenAI in my daily workflow every day.

[21:49] As I said in the foreword to the Green Book on driving intelligence,

[21:54] I use it to help me write the books. When I'm writing a paragraph and I get tongue-tied and a bit long-winded and I can't think of a nicer way to frame what I want to say, I'll interact with ChatGPT or Copilot or Gemini on a paragraph and see if I can get some ideas as to how to frame that better.

[22:12] I don't see that there's anything wrong with that. It's certainly not writing the books; it's my and my co-author's ideas that are putting this together. But yeah, throughout the book, on occasion, I've certainly used it to help me improve the way that I'm framing some quite complicated ideas.

[22:29] Because in the Green book we really do delve deep under the hood to show not just how first gen neural networks worked, which are pretty simple devices these days, but how transformers work,

[22:40] you know, how large language models work, how BERT works, how diffusion systems work, how end-to-end neural network controllers for autonomous vehicles work. So we like to think we've gone under the hood, and that's quite an ask

[22:53] when the audience you hope to reach is not just the academic and business specialists in autonomous vehicles and AI, but the generally interested layman and woman who wants to learn a little about it.

[23:05] So it's quite a challenge to write a very technical book that can be understood by a wide readership, and hopefully we've gone some way to meeting that challenge. But as a result of all those ideas, yep, GenAI is super useful, but just don't get too excited about what it can do.

[23:21] It's not going to replace all human labor. It might help automate some human labor certainly. And it certainly won't have a mind to decide it wants to rule the world.

[23:31] I can say with some certainty.

[23:34] Debbie Reynolds: I agree with that. I agree with that.

[23:40] I don't think AI will ever be sentient in the way that people believe it will be. But I guess for me the danger is that people will treat it as if it is, right?

[23:52] So to me that's a huge, a today, right-now problem.

[23:57] J Mark Bishop: It's something I rail against and I think people in AI have a lot to blame for this because people tend to use very anthropocentric terms when they're describing these things, even the very phrase learning machine learning seems to me ridiculous.

[24:11] What we have here is an optimization process. To go back to the work I started out doing in my PhD on machine optimization,

[24:17] all that's happening in machine learning is we're optimizing a set of weights on a complex computational system such that it'll perform in a certain way.

[24:25] It's not at all the same way that humans learn. And I think the use of terms like

[24:31] learning loads the discussion in a way that people tend to get overly sentimental about computers, in a way they might not get so sentimental about a rock.
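A minimal sketch of what "machine learning as weight optimization" means in practice: two numbers are nudged until a straight line fits some points. The toy data, learning rate and loop count are illustrative assumptions; the only point is that nothing beyond numerical optimization of weights is going on.

```python
import random

# "Learning" here is nothing more than numerical optimisation: nudge two
# weights so that the line y = w*x + b fits some points. No understanding
# is involved - just minimising an error measure.
data = [(x, 2.0 * x + 1.0) for x in range(10)]   # points on y = 2x + 1
w, b, lr = 0.0, 0.0, 0.01

for _ in range(2000):
    x, y = random.choice(data)
    pred = w * x + b
    err = pred - y
    # Gradient of the squared error with respect to each weight.
    w -= lr * err * x
    b -= lr * err

print(f"optimised weights: w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1
```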

[24:42] Debbie Reynolds: I think that's true.

[24:44] I want your thoughts on privacy and the tension there between kind of advanced computing, like AI, and privacy, because I'll just give you my thoughts. I think it's amazing that you've also studied philosophy.

[25:01] I'm a bit of a philosopher myself, so I totally understand the things that you're talking about, and I think those things work together really well. But I think one of the things that's a challenge with privacy is that a lot of things in privacy and data protection are centered around transparency.

[25:19] And I think a lot of the computing and AI aren't transparent in the way that people may think they are, which creates a tension. But I want your thoughts.

[25:29] J Mark Bishop: Well,

[25:30] I think in the UK, well, until a few years ago, we were very blessed to be part of a wider community called Europe,

[25:37] until, for some God-forsaken reason, there was a very slight majority in a plebiscite of the population to leave Europe. A huge mistake, in my opinion. But anyway, it is what it is.

[25:48] One of the nice things about being in Europe is that they think pretty carefully about AI and about data.

[25:54] And they produced what I think is an excellent piece of legislation which the UK still are in alignment with even though we've now left Europe.

[26:02] And that legislation is called the GDPR, and in a series of articles it explicitly references issues around privacy,

[26:12] ownership of personal data and particularly important the use of AI to process personal data,

[26:19] particularly insofar as it might relate to decisions of importance around the individual, perhaps career-affecting decisions, or perhaps highlighting an individual for closer attention by the police than they might otherwise receive.

[26:34] And in a series of articles, the GDPR outlines, I think, some great protections for the rights of the individual there, while still allowing there to be legitimate use of AI and of data, provided these provisions are met.

[26:51] And usually that means the data subject's got to give some form of consent for their data to be used and must be aware that their data is being used.

[27:01] So, yeah,

[27:02] I'm very happy that the UK is still in alignment with the GDPR. I think it doesn't solve all the problems around privacy and all the ethical problems that surround that, but at least it puts it on a firm legal standing in a way that

[27:16] I'm not sure this is the case in the us, but perhaps you can tell me about that in a second.

[27:22] But it's particularly relevant to me in my role as an adviser to FACT360, because what we do at FACT360,

[27:30] we process metadata around comms, and typically this is email comms, but it might be telephone comms, it might be WhatsApp messages, it can be any exchange of information, to infer signals about organizations, teams and individuals.

[27:46] And these signals can be used to warn of potential dangers like insider threat, when employees might be tempted to go rogue and harm an entity.

[27:58] It can also be used for good reasons. We can identify signals that correlate with bullying, with workplace harassment, with sexual harassment in the workplace. We can highlight when, say, a new management appointee who's perhaps been appointed not by merit but by favouritism, sadly, a practice that goes on all too often,

[28:18] in my experience.

[28:19] We can flag that perhaps, with a new appointment to a certain role, their performance is a lot worse:

[28:24] things have gone really downhill since X was appointed compared to when the previous role holder was in situ.

[28:32] And we can offer technology that allows companies to provide very interesting analytics on the health of a company at, say, board level, so that people on the board can get as unbiased a view as we can make it of what's going on in the company, without having recourse to things like pulse surveys,

[28:48] which are the traditional way in which management gets a feel for what's going on in a company: you have employees fill in forms laboriously every week or every month.

[28:58] You can imagine the sort of questions such questionnaires will ask: oh, are you happy in your job?

[29:03] How do you rate your manager? You might try filling them in honestly initially, but after you've done it the first three or four times, people tend to lose interest in it.

[29:13] Whereas at FACT360 we collate this data automatically. And because we're only processing the metadata around comms, it's never the case that we're having a nose into what Mrs.

[29:24] Miggins in management is saying to Fred Bloggs in accounts. We're not interested in that. We don't look at it; it's never touched.

[29:33] So when you're talking to, for example, unions, as I've done many times in the past when we're rolling these systems out, we can focus on the positives for the workforce: the help in giving hard data

[29:44] if there's any abuse going on in a workplace environment.

[29:47] Sadly, there sometimes can be. The fact that we're processing metadata, and that usually, as well, in the companies FACT360 works with, people have effectively signed away their rights for their emails to be processed when they first joined the company,

[30:01] means that in these situations the GDPR itself has limited recourse. But we like to go through with the unions and with the workforce the implications of using our software. We think the benefits of it far outweigh the risks.

[30:14] But then of course, I would say that.

[30:15] Debbie Reynolds: I think it's fascinating that you are using metadata in that way.

[30:20] I feel that metadata obviously has the capability to grow,

[30:27] to be able to be used in other ways. Like, for example,

[30:30] knowing, in a privacy or data protection situation, for example,

[30:36] why a particular document or piece of data was collected.

[30:42] So then, if it was going to be used for some other purpose, there may be some mechanism to say, hey, wait a minute,

[30:49] you may need to go back to legal, or, you know, this wasn't the intended purpose. But I want your thoughts on metadata in that way.

[30:57] J Mark Bishop: Well, the very first time we fielded our tech with a client for real, we did an evaluation with the client,

[31:05] our first client. So we were using it in the wild, as opposed to just demonstrating it on the Enron corpus, which is what everybody in this commercial world tends to demo their stuff on.

[31:15] The first time we used it in the wild, we were working for a big, big company, processing a lot of phone calls and messages,

[31:22] and the metadata around these.

[31:24] And within a minute or two of pressing the start button,

[31:27] what was astonishing to me is that, without any knowledge of the investigation, and we were using it here in a post-incident investigation capacity,

[31:34] we flagged up two or three of the key people whom their internal data investigation unit had had cause for concern about.

[31:44] So we demonstrated very rapidly that the technology can work.

[31:48] And certainly when we were practicing on the Enron corpus, the very first time I used the tech on that, it seemed like magic, black magic, that you can use a system that purely processes the fact that entity A made a communication with entity B at this time and at these places,

[32:08] that's all, and you create a complex telecoms graph, embedded in time, of these comms, and we were able to identify when people in Enron had been promoted without ever looking at the semantics of the messages.

[32:22] When people fell out of the chain of command at Enron, and they actually transpired to be the whistleblowers, we were able to identify these people purely from these signals. And yet we didn't look into the words of the messages that were being processed.
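FACT360's actual methods aren't described in detail here, but the general flavour of metadata (traffic) analysis can be sketched as follows: build a time-stamped who-contacted-whom record and watch a simple signal, such as how often a person is contacted, shift over time. Everything below (names, records, the cutoff date, the degree-change signal) is hypothetical and purely illustrative.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical metadata records: (sender, recipient, timestamp). No message
# content is ever inspected - only who contacted whom, and when.
events = [
    ("alice", "bob",   datetime(2001, 3, 1)),
    ("carol", "alice", datetime(2001, 3, 2)),
    ("dave",  "alice", datetime(2001, 3, 3)),
    ("alice", "carol", datetime(2001, 6, 1)),
    ("bob",   "carol", datetime(2001, 6, 2)),
    ("dave",  "carol", datetime(2001, 6, 3)),
]

def in_degree_by_window(events, cutoff):
    """Count inbound contacts per person before and after a cutoff date."""
    before, after = defaultdict(int), defaultdict(int)
    for sender, recipient, ts in events:
        (before if ts < cutoff else after)[recipient] += 1
    return before, after

before, after = in_degree_by_window(events, datetime(2001, 5, 1))
for person in {recipient for _, recipient, _ in events}:
    delta = after[person] - before[person]
    print(f"{person}: in-degree change {delta:+d}")

# A sharp rise or fall in how often someone is contacted can signal events
# like a promotion or a fall from the chain of command - purely from metadata.
```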

[32:37] So it certainly is a powerful technique. And in fact, our motivation for working on this came from one of my staff at the Centre for Intelligent Data Analytics reading a book by Gordon Welchman, who was one of the people that worked in Hut 6 in World War II to help break the German Axis codes.

[32:55] And unlike Turing, who is very widely known about in the UK, and who was interested in the semantics of what the coded messages actually said,

[33:03] Welchman was interested in what he called traffic analysis, what we would now call metadata analysis.

[33:09] And the critical thing is, of course, nearly all of Turing's work has long been out of the Official Secrets Act in the UK,

[33:15] so it's in the public domain.

[33:18] Most of Welchman's work still remains covered by the Official Secrets Act. And in fact, he used to have a very senior position in both UK and American intelligence, until that book was published.

[33:29] Because in that book he wrote a page about some of the work he did on traffic analysis, and he hadn't got security clearance to write about that. And even though he just gave a couple of paragraphs in very untechnical terms, just describing the things that we could do with metadata analysis,

[33:45] he lost everything.

[33:46] He was an old man at this point, and he lost all his security clearance in the States.

[33:51] He was completely ostracized by the intelligence community and he kind of died a very brokenhearted and an unhappy man. It was really sad. But the bottom line is the technology is incredibly powerful.

[34:02] And I don't know for certain, but my intuition is it was probably the tech that was used, in part, to help track down Osama bin Laden.

[34:10] It's so powerful and yet most people don't think about it. They're more concerned about what they've said in comms rather than when and where and to whom they've said things.

[34:22] And again, just to give a beautiful metaphor that brings this to the fore:

[34:25] it's very hard for me to lie about the fact

[34:28] that I'm having a conversation with you now, Debbie, from the UK at 4:44pm on Monday 21st July. That's the fact of the matter. Observers can observe that independently.

[34:42] Think of all the little white lies we might have told on a dating site profile, on our Facebook or our Instagram.

[34:50] I made this most beautiful dinner last night, when actually it wasn't quite as beautiful as it might have been.

[34:56] We can easily lie about what we say,

[34:59] but things that we do, transactions, are much harder to game.

[35:04] And that's what gives this technique its power. And that's what underscores the work we do at FACT360.

[35:10] Debbie Reynolds: That's fascinating. I love that you brought up this story, which I didn't know, about the gentleman, Welchman, who was looking at the metadata, the signals. I think that's fascinating, and I think it's true that a lot of Turing's work, and a lot of what people think about there, is around

[35:28] semantics. Very interesting.

[35:31] This is just a selfish question, because this is just my thought, but I want your thoughts here about Turing. I know that a lot of people talk about the imitation game and whether computers can think. In my view, I think what he was trying to say is: can people be fooled, in a way that is harmful, because they think that something is intelligent?

[35:56] That's kind of my thought.

[35:58] J Mark Bishop: Well, it's interesting. I got involved in the UK in the Alan Turing Centenary Committee, and one of our goals therein was to get an official pardon for Turing. Because as I'm sure you're aware,

[36:10] after the work he did in World War II, which undoubtedly helped shorten World War II by breaking the Enigma ciphers and codes, a few years after the end of the war he was prosecuted for obscene behavior and chemically castrated.

[36:28] And not long after that,

[36:31] most people believe he took his own life by biting on a poisoned apple; incidentally, the apple itself is something that, as a young gay man, he'd always been interested in, the metaphors around an apple, as illustrated for example in fairy tales like Snow White.

[36:46] It's interesting that Turing took his own life by biting a cyanide-laced apple. As you may be aware, Turing's first love was a guy he met at school.

[36:55] A platonic friendship obviously, but he made a big impact on Turing's life. And Turing was devastated when this guy died.

[37:03] And I think throughout his life he was kind of wondering what it'd be like to speak to an AI that encapsulated some aspects of this guy's behavior.

[37:15] And again, coincidentally, we can actually do this now with ChatGPT. One of my academic friends,

[37:20] Professor Luciano Floridi, has trained a chatbot on all of Luciano's body of philosophical work, which is quite a lot, because Luciano invented the philosophy of information.

[37:31] He's quite a big name in philosophy.

[37:33] And it's just incredible. Now you can interact with this chatbot and it's like you're talking to Luciano. It is kind of interesting, getting the chatbot's ideas on aspects of Luciano's work in the philosophy of information.

[37:47] So I think Turing was thinking about issues around this. My own belief, from reading not all, but a fair proportion, of Turing's output, is that he was caught between, on the one hand, his work in the paper on computable numbers from 1936,

[38:03] where he looked at non-computable decision problems in computer science, the most famous of which is what we call the halting problem, which you might be aware of. Just for your audience:

[38:13] the halting problem, in simple terms, says: can we write a general-purpose program that will take another program as input, and some data for that other program, and tell you whether that other program will terminate given that data as input? It transpires

[38:29] you can't write a general-purpose program to solve that.

[38:32] And there's a host of other problems, decision problems that you can't write computational solutions for.
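For listeners who want to see why no general-purpose halting decider can exist, here is the standard diagonal argument sketched in Python. The function halts below is purely hypothetical (no such function can actually be written); the point of the sketch is only that assuming it exists leads to a contradiction.

```python
# Sketch of the classic diagonal argument. Suppose, for contradiction, that a
# general-purpose function halts(program, data) existed and always returned
# True or False correctly. The body below is a placeholder: it is hypothetical.

def halts(program, data) -> bool:
    raise NotImplementedError("no such general-purpose decider can exist")

def troublemaker(program):
    # Ask the supposed decider what `program` does when fed ITSELF as input...
    if halts(program, program):
        while True:        # ...and then do the opposite: loop forever,
            pass
    return "done"          # ...or halt immediately.

# Now ask: does troublemaker(troublemaker) halt?
# If halts says yes, troublemaker loops forever; if it says no, it halts.
# Either answer contradicts the decider, so no such decider can be written.
```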

[38:38] So that was something. And people have speculated about that. If you look back at the work of Kurt Gödel, whose work is intimately related to Turing's work on computability,

[38:50] Gödel famously began to think that there was something non-computational about the mind. And some people wondered whether Turing was thinking that. But I think, looking at his wider writings and his radio broadcasts, my own view is that Turing did believe that the essence of the mind could be encapsulated by a computer program.

[39:10] He took a different view to me. So even though Turing remains for me one of my intellectual heroes,

[39:16] I think we probably would differ in our ideas about whether computers really can be said to think. Again, refer to the famous paper Computing Machinery and Intelligence, which starts off with a question, not altogether rhetorical:

[39:28] I want to consider the question, can a machine think?

[39:30] And then he goes on to say, well, this is a bit too vague and we don't know what we mean by that. I'm going to replace that by a practical process.

[39:38] And if we say that a computer can pass this process, which we now know as the Turing Test,

[39:43] and again, if any of your listeners haven't come across this, that process basically says,

[39:48] let's just unwind this a little bit further. The Turing Test was based on an imagined Victorian-esque parlor game that Turing reflected on, where you imagine being in an old Victorian house in the UK, where you had a man go into one room and a woman go into another,

[40:04] and then a set of interrogators. By the way, I played this game with my master's students many times when I used to teach at Goldsmiths, and it's actually very difficult to play and win at.

[40:13] So you've got the man in one room, the woman in another, and either one or more interrogators outside of those two rooms. And the interrogator, by passing questions on paper through a letterbox in the door to both the man and the woman, has got to work out which room holds the man and which room holds the woman purely from their answers.

[40:36] And their answers have to be typed, so you can't infer anything from the style in which the answers are written. They're just looking at

[40:43] the words that have been typed on the paper. Can you reliably do that? Well, Turing said, if you replace one of these people, say you replace the guy with a computer,

[40:53] can the interrogator correctly identify which room has the human and which has the computer as often as the interrogator could determine which room holds the man and which room holds the woman in the imitation game?

[41:06] If you can do that,

[41:09] if, by putting a computer in that room, the computer can perform at such a level that the interrogator can do no better than in the original imitation game, then we say of that computer that it's passed the Turing Test.

[41:19] And Turing famously imagined that we would get computers that would do that by the year 2000.

[41:24] Well, it took a few years more than Turing imagined, but I think most people would agree that the best chatbots now can certainly do pretty well at

[41:37] Turing's so-called test.

[41:39] Debbie Reynolds: Very good, very good.

[41:42] Well, if it were the world according to you,

[41:45] Mark, and we did everything you said, what would be your wish for privacy, AI, technology,

[41:52] computing, anywhere in the world, whether that be human behavior, technology or regulation?

[41:59] J Mark Bishop: It kind of saddens me now that the tech bros are investing so much money. We're talking about Meta building a data server farm that'll be the size of Manhattan. Just think about that for a minute.

[42:13] A factory containing servers the size of Manhattan. And that's just one of the sites he wants to develop. The amount of energy that we're talking about:

[42:24] soon, the amount of energy used by servers and data centers around the world will be the same as the amount used by Japan.

[42:32] A country the size of Japan. And in the short term, a lot of this extra energy is being met by burning fossil fuels.

[42:40] My own opinion is I've been very strongly swayed by the evidence of global warming. And the thought of having to burn more fossil fuels to fulfill the tech bros' whims, that we might build an artificial general intelligence system, is just obscene.

[42:56] And so I find the waste of resources obscene. When you think of people living in energy poverty around the world, and yet we're throwing energy at these obscene centers,

[43:09] it just makes my heart really sad. You can do an awful lot of good work in AI with a computer that fits on your desktop. You don't need a computer the size of Manhattan to do interesting things.

[43:19] In AI,

[43:21] people have latched onto what's called the scaling law. People thought, well,

[43:25] GPT-3 was better than GPT-2, and 4 was better than 3, and we had tens of billions more neurons in 3 than in 2, and the same for 4 over 3, so if we just simulate more neurons, we're bound to get this intelligence explosion.

[43:42] It's a nonsense.

[43:43] Think about trying to get to the moon: no matter how high the tree, you'll never get to the moon by climbing up a tree.

[43:51] And equally well, you ain't going to get AGI by building larger and larger data centers. It just ain't going to happen.

[43:59] So stop wasting the money and the planet's resources on these obscene fantasies. That would be my wish.

[44:05] Debbie Reynolds: That's a great wish. That's a great wish. Oh my gosh. Well, this is so enlightening. Thank you so much for being here. I could talk to you for hours. This is amazing.

[44:14] And so, everyone, please do follow Mark on LinkedIn. I love the things that you post and the things that you write. And I was really, really happy that you agreed to do this show.

[44:24] And I know that the audience will love it as much as I do.

[44:28] J Mark Bishop: I thank you for your kindness in inviting me.

[44:30] Debbie Reynolds: Yeah, well, thank you. I really appreciate it. I really appreciate it. And I'll talk to you soon. Thank you.

[44:36] J Mark Bishop: Thank you. Bye bye.

[44:37] Debbie Reynolds: Okay, bye bye.

