Paul (00:01.678) Hello and welcome to this episode of the Curious Advantage podcast. My name is Paul Ashcroft. I'm one of the co-authors of the book, The Curious Advantage. And today I'm here with my co-author, Garrick Jones. Unfortunately, Simon Brown can't be with us today, but we are delighted to be joined by Andy Clark. Hi Andy.
Garrick (00:14.279) Hi, there.
Andy Clark (00:22.376) Hi, it's great to be here, thanks for having me.
Paul (00:24.78) Welcome to the Curious Advantage podcast. Andy, you're a professor of cognitive philosophy and an author of several books, including the recently released The Experience Machine: How Our Minds Predict and Shape Reality. Can you tell us about your journey and how you got to where you are today?
Andy Clark (00:46.34) Yeah, sure. Well, I'll make it reasonably brief. I grew up in a working class household in South London. My dad was a policeman. My mum was a housewife taking care of four boys. But she loved words and poetry. My dad read science fiction and loved maths and cosmology. And I think between them, they just installed a kind of sense of wonder about the universe and how it all worked. So I was the first in my family to go to university.
But when I got there, I stumbled on philosophy and I kind of fell in love with two things about it. One was its ability to ask really big questions, because I like that and still do. But the other was its willingness to look anywhere at all for an answer. And at about that time, computing and psychology and neuroscience and AI were just starting to appear in philosophy of mind, pioneered by people like the late Daniel Dennett. And at that point, I guess I started to think
that I would like to do that, to bring science to bear on big philosophical questions about mind and consciousness. And in the end, I've been lucky enough to end up doing that, mostly. So yeah, that's pretty much how it went.
Paul (01:55.083) And I have to say that's one of my favourite journeys I've heard on the podcast, where you say, look anywhere, anywhere for an answer. What a wonderful way to start our conversation. Let's dive right in. In your book, The Experience Machine, you talk about how our minds predict and shape reality, and you talk about the brain as a predictor. Tell us what you mean by that, please.
Andy Clark (02:21.852) Yeah, in a way this is a hard question because that's kind of what the whole book is about. But I think the starting point is a relatively recent revolution, I'd date it to about 2005, in how cognitive neuroscience thinks about the main thing that the brain's doing. Traditional approaches, particularly approaches to perception, kind of thought of it as a flow of information coming in from the world outside,
pushing deeper and deeper into the brain, recruiting more and more stuff along the way, and towards the end being brought into contact with stored knowledge, memory and all the rest of it. What the newer picture did was flip that profile exactly on its head. Instead of taking in information from the world, the brain emerges as this kind of constantly active thing that is always busy trying to predict its own incoming sensory streams.
In order to do that, it needs a model of how things are likely to be. So it develops a generative model in pretty much the same sense, although we'll come back to that later, I think, as those currently emerging in generative AI. So it has a model of how things are likely to be, how they should alter as we act. And what we experience moment by moment is then the result of varying combinations of what the model's predicting and what the incoming
energies coming from the world through our sense organs are actually suggesting. So what we experience is really our best guess given a mixture of sensory signal and kind of model-based prediction. And that's normally a good thing. It lets us cope with noisy, ambiguous and incomplete sensory evidence by factoring in what we already know about how things tend to be.
As a quick example here, there's a famous illusion called the hollow mask illusion. You get an ordinary joke store mask and you light it from behind with the concave side facing you and you retire to three or four foot away and look at it. When you do that, it will look as if the other side of the mask is facing you. It will look as if it's a perfectly convex face like faces normally are with the nose sticking outwards. And that's because our brains are really, really good at faces.
Andy Clark (04:39.878) and have learned an awful lot about the normal shape of faces, including that noses point outwards. Faces are convex. And because that's such a strong low-level prediction in the brain, it's allowed to swamp the actual good sensory information that would be telling you that this is the concave side of the mask. So when you see that nose sticking out, that's a kind of example of...
perception as a controlled hallucination, if you like. In this case, the control is not quite perfect because something false is being suggested by the way that we use too much prediction against the current sensory evidence. But normally this really works very well and we get to kind of fine-tune what we touch and see and hear and feel according to what we already know about how things are likely to be in the world.
Andy Clark (05:37.192) So that's the big picture. I'll stop there. I knew that was going to take a bit of a while, but that's the idea of controlled hallucination.
Paul (05:41.747) No, it's fascinating. Andy, that's fascinating, there's a lot in there that you were describing, so let's try and unpack some of it. At the heart of it, you're talking about the brain as a prediction machine. And the one thing I wanted to pick up on is that I heard you say its ability, and its preference, to predict can overrule
Andy Clark (06:41.98) Yeah.
Paul (06:59.327) the sensory input that your brain is getting, i.e. what is actually happening in the world around you. Tell us a little bit more about that and what this means in terms of how we're seeing the world.
Andy Clark (06:59.762) Yeah.
Andy Clark (07:10.054) Yeah, you're right to ask about that because that's kind of the major part of the new picture, if you like, is that the brain is constantly doing two things, really. One thing is it's trying to know how things are in the body and the world. But the other thing is it's estimating how good its own sources of evidence are. So it's estimating how good its predictions are and how good its current sensory information is.
It mixes those two things together in different ways according to which is currently estimated to be better. So, you know, on a foggy day you're probably going to down-weight incoming visual information and up-weight what you already know about the very familiar backyard path that you're currently walking down. So that would be a case of up-weighting prediction over sensory evidence.
Other times you might upweight the sensory evidence over predictions. And actually some people think that that's probably what's going on in autism spectrum condition. A sort of constant tendency to highly weight the incoming sensory evidence, making it a little bit harder to pick out faint patterns in the world. Because the way we normally do that when we're experts is by using what we already know to...
Garrick (08:31.795) Hmm.
Andy Clark (08:32.176) if you like, amplify some bits of the sensory signal and downplay other bits. So I think there's a lot of interest in applications in what's sometimes now called computational psychiatry that speak directly to this sort of balancing act between turning up the volume, the post-synaptic gain, technically here, on the prediction bits and turning up the gain on the incoming sensory information.
Garrick (08:58.983) Mm-hmm. I mean, that's...
Andy Clark (09:00.444) So the technical notion here is precision, and the brain is constantly estimating the precision, which is the inverse variance, of its own predictions and prediction errors. So it's basically just saying, you know, how should I weight these things, given everything that I know, in order to have the best chance of getting things right? But of course we don't always get things right. And sometimes even when we, the agent,
Garrick (09:23.709) Mm-hmm.
Andy Clark (09:28.226) know that we're making a mistake, it doesn't seem to make any difference to what lower level bits of the brain are doing. That, guess, is what we saw in that hollow mask illusion that I was just describing. You know very well that it's the hollow side of the mask that's facing you, but because the low level prediction machinery is absolutely fixated on noses sticking outwards, it's very, very, very hard for us to ever see that.
Garrick (09:52.199) And do we know anything about where those weightings lie? Does the brain tend to favor our internal experience and our internal models against incoming sensory perception? Is it more expensive in terms of neurological resources to do one or the other? Do we know anything about that, or is it still early days?
Andy Clark (10:15.512) It's early days. The first thing to say there is that it is very, very variable. There'll be variation across different individuals, there'll be variation in the same individual at different times, and there'll be huge variation according to circumstances. So, you know, like the foggy day example, causing me to downgrade a lot of visual information, maybe upgrade auditory information, upgrade what I know about the shape of the track.
It is known what kinds of neuronal systems are playing the biggest role there, particularly neurotransmitter systems. The dopaminergic system and all of the other neurotransmitter systems seem to be deeply in this business of performing the precision-weighting balancing act, if you like.
Garrick (11:08.093) You say that error fuels curiosity. How does that relate to all of this, you know, the learning and the adaptation and the expert decision-making shaped by our expectations of outcomes and so on?
Andy Clark (11:25.052) Yeah, there's a lot to say there. I think the first thing is that... So these accounts, I call them predictive processing accounts, other people call them different things, active inference is sometimes used, but they all have in common this idea of brains as prediction engines. The important thing there, I think, is that we're not just trying to drive down prediction error at this moment in time. We're trying to drive down prediction error across time. So sort of...
You want minimal long-term prediction error. And in order to do that, brains like ours will sometimes seek out error. They have a liking for what this literature describes as good error dynamics. So think about if you go into a room and there's a range of things available to do in that room, you're probably not going to do the most boring thing, and you're probably not going to do the most difficult thing.
you'll probably find yourself drawn towards whichever bit of the environment enables you to resolve a little bit more than expected amounts of errors as you interact with it. There's quite a large body of work on this now showing that if you have that kind of profile, then you're going to inhabit the bits of your environment that will teach you the most, where you're going to learn the most, because they're the bits where you're resolving a bit more than expected amounts of prediction error.
If it's too hard, you really can't resolve any at all. If it's too easy, you don't resolve any unexpected error. So there's that sweet spot on the error landscape, which is where we most like to be. I guess that evolution has installed that liking in us because we're informavores and we develop rich models driven by lots and lots of learning and experience. And so that's, I think, the route to curiosity.
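As a rough illustration of those good error dynamics, here is a toy Python sketch, not from the episode, of an agent that prefers whichever activity lets it resolve the most prediction error, which is neither the easiest nor the hardest option. The activities and numbers are invented.

```python
# Toy sketch of "good error dynamics": prefer activities where you expect to resolve
# more prediction error than usual -- not the easiest task (nothing left to learn)
# and not the hardest (error you cannot yet resolve). All numbers are illustrative.

activities = {
    # name: (prediction_error_encountered, fraction_the_agent_can_resolve)
    "reread_a_familiar_book":   (0.1, 0.95),  # too easy: little error to resolve
    "learn_some_new_chords":    (0.6, 0.60),  # sweet spot: lots of resolvable error
    "read_a_maths_paper_cold":  (0.9, 0.05),  # too hard: error stays unresolved
}

def expected_learning(error, resolvable_fraction):
    """Crude proxy for learning: how much prediction error we expect to resolve."""
    return error * resolvable_fraction

best = max(activities, key=lambda a: expected_learning(*activities[a]))
for name, (err, frac) in activities.items():
    print(f"{name:26s} expected error resolved = {expected_learning(err, frac):.2f}")
print("chosen activity:", best)
```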
Garrick (12:59.411) Hmm.
Garrick (13:20.285) Yes. How does this relate to the idea that we are attracted to difference? Something that we are used to seeing, we might actually become willfully blind to, or just take for granted. But something that changes, or a change in the environment, or a difference that we may not expect, attracts us.
Andy Clark (13:43.74) Yeah, I mean those issues cut in all different directions, as many psychological experiments have been busy trying to pull apart. So, you know, on the one hand, difference, if we manage to spot a difference or be looking in the right place at the right time with roughly the right mindset, then we're very interested in difference, because that's somewhere that we're going to learn something. I think that's the kind of thought you're having there.
Andy Clark (14:13.34) But at the same time, because our brains are so heavily driven to kind of harvest the sensory signals that they predict ought to be out there, and because our own predictions help shape the way that those sensory signals strike us, then sometimes we'll also miss really obvious things that are kind of staring us in the face. And you'll have come across the example of, you know, the gorilla.
Andy Clark (14:38.344) The person in the gorilla suit walking across the back of the kind of game where people are passing the ball around. In cases like that, because our attention is on the passing of the ball, wherever we attend is where we are giving the most weight to prediction error signals. And so by pulling our attention to the wrong place,
Andy Clark (15:05.734) we'll tend to miss other things, even if in retrospect they should have been really obvious and really important, because without attending to that area, prediction errors from that bit won't be real for us, won't make any real difference. They'll just be swamped, they'll be treated as noise. That's what the brain is doing when you see the hollow mask illusion the wrong way, if you like: the brain is treating perfectly good information about the concavity of the nose bit
Andy Clark (15:34.02) as if it was just noise, uninformative noise.
So it cuts both ways, I think, Garrick, is what I want to say there. It just depends how attention is being distributed, which is itself determined by the model that you bring to bear.
Paul (15:52.274) So Andy, would you say then that our brains are wired to reduce uncertainty? Do they prefer a world in which they can happily predict what's going to happen, where frankly the noisy world is just getting in the way? And if that's the case, what does that mean in terms of learning, exploring, being curious, being out there learning something new in the world?
Andy Clark (18:52.722) Yeah, I mean there was a puzzle that people used to think was possibly a fatal worry about the predictive processing framework and it was called the darkened room worry and the worry went like this, look if brains are really constantly trying to push down their own uncertainty, trying to reduce prediction error, why don't we just find a nice sort of cozy dark corner somewhere
Perhaps somewhere where we've got some food and water, because maybe our brains chronically predict food and water and so you're going to kind of need that. But other than that, you just stay there. Perfect prediction of an unchanging sensory environment for the most part, a barely changing inner environment. Why wouldn't we like that? You know, I think the answer to that problem is mostly in this realm of error dynamics that we've been talking about. That brains like ours deeply
prefer to be playing the long game. The long game is trying to improve, is trying to get a really good model of the world. And in order to get a really good model of the world, you constantly have to be reducing more than expected amounts of prediction error. You have to be in those good learning spaces that we mentioned earlier. And I don't think that will ever stop. If you improve your model of the world to some certain amount,
then you'll go looking for bits of the world that will allow you to improve it a little bit more. And so that sort of dynamic is going to keep on pushing us to the bits of the world where we can learn a few interesting things. So although it's right to say that our brains don't like uncertainty and that they're trying to drive it down, they're always playing the long game. And that means embracing uncertainty so as to constantly improve the model so that you'll be able to
reduce even more uncertainty moment by moment at certain times in the future. And there's this sort of exploit-what-you-know versus explore-the-environment balancing act, another balancing act that is constantly being performed by the brain in order to do that.
Garrick (21:03.347) Hmm. You suggest that even pain is shaped by expectation. And what does that mean for how we deal with discomfort, and mental health, for example?
Andy Clark (21:16.678) Yeah, I mean pain is a really interesting case here. The starting point of course is that the states of our own body are just more things that the brain wants to know about. So in a way it treats our own body just the way it treats the world. And that means it's trying to merge two sources of information: signals coming from the body, and how it expects those signals to be. And in the case of pain, you know, this is why dentists will say to you, hey, this will just be a tickle.
because by changing your expectations about the incoming signal and about the experience you're about to have, you actually change the experience. Huge bodies of work now show that if you kind of increase people's expectations of pain, they will report more pain. If you decrease them, they'll report less pain. As long as they believe you, as it were; you have to be a trusted source
Garrick (21:56.477) Yeah.
Andy Clark (22:14.408) for those predictions, if that's going to work. Lots of stuff on placebo effects falls into place here. For many things, particularly things that involve fatigue and anxiety and so on, and pain, then placebo effects can be very, very strong. Placebo pain killing effects are sometimes as strong as opiate based effects. There's an example that I give in the book of a construction worker.
who fell on a seven-inch nail, or at least that's what it looked like to them. They were given, what was it, fentanyl and midazolam, so a benzo and a strong opiate, and taken to hospital, where radiography rapidly revealed that the nail had missed the foot altogether and passed between two of the toes. Nonetheless, real pain was being felt,
Garrick (22:47.475) Yeah.
Andy Clark (23:09.864) that it was genuine agony that the construction worker was experiencing because they had very good visual evidence of serious bodily damage, the long nail sticking through the shoe. And that was a source of a mixture in which the expectation is driving the pain experience itself.
Garrick (23:10.11) Yeah.
Garrick (23:24.669) Yes.
Garrick (23:29.821) I just loved the film that came out recently with Jesse Eisenberg and Kieran Culkin called A Real Pain, which was about two guys, I don't know if you've seen it, who go back to Poland where their grandmother came from and experience not only the history of that, but also experience their own relationships. And one of them is very anxious about the world and plans everything. And the other one is just...
involved in the world but is feeling a lot of pain and is always offsetting their behaviour as a result, and this kind of dynamic between them. It speaks a little to what you're talking about as well. Just a brilliant movie, for which Kieran Culkin got an Oscar, and well deserved I think. But it makes me think about learning new things and going to new places. What does it mean for learning new things, this expectation of pain, for example, or
or newness or new stuff or the expectation of stuff when we go into a new context. What are the implications for us and how we should approach things, do you think?
Andy Clark (24:41.96) Yeah, there is some kind of sort of tension in the way I think that predictive brains work in these sorts of scenarios because very often if you want to minimize more than expected amounts of prediction error, this nice slope of error minimization, in order to do it you ought to plonk yourself into a brand new environment and yet
we're simultaneously rather averse to plonking ourselves into brand new environments, because that's exactly where we've sometimes experienced anxiety, where we don't know exactly how things are going to pan out. So we're kind of pulled in both directions, and I think the moral of the story is that we probably ought to try to curate our own exploratory-ness a little bit more, introduce a certain amount of discomfort.

Andy Clark (25:45.116) Maybe curate a little bit more uncertainty in order to get us into those good places of reducing serious amounts of prediction error.
Paul (25:51.52) Yeah. And Andy, I wanted to follow up by asking about the differences between us and our brains. We know that some people are attracted to
Andy Clark (26:12.178) Yeah.
Paul (26:19.232) more uncertainty. They'll do things in their lives that may have huge amounts of risk, either in their work or in their hobbies. Others want to leave a very, very safe, as predictable an experience as possible. What have you noticed in your work about the difference between us in terms of our brains as predictive machines? Are some people more predictive than others?
Andy Clark (26:26.6) Yeah.
Andy Clark (26:36.168) Yeah.
Andy Clark (26:44.232) Yeah.
Andy Clark (26:49.658) Yeah, I think the kind of tolerance for error is probably greater in some people than others. So some people will tend towards exploiting what they know, and that means inhabiting environments where you know you can do well and harvesting the rewards that are there. And others have a greater tolerance for the unknown and will kind of embark on these
Andy Clark (27:19.962) more to many people threatening journeys through an aerial landscape. I think as a species or a population, having a spread across individuals like that is certainly beneficial because sometimes the environment changes suddenly and if the environment changes suddenly then you need to have had lots of people that are exploring at the edges of current sort of understandings in order to be more able to deal with those sorts of changes.
At other times the environment stays pretty stable, in which case there's an awful lot of kind of harvesting to be done. And so the people that are less exploratory are probably going to perform, more for us as a species at that moment. So overall I think it's just a good thing that we have a spread of these tolerances across individuals.
Paul (28:05.467) We're talking with Andy Clark. Andy is professor of cognitive philosophy at the University of Sussex in the UK. He's the author of several books, including Being There, Supersizing the Mind, Natural-Born Cyborgs, Surfing Uncertainty and most recently The Experience Machine. He's a fellow of the Royal Society of Edinburgh and the British Academy and an occasional academic consultant
for Google UK. Andy, we just started talking about the environments around us, and the second part of the subtitle of the book is how our minds predict and shape reality. Can you tell us a little bit about how our brains are shaping our environment and the reality around us? Because that's quite an unusual observation.
Andy Clark (28:59.804) Yeah, I mean I think that the notion of shaping experience hopefully has started to come out in this chat already. So the idea there is just that predictions are affecting our experiences in a pretty deep way. Nothing that we experience is kind of raw and unfiltered. It's always this sort of mixture. The other thing though, asking about the environment itself, is really interesting because
One of the things that we humans are particularly good at is kind of changing our environment so as to curate flows of prediction error signals that will be beneficial to us. I think we see ourselves doing this, for example, in science where you instigate procedures of peer review and conferences and chats with other people. In all of these ways, we've kind of created worlds that force us to put ideas out there.
so that they can be tested experimentally and in conversation and so on. And that of course is a great collective way of harvesting useful prediction error that can improve the model. I think science here is just a kind of a version of what brains are doing, but pumped up with an awful lot of environmental niche construction that is enabling us to in many ways go further than other animals on the planet.
And if you think about art and literature, I think they're probably in the same ballgame. That what we're doing in art is challenging deep set assumptions about who we are and what we like. And what we're doing in literature is leading other minds through probability landscapes, building up expectations and then challenging those expectations and then building them up again. Music seems to have that same sort of structure. So I think that we humans are kind of...
masters of creating these artificial kinds of environments that do interesting things to prediction systems and predictive brains.
Garrick (31:07.369) I love this conversation. I mean, there are so many things I want to ask you about, you know, the art and the music, for example, the great shifts in art and music. You could argue on one hand that
they mimic or reflect back what's happening in broader society and where people's thinking is, you know, the great shifts from romanticism to modernism to sort of atonal music, for example, where completely predictive systems were thrown out. You know, when Stravinsky came in 100 years ago, people literally tore the concert hall apart and had a riot.
Nowadays, you can go to a quartet playing completely atonal music and people sit through it and clap
politely afterwards. I mean, things have changed so much, and yet there's new stuff coming. There's all the electronic stuff coming. You think in literature of the shifts, you know, to modernism, the great literary authors who kind of changed our ideas of how we perceive sentences and grammatical structure. You know, again, they're messing with the predictive systems. This makes so much sense to me. But I worry. Here's the question: it's about the computer and the digital era that we inhabit.
Andy Clark (31:54.93) Yeah. Yeah.
Andy Clark (32:08.701) Yeah.
Garrick (32:23.037) Now, similar things are going on, but they're in a different domain in our 21st century. We've got computerization and digitization, the internet and hyper-connectivity and algorithms. And you talk about the predictive processes in the brain, and us always trying to optimize for the errors that we're dealing with and making those kinds of decisions. But when...
Andy Clark (32:37.117) Yeah.
Garrick (32:49.701) You talked briefly before about science sort of mimicking what's going on in the brain. Is it dangerous? I mean, you know, because the algorithms are shaping our digital environments, recreating predictive systems; AI systems and large language models are predictive in many ways. Things are becoming like echo chambers, you know, especially if you're dealing with Google or Netflix or any of these things that are serving you stuff that you've asked for before.
Andy Clark (33:03.528) Yeah.
Andy Clark (33:07.698) Yep, absolutely.
Garrick (33:18.119) And you can see that you ask a question and suddenly you get fed a whole bunch of advertising in response to something that you were interested in a week or two back. But it might also affect the kinds of news that we are being fed, you know, more and more. Are we in danger of being in echo chambers? And how can we maintain our critical curiosity in this era
Andy Clark (33:41.798) Yeah.
Garrick (33:47.355) of algorithmic influence. So there's my question. It's really about what's your view on how our understanding of the brain is informing how we code and create the internet and the digital realm and is that a good or a bad thing?
Andy Clark (33:48.114) Yeah.
Andy Clark (33:51.463) Yeah.
Andy Clark (33:59.474) Yeah.
Andy Clark (34:03.75) Yeah, a huge question. Well, so we've always shared the world with other prediction systems, you know, other humans, animals. But the thing with those other prediction systems is that they've got an awful lot more in common with us, I think even the non-human animals do, than the current waves of rather alien intelligence that are also predictive, that we're kind of wrapping ourselves in right now.
Garrick (34:06.001) Sorry.
Andy Clark (34:33.576) The big difference there is action. our prediction mechanisms and those of other animals and even plants actually, although whether plants predict is a whole sub-genre of discussion here which we probably won't get into. But all of those systems act in one way or another. Even plants kind of get to move their roots around in response to different conditions and looking for different kinds of things.
Action makes a big difference. I think it anchors your predictions in how things are in the world, because we learn by intervening in the world. Whereas the other systems are kind of learning by seeing lots and lots of data. We don't just see data, we intervene and we try to pull out good data. We try to bring better data into being by intervening in the world. So we're surrounded now by these very powerful but relatively passive
prediction systems, the AIs, which also don't particularly have our interests at heart. Often they have interests of a kind that are much more economic, you know, they want us to buy things, or to look at certain things, or to be served certain pages, or to vote a certain way. So because of that, I think that those dangers that you describe are very real. But there are ways to push back against them. I think the most
important one will be better education. We just need to kind of launch new forms of basic education that are all about how to manage our interactions so as to get the most out of these rather alien prediction systems that we've now surrounded ourselves with because they're super powerful tools used properly. They're wonderful things to form a kind of cognitive ecosystem with. That really means sort of
sort of installing a set of tools to enable people to sort good evidence from bad. And there's a lot of things that are kind of commonplace in epistemology and philosophy of science that are now suddenly relevant again in these new dimensions. So like the importance of multiple independent sources of evidence. Don't just go down.
Andy Clark (36:57.53) a kind of hole where you're getting papers upon papers that are all kind of written by the same team, and they're all citing other papers that are more or less written by people that agree with them. There are lots of holes like that out there. I've been down a few myself, and some people think I'm down one right now with all this stuff on predictive processing. So, you know, you've always got to be asking yourself, am I looking sufficiently far outside of my comfort zone again
in order to be sure that there really are multiple independent sources of evidence here? How do I check things? What's authoritative and what isn't? There are lots of clues and cues for all of this. And I think we just need to be really expert at picking up on them. And we also need to build infrastructure that will help us do that. And of course, that's a whole other discussion, more for AI scientists than for us, for me.
Paul (37:52.334) I do want to pick up on the AI side of things. Although I also want to ask you, you mentioned earlier that we create the predictive environments we inhabit, and I wonder if that keeps you up at night sometimes, wondering whether you're just predicting a world in which we're living with predictive brains.
Andy Clark (38:16.296) Yeah, well I must be because certainly since immersing myself in this I tend to see everything in these terms and that's exactly what having a strong, possibly overconfident model is going to do to you. It's going to make you sort of pull out certain patterns in everything that you experience and push down other patterns so you seem to be getting confirming evidence when actually you're just creating that evidence from whole cloth if you like. It's an ongoing...
Andy Clark (38:45.222) danger. I think actually learning about predictive brains kind of helps keep us constantly alert to that danger. So I think it could help inoculate us to some degree. But because it's so natural for us to behave that way, we just have to keep pulling ourselves back. And again, you have to keep putting yourself in environments where your ideas will be challenged and questions will come from all corners and all angles. I think that's super important.
Paul (38:55.831) So is there an overarching danger that you end up just basically perceiving and acting in the world that you expect to perceive and act in?
Andy Clark (39:14.344) I suspect that the world is too good at kind of giving us a kick, a bad kick, for that to happen. We're too often required to get to grips with things in a way where they just won't work if your theory is that wrong.
Andy Clark (39:41.978) So again, I think intervention and seeing what happens.
Garrick (39:44.56) I was gonna say you should try having Paul Ashcroft as your co-author.
Andy Clark (39:49.768) Exactly, yeah, well that's right, I think. And you know, I think the coalitions of authors, again, will be a bit like a microcosm of the sort of the good spread of traits across the species. You kind of want someone that wants to build a big picture, you want someone else that wants to knock the big picture down, someone that's very interested in going off at different tangents, someone else that wants to mine everything that's right there in what you've got available.
Andy Clark (40:19.784) You know, that makes a good team.
Garrick (40:22.345) That's right.
Paul (40:23.258) So Andy, to continue then with the thoughts about AI: what you've continually referred to is that the brain is building a model.
Right. And those of us that follow the world of artificial intelligence will know that the basis of generative AI is building a large language model, on which, essentially, the prediction of the gen AI can operate. So tell us a bit about that. What's your experience? What have you noticed in terms of the similarities and differences between how these algorithms, these systems, are operating and
our brains? You know, where are some of the fundamental similarities and differences?
Andy Clark (41:06.278) Yeah, well you know the fundamental similarity is the generative nature of the underlying model. So in large language models and other foundation models in AI, what makes a model generative is that because it's been trained on a lot of data of some kind, it's able to produce kind of fake versions of that data for itself using just what it knows about patterns in that data set.
Paul (41:14.659) Yeah.
Andy Clark (41:35.964) So if predictive processing pictures are right, that's exactly what brains are doing when we perceive the world. The brain is trying to generate for itself the sensory information that it thinks you should be getting right now from the world. So it is kind of doing the perception from the top down, as they sometimes say here, from the deep inside pushing outwards towards the sensory peripheries. So in that respect, both of them
Andy Clark (42:04.476) both AI and brains have as their most important component a generative model of something. But in the case of brains, it's a generative model mostly of how things will alter if I intervene in the world or if I act in the world. So, you know, my model of how the world is, the one that I use to perceive the world, is actually mostly about how what's coming in through my senses is going to alter as I move my head around, as I kind of push something on the table, as I engage in a certain kind of conversation actually, so it goes to a higher level too. But just sticking with the low levels, I think it's because our generative model is built around action and the effects of action that we're anchored in how things are in the world in a way that the AIs aren't. And also we know a lot of stuff
Andy Clark (43:03.022) about the results of action and causation, which they only get a kind of echo of. So there are lots of echoes of action and causation in those vast data sets. You know, one movie frame follows another, and you can kind of see what happens when you throw a ball at someone and someone catches it. But in order to be kind of fully able to appreciate what's going on in those scenarios, to really understand the meaning,
Andy Clark (43:33.146) I think you have to be using a generative model that is itself geared to intervention and action, rather than one that looks at passive data sets. That's why I keep saying that I still think that these are alien intelligences. Excuse me. So that's the main difference, I think. And should we be trying to... well, some people build AIs a bit like that. Robotics-based local AIs are often like that.
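A schematic way to picture the distinction Andy is drawing, offered here as an illustrative sketch rather than anything from the episode: today's large models mostly learn to predict the next bit of data from past data alone, whereas a brain-like generative model also conditions its predictions on the agent's own actions. The class names and placeholder rules below are invented for illustration.

```python
# Sketch of the contrast between a passive predictor, p(next | history), and an
# action-conditioned predictor, p(next | history, action). Placeholder logic only.

from typing import Sequence

class PassivePredictor:
    """Learns patterns from a fixed dataset; can only forecast what comes next."""
    def predict(self, history: Sequence[float]) -> float:
        # placeholder: a simple persistence forecast
        return history[-1]

class ActionConditionedPredictor:
    """Predicts how sensations will change *given* an intervention the agent makes."""
    def predict(self, history: Sequence[float], action: float) -> float:
        # placeholder: the forecast depends on what the agent chooses to do
        return history[-1] + action

obs = [0.0, 0.1, 0.2]
print(PassivePredictor().predict(obs))                 # no notion of intervening
print(ActionConditionedPredictor().predict(obs, 1.0))  # prediction varies with action
```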
Garrick (43:58.067) I was going to ask you about robotics. We're seeing more and more robots which are using more and more complex computation. And the more computation that's going on, the smoother and more rhythmical the movement, the better the impact and the effect of a robot is, and its ability to act in the world. Do you think they will become actors
Garrick (44:27.657) as well, and start to learn, through acting, what that means?
Andy Clark (44:28.701) Yeah.
Yeah, oh yeah, I do. I think they already are. It's just that the bringing together of the huge passive data sets and the knowledge that is formed by real action in the world is not happening yet. Do we want it to happen? I don't even know. We'd want it to happen if the goal was to create
Garrick (44:36.029) Yeah.
Garrick (44:49.257) Mm.
Garrick (44:54.909) No.
Andy Clark (44:57.734) yet another kind of real intelligence on planet Earth, so there'd be us, the non-human animals and the AIs. But actually, maybe what we want is this kind of slightly alien form of AI intelligence that does things that brains like ours aren't particularly good at, that picks out patterns amongst huge amounts of passive data sets, way more than any human brain is going to be fed or able to handle.
Andy Clark (45:25.618) So it could be that if we want the kind of hybrid cognitive economy, actually having alien intelligences around us is better than trying to go the artificial general intelligence route.

Garrick (45:38.099) This touches on the impacts of all of this on society. And you talked about the fact that we act, that that makes us different. You talked about the need for education, and a better kind of education, which allows us to manage these predictive systems. And you also talked about the predictive biases that we can pick up. These are just some of the things, but are there any other broader implications of
Andy Clark (45:46.546) Yeah.
Garrick (46:05.735) predictive processing for society at large, for example, how can our understanding of this concept help us navigate complex global challenges, for example, from education to ethics?
Andy Clark (46:20.092) Yeah, well I think there are implications for education, and I'm sure there are lots of implications for AI ethics, but let's start with education. The main moral there, I think, concerns educational practices that teach us a kind of, in a way, a tolerance for error, or even a respect for error, instead of seeing error as a mistake and something to be avoided.
Some educational practices, Montessori schools for instance, there's a whole bunch of educational practices that sort of make making mistakes into something very positive. This is kind of, you know, this is how we learn. We engage with things, we poke things around, we chat with other children and so on. So one of my students, Laura Desiree Di Paolo, is working on education and predictive processing at the moment.
And the main thing that she's arguing is that we need to take these error landscapes very seriously. We need to understand ourselves better as creatures that need to curate informative errors and not be averse to inhabiting landscapes where you're having to tolerate error for quite a while before you start to get the payoffs.
of increased long-term prediction error minimization. So I think education is one place where there are a lot of direct morals. Computational psychiatry, which we sort of touched on earlier, is another area. And then the general structuring of the kind of near-future ecosystems in which we are kind of collaborating with artificial predictive systems
is another area. There's an AI firm called Verses AI that is particularly interested in trying to do some of that work, trying to think about what the shape of that ecosystem needs to be for us to get the most out of current resources, and whether we need some slightly different resources too. Some may be based a bit more on the predictive processing or active inference models of how...
Andy Clark (48:40.912) If you like, and I will be contentious here, real thinking works.
Paul (48:46.197) And Andy, one more question for me. It's been such a fascinating conversation, I have about 20 more questions, but in our time, one more question. Given what we've just learned over the past conversation, and I now understand that my brain is essentially a prediction machine, what should I do about that? What does that mean for me? How could I use that to either live better,
better interact with others, or just be more aware of myself? What can I do with that knowledge?
Andy Clark (49:22.546) Yeah, interesting question. I suppose it depends whether we want to enhance the positive or the negative here. There is sort of, what shouldn't I do? Well, I should be very aware that the environments that I inhabit are going to train the system that is doing so much of the work so that I won't even see the world the same way if I inhabit a very different environment for a long time. So be very careful.
Andy Clark (49:50.92) what environments you inhabit because your future mind depends on them in a much deeper way than perhaps we appreciated before. So that's the sort of, you know, that's the kind of negative side. But I think the other side, the sort of enhance the positive side is, you know, we need to be happier about going out of our own comfort zones because those are the places where we will sometimes make the greatest progress by
resolving big streams of prediction error, minimizing errors that we didn't actually predict were going to be there before. So maybe that's another thing. We're not actually very good at predicting where those good bits of the environment are going to be. And that, I think, is a kind of real puzzle. For a lot of people, I think the best way to find some of the unexpected good bits of the environment is to be a bit random
about some of your explorations. So we could take some of that away. Some people have experimented with giving ChatGPT prompts that include a few random words, just to kind of see what comes back. And sometimes interesting things come back then. And obviously you can turn up the temperature and other things if you want to kind of reach in there and make it a bit more volatile. But even simple things like including a couple of random words in your searches can sometimes pay dividends. So I think that understanding
the way that error dynamics work is actually going to be beneficial in structuring our own interactions with the world, so you might get a little bit more out of them.
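For anyone who wants to try the "be a bit random" tactic, here is a lightweight Python sketch, not from the episode. The query_model function is a hypothetical stand-in rather than a real API; the point is simply that appending a couple of unrelated words, or raising the sampling temperature where a system exposes one, injects a little controlled surprise into your searches.

```python
# Sketch of injecting controlled randomness into prompts. query_model() is a
# hypothetical placeholder for whatever LLM or search interface you actually use.

import random

SEED_WORDS = ["lighthouse", "ferment", "origami", "tundra", "counterpoint", "mycelium"]

def curious_prompt(question: str, n_random: int = 2) -> str:
    """Append a couple of unrelated words to nudge the system off familiar paths."""
    extras = random.sample(SEED_WORDS, n_random)
    return f"{question} (and riff briefly on: {', '.join(extras)})"

def query_model(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical stand-in for an LLM call; higher temperature = more volatile output."""
    return f"[model response to: {prompt!r} at temperature {temperature}]"

print(query_model(curious_prompt("How do predictive brains handle surprise?"),
                  temperature=1.0))
```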
Paul (51:28.503) Andy, this has genuinely been one of the most fascinating conversations that we've had on this podcast. I think I speak for you too, Garrick.
We've learned loads. I mean, to try to summarize some of these things: we've been talking about the brain as a prediction machine, and how our brains are continuously balancing sensory input from the world around us with an expected perception of reality, and in fact how we shape our own reality. Therefore, as Andy just said, how we need to be careful about the environments which we continually inhabit, because this is going to change us.
Garrick (51:37.255) Yeah.
Paul (52:04.433) We talked about how the brain plays a long game, and this is why we don't all just live in dark, unsensory rooms. We are looking for stimulus, because the brain essentially wants to train itself to be a better prediction machine. We talked about the importance of multiple viewpoints, and that it is actually vital to have people in the world, no doubt this is why we vary so, who have varied opinions, some of whom are more strongly inclined to predict and some more strongly inclined to sense. We talked about
the similarities and the differences between us and the AI systems, or perhaps aliens, that we are creating, and how we use our sensory input to inform our own predictive model. I also found it fascinating that we need to be more tolerant of error and actually embrace error, and for a different reason than I'd thought of before, which is that this is actually improving our brains and improving our ability to predict the world around us.
I very much like your thought there just now about being more random, just experimenting, because that's also going to help improve our brains and improve our ability to predict and interact with our world. Andy, this has been such a wonderful conversation. If there's one thing that you would leave our listeners with today, what would that be?
Andy Clark (53:08.296) Yeah.
Andy Clark (53:25.052) Hmm, one thing. Well, I think it is really what I was just saying. Be aware that when you choose your environments, you're choosing your own future mind. They used to say you are what you eat, but I think this kind of goes for information just as well. So, choose your environments, curate your environments, and embrace error when it comes along, because actually it's really a wonderful fuel and a resource.
Andy Clark (53:54.866) So yeah, I think those are the things. It's sort of like, immerse yourself in wonder, if you can.
Paul (53:55.905) Wonderful. Thank you. Andy, brilliant conversation. Thank you so much for joining us.
Garrick (54:06.771) Thank you. Amazing.
Andy Clark (54:08.04) Thank you, it's been a great conversation. I feel like we could talk for 24 hours, except my voice wouldn't stand up.
Paul (54:08.689) Absolutely. You've been listening to the Curious Advantage podcast. We're curious to hear from you. If you think there was something useful or valuable from this conversation, we encourage you to write a review for the podcast on your preferred channel, saying why that was so and what you've learned from it. We always appreciate hearing our listeners' thoughts and having a curious conversation. Join in today: hashtag Curious Advantage. The Curious Advantage book is available on Amazon worldwide.
Paul (54:40.333) Order your physical, digital, or audiobook copy now to further explore the 7Cs model of being more curious. And Andy, just to ask you, where can people find a copy of your latest book?
Andy Clark (54:52.72) It should be in all reputable bookshops, and obviously all the online sellers. And if you want to listen to it, I did a narration of it myself, without a hoarse voice, so I think that's also a good option. But yeah, there's the American version, all punchy and yellow. And here's the UK version, a little bit... I mean, it grows on you.
Paul (55:09.677) Fantastic. Well, we encourage our listeners to check out The Experience Machine by Andy Clark.
Thank you for listening. Subscribe today, keep exploring curiously, and we'll see you next time.