Simon Brown (00:02.95)
Hello and welcome to this episode of the Curious Advantage podcast. My name is Simon Brown, and today I'm here with my co-author, Garrick Jones. We have no Paul today, unfortunately. But I'm really excited for this episode, because we're delighted to be joined by Art Kleiner. Art is a writer, a strategist and a thought leader in organisational learning, leadership and responsible technology. So a warm welcome, Art.
Art Kleiner (00:30.99)
Thanks so much, Simon, and good to meet you, Garrick. I'm really glad to be here.
Simon Brown (05:46.589)
So a warm welcome to the Curious Advantage podcast, Art. A little bit about Art: he's co-author, with Juliette Powel, of The AI Dilemma: Seven Principles for Responsible Technology. He teaches at New York University, exploring how human systems and emerging technologies intersect, and our regular listeners will know those are topics of great interest to us on this podcast. He's the former editor-in-chief of strategy+business. He's also the author of The Age of Heretics: A History of the Radical Thinkers Who Reinvented Corporate Management, and of Who Really Matters: The Core Group Theory of Power, Privilege and Success. He was also editorial coach of a book that had a personal impact on me, The Fifth Discipline, which is arguably one of the most influential books on learning organisations, and he was co-author of another book I have on my bookshelf, The Fifth Discipline Fieldbook. So there's lots that I've already learned from Art. His work spans decades of insights into how organisations think, learn and evolve,
and how curiosity, empathy and systems thinking can guide us to use technology responsibly. So maybe let's start with curiosity, Art, since that's at the heart of our podcast. How would you personally define curiosity, and what do you think about it?
Art Kleiner (07:02.894)
I think curiosity is the willingness to entertain, as Shakespeare put it, things that were not in our philosophy before. It is a discipline of mind, and it is also a natural element of the way children work. And oddly enough, right now, we're experiencing with GenAI
a tool for amplifying curiosity. Of all the things that it is slated to do, or championed for doing, or accused of doing, raising our ability to get instant satisfaction for our curiosity is one of the things that AI does best. It doesn't always do it accurately, but...
Art Kleiner (07:58.42)
allows us to get a very quick take, including the context. I mean, when we're curious, we're curious about the context of things, not just the answer to a particular question. I might say something like: in 2050, Africa is where most entry-level job applicants will live, because that's the way the demographics are going.
Before, we would spend hours trying to figure out what part of that is significant and true, and how it would affect us. AI sums it up. Accurately or not, it gives us a launching pad to exercise our own curiosity further.
Garrick Jones (08:44.166)
Now, you've been the author or co-author of, and involved with, so many books that we admire. We've got to be careful this doesn't become too much of a fan club, and that we remain a little critical. But I'm really fascinated by how curiosity has guided your work. How do you think it's been important for exploring leadership and responsible technology over the years?
Art Kleiner (09:11.434)
I think it's always been my driver. So, for example, I wrote The Age of Heretics, which is a history of alternative management, a history of good management. And when I was working with Peter Senge, helping him develop The Fifth Discipline,
I noticed he was writing about shared vision, and it wasn't really clear where these ideas of bringing people together and talking about our common future came from. Where did that come from? Who started it? It was pure curiosity that led me to ask around and follow a trail that ultimately led back to the social scientists of the 1930s.
But I had no idea. Similarly with The AI Dilemma: that started with my co-author, Juliette Powel, who was at Columbia University pursuing a degree and asked, can we trust these big tech companies to regulate themselves, to be responsible? Can we trust them? And she started asking around.
Ultimately, the answer was no, we can't trust them. But then what? If we can't trust these large companies, and we can't really regulate or govern them in any kind of conventional way, because the technology is moving too fast, what do we, humanity, do, and who is that going to affect? These are examples of curiosity
that just lead people to follow a path that ultimately leads either to a question being answered or, more likely, to more questions.
Simon Brown (11:01.988)
So exactly to that. I mean, even what you said there has spurred lots of questions. But before we dive into those, let's go back to the big picture around The AI Dilemma. From those conversations with Juliette, tell us: what is the AI dilemma about, and what really was behind you writing the book?
Art Kleiner (11:07.789)
Ha ha ha.
Art Kleiner (11:24.788)
And I'm going to pay tribute first to Juliette just in two ways. First, I said I think you have a book there and Juliette asked me to co-author it with her. It was one of the great gifts of my life. And then Juliette herself passed away suddenly and tragically, we think from bacterial meningitis just a few months ago. So I just want to acknowledge that. When you talk about a...
Art Kleiner (11:53.15)
software system, or a field of software like artificial intelligence (which, by the way, means capable of doing things that, if a person did them, we would think were intelligent).
That's all it means. The "how" of the technology doesn't imply it has a theory of mind, or the other things we associate with intelligence, but the outcomes appear intelligent enough of the time that that's the phrase that's come into common use. And one of the things that AI systems do is act autonomously. They act on their own, which means that they are not always following direct human commands in the same way that we would think other machines do. So therefore we need to understand the unintended consequences of what they do. So we call these seven principles for responsible technology. In retrospect, I think we missed a big one, but the seven that we have are, first, to be intentional about risk.
Yes, there's always going to be risk in technology; every tool has risks. But let's pay attention to the risks, and know in advance what we're trying for and what risks we're taking. The second we called open the closed box, and it's the whole aspect of explainability: can you query an AI system about what it's doing and what it's telling you, and, more importantly, can you query the company that put it out
Art Kleiner (13:38.575)
about the same thing? Then we have a principle which we call reclaim data rights for people. When you are tracked, information about you is tracked.
Do you control where that information is used? Do you have the rights to it? If people are getting paid for your data, does some of that kick back to you? Do you have any say whatsoever? Are there tools available to make that happen? It's a big shortcoming in AI technologies right now. Then: confront and question bias.
We can talk about how biased AI systems are, but my favorite example is from a student at NYU. She's a Chinese student, and she asked one of the Chinese image generators to create a picture of a woman taller than a man. That's all she said in the prompt. And the GenAI image creator came back with two people at a wedding,
where the man is taller than the woman. No, she said, no, no: a woman taller than a man. It came back with another image where the man was taller, and then another. No, I mean the woman is taller. It took her six tries, and she never did get the image to come out that way. Its bias was so ingrained, from the data it had drawn on, that it could not fulfill that prompt accurately. And we see that all the time.
Simon Brown (14:54.106)
Ha ha ha ha!
Art Kleiner (15:11.746)
Ask for a successful executive, and you'll get a white male. Ask for an executive on the point of being fired for corruption, and you'll get a darker-skinned person, probably older, maybe female, based on the experience I have. They keep trying to improve it. So that's the fourth. The fifth is hold stakeholders accountable, which basically means that when systems harm people, the
people who developed those systems should be held to account, and we have hair-raising stories about that. And then: favor loosely coupled systems, which is about the organizations. It turns out that organizations that are tightly entwined, where only a few people are in charge, and they are all from the same background, and they connect only with each other and eschew outside influences,
tend to produce more harmful work. They are less capable of managing the uncertainties and dilemmas around AI. And then the seventh one is to embrace creative friction, which is the last thing that a computer developer wants to hear.
You know, one of the famous books on user interfaces is called Don't Make Me Think. And what we argue is that AI systems are, in fact, okay at raising productivity under certain circumstances. They're possibly okay at innovation. They're not yet okay at acting autonomously. But what they're really good at, what they're great at,
is helping people think, if we use them as mirrors on our own reflective processes.
Simon Brown (17:05.318)
Yep.
Art Kleiner (17:11.298)
They can address the habits of thought and the habits of action that we're afraid to, or unable to, address ourselves. And they can do it in a way that really gets to people. So it's no coincidence that people are using GenAI systems for guidance, counseling, coaching. Tell me what I haven't thought of; play the part of this person I'm trying to sell to and tell me what you really think of me. All those kinds of things GenAI excels at,
Art Kleiner (17:39.51)
because we're using it as a kind of cognitive mirror. So those are the seven principles. You're probably curious about the one I left out.
Simon Brown (17:49.02)
Yeah, please.
Garrick Jones (17:52.324)
Exactly. That's our next question from both of us, I think.
Art Kleiner (17:54.895)
So this is one of those things. We left out misinformation. We mentioned copyright under the issue of risk, and I guess misinformation sort of fits under explainability; it's related to that. But the deliberate and consistent and kind of baked-in
Simon Brown (18:03.931)
Okay.
Art Kleiner (18:23.832)
capacity to mislead, misinterpret, flatter people, tell them what they want to hear but what's not necessarily accurate: we didn't see that as quite the pernicious issue that it would become. If we had published the book six months later, it would have had a chapter of its own.
Simon Brown (18:44.572)
Yeah. And of those, is there one that we absolutely need to pay most attention to, or are they all super important, and we need to address all of them?
Art Kleiner (19:02.83)
Well, of course they're all super important. But it's a little bit like democracy: there's one thing without which you can't have any of the others, and that's open the closed box, explainability. If you can't reliably query a tech company on what it's doing and why it's doing it, then all the other principles are vulnerable, because you just don't know.
And right now, for instance, the European Union's AI Act is built around explainability, in the same way that, in the United States, financial regulations like Sarbanes-Oxley were built around explainability. It's the same mechanism: companies have to hire auditors to look at their practices, and they have to live with the fact that they're paying these auditors, but the auditors are supposed to be independent.
And this whole question of whether you can open the closed box of any system may well be the most vital political, economic and social question right now.
Simon Brown (20:14.682)
So a couple of months ago there was the AI 2027 paper, which we spent a couple of weeks diving into and contemplating. I remember one of the things in there was that AI reaches a level where it has its own language, and you no longer have that visibility into what it's thinking about. And not long after that, there was a video floating around of a hackathon where a pair of developers created something called GibberLink, a machine-to-machine language that allowed AIs to communicate in about 10% of the time that it takes to use human language. So suddenly it was like: okay, AI 2027 had this futuristic vision where we lose control of AI because of its ability to think in a black box, and then we have a technology that shows it able to communicate in a way that we can't track. Do you see it as realistic that we create this open box, so that we can see what's happening and we do have that transparency and visibility into what's going on?
Art Kleiner (21:29.058)
Geoffrey Hinton, one of the fathers of AI, famously compared large language models to toddlers. He said: a toddler says "cat," but you can't ask the toddler how he knew to put that word with that animal.
And in that sense, it doesn't matter whether they have a language or not. They are sifting through millions of possibilities and determining the probability that is most likely to get you to say, thank you, that's what I wanted. In that sense, the systems are ultimate people-pleasers,
never answering your question based on logic, but answering it based on a multitude of options. Therefore, they can't be queried in an ordinary way. What you can do is measure the output versus the intention, and you can do that regardless of what language the system speaks, if the company is open to doing it.
Well, David Kantor, the family therapist, said there are three types of organizations, three types of ways in which we believe organizations should be. In a closed organization, like an authoritarian system, you follow the boss; whatever the boss says is what's most important. In an open system, like a republic
or a kind of consensus-based organization, you follow the law. There are rules, and no matter whether you're the boss or not, the rules come first; the rules carry authority. And then there's the random organization, like a jazz band. If a symphony orchestra is closed and you follow the conductor, and a chamber music ensemble is open and you follow the consensus,
Art Kleiner (23:37.239)
in a random organization, you follow whoever's doing the most interesting thing at the moment. You kind of follow the artist. I think a lot of what we think of as authoritarian systems are actually random systems. And tech companies, culturally certainly, are random systems: whoever is doing the most interesting thing at the moment is where we're going to put our energy. And therefore, in a random system,
if you want it to be open, you have to make being open the most interesting thing it could possibly be doing at that moment. And truly open people hate that: I shouldn't have to make it interesting; you should just follow the rules, because the rules are the rules. But...
people don't like to live that way. Well, some people do, but a lot of people really want to live differently: I don't want to think about things; I want there to be an authority in whom I trust. And then there are people who just want to do the thing that feels good at the moment, that is exciting enough that they're going to be caught up in it. And I think we are in the age of random people
Garrick Jones (24:50.696)
Mm.
Garrick Jones (24:54.312)
Mm.
Art Kleiner (24:54.51)
and random institutions and a random society. And in AI, it's turning out the same way.
There was a form of AI that was based on rules, the expert systems tradition, where they tried to codify the logic of how to make decisions and classify things; that was a closed system. There was also an idea about AI that it would glean appropriate behavior from the behavior of human beings, and that was a kind of ideal.
Garrick Jones (25:11.41)
Mm.
Art Kleiner (25:30.69)
But what we have is random AI, making decisions based on what's the most interesting thing, or what conventional wisdom would think is interesting. And I'm not sure how it's going to play out.
Garrick Jones (25:42.504)
Mm.
Garrick Jones (25:46.736)
Is it so random? Or is it chaotic? And if it's chaotic, will it lead to a chaotic outcome?
Art Kleiner (25:53.357)
Well, I think Kantor would say that any of the three can be chaotic or... what's the opposite of chaotic? Orderly. You can have orderly random systems, and a good example of that is an atelier, or one of these...
Garrick Jones (26:05.041)
Order,
Garrick Jones (26:17.16)
small boutique type.
Art Kleiner (26:18.51)
a boutique, or a social system, you know, like when Tolkien and C.S. Lewis and others met, or the Bloomsbury Group; that was a random orderly system. But a random chaotic system is when you have multiple people who decide they want to be the artistic genius
Garrick Jones (26:21.192)
workshop.
Garrick Jones (26:31.688)
Yes.
Art Kleiner (26:46.99)
and they're not transparent with each other or with the world at large. You also have chaotic open systems: a bureaucracy overrun with regulations is chaotic in its own way.
Garrick Jones (27:05.746)
Yeah. You write about, and we talk about, curiosity, living systems, systems thinking and empathy. And of course I want to explore a little bit more about living systems and systems thinking, why they're really so important to what we're doing and how we teach the next generation, and so on. But empathy:
why is empathy so important to you?
Art Kleiner (27:39.105)
It's important because it is a trigger to that kind of behavior. And I'm just realizing: very few people question their own empathy. They all question the empathy of everybody else.
And there are actually different types of empathy. There's a professor at Stanford named Jamil Zaki who has identified that different types of empathy have very different effects. I can be totally empathetic in the sense of knowing what you're thinking and why you're thinking it, which is called having a theory of mind. Autistic people have a hard time with having a theory of mind.
But I could have a theory of mind about you: here's why Garrick is doing the things he's doing. And I could use that ruthlessly to try to manipulate you, and it would still be a kind of empathy. I could also
do the old Bill Clinton thing of, I want to stand in your shoes. I could imagine, I could feel your pain. I could know what it's like to be you, Simon, and recognize and feel your joy as well, and have that kind of associative empathy with you, and yet do nothing to help you. And...
Garrick Jones (28:55.9)
Yeah. So there's no value judgment or moral judgment related to the empathy itself; there's no implication that it is a good thing. It's neutral. It's simply...
Art Kleiner (29:12.46)
Well, and AI systems can emulate those two kinds of empathy. AI systems can understand people better than most people understand people.
Garrick Jones (29:19.4)
Mm.
Art Kleiner (29:26.188)
especially when it comes to specific individuals: they can pick up a whole pattern of behavior and draw conclusions from it. AI systems can empathize in the sense of, I know what you're going through, and they can reproduce your life's experiences better than you can. In fact, there's an amazing AI system developed by an innovator named Pau Garcia in Spain. He works with refugees, and he
has this system where, you know, the refugees have left behind the photographs and artifacts and anything that would allow them to remember the past, and he uses AI systems to visually recreate it: recreate the feeling of the communities they left and the relationships they had. And it turns out to be immensely comforting. So AI systems can do all that. But what they can't do is the third kind of empathy,
which is when I say, I see what's going on with you and I care enough that I'm gonna change my behavior. I'm not gonna do it because somebody else is telling me or because there's a moral arbiter. I just know that from here on in, I've seen your struggles and I'm going to change the way I live accordingly because I can't bear it. AI systems cannot do that.
Simon Brown (30:54.524)
So that's the very human trait of caring.
Art Kleiner (30:54.882)
They...
Art Kleiner (30:58.574)
Yeah. And interestingly, people anthropomorphize AI systems. We attribute human behavior to them, and one of the things we attribute most is that they care. And that's how people fall into these traps: you know, the articles about people who have attempted or committed suicide, or who have taken really bad advice, based on what an AI system has told them.
The desire to feel that this amorphous algorithm actually does care for us is very strong. We are very susceptible to it as humans; many humans are.
Garrick Jones (31:43.721)
What do you think about Isaac Asimov's rules for robots? And how are we doing on, you know, first do no harm, and don't harm humans, and...
Art Kleiner (31:58.051)
Right, what is it? Don't harm humans, don't do anything that could harm humans, and don't talk about...
Garrick Jones (32:06.12)
You don't talk about Fight Club.
Art Kleiner (32:10.07)
I always think about Chris Argyris's thing... no, it's Gregory Bateson's definition of schizophrenia, which is: you have a crisis or a problem in your family, and you can't talk about it, and you don't talk about the fact that you can't talk about it. In a way, I think it would be great to get people to follow Asimov's laws for robots.
Garrick Jones (32:17.736)
Yeah.
Art Kleiner (32:39.54)
If people could do it, then I think AI systems could do it.
Garrick Jones (32:42.76)
I mean, there's a sort of unspoken gap in what we've been talking about here, which is the moral requirement on the outcome: we want the right outcome for humans. You say we can take a neutral approach to empathy; empathy itself is neutral, but it can be used for harm, for dark outcomes, if you like. So do you think there's a new religion, or a new morality, that's required?
And how do we achieve that, when the classic problem with humans, going back in philosophy, is that we always tend to head for the base? Humans tend to head for the bottom. You can guarantee that people will be acting in their own self-interest rather than in the interest of the collective, if you like.
Art Kleiner (33:39.501)
Well, as it happens, I have my own frame on this. But I will say that I don't think it's a fully educated frame; it's one that I believe is true, but it could be challenged. So: looking at the past, there are basically four attitudes, three that I think have shifted and one that has held steady,
though it is honored in the breach: the golden rule. Do unto others as you would have them do unto you, or do not do unto others as you would not have them do unto you. That has been with us through human history. I think with the Enlightenment... well, with Christianity and other traditions dated to around that time, came the idea that all humanity is of value,
Garrick Jones (34:11.409)
Yeah.
Art Kleiner (34:35.49)
no matter who your parents were, what your circumstances, race, religion, all that stuff: people are of value because they are people. And that is an idea that, in my view, has been picking up breadth of adherence.
More and more people subscribe to that idea. So much so that even now in the United States, when people say some ethnic groups are just no good, it is still shocking; there's a shame associated with voicing that idea. There's a movement to bring back bias of that sort, but
the number of people who believe that people are of value for being people has grown steadily and is perhaps reaching a tipping point. That would be very nice around the world; I don't think it's anywhere near that yet, but it's growing, even with all the backlash against it. So that's one: it's now fashionable to believe that all people are worthy of respect, no matter what,
not worthy of respect as individuals, but worthy of respect for the fact that they're human. The second really came about with the Enlightenment and with the technologies associated with it, particularly the automobile, the railroad and communications technologies. And this is the idea that people have choice, and that choice is a good thing. It's very much an Enlightenment idea.
Indigenous cultures tend not to follow this idea, but I think a lot of what we see in the expanding middle class around the world corresponds with people recognizing: I don't have to be in an arranged marriage. I can marry who I choose. I can have the profession I choose. I can go where my interests take me,
Art Kleiner (36:42.216)
and I can go where my heart sends me; I can follow my bliss, as it were. That idea is also relatively new. And it goes with something else: to follow your bliss, you have to do some work to set up the kind of life that will allow you to do that. So there's some nuance to the idea that I have choice. Not everybody likes it, but most people,
in most circumstances, given the choice of whether to pursue choice, will pursue choice. They may not want the consequences, going into debt or whatever it may be, but they will pursue it. And a lot of our entertainment media is about imagining what it would be like if we had more choice. So that's the second, and I think that's been hitting a tipping point now.
Simon Brown (37:19.226)
Yes.
Art Kleiner (37:39.892)
In the United States it hit a tipping point around the 1930s, the aftermath of the Depression, maybe starting with the Gilded Age. Around the world it's hitting a tipping point now. And then the third...
is so recent, it's hard to believe that it's taken hold the way it has. But it's the idea that people should grow up without trauma. It emerged from studies of PTSD, of soldiers being psychologically wounded after wars, the lost generation, all of that. But really, in the 80s, with the studies of PTSD after the Vietnam War,
suddenly the psychological conclusion that people are bruised by childhood became prevalent, along with the conviction that it shouldn't be this way. And so if I were to come on here and say, I beat my children when they were young and I'm proud of it (which couldn't be further from the truth), and if I stood by it, that would be regarded as shameful.
Simon Brown (38:55.612)
Yep.
Art Kleiner (38:56.534)
I couldn't do that and keep my head up in polite company, except in circles that are almost defined by that concept. That is a change. My parents, who came of age in the 30s and 40s, could have said that, and there would have been no shame associated with it.
Garrick Jones (39:22.952)
I have to tell you, I went to a school where caning was still a thing.
It's pretty recent, I'm not that old.
Art Kleiner (39:30.904)
Yeah, but if your children heard that, they'd think it barbaric. And that is a big change in human attitudes about humans. And it has consequences I don't think we fully see yet, mostly benign consequences.
Garrick Jones (39:36.082)
Horrified. Yeah, absolutely.
Art Kleiner (39:53.219)
And then it raises the question: if children should not be raised in a disturbing environment, well, what about other disturbing environments? Do we have a mutual need to prevent humans from being harmed?
Garrick Jones (40:02.577)
Yeah.
Art Kleiner (40:10.358)
And being harmed is open to interpretation. We don't want humans to be coddled; we want them to be challenged, and to grow and develop in the face of stress. So then, where does that leave us culturally? And interestingly, AI takes us back to that very first AI dilemma principle: how do you be intentional about the risks you take when our awareness of unintended consequences is
so uncertain? So to your question of whether we are at a tipping point in humanity: I'm persuaded that these three trends represent a kind of tipping point. I don't know if this is cyclical, you know, whether humanity has been here before,
Garrick Jones (40:42.834)
Yeah.
Art Kleiner (41:07.334)
or whether it's evolutionary. I've been reading a book by Brian Swimme called Cosmogenesis recently, in which he argues that this type of evolution has been intentional all along, from the Big Bang onward, as the universe moves towards greater and greater complexity.
Whether or not it's an intelligence being intentional, there's an intentionality to this type of evolution, and we're just at a tipping point right now.
Garrick Jones (41:39.068)
You mentioned the middle class and the growing middle class. These are all symptoms of an advanced and advancing society, an advancing culture, where people have more free time, perhaps more money, where people are more prosperous in some way. And that's growing around the world, and we see the implications of that and the impact on resources and so on.
What's your view on the idea that AI is going to decimate the middle classes? That AI is coming just at a time when the professional classes are being wiped out: legal work is being replaced by AI, medical systems and medical decision-making can be replaced by AI. All of the bedrock of an educated, growing middle class is going to be eroded by machine learning. What's your view?
Art Kleiner (42:37.158)
We'd better hurry up and finish this podcast so I can get back and do all the editing I need to do, to accumulate the money that I'm going to need when AI takes my job away. Okay. So, we don't know that AI is going to decimate jobs. In scenario terms, there is a very plausible scenario in which people basically have nothing productive to do, and no way to get paid.
Art Kleiner (43:06.958)
There are lots of other scenarios. Kurt Vonnegut Jr. wrote Player Piano, which was basically saying the same thing about bedrock American industry. He wrote that more than 60 years ago, and it didn't happen. On the other hand, past performance is no guarantee of future performance, in the stock market or anywhere else. My opinion is that there will always be more things
for people to do, not necessarily for pay. As long as there are people, there will be other people who need people, or who need to help them. One really interesting indicator will be, as AI systems come into companies, do they actually lay people off because the jobs go away, or do they find other things for people to do?
And another interesting question is: is there a limited number of economic things to do? So, a couple of data points from very different places. There was a man named Joseph Ellis, who was an analyst covering the retail markets, and he plotted the coefficients that determine the level of middle-classness of a society against its prosperity. It turned out that
the health of the middle class was a leading indicator of prosperity, almost routinely. And it makes sense, right? When rich people have money, they spend it, but there aren't enough of them. When poor people do well, they spend it, but largely on subsistence items; it doesn't go as far in rejuvenating the economy. But when
middle-class people have discretionary income, they spend it in a variety of different ways, on goods and services, and that raises the quality and value of everything. One of the issues with the middle class is that this leads to various things being inflated. So that's one thing. Another thing is that we haven't yet seen the influence of demographics: for instance, will there be a need for
Art Kleiner (45:27.448)
robotics to take care of the elderly, that kind of thing. But the third is what I call the artisanal nature of creative work. So if I was writing a management book 300 years ago, I would be writing it for my friends. And if my friends were influential, the book might be influential, but it wouldn't sell that many copies; printing presses couldn't handle that many copies. If I was writing the same book 70 years ago, it would be better written and it would be aimed at a larger audience, and I would be interested in who reviewed it and how those reviews went, and did the Times cover it, and was it featured here and there. I would be trying to make my reputation by selling copies.
Steven Piersanti, who founded the business book publishing house Berrett-Koehler, pointed out that that would have held well until about 2005, but the number of books printed today, in 2025, is the same as the number of books printed in 2005.
Garrick Jones (46:49.734)
Really? Gosh.
Art Kleiner (46:51.022)
Yeah, which means the number of books read is the same as the number of books read in 2005. But the number of titles is about 100 times as much. So my audience now is not the small community I would have had then, and it's not the broad mass. It is more like...
Simon Brown (46:51.952)
Is that because they got digitized?
Simon Brown (46:57.825)
Wow.
Garrick Jones (47:02.044)
Mmm. Yeah. Everything's online.
Art Kleiner (47:16.91)
If the broad mass is like Budweiser or a major beer company, I would be like an artisanal beer maker. I wouldn't be limited just to the people in my neighborhood, but my audience depends on who has heard of me through who has heard of me. So I'm delighting people, if, you know, delight is the currency that carries a book. I'm delighting people one at a time and
Garrick Jones (47:36.968)
Mm.
Art Kleiner (47:44.376)
passing largely through word of mouth, and counting on there being enough people delighted by what I produce that I can support myself through interaction with them.
Garrick Jones (47:58.226)
So that's like radical localization, radical community, radical clustering across the system, which has impacts. You know, you can take it up a level: nation states become city states, city states become village states. I am a village. I'm supported by the people who I delight and who I exchange services with, and I make stuff or do stuff for that radical community around me, which is a sort of...
Art Kleiner (48:06.636)
And yeah, beautiful.
Art Kleiner (48:25.378)
Yeah, but I think.
Simon Brown (48:25.916)
Yeah, which is just building on, I think, conversations we had with Ari Popper, where his guidance for people was to either go deep into AI or go completely anti-AI into human crafts and become a potter. I think Geoffrey Hinton's piece was become a plumber. So it's into these things that are more craft, more hand-focused.
Art Kleiner (48:39.342)
Yeah.
Art Kleiner (48:47.506)
And it's not geographical anymore. It's not my Brooklyn neighborhood, or my neighborhood in a small town in Ohio, or a college town neighborhood. I'm an artisanal beer maker, but I have no constraints on distribution. I'm global, but I'm artisanal. It's a neighborhood; it's just that my neighbors are all over the place.
Garrick Jones (49:04.326)
New global. Yeah. Yeah. Yeah.
Art Kleiner (49:13.58)
When we did scenarios for the pandemic, if the pandemic had continued, that would be the extent of cultural connection. People would live in gated communities and it would be very hard to come in face to face. Fortunately, COVID did not evolve that way, but there was a moment at which it seemed like it might. So now we have a different issue with AI and provenance.
Will the technology exist to allow us to tell when something is created by AI and when it's not and what its provenance is? The answer is maybe. If that doesn't exist, will there be some sort of central source of truth that people can rely on, possibly using blockchain or following a Wikipedia-like model? Maybe. Maybe not. If it doesn't come to pass, will people be able to psychologically handle it?
Maybe, maybe not. Or will we end up in gated communities of truth, gated communities where different people, different communities, have different sets of facts? And will those be geographic, or will they be, you know, artisanal, around the world? And then, regardless, if that's the case, will I still be able to make a living as a writer because I've found a community, or will the whole race to the bottom fueled by AI
Simon Brown (50:16.7)
Yeah.
Art Kleiner (50:40.972)
be strong enough and complete enough that basically, whoever we are, whatever we do, if we're not caring for elderly people, if we're not raising children or giving people comfort or massages, we're going to be out of a job? That seems unlikely to me, because human beings are infinitely resourceful and experimental, so we'll find ways to make money and do things of value. But there is a dystopian future where, despite that, you know, the people who have money are unwilling to spend money on anyone but themselves, and that becomes the critical driving force.
Simon Brown (51:20.4)
Yep.
Garrick Jones (51:35.307)
It does. I mean, gated communities are a thing, but you know, for the people with money, you see in the research that the biggest concern is always their security and their future. And if you've got a massively disproportionate spread of wealth and access to resources and so on, you suddenly have a security issue within society. And, you know, there are seven billion people on the planet. That's a huge security issue. So things either become very gated, and you live behind large walls and don't let anybody else in, or you find yourself in an extreme situation where everything is insecure and chaotic and nobody has peace. And at the end of the day, you want to find a middle ground.
Art Kleiner (52:17.272)
well.
Art Kleiner (52:20.674)
There is a third option. So security doesn't necessarily come from the point of a gun or a wall. Basically, security is the complement of trust. What does it take to trust your surroundings? Throughout human history it's been: I trust who I know, and I only know who I trust. And now we know many more people than we trust. The number of people that we know, if you include your internet pals, is huge compared to any other time in history, even for celebrities in the past. What
Art Kleiner (53:17.186)
then do we do about trust? What do we do about trust in institutions? If something cannot go on forever, it will stop. And lack of trust cannot go on forever. It has to be either gated and bounded, or it has to be replaced by a basic willingness to trust institutions and people.
In order for them to be trusted, they have to be trustworthy. So then there has to be a way of openly evaluating whether people and institutions are trustworthy. And we're back to explainability and transparency. There is, you know, John Brunner's book, The Shockwave Rider, from 1975, which basically posited a future in which all secrets were open, engineered through technological change. And when I was teaching the scenario class (I had been teaching it for a long time), when I was teaching it in the 1990s,
I would say to the class: imagine it's the future and you're married, happily married, you have children. It would be terrible for everybody if your marriage broke apart, and yet you commit an indiscretion and you are seen leaving the wrong house at the wrong time with the wrong person. You're seen, it's the future, so you're seen by a surveillance camera, and the surveillance camera is tied to the internet and automatically uploads a photograph of you, and your cousin has a command that automatically picks up any of their acquaintances and relatives and tags them, and all of a sudden your face is all over the internet. Is that a better or a worse world than the one we live in today? And to a person,
Art Kleiner (55:32.803)
the class members said it would be better. It would be a better world, because everybody would know everything. We wouldn't be anxious. We wouldn't have to keep secrets. Okay, that's the late 1990s. I asked the same question in 2010. Same thing: indiscretion, picked up by surveillance cameras, posted on the internet. Better or worse world? First of all, everybody says that's not the future, we're living there.
Garrick Jones (55:40.861)
Wow.
Simon Brown (56:00.279)
We're there already.
Art Kleiner (56:02.1)
It's here. And that was 15 years ago. And it's terrible. It is a terrible world, because it's not even that our privacy is invaded. We are unable... the consequences outweigh... So, can AI...? Because AI systems can't care, they can't have discretionary forgiveness.
Garrick Jones (56:02.706)
Yeah.
Art Kleiner (56:33.766)
Or if they do, it's because they've picked up the logic of what is forgivable from human data, from human civilization, from conventional wisdom. And that's even worse, because conventional wisdom does not forgive anybody. So...
Simon Brown (56:50.812)
So maybe to bring us back to something you said at the start, as a sort of closing point. You said one of the key things about what the future looks like will be dependent on the technology companies, and then you followed that with the need to self-regulate, but that we can't trust them to self-regulate. So say more, I guess, on where you see that going. And then we'll wrap things up.
Art Kleiner (56:55.969)
Yeah.
Art Kleiner (57:19.244)
Yeah, and I know I've taken this all over the map, but you asked terrific questions. I really had a great time in this interview; I hope it comes across. I believe in discretionary forgiveness. So when you say, you know, what will the effects of AI be, that to me depends on the answer to this: will human beings develop the capacity to manage this incredible tool, this incredible, dangerous, wonderful device? Will we evolve to meet the tools that we've developed? And the same is true about risk, technological risk in general. Will we evolve to forgive?
You know, one of the horror stories took place in the Netherlands over the last 15 years. There was a scandal about abuse of child welfare funds: a group of Bulgarians basically stole money from the Dutch government. So the Dutch government put in place a predictive analytics system to figure out who was likely to commit child welfare fraud in the future. And the analytics system identified innocent people: people of Moroccan origin, people of Turkish origin, single parents, people who knew people from those backgrounds. So all of a sudden this whole large group of people, I think it was 26,000, were denied benefits without real inquiry, because they fit a stereotype that the AI system had developed. About fifteen hundred children were forcibly separated from their parents. People were rendered destitute; they lost their homes, lost jobs, some were put in prison, their accountants were imprisoned. It was...
Art Kleiner (59:30.762)
Mark Rutte, who is now the head of NATO, was the Dutch Prime Minister at the time, and he and his cabinet had to resign because of the scandal. But the families, 15 years later, have still not gotten restitution, last I heard. The scandal has just gone on and on, with lives ruined. The children who were separated are now adults.
That's an example of what AI systems can do. And that was in the Netherlands. If we had to name the countries we'd most like to move to because they are free of abuse and corruption, the Netherlands would be at the top of my list. So what is going on in the countries that are hiding it?
Simon Brown (01:00:09.756)
And by the time we get there.
Simon Brown (01:00:19.644)
Absolutely.
Simon Brown (01:00:26.662)
Yeah. Yeah.
Art Kleiner (01:00:28.41)
Or that are hiding it more systematically, or are less prone to exposure? And how do we learn, as a society, as a group of institutions, to raise our judgment, discretion, empathy, and forgiveness concurrent with the tools we have
that can select which people to enable and which people to oppress? How do we raise our awareness enough to match our new capabilities? I think that's the question that deserves an answer. I think, you know, governments should be asking that question, if they ever get to a point of accountability. Corporations should be asking that question. Social activists interested in AI should focus on, or are already focusing on, this question, but they could do it more systematically and more in touch. I think it's going to take everything humanity has to raise humanity's level of empathy and forgiveness. Not to forgive everyone, not to be empathetic to a fault, but to have the appropriate level of judgment
and care to, you know, really rise to our better natures. That's what humanity is being called to do right now. Will we? By what date?
Simon Brown (01:01:54.237)
That's a key question. So let me try and summarize what we've covered, and then I'll come to you for your closing thoughts, if that's alright. We've covered a lot; it's been an incredible conversation. I think we could probably still go on for several more hours; I've still got a long list of questions. So we started from your view on curiosity: a willingness to entertain
Garrick Jones (01:01:55.474)
There's the question.
Garrick Jones (01:02:11.644)
Yes please.
Simon Brown (01:02:18.86)
items not in our philosophy, and curiosity being a natural element of the way children work.
We went into how curiosity has driven you through some of your work on The Age of Heretics and some of the history of good management; we went into social scientists and shared vision. Then we dived into the AI dilemma, and we had your seven risks in there: being intentional about the risk to humans, opening the closed box, reclaiming data rights for people, confronting and questioning bias, holding stakeholders accountable, favoring loosely coupled systems, and embracing that creative friction. We then had the eighth one that didn't make it into the book but is a super important one, around misinformation. Then we looked at those again.
They're all super important, but maybe that opening of the closed box, that transparency of knowing what's really going on behind the scenes, is so important; and how ultimately those large language models are people pleasers, you know, they're there to give us the answers that we want, with all the challenges that come with that. We dived into empathy,
and how I had always looked at empathy as positive; actually you framed it that empathy can be positive, but also that we need to think about it, that it can arguably be used to manipulate people, and that behind empathy is the very human trait of caring.
Simon Brown (01:03:46.909)
We talked about how we anthropomorphize AI, and some of the beliefs that you shared: all humanity being of value, people having choices being a good thing, growing up without trauma, and that maybe we're reaching a tipping point around some of those beliefs. We dived into the potential impact of AI on the middle classes and what that may or may not mean, a plausible scenario of where it could impact, and then some of the indicators we maybe need to look for, the early warning signs of what organizations are going to do as they take advantage of AI: are they going to lay people off, or are they going to find more work for those people? And I love the question you posed: are there a limited number of economic things to do, or will we actually be able to create more economic things to do as a result of AI? We talked about the health of the middle classes being an indicator of prosperity in the past, and maybe actually we empower the middle classes and that carries through. We talked about books; we talked about gated communities of truth; and how maybe as humans we're infinitely resourceful and maybe that will carry us through this. And then I loved that final question: how do we learn to raise the judgment, empathy and forgiveness in humans? So, an amazing conversation. From all of that, is there one thing you'd like to leave our listeners with?
Garrick Jones (01:05:03.272)
Thank you.
Simon Brown (01:05:25.372)
Who have we lost?
Garrick Jones (01:05:28.476)
We've lost Art.
Garrick Jones (01:05:38.312)
He's frozen for me too.
Jessica Wickham (Producer) (01:05:42.036)
Yeah, he's dropped off.
Simon Brown (01:05:45.03)
We'll give him a moment, hopefully, to come back.
Jessica Wickham (Producer) (01:05:50.766)
I'm gonna keep recording.
Simon Brown (01:05:52.208)
Yep, keep it going, that's good. Amazing conversation.
Garrick Jones (01:05:57.084)
Amazing conversation. I'm absolutely taken with the fact that we come back to these kinds of philosophical bases of things, like the golden rule, and his challenge to us: you know, we're facing AI, facing massive change to society, facing all those new ways of doing things, maybe very old ways of doing things. But the challenge is how we maintain our humanity and raise our game in order for all of us to survive, rather than a nightmare scenario. But he raises more questions than...
Simon Brown (01:06:37.604)
Answers. But maybe that's... maybe the question is the answer. That's our final chapter.
Garrick Jones (01:06:42.684)
I also love the artisanal thing. We hear that more and more, and it's such a trend, such a weak signal, if you like. When we talk in our book about construction, the need to construct our realities and construct things that we're curious about, and to learn by doing and learn by making, it also has an impact on the artisanal nature of things. We're all going to be making more stuff, maybe for the groups around us, maybe for smaller communities, but networked globally and so on.
Simon Brown (01:07:13.692)
Yeah, I like the way that could actually go global; it would be artisanal, but with global reach.
Garrick Jones (01:07:20.36)
Yeah.
Simon Brown (01:07:25.092)
So let's.
Garrick Jones (01:07:25.884)
Do you think we might just need to edit this using some of the stuff that we've been talking about now? But then maybe we just say thank you, Art. Do you want to do a thank you? Yeah.
Simon Brown (01:07:28.43)
Yeah, so maybe let's...
Simon Brown (01:07:32.708)
Yeah, pick it up. Yeah, let me do that. If he comes back on, we'll pick up from there. And we can maybe edit before my summary as well; or in fact, you can just cut it out and I'll come back to you. So let me give you a bit of a pre-summary. So, we've covered a huge amount, Art. Let me try to summarize some of the amazing things that we've talked about over our time today.
Simon Brown (01:08:04.7)
post summary.
So it's been an amazing conversation. A huge thank you, Art, for joining us. And with that, let me wrap things up for today. This series is all about how individuals and organizations use the power of curiosity to drive success in their lives and businesses, especially in the context of our new digital reality. It brings to life the latest understanding from neuroscience, anthropology, history, business and behaviorism about curiosity, and it makes these useful for everyone. You've been listening to the Curious Advantage podcast, and we're curious to hear from you. If you found something valuable in the conversation today, please do leave a review; it means a lot to us. Give it those five stars on whatever platform, and do share it with a colleague or friend who you think would enjoy today's episode. Also, don't forget you can watch the full conversation on our YouTube channel; subscribe for more episodes, shorts, and behind-the-scenes clips from our guests. You can also learn more about the Curious Advantage at our website, curiousadvantage.com, and follow the Curious Advantage on LinkedIn for updates, podcast insights, future episodes and curious conversations. And the Curious Advantage book is available on Amazon worldwide, so if you haven't got it already, do order your physical, digital or audiobook copy now, and you can further explore the Seven C's model that we have for being more curious. Don't forget to subscribe on your favourite podcast service: Apple Podcasts, Spotify, or wherever you listen. Keep exploring curiously, and we will see you next time.