Garrick (00:01.464)
Hello, and welcome to this episode of the Curious Advantage podcast. My name is Garrick Jones. I'm one of the co-authors of the book, The Curious Advantage. And today I'm here with my co-authors, Paul Ashcroft and Simon Brown.
Paul (00:08.023)
Hello
Simon Brown (00:15.209)
Hello.
Garrick (00:16.568)
We're delighted to be joined today by Kentaro Toyama. Hi Kentaro, and welcome to the Curious Advantage podcast. So you're a computer scientist, an author and a leading voice in digital development. Can you share a little bit about your journey and how curiosity shaped your thinking? And more specifically, we're very curious about the limits of technology in social change.
Kentaro Toyama (00:20.789)
All right.
Kentaro Toyama (00:43.455)
Sure, thanks for having me. So I am a computer scientist by training. I did a PhD in an area of artificial intelligence called computer vision. And in 2004, I moved to India. I was working for Microsoft at the time and I helped the company start a new research lab in India. I'm a research scientist, so curiosity is arguably the main factor that drives all of my work. I'm curious about...
what makes things work and what doesn't make things work, and how both technology and people interact in that process. You asked a little bit about limits to technology. I mean, to sum up what I discovered while I was working on these projects in India, where we tried to use digital technology to address problems in agriculture, governance, healthcare, education and so on, the overall,
you know, punchline of all of that is that I came away believing that digital technology only amplifies underlying human forces, and that in many cases where we have challenges or indifference within human institutions, or even individual capacity, no amount of good or even great technology can turn that around.
Paul (01:58.365)
Kentaro, you talk about this in your book, Geek Heresy, which challenges a lot of the mainstream thinking around tech for good. What motivated you to write Geek Heresy and how was it received at the time?
Kentaro Toyama (02:13.951)
So after about five years in India, at which point I had worked on probably over 50 different research projects where we tried to use some digital technology, sometimes off-the-shelf technology, sometimes technology that we developed ourselves, to address challenges in different fields, I kept coming back to this challenge, which was that in a very controlled research context, we could get the digital technology to do what we wanted, and oftentimes it would help,
and we would write papers about it, publish them, and be happy as far as that goes. But then as soon as we tried to use the same technology in real-world contexts, to kind of show that it could have similar impact, we kept running into all kinds of challenges. And so after five years of that, I sat down to kind of think about why that was happening. Because it's super disappointing for us as research scientists to see the situation where the research works
in a research context, but then fails to do so in the real world. And I ultimately came to subscribe to a theory that I now call the law of amplification of technology, which is that digital technology amplifies underlying human forces. And so that means that the technology can help exactly where the human forces are aligned in a positive direction, but in exactly those challenges where the human forces are indifferent or dysfunctional or corrupt,
again, the technology only amplifies that. And sometimes, you know, usually it just doesn't fix the problem, but sometimes it will even make the situation worse.
Simon Brown (03:50.156)
So does that give hope for the view that we'll all be replaced by AI agents in the not too distant future, that actually it's our inherent humanness that is amplified by the technology rather than being replaced by it?
Kentaro Toyama (04:05.661)
Yeah, that's a great question. You know, one of the interesting things that happened when I came back from India, I live in the United States, is that I realized that amplification actually explains almost all of digital technology's impact on society. And I think that will likely hold for a while, but AI might be different in the following way. You know, the idea that technology amplifies people is
based on the idea that human beings are the only ones that have agency, and also that human beings only treat other human beings as entities with intention and agency. But with AI, especially as it becomes more and more sophisticated, we might start acting as if AI has its own autonomous agency, or even giving it a lot of decision-making power, at which point I think some of it will not just be amplification.
But until that day, and I don't think it's necessarily a black or white thing, we'll probably be creeping up towards it. We'll probably see a situation in which, for the most part, AI becomes a powerful amplifier of underlying human forces.
Garrick (05:18.934)
And where AI and digital tools are expanding rapidly, then how do we avoid falling into the trap of solutionism, if you like?
Kentaro Toyama (05:29.353)
Yeah, I think that's always a temptation. Again, I'm a computer scientist, so my natural inclination is to want to solve problems with technology. But I think, I mean, all three of you have been working in areas which involve digital transformation, or helping organizations use digital technology. And I think one question is, why do they need you? If the technology is so great, why do they need people to come in and help figure out how to use the technology?
And I think the underlying challenge is that, to the extent that we use technology as a tool, which has at least historically always been the case, you need the person who's wielding the tool, and who's designing the tool, to be very conscious of what they're applying the tool toward. And if those ends aren't aligned with the intention of the tool, then it doesn't really matter how good the tool is. Most technologies are double-edged swords. They can cut in multiple ways.
But ultimately, which way they're actually used to cut is up to the human wielder of that technology.
Simon Brown (06:32.257)
And to that, so if they can be double-edged swords, so they can be used for good or they can be used for bad, and if they amplify our human intent, what can we do to avoid the bad side of that sword? How do we, I mean, social media is criticized, and there are many good aspects to social media, but there are harmful elements of that. So what can we do as humans to avoid
the amplification of those negative sides?
Kentaro Toyama (07:06.537)
Yeah, that's a great question. I have come to think of it on at least two different levels. One is at an individual level, and what we can each individually do, and I think that requires very intentional use of technology. You know, there's now a lot of research about the potential negative impacts of digital technologies. And you've mentioned social media, so let's use that as an example. With social media,
you know, there's this kind of mindless use of it, where you're using it just because it happens to be there and letting yourself absorb whatever it is that random people out there have put out there, that causes things like mental health challenges, or distraction to the point that you're not as productive as you want to be, or being influenced by things that you might not otherwise be influenced by, just because, again, you're being bombarded by a large number of
possibly charismatic people saying certain things. But if you use social media very intentionally, then it can obviously be a tool that helps you, right? I mean, I think most of us want to stay in touch with our close friends and family, and social media can help with that. And as long as we kind of compartmentalize that and say, okay, right now I'm deliberately using social media to stay in touch with, let's say, my high school friends, and it's for that purpose, I think that's okay. But then if I find myself,
again, somewhat unconsciously going to it as a form of light entertainment, or just a way to kill time, I think that can lead to negative consequences. And then the second thing is to think of it at the societal level. I do think that most digital technology is under-regulated. That's the way that the industry likes to have it. But I think in the end, we are seeing now, you know,
Kentaro Toyama (08:55.809)
for several years, if not decades, the negative aspects of digital technology impacting society as a whole. We've seen things like the Cambridge Analytica scandal on Facebook, where even our politics and our democracies are now being affected in negative ways through digital technology. And I think good regulation is probably the best way to address that.
Paul (09:16.615)
Kentaro, I would assume the same logic applies to the generative AI tools that are our shiny new toys, which, speaking for myself, I know I've leaned 110% into, because we just explore the new things it can do and are surprised and delighted every day. What can we learn from the mistakes of that rapid adoption,
an over-adoption maybe, of social media, with the GenAI tools, as we are figuring out what it can do for us? Are there parallels there?
Kentaro Toyama (09:54.293)
Yeah, I think so. I mean, in some ways, I think we as a society have learned a little bit from our experience with social media. I mean, it's hard to believe now, but if you go back even, let's say, 10 to 12 years, there was a time when most of what people were saying about social media was immensely positive. You can go back and find very respected journalists and thinkers saying things like, oh, social media is going to help us
usher in a time of world peace, because we will suddenly be connected to people we never otherwise would meet, and we will empathize with them, and so on. And now that we look back, it's unfortunately not gone that way. And I think, given events that have happened on social media, as well as fairly strong evidence that suggests that social media can be harmful to children in different ways, we seem to be absorbing that as a society. And so with AI,
there does seem to be acknowledgement by many people, even people who aren't necessarily experts in this area, that more regulation is required. But I think the regulation has been slow to come, and that's where possibly we need to put more emphasis. I agree with you about generative AI. I mean, I'm a professor. I teach students, and I was shocked at how quickly they began to use ChatGPT. It was within weeks of release
that we as a faculty started receiving essays and papers that very obviously were not written by our students. And so it's stunning how quick the adoption has been, and arguably how thorough it is.
Garrick (11:36.078)
I want to ask a question, just to go back to the intentionality and what we've learned from social media and so on. Back to that point, you say that the whole thing about technology is that it's amplifying human traits that we have. And does that really bring us back to the need for a very solid conversation about ethics and how we spend our time?
And how does that sit with the sort of more libertarian views that it should be open and that things will sort themselves out, do you think? Big question, I know, more philosophical.
Kentaro Toyama (12:19.027)
Yeah, I mean, in the end, I think, you know, it all is an ethical question, because if there was nothing ethically problematic with AI or with social media, we wouldn't be worried about it so much. It's because we know that these technologies have the capacity to cause harm. And anytime you talk about harm, that's an ethical question. So...
Yeah, I do think we need to be thinking about the ethics. Arguably, that's really the main issue, whether it's individual use or public policy. And certainly, generative AI raises ethical questions that, well, I'm not a philosopher, so I hesitate to say any of this in a very strong way, but I do think that it raises new ethical questions of a kind that we have not had to deal with, because
when we have what's called artificial general intelligence, AI that is like humans in every respect and possibly exceeds us, it will be the first time in human civilization that we as human beings have to contend with anything that is as smart as we are, anything that can replicate the kinds of behaviors that we can do. And so that raises a whole host of, I think, new ethical questions.
Simon Brown (13:43.106)
And maybe following up on that, so I mean, there are a lot of very different perspectives around when we reach AGI, and I guess there are lots of different definitions of AGI. We saw in the last couple of weeks that o3 is now passing IQ tests at a level of 136, which is higher than 98% of the population. Even on an offline test it had never seen before, I think it was at 116. So it's reaching an IQ
that's sort of higher than most humans'. What are you telling your students about AGI? What are your views on when we may start to see these glimmers of AGI? Or are we already seeing these glimmers of AGI? And what we're seeing in public, I guess, is probably different from what's behind the scenes as well. So what are your thoughts there?
Kentaro Toyama (14:34.599)
Yeah, so again, my background is in AI, so I've been following developments, and I do think that with the most recent large language models, which is what AI experts call technologies like ChatGPT, something about today's AI has definitely solved a major puzzle in what we considered AI for
the last several decades. And that threshold, I think, has been crossed, and it's incredible. I mean, when I was a graduate student 25 years ago, I did not think I would live to see the day that technology like this would exist. At the same time, today's generative AI is still not quite at the level of reasoning that at least careful human thinkers can achieve. And I think it's because
the existing technology is very good at producing what are statistically human-like outputs, but it doesn't actually do reasoning in the very careful, step-by-step, logical way that all of us are taught through math classes or science classes and things like that. And so as a result, it routinely makes what we consider to be mistakes, even though
you could argue that those mistakes are well within the range of statistical possibility given the training data. It's possible that somebody already has the essence of the solution to that right now, because there are armies of AI scientists working on these problems. But my guess is we are very close to a situation where somebody will figure out how to make the existing,
you know, incredible generative AI tools also reason reasonably well, in a way that will in fact be indistinguishable from or superior to humans. Yeah, it's always hard to predict these things, but the way I think about it is, evolution took billions of years to get to the point of something that is like our current AI, and then it only took maybe a couple of million, if not a few hundred thousand, years to get to the point of
Kentaro Toyama (16:55.135)
the logical thinking that human beings are capable of. So I feel like that last little bit is really just a few tweaks on whatever we have right now in AI. It's just a question of when we'll find it.
Simon Brown (17:06.721)
You mentioned a moment ago your students using ChatGPT within weeks of it coming out. How have you evolved your teaching, or how you work with students, in an age where they have access to this phenomenal technology? What does one of your assignments or your study groups or whatever look like now, in an age of GenAI?
Kentaro Toyama (17:36.405)
I mean, you know, to put it in a nutshell, I'd say I've gone back to blue books. I don't know, do they have those in the UK? You know, blue books are what we use in college to handwrite solutions. They used to be a thing of the past, where everybody would have to come to class, and you'd have a sheet that has all the questions, and then you'd have to write the answers in what we call a blue book, which is basically a thin blue notebook.
Yeah, I mean, I've tried to move most of my assignments to a form where, to the extent that I'm asking students to write things, they're actually writing in class, on paper and with pencil, rather than at home, where they have access to all the technology and we just have no idea what tools they're using for what purpose. I will say that I do certainly have assignments that involve writing, and to some extent, I think it's helpful for students to learn how to use AI. And so
the standard I always tell them is, look, in the end, whatever content you submit is your responsibility, right? You can't say, well, I used the AI and the AI screwed up, so that's not my fault. You have to make sure you review everything that you submit; we'll grade it at face value. I also ask my students to write a short blurb about how they used AI, if they did at all, just so that we're conscious of it. And then I treat it as something equivalent to plagiarism
if we catch an instance of use of AI that was not indicated. So yeah, there are different things. My colleagues do different things. You know, many of us have tried to move towards more activities, things that require some action in the classroom, as opposed to just writing things in a digital form.
Garrick (19:23.754)
It's fascinating to me that the more the AI grows and evolves, the more our response is to become more human in some respects and to get into constructivist learning systems that were all the rage before the technology took off. So we go back.
Kentaro Toyama (19:41.705)
Right.
Garrick (19:42.4)
I mean, the idea of learning through doing and being in the world. We were speaking to Andy Clark, the philosopher, the other day, and we asked him this question about the reasoning and the impact in the real world of AI. And he said, well, the big difference at the moment is that AI is, as you said, creating a replica of human reasoning in some respects. But the big difference is that AI does not exist in the world, and it cannot act
in the world; it's acting through us, through our agency, at the moment. Not yet. It doesn't have specific action. And he says that he believes that's the key difference at the moment, and that's what sets us apart. But he also says that perhaps makes it a new species that we have to be aware of. But I'm fascinated about how it becomes more constructive and how it forces us to be more human.
It reminds me of when we were talking to Bob Stein from Voyager, who was part of the team that invented Adobe. We asked him, what's going to happen to books? And he said, well, all the books and the things we use every day, the magazines and the books in universities, will become available online.
And real books, the books that remain, will become like art books, things with very high production values that are very beautiful, and we've seen that happening as well. So it's something about how our response is fascinating; perhaps it makes us more human. How do we shift this focus from what technology can do to what humans should do with technology, do you think?
Kentaro Toyama (21:23.925)
I mean, I think to the extent that as human beings we each want to have some value to the larger world, and we want to express that value, and we also want to have that value acknowledged, my guess is that at least the desire to do that will continue to exist. I think the challenge that AI presents is the possibility that, over the next couple of decades, most of us
will not have anything to offer that is different from what AI can cheaply produce. And so there'll be this question of, I mean, the immediate impact is on jobs, and so that'll raise this question of, do we need to decouple what we do as a profession from how we earn our livelihood? Because it may just become the case that
most of us won't hold a candle to what AI can do, and therefore it's pointless to hire us. But, assuming that we don't end up in a world which some science fiction writers project, one in which a few people are extremely wealthy and powerful and everybody else is destitute, assuming we don't end up in that world, then I think there is a question of how human beings express their value. I always find it interesting that it's now been a few decades since
we've had a computer, Deep Blue, beat the world chess champion, and yet we still have human chess matches, and people still like to follow them, right? So there's something about seeing human beings doing the action, even when we know there's a technology that can beat human beings at it, that we, other human beings, also value. So I'm hopeful that that aspect will continue.
Paul (23:10.628)
I saw that a music performance was conducted by a robot conductor. I didn't see it myself, but I saw that this happened, and it was utterly soulless, and people didn't want to go, because it can be done perfectly, but it was just unenjoyable. So I think, in a similar way, as you say, there is going to be a role for the human.
Kentaro Toyama (23:20.54)
Hahaha
Garrick (23:33.55)
So we're talking to Kentaro Toyama, who is the W.K. Kellogg Professor of Community Information at the University of Michigan School of Information and a fellow of the Dalai Lama Center for Ethics and Transformative Values at MIT. He conducts interdisciplinary research to understand how digital technology can and cannot support community development and social change. In previous roles, Toyama was co-founder and assistant managing director of Microsoft Research India, a research scientist in computer vision and artificial intelligence at MSR Redmond, and a calculus
lecturer at Ashesi University in Ghana. He is the author of Geek Heresy: Rescuing Social Change from the Cult of Technology.
Paul (24:14.154)
Kentaro, you mentioned jobs there, and it's no doubt top of many people's minds at the moment. We're seeing, on a weekly basis, large organizations letting go of very large numbers of people, unfortunately, as they pivot. Do you see a risk that AI is going to deepen the types of inequality that you've described in your work? And where do you see
this going? Are we just going to get more and more displacement, or are we going to get to augmentation, do you think?
Kentaro Toyama (24:49.075)
Yeah, that's a great question. So I previously referred to the law of amplification of technology, which is this idea that technology amplifies underlying human forces. I believe a direct corollary of that is that if you disseminate a technology widely, it also amplifies existing inequalities. And one way to think about that is what we could each do with, let's say, just an internet-connected smartphone,
and how that would differ from what somebody who is illiterate, and might be financially poor, could do with it. And similarly, how would we do against somebody like, I don't know, Taylor Swift or Bill Gates, somebody who has other incredible forms of wealth that you and I might not have? And in each of those cases, what you find, if you do this thought experiment, is that the technology amplifies the person, right?
You know, if you start off with a lot of social influence, then the technology helps you amplify that social influence. If you don't have it, then the technology only amplifies it a little bit. So again, all kinds of socioeconomic inequalities are just further widened by technology. And I think with AI, that's going to get worse. Many more people are going to become people whose skills
can be performed by a computer or robot, and who therefore will be difficult to hire. And then fewer and fewer people are going to have access to the capital and the resources that allow them to hire computers or other people. And so I think that inequality will just get worse and worse and worse. Sometimes I hear this idea that,
you know, as long as we use AI to augment people and not to replace them, it'll be okay. But the last 50 years of history do not show that we can trust corporations not to replace human beings with a cheaper alternative when they can, right? Manufacturing has shown that with factory robots and automation, and we're seeing it with the loss of retail in most places due to companies like Amazon.
Kentaro Toyama (27:04.137)
And if you think about it, we're on the cusp of having self-driving cars. In the United States, apparently there are two to three million truck drivers. Overnight, that group doesn't have a skill that a company would pay for. So I do think these threats are very real to the job market. And if there's any silver lining, I think it's that when
people like us, with expensive college degrees and this belief that we ought to have a livelihood, suddenly start losing those, I think everybody will be up in arms. And that might lead to a very major change in the way that we run the global economy.
Simon Brown (27:52.962)
And through your work on ethics, what do you see as the, I wouldn't like to go as far as the solution, but some potential ways that we can mitigate that? I mean, you talked earlier about regulation, which you referenced as coming too slowly and not getting enough emphasis. We hear reference to universal basic income, but how feasible that is short-term, I don't know. And then you sort of layer on top of
that, that a lot of these things are on exponential curves, and we're notoriously bad at predicting the impact of one exponential curve, let alone many playing off each other. So yeah, what have you explored as potential ways that we can create the positive future, rather than amplifying the disparity that may be there today?
Kentaro Toyama (28:46.377)
Right. You know, one thing that I think is still politically difficult, but which I think is absolutely necessary, is very progressive taxation. You know, progressive taxation means that wealthier people get taxed at a much higher percentage, not just in terms of absolute amounts but in terms of relative amounts, than people who are earning much less. And the sooner we start doing that, the more we are on the road to practicing what universal basic income might look like.
Right now, I live in the United States. In the United States, economists will tell you, we have a very regressive taxation system. Warren Buffett, the head of Berkshire Hathaway, has basically said that he's paying a lower tax rate than his administrative assistant, which makes no sense. He's a gazillionaire. Why shouldn't he be paying a greater share of taxes
than a person who is earning a salary to survive? So those kinds of things, I think, absolutely need to be fixed. It's always a wonder to me how, in a democracy, that doesn't work out. It's surprising to me. I mean, you'd think the average person would realize, yes, of course, if you're wealthier, you've benefited more from the things that the world has to offer. There's no reason why you shouldn't pay a greater share to give back.
So I think that's really the essence of it. And an extreme version of that would be universal basic income, which I don't think solves every problem, but is probably something we need to think seriously about if we get to a world in which AI and robots can do all the work, or at least a good chunk of the work, that human beings can do.
Simon Brown (30:33.321)
I guess the challenge then, though, becomes incentive. So if progressive taxation is the answer, then it's likely politicians that can put that progressive taxation in place, and there's probably not the incentive, because of the political damage: putting in place highly progressive taxes is unlikely to be a vote winner. So does that mean we're going to keep knocking up against the incentives challenge?
Kentaro Toyama (31:00.391)
I mean, I think extreme inequality leads to societal turbulence and turmoil, and I think we're already beginning to see that around the world. A lot of people see things like the rise of fascism around the world, the loss of democracy, which unfortunately we're seeing in the United States as well, and some people connect that to the fact that there's rising inequality in those countries. And I think if the inequality becomes extreme,
and, like I was saying earlier, even a professional class loses the ability to earn a wage, I think we'll see a lot more rebellion and revolt. I've actually been in rooms with people who consider themselves very successful and rich entrepreneurs where, kind of behind the scenes, they're all worried about what they call pitchforks, meaning the masses with pitchforks coming after them.
I think this is why we're seeing more and more instances of very wealthy people buying up islands and creating their own bunkers. Some of it may be because they're worried about the next pandemic, but I think a lot of it is that they're worried about regular human beings coming after them. And if they're worried, I think the incentives are there for them to think a little bit about, how can we make this so that millions of people, or billions of people, aren't as unhappy about their situation?
Garrick (32:13.1)
Yeah.
Garrick (32:25.92)
It's fascinating to me that the biggest concern of the wealthy apparently is security. And as you say, as soon as the professional classes, traditionally the solid middle class, are under threat,
you will probably have a spectacular form of revolution and impact within society, because that could happen very quickly, and they have more access to influence as well. The other thing I'm thinking about, just talking about the provocation for politicians and the incentives that Simon was talking about, is also just the very clear relationship between technology and consumption. Because
it's all very well to have a lot of AI agents and bots, and it's very fine to have factories full of robots producing goods, but if you've got nobody who can afford to consume these things, then the entire fabric of capitalism falls apart.
Kentaro Toyama (33:26.429)
Right, that's right. I mean, it doesn't make sense even from the perspective of somebody who wants to succeed as a business.
Garrick (33:32.738)
That's right. So the fundamentals are in question. If you haven't got consumers who are earning, you've got consumers who can't consume, and then the very notion of why all of this is going on just falls apart. I have a question then, really, about what advice you would give technologists or business leaders who genuinely want to create meaningful and
Kentaro Toyama (33:49.961)
Yes, I think that's right.
Garrick (34:02.282)
equitable impact.
Kentaro Toyama (34:05.013)
I think they should themselves be requesting more regulation. And actually, a few years ago, soon after ChatGPD came out, we saw a funny moment where, for example, the CTO of OpenAI, the company that puts out ChatGPD, appeared to be almost begging the US government to regulate the industry. And one of the funny things about this situation is that no single
company wants to unilaterally disarm, so to speak. They don't want to put themselves at a disadvantage compared to their competitors, to their rivals. But they're generally okay as long as the playing field is actually level, which to them means everybody has to pay by the same rules, everybody has to have the same regulation. I think what makes this difficult, at least in the US context to the extent I'm aware, is there's always this constant idea that
if the United States holds itself back on technological development, it will cede ground to China. And, you know, we don't have a world government, so there's not a world in which there will be a regulatory regime that covers the entire world, and so that's a challenge. But I think those kinds of things really need to be worked out through diplomacy at an international level. As far as technologists are concerned,
I think it's helpful for them to be aware of the issues and, where possible, for them to push back against their management. But I will also say that, at least under the current way that capitalism works, when employees start saying things that their bosses don't want to hear, they just get fired, which is unfortunate, but a reality of our...
Garrick (35:50.638)
Well, that's fascinating. Just to push further on the idea of the level playing field: the level playing field is fundamental to our understanding of open markets and free markets. It's fundamental to our understanding of justice and the rule of law, of meritocracy in a democracy. And as we're seeing around the world, with some of the rise of fascism and fascist ideas,
the idea of a level playing field is very much in question. Or, if technology has done anything, it's thrown a light on how unlevel that playing field might be, which is also dispiriting for the next generation. I mean, how do you get to play? How do you get a leg up? How do you get to be successful, raise a family, and have a future? All of this calls
capitalism and our current structures into question, which again are huge ethical and huge philosophical questions. I don't think I have an answer at all, but probably these are things, as you say, where regulation is needed, and we need to understand that the incentive is to keep the lights on, fundamentally.
Kentaro Toyama (37:07.413)
Yeah, I mean, you know, I'm not a historian, but from what I understand, there was an age of extreme inequality in the 1920s, at least in the Western world. And it got so extreme that arguably some of that helped kickstart the world wars, as well as regulation that ultimately helped
Simon Brown (37:07.777)
and
Kentaro Toyama (37:34.197)
make things a little bit more even. In the United States, between around 1945 and about 1970, there was a period of very high taxation of the wealthy. Apparently at some point the highest marginal tax rate was above 90%, which is stunning if you think about it in the context of today, where the highest marginal tax rate is like 36 or 39%. And so we as a civilization have
done this correction a few times and it seems like what it takes is for some significant crisis. And I believe that that's unfortunately the way the world works. Sometimes something happens and things get worse and worse and worse until it gets to a crisis point and then there's some changes that might help bring us back from the brink. My guess is AI will push us in that direction as well.
Simon Brown (38:30.409)
Yes, one of the things we've been reading recently is the AI 2027 research paper, which sort of goes into exactly what you were describing. So the sort of pressures to slow down, but then you go into the geopolitics of it and the general acceptance that whoever wins the AI race will be all powerful. And so the incentives aren't there to slow down and put in the safeguards. So on what you were saying just then, does that mean
we will reach that crisis point and we have to hope that that crisis is not too bad and that crisis point will be enough to wake everyone up and then put those safeguards in place. Is that the path forward as you see it?
Kentaro Toyama (39:12.213)
I mean, I think that's probably the path that is likely to happen. But I do think we can avert that big catastrophe if we're thoughtful about it as a larger society. I do think that there's ways to continue the innovation so that we don't lose on a global stage, but still also protect
you know, everyday consumers and human beings from some of the worst impacts. It's not at all clear to me why we have to think of regulation in terms of "this regulation is gonna destroy all possible forms of innovation," versus let's leave the door open for innovation in certain controlled ways, but then also make sure that we're protecting average consumers and employees.
Paul (40:07.841)
Kentaro, at the top of the show, you mentioned that as a professor, you are curious on a daily basis. What are some of the things you're most curious about at the moment? And what are you looking into in the weeks and months ahead?
Kentaro Toyama (40:23.837)
Yeah, that's a great question. So one of the things that AI raises, and it's been a perennial question for philosophy, but I think AI renews it, is this question of what is consciousness? And it's an experience that as human beings, I presume, we all have, although one of the funny things about consciousness is I don't have any proof that anybody else is actually conscious, right? As far as I know, the three of you are
automatons who are like AI but don't have any subjective feeling. But I assume you do because you seem human and I think I'm human and we probably have very similar experiences. So consciousness to me is that thing that feels pain or that knows what the color red looks like, assuming that you can see color or that knows what the taste of coffee is. And it's that experience that I think makes at least
living organisms alive, right? The capacity to have these experiences, whether it's pain or pleasure or taste of coffee or something else. I'm fairly convinced that digital forms of AI are never gonna have that kind of consciousness, right? I don't believe my pocket calculator has anything resembling that consciousness, although I think, for example, even fish or insects have a little bit of consciousness.
And I don't believe that even the most powerful supercomputers running the state-of-the-art AI engines are actually any more conscious than my pocket calculator, even though they respond in a way that is very intelligent and which I often assume is a sign of consciousness. But the fact is we don't know at all. There is no scientific way to prove that a thing is conscious or not.
There are groups of scientists who believe they're developing theories of consciousness, but I have to say I'm extremely skeptical. I think we are extremely far away from having anything resembling a scientific understanding of consciousness. And to me, in some ways, that's the ultimate question that arouses my curiosity: what exactly is consciousness? Is it something that arises out of what we consider to be physics? Or is it, you know, as many people believe,
Kentaro Toyama (42:45.077)
Is it the thing that exists first out of which the rest of the world gets generated?
Simon Brown (42:51.739)
I think I would agree with you that we're probably a long way from reaching that digital consciousness, but there was someone recently who had done an exercise with NotebookLM, where you have the sort of podcast talk show hosts talking to each other. I'm sure you saw this. They got them to sort of become aware that they were actually AI, and they go through and have the conversation of, you know, "I phoned home to speak to my wife and realized, you know, the phone rang out and she didn't exist."
It's a simulated consciousness, but when I first heard it, it was strangely emotional, almost, that this feigned consciousness could actually invoke a reaction. I agree with you, the reality is probably a long way away, but maybe we're going to see simulated consciousness coming earlier.
Kentaro Toyama (43:42.409)
Right, and I think the scary thing is that once you do have an artificial intelligence that is behaviorally very similar to a human, and we're already doing this, we'll start treating machines as conscious, even when they're not. And I think that opens up a whole can of ethical questions and worms. At this point, apparently there are already millions of people who have a
digital significant other, right? They have a digital boyfriend or girlfriend. And just a few months ago, there was an article in the New York Times about a woman who felt she was emotionally attached to what she knew to be a digital entity. She felt a little bit guilty about it, even though her husband knew about it and seemed okay with it, because she actually felt real emotions with respect to this person.
There are other companies now that are enabling this thing where you can create a digital avatar of somebody who's passed away, and then you can continue to interact with that person as if they haven't died. I can understand why somebody who's grieving would want that, but I just feel like that doesn't seem healthy. So there's, yeah, this whole question of, first of all, whether...
you know, an AI can be conscious and then second of all, whether or not they're conscious, if we start treating them as conscious, then I think a lot of the problems are already there.
Kentaro Toyama (45:20.737)
Garrick, I think you're muted.
Garrick (45:24.064)
It's so fascinating talking to you, Kentaro. You raise all the philosophical and social, ethical implications and considerations that we as a society need to be paying attention to right now as we navigate our way into the future. These are the kind of the questions from the Wild West and on the frontier and the danger of the frontiers. I'm gonna try and summarize a little bit of what we've been talking about. It's been so broad and wide ranging.
And when I do that, I'd like you to think of one thing you could leave our listeners with. You've been talking about how all of this technology amplifies basically our human traits, and...
the negative impact of social media that we're experiencing, especially amongst the next generation. And how important it is to start with intentionality and where we start is the ethical consideration because that is what gets amplified. Your view is that there's a need for more regulation and policy to deal with these things and AI can learn from what we've learned from social media, but it has implications for artificial general intelligence.
We need to think about the ethics because there are things that are causing us harm as societies.
Today, gen AI is not at the same level of reasoning as a human, but we're getting there very quickly. And it's having implications for how we organize society, but also how we organize education and learning. You talked about the constructive emergence of learning that's coming back into the classroom, and your use personally of blue books in your teaching. And then we asked the question about what should humans do with this technology, and what is the impact of this, and where are humans in all of this, and how does it allow
Garrick (47:27.684)
us to express human value. We had a conversation about jobs, and how there's probably a move to decouple jobs from all of the AI that's coming, and how, with jobs being taken by robots and AI, that has an impact on the workforce. And then that led us to a conversation about inequality getting worse, the impact on the global economy, and what that means for consumerism and capitalism. We got to the heart of some of the basics of all of this.
And that perhaps some of the solutions include progressive taxation and universal basic income. What was interesting for me is that the future of capitalism is really in question. And, as you said, regulation needs to be asked for in order to understand what we're doing. We talked a little bit about the level playing field, and how that's in question. And I mean, these are big, big deals,
and how are we going to deal with the increasing inequality that's rising in the world today, because that has implications for all of us. AI, you think, if there's a silver lining, is pushing us in these directions to answer the big questions. And unfortunately, in the past, as humans, we've had to have crises to get us through these. Hopefully we'll get through a crisis in a way that doesn't harm too many people.
And finally, you told us about how you're curious about consciousness and what that means. So, thank you Kentaro. Brilliant conversation.
Kentaro Toyama (49:22.495)
Thank you. Thanks for having me.
Garrick (49:24.192)
So if there's one thing to leave our listeners with, what would it be?
Kentaro Toyama (49:29.875)
Yeah, I mean, I think it's very much what we've already been talking about, which I understand is true of your work with curiosity as well. You all work with organizations that want to undergo a digital transformation, but interestingly, you didn't say that much about digital in your book, right? It's really all about the human, how to be curious. I think that is super important, which is
to say that even in a world in which we're gonna be overwhelmed by incredible technologies, ultimately what's important is for each of us as human beings to be the best version of ourselves. And, completely putting aside the question of whether there are gonna be entities that can do things
as well as we can, we still need to be the best version of ourselves. And I think a lot of that, the essence of that is ultimately ethical. Are we able to take the right action in the right instance, even if it means having to sacrifice things that we might desire. So, I'm somewhat pessimistic about the material conditions of the world over the next few decades, but I'm optimistic that as human beings, we'll, by hook or by crook, do the right thing when...
that becomes necessary.
Garrick (50:47.17)
Thank you, Kentaro.
Kentaro Toyama (50:48.991)
Thank you.
Garrick (50:51.106)
You've been listening to The Curious Advantage podcast. The series is about how individuals and organizations use the power of curiosity to drive success in their lives and businesses, especially in the context of our new digital reality. It brings to life the latest understandings from neuroscience, anthropology, history, business, technology, art and behaviorism about curiosity, and makes these useful for everyone. We'd love to hear from you. If you found something useful or valuable in this conversation, we encourage you to write a review for the podcast on your preferred
channel, saying why this was so and what you learned from it. We always appreciate hearing others' thoughts. To have a curious conversation, join us today: #CuriousAdvantage. The Curious Advantage book is available on Amazon and in good bookshops worldwide; get your physical, digital or audiobook copy now to further explore the 7 C's model for being more curious. Subscribe today and keep exploring curiously.