- Welcome to "The Minor Consult," where I speak with leaders shaping our world in diverse ways. I'm your host, Dr. Lloyd Minor, Dean of the Stanford School of Medicine and Vice President for Medical Affairs at Stanford University. Today, I'm delighted to be joined by award-winning journalist and author, Jeremy Kahn. Jeremy is the AI Editor at "Fortune" magazine, where he leads coverage of artificial intelligence, and writes the "Eye on AI" newsletter. He's also the author of the recent book, "Mastering AI, A Survival Guide to our Superpowered Future." A former Managing Editor of "The New Republic" magazine, Jeremy has reported from around the globe over the past two decades, and his work has appeared in numerous top publications, including "Bloomberg," "The New York Times," "The Atlantic," and "Smithsonian." Jeremy, thank you very much for joining me today.
- Oh, Lloyd, thanks for having me here.
- So Jeremy, as a journalist over the years, you've had a front row seat to the gradual and now the rapid evolution of AI. What surprised you the most about how this transformation has unfolded?
- Yeah, I think the thing that was most surprising, and it, I think, caught a lot of people somewhat off guard, was just how popular ChatGPT was when it debuted in November 2022. I'd been following AI at that point for a number of years. I really started covering it in 2016. And when you sort of have been covering it day in, day out, it was very clear that these generative capabilities were getting better and better. In fact, it was very clear that you could have these extended kind of chatbot-like conversations with these large language models. And it actually didn't seem to me, when ChatGPT debuted and I first looked at it on the day it came out, that it was really all that different from some things that OpenAI had actually put out just during the summer prior to that, which they had put out really for AI developers to play around with. And this seemed very similar in a lot of ways. So I was surprised by how viral it went and that it took off. But then the more I thought about it, I realized why that was: this was the first time, I think, for most people that you had such a simple interface to interact with one of these large language models, and the fact that you could talk to it about anything was really mind-blowing to people. And I think that the fact that it could converse in natural language, which most people weren't aware that AI models could really do, you know, awakened people to these possibilities. And suddenly it was like, oh my God, AI is here. And it really opened up usage of the technology to people and really got everyone in the world, I think, thinking about what this technology's going to do.
- I think that's certainly the case. And we've only seen that underscored and amplified since the introduction of ChatGPT as you described. Jeremy, if we turn for a moment to your book, your book explores the potential benefits of AI for individuals in society over the next five years, as well as the threats if it remains unchecked. What do you see as the most significant benefits and what are the biggest threats?
- Sure, I mean, I'm overall an optimist and I think that this technology is going to have some really beneficial transformative impacts. And I think the most significant ones are actually in an area you know well, which is medicine and health. That's one of the areas I'm most excited about in terms of where this technology is going to take us. I really do think we are going to see a lot of new therapies roll out at a much faster pace than before. I think this technology is really gonna help us make all kinds of scientific discoveries, both in basic science, but also in applied science, and particularly in medicine and health. And I think there's the opportunity to have much more personalized care and to really allow patients to take charge of their own care in ways that haven't been possible before. So I'm very optimistic about that. I'm also very optimistic about what the technology may do in education, which is actually an interesting area because that was one of the areas where there was so much moral panic, I think, and to some extent still is, over what this technology is doing. Teachers are very concerned, rightfully so, that students are using it to cheat, but I think deployed correctly, there's this tremendous potential to enhance learning with this technology. There are already a number of systems that have been rolled out that are designed really to prevent the student from cheating very easily with the AI and instead to use the AI as a kind of personal tutor. It's designed to use the Socratic method, and that can really help students grasp concepts. It can present material at exactly the level the student needs. It's never gonna become impatient. It's never gonna get tired. You know, the student can make the same mistake over and over again if they want, but the thing's still gonna be very patient and try to walk them through and try different approaches to help them learn. And I think the ability to put that in the pocket really of every student and to actually encourage lifelong learning for people, even adults outside of school settings, there's tremendous potential there. So I'm very excited about that. In terms of the downside risks, I do believe there are some significant ones that I'm concerned about. One of them is that if we become overly reliant on this technology, if we do allow students to just, you know, use it to cheat, there is this real danger I think of losing some vital cognitive skills that we have. I'm very worried about people losing the ability to write altogether, for instance, 'cause I think writing and thinking are actually closely bound activities. And I think we often become better thinkers through trying to write down our ideas and to put them in this linear form. And it's something you don't get from other things. I mean, I think it was already true that a lot of people were criticizing, you know, for instance, PowerPoint, and people were talking about PowerPoint brain where, you know, you can spew out a couple bullet points, but that's not the same thing as a narrative. And people often miss the connections between those bullet points when they do that, which is less true when you try to do linear writing. So I'm concerned about that. I'm concerned about the impact on critical thinking as another cognitive ability. I'm worried about, again, how easy it is to have the AI model do a lot of the hard work for you and not think too hard about, oh, is what it's suggesting really right? What were the sources for the information that it's giving me? I worry about that quite a bit. 
And finally, I worry a lot about degradation of sort of social skills. We already see a number of people who like to use these chatbots say, "Oh, it's like having a friend who's always there," or trying to use them for therapy. A number of the AI vendors are trying to market the systems in ways I find a little bit disturbing as a kind of cure for loneliness. And it's absolutely true that we have a problem with loneliness in a lot of developed countries, but I'm not sure this is really a cure for it. In fact, I worry it may make things much worse, because it's a lot easier to have a relationship with a chatbot: they have no actual needs that need to be met. And to some extent they're kind of designed to be pleasing to the user. You know, there have been some interesting incidents where that's gone too far and the AI models have become too sycophantic, but in general they're supposed to sort of be pleasing to us and not challenge us too much about what we might think or say. And I worry that that'll be very comfortable for people. They're like, "Oh, this is a lot easier than having to go out and actually meet people who might, you know, challenge what I say. They might not like me, they might not like what I say." And I think for people who might already have social anxiety, this could just become a crutch. And I worry that we're gonna find people much more isolated actually than before.
- You know, pursuing that thread that you just described in terms of its impact on education and potential impact in decreasing or devaluing critical thinking, what do you think are some of the strategies that educators and others can use in order to prevent that from happening? I guess the way to turn that question around is: what's the appropriate use of AI? You mentioned tutoring, for example, where it will always proceed at the pace of the student. But what about other skills and activities that we traditionally have associated with human intelligence, the assimilation of information and then the categorization of facts and details based upon a base of knowledge, things that traditionally have required human intelligence but now increasingly can at least be augmented by generative AI and maybe in some cases replaced by it? So how do we safeguard, and perhaps safeguard is not the right word, how do we use it appropriately to accentuate human intelligence and critical thinking?
- Yeah, those are really good questions. I'm sure you could spend all day just on that topic, but.
- Sure.
- I think, as I suggest in the book, that there are a number of ways potentially to do this. There are certainly a number of ways around assessment. One is to do something that was already starting to be a trend in education prior to the rollout of these AI models, which is this idea of the reverse classroom or the flipped classroom. Instead of giving the students homework, you know, where they're supposed to work out problem sets or complete worksheets or write essays as a way of reinforcing the knowledge, while what happens in the classroom is lecturing and kind of the delivery of information that the student is supposed to then take home and digest, you kind of flip that around: potentially with the help of AI, you essentially have the information delivery take place at home, but then you assess the student in class, and what you're asking 'em to do in class is maybe the problem set or the writing assignment, or, I suggest, we could do a lot more actually with having students interact with each other in class, 'cause that's something you're not gonna get from AI, actually getting students into group discussion. And I think by observing that as the educator, it becomes very clear which students actually know their stuff and which don't. So that I think works. I think oral assessments, you know, are a way to go. I think a lot of fields are gonna have to think, though, more broadly, going back to sort of first principles: what do we want someone in this field to absolutely know themselves, and what might it be okay for them not to know? You know, it's very interesting, but I think a lot of fields are gonna have to have that debate, and it might be appropriate actually at the level where students are first learning a profession or a field that they do have to know some of this stuff because it is fundamental. But then we'll have this other question of, if that's a skill we really want people to continue to have throughout their career and we feel like it's important for the person to actually know this stuff, then I think we're gonna have to sort of contrive situations actually where you can't rely on the AI.
- Sure.
- And you're gonna drill it. Interestingly, one field where we've been dealing with automation for a very long time is actually aviation, and for the book, I talked to a number of experts in aviation about how they do this, because obviously nowadays, planes can be flown mostly by autopilot through large segments of a flight. But we still want the pilot to have the skills to take over the aircraft if something goes wrong. And it's actually become a problem: the more automation we've built into these aircraft, the more there's this sense that pilots are losing their skills, and if something does go wrong, you have this issue of what's called automation surprise, where it takes the human a lot longer to recover and actually resort to the manual process. And the way in aviation they've tried to deal with this is through simulated drills that pilots have to take at least on a yearly basis, where you simulate various emergency situations occurring and you have to kind of go through that. Also, I talked to people at NASA about how they train astronauts, who also have to rely on a lot of automated systems, and they've done a lot of work on how you avoid automation surprise. And they found, again, there's no real substitute for essentially drilling it. And occasionally, even in situations where the astronaut might not expect to be drilled, you introduce errors into the systems, you know, to see if the astronauts can catch them and to force them to remain vigilant. And I think in a lot of professions, we may have to start thinking about doing similar things to reinforce those skills we still want the human to have, even though in, you know, nine out of 10 cases, it's okay for the human to rely maybe on some decision support, you know, from an AI system.
- Jeremy, you've thought and written a lot about the potential impact of generative AI across multiple economic and business sectors. And I was wondering if you could summarize where you think we're already seeing significant impact, and of course there are a number of things coming out today about the effect of AI on jobs and how that likely will differ in terms of different business groups, different sectors. But how do you see things now, and of course they continue to evolve very rapidly, but maybe if you project into the future three to five years from now, where is AI going to have the most profound impact on businesses and where do you think it'll have the most profound impact on jobs?
- Yeah, that's a great question. Well, I think so far where we've seen the largest impact has actually been in software coding. That's the place where you see widespread use of these systems. Almost everyone who codes now is using an AI system of some kind to either help complete the code they're writing or, in some cases now, basically to write all the code for certain segments of the code base initially, which is then just kind of reviewed by the human engineer. So there's been a profound impact already on software coding, and that is translating into an impact on jobs. You keep hearing about companies saying they're gonna hire a lot fewer coders, especially junior coders, than in the past. So that's a case where you've already seen tremendous impact. The other area where we're already seeing quite a lot of impact is in call centers and kind of customer support. A lot of people are now using AI there. It's interesting: when ChatGPT first came out, there were a lot of people who thought, oh great, we can automate away all of the human call center, you know, handlers that we had, and some people are trying that; I think they've had very mixed results. It turns out that the range of questions you can get from customers is pretty diverse. And there are still gonna be these edge cases that are gonna be very hard for the automated systems to handle. Even if these systems can understand the language pretty well, the actual process that the agent has to go through to successfully resolve the inquiry is actually very complicated and very hard for these systems to accomplish effectively often enough for us to be confident that we can just take people outta the loop. So I think a lot of companies that thought, oh, we can take all the call center people out of the equation have actually rowed back on that. But what almost all the call centers are doing is deploying some kind of a copilot for the call handling agents that's essentially listening to the call along with the human agent and advising them in real time what they should say to help resolve that question from the customer. And it's also pulling up documentation, literature, that the agent can use to tell the customer, "Oh, here's what you need to do." You know, it's pulling up the customer's account information right in real time. It's eliminating work for the agents and helping them resolve the calls much more successfully and faster. So that's I think what we're seeing there, which again, may have an impact on jobs. You may not need as many call center handlers to grow a business to a certain size. So there probably is a job impact there, even though you're not seeing, you know, everyone who's working at a call center lose their jobs yet. And I'm not sure you will, again, for those reasons of all those edge cases. But I think if you project out five to 10 years, you know, we'll see a kind of creeping automation of a lot of things in a lot of people's jobs, where AI will perform some of the tasks; it probably will not perform the entire job. So you're still gonna have people in these roles, but it is definitely the case that you may not need quite as many people for a given, you know, volume of business. And I think that is going to have, you know, some impacts on the job market. 
On the other hand, to be slightly optimistic about it, I have heard from a number of people who have been able to start sort of small businesses, sort of one-man shops essentially, and grow those businesses to some significant level of revenue without having to hire additional employees, because they're using AI to do a lot of the functions, you know, helping them with the bookkeeping, helping them write marketing copy, helping them code. They don't have to hire coders. So maybe this person, at least to get their initial small business up and running, can just get the AI system to write all the code that they need, and they can get it to design the website. There's a lot of ways where it could actually help in business formation and help a lot of small businesses grow, which would be good for the economy. Whether that compensates, though, for the jobs that we will not need, I'm not sure. I think the jury's very much still out on that.
- You've talked about the jobs that we may not need as many of, maybe call center operators, for example, and I like your example of having generative AI work in the background so that it's giving the human the information that is most useful to them as they relay that information and help the customer solve their specific problem. But are there categories of jobs that are likely to be created because of generative AI? And maybe, since you're a technology reporter, you focus on AI, but you have knowledge of, and you've covered, technology more broadly: if we look at previous transformations fueled by technology, what can we learn from the job creation that's come out of those? Of course, what came out of the creation of the internet and the emergence of software was the need for a lot more programmers. And as you've just said, that need is probably going to be attenuated, and every programmer is going to have an AI assistant doing work that previously humans would've probably done. But do you see categories of work and of jobs that are gonna emerge de novo now because of large language models and generative AI that we hadn't thought about before?
- I think the answer is yes, and I did look at this, you know, I worked very hard on this when I was researching the book. It is true that every other technological, you know, revolution has actually resulted in more jobs being created in the long term than jobs being lost. And I don't think there's a reason to think that AI will necessarily be any different. But on the other hand, and this is what the research shows us, it's very hard to predict at the beginning of a technological wave, or before it's really taken off, what exactly those new jobs and new roles will be. So it is very hard to say exactly what those new jobs are going to look like, but we can be pretty assured from the literature that that probably will happen. There will be these roles, and I can point to a few that we've seen emerge so far. It's unclear if these new roles are going to last or, if the technology continues to get better, whether they actually won't really be necessary, but I'll just mention two of them. One is this role called a prompt engineer. That's somebody who's very good at talking to an AI system essentially to get it to do what a company wants it to do. And there are people out there who are just better at figuring out how to talk to these large language models, what a prompt should look like. They tend to be very good communicators. And so, it's interesting: these prompt engineers are essentially getting an AI system to do something within a company that the company wants. Traditionally, if you wanted to get a piece of software to do something, you'd have to have a computer programmer, someone with a computer science and coding background. Now they're hiring a lot of people who actually come out of, like, you know, English departments, because they're good at writing, they tend to be clear communicators, they can give instructions in a very clear way. And some of those people have been hired at very high salaries to be what's called prompt engineers. Now, there's some issue about whether that remains the case going forward. Actually, what's happening is that the models themselves can sometimes be used to generate in an automated way the prompts for either themselves or for other models. So whether you're still gonna need those people who are prompt engineers for too much longer, I don't know. The other job I would say that AI models have created is a whole new set of jobs around security. And this is sort of different than traditional cybersecurity. These are people thinking about, you know, how could these models be tricked into overcoming their guardrails? What new threats does having an AI agent or AI system sitting in your business, you know, represent? And it's sort of a different skill than the traditional cybersecurity skill, where again, it was much more based around code. These are people who are just kind of, you know, very clever about putting their head in the space of a potential nefarious person who wants to do something. And so you're seeing a whole group of companies emerge to do AI red teaming, which is a whole new area. So that's an area of jobs. You're also seeing a lot of people, as these AI systems are getting used, come into new roles in audit and compliance. And so I think there's gonna be quite a lot of roles around that that are being created, and you're already starting to see that. 
Beyond that, again, it's very hard to predict exactly what the new categories of jobs might be, but, you know, my research would seem to suggest that we could be relatively assured that there probably will be some.
- Jeremy, one of the things that those of us in health and healthcare find challenging, at least conceptually, is that we share the enthusiasm that you've expressed about the transformational nature of large language models, chatbots, everything we're seeing with this rapid deployment and evolution of large language models, but an analogy would be if we're looking at a traditional algorithm that, for example, adjusts the dosage of a medication based upon someone's kidney function or their liver function. First of all, we know the structure of the algorithm, we know every node of the algorithm, and we can give that algorithm the same information over multiple different iterations and we'll get the same result. And also we can tweak the algorithm, based upon outcome, in ways that we know it needs to be tweaked. With these foundation models, of course, they're so vastly more complicated that that sort of deterministic approach to understanding what you're getting out when you put something in, when you ask a question, doesn't apply, and there's also the simple fact that you can ask the same question multiple times and get responses that are different. And although hallucinations have gotten less frequent now that the models are being trained with more and more data, it's still hard to know, unless you know the field really well, when what you're seeing is hallucination and when it's, if you will, the real thing. And of course when you're dealing with stakes as high as taking care of a very sick person, that becomes highly consequential. So I'm curious at a conceptual level how you think about that. I mean, you've thought very deeply about the evolution of these models and generative AI: are we ever going to get to the same degree of algorithmic certainty we have today for a relatively simplistic algorithm like adjusting medication dosages? Are we ever gonna get to that point when we're asking a large language model what we should do to treat a patient that has a very complex series of findings and conditions, where the information we're getting back from the large language model is pretty explicit and maybe not easily validated?
- Hmm, no, that's a really good question. I think there's a real debate in the field about whether hallucinations will ever essentially be curable with the current architecture that we have for these systems. There are a number of people who say no, that there's always going to be a fairly large degree of potential error in what they say; it's just the nature of how they're designed and how they're trying to interpolate between data that they've seen in their training. On the other hand, I think there are ways to use them that are extremely useful, where you might have, you know, the really clinically important bit of this being done by a static algorithm, for instance. But you have that information from the static algorithm going to a large language model that is taking that in, maybe with some other information about the patient, and then making a suggestion. And even today we have the ability to ask these models to explain, well, what is your rationale for making that suggestion? Now there's a whole nother debate about whether they always report the rationale accurately, but at least they give you a rationale. And I think then the doctor, or the healthcare professional, can kind of look at that and say, "Yeah, that makes sense," or, "No, that doesn't make sense," and if it doesn't make sense, I think, you know, you throw that out, you wouldn't take that result. Now there are some other AI systems that, you know, are kind of black boxes, where you could feed them lots of data and just ask them to make a prediction about a patient outcome. That's more intriguing, because then you might get answers that are actually valid but are highly counterintuitive. But I think, you know, to deploy something like that in a medical setting, you'd really want to start to get some information about why, you know, what is it that's led the model to this counterintuitive result. And the good news there is that the field is making some progress, although we're a little ways away yet from being able to deploy this in commercial settings, in being able to figure out, like, what is going on inside this model that, you know, is causing it to suggest that this is the case. And there's been some interesting work on this with some of the earlier, again, deep learning systems, kind of these black boxes that were trained to play games such as "Go" or chess, and often came up with ways to play that were quite astounding to, you know, grandmasters in those fields.
- Exactly.
- Really counterintuitive, and it really wasn't clear sometimes, well, why does it think this is a good move. But they've actually analyzed some of these systems now, and it turns out they had actually come across, you know, interesting tactics that are more or less valid that human players had ignored for years or just weren't aware of, or a kind of dogma had developed, I think, in the way humans would play some of these games, where they placed a lot of emphasis on something like, in chess, the value of the pieces, whereas it turns out that, you know, the space you have on the board and your position often matter a lot more. And one of the things that chess experts who analyzed these AI games played by these very good deep learning systems found is that they placed a lot more value on sort of maintaining space in the middle of the board than they did on the value of the pieces that they might lose to gain and maintain that space, which was interesting, and we've been able to sort of analyze what's going on inside these networks and see that that's kind of how they're thinking about things. So there's hope that some of those same techniques might be applied later to some of these large language models and also maybe give us some assurance about what it is they're thinking. But I think for the moment, we don't have that. And so I would be very cautious about any kind of system in medicine that would rely primarily on a large language model or transformer-based system to make a final decision. I do think it's more interesting to use them where they're taking in information from more static algorithms and standard tests and are just trying to integrate that and compare it to kind of a knowledge base. Essentially they're kind of doing what a doctor might do when you have a bunch of test results and then you're consulting the medical literature and you're saying, "Okay, well, based on that I think, you know, this is the most likely scenario or outcome, or this is what the best course of treatment might be." And again, the systems have a way you can kind of get them to talk about their chain of thought. So you have some way of validating, oh, what is it that the model is seeing about these results that, you know, makes it think this way. I think that's kind of where we're at with these systems. Now I think you're right to be very concerned about this, though. I don't think you want to deploy these systems for the most part right now in any case where they're going to have the final say on anything that's safety critical at all, or that would affect, you know, human life.
- Sure, sure. If we turn for a moment to AI in society, in your book you address the topic that, beyond the individual, there's a potential of AI having a broad negative impact on our society, threatening democracy and exacerbating inequality. What steps can we take to mitigate these risks without stifling innovation? And that of course gets to this entire question about regulation or oversight.
- Yes, absolutely, and I mean, I'll just say that I think, you know, it's a real shame that this always gets framed as sort of innovation versus regulation and that these things are polar opposites. And I think there are a lot of ways in which sensible regulation would actually help; certainly it would power adoption of the technology. One of the things I keep hearing from businesses is that they're very concerned about the risks of these systems and they would actually welcome some regulation, particularly regulation that said, you know, the industry has to come up with standards. You don't necessarily have to have the lawmakers come up with the standards, but say, you know, there has to be an industry standard, which you guys are all gonna agree on. And then we will legally mandate that if you're going to have a product sold, it has to meet the industry standard. That's done in a lot of other fields. And a lot of businesses I talk to, they would really welcome that, because they're quite concerned that it's a real wild west right now. And while the companies do testing of their own, there's no standardization around that testing, there's no requirement that they do the testing, there's no requirement about what they have to say about the testing that they've done. And in some cases, you know, companies have come out with benchmark results and when people have tried to replicate them, they can't do so. So you know, there's at least reason to think that the companies, you know, may not be doing the tests in the proper way. So again, I think there needs to be standardization; that can be done through industry bodies, it doesn't have to be mandated government standards necessarily, but I do think some kind of regulation saying, you know, we would like products that are put out in the market to be tested for the following things. Now you guys go figure out what the best way to do that is, but you know, you should do that, and we will not permit you to sell a product that has not gone through that process. I think something like that would be quite sensible. And I don't think, you know, depending on what that process is, it has to be so onerous that it prevents innovation. Again, I think particularly if you're gonna want to use these products in areas where they might impact health, or might impact people's financial health or financial decision making, or would be used in any kind of safety-critical function, or, you know, even in government, where governments around the world are rapidly trying to adopt this, and government deals with things of high consequence: who gets benefits and who doesn't, you know, who's going to be incarcerated and who's not. So these are really high consequence decisions, and I think, you know, we really need to have some sensible regulation about what, you know, standards these models need to meet and products using these models need to have before they're deployed. I don't think that would necessarily hurt innovation. I think actually it might increase the potential market size, which ought to help innovation.
- Jeremy, it's been about a year since your book was published. Now, if you could write an addendum today based on the past 12 months, what would you add or maybe what are you thinking about in terms of a second edition?
- Yeah, sure, I was actually just asked this by my publisher, because the paperback is gonna come out very soon now. And I did write an afterword for that to try to incorporate some of these things. I mean, one of the things that's been very interesting that I did not anticipate in the book, I will be honest, is just how quickly the Chinese AI companies have caught up with where a lot of the western companies were in the development of very advanced AI models, particularly these models that can do reasoning and use what's called this chain of thought, where they sort of think step by step through something. It's been very interesting to see that a number of Chinese companies, starting with DeepSeek, which got a lot of attention, but now copied by some of the big Chinese internet companies, like Alibaba and Baidu, as well as a host of other Chinese startups, have come up with very powerful models that also run on a very light computing footprint. And that's been very interesting. So I think it does change the equation a little bit. First of all, it makes reasoning a lot more accessible to a lot of companies around the world who want to use this but were afraid that it was too expensive before; it rapidly lowers the cost. And when you rapidly lower the cost, you tend to get more deployment in more areas. So I think that's been very interesting. Also, I think what reasoning kind of empowers is, like I mentioned, that you can kind of interrogate the models now and say, "Well, what steps did you take to do this? Why do you think this is the case?" And you can get an answer. Again, it may not always be 100% reflective of what the model actually did to arrive at that answer, but at least it gives a human a way of assessing, you know, do I trust this or not, or, you know, does this sound reasonable or not? So I think the reasoning models have had a big, big impact, and that's been very new. But also, because the Chinese companies have been, you know, really prominent in this area, they've really upped the kind of geopolitical stakes, the geopolitical tension over AI. And that's something of course that has occurred in the last year, both because those models have come out, but also frankly because of what's happened with the election of President Trump in the US. He's come in, and, you know, he's very much a China hawk. There are a lot of people around him who are very concerned about China's use of these models, who really feel like China's trying to use them to enhance its military capabilities, and I'm sure that that is true to some degree. And they would like to prevent China from getting access to this technology and gaining any kind of edge on the US. And it's created this real, you know, geopolitical tension around the world, where, you know, the US is putting a lot of pressure on various countries around the world not to use Chinese technology and data centers, but also not to use Chinese-made AI models. And that's interesting because some of these models, as I said, are very capable and available at much lower price points. So they're very attractive to a lot of companies. But now there's this issue of, like, oh, if I use them or if I allow companies in my country to use them, are there gonna be negative repercussions from the US? Is the US potentially gonna cut me off from access to US technology that I would like to have? And so, that's creating very interesting dynamics. 
So that's another thing: there was already quite a lot of geopolitical tension over AI, but it's just become much more heightened in the past year.
- Well, Jeremy, this has been a fascinating conversation, and I'd like to end with two questions that I ask all of my guests. First, what do you think are the most important qualities for a leader today, and for you I'll add: particularly as we navigate the AI revolution?
- Yes, absolutely, well, I would say one is just that you have to be curious. I'd say that's incredibly important. You've got to want to learn about this technology and all the potential impacts it can have. I really think curiosity is a hugely important quality for leaders right now. And then maybe the other is just flexibility. You know, you have to be really flexible. I think you have to be willing to re-engineer your organization, go back to first principles. A lot of this technology really challenges us, I think, to go back to first principles and say, "What do we want this business to do? What is the purpose of this workflow? What is this organization set up to do?" And to start there. And then, you know, it doesn't matter what process you've used to accomplish that mission in the past; there's the potential now with this technology to create completely new processes to accomplish that mission. But you really have to start with those first principles.
- And secondly, what gives you hope for the future?
- Yeah, well, I mean, I think, again, I'm very hopeful that, you know, humanity has done pretty well with most technologies in the past, including ones that had, you know, really terrific potential: nuclear technology, which, you know, similarly had the potential to destroy the entire world, and yet tremendous positive benefits, or certainly a lot of the things that have happened in biology. Again, tremendous potential to create horrific weapons of war, and yet tremendous potential for therapeutics and to benefit mankind, and on balance, we've generally done pretty well so far. You know, some people say we've just muddled through or it's been luck, but we've done okay. We haven't all killed ourselves yet. And so I'm hopeful that that will continue to be the case, that we have some smart people thinking about the policies around this, how to develop this technology, and I'm hoping that those minds kind of prevail. It's one of the reasons I do think we need to think about regulation, frankly, because I do think in the past, there's been at least some governance structures around these technologies. They may not have been entirely effective, but we did think about and put in place some structures to try to shape this technology. And I think, again, this is a technology that is very protean at the moment. There is every opportunity to shape how it's developed. We all have agency in this, and I think too often with technology, people feel like they have no agency. Oh, it's just these big tech companies developing this. They're deciding our fate and our future and we have nothing to do with it. And I think even people in government sometimes feel that way. They feel like, oh, these tech companies, they're so powerful and so wealthy, plus they know this stuff, they're so much further ahead than we are in thinking about it. We have no chance of catching up. But I think that's a mistake. I think people really need to start trying to act and to think about how we can govern this technology both in our own lives, in our own companies, and then, you know, within broader society. I think at all three of those levels, there's potential for agency and action. And I just kind of hope people are gonna take that opportunity now. And if they do, I'm pretty optimistic that we'll actually, you know, end up in a good place.
- Well, Jeremy, thank you so much. This has been a wonderful conversation, very informative conversation. And thank you for listening to "The Minor Consult" with me, Dr. Lloyd Minor. I hope you enjoyed today's discussion with Jeremy Kahn, AI editor at "Fortune" magazine. Please send your questions by email to theminorconsult@theminorconsult.com and check out our website, theminorconsult.com, for updates, episodes and more. To get the latest episodes of "The Minor Consult," subscribe on Apple Podcasts, Spotify, or wherever you listen. Thank you so much for joining me today. I look forward to our next episode. Until then, stay safe, stay well, and be kind.