Cecilia Callas (00:00)
Welcome to Remaining Human, where humans learn how to live in harmony with artificial intelligence. At Remaining Human, we practice three core principles. One, situational awareness, understanding where we are with AI. Two, practicing human agency and understanding how we might preserve our agency in the face of rapid automation. And three, empowerment. Remaining empowered as a species and as individuals is becoming increasingly important as we begin to rely on these tools and these tools permeate our society.
This last pillar is incredibly relevant to what we're going to be talking about today. Today I'm joined by Patrick Bissett, who is a senior research scientist in the Department of Psychology at Stanford University. We're so lucky to have Patrick on today to talk about a recent piece of research he's completed. Patrick studies how humans make decisions and control their behavior with convergent methodologies, including experimental tasks, computational modeling, and functional brain imaging.
But increasingly he studies how human performance changes and can be impaired when we, the humans, share control and responsibility with AI agents. AI agents are becoming increasingly prevalent in our society, so it's insanely important for us to understand how sharing control of our day-to-day tasks, such as driving, as we'll talk about, affects our safety, our well-being, and our sense of flourishing.
So we're so lucky to have Patrick on today. He's published over 35 peer-reviewed papers in journals, and he's received awards, including the Association for Psychological Science Rising Star Award in 2021. And he's just an incredibly smart, articulate scientist that I think we would all benefit from hearing from. We talk about his research, how his research relates to some of the broader themes going on with AI automation that expand well beyond driving, and also what the current political climate means for research scientists like Patrick. So I highly encourage you to listen all the way to the end. Without further ado, let's kick off the episode.
Cecilia Callas (02:11)
So Patrick, tell us a little bit about the recent piece that you've done because it's incredibly relevant to where we are in this moment in time, not just for how humans maneuver vehicles, but for some of the larger macro changes we're seeing as AI permeates society. So tell us about the research.
Patrick Bissett (02:28)
Yeah, sure. And so the research that we're going to end up talking about the most is this study that we did recently where we asked folks to come into the lab where I work at Stanford University, and they did this study where we had a force-sensitive keyboard and they were asked to do this task that had some similarities to what it might be like to be driving on the road. And so
Instead of pressing a pedal, they're pressing the space bar at a certain force to try to track something on a screen, kind of like keeping a speed on the road. And then after a while, a secondary stimulus would come up and they had to try to stop themselves. And this is something we call a stop signal task. And I think we may talk some more about this. And this is something that in cognitive psychology, one of the fields that I work in, people use to understand how people control their behavior.
and stop their actions. And so you can get a measure of how quick you are at stopping. And in this environment though, the key pivot was we told them in some circumstances there would be this secondary agent, this AI agent that would intervene and stop for them. And we want to understand how this would influence their ability to stop themselves. And I'm sure we'll get into more of the details, but in short, people were a whole lot worse.
at stopping themselves in this instance where they thought this AI agent would intervene for them, even when it doesn't. And so this was this experiment we created in this controlled environment that helped us to try to understand how people may change, how they may be impaired out in the real world when they're actually in a car and trying to stop, and what kinds of processes are actually impaired. In this case, we were trying to isolate this idea,
this construct of response inhibition, your ability to kind of stop at the last second when something comes into your environment that requires you to change.
Cecilia Callas (04:21)
And were these findings surprising to your team of researchers? Were there any precedent pieces of research you guys were pulling from, or was this a completely new area?
Patrick Bissett (04:31)
Yeah, no, that's a really good question. I mean, in a sense, it wasn't surprising. Like, when we submitted our paper a few weeks ago and we posted on Bluesky, someone said, shouldn't we have totally expected this? You know, if you tell someone that an AI might intervene, aren't you just going to neglect what you're doing? And in a sense, I think the answer is yes. But I want to give a caveat to that. I think the answer is yes and no.
Because we created this environment that tried to control for a lot of the obvious reasons you might expect people to be worse. When we think of a fully automated car, like a Waymo, where the individual in the car is looking at their phone, their hands aren't even on the steering wheel, maybe there isn't even a steering wheel at all, obviously someone can't intervene very well to stop. In this environment that we create in our experiment, though, subjects were very engaged. The only thing they were focusing on was the screen. There was no phone out. Their attention was not waning. We could even tell that they were indeed continuing to respond. What we were evaluating is how quickly they would stop responding. And so it wasn't an issue of not being able to perceive this stimulus telling you to stop.
It wasn't because you diverted your attention to something else. It wasn't because your hands weren't on the wheel, so to speak, in this case, the space bar. It was something about your ability to actually stop your action. Even in spite of all these other things being controlled for, when the secondary stimulus appeared, in this case the screen turning red, but in the real world this could have been someone walking into the street, even though you're attending there, you still inhibit quite a bit slower. And I think that part was quite surprising. We almost created an experiment to try to kill our results. We tried to control for all the other things that might have been the obvious reasons you'd expect that people would be impaired in this environment. And when you control for all those things, there's still this kind of profound cost we saw in nearly every individual, where they were stopping a whole lot more slowly.
Cecilia Callas (06:40)
So in kind of the simplest terms, right, the results of the research showed that when we as humans simply know that there is a mechanism that we can fall back on or that can take control if we are unable to, that simple knowledge, just cognitively understanding that, makes us less able to react quickly, right? Yeah.
Patrick Bissett (07:04)
Yeah, yeah.
And they even knew, just as people I think know in automated or partially automated cars, that the algorithm is fallible. It's not going to be able to do its job every time. We tell them that. We told them it's only going to be able to stop for you most of the time.
Cecilia Callas (07:17)
Yeah.
Patrick Bissett (07:21)
Maybe it's neglect, but certainly the response inhibition was impaired quite a bit. They were much worse at stopping.
Cecilia Callas (07:25)
Yeah.
Right. And you kind of alluded to this, but there's this interesting duality around, okay, if we have full automation and we're sitting in a Waymo, hypothetically, that's not as dangerous as this partial automation piece. And I think when we talk about AI and the way it's transforming society, there's a lot of projection into the future around, well, it's going to make our lives so much better one day and it's going to change all of these things. And so much of that, I think, is very much true, but it's this transition piece of getting from where we are today to that future, whether that future is different forms of AI-automated work, driving, labor. That transition is where I don't think we're spending enough time as a society thinking about how we make that transition safe for those of us who are alive right now.
Patrick Bissett (08:18)
Right.
Cecilia Callas (08:19)
Right? Because it's future generations that are going to have it so much easier. But for us right now, we need to really think strategically about how we maintain our cognitive ability in the face of these new tools that are so enticing to give more and more control to, right?
Patrick Bissett (08:34)
Yeah, I think that's super key here. For a long time, if we talk about cars specifically for a moment, humans just did the whole thing, and clearly we're not perfect. You know, we get in lots of crashes, many of them unfortunately with catastrophic outcomes. And so this is something that could and should be improved if we can. And I think the promise of AI in this space is huge.
My sense is in the end it will actually be better than humans are. But in this moment, the AI isn't ready to do everything, at least in a kind of open-world setting. There are certain controlled experiments in San Francisco; you see Waymos everywhere, up the peninsula for me. But in most parts of the world and most contexts, if you buy a new car, it's what we're calling this kind of partially automated car,
where it can do circumscribed things. It will try to figure out if something is in the road that needs a panic stop to occur. It will try to keep you in the lane as you're drifting out of it. But the companies, the owner's manuals, the instructions to the drivers are quite clear: ultimately, it's up to you. These are only assistance features. And so, in this moment, we are in this partially automated space.
Cecilia Callas (09:43)
Right.
Patrick Bissett (09:48)
And I think so much of the discussion is about, well, how often does the car stop successfully? Well, most of the time. Just like in our experiment, most of the time it stopped for you. But I think it's a question for people like me, neuroscientists, psychologists: what is the effect on human cognition when this fallible system fails in this partially automated environment?
I think some of what our work suggests is that this moment that we're in has some unique challenges because the AI isn't quite ready to do it on its own. Hopefully it will be sometime soon, and then the issues are different, the trade-offs are different. But at this moment, the human's supposed to be fully ready. And it seems as if they're not,
Cecilia Callas (10:34)
Mm-hmm.
Patrick Bissett (10:34)
even when
it appears they're attentive, their hands are on the steering wheel, all the obvious things are controlled for.
Cecilia Callas (10:39)
Hmm. Is there any research, or maybe ongoing research, that's working on showing how we as a society, now that most of us, many of us, are driving more and more partially automated vehicles, are there increased accidents, or increased accounts of the car itself or the lowered response inhibition causing greater numbers of accidents?
Patrick Bissett (11:03)
Yeah, this is a really tough one because obviously, kind of a million things are changing. Many things are changing all at once about how we're driving. And so, it's known that people are distracted a whole lot more. They're using cell phones more often and allowing themselves to be distracted by certain things. And so, I think many people hoped that accident rates would be going down
Cecilia Callas (11:22)
Totally.
Patrick Bissett (11:27)
with automation, but in fact, there's some evidence over the last kind of 10 or so years that accident rates have actually gone up a little bit. I was looking over here because I have a little bit of data just in the beginning of our preprint.
So the National Highway Traffic Safety Administration looks at things like fatal accidents. And if you compare 2013, which is going to be well before the proliferation of partial automation, to the most recent full year, 2022, the number of fatalities per 100 million vehicle miles has gone up from about 1.1 to 1.33, which may not seem like a huge amount, but that's roughly a 20% increase. And so what's difficult is the data on the overall number of accidents is a lot better than the data on the specific causes of accidents. And so one part of the stew of this increase could be this phase of partial automation, where there are seemingly these unique challenges.
Cecilia Callas (12:14)
Yeah.
Patrick Bissett (12:38)
The AI isn't quite ready to do it, and when it falls back on the humans, the humans are also now quite sluggish. But it's likely to be a whole host of other things as well, where maybe the biggest part could be people texting on their phones and things like that.
Cecilia Callas (12:44)
Yeah.
Right.
That's what I was going to say. I mean, I see people looking at their phones pretty much every time we're all at a stoplight together. It's crazy. And it makes me just think, like, why are we all not in accidents all the time?
Patrick Bissett (13:00)
It's crazy. The probability when you look at someone, it's like,
it's here, not here, like, please.
Cecilia Callas (13:09)
Exactly, but
to that effect, it makes me think like, okay, if this is how we are now, which is addicted to our phones, needing to check our phones at every spare moment, then...
the next logical step is to have cars that can drive for us, because this seems like the natural evolution if we are increasing accident rates. Again, we don't have the data to support that, but I would be shocked if accidents hadn't been going up because of our perpetual distraction.
And so it feels like it could be a good thing, but there's this tension I think we wrestle with in AI ethics particularly, which is, are these changes good or bad? And the example that you and I have talked about is the invention of Google Maps. Google Maps now means we can all get anywhere, anytime we want, with no prior knowledge of that area, that city. And now we don't need to physically remember how to get around. And we don't really think of this as a bad thing anymore. We think of map tools as a tool, as a way to enhance and augment our capability. But we've lost that ability to find our way without technology, which means if we were to all lose internet access tomorrow, we'd all be pretty screwed. And so it really begs this question of how much progression is good, where is the line, how do we know what will truly be good and what is diminishing our capability and our empowerment as a species?
Patrick Bissett (14:43)
Yeah, for sure. I mean, there was an amazing study many years ago that, you know, kind of penetrated collective consciousness to a certain degree, where they looked at taxi drivers in New York City. And this was before Google Maps and all these kinds of things. And there were these profound changes and expansion of areas of the brain involved in creating basically mental maps, schemas of space that you could navigate through. And so these taxi drivers who spent all this time on these roads, when someone came in and said, I want to be at 43rd and 3rd, they couldn't just plug it into their phone. This had to live in their mind. Their brains had plastically adjusted to this new environment, and they had these kind of special capabilities as a result of it, where presumably if they redid this study now, and taxi drivers or Uber drivers are just following the map, the result would be diminished or gone. And so I think when we offload some of these functions that humans
tended to do on their own to AI, we are losing opportunities to build skills, to learn, to develop automaticity. Huge parts of cognitive psychology and cognitive neuroscience have been dedicated to all of the ways in which brains change and improve with use. Disuse or neglect is a missed opportunity for these improvements. So I think that's very...
Cecilia Callas (16:32)
And it's an area I think is getting slightly more attention now that ChatGPT and generative AI have been around for a couple of years and we're starting to see really significant impacts on critical thinking capability. Like, Microsoft came out with a piece of research a couple of weeks back, there are a couple more papers that have come out, and for me it really begs the question of,
Like I feel like we're at a moment with AI that we were at with social media around like 2012, 2013, where we started noticing, huh, like this is really affecting my attention span. Like I'm finding it a lot harder to focus. I am noticing that within myself and within my friends. And I gave a talk at my high school a couple of weeks ago and they were all bringing up like, how do I make sure that I'm still learning? Even though I have access to this totally magical tool.
And you know, for us as adults, we got to go through all of school and college and post-grad without these tools. And so I think we were really forced to develop strategic thinking and creative ability. And for kids today, we're giving them these tools without any frameworks or guidelines or anything, and just saying, you can either choose this totally magical tool to do your homework, or really sit with the struggle of learning but grow that part of your brain and that muscle. And I'm curious, in the cognitive research sphere, are these conversations happening, are there more and more research studies coming out around this? How do we grapple with this new thing as a species?
Patrick Bissett (18:05)
Yeah, I mean, my sense is that there is a growing amount of research at the interface, you know, of human and AI. Stanford has a Human-Centered AI Institute now, where there are large numbers of individuals trying to understand how we're going to flourish in this world. But I think the changes are happening faster than the research in many ways.
We're kind of barreling into all of these adjustments in the way in which we interface with the world without commensurate investment in the kind of stop and yield signs, so to speak, to make sure this is contributing to human flourishing. I mean, interestingly, with cars, for example, I don't think it would be the worst thing if people just
Cecilia Callas (18:47)
Right? Mm-hmm.
Patrick Bissett (18:54)
stopped learning to drive. Like, I don't think learning to drive is somehow a fundamental part of the human experience such that if it just went away then human flourishing is out the window. And so, full automation there.
Cecilia Callas (19:07)
No, totally. I mean, we all used to be on horses and we all knew how to ride horses, and then we lost that, right? So I agree with you. That would be okay.
Patrick Bissett (19:13)
So I think to a certain degree, we as a society just have to make choices where, for certain things, maybe it's OK, these are the right things to offload to AI. And maybe when we're ready for full automation in cars, that seems like a pretty good one to me. There could be costs: if the networks that the cars are communicating over break down, then we may have difficulty moving. But that seems like a relatively rare occurrence. But as you say, there are other things that I'm pretty sure we don't want to lose, like our ability to critically think. And so, I'm a parent to three kids, nine, 11, and 14. And one of my children, I won't say which one in case they listen and feel bad,
came to me and showed me these interesting sayings and quotes, well, what became quotes, and I was like, oh, that's really kind of insightful, what were you thinking when you came up with this? You know: oh, I just kind of asked ChatGPT to say something interesting. And it's like, well, I wanted to know what you were thinking, what went into your cognition in generating this. And
I think there is a risk for things that are really important that we really don't want to lose. Our ability to control our behavior, our ability to critically think, our ability to plan, our ability to sustain attention on something that's important: these do seem like things that we want to keep and sustain. And in general, even very old research in things like cognitive psychology suggests that if we want to retain and improve these things, we have to continue to do them. If we want to remain attentive to something for a long period of time, then we have to continue to attend to things for a long period of time and resist our desire to be pulled in many directions. If we want to critically think, then we need to continue to think critically about these things and not offload them.
Each time we do that and each time we do that deeply, we're kind of training these processes and improving ourselves. And I think that's really important.
Cecilia Callas (21:33)
Totally. And it brings up this tension between our own agency, human agency, practicing agency, having sovereignty over ourselves and our lives versus automation and enhancement and truly expanding beyond what we've ever been capable of before.
And I think it is going to be the individuals who can do both at the same time who ultimately are not only more successful in our society, but also happier, because when we struggle through things and learn new skills, which is hard, we feel a sense of accomplishment and achievement. And I don't know if we'll get that same sense of accomplishment and achievement if we're not applying our cognitive ability to problems and really asking ourselves, how do we fix this? How do we make this something that I uniquely have figured out and contributed? And yeah, I wonder what other domains this research could be relevant for. We talked about critical thinking and everyday cognition, navigation, Google Maps and the loss of being able to read maps. What other areas should we as humans recognize are changing with AI and be aware of, like, okay, maybe this is something that I want to continue to do on my own? I think when I talk about AI and using AI, for me, it's just about bringing a level of intentionality to your AI use, rather than just reaching for it in every single capacity. And yeah, that's one thing I think people need to be aware of.
Patrick Bissett (23:11)
Yeah, I should admit, I probably haven't admitted this yet, but I'm kind of pretending to be an AI researcher at the moment for the sake of this podcast. I'm really a basic scientist that aims to understand basic processes, things like how, when we're about to do something we don't want to do, we stop ourselves, or how we direct our attention, or how we
hold things in mind. But my sense is that one thing that a lot of these AI models still struggle with is what we might call true creativity, something that isn't just linking existing parts or synthesizing large amounts of data to output it.
And so I think a key thing that humans seem to still be better at than a lot of AI models, and that humans seem to find a lot of fulfillment from, is creativity broadly construed: things like creation of art, creation of music. But a form of creativity that I live is, you know, the scientific enterprise, where we're attempting to come up with some piece of new knowledge
that no one has come up with before. And I use AI in various ways for my science, not just studying the influence of AI, but helping with analyses and writing code. Creating new experiments is essentially code that AI helps with, for sure. But in coming up with the question that's going to potentially generate new knowledge, I think humans are very much still competing, if not winning, against AI models. And so I think making sure we're still being creative, flexing our creative muscles, will not only give people a sense of fulfillment, but is also important for pushing things forward. Kind of outside of the models' training set are all the things we haven't figured out yet,
Cecilia Callas (25:05)
Yeah.
Patrick Bissett (25:10)
and humans play a key role in creating that.
Cecilia Callas (25:12)
Absolutely. And it makes me think about a recent paper by Yoshua Bengio, who's one of the godfathers of AI and an AI safety advocate, where he draws an important distinction between creating AI that automates scientific research, which potentially could lead to a loss of human control, and creating what he calls Scientist AI, which is: yes, let's apply advanced AI systems to scientific problems and to advance scientific research, but we don't need to create autonomous versions of those same AI models that are able to replicate and potentially lead to a loss of human control. And I think his paper, to me, just opens the door for us to be asking questions around how we apply the technology in a way that is safe for humanity but also leads to these massive incremental benefits of the technology for future generations.
Patrick Bissett (26:09)
Yeah, I totally agree. I think that's really key. I mean, ultimately, I think most of us want humans to continue to flourish, to have a sense that their work or what they're endeavoring to do is contributing. And so, totally unleashing autonomous agents has lots of dangers, for sure.
Cecilia Callas (26:21)
Yeah.
Yeah. What are your feelings around the necessity of having a human in the loop when we're using and building AI systems? Because I think that's something that's always talked about and referred to, especially in the context of lethal autonomous weapons, for example, air traffic control, pilots, putting planes on autopilot.
It brings us back again to that question of where is the balance between machine power and human decision. And I think most people say, okay, there should always be a human in the loop.
Patrick Bissett (26:54)
Yeah.
Cecilia Callas (26:58)
But I think a lot of people don't realize how slippery that slope is, especially if you're in a scenario where your adversary might be firing lethal autonomous weapons at you. And then are you able to react quickly enough, or do you set something up that will respond immediately, and then we can get ourselves into a very sticky situation. So that's important, right?
Patrick Bissett (27:18)
Yeah, for sure. I mean, these are obviously huge questions. And I think maybe the main thing I'd say is this is the kind of thing that I feel like we just need to study. A lot of these conversations are happening amongst AI companies, AI ethicists, and that's great. I'm not saying we should stop doing that. That's a key part of it. But the costs and the benefits, the trade-offs between systems that may be able to trigger without the intervention of a human, faster, but, you know, totally decisively. If it's a ballistic missile, you know, what defines a ballistic missile is it's not coming back. What are the differences in timing? Where do the decisions diverge between when a human is in the loop and when a human's not in the loop?
I think that there's a key role for scientists to essentially approach these questions without clear skin in the game. One of the challenges is that much of the work, much of the research, much of the advancement of these models is happening in the private sector. And again, I don't think that's a problem, but a company has clear conflicts of interest in the kind of answers that come out. And so it's important to have individuals that are more just trying to understand the nature of how these kinds of decisions are made and how they change, kind of like the research we're talking about, where I would be perfectly happy if response inhibition wasn't impaired in a partially automated environment. I don't make more or less money based on what the answer is. And so the fact that it turned out that response inhibition was impaired in our experiments, and people were a whole lot worse at stopping in this environment, was fine too. I'm just in the business of kind of understanding how these things interact. And so I think
the main thing is kind of investing in this. The world is changing. Decisions are being made by AI agents in these shared control environments with humans. And these are very different from what we've understood about human-only, human unilateral decisions in many ways. And there just will be trade-offs. And once we understand the nature of those trade-offs better, we can more intelligently choose whether the costs outweigh the benefits. Because even in the study that we were talking about, where inhibition is somewhat worse when the AI fails and humans must stop, that doesn't necessarily mean yet, and I want to be very clear about this, this doesn't necessarily mean we should take out the AI, the partial automation, from our cars. This is kind of like the first step: there is this cost. This cost needs to be weighed against the benefit that when the AI does intervene, it's often stopping a whole lot faster than the humans would have otherwise. And so, trying to weigh these costs and benefits is a little bit further down the road in this research, but that's when you can start making decisions about what's in the overall best interest.
Cecilia Callas (30:27)
Absolutely. It's knowledge and the findings and the insights that we need, like you said, just to be more proactive as we go into all of these changes. I think that's incredibly key. And I think what a lot of people don't realize is, you know, when you talk about the difficulty of academia versus the massive frontier AI labs, right, funding frontier AI research costs millions and billions of dollars: building the models, training the models, deploying them. But to your point, we absolutely need both. We need the frontier model provider companies, but we also need scientists and academia, academics who don't have financial responsibilities or certain incentives in play, to be able to do research that's unbiased, for the good of humanity. And there's a lot of that that is at risk right now, right? With the current environment and the current administration in the United States. And I think it's really important that people understand the kind of research that scientists like you and your team are doing and understand what is at risk, because I think when you're not a scientist, like me, it all feels very vague and opaque when you read in the news that federal funding is being cut to universities. For me personally, I didn't actually realize that private universities like Stanford did receive federal grants. And yeah, I wonder if you can speak a little bit about this, about what's going on, about what is at risk. Because I think we're in a really important moment. And especially with AI, there's so many changes coming. And we need research on those changes and what it means for the average person. And this is of the utmost importance.
Patrick Bissett (32:11)
Yeah, this is super important. I think this hits really close to home, because a university like Stanford is a private university. It does have a large endowment. And so I think a lot of people view universities, wealthier private universities, as impervious to the political whims that are happening. And I think that that's in many ways not accurate, or not fully accurate. One of the main sources of income at Stanford is federal grants. A big part of my job, a big part of the job of other academic scientists, is to write grants to places like the National Institutes of Health and the National Science Foundation. And though the National Institutes of Health has labs in Bethesda, Maryland, most of their money is actually being sent to universities like Stanford for folks like me to be doing research like the research that we're discussing today. And so my salary, essentially, is 100% paid by federal grants. And so it's not like it's helping a little bit, or I can, you know, work over the summer as a result of this. When we write a grant, we write in my salary, and if we get the grant, I get to keep working on this stuff. We also ask it to pay for, you know, the subjects and the recruitment costs and the computing time to run the analyses, and all of that is also all or in large part paid by federal grants. And so for a long time, the United States has invested massively
in an infrastructure of academic research that's resulted in many of the universities like Stanford and many, many others in the United States being the world class places where people from all around the world come to try to understand the nature of how things work, not just brains and minds like what I'm doing, but cancer and, you know, as we're talking about AI and all kinds of other things.
At this moment, there's a kind of full-on attack on what we've built, this infrastructure, and a lot could be lost if we do this wrong. Not just jobs like mine, but this kind of funding supports all kinds of things: mental health research, AI research, cancer research, even just basic research, which many people are less aware of, which is just trying to understand the nature of things, understanding how humans think, which can often be the foundation for subsequent findings. Much of my work was just on response inhibition, but once I felt like I understood response inhibition well enough, I started to ask questions like, how does it relate to artificial intelligence? And so, at this moment, there are suggestions of cuts as large as 40% from things like the NIH's budget. And that would be a massive reduction in the amount of work that can be done on all of these things.
And there have been multiple studies showing, if the idea that this work is being reduced doesn't compel you, that each dollar the NIH puts into this work yields something like $2.50 of economic activity. It's also just a good investment if you want to have a flourishing capitalist system. And so from almost any angle you cut it, the money that goes into funding academic research is a good investment. But for a variety of reasons, the current political environment has this on the chopping block. And when you build institutions like this, build systems like this that bring knowledge, bring the best minds, it can take decades to build them. But if done poorly, they can be ripped down a lot more quickly than that. And so my hope is that we'll right the course here, but there are currently lots of pressures that could make research like this shrink or go away in the near future.
Cecilia Callas (36:23)
What can people do who are listening today, right now, this week to make their voice heard or support you and other researchers in this space?
Patrick Bissett (36:34)
Yeah, I mean, there are lots of things. Some of the most proximal ones: I think this really is a political fight, you know, in that there will be individuals who decide on budgets, and those individuals are in Washington. So really feasible actions include contacting your representatives, your senators, and expressing your support for scientific research, for a robustly funded NIH and NSF. These are the places that are sending out these kinds of grants to understand everything, everything from minds to AI to cancer, to Alzheimer's, to ADHD, to all kinds of stuff. And so tell them that this is important to you. They work for you. It may not always feel that way, but they work for you. Show up to town halls if you can.
And though we're not going to vote soon, I think this is really a reminder that voting matters. In the coming years, we'll have additional votes. And there's a profound impact of your vote. Sometimes when I'm talking to individuals, they'll say, you know, it doesn't really matter who we choose, I don't like either of these options. And that may be true. They may not be the people you would choose, but there's a profound impact of your vote. In this case, we're seeing a profound impact on research and academic science, and hopefully it can still be saved. So, contact representatives, go to town halls if you can, protest. There are often marches for science, things like that. And then when given the opportunity to vote, vote for individuals that support this, that support science.
Cecilia Callas (38:01)
Absolutely.
I cannot underscore the importance of this enough, as a researcher, across all domains. For me, my area of expertise is artificial intelligence, and I can say that we need more research and individuals like yourself doing this research into how this technology will transform our society and our minds, because it is transforming our minds and how we move through the world. So Patrick, thank you so much for being here, for sharing your research. I'm excited to see what you do next, what area you delve into next, and we'll definitely have you back to explain more to us, because it's so important.
Patrick Bissett (38:38)
It was awesome. It was really fun to chat with you. And if people are interested, there's a preprint; this work is under review at the moment. We recently wrote it up, and it's at a journal being evaluated right now. And maybe there's a way to share the link to that with folks if anyone is interested.
Cecilia Callas (38:55)
Definitely we will share the link. Patrick, how can people get in touch with you or learn more about your work? Is there a website or somewhere we can link to?
Patrick Bissett (39:02)
Sure. I have a website that all my papers are linked to. If there are any follow-up questions, I think my academic email's there. Happy to share that as well.
Cecilia Callas (39:15)
Okay, amazing. Thank you.