Simon Brown (00:02.325) Hello and welcome to this episode of the Curious Advantage podcast. My name is Simon Brown. I'm one of the co-authors of the book, The Curious Advantage. And today I'm here with my co-authors, Paul Ashcroft and Garrick Jones. And we're delighted to be rejoined by one of our favorite guests, Alexia Cambon. Welcome back, Alexia.
Paul (00:04.91) Hello.
Garrick (00:16.588) Hi, there.
Alexia (00:23.877) Thanks so much, Simon. I bet you say that to all your guests.
Simon Brown (00:26.737) Not at all. I remember we had a great conversation last time and I'm looking forward to diving deeper into your latest research.
Garrick (00:26.825) Yeah
Simon Brown (00:36.551) I know you've recently published the new Work Trend Index annual report at Microsoft, and it's a report that's become a global benchmark for understanding the evolving workplace, and you've been really leading the research on AI, productivity and transformation. So I was really keen to dive in, first of all, to what role curiosity played in shaping this report. What were you curious about going into the report, and how did that guide you?
Alexia (01:04.433) Sure, yeah. It's a bit of a meta experience for us this year because not only did we have a lot of research questions we wanted to answer, but we also really wanted to use AI in every single research process that we did this year. So there was a lot of curiosity not just in terms of what we would research, but also how we would research it.
And that really paid off, I think, that curiosity. By the end of the research project, we had built three research agents that we put to really good use. We built a survey agent, a statistics agent, and what we called a frontier-firm IQ agent that went out and scoured the web overnight and brought us back the latest research and insights and a really nice consumable report every morning. And that set us up really nicely to be able to, yeah, do further great things with the research.
In terms of the actual research itself, the big research question this year was what is the future of the company and that is what the Frontier firm concept is all about.
Simon Brown (02:07.828) I would love to hear some of the takeaways, I guess, from that. So I guess, first of all, what did you find a frontier firm to be? Maybe let's start there.
Alexia (02:18.479) Yeah, so the Frontier firm is really meant to encapsulate what we believe the company of the future looks like. It operates in a very different way. It's wired in a very different way. It nurtures different types of skill sets and mindsets. There aren't very many of them out there. Within our 31,000-person survey, we were only able to identify about 800 people who could reasonably be working for frontier firms, so a very small percentage. But they operate on two new realities, which
really make up the bulk of this year's report. The first is that essentially companies are now going to be running on what we call intelligence on tap, which is essentially for the first time ever we're able to access expertise not just through humans but also additionally through AI. And that's a pretty radical concept because up until this point in time we've only ever really been able to access it through humans. That's what companies are, they're just bundles of expertise.
And that is how we have organised work. It is on the assumption that we can only access expertise through humans. That's what the org chart is for, right? It's siloing human expertise so we can access it really quickly. And in a world in which you can now access intelligence on tap, what does that mean for how we think about organising and delegating work?
And then the other really big consideration is what does this mean for employees and for the individual? And this is where we really think we have this call to action for employees to become what we call an agent boss. Really the knowledge work of the future will be very much based around this idea of agentic orchestration and essentially every employee becoming the CEO of their own agentic startup.
Alexia (04:07.567) So I think we have this big operational, structural, organisational reality and then this big implication for employees that we try to explore in the report.
Simon Brown (04:20.678) and maybe that's
Paul (04:20.846) We could, I think, Alexia, we could go into each of those three, right? So intelligence on tap, human-agent teams, and then the idea of an agent boss, which we've got to get into and talk about agent bosses versus actual human bosses and what that means for those. But maybe just the intelligence on tap. Can we just sort of dive into that? What are some of the challenges that you saw that organizations are trying to solve with this, and how is intelligence on tap helping them?
Alexia (04:49.465) Yeah, absolutely. I think the idea behind intelligence on tap.
is essentially that accessing human expertise is very costly, with good reason, right? The fact that you command a decent salary is a reflection of the skills and the experience that you bring to the workplace, that you have worked very hard to achieve over a number of years. But the organization also carries a huge amount of transactional cost in accessing that expertise. If I need expertise quickly, I have to hire that expertise, onboard it, upskill it, assimilate it into a team, and that takes a long time.
And so organizations today especially do not have the luxury of time. And so if we can imagine a world in which now, all of a sudden, in addition to humans, we could just build an agent, for example, that housed a load of the expertise we needed, and we could build that agent in a day or in half a day or in an hour, that changes the paradigms of how organizations operate. And I think this is critical, especially today, when we are experiencing what we saw in the data and are calling the capacity gap. Essentially, one of the key questions we ask our respondents every year is to what extent do you struggle with having enough time and energy to do your work, and this year 80% said they were struggling, which I think is the highest we've ever seen it. At the same time, 53% of leaders told us we need productivity to increase. So our employees are maxed out, but we need more.
Alexia (06:22.499) So there's not enough human capacity. We're all super worried about AI coming for our jobs, but the reality is there's more work than we can do. And so that's where I think this concept of digital labor, of having access to additional intelligence and expertise through agents, tells a very hopeful story.
Garrick (06:41.036) Can you tell us a little bit more about agent teams? I what is an agent team?
Alexia (06:46.287) Yeah, yeah, so this concept really, I think, we started to see in some of the really exciting research studies that emerged over the last 18 months. My favorite is the Cybernetic Teammate paper that Harvard released. They did this research with Procter & Gamble. And in that study, they found that when you equip an employee with AI, they can perform at the heights of a team without AI.
And that essentially suggested that AI is joining the team, right? You now effectively have a teammate that you can work really closely with and achieve extraordinary outcomes. And so the reality that we'll be moving into is one in which we have these sort of hybrid human agent teams where we have human labor and digital labor and they're working alongside each other. And one of my favorite data points from the survey was actually when
we looked into why is it that people are turning to AI more and more as a colleague. And my hypotheses around this were everything to do with why humans are so frustrating to work with. So my hypothesis was, we turn to AI because, you know, Alexia is really slow or Alexia is really frustrating to work with or Alexia steals all the credit. And all of those responses actually listed bottom.
What listed top was everything to do with the unique, net-new capabilities that AI has that humans don't have and, crucially, should not have. And so I'll give you one example. The top most-selected response was 24/7 availability. Now, that is not a characteristic that humans should have, right? We don't want anyone on this call or listening to this call to be working 24/7. But now, for the first time ever, employees can access an always-on thought partner
at any point in the day, and you can even program it to work overnight for you. And that is a unique experience we've never had before in the workplace. And it is very different to how we work with our human colleagues. And I think that tells a story that this is not a substitutional type of labor; this is a net-new type of labor that humans will be working alongside from here on out.
Garrick (08:59.086) And in an agent team... Normally in human teams, people have different skills. So in an agent team, would there be AI agents with different skills complementing the team? How does it work? I guess it's more than just a sort of AI servant or intern or research person you're sending out.
Alexia (09:22.619) I think that's right. I think if we think about what are the core things that agents have to offer and the core things that humans have to offer, it becomes a very interesting investigation. Because the most interesting question I like to ask myself is, what are the skills that AI will amplify? And it is, of course, all the human skills that AI cannot inhabit, precisely because it is not human. So an agent, you know, Garrick, will never make me feel seen the same way you make me feel seen, right, as a human.
That is not something an agent manager could do. Like, if an agent managed me, I would probably be very grumpy, right? Because I don't feel seen, I don't feel invested in, I don't feel like I'm progressing in my career. You know, there are probably human managers we have that experience with too, let's face it. But ideally, your human manager makes you feel those things. And that is a unique quality that humans have that agents don't and can't have. And it's the same when you flip it around. You know, a really amazing, unique quality
Garrick (10:03.832) Hmm.
Alexia (10:22.251) of agents and of AI, for example, is its ability to provide you 10 ideas in 10 seconds. I went to our researcher agent this morning, as I'm trying to build out an experiment guide for FY26, and I was like, here are a bunch of concepts I want to research, can you generate 10 different experiment ideas for how we would research this? Now, if I were to ask you right now, Garrick, in the next five seconds, give me 10 great experiments, you would probably struggle, right?
Garrick (10:49.314) No, I couldn't. I would struggle. I could give you 10 great vinyl records, though; that is my current side hustle and project. And my AI friend who I work with, I've got an AI, and he's helping me research and find particular vinyl and stuff. He's pretty good at that. But it's so obsequious. I mean, I get told constantly, what a great question, fantastic, amazing, you're putting together an incredible collection, et cetera, et cetera. But I'm...
Alexia (10:54.735) Hahaha
Alexia (11:16.955) Well, you're actually calling out another really important aspect of human-to-human collaboration that we need to protect at all costs, which is friction.
Garrick (11:26.638) Yeah.
Alexia (11:26.831) Now, AI is meant to please you, is meant to make you happy. The beautiful thing about human to human collaboration is actually when humans disagree with you. And if Simon were to tell me, that's a terrible idea, Alexia, that would probably make the idea better, right? Because I would either defend it or I would take the feedback and make it better. And that we can't lose. It will be very dangerous to the human species if we lose any sense of friction. And so you're calling out another really
aspect of human to human collaboration.
Paul (12:00.391) Alexia, I'd like to pick up and dive into that. Because in the report you talk about how we as individuals, I suppose, are using an AI or an AI agent as a teammate, as our teammate. What are you finding, or is there anything in the research that says, as they actually form a human and AI agent team, how does that work? Are people seeing multiple agents actually as part of their team and thinking, well, actually, we have this human skill, this human skill, this human skill, and what we need are these augmented skills on the team? And then, I guess, to the point you just made about what Garrick said, how does that work from a relationship point of view? Have you seen anything around that?
Alexia (12:46.383) Yeah, so two part question, so let's tackle the first part. I think this idea of how do you divide work across agents and humans is gonna show up in the org chart. Right now, as we talked about, the org chart is designed because it's very costly to access human expertise and so it's much more frictionless, even though there's still a substantial amount of friction.
to hand over the project from function to function, right? The product is created by engineers, they hand the product over to marketing, marketing creates the go-to-market plan, they hand it over to sales, sales go out and sell it. And again, that org chart is created because we have to find a way to access expertise in a world in which it's very costly to access it through humans. But to your point, now all of a sudden, we can build agents that house all of that expertise. You know, an engineer could build a marketing agent that gives them access to marketing expertise that an engineer would not otherwise have. And so all of a sudden you raise the question of, is the org chart the best fit for the frontier firm? Or do we want to move more to what we're calling work charts, where essentially work is not designed around human expertise, it's organized around jobs to be done. And we have one established model in this space, one good example that exists in the wild, which is of course the Hollywood model.
And so the Hollywood model is movie production. The way that happens is there's a job to be done, that's the movie. And we're gonna bring in a cast, a crew, a director, a producer, and they're gonna work together for nine months to create the movie, and then they're gonna disband, and they're never gonna come together again, unless maybe they make a sequel. Now...
We don't have that model in the corporate world because most projects don't run nine months. Most projects run like six to 12 weeks maybe. And it is incredibly costly to bring together people for just six to 12 weeks. It probably takes six to 12 weeks to onboard someone, right? The closest we have to this model in the corporate world is probably bringing consultants together. And we all know that's a huge financial cost, right, for the organization. They can only afford to do that every so often. But in a world again in which,
Alexia (14:57.137) To your point, I have this team of human talents and if I equip them with agents, then I can bring them together really quickly to work on an agile, dynamic project and then they can disband. And that's, I think, a really different shift in how we think about the delegation and the organisation of work across human and digital labour.
Garrick (15:17.678) I'm fascinated by the Hollywood model. Weirdly, we wrote about the Hollywood model when I was at the LSE about 18 years ago, and not in this context at all. I mean, it's a very different context. We were exploring alternatives to working practices. And the key thing about the Hollywood model that we discovered was the need for a language that existed outside of the organization that came together just momentarily.
Garrick (15:47.584) So in film work, for example, there's film language. If you're a camera operator, you have the language of cameras; if you're a producer, the language of production. And everybody, you know, if you're a gaffer, you know the language of what it means to be a gaffer, or even a microphone person. Each has this very specific language which translates across the entire industry, and
It's very, very powerfully modeled and very powerfully taught and very powerfully policed. In this country, it's policed by the unions and so on. But that's the thing that gave them the power. But it's also the thing that made the Hollywood model unbelievably successful, but it also made it unbelievably expensive. So what I'm hearing is, if there are languages that exist which AI can pick up and be trained in, become part of the team.
Alexia (16:28.603) Mm-hmm.
Garrick (16:37.548) the cost model for that kind of highly productive form of work changes unbelievably, very quickly.
Alexia (16:47.569) Yeah, I think that's exactly right. I think if we think about what it means to form a hundred percent human team today, it takes so much time, right, to find the right person with the right skills, with the right expertise. It is pretty much impossible to do that in six weeks. But if you are looking for pure expertise, pure knowledge, you can access that through agents now. And so that cost goes down substantially. And also I think it's very exciting to imagine a world in which
you can equip someone with a skill that they otherwise did not have.
Garrick (17:20.942) That's right, you can bring in people who are available and then they can work with an agent who has the knowledge and can guide them along the way.
Alexia (17:33.093) Which, I'll tell you a fun story, a fun anecdote from real life experience. In the middle of this research, Microsoft released two new agents, the researcher agent and the analyst agent.
And as Microsoft employees, we have the benefit of being able to use these on a preview basis. Now, I have no data science capabilities. I have no data science background. And I rely very heavily on my data scientists to run all analysis. And in the middle of this research, these agents get released. I can now run analysis by myself. I suddenly have this skill.
And my data scientist is naturally sitting there thinking, well, what value do I have now? Like, what is the point of me on this team? And he could have gone two directions, right? He could have gone the direction of, I'm going to just be defeatist and I'm going to let myself become redundant, or the direction, which thankfully is the one he went in, I would have expected no less of him, which is I'm going to go out and I'm going to build a bunch of agents and I'm going to become the orchestrator and manager of those agents.
those agents are going to become indispensable to how this team does research. And those were the three agents that I mentioned up front.
Simon Brown (18:41.118) Thank you.
So that brings us then into another aspect that you've got in the report, around how every employee becomes an agent boss. And I know you identify seven items that would sort of define someone who could become an agent boss, but I was fascinated by the data in there. And you released this, when are we now? So is it April this year you released it? I'm assuming you gathered the data in sort of the first part of 2025. But in there you had 28%
Alexia (19:05.083) Mm-hmm.
Simon Brown (19:12.758) of managers are considering hiring AI workforce managers to lead hybrid teams of people and 32 % of people plan to hire AI agent specialists as designed out to develop and optimize them within the next 12 to 18 months. So if that's even going back sort of six months when people took that provided that data, I mean, that fascinates me that over a quarter of managers are already thinking about AI workforce managers. So tell us more on maybe that and then how do we become an agent boss?
Alexia (19:42.225) Yeah, I think we ended the report on Agent Boss very deliberately because I think there is a lot in this report that is very triggering and very scary to a lot of people and we really wanted to end on a message of empowerment.
Simon Brown (19:42.678) you
Simon Brown (19:56.99) Hmm.
Alexia (19:57.451) And my data scientist story, I think, is the perfect example of that, which is: you have the ability now to become the most valuable person in your organization, because this is a brand new world and everyone is experimenting. It is day one for everyone. And if you can get a head start and create and build that agent boss mindset, you are way ahead of everyone else. And the agent boss mindset is really predicated on this idea that,
moving forward, every employee will be the CEO of their own agentic startup. And that means knowledge work will fundamentally shift, because if right now knowledge work is all about consuming information, making decisions based on the gathering of that information, sitting in meetings to exchange that information, very, like, information-heavy work, then I think the future of work is all about intelligence work, which is...
I'm going to pass the consumption tax that I pay onto my agents. And my agents are the ones that are going to go out there and consume all of the information and report back what I need to know. And so the tax that I'm now paying as an individual employee is a delegation tax because I'm delegating work to my agents and I'm managing their inputs. And what that means from an individual employee perspective is, you know, I remember joining the workforce in my early twenties and
loving my job, but wanting to be a manager, wanting to have my own team, asking my manager how long that might take. And my manager telling me, you know, probably like six to seven years at the rate of progression in this company. And that will not be the reality for the next generation. Like they will enter their first job and they will be expected to be a manager from day one of a team of agents that is going to be doing a whole load of their work.
And so this agent boss mindset, I think it's something universities should start teaching now because we have to prepare the next generation for that new reality.
Simon Brown (21:52.595) And all manner of questions this raises. But I guess, does this turn our sort of traditional career hierarchy potentially on its head? I know when we last spoke a few weeks ago on this, you were saying around sort of span of control of people to agents: you might have some functions where there's one or two agents, you might have one where someone's managing 300 supply chain agents or something. Does that mean potentially those new joiners, sort of junior people in the organization, suddenly now are managing
huge teams of agents, maybe delivering huge value to the organisation, and maybe that becomes more valuable than the person who's been there for 25 years? So do we need to completely rethink what careers look like, what reward looks like, what performance looks like?
Alexia (22:40.325) Yeah, I think you raise a really critical point around the management and measurement systems that we have in place, not just for human labor, but also for digital labor. Because in effect, I think we have been categorizing AI as just another type of software that lives under capital expenditure and that we're just going to roll out as if it's any other type of technology. But really what we're looking at here is a new type of labor that is going to be carrying out cognitive tasks. And just like we have
management and measurement systems for human labour, we're going to need the same for digital labour. And one of the metrics we put forth in the report, to your point, is what we believe is a net-new metric of the frontier firm, which is the human-agent ratio. And it is very inspired by the manager-to-direct-report ratio, right? As a manager, like, we know if you give a manager 30 direct reports, they're probably not going to be able to manage that well.
And it's the same idea with humans and agents. I know from personal experience that building agents creates additional inputs. And there probably is an ideal threshold for what the right amount of human oversight is to a certain amount of agents. And not just on an individual basis, but also on a team and a functional and an organizational basis. And I don't think this ratio will be consistent across all of those systems. I think, to your point, Simon, there are certain functions
that are probably more human first, where we want the ratio to be really low because it involves confidential or sensitive information. I can imagine HR might be one of those, right? The ratio would be one to one. But then there are also functions where work is probably more agent appropriate, where you could imagine in supply chain, using your example, that we would have 300 agents to one human, because there are a lot of repetitive process oriented tasks that don't involve a lot of randomization or unpredictability.
So I think I can imagine a world in which every company will have a dashboard that just lights up whenever we are not on point for the ratio, where the thresholds are not being met. And that doesn't necessarily mean we're building too many agents. I can see a world in which that happens, you know, we have 300,000 agents running amok, but also a world in which we're not making optimal use of agents, where really we're giving humans a load of work that can and should be done by agents, so humans can focus on the more important and interesting work.
Paul (25:04.068) It's fascinating. You call out in the report seven indicators around an agent boss mindset, and it'd be good just to explore some of those. But I did want to flag, and it's sort of blown my mind, I'm still reacting to it, that you would put agents on your org chart. Having spent quite a while doing this sort of organizational workforce planning stuff, with people thinking about, you know, how are we going to do this, how do we forecast our demand, not once was it mentioned that actually our org chart is a mix of people and agents and we need to think about how we're going to lead and manage those. So I think this is actually, I don't know what you think, Alexia, it seems very new, to be honest.
Alexia (25:46.725) Yeah, I think we just released a paper last month where we literally asked the question: is digital labour a new production function?
Alexia (25:56.509) And if you're familiar with organizational productivity theory, it's predicated on two key inputs, which is capital and labor. And that's how we organize the entire company. That's how we do financial accounting, Capital expenditure, operational expenditure. It's how we organize work. It's how we create functions, right? We have IT to manage, you know, software capital. We have finance to manage real estate capital. We have HR to manage labor.
And if we just assume that AI is another form of software, I think we are missing a trick. Because if you actually look at all the classic criteria of capital, it doesn't fit any of them. We literally went through the exercise of looking at, does AI meet the categories that we use to categorize something as capital?
It scales really rapidly and really broadly. It's self-learning, self-improving. Have you ever heard of a laptop that self-improves, that repairs itself? No. It is elastic, right? You need to give a thousand laptops to a thousand people, but AI is going to be broadly deployed across everyone, and you only need the one AI, right? So it doesn't fit all the traditional characteristics of capital. And that then raises the question: is it actually something else? And...
We're very triggered by the idea that it's labor, because historically there's only ever been one type of labor, and that's human labor, right? But it behaves much more like labor than it does capital. And so I think it raises the question of: do we actually need a new management system in place for this new digital labor? And we put forth the idea that, you know, we're probably going to need a new function and new roles and a new department to manage this, much like we have HR to manage labor.
Maybe we need something like an intelligence resources department to manage digital labor, right?
Simon Brown (27:50.373) I do wonder if actually HR could be that place, because, I think, Moderna recently merged their IT and HR teams into one. I was having this conversation with someone the other day: if you consider AI as digital workers or digital labor, then there are a lot of parallels. We have to train them, first of all. We have to then provide them with feedback and go through sort of continuous improvement, essentially performance-
Alexia (27:57.681) Okay.
Alexia (28:01.947) button. Yeah.
Simon Brown (28:20.176) managing them. If they're not performing then we need to probably retire them. If they are performing we probably need to promote them more and give them more responsibility. So you end up with actually a lot of HR processes that could actually apply to digital labour and then you start to get into the pool's org chart point and you then start to put them into your HR systems and all of these pieces and treat them as digital labour. Look at their skills.
the fact that they would augment skills within the team, so that certain agents have certain skills in order to perform certain tasks. So there is definitely a train of thought, one that sort of stands up to some scrutiny, I think, to say: actually, consider AI agents as digital labor, and then you treat them in the same way, or at least use similar processes to how you would manage people.
Alexia (29:10.897) I think that's spot on. And I think it's also important to call out that just because we are taking from some of the processes we've built for human labor does not mean we consider digital labor to be human labor or to be substitutional of human labor because we're not going to give performance reviews to agents, right? Why do we give performance reviews? Like literally sit down one-on-one manager and direct report. It's because...
We want the direct report to feel seen. We want them to have an honest conversation about how they're performing, because humans are messy and complex and there are variables. You know, maybe I had a parent who was sick these last six months. Maybe I took some time out to study a different subject. Or maybe I was a really high performer, and that's because I had an amazing team supporting me. All of these variables
affect human performance, and those variables will not exist for agents, right? Like, agents are not going home to their little agentic families to feed them. They're not raising little agentic babies, right? And so just as much as, yes, there are processes we can draw from in a similar way, we also need to be very cognizant of the fact that it is not the same type of labor at all in many ways. And I keep going back to that data around why people are using AI.
Why do they turn to AI more and more? And those characteristics are the ones that are really unique. The 24-7 availability, the endless ideas on demand, the ability to compute a thousand data points in a second. These are all things humans can't and shouldn't do, but they are the things that make AI and digital labor unique. And we need to be designing around those, I think, because when you create...
a human workforce and an agentic workforce that have these complementary skills, I think that's when you unlock the real value.
Garrick (30:56.622) It's a new shape of a new business. I mean, it's a new economic model. You bring it in and it changes the form of the economic model that work currently is. I'd love to have, and this is a very geek comment, a panel discussion with yourself and with an accountant and with a lawyer. Okay, so how do we account for the new organization? What does it look like? Where does the value come from, and how do we manage that?
Alexia (31:00.113) Mm-hmm. Yes.
Alexia (31:14.192) Hahaha
Alexia (31:20.709) Yes.
Alexia (31:24.369) And we had, you know, this year one of the research processes we did, that we added to the broader research effort, was a really important qualitative research undertaking where we went out and interviewed a bunch of economists, because they're all thinking about this, from the micro scale to the macro scale. I mean, even this concept of I now have an agent that can produce 10 experiment ideas in a second. That is an unknown quantity to economists. They've never been able to capture the value of an idea.
Garrick (31:39.116) Yeah.
Alexia (31:54.275) right? Like, tangibly capture it. We know intuitively, oh, you know, the idea of Facebook was like a six billion dollar idea at the time or whatever, right? But we had no way of actually capturing and measuring that. And now for the first time, we might be able to actually measure tangibly the worth of an idea because, you know, basically for that agent I'm using now, it's the $30 a month that the license costs me, right? So that economists are trying to wrap their heads around that, right? Like for the first time, we might be able to see ideas show up.
in GDP. That's never happened before. So yeah, I think...
Garrick (32:27.03) And you extend that to micro-contracting and sort of micro-accounting. It could be happening at every level. Shocking.
Alexia (32:35.488) Exactly. even should we be like right now the cost for AI shows up under capital expenditure, right? Is that where we should be putting it? Should we be putting it somewhere else?
Paul (32:36.479) Alexia, just to go back to the agent boss mindset. So you have seven indicators of this. One thing I noticed as I looked through them, and I mean things like familiarity with agents, regular AI use, trusting AI for high-stakes work,
using it as a thought partner and so on. The first thing that jumps out at me is that in a current organization, the more senior you go up the organization, my hypothesis would be the less likely you are to score well on those indicators. And perhaps the younger you are, or the more familiar and immersed with technology you are, the more likely. But leaders are ahead in every measure is what it says in the report.
And I maybe wondered whether current leaders would be behind on every measure. So what surprised you about this, or is there anything that you really want to highlight or that we should focus on there?
Alexia (33:37.521) Mm-hmm.
Alexia (33:45.581) Yeah, this is definitely the year for our hypotheses to be proven wrong. Because I had the same hypothesis as you, Paul, which is, last year we saw in the data that employees were the ones racing ahead. They were the ones that were bringing AI to work, whether sanctioned or not. They were the ones insisting to their managers and their leaders,
I want to be able to use AI at work. I'm using ChatGPT at home. Why don't I have this capability here? And that was creating this real phenomenon of BYO AI. We called it Bring Your Own AI to Work. And this year when we looked at who is really demonstrating that agent boss mindset based on those key indicators, leaders, to your point, were ahead on every measure. And I think in hindsight, it kind of makes sense, because if you think about what an agent boss is, fundamentally it is about
demonstrating the management skill sets that leaders already have. You know, it is about prioritizing work, managing multiple inputs. And also, these leaders are the ones that are at the forefront of the direction the company is going. So they are seeing the agentic world when it first appears in their strategy documents, right? And that is not information that employees are privy to first.
And so this is why our call to action is very much, you know, employees are lagging behind and they are the ones that should be making themselves indispensable now, really going beyond just getting the basic knowledge of how to use AI, actually going out and building agents and integrating them into their workflows.
Simon Brown (35:16.66) So we're talking with Alexia Cambon. Alexia is the Senior Director of Research at Microsoft, where she leads research for the Future of Work and the M365 teams, working to identify emerging research opportunities and delving into customers' most pressing workforce challenges. She co-leads Microsoft's cross-company research initiative examining AI's impact on productivity and performance.
Alexia (35:36.305) Ruff ruff! Ruff!
Simon Brown (35:38.645) and is a seasoned presenter and speaker with a passion for storytelling and creativity. Her focus includes things like AI, hybrid work design and all culture. Before Microsoft, she spent 10 years at Gardner as a leader of their cross-functional future of work reinvented initiative. She's written for things like Harvard Business Review, The Guardian and being featured in MPR, Forbes and The Times. I'm curious, maybe what wasn't in the report around where it's all going. Exponential growth
it sort of sits behind some of these things with sort of compute power and some of these other pieces. Where do you see us in two years time with this? Where are agents going to be? What will Frontier Firm look like in a couple of years time?
Alexia (36:25.807) Yes, the most common question I get for this research is what's the timeline on all of this?
Simon Brown (36:29.85) Yeah, exactly. I'd love to know what the barriers are, but maybe timelines, yeah.
Alexia (36:34.545) I mean, I think, Simon, I've told you this before, right? I always predict a timeline and then, you know, OpenAI releases a new model and I have to revisit my entire assumption. So we purposefully don't mention a specific timeline in the report, and in general we're not very prescriptive in the report. We try to be more futuristic than prescriptive. I think certainly this is...
Paul (36:43.774) I'm
Simon Brown (36:47.305) Yeah.
Alexia (37:01.199) the most rapid pace of change I've ever witnessed before. And I also think this is a non-linear journey. So we outline in the report three key phases we believe all companies will go through on their way to becoming a frontier firm. And I think pockets of the organization will be in different phases. I don't think it will happen as a really smooth linear journey. I think the first phase is essentially AI showing up as your assistant
and enabling you to do the same things better and faster. We're probably all, hopefully, in this phase now. The second phase is AI joins the team and enables you to do net new things, things you couldn't do before. So a great example is the one we mentioned of the engineer now all of a sudden being able to build marketing agents and get marketing expertise. And then the third phase, which is the frontier phase, is essentially you now have a fully fledged
agentic workforce working alongside your human workforce. And that agentic workforce is taking on whole processes and work streams and workflows and operating end to end autonomously. And that is the triggering, scary phase for a lot of people, right? That is the phase where a lot of people feel very threatened and worried AI is coming for their jobs. But my own personal gut check on this is I don't think we're going to run out of work.
I think knowledge work will evolve. We're going to have new types of work that we'll be doing. And that third phase is very much new markets have opened, new work has been created, new types of roles exist. And so while it's very hard to predict exactly what those roles are and what that work is, if we look back, history teaches us that that is what typically happens with every technological revolution.
Paul (38:52.547) Talking to that point, and I fully agree. What we're seeing at the moment, of course, is sadly quite a lot of layoffs and redundancies and restructuring. But the people that are still in those organizations currently are, as you say, overworked, overwhelmed, have too much to do and too many demands on their time. We seem to be a little bit in that bridge, that gap
between that and actually augmenting the work that those folk are doing and allowing them to start to really get to grips with using agents to complement their work. And I wonder, as you say, it's just that chasm, and I don't know how wide the chasm is, whilst we start to augment and then no doubt rehire with new skills and new people back into the workforce as we increase productivity.
Alexia (39:30.427) Mm-hmm.
Alexia (39:42.671) Yeah, I think two things here. I think the first is a really good parallel example that I always come back to, which is the rise of the internet,
and how the internet affected brick and mortar high street shops and retail jobs that were pretty much at the entry level for a lot of individuals. And we saw chains like Blockbuster suddenly go bust, because all of a sudden we moved to digital streaming. And there was a lag time between that effect happening and...
all of these new jobs suddenly being created, all of these e-commerce jobs where we needed these new skills. We needed people to design websites, we needed people to create online pricing strategies, we needed people to think about user experience, we needed, you know, social media brand ambassadors and influencers and all of these net new jobs that didn't exist. But there was a lag time, and I think that's the lag time we're probably in now. The second point I will say is
AI needs humans as much as humans will continue to need AI moving forward, right? Because AI learns from humans. And as we all evolve as humans, a world in which there are no humans doing any work and AI is doing all the work is just not plausible, because there is an interdependency: AI will continue to learn from all human activity, and humans will continue to learn and grow from using AI. So again, I think this...
this argument of substitution, economically it does not make sense, right? Even as leaders might be thinking one agent in, one human out, which hopefully is not how they're thinking about it. In fact, the data suggests that is not the majority consensus for how people are thinking about digital labor. I think the bigger unlock, the bigger promise, is again when we think about complementarity and how we build these teams where we're getting the best use out of agentic skills and the best use out of human skills.
Simon Brown (41:43.039) That notion around the lag, I think, is really interesting, because there's lots of research around the power of it. You talked about the P&G example earlier, that if you're using AI, if you're using agents, then there's huge value to be had. And to your point, when the next model comes, that suddenly switches on loads of new capabilities that are possible.
Then until you realize that value, that's the lag, as you were describing with the internet piece. For knowledge workers, part of that lag, I guess, is IT infrastructure, having access to the model. But a lot of the time, that may essentially be being prepared to pay for it or buy licenses or whatever. The harder piece is then building the agents, so having the skills to be able to build the agent, and then the skills around managing the agents. But then it's the very human pieces that drive that lag,
if I'm on the right track with it. It's the skill set, it's the culture, it's the willingness to experiment, the "am I supported in doing it?" Which, coming back to the earlier piece around HR, a lot of those are sort of driven through the HR function. So does the HR team actually become even more important in the age of AI, because they can drive, or they're influencing, that lag between getting the benefit of all of this new technology and the productivity and value that we're talking about?
Alexia (43:08.305) Yes, I think you're absolutely right. I think we know from prior examples of technological adoption at scale that HR and L&D in particular have played a critical role in making sure not only that people have access to the learning to be able to drive that adoption, but also the capacity, right? Like, you are the defenders of: we need to allocate time and space
for people to learn. And this, to me, you asked about challenges, Simon, this is one of the biggest challenges I see in the customers I work with. It's not that there isn't the motivation or the desire to learn. It's that there's not the time and there's not the space, and we are not prioritizing it. To the point where every Friday, and this is actually the call I'm going to do right after this, I have a dedicated one-hour block with my team. And the only thing we do is play and experiment in that hour every Friday.
And then on Mondays we write up: what did we learn? What did we see? And we give ourselves crazy challenges for that one hour, things that we wouldn't be able to do on our own without AI. You know, like create an advertising campaign for this running shoe in an hour using AI. Just play, just experiment. And if we didn't have that one-hour block, it would not happen. And so how do we create that capacity at scale, I think, is really important.
Paul (44:33.177) Thanks for that. I don't know how much time we've got, but I've certainly got one more question I'd love to ask you. What's your advice to somebody who's listening to this and saying, I want to take my firm and make it a frontier firm? Where do they start? What are some of the first steps that they should start putting in place in their organization?
Alexia (44:54.075) One of the really important lessons I think we've learned over the last 18 months is the importance of deployment at scale. And you cannot overestimate the effect of peer networks on driving this kind of adoption. And I get very nervous when I see companies roll out AI in just very select pockets, or even in an uneven way, so maybe one person on your team has access to AI and the rest of the team does not. And this is...
something that I think a frontier firm demonstrates, and one of the key criteria that we listed in our survey construct when we were trying to identify those employees that could be working for frontier firms: it is large-scale, full-scale deployment at your organization. So I think a really important consideration for companies to go through now, if they haven't already, is how can we ensure there is access to AI capabilities for our entire organization.
And then I think it's a question, as an employee, as an individual, of making sure you are prioritizing experimenting and learning and curiosity in this net new world where there are very few rules and very little real guidance and very little understanding about how to do this AI stuff, right? And no one's going to do it for you. No one's going to get you there. You have to set that
priority yourself, and you have to make sure that your role evolves alongside the technology. So that would be my guidance to companies writ large and then to the individual employee: invest in the motions that allow you to scale this adoption and this learning, and then as an individual, build that agent boss mindset.
Simon Brown (46:42.932) So we've covered a huge amount, and I'll come back to you in a moment for sort of one thing to leave our listeners with, but maybe to summarise some of the things we've covered. So we've gone through how your curiosity paid off with the report and how you used AI in every aspect of the process, with your three agents that helped around the surveys and the statistics and the research as well. How the future of the company is actually as a frontier firm, but there's not many of them yet: 800 people
out of 31,000 that could be considered as within a frontier firm. These new realities around intelligence on tap and what that's going to mean for individuals as agent bosses. We heard about the capacity gap, how 80% of people are struggling and yet 53% of leaders want more productivity from their teams. The cybernetic teammate research with P&G around the impact of employees with AI, and how they were able to be more productive than
teams without AI.
Why people are turning to AI now, so things like how the frustration with other humans actually wasn't the highest; rather, it's these net new capabilities that you described, around 24/7 availability, its ability to deal with large numbers, and some of the examples like coming up with 10 ideas in 10 seconds and so forth that it's able to do. The power of our human-to-human collaborations, and actually we should embrace that friction because it's that friction that makes our ideas better. We dived into
Hollywood and how that perhaps provides us a model that we can consider. And then I love this idea that you said around, you know, actually we're all going to be CEOs of our own agentic startup. So that feels like a great place for us to be. But also I really liked how you said, you know, actually this is day one for everyone, that we're all learning from the same starting point. And so how do we embrace that agent boss mindset and those seven things that you described, around things like familiarity and regular AI usage, et cetera, that make it up.
Simon Brown (48:46.678) We talked around how some humans may be managing one agent, some areas may be managing as many as 300, and then this whole notion of agents on the org chart, and maybe this role that HR could play there and what it would mean to work side by side. We talked around the mindset of leaders: the hypothesis that certainly I would have agreed with, as I would have expected leaders to actually be less familiar with AI, but actually it seems like the agent boss mindset is more often coming from leaders
already, which does make sense given, as you said, they've got access to the strategy. And then these three phases: first of all, AI as assistant; the second phase where AI joins our teams; and then the third where we actually go into that sort of true frontier phase. So lots to think about. Also, I love the idea you have with your team of an hour to really play and experiment and then share back. And then things we can do around peer networks, the power of having access for all of the organization,
and as an employee, how we can prioritize experimenting and being curious. So lots we covered, lots of rich stuff there. What would be the sort of one takeaway, Alexia, for our listeners off the back of all of that?
Alexia (49:58.897) For me, I think the biggest, most urgent question is fundamentally how do we think about this new type of intelligence. Is it a form of capital? Is it a form of labour? Because our answer to that question, as leaders or as employees, will completely affect how we use it and how we integrate it. And I think there's not enough consensus on what it is just yet.
Simon Brown (50:26.324) Lots for us to mull over and give some thought to, and hopefully we'll get you back in the not-too-distant future for the next report to see what your conclusion was on how we should look at it. So a huge thank you for joining us, Alexia. A great conversation.
Alexia (50:34.064) Yeah.
Love it.
Garrick (50:38.168) Good filming.
Thank you, Alexia.
Paul (50:43.03) Thank you, Alexia
Alexia (50:43.451) Thanks for having me. Thank you.
Simon Brown (50:45.886) So you've been listening to the Curious Advantage podcast. As always, we're really curious to hear from you. So if you think there was something useful or valuable from this conversation today, then please do write a review for the podcast on your preferred channel and say why that was so, and tell us what you've learned from it. We always appreciate hearing your views and having a curious conversation using the hashtag #CuriousAdvantage. If you're interested in diving deeper into the seven Cs of curiosity and how you can be more curious, then The Curious Advantage book is available on Amazon worldwide. Order your physical, digital or audiobook copy
now to further explore our model for being more curious. See you next time. Thanks very much.