Future of Work & AI Investing Insights with Tallulah Le Merle
In this episode of Arthur's Round Table, Tallulah Le Merle shares insights on the future of work, AI, and investing, drawing on her journey through technology, cultural heritage, and soulful exploration. The conversation explores job disruption, technological transformation, and why human creativity, intuition, and innovation will remain central in a rapidly evolving world. Throughout, she emphasizes hope, human-centric values, and responsible innovation as the foundations of a flourishing future amidst rapid technological change.
🎯 What You’ll Learn
How AI is reshaping the future of work
Why job displacement may lead to net job creation
The role of human creativity and intuition in an AI world
How investors should think about AI opportunities and risks
Why fear-based narratives around AI are often misleading
How technology can drive abundance and human potential
🧠 Key Insights from Tallulah Le Merle
1. Fear Around AI Is Natural—but Often Overstated
Much of the current narrative around AI is driven by fear, even though technological change has historically created new opportunities.
2. AI Will Create More Jobs Than It Replaces
While millions of jobs may be displaced, significantly more will be created across new industries and roles.
3. Human Capabilities Become More Valuable
As AI commoditizes knowledge and analysis, uniquely human traits become the edge:
Creativity
Intuition
Empathy
Judgment
4. The Future of Work Will Shift Toward Meaning
Work is evolving away from purely cognitive tasks toward:
Embodied work
Relational work
Meaning-driven work
5. AI Can Drive Massive Efficiency Gains
AI has the potential to:
Reduce inefficiencies
Lower costs
Improve sustainability
Accelerate innovation across industries
6. Investors Must Focus on Positive-Sum Outcomes
Capital allocation will play a critical role in shaping:
Ethical AI
Responsible innovation
Long-term societal impact
👤 About Tallulah Le Merle
Tallulah Le Merle is an investor, advisor, and thought leader focused on the intersection of AI, human potential, and the future of work. Her platform, The Case for Hope, explores how technology can drive positive outcomes for society while navigating uncertainty and disruption.
📊 Topics Covered
AI and the future of work
Job disruption and job creation
Investing in AI and innovation
Human potential and creativity
Ethical and responsible technology
AI and economic transformation
Fear vs opportunity in technological change
Arthur Andrew Bavelas (00:02)
Hello, welcome everybody to another episode of Arthur's Round Table. Thank you for being with us today. And Tallulah Le Merle is with us today. We're excited about having this conversation. We've been threatening to do this for a little while, and now we're here and super excited to do it. So Tallulah, thank you for doing this.
Tallulah Le Merle (00:20)
Thank you, Arthur. I love that word, threatening to do it for a long time. Now here we are, the threat itself. Yeah.
Arthur Andrew Bavelas (00:23)
Making good on the threat, right?
So let's start at the beginning, if it's OK with you. Tell us a little bit about your story.
Tallulah Le Merle (00:34)
Yeah, so my story is one of... it's like, where do you begin with that type of open question? I guess maybe lineage. Yeah, I was born to two British parents. You know, we have British and French in there, but I grew up on the West Coast, in the Silicon Valley Bay Area, the hub of innovation and technology, and...
Arthur Andrew Bavelas (00:44)
wherever you're comfortable in beginning.
Tallulah Le Merle (01:03)
And we're a close family, but we also have really interesting lineages from Iraq and India. And so this real melting pot. And so I've always been fascinated both, I think, by innovation and where the world's heading, but also kind of the deep wisdom and ancestry of where we come from, and bridging those two. And interestingly, most of my career over the past decade has been spent in the former, technology innovation, as an advisor and consultant for large corporates, working with scale-ups in the space,
and then as an investor, which is how we met and got connected. But my passion has been for the human side of the equation. And all of that is like, how are we using innovation, technology, and the tools at our fingertips to build towards a more flourishing future, and working with the teams who have that lens, who are thinking about ethical tech, responsible tech, and can answer that question: how is what you're building contributing to that future?
And so I find myself kind of a bridge: between East and West in my lineage, between technology and humanity, between systems and even kind of spiritual, soulful exploration. I now say kind of ancient intelligence, ancestral intelligence, and artificial intelligence. And so this interesting bridge between many worlds.
Arthur Andrew Bavelas (02:23)
So there are a lot of viewpoints, especially when it makes for good headline risk in the traditional media and even non-traditional media: the dystopian future that might occur as a result of this powerful artificial intelligence.
But I think it is becoming even more, as you just suggested before we started the podcast, that there's an opportunity for abundance, good, and really having an impact on humanity. What do you see happening there?
Tallulah Le Merle (02:59)
Well, I see first and foremost what you shared, which is that we are in a moment of ubiquitous, mostly fear-based messaging and fear, which is very natural. It's a human reaction to the unknown. And actually so much of what's coming is unfathomable to us, the same way a hunter-gatherer couldn't have fathomed the world of today with our skyscrapers and screens and airplanes, right? Or even agrarians when we became industrialists.
We've moved through these vastly different eras of human life on earth. And when you think about a new one dawning, we can't extrapolate into what that future will look like. So fear is a natural response. But I do think that the news, we're in this information era where if you can incite fear or anger, you'll get more clicks and engagement.
And I think that that's really unfortunate, because it's exploiting people's psychology to retain them, to engage them, to, you know...
Arthur Andrew Bavelas (04:05)
With
the intention of gleaning recurring revenue, or revenue in general, to get the clicks to make money, which in some cases could be for good, but in many cases it's just to build companies to make money, right?
Tallulah Le Merle (04:23)
Yeah, and that's the unfortunate part about it, right? And so, understandably, people are afraid, and they're right to be in many ways, because there are a lot of risks and uncertainties. We haven't addressed them all in this kind of inning one of a nine-inning baseball game, this early moment, right? A lot of this technology has been around for decades, but it's come of age, so to speak. And my work, you know, this...
when we talk about the case for hope and this platform that I'm building, it's not to bypass fear. It's not to bypass those feelings and to just say, look over here at this hopeful, frivolous, you know, let's-all-just-be-hopeful thing. I see that hope is actually first and foremost a discipline of choosing where we focus, but also it's this courageous act to say:
I'm aware, and I hold the risks and the uncertainty, and I'm going to posture myself towards, or just, I'm gonna have a curiosity and openness and a belief that something good may happen. Because our collective focus is very powerful, where we put our... Yeah, openness. It's like when we're in fear and shame, we're in a state of kind of contraction.
Arthur Andrew Bavelas (05:38)
and keeping open to it, right? Open to the good can happen.
Tallulah Le Merle (05:47)
We're closed, our bodies are closed. And from that positioning, we can't act. You know, we have no energy to productively build the future we want. So actually moving people into this posture of openness, which is hope, means that those humanists who care deeply, the ones who are right now angry and afraid, can actually shift into action, productive forward motion. And that's what we need.
Arthur Andrew Bavelas (06:17)
So it's true: when you're distracted by fear, it consumes energy that would otherwise be used in a more promising way.
Do you think that's true?
Tallulah Le Merle (06:33)
Absolutely. Yeah. I mean, our nervous systems are shut down when we're in that mode. And so, yeah, you know, there's this concept called hyperstition, which people may have heard of, or self-fulfilling prophecies, or expectation effects, the Pygmalion effect. There are many terms for it, but it basically just means our collective focus is very powerful, and whatever we're envisioning and holding in our minds as what the future will look like, we
move towards that. And so I see one of the biggest risks is that we're collectively imagining this dystopia or this doomsday, so we're much more likely to end up in it, especially if these humanists, as I said, who care, are leaning out right now because they don't want to touch technology or AI or any given innovation with a 20-foot pole, because they think it's going to be bad for us. And my kind of call or offer to them is: please
lean in and see yourself in this equation, because otherwise there is no hope. I say there is no hope without the ruthless hope of humanists. So that's why The Case for Hope exists.
Arthur Andrew Bavelas (07:44)
I love it. So talk
about the platform. Give us some context on what you're building.
Tallulah Le Merle (07:52)
The genesis of it: it started off as a talk that I was giving on the case for hope in the age of AI. And it was basically this process of meeting people in the risks and the fear around a number of common perceptions, and then trying to provide grounded, lucid reframes that were more hopeful. And I did that along a number of common perceptions: AI is taking jobs.
It's disrupting entire industries for the worse: education, creative arts, healthcare. That it's destroying the environment, that's a big concern. That it's making us less intelligent as a species, and cognitive offloading and fears around that. That it's not safe or secure. And then the macro risks around what happens when AI is used by bad actors, by oppressive political regimes, by
corporates or economic monopolies for commercial gain and not for the good of the collective, and then runaway AI, the alignment problem, AI-overlord dystopian futures. So I really ran the whole gamut, and I have thoughts on all of those fronts. I'm happy to go into any of them if that's where this conversation takes us. Just to provide the reframe
within each. And actually, on certain topics like environment, like education, I see this as, yes, a disruptive force, but with the potential to resolve them. Like, I think AI will be our greatest ally in addressing the current climate crisis. I don't see any way we get ourselves out of the situation we're currently in without taking a bold bet on cutting-edge tech. I just don't see it. And it makes everything we do on the planet more efficient, because a lot of what we do is vastly inefficient,
and therefore less energy-intensive. So people are very focused on the upfront energy demand and the data centers right now. But, you know, first of all, compute always gets more efficient. The tech isn't going to require all of that infrastructure going forward. No one's lobbying against it, because efficiency is a universal competitive advantage. And that's just one example.
Arthur Andrew Bavelas (09:52)
All right.
Even in a short timeframe, we saw open claw that had this much code and token recall. Then we did a nano claw, and then we did another one. And it just took the energy demand from a massive amount to a fraction of that in a matter of months. And it may not be a good example as it relates to the demand on the data centers, but I think that,
incrementally, what you said is it's going to be more competitive, and it'll become more efficient to meet those energy demands.
Tallulah Le Merle (10:50)
Yeah. No, I think it's a great example. I also give the one of early computers: ENIAC, in 1945 or whenever it was. That first computer took up a whole room, and it required the energy of an entire small village to do in multiple days what the crappiest pocket calculator could do now in a fraction of a second. So compute has become, I think, around 2.5 trillion times more efficient over the last 80 years.
Arthur Andrew Bavelas (10:59)
It's crazy
Tallulah Le Merle (11:19)
And that curve will continue. You know, often we extrapolate the current energy cost of a technology into the future as if it never gets more efficient, but it will. And when we're focused on that side of the equation, we forget the promise of AI. You always hear, hand in hand with AI: efficiency, efficiency. Efficiency means energy saving. That's what I hear.
And there are so many examples of that already. I think one of the most conservative estimates, from IDP or I don't know who published it, was that if AI makes good on even a fraction of that efficiency and productivity promise, it will save four times the amount of carbon emissions that it ever produces. And that's a conservative estimate.
So things like this, plus AI coupled with Earth observation, you know, satellite, how it can supercharge that work; smart infrastructure, smart buildings, HVAC technology; digital twins on city and planetary scales to map weather patterns, heat islands, traffic. It just supercharges our climate tech efforts.
And that's, I feel, not being highlighted. So that's an example of a reframe I give, and there are like 30 more of those, I would say.
Arthur Andrew Bavelas (12:48)
Let's talk about the often-referenced problem with employment, and people being bounced out and having to be retrained to do something else. What's your take on all that?
Tallulah Le Merle (13:04)
Yeah, jobs is absolutely where I often start, because it's so close to home. And this technology will be immensely disruptive to the world of work and to what work even means in the future. The World Economic Forum estimates that 90 million jobs, I think, around 80 or 90, will be displaced by 2030.
And that gets a lot of focus: what are the jobs being disrupted? We're already seeing it today. Is it more in developed or emerging markets? And fear around the unequal benefits, the re-skilling, like you say. What's missed in the statistic is that in the same World Economic Forum report, it estimated 170 million jobs will be created in the same timeframe, which is a net dividend of 80 million. So we always forget
Arthur Andrew Bavelas (13:54)
Right, net plus, right? Yeah.
Tallulah Le Merle (13:59)
that with new innovation comes new industries. At the dawn of the internet, we couldn't have fathomed that social media would arise as an industry that empowered people to launch their own businesses, reach entirely new markets, step into thought leadership or content creation in a new way. We couldn't have fathomed e-commerce as an industry. And these created millions and millions of jobs.
I think a lot about the jobs that AI can't replace and what remains uniquely human in the future. And I come back to three things: embodied work, relational work, and meaning-making work. Embodied work is work that requires a physical body. Relational work is work that requires the exchange of two souls. And meaning-making work is
where there's an element of human ethics or judgment or decision-making that can't be replaced. And then I hypothesize about the industries that will arise, of which there are many: human-AI interface work, AI oversight work. A huge one also is the trust economy. Trust will be a massive industry, I see, that will arise around identity verification, verifying truth, verifying content provenance.
It's going to become a whole industry in its own right. And also physical AI, AI applied to the real world, and robotics and things like this. So a lot is coming, but of course there's this near-term fear of either the job you trained for your whole life disappearing, or changing and requiring then a shift in people's capabilities. And they have to be supported through that. This is going to be a messy, uncomfortable...
Arthur Andrew Bavelas (15:54)
It will be messy
and where you stand often depends on where you sit, right? So if you're getting displaced, you might be able to intellectually appreciate what we just talked about, but it's still idiosyncratic to your fear or stress. You still have to figure that out, right? I think when we look at it from a global perspective and a collective perspective, the net plus will be there, but
Tallulah Le Merle (16:15)
Mm-hmm. Mm-hmm.
Arthur Andrew Bavelas (16:23)
the person who's sitting there stressing out about it is still stressing out about it, right?
Tallulah Le Merle (16:28)
Right,
right. And here, what you were sort of touching on is, here's the bigger reframe. Our definition of work has changed across human life on earth. I talked about hunter-gatherers; there our work was subsistence activities, literally hunting and gathering. As agrarians, our work was farming the land. 90% of us in the US were farmers.
In the industrial era, our work moved into factories and mass manufacturing, and we had the rise of the nine-to-five, right? Clock in, clock out. And now work is still that nine-to-five. We've kind of taken that model with us, but a lot of us, over 50% of us, are in services industries, and we are behind screens. That's actually pretty dystopian compared to what came before. We are the most disconnected now that we've ever been from others, from community,
in this hyper-individualistic way we work; from nature. Children spend nine times more time now on screens than outdoors. And the natural world is something we go to. We don't live in it, of it, amongst it the way that we used to. And we're disconnected from our physical bodies, because we are largely sedentary and sitting all day, or the work we do is cognitive and we're always in the realm of the mind, right? Above the neck. So we fear losing what we know.
But there's actually an opportunity to shift out of this relatively dystopian way of working into something again more embodied, relational, where we reawaken maybe our ecological intelligence, somatic intelligence, communal intelligence, because we're offloading cognitive intelligence, which is only one small, tiny portion of human intelligence, but we've become very over-indexed on it
Arthur Andrew Bavelas (18:13)
Right.
Tallulah Le Merle (18:20)
in this information era.
Arthur Andrew Bavelas (18:22)
Totally agree. There's an opportunity to reengage the other parts of your... Let's just say, besides your thinking, there's a whole bunch of other stuff going on vibrationally that we can take that time to learn more about, right? When you walk into a room and
your radar is on, and you can feel, okay, I don't want to talk to that person, but I definitely want to talk to that person, right?
Tallulah Le Merle (18:56)
Yeah.
Well, that's it. And there's actually a lot of conversation happening about how we will shift from valuing skills to valuing capabilities. Skills are things like, you know, I'm really the expert on this topic or this industry, I've spent 30 years learning it, or this is the knowledge I know. Now, with the technology we're building, we have a tool that can
recall knowledge and process it much faster than we ever can. It's actually very humbling to the human. You know, our intelligence was our competitive advantage as a species, and now we've built something that does it better. So we're having this identity crisis: what is a human? But we'll, as a result, shift to valuing innately human traits and capabilities that aren't replaceable. Things like our intuition, our empathy, our relational intelligence,
Arthur Andrew Bavelas (19:46)
Exactly.
Tallulah Le Merle (19:52)
our ability to cooperate and collaborate, our adaptability, our resilience...
Arthur Andrew Bavelas (20:01)
far more
interesting than figuring out how a spreadsheet's gonna be constructed. Like, far more interesting.
Tallulah Le Merle (20:08)
I think so.
Arthur Andrew Bavelas (20:10)
Yeah,
it's... when you think about it, it's like, my god, I don't have to do that anymore. Good. Let them do it. Let AI do it. It's kind of like years ago when I learned... this is a very pedestrian example, and it's not unique to me, but I have a unique ability. Let's just say anything outside of that can be done by somebody else, and it's not interesting to me anyway.
So why would I not just focus on my unique ability and let all the other sort of stuff get done by somebody else? It's kind of old school, but that's what I was taught. Like, you know, your staff can't do this; you're the only one who can do that. So let them do everything else, right?
Tallulah Le Merle (20:44)
Yep.
Yeah, it is really... there's an opportunity here, if we choose to take it, to unlock human potential again, and to get people to ask the question: what do I uniquely love? What lights me up? What can I do that's irreplaceable? Exactly the point you just mentioned. And then think about how to step into that. And they need support in doing it, don't get me wrong. But the opportunity exists. And when people are,
you know, creating work or applying themselves to something only they can uniquely do, they're in their integrity and they're in deep authenticity, and the resonance and the frequency, as you said, of that can be felt. It's so powerful. You know, people worry that with AI we're going to get inundated with AI art and content and music and all these things. I say, sure, it may proliferate in the future, but you can't replace
Arthur Andrew Bavelas (21:39)
this bar.
Tallulah Le Merle (21:53)
the authenticity, the uniqueness of something generated with a human soul attached to it, you know, with the human's energy attached to it. That's one point. And I think it creates a lot of agency. Most people work for large corporates, other companies; they channel all their energy into building for someone else's dream, someone else's vision. And the bar has never been lower for people to, in fact, shift into building for
themselves, whatever it is they want to create. And I'm working increasingly in emerging markets to do exactly this. You can have $0 in your bank account, a phone, and a connection to the internet, and you can start a business where you can share a message or market a product. And it is a great equalizer, a great democratizer, or has the potential to be, in the right hands of the individuals and collectives who are using it for those purposes.
Arthur Andrew Bavelas (22:23)
themselves.
Tallulah Le Merle (22:51)
Because of course the flip side is, in the hands of bad actors who want to use it to exacerbate inequality, or for commercial gain only, or for other nefarious purposes. And it's almost like this race: can we empower those individuals, the people wielding technology and innovation for good, faster than...
Arthur Andrew Bavelas (23:11)
then yeah.
And I think we have to be settled with the fact that there always will be bad actors who are smart, who are leveraging things as they progress. And, you know, maybe I haven't thought about it in terms of a race, but that's a good way to put it. But I think you can coexist, just accepting that some of that's going to happen, but move forward. It's like you don't need all the lights to be green. You don't need to know that
the bad actors have been thwarted to go ahead and start building something on your own.
Tallulah Le Merle (23:46)
Yeah, it's true. We can hold both. I do think there is some urgency right now, just because of the pace of how this is all unfolding. And so for capital allocators, especially if you're on the invest side of the equation, I do think there's a really important imperative right now that we are thinking about how to flow funding consciously and intentionally
to the teams who can answer that question: how is what you're building contributing to a more flourishing human future? And also security and safety solutions, data privacy solutions. There are tons of teams now building in that space. And the good news is, even if bad actors are acting, another hopeful benefit of this technology is that it makes the visibility much easier. It empowers us to...
Arthur Andrew Bavelas (24:38)
Yeah, discloses
them, yeah, yeah. Unveils them. Yeah.
Tallulah Le Merle (24:40)
monitor so bad actors can still act but it's
much harder now for them to act invisibly.
Arthur Andrew Bavelas (24:46)
Yeah, I think that's true. It makes a lot of sense. Yeah. What a time to be alive.
Tallulah Le Merle (24:48)
Mm-hmm.
I mean, it really is. It's sort of staring into the void of the unknown, which, you can imagine, as I say, hunter-gatherers would have done, or agrarians at the dawn of the industrial revolution. They feared electricity and steam trains; you know, it was similar. But now we have social media to be able to talk about it constantly, to talk about our fear and talk about the unknown. And so it's really fascinating. Yeah.
Arthur Andrew Bavelas (25:22)
I'm just using AI daily, and in a very pedestrian way, not in an advanced way, admittedly. It's become clear to me that it's what used to be called quality control, right? You let AI do work that would otherwise be mundane or not very interesting, and it's just absolutely amazing at what it produces. But it still needs someone, let's say a person, to check it.
Like, there has to be... you have to have a sniff test. Does this make sense? Because it'll spew out stuff that... and the hallucinations are getting better, I think. But I think that that skill set, of saying, okay, we were skilled prompt writers, we see the outcome, and then we look at the outcome and say, that doesn't make sense, and adjust it accordingly. I think that, in a very sort of mundane way, will be a skill going forward.
Tallulah Le Merle (26:18)
Yeah, exactly. This human-in-the-loop point. I mean, we're not at the stage of fully autonomous agents yet, because the trust and reliability of these systems isn't there yet. So you do still need the human layer. And even in a future where you get to, you know, agentic AI in a more meaningful way, as I said, I still... when I was mentioning embodied work and relational work, I also see that meaning-making work
will still remain uniquely human, because we need to interpret and integrate outputs with the physical world that we're existing in. That's the ethics and judgment and human-values interpretation layer that I mentioned. So yeah, I totally see that.
Arthur Andrew Bavelas (26:58)
Right, we still have to live here, right? Yeah.
So talk more about the book and the platform. How do you see that moving forward?
Tallulah Le Merle (27:19)
Yeah. Well, a lot of my work to date, the way I see it, has been top down. You know, I've worked with large corporates, I speak at conferences, I was working with scale-ups and other entities and teams at the kind of edge of the build itself, which meant I was always talking to audiences who really had a level of literacy around the technology and were part of the building. That's really shifting now.
And where I'm taking The Case for Hope is actually into this bottom-up chapter of reaching the world, reaching real people who are concerned about: what does this mean for my job? What does this mean for my kids? Education? What does this mean for their intelligence? Am I safe? Am I okay? Are we going to be okay?
And trying to translate for those people, because there are, yeah, smart groups of people convening at conferences around these topics, but what good is all that effort in a bubble? The message needs to reach everyone. Everyone is coming on this journey. Everyone is moving into this new chapter with us. And so I started sharing on socials under The Case for Hope.
I'm starting a series of dialogues with people who are actually providing a vision of the future we're moving into. Not just talking about what we stand to lose from today, not just talking about everything that's going to get disrupted, but planting a seed of: here's where we're going. This is what we're moving into. I want to uplift and amplify those voices. So I'm launching that. And then the book will come in its time.
The book in the first iteration that I wrote was very focused on technology. And I still stand by that. I do think that the tech arrives as the great disruptor, kind of instigating this change. It's like, what do they call it? There's the underlying powder keg, and then there's the spark that lights the fuse. But also I feel there's so much more to say, and I want to emphasize
more of what we're moving into, and this reawakening of innate human traits, the restoration of forms of intelligence we've lost over time, like I mentioned: reconnection with the natural world, with community, with collective, with our physical body, with somatic intelligence, with ancestral intelligence, spiritual intelligence. So the book will come. It was slated for this spring, so in the next month or so, but I'm going to let it breathe
Arthur Andrew Bavelas (30:10)
I didn't know that,
Tallulah Le Merle (30:14)
for a bit.
Arthur Andrew Bavelas (30:16)
Probably a good idea. Things are moving so fast. Probably in the next six months, you'll learn exponentially more than you learned to write the book to begin with, right?
Tallulah Le Merle (30:17)
Mm-hmm.
Exactly that. There's some sort of recalibration happening in me, from the career I've spent in technology, as I said, to now this new chapter of translating, bridging, amplifying voices, and meeting people in fear and grief.
Actually, there's a lot around that. Like, if we believe the modern world is ending, how do we grieve it together? We can't bypass that process. How do we have conscious endings? There's a great book called Hospicing Modernity on this, which has really inspired me. But we used to grieve collectively and communally, and we don't do that anymore. So instead people are grieving alone. They're feeling the weight of the world through their phone screen, and they're having to process that by themselves. And our biology was never designed for that,
Arthur Andrew Bavelas (31:13)
Hmm.
Tallulah Le Merle (31:21)
you know, to be able to look into a screen and have the weight of the world's problems at our fingertips, and to have to try to process that within one human body. So how do we grieve it together and dignify the end of this current chapter of life on earth? And then, only in really being with that process, maybe get curious about what comes next.
Arthur Andrew Bavelas (31:49)
Yeah, I'm not sure which comes first. You could run and say it's not the end of the chapter and just move forward and embrace the future, right? You may not have to grieve it. I'm not disagreeing with you, I'm just thinking out loud. I just hadn't thought about it that way.
Tallulah Le Merle (32:03)
Yes, I'm ready.
Yeah, some people maybe not. I think both can be true. Some people are already in a position where they're like, I'm ready to go, I'm building what's coming next, right? For sure. And that's, you know, the delusional builders on the cutting edge. They're already there. They've been there for ages. I don't feel that it's the vast majority of people's experience, you know? But we'll see. I agree. I don't think it's necessarily black or white.
Arthur Andrew Bavelas (32:15)
Yeah.
So I admit that my bubble is full of entrepreneurs and optimism and delusional optimism, which arguably is a good trait for entrepreneurs. You can keep running through walls and all that kind of stuff. I don't run into people that are actually worried about losing their jobs, because I admit that I'm running in a different
Tallulah Le Merle (32:49)
it's required, yeah.
Arthur Andrew Bavelas (33:02)
orbit of people. What I do find is that people are just awestruck at how fascinating it is and how productive it can be and how, I mean, again, the very nitty gritty of, can you imagine it did that in three seconds? Like, it's just amazing what's happening.
Tallulah Le Merle (33:27)
Right.
Well, and that's been my world to date, primarily too, right? And...
It's not, you know... At the end of some of the talks I give, I show this matrix of present, future, positives, negatives, like an axis. And I show that there are some people who are focused on the positives of today and they don't want to lose them. So they feel fear. And then they look at the negatives of tomorrow that could arise and they feel fear again. And so they're on that fear axis, which is the human response we talked about at the beginning, right? The amygdala lighting up, fear of the unknown. It's very human.
And then you have people on the other side of the spectrum: they look at the negatives of today and they think about how they could solve them with the technology that they have access to. And they have ideas. They're entrepreneurs, they're innovators. And then they look at the positives of the future that could arise when you solve the negatives of tomorrow, and they feel hope. And I think what we're saying, you and I, is that our world to date has been the ideas and hope world. And that is, but it's...
Arthur Andrew Bavelas (34:16)
Yeah, we get excited. Yeah, yeah.
Tallulah Le Merle (34:35)
It's entrepreneurs, it's innovators, it's the delusional, cutting edge people building the future. But I actually think it's only 1% of the population. Maybe 10% if you're being incredibly generous. But I think the majority really are, not that they're stuck in fear, but it's like,
Arthur Andrew Bavelas (34:44)
Low.
Tallulah Le Merle (34:56)
They don't understand this technology. It's not been their world ever and no one's translated it for them. And there's so much jargon. And like, so, you know, this 10% where we're all meeting them at the conferences and the think tanks, and it's all well and good, but it's like, where's the bridge? Who is moving the thinking of that few and articulating what it means for the rest of the world?
And I see myself right there. And it's actually really hard, because you are interfacing every day with the pain of regular people who are afraid and worried and concerned.
Arthur Andrew Bavelas (35:26)
Yeah.
So what are they
telling you? Is it jobs? It's not so much climate change, I imagine. It's about the urgent thing that's in front of them. So tell us a little bit about that.
Tallulah Le Merle (35:50)
Everything. It's what's going to happen to my job. What's going to happen to my child's future? Am I going to send them to school and are they going to be set up for success? Or are they just going to learn how to cheat and plagiarize well and never engage cognitively in anything? Are they going to lose gray matter in their brain because they're just relying on this tool all the time? Are they going to get in trouble from teachers because of this? What happens there? I think it always starts with ourselves and our family.
And then, the greater you zoom out, people worry about collectives. They worry about creatives. What happens to creatives? Is AI just going to create artistic slop, and creatives are just going to have to edit those outputs, and it completely comes for all their work and disrupts music and design and film? Then people worry that it's just destroying the environment. Look at all these data centers. Look at the huge infrastructure upfront costs to build the foundations for this technology.
And then people worry, does this exacerbate inequality? Does it put money into the hands of the few still? And so it makes that whole thing worse. Because UBI, how would we fund that? So that's not realistically going to happen. And what about my data? Will it be safe? Will it be secure? Will it stay private? Will I ever have agency over it? And what if it scales the impacts of war? What if AI decision making replaces human decision making
when we're talking about lives? That's what I am interfacing with every day. And honestly, talking about it, I start to feel emotional, because it's really heavy, but it's really real. And that I can't ignore.
Arthur Andrew Bavelas (37:31)
Yeah. How did that happen?
How did you end up interfacing with that perspective?
Tallulah Le Merle (37:38)
I started sharing. I started sharing hope and these reframes I talked about, not just in the rooms of people who get it, but with those whom no one has tried to sit with and explain it to and help them understand. And I'm trying to do that. And
It's this thing I mentioned, right? You can't bypass it; you have to sit with them where they're at. And it really hurts. And it hurts also to see how they're being exploited by media and news, and by people who know that fear sells. That's one of the saddest parts of all. It's like...
Arthur Andrew Bavelas (38:05)
Yeah.
Tallulah Le Merle (38:20)
it's making people sick, and they're fighting one another as well. Good people are fighting one another. One of the things that hurts me the most: if a small business uses AI to make an ad, to generate an ad or some marketing for their small business, then in the comments people will be torching them, saying it's disgusting you use this, we're boycotting your business and your product because AI is destroying the...
Arthur Andrew Bavelas (38:44)
that's interesting. So recognizing that they used AI to generate an ad that would normally be done by a person, they're feeling put off by the small business taking that step. That's really interesting.
Tallulah Le Merle (38:46)
And I look.
Yeah.
yeah. And they're persecuting each other. And I'm like, we can't do this. We're so focused on the wrong thing. People using technology, individuals and collectives really using it, like Jane Doe or John Doe trying to start their own business to support their family, trying to start a side thing so that they can change their life and the life of future generations. Someone in a far flung part of the world,
you know, with zero dollars in the bank account and a phone and the internet, wanting to reach the world to change their own circumstance. Those are not the enemies. That's not our enemy. It's AI in the hands of bad actors. It's when it's being wielded for the wrong reasons. But instead we're fighting well-intentioned people who are just trying to use the innovation that's at their fingertips to do something good.
Arthur Andrew Bavelas (39:54)
It's a really interesting perspective, because we know that humans can be attached to an ideology that speaks to where they think they stand, that they have some sort of baseline of where they think they stand. And while some people would have done the research to
really feel like where they stand is a relevant point, a lot of people just attach themselves to an ideology that aligns with where they think they stand because of their social pressures or whatever it may be. So do you think that these people who are chastising the local merchant for putting an AI agent in place to make sure that
when their cart is empty, they come back and say, hey, didn't you like that pizza you ordered? That seems petty.
Tallulah Le Merle (40:58)
Again, I think it's.
Yeah, they believe that any use of AI is contributing to the negative. Yeah, and I understand how you can leap into that logic. Also, a lot of people conflate AI with the current LLM incumbents, the foundation model shops. So they think AI is ChatGPT, because that's how they're mostly using it. They think AI is OpenAI, or name whatever other one, Google.
Arthur Andrew Bavelas (41:08)
people losing their jobs.
Yeah.
Yeah.
Tallulah Le Merle (41:31)
Right? And when you think that, you then believe that any use of the technology is contributing to supporting those corporates. And there's a lack of trust in large corporate entities right now, just in general. Also because of the social media age; people lived through that. It was, like, you know, a failed promise in a way. We said, let's connect the world through social media, and it ended up those platforms really did
Arthur Andrew Bavelas (41:41)
Big Brother.
Tallulah Le Merle (42:01)
exploit our attention span, you know, dopamine hits with the constant... It was kind of net negative in many ways. So people fear the same thing. So they're not seeing the fact that there's all these other use cases of the technology. And another thing I try to talk about or explain is that the early incumbents in any tech cycle are not necessarily
the ones that stand the test of time. Like we saw Yahoo became Google, BlackBerry became Apple, Internet Explorer became Chrome, MySpace became Facebook. So the first incumbent was not always the one who remained. And there is stickiness to the current LLM players to an extent, because they're integrated in the ecosystem, et cetera.
Arthur Andrew Bavelas (42:52)
Survived, yeah, yeah.
Tallulah Le Merle (42:59)
But if there were a new one to arise tomorrow that is built with this kind of ethical, responsible DNA encoded into it, we can plug out, or we can vote with our feet to a certain extent. And even apps built on top of that foundation model layer, the developers could rewire. You know, nothing is really set in stone. It doesn't mean it wouldn't take time and effort, but people feel hopeful when they hear that too, because they don't understand that right now. They don't realize that.
Arthur Andrew Bavelas (43:31)
Yeah, I had a conversation yesterday with a couple of young people, 20 and 22. And we were talking about AI, and I asked them whether they use it day in, day out. They said, absolutely not. It might just be idiosyncratic to these people, while we're like, we use it daily to make our business more efficient. The tool is just too extraordinary.
A 20 and a 22 year old, saying we don't ever use it.
Tallulah Le Merle (44:05)
No, I come up against that a lot as well, like boycott AI, all this sort of messaging. And I'm like, it's like saying the internet is Google. If you want to boycott a specific company that you lack trust in, I understand. I may in fact actually agree with you. But we can't conflate the terms, which is really happening today. And also people forget they are using AI, whether they know it or not. If you have Spotify and it suggests a song to you, that's AI.
Arthur Andrew Bavelas (44:14)
Right, exactly.
Tallulah Le Merle (44:34)
If you're on Netflix
and it suggests a film for you, that's AI. If you're using GPS, that's AI. You're using it. You might not be using, you know, a chat bot interface, but you're using AI every single day of your life in ways that you can't even begin to fathom. This technology has been around since the 50s. That's as old as the internet. That's when the term was coined, artificial intelligence, coined in the 50s, and this whole question posed of can humans think, or sorry, can machines think, right?
Arthur Andrew Bavelas (44:37)
Yeah.
Yeah. For a long time. Yeah.
Thank you.
Tallulah Le Merle (45:05)
And
it's been through all these iterations and evolutions, through early neural networks, through to the first commercial use cases, which we were seeing in the early 2000s with Siri and Alexa and other things like this. It's been around. So when people say, I don't use AI, I refuse to use AI, I'm like, you're using it. But it just shows you, again, it's not their fault. AI literacy is so low still.
Arthur Andrew Bavelas (45:21)
It was years ago.
Tallulah Le Merle (45:29)
And there's no one really or very few trying to translate this technology for all these people.
Very few, because I think it's the harder work. I mean, now having stepped into it, it's the harder work.
Arthur Andrew Bavelas (45:42)
It seems like it would be.
'Cause you don't have a baseline of understanding. And like I said, I give a pass to people regularly, not that anybody needs a pass from me, but people are busy. They don't have time to look into the stuff that we'd, for example, have the time to look into because it's part of what we do. They're not interested. And not only that, they're exhausted. You know, they're getting up, going to work every day, making sure their lunch is made and
They get to work and Johnny has to go to the soccer or whatever it is. There's just no time left to dig deep on stuff. Yeah.
Tallulah Le Merle (46:18)
I
really respect the people who are busy and inundated and overwhelmed and still find time to care. You know, because I also interface with deeply humanist people who are researching and reading up and taking the time to try to understand, which is remarkable in the context of everything you just said; those are their lives. And I come back to this point: without the ruthless hope
of humanists, there is no hope. And so those people, just in caring and in leaning in, are doing so much more than they even realize.
Arthur Andrew Bavelas (46:56)
Yeah, I admire that too.
We talked a little bit about this in some way, but it's unlikely that the robot is going to get underneath the kitchen sink and install the water filter. That's not likely to be a practical use of robotics. So those people are still...
Tallulah Le Merle (47:19)
Right.
Arthur Andrew Bavelas (47:23)
And there's been a massive trend in investing in those types of, you know, businesses. But do you see anything replacing that, you know, the plumber coming, the electrician coming and all that kind of business?
Tallulah Le Merle (47:41)
Well, you know, the field of robotics as it evolves, I suppose, can go in any direction it chooses. But the majority of what I see right now is trying to replace the need for a human in really dangerous settings or environments. Or in, you know, mining or deep sea, oil and gas, pipe repairs and things like this.
Those are the ones that are being advanced the fastest from what I've seen. And cars, you know, driving a car is one of the most dangerous things humans do. And I actually really look forward to a future where we no longer have to drive and therefore the roads are a much safer place to be. So, you know, Waymo, et cetera. That's applied use of AI, right? Physical AI.
Arthur Andrew Bavelas (48:13)
Totally makes sense.
Tallulah Le Merle (48:35)
I come back to what I believe remains uniquely human in terms of work and jobs: it will be embodied work, relational work, and meaning making work. And embodied work, let's say gardening or farming, just as an example. This is not something we've deeply understood as a society, at least in the predominant Western messaging, right? Like the exchange of energy.
What does it mean to have a human care for, touch a plant, plant the seed in the ground, versus a machine doing it? Right now we don't see that there would be any difference in the crop quality or the end output. But there is. And like indigenous land stewardship, this gets you to ancient and ancestral wisdom and knowing the difference. And the fact, like for example, food preparation: what's the difference between a robot just making food versus a chef
Arthur Andrew Bavelas (49:21)
That is huge.
Tallulah Le Merle (49:35)
touching the ingredients.
Arthur Andrew Bavelas (49:36)
because it was made with
love. That's why.
Tallulah Le Merle (49:39)
which is right. And now you get into this realm of, like, woo, or things that can't be quantified, and thus they're not rational science and we don't believe in them. We're gonna move into a future where we believe, where we understand, where we know the difference. And yeah.
Arthur Andrew Bavelas (50:01)
Yeah, it's really interesting. You know, people have talked about it for years: they've been eating a tomato here in the U.S. and then they go to Italy and go, oh, this is what a tomato tastes like, right? Those things are going to be really important.
Tallulah Le Merle (50:19)
Yeah, or factory farmed meat versus organic or free range meat, right? Like how that animal, the life they lived, the environment that they were in, all contributes to something that you feel in the food when you eat it. It has a different energetic resonance. Again, is it quantifiable right now? No. But you feel it in your bones, and therefore it's truth, and it's a non-negotiable truth.
Arthur Andrew Bavelas (50:34)
Totally.
You know when you see it, right? Yeah. Yeah.
Yeah.
Tallulah Le Merle (50:49)
And I am excited for the, I think, not so distant future where this is quantified, proven. I had someone in a comment the other day say to me, there's nothing humans can do that AI can't replace, and that's what's so dystopian about this. And I said, wow.
Arthur Andrew Bavelas (51:06)
That's because,
that's why you don't read the comments. I know that's part of what you're doing, but you know, people will say incendiary things just to get a reaction from you. Not sure that person even believed that. And I'm not doubting them. I'm just saying, yeah.
Tallulah Le Merle (51:21)
True.
But I read it and I thought, I can't even respond to this, because it demonstrates how deeply we have all moved away from understanding spirit and soul. And actually, I see an ability to restore that knowing as well, because people are already... We have to ask the existential question of ourselves: what is a human?
What remains uniquely human in the future? And when you ask that question, the answer you arrive at is our soul, our spirit, like our embodiment. That's a differentiator. And so there is really now this massive potential to reconnect to that within ourselves.
Arthur Andrew Bavelas (52:08)
There's a podcast I did the other day with a woman named Kate Mulder, whose business is helping people recognize that only 10% of what we're doing is the brain, and the rest of it is intuition, perception, that sort of thing. She calls it 360 degree perception. Because when you build that muscle, even a little bit,
Tallulah Le Merle (52:26)
Yeah.
Arthur Andrew Bavelas (52:36)
and you interact with somebody, the vibration is unmistakable one way or the other, right? I think that by itself is enough indication that AI is not going to be able to do that.
Tallulah Le Merle (52:40)
Mm-hmm.
Absolutely not. And we do a disservice to ourselves as a species when we believe that nothing we do is unique in the future we're moving into, that everything's replaceable. You know, I think this is all coming. We're really in the midst of the most painful, messy part of the transition. And I think we will be for some time to come. But slowly, slowly but surely,
Tallulah Le Merle (53:19)
these messages will start to proliferate deeper than the fear mongering can, because it will be felt. The truth of these things will be felt. People will start to reconnect with everything we've been discussing in this conversation. And I really look forward to that, because right now it hurts. I have to say it's tough.
Arthur Andrew Bavelas (53:41)
Yeah, you'll get through it. Like I said, I'll share with you the thing that Nash is doing, because he did exactly what we're talking about here and focused on the potential good outcomes for humanity with AI, as opposed to all the dystopian negative stuff. It's super interesting.
Tallulah Le Merle (53:44)
Thanks.
Yeah. And
I would love to tap into that, because I do say hope is a discipline. It's a choice at the end of the day. It's a choice of where we put our collective focus. And there is so much potential and opportunity along all the lines we've discussed, and more that we haven't: the democratization of healthcare, the personalization of healthcare, access to financial inclusion and pathways that
weren't available to people before, educating the world at low cost in real time in local languages, the preservation and scaling of local and indigenous wisdom that is so needed right now in the world. I mean, I could go on and on and on. And I'm sure the film covers so much of that, but yeah.
Arthur Andrew Bavelas (54:47)
Yeah, it's wild. Yeah, yeah, super interesting.
So Tallulah, let's do this again in a couple months to see what transpired in that short period of time. Would you be open to that? Yeah, we should do that. I really appreciate you doing this today. Fascinating. I want to make sure everybody gets a chance. How can people at least watch what you're doing, or
Tallulah Le Merle (55:01)
Of course, of course.
Arthur Andrew Bavelas (55:16)
connect with you? Is it on LinkedIn, or?
Tallulah Le Merle (55:19)
Well, as I mentioned, I've launched on socials under The Case for Hope, and my website has more information, which is TallulahLeMerle.com. There's an impact section which talks about The Case for Hope and has links to prior talks I've given that were recorded, and podcasts and things like this, and a place to sign up for when the book eventually emerges. And then otherwise LinkedIn, of course. So yeah, if you have show notes or anything and want to put that in there.
Arthur Andrew Bavelas (55:48)
Yeah, we'll set
them for sure. Right. Thanks for doing this. Look forward to having you back. Okay. Thanks everybody for joining us today.
Tallulah Le Merle (55:51)
Thank you.
Partner, Fifth Era // AI Advisor // AI x Human Flourishing
Tallulah Le Merle is an investor, advisor, author and speaker shaping a more human future in the age of AI. She combines expertise in technology with a deep understanding of humanity - our psychology, patterns, and behavior.
Tallulah is currently a Partner at Fifth Era, an investment manager focusing on innovative & exponential technology with over 80 unicorns in their portfolio. She leads their AI strategy, backing visionary funds and founders.
Earlier in her career, Tallulah spent over a decade as an advisor and consultant helping businesses navigate technological change. For the majority of that time she worked with FTSE 100 corporates on their digital and data transformation - leading multi-million dollar, transnational programs for the likes of Johnson & Johnson, Mars, HSBC, Infineum and others. She helped shape enterprise AI strategy back when it was still young and experimental - mostly machine learning and big data analytics. From there, she moved into the heart of the AI ecosystem to work with fast-growing scale-ups, often taking fractional executive/COO roles. Her special focus was the intersection of AI and humanity: ethical and responsible AI, governance, security, safety, and emotional use cases of AI.
She has an undergraduate degree from the University of Oxford, and further education from MIT Sloan.





