Love+Machines

Episode 05 - Digital Doppelgangers & AI Twins! (with Natalie Monbiot)

Episode Summary

In this episode, I chat with Natalie Monbiot, a founder and tech entrepreneur who speaks from her experience building one of the first AI avatar companies in the world and who coined the term ‘virtual human economy.’ Natalie presents a fairly different outlook on AI technologies compared to many of the other guests I’ve had on this podcast. As someone who spends most of my time on criticisms of corporate tech and negative projections of AI’s impact on creative work and human ownership, I find this conversation especially valuable for learning about other perspectives on opportunity, work, compensation, and the future. In this episode, Natalie explains the technology behind digital cloning and what these clones – or digital twins – are often used for, and shares some visions of a world in which everyone gets fair pay for their own virtual human doppelgangers. I hope you listen and enjoy. Love, Julia!

Episode Notes

Natalie's LinkedIn: https://www.linkedin.com/in/natlikethat/

My new website: https://julparke.squarespace.com 

Episode Transcription

Natalie:

So what I mean by the virtual human economy is this idea that individual humans can have an AI twin, an AI replica of themselves, that they can put to work on their behalf for profit, quite literally, and to just lead better lives. So that's sort of my thesis, my rather optimistic thesis, and it's founded on my experience as a founder of one of the earliest AI avatar startups called Hour One, which was recently acquired by Wix. And there, what was particularly striking to me is that we had all of these avatars, hundreds, many hundreds of real people who had licensed their likeness as an avatar to then appear in commercial content and get compensated for it.

So this is a whole new way that ordinary human beings could actually have a role in the digital world, in the AI universe. Suddenly these people could be moonlighting as a language teacher or a sales representative or a car dealer or what have you, and not just doing that, but doing things they couldn't do in real life, like speaking other languages. We had some people who ended up teaching German or Spanish to real students around the world. And while their likeness was leveraged in the content, the beauty of AI is that it can enable you to do things that you can't do in real life, like speak languages that you don't know. So what was really exciting to me about that whole journey was discovering how real humans could actually benefit in the era of AI. Like how each of us as individuals could actually increase our own agency and our potential to do things that we can't ordinarily do in real life, to buy time and to profit.

Jul Parke:

I think what a lot of people might think in hearing this first is fear. My goodness, my likeness is going to be copied. Somebody might acquire control of my voice and of my image, my likeness, my personality. How do I make sure that that never happens to me? Do you ever get questions like that from people?

Natalie:

Yes, all the time. So I will say that there is no system that can absolutely guarantee that that won't happen to you. However, what you can do is take advantage of the technology in a thoughtful and intentional way to actually capitalize and benefit from it. So while I can't prevent some random bad actor from cloning my voice and using it to scam my mother, right? Not that you need that kind of technology to be able to scam your mother. My mother has been scammed with much less tech advancement than that, right? Obviously AI amplifies those risks. So I can't really prevent that from happening.

But what I can do, and what I do do, is put my voice on the platform ElevenLabs. For those who haven't heard of it, it's an AI voice platform. I, and probably hundreds of thousands of others at this point, have provided our voices on the ElevenLabs platform. And I can basically set my price for my voice and make decisions like, "I allow my voice to be used for projects up to one month or up to one year." So I have the controls. If I allow my voice to be used in a project for up to one year, then people are willing to pay more for my voice, right? Because if you think about a brand that wants to create consistent ads on radio, audio ads, they want to be able to rely on that same voice and know that it isn't just going to disappear from the system. So they're willing to invest a bit more in that voice.

So every month, it's not a lot because I'm doing this mainly out of curiosity, but I'll get like $70, $80 a month in my bank account, just randomly, for doing nothing. And it's just a reminder that I'm making this sort of passive income off my assets. You can start to monetize your assets in new ways, and AI enables that in ways that weren't previously possible. So I can license my AI voice, my AI likeness, how I look.

But the part that gets me the most interested and I think where most of the value resides for most of the people that I know and in my world is the ability to kind of look at your wisdom and knowledge as an asset, which we do already, and to be able to clone that and enrich that and make that work for you. And that's what I'm really focused on today is building custom AI twins for leaders who have a lot of wisdom, which might be represented in public data, internal data, the stuff that's relevant to the type of twin that they want to make.

And some of those goals might be, "I'm a CEO and I want to personally onboard everybody in my company. And I've just acquired another company of 3,000 people." And I want each one to be able to have a personal conversation with me, or a version of me, that can help them feel familiar with the culture, ask any question they want, get a sense of the leadership style of this company. This is a real use case, a real program that we're running. So it can be all of the CEO's knowledge, right? It can also be other stuff, like the company handbook. Train the twin on the company handbook, so that it becomes this really useful companion that employees can engage with and feel more connected to the company through. And by the way, this is absolutely not about replacement of the CEO. The CEO would never have the time to be able to do these things that he would want to do. In a way, it's a mechanism for scaling both your wisdom and your intent. "If only I could, I would," and this is almost the vehicle for that.

Jul Parke:

Wow. I mean, we're at the start of our conversation and it's already fascinating. I research digital labor and in our field, we have a term called assetization. So this is literally a case of technology being used to make assets of areas of our body and personhood that weren't assetized before. So by asset, do you mean like the sound of our voice, the look of our face and expressions that we make, our mannerisms, all of these nuances of the body?

Natalie:

I mean, I just wrote that down, because you've just given me some fantastic terms to encapsulate exactly what we've been talking about. And yes, that's exactly it. Areas of our body and our personhood. So yes, that, and I guess if your wisdom and your knowledge and your body of work are included in that, then absolutely.

Jul Parke:

Yeah, I remember back when we were chatting in New York, you mentioned it like IP, intellectual property, seeing one's identity as one's intellectual property, which was very interesting to me. I'd never really considered it that way before.

Natalie:

And it's also, yeah, I mean, there are some complexities in the law as to how you can actually create legal ownership over derivatives of yourself, because basically it's your works that can be copyrighted, right? And it's interesting when you think about yourself and your personhood as the IP. So I think there's a lot of complexity in that, but also quite a bit of progress as well.

Jul Parke:

Now, going back to this example of the CEO who has their AI clone go out to speak to employees of their company to sort of maximize the reach of their personality to the entire company. I'm thinking from the perspective of the company, it's fantastic that the lay employee has a chance to build a parasocial relationship with this brand being personified in the voice and literal look of their CEO. But from the employee's actual perspective, they're not getting the benefit of conscious interaction with the CEO. So what's the value add for the employee in that case?

Natalie:

Yeah, I think that's a really, really interesting point. It opens up many other types of conversations that you would never dream of having with the CEO. Like actually asking stupid questions or seemingly stupid questions or questions that demonstrate your ignorance, versus questions that are designed to show how brilliant you are. So it can create a more kind of like genuine interaction.

I think part of the value add to the employee is that they will be heard, but maybe in aggregate, right? Like if the CEO and the company could use this as a social listening device to understand the pulse of the company and to be more sensitive to how the employees feel as a whole. So yes, this is not about social capital, right? This is kind of like trying to really get into the head and mindset of the CEO and actually being able to ask the CEO questions that you would never dream of asking the CEO. And actually learning.

Jul Parke:

I see.

Natalie:

Right? And, or maybe being really prepared by the time that you do have that face time with the CEO, to really have done your research. This CEO in particular, one of his mantras is, "do your research, never show up to a meeting unprepared, know who you're talking to." So it's an interesting thing, because it becomes a tool for his values and also his kind of personality. It might demonstrate his personality, like, "you better be asking all the stupid questions in the first 90 days. Otherwise, if you ask them next year, people are gonna think you're really dumb." And so this is kind of like the mascot version of that, the unscary version of that, but it gives you a sense of the culture, and it's delivered not in this sort of dry way, but in his voice and the voice of the company.

Jul Parke:

This is a very banal direction to take this, but it's making me think of the movie Mickey 17, where Mickey 17 is sort of the soft cinnamon roll version of this character, and then his clone, who comes after him, is sort of the hardcore version of this character.

Natalie:

That's really interesting, because I've just come off a really interesting discovery call about a leadership program in which a lot of, as this person put it, more authoritarian-style CEOs are trying to learn a softer and more collaborative form of leadership, which is not something that came naturally to them. So what if, besides learning this stuff and actually doing the work to try to transform, you then codify it? Again, it's the intent, right? You're trying, and the transformation is maybe partial in the whole person, but the intent is complete. And you can codify these behaviors and a style, married with the knowledge and the wisdom that you want to communicate, and then disseminate that throughout the organization.

Jul Parke:

It seems to me that ultimately the bedrock of the virtual human economy is about innovating communication, and emotional intelligence has been such a huge part of how we see good management. Do you see digital technologies and AI being a really essential aid or sort of an additional support in terms of the way that we communicate and normalize the ways that we talk to each other in the workforce?

Natalie:

It's a really, really interesting question. What I found in building these AI twins and just thinking more widely about what AI is doing to organizations is that if AI kind of disrupts and also replaces a lot of the tasks we do, we're kind of left with not much to do. What is my value? How do I justify my value? All of these questions come up. But actually, it makes for a lot of soul searching and going back to the drawing board and kind of thinking, well, what is this all for? In relating that to AI twins, it's like, well, who do I want to be as a leader? And it's an opportunity to actually really do the human work of who you want to be.

The AI aspect of it is never the core, but it's the big amplifier of whatever our intention is. And so I think being really reflective and being really intentional about the kinds of connections and the kind of impact that you want to have with real people, this is like a real opportunity to think through that because AI will magnify for better or worse, whatever that is.

Jul Parke:

Yeah. So have you yourself been cloned with these technologies to develop your own AI twin? And how has that experience been for you?

Natalie:

Yeah, so I've been cloned in many ways, many times over, since 2019 when we co-founded this avatar company. But currently I'm really excited to be creating kind of a custom version with a very specific goal. I think when we think about cloning ourselves, it's like, what role is this twin going to have? So my twin, which will be available soon (we're just putting some finishing touches on it), is designed to educate people on these themes and allow people to have conversations like this one, right? So it's a professional one. People can learn about the virtual human economy and engage on the virtual human economy, and then I can also listen and hear what people are interested in. My twin can reach a lot more people and have a lot of these initial discussions, and then, as and when it's the right time, they can talk to me.

Jul Parke:

Wow. So that actually brings up a really interesting point, which is this area of data and privacy. When somebody is open and feels free to ask an AI twin about anything, is that ultimately going to go back to the CEO? Is it going to go into this sort of repository of information?

Natalie:

So that really depends on the design of the twin, and whatever the design is should be very transparently communicated upfront. With mine, I'm very much going to be consuming what people have said and trying to improve the conversations that I have based on any gaps that people felt. And I'll make it very clear that that is what's happening in this ecosystem. With something like the CEO, what I would recommend is that the information, the insights, are available in aggregate. And it's more about sentiment, or certain topics that keep coming up, but not traced back to any individual. So that gives the company insights, which will benefit and return value to the employees.

Jul Parke:

I see. And now this question is quite mechanical in some ways, and it's about representation. You shared with me that you are Filipino-British, so you are of mixed ethnicities. In the general public discourse, the challenge of illustrating multiple races and diverse ethnicities has really captivated the mainstream audience. So, in a technical sense, have there been any challenges in illustrating and doing justice to people's racial identities and their ethnicities?

Natalie:

Yeah, so in the work that I do, first of all, it's not always a visual representation. And even though that's where I started, I actually think it can be a bit of a distraction, especially in the types of enterprise and work use cases that I'm describing. I think there's a time and a place for the visual representation, but otherwise I actually think that just chat and voice are mostly enough, and voice carries a lot of sentiment.

Just back to your question, we do a very lifelike representation in the work that we do. We're mostly making twins of specific people, and so that doesn't really come up as a question. There are conversations that we have about having more of a general representation of the company. But again, I think a bit of a shortcut to that is: do you have someone already in real life who does represent your company, and that person might be racially mixed? And if so, okay, that is the person, and we will represent them as closely as possible.

Jul Parke:

Gotcha, gotcha. All right, so I can't help but think about the online dating situation. Do you envision a near future in which AI twins are used for dating?

Natalie:

I do think that the Bumble positioning around it, where you have this agent that kind of goes on dates for you, of course, sounds very catchy. I mean, again, I don't think this is a visual representation of you that is going on dates for you. This is an assistant that is doing a lot of that sort of swiping-equivalent behavior, or basically just using the intelligence that's available from you and from the platform to give you a shortlist. Like, that to me makes a lot of sense. That is just automation. I don't think it's as radical as it seems in the news.

I think that's very much like your assistant, your dating assistant is doing that. And of course, matchmakers, like that's just, you know, the concept that is as old as human beings and civilization. So to me, that is not a big leap and makes a lot of sense. This idea of like actually dating non-humans and things like that I feel pretty uncomfortable with and I don't think is a good idea. But yes, we'll stop there.

Jul Parke:

That makes a lot of sense, like Filter 2.0. So I guess one of the biggest questions could be, when somebody is given this option to buy Natalie's voice and use it in an AI twin-clone context, what would stop them from just generating an AI voice on their own and using sort of a free, machine-generated voice rather than someone's IP?

Natalie:

Right, so first of all, I think there's a reason to use platforms that enable these services for you, right? And if you're a business, you wanna be using legit services, and you want to be complying with terms of use, and you wanna be doing good business. These platforms, such as ElevenLabs and others, have all the features that you could possibly want to make it really easy for you to do this. To try and do this kind of thing on your own would be really difficult and sort of pointless.

But I think there's another part to your question, which is why would you use a replica of someone's voice in particular, and not just a made-up voice? To create a good AI voice, you kind of need human voices to model it off. It's very difficult to make a voice just out of thin air, or just from synthetic data. But I also think it's a business decision. Like, for a company like ElevenLabs, do you want to alienate people, or do you want to invite people in and build a community of people who are cloning their voices and actually benefiting from the platform on all sides? I think that's a long-term business strategy that is also a technical strategy, because you just have this constant stream of real voices. And as long as the economics work, that's a great brand that people will trust, in a space that has a lot of trust issues.

Jul Parke:

Wow, this is giving me so many interesting ideas. So I guess I'll end off with my final question, which relates to your latest role with Artists in the Machine. Can you explain for us what this community is and what you do specifically?

Natalie:

Yeah, absolutely. This is a key project of mine. I'm a founding partner of this event series and community. And it really seeks to gather the pioneers and thinkers and doers at this intersection of AI and creativity and to have a very real but also optimistic showcase of what's happening in the realm of AI and creativity and how to create a sort of a human-centered version of this where artists are really benefiting.

As far as the labor side of it, it's fascinating to be able to identify and spotlight companies and startups that are very focused on appropriately compensating artists in the era of AI. For example, at this summit, one of our early speakers, whom we'll be announcing soon, is from Cloudflare. They're actually able to prevent large language models from accessing data from somebody's website, a publisher's website or a personal website, without the permission of that website owner. And so it's creating a whole new economy that could really benefit creators, because now they have the permissions to decide whether these models can access their content, and they can also set a price for it. So I think there are a lot of really exciting things going on. I'm really excited to be identifying and spotlighting the shifts that can actually create meaningful progress in this space for the creative industry.

Jul Parke:

Wow, thank you so much for all this, Natalie. All of it will be so fascinating to everyone in the digital labor space. This podcast will be launching in mid-October, so if possible, I'd love to tag you on Instagram and LinkedIn where we're going to do sort of a marketing push for all of the episodes.

Natalie:

Great. Yeah, that's really exciting. You're a great interviewer. This is really fun. And we did cover a lot in a very short period of time. That was great. And also this whole thing about like this assetization, I would love to look at that and kind of incorporate that into how I talk about things and obviously reference it and stuff. Is there somewhere I can find that?

Jul Parke:

Love a focused chat.