Michael Caine and Matthew McConaughey have licensed their voices to AI — but not every ‘iconic’ voice on the platform can consent

Few voices are as instantly recognizable as that of Michael Caine. The legendary actor has narrated documentaries, starred in countless films, and built a reputation for having one of the most distinctive voices in modern cinema. Now, that famous voice can be accessed on demand—by anyone willing to pay for it.

My name is Michael Caine. (Image credit: Warner Bros.)

In early March 2026, AI audio company ElevenLabs announced that Caine had officially licensed his voice to the platform. As a result, users can now access it through two services: ElevenReader, an app that reads text aloud using different voices, and the company’s Iconic Marketplace, a platform designed for licensing celebrity voices.

Caine is far from the only public figure involved. Actor Matthew McConaughey—who is also an investor in ElevenLabs—has used the technology to translate his newsletter into Spanish while still sounding like himself. The company says more than 25 recognizable voices are currently available, including those of Judy Garland, Art Garfunkel, Liza Minnelli, Alan Turing, and Maya Angelou.

However, some of these figures are no longer alive, which raises an obvious question: how did they consent? In reality, they didn’t. Instead, decisions about licensing their voices were made by estates or rights holders acting on their behalf.

The rise of ElevenLabs

Founded in 2022, ElevenLabs has quickly become one of the leading companies in AI voice generation. Its technology can create speech that sounds remarkably close to real human voices. Tools from the company are already used by podcasters, publishers, game developers, and media organizations.

The Iconic Marketplace focuses on business partnerships. According to ElevenLabs, it’s meant to be a “two-sided platform” where companies can request access to well-known voices for specific projects. The idea is that voice owners maintain control, approve uses, and receive payment when their voice is licensed.

In a statement, Michael Caine framed the partnership as an opportunity for creativity rather than replacement:

“It’s not about replacing voices; it’s about amplifying them and opening doors for new storytellers everywhere.”

While that message is compelling, it also raises complicated questions.

The issue of consent and digital legacy

For historical figures like Maya Angelou or Alan Turing, there was no direct consent. Their voices are being licensed through estates or organizations that control their rights. Whether that feels acceptable may depend on how people view digital legacy and whether others should be able to speak on behalf of someone who has passed away.

For living performers, the situation is slightly different. Some argue that licensing your voice through an official platform may actually protect artists. By doing so, they maintain control over how their voice is used and receive compensation—rather than having it cloned without permission.

Unauthorized voice cloning already exists, and ElevenLabs presents its platform as a legitimate alternative to that uncontrolled landscape.

The bigger concern: trust

Yet the broader issue goes beyond celebrity licensing. When a voice as recognizable as Michael Caine’s is used in new contexts—such as branded podcasts or corporate content—it carries decades of cultural authority. Listeners may instinctively trust the message simply because of the voice delivering it.

There is also a growing phenomenon of emotional attachment to AI systems. People already form parasocial relationships with chatbots and virtual assistants. If those systems begin speaking in familiar celebrity voices, that sense of trust and connection could become even stronger.

Imagine a chatbot speaking with Michael Caine’s voice. Even if the use is officially licensed, the words would still be generated by AI, not the actor himself.

Matthew McConaughey. (Image credit: Getty Images/Rodin Eckenroth/Stringer)

Fraud and the problem of synthetic voices

Perhaps the most immediate risk is fraud. Voice cloning technology has improved dramatically, and the difference between real and synthetic voices is becoming increasingly difficult to detect.

Security company Group-IB warns that AI-generated voice scams—often called “vishing” (voice phishing)—are rising quickly. In these scams, criminals clone the voice of someone the victim knows and trusts, such as a family member or coworker, and use it to request money or sensitive information.

Group-IB estimates that losses from these types of fraud could reach $40 billion by 2027.

What makes these scams so dangerous is that they bypass the skepticism people normally apply to suspicious emails or messages. Hearing a familiar voice—even one that sounds slightly distorted—can override logical caution.

Because of this, some security experts now recommend creating private code words with close friends or family members. Voice alone can no longer be treated as reliable proof of identity.

A blurred line between real and artificial

Voice synthesis technology is advancing quickly, and companies beyond ElevenLabs are developing similar tools. As AI-generated voices become more common in everyday media—whether for audiobooks, customer service, or digital assistants—the line between authentic and synthetic audio will continue to blur.

For some people, the benefits are clear: recognizable voices for narration, new creative tools for storytelling, and new revenue streams for performers.

But for others, the cost may be the gradual erosion of trust in something that once felt reliable: the human voice itself.

Whether that trade-off is worth it depends on perspective. For actors like Michael Caine, the decision may make sense financially and creatively. For technology companies, recognizable voices bring enormous commercial value.

For everyone else navigating a world where even a phone call from a loved one might not be real, the question becomes harder: how much uncertainty are we willing to accept in exchange for convenience and entertainment?

