Spooked by AI, Bollywood stars drag Google into fight for 'personality rights'
[Photo] Aishwarya Rai poses on the red carpet during arrivals for the screening of the film "La venue de l'avenir" (Colors of Time), Out of Competition at the 78th Cannes Film Festival in Cannes, France, May 22, 2025. REUTERS/Sarah Meyssonnier/File Photo
News Summary
Bollywood stars Abhishek Bachchan and Aishwarya Rai Bachchan are suing Google's YouTube in India, seeking the removal of AI-generated videos that infringe their intellectual property rights and a ban on further such uploads. More significantly, they also want Google to implement safeguards so that such videos, once uploaded to YouTube, are not used to train other AI platforms. India currently lacks explicit protection for "personality rights," unlike some U.S. states, making this the most high-profile case to date on how personality rights intersect with the risk of misleading or deepfake YouTube videos being used to train other AI models. The actors argue that YouTube's third-party AI training policy is troubling because it allows users to consent to sharing their uploaded videos to train rival AI models, risking further proliferation of misleading content online. Indian courts have previously sided with other celebrities in similar cases involving generative AI content that damaged their reputations. The Bachchans are seeking $450,000 in damages and a permanent injunction, citing "egregious," "sexually explicit," or "fictitious" AI content on YouTube. YouTube's policy states that creators can opt in to share their videos for training third-party AI platforms, adding that it "can't control what a third-party company does" if users share videos for such training. The case highlights the challenges AI-generated content poses to celebrity image and intellectual property, especially in India, which is YouTube's largest market globally.
Background
India currently lacks explicit "personality rights" laws, unlike some U.S. states, compelling celebrities to assert these rights through the courts to counter unauthorized use of their likeness, voice, and persona. The rise of generative AI has intensified concerns over deepfake videos and the unauthorized use of celebrity images, voices, and personas, since the technology makes it easy to create highly realistic but misleading content, posing new challenges to celebrity reputations and intellectual property rights. India is YouTube's largest market, with around 600 million users, making it a critical platform for content creators and a significant potential source of data for AI model training. In 2023, a Delhi court ruled to protect actor Anil Kapoor's image and voice from misuse, signalling Indian courts' willingness to safeguard celebrity rights in such cases. The Bachchans' lawsuit is particularly notable for directly targeting Google's YouTube and its data-sharing and AI-training policies, seeking to address AI infringement at the platform level.
In-Depth AI Insights
1. What are the broader regulatory and legal implications of the Bollywood stars' lawsuit for global tech platforms' data policies and AI training models?
- This lawsuit, particularly the demand that YouTube implement safeguards against AI model training, could set a global precedent for similar regulation. If successful, it could accelerate calls for comparable rules in other jurisdictions, raising compliance costs for global tech platforms such as Google.
- Platforms may be forced to overhaul their data-sharing policies and invest heavily in more sophisticated AI content detection and filtering systems. Legal fragmentation across countries would further complicate operations, exposing tech giants to greater legal and financial risk.
- For investors, this signals an acceleration of legislative trends in AI governance and data privacy, potentially affecting the long-term profitability and valuations of tech companies, especially those that rely heavily on user-generated content for AI training.

2. How might such "personality rights" lawsuits affect the business models of AI content generators and the creator economy on platforms?
- If platforms are compelled to restrict the use of user-generated content for AI training absent explicit, transparent consent or robust IP protection, growth could stall for AI content startups that depend on vast datasets for model development. These companies may need to rethink their data acquisition strategies and face higher licensing costs.
- For creators on platforms like YouTube, stronger protection in the short term could come at the cost of new monetization opportunities in the long run if AI systems cannot effectively build on their content. Platforms may also tighten their AI-related policies, narrowing creators' choices.
- Investors should watch for the emergence of AI data provenance and licensing markets. Companies that can deliver transparent, compliant, and cost-effective AI data solutions may gain a competitive advantage, while AI startups unable to adapt to new regulatory environments could face significant challenges.

3. Under the Trump presidency, how might the U.S. stance on digital rights and AI governance evolve, and how would this affect the global landscape?
- The Trump administration generally favors deregulation of the tech sector, particularly where economic growth and U.S. corporate competitiveness are concerned. However, given the potential for information manipulation and national security risks posed by AI deepfakes, and strong lobbying by U.S. creative industries such as Hollywood for IP protection, the administration might adopt a more protective stance on digital rights and AI governance.
- The U.S. would likely prioritize policies that protect the interests of its domestic tech and creative industries, which could include encouraging platforms to strengthen content moderation and the transparency of AI model training, while seeking to avoid excessively stifling innovation. Precedents from countries such as India would offer useful reference points for the U.S., especially in debates over personality rights as an extension of IP.
- This stance could lead to a divergence in global AI governance standards: some nations (such as India) might adopt stricter localized protections, while the U.S. could seek a balance between IP protection and AI innovation, potentially creating more cross-jurisdictional compliance challenges for globally operating tech companies.