Google has unveiled Lyria 3 Pro, its most advanced AI model for music generation. Its ability to create complete songs with defined structure and fine-grained user control raises immediate legal questions. From an Intellectual Property standpoint, it is worth analyzing whether the announced safeguards, such as training on licensed material and digital watermarking, are enough to establish a safe framework and avoid the lawsuits that have marked the development of other generative AI systems.
Analysis of the announced technical and legal safeguards 🛡️
Google emphasizes three pillars to address copyright: training on licensed material, no imitation of specific voices or styles, and a SynthID digital watermark to identify AI-generated output. Although the use of licensed data is a step forward, it does not shield the model from disputes over whether its use in training is genuinely transformative. The watermark is a promising tool for traceability and transparency, but its effectiveness will depend on it becoming an industry standard and proving resistant to tampering. The ban on imitating specific artists aims to sidestep right-of-publicity and personal-brand claims.
Precedent for creators and pending challenges ⚖️
Lyria 3 Pro seems designed to set a precedent for corporate responsibility in generative AI. For content creators and musicians, it offers a seemingly safer framework for prototyping. Yet key questions persist: who owns the rights to the generated music? Does the watermark protect the human creator who uses the tool? Google's model is a technical and ethical step forward, but definitive legal clarity still depends on how regulation and case law evolve in this field.
Does Google's Lyria 3 Pro redefine the boundaries of authorship and copyright infringement in AI-generated music? 🎵
(P.S.: Thaler wanted his machine to be recognized as the author; I just want my 3D printer not to jam at 3 a.m.)