Protecting Intellectual Property in an AI World
The WasteWatcher
Everyone has a favorite music artist and gets excited when there is a new recording to listen to and buy to hear anytime they want. But what if they find out that it was not a person who made the recording, but instead a computer that was able to create something that sounded exactly like the artist? Citizens Against Government Waste (CAGW) has long advocated for the protection of intellectual property (IP) rights, and it is clear that current law is insufficient to address new technologies that can recreate an individual’s likeness or voice. Legislation is needed to update the IP protections provided in federal law.
Consumers of music, or of any recording, expect that what they are listening to is truly the work of the performer, that product promotion by a movie star or famous model is really that person’s endorsement, and that personal advice offered through a chat is from a real person, not a bot. However, artificial intelligence (AI) is increasingly creating confusion in the recording and streaming marketplace, as voices are duplicated and songs are created that were never sung by the performers. A person’s voice and image are unique to them, but as AI programs develop, fake performances and other recordings are increasing in number.
These issues have arisen not only in music but also in politics and other industries. AI voice cloning technology was used in New Hampshire to create fake robocalls from “President Biden,” and in Maryland to create a false racist statement attributed to a school principal.
An online search on “AI replication of music” features many YouTube videos offering AI programs that can replicate the musical styles of performers both living and deceased. These unauthorized recordings not only hurt consumers by misleading them into purchasing what they believe to be genuine recordings, but also hurt the performers, who receive no compensation for the unauthorized use of their voice or image.
Generative AI that allows the recreation of someone’s voice for new uses can be beneficial. For example, Rep. Jennifer Wexton (D-Va.), who lost her ability to speak due to progressive supranuclear palsy (PSP), is able to use the technology to help her speak in her own voice. The health applications of this new technology will help improve the lives of those who have lost their speaking abilities for various medical reasons.
But the unauthorized use of an individual’s voice, thoughts, or image can have far-reaching ramifications, including economic and societal harm. While some individuals may accept another’s use of their likeness after it has been duplicated, the right to maintain control over one’s thoughts, words, and appearance should be determined by the individual, not by a random AI developer who could be located anywhere in the world. This is one of the reasons that the bipartisan Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act was introduced in the Senate on July 31, 2024. The legislation would give individuals digital control over how their voices, likenesses, and words are used.
According to one of the lead sponsors of the legislation, Sen. Chris Coons (D-Del.), the NO FAKES Act would help prevent the “use of non-consensual digital replications in these kinds of audiovisual works, images, or sound recordings” by holding companies liable for unauthorized use; holding platforms liable for unauthorized hosting; excluding some digital replications from the bill based on First Amendment rights; and preempting state laws intended to address the same issues.
The use of AI is not something anyone should fear, since it has phenomenal applications in health, education, and manufacturing. However, an individual’s creative voice, image, and words must be protected from unauthorized use. The NO FAKES Act is a good start toward providing these protections.