Generative artificial intelligence is here, and it is here to stay. The prospect is both exciting and anxiety-inducing, and it raises new ethical and legal issues, particularly around intellectual property: copyrights, patents, trademarks, the use of unlicensed content, and ownership of AI-generated works.
The problems stem from the way AI learns, which, in very simple terms, is by finding relationships and patterns in the material it is trained on. This is new territory, and businesses should be aware that they may face a lawsuit if their AI-generated work is not sufficiently different from existing protected works. Cases are already in the courts, and more are sure to come.
At the core of these cases is the quest for a new definition of what qualifies as a “derivative work.” While more litigation is likely, and the outcome will remain unclear for several years, the ultimate result will probably hinge on interpretation of the fair use doctrine. According to the U.S. Copyright Office, the fair use doctrine of the U.S. copyright statute permits the use of limited portions of a work, including quotes, for purposes such as commentary, criticism, news reporting and scholarly reports.
Among the issues the courts are deciding are infringement, rights of use, ownership of AI-generated works, whether works created by non-humans can be protected by copyright law, and when permissions are needed.
The Supreme Court’s recent decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith may be a harbinger of how courts will rule. The Court, in ruling for Goldsmith, who created the original work, essentially defined a new transformative fair use test: creators who use copyrighted source works for a purpose “highly similar” to that of the original, and who make money from their work, bear the burden of proving the “purpose and character” of the alleged fair use.
There are other court cases that are considering additional aspects of this issue.
What does this mean for businesses using generative AI?
Businesses using generative AI need to take steps now to mitigate the risks. Among other measures, they should:
Be sure that they are in compliance with the law in regard to their acquisition of data being used to train their models.
Maintain audit trails for how the content was created.
Monitor digital and social channels for works that may be derived from their own.
Monitor digital and social channels for unauthorized use of derivative trademarks or trade dress.
Review all contract language with vendors and consumers to ensure it contains, at a minimum:
Disclosures that one or both parties are using generative AI.
Proper licensure of the training data that feed their AI.
Inclusion of indemnification provisions for potential intellectual property infringement.
Revision of confidentiality provisions.
Train employees on how to use AI tools without violating privacy obligations.
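As one illustration of the audit-trail recommendation above, here is a minimal sketch of a provenance record for a single piece of AI-generated content. All field names and values are hypothetical, not a legal or industry standard; the point is simply to capture, at creation time, what was generated, by which model, from what prompt, and under what data-licensing terms.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(prompt: str, model_name: str, output_text: str,
                 training_data_license: str) -> dict:
    """Build a provenance record for one AI-generated work.

    The schema here is illustrative only; consult counsel on what
    your organization actually needs to retain.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "prompt": prompt,
        # Hash the output so the record can later show which content
        # it describes without storing the full text itself.
        "output_sha256": hashlib.sha256(output_text.encode()).hexdigest(),
        "training_data_license": training_data_license,
    }

record = audit_record(
    prompt="Draft a product description for a desk lamp.",
    model_name="example-model-v1",
    output_text="A sleek, adjustable lamp for the modern workspace.",
    training_data_license="vendor-licensed",
)
print(json.dumps(record, indent=2))
```

Records like this, kept alongside each generated asset, give a business something concrete to point to if the origin of a work is later questioned.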
Keep in mind, too, that with generative AI the definition of “creativity” is likely to change. Issues are sure to arise that are not yet on the radar. Nevertheless, the definition that ultimately evolves must be based on respect for past, present and future creativity.
This is a developing area across the globe, and companies using generative AI need to stay current with developments. The European Union’s proposed AI Act, for example, which is intended to ensure human-centric and ethical development of AI in Europe, sets out rules for transparency and risk management.
Bottom line? Work with legal and financial professionals well-versed in the new AI landscape.