Nonprofits and AI: Managing Legal and Other Risks (Abbreviated Version)
by Holly E. Peterson, Esq., Counsel
Tenenbaum Law Group PLLC
August 26, 2024
Generative artificial intelligence (AI) is ubiquitous. While AI presents transformative opportunities for nonprofit organizations, the use of generative AI also presents legal and other risks that must (and can) be managed. The primary risk areas include copyright ownership, use, and infringement; privacy and data security; discrimination; and tort liability. These risks arise from a complicated web of evolving federal, state, and international laws that frequently intersect but each carry their own nuances. Beyond legal exposure, AI may also present reputational risks for nonprofits. Appropriate risk mitigation measures can reduce legal exposure and reputational risk, enabling nonprofits to leverage AI in their strategic endeavors. This article outlines the legal, reputational, and other risks related to AI and offers practical tips to mitigate them.
A. Copyright Ownership
According to the U.S. Copyright Office, copyrightable works must reflect more than a de minimis contribution from a human being.[1] Because purely AI-generated content is not subject to copyright protection, nonprofits should require human authors to contribute substantially and materially to any nonprofit work product. Nonprofits should revise existing author and speaker agreements to require a written attestation from employees, contractors, and volunteer authors and speakers representing and warranting that (i) any submitted content has been revised substantially enough that the author/creator holds all rights necessary to assign or license the work to the nonprofit, and (ii) the submitted content does not infringe the intellectual property rights of any third party. In some circumstances, such as with paid contributors, an indemnification provision is also advisable.
B. Third-Party Intellectual Property Rights
Assume that an author submits an original work that relies, in part, on AI-generated content and that the nonprofit distributes the work to its membership. If the embedded AI content infringes someone else’s copyright, the copyright holder may have an actionable infringement claim against the nonprofit that published the work. Ultimately, an infringement claim will hinge on the facts, including the extent of the copying and the strength of any “fair use” defense. This risk can be partially mitigated by requiring human authors to indemnify the nonprofit for works that infringe third-party intellectual property rights.
C. Privacy and Data Security
Personally identifiable information (“PII”) is heavily regulated under international and domestic privacy laws, with unauthorized disclosure triggering significant consequences. Generally, nonprofits should not allow staff, contractors, or volunteers to input PII into an AI application unless comprehensive compliance measures are implemented and enforced, including ensuring that the inputs are not used to train the AI platform’s algorithms. Similar expectations should be set with respect to proprietary and confidential information. If it is not possible or practicable to segregate PII or other confidential information – for example, if a nonprofit uses targeted advertising to tailor member or donor experiences – the organization, in consultation with data privacy counsel, should implement robust compliance safeguards.
D. Discrimination
Because AI models are trained on pre-existing data, they can inadvertently replicate and perpetuate bias and discrimination. While discrimination creates legal exposure only in certain contexts (e.g., employment), algorithmic bias may have other adverse implications for nonprofit entities. Imagine an AI “peer review” service for a nonprofit’s scholarly journal: AI-generated feedback may rely on outdated nomenclature or debunked data, and a tool of this sort is likely to privilege existing content over novel theories, which could result in stale scholarship. Humans should always review AI processes and outputs for bias or discrimination and should document efforts to test for bias, where applicable.
E. Tort Liability and Reputation
AI-generated content is not always accurate, and inaccuracies can harm the reputation of a scholar, a scholarly publication, or a nonprofit publisher. Inaccurate statements may also give rise to defamation claims. Again, sufficient human vetting is the best risk mitigation strategy to catch and correct factual inaccuracies. In addition, nonprofits should consider requiring authors to affix a conspicuous disclaimer to any submissions incorporating AI-generated content signaling that “the content was produced with the assistance of artificial intelligence” and that “the author/creator(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.”
* * * * *
While AI presents transformative opportunities for nonprofits, its use presents legal and other risks that must be managed and mitigated. Nonprofits should carefully weigh the benefits and perils of AI and craft thoughtful policies that circumscribe its use in a legally compliant and ethical manner and that align, in all instances, with the nonprofit’s values, ethics, policies, and unique circumstances. Finally, be sure to revisit such policies on a regular basis, as this is, without question, a rapidly evolving area for every nonprofit organization.
For more information, contact Ms. Peterson at hpeterson@TenenbaumLegal.com.
[1] U.S. Copyright Office, Artificial Intelligence and Copyright, 88 Fed. Reg. 59943 (Aug. 30, 2023).