Tailoring AI Contract Clauses to a Nonprofit’s Unique Needs

Long ago, artificial intelligence (“AI”) worked its way into routine business operations in the for-profit and nonprofit worlds alike through subtle and ubiquitous technological advancements, such as Microsoft Word’s spell-check feature, the auto-correct function in text messages, and the auto-complete feature in various applications. It is no longer realistic to imagine a business where AI is not used in some form. However, a nonprofit organization can and should regulate the degree to which AI may be incorporated into standard business operations, based on considerations such as organizational culture and needs as well as industry standards and risk tolerance. Whereas some nonprofits may use AI conservatively to capture meeting notes, brainstorm ideas, generate word clouds, or autofill suggested responses, others may use AI more expansively, embedding chatbots to respond to member inquiries, assigning AI to peer review publications, publishing AI-generated content on social media, or using AI to review job applications. The possibilities are infinite, both with respect to how AI may be leveraged and with respect to how a nonprofit organization may wish to restrict its use.

As AI use cases expand, the nonprofit community increasingly contracts with third parties to provide products and services that embed AI. While pre-packaged templates may be tempting and cost-efficient, model AI contract clauses should be viewed with skepticism, since use cases depend on each organization’s unique culture, industry standards, and organizational needs. Quite simply, a one-size-fits-all approach is not advisable. The best AI clauses are tailored to specific use cases and implement appropriate guardrails, informed by organizational culture, to protect the nonprofit’s legal, reputational, and other interests.

This article (1) outlines various legal, reputational, and other risks nonprofit leaders should consider when contemplating AI use, (2) discusses common AI use cases and their implications, (3) highlights common contractual provisions that implicate AI, and (4) offers practical advice to enable nonprofit leaders to issue-spot risks and formulate appropriately tailored contract provisions that meet the nonprofit’s goals while mitigating risk. Finally, the article briefly discusses policies for regulating AI usage.

Legal and Other Risks

While the promise of AI presents transformative opportunities, the use of generative AI presents legal and other risks that must (and can) be managed. At this time, the primary legal risk areas include copyright ownership and infringement; privacy and data security; discrimination; and tort liability. For example, purely AI-generated content is not eligible for copyright protection under current interpretations of U.S. law; the U.S. Copyright Office “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” Thus, nonprofit organizations that use AI to generate content for publication (without sufficient human input) must take care to ensure that they make no contractual representations or warranties about copyright ownership or rights with respect to that content. Also, AI does not distinguish between public information and personally identifiable information that might be protected under U.S. or international privacy and data security laws, making it incumbent upon the nonprofit organization to implement appropriate safeguards to comply with federal, state, and international privacy and data security laws. As one final example of many, AI does not screen its output for inaccurate, biased, or defamatory content, potentially exposing a nonprofit to legal liability or reputational harm. These risks, born of a complicated web of evolving federal, state, and international laws, can be managed through contractual provisions and operational standards so that a nonprofit organization can pursue its desired goals while minimizing associated risks.

Common AI Use Cases and Implications

Although it is impossible to address every possible risk area here, the following representative examples illustrate scenarios in which nonprofit executives may encounter AI-related contract language.

One of the most common situations in which nonprofits may encounter or consider AI-related contract language is in an agreement for technology services. Many Software as a Service (“SaaS”) products, including many popular business applications, database providers, content management systems, and others, now include some kind of AI capability. Say, for example, a nonprofit is entering into an agreement to license software that manages an e-commerce function of the nonprofit’s website (e.g., to sell publications). That e-commerce provider may include AI tools in its product that write product descriptions, provide analytics for sellers based on various data, and recommend purchases to website visitors. The look and feel of the language may differ depending on the specific product or service being provided and on the AI tool’s data source. Nonprofits should be sure that any protections for the nonprofit’s confidential information apply specifically to any AI software tools as well; this will require the vendor’s staff and its AI tools to adequately protect the nonprofit’s confidential information. While the AI tool may use the nonprofit’s confidential information to produce recommendations for the organization, that confidential information should not be used to train the AI model to produce recommendations for, or to inform responses to, other users. If possible, a nonprofit should clarify whether it can opt out of these AI tools if it does not plan to use them. This is important because, in some circumstances, bad actors can “jailbreak” a generative AI tool through creative prompt drafting and extract another party’s confidential data that the tool has ingested.

Another place nonprofits may encounter AI-related contract language is in agreements for professional services, particularly in creative fields. For example, a nonprofit may enter into an agreement with a public relations firm to do graphic design work, draft press releases and articles, or create social media content. Or an AI tool may provide a “peer review” function for an industry, professional, or academic journal. In these cases, it is critical to address the ownership of any intellectual property created by an AI tool. The services contract should ensure that the nonprofit owns all of the intellectual property rights to, or at least has a broad license to use, any content created by a PR firm using an AI tool like ChatGPT, or a report produced by a peer review tool.

Finally, nonprofits might encounter AI-related language in internal policies. Many employers (both for-profit and nonprofit) are introducing AI policies into their employee handbooks, governing how employees may use generative AI tools in the performance of their work responsibilities. Trade and professional associations are creating industry guidelines for the use of generative AI products in the course of that industry’s business, and these guidelines may be included in the association’s code of ethics. These types of documents are likely to include provisions requiring the disclosure of work product produced using generative AI, or requiring that an employee or professional review any AI-produced materials prior to distribution to a client or to the public. Employment policies should be coordinated with a nonprofit’s human resources team and explained to employees so that they understand how AI tools may and may not be used in their roles. Similarly, trade and professional associations might convene working groups to study the use of AI tools in their industry and create guidance based on applicable standards of industry or professional practice and relevant ethical considerations in the industry or profession.

Evaluating an AI Contract

Presumably, at this point in time, your nonprofit organization has already contemplated AI use in connection with at least some organizational activities. After all, tech companies aggressively “sell” this emerging technology in very appealing ways. On that note, beware. Not all sales involve visible costs. While it may be tempting to use a free AI service as a cost-effective means of carrying out your nonprofit’s goals, these “free” services come at great cost insofar as they require acquiescence to expansive terms and conditions, usually packaged in clickwrap agreements, that allow vendors to use data well beyond what we would advise. That said, the first and most important bit of practice advice is to secure a contract, read it, and evaluate it to negotiate favorable terms and protect your nonprofit from undesired outcomes. To do that, ask yourself the following questions:

  1. What kind of contract is it? Who are the parties? What kind of data-sharing arrangement is contemplated? What may be unique to this contract versus another contract (e.g., a software contract with a multi-national corporation versus a volunteer author contract for a recurring blog post on your website)?
  2. What kind of data will the nonprofit organization supply, and how can it be used? Is supplied data sufficiently scoped to carry out the goals of the contract without providing more data than is necessary? What does the contract say about how the vendor can use the data? Is data use tailored specifically to carry out a defined purpose of the contract? Or can the vendor use it more broadly, for example, to train future AI systems unrelated to the contract?
  3. What legal risks are at stake? Are you capturing personally identifiable information (“PII”) such that privacy laws may be implicated? Are you sharing proprietary information that would provide a competitive advantage to other organizations if the information is revealed? Are you generating output that should be vetted for accuracy, such as a pseudo-medical pamphlet on how to care for a loved one with a degenerative disease? Are you generating output that may bear on the reputation of third parties such that defamation and tort law may be implicated? Do you intend to publish a work that embeds AI-generated content, and, if so, are protections embedded to ensure that the content bears more than a de minimis imprint of a human author?
  4. What reputational risks are at stake? Who is your audience? Are you presenting at a tech conference or drafting Congressional committee testimony? How, if at all, will your organization’s partial or total use of AI bear on your organization’s credibility in light of your audience’s expectations? Drawing from the above example of a pseudo-medical pamphlet, what does it say about the profession, if anything, that the nonprofit voice of the profession is using AI, rather than in-field experts, to generate technical information? Knowing that AI embeds a bias against new ideas, how will AI use impact the advancement of the profession that your organization represents if, for example, you use AI to peer review academic journal articles?
  5. What political and cultural risks are at stake? How will your members, donors, or stakeholders react knowing that the nonprofit relies on AI in the circumstance contemplated? Will they be receptive? Will they react negatively? Does it depend on the use case?

Tailoring Contract Terms to Your Nonprofit’s Needs

Once you have a good sense of the answers to these questions, you can more easily identify where, if at all, you must embed protections, which may include limiting the kind of content that can be inputted into AI, restricting use of outputted content, or shifting risks to third parties.

Limit Inputted Content

From time to time, it may be necessary to collect PII and input it into AI systems. For example, your nonprofit may endeavor to tailor members’ experiences based on demographics and personal preferences, in which case, you almost assuredly will collect and input PII into an AI system. In such cases, and working with privacy and data security counsel, you must structure appropriate guardrails (generally, notice of how data will be used, informed consent, and an opportunity to change preferences). However, in many instances, PII is not needed for the contemplated use case. Unless PII is critical to the desired engagement, the express terms of the contract should require the provider to comply with federal, state, and international privacy and data security laws and expressly prohibit the input or use of PII in AI tools for any reason:

Provider agrees that, at all times, it shall comply with any and all applicable federal, state, and international privacy and data security laws, implementing regulations, and common law privacy protections. Neither Provider nor Client shall input any Client or third-party personally identifiable information (PII) into AI tools, and, in no instance shall PII be used to inform or train AI systems.

Restrict Data Use

Recall the prior caution about clickwrap agreements that require acquiescence to unfavorable terms. As a paying client, you can and should control how the vendor uses data that is inputted into an AI system. Here, the drafting is straightforward: define, in detailed terms, how the data may be used and, in some instances, identify prohibited uses as well. Here is an example of permitted and restricted uses in a service contract:

Licensor grants to Licensee a non-exclusive, royalty-free, non-transferable, revocable license to use the Licensed Materials exclusively and solely in connection with Licensee’s demonstration of its [AI Tool] during the [Conference], which is scheduled to take place from 01-09-2025 at [location]. Upon termination of this Agreement, Licensee shall immediately discontinue using the Licensed Materials and Licensor Name and Marks and shall, within five (5) business days, return, delete, or destroy all Licensed Materials in its possession. To be clear, at the conclusion of the Term, all Licensed Materials, including Licensed Materials that were inputted into the [AI Tool] for the purposes of the demonstration, must be deleted or erased and may no longer be used in connection with the AI prototype or for any other reason except with the express prior written consent of Licensor. During the Term and thereafter, Licensed Materials and Licensor Name and Marks shall not be used to train or retrain a generative artificial intelligence algorithm.

In contracts with independent contractors or volunteers who may be creating a work product for your nonprofit organization, you may also wish to restrict how or if the creator uses AI to generate the final work product, both for reputational and legal reasons:

Publisher permits Authors and Contributors to utilize artificial intelligence on a limited basis to generate ideas, edit and optimize images, gather data, identify trends, and streamline research processes, provided that in all instances, the Author/Contributor reviews AI-generated content for factual accuracy, completeness, bias, defamation, and privacy and data security concerns. Further, Author/Contributor understands that the use of AI for unauthorized purposes (e.g., to “ghost write” the Work or a portion of the Work) may diminish or extinguish Author/Contributor’s claim to copyright in the underlying Work and shall not use AI for such unauthorized purposes.

Shift Risk to Third Parties

Representations and warranties, as well as indemnification clauses, function to shift risks to third parties. Drawing from the example above that explains that ghost writing may diminish or extinguish copyright protection, it would be appropriate to couple that clause with a representation and warranty, as well as a related indemnification clause:

Author/Contributor represents and warrants that (1) Author/Contributor is the sole creator of the intellectual property embedded in the Work (except for such excerpts from copyrighted works as may be included with permission from the copyright holder(s)), (2) no third party has any definitive claim to the underlying copyright of the Work or any portion of the Work, and (3) Author/Contributor has reviewed any underlying content created through the use of generative artificial intelligence and represents and warrants that the Work, to the best of Author/Contributor’s knowledge, contains no factual or substantive inaccuracies, does not defame any third party or otherwise infringe in any way on third-party rights, and does not embed bias in a way that diminishes the scholarship or work product. Author/Contributor shall indemnify and hold harmless Publisher from and against any costs, expenses, or damages (including reasonable attorneys’ fees and costs) for which Publisher may become liable as a result of any breach or alleged breach by Author/Contributor of the foregoing representations and warranties.

As you can see, this clause also shifts responsibility from the publisher (which, in this instance, was a trade or professional association) to the author/contributor for third-party copyright infringement, defamatory content, and other legal risks.

Where PII is involved, a nonprofit organization should endeavor to shift risk to the provider for unauthorized data breaches:

Vendor agrees to be solely responsible for implementing reasonable technical controls, aligned with industry standards, to protect Client’s Confidential Information and PII. A “Data Breach” means any breach of security or failure of technical controls leading to the accidental, unauthorized, or unlawful destruction, loss, alteration, or disclosure of, or access to, Client’s Confidential Information or PII. Vendor agrees to indemnify and hold harmless Client and its directors, officers, employees, and agents from any and all liabilities, losses, costs, damages, claims, liens, judgments, penalties, fines, attorneys’ fees and costs, court costs and other legal expenses, insurance policy deductibles, and all other expenses arising out of or related to . . . (d) a Data Breach attributable to Vendor’s intentional acts, negligence, or failure to maintain technical controls consistent with current industry standards.

* * * * *

These sample clauses are merely illustrative. Do not lose sight of the purpose of this article: to tailor the use of AI and associated contract clauses to your nonprofit’s unique needs, culture, identity, and risk tolerance. With that in mind, and because AI and associated laws continue to evolve at such a rapid pace, it is always advisable to engage qualified legal counsel to review or draft resulting contracts for legal sufficiency. However, a little sweat equity on the front end to think through terms, risks, and reputation will likely translate into risk mitigation, a shared understanding of the parties’ expectations, and a foundation for a successful business relationship.

For more information, contact the authors at hpeterson@TenenbaumLegal.com or kserafino@TenenbaumLegal.com.