2023 - Volume 2 - Summer - Flipbook - Page 8
-Artificial Intelligence: Continued from page 3-
employment law, and data privacy. This article will look at each of these areas to help businesses better spot the AI-related risks they face as both employers and market participants.
TRADE SECRETS
Because trade secrets are not formally registered, maintaining confidentiality is essential to their protection. Generative AI may complicate trade secret law and introduce novel risks for businesses.
One of the most popular and widely used types of
AI technology is “generative” AI. Generative AI refers
to algorithms (like ChatGPT) that can be used to create
(or “generate”) new content—including everything
from audio, text, images, and video to code or full-blown simulations. Generative AI has applications across industries and within organizations, and businesses are interested in the myriad benefits that this technology provides. But these benefits are not risk-free.
Applicable Law
Under both the Federal Defend Trade Secrets Act
(the “DTSA”) and its California analogue (found at
Cal. Civil Code section 3426 et seq.), the owner of a
trade secret must take “reasonable measures to keep
such information secret.” 18 U.S.C. § 1839(3)(A). In
fact, information cannot be a trade secret at all unless
it “[i]s the subject of efforts that are reasonable under
the circumstances to maintain its secrecy.” Cal. Civ.
Code § 3426.1(d)(2). Thus, if the owner of a trade secret does not make “reasonable” efforts to maintain its secrecy, the owner risks forfeiting the information’s status as a trade secret altogether, along with any associated legal protections.
Generative AI works by “training” an algorithm to
“learn” how to mimic real content. This occurs over
time by, essentially, punishing the algorithm for making content that seems fake and rewarding the algorithm for creating realistic content. To create sophisticated and realistic programs like ChatGPT, the algorithms need to be trained on massive datasets filled
with copious examples of all types of real information.
A well-trained generative AI program can then use user-generated inputs (like a prompt asking the software to write a story or a line of code) to create finely tuned and realistic outputs. To create larger and more
detailed datasets that allow for even better training of
these algorithms, many generative AI programs rely on
data (like the words, code, or other information) that
users enter into their programs.
Neither the DTSA nor the California statute defines what “reasonable measures” are. Instead, whether efforts to maintain secrecy are reasonable under the circumstances is a fact-specific determination, and a reviewing court will look to a range of contextual factors. See Mattel, Inc. v. MGA Ent., Inc., 782 F. Supp. 2d 911, 959 (C.D. Cal. 2011) (“The determination of whether information is the subject of efforts that are reasonable under the circumstances to maintain its secrecy is fact specific.”).
An obvious and serious risk with this technology is
what these programs can do with the information that
users provide them. Once information is inputted, it is
captured and stored. Often it cannot be deleted and
could end up being used or reviewed by the developer
of the AI application. The information might even be
used as a future output to a different user. There is thus a possibility that an employee could input a company’s trade secret into an AI application and put its trade secret protection at risk.
Though no court has directly addressed how generative AI and trade secrets interact, some analogies can be drawn from similar situations. For example, in DVD Copy Control Assn., Inc. v. Bunner, the California Court of Appeal overturned an injunction because the information at issue had been publicly shared on the internet and thus could no longer support a finding of trade secret misappropriation.
116 Cal. App. 4th 241, 244-45 (2004). In reaching this
conclusion, the court explained that “[t]he secrecy requirement is generally treated as a relative concept
and requires a fact-intensive analysis.” Id. at 251.
With this in mind, “[w]idespread, anonymous publication of the information over the Internet may destroy its status as a trade secret.”
Imagine an employee wants to draft an internal memo describing a breakthrough made in her company’s process for creating a specific widget. ChatGPT could certainly help. To do so, however, the employee would need to input some information about this breakthrough with enough detail for the program to help write a comprehensible memo. ChatGPT would then store and be able to access any information that employee provided. ChatGPT also could use that same information to train its algorithm, meaning it could end up as an output to a different person’s prompt—maybe even a competitor.
-Continued on page 9-