As the chair of Fredrikson's Artificial Intelligence (AI) Practice, the obvious marketing answer is, “yes, of course you do.” Setting aside the gimmicks, however, I have been asking myself this question in earnest lately, and have decided the answer is, in fact, yes, you do, for the various reasons outlined below.
Looking Back
In 2018, we represented the buyer of a software company that had developed several proprietary models. My job, then, was to tackle the question of whether the target company had clear and clean ownership of those models, including whether the data used to train them had been obtained and used appropriately.
I have distinct memories of poring over pages and pages of documentation to understand how the models functioned, how the training datasets were built, who the data came from, the licenses and permissions at play, and how the models were deployed. I felt a bit of imposter syndrome describing the model architecture and design in my diligence memorandum, often thinking to myself, “who knew this is what I would be doing with a religious studies degree.” But it was deep, detailed projects like these, whether in the context of mergers and acquisitions (M&A), advising clients’ procurement teams, or helping a business launch its product, that gave a non-scientist lawyer the substantive foundation for advising on AI.
Because AI is not new. Lawyers like me — “technology transactions” attorneys — have been advising on AI issues for years, helping clients understand the implications of using contracted-for content for model building or ensuring proprietary assets are protected. And many of those lawyers likely had a point in their career when they realized they needed to supplement their technology transactions skills with data privacy law compliance experience and expertise. That is how it was for me — I was consistently in the weeds on data use and ownership issues, with emerging privacy laws adding a new “ownership” layer to the problem. Thus, I pivoted my practice to join Fredrikson’s Data Privacy & Security team full time and focus on understanding the law in this area even better.
Then, when ChatGPT exploded into public conversation in 2022, my own experience put me in a great position to help my firm, clients and other legal practitioners wrap their heads around the benefits and risks of this new technology. Because I have always described myself as a “product lifecycle” attorney, it was easy to adapt my intellectual property, contracting and consumer protection advice to the questions that were coming in: “What should our AI policy say?” “What should we tell our engineers?” “Can we use this product?” “What should we include in our customer contracts?”
A New Practice Area Emerging
In the “early days” of ChatGPT, I distinctly remember being asked during a continuing legal education (CLE) seminar: “How will all of this be worked out, legally?” My response: “We’ll just have to see what courts and lawmakers have to say about it.” And guess what — they have something to say!
The European Union AI Act has always been a bit of a hot topic for nerds like me who pay attention to these things. The European Commission first published its intent to regulate AI in April 2021, but movement on the issue slowed down when it became clear there was much about the law to debate. Once ChatGPT caught worldwide attention, however, the possibility of a comprehensive AI law seemed inevitable. Sure enough, in December 2023, the European Parliament and Council reached an agreement on the new AI Act, and it was formally adopted in May 2024.
Around the same time, here in the U.S., Colorado’s General Assembly was passing the Colorado AI Act, a landmark, comprehensive law regulating “high-risk” AI systems. It followed on the heels of President Biden’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (since repealed), which directed funds into various industries and corners of the country, including my own backyard (see the NSF AgTech Engine in North Dakota).
What is particularly notable, however, is the flood of activity we see happening in courts and legislative sessions across the country. The George Washington University’s Ethical Tech Initiative and its Center for Law and Technology have compiled a database of 359 cases involving artificial intelligence. Additionally, several states, including Texas, Illinois, Arkansas, Utah, California, and Maine, have amended their privacy, consumer protection, property, or anti-discrimination laws, or adopted regulations under them, to specifically address the use of AI by businesses, employers, or government agencies. Utah even created the Office of Artificial Intelligence Policy, which is actively administering a regulatory relief program for tech companies.
What does all of this mean? We are witnessing a new area of law emerge before our eyes. The wait for AI regulation is over; it is here and it is happening.
What Makes an ‘AI’ Lawyer
When the European Union’s General Data Protection Regulation (GDPR) came into effect in May 2018, it made big news in the legal world. As a general, comprehensive privacy law with extraterritorial jurisdiction, the GDPR caused U.S. companies that had not otherwise had to deal with personal information regulations to establish programs for complying with its obligations. It became a topic of diligence for acquirers, investors and customers alike, and it prompted various states (most notably, California) to adopt their own regulatory frameworks for personal information handling and consumer rights. Several years later, almost no one questions the need for a “privacy lawyer” in their slate of advisors.
I think the same will be true for AI. “AI” lawyers will (1) have the technical acumen to understand and advise on the building, procuring and use of artificial intelligence systems of all types (whether leveraging neural networks or other machine learning techniques), (2) have the legal knowledge to recommend strategies for mitigating regulatory or litigation risks, protecting key assets and effectuating consumer rights, and (3) be able to forecast where the law is going on this topic (“reading the tea leaves,” as I like to call it), in a way that is meaningful, practical and informed.
Take this example. My client, a financial services software company, received an amendment to its services agreement from one particular customer that said (summarily paraphrased), “You, vendor, will not use any artificial intelligence tools to provide the services, except those described in a specific list that we consent to in advance, in writing.” I told the customer’s lawyer the language was too broad: my client uses industry-standard tools to secure the customer’s information environment, and almost all of those tools leverage machine learning models. Did the customer really want my client to decline to use the best means available to keep the customer’s information secure? The lawyer on the other side understood the issue, but this was the “new” language, “required” for all vendor agreements.
The problem with any knee-jerk approach to AI — whether a strict prohibition on usage or, on the other end of the spectrum, wholly unfettered deployment — is that it tends to be uninformed and to have expensive consequences (in the case above, each side was expending legal resources on a blanket requirement that served no one well).
You can imagine how complicated this can get, then, when the question is not just whether AI can be used by a vendor, but whether contracted-for content can be used to create embeddings in a vector database, or whether the processing of information by an AI system constitutes a “sale” of personal information under applicable laws.
Thus, we need deep expertise and experience in this area: full-time practitioners who have the energy, skills and availability to learn the ins and outs of what is active, what is market practice, and what is coming in this space. We also need it now; clients are asking for it, and we, as a profession, need to be able to respond.
What’s Next?
How do we cultivate and develop AI lawyers? That is the other question I ask myself very earnestly these days. For me, it organically grew from my own experience. But I do not expect, or want, every experience to be like mine.
I hope to identify pathways to building a subject-matter-specific, industry-agnostic practice for those who are interested in it, and I expect there are insights to gain from how the privacy law practice has evolved and developed. I also anticipate clients driving some of this growth by setting a benchmark for what they expect from us.
However it happens, it is going to be exciting and invigorating for our profession and work. That is what I am looking forward to. I think you should be looking forward to it, too.


