A lot of people use ChatGPT to answer legal questions. I know this because my clients tell me. They arrive at initial consultations with printed ChatGPT summaries of their rights, AI-generated timelines of the statute of limitations for their claim, and occasionally full draft complaints the chatbot offered to write for them. Some of this research is genuinely useful. Some of it is confidently, specifically wrong in ways that are hard to explain to someone who has already decided they understand their case.
But a lawsuit that has been working its way through the courts raises a more fundamental question: Is providing that output — at scale, for money, to people who treat it as legal guidance — the unauthorized practice of law?
The answer is not settled. But the question matters more than most people in the AI space want to acknowledge.
What Is the Unauthorized Practice of Law?
In California, the practice of law without a license is a crime. Business and Professions Code §6125 prohibits any person from practicing law in California who is not an active member of the State Bar. Section 6126 makes the violation a misdemeanor, punishable by up to one year in county jail, a fine of up to $1,000, or both, for a first offense.
The definition of "practicing law" in California has been developed through decades of case law; the statute itself does not define the term. The core test, articulated by the California Supreme Court in Birbrower, Montalbano, Condon & Frank v. Superior Court (1998), focuses on whether the activity involves applying legal knowledge to the specific facts of a client's situation to give guidance about that person's rights or obligations.
Three elements matter: (1) applying legal knowledge, (2) to specific facts, (3) to advise about rights or obligations. A legal information resource that explains what a statute says is generally not practicing law. But a system that takes a specific fact pattern and returns an analysis concluding that the user has a particular legal claim, complete with citations and a suggested course of action, is at least arguably practicing law.
The OpenAI Lawsuit
The case that has drawn the most attention involves allegations that OpenAI, through ChatGPT, provided specific legal guidance to users in a way that crossed the line from information to advice. The core argument is that the product is designed and marketed in a way that encourages users to treat its legal outputs as advice rather than information.
The legal theory is not frivolous. The fact that a company uses the word "information" rather than "advice" in its disclaimers does not automatically resolve the question. Courts look at what the product actually does and how users actually use it — not what the terms of service say. If a product is designed to take your specific situation and return specific guidance about your rights and options, and millions of users treat that guidance as the equivalent of a consultation, the disclaimer may not carry the weight the company thinks it does.
Why This Matters to You Specifically
If you have used an AI chatbot to research a California legal question, this lawsuit does not mean you did anything wrong. Users are not the defendants. But there is a practical problem: AI legal output can be confidently wrong in ways that are very hard to detect if you are not already a lawyer.
Here are three patterns I see repeatedly:
The jurisdiction problem. AI systems are trained on legal content from every state and multiple countries. They frequently blend California-specific rules with rules from other jurisdictions without flagging the difference. A client recently arrived with AI research about homestead exemptions that was accurate for Texas. California homestead law is substantially different. She had made a significant financial decision based on the Texas analysis.
The recency problem. California law changes. AI training data has a cutoff, and the cutoff is not always clearly disclosed. A case that was the controlling authority eighteen months ago may have been distinguished or overruled. AI output generally does not come with a date stamp.
The framing problem. When you ask an AI a legal question, you are framing the question. The quality of the output depends almost entirely on the quality of the framing, which requires knowing enough about the law to know what question to ask. Clients who have pre-researched their case with AI frequently arrive with confident, detailed answers to the wrong question.
The Bottom Line
If you have a California legal question that involves your specific situation, you need a California-licensed attorney to analyze it. An AI tool can help you understand the general legal landscape before that consultation. It cannot substitute for the consultation itself.
The reason is not that AI is unintelligent. It is that legal advice is not primarily an intelligence problem. It is a judgment problem — and judgment requires accountability, which requires licensing, which requires a human being who can be held responsible for the guidance they give.
You can verify any California attorney's license at apps.calbar.ca.gov. For California legal matters, Bay Legal, PC is available at BayLegal.com or (650) 668-8000.
Jayson R. Elliott is a California attorney and Managing Director of Bay Legal, PC. Nothing in this article constitutes legal advice. Attorney advertising: prior results do not guarantee similar outcomes.