  • Trends
  • 4-minute read

From Software 1.0 to 3.0: the Refactoring of Legal Advice

As we approach 2026, the legal profession stands at a pivotal inflection point. Generative AI adoption in law firms surged in 2025 — surveys show over 60% of large firms experimenting with tools like Harvey or CoCounsel — yet many lawyers still grapple with how deeply this technology will reshape their work. Andrej Karpathy, the AI luminary from OpenAI, Tesla, and now Eureka Labs, provided a clarifying lens in his June 2025 keynote at Y Combinator’s AI Startup School. He outlined software’s evolution through three eras: Software 1.0 (explicit hand-coded instructions), Software 2.0 (data-trained neural networks), and Software 3.0 (natural-language prompting of large language models, or LLMs). This progression is not abstract tech theory — it is actively refactoring how legal advice is researched, drafted, strategized, and delivered.

Software 1.0: The Artisanal Era of Legal Craft

Traditional legal practice mirrors Software 1.0 perfectly. Lawyers act as meticulous coders, writing precise arguments, contracts, and opinions by hand. Every citation, clause, and contingency is explicitly defined using statutes, precedents, and professional judgment. Tools like Westlaw or LexisNexis support this, but the core work remains human-specified and deterministic — traceable for accountability in court or to clients.

This method excels where precision and liability matter most (e.g., complex litigation briefs or regulatory filings). Yet it is inherently limited: slow, costly (billable hours add up fast), and brittle when facing novel or ambiguous scenarios. Much like classic code that crashes on edge cases, 1.0 legal work scales poorly to high-volume or interdisciplinary problems, such as cross-border compliance or rapid contract reviews.

Software 2.0: Optimization Through Data and Verification

Karpathy’s 2017 “Software 2.0” concept shifted programming from explicit rules to optimization: define an architecture and objective, feed data, and let gradient descent discover effective “weights.” The resulting program is opaque but powerful for verifiable tasks.
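The optimization loop Karpathy describes can be illustrated with a deliberately tiny sketch: instead of hand-coding the rule y = 3x, we define an objective (squared error) and let gradient descent discover the weight from example data. This is a toy illustration of the paradigm, not any specific legal-tech system.

```python
# Software 2.0 in miniature: the "program" is a single learned weight.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (input, desired output) pairs

w = 0.0     # the program starts knowing nothing
lr = 0.05   # learning rate

for _ in range(200):
    # gradient of mean squared error (w*x - y)^2 with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 2))  # converges toward 3.0 — the rule was never written by hand
```

The resulting "code" is just the number 3.0, discovered rather than specified — which is exactly why such models are powerful on verifiable tasks and opaque everywhere else.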

Legal tech embraced this early. E‑discovery platforms (Relativity, Everlaw) use machine learning to sift terabytes of documents for relevance. Contract AI like Kira, Luminance, or LawGeex flags risky clauses with high accuracy on trained patterns. Predictive tools analyze historical outcomes to score settlement likelihood or judge tendencies. These automate what can be clearly verified — accuracy metrics, recall/precision benchmarks — freeing lawyers from rote review while reducing costs by 50–70% in some workflows.
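The "clearly verifiable" part is key: precision and recall are the yardsticks that let a firm audit a document-review model rather than trust it blindly. A minimal sketch (document IDs and labels are illustrative):

```python
def precision_recall(predicted, relevant):
    """predicted / relevant are sets of document IDs.

    precision: of the docs the model flagged, how many were truly relevant?
    recall:    of the truly relevant docs, how many did the model catch?
    """
    true_pos = len(predicted & relevant)
    precision = true_pos / len(predicted) if predicted else 0.0
    recall = true_pos / len(relevant) if relevant else 0.0
    return precision, recall

# Model flagged 4 docs; 3 of the 5 truly relevant ones were caught.
p, r = precision_recall({"d1", "d2", "d3", "d9"}, {"d1", "d2", "d3", "d4", "d5"})
print(p, r)  # 0.75 0.6
```

In e-discovery, low recall (missed responsive documents) is usually the costlier error, which is why review protocols are negotiated around these numbers.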

The catch: black-box models risk bias (e.g., historical data perpetuating inequities), and they falter on nuanced, non-verifiable reasoning like ethical dilemmas or creative advocacy. Oversight remains mandatory, but 2.0 laid the groundwork for efficiency gains now accelerating with LLMs.

Software 3.0: Natural Language as the New Legal “Code”

Software 3.0 arrives with LLMs as a programmable “new computer.” You “code” via English prompts — high-level intent like “Analyze this merger agreement under Delaware law, highlight antitrust risks, and suggest redlines for a buyer-friendly position.” The model synthesizes drafts, research summaries, or scenario simulations, often chaining tools (e.g., pulling statutes via APIs).

Karpathy calls this “vibe coding”: describe the desired outcome conversationally, iterate, review, and guide. LLMs emerge as stochastic “people spirits” — trained on human text, they simulate reasoning with superhuman breadth and speed but exhibit “jagged intelligence”: excelling at verifiable tasks (case summarization, basic drafting) while hallucinating on edge cases or fabricating citations.
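If English is the new "code," then a Software 3.0 program is essentially a structured prompt. A hedged sketch of what that looks like in practice — `build_review_prompt` is a hypothetical helper for illustration, not a real library API:

```python
def build_review_prompt(document: str, jurisdiction: str, posture: str) -> str:
    """Assemble a 'program' in English: role, task, guardrails, then the data."""
    return (
        f"You are assisting a licensed attorney. Analyze the agreement below "
        f"under {jurisdiction} law, highlight antitrust risks, and suggest "
        f"redlines for a {posture} position. Cite only provisions that appear "
        f"in the text; say 'unknown' rather than guessing.\n\n"
        f"--- AGREEMENT ---\n{document}"
    )

prompt = build_review_prompt("Section 1: ...", "Delaware", "buyer-friendly")
# This string would then be sent to the LLM of your choice;
# the output still requires attorney review before it goes anywhere.
```

Note the anti-hallucination guardrail baked into the prompt itself ("say 'unknown' rather than guessing") — in Software 3.0, even the safeguards are written in English.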

In legal advice, this unlocks:

- Rapid augmentation: Lawyers prompt for first-draft briefs, deposition outlines, or negotiation sims, then refine. Tools like Harvey (built on custom-tuned models) or Thomson Reuters’ CoCounsel already power this in real firms.

- Democratization: Junior associates or solo practitioners access sophisticated analysis; even non-lawyers get basic guidance via consumer apps (with clear disclaimers).

- Agentic workflows: Emerging “legal agents” handle multi-step tasks — research → draft → risk assessment → iterate — under human supervision.

The refactor is profound: legal “code” (documents, memos) becomes sparser. Focus shifts from exhaustive drafting to orchestration — crafting effective prompts, validating outputs, managing context/memory, and integrating tools. Karpathy’s analogy holds: lawyers are getting an “Iron Man suit,” not replaced by autonomous robots.
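The agentic loop described above — research → draft → risk check → iterate, under supervision — can be sketched in a few lines. The step functions here are stubbed placeholders standing in for real retrieval and LLM calls, not any product's API:

```python
def research(question):   # in practice: retrieval over a legal database
    return f"notes on: {question}"

def draft(notes):         # in practice: an LLM drafting call
    return f"memo based on [{notes}]"

def risk_check(memo):     # in practice: a verifier model or a checklist
    return "ok" if "memo" in memo else "revise"

def run_agent(question, max_iters=3):
    """Iterate until the risk check passes, or escalate to a human."""
    for _ in range(max_iters):
        memo = draft(research(question))
        if risk_check(memo) == "ok":
            return memo  # still goes to a human reviewer before the client
    raise RuntimeError("escalate to a human: agent could not converge")

print(run_agent("merger antitrust exposure"))
```

The design choice that matters is the exit condition: the loop either passes an explicit check or escalates — the "Iron Man suit" keeps a human in the control seat.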

Looking to 2026: Opportunities, Risks, and the Path Forward

The upside is massive — faster turnaround, lower costs, broader access to justice (especially for underserved populations). Firms that build hybrid human-AI workflows could outpace competitors.

But risks loom large:

- Hallucinations and liability: Courts have sanctioned lawyers for citing AI-generated fake cases; who bears responsibility?

- Ethics and regulation: ABA and state bars issued 2025 guidance on competence, confidentiality, and supervision. EU AI Act-style rules may follow.

- Bias and access gaps: Fine-tuned models favor well-resourced firms; public defenders risk falling behind.

- Skill shift: The new elite will master “prompt engineering,” agent design, and stochastic system debugging — echoing Karpathy’s recent reflection that even top programmers feel “behind” in this alien layer.

2026 could see more fine-tuned legal LLMs, better hallucination mitigations (via retrieval-augmented generation or verifiable rewards), and “agent-ready” legal databases. The profession will not vanish; it will hybridize, with humans handling judgment, strategy, and client relationships while AI handles scale.
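Retrieval-augmented generation, mentioned above as a hallucination mitigation, is conceptually simple: fetch real passages first, then force the model to answer only from them. A miniature sketch — keyword-overlap scoring stands in for a real embedding search, and the corpus snippets are illustrative paraphrases:

```python
corpus = {
    "8 Del. C. § 251": "Merger agreements require approval by the board of directors",
    "15 U.S.C. § 18":  "Acquisitions whose effect may be substantially to lessen competition are prohibited",
    "FRCP 26":         "Parties may obtain discovery regarding any nonprivileged relevant matter",
}

def retrieve(query, k=2):
    """Rank sources by crude word overlap with the query; return the top k."""
    words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query):
    """Build a prompt that pins the model to retrieved, citable text."""
    passages = "\n".join(f"[{cite}] {text}" for cite, text in retrieve(query))
    return (f"Answer using ONLY the sources below; cite by bracket.\n"
            f"{passages}\n\nQ: {query}")

print(grounded_prompt("may the acquisition lessen competition"))
```

Because every passage arrives with its citation attached, a fabricated case name has nowhere to hide — the reviewer can check each bracket against the source database.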

Karpathy’s framework reminds us: we are not just adopting tools — we are rewriting how legal value is created. Those who experiment thoughtfully now will shape the future.

* * *

This article explores conceptual intersections of AI frameworks and the legal profession for discussion purposes only. It is not legal advice, does not form an attorney-client relationship, and should never substitute for professional counsel from a qualified attorney. Always seek personalized legal guidance — and you can reach us anytime at info@lexquill.com.

Happy New Year, everyone — here’s to a 2026 filled with smarter tools, clearer ethics, and fewer hallucinations.