Artificial intelligence is transforming the legal industry at an unprecedented pace. From contract analysis to legal research and drafting, law firms are adopting AI to improve efficiency, reduce costs, and stay competitive.
But while AI is moving fast, security, and more importantly ethical responsibility, can't afford to lag behind.
For law firms, this isn’t just a technology issue. It’s a professional obligation.
The Wake-Up Call: ABA Formal Opinion 512
In 2024, the American Bar Association released ABA Formal Opinion 512, its first formal guidance on the use of generative AI in legal practice.
The message was clear:
AI is allowed. But it doesn’t change your ethical responsibilities.
Attorneys must “fully consider their applicable ethical obligations” when using AI, including:
- Competence
- Confidentiality
- Communication
- Supervision
- Reasonable fees
In other words, AI doesn’t reduce risk… it introduces new risks that must be actively managed.
Where Law Firms Are Getting It Wrong
Many firms are adopting AI tools faster than they’re adapting their security and governance frameworks. That gap creates real exposure.
1. Blind Trust in AI Outputs
AI can generate convincing, but incorrect, legal content. Courts have already seen cases where attorneys cited non-existent case law generated by AI.
The ABA is explicit: lawyers cannot delegate professional judgment to AI. Every output must be reviewed and validated.
2. Confidential Data Exposure
When attorneys input client information into AI tools, they may unknowingly expose sensitive data.
Formal Opinion 512 reinforces that confidentiality obligations still apply… even when using third-party AI platforms.
If your AI tool stores, trains on, or shares data, you could be violating client trust (and ethical rules).
3. Lack of Transparency with Clients
Do your clients know you’re using AI?
Depending on the situation, attorneys may be required to disclose AI usage, especially if it impacts billing, strategy, or outcomes.
4. No Internal AI Policy
Many firms are operating without clear guidelines:
- What tools are approved?
- What data can be entered?
- Who is responsible for oversight?
Without structure, risk multiplies quickly.
Even with a policy in place, risk doesn’t disappear because policies are only effective if your team understands and follows them.
5. No Training on Responsible AI Use
Even the best policies and tools will fail without proper training.
Attorneys and staff need to understand:
- What AI tools are approved—and which are not
- What client data can and cannot be entered
- How to validate AI-generated outputs
- The ethical obligations tied to AI use, including confidentiality and competence
ABA Formal Opinion 512 reinforces that competence now includes understanding the benefits and risks of AI. That doesn’t happen without ongoing education.
Training should not be a one-time event. As AI tools evolve, so do the risks. Law firms need continuous education to ensure employees are using AI securely and responsibly.
This is where many firms fall short, and where the right partner can help develop and deliver structured, role-based training as part of a broader AI and cybersecurity strategy.
At Onward Technologies, we incorporate AI usage training into our security and solution development process, helping firms not just adopt AI but use it safely and ethically.
Security Is Now an Ethical Requirement
Traditionally, cybersecurity was viewed as an IT issue. Today, it’s directly tied to legal ethics.
The duty of competence now includes understanding the capabilities and risks of AI tools.
That means law firms must:
- Vet AI vendors for data security and privacy practices
- Implement access controls and monitoring
- Train attorneys and staff on safe AI usage
- Establish governance policies around AI adoption
This is no longer optional. It’s part of providing competent representation.
The Real Risk: Moving Fast Without Guardrails
AI is often adopted informally—individual attorneys experimenting with tools on their own.
That’s where the danger lies.
Without centralized oversight:
- Sensitive data gets exposed
- Inaccurate outputs go unchecked
- Ethical violations happen unintentionally
And unlike other industries, law firms face reputational, regulatory, and malpractice risks all at once.
How Law Firms Should Respond
To keep pace with AI without falling behind on security, law firms should focus on five priorities:
1. Establish an AI Acceptable Use Policy
2. Align IT and Legal Leadership
3. Prioritize Secure AI Solutions
4. Define Governance and Oversight
5. Train Your Attorneys and Staff
Final Thought
AI is not a passing trend; it’s becoming embedded in how legal work gets done.
But as the American Bar Association makes clear, the fundamentals haven’t changed:
Your duty is still to your client.
AI can enhance your practice… but only if it’s used responsibly, securely, and ethically.
Because in the legal world, moving fast is fine.
Falling behind on security is not.
Bringing AI into your firm without a security strategy is a risk you don’t need to take.
Onward Technologies works with law firms to secure their environments, protect client data, and support responsible AI adoption. Let’s talk about how we can help.
| Jurisdiction | Statute/Order | Link | Issue Date |
| --- | --- | --- | --- |
| ABA | Formal Opinion 512 – Generative Artificial Intelligence Tools | LINK | 2024 |

