Can You Trust AI to Take Clinical Notes? Legal, Ethical, and Practical Risks to Consider

AI tools promise more time with patients and less on documentation, but not without new risks.

Clinicians are increasingly turning to AI tools to lighten their documentation load. Marketed as a way to ease administrative burnout, these tools promise to reduce note-taking time, help you stay present with patients, and generate consistent, structured records. But before you hit record, it’s critical to understand the legal, ethical, and practical risks of using AI in your clinical notes.

Here’s what you should know to safely integrate AI-assisted notes into your practice and protect yourself in the process.

First, Know What the Tool Does

AI note-taking tools come in a few forms. Some simply transcribe your conversations, while others go further by generating draft notes or auto-filling your EHR. Some models are continuously “in training,” pulling in new data to improve performance. Others use a static model built on a fixed dataset. 

These distinctions matter for your legal risk. Before implementing any AI note-taking tool, review its terms of service and privacy policy. You need to know whether session data is stored, used for training, or shared with third parties.

HIPAA Compliance Is Not Optional

If your practice is a HIPAA-covered entity and your AI tool handles patient data, you're on the hook for ensuring it meets HIPAA standards. Most AI note-taking tools do touch patient data: first when recording and transcribing the patient session, and again when turning the transcript into a draft note or inserting information into your EHR.

At a minimum, HIPAA compliance in these cases means the tool should offer:

  • End-to-end encryption of patient data
  • Access controls for users in your workforce and the vendor’s workforce
  • Secure data storage
  • A Business Associate Agreement (BAA) with the vendor

If your vendor refuses to sign a BAA (either yours or one the vendor provides) or dodges your privacy questions, that’s a red flag.

See our related video, “Business Associate Agreements in Healthcare.”

You Are Still the One in Charge

Most AI tools will state in their terms of use, in one form or another, that the tool is not meant to serve as a treatment decision-maker and does not provide licensed services. Here's the key legal point: AI doesn't carry malpractice liability—you do.

Whether you’re an MD, NP, PA, or therapist, if the AI-generated note is inaccurate, incomplete, or misleading, that’s your responsibility. If you’re ever audited, sued, or reported to your licensing board, “the AI wrote it” won’t hold up as a defense. Courts and licensing boards will assess your clinical decision-making based on the totality of the record. If you fail to correct an AI-generated error, it will be treated as your own note.

Best practice: Set internal policies requiring that a licensed professional review and approve AI-generated notes before entering them into the patient’s official record.

Obtain Informed Consent Before Use

While not universally required by law, obtaining patient consent before using AI-assisted tools aligns with best ethical practices. It may also be required by state law or your licensing board. For example, many states require two-party consent for recording sessions.

See our related article, “Can I Record Patient Telephone Calls and Visits?”

Just as patients must consent to the use of technology for telehealth, they should be informed of the risks and benefits of using AI for note-taking so they can make an informed decision about whether to consent. Not every patient will agree to have their session recorded or transcribed with AI tools. If a patient declines, be prepared to take notes the old-fashioned way. And keep in mind that patients who initially give consent may later revoke it.

See our related article, “Obtaining Informed Consent in Telehealth.”

Don’t Ignore Bias and Transparency

AI isn’t immune to the biases found in the data on which it’s trained. That can lead to inaccuracies in its output, particularly concerning marginalized populations, and could ultimately affect patient care.

Your state may have anti-discrimination laws, and your licensing board may enforce ethical rules related to bias in clinical care. If a patient believes they received substandard care due to bias, that could lead to a discrimination claim against you. Also, remember that under the federal information blocking rules of the 21st Century Cures Act (often called the “open notes” rules), patients have the right to access and review their medical records, including clinical notes. If patients feel that their notes are inaccurate or discriminatory, you could face legal or reputational consequences.

What to watch for:

  • Stereotypes or assumptions
  • Omission of culturally relevant details
  • Lack of nuance in mental health or social history

Choose vendors who are transparent about how they trained their models and whether they audit for bias. Still, it’s essential to review each note and perform your own bias check before it’s memorialized in a patient’s record.

Support Your Team With Training

AI tools aren’t plug-and-play. Safe and effective use requires clear policies and workforce training. Your internal policies should address the issues discussed above and give your team the guidance they need to use AI efficiently while reducing legal risk. Revisit these policies regularly as you evaluate how AI note-taking tools are performing.

Monitor and Adjust

It’s easy to get excited about new technology. But make sure you’re evaluating AI tools regularly with an eye toward practicality and quality of patient care. Is it actually saving time? Is documentation quality improving? Are patients benefiting? Are there any unexpected changes to workflow or your practice’s operations?

You can track:

  • Accuracy of clinical documentation
  • Time spent on notes
  • Clinician satisfaction
  • Any changes in patient communication or outcomes

If, upon evaluation, the AI tools you implement aren’t working for you, be prepared to make a change.

AI Note-Taking Can Help, But Only With Guardrails

We understand that healthcare professionals are buried in paperwork. AI tools can offer relief, but only if they’re used with awareness of their inherent risks. Always think critically about privacy, liability, and equity when adopting AI. Always review the AI tool’s output to ensure it reflects your own clinical judgment and reasoning. You deserve tools that support your care, not shortcuts that create new risks.

Get Legal Support

Jackson LLP Healthcare Lawyers works with private practice owners to ensure you’re protected and compliant. If you operate in one of the states where we have licensed attorneys, schedule a consultation to talk through the legal considerations of using AI note-taking tools in your practice.

This blog is made for educational purposes and is not intended to be specific legal advice to any particular person. It does not create an attorney-client relationship between our firm and the reader. It should not be used as a substitute for competent legal advice from a licensed attorney in your jurisdiction.

