Jeffrey A. Singer
AI IN CLINICAL PRACTICE: CURRENT AND EMERGING ROLES
Many patients have already received care influenced by artificial intelligence (AI), whether through a radiologist’s diagnostic support or an automated follow-up message from an electronic health record. As AI moves from the lab to the exam room, the question is no longer if it will transform medicine, but how—and who will determine its boundaries?
For example, OpenEvidence enables verified physicians to search peer-reviewed literature and obtain referenced answers to clinical questions. Algosurg helps surgeons who perform minimally invasive procedures by converting CT and MRI scans into three-dimensional models, allowing them to choose surgical approaches and better delineate anatomy. RadNet assists radiologists in detecting breast lesions when interpreting mammograms and prostate lesions on MRIs. Aisel is a new AI platform for mental health providers that conducts secure, structured interactions with patients before and after visits. It gathers intake information, medication history, and standardized assessments through text or speech, potentially shortening new-patient appointments by up to 35 minutes.
AI in Electronic Health Records
Epic and other major electronic health record (EHR) providers have been integrating AI tools directly into their platforms so clinicians, including physicians, nurses, and support staff, don't need to switch between different apps. These embedded tools draft notes, summaries, and charts and predict risks such as sepsis, falls, or readmissions. They also generate follow-up messages to patients after visits, reminding them about medications, lab tests, or appointments, functions that can improve adherence and continuity of care. For nurses, the tools flag high-risk patients, simplify task lists, and even draft responses to patient messages, allowing nurses to focus more on patient care and less on paperwork.
EHRs reduce the risk of medication errors and drug interactions by maintaining accurate medication lists, standardizing prescriptions, and alerting clinicians to unsafe doses or interactions. Their effectiveness depends on design and proper use; poor data entry or excessive alerts can still lead to mistakes. Clinicians can ignore or override alerts, and sometimes, based on their knowledge of the clinical context, it is appropriate for them to do so. However, a rushed or fatigued clinician might override an alert incorrectly or surrender judgment to the AI platform.
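To make the alert-and-override pattern concrete, here is a minimal, hypothetical sketch of how an interaction check with a logged clinician override might be structured. The drug pairs, severity labels, and function names are invented for illustration and do not reflect any vendor's actual implementation.

```python
# Hypothetical sketch of an EHR drug-interaction alert with a clinician
# override. The interaction table, severity levels, and function names are
# illustrative only, not any vendor's actual logic.

KNOWN_INTERACTIONS = {
    ("warfarin", "ibuprofen"): "major",            # increased bleeding risk
    ("lisinopril", "spironolactone"): "moderate",  # hyperkalemia risk
}

def check_interactions(med_list, new_drug):
    """Return alerts for interactions between a new drug and current meds."""
    alerts = []
    for current in med_list:
        severity = (KNOWN_INTERACTIONS.get((current, new_drug))
                    or KNOWN_INTERACTIONS.get((new_drug, current)))
        if severity:
            alerts.append({"pair": (current, new_drug), "severity": severity})
    return alerts

def prescribe(med_list, new_drug, clinician_override=False, override_reason=""):
    """Block the order on a major interaction unless a clinician overrides it."""
    alerts = check_interactions(med_list, new_drug)
    majors = [a for a in alerts if a["severity"] == "major"]
    if majors and not clinician_override:
        # The alert fires; the clinician must review before proceeding.
        return {"status": "blocked", "alerts": majors}
    # Overrides are recorded so they can be audited later.
    return {"status": "prescribed",
            "alerts": alerts,
            "override_reason": override_reason if clinician_override else None}

if __name__ == "__main__":
    current_meds = ["warfarin", "metformin"]
    print(prescribe(current_meds, "ibuprofen"))                      # blocked
    print(prescribe(current_meds, "ibuprofen",
                    clinician_override=True,
                    override_reason="short course, INR monitored"))  # prescribed
```

The point of the sketch is the division of labor: the system surfaces the conflict and records the decision, while the clinician retains final judgment over whether the clinical context justifies proceeding.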
AI Symptom Checkers and Virtual Triage
AI symptom checkers, chatbots, and virtual triage tools assess patient symptoms online or over the phone, ask guided questions, and determine whether the patient needs routine, urgent, or emergency care. They frequently connect directly to telehealth platforms or providers, guiding patients to virtual visits, specialists, or in-person care as needed. For minor, non-urgent issues, AI can suggest home care or over-the-counter options, or it can schedule a telehealth appointment, reducing provider workload and increasing access.
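The basic triage logic can be illustrated with a short, hypothetical sketch. Commercial tools combine structured question flows with statistical models; the symptom lists, age threshold, and routing labels below are invented solely to show the shape of the decision.

```python
# Hypothetical rule-based symptom triage. Real products are far more
# sophisticated; the red-flag lists and thresholds here are illustrative only.

EMERGENCY_FLAGS = {"chest pain", "difficulty breathing", "stroke symptoms"}
URGENT_FLAGS = {"high fever", "persistent vomiting", "severe pain"}

def triage(symptoms, age):
    """Classify a complaint as emergency, urgent, or routine and suggest a route."""
    reported = {s.lower() for s in symptoms}
    if reported & EMERGENCY_FLAGS:
        return {"level": "emergency", "route": "call 911 / emergency department"}
    if reported & URGENT_FLAGS or age >= 75:
        return {"level": "urgent", "route": "same-day in-person or urgent care"}
    return {"level": "routine", "route": "telehealth visit or self-care guidance"}

if __name__ == "__main__":
    print(triage(["cough", "sore throat"], age=34))   # routine -> telehealth
    print(triage(["high fever", "cough"], age=80))    # urgent
    print(triage(["chest pain"], age=55))             # emergency
```

Even in this toy form, the design choice is visible: the tool sorts patients into escalation tiers and routes them, rather than rendering a diagnosis.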
The Limits of AI and the Human Element
AI bots and clinicians both have strengths and weaknesses. Bots cannot yet pick up on nuance, and language or cultural differences can lead to inaccurate assessments. They can also sway clinician decisions by overemphasizing risk without weighing risk-benefit trade-offs. Bots lack empathy, which is vital not only for providing care but also for diagnosis, since patients' body language and tone can reveal important clinical clues. Many human providers share some of these same weaknesses, and fear of liability lawsuits can further distort the recommendations they make.
One weakness of human providers is their struggle to prevent personal biases from affecting the advice they give their patients. New York City psychotherapist Jonathan Alpert writes in the Wall Street Journal, “Instead of acting as neutral guides, too many therapists now act as transmitters of political polarization—diagnosing it, promoting it, spreading it.” As a result, he states, “Patients aren’t giving up on therapy. They’re giving up on therapists. Increasingly, they turn to TikTok influencers, partisan echo chambers, or chatbots. Artificial intelligence lacks depth and accountability but can offer neutrality—something many human therapists no longer provide. If that trend continues, therapy will cede its purpose to algorithms and leave patients unmoored from reality.”
Whether one agrees with Alpert or not, his observation underscores why some patients may prefer non-human counseling tools—not for their warmth, but for their neutrality. Regulations that ban or restrict these options deny patients that choice.
While patients might feel less inhibited believing a non-human counselor will be non-judgmental, the lack of nuance and empathy remains a major drawback of AI in mental health counseling.
Other areas needing improvement will continue to emerge as practitioners rely more on AI in their work, and vendors will continue refining these tools to address such limitations. Over time, bots might match or even surpass the diagnostic, therapeutic, and social skills of human clinicians. Hospitals, clinics, and clinician practices might increasingly place bots on the front lines of patient care, even giving patients the choice of consulting a bot or a human clinician.
Regulatory Landscape
The role of AI in healthcare is rapidly expanding and evolving. However, several states have already taken steps to set boundaries on how AI is used in healthcare. Colorado, Connecticut, and Virginia have enacted comprehensive AI accountability laws that require notice, transparency, and safeguards against bias in critical areas, such as medicine.
In Illinois, the “Wellness and Oversight for Psychological Resources (WOPR) Act” prohibits AI from delivering psychotherapy or making treatment decisions without the supervision of a licensed clinician. Nevada similarly bans AI systems from representing themselves as mental health providers or replacing clinician roles in behavioral health. Utah requires clear disclosure when AI is used in patient communications and mandates that patients be informed if they are interacting with a bot rather than a human. California law ensures that AI cannot unilaterally deny, delay, or alter health services; only licensed professionals may make those medical necessity judgments.
Federal lawmakers have not passed legislation to regulate or restrict AI in health care. One interesting proposal, the “Healthy Technology Act,” introduced by Representative David Schweikert (R‑AZ) in January 2025, aims to expand patients’ access to AI-powered health care. The bill would allow AI systems to prescribe medications if two conditions are met: the FDA must approve, clear, or authorize the system, and the state in which it operates must permit its use for prescribing. The bill was referred to the House Committee on Energy and Commerce. As of this writing, the Committee has not held hearings on the bill.
Requiring FDA approval for AI health care tools risks dragging them into the same bureaucratic gauntlet that slows drug and device innovation. The process is expensive, time-consuming, and vulnerable to special interest pleading and political considerations, giving large incumbents an advantage while shutting out startups and academic innovators. Instead of fostering progress, it risks delaying or even derailing the natural evolution of AI in health care. (See “Drug Reformation: End Government’s Power to Require Prescriptions.”)
State licensing of AI prescribing platforms poses risks similar to those of FDA approval. Incumbent providers often influence medical boards and legislatures to protect their turf, applying political pressure and special-interest pleading. These actions can delay or block approval, hindering the adoption of AI tools that improve access, reduce clinician workload, and enhance patient care. Instead of promoting innovation, state licensing entrenches the current system and hinders the natural evolution of AI in health care. (See “Medical Licensing: An Obstacle to Affordable, Quality Care.”)
Balancing Innovation, Safety, and Autonomy
Laws that require providers to clearly disclose to patients when and how they are using AI to deliver services properly uphold the ethical principle of informed consent, which is essential for respecting patient autonomy (see chapter one of Your Body, Your Health Care). However, laws that prohibit bots from making diagnoses, providing therapeutic opinions, and suggesting treatments infringe on patient autonomy. While such laws might shield existing providers from competition, autonomous adults have the right to decide who and what they turn to for health care advice and treatment.
Strict data-handling and liability rules force developers to bear higher compliance costs, extend their timelines, and face uncertainty about what is permitted. Smaller startups and academic projects are most affected, as they often lack the legal and financial resources to fulfill these requirements. Limiting diagnostic and therapeutic decisions to humans further hampers innovation, slowing the development of AI technology in health care.
Lawmakers and policymakers should avoid enacting regulations that interfere with the natural development of the patient-bot relationship and unnecessarily obstruct AI innovation, while still ensuring transparency and safety.