🤖 Artificial Intelligence (AI) Is Reshaping Healthcare and Law
Artificial intelligence (AI) is no longer a futuristic concept; it is rapidly becoming a game-changing tool in both healthcare and the legal profession. AI offers powerful ways to make research and writing faster and more accurate. But it also introduces new ethical responsibilities and professional considerations, because it does make mistakes. I know from experience: an AI tool I use once made up a Board of Nursing policy that did not exist!
In this blog post, we look at what artificial intelligence in healthcare and law means for Legal Nurse Consultants (LNCs) in particular, and how it is already transforming their core tasks. We'll also cover the ethical guardrails every professional should keep in mind.
🧠 What Is AI?
At its core, AI refers to computer systems that mimic human intelligence—learning, reasoning, and problem-solving. Healthcare providers and LNCs already encounter artificial intelligence in healthcare regularly through:
- Machine Learning (ML): AI systems that detect patterns in massive datasets—like predicting poor outcomes based on prior patient cases.
- Natural Language Processing (NLP): AI that reads and understands text, such as medical records or deposition transcripts.
- Expert Systems: Rule-based engines that replicate expert medical decision-making.
- Automation: Bots that handle repetitive tasks like organizing medical files.
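For the technically curious, the "pattern detection" these tools perform can be illustrated with a toy sketch. Real NLP systems use trained language models, not simple text patterns, and the chart note below is invented for illustration only:

```python
import re

# Invented, illustrative chart text — not real patient data.
note = (
    "03/14 08:15 Vancomycin 1g IV started. "
    "03/14 14:30 Patient reports dizziness. "
    "03/15 09:00 Vancomycin dose held pending trough level."
)

# Pull out every "date time event" segment from the free text.
pattern = re.compile(r"(\d{2}/\d{2}) (\d{2}:\d{2}) ([^.]+)\.")
events = pattern.findall(note)

# Tag the entries that mention the medication of interest.
for date, time, event in events:
    tag = "MED" if "vancomycin" in event.lower() else "   "
    print(f"{tag} {date} {time}  {event}")
```

A production AI tool does something conceptually similar, at vastly greater scale and with far more linguistic nuance, across thousands of pages at once.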
🌍 Real-World Relevance
Legal Nurse Consultants and attorneys are often buried in data – including thousands of pages of medical records. AI tools can help read, sort, and summarize these documents in minutes, highlighting relevant findings, missed interventions, or inconsistencies. It’s like having a highly skilled digital assistant working 24/7.
📝 What Can I Ask AI To Do?
AI can help with a wide range of questions and tasks related to your work as a Legal Nurse Consultant. Think of it as your research assistant, writing helper, and brainstorming partner – all in one. Here are just a few examples of what you can ask AI to do:
- Summarize:
“Summarize this 200-page medical record to highlight key events and timelines.”
- Compare:
“Compare different standards of care for pressure ulcer treatment in nursing homes.”
- Create:
“Write a sample report that outlines deviations from the standard of care in a post-op complication case.”
- Explain:
“Explain the typical progression of sepsis in layman’s terms for a legal report.”
- Brainstorm:
“What are some common red flags in emergency room documentation that may indicate negligence?”
- Plan:
“Help me plan a presentation for attorneys on how LNCs support med-mal cases.”
Its responses are so quick that I really enjoy using it to research medical and legal issues and to draft whatever type of document I am working on – a blog post, an article, a report summary, a legal argument, and more. The possibilities are endless.
🔍 How AI Is Already Impacting LNC Practice Areas
Here are the areas of LNC work being most rapidly transformed by AI—and how consultants are leveraging it:
1. Medical Record Review
AI tools extract key data like diagnoses, medications, and timestamps.
👉 Example: An AI engine can flag delayed medication administration or documentation gaps, allowing you to focus on clinical implications.
2. Medical Chronologies
AI systems can auto-generate timelines from chart entries – just ask for a timeline.
👉 Example: A delay between a patient’s fall and subsequent neuroimaging can be flagged, strengthening your legal analysis.
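Under the hood, a chronology tool is doing something like the following minimal sketch. The chart entries and the four-hour threshold are made up for illustration; this is not any vendor's actual algorithm or a clinical standard:

```python
from datetime import datetime

# Hypothetical chart entries (event, timestamp) — illustrative data only.
entries = [
    ("Patient found on floor after unwitnessed fall", "2024-03-01 02:10"),
    ("Neuro checks initiated", "2024-03-01 02:30"),
    ("CT head performed", "2024-03-01 07:45"),
]

# Build a chronological timeline from the raw entries.
timeline = sorted(
    (datetime.strptime(ts, "%Y-%m-%d %H:%M"), event) for event, ts in entries
)

# Measure the gap between the fall and the first neuroimaging study.
fall_time = timeline[0][0]
ct_time = next(t for t, e in timeline if "CT" in e)
gap_hours = (ct_time - fall_time).total_seconds() / 3600

if gap_hours > 4:  # threshold chosen for illustration, not a clinical rule
    print(f"FLAG: {gap_hours:.1f} hours between fall and neuroimaging")
```

The value of the AI is that it extracts and orders those timestamps from messy narrative charting automatically; your expertise determines whether a flagged gap actually matters clinically and legally.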
3. Case Screening & Merit Reviews
AI models can predict case viability using similar historical data.
👉 Example: A wrongful death claim is identified as having strong merit due to patterns in sepsis management and treatment delay.
4. Legal & Medical Research
AI scans thousands of documents to find relevant guidelines, journal articles, or litigation precedents.
👉 Example: You request info on anticoagulation standards, and AI delivers 5 guideline summaries and 3 expert testimonies in seconds. But be sure to ask the AI program to provide citations so you can double check the accuracy of the information!
5. Report Writing
Some tools assist in generating polished summaries, complete with formatting and legal references.
👉 Example: You can cut and paste information from the medical records or dictate your opinions, and AI transforms it into a clean, fully formatted report—just tell it what type of report you want.
6. Regulatory Compliance Checks
AI can compare healthcare facility actions to federal or institutional guidelines.
👉 Example: An audit reveals a nursing home failed to meet fall-prevention policy for 48 hours.
⚖️ Ethical & Professional Considerations When Using AI
With these powerful tools come serious responsibilities. Here are the top ethical and legal issues every LNC should keep in mind:
1. Privacy and Confidentiality
AI tools must be HIPAA-compliant and secure. Avoid unencrypted platforms, especially when working with sensitive case data.
2. Accuracy and Reliability
AI is only as accurate as the data it's trained on. Outdated information can lead to problems, so always verify AI's findings against your clinical expertise and the specific guidelines you are citing.
3. Bias and Fairness
AI can inherit bias from the data it learns from. This may result in skewed risk assessments or flawed patient care reviews. Look for tools trained on diverse, representative datasets.
4. Transparency and Explainability
Use AI tools that provide explanations for how decisions are reached. Avoid “black box” systems that can’t justify their conclusions, especially if you’re preparing evidence. I always ask AI for the full citations to any information I am going to cite or use in my report and check it for accuracy.
5. Accountability
You are still responsible for the final judgment. AI can assist, but it cannot replace your professional or legal accountability.
AI is moving fast. Make professional development in legal technology a part of your ongoing practice to remain competitive and competent.
⚖️ Notable Legal Cases: AI-Generated Fake Citations That Led to Attorney Sanctions
Attorneys are facing increasing scrutiny and sanctions (monetary fines) for submitting court documents containing fictitious legal citations generated by AI tools like ChatGPT. Here are some recent examples:
1. Mata v. Avianca, Inc. (Southern District of New York, 2023)
In a personal injury lawsuit against Avianca Airlines, attorneys Steven A. Schwartz and Peter LoDuca from Levidow, Levidow & Oberman submitted a legal brief containing six non-existent cases generated by ChatGPT. When questioned, they initially stood by the citations, even providing fabricated excerpts. Judge P. Kevin Castel sanctioned the attorneys and their firm with a $5,000 fine, emphasizing that while using AI is not inherently improper, lawyers have a duty to ensure the accuracy of their filings. (AP News, MSBA)
2. Walmart Injury Lawsuit (Wyoming, 2025)
Three attorneys were sanctioned for citing fictitious cases in a personal injury lawsuit against Walmart. Attorney Ayala, who admitted to using an internal AI tool that generated the fake citations, was fined $3,000 and removed from the case. Attorneys Morgan and Goody were each fined $1,000. (Reuters)
3. Bednar Case (Utah Court of Appeals, 2025)
Attorney Richard Bednar submitted a court brief containing fabricated citations produced by ChatGPT. The brief, prepared by an unlicensed law clerk, included references to non-existent cases. Upon discovery, Bednar took responsibility, apologized, and was sanctioned by the court. Sanctions included reimbursing the respondent’s legal fees, refunding his client’s expenses, and donating $1,000 to a legal non-profit. (The Guardian)
4. Brandon Monk Sanctioned (Texas, 2024)
Brandon Monk, serving as the Groves City Attorney, was fined $2,000 and ordered to complete a course on generative AI after submitting a legal brief with non-existent case citations in a wrongful termination lawsuit. Despite the sanctions, he retained his position after addressing the court’s concerns. (Beaumont Enterprise, AP News)
One study by legal analyst Damien Charlotin identified over 120 instances worldwide where AI-generated “hallucinations” – fabricated legal citations – appeared in court documents. Notably, while earlier cases often involved self-represented litigants, recent trends show that legal professionals are increasingly responsible for these errors. Some law firms have faced monetary sanctions exceeding $10,000 due to such inaccuracies. (Business Insider)
🧭 Ethical and Professional Considerations
These cases highlight the necessity for attorneys and others working with attorneys to exercise due diligence when utilizing AI tools:
- Verification: Always cross-check AI-generated citations against reliable legal databases.
- Transparency: Disclose the use of AI tools in legal research and drafting when appropriate.
- Education: Stay informed about the capabilities and limitations of AI technologies in legal contexts.
As with any powerful tool, its value depends on how responsibly it’s used. By combining your clinical expertise with a thoughtful, ethical approach to AI, you position yourself as a forward-thinking, indispensable member of the legal team.
📣 Want to Future-Proof Your LNC Practice?
Start by exploring AI tools designed for medical-legal workflows. Attend webinars, read case studies, and get hands-on with the platforms reshaping your profession. ChatGPT has a free version you can use to get started, and some LNCs have reported using AI QuickWrite; visit their websites and review the features each offers.
I’d love to hear about your experiences with artificial intelligence in healthcare and law and any feedback you have on the systems you’ve used. I am posting this on my LinkedIn profile and on my blog page for my website https://nursinglawcenter.com/blog/ if you want to leave a comment.
Happy July 4th!
Laurie Elston JD BSN
www.NursingLawCenter.com
Law Office of Laurie R. Elston Inc.
📞 T: (805) 481-1001
📧 Email: Elston@charter.net