The Doctor Will See You Now: The Value of Human Connection in AI-Supported Medicine
What makes kidney care a good fit for AI tools?
Dr. Weinstein: Success in complex chronic disease management – like kidney care – requires two things: large amounts of data and long-term relationships that help shape patient outcomes.
Nephrology is extremely data rich, with information collected from a wide variety of sources — labs, dialysis metrics, blood pressures, medications, hospitalization data, imaging, specialist notes.
The challenge is not the absence of data. The challenge is fragmentation, variability in format and the cognitive load required to assemble a coherent clinical picture.
For me, the greatest promise of AI in nephrology is effective summarization and aggregation — directly reducing the cognitive burden and uncovering nuanced patterns that may otherwise be too subtle or too time-consuming to detect during a busy day. This provides immediate and meaningful value for clinicians and their patients.
How does this change your day as a physician?
Dr. Weinstein: If AI manages data retrieval, summarization and pattern recognition, it can fundamentally shift how I spend my time.
Much of my cognitive energy goes toward searching medical records, reconciling information across systems and interacting with a keyboard. If that burden is reduced, I can redirect that time toward coaching and education — which is central to managing chronic kidney disease.
This also has the potential to impact physician well-being. Reducing cognitive load mitigates the “drinking from a firehose” effect of information overload. It creates space not only for better patient interaction, but also for addressing burnout and mental fatigue among clinicians.
Essentially, it could give more of that human role back to caregivers. More time making eye contact. More time building rapport. More time reinforcing what we know helps improve outcomes.
What is the value of human connection in medicine?
Dr. Weinstein: At least for the foreseeable future, AI struggles with the subtleties of human emotion and the ability to read between the lines. Patients often communicate indirectly: Their tone, hesitation, body language and inconsistencies all carry meaning.
There is also the matter of trust. Many patients may not fully trust technology or may lack consistent access to it. For them, the human relationship is foundational.
And there is something powerful about human-to-human accountability. Patients sometimes adhere to treatment not because an algorithm or the literature tells them to, but because they trust the person sitting across from them.
That relational bond is difficult to replicate electronically. It involves nuance, empathy, social norms and shared history.
How can healthcare systems help ensure physicians maintain decision-making authority?
Dr. Weinstein: Appropriate AI use begins with training and governance.
First, the right people need access to the right tools. Second, clinicians must understand that both the capabilities and limitations of AI systems change over time. Medical terminology evolves. LLMs are upgraded. Data quality varies. Outputs can drift.
Healthcare systems must establish institutional safeguards — governance frameworks, audit mechanisms and clear expectations that AI remains advisory rather than determinative.
Equally important is cultivating individual accountability and critical thinking among all clinical users. Clinicians must be trained to ask the right questions (what do you really want to know about this data or patient?) and to view AI output through the lens of patient safety, error-checking and the broader clinical context, while monitoring for bias or drift.
The goal is augmented decision making (enhancing the quality, speed or depth of thought while keeping the human in the loop), not an abdication of critical thinking.