Using a digital medical history–taking application prior to urgent care consultations may not improve diagnostic accuracy, according to a cluster-randomized trial conducted in two German out-of-hours practices.
Diagnostic accuracy—defined as agreement between diagnoses assigned by urgent care physicians and those determined by an independent expert committee of three general practitioners reviewing patient documentation—was achieved in 58.4% of cases overall: 57.6% among patients who completed the application before the consultation and 59.1% among those who completed it afterward.
Trial Design
In the study, researchers evaluated a digital medical history application designed to collect structured symptom information prior to urgent care consultations. Patients first entered demographic information and selected one or more complaints, after which the application presented follow-up questions through a dynamic questionnaire. After completion, the tool generated a structured medical history report summarizing the patient’s responses.
The researchers analyzed data from 986 patients enrolled in a cluster-randomized trial presenting to two out-of-hours practices between March 1, 2022, and February 28, 2023. Weeks within each practice were randomized to intervention or control conditions.
- Intervention: Patients completed the application before the consultation, allowing physicians to review the generated report in the electronic medical record.
- Control: Patients completed the application after the consultation, so physicians did not see the information during the visit.
Diagnostic Accuracy
At least one diagnosis matched between urgent care physicians and the expert committee in 58.4% of the encounters.
Adjusted analyses found no statistically significant intervention effect.
Exploratory analyses identified one factor associated with higher agreement: the number of diagnoses recorded during the visit. Subgroup analyses examining physician specialty, patient age, number of complaints, complaint severity, and practice location also found no subgroup that benefited from application use.
Prediction of Further Treatment
The researchers also evaluated the accuracy of the physicians’ recommendations for further treatment compared with the patients’ self-reported follow-up care.
Among the 694 patients who completed a follow-up survey 14 to 21 days later, 35% (n = 246) reported receiving further treatment related to their initial complaint.
The intervention showed no measurable effect on physicians’ ability to predict subsequent care. Only two variables were significant in the models: the number of diagnoses documented during the out-of-hours visit and self-reported severe symptoms.
In predicting follow-up care, differences between individual physicians were more pronounced than differences between the intervention and control groups.
Limitations
The researchers noted that the study took place during periods of high patient volume during the COVID-19 pandemic, which may have limited physicians’ time to review the application-generated reports. In addition, the electronic record system displayed only four lines of the report at a time, potentially restricting usability.
Diagnostic accuracy was assessed using International Statistical Classification of Diseases and Related Health Problems (ICD) codes, which the researchers noted may not be suitable for the out-of-hours practice setting, where treatment decisions may not depend on a precise diagnosis.
“The study’s results suggest that the [application] did not improve diagnostic accuracy in a real-life setting,” wrote lead study author Eva Maria Noack, of the Department of General Practice at the University Medical Center Göttingen in Germany, and colleagues.
The researchers reported no competing interests.
Source: PLOS Digital Health