A recent study of more than 1 million online reviews of health care facilities across the U.S. found that 46.3% of reviews were rated 1 or 2 stars out of 5, with low ratings strongly correlated with payment issues (r = 0.25) and poor treatment (r = 0.24). Conversely, 50.1% of reviews were rated 4 or 5 stars, with the strongest correlations involving kindness (r = 0.32) and anxiety relief (r = 0.32).
According to the study authors, the findings suggest that the language used in online reviews could help clinicians and administrators identify potential problems with patient communication and access to care.
The study used data from the popular review website Yelp.com covering 138,605 facilities over a 7-year period from 2017 to 2023. The platform supplied an academic research dataset directly, which the researchers filtered to focus on facilities offering essential health benefits as defined by the Affordable Care Act.
Linguistic analysis and n-gram correlation were applied to the reviews to determine associations between star ratings and the language reviewers used, with the 50 most common words excluded to filter out stop words.
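The study does not publish its analysis code, but the general approach can be illustrated with a minimal sketch. The Python example below, built on toy data with hypothetical variable names, tokenizes reviews, drops the most frequent tokens as stop words (the study excluded the top 50; this toy corpus uses far fewer), and computes a Pearson correlation between each remaining word's presence and the review's star rating. It is an illustration of the technique, not the authors' pipeline.

```python
# Illustrative sketch only: unigram-vs-rating correlation on toy data.
from collections import Counter
import math
import re

reviews = [
    ("They were kind and the visit relieved my anxiety", 5),
    ("Billing was not resolved and I was not called back", 1),
    ("Friendly staff and quick scheduling", 4),
    ("Insurance verification delays, don't recommend", 2),
]

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

# Count overall word frequency and drop the most common tokens as stop words.
freq = Counter(tok for text, _ in reviews for tok in tokenize(text))
stop_words = {w for w, _ in freq.most_common(2)}
vocab = sorted(set(freq) - stop_words)
ratings = [stars for _, stars in reviews]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Correlate each word's presence (0/1 per review) with the star rating.
correlations = {
    word: pearson(
        [1 if word in tokenize(text) else 0 for text, _ in reviews],
        ratings,
    )
    for word in vocab
}

for word, r in sorted(correlations.items(), key=lambda kv: kv[1]):
    print(f"{word:>12s}  r = {r:+.2f}")
```

In this framing, negatively correlated words (such as negations) surface as markers of dissatisfaction and positively correlated words as markers of satisfaction, which mirrors the kind of word-level correlates reported in the study.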
Negative reviews commonly cited administrative inefficiencies, such as insurance verification delays and communication lapses, and were marked by high-frequency use of negation words (eg, “not,” “told,” “don’t”). Conversely, positive reviews were shorter, emphasized cohesive care and staff professionalism, and frequently used conjunctions like “and,” suggesting patients tended to succinctly list favorable aspects of their care. The term “not” emerged as a strong correlate of dissatisfaction, echoing expectation disconfirmation theory, which posits that unmet expectations fuel negative evaluations—particularly relevant in the disrupted care landscape of the COVID-19 era. Phrases such as “not called” or “not informed” may reflect perceived failures in communication or care coordination.
This study had several limitations. The user base of the online review platform may skew toward younger, more tech-savvy individuals, limiting generalizability. The focus on public reviews also may amplify extreme sentiments, particularly during the COVID-19 pandemic. Although the platform filters low-trust content, the authors did not independently validate review authenticity. The classification by essential health benefits excluded niche services, potentially omitting unique concerns. Facility characteristics such as size or ownership were not analyzed. Additionally, reviews were not stratified by time, so potential pandemic-related influences on patient sentiment were not isolated. Future research may address these methodological and contextual gaps.
Full disclosures can be found in the published study.
Source: JAMA Network