Common statistical approaches used in ophthalmic research vary substantially in accuracy depending on how they account for correlations between a patient’s two eyes, according to a simulation study published in Optometry and Vision Science.
Because measurements from both eyes are often highly correlated, analyzing them as independent observations can lead to inaccurate results. Researchers evaluated several commonly used statistical methods, including single-eye analysis, averaging measurements from both eyes, mixed effects models, and generalized estimating equations (GEEs).
The researchers, from the Southern College of Optometry and the University of Memphis in Memphis, Tennessee, simulated 60,000 data sets across a range of interocular correlation levels and sample sizes. Each data set included both eye-level and subject-level predictors, allowing the investigators to assess how different modeling strategies performed under varying conditions.
Incorrect Modeling Inflates False Positives
Treating measurements from both eyes as independent observations produced the poorest results, leading to “inappropriately high model power secondary to incorrect model specification.” This approach inflated Type-I error rates—meaning false-positive findings—and the effect increased as interocular correlation rose.
In contrast, mixed effects models, single-eye analyses, and averaging measurements from both eyes maintained appropriate Type-I error rates across simulation scenarios. Generalized estimating equations showed slightly elevated error rates, particularly at smaller sample sizes.
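The inflation mechanism can be illustrated with a small simulation. The sketch below is not the study's code; it is a minimal, hypothetical example in which two eyes per subject share a subject-level effect (inducing interocular correlation rho), and a naive two-sample t-test pools all eyes as if they were independent. Under the null hypothesis of no group difference, the naive test rejects far more often than the nominal 5% when rho is high:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def naive_type1_rate(n_subjects=30, rho=0.8, n_sims=2000, alpha=0.05):
    """Fraction of null-hypothesis simulations rejected by a two-sample
    t-test that pools both eyes of each subject as if independent."""
    rejections = 0
    for _ in range(n_sims):
        groups = []
        for _g in range(2):
            # A shared subject effect induces interocular correlation rho;
            # each eye's marginal variance is rho + (1 - rho) = 1.
            subject = rng.normal(0.0, np.sqrt(rho), n_subjects)
            eyes = subject + rng.normal(0.0, np.sqrt(1.0 - rho),
                                        (2, n_subjects))
            groups.append(eyes.ravel())  # pool both eyes: 2n "observations"
        _, p = stats.ttest_ind(groups[0], groups[1])
        rejections += p < alpha
    return rejections / n_sims
```

With strong interocular correlation (e.g., rho = 0.8), the empirical Type-I error rate runs well above the nominal 0.05, whereas with rho = 0 it stays near nominal, matching the pattern the study reports.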
Model Performance Depends on Predictor Type
The optimal modeling approach differed depending on whether predictors were measured per subject or per eye.
For subject-level predictors, averaging measurements from both eyes performed similarly to mixed effects models and GEEs in terms of statistical power. Single-eye analysis resulted in lower power, particularly when correlation between eyes was low.
For eye-level predictors, mixed effects models and GEEs consistently outperformed other approaches, providing greater statistical power across scenarios. Averaging data or using a single eye reduced the ability to detect true effects.
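For readers unfamiliar with the recommended approach, a random-intercept mixed effects model for an eye-level predictor can be sketched as follows. This is an illustrative example with simulated data, not the authors' analysis; the variable names (`subject`, `treated`, `y`) and the statsmodels implementation are assumptions for the demonstration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical data: 300 subjects, one treated and one untreated eye each.
n = 300
subject = np.repeat(np.arange(n), 2)
treated = np.tile([0, 1], n)             # eye-level predictor
u = rng.normal(0.0, 1.0, n)[subject]     # subject-level random intercept
y = 0.5 * treated + u + rng.normal(0.0, 1.0, 2 * n)
df = pd.DataFrame({"subject": subject, "treated": treated, "y": y})

# Grouping by subject lets the model absorb the shared interocular
# component instead of treating the two eyes as independent.
fit = smf.mixedlm("y ~ treated", df, groups=df["subject"]).fit()
print(fit.params["treated"])  # recovers the simulated effect (0.5) up to noise
```

Because the treated/untreated contrast is within subject, the random intercept cancels out of the comparison, which is why eye-level effects are estimated more efficiently here than by averaging or discarding one eye.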
Implications for Research Practice
The findings highlight the importance of selecting statistical models that account for interocular correlation. Failure to do so can lead to inflated false-positive rates and misleadingly high statistical power that does not reflect true effects.
The researchers concluded that mixed effects models and GEEs are preferred when analyzing eye-level predictors, while averaging measurements may be sufficient for subject-level predictors. Treating both eyes as independent observations should be avoided.
These results provide practical guidance for ophthalmic researchers and reinforce the need for careful model selection when analyzing correlated ocular data.
The researchers reported no conflicts of interest.
Source: Optometry and Vision Science