Inconsistent Human Annotations’ Impact on AI-Driven Clinical Decisions

Suntec.AI
1 min read · Mar 2, 2023


Abstract

In supervised learning model development, domain experts are often relied upon to provide class label annotations. Annotation inconsistencies commonly occur even when highly experienced clinical experts annotate the same phenomenon (e.g., a medical image, diagnosis, or prognostic status), due to inherent expert bias, judgment, and slips, among other factors. While their existence is relatively well known, the implications of such inconsistencies are largely understudied in real-world settings where supervised learning is applied to such 'noisy' labeled data. To shed light on these issues, we conducted extensive experiments and analyses on three real-world Intensive Care Unit (ICU) datasets. Specifically, individual models were built from a common dataset annotated independently by 11 ICU consultants at the Queen Elizabeth University Hospital, Glasgow, and model performance estimates were compared through internal validation (Fleiss' κ = 0.383, i.e., fair agreement). Further, broad external validation (on both static and time-series datasets) of these 11 classifiers was carried out on the external HiRID dataset, where the models' classifications were found to have low pairwise agreement (average Cohen's κ = 0.255, i.e., minimal agreement). Moreover, the models tend to disagree more when making discharge decisions (Fleiss' κ = 0.174) than when predicting mortality (Fleiss' κ = 0.267).
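For readers curious how agreement statistics like those quoted above are obtained, here is a minimal sketch (not the paper's actual code) using statsmodels and scikit-learn. The `predictions` array, its shape, and the random labels are purely illustrative stand-ins for the 11 classifiers' outputs on a shared patient set.

```python
# Sketch: computing Fleiss' kappa across 11 raters and the average
# pairwise Cohen's kappa. All data here is synthetic and illustrative.
from itertools import combinations

import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)

# Hypothetical predictions: 11 raters/models x 200 patients, labels {0, 1}.
predictions = rng.integers(0, 2, size=(11, 200))

# Fleiss' kappa: chance-corrected agreement among all 11 raters at once.
# aggregate_raters expects one row per subject and one column per rater,
# and returns per-subject category counts in the form fleiss_kappa needs.
counts, _ = aggregate_raters(predictions.T)
print("Fleiss' kappa:", fleiss_kappa(counts, method="fleiss"))

# Average pairwise Cohen's kappa: mean agreement over all 55 rater pairs.
pairwise = [
    cohen_kappa_score(predictions[i], predictions[j])
    for i, j in combinations(range(11), 2)
]
print("Average Cohen's kappa:", np.mean(pairwise))
```

Fleiss' κ summarizes agreement among all raters jointly, while averaging Cohen's κ over the 55 classifier pairs mirrors the pairwise comparison the abstract reports for external validation.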

Read the full paper: https://go.nature.com/41QvVHs

Partner with SunTec.AI for Extensive Data Annotation Services

Get a Free Trial Now!

SunTec.AI || info@suntec.ai || +44 203 514 2601 || +1 585 283 0055


Written by Suntec.AI

SunTec.AI is a top data annotation company empowering businesses with high-quality training datasets for diverse AI/ML project needs.
