Performance of Google Bard and ChatGPT in Mass Casualty Incidents Triage
Authors: Rick Kye Gan, Jude Chukwuebuka Ogbodo, Yong Zheng Wee, Ann Zee Gan, Pedro Arcos González
Published in: American Journal of Emergency Medicine, October 30, 2023
Conclusion:
- Google Bard was more accurate than ChatGPT at performing triage in mass casualty incident scenarios.
- Accuracy was 60% for Google Bard versus 26.67% for ChatGPT, a statistically significant difference (p = 0.002).
Method:
- A cross-sectional analysis compared ChatGPT, Google Bard, and medical students in mass casualty incident (MCI) triage using the START (Simple Triage and Rapid Treatment) method.
- A validated questionnaire of 15 MCI scenarios was used to evaluate triage accuracy, with qualitative content analysis across the four START assessment categories: "Walking Wounded," "Respiration," "Perfusion," and "Mental Status" (a sketch of the START decision logic follows this list).
- Responses were analyzed statistically to compare performance across the groups.
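The START method referenced above is a standard field-triage algorithm. As a reader reference (not the authors' scoring code), the following is a minimal Python sketch of the commonly published adult START decision sequence; the field names and thresholds here are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Casualty:
    """Findings for the four START assessment categories."""
    can_walk: bool                # Walking Wounded
    breathing: bool               # Respiration (after airway repositioning if needed)
    respiratory_rate: int         # breaths per minute
    radial_pulse_present: bool    # Perfusion
    capillary_refill_sec: float   # Perfusion
    obeys_commands: bool          # Mental Status

def start_triage(c: Casualty) -> str:
    """Assign a START category using the standard adult decision sequence."""
    if c.can_walk:
        return "MINOR (green)"
    if not c.breathing:
        # Still apnoeic after airway repositioning -> expectant
        return "EXPECTANT (black)"
    if c.respiratory_rate > 30:
        return "IMMEDIATE (red)"
    if not c.radial_pulse_present or c.capillary_refill_sec > 2:
        return "IMMEDIATE (red)"
    if not c.obeys_commands:
        return "IMMEDIATE (red)"
    return "DELAYED (yellow)"

# Example: non-ambulatory casualty, breathing at 24/min, good perfusion, follows commands
print(start_triage(Casualty(False, True, 24, True, 1.5, True)))  # DELAYED (yellow)
```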
Result:
- Google Bard achieved an accuracy of 60%, while ChatGPT reached 26.67%, a statistically significant difference (p = 0.002); an illustrative sketch of comparing such proportions follows this list.
- For comparison, medical students in a previous study had an accuracy rate of 64.3%.
- No statistically significant difference was found between Google Bard and the medical students (p = 0.211).
- Qualitative content analysis across the four categories indicated that Google Bard outperformed ChatGPT.
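This summary does not state which statistical test produced the reported p-values. Purely as an illustration of how two accuracy proportions over 15 scenarios could be compared, here is a minimal Python sketch using Fisher's exact test; the choice of test is an assumption, and a simple aggregate comparison like this will not necessarily reproduce the paper's reported p = 0.002, which may come from a different or more granular analysis.

```python
from scipy.stats import fisher_exact

N_SCENARIOS = 15      # MCI scenarios in the validated questionnaire
bard_correct = 9      # 9/15 = 60% (as reported)
chatgpt_correct = 4   # 4/15 ≈ 26.67% (as reported)

# 2x2 contingency table: rows = model, columns = (correct, incorrect)
table = [
    [bard_correct, N_SCENARIOS - bard_correct],
    [chatgpt_correct, N_SCENARIOS - chatgpt_correct],
]

odds_ratio, p_value = fisher_exact(table)
print(f"Bard {bard_correct}/{N_SCENARIOS} vs ChatGPT {chatgpt_correct}/{N_SCENARIOS}: "
      f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```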