Potential Use of ChatGPT for Patient Information in Periodontology: A Descriptive Pilot Study

dc.contributor.author: Babayigit, Osman
dc.contributor.author: Eroglu, Zeynep Tastan
dc.contributor.author: Sen, Dilek Ozkan
dc.contributor.author: Yarkac, Fatma Ucan
dc.date.accessioned: 2024-02-23T14:44:57Z
dc.date.available: 2024-02-23T14:44:57Z
dc.date.issued: 2023
dc.department: NEÜ
dc.description.abstract:
Objectives: The aim of this study was to evaluate the accuracy and completeness of the answers given by Chat Generative Pre-trained Transformer (ChatGPT) (OpenAI OpCo, LLC, San Francisco, CA) to the most frequently asked questions on different topics in the field of periodontology.
Methods: ChatGPT was used to generate the 10 questions most frequently asked by patients about each of seven periodontology topics (periodontal diseases, peri-implant diseases, tooth sensitivity, gingival recessions, halitosis, dental implants, and periodontal surgery). The resulting set of 70 questions (10 per topic) was then submitted to ChatGPT to obtain responses. The documented responses were assessed by specialists in periodontology using two Likert scales: accuracy was rated from one to six and completeness from one to three.
Results: The median accuracy score across all responses was six and the median completeness score was two. The mean scores for accuracy and completeness were 5.50 ± 0.23 and 2.34 ± 0.24, respectively. ChatGPT's responses to the questions most frequently asked by patients seeking information in periodontology were at least nearly completely correct in terms of accuracy and adequate in terms of completeness. There was a statistically significant difference between topics in both accuracy and completeness (P < 0.05). The highest and lowest accuracy scores were obtained for peri-implant diseases and gingival recession, respectively, while the highest and lowest completeness scores were obtained for gingival recession and dental implants, respectively.
Conclusions: The use of large language models has become increasingly prevalent, extending to patients in the healthcare domain. While ChatGPT does not offer absolute precision or comprehensive results without expert supervision, those in the field of periodontology can use it as an informational resource, provided its potential for inaccuracies is acknowledged.
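The abstract describes submitting 70 ChatGPT-generated patient questions (10 per topic), rating each response on Likert scales for accuracy (1-6) and completeness (1-3), and testing for differences across topics (P < 0.05). The abstract does not name the statistical test used; the sketch below is an illustration only, assuming a non-parametric Kruskal-Wallis comparison and using made-up Likert ratings rather than the study's data.

```python
# Minimal sketch, not the authors' analysis: summarize hypothetical Likert
# accuracy ratings per topic and compare topics with a Kruskal-Wallis test
# (the choice of test is an assumption; the abstract only reports P < 0.05).
from statistics import mean, median
from scipy.stats import kruskal

# Hypothetical accuracy ratings (1-6 Likert), 10 questions per topic.
accuracy_by_topic = {
    "peri-implant diseases": [6, 6, 6, 5, 6, 6, 6, 5, 6, 6],
    "gingival recession":    [5, 4, 6, 5, 5, 4, 6, 5, 5, 5],
    "dental implants":       [6, 5, 6, 6, 5, 6, 5, 6, 6, 5],
}

# Per-topic summaries (the study reports median scores and mean ± SD).
for topic, scores in accuracy_by_topic.items():
    print(f"{topic}: median={median(scores)}, mean={mean(scores):.2f}")

# Non-parametric comparison across topics; p < 0.05 indicates topics differ.
statistic, p_value = kruskal(*accuracy_by_topic.values())
print(f"Kruskal-Wallis H={statistic:.2f}, p={p_value:.4f}")
```

The same pattern would apply to the completeness ratings (1-3 scale); only the input dictionary changes.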
dc.identifier.doi: 10.7759/cureus.48518
dc.identifier.issn: 2168-8184
dc.identifier.issue: 11
dc.identifier.pmid: 38073946
dc.identifier.uri: https://doi.org/10.7759/cureus.48518
dc.identifier.uri: https://hdl.handle.net/20.500.12452/17156
dc.identifier.volume: 15
dc.identifier.wos: WOS:001106475700019
dc.indekslendigikaynak: Web of Science
dc.indekslendigikaynak: PubMed
dc.language.iso: en
dc.publisher: Springer Nature
dc.relation.ispartof: Cureus Journal of Medical Science
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: ChatGPT
dc.subject: Chat Generative Pre-trained Transformer
dc.subject: Large Language Models (LLMs)
dc.subject: Patient Information
dc.subject: Oral Medicine and Periodontology
dc.subject: Dental Care
dc.subject: Artificial Intelligence in Dentistry
dc.title: Potential Use of ChatGPT for Patient Information in Periodontology: A Descriptive Pilot Study
dc.type: Article
