Use this identifier to reference this record: http://hdl.handle.net/10071/36775
Authors: Santos, J. M.
Shah, S.
Gupta, A.
Mann, A.
Vaz, A.
Caldwell, B. E.
Scholz, R.
Awad, P.
Allemandi, R.
Faust, D.
Banka, H.
Rousmaniere, T.
Date: 2026
Title: Evaluating the clinical safety of large language models in response to high-risk mental health disclosures
Journal title: Practice Innovations
Volume: N/A
Citation: Santos, J. M., Shah, S., Gupta, A., Mann, A., Vaz, A., Caldwell, B. E., Scholz, R., Awad, P., Allemandi, R., Faust, D., Banka, H., & Rousmaniere, T. (2026). Evaluating the clinical safety of large language models in response to high-risk mental health disclosures. Practice Innovations. https://doi.org/10.1037/pri0000316
ISSN: 2377-889X
DOI (Digital Object Identifier): 10.1037/pri0000316
Keywords: Large language models
Crisis intervention
Ethics
Mental health
Abstract: As large language models increasingly mediate emotionally sensitive conversations, especially in mental health contexts, their ability to recognize and respond to high-risk situations becomes a matter of public safety. This study evaluates the responses of six popular large language models—Claude, Gemini, DeepSeek, ChatGPT, Grok 3, and LLAMA—to user prompts simulating crisis-level mental health disclosures. Drawing on a coding framework developed by licensed clinicians, five safety-oriented behaviors were assessed: explicit risk acknowledgment, empathy, encouragement to seek help, provision of specific resources, and invitation to continue the conversation. Claude outperformed all others in a global assessment, while Grok 3, ChatGPT, and LLAMA underperformed across multiple domains. Notably, most models exhibited empathy, but few consistently provided practical support or kept the conversation open. These findings suggest that while large language models show potential for emotionally attuned communication, none currently meets satisfactory clinical standards for crisis response. Ongoing development and targeted fine-tuning are essential to ensure ethical deployment of AI in mental health settings.
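The five-behavior rubric described in the abstract lends itself to a simple structured representation. The sketch below shows, in Python, one hypothetical way clinician codings along the study's five dimensions could be recorded and aggregated per model; the field names and the binary present/absent scheme are illustrative assumptions, not the authors' actual coding instrument.

```python
from dataclasses import dataclass, fields

# Hypothetical encoding of the study's five safety-oriented behaviors.
# Field names and the binary present/absent scheme are illustrative
# assumptions, not the published coding framework.
@dataclass
class SafetyCoding:
    risk_acknowledgment: bool   # explicit acknowledgment of risk
    empathy: bool               # empathic, emotionally attuned language
    help_encouragement: bool    # encourages seeking professional help
    specific_resources: bool    # provides concrete resources (e.g., hotlines)
    continued_engagement: bool  # invites the user to keep talking

def behavior_rates(codings: list[SafetyCoding]) -> dict[str, float]:
    """Share of coded responses exhibiting each behavior for one model."""
    n = len(codings)
    return {
        f.name: sum(getattr(c, f.name) for c in codings) / n
        for f in fields(SafetyCoding)
    }

# Example: two coded responses from one (hypothetical) model.
coded = [
    SafetyCoding(True, True, True, False, False),
    SafetyCoding(True, True, False, False, True),
]
print(behavior_rates(coded))
# {'risk_acknowledgment': 1.0, 'empathy': 1.0, 'help_encouragement': 0.5, ...}
```

A binary presence/absence scheme keeps per-behavior rates easy to compare across models; the study's actual instrument may well use ordinal or multi-rater scales instead.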
Peer reviewed: Yes
Access: Open Access
Appears in collections: CIES-RI - Articles in international peer-reviewed scientific journals

Files in this record:
File                 Size      Format
article_117447.pdf   349.6 kB  Adobe PDF  View/Open



All records in the repository are protected by copyright, with all rights reserved.