Use this identifier to reference this record:
http://hdl.handle.net/10071/36775

Full record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Santos, J. M. | - |
| dc.contributor.author | Shah, S. | - |
| dc.contributor.author | Gupta, A. | - |
| dc.contributor.author | Mann, A. | - |
| dc.contributor.author | Vaz, A. | - |
| dc.contributor.author | Caldwell, B. E. | - |
| dc.contributor.author | Scholz, R. | - |
| dc.contributor.author | Awad, P. | - |
| dc.contributor.author | Allemandi, R. | - |
| dc.contributor.author | Faust, D. | - |
| dc.contributor.author | Banka, H. | - |
| dc.contributor.author | Rousmaniere, T. | - |
| dc.date.accessioned | 2026-03-31T10:42:24Z | - |
| dc.date.available | 2026-03-31T10:42:24Z | - |
| dc.date.issued | 2026 | - |
| dc.identifier.citation | Santos, J. M., Shah, S., Gupta, A., Mann, A., Vaz, A., Caldwell, B. E., Scholz, R., Awad, P., Allemandi, R., Faust, D., Banka, H., Rousmaniere, T. (2026). Evaluating the clinical safety of large language models in response to high-risk mental health disclosures. Practice Innovations. https://doi.org/10.1037/pri0000316 | - |
| dc.identifier.issn | 2377-889X | - |
| dc.identifier.uri | http://hdl.handle.net/10071/36775 | - |
| dc.description.abstract | As large language models increasingly mediate emotionally sensitive conversations, especially in mental health contexts, their ability to recognize and respond to high-risk situations becomes a matter of public safety. This study evaluates the responses of six popular large language models—Claude, Gemini, DeepSeek, ChatGPT, Grok 3, and LLAMA—to user prompts simulating crisis-level mental health disclosures. Drawing on a coding framework developed by licensed clinicians, five safety-oriented behaviors were assessed: explicit risk acknowledgment, empathy, encouragement to seek help, provision of specific resources, and invitation to continue the conversation. Claude outperformed all others in a global assessment, while Grok 3, ChatGPT, and LLAMA underperformed across multiple domains. Notably, most models exhibited empathy, but few consistently provided practical support or kept the conversation open. These findings suggest that while large language models show potential for emotionally attuned communication, none currently meet satisfactory clinical standards for crisis response. Ongoing development and targeted fine-tuning are essential to ensure ethical deployment of AI in mental health settings. | eng |
| dc.language.iso | eng | - |
| dc.publisher | American Psychological Association | - |
| dc.rights | openAccess | - |
| dc.subject | Large language models | eng |
| dc.subject | Crisis intervention | eng |
| dc.subject | Ethics | eng |
| dc.subject | Mental health | eng |
| dc.title | Evaluating the clinical safety of large language models in response to high-risk mental health disclosures | eng |
| dc.type | article | - |
| dc.peerreviewed | yes | - |
| dc.volume | N/A | - |
| dc.date.updated | 2026-03-31T11:35:20Z | - |
| dc.description.version | info:eu-repo/semantics/acceptedVersion | - |
| dc.identifier.doi | 10.1037/pri0000316 | - |
| iscte.identifier.ciencia | https://ciencia.iscte-iul.pt/id/ci-pub-117447 | - |
| iscte.alternateIdentifiers.wos | WOS:RC:164925880_S24 | - |
| iscte.journal | Practice Innovations | - |
| Appears in collections: | CIES-RI - Articles in international peer-reviewed scientific journals | |
Files in this record:
| File | Size | Format | |
|---|---|---|---|
| article_117447.pdf | 349.6 kB | Adobe PDF | View/Open |
All records in the repository are protected by copyright law, with all rights reserved.