Source |
Dark Reading |
Identifier |
8648063 |
Publication date |
2025-02-11 14:56:58 (vue: 2025-02-11 15:08:24) |
Title |
DeepSeek AI Fails Multiple Security Tests, Raising Red Flag for Businesses |
Text |
The popular generative AI (GenAI) model allows hallucinations, easily avoidable guardrails, susceptibility to jailbreaking and malware creation requests, and more at critically high rates, researchers find. |
Notes |
★★★
|
Sent |
Yes |
Digest |
allows avoidable businesses creation critically deepseek easily fails find flag genai generative guardrails hallucinations high jailbreaking malware model more multiple popular raising rates red requests researchers security susceptibility tests |
Tags |
Malware
|
Stories |
|
Move |
|