www.secnews.physaphae.fr This is the RSS 2.0 feed from www.secnews.physaphae.fr. IT's a simple agragated flow of multiple articles soruces. Liste of sources, can be found on www.secnews.physaphae.fr. 2024-06-06T16:04:16+00:00 www.secnews.physaphae.fr AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Sécuriser l'IA<br>Securing AI AI governance  framework model like the NIST AI RMF to enable business innovation and manage risk is just as important as adopting guidelines to secure AI. Responsible AI starts with securing AI by design and securing AI with Zero Trust architecture principles. Vulnerabilities in ChatGPT A recent discovered vulnerability found in version gpt-3.5-turbo exposed identifiable information. The vulnerability was reported in the news late November 2023. By repeating a particular word continuously to the chatbot it triggered the vulnerability. A group of security researchers with Google DeepMind, Cornell University, CMU, UC Berkeley, ETH Zurich, and the University of Washington studied the “extractable memorization” of training data that an adversary can extract by querying a ML model without prior knowledge of the training dataset. The researchers’ report show an adversary can extract gigabytes of training data from open-source language models. In the vulnerability testing, a new developed divergence attack on the aligned ChatGPT caused the model to emit training data 150 times higher. Findings show larger and more capable LLMs are more vulnerable to data extraction attacks, emitting more memorized training data as the volume gets larger. While similar attacks have been documented with unaligned models, the new ChatGPT vulnerability exposed a successful attack on LLM models typically built with strict guardrails found in aligned models. This raises questions about best practices and methods in how AI systems could better secure LLM models, build training data that is reliable and trustworthy, and protect privacy. U.S. and UK’s Bilateral cybersecurity effort on securing AI The US Cybersecurity Infrastructure and Security Agency (CISA) and UK’s National Cyber Security Center (NCSC) in cooperation with 21 agencies and ministries from 18 other countries are supporting the first global guidelines for AI security. The new UK-led guidelines for securing AI as part of the U.S. and UK’s bilateral cybersecurity effort was announced at the end of November 2023. The pledge is an acknowledgement of AI risk by nation leaders and government agencies worldwide and is the beginning of international collaboration to ensure the safety and security of AI by design. The Department of Homeland Security (DHS) CISA and UK NCSC joint guidelines for Secure AI system Development aims to ensure cybersecurity decisions are embedded at every stage of the AI development lifecycle from the start and throughout, and not as an afterthought. Securing AI by design Securing AI by design is a key approach to mitigate cybersecurity risks and other vulnerabilities in AI systems. Ensuring the entire AI system development lifecycle process is secure from design to development, deployment, and operations and maintenance is critical to an organization realizing its full benefits. The guidelines documented in the Guidelines for Secure AI System Development aligns closely to software development life cycle practices defined in the NSCS’s Secure development and deployment guidance and the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF). The 4 pillars that embody the Guidelines for Secure AI System Development offers guidance for AI providers of any systems whether newly created from the ground up or built on top of tools and services provided from]]> 2024-03-07T11:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/securing-ai www.secnews.physaphae.fr/article.php?IdArticle=8460259 False Tool,Vulnerability,Threat,Mobile,Medical,Cloud,Technical ChatGPT 2.0000000000000000 RiskIQ - cyber risk firms (now microsoft) Rester en avance sur les acteurs de la menace à l'ère de l'IA<br>Staying ahead of threat actors in the age of AI 2024-03-05T19:03:47+00:00 https://community.riskiq.com/article/ed40fbef www.secnews.physaphae.fr/article.php?IdArticle=8459485 False Ransomware,Malware,Tool,Vulnerability,Threat,Studies,Medical,Technical ChatGPT,APT 28,APT 4 2.0000000000000000 The Hacker News - The Hacker News est un blog de news de hack (surprenant non?) Guide: comment VCISOS, MSPS et MSSP peuvent protéger leurs clients des risques Gen AI<br>Guide: How vCISOs, MSPs and MSSPs Can Keep their Customers Safe from Gen AI Risks Download the free guide, "It\'s a Generative AI World: How vCISOs, MSPs and MSSPs Can Keep their Customers Safe from Gen AI Risks." ChatGPT now boasts anywhere from 1.5 to 2 billion visits per month. Countless sales, marketing, HR, IT executive, technical support, operations, finance and other functions are feeding data prompts and queries into generative AI engines. They use these tools to write]]> 2023-11-08T16:30:00+00:00 https://thehackernews.com/2023/11/guide-how-vcisos-msps-and-mssps-can.html www.secnews.physaphae.fr/article.php?IdArticle=8407813 False Tool,Technical ChatGPT 2.0000000000000000