One Article Review

Accueil - L'article:
Source AlienVault.webp AlienVault Lab Blog
Identifiant 8460259
Date de publication 2024-03-07 11:00:00 (vue: 2024-03-07 11:07:23)
Titre Sécuriser l'IA
Securing AI
Texte With the proliferation of AI/ML enabled technologies to deliver business value, the need to protect data privacy and secure AI/ML applications from security risks is paramount. An AI governance  framework model like the NIST AI RMF to enable business innovation and manage risk is just as important as adopting guidelines to secure AI. Responsible AI starts with securing AI by design and securing AI with Zero Trust architecture principles. Vulnerabilities in ChatGPT A recent discovered vulnerability found in version gpt-3.5-turbo exposed identifiable information. The vulnerability was reported in the news late November 2023. By repeating a particular word continuously to the chatbot it triggered the vulnerability. A group of security researchers with Google DeepMind, Cornell University, CMU, UC Berkeley, ETH Zurich, and the University of Washington studied the “extractable memorization” of training data that an adversary can extract by querying a ML model without prior knowledge of the training dataset. The researchers’ report show an adversary can extract gigabytes of training data from open-source language models. In the vulnerability testing, a new developed divergence attack on the aligned ChatGPT caused the model to emit training data 150 times higher. Findings show larger and more capable LLMs are more vulnerable to data extraction attacks, emitting more memorized training data as the volume gets larger. While similar attacks have been documented with unaligned models, the new ChatGPT vulnerability exposed a successful attack on LLM models typically built with strict guardrails found in aligned models. This raises questions about best practices and methods in how AI systems could better secure LLM models, build training data that is reliable and trustworthy, and protect privacy. U.S. and UK’s Bilateral cybersecurity effort on securing AI The US Cybersecurity Infrastructure and Security Agency (CISA) and UK’s National Cyber Security Center (NCSC) in cooperation with 21 agencies and ministries from 18 other countries are supporting the first global guidelines for AI security. The new UK-led guidelines for securing AI as part of the U.S. and UK’s bilateral cybersecurity effort was announced at the end of November 2023. The pledge is an acknowledgement of AI risk by nation leaders and government agencies worldwide and is the beginning of international collaboration to ensure the safety and security of AI by design. The Department of Homeland Security (DHS) CISA and UK NCSC joint guidelines for Secure AI system Development aims to ensure cybersecurity decisions are embedded at every stage of the AI development lifecycle from the start and throughout, and not as an afterthought. Securing AI by design Securing AI by design is a key approach to mitigate cybersecurity risks and other vulnerabilities in AI systems. Ensuring the entire AI system development lifecycle process is secure from design to development, deployment, and operations and maintenance is critical to an organization realizing its full benefits. The guidelines documented in the Guidelines for Secure AI System Development aligns closely to software development life cycle practices defined in the NSCS’s Secure development and deployment guidance and the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF). The 4 pillars that embody the Guidelines for Secure AI System Development offers guidance for AI providers of any systems whether newly created from the ground up or built on top of tools and services provided from
Envoyé Oui
Condensat “extractable ·       explore       secure      secure  framework 150 2023 about accelerated access access – accounting acknowledgement acquired actions activities address addresses adopting adoption adopts adversary afterthought agencies agency agreements ai/ml aims algorithm aligned aligns all also an ai analytics analytics – analyze announced anomalous any anything apis application applications applications – applying approach appropriate apps architecture architecture includes are around assess asset assets associated assurance attack attacks attempts audio audit authentication authorization automate automation awareness balanced based been before beginning behavior benchmarks benefits berkeley best better between beyond bias bilateral biometric birth build built business businesses can can’t capabilities capable caused center centered centric chain characteristics chatbot chatgpt cisa civil classification closely cloud cmu coding collaboration communicate compliance complies components components: compromise concepts conditional confidential configuration consider considered consulting consumer contain contains continuous continuously controls cooperation cornell could countries covers create created critical criticality cyber cybersecurity cycle data dataset date debt decisions deepmind default defined deliver department deployed deployment deployment guidance design detect develop developed developers developing development development offers device devices dhs digital discovered distribution divergence dlp dna dns document documentation documented each ecosystem educate effectively effort efforts embedded embody emit emitting enable enabled encrypt encryption end endpoint endpoints  enforce enforcing ensure ensuring enterprise entire entities entity environment escalation especially eth ethical evaluate events every everything execution exfiltrate expanded experience exploring exposed extract extraction facial failure features financial findings fingerprint first following follows forging found foundation framework from full functionality generative gets gigabytes global google govern governance governance – government gpt granular ground group guardrails guidance guidelines handwriting hardware has have help high higher homeland how http human humans humans  hybrid hyperconnected hyperparameters identifiable identification identify identity ignore images implement implementing implementing zero important improvement incident incidents includes increasingly information infrastructure innovation inside insider instead institute intended internally international internet inventoried investigate iot isolate it’s its joint just key knowledge language larger late lateral leaders learn learned learning least led legal lessons liberties life lifecycle like limitations limited llm llms location logging logs loss maintain maintenance makes manage management medical memorization” memorized methods mfa microsegmentation ministries mitigate mobile model modeling models modern modes modify modular monitor monitoring more movements multi name nation national nations ncsc need network networks  new newly news nist not nothing november numbers offs once one open operations orchestrate orchestration – order organization organization’s organizations other others our cybersecurity oversight paramount part particular party per performance perimeter persistent personal phishing photographic pii pillars that pipeline plans pledge policies policy potential practices preserving prevent principle principles prior priority privacy privacy remains private privilege procedures process processes processing product proliferation protect protecting protection protocols provide provided providers provides public quality quantify querying questions quickly r&d race raises realizing recent recognition redu
Tags Tool Vulnerability Threat Mobile Medical Cloud Technical
Stories ChatGPT
Notes ★★
Move


L'article ne semble pas avoir été repris aprés sa publication.


L'article ne semble pas avoir été repris sur un précédent.
My email: