One Article Review

Home - The article:
Source: Proofpoint
Identifier: 8635729
Publication date: 2025-01-10 09:59:30 (viewed: 2025-01-10 19:08:10)
Title: Training Your LLM Dragons - Why DSPM Is the Key to AI Security
Text: The transformative potential of AI comes at a price. Because it's complex and relies on sensitive data, it's a prime target for bad actors. Notably, two AI implementations - custom large language models (LLMs) and tools like Microsoft Copilot - pose unique challenges for most organizations.

Custom LLMs often need to be trained extensively on an organization's data, which creates the risk that sensitive data will be embedded into the models. Microsoft Copilot, meanwhile, integrates with enterprise applications and processes; if it's not governed properly, personal, financial and proprietary data might get exposed.

To prevent data from being exposed and to ensure compliance, organizations need a robust approach to securing their AI implementations. What follows are some tips for securing custom LLMs and AI tools like Copilot, as well as details about how data security posture management (DSPM) can help.

What is DSPM - and why is it critical for AI implementations?

Data security posture management (DSPM) is both a strategy and a set of tools. Its role is to discover, classify and monitor valuable and sensitive data, as well as user access, across an organization's cloud and on-premises environments.

For AI implementations like custom LLMs and Microsoft Copilot, DSPM is crucial for ensuring that sensitive or regulated data is properly governed. This reduces the risk of data leaking or being misused.

Here are some key threats to AI implementations:

- Prompt injection attacks. Crafty prompts can trick models into indirectly disclosing sensitive data. This enables bad actors to bypass traditional security measures.
- Training data poisoning. Threat actors can embed sensitive or biased data into training sets. This can lead to unethical or insecure model outputs.
- Data leakage in outputs. Poorly configured models may inadvertently expose private data during user interactions or as part of their outputs.
- Compliance failures.
AI systems that mishandle regulated data risk steep fines under laws like GDPR, CCPA or HIPAA. When this happens, customer trust is lost.

Use case 1: securing custom LLMs

Custom LLMs allow organizations to fine-tune AI models to meet their specific business needs. However, they also create significant risks. Sensitive data can enter the model during training or through other interactions, which can lead to data being disclosed inadvertently.

Custom LLMs can introduce these risks:

- Sensitive data being embedded in models during training
- Inadvertent data leakage in model outputs
- Compliance failures if regulated data, like personally identifiable information (PII), is mishandled
- Security vulnerabilities that lead to training data poisoning or prompt injection attacks

These risks highlight why it's so important to audit training data, monitor data flows and enforce strict access controls.

Tips for securing custom LLMs

Audit and sanitize training data
- Regularly review data sets. Look for sensitive or regulated data before using that data in training.
- Anonymize data with masking or encryption techniques. This will help to protect PII and other critical data.

Monitor data lineage
- Use tools like Proofpoint to map how data flows from ingestion to model training and outputs.
- Ensure traceability to maintain compliance and quickly address vulnerabilities.

Set strict access controls
- Enforce role-based permissions for data scientists and engineers who interact with training data sets.
- Limit access to sensitive data sets to only those who absolutely need it.

Proactively monitor outputs
- Analyze model responses to ensure that they don't reveal sensitive data. This is particularly important after updates or retraining cycles.

How Proofpoint helps

The Proofpoint DSPM solution can automatically disco
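The "audit and sanitize training data" step above can be sketched in a few lines of Python. This is a minimal illustration of masking PII before records enter a training set, not Proofpoint's implementation; the regex patterns and the `mask_pii`/`sanitize_corpus` names are assumptions for the example, and a real DSPM deployment would use much richer classifiers.

```python
import re

# Hypothetical patterns for two common PII classes (illustrative only;
# real classifiers cover many more data types and edge cases).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with typed placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def sanitize_corpus(records):
    """Yield (masked_record, contained_pii) pairs so flagged
    records can also be routed to a human audit queue."""
    for record in records:
        masked = mask_pii(record)
        yield masked, (masked != record)

if __name__ == "__main__":
    docs = [
        "Contact alice@example.com for access.",
        "Quarterly revenue grew 4%.",
    ]
    for masked, flagged in sanitize_corpus(docs):
        print(flagged, masked)
```

Only the masked text reaches the training pipeline, while the boolean flag supports the auditing and traceability goals described above.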
Notes ★★★
Sent: Yes
Tags Tool Vulnerability Threat Cloud
Stories


The article does not appear to have been picked up again after its publication.


The article does not appear to have been picked up in a previous issue.
My email: