One Article Review

Home - The article:
Source Proofpoint
Identifier 8583819
Publication date 2024-09-24 08:14:13 (viewed: 2024-09-24 13:17:14)
Title Generative AI: How Can Organizations Ride the GenAI Wave Safely and Contain Insider Threats?
Text The use of generative AI (GenAI) has surged over the past year. This has led to a remarkable shift in news headlines from 2023 to 2024. Last year, Forbes reported that JPMorgan Chase, Amazon and several U.S. universities were banning or limiting the use of ChatGPT. What's more, Amazon and Samsung were reported to have found employees sharing code and other confidential data with OpenAI's chatbot.

Compare that to headlines in 2024. Now, the focus is on how AI assistants are being adopted by corporations everywhere. J.P. Morgan is rolling out ChatGPT to 60,000 employees to help them work more efficiently. And Amazon recently announced that by using GenAI to migrate 30,000 applications onto a new platform, it saved the equivalent of 4,500 developer years as well as $260 million.

The 2024 McKinsey Global Survey on AI also shows how much things have changed. It found that 65% of respondents say their organizations now use GenAI regularly. That's nearly double the number from 10 months ago.

What this trend indicates most is that organizations feel competitive pressure to either embrace GenAI or risk falling behind. So, how can they mitigate their risks? That's what we're here to discuss.

Generative AI: A new insider risk

Given its nature as a productivity tool, GenAI opens the door to insider risks from careless, compromised or malicious users.

Careless insiders. These users may input sensitive data, such as customer information, proprietary algorithms or internal strategies, into GenAI tools. Or they may use them to create content that does not align with a company's legal or regulatory standards, like documents with discriminatory language or images with inappropriate visuals. This, in turn, creates legal risk. Additionally, some users may use GenAI tools that are not authorized, which leads to security vulnerabilities and compliance issues.

Compromised insiders. Access to GenAI tools can be compromised by threat actors. Attackers use this access to extract, generate or share sensitive data with external parties.

Malicious insiders. Some insiders actively want to cause harm. They might intentionally leak sensitive information into public GenAI tools. Or, if they have access to proprietary models or datasets, they might use those tools to create competing products. They could also use GenAI to create or alter records, making it difficult for auditors to identify discrepancies or non-compliance.

To mitigate these risks, organizations need a mix of human-centric technical controls, internal policies and strategies. Not only do they need to be able to monitor AI usage and data access, but they also need measures in place, like employee training, as well as a solid ethical framework.

Human-centric security for GenAI

Safe adoption of this technology is top of mind for most CISOs. Proofpoint has an adaptive, human-centric information protection solution that can help. It provides visibility and control for GenAI use in your organization, and this visibility extends across endpoints, the cloud and the web. Here's how:

Gain visibility into shadow GenAI tools:
- Track the use of over 600 GenAI sites by user, group or department
- Monitor GenAI app usage with context based on user risk
- Identify third-party AI app authorizations connected to your identity store
- Receive alerts when corporate credentials are used for GenAI services

Enforce acceptable use policies for GenAI tools and prevent data loss:
- Block web uploads and the pasting of sensitive data to GenAI sites
- Prevent typing of sensitive data into tools like ChatGPT, Gemini, Claude, Copilot and more
- Revoke access authorizations for third-party GenAI apps
- Monitor the use of Copilot for Microsoft 365 and alert when sensitive files are accessed via emails, files and Teams messages
- Apply Microsoft Information Protection (MIP
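The enforcement controls above (blocking pastes or typing of sensitive data into GenAI sites) boil down to screening text before it reaches a GenAI tool. A minimal sketch of that idea follows; the `screen_prompt` helper and the regex patterns are hypothetical illustrations, not Proofpoint's implementation, and a real DLP product would rely on managed detectors rather than a handful of regexes.

```python
import re

# Hypothetical patterns for a few common sensitive-data categories.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_categories) for text bound for a GenAI tool.

    allowed is False when any sensitive-data pattern matches, so the
    caller can block the upload/paste and alert on the matched categories.
    """
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    return (not hits, hits)

# Example: an SSN-like number in the prompt causes a block.
allowed, hits = screen_prompt("Summarize account 123-45-6789 for me")
```

In practice this kind of check runs at the endpoint or browser layer, where it can intercept the paste or keystrokes before the data ever leaves the device.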
Notes ★★
Sent Yes
Tags Tool Vulnerability Threat Prediction Cloud Technical
Stories ChatGPT


The article does not appear to have been picked up after its publication.


The article does not appear to have been picked up from a previous source.