Last one
Src |
Date (GMT) |
Titre |
Description |
Tags |
Stories |
Notes |
 |
2025-01-23 13:01:29 |
You are Not Alone, ChatGPT is Down (lien direct) |
ChatGPT Outage: Service Down on Jan 23, 2025. Learn about the potential causes (DDoS or technical glitch) and…
ChatGPT Outage: Service Down on Jan 23, 2025. Learn about the potential causes (DDoS or technical glitch) and… |
Technical
|
ChatGPT
|
★★
|
 |
2024-11-22 21:40:27 |
Faux ChatGPT, Claude API Packages Deliver JarkaStealer (lien direct) |
Attackers are betting that the hype around generative AI (GenAI) is attracting less technical, less cautious developers who might be more inclined to download an open source Python code package for free access, without vetting it or thinking twice.
Attackers are betting that the hype around generative AI (GenAI) is attracting less technical, less cautious developers who might be more inclined to download an open source Python code package for free access, without vetting it or thinking twice. |
Technical
|
ChatGPT
|
★★
|
 |
2024-09-24 08:14:13 |
AI générative: Comment les organisations peuvent-elles monter sur la vague Genai en toute sécurité et contenir des menaces d'initiés? Generative AI: How Can Organizations Ride the GenAI Wave Safely and Contain Insider Threats? (lien direct) |
The use of generative AI (GenAI) has surged over the past year. This has led to a shift in news headlines from 2023 to 2024 that\'s quite remarkable. Last year, Forbes reported that JPMorgan Chase, Amazon and several U.S. universities were banning or limiting the use of ChatGPT. What\'s more, Amazon and Samsung were reported to have found employees sharing code and other confidential data with OpenAI\'s chatbot.
Compare that to headlines in 2024. Now, the focus is on how AI assistants are being adopted by corporations everywhere. J.P. Morgan is rolling out ChatGPT to 60,000 employees to help them work more efficiently. And Amazon recently announced that by using GenAI to migrate 30,000 applications onto a new platform it had saved the equivalent of 4,500 developer years as well as $260 million.
The 2024 McKinsey Global Survey on AI also shows how much things have changed. It found that 65% of respondents say that their organizations are now using GenAI regularly. That\'s nearly double the number from 10 months ago.
What this trend indicates most is that organizations feel the competitive pressure to either embrace GenAI or risk falling behind. So, how can they mitigate their risks? That\'s what we\'re here to discuss.
Generative AI: A new insider risk
Given its nature as a productivity tool, GenAI opens the door to insider risks by careless, compromised or malicious users.
Careless insiders. These users may input sensitive data-like customer information, proprietary algorithms or internal strategies-into GenAI tools. Or they may use them to create content that does not align with a company\'s legal or regulatory standards, like documents with discriminatory language or images with inappropriate visuals. This, in turn, creates legal risks. Additionally, some users may use GenAI tools that are not authorized, which leads to security vulnerabilities and compliance issues.
Compromised insiders. Access to GenAI tools can be compromised by threat actors. Attackers use this access to extract, generate or share sensitive data with external parties.
Malicious insiders. Some insiders actively want to cause harm. So, they might intentionally leak sensitive information into public GenAI tools. Or, if they have access to proprietary models or datasets, they might use these tools to create competing products. They could also use GenAI to create or alter records to make it difficult for auditors to identify discrepancies or non-compliance.
To mitigate these risks, organizations need a mix of human-centric technical controls, internal policies and strategies. Not only do they need to be able to monitor AI usage and data access, but they also need to have measures in place-like employee training-as well as a solid ethical framework.
Human-centric security for GenAI
Safe adoption of this technology is top of mind for most CISOs. Proofpoint has an adaptive, human-centric information protection solution that can help. Our solution provides you with visibility and control for GenAI use in your organization. And this visibility extends across endpoints, the cloud and the web. Here\'s how:
Gain visibility into shadow GenAI tools:
Track the use of over 600 GenAI sites by user, group or department
Monitor GenAI app usage with context based on user risk
Identify third-party AI app authorizations connected to your identity store
Receive alerts when corporate credentials are used for GenAI services
Enforce acceptable use policies for GenAI tools and prevent data loss:
Block web uploads and the pasting of sensitive data to GenAI sites
Prevent typing of sensitive data into tools like ChatGPT, Gemini, Claude, Copilot and more
Revoke access authorizations for third-party GenAI apps
Monitor the use of Copilot for Microsoft 365 and alert when sensitive files are accessed via emails, files and Teams messages
Apply Microsoft Information Protection (MIP |
Tool
Vulnerability
Threat
Prediction
Cloud
Technical
|
ChatGPT
|
★★
|
 |
2024-03-07 11:00:00 |
Sécuriser l'IA Securing AI (lien direct) |
With the proliferation of AI/ML enabled technologies to deliver business value, the need to protect data privacy and secure AI/ML applications from security risks is paramount. An AI governance framework model like the NIST AI RMF to enable business innovation and manage risk is just as important as adopting guidelines to secure AI. Responsible AI starts with securing AI by design and securing AI with Zero Trust architecture principles.
Vulnerabilities in ChatGPT
A recent discovered vulnerability found in version gpt-3.5-turbo exposed identifiable information. The vulnerability was reported in the news late November 2023. By repeating a particular word continuously to the chatbot it triggered the vulnerability. A group of security researchers with Google DeepMind, Cornell University, CMU, UC Berkeley, ETH Zurich, and the University of Washington studied the “extractable memorization” of training data that an adversary can extract by querying a ML model without prior knowledge of the training dataset.
The researchers’ report show an adversary can extract gigabytes of training data from open-source language models. In the vulnerability testing, a new developed divergence attack on the aligned ChatGPT caused the model to emit training data 150 times higher. Findings show larger and more capable LLMs are more vulnerable to data extraction attacks, emitting more memorized training data as the volume gets larger. While similar attacks have been documented with unaligned models, the new ChatGPT vulnerability exposed a successful attack on LLM models typically built with strict guardrails found in aligned models.
This raises questions about best practices and methods in how AI systems could better secure LLM models, build training data that is reliable and trustworthy, and protect privacy.
U.S. and UK’s Bilateral cybersecurity effort on securing AI
The US Cybersecurity Infrastructure and Security Agency (CISA) and UK’s National Cyber Security Center (NCSC) in cooperation with 21 agencies and ministries from 18 other countries are supporting the first global guidelines for AI security. The new UK-led guidelines for securing AI as part of the U.S. and UK’s bilateral cybersecurity effort was announced at the end of November 2023.
The pledge is an acknowledgement of AI risk by nation leaders and government agencies worldwide and is the beginning of international collaboration to ensure the safety and security of AI by design. The Department of Homeland Security (DHS) CISA and UK NCSC joint guidelines for Secure AI system Development aims to ensure cybersecurity decisions are embedded at every stage of the AI development lifecycle from the start and throughout, and not as an afterthought.
Securing AI by design
Securing AI by design is a key approach to mitigate cybersecurity risks and other vulnerabilities in AI systems. Ensuring the entire AI system development lifecycle process is secure from design to development, deployment, and operations and maintenance is critical to an organization realizing its full benefits. The guidelines documented in the Guidelines for Secure AI System Development aligns closely to software development life cycle practices defined in the NSCS’s Secure development and deployment guidance and the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF).
The 4 pillars that embody the Guidelines for Secure AI System Development offers guidance for AI providers of any systems whether newly created from the ground up or built on top of tools and services provided from |
Tool
Vulnerability
Threat
Mobile
Medical
Cloud
Technical
|
ChatGPT
|
★★
|
 |
2024-03-05 19:03:47 |
Rester en avance sur les acteurs de la menace à l'ère de l'IA Staying ahead of threat actors in the age of AI (lien direct) |
## Snapshot
Over the last year, the speed, scale, and sophistication of attacks has increased alongside the rapid development and adoption of AI. Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries. At the same time, it is also important for us to understand how AI can be potentially misused in the hands of threat actors. In collaboration with OpenAI, today we are publishing research on emerging threats in the age of AI, focusing on identified activity associated with known threat actors, including prompt-injections, attempted misuse of large language models (LLM), and fraud. Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape. You can read OpenAI\'s blog on the research [here](https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors). Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors\' usage of AI. However, Microsoft and our partners continue to study this landscape closely.
The objective of Microsoft\'s partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse. As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models. In addition, we are also deeply committed to using generative AI to disrupt threat actors and leverage the power of new tools, including [Microsoft Copilot for Security](https://www.microsoft.com/security/business/ai-machine-learning/microsoft-security-copilot), to elevate defenders everywhere.
## Activity Overview
### **A principled approach to detecting and blocking threat actors**
The progress of technology creates a demand for strong cybersecurity and safety measures. For example, the White House\'s Executive Order on AI requires rigorous safety testing and government supervision for AI systems that have major impacts on national and economic security or public health and safety. Our actions enhancing the safeguards of our AI models and partnering with our ecosystem on the safe creation, implementation, and use of these models align with the Executive Order\'s request for comprehensive AI safety and security standards.
In line with Microsoft\'s leadership across AI and cybersecurity, today we are announcing principles shaping Microsoft\'s policy and actions mitigating the risks associated with the use of our AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates we track.
These principles include:
- **Identification and action against malicious threat actors\' use:** Upon detection of the use of any Microsoft AI application programming interfaces (APIs), services, or systems by an identified malicious threat actor, including nation-state APT or APM, or the cybercrime syndicates we track, Microsoft will take appropriate action to disrupt their activities, such as disabling the accounts used, terminating services, or limiting access to resources.
- **Notification to other AI service providers:** When we detect a threat actor\'s use of another service provider\'s AI, AI APIs, services, and/or systems, Microsoft will promptly notify the service provider and share relevant data. This enables the service provider to independently verify our findings and take action in accordance with their own policies.
- **Collaboration with other stakeholders:** Microsoft will collaborate with other stakeholders to regularly exchange information a |
Ransomware
Malware
Tool
Vulnerability
Threat
Studies
Medical
Technical
|
APT 28
ChatGPT
APT 4
|
★★
|
 |
2023-11-08 16:30:00 |
Guide: comment VCISOS, MSPS et MSSP peuvent protéger leurs clients des risques Gen AI Guide: How vCISOs, MSPs and MSSPs Can Keep their Customers Safe from Gen AI Risks (lien direct) |
Téléchargez le guide gratuit, "C'est un monde d'IA génératif: comment VCISO, MSPS et MSSPS peuvent protéger leurs clients des risques Gen Gen AI."
Chatgpt se vante désormais de 1,5 à 2 milliards de visites par mois.D'innombrables ventes, marketing, RH, exécutif informatique, soutien technique, opérations, finances et autres fonctions alimentent les invites de données et les requêtes dans les moteurs d'IA génératifs.Ils utilisent ces outils pour écrire
Download the free guide, "It\'s a Generative AI World: How vCISOs, MSPs and MSSPs Can Keep their Customers Safe from Gen AI Risks."
ChatGPT now boasts anywhere from 1.5 to 2 billion visits per month. Countless sales, marketing, HR, IT executive, technical support, operations, finance and other functions are feeding data prompts and queries into generative AI engines. They use these tools to write |
Tool
Technical
|
ChatGPT
|
★★
|
|