Last one
Src |
Date (GMT) |
Titre |
Description |
Tags |
Stories |
Notes |
 |
2025-04-28 11:40:00 |
Maîtriser l'adoption de l'IA avec une sécurité de bout en bout, partout Mastering AI Adoption with End-to-end Security, Everywhere (lien direct) |
> Le rythme de l'innovation dans l'IA générative n'a rien été moins loin de l'explosif. Ce qui a commencé avec les utilisateurs expérimentant des applications publiques comme Chatgpt a rapidement évolué vers une adoption généralisée d'entreprise. Les fonctionnalités de l'IA sont désormais intégrées de manière transparente dans les outils commerciaux de tous les jours, tels que les plateformes de service client comme Hidlyly, des logiciels RH comme le réseau et même les médias sociaux […]
>The pace of innovation in generative AI has been nothing short of explosive. What began with users experimenting with public apps like ChatGPT has rapidly evolved into widespread enterprise adoption. AI features are now seamlessly embedded into everyday business tools, such as customer service platforms like Gladly, HR software like Lattice, and even social media […]
|
Tool
|
ChatGPT
|
★★
|
 |
2025-04-18 15:15:00 |
[Webinaire] L'IA est déjà à l'intérieur de votre pile SaaS - Apprenez à empêcher la prochaine brèche silencieuse [Webinar] AI Is Already Inside Your SaaS Stack - Learn How to Prevent the Next Silent Breach (lien direct) |
Vos employés ne signifiaient pas d'exposer des données sensibles. Ils voulaient juste se déplacer plus vite. Ils ont donc utilisé Chatgpt pour résumer un accord. Téléchargé une feuille de calcul sur un outil amélioré en AI. A intégré un chatbot dans Salesforce. Pas grand-chose jusqu'à ce que ce soit.
Si cela semble familier, vous n'êtes pas seul. La plupart des équipes de sécurité sont déjà en retard dans la détection de la façon dont les outils d'IA remodèlent tranquillement leurs environnements SaaS. Et
Your employees didn\'t mean to expose sensitive data. They just wanted to move faster. So they used ChatGPT to summarize a deal. Uploaded a spreadsheet to an AI-enhanced tool. Integrated a chatbot into Salesforce. No big deal-until it is.
If this sounds familiar, you\'re not alone. Most security teams are already behind in detecting how AI tools are quietly reshaping their SaaS environments. And |
Tool
Cloud
|
ChatGPT
|
★★★
|
 |
2025-04-15 12:08:47 |
Générateur d'images Chatgpt abusé de faux passeport ChatGPT Image Generator Abused for Fake Passport Production (lien direct) |
> Le générateur d'images ChatGpt d'Openai a été exploité pour créer des faux passeports convaincants en quelques minutes, mettant en évidence une vulnérabilité importante dansSystèmes de vérification d'identité actuels. Cette révélation provient du rapport de menace CTRL de Cato 2025, qui souligne la démocratisation de la cybercriminalité à travers l'avènement des outils génératifs de l'IA (Genai) comme Chatgpt. Historiquement, la création de faux […]
>OpenAI’s ChatGPT image generator has been exploited to create convincing fake passports in mere minutes, highlighting a significant vulnerability in current identity verification systems. This revelation comes from the 2025 Cato CTRL Threat Report, which underscores the democratization of cybercrime through the advent of generative AI (GenAI) tools like ChatGPT. Historically, the creation of fake […]
|
Tool
Vulnerability
Threat
|
ChatGPT
|
★★★
|
 |
2025-03-31 10:46:09 |
Ce mois-ci en sécurité avec Tony Anscombe - édition de mars 2025 This month in security with Tony Anscombe – March 2025 edition (lien direct) |
D'une vulnérabilité exploitée dans un outil de chatpt tiers à une touche bizarre sur les demandes de ransomware, c'est un enveloppement sur un autre mois rempli de nouvelles de cybersécurité percutantes
From an exploited vulnerability in a third-party ChatGPT tool to a bizarre twist on ransomware demands, it\'s a wrap on another month filled with impactful cybersecurity news |
Ransomware
Tool
Vulnerability
|
ChatGPT
|
★★★
|
 |
2025-03-31 01:31:28 |
La surface d'attaque en expansion: comment un attaquant déterminé prospère dans le lieu de travail numérique en évolution d'aujourd'hui The Expanding Attack Surface: How One Determined Attacker Thrives in Today\\'s Evolving Digital Workplace (lien direct) |
Le lieu de travail moderne s'est étendu au-delà du courrier électronique. Les attaquants exploitent désormais les outils de collaboration, les relations avec les fournisseurs et la confiance humaine pour contourner les défenses et les comptes de compromis. Cette série de blogs en cinq parties sensibilise à ces tactiques d'attaque changeantes. Et il présente notre approche holistique pour protéger les utilisateurs.
Dans ce premier blog, nous suivons le processus d'un attaquant déterminé nommé Alex alors qu'il cible les victimes sans méfiance dans le lieu de travail numérique en évolution d'aujourd'hui.
Rencontrez l'attaquant: un acteur de menace persistant qui n'a pas arrêté. Que \\ l'appelle Alex. Il fait partie d'une organisation de cybercriminalité qui fonctionne comme une entreprise, patiente et très expérimentée. Ils ne comptent pas sur des victoires rapides; Ils jouent le jeu long, sachant qu'un compte compromis peut débloquer un accès précieux aux systèmes sensibles, aux données financières et aux réseaux d'entreprise.
Sa dernière cible? Une entreprise de services financiers où, comme la plupart des organisations, des employés, des partenaires et des clients modernes, communiquent et collaborent sur plusieurs canaux, notamment Slack, Teams, SMS et Zoom.
Espaces de travail numériques à risque.
Les gens de cet écosystème commercial font confiance aux outils de collaboration qu'ils utilisent chaque jour. Alex sait exactement comment exploiter cette confiance. De façon frustrante pour lui, il sait que la sécurité par e-mail dans de nombreuses organisations n'a jamais été plus forte.
Les organisations, en particulier celles qui sont des clients ProofPoint, bénéficient des avantages de la protection des menaces axée sur l'IA, qui comprend une analyse avancée du langage, une analyse complexe des données, une détection du comportement des utilisateurs anormaux et un apprentissage automatique adaptatif. Cela rend les attaques de phishing par e-mail traditionnelles beaucoup moins efficaces qu'auparavant. Alex sait que pour réussir, il doit faire évoluer sa stratégie. Au lieu d'e-mail, il vise à exploiter les équipes de Microsoft. S'il ne peut attirer qu'une seule personne de grande valeur dans un piège, il peut voler des informations d'identification, contourner le MFA et se déplacer latéralement à l'intérieur de l'organisation.
C'est pourquoi une approche de sécurité en couches est critique en raison, comme nous le verrons, aucun contrôle de sécurité unique n'est suffisant pour arrêter un attaquant persistant comme Alex.
Étape 1: Trouver la cible parfaite (reconnaissance)
Alex fait ses devoirs. En utilisant LinkedIn, les médias sociaux, le chatppt, les communiqués de presse, les offres d'emploi et les dépôts réglementaires, il identifie une organisation, éteint sa structure pour trouver des lacunes potentielles, puis choisit la personne idéale à cibler.
Après quelques recherches, Alex trouve un scénario parfait:
À partir d'un communiqué de presse d'un partenaire, il apprend que l'organisation s'est associée à un fournisseur de ressources humaines (RH) tiers pour externaliser ce département.
Il identifie une cible: Rachel, la directrice des finances seniors. Elle a l'autorité et l'accès à des données sensibles.
Étape 2: Création du leurre parfait (imitation et armement)
Alex sait que le courrier électronique n'a pas travaillé dans cette organisation car ils sont protégés par la sécurité des e-mails de preuves. Ainsi, il met en place un faux locataire Microsoft 365 et crée un faux compte de fournisseur RH qui imite leur fournisseur. Il envoie un message d'équipe à Rachel de ce qui semble être de leur fournisseur RH, affirmant qu'elle devait approuver son ajustement de dossier fiscal avant une date limite.
Le message semble digne de confiance:
Il utilise la marque de l'entreprise et un domaine de look comme le vrai fournisseur RH.
Il fait référence à une véritable échéance fiscale.
Il comprend un lien ma |
Tool
Threat
Cloud
|
ChatGPT
|
★★★
|
 |
2025-03-26 13:00:10 |
Transformer la gestion de la sécurité avec des agents et assistants de l'IA Transforming Security Management with AI Agents and Assistants (lien direct) |
> Les attaquants utilisent déjà l'IA, mais vous pouvez retourner le feu en déployant vos propres outils de cybersécurité alimentés par l'IA. Se tourner vers General Use LLMS comme Chatgpt ou Deepseek n'est pas une option pour la gestion de la sécurité car ils ne sont pas spécialisés pour la sécurité du réseau et les risques d'exposition des données sensibles. Mais les assistants du Genai de qualité en entreprise et les agents de l'IA ont le potentiel de fournir tous les avantages du Genai pour vous aider à rester en avance sur les attaques alimentées par l'IA, sans exposer votre organisation aux risques inhérents à l'utilisation d'outils Genai à usage général. Les avantages des assistants Genai comprennent la rationalisation des opérations, l'économie d'économie de temps et de coûts, […]
>Attackers are already using AI, but you can return fire by deploying your own AI-powered cyber security tools. Turning to general use LLMs like ChatGPT or DeepSeek is not an option for security management as they are not specialized for network security and risk exposing sensitive data. But enterprise-grade, purpose built GenAI assistants and AI agents have the potential to provide all the benefits of GenAI to help you stay ahead of AI-powered attacks, without exposing your organization to the inherent risks of using general purpose GenAI tools. The benefits of GenAI assistants include streamlining operations, saving time and costs, […]
|
Tool
|
ChatGPT
|
★★★
|
 |
2025-03-07 09:00:00 |
PocketPal AI, l\'assistant IA 100% local sur Android / iOS (lien direct) |
Je suis un grand fan de LLM, et j’ai bien sûr les applications Claude d’Anthropic et ChatGPT installés sur mon ordinateur et mon smartphone. Mais ces outils ont leurs défaut. Déjà si vous n’avez pas de connexion internet, bah c’est mort ! Et ne parlons même pas de la confidentialité de vos conversations qui transitent par des serveurs distants… Heureusement, grâce à PocketPal AI, on va tous pouvoir discuter avec une IA directement depuis votre smartphone, 100% en local ! |
Tool
Mobile
|
ChatGPT
|
★★★
|
 |
2025-02-25 02:00:04 |
Arrêt de cybersécurité du mois: Capital One Credential Phishing-How Les cybercriminels ciblent votre sécurité financière Cybersecurity Stop of the Month: Capital One Credential Phishing-How Cybercriminals Are Targeting Your Financial Security (lien direct) |
La série de blogs sur l'arrêt de la cybersécurité explore les tactiques en constante évolution des cybercriminels d'aujourd'hui et comment Proofpoint aide les organisations à mieux fortifier leurs défenses par e-mail pour protéger les gens contre les menaces émergentes d'aujourd'hui.
Les cybercriminels continuent d'affiner leurs campagnes de phishing avec des technologies en évolution et des tactiques psychologiques. Souvent, les campagnes imitent les organisations de confiance, en utilisant des e-mails bidon et des sites Web qui semblent presque identiques à leurs homologues légitimes.
Selon ProofPoint Threat Research, les attaques de phishing ont augmenté de 147% lors de la comparaison du quatrième trimestre 2023 avec le quatrième trimestre 2024. Il y a également eu une augmentation de 33% du phishing livré par les principales plateformes de productivité basées sur le cloud. Ces statistiques alarmantes soulignent comment les menaces de phishing évoluent rapidement. Et des outils d'IA génératifs comme Chatgpt, Deepfakes et les services de clonage vocale font partie de cette tendance.
Une campagne de phishing qui a utilisé la marque Capital One est un bon exemple de la sophistication croissante de ces attaques, qui ciblent fréquemment les institutions financières. Dans cet article de blog, nous explorerons comment cette campagne a fonctionné et comment Proofpoint a arrêté la menace.
Le scénario
Les cybercriminels ont envoyé des e-mails qui semblaient provenant de Capital One. Ils ont utilisé deux principaux types de leurres:
Vérification des transactions. Les e-mails ont demandé aux utilisateurs s'ils avaient reconnu un achat récent. Cette tactique est particulièrement efficace pendant la période des fêtes.
Notification de paiement. Les e-mails ont informé les utilisateurs qu'ils avaient reçu un nouveau paiement et les ont incité à prendre des mesures pour l'accepter. Les montants de paiement élevé ont créé un sentiment d'urgence.
Du 7 décembre 2024 au 12 janvier 2025, cette campagne a ciblé plus de 5 000 clients avec environ 130 000 messages de phishing.
Capital One a mis en œuvre de solides mesures de sécurité, notamment l'authentification par e-mail et les retraits des domaines de lookalike. Cependant, les acteurs de la menace continuent de trouver des moyens d'abuser de sa marque dans les campagnes de phishing. Les attaquants exploitent les utilisateurs de la confiance dans les institutions financières, en utilisant des tactiques trompeuses pour contourner les contrôles de sécurité et inciter les utilisateurs sans méfiance à révéler des informations sensibles.
La menace: comment l'attaque s'est-elle produite?
Voici comment l'attaque s'est déroulée:
1. Réglage du leurre. Les attaquants ont conçu des courriels qui reflétaient étroitement les communications officielles de Capital One. Le logo, l'image de marque et le ton de la société ont tous été copiés. Les messages ont créé un sentiment d'urgence, qui est une tactique souvent utilisée par les acteurs de la menace pour amener les destinataires à prendre des décisions hâtives.
Lyer de phishing utilisé par les acteurs de la menace.
Les lignes d'objet étaient convaincantes et visaient à attirer rapidement l'attention. Les préoccupations financières, comme les achats non autorisés ou les alertes de paiement, étaient un thème commun pour inciter les utilisateurs à ouvrir l'e-mail et à cliquer sur les liens.
Exemples:
"Action requise: vous avez reçu un nouveau paiement"
"[Nom d'utilisateur], reconnaissez-vous cet achat?"
Un autre leurre utilisé par les acteurs de la menace.
2. Abus des services d'URL. Pour contourner le scepticisme des utilisateurs, les attaquants ont exploité leur confiance inhérente aux URL Google. Les URL Google intégrées ont été utilisées comme mécanismes de redirection, reliant les destinataires aux sites Web de phishing. Ces sites Web ont été conçus pour sembler identiques à la page de connexion légitime de Capital One.
S |
Malware
Tool
Threat
Prediction
Medical
Cloud
Commercial
|
ChatGPT
|
★★★
|
 |
2025-02-22 10:47:00 |
Openai interdit les comptes abusant le chatppt pour les campagnes de surveillance et d'influence OpenAI Bans Accounts Misusing ChatGPT for Surveillance and Influence Campaigns (lien direct) |
Vendredi, Openai a révélé qu'il avait interdit un ensemble de comptes qui utilisaient son outil Chatgpt pour développer un outil de surveillance présumé de l'intelligence artificielle (IA).
L'outil d'écoute des médias sociaux proviendrait probablement de la Chine et est alimenté par l'un des modèles de lama de Meta \\, avec les comptes en question en utilisant les modèles de l'AI Company \\ pour générer des descriptions détaillées et analyser des documents
OpenAI on Friday revealed that it banned a set of accounts that used its ChatGPT tool to develop a suspected artificial intelligence (AI)-powered surveillance tool.
The social media listening tool is said to likely originate from China and is powered by one of Meta\'s Llama models, with the accounts in question using the AI company\'s models to generate detailed descriptions and analyze documents |
Tool
|
ChatGPT
|
★★★
|
 |
2025-02-21 13:59:15 |
Les allégations de fuite omnigpt montrent le risque d'utiliser des données sensibles sur les chatbots d'IA OmniGPT Leak Claims Show Risk of Using Sensitive Data on AI Chatbots (lien direct) |
Les allégations récentes des acteurs de la menace selon lesquelles ils ont obtenu une base de données Omnigpt Backend montrent les risques d'utilisation de données sensibles sur les plates-formes de chatbot AI, où les entrées de données pourraient potentiellement être révélées à d'autres utilisateurs ou exposées dans une violation.
Omnigpt n'a pas encore répondu aux affirmations, qui ont été faites par des acteurs de menace sur le site de fuite de BreachForums, mais les chercheurs sur le Web de Cyble Dark ont analysé les données exposées.
Les chercheurs de Cyble ont détecté des données potentiellement sensibles et critiques dans les fichiers, allant des informations personnellement identifiables (PII) aux informations financières, aux informations d'accès, aux jetons et aux clés d'API. Les chercheurs n'ont pas tenté de valider les informations d'identification mais ont basé leur analyse sur la gravité potentielle de la fuite si les revendications tas \\ 'sont confirmées comme étant valides.
omnigpt hacker affirme
Omnigpt intègre plusieurs modèles de grande langue (LLM) bien connus dans une seule plate-forme, notamment Google Gemini, Chatgpt, Claude Sonnet, Perplexity, Deepseek et Dall-E, ce qui en fait une plate-forme pratique pour accéder à une gamme d'outils LLM.
le Acteurs de menace (TAS), qui a posté sous les alias qui comprenait des effets de synthéticotions plus sombres et, a affirmé que les données "contient tous les messages entre les utilisateurs et le chatbot de ce site ainsi que tous les liens vers les fichiers téléchargés par les utilisateurs et également les e-mails utilisateur de 30 000. Vous pouvez trouver de nombreuses informations utiles dans les messages tels que les clés API et les informations d'identification et bon nombre des fich |
Spam
Tool
Vulnerability
Threat
|
ChatGPT
|
★★★
|
 |
2025-02-17 00:00:00 |
The Growing Threat of Phishing Attacks and How to Protect Yourself (lien direct) |
Phishing remains the most common type of cybercrime, evolving into a sophisticated threat that preys on human psychology and advanced technology. Traditional phishing involves attackers sending fake, malicious links disguised as legitimate messages to trick victims into revealing sensitive information or installing malware. However, phishing attacks have become increasingly advanced, introducing what experts call "phishing 2.0" and psychological phishing.
Phishing 2.0 leverages AI to analyse publicly available data, such as social media profiles and public records, to craft highly personalized and convincing messages. These tailored attacks significantly increase the likelihood of success. Psychological manipulation also plays a role in phishing schemes. Attackers exploit emotions like fear and trust, often creating a sense of urgency to pressure victims into acting impulsively. By impersonating trusted entities, such as banks or employers, they pressure victims into following instructions without hesitation.
AI has further amplified the efficiency and scale of phishing attacks. Cybercriminals use AI tools to generate convincing scam messages rapidly, launch automated campaigns and target thousands of individuals within minutes. Tools like ChatGPT, when misused in “DAN mode”, can bypass ethical restrictions to craft grammatically correct and compelling messages, aiding attackers who lack English fluency.
These cutting-edge threats combine the precision of AI-driven tools with the effectiveness of psychological manipulation, making phishing more dangerous than ever for individuals and organizations.
To combat these advanced threats, organizations must adopt a proactive defence strategy. They must begin by enhancing cybersecurity awareness through regular training sessions, equipping employees to recognize phishing attempts. They should implement advanced email filtering systems that use AI to detect even the most sophisticated phishing emails. They can strengthen security with multi-factor authentication (MFA), requiring multiple verification steps to protect sensitive accounts. By conducting regular security assessments, they can identify and mitigate vulnerabilities. Finally, by establishing a robust incident response plan to ensure swift and effective action when phishing incidents occur.
Cyber Skills can help you to upskill your team and prevent your organisation from falling victims to these advanced phishing attacks. With 80% government funding available for all Cyber Skills microcredentials, there is no better time to upskill. Apply today www.cyberskills.ie
Phishing remains the most common type of cybercrime, evolving into a sophisticated threat that preys on human psychology and advanced technology. Traditional phishing involves attackers sending fake, malicious links disguised as legitimate messages to trick victims into revealing sensitive information or installing malware. However, phishing attacks have become increasingly advanced, introducing what experts call "phishing 2.0" and psychological phishing.
Phishing 2.0 leverages AI to analyse publicly available data, such as social media profiles and public records, to craft highly personalized and convincing messages. These tailored attacks significantly increase the likelihood of success. Psychological manipulation also plays a role in phishing schemes. Attackers exploit emotions like fear and trust, often creating a sense of urgency to pressure victims into acting impulsively. By impersonating trusted entities, such as banks or employers, they pressure victims into following instructions without hesitation.
AI has further amplified the efficiency and scale of phishing attacks. Cybercriminals use AI tools to generate convincing scam messages rapidly, launch automated campaigns and target thousands of individuals within minutes. Tools like ChatGPT, when misused in “DAN mode”, can bypass ethical restrictions to craft grammatically correct and compelling messages, aiding attackers who lack English fluency.
|
Malware
Tool
Vulnerability
Threat
|
ChatGPT
|
★★★
|
 |
2025-02-12 21:43:22 |
A Safer Digital Future: Stopping AI-Fueled Cyber Scams for a more Secure Tomorrow (lien direct) |
>With Safer Internet Day this week, it\'s hard not to feel a little extra concern about our kids\' online safety. Today, our children and young adults are living and breathing a digital world that\'s evolving faster than ever-one where scammers are now using AI-assisted smart tools like ChatGPT and DeepSeek to create malicious content that can trick even the savviest among us. To protect these young minds, some governments have taken bold steps. In Singapore and Australia, restrictions or complete bans to prevent young children under 16 years old from using the popular social media platform, Instagram. These measures recognize […]
>With Safer Internet Day this week, it\'s hard not to feel a little extra concern about our kids\' online safety. Today, our children and young adults are living and breathing a digital world that\'s evolving faster than ever-one where scammers are now using AI-assisted smart tools like ChatGPT and DeepSeek to create malicious content that can trick even the savviest among us. To protect these young minds, some governments have taken bold steps. In Singapore and Australia, restrictions or complete bans to prevent young children under 16 years old from using the popular social media platform, Instagram. These measures recognize […]
|
Tool
|
ChatGPT
|
★★★
|
 |
2025-02-10 05:55:21 |
GenAI Tools Were Putting a Retailer\\'s Data at Risk-Here\\'s How Proofpoint Helped (lien direct) |
This blog post is part of a three-part series that explores why companies are choosing Proofpoint Data Security solutions. It focuses on the unique challenges of various industries when it comes to keeping data safe.
When it comes to adopting generative AI (GenAI) tools, organizations can face significant data loss risks. While many business teams want to adopt tools like ChatGPT, Microsoft Copilot and Google Gemini, security teams may not be ready for the added risk. After all, it\'s easy for users to inadvertently expose sensitive data and intellectual property (IP) to AI copilots. Many organizations simply lack governance processes and robust data controls to stop them.
In this blog post, we\'ll take a look at how one major U.S. retailer safely adopted GenAI tools by using Proofpoint Data Security and ZenGuide to enforce its acceptable AI use policies and protect its data.
Lack of AI governance puts source code at risk
E-commerce is a big part of this retailer\'s business. As a result, it relies on internally developed code to support its operations. Recently, the company\'s senior manager of data protection and governance was dismayed to discover that software developers were using GenAI tools in ways that exposed sensitive source code and passwords. He also saw that business teams were using tools like Grammarly and ChatGPT to develop content.
To address the risk of data loss, they started to block access to these tools via the company\'s secure web gateway (SWG). However, after the security team and business stakeholders discussed the issue, they decided that a more comprehensive approach was needed. That\'s how the company\'s AI governance project started.
Empowering the workforce with safe GenAI practices
The company wanted to protect its data from being exposed through GenAI tools. It also wanted to maintain its business agility. To do both, the security team implemented the following measures using Proofpoint Data Security, ZenGuide and a third-party SWG:
Blocked access to unapproved GenAI tools. The team created SWG rules to block general access to a Proofpoint-curated list of over 600 GenAI sites, including ChatGPT, Microsoft Copilot, DeepSeek, Google Gemini, Claude and more. Individual-level exceptions were made after the team spoke with business unit (BU) leaders one-on-one.
Trained users on GenAI security. A security training and awareness program was launched in collaboration with the legal and HR teams to teach employees about proper GenAI usage. Only users who had completed their Proofpoint ZenGuide training were granted access to GenAI tools.
Monitored AI prompts. Proofpoint Endpoint DLP and browser extension was used by the team to monitor user and data activity across a list of GenAI sites, which was curated by Proofpoint. If developers did not follow their training or they misused the tools and exposed proprietary code or passwords, alerts were generated and BU leaders were informed.
Identified the people using GenAI tools. Proofpoint People Risk Explorer was used by the team to understand how employees, business units and other groups were using GenAI tools. This dashboard allowed the Data Protection manager to easily point out risks to BU leaders such as when employees were entering large amounts of data into AI tools or using them without the proper training.
Identified shadow AI applications. Proofpoint CASB was used by the team to monitor and control OAuth authorizations for unauthorized or risky shadow AI applications like Grammarly.
Balancing innovation and security
Proofpoint provided this retailer with both technology and expertise. As a result, it was able to realize the business benefits of GenAI tools while also minimizing risks to its data security. By combining visibility, training and targeted controls, Proofpoint ensured that GenAI tools were adopted safely and effectively.
Learn more
To find out how other companies are using Proofpoint to protect their sensitive data from risky users, read the other blogs in this series:
Why a Global Services |
Tool
|
ChatGPT
|
★★★
|
 |
2025-02-03 13:00:01 |
Protect Your Organization from GenAI Risks with Harmony SASE (lien direct) |
>Love it or hate it, large language models (LLMs) like ChatGPT and other AI tools are reshaping the modern workplace. As AI becomes a critical part of daily work, establishing guardrails and deploying monitoring tools for these tools is critical. That\'s where Check Point\'s Harmony SASE comes in. We\'ve already talked about Browser Security and the clipboard control feature to help define what types of information can\'t be shared with LLMs. For monitoring these services, our GenAI Security service shows exactly which AI tools your team is using, who is using them, what kind of information they\'re sharing with the […]
>Love it or hate it, large language models (LLMs) like ChatGPT and other AI tools are reshaping the modern workplace. As AI becomes a critical part of daily work, establishing guardrails and deploying monitoring tools for these tools is critical. That\'s where Check Point\'s Harmony SASE comes in. We\'ve already talked about Browser Security and the clipboard control feature to help define what types of information can\'t be shared with LLMs. For monitoring these services, our GenAI Security service shows exactly which AI tools your team is using, who is using them, what kind of information they\'re sharing with the […]
|
Tool
|
ChatGPT
|
★★★
|
 |
2025-01-31 08:35:30 |
DeepSeek AI: Safeguarding Your Sensitive and Valuable Data with Proofpoint (lien direct) |
The Chinese artificial intelligence (AI) startup DeepSeek recently took the markets by storm by releasing an innovative and cost-effective AI model called R1. DeepSeek-R1 rivals more expensive models like OpenAI\'s ChatGPT. What\'s more, it demonstrates that developing advanced AI doesn\'t necessarily require a massive investment.
Clearly, DeepSeek\'s efficient AI model has the potential to broaden the adoption of AI. However, it has also caused concern about what the Chinese government will do with the data that it collects. Organizations are right to be worried that their users might expose sensitive customer data, their proprietary algorithms or their internal strategies.
If PII (personally identifiable information) is exposed, this can cause GDPR violations that could have a huge financial impact. Fines imposed by the GDPR can be up to €20 million or 4% of a company\'s global annual revenue-whichever is higher. Plus, it can cause reputational damage and a loss in customer trust.
DeepSeek\'s privacy policy doesn\'t help alleviate these fears. Just consider the statement below:
DeepSeek privacy policy.
To mitigate these risks, organizations should take a comprehensive approach that encompasses people, processes and technology. Not only should they implement technology that provides human-centric access and data controls, but organizations should also establish robust internal policies and AI governance boards for oversight and guidance. They need to be able to monitor AI usage and data access. And they need to have measures in place, like employee training, and a solid ethical framework.
Human-centric security for GenAI
Safe adoption of GenAI tools is top of mind for most CISOs. Proofpoint has an adaptive, human-centric platform for data security that can help. Our solution provides you with visibility and control for GenAI across your organization.
Unlike legacy DLP solutions and web filtering that block all usage of GenAI applications, Proofpoint can selectively allow, guide and restrict their use based on employee behavior and the content that they input.
With Proofpoint Enterprise DLP, Data Security Posture Management and ZenGuide, we can help you enforce acceptable use policies for public GenAI tools as well as enterprise copilots and custom LLM models.
Here\'s a comprehensive list of what you can accomplish with Proofpoint:
Gain visibility into shadow GenAI tools:
Track the use of over 600 GenAI sites by user, group or department
Monitor GenAI app usage with context on user risk
Identify and alert on third-party AI app authorizations for access to the user\'s cloud data, email, calendar, etc.
Enforce acceptable use policies for GenAI tools:
Block web uploads and the pasting of sensitive data to GenAI sites
Prevent sensitive data from being typed into tools like DeekSeek, ChatGPT, Gemini, Claude and Copilot or redact sensitive data that\'s typed in AI prompts using our browser extension
Revoke or block authorizations for third-party GenAI apps
Monitor and alert when sensitive files are accessed by Copilot for Microsoft 365 via emails, files and Teams messages
Detect, label and protect files that contain sensitive data, including AI-generated content
Monitor for insider threats with dynamic GenAI policies:
Capture metadata and screen captures before and after users access GenAI tools
Prevent data exposure to AI copilots and custom LLM models:
Classify data and manage data access by AI apps like Copilot for Microsoft 365 to ensure AI-generated content does not expose sensitive data
Protect foundational or custom AI models, including those from DeepSeek in AWS Bedrock and Azure OpenAI from being trained on sensitive data
Conduct near real-time sensitivity analysis of data usage by GenAI platforms
Train empl |
Tool
Cloud
|
ChatGPT
|
★★
|
 |
2025-01-24 05:28:30 |
Unlocking the Value of AI: Safe AI Adoption for Cybersecurity Professionals (lien direct) |
As a cybersecurity professional or CISO, you likely find yourself in a rapidly evolving landscape where the adoption of AI is both a game changer and a challenge. In a recent webinar, I had an opportunity to delve into how organizations can align AI adoption with business objectives while safeguarding security and brand integrity. Michelle Drolet, CEO of Towerwall, Inc., hosted the discussion. And Diana Kelley, CISO at Protect AI, participated with me.
What follows are some key takeaways. I believe every CISO and cybersecurity professionals should consider them when they are integrating AI into their organization.
Start with gaining visibility into AI usage
The first and most critical step is gaining visibility into how AI is being used across your organization. Whether it\'s generative AI tools like ChatGPT or custom predictive models, it\'s essential to understand where and how these technologies are deployed. After all, you cannot protect what you cannot see. Start by identifying all large language models (LLMs) and the AI tools that are being used. Then map out the data flows that are associated with them.
Balance innovation with guardrails
AI adoption is inevitable. The “hammer approach” of banning its use outright rarely works. Instead, create tailored policies that balance innovation with security. For instance:
Define policies that specify what types of data can interact with AI tools
Implement enforcement mechanisms to prevent sensitive data from being shared inadvertently
These measures empower employees to use AI\'s capabilities while ensuring that robust security protocols are maintained.
Educate your employees
One of the biggest challenges in AI adoption is ensuring that employees understand the risks and responsibilities that are involved. Traditional security awareness programs that focus on phishing or malware need to evolve to include AI-specific training. Employees must be equipped to:
Recognize the risks of sharing sensitive data with AI
Create clear policies for complex techniques like data anonymization to prevent inadvertent exposure of sensitive data
Appreciate why it\'s important to follow organizational policies
Conduct proactive threat modeling
AI introduces unique risks, such as accidental data leakage. Another risk is “confused pilot” attacks where AI systems inadvertently expose sensitive data. Conduct thorough threat modeling for each AI use case:
Map out architecture and data flows
Identify potential vulnerabilities in training data, prompts and responses
Implement scanning and monitoring tools to observe interactions with AI systems
Use modern tools like DSPM
Data Security Posture Management (DSPM) is an invaluable framework for securing AI. By providing visibility into data types, access patterns and risk exposure, DSPM enables organizations to:
Identify sensitive data being used for AI training or inference
Monitor and control who has access to critical data
Ensure compliance with data governance policies
Test before you deploy
AI is nondeterministic by nature. This means that its behavior can vary unpredictably. Before deploying AI tools, conduct rigorous testing:
Red team your AI systems to uncover potential vulnerabilities
Use AI-specific testing tools to simulate real-world scenarios
Establish observability layers to monitor AI interactions post-deployment
Collaborate across departments
Effective AI security requires cross-departmental collaboration. Engage teams from marketing, finance, compliance and beyond to:
Understand their AI use cases
Identify risks that are specific to their workflows
Implement tailored controls that support their objectives while keeping the organization safe
Final thoughts
By focusing on visibility, education and proactive security measures, we can harness AI\'s potential while minimizing risks. If there\'s one piece of advice that I\'d leave you with, it\'s this: Don\'t wait for incidents to highlight the gaps in your AI strategy. Take the first step now by auditing |
Malware
Tool
Vulnerability
Threat
Legislation
|
ChatGPT
|
★★
|
 |
2025-01-09 17:25:00 |
Product Review: How Reco Discovers Shadow AI in SaaS (lien direct) |
As SaaS providers race to integrate AI into their product offerings to stay competitive and relevant, a new challenge has emerged in the world of AI: shadow AI.
Shadow AI refers to the unauthorized use of AI tools and copilots at organizations. For example, a developer using ChatGPT to assist with writing code, a salesperson downloading an AI-powered meeting transcription tool, or a
As SaaS providers race to integrate AI into their product offerings to stay competitive and relevant, a new challenge has emerged in the world of AI: shadow AI.
Shadow AI refers to the unauthorized use of AI tools and copilots at organizations. For example, a developer using ChatGPT to assist with writing code, a salesperson downloading an AI-powered meeting transcription tool, or a |
Tool
Cloud
|
ChatGPT
|
★★★
|
 |
2024-11-18 10:34:05 |
Security Brief: ClickFix Social Engineering Technique Floods Threat Landscape (lien direct) |
What happened
Proofpoint researchers have identified an increase in a unique social engineering technique called ClickFix. And the lures are getting even more clever.
Initially observed earlier this year in campaigns from initial access broker TA571 and a fake update website compromise threat cluster known as ClearFake, the ClickFix technique that attempts to lure unsuspecting users to copy and run PowerShell to download malware is now much more popular across the threat landscape.
The ClickFix social engineering technique uses dialogue boxes containing fake error messages to trick people into copying, pasting, and running malicious content on their own computer.
Example of early ClickFix technique used by ClearFake.
Proofpoint has observed threat actors impersonating various software and services using the ClickFix technique as part of their social engineering, including common enterprise software such as Microsoft Word and Google Chrome, as well as software specifically observed in target environments such as transportation and logistics.
The ClickFix technique is used by multiple different threat actors and can originate via compromised websites, documents, HTML attachments, malicious URLs, etc. In most cases, when directed to the malicious URL or file, users are shown a dialog box that suggests an error occurred when trying to open a document or webpage. This dialog box includes instructions that appear to describe how to “fix” the problem, but will either: automatically copy and paste a malicious script into the PowerShell terminal, or the Windows Run dialog box, to eventually run a malicious script via PowerShell; or provide a user with instructions on how to manually open PowerShell and copy and paste the provided command.
Proofpoint has observed ClickFix campaigns leading to malware including AsyncRAT, Danabot, DarkGate, Lumma Stealer, NetSupport, and more.
ClickFix campaigns observed March through October 2024.
Notably, threat actors have been observed recently using a fake CAPTCHA themed ClickFix technique that pretends to validate the user with a "Verify You Are Human" (CAPTCHA) check. Much of the activity is based on an open source toolkit named reCAPTCHA Phish available on GitHub for “educational purposes.” The tool was released in mid-September by a security researcher, and Proofpoint began observing it in email threat data just days later. The purpose of the repository was to demonstrate a similar technique used by threat actors since August 2024 on websites related to video streaming. Ukraine CERT recently published details on a suspected Russian espionage actor using the fake CAPTCHA ClickFix technique in campaigns targeting government entities in Ukraine.
Recent examples
GitHub “Security Vulnerability” notifications
On 18 September 2024, Proofpoint researchers identified a campaign using GitHub notifications to deliver malware. The messages were notifications for GitHub activity. The threat actor either commented on or created an issue in a GitHub repository. If the repository owner, issue owner, or other relevant collaborators had email notifications enabled, they received an email notification containing the content of the comment or issue from GitHub. This campaign was publicly reported by security journalist Brian Krebs.
Email from GitHub.
The notification impersonated a security warning from GitHub and included a link to a fake GitHub website. The fake website used the reCAPTCHA Phish and ClickFix social engineering technique to trick users into executing a PowerShell command on their computer.
ClickFix style “verification steps” to execute PowerShell.
The landing page contained a fake reCAPTCHA message at the end of the copied command so the target would not see the actual malicious command in the run-box when the malicious command was pasted. If the user performed the requested steps, PowerShell code was execu |
Malware
Tool
Threat
|
ChatGPT
|
★★
|
 |
2024-10-30 14:01:17 |
Comment la protection de l'information ProofPoint offre une valeur aux clients How Proofpoint Information Protection Provides Value for Customers (lien direct) |
Recently, I read the Gartner® research report, Demystifying Microsoft\'s Data Security Capabilities and Licensing. Many of the report\'s findings aligned with what dozens of customers have shared with me over the past several years.
I have seen many organizations turn to Proofpoint Information Protection solutions after encountering challenges with other vendors\' offerings. Today, more than half of Fortune 100 companies trust Proofpoint to power their data loss prevention (DLP) programs.
This isn\'t by chance. When comparing Proofpoint DLP with other solutions, our offering stands out as the clear choice. In this blog post, I\'ll highlight the key reasons why I believe Proofpoint excels.
Realizing customer value
In my opinion, a key indicator of the value that Proofpoint brings is shown in the 2024 Gartner® Peer Insights™ Voice of the Customer for Data Loss Prevention. Proofpoint was the only vendor recognized with a Customers\' Choice distinction. This recognizes vendors who meet or exceed market averages for Overall Experience and User Interest and Adoption.
One of the reasons I believe why Proofpoint was recognized with this distinction is the way that we\'ve streamlined management and lowered costs. This can be seen when comparing our solution to Microsoft Purview. Because Proofpoint Managed Services runs Purview for some of our customers, our team was able to analyze how much time and effort it takes to operate and maintain it. According to Proofpoint analysis, organizations using Microsoft Purview experience:
A 33% detection rate of data loss incidents
50% more alerts to manage
2.5 times longer time to triage incidents
These inefficiencies directly translate into higher operational costs because organizations often must hire more staff to deal with increased alerts and the number of incidents to triage.
In contrast, Proofpoint uses a human-centric approach that provides deep insight into user intent, data and application access patterns. It provides capabilities that detect all manner of data loss incidents because its rules are both content- and context-based. This allows for protecting data and alerting on behaviors that are not possible with traditional, content-centric systems and assures far lower risks of false negatives and false positives.
For instance, Proofpoint will issue an alert if a user downloads a file from a sensitive data repository, encrypts it and transfers the file to a USB. In contrast, systems that employ a traditional approach rely on reading the content of data. This isn\'t reliable because of a variety of reasons, such as poorly written rules or encrypted data in transit.
Our intuitive interface and single, unified dashboard speed up investigations. This not only lowers management costs, but it also means that your data is better protected.
Proofpoint is not just technically superior. It also has significant cost advantages. Our team\'s analysis also compared the costs of operating and maintaining our DLP solution to Purview\'s. Organizations that make the switch to Proofpoint can expect:
A 50% reduction in total cost of ownership
An average payback period of just 4.5 months based on breach avoidance and workforce efficiency
If you combine our detection capabilities and streamlined management with our cost benefits, then the value of our comprehensive solution becomes clear.
Proofpoint reduces risk
Proofpoint Information Protection takes a human-centric approach to security. We integrate user behavior and content telemetry across all DLP channels, including email, cloud apps, endpoints and the web. This context-rich data provides security analysts with what they need to quickly and accurately assess incidents-and prevent data loss before it happens.
Generative AI (GenAI) is a good example of why this is so important. You can\'t enforce acceptable use policies for GenAI tools if you don\'t understand your content |
Tool
Threat
Cloud
|
ChatGPT
|
★★
|
 |
2024-10-21 18:57:24 |
Bumblebee malware returns after recent law enforcement disruption (lien direct) |
## Instantané
Les chercheurs de la société de cybersécurité Nettskope ont observé une résurgence du chargeur de logiciels malveillants de Bumblebee, qui s'était silencieux à la suite de la perturbation par Europol \\ 's \' Operation Endgame \\ 'en mai.
## Description
Bumblebee, attribué à [TrickBot] (https://sip.security.microsoft.com/intel-profiles/5a0aed1313768d50c9e800748108f51d3dfea6a4b48aa71b630cff897982f7c) https://sip.security.microsoft.COM / Intel-Explorer / Articles / 8AAA95D1) Backdoor, facilitant les acteurs de ransomwares \\ 'Accès aux réseaux.Le malware est généralement réparti par le phishing, le malvertising et l'empoisonnement SEO, promouvant des logiciels contrefaits comme Zooom, Cisco AnyConnect, Chatgpt et Citrix Workspace.Il est connu pour la livraison de charges utiles telles que [Cobalt Strike] (https: //sip.security.microsoft.com/intel-profiles/fd8511c1d61e93d39411acf36a31130a6795efe186497098fe0c6f2ccfb920fc) Beacons, [standing 2575F9418D4B6723C17B8A1F507D20C2140A75D16D6) MALWARE et diverses souches de ransomware.
La dernière chaîne d'attaque de Bumblebee commence par un phiShing Email qui trompe la victime pour télécharger une archive zip malveillante contenant un raccourci .lnk.Ce raccourci déclenche PowerShell pour télécharger un fichier .msi malveillant, se faisant passer pour une mise à jour légitime du pilote NVIDIA ou un programme d'installation de MidJourney, à partir d'un serveur distant.Le fichier .msi s'exécute silencieusement, en utilisant la table Self-Reg pour charger une DLL dans le processus msiexec.exe et déployer Bumblebee en mémoire.La charge utile dispose d'une DLL interne, de fonctions de noms de fonctions exportées et de mécanismes d'extraction de configuration cohérents avec les variantes précédentes.
## Analyse Microsoft et contexte OSINT supplémentaire
L'acteur Microsoft suit [Storm-0249] (https://sip.security.microsoft.com/intel-profiles/75f82d0d2bf6af59682bbbbbbbb. et connu pour distribution de bazaloder, Gozi, emoTET, [IceDID] (https://sip.security.microsoft.com/intel-profiles/ee69395aeeea2b2322d5941be0ec4997a22d106f671ef84d35418ec2810faddb) et bumBlebee.Storm-0249 utilise généralement des e-mails de phishing pour distribuer leurs charges utiles de logiciels malveillants dans des attaques opportunistes.En mai 2022, Microsoft Threat Intelligence a observé que Storm-0249 s'éloigne de la précédente MALWLes familles de [Bumblebee] (https://security.microsoft.com/atheatanalytics3/048e866a-0a92-47b8-94ac-c47fe577cc33/analystreport?ocid=Magicti_TA_TA2) comme mécanisme initial de livraison de charge utile.Ils ont effectué des campagnes d'accès initiales basées sur des e-mails pour un transfert à d'autres acteurs, notamment pour les campagnes qui ont abouti au déploiement des ransomwares.
Bumblebee Malware a fait plusieurs résurgences depuis sa découverte en 2022, adaptant et évoluant en réponse aux mesures de sécurité.Initialement observé en remplacement des logiciels malveillants bazarloader utilisés par les groupes cybercriminaux liés à TrickBot, Bumblebee a refait surface plusieurs fois avec des capacités améliorées et des stratégies d'attaque modifiées.Ces [résurgences] (https://sip.security.microsoft.com/intel-explorer/articles/ab2bde0b) coïncident souvent avec les changements dans l'écosystème de cybercriminalité, y compris le retrait de l'infrastructure de TrickBot \\ et de la puissance de conti ransomware de TrickBot \\ s et de la puissance de conti ransomware de TrickBot \\ s et de la puissance de contidrowware Contidownopérations.
La capacité de Bumblebee \\ à réapparaître est due à son architecture modulaire flexible, permettant aux acteurs de menace de mettre à jour ses charges utiles et ses techniques d'évasion.Chaque résurgence a vu Bumblebee utilisé dans des campagnes de plus en plus sophistiquées, offrant fréquemment des ransomwares à fort impact comme BlackCat et Quantum.De plus, il a été lié à des tactiques d'évasion avancées |
Ransomware
Spam
Malware
Tool
Threat
Legislation
|
ChatGPT
|
★★
|
 |
2024-10-16 19:15:03 |
Une mise à jour sur la perturbation des utilisations trompeuses de l'IA An update on disrupting deceptive uses of AI (lien direct) |
## Instantané
OpenAI a identifié et perturbé plus de 20 cas où ses modèles d'IA ont été utilisés par des acteurs malveillants pour diverses opérations de cyber, notamment le développement de logiciels malveillants, les réseaux de désinformation, l'évasion de détection et les attaques de phishing de lance.
## Description
Dans son rapport nouvellement publié, OpenAI met en évidence les tendances des activités des acteurs de la menace, notant qu'ils tirent parti de l'IA lors d'une phase intermédiaire spécifique acquérant des outils de base mais avant de déployer des produits finis.Le rapport révèle également que si ces acteurs expérimentent activement des modèles d'IA, ils n'ont pas encore atteint des percées importantes dans la création de logiciels malveillants considérablement nouveaux ou de construire un public viral.De plus, le rapport souligne que les entreprises d'IA elles-mêmes deviennent des objectifs d'activité malveillante.
OpenAI a identifié et perturbé quatre réseaux distincts impliqués dans la production de contenu lié aux élections.Il s'agit notamment d'une opération d'influence iranienne secrète (IO) responsable de la création d'une variété de matériaux, tels que des articles à long terme sur les élections américaines, ainsi que des utilisateurs rwandais de CHATGPT générant du contenu lié aux élections pour le Rwanda, qui a ensuite été publié par des comptesSur X. Selon Openai, la capacité de ces campagnes à avoir un impact significatif et à atteindre un grand public en ligne était limitée.
OpenAI a également publié des études de cas sur plusieurs cyber-acteurs utilisant des modèles d'IA.Il s'agit notamment de Storm-0817, qui a utilisé l'IA pour le débogage du code, et SweetSpecter, qui a exploité les services d'Openai \\ pour la reconnaissance, la recherche sur la vulnérabilité, le soutien des scripts, l'évasion de détection d'anomalie et le développement.De plus, [cyberav3ngers] (https://www.microsoft.com/en-us/security/blog/2024/05/30/exposed-and-vulnerable-recent-attacks-highlight-critical-need-to-protect-Internet-OT-Devices /? MSOCKID = 11175395187C6B993D06473919876A3B) a mené des recherches sur des contrôleurs logiques programmables, tandis que les IOS se sont dirigés par des acteurs de Russie, des États-Unis, de l'Iran et de Rwanda, entre autres.
## Analyse Microsoft et contexte OSINT supplémentaire
Plus tôt cette année, Microsoft, en collaboration avec OpenAI, [a publié un rapport] (https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-thereat-acteurs-dans l'âge d'ai /) détaillant les menaces émergentes à l'ère de l'IA, en se concentrant sur l'activité identifiée associée à des acteurs de menace connus, y compris les injections rapides, l'utilisation abusive de modèles de gros langues (LLM) et la fraude.Bien que différents acteurs de menace \\ 'et la complexité varient, ils ont des tâches communes à effectuer au cours du ciblage et des attaques.Il s'agit notamment de la reconnaissance, telles que l'apprentissage des victimes potentielles \\ 'industries, des lieux et des relations;Aide au codage, notamment en améliorant des choses comme les scripts logiciels et le développement de logiciels malveillants;et l'aide à l'apprentissage et à l'utilisation de la langue maternelle.Les acteurs Microsoft suit comme [Forest Blizzard] (https: // security.microsoft.com/intel-profiles/dd75f93b2a771c9510dceec817b9d34d868c2d1353d08c8c1647de067270fdf8), [emerald. FB337DC703EE4ED596D8AE16F942F442B895752AD9F41DD58E), [Crimson Sandstorm] (https://security.microsoft.com/Intel-Profiles / 34E4ACFE2868D450AC93C5C3E6D2DF021E2801BDB3700DD8F172D602DF6DA046), [Charcoal Tyhpoon] (https://security.microsoft.com/intel-profiles/aabd105e7b5d4d4d.10e.1 BD49A6D3DB3D52D0495410EFD39D506AAD9A4) et [Salmon Typhoon] (https://security.microsoft.com 1C08B4B6) étaient oBServed menant cette activité.
Le Centre d'analyse des menaces de Microsoft (MTAC) a suivi les acteurs de la menace \ |
Malware
Tool
Vulnerability
Threat
Studies
|
ChatGPT
|
★★
|
 |
2024-10-09 19:25:56 |
Openai dit qu'il a perturbé plus de 20 réseaux d'influence étrangère au cours de l'année écoulée OpenAI says it has disrupted 20-plus foreign influence networks in past year (lien direct) |
> Les acteurs de la menace ont été observés à l'aide de Chatgpt et d'autres outils pour étendre les surfaces d'attaque, déboguer les logiciels malveillants et créer du contenu de spectre.
>Threat actors were observed using ChatGPT and other tools to scope out attack surfaces, debug malware and create spearphishing content.
|
Malware
Tool
|
ChatGPT
|
★★★
|
 |
2024-10-09 14:51:12 |
AISheeter - La puissance de l\'IA au service des esclaves de Google Sheets (lien direct) |
Si vous avez de grosses lunettes et les cheveux gras, vous bossez probablement toute la journée sur un tableur. Ce genre d’outils, perso, je ne maitrise pas du tout, mais j’ai quand même une bonne astuce pour tous les utilisateurs de Google Spreadsheet.
Il s’agit d’installer une extension qui s’appelle AISheeter que vous trouverez sur le Google Workspace Marketplace.
Une fois installé, rendez-vous dans le menu des extensions de Sheets, puis, dans les paramètres de celle-ci pour ajouter vos clés, ChatGPT, Claude, Gemini et Groq. |
Tool
|
ChatGPT
|
★★
|
 |
2024-09-30 13:21:55 |
Faits saillants hebdomadaires OSINT, 30 septembre 2024 Weekly OSINT Highlights, 30 September 2024 (lien direct) |
## Snapshot
Last week\'s OSINT reporting highlighted diverse cyber threats involving advanced attack vectors and highly adaptive threat actors. Many reports centered on APT groups like Patchwork, Sparkling Pisces, and Transparent Tribe, which employed tactics such as DLL sideloading, keylogging, and API patching. The attack vectors ranged from phishing emails and malicious LNK files to sophisticated malware disguised as legitimate software like Google Chrome and Microsoft Teams. Threat actors targeted a variety of sectors, with particular focus on government entities in South Asia, organizations in the U.S., and individuals in India. These campaigns underscored the increased targeting of specific industries and regions, revealing the evolving techniques employed by cybercriminals to maintain persistence and evade detection.
## Description
1. [Twelve Group Targets Russian Government Organizations](https://sip.security.microsoft.com/intel-explorer/articles/5fd0ceda): Researchers at Kaspersky identified a threat group called Twelve, targeting Russian government organizations. Their activities appear motivated by hacktivism, utilizing tools such as Cobalt Strike and mimikatz while exfiltrating sensitive information and employing ransomware like LockBit 3.0. Twelve shares infrastructure and tactics with the DARKSTAR ransomware group.
2. [Kryptina Ransomware-as-a-Service Evolution](https://security.microsoft.com/intel-explorer/articles/2a16b748): Kryptina Ransomware-as-a-Service has evolved from a free tool to being actively used in enterprise attacks, particularly under the Mallox ransomware family, which is sometimes referred to as FARGO, XOLLAM, or BOZON. The commoditization of ransomware tools complicates malware tracking as affiliates blend different codebases into new variants, with Mallox operators opportunistically targeting \'timely\' vulnerabilities like MSSQL Server through brute force attacks for initial access.
3. [North Korean IT Workers Targeting Tech Sector:](https://sip.security.microsoft.com/intel-explorer/articles/bc485b8b) Mandiant reports on UNC5267, tracked by Microsoft as Storm-0287, a decentralized threat group of North Korean IT workers sent abroad to secure jobs with Western tech companies. These individuals disguise themselves as foreign nationals to generate revenue for the North Korean regime, aiming to evade sanctions and finance its weapons programs, while also posing significant risks of espionage and system disruption through elevated access.
4. [Necro Trojan Resurgence](https://sip.security.microsoft.com/intel-explorer/articles/00186f0c): Kaspersky\'s Secure List reveals the resurgence of the Necro Trojan, impacting both official and modified versions of popular applications like Spotify and Minecraft, and affecting over 11 million Android devices globally. Utilizing advanced techniques such as steganography to hide its payload, the malware allows attackers to run unauthorized ads, download files, and install additional malware, with recent attacks observed across countries like Russia, Brazil, and Vietnam.
5. [Android Spyware Campaign in South Korea:](https://sip.security.microsoft.com/intel-explorer/articles/e4645053) Cyble Research and Intelligence Labs (CRIL) uncovered a new Android spyware campaign targeting individuals in South Korea since June 2024, which disguises itself as legitimate apps and leverages Amazon AWS S3 buckets for exfiltration. The spyware effectively steals sensitive data such as SMS messages, contacts, images, and videos, while remaining undetected by major antivirus solutions.
6. [New Variant of RomCom Malware:](https://sip.security.microsoft.com/intel-explorer/articles/159819ae) Unit 42 researchers have identified "SnipBot," a new variant of the RomCom malware family, which utilizes advanced obfuscation methods and anti-sandbox techniques. Targeting sectors such as IT services, legal, and agriculture since at least 2022, the malware employs a multi-stage infection chain, and researchers suggest the threat actors\' motives might have s |
Ransomware
Malware
Tool
Vulnerability
Threat
Patching
Mobile
|
ChatGPT
APT 36
|
★★
|
 |
2024-09-25 15:01:00 |
Chatgpt macOS Flaw pourrait avoir activé des logiciels espions à long terme via la fonction de mémoire ChatGPT macOS Flaw Could\\'ve Enabled Long-Term Spyware via Memory Function (lien direct) |
Une vulnérabilité de sécurité désormais réglée dans l'application ChatGPT d'Openai \\ pour MacOS aurait pu permettre aux attaquants de planter des logiciels espions persistants à long terme dans la mémoire de l'outil d'intelligence artificielle (AI).
La technique, surnommée Spaiware, pourrait être maltraitée pour faciliter "l'exfiltration continue des données de toute information que l'utilisateur a tapé ou des réponses reçues par Chatgpt, y compris les futures sessions de chat
A now-patched security vulnerability in OpenAI\'s ChatGPT app for macOS could have made it possible for attackers to plant long-term persistent spyware into the artificial intelligence (AI) tool\'s memory.
The technique, dubbed SpAIware, could be abused to facilitate "continuous data exfiltration of any information the user typed or responses received by ChatGPT, including any future chat sessions |
Tool
Vulnerability
|
ChatGPT
|
★★★
|
 |
2024-09-24 08:14:13 |
AI générative: Comment les organisations peuvent-elles monter sur la vague Genai en toute sécurité et contenir des menaces d'initiés? Generative AI: How Can Organizations Ride the GenAI Wave Safely and Contain Insider Threats? (lien direct) |
The use of generative AI (GenAI) has surged over the past year. This has led to a shift in news headlines from 2023 to 2024 that\'s quite remarkable. Last year, Forbes reported that JPMorgan Chase, Amazon and several U.S. universities were banning or limiting the use of ChatGPT. What\'s more, Amazon and Samsung were reported to have found employees sharing code and other confidential data with OpenAI\'s chatbot.
Compare that to headlines in 2024. Now, the focus is on how AI assistants are being adopted by corporations everywhere. J.P. Morgan is rolling out ChatGPT to 60,000 employees to help them work more efficiently. And Amazon recently announced that by using GenAI to migrate 30,000 applications onto a new platform it had saved the equivalent of 4,500 developer years as well as $260 million.
The 2024 McKinsey Global Survey on AI also shows how much things have changed. It found that 65% of respondents say that their organizations are now using GenAI regularly. That\'s nearly double the number from 10 months ago.
What this trend indicates most is that organizations feel the competitive pressure to either embrace GenAI or risk falling behind. So, how can they mitigate their risks? That\'s what we\'re here to discuss.
Generative AI: A new insider risk
Given its nature as a productivity tool, GenAI opens the door to insider risks by careless, compromised or malicious users.
Careless insiders. These users may input sensitive data-like customer information, proprietary algorithms or internal strategies-into GenAI tools. Or they may use them to create content that does not align with a company\'s legal or regulatory standards, like documents with discriminatory language or images with inappropriate visuals. This, in turn, creates legal risks. Additionally, some users may use GenAI tools that are not authorized, which leads to security vulnerabilities and compliance issues.
Compromised insiders. Access to GenAI tools can be compromised by threat actors. Attackers use this access to extract, generate or share sensitive data with external parties.
Malicious insiders. Some insiders actively want to cause harm. So, they might intentionally leak sensitive information into public GenAI tools. Or, if they have access to proprietary models or datasets, they might use these tools to create competing products. They could also use GenAI to create or alter records to make it difficult for auditors to identify discrepancies or non-compliance.
To mitigate these risks, organizations need a mix of human-centric technical controls, internal policies and strategies. Not only do they need to be able to monitor AI usage and data access, but they also need to have measures in place-like employee training-as well as a solid ethical framework.
Human-centric security for GenAI
Safe adoption of this technology is top of mind for most CISOs. Proofpoint has an adaptive, human-centric information protection solution that can help. Our solution provides you with visibility and control for GenAI use in your organization. And this visibility extends across endpoints, the cloud and the web. Here\'s how:
Gain visibility into shadow GenAI tools:
Track the use of over 600 GenAI sites by user, group or department
Monitor GenAI app usage with context based on user risk
Identify third-party AI app authorizations connected to your identity store
Receive alerts when corporate credentials are used for GenAI services
Enforce acceptable use policies for GenAI tools and prevent data loss:
Block web uploads and the pasting of sensitive data to GenAI sites
Prevent typing of sensitive data into tools like ChatGPT, Gemini, Claude, Copilot and more
Revoke access authorizations for third-party GenAI apps
Monitor the use of Copilot for Microsoft 365 and alert when sensitive files are accessed via emails, files and Teams messages
Apply Microsoft Information Protection (MIP |
Tool
Vulnerability
Threat
Prediction
Cloud
Technical
|
ChatGPT
|
★★
|
 |
2024-08-30 07:00:00 |
Les solutions de sécurité centrées sur l'human Proofpoint\\'s Human-Centric Security Solutions Named SC Awards 2024 Finalist in Four Unique Categories (lien direct) |
Nous sommes ravis de partager que Proofpoint a été nommé finaliste des récompenses de 2024 SC dans quatre catégories distinguées: meilleure solution de messagerie sécurisée;Meilleure solution de sécurité des données;Meilleure solution de menace d'initiés;et la meilleure technologie de détection des menaces.
Maintenant dans sa 27e année, les SC Awards sont considérés comme un programme de récompenses le plus prestigieux de Cybersecurity \\, reconnaissant et honorant les innovations, les organisations et les dirigeants exceptionnels qui font progresser la pratique de la sécurité de l'information.Les gagnants sont sélectionnés par un panel de juges estimés de l'industrie composés de la communauté des Cisos des CyberRisk Alliance, des membres de SC Media and Women in Cyber, et des utilisateurs finaux professionnels de la cybersécurité.
Les gagnants du programme de récompenses SC 2024 seront dévoilés cet automne, coïncidant avec la conférence annuelle phare de Proofpoint \\, Proofpoint Protect, qui débutera à New York le 10 au 11 septembre avant de continuer à Londres, à Chicago et à Austinen octobre.Là, les dirigeants de Proofpoint et les meilleurs clients mettront en évidence notre innovation continue, l'efficacité de notre stratégie de sécurité centrée sur l'homme, explorer les tendances et échanger des informations avec l'industrie la plus brillante.
Cette reconnaissance propulse en outre notre dynamique commerciale du T2 et souligne que les capacités de Proofpoint \\ s'étendent au-delà de la sécurité des e-mails, affirmant la confiance que nous avons établie dans toute l'industrie pour protéger les personnes, défendre les données et atténuer les risques humains.Il rejoint également la liste croissante de la validation de l'industrie de Proofpoint \\, y compris les récompenses pour la meilleure solution de prévention des fuites de données (DLP) et la meilleure solution d'identité et d'accès à la 2024 SC Awards Europe en juin.
En savoir plus sur nos solutions présélectionnées aux 2024 SC Awards:
Plateforme de protection des personnes ProofPoint People
Les organisations sont aujourd'hui confrontées à des menaces de cybersécurité à multiples facettes qui exploitent les vulnérabilités humaines.ProofPoint combine la technologie de pointe avec des informations stratégiques pour se protéger contre le spectre complet des cybermenaces ciblant les gens d'une organisation.En déployant des défenses adaptatives multicouches qui englobent la détection des menaces adaptatives, des garanties d'identité robustes et une gestion proactive des risques des fournisseurs, nous assurons la résilience et la continuité de nos clients.
Protection des informations sur les points
La perte de données provient des personnes, ce qui signifie qu'une approche centrée sur l'homme de la sécurité des données est nécessaire pour répondre efficacement.La protection de l'information de la preuve est la seule solution qui rassemble la télémétrie du contenu, de la menace et des informations comportementales sur les canaux de perte de données les plus critiques & # 8211;Email, services cloud, point de terminaison, référentiels de fichiers sur site et le Web.Cela permet aux organisations de traiter de manière globale tout le spectre des scénarios de perte de données centrés sur l'homme.
Avec la disponibilité générale de la transformation DLP Point Point annoncée au RSAC cette année, les organisations peuvent désormais consolider leurs défenses de données sur les canaux et protéger les données en passant par Chatgpt, Copilots et d'autres outils Genai.
Gestion de la menace d'initié à preuvespoint
30% des CISO mondiaux déclarent que les menaces d'initiés sont leur plus grande préoccupation au cours des 12 prochains mois.ProofPoint ITM offre une visibilité sur un comportement à risque qui entraîne des perturbations commerciales et des pertes de revenus par des utilisateurs négligents, malveillants et compromis.ProofPoint ITM rassemble des preuve |
Ransomware
Tool
Vulnerability
Threat
Cloud
Conference
|
ChatGPT
|
★★
|
 |
2024-08-19 03:31:39 |
Avance rapide ou chute libre?Naviguer dans la montée de l'IA dans la cybersécurité Fast Forward or Freefall? Navigating the Rise of AI in Cybersecurity (lien direct) |
Cela ne fait qu'un an et neuf mois qu'Openai a mis le chatppt à la disposition du public, et cela a déjà eu un impact énorme sur nos vies.Alors que l'IA sera sans aucun doute remodeler notre monde, la nature exacte de cette révolution se déroule toujours.Avec peu ou pas d'expérience, les administrateurs de sécurité peuvent utiliser Chatgpt pour créer rapidement des scripts PowerShell.Des outils comme Grammarly ou Jarvis peuvent transformer les écrivains moyens en éditeurs confiants.Certaines personnes ont même commencé à utiliser l'IA comme alternative aux moteurs de recherche traditionnels comme Google et Bing.Les applications de l'IA sont infinies!AI génératif dans ...
It has been only one year and nine months since OpenAI made ChatGPT available to the public, and it has already had a massive impact on our lives. While AI will undoubtedly reshape our world, the exact nature of this revolution is still unfolding. With little to no experience, security administrators can use ChatGPT to rapidly create Powershell scripts. Tools like Grammarly or Jarvis can turn average writers into confident editors. Some people have even begun using AI as an alternative to traditional search engines like Google and Bing. The applications of AI are endless! Generative AI in... |
Tool
|
ChatGPT
|
★★★
|
 |
2024-08-14 07:19:53 |
Arrêt de cybersécurité du mois: attaque de phishing d'identification ciblant les données de localisation des utilisateurs Cybersecurity Stop of the Month: Credential Phishing Attack Targeting User Location Data (lien direct) |
The Cybersecurity Stop of the Month blog series explores the ever-evolving tactics of today\'s cybercriminals. It also examines how Proofpoint helps businesses to fortify their email defenses to protect people against today\'s emerging threats.
Proofpoint people protection: end-to-end, complete and continuous
So far in this series, we have examined these types of attacks:
Uncovering BEC and supply chain attacks (June 2023)
Defending against EvilProxy phishing and cloud account takeover (July 2023)
Detecting and analyzing a SocGholish Attack (August 2023)
Preventing eSignature phishing (September 2023)
QR code scams and phishing (October 2023)
Telephone-oriented attack delivery sequence (November 2023)
Using behavioral AI to squash payroll diversion (December 2023)
Multifactor authentication manipulation (January 2024)
Preventing supply chain compromise (February 2024)
Detecting multilayered malicious QR code attacks (March 2024)
Defeating malicious application creation attacks (April 2024)
Stopping supply chain impersonation attacks (May 2024)
CEO impersonation attacks (June 2024)
DarkGate malware (July 2024)
In this blog post, we look at how threat actors use QR codes in phishing emails to gain access to employee credentials.
Background
Many threat actors have adopted advanced credential phishing techniques to compromise employee credentials. One tactic on the rise involves is the use of QR codes. Recorded Future\'s Cyber Threat Analysis Report notes that there has been a 433% increase in references to QR code phishing and a 1,265% rise in phishing attacks potentially linked to AI tools like ChatGPT.
Malicious QR codes embedded in phishing emails are designed to lead recipients to fake websites that mimic trusted services. There, users are prompted to enter their login credentials, financial information or other sensitive data. Threat actors will often try to create a sense of urgency in a phishing attack-for example, claiming account issues or security concerns.
The use of QR codes in a phishing attack helps to provide a sense of familiarity for the recipient, as their email address is prefilled as a URL parameter. When they scan the malicious QR codes, it can open the door to credential theft and data breaches.
The scenario
Employees of a global developer of a well-known software application were sent a phishing email, which appeared to be sent from the company\'s human resources team. The email included an attachment and a call to action to scan a QR code, which led to a malicious site.
A key target of the attack was the vice president of finance. Had the attack been successful, threat actors could have accessed the company\'s finances as well as the login credentials, credit card information and location data for the apps\' millions of monthly active users.
The threat: How did the attack happen?
The phishing email sent by the attacker asked employees to review a document in an email attachment that was advertised as “a new company policy added to our Employee Handbook.”
Email sent from an uncommon sender to a division of the location sharing app\'s company.
The attachment contained a call to action: “Scan barcode to review document.”
The file type labeled “Barcode” resembling a QR code.
The “barcode” was a QR code that led to a phishing site. The site was made to look like the company\'s corporate website. It also appeared to be a legitimate site because it was protected by human verification technology, which can make it nearly impossible for other email security solutions to detect. The technology uses challenges (like CAPTCHAs) to prove that a clicker is a human and not a programmatic sandboxing solution.
Human verification request.
After the thr |
Malware
Tool
Threat
Cloud
|
ChatGPT
|
★★★
|
 |
2024-08-09 10:15:00 |
Le leadership Openai s'est séparé de la technologie des filigations de l'IA interne OpenAI Leadership Split Over In-House AI Watermarking Technology (lien direct) |
Une principale préoccupation est que l'outil pourrait éloigner les utilisateurs de Chatgpt du produit
One primary concern is that the tool might turn ChatGPT users away from the product |
Tool
|
ChatGPT
|
★★
|
 |
2024-08-07 13:00:47 |
Déverrouillez la puissance de Genai avec Check Point Software Technologies Unlock the Power of GenAI with Check Point Software Technologies (lien direct) |
> La révolution de Genai est déjà ici des applications génératrices d'IA comme Chatgpt et Gemini sont là pour rester.Mais comme ils facilitent la vie des utilisateurs beaucoup plus simples, ils rendent également la vie de votre organisation beaucoup plus difficile.Bien que certaines organisations aient carrément interdit les applications Genai, selon un point de contrôle et l'étude Vason Bourne, 92% des organisations permettent à leurs employés d'utiliser des outils Genai, mais sont préoccupés par la sécurité et la fuite de données.En fait, une estimation indique que 55% des événements de fuite de données sont le résultat direct de l'utilisation du Genai.Comme des tâches comme le code de débogage et le texte de raffinage peuvent désormais être achevés en fraction [& # 8230;]
>The GenAI Revolution is Already Here Generative AI applications like ChatGPT and Gemini are here to stay. But as they make users\' lives much simpler, they also make your organization\'s life much harder. While some organizations have outright banned GenAI applications, according to a Check Point and Vason Bourne study, 92% of organizations allow their employees to use GenAI tools yet are concerned about security and data leakage. In fact, one estimate says 55% of data leakage events are a direct result of GenAI usage. As tasks like debugging code and refining text can now be completed in a fraction […]
|
Tool
Studies
|
ChatGPT
|
★★★
|
 |
2024-07-29 09:00:00 |
Méfiez-vous des faux outils d'IA masquant des menaces de logiciels malveillants très réels Beware of fake AI tools masking very real malware threats (lien direct) |
Toujours à l'écoute des dernières tendances, les cybercriminels distribuent des outils malveillants qui se présentent en tant que chatppt, midjourney et autres assistants génératifs de l'IA
Ever attuned to the latest trends, cybercriminals distribute malicious tools that pose as ChatGPT, Midjourney and other generative AI assistants |
Malware
Tool
|
ChatGPT
|
★★
|
 |
2024-07-26 19:24:17 |
(Déjà vu) Les attaques d'escroquerie profitent de la popularité de la vague de l'IA générative Scam Attacks Taking Advantage of the Popularity of the Generative AI Wave (lien direct) |
## Instantané
Les analystes de Palo Alto Networks ont constaté que les acteurs du cybermenace exploitent l'intérêt croissant pour l'intelligne artificiel génératif (Genai) pour mener des activités malveillantes.
## Description
Palo Alto Networks \\ 'Analyse des domaines enregistrés avec des mots clés liés à Genai a révélé des informations sur les activités suspectes, y compris les modèles textuels et le volume du trafic.Des études de cas ont détaillé divers types d'attaques, tels que la livraison de programmes potentiellement indésirables (chiots), de distribution de spam et de stationnement monétisé.
Les adversaires exploitent souvent des sujets de tendance en enregistrant des domaines avec des mots clés pertinents.Analyser des domaines nouvellement enregistrés (NRD) contenant des mots clés Genai comme "Chatgpt" et "Sora", Palo Alto Networks a détecté plus de 200 000 NRD quotidiens, avec environ 225 domaines liés au Genai enregistrés chaque jour depuis novembre 2022. Beaucoup de ces domaines, identifiés comme suspects, a augmenté d'enregistrement lors des principaux jalons de Chatgpt, tels que son intégration avec Bing et la sortie de GPT-4.Les domaines suspects représentaient un taux moyen de 28,75%, nettement supérieur au taux de NRD général.La plupart des trafics vers ces domaines étaient dirigés vers quelques acteurs majeurs, avec 35% de ce trafic identifié comme suspect.
## Recommandations
Microsoft recommande les atténuations suivantes pour réduire l'impact de cette menace.Vérifiez la carte de recommandations pour l'état de déploiement des atténuations surveillées.
- Encourager les utilisateurs à utiliser Microsoft Edge et d'autres navigateurs Web qui prennent en charge [SmartScreen] (https://learn.microsoft.com/microsoft-365/security/defender-endpoint/web-overview?ocid=Magicti_TA_LearnDDoc), qui identifieet bloque des sites Web malveillants, y compris des sites de phishing, des sites d'arnaque et des sites qui hébergent des logiciels malveillants.
- Allumez [Protection en livraison du cloud] (https://learn.microsoft.com/microsoft-365/security/defender-endpoint/configure-lock-at-first-Sight-Microsoft-Defender-Antivirus? Ocid = magicti_ta_learndoc) dans Microsoft Defender Antivirus, ou l'équivalentpour votre produit antivirus, pour couvrir les outils et techniques d'attaquant en évolution rapide.Les protections d'apprentissage automatique basées sur le cloud bloquent une majorité de variantes nouvelles et inconnues.
- Appliquer le MFA sur tous les comptes, supprimer les utilisateurs exclus de MFA et strictement [nécessite MFA] (https: //Learn.microsoft.com/azure/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy?ocid=Magicti_TA_LearnDoc) à partir deTous les appareils, à tous les endroits, à tout moment.
- Activer les méthodes d'authentification sans mot de passe (par exemple, Windows Hello, FIDO Keys ou Microsoft Authenticator) pour les comptes qui prennent en charge sans mot de passe.Pour les comptes qui nécessitent toujours des mots de passe, utilisez des applications Authenticatrices comme Microsoft Authenticator pour MFA.[Reportez-vous à cet article] (https://learn.microsoft.com/azure/active-directory/authentication/concept-authentication-methods?ocid=Magicti_ta_learndoc) pour les différentes méthodes et fonctionnalités d'authentification.
- Pour MFA qui utilise des applications Authenticator, assurez-vous que l'application nécessite qu'un code soit tapé dans la mesure du possible, car de nombreuses intrusions où le MFA a été activé a toujours réussi en raison des utilisateurs qui cliquent sur «Oui» sur l'invite sur leurs téléphones même lorsqu'ils n'étaient pas àLeurs [appareils] (https://learn.microsoft.com/azure/active-directory/authentication/how-to-mfa-number-match?ocid=Magicti_TA_LearnDoc).Reportez-vous à [cet article] (https://learn.microsoft.com/azure/active-directory/authentication/concept-authentication-methods?ocid=Magicti_ta_learndoc) pour un |
Ransomware
Spam
Malware
Tool
Threat
Studies
|
ChatGPT
|
★★★
|
 |
2024-07-23 10:00:00 |
Ce que les prestataires de soins de santé devraient faire après une violation de données médicales What Healthcare Providers Should Do After A Medical Data Breach (lien direct) |
The content of this post is solely the responsibility of the author. LevelBlue does not adopt or endorse any of the views, positions, or information provided by the author in this article.
Healthcare data breaches are on the rise, with a total of 809 data violation cases across the industry in 2023, up from 343 in 2022. The cost of these breaches also soared to $10.93 million last year, an increase of over 53% over the past three years, IBM’s 2023 Cost of a Data Breach report reveals. But data breaches aren’t just expensive, they also harm patient privacy, damage organizational reputation, and erode patient trust in healthcare providers. As data breaches are now largely a matter of “when” not “if”, it’s important to devise a solid data breach response plan. By acting fast to prevent further damage and data loss, you can restore operations as quickly as possible with minimal harm done.
Contain the Breach
Once a breach has been detected, you need to act fast to contain it, so it doesn’t spread. That means disconnecting the affected system from the network, but not turning it off altogether as your forensic team still needs to investigate the situation. Simply unplug the network cable from the router to disconnect it from the internet. If your antivirus scanner has found malware or a virus on the system, quarantine it, so it can be analyzed later. Keep the firewall settings as they are and save all firewall and security logs. You can also take screenshots if needed. It’s also smart to change all access control login details. Strong complex passwords are a basic cybersecurity feature difficult for hackers and software to crack. It’s still important to record old passwords for future investigation. Also, remember to deactivate less-important accounts.
Document the Breach
You then need to document the breach, so forensic investigators can find out what caused it, as well as recommend accurate next steps to secure the network now and prevent future breaches. So, in your report, explain how you came to hear of the breach and relay exactly what was stated in the notification (including the date and time you were notified). Also, document every step you took in response to the breach. This includes the date and time you disconnected systems from the network and changed account credentials and passwords.
If you use artificial intelligence (AI) tools, you’ll also need to consider whether they played a role in the breach, and document this if so. For example, ChatGPT, a popular chatbot and virtual assistant, can successfully exploit zero-day security vulnerabilities 87% of the time, a recent study by researchers at the University of Illinois Urbana-Champaign found. Although AI is increasingly used in healthcare to automate tasks, manage patient data, and even make tailored care recommendations, it does pose a serious risk to patient data integrity despite the other benefits it provides. So, assess whether AI influenced your breach at all, so your organization can make changes as needed to better prevent data breaches in the future.
Report the Breach
Although your first instinct may be to keep the breach under wraps, you’re actually legally required to report it. Under the |
Data Breach
Malware
Tool
Vulnerability
Threat
Studies
Medical
|
ChatGPT
|
★★★
|
 |
2024-07-23 06:00:00 |
ProofPoint ouvre la voie à la protection des personnes et à la défense des données avec un quartier central Proofpoint Leads the Way in Protecting People and Defending Data with a Pivotal Quarter (lien direct) |
Throughout my interactions with customers and prospects, CISOs unanimously point to one constant in the cybersecurity equation: the human element. Now research backs up this anecdotal top note: our 2024 Voice of the CISO report1 published in May found that 74% of CISOs rank human-centric cyber risks as their top concern.
Proofpoint\'s human-centric security platform is the only modern security architecture that takes a comprehensive, adaptive, and effective approach to protect against the three critical human risks impacting organizations-human-targeted threats, data loss and human error.
We are very pleased with Q2, continuing to drive strong business momentum, delivering first-to-market innovations and expanding our ecosystem partnerships.
Business Momentum for Proofpoint Continues in Q2
Our ability to solve our customers\' most complex problems drove strong results in the second quarter of 2024, with notable highlights including:
Revenues and Annual Recurring Revenue (ARR) up in the mid-teens.
Over 500 new enterprise organizations are entrusting Proofpoint as their cybersecurity partner of choice in Q2 2024.
Record business growth in information protection and insider risk management, clearly establishing Proofpoint as the leader in the market.
Continued market validation of our offerings with a healthy customer retention rate of 92%.
Market-first innovations: Redefining Email Security and Enabling Information Protection for AI
This year\'s RSA Conference saw Proofpoint take to the stage to showcase why we are the undisputed leader in human-centric cybersecurity, with our AI-driven innovations recognized as “The 20 Coolest Cybersecurity Products At RSAC 2024” and “The 10 Hot AI Cybersecurity Tools At RSAC 2024”.
We unveiled market-first innovations on two fronts. First, by delivering a single solution to protect against every type of threat, every time, every way a user may encounter it, using every form of detection with complete, adaptive, end-to-end protection across the entire email delivery chain-combining pre-delivery, click time, and post-delivery detection. Second, by announcing the general availability of Data Loss Prevention (DLP) Transform, making responsible Generative AI use a reality for our customers by modernizing DLP with our cross-channel capabilities so that CISOs can now embrace ChatGPT, co-pilots and other AI tools while preventing the exposure of IP.
The Cybersecurity Revolution: Integration is Key
With a cyber threat landscape that is forever evolving in sophistication and complexity, CISOs and CIOs are looking to consolidate their security architecture to span across multiple channels, infrastructures and people. Our “Better Together” strategy is to offer a unified human-centric security platform that integrates with other key solutions in the cyber architecture, namely SASE, EDR, SoC automation (SIEM/SOAR/XDR) and Identity & Access Management.
These components of the cybersecurity architecture are powerful on their own, and their effectiveness compounds when they are well integrated. That is why we have partnered with an ecosystem of market-leading cybersecurity vendors-including Palo Alto Networks, CrowdStrike, Microsoft, CyberArk, Okta and many more. Our joint customers benefit from a defense-in-depth approach and security operations at scale: what all enterprises need to stay ahead of today\'s threats.
Welcoming New Team Members
We are delighted to welcome the newest members of our senior leadership team, who will all play a pivotal part in the growth of their respective regions and organizations. Seasoned global marketing leader Elia Mak joins us as SVP, Brand and Global Communications, and cybersecurity veteran George Lee takes the lead in a strategically-important region for Proofpoint as SVP, Asia Pacific & Japan.
We also welcome Harry Labana as SVP & GM of archiving, compliance, and digital risk, to further our leadership position in the emerging category of digital communications governance, a key aspect of human-centric security.
|
Tool
Threat
Conference
|
ChatGPT
|
★★★
|
 |
2024-07-01 13:00:00 |
Chatgpt 4 peut exploiter 87% des vulnérabilités d'une journée ChatGPT 4 can exploit 87% of one-day vulnerabilities (lien direct) |
> Depuis l'utilisation généralisée et croissante de Chatgpt et d'autres modèles de grande langue (LLM) ces dernières années, la cybersécurité a été une préoccupation majeure.Parmi les nombreuses questions, les professionnels de la cybersécurité se sont demandé à quel point ces outils ont été efficaces pour lancer une attaque.Les chercheurs en cybersécurité Richard Fang, Rohan Bindu, Akul Gupta et Daniel Kang ont récemment réalisé une étude à [& # 8230;]
>Since the widespread and growing use of ChatGPT and other large language models (LLMs) in recent years, cybersecurity has been a top concern. Among the many questions, cybersecurity professionals wondered how effective these tools were in launching an attack. Cybersecurity researchers Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang recently performed a study to […]
|
Tool
Vulnerability
Threat
Studies
|
ChatGPT
|
★★★
|
 |
2024-06-28 17:51:58 |
Qualité sur la quantité: la clé Genai contre-intuitive Quality Over Quantity: the Counter-Intuitive GenAI Key (lien direct) |
>
Cela est de près de deux ans depuis le lancement d'Openai, ce qui entraîne une sensibilisation et un accès accrus à la sensibilisation et à l'accès à des outils d'IA génératifs ....
>
It’s been almost two years since OpenAI launched ChatGPT, driving increased mainstream awareness of and access to Generative AI tools....
|
Tool
|
ChatGPT
|
★★★
|
 |
2024-06-21 09:13:00 |
Les détecteurs de l'IA peuvent-ils nous sauver de Chatgpt?J'ai essayé 6 outils en ligne pour découvrir Can AI detectors save us from ChatGPT? I tried 6 online tools to find out (lien direct) |
Avec l'arrivée soudaine de Chatgpt, les éducateurs et les éditeurs sont confrontés à une poussée inquiétante de soumissions de contenu automatisées.Nous regardons le problème et ce qui peut être fait à ce sujet.
With the sudden arrival of ChatGPT, educators and editors face a worrying surge of automated content submissions. We look at the problem and what can be done about it. |
Tool
|
ChatGPT
|
★★
|
 |
2024-06-06 20:51:21 |
Conseils de confidentialité de Chatgpt: deux façons importantes de limiter les données que vous partagez avec OpenAI ChatGPT privacy tips: Two important ways to limit the data you share with OpenAI (lien direct) |
Vous voulez utiliser des outils d'IA sans compromettre le contrôle de vos données?Voici deux façons de protéger votre vie privée dans le chatbot d'Openai \\.
Want to use AI tools without compromising control of your data? Here are two ways to safeguard your privacy in OpenAI\'s chatbot. |
Tool
|
ChatGPT
|
★★
|
 |
2024-05-27 03:30:55 |
Comment les criminels tirent parti de l'IA pour créer des escroqueries convaincantes How Criminals Are Leveraging AI to Create Convincing Scams (lien direct) |
Les outils d'IA génératifs comme Chatgpt et Google Bard sont parmi les technologies les plus excitantes du monde.Ils ont déjà commencé à révolutionner la productivité, à suralimenter la créativité et à faire du monde un meilleur endroit.Mais comme pour toute nouvelle technologie, l'IA générative a provoqué de nouveaux risques - ou, plutôt, aggravant les anciens risques.Mis à part le potentiel très discuté "Ai apocalypse" qui a dominé les gros titres ces derniers mois, l'IA génératrice a un impact négatif plus immédiat: créer des escroqueries de phishing convaincantes.Les cybercriminels créent des escroqueries beaucoup plus sophistiquées avec une IA générative que ...
Generative AI tools like ChatGPT and Google Bard are some of the most exciting technologies in the world. They have already begun to revolutionize productivity, supercharge creativity, and make the world a better place. But as with any new technology, generative AI has brought about new risks-or, rather, made old risks worse. Aside from the much-discussed potential " AI apocalypse" that has dominated headlines in recent months, generative AI has a more immediate negative impact: creating convincing phishing scams. Cybercriminals create far more sophisticated scams with generative AI than... |
Tool
|
ChatGPT
|
★★
|
 |
2024-05-14 06:00:46 |
Arrêt de cybersécurité du mois: les attaques d'identité qui ciblent la chaîne d'approvisionnement Cybersecurity Stop of the Month: Impersonation Attacks that Target the Supply Chain (lien direct) |
This blog post is part of a monthly series, Cybersecurity Stop of the Month, which explores the ever-evolving tactics of today\'s cybercriminals. It focuses on the critical first three steps in the attack chain in the context of email threats. The goal of this series is to help you understand how to fortify your defenses to protect people and defend data against emerging threats in today\'s dynamic threat landscape.
The critical first three steps of the attack chain-reconnaissance, initial compromise and persistence.
So far in this series, we have examined these types of attacks:
Supplier compromise
EvilProxy
SocGholish
eSignature phishing
QR code phishing
Telephone-oriented attack delivery (TOAD)
Payroll diversion
MFA manipulation
Supply chain compromise
Multilayered malicious QR code attack
In this post, we will look at how adversaries use impersonation via BEC to target the manufacturing supply chain.
Background
BEC attacks are sophisticated schemes that exploit human vulnerabilities and technological weaknesses. A bad actor will take the time to meticulously craft an email that appears to come from a trusted source, like a supervisor or a supplier. They aim to manipulate the email recipient into doing something that serves the attacker\'s interests. It\'s an effective tactic, too. The latest FBI Internet Crime Report notes that losses from BEC attacks exceeded $2.9 billion in 2023.
Manufacturers are prime targets for cybercriminals for these reasons:
Valuable intellectual property. The theft of patents, trade secrets and proprietary processes can be lucrative.
Complex supply chains. Attackers who impersonate suppliers can easily exploit the interconnected nature of supply chains.
Operational disruption. Disruption can cause a lot of damage. Attackers can use it for ransom demands, too.
Financial fraud. Threat actors will try to manipulate these transactions so that they can commit financial fraud. They may attempt to alter bank routing information as part of their scheme, for example.
The scenario
Proofpoint recently caught a threat actor impersonating a legitimate supplier of a leading manufacturer of sustainable fiber-based packaging products. Having compromised the supplier\'s account, the imposter sent an email providing the manufacturer with new banking details, asking that payment for an invoice be sent to a different bank account. If the manufacturer had complied with the request, the funds would have been stolen.
The threat: How did the attack happen?
Here is a closer look at how the attack unfolded:
1. The initial message. A legitimate supplier sent an initial outreach email from their account to the manufacturing company using an email address from their official account. The message included details about a real invoice that was pending payment.
The initial email sent from the supplier.
2. The deceptive message. Unfortunately, subsequent messages were not sent from the supplier, but from a threat actor who was pretending to work there. While this next message also came from the supplier\'s account, the account had been compromised by an attacker. This deceptive email included an attachment that included new bank payment routing information. Proofpoint detected and blocked this impersonation email.
In an attempt to get a response, the threat actor sent a follow-up email using a lookalike domain that ended in “.cam” instead of “.com.” Proofpoint also condemned this message.
An email the attacker sent to mimic the supplier used a lookalike domain.
Detection: How did Proofpoint prevent this attack?
Proofpoint has a multilayered detection stack that uses a sophisticated blend of artificial intelligence (AI) and machine learning (ML) detection |
Ransomware
Data Breach
Tool
Vulnerability
Threat
|
ChatGPT
|
★★
|
 |
2024-05-06 07:54:03 |
Genai alimente la dernière vague des menaces de messagerie modernes GenAI Is Powering the Latest Surge in Modern Email Threats (lien direct) |
Generative artificial intelligence (GenAI) tools like ChatGPT have extensive business value. They can write content, clean up context, mimic writing styles and tone, and more. But what if bad actors abuse these capabilities to create highly convincing, targeted and automated phishing messages at scale?
No need to wonder as it\'s already happening. Not long after the launch of ChatGPT, business email compromise (BEC) attacks, which are language-based, increased across the globe. According to the 2024 State of the Phish report from Proofpoint, BEC emails are now more personalized and convincing in multiple countries. In Japan, there was a 35% increase year-over-year for BEC attacks. Meanwhile, in Korea they jumped 31% and in the UAE 29%. It turns out that GenAI boosts productivity for cybercriminals, too. Bad actors are always on the lookout for low-effort, high-return modes of attack. And GenAI checks those boxes. Its speed and scalability enhance social engineering, making it faster and easier for attackers to mine large datasets of actionable data.
As malicious email threats increase in sophistication and frequency, Proofpoint is innovating to stop these attacks before they reach users\' inboxes. In this blog, we\'ll take a closer look at GenAI email threats and how Proofpoint semantic analysis can help you stop them.
Why GenAI email threats are so dangerous
Verizon\'s 2023 Data Breach Investigations Report notes that three-quarters of data breaches (74%) involve the human element. If you were to analyze the root causes behind online scams, ransomware attacks, credential theft, MFA bypass, and other malicious activities, that number would probably be a lot higher. Cybercriminals also cost organizations over $50 billion in total losses between October 2013 and December 2022 using BEC scams. That represents only a tiny fraction of the social engineering fraud that\'s happening.
Email is the number one threat vector, and these findings underscore why. Attackers find great success in using email to target people. As they expand their use of GenAI to power the next generation of email threats, they will no doubt become even better at it.
We\'re all used to seeing suspicious messages that have obvious red flags like spelling errors, grammatical mistakes and generic salutations. But with GenAI, the game has changed. Bad actors can ask GenAI to write grammatically perfect messages that mimic someone\'s writing style-and do it in multiple languages. That\'s why businesses around the globe now see credible malicious email threats coming at their users on a massive scale.
How can these threats be stopped? It all comes down to understanding a message\'s intent.
Stop threats before they\'re delivered with semantic analysis
Proofpoint has the industry\'s first predelivery threat detection engine that uses semantic analysis to understand message intent. Semantic analysis is a process that is used to understand the meaning of words, phrases and sentences within a given context. It aims to extract the underlying meaning and intent from text data.
Proofpoint semantic analysis is powered by a large language model (LLM) engine to stop advanced email threats before they\'re delivered to users\' inboxes in both Microsoft 365 and Google Workspace.
It doesn\'t matter what words are used or what language the email is written in. And the weaponized payload that\'s included in the email (e.g., URL, QR code, attached file or something else) doesn\'t matter, either. With Proofpoint semantic analysis, our threat detection engines can understand what a message means and what attackers are trying to achieve.
An overview of how Proofpoint uses semantic analysis.
How it works
Proofpoint Threat Protection now includes semantic analysis as an extra layer of threat detection. Emails must pass through an ML-based threat detection engine, which analyzes them at a deeper level. And it does |
Ransomware
Data Breach
Tool
Vulnerability
Threat
|
ChatGPT
|
★★★
|
 |
2024-05-01 05:12:14 |
Quelle est la meilleure façon d'arrêter la perte de données Genai?Adopter une approche centrée sur l'homme What\\'s the Best Way to Stop GenAI Data Loss? Take a Human-Centric Approach (lien direct) |
Chief information security officers (CISOs) face a daunting challenge as they work to integrate generative AI (GenAI) tools into business workflows. Robust data protection measures are important to protect sensitive data from being leaked through GenAI tools. But CISOs can\'t just block access to GenAI tools entirely. They must find ways to give users access because these tools increase productivity and drive innovation. Unfortunately, legacy data loss prevention (DLP) tools can\'t help with achieving the delicate balance between security and usability.
Today\'s release of Proofpoint DLP Transform changes all that. It provides a modern alternative to legacy DLP tools in a single, economically attractive package. Its innovative features help CISOs strike the right balance between protecting data and usability. It\'s the latest addition to our award-winning DLP solution, which was recognized as a 2024 Gartner® Peer Insights™ Customers\' Choice for Data Loss Prevention. Proofpoint was the only vendor that placed in the upper right “Customers\' Choice” Quadrant.
In this blog, we\'ll dig into some of our latest research about GenAI and data loss risks. And we\'ll explain how Proofpoint DLP Transform provides you with a human-centric approach to reduce those risks.
GenAI increases data loss risks
Users can make great leaps in productivity with ChatGPT and other GenAI tools. However, GenAI also introduces a new channel for data loss. Employees often enter confidential data into these tools as they use them to expedite their tasks.
Security pros are worried, too. Recent Proofpoint research shows that:
Generative AI is the fastest-growing area of concern for CISOs
59% of board members believe that GenAI is a security risk for their business
“Browsing GenAI sites” is one of the top five alert scenarios configured by companies that use Proofpoint Information Protection
Valuable business data like mergers and acquisitions (M&A) documents, supplier contracts, and price lists are listed as the top data to protect
A big problem faced by CISOs is that legacy DLP tools can\'t capture user behavior and respond to natural language processing-based user interfaces. This leaves security gaps. That\'s why they often use blunt tools like web filtering to block employees from using GenAI apps altogether.
You can\'t enforce acceptable use policies for GenAI if you don\'t understand your content and how employees are interacting with it. If you want your employees to use these tools without putting your data security at risk, you need to take a human-centric approach to data loss.
A human-centric approach stops data loss
With a human-centric approach, you can detect data loss risk across endpoints and cloud apps like Microsoft 365, Google Workspace and Salesforce with speed. Insights into user intent allow you to move fast and take the right steps to respond to data risk.
Proofpoint DLP Transform takes a human-centric approach to solving the security gaps with GenAI. It understands employee behavior as well as the data that they are handling. It surgically allows and disallows employees to use GenAI tools such as OpenAI ChatGPT and Google Gemini based on employee behavior and content inputs, even if the data has been manipulated or has gone through multiple channels (email, web, endpoint or cloud) before reaching it.
Proofpoint DLP Transform accurately identifies sensitive content using classical content and LLM-powered data classifiers and provides deep visibility into user behavior. This added context enables analysts to reach high-fidelity verdicts about data risk across all key channels including email, cloud, and managed and unmanaged endpoints.
With a unified console and powerful analytics, Proofpoint DLP Transform can accelerate incident resolution natively or as part of the security operations (SOC) ecosystem. It is built on a cloud-native architecture and features modern privacy controls. Its lightweight and highly stable user-mode agent is unique in |
Tool
Medical
Cloud
|
ChatGPT
|
★★★
|
 |
2024-04-17 16:37:00 |
Genai: un nouveau mal de tête pour les équipes de sécurité SaaS GenAI: A New Headache for SaaS Security Teams (lien direct) |
L'introduction du Chatgpt d'Open Ai \\ a été un moment déterminant pour l'industrie du logiciel, touchant une course Genai avec sa version de novembre 2022.Les fournisseurs SaaS se précipitent désormais pour mettre à niveau les outils avec des capacités de productivité améliorées qui sont motivées par une IA générative.
Parmi une large gamme d'utilisations, les outils Genai permettent aux développeurs de créer plus facilement des logiciels, d'aider les équipes de vente dans la rédaction de courrier électronique banal,
The introduction of Open AI\'s ChatGPT was a defining moment for the software industry, touching off a GenAI race with its November 2022 release. SaaS vendors are now rushing to upgrade tools with enhanced productivity capabilities that are driven by generative AI.
Among a wide range of uses, GenAI tools make it easier for developers to build software, assist sales teams in mundane email writing, |
Tool
Cloud
|
ChatGPT
|
★★
|
 |
2024-04-15 10:13:05 |
Les IA comme ChatGPT aident-elles réellement les étudiants en informatique ? (lien direct) |
Deux études explorent l'impact des générateurs de code IA sur l'apprentissage des étudiants novices en programmation, révélant à la fois les avantages et les pièges de ces outils puissants qui transforment l'enseignement de l'informatique. |
Tool
|
ChatGPT
|
★★★
|
 |
2024-04-10 10:12:47 |
Mémoire de sécurité: TA547 cible les organisations allemandes avec Rhadamanthys Stealer Security Brief: TA547 Targets German Organizations with Rhadamanthys Stealer (lien direct) |
Ce qui s'est passé
Proofpoint a identifié TA547 ciblant les organisations allemandes avec une campagne de courriel livrant des logiciels malveillants de Rhadamanthys.C'est la première fois que les chercheurs observent TA547 utiliser des Rhadamanthys, un voleur d'informations utilisé par plusieurs acteurs de menaces cybercriminaux.De plus, l'acteur a semblé utiliser un script PowerShell que les chercheurs soupçonnent a été généré par un modèle grand langage (LLM) tel que Chatgpt, Gemini, Copilot, etc.
Les e-mails envoyés par l'acteur de menace ont usurpé l'identité de la société de vente au détail allemande Metro prétendant se rapporter aux factures.
De: Metro!
Sujet: Rechnung No: 31518562
Attachement: in3 0gc- (94762) _6563.zip
Exemple TA547 Courriel imitant l'identité de la société de vente au détail allemande Metro.
Les e-mails ont ciblé des dizaines d'organisations dans diverses industries en Allemagne.Les messages contenaient un fichier zip protégé par mot de passe (mot de passe: mar26) contenant un fichier LNK.Lorsque le fichier LNK a été exécuté, il a déclenché PowerShell pour exécuter un script PowerShell distant.Ce script PowerShell a décodé le fichier exécutable Rhadamanthys codé de base64 stocké dans une variable et l'a chargé en tant qu'assemblage en mémoire, puis a exécuté le point d'entrée de l'assemblage.Il a par la suite chargé le contenu décodé sous forme d'un assemblage en mémoire et a exécuté son point d'entrée.Cela a essentiellement exécuté le code malveillant en mémoire sans l'écrire sur le disque.
Notamment, lorsqu'il est désabuscée, le deuxième script PowerShell qui a été utilisé pour charger les rhadamanthys contenait des caractéristiques intéressantes non couramment observées dans le code utilisé par les acteurs de la menace (ou les programmeurs légitimes).Plus précisément, le script PowerShell comprenait un signe de livre suivi par des commentaires grammaticalement corrects et hyper spécifiques au-dessus de chaque composant du script.Il s'agit d'une sortie typique du contenu de codage généré par LLM et suggère que TA547 a utilisé un certain type d'outil compatible LLM pour écrire (ou réécrire) le PowerShell, ou copié le script à partir d'une autre source qui l'avait utilisé.
Exemple de PowerShell soupçonné d'être écrit par un LLM et utilisé dans une chaîne d'attaque TA547.
Bien qu'il soit difficile de confirmer si le contenu malveillant est créé via LLMS & # 8211;Des scripts de logiciels malveillants aux leurres d'ingénierie sociale & # 8211;Il existe des caractéristiques d'un tel contenu qui pointent vers des informations générées par la machine plutôt que générées par l'homme.Quoi qu'il en soit, qu'il soit généré par l'homme ou de la machine, la défense contre de telles menaces reste la même.
Attribution
TA547 est une menace cybercriminale à motivation financière considérée comme un courtier d'accès initial (IAB) qui cible diverses régions géographiques.Depuis 2023, TA547 fournit généralement un rat Netsupport mais a parfois livré d'autres charges utiles, notamment Stealc et Lumma Stealer (voleurs d'informations avec des fonctionnalités similaires à Rhadamanthys).Ils semblaient favoriser les pièces javascript zippées comme charges utiles de livraison initiales en 2023, mais l'acteur est passé aux LNK compressées début mars 2024. En plus des campagnes en Allemagne, d'autres ciblage géographique récent comprennent des organisations en Espagne, en Suisse, en Autriche et aux États-Unis.
Pourquoi est-ce important
Cette campagne représente un exemple de certains déplacements techniques de TA547, y compris l'utilisation de LNK comprimés et du voleur Rhadamanthys non observé auparavant.Il donne également un aperçu de la façon dont les acteurs de la menace tirent parti de contenu probable généré par LLM dans les campagnes de logiciels malveillants.
Les LLM peuvent aider les acteurs de menace à comprendre les chaînes d'attaque plus sophistiquées utilisées |
Malware
Tool
Threat
|
ChatGPT
|
★★
|
 |
2024-04-10 10:00:00 |
Les risques de sécurité du chat Microsoft Bing AI pour le moment The Security Risks of Microsoft Bing AI Chat at this Time (lien direct) |
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.
AI has long since been an intriguing topic for every tech-savvy person, and the concept of AI chatbots is not entirely new. In 2023, AI chatbots will be all the world can talk about, especially after the release of ChatGPT by OpenAI. Still, there was a past when AI chatbots, specifically Bing’s AI chatbot, Sydney, managed to wreak havoc over the internet and had to be forcefully shut down. Now, in 2023, with the world relatively more technologically advanced, AI chatbots have appeared with more gist and fervor. Almost every tech giant is on its way to producing large Language Model chatbots like chatGPT, with Google successfully releasing its Bard and Microsoft and returning to Sydney. However, despite the technological advancements, it seems that there remains a significant part of the risks that these tech giants, specifically Microsoft, have managed to ignore while releasing their chatbots.
What is Microsoft Bing AI Chat Used for?
Microsoft has released the Bing AI chat in collaboration with OpenAI after the release of ChatGPT. This AI chatbot is a relatively advanced version of ChatGPT 3, known as ChatGPT 4, promising more creativity and accuracy. Therefore, unlike ChatGPT 3, the Bing AI chatbot has several uses, including the ability to generate new content such as images, code, and texts. Apart from that, the chatbot also serves as a conversational web search engine and answers questions about current events, history, random facts, and almost every other topic in a concise and conversational manner. Moreover, it also allows image inputs, such that users can upload images in the chatbot and ask questions related to them.
Since the chatbot has several impressive features, its use quickly spread in various industries, specifically within the creative industry. It is a handy tool for generating ideas, research, content, and graphics. However, one major problem with its adoption is the various cybersecurity issues and risks that the chatbot poses. The problem with these cybersecurity issues is that it is not possible to mitigate them through traditional security tools like VPN, antivirus, etc., which is a significant reason why chatbots are still not as popular as they should be.
Is Microsoft Bing AI Chat Safe?
Like ChatGPT, Microsoft Bing Chat is fairly new, and although many users claim that it is far better in terms of responses and research, its security is something to remain skeptical over. The modern version of the Microsoft AI chatbot is formed in partnership with OpenAI and is a better version of ChatGPT. However, despite that, the chatbot has several privacy and security issues, such as:
The chatbot may spy on Microsoft employees through their webcams.
Microsoft is bringing ads to Bing, which marketers often use to track users and gather personal information for targeted advertisements.
The chatbot stores users\' information, and certain employees can access it, which breaches users\' privacy. - Microsoft’s staff can read chatbot conversations; therefore, sharing sensitive information is vulnerable.
The chatbot can be used to aid in several cybersecurity attacks, such as aiding in spear phishing attacks and creating ransomware codes.
Bing AI chat has a feature that lets the chatbot “see” what web pages are open on the users\' other tabs.
The chatbot has been known to be vulnerable to prompt injection attacks that leave users vulnerable to data theft and scams.
Vulnerabilities in the chatbot have led to data le |
Ransomware
Tool
Vulnerability
|
ChatGPT
|
★★
|
 |
2024-04-04 17:04:16 |
Les cybercriminels répartissent les logiciels malveillants à travers les pages Facebook imitant les marques d'IA Cybercriminals are spreading malware through Facebook pages impersonating AI brands (lien direct) |
Les cybercriminels prennent le contrôle des pages Facebook et les utilisent pour annoncer de faux logiciels d'intelligence artificielle générative chargés de logiciels malveillants. & Nbsp;Selon des chercheurs de la société de cybersécurité Bitdefender, les CyberCrooks profitent de la popularité des nouveaux outils génératifs d'IA et utilisent «malvertising» pour usurper l'identité de produits légitimes comme MidJourney, Sora AI, Chatgpt 5 et
Cybercriminals are taking over Facebook pages and using them to advertise fake generative artificial intelligence software loaded with malware. According to researchers at the cybersecurity company Bitdefender, the cybercrooks are taking advantage of the popularity of new generative AI tools and using “malvertising” to impersonate legitimate products like Midjourney, Sora AI, ChatGPT 5 and |
Malware
Tool
|
ChatGPT
|
★★
|
 |
2024-03-19 05:00:28 |
Le rapport du paysage de la perte de données 2024 explore la négligence et les autres causes communes de perte de données The 2024 Data Loss Landscape Report Explores Carelessness and Other Common Causes of Data Loss (lien direct) |
La perte de données est un problème de personnes, ou plus précisément, un problème de personnes imprudentes.C'est la conclusion de notre nouveau rapport, le paysage de la perte de données 2024, que Proofpoint lance aujourd'hui.
Nous avons utilisé des réponses à l'enquête de 600 professionnels de la sécurité et des données de la protection des informations de Proofpoint pour explorer l'état actuel de la prévention de la perte de données (DLP) et des menaces d'initiés.Dans notre rapport, nous considérons également ce qui est susceptible de venir ensuite dans cet espace à maturation rapide.
De nombreuses entreprises considèrent aujourd'hui toujours leur programme DLP actuel comme émergent ou évoluant.Nous voulions donc identifier les défis actuels et découvrir des domaines d'opportunité d'amélioration.Les praticiens de 12 pays et 17 industries ont répondu aux questions allant du comportement des utilisateurs aux conséquences réglementaires et partageaient leurs aspirations à l'état futur de DLP \\.
Ce rapport est une première pour Proofpoint.Nous espérons que cela deviendra une lecture essentielle pour toute personne impliquée dans la tenue de sécuriser les données.Voici quelques thèmes clés du rapport du paysage de la perte de données 2024.
La perte de données est un problème de personnes
Les outils comptent, mais la perte de données est définitivement un problème de personnes.2023 Données de Tessian, une entreprise de point de preuve, montre que 33% des utilisateurs envoient une moyenne d'un peu moins de deux e-mails mal dirigés chaque année.Et les données de la protection de l'information ProofPoint suggèrent que à peu de 1% des utilisateurs sont responsables de 90% des alertes DLP dans de nombreuses entreprises.
La perte de données est souvent causée par la négligence
Les initiés malveillants et les attaquants externes constituent une menace significative pour les données.Cependant, plus de 70% des répondants ont déclaré que les utilisateurs imprudents étaient une cause de perte de données pour leur entreprise.En revanche, moins de 50% ont cité des systèmes compromis ou mal configurés.
La perte de données est répandue
La grande majorité des répondants de notre enquête ont signalé au moins un incident de perte de données.Les incidents moyens globaux par organisation sont de 15. L'échelle de ce problème est intimidante, car un travail hybride, l'adoption du cloud et des taux élevés de roulement des employés créent tous un risque élevé de données perdues.
La perte de données est dommageable
Plus de la moitié des répondants ont déclaré que les incidents de perte de données entraînaient des perturbations commerciales et une perte de revenus.Celles-ci ne sont pas les seules conséquences dommageables.Près de 40% ont également déclaré que leur réputation avait été endommagée, tandis que plus d'un tiers a déclaré que leur position concurrentielle avait été affaiblie.De plus, 36% des répondants ont déclaré avoir été touchés par des pénalités réglementaires ou des amendes.
Préoccupation croissante concernant l'IA génératrice
De nouvelles alertes déclenchées par l'utilisation d'outils comme Chatgpt, Grammarly et Google Bard ne sont devenus disponibles que dans la protection des informations de Proofpoint cette année.Mais ils figurent déjà dans les cinq règles les plus implémentées parmi nos utilisateurs.Avec peu de transparence sur la façon dont les données soumises aux systèmes d'IA génératives sont stockées et utilisées, ces outils représentent un nouveau canal dangereux pour la perte de données.
DLP est bien plus que de la conformité
La réglementation et la législation ont inspiré de nombreuses initiatives précoces du DLP.Mais les praticiens de la sécurité disent maintenant qu'ils sont davantage soucieux de protéger la confidentialité des utilisateurs et des données commerciales sensibles.
Des outils de DLP ont évolué pour correspondre à cette progression.De nombreux outils |
Tool
Threat
Legislation
Cloud
|
ChatGPT
|
★★★
|
 |
2024-03-13 21:59:23 |
Chatgpt déverse les secrets dans une nouvelle attaque POC ChatGPT Spills Secrets in Novel PoC Attack (lien direct) |
La recherche est plus récente dans un ensemble croissant de travaux pour mettre en évidence les faiblesses troublantes des outils génératifs largement utilisés.
Research is latest in a growing body of work to highlight troubling weaknesses in widely used generative AI tools. |
Tool
|
ChatGPT
|
★★
|
|