What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
ProofPoint.webp 2024-05-06 07:54:03 GenAI Is Powering the Latest Surge in Modern Email Threats
Generative artificial intelligence (GenAI) tools like ChatGPT have extensive business value. They can write content, clean up context, mimic writing styles and tone, and more. But what if bad actors abuse these capabilities to create highly convincing, targeted and automated phishing messages at scale? No need to wonder, as it's already happening. Not long after the launch of ChatGPT, business email compromise (BEC) attacks, which are language-based, increased across the globe. According to the 2024 State of the Phish report from Proofpoint, BEC emails are now more personalized and convincing in multiple countries. In Japan, there was a 35% increase year-over-year for BEC attacks. Meanwhile, in Korea they jumped 31% and in the UAE 29%. It turns out that GenAI boosts productivity for cybercriminals, too. Bad actors are always on the lookout for low-effort, high-return modes of attack. And GenAI checks those boxes. Its speed and scalability enhance social engineering, making it faster and easier for attackers to mine large datasets of actionable data. As malicious email threats increase in sophistication and frequency, Proofpoint is innovating to stop these attacks before they reach users' inboxes. In this blog, we'll take a closer look at GenAI email threats and how Proofpoint semantic analysis can help you stop them.

Why GenAI email threats are so dangerous
Verizon's 2023 Data Breach Investigations Report notes that three-quarters of data breaches (74%) involve the human element. If you were to analyze the root causes behind online scams, ransomware attacks, credential theft, MFA bypass, and other malicious activities, that number would probably be a lot higher. Cybercriminals also cost organizations over $50 billion in total losses between October 2013 and December 2022 using BEC scams. That represents only a tiny fraction of the social engineering fraud that's happening. Email is the number one threat vector, and these findings underscore why. Attackers find great success in using email to target people. As they expand their use of GenAI to power the next generation of email threats, they will no doubt become even better at it. We're all used to seeing suspicious messages that have obvious red flags like spelling errors, grammatical mistakes and generic salutations. But with GenAI, the game has changed. Bad actors can ask GenAI to write grammatically perfect messages that mimic someone's writing style, and do it in multiple languages. That's why businesses around the globe now see credible malicious email threats coming at their users on a massive scale. How can these threats be stopped? It all comes down to understanding a message's intent.

Stop threats before they're delivered with semantic analysis
Proofpoint has the industry's first predelivery threat detection engine that uses semantic analysis to understand message intent. Semantic analysis is a process that is used to understand the meaning of words, phrases and sentences within a given context. It aims to extract the underlying meaning and intent from text data. Proofpoint semantic analysis is powered by a large language model (LLM) engine to stop advanced email threats before they're delivered to users' inboxes in both Microsoft 365 and Google Workspace. It doesn't matter what words are used or what language the email is written in. And the weaponized payload that's included in the email (e.g., URL, QR code, attached file or something else) doesn't matter, either.
With Proofpoint semantic analysis, our threat detection engines can understand what a message means and what attackers are trying to achieve. [Figure: An overview of how Proofpoint uses semantic analysis.]

How it works
Proofpoint Threat Protection now includes semantic analysis as an extra layer of threat detection. Emails must pass through an ML-based threat detection engine, which analyzes them at a deeper level. And it does Ransomware Data Breach Tool Vulnerability Threat ChatGPT ★★★
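Proofpoint does not publish the internals of its engine, so the snippet below is only a minimal sketch of the general idea the entry describes: classifying a message's intent with a language model rather than matching keywords. It uses an off-the-shelf zero-shot model from Hugging Face; the intent labels, threshold, and triage logic are all hypothetical.

```python
# Minimal sketch of intent-based email triage with a zero-shot NLI model.
# This is NOT Proofpoint's engine; labels and threshold are illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

INTENT_LABELS = [
    "invoice or payment redirection request",   # classic BEC intent
    "credential harvesting lure",
    "benign business correspondence",
]

def triage(email_body: str, block_threshold: float = 0.8) -> str:
    """Classify the message's intent and decide a disposition."""
    result = classifier(email_body, candidate_labels=INTENT_LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    if top_label != "benign business correspondence" and top_score >= block_threshold:
        return f"QUARANTINE ({top_label}, score={top_score:.2f})"
    return f"DELIVER (top intent: {top_label}, score={top_score:.2f})"

print(triage("Hi, our bank details changed. Please wire today's invoice "
             "to the new account below and keep this confidential."))
```

The point of the sketch is that the verdict keys off what the message is trying to get the recipient to do, so a reworded or translated lure can still map to the same hostile intent.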
bleepingcomputer.webp 2024-04-10 12:12:40 Malicious PowerShell script pushing malware looks AI-written
A threat actor is using a PowerShell script that was likely created with the help of an artificial intelligence system such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's Copilot. [...]
Malware Threat ChatGPT ★★★
ProofPoint.webp 2024-04-10 10:12:47 Security Brief: TA547 Targets German Organizations with Rhadamanthys Stealer
What happened
Proofpoint identified TA547 targeting German organizations with an email campaign delivering Rhadamanthys malware. This is the first time researchers have observed TA547 use Rhadamanthys, an information stealer used by multiple cybercriminal threat actors. Additionally, the actor appeared to use a PowerShell script that researchers suspect was generated by a large language model (LLM) such as ChatGPT, Gemini, Copilot, etc.

The emails sent by the threat actor impersonated the German retail company Metro, purporting to relate to invoices.

From: Metro!
Subject: Rechnung No: 31518562
Attachment: in3 0gc- (94762) _6563.zip

[Figure: Example TA547 email impersonating the German retail company Metro.]

The emails targeted dozens of organizations in various industries in Germany. The messages contained a password-protected ZIP file (password: mar26) containing an LNK file. When the LNK file was executed, it triggered PowerShell to run a remote PowerShell script. This PowerShell script decoded the Base64-encoded Rhadamanthys executable stored in a variable, loaded it as an assembly in memory, and then executed the assembly's entry point. This essentially executed the malicious code in memory without writing it to disk.

Notably, when deobfuscated, the second PowerShell script that was used to load Rhadamanthys contained interesting characteristics not commonly observed in code used by threat actors (or legitimate programmers). Specifically, the PowerShell script included a pound sign (#) followed by grammatically correct, hyper-specific comments above each component of the script. This is typical output of LLM-generated coding content, and it suggests TA547 used some type of LLM-enabled tool to write (or rewrite) the PowerShell, or copied the script from another source that had used it.

[Figure: Example of PowerShell suspected to be written by an LLM and used in a TA547 attack chain.]

While it is difficult to confirm whether malicious content is created via LLMs, from malware scripts to social engineering lures, there are characteristics of such content that point to machine-generated rather than human-generated information. Regardless, whether human- or machine-generated, defense against such threats remains the same.

Attribution
TA547 is a financially motivated cybercriminal threat considered an initial access broker (IAB) that targets various geographic regions. Since 2023, TA547 typically delivers NetSupport RAT but has occasionally delivered other payloads, including StealC and Lumma Stealer (information stealers with functionality similar to Rhadamanthys). The actor appeared to favor zipped JavaScript attachments as initial delivery payloads in 2023 but switched to compressed LNKs in early March 2024. In addition to campaigns in Germany, other recent geographic targeting includes organizations in Spain, Switzerland, Austria, and the United States.

Why it matters
This campaign represents an example of some technique shifts by TA547, including the use of compressed LNKs and the previously unobserved Rhadamanthys stealer. It also provides insight into how threat actors are leveraging likely LLM-generated content in malware campaigns. LLMs can help threat actors understand more sophisticated attack chains used Malware Tool Threat ChatGPT ★★
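The loader traits described above (Base64 decode, in-memory assembly load, entry-point invocation) are visible in PowerShell script block logging whether or not an LLM wrote the comments, so they make a reasonable hunting target. Below is a hedged detection sketch, assuming Event ID 4104 records have been exported as JSON lines with ScriptBlockText and Computer fields; adjust the field names to your own logging pipeline.

```python
# Hedged detection sketch: flag PowerShell script blocks that combine the
# in-memory-loader traits described in the entry above. The JSONL format and
# field names are assumptions; adapt them to how your environment exports
# Event ID 4104 (script block logging) records.
import json
import re

SUSPICIOUS = [
    re.compile(r"FromBase64String", re.IGNORECASE),
    re.compile(r"\[System\.Reflection\.Assembly\]::Load", re.IGNORECASE),
    re.compile(r"\.EntryPoint\.Invoke", re.IGNORECASE),
]

def score_script_block(script_text: str) -> int:
    """Count how many of the loader traits appear in one script block."""
    return sum(bool(p.search(script_text)) for p in SUSPICIOUS)

def scan(log_path: str, min_hits: int = 2):
    """Yield (host, hit count) for blocks matching at least min_hits traits."""
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            event = json.loads(line)          # one 4104 event per line (assumed)
            hits = score_script_block(event.get("ScriptBlockText", ""))
            if hits >= min_hits:
                yield event.get("Computer", "?"), hits

if __name__ == "__main__":
    for host, hits in scan("scriptblock_4104.jsonl"):
        print(f"{host}: {hits}/3 in-memory loader traits matched")
```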
ProofPoint.webp 2024-03-19 05:00:28 The 2024 Data Loss Landscape Report Explores Carelessness and Other Common Causes of Data Loss
Data loss is a people problem, or more precisely, a careless people problem. That's the conclusion of our new report, The 2024 Data Loss Landscape, which Proofpoint is launching today.

We used survey responses from 600 security professionals and data from Proofpoint Information Protection to explore the current state of data loss prevention (DLP) and insider threats. In our report, we also consider what is likely to come next in this rapidly maturing space. Many companies today still consider their current DLP program emerging or evolving, so we wanted to identify current challenges and uncover areas of opportunity for improvement. Practitioners from 12 countries and 17 industries answered questions ranging from user behavior to regulatory consequences, and shared their aspirations for DLP's future state. This report is a first for Proofpoint. We hope it will become essential reading for anyone involved in keeping data secure. Here are some key themes from The 2024 Data Loss Landscape report.

Data loss is a people problem
Tools matter, but data loss is definitely a people problem. 2023 data from Tessian, a Proofpoint company, shows that 33% of users send an average of just under two misdirected emails every year. And data from Proofpoint Information Protection suggests that as few as 1% of users are responsible for 90% of DLP alerts in many companies.

Data loss is often caused by carelessness
Malicious insiders and external attackers pose a significant threat to data. However, more than 70% of respondents said careless users were a cause of data loss for their company. By contrast, fewer than 50% cited compromised or misconfigured systems.

Data loss is widespread
The vast majority of survey respondents reported at least one data loss incident, with an overall average of 15 incidents per organization. The scale of this problem is daunting, as hybrid work, cloud adoption and high rates of employee turnover all create a high risk of lost data.

Data loss is damaging
More than half of respondents said data loss incidents resulted in business disruption and revenue loss. Those are not the only damaging consequences. Nearly 40% also said their reputation had been damaged, while more than a third said their competitive position had been weakened. In addition, 36% of respondents said they had been hit with regulatory penalties or fines.

Growing concern about generative AI
New alerts triggered by the use of tools like ChatGPT, Grammarly and Google Bard only became available in Proofpoint Information Protection this year, but they already figure among the five most-implemented rules among our users. With little transparency into how data submitted to generative AI systems is stored and used, these tools represent a dangerous new channel for data loss.

DLP is about much more than compliance
Regulation and legislation inspired many early DLP initiatives. But security practitioners now say they are more concerned with protecting user privacy and sensitive business data. DLP tools have evolved to match this progression. Many tools Tool Threat Legislation Cloud ChatGPT ★★★
The_Hackers_News.webp 2024-03-15 17:04:00 Third-Party ChatGPT Plugins Could Lead to Account Takeovers
Cybersecurity researchers have found that third-party plugins available for OpenAI ChatGPT could act as a new attack surface for threat actors looking to gain unauthorized access to sensitive data. According to new research published by Salt Labs, security flaws found directly in ChatGPT and within the ecosystem could allow attackers to install malicious plugins without users' consent
Threat ChatGPT ★★
DarkReading.webp 2024-03-13 12:00:00 Critical ChatGPT Plugin Vulnerabilities Expose Sensitive Data
The vulnerabilities found in ChatGPT plugins - since remediated - heighten the risk of proprietary information being stolen and the threat of account takeover attacks.
Vulnerability Threat ChatGPT ★★
AlienVault.webp 2024-03-07 11:00:00 Securing AI
With the proliferation of AI/ML enabled technologies to deliver business value, the need to protect data privacy and secure AI/ML applications from security risks is paramount. An AI governance framework model like the NIST AI RMF to enable business innovation and manage risk is just as important as adopting guidelines to secure AI. Responsible AI starts with securing AI by design and securing AI with Zero Trust architecture principles.

Vulnerabilities in ChatGPT
A recently discovered vulnerability found in version gpt-3.5-turbo exposed identifiable information. The vulnerability was reported in the news in late November 2023. Repeating a particular word continuously to the chatbot triggered the vulnerability. A group of security researchers with Google DeepMind, Cornell University, CMU, UC Berkeley, ETH Zurich, and the University of Washington studied the "extractable memorization" of training data that an adversary can extract by querying an ML model without prior knowledge of the training dataset. The researchers' report shows an adversary can extract gigabytes of training data from open-source language models. In the vulnerability testing, a newly developed divergence attack on the aligned ChatGPT caused the model to emit training data at a rate 150 times higher. Findings show larger and more capable LLMs are more vulnerable to data extraction attacks, emitting more memorized training data as the volume gets larger. While similar attacks have been documented with unaligned models, the new ChatGPT vulnerability exposed a successful attack on LLM models typically built with the strict guardrails found in aligned models. This raises questions about best practices and methods in how AI systems could better secure LLM models, build training data that is reliable and trustworthy, and protect privacy.

U.S. and UK's bilateral cybersecurity effort on securing AI
The US Cybersecurity and Infrastructure Security Agency (CISA) and the UK's National Cyber Security Centre (NCSC), in cooperation with 21 agencies and ministries from 18 other countries, are supporting the first global guidelines for AI security. The new UK-led guidelines for securing AI as part of the U.S. and UK's bilateral cybersecurity effort were announced at the end of November 2023. The pledge is an acknowledgement of AI risk by nation leaders and government agencies worldwide and is the beginning of international collaboration to ensure the safety and security of AI by design. The Department of Homeland Security (DHS) CISA and UK NCSC joint Guidelines for Secure AI System Development aim to ensure cybersecurity decisions are embedded at every stage of the AI development lifecycle from the start and throughout, and not as an afterthought.

Securing AI by design
Securing AI by design is a key approach to mitigating cybersecurity risks and other vulnerabilities in AI systems. Ensuring the entire AI system development lifecycle process is secure from design to development, deployment, and operations and maintenance is critical to an organization realizing its full benefits. The guidelines documented in the Guidelines for Secure AI System Development align closely with the software development lifecycle practices defined in the NCSC's Secure development and deployment guidance and the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF).

The 4 pillars that embody the Guidelines for Secure AI System Development offer guidance for AI providers of any systems, whether newly created from the ground up or built on top of tools and services provided from Tool Vulnerability Threat Mobile Medical Cloud Technical ChatGPT ★★
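The divergence attack itself is simple to describe: a repeated-word prompt pushes the model off its aligned distribution until it regurgitates memorized text. The sketch below illustrates the measurement side of "extractable memorization" in heavily simplified form: flag model samples that contain long verbatim windows from a reference corpus. The 50-token window and the generate() stub are illustrative assumptions; the researchers' actual pipeline used web-scale indexes, not an in-memory set.

```python
# Simplified sketch of an "extractable memorization" check in the spirit of
# the research described above: sample from a model, then flag long verbatim
# overlaps with a reference corpus. Window size and generate() are assumptions.
from typing import Callable, Iterable

def ngrams(tokens: list[str], n: int) -> Iterable[tuple[str, ...]]:
    """All contiguous n-token windows of a token list."""
    return (tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def build_index(corpus_docs: list[str], n: int = 50) -> set[tuple[str, ...]]:
    """Index every n-token window of the reference corpus."""
    index: set[tuple[str, ...]] = set()
    for doc in corpus_docs:
        index.update(ngrams(doc.split(), n))
    return index

def memorized_spans(sample: str, index: set, n: int = 50) -> int:
    """Count n-token windows of a model sample found verbatim in the corpus."""
    return sum(1 for g in ngrams(sample.split(), n) if g in index)

def audit(generate: Callable[[str], str], prompts: list[str], index: set) -> None:
    """Run prompts through a model wrapper and report suspected leaks."""
    for prompt in prompts:
        hits = memorized_spans(generate(prompt), index)
        if hits:
            print(f"possible training-data leak ({hits} windows): {prompt!r}")
```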
News.webp 2024-03-07 06:27:08 Here's something else AI can do: expose bad infosec to give cyber-crims a toehold in your organization
Singaporean researchers note rising presence of ChatGPT creds in infostealer malware logs. Stolen ChatGPT credentials are a hot commodity on the dark web, according to Singapore-based threat intelligence firm Group-IB, which claims to have found some 225,000 stealer logs containing login details for the service last year.…
Malware Threat ChatGPT ★★★
RiskIQ.webp 2024-03-05 19:03:47 Staying ahead of threat actors in the age of AI
## Snapshot
Over the last year, the speed, scale, and sophistication of attacks have increased alongside the rapid development and adoption of AI. Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries. At the same time, it is also important for us to understand how AI can be potentially misused in the hands of threat actors. In collaboration with OpenAI, today we are publishing research on emerging threats in the age of AI, focusing on identified activity associated with known threat actors, including prompt-injections, attempted misuse of large language models (LLM), and fraud. Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape. You can read OpenAI's blog on the research [here](https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors). Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI. However, Microsoft and our partners continue to study this landscape closely. The objective of Microsoft's partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse. As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models. In addition, we are also deeply committed to using generative AI to disrupt threat actors and leverage the power of new tools, including [Microsoft Copilot for Security](https://www.microsoft.com/security/business/ai-machine-learning/microsoft-security-copilot), to elevate defenders everywhere.

## Activity Overview
### **A principled approach to detecting and blocking threat actors**
The progress of technology creates a demand for strong cybersecurity and safety measures. For example, the White House's Executive Order on AI requires rigorous safety testing and government supervision for AI systems that have major impacts on national and economic security or public health and safety. Our actions enhancing the safeguards of our AI models and partnering with our ecosystem on the safe creation, implementation, and use of these models align with the Executive Order's request for comprehensive AI safety and security standards. In line with Microsoft's leadership across AI and cybersecurity, today we are announcing principles shaping Microsoft's policy and actions mitigating the risks associated with the use of our AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates we track. These principles include:
- **Identification and action against malicious threat actors' use:** Upon detection of the use of any Microsoft AI application programming interfaces (APIs), services, or systems by an identified malicious threat actor, including nation-state APT or APM, or the cybercrime syndicates we track, Microsoft will take appropriate action to disrupt their activities, such as disabling the accounts used, terminating services, or limiting access to resources.
- **Notification to other AI service providers:** When we detect a threat actor's use of another service provider's AI, AI APIs, services, and/or systems, Microsoft will promptly notify the service provider and share relevant data. This enables the service provider to independently verify our findings and take action in accordance with their own policies.
- **Collaboration with other stakeholders:** Microsoft will collaborate with other stakeholders to regularly exchange information a Ransomware Malware Tool Vulnerability Threat Studies Medical Technical APT 28 ChatGPT APT 4 ★★
TechWorm.webp 2024-02-15 20:28:57 Microsoft and OpenAI say hackers are using ChatGPT for Cyberattacks
Microsoft and OpenAI have warned that nation-state hackers are weaponizing artificial intelligence (AI) and large language models (LLMs) to enhance their ongoing cyberattacks. According to a study conducted by Microsoft Threat Intelligence in collaboration with OpenAI, the two companies identified and disrupted five state-affiliated actors that sought to use AI services to support malicious cyber activities. These state-affiliated actors are associated with countries such as Russia, North Korea, Iran, and China.

The five state-affiliated malicious actors included two China-affiliated threat actors known as Charcoal Typhoon (CHROMIUM) and Salmon Typhoon (SODIUM); the Iran-affiliated threat actor known as Crimson Sandstorm (CURIUM); the North Korea-affiliated actor known as Emerald Sleet (THALLIUM); and the Russia-affiliated actor known as Forest Blizzard (STRONTIUM). For example, OpenAI reported that China's Charcoal Typhoon used its services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns. Another example is Iran's Crimson Sandstorm, which used LLMs to generate code snippets related to app and web development, generate content likely for spear-phishing campaigns, and get help developing code to evade detection. Furthermore, Forest Blizzard, the Russian nation-state group, reportedly used OpenAI services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.

OpenAI said Wednesday that it had terminated the identified OpenAI accounts associated with the state-sponsored hacking actors. These actors generally sought to use OpenAI services to query open-source information, translate, find coding errors, and run basic coding tasks, the AI company said. "Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets' jobs, professional networks, and other relationships. Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely," reads the new AI security report published by Microsoft on Wednesday in partnership with OpenAI.

Fortunately, no significant or novel attacks using LLM technology have yet been detected, according to the company. "Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool. Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' use of AI," Microsoft noted in its report.

To address the threat, Microsoft announced a set of principles shaping its policy and actions to combat the abuse of its AI services by advanced persistent threats (APTs), advanced persistent manipulators (APMs) Tool Threat Studies ChatGPT ★★
RecordedFuture.webp 2024-02-14 20:22:20 OpenAI shuts down accounts linked to 5 nation-state hacking groups
OpenAI, the artificial intelligence company behind ChatGPT, said on Wednesday that it terminated accounts on its services being used by threat actors linked to China, Russia, Iran and North Korea. The announcement was made in collaboration with Microsoft - one of the company's major investors - which released a report on Wednesday that detailed how
Threat ChatGPT ★★
SecurityWeek.webp 2024-02-14 18:25:10 Microsoft Catches APTs Using ChatGPT for Vuln Research, Malware Scripting
Microsoft threat hunters say foreign APTs are interacting with OpenAI's ChatGPT to automate malicious vulnerability research, target reconnaissance and malware creation tasks.
Malware Vulnerability Threat ChatGPT ★★
globalsecuritymag.webp 2024-01-23 10:21:41 Cybersecurity: 5 risks to watch in 2024, according to Hiscox
Per the 2023 Hiscox Cyber Risk Management Report, with the growth of models such as ChatGPT making it easier to write convincing phishing emails, employees have found themselves on the front line more than ever - Points de Vue Threat Prediction ChatGPT ★★★★
AlienVault.webp 2023-12-27 11:00:00 Post-pandemic Cybersecurity: Lessons from the global health crisis
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

Beyond 'just' causing mayhem in the outside world, the pandemic also led to a serious and worrying rise in cybersecurity breaches. In 2020 and 2021, businesses saw a whopping 50% increase in the number of attempted breaches. The transition to remote work, outdated healthcare organization technology, the adoption of AI bots in the workplace, and the presence of general uncertainty and fear led to new opportunities for bad actors seeking to exploit and benefit from this global health crisis. In this article, we will take a look at how all of this impacts the state of cybersecurity in the current post-pandemic era, and what conclusions can be drawn.

New world, new vulnerabilities
Worldwide lockdowns led to a rise in remote work opportunities, which was a necessary adjustment to allow employees to continue to earn a living. However, the sudden shift to the work-from-home format also caused a number of challenges and confusion for businesses and remote employees alike. The average person didn't have the IT department a couple of feet away, so they were forced to fend for themselves. Whether it was deciding whether to use a VPN or not, judging whether that email was really a phishing one, or even just plain software updates, everybody had their hands full. With employers busy with training programs, threat actors began intensifying their ransomware-related efforts, resulting in a plethora of high-profile incidents in the last couple of years.

A double-edged digital sword
If the pandemic did one thing, it made us more reliant on both software and digital currencies. You already know where we're going with this: it's fertile ground for cybercrime. Everyone from the Costa Rican government to Nvidia got hit. With the dominance of Bitcoin as a payment method in ransoming, tracking down perpetrators is infinitely more difficult than it used to be. The old adage holds more true than ever: an ounce of prevention is worth a pound of cure. To make matters worse, amongst all that chaos, organizations also had to pivot away from vulnerable, mainstream software solutions. Even if it's just choosing a new image editor or integrating a PDF SDK, it's an increasing burden for businesses that are already trying to modernize or simply maintain.

Actors strike where we're most vulnerable
Healthcare organizations became more important than ever during the global coronavirus pandemic. But this time also saw unprecedented numbers of cybersecurity incidents take place as bad actors exploited outdated cybersecurity measures. The influx of sudden need caused many overburdened healthcare organizations to lose track of key cybersecurity protocols that could help shore up gaps in the existing protective measures. The United States healthcare industry saw a 25% spike in successful data breaches during the pandemic, which resulted in millions of dollars of damages and the loss of privacy for thousands of patients whose data was compromised. Data Breach Vulnerability Threat Studies Prediction ChatGPT ★★
TechRepublic.webp 2023-12-22 22:47:44 ESET Threat Report: ChatGPT Name Abuses, Lumma Stealer Malware Increases, Android SpinOk SDK Spyware's Prevalence
Risk mitigation tips are provided for each of these cybersecurity threats.
Malware Threat Mobile ChatGPT ★★★
ESET.webp 2023-12-22 10:50:20 Key findings from ESET Threat Report H2 2023 – Week in security with Tony Anscombe
How cybercriminals take advantage of the popularity of ChatGPT and other tools of its ilk to direct people to sketchy sites, plus other interesting findings from ESET's latest Threat Report
Tool Threat Studies ChatGPT ★★★★
Watchguard.webp 2023-11-29 00:00:00 The WatchGuard Threat Lab's 2024 Cyber Predictions
Paris, November 29, 2023 – WatchGuard® Technologies, one of the global leaders in unified cybersecurity, has published its 2024 cybersecurity predictions. The report covers the attacks and information security trends that the WatchGuard Threat Lab research team expects to emerge in 2024, such as: the manipulation of AI-based language models (the LLMs, or Large Language Models, that gave rise to tools such as ChatGPT); "vishers" scaling up their malicious operations with AI-based voice chatbots; and hacks of modern VR/MR headsets.

Corey Nachreiner, Chief Security Officer at WatchGuard Technologies, explains: "Every new technology trend opens up new attack vectors for cybercriminals. In 2024, emerging threats targeting businesses and individuals will be even more intense, complex and difficult to manage. With the shortage of skilled cybersecurity professionals, the need for managed service providers (MSPs), unified security and automated platforms to strengthen cybersecurity and protect businesses against an ever-evolving range of threats has never been greater."

Here is a summary of the WatchGuard Threat Lab team's main cybersecurity predictions for 2024:

Prompt engineering will enable manipulation of large language models (LLMs): Businesses and individuals alike are turning to LLMs to improve operational efficiency, but threat actors are learning how to exploit LLMs for their own malicious ends. In 2024, the WatchGuard Threat Lab predicts that a savvy prompt engineer, whether a criminal attacker or a researcher, will crack the code and manipulate an LLM into leaking private data.

Sales of AI-based spear phishing tools will explode on the dark web: Cybercriminals can already buy black-market tools that send unsolicited emails, automatically write convincing copy, and scour the internet and social media for information and intelligence about a particular target. However, many of these tools are still manual, and attackers must target a single user or group at a time. Clearly formatted tasks of this kind lend themselves perfectly to automation via artificial intelligence and machine learning, so AI-powered tools are likely to become best-sellers on the dark web in 2024.

AI-based voice phishing (vishing) will take off in 2024: Although voice over IP (VoIP) and automation technology make it easy to mass-dial thousands of numbers, once a potential victim is on the line, a human scammer is still needed to reel them in, which limits the scale of vishing operations. But in 2024 the situation could change. WatchGuard predicts that the combination of convincing deepfake audio and LLMs capable of carrying on conversations with unsuspecting victims will considerably increase the scale and volume of vishing calls. What's more, these calls might not even Tool Threat Prediction ChatGPT ChatGPT ★★★
ProofPoint.webp 2023-11-28 23:05:04 Proofpoint's 2024 Predictions: Brace for Impact
In the ever-evolving landscape of cybersecurity, defenders find themselves navigating yet another challenging year. Threat actors persistently refine their tactics, techniques, and procedures (TTPs), showcasing adaptability and the rapid iteration of novel and complex attack chains. At the heart of this evolution lies a crucial shift: threat actors now prioritize identity over technology. While the specifics of TTPs and the targeted technology may change, one constant remains: humans and their identities are the most targeted links in the attack chain.

Recent instances of supply chain attacks exemplify this shift, illustrating how adversaries have pivoted from exploiting software vulnerabilities to targeting human vulnerabilities through social engineering and phishing. Notably, the innovative use of generative AI, especially its ability to improve phishing emails, exemplifies a shift towards manipulating human behavior rather than exploiting technological weaknesses. As we reflect on 2023, it becomes evident that cyber threat actors possess the capabilities and resources to adapt their tactics in response to increased security measures such as multi-factor authentication (MFA). Looking ahead to 2024, the trend suggests that threats will persistently revolve around humans, compelling defenders to take a different approach to breaking the attack chain. So, what's on the horizon? The experts at Proofpoint provide insightful predictions for the next 12 months, shedding light on what security teams might encounter and the implications of these trends.

1. Cyber Heists: Casinos Are Just the Tip of the Iceberg
Cyber criminals are increasingly targeting digital supply chain vendors, with a heightened focus on security and identity providers. Aggressive social engineering tactics, including phishing campaigns, are becoming more prevalent. The Scattered Spider group, responsible for ransomware attacks on Las Vegas casinos, showcases the sophistication of these tactics. Phishing help desk employees for login credentials and bypassing MFA through phishing one-time password (OTP) codes are becoming standard practices. These tactics have extended to supply chain attacks, compromising identity provider (IDP) vendors to access valuable customer information. The forecast for 2024 includes the replication and widespread adoption of such aggressive social engineering tactics, broadening the scope of initial compromise attempts beyond the traditional edge device and file transfer appliances.

2. Generative AI: The Double-Edged Sword
The explosive growth of generative AI tools like ChatGPT, FraudGPT and WormGPT brings both promise and peril, but the sky is not falling as far as cybersecurity is concerned. While large language models took the stage, the fear of misuse prompted the U.S. president to issue an executive order in October 2023. At the moment, threat actors are making bank doing other things. Why bother reinventing the model when it's working just fine? But they'll morph their TTPs when detection starts to improve in those areas. On the flip side, more vendors will start injecting AI and large language models into their products and processes to boost their security offerings. Across the globe, privacy watchdogs and customers alike will demand responsible AI policies from technology companies, which means we'll start seeing statements being published about responsible AI policies. Expect both spectacular failures and responsible AI policies to emerge.

3. Mobile Device Phishing: The Rise of Omni-Channel Tactics Takes Centre Stage
A notable trend for 2023 was the dramatic increase in mobile device phishing, and we expect this threat to rise even more in 2024. Threat actors are strategically redirecting victims to mobile interactions, exploiting the vulnerabilities inherent in mobile platforms. Conversational abuse, including conversational smishing, has experienced exponential growth. Multi-touch campaigns aim to lure users away from desktops to mobile devices, utilizing tactics like QR codes and fraudulent voice calls Ransomware Malware Tool Vulnerability Threat Mobile Prediction Prediction ChatGPT ChatGPT ★★★
InfoSecurityMag.webp 2023-11-28 11:40:00 Cybercriminals Hesitant About Using Generative AI
An analysis of dark web forums revealed many threat actors are skeptical about using tools like ChatGPT to launch attacks
Tool Threat ChatGPT ChatGPT ★★
DarkReading.webp 2023-11-10 18:18:00 ChatGPT: OpenAI Attributes Regular Outages to DDoS Attacks
ChatGPT and the associated APIs have been affected by regular outages, with OpenAI citing DDoS attacks as the reason - the Anonymous Sudan group claimed responsibility.
Threat ChatGPT ★★
Blog.webp 2023-11-09 14:01:15 ChatGPT Down? OpenAI Blames Outages on DDoS Attacks
By Waqas: OpenAI and ChatGPT began experiencing service outages on November 8th, and the company is actively working to restore full service. This is a post from HackRead.com. Read the original post: ChatGPT Down? OpenAI Blames Outages on DDoS Attacks
Threat ChatGPT ★★
SecurityWeek.webp 2023-11-09 13:28:03 Major ChatGPT Outage Caused by DDoS Attack
ChatGPT and its API have experienced a major outage due to a DDoS attack apparently launched by Anonymous Sudan.
Threat ChatGPT ★★★
The_Hackers_News.webp 2023-11-07 15:51:00 Offensive and Defensive AI: Let's Chat(GPT) About It
ChatGPT: Productivity tool, great for writing poems, and… a security risk?! In this article, we show how threat actors can exploit ChatGPT, but also how defenders can use it for leveling up their game. ChatGPT is the most swiftly growing consumer application to date. The extremely popular generative AI chatbot has the ability to generate human-like, coherent and contextually relevant responses.
Tool Threat ChatGPT ★★★
SentinelOne.webp 2023-11-07 15:13:03 Predator AI | ChatGPT-Powered Infostealer Takes Aim at Cloud Platforms
An emerging infostealer being sold on Telegram looks to harness generative AI to streamline cyber attacks on cloud services.
Threat Cloud ChatGPT ★★★★
DarkReading.webp 2023-10-19 17:38:00 AI-Powered Israeli 'Cyber Dome' Defense Operation Comes to Life
The Israelis are building a cyber defense system that will use ChatGPT-like generative AI platforms to parse threat intelligence.
Threat ChatGPT ★★★
AlienVault.webp 2023-10-16 10:00:00 Strengthening Cybersecurity: Force multiplication and security efficiency
In the ever-evolving landscape of cybersecurity, the battle between defenders and attackers has historically been marked by an asymmetrical relationship. Within the cybersecurity realm, asymmetry has characterized the relationship between those safeguarding digital assets and those seeking to exploit vulnerabilities. Even within this context, where attackers are typically at a resource disadvantage, data breaches have continued to rise year after year as cyber threats adapt and evolve and utilize asymmetric tactics to their advantage. These include technologies and tactics such as artificial intelligence (AI) and advanced social engineering tools. To effectively combat these threats, companies must rethink their security strategies, concentrating their scarce resources more efficiently and effectively through the concept of force multiplication.

Asymmetrical threats, in the world of cybersecurity, can be summed up as the inherent disparity between adversaries and the tactics employed by the weaker party to neutralize the strengths of the stronger one. The utilization of AI and similar tools further erodes the perceived advantages that organizations believe they gain through increased spending on sophisticated security measures. Recent data from InfoSecurity Magazine, referencing the 2023 Checkpoint study, reveals a disconcerting trend: global cyberattacks increased by 7% between Q1 2022 and Q1 2023. While not significant at first blush, a deeper analysis reveals a more disturbing trend, specifically the use of AI. AI's malicious deployment is exemplified in the following quote from their research: "...we have witnessed several sophisticated campaigns from cyber-criminals who are finding ways to weaponize legitimate tools for malicious gains." Furthermore, the report highlights: "Recent examples include using ChatGPT for code generation that can help less-skilled threat actors effortlessly launch cyberattacks."

As threat actors continue to employ asymmetrical strategies to render organizations' substantial and ever-increasing security investments less effective, organizations must adapt to address this evolving threat landscape. Arguably, one of the most effective methods to confront threat adaptation and asymmetric tactics is through the concept of force multiplication, which enhances relative effectiveness with fewer resources consumed, thereby increasing the efficiency of the security dollar. Efficiency, in the context of cybersecurity, refers to achieving the greatest cumulative effect of cybersecurity efforts with the lowest possible expenditure of resources, including time, effort, and costs. While the concept of efficiency may seem straightforward, applying complex technological and human resources effectively and efficiently in complex domains like security demands more than mere calculations. This subject has been studied, modeled, and debated within the military community for centuries. Military and combat efficiency, a domain with a long history of analysis, Tool Vulnerability Threat Studies Prediction ChatGPT ★★★
ProofPoint.webp 2023-09-26 12:24:36 Modern Trends for Insider Threats and Risks
"External hackers are the only threat to corporate assets." McKinsey rightly called out this claim as a myth back in 2017, yet so much of the industry's focus remains on the external threat landscape. For too long, the cybersecurity community has overestimated (and overspent on) the external threat actor. Yet time and again, we see cases where insider risk becomes an insider threat and leads to undesirable outcomes. But we continue to spend time, money and effort securing applications, assets and data without considering the risks that the people who access these things can present.

When you think about the path an insider threat takes through the attack chain, it is clear there should be ways to prevent insider risks from evolving into insider threats. These measures can include: adding more layers of access; requiring more levels of authentication; requiring more approvals in the data-sharing process; and using other deterrents, whether digital or policy-based. And when an insider threat evades detection and is not blocked, we must rely on technology for identity threat detection and response. Solutions with these capabilities can look for persistence, information gathering, lateral movement, privilege escalation and other signs that an insider threat is actively trying to subvert security processes and controls. We still have the opportunity to stop an insider threat event when data is staged and exfiltrated, or when other impact is imminent. But we must also do what it takes to provide the most complete view of what people are doing in the company's digital ecosystem. That will help prevent insider risks from turning into active insider threats.

A changing landscape with new trends in insider threats
Economic uncertainty creates new scenarios for insider threats. It also amplifies some pre-existing ones. Major change events for companies, such as mergers and acquisitions, divestitures, new partnerships and layoffs, allow insider risks to become insider threats. There are many examples of disgruntled employees causing damage after leaving a company (or before). And employees tempted by "better" opportunities can present an ongoing risk of data exfiltration.

One new trend: insider threats that do not need an insider to stage data for exfiltration. External parties, including purveyors of corporate espionage, pay for access instead. We have seen cases, like the AT&T "unlock" scheme, where employees recruited by bad actors went on to recruit others in the company to engage in nefarious activity. And we have seen cases such as the Cisco insider threat case, where employees destroy a company's infrastructure for malicious reasons.

The emergence of generative AI further underscores the need to change the traditional "outside-in" approach to security. Blocking the use of tools like ChatGPT, Google's Bard AI, Microsoft Copilot and others is not realistic, as many companies will depend on generative AI for productivity gains. Insiders who are careless about protecting internal data when using these hosted platforms are a risk. Mitigating this risk will require implementing a range of controls. (There are already ways to safeguard your data in generative AI like ChatGPT and other platforms Tool Threat ChatGPT ChatGPT ★★
ProofPoint.webp 2023-09-15 09:50:31 The Future of Empowering Cybersecurity Awareness: 5 Use Cases for Generative AI to Boost Your Program
Social engineering threats are increasingly difficult to distinguish from real media. What's worse, they can be released with great speed and at scale. That's because attackers can now use new forms of artificial intelligence (AI), like generative AI, to create convincing impostor articles, images, videos and audio. They can also create compelling phishing emails, as well as believable spoof browser pages and deepfake videos. These well-crafted attacks developed with generative AI are creating new security risks. They can penetrate protective defense layers by exploiting human vulnerabilities, like trust and emotional response. That's the buzz about generative AI. The good news is that the future is wide open to fight fire with fire. There are great possibilities for using a custom-built generative AI tool to help improve your company's cybersecurity awareness program. And in this post, we look at five ways your organization might do that, now or in the future. Let's imagine together how generative AI might help you to improve end users' learning engagement and reduce human risk.

1. Get faster alerts about threats
If your company's threat intelligence exposes a well-designed credential attack targeting employees, you need to be quick to alert and educate users and leadership about the threat. In the future, your company might bring in a generative AI tool that can deliver relevant warnings and alerts to your audiences faster. Generative AI applications can analyze huge amounts of data about emerging threats at greater speed and with more accuracy than traditional methods. Security awareness administrators might run queries such as:
"Analyze internal credential phishing attacks for the past two weeks"
"List BEC attacks for credentials targeting companies like mine right now"
In just a few minutes, the tool could summarize current credential compromise threats and the specific "tells" to look for. You could then ask your generative AI tool to create actionable reporting about that threat data on the fly, which saves time because you're not setting up dashboards. Then, you use the tool to push out threat alerts to the business. It could also produce standard communications like email messages and social channel notifications. You might engage people further by using generative AI to create an eye-catching infographic or a short, animated video in just seconds or minutes. No need to wait days or weeks for a designer to produce that visual content.

2. Design awareness campaigns more nimbly
Say that your security awareness team is planning a campaign to teach employees how to spot attacks targeting their credentials, as AI makes phishing emails more difficult to spot. Your security awareness platform or learning management system (LMS) has a huge library of content you can tap for this effort, but your team is already overworked. In the future, you might adapt a generative AI tool to reduce the manual workload by finding what information is most relevant and providing suggestions for how to use it. A generative AI application could scan your content library for training modules and awareness materials.
For instance, an administrator could make queries such as:
"Sort existing articles for the three biggest risks of credential theft"
"Suggest training assignments that educate about document attachments"
By applying this generative AI use case to searching and filtering, you would shortcut the long and tedious process of looking for material, reading each piece for context, choosing the most relevant content, and deciding how to organize what you've selected. You could also ask the generative AI tool to recommend critical topics missing in the available content. The AI might even produce the basis for a tailored and personalized security campaign to help keep your people engaged. For instance, you could ask the tool to sort content based on nonstandard factors you consider interesting, such as mentioning a geographic region or holiday season.

3. Produce Tool Vulnerability Threat ChatGPT ChatGPT ★★
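The "run queries" idea above is essentially prompt engineering against a general-purpose LLM API. Below is a minimal sketch of what that could look like; it assumes the pre-1.0 openai Python client, an OPENAI_API_KEY environment variable, and an illustrative model name and prompt wording, none of which come from the article.

```python
# Sketch of the "faster threat alerts" use case: ask a general-purpose LLM to
# summarize raw credential-phishing indicators into a user-facing alert.
# Assumes the pre-1.0 openai client; prompt and model name are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize_threats(indicators: list[str]) -> str:
    """Turn raw indicator notes into a short, plain-language staff alert."""
    prompt = (
        "Summarize the following credential-phishing indicators as a short "
        "security alert for non-technical employees, with 3 bullet-point tells:\n"
        + "\n".join(f"- {i}" for i in indicators)
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # keep the alert factual rather than creative
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(summarize_threats([
        "Emails spoofing the IT helpdesk asking users to 're-validate' passwords",
        "Lookalike login page hosted on a newly registered .top domain",
    ]))
```

The same pattern (structured prompt in, formatted communication out) would cover the campaign-design queries as well; only the prompt changes.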
The_Hackers_News.webp 2023-08-30 17:18:00 How to Prevent ChatGPT From Stealing Your Content & Traffic
(direct link)
ChatGPT and similar large language models (LLMs) have added further complexity to the ever-growing online threat landscape. Cybercriminals no longer need advanced coding skills to execute fraud and other damaging attacks against online businesses and customers, thanks to bots-as-a-service, residential proxies, CAPTCHA farms, and other easily accessible tools. Now, the latest technology damaging
Threat ChatGPT ChatGPT ★★
AlienVault.webp 2023-08-14 10:00:00 Building Cybersecurity into the supply chain is essential as threats mount
(direct link)
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. The supply chain, already fragile in the USA, is at severe and significant risk of damage by cyberattacks. According to research analyzed by Forbes, supply chain attacks now account for a huge 62% of all commercial attacks, a clear indication of the scale of the challenge faced by the supply chain and the logistics industry as a whole. There are solutions out there, however, and the simplest of these concerns upskilling supply chain professionals to be aware of cybersecurity systems and threats. In an industry dominated by the need for trust, this is something that perhaps can come naturally for the supply chain.

Building trust and awareness
At the heart of a successful supply chain relationship is trust between partners. Building that trust, and securing high quality business partners, relies on a few factors. Cybersecurity experts and responsible officers will see some familiarity: due diligence, scrutiny over figures, and continuous monitoring. In simple terms, an effective framework of checking and rechecking work, monitored for compliance on all sides. These factors are a key part of new federal cybersecurity rules, according to news agency Reuters. Among other measures are a requirement for companies to have rigorous control over system patching, and measures that would require cloud hosted services to identify foreign customers. These are simple but important steps, and give a hint to supply chain businesses as to what they should be doing: putting in measures to monitor, control, and enact compliance on cybersecurity threats. That being said, it can be the case that the software isn't in place within individual businesses to ensure that level of control. The right tools, and the right personnel, are also essential.

The importance of software
Back in April, the UK's National Cyber Security Centre released details of specific threats made by Russian actors against business infrastructure in the USA and UK. Highlighted in this were specific weaknesses in business systems, including in hardware and software used by millions of businesses worldwide. The message is simple: even industry standard software and devices have their problems, and businesses have to keep track of that. There are two arms to ensure this is completed. Firstly, the business should have a cybersecurity officer in place whose role it is to monitor current measures and ensure they are kept up to date. Secondly, budget and time must be allocated at an executive level, both to promote networking between the business and cybersecurity firms and to ensure that cybersecurity measures are implemented evenly across partner businesses in the chain.

Utilizing AI
There is something of a digital arms race when it comes to artificial intelligence. As ZDNet notes, the lack of clear regulation is providing a lot of leeway for malicious actors to innovate, but for businesses to act, too. While regulations are now coming in, it remains that there is a clear role for AI in prevention. According t Threat Cloud APT 28 ChatGPT ★★
knowbe4.webp 2023-08-10 18:39:58 AI's Role in Cybersecurity: Black Hat USA 2023 Reveals How Large Language Models Are Shaping the Future of Phishing Attacks and Defense
(direct link)
At Black Hat USA 2023, a session led by a team of security researchers, including Fredrik Heiding, Bruce Schneier, Arun Vishwanath and Jeremy Bernstein, unveiled an intriguing experiment. They tested large language models (LLMs) to see how they performed both at writing convincing phishing emails and at detecting them. The technical paper is available as a PDF.

The experiment: crafting phishing emails
The team tested four commercial LLMs, including OpenAI's ChatGPT, Google's Bard, Anthropic's Claude and ChatLlama, in experimental phishing attacks against Harvard students. The experiment was designed to see how well AI technology could produce effective phishing lures. Heiding, a researcher at Harvard, emphasized that such technology has already affected the threat landscape by making it easier to create phishing emails. He said: "GPT changed that. You don't need to be a native English speaker, you don't need to do much. You can enter a quick prompt with just a few data points."

The team sent phishing emails offering Starbucks gift cards to 112 students, comparing ChatGPT with a non-AI model called V-Triad. The results showed that the V-Triad email was the most effective, with a 70% click rate, followed by a V-Triad/ChatGPT combination at 50%, ChatGPT at 30%, and the control group at 20%. In another version of the test, however, ChatGPT performed much better, with a click rate of nearly 50%, while the V-Triad/ChatGPT combination led with nearly 80%. Heiding emphasized that an untrained, general-purpose LLM was able to quickly create highly effective phishing attacks.

Using LLMs for phishing detection
The second part of the experiment focused on how effective LLMs are at determining the intent of suspicious emails. The team used the Starbucks emails from the first part of the experiment and asked the LLMs to determine the intent, whether it was composed by a human or an AI, to identify any suspicious aspects, and to offer advice on how to respond. The results were both surprising and encouraging. The models had high success rates in identifying marketing emails, but struggled with the intent of the V-Triad and ChatGPT phishing emails. They performed better when tasked with identifying suspicious content, with Claude's results highlighted for not only scoring high in the detection tests but also providing sensible advice for users.

The phishing power of LLMs
Overall, Heiding concluded that off-the-shelf LLMs are effective even without being trained on security data. He said: "This is really something anyone can use right now. It's quite powerful." The exper Tool Threat ChatGPT ChatGPT ★★
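For readers curious what "asking an LLM to determine intent" can look like in practice, here is a rough sketch; it is not the researchers' actual setup. The rubric, model name, and JSON output format are assumptions for illustration, again using the pre-1.0 openai Python client.

```python
# Sketch of LLM-based email triage: ask a model to judge an email's intent and
# flag suspicious elements. Rubric and model name are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

RUBRIC = (
    "You are an email security assistant. For the email below, answer in JSON "
    'with keys "intent" (marketing|phishing|personal), "suspicious_elements" '
    '(list of strings) and "recommended_action" (string).\n\nEMAIL:\n'
)

def classify_email(body: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": RUBRIC + body}],
        temperature=0,  # near-deterministic output suits triage use
    )
    return response["choices"][0]["message"]["content"]

print(classify_email(
    "Congratulations! You have been selected for a $25 Starbucks gift card. "
    "Click here within 24 hours to claim your reward."
))
```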
globalsecuritymag.webp 2023-07-25 12:45:41 Netskope Threat Labs study: source code is the most commonly shared category of sensitive data on ChatGPT (direct link) In large enterprises, sensitive data is shared with generative AI applications every hour of the working day. - Investigations Threat ChatGPT ChatGPT ★★★★
Checkpoint.webp 2023-07-19 16:27:24 Facebook Flooded with Ads and Pages for Fake ChatGPT, Google Bard and other AI services, Tricking Users into Downloading Malware
(direct link)
Highlights: Cyber criminals are using Facebook to impersonate popular generative AI brands, including ChatGPT, Google Bard, Midjourney and Jasper. Facebook users are being tricked into downloading content from the fake brand pages and ads. These downloads contain malware, which steals their online passwords (banking, social media, gaming, etc.), crypto wallets and any information saved in their browser. Unsuspecting users are liking and commenting on fake posts, thereby spreading them to their own social networks. Cyber criminals continue to try new ways to steal private information. A new scam uncovered by Check Point Research (CPR) uses Facebook to scam unsuspecting […]
Malware Threat ChatGPT ★★★★
The_Hackers_News.webp 2023-07-18 16:24:00 Go Beyond the Headlines for Deeper Dives into the Cybercriminal Underground
(direct link)
Discover stories about threat actors' latest tactics, techniques, and procedures from Cybersixgill's threat experts each month. Each story brings you details on emerging underground threats, the threat actors involved, and how you can take action to mitigate risks. Learn about the top vulnerabilities and review the latest ransomware and malware trends from the deep and dark web. Stolen ChatGPT
Ransomware Malware Vulnerability Threat ChatGPT ChatGPT ★★
DarkReading.webp 2023-07-17 22:04:00 How AI-Augmented Threat Intelligence Solves Security Shortfalls
(direct link)
Researchers explore how overburdened cyber analysts can improve their threat intelligence jobs by using ChatGPT-like large language models (LLMs).
Threat ChatGPT ★★★
knowbe4.webp 2023-06-27 13:00:00 CyberheistNews Vol 13 #26 [Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams
(direct link)
CyberheistNews Vol 13 #26 | June 27th, 2023

[Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams
The U.S. Federal Trade Commission (FTC) has published a data spotlight outlining the most common text message scams. Phony bank fraud prevention alerts were the most common type of text scam last year. "Reports about texts impersonating banks are up nearly tenfold since 2019 with median reported individual losses of $3,000 last year," the report says.

These are the top five text scams reported by the FTC:
Copycat bank fraud prevention alerts
Bogus "gifts" that can cost you
Fake package delivery problems
Phony job offers
Not-really-from-Amazon security alerts

"People get a text supposedly from a bank asking them to call a number ASAP about suspicious activity or to reply YES or NO to verify whether a transaction was authorized. If they reply, they'll get a call from a phony 'fraud department' claiming they want to 'help get your money back.' What they really want to do is make unauthorized transfers. What's more, they may ask for personal information like Social Security numbers, setting people up for possible identity theft."

Fake gift card offers took second place, followed by phony package delivery problems. "Scammers understand how our shopping habits have changed and have updated their sleazy tactics accordingly," the FTC says. "People may get a text pretending to be from the U.S. Postal Service, FedEx, or UPS claiming there's a problem with a delivery. The text links to a convincing-looking, but utterly bogus, website that asks for a credit card number to cover a small 'redelivery fee.'"

Scammers also target job seekers with bogus job offers in an attempt to steal their money and personal information. "With workplaces in transition, some scammers are using texts to perpetrate old-school forms of fraud, for example, fake 'mystery shopper' jobs or bogus money-making offers for driving around with cars wrapped in ads," the report says. "Other texts target people who post their resumes on employment websites. They claim to offer jobs and even send job seekers checks, usually with instructions to send some of the money to a different address for materials, training, or the like. By the time the check bounces, the person's money, and the phony 'employer,' are long gone."

Finally, scammers impersonate Amazon and send fake security alerts to trick victims into sending money. "People may get what looks like a message from 'Amazon,' asking to verify a big-ticket order they didn't place," the FTC says. "Concerned Ransomware Spam Malware Hack Tool Threat FedEx APT 28 APT 15 ChatGPT ChatGPT ★★
InfoSecurityMag.webp 2023-06-22 16:45:00 #InfosecurityEurope: Experts Highlight Evolving Attack Techniques
(direct link)
Experts discussed growing utilization of ChatGPT by threat actors and evolving identity-based attacks
Threat ChatGPT ChatGPT ★★
AlienVault.webp 2023-06-21 10:00:00 Toward a more resilient SOC: the power of machine learning
(direct link)
A way to manage too much data
To protect the business, security teams need to be able to detect and respond to threats fast. The problem is the average organization generates massive amounts of data every day. Information floods into the Security Operations Center (SOC) from network tools, security tools, cloud services, threat intelligence feeds, and other sources. Reviewing and analyzing all this data in a reasonable amount of time has become a task that is well beyond the scope of human efforts. AI-powered tools are changing the way security teams operate. Machine learning (which is a subset of artificial intelligence, or "AI"), and in particular machine learning-powered predictive analytics, is enhancing threat detection and response in the SOC by providing an automated way to quickly analyze and prioritize alerts.

Machine learning in threat detection
So, what is machine learning (ML)? In simple terms, it is a machine's ability to automate a learning process so it can perform tasks or solve problems without specifically being told to do so. Or, as AI pioneer Arthur Samuel put it, ". . . to learn without explicitly being programmed." ML algorithms are fed large amounts of data that they parse and learn from so they can make informed predictions on outcomes in new data. Their predictions improve with "training": the more data an ML algorithm is fed, the more it learns, and thus the more accurate its baseline models become. While ML is used for various real-world purposes, one of its primary use cases in threat detection is to automate identification of anomalous behavior. The ML model categories most commonly used for these detections are:

Supervised models learn by example, applying knowledge gained from existing labeled datasets and desired outcomes to new data. For example, a supervised ML model can learn to recognize malware. It does this by analyzing data associated with known malware traffic to learn how it deviates from what is considered normal. It can then apply this knowledge to recognize the same patterns in new data.

[Image: ChatGPT and transformers]

Unsupervised models do not rely on labels but instead identify structure, relationships, and patterns in unlabeled datasets. They then use this knowledge to detect abnormalities or changes in behavior. For example: an unsupervised ML model can observe traffic on a network over a period of time, continuously learning (based on patterns in the data) what is "normal" behavior, and then investigating deviations, i.e., anomalous behavior.

Large language models (LLMs), such as ChatGPT, are a type of generative AI that use unsupervised learning. They train by ingesting massive amounts of unlabeled text data. Not only can LLMs analyze syntax to find connections and patterns between words, but they can also analyze semantics. This means they can understand context and interpret meaning in existing data in order to create new content.

Finally, reinforcement models, which more closely mimic human learning, are not given labeled inputs or outputs but instead learn and perfect strategies through trial and error. With ML, as with any data analysis tools, the accuracy of the output depends critically on the quality and breadth of the data set that is used as an input.

[Image: types of machine learning]

A valuable tool for the SOC
The SOC needs to be resilient in the face of an ever-changing threat landscape. Analysts have to be able to quickly understand which alerts to prioritize and which to ignore.
Machine learning helps optimize security operations by making threat detection and response faster and more accurate. Malware Tool Threat Prediction Cloud ChatGPT ★★
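As a concrete companion to the unsupervised-model description above, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest. The flow features, numbers, and decision threshold are invented for illustration and do not come from the article.

```python
# Sketch of unsupervised anomaly detection for SOC triage: learn "normal"
# network-flow behavior from unlabeled data, then flag deviations for review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend historical flows: columns = bytes sent, connections/min, distinct ports.
normal_flows = rng.normal(loc=[5_000, 30, 3], scale=[1_000, 5, 1], size=(1_000, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)  # learns the shape of "normal" without any labels

# New observations: one routine flow and one exfiltration-like burst.
new_flows = np.array([
    [5_200, 28, 3],      # looks routine
    [90_000, 400, 45],   # large transfer fanning out across many ports
])
scores = model.decision_function(new_flows)  # lower score = more anomalous
for flow, score in zip(new_flows, scores):
    label = "ANOMALY - escalate" if score < 0 else "normal"
    print(flow, round(float(score), 3), label)
```

In a real SOC pipeline the features would come from flow logs or EDR telemetry, and flagged items would feed an analyst queue rather than a print statement.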
knowbe4.webp 2023-06-20 13:00:00 CyberheistNews Vol 13 #25 [Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches
(direct link)
CyberheistNews Vol 13 #25 | June 20th, 2023

[Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches
Verizon's DBIR always has a lot of information to unpack, so I'll continue my review by covering how stolen credentials play a role in attacks. This year's Data Breach Investigations Report has nearly 1 million incidents in their data set, making it the most statistically relevant set of report data anywhere. So, what does the report say about the most common threat actions that are involved in data breaches? Overall, the use of stolen credentials is the overwhelming leader in data breaches, being involved in nearly 45% of breaches; this is more than double the second-place spot of "Other" (which includes a number of types of threat actions) and ransomware, which sits at around 20% of data breaches.

According to Verizon, stolen credentials were the "most popular entry point for breaches." As an example, in Basic Web Application Attacks, the use of stolen credentials was involved in 86% of attacks. The prevalence of credential use should come as no surprise, given the number of attacks that have focused on harvesting online credentials to provide access to both cloud platforms and on-premises networks alike. And it's the social engineering attacks (whether via phish, vish, SMiSh, or web) where these credentials are compromised, something that can be significantly diminished by engaging users in security awareness training to familiarize them with common techniques and examples of attacks, so when they come across an attack set on stealing credentials, the user avoids becoming a victim.

Blog post with links:
https://blog.knowbe4.com/stolen-credentials-top-breach-threat

[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist
Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever l Ransomware Data Breach Spam Malware Hack Vulnerability Threat Cloud ChatGPT ChatGPT ★★
News.webp 2023-06-20 10:08:13 Over 100,000 compromised ChatGPT accounts found for sale on dark web
(direct link)
Cybercrooks hoping users have whispered employer secrets to chatbot
Singapore-based threat intelligence outfit Group-IB has found ChatGPT credentials in more than 100,000 stealer logs traded on the dark web in the past year.…
Threat ChatGPT ChatGPT ★★
Netskope.webp 2023-06-14 20:12:49 Here's What ChatGPT and Netskope's Inline Phishing Detection Have in Common
(direct link)
Phishing attacks are a major cyber threat that continue to evolve and become more sophisticated, causing billions of dollars in losses each year according to the recent Internet Crime Report. However, traditional offline or inline phishing detection engines are limited in how they can detect evasive phishing pages. Due to the performance requirements of inline […]
Threat ChatGPT ChatGPT ★★
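To make the general idea of ML-based phishing detection concrete, here is a toy sketch that scores URLs from a few hand-crafted lexical features with scikit-learn. Real inline engines like the one described above inspect far richer signals (rendered page content, behavior, reputation); the features, training URLs, and labels here are purely illustrative assumptions.

```python
# Toy feature-based phishing URL scorer: lexical features + logistic regression.
# Illustrative only; production engines use much richer signals and data.
import re
from sklearn.linear_model import LogisticRegression

def url_features(url: str) -> list[float]:
    return [
        float(len(url)),                               # phishing URLs skew long
        float(url.count(".")),                         # deep subdomain nesting
        float(url.count("-")),                         # brand-impersonation hyphens
        1.0 if re.search(r"\d{1,3}(\.\d{1,3}){3}", url) else 0.0,  # raw IP host
        1.0 if "@" in url else 0.0,                    # userinfo redirection trick
    ]

training = [
    ("https://accounts.example.com/login", 0),
    ("https://news.example.org/article/123", 0),
    ("https://docs.example.com/help/reset-password", 0),
    ("http://192.168.12.7/secure-update/login", 1),
    ("http://paypa1-account-verify.example-login.top/confirm", 1),
    ("http://support.example.com.credential-check.biz/session@verify", 1),
]
X = [url_features(u) for u, _ in training]
y = [label for _, label in training]
clf = LogisticRegression(max_iter=1000).fit(X, y)

test_url = "http://micr0soft-365-verify.example.top/session"
print("phishing probability:", round(clf.predict_proba([url_features(test_url)])[0][1], 2))
```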
knowbe4.webp 2023-06-13 13:00:00 CyberheistNews Vol 13 #24 [The Mind's Bias] Pretexting Now Tops Phishing in Social Engineering Attacks
(direct link)
CyberheistNews Vol 13 #24 | June 13th, 2023

[The Mind's Bias] Pretexting Now Tops Phishing in Social Engineering Attacks
The new Verizon DBIR is a treasure trove of data. As we will cover a bit below, Verizon reported that 74% of data breaches involve the "human element," so people are one of the most common factors contributing to successful data breaches. Let's drill down a bit more in the social engineering section. They explained: "Now, who has received an email or a direct message on social media from a friend or family member who desperately needs money? Probably fewer of you. This is social engineering (pretexting specifically) and it takes more skill. The most convincing social engineers can get into your head and convince you that someone you love is in danger. They use information they have learned about you and your loved ones to trick you into believing the message is truly from someone you know, and they use this invented scenario to play on your emotions and create a sense of urgency. The DBIR Figure 35 shows that Pretexting is now more prevalent than Phishing in Social Engineering incidents. However, when we look at confirmed breaches, Phishing is still on top."

A social attack known as BEC, or business email compromise, can be quite intricate. In this type of attack, the perpetrator uses existing email communications and information to deceive the recipient into carrying out a seemingly ordinary task, like changing a vendor's bank account details. But what makes this attack dangerous is that the new bank account provided belongs to the attacker. As a result, any payments the recipient makes to that account will simply disappear.

BEC Attacks Have Nearly Doubled
It can be difficult to spot these attacks as the attackers do a lot of preparation beforehand. They may create a domain doppelganger that looks almost identical to the real one and modify the signature block to show their own number instead of the legitimate vendor. Attackers can make many subtle changes to trick their targets, especially if they are receiving many similar legitimate requests. This could be one reason why BEC attacks have nearly doubled across the entire DBIR incident dataset, as shown in Figure 36, and now make up over 50% of incidents in this category.

Financially Motivated External Attackers Double Down on Social Engineering
Timely detection and response is crucial when dealing with social engineering attacks, as well as most other attacks. Figure 38 shows a steady increase in the median cost of BECs since 2018, now averaging around $50,000, emphasizing the significance of quick detection. However, unlike the times we live in, this section isn't all doom and Spam Malware Vulnerability Threat Patching Uber APT 37 ChatGPT ChatGPT APT 43 ★★
AlienVault.webp 2023-06-13 10:00:00 Rise of AI in Cybercrime: How ChatGPT is revolutionizing ransomware attacks and what your business can do
(direct link)
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. OpenAI's flagship product, ChatGPT, has dominated the news cycle since its unveiling in November 2022. In only a few months, ChatGPT became the fastest-growing consumer app in internet history, reaching 100 million users as 2023 began. The generative AI application has revolutionized not only the world of artificial intelligence but is impacting almost every industry. In the world of cybersecurity, new tools and technologies are typically adopted quickly; unfortunately, in many cases, bad actors are the earliest to adopt and adapt. This can be bad news for your business, as it escalates the degree of difficulty in managing threats. Using ChatGPT's large language model, anyone can easily generate malicious code or craft convincing phishing emails, all without any technical expertise or coding knowledge. While cybersecurity teams can leverage ChatGPT defensively, the lower barrier to entry for launching a cyberattack has both complicated and escalated the threat landscape.

Understanding the role of ChatGPT in modern ransomware attacks
We've written about ransomware many times, but it's crucial to reiterate that the cost to individuals, businesses, and institutions can be massive, both financially and in terms of data loss or reputational damage. With AI, cybercriminals have a potent tool at their disposal, enabling more precise, adaptable, and stealthy attacks. They're using machine learning algorithms to simulate trusted entities, create convincing phishing emails, and even evade detection. The problem isn't just the sophistication of the attacks, but their sheer volume. With AI, hackers can launch attacks on an unprecedented scale, exponentially expanding the breadth of potential victims. Today, hackers use AI to power their ransomware attacks, making them more precise, adaptable, and destructive. Cybercriminals can leverage AI for ransomware in many ways, but perhaps the easiest is more in line with how many ChatGPT users are using it: writing and creating content. For hackers, especially foreign ransomware gangs, AI can be used to craft sophisticated phishing emails that are much more difficult to detect than the poorly worded messages that were once so common from bad actors (and their equally bad grammar). Even more concerning, ChatGPT-fueled ransomware can mimic the style and tone of a trusted individual or company, tricking the recipient into clicking a malicious link or downloading an infected attachment.

This is where the danger lies. Imagine your organization has the best cybersecurity awareness program, and all your employees have gained expertise in deciphering which emails are legitimate and which can be dangerous. Today, if the email can mimic tone and appear 100% genuine, how are the employees going to know? It's almost down to a coin flip in terms of odds. Furthermore, AI-driven ransomware can study the behavior of the security software on a system, identify patterns, and then either modify itself or choose th Ransomware Malware Tool Threat ChatGPT ChatGPT ★★
AlienVault.webp 2023-06-12 10:00:00 Understanding AI risks and how to secure using Zero Trust (direct link)

I. Introduction
AI's transformative power is reshaping business operations across numerous industries. Through Robotic Process Automation (RPA), AI is liberating human resources from the shackles of repetitive, rule-based tasks and directing their focus towards strategic, complex operations. Furthermore, AI and machine learning algorithms can decipher huge sets of data at unprecedented speed and accuracy, giving businesses insights that were once out of reach. For customer relations, AI serves as a personal touchpoint, enhancing engagement through personalized interactions. As advantageous as AI is to businesses, it also creates unique security challenges, for example, adversarial attacks that subtly manipulate the input data of an AI model to make it behave abnormally, all while circumventing detection. Equally concerning is the phenomenon of data poisoning, where attackers taint an AI model during its training phase by injecting misleading data, thereby corrupting its eventual outcomes. It is in this landscape that the Zero Trust security model of "Trust Nothing, Verify Everything" stakes its claim as a potent counter to AI-based threats.

Zero Trust moves away from the traditional notion of a secure perimeter. Instead, it assumes that any device or user, regardless of their location within or outside the network, should be considered a threat. This shift in thinking demands strict access controls, comprehensive visibility, and continuous monitoring across the IT ecosystem. As AI technologies increase operational efficiency and decision-making, they can also become conduits for attacks if not properly secured. Cybercriminals are already trying to exploit AI systems via data poisoning and adversarial attacks, making the Zero Trust model's role in securing these systems even more important.

II. Understanding AI threats
Mitigating AI threat risks requires a comprehensive approach to AI security, including careful design and testing of AI models, robust data protection measures, continuous monitoring for suspicious activity, and the use of secure, reliable infrastructure. Businesses need to consider the following risks when implementing AI (a short data-poisoning sketch follows this entry):

Adversarial attacks: These attacks involve manipulating an AI model's input data to make the model behave in a way that the attacker desires, without triggering an alarm. For example, an attacker could manipulate a facial recognition system to misidentify an individual, allowing unauthorized access.

Data poisoning: This type of attack involves introducing false or misleading data into an AI model during its training phase, with the aim of corrupting the model's outcomes. Since AI systems depend heavily on their training data, poisoned data can significantly impact their performance and reliability.

Model theft and inversion attacks: Attackers might attempt to steal proprietary AI models or recreate them based on their outputs, a risk that's particularly high for models provided as a service. Additionally, attackers can try to infer sensitive information from the outputs of an AI model, like learning about the individuals in a training dataset.

AI-enhanced cyberattacks: AI can be used by malicious actors to automate and enhance their cyberattacks. This includes using AI to perform more sophisticated phishing attacks, automate the discovery of vulnerabilities, or conduct faster, more effective brute-force attacks.

Lack of transparency (black box problem): It's often hard to understand how complex AI models make decisions. This lack of transparency can create a security risk as it might allow biased or malicious behavior to go undetected.

Dependence on AI systems: As businesses increasingly rely on AI systems, any disruption to these systems can have serious consequences. This could occur due to technical issues, attacks on the AI system itself, or attacks on the underlying infrastructure.

III. Th Tool Threat ChatGPT ChatGPT ★★
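The data-poisoning sketch promised above: a compact, synthetic illustration of how relabeling a fraction of training samples degrades a simple classifier. The dataset, poisoning rate, and model choice are all assumptions made for demonstration, not from the article.

```python
# Synthetic demo of data poisoning: an attacker who can mislabel part of the
# training set shifts the learned decision boundary and hides malicious samples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# Two overlapping feature clusters standing in for benign (0) vs malicious (1).
X = np.vstack([rng.normal(0.0, 1.0, (500, 5)), rng.normal(1.5, 1.0, (500, 5))])
y = np.array([0] * 500 + [1] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Targeted poisoning: relabel 30% of malicious training samples as benign,
# the way tainted third-party or crowd-sourced training data might.
y_poisoned = y_tr.copy()
malicious = np.where(y_tr == 1)[0]
flipped = rng.choice(malicious, size=int(0.3 * len(malicious)), replace=False)
y_poisoned[flipped] = 0
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

mal_te = y_te == 1  # how often each model catches truly malicious samples
print("clean accuracy:           ", round(clean.score(X_te, y_te), 3))
print("poisoned accuracy:        ", round(poisoned.score(X_te, y_te), 3))
print("clean malicious recall:   ", round(clean.score(X_te[mal_te], y_te[mal_te]), 3))
print("poisoned malicious recall:", round(poisoned.score(X_te[mal_te], y_te[mal_te]), 3))
```

The drop in malicious recall is the point: the poisoned model systematically waves through the class the attacker mislabeled, which is exactly the failure mode continuous monitoring under Zero Trust is meant to catch.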
SecureList.webp 2023-06-07 08:00:34 IT threat evolution Q1 2023
(direct link)
Recent BlueNoroff and Roaming Mantis activities, new APT related to the Russo-Ukrainian conflict, ChatGPT and threat intelligence, malvertising through search engines, cryptocurrency theft campaign and fake Tor browser
Threat ChatGPT ChatGPT ★★★
knowbe4.webp 2023-05-31 13:00:00 CyberheistNews Vol 13 #22 [Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks
(direct link)
CyberheistNews Vol 13 #22 | May 31st, 2023

[Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks
With attackers knowing financial fraud-based phishing attacks are best suited for the one industry where the money is, this massive spike in attacks should both surprise you and not surprise you at all. When you want tires, where do you go? Right, to the tire store. Shoes? Yup, shoe store. The most money you can scam from a single attack? That's right, the financial services industry, at least according to cybersecurity vendor Armorblox's 2023 Email Security Threat Report. According to the report, the financial services industry as a target has increased by 72% over 2022 and was the single largest target of financial fraud attacks, representing 49% of all such attacks.

When breaking down the specific types of financial fraud, it doesn't get any better for the financial industry:
51% of invoice fraud attacks targeted the financial services industry
42% were payroll fraud attacks
63% were payment fraud

To make matters worse, nearly one-quarter (22%) of financial fraud attacks successfully bypassed native email security controls, according to Armorblox. That means roughly one in five email-based attacks made it all the way to the inbox. The next layer in your defense should be a user that's properly educated using security awareness training to easily identify financial fraud and other phishing-based threats, stopping them before they do actual damage.

Blog post with links:
https://blog.knowbe4.com/financial-fraud-phishing

[Live Demo] Ridiculously Easy Security Awareness Training and Phishing
Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us Wednesday, June 7, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 introduces a new-school approach to security awareness training and simulated phishing. Get a look at THREE NEW FEATURES and see how easy it is to train and phish your users. Ransomware Malware Hack Tool Threat Conference Uber ChatGPT ChatGPT Guam ★★
knowbe4.webp 2023-05-23 13:00:00 CyberheistNews Vol 13 #21 [Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend
(direct link)
CyberheistNews Vol 13 #21 | May 23rd, 2023

[Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend
New data sheds light on how likely your organization will succumb to a ransomware attack, whether you can recover your data, and what's inhibiting a proper security posture. You have a solid grasp on what your organization's cybersecurity stance does and does not include. But is it enough to stop today's ransomware attacks? CyberEdge's 2023 Cyberthreat Defense Report provides some insight into just how prominent ransomware attacks are and what's keeping orgs from stopping them.

According to the report, in 2023:
7% of organizations were victims of a ransomware attack
7% of those paid a ransom
73% were able to recover data
Only 21.6% experienced solely the encryption of data and no other form of extortion

It's this last data point that interests me. Nearly 78% of victim organizations experienced one or more additional forms of extortion. CyberEdge mentions threatening to publicly release data, notifying customers or media, and committing a DDoS attack as examples of additional threats mentioned by respondents. IT decision makers were asked to rate on a scale of 1-5 (5 being the highest) what were the top inhibitors of establishing and maintaining an adequate defense. The top inhibitor (with an average rank of 3.66) was a lack of skilled personnel; we've long known the cybersecurity industry is lacking a proper pool of qualified talent. In second place, with an average ranking of 3.63, is low security awareness among employees, something only addressed by creating a strong security culture with new-school security awareness training at the center of it all.

Blog post with links:
https://blog.knowbe4.com/ransomware-victim-threats

[Free Tool] Who Will Fall Victim to QR Code Phishing Attacks?
Bad actors have a new way to launch phishing attacks to your users: weaponized QR codes. QR code phishing is especially dangerous because there is no URL to check and messages bypass traditional email filters. With the increased popularity of QR codes, users are more at Ransomware Hack Tool Vulnerability Threat Prediction ChatGPT ★★
AlienVault.webp 2023-05-22 10:00:00 Sharing your business's data with ChatGPT: How risky is it?
(direct link)
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. As a natural language processing model, ChatGPT, like other similar machine learning-based language models, is trained on huge amounts of textual data. Processing all this data, ChatGPT can produce written responses that sound like they come from a real human being. ChatGPT learns from the data it ingests. If this information includes your sensitive business data, then sharing it with ChatGPT could potentially be risky and lead to cybersecurity concerns. For example, what if you feed ChatGPT pre-earnings company financial information, company proprietary software code or materials used for internal presentations without realizing that practically anybody could obtain that sensitive information just by asking ChatGPT about it? If you use your smartphone to engage with ChatGPT, then a smartphone security breach could be all it takes to access your ChatGPT query history. In light of these implications, let's discuss if, and how, ChatGPT stores its users' input data, as well as potential risks you may face when sharing sensitive business data with ChatGPT.

Does ChatGPT store users' input data?
The answer is complicated. While ChatGPT does not automatically add data from queries to models specifically to make this data available for others to query, any prompt does become visible to OpenAI, the organization behind the large language model. Although no membership inference attacks have yet been carried out against the large language learning models that drive ChatGPT, databases containing saved prompts as well as embedded learnings could be potentially compromised by a cybersecurity breach. OpenAI, the parent company that developed ChatGPT, is working with other companies to limit the general access that language learning models have to personal data and sensitive information. But the technology is still in its nascent developing stages; ChatGPT was only just released to the public in November of last year. By just two months into its public release, ChatGPT had been accessed by over 100 million users, making it the fastest-growing consumer app ever at record-breaking speeds. With such rapid growth and expansion, regulations have been slow to keep up. The user base is so broad that there are abundant security gaps and vulnerabilities throughout the model.

Risks of sharing business data with ChatGPT
In June 2021, researchers from Apple, Stanford University, Google, Harvard University, and others published a paper that revealed that GPT-2, a language learning model similar to ChatGPT, could accurately recall sensitive information from training documents. The report found that GPT-2 could call up information with specific personal identifiers, recreate exact sequences of text, and provide other sensitive information when prompted. These "training data extraction attacks" could present a growing threat to the security of researchers working on machine learning models, as hackers may be able to access machine learning researcher data and steal their protected intellectual property. One data security company called Cyberhaven has released reports of ChatGPT cybersecurity vulnerabilities it has recently prevented. According to the reports, Cyberhaven has identified and prevented insecure requ Tool Threat Medical ChatGPT ChatGPT ★★
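One concrete control against the oversharing risk described above is to scrub sensitive-looking tokens from a prompt before it ever leaves the company. Below is a minimal sketch; the regex patterns and placeholder names are assumed starting points for illustration, not a complete DLP policy.

```python
# Sketch of pre-submission prompt scrubbing: replace sensitive-looking spans
# with placeholders before sending text to a hosted LLM. Patterns are a
# deliberately small, assumed starting set, not production-grade DLP.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                   # US SSN shape
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),                 # card-like digit runs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),           # email addresses
    (re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def scrub(prompt: str) -> str:
    """Return the prompt with sensitive-looking spans replaced by placeholders."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

raw = ("Summarize this ticket: user jane.doe@corp.example can't log in, "
       "temp password=Hunter2! and her card 4111 1111 1111 1111 was charged twice.")
print(scrub(raw))
```

A real control would sit in a gateway or browser extension rather than in the user's hands, but the principle is the same: sanitize first, submit second.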
knowbe4.webp 2023-05-09 13:00:00 CyberheistNews Vol 13 #19 [Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users
(direct link)
CyberheistNews Vol 13 #19 | May 9th, 2023

[Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users
Compromised websites (legitimate sites that have been successfully compromised to support social engineering) are serving visitors fake Google Chrome update error messages. "Google Chrome users who use the browser regularly should be wary of a new attack campaign that distributes malware by posing as a Google Chrome update error message," Trend Micro warns. "The attack campaign has been operational since February 2023 and has a large impact area." The message displayed reads, "UPDATE EXCEPTION. An error occurred in Chrome automatic update. Please install the update package manually later, or wait for the next automatic update." A link is provided at the bottom of the bogus error message that takes the user to what's misrepresented as a link that will support a Chrome manual update. In fact the link will download a ZIP file that contains an EXE file. The payload is a cryptojacking Monero miner. A cryptojacker is bad enough since it will drain power and degrade device performance. This one also carries the potential for compromising sensitive information, particularly credentials, and serving as staging for further attacks.

This campaign may be more effective for its routine, innocent look. There are no spectacular threats, no promises of instant wealth, just a notice about a failed update. Users can become desensitized to the potential risks bogus messages concerning IT issues carry with them. Informed users are the last line of defense against attacks like these. New-school security awareness training can help any organization sustain that line of defense and create a strong security culture.

Blog post with links:
https://blog.knowbe4.com/fake-chrome-update-error-messages

A Master Class on IT Security: Roger A. Grimes Teaches You Phishing Mitigation
Phishing attacks have come a long way from the spray-and-pray emails of just a few decades ago. Now they're more targeted, more cunning and more dangerous. And this enormous security gap leaves you open to business email compromise, session hijacking, ransomware and more. Join Roger A. Grimes, KnowBe4's Data-Driven Defense Evangelist, Ransomware Data Breach Spam Malware Tool Threat Prediction NotPetya NotPetya APT 28 ChatGPT ChatGPT ★★
knowbe4.webp 2023-05-02 13:00:00 CyberheistNews Vol 13 #18 [Eye on AI] Does ChatGPT Have Cybersecurity Tells?
(direct link)
CyberheistNews Vol 13 #18 | May 2nd, 2023

[Eye on AI] Does ChatGPT Have Cybersecurity Tells?
Poker players and other human lie detectors look for "tells," that is, a sign by which someone might unwittingly or involuntarily reveal what they know, or what they intend to do. A cardplayer yawns when they're about to bluff, for example, or someone's pupils dilate when they've successfully drawn a winning card. It seems that artificial intelligence (AI) has its tells as well, at least for now, and some of them have become so obvious and so well known that they've become internet memes. "ChatGPT and GPT-4 are already flooding the internet with AI-generated content in places famous for hastily written inauthentic content: Amazon user reviews and Twitter," Vice's Motherboard observes, and there are some ways of interacting with the AI that lead it into betraying itself for what it is. "When you ask ChatGPT to do something it's not supposed to do, it returns several common phrases. When I asked ChatGPT to tell me a dark joke, it apologized: 'As an AI language model, I cannot generate inappropriate or offensive content,' it said. Those two phrases, 'as an AI language model' and 'I cannot generate inappropriate content,' recur so frequently in ChatGPT-generated content that they've become memes."

That happy state of easy detection, however, is unlikely to endure. As Motherboard points out, these tells are a feature of "lazily executed" AI. With a little more care and attention, they'll grow more persuasive. One risk of the AI language models is that they can be adapted to perform social engineering at scale. In the near term, new-school security awareness training can help alert your people to the tells of automated scamming. And in the longer term, that training will adapt and keep pace with the threat as it evolves.

Blog post with links:
https://blog.knowbe4.com/chatgpt-cybersecurity-tells

[Live Demo] Ridiculously Easy Security Awareness Training and Phishing
Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, May 3, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 Ransomware Malware Hack Threat ChatGPT ChatGPT ★★
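The boilerplate "tells" quoted above lend themselves to a trivial mechanical check. A minimal sketch follows: flag text containing those phrases. The phrase list is taken from the article's examples; everything else (function name, sample inputs, the third pattern) is an illustrative assumption, and, as the article itself warns, this heuristic will stop working as generated content gets more careful.

```python
# Sketch of a naive AI-boilerplate detector: scan text for the stock phrases
# that lazily generated content tends to leave behind. Heuristic only.
import re

AI_TELLS = [
    r"as an ai language model",
    r"i cannot generate inappropriate( or offensive)? content",
    r"i'm sorry, but as an ai",  # assumed extra pattern, not from the article
]
TELL_RE = re.compile("|".join(AI_TELLS), re.IGNORECASE)

def looks_machine_written(text: str) -> bool:
    """Heuristic only: absence of a tell proves nothing, as the article notes."""
    return TELL_RE.search(text) is not None

reviews = [
    "Great product, arrived on time and works as described.",
    "As an AI language model, I cannot generate inappropriate or offensive "
    "content, but here is a five-star review of this blender...",
]
for r in reviews:
    print(looks_machine_written(r), "-", r[:60])
```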