What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
bleepingcomputer.webp 2024-04-10 12:12:40 Malicious PowerShell script pushing malware looks AI-written
A threat actor is using a PowerShell script that was likely created with the help of an artificial intelligence system such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's Copilot. [...]
Malware Threat ChatGPT ★★★
ProofPoint.webp 2024-04-10 10:12:47 Security Brief: TA547 Targets German Organizations with Rhadamanthys Stealer
What happened: Proofpoint identified TA547 targeting German organizations with an email campaign delivering Rhadamanthys malware. This is the first time researchers have observed TA547 use Rhadamanthys, an information stealer that is used by multiple cybercriminal threat actors. Additionally, the actor appeared to use a PowerShell script that researchers suspect was generated by a large language model (LLM) such as ChatGPT, Gemini, Copilot, etc. The emails sent by the threat actor impersonated the German retail company Metro, purporting to relate to invoices.

From: Metro!
Subject: Rechnung No: 31518562
Attachment: in3 0gc- (94762) _6563.zip

Example TA547 email impersonating the German retail company Metro.

The emails targeted dozens of organizations in various industries in Germany. The messages contained a password-protected ZIP file (password: mar26) containing an LNK file. When the LNK file was executed, it triggered PowerShell to run a remote PowerShell script. This PowerShell script decoded the Base64-encoded Rhadamanthys executable stored in a variable, loaded it as an assembly in memory, and then executed the assembly's entry point. This essentially executed the malicious code in memory without writing it to disk.

Notably, when deobfuscated, the second PowerShell script that was used to load Rhadamanthys contained interesting characteristics not commonly observed in code used by threat actors (or legitimate programmers). Specifically, the script included a pound sign followed by grammatically correct and hyper-specific comments above each component of the script. This is typical output of LLM-generated coding content, and suggests TA547 used some type of LLM-enabled tool to write (or rewrite) the PowerShell, or copied the script from another source that had used it.

Example of PowerShell suspected to have been written by an LLM and used in a TA547 attack chain.

While it is difficult to confirm whether malicious content is created via LLMs – from malware scripts to social engineering lures – there are characteristics of such content that point to machine-generated rather than human-generated information. Either way, whether generated by human or machine, the defense against such threats remains the same.

Attribution: TA547 is a financially motivated cybercriminal threat considered an initial access broker (IAB) that targets various geographic regions. Since 2023, TA547 typically delivers a NetSupport RAT but has occasionally delivered other payloads including StealC and Lumma Stealer (information stealers with functionality similar to Rhadamanthys). The actor appeared to favor zipped JavaScript attachments as initial delivery payloads in 2023, but moved to compressed LNKs in early March 2024. In addition to campaigns in Germany, other recent geographic targeting includes organizations in Spain, Switzerland, Austria, and the United States.

Why it matters: This campaign represents an example of some technique shifts by TA547, including the use of compressed LNKs and the previously unobserved Rhadamanthys stealer. It also gives insight into how threat actors are leveraging likely LLM-generated content in malware campaigns. LLMs can help threat actors understand more sophisticated attack chains used Malware Tool Threat ChatGPT ★★
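The LLM fingerprint Proofpoint describes – a grammatically correct, hyper-specific comment above nearly every statement – is something defenders can screen for cheaply. Below is a minimal triage sketch (an illustration only, not Proofpoint's methodology; the thresholds and the sentence-likeness test are invented for this example) that scores a PowerShell sample by comment density and comment style:

```python
import re
import sys

def comment_stats(ps1_text: str) -> dict:
    """Score a PowerShell script for the LLM-style pattern described
    above: a full-sentence '#' comment over nearly every statement."""
    lines = [ln.strip() for ln in ps1_text.splitlines() if ln.strip()]
    comments = [ln for ln in lines if ln.startswith("#")]
    code = [ln for ln in lines if not ln.startswith("#")]
    # Hand-written loaders are usually sparsely commented; LLM output
    # often approaches one comment line per code line.
    ratio = len(comments) / max(len(code), 1)
    # "Hyper-specific" comments tend to be complete sentences.
    sentence_like = sum(
        1 for c in comments
        if len(c.split()) >= 5 and re.search(r"[.!?]$", c)
    )
    return {
        "comment_ratio": round(ratio, 2),
        "sentence_like_comments": sentence_like,
        # Arbitrary cutoffs chosen for the sketch, not field-tested.
        "flag_for_review": ratio > 0.5 and sentence_like >= 3,
    }

if __name__ == "__main__":
    # Usage: python llm_comment_heuristic.py sample.ps1
    with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
        print(comment_stats(fh.read()))
```

A sample that trips both checks warrants a closer manual look, not automatic conviction; as the entry notes, plenty of legitimate (and LLM-assisted) code is commented the same way.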
RecordedFuture.webp 2024-04-04 17:04:16 Cybercriminals are spreading malware through Facebook pages impersonating AI brands
Cybercriminals are taking over Facebook pages and using them to advertise fake generative artificial intelligence software loaded with malware. According to researchers at the cybersecurity company Bitdefender, the cybercrooks are taking advantage of the popularity of new generative AI tools and using “malvertising” to impersonate legitimate products like Midjourney, Sora AI, ChatGPT 5 and
Malware Tool ChatGPT ★★
News.webp 2024-03-07 06:27:08 Here's something else AI can do: expose bad infosec to give cyber-crims a toehold in your organization
Singaporean researchers note rising presence of ChatGPT creds in infostealer malware logs. Stolen ChatGPT credentials are a hot commodity on the dark web, according to Singapore-based threat intelligence firm Group-IB, which claims to have found some 225,000 stealer logs containing login details for the service last year.…
Malware Threat ChatGPT ★★★
RiskIQ.webp 2024-03-05 19:03:47 Staying ahead of threat actors in the age of AI
## Snapshot

Over the last year, the speed, scale, and sophistication of attacks has increased alongside the rapid development and adoption of AI. Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries. At the same time, it is also important for us to understand how AI can be potentially misused in the hands of threat actors. In collaboration with OpenAI, today we are publishing research on emerging threats in the age of AI, focusing on identified activity associated with known threat actors, including prompt-injections, attempted misuse of large language models (LLM), and fraud. Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape. You can read OpenAI's blog on the research [here](https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors). Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI. However, Microsoft and our partners continue to study this landscape closely. The objective of Microsoft's partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse. As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models. In addition, we are also deeply committed to using generative AI to disrupt threat actors and leverage the power of new tools, including [Microsoft Copilot for Security](https://www.microsoft.com/security/business/ai-machine-learning/microsoft-security-copilot), to elevate defenders everywhere.

## Activity Overview

### **A principled approach to detecting and blocking threat actors**

The progress of technology creates a demand for strong cybersecurity and safety measures. For example, the White House's Executive Order on AI requires rigorous safety testing and government supervision for AI systems that have major impacts on national and economic security or public health and safety. Our actions enhancing the safeguards of our AI models and partnering with our ecosystem on the safe creation, implementation, and use of these models align with the Executive Order's request for comprehensive AI safety and security standards. In line with Microsoft's leadership across AI and cybersecurity, today we are announcing principles shaping Microsoft's policy and actions mitigating the risks associated with the use of our AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates we track. These principles include:

- **Identification and action against malicious threat actors' use:** Upon detection of the use of any Microsoft AI application programming interfaces (APIs), services, or systems by an identified malicious threat actor, including nation-state APT or APM, or the cybercrime syndicates we track, Microsoft will take appropriate action to disrupt their activities, such as disabling the accounts used, terminating services, or limiting access to resources.
- **Notification to other AI service providers:** When we detect a threat actor's use of another service provider's AI, AI APIs, services, and/or systems, Microsoft will promptly notify the service provider and share relevant data. This enables the service provider to independently verify our findings and take action in accordance with their own policies.
- **Collaboration with other stakeholders:** Microsoft will collaborate with other stakeholders to regularly exchange information a Ransomware Malware Tool Vulnerability Threat Studies Medical Technical APT 28 ChatGPT APT 4 ★★
The_Hackers_News.webp 2024-03-05 16:08:00 Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware. “The number of infected devices decreased slightly in mid- and late
Malware ChatGPT ★★★
SecurityWeek.webp 2024-02-14 18:25:10 Microsoft Catches APTs Using ChatGPT for Vuln Research, Malware Scripting
Microsoft threat hunters say foreign APTs are interacting with OpenAI's ChatGPT to automate malicious vulnerability research, target reconnaissance and malware creation tasks.
Malware Vulnerability Threat ChatGPT ★★
Blog.webp 2024-01-26 17:26:19 Thousands of Dark Web Posts Expose ChatGPT Abuse Plans
By Deeba Ahmed. Cybercriminals are actively promoting the abuse of ChatGPT and similar chatbots, offering a range of malicious tools, from malware to phishing kits. This is a post from HackRead.com. Read the original post: Thousands of Dark Web Posts Expose ChatGPT Abuse Plans
Malware Tool ChatGPT ★★★
InfoSecurityMag.webp 2024-01-24 17:15:00 ChatGPT Cybercrime Surge Revealed in 3000 Dark Web Posts
Kaspersky said cybercriminals are exploring schemes to implement ChatGPT in malware development
Malware ChatGPT ★★
News.webp 2024-01-24 06:26:08 GCHQ's NCSC warns of 'realistic possibility' AI will help state-backed malware evade detection
So that means British spies want the ability to do exactly that, eh? The idea that AI could generate super-potent and undetectable malware has been bandied about for years – and has also already been debunked. However, a paper published today by the UK's National Cyber Security Centre (NCSC) suggests there is a "realistic possibility" that by 2025 the most sophisticated attackers' tools will improve significantly thanks to AI models informed by data describing successful cyber hits.… Malware Tool ChatGPT ★★★
TechRepublic.webp 2023-12-22 22:47:44 ESET Threat Report: ChatGPT Name Abuses, Lumma Stealer Malware Increases, Android SpinOk SDK Spyware's Prevalence
Risk mitigation tips are provided for each of these cybersecurity threats.
Malware Threat Mobile ChatGPT ★★★
ProofPoint.webp 2023-11-28 23:05:04 Proofpoint's 2024 Predictions: Brace for Impact
In the ever-evolving landscape of cybersecurity, defenders find themselves navigating yet another challenging year. Threat actors persistently refine their tactics, techniques, and procedures (TTPs), showcasing adaptability and the rapid iteration of novel and complex attack chains. At the heart of this evolution lies a crucial shift: threat actors now prioritize identity over technology. While the specifics of TTPs and the targeted technology may change, one constant remains: humans and their identities are the most targeted links in the attack chain. Recent instances of supply chain attacks exemplify this shift, illustrating how adversaries have pivoted from exploiting software vulnerabilities to targeting human vulnerabilities through social engineering and phishing. Notably, the innovative use of generative AI, especially its ability to improve phishing emails, exemplifies a shift towards manipulating human behavior rather than exploiting technological weaknesses. As we reflect on 2023, it becomes evident that cyber threat actors possess the capabilities and resources to adapt their tactics in response to increased security measures such as multi-factor authentication (MFA). Looking ahead to 2024, the trend suggests that threats will persistently revolve around humans, compelling defenders to take a different approach to breaking the attack chain. So, what's on the horizon? The experts at Proofpoint provide insightful predictions for the next 12 months, shedding light on what security teams might encounter and the implications of these trends.

1. Cyber Heists: Casinos are Just the Tip of the Iceberg. Cyber criminals are increasingly targeting digital supply chain vendors, with a heightened focus on security and identity providers. Aggressive social engineering tactics, including phishing campaigns, are becoming more prevalent. The Scattered Spider group, responsible for ransomware attacks on Las Vegas casinos, showcases the sophistication of these tactics. Phishing help desk employees for login credentials and bypassing MFA through phishing one-time password (OTP) codes are becoming standard practices. These tactics have extended to supply chain attacks, compromising identity provider (IDP) vendors to access valuable customer information. The forecast for 2024 includes the replication and widespread adoption of such aggressive social engineering tactics, broadening the scope of initial compromise attempts beyond the traditional edge device and file transfer appliances.

2. Generative AI: The Double-Edged Sword. The explosive growth of generative AI tools like ChatGPT, FraudGPT and WormGPT brings both promise and peril, but the sky is not falling as far as cybersecurity is concerned. While large language models took the stage, the fear of misuse prompted the U.S. president to issue an executive order in October 2023. At the moment, threat actors are making bank doing other things. Why bother reinventing the model when it's working just fine? But they'll morph their TTPs when detection starts to improve in those areas. On the flip side, more vendors will start injecting AI and large language models into their products and processes to boost their security offerings. Across the globe, privacy watchdogs and customers alike will demand responsible AI policies from technology companies, which means we'll start seeing statements being published about responsible AI policies. Expect both spectacular failures and responsible AI policies to emerge.

3. Mobile Device Phishing: The Rise of Omni-Channel Tactics Takes Centre Stage. A notable trend for 2023 was the dramatic increase in mobile device phishing, and we expect this threat to rise even more in 2024. Threat actors are strategically redirecting victims to mobile interactions, exploiting the vulnerabilities inherent in mobile platforms. Conversational abuse, including conversational smishing, has experienced exponential growth. Multi-touch campaigns aim to lure users away from desktops to mobile devices, utilizing tactics like QR codes and fraudulent voice calls Ransomware Malware Tool Vulnerability Threat Mobile Prediction ChatGPT ★★★
Trend.webp 2023-11-14 00:00:00 A Closer Look at ChatGPT's Role in Automated Malware Creation
This blog entry explores the effectiveness of ChatGPT's safety measures, the potential for AI technologies to be misused by criminal actors, and the limitations of current AI models.
Malware ChatGPT ★★
AlienVault.webp 2023-10-17 10:00:00 Re-evaluating risk in the artificial intelligence age
Introduction

It is common knowledge that when it comes to cybersecurity, there is no one-size-fits-all definition of risk, nor is there a place for static plans. New technologies are created, new vulnerabilities discovered, and more attackers appear on the horizon. Most recently, the appearance of advanced language models such as ChatGPT has taken this concept and turned the dial up to eleven. These AI tools are capable of creating targeted malware with no technical training required and can even walk you through how to use them. While official tools have safeguards in place (with more being added as users find new ways to circumvent them) that reduce or prevent them being abused, there are several dark web offerings that are happy to fill the void. Enterprising individuals have created tools that are specifically trained on malware data and are capable of supporting other attacks such as phishing or email compromises.

Re-evaluating risk

While risk should always be regularly evaluated, it is important to identify when significant technological shifts materially impact the risk landscape. Whether it is the proliferation of mobile devices in the workplace or easy access to internet-connected devices with minimal security (to name a few of the more recent developments), there are times when organizations need to completely reassess their risk profile. Vulnerabilities unlikely to be exploited yesterday may suddenly be the new best-in-breed attack vector today. There are numerous ways to evaluate, prioritize, and address risks as they are discovered, which vary between organizations, industries, and personal preferences. At the most basic level, risks are evaluated by multiplying the likelihood and impact of any given event. These factors may be determined through numerous methods, and may be affected by countless elements including:

- Geography
- Industry
- Motivation of attackers
- Skill of attackers
- Cost of equipment
- Maturity of the target's security program

In this case, the advent of tools like ChatGPT greatly reduces the barrier to entry, or the "skill" needed for a malicious actor to execute an attack. Sophisticated, targeted attacks can be created in minutes with minimal effort from the attacker. Organizations that were previously safe due to their size, profile, or industry may now be targeted simply because it is easy to do so. This means all previously established risk profiles are now out of date and do not accurately reflect the new environment businesses find themselves operating in. Even businesses that have a robust risk management process and mature program may find themselves struggling to adapt to this new reality.

Recommendations

While there is no one-size-fits-all solution, there are some actions businesses can take that will likely be effective. First, the business should conduct an immediate assessment and analysis of their currently identified risks. Next, the business should assess whether any of these risks could be reasonably combined (also known as aggregated) in a way that materially changes their likelihood or impact. Finally, the business must ensure their executive teams are aware of the changes to the business's risk profile and consider amending the organization's existing risk appetite and tolerances.

Risk assessment & analysis

It is important to begin by reassessing the current state of risk within the organization. As noted earlier, risks or attacks that were previously considered unlikely may now be only a few clicks from being deployed en masse. The organization should walk through their risk register, if one exists, and evaluate all identified risks. This may be time consuming, and the organization should of course prioritize critical and high risks first, but it is important to ensure the business has the information they need to effectively address risks.

Risk aggregation

Onc Malware Tool Vulnerability ChatGPT ★★★★
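The likelihood-times-impact arithmetic in the entry above is easy to operationalize when walking a risk register. Here is a minimal sketch (the 1–5 scales, field names, and the +2 likelihood bump are assumptions made for illustration, not from the article) of re-scoring a register after AI tooling lowers the attacker skill barrier:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # The basic evaluation described above: likelihood x impact.
        return self.likelihood * self.impact

# Hypothetical register entries for the example.
register = [
    Risk("Targeted phishing of finance staff", likelihood=2, impact=4),
    Risk("Custom malware against our stack", likelihood=1, impact=5),
    Risk("Lost unencrypted laptop", likelihood=3, impact=3),
]

# LLM tooling reduces the skill needed to mount the first two attacks,
# so bump their likelihood (capped at the top of the scale) and re-rank.
for r in register:
    if "phishing" in r.name or "malware" in r.name:
        r.likelihood = min(r.likelihood + 2, 5)

for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.name}")
```

The point of the exercise is the re-ranking: risks previously parked as "unlikely but high impact" can jump to the top of the queue once the skill barrier drops, which is exactly the re-assessment the article recommends.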
The_Hackers_News.webp 2023-10-09 16:36:00 "I Had a Dream" and Generative AI Jailbreaks
"Bien sûr, ici \\ est un exemple de code simple dans le langage de programmation Python qui peut être associé aux mots clés" MyHotkeyHandler "," Keylogger "et" MacOS ", il s'agit d'un message de Chatgpt suivi d'un morceau de morceau deCode malveillant et une brève remarque de ne pas l'utiliser à des fins illégales. Initialement publié par Moonlock Lab, les captures d'écran de Chatgpt écrivant du code pour un malware de Keylogger est encore
"Of course, here\'s an example of simple code in the Python programming language that can be associated with the keywords "MyHotKeyHandler," "Keylogger," and "macOS," this is a message from ChatGPT followed by a piece of malicious code and a brief remark not to use it for illegal purposes. Initially published by Moonlock Lab, the screenshots of ChatGPT writing code for a keylogger malware is yet
Malware ChatGPT ★★★
AlienVault.webp 2023-09-06 10:00:00 Keeping cybersecurity regulations top of mind for generative AI use
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

Can businesses stay compliant with security regulations while using generative AI? It's an important question to consider as more businesses begin implementing this technology. What security risks are associated with generative AI? It's important to learn how businesses can navigate these risks to comply with cybersecurity regulations.

Generative AI cybersecurity risks

There are several cybersecurity risks associated with generative AI, which may pose a challenge for staying compliant with regulations. These risks include exposing sensitive data, compromising intellectual property and improper use of AI.

Risk of improper use

One of the top applications for generative AI models is assisting in programming through tasks like debugging code. Leading generative AI models can even write original code. Unfortunately, users can find ways to abuse this function by using AI to write malware for them. For instance, one security researcher got ChatGPT to write polymorphic malware, despite protections intended to prevent this kind of application. Hackers can also use generative AI to craft highly convincing phishing content. Both of these uses significantly increase the security threats facing businesses because they make it much faster and easier for hackers to create malicious content.

Risk of data and IP exposure

Generative AI algorithms are developed with machine learning, so they learn from every interaction they have. Every prompt becomes part of the algorithm and informs future output. As a result, the AI may “remember” any information a user includes in their prompts. Generative AI can also put a business's intellectual property at risk. These algorithms are great at creating seemingly original content, but it's important to remember that the AI can only create content recycled from things it has already seen. Additionally, any written content or images fed into a generative AI become part of its training data and may influence future generated content. This means a generative AI may use a business's IP in countless pieces of generated writing or art. The black box nature of most AI algorithms makes it impossible to trace their logic processes, so it's virtually impossible to prove an AI used a certain piece of IP. Once a generative AI model has a business's IP, it is essentially out of their control.

Risk of compromised training data

One cybersecurity risk unique to AI is “poisoned” training datasets. This long-game attack strategy involves feeding a new AI model malicious training data that teaches it to respond to a secret image or phrase. Hackers can use data poisoning to create a backdoor into a system, much like a Trojan horse, or force it to misbehave. Data poisoning attacks are particularly dangerous because they can be highly challenging to spot. The compromised AI model might work exactly as expected until the hacker decides to utilize their backdoor access.

Using generative AI within security regulations

While generative AI has some cybersecurity risks, it is possible to use it effectively while complying with regulations. Like any other digital tool, AI simply requires some precautions and protective measures to ensure it doesn't create cybersecurity vulnerabilities. A few essential steps can help businesses accomplish this.

Understand all relevant regulations

Staying compli Malware Tool ChatGPT ★★
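One practical control for the "every prompt becomes training data" exposure described above is to redact sensitive strings before a prompt leaves the organization. A minimal sketch follows (the regex patterns are illustrative assumptions and nowhere near exhaustive; a real deployment would use a proper DLP library rather than three regexes):

```python
import re

# Illustrative patterns for common sensitive strings. These are
# assumptions for the sketch, not a complete or vetted rule set.
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize_prompt(prompt: str) -> str:
    """Mask likely-sensitive substrings before sending a prompt to an
    external LLM, since submitted text may end up as training data."""
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = ("Summarize this ticket from jane.doe@example.com, "
           "API key sk-abcdef1234567890abcd")
    print(sanitize_prompt(raw))
    # -> Summarize this ticket from [EMAIL REDACTED], API key [API_KEY REDACTED]
```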
globalsecuritymag.webp 2023-08-21 10:29:48 Artificial intelligence in information technology: three questions CISOs should ask themselves
2023 could go down in history as the year of artificial intelligence (AI) – or at least as the year in which businesses and consumers alike raved about generative AI tools such as ChatGPT. IT security vendors are not immune to this enthusiasm. At the 2023 RSA Conference, one of the leading international conferences in IT security, the topic of AI came up in almost every talk – for good reason. AI has enormous potential to change the industry. Our security researchers have already observed attackers using AI to create deceptively genuine phishing emails and to accelerate the development of malware. The good news: defenders are also using AI and integrating it into their security solutions, since AI can be used to automatically detect and prevent cyberattacks. For example, it can stop phishing emails from ever reaching the inbox, and it can reduce the time-consuming false alarms that plague IT teams and tie up staff who would be better deployed elsewhere.
Malware ChatGPT
Chercheur.webp 2023-08-08 17:37:23 Meet the Brains Behind the Malware-Friendly AI Chat Service 'WormGPT'
WormGPT, a private new chatbot service advertised as a way to use Artificial Intelligence (AI) to help write malicious software without all the pesky prohibitions on such activity enforced by ChatGPT and Google Bard, has started adding restrictions on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into “a more controlled environment.” The large language models (LLMs) made by ChatGPT parent OpenAI or Google or Microsoft all have various safety measures designed to prevent people from abusing them for nefarious purposes - such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new LLM that was created specifically for cybercrime activities.
Ransomware Malware ChatGPT ★★★
bleepingcomputer.webp 2023-08-01 10:08:16 Cybercriminals train AI chatbots for phishing, malware attacks
In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative artificial intelligence hacking tool called FraudGPT has emerged, and at least another one is under development that is allegedly based on Google's AI experiment, Bard. [...]
Malware Tool ChatGPT ★★★
Checkpoint.webp 2023-07-19 16:27:24 Facebook Flooded with Ads and Pages for Fake ChatGPT, Google Bard and other AI services, Tricking Users into Downloading Malware
Highlights: Cyber criminals are using Facebook to impersonate popular generative AI brands, including ChatGPT, Google Bard, Midjourney and Jasper. Facebook users are being tricked into downloading content from the fake brand pages and ads. These downloads contain malware, which steals their online passwords (banking, social media, gaming, etc.), crypto wallets and any information saved in their browser. Unsuspecting users are liking and commenting on fake posts, thereby spreading them to their own social networks. Cyber criminals continue to try new ways to steal private information. A new scam uncovered by Check Point Research (CPR) uses Facebook to scam unsuspecting […]
Malware Threat ChatGPT ★★★★
The_Hackers_News.webp 2023-07-18 16:24:00 Go Beyond the Headlines for Deeper Dives into the Cybercriminal Underground
Discover stories about threat actors' latest tactics, techniques, and procedures from Cybersixgill's threat experts each month. Each story brings you details on emerging underground threats, the threat actors involved, and how you can take action to mitigate risks. Learn about the top vulnerabilities and review the latest ransomware and malware trends from the deep and dark web. Stolen ChatGPT
Ransomware Malware Vulnerability Threat ChatGPT ★★
knowbe4.webp 2023-06-27 13:00:00 CyberheistNews Vol 13 #26 [Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams
CyberheistNews Vol 13 #26 | June 27th, 2023

[Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams

The U.S. Federal Trade Commission (FTC) has published a data spotlight outlining the most common text message scams. Phony bank fraud prevention alerts were the most common type of text scam last year. "Reports about texts impersonating banks are up nearly tenfold since 2019 with median reported individual losses of $3,000 last year," the report says. These are the top five text scams reported by the FTC:

- Copycat bank fraud prevention alerts
- Bogus "gifts" that can cost you
- Fake package delivery problems
- Phony job offers
- Not-really-from-Amazon security alerts

"People get a text supposedly from a bank asking them to call a number ASAP about suspicious activity or to reply YES or NO to verify whether a transaction was authorized. If they reply, they'll get a call from a phony 'fraud department' claiming they want to 'help get your money back.' What they really want to do is make unauthorized transfers. What's more, they may ask for personal information like Social Security numbers, setting people up for possible identity theft."

Fake gift card offers took second place, followed by phony package delivery problems. "Scammers understand how our shopping habits have changed and have updated their sleazy tactics accordingly," the FTC says. "People may get a text pretending to be from the U.S. Postal Service, FedEx, or UPS claiming there's a problem with a delivery. The text links to a convincing-looking – but utterly bogus – website that asks for a credit card number to cover a small 'redelivery fee.'"

Scammers also target job seekers with bogus job offers in an attempt to steal their money and personal information. "With workplaces in transition, some scammers are using texts to perpetrate old-school forms of fraud – for example, fake 'mystery shopper' jobs or bogus money-making offers for driving around with cars wrapped in ads," the report says. "Other texts target people who post their resumes on employment websites. They claim to offer jobs and even send job seekers checks, usually with instructions to send some of the money to a different address for materials, training, or the like. By the time the check bounces, the person's money – and the phony 'employer' – are long gone."

Finally, scammers impersonate Amazon and send fake security alerts to trick victims into sending money. "People may get what looks like a message from 'Amazon,' asking to verify a big-ticket order they didn't place," the FTC says. "Concerned Ransomware Spam Malware Hack Tool Threat FedEx APT 28 APT 15 ChatGPT ★★
AlienVault.webp 2023-06-21 10:00:00 Toward a more resilient SOC: the power of machine learning
A way to manage too much data

To protect the business, security teams need to be able to detect and respond to threats fast. The problem is the average organization generates massive amounts of data every day. Information floods into the Security Operations Center (SOC) from network tools, security tools, cloud services, threat intelligence feeds, and other sources. Reviewing and analyzing all this data in a reasonable amount of time has become a task that is well beyond the scope of human efforts. AI-powered tools are changing the way security teams operate. Machine learning (which is a subset of artificial intelligence, or "AI") – and in particular, machine learning-powered predictive analytics – is enhancing threat detection and response in the SOC by providing an automated way to quickly analyze and prioritize alerts.

Machine learning in threat detection

So, what is machine learning (ML)? In simple terms, it is a machine's ability to automate a learning process so it can perform tasks or solve problems without specifically being told to do so. Or, as AI pioneer Arthur Samuel put it, ". . . to learn without explicitly being programmed." ML algorithms are fed large amounts of data that they parse and learn from so they can make informed predictions on outcomes in new data. Their predictions improve with "training" – the more data an ML algorithm is fed, the more it learns, and thus the more accurate its baseline models become. While ML is used for various real-world purposes, one of its primary use cases in threat detection is to automate identification of anomalous behavior. The ML model categories most commonly used for these detections are:

- Supervised models learn by example, applying knowledge gained from existing labeled datasets and desired outcomes to new data. For example, a supervised ML model can learn to recognize malware. It does this by analyzing data associated with known malware traffic to learn how it deviates from what is considered normal. It can then apply this knowledge to recognize the same patterns in new data.
- Unsupervised models do not rely on labels but instead identify structure, relationships, and patterns in unlabeled datasets. They then use this knowledge to detect abnormalities or changes in behavior. For example: an unsupervised ML model can observe traffic on a network over a period of time, continuously learning (based on patterns in the data) what is "normal" behavior, and then investigating deviations, i.e., anomalous behavior. Large language models (LLMs), such as ChatGPT, are a type of generative AI that use unsupervised learning. They train by ingesting massive amounts of unlabeled text data. Not only can LLMs analyze syntax to find connections and patterns between words, but they can also analyze semantics. This means they can understand context and interpret meaning in existing data in order to create new content.
- Finally, reinforcement models, which more closely mimic human learning, are not given labeled inputs or outputs but instead learn and perfect strategies through trial and error.

With ML, as with any data analysis tools, the accuracy of the output depends critically on the quality and breadth of the data set that is used as an input.

A valuable tool for the SOC

The SOC needs to be resilient in the face of an ever-changing threat landscape. Analysts have to be able to quickly understand which alerts to prioritize and which to ignore. Machine learning helps optimize security operations by making threat detection and response faster and more accurate. Malware Tool Threat Prediction Cloud ChatGPT ★★
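As a concrete illustration of the unsupervised approach the entry describes, the sketch below baselines "normal" connection features and flags deviations for analyst review. It is a toy (the four features, the synthetic data, and the 1% contamination rate are assumptions chosen for the example), not a production detector:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-connection features: [bytes_sent, bytes_received,
# duration_seconds, distinct_ports_contacted].
normal_traffic = rng.normal(
    loc=[5_000, 20_000, 30, 2],
    scale=[1_500, 6_000, 10, 1],
    size=(1_000, 4),
)

# Fit on historical traffic assumed to be mostly benign; the model
# learns what "normal" looks like without any labels.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# New events to triage: one ordinary, one exfiltration-like burst
# (huge upload, long duration, many ports contacted).
new_events = np.array([
    [4_800, 19_500, 28, 2],
    [900_000, 1_200, 400, 40],
])

# predict() returns 1 for inliers and -1 for anomalies.
for event, label in zip(new_events, model.predict(new_events)):
    print(event, "ANOMALY - escalate" if label == -1 else "ok")
```

This mirrors the workflow in the entry: the model continuously learns a baseline from unlabeled traffic, and analysts only review the small fraction of events it scores as deviations.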
knowbe4.webp 2023-06-20 13:00:00 CyberheistNews Vol 13 #25 [Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches
CyberheistNews Vol 13 #25 | June 20th, 2023

[Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches

Verizon's DBIR always has a lot of information to unpack, so I'll continue my review by covering how stolen credentials play a role in attacks. This year's Data Breach Investigations Report has nearly 1 million incidents in their data set, making it the most statistically relevant set of report data anywhere. So, what does the report say about the most common threat actions that are involved in data breaches? Overall, the use of stolen credentials is the overwhelming leader in data breaches, being involved in nearly 45% of breaches – this is more than double the second-place spot of "Other" (which includes a number of types of threat actions) and ransomware, which sits at around 20% of data breaches. According to Verizon, stolen credentials were the "most popular entry point for breaches." As an example, in Basic Web Application Attacks, the use of stolen credentials was involved in 86% of attacks. The prevalence of credential use should come as no surprise, given the number of attacks that have focused on harvesting online credentials to provide access to both cloud platforms and on-premises networks alike. And it's the social engineering attacks (whether via phish, vish, SMiSh, or web) where these credentials are compromised - something that can be significantly diminished by engaging users in security awareness training to familiarize them with common techniques and examples of attacks, so when they come across an attack set on stealing credentials, the user avoids becoming a victim.

Blog post with links: https://blog.knowbe4.com/stolen-credentials-top-breach-threat

[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist

Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever l Ransomware Data Breach Spam Malware Hack Vulnerability Threat Cloud ChatGPT ★★
bleepingcomputer.webp 2023-06-20 04:00:00 Over 100,000 ChatGPT accounts stolen via info-stealing malware
More than 101,000 ChatGPT user accounts have been compromised by information stealers over the past year, according to dark web marketplace data. [...]
Malware ChatGPT ★★
knowbe4.webp 2023-06-13 13:00:00 CyberheistNews Vol 13 #24 [The Mind's Bias] Pretexting Now Tops Phishing in Social Engineering Attacks
CyberheistNews Vol 13 #24 | June 13th, 2023

[The Mind's Bias] Pretexting Now Tops Phishing in Social Engineering Attacks

The new Verizon DBIR is a treasure trove of data. As we will cover a bit below, Verizon reported that 74% of data breaches involve the "human element," so people are one of the most common factors contributing to successful data breaches. Let's drill down a bit more in the social engineering section. They explained: "Now, who has received an email or a direct message on social media from a friend or family member who desperately needs money? Probably fewer of you. This is social engineering (pretexting specifically) and it takes more skill. The most convincing social engineers can get into your head and convince you that someone you love is in danger. They use information they have learned about you and your loved ones to trick you into believing the message is truly from someone you know, and they use this invented scenario to play on your emotions and create a sense of urgency. The DBIR Figure 35 shows that Pretexting is now more prevalent than Phishing in Social Engineering incidents. However, when we look at confirmed breaches, Phishing is still on top."

A social attack known as BEC, or business email compromise, can be quite intricate. In this type of attack, the perpetrator uses existing email communications and information to deceive the recipient into carrying out a seemingly ordinary task, like changing a vendor's bank account details. But what makes this attack dangerous is that the new bank account provided belongs to the attacker. As a result, any payments the recipient makes to that account will simply disappear.

BEC Attacks Have Nearly Doubled

It can be difficult to spot these attacks as the attackers do a lot of preparation beforehand. They may create a domain doppelganger that looks almost identical to the real one and modify the signature block to show their own number instead of the legitimate vendor. Attackers can make many subtle changes to trick their targets, especially if they are receiving many similar legitimate requests. This could be one reason why BEC attacks have nearly doubled across the DBIR entire incident dataset, as shown in Figure 36, and now make up over 50% of incidents in this category.

Financially Motivated External Attackers Double Down on Social Engineering

Timely detection and response is crucial when dealing with social engineering attacks, as well as most other attacks. Figure 38 shows a steady increase in the median cost of BECs since 2018, now averaging around $50,000, emphasizing the significance of quick detection. However, unlike the times we live in, this section isn't all doom and Spam Malware Vulnerability Threat Patching Uber APT 37 ChatGPT APT 43 ★★
AlienVault.webp 2023-06-13 10:00:00 Rise of AI in Cybercrime: How ChatGPT is revolutionizing ransomware attacks and what your business can do
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

OpenAI's flagship product, ChatGPT, has dominated the news cycle since its unveiling in November 2022. In only a few months, ChatGPT became the fastest-growing consumer app in internet history, reaching 100 million users as 2023 began. The generative AI application has revolutionized not only the world of artificial intelligence but is impacting almost every industry. In the world of cybersecurity, new tools and technologies are typically adopted quickly; unfortunately, in many cases, bad actors are the earliest to adopt and adapt. This can be bad news for your business, as it escalates the degree of difficulty in managing threats. Using ChatGPT's large language model, anyone can easily generate malicious code or craft convincing phishing emails, all without any technical expertise or coding knowledge. While cybersecurity teams can leverage ChatGPT defensively, the lower barrier to entry for launching a cyberattack has both complicated and escalated the threat landscape.

Understanding the role of ChatGPT in modern ransomware attacks

We've written about ransomware many times, but it's crucial to reiterate that the cost to individuals, businesses, and institutions can be massive, both financially and in terms of data loss or reputational damage. With AI, cybercriminals have a potent tool at their disposal, enabling more precise, adaptable, and stealthy attacks. They're using machine learning algorithms to simulate trusted entities, create convincing phishing emails, and even evade detection. The problem isn't just the sophistication of the attacks, but their sheer volume. With AI, hackers can launch attacks on an unprecedented scale, exponentially expanding the breadth of potential victims. Today, hackers use AI to power their ransomware attacks, making them more precise, adaptable, and destructive. Cybercriminals can leverage AI for ransomware in many ways, but perhaps the easiest is more in line with how many ChatGPT users are using it: writing and creating content. For hackers, especially foreign ransomware gangs, AI can be used to craft sophisticated phishing emails that are much more difficult to detect than the poorly-worded message that was once so common with bad actors (and their equally bad grammar). Even more concerning, ChatGPT-fueled ransomware can mimic the style and tone of a trusted individual or company, tricking the recipient into clicking a malicious link or downloading an infected attachment. This is where the danger lies. Imagine your organization has the best cybersecurity awareness program, and all your employees have gained expertise in deciphering which emails are legitimate and which can be dangerous. Today, if the email can mimic tone and appear 100% genuine, how are the employees going to know? It's almost down to a coin flip in terms of odds. Furthermore, AI-driven ransomware can study the behavior of the security software on a system, identify patterns, and then either modify itself or choose th Ransomware Malware Tool Threat ChatGPT ★★
DarkReading.webp 2023-06-06 12:00:00 ChatGPT Hallucinations Open Developers to Supply-Chain Malware Attacks
Attackers could exploit a common AI experience – false recommendations – to spread malicious code via developers who use ChatGPT to create software.
Malware ChatGPT ★★
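The underlying attack ("AI package hallucination") works because an LLM recommends a dependency that does not exist, and an attacker registers that name before a developer tries to install it. A minimal defensive sketch follows (my own illustration, not from the article; the 90-day maturity threshold is an arbitrary assumption) that vets package names against PyPI's public JSON API before installation:

```python
import sys
from datetime import datetime, timezone

import requests  # third-party: pip install requests

def vet_package(name: str, min_age_days: int = 90) -> str:
    """Flag dependencies that do not exist on PyPI or look freshly
    registered - a common trait of squatted 'hallucinated' names."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        return "DOES NOT EXIST - possible hallucinated name"
    resp.raise_for_status()
    data = resp.json()
    # Collect upload timestamps across all released files.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        return "no released files - treat as suspicious"
    age = (datetime.now(timezone.utc) - min(uploads)).days
    if age >= min_age_days:
        return f"ok (first upload {age} days ago)"
    return f"SUSPICIOUS - first upload only {age} days ago"

if __name__ == "__main__":
    # Usage: python vet_packages.py <pkg> [<pkg> ...]
    for pkg in sys.argv[1:]:
        print(pkg, "->", vet_package(pkg))
```

Age alone is a weak signal, so a check like this belongs alongside, not instead of, pinned lockfiles and manual review of any dependency an LLM suggested.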
knowbe4.webp 2023-05-31 13:00:00 CyberheistNews Vol 13 #22 [Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks
CyberheistNews Vol 13 #22 | May 31st, 2023

[Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks

With attackers knowing financial fraud-based phishing attacks are best suited for the one industry where the money is, this massive spike in attacks should both surprise you and not surprise you at all. When you want tires, where do you go? Right – to the tire store. Shoes? Yup – shoe store. The most money you can scam from a single attack? That's right – the financial services industry, at least according to cybersecurity vendor Armorblox's 2023 Email Security Threat Report. According to the report, the financial services industry as a target has increased by 72% over 2022 and was the single largest target of financial fraud attacks, representing 49% of all such attacks. When breaking down the specific types of financial fraud, it doesn't get any better for the financial industry:

- 51% of invoice fraud attacks targeted the financial services industry
- 42% were payroll fraud attacks
- 63% were payment fraud

To make matters worse, nearly one-quarter (22%) of financial fraud attacks successfully bypassed native email security controls, according to Armorblox. That means one in five email-based attacks made it all the way to the inbox. The next layer in your defense should be a user that's properly educated using security awareness training to easily identify financial fraud and other phishing-based threats, stopping them before they do actual damage.

Blog post with links: https://blog.knowbe4.com/financial-fraud-phishing

[Live Demo] Ridiculously Easy Security Awareness Training and Phishing

Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us Wednesday, June 7, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 introduces a new-school approach to security awareness training and simulated phishing. Get a look at THREE NEW FEATURES and see how easy it is to train and phish your users. Ransomware Malware Hack Tool Threat Conference Uber ChatGPT Guam ★★
bleepingcomputer.webp 2023-05-30 15:01:01 RomCom malware spread via Google Ads for ChatGPT, GIMP, more
(lien direct)
A new campaign distributing the RomCom backdoor malware is impersonating the websites of well-known or fictional software, tricking users into downloading and launching malicious installers. [...]
Malware ChatGPT ★★
The_Hackers_News.webp 2023-05-19 12:23:00 Searching for AI Tools? Watch Out for Rogue Sites Distributing RedLine Malware
(lien direct)
Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver RedLine Stealer malware. "Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord)," eSentire
Malware ChatGPT ChatGPT ★★
InfoSecurityMag.webp 2023-05-17 16:00:00 BatLoader Impersonates ChatGPT and Midjourney in Cyber-Attacks
(lien direct)
eSentire recommended raising awareness of malware masquerading as legitimate applications
Malware ChatGPT ChatGPT ★★
knowbe4.webp 2023-05-09 13:00:00 CyberheistNews Vol 13 #19 [Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users
(lien direct)
CyberheistNews Vol 13 #19 | May 9th, 2023

[Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users

Compromised websites (legitimate sites that have been successfully compromised to support social engineering) are serving visitors fake Google Chrome update error messages. "Google Chrome users who use the browser regularly should be wary of a new attack campaign that distributes malware by posing as a Google Chrome update error message," Trend Micro warns. "The attack campaign has been operational since February 2023 and has a large impact area."

The message displayed reads, "UPDATE EXCEPTION. An error occurred in Chrome automatic update. Please install the update package manually later, or wait for the next automatic update." A link is provided at the bottom of the bogus error message that takes the user to what's misrepresented as a link that will support a Chrome manual update. In fact, the link downloads a ZIP file that contains an EXE file. The payload is a cryptojacking Monero miner.

A cryptojacker is bad enough, since it will drain power and degrade device performance. This one also carries the potential for compromising sensitive information, particularly credentials, and serving as staging for further attacks.

This campaign may be more effective for its routine, innocent look. There are no spectacular threats, no promises of instant wealth, just a notice about a failed update. Users can become desensitized to the potential risks that bogus messages concerning IT issues carry with them. Informed users are the last line of defense against attacks like these. New-school security awareness training can help any organization sustain that line of defense and create a strong security culture.

Blog post with links: https://blog.knowbe4.com/fake-chrome-update-error-messages

A Master Class on IT Security: Roger A. Grimes Teaches You Phishing Mitigation

Phishing attacks have come a long way from the spray-and-pray emails of just a few decades ago. Now they're more targeted, more cunning and more dangerous. And this enormous security gap leaves you open to business email compromise, session hijacking, ransomware and more. Join Roger A. Grimes, KnowBe4's Data-Driven Defense Evangelist,

Ransomware Data Breach Spam Malware Tool Threat Prediction NotPetya NotPetya APT 28 ChatGPT ChatGPT ★★
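The delivery chain described above ends with a ZIP download containing an EXE payload. A minimal mail- or proxy-side screening sketch follows, assuming Python's standard zipfile module; the extension blocklist and file names are illustrative, not from the article.

```python
# Minimal sketch: flag archives holding executables before they reach
# users, without extracting anything to disk.
import zipfile

EXECUTABLE_EXTS = (".exe", ".scr", ".msi", ".bat", ".cmd")  # assumed list

def executables_in_zip(path: str) -> list[str]:
    """Return archive members that look executable, based on extension."""
    with zipfile.ZipFile(path) as archive:
        return [name for name in archive.namelist()
                if name.lower().endswith(EXECUTABLE_EXTS)]

# Hypothetical usage: executables_in_zip("chrome_update.zip") -> ["update.exe"]
```

Extension matching is deliberately crude; a real gateway would also inspect file magic bytes and nested archives.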
AlienVault.webp 2023-05-08 10:00:00 Preventing sophisticated phishing attacks aimed at employees
(lien direct)
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

As technology advances, phishing attempts are becoming more sophisticated. It can be challenging for employees to recognize that an email is malicious when it looks normal, so it's up to their company to properly train workers in prevention and detection.

Phishing attacks are becoming more sophisticated

Misspellings and poorly formatted text used to be the leading indicators of an email scam, but attacks are getting more sophisticated. Today, hackers can spoof email addresses, and bots sound like humans. It's becoming challenging for employees to tell whether their emails are real or fake, which puts the company at risk of data breaches.

In March 2023, an artificial intelligence chatbot called GPT-4 received an update that lets users give specific instructions about styles and tasks. Attackers can use it to pose as employees and send convincing messages, since it sounds intelligent and has general knowledge of any industry.

Since the classic warning signs of phishing attacks no longer apply, companies should train all employees on the new, sophisticated methods. As phishing attacks change, so should businesses.

Identify the signs

Your company can take preventive action to secure its employees against attacks. You need to make it difficult for hackers to reach them, and your company must train them on warning signs. While blocking spam senders and reinforcing security systems is up to you, employees must know how to identify and report threats themselves. You can prevent data breaches if employees know what to watch out for (a minimal scoring sketch follows this entry):

• Misspellings: While it's becoming more common for phishing emails to have correct spelling, employees still need to look for mistakes. For example, they could look for industry-specific language, because everyone in their field should know how to spell those words.
• Irrelevant senders: Workers can identify phishing, even when the email is spoofed to appear as someone they know, by asking themselves whether it is relevant. They should flag the email as a potential attack if the sender doesn't usually reach out to them or is someone in an unrelated department.
• Attachments: Hackers attempt to install malware through links or downloads. Ensure every employee knows they shouldn't click on them.
• Odd requests: A sophisticated phishing attack has relevant messages and proper language, but it is somewhat vague because it goes to multiple employees at once. For example, employees could recognize it if it asks them to do something unrelated to their role.

It may be harder for people to detect warning signs as attacks evolve, but you can prepare them for those situations as well as possible. It's unlikely hackers have access to employees' specific duties or the inner workings of your company, so you must capitalize on those details. Sophisticated attacks will sound intelligent and possibly align with general duties, so everyone must be constantly aware.

Training will help employees identify signs, but you need to take more preventive action to ensure you're covered.

Take preventive action

Basic security measures, like regularly updating passwords and running antivirus software, are fundamental to protecting your company. For example, everyone should change their passwords once every three months at minimum to ensur

Spam Malware ChatGPT ★★
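Here is a minimal sketch of three of the four signs listed above (misspelling detection would need a dictionary, so it is omitted). The Email structure, sender allowlist, extension list, and phrase list are illustrative assumptions, not values from the article.

```python
# Minimal sketch of the red-flag checklist above.
from dataclasses import dataclass, field

@dataclass
class Email:
    sender: str
    subject: str
    body: str
    attachments: list[str] = field(default_factory=list)

KNOWN_SENDERS = {"payroll@example.com", "it-helpdesk@example.com"}     # assumed
RISKY_EXTENSIONS = (".exe", ".js", ".vbs", ".iso", ".zip")             # assumed
ODD_REQUESTS = ("wire transfer", "gift card", "verify your password")  # assumed

def red_flags(mail: Email) -> list[str]:
    flags = []
    # Irrelevant sender: not someone who normally contacts this employee.
    if mail.sender.lower() not in KNOWN_SENDERS:
        flags.append("unknown or irrelevant sender")
    # Attachments: common malware-delivery file types.
    if any(a.lower().endswith(RISKY_EXTENSIONS) for a in mail.attachments):
        flags.append("risky attachment type")
    # Odd requests: money or credentials, vague enough to be mass-mailed.
    text = f"{mail.subject} {mail.body}".lower()
    if any(p in text for p in ODD_REQUESTS):
        flags.append("odd request for money or credentials")
    return flags

print(red_flags(Email("ceo@examp1e.com", "Urgent",
                      "Please buy gift cards today.", ["invoice.zip"])))
# -> all three flags fire for this example
```

A checklist like this complements, rather than replaces, the training the article describes: it catches the mechanical signs so employees can focus on tone and context.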
InfoSecurityMag.webp 2023-05-04 16:00:00 Meta Tackles Malware Posing as ChatGPT in Persistent Campaigns
(lien direct)
Malware families detected and disrupted include Ducktail and the newly identified NodeStealer
Malware ChatGPT ChatGPT ★★
The_Hackers_News.webp 2023-05-04 14:27:00 Meta Takes Down Malware Campaign That Used ChatGPT as a Lure to Steal Accounts
(lien direct)
Meta said it took steps to take down more than 1,000 malicious URLs from being shared across its services that were found to leverage OpenAI's ChatGPT as a lure to propagate about 10 malware families since March 2023. The development comes against the backdrop of fake ChatGPT web browser extensions being increasingly used to steal users' Facebook account credentials with an aim to run
Malware ChatGPT ChatGPT ★★
SecurityWeek.webp 2023-05-03 13:30:40 Hackers Promise AI, Install Malware Instead
(lien direct)
Facebook parent Meta warned that hackers are using the promise of generative artificial intelligence like ChatGPT to trick people into installing malware on devices.
Malware ChatGPT ChatGPT ★★
globalsecuritymag.webp 2023-05-03 12:09:10 Fake Websites Related to ChatGPT Pose High Risk, Warns Check Point Research
(lien direct)
Fake Websites Related to ChatGPT Pose High Risk, Warns Check Point Research. Highlights • Check Point Research (CPR) sees a surge in malware distributed through websites appearing to be related to ChatGPT - Malware Update
Malware ChatGPT ChatGPT ★★
WiredThreatLevel.webp 2023-05-03 12:00:00 Meta Moves to Counter New Malware and Repeat Account Takeovers
(lien direct)
The company is adding new tools as bad actors use ChatGPT-themed lures and mask their infrastructure in an attempt to trick victims and elude defenders.
Malware ChatGPT ★★
Checkpoint.webp 2023-05-02 19:12:58 Fake Websites Impersonating Association To ChatGPT Poses High Risk, Warns Check Point Research
(lien direct)
Highlights:
• Check Point Research (CPR) sees a surge in malware distributed through websites appearing to be related to ChatGPT.
• Since the beginning of 2023, 1 out of 25 new ChatGPT-related domains was either malicious or potentially malicious.
• CPR provides examples of websites that mimic ChatGPT, intending to lure users into downloading malicious files, and warns users to be aware and to refrain from accessing similar websites.

The age of AI – Anxiety or Aid? In December 2022, Check Point Research (CPR) started raising concerns about ChatGPT's implications for cybersecurity. In our previous report, CPR put a spotlight on an increase […]
Malware ChatGPT ChatGPT ★★
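CPR's one-in-25 figure suggests triaging newly observed domains for brand lookalikes. Below is a minimal sketch of that idea, assuming a short watch list and a string-similarity threshold chosen purely for illustration.

```python
# Minimal sketch of lookalike-domain triage for ChatGPT-themed lures.
from difflib import SequenceMatcher

BRANDS = ("chatgpt", "openai")  # assumed watch list

def looks_like_chatgpt_lure(domain: str, threshold: float = 0.75) -> bool:
    # Compare the leftmost label with hyphens/underscores stripped,
    # so "chat-gpt-pc.online" reduces to "chatgptpc".
    label = domain.lower().split(".")[0].replace("-", "").replace("_", "")
    return any(
        brand in label or SequenceMatcher(None, label, brand).ratio() >= threshold
        for brand in BRANDS
    )

for d in ("chat-gpt-pc.online", "openai-update.com", "weather.example.org"):
    print(d, looks_like_chatgpt_lure(d))
# -> True, True, False
```

A production check would also handle subdomains, homoglyphs, and newly registered domain age; this only illustrates the triage concept.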
knowbe4.webp 2023-05-02 13:00:00 CyberheistNews Vol 13 #18 [Eye on AI] Does ChatGPT Have Cybersecurity Tells?
(lien direct)
CyberheistNews Vol 13 #18 | May 2nd, 2023

[Eye on AI] Does ChatGPT Have Cybersecurity Tells?

Poker players and other human lie detectors look for "tells," that is, a sign by which someone might unwittingly or involuntarily reveal what they know, or what they intend to do. A cardplayer yawns when they're about to bluff, for example, or someone's pupils dilate when they've successfully drawn a winning card.

It seems that artificial intelligence (AI) has its tells as well, at least for now, and some of them have become so obvious and so well known that they've become internet memes. "ChatGPT and GPT-4 are already flooding the internet with AI-generated content in places famous for hastily written inauthentic content: Amazon user reviews and Twitter," Vice's Motherboard observes, and there are some ways of interacting with the AI that lead it into betraying itself for what it is.

"When you ask ChatGPT to do something it's not supposed to do, it returns several common phrases. When I asked ChatGPT to tell me a dark joke, it apologized: 'As an AI language model, I cannot generate inappropriate or offensive content,' it said. Those two phrases, 'as an AI language model' and 'I cannot generate inappropriate content,' recur so frequently in ChatGPT generated content that they've become memes."

That happy state of easy detection, however, is unlikely to endure. As Motherboard points out, these tells are a feature of "lazily executed" AI. With a little more care and attention, they'll grow more persuasive. One risk of the AI language models is that they can be adapted to perform social engineering at scale. In the near term, new-school security awareness training can help alert your people to the tells of automated scamming. And in the longer term, that training will adapt and keep pace with the threat as it evolves.

Blog post with links: https://blog.knowbe4.com/chatgpt-cybersecurity-tells

[Live Demo] Ridiculously Easy Security Awareness Training and Phishing

Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, May 3, @ 2:00 PM (ET), for a live demonstration of how KnowBe4

Ransomware Malware Hack Threat ChatGPT ChatGPT ★★
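The two "tells" quoted in this issue are concrete enough to match mechanically. A minimal sketch follows, using only the phrases from the newsletter; treating a match as grounds for human review (not proof of AI authorship) is the assumption.

```python
# Minimal sketch: flag text containing the well-known ChatGPT tell phrases
# quoted in the newsletter above.
TELL_PHRASES = (
    "as an ai language model",
    "i cannot generate inappropriate",
)

def llm_tells(text: str) -> list[str]:
    lowered = text.lower()
    return [phrase for phrase in TELL_PHRASES if phrase in lowered]

sample = ("As an AI language model, I cannot generate inappropriate "
          "or offensive content.")
print(llm_tells(sample))  # both phrases match
```

As the newsletter notes, these tells belong to "lazily executed" AI and will fade, so phrase matching is a stopgap, not a durable detector.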
Anomali.webp 2023-04-25 18:22:00 Anomali Cyber Watch: Two Supply-Chain Attacks Chained Together, Decoy Dog Stealthy DNS Communication, EvilExtractor Exfiltrates to FTP Server
(lien direct)
The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: APT, Cryptomining, Infostealers, Malvertising, North Korea, Phishing, Ransomware, and Supply-chain attacks. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity.

Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed.

Trending Cyber News and Threat Intelligence

First-Ever Attack Leveraging Kubernetes RBAC to Backdoor Clusters (published: April 21, 2023)

A new Monero cryptocurrency-mining campaign is the first recorded case of gaining persistence via Kubernetes (K8s) Role-Based Access Control (RBAC), according to Aquasec researchers. The recorded honeypot attack started with exploiting a misconfigured API server. The attackers proceeded by gathering information about the cluster, checking if their cluster was already deployed, and deleting some existing deployments. They used RBAC to gain persistence by creating a new ClusterRole and a new ClusterRoleBinding. The attackers then created a DaemonSet so that a single API request would target all nodes for deployment. The deployed malicious image from the public registry Docker Hub was named to impersonate a legitimate account and a popular legitimate image. It has been pulled 14,399 times, and 60 exposed K8s clusters have been found with signs of exploitation by this campaign.

Analyst Comment: Your company should have protocols in place to ensure that all cluster management and cloud storage systems are properly configured and patched. K8s buckets are too often misconfigured, and threat actors realize there is potential for malicious activity. A defense-in-depth (layering of security mechanisms, redundancy, fail-safe defense processes) approach is a good mitigation step to help prevent actors from highly-active threat groups.

MITRE ATT&CK: [MITRE ATT&CK] T1190 - Exploit Public-Facing Application | [MITRE ATT&CK] T1496 - Resource Hijacking | [MITRE ATT&CK] T1036 - Masquerading | [MITRE ATT&CK] T1489 - Service Stop

Tags: Monero, malware-type:Cryptominer, detection:PUA.Linux.XMRMiner, file-type:ELF, abused:Docker Hub, technique:RBAC Buster, technique:Create ClusterRoleBinding, technique:Deploy DaemonSet, target-system:Linux, target:K8s, target:Kubernetes RBAC

3CX Software Supply Chain Compromise Initiated by a Prior Software Supply Chain Compromise; Suspected North Korean Actor Responsible (published: April 20, 2023)

Investigation of the previously reported 3CX supply chain compromise (March 2023) allowed Mandiant researchers to determine that it was the result of a prior software supply chain attack using a trojanized installer for X_TRADER, a software package provided by Trading Technologies. The attack involved the publicly available tool SigFlip decrypting an RC4 stream-cipher payload and starting the publicly available DaveShell shellcode for reflective loading. It led to installation of the custom, modular VeiledSignal backdoor. VeiledSignal's additional modules inject the C2 module in a browser process instance, create a Windows named pipe and

Ransomware Spam Malware Tool Threat Cloud Uber APT 38 ChatGPT APT 43 ★★
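The persistence step in the Aquasec case was a new ClusterRole and ClusterRoleBinding. Below is a minimal audit sketch using the official kubernetes Python client; the expected-subject baseline is an assumption for illustration, not from the report.

```python
# Minimal sketch: list ClusterRoleBindings and flag any that grant
# cluster-admin to a subject outside an expected baseline.
from kubernetes import client, config

EXPECTED_ADMIN_SUBJECTS = {"system:masters"}  # assumed baseline

def suspicious_cluster_admin_bindings() -> list[tuple[str, str, str]]:
    config.load_kube_config()  # use load_incluster_config() inside a pod
    rbac = client.RbacAuthorizationV1Api()
    findings = []
    for binding in rbac.list_cluster_role_binding().items:
        if binding.role_ref.name != "cluster-admin":
            continue
        for subject in (binding.subjects or []):
            if subject.name not in EXPECTED_ADMIN_SUBJECTS:
                findings.append((binding.metadata.name,
                                 subject.kind, subject.name))
    return findings

if __name__ == "__main__":
    for name, kind, subject in suspicious_cluster_admin_bindings():
        print(f"review {name}: grants cluster-admin to {kind} {subject}")
```

A fuller audit would also diff DaemonSets and image sources against a known-good inventory, since the reported campaign abused both.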
knowbe4.webp 2023-04-25 13:00:00 CyberheistNews Vol 13 #17 [Head Start] Effective Methods How To Teach Social Engineering to an AI
(lien direct)
Spam Malware Hack Threat APT 28 ChatGPT ChatGPT ★★★
bleepingcomputer.webp 2023-04-22 10:08:16 Google ads push BumbleBee malware used by ransomware gangs
(lien direct)
The enterprise-targeting Bumblebee malware is distributed through Google Ads and SEO poisoning that promote popular software like Zoom, Cisco AnyConnect, ChatGPT, and Citrix Workspace. [...]
Ransomware Malware ChatGPT ★★
News.webp 2023-04-21 09:33:14 ChatGPT fans need 'defensive mindset' to avoid scammers and malware
(lien direct)
Palo Alto Networks spots suspicious activity spikes such as naughty domains, phishing, and worse. ChatGPT fans need to adopt a "defensive mindset" because scammers have started using multiple methods to trick the bot's users into downloading malware or sharing sensitive information. …
Malware ChatGPT ChatGPT ★★
ComputerWeekly.webp 2023-04-20 11:46:00 Bumblebee malware flies on the wings of Zoom and ChatGPT
(lien direct)
Malware ChatGPT ChatGPT ★★
SecureWork.webp 2023-04-20 10:49:00 Bumblebee Malware Distributed Via Trojanized Installer Downloads
(lien direct)
Type: Blogs. Bumblebee Malware Distributed Via Trojanized Installer Downloads. Restricting the download and execution of third-party software is critically important. Learn how CTU™ researchers observed Bumblebee malware distributed via trojanized installers for popular software such as Zoom, Cisco AnyConnect, ChatGPT, and Citrix Workspace.
Malware ChatGPT ★★
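One basic control implied by the CTU guidance is verifying a downloaded installer against the vendor-published SHA-256 before execution. A minimal sketch follows; the command-line interface and messages are illustrative, not a vendor tool.

```python
# Minimal sketch: compare a downloaded installer's SHA-256 against the
# value published by the vendor before allowing it to run.
import hashlib
import sys

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    installer, expected = sys.argv[1], sys.argv[2].lower()
    actual = sha256_of(installer)
    if actual == expected:
        print("OK: hash matches the vendor-published value")
    else:
        print(f"MISMATCH: got {actual}")
        sys.exit(1)
```

This only helps when the reference hash comes from a trusted channel; a hash published on the same fake download page the trojanized installer came from proves nothing.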
WiredThreatLevel.webp 2023-04-19 11:00:00 How ChatGPT (and Bots Like It) Can Spread Malware
(lien direct)
Generative AI is a tool, which means it can be used by cybercriminals, too. Here's how to protect yourself.
Malware ChatGPT ★★
knowbe4.webp 2023-04-18 13:00:00 CyberheistNews Vol 13 #16 [Finger on the Pulse]: How Phishers Leverage Recent AI Buzz
(lien direct)
CyberheistNews Vol 13 #16 | April 18th, 2023

[Finger on the Pulse]: How Phishers Leverage Recent AI Buzz

Curiosity leads people to suspend their better judgment as a new campaign of credential theft exploits a person's excitement about the newest AI systems not yet available to the general public. On Tuesday morning, April 11th, Veriti explained that several unknown actors are making false Facebook ads which advertise a free download of AIs like ChatGPT and Google Bard.

Veriti writes: "These posts are designed to appear legitimate, using the buzz around OpenAI language models to trick unsuspecting users into downloading the files. However, once the user downloads and extracts the file, the Redline Stealer (aka RedStealer) malware is activated and is capable of stealing passwords and downloading further malware onto the user's device."

Veriti describes the capabilities of the Redline Stealer malware which, once downloaded, can take sensitive information like credit card numbers, passwords, and personal information like user location and hardware. Veriti added: "The malware can upload and download files, execute commands, and send back data about the infected computer at regular intervals."

Experts recommend using official Google or OpenAI websites to learn when their products will be available and only downloading files from reputable sources. With the rising use of Google and Facebook ads as attack vectors, experts also suggest refraining from clicking on suspicious advertisements promising early access to any product on the Internet. Employees can be helped to develop sound security habits like these by stepping them through monthly social engineering simulations.

Blog post with links: https://blog.knowbe4.com/ai-hype-used-for-phishbait

[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist

Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever leav

Spam Malware Hack Threat APT 28 ChatGPT ChatGPT ★★★
DarkReading.webp 2023-04-12 21:57:00 Report Reveals ChatGPT Already Involved in Data Leaks, Phishing Scams & Malware Infections
(lien direct)
Malware ChatGPT ChatGPT ★★★★
Last update at: 2024-05-08 14:08:13
See our sources.
To see everything: Our RSS (filtered) Twitter