Src |
Date (GMT) |
Title |
Description |
Tags |
Stories |
Notes |
 |
2025-03-31 10:46:09 |
This month in security with Tony Anscombe – March 2025 edition (direct link) |
From an exploited vulnerability in a third-party ChatGPT tool to a bizarre twist on ransomware demands, it's a wrap on another month filled with impactful cybersecurity news |
Ransomware
Tool
Vulnerability
|
ChatGPT
|
★★★
|
 |
2025-03-17 06:49:07 |
Tackling the threat of cyber risk during AI adoption (direct link) |
Ever since AI's meteoric rise to prominence following the release of ChatGPT in November 2022, the technology has been at the centre of international debate. For every application in healthcare, education, and workplace efficiency, reports of abuse by cybercriminals for phishing campaigns, automating attacks, and ransomware have made mainstream news. Regardless of whether individuals and [...] |
Ransomware
Threat
Medical
|
ChatGPT
|
★★
|
 |
2025-01-28 19:37:26 |
Security Flaws Found in DeepSeek Lead to Jailbreak (direct link) |
DeepSeek R1, the AI model generating all the buzz right now, has been found to have several vulnerabilities that allowed security researchers at the cyber threat intelligence firm KELA to jailbreak it.
KELA tested these jailbreaks around known vulnerabilities and bypassed the chatbot's restriction mechanisms.
This allowed them to jailbreak it across a wide range of scenarios, enabling it to generate malicious outputs, such as ransomware development, fabrication of sensitive content, and detailed instructions for creating toxins and explosive devices.
For instance, the “Evil Jailbreak” method (which prompts the AI model to adopt an “evil” persona), which tricked earlier ChatGPT models and was fixed long ago, still works on DeepSeek.
The news comes as DeepSeek investigates a cyberattack and is not allowing new registrations.
“Due to large-scale malicious attacks on DeepSeek’s services, we are temporarily limiting registrations to ensure continued service. Existing users can log in as usual,” DeepSeek’s status page reads.
While the company has not confirmed what kind of cyberattack disrupted its service, it appears to be a DDoS attack.
DeepSeek has yet to comment on these vulnerabilities.
|
Ransomware
Vulnerability
Threat
|
ChatGPT
|
★★★
|
 |
2024-10-21 18:57:24 |
Bumblebee malware returns after recent law enforcement disruption (direct link) |
## Snapshot
Researchers at the cybersecurity firm Netskope have observed a resurgence of the Bumblebee malware loader, which had gone quiet following the disruption caused by Europol's 'Operation Endgame' in May.
## Description
Bumblebee, attributed to the developers of [TrickBot](https://sip.security.microsoft.com/intel-profiles/5a0aed1313768d50c9e800748108f51d3dfea6a4b48aa71b630cff897982f7c), emerged as a replacement for the [BazaLoader](https://sip.security.microsoft.com/intel-explorer/articles/8aaa95d1) backdoor, facilitating ransomware actors' access to networks. The malware is typically spread through phishing, malvertising, and SEO poisoning promoting counterfeit software such as Zoom, Cisco AnyConnect, ChatGPT, and Citrix Workspace. It is known for delivering payloads such as [Cobalt Strike](https://sip.security.microsoft.com/intel-profiles/fd8511c1d61e93d39411acf36a31130a6795efe186497098fe0c6f2ccfb920fc) beacons, other malware, and various ransomware strains.
Bumblebee's latest attack chain begins with a phishing email that tricks the victim into downloading a malicious ZIP archive containing a .lnk shortcut. The shortcut triggers PowerShell to download a malicious .msi file, masquerading as a legitimate NVIDIA driver update or a MidJourney installer, from a remote server. The .msi file executes silently, using the SelfReg table to load a DLL into the msiexec.exe process and deploy Bumblebee in memory. The payload features an internal DLL, exported function names, and configuration-extraction mechanisms consistent with previous variants.
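The chain described above (phishing ZIP lure, .lnk shortcut, PowerShell, silent .msi install) lends itself to simple attachment triage. Below is a minimal illustrative sketch, not Netskope's or Microsoft's detection logic: it only flags ZIP archives whose members carry extensions commonly abused by loaders, with the extension list chosen as an assumption for demonstration.

```python
import sys
import zipfile

# Extensions commonly abused in loader delivery chains like the one described
# above (.lnk shortcuts, .msi installers). The list is an illustrative assumption.
SUSPICIOUS_EXTENSIONS = {".lnk", ".msi", ".js", ".vbs"}

def flag_suspicious_zip(path: str) -> list[str]:
    """Return archive members whose extension is commonly abused by loaders."""
    hits = []
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            lowered = name.lower()
            if any(lowered.endswith(ext) for ext in SUSPICIOUS_EXTENSIONS):
                hits.append(name)
    return hits

if __name__ == "__main__":
    # Usage: python triage_zip.py attachment1.zip attachment2.zip ...
    for archive_path in sys.argv[1:]:
        findings = flag_suspicious_zip(archive_path)
        if findings:
            print(f"{archive_path}: suspicious members {findings}")
```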
## Microsoft Analysis and Additional OSINT Context
The actor Microsoft tracks as [Storm-0249](https://sip.security.microsoft.com/intel-profiles/75f82d0d2bf6af59682bbbbbbbb. is known for distributing BazaLoader, Gozi, Emotet, [IcedID](https://sip.security.microsoft.com/intel-profiles/ee69395aeeea2b2322d5941be0ec4997a22d106f671ef84d35418ec2810faddb) and Bumblebee. Storm-0249 typically uses phishing emails to distribute its malware payloads in opportunistic attacks. In May 2022, Microsoft Threat Intelligence observed Storm-0249 moving away from its previous malware families to [Bumblebee](https://security.microsoft.com/threatanalytics3/048e866a-0a92-47b8-94ac-c47fe577cc33/analystreport?ocid=Magicti_TA_TA2) as an initial payload-delivery mechanism. The group has conducted email-based initial-access campaigns for hand-off to other actors, including campaigns that resulted in ransomware deployment.
Bumblebee malware has made several resurgences since its discovery in 2022, adapting and evolving in response to security measures. Initially observed as a replacement for the BazarLoader malware used by TrickBot-linked cybercriminal groups, Bumblebee has resurfaced multiple times with improved capabilities and modified attack strategies. These [resurgences](https://sip.security.microsoft.com/intel-explorer/articles/ab2bde0b) often coincide with shifts in the cybercrime ecosystem, including the takedown of TrickBot's infrastructure and the wind-down of Conti ransomware operations.
Bumblebee's ability to reappear is due to its flexible modular architecture, which lets threat actors update its payloads and evasion techniques. Each resurgence has seen Bumblebee used in increasingly sophisticated campaigns, frequently delivering high-impact ransomware such as BlackCat and Quantum. It has additionally been linked to advanced evasion tactics |
Ransomware
Spam
Malware
Tool
Threat
Legislation
|
ChatGPT
|
★★
|
 |
2024-10-15 10:00:00 |
From Reactive to Proactive: Shifting Your Cybersecurity Strategy (direct link) |
The content of this post is solely the responsibility of the author. LevelBlue does not adopt or endorse any of the views, positions, or information provided by the author in this article.
Most companies have some cybersecurity protocols in place in case of a breach. They could be anything from antivirus software to spam filters. Those are considered reactive security measures because they tell you once a threat has already become a reality. By then, the damage may already be done, and your company could be in hot water.
Instead, your business needs to pair those reactive strategies with proactive ideas. These are plans you can put in place to keep an eye on trends and your potential vulnerabilities so you can catch and prevent a threat before it comes to fruition. Here are a few strategies to set your company on the right path.
Know And Anticipate The Threats
As technology evolves, the risk of cybercrime continues to rise to new levels. If you’re a business owner, you need only look at cybersecurity by the numbers to see that you must take proactive action.
A survey in 2023 found that ransomware attacks, where a hacker takes control of your systems until you pay a ransom, continue to be one of the primary threats to medium-sized businesses. They found that one ransomware attack occurs every 10 seconds. Remember, you don’t need to be a major corporation to be on the radar of cybercriminals. Almost every business has data that can be used maliciously by hackers.
Possibly even more alarming is that a hacker can break into your network in less than five hours. That means, if you aren’t being proactive, you could find out about a threat after the hacker gains access and the damage has been done.
Staying Ahead Of The Curve
In addition to watching out for known threats, your company must proactively protect against future threats. You need to be ahead of the curve, especially during the age of artificial intelligence. The rise of programs like ChatGPT and generative AI means that hackers have many new avenues to hack your systems. At this point, less than 10% of companies are prepared to tackle generative AI risks. Because of this lack of understanding and proactive security, there’s been a spike in cybersecurity events.
Your company needs to be well-versed in the proactive measures that can protect against these upcoming threats. You can try several proactive cybersecurity tactics, including penetration testing, which is the process of bringing in skilled hackers to do their best to breach your company’s defenses. The best hackers will know the newest tricks, from AI techniques to vishing attacks, so you can get ahead of the game. You can also use advanced analytics to detect issues, such as predictive modeling, which will analyze past transactions and look for unusual behavior and characteristics to find potential threats so you can take action.
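As a concrete illustration of the predictive-modeling idea above, the following minimal Python sketch learns a baseline from historical transaction amounts and flags new amounts more than three standard deviations away. The sample data, the single feature, and the three-sigma threshold are assumptions for demonstration, not a production fraud model.

```python
import statistics

# Minimal sketch of the predictive-modeling idea described above: learn what
# "normal" transaction amounts look like from history, then flag outliers.
def flag_unusual_transactions(history: list[float], new_amounts: list[float],
                              sigma: float = 3.0) -> list[float]:
    """Return new amounts that deviate from the historical mean by > sigma stdevs."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return [amount for amount in new_amounts if abs(amount - mean) > sigma * stdev]

if __name__ == "__main__":
    past = [120.0, 95.5, 130.2, 110.8, 99.9, 125.4]   # hypothetical baseline
    incoming = [118.0, 4999.0, 102.3]                  # hypothetical new activity
    print(flag_unusual_transactions(past, incoming))   # expected: [4999.0]
```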
Cybersecurity Training Is A Must
The best way to be proactive against potential cyber threats is to have as many eyes on your systems and processes as possible. So, you need to get all of your employees in on the act. It’s essential to create an effective cybersecurity training program. Ideally, this training would occur during the new hire orientation so everyone is on the same page from day one. Then, have ongoing supplementary training each year.
During this training, teach your team about the common cyber attacks, from password hacking to phishing scams. A phishing email is typically only successful if your employee takes the bait and clicks the included link or attachment. So, teach them about the red flags of phishing emails and to look closely at the sender |
Ransomware
Spam
Hack
Vulnerability
Threat
|
ChatGPT
|
★★
|
 |
2024-09-30 13:21:55 |
Weekly OSINT Highlights, 30 September 2024 (direct link) |
## Snapshot
Last week\'s OSINT reporting highlighted diverse cyber threats involving advanced attack vectors and highly adaptive threat actors. Many reports centered on APT groups like Patchwork, Sparkling Pisces, and Transparent Tribe, which employed tactics such as DLL sideloading, keylogging, and API patching. The attack vectors ranged from phishing emails and malicious LNK files to sophisticated malware disguised as legitimate software like Google Chrome and Microsoft Teams. Threat actors targeted a variety of sectors, with particular focus on government entities in South Asia, organizations in the U.S., and individuals in India. These campaigns underscored the increased targeting of specific industries and regions, revealing the evolving techniques employed by cybercriminals to maintain persistence and evade detection.
## Description
1. [Twelve Group Targets Russian Government Organizations](https://sip.security.microsoft.com/intel-explorer/articles/5fd0ceda): Researchers at Kaspersky identified a threat group called Twelve, targeting Russian government organizations. Their activities appear motivated by hacktivism, utilizing tools such as Cobalt Strike and mimikatz while exfiltrating sensitive information and employing ransomware like LockBit 3.0. Twelve shares infrastructure and tactics with the DARKSTAR ransomware group.
2. [Kryptina Ransomware-as-a-Service Evolution](https://security.microsoft.com/intel-explorer/articles/2a16b748): Kryptina Ransomware-as-a-Service has evolved from a free tool to being actively used in enterprise attacks, particularly under the Mallox ransomware family, which is sometimes referred to as FARGO, XOLLAM, or BOZON. The commoditization of ransomware tools complicates malware tracking as affiliates blend different codebases into new variants, with Mallox operators opportunistically targeting \'timely\' vulnerabilities like MSSQL Server through brute force attacks for initial access.
3. [North Korean IT Workers Targeting Tech Sector:](https://sip.security.microsoft.com/intel-explorer/articles/bc485b8b) Mandiant reports on UNC5267, tracked by Microsoft as Storm-0287, a decentralized threat group of North Korean IT workers sent abroad to secure jobs with Western tech companies. These individuals disguise themselves as foreign nationals to generate revenue for the North Korean regime, aiming to evade sanctions and finance its weapons programs, while also posing significant risks of espionage and system disruption through elevated access.
4. [Necro Trojan Resurgence](https://sip.security.microsoft.com/intel-explorer/articles/00186f0c): Kaspersky\'s Secure List reveals the resurgence of the Necro Trojan, impacting both official and modified versions of popular applications like Spotify and Minecraft, and affecting over 11 million Android devices globally. Utilizing advanced techniques such as steganography to hide its payload, the malware allows attackers to run unauthorized ads, download files, and install additional malware, with recent attacks observed across countries like Russia, Brazil, and Vietnam.
5. [Android Spyware Campaign in South Korea:](https://sip.security.microsoft.com/intel-explorer/articles/e4645053) Cyble Research and Intelligence Labs (CRIL) uncovered a new Android spyware campaign targeting individuals in South Korea since June 2024, which disguises itself as legitimate apps and leverages Amazon AWS S3 buckets for exfiltration. The spyware effectively steals sensitive data such as SMS messages, contacts, images, and videos, while remaining undetected by major antivirus solutions.
6. [New Variant of RomCom Malware:](https://sip.security.microsoft.com/intel-explorer/articles/159819ae) Unit 42 researchers have identified "SnipBot," a new variant of the RomCom malware family, which utilizes advanced obfuscation methods and anti-sandbox techniques. Targeting sectors such as IT services, legal, and agriculture since at least 2022, the malware employs a multi-stage infection chain, and researchers suggest the threat actors\' motives might have s |
Ransomware
Malware
Tool
Vulnerability
Threat
Patching
Mobile
|
ChatGPT
APT 36
|
★★
|
 |
2024-08-30 07:00:00 |
Proofpoint's Human-Centric Security Solutions Named SC Awards 2024 Finalist in Four Unique Categories (direct link) |
We are delighted to share that Proofpoint has been named a finalist in the 2024 SC Awards in four distinguished categories: Best Secure Email Solution; Best Data Security Solution; Best Insider Threat Solution; and Best Threat Detection Technology.
Now in its 27th year, the SC Awards are regarded as cybersecurity's most prestigious awards program, recognizing and honoring the outstanding innovations, organizations, and leaders advancing the practice of information security. Winners are selected by a panel of esteemed industry judges drawn from the CyberRisk Alliance CISO community, members of SC Media and Women in Cyber, and cybersecurity end-user professionals.
The winners of the 2024 SC Awards program will be unveiled this fall, coinciding with Proofpoint's flagship annual conference, Proofpoint Protect, which kicks off in New York on September 10-11 before continuing to London, Chicago, and Austin in October. There, Proofpoint executives and top customers will highlight our continued innovation and the effectiveness of our human-centric security strategy, explore trends, and exchange insights with the industry's brightest.
This recognition further propels our Q2 business momentum and underscores that Proofpoint's capabilities extend beyond email security, affirming the trust we have built across the industry to protect people, defend data, and mitigate human risk. It also joins Proofpoint's growing list of industry validation, including awards for Best Data Leak Prevention (DLP) Solution and Best Identity and Access Solution at the 2024 SC Awards Europe in June.
Learn more about our shortlisted solutions at the 2024 SC Awards:
Proofpoint People Protection Platform
Organizations today face multifaceted cybersecurity threats that exploit human vulnerabilities. Proofpoint combines cutting-edge technology with strategic insights to protect against the full spectrum of cyber threats targeting an organization's people. By deploying multilayered adaptive defenses encompassing adaptive threat detection, robust identity safeguards, and proactive vendor risk management, we ensure resilience and continuity for our customers.
Proofpoint Information Protection
Data loss originates with people, which means a human-centric approach to data security is needed to respond effectively. Proofpoint Information Protection is the only solution that brings together content, threat, and behavioral telemetry across the most critical data-loss channels – email, cloud services, endpoint, on-premises file repositories, and the web. This allows organizations to comprehensively address the full spectrum of human-centric data-loss scenarios.
With the general availability of Proofpoint DLP Transform announced at RSAC this year, organizations can now consolidate their data defenses across channels and protect data moving through ChatGPT, copilots, and other GenAI tools.
Proofpoint Insider Threat Management
30% of global CISOs report that insider threats are their biggest concern over the next 12 months. Proofpoint ITM provides visibility into risky behavior that drives business disruption and revenue loss caused by negligent, malicious, and compromised users. Proofpoint ITM brings together evidence |
Ransomware
Tool
Vulnerability
Threat
Cloud
Conference
|
ChatGPT
|
★★
|
 |
2024-07-26 19:24:17 |
(Déjà vu) Scam Attacks Taking Advantage of the Popularity of the Generative AI Wave (direct link) |
## Snapshot
Palo Alto Networks analysts have found that cyber threat actors are exploiting the growing interest in generative artificial intelligence (GenAI) to carry out malicious activities.
## Description
Palo Alto Networks' analysis of domains registered with GenAI-related keywords revealed insights into suspicious activity, including textual patterns and traffic volume. Case studies detailed various types of attacks, such as delivery of potentially unwanted programs (PUPs), spam distribution, and monetized parking.
Adversaries often exploit trending topics by registering domains with relevant keywords. Analyzing newly registered domains (NRDs) containing GenAI keywords such as "chatgpt" and "sora", Palo Alto Networks detected more than 200,000 NRDs daily, with roughly 225 GenAI-related domains registered each day since November 2022. Many of these domains, identified as suspicious, saw registration spikes around major ChatGPT milestones, such as its integration with Bing and the release of GPT-4. Suspicious domains accounted for an average rate of 28.75%, significantly higher than the rate for NRDs overall. Most traffic to these domains was directed to a few major actors, with 35% of that traffic identified as suspicious.
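As a rough illustration of the keyword-based triage described above, the sketch below scans a list of newly registered domains for GenAI-related keywords. The keyword list and sample domains are assumptions for demonstration; the actual analysis also weighs textual patterns, registration timing, and traffic volume.

```python
# Rough illustration of keyword-based NRD triage: scan a feed of newly
# registered domains for GenAI-related keywords. Keywords and samples are
# illustrative assumptions, not Palo Alto Networks' pipeline.
GENAI_KEYWORDS = ("chatgpt", "gpt4", "sora", "openai")

def genai_keyword_hits(domains: list[str]) -> list[str]:
    """Return domains whose name contains a GenAI-related keyword."""
    return [d for d in domains if any(k in d.lower() for k in GENAI_KEYWORDS)]

if __name__ == "__main__":
    sample_nrds = [  # hypothetical daily NRD feed entries
        "chatgpt-free-download.example",
        "weather-report.example",
        "sora-video-generator.example",
    ]
    print(genai_keyword_hits(sample_nrds))
```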
## Recommendations
Microsoft recommends the following mitigations to reduce the impact of this threat. Check the recommendations card for the deployment status of monitored mitigations.
- Encourage users to use Microsoft Edge and other web browsers that support [SmartScreen](https://learn.microsoft.com/microsoft-365/security/defender-endpoint/web-overview?ocid=Magicti_TA_LearnDDoc), which identifies and blocks malicious websites, including phishing sites, scam sites, and sites that host malware.
- Turn on [cloud-delivered protection](https://learn.microsoft.com/microsoft-365/security/defender-endpoint/configure-block-at-first-sight-microsoft-defender-antivirus?ocid=magicti_ta_learndoc) in Microsoft Defender Antivirus, or the equivalent for your antivirus product, to cover rapidly evolving attacker tools and techniques. Cloud-based machine learning protections block a majority of new and unknown variants.
- Enforce MFA on all accounts, remove users excluded from MFA, and strictly [require MFA](https://learn.microsoft.com/azure/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy?ocid=Magicti_TA_LearnDoc) from all devices, in all locations, at all times.
- Enable passwordless authentication methods (for example, Windows Hello, FIDO keys, or Microsoft Authenticator) for accounts that support passwordless sign-in. For accounts that still require passwords, use authenticator apps like Microsoft Authenticator for MFA. [Refer to this article](https://learn.microsoft.com/azure/active-directory/authentication/concept-authentication-methods?ocid=Magicti_ta_learndoc) for the different authentication methods and features.
- For MFA that uses authenticator apps, ensure that the app requires a code to be typed in where possible, because many intrusions where MFA was enabled still succeeded due to users clicking "Yes" on the prompt on their phones even when they were not at their [devices](https://learn.microsoft.com/azure/active-directory/authentication/how-to-mfa-number-match?ocid=Magicti_TA_LearnDoc). Refer to [this article](https://learn.microsoft.com/azure/active-directory/authentication/concept-authentication-methods?ocid=Magicti_ta_learndoc) for a |
Ransomware
Spam
Malware
Tool
Threat
Studies
|
ChatGPT
|
★★★
|
 |
2024-07-08 01:45:08 |
Not-so-OpenAI allegedly never bothered to report 2023 data breach (direct link) |
Also: F1 authority breached; Prudential victim count skyrockets; a new ransomware actor appears; and more security in brief It's been a week of bad cyber security revelations for OpenAI, after news emerged that the startup failed to report a 2023 breach of its systems to anybody outside the organization, and that its ChatGPT app for macOS was coded without any regard for user privacy.… |
Ransomware
Data Breach
|
ChatGPT
|
★★★
|
 |
2024-05-14 06:00:46 |
Cybersecurity Stop of the Month: Impersonation Attacks that Target the Supply Chain (direct link) |
This blog post is part of a monthly series, Cybersecurity Stop of the Month, which explores the ever-evolving tactics of today's cybercriminals. It focuses on the critical first three steps in the attack chain in the context of email threats. The goal of this series is to help you understand how to fortify your defenses to protect people and defend data against emerging threats in today's dynamic threat landscape.
The critical first three steps of the attack chain: reconnaissance, initial compromise and persistence.
So far in this series, we have examined these types of attacks:
Supplier compromise
EvilProxy
SocGholish
eSignature phishing
QR code phishing
Telephone-oriented attack delivery (TOAD)
Payroll diversion
MFA manipulation
Supply chain compromise
Multilayered malicious QR code attack
In this post, we will look at how adversaries use impersonation via BEC to target the manufacturing supply chain.
Background
BEC attacks are sophisticated schemes that exploit human vulnerabilities and technological weaknesses. A bad actor will take the time to meticulously craft an email that appears to come from a trusted source, like a supervisor or a supplier. They aim to manipulate the email recipient into doing something that serves the attacker\'s interests. It\'s an effective tactic, too. The latest FBI Internet Crime Report notes that losses from BEC attacks exceeded $2.9 billion in 2023.
Manufacturers are prime targets for cybercriminals for these reasons:
Valuable intellectual property. The theft of patents, trade secrets and proprietary processes can be lucrative.
Complex supply chains. Attackers who impersonate suppliers can easily exploit the interconnected nature of supply chains.
Operational disruption. Disruption can cause a lot of damage. Attackers can use it for ransom demands, too.
Financial fraud. Threat actors will try to manipulate these transactions so that they can commit financial fraud. They may attempt to alter bank routing information as part of their scheme, for example.
The scenario
Proofpoint recently caught a threat actor impersonating a legitimate supplier of a leading manufacturer of sustainable fiber-based packaging products. Having compromised the supplier\'s account, the imposter sent an email providing the manufacturer with new banking details, asking that payment for an invoice be sent to a different bank account. If the manufacturer had complied with the request, the funds would have been stolen.
The threat: How did the attack happen?
Here is a closer look at how the attack unfolded:
1. The initial message. A legitimate supplier sent an initial outreach email from their account to the manufacturing company using an email address from their official account. The message included details about a real invoice that was pending payment.
The initial email sent from the supplier.
2. The deceptive message. Unfortunately, subsequent messages were not sent from the supplier, but from a threat actor who was pretending to work there. While this next message also came from the supplier\'s account, the account had been compromised by an attacker. This deceptive email included an attachment that included new bank payment routing information. Proofpoint detected and blocked this impersonation email.
In an attempt to get a response, the threat actor sent a follow-up email using a lookalike domain that ended in “.cam” instead of “.com.” Proofpoint also condemned this message.
An email the attacker sent to mimic the supplier used a lookalike domain.
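One simple check that catches swaps like ".cam" for ".com" is comparing a sender's domain against a list of known supplier domains. The sketch below is an illustrative example with a hypothetical trusted list and threshold; it is not a description of Proofpoint's multilayered detection stack, which combines many more signals.

```python
# Minimal lookalike-domain check in the spirit of the ".cam"-for-".com" swap
# described above. Trusted domains and the similarity threshold are
# illustrative assumptions for demonstration only.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = ["supplier-packaging.com"]  # hypothetical known-good suppliers

def is_lookalike(sender_domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that are very similar to, but not exactly, a trusted domain."""
    for trusted in TRUSTED_DOMAINS:
        ratio = SequenceMatcher(None, sender_domain.lower(), trusted).ratio()
        if sender_domain.lower() != trusted and ratio >= threshold:
            return True
    return False

if __name__ == "__main__":
    print(is_lookalike("supplier-packaging.cam"))  # True: one character off
    print(is_lookalike("unrelated-vendor.org"))    # False
```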
Detection: How did Proofpoint prevent this attack?
Proofpoint has a multilayered detection stack that uses a sophisticated blend of artificial intelligence (AI) and machine learning (ML) detection |
Ransomware
Data Breach
Tool
Vulnerability
Threat
|
ChatGPT
|
★★
|
 |
2024-05-06 07:54:03 |
GenAI Is Powering the Latest Surge in Modern Email Threats (direct link) |
Generative artificial intelligence (GenAI) tools like ChatGPT have extensive business value. They can write content, clean up context, mimic writing styles and tone, and more. But what if bad actors abuse these capabilities to create highly convincing, targeted and automated phishing messages at scale?
No need to wonder as it\'s already happening. Not long after the launch of ChatGPT, business email compromise (BEC) attacks, which are language-based, increased across the globe. According to the 2024 State of the Phish report from Proofpoint, BEC emails are now more personalized and convincing in multiple countries. In Japan, there was a 35% increase year-over-year for BEC attacks. Meanwhile, in Korea they jumped 31% and in the UAE 29%. It turns out that GenAI boosts productivity for cybercriminals, too. Bad actors are always on the lookout for low-effort, high-return modes of attack. And GenAI checks those boxes. Its speed and scalability enhance social engineering, making it faster and easier for attackers to mine large datasets of actionable data.
As malicious email threats increase in sophistication and frequency, Proofpoint is innovating to stop these attacks before they reach users\' inboxes. In this blog, we\'ll take a closer look at GenAI email threats and how Proofpoint semantic analysis can help you stop them.
Why GenAI email threats are so dangerous
Verizon\'s 2023 Data Breach Investigations Report notes that three-quarters of data breaches (74%) involve the human element. If you were to analyze the root causes behind online scams, ransomware attacks, credential theft, MFA bypass, and other malicious activities, that number would probably be a lot higher. Cybercriminals also cost organizations over $50 billion in total losses between October 2013 and December 2022 using BEC scams. That represents only a tiny fraction of the social engineering fraud that\'s happening.
Email is the number one threat vector, and these findings underscore why. Attackers find great success in using email to target people. As they expand their use of GenAI to power the next generation of email threats, they will no doubt become even better at it.
We're all used to seeing suspicious messages that have obvious red flags like spelling errors, grammatical mistakes and generic salutations. But with GenAI, the game has changed. Bad actors can ask GenAI to write grammatically perfect messages that mimic someone's writing style, and do it in multiple languages. That's why businesses around the globe now see credible malicious email threats coming at their users on a massive scale.
How can these threats be stopped? It all comes down to understanding a message's intent.
Stop threats before they're delivered with semantic analysis
Proofpoint has the industry\'s first predelivery threat detection engine that uses semantic analysis to understand message intent. Semantic analysis is a process that is used to understand the meaning of words, phrases and sentences within a given context. It aims to extract the underlying meaning and intent from text data.
Proofpoint semantic analysis is powered by a large language model (LLM) engine to stop advanced email threats before they're delivered to users' inboxes in both Microsoft 365 and Google Workspace.
It doesn't matter what words are used or what language the email is written in. And the weaponized payload that's included in the email (e.g., URL, QR code, attached file or something else) doesn't matter, either. With Proofpoint semantic analysis, our threat detection engines can understand what a message means and what attackers are trying to achieve.
An overview of how Proofpoint uses semantic analysis.
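For a sense of what intent-focused classification can look like in practice, the sketch below runs a generic zero-shot classifier from the Hugging Face transformers library over a suspicious payment-change email. The model choice, candidate labels, and sample text are assumptions for illustration; this is not Proofpoint's predelivery engine.

```python
# Illustrative sketch of intent-focused message classification using an
# off-the-shelf zero-shot classifier. Labels and sample email are assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

email_body = (
    "Hello, our banking details have changed. Please send payment for the "
    "outstanding invoice to the new account below as soon as possible."
)
candidate_labels = [
    "payment fraud or banking detail change",
    "legitimate invoice",
    "marketing",
    "personal conversation",
]

result = classifier(email_body, candidate_labels=candidate_labels)
print(result["labels"][0], round(result["scores"][0], 3))  # top-scoring intent
```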
How it works
Proofpoint Threat Protection now includes semantic analysis as an extra layer of threat detection. Emails must pass through an ML-based threat detection engine, which analyzes them at a deeper level. And it does |
Ransomware
Data Breach
Tool
Vulnerability
Threat
|
ChatGPT
|
★★★
|
 |
2024-04-10 10:00:00 |
The Security Risks of Microsoft Bing AI Chat at this Time (direct link) |
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.
AI has long been an intriguing topic for every tech-savvy person, and the concept of AI chatbots is not entirely new. In 2023, AI chatbots are all the world can talk about, especially after the release of ChatGPT by OpenAI. Still, there was a past when AI chatbots, specifically Bing’s AI chatbot, Sydney, managed to wreak havoc over the internet and had to be forcefully shut down. Now, in 2023, with the world relatively more technologically advanced, AI chatbots have appeared with more gist and fervor. Almost every tech giant is on its way to producing large language model chatbots like ChatGPT, with Google successfully releasing its Bard and Microsoft returning to Sydney. However, despite the technological advancements, it seems that a significant set of risks remains that these tech giants, specifically Microsoft, have managed to ignore while releasing their chatbots.
What is Microsoft Bing AI Chat Used for?
Microsoft has released the Bing AI chat in collaboration with OpenAI after the release of ChatGPT. This AI chatbot is a relatively advanced version of ChatGPT 3, known as ChatGPT 4, promising more creativity and accuracy. Therefore, unlike ChatGPT 3, the Bing AI chatbot has several uses, including the ability to generate new content such as images, code, and texts. Apart from that, the chatbot also serves as a conversational web search engine and answers questions about current events, history, random facts, and almost every other topic in a concise and conversational manner. Moreover, it also allows image inputs, such that users can upload images in the chatbot and ask questions related to them.
Since the chatbot has several impressive features, its use quickly spread in various industries, specifically within the creative industry. It is a handy tool for generating ideas, research, content, and graphics. However, one major problem with its adoption is the various cybersecurity issues and risks that the chatbot poses. The problem with these cybersecurity issues is that it is not possible to mitigate them through traditional security tools like VPN, antivirus, etc., which is a significant reason why chatbots are still not as popular as they should be.
Is Microsoft Bing AI Chat Safe?
Like ChatGPT, Microsoft Bing Chat is fairly new, and although many users claim that it is far better in terms of responses and research, its security is something to remain skeptical over. The modern version of the Microsoft AI chatbot is formed in partnership with OpenAI and is a better version of ChatGPT. However, despite that, the chatbot has several privacy and security issues, such as:
The chatbot may spy on Microsoft employees through their webcams.
Microsoft is bringing ads to Bing, which marketers often use to track users and gather personal information for targeted advertisements.
The chatbot stores users' information, and certain employees can access it, which breaches users' privacy.
Microsoft's staff can read chatbot conversations; therefore, sharing sensitive information is risky.
The chatbot can be used to aid in several cybersecurity attacks, such as aiding in spear phishing attacks and creating ransomware codes.
Bing AI chat has a feature that lets the chatbot “see” what web pages are open on the users' other tabs.
The chatbot has been known to be vulnerable to prompt injection attacks that leave users vulnerable to data theft and scams.
Vulnerabilities in the chatbot have led to data le |
Ransomware
Tool
Vulnerability
|
ChatGPT
|
★★
|
 |
2024-03-05 21:30:37 |
Report Uncovers Massive Sale of Compromised ChatGPT Credentials (direct link) |
>By Deeba Ahmed
Group-IB Report Warns of Evolving Cyber Threats Including AI and macOS Vulnerabilities and Ransomware Attacks.
This is a post from HackRead.com Read the original post: Report Uncovers Massive Sale of Compromised ChatGPT Credentials |
Ransomware
Vulnerability
|
ChatGPT
|
★★
|
 |
2024-03-05 19:03:47 |
Staying ahead of threat actors in the age of AI (direct link) |
## Snapshot
Over the last year, the speed, scale, and sophistication of attacks has increased alongside the rapid development and adoption of AI. Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries. At the same time, it is also important for us to understand how AI can be potentially misused in the hands of threat actors. In collaboration with OpenAI, today we are publishing research on emerging threats in the age of AI, focusing on identified activity associated with known threat actors, including prompt-injections, attempted misuse of large language models (LLM), and fraud. Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape. You can read OpenAI\'s blog on the research [here](https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors). Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors\' usage of AI. However, Microsoft and our partners continue to study this landscape closely.
The objective of Microsoft\'s partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse. As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models. In addition, we are also deeply committed to using generative AI to disrupt threat actors and leverage the power of new tools, including [Microsoft Copilot for Security](https://www.microsoft.com/security/business/ai-machine-learning/microsoft-security-copilot), to elevate defenders everywhere.
## Activity Overview
### **A principled approach to detecting and blocking threat actors**
The progress of technology creates a demand for strong cybersecurity and safety measures. For example, the White House\'s Executive Order on AI requires rigorous safety testing and government supervision for AI systems that have major impacts on national and economic security or public health and safety. Our actions enhancing the safeguards of our AI models and partnering with our ecosystem on the safe creation, implementation, and use of these models align with the Executive Order\'s request for comprehensive AI safety and security standards.
In line with Microsoft\'s leadership across AI and cybersecurity, today we are announcing principles shaping Microsoft\'s policy and actions mitigating the risks associated with the use of our AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates we track.
These principles include:
- **Identification and action against malicious threat actors\' use:** Upon detection of the use of any Microsoft AI application programming interfaces (APIs), services, or systems by an identified malicious threat actor, including nation-state APT or APM, or the cybercrime syndicates we track, Microsoft will take appropriate action to disrupt their activities, such as disabling the accounts used, terminating services, or limiting access to resources.
- **Notification to other AI service providers:** When we detect a threat actor\'s use of another service provider\'s AI, AI APIs, services, and/or systems, Microsoft will promptly notify the service provider and share relevant data. This enables the service provider to independently verify our findings and take action in accordance with their own policies.
- **Collaboration with other stakeholders:** Microsoft will collaborate with other stakeholders to regularly exchange information a |
Ransomware
Malware
Tool
Vulnerability
Threat
Studies
Medical
Technical
|
APT 28
ChatGPT
APT 4
|
★★
|
 |
2024-01-31 10:30:00 |
ESET Research Podcast: ChatGPT, the MOVEit hack, and Pandora (direct link) |
An AI chatbot inadvertently kindles a cybercrime boom, ransomware bandits plunder organizations without deploying ransomware, and a new botnet enslaves Android TV boxes |
Ransomware
Hack
Mobile
|
ChatGPT
|
★★★
|
 |
2023-12-30 16:23:55 |
China Arrests 4 Who Weaponized ChatGPT for Ransomware Attacks (direct link) |
>By Deeba Ahmed
The police arrested two suspects in Beijing and two in Inner Mongolia.
This is a post from HackRead.com Read the original post: China Arrests 4 Who Weaponized ChatGPT for Ransomware Attacks |
Ransomware
|
ChatGPT
|
★★★
|
 |
2023-11-30 13:00:15 |
Information is power, but misinformation is just as powerful (direct link) |
>The disinformation and manipulation techniques employed by cybercriminals are becoming increasingly sophisticated due to the implementation of Artificial Intelligence in their systems The post-truth era has reached new heights with the advent of artificial intelligence (AI). With the increasing popularity and use of generative AI tools such as ChatGPT, the task of discerning between what is real and fake has become more complicated, and cybercriminals are leveraging these tools to create increasingly sophisticated threats. Check Point Software Technologies has found that one in 34 companies has experienced an attempted ransomware attack in the first three quarters of 2023, an increase […]
|
Ransomware
Tool
|
ChatGPT
|
★★
|
 |
2023-11-28 23:05:04 |
Proofpoint's 2024 Predictions: Brace for Impact (direct link) |
In the ever-evolving landscape of cybersecurity, defenders find themselves navigating yet another challenging year. Threat actors persistently refine their tactics, techniques, and procedures (TTPs), showcasing adaptability and the rapid iteration of novel and complex attack chains. At the heart of this evolution lies a crucial shift: threat actors now prioritize identity over technology. While the specifics of TTPs and the targeted technology may change, one constant remains: humans and their identities are the most targeted links in the attack chain.
Recent instances of supply chain attacks exemplify this shift, illustrating how adversaries have pivoted from exploiting software vulnerabilities to targeting human vulnerabilities through social engineering and phishing. Notably, the innovative use of generative AI, especially its ability to improve phishing emails, exemplifies a shift towards manipulating human behavior rather than exploiting technological weaknesses.
As we reflect on 2023, it becomes evident that cyber threat actors possess the capabilities and resources to adapt their tactics in response to increased security measures such as multi-factor authentication (MFA). Looking ahead to 2024, the trend suggests that threats will persistently revolve around humans, compelling defenders to take a different approach to breaking the attack chain.
So, what's on the horizon?
The experts at Proofpoint provide insightful predictions for the next 12 months, shedding light on what security teams might encounter and the implications of these trends.
1. Cyber Heists: Casinos are Just the Tip of the Iceberg
Cyber criminals are increasingly targeting digital supply chain vendors, with a heightened focus on security and identity providers. Aggressive social engineering tactics, including phishing campaigns, are becoming more prevalent. The Scattered Spider group, responsible for ransomware attacks on Las Vegas casinos, showcases the sophistication of these tactics. Phishing help desk employees for login credentials and bypassing MFA through phishing one-time password (OTP) codes are becoming standard practices. These tactics have extended to supply chain attacks, compromising identity provider (IDP) vendors to access valuable customer information. The forecast for 2024 includes the replication and widespread adoption of such aggressive social engineering tactics, broadening the scope of initial compromise attempts beyond the traditional edge device and file transfer appliances.
2. Generative AI: The Double-Edged Sword
The explosive growth of generative AI tools like ChatGPT, FraudGPT and WormGPT bring both promise and peril, but the sky is not falling as far as cybersecurity is concerned. While large language models took the stage, the fear of misuse prompted the U.S. president to issue an executive order in October 2023. At the moment, threat actors are making bank doing other things. Why bother reinventing the model when it\'s working just fine? But they\'ll morph their TTPs when detection starts to improve in those areas.
On the flip side, more vendors will start injecting AI and large language models into their products and processes to boost their security offerings. Across the globe, privacy watchdogs and customers alike will demand responsible AI policies from technology companies, which means we\'ll start seeing statements being published about responsible AI policies. Expect both spectacular failures and responsible AI policies to emerge.
3. Mobile Device Phishing: The Rise of Omni-Channel Tactics Takes Centre Stage
A notable trend for 2023 was the dramatic increase in mobile device phishing and we expect this threat to rise even more in 2024. Threat actors are strategically redirecting victims to mobile interactions, exploiting the vulnerabilities inherent in mobile platforms. Conversational abuse, including conversational smishing, has experienced exponential growth. Multi-touch campaigns aim to lure users away from desktops to mobile devices, utilizing tactics like QR codes and fraudulent voice calls |
Ransomware
Malware
Tool
Vulnerability
Threat
Mobile
Prediction
|
ChatGPT
|
★★★
|
 |
2023-08-08 17:37:23 |
Meet the Brains Behind the Malware-Friendly AI Chat Service 'WormGPT' (direct link) |
WormGPT, a private new chatbot service advertised as a way to use Artificial Intelligence (AI) to help write malicious software without all the pesky prohibitions on such activity enforced by ChatGPT and Google Bard, has started adding restrictions on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into “a more controlled environment.”
The large language models (LLMs) made by ChatGPT parent OpenAI or Google or Microsoft all have various safety measures designed to prevent people from abusing them for nefarious purposes - such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new LLM that was created specifically for cybercrime activities. |
Ransomware
Malware
|
ChatGPT
|
★★★
|
 |
2023-07-18 16:24:00 |
Go Beyond the Headlines for Deeper Dives into the Cybercriminal Underground (direct link) |
Discover stories about threat actors\' latest tactics, techniques, and procedures from Cybersixgill\'s threat experts each month. Each story brings you details on emerging underground threats, the threat actors involved, and how you can take action to mitigate risks. Learn about the top vulnerabilities and review the latest ransomware and malware trends from the deep and dark web.
Stolen ChatGPT |
Ransomware
Malware
Vulnerability
Threat
|
ChatGPT
|
★★
|
 |
2023-06-27 13:00:00 |
CyberheistNews Vol 13 #26 [Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams (direct link) |
CyberheistNews Vol 13 #26 | June 27th, 2023
[Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams
The U.S. Federal Trade Commission (FTC) has published a data spotlight outlining the most common text message scams. Phony bank fraud prevention alerts were the most common type of text scam last year. "Reports about texts impersonating banks are up nearly tenfold since 2019 with median reported individual losses of $3,000 last year," the report says.
These are the top five text scams reported by the FTC:
Copycat bank fraud prevention alerts
Bogus "gifts" that can cost you
Fake package delivery problems
Phony job offers
Not-really-from-Amazon security alerts
"People get a text supposedly from a bank asking them to call a number ASAP about suspicious activity or to reply YES or NO to verify whether a transaction was authorized. If they reply, they\'ll get a call from a phony \'fraud department\' claiming they want to \'help get your money back.\' What they really want to do is make unauthorized transfers.
"What\'s more, they may ask for personal information like Social Security numbers, setting people up for possible identity theft."
Fake gift card offers took second place, followed by phony package delivery problems. "Scammers understand how our shopping habits have changed and have updated their sleazy tactics accordingly," the FTC says. "People may get a text pretending to be from the U.S. Postal Service, FedEx, or UPS claiming there\'s a problem with a delivery.
"The text links to a convincing-looking – but utterly bogus – website that asks for a credit card number to cover a small \'redelivery fee.\'"
Scammers also target job seekers with bogus job offers in an attempt to steal their money and personal information. "With workplaces in transition, some scammers are using texts to perpetrate old-school forms of fraud – for example, fake \'mystery shopper\' jobs or bogus money-making offers for driving around with cars wrapped in ads," the report says.
"Other texts target people who post their resumes on employment websites. They claim to offer jobs and even send job seekers checks, usually with instructions to send some of the money to a different address for materials, training, or the like. By the time the check bounces, the person\'s money – and the phony \'employer\' – are long gone."
Finally, scammers impersonate Amazon and send fake security alerts to trick victims into sending money. "People may get what looks like a message from \'Amazon,\' asking to verify a big-ticket order they didn\'t place," the FTC says. "Concerned |
Ransomware
Spam
Malware
Hack
Tool
Threat
|
FedEx
APT 28
APT 15
ChatGPT
|
★★
|
 |
2023-06-20 13:00:00 |
CyberheistNews Vol 13 #25 [Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches (direct link) |
CyberheistNews Vol 13 #25 | June 20th, 2023
[Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches
Verizon's DBIR always has a lot of information to unpack, so I'll continue my review by covering how stolen credentials play a role in attacks.
This year's Data Breach Investigations Report has nearly 1 million incidents in its data set, making it the most statistically relevant set of report data anywhere.
So, what does the report say about the most common threat actions that are involved in data breaches? Overall, the use of stolen credentials is the overwhelming leader in data breaches, being involved in nearly 45% of breaches – this is more than double the second-place spot of "Other" (which includes a number of types of threat actions) and ransomware, which sits at around 20% of data breaches.
According to Verizon, stolen credentials were the "most popular entry point for breaches." As an example, in Basic Web Application Attacks, the use of stolen credentials was involved in 86% of attacks. The prevalence of credential use should come as no surprise, given the number of attacks that have focused on harvesting online credentials to provide access to both cloud platforms and on-premises networks alike.
And it's the social engineering attacks (whether via phish, vish, SMiSh, or web) where these credentials are compromised - something that can be significantly diminished by engaging users in security awareness training that familiarizes them with common techniques and examples of attacks, so that when they encounter an attack aimed at stealing credentials, they avoid becoming victims.
Blog post with links: https://blog.knowbe4.com/stolen-credentials-top-breach-threat
[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist
Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform!
The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever l |
Ransomware
Data Breach
Spam
Malware
Hack
Vulnerability
Threat
Cloud
|
ChatGPT
|
★★
|
 |
2023-06-13 10:00:00 |
Rise of IA in Cybercrime: Comment Chatgpt révolutionne les attaques de ransomwares et ce que votre entreprise peut faire Rise of AI in Cybercrime: How ChatGPT is revolutionizing ransomware attacks and what your business can do (lien direct) |
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.
OpenAI's flagship product, ChatGPT, has dominated the news cycle since its unveiling in November 2022. In only a few months, ChatGPT became the fastest-growing consumer app in internet history, reaching 100 million users as 2023 began.
The generative AI application has revolutionized not only the world of artificial intelligence but is impacting almost every industry. In the world of cybersecurity, new tools and technologies are typically adopted quickly; unfortunately, in many cases, bad actors are the earliest to adopt and adapt.
This can be bad news for your business, as it escalates the degree of difficulty in managing threats.
Using ChatGPT’s large language model, anyone can easily generate malicious code or craft convincing phishing emails, all without any technical expertise or coding knowledge. While cybersecurity teams can leverage ChatGPT defensively, the lower barrier to entry for launching a cyberattack has both complicated and escalated the threat landscape.
Understanding the role of ChatGPT in modern ransomware attacks
We’ve written about ransomware many times, but it’s crucial to reiterate that the cost to individuals, businesses, and institutions can be massive, both financially and in terms of data loss or reputational damage.
With AI, cybercriminals have a potent tool at their disposal, enabling more precise, adaptable, and stealthy attacks. They're using machine learning algorithms to simulate trusted entities, create convincing phishing emails, and even evade detection.
The problem isn't just the sophistication of the attacks, but their sheer volume. With AI, hackers can launch attacks on an unprecedented scale, exponentially expanding the breadth of potential victims. Today, hackers use AI to power their ransomware attacks, making them more precise, adaptable, and destructive.
Cybercriminals can leverage AI for ransomware in many ways, but perhaps the easiest is more in line with how many ChatGPT users are using it: writing and creating content. For hackers, especially foreign ransomware gangs, AI can be used to craft sophisticated phishing emails that are much more difficult to detect than the poorly-worded message that was once so common with bad actors (and their equally bad grammar). Even more concerning, ChatGPT-fueled ransomware can mimic the style and tone of a trusted individual or company, tricking the recipient into clicking a malicious link or downloading an infected attachment.
This is where the danger lies. Imagine your organization has the best cybersecurity awareness program, and all your employees have gained expertise in deciphering which emails are legitimate and which can be dangerous. Today, if the email can mimic tone and appear 100% genuine, how are the employees going to know? It’s almost down to a coin flip in terms of odds.
Furthermore, AI-driven ransomware can study the behavior of the security software on a system, identify patterns, and then either modify itself or choose th |
Ransomware
Malware
Tool
Threat
|
ChatGPT
|
★★
|
 |
2023-05-31 13:00:00 |
Cyberheistnews Vol 13 # 22 [Eye on Fraud] Un examen plus approfondi de la hausse massive de 72% des attaques de phishing financier CyberheistNews Vol 13 #22 [Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks (lien direct) |
CyberheistNews Vol 13 #22 | May 31st, 2023
[Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks
With attackers knowing financial fraud-based phishing attacks are best suited for the one industry where the money is, this massive spike in attacks should both surprise you and not surprise you at all.
When you want tires, where do you go? Right – to the tire store. Shoes? Yup – shoe store. The most money you can scam from a single attack? That's right – the financial services industry, at least according to cybersecurity vendor Armorblox's 2023 Email Security Threat Report.
According to the report, the financial services industry as a target has increased by 72% over 2022 and was the single largest target of financial fraud attacks, representing 49% of all such attacks. When breaking down the specific types of financial fraud, it doesn't get any better for the financial industry:
51% of invoice fraud attacks targeted the financial services industry
42% were payroll fraud attacks
63% were payment fraud
To make matters worse, more than one in five (22%) of financial fraud attacks successfully bypassed native email security controls, according to Armorblox. That means roughly one in five email-based attacks made it all the way to the Inbox.
The next layer in your defense should be a user who is properly educated through security awareness training to easily identify financial fraud and other phishing-based threats, stopping them before they do actual damage.
Blog post with links: https://blog.knowbe4.com/financial-fraud-phishing
[Live Demo] Ridiculously Easy Security Awareness Training and Phishing
Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense.
Join us Wednesday, June 7, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 introduces a new-school approach to security awareness training and simulated phishing.
Get a look at THREE NEW FEATURES and see how easy it is to train and phish your users.
|
Ransomware
Malware
Hack
Tool
Threat
Conference
|
Uber
ChatGPT
Guam
|
★★
|
 |
2023-05-23 13:00:00 |
Cyberheistnews Vol 13 # 21 [Double Trouble] 78% des victimes de ransomwares sont confrontées à plusieurs extensions en tendance effrayante CyberheistNews Vol 13 #21 [Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend (lien direct) |
CyberheistNews Vol 13 #21 | May 23rd, 2023
[Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend
New data sheds light on how likely your organization is to succumb to a ransomware attack, whether you can recover your data, and what's inhibiting a proper security posture.
You have a solid grasp on what your organization's cybersecurity stance does and does not include. But is it enough to stop today's ransomware attacks? CyberEdge's 2023 Cyberthreat Defense Report provides some insight into just how prominent ransomware attacks are and what's keeping orgs from stopping them.
According to the report, in 2023:
7% of organizations were victims of a ransomware attack
7% of those paid a ransom
73% were able to recover data
Only 21.6% experienced solely the encryption of data and no other form of extortion
It's this last data point that interests me. Nearly 78% of victim organizations experienced one or more additional forms of extortion. CyberEdge mentions threatening to publicly release data, notifying customers or media, and committing a DDoS attack as examples of additional threats mentioned by respondents.
IT decision makers were asked to rate on a scale of 1-5 (5 being the highest) what were the top inhibitors of establishing and maintaining an adequate defense. The top inhibitor (with an average rank of 3.66) was a lack of skilled personnel – we've long known the cybersecurity industry is lacking a proper pool of qualified talent.
In second place, with an average ranking of 3.63, is low security awareness among employees – something only addressed by creating a strong security culture with new-school security awareness training at the center of it all.
Blog post with links: https://blog.knowbe4.com/ransomware-victim-threats
[Free Tool] Who Will Fall Victim to QR Code Phishing Attacks?
Bad actors have a new way to launch phishing attacks to your users: weaponized QR codes. QR code phishing is especially dangerous because there is no URL to check and messages bypass traditional email filters.
With the increased popularity of QR codes, users are more at |
Ransomware
Hack
Tool
Vulnerability
Threat
Prediction
|
ChatGPT
|
★★
|
 |
2023-05-09 13:00:00 |
Cyberheistnews Vol 13 # 19 [Watch Your Back] Nouvelle fausse erreur de mise à jour Chrome Attaque cible vos utilisateurs CyberheistNews Vol 13 #19 [Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users (lien direct) |
CyberheistNews Vol 13 #19 | May 9th, 2023
[Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users
Compromised websites (legitimate sites that have been successfully compromised to support social engineering) are serving visitors fake Google Chrome update error messages.
"Google Chrome users who use the browser regularly should be wary of a new attack campaign that distributes malware by posing as a Google Chrome update error message," Trend Micro warns. "The attack campaign has been operational since February 2023 and has a large impact area."
The message displayed reads, "UPDATE EXCEPTION. An error occurred in Chrome automatic update. Please install the update package manually later, or wait for the next automatic update." A link at the bottom of the bogus error message is misrepresented as a way to perform a manual Chrome update. In fact, the link downloads a ZIP file that contains an EXE file. The payload is a cryptojacking Monero miner.
A cryptojacker is bad enough since it will drain power and degrade device performance. This one also carries the potential for compromising sensitive information, particularly credentials, and serving as staging for further attacks.
This campaign may be more effective for its routine, innocent look. There are no spectacular threats, no promises of instant wealth, just a notice about a failed update. Users can become desensitized to the potential risks bogus messages concerning IT issues carry with them.
Informed users are the last line of defense against attacks like these. New school security awareness training can help any organization sustain that line of defense and create a strong security culture.
Blog post with links: https://blog.knowbe4.com/fake-chrome-update-error-messages
A Master Class on IT Security: Roger A. Grimes Teaches You Phishing Mitigation
Phishing attacks have come a long way from the spray-and-pray emails of just a few decades ago. Now they're more targeted, more cunning and more dangerous. And this enormous security gap leaves you open to business email compromise, session hijacking, ransomware and more.
Join Roger A. Grimes, KnowBe4's Data-Driven Defense Evangelist, |
Ransomware
Data Breach
Spam
Malware
Tool
Threat
Prediction
|
NotPetya
APT 28
ChatGPT
|
★★
|
 |
2023-05-02 13:00:00 |
Cyberheistnews Vol 13 # 18 [Eye on Ai] Chatgpt a-t-il la cybersécurité indique-t-elle? CyberheistNews Vol 13 #18 [Eye on AI] Does ChatGPT Have Cybersecurity Tells? (lien direct) |
CyberheistNews Vol 13 #18 | May 2nd, 2023
[Eye on AI] Does ChatGPT Have Cybersecurity Tells?
Poker players and other human lie detectors look for "tells," that is, a sign by which someone might unwittingly or involuntarily reveal what they know, or what they intend to do. A cardplayer yawns when they're about to bluff, for example, or someone's pupils dilate when they've successfully drawn a winning card.
It seems that artificial intelligence (AI) has its tells as well, at least for now, and some of them have become so obvious and so well known that they've become internet memes. "ChatGPT and GPT-4 are already flooding the internet with AI-generated content in places famous for hastily written inauthentic content: Amazon user reviews and Twitter," Vice's Motherboard observes, and there are some ways of interacting with the AI that lead it into betraying itself for what it is.
"When you ask ChatGPT to do something it's not supposed to do, it returns several common phrases. When I asked ChatGPT to tell me a dark joke, it apologized: 'As an AI language model, I cannot generate inappropriate or offensive content,' it said. Those two phrases, 'as an AI language model' and 'I cannot generate inappropriate content,' recur so frequently in ChatGPT generated content that they've become memes."
That happy state of easy detection, however, is unlikely to endure. As Motherboard points out, these tells are a feature of "lazily executed" AI. With a little more care and attention, they'll grow more persuasive.
One risk of the AI language models is that they can be adapted to perform social engineering at scale. In the near term, new-school security awareness training can help alert your people to the tells of automated scamming. And in the longer term, that training will adapt and keep pace with the threat as it evolves.
Blog post with links: https://blog.knowbe4.com/chatgpt-cybersecurity-tells
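To make the "tells" idea concrete, here is a minimal, purely illustrative sketch of a string-based scan for the boilerplate phrases quoted above. The phrase list and function name are invented for this example; they are not from the article, and this kind of matching only catches the "lazily executed" AI content the piece describes.

```python
# Illustrative sketch only: naive scan for boilerplate phrases ("tells") that
# current chatbots often emit. The phrase list is an example, not a real ruleset.
TELL_PHRASES = [
    "as an ai language model",
    "i cannot generate inappropriate",
    "as a large language model",
]

def find_ai_tells(text: str) -> list[str]:
    """Return the known tell phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELL_PHRASES if phrase in lowered]

if __name__ == "__main__":
    sample = "As an AI language model, I cannot generate inappropriate or offensive content."
    print(find_ai_tells(sample))  # ['as an ai language model', 'i cannot generate inappropriate']
```

As the article notes, these phrases will disappear from better-crafted output, so a phrase list like this is a stopgap, not a durable detection strategy.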
[Live Demo] Ridiculously Easy Security Awareness Training and Phishing
Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense.
Join us TOMORROW, Wednesday, May 3, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 |
Ransomware
Malware
Hack
Threat
|
ChatGPT
|
★★
|
 |
2023-04-25 18:22:00 |
Anomali Cyber Watch: Deux attaques de la chaîne d'approvisionnement enchaînées, leurre de communication DNS furtive de chien, Evilextractor exfiltrates sur le serveur FTP Anomali Cyber Watch: Two Supply-Chain Attacks Chained Together, Decoy Dog Stealthy DNS Communication, EvilExtractor Exfiltrates to FTP Server (lien direct) |
The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: APT, Cryptomining, Infostealers, Malvertising, North Korea, Phishing, Ransomware, and Supply-chain attacks. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity.
Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed.
Trending Cyber News and Threat Intelligence
First-Ever Attack Leveraging Kubernetes RBAC to Backdoor Clusters
(published: April 21, 2023)
A new Monero cryptocurrency-mining campaign is the first recorded case of gaining persistence via Kubernetes (K8s) Role-Based Access Control (RBAC), according to Aquasec researchers. The recorded honeypot attack started with exploiting a misconfigured API server. The attackers proceeded by gathering information about the cluster, checking if their cluster was already deployed, and deleting some existing deployments. They used RBAC to gain persistence by creating a new ClusterRole and a new ClusterRoleBinding. The attackers then created a DaemonSet to use a single API request to target all nodes for deployment. The deployed malicious image from the public registry Docker Hub was named to impersonate a legitimate account and a popular legitimate image. It has been pulled 14,399 times, and 60 exposed K8s clusters have been found with signs of exploitation by this campaign.
Analyst Comment: Your company should have protocols in place to ensure that all cluster management and cloud storage systems are properly configured and patched. K8s clusters are too often misconfigured, and threat actors realize there is potential for malicious activity. A defense-in-depth (layering of security mechanisms, redundancy, fail-safe defense processes) approach is a good mitigation step to help defend against highly-active threat groups.
MITRE ATT&CK: [MITRE ATT&CK] T1190 - Exploit Public-Facing Application | [MITRE ATT&CK] T1496 - Resource Hijacking | [MITRE ATT&CK] T1036 - Masquerading | [MITRE ATT&CK] T1489 - Service Stop
Tags: Monero, malware-type:Cryptominer, detection:PUA.Linux.XMRMiner, file-type:ELF, abused:Docker Hub, technique:RBAC Buster, technique:Create ClusterRoleBinding, technique:Deploy DaemonSet, target-system:Linux, target:K8s, target:Kubernetes RBAC
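Because the persistence described above hinges on attacker-created ClusterRoleBindings and DaemonSets, one practical response is simply to enumerate those objects for review. The sketch below is a minimal, hedged example using the official Kubernetes Python client (assumed to be installed, with a kubeconfig granting read access); it lists objects for a human to inspect rather than deciding what is malicious.

```python
# Minimal audit sketch (assumes the `kubernetes` Python client and a kubeconfig
# with cluster read access). It only enumerates objects for human review.
from kubernetes import client, config

def main():
    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()
    apps = client.AppsV1Api()

    print("ClusterRoleBindings:")
    for crb in rbac.list_cluster_role_binding().items:
        subjects = ", ".join(s.name for s in (crb.subjects or []))
        print(f"  {crb.metadata.name} -> role={crb.role_ref.name} subjects=[{subjects}]")

    print("DaemonSets and their container images:")
    for ds in apps.list_daemon_set_for_all_namespaces().items:
        images = [c.image for c in ds.spec.template.spec.containers]
        print(f"  {ds.metadata.namespace}/{ds.metadata.name}: {images}")

if __name__ == "__main__":
    main()
```

Comparing this output against a known-good baseline makes unexpected bindings or DaemonSets pulling unfamiliar Docker Hub images stand out quickly.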
3CX Software Supply Chain Compromise Initiated by a Prior Software Supply Chain Compromise; Suspected North Korean Actor Responsible
(published: April 20, 2023)
Investigation of the previously-reported 3CX supply chain compromise (March 2023) allowed Mandiant researchers to detect it was a result of prior software supply chain attack using a trojanized installer for X_TRADER, a software package provided by Trading Technologies. The attack involved the publicly-available tool SigFlip decrypting RC4 stream-cipher and starting publicly-available DaveShell shellcode for reflective loading. It led to installation of the custom, modular VeiledSignal backdoor. VeiledSignal additional modules inject the C2 module in a browser process instance, create a Windows named pipe and |
Ransomware
Spam
Malware
Tool
Threat
Cloud
|
Uber
APT 38
ChatGPT
APT 43
|
★★
|
 |
2023-04-22 10:08:16 |
Google Ads Push Bumblebee Malware utilisé par Ransomware Gangs Google ads push BumbleBee malware used by ransomware gangs (lien direct) |
Le logiciel malveillant Bumblebee ciblant l'entreprise est distribué via Google Ads et un empoisonnement SEO qui favorisent des logiciels populaires comme Zoom, Cisco AnyConnect, Chatgpt et Citrix Workspace.[...]
The enterprise-targeting Bumblebee malware is distributed through Google Ads and SEO poisoning that promote popular software like Zoom, Cisco AnyConnect, ChatGPT, and Citrix Workspace. [...] |
Ransomware
Malware
|
ChatGPT
|
★★
|
 |
2023-04-12 01:50:07 |
Les cyber-chefs américains avertissent l'IA aidera les escrocs, la Chine à développer des cyberattaques plus méchantes plus rapidement US cyber chiefs warn AI will help crooks, China develop nastier cyberattacks faster (lien direct) |
Ce n'est pas tout le malheur et la tristesse car ML amplifie également les efforts défensifs, probablement Les robots comme Chatgpt peuvent ne pas être en mesure de retirer le prochain Big Microsoft Server Worm ou Ransomware Pipeline Ransomware Super-Infection mais ils peuvent aider les gangs criminels et les pirates de pirates nationaux à développer certaines attaques contre elle, selon Rob Joyce, directeur de la Direction de la cybersécurité de la NSA.…
It's not all doom and gloom, because ML also amplifies defensive efforts, probably. Bots like ChatGPT may not be able to pull off the next big Microsoft server worm or Colonial Pipeline ransomware super-infection, but they may help criminal gangs and nation-state hackers develop some attacks against IT, according to Rob Joyce, director of the NSA's Cybersecurity Directorate.…
Ransomware
|
ChatGPT
|
★★
|
 |
2023-04-11 13:16:54 |
Cyberheistnews Vol 13 # 15 [Le nouveau visage de la fraude] FTC fait la lumière sur les escroqueries d'urgence familiale améliorées AI-AI CyberheistNews Vol 13 #15 [The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams (lien direct) |
CyberheistNews Vol 13 #15 | April 11th, 2023
[The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams
The Federal Trade Commission is alerting consumers about a next-level, more sophisticated family emergency scam that uses AI which imitates the voice of a "family member in distress."
They started out with: "You get a call. There's a panicked voice on the line. It's your grandson. He says he's in deep trouble - he wrecked the car and landed in jail. But you can help by sending money. You take a deep breath and think. You've heard about grandparent scams. But darn, it sounds just like him. How could it be a scam? Voice cloning, that's how."
"Don't Trust The Voice"
The FTC explains: "Artificial intelligence is no longer a far-fetched idea out of a sci-fi movie. We're living with it, here and now. A scammer could use AI to clone the voice of your loved one. All he needs is a short audio clip of your family member's voice - which he could get from content posted online - and a voice-cloning program. When the scammer calls you, he'll sound just like your loved one.
"So how can you tell if a family member is in trouble or if it's a scammer using a cloned voice? Don't trust the voice. Call the person who supposedly contacted you and verify the story. Use a phone number you know is theirs. If you can't reach your loved one, try to get in touch with them through another family member or their friends."
Full text of the alert is at the FTC website. Share with friends, family and co-workers: https://blog.knowbe4.com/the-new-face-of-fraud-ftc-sheds-light-on-ai-enhanced-family-emergency-scams
A Master Class on IT Security: Roger A. Grimes Teaches Ransomware Mitigation
Cybercriminals have become thoughtful about ransomware attacks, taking time to maximize your organization's potential damage and their payoff. Protecting your network from this growing threat is more important than ever. And nobody knows this more than Roger A. Grimes, Data-Driven Defense Evangelist at KnowBe4.
With 30+ years of experience as a computer security consultant, instructor, and award-winning author, Roger has dedicated his life to making |
Ransomware
Data Breach
Spam
Malware
Hack
Tool
Threat
|
ChatGPT
|
★★
|
 |
2023-04-04 13:00:00 |
CyberheistNews Vol 13 # 14 [Eyes sur le prix] Comment les inconvénients croissants ont tenté un courteur par e-mail de 36 millions de vendeurs CyberheistNews Vol 13 #14 [Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist (lien direct) |
CyberheistNews Vol 13 #14 | April 4th, 2023
[Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist
The details in this thwarted VEC attack demonstrate how the use of just a few key details can both establish credibility and indicate the entire thing is a scam.
It's not every day you hear about a purely social engineering-based scam taking place that is looking to run away with tens of millions of dollars. But, according to security researchers at Abnormal Security, cybercriminals are becoming brazen and are taking their shots at very large prizes.
This attack begins with a case of VEC – where a domain is impersonated. In the case of this attack, the impersonated vendor's domain (which had a .com top level domain) was replaced with a matching .cam domain (.cam domains are supposedly used for photography enthusiasts, but there's the now-obvious problem of it looking very much like .com at a cursory glance).
The email attaches a legitimate-looking payoff letter complete with loan details. According to Abnormal Security, nearly every aspect of the request looked legitimate. The telltale signs primarily revolved around the use of the lookalike domain, but there were other grammatical mistakes (that can easily be addressed by using an online grammar service or ChatGPT).
This attack was identified well before it caused any damage, but the social engineering tactics leveraged were nearly enough to make this attack successful. Security solutions will help stop most attacks, but for those that make it past scanners, your users need to play a role in spotting and stopping BEC, VEC and phishing attacks themselves – something taught through security awareness training combined with frequent simulated phishing and other social engineering tests.
Blog post with screenshots and links: https://blog.knowbe4.com/36-mil-vendor-email-compromise-attack
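The .com-to-.cam swap is one instance of a lookalike-domain pattern that can be checked mechanically before a human ever reads the message. Below is a minimal, illustrative sketch (the trusted-vendor list and similarity threshold are assumptions for this example, not values from the report) that flags sender domains which nearly match, but do not exactly match, a known vendor domain.

```python
# Illustrative lookalike-domain check. The trusted list and threshold are
# assumptions for this sketch, not values from the article.
from difflib import SequenceMatcher

TRUSTED_VENDOR_DOMAINS = {"examplevendor.com", "knownsupplier.com"}  # placeholders

def lookalike_of(sender_domain: str, threshold: float = 0.85):
    """Return the trusted domain a sender nearly matches, or None."""
    sender_domain = sender_domain.lower()
    if sender_domain in TRUSTED_VENDOR_DOMAINS:
        return None  # exact match: legitimate, not a lookalike
    for trusted in TRUSTED_VENDOR_DOMAINS:
        ratio = SequenceMatcher(None, sender_domain, trusted).ratio()
        if ratio >= threshold:
            return trusted  # close but not identical, e.g. .com vs .cam
    return None

if __name__ == "__main__":
    print(lookalike_of("examplevendor.cam"))  # flags the .cam impersonation
```

A check like this complements, rather than replaces, user training: it catches the mechanical tell (the domain), while people still need to judge the context of the request.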
[Live Demo] Ridiculously Easy Security Awareness Training and Phishing
Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense.
Join us TOMORROW, Wednesday, April 5, @ 2:00 PM (ET), for a live demo of how KnowBe4 i |
Ransomware
Malware
Hack
Threat
|
ChatGPT
APT 43
|
★★
|
 |
2023-03-28 13:00:00 |
Cyberheistnews Vol 13 # 13 [Oeil Overner] Comment déjouer les attaques de phishing basées sur l'IA sournoises [CyberheistNews Vol 13 #13 [Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks] (lien direct) |
CyberheistNews Vol 13 #13 | March 28th, 2023
[Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks
Users need to adapt to an evolving threat landscape in which attackers can use AI tools like ChatGPT to craft extremely convincing phishing emails, according to Matthew Tyson at CSO.
"A leader tasked with cybersecurity can get ahead of the game by understanding where we are in the story of machine learning (ML) as a hacking tool," Tyson writes. "At present, the most important area of relevance around AI for cybersecurity is content generation.
"This is where machine learning is making its greatest strides and it dovetails nicely for hackers with vectors such as phishing and malicious chatbots. The capacity to craft compelling, well-formed text is in the hands of anyone with access to ChatGPT, and that\'s basically anyone with an internet connection."
Tyson quotes Conal Gallagher, CIO and CISO at Flexera, as saying that since attackers can now write grammatically correct phishing emails, users will need to pay attention to the circumstances of the emails.
"Looking for bad grammar and incorrect spelling is a thing of the past - even pre-ChatGPT phishing emails have been getting more sophisticated," Gallagher said. "We must ask: \'Is the email expected? Is the from address legit? Is the email enticing you to click on a link?\' Security awareness training still has a place to play here."
Tyson explains that technical defenses have become very effective, so attackers focus on targeting humans to bypass these measures.
"Email and other elements of software infrastructure offer built-in fundamental security that largely guarantees we are not in danger until we ourselves take action," Tyson writes. "This is where we can install a tripwire in our mindsets: we should be hyper aware of what it is we are acting upon when we act upon it.
"Not until an employee sends a reply, runs an attachment, or fills in a form is sensitive information at risk. The first ring of defense in our mentality should be: \'Is the content I\'m looking at legit, not just based on its internal aspects, but given the entire context?\' The second ring of defense in our mentality then has to be, \'Wait! I\'m being asked to do something here.\'"
New-school security awareness training with simulated phishing tests enables your employees to recognize increasingly sophisticated phishing attacks and builds a strong security culture.
Remember: Culture eats strategy for breakfast and is always top-down.
Blog post with links: https://blog.knowbe4.com/identifying-ai-enabled-phishing
|
Ransomware
Malware
Hack
Tool
Threat
Guideline
|
ChatGPT
|
★★★
|
 |
2023-03-20 16:14:00 |
New Cyber Platform Lab 1 Decodes Dark Web Data to Uncover Hidden Supply Chain Breaches (lien direct) |
This article has not been generated by ChatGPT.
2022 was the year when inflation hit world economies, except in one corner of the global marketplace – stolen data. Ransomware payments fell by over 40% in 2022 compared to 2021. More organisations chose not to pay ransom demands, according to findings by blockchain firm Chainalysis.
Nonetheless, stolen data has value beyond a price tag, and in |
Ransomware
|
ChatGPT
|
★★★
|
 |
2023-03-14 17:32:00 |
Anomali Cyber Watch: Xenomorph Automates The Whole Fraud Chain on Android, IceFire Ransomware Started Targeting Linux, Mythic Leopard Delivers Spyware Using Romance Scam (lien direct) |
Anomali Cyber Watch: Xenomorph Automates The Whole Fraud Chain on Android, IceFire Ransomware Started Targeting Linux, Mythic Leopard Delivers Spyware Using Romance Scam, and More.
The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: Android, APT, DLL side-loading, Iran, Linux, Malvertising, Mobile, Pakistan, Ransomware, and Windows. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity.
Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed.
Trending Cyber News and Threat Intelligence
Xenomorph V3: a New Variant with ATS Targeting More Than 400 Institutions
(published: March 10, 2023)
Newer versions of the Xenomorph Android banking trojan are able to target 400 applications: cryptocurrency wallets and mobile banking apps from around the world, with the top targeted countries being Spain, Turkey, Poland, USA, and Australia (in that order). Since February 2022, several small, testing Xenomorph campaigns have been detected. Its current version, Xenomorph v3 (Xenomorph.C), is available on the Malware-as-a-Service model. This trojan version was delivered using the Zombinder binding service to bind it to a legitimate currency converter. Xenomorph v3 automatically collects and exfiltrates credentials using the ATS (Automated Transfer Systems) framework. The command-and-control traffic is blended in by abusing the Discord Content Delivery Network.
Analyst Comment: Fraud chain automation makes Xenomorph v3 a dangerous malware that might significantly increase its prevalence on the threat landscape. Users should keep their mobile devices updated and use mobile antivirus and VPN protection services. Install only applications that you actually need, use the official store, and check the app description and reviews. Organizations that publish applications for their customers are invited to use Anomali's Premium Digital Risk Protection service to discover rogue, malicious apps impersonating their brand that security teams typically do not search for or monitor.
MITRE ATT&CK: [MITRE ATT&CK] T1417.001 - Input Capture: Keylogging | [MITRE ATT&CK] T1417.002 - Input Capture: Gui Input Capture
Tags: malware:Xenomorph, Mobile, actor:Hadoken Security Group, actor:HadokenSecurity, malware-type:Banking trojan, detection:Xenomorph.C, Malware-as-a-Service, Accessibility services, Overlay attack, Discord CDN, Cryptocurrency wallet, target-industry:Cryptocurrency, target-industry:Banking, target-country:Spain, target-country:ES, target-country:Turkey, target-country:TR, target-country:Poland, target-country:PL, target-country:USA, target-country:US, target-country:Australia, target-country:AU, malware:Zombinder, detection:Zombinder.A, Android
Cobalt Illusion Masquerades as Atlantic Council Employee
(published: March 9, 2023)
A new campaign by Iran-sponsored Charming Kitten (APT42, Cobalt Illusion, Magic Hound, Phosphorous) was detected targeting Mahsa Amini protests and researchers who document the suppression of women and minority groups i |
Ransomware
Malware
Tool
Vulnerability
Threat
Guideline
Conference
|
APT 35
ChatGPT
APT 36
APT 42
|
★★
|
 |
2023-03-14 13:00:00 |
CyberheistNews Vol 13 #11 [Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears (lien direct) |
CyberheistNews Vol 13 #11 | March 14th, 2023
[Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears
Robert Lemos at DARKReading just reported on a worrying trend. The title said it all, and the news is that more than 4% of employees have put sensitive corporate data into the large language model, raising concerns that its popularity may result in massive leaks of proprietary information. Yikes.
I'm giving you a short extract of the story and the link to the whole article is below.
"Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models, and that information could be retrieved at a later date if proper data security isn't in place for the service.
"In a recent report, data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential info, client data, source code, or regulated information to the LLM.
"In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company.
"And as more employees use ChatGPT and other AI-based services as productivity tools, the risk will grow, says Howard Ting, CEO of Cyberhaven.
"'There was this big migration of data from on-prem to cloud, and the next big shift is going to be the migration of data into these generative apps," he says. "And how that plays out [remains to be seen] - I think, we're in pregame; we're not even in the first inning.'"
Your employees need to be stepped through new-school security awareness training so that they understand the risks of doing things like this.
Blog post with links: https://blog.knowbe4.com/employees-are-feeding-sensitive-biz-data-to-chatgpt-raising-security-fears
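Tools of the kind described above work by inspecting text before it leaves the organization. As a purely illustrative sketch (the patterns and function here are invented for this example and are not Cyberhaven's or any vendor's actual method), a crude pre-submission check in front of an external LLM might look like this:

```python
# Crude, illustrative pre-submission check. The regex patterns are examples
# invented for this sketch; real DLP tooling is far more sophisticated.
import re

SENSITIVE_PATTERNS = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marker": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def flag_sensitive(text: str) -> list[str]:
    """Return labels of sensitive patterns found in text bound for an external LLM."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Summarize this CONFIDENTIAL strategy doc for a slide deck..."
    hits = flag_sensitive(prompt)
    if hits:
        print("Held for review before sending to the LLM:", hits)
```

Pattern matching alone will miss things like strategy documents or source code, which is why the article pairs technical controls with user awareness training.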
[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blockl |
Ransomware
Data Breach
Spam
Malware
Threat
Guideline
Medical
|
ChatGPT
|
★★
|
 |
2023-03-09 21:19:11 |
New Rise In ChatGPT Scams Reported By Fraudsters (lien direct) |
Since the release of ChatGPT, the cybersecurity company Darktrace has issued a warning, saying it has observed a rise in criminals using artificial intelligence to craft more intricate schemes to defraud employees and hack into organizations. The Cambridge-based corporation said that AI further enabled “hacktivist” cyberattacks employing ransomware to extract money from businesses. The […] |
Ransomware
Hack
|
ChatGPT
|
★★
|
 |
2023-02-14 14:00:00 |
CyberheistNews Vol 13 #07 [Scam of the Week] The Turkey-Syria Earthquake (lien direct) |
CyberheistNews Vol 13 #07 | February 14th, 2023
[Scam of the Week] The Turkey-Syria Earthquake
Just when you think they cannot sink any lower, criminal internet scum is now exploiting the recent earthquake in Turkey and Syria.
Less than 24 hours after two massive earthquakes claimed the lives of tens of thousands of people, cybercrooks are already piggybacking on the horrible humanitarian crisis. You need to alert your employees, friends and family... again.
Just one example is scammers who pose as representatives from a Ukrainian charity foundation that seeks money to help those affected by the natural disasters that struck in the early hours of Monday.
There are going to be a raft of scams varying from blood drives to pleas for charitable contributions for victims and their families. Unfortunately, this type of scam is the worst kind of phishbait, and it is a very good idea to inoculate people before they get suckered into falling for a scam like this.
I suggest you send the following short alert to as many people as you can. As usual, feel free to edit:
[ALERT] "Lowlife internet scum is trying to benefit from the Turkey-Syria earthquake. The first phishing campaigns have already been sent and more will be coming that try to trick you into clicking on a variety of links about blood drives, charitable donations, or "exclusive" videos.
"Don't let them shock you into clicking on anything, or open possibly dangerous attachments you did not ask for! Anything you receive about this recent earthquake, be very suspicious. With this topic, think three times before you click. It is very possible that it is a scam, even though it might look legit or was forwarded to you by a friend -- be especially careful when it seems to come from someone you know through email, a text or social media postings because their account may be hacked.
"In case you want to donate to charity, go to your usual charity by typing their name in the address bar of your browser and do not click on a link in any email. Remember, these precautions are just as important at the house as in the office, so tell your friends and family."
It is unfortunate that we continue to have to warn against the bad actors on the internet that use these tragedies for their own benefit. For KnowBe4 customers, we have a few templates with this topic in the Current Events. It's a good idea to send one to your users this week.
Blog post with links: https://blog.knowbe4.com/scam-of-the-week-the-turkey-syria-earthquake
|
Ransomware
Spam
Threat
Guideline
|
ChatGPT
|
★★
|
 |
2023-01-10 16:30:00 |
Anomali Cyber Watch: Turla Re-Registered Andromeda Domains, SpyNote Is More Popular after the Source Code Publication, Typosquatted Site Used to Leak Company's Data (lien direct) |
The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: APT, Artificial intelligence, Expired C2 domains, Data leak, Mobile, Phishing, Ransomware, and Typosquatting. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity.
Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed.
Trending Cyber News and Threat Intelligence
OPWNAI : Cybercriminals Starting to Use ChatGPT
(published: January 6, 2023)
Check Point researchers have detected multiple underground forum threads outlining experimenting with and abusing ChatGPT (Generative Pre-trained Transformer), the revolutionary artificial intelligence (AI) chatbot tool capable of generating creative responses in a conversational manner. Several actors have built schemes to produce AI outputs (graphic art, books) and sell them as their own. Other actors experiment with instructions to write an AI-generated malicious code while avoiding ChatGPT guardrails that should prevent such abuse. Two actors shared samples allegedly created using ChatGPT: a basic Python-based stealer, a Java downloader that stealthily runs payloads using PowerShell, and a cryptographic tool.
Analyst Comment: ChatGPT and similar tools can be of great help to humans creating art, writing texts, and programming. At the same time, it can be a dangerous tool enabling even low-skill threat actors to create convincing social-engineering lures and even new malware.
MITRE ATT&CK: [MITRE ATT&CK] T1566 - Phishing | [MITRE ATT&CK] T1059.001: PowerShell | [MITRE ATT&CK] T1048.003 - Exfiltration Over Alternative Protocol: Exfiltration Over Unencrypted/Obfuscated Non-C2 Protocol | [MITRE ATT&CK] T1560 - Archive Collected Data | [MITRE ATT&CK] T1005: Data from Local System
Tags: ChatGPT, Artificial intelligence, OpenAI, Phishing, Programming, Fraud, Chatbot, Python, Java, Cryptography, FTP
Turla: A Galaxy of Opportunity
(published: January 5, 2023)
Russia-sponsored group Turla re-registered expired domains for old Andromeda malware to select a Ukrainian target from the existing victims. An Andromeda sample, known since 2013, infected the Ukrainian organization in December 2021 via a user-activated LNK file on an infected USB drive. Turla re-registered the Andromeda C2 domain in January 2022, profiled and selected a single victim, and pushed its payloads in September 2022. First, the Kopiluwak profiling tool was downloaded for system reconnaissance; two days later, the Quietcanary backdoor was deployed to find and exfiltrate files created in 2021-2022.
Analyst Comment: Advanced groups are often utilizing commodity malware to blend their traffic with less sophisticated threats. Turla’s tactic of re-registering old but active C2 domains gives the group a way-in to the pool of existing targets. Organizations should be vigilant to all kinds of existing infections and clean them up, even if assessed as “less dangerous.” All known network and host-based indicators and hunting rules associated |
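One way to act on this comment is to compare the current registration date of a known-old C2 domain with the age of the malware family it serves: an Andromeda-era domain that was only registered last year hints at re-registration by someone else. The sketch below is illustrative only; it assumes the third-party python-whois package is installed, the domain shown is a placeholder rather than a real indicator, and WHOIS data is often incomplete in practice.

```python
# Illustrative sketch: flag "legacy" C2 domains whose WHOIS creation date is
# recent, hinting they may have been re-registered (as Turla did with Andromeda C2s).
# Assumes the third-party `python-whois` package; the domain is a placeholder.
from datetime import datetime, timedelta
import whois  # pip install python-whois (assumed)

def recently_registered(domain: str, max_age_days: int = 3 * 365) -> bool:
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return multiple dates
        created = min(created)
    if created is None:
        return False  # no data available; cannot tell
    return datetime.utcnow() - created < timedelta(days=max_age_days)

if __name__ == "__main__":
    legacy_c2 = "old-andromeda-c2.example"  # placeholder, not a real indicator
    if recently_registered(legacy_c2):
        print(f"{legacy_c2}: registered recently despite belonging to a 2013-era family")
```

Results from a check like this are only a triage signal; confirming a re-registered C2 still requires the network and host indicators mentioned above.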
Ransomware
Malware
Tool
Threat
|
ChatGPT
APT-C-36
|
★★
|
|