What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
knowbe4.webp 2023-05-09 13:00:00 CyberheistNews Vol 13 #19 [Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users
(direct link)
CyberheistNews Vol 13 #19 | May 9th, 2023 [Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users Compromised websites (legitimate sites that have been successfully compromised to support social engineering) are serving visitors fake Google Chrome update error messages. "Google Chrome users who use the browser regularly should be wary of a new attack campaign that distributes malware by posing as a Google Chrome update error message," Trend Micro warns. "The attack campaign has been operational since February 2023 and has a large impact area." The message displayed reads, "UPDATE EXCEPTION. An error occurred in Chrome automatic update. Please install the update package manually later, or wait for the next automatic update." A link is provided at the bottom of the bogus error message that takes the user to what's misrepresented as a link that will support a Chrome manual update. In fact the link will download a ZIP file that contains an EXE file. The payload is a cryptojacking Monero miner. A cryptojacker is bad enough since it will drain power and degrade device performance. This one also carries the potential for compromising sensitive information, particularly credentials, and serving as staging for further attacks. This campaign may be more effective for its routine, innocent look. There are no spectacular threats, no promises of instant wealth, just a notice about a failed update. Users can become desensitized to the potential risks bogus messages concerning IT issues carry with them. Informed users are the last line of defense against attacks like these. New school security awareness training can help any organization sustain that line of defense and create a strong security culture. Blog post with links: https://blog.knowbe4.com/fake-chrome-update-error-messages A Master Class on IT Security: Roger A. Grimes Teaches You Phishing Mitigation Phishing attacks have come a long way from the spray-and-pray emails of just a few decades ago. Now they're more targeted, more cunning and more dangerous. And this enormous security gap leaves you open to business email compromise, session hijacking, ransomware and more. Join Roger A. Grimes, KnowBe4's Data-Driven Defense Evangelist, Ransomware Data Breach Spam Malware Tool Threat Prediction NotPetya APT 28 ChatGPT ★★
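The delivery chain described above (fake update page, then a ZIP download containing an EXE) is simple enough to triage automatically. Below is a minimal defensive sketch, not taken from the article: it flags archives whose members carry executable extensions before anyone double-clicks them. The file name and the extension list are illustrative assumptions.

```python
# Minimal triage sketch (illustrative, not from the article): flag ZIP
# archives that contain Windows-executable members, the delivery pattern
# used by this fake Chrome update campaign. Extension list is an assumption.
import zipfile

SUSPICIOUS_EXTENSIONS = (".exe", ".scr", ".dll", ".bat", ".cmd", ".js")

def suspicious_members(zip_path: str) -> list[str]:
    """Return archive member names that end in an executable extension."""
    with zipfile.ZipFile(zip_path) as archive:
        return [name for name in archive.namelist()
                if name.lower().endswith(SUSPICIOUS_EXTENSIONS)]

if __name__ == "__main__":
    hits = suspicious_members("chrome_update.zip")  # hypothetical sample name
    if hits:
        print("Executable payload inside archive, do not run:", hits)
```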
knowbe4.webp 2023-05-02 13:00:00 CyberheistNews Vol 13 #18 [Eye on AI] Does ChatGPT Have Cybersecurity Tells?
(direct link)
CyberheistNews Vol 13 #18 | May 2nd, 2023 [Eye on AI] Does ChatGPT Have Cybersecurity Tells? Poker players and other human lie detectors look for "tells," that is, a sign by which someone might unwittingly or involuntarily reveal what they know, or what they intend to do. A cardplayer yawns when they're about to bluff, for example, or someone's pupils dilate when they've successfully drawn a winning card. It seems that artificial intelligence (AI) has its tells as well, at least for now, and some of them have become so obvious and so well known that they've become internet memes. "ChatGPT and GPT-4 are already flooding the internet with AI-generated content in places famous for hastily written inauthentic content: Amazon user reviews and Twitter," Vice's Motherboard observes, and there are some ways of interacting with the AI that lead it into betraying itself for what it is. "When you ask ChatGPT to do something it's not supposed to do, it returns several common phrases. When I asked ChatGPT to tell me a dark joke, it apologized: 'As an AI language model, I cannot generate inappropriate or offensive content,' it said. Those two phrases, 'as an AI language model' and 'I cannot generate inappropriate content,' recur so frequently in ChatGPT generated content that they've become memes." That happy state of easy detection, however, is unlikely to endure. As Motherboard points out, these tells are a feature of "lazily executed" AI. With a little more care and attention, they'll grow more persuasive. One risk of the AI language models is that they can be adapted to perform social engineering at scale. In the near term, new-school security awareness training can help alert your people to the tells of automated scamming. And in the longer term, that training will adapt and keep pace with the threat as it evolves. Blog post with links: https://blog.knowbe4.com/chatgpt-cybersecurity-tells [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, May 3, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 Ransomware Malware Hack Threat ChatGPT ★★
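The two stock phrases quoted above are concrete enough to grep for. As a toy illustration only (and with the article's own caveat that these tells disappear once attackers edit the output), a scanner might look like the sketch below; the phrase list contains just the two tells Motherboard cites.

```python
# Toy detector for the ChatGPT "tells" quoted above. A match is a weak
# hint of lazily executed AI content, not proof of machine authorship.
TELL_PHRASES = [
    "as an ai language model",
    "i cannot generate inappropriate content",
]

def ai_tells(text: str) -> list[str]:
    """Return the tell phrases that appear verbatim in the text."""
    lowered = text.lower()
    return [phrase for phrase in TELL_PHRASES if phrase in lowered]

sample = "As an AI language model, I cannot generate inappropriate or offensive content."
print(ai_tells(sample))  # only the first phrase matches this exact wording
```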
Anomali.webp 2023-04-25 18:22:00 Anomali Cyber Watch: Two Supply-Chain Attacks Chained Together, Decoy Dog Stealthy DNS Communication, EvilExtractor Exfiltrates to FTP Server
(direct link)
The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: APT, Cryptomining, Infostealers, Malvertising, North Korea, Phishing, Ransomware, and Supply-chain attacks. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity. Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed. Trending Cyber News and Threat Intelligence First-Ever Attack Leveraging Kubernetes RBAC to Backdoor Clusters (published: April 21, 2023) A new Monero cryptocurrency-mining campaign is the first recorded case of gaining persistence via Kubernetes (K8s) Role-Based Access Control (RBAC), according to Aquasec researchers. The recorded honeypot attack started with exploiting a misconfigured API server. The attackers proceeded by gathering information about the cluster, checking if their cluster was already deployed, and deleting some existing deployments. They used RBAC to gain persistence by creating a new ClusterRole and a new ClusterRole binding. The attackers then created a DaemonSet to use a single API request to target all nodes for deployment. The deployed malicious image from the public registry Docker Hub was named to impersonate a legitimate account and a popular legitimate image. It has been pulled 14,399 times and 60 exposed K8s clusters have been found with signs of exploitation by this campaign. Analyst Comment: Your company should have protocols in place to ensure that all cluster management and cloud storage systems are properly configured and patched. K8s buckets are too often misconfigured and threat actors realize there is potential for malicious activity. A defense-in-depth (layering of security mechanisms, redundancy, fail-safe defense processes) approach is a good mitigation step to help prevent actors from highly-active threat groups. MITRE ATT&CK: [MITRE ATT&CK] T1190 - Exploit Public-Facing Application | [MITRE ATT&CK] T1496 - Resource Hijacking | [MITRE ATT&CK] T1036 - Masquerading | [MITRE ATT&CK] T1489 - Service Stop Tags: Monero, malware-type:Cryptominer, detection:PUA.Linux.XMRMiner, file-type:ELF, abused:Docker Hub, technique:RBAC Buster, technique:Create ClusterRoleBinding, technique:Deploy DaemonSet, target-system:Linux, target:K8s, target:Kubernetes RBAC 3CX Software Supply Chain Compromise Initiated by a Prior Software Supply Chain Compromise; Suspected North Korean Actor Responsible (published: April 20, 2023) Investigation of the previously-reported 3CX supply chain compromise (March 2023) allowed Mandiant researchers to detect it was a result of a prior software supply chain attack using a trojanized installer for X_TRADER, a software package provided by Trading Technologies. The attack involved the publicly-available tool SigFlip decrypting an RC4 stream-cipher and starting publicly-available DaveShell shellcode for reflective loading. It led to installation of the custom, modular VeiledSignal backdoor. VeiledSignal additional modules inject the C2 module in a browser process instance, create a Windows named pipe and Ransomware Spam Malware Tool Threat Cloud Uber APT 38 ChatGPT APT 43 ★★
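The RBAC persistence primitive described above (a rogue ClusterRole plus ClusterRoleBinding granting cluster-admin) can be hunted for from the defender's side. The sketch below uses the official `kubernetes` Python client, assumed to be installed and pointed at your cluster; the allowlist of known administrator subjects is a hypothetical placeholder you would replace with your own.

```python
# Audit sketch for the RBAC persistence technique described above, using
# the official `kubernetes` Python client. KNOWN_SUBJECTS is illustrative.
from kubernetes import client, config

KNOWN_SUBJECTS = {"system:masters", "platform-admins"}  # replace with yours

def suspicious_cluster_admin_bindings():
    config.load_kube_config()  # or config.load_incluster_config() in-cluster
    rbac = client.RbacAuthorizationV1Api()
    findings = []
    for crb in rbac.list_cluster_role_binding().items:
        if crb.role_ref.name != "cluster-admin":
            continue  # only the highest-privilege role is checked here
        for subject in crb.subjects or []:
            if subject.name not in KNOWN_SUBJECTS:
                findings.append((crb.metadata.name, subject.kind, subject.name))
    return findings

for binding, kind, name in suspicious_cluster_admin_bindings():
    print(f"Review ClusterRoleBinding {binding}: cluster-admin for {kind} {name}")
```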
knowbe4.webp 2023-04-25 13:00:00 CyberheistNews Vol 13 #17 [Head Start] Effective Methods How To Teach Social Engineering to an AI
(direct link)
Spam Malware Hack Threat APT 28 ChatGPT ★★★
knowbe4.webp 2023-04-18 13:00:00 CyberheistNews Vol 13 #16 [Finger on the Pulse]: How Phishers Leverage Recent AI Buzz
(direct link)
CyberheistNews Vol 13 #16 | April 18th, 2023 [Finger on the Pulse]: How Phishers Leverage Recent AI Buzz Curiosity leads people to suspend their better judgment as a new campaign of credential theft exploits a person's excitement about the newest AI systems not yet available to the general public. On Tuesday morning, April 11th, Veriti explained that several unknown actors are making false Facebook ads which advertise a free download of AIs like ChatGPT and Google Bard. Veriti writes "These posts are designed to appear legitimate, using the buzz around OpenAI language models to trick unsuspecting users into downloading the files. However, once the user downloads and extracts the file, the Redline Stealer (aka RedStealer) malware is activated and is capable of stealing passwords and downloading further malware onto the user's device." Veriti describes the capabilities of the Redline Stealer malware which, once downloaded, can take sensitive information like credit card numbers, passwords, and personal information like user location, and hardware. Veriti added "The malware can upload and download files, execute commands, and send back data about the infected computer at regular intervals." Experts recommend using official Google or OpenAI websites to learn when their products will be available and only downloading files from reputable sources. With the rising use of Google and Facebook ads as attack vectors experts also suggest refraining from clicking on suspicious advertisements promising early access to any product on the Internet. Employees can be helped to develop sound security habits like these by stepping them through monthly social engineering simulations. Blog post with links: https://blog.knowbe4.com/ai-hype-used-for-phishbait [New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever leav Spam Malware Hack Threat APT 28 ChatGPT ★★★
InfoSecurityMag.webp 2023-04-18 07:01:00 Phishing Attacks Surge as Threat Actors Leverage New AI Tools
(direct link)
Large language models like ChatGPT and phishing kits have significantly contributed to the growth of phishing, Zscaler's 2023 ThreatLabz Phishing Report claims
Threat ChatGPT ★★
DarkReading.webp 2023-04-11 15:04:00 Attackers Hide RedLine Stealer Behind ChatGPT, Google Bard Facebook Ads
(direct link)
The campaign shrouds the commodity infostealer in OpenAI files in a play that aims to take advantage of the growing public interest in AI-based chatbots.
Threat ChatGPT ★★
knowbe4.webp 2023-04-11 13:16:54 CyberheistNews Vol 13 #15 [The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams
(direct link)
CyberheistNews Vol 13 #15 | April 11th, 2023 [The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams The Federal Trade Commission is alerting consumers about a next-level, more sophisticated family emergency scam that uses AI which imitates the voice of a "family member in distress." They started out with: "You get a call. There's a panicked voice on the line. It's your grandson. He says he's in deep trouble - he wrecked the car and landed in jail. But you can help by sending money. You take a deep breath and think. You've heard about grandparent scams. But darn, it sounds just like him. How could it be a scam? Voice cloning, that's how." "Don't Trust The Voice" The FTC explains: "Artificial intelligence is no longer a far-fetched idea out of a sci-fi movie. We're living with it, here and now. A scammer could use AI to clone the voice of your loved one. All he needs is a short audio clip of your family member's voice - which he could get from content posted online - and a voice-cloning program. When the scammer calls you, he'll sound just like your loved one. "So how can you tell if a family member is in trouble or if it's a scammer using a cloned voice? Don't trust the voice. Call the person who supposedly contacted you and verify the story. Use a phone number you know is theirs. If you can't reach your loved one, try to get in touch with them through another family member or their friends." Full text of the alert is at the FTC website. Share with friends, family and co-workers: https://blog.knowbe4.com/the-new-face-of-fraud-ftc-sheds-light-on-ai-enhanced-family-emergency-scams A Master Class on IT Security: Roger A. Grimes Teaches Ransomware Mitigation Cybercriminals have become thoughtful about ransomware attacks; taking time to maximize your organization's potential damage and their payoff. Protecting your network from this growing threat is more important than ever. And nobody knows this more than Roger A. Grimes, Data-Driven Defense Evangelist at KnowBe4. With 30+ years of experience as a computer security consultant, instructor, and award-winning author, Roger has dedicated his life to making Ransomware Data Breach Spam Malware Hack Tool Threat ChatGPT ★★
knowbe4.webp 2023-04-04 13:00:00 CyberheistNews Vol 13 #14 [Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist
(direct link)
CyberheistNews Vol 13 #14 | April 4th, 2023 [Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist The details in this thwarted VEC attack demonstrate how the use of just a few key details can both establish credibility and indicate the entire thing is a scam. It's not every day you hear about a purely social engineering-based scam taking place that is looking to run away with tens of millions of dollars. But, according to security researchers at Abnormal Security, cybercriminals are becoming brazen and are taking their shots at very large prizes. This attack begins with a case of VEC – where a domain is impersonated. In the case of this attack, the impersonated vendor's domain (which had a .com top level domain) was replaced with a matching .cam domain (.cam domains are supposedly used for photography enthusiasts, but there's the now-obvious problem with it looking very much like .com to the cursory glance). The email attaches a legitimate-looking payoff letter complete with loan details. According to Abnormal Security, nearly every aspect of the request looked legitimate. The telltale signs primarily revolved around the use of the lookalike domain, but there were other grammatical mistakes (that can easily be addressed by using an online grammar service or ChatGPT). This attack was identified well before it caused any damage, but the social engineering tactics leveraged were nearly enough to make this attack successful. Security solutions will help stop most attacks, but for those that make it past scanners, your users need to play a role in spotting and stopping BEC, VEC and phishing attacks themselves – something taught through security awareness training combined with frequent simulated phishing and other social engineering tests. Blog post with screenshots and links: https://blog.knowbe4.com/36-mil-vendor-email-compromise-attack [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, April 5, @ 2:00 PM (ET), for a live demo of how KnowBe4 i Ransomware Malware Hack Threat ChatGPT APT 43 ★★
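The .com-to-.cam swap is also one of the easier VEC signals to automate. Below is a minimal sketch of that single check, not a reconstruction of Abnormal Security's detection logic: it flags sender domains whose registered name matches a trusted vendor while the TLD differs. The vendor domain used is a made-up example.

```python
# Lookalike-TLD check, a minimal sketch of the .com -> .cam swap noted
# above (not Abnormal Security's actual detection). TRUSTED_VENDORS is
# a hypothetical example.
TRUSTED_VENDORS = {"acme-lending.com"}

def lookalike_of(domain: str) -> str | None:
    """Return the trusted domain this one imitates via a TLD swap, if any."""
    name, _, tld = domain.lower().rpartition(".")
    for trusted in TRUSTED_VENDORS:
        trusted_name, _, trusted_tld = trusted.rpartition(".")
        if name == trusted_name and tld != trusted_tld:
            return trusted
    return None

print(lookalike_of("acme-lending.cam"))  # -> 'acme-lending.com'
print(lookalike_of("acme-lending.com"))  # -> None (exact trusted match)
```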
knowbe4.webp 2023-04-03 12:16:05 Fake ChatGPT Scam Turns into a Fraudulent Money-Making Scheme
(direct link)
Threat ChatGPT ★★
knowbe4.webp 2023-03-28 13:00:00 CyberheistNews Vol 13 #13 [Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks
(direct link)
CyberheistNews Vol 13 #13 | March 28th, 2023 [Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks Users need to adapt to an evolving threat landscape in which attackers can use AI tools like ChatGPT to craft extremely convincing phishing emails, according to Matthew Tyson at CSO. "A leader tasked with cybersecurity can get ahead of the game by understanding where we are in the story of machine learning (ML) as a hacking tool," Tyson writes. "At present, the most important area of relevance around AI for cybersecurity is content generation. "This is where machine learning is making its greatest strides and it dovetails nicely for hackers with vectors such as phishing and malicious chatbots. The capacity to craft compelling, well-formed text is in the hands of anyone with access to ChatGPT, and that's basically anyone with an internet connection." Tyson quotes Conal Gallagher, CIO and CISO at Flexera, as saying that since attackers can now write grammatically correct phishing emails, users will need to pay attention to the circumstances of the emails. "Looking for bad grammar and incorrect spelling is a thing of the past - even pre-ChatGPT phishing emails have been getting more sophisticated," Gallagher said. "We must ask: 'Is the email expected? Is the from address legit? Is the email enticing you to click on a link?' Security awareness training still has a place to play here." Tyson explains that technical defenses have become very effective, so attackers focus on targeting humans to bypass these measures. "Email and other elements of software infrastructure offer built-in fundamental security that largely guarantees we are not in danger until we ourselves take action," Tyson writes. "This is where we can install a tripwire in our mindsets: we should be hyper aware of what it is we are acting upon when we act upon it. "Not until an employee sends a reply, runs an attachment, or fills in a form is sensitive information at risk. The first ring of defense in our mentality should be: 'Is the content I'm looking at legit, not just based on its internal aspects, but given the entire context?' The second ring of defense in our mentality then has to be, 'Wait! I'm being asked to do something here.'" New-school security awareness training with simulated phishing tests enables your employees to recognize increasingly sophisticated phishing attacks and builds a strong security culture. Remember: Culture eats strategy for breakfast and is always top-down. Blog post with links: https://blog.knowbe4.com/identifying-ai-enabled-phishing Ransomware Malware Hack Tool Threat Guideline ChatGPT ★★★
The_Hackers_News.webp 2023-03-23 21:59:00 Fake ChatGPT Chrome Browser Extension Caught Hijacking Facebook Accounts
(direct link)
Google has stepped in to remove a bogus Chrome browser extension from the official Web Store that masqueraded as OpenAI's ChatGPT service to harvest Facebook session cookies and hijack the accounts. The "ChatGPT For Google" extension, a trojanized version of a legitimate open source browser add-on, attracted over 9,000 installations since March 14, 2023, prior to its removal. It was originally
Threat ChatGPT ★★
InfoSecurityMag.webp 2023-03-16 10:30:00 NCSC Calms Fears Over ChatGPT Threat
(direct link)
Tool won't democratize cybercrime, agency argues Tool Threat ChatGPT ★★
Checkpoint.webp 2023-03-16 00:49:59 Check Point Research conducts Initial Security Analysis of ChatGPT4, Highlighting Potential Scenarios For Accelerated Cybercrime
(direct link)
Highlights: Check Point Research (CPR) releases an initial analysis of ChatGPT4, surfacing five scenarios that can allow threat actors to streamline malicious efforts and preparations faster and with more precision. In some instances, even non-technical actors can create harmful tools. The five scenarios provided span impersonations of banks, reverse shells, C++ malware and more. Despite… Malware Threat ChatGPT ★★
globalsecuritymag.webp 2023-03-15 16:46:20 Hoxhunt ChatGPT / Cybersecurity Research Reveals: Humans 1, AI 0
(direct link)
Hoxhunt ChatGPT / Cybersecurity Research Reveals: Humans 1, AI 0. Research Highlights Need for Accelerating Security Behavior Change Outcomes with Evolving Threat Landscape - Special Reports Threat ChatGPT ★★
Anomali.webp 2023-03-14 17:32:00 Anomali Cyber Watch: Xenomorph Automates The Whole Fraud Chain on Android, IceFire Ransomware Started Targeting Linux, Mythic Leopard Delivers Spyware Using Romance Scam
(direct link)
Anomali Cyber Watch: Xenomorph Automates The Whole Fraud Chain on Android, IceFire Ransomware Started Targeting Linux, Mythic Leopard Delivers Spyware Using Romance Scam, and More. The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: Android, APT, DLL side-loading, Iran, Linux, Malvertising, Mobile, Pakistan, Ransomware, and Windows. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity. Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed. Trending Cyber News and Threat Intelligence Xenomorph V3: a New Variant with ATS Targeting More Than 400 Institutions (published: March 10, 2023) Newer versions of the Xenomorph Android banking trojan are able to target 400 applications: cryptocurrency wallets and mobile banking from around the world, with the top targeted countries being Spain, Turkey, Poland, USA, and Australia (in that order). Since February 2022, several small, testing Xenomorph campaigns have been detected. Its current version Xenomorph v3 (Xenomorph.C) is available on the Malware-as-a-Service model. This trojan version was delivered using the Zombinder binding service to bind it to a legitimate currency converter. Xenomorph v3 automatically collects and exfiltrates credentials using the ATS (Automated Transfer Systems) framework. The command-and-control traffic is blended in by abusing Discord Content Delivery Network. Analyst Comment: Fraud chain automation makes Xenomorph v3 dangerous malware that might significantly increase its prevalence on the threat landscape. Users should keep their mobile devices updated and avail themselves of mobile antivirus and VPN protection services. Install only applications that you actually need, use the official store and check the app description and reviews. Organizations that publish applications for their customers are invited to use Anomali's Premium Digital Risk Protection service to discover rogue, malicious apps impersonating your brand that security teams typically do not search or monitor. MITRE ATT&CK: [MITRE ATT&CK] T1417.001 - Input Capture: Keylogging | [MITRE ATT&CK] T1417.002 - Input Capture: Gui Input Capture Tags: malware:Xenomorph, Mobile, actor:Hadoken Security Group, actor:HadokenSecurity, malware-type:Banking trojan, detection:Xenomorph.C, Malware-as-a-Service, Accessibility services, Overlay attack, Discord CDN, Cryptocurrency wallet, target-industry:Cryptocurrency, target-industry:Banking, target-country:Spain, target-country:ES, target-country:Turkey, target-country:TR, target-country:Poland, target-country:PL, target-country:USA, target-country:US, target-country:Australia, target-country:AU, malware:Zombinder, detection:Zombinder.A, Android Cobalt Illusion Masquerades as Atlantic Council Employee (published: March 9, 2023) A new campaign by Iran-sponsored Charming Kitten (APT42, Cobalt Illusion, Magic Hound, Phosphorous) was detected targeting Mahsa Amini protests and researchers who document the suppression of women and minority groups i Ransomware Malware Tool Vulnerability Threat Guideline Conference APT 35 ChatGPT APT 36 APT 42 ★★
globalsecuritymag.webp 2023-03-14 14:03:45 Ping Identity's response to GCHQ warning ChatGPT is a security threat
(direct link)
About ChatGPT and its security implications, a comment from Aubrey Turner, Executive Advisor at Ping Identity - Opinion Threat ChatGPT ★★
knowbe4.webp 2023-03-14 13:00:00 CyberheistNews Vol 13 #11 [Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears
(direct link)
CyberheistNews Vol 13 #11 | March 14th, 2023 [Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears Robert Lemos at DARKReading just reported on a worrying trend. The title said it all, and the news is that more than 4% of employees have put sensitive corporate data into the large language model, raising concerns that its popularity may result in massive leaks of proprietary information. Yikes. I'm giving you a short extract of the story and the link to the whole article is below. "Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models, and that information could be retrieved at a later date if proper data security isn't in place for the service. "In a recent report, data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential info, client data, source code, or regulated information to the LLM. "In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company. "And as more employees use ChatGPT and other AI-based services as productivity tools, the risk will grow, says Howard Ting, CEO of Cyberhaven. "'There was this big migration of data from on-prem to cloud, and the next big shift is going to be the migration of data into these generative apps," he says. "And how that plays out [remains to be seen] - I think, we're in pregame; we're not even in the first inning.'" Your employees need to be stepped through new-school security awareness training so that they understand the risks of doing things like this. Blog post with links: https://blog.knowbe4.com/employees-are-feeding-sensitive-biz-data-to-chatgpt-raising-security-fears [New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blockl Ransomware Data Breach Spam Malware Threat Guideline Medical ChatGPT ★★
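Controls like Cyberhaven's intercept prompts before they leave the organization. As a rough illustration of the idea only (real DLP does far more than regex matching), a pre-submission screen could look like the sketch below; the patterns and labels are illustrative assumptions.

```python
# Crude pre-submission screen for LLM prompts, a stand-in for the kind
# of control discussed above. Patterns are illustrative; production DLP
# must cover source code, client data, strategy documents, and more.
import re

BLOCK_PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marker": re.compile(r"(?i)\b(?:confidential|internal only)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return labels of sensitive patterns found in an outbound prompt."""
    return [label for label, rx in BLOCK_PATTERNS.items() if rx.search(prompt)]

prompt = "Turn our confidential 2023 strategy document into a PowerPoint: ..."
violations = screen_prompt(prompt)
if violations:
    print("Blocked before it reaches the LLM:", violations)
```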
The_Hackers_News.webp 2023-03-13 17:54:00 Fake ChatGPT Chrome Extension Hijacking Facebook Accounts for Malicious Advertising
(direct link)
A fake ChatGPT-branded Chrome browser extension has been found to come with capabilities to hijack Facebook accounts and create rogue admin accounts, highlighting one of the different methods cyber criminals are using to distribute malware. "By hijacking high-profile Facebook business accounts, the threat actor creates an elite army of Facebook bots and a malicious paid media apparatus," Guardio Threat ChatGPT ★★
ZoneAlarm.webp 2023-03-13 12:32:06 Fake ChatGPT browser extension is hijacking Facebook Business accounts
(direct link)
A fake ChatGPT extension named “Quick access to ChatGPT” has been found to hijack Facebook business accounts. The extension injects malicious code into the Facebook pages of targeted businesses, allowing attackers to gain unauthorized access to the accounts and take over their management functions. This has led to multiple businesses reporting similar incidents of unauthorized … Threat ChatGPT ★★★
knowbe4.webp 2023-02-28 14:00:00 CyberheistNews Vol 13 #09 [Eye Opener] Should You Click on Unsubscribe?
(direct link)
CyberheistNews Vol 13 #09 | February 28th, 2023 [Eye Opener] Should You Click on Unsubscribe? By Roger A. Grimes. Some common questions we get are "Should I click on an unwanted email's 'Unsubscribe' link? Will that lead to more or less unwanted email?" The short answer is that, in general, it is OK to click on a legitimate vendor's unsubscribe link. But if you think the email is sketchy or coming from a source you would not want to validate your email address as valid and active, or are unsure, do not take the chance, skip the unsubscribe action. In many countries, legitimate vendors are bound by law to offer (free) unsubscribe functionality and abide by a user's preferences. For example, in the U.S., the 2003 CAN-SPAM Act states that businesses must offer clear instructions on how the recipient can remove themselves from the involved mailing list and that request must be honored within 10 days. Note: Many countries have laws similar to the CAN-SPAM Act, although with privacy protections ranging from very little to a lot more. The unsubscribe feature does not have to be a URL link, but it does have to be an "internet-based way." The most popular alternative method besides a URL link is an email address to use. In some cases, there are specific instructions you have to follow, such as put "Unsubscribe" in the subject of the email. Other times you are expected to craft your own message. Luckily, most of the time simply sending any email to the listed unsubscribe email address is enough to remove your email address from the mailing list. [CONTINUED] at the KnowBe4 blog: https://blog.knowbe4.com/should-you-click-on-unsubscribe [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, March 1, @ 2:00 PM (ET), for a live demo of how KnowBe4 introduces a new-school approac Malware Hack Tool Vulnerability Threat Guideline Prediction APT 38 ChatGPT ★★★
bleepingcomputer.webp 2023-02-22 16:58:19 Hackers use fake ChatGPT apps to push Windows, Android malware
(direct link)
Threat actors are actively exploiting the popularity of OpenAI's ChatGPT AI tool to distribute Windows malware, infect Android devices with spyware, or direct unsuspecting victims to phishing pages. [...] Malware Tool Threat ChatGPT ★★★
knowbe4.webp 2023-02-21 14:00:00 CyberheistNews Vol 13 #08 [Heads Up] Reddit Is the Latest Victim of a Spear Phishing Attack Resulting in a Data Breach
(direct link)
CyberheistNews Vol 13 #08 | February 21st, 2023 [Heads Up] Reddit Is the Latest Victim of a Spear Phishing Attack Resulting in a Data Breach There is a lot to learn from Reddit's recent data breach, which was the result of an employee falling for a "sophisticated and highly-targeted" spear phishing attack. I spend a lot of time talking about phishing attacks and the specifics that closely surround that pivotal action taken by the user once they are duped into believing the phishing email was legitimate. However, there are additional details about the attack we can analyze to see what kind of access the attacker was able to garner from this attack. But first, here are the basics: According to Reddit, an attacker set up a website that impersonated the company's intranet gateway, then sent targeted phishing emails to Reddit employees. The site was designed to steal credentials and two-factor authentication tokens. There are only a few details from the breach, but the notification does mention that the threat actor was able to access "some internal docs, code, as well as some internal dashboards and business systems." Since the notice does imply that only a single employee fell victim, we have to make a few assumptions about this attack: The attacker had some knowledge of Reddit's internal workings – The fact that the attacker can spoof an intranet gateway shows they had some familiarity with the gateway's look and feel, and its use by Reddit employees. The targeting of victims was limited to users with specific desired access – Given the knowledge about the intranet, it's reasonable to believe that the attacker(s) targeted users with specific roles within Reddit. From the use of the term "code," I'm going to assume the target was developers or someone on the product side of Reddit. The attacker may have been an initial access broker – Despite the access gained that Reddit is making out to be not a big deal, they do also mention that no production systems were accessed. This makes me believe that this attack may have been focused on gaining a foothold within Reddit versus penetrating more sensitive systems and data. There are also a few takeaways from this attack that you can learn from: 2FA is an important security measure – Despite the fact that the threat actor collected and (I'm guessing) passed the credentials and 2FA details onto the legitimate Intranet gateway-a classic man-in-the Data Breach Hack Threat Guideline ChatGPT ★★
no_ico.webp 2023-02-16 10:41:18 Threat Actors Sheets: OpenAI Generated!
(direct link)
Introduction ChatGPT, or more generally speaking OpenAI, is an incredible tool. It is a spectacular instrument that helps people in many different fields: it helps people summarize text, produce poems, build images and music, answer difficult questions and automate complex processes. So I decided to dedicate an entire blog post to […] Threat ChatGPT ★★
SecureList.webp 2023-02-15 10:00:53 IoC detection experiments with ChatGPT
(direct link)
We decided to check what ChatGPT already knows about threat research and whether it can help with identifying simple adversary tools and classic indicators of compromise, such as well-known malicious hashes and domains. Threat ChatGPT ★★
knowbe4.webp 2023-02-14 14:00:00 CyberheistNews Vol 13 #07 [Scam of the Week] The Turkey-Syria Earthquake
(direct link)
CyberheistNews Vol 13 #07 | February 14th, 2023 [Scam of the Week] The Turkey-Syria Earthquake Just when you think they cannot sink any lower, criminal internet scum is now exploiting the recent earthquake in Turkey and Syria. Less than 24 hours after two massive earthquakes claimed the lives of tens of thousands of people, cybercrooks are already piggybacking on the horrible humanitarian crisis. You need to alert your employees, friends and family... again. Just one example: scammers pose as representatives from a Ukrainian charity foundation that seeks money to help those affected by the natural disasters that struck in the early hours of Monday. There are going to be a raft of scams varying from blood drives to pleas for charitable contributions for victims and their families. Unfortunately, this type of scam is the worst kind of phishbait, and it is a very good idea to inoculate people before they get suckered into falling for a scam like this. I suggest you send the following short alert to as many people as you can. As usual, feel free to edit: [ALERT] "Lowlife internet scum is trying to benefit from the Turkey-Syria earthquake. The first phishing campaigns have already been sent and more will be coming that try to trick you into clicking on a variety of links about blood drives, charitable donations, or "exclusive" videos. "Don't let them shock you into clicking on anything, or open possibly dangerous attachments you did not ask for! Anything you receive about this recent earthquake, be very suspicious. With this topic, think three times before you click. It is very possible that it is a scam, even though it might look legit or was forwarded to you by a friend -- be especially careful when it seems to come from someone you know through email, a text or social media postings because their account may be hacked. "In case you want to donate to charity, go to your usual charity by typing their name in the address bar of your browser and do not click on a link in any email. Remember, these precautions are just as important at the house as in the office, so tell your friends and family." It is unfortunate that we continue to have to warn against the bad actors on the internet that use these tragedies for their own benefit. For KnowBe4 customers, we have a few templates with this topic in the Current Events. It's a good idea to send one to your users this week. Blog post with links: https://blog.knowbe4.com/scam-of-the-week-the-turkey-syria-earthquake Ransomware Spam Threat Guideline ChatGPT ★★★
DarkReading.webp 2023-02-09 18:52:00 Phishing Surges Ahead, as ChatGPT & AI Loom
(direct link)
AI and phishing-as-a-service (PaaS) kits are making it easier for threat actors to create malicious email campaigns, which continue to target high-volume applications using popular brand names. Threat ChatGPT ★★★
01net.webp 2023-01-27 14:45:47 ChatGPT worries Amazon: the firm warns its employees
(direct link)
ChatGPT is giving Amazon cold sweats. The online retail giant has realized that many employees use the chatbot as an assistant... putting the firm's internal data at risk. Threat ChatGPT ★★★
Chercheur.webp 2023-01-18 12:19:28 AI and Political Lobbying
(direct link)
Launched just weeks ago, ChatGPT is already threatening to upend how we draft everyday communications like emails, college essays and myriad other forms of writing. Created by the company OpenAI, ChatGPT is a chatbot that can automatically respond to written prompts in a manner that is sometimes eerily close to human. But for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic processes—not through voting, but through lobbying... Threat ChatGPT ★★★
globalsecuritymag.webp 2023-01-13 12:56:41 ChatGPT: the new ally of novice cybercriminals
(direct link)
ChatGPT: the new ally of novice cybercriminals. By Gustavo Palazolo, threat research engineer at Netskope - Opinion Threat ChatGPT
Chercheur.webp 2023-01-13 12:13:13 Threats of Machine-Generated Text
(direct link)
With the release of ChatGPT, I’ve read many random articles about this or that threat from the technology. This paper is a good survey of the field: what the threats are, how we might detect machine-generated text, directions for future research. It’s a solid grounding amongst all of the hype. Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods Abstract: Advances in natural language generation (NLG) have resulted in machine generated text that is increasingly difficult to distinguish from human authored text. Powerful open-source models are freely available, and user-friendly tools democratizing access to generative models are proliferating. The great potential of state-of-the-art NLG systems is tempered by the multitude of avenues for abuse. Detection of machine generated text is a key countermeasure for reducing abuse of NLG models, with significant technical challenges and numerous open problems. We provide a survey that includes both 1) an extensive analysis of threat models posed by contemporary NLG systems, and 2) the most complete review of machine generated text detection methods to date. This survey places machine generated text within its cybersecurity and social context, and provides strong guidance for future work addressing the most critical threat models, and ensuring detection systems themselves demonstrate trustworthiness through fairness, robustness, and accountability... Threat ChatGPT ★★★
Anomali.webp 2023-01-10 16:30:00 Anomali Cyber Watch: Turla Re-Registered Andromeda Domains, SpyNote Is More Popular after the Source Code Publication, Typosquatted Site Used to Leak Company's Data
(direct link)
The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: APT, Artificial intelligence, Expired C2 domains, Data leak, Mobile, Phishing, Ransomware, and Typosquatting. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity. Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed. Trending Cyber News and Threat Intelligence OPWNAI: Cybercriminals Starting to Use ChatGPT (published: January 6, 2023) Check Point researchers have detected multiple underground forum threads outlining experimenting with and abusing ChatGPT (Generative Pre-trained Transformer), the revolutionary artificial intelligence (AI) chatbot tool capable of generating creative responses in a conversational manner. Several actors have built schemes to produce AI outputs (graphic art, books) and sell them as their own. Other actors experiment with instructions to write AI-generated malicious code while avoiding ChatGPT guardrails that should prevent such abuse. Two actors shared samples allegedly created using ChatGPT: a basic Python-based stealer, a Java downloader that stealthily runs payloads using PowerShell, and a cryptographic tool. Analyst Comment: ChatGPT and similar tools can be of great help to humans creating art, writing texts, and programming. At the same time, it can be a dangerous tool enabling even low-skill threat actors to create convincing social-engineering lures and even new malware. MITRE ATT&CK: [MITRE ATT&CK] T1566 - Phishing | [MITRE ATT&CK] T1059.001: PowerShell | [MITRE ATT&CK] T1048.003 - Exfiltration Over Alternative Protocol: Exfiltration Over Unencrypted/Obfuscated Non-C2 Protocol | [MITRE ATT&CK] T1560 - Archive Collected Data | [MITRE ATT&CK] T1005: Data from Local System Tags: ChatGPT, Artificial intelligence, OpenAI, Phishing, Programming, Fraud, Chatbot, Python, Java, Cryptography, FTP Turla: A Galaxy of Opportunity (published: January 5, 2023) Russia-sponsored group Turla re-registered expired domains for old Andromeda malware to select a Ukrainian target from the existing victims. An Andromeda sample, known from 2013, infected the Ukrainian organization in December 2021 via a user-activated LNK file on an infected USB drive. Turla re-registered the Andromeda C2 domain in January 2022, profiled and selected a single victim, and pushed its payloads in September 2022. First, the Kopiluwak profiling tool was downloaded for system reconnaissance; two days later, the Quietcanary backdoor was deployed to find and exfiltrate files created in 2021-2022. Analyst Comment: Advanced groups often utilize commodity malware to blend their traffic with less sophisticated threats. Turla’s tactic of re-registering old but active C2 domains gives the group a way in to the pool of existing targets. Organizations should be vigilant to all kinds of existing infections and clean them up, even if assessed as “less dangerous.” All known network and host-based indicators and hunting rules associated Ransomware Malware Tool Threat ChatGPT APT-C-36 ★★
01net.webp 2023-01-09 14:41:54 ChatGPT: bad news, cybercriminals have also started using it
(direct link)
It's a worrying trend: hackers have also seized on ChatGPT, and some are using it to create phishing emails and malware in a matter of seconds. The start of a new era? Threat ChatGPT ★★★
Checkpoint.webp 2023-01-06 11:59:27 OPWNAI: Cybercriminals Starting to Use ChatGPT
(direct link)
Introduction At the end of November 2022, OpenAI released ChatGPT, the new interface for its Large Language Model (LLM), which instantly created a flurry of interest in AI and its possible uses. However, ChatGPT has also added some spice to the modern cyber threat landscape as it quickly became apparent that code generation can help […] Threat ChatGPT ★★★
globalsecuritymag.webp 2022-12-19 14:08:01 ChatGPT Produces Malicious Emails and Code
(direct link)
Check Point Research (CPR) warns of hackers potentially using OpenAI's ChatGPT and Codex to execute targeted and efficient cyber-attacks. To demonstrate, CPR used ChatGPT and Codex to produce malicious emails, code and a full infection chain capable of targeting people's computers. CPR documents its correspondence in a new publication with examples of what was generated, underscoring the importance of vigilance as developing AI technologies, like ChatGPT, can change the cyber threat landscape significantly. - Business News Threat ChatGPT ★★★
Last update at: 2024-05-20 07:07:49