Src |
Date (GMT) |
Title |
Description |
Tags |
Stories |
Notes |
 |
2023-05-31 13:00:00 |
CyberheistNews Vol 13 #22 [Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks (direct link) |
CyberheistNews Vol 13 #22 | May 31st, 2023
[Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks
With attackers knowing financial fraud-based phishing attacks are best suited for the one industry where the money is, this massive spike in attacks should both surprise you and not surprise you at all.
When you want tires, where do you go? Right – to the tire store. Shoes? Yup – shoe store. The most money you can scam from a single attack? That's right – the financial services industry, at least according to cybersecurity vendor Armorblox's 2023 Email Security Threat Report.
According to the report, attacks targeting the financial services industry increased by 72% over 2022, making it the single largest target of financial fraud attacks and accounting for 49% of all such attacks. When breaking down the specific types of financial fraud, it doesn't get any better for the financial industry:
51% of invoice fraud attacks targeted the financial services industry
42% of payroll fraud attacks targeted the industry
63% of payment fraud attacks targeted the industry
To make matters worse, nearly one-quarter (22%) of financial fraud attacks successfully bypassed native email security controls, according to Armorblox. That means more than one in five email-based attacks made it all the way to the inbox.
The next layer in your defense should be users who are properly educated through security awareness training to easily identify financial fraud and other phishing-based threats, stopping them before they do actual damage.
Blog post with links: https://blog.knowbe4.com/financial-fraud-phishing
[Live Demo] Ridiculously Easy Security Awareness Training and Phishing
Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense.
Join us Wednesday, June 7, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 introduces a new-school approach to security awareness training and simulated phishing.
Get a look at THREE NEW FEATURES and see how easy it is to train and phish your users.
|
Ransomware
Malware
Hack
Tool
Threat
Conference
|
Uber
ChatGPT
Guam
|
★★
|
 |
2023-05-30 15:01:01 |
RomCom malware spread via Google Ads for ChatGPT, GIMP, more (direct link) |
A new campaign distributing the RomCom backdoor malware is impersonating the websites of well-known or fictional software, tricking users into downloading and launching malicious installers. [...] |
Malware
|
ChatGPT
|
★★
|
 |
2023-05-19 12:23:00 |
Searching for AI Tools? Watch Out for Rogue Sites Distributing RedLine Malware (direct link) |
Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver RedLine Stealer malware.
"Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord)," eSentire |
Malware
|
ChatGPT
|
★★
|
 |
2023-05-17 16:00:00 |
BatLoader Impersonates ChatGPT and Midjourney in Cyber-Attacks (direct link) |
eSentire recommended raising awareness of malware masquerading as legitimate applications |
Malware
|
ChatGPT
|
★★
|
 |
2023-05-09 13:00:00 |
CyberheistNews Vol 13 #19 [Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users (direct link) |
CyberheistNews Vol 13 #19 | May 9th, 2023
[Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users
Compromised websites (legitimate sites that have been successfully compromised to support social engineering) are serving visitors fake Google Chrome update error messages.
"Google Chrome users who use the browser regularly should be wary of a new attack campaign that distributes malware by posing as a Google Chrome update error message," Trend Micro warns. "The attack campaign has been operational since February 2023 and has a large impact area."
The message displayed reads, "UPDATE EXCEPTION. An error occurred in Chrome automatic update. Please install the update package manually later, or wait for the next automatic update." A link is provided at the bottom of the bogus error message that takes the user to what's misrepresented as a link that will support a Chrome manual update. In fact, the link will download a ZIP file that contains an EXE file. The payload is a cryptojacking Monero miner.
A cryptojacker is bad enough since it will drain power and degrade device performance. This one also carries the potential for compromising sensitive information, particularly credentials, and serving as staging for further attacks.
This campaign may be more effective for its routine, innocent look. There are no spectacular threats, no promises of instant wealth, just a notice about a failed update. Users can become desensitized to the potential risks bogus messages concerning IT issues carry with them.
Informed users are the last line of defense against attacks like these. New-school security awareness training can help any organization sustain that line of defense and create a strong security culture.
Blog post with links: https://blog.knowbe4.com/fake-chrome-update-error-messages
A Master Class on IT Security: Roger A. Grimes Teaches You Phishing Mitigation
Phishing attacks have come a long way from the spray-and-pray emails of just a few decades ago. Now they're more targeted, more cunning and more dangerous. And this enormous security gap leaves you open to business email compromise, session hijacking, ransomware and more.
Join Roger A. Grimes, KnowBe4's Data-Driven Defense Evangelist, |
Ransomware
Data Breach
Spam
Malware
Tool
Threat
Prediction
|
NotPetya
APT 28
ChatGPT
|
★★
|
 |
2023-05-08 10:00:00 |
Preventing sophisticated phishing attacks aimed at employees (direct link) |
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.
As technology advances, phishing attempts are becoming more sophisticated. It can be challenging for employees to recognize an email is malicious when it looks normal, so it’s up to their company to properly train workers in prevention and detection.
Phishing attacks are becoming more sophisticated
Misspellings and poorly formatted text used to be the leading indicators of an email scam, but the scams themselves are getting more sophisticated. Today, hackers can spoof email addresses and bots sound like humans. It’s becoming challenging for employees to tell if their emails are real or fake, which puts the company at risk of data breaches.
In March 2023, an artificial intelligence chatbot called GPT-4 received an update that lets users give specific instructions about styles and tasks. Attackers can use it to pose as employees and send convincing messages since it sounds intelligent and has general knowledge of any industry.
Since classic warning signs of phishing attacks aren’t applicable anymore, companies should train all employees on the new, sophisticated methods. As phishing attacks change, so should businesses.
Identify the signs
Your company can take preventive action to secure its employees against attacks. You need to make it difficult for hackers to reach them, and your company must train them on warning signs. While blocking spam senders and reinforcing security systems is up to you, employees must know how to identify and report attacks themselves.
You can prevent data breaches if employees know what to watch out for:
Misspellings: While it’s becoming more common for phishing emails to have the correct spelling, employees still need to look for mistakes. For example, they could look for industry-specific language because everyone in their field should know how to spell those words.
Irrelevant senders: Workers can identify phishing — even when the email is spoofed to appear as someone they know — by asking themselves if it is relevant. They should flag the email as a potential attack if the sender doesn’t usually reach out to them or is someone in an unrelated department.
Attachments: Hackers attempt to install malware through links or downloads. Ensure every employee knows they shouldn't click on them.
Odd requests: A sophisticated phishing attack has relevant messages and proper language, but it is somewhat vague because it goes to multiple employees at once. For example, they could recognize it if it’s asking them to do something unrelated to their role.
It may be harder for people to detect warning signs as attacks evolve, but you can prepare them for those situations as well as possible. It’s unlikely hackers have access to their specific duties or the inner workings of your company, so you must capitalize on those details.
Sophisticated attacks will sound intelligent and possibly align with their general duties, so everyone must constantly be aware. Training will help employees identify signs, but you need to take more preventive action to ensure you’re covered.
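The warning signs above lend themselves to a simple scoring heuristic. Below is a minimal, illustrative sketch (not from the article) of how those signs could be checked programmatically; the sender allowlist, risky extensions, phrase list and weights are assumptions for demonstration, not a vetted detection policy.

# Illustrative sketch only: score an inbound message against naive warning signs
# (unknown sender, risky attachment type, urgent or odd request wording).
from dataclasses import dataclass, field
from typing import List, Set

RISKY_EXTENSIONS = (".exe", ".js", ".vbs", ".iso", ".zip")          # assumed list
URGENT_PHRASES = ("urgent", "wire transfer", "gift card", "verify your account")

@dataclass
class Message:
    sender: str
    body: str
    attachments: List[str] = field(default_factory=list)

def phishing_score(msg: Message, known_senders: Set[str]) -> int:
    """Higher score = more warning signs present; thresholds are up to the reader."""
    score = 0
    if msg.sender.lower() not in known_senders:
        score += 1                                   # irrelevant or unknown sender
    if any(a.lower().endswith(RISKY_EXTENSIONS) for a in msg.attachments):
        score += 2                                   # risky attachment type
    if any(p in msg.body.lower() for p in URGENT_PHRASES):
        score += 1                                   # odd or urgent request wording
    return score

example = Message("unknown@lookalike.example", "Urgent: buy gift cards today", ["invoice.exe"])
print(phishing_score(example, {"colleague@company.example"}))  # -> 4

A real mail filter would of course weigh many more signals; the point here is only that the human warning signs map directly onto simple machine checks.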
Take preventive action
Basic security measures — like regularly updating passwords and running antivirus software — are fundamental to protecting your company. For example, everyone should change their passwords once every three months at minimum to ensur |
Spam
Malware
|
ChatGPT
|
★★
|
 |
2023-05-04 16:00:00 |
Meta Tackles Malware Posing as ChatGPT in Persistent Campaigns (direct link) |
Malware families detected and disrupted include Ducktail and the newly identified NodeStealer |
Malware
|
ChatGPT
|
★★
|
 |
2023-05-04 14:27:00 |
Meta Takes Down Malware Campaign That Used ChatGPT as a Lure to Steal Accounts (direct link) |
Meta said it took steps to take down more than 1,000 malicious URLs from being shared across its services that were found to leverage OpenAI's ChatGPT as a lure to propagate about 10 malware families since March 2023.
The development comes against the backdrop of fake ChatGPT web browser extensions being increasingly used to steal users' Facebook account credentials with an aim to run |
Malware
|
ChatGPT
|
★★
|
 |
2023-05-03 13:30:40 |
Hackers Promise AI, Install Malware Instead (direct link) |
Facebook parent Meta warned that hackers are using the promise of generative artificial intelligence like ChatGPT to trick people into installing malware on devices.
|
Malware
|
ChatGPT
|
★★
|
 |
2023-05-03 12:09:10 |
Fake Websites Related to ChatGPT Pose High Risk, Warns Check Point Research (direct link) |
Highlights
• Check Point Research (CPR) sees a surge in malware distributed through websites appearing to be related to ChatGPT
Malware Update |
Malware
|
ChatGPT
|
★★
|
 |
2023-05-03 12:00:00 |
Meta Moves to Counter New Malware and Repeat Account Takeovers (direct link) |
The company is adding new tools as bad actors use ChatGPT-themed lures and mask their infrastructure in an attempt to trick victims and elude defenders. |
Malware
|
ChatGPT
|
★★
|
 |
2023-05-02 19:12:58 |
Fake Websites Impersonating Association To ChatGPT Poses High Risk, Warns Check Point Research (direct link) |
Highlights: Check Point Research (CPR) sees a surge in malware distributed through websites appearing to be related to ChatGPT. Since the beginning of 2023, 1 out of 25 new ChatGPT-related domains was either malicious or potentially malicious. CPR provides examples of websites that mimic ChatGPT, intending to lure users to download malicious files, and warns users to be aware and to refrain from accessing similar websites. The age of AI – Anxiety or Aid? In December 2022, Check Point Research (CPR) started raising concerns about ChatGPT's implications for cybersecurity. In our previous report, CPR put a spotlight on an increase […]
|
Malware
|
ChatGPT
|
★★
|
 |
2023-05-02 13:00:00 |
CyberheistNews Vol 13 #18 [Eye on AI] Does ChatGPT Have Cybersecurity Tells? (direct link) |
CyberheistNews Vol 13 #18 | May 2nd, 2023
[Eye on AI] Does ChatGPT Have Cybersecurity Tells?
Poker players and other human lie detectors look for "tells," that is, a sign by which someone might unwittingly or involuntarily reveal what they know, or what they intend to do. A cardplayer yawns when they're about to bluff, for example, or someone's pupils dilate when they've successfully drawn a winning card.
It seems that artificial intelligence (AI) has its tells as well, at least for now, and some of them have become so obvious and so well known that they've become internet memes. "ChatGPT and GPT-4 are already flooding the internet with AI-generated content in places famous for hastily written inauthentic content: Amazon user reviews and Twitter," Vice's Motherboard observes, and there are some ways of interacting with the AI that lead it into betraying itself for what it is.
"When you ask ChatGPT to do something it's not supposed to do, it returns several common phrases. When I asked ChatGPT to tell me a dark joke, it apologized: 'As an AI language model, I cannot generate inappropriate or offensive content,' it said. Those two phrases, 'as an AI language model' and 'I cannot generate inappropriate content,' recur so frequently in ChatGPT generated content that they've become memes."
That happy state of easy detection, however, is unlikely to endure. As Motherboard points out, these tells are a feature of "lazily executed" AI. With a little more care and attention, they'll grow more persuasive.
One risk of the AI language models is that they can be adapted to perform social engineering at scale. In the near term, new-school security awareness training can help alert your people to the tells of automated scamming. And in the longer term, that training will adapt and keep pace with the threat as it evolves.
Blog post with links: https://blog.knowbe4.com/chatgpt-cybersecurity-tells
[Live Demo] Ridiculously Easy Security Awareness Training and Phishing
Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense.
Join us TOMORROW, Wednesday, May 3, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 |
Ransomware
Malware
Hack
Threat
|
ChatGPT
|
★★
|
 |
2023-04-25 18:22:00 |
Anomali Cyber Watch: Two Supply-Chain Attacks Chained Together, Decoy Dog Stealthy DNS Communication, EvilExtractor Exfiltrates to FTP Server (direct link) |
The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: APT, Cryptomining, Infostealers, Malvertising, North Korea, Phishing, Ransomware, and Supply-chain attacks. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity.
Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed.
Trending Cyber News and Threat Intelligence
First-Ever Attack Leveraging Kubernetes RBAC to Backdoor Clusters
(published: April 21, 2023)
A new Monero cryptocurrency-mining campaign is the first recorded case of gaining persistence via Kubernetes (K8s) Role-Based Access Control (RBAC), according to Aquasec researchers. The recorded honeypot attack started with exploiting a misconfigured API server. The attackers proceeded by gathering information about the cluster, checking if their cluster was already deployed, and deleting some existing deployments. They used RBAC to gain persistence by creating a new ClusterRole and a new ClusterRole binding. The attackers then created a DaemonSet to use a single API request to target all nodes for deployment. The deployed malicious image from the public registry Docker Hub was named to impersonate a legitimate account and a popular legitimate image. It has been pulled 14,399 times and 60 exposed K8s clusters have been found with signs of exploitation by this campaign.
Analyst Comment: Your company should have protocols in place to ensure that all cluster management and cloud storage systems are properly configured and patched. K8s buckets are too often misconfigured and threat actors realize there is potential for malicious activity. A defense-in-depth (layering of security mechanisms, redundancy, fail-safe defense processes) approach is a good mitigation step to help prevent actors from highly-active threat groups.
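For the RBAC persistence technique described above, one practical check is to audit which subjects hold cluster-admin. The sketch below is an assumed, illustrative audit (not from the Aquasec report): it shells out to kubectl and flags cluster-admin ClusterRoleBindings whose subjects are not on a local allowlist; the allowlist entries are hypothetical.

# Illustrative sketch only: list ClusterRoleBindings that grant cluster-admin to
# subjects outside an assumed allowlist. Requires kubectl configured for the cluster.
import json
import subprocess

EXPECTED_SUBJECTS = {"system:masters", "kube-system/default"}   # hypothetical allowlist

def audit_cluster_role_bindings() -> None:
    raw = subprocess.run(
        ["kubectl", "get", "clusterrolebindings", "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for item in json.loads(raw).get("items", []):
        if item.get("roleRef", {}).get("name") != "cluster-admin":
            continue
        for subject in item.get("subjects", []) or []:
            key = "/".join(filter(None, [subject.get("namespace"), subject.get("name")]))
            if key not in EXPECTED_SUBJECTS:
                print(f'Review {item["metadata"]["name"]}: cluster-admin granted to '
                      f'{subject.get("kind")} {key}')

if __name__ == "__main__":
    audit_cluster_role_bindings()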
MITRE ATT&CK: [MITRE ATT&CK] T1190 - Exploit Public-Facing Application | [MITRE ATT&CK] T1496 - Resource Hijacking | [MITRE ATT&CK] T1036 - Masquerading | [MITRE ATT&CK] T1489 - Service Stop
Tags: Monero, malware-type:Cryptominer, detection:PUA.Linux.XMRMiner, file-type:ELF, abused:Docker Hub, technique:RBAC Buster, technique:Create ClusterRoleBinding, technique:Deploy DaemonSet, target-system:Linux, target:K8s, target:Kubernetes RBAC
3CX Software Supply Chain Compromise Initiated by a Prior Software Supply Chain Compromise; Suspected North Korean Actor Responsible
(published: April 20, 2023)
Investigation of the previously-reported 3CX supply chain compromise (March 2023) allowed Mandiant researchers to determine that it was the result of a prior software supply chain attack using a trojanized installer for X_TRADER, a software package provided by Trading Technologies. The attack involved the publicly-available tool SigFlip decrypting an RC4 stream-cipher and starting publicly-available DaveShell shellcode for reflective loading. It led to installation of the custom, modular VeiledSignal backdoor. VeiledSignal's additional modules inject the C2 module in a browser process instance, create a Windows named pipe and |
Ransomware
Spam
Malware
Tool
Threat
Cloud
|
Uber
APT 38
ChatGPT
APT 43
|
★★
|
 |
2023-04-25 13:00:00 |
CyberheistNews Vol 13 #17 [Head Start] Effective Methods How To Teach Social Engineering to an AI (direct link) |
CyberheistNews Vol 13 #16 | April 18th, 2023
[Finger on the Pulse]: How Phishers Leverage Recent AI Buzz
Curiosity leads people to suspend their better judgment as a new campaign of credential theft exploits a person's excitement about the newest AI systems not yet available to the general public. On Tuesday morning, April 11th, Veriti explained that several unknown actors are making false Facebook ads which advertise a free download of AIs like ChatGPT and Google Bard.
Veriti writes "These posts are designed to appear legitimate, using the buzz around OpenAI language models to trick unsuspecting users into downloading the files. However, once the user downloads and extracts the file, the Redline Stealer (aka RedStealer) malware is activated and is capable of stealing passwords and downloading further malware onto the user's device."
Veriti describes the capabilities of the Redline Stealer malware which, once downloaded, can take sensitive information like credit card numbers, passwords, and personal information like user location, and hardware. Veriti added "The malware can upload and download files, execute commands, and send back data about the infected computer at regular intervals."
Experts recommend using official Google or OpenAI websites to learn when their products will be available and only downloading files from reputable sources. With the rising use of Google and Facebook ads as attack vectors experts also suggest refraining from clicking on suspicious advertisements promising early access to any product on the Internet.
Employees can be helped to develop sound security habits like these by stepping them through monthly social engineering simulations.
Blog post with links: https://blog.knowbe4.com/ai-hype-used-for-phishbait
[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist
Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform!
The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters with |
Spam
Malware
Hack
Threat
|
APT 28
ChatGPT
|
★★★
|
 |
2023-04-22 10:08:16 |
Google ads push BumbleBee malware used by ransomware gangs (direct link) |
The enterprise-targeting Bumblebee malware is distributed through Google Ads and SEO poisoning that promote popular software like Zoom, Cisco AnyConnect, ChatGPT, and Citrix Workspace. [...] |
Ransomware
Malware
|
ChatGPT
|
★★
|
 |
2023-04-21 09:33:14 |
ChatGPT fans need 'defensive mindset' to avoid scammers and malware (direct link) |
Palo Alto Networks spots suspicious activity spikes such as naughty domains, phishing, and worse. ChatGPT fans need to adopt a "defensive mindset" because scammers have started using multiple methods to trick the bot's users into downloading malware or sharing sensitive information.… |
Malware
|
ChatGPT
|
★★
|
 |
2023-04-20 11:46:00 |
Bumblebee malware flies on the wings of Zoom and ChatGPT (direct link) |
No more details |
Malware
|
ChatGPT
|
★★
|
 |
2023-04-20 10:49:00 |
Bumblebee Malware Distributed Via Trojanized Installer Downloads (direct link) |
Type: Blogs. Bumblebee Malware Distributed Via Trojanized Installer Downloads. Restricting the download and execution of third-party software is critically important. Learn how CTU™ researchers observed Bumblebee malware distributed via trojanized installers for popular software such as Zoom, Cisco AnyConnect, ChatGPT, and Citrix Workspace. |
Malware
|
ChatGPT
|
★★
|
 |
2023-04-19 11:00:00 |
How ChatGPT-and Bots Like It-Can Spread Malware (direct link) |
Generative AI is a tool, which means it can be used by cybercriminals, too. Here's how to protect yourself. |
Malware
|
ChatGPT
|
★★
|
 |
2023-04-18 13:00:00 |
CyberheistNews Vol 13 #16 [Finger on the Pulse]: How Phishers Leverage Recent AI Buzz (direct link) |
CyberheistNews Vol 13 #16 | April 18th, 2023
[Finger on the Pulse]: How Phishers Leverage Recent AI Buzz
Curiosity leads people to suspend their better judgment as a new campaign of credential theft exploits a person's excitement about the newest AI systems not yet available to the general public. On Tuesday morning, April 11th, Veriti explained that several unknown actors are making false Facebook ads which advertise a free download of AIs like ChatGPT and Google Bard.
Veriti writes "These posts are designed to appear legitimate, using the buzz around OpenAI language models to trick unsuspecting users into downloading the files. However, once the user downloads and extracts the file, the Redline Stealer (aka RedStealer) malware is activated and is capable of stealing passwords and downloading further malware onto the user's device."
Veriti describes the capabilities of the Redline Stealer malware which, once downloaded, can take sensitive information like credit card numbers, passwords, and personal information like user location, and hardware. Veriti added "The malware can upload and download files, execute commands, and send back data about the infected computer at regular intervals."
Experts recommend using official Google or OpenAI websites to learn when their products will be available and only downloading files from reputable sources. With the rising use of Google and Facebook ads as attack vectors experts also suggest refraining from clicking on suspicious advertisements promising early access to any product on the Internet.
Employees can be helped to develop sound security habits like these by stepping them through monthly social engineering simulations.
Blog post with links: https://blog.knowbe4.com/ai-hype-used-for-phishbait
[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist
Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform!
The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever leav |
Spam
Malware
Hack
Threat
|
APT 28
ChatGPT
|
★★★
|
 |
2023-04-12 21:57:00 |
Report Reveals ChatGPT Already Involved in Data Leaks, Phishing Scams & Malware Infections (direct link) |
No more details |
Malware
|
ChatGPT
|
★★★★
|
 |
2023-04-12 08:39:41 |
BlackMamba: is ChatGPT-generated malware a new type of threat? (direct link) |
BlackMamba is a proof-of-concept malware, in other words a demonstration program built on a benign executable that, by teaming up with a highly reputable AI (OpenAI) at runtime, returns synthesized, polymorphic malicious code intended to steal the information typed on the keyboard by the user of the infected system. |
Malware
|
ChatGPT
|
★★★
|
 |
2023-04-11 13:16:54 |
CyberheistNews Vol 13 #15 [The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams (direct link) |
CyberheistNews Vol 13 #15 | April 11th, 2023
[The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams
The Federal Trade Commission is alerting consumers about a next-level, more sophisticated family emergency scam that uses AI which imitates the voice of a "family member in distress."
They started out with: "You get a call. There's a panicked voice on the line. It's your grandson. He says he's in deep trouble - he wrecked the car and landed in jail. But you can help by sending money. You take a deep breath and think. You've heard about grandparent scams. But darn, it sounds just like him. How could it be a scam? Voice cloning, that's how."
"Don't Trust The Voice"
The FTC explains: "Artificial intelligence is no longer a far-fetched idea out of a sci-fi movie. We're living with it, here and now. A scammer could use AI to clone the voice of your loved one. All he needs is a short audio clip of your family member's voice - which he could get from content posted online - and a voice-cloning program. When the scammer calls you, he'll sound just like your loved one.
"So how can you tell if a family member is in trouble or if it's a scammer using a cloned voice? Don't trust the voice. Call the person who supposedly contacted you and verify the story. Use a phone number you know is theirs. If you can't reach your loved one, try to get in touch with them through another family member or their friends."
Full text of the alert is at the FTC website. Share with friends, family and co-workers: https://blog.knowbe4.com/the-new-face-of-fraud-ftc-sheds-light-on-ai-enhanced-family-emergency-scams
A Master Class on IT Security: Roger A. Grimes Teaches Ransomware Mitigation
Cybercriminals have become thoughtful about ransomware attacks, taking time to maximize your organization's potential damage and their payoff. Protecting your network from this growing threat is more important than ever. And nobody knows this more than Roger A. Grimes, Data-Driven Defense Evangelist at KnowBe4.
With 30+ years of experience as a computer security consultant, instructor, and award-winning author, Roger has dedicated his life to making |
Ransomware
Data Breach
Spam
Malware
Hack
Tool
Threat
|
ChatGPT
|
★★
|
 |
2023-04-07 13:00:42 |
This undetectable malware is signed… ChatGPT (direct link) |
ChatGPT is a formidable weapon for hackers. With the help of the AI, it is possible to code dangerous, undetectable malware... without writing a single line of code. |
Malware
|
ChatGPT
|
★★
|
 |
2023-04-05 16:20:00 |
Researcher Tricks ChatGPT into Building Undetectable Steganography Malware (direct link) |
Using only ChatGPT prompts, a Forcepoint researcher convinced the AI to create malware for finding and exfiltrating specific documents, despite its directive to refuse malicious requests. |
Malware
|
ChatGPT
|
★★★
|
 |
2023-04-04 13:00:00 |
CyberheistNews Vol 13 #14 [Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist (direct link) |
CyberheistNews Vol 13 #14 | April 4th, 2023
[Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist
The details in this thwarted VEC attack demonstrate how the use of just a few key details can both establish credibility and indicate the entire thing is a scam.
It's not every day you hear about a purely social engineering-based scam taking place that is looking to run away with tens of millions of dollars. But, according to security researchers at Abnormal Security, cybercriminals are becoming brazen and are taking their shots at very large prizes.
This attack begins with a case of VEC – where a domain is impersonated. In the case of this attack, the impersonated vendor's domain (which had a .com top level domain) was replaced with a matching .cam domain (.cam domains are supposedly used for photography enthusiasts, but there's the now-obvious problem with it looking very much like .com to the cursory glance).
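As a small illustration of the lookalike-domain trick described above, the sketch below compares a sender's domain against a list of known vendor domains and flags near-matches such as a .cam/.com swap; the vendor list and similarity threshold are assumptions, not details of the reported attack or of any specific product.

# Illustrative sketch only: flag sender domains that closely resemble, but do not
# exactly match, known vendor domains (e.g. example-vendor.cam vs example-vendor.com).
import difflib
from typing import Optional

KNOWN_VENDOR_DOMAINS = {"example-vendor.com", "acme-supplies.com"}   # hypothetical list

def lookalike_of(sender_domain: str, threshold: float = 0.9) -> Optional[str]:
    """Return the legitimate domain this sender appears to imitate, if any."""
    sender_domain = sender_domain.lower()
    if sender_domain in KNOWN_VENDOR_DOMAINS:
        return None                      # exact match: the real vendor, not a lookalike
    for legit in KNOWN_VENDOR_DOMAINS:
        if difflib.SequenceMatcher(None, sender_domain, legit).ratio() >= threshold:
            return legit
    return None

print(lookalike_of("example-vendor.cam"))   # -> example-vendor.com
print(lookalike_of("unrelated.org"))        # -> None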
The email attaches a legitimate-looking payoff letter complete with loan details. According to Abnormal Security, nearly every aspect of the request looked legitimate. The telltale signs primarily revolved around the use of the lookalike domain, but there were also grammatical mistakes (which can easily be addressed by using an online grammar service or ChatGPT).
This attack was identified well before it caused any damage, but the social engineering tactics leveraged were nearly enough to make this attack successful. Security solutions will help stop most attacks, but for those that make it past scanners, your users need to play a role in spotting and stopping BEC, VEC and phishing attacks themselves – something taught through security awareness training combined with frequent simulated phishing and other social engineering tests.
Blog post with screenshots and links: https://blog.knowbe4.com/36-mil-vendor-email-compromise-attack
[Live Demo] Ridiculously Easy Security Awareness Training and Phishing
Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense.
Join us TOMORROW, Wednesday, April 5, @ 2:00 PM (ET), for a live demo of how KnowBe4 i |
Ransomware
Malware
Hack
Threat
|
ChatGPT
APT 43
|
★★
|
 |
2023-03-28 13:00:00 |
CyberheistNews Vol 13 #13 [Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks (direct link) |
CyberheistNews Vol 13 #13 | March 28th, 2023
[Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks
Users need to adapt to an evolving threat landscape in which attackers can use AI tools like ChatGPT to craft extremely convincing phishing emails, according to Matthew Tyson at CSO.
"A leader tasked with cybersecurity can get ahead of the game by understanding where we are in the story of machine learning (ML) as a hacking tool," Tyson writes. "At present, the most important area of relevance around AI for cybersecurity is content generation.
"This is where machine learning is making its greatest strides and it dovetails nicely for hackers with vectors such as phishing and malicious chatbots. The capacity to craft compelling, well-formed text is in the hands of anyone with access to ChatGPT, and that\'s basically anyone with an internet connection."
Tyson quotes Conal Gallagher, CIO and CISO at Flexera, as saying that since attackers can now write grammatically correct phishing emails, users will need to pay attention to the circumstances of the emails.
"Looking for bad grammar and incorrect spelling is a thing of the past - even pre-ChatGPT phishing emails have been getting more sophisticated," Gallagher said. "We must ask: \'Is the email expected? Is the from address legit? Is the email enticing you to click on a link?\' Security awareness training still has a place to play here."
Tyson explains that technical defenses have become very effective, so attackers focus on targeting humans to bypass these measures.
"Email and other elements of software infrastructure offer built-in fundamental security that largely guarantees we are not in danger until we ourselves take action," Tyson writes. "This is where we can install a tripwire in our mindsets: we should be hyper aware of what it is we are acting upon when we act upon it.
"Not until an employee sends a reply, runs an attachment, or fills in a form is sensitive information at risk. The first ring of defense in our mentality should be: \'Is the content I\'m looking at legit, not just based on its internal aspects, but given the entire context?\' The second ring of defense in our mentality then has to be, \'Wait! I\'m being asked to do something here.\'"
New-school security awareness training with simulated phishing tests enables your employees to recognize increasingly sophisticated phishing attacks and builds a strong security culture.
Remember: Culture eats strategy for breakfast and is always top-down.
Blog post with links: https://blog.knowbe4.com/identifying-ai-enabled-phishing
|
Ransomware
Malware
Hack
Tool
Threat
Guideline
|
ChatGPT
|
★★★
|
 |
2023-03-16 00:49:59 |
Check Point Research conducts Initial Security Analysis of ChatGPT4, Highlighting Potential Scenarios For Accelerated Cybercrime (direct link) |
Highlights: Check Point Research (CPR) releases an initial analysis of ChatGPT4, surfacing five scenarios that can allow threat actors to streamline malicious efforts and preparations faster and with more precision. In some instances, even non-technical actors can create harmful tools. The five scenarios provided span impersonations of banks, reverse shells, C++ malware and more. Despite…
|
Malware
Threat
|
ChatGPT
|
★★
|
 |
2023-03-14 17:32:00 |
Anomali Cyber Watch: Xenomorph Automates The Whole Fraud Chain on Android, IceFire Ransomware Started Targeting Linux, Mythic Leopard Delivers Spyware Using Romance Scam (direct link) |
Anomali Cyber Watch: Xenomorph Automates The Whole Fraud Chain on Android, IceFire Ransomware Started Targeting Linux, Mythic Leopard Delivers Spyware Using Romance Scam, and More.
The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: Android, APT, DLL side-loading, Iran, Linux, Malvertising, Mobile, Pakistan, Ransomware, and Windows. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity.
Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed.
Trending Cyber News and Threat Intelligence
Xenomorph V3: a New Variant with ATS Targeting More Than 400 Institutions
(published: March 10, 2023)
Newer versions of the Xenomorph Android banking trojan are able to target 400 applications: cryptocurrency wallets and mobile banking apps from around the world, with the top targeted countries being Spain, Turkey, Poland, USA, and Australia (in that order). Since February 2022, several small, testing Xenomorph campaigns have been detected. Its current version Xenomorph v3 (Xenomorph.C) is available on the Malware-as-a-Service model. This trojan version was delivered using the Zombinder binding service to bind it to a legitimate currency converter. Xenomorph v3 automatically collects and exfiltrates credentials using the ATS (Automated Transfer Systems) framework. The command-and-control traffic is blended in by abusing Discord Content Delivery Network.
Analyst Comment: Fraud chain automation makes Xenomorph v3 a dangerous malware that might significantly increase its prevalence on the threat landscape. Users should keep their mobile devices updated and avail of mobile antivirus and VPN protection services. Install only applications that you actually need, use the official store and check the app description and reviews. Organizations that publish applications for their customers are invited to use Anomali's Premium Digital Risk Protection service to discover rogue, malicious apps impersonating your brand that security teams typically do not search or monitor.
MITRE ATT&CK: [MITRE ATT&CK] T1417.001 - Input Capture: Keylogging | [MITRE ATT&CK] T1417.002 - Input Capture: Gui Input Capture
Tags: malware:Xenomorph, Mobile, actor:Hadoken Security Group, actor:HadokenSecurity, malware-type:Banking trojan, detection:Xenomorph.C, Malware-as-a-Service, Accessibility services, Overlay attack, Discord CDN, Cryptocurrency wallet, target-industry:Cryptocurrency, target-industry:Banking, target-country:Spain, target-country:ES, target-country:Turkey, target-country:TR, target-country:Poland, target-country:PL, target-country:USA, target-country:US, target-country:Australia, target-country:AU, malware:Zombinder, detection:Zombinder.A, Android
Cobalt Illusion Masquerades as Atlantic Council Employee
(published: March 9, 2023)
A new campaign by Iran-sponsored Charming Kitten (APT42, Cobalt Illusion, Magic Hound, Phosphorous) was detected targeting Mahsa Amini protests and researchers who document the suppression of women and minority groups i |
Ransomware
Malware
Tool
Vulnerability
Threat
Guideline
Conference
|
APT 35
ChatGPT
APT 36
APT 42
|
★★
|
 |
2023-03-14 13:00:00 |
CyberheistNews Vol 13 #11 [Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears (direct link) |
CyberheistNews Vol 13 #11 | March 14th, 2023
[Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears
Robert Lemos at DARKReading just reported on a worrying trend. The title said it all, and the news is that more than 4% of employees have put sensitive corporate data into the large language model, raising concerns that its popularity may result in massive leaks of proprietary information. Yikes.
I'm giving you a short extract of the story and the link to the whole article is below.
"Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models, and that information could be retrieved at a later date if proper data security isn't in place for the service.
"In a recent report, data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential info, client data, source code, or regulated information to the LLM.
"In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company.
"And as more employees use ChatGPT and other AI-based services as productivity tools, the risk will grow, says Howard Ting, CEO of Cyberhaven.
"'There was this big migration of data from on-prem to cloud, and the next big shift is going to be the migration of data into these generative apps," he says. "And how that plays out [remains to be seen] - I think, we're in pregame; we're not even in the first inning.'"
Your employees need to be stepped through new-school security awareness training so that they understand the risks of doing things like this.
Blog post with links: https://blog.knowbe4.com/employees-are-feeding-sensitive-biz-data-to-chatgpt-raising-security-fears
[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blockl |
Ransomware
Data Breach
Spam
Malware
Threat
Guideline
Medical
|
ChatGPT
|
★★
|
 |
2023-03-11 19:02:00 |
BATLOADER Malware Uses Google Ads to Deliver Vidar Stealer and Ursnif Payloads (direct link) |
The malware downloader known as BATLOADER has been observed abusing Google Ads to deliver secondary payloads like Vidar Stealer and Ursnif.
According to cybersecurity company eSentire, malicious ads are used to spoof a wide range of legitimate apps and services such as Adobe, OpenAI's ChatGPT, Spotify, Tableau, and Zoom.
BATLOADER, as the name suggests, is a loader that's responsible for |
Malware
|
ChatGPT
|
★★
|
 |
2023-03-08 16:50:40 |
AI-Powered 'BlackMamba' Keylogging Attack Evades Modern EDR Security (direct link) |
Researchers warn that polymorphic malware created with ChatGPT and other LLMs will force a reinvention of security automation. |
Malware
|
ChatGPT
|
★★
|
 |
2023-02-28 14:00:00 |
CyberheistNews Vol 13 #09 [Eye Opener] Should You Click on Unsubscribe? (direct link) |
CyberheistNews Vol 13 #09 | February 28th, 2023
[Eye Opener] Should You Click on Unsubscribe?
By Roger A. Grimes.
Some common questions we get are "Should I click on an unwanted email's 'Unsubscribe' link? Will that lead to more or less unwanted email?"
The short answer is that, in general, it is OK to click on a legitimate vendor's unsubscribe link. But if you think the email is sketchy, is coming from a source for which you would not want to confirm that your email address is valid and active, or you are unsure, do not take the chance; skip the unsubscribe action.
In many countries, legitimate vendors are bound by law to offer (free) unsubscribe functionality and abide by a user's preferences. For example, in the U.S., the 2003 CAN-SPAM Act states that businesses must offer clear instructions on how the recipient can remove themselves from the involved mailing list and that request must be honored within 10 days.
Note: Many countries have laws similar to the CAN-SPAM Act, although with privacy protection ranging the privacy spectrum from very little to a lot more protection. The unsubscribe feature does not have to be a URL link, but it does have to be an "internet-based way." The most popular alternative method besides a URL link is an email address to use.
In some cases, there are specific instructions you have to follow, such as put "Unsubscribe" in the subject of the email. Other times you are expected to craft your own message. Luckily, most of the time simply sending any email to the listed unsubscribe email address is enough to remove your email address from the mailing list.
[CONTINUED] at the KnowBe4 blog: https://blog.knowbe4.com/should-you-click-on-unsubscribe
[Live Demo] Ridiculously Easy Security Awareness Training and Phishing
Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense.
Join us TOMORROW, Wednesday, March 1, @ 2:00 PM (ET), for a live demo of how KnowBe4 introduces a new-school approac |
Malware
Hack
Tool
Vulnerability
Threat
Guideline
Prediction
|
APT 38
ChatGPT
|
★★★
|
 |
2023-02-23 19:02:13 |
Hackers use ChatGPT phishing websites to infect users with malware (direct link) |
Cyble says cybercriminals are setting up phishing websites that mimic the branding of ChatGPT, an AI tool that has exploded in popularity |
Malware
Tool
|
ChatGPT
|
★★★
|
 |
2023-02-23 09:25:51 |
A fake ChatGPT app for Windows threatens to hijack your Facebook, TikTok and Google accounts (direct link) |
A fake ChatGPT application for Windows PCs is spreading on social networks. It hides malware capable of stealing the credentials for Facebook, TikTok and Google accounts. |
Malware
|
ChatGPT
|
★★
|
 |
2023-02-23 09:20:00 |
Phishing Sites and Apps Use ChatGPT as Lure (direct link) |
Campaigns designed to steal card information and install malware |
Malware
|
ChatGPT
|
★★
|
 |
2023-02-23 09:07:44 |
Fake ChatGPT apps spread Windows and Android malware (direct link) |
OpenAI's ChatGPT chatbot has been a phenomenon, taking the internet by storm. Whether it is composing poetry, writing essays for college students, or finding bugs in computer code, it has impressed millions of people and proven itself to be the most accessible form of artificial intelligence ever seen. Yes, there are plenty of fears about how the technology could be used and abused, questions to be answered about its ethical use and how regulators might police its use, and worries that some may not realise that ChatGPT is not as smart as it initially appears. But no-one can deny that it has... |
Malware
|
ChatGPT
|
★★
|
 |
2023-02-22 16:58:19 |
Hackers use fake ChatGPT apps to push Windows, Android malware (direct link) |
Threat actors are actively exploiting the popularity of OpenAI's ChatGPT AI tool to distribute Windows malware, infect Android devices with spyware, or direct unsuspecting victims to phishing pages. [...] |
Malware
Tool
Threat
|
ChatGPT
|
★★★
|
 |
2023-02-22 16:25:56 |
A new malware steals social media credentials by posing as a ChatGPT application (direct link) |
A new malware steals social media credentials by posing as a ChatGPT application
Malware |
Malware
|
ChatGPT
|
★★★
|
 |
2023-02-09 17:05:17 |
Hackers Bypass ChatGPT Restrictions Via Telegram Bots (direct link) |
Researchers revealed on Wednesday that hackers had found a means to get beyond ChatGPT’s limitations and are using it to market services that let users produce malware and phishing emails. ChatGPT is a chatbot that imitates human output by using artificial intelligence to respond to inquiries and carry out tasks. People can use it to […] |
Malware
|
ChatGPT
|
★★
|
 |
2023-02-08 18:54:03 |
Hackers are selling a service that bypasses ChatGPT restrictions on malware (direct link) |
ChatGPT restrictions on the creation of illicit content are easy to circumvent. |
Malware
|
ChatGPT
|
★★★
|
 |
2023-01-18 19:21:00 |
ChatGPT Could Create Polymorphic Malware Wave, Researchers Warn (direct link) |
The powerful AI bot can produce malware without malicious code, making it tough to mitigate. |
Malware
|
ChatGPT
|
★★★
|
 |
2023-01-18 16:00:00 |
ChatGPT Creates Polymorphic Malware (direct link) |
The first step to creating the malware was to bypass ChatGPT content filters |
Malware
|
ChatGPT
|
★★
|
 |
2023-01-10 16:30:00 |
Anomali Cyber Watch: Turla Re-Registered Andromeda Domains, SpyNote Is More Popular after the Source Code Publication, Typosquatted Site Used to Leak Company's Data (direct link) |
The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: APT, Artificial intelligence, Expired C2 domains, Data leak, Mobile, Phishing, Ransomware, and Typosquatting. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity.
Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed.
Trending Cyber News and Threat Intelligence
OPWNAI : Cybercriminals Starting to Use ChatGPT
(published: January 6, 2023)
Check Point researchers have detected multiple underground forum threads describing experiments with and abuse of ChatGPT (Generative Pre-trained Transformer), the revolutionary artificial intelligence (AI) chatbot tool capable of generating creative responses in a conversational manner. Several actors have built schemes to produce AI outputs (graphic art, books) and sell them as their own. Other actors experiment with instructions to write AI-generated malicious code while avoiding ChatGPT guardrails that should prevent such abuse. Two actors shared samples allegedly created using ChatGPT: a basic Python-based stealer, a Java downloader that stealthily runs payloads using PowerShell, and a cryptographic tool.
Analyst Comment: ChatGPT and similar tools can be of great help to humans creating art, writing texts, and programming. At the same time, it can be a dangerous tool enabling even low-skill threat actors to create convincing social-engineering lures and even new malware.
MITRE ATT&CK: [MITRE ATT&CK] T1566 - Phishing | [MITRE ATT&CK] T1059.001: PowerShell | [MITRE ATT&CK] T1048.003 - Exfiltration Over Alternative Protocol: Exfiltration Over Unencrypted/Obfuscated Non-C2 Protocol | [MITRE ATT&CK] T1560 - Archive Collected Data | [MITRE ATT&CK] T1005: Data from Local System
Tags: ChatGPT, Artificial intelligence, OpenAI, Phishing, Programming, Fraud, Chatbot, Python, Java, Cryptography, FTP
Turla: A Galaxy of Opportunity
(published: January 5, 2023)
Russia-sponsored group Turla re-registered expired domains for old Andromeda malware to select a Ukrainian target from the existing victims. Andromeda sample, known from 2013, infected the Ukrainian organization in December 2021 via user-activated LNK file on an infected USB drive. Turla re-registered the Andromeda C2 domain in January 2022, profiled and selected a single victim, and pushed its payloads in September 2022. First, the Kopiluwak profiling tool was downloaded for system reconnaissance, two days later, the Quietcanary backdoor was deployed to find and exfiltrate files created in 2021-2022.
Analyst Comment: Advanced groups are often utilizing commodity malware to blend their traffic with less sophisticated threats. Turla’s tactic of re-registering old but active C2 domains gives the group a way-in to the pool of existing targets. Organizations should be vigilant to all kinds of existing infections and clean them up, even if assessed as “less dangerous.” All known network and host-based indicators and hunting rules associated |
Ransomware
Malware
Tool
Threat
|
ChatGPT
APT-C-36
|
★★
|
 |
2023-01-10 12:18:55 |
ChatGPT-Written Malware (direct link) |
I don’t know how much of a thing this will end up being, but we are seeing ChatGPT-written malware in the wild.
…within a few weeks of ChatGPT going live, participants in cybercrime forums—some with little or no coding experience—were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks.
“It's still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web,” company researchers wrote. “However, the cybercriminal community has already shown significant interest and are jumping into this latest trend to generate malicious code.”... |
Malware
Tool
Prediction
|
ChatGPT
|
★★
|
 |
2023-01-06 22:05:06 |
ChatGPT is enabling script kiddies to write functional malware (direct link) |
For a beta, ChatGPT isn't all that bad at writing fairly decent malware. |
Malware
|
ChatGPT
|
★★★
|
 |
2022-12-21 15:15:59 |
11 Problems ChatGPT Can Solve For Reverse Engineers and Malware Analysts (direct link) |
ChatGPT has captured the imagination of many across infosec. Here's how it can superpower the efforts of reversers and malware analysts. |
Malware
|
ChatGPT
|
★★★
|
 |
2022-12-13 10:45:00 |
Experts Warn ChatGPT Could Democratize Cybercrime (direct link) |
Researchers claim AI bot can write malware and craft phishing emails |
Malware
|
ChatGPT
|
★★★
|
 |
2022-12-06 16:41:01 |
ChatGPT shows promise of using AI to write malware (direct link) |
Large language models pose a major cybersecurity risk, both from the vulnerabilities they risk introducing and the malware they could produce.
|
Malware
|
ChatGPT
|
★★★★
|