What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
Checkpoint.webp 2023-07-25 14:00:25 How Generative AI Affects Mobile Security
>Me: “Can you be misused for cyber-attacks?” Chat GPT: “As an AI language model, I don’t have agency or intentions of my own. However, like any technology, I can potentially be misused by individuals with malicious intent. Ultimately, the responsibility for the use and potential misuse of AI systems lies with the users. It’s important for individuals and organizations to use AI technology ethically and responsibly, adhering to legal and ethical guidelines, and taking precautions to prevent any harm or misuse.” In this conversation, ChatGPT is not wrong… Generative artificial intelligence (AI) tools such as Chat GPT and Google Bard […]
Tool ChatGPT ChatGPT ★★
DarkReading.webp 2023-07-20 00:00:00 Infosec Doesn't Know What AI Tools Orgs Are Using
Hint: Organizations are already using a range of AI tools, with ChatGPT and Jasper.ai leading the way.
Tool ChatGPT ChatGPT ★★★
DarkReading.webp 2023-07-19 22:46:00 (Déjà vu) Checkmarx Announces CheckAI Plugin for ChatGPT to Detect and Prevent Attacks Against ChatGPT-Generated Code
Checkmarx's industry-first AI AppSec plugin works within the ChatGPT interface to protect against new attack types targeting GenAI-generated code.
Tool ChatGPT ChatGPT ★★
RecordedFuture.webp 2023-07-17 19:49:00 By criminals, for criminals: AI tool easily generates 'remarkably persuasive' fraud emails
An artificial intelligence tool promoted on underground forums shows how AI can help refine cybercrime operations, researchers say. The WormGPT software is offered “as a blackhat alternative” to commercial AI tools like ChatGPT, according to analysts at email security company SlashNext. The researchers used WormGPT to generate the kind of content that could be part…
Tool ChatGPT ★★★
SlashNext.webp 2023-07-13 13:00:49 WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks
>In this blog post, we delve into the emerging use of generative AI, including OpenAI’s ChatGPT, and the cybercrime tool WormGPT, in Business Email Compromise (BEC) attacks. Highlighting real cases from cybercrime forums, the post dives into the mechanics of these attacks, the inherent risks posed by AI-driven phishing emails, and the unique advantages of […] The post WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks first appeared on SlashNext.
Tool ChatGPT ★★★
AlienVault.webp 2023-07-06 10:00:00 ChatGPT, the new rubber duck
Introduction

Whether you are new to the world of IT or an experienced developer, you may have heard of the debugging concept of the 'programmer's rubber duck'. For the uninitiated, the basic concept is that by speaking to an inanimate object (e.g., a rubber duck) and explaining one's code or the problem you are facing as if you were teaching it, you can solve whatever roadblock you've hit. Talking to the duck may lead to a "eureka!" moment where you suddenly discover the issue that has been holding you back, or simply allow you to clarify your thoughts and potentially gain a new perspective by taking a short break. This works because as you are "teaching" the duck, you must break down your code step by step, explaining how it works and what each part does. This careful review not only changes how you think about the described scenario but also highlights flaws you may not have otherwise identified. Since the rubber duck is an inanimate object, it will never tire or become disinterested during these conversations. Understandably, this also means that the duck cannot provide you any actual support. It won't be able to help you summarize your ideas, offer recommendations, or point out flaws in syntax or programming logic.

Enter now the tool taking the world by storm, ChatGPT. Even at its most basic tier ChatGPT offers incredible value for those who learn how to work with it. This tool combines in one package all the benefits of the rubber duck (patience, reliability, support) while also being able to offer suggestions. While it provides the patience and reliability of the classic 'rubber duck', ChatGPT also has the ability to offer helpful suggestions, review code snippets*, and engage in insightful dialogue. ChatGPT has the opportunity to significantly speed up development practices and virtually eliminate any form of "coder's block" without needing any complex setup or advanced knowledge to use effectively. The tool can also remove many barriers to entry that exist in programming, effectively democratizing the entire development pipeline and opening it up to anyone with a computer. The premise of a rubber duck extends beyond the realm of programming. Individuals across various professions who require an intuitive, extensively trained AI tool can benefit from ChatGPT – this modern interpretation of the 'rubber duck' – in managing their day-to-day tasks.

*This is highly dependent on your use-case. You should never upload sensitive, private, or proprietary information into ChatGPT, or information that is otherwise controlled or protected.

Benefits

ChatGPT offers numerous benefits for those willing to devote the time to learning how to use it effectively. Some of its key benefits include:

- Collaborative problem-solving
- Ability to significantly reduce time spent on manual tasks
- Flexibility
- Ease of use

Drawbacks

The tool does come with a few drawbacks, however, which are worth considering before you dive into the depths of what it can offer. To begin with, the tool is heavily reliant on the user to provide a clear and effective prompt. If provided a weak or vague prompt, it is highly likely that the tool will return similarly weak results. Another drawback that may catch its users by surprise is that it is not a replacement for human creativity or ingenuity. You cannot, thus far, rely solely on the tool to fully execute a program or build something entirely from scratch without the support of a human to guide and correct its output.

Suggestions

Although ChatGPT is a fantastic tool, I recognize that using it can be overwhelming at first, especially if you are not used to it. ChatGPT has so many capabilities that it is often difficult to determine how best to use it. Below are a few suggestions and examples of how this tool can be used to help talk through problems or discuss ideas, regardless of whether you're…
Tool ChatGPT ChatGPT ★★
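As a concrete illustration of the rubber-duck workflow described above, here is a minimal sketch of a code-review prompt sent to ChatGPT programmatically. It assumes the pre-1.0 openai Python package (whose ChatCompletion interface was current when this piece was written); the model name, prompt wording, and code snippet are illustrative only, and, as the article warns, nothing sensitive or proprietary should ever be pasted into a prompt.

    # Rubber-duck code review via the ChatGPT API: a sketch, assuming the
    # pre-1.0 openai Python package and an OPENAI_API_KEY in the environment.
    # Do not paste sensitive, private, or proprietary code into the prompt.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    SNIPPET = '''
    def average(values):
        return sum(values) / len(values)   # hint: fails on an empty list
    '''

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You are a patient rubber-duck debugging partner. "
                        "Ask clarifying questions and point out flaws."},
            {"role": "user",
             "content": "I will explain my code step by step. Here it is:\n"
                        + SNIPPET},
        ],
    )

    print(response["choices"][0]["message"]["content"])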
Trend.webp 2023-07-05 00:00:00 ChatGPT Shared Links and Information Protection: Risks and Measures Organizations Must Understand
Since its initial release in late 2022, the AI-powered text generation tool known as ChatGPT has been experiencing rapid adoption rates from both organizations and individual users. However, its latest feature, known as Shared Links, comes with the potential risk of unintentional disclosure of confidential information.
Tool ChatGPT ChatGPT ★★
knowbe4.webp 2023-06-27 13:00:00 CyberheistNews Vol 13 #26 [Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams
CyberheistNews Vol 13 #26 | June 27th, 2023

[Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams

The U.S. Federal Trade Commission (FTC) has published a data spotlight outlining the most common text message scams. Phony bank fraud prevention alerts were the most common type of text scam last year. "Reports about texts impersonating banks are up nearly tenfold since 2019 with median reported individual losses of $3,000 last year," the report says.

These are the top five text scams reported by the FTC:

- Copycat bank fraud prevention alerts
- Bogus "gifts" that can cost you
- Fake package delivery problems
- Phony job offers
- Not-really-from-Amazon security alerts

"People get a text supposedly from a bank asking them to call a number ASAP about suspicious activity or to reply YES or NO to verify whether a transaction was authorized. If they reply, they'll get a call from a phony 'fraud department' claiming they want to 'help get your money back.' What they really want to do is make unauthorized transfers. What's more, they may ask for personal information like Social Security numbers, setting people up for possible identity theft."

Fake gift card offers took second place, followed by phony package delivery problems. "Scammers understand how our shopping habits have changed and have updated their sleazy tactics accordingly," the FTC says. "People may get a text pretending to be from the U.S. Postal Service, FedEx, or UPS claiming there's a problem with a delivery. The text links to a convincing-looking – but utterly bogus – website that asks for a credit card number to cover a small 'redelivery fee.'"

Scammers also target job seekers with bogus job offers in an attempt to steal their money and personal information. "With workplaces in transition, some scammers are using texts to perpetrate old-school forms of fraud – for example, fake 'mystery shopper' jobs or bogus money-making offers for driving around with cars wrapped in ads," the report says. "Other texts target people who post their resumes on employment websites. They claim to offer jobs and even send job seekers checks, usually with instructions to send some of the money to a different address for materials, training, or the like. By the time the check bounces, the person's money – and the phony 'employer' – are long gone."

Finally, scammers impersonate Amazon and send fake security alerts to trick victims into sending money. "People may get what looks like a message from 'Amazon,' asking to verify a big-ticket order they didn't place," the FTC says. "Concerned…
Ransomware Spam Malware Hack Tool Threat FedEx APT 28 APT 15 ChatGPT ChatGPT ★★
Chercheur.webp 2023-06-22 15:43:36 AI as Sensemaking for Public Comments
It’s become fashionable to think of artificial intelligence as an inherently dehumanizing technology, a ruthless force of automation that has unleashed legions of virtual skilled laborers in faceless form. But what if AI turns out to be the one tool able to identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most? You’d be forgiven if you’re distraught about society’s ability to grapple with this new technology. So far, there’s no lack of prognostications about the democratic...
Tool ChatGPT ★★
AlienVault.webp 2023-06-21 10:00:00 Toward a more resilient SOC: the power of machine learning
A way to manage too much data

To protect the business, security teams need to be able to detect and respond to threats fast. The problem is the average organization generates massive amounts of data every day. Information floods into the Security Operations Center (SOC) from network tools, security tools, cloud services, threat intelligence feeds, and other sources. Reviewing and analyzing all this data in a reasonable amount of time has become a task that is well beyond the scope of human efforts.

AI-powered tools are changing the way security teams operate. Machine learning (which is a subset of artificial intelligence, or "AI") – and in particular, machine learning-powered predictive analytics – is enhancing threat detection and response in the SOC by providing an automated way to quickly analyze and prioritize alerts.

Machine learning in threat detection

So, what is machine learning (ML)? In simple terms, it is a machine's ability to automate a learning process so it can perform tasks or solve problems without specifically being told to do so. Or, as AI pioneer Arthur Samuel put it, ". . . to learn without explicitly being programmed." ML algorithms are fed large amounts of data that they parse and learn from so they can make informed predictions on outcomes in new data. Their predictions improve with "training" – the more data an ML algorithm is fed, the more it learns, and thus the more accurate its baseline models become.

While ML is used for various real-world purposes, one of its primary use cases in threat detection is to automate identification of anomalous behavior. The ML model categories most commonly used for these detections are:

- Supervised models learn by example, applying knowledge gained from existing labeled datasets and desired outcomes to new data. For example, a supervised ML model can learn to recognize malware. It does this by analyzing data associated with known malware traffic to learn how it deviates from what is considered normal. It can then apply this knowledge to recognize the same patterns in new data.

- Unsupervised models do not rely on labels but instead identify structure, relationships, and patterns in unlabeled datasets. They then use this knowledge to detect abnormalities or changes in behavior. For example: an unsupervised ML model can observe traffic on a network over a period of time, continuously learning (based on patterns in the data) what is "normal" behavior, and then investigating deviations, i.e., anomalous behavior. Large language models (LLMs), such as ChatGPT, are a type of generative AI that use unsupervised learning. They train by ingesting massive amounts of unlabeled text data. Not only can LLMs analyze syntax to find connections and patterns between words, but they can also analyze semantics. This means they can understand context and interpret meaning in existing data in order to create new content. (A toy sketch of the unsupervised approach follows this entry.)

- Finally, reinforcement models, which more closely mimic human learning, are not given labeled inputs or outputs but instead learn and perfect strategies through trial and error.

With ML, as with any data analysis tool, the accuracy of the output depends critically on the quality and breadth of the data set that is used as an input.

A valuable tool for the SOC

The SOC needs to be resilient in the face of an ever-changing threat landscape. Analysts have to be able to quickly understand which alerts to prioritize and which to ignore.
Machine learning helps optimize security operations by making threat detection and response faster and more accurate.
Malware Tool Threat Prediction Cloud ChatGPT ★★
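To make the unsupervised approach concrete, here is a toy sketch using scikit-learn's IsolationForest: the model fits a baseline of synthetic "normal" traffic features and flags deviations, in the spirit of the anomaly detection described above. The features and numbers are invented for the example; a real SOC pipeline would train on actual telemetry.

    # Toy illustration of unsupervised anomaly detection: the model learns
    # what "normal" traffic looks like and flags deviations. Data is synthetic.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Synthetic "normal" behavior: (bytes sent, connections per minute)
    normal = rng.normal(loc=[500, 10], scale=[50, 2], size=(1000, 2))

    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(normal)  # "training" on the observed baseline

    # New observations: one typical, one anomalous (possible exfiltration)
    new = np.array([[510, 11], [9000, 300]])
    print(model.predict(new))  # 1 = normal, -1 = anomaly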
AlienVault.webp 2023-06-13 10:00:00 Rise of AI in Cybercrime: How ChatGPT is revolutionizing ransomware attacks and what your business can do
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

OpenAI's flagship product, ChatGPT, has dominated the news cycle since its unveiling in November 2022. In only a few months, ChatGPT became the fastest-growing consumer app in internet history, reaching 100 million users as 2023 began. The generative AI application has revolutionized not only the world of artificial intelligence but is impacting almost every industry. In the world of cybersecurity, new tools and technologies are typically adopted quickly; unfortunately, in many cases, bad actors are the earliest to adopt and adapt. This can be bad news for your business, as it escalates the degree of difficulty in managing threats.

Using ChatGPT's large language model, anyone can easily generate malicious code or craft convincing phishing emails, all without any technical expertise or coding knowledge. While cybersecurity teams can leverage ChatGPT defensively, the lower barrier to entry for launching a cyberattack has both complicated and escalated the threat landscape.

Understanding the role of ChatGPT in modern ransomware attacks

We've written about ransomware many times, but it's crucial to reiterate that the cost to individuals, businesses, and institutions can be massive, both financially and in terms of data loss or reputational damage. With AI, cybercriminals have a potent tool at their disposal, enabling more precise, adaptable, and stealthy attacks. They're using machine learning algorithms to simulate trusted entities, create convincing phishing emails, and even evade detection. The problem isn't just the sophistication of the attacks, but their sheer volume. With AI, hackers can launch attacks on an unprecedented scale, exponentially expanding the breadth of potential victims. Today, hackers use AI to power their ransomware attacks, making them more precise, adaptable, and destructive.

Cybercriminals can leverage AI for ransomware in many ways, but perhaps the easiest is more in line with how many ChatGPT users are using it: writing and creating content. For hackers, especially foreign ransomware gangs, AI can be used to craft sophisticated phishing emails that are much more difficult to detect than the poorly-worded message that was once so common with bad actors (and their equally bad grammar). Even more concerning, ChatGPT-fueled ransomware can mimic the style and tone of a trusted individual or company, tricking the recipient into clicking a malicious link or downloading an infected attachment.

This is where the danger lies. Imagine your organization has the best cybersecurity awareness program, and all your employees have gained expertise in deciphering which emails are legitimate and which can be dangerous. Today, if the email can mimic tone and appear 100% genuine, how are the employees going to know? It's almost down to a coin flip in terms of odds. Furthermore, AI-driven ransomware can study the behavior of the security software on a system, identify patterns, and then either modify itself or choose the…
Ransomware Malware Tool Threat ChatGPT ChatGPT ★★
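Since the piece argues that grammar is no longer a reliable tell for AI-written phishing, the contextual checks it implies (who sent the mail, where the links point, whether it pushes urgency) can be automated as a first-pass triage. Below is a deliberately simple, illustrative heuristic; the allow-list, patterns, and sample message are made up, and this is no substitute for a real email security gateway or user training.

    # Illustrative-only triage for contextual phishing signals: sender-domain
    # mismatch, off-domain links, and urgency language. Not a production filter.
    import re
    from urllib.parse import urlparse

    TRUSTED_DOMAINS = {"example-bank.com"}  # hypothetical allow-list

    def suspicious_signals(sender: str, body: str) -> list:
        signals = []
        domain = sender.rsplit("@", 1)[-1].lower()
        if domain not in TRUSTED_DOMAINS:
            signals.append(f"sender domain not on allow-list: {domain}")
        for url in re.findall(r"https?://\S+", body):
            host = urlparse(url).hostname or ""
            if not any(host.endswith(d) for d in TRUSTED_DOMAINS):
                signals.append(f"link points off-domain: {host}")
        if re.search(r"\b(urgent|verify|suspended|wire)\b", body, re.I):
            signals.append("urgency/payment language")
        return signals

    print(suspicious_signals(
        "alerts@example-bank.co",  # look-alike TLD
        "URGENT: verify your account at http://example-bank.co/login",
    ))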
AlienVault.webp 2023-06-12 10:00:00 Understanding AI risks and how to secure using Zero Trust
I. Introduction

AI's transformative power is reshaping business operations across numerous industries. Through Robotic Process Automation (RPA), AI is liberating human resources from the shackles of repetitive, rule-based tasks and directing their focus towards strategic, complex operations. Furthermore, AI and machine learning algorithms can decipher huge sets of data at unprecedented speed and accuracy, giving businesses insights that were once out of reach. For customer relations, AI serves as a personal touchpoint, enhancing engagement through personalized interactions.

As advantageous as AI is to businesses, it also creates unique security challenges. For example, adversarial attacks subtly manipulate the input data of an AI model to make it behave abnormally, all while circumventing detection. Equally concerning is the phenomenon of data poisoning, where attackers taint an AI model during its training phase by injecting misleading data, thereby corrupting its eventual outcomes.

It is in this landscape that the Zero Trust security model of 'Trust Nothing, Verify Everything' stakes its claim as a potent counter to AI-based threats. Zero Trust moves away from the traditional notion of a secure perimeter. Instead, it assumes that any device or user, regardless of their location within or outside the network, should be considered a threat. This shift in thinking demands strict access controls, comprehensive visibility, and continuous monitoring across the IT ecosystem. As AI technologies increase operational efficiency and decision-making, they can also become conduits for attacks if not properly secured. Cybercriminals are already trying to exploit AI systems via data poisoning and adversarial attacks, making the Zero Trust model's role in securing these systems even more important.

II. Understanding AI threats

Mitigating AI risks requires a comprehensive approach to AI security, including careful design and testing of AI models, robust data protection measures, continuous monitoring for suspicious activity, and the use of secure, reliable infrastructure. Businesses need to consider the following risks when implementing AI; a toy demonstration of the data-poisoning risk follows this entry.

- Adversarial attacks: These attacks involve manipulating an AI model's input data to make the model behave in a way that the attacker desires, without triggering an alarm. For example, an attacker could manipulate a facial recognition system to misidentify an individual, allowing unauthorized access.

- Data poisoning: This type of attack involves introducing false or misleading data into an AI model during its training phase, with the aim of corrupting the model's outcomes. Since AI systems depend heavily on their training data, poisoned data can significantly impact their performance and reliability.

- Model theft and inversion attacks: Attackers might attempt to steal proprietary AI models or recreate them based on their outputs, a risk that's particularly high for models provided as a service. Additionally, attackers can try to infer sensitive information from the outputs of an AI model, like learning about the individuals in a training dataset.

- AI-enhanced cyberattacks: AI can be used by malicious actors to automate and enhance their cyberattacks. This includes using AI to perform more sophisticated phishing attacks, automate the discovery of vulnerabilities, or conduct faster, more effective brute-force attacks.

- Lack of transparency (black box problem): It's often hard to understand how complex AI models make decisions. This lack of transparency can create a security risk, as it might allow biased or malicious behavior to go undetected.

- Dependence on AI systems: As businesses increasingly rely on AI systems, any disruption to these systems can have serious consequences. This could occur due to technical issues, attacks on the AI system itself, or attacks on the underlying infrastructure.

III. Th…
Tool Threat ChatGPT ChatGPT ★★
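As flagged in the risk list above, here is a toy demonstration of the label-flipping flavor of data poisoning: a fraction of training labels on a synthetic dataset is corrupted and the resulting model is compared against a cleanly trained one. The dataset, model, and 30% poisoning rate are arbitrary choices for illustration.

    # Label-flipping data poisoning on a toy dataset: corrupting training
    # labels typically degrades the trained model measurably.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # Poison 30% of the training labels by flipping them
    rng = np.random.default_rng(0)
    idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
    y_poisoned = y_tr.copy()
    y_poisoned[idx] = 1 - y_poisoned[idx]

    poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

    print("clean accuracy:   ", clean.score(X_te, y_te))
    print("poisoned accuracy:", poisoned.score(X_te, y_te))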
knowbe4.webp 2023-05-31 13:00:00 CyberheistNews Vol 13 #22 [Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks
CyberheistNews Vol 13 #22 | May 31st, 2023

[Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks

With attackers knowing financial fraud-based phishing attacks are best suited for the one industry where the money is, this massive spike in attacks should both surprise you and not surprise you at all. When you want tires, where do you go? Right – to the tire store. Shoes? Yup – shoe store. The most money you can scam from a single attack? That's right – the financial services industry, at least according to cybersecurity vendor Armorblox's 2023 Email Security Threat Report.

According to the report, the financial services industry as a target has increased by 72% over 2022 and was the single largest target of financial fraud attacks, representing 49% of all such attacks. When breaking down the specific types of financial fraud, it doesn't get any better for the financial industry:

- 51% of invoice fraud attacks targeted the financial services industry
- 42% were payroll fraud attacks
- 63% were payment fraud

To make matters worse, nearly one-quarter (22%) of financial fraud attacks successfully bypassed native email security controls, according to Armorblox. That means one in five email-based attacks made it all the way to the Inbox. The next layer in your defense should be a user that's properly educated using security awareness training to easily identify financial fraud and other phishing-based threats, stopping them before they do actual damage.

Blog post with links:
https://blog.knowbe4.com/financial-fraud-phishing

[Live Demo] Ridiculously Easy Security Awareness Training and Phishing

Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us Wednesday, June 7, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 introduces a new-school approach to security awareness training and simulated phishing. Get a look at THREE NEW FEATURES and see how easy it is to train and phish your users.
Ransomware Malware Hack Tool Threat Conference Uber ChatGPT ChatGPT Guam ★★
knowbe4.webp 2023-05-23 13:00:00 CyberheistNews Vol 13 #21 [Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend
CyberheistNews Vol 13 #21 | May 23rd, 2023

[Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend

New data sheds light on how likely your organization will succumb to a ransomware attack, whether you can recover your data, and what's inhibiting a proper security posture. You have a solid grasp on what your organization's cybersecurity stance does and does not include. But is it enough to stop today's ransomware attacks? CyberEdge's 2023 Cyberthreat Defense Report provides some insight into just how prominent ransomware attacks are and what's keeping orgs from stopping them.

According to the report, in 2023:

- 7% of organizations were victims of a ransomware attack
- 7% of those paid a ransom
- 73% were able to recover data
- Only 21.6% experienced solely the encryption of data and no other form of extortion

It's this last data point that interests me. Nearly 78% of victim organizations experienced one or more additional forms of extortion. CyberEdge mentions threatening to publicly release data, notifying customers or media, and committing a DDoS attack as examples of additional threats mentioned by respondents.

IT decision makers were asked to rate on a scale of 1-5 (5 being the highest) what were the top inhibitors of establishing and maintaining an adequate defense. The top inhibitor (with an average rank of 3.66) was a lack of skilled personnel – we've long known the cybersecurity industry is lacking a proper pool of qualified talent. In second place, with an average ranking of 3.63, is low security awareness among employees – something only addressed by creating a strong security culture with new-school security awareness training at the center of it all.

Blog post with links:
https://blog.knowbe4.com/ransomware-victim-threats

[Free Tool] Who Will Fall Victim to QR Code Phishing Attacks?

Bad actors have a new way to launch phishing attacks to your users: weaponized QR codes. QR code phishing is especially dangerous because there is no URL to check and messages bypass traditional email filters. With the increased popularity of QR codes, users are more at…
Ransomware Hack Tool Vulnerability Threat Prediction ChatGPT ★★
AlienVault.webp 2023-05-22 10:00:00 Sharing your business's data with ChatGPT: How risky is it?
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.

As a natural language processing model, ChatGPT – and other similar machine learning-based language models – is trained on huge amounts of textual data. Processing all this data, ChatGPT can produce written responses that sound like they come from a real human being. ChatGPT learns from the data it ingests. If this information includes your sensitive business data, then sharing it with ChatGPT could potentially be risky and lead to cybersecurity concerns. For example, what if you feed ChatGPT pre-earnings company financial information, company proprietary software code, or materials used for internal presentations without realizing that practically anybody could obtain that sensitive information just by asking ChatGPT about it? If you use your smartphone to engage with ChatGPT, then a smartphone security breach could be all it takes to access your ChatGPT query history. In light of these implications, let's discuss if – and how – ChatGPT stores its users' input data, as well as potential risks you may face when sharing sensitive business data with ChatGPT.

Does ChatGPT store users' input data?

The answer is complicated. While ChatGPT does not automatically add data from queries to models specifically to make this data available for others to query, any prompt does become visible to OpenAI, the organization behind the large language model. Although no membership inference attacks have yet been carried out against the large language learning models that drive ChatGPT, databases containing saved prompts as well as embedded learnings could be potentially compromised by a cybersecurity breach. OpenAI, the parent company that developed ChatGPT, is working with other companies to limit the general access that language learning models have to personal data and sensitive information. But the technology is still in its nascent developing stages – ChatGPT was only just released to the public in November of last year. By just two months into its public release, ChatGPT had been accessed by over 100 million users, making it the fastest-growing consumer app ever at record-breaking speeds. With such rapid growth and expansion, regulations have been slow to keep up. The user base is so broad that there are abundant security gaps and vulnerabilities throughout the model.

Risks of sharing business data with ChatGPT

In June 2021, researchers from Apple, Stanford University, Google, Harvard University, and others published a paper that revealed that GPT-2, a language learning model similar to ChatGPT, could accurately recall sensitive information from training documents. The report found that GPT-2 could call up information with specific personal identifiers, recreate exact sequences of text, and provide other sensitive information when prompted. These "training data extraction attacks" could present a growing threat to the security of researchers working on machine learning models, as hackers may be able to access machine learning researcher data and steal their protected intellectual property. One data security company called Cyberhaven has released reports of ChatGPT cybersecurity vulnerabilities it has recently prevented. According to the reports, Cyberhaven has identified and prevented insecure requ…
Tool Threat Medical ChatGPT ChatGPT ★★
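One practical precaution consistent with the advice above is to scrub obvious sensitive patterns from any text before it is sent to an external LLM. The sketch below is illustrative only: the regex patterns are invented for the example and far from exhaustive, and a real deployment would rely on a proper DLP control rather than a handful of regexes.

    # Sketch of a prompt pre-filter: redact obvious sensitive patterns
    # before text ever leaves the organization. Patterns are illustrative.
    import re

    PATTERNS = {
        "EMAIL":  r"[\w.+-]+@[\w-]+\.[\w.]+",
        "SSN":    r"\b\d{3}-\d{2}-\d{4}\b",
        "CARD":   r"\b(?:\d[ -]?){13,16}\b",
        "APIKEY": r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b",
    }

    def redact(text: str) -> str:
        for label, pattern in PATTERNS.items():
            text = re.sub(pattern, f"[REDACTED-{label}]", text)
        return text

    prompt = "Review this: contact jane.doe@corp.example, SSN 123-45-6789."
    print(redact(prompt))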
Trend.webp 2023-05-12 00:00:00 Malicious AI Tool Ads Used to Deliver Redline Stealer
We've been observing malicious advertisement campaigns in Google's search engine with themes that are related to AI tools such as Midjourney and ChatGPT.
Tool ChatGPT ★★
knowbe4.webp 2023-05-09 13:00:00 CyberheistNews Vol 13 #19 [Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users
CyberheistNews Vol 13 #19 | May 9th, 2023

[Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users

Compromised websites (legitimate sites that have been successfully compromised to support social engineering) are serving visitors fake Google Chrome update error messages. "Google Chrome users who use the browser regularly should be wary of a new attack campaign that distributes malware by posing as a Google Chrome update error message," Trend Micro warns. "The attack campaign has been operational since February 2023 and has a large impact area."

The message displayed reads, "UPDATE EXCEPTION. An error occurred in Chrome automatic update. Please install the update package manually later, or wait for the next automatic update." A link is provided at the bottom of the bogus error message that takes the user to what's misrepresented as a link that will support a Chrome manual update. In fact the link will download a ZIP file that contains an EXE file. The payload is a cryptojacking Monero miner. A cryptojacker is bad enough since it will drain power and degrade device performance. This one also carries the potential for compromising sensitive information, particularly credentials, and serving as staging for further attacks.

This campaign may be more effective for its routine, innocent look. There are no spectacular threats, no promises of instant wealth, just a notice about a failed update. Users can become desensitized to the potential risks bogus messages concerning IT issues carry with them. Informed users are the last line of defense against attacks like these. New-school security awareness training can help any organization sustain that line of defense and create a strong security culture.

Blog post with links:
https://blog.knowbe4.com/fake-chrome-update-error-messages

A Master Class on IT Security: Roger A. Grimes Teaches You Phishing Mitigation

Phishing attacks have come a long way from the spray-and-pray emails of just a few decades ago. Now they're more targeted, more cunning and more dangerous. And this enormous security gap leaves you open to business email compromise, session hijacking, ransomware and more. Join Roger A. Grimes, KnowBe4's Data-Driven Defense Evangelist,…
Ransomware Data Breach Spam Malware Tool Threat Prediction NotPetya NotPetya APT 28 ChatGPT ChatGPT ★★
Anomali.webp 2023-04-25 18:22:00 Anomali Cyber Watch: Two Supply-Chain Attacks Chained Together, Decoy Dog Stealthy DNS Communication, EvilExtractor Exfiltrates to FTP Server
The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: APT, Cryptomining, Infostealers, Malvertising, North Korea, Phishing, Ransomware, and Supply-chain attacks. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity.

[Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed.]

Trending Cyber News and Threat Intelligence

First-Ever Attack Leveraging Kubernetes RBAC to Backdoor Clusters (published: April 21, 2023)

A new Monero cryptocurrency-mining campaign is the first recorded case of gaining persistence via Kubernetes (K8s) Role-Based Access Control (RBAC), according to Aquasec researchers. The recorded honeypot attack started with exploiting a misconfigured API server. The attackers proceeded by gathering information about the cluster, checking if their cluster was already deployed, and deleting some existing deployments. They used RBAC to gain persistence by creating a new ClusterRole and a new ClusterRoleBinding. The attackers then created a DaemonSet to use a single API request to target all nodes for deployment. The deployed malicious image from the public registry Docker Hub was named to impersonate a legitimate account and a popular legitimate image. It has been pulled 14,399 times and 60 exposed K8s clusters have been found with signs of exploitation by this campaign.

Analyst Comment: Your company should have protocols in place to ensure that all cluster management and cloud storage systems are properly configured and patched. K8s buckets are too often misconfigured and threat actors realize there is potential for malicious activity. A defense-in-depth (layering of security mechanisms, redundancy, fail-safe defense processes) approach is a good mitigation step to help prevent actors from highly-active threat groups. A defensive sketch for auditing ClusterRoleBindings follows this entry.

MITRE ATT&CK: [MITRE ATT&CK] T1190 - Exploit Public-Facing Application | [MITRE ATT&CK] T1496 - Resource Hijacking | [MITRE ATT&CK] T1036 - Masquerading | [MITRE ATT&CK] T1489 - Service Stop

Tags: Monero, malware-type:Cryptominer, detection:PUA.Linux.XMRMiner, file-type:ELF, abused:Docker Hub, technique:RBAC Buster, technique:Create ClusterRoleBinding, technique:Deploy DaemonSet, target-system:Linux, target:K8s, target:Kubernetes RBAC

3CX Software Supply Chain Compromise Initiated by a Prior Software Supply Chain Compromise; Suspected North Korean Actor Responsible (published: April 20, 2023)

Investigation of the previously-reported 3CX supply chain compromise (March 2023) allowed Mandiant researchers to detect it was the result of a prior software supply chain attack using a trojanized installer for X_TRADER, a software package provided by Trading Technologies. The attack involved the publicly-available tool SigFlip decrypting an RC4 stream-cipher and starting publicly-available DaveShell shellcode for reflective loading. It led to installation of the custom, modular VeiledSignal backdoor. VeiledSignal additional modules inject the C2 module in a browser process instance, create a Windows named pipe and…
Ransomware Spam Malware Tool Threat Cloud Uber APT 38 ChatGPT APT 43 ★★
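As a defensive counterpart to the RBAC-persistence technique described in the first story, a cluster operator can periodically enumerate ClusterRoleBindings and review anything bound to cluster-admin. This sketch assumes the official kubernetes Python client and a working kubeconfig; it is a starting point for an audit, not a detection rule for this specific campaign.

    # Audit sketch: list ClusterRoleBindings and flag cluster-admin grants,
    # which the RBAC-persistence technique above abused. Assumes the official
    # `kubernetes` Python client and a configured kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()

    for binding in rbac.list_cluster_role_binding().items:
        subjects = binding.subjects or []
        names = [f"{s.kind}:{s.name}" for s in subjects]
        if binding.role_ref.name == "cluster-admin":
            print(f"review: {binding.metadata.name} "
                  f"(created {binding.metadata.creation_timestamp}) -> {names}")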
RecordedFuture.webp 2023-04-23 23:23:00 CISA adds printer bug, Chrome zero-day and ChatGPT issue to exploited vulnerabilities catalog
The Cybersecurity and Infrastructure Security Agency (CISA) added an issue affecting a popular print management software tool to its list of exploited vulnerabilities on Friday. PaperCut is a software company that produces printing management software for Canon, Epson, Xerox, Brother and almost every other major printer brand. Their tools are widely used within government agencies…
Tool ChatGPT ChatGPT ★★★
knowbe4.webp 2023-04-11 13:16:54 CyberheistNews Vol 13 #15 [The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams
CyberheistNews Vol 13 #15 | April 11th, 2023

[The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams

The Federal Trade Commission is alerting consumers about a next-level, more sophisticated family emergency scam that uses AI to imitate the voice of a "family member in distress." They started out with: "You get a call. There's a panicked voice on the line. It's your grandson. He says he's in deep trouble - he wrecked the car and landed in jail. But you can help by sending money. You take a deep breath and think. You've heard about grandparent scams. But darn, it sounds just like him. How could it be a scam? Voice cloning, that's how."

"Don't Trust The Voice"

The FTC explains: "Artificial intelligence is no longer a far-fetched idea out of a sci-fi movie. We're living with it, here and now. A scammer could use AI to clone the voice of your loved one. All he needs is a short audio clip of your family member's voice - which he could get from content posted online - and a voice-cloning program. When the scammer calls you, he'll sound just like your loved one.

"So how can you tell if a family member is in trouble or if it's a scammer using a cloned voice? Don't trust the voice. Call the person who supposedly contacted you and verify the story. Use a phone number you know is theirs. If you can't reach your loved one, try to get in touch with them through another family member or their friends."

Full text of the alert is at the FTC website. Share with friends, family and co-workers:
https://blog.knowbe4.com/the-new-face-of-fraud-ftc-sheds-light-on-ai-enhanced-family-emergency-scams

A Master Class on IT Security: Roger A. Grimes Teaches Ransomware Mitigation

Cybercriminals have become thoughtful about ransomware attacks; taking time to maximize your organization's potential damage and their payoff. Protecting your network from this growing threat is more important than ever. And nobody knows this more than Roger A. Grimes, Data-Driven Defense Evangelist at KnowBe4. With 30+ years of experience as a computer security consultant, instructor, and award-winning author, Roger has dedicated his life to making…
Ransomware Data Breach Spam Malware Hack Tool Threat ChatGPT ChatGPT ★★
RecordedFuture.webp 2023-03-31 18:50:00 ChatGPT privacy and safety concerns lead to temporary ban in Italy
Italy's data protection agency has temporarily banned ChatGPT, alleging the powerful artificial intelligence tool has been illegally collecting users' data and failing to protect minors. In a provision released Thursday, the agency wrote that OpenAI, the company that owns the chatbot, does not alert users that it is collecting their data. They also contend that…
Tool ChatGPT ChatGPT ★★★
knowbe4.webp 2023-03-28 13:00:00 CyberheistNews Vol 13 #13 [Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks
CyberheistNews Vol 13 #13 | March 28th, 2023

[Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks

Users need to adapt to an evolving threat landscape in which attackers can use AI tools like ChatGPT to craft extremely convincing phishing emails, according to Matthew Tyson at CSO. "A leader tasked with cybersecurity can get ahead of the game by understanding where we are in the story of machine learning (ML) as a hacking tool," Tyson writes. "At present, the most important area of relevance around AI for cybersecurity is content generation. This is where machine learning is making its greatest strides and it dovetails nicely for hackers with vectors such as phishing and malicious chatbots. The capacity to craft compelling, well-formed text is in the hands of anyone with access to ChatGPT, and that's basically anyone with an internet connection."

Tyson quotes Conal Gallagher, CIO and CISO at Flexera, as saying that since attackers can now write grammatically correct phishing emails, users will need to pay attention to the circumstances of the emails. "Looking for bad grammar and incorrect spelling is a thing of the past - even pre-ChatGPT phishing emails have been getting more sophisticated," Gallagher said. "We must ask: 'Is the email expected? Is the from address legit? Is the email enticing you to click on a link?' Security awareness training still has a place to play here."

Tyson explains that technical defenses have become very effective, so attackers focus on targeting humans to bypass these measures. "Email and other elements of software infrastructure offer built-in fundamental security that largely guarantees we are not in danger until we ourselves take action," Tyson writes. "This is where we can install a tripwire in our mindsets: we should be hyper aware of what it is we are acting upon when we act upon it. Not until an employee sends a reply, runs an attachment, or fills in a form is sensitive information at risk. The first ring of defense in our mentality should be: 'Is the content I'm looking at legit, not just based on its internal aspects, but given the entire context?' The second ring of defense in our mentality then has to be, 'Wait! I'm being asked to do something here.'"

New-school security awareness training with simulated phishing tests enables your employees to recognize increasingly sophisticated phishing attacks and builds a strong security culture. Remember: Culture eats strategy for breakfast and is always top-down.

Blog post with links:
https://blog.knowbe4.com/identifying-ai-enabled-phishing
Ransomware Malware Hack Tool Threat Guideline ChatGPT ChatGPT ★★★
InfoSecurityMag.webp 2023-03-16 10:30:00 NCSC Calms Fears Over ChatGPT Threat
Tool won't democratize cybercrime, agency argues
Tool Threat ChatGPT ChatGPT ★★
Anomali.webp 2023-03-14 17:32:00 Anomali Cyber Watch: Xenomorph Automates The Whole Fraud Chain on Android, IceFire Ransomware Started Targeting Linux, Mythic Leopard Delivers Spyware Using Romance Scam
The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: Android, APT, DLL side-loading, Iran, Linux, Malvertising, Mobile, Pakistan, Ransomware, and Windows. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity.

[Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed.]

Trending Cyber News and Threat Intelligence

Xenomorph V3: a New Variant with ATS Targeting More Than 400 Institutions (published: March 10, 2023)

Newer versions of the Xenomorph Android banking trojan are able to target 400 applications: cryptocurrency wallets and mobile banking from around the world, with the top targeted countries being Spain, Turkey, Poland, USA, and Australia (in that order). Since February 2022, several small, testing Xenomorph campaigns have been detected. Its current version, Xenomorph v3 (Xenomorph.C), is available on the Malware-as-a-Service model. This trojan version was delivered using the Zombinder binding service to bind it to a legitimate currency converter. Xenomorph v3 automatically collects and exfiltrates credentials using the ATS (Automated Transfer Systems) framework. The command-and-control traffic is blended in by abusing the Discord Content Delivery Network.

Analyst Comment: Fraud chain automation makes Xenomorph v3 a dangerous malware that might significantly increase its prevalence on the threat landscape. Users should keep their mobile devices updated and avail of mobile antivirus and VPN protection services. Install only applications that you actually need, use the official store, and check the app description and reviews. Organizations that publish applications for their customers are invited to use Anomali's Premium Digital Risk Protection service to discover rogue, malicious apps impersonating your brand that security teams typically do not search or monitor.

MITRE ATT&CK: [MITRE ATT&CK] T1417.001 - Input Capture: Keylogging | [MITRE ATT&CK] T1417.002 - Input Capture: Gui Input Capture

Tags: malware:Xenomorph, Mobile, actor:Hadoken Security Group, actor:HadokenSecurity, malware-type:Banking trojan, detection:Xenomorph.C, Malware-as-a-Service, Accessibility services, Overlay attack, Discord CDN, Cryptocurrency wallet, target-industry:Cryptocurrency, target-industry:Banking, target-country:Spain, target-country:ES, target-country:Turkey, target-country:TR, target-country:Poland, target-country:PL, target-country:USA, target-country:US, target-country:Australia, target-country:AU, malware:Zombinder, detection:Zombinder.A, Android

Cobalt Illusion Masquerades as Atlantic Council Employee (published: March 9, 2023)

A new campaign by Iran-sponsored Charming Kitten (APT42, Cobalt Illusion, Magic Hound, Phosphorous) was detected targeting Mahsa Amini protests and researchers who document the suppression of women and minority groups i…
Ransomware Malware Tool Vulnerability Threat Guideline Conference APT 35 ChatGPT ChatGPT APT 36 APT 42 ★★
globalsecuritymag.webp 2023-03-09 17:20:02 ChatGPT: A tool for offensive cyber operations?! Not so fast!
By John Borrero Rodriguez, Trellix - Opinion
Tool ChatGPT ★★
knowbe4.webp 2023-02-28 14:00:00 CyberheistNews Vol 13 #09 [Eye Opener] Should You Click on Unsubscribe?
CyberheistNews Vol 13 #09 | February 28th, 2023

[Eye Opener] Should You Click on Unsubscribe?

By Roger A. Grimes. Some common questions we get are "Should I click on an unwanted email's 'Unsubscribe' link? Will that lead to more or less unwanted email?"

The short answer is that, in general, it is OK to click on a legitimate vendor's unsubscribe link. But if you think the email is sketchy or coming from a source you would not want to validate your email address as valid and active, or are unsure, do not take the chance, skip the unsubscribe action.

In many countries, legitimate vendors are bound by law to offer (free) unsubscribe functionality and abide by a user's preferences. For example, in the U.S., the 2003 CAN-SPAM Act states that businesses must offer clear instructions on how the recipient can remove themselves from the involved mailing list and that request must be honored within 10 days.

Note: Many countries have laws similar to the CAN-SPAM Act, although with privacy protection ranging the privacy spectrum from very little to a lot more protection.

The unsubscribe feature does not have to be a URL link, but it does have to be an "internet-based way." The most popular alternative method besides a URL link is an email address to use. In some cases, there are specific instructions you have to follow, such as put "Unsubscribe" in the subject of the email. Other times you are expected to craft your own message. Luckily, most of the time simply sending any email to the listed unsubscribe email address is enough to remove your email address from the mailing list.

[CONTINUED] at the KnowBe4 blog:
https://blog.knowbe4.com/should-you-click-on-unsubscribe

[Live Demo] Ridiculously Easy Security Awareness Training and Phishing

Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, March 1, @ 2:00 PM (ET), for a live demo of how KnowBe4 introduces a new-school approach…
Malware Hack Tool Vulnerability Threat Guideline Prediction APT 38 ChatGPT ★★★
RecordedFuture.webp 2023-02-23 19:02:13 Hackers use ChatGPT phishing websites to infect users with malware
Cyble says cybercriminals are setting up phishing websites that mimic the branding of ChatGPT, an AI tool that has exploded in popularity
Malware Tool ChatGPT ★★★
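Campaigns like this typically rely on look-alike domains, so one simple defensive check is flagging hostnames within a small edit distance of the impersonated brand. The sketch below uses a pure-Python Levenshtein distance; the domain list and distance threshold are invented for illustration, and real typosquat detection would also handle homoglyphs, extra keywords, and subdomains.

    # Flag domains whose first label is a near-miss of a brand name.
    # Pure-Python Levenshtein edit distance; thresholds are illustrative.
    def levenshtein(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    BRAND = "chatgpt"
    for domain in ["chat-gpt-online.example", "chatgpt.com", "chatqpt.example"]:
        label = domain.split(".")[0].replace("-", "")
        dist = levenshtein(label, BRAND)
        if 0 < dist <= 2:
            print(f"possible typosquat: {domain} (distance {dist})")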
bleepingcomputer.webp 2023-02-22 16:58:19 Hackers use fake ChatGPT apps to push Windows, Android malware
Threat actors are actively exploiting the popularity of OpenAI's ChatGPT AI tool to distribute Windows malware, infect Android devices with spyware, or direct unsuspecting victims to phishing pages. [...]
Malware Tool Threat ChatGPT ★★★
mcafee.webp 2023-01-26 00:37:55 ChatGPT: A Scammer's Newest Tool
> ChatGPT: Everyone's favorite chatbot/writer's-block buster/ridiculous short story creator is skyrocketing in fame. In fact, the AI-generated content "masterpieces" (by...
Tool ChatGPT ★★
CSO.webp 2023-01-16 02:00:00 How AI chatbot ChatGPT changes the phishing game
ChatGPT, OpenAI's free chatbot based on GPT-3.5, was released on 30 November 2022 and racked up a million users in five days. It is capable of writing emails, essays, code and phishing emails, if the user knows how to ask. By comparison, it took Twitter two years to reach a million users. Facebook took ten months, Dropbox seven months, Spotify five months, Instagram six weeks. Pokemon Go took ten hours, so don't break out the champagne bottles, but still, five days is pretty impressive for a web-based tool that didn't have any built-in name recognition.
Tool ChatGPT ★★
Anomali.webp 2023-01-10 16:30:00 Anomali Cyber Watch: Turla Re-Registered Andromeda Domains, SpyNote Is More Popular after the Source Code Publication, Typosquatted Site Used to Leak Company's Data
The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: APT, Artificial intelligence, Expired C2 domains, Data leak, Mobile, Phishing, Ransomware, and Typosquatting. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity.

[Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed.]

Trending Cyber News and Threat Intelligence

OPWNAI: Cybercriminals Starting to Use ChatGPT (published: January 6, 2023)

Check Point researchers have detected multiple underground forum threads outlining experimenting with and abusing ChatGPT (Generative Pre-trained Transformer), the revolutionary artificial intelligence (AI) chatbot tool capable of generating creative responses in a conversational manner. Several actors have built schemes to produce AI outputs (graphic art, books) and sell them as their own. Other actors experiment with instructions to write AI-generated malicious code while avoiding ChatGPT guardrails that should prevent such abuse. Two actors shared samples allegedly created using ChatGPT: a basic Python-based stealer, a Java downloader that stealthily runs payloads using PowerShell, and a cryptographic tool.

Analyst Comment: ChatGPT and similar tools can be of great help to humans creating art, writing texts, and programming. At the same time, they can be dangerous tools enabling even low-skill threat actors to create convincing social-engineering lures and even new malware.

MITRE ATT&CK: [MITRE ATT&CK] T1566 - Phishing | [MITRE ATT&CK] T1059.001: PowerShell | [MITRE ATT&CK] T1048.003 - Exfiltration Over Alternative Protocol: Exfiltration Over Unencrypted/Obfuscated Non-C2 Protocol | [MITRE ATT&CK] T1560 - Archive Collected Data | [MITRE ATT&CK] T1005: Data from Local System

Tags: ChatGPT, Artificial intelligence, OpenAI, Phishing, Programming, Fraud, Chatbot, Python, Java, Cryptography, FTP

Turla: A Galaxy of Opportunity (published: January 5, 2023)

Russia-sponsored group Turla re-registered expired domains for old Andromeda malware to select a Ukrainian target from the existing victims. An Andromeda sample, known from 2013, infected the Ukrainian organization in December 2021 via a user-activated LNK file on an infected USB drive. Turla re-registered the Andromeda C2 domain in January 2022, profiled and selected a single victim, and pushed its payloads in September 2022. First, the Kopiluwak profiling tool was downloaded for system reconnaissance; two days later, the Quietcanary backdoor was deployed to find and exfiltrate files created in 2021-2022.

Analyst Comment: Advanced groups are often utilizing commodity malware to blend their traffic with less sophisticated threats. Turla's tactic of re-registering old but active C2 domains gives the group a way in to the pool of existing targets. Organizations should be vigilant to all kinds of existing infections and clean them up, even if assessed as "less dangerous." All known network and host-based indicators and hunting rules associated…
Ransomware Malware Tool Threat ChatGPT APT-C-36 ★★
Chercheur.webp 2023-01-10 12:18:55 ChatGPT-Written Malware
I don't know how much of a thing this will end up being, but we are seeing ChatGPT-written malware in the wild. …within a few weeks of ChatGPT going live, participants in cybercrime forums, some with little or no coding experience, were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks. "It's still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web," company researchers wrote. "However, the cybercriminal community has already shown significant interest and are jumping into this latest trend to generate malicious code."...
Malware Tool Prediction ChatGPT ★★
Last update at: 2024-05-19 17:08:07