What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
silicon.fr.webp 2023-04-12 08:39:41 BlackMamba: is ChatGPT-generated malware a new type of threat? (lien direct) BlackMamba is a proof-of-concept malware: a demonstration program built on a benign executable that, by calling a highly reputable AI service (OpenAI) at runtime, retrieves synthesized, polymorphic malicious code designed to steal the keystrokes of the infected system's user. Malware ChatGPT ChatGPT ★★★
knowbe4.webp 2023-04-11 13:16:54 CyberheistNews Vol 13 #15 [The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams
(lien direct)
CyberheistNews Vol 13 #15 | April 11th, 2023 [The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams The Federal Trade Commission is alerting consumers about a next-level, more sophisticated family emergency scam that uses AI to imitate the voice of a "family member in distress." They started out with: "You get a call. There's a panicked voice on the line. It's your grandson. He says he's in deep trouble - he wrecked the car and landed in jail. But you can help by sending money. You take a deep breath and think. You've heard about grandparent scams. But darn, it sounds just like him. How could it be a scam? Voice cloning, that's how." "Don't Trust The Voice" The FTC explains: "Artificial intelligence is no longer a far-fetched idea out of a sci-fi movie. We're living with it, here and now. A scammer could use AI to clone the voice of your loved one. All he needs is a short audio clip of your family member's voice - which he could get from content posted online - and a voice-cloning program. When the scammer calls you, he'll sound just like your loved one. "So how can you tell if a family member is in trouble or if it's a scammer using a cloned voice? Don't trust the voice. Call the person who supposedly contacted you and verify the story. Use a phone number you know is theirs. If you can't reach your loved one, try to get in touch with them through another family member or their friends." Full text of the alert is at the FTC website. Share with friends, family and co-workers: https://blog.knowbe4.com/the-new-face-of-fraud-ftc-sheds-light-on-ai-enhanced-family-emergency-scams A Master Class on IT Security: Roger A. Grimes Teaches Ransomware Mitigation Cybercriminals have become thoughtful about ransomware attacks, taking time to maximize your organization's potential damage and their payoff. Protecting your network from this growing threat is more important than ever. 
And nobody knows this more than Roger A. Grimes, Data-Driven Defense Evangelist at KnowBe4. With 30+ years of experience as a computer security consultant, instructor, and award-winning author, Roger has dedicated his life to making Ransomware Data Breach Spam Malware Hack Tool Threat ChatGPT ChatGPT ★★
01net.webp 2023-04-07 13:00:42 This undetectable malware is signed... ChatGPT (lien direct) ChatGPT is a formidable weapon for hackers. With the help of AI, it is possible to produce dangerous, undetectable malware... without writing a single line of code. Malware ChatGPT ChatGPT ★★
DarkReading.webp 2023-04-05 16:20:00
Researcher Tricks ChatGPT into Building Undetectable Steganography Malware
(lien direct)
Using only ChatGPT prompts, a Forcepoint researcher convinced the AI to create malware for finding and exfiltrating specific documents, despite its directive to refuse malicious requests.
Malware ChatGPT ChatGPT ★★★
knowbe4.webp 2023-04-04 13:00:00
CyberheistNews Vol 13 #14 [Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist
(lien direct)
CyberheistNews Vol 13 #14 | April 4th, 2023 [Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist The details in this thwarted VEC attack demonstrate how the use of just a few key details can both establish credibility and indicate the entire thing is a scam. It's not every day you hear about a purely social engineering-based scam taking place that is looking to run away with tens of millions of dollars. But, according to security researchers at Abnormal Security, cybercriminals are becoming brazen and are taking their shots at very large prizes. This attack begins with a case of VEC - where a domain is impersonated. In the case of this attack, the impersonated vendor's domain (which had a .com top-level domain) was replaced with a matching .cam domain (.cam domains are supposedly used for photography enthusiasts, but there's the now-obvious problem of it looking very much like .com at a cursory glance). The email attaches a legitimate-looking payoff letter complete with loan details. According to Abnormal Security, nearly every aspect of the request looked legitimate. The telltale signs primarily revolved around the use of the lookalike domain, but there were other grammatical mistakes (that can easily be addressed by using an online grammar service or ChatGPT). This attack was identified well before it caused any damage, but the social engineering tactics leveraged were nearly enough to make it successful. Security solutions will help stop most attacks, but for those that make it past scanners, your users need to play a role in spotting and stopping BEC, VEC and phishing attacks themselves - something taught through security awareness training combined with frequent simulated phishing and other social engineering tests. 
Blog post with screenshots and links:https://blog.knowbe4.com/36-mil-vendor-email-compromise-attack [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, April 5, @ 2:00 PM (ET), for a live demo of how KnowBe4 i Ransomware Malware Hack Threat ChatGPT ChatGPT APT 43 ★★
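The .com-to-.cam swap described in this VEC attack can be screened for with a simple sender-domain check. The sketch below is purely illustrative (the trusted-domain set, the TLD-swap table, and the function names are assumptions, not a description of any vendor's product):

```python
# Illustrative sketch: flag sender domains that mimic a trusted vendor
# domain via a lookalike top-level-domain swap (e.g. vendor.com -> vendor.cam).
# The lookalike table below is a small assumed example, not an exhaustive list.

LOOKALIKE_TLDS = {".com": [".cam", ".co", ".om"]}

def is_lookalike(sender_domain: str, trusted_domains: set) -> bool:
    """Return True if sender_domain matches a trusted domain with its TLD swapped."""
    if sender_domain in trusted_domains:
        return False  # exact match with a trusted domain is fine
    for trusted in trusted_domains:
        for real_tld, fakes in LOOKALIKE_TLDS.items():
            if trusted.endswith(real_tld):
                base = trusted[: -len(real_tld)]  # e.g. "vendor" from "vendor.com"
                for fake in fakes:
                    if sender_domain == base + fake:
                        return True
    return False
```

A mail gateway rule built on a check like this would have flagged the `.cam` sender before the payoff letter ever reached a human reviewer.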
knowbe4.webp 2023-03-28 13:00:00 CyberheistNews Vol 13 #13 [Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks (lien direct) CyberheistNews Vol 13 #13 | March 28th, 2023 [Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks Users need to adapt to an evolving threat landscape in which attackers can use AI tools like ChatGPT to craft extremely convincing phishing emails, according to Matthew Tyson at CSO. "A leader tasked with cybersecurity can get ahead of the game by understanding where we are in the story of machine learning (ML) as a hacking tool," Tyson writes. "At present, the most important area of relevance around AI for cybersecurity is content generation. "This is where machine learning is making its greatest strides and it dovetails nicely for hackers with vectors such as phishing and malicious chatbots. The capacity to craft compelling, well-formed text is in the hands of anyone with access to ChatGPT, and that's basically anyone with an internet connection." Tyson quotes Conal Gallagher, CIO and CISO at Flexera, as saying that since attackers can now write grammatically correct phishing emails, users will need to pay attention to the circumstances of the emails. "Looking for bad grammar and incorrect spelling is a thing of the past - even pre-ChatGPT phishing emails have been getting more sophisticated," Gallagher said. "We must ask: 'Is the email expected? Is the from address legit? Is the email enticing you to click on a link?' Security awareness training still has a place to play here." Tyson explains that technical defenses have become very effective, so attackers focus on targeting humans to bypass these measures. 
"Email and other elements of software infrastructure offer built-in fundamental security that largely guarantees we are not in danger until we ourselves take action," Tyson writes. "This is where we can install a tripwire in our mindsets: we should be hyper aware of what it is we are acting upon when we act upon it. "Not until an employee sends a reply, runs an attachment, or fills in a form is sensitive information at risk. The first ring of defense in our mentality should be: \'Is the content I\'m looking at legit, not just based on its internal aspects, but given the entire context?\' The second ring of defense in our mentality then has to be, \'Wait! I\'m being asked to do something here.\'" New-school security awareness training with simulated phishing tests enables your employees to recognize increasingly sophisticated phishing attacks and builds a strong security culture. Remember: Culture eats strategy for breakfast and is always top-down. Blog post with links:https://blog.knowbe4.com/identifying-ai-enabled-phishing Ransomware Malware Hack Tool Threat Guideline ChatGPT ChatGPT ★★★
Checkpoint.webp 2023-03-16 00:49:59 Check Point Research conducts Initial Security Analysis of ChatGPT4, Highlighting Potential Scenarios For Accelerated Cybercrime (lien direct) Highlights: Check Point Research (CPR) releases an initial analysis of ChatGPT4, surfacing five scenarios that can allow threat actors to streamline malicious efforts and preparations faster and with more precision. In some instances, even non-technical actors can create harmful tools. The five scenarios provided span impersonations of banks, reverse shells, C++ malware and more. Despite… Malware Threat ChatGPT ★★
Anomali.webp 2023-03-14 17:32:00 Anomali Cyber Watch: Xenomorph Automates The Whole Fraud Chain on Android, IceFire Ransomware Started Targeting Linux, Mythic Leopard Delivers Spyware Using Romance Scam (lien direct)   Anomali Cyber Watch: Xenomorph Automates The Whole Fraud Chain on Android, IceFire Ransomware Started Targeting Linux, Mythic Leopard Delivers Spyware Using Romance Scam, and More. The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: Android, APT, DLL side-loading, Iran, Linux, Malvertising, Mobile, Pakistan, Ransomware, and Windows. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity. Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed. Trending Cyber News and Threat Intelligence Xenomorph V3: a New Variant with ATS Targeting More Than 400 Institutions (published: March 10, 2023) Newer versions of the Xenomorph Android banking trojan are able to target 400 applications: cryptocurrency wallets and mobile banking from around the World with the top targeted countries being Spain, Turkey, Poland, USA, and Australia (in that order). Since February 2022, several small, testing Xenomorph campaigns have been detected. Its current version Xenomorph v3 (Xenomorph.C) is available on the Malware-as-a-Service model. This trojan version was delivered using the Zombinder binding service to bind it to a legitimate currency converter. Xenomorph v3 automatically collects and exfiltrates credentials using the ATS (Automated Transfer Systems) framework. The command-and-control traffic is blended in by abusing Discord Content Delivery Network. Analyst Comment: Fraud chain automation makes Xenomorph v3 a dangerous malware that might significantly increase its prevalence on the threat landscape. 
Users should keep their mobile devices updated and avail of mobile antivirus and VPN protection services. Install only applications that you actually need, use the official store and check the app description and reviews. Organizations that publish applications for their customers are invited to use Anomali's Premium Digital Risk Protection service to discover rogue, malicious apps impersonating your brand that security teams typically do not search or monitor. MITRE ATT&CK: [MITRE ATT&CK] T1417.001 - Input Capture: Keylogging | [MITRE ATT&CK] T1417.002 - Input Capture: Gui Input Capture Tags: malware:Xenomorph, Mobile, actor:Hadoken Security Group, actor:HadokenSecurity, malware-type:Banking trojan, detection:Xenomorph.C, Malware-as-a-Service, Accessibility services, Overlay attack, Discord CDN, Cryptocurrency wallet, target-industry:Cryptocurrency, target-industry:Banking, target-country:Spain, target-country:ES, target-country:Turkey, target-country:TR, target-country:Poland, target-country:PL, target-country:USA, target-country:US, target-country:Australia, target-country:AU, malware:Zombinder, detection:Zombinder.A, Android Cobalt Illusion Masquerades as Atlantic Council Employee (published: March 9, 2023) A new campaign by Iran-sponsored Charming Kitten (APT42, Cobalt Illusion, Magic Hound, Phosphorous) was detected targeting Mahsa Amini protests and researchers who document the suppression of women and minority groups i Ransomware Malware Tool Vulnerability Threat Guideline Conference APT 35 ChatGPT ChatGPT APT 36 APT 42 ★★
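The roundup notes that the attached IOCs "can be used to check your logs for potential malicious activity." A minimal grep-style sweep over log lines can be sketched as follows; the indicator values and log format here are invented placeholders for illustration, not real Xenomorph IOCs:

```python
# Illustrative sketch: scan log lines for known-bad indicator substrings
# (domains, hashes, URLs). The IOC values below are placeholders only.

iocs = {"bad-cdn.example.net", "zombinder.example.org"}

def find_ioc_hits(log_lines):
    """Yield (line_number, ioc) for every IOC substring found in the logs."""
    for n, line in enumerate(log_lines, start=1):
        for ioc in iocs:
            if ioc in line:
                yield n, ioc

logs = [
    "GET https://cdn.discordapp.example/asset 200",
    "POST https://bad-cdn.example.net/c2 beacon",
]
hits = list(find_ioc_hits(logs))
```

In practice this kind of matching is done by a SIEM with normalized fields rather than raw substring search, but the principle - intersecting log data with a curated indicator set - is the same.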
knowbe4.webp 2023-03-14 13:00:00 CyberheistNews Vol 13 #11 [Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears (lien direct) CyberheistNews Vol 13 #11 CyberheistNews Vol 13 #11  |   March 14th, 2023 [Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears Robert Lemos at DARKReading just reported on a worrying trend. The title said it all, and the news is that more than 4% of employees have put sensitive corporate data into the large language model, raising concerns that its popularity may result in massive leaks of proprietary information. Yikes. I'm giving you a short extract of the story and the link to the whole article is below. "Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models, and that information could be retrieved at a later date if proper data security isn't in place for the service. "In a recent report, data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential info, client data, source code, or regulated information to the LLM. "In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company. "And as more employees use ChatGPT and other AI-based services as productivity tools, the risk will grow, says Howard Ting, CEO of Cyberhaven. "'There was this big migration of data from on-prem to cloud, and the next big shift is going to be the migration of data into these generative apps," he says. 
"And how that plays out [remains to be seen] - I think, we're in pregame; we're not even in the first inning.'" Your employees need to be stepped through new-school security awareness training so that they understand the risks of doing things like this. Blog post with links:https://blog.knowbe4.com/employees-are-feeding-sensitive-biz-data-to-chatgpt-raising-security-fears [New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blockl Ransomware Data Breach Spam Malware Threat Guideline Medical ChatGPT ChatGPT ★★
The_Hackers_News.webp 2023-03-11 19:02:00 BATLOADER Malware Uses Google Ads to Deliver Vidar Stealer and Ursnif Payloads (lien direct) The malware downloader known as BATLOADER has been observed abusing Google Ads to deliver secondary payloads like Vidar Stealer and Ursnif. According to cybersecurity company eSentire, malicious ads are used to spoof a wide range of legitimate apps and services such as Adobe, OpenAPI's ChatGPT, Spotify, Tableau, and Zoom. BATLOADER, as the name suggests, is a loader that's responsible for Malware ChatGPT ★★
DarkReading.webp 2023-03-08 16:50:40 AI-Powered 'BlackMamba' Keylogging Attack Evades Modern EDR Security (lien direct) Researchers warn that polymorphic malware created with ChatGPT and other LLMs will force a reinvention of security automation. Malware ChatGPT ChatGPT ★★
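Why polymorphism defeats per-sample signatures: any change to a payload, however small, yields an unrelated cryptographic hash, so defenses keyed on known-bad hashes never see a repeat. A minimal, purely defensive demonstration of that property:

```python
import hashlib

# Two payloads differing by a single byte produce completely unrelated
# SHA-256 digests - which is why hash-based blocklists fail against
# polymorphic malware that regenerates its code on every run.
variant_a = b"example payload v1"
variant_b = b"example payload v2"

digest_a = hashlib.sha256(variant_a).hexdigest()
digest_b = hashlib.sha256(variant_b).hexdigest()
assert digest_a != digest_b  # a blocklist entry for digest_a misses variant_b
```

This is the gap behavioral detection and EDR heuristics are meant to close: they key on what the code does at runtime rather than on what its bytes hash to.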
knowbe4.webp 2023-02-28 14:00:00 CyberheistNews Vol 13 #09 [Eye Opener] Should You Click on Unsubscribe? (lien direct) CyberheistNews Vol 13 #09 CyberheistNews Vol 13 #09  |   February 28th, 2023 [Eye Opener] Should You Click on Unsubscribe? By Roger A. Grimes. Some common questions we get are "Should I click on an unwanted email's 'Unsubscribe' link? Will that lead to more or less unwanted email?" The short answer is that, in general, it is OK to click on a legitimate vendor's unsubscribe link. But if you think the email is sketchy or coming from a source you would not want to validate your email address as valid and active, or are unsure, do not take the chance, skip the unsubscribe action. In many countries, legitimate vendors are bound by law to offer (free) unsubscribe functionality and abide by a user's preferences. For example, in the U.S., the 2003 CAN-SPAM Act states that businesses must offer clear instructions on how the recipient can remove themselves from the involved mailing list and that request must be honored within 10 days. Note: Many countries have laws similar to the CAN-SPAM Act, although with privacy protection ranging the privacy spectrum from very little to a lot more protection. The unsubscribe feature does not have to be a URL link, but it does have to be an "internet-based way." The most popular alternative method besides a URL link is an email address to use. In some cases, there are specific instructions you have to follow, such as put "Unsubscribe" in the subject of the email. Other times you are expected to craft your own message. Luckily, most of the time simply sending any email to the listed unsubscribe email address is enough to remove your email address from the mailing list. [CONTINUED] at the KnowBe4 blog:https://blog.knowbe4.com/should-you-click-on-unsubscribe [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. 
Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, March 1, @ 2:00 PM (ET), for a live demo of how KnowBe4 introduces a new-school approac Malware Hack Tool Vulnerability Threat Guideline Prediction APT 38 ChatGPT ★★★
RecordedFuture.webp 2023-02-23 19:02:13 Hackers use ChatGPT phishing websites to infect users with malware (lien direct) Cyble says cybercriminals are setting up phishing websites that mimic the branding of ChatGPT, an AI tool that has exploded in popularity. Malware Tool ChatGPT ★★★
01net.webp 2023-02-23 09:25:51 A fake ChatGPT app for Windows threatens to hijack your Facebook, TikTok and Google accounts (lien direct) A fake ChatGPT application for Windows PCs is spreading on social networks. It hides malware capable of stealing the credentials of Facebook, TikTok and Google accounts. Malware ChatGPT ★★
InfoSecurityMag.webp 2023-02-23 09:20:00 Phishing Sites and Apps Use ChatGPT as Lure (lien direct) Campaigns designed to steal card information and install malware Malware ChatGPT ★★
The_State_of_Security.webp 2023-02-23 09:07:44 Fake ChatGPT apps spread Windows and Android malware (lien direct) OpenAI's ChatGPT chatbot has been a phenomenon, taking the internet by storm. Whether it is composing poetry, writing essays for college students, or finding bugs in computer code, it has impressed millions of people and proven itself to be the most accessible form of artificial intelligence ever seen. Yes, there are plenty of fears about how the technology could be used and abused, questions to be answered about its ethical use and how regulators might police its use, and worries that some may not realise that ChatGPT is not as smart as it initially appears. But no-one can deny that it has... Malware ChatGPT ★★
bleepingcomputer.webp 2023-02-22 16:58:19 Hackers use fake ChatGPT apps to push Windows, Android malware (lien direct) Threat actors are actively exploiting the popularity of OpenAI's ChatGPT AI tool to distribute Windows malware, infect Android devices with spyware, or direct unsuspecting victims to phishing pages. [...] Malware Tool Threat ChatGPT ★★★
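The fake-ChatGPT campaigns above rely on domains and app names a small edit away from the real brand. A simple edit-distance screen catches many of them; the threshold and example domains below are assumptions for illustration:

```python
def levenshtein(s: str, t: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, start=1):
        curr = [i]
        for j, ct in enumerate(t, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (cs != ct),  # substitution (0 if chars match)
            ))
        prev = curr
    return prev[-1]

def looks_typosquatted(domain: str, brand: str, max_dist: int = 2) -> bool:
    """Flag domains within a small, nonzero edit distance of a brand domain."""
    return 0 < levenshtein(domain, brand) <= max_dist
```

Run against newly registered domains, a check like this surfaces candidates such as a hyphenated or one-letter-off variant of a brand's real domain for human review; it is a triage heuristic, not a verdict.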
globalsecuritymag.webp 2023-02-22 16:25:56 New malware steals social media credentials by posing as a ChatGPT application (lien direct) Malware ChatGPT ★★★
no_ico.webp 2023-02-09 17:05:17 Hackers Bypass ChatGPT Restrictions Via Telegram Bots (lien direct) Researchers revealed on Wednesday that hackers had found a means to get beyond ChatGPT’s limitations and are using it to market services that let users produce malware and phishing emails. ChatGPT is a chatbot that imitates human output by using artificial intelligence to respond to inquiries and carry out tasks.  People can use it to […] Malware ChatGPT ★★
ArsTechnica.webp 2023-02-08 18:54:03 Hackers are selling a service that bypasses ChatGPT restrictions on malware (lien direct) ChatGPT restrictions on the creation of illicit content are easy to circumvent. Malware ChatGPT ★★★
DarkReading.webp 2023-01-18 19:21:00 ChatGPT Could Create Polymorphic Malware Wave, Researchers Warn (lien direct) The powerful AI bot can produce malware without malicious code, making it tough to mitigate. Malware ChatGPT ★★★
InfoSecurityMag.webp 2023-01-18 16:00:00 ChatGPT Creates Polymorphic Malware (lien direct) The first step to creating the malware was to bypass ChatGPT content filters Malware ChatGPT ★★
Anomali.webp 2023-01-10 16:30:00 Anomali Cyber Watch: Turla Re-Registered Andromeda Domains, SpyNote Is More Popular after the Source Code Publication, Typosquatted Site Used to Leak Company's Data (lien direct) The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: APT, Artificial intelligence, Expired C2 domains, Data leak, Mobile, Phishing, Ransomware, and Typosquatting. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity. Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed. Trending Cyber News and Threat Intelligence OPWNAI: Cybercriminals Starting to Use ChatGPT (published: January 6, 2023) Check Point researchers have detected multiple underground forum threads describing experiments with and abuse of ChatGPT (Generative Pre-trained Transformer), the revolutionary artificial intelligence (AI) chatbot tool capable of generating creative responses in a conversational manner. Several actors have built schemes to produce AI outputs (graphic art, books) and sell them as their own. Other actors experiment with instructions to write AI-generated malicious code while avoiding the ChatGPT guardrails that should prevent such abuse. Two actors shared samples allegedly created using ChatGPT: a basic Python-based stealer, a Java downloader that stealthily runs payloads using PowerShell, and a cryptographic tool. Analyst Comment: ChatGPT and similar tools can be of great help to humans creating art, writing texts, and programming. At the same time, they can be dangerous tools enabling even low-skill threat actors to create convincing social-engineering lures and even new malware. 
MITRE ATT&CK: [MITRE ATT&CK] T1566 - Phishing | [MITRE ATT&CK] T1059.001: PowerShell | [MITRE ATT&CK] T1048.003 - Exfiltration Over Alternative Protocol: Exfiltration Over Unencrypted/Obfuscated Non-C2 Protocol | [MITRE ATT&CK] T1560 - Archive Collected Data | [MITRE ATT&CK] T1005: Data from Local System Tags: ChatGPT, Artificial intelligence, OpenAI, Phishing, Programming, Fraud, Chatbot, Python, Java, Cryptography, FTP Turla: A Galaxy of Opportunity (published: January 5, 2023) Russia-sponsored group Turla re-registered expired domains for old Andromeda malware to select a Ukrainian target from the existing victims. Andromeda sample, known from 2013, infected the Ukrainian organization in December 2021 via user-activated LNK file on an infected USB drive. Turla re-registered the Andromeda C2 domain in January 2022, profiled and selected a single victim, and pushed its payloads in September 2022. First, the Kopiluwak profiling tool was downloaded for system reconnaissance, two days later, the Quietcanary backdoor was deployed to find and exfiltrate files created in 2021-2022. Analyst Comment: Advanced groups are often utilizing commodity malware to blend their traffic with less sophisticated threats. Turla’s tactic of re-registering old but active C2 domains gives the group a way-in to the pool of existing targets. Organizations should be vigilant to all kinds of existing infections and clean them up, even if assessed as “less dangerous.” All known network and host-based indicators and hunting rules associated Ransomware Malware Tool Threat ChatGPT APT-C-36 ★★
Chercheur.webp 2023-01-10 12:18:55 ChatGPT-Written Malware (lien direct) I don’t know how much of a thing this will end up being, but we are seeing ChatGPT-written malware in the wild. …within a few weeks of ChatGPT going live, participants in cybercrime forums—­some with little or no coding experience­—were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks. “It's still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web,” company researchers wrote. “However, the cybercriminal community has already shown significant interest and are jumping into this latest trend to generate malicious code.”... Malware Tool Prediction ChatGPT ★★
TroyHunt.webp 2023-01-06 22:05:06 ChatGPT is enabling script kiddies to write functional malware (lien direct) For a beta, ChatGPT isn't all that bad at writing fairly decent malware. Malware ChatGPT ★★★
SentinelOne.webp 2022-12-21 15:15:59
11 Problems ChatGPT Can Solve For Reverse Engineers and Malware Analysts
(lien direct)
ChatGPT has captured the imagination of many across infosec. Here's how it can superpower the efforts of reversers and malware analysts.
Malware ChatGPT ★★★
InfoSecurityMag.webp 2022-12-13 10:45:00 Experts Warn ChatGPT Could Democratize Cybercrime (lien direct) Researchers claim AI bot can write malware and craft phishing emails Malware ChatGPT ★★★
CS.webp 2022-12-06 16:41:01 ChatGPT shows promise of using AI to write malware (lien direct) Large language models pose a major cybersecurity risk, both from the vulnerabilities they risk introducing and the malware they could produce. Malware ChatGPT ★★★★
Last update at: 2024-05-20 05:07:47