www.secnews.physaphae.fr This is the RSS 2.0 feed from www.secnews.physaphae.fr. It is a simple aggregated feed of multiple article sources. The list of sources can be found on www.secnews.physaphae.fr. 2025-05-10T20:10:18+00:00 www.secnews.physaphae.fr SecureMac - Security focused on MAC Checklist 416: Malware as A.I. and A.I. Be Trippin'
AI's accuracy issues spark legal action as ChatGPT falsely accuses users of crimes, while DeepSeek malware scams put privacy and security at risk.
2025-03-21T21:49:05+00:00 https://www.securemac.com/checklist/checklist-416-malware-as-a-i-and-a-i-be-trippin www.secnews.physaphae.fr/article.php?IdArticle=8657280 False Malware ChatGPT 2.0000000000000000
HackRead - Chercher Cyber Researchers Use AI Jailbreak on Top LLMs to Create Chrome Infostealer New Immersive World LLM jailbreak lets anyone create malware with GenAI. Discover how Cato Networks researchers tricked ChatGPT, Copilot, and DeepSeek into coding infostealers - in this case, a Chrome infostealer. 2025-03-19T15:58:16+00:00 https://hackread.com/ai-jailbreak-on-top-llms-to-create-chrome-infostealer/ www.secnews.physaphae.fr/article.php?IdArticle=8656722 False Malware ChatGPT 3.0000000000000000 ProofPoint - Cyber Firms Cybersecurity Stop of the Month: Capital One Credential Phishing - How Cybercriminals Are Targeting Your Financial Security 2025-02-25T02:00:04+00:00 https://www.proofpoint.com/us/blog/email-and-cloud-threats/capital-one-phishing-email-campaign www.secnews.physaphae.fr/article.php?IdArticle=8651011 False Malware,Tool,Threat,Prediction,Medical,Cloud,Commercial ChatGPT 3.0000000000000000 Cyber Skills - Podcast Cyber The Growing Threat of Phishing Attacks and How to Protect Yourself Phishing remains the most common type of cybercrime, evolving into a sophisticated threat that preys on human psychology and advanced technology. Traditional phishing involves attackers sending fake, malicious links disguised as legitimate messages to trick victims into revealing sensitive information or installing malware. However, phishing attacks have become increasingly advanced, introducing what experts call "phishing 2.0" and psychological phishing. Phishing 2.0 leverages AI to analyse publicly available data, such as social media profiles and public records, to craft highly personalized and convincing messages. These tailored attacks significantly increase the likelihood of success. Psychological manipulation also plays a role in phishing schemes.
Attackers exploit emotions like fear and trust, often creating a sense of urgency to pressure victims into acting impulsively. By impersonating trusted entities, such as banks or employers, they pressure victims into following instructions without hesitation. AI has further amplified the efficiency and scale of phishing attacks. Cybercriminals use AI tools to generate convincing scam messages rapidly, launch automated campaigns and target thousands of individuals within minutes. Tools like ChatGPT, when misused in "DAN mode", can bypass ethical restrictions to craft grammatically correct and compelling messages, aiding attackers who lack English fluency. 2025-02-17T00:00:00+00:00 https://www.cyberskills.ie/explore/news/the-growing-threat-of-phishing-attacks-and-how-to-protect-yourself--.html www.secnews.physaphae.fr/article.php?IdArticle=8648755 False Malware,Tool,Vulnerability,Threat ChatGPT 3.0000000000000000 Checkpoint - Fabricant Materiel Securite CPR Finds Threat Actors Already Leveraging DeepSeek and Qwen to Develop Malicious Content Soon after the launch of AI models DeepSeek and Qwen, Check Point Research witnessed cyber criminals quickly shifting from ChatGPT to these new platforms to develop malicious content. Threat actors are sharing how to manipulate the models and show uncensored content, ultimately allowing hackers and criminals to use AI to create malicious content. Called jailbreaking, there are many methods to remove censors from AI models. However, we now see in-depth guides to jailbreaking methods, bypassing anti-fraud protections, and developing malware itself. This blog delves into how threat actors leverage these advanced models to develop harmful content, manipulate AI functionalities through […]
2025-02-04T17:38:54+00:00 https://blog.checkpoint.com/artificial-intelligence/cpr-finds-threat-actors-already-leveraging-deepseek-and-qwen-to-develop-malicious-content/ www.secnews.physaphae.fr/article.php?IdArticle=8646864 False Malware,Threat ChatGPT 3.0000000000000000
Cyble - CyberSecurity Firm DeepSeek's Growing Influence Sparks a Surge in Frauds and Phishing Attacks Overview DeepSeek is a Chinese artificial intelligence company that has developed open-source large language models (LLMs). In January 2025, DeepSeek launched its first free chatbot app, "DeepSeek - AI Assistant", which rapidly became the most downloaded free app on the iOS App Store in the United States, surpassing even OpenAI's ChatGPT. However, with rapid growth comes new risks: cybercriminals are exploiting DeepSeek's reputation through phishing campaigns, fake investment scams, and malware disguised as DeepSeek. This analysis explores recent incidents where Threat Actors (TAs) have impersonated DeepSeek to target users, highlighting their tactics and how readers can secure themselves accordingly. Recently, Cyble Research and Intelligence Labs (CRIL) identified multiple suspicious websites impersonating DeepSeek. Many of these sites were linked to crypto phishing schemes and fraudulent investment scams. We have compiled a list of the identified suspicious sites: abs-register[.]com deep-whitelist[.]com deepseek-ai[.]cloud deepseek[.]boats deepseek-shares[.]com deepseek-aiassistant[.]com usadeepseek[.]com Campaign Details Crypto phishing leveraging the popularity of DeepSeek CRIL uncovered a crypto phishing [...] 2025-01-30T13:00:34+00:00 https://cyble.com/blog/deepseeks-growing-influence-surge-frauds-phishing-attacks/ www.secnews.physaphae.fr/article.php?IdArticle=8646797 False Spam,Malware,Threat,Mobile ChatGPT 3.0000000000000000 Bleeping Computer - Magazine Américain Time Bandit ChatGPT jailbreak bypasses safeguards on sensitive topics A ChatGPT jailbreak flaw, dubbed "Time Bandit," allows you to bypass OpenAI's safety guidelines when asking for detailed instructions on sensitive topics, including the creation of weapons, information on nuclear topics, and malware creation.
[...] 2025-01-30T07:00:00+00:00 https://www.bleepingcomputer.com/news/security/time-bandit-chatgpt-jailbreak-bypasses-safeguards-on-sensitive-topics/ www.secnews.physaphae.fr/article.php?IdArticle=8644768 False Malware ChatGPT 3.0000000000000000 ProofPoint - Cyber Firms Unlocking the Value of AI: Safe AI Adoption for Cybersecurity Professionals 2025-01-24T05:28:30+00:00 https://www.proofpoint.com/us/blog/information-protection/value-of-safe-ai-adoption-insights-for-cybersecurity-professionals www.secnews.physaphae.fr/article.php?IdArticle=8642164 False Malware,Tool,Vulnerability,Threat,Legislation ChatGPT 2.0000000000000000 Korben - Bloger francais Watch out for ChatGPT / Claude libraries booby-trapped with malware Thought you could save time by integrating ChatGPT into your projects via a free API found on PyPI? Beware: this shortcut could cost you dearly, as scammers have had the bright idea of riding the generative AI wave to trap hurried developers with malware. For more than a year, two Python packages posed as official ChatGPT and Claude APIs on the PyPI repository. These impostors promised the moon, notably free access to the most advanced models such as GPT-4 Turbo.
Enough to tempt more than one developer looking for small savings, especially since these services are normally paid. 2024-11-25T11:35:11+00:00 https://korben.info/malware-fausses-api-chatgpt-claude-piratage-developpeurs.html www.secnews.physaphae.fr/article.php?IdArticle=8617495 False Malware ChatGPT 2.0000000000000000 ProofPoint - Cyber Firms Security Brief: ClickFix Social Engineering Technique Floods Threat Landscape 2024-11-18T10:34:05+00:00 https://www.proofpoint.com/us/blog/threat-insight/security-brief-clickfix-social-engineering-technique-floods-threat-landscape www.secnews.physaphae.fr/article.php?IdArticle=8613359 False Malware,Tool,Threat ChatGPT 2.0000000000000000 RiskIQ - cyber risk firms (now microsoft) Bumblebee malware returns after recent law enforcement disruption 2024-10-21T18:57:24+00:00 https://community.riskiq.com/article/b382c0b6 www.secnews.physaphae.fr/article.php?IdArticle=8601156 False Ransomware,Spam,Malware,Tool,Threat,Legislation ChatGPT 2.0000000000000000 RiskIQ - cyber risk firms (now microsoft) An update on disrupting deceptive uses of AI 2024-10-16T19:15:03+00:00 https://community.riskiq.com/article/e46070dd www.secnews.physaphae.fr/article.php?IdArticle=8598902 False Malware,Tool,Vulnerability,Threat,Studies ChatGPT 2.0000000000000000 InformationSecurityBuzzNews - Site de News Securite OpenAI says bad actors are using ChatGPT to write malware, sway elections Cybercriminals are increasingly exploiting OpenAI's model, ChatGPT, to carry out a range of malicious activities, including malware development, misinformation campaigns, and spear-phishing.
A new report revealed that since the beginning of 2024, OpenAI has disrupted over 20 deceptive operations worldwide, spotlighting a troubling trend of AI misuse that includes creating and debugging malware, producing content [...] 2024-10-14T07:12:01+00:00 https://informationsecuritybuzz.com/openai-says-bad-actors-using-chatgpt/ www.secnews.physaphae.fr/article.php?IdArticle=8597478 False Malware,Prediction ChatGPT 3.0000000000000000 Bleeping Computer - Magazine Américain OpenAI confirms threat actors use ChatGPT to write malware OpenAI has disrupted over 20 malicious cyber operations abusing its AI-powered chatbot, ChatGPT, for debugging and developing malware, spreading misinformation, evading detection, and conducting spear-phishing attacks. [...] 2024-10-12T10:09:19+00:00 https://www.bleepingcomputer.com/news/security/openai-confirms-threat-actors-use-chatgpt-to-write-malware/ www.secnews.physaphae.fr/article.php?IdArticle=8596702 False Malware,Threat ChatGPT 2.0000000000000000 CyberScoop - scoopnewsgroup.com special Cyber OpenAI says it has disrupted 20-plus foreign influence networks in past year
Threat actors were observed using ChatGPT and other tools to scope out attack surfaces, debug malware and create spearphishing content.
2024-10-09T19:25:56+00:00 https://cyberscoop.com/openai-threat-report-foreign-influence-generative-ai/ www.secnews.physaphae.fr/article.php?IdArticle=8595004 False Malware,Tool ChatGPT 3.0000000000000000
RiskIQ - cyber risk firms (now microsoft) Weekly OSINT Highlights, 30 September 2024 2024-09-30T13:21:55+00:00 https://community.riskiq.com/article/70e8b264 www.secnews.physaphae.fr/article.php?IdArticle=8588927 False Ransomware,Malware,Tool,Vulnerability,Threat,Patching,Mobile ChatGPT,APT 36 2.0000000000000000 RiskIQ - cyber risk firms (now microsoft) Spyware Injection Into Your ChatGPT's Long-Term Memory (SpAIware) ## Snapshot An attack chain for the ChatGPT macOS application was discovered, where attackers could use prompt injection from untrusted data to insert persistent spyware into ChatGPT's memory. This vulnerability allowed for co [...] 2024-09-25T22:02:45+00:00 https://community.riskiq.com/article/693f83ba www.secnews.physaphae.fr/article.php?IdArticle=8585136 False Malware,Vulnerability,Threat ChatGPT 2.0000000000000000 ProofPoint - Cyber Firms Cybersecurity Stop of the Month: Credential Phishing Attack Targeting User Location Data 2024-08-14T07:19:53+00:00 https://www.proofpoint.com/us/blog/email-and-cloud-threats/stopping-phish-attacks-behind-qr-codes-human-verification-challenge www.secnews.physaphae.fr/article.php?IdArticle=8557648 False Malware,Tool,Threat,Cloud ChatGPT 3.0000000000000000 We Live Security - Editeur Logiciel Antivirus ESET Beware of fake AI tools masking very real malware threats Ever attuned to the latest trends, cybercriminals distribute malicious tools that pose as ChatGPT, Midjourney and other generative AI assistants 2024-07-29T09:00:00+00:00 https://www.welivesecurity.com/en/cybersecurity/beware-fake-ai-tools-masking-very-real-malware-threat/
www.secnews.physaphae.fr/article.php?IdArticle=8547090 False Malware,Tool ChatGPT 2.0000000000000000 RiskIQ - cyber risk firms (now microsoft) Scam Attacks Taking Advantage of the Popularity of the Generative AI Wave 2024-07-26T19:24:17+00:00 https://community.riskiq.com/article/2826e7d7 www.secnews.physaphae.fr/article.php?IdArticle=8544990 True Ransomware,Spam,Malware,Tool,Threat,Studies ChatGPT 3.0000000000000000 RiskIQ - cyber risk firms (now microsoft) Growing Number of Threats Leveraging AI 2024-07-25T20:11:02+00:00 https://community.riskiq.com/article/96b66de0 www.secnews.physaphae.fr/article.php?IdArticle=8544339 False Malware,Vulnerability,Threat ChatGPT 3.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC What Healthcare Providers Should Do After A Medical Data Breach [...] 2023 Cost of a Data Breach report
reveals. But data breaches aren’t just expensive, they also harm patient privacy, damage organizational reputation, and erode patient trust in healthcare providers. As data breaches are now largely a matter of “when” not “if”, it’s important to devise a solid data breach response plan. By acting fast to prevent further damage and data loss, you can restore operations as quickly as possible with minimal harm done. Contain the Breach Once a breach has been detected, you need to act fast to contain it, so it doesn’t spread. That means disconnecting the affected system from the network, but not turning it off altogether as your forensic team still needs to investigate the situation. Simply unplug the network cable from the router to disconnect it from the internet. If your antivirus scanner has found malware or a virus on the system, quarantine it, so it can be analyzed later. Keep the firewall settings as they are and save all firewall and security logs. You can also take screenshots if needed. It’s also smart to change all access control login details. Strong complex passwords are a basic cybersecurity feature difficult for hackers and software to crack. It’s still important to record old passwords for future investigation. Also, remember to deactivate less-important accounts. Document the Breach You then need to document the breach, so forensic investigators can find out what caused it, as well as recommend accurate next steps to secure the network now and prevent future breaches. So, in your report, explain how you came to hear of the breach and relay exactly what was stated in the notification (including the date and time you were notified). Also, document every step you took in response to the breach. This includes the date and time you disconnected systems from the network and changed account credentials and passwords. If you use artificial intelligence (AI) tools, you’ll also need to consider whether they played a role in the breach, and document this if so. 
For example, ChatGPT, a popular chatbot and virtual assistant, can successfully exploit zero-day security vulnerabilities 87% of the time, a recent study by researchers at the University of Illinois Urbana-Champaign found. Although AI is increasingly used in healthcare to automate tasks, manage patient data, and even make tailored care recommendations, it does pose a serious risk to patient data integrity despite the other benefits it provides. So, assess whether AI influenced your breach at all, so your organization can make changes as needed to better prevent data breaches in the future. Report the Breach Although your first instinct may be to keep the breach under wraps, you're actually legally required to report it. Under the [...] 2024-07-23T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/what-healthcare-providers-should-do-after-a-medical-data-breach www.secnews.physaphae.fr/article.php?IdArticle=8542852 False Data Breach,Malware,Tool,Vulnerability,Threat,Studies,Medical ChatGPT 3.0000000000000000 Bleeping Computer - Magazine Américain Malicious PowerShell script pushing malware looks AI-written A threat actor is using a PowerShell script that was likely created with the help of an artificial intelligence system such as OpenAI's ChatGPT, Google's Gemini, or Microsoft's CoPilot.
[...] 2024-04-10T12:12:40+00:00 https://www.bleepingcomputer.com/news/security/malicious-powershell-script-pushing-malware-looks-ai-written/ www.secnews.physaphae.fr/article.php?IdArticle=8479446 False Malware,Threat ChatGPT 3.0000000000000000 ProofPoint - Cyber Firms Security Brief: TA547 Targets German Organizations with Rhadamanthys Stealer 2024-04-10T10:12:47+00:00 https://www.proofpoint.com/us/blog/threat-insight/security-brief-ta547-targets-german-organizations-rhadamanthys-stealer www.secnews.physaphae.fr/article.php?IdArticle=8479187 False Malware,Tool,Threat ChatGPT 2.0000000000000000 Recorded Future - Flux Recorded Future Cybercriminals are spreading malware through Facebook pages impersonating AI brands
Cybercriminals are taking over Facebook pages and using them to advertise fake generative artificial intelligence software loaded with malware. According to researchers at the cybersecurity company Bitdefender, the cybercrooks are taking advantage of the popularity of new generative AI tools and using "malvertising" to impersonate legitimate products like Midjourney, Sora AI, ChatGPT 5 and [...]
2024-04-04T17:04:16+00:00 https://therecord.media/cybercriminals-plant-malware-facebook-ai-brands www.secnews.physaphae.fr/article.php?IdArticle=8476032 False Malware,Tool ChatGPT 2.0000000000000000
The Register - Site journalistique Anglais Here's something else AI can do: expose bad infosec to give cyber-crims a toehold in your organization Singaporean researchers note rising presence of ChatGPT creds in infostealer malware logs Stolen ChatGPT credentials are a hot commodity on the dark web, according to Singapore-based threat intelligence firm Group-IB, which claims to have found some 225,000 stealer logs containing login details for the service last year.… 2024-03-07T06:27:08+00:00 https://go.theregister.com/feed/www.theregister.com/2024/03/07/more_than_250000/ www.secnews.physaphae.fr/article.php?IdArticle=8460181 False Malware,Threat ChatGPT 3.0000000000000000 RiskIQ - cyber risk firms (now microsoft) Staying ahead of threat actors in the age of AI 2024-03-05T19:03:47+00:00 https://community.riskiq.com/article/ed40fbef www.secnews.physaphae.fr/article.php?IdArticle=8459485 False Ransomware,Malware,Tool,Vulnerability,Threat,Studies,Medical,Technical APT 28,ChatGPT,APT 4 2.0000000000000000 The Hacker News - The Hacker News est un blog de news de hack (surprenant non?) Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware.
“The number of infected devices decreased slightly in mid- and late [...]” 2024-03-05T16:08:00+00:00 https://thehackernews.com/2024/03/over-225000-compromised-chatgpt.html www.secnews.physaphae.fr/article.php?IdArticle=8459273 False Malware ChatGPT 3.0000000000000000 SecurityWeek - Security News Microsoft Catches APTs Using ChatGPT for Vuln Research, Malware Scripting
Microsoft threat hunters say foreign APTs are interacting with OpenAI's ChatGPT to automate malicious vulnerability research, target reconnaissance and malware creation tasks.
2024-02-14T18:25:10+00:00 https://www.securityweek.com/microsoft-catches-apts-using-chatgpt-for-vuln-research-malware-scripting/ www.secnews.physaphae.fr/article.php?IdArticle=8450120 False Malware,Vulnerability,Threat ChatGPT 2.0000000000000000
HackRead - Chercher Cyber Thousands of Dark Web Posts Expose ChatGPT Abuse Plans
By Deeba Ahmed Cybercriminals are actively promoting the abuse of ChatGPT and similar chatbots, offering a range of malicious tools from malware to phishing kits. This is a post from HackRead.com Read the original post: Thousands of Dark Web Posts Expose ChatGPT Abuse Plans
2024-01-26T17:26:19+00:00 https://www.hackread.com/dark-web-posts-expose-chatgpt-abuse-plans/ www.secnews.physaphae.fr/article.php?IdArticle=8443482 False Malware,Tool ChatGPT 3.0000000000000000
InfoSecurity Mag - InfoSecurity Magazine ChatGPT Cybercrime Surge Revealed in 3000 Dark Web Posts Kaspersky said cybercriminals are exploring schemes to implement ChatGPT in malware development 2024-01-24T17:15:00+00:00 https://www.infosecurity-magazine.com/news/chatgpt-cybercrime-revealed-dark/ www.secnews.physaphae.fr/article.php?IdArticle=8442631 False Malware ChatGPT 2.0000000000000000 The Register - Site journalistique Anglais GCHQ's NCSC warns of 'realistic possibility' AI will help state-backed malware evade detection [...] already debunked. However, a paper published today by the UK National Cyber Security Centre (NCSC) suggests there is a "realistic possibility" that by 2025 the most sophisticated attackers will improve significantly thanks to AI models informed by data describing cyber attacks.… 2024-01-24T06:26:08+00:00 https://go.theregister.com/feed/www.theregister.com/2024/01/24/ncsc/ www.secnews.physaphae.fr/article.php?IdArticle=8442422 False Malware,Tool ChatGPT 3.0000000000000000 TechRepublic - Security News US ESET Threat Report: ChatGPT Name Abuses, Lumma Stealer Malware Increases, Android SpinOk SDK Spyware's Prevalence Risk mitigation tips are provided for each of these cybersecurity threats. 2023-12-22T22:47:44+00:00 https://www.techrepublic.com/article/eset-threat-report-h2-2023/ www.secnews.physaphae.fr/article.php?IdArticle=8427606 False Malware,Threat,Mobile ChatGPT 3.0000000000000000 ProofPoint - Cyber Firms Proofpoint's 2024 Predictions: Brace for Impact
2023-11-28T23:05:04+00:00 https://www.proofpoint.com/us/blog/ciso-perspectives/proofpoints-2024-predictions-brace-impact www.secnews.physaphae.fr/article.php?IdArticle=8417740 False Ransomware,Malware,Tool,Vulnerability,Threat,Mobile,Prediction ChatGPT 3.0000000000000000 TrendLabs Security - Editeur Antivirus A Closer Look at ChatGPT's Role in Automated Malware Creation This blog entry explores the effectiveness of ChatGPT's safety measures, the potential for AI technologies to be misused by criminal actors, and the limitations of current AI models. 2023-11-14T00:00:00+00:00 https://www.trendmicro.com/en_us/research/23/k/a-closer-look-at-chatgpt-s-role-in-automated-malware-creation.html www.secnews.physaphae.fr/article.php?IdArticle=8411646 False Malware ChatGPT 2.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Re-evaluating risk in the artificial intelligence age 2023-10-17T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/re-evaluating-risk-in-the-artificial-intelligence-age www.secnews.physaphae.fr/article.php?IdArticle=8396640 False Malware,Tool,Vulnerability ChatGPT 4.0000000000000000 The Hacker News - The Hacker News est un blog de news de hack (surprenant non?) "I Had a Dream" and Generative AI Jailbreaks "Of course, here's an example of simple code in the Python programming language that can be associated with the keywords "MyHotKeyHandler," "Keylogger," and "macOS"" - this is a message from ChatGPT, followed by a piece of malicious code and a brief remark not to use it for illegal purposes.
Initially published by Moonlock Lab, the screenshots of ChatGPT writing code for keylogger malware are yet [...] 2023-10-09T16:36:00+00:00 https://thehackernews.com/2023/10/i-had-dream-and-generative-ai-jailbreaks.html www.secnews.physaphae.fr/article.php?IdArticle=8393137 False Malware ChatGPT 3.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Keeping cybersecurity regulations top of mind for generative AI use [...] got ChatGPT to write polymorphic malware, despite protections intended to prevent this kind of application. Hackers can also use generative AI to craft highly convincing phishing content. Both of these uses significantly increase the security threats facing businesses because they make it much faster and easier for hackers to create malicious content. Risk of data and IP exposure Generative AI algorithms are developed with machine learning, so they learn from every interaction they have. Every prompt becomes part of the algorithm and informs future output. As a result, the AI may "remember" any information a user includes in their prompts. Generative AI can also put a business's intellectual property at risk. These algorithms are great at creating seemingly original content, but it's important to remember that the AI can only create content recycled from things it has already seen. Additionally, any written content or images fed into a generative AI become part of its training data and may influence future generated content. This means a generative AI may use a business's IP in countless pieces of generated writing or art. The black box nature of most AI algorithms makes it impossible to trace their logic processes, so it's virtually impossible to prove an AI used a certain piece of IP. Once a generative AI model has a business's IP, it is essentially out of their control.
Risk of compromised training data One cybersecurity risk unique to AI is "poisoned" training datasets. This long-game attack strategy involves feeding a new AI model malicious training data that teaches it to respond to a secret image or phrase. Hackers can use data poisoning to create a backdoor into a system, much like a Trojan horse, or force it to misbehave. Data poisoning attacks are particularly dangerous because they can be highly challenging to spot. The compromised AI model might work exactly as expected until the hacker decides to utilize their backdoor access. Using generative AI within security regulations While generative AI has some cybersecurity risks, it is possible to use it effectively while complying with regulations. Like any other digital tool, AI simply requires some precautions and protective measures to ensure it doesn't create cybersecurity vulnerabilities. A few essential steps can help businesses accomplish this. Understand all relevant regulations Staying compliant [...] 2023-09-06T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/keeping-cybersecurity-regulations-top-of-mind-for-generative-ai-use www.secnews.physaphae.fr/article.php?IdArticle=8379621 False Malware,Tool ChatGPT 2.0000000000000000 Global Security Mag - Site de news francais Artificial Intelligence in Information Technology: Three Questions CISOs Should Ask Themselves Special Reports / CISO
The year 2023 could go down in history as the year of artificial intelligence (AI), or at least as the year in which companies and consumers alike raved about generative AI tools such as ChatGPT. Providers of IT security solutions are not immune to this enthusiasm. At the 2023 RSA Conference, one of the leading international conferences in IT security, the topic of AI came up in almost every talk, and for good reason: AI has enormous potential to change the industry. Our security researchers have already observed hackers using AI to create deceptively genuine phishing emails and to speed up malware development. The good news: defenders are also using AI, embedding it into their security solutions, because AI can be used to automatically detect and prevent cyberattacks. It can, for example, prevent phishing emails from ever reaching the inbox. It can likewise reduce the time-consuming false alarms that plague IT teams and tie up staff who would be better deployed elsewhere.
2023-08-21T10:29:48+00:00 https://www.globalsecuritymag.fr/Kunstliche-Intelligenz-in-der-Informationstechnologie-Drei-Fragen-die-sich.html
Krebs on Security - Meet the Brains Behind the Malware-Friendly AI Chat Service 'WormGPT'

WormGPT, a private new chatbot service advertised as a way to use artificial intelligence (AI) to help write malicious software without all the pesky prohibitions on such activity enforced by ChatGPT and Google Bard, has started adding restrictions on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into "a more controlled environment." The large language models (LLMs) made by ChatGPT parent OpenAI, Google, and Microsoft all have various safety measures designed to prevent people from abusing them for nefarious purposes, such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new LLM created specifically for cybercrime activities.
2023-08-08T17:37:23+00:00 https://krebsonsecurity.com/2023/08/meet-the-brains-behind-the-malware-friendly-ai-chat-service-wormgpt/

Bleeping Computer - Cybercriminals train AI chatbots for phishing, malware attacks

In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative AI hacking tool called FraudGPT has emerged, and at least one more is under development, allegedly based on Google's AI experiment, Bard.
[…]
2023-08-01T10:08:16+00:00 https://www.bleepingcomputer.com/news/security/cybercriminals-train-ai-chatbots-for-phishing-malware-attacks/

Check Point - Facebook Flooded with Ads and Pages for Fake ChatGPT, Google Bard and other AI services, Tricking Users into Downloading Malware
Highlights:
- Cybercriminals are using Facebook to impersonate popular generative AI brands, including ChatGPT, Google Bard, Midjourney and Jasper.
- Facebook users are being tricked into downloading content from the fake brand pages and ads.
- These downloads contain malware that steals their online passwords (banking, social media, gaming, etc.), crypto wallets and any information saved in their browser.
- Unsuspecting users are liking and commenting on fake posts, thereby spreading them to their own social networks.

Cybercriminals continue to try new ways to steal private information. A new scam uncovered by Check Point Research (CPR) uses Facebook to scam unsuspecting […]
2023-07-19T16:27:24+00:00 https://blog.checkpoint.com/security/facebook-flooded-with-fake-ads-for-chatgpt-google-bard-and-other-ai-services-tricking-users-into-downloading-malware/
The Hacker News - Go Beyond the Headlines for Deeper Dives into the Cybercriminal Underground

Discover stories about threat actors' latest tactics, techniques, and procedures from Cybersixgill's threat experts each month. Each story brings you details on emerging underground threats, the threat actors involved, and how you can take action to mitigate risks. Learn about the top vulnerabilities and review the latest ransomware and malware trends from the deep and dark web. Stolen ChatGPT
2023-07-18T16:24:00+00:00 https://thehackernews.com/2023/07/go-beyond-headlines-for-deeper-dives.html

KnowBe4 - CyberheistNews Vol 13 #26 | June 27th, 2023

[Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams

The U.S. Federal Trade Commission (FTC) has published a data spotlight outlining the most common text message scams. Phony bank fraud prevention alerts were the most common type of text scam last year. "Reports about texts impersonating banks are up nearly tenfold since 2019, with median reported individual losses of $3,000 last year," the report says. These are the top five text scams reported by the FTC:

1. Copycat bank fraud prevention alerts
2. Bogus "gifts" that can cost you
3. Fake package delivery problems
4. Phony job offers
5. Not-really-from-Amazon security alerts

"People get a text supposedly from a bank asking them to call a number ASAP about suspicious activity or to reply YES or NO to verify whether a transaction was authorized.
If they reply, they'll get a call from a phony 'fraud department' claiming they want to 'help get your money back.' What they really want to do is make unauthorized transfers. "What's more, they may ask for personal information like Social Security numbers, setting people up for possible identity theft."

Fake gift card offers took second place, followed by phony package delivery problems. "Scammers understand how our shopping habits have changed and have updated their sleazy tactics accordingly," the FTC says. "People may get a text pretending to be from the U.S. Postal Service, FedEx, or UPS claiming there's a problem with a delivery. The text links to a convincing-looking – but utterly bogus – website that asks for a credit card number to cover a small 'redelivery fee.'"

Scammers also target job seekers with bogus job offers in an attempt to steal their money and personal information. "With workplaces in transition, some scammers are using texts to perpetrate old-school forms of fraud – for example, fake 'mystery shopper' jobs or bogus money-making offers for driving around with cars wrapped in ads," the report says. "Other texts target people who post their resumes on employment websites. They claim to offer jobs and even send job seekers checks, usually with instructions to send some of the money to a different address for materials, training, or the like. By the time the check bounces, the person's money – and the phony 'employer' – are long gone."

Finally, scammers impersonate Amazon and send fake security alerts to trick victims into sending money. "People may get what looks like a message from 'Amazon,' asking to verify a big-ticket order they didn't place," the FTC says.
"Concerned
2023-06-27T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-26-eyes-open-the-ftc-reveals-the-latest-top-five-text-message-scams

AlienVault Lab Blog - Toward a more resilient SOC: the power of machine learning

Machine learning (ML) gives computers the ability, as AI pioneer Arthur Samuel put it, ". . . to learn without explicitly being programmed." ML algorithms are fed large amounts of data that they parse and learn from so they can make informed predictions on outcomes in new data. Their predictions improve with "training": the more data an ML algorithm is fed, the more it learns, and thus the more accurate its baseline models become. While ML is used for various real-world purposes, one of its primary use cases in threat detection is to automate the identification of anomalous behavior. The ML model categories most commonly used for these detections are:

Supervised models learn by example, applying knowledge gained from existing labeled datasets and desired outcomes to new data. For example, a supervised ML model can learn to recognize malware. It does this by analyzing data associated with known malware traffic to learn how it deviates from what is considered normal. It can then apply this knowledge to recognize the same patterns in new data.

Unsupervised models do not rely on labels but instead identify structure, relationships, and patterns in unlabeled datasets. They then use this knowledge to detect abnormalities or changes in behavior. For example, an unsupervised ML model can observe traffic on a network over a period of time, continuously learning (based on patterns in the data) what is "normal" behavior, and then investigating deviations, i.e., anomalous behavior.
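The baseline-and-deviation idea behind such unsupervised detection can be sketched in a few lines of Python. This is an illustrative toy only: the feature (bytes transferred per session), the sample values, and the three-standard-deviation threshold are invented assumptions, not a production detector.

```python
import statistics

def fit_baseline(values):
    # "Training": learn what normal looks like from unlabeled history.
    return statistics.mean(values), statistics.stdev(values)

def is_anomalous(value, mean, stdev, threshold=3.0):
    # Flag observations further than `threshold` standard deviations
    # from the learned baseline.
    return abs(value - mean) > threshold * stdev

# Hypothetical history: bytes transferred per session on one host.
history = [480, 510, 495, 505, 520, 490, 500, 515, 485, 502]
mean, stdev = fit_baseline(history)

print(is_anomalous(505, mean, stdev))   # typical session -> False
print(is_anomalous(5000, mean, stdev))  # exfiltration-sized outlier -> True
```

A real SOC pipeline would track many features per entity and re-fit the baseline continuously, but the principle is the same: no labels, only learned "normal" and flagged deviations.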
Large language models (LLMs), such as ChatGPT, are a type of generative AI that use unsupervised learning. They train by ingesting massive amounts of unlabeled text data. Not only can LLMs analyze syntax to find connections and patterns between words, but they can also analyze semantics. This means they can understand context and interpret meaning in existing data in order to create new content.

Finally, reinforcement models, which more closely mimic human learning, are not given labeled inputs or outputs but instead learn and perfect strategies through trial and error. With ML, as with any data analysis tool, the accuracy of the output depends critically on the quality and breadth of the data set used as input.

A valuable tool for the SOC

The SOC needs to be resilient in the face of an ever-changing threat landscape. Analysts have to be able to quickly understand which alerts to prioritize and which to ignore. Machine learning helps optimize security operations by making threat detection and response faster and more accurate.
2023-06-21T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/toward-a-more-resilient-soc-the-power-of-machine-learning

KnowBe4 - CyberheistNews Vol 13 #25 | June 20th, 2023

[Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches

Verizon's DBIR always has a lot of information to unpack, so I'll continue my review by covering how stolen credentials play a role in attacks.
This year's Data Breach Investigations Report covers nearly 1 million incidents, making it the most statistically relevant report data set anywhere. So, what does the report say about the most common threat actions involved in data breaches? Overall, the use of stolen credentials is the overwhelming leader, involved in nearly 45% of breaches: more than double the second-place "Other" category (which spans a number of threat action types) and ransomware, which sits at around 20% of data breaches.

According to Verizon, stolen credentials were the "most popular entry point for breaches." As an example, in Basic Web Application Attacks, the use of stolen credentials was involved in 86% of attacks. The prevalence of credential use should come as no surprise, given the number of attacks focused on harvesting online credentials to provide access to both cloud platforms and on-premises networks alike. And it is in social engineering attacks (whether via phish, vish, SMiSh, or web) that these credentials are compromised: something that can be significantly diminished by engaging users in security awareness training to familiarize them with common techniques and examples of attacks, so that a user who comes across an attack set on stealing credentials avoids becoming a victim.

Blog post with links: https://blog.knowbe4.com/stolen-credentials-top-breach-threat

[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist

Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users.
Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever l
2023-06-20T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-25-fingerprints-all-over-stolen-credentials-are-the-no1-root-cause-of-data-breaches

Bleeping Computer - Over 100,000 ChatGPT accounts stolen via info-stealing malware

More than 101,000 ChatGPT user accounts have been compromised by information stealers over the past year, according to dark web marketplace data. […]
2023-06-20T04:00:00+00:00 https://www.bleepingcomputer.com/news/security/over-100-000-chatgpt-accounts-stolen-via-info-stealing-malware/

KnowBe4 - CyberheistNews Vol 13 #24 | June 13th, 2023

[The Mind's Bias] Pretexting Now Tops Phishing in Social Engineering Attacks

The new Verizon DBIR is a treasure trove of data. As we will cover a bit below, Verizon reported that 74% of data breaches involve the "human element," so people are one of the most common factors contributing to successful data breaches. Let's drill down a bit more into the social engineering section. They explained: "Now, who has received an email or a direct message on social media from a friend or family member who desperately needs money? Probably fewer of you.
This is social engineering (pretexting specifically) and it takes more skill. "The most convincing social engineers can get into your head and convince you that someone you love is in danger. They use information they have learned about you and your loved ones to trick you into believing the message is truly from someone you know, and they use this invented scenario to play on your emotions and create a sense of urgency. The DBIR Figure 35 shows that Pretexting is now more prevalent than Phishing in Social Engineering incidents. However, when we look at confirmed breaches, Phishing is still on top."

A social attack known as BEC, or business email compromise, can be quite intricate. In this type of attack, the perpetrator uses existing email communications and information to deceive the recipient into carrying out a seemingly ordinary task, like changing a vendor's bank account details. But what makes this attack dangerous is that the new bank account provided belongs to the attacker. As a result, any payments the recipient makes to that account will simply disappear.

BEC Attacks Have Nearly Doubled

It can be difficult to spot these attacks, as the attackers do a lot of preparation beforehand. They may create a domain doppelganger that looks almost identical to the real one and modify the signature block to show their own number instead of the legitimate vendor's. Attackers can make many subtle changes to trick their targets, especially if the targets are receiving many similar legitimate requests. This could be one reason why BEC attacks have nearly doubled across the entire DBIR incident dataset, as shown in Figure 36, and now make up over 50% of incidents in this category.

Financially Motivated External Attackers Double Down on Social Engineering

Timely detection and response is crucial when dealing with social engineering attacks, as well as most other attacks.
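The "domain doppelganger" trick mentioned above can also be screened for mechanically. A minimal sketch, assuming a plain Levenshtein edit-distance check against a hand-maintained list of trusted vendor domains; the domains and the distance cutoff of 2 are invented for illustration, not a vetted rule:

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical allowlist of vendors this organization actually pays.
TRUSTED = ["acme-vendor.com", "bigbank.com"]

def looks_like_doppelganger(domain: str) -> bool:
    # Close to a trusted domain, but not an exact match -> suspicious.
    return any(0 < edit_distance(domain, t) <= 2 for t in TRUSTED)

print(looks_like_doppelganger("acme-vend0r.com"))  # True: one-character swap
print(looks_like_doppelganger("acme-vendor.com"))  # False: the real domain
```

Production mail gateways use richer signals (homoglyphs, registration age, DMARC), but even this toy check catches the one-character lookalikes typical of BEC.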
Figure 38 shows a steady increase in the median cost of BECs since 2018, now averaging around $50,000, emphasizing the significance of quick detection. However, unlike the times we live in, this section isn't all doom and
2023-06-13T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-24-the-minds-bias-pretexting-now-tops-phishing-in-social-engineering-attacks

AlienVault Lab Blog - Rise of AI in Cybercrime: How ChatGPT is revolutionizing ransomware attacks and what your business can do

ChatGPT became the fastest-growing consumer app in internet history, reaching 100 million users as 2023 began. The generative AI application has revolutionized not only the world of artificial intelligence but is impacting almost every industry. In the world of cybersecurity, new tools and technologies are typically adopted quickly; unfortunately, in many cases, bad actors are the earliest to adopt and adapt. This can be bad news for your business, as it escalates the degree of difficulty in managing threats.

Using ChatGPT's large language model, anyone can easily generate malicious code or craft convincing phishing emails, all without any technical expertise or coding knowledge. While cybersecurity teams can leverage ChatGPT defensively, the lower barrier to entry for launching a cyberattack has both complicated and escalated the threat landscape.

Understanding the role of ChatGPT in modern ransomware attacks

We've written about ransomware many times, but it's crucial to reiterate that the cost to individuals, businesses, and institutions can be massive, both financially and in terms of data loss or reputational damage.
With AI, cybercriminals have a potent tool at their disposal, enabling more precise, adaptable, and stealthy attacks. They're using machine learning algorithms to simulate trusted entities, create convincing phishing emails, and even evade detection. The problem isn't just the sophistication of the attacks, but their sheer volume: with AI, hackers can launch attacks on an unprecedented scale, exponentially expanding the breadth of potential victims.

Cybercriminals can leverage AI for ransomware in many ways, but perhaps the easiest is in line with how many ChatGPT users already use it: writing and creating content. For hackers, especially foreign ransomware gangs, AI can be used to craft sophisticated phishing emails that are much more difficult to detect than the poorly worded messages that were once so common among bad actors (with their equally bad grammar). Even more concerning, ChatGPT-fueled ransomware can mimic the style and tone of a trusted individual or company, tricking the recipient into clicking a malicious link or downloading an infected attachment.

This is where the danger lies. Imagine your organization has the best cybersecurity awareness program, and all your employees have gained expertise in deciphering which emails are legitimate and which are dangerous. If an email can mimic tone and appear 100% genuine, how are employees going to know? It's almost down to a coin flip in terms of odds.
Furthermore, AI-driven ransomware can study the behavior of the security software on a system, identify patterns, and then either modify itself or choose th
2023-06-13T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/how-chatgpt-is-revolutionizing-ransomware-attacks

Dark Reading - ChatGPT Hallucinations Open Developers to Supply-Chain Malware Attacks

Attackers could exploit a common AI experience, false recommendations, to spread malicious code via developers that use ChatGPT to create software.
2023-06-06T12:00:00+00:00 https://www.darkreading.com/application-security/chatgpt-hallucinations-developers-supply-chain-malware-attacks

KnowBe4 - CyberheistNews Vol 13 #22 | May 31st, 2023

[Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks

With attackers knowing financial fraud-based phishing attacks are best suited for the one industry where the money is, this massive spike in attacks should both surprise you and not surprise you at all. When you want tires, where do you go? Right: to the tire store. Shoes? Yup: the shoe store. The most money you can scam from a single attack? That's right: the financial services industry, at least according to cybersecurity vendor Armorblox's 2023 Email Security Threat Report.
According to the report, attacks on the financial services industry increased by 72% over 2022, and the industry was the single largest target of financial fraud attacks, representing 49% of all such attacks. Breaking down the specific types of financial fraud doesn't make it any better for the financial industry:

- 51% of invoice fraud attacks targeted the financial services industry
- 42% were payroll fraud attacks
- 63% were payment fraud

To make matters worse, nearly one-quarter (22%) of financial fraud attacks successfully bypassed native email security controls, according to Armorblox. That means roughly one in five email-based attacks made it all the way to the inbox. The next layer in your defense should be a user who is properly educated, via security awareness training, to easily identify financial fraud and other phishing-based threats, stopping them before they do actual damage.

Blog post with links: https://blog.knowbe4.com/financial-fraud-phishing

[Live Demo] Ridiculously Easy Security Awareness Training and Phishing

Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us Wednesday, June 7, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 introduces a new-school approach to security awareness training and simulated phishing. Get a look at THREE NEW FEATURES and see how easy it is to train and phish your users.
2023-05-31T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-22-eye-on-fraud-a-closer-look-at-the-massive-72-percent-spike-in-financial-phishing-attacks

Bleeping Computer - RomCom malware spread via Google Ads for ChatGPT, GIMP, more

A new campaign distributing the RomCom backdoor malware is impersonating the websites of well-known or fictional software, tricking users into downloading and launching malicious installers. […]
2023-05-30T15:01:01+00:00 https://www.bleepingcomputer.com/news/security/romcom-malware-spread-via-google-ads-for-chatgpt-gimp-more/

The Hacker News - Searching for AI Tools? Watch Out for Rogue Sites Distributing RedLine Malware

Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver RedLine Stealer malware.
"Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord)," eSentire
2023-05-19T12:23:00+00:00 https://thehackernews.com/2023/05/searching-for-ai-tools-watch-out-for.html

Infosecurity Magazine - BatLoader Impersonates ChatGPT and Midjourney in Cyber-Attacks

eSentire recommended raising awareness of malware masquerading as legitimate applications.
2023-05-17T16:00:00+00:00 https://www.infosecurity-magazine.com/news/batloader-impersonates-chatgpt/

KnowBe4 - CyberheistNews Vol 13 #19 | May 9th, 2023

[Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users

Compromised websites (legitimate sites that have been successfully compromised to support social engineering) are serving visitors fake Google Chrome update error messages. "Google Chrome users who use the browser regularly should be wary of a new attack campaign that distributes malware by posing as a Google Chrome update error message," Trend Micro warns. "The attack campaign has been operational since February 2023 and has a large impact area." The message displayed reads: "UPDATE EXCEPTION. An error occurred in Chrome automatic update. Please install the update package manually later, or wait for the next automatic update."
A link at the bottom of the bogus error message takes the user to what is misrepresented as a manual Chrome update. In fact, the link downloads a ZIP file containing an EXE. The payload is a cryptojacking Monero miner. A cryptojacker is bad enough on its own, since it will drain power and degrade device performance. This one also carries the potential for compromising sensitive information, particularly credentials, and for serving as staging for further attacks.

This campaign may be more effective precisely because of its routine, innocent look. There are no spectacular threats, no promises of instant wealth, just a notice about a failed update. Users can become desensitized to the potential risks that bogus messages about IT issues carry with them. Informed users are the last line of defense against attacks like these. New-school security awareness training can help any organization sustain that line of defense and create a strong security culture.

Blog post with links: https://blog.knowbe4.com/fake-chrome-update-error-messages

A Master Class on IT Security: Roger A. Grimes Teaches You Phishing Mitigation

Phishing attacks have come a long way from the spray-and-pray emails of just a few decades ago. Now they're more targeted, more cunning and more dangerous. And this enormous security gap leaves you open to business email compromise, session hijacking, ransomware and more. Join Roger A.
Grimes, KnowBe4's Data-Driven Defense Evangelist,
2023-05-09T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-19-watch-your-back-new-fake-chrome-update-error-attack-targets-your-users

AlienVault Lab Blog - Preventing sophisticated phishing attacks aimed at employees

Attackers can spoof email addresses, and bots sound like humans. It's becoming challenging for employees to tell whether their emails are real or fake, which puts the company at risk of data breaches. In March 2023, the GPT-4 AI model received an update that lets users give specific instructions about styles and tasks. Attackers can use it to pose as employees and send convincing messages, since it sounds intelligent and has general knowledge of any industry. Since the classic warning signs of phishing attacks no longer apply, companies should train all employees on the new, sophisticated methods. As phishing attacks change, so should businesses.

Identify the signs

Your company can take preventive action to secure its employees against attacks. You need to make it difficult for hackers to reach them, and your company must train them on warning signs. While blocking spam senders and reinforcing security systems is up to you, employees must know how to identify and report threats themselves. You can prevent data breaches if employees know what to watch out for:

- Misspellings: While it's becoming more common for phishing emails to have correct spelling, employees still need to look for mistakes. For example, they could look for industry-specific language, because everyone in their field should know how to spell those words.
• Irrelevant senders: Workers can identify phishing — even when the email is spoofed to appear to come from someone they know — by asking themselves whether it is relevant. They should flag the email as a potential attack if the sender doesn't usually reach out to them or works in an unrelated department.
• Attachments: Hackers attempt to install malware through links or downloads. Ensure every employee knows not to click on them.
• Odd requests: A sophisticated phishing attack has relevant messages and proper language, but it is somewhat vague because it goes to multiple employees at once. Employees can recognize it, for example, when it asks them to do something unrelated to their role.

It may become harder to detect warning signs as attacks evolve, but you can still prepare people for those situations as well as possible. It is unlikely that hackers have access to employees' specific duties or the inner workings of your company, so capitalize on those details. Sophisticated attacks will sound intelligent and may align with general duties, so everyone must stay alert. Training helps employees identify the signs, but you also need further preventive action to ensure you are covered.

Take preventive action. Basic security measures — like regularly updating passwords and running antivirus software — are fundamental to protecting your company.
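The "irrelevant senders" check above, and the lookalike-domain trick it guards against, can be partially automated. A minimal, illustrative sketch: the trusted-domain list, the distance threshold, and the function names are hypothetical, not taken from the article.

```python
# Illustrative sketch: flag sender domains that are a near-miss of a
# trusted domain (one or two edits away, the classic lookalike trick).
# Trusted domains and the threshold of 2 are demo assumptions.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def is_suspicious_sender(address: str, trusted_domains: set[str]) -> bool:
    """Flag addresses whose domain is *almost* a trusted domain."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in trusted_domains:
        return False
    return any(edit_distance(domain, t) <= 2 for t in trusted_domains)

trusted = {"example.com"}
print(is_suspicious_sender("billing@example.cam", trusted))  # lookalike TLD
print(is_suspicious_sender("alice@example.com", trusted))    # exact match
```

A completely unrelated domain is deliberately not flagged here; the sketch only targets near-misses, which is where user intuition fails most often.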
For example, everyone should change their passwords once every three months at minimum to ensure […] (2023-05-08, https://cybersecurity.att.com/blogs/security-essentials/preventing-sophisticated-phishing-attacks-aimed-at-employees)

InfoSecurity Mag - Meta Tackles Malware Posing as ChatGPT in Persistent Campaigns
Malware families detected and disrupted include Ducktail and the newly identified NodeStealer. (2023-05-04, https://www.infosecurity-magazine.com/news/meta-tackles-malware-posing-chatgpt/)

The Hacker News - Meta Takes Down Malware Campaign That Used ChatGPT as a Lure to Steal Accounts
Meta said it took steps to take down more than 1,000 malicious URLs shared across its services that were found to leverage OpenAI's ChatGPT as a lure to propagate about 10 malware families since March 2023.
The development comes against the backdrop of fake ChatGPT web browser extensions being increasingly used to steal users' Facebook account credentials with an aim to run […] (2023-05-04, https://thehackernews.com/2023/05/meta-takes-down-malware-campaign-that.html)

SecurityWeek - Hackers Promise AI, Install Malware Instead
Facebook parent Meta warned that hackers are using the promise of generative artificial intelligence like ChatGPT to trick people into installing malware on devices.
(2023-05-03, https://www.securityweek.com/hackers-promise-ai-install-malware-instead/)
Global Security Mag - Fake Websites Related to ChatGPT Pose High Risk, Warns Check Point Research
Highlights: • Check Point Research (CPR) sees a surge in malware distributed through websites appearing to be related to ChatGPT.
(2023-05-03, https://www.globalsecuritymag.fr/Fake-Websites-Related-to-ChatGPT-Pose-High-Risk-Warns-Check-Point-Research.html)
Wired Threat Level - Meta Moves to Counter New Malware and Repeat Account Takeovers
The company is adding new tools as bad actors use ChatGPT-themed lures and mask their infrastructure in an attempt to trick victims and elude defenders. (2023-05-03, https://www.wired.com/story/meta-attacker-tactics-business-tool/)

Checkpoint - Fake Websites Impersonating Association To ChatGPT Pose High Risk, Warns Check Point Research
Highlights:
• Check Point Research (CPR) sees a surge in malware distributed through websites appearing to be related to ChatGPT.
• Since the beginning of 2023, 1 out of 25 new ChatGPT-related domains was either malicious or potentially malicious.
• CPR provides examples of websites that mimic ChatGPT, intending to lure users into downloading malicious files, and warns users to be aware and to refrain from accessing similar websites.
The age of AI – Anxiety or Aid? In December 2022, Check Point Research (CPR) started raising concerns about ChatGPT's implications for cybersecurity. In our previous report, CPR put a spotlight on an increase […] (2023-05-02, https://blog.checkpoint.com/research/fake-websites-impersonating-association-to-chatgpt-poses-high-risk-warns-check-point-research/)

knowbe4 - CyberheistNews Vol 13 #18 [Eye on AI] Does ChatGPT Have Cybersecurity Tells?
CyberheistNews Vol 13 #18 | May 2nd, 2023

[Eye on AI] Does ChatGPT Have Cybersecurity Tells?

Poker players and other human lie detectors look for "tells," that is, signs by which someone might unwittingly or involuntarily reveal what they know, or what they intend to do. A card player yawns when they're about to bluff, for example, or someone's pupils dilate when they've successfully drawn a winning card.

It seems that artificial intelligence (AI) has its tells as well, at least for now, and some of them have become so obvious and so well known that they've become internet memes. "ChatGPT and GPT-4 are already flooding the internet with AI-generated content in places famous for hastily written inauthentic content: Amazon user reviews and Twitter," Vice's Motherboard observes, and there are some ways of interacting with the AI that lead it into betraying itself for what it is. "When you ask ChatGPT to do something it's not supposed to do, it returns several common phrases. When I asked ChatGPT to tell me a dark joke, it apologized: 'As an AI language model, I cannot generate inappropriate or offensive content,' it said. Those two phrases, 'as an AI language model' and 'I cannot generate inappropriate content,' recur so frequently in ChatGPT-generated content that they've become memes."

That happy state of easy detection, however, is unlikely to endure. As Motherboard points out, these tells are a feature of "lazily executed" AI. With a little more care and attention, they'll grow more persuasive. One risk of AI language models is that they can be adapted to perform social engineering at scale. In the near term, new-school security awareness training can help alert your people to the tells of automated scamming. And in the longer term, that training will adapt and keep pace with the threat as it evolves.
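The two phrases quoted above lend themselves to a naive scanner. A minimal sketch: the phrase list comes straight from the Motherboard quote, while treating a hit as a weak signal rather than proof is our own framing.

```python
# Naive fixed-phrase scanner for the AI "tells" discussed above.
# The phrases are the two quoted in the article; everything else is
# an illustrative simplification.

AI_TELLS = (
    "as an ai language model",
    "i cannot generate inappropriate content",
)

def find_ai_tells(text: str) -> list[str]:
    """Return every known tell phrase that appears in the text."""
    lowered = text.lower()
    return [phrase for phrase in AI_TELLS if phrase in lowered]

print(find_ai_tells("As an AI language model, I can't tell dark jokes."))
```

As the article itself cautions, this only catches lazily executed output; a detector built on fixed phrases will age quickly as the models are tuned.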
Blog post with links: https://blog.knowbe4.com/chatgpt-cybersecurity-tells

[Live Demo] Ridiculously Easy Security Awareness Training and Phishing. Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, May 3, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 […] (2023-05-02, https://blog.knowbe4.com/cyberheistnews-vol-13-18-eye-on-ai-does-chatgpt-have-cybersecurity-tells)

Anomali - Anomali Cyber Watch: Two Supply-Chain Attacks Chained Together, Decoy Dog Stealthy DNS Communication, EvilExtractor Exfiltrates to FTP Server
Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed.

Trending Cyber News and Threat Intelligence

First-Ever Attack Leveraging Kubernetes RBAC to Backdoor Clusters (published: April 21, 2023)
A new Monero cryptocurrency-mining campaign is the first recorded case of gaining persistence via Kubernetes (K8s) Role-Based Access Control (RBAC), according to Aquasec researchers. The recorded honeypot attack started with the exploitation of a misconfigured API server. The attackers proceeded by gathering information about the cluster, checking whether their cluster was already deployed, and deleting some existing deployments. They used RBAC to gain persistence by creating a new ClusterRole and a new ClusterRoleBinding. The attackers then created a DaemonSet to target all nodes for deployment with a single API request.
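A hedged sketch of auditing for the persistence pattern just described: a ClusterRoleBinding tying a ClusterRole to a service account, plus a DaemonSet running an image outside an allow-list. It operates on already-parsed manifest dicts (as produced by `kubectl get ... -o json`); the object names, the allow-list, and the `system:` filter are illustrative assumptions, not details from the campaign.

```python
# Illustrative RBAC/DaemonSet audit over parsed Kubernetes manifests.
# Names, images, and the allow-list below are hypothetical demo values.

def audit_cluster(bindings: list[dict], daemonsets: list[dict],
                  allowed_images: set[str]) -> list[str]:
    """Flag objects matching the ClusterRoleBinding + DaemonSet pattern."""
    findings = []
    for b in bindings:
        name = b["metadata"]["name"]
        if name.startswith("system:"):  # skip Kubernetes built-in bindings
            continue
        role = b.get("roleRef", {}).get("name", "")
        if any(s.get("kind") == "ServiceAccount" for s in b.get("subjects", [])):
            findings.append(f"ClusterRoleBinding {name!r} binds ClusterRole "
                            f"{role!r} to a service account")
    for d in daemonsets:
        for c in d["spec"]["template"]["spec"]["containers"]:
            if c["image"] not in allowed_images:
                findings.append(f"DaemonSet {d['metadata']['name']!r} runs "
                                f"unapproved image {c['image']!r}")
    return findings

# Hypothetical objects mimicking the reported pattern.
binding = {"metadata": {"name": "kube-controller"},
           "roleRef": {"name": "cluster-admin"},
           "subjects": [{"kind": "ServiceAccount", "name": "kube-controller"}]}
miner_ds = {"metadata": {"name": "kube-controller"},
            "spec": {"template": {"spec": {"containers": [
                {"image": "docker.io/attacker/miner:latest"}]}}}}

for finding in audit_cluster([binding], [miner_ds], {"registry.internal/agent:1.2"}):
    print(finding)
```

A real audit would also inspect the verbs granted by the bound ClusterRole and pin images by digest, but the shape of the check is the same.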
The deployed malicious image from the public registry Docker Hub was named to impersonate a legitimate account and a popular legitimate image. It has been pulled 14,399 times, and 60 exposed K8s clusters have been found with signs of exploitation by this campaign.
Analyst Comment: Your company should have protocols in place to ensure that all cluster management and cloud storage systems are properly configured and patched. K8s buckets are too often misconfigured, and threat actors realize the potential for malicious activity. A defense-in-depth approach (layering of security mechanisms, redundancy, fail-safe defense processes) is a good mitigation step to help protect against highly active threat groups.
MITRE ATT&CK: T1190 - Exploit Public-Facing Application | T1496 - Resource Hijacking | T1036 - Masquerading | T1489 - Service Stop
Tags: Monero, malware-type:Cryptominer, detection:PUA.Linux.XMRMiner, file-type:ELF, abused:Docker Hub, technique:RBAC Buster, technique:Create ClusterRoleBinding, technique:Deploy DaemonSet, target-system:Linux, target:K8s, target:Kubernetes RBAC

3CX Software Supply Chain Compromise Initiated by a Prior Software Supply Chain Compromise; Suspected North Korean Actor Responsible (published: April 20, 2023)
Investigation of the previously reported 3CX supply-chain compromise (March 2023) allowed Mandiant researchers to determine that it was the result of a prior software supply-chain attack using a trojanized installer for X_TRADER, a software package provided by Trading Technologies. The attack involved the publicly available tool SigFlip decrypting an RC4 stream cipher and starting the publicly available DaveShell shellcode for reflective loading. It led to the installation of the custom, modular VeiledSignal backdoor.
VeiledSignal additional modules inject the C2 module into a browser process instance, create a Windows named pipe, and […] (2023-04-25, https://www.anomali.com/blog/anomali-cyber-watch-two-supply-chain-attacks-chained-together-decoy-dog-stealthy-dns-communication-evilextractor-exfiltrates-to-ftp-server)

knowbe4 - CyberheistNews Vol 13 #17 [Head Start] Effective Methods How To Teach Social Engineering to an AI

CyberheistNews Vol 13 #16 | April 18th, 2023

[Finger on the Pulse]: How Phishers Leverage Recent AI Buzz

Curiosity leads people to suspend their better judgment as a new credential-theft campaign exploits excitement about the newest AI systems not yet available to the general public. On Tuesday morning, April 11th, Veriti explained that several unknown actors are running false Facebook ads that advertise a free download of AIs like ChatGPT and Google Bard. Veriti writes: "These posts are designed to appear legitimate, using the buzz around OpenAI language models to trick unsuspecting users into downloading the files. However, once the user downloads and extracts the file, the Redline Stealer (aka RedStealer) malware is activated and is capable of stealing passwords and downloading further malware onto the user's device." Veriti describes the capabilities of the Redline Stealer malware, which, once downloaded, can take sensitive information like credit card numbers and passwords, and personal information like user location and hardware. Veriti added: "The malware can upload and download files, execute commands, and send back data about the infected computer at regular intervals."
Experts recommend using the official Google or OpenAI websites to learn when their products will be available, and downloading files only from reputable sources. With the rising use of Google and Facebook ads as attack vectors, experts also suggest refraining from clicking on suspicious advertisements promising early access to any product on the Internet. Employees can be helped to develop sound security habits like these by stepping them through monthly social engineering simulations. Blog post with links: https://blog.knowbe4.com/ai-hype-used-for-phishbait

[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist. Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters with […] (2023-04-25, https://blog.knowbe4.com/cyberheistnews-vol-13-17-head-start-effective-methods-how-to-teach-social-engineering-to-an-ai)

Bleeping Computer - Google ads push BumbleBee malware used by ransomware gangs
The enterprise-targeting BumbleBee malware is distributed through Google Ads and SEO poisoning that promote popular software like Zoom, Cisco AnyConnect, ChatGPT, and Citrix Workspace.
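Trojanized-installer campaigns like this one are the reason many vendors publish checksums next to their downloads. A minimal verification sketch; the file name and the "expected" digest below are demo values computed on the spot, not real vendor hashes.

```python
# Verify a downloaded installer against a published SHA-256 digest.
# Demo values only; in practice the expected hash comes from the
# vendor's download page, fetched over a trusted channel.
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file so large installers need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, expected_hex: str) -> bool:
    return sha256_of(path) == expected_hex.lower()

# Demo: write a stand-in "installer" and check it against its own hash.
with open("installer-demo.bin", "wb") as f:
    f.write(b"not a real installer")
published = hashlib.sha256(b"not a real installer").hexdigest()

print(verify_download("installer-demo.bin", published))   # genuine file
print(verify_download("installer-demo.bin", "00" * 32))   # wrong digest
```

A hash check defeats a swapped binary served from a poisoned ad or SEO result, though it does not help if the vendor's own page is what got compromised; code-signing verification covers that case.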
[…] (2023-04-22, https://www.bleepingcomputer.com/news/security/google-ads-push-bumblebee-malware-used-by-ransomware-gangs/)

The Register - ChatGPT fans need 'defensive mindset' to avoid scammers and malware
Palo Alto Networks spots suspicious activity spikes such as naughty domains, phishing, and worse. ChatGPT fans need to adopt a "defensive mindset" because scammers have started using multiple methods to trick the bot's users into downloading malware or sharing sensitive information. (2023-04-21, https://go.theregister.com/feed/www.theregister.com/2023/04/21/crooks_chatgpt_schemes/)

ComputerWeekly - Bumblebee malware flies on the wings of Zoom and ChatGPT (2023-04-20, https://www.computerweekly.com/news/365535625/Bumblebee-malware-flies-on-the-wings-of-Zoom-ChatGPT)

SecureWorks - Bumblebee Malware Distributed Via Trojanized Installer Downloads
Restricting the download and execution of third-party software is critically important. Learn how CTU™ researchers observed Bumblebee malware distributed via trojanized installers for popular software such as Zoom, Cisco AnyConnect, ChatGPT, and Citrix Workspace. (2023-04-20)
https://www.secureworks.com/blog/bumblebee-malware-distributed-via-trojanized-installer-downloads

Wired Threat Level - How ChatGPT - and Bots Like It - Can Spread Malware
Generative AI is a tool, which means it can be used by cybercriminals, too. Here's how to protect yourself. (2023-04-19, https://www.wired.com/story/chatgpt-ai-bots-spread-malware/)

knowbe4 - CyberheistNews Vol 13 #16 [Finger on the Pulse]: How Phishers Leverage Recent AI Buzz

CyberheistNews Vol 13 #16 | April 18th, 2023

[Finger on the Pulse]: How Phishers Leverage Recent AI Buzz

Curiosity leads people to suspend their better judgment as a new credential-theft campaign exploits excitement about the newest AI systems not yet available to the general public. On Tuesday morning, April 11th, Veriti explained that several unknown actors are running false Facebook ads that advertise a free download of AIs like ChatGPT and Google Bard. Veriti writes: "These posts are designed to appear legitimate, using the buzz around OpenAI language models to trick unsuspecting users into downloading the files. However, once the user downloads and extracts the file, the Redline Stealer (aka RedStealer) malware is activated and is capable of stealing passwords and downloading further malware onto the user's device." Veriti describes the capabilities of the Redline Stealer malware, which, once downloaded, can take sensitive information like credit card numbers and passwords, and personal information like user location and hardware.
Veriti added: "The malware can upload and download files, execute commands, and send back data about the infected computer at regular intervals." Experts recommend using the official Google or OpenAI websites to learn when their products will be available, and downloading files only from reputable sources. With the rising use of Google and Facebook ads as attack vectors, experts also suggest refraining from clicking on suspicious advertisements promising early access to any product on the Internet. Employees can be helped to develop sound security habits like these by stepping them through monthly social engineering simulations. Blog post with links: https://blog.knowbe4.com/ai-hype-used-for-phishbait

[New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist. Now there's a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users.
Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever leaving […] (2023-04-18, https://blog.knowbe4.com/cyberheistnews-vol-13-16-finger-on-the-pulse-how-phishers-leverage-recent-ai-buzz)

Dark Reading - Report Reveals ChatGPT Already Involved in Data Leaks, Phishing Scams & Malware Infections (2023-04-12, https://www.darkreading.com/attacks-breaches/report-reveals-chatgpt-already-involved-in-data-leaks-phishing-scams-malware-infections)

Silicon - BlackMamba: is ChatGPT-generated malware a new type of threat? (2023-04-12, https://www.silicon.fr/avis-expert/blackmamba-malware-chatgpt-nouveau-type-de-menace)

knowbe4 - CyberheistNews Vol 13 #15 [The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams

CyberheistNews Vol 13 #15 | April 11th, 2023

[The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams

The Federal Trade Commission is alerting consumers to a next-level, more sophisticated family emergency scam that uses AI to imitate the voice of a "family member in distress." The alert starts out: "You get a call. There's a panicked voice on the line. It's your grandson.
He says he's in deep trouble - he wrecked the car and landed in jail. But you can help by sending money. You take a deep breath and think. You've heard about grandparent scams. But darn, it sounds just like him. How could it be a scam? Voice cloning, that's how."

"Don't Trust The Voice." The FTC explains: "Artificial intelligence is no longer a far-fetched idea out of a sci-fi movie. We're living with it, here and now. A scammer could use AI to clone the voice of your loved one. All he needs is a short audio clip of your family member's voice - which he could get from content posted online - and a voice-cloning program. When the scammer calls you, he'll sound just like your loved one.

"So how can you tell if a family member is in trouble or if it's a scammer using a cloned voice? Don't trust the voice. Call the person who supposedly contacted you and verify the story. Use a phone number you know is theirs. If you can't reach your loved one, try to get in touch with them through another family member or their friends."

The full text of the alert is on the FTC website. Share with friends, family and co-workers: https://blog.knowbe4.com/the-new-face-of-fraud-ftc-sheds-light-on-ai-enhanced-family-emergency-scams

A Master Class on IT Security: Roger A. Grimes Teaches Ransomware Mitigation. Cybercriminals have become thoughtful about ransomware attacks, taking time to maximize your organization's potential damage and their payoff. Protecting your network from this growing threat is more important than ever. And nobody knows this more than Roger A. Grimes, Data-Driven Defense Evangelist at KnowBe4.
With 30+ years of experience as a computer security consultant, instructor, and award-winning author, Roger has dedicated his life to making […] (2023-04-11, https://blog.knowbe4.com/cyberheistnews-vol-13-15-the-new-face-of-fraud-ftc-sheds-light-on-ai-enhanced-family-emergency-scams)

01net - This undetectable malware is signed… ChatGPT
ChatGPT is a formidable weapon for hackers. With the help of the AI, it is possible to code a dangerous, undetectable piece of malware without writing a single line of code. (2023-04-07, https://www.01net.com/actualites/ce-malware-indetectable-est-signechatgpt.html)

Dark Reading - Researcher Tricks ChatGPT into Building Undetectable Steganography Malware
Using only ChatGPT prompts, a Forcepoint researcher convinced the AI to create malware for finding and exfiltrating specific documents, despite its directive to refuse malicious requests. (2023-04-05, https://www.darkreading.com/attacks-breaches/researcher-tricks-chatgpt-undetectable-steganography-malware)

knowbe4 - CyberheistNews Vol 13 #14 [Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist

CyberheistNews Vol 13 #14 | April 4th, 2023

[Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist
The details of this thwarted VEC attack demonstrate how just a few key details can both establish credibility and reveal the whole thing as a scam. It's not every day you hear about a purely social-engineering-based scam looking to run away with tens of millions of dollars. But, according to security researchers at Abnormal Security, cybercriminals are becoming brazen and are taking their shots at very large prizes.

This attack begins with a case of VEC, where a domain is impersonated. Here, the impersonated vendor's domain (which had a .com top-level domain) was replaced with a matching .cam domain (.cam domains are supposedly for photography enthusiasts, but there is the now-obvious problem of .cam looking very much like .com at a cursory glance). The email attaches a legitimate-looking payoff letter complete with loan details. According to Abnormal Security, nearly every aspect of the request looked legitimate. The telltale signs primarily revolved around the use of the lookalike domain, though there were also grammatical mistakes (which can easily be addressed with an online grammar service or ChatGPT).

The attack was identified well before it caused any damage, but the social engineering tactics leveraged were nearly enough to make it successful. Security solutions will help stop most attacks, but for those that make it past scanners, your users need to play a role in spotting and stopping BEC, VEC and phishing attacks themselves - something taught through security awareness training combined with frequent simulated phishing and other social engineering tests. Blog post with screenshots and links: https://blog.knowbe4.com/36-mil-vendor-email-compromise-attack

[Live Demo] Ridiculously Easy Security Awareness Training and Phishing. Old-school awareness training does not hack it anymore.
Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, April 5, @ 2:00 PM (ET), for a live demo of how KnowBe4 […] (2023-04-04, https://blog.knowbe4.com/cyberheistnews-vol-13-14-eyes-on-the-price-how-crafty-cons-attempted-a-36-million-vendor-email-heist)

knowbe4 - CyberheistNews Vol 13 #13 [Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks

CyberheistNews Vol 13 #13 | March 28th, 2023

[Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks

Users need to adapt to an evolving threat landscape in which attackers can use AI tools like ChatGPT to craft extremely convincing phishing emails, according to Matthew Tyson at CSO. "A leader tasked with cybersecurity can get ahead of the game by understanding where we are in the story of machine learning (ML) as a hacking tool," Tyson writes. "At present, the most important area of relevance around AI for cybersecurity is content generation. This is where machine learning is making its greatest strides, and it dovetails nicely for hackers with vectors such as phishing and malicious chatbots. The capacity to craft compelling, well-formed text is in the hands of anyone with access to ChatGPT, and that's basically anyone with an internet connection."

Tyson quotes Conal Gallagher, CIO and CISO at Flexera, as saying that since attackers can now write grammatically correct phishing emails, users will need to pay attention to the circumstances of the emails. "Looking for bad grammar and incorrect spelling is a thing of the past - even pre-ChatGPT phishing emails have been getting more sophisticated," Gallagher said.
"We must ask: 'Is the email expected? Is the from address legit? Is the email enticing you to click on a link?' Security awareness training still has a place to play here."

Tyson explains that technical defenses have become very effective, so attackers focus on targeting humans to bypass these measures. "Email and other elements of software infrastructure offer built-in fundamental security that largely guarantees we are not in danger until we ourselves take action," Tyson writes. "This is where we can install a tripwire in our mindsets: we should be hyper-aware of what it is we are acting upon when we act upon it. Not until an employee sends a reply, runs an attachment, or fills in a form is sensitive information at risk. The first ring of defense in our mentality should be: 'Is the content I'm looking at legit, not just based on its internal aspects, but given the entire context?' The second ring of defense in our mentality then has to be: 'Wait! I'm being asked to do something here.'"

New-school security awareness training with simulated phishing tests enables your employees to recognize increasingly sophisticated phishing attacks and builds a strong security culture. Remember: culture eats strategy for breakfast, and it is always top-down.
Blog post with links: https://blog.knowbe4.com/identifying-ai-enabled-phishing (2023-03-28, https://blog.knowbe4.com/cyberheistnews-vol-13-13-eye-opener-how-to-outsmart-sneaky-ai-based-phishing-attacks)

Checkpoint - Check Point Research Conducts Initial Security Analysis of ChatGPT4, Highlighting Potential Scenarios for Accelerated Cybercrime
Highlights: Check Point Research (CPR) releases an initial analysis of ChatGPT4, surfacing five scenarios that can allow threat actors to streamline malicious efforts and preparations faster and with more precision. In some instances, even non-technical actors can create harmful tools. The five scenarios provided span impersonations of banks, reverse shells, C++ malware and more. Despite… (2023-03-16, https://blog.checkpoint.com/2023/03/15/check-point-research-conducts-initial-security-analysis-of-chatgpt4-highlighting-potential-scenarios-for-accelerated-cybercrime/)

Anomali - Anomali Cyber Watch: Xenomorph Automates the Whole Fraud Chain on Android, IceFire Ransomware Started Targeting Linux, Mythic Leopard Delivers Spyware Using Romance Scam
Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed.

Xenomorph V3: a New Variant with ATS Targeting More Than 400 Institutions (published: March 10, 2023)
Newer versions of the Xenomorph Android banking trojan are able to target 400 applications: cryptocurrency wallets and mobile banking apps from around the world, with the top targeted countries being Spain, Turkey, Poland, the USA, and Australia (in that order).
Since February 2022, several small-scale Xenomorph test campaigns have been detected. Its current version, Xenomorph v3 (Xenomorph.C), is available under the Malware-as-a-Service model. This trojan version was delivered using the Zombinder binding service to bind it to a legitimate currency converter. Xenomorph v3 automatically collects and exfiltrates credentials using the ATS (Automated Transfer Systems) framework. The command-and-control traffic is blended in by abusing the Discord Content Delivery Network. Analyst Comment: Fraud chain automation makes Xenomorph v3 a dangerous malware that might significantly increase its prevalence on the threat landscape. Users should keep their mobile devices updated and use mobile antivirus and VPN protection services. Install only applications that you actually need, use the official store, and check the app description and reviews. Organizations that publish applications for their customers are invited to use Anomali's Premium Digital Risk Protection service to discover rogue, malicious apps impersonating their brand that security teams typically do not search for or monitor. 
MITRE ATT&CK: [MITRE ATT&CK] T1417.001 - Input Capture: Keylogging | [MITRE ATT&CK] T1417.002 - Input Capture: GUI Input Capture Tags: malware:Xenomorph, Mobile, actor:Hadoken Security Group, actor:HadokenSecurity, malware-type:Banking trojan, detection:Xenomorph.C, Malware-as-a-Service, Accessibility services, Overlay attack, Discord CDN, Cryptocurrency wallet, target-industry:Cryptocurrency, target-industry:Banking, target-country:Spain, target-country:ES, target-country:Turkey, target-country:TR, target-country:Poland, target-country:PL, target-country:USA, target-country:US, target-country:Australia, target-country:AU, malware:Zombinder, detection:Zombinder.A, Android Cobalt Illusion Masquerades as Atlantic Council Employee (published: March 9, 2023) A new campaign by Iran-sponsored Charming Kitten (APT42, Cobalt Illusion, Magic Hound, Phosphorus) was detected targeting Mahsa Amini protests and researchers who document the suppression of women and minority groups in Iran]]> 2023-03-14T17:32:00+00:00 https://www.anomali.com/blog/anomali-cyber-watch-xenomorph-automates-the-whole-fraud-chain-on-android-icefire-ransomware-started-targeting-linux-mythic-leopard-delivers-spyware-using-romance-scam www.secnews.physaphae.fr/article.php?IdArticle=8318511 False Ransomware,Malware,Tool,Vulnerability,Threat,Guideline,Conference APT 35,ChatGPT,ChatGPT,APT 36,APT 42 2.0000000000000000 knowbe4 - cybersecurity services CyberheistNews Vol 13 #11 [Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears CyberheistNews Vol 13 #11  |   March 14th, 2023 Robert Lemos at DARKReading just reported on a worrying trend. The title said it all, and the news is that more than 4% of employees have put sensitive corporate data into the large language model, raising concerns that its popularity may result in massive leaks of proprietary information. Yikes. 
I'm giving you a short extract of the story and the link to the whole article is below. "Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models, and that information could be retrieved at a later date if proper data security isn't in place for the service. "In a recent report, data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential info, client data, source code, or regulated information to the LLM. "In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company. "And as more employees use ChatGPT and other AI-based services as productivity tools, the risk will grow, says Howard Ting, CEO of Cyberhaven. "'There was this big migration of data from on-prem to cloud, and the next big shift is going to be the migration of data into these generative apps,' he says. 'And how that plays out [remains to be seen] - I think we're in pregame; we're not even in the first inning.'" Your employees need to be stepped through new-school security awareness training so that they understand the risks of doing things like this. 
Blog post with links: https://blog.knowbe4.com/employees-are-feeding-sensitive-biz-data-to-chatgpt-raising-security-fears [New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist]]> 2023-03-14T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-11-heads-up-employees-are-feeding-sensitive-biz-data-to-chatgpt-raising-security-fears www.secnews.physaphae.fr/article.php?IdArticle=8318404 False Ransomware,Data Breach,Spam,Malware,Threat,Guideline,Medical ChatGPT,ChatGPT 2.0000000000000000 The Hacker News - The Hacker News est un blog de news de hack (surprenant non?) BATLOADER Malware Uses Google Ads to Deliver Vidar Stealer and Ursnif Payloads 2023-03-11T19:02:00+00:00 https://thehackernews.com/2023/03/batloader-malware-uses-google-ads-to.html www.secnews.physaphae.fr/article.php?IdArticle=8317590 False Malware ChatGPT 2.0000000000000000 Dark Reading - Informationweek Branch AI-Powered 'BlackMamba' Keylogging Attack Evades Modern EDR Security 2023-03-08T16:50:40+00:00 https://www.darkreading.com/endpoint/ai-blackmamba-keylogging-edr-security www.secnews.physaphae.fr/article.php?IdArticle=8316734 False Malware ChatGPT,ChatGPT 2.0000000000000000 knowbe4 - cybersecurity services CyberheistNews Vol 13 #09 [Eye Opener] Should You Click on Unsubscribe? CyberheistNews Vol 13 #09  |   February 28th, 2023 By Roger A. Grimes. Some common questions we get are "Should I click on an unwanted email's 'Unsubscribe' link? Will that lead to more or less unwanted email?" The short answer is that, in general, it is OK to click on a legitimate vendor's unsubscribe link. But if the email looks sketchy, comes from a source to which you would not want to confirm that your email address is valid and active, or you are simply unsure, do not take the chance; skip the unsubscribe action. 
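The kind of egress filtering Cyberhaven performs can be sketched at its very simplest as pattern matching on outbound LLM prompts. A toy illustration in Python; the patterns and prompts are invented for the example, and a real DLP product uses far richer detection (content classification, data lineage, user context) than a couple of regexes:

```python
import re

# Illustrative-only patterns; not a substitute for a real DLP engine.
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marker": re.compile(r"\b(?:confidential|internal use only)\b", re.IGNORECASE),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns an outbound LLM
    prompt matches, so an egress proxy could block it or warn the user."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

print(flag_prompt("Draft a letter about the patient, SSN 123-45-6789"))  # ['us_ssn']
print(flag_prompt("Summarize our CONFIDENTIAL 2023 strategy deck"))      # ['confidential_marker']
print(flag_prompt("What is the capital of France?"))                     # []
```

Even this crude filter would have flagged the two incidents quoted above (a strategy document and a patient letter), which is why pairing technical controls with awareness training is the sensible default.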
In many countries, legitimate vendors are bound by law to offer (free) unsubscribe functionality and abide by a user's preferences. For example, in the U.S., the 2003 CAN-SPAM Act states that businesses must offer clear instructions on how the recipient can remove themselves from the involved mailing list, and that request must be honored within 10 days. Note: Many countries have laws similar to the CAN-SPAM Act, although the privacy protection they provide ranges from very little to a lot more. The unsubscribe feature does not have to be a URL link, but it does have to be an "internet-based way." The most popular alternative method besides a URL link is an email address to use. In some cases, there are specific instructions you have to follow, such as putting "Unsubscribe" in the subject of the email. Other times you are expected to craft your own message. Luckily, most of the time simply sending any email to the listed unsubscribe email address is enough to remove your email address from the mailing list. [CONTINUED] at the KnowBe4 blog: https://blog.knowbe4.com/should-you-click-on-unsubscribe [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. 
Join us TOMORROW, Wednesday, March 1, @ 2:00 PM (ET), for a live demo of how KnowBe4 introduces a new-school approach]]> 2023-02-28T14:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-09-eye-opener-should-you-click-on-unsubscribe www.secnews.physaphae.fr/article.php?IdArticle=8314155 False Malware,Hack,Tool,Vulnerability,Threat,Guideline,Prediction APT 38,ChatGPT 3.0000000000000000 Recorded Future - Flux Recorded Future Hackers use ChatGPT phishing websites to infect users with malware Cyble says cybercriminals are setting up phishing websites that mimic the branding of ChatGPT, an AI tool that has exploded in popularity]]> 2023-02-23T19:02:13+00:00 https://therecord.media/chatgpt-phishing-fake-websites-hackers-malware/ www.secnews.physaphae.fr/article.php?IdArticle=8312936 False Malware,Tool ChatGPT 3.0000000000000000 01net. Actualites - Securite - Magazine Francais A fake ChatGPT app for Windows threatens to hijack your Facebook, TikTok and Google accounts A fake ChatGPT application for Windows PCs is spreading on social networks. 
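The "internet-based way" that the CAN-SPAM discussion above refers to is, for most bulk mailers, advertised in the standard RFC 2369 List-Unsubscribe header, which may carry a mailto: address, a URL, or both. A minimal sketch of extracting those targets from a raw message with Python's standard library; the addresses and message below are invented for the example:

```python
import email
import re

def unsubscribe_targets(raw_message: str) -> list[str]:
    """Extract unsubscribe URIs (mailto: and/or https:) from the
    RFC 2369 List-Unsubscribe header, in the order they appear."""
    msg = email.message_from_string(raw_message)
    header = msg.get("List-Unsubscribe", "")
    # The header is a comma-separated list of <uri> entries.
    return re.findall(r"<([^>]+)>", header)

raw = (
    "From: news@example.com\r\n"
    "To: you@example.org\r\n"
    "Subject: Weekly digest\r\n"
    "List-Unsubscribe: <mailto:unsub@example.com?subject=Unsubscribe>,"
    " <https://example.com/unsubscribe?id=123>\r\n"
    "\r\n"
    "Body text.\r\n"
)

for uri in unsubscribe_targets(raw):
    print(uri)
```

As the article advises, whether you should actually follow one of these targets still depends on trusting the sender: a legitimate vendor honors them, while a spammer may use the click only to confirm your address is live.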
It hides malware capable of stealing Facebook, TikTok, and Google account credentials.]]> 2023-02-23T09:25:51+00:00 https://www.01net.com/actualites/fausse-app-chatgpt-windows-menace-de-pirater-comptes-facebook-tiktok-google.html www.secnews.physaphae.fr/article.php?IdArticle=8312796 False Malware ChatGPT 2.0000000000000000 InfoSecurity Mag - InfoSecurity Magazine Phishing Sites and Apps Use ChatGPT as Lure 2023-02-23T09:20:00+00:00 https://www.infosecurity-magazine.com/news/phishing-sites-and-apps-use/ www.secnews.physaphae.fr/article.php?IdArticle=8312798 False Malware ChatGPT 2.0000000000000000 The State of Security - Magazine Américain Fake ChatGPT apps spread Windows and Android malware 2023-02-23T09:07:44+00:00 https://www.tripwire.com/state-of-security/fake-chatgpt-apps-spread-windows-and-android-malware www.secnews.physaphae.fr/article.php?IdArticle=8312875 False Malware ChatGPT 2.0000000000000000 Bleeping Computer - Magazine Américain Hackers use fake ChatGPT apps to push Windows, Android malware 2023-02-22T16:58:19+00:00 https://www.bleepingcomputer.com/news/security/hackers-use-fake-chatgpt-apps-to-push-windows-android-malware/ www.secnews.physaphae.fr/article.php?IdArticle=8312588 False Malware,Tool,Threat ChatGPT 3.0000000000000000 Global Security Mag - Site de news francais A new malware steals social media credentials by posing as a ChatGPT application Malwares]]> 2023-02-22T16:25:56+00:00 https://www.globalsecuritymag.fr/Un-nouveau-malware-vole-des-identifiants-de-reseaux-sociaux-en-se-faisant.html www.secnews.physaphae.fr/article.php?IdArticle=8312522 False Malware ChatGPT 3.0000000000000000 InformationSecurityBuzzNews - Site de News Securite Hackers Bypass ChatGPT Restrictions Via Telegram Bots 2023-02-09T17:05:17+00:00 https://informationsecuritybuzz.com/hackers-bypass-chatgpt-restrictions-via-telegram-bots/ www.secnews.physaphae.fr/article.php?IdArticle=8308591 False Malware ChatGPT 2.0000000000000000 Ars
Technica - Risk Assessment Security Hacktivism Hackers are selling a service that bypasses ChatGPT restrictions on malware 2023-02-08T18:54:03+00:00 https://arstechnica.com/?p=1916125 www.secnews.physaphae.fr/article.php?IdArticle=8308408 False Malware ChatGPT 3.0000000000000000 Dark Reading - Informationweek Branch ChatGPT Could Create Polymorphic Malware Wave, Researchers Warn 2023-01-18T19:21:00+00:00 https://www.darkreading.com/threat-intelligence/chatgpt-could-create-polymorphic-malware-researchers-warn www.secnews.physaphae.fr/article.php?IdArticle=8302349 False Malware ChatGPT 3.0000000000000000 InfoSecurity Mag - InfoSecurity Magazine ChatGPT Creates Polymorphic Malware 2023-01-18T16:00:00+00:00 https://www.infosecurity-magazine.com/news/chatgpt-creates-polymorphic-malware/ www.secnews.physaphae.fr/article.php?IdArticle=8302283 False Malware ChatGPT 2.0000000000000000 Anomali - Firm Blog Anomali Cyber Watch: Turla Re-Registered Andromeda Domains, SpyNote Is More Popular after the Source Code Publication, Typosquatted Site Used to Leak Company's Data Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed. Trending Cyber News and Threat Intelligence OPWNAI: Cybercriminals Starting to Use ChatGPT (published: January 6, 2023) Check Point researchers have detected multiple underground forum threads outlining experiments with and abuse of ChatGPT (Generative Pre-trained Transformer), the revolutionary artificial intelligence (AI) chatbot tool capable of generating creative responses in a conversational manner. Several actors have built schemes to produce AI outputs (graphic art, books) and sell them as their own. Other actors experiment with instructions to write AI-generated malicious code while avoiding ChatGPT guardrails that should prevent such abuse. 
Two actors shared samples allegedly created using ChatGPT: a basic Python-based stealer, a Java downloader that stealthily runs payloads using PowerShell, and a cryptographic tool. Analyst Comment: ChatGPT and similar tools can be of great help to humans creating art, writing texts, and programming. At the same time, they can be dangerous tools enabling even low-skill threat actors to create convincing social-engineering lures and even new malware. MITRE ATT&CK: [MITRE ATT&CK] T1566 - Phishing | [MITRE ATT&CK] T1059.001: PowerShell | [MITRE ATT&CK] T1048.003 - Exfiltration Over Alternative Protocol: Exfiltration Over Unencrypted/Obfuscated Non-C2 Protocol | [MITRE ATT&CK] T1560 - Archive Collected Data | [MITRE ATT&CK] T1005: Data from Local System Tags: ChatGPT, Artificial intelligence, OpenAI, Phishing, Programming, Fraud, Chatbot, Python, Java, Cryptography, FTP Turla: A Galaxy of Opportunity (published: January 5, 2023) Russia-sponsored group Turla re-registered expired domains for old Andromeda malware to select a Ukrainian target from the existing victims. An Andromeda sample, known since 2013, infected the Ukrainian organization in December 2021 via a user-activated LNK file on an infected USB drive. Turla re-registered the Andromeda C2 domain in January 2022, profiled and selected a single victim, and pushed its payloads in September 2022. First, the Kopiluwak profiling tool was downloaded for system reconnaissance; two days later, the Quietcanary backdoor was deployed to find and exfiltrate files created in 2021-2022. Analyst Comment: Advanced groups often utilize commodity malware to blend their traffic with less sophisticated threats. Turla's tactic of re-registering old but active C2 domains gives the group a way in to the pool of existing targets. 
Organizations should be vigilant about all kinds of existing infections and clean them up, even if assessed as "less dangerous." All known network and host-based indicators and hunting rules associated]]> 2023-01-10T16:30:00+00:00 https://www.anomali.com/blog/anomali-cyber-watch-turla-re-registered-andromeda-domains-spynote-is-more-popular-after-the-source-code-publication-typosquatted-site-used-to-leak-companys-data www.secnews.physaphae.fr/article.php?IdArticle=8299602 False Ransomware,Malware,Tool,Threat ChatGPT,APT-C-36 2.0000000000000000 Schneier on Security - Chercheur Cryptologue Américain ChatGPT-Written Malware Researchers are seeing ChatGPT-written malware in the wild. …within a few weeks of ChatGPT going live, participants in cybercrime forums—some with little or no coding experience—were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks. "It's still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web," company researchers wrote. "However, the cybercriminal community has already shown significant interest and are jumping into this latest trend to generate malicious code."... ]]> 2023-01-10T12:18:55+00:00 https://www.schneier.com/blog/archives/2023/01/chatgpt-written-malware.html www.secnews.physaphae.fr/article.php?IdArticle=8299521 False Malware,Tool,Prediction ChatGPT 2.0000000000000000 TroyHunt - Blog Security ChatGPT is enabling script kiddies to write functional malware 2023-01-06T22:05:06+00:00 https://arstechnica.com/?p=1908471 www.secnews.physaphae.fr/article.php?IdArticle=8298667 False Malware ChatGPT 3.0000000000000000 SentinelOne (Research) - Cyber Firms 11 problèmes que le chat peut résoudre pour les ingénieurs inverses et les analystes de logiciels malveillants<br>11 Problems ChatGPT Can Solve For Reverse Engineers and Malware Analysts ChatGPT has captured the imagination of many across infosec. 
Here's how it can superpower the efforts of reversers and malware analysts.]]> 2022-12-21T15:15:59+00:00 https://www.sentinelone.com/labs/11-problems-chatgpt-can-solve-for-reverse-engineers-and-malware-analysts/ www.secnews.physaphae.fr/article.php?IdArticle=8388329 False Malware ChatGPT 3.0000000000000000 InfoSecurity Mag - InfoSecurity Magazine Experts Warn ChatGPT Could Democratize Cybercrime 2022-12-13T10:45:00+00:00 https://www.infosecurity-magazine.com/news/experts-warn-chatgpt-democratize/ www.secnews.physaphae.fr/article.php?IdArticle=8290658 False Malware ChatGPT 3.0000000000000000 CyberScoop - scoopnewsgroup.com special Cyber ChatGPT shows promise of using AI to write malware Large language models pose a major cybersecurity risk, both from the vulnerabilities they risk introducing and the malware they could produce. ]]> 2022-12-06T16:41:01+00:00 https://www.cyberscoop.com/chatgpt-ai-malware/ www.secnews.physaphae.fr/article.php?IdArticle=8288303 False Malware ChatGPT 4.0000000000000000