www.secnews.physaphae.fr This is the RSS 2.0 feed from www.secnews.physaphae.fr. It's a simple aggregated flow of multiple article sources. A list of sources can be found on www.secnews.physaphae.fr. 2025-05-10T21:00:20+00:00 www.secnews.physaphae.fr
We Live Security - ESET Antivirus Software Publisher This month in security with Tony Anscombe – March 2025 edition From an exploited vulnerability in a third-party ChatGPT tool to a bizarre twist on ransomware demands, it's a wrap on another month filled with impactful cybersecurity news 2025-03-31T10:46:09+00:00 https://www.welivesecurity.com/en/videos/month-security-tony-anscombe-march-2025-edition/ www.secnews.physaphae.fr/article.php?IdArticle=8661296 False Ransomware,Tool,Vulnerability ChatGPT 3.0000000000000000
InformationSecurityBuzzNews - Security News Site Tackling the threat of cyber risk during AI adoption Ever since AI's meteoric rise to prominence following the release of ChatGPT in November 2022, the technology has been at the centre of international debate. For every application in healthcare, education, and workplace efficiency, reports of abuse by cybercriminals for phishing campaigns, automating attacks, and ransomware have made mainstream news. Regardless of whether individuals and [...] 2025-03-17T06:49:07+00:00 https://informationsecuritybuzz.com/threat-of-cyber-risk-during-ai-adoptio/ www.secnews.physaphae.fr/article.php?IdArticle=8656157 False Ransomware,Threat,Medical ChatGPT 2.0000000000000000
Techworm - News Security Flaws Found In DeepSeek Lead To Jailbreak
DeepSeek R1, the AI model making all the buzz right now, has been found to contain several weaknesses that allowed security researchers at the cyber threat intelligence firm KELA to jailbreak it. KELA tested known jailbreak techniques against the chatbot and bypassed its restriction mechanisms. This allowed them to jailbreak it across a wide range of scenarios, enabling it to generate malicious outputs such as ransomware development, fabrication of sensitive content, and detailed instructions for creating toxins and explosive devices. For instance, the “Evil Jailbreak” method (which prompts the AI model to adopt an “evil” persona), which tricked earlier versions of ChatGPT and was patched long ago, still works on DeepSeek. The news comes as DeepSeek investigates a cyberattack and has temporarily stopped accepting new registrations. “Due to large-scale malicious attacks on DeepSeek’s services, we are temporarily limiting registrations to ensure continued service. Existing users can log in as usual,” DeepSeek’s status page reads. While the company has not confirmed what kind of cyberattack is disrupting its service, it appears to be a DDoS attack. DeepSeek has yet to comment on these vulnerabilities.
2025-01-28T19:37:26+00:00 https://www.techworm.net/2025/01/security-flaws-found-in-deepseek-leads-to-jailbreak.html www.secnews.physaphae.fr/article.php?IdArticle=8643848 False Ransomware,Vulnerability,Threat ChatGPT 3.0000000000000000
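Testing of this kind typically amounts to sending a model a battery of policy-probing prompts and checking whether it refuses. Below is a minimal refusal-rate harness sketch, not KELA's tooling: it assumes the DeepSeek API is OpenAI-compatible (the base_url and model name shown are assumptions), that DEEPSEEK_API_KEY is set in the environment, and that probes.txt is your own list of benign policy-probe prompts, one per line.

```python
# Minimal refusal-rate harness for an OpenAI-compatible chat endpoint (sketch).
# Assumptions: base_url and model name are illustrative; probes.txt is supplied
# by the tester; DEEPSEEK_API_KEY is set in the environment.
import os
from openai import OpenAI

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help", "not able to assist")

client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"],
                base_url="https://api.deepseek.com")  # assumed endpoint

def is_refusal(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

with open("probes.txt", encoding="utf-8") as fh:
    probes = [line.strip() for line in fh if line.strip()]

refused = 0
for probe in probes:
    reply = client.chat.completions.create(
        model="deepseek-chat",  # assumed model name
        messages=[{"role": "user", "content": probe}],
    ).choices[0].message.content
    refused += is_refusal(reply)

print(f"{refused}/{len(probes)} probes were refused")
```

A string-match refusal check is crude, but it is enough to compare refusal rates across model versions or against a baseline model.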
RiskIQ - Cyber Risk Firm (now Microsoft) Bumblebee malware returns after recent law enforcement disruption 2024-10-21T18:57:24+00:00 https://community.riskiq.com/article/b382c0b6 www.secnews.physaphae.fr/article.php?IdArticle=8601156 False Ransomware,Spam,Malware,Tool,Threat,Legislation ChatGPT 2.0000000000000000
AlienVault Lab Blog - AlienVault is a major defensive player in IOCs From Reactive to Proactive: Shifting Your Cybersecurity Strategy 2024-10-15T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/from-reactive-to-proactive-shifting-your-cybersecurity-strategy www.secnews.physaphae.fr/article.php?IdArticle=8598079 False Ransomware,Spam,Hack,Vulnerability,Threat ChatGPT 2.0000000000000000
RiskIQ - Cyber Risk Firm (now Microsoft) Weekly OSINT Highlights, 30 September 2024 2024-09-30T13:21:55+00:00 https://community.riskiq.com/article/70e8b264 www.secnews.physaphae.fr/article.php?IdArticle=8588927 False Ransomware,Malware,Tool,Vulnerability,Threat,Patching,Mobile ChatGPT,APT 36 2.0000000000000000
ProofPoint - Cyber Firms Proofpoint's Human-Centric Security Solutions Named SC Awards 2024 Finalist in Four Unique Categories 2024-08-30T07:00:00+00:00 https://www.proofpoint.com/us/blog/corporate-news/proofpoint-named-sc-awards-2024-finalist www.secnews.physaphae.fr/article.php?IdArticle=8566942 False Ransomware,Tool,Vulnerability,Threat,Cloud,Conference ChatGPT 2.0000000000000000
RiskIQ - Cyber Risk Firm (now Microsoft) Scam Attacks Taking Advantage of the Popularity of the Generative AI Wave 2024-07-26T19:24:17+00:00 https://community.riskiq.com/article/2826e7d7 www.secnews.physaphae.fr/article.php?IdArticle=8544990 True Ransomware,Spam,Malware,Tool,Threat,Studies ChatGPT 3.0000000000000000
The Register - English News Site Not-so-OpenAI allegedly never bothered to report 2023 data breach Also: F1 authority breached; Prudential victim count skyrockets; a new ransomware actor appears; and more security in brief. It's been a week of bad cyber security revelations for OpenAI, after news emerged that the startup failed to report a 2023 breach of its systems to anybody outside the organization, and that its ChatGPT app for macOS was coded without any regard for user privacy.… 2024-07-08T01:45:08+00:00 https://go.theregister.com/feed/www.theregister.com/2024/07/08/infosec_in_brief/ www.secnews.physaphae.fr/article.php?IdArticle=8532512 False Ransomware,Data Breach ChatGPT 3.0000000000000000
ProofPoint - Cyber Firms Cybersecurity Stop of the Month: Impersonation Attacks that Target the Supply Chain 2024-05-14T06:00:46+00:00 https://www.proofpoint.com/us/blog/email-and-cloud-threats/impersonation-attacks-target-supply-chain www.secnews.physaphae.fr/article.php?IdArticle=8499611 False Ransomware,Data Breach,Tool,Vulnerability,Threat ChatGPT 2.0000000000000000
ProofPoint - Cyber Firms GenAI Is Powering the Latest Surge in Modern Email Threats 2024-05-06T07:54:03+00:00
https://www.proofpoint.com/us/blog/email-and-cloud-threats/genai-powering-latest-surge-modern-email-threats www.secnews.physaphae.fr/article.php?IdArticle=8494488 False Ransomware,Data Breach,Tool,Vulnerability,Threat ChatGPT 3.0000000000000000
AlienVault Lab Blog - AlienVault is a major defensive player in IOCs The Security Risks of Microsoft Bing AI Chat at this Time 2024-04-10T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/microsoft-bing-ai-chat-is-a-bigger-security-issue-than-it-seems www.secnews.physaphae.fr/article.php?IdArticle=8479215 False Ransomware,Tool,Vulnerability ChatGPT 2.0000000000000000
HackRead - Cyber Research Report Uncovers Massive Sale of Compromised ChatGPT Credentials
By Deeba Ahmed Group-IB Report Warns of Evolving Cyber Threats Including AI and macOS Vulnerabilities and Ransomware Attacks. This is a post from HackRead.com Read the original post: Report Uncovers Massive Sale of Compromised ChatGPT Credentials
2024-03-05T21:30:37+00:00 https://www.hackread.com/massive-sale-of-compromised-chatgpt-credentials/ www.secnews.physaphae.fr/article.php?IdArticle=8459518 False Ransomware,Vulnerability ChatGPT 2.0000000000000000
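The Group-IB findings above concern credentials harvested by infostealers and resold on dark-web markets; checking whether corporate accounts already appear in public breach corpora is one small defensive complement. Below is a minimal sketch using the Have I Been Pwned v3 API; it assumes an HIBP API key in the HIBP_API_KEY environment variable, and the account list is illustrative. Note that HIBP indexes published breaches, so it will not necessarily surface stealer-log listings like those described in the report.

```python
# Minimal sketch: query the Have I Been Pwned v3 API for accounts seen in breaches.
# Assumptions: HIBP_API_KEY is set; the accounts below are hypothetical.
import os
import requests

API = "https://haveibeenpwned.com/api/v3/breachedaccount/{}"
HEADERS = {"hibp-api-key": os.environ["HIBP_API_KEY"],
           "user-agent": "breach-check-sketch"}

def breaches_for(account: str) -> list[str]:
    resp = requests.get(API.format(account), headers=HEADERS,
                        params={"truncateResponse": "true"}, timeout=10)
    if resp.status_code == 404:      # account not found in any indexed breach
        return []
    resp.raise_for_status()
    return [b["Name"] for b in resp.json()]

for account in ["alice@example.com", "bob@example.com"]:  # hypothetical accounts
    names = breaches_for(account)
    print(account, "->", ", ".join(names) if names else "no known breaches")
```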
RiskIQ - Cyber Risk Firm (now Microsoft) Staying ahead of threat actors in the age of AI 2024-03-05T19:03:47+00:00 https://community.riskiq.com/article/ed40fbef www.secnews.physaphae.fr/article.php?IdArticle=8459485 False Ransomware,Malware,Tool,Vulnerability,Threat,Studies,Medical,Technical APT 28,ChatGPT,APT 4 2.0000000000000000
We Live Security - ESET Antivirus Software Publisher ESET Research Podcast: ChatGPT, the MOVEit hack, and Pandora An AI chatbot inadvertently kindles a cybercrime boom, ransomware bandits plunder organizations without deploying ransomware, and a new botnet enslaves Android TV boxes 2024-01-31T10:30:00+00:00 https://www.welivesecurity.com/en/eset-research/eset-research-podcast-chatgpt-moveit-hack-pandora/ www.secnews.physaphae.fr/article.php?IdArticle=8445436 False Ransomware,Hack,Mobile ChatGPT 3.0000000000000000
HackRead - Cyber Research China Arrests 4 Who Weaponized ChatGPT for Ransomware Attacks
By Deeba Ahmed The police arrested two suspects in Beijing and two in Inner Mongolia. This is a post from HackRead.com Read the original post: China Arrests 4 Who Weaponized ChatGPT for Ransomware Attacks
2023-12-30T16:23:55+00:00 https://www.hackread.com/china-arrests-suspects-chatgpt-ransomware/ www.secnews.physaphae.fr/article.php?IdArticle=8431364 False Ransomware ChatGPT 3.0000000000000000
Check Point - Security Hardware Vendor Information is power, but misinformation is just as powerful
The disinformation and manipulation techniques employed by cybercriminals are becoming increasingly sophisticated due to the implementation of artificial intelligence in their systems. The post-truth era has reached new heights with the advent of artificial intelligence (AI). With the increasing popularity and use of generative AI tools such as ChatGPT, the task of discerning between what is real and fake has become more complicated, and cybercriminals are leveraging these tools to create increasingly sophisticated threats. Check Point Software Technologies has found that one in 34 companies has experienced an attempted ransomware attack in the first three quarters of 2023, an increase […]
2023-11-30T13:00:15+00:00 https://blog.checkpoint.com/artificial-intelligence/information-is-power-but-misinformation-is-just-as-powerful/ www.secnews.physaphae.fr/article.php?IdArticle=8418065 False Ransomware,Tool ChatGPT,ChatGPT 2.0000000000000000
ProofPoint - Cyber Firms Prédictions 2024 de Proofpoint \\: Brace for Impact<br>Proofpoint\\'s 2024 Predictions: Brace for Impact 2023-11-28T23:05:04+00:00 https://www.proofpoint.com/us/blog/ciso-perspectives/proofpoints-2024-predictions-brace-impact www.secnews.physaphae.fr/article.php?IdArticle=8417740 False Ransomware,Malware,Tool,Vulnerability,Threat,Mobile,Prediction,Prediction ChatGPT,ChatGPT 3.0000000000000000 Krebs on Security - Chercheur Américain Rencontrez le cerveau derrière le service de chat AI adapté aux logiciels malveillants \\ 'wormpt \\'<br>Meet the Brains Behind the Malware-Friendly AI Chat Service \\'WormGPT\\' WormGPT, a private new chatbot service advertised as a way to use Artificial Intelligence (AI) to help write malicious software without all the pesky prohibitions on such activity enforced by ChatGPT and Google Bard, has started adding restrictions on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into “a more controlled environment.” The large language models (LLMs) made by ChatGPT parent OpenAI or Google or Microsoft all have various safety measures designed to prevent people from abusing them for nefarious purposes - such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new LLM that was created specifically for cybercrime activities.]]> 2023-08-08T17:37:23+00:00 https://krebsonsecurity.com/2023/08/meet-the-brains-behind-the-malware-friendly-ai-chat-service-wormgpt/ www.secnews.physaphae.fr/article.php?IdArticle=8367397 False Ransomware,Malware ChatGPT,ChatGPT 3.0000000000000000 The Hacker News - The Hacker News est un blog de news de hack (surprenant non?) Allez au-delà des titres pour des plongées plus profondes dans le sous-sol cybercriminal<br>Go Beyond the Headlines for Deeper Dives into the Cybercriminal Underground Discover stories about threat actors\' latest tactics, techniques, and procedures from Cybersixgill\'s threat experts each month. Each story brings you details on emerging underground threats, the threat actors involved, and how you can take action to mitigate risks. Learn about the top vulnerabilities and review the latest ransomware and malware trends from the deep and dark web. Stolen ChatGPT]]> 2023-07-18T16:24:00+00:00 https://thehackernews.com/2023/07/go-beyond-headlines-for-deeper-dives.html www.secnews.physaphae.fr/article.php?IdArticle=8358216 False Ransomware,Malware,Vulnerability,Threat ChatGPT,ChatGPT 2.0000000000000000 knowbe4 - cybersecurity services Cyberheistnews Vol 13 # 26 [Eyes Open] La FTC révèle les cinq dernières escroqueries par SMS<br>CyberheistNews Vol 13 #26 [Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams CyberheistNews Vol 13 #26 CyberheistNews Vol 13 #26  |   June 27th, 2023 [Eyes Open] The FTC Reveals the Latest Top Five Text Message Scams The U.S. Federal Trade Commission (FTC) has published a data spotlight outlining the most common text message scams. Phony bank fraud prevention alerts were the most common type of text scam last year. "Reports about texts impersonating banks are up nearly tenfold since 2019 with median reported individual losses of $3,000 last year," the report says. 
These are the top five text scams reported by the FTC: Copycat bank fraud prevention alerts Bogus "gifts" that can cost you Fake package delivery problems Phony job offers Not-really-from-Amazon security alerts "People get a text supposedly from a bank asking them to call a number ASAP about suspicious activity or to reply YES or NO to verify whether a transaction was authorized. If they reply, they\'ll get a call from a phony \'fraud department\' claiming they want to \'help get your money back.\' What they really want to do is make unauthorized transfers. "What\'s more, they may ask for personal information like Social Security numbers, setting people up for possible identity theft." Fake gift card offers took second place, followed by phony package delivery problems. "Scammers understand how our shopping habits have changed and have updated their sleazy tactics accordingly," the FTC says. "People may get a text pretending to be from the U.S. Postal Service, FedEx, or UPS claiming there\'s a problem with a delivery. "The text links to a convincing-looking – but utterly bogus – website that asks for a credit card number to cover a small \'redelivery fee.\'" Scammers also target job seekers with bogus job offers in an attempt to steal their money and personal information. "With workplaces in transition, some scammers are using texts to perpetrate old-school forms of fraud – for example, fake \'mystery shopper\' jobs or bogus money-making offers for driving around with cars wrapped in ads," the report says. "Other texts target people who post their resumes on employment websites. They claim to offer jobs and even send job seekers checks, usually with instructions to send some of the money to a different address for materials, training, or the like. By the time the check bounces, the person\'s money – and the phony \'employer\' – are long gone." Finally, scammers impersonate Amazon and send fake security alerts to trick victims into sending money. "People may get what looks like a message from \'Amazon,\' asking to verify a big-ticket order they didn\'t place," the FTC says. "Concerned ]]> 2023-06-27T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-26-eyes-open-the-ftc-reveals-the-latest-top-five-text-message-scams www.secnews.physaphae.fr/article.php?IdArticle=8349704 False Ransomware,Spam,Malware,Hack,Tool,Threat FedEx,APT 28,APT 15,ChatGPT,ChatGPT 2.0000000000000000 knowbe4 - cybersecurity services Cyberheistnews Vol 13 # 25 [empreintes digitales partout] Les informations d'identification volées sont la cause profonde n ° 1 des violations de données<br>CyberheistNews Vol 13 #25 [Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches CyberheistNews Vol 13 #25 CyberheistNews Vol 13 #25  |   June 20th, 2023 [Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches Verizon\'s DBIR always has a lot of information to unpack, so I\'ll continue my review by covering how stolen credentials play a role in attacks. This year\'s Data Breach Investigations Report has nearly 1 million incidents in their data set, making it the most statistically relevant set of report data anywhere. So, what does the report say about the most common threat actions that are involved in data breaches? 
Overall, the use of stolen credentials is the overwhelming leader in data breaches, being involved in nearly 45% of breaches – this is more than double the second-place spot of "Other" (which includes a number of types of threat actions) and ransomware, which sits at around 20% of data breaches. According to Verizon, stolen credentials were the "most popular entry point for breaches." As an example, in Basic Web Application Attacks, the use of stolen credentials was involved in 86% of attacks. The prevalence of credential use should come as no surprise, given the number of attacks that have focused on harvesting online credentials to provide access to both cloud platforms and on-premises networks alike. And it\'s the social engineering attacks (whether via phish, vish, SMiSh, or web) where these credentials are compromised - something that can be significantly diminished by engaging users in security awareness training to familiarize them with common techniques and examples of attacks, so when they come across an attack set on stealing credentials, the user avoids becoming a victim. Blog post with links:https://blog.knowbe4.com/stolen-credentials-top-breach-threat [New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist Now there\'s a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever l]]> 2023-06-20T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-25-fingerprints-all-over-stolen-credentials-are-the-no1-root-cause-of-data-breaches www.secnews.physaphae.fr/article.php?IdArticle=8347292 False Ransomware,Data Breach,Spam,Malware,Hack,Vulnerability,Threat,Cloud ChatGPT,ChatGPT 2.0000000000000000 AlienVault Lab Blog - AlienVault est un acteur de defense majeur dans les IOC Rise of IA in Cybercrime: Comment Chatgpt révolutionne les attaques de ransomwares et ce que votre entreprise peut faire<br>Rise of AI in Cybercrime: How ChatGPT is revolutionizing ransomware attacks and what your business can do fastest-growing consumer app in internet history, reaching 100 million users as 2023 began. The generative AI application has revolutionized not only the world of artificial intelligence but is impacting almost every industry. In the world of cybersecurity, new tools and technologies are typically adopted quickly; unfortunately, in many cases, bad actors are the earliest to adopt and adapt. This can be bad news for your business, as it escalates the degree of difficulty in managing threats.  Using ChatGPT’s large language model, anyone can easily generate malicious code or craft convincing phishing emails, all without any technical expertise or coding knowledge. While cybersecurity teams can leverage ChatGPT defensively, the lower barrier to entry for launching a cyberattack has both complicated and escalated the threat landscape. Understanding the role of ChatGPT in modern ransomware attacks We’ve written about ransomware many times, but it’s crucial to reiterate that the cost to individuals, businesses, and institutions can be massive, both financially and in terms of data loss or reputational damage. 
With AI, cybercriminals have a potent tool at their disposal, enabling more precise, adaptable, and stealthy attacks. They\'re using machine learning algorithms to simulate trusted entities, create convincing phishing emails, and even evade detection. The problem isn\'t just the sophistication of the attacks, but their sheer volume. With AI, hackers can launch attacks on an unprecedented scale, exponentially expanding the breadth of potential victims. Today, hackers use AI to power their ransomware attacks, making them more precise, adaptable, and destructive. Cybercriminals can leverage AI for ransomware in many ways, but perhaps the easiest is more in line with how many ChatGPT users are using it: writing and creating content. For hackers, especially foreign ransomware gangs, AI can be used to craft sophisticated phishing emails that are much more difficult to detect than the poorly-worded message that was once so common with bad actors (and their equally bad grammar). Even more concerning, ChatGPT-fueled ransomware can mimic the style and tone of a trusted individual or company, tricking the recipient into clicking a malicious link or downloading an infected attachment. This is where the danger lies. Imagine your organization has the best cybersecurity awareness program, and all your employees have gained expertise in deciphering which emails are legitimate and which can be dangerous. Today, if the email can mimic tone and appear 100% genuine, how are the employees going to know? It’s almost down to a coin flip in terms of odds. Furthermore, AI-driven ransomware can study the behavior of the security software on a system, identify patterns, and then either modify itself or choose th]]> 2023-06-13T10:00:00+00:00 https://cybersecurity.att.com/blogs/security-essentials/how-chatgpt-is-revolutionizing-ransomware-attacks www.secnews.physaphae.fr/article.php?IdArticle=8344709 False Ransomware,Malware,Tool,Threat ChatGPT,ChatGPT 2.0000000000000000 knowbe4 - cybersecurity services Cyberheistnews Vol 13 # 22 [Eye on Fraud] Un examen plus approfondi de la hausse massive de 72% des attaques de phishing financier<br>CyberheistNews Vol 13 #22 [Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks CyberheistNews Vol 13 #22 CyberheistNews Vol 13 #22  |   May 31st, 2023 [Eye on Fraud] A Closer Look at the Massive 72% Spike in Financial Phishing Attacks With attackers knowing financial fraud-based phishing attacks are best suited for the one industry where the money is, this massive spike in attacks should both surprise you and not surprise you at all. When you want tires, where do you go? Right – to the tire store. Shoes? Yup – shoe store. The most money you can scam from a single attack? That\'s right – the financial services industry, at least according to cybersecurity vendor Armorblox\'s 2023 Email Security Threat Report. According to the report, the financial services industry as a target has increased by 72% over 2022 and was the single largest target of financial fraud attacks, representing 49% of all such attacks. When breaking down the specific types of financial fraud, it doesn\'t get any better for the financial industry: 51% of invoice fraud attacks targeted the financial services industry 42% were payroll fraud attacks 63% were payment fraud To make matters worse, nearly one-quarter (22%) of financial fraud attacks successfully bypassed native email security controls, according to Armorblox. 
That means one in five email-based attacks made it all the way to the Inbox. The next layer in your defense should be a user that\'s properly educated using security awareness training to easily identify financial fraud and other phishing-based threats, stopping them before they do actual damage. Blog post with links:https://blog.knowbe4.com/financial-fraud-phishing [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us Wednesday, June 7, @ 2:00 PM (ET), for a live demonstration of how KnowBe4 introduces a new-school approach to security awareness training and simulated phishing. Get a look at THREE NEW FEATURES and see how easy it is to train and phish your users. ]]> 2023-05-31T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-22-eye-on-fraud-a-closer-look-at-the-massive-72-percent-spike-in-financial-phishing-attacks www.secnews.physaphae.fr/article.php?IdArticle=8340859 False Ransomware,Malware,Hack,Tool,Threat,Conference Uber,ChatGPT,ChatGPT,Guam 2.0000000000000000 knowbe4 - cybersecurity services Cyberheistnews Vol 13 # 21 [Double Trouble] 78% des victimes de ransomwares sont confrontées à plusieurs extensions en tendance effrayante<br>CyberheistNews Vol 13 #21 [Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend CyberheistNews Vol 13 #21 CyberheistNews Vol 13 #21  |   May 23rd, 2023 [Double Trouble] 78% of Ransomware Victims Face Multiple Extortions in Scary Trend New data sheds light on how likely your organization will succumb to a ransomware attack, whether you can recover your data, and what\'s inhibiting a proper security posture. You have a solid grasp on what your organization\'s cybersecurity stance does and does not include. But is it enough to stop today\'s ransomware attacks? CyberEdge\'s 2023 Cyberthreat Defense Report provides some insight into just how prominent ransomware attacks are and what\'s keeping orgs from stopping them. According to the report, in 2023: 7% of organizations were victims of a ransomware attack 7% of those paid a ransom 73% were able to recover data Only 21.6% experienced solely the encryption of data and no other form of extortion It\'s this last data point that interests me. Nearly 78% of victim organizations experienced one or more additional forms of extortion. CyberEdge mentions threatening to publicly release data, notifying customers or media, and committing a DDoS attack as examples of additional threats mentioned by respondents. IT decision makers were asked to rate on a scale of 1-5 (5 being the highest) what were the top inhibitors of establishing and maintaining an adequate defense. The top inhibitor (with an average rank of 3.66) was a lack of skilled personnel – we\'ve long known the cybersecurity industry is lacking a proper pool of qualified talent. In second place, with an average ranking of 3.63, is low security awareness among employees – something only addressed by creating a strong security culture with new-school security awareness training at the center of it all. Blog post with links:https://blog.knowbe4.com/ransomware-victim-threats [Free Tool] Who Will Fall Victim to QR Code Phishing Attacks? Bad actors have a new way to launch phishing attacks to your users: weaponized QR codes. 
QR code phishing is especially dangerous because there is no URL to check and messages bypass traditional email filters. With the increased popularity of QR codes, users are more at ]]> 2023-05-23T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-21-double-trouble-78-percent-of-ransomware-victims-face-multiple-extortions-in-scary-trend www.secnews.physaphae.fr/article.php?IdArticle=8338709 False Ransomware,Hack,Tool,Vulnerability,Threat,Prediction ChatGPT 2.0000000000000000 knowbe4 - cybersecurity services Cyberheistnews Vol 13 # 19 [Watch Your Back] Nouvelle fausse erreur de mise à jour Chrome Attaque cible vos utilisateurs<br>CyberheistNews Vol 13 #19 [Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users CyberheistNews Vol 13 #19 CyberheistNews Vol 13 #19  |   May 9th, 2023 [Watch Your Back] New Fake Chrome Update Error Attack Targets Your Users Compromised websites (legitimate sites that have been successfully compromised to support social engineering) are serving visitors fake Google Chrome update error messages. "Google Chrome users who use the browser regularly should be wary of a new attack campaign that distributes malware by posing as a Google Chrome update error message," Trend Micro warns. "The attack campaign has been operational since February 2023 and has a large impact area." The message displayed reads, "UPDATE EXCEPTION. An error occurred in Chrome automatic update. Please install the update package manually later, or wait for the next automatic update." A link is provided at the bottom of the bogus error message that takes the user to what\'s misrepresented as a link that will support a Chrome manual update. In fact the link will download a ZIP file that contains an EXE file. The payload is a cryptojacking Monero miner. A cryptojacker is bad enough since it will drain power and degrade device performance. This one also carries the potential for compromising sensitive information, particularly credentials, and serving as staging for further attacks. This campaign may be more effective for its routine, innocent look. There are no spectacular threats, no promises of instant wealth, just a notice about a failed update. Users can become desensitized to the potential risks bogus messages concerning IT issues carry with them. Informed users are the last line of defense against attacks like these. New school security awareness training can help any organization sustain that line of defense and create a strong security culture. Blog post with links:https://blog.knowbe4.com/fake-chrome-update-error-messages A Master Class on IT Security: Roger A. Grimes Teaches You Phishing Mitigation Phishing attacks have come a long way from the spray-and-pray emails of just a few decades ago. Now they\'re more targeted, more cunning and more dangerous. And this enormous security gap leaves you open to business email compromise, session hijacking, ransomware and more. Join Roger A. Grimes, KnowBe4\'s Data-Driven Defense Evangelist, ]]> 2023-05-09T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-19-watch-your-back-new-fake-chrome-update-error-attack-targets-your-users www.secnews.physaphae.fr/article.php?IdArticle=8334782 False Ransomware,Data Breach,Spam,Malware,Tool,Threat,Prediction NotPetya,NotPetya,APT 28,ChatGPT,ChatGPT 2.0000000000000000 knowbe4 - cybersecurity services Cyberheistnews Vol 13 # 18 [Eye on Ai] Chatgpt a-t-il la cybersécurité indique-t-elle?<br>CyberheistNews Vol 13 #18 [Eye on AI] Does ChatGPT Have Cybersecurity Tells? 
CyberheistNews Vol 13 #18 CyberheistNews Vol 13 #18  |   May 2nd, 2023 [Eye on AI] Does ChatGPT Have Cybersecurity Tells? Poker players and other human lie detectors look for "tells," that is, a sign by which someone might unwittingly or involuntarily reveal what they know, or what they intend to do. A cardplayer yawns when they\'re about to bluff, for example, or someone\'s pupils dilate when they\'ve successfully drawn a winning card. It seems that artificial intelligence (AI) has its tells as well, at least for now, and some of them have become so obvious and so well known that they\'ve become internet memes. "ChatGPT and GPT-4 are already flooding the internet with AI-generated content in places famous for hastily written inauthentic content: Amazon user reviews and Twitter," Vice\'s Motherboard observes, and there are some ways of interacting with the AI that lead it into betraying itself for what it is. "When you ask ChatGPT to do something it\'s not supposed to do, it returns several common phrases. When I asked ChatGPT to tell me a dark joke, it apologized: \'As an AI language model, I cannot generate inappropriate or offensive content,\' it said. Those two phrases, \'as an AI language model\' and \'I cannot generate inappropriate content,\' recur so frequently in ChatGPT generated content that they\'ve become memes." That happy state of easy detection, however, is unlikely to endure. As Motherboard points out, these tells are a feature of "lazily executed" AI. With a little more care and attention, they\'ll grow more persuasive. One risk of the AI language models is that they can be adapted to perform social engineering at scale. In the near term, new-school security awareness training can help alert your people to the tells of automated scamming. And in the longer term, that training will adapt and keep pace with the threat as it evolves. Blog post with links:https://blog.knowbe4.com/chatgpt-cybersecurity-tells [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. Join us TOMORROW, Wednesday, May 3, @ 2:00 PM (ET), for a live demonstration of how KnowBe4]]> 2023-05-02T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-18-eye-on-ai-does-chatgpt-have-cybersecurity-tells www.secnews.physaphae.fr/article.php?IdArticle=8332823 False Ransomware,Malware,Hack,Threat ChatGPT,ChatGPT 2.0000000000000000 Anomali - Firm Blog Anomali Cyber Watch: Deux attaques de la chaîne d'approvisionnement enchaînées, leurre de communication DNS furtive de chien, Evilextractor exfiltrates sur le serveur FTP<br>Anomali Cyber Watch: Two Supply-Chain Attacks Chained Together, Decoy Dog Stealthy DNS Communication, EvilExtractor Exfiltrates to FTP Server Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed. Trending Cyber News and Threat Intelligence First-Ever Attack Leveraging Kubernetes RBAC to Backdoor Clusters (published: April 21, 2023) A new Monero cryptocurrency-mining campaign is the first recorded case of gaining persistence via Kubernetes (K8s) Role-Based Access Control (RBAC), according to Aquasec researchers. The recorded honeypot attack started with exploiting a misconfigured API server. 
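Since the entry point here was a misconfigured API server, one quick self-check a defender can run is whether the cluster answers resource requests from unauthenticated clients at all. The sketch below is illustrative and not part of the Aquasec research: the endpoint is a placeholder, it should only be pointed at infrastructure you are authorized to test, and certificate verification is disabled only on the assumption of a lab cluster with self-signed certificates.

```python
# Minimal sketch: does the API server answer resource requests made anonymously?
# API_SERVER is a hypothetical placeholder; run this only against your own cluster.
import requests

API_SERVER = "https://k8s.example.internal:6443"  # assumed endpoint

def anonymous_can_list_pods(base_url: str) -> bool:
    # No bearer token or client certificate is presented, so the request is
    # treated as system:anonymous; a 200 means unauthenticated enumeration works.
    resp = requests.get(f"{base_url}/api/v1/pods", verify=False, timeout=5)
    return resp.status_code == 200

if anonymous_can_list_pods(API_SERVER):
    print("WARNING: unauthenticated clients can list pods - lock down the API server")
else:
    print("Anonymous pod listing rejected (401/403), as expected")
```

A fuller review would also audit ClusterRoleBindings and DaemonSets for unexpected entries, which map directly onto the persistence steps described next.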
The attackers preceded by gathering information about the cluster, checking if their cluster was already deployed, and deleting some existing deployments. They used RBAC to gain persistence by creating a new ClusterRole and a new ClusterRole binding. The attackers then created a DaemonSet to use a single API request to target all nodes for deployment. The deployed malicious image from the public registry Docker Hub was named to impersonate a legitimate account and a popular legitimate image. It has been pulled 14,399 times and 60 exposed K8s clusters have been found with signs of exploitation by this campaign. Analyst Comment: Your company should have protocols in place to ensure that all cluster management and cloud storage systems are properly configured and patched. K8s buckets are too often misconfigured and threat actors realize there is potential for malicious activity. A defense-in-depth (layering of security mechanisms, redundancy, fail-safe defense processes) approach is a good mitigation step to help prevent actors from highly-active threat groups. MITRE ATT&CK: [MITRE ATT&CK] T1190 - Exploit Public-Facing Application | [MITRE ATT&CK] T1496 - Resource Hijacking | [MITRE ATT&CK] T1036 - Masquerading | [MITRE ATT&CK] T1489 - Service Stop Tags: Monero, malware-type:Cryptominer, detection:PUA.Linux.XMRMiner, file-type:ELF, abused:Docker Hub, technique:RBAC Buster, technique:Create ClusterRoleBinding, technique:Deploy DaemonSet, target-system:Linux, target:K8s, target:​​Kubernetes RBAC 3CX Software Supply Chain Compromise Initiated by a Prior Software Supply Chain Compromise; Suspected North Korean Actor Responsible (published: April 20, 2023) Investigation of the previously-reported 3CX supply chain compromise (March 2023) allowed Mandiant researchers to detect it was a result of prior software supply chain attack using a trojanized installer for X_TRADER, a software package provided by Trading Technologies. The attack involved the publicly-available tool SigFlip decrypting RC4 stream-cipher and starting publicly-available DaveShell shellcode for reflective loading. It led to installation of the custom, modular VeiledSignal backdoor. VeiledSignal additional modules inject the C2 module in a browser process instance, create a Windows named pipe and]]> 2023-04-25T18:22:00+00:00 https://www.anomali.com/blog/anomali-cyber-watch-two-supply-chain-attacks-chained-together-decoy-dog-stealthy-dns-communication-evilextractor-exfiltrates-to-ftp-server www.secnews.physaphae.fr/article.php?IdArticle=8331005 False Ransomware,Spam,Malware,Tool,Threat,Cloud Uber,APT 38,ChatGPT,APT 43 2.0000000000000000 Bleeping Computer - Magazine Américain Google Ads Push Bumblebee Malware utilisé par Ransomware Gangs<br>Google ads push BumbleBee malware used by ransomware gangs The enterprise-targeting Bumblebee malware is distributed through Google Ads and SEO poisoning that promote popular software like Zoom, Cisco AnyConnect, ChatGPT, and Citrix Workspace. 
[...]]]> 2023-04-22T10:08:16+00:00 https://www.bleepingcomputer.com/news/security/google-ads-push-bumblebee-malware-used-by-ransomware-gangs/ www.secnews.physaphae.fr/article.php?IdArticle=8330252 False Ransomware,Malware ChatGPT 2.0000000000000000 The Register - Site journalistique Anglais Les cyber-chefs américains avertissent l'IA aidera les escrocs, la Chine à développer des cyberattaques plus méchantes plus rapidement<br>US cyber chiefs warn AI will help crooks, China develop nastier cyberattacks faster It\'s not all doom and gloom because ML also amplifies defensive efforts, probably Bots like ChatGPT may not be able to pull off the next big Microsoft server worm or Colonial Pipeline ransomware super-infection but they may help criminal gangs and nation-state hackers develop some attacks against IT, according to Rob Joyce, director of the NSA\'s Cybersecurity Directorate.…]]> 2023-04-12T01:50:07+00:00 https://go.theregister.com/feed/www.theregister.com/2023/04/12/us_chatgpt_threat/ www.secnews.physaphae.fr/article.php?IdArticle=8326963 False Ransomware ChatGPT,ChatGPT 2.0000000000000000 knowbe4 - cybersecurity services Cyberheistnews Vol 13 # 15 [Le nouveau visage de la fraude] FTC fait la lumière sur les escroqueries d'urgence familiale améliorées AI-AI<br>CyberheistNews Vol 13 #15 [The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams CyberheistNews Vol 13 #15 CyberheistNews Vol 13 #15  |   April 11th, 2023 [The New Face of Fraud] FTC Sheds Light on AI-Enhanced Family Emergency Scams The Federal Trade Commission is alerting consumers about a next-level, more sophisticated family emergency scam that uses AI which imitates the voice of a "family member in distress." They started out with: "You get a call. There\'s a panicked voice on the line. It\'s your grandson. He says he\'s in deep trouble - he wrecked the car and landed in jail. But you can help by sending money. You take a deep breath and think. You\'ve heard about grandparent scams. But darn, it sounds just like him. How could it be a scam? Voice cloning, that\'s how." "Don\'t Trust The Voice" The FTC explains: "Artificial intelligence is no longer a far-fetched idea out of a sci-fi movie. We\'re living with it, here and now. A scammer could use AI to clone the voice of your loved one. All he needs is a short audio clip of your family member\'s voice - which he could get from content posted online - and a voice-cloning program. When the scammer calls you, he\'ll sound just like your loved one. "So how can you tell if a family member is in trouble or if it\'s a scammer using a cloned voice? Don\'t trust the voice. Call the person who supposedly contacted you and verify the story. Use a phone number you know is theirs. If you can\'t reach your loved one, try to get in touch with them through another family member or their friends." Full text of the alert is at the FTC website. Share with friends, family and co-workers:https://blog.knowbe4.com/the-new-face-of-fraud-ftc-sheds-light-on-ai-enhanced-family-emergency-scams A Master Class on IT Security: Roger A. Grimes Teaches Ransomware Mitigation Cybercriminals have become thoughtful about ransomware attacks; taking time to maximize your organization\'s potential damage and their payoff. Protecting your network from this growing threat is more important than ever. And nobody knows this more than Roger A. Grimes, Data-Driven Defense Evangelist at KnowBe4. 
With 30+ years of experience as a computer security consultant, instructor, and award-winning author, Roger has dedicated his life to making]]> 2023-04-11T13:16:54+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-15-the-new-face-of-fraud-ftc-sheds-light-on-ai-enhanced-family-emergency-scams www.secnews.physaphae.fr/article.php?IdArticle=8326650 False Ransomware,Data Breach,Spam,Malware,Hack,Tool,Threat ChatGPT,ChatGPT 2.0000000000000000 knowbe4 - cybersecurity services CyberheistNews Vol 13 # 14 [Eyes sur le prix] Comment les inconvénients croissants ont tenté un courteur par e-mail de 36 millions de vendeurs<br>CyberheistNews Vol 13 #14 [Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist CyberheistNews Vol 13 #14 CyberheistNews Vol 13 #14  |   April 4th, 2023 [Eyes on the Prize] How Crafty Cons Attempted a 36 Million Vendor Email Heist The details in this thwarted VEC attack demonstrate how the use of just a few key details can both establish credibility and indicate the entire thing is a scam. It\'s not every day you hear about a purely social engineering-based scam taking place that is looking to run away with tens of millions of dollars. But, according to security researchers at Abnormal Security, cybercriminals are becoming brazen and are taking their shots at very large prizes. This attack begins with a case of VEC – where a domain is impersonated. In the case of this attack, the impersonated vendor\'s domain (which had a .com top level domain) was replaced with a matching .cam domain (.cam domains are supposedly used for photography enthusiasts, but there\'s the now-obvious problem with it looking very much like .com to the cursory glance). The email attaches a legitimate-looking payoff letter complete with loan details. According to Abnormal Security, nearly every aspect of the request looked legitimate. The telltale signs primarily revolved around the use of the lookalike domain, but there were other grammatical mistakes (that can easily be addressed by using an online grammar service or ChatGPT). This attack was identified well before it caused any damage, but the social engineering tactics leveraged were nearly enough to make this attack successful. Security solutions will help stop most attacks, but for those that make it past scanners, your users need to play a role in spotting and stopping BEC, VEC and phishing attacks themselves – something taught through security awareness training combined with frequent simulated phishing and other social engineering tests. Blog post with screenshots and links:https://blog.knowbe4.com/36-mil-vendor-email-compromise-attack [Live Demo] Ridiculously Easy Security Awareness Training and Phishing Old-school awareness training does not hack it anymore. Your email filters have an average 7-10% failure rate; you need a strong human firewall as your last line of defense. 
Join us TOMORROW, Wednesday, April 5, @ 2:00 PM (ET), for a live demo of how KnowBe4 i]]> 2023-04-04T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-14-eyes-on-the-price-how-crafty-cons-attempted-a-36-million-vendor-email-heist www.secnews.physaphae.fr/article.php?IdArticle=8324667 False Ransomware,Malware,Hack,Threat ChatGPT,ChatGPT,APT 43 2.0000000000000000 knowbe4 - cybersecurity services Cyberheistnews Vol 13 # 13 [Oeil Overner] Comment déjouer les attaques de phishing basées sur l'IA sournoises [CyberheistNews Vol 13 #13 [Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks] CyberheistNews Vol 13 #13 CyberheistNews Vol 13 #13  |   March 28th, 2023 [Eye Opener] How to Outsmart Sneaky AI-Based Phishing Attacks Users need to adapt to an evolving threat landscape in which attackers can use AI tools like ChatGPT to craft extremely convincing phishing emails, according to Matthew Tyson at CSO. "A leader tasked with cybersecurity can get ahead of the game by understanding where we are in the story of machine learning (ML) as a hacking tool," Tyson writes. "At present, the most important area of relevance around AI for cybersecurity is content generation. "This is where machine learning is making its greatest strides and it dovetails nicely for hackers with vectors such as phishing and malicious chatbots. The capacity to craft compelling, well-formed text is in the hands of anyone with access to ChatGPT, and that\'s basically anyone with an internet connection." Tyson quotes Conal Gallagher, CIO and CISO at Flexera, as saying that since attackers can now write grammatically correct phishing emails, users will need to pay attention to the circumstances of the emails. "Looking for bad grammar and incorrect spelling is a thing of the past - even pre-ChatGPT phishing emails have been getting more sophisticated," Gallagher said. "We must ask: \'Is the email expected? Is the from address legit? Is the email enticing you to click on a link?\' Security awareness training still has a place to play here." Tyson explains that technical defenses have become very effective, so attackers focus on targeting humans to bypass these measures. "Email and other elements of software infrastructure offer built-in fundamental security that largely guarantees we are not in danger until we ourselves take action," Tyson writes. "This is where we can install a tripwire in our mindsets: we should be hyper aware of what it is we are acting upon when we act upon it. "Not until an employee sends a reply, runs an attachment, or fills in a form is sensitive information at risk. The first ring of defense in our mentality should be: \'Is the content I\'m looking at legit, not just based on its internal aspects, but given the entire context?\' The second ring of defense in our mentality then has to be, \'Wait! I\'m being asked to do something here.\'" New-school security awareness training with simulated phishing tests enables your employees to recognize increasingly sophisticated phishing attacks and builds a strong security culture. Remember: Culture eats strategy for breakfast and is always top-down. 
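Part of Gallagher's "is the from address legit?" check can be automated: flag sender domains that are close to, but not exactly, a domain you already trust, which also covers the .com-to-.cam swap described in the vendor email heist item above. The sketch below is a rough illustration using only the standard library; the trusted-domain list and similarity threshold are assumptions to tune for your own environment.

```python
# Minimal sketch: flag sender domains that look like, but are not, trusted domains.
# TRUSTED_DOMAINS and the threshold are illustrative assumptions.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"vendor-example.com", "knowbe4.com"}  # hypothetical allow-list

def lookalike_of(sender_domain: str, threshold: float = 0.85) -> str | None:
    sender_domain = sender_domain.lower()
    if sender_domain in TRUSTED_DOMAINS:
        return None                      # exact match: nothing suspicious
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold:
            return trusted               # near miss: likely impersonation
    return None

for domain in ["vendor-example.cam", "vendor-example.com", "totally-unrelated.org"]:
    hit = lookalike_of(domain)
    print(f"{domain} -> suspicious lookalike of {hit}" if hit else f"{domain} -> ok")
```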
Blog post with links:https://blog.knowbe4.com/identifying-ai-enabled-phishing ]]> 2023-03-28T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-13-eye-opener-how-to-outsmart-sneaky-ai-based-phishing-attacks www.secnews.physaphae.fr/article.php?IdArticle=8322503 False Ransomware,Malware,Hack,Tool,Threat,Guideline ChatGPT,ChatGPT 3.0000000000000000 The Hacker News - The Hacker News est un blog de news de hack (surprenant non?) New Cyber Platform Lab 1 Decodes Dark Web Data to Uncover Hidden Supply Chain Breaches 2023-03-20T16:14:00+00:00 https://thehackernews.com/2023/03/new-cyber-platform-lab-1-decodes-dark.html www.secnews.physaphae.fr/article.php?IdArticle=8319904 False Ransomware ChatGPT 3.0000000000000000 Anomali - Firm Blog Anomali Cyber Watch: Xenomorph Automates The Whole Fraud Chain on Android, IceFire Ransomware Started Targeting Linux, Mythic Leopard Delivers Spyware Using Romance Scam Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed. Trending Cyber News and Threat Intelligence Xenomorph V3: a New Variant with ATS Targeting More Than 400 Institutions (published: March 10, 2023) Newer versions of the Xenomorph Android banking trojan are able to target 400 applications: cryptocurrency wallets and mobile banking from around the World with the top targeted countries being Spain, Turkey, Poland, USA, and Australia (in that order). Since February 2022, several small, testing Xenomorph campaigns have been detected. Its current version Xenomorph v3 (Xenomorph.C) is available on the Malware-as-a-Service model. This trojan version was delivered using the Zombinder binding service to bind it to a legitimate currency converter. Xenomorph v3 automatically collects and exfiltrates credentials using the ATS (Automated Transfer Systems) framework. The command-and-control traffic is blended in by abusing Discord Content Delivery Network. Analyst Comment: Fraud chain automation makes Xenomorph v3 a dangerous malware that might significantly increase its prevalence on the threat landscape. Users should keep their mobile devices updated and avail of mobile antivirus and VPN protection services. Install only applications that you actually need, use the official store and check the app description and reviews. Organizations that publish applications for their customers are invited to use Anomali's Premium Digital Risk Protection service to discover rogue, malicious apps impersonating your brand that security teams typically do not search or monitor. 
MITRE ATT&CK: [MITRE ATT&CK] T1417.001 - Input Capture: Keylogging | [MITRE ATT&CK] T1417.002 - Input Capture: Gui Input Capture Tags: malware:Xenomorph, Mobile, actor:Hadoken Security Group, actor:HadokenSecurity, malware-type:Banking trojan, detection:Xenomorph.C, Malware-as-a-Service, Accessibility services, Overlay attack, Discord CDN, Cryptocurrency wallet, target-industry:Cryptocurrency, target-industry:Banking, target-country:Spain, target-country:ES, target-country:Turkey, target-country:TR, target-country:Poland, target-country:PL, target-country:USA, target-country:US, target-country:Australia, target-country:AU, malware:Zombinder, detection:Zombinder.A, Android Cobalt Illusion Masquerades as Atlantic Council Employee (published: March 9, 2023) A new campaign by Iran-sponsored Charming Kitten (APT42, Cobalt Illusion, Magic Hound, Phosphorous) was detected targeting Mahsa Amini protests and researchers who document the suppression of women and minority groups i]]> 2023-03-14T17:32:00+00:00 https://www.anomali.com/blog/anomali-cyber-watch-xenomorph-automates-the-whole-fraud-chain-on-android-icefire-ransomware-started-targeting-linux-mythic-leopard-delivers-spyware-using-romance-scam www.secnews.physaphae.fr/article.php?IdArticle=8318511 False Ransomware,Malware,Tool,Vulnerability,Threat,Guideline,Conference APT 35,ChatGPT,ChatGPT,APT 36,APT 42 2.0000000000000000 knowbe4 - cybersecurity services CyberheistNews Vol 13 #11 [Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears CyberheistNews Vol 13 #11 CyberheistNews Vol 13 #11  |   March 14th, 2023 [Heads Up] Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears Robert Lemos at DARKReading just reported on a worrying trend. The title said it all, and the news is that more than 4% of employees have put sensitive corporate data into the large language model, raising concerns that its popularity may result in massive leaks of proprietary information. Yikes. I'm giving you a short extract of the story and the link to the whole article is below. "Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models, and that information could be retrieved at a later date if proper data security isn't in place for the service. "In a recent report, data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential info, client data, source code, or regulated information to the LLM. "In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company. "And as more employees use ChatGPT and other AI-based services as productivity tools, the risk will grow, says Howard Ting, CEO of Cyberhaven. "'There was this big migration of data from on-prem to cloud, and the next big shift is going to be the migration of data into these generative apps," he says. 
"And how that plays out [remains to be seen] - I think, we're in pregame; we're not even in the first inning.'" Your employees need to be stepped through new-school security awareness training so that they understand the risks of doing things like this. Blog post with links:https://blog.knowbe4.com/employees-are-feeding-sensitive-biz-data-to-chatgpt-raising-security-fears [New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blockl]]> 2023-03-14T13:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-11-heads-up-employees-are-feeding-sensitive-biz-data-to-chatgpt-raising-security-fears www.secnews.physaphae.fr/article.php?IdArticle=8318404 False Ransomware,Data Breach,Spam,Malware,Threat,Guideline,Medical ChatGPT,ChatGPT 2.0000000000000000 InformationSecurityBuzzNews - Site de News Securite New Rise In ChatGPT Scams Reported By Fraudsters 2023-03-09T21:19:11+00:00 https://informationsecuritybuzz.com/new-rise-chatgpt-scams-reported-fraudsters/ www.secnews.physaphae.fr/article.php?IdArticle=8317049 False Ransomware,Hack ChatGPT,ChatGPT 2.0000000000000000 knowbe4 - cybersecurity services CyberheistNews Vol 13 #07 [Scam of the Week] The Turkey-Syria Earthquake CyberheistNews Vol 13 #07 CyberheistNews Vol 13 #07  |   February 14th, 2023 [Scam of the Week] The Turkey-Syria Earthquake Just when you think they cannot sink any lower, criminal internet scum is now exploiting the recent earthquake in Turkey and Syria. Less than 24 hours after two massive earthquakes claimed the lives of tens of thousands of people, cybercrooks are already piggybacking on the horrible humanitarian crisis. You need to alert your employees, friends and family... again. Just one example are scammers that pose as representatives from a Ukrainian charity foundation that seeks money to help those affected by the natural disasters that struck in the early hours of Monday. There are going to be a raft of scams varying from blood drives to pleas for charitable contributions for victims and their families. Unfortunately, this type of scam is the worst kind of phishbait, and it is a very good idea to inoculate people before they get suckered into falling for a scam like this. I suggest you send the following short alert to as many people as you can. As usual, feel free to edit: [ALERT] "Lowlife internet scum is trying to benefit from the Turkey-Syria earthquake. The first phishing campaigns have already been sent and more will be coming that try to trick you into clicking on a variety of links about blood drives, charitable donations, or "exclusive" videos. "Don't let them shock you into clicking on anything, or open possibly dangerous attachments you did not ask for! Anything you receive about this recent earthquake, be very suspicious. With this topic, think three times before you click. It is very possible that it is a scam, even though it might look legit or was forwarded to you by a friend -- be especially careful when it seems to come from someone you know through email, a text or social media postings because their account may be hacked. "In case you want to donate to charity, go to your usual charity by typing their name in the address bar of your browser and do not click on a link in any email. Remember, these precautions are just as important at the house as in the office, so tell your friends and family." It is unfortunate that we continue to have to warn against the bad actors on the internet that use these tragedies for their own benefit. 
For KnowBe4 customers, we have a few templates with this topic in the Current Events. It's a good idea to send one to your users this week. Blog post with links:https://blog.knowbe4.com/scam-of-the-week-the-turkey-syria-earthquake ]]> 2023-02-14T14:00:00+00:00 https://blog.knowbe4.com/cyberheistnews-vol-13-07-scam-of-the-week-the-turkey-syria-earthquake www.secnews.physaphae.fr/article.php?IdArticle=8310010 False Ransomware,Spam,Threat,Guideline ChatGPT 2.0000000000000000 Anomali - Firm Blog Anomali Cyber Watch: Turla Re-Registered Andromeda Domains, SpyNote Is More Popular after the Source Code Publication, Typosquatted Site Used to Leak Company\'s Data Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed. Trending Cyber News and Threat Intelligence OPWNAI : Cybercriminals Starting to Use ChatGPT (published: January 6, 2023) Check Point researchers have detected multiple underground forum threads outlining experimenting with and abusing ChatGPT (Generative Pre-trained Transformer), the revolutionary artificial intelligence (AI) chatbot tool capable of generating creative responses in a conversational manner. Several actors have built schemes to produce AI outputs (graphic art, books) and sell them as their own. Other actors experiment with instructions to write an AI-generated malicious code while avoiding ChatGPT guardrails that should prevent such abuse. Two actors shared samples allegedly created using ChatGPT: a basic Python-based stealer, a Java downloader that stealthily runs payloads using PowerShell, and a cryptographic tool. Analyst Comment: ChatGPT and similar tools can be of great help to humans creating art, writing texts, and programming. At the same time, it can be a dangerous tool enabling even low-skill threat actors to create convincing social-engineering lures and even new malware. MITRE ATT&CK: [MITRE ATT&CK] T1566 - Phishing | [MITRE ATT&CK] T1059.001: PowerShell | [MITRE ATT&CK] T1048.003 - Exfiltration Over Alternative Protocol: Exfiltration Over Unencrypted/Obfuscated Non-C2 Protocol | [MITRE ATT&CK] T1560 - Archive Collected Data | [MITRE ATT&CK] T1005: Data from Local System Tags: ChatGPT, Artificial intelligence, OpenAI, Phishing, Programming, Fraud, Chatbot, Python, Java, Cryptography, FTP Turla: A Galaxy of Opportunity (published: January 5, 2023) Russia-sponsored group Turla re-registered expired domains for old Andromeda malware to select a Ukrainian target from the existing victims. Andromeda sample, known from 2013, infected the Ukrainian organization in December 2021 via user-activated LNK file on an infected USB drive. Turla re-registered the Andromeda C2 domain in January 2022, profiled and selected a single victim, and pushed its payloads in September 2022. First, the Kopiluwak profiling tool was downloaded for system reconnaissance, two days later, the Quietcanary backdoor was deployed to find and exfiltrate files created in 2021-2022. Analyst Comment: Advanced groups are often utilizing commodity malware to blend their traffic with less sophisticated threats. Turla’s tactic of re-registering old but active C2 domains gives the group a way-in to the pool of existing targets. 
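Because Turla's trick relies on old C2 domains quietly changing hands, one hedge for defenders is to re-check the registration dates of domains sitting in aging IOC lists. The sketch below is illustrative only: it assumes the third-party python-whois package, the domain list and cutoff date are placeholders, and WHOIS data is inconsistent, so a hit is a prompt for investigation rather than proof.

```python
# Minimal sketch: flag indicator domains whose registration looks newer than the
# campaign they came from, hinting that an old C2 domain may have been re-registered.
# Assumptions: third-party python-whois package; domains and cutoff are placeholders.
from datetime import datetime
import whois

CUTOFF = datetime(2022, 1, 1)             # when the original IOC list was compiled
OLD_IOC_DOMAINS = ["old-c2-example.com"]  # hypothetical indicator domains

def first_date(value):
    # python-whois may return a single datetime or a list of them.
    if isinstance(value, list):
        value = value[0]
    return value

for domain in OLD_IOC_DOMAINS:
    record = whois.whois(domain)
    created = first_date(record.creation_date)
    if created and created > CUTOFF:
        print(f"{domain}: registered {created:%Y-%m-%d}, after the IOC cutoff - possible re-registration")
    else:
        print(f"{domain}: no sign of recent re-registration")
```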
Organizations should be vigilant to all kinds of existing infections and clean them up, even if assessed as “less dangerous.” All known network and host-based indicators and hunting rules associated […] 2023-01-10T16:30:00+00:00 https://www.anomali.com/blog/anomali-cyber-watch-turla-re-registered-andromeda-domains-spynote-is-more-popular-after-the-source-code-publication-typosquatted-site-used-to-leak-companys-data www.secnews.physaphae.fr/article.php?IdArticle=8299602 False Ransomware,Malware,Tool,Threat ChatGPT,APT-C-36 2.0000000000000000