What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
RiskIQ.webp 2024-04-10 20:29:45 Malvertising Campaigns Hijack Social Media to Spread Stealers Targeting AI Software Users
(direct link)
#### Targeted Geolocations
- Southern Europe
- Northern Europe
- Western Europe
- Eastern Europe

## Snapshot
Bitdefender discusses the increasing use of artificial intelligence (AI) by cybercriminals to conduct malvertising campaigns on social media platforms.

## Description
Threat actors have been impersonating popular AI software such as Midjourney, Sora AI, DALL-E 3, Evoto, and ChatGPT 5 on Facebook to trick users into downloading purported official desktop versions of these AI software. The malicious webpages then download intrusive stealers such as Rilide, Vidar, IceRAT, and Nova Stealer, which harvest sensitive information including credentials, autocomplete data, credit card information, and crypto wallet information. These malvertising campaigns have targeted European users and have a significant reach through Meta's sponsored ad system. The campaigns are organized by taking over existing Facebook accounts, changing the page's content to appear legitimate, and boosting the page's popularity with engaging content and AI-generated images.

## References
[https://www.bitdefender.com/blog/labs/ai-meets-next-gen-info-stealers-in-social-media-malvertising-campaigns/#new_tab](https://www.bitdefender.com/blog/labs/ai-meets-next-gen-info-stealers-in-social-media-malvertising-campaigns/#new_tab)
ChatGPT ★★★
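The lure described above (sponsored posts impersonating an AI brand while delivering a stealer from an unrelated domain) can be sketched as a minimal defensive check. This is an illustrative assumption, not part of the Bitdefender report; the whitelist of official domains below is hypothetical and intentionally incomplete:

```python
from urllib.parse import urlparse

# Hypothetical whitelist for illustration only; a real deployment would
# maintain a vetted mapping of brands to their official domains.
OFFICIAL_DOMAINS = {
    "midjourney": {"midjourney.com"},
    "chatgpt": {"openai.com", "chatgpt.com"},
    "dall-e": {"openai.com"},
}

def suspicious_download(url: str, advertised_brand: str) -> bool:
    """Flag a download link whose host does not belong to the brand the
    ad claims to represent -- the mismatch at the core of this lure."""
    host = urlparse(url).hostname or ""
    allowed = OFFICIAL_DOMAINS.get(advertised_brand.lower(), set())
    # Accept the apex domain or any of its subdomains; anything else is flagged.
    return not any(host == d or host.endswith("." + d) for d in allowed)
```

A page advertising "Midjourney" but serving `midjourney-desktop.top/setup.exe` would be flagged, while a link to `www.midjourney.com` would pass.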
RiskIQ.webp 2024-03-05 19:03:47 Staying ahead of threat actors in the age of AI
(direct link)
## Snapshot
Over the last year, the speed, scale, and sophistication of attacks have increased alongside the rapid development and adoption of AI. Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries. At the same time, it is also important for us to understand how AI can be potentially misused in the hands of threat actors. In collaboration with OpenAI, today we are publishing research on emerging threats in the age of AI, focusing on identified activity associated with known threat actors, including prompt injections, attempted misuse of large language models (LLMs), and fraud. Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape. You can read OpenAI's blog on the research [here](https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors).

Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI. However, Microsoft and our partners continue to study this landscape closely. The objective of Microsoft's partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse. As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models.
In addition, we are also deeply committed to using generative AI to disrupt threat actors and leverage the power of new tools, including [Microsoft Copilot for Security](https://www.microsoft.com/security/business/ai-machine-learning/microsoft-security-copilot), to elevate defenders everywhere.

## Activity Overview

### **A principled approach to detecting and blocking threat actors**

The progress of technology creates a demand for strong cybersecurity and safety measures. For example, the White House's Executive Order on AI requires rigorous safety testing and government supervision for AI systems that have major impacts on national and economic security or public health and safety. Our actions enhancing the safeguards of our AI models and partnering with our ecosystem on the safe creation, implementation, and use of these models align with the Executive Order's request for comprehensive AI safety and security standards.

In line with Microsoft's leadership across AI and cybersecurity, today we are announcing principles shaping Microsoft's policy and actions mitigating the risks associated with the use of our AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates we track. These principles include:

- **Identification and action against malicious threat actors' use:** Upon detection of the use of any Microsoft AI application programming interfaces (APIs), services, or systems by an identified malicious threat actor, including nation-state APTs or APMs, or the cybercrime syndicates we track, Microsoft will take appropriate action to disrupt their activities, such as disabling the accounts used, terminating services, or limiting access to resources.
- **Notification to other AI service providers:** When we detect a threat actor's use of another service provider's AI, AI APIs, services, and/or systems, Microsoft will promptly notify the service provider and share relevant data. This enables the service provider to independently verify our findings and take action in accordance with their own policies.
- **Collaboration with other stakeholders:** Microsoft will collaborate with other stakeholders to regularly exchange information

Ransomware Malware Tool Vulnerability Threat Studies Medical Technical APT 28 ChatGPT APT 4 ★★
RiskIQ.webp 2023-11-08 18:59:39 Predator AI | ChatGPT-Powered Infostealer Takes Aim at Cloud Platforms (direct link)
#### Description
SentinelLabs has identified a new Python-based infostealer and hacktool called 'Predator AI' that is designed to target cloud services. Predator AI is advertised through Telegram channels related to hacking. The main purpose of Predator is to facilitate web application attacks against various commonly used technologies, including content management systems (CMS) like WordPress, as well as cloud email services like AWS SES. However, Predator is a multi-purpose tool, much like the AlienFox and Legion cloud spamming toolsets. These toolsets share considerable overlap in publicly available code that each repurposes for their brand's own use, including the use of Androxgh0st and Greenbot modules. The Predator AI developer implemented a ChatGPT-driven class into the Python script, which is designed to make the tool easier to use and to serve as a single text-driven interface between disparate features. There were several projects like BlackMamba that ultimately were more hype than the tool could deliver. Predator AI is a small step forward in this space: the actor is actively working on making a tool that can utilize AI.

#### Reference URL(s)
1. https://www.sentinelone.com/labs/predator-ai-chatgpt-powered-infostealer-takes-aim-at-cloud-platforms/

#### Publication Date
November 7, 2023

#### Author(s)
Alex Delamotte
Tool Cloud ChatGPT ★★
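SentinelLabs describes the design only in outline: a ChatGPT-driven class acting as a single text-driven interface in front of disparate features. A minimal sketch of that pattern (not Predator AI's actual code; class and method names are hypothetical, and the LLM call is stubbed to stay offline) might look like:

```python
# Hypothetical sketch of a "single text-driven interface" dispatching to
# feature modules, the design described in the SentinelLabs report.
class TextInterface:
    """Routes free-text commands to features; unknown input goes to an LLM."""

    def __init__(self):
        # Map command keywords to handlers; real tooling would register more.
        self.features = {
            "scan": self.scan,
            "help": self.show_help,
        }

    def scan(self, arg: str) -> str:
        return f"scanning {arg}"  # placeholder for a feature module

    def show_help(self, arg: str) -> str:
        return "commands: " + ", ".join(sorted(self.features))

    def ask_llm(self, prompt: str) -> str:
        # Stub: a ChatGPT-style backend would interpret prompts that match
        # no known command; stubbed here rather than calling a real API.
        return f"(LLM would answer: {prompt!r})"

    def handle(self, line: str) -> str:
        cmd, _, arg = line.strip().partition(" ")
        handler = self.features.get(cmd.lower())
        return handler(arg) if handler else self.ask_llm(line)
```

The single `handle` entry point is what makes the tool "easier to use": every feature, and the LLM fallback, sits behind one text prompt.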
Last update at: 2024-05-08 09:07:55