What's new arround internet

Last one

Src Date (GMT) Titre Description Tags Stories Notes
ProofPoint.webp 2024-05-01 05:12:14 Quelle est la meilleure façon d'arrêter la perte de données Genai?Adopter une approche centrée sur l'homme
What\\'s the Best Way to Stop GenAI Data Loss? Take a Human-Centric Approach
(lien direct)
Chief information security officers (CISOs) face a daunting challenge as they work to integrate generative AI (GenAI) tools into business workflows. Robust data protection measures are important to protect sensitive data from being leaked through GenAI tools. But CISOs can\'t just block access to GenAI tools entirely. They must find ways to give users access because these tools increase productivity and drive innovation. Unfortunately, legacy data loss prevention (DLP) tools can\'t help with achieving the delicate balance between security and usability.   Today\'s release of Proofpoint DLP Transform changes all that. It provides a modern alternative to legacy DLP tools in a single, economically attractive package. Its innovative features help CISOs strike the right balance between protecting data and usability. It\'s the latest addition to our award-winning DLP solution, which was recognized as a 2024 Gartner® Peer Insights™ Customers\' Choice for Data Loss Prevention. Proofpoint was the only vendor that placed in the upper right “Customers\' Choice” Quadrant.  In this blog, we\'ll dig into some of our latest research about GenAI and data loss risks. And we\'ll explain how Proofpoint DLP Transform provides you with a human-centric approach to reduce those risks.  GenAI increases data loss risks  Users can make great leaps in productivity with ChatGPT and other GenAI tools. However, GenAI also introduces a new channel for data loss. Employees often enter confidential data into these tools as they use them to expedite their tasks.   Security pros are worried, too. Recent Proofpoint research shows that:  Generative AI is the fastest-growing area of concern for CISOs  59% of board members believe that GenAI is a security risk for their business  “Browsing GenAI sites” is one of the top five alert scenarios configured by companies that use Proofpoint Information Protection  Valuable business data like mergers and acquisitions (M&A) documents, supplier contracts, and price lists are listed as the top data to protect   A big problem faced by CISOs is that legacy DLP tools can\'t capture user behavior and respond to natural language processing-based user interfaces. This leaves security gaps. That\'s why they often use blunt tools like web filtering to block employees from using GenAI apps altogether.   You can\'t enforce acceptable use policies for GenAI if you don\'t understand your content and how employees are interacting with it. If you want your employees to use these tools without putting your data security at risk, you need to take a human-centric approach to data loss.  A human-centric approach stops data loss  With a human-centric approach, you can detect data loss risk across endpoints and cloud apps like Microsoft 365, Google Workspace and Salesforce with speed. Insights into user intent allow you to move fast and take the right steps to respond to data risk.  Proofpoint DLP Transform takes a human-centric approach to solving the security gaps with GenAI. It understands employee behavior as well as the data that they are handling. It surgically allows and disallows employees to use GenAI tools such as OpenAI ChatGPT and Google Gemini based on employee behavior and content inputs, even if the data has been manipulated or has gone through multiple channels (email, web, endpoint or cloud) before reaching it.   Proofpoint DLP Transform accurately identifies sensitive content using classical content and LLM-powered data classifiers and provides deep visibility into user behavior. This added context enables analysts to reach high-fidelity verdicts about data risk across all key channels including email, cloud, and managed and unmanaged endpoints.  With a unified console and powerful analytics, Proofpoint DLP Transform can accelerate incident resolution natively or as part of the security operations (SOC) ecosystem. It is built on a cloud-native architecture and features modern privacy controls. Its lightweight and highly stable user-mode agent is unique in Tool Medical Cloud ChatGPT ★★★
The_Hackers_News.webp 2024-04-17 16:37:00 Genai: un nouveau mal de tête pour les équipes de sécurité SaaS
GenAI: A New Headache for SaaS Security Teams
(lien direct)
L'introduction du Chatgpt d'Open Ai \\ a été un moment déterminant pour l'industrie du logiciel, touchant une course Genai avec sa version de novembre 2022.Les fournisseurs SaaS se précipitent désormais pour mettre à niveau les outils avec des capacités de productivité améliorées qui sont motivées par une IA générative. Parmi une large gamme d'utilisations, les outils Genai permettent aux développeurs de créer plus facilement des logiciels, d'aider les équipes de vente dans la rédaction de courrier électronique banal,
The introduction of Open AI\'s ChatGPT was a defining moment for the software industry, touching off a GenAI race with its November 2022 release. SaaS vendors are now rushing to upgrade tools with enhanced productivity capabilities that are driven by generative AI. Among a wide range of uses, GenAI tools make it easier for developers to build software, assist sales teams in mundane email writing,
Tool Cloud ChatGPT ★★
ProofPoint.webp 2024-03-19 05:00:28 Le rapport du paysage de la perte de données 2024 explore la négligence et les autres causes communes de perte de données
The 2024 Data Loss Landscape Report Explores Carelessness and Other Common Causes of Data Loss
(lien direct)
La perte de données est un problème de personnes, ou plus précisément, un problème de personnes imprudentes.C'est la conclusion de notre nouveau rapport, le paysage de la perte de données 2024, que Proofpoint lance aujourd'hui. Nous avons utilisé des réponses à l'enquête de 600 professionnels de la sécurité et des données de la protection des informations de Proofpoint pour explorer l'état actuel de la prévention de la perte de données (DLP) et des menaces d'initiés.Dans notre rapport, nous considérons également ce qui est susceptible de venir ensuite dans cet espace à maturation rapide. De nombreuses entreprises considèrent aujourd'hui toujours leur programme DLP actuel comme émergent ou évoluant.Nous voulions donc identifier les défis actuels et découvrir des domaines d'opportunité d'amélioration.Les praticiens de 12 pays et 17 industries ont répondu aux questions allant du comportement des utilisateurs aux conséquences réglementaires et partageaient leurs aspirations à l'état futur de DLP \\. Ce rapport est une première pour Proofpoint.Nous espérons que cela deviendra une lecture essentielle pour toute personne impliquée dans la tenue de sécuriser les données.Voici quelques thèmes clés du rapport du paysage de la perte de données 2024. La perte de données est un problème de personnes Les outils comptent, mais la perte de données est définitivement un problème de personnes.2023 Données de Tessian, une entreprise de point de preuve, montre que 33% des utilisateurs envoient une moyenne d'un peu moins de deux e-mails mal dirigés chaque année.Et les données de la protection de l'information ProofPoint suggèrent que à peu de 1% des utilisateurs sont responsables de 90% des alertes DLP dans de nombreuses entreprises. La perte de données est souvent causée par la négligence Les initiés malveillants et les attaquants externes constituent une menace significative pour les données.Cependant, plus de 70% des répondants ont déclaré que les utilisateurs imprudents étaient une cause de perte de données pour leur entreprise.En revanche, moins de 50% ont cité des systèmes compromis ou mal configurés. La perte de données est répandue La grande majorité des répondants de notre enquête ont signalé au moins un incident de perte de données.Les incidents moyens globaux par organisation sont de 15. L'échelle de ce problème est intimidante, car un travail hybride, l'adoption du cloud et des taux élevés de roulement des employés créent tous un risque élevé de données perdues. La perte de données est dommageable Plus de la moitié des répondants ont déclaré que les incidents de perte de données entraînaient des perturbations commerciales et une perte de revenus.Celles-ci ne sont pas les seules conséquences dommageables.Près de 40% ont également déclaré que leur réputation avait été endommagée, tandis que plus d'un tiers a déclaré que leur position concurrentielle avait été affaiblie.De plus, 36% des répondants ont déclaré avoir été touchés par des pénalités réglementaires ou des amendes. Préoccupation croissante concernant l'IA génératrice De nouvelles alertes déclenchées par l'utilisation d'outils comme Chatgpt, Grammarly et Google Bard ne sont devenus disponibles que dans la protection des informations de Proofpoint cette année.Mais ils figurent déjà dans les cinq règles les plus implémentées parmi nos utilisateurs.Avec peu de transparence sur la façon dont les données soumises aux systèmes d'IA génératives sont stockées et utilisées, ces outils représentent un nouveau canal dangereux pour la perte de données. DLP est bien plus que de la conformité La réglementation et la législation ont inspiré de nombreuses initiatives précoces du DLP.Mais les praticiens de la sécurité disent maintenant qu'ils sont davantage soucieux de protéger la confidentialité des utilisateurs et des données commerciales sensibles. Des outils de DLP ont évolué pour correspondre à cette progression.De nombreux outils Tool Threat Legislation Cloud ChatGPT ★★★
AlienVault.webp 2024-03-07 11:00:00 Sécuriser l'IA
Securing AI
(lien direct)
With the proliferation of AI/ML enabled technologies to deliver business value, the need to protect data privacy and secure AI/ML applications from security risks is paramount. An AI governance  framework model like the NIST AI RMF to enable business innovation and manage risk is just as important as adopting guidelines to secure AI. Responsible AI starts with securing AI by design and securing AI with Zero Trust architecture principles. Vulnerabilities in ChatGPT A recent discovered vulnerability found in version gpt-3.5-turbo exposed identifiable information. The vulnerability was reported in the news late November 2023. By repeating a particular word continuously to the chatbot it triggered the vulnerability. A group of security researchers with Google DeepMind, Cornell University, CMU, UC Berkeley, ETH Zurich, and the University of Washington studied the “extractable memorization” of training data that an adversary can extract by querying a ML model without prior knowledge of the training dataset. The researchers’ report show an adversary can extract gigabytes of training data from open-source language models. In the vulnerability testing, a new developed divergence attack on the aligned ChatGPT caused the model to emit training data 150 times higher. Findings show larger and more capable LLMs are more vulnerable to data extraction attacks, emitting more memorized training data as the volume gets larger. While similar attacks have been documented with unaligned models, the new ChatGPT vulnerability exposed a successful attack on LLM models typically built with strict guardrails found in aligned models. This raises questions about best practices and methods in how AI systems could better secure LLM models, build training data that is reliable and trustworthy, and protect privacy. U.S. and UK’s Bilateral cybersecurity effort on securing AI The US Cybersecurity Infrastructure and Security Agency (CISA) and UK’s National Cyber Security Center (NCSC) in cooperation with 21 agencies and ministries from 18 other countries are supporting the first global guidelines for AI security. The new UK-led guidelines for securing AI as part of the U.S. and UK’s bilateral cybersecurity effort was announced at the end of November 2023. The pledge is an acknowledgement of AI risk by nation leaders and government agencies worldwide and is the beginning of international collaboration to ensure the safety and security of AI by design. The Department of Homeland Security (DHS) CISA and UK NCSC joint guidelines for Secure AI system Development aims to ensure cybersecurity decisions are embedded at every stage of the AI development lifecycle from the start and throughout, and not as an afterthought. Securing AI by design Securing AI by design is a key approach to mitigate cybersecurity risks and other vulnerabilities in AI systems. Ensuring the entire AI system development lifecycle process is secure from design to development, deployment, and operations and maintenance is critical to an organization realizing its full benefits. The guidelines documented in the Guidelines for Secure AI System Development aligns closely to software development life cycle practices defined in the NSCS’s Secure development and deployment guidance and the National Institute of Standards and Technology (NIST) Secure Software Development Framework (SSDF). The 4 pillars that embody the Guidelines for Secure AI System Development offers guidance for AI providers of any systems whether newly created from the ground up or built on top of tools and services provided from Tool Vulnerability Threat Mobile Medical Cloud Technical ChatGPT ★★
Netskope.webp 2023-12-12 17:05:17 Mémo sur les menaces du cloud: Extraction des données de formation des modèles de langage générateur AI
Cloud Threats Memo: Extracting Training Data from Generative AI Language Models
(lien direct)
> Cette année restera dans les mémoires pour la révolution de Chatgpt (le site Web a été visité par 1,7 milliard d'utilisateurs en octobre 2023, avec 13,73% de la croissance par rapport au mois précédent) et pour l'adoption généralisée des technologies génératrices de l'IA dans notre vie quotidienne.L'un des aspects clés des modèles de langue utilisés pour [& # 8230;]
>This year will probably be remembered for the revolution of ChatGPT (the website was visited by 1.7 billion users in October 2023, with 13.73% of growth compared to the previous month) and for the widespread adoption of generative AI technologies in our daily life. One of the key aspects of the language models used for […]
Cloud ChatGPT ★★
The_Hackers_News.webp 2023-11-22 16:38:00 Les solutions AI sont la nouvelle ombre IT
AI Solutions Are the New Shadow IT
(lien direct)
Les employés ambitieux vantent de nouveaux outils d'IA, ignorent les risques de sécurité SaaS sérieux comme le SaaS l'ombre du passé, l'IA place les CISO et les équipes de cybersécurité dans un endroit dur mais familier. Les employés utilisent secrètement l'IA avec peu de considération pour les procédures de révision informatique et de cybersécurité établies.Considérant que la fulguration de Chatgpt \\ est de 100 millions d'utilisateurs dans les 60 jours suivant le lancement, en particulier avec peu
Ambitious Employees Tout New AI Tools, Ignore Serious SaaS Security RisksLike the SaaS shadow IT of the past, AI is placing CISOs and cybersecurity teams in a tough but familiar spot.  Employees are covertly using AI with little regard for established IT and cybersecurity review procedures. Considering ChatGPT\'s meteoric rise to 100 million users within 60 days of launch, especially with little
Tool Cloud ChatGPT ★★★
RiskIQ.webp 2023-11-08 18:59:39 Predator AI | ChatGPT-Powered Infostealer Takes Aim at Cloud Platforms (lien direct) #### Description Sentinellabs a identifié un nouvel infosteller basé sur Python et hacktool appelé \\ 'Predator Ai \' conçu pour cibler les services cloud. Predator AI est annoncé via des canaux télégrammes liés au piratage.L'objectif principal de Predator est de faciliter les attaques d'applications Web contre diverses technologies couramment utilisées, y compris les systèmes de gestion de contenu (CMS) comme WordPress, ainsi que les services de messagerie cloud comme AWS SES.Cependant, Predator est un outil polyvalent, un peu comme les outils de spam de cloud Alienfox et Legion.Ces ensembles d'outils partagent un chevauchement considérable dans le code accessible au public qui réutilise chaque utilisation de leur marque, y compris l'utilisation des modules AndroxGH0st et Greenbot. Le développeur AI Predator a implémenté une classe axée sur le chatppt dans le script Python, qui est conçue pour rendre l'outil plus facile à utiliser et pour servir d'interface de texte unique entre les fonctionnalités disparates.Il y avait plusieurs projets comme Blackmamba qui étaient finalement plus hype que l'outil ne pouvait livrer.L'IA prédateur est un petit pas en avant dans cet espace: l'acteur travaille activement à la fabrication d'un outil qui peut utiliser l'IA. #### URL de référence (s) 1. https://www.sentinelone.com/labs/predator-ai-chatgpt-powered-infostealer-takes-aim-at-cloud-platforms/ #### Date de publication 7 novembre 2023 #### Auteurs) Alex Delamotte
#### Description SentinelLabs has identified a new Python-based infostealer and hacktool called \'Predator AI\' that is designed to target cloud services. Predator AI is advertised through Telegram channels related to hacking. The main purpose of Predator is to facilitate web application attacks against various commonly used technologies, including content management systems (CMS) like WordPress, as well as cloud email services like AWS SES. However, Predator is a multi-purpose tool, much like the AlienFox and Legion cloud spamming toolsets. These toolsets share considerable overlap in publicly available code that each repurposes for their brand\'s own use, including the use of Androxgh0st and Greenbot modules. The Predator AI developer implemented a ChatGPT-driven class into the Python script, which is designed to make the tool easier to use and to serve as a single text-driven interface between disparate features. There were several projects like BlackMamba that ultimately were more hype than the tool could deliver. Predator AI is a small step forward in this space: the actor is actively working on making a tool that can utilize AI. #### Reference URL(s) 1. https://www.sentinelone.com/labs/predator-ai-chatgpt-powered-infostealer-takes-aim-at-cloud-platforms/ #### Publication Date November 7, 2023 #### Author(s) Alex Delamotte
Tool Cloud ChatGPT ★★
InfoSecurityMag.webp 2023-11-08 16:30:00 L'intégration prédatrice AI Chatgpt présente un risque pour les services cloud
Predator AI ChatGPT Integration Poses Risk to Cloud Services
(lien direct)
Cette intégration réduit la dépendance à l'API d'Openai \\ tout en rationalisant la fonctionnalité de l'outil \\
This integration reduces reliance on OpenAI\'s API while streamlining the tool\'s functionality
Cloud ChatGPT ★★
SentinelOne.webp 2023-11-07 15:13:03 Predator Ai |InfostEaler propulsé par ChatGPT vise les plates-formes cloud
Predator AI | ChatGPT-Powered Infostealer Takes Aim at Cloud Platforms
(lien direct)
Un infostecteur émergent vendu sur Telegram cherche à exploiter une IA générative pour rationaliser les cyberattaques contre les services cloud.
An emerging infostealer being sold on Telegram looks to harness generative AI to streamline cyber attacks on cloud services.
Threat Cloud ChatGPT ★★★★
AlienVault.webp 2023-08-14 10:00:00 Construire la cybersécurité dans la chaîne d'approvisionnement est essentiel à mesure que les menaces montent
Building Cybersecurity into the supply chain is essential as threats mount
(lien direct)
The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.  The supply chain, already fragile in the USA, is at severe and significant risk of damage by cyberattacks. According to research analyzed by Forbes, supply chain attacks now account for a huge 62% of all commercial attacks, a clear indication of the scale of the challenge faced by the supply chain and the logistics industry as a whole. There are solutions out there, however, and the most simple of these concerns a simple upskilling of supply chain professionals to be aware of cybersecurity systems and threats. In an industry dominated by the need for trust, this is something that perhaps can come naturally for the supply chain. Building trust and awareness At the heart of a successful supply chain relationship is trust between partners. Building that trust, and securing high quality business partners, relies on a few factors. Cybersecurity experts and responsible officers will see some familiarity - due diligence, scrutiny over figures, and continuous monitoring. In simple terms, an effective framework of checking and rechecking work, monitored for compliance on all sides. These factors are a key part of new federal cybersecurity rules, according to news agency Reuters. Among other measures are a requirement for companies to have rigorous control over system patching, and measures that would require cloud hosted services to identify foreign customers. These are simple but important steps, and give a hint to supply chain businesses as to what they should be doing; putting in measures to monitor, control, and enact compliance on cybersecurity threats. That being said, it can be the case that the software isn’t in place within individual businesses to ensure that level of control. The right tools, and the right personnel, is also essential. The importance of software Back in April, the UK’s National Cyber Security Centre released details of specific threats made by Russian actors against business infrastructure in the USA and UK. Highlighted in this were specific weaknesses in business systems, and that includes in hardware and software used by millions of businesses worldwide. The message is simple - even industry standard software and devices have their problems, and businesses have to keep track of that. There are two arms to ensure this is completed. Firstly, the business should have a cybersecurity officer in place whose role it is to monitor current measures and ensure they are kept up to date. Secondly, budget and time must be allocated at an executive level firstly to promote networking between the business and cybersecurity firms, and between partner businesses to ensure that even cybersecurity measures are implemented across the chain. Utilizing AI There is something of a digital arms race when it comes to artificial intelligence. As ZDNet notes, the lack of clear regulation is providing a lot of leeway for malicious actors to innovate, but for businesses to act, too. While regulations are now coming in, it remains that there is a clear role for AI in prevention. According t Threat Cloud APT 28 ChatGPT ★★
DarkReading.webp 2023-08-02 20:50:00 Solvo dévoile SecurityGenie: une solution révolutionnaire de type Chatgpt pour les équipes de sécurité du cloud
Solvo Unveils SecurityGenie: A Revolutionary ChatGPT-Like Solution for Cloud Security Teams
(lien direct)
The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article.  The supply chain, already fragile in the USA, is at severe and significant risk of damage by cyberattacks. According to research analyzed by Forbes, supply chain attacks now account for a huge 62% of all commercial attacks, a clear indication of the scale of the challenge faced by the supply chain and the logistics industry as a whole. There are solutions out there, however, and the most simple of these concerns a simple upskilling of supply chain professionals to be aware of cybersecurity systems and threats. In an industry dominated by the need for trust, this is something that perhaps can come naturally for the supply chain. Building trust and awareness At the heart of a successful supply chain relationship is trust between partners. Building that trust, and securing high quality business partners, relies on a few factors. Cybersecurity experts and responsible officers will see some familiarity - due diligence, scrutiny over figures, and continuous monitoring. In simple terms, an effective framework of checking and rechecking work, monitored for compliance on all sides. These factors are a key part of new federal cybersecurity rules, according to news agency Reuters. Among other measures are a requirement for companies to have rigorous control over system patching, and measures that would require cloud hosted services to identify foreign customers. These are simple but important steps, and give a hint to supply chain businesses as to what they should be doing; putting in measures to monitor, control, and enact compliance on cybersecurity threats. That being said, it can be the case that the software isn’t in place within individual businesses to ensure that level of control. The right tools, and the right personnel, is also essential. The importance of software Back in April, the UK’s National Cyber Security Centre released details of specific threats made by Russian actors against business infrastructure in the USA and UK. Highlighted in this were specific weaknesses in business systems, and that includes in hardware and software used by millions of businesses worldwide. The message is simple - even industry standard software and devices have their problems, and businesses have to keep track of that. There are two arms to ensure this is completed. Firstly, the business should have a cybersecurity officer in place whose role it is to monitor current measures and ensure they are kept up to date. Secondly, budget and time must be allocated at an executive level firstly to promote networking between the business and cybersecurity firms, and between partner businesses to ensure that even cybersecurity measures are implemented across the chain. Utilizing AI There is something of a digital arms race when it comes to artificial intelligence. As ZDNet notes, the lack of clear regulation is providing a lot of leeway for malicious actors to innovate, but for businesses to act, too. While regulations are now coming in, it remains that there is a clear role for AI in prevention. According t Cloud ChatGPT ★★
AlienVault.webp 2023-06-21 10:00:00 Vers un SOC plus résilient: la puissance de l'apprentissage automatique
Toward a more resilient SOC: the power of machine learning
(lien direct)
A way to manage too much data To protect the business, security teams need to be able to detect and respond to threats fast. The problem is the average organization generates massive amounts of data every day. Information floods into the Security Operations Center (SOC) from network tools, security tools, cloud services, threat intelligence feeds, and other sources. Reviewing and analyzing all this data in a reasonable amount of time has become a task that is well beyond the scope of human efforts. AI-powered tools are changing the way security teams operate. Machine learning (which is a subset of artificial intelligence, or “AI”)—and in particular, machine learning-powered predictive analytics—are enhancing threat detection and response in the SOC by providing an automated way to quickly analyze and prioritize alerts. Machine learning in threat detection So, what is machine learning (ML)? In simple terms, it is a machine\'s ability to automate a learning process so it can perform tasks or solve problems without specifically being told do so. Or, as AI pioneer Arthur Samuel put it, “. . . to learn without explicitly being programmed.” ML algorithms are fed large amounts of data that they parse and learn from so they can make informed predictions on outcomes in new data. Their predictions improve with “training”–the more data an ML algorithm is fed, the more it learns, and thus the more accurate its baseline models become. While ML is used for various real-world purposes, one of its primary use cases in threat detection is to automate identification of anomalous behavior. The ML model categories most commonly used for these detections are: Supervised models learn by example, applying knowledge gained from existing labeled datasets and desired outcomes to new data. For example, a supervised ML model can learn to recognize malware. It does this by analyzing data associated with known malware traffic to learn how it deviates from what is considered normal. It can then apply this knowledge to recognize the same patterns in new data. ChatGPT and transformersUnsupervised models do not rely on labels but instead identify structure, relationships, and patterns in unlabeled datasets. They then use this knowledge to detect abnormalities or changes in behavior. For example: an unsupervised ML model can observe traffic on a network over a period of time, continuously learning (based on patterns in the data) what is “normal” behavior, and then investigating deviations, i.e., anomalous behavior. Large language models (LLMs), such as ChatGPT, are a type of generative AI that use unsupervised learning. They train by ingesting massive amounts of unlabeled text data. Not only can LLMs analyze syntax to find connections and patterns between words, but they can also analyze semantics. This means they can understand context and interpret meaning in existing data in order to create new content. Finally, reinforcement models, which more closely mimic human learning, are not given labeled inputs or outputs but instead learn and perfect strategies through trial and error. With ML, as with any data analysis tools, the accuracy of the output depends critically on the quality and breadth of the data set that is used as an input. types of machine learning A valuable tool for the SOC The SOC needs to be resilient in the face of an ever-changing threat landscape. Analysts have to be able to quickly understand which alerts to prioritize and which to ignore. Machine learning helps optimize security operations by making threat detection and response faster and more accurate. Malware Tool Threat Prediction Cloud ChatGPT ★★
knowbe4.webp 2023-06-20 13:00:00 Cyberheistnews Vol 13 # 25 [empreintes digitales partout] Les informations d'identification volées sont la cause profonde n ° 1 des violations de données
CyberheistNews Vol 13 #25 [Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches
(lien direct)
CyberheistNews Vol 13 #25 CyberheistNews Vol 13 #25  |   June 20th, 2023 [Fingerprints All Over] Stolen Credentials Are the No. 1 Root Cause of Data Breaches Verizon\'s DBIR always has a lot of information to unpack, so I\'ll continue my review by covering how stolen credentials play a role in attacks. This year\'s Data Breach Investigations Report has nearly 1 million incidents in their data set, making it the most statistically relevant set of report data anywhere. So, what does the report say about the most common threat actions that are involved in data breaches? Overall, the use of stolen credentials is the overwhelming leader in data breaches, being involved in nearly 45% of breaches – this is more than double the second-place spot of "Other" (which includes a number of types of threat actions) and ransomware, which sits at around 20% of data breaches. According to Verizon, stolen credentials were the "most popular entry point for breaches." As an example, in Basic Web Application Attacks, the use of stolen credentials was involved in 86% of attacks. The prevalence of credential use should come as no surprise, given the number of attacks that have focused on harvesting online credentials to provide access to both cloud platforms and on-premises networks alike. And it\'s the social engineering attacks (whether via phish, vish, SMiSh, or web) where these credentials are compromised - something that can be significantly diminished by engaging users in security awareness training to familiarize them with common techniques and examples of attacks, so when they come across an attack set on stealing credentials, the user avoids becoming a victim. Blog post with links:https://blog.knowbe4.com/stolen-credentials-top-breach-threat [New PhishER Feature] Immediately Add User-Reported Email Threats to Your M365 Blocklist Now there\'s a super easy way to keep malicious emails away from all your users through the power of the KnowBe4 PhishER platform! The new PhishER Blocklist feature lets you use reported messages to prevent future malicious email with the same sender, URL or attachment from reaching other users. Now you can create a unique list of blocklist entries and dramatically improve your Microsoft 365 email filters without ever l Ransomware Data Breach Spam Malware Hack Vulnerability Threat Cloud ChatGPT ChatGPT ★★
ArsTechnica.webp 2023-05-19 16:16:03 Fearing leaks, Apple restricts its employees from using ChatGPT and AI tools (lien direct) Cloud AI tools could leak confidential Apple company data; Apple works on its own LLM.
Cloud AI tools could leak confidential Apple company data; Apple works on its own LLM.
Cloud ChatGPT ChatGPT ★★
Netskope.webp 2023-05-12 00:13:20 Garanties de protection des données modernes pour le chatppt et autres applications génératrices d'IA
Modern Data Protection Safeguards for ChatGPT and Other Generative AI Applications
(lien direct)
> Ces derniers temps, la montée de l'intelligence artificielle (IA) a révolutionné la façon dont de plus en plus les utilisateurs d'entreprise interagissent avec leur travail quotidien.Des applications SAAS basées sur l'IA comme CHATGPT ont offert d'innombrables opportunités aux organisations et à leurs employés pour améliorer la productivité des entreprises, faciliter de nombreuses tâches, améliorer les services et aider à rationaliser les opérations.Équipes et individus [& # 8230;]
>In recent times, the rise of artificial intelligence (AI) has revolutionized the way more and more corporate users interact with their daily work. Generative AI-based SaaS applications like ChatGPT have offered countless opportunities to organizations and their employees to improve business productivity, ease numerous tasks, enhance services, and assist in streamlining operations. Teams and individuals […]
Cloud ChatGPT ChatGPT ★★
Anomali.webp 2023-04-25 18:22:00 Anomali Cyber Watch: Deux attaques de la chaîne d'approvisionnement enchaînées, leurre de communication DNS furtive de chien, Evilextractor exfiltrates sur le serveur FTP
Anomali Cyber Watch: Two Supply-Chain Attacks Chained Together, Decoy Dog Stealthy DNS Communication, EvilExtractor Exfiltrates to FTP Server
(lien direct)
The various threat intelligence stories in this iteration of the Anomali Cyber Watch discuss the following topics: APT, Cryptomining, Infostealers, Malvertising, North Korea, Phishing, Ransomware, and Supply-chain attacks. The IOCs related to these stories are attached to Anomali Cyber Watch and can be used to check your logs for potential malicious activity. Figure 1 - IOC Summary Charts. These charts summarize the IOCs attached to this magazine and provide a glimpse of the threats discussed. Trending Cyber News and Threat Intelligence First-Ever Attack Leveraging Kubernetes RBAC to Backdoor Clusters (published: April 21, 2023) A new Monero cryptocurrency-mining campaign is the first recorded case of gaining persistence via Kubernetes (K8s) Role-Based Access Control (RBAC), according to Aquasec researchers. The recorded honeypot attack started with exploiting a misconfigured API server. The attackers preceded by gathering information about the cluster, checking if their cluster was already deployed, and deleting some existing deployments. They used RBAC to gain persistence by creating a new ClusterRole and a new ClusterRole binding. The attackers then created a DaemonSet to use a single API request to target all nodes for deployment. The deployed malicious image from the public registry Docker Hub was named to impersonate a legitimate account and a popular legitimate image. It has been pulled 14,399 times and 60 exposed K8s clusters have been found with signs of exploitation by this campaign. Analyst Comment: Your company should have protocols in place to ensure that all cluster management and cloud storage systems are properly configured and patched. K8s buckets are too often misconfigured and threat actors realize there is potential for malicious activity. A defense-in-depth (layering of security mechanisms, redundancy, fail-safe defense processes) approach is a good mitigation step to help prevent actors from highly-active threat groups. MITRE ATT&CK: [MITRE ATT&CK] T1190 - Exploit Public-Facing Application | [MITRE ATT&CK] T1496 - Resource Hijacking | [MITRE ATT&CK] T1036 - Masquerading | [MITRE ATT&CK] T1489 - Service Stop Tags: Monero, malware-type:Cryptominer, detection:PUA.Linux.XMRMiner, file-type:ELF, abused:Docker Hub, technique:RBAC Buster, technique:Create ClusterRoleBinding, technique:Deploy DaemonSet, target-system:Linux, target:K8s, target:​​Kubernetes RBAC 3CX Software Supply Chain Compromise Initiated by a Prior Software Supply Chain Compromise; Suspected North Korean Actor Responsible (published: April 20, 2023) Investigation of the previously-reported 3CX supply chain compromise (March 2023) allowed Mandiant researchers to detect it was a result of prior software supply chain attack using a trojanized installer for X_TRADER, a software package provided by Trading Technologies. The attack involved the publicly-available tool SigFlip decrypting RC4 stream-cipher and starting publicly-available DaveShell shellcode for reflective loading. It led to installation of the custom, modular VeiledSignal backdoor. VeiledSignal additional modules inject the C2 module in a browser process instance, create a Windows named pipe and Ransomware Spam Malware Tool Threat Cloud Uber APT 38 ChatGPT APT 43 ★★
globalsecuritymag.webp 2023-04-18 09:14:27 Rapport Zscaler ThreatLabz 2023 sur le phishing (lien direct) Rapport Zscaler ThreatLabz 2023 sur le phishing Avec une augmentation de près de 50 %, les attaques de phishing touchent principalement les administrations publiques, ainsi que les secteurs de l'éducation et de la finance. Le rapport annuel '2023 ThreatLabz Phishing Report' met en lumière de nouvelles campagnes de phishing sophistiquées et en constante évolution, résultant de l'utilisation croissante de plateformes d'IA comme ChatGPT par les cybercriminels. Principales conclusions • En 2022, les attaques de phishing ont connu une hausse de près de 50 % dans le monde par rapport à 2021. • Le secteur de l'éducation a été le plus durement touché, avec une augmentation alarmante de 576 % des attaques, suivi de près par les secteurs de la finance et des administrations publiques. En revanche, le secteur du commerce de détail et de gros, qui était la cible principale l'année dernière, a connu une baisse de 67 %. • Les États-Unis, le Royaume-Uni, les Pays-Bas, le Canada et la Russie sont les cinq pays les plus ciblés par les attaques de phishing. • Les outils d'IA tels que ChatGPT et les kits de phishing ont largement contribué à la croissance du phishing et ce, en réduisant les obstacles techniques pour les cybercriminels et en leur permettant de gagner du temps et des ressources. • Le phishing par SMS (SMiShing) évolue vers le phishing par boîte vocale (Vishing), incitant davantage de victimes à ouvrir des pièces jointes malveillantes. • Une architecture Zero Trust basée sur un proxy natif dans le cloud est essentielle pour que les entreprises renforcent leur sécurité en ligne et se protègent contre l'évolution des attaques de phishing. - Investigations Cloud ChatGPT ChatGPT ★★★
Last update at: 2024-05-08 04:07:54
See our sources.
My email:

To see everything: Our RSS (filtrered) Twitter