What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
ProofPoint.webp 2024-05-30 08:26:14 Major Botnets Disrupted via Global Law Enforcement Takedown
(direct link)
Global law enforcement recently announced Operation Endgame, a widespread effort to disrupt malware and botnet infrastructure and identify the alleged individuals associated with the activity. In a press release, Europol called it the “largest ever operation against botnets, which play a major role in the deployment of ransomware.” In collaboration with private sector partners including Proofpoint, the efforts disrupted the infrastructure of IcedID, SystemBC, Pikabot, SmokeLoader, Bumblebee, and Trickbot. Per Europol, in conjunction with the malware disruption, the coordinated action led to four arrests, over 100 servers taken down across 10 countries, over 2,000 domains brought under the control of law enforcement, and illegal assets frozen.

Malware Details

SmokeLoader

SmokeLoader is a downloader that first appeared in 2011 and is popular among threat actors. It is a modular malware with various stealer and remote access capabilities, and its main function is to install follow-on payloads. Proofpoint has observed SmokeLoader in hundreds of campaigns since 2015, and it was historically used by major initial access brokers, including briefly by TA577 and TA511 in 2020. Many SmokeLoader campaigns are not attributed to tracked threat actors, as the malware is broadly available for purchase on various forums. Its author claims to only sell to Russian-speaking users.

Proofpoint has observed approximately a dozen SmokeLoader campaigns in 2024, with the malware leading to the installation of Rhadamanthys, Amadey, and various ransomware. Many SmokeLoader campaigns from 2023 through 2024 are attributed to an actor known as UAC-0006 that targets Ukrainian organizations with phishing lures related to “accounts” or “payments.”

Example macro-enabled document delivering SmokeLoader in a May 2024 campaign. CERT-UA attributes this activity to UAC-0006.

SystemBC

SystemBC is a proxy malware and backdoor leveraging SOCKS5, first identified by Proofpoint in 2019.
Initially observed being delivered by exploit kits, it eventually became a popular malware used in ransomware-as-a-service operations. Proofpoint rarely observes SystemBC in email threat data, as it is typically deployed post-compromise. However, our researchers observed it used in a handful of campaigns from TA577 and TA544, as well as dropped by TA542 following Emotet infections.

IcedID

IcedID is a malware originally classified as a banking trojan and first detected by Proofpoint in 2017. It also acted as a loader for other malware, including ransomware. The well-known IcedID version consists of an initial loader which contacts a Loader C2 server, downloads the standard DLL Loader, which then delivers the standard IcedID Bot. Proofpoint has observed nearly 1,000 IcedID campaigns since 2017. This malware was a favored payload of numerous initial access brokers, including TA511, TA551, TA578, and occasionally TA577 and TA544, as well as numerous unattributed threats. It was also observed dropped by TA542, the Emotet threat actor, which was the only actor observed using the “IcedID Lite” variant Proofpoint identified in 2022.

IcedID was often the first step to ransomware, with the malware observed as a first-stage payload in campaigns leading to ransomware including Egregor, Sodinokibi, Maze, Dragon Locker, and Nokoyawa. Proofpoint has not observed IcedID in campaign data since November 2023, after which researchers observed campaigns leveraging the new Latrodectus malware. It is likely the IcedID developers are also behind Latrodectus. IcedID's widespread use by advanced cybercriminal threats made it a formidable malware on the ecrime landscape. Its disruption is great news for defenders.

Pikabot

Pikabot is a malware with two components, a loader and a core module, designed to execute arbitrary commands and load additional payloads. Pikabot's main objective is to download follow-on malware.
Pikabot first appeared in campaign data in March 2023, used by TA577, and has since been almost exclusively used by this actor. Pikab…

Tags: Ransomware Malware Threat Legislation Technical ★★★
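SystemBC's proxying, mentioned above, is built on the standard SOCKS5 protocol (RFC 1928). The fixed byte layout of a SOCKS5 client handshake is one reason the protocol is recognizable in network captures. The sketch below builds those handshake bytes for illustration only; the host and port are placeholder values, not indicators from any campaign.

```python
import struct

def socks5_greeting() -> bytes:
    # VER=5, NMETHODS=1, METHODS=[0x00] (no authentication)
    return bytes([0x05, 0x01, 0x00])

def socks5_connect_request(host: str, port: int) -> bytes:
    # VER=5, CMD=1 (CONNECT), RSV=0, ATYP=3 (domain name),
    # then length-prefixed hostname and big-endian port
    addr = host.encode("ascii")
    return bytes([0x05, 0x01, 0x00, 0x03, len(addr)]) + addr + struct.pack(">H", port)

greeting = socks5_greeting()                       # b"\x05\x01\x00"
request = socks5_connect_request("example.com", 443)
```

Because the version byte, command byte, and address-type byte sit at fixed offsets, a detection rule can match on this prefix without parsing the full stream.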
ProofPoint.webp 2024-03-19 05:00:28 The 2024 Data Loss Landscape Report Explores Carelessness and Other Common Causes of Data Loss
(direct link)
Data loss is a people problem, or more precisely, a careless people problem. That's the conclusion of our new report, The 2024 Data Loss Landscape, which Proofpoint is launching today. We used survey responses from 600 security professionals and data from Proofpoint Information Protection to explore the current state of data loss prevention (DLP) and insider threats. In our report, we also consider what is likely to come next in this fast-maturing space.

Many companies today still consider their current DLP program to be emerging or evolving. So we wanted to identify current challenges and uncover areas of opportunity for improvement. Practitioners from 12 countries and 17 industries answered questions ranging from user behavior to regulatory consequences, and shared their aspirations for DLP's future state.

This report is a first for Proofpoint. We hope it will become essential reading for anyone involved in keeping data secure. Here are some key themes from The 2024 Data Loss Landscape report.

Data loss is a people problem

Tools matter, but data loss is definitely a people problem. 2023 data from Tessian, a Proofpoint company, shows that 33% of users send an average of just under two misdirected emails every year. And data from Proofpoint Information Protection suggests that as few as 1% of users are responsible for 90% of DLP alerts in many companies.
Data loss is often caused by carelessness

Malicious insiders and external attackers pose a significant threat to data. However, more than 70% of respondents said careless users were a cause of data loss for their company. By contrast, fewer than 50% cited compromised or misconfigured systems.

Data loss is widespread

The vast majority of respondents to our survey reported at least one data loss incident. The overall average is 15 incidents per organization. The scale of this problem is daunting, as hybrid work, cloud adoption and high rates of employee turnover all create a high risk of lost data.

Data loss is damaging

More than half of respondents said that data loss incidents resulted in business disruption and revenue loss. These are not the only damaging consequences. Nearly 40% also said their reputation had been damaged, while more than a third said their competitive position had been weakened. In addition, 36% of respondents said they had been hit with regulatory penalties or fines.

Growing concern about generative AI

New alerts triggered by the use of tools like ChatGPT, Grammarly and Google Bard only became available in Proofpoint Information Protection this year. But they already rank among the five most-implemented rules among our users. With little transparency about how data submitted to generative AI systems is stored and used, these tools represent a dangerous new channel for data loss.
DLP is about much more than compliance

Regulation and legislation inspired many early DLP initiatives. But security practitioners now say they are more concerned with protecting user privacy and sensitive business data. DLP tools have evolved to match this progression. Many tools…

Tags: Tool Threat Legislation Cloud ChatGPT ★★★
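The report's concentration finding, that as few as 1% of users generate 90% of DLP alerts, is straightforward to measure from alert logs: sort users by alert count and see how few cover a given fraction of the total. A toy sketch, using entirely hypothetical data:

```python
def share_of_users_for_alert_fraction(alerts_by_user: dict[str, int],
                                      fraction: float = 0.9) -> float:
    """Smallest share of users whose combined alerts cover `fraction` of all alerts."""
    total = sum(alerts_by_user.values())
    covered = 0
    # Walk users from noisiest to quietest, counting how many are needed.
    for i, count in enumerate(sorted(alerts_by_user.values(), reverse=True), start=1):
        covered += count
        if covered >= fraction * total:
            return i / len(alerts_by_user)
    return 1.0

# Hypothetical log: one very noisy user among 50 quiet ones.
alerts = {"user0": 900, **{f"user{i}": 2 for i in range(1, 51)}}
share = share_of_users_for_alert_fraction(alerts)  # ~2% of users cover 90% of alerts
```

A skewed result like this suggests targeted coaching of a handful of users may reduce alert volume more than broad policy changes.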
ProofPoint.webp 2024-02-21 13:46:06 Understanding the EU AI Act: Implications for Communications Compliance Officers
(direct link)
The European Union's Artificial Intelligence Act (EU AI Act) is set to reshape the landscape of AI regulation in Europe, with profound implications. The European Council and Parliament recently agreed on a deal to harmonize AI rules and will soon bring forward the final text. The parliament will then pass the EU AI Act into law. After that, the law is expected to become fully effective in 2026.

The EU AI Act is part of the EU's digital strategy. When the act goes into effect, it will be the first legislation of its kind. And it is destined to become the “gold standard” for other countries in the same way that the EU's General Data Protection Regulation (GDPR) became the gold standard for privacy legislation.

Compliance and IT executives will be responsible for the AI models that their firms develop and deploy. And they will need to be very clear about the risks these models present as well as the governance and the oversight that they will apply to these models when they are operated. In this blog post, we'll provide an overview of the EU AI Act and how it may impact your communications practices in the future.

The scope and purpose of the EU AI Act

The EU AI Act establishes a harmonized framework for the development, deployment and oversight of AI systems across the EU. Any AI that is in use in the EU falls under the scope of the act. The phrase “in use in the EU” does not limit the law to models that are physically executed within the EU. The model and the servers that it operates on could be located anywhere. What matters is where the human who interacts with the AI is located.

The EU AI Act's primary goal is to ensure that AI used in the EU market is safe and respects the fundamental rights and values of the EU and its citizens. That includes privacy, transparency and ethical considerations. The legislation will use a “risk-based” approach to regulate AI, which considers a given AI system's ability to cause harm.
The higher the risk, the stricter the legislation. For example, certain AI activities, such as profiling, are prohibited. The act also lays out governance expectations, particularly for high-risk or systemic-risk systems. As all machine learning (ML) is a subset of AI, any ML activity will need to be evaluated from a risk perspective as well.

The EU AI Act also aims to foster AI investment and innovation in the EU by providing unified operational guidance across the EU. There are exemptions for:

- Research and innovation purposes
- Those using AI for non-professional reasons
- Systems whose purpose is linked to national security, military, defense and policing

The EU AI Act places a strong emphasis on ethical AI development. Companies must consider the societal impacts of their AI systems, including potential discrimination and bias. And their compliance officers will need to satisfy regulators (and themselves) that the AI models have been produced and operate within the act's guidelines. To achieve this, businesses will need to engage with their technology partners and understand the models those partners have produced. They will also need to confirm that they are satisfied with how those models are created and how they operate. What's more, compliance officers should collaborate with data scientists and developers to implement ethical guidelines in AI development projects within their company.

Requirements of the EU AI Act

The EU AI Act categorizes AI systems into four risk levels:

- Unacceptable risk
- High risk
- Limited risk
- Minimal risk

Particular attention must be paid to AI systems that fall into the “high-risk” category. These systems are subject to the most stringent requirements and scrutiny. Some will need to be registered in the EU database for high-risk AI systems as well. Systems that fall into the “unacceptable risk” category will be prohibited.
In the case of general AI and foundation models, the regulations focus on the transparency of models and the data used, and avoiding the introduction of system…

Tags: Vulnerability Threat Legislation ★★
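The four-tier, risk-based structure described above lends itself to a simple inventory exercise: record each AI use case with its assessed tier, then flag anything prohibited or registration-bound. The sketch below is a toy illustration; the example use cases and their tier assignments are assumptions for demonstration, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements; may need EU database registration"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical inventory a compliance team might maintain: use case -> assessed tier.
INVENTORY = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "cv screening for hiring": RiskTier.HIGH,
    "customer support chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def prohibited_systems(inventory: dict[str, RiskTier]) -> list[str]:
    """Flag systems in the 'unacceptable risk' tier, which the act prohibits."""
    return [name for name, tier in inventory.items() if tier is RiskTier.UNACCEPTABLE]
```

In practice each entry would carry supporting evidence (intended purpose, affected users, oversight measures) so the assessment can be defended to a regulator.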
ProofPoint.webp 2023-11-30 06:00:38 The Future of Compliance: Keeping Pace with an Ever-Changing Landscape
(direct link)
Nothing stands still in cybersecurity, and that includes compliance. Just as new threats drive the need for new deterrents, new technologies and evolving business practices drive the need for greater oversight. Over the last few years, compliance, regulation and governance have begun evolving faster than we have seen for some time. This has been in response to rapid changes we've seen ripple across industries caused by new technologies, like artificial intelligence (AI) and machine learning, and new ways of doing business launched in response to the pandemic. In this blog, we explore what has changed within the world of compliance over the last few years and where things are likely heading.

On compliance trends

Like many industries, compliance and regulation tend to follow market trends. If we go back a few years, we saw a raft of privacy legislation introduced in the wake of the European Union's introduction of the General Data Protection Regulation (GDPR). High-profile events also tend to shift the attitudes of regulators. For example, financial services companies found themselves in the spotlight following the 2008 economic crash, while the auto industry faced similar scrutiny after the emissions scandal.

During these times, regulators tend to turn their attention to enforcement, and they are willing to make an example of a company if that's what's needed to improve things. Over the years, many regulators have become much more aggressive in this area, expanding their scope and proactively applying their rules. Of course, technology drives regulatory change, too. The pandemic has recently accelerated the mass adoption of collaborative technologies and communication channels like Microsoft Teams, Zoom, Slack and many more. The availability and advancement of these channels have changed how we communicate and how we access and share data, both inside and outside of our organizations.
In turn, compliance requirements have had to adapt to accommodate new ways of working.

On AI and ML compliance

Over the last two or three years, we have seen exciting advances in generative AI. But it has also made possible some fundamental capabilities that will become incredibly important. For example, in a world with so many claims of fake news and misrepresentation, the ability to retain immutable records is a big deal. “Immutable” effectively means that something cannot be changed and cannot be hacked. This is huge not just from a source-of-truth perspective but also regarding reproducibility.

As we use AI tools en masse, questions will be asked about why specific systems are making certain decisions. Is AI discriminating against specific ZIP codes, for example? And if not, can those in charge of these systems prove that? In many cases, doing so will take work. AI could be better at explaining how it gets to its decisions. In order to do so, businesses will need to return to the original, immutable data. And as they become increasingly information-intensive, getting back to that source data sets a high bar of capability.

AI's ability to process vast data sets will also raise concerns around testing. Before any organization puts a system or platform into the world, potential users want to be confident that it has been suitably tested. But even if a company spends millions of dollars testing a system, it will still sometimes fail, and errors will get through. In the past, we could accept a failure rate of, say, one in a million. But today's software is much more complex than anything we've been able to produce in the past. So a one-in-a-million failure rate in a system running 100, 200 or 300 million events in a day quickly adds up to widespread failures.

Regulators will need to iron out how they intend to protect consumers and the markets from issues like these and set clear guidelines regarding who, ultimately, is accountable.
On the future of compliance

Current trends are likely to continue to drive the development of compliance management. Currently, we're seeing a lot of instability. While t…

Tags: Tool Legislation ★★
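One common way to make a record "immutable" in the sense discussed above is a hash-chained append-only log: each entry's hash covers the previous entry's hash, so altering any past record invalidates every hash after it. A minimal sketch, with illustrative field names (the article does not prescribe a specific mechanism):

```python
import hashlib
import json

def append_entry(chain: list[dict], record: dict) -> None:
    """Append a record, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev": prev_hash, "hash": entry_hash})

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash; any tampering with a past record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

Logging each AI decision this way gives the reproducibility property the article highlights: an auditor can later confirm that the recorded inputs and outputs have not been rewritten.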
Last update at: 2024-06-02 16:08:17