What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
ProofPoint.webp 2024-03-19 05:00:28
The 2024 Data Loss Landscape Report Explores Carelessness and Other Common Causes of Data Loss
(direct link)

Data loss is a people problem, or more precisely, a careless-people problem. That is the conclusion of our new report, the 2024 Data Loss Landscape, which Proofpoint is launching today. We used survey responses from 600 security professionals and data from Proofpoint Information Protection to explore the current state of data loss prevention (DLP) and insider threats. In our report, we also consider what is likely to come next in this rapidly maturing space.

Many businesses today still regard their current DLP program as emerging or evolving. So we wanted to identify current challenges and uncover areas of opportunity for improvement. Practitioners from 12 countries and 17 industries answered questions ranging from user behavior to regulatory consequences, and shared their aspirations for the future state of DLP.

This report is a first for Proofpoint. We hope it will become essential reading for anyone involved in keeping data secure. Here are some key themes from the 2024 Data Loss Landscape report.

Data loss is a people problem
Tools matter, but data loss is definitely a people problem. 2023 data from Tessian, a Proofpoint company, shows that 33% of users send an average of just under two misdirected emails every year. And Proofpoint Information Protection data suggests that as few as 1% of users are responsible for 90% of DLP alerts at many companies.

Data loss is often caused by carelessness
Malicious insiders and external attackers pose a significant threat to data. However, more than 70% of respondents said careless users were a cause of data loss for their company. By contrast, fewer than 50% cited compromised or misconfigured systems.

Data loss is widespread
The vast majority of our survey respondents reported at least one data loss incident, and the overall average is 15 incidents per organization. The scale of this problem is daunting, as hybrid work, cloud adoption and high rates of employee turnover all create elevated risk of lost data.

Data loss is damaging
More than half of respondents said data loss incidents resulted in business disruption and revenue loss. Those are not the only damaging consequences. Nearly 40% also said their reputation had been damaged, while more than a third said their competitive position had been weakened. In addition, 36% of respondents said they had been hit with regulatory penalties or fines.

Growing concern about generative AI
New alerts triggered by the use of tools like ChatGPT, Grammarly and Google Bard only became available in Proofpoint Information Protection this year. But they already rank among the five most-implemented rules among our users. With little transparency into how data submitted to generative AI systems is stored and used, these tools represent a dangerous new channel for data loss.

DLP is about much more than compliance
Regulation and legislation inspired many early DLP initiatives. But security practitioners now say they are more concerned with protecting user privacy and sensitive business data. DLP tools have evolved to match this progression.

Tags: Tool Threat Legislation Cloud ChatGPT ★★★
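The alert-concentration figure above (as few as 1% of users generating 90% of DLP alerts) is the kind of claim that is easy to sanity-check against your own telemetry. Below is a minimal sketch assuming you can export per-user alert counts; the simulated Pareto-distributed data stands in for real numbers, and every value here is illustrative rather than taken from the report.

```python
# Sketch: find the smallest share of users accounting for 90% of alerts.
# The per-user counts are simulated with a heavy-tailed distribution to
# mimic a few "super-generators"; swap in real exported counts to test.
import random

random.seed(0)
alerts = [random.paretovariate(1.2) for _ in range(10_000)]  # hypothetical data

alerts.sort(reverse=True)          # heaviest alert generators first
total = sum(alerts)
running, users = 0.0, 0
for count in alerts:
    running += count
    users += 1
    if running >= 0.9 * total:     # reached 90% of all alerts
        break

print(f"{users / len(alerts):.1%} of users account for 90% of alerts")
```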
ProofPoint.webp 2024-02-21 13:46:06
Understanding the EU AI Act: Implications for Communications Compliance Officers
(direct link)
The European Union's Artificial Intelligence Act (EU AI Act) is set to reshape the landscape of AI regulation in Europe, with profound implications. The European Council and Parliament recently agreed on a deal to harmonize AI rules and will soon bring forward the final text. The parliament will then pass the EU AI Act into law. After that, the law is expected to become fully effective in 2026.

The EU AI Act is part of the EU's digital strategy. When the act goes into effect, it will be the first legislation of its kind. And it is destined to become the “gold standard” for other countries in the same way that the EU's General Data Protection Regulation (GDPR) became the gold standard for privacy legislation.

Compliance and IT executives will be responsible for the AI models that their firms develop and deploy. And they will need to be very clear about the risks these models present, as well as the governance and oversight that they will apply to these models when they are operated. In this blog post, we'll provide an overview of the EU AI Act and how it may impact your communications practices in the future.

The scope and purpose of the EU AI Act

The EU AI Act establishes a harmonized framework for the development, deployment and oversight of AI systems across the EU. Any AI that is in use in the EU falls under the scope of the act. The phrase “in use in the EU” does not limit the law to models that are physically executed within the EU. The model and the servers that it operates on could be located anywhere. What matters is where the human who interacts with the AI is located.

The EU AI Act's primary goal is to ensure that AI used in the EU market is safe and respects the fundamental rights and values of the EU and its citizens. That includes privacy, transparency and ethical considerations.

The legislation will use a “risk-based” approach to regulate AI, which considers a given AI system's ability to cause harm. The higher the risk, the stricter the legislation. For example, certain AI activities, such as profiling, are prohibited. The act also lays out governance expectations, particularly for high-risk or systemic-risk systems. As all machine learning (ML) is a subset of AI, any ML activity will need to be evaluated from a risk perspective as well.

The EU AI Act also aims to foster AI investment and innovation in the EU by providing unified operational guidance across the EU. There are exemptions for:

- Research and innovation purposes
- Those using AI for non-professional reasons
- Systems whose purpose is linked to national security, military, defense and policing

The EU AI Act places a strong emphasis on ethical AI development. Companies must consider the societal impacts of their AI systems, including potential discrimination and bias. And their compliance officers will need to satisfy regulators (and themselves) that the AI models have been produced and operate within the act's guidelines.

To achieve this, businesses will need to engage with their technology partners and understand the models those partners have produced. They will also need to confirm that they are satisfied with how those models are created and how they operate. What's more, compliance officers should collaborate with data scientists and developers to implement ethical guidelines in AI development projects within their company.
Requirements of the EU AI Act

The EU AI Act categorizes AI systems into four risk levels:

- Unacceptable risk
- High risk
- Limited risk
- Minimal risk

Particular attention must be paid to AI systems that fall into the “high-risk” category. These systems are subject to the most stringent requirements and scrutiny. Some will need to be registered in the EU database for high-risk AI systems as well. Systems that fall into the “unacceptable risk” category will be prohibited. In the case of general AI and foundation models, the regulations focus on the transparency of models and the data used, and on avoiding the introduction of system

Tags: Vulnerability Threat Legislation ★★
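For compliance teams beginning to inventory their AI systems, the four tiers described above map naturally onto a simple classification structure. The sketch below is illustrative only: the tier names come from the article, but the example use cases and the paraphrased obligations are assumptions, not text from the final legislation.

```python
# Minimal sketch of the EU AI Act's four-tier, risk-based approach.
# Obligations are paraphrased from the article; use cases are made up.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strictest requirements; may require EU database registration"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical triage of use cases, for illustration only.
EXAMPLE_CLASSIFICATION = {
    "social-scoring / profiling system": RiskTier.UNACCEPTABLE,
    "CV-screening model for hiring":     RiskTier.HIGH,
    "customer-service chatbot":          RiskTier.LIMITED,
    "spam filter":                       RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```

Even a toy inventory like this forces the key question the act poses for compliance officers: which tier does each deployed model fall into, and can you evidence that classification to a regulator?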
ProofPoint.webp 2023-11-30 06:00:38
The Future of Compliance: Keeping Pace with an Ever-Changing Landscape
(direct link)
Nothing stands still in cybersecurity, and that includes compliance. Just as new threats drive the need for new deterrents, new technologies and evolving business practices drive the need for greater oversight. Over the last few years, compliance, regulation and governance have begun evolving faster than we have seen for some time. This has been in response to rapid changes we've seen ripple across industries, caused by new technologies like artificial intelligence (AI) and machine learning, and by new ways of doing business launched in response to the pandemic. In this blog, we explore what has changed within the world of compliance over the last few years and where things are likely heading.

On compliance trends

Like many industries, compliance and regulation tend to follow market trends. If we go back a few years, we saw a raft of privacy legislation introduced in the wake of the European Union's introduction of the General Data Protection Regulation (GDPR). High-profile events also tend to shift the attitudes of regulators. For example, financial services companies found themselves in the spotlight following the 2008 economic crash, while the auto industry faced similar scrutiny after the emissions scandal.

During these times, regulators tend to turn their attention to enforcement, and they are willing to make an example of a company if that's what's needed to improve things. Over the years, many regulators have become much more aggressive in this area, expanding their scope and proactively applying their rules. Of course, technology drives regulatory change, too. The pandemic has recently accelerated the mass adoption of collaborative technologies and communication channels like Microsoft Teams, Zoom, Slack and many more. The availability and advancement of these channels have changed how we communicate and how we access and share data, both inside and outside of our organizations. In turn, compliance requirements have had to adapt to accommodate new ways of working.

On AI and ML compliance

Over the last two or three years, we have seen exciting advances in generative AI. But it has also made possible some fundamental capabilities that will become incredibly important. For example, in a world with so many claims of fake news and misrepresentation, the ability to retain immutable records is a big deal. “Immutable” effectively means that something cannot be changed and cannot be hacked. This is huge, not just from a source-of-truth perspective but also regarding reproducibility.

As we use AI tools en masse, questions will be asked about why specific systems are making certain decisions. Is AI discriminating against specific ZIP codes, for example? And if not, can those in charge of these systems prove that? In many cases, doing so will take work. AI could be better at explaining how it gets to its decisions. In order to do so, businesses will need to return to the original, immutable data. And as they become increasingly information-intensive, getting back to that source data sets a high bar of capability.

AI's ability to process vast data sets will also raise concerns around testing. Before any organization puts a system or platform into the world, potential users want to be confident that it has been suitably tested. But even if a company spends millions of dollars testing a system, it will still sometimes fail, and errors will get through. In the past, we could accept a failure rate of, say, one in a million.
But today's software is much more complex than anything we've been able to produce in the past. So a one-in-a-million failure rate in a system running 100, 200 or 300 million events in a day quickly adds up to widespread failures. Regulators will need to iron out how they intend to protect consumers and the markets from issues like these, and set clear guidelines regarding who, ultimately, is accountable.

On the future of compliance

Current trends are likely to continue to drive the development of compliance management. Currently, we're seeing a lot of instability.

Tags: Tool Legislation ★★
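The back-of-the-envelope math behind that warning is worth making explicit. A quick sketch, using the failure rate and event volumes mentioned in the passage as purely illustrative inputs:

```python
# Expected daily failures = per-event failure rate x daily event volume.
# The rate and volumes are the illustrative figures from the text above,
# not measurements from any real system.
failure_rate = 1 / 1_000_000  # one-in-a-million per event

for events_per_day in (100_000_000, 200_000_000, 300_000_000):
    expected = failure_rate * events_per_day
    print(f"{events_per_day:>11,} events/day -> ~{expected:.0f} expected failures/day")
```

Even a “one in a million” system produces hundreds of failures every day at that scale, which is exactly the accountability gap the article argues regulators will have to close.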
Last update at: 2024-05-14 17:08:36