What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
ProofPoint.webp 2024-04-17 18:00:31
Reducing Prompting Churn with Exploding Template Composition
(direct link)
Engineering Insights is an ongoing blog series that gives a behind-the-scenes look into the technical challenges, lessons and advances that help our customers protect people and defend data every day. Each post is a firsthand account by one of our engineers about the process that led up to a Proofpoint innovation.

In the nascent world of large language models (LLMs), prompt engineering has emerged as a critical discipline. However, as LLM applications expand, it is becoming a more complex challenge to manage and maintain a library of related prompts.

At Proofpoint, we developed Exploding Prompts to manage the complexity through exploding template composition. We first created the prompts to generate soft labels for our data across a multitude of models and labeling concerns. But Exploding Prompts has also enabled use cases for LLMs that were previously locked away because managing the prompt lifecycle is so complex.

Recently, we've seen exciting progress in the field of automated prompt generation and black-box prompt optimization through DSPy. Black-box optimization requires hand-labeled data to generate prompts automatically-a luxury that's not always an option. You can use Exploding Prompts to generate labels for unlabeled data, as well as for any prompt-tuning application without a clear (or tractable) objective for optimization.

In the future, Exploding Prompts could be used with DSPy to achieve a human-in-the-loop feedback cycle. We are also thrilled to announce that Exploding Prompts is now an open-source release. We encourage you to explore the code and consider how you might help make it even better.

The challenge: managing complexity in prompt engineering

Prompt engineering is not just about crafting queries that guide intelligent systems to generate the desired outputs; it's about doing it at scale. As developers push the boundaries of what is possible with LLMs, the need to manage a vast array of prompts efficiently becomes more pressing.
Traditional methods often need manual adjustments and updates across numerous templates, which is a process that's both time-consuming and error-prone.

To understand this problem, just consider the following scenario. You need to label a large quantity of data. You have multiple labels that can apply to each piece of data. And each label requires its own prompt template. You timebox your work and find a prompt template that achieves desirable results for your first label. Happily, most of the template is reusable. So, for the next label, you copy-paste the template and change the portion of the prompt that is specific to the label itself. You continue doing this until you figure out that the section of the template that has persisted through each version of your labels can be improved. Now you face the task of iterating through potentially dozens of templates to make a minor update to each of the files.

Once you finish, your artificial intelligence (AI) provider releases a new model that outperforms your current model. But there's a catch. The new model requires another small update to each of your templates. To your chagrin, the task of managing the lifecycle of your templates soon takes up most of your time.

The solution: exploding prompts from automated dependency graphs

Prompt templating is a popular way to manage complexity. Exploding Prompts builds on prompt templating by introducing an “explode” operation. This allows a few single-purpose templates to explode into a multitude of prompts. This is accomplished by building dependency graphs automatically from the directory structure and the content of prompt template files.

At its core, Exploding Prompts embodies the “write it once” philosophy. It ensures that every change made in a template correlates with a single update in one file. This enhances efficiency and consistency, as updates automatically propagate across all relevant generated prompts.
This separation ensures that updates can be made with speed and efficiency so you can focus on innovation rather th […]

Tags: Malware Tool Threat Studies Cloud Technical ★★★
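The "explode" idea in the article above can be illustrated with a short sketch. This is not the open-source Exploding Prompts code; it is a minimal stand-in that assumes fragment sets held in an in-memory dict (the real tool derives them from a directory tree of template files) and expands every combination into a full prompt:

```python
from itertools import product

# Hypothetical fragment sets. The real Exploding Prompts tool derives these
# from a directory of template files; in-memory lists keep the sketch
# self-contained.
fragments = {
    "preamble": [
        "You are a precise labeling assistant.",
        "You are a cautious labeling assistant.",
    ],
    "instruction": [
        "Label the text as SPAM or NOT_SPAM.",
        "Label the text as PHISHING or BENIGN.",
    ],
}

def explode(fragments):
    """Expand every combination of fragment choices into a full prompt."""
    keys = list(fragments)
    return [
        "\n".join(choice)
        for choice in product(*(fragments[key] for key in keys))
    ]

prompts = explode(fragments)
print(len(prompts))  # 2 preambles x 2 instructions -> 4 prompts
```

Editing one fragment file then regenerates every prompt that depends on it, which is the "write it once" property the post describes.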
ProofPoint.webp 2024-04-11 06:23:43
FAQs from the 2024 State of the Phish Report, Part 1: The Threat Landscape
(direct link)
In this two-part blog series, we will address many of the frequently asked questions submitted by attendees. In our first installment, we address questions related to the threat landscape.

Understanding the threat landscape is paramount in crafting a human-centric security strategy. That's the goal behind our 10th annual State of the Phish report. When you know what threats are out there and how people are interacting with them, you can create a modern cybersecurity strategy that puts the complexity of human behavior and interaction at the forefront. Our report was launched a month ago. Since then, we've followed up with a few webinars to discuss key findings from the report, including:

Threat landscape findings:
- Over 1 million phishing threats involved EvilProxy, which bypasses multifactor authentication (MFA). Yet, 89% of security pros still believe that MFA provides complete protection against account takeover.
- BEC threat actors benefit from generative AI. Proofpoint detected and stopped over 66 million targeted business email compromise (BEC) attacks per month on average in 2023.

User behavior and attitude findings:
- 71% of surveyed users took at least one risky action, and 96% of them knew that those actions were associated with risk.
- 58% of those risky actions were related to social engineering tactics.
- 85% of security pros believed that most employees know they are responsible for security. Yet nearly 60% of employees either weren't sure or disagreed.

These findings inspired hundreds of questions from audiences across the world. What follows are some of the questions that repeatedly came up.

Frequently asked questions

What are the definitions of BEC and TOAD?

Business email compromise (BEC) essentially means fraud perpetrated through email. It can take many forms, such as advance fee fraud, payroll redirection, fraudulent invoicing or even extortion.
BEC typically involves a deception, such as the spoofing of a trusted third party's domain or the impersonation of an executive (or literally anyone the recipient trusts).

BEC is hard to detect because it is generally pure social engineering. In other words, there is often no credential harvesting portal or malicious payload involved. Threat actors most often use benign conversation to engage the victim. Once the victim is hooked, attackers then convince that person to act in favor of them, such as wiring money to a specified account.

Similarly, telephone-oriented attack delivery (TOAD) attacks also use benign conversations. But, in this case, a threat actor's goal is to motivate the victim to make a phone call. From there, they will walk their target through a set of steps, which usually involve tricking the victim into giving up their credentials or installing a piece of malware on their computer.

TOAD attacks have been associated with high-profile malware families known to lead to ransomware, as well as with a wide variety of remote access tools like AnyDesk that provide the threat actors direct access to victims' machines. The end goal might still be fraud; for example, there have been cases where payment was solicited for “IT services” or software (Norton LifeLock). But the key differentiator for TOAD, compared with BEC, is the pivot out of the email space to a phone call.

What is the difference between TOAD and vishing?

TOAD often starts with an email and requires victims to call the fraudulent number within that email. Vishing, on the other hand, generally refers to fraudulent solicitation of personally identifiable information (PII) and may or may not involve email (it could result from a direct call). Some TOAD attempts may fall into this category, but most perpetrators focus on getting software installed on a victim's machine.

How do you see artificial intelligence (AI) affecting phishing?
What are security best practices to help defend against these novel phishing attacks?

AI allows threat actors to tighten up grammatical and s […]

Tags: Ransomware Malware Tool Threat Cloud Technical ★★★
ProofPoint.webp 2024-03-14 06:00:19
How We Rolled Out GitHub Copilot to Increase Developer Productivity
(direct link)
Engineering Insights is an ongoing blog series that gives a behind-the-scenes look into the technical challenges, lessons and advances that help our customers protect people and defend data every day. Each post is a firsthand account by one of our engineers about the process that led up to a Proofpoint innovation.

Inspired by the rapid rise of generative artificial intelligence (GenAI), we recently kicked off several internal initiatives at Proofpoint that focused on using it within our products. One of our leadership team's goals was to find a tool to help increase developer productivity and satisfaction. The timing was perfect to explore options, as the market had become flush with AI-assisted coding tools.

Our project was to analyze the available tools on the market in-depth. We wanted to choose an AI assistant that would provide the best productivity results while also conforming to data governance policies. We set an aggressive timeline to analyze the tools, collaborate with key stakeholders from legal, procurement, finance and the business side, and then deploy the tool across our teams.

In the end, we selected GitHub Copilot, a code completion tool developed by GitHub and OpenAI, as our AI coding assistant. In this post, we walk through how we arrived at this decision. We also share the qualitative and quantitative results that we've seen since we've introduced it.

Our analysis: approach and criteria

When you want to buy a race car-or any car for that matter-it is unlikely that you'll look at just one car before making a final decision. As engineers, we are wired to conduct analyses that dive deeply into all the possible best options as well as list all the pros and cons of each. And that's what we did here, which led us to a final four list that included GitHub Copilot.
These are the criteria that we considered:
- Languages supported
- IDEs supported
- Code ownership
- Stability
- AI models used
- Protection for intellectual property (IP)
- Licensing terms
- Security
- Service-level agreements
- Chat interface
- Innovation
- Special powers
- Pricing
- Data governance
- Support for a broad set of code repositories

We took each of the four products on our shortlist for a test drive using a specific set of standard use cases. These use cases were solicited from several engineering teams. They covered a wide range of tasks that we anticipated would be exercised with an AI assistant.

For example, we needed the tool to assist not just developers, but also document writers and automation engineers. We had multiple conversations and in-depth demos from the vendors. And when possible, we did customer reference checks as well.

Execution: a global rollout

Once we selected a vendor, we rolled out the tool to all Proofpoint developers across the globe. We use different code repos, programming languages and IDEs-so, we're talking about a lot of permutations and combinations.

Our initial rollout covered approximately 50% of our team from various business units and roles for about 30 days. We offered training sessions internally to share best practices and address challenges. We also built an internal community of experts to answer questions.

Many issues that came up were ironed out during this pilot phase so that when we went live, it was a smooth process. We only had a few issues. All stakeholders were aware of the progress, from our operations/IT team to our procurement and finance teams.

Our journey from start to finish was about 100 days. This might seem like a long time, but we wanted to be sure of our choice. After all, it is difficult to hit “rewind” on an important initiative of this magnitude.
Monitoring and measuring results

We have been using GitHub Copilot for more than 150 days and during that period we've been collecting telemetry data from the tool and correlating it with several productivity and quality metrics. Our results have been impressive.

When it comes to quantitative results, we have seen a general increase in […]

Tags: Tool Cloud Technical ★★★
ProofPoint.webp 2024-03-12 07:03:40
If You're Using Veritas Archiving, What's Your Next Step?
(direct link)
By now, much of the industry has seen the big news about Cohesity acquiring the enterprise data protection business of Veritas Technologies. The transaction will see the company's NetBackup technology-software, appliances and cloud (Alta Data Protection)-integrated into the Cohesity ecosystem.

But what about other Veritas products? As stated in the Cohesity and Veritas press releases, the “remaining assets of Veritas' businesses will form a separate company, 'DataCo.' 'DataCo' will comprise Veritas' InfoScale, Data Compliance, and Backup Exec businesses.”

Data Compliance includes Veritas Enterprise Vault (EV), which might raise concerns for EV customers. As a new, standalone entity, 'DataCo' has no innovation track record.

In this blog, I provide my opinion on the questionable future of Veritas archiving products, why EV customers should start looking at alternative archiving tools, and why you should trust Proofpoint as your next enterprise archiving solution.

EV architecture isn't future-proof

EV gained a following because it came onto the market just when it was needed. With its big, robust on-premises architecture, EV was ideal to solve the challenges of bloated file and email servers. Companies had on-premises file and email servers that were getting bogged down with too much data. They needed a tool to offload legacy data to keep working and so they could be backed up in a reasonable amount of time.

However, with key applications having moved to the cloud over the last decade-plus, storage optimization is no longer a primary use case for archiving customers.

While EV has adapted to e-discovery and compliance use cases, its underlying on-premises architecture has struggled to keep up. EV customers still have headaches with infrastructure (hardware and software) planning, budgeting and maintenance, and archive administration. What's more, upgrades often require assistance from professional services and support costs are rising.
And the list goes on.

Today, most cloud-native archives remove virtually all of these headaches. And just like you moved on from DVDs and Blu-ray discs to streaming video, it's time to migrate from legacy on-premises archiving architectures, like EV, to cloud-native solutions.

Future investments are uncertain

When you look back over EV's last 5-6 years, you might question what significant innovations Veritas has delivered for EV.

Yes, Veritas finally released supervision in the cloud. But that was a direct response to the EOL of AdvisorMail for EV.cloud many years ago.

Yes, Veritas added dozens of new data sources for EV. But that was achieved through the acquisition of Globanet-and their product Merge1-in 2020. (They still list Merge1 as an independent product on their website.)

Yes, they highlight how EV can store to “Azure, AWS, Google Cloud Storage, and other public cloud repositories” via storage tiering. But that just means that EV extends the physical storage layer of a legacy on-prem archiving architecture to the cloud-it doesn't mean it runs a cloud-native archiving solution.

Yes, Veritas has cloud-based Alta Archiving. But that's just a rebranding and repackaging of EV.cloud, which they retired more than two years ago. Plus, Alta Archiving and Enterprise Vault are separate products.

With the Cohesity data protection acquisition, EV customers have a right to question future investments in their product. Will EV revenue alone be able to sustain meaningful, future innovation in the absence of the NetBackup revenue “cash cow”? Will you cling to hope, only to be issued an EOL notice like Dell EMC SourceOne customers?

Now is the time to migrate from EV to a modern cloud-native archiving solution.

How Proofpoint can help

Here's why you should trust Proofpoint for your enterprise archiving.
Commitment to product innovation and support

Year after year, Proofpoint continues to invest a double-digit percentage of revenue into all of our businesses, including Proofpoint Int […]

Tags: Tool Studies Cloud Technical ★★
ProofPoint.webp 2024-02-27 05:00:31
Risky and They Know It: 96% of Risk-Taking Users Aware of the Dangers but Do It Anyway, 2024 State of the Phish Reveals
(direct link)
We often-and justifiably-associate cyberattacks with technical exploits and ingenious hacks. But the truth is that many breaches occur due to the vulnerabilities of human behavior. That's why Proofpoint has gathered new data and expanded the scope of our 2024 State of the Phish report.

Traditionally, our annual report covers the threat landscape and the impact of security education. But this time, we've added data on risky user behavior and their attitudes about security. We believe that combining this information will help you to:
- Advance your cybersecurity strategy
- Implement a behavior change program
- Motivate your users to prioritize security

This year's report compiles data derived from Proofpoint products and research, as well as from additional sources that include:
- A commissioned survey of 7,500 working adults and 1,050 IT professionals across 15 countries
- 183 million simulated phishing attacks sent by Proofpoint customers
- More than 24 million suspicious emails reported by our customers' end users

To get full access to our global findings, you can download your copy of the 2024 State of the Phish report now.

Also, be sure to register now for our 2024 State of the Phish webinar on March 5, 2024. Our experts will provide more insights into the key findings and answer your questions in a live session.

Meanwhile, let's take a sneak peek at some of the data in our new reports.

Global findings

Here's a closer look at a few of the key findings in our tenth annual State of the Phish report.

Survey of working adults

In our survey of working adults, about 71% said they engaged in actions that they knew were risky. Worse, 96% were aware of the potential dangers. About 58% of these users acted in ways that exposed them to common social engineering tactics.

The motivations behind these risky actions varied. Many users cited convenience, the desire to save time, and a sense of urgency as their main reasons.
This suggests that while users are aware of the risks, they choose convenience.

The survey also revealed that nearly all participants (94%) said they'd pay more attention to security if controls were simplified and more user-friendly. This sentiment reveals a clear demand for security tools that are not only effective but that don't get in users' way.

Survey of IT and information security professionals

The good news is that last year phishing attacks were down. In 2023, 71% of organizations experienced at least one successful phishing attack compared to 84% in 2022. The bad news is that the consequences of successful attacks were more severe. There was a 144% increase in reports of financial penalties. And there was a 50% increase in reports of damage to their reputation.

Another major challenge was ransomware. The survey revealed that 69% of organizations were infected by ransomware (vs. 64% in 2022). However, the rate of ransom payments declined to 54% (vs. 64% in 2022).

To address these issues, 46% of surveyed security pros are increasing user training to help change risky behaviors. This is their top strategy for improving cybersecurity.

Threat landscape and security awareness data

Business email compromise (BEC) is on the rise. And it is now spreading among non-English-speaking countries. On average, Proofpoint detected and blocked 66 million BEC attacks per month.

Other threats are also increasing. Proofpoint observed over 1 million multifactor authentication (MFA) bypass attacks using EvilProxy per month. What's concerning is that 89% of surveyed security pros think MFA is a “silver bullet” that can protect them against account takeover.

When it comes to telephone-oriented attack delivery (TOAD), Proofpoint saw 10 million incidents per month, on average. The peak was in August 2023, which saw 13 million incidents.

When looking at industry failure rates for simulated phishing campaigns, the finance industry saw the most improvement.
Last year the failure rate was only 9% (vs. 16% in 2022). “Resil […]

Tags: Ransomware Tool Vulnerability Threat Studies Technical ★★★★
ProofPoint.webp 2024-02-02 05:00:36
Developing a New Internet Standard: the Domain Relationship Policy Framework
(direct link)
Engineering Insights is an ongoing blog series that gives a behind-the-scenes look into the technical challenges, lessons and advances that help our customers protect people and defend data every day. Each post is a firsthand account by one of our engineers about the process that led up to a Proofpoint innovation.

In this blog post, we discuss the Domain Relationship Policy Framework (DRPF)-an effort that has been years in the making at Proofpoint. The DRPF is a simple method that is used to identify verifiably authorized relationships between arbitrary domains. We create a flexible way to publish policies. These policies can also describe complex domain relationships.

The details for this new model require in-depth community discussions. These conversations will help us collectively steer the DRPF toward becoming a fully interoperable standard. We are now in the early proposal stage for the DRPF, and we are starting to engage more with the broader community. This post provides a glimpse down the road leading to standardization for the DRPF.

Why Proofpoint developed DRPF

To shine a light on why Proofpoint was inspired to develop the DRPF in the first place, let's consider the thinking of the initial designers of the Domain Name System (DNS). They assumed that subdomains would inherit the administrative control of their parent domains. And by extension, this should apply to all subsequent subdomains down the line.

At the time, this was reasonable to assume. Most early domains and their subdomains operated in much the same way. For example, “university.edu” directly operated and controlled the administrative policies for subdomains such as “lab.university.edu” which flowed down to “project.lab.university.edu.”

Since the mid-1980s, when DNS was widely deployed, there has been a growing trend of delegating subdomains to third parties. This reflects a breakdown of the hierarchical model of cascading policies.
To see how this works, imagine that a business uses “company.com” as a domain. That business might delegate “marketing.company.com” to a third-party marketing agency. The subdomain must inherit some policies, while the subdomain administrator may apply other policies that don't apply to the parent domain.

Notably, there is no mechanism yet for a domain to declare a relationship with another seemingly independent domain. Consider a parent company that operates multiple distinct brands. The company with a single set of policies may want them applied not only to “company.com” (and all of its subdomains). It may also want them applied to its brand domains “brand.com” and “anotherbrand.com.”

It gets even more complex when any of the brand domains delegate various subdomains to other third parties. So, say some of them are delegated to marketing or API support. Each will potentially be governed by a mix of administrative policies.

In this context, “policies” refers to published guidance that is used when these subdomains interact with the domain. Policies might be for information only. Or they might provide details that are required to use services that the domain operates. Most policies will be static (or appear so to the retrieving parties). But it is possible to imagine that they could contain directives akin to smart contracts in distributed ledgers.

3 Design characteristics that define DRPF

The goal of the DRPF is to make deployment and adoption easier while making it flexible for future use cases. In many prior proposals, complex requirements bogged down efforts to get rid of administrative boundaries between and across disparate domains. Our work should be immediately useful with minimal effort and be able to support a wide array of ever-expanding use cases.

In its simplest form, three design characteristics define the DRPF:
- A domain administrator publishes a policy assertion record for the domain so that a relying party can discover and retrieve it.
- The discovered policy assertion directs the relying party to where they can find […]

Tags: Tool Prediction Cloud Technical ★★★
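The publish-and-discover flow the DRPF post describes can be sketched in a few lines. Everything below is illustrative only: the `_drpf` label and the `key=value;` record syntax are invented for this sketch (the actual DRPF wire format is still being defined through the standards process), and a plain dict stands in for DNS TXT lookups so the example is self-contained.

```python
# Hypothetical published assertions, keyed the way DNS TXT records might be.
# The "_drpf" prefix and record syntax are assumptions, not the real spec.
published_records = {
    "_drpf.brand.com": "v=hypo1; rel=brand; parent=company.com",
    "_drpf.marketing.company.com": "v=hypo1; rel=delegate; parent=company.com",
}

def parse_assertion(record):
    """Split a 'key=value; key=value' assertion into a dict of tags."""
    tags = {}
    for part in record.split(";"):
        key, sep, value = part.strip().partition("=")
        if sep:  # keep only well-formed key=value pairs
            tags[key] = value
    return tags

def asserted_parent(domain):
    """Return the parent domain a policy assertion points at, if any."""
    record = published_records.get("_drpf." + domain)
    return parse_assertion(record).get("parent") if record else None

print(asserted_parent("brand.com"))      # company.com
print(asserted_parent("unrelated.org"))  # None
```

The point of the sketch is the shape of the mechanism: the relying party discovers a record at a well-known location, parses it, and follows the declared relationship, rather than assuming anything from the DNS hierarchy itself.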
ProofPoint.webp 2024-01-23 15:29:37
More than One-Quarter of the Global 2000 Are Not Ready for Upcoming Stringent Email Authentication Rules
(direct link)
Email remains the primary communication channel for organizations and the preferred means of communication for consumers. And wherever people go, threat actors follow. Cybercriminals continue to exploit email to deliver phishing, email fraud, spam and other scams. But Google, Yahoo! and Apple are fighting back with new email authentication requirements designed to prevent threat actors from abusing email. While this major change is great news for consumers, organizations do not have much time to prepare: Google, Yahoo! and Apple will begin enforcing their new requirements in the first quarter of 2024.

With only weeks until these rules start to take effect, more than one-quarter (27%) of the Forbes Global 2000 are not ready for these new requirements. This can significantly impact their ability to deliver email communications to their customers in a timely manner, and it puts their customers at risk of email fraud and scams. In fact, our 2023 State of the Phish report revealed that 44% of global consumers think an email is safe if it simply includes familiar branding.

Proofpoint's analysis of the Forbes Global 2000 and their adoption of the open DMARC protocol (Domain-based Message Authentication, Reporting and Conformance), a widely used authentication protocol that helps guarantee the identity of email communications and protects website domain names from being spoofed and misused, shows:
- More than one-quarter (27%) of the Global 2000 have no DMARC record in place, indicating that they are not prepared for the upcoming email authentication requirements.
- A staggering 69% do not actively block fraudulent emails from reaching their customers; fewer than one-third (31%) have implemented the highest level of protection, which rejects suspicious emails before they reach recipients.
- 27% have implemented a monitor policy, which means unqualified emails can still arrive in the recipient's inbox; and only 15% have implemented a quarantine policy that directs unqualified emails to spam/junk folders.

Email authentication has been a best practice for years. DMARC is the gold standard for protecting against email impersonation, a key technique used in email fraud and phishing attacks. But, as our analysis of the Global 2000 reveals, many companies have yet to implement it, and those lagging in DMARC adoption will now have to catch up quickly if they wish to keep sending emails to their customers. Organizations that do not comply could see their emails routed directly to customers' spam folders or rejected outright.
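The three postures discussed above (no record, monitor, quarantine, reject) map directly onto the `p=` tag of a domain's `_dmarc` TXT record. Here is a minimal sketch of that classification; it only looks at the required `v=` and `p=` tags and deliberately ignores refinements such as `sp=` and `pct=` from the full DMARC specification:

```python
def dmarc_posture(txt_record):
    """Classify a domain's DMARC posture from its _dmarc TXT record.

    Minimal sketch: considers only the v= and p= tags and treats a
    missing or malformed record as "no DMARC record".
    """
    if not txt_record or not txt_record.strip().lower().startswith("v=dmarc1"):
        return "no DMARC record"
    tags = {}
    for part in txt_record.split(";"):
        key, sep, value = part.strip().partition("=")
        if sep:
            tags[key.lower()] = value.strip().lower()
    return {
        "none": "monitor only",
        "quarantine": "quarantine",
        "reject": "reject",
    }.get(tags.get("p"), "no DMARC record")

print(dmarc_posture("v=DMARC1; p=reject; rua=mailto:dmarc@company.com"))
print(dmarc_posture("v=DMARC1; p=none"))
print(dmarc_posture(None))
```

A `p=none` (monitor) record satisfies the letter of a "has DMARC" check but still lets spoofed mail reach inboxes, which is why the article distinguishes it from quarantine and reject.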
Implementation can be difficult, however, because it requires a variety of technical steps and ongoing maintenance. Not all organizations have the resources or in-house knowledge to meet the requirements in a timely manner. You can take advantage of resources such as Proofpoint's email authentication technical kit to help you get started. Proofpoint also offers a tool to check your domain's DMARC and SPF records, as well as to create a DMARC record for your domain. This tool is part of a comprehensive email fraud defense solution, which provides hosted SPF, hosted DKIM and hosted DMARC capabilities to simplify deployment and maintenance while increasing security. The solution also includes access to highly experienced consultants to guide you through the im […]

Tags: Spam Tool Threat Cloud Technical ★★★
ProofPoint.webp 2024-01-22 06:00:26
Types of Identity Threats and Attacks You Should Be Aware Of
(direct link)
It's easy to understand why today's cybercriminals are so focused on exploiting identities as a key step in their attacks. Once they have access to a user's valid credentials, they don't have to worry about finding creative ways to break into an environment. They are already in.

Exploiting identities requires legwork and persistence to be successful. But in many ways this tactic is simpler than exploiting technical vulnerabilities. In the long run, a focus on turning valid identities into action can save bad actors a lot of time, energy and resources. Clearly, it's become a favored approach for many attackers. In the past year, 84% of companies experienced an identity-related security breach.

To defend against identity-based attacks, we must understand how bad actors target the authentication and authorization mechanisms that companies use to manage and control access to their resources. In this blog post, we will describe several forms of identity-based attacks and methods and offer an overview of some security controls that can help keep identity attacks at bay.

Types of identity-based attacks and methods

Below are eight examples of identity attacks and related strategies. This is not an exhaustive list and, of course, cybercriminals are always evolving their techniques. But this list does provide a solid overview of the most common types of identity threats.

1. Credential stuffing

Credential stuffing is a type of brute-force attack. Attackers add pairs of compromised usernames and passwords to botnets that automate the process of trying to use the credentials on many different websites at the same time. The goal is to identify account combinations that work and can be reused across multiple sites.

Credential stuffing is a common identity attack technique, in particular for widely used web applications. When bad actors find a winning pair, they can steal from and disrupt many places at once.
Unfortunately, this strategy is highly effective because users often use the same passwords across multiple websites.  2. Password spraying  Another brute-force identity attack method is password spraying. A bad actor will use this approach to attempt to gain unauthorized access to user accounts by systematically trying commonly used passwords against many usernames.   Password spraying isn\'t a traditional brute-force attack where an attacker attempts to use many passwords against a single account. It is a more subtle and stealthy approach that aims to avoid account lockouts. Here\'s how this identity attack usually unfolds:  The attacker gathers a list of usernames through public information sources, leaked databases, reconnaissance activities, the dark web and other means.  They then select a small set of commonly used or easily guessable passwords.  Next, the attacker tries each of the selected passwords against a large number of user accounts until they find success.  Password spraying is designed to fly under the radar of traditional security detection systems. These systems may not flag these identity-based attacks due to the low number of failed login attempts per user. Services that do not implement account lockout policies or have weak password policies are at risk for password spraying attacks.   3. Phishing  Here\'s a classic and very effective tactic that\'s been around since the mid-1990s. Attackers use social engineering and phishing to target users through email, text messages, phone calls and other forms of communication. The aim of a phishing attack is to trick users into falling for the attacker\'s desired action. That can include providing system login credentials, revealing financial data, installing malware or sharing other sensitive data.   Phishing attack methods have become more sophisticated over the years, but they still rely on social engineering to be effective.   4. 
Social engineering   Social engineering is more of an ingredient in an identity attack. It\'s all about the deception and manipulation of users, and it\'s a feature in Malware Vulnerability Threat Patching Technical ★★
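The low-and-slow pattern described above suggests a simple detection heuristic: flag a source that fails logins against many distinct accounts while keeping attempts per account low. A minimal sketch, where the thresholds and the event format are illustrative assumptions rather than any vendor's actual detection rule:

```python
from collections import defaultdict

def detect_password_spray(events, account_threshold=10, per_account_max=3):
    """Flag source IPs whose failed logins spread across many accounts
    with only a few attempts each -- the password-spraying signature.

    events: iterable of (source_ip, username, success) tuples.
    """
    failures = defaultdict(lambda: defaultdict(int))  # ip -> user -> fail count
    for ip, user, success in events:
        if not success:
            failures[ip][user] += 1

    flagged = []
    for ip, per_user in failures.items():
        spread = len(per_user)                     # distinct accounts targeted
        low_and_slow = all(c <= per_account_max for c in per_user.values())
        if spread >= account_threshold and low_and_slow:
            flagged.append(ip)
    return flagged
```

Note that a classic single-account brute force (many failures, one username) deliberately does not trip this rule; it is better caught by per-account lockout policies.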
ProofPoint.webp 2024-01-17 06:00:02 How to Set Up an Insider Threat Management and Data Loss Prevention Program
(direct link)
This blog post is adapted from our e-book, Getting Started with DLP and ITM.

The last few years have brought unprecedented change. An increasingly distributed workforce, access to more data through more channels and a shift to the cloud have transformed the nature of work. These trends have made protecting sensitive data more complicated and demanding. What's clear is that organizations are struggling to rise to the challenge. Between 2020 and 2022, insider threats increased by a staggering 44%. And the costs of addressing them increased 34%, from $11.45 million to $15.38 million.

This upswing mainly comes down to two factors. For starters, most security teams have little visibility into people-caused data loss and insider-led security incidents. And few have the tools or resources to handle it. That's why Gartner sees platforms for data loss prevention and insider threat management (DLP and ITM) increasingly converging. Businesses need tools and processes that give them holistic, contextualized insights that take user behavior into account. It's no longer enough to focus on data, and where it's moving. To prevent data loss, industry leaders need to take a people-centric approach that expands beyond traditional drivers like compliance. In this blog post, we'll explore some basics for designing an ITM and DLP program. This can help you approach information protection in a way that's built for how modern organizations work.

Why information protection is so challenging

Risks are everywhere in today's complex landscape. Here are a few changes making it difficult for companies to protect their data.

More data is open to exposure and theft. As businesses go digital, more data is being generated than ever before. According to IDC's Worldwide Global DataSphere Forecast, the total amount of data generated will double from 2022 to 2026. That means malicious insiders will have access to more sensitive data through more channels. It will also be easier for careless users to expose data inadvertently. Plus, any security gap between channels, any misconfiguration or any accidental sharing of files can give external attackers more opportunities to steal data.

New data types are hard to detect. Data isn't just growing in volume. It's also becoming more diverse, which makes it harder to detect and control. With traditional DLP tools, data typically must fit within very tightly defined data patterns (such as a payment card number). But even then, detection generates too many false positives. Now, key business data is more diverse and can be graphical, tabular or even source code.

The network security perimeter no longer exists. With more employees and contractors working remotely, the security perimeter has shifted from brick and mortar to one based on people. Add to the mix bring-your-own-device (BYOD) practices, where the personal and professional tend to get blurred, and security teams have even more risks to contend with. In a survey for the 2023 State of the Phish report from Proofpoint, 72% of respondents said they use one or more of their personal devices for work.

Employee churn is high. Tech industry layoffs in 2022 and 2023 have seen many employees leaving and joining businesses at a rapid rate. The result is a greater risk of data exfiltration, infiltration and sabotage. Security leaders know it, too: 39% of chief information security officers rated improving information protection as their top priority over the next two years.

Security talent is in short supply. A lack of talent has left many security teams under-resourced. And the situation is likely to get worse. In 2023, the cybersecurity workforce gap hit an all-time high: there are 4 million more jobs than there are skilled workers.

DLP vs. ITM

What's the difference between DLP and ITM? Both DLP and ITM work to prevent data loss. But they achieve it in different ways.

DLP tracks data movement and exfiltration. DLP monitors file activity and scans content to see whether users are handling sen Tool Threat Cloud Technical ★★
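Pattern-based DLP detection of the kind described above is typically a regular expression plus a checksum to cut false positives. A minimal sketch for payment card numbers using the public Luhn check; this is illustrative only, not how any particular DLP product implements scanning:

```python
import re

# Candidate runs of 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: doubling every second digit from the right
    filters out most random digit runs (e.g. order IDs)."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:      # positions that are "every second from the right"
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str):
    """Return normalized card-like numbers that also pass the Luhn check."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"\D", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

Even with the checksum, real detectors add context rules (nearby keywords, file type, data source) because a Luhn-valid 16-digit number is not always a card number.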
ProofPoint.webp 2024-01-08 06:00:19 Proofpoint Recognized in 2023 Gartner® Market Guide for Insider Risk Management Solutions
(direct link)
It's easy to understand why insider threats are one of the top cybersecurity challenges for security leaders. The shift to remote and hybrid work, combined with data growth and cloud adoption, has made it easier than ever for insiders to lose or steal data. Legacy systems simply don't provide the visibility into user behavior that's needed to detect and prevent insider threats. With so much potential for brand and financial damage, insider threats are now an issue for the C-suite. As a result, businesses are on the lookout for tools that can help them better manage these threats.

To help businesses understand what to look for, Gartner recently released its Market Guide for Insider Risk Management Solutions. In this report, Gartner explores what security and risk leaders should look for in an insider risk management (IRM) solution. It also provides guidance on how to implement a formal IRM program. Let's dive into some of its highlights.

Must-have capabilities for IRM tools

Gartner states that IRM "refers to the use of technical solutions to solve a fundamentally human problem." And it defines IRM as "a methodology that includes the tools and capabilities to measure, detect and contain undesirable behavior of trusted accounts in the organization." Gartner identifies three distinct types of users: careless, malicious and compromised. That, we feel, is in line with our view at Proofpoint. And the 2022 Cost of Insider Threats Global Report from Ponemon Institute notes that most insider risks can be attributed to errors and carelessness, followed by malicious and compromised users.

In its Market Guide, Gartner identifies the mandatory capabilities of enterprise IRM platforms:

Orchestration with other cybersecurity tooling
Monitoring of employee activity and assimilating it into a behavior-based risk model
Dashboarding and alerting of high-risk activity
Orchestration and initiation of intervention workflows

This is the third consecutive year that Proofpoint is a Representative Vendor in the Market Guide. Proofpoint was an early and established leader in the market for IRM solutions. Our platform:

Integrates with a broad ecosystem of cybersecurity tools. Our API-driven architecture makes it easy for you to feed alerts into your security tools. That includes security information and event management (SIEM) as well as SOAR and service management platforms, such as Splunk and ServiceNow. That, in turn, helps you gain a complete picture of potential threats.

Provides a single lightweight agent with a dual purpose. With Proofpoint, you get the benefit of data loss prevention (DLP) and ITM in a single solution. This helps you protect against data loss and get deep visibility into user activities. With one agent, you can monitor everyday users. That includes low-risk and regular business users; risky users, such as departing employees; privileged users; and targeted users.

Offers one centralized dashboard. This saves you time and effort by allowing you to monitor users, correlate alerts and triage investigations from one place. You no longer need to waste time switching between tools. You can quickly see your riskiest users, top alerts and file exfiltration activity in customizable dashboards.

Includes tools to organize and streamline tasks. Proofpoint ITM lets you change the status of events with ease, streamline workflows and better collaborate with team members. Plus, you can add tags to help group and organize your alerts and work with more efficiency.

DLP and IRM are converging

In its latest Market Guide, Gartner says: "Data loss prevention (DLP) and insider risk strategies are increasingly converging into a unified solution. The convergence is driven by the recognition that preventing data loss and managing insider risks are interconnected goals." A legacy approach relies on tracking data activity. But that approach is no longer sufficient because the modern way of working is more complex. Employees and third parties have access to more data than ever before. And ex Tool Threat Cloud Technical ★★★
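As a sketch of what feeding an alert into a SIEM can look like, the snippet below wraps an alert in the envelope that Splunk's HTTP Event Collector accepts. The field names, index, sourcetype and token are assumptions for illustration, not Proofpoint's actual integration schema:

```python
import time

def build_hec_event(alert, index="security", sourcetype="itm:alert"):
    """Wrap an ITM-style alert dict in a Splunk HTTP Event Collector
    envelope. The alert keys used here (user, rule, severity, ts)
    are hypothetical, chosen only for this example."""
    return {
        "time": alert.get("ts", int(time.time())),
        "host": alert.get("host", "itm-console"),
        "index": index,
        "sourcetype": sourcetype,
        "event": {
            "user": alert["user"],
            "rule": alert["rule"],
            "severity": alert.get("severity", "high"),
        },
    }

# Shipping it would then be a single authenticated POST, e.g.:
#   requests.post("https://splunk.example.com:8088/services/collector/event",
#                 headers={"Authorization": "Splunk <HEC_TOKEN>"},
#                 json=build_hec_event(alert), timeout=5)
```

Keeping the envelope-building separate from the transport makes it easy to unit-test the mapping and to reuse it for other destinations (SOAR, ServiceNow) with a different transport layer.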
ProofPoint.webp 2023-12-28 14:18:07 Designing a Cost-Efficient, Petabyte-Scale Mutable Full Text Index
(direct link)
Engineering Insights is an ongoing blog series that gives a behind-the-scenes look into the technical challenges, lessons and advances that help our customers protect people and defend data every day. Each post is a firsthand account by one of our engineers about the process that led up to a Proofpoint innovation.

At Proofpoint, running a cost-effective, full-text search engine for compliance use cases is an imperative. Proofpoint customers expect to be able to find documents in multi-petabyte archives for legal and compliance reasons. They also need to index and perform searches quickly to meet these use cases. However, creating full-text search indexes with Proofpoint Enterprise Archive can be costly. So we devote considerable effort toward keeping those costs down. In this blog post, we explore some of the ways we do that while still supporting our customers' requirements.

Separating mutable and immutable data

One of the most important and easiest ways to reduce costs is to separate mutable and immutable data. This approach doesn't fit every use case, but for the Proofpoint Enterprise Archive it fits well. For archiving use cases, and especially for SEC 17a-4 compliance, data that is indexed can't be modified. That includes data like text in message bodies and attachments. The Proofpoint Enterprise Archive also has features that require the storage and mutation of data alongside a message, in accordance with U.S. Securities and Exchange Commission (SEC) compliance. (For example, which folders a message belongs to, and which legal matters it pertains to.)

To summarize, we have:

Large immutable indexes
Small mutable indexes

By separating data into mutable and immutable categories, we can index these datasets separately. And we can use different infrastructure and provisioning rules to manage that data. The use of different infrastructure allows us to optimize the cost of each independently.

(Figure: Comparing the relative sizes of mutable and immutable indexes.)

Immutable index capacity planning and cost

Normally, full-text search indexes must be provisioned to handle the load of initial write operations, any subsequent update operations and read operations. By indexing immutable data separately, we no longer need to provision capacity for subsequent update operations. This requires fewer IO operations overall. To reduce IO needs further, the initial index population is managed carefully with explicit IO reservation. Sometimes, this means adding more capacity (nodes/servers/VMs) so that the IO capacity of existing infrastructure is not overloaded.

When indexes mutate, it is typically best practice to leave an abundance of free disk space to support the index merge operations that occur on update. In some cases, this can be as much as 50% free disk space. But with immutable indexes, you don't need so much spare capacity, and that helps to reduce costs.

In summary, the following designs help keep costs down:

Reduced IO needs, because documents do not mutate
Reduced disk space requirements, because free space for mutation isn't needed
Careful IO planning on initial population, which reduces IO requirements

Mutable index capacity planning and cost

Meanwhile, mutable indexes follow standard practices. They can't receive the same reduced capacity as immutable indexes. However, given that they're a fraction of the size, it's a good trade-off.

(Figure: Comparing the relative free disk space of mutable and immutable indexes.)

Optimized join with custom partitioning and routing

In a distributed database, join operations can be expensive. We often have 10s to 100s of billions of documents for the archiving use case. When both sides of the join operation have large cardinality, it's impractical to use a generalized approach to join the mutable and immutable data. To make this high-cardinality join practical, we partition the data in the same way for both the mutable and immutable data. As a result, we end up with a one-t Cloud Technical ★★★
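One common way to make such a join shard-local, consistent with the partitioning approach described above, is to route both datasets by the same key so that a document and its mutable metadata always land on the same shard. A sketch under stated assumptions; the shard count, key names and record shapes are illustrative, not the Proofpoint Enterprise Archive implementation:

```python
import hashlib

NUM_SHARDS = 16

def shard_for(message_id: str) -> int:
    """Route by a stable hash of the message ID. Using the same routing
    function for both the immutable documents and the mutable metadata
    guarantees the join never crosses shard boundaries."""
    digest = hashlib.sha1(message_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

def local_join(immutable_shard, mutable_shard):
    """Join message bodies with their folder/legal-matter metadata
    entirely within one shard. After co-partitioning this is a cheap
    hash join over one shard's worth of data, not a cluster-wide join."""
    meta = {m["message_id"]: m for m in mutable_shard}
    return [
        {**doc, **meta.get(doc["message_id"], {})}
        for doc in immutable_shard
    ]
```

The cost win is that each shard can perform its join with only local reads; no shuffle of billions of documents is needed, and the small mutable side can even be held in memory per shard.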
ProofPoint.webp 2023-12-14 09:44:32 Insider Threat Mitigation: 5 Best Practices to Reduce Risk
(direct link)
(This is an updated version of a blog that was originally published on 1/28/21.)

Most security teams focus on detecting and preventing external threats. But not all threats come from the outside. The shift to hybrid work, accelerated cloud adoption and high rates of employee turnover have created a perfect storm for data loss and insider threats over the past several years. Today, insider threats rank among the top concerns for security leaders: 30% of chief information security officers report that insider threats are their biggest cybersecurity threat over the next 12 months.

It's easy to understand why. Insider threats have increased 44% since 2020 due to current market dynamics, and security teams are struggling to keep pace. According to the Verizon 2023 Data Breach Investigations Report, 74% of all breaches involve the human element. In short, data doesn't lose itself. People lose it.

When the cybersecurity risk to your company's vital systems and data comes from the inside, finding ways to mitigate it can be daunting. Unlike with tools that combat external threats, security controls for data loss and insider threats can impact users' daily jobs. However, with the right approach and insider threat management tools, that doesn't have to be the case. In this blog post, we'll share best practices for insider threat mitigation to help your business reduce risk and overcome common challenges you might face along the way.

What is an insider threat?

But first, let's define what we mean by an insider threat. In the cybersecurity world, the term "insider" describes anyone with authorized access to a company's network, systems or data. In other words, it is someone in a position of trust. Current employees, business partners and third-party contractors can all be defined as insiders.

As part of their day-to-day jobs, insiders have access to valuable data and systems like:

Computers and networks
Intellectual property (IP)
Personal data
Company strategy
Financial information
Customer and partner lists

All insiders pose a risk given their position of trust, but not all insiders are threats. An insider threat occurs when someone with authorized access to critical data or systems misuses that access, either on purpose or by making a mistake. The fallout from an insider threat can be dire for a business, including IP loss, legal liability, financial consequences and reputational damage. The challenge for security teams is to determine which insiders are threats, and what type of threats they are, so they know how to respond. There are three insider threat types:

Careless. This type of risky insider is best described as a user with good intentions who makes bad decisions that can lead to data loss. The 2022 Cost of Insider Threats Global Report from Ponemon Institute notes that careless users account for more than half (56%) of all insider-led incidents.

Malicious. Some employees, or third parties like contractors or business partners, are motivated by personal gain. Or they might be intent on harming the business. In either case, these risky users might want to exfiltrate trade secrets or take IP when they leave the company. Industrial espionage and sabotage are examples of malicious insider activity. Ponemon research shows malicious insiders account for 26% of insiders.

Compromised. Sometimes, external threat actors steal user login information or other credentials. They then use those credentials to access applications and systems. Ponemon reports that compromised users account for 18% of insiders.

Insider threat mitigation best practices

Companies can minimize brand and financial damage by detecting and stopping insider threats. How each security team approaches insider threats will vary depending on the industry, maturity and business culture. However, every organization can use the five best practices we've outlined below to improve its insider threat prevention.

1. Identify your risky users

Most insiders fall into the “care Data Breach Tool Threat Industrial Cloud Technical ★★
ProofPoint.webp 2023-11-21 08:35:02 Preventing MFA Fatigue Attacks: Safeguarding Your Organization
(direct link)
Gaining access to critical systems and stealing sensitive data are top objectives for most cybercriminals. Social engineering and phishing are powerful tools to help them achieve both. That's why multifactor authentication (MFA) has become such an important security measure for businesses and users. Without MFA as part of the user authentication process, it is much less challenging for an attacker with stolen credentials to authenticate to a user's account.

The primary goal of MFA is to reduce the risk of unauthorized access, especially in situations where passwords alone may not provide enough protection. Even if an attacker steals a user's password, with MFA they still need the second factor (and maybe others) to gain access to an account. Examples of MFA factors include biometrics, like fingerprints, and signals from user devices, like GPS location.

MFA isn't a perfect solution, though; it can be bypassed. Adversaries are relentless in their efforts to undermine any security defenses standing in the way of their success. (The evolution of phish kits for stealing MFA tokens is evidence of that.) But sometimes, attackers will choose an in-your-face approach that is not very creative or technical. MFA fatigue attacks fall into that category.

What are MFA fatigue attacks, and how do they work?

MFA fatigue attacks, also known as MFA bombing or MFA spamming, are a form of social engineering. They are designed to wear down a user's patience so that they will accept an MFA request out of frustration or annoyance, and thus enable an attacker to access their account or device. Many people encounter MFA requests daily, or even multiple times per day, as they sign in to various apps, sites, systems and platforms. Receiving MFA requests via email, phone or other devices as part of that process is a routine occurrence. So, it is logical for a user to assume that if they receive a push notification from an account that they know requires MFA, it is a legitimate request. And if they are very busy when they receive several push notifications in quick succession, they may be even more inclined to accept a request without scrutinizing it.

Here's an overview of how an MFA fatigue attack works:

A malicious actor obtains the username and password of their target. They can achieve this in various ways, from password-cracking tactics like brute-force attacks, to targeted phishing attacks, to purchasing stolen credentials on the dark web.

The attacker then sends MFA notifications to the user continuously, usually via automation, until that individual feels overwhelmed and approves the login attempt just to make the requests stop. (Usually, the push notifications from MFA solutions require the user to simply click a "yes" button to authenticate from the registered device or email account.)

Once the attacker has unauthorized access to the account, they can steal sensitive data, install malware and do other mischief, including impersonating the user they have compromised, taking their actions as far as they can or want to go.

3 examples of successful MFA fatigue attacks

To help your users understand the risk of these attacks, you may want to include some real-world examples in your security awareness program on this topic. Here are three notable incidents, all associated with the same threat actor:

Uber. In September 2022, Uber reported that an attacker affiliated with the threat actor group Lapsus$ had compromised a contractor's account. The attacker may have purchased corporate account credentials on the dark web, Uber said in a security update. The contractor received several MFA notifications as the attacker tried to access the account, and eventually accepted one. After the attacker logged in to the account, they proceeded to access other accounts, achieving privilege escalation. One action the attacker took was to reconfigure Uber's OpenDNS to display a graphic image on some of the company's internal sites.

Cisco. Cisco suffer Ransomware Data Breach Malware Tool Threat Technical Uber ★★★
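A defensive control implied by these incidents is to rate-limit push notifications per user and alert on bursts. A minimal sketch of such a monitor; the thresholds and the callback-style API are illustrative assumptions, not a specific MFA vendor's product behavior:

```python
from collections import deque

def make_push_monitor(max_pushes=5, window_secs=60):
    """Return a recorder that tracks MFA push timestamps per user and
    reports True when the rate looks like an MFA-bombing burst rather
    than normal sign-in activity (at which point a real system would
    suppress further pushes and raise a security alert)."""
    history = {}  # user -> deque of recent push timestamps

    def record(user, ts):
        q = history.setdefault(user, deque())
        q.append(ts)
        # Drop timestamps that have aged out of the sliding window.
        while q and ts - q[0] > window_secs:
            q.popleft()
        return len(q) > max_pushes

    return record
```

Pairing a control like this with number-matching prompts (where the user must type a code shown on the login screen) removes the one-tap "yes" button that MFA fatigue attacks depend on.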
ProofPoint.webp 2023-11-16 14:15:19 Actionable Insights: Simplifying Threat Explainability via the Condemnation Summary
(direct link)
In this blog series we cover how to improve your company's security posture with actionable insights. Actionable insights are a critical tool to help you improve your security posture and stop initial compromise in the attack chain. You can use them to identify and respond to potential risks, enhance your incident response capabilities, and make more informed security decisions.

In previous actionable insights blog posts, we covered these topics:

People risk
Origin risk
Business email compromise (BEC) risk
Ensuring proper risk context
Risk efficacy
Telephone-oriented attack delivery (TOAD) risk
Threat intelligence
Your risk profile

In this post, we are excited to announce the new TAP Condemnation Summary, which is available to all Proofpoint Targeted Attack Protection (TAP) customers who use the Proofpoint Aegis threat protection platform. We'll explain why it is an invaluable resource and explore some of its key reports.

Threat explainability: Introducing the Condemnation Summary

In the ever-evolving cybersecurity landscape, clear communication and rapid understanding of email threats are essential. Proofpoint introduced the Condemnation Summary to enhance threat visibility and explain, in plain, everyday language, why a particular threat is condemned. The summary makes it easier for both technical and nontechnical users to comprehend email threats. You can find the TAP Condemnation Summary in the Evidence section of the threat details page for any individual threat within your Aegis platform. Let's explore how this new feature can help your business.

Insights: What you can learn from the Condemnation Summary

The Condemnation Summary helps demystify email threats and streamline the decision-making process for threat remediation. Here's what you can expect from this innovative feature.

User and VIP insights. The Condemnation Summary includes a highlights card that spotlights impacted users and VIPs. With drilldown options and actionable items, you can quickly determine who is affected. You can use these insights to understand the steps you need to take to mitigate the threat. (Figure: Details about affected users shown in the Condemnation Summary.)

Threat state overview. This section of the summary breaks down the state of the threat or campaign, complete with timestamps. A chronological view provides a clear understanding of how the threat evolved, so you can assess its severity and impact. (Figure: The threat state overview section in the Condemnation Summary.)

User-friendly descriptions. The Condemnation Summary offers high-level observations from our behavioral and machine learning detection layers. Threats are described in everyday language, so nontechnical users can better grasp the nature of a threat and its potential consequences. (Figure: High-level observations in plain language in the Condemnation Summary.)

Source attribution. It's helpful to understand where a threat originated. Condemnation Sources gives you insight into which sources contributed to the detection and condemnation of the threat. (Figure: The Condemnation Sources section in the Condemnation Summary.)

Targeted controls: Taking action

The Condemnation Summary isn't just a feature for visibility or explainability. It's a tool for action. Here's how to make the most of this new feature:

Mitigate threats faster. With user and VIP insights, you can respond promptly to threats that are impacting specific individuals. Take immediate action to protect these users and mitigate risks.

Improve your communication about threats. The user-friendly descriptions in the Condemnation Summary make it easier to communicate threat details to nontechnical stakeholders. This, in turn, helps foster better collaboration around security across your business.

See how threats evolve. When you have a timeline of a threat's progression, you can assess how a threat evolved and whether it is part of a broader campaign.

Track where threats come from. It is cruci Tool Threat Technical ★★★
ProofPoint.webp 2023-11-13 06:18:08 Enabling Real-Time Spam Signature Updates without Slowing Down Performance
(direct link)
Engineering Insights is an ongoing blog series that gives a behind-the-scenes look into the technical challenges, lessons and advances that help our customers protect people and defend data every day. Each post is a firsthand account by one of our engineers about the process that led up to a Proofpoint innovation.

Proofpoint Intelligent Compliance classifies text content from sources ranging from social media to customer-supplied requests. One part of our system detects spam content, typically from social media-based sources. A common challenge for spam detection systems is that adversaries modify their content to evade detection. We have an algorithm that addresses this problem. Sometimes, false positives must also be corrected. We handle this by maintaining an exclusion list alongside a positive list of spam signatures. In this blog post, we explain how we update spam signatures in real time without negatively impacting performance.

A need to scale without compromising performance

As the Proofpoint Patrol customer base grew, we had to scale the system to keep delivering fast, reliable service. Originally, the text categorization service was built into our core classifier service and could not be scaled independently. We decided to split it out into its own service so we could develop and scale it independently of our classifier service. Our first release of this new system allowed us to scale more efficiently and led to a sharp decrease in latency. Part of the performance improvement came from loading the spam signature set into memory at service startup.

However, this led to a limitation: we could not easily update our positive or exclusion signature sets without rebuilding and redeploying the application. That meant our spam system would not learn new spam signatures over time, which would also lead to an increase in false negatives.

An in-memory data store solution: Redis

Shortly after joining Proofpoint, I was tasked with improving the spam detection system to learn over time while keeping the performance benefits. We needed a solution with low read latency, and ideally low write latency, since our read/write ratio sat around 80/20. One candidate was Redis, an open-source in-memory data store. Amazon offers a Redis-compatible implementation, MemoryDB, which provides data persistence beyond what a typical cache solution can offer.

(Figure: Introducing an in-memory signature storage solution.)

On the performance side, Amazon reports microsecond read latency and single-digit-millisecond write latency. While evaluating potential solutions, we observed similar latencies with our workload. We generally have more read queries than write queries; however, we see occasional spikes in write queries.

(Figure: A graph showing read commands over time.)

(Figure: A graph showing write commands over time.)

Because MemoryDB persists our spam signatures and exclusion list, our system can store new spam signatures at runtime. That also allows the system to improve over time, and lets us respond quickly to false positive reports by updating the list in real time.

Our results

After completing our Spam Cloud Technical ★★★
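The real-time update flow described above can be sketched with a Redis-style set API: a positive set of spam signatures plus an exclusion set for false-positive fixes, both updatable at runtime with no redeploy. The key names and the tiny in-memory stand-in below are illustrative assumptions, not Proofpoint's actual schema; against MemoryDB you would pass a real `redis.Redis` client instead:

```python
class SignatureStore:
    """Spam-signature lookups backed by any client exposing Redis-style
    set commands (sadd / srem / sismember), e.g. redis.Redis."""

    def __init__(self, client, positive_key="spam:sigs", exclude_key="spam:excluded"):
        self.client = client
        self.positive_key = positive_key
        self.exclude_key = exclude_key

    def learn(self, signature):
        """Add a newly observed spam signature at runtime (no redeploy)."""
        self.client.sadd(self.positive_key, signature)

    def exclude(self, signature):
        """Fix a reported false positive in real time."""
        self.client.sadd(self.exclude_key, signature)
        self.client.srem(self.positive_key, signature)

    def is_spam(self, signature):
        """Exclusion list wins over the positive list."""
        if self.client.sismember(self.exclude_key, signature):
            return False
        return bool(self.client.sismember(self.positive_key, signature))


class FakeRedis:
    """Minimal in-memory stand-in so the sketch runs without a server."""

    def __init__(self):
        self.sets = {}

    def sadd(self, key, member):
        self.sets.setdefault(key, set()).add(member)

    def srem(self, key, member):
        self.sets.get(key, set()).discard(member)

    def sismember(self, key, member):
        return member in self.sets.get(key, set())
```

Because reads dominate (roughly 80/20, as noted above), the hot path is a single `SISMEMBER` per lookup, while the occasional `SADD`/`SREM` writes propagate to every reader immediately.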