What's new around the internet

Last one

Src Date (GMT) Title Description Tags Stories Notes
AlienVault.webp 2023-11-28 11:00:00 For want of a cyber nail the kingdom fell
(direct link)
An old proverb, dating to at least the 1360s, states: "For want of a nail, the shoe was lost, for want of a shoe, the horse was lost, for want of a horse, the rider was lost, for want of a rider, the battle was lost, for want of a battle, the kingdom was lost, and all for the want of a horseshoe nail." When published in Ben Franklin’s Poor Richard’s Almanack in 1768, it was preceded by the cautionary words: “a little neglect may breed great mischief”. This simple proverb and added comment serve as emblematic examples of how seemingly inconsequential missteps or neglect can lead to sweeping, irreversible, catastrophic losses. The cascade of events resonates strongly within the increasingly complex domain of cybersecurity, in which the omission of even the most elementary precaution can result in a spiraling series of calamities. Indeed, the realm of cybersecurity is replete with elements that bear striking resemblance to the nail, shoe, horse, and rider in this proverb. Consider, for example, the ubiquitous and elementary software patch that may be considered the proverbial digital "nail." In isolation, this patch might seem trivial, but its role becomes crucial when viewed within the broader network of security measures. The 2017 WannaCry ransomware attack demonstrates the significance of such patches; an unpatched vulnerability in Microsoft Windows allowed the malware to infiltrate hundreds of thousands of computers across the globe. It wasn't just a single machine that was compromised due to this overlooked 'nail,' but entire networks, echoing how a lost shoe leads to a lost horse in the proverb. This analogy further extends to the human elements of cybersecurity. Personnel tasked with maintaining an organization's cyber hygiene play the role of the "rider" in our metaphorical tale.
However, the rider is only as effective as the horse they ride; likewise, even the most skilled IT professional cannot secure a network if the basic building blocks—the patches, firewalls, and antivirus software—resemble missing nails and shoes. Numerous reports and studies have indicated that human error constitutes one of the most common causes of data breaches, often acting as the 'rider' who loses the 'battle'. Once the 'battle' of securing a particular network or system is lost, the ramifications can extend much further, jeopardizing the broader 'kingdom' of an entire organization or, in more extreme cases, critical national infrastructure. One glaring example that serves as a cautionary tale is the Equifax data breach of 2017, wherein a failure to address a known vulnerability resulted in the personal data of 147 million Americans being compromised. Much like how the absence of a single rider can tip the scales of an entire battle, this singular oversight led to repercussions that went far beyond just the digital boundaries of Equifax, affecting millions of individuals and shaking trust in the security of financial systems. Ransomware Data Breach Malware Vulnerability Wannacry Wannacry Equifax Equifax ★★
AlienVault.webp 2023-10-19 10:00:00 Why are organizations failing to detect cybersecurity threats?
(direct link)
The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. With the changing security landscape, the most daunting task for the CISO and CIO is to fight an ongoing battle against hackers and cybercriminals. Bad actors stay ahead of the defenders and are always looking for new vulnerabilities and loopholes to exploit to enter the business network. Failing to address these threats promptly can have catastrophic consequences for the organization. A survey finds that, on average, it takes more than five months to detect and remediate cyber threats. This is a significant amount of time, as a delayed response leaves the organization exposed to a successful cyber-attack. One can never forget the devastating impacts of the Equifax breach in 2017 and the Target breach in 2013, both due to delayed detection and response. This is concerning and highlights the need for proactive cybersecurity measures to detect and mitigate rising cyber threats. Amidst this, it's also crucial to look into why it is challenging to detect cyber threats. Why do organizations fail to detect cyber threats? Security teams are dealing with more cyber threats than ever before. A report also confirmed that global cyber attacks increased by 38% in 2022 compared to the previous year. The increasing number and complexity of cyber-attacks make it challenging for organizations to detect them. Hackers use sophisticated techniques to bypass security systems and solutions - like zero-day vulnerabilities, phishing attacks, business email compromise (BEC), supply chain attacks, and Internet of Things (IoT) attacks. Some organizations are unaware of the latest cyber threat trends and lack the skills and resources to detect them. For instance, hackers offer professional services like ransomware-as-a-service (RaaS) to launch ransomware attacks.
Surprisingly, two out of three ransomware attacks are facilitated by the RaaS setup, yet companies still fail to have a defensive strategy against them. Legacy devices and outdated software programs are no longer effective at recognizing certain malicious activities, leaving the networks of enterprises that rely on them vulnerable to potential threats. Additionally, the lack of trained staff, insider threats, and human error are other reasons why many organizations suffer at the hands of threat actors. Besides this, much of a company's data is hidden as dark data. As the defensive teams and employees may be unaware of it, hackers take complete advantage of dark data and either replicate it or use it to fulfill their malicious intentions. Moreover, cloud migration has rapidly increased in recent years, putting cybersecurity at significant risk. The complexity of cloud environments, poorly secured remote and hybrid work environments, and security responsibilities shared between cloud service providers and clients have complicated the situation. In addition, cloud vulnerabilities, which have risen 194% from the previous year, have highlighted the need for organizations to look for ways to strengthen their security infrastructure. Security measures to consider to prevent cyber threats Since businesses face complex cyber threats, mitigating them requires Ransomware Data Breach Tool Vulnerability Threat Cloud Equifax ★★
Fortinet.webp 2022-04-14 19:54:44 Incomplete Fix for Apache Struts 2 Vulnerability (CVE-2021-31805) Amended (direct link) FortiGuard Labs is aware that the Apache Software Foundation disclosed and released a fix for a potential remote code execution vulnerability (CVE-2021-31805, an OGNL injection vulnerability) that affects Apache Struts 2 on April 12th, 2022. Apache has acknowledged in an advisory that the fix was issued because the first patch, released in 2020, did not fully remediate the issue. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) also released an advisory on April 12th, 2022, warning users and administrators to review the security advisory "S2-062" issued by Apache and upgrade to the latest released version as soon as possible. Why is this Significant? This is significant because Apache Struts is widely used, and successfully exploiting CVE-2021-31805 could result in an attacker gaining control of a vulnerable system. Because of the potential impact, CISA released an advisory urging users and administrators to review the security advisory "S2-062" issued by Apache and upgrade to the latest released version as soon as possible. On a side note, an older Struts 2 OGNL injection vulnerability (CVE-2017-5638) was exploited in the wild, resulting in a massive data breach of the credit reporting agency Equifax in 2017. What is Apache Struts 2? Apache Struts 2 is an open-source web application framework for developing Java web applications that extends the Java Servlet API to assist, encourage, and promote developers to adopt a model-view-controller (MVC) architecture. What is CVE-2021-31805? CVE-2021-31805 is an OGNL injection vulnerability in Struts 2 that enables an attacker to perform remote code execution on a vulnerable system.
The vulnerability was originally assigned CVE-2020-17530; however, CVE-2021-31805 was newly assigned to the vulnerability after security researchers found a way around the original patch released in 2020. The vulnerability is described as "some of the tag's attributes could perform a double evaluation if a developer applied forced OGNL evaluation by using the %{...} syntax. Using forced OGNL evaluation on untrusted user input can lead to a Remote Code Execution and security degradation." What Versions of Apache Struts are Vulnerable to CVE-2021-31805? Struts 2.0.0 - Struts 2.5.29 are vulnerable. Struts 2.0.0 and 2.5.29 were released in 2006 and 2022, respectively. Has the Vendor Released a Patch for CVE-2021-31805? Yes, Apache released a fixed version (2.5.30) of Apache Struts 2 on April 12th, 2022. Users and administrators are advised to upgrade to Struts 2.5.30 or greater as soon as possible. Has the Vendor Released an Advisory? Yes, Apache released an advisory on April 12th, 2022. See the Appendix for a link to "Security Bulletin: S2-062". What is the Status of Coverage? FortiGuard Labs provides the following IPS coverage for CVE-2020-17530, which also applies to CVE-2021-31805: Apache.Struts.OGNL.BeanMap.Remote.Code.Execution Data Breach Vulnerability Guideline Equifax Equifax
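To illustrate the quoted description, a minimal, purely hypothetical Struts 2 tag of the vulnerable shape might look like the following; `skillName` is an assumed attribute name, and the point is that forcing OGNL evaluation with `%{...}` on a value that may itself contain user-supplied input causes a second, attacker-controlled evaluation:

```jsp
<%-- Illustrative sketch only: forced OGNL evaluation applied to a value
     that may already hold untrusted input. If skillName contains an
     attacker-supplied OGNL expression, %{...} evaluates it a second time. --%>
<s:a id="%{skillName}">link</s:a>
```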
Veracode.webp 2021-09-23 08:55:21 Application Security Testing Evolution and How a Software Bill of Materials Can Help (direct link) Early in my career, I developed web applications. At the time there were practically no frameworks or libraries to help. I was coding with Java using raw servlets and JSPs – very primitive by today's standards. There was no OWASP Top 10, and writing secure code was not something we paid much attention to. I specifically remember coding an open redirect years ago. I didn't know it was a vulnerability at the time. In my mind, it was a great feature for my Java servlet to recognize a special query string parameter that, if present, would trigger a redirection to the given URL! Interestingly, a dynamic scan or penetration test of the application would not have found my vulnerability. The name of the parameter was undocumented and not easy to guess. On the other hand, static application security testing (SAST) or a manual code review would have found it. My first stint at Veracode was in 2012, after six years working as an application security consultant. It was exciting to join an up-and-coming company on the cutting edge of AppSec testing. Since then, open source software has grown enormously and proliferated in all aspects of application development. Building apps today is faster because of how easy it is to integrate these components into our own projects. Package managers and open source registries like the Maven repository, the NPM registry, PyPI, and RubyGems.org provide a way for developers to quickly access and leverage a rich plethora of ready-to-use libraries and frameworks. The downside of this model of building applications is that vulnerabilities present in open source components are inherited by our software as well. This has resulted in many data breaches over the years (Equifax via Apache Struts comes to mind).
One of the reasons I recently re-joined Veracode is to have the opportunity to work with a premier Software Composition Analysis (SCA) tool. SCA is complementary to SAST. While SAST checks first-party code for security flaws, SCA looks at third-party code like open source libraries. In terms of the OWASP Top 10, this falls under OWASP #9 – Using Components with Known Vulnerabilities. If your application is using a vulnerable component, it's not necessarily your fault. The vulnerable component may be present because a library that your code is using directly has a dependency on another library. This is called a transitive dependency. Transitive dependencies are pulled in automatically by build systems, aka package managers. Data from our State of Software Security: Open Source Edition report shows that 71 percent of applications have a vulnerability in an open source library on initial scan, and that nearly half of those (47 percent) are transitive. Now let's talk about a software bill of materials (SBOM). An SBOM lists the individual components that are included in a piece of software. This can help with identifying vulnerabilities or license risks that may affect your organization. The concept of an SBOM is not new, but it's garnered much more interest lately due to the recent U.S. Cybersecurity Executive Order. One of its requirements is having an SBOM for all critical software sold to the federal government. There are different SBOM specifications in the marketplace today. I will focus on CycloneDX, which was recently accepted as a flagship OWASP project. CycloneDX is a security-focused SBOM specification capable of describing the following types of components: Application, Container, Device, File, Firmware, Framework, Library, Operating System, Service. CycloneDX's supported data formats are XML, JSON, and Protobuf.
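The direct-versus-transitive distinction described above can be sketched as a simple graph walk. The package names below are invented for illustration; real package managers perform this resolution automatically:

```python
# Sketch: given the dependencies an app declares directly, walk each
# package's own dependency list to find everything pulled in implicitly.
def transitive_dependencies(direct, dep_graph):
    """Return packages reachable from `direct` but not declared in it."""
    seen = set()
    stack = list(direct)
    while stack:
        pkg = stack.pop()
        for dep in dep_graph.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen - set(direct)

# Hypothetical app: it declares only "web-framework", yet inherits
# everything that framework (and its own dependencies) depend on.
graph = {
    "web-framework": ["template-lib", "http-client"],
    "http-client": ["legacy-parser"],
}
print(transitive_dependencies(["web-framework"], graph))
# A flaw in any of the three inherited packages is inherited by the app.
```

A scanner that only looks at declared dependencies would miss `legacy-parser` entirely, which is why SCA tools walk the full graph.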
Here's an example of a CycloneDX SBOM in JSON format. Right away we can see that the software represented by this SBOM includes one library – Apache's Commons Collections ver Vulnerability Equifax
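The JSON example itself did not survive extraction here; a minimal CycloneDX document of the kind described, with a single library component, looks roughly like the following (the Commons Collections coordinates and version are illustrative, not taken from the original post):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "group": "org.apache.commons",
      "name": "commons-collections4",
      "version": "4.4",
      "purl": "pkg:maven/org.apache.commons/commons-collections4@4.4"
    }
  ]
}
```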
itsecurityguru.webp 2021-09-09 10:25:08 Jenkins discloses attack on its Atlassian Confluence service (direct link) The open source automation server Jenkins has disclosed a successful attack on its Confluence service. Attackers abused an Object-Graph Navigation Language (OGNL) injection flaw – the same vulnerability type involved in the notorious 2017 Equifax hack – capable of leading to remote code execution (RCE) in Confluence Server and Data Center instances. Rated CVSS […] Hack Vulnerability Guideline Equifax Equifax
Veracode.webp 2021-02-24 13:30:31 Dangers of Only Scanning First-Party Code (direct link) When it comes to securing your applications, it's not unusual to only consider the risks from your first-party code. But if you're solely considering your own code, then your attack surface is likely bigger than you think. Our recent State of Software Security report found that 97 percent of the typical Java application is made up of open source libraries. That means your attack surface is exponentially larger than just the code written in-house. Yet a study conducted by Enterprise Strategy Group (ESG) established that less than half of organizations have invested in security controls to scan for open source vulnerabilities. If the majority of applications are made up of open source libraries, why are most organizations only scanning their first-party code? Because most organizations assume that third-party code was already scanned for vulnerabilities by the library developer. But you can't base the safety of your applications on assumptions. Our State of Software Security: Open Source Edition report revealed that approximately 42 percent of the third-party code pulled directly by an application developer has a flaw on first scan. And even if the third-party code appears to be free of flaws, more than 47 percent of third-party code has a transitive flaw that's pulled indirectly from another library in use. Over the years, several organizations have learned the hard way just how dangerous it is to only scan first-party code. In 2014, the notorious open source vulnerability known as Heartbleed occurred. Heartbleed was the result of a flaw in OpenSSL, a third-party library that implemented the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols. The vulnerability enabled cyberattackers to access over 4.5 million healthcare records from Community Health Systems Inc. In 2015, there was a critical vulnerability in Glibc, the GNU C library.
The open source security vulnerability, nicknamed “Ghost,” affected all Linux servers and web frameworks such as Python, PHP, and Ruby on Rails, as well as API web services that use the Glibc library. The vulnerability made it possible for hackers to compromise applications with a man-in-the-middle attack. In 2017, Equifax suffered a massive data breach from Apache Struts which compromised the data, including Social Security numbers, of more than 143 million Americans. Following the breach, Equifax's stock fell over 13 percent. On the good news front: close to 74 percent of open source flaws can be fixed with an update like a revision or patch. Even high-priority open source flaws don't require extensive refactoring of code: close to 91 percent can be fixed with an update. Equifax had to pay up to $425 million to help people affected by the data breach that the court deemed “entirely preventable.” In fact, it was discovered that the breach could have been avoided with a simple patch to its open source library, Apache Struts. Open source patches and updates Don't become a victim to the monsters lurking in your third-party libraries. Download our whitepaper Accelerating Software Development with Secure Open Source So Data Breach Vulnerability Equifax Equifax
AlienVault.webp 2021-01-12 11:00:00 Why cybersecurity awareness is a team sport (direct link) This blog was written by an independent guest blogger. Cybersecurity may be defined differently based on a person's viewpoint. One may want to simply protect and secure their social media accounts from hackers, and that would be the definition of what cybersecurity is to them. On the other hand, a small business owner may want to protect and secure credit card information gathered from their point-of-sale registers, and that is what they define as cybersecurity. Despite differences in implementation, at its core, cybersecurity pertains to the mitigation of potential intrusion of unauthorized persons into your system(s). It should encompass all aspects of one's digital experience--whether you are an individual user or a company. Your cyber protection needs to cover your online platforms, devices, servers, and even your cloud storage. Any unprotected area of your digital journey can serve as an exploit point for hackers and cyber criminals intent on finding vulnerabilities. People assume that it is the responsibility of the IT Department to stop any intrusion. That may be true up to a certain point, but in reality, cybersecurity responsibility rests with everyone. Cybersecurity should be everybody's business. The cybersecurity landscape is changing. With 68% of businesses saying that their cybersecurity risks have increased, it is no wonder that businesses have been making increased efforts to protect from, and mitigate, attacks. During the height of the pandemic, about 46% of the workforce shifted to working from home. We saw a surge in cybersecurity attacks - for example, RDP brute-force attacks increased by 400% around the same time. This is why cybersecurity must be and should be everybody's business.
According to the 2019 Cost of Cybercrime Study, cyberattacks often are successful due to employees willingly participating as internal actors, or employees and affiliates carelessly clicking a link by accident. Sadly, it is still happening today. Unsuspecting employees can be caught vulnerable and cause a corporate-wide cyberattack by opening a phishing email or bringing risks into the company's network in a BYOD (Bring Your Own Device) system. Just a decade ago, Yahoo experienced a series of major data breaches via a backdoor to their network system established by a hacker (or a group of hackers). Further digital forensic investigation shows the breach started from a phishing email opened by an employee. Another example is Equifax, which experienced a data breach in 2017 and was liable for fines amounting to $425 million from the Federal Trade Commission (FTC). Companies continue to double up on their investments in cybersecurity and privacy protection today to ensure that incidents like these do not happen to their own networks. But a network is only as strong as its weakest link. Hackers continue to innovate, making their attacks more and mo Ransomware Data Breach Malware Vulnerability Guideline Equifax Equifax Yahoo Yahoo
Veracode.webp 2020-11-10 09:10:27 In the Financial Services Industry, 74% of Apps Have Security Flaws (direct link) Over the past year, the financial services industry has been challenged with pivoting its operations to a fully digital model, putting the security of its software center stage. Despite the unanticipated pivot, our recent State of Software Security v11 (SOSS) report found that the financial services industry has the smallest proportion of applications with security flaws compared to other sectors, along with the second-lowest prevalence of severe security flaws and the best security flaw fix rate. But despite the impressive fix rate, the financial services industry is falling behind when it comes to the time to make those fixes. This is a troubling finding because speed matters in application security. The time it takes for attackers to come up with exploits for newly discovered vulnerabilities is measured in days, sometimes even hours. Letting known vulnerabilities linger unfixed dramatically increases your risk. For instance, it was merely days between disclosure and exploitation of the vulnerability in the Apache Struts framework that led to the Equifax breach. By looking at the data, the reason for the delay in remediation becomes clearer. In the financial services sector, applications tend to be older than those in other industry sectors, and the organizations are fairly large. Combined with these challenging factors, developers and security professionals in this industry aren't regularly employing best practices consistent with DevSecOps and known to improve fix rates, such as scanning for security both frequently and regularly and using more than one testing type. Nature vs Nurture What does this mean for the financial services industry?
The data suggests that for many financial services firms, developers face a challenging environment, with the adoption of additional DevSecOps practices showing the most opportunity for improvement in addressing security flaws. And while talking about flaws, it's worth noting that the most common security flaws in the financial services industry are information leakage, code quality, and CRLF injection. Injection flaws are especially important to keep an eye on since they're the top web application security risk according to the OWASP Top 10. On a positive note, the industry has lower than average cryptography, input validation, Cross-Site Scripting, and credentials management flaws. For more information on software security trends in the financial services industry, check out The State of Software Security Industry Snapshot. Vulnerability Equifax
Veracode.webp 2020-10-01 14:10:28 96% of Organizations Use Open Source Libraries but Less Than 50% Manage Their Library Security Flaws (direct link) Most modern codebases are dependent on open source libraries. In fact, a recent research report sponsored by Veracode and conducted by Enterprise Strategy Group (ESG) found that more than 96 percent of organizations use open source libraries in their codebase. But, shockingly, less than half of these organizations have invested in specific security controls to scan for open source vulnerabilities. Why is it important to scan open source libraries? For our State of Software Security: Open Source Edition report, we analyzed the security of open source libraries in 85,000 applications and found that 71 percent have a flaw. The most common open source flaws identified include Cross-Site Scripting, insecure deserialization, and broken access control. By not scanning open source libraries, these flaws remain vulnerable to a cyberattack. Equifax made headlines by not scanning its open source libraries. In 2017, Equifax suffered a massive data breach from Apache Struts which compromised the data, including Social Security numbers, of more than 143 million Americans. Following the breach, Equifax's stock fell over 13 percent. The unfortunate reality is that if Equifax had performed AppSec scans on its open source libraries and patched the vulnerability, the breach could have been avoided. Why aren't more organizations scanning open source libraries? If 96 percent of organizations use open source libraries and 71 percent of applications have a third-party vulnerability, why is it that less than 50 percent of organizations scan their open source libraries? The main reason is that when application developers add third-party libraries to their codebase, they expect that library developers have scanned the code for vulnerabilities.
Unfortunately, you can't rely on library developers to keep your application safe. Approximately 42 percent of the third-party code pulled directly by an application developer has a flaw on first scan. And even if the third-party code appears to be free of flaws, more than 47 percent of third-party code has a transitive flaw that's pulled indirectly from another library in use. What are your options for managing library security flaws? First off, it's important to note that most flaws in open source libraries are easy to fix. Close to 74 percent of the flaws can be fixed with an update like a revision or patch. Even high-priority flaws are easy to fix: close to 91 percent can be fixed with an update. So, when it comes to managing your library security flaws, the concentration should not just be, "How Data Breach Tool Vulnerability Equifax
AlienVault.webp 2019-10-29 13:00:00 Was the largest breach in history a misconfiguration problem? (direct link) Earlier this week, I heard a fascinating interview with the former Chief Information Officer of Equifax, Graeme Payne. If you are unfamiliar with Graeme, he was the scapegoat for the Equifax breach, described in Congressional testimony as “the human error” that caused the breach. Graeme, however, is a true gentleman who is very gracious about his situation. He explained that the servers that were breached were “under his watch”, so it makes sense that he was the person who was ultimately held responsible for the breach. In his recently published book, The New Era of Cybersecurity Breaches, Graeme describes the events of the Equifax breach and offers practical steps to secure a company from the same fate that was suffered by Equifax. The only reason I have not yet read the book is that I did not know it existed. Now, it is on my wish list, and, if the contents live up to the description, I anticipate an excellent read! One item that struck me as peculiar during Graeme's interview was that he stated, contrary to all the reports about the breach, that the breached server was patched against the Apache Struts vulnerability. To be clear, all of the news reports indicated that Equifax received notice of the vulnerability and the available patch, yet did nothing to prevent the breach. I asked the following question: Didn't you scan the servers after the patches were applied? (It is excellent that BrightTalk offers interactive webcasts like this.) Graeme responded that they scanned the servers for vulnerabilities, and the patch was reported as successfully applied to the server. How is that possible? A further discussion ensued, in which the importance of authenticated versus unauthenticated scans was mentioned. It even drifted into the idea that a company should use two different scanners! We are not all the size of an Equifax corporation.
Running two scanners is simply unmanageable for many medium-sized enterprises. I posted a follow-up question: How did the vendor of the vulnerability scanner respond once the breach occurred? Unfortunately, Graeme was not at liberty to discuss that. (If you are unfamiliar with the legal system, it probably means that the terms of his dismissal are confidential, and he cannot discuss various topics, such as any impending action against a vendor.) Whatever the vendor's response, it doesn't matter. What matters is that the largest breach in history (to date) may not have been the result of human error or negligence. It may have been just another case of a misconfiguration problem, this time with a vulnerability scanner. Given the recent breaches that have involved cloud misconfigurations, it is important to remember that these problems can still exist within the cozy confines of an organization. Graeme seems to be doing fine in his new existence, not as a scapegoat, but as a phoenix. I empathize with how he was treated, and I am confident that I speak for the entire security community in saying that we wish him well. Vulnerability Equifax
AlienVault.webp 2019-03-20 13:00:00 Restart BEFORE patching (direct link) Most folks who work with servers know the monthly drill: patches are released by manufacturers -> patches are tested -> patches are deployed to production. What could possibly go wrong? Anyone who has ever experienced the nail-biting joy of patching, and then awaiting a restart, knows exactly what could go wrong. Does anyone remember the really good old days when patches had to be manually staged prior to deployment? For those of you who entered the tech world after Windows NT was retired, consider yourself lucky! If you think about it, most organizations that patch on a monthly basis are considered to have an aggressive patching strategy. As evidenced by the legendary Equifax breach, some organizations take months to apply patches. This is true even when the organization has been forewarned that the patch is a cure for a vulnerability that is being actively exploited, also known as a “Zero-day” vulnerability. Patching is never a flawless operation. There is always one server that just seems to have problems. What is the first response when this happens? Blame the patch, of course! After all, what else could have changed on the server? Plenty, actually. Sometimes, removal of the patch doesn't fix the problem. I have seen the patch still held responsible for whatever has gone wrong with the server. I am not blindly defending the patch authors, as there have been too many epic blunders in patching for me to exhibit that kind of optimism and not laugh at myself. But what can we do to avoid the patch blame game? The simple solution is to restart the servers before deploying patches. This is definitely an unorthodox approach, but it can certainly reduce troubleshooting time and “patch blame” when something goes wrong. If you restart a server and it doesn't restart properly, that indicates that an underlying problem existed prior to any patching concern.
This may seem like a waste of time; however, the alternative is usually more time consuming. If you patch a server and it fails at restart, the first amount of time you will waste is trying to find the offending patch and then removing it. Then, upon the subsequent restart, the machine still fails. Now what? Even if we scale this practice to 1,000 servers, the time is still not wasted. If you are confident that your servers can withstand a simple restart, then restart them all. The odds are in your favor that most will restart without any problems. If less than 1% of them fail, then you can address the problems there before falsely chasing the failure as a patch problem. Once all the servers restart normally, then perform your normal patching, and feel free to blame the patch if a server fails after patching. The same approach could also be applied to workstations in a corporate environment. Since most organizations do not engage automatic workstation patching on the corporate network, a pre-patch restart can be forced on workstations. Patching has come a long way from the early days when the internet was young and no vulnerabilities existed (insert sardonic smile here). The rate of exploits and vulnerabilities has accelerated, requiring more immediate action toward protecting your networks. Since patches are not without flaws, one easy way to rule out patching as the source of a problem is to restart before patching. Vulnerability Patching Equifax
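The restart-before-patching routine described above can be sketched as a small pre-flight step. The `restart` and `is_healthy` callables below are stand-ins for whatever reboot command and health probe (ping, service check) an environment actually uses; the hostnames are invented:

```python
# Sketch: restart every server first; only servers that come back healthy
# proceed to patching. A server that fails a plain restart had a
# pre-existing problem, so a later failure cannot be blamed on the patch.
def preflight_restart(servers, restart, is_healthy):
    """Partition servers into (ready_to_patch, needs_investigation)."""
    ready, investigate = [], []
    for host in servers:
        restart(host)
        (ready if is_healthy(host) else investigate).append(host)
    return ready, investigate

# Hypothetical inventory and stand-in plumbing.
healthy_after_reboot = {"web01", "web02", "db01"}
reboot = lambda host: None                        # e.g. an SSH reboot call
check = lambda host: host in healthy_after_reboot

ready, investigate = preflight_restart(
    ["web01", "web02", "db01", "app07"], reboot, check)
print(ready)        # safe to patch
print(investigate)  # fix these before any patch is applied (or blamed)
```

Keeping the two lists separate makes the article's point concrete: if less than 1% of hosts land in the second list, those problems get addressed on their own, and any post-patch failure among the rest can fairly be attributed to the patch.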
SecurityAffairs.webp 2018-09-10 11:23:02 Mirai and Gafgyt target Apache Struts and SonicWall to hit enterprises (direct link) Security experts with Unit 42 at Palo Alto Networks have discovered new variants of the Mirai and Gafgyt IoT malware targeting enterprises. Both botnets appear very interesting for two main reasons: The new Mirai variant targets the same Apache Struts vulnerability exploited in the 2017 Equifax data breach. The vulnerability affects the Jakarta Multipart parser upload […] Malware Vulnerability Equifax
Blog.webp 2018-08-28 03:06:03 Podcast Episode 110: Why Patching Struts isn't Enough and Hacking Electricity Demand with IoT? (direct link) In this week's episode (#110): the second major flaw in Apache Struts 2 in as many years has put the information security community on alert. But is this vulnerability as serious as the last, which resulted in the hack of the firm Equifax? We talk with an expert from the firm Synopsys. And: we've heard a lot about the risk of cyber...Read the whole entry... Hack Vulnerability Patching Equifax
Last update at: 2024-05-13 04:09:56
See our sources.

To see everything: Our RSS (filtered) Twitter