What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
ErrataRob.webp 2021-02-28 20:05:19 We are living in 1984 (ETERNALBLUE) (direct link) In the book 1984, the protagonist questions his sanity, because his memory differs from what appears to be everybody else's memory.

The Party said that Oceania had never been in alliance with Eurasia. He, Winston Smith, knew that Oceania had been in alliance with Eurasia as short a time as four years ago. But where did that knowledge exist? Only in his own consciousness, which in any case must soon be annihilated. And if all others accepted the lie which the Party imposed -- if all records told the same tale -- then the lie passed into history and became truth. 'Who controls the past,' ran the Party slogan, 'controls the future: who controls the present controls the past.' And yet the past, though of its nature alterable, never had been altered. Whatever was true now was true from everlasting to everlasting. It was quite simple. All that was needed was an unending series of victories over your own memory. 'Reality control', they called it: in Newspeak, 'doublethink'.

I know that EternalBlue didn't cause the Baltimore ransomware attack. When the attack happened, the entire cybersecurity community agreed that EternalBlue wasn't responsible.

But this New York Times article said otherwise, blaming the Baltimore attack on EternalBlue. And there are hundreds of other news articles [eg] that agree, citing the New York Times. There are no news articles that dispute this.

In a recent book, the author of that article admits it's not true, that EternalBlue didn't cause the ransomware to spread. But they defend themselves as it being essentially true, that EternalBlue is responsible for a lot of bad things, even if, technically, not in this case. Such errors are justified, on the grounds they are generalizations and simplifications needed for the mass audience.

So we are left with the situation Orwell describes: all records tell the same tale -- when the lie passes into history, it becomes the truth.

Orwell continues:

He wondered, as he had many times wondered before, whether he himself was a lunatic. Perhaps a lunatic was simply a minority of one. At one time it had been a sign of madness to believe that the earth goes round the sun; today, to believe that the past is inalterable. He might be ALONE in holding that belief, and if alone, then a lunatic. But the thought of being a lunatic did not greatly trouble him: the horror was that he might also be wrong.

I'm definitely a lunatic, alone in my beliefs. I sure hope I'm not wrong.
Update: Other lunatics document their struggles with Minitrue: When I was investigating the TJX breach, there were NYT articles citing unnamed sources that were made up & then outlets would publish citing the NYT. The TJX lawyers would require us to disprove the articles. Each time we would. It was maddening fighting lies for 8 months.— Nicholas J. Percoco (@c7five) March 1, 2021
Tags: Ransomware, NotPetya, Wannacry, APT 32
ErrataRob.webp 2020-07-19 17:07:57 How CEOs think (direct link) Recently, Twitter was hacked. CEOs who read about this in the news ask how they can protect themselves from similar threats. The following tweet expresses our frustration with CEOs, that they don't listen to their own people, but instead want to buy a magic pill (a product) or listen to outside consultants (like Gartner). In this post, I describe how CEOs actually think.

CEO: "I read about that Twitter hack. Can that happen to us?"
Security: "Yes, but ..."
CEO: "What products can we buy to prevent this?"
Security: "But ..."
CEO: "Let's call Gartner."
*sobbing sounds*
- Wim Remes (@wimremes) July 16, 2020

The only thing more broken than how CEOs view cybersecurity is how cybersecurity experts view cybersecurity. We have this flawed view that cybersecurity is a moral imperative, that it's an aim by itself. We are convinced that people are wrong for not taking security seriously. This isn't true. Security isn't a moral issue but simple cost vs. benefits, risk vs. rewards. Taking risks is more often the correct answer rather than having more security.

Rather than experts dispensing unbiased advice, we've become advocates/activists, trying to convince people that they need to do more to secure things. This activism has destroyed our credibility in the boardroom. Nobody thinks we are honest.

Most of our advice is actually internal political battles. CEOs trust outside consultants mostly because outsiders don't have a stake in internal politics. Thus, the consultant can say the same thing as what you say, but be trusted.

CEOs view cybersecurity the same way they view everything else about building the business, from investment in office buildings, to capital equipment, to HR policies, to marketing programs, to telephone infrastructure, to law firms, to .... everything.

They divide their business into two parts:
The first is the part they do well, the thing they are experts at, the things that define who they are as a company, their competitive advantage.
The second is everything else, the things they don't understand.

For the second part, they just want to be average in their industry, or at best, slightly above average. They want their manufacturing costs to be about average. They want the salaries paid to employees to be about average. They want the same video conferencing system as everybody else. Everything outside of core competency is average.

I can't express this enough: if it's not their core competency, then they don't want to excel at it. Excelling at a thing comes with a price. They have to pay people more. They have to find the leaders with proven track records at excelling at it. They have to manage excellence.

This goes all the way to the top. If it's something the company is going to excel at, then the CEO at the top has to have enough expertise themselves to understand who the best leaders are that can accomplish this goal. The CEO can't hire an excellent CSO unless they have enough competency to judge the qualifications of the CSO, and enough competency to hold the CSO accountable for the job they are doing.

All this is a tradeoff. A focus of attention on one part of the business means less attention on other parts of the business. If your company excels at cybersecurity, it means not excelling at some other part of the business.

So unless you are a company like Google, whose cybersecurity is a competitive advantage, you don't want to excel in cybersecurity. You want to be
Tags: Ransomware, Guideline, NotPetya
ErrataRob.webp 2019-09-26 13:24:44 CrowdStrike-Ukraine Explained (direct link) Trump's conversation with the President of Ukraine mentions "CrowdStrike". I thought I'd explain this.

What was said?

This is the text from the conversation covered in this:
“I would like you to find out what happened with this whole situation with Ukraine, they say Crowdstrike... I guess you have one of your wealthy people... The server, they say Ukraine has it.”

Personally, I occasionally interrupt myself while speaking, so I'm not sure I'd criticize Trump here for his incoherence. But at the same time, we aren't quite sure what was meant. It's only meaningful in the greater context. Trump has talked before about CrowdStrike's investigation being wrong, a rich Ukrainian owning CrowdStrike, and a "server". He's talked a lot about these topics before.

Who is CrowdStrike?

They are a cybersecurity firm that, among other things, investigates hacker attacks. If you've been hacked by a nation state, then CrowdStrike is the sort of firm you'd hire to come and investigate what happened, and help prevent it from happening again.

Why is CrowdStrike mentioned?

Because they were the lead investigators in the DNC hack who came to the conclusion that Russia was responsible. The pro-Trump crowd believes this conclusion is false. If the conclusion is false, then it must mean CrowdStrike is part of the anti-Trump conspiracy.

Trump has always had a thing for CrowdStrike since their first investigation. It's intensified since the Mueller report, which solidified the ties between Trump-Russia and Russia-DNC-Hack.

Personally, I'm always suspicious of such investigations. Politics, either grand (on this scale) or small (internal company politics), seems to drive investigations, creating firm conclusions based on flimsy evidence. But CrowdStrike has made public some pretty solid information, such as BitLy accounts used both in the DNC hacks and other (known) targets of state-sponsored Russian hackers. Likewise, the Mueller report had good data on Bitcoin accounts. I'm sure if I looked at all the evidence, I'd have more doubts, but at the same time, of the politicized hacking incidents out there, this seems to have the best (public) support for the conclusion.

What's the conspiracy?

The basis of the conspiracy is that the DNC hack was actually an inside job. Some former intelligence officials led by Bill Binney claim they looked at some data and found that the files were copied "locally" instead of across the Internet, and therefore, it was an insider who did it and not a remote hacker.

I debunk the claim here, but the short explanation is: of course the files were copied "locally", the hacker was inside the network. In my long experience investigating hacker intrusions, and performing them myself, I know this is how it's normally done. I mention my own experience because I'm technical and know these things, in contrast with Bill Binney and those other intelligence officials who have no experience with such things. He sounds impressive because he's formerly of the NSA, but he was a mid-level manager in charge of budgets. Binney has never performed a data breach investigation and has never performed a pentest.

There are other parts to the conspiracy. In the middle of all this, a DNC staffer was murdered on the street, possibly due to a mugging. Naturally this gets included as part of the conspiracy: this guy ("Seth Rich") must've been the "insider" in this attack, and mus
Tags: Data Breach, Hack, Guideline, NotPetya
ErrataRob.webp 2019-05-29 20:16:09 Your threat model is wrong (direct link) Several subjects have come up in the past week that all come down to the same thing: your threat model is wrong. Instead of addressing the threat that exists, you've morphed the threat into something else that you'd rather deal with, or which is easier to understand.

Phishing

An example is this question that misunderstands the threat of "phishing":

Should failing multiple phishing tests be grounds for firing? I ran into a guy at a recent conference, said his employer fired people for repeatedly falling for (simulated) phishing attacks. I talked to experts, who weren't wild about this disincentive. https://t.co/eRYPZ9qkzB pic.twitter.com/Q1aqCmkrWL
- briankrebs (@briankrebs) May 29, 2019

The (wrong) threat model here is that phishing is an email that smart users with training can identify and avoid. This isn't true.

Good phishing messages are indistinguishable from legitimate messages. Said another way, a lot of legitimate messages are in fact phishing messages, such as when HR sends out a message saying "log into this website with your organization username/password".

Recently, my university sent me an email for mandatory Title IX training, not digitally signed, with an external link to the training, that requested my university login creds for access, that was sent from an external address but from the Title IX coordinator.
- Tyler Pieron (@tyler_pieron) May 29, 2019

Yes, it's amazing how easily stupid employees are tricked by the most obvious of phishing messages, and you want to point and laugh at them. But frankly, you want the idiot employees doing this. The more obvious phishing attempts are the least harmful and a good test of the rest of your security -- which should be based on the assumption that users will frequently fall for phishing.

In other words, if you paid attention to the threat model, you'd be mitigating the threat in other ways and not even bother training employees. You'd be firing HR idiots for phishing employees, not punishing employees for getting tricked. Your systems would be resilient against successful phishes, such as using two-factor authentication.

IoT security

After the Mirai worm, government types pushed for laws to secure IoT devices, as billions of insecure devices like TVs, cars, security cameras, and toasters are added to the Internet. Everyone is afraid of the next Mirai-type worm. For example, they are pushing for devices to be auto-updated.

But auto-updates are a bigger threat than worms.

Since Mirai, roughly 10-billion new IoT devices have been added to the Internet, yet there hasn't been a Mirai-sized worm. Why is that? After 10-billion new IoT devices, it's still Windows and not IoT that is the main problem.

The answer is that number, 10-billion. Internet worms work by guessing IPv4 addresses, of which there are only 4-billion. You can't have 10-billion new devices on the public IPv4 addresses because there simply aren't enough addresses. Instead, those 10-billion devices are almost entirely being put on private ne
Tags: Ransomware, Tool, Vulnerability, Threat, Guideline, FedEx, NotPetya
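The arithmetic behind that last point is easy to check. Here is a small illustrative sketch (not from the original post): IPv4's 32-bit address space tops out at roughly 4.3 billion addresses, well short of 10 billion devices, which is why most of those devices end up behind NAT or on IPv6 where address-guessing worms can't reach them.

```python
# IPv4 addresses are 32 bits, so the whole address space is 2**32.
ipv4_space = 2 ** 32                # 4,294,967,296 possible addresses
new_iot_devices = 10_000_000_000    # ~10 billion devices added since Mirai, per the post

print(f"IPv4 address space: {ipv4_space:,}")
print(f"New IoT devices:    {new_iot_devices:,}")
print(f"Shortfall:          {new_iot_devices - ipv4_space:,}")
# The ~5.7 billion shortfall means these devices cannot all sit on public
# IPv4 addresses, so Mirai-style IPv4-scanning worms cannot reach most of them.
```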
ErrataRob.webp 2019-05-28 06:20:06 Almost One Million Vulnerable to BlueKeep Vuln (CVE-2019-0708) (direct link) Microsoft announced a vulnerability in its "Remote Desktop" product that can lead to robust, wormable exploits. I scanned the Internet to assess the danger. I find nearly 1-million devices on the public Internet that are vulnerable to the bug. That means when the worm hits, it'll likely compromise those million devices. This will likely lead to an event as damaging as WannaCry and notPetya from 2017 -- potentially worse, as hackers have since honed their skills exploiting these things for ransomware and other nastiness.

To scan the Internet, I started with masscan, my Internet-scale port scanner, looking for port 3389, the one used by Remote Desktop. This takes a couple hours, and lists all the devices running Remote Desktop -- in theory.

This returned 7,629,102 results (over 7-million). However, there is a lot of junk out there that'll respond on this port. Only about half are actually Remote Desktop.

Masscan only finds the open ports, but is not complex enough to check for the vulnerability. Remote Desktop is a complicated protocol. A project was posted that could connect to an address and test it, to see if it was patched or vulnerable. I took that project and optimized it a bit, rdpscan, then used it to scan the results from masscan. It's a thousand times slower, but it's only scanning the results from masscan instead of the entire Internet. (A rough sketch of this masscan-then-rdpscan pipeline follows this entry.)

The table of results is as follows:

1447579  UNKNOWN - receive timeout
1414793  SAFE - Target appears patched
1294719  UNKNOWN - connection reset by peer
1235448  SAFE - CredSSP/NLA required
 923671  VULNERABLE -- got appid
 651545  UNKNOWN - FIN received
 438480  UNKNOWN - connect timeout
 105721  UNKNOWN - connect failed 9
  82836  SAFE - not RDP but HTTP
  24833  UNKNOWN - connection reset on connect
   3098  UNKNOWN - network error
   2576  UNKNOWN - connection terminated

The various UNKNOWN things fail for various reasons. A lot of them are because the target isn't actually Remote Desktop and responds weirdly when we try to talk Remote Desktop. A lot of others are Windows machines, sometimes vulnerable and sometimes not, but for some reason return errors sometimes.

The important results are those marked VULNERABLE. There are 923,671 vulnerable machines in this result. That means we've confirmed the vulnerability really does exist, though it's possible a small number of these are "honeypots" deliberately pretending to be vulnerable in order to monitor hacker activity on the Internet.

The next results are those marked SAFE due to probably being "patched". Actually, it doesn't necessarily mean they are patched Windows boxes. They could instead be non-Windows systems that appear the same as patched Windows boxes. But either way, they are safe from this vulnerability. There are 1,414,793 of them.

The next results to look at are those marked SAFE due to CredSSP/NLA failures, of which there are 1,235,448. This doesn't mean they are patched, but only that we can't exploit them. They require "network level authentication" first before we can talk Remote Desktop to them. That means we can't test whether they are patched or vulnerable -- but neither can the hackers. They may still be exploitable via an insider threat who knows a valid username/password, but they aren't exploitable by anonymous hackers or worms.

The next category is marked as SAFE because they aren't Remote Desktop at all, but HTTP servers.

In other words, in response to o
Tags: Ransomware, Vulnerability, Threat, Patching, Guideline, NotPetya, Wannacry
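As a rough illustration of the two-stage workflow described above, the following Python sketch drives a port scan, extracts the discovered addresses, and tallies the rdpscan result categories. The exact command-line flags and the rdpscan output format shown here are assumptions for illustration, not taken from the post; check each tool's own documentation before relying on them.

```python
#!/usr/bin/env python3
"""Toy sketch of the masscan -> rdpscan -> tally workflow described above."""
import subprocess
from collections import Counter

def run_port_scan(ranges: str, out_file: str = "port3389.list") -> None:
    # Approximate masscan invocation: find hosts with TCP/3389 open.
    # (Real Internet-wide scans need permission, an exclude list, and care.)
    subprocess.run(
        ["masscan", "-p3389", ranges, "--rate", "100000", "-oL", out_file],
        check=True,
    )

def extract_addresses(masscan_list_file: str, out_file: str = "targets.txt") -> None:
    # masscan -oL lines look roughly like: "open tcp 3389 203.0.113.7 1558900000"
    with open(masscan_list_file) as src, open(out_file, "w") as dst:
        for line in src:
            parts = line.split()
            if len(parts) >= 4 and parts[0] == "open":
                dst.write(parts[3] + "\n")

def tally_rdpscan_output(rdpscan_output_file: str) -> Counter:
    # Bucket each target by whichever status keyword appears in its line,
    # staying tolerant of the exact per-line layout.
    counts = Counter()
    with open(rdpscan_output_file) as results:
        for line in results:
            for status in ("VULNERABLE", "SAFE", "UNKNOWN"):
                if status in line:
                    counts[status] += 1
                    break
    return counts

if __name__ == "__main__":
    # Example against a private test range rather than the whole Internet.
    run_port_scan("10.0.0.0/8")
    extract_addresses("port3389.list")
    # Hypothetical rdpscan invocation; adjust to the tool's actual options.
    with open("rdpscan.out", "w") as out:
        subprocess.run(["rdpscan", "--file", "targets.txt"], stdout=out, check=True)
    print(tally_rdpscan_output("rdpscan.out"))
```

The tally step is deliberately keyword-based because the detail strings ("receive timeout", "got appid", and so on) vary by failure reason, as the table above shows.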
ErrataRob.webp 2019-05-27 19:59:38 A lesson in journalism vs. cybersecurity (direct link) A recent NYTimes article blaming the NSA for a ransomware attack on Baltimore is typical bad journalism. It's an op-ed masquerading as a news article. It cites many to support the conclusion that the NSA is to be blamed, but only a single quote, from the NSA director, from the opposing side. Yet many experts oppose this conclusion, such as @dave_maynor, @beauwoods, @daveaitel, @riskybusiness, @shpantzer, @todb, @hrbrmst, ... It's not as if these people are hard to find, it's that the story's authors didn't look.

The main reason experts disagree is that the NSA's Eternalblue isn't actually responsible for most ransomware infections. It's almost never used to start the initial infection -- that's almost always phishing or website vulns. Once inside, it's almost never used to spread laterally -- that's almost always done with Windows networking and stolen credentials. Yes, ransomware increasingly includes Eternalblue as part of their arsenal of attacks, but this doesn't mean Eternalblue is responsible for ransomware.

The NYTimes story takes extraordinary effort to jump around this fact, deliberately misleading the reader to conflate one with the other. A good example is this paragraph:

That link is a warning from last July about the "Emotet" ransomware and makes no mention of EternalBlue. Instead, the story is citing anonymous researchers claiming that EternalBlue has been added to Emotet since after that DHS warning.

Who are these anonymous researchers? The NYTimes article doesn't say. This is bad journalism. The principles of journalism are that you are supposed to attribute where you got such information, so that the reader can verify for themselves whether the information is true or false, or at least, credible.

And in this case, it's probably false. The likely source for that claim is this article from Malwarebytes about Emotet. They have since retracted this claim, as the latest version of their article points out.

In any event, the NYTimes article claims that Emotet is now "relying" on the NSA's EternalBlue to spread. That's not the same thing as "using", not even close. Yes, lots of ransomware has been updated to also use Eternalblue to spread. However, what ransomware is relying upon is still the Wind
Tags: Ransomware, Malware, Patching, Guideline, NotPetya, Wannacry
ErrataRob.webp 2018-09-10 17:33:17 California's bad IoT law (direct link) California has passed an IoT security bill, awaiting the governor's signature/veto. It's a typically bad bill based on a superficial understanding of cybersecurity/hacking that will do little to improve security, while doing a lot to impose costs and harm innovation.

It's based on the misconception of adding security features. It's like dieting, where people insist you should eat more kale, which does little to address the problem that you are pigging out on potato chips. The key to dieting is not eating more but eating less. The same is true of cybersecurity, where the point is not to add “security features” but to remove “insecure features”. For IoT devices, that means removing listening ports and cross-site/injection issues in web management. Adding features is typical “magic pill” or “silver bullet” thinking that we spend much of our time in infosec fighting against.

We don't want arbitrary features like firewall and anti-virus added to these products. It'll just increase the attack surface, making things worse. The one possible exception to this is “patchability”: some IoT devices can't be patched, and that is a problem. But even here, it's complicated. Even if IoT devices are patchable in theory, there is no guarantee vendors will supply such patches, or worse, that users will apply them. Users overwhelmingly forget about devices once they are installed. These devices aren't like phones/laptops which notify users about patching.

You might think a good solution to this is automated patching, but only if you ignore history. Many rate “NotPetya” as the worst, most costly, cyberattack ever. That was launched by subverting an automated patch. Most IoT devices exist behind firewalls, and are thus very difficult to hack. Automated patching gets beyond firewalls; it makes it much more likely mass infections will result from hackers targeting the vendor. The Mirai worm infected fewer than 200,000 devices. A hack of a tiny IoT vendor can gain control of more devices than that in one fell swoop.

The bill does target one insecure feature that should be removed: hardcoded passwords. But they get the language wrong. A device doesn't have a single password, but many things that may or may not be called passwords. A typical IoT device has one system for creating accounts on the web management interface, a wholly separate authentication system for services like Telnet (based on /etc/passwd), and yet a wholly separate system for things like debugging interfaces. Just because a device does the prescribed thing of using a unique or user-generated password in the user interface doesn't mean it doesn't also have a bug in Telnet.

That was the problem with devices infected by Mirai. The description that these were hardcoded passwords is only a superficial understanding of the problem. The real problem was that there were different authentication systems in the web interface and in other services like Telnet. Most of the devices vulnerable to Mirai did the right thing on the web interfaces (meeting the language of this law), requiring the user to create new passwords before operating. They just did the wrong thing elsewhere.

People aren't really paying attention to what happened with Mirai. They look at the 20 billion new IoT devices that are going to be connected to the Internet by 2020 and believe Mirai is just the tip of the iceberg. But it isn't. The IPv4 Internet has only 4 billion addresses, which are pretty much already used up. This means those 20 billion won't be exposed to the public Internet like Mirai devices, but hidden behind firewalls that translate addresses. Thus, rather than Mirai presaging the future, it represents the last gasp of the past that is unlikely to come again.

This law is backwards looking rather than forward looking. Forward looking, by far the most important t
Tags: Hack, Threat, Patching, Guideline, NotPetya, Tesla
ErrataRob.webp 2018-07-12 19:54:20 Your IoT security concerns are stupid (direct link) Lots of government people are focused on IoT security, such as this recent effort. They are usually wrong. It's a typical cybersecurity policy effort which knows the answer without paying attention to the question.

Patching has little to do with IoT security. For one thing, consumers will not patch vulns, because unlike your phone/laptop computer which is all "in your face", IoT devices, once installed, are quickly forgotten. For another thing, the average lifespan of a device on your network is at least twice the duration of support from the vendor making patches available.

Naive solutions to the manual patching problem, like forcing autoupdates from vendors, increase rather than decrease the danger. Manual patches that don't get applied cause a small, but manageable, constant hacking problem. Automatic patching causes rarer, but more catastrophic, events when hackers hack the vendor and push out a bad patch. People are afraid of Mirai, a comparatively minor event that led to a quick cleansing of vulnerable devices from the Internet. They should be more afraid of notPetya, the most catastrophic event yet on the Internet, which was launched by subverting an automated patch of accounting software.

Vulns aren't even the problem. Mirai didn't happen because of accidental bugs, but because of conscious design decisions. Security cameras have unique requirements of being exposed to the Internet and needing a remote factory reset, leading to the worm. While notPetya did exploit a Microsoft vuln, its primary vector of spreading (after the subverted update) was via misconfigured Windows networking, not that vuln. In other words, while Mirai and notPetya are the most important events people cite supporting their vuln/patching policy, neither was really about vuln/patching.

Such technical analysis of events like Mirai and notPetya is ignored. Policymakers are only cherrypicking the superficial conclusions supporting their goals. They assiduously ignore in-depth analysis of such things because it inevitably fails to support their positions, or directly contradicts them.

IoT security is going to be solved regardless of what government does. All this policy talk is premised on things being static unless government takes action. This is wrong. Government is still waffling on its response to Mirai, but the market quickly adapted. Those off-brand, poorly engineered security cameras you buy for $19 from Amazon.com shipped directly from Shenzhen now look very different, having less Internet exposure, than the ones used in Mirai. Major Internet sites like Twitter now use multiple DNS providers so that a DDoS attack on one won't take down their services.

In addition, technology is fundamentally changing. Mirai attacked IPv4 addresses outside the firewall. The 100-billion IoT devices going on the network in the next decade will not work this way, cannot work this way, because there are only 4-billion IPv4 addresses. Instead, they'll be behind NATs or accessed via IPv6, both of which prevent Mirai-style worms from functioning. Your fridge and toaster won't connect via your home WiFi anyway, but via a 5G chip unrelated to your home.

Lastly, focusing on the ven
Tags: Hack, Patching, Guideline, NotPetya
ErrataRob.webp 2018-06-27 15:49:15 Lessons from nPetya one year later (direct link) This is the one year anniversary of NotPetya. It was probably the most expensive single hacker attack in history (so far), with FedEx estimating it cost them $300 million. Shipping giant Maersk and drug giant Merck suffered losses on a similar scale. Many are discussing lessons we should learn from this, but they are the wrong lessons.

An example is this quote in a recent article:

"One year on from NotPetya, it seems lessons still haven't been learned. A lack of regular patching of outdated systems because of the issues of downtime and disruption to organisations was the path through which both NotPetya and WannaCry spread, and this fundamental problem remains."

This is an attractive claim. It describes the problem in terms of people being "weak" and that the solution is to be "strong". If only organizations were strong enough, willing to deal with downtime and disruption, then problems like this wouldn't happen.

But this is wrong, at least in the case of NotPetya.

NotPetya's spread was initiated through the Ukrainian company MeDoc, which provided tax accounting software. It had an auto-update process for keeping its software up-to-date. This was subverted in order to deliver the initial NotPetya infection. Patching had nothing to do with this. Other common security controls like firewalls were also bypassed.

Auto-updates and cloud-management of software and IoT devices are becoming the norm. This creates a danger for such "supply chain" attacks, where the supplier of the product gets compromised, spreading an infection to all their customers. The lesson organizations need to learn about this is how such infections can be contained. One way is to firewall such products away from the core network. Another solution is port-isolation/microsegmentation, that limits the spread after an initial infection.

Once NotPetya got into an organization, it spread laterally. The chief way it did this was through Mimikatz/PsExec, reusing Windows credentials. It stole whatever login information it could get from the infected machine and used it to try to log on to other Windows machines. If it got lucky getting domain administrator credentials, it then spread to the entire Windows domain. This was the primary method of spreading, not the unpatched ETERNALBLUE vulnerability. This is why it was so devastating to companies like Maersk: it wasn't a matter of a few unpatched systems getting infected, it was a matter of losing entire domains, including the backup systems.

Such spreading through Windows credentials continues to plague organizations. A good example is the recent ransomware infection of the City of Atlanta that spread much the same way. The limits of the worm were the limits of domain trust relationships. For example, it didn't infect the city airport because that Windows domain is separate from the city's domains.

This is the most pressing lesson organizations need to learn, the one they are ignoring. They need to do more to prevent desktops from infecting each other, such as through port-isolation/microsegmentation. They need to control the spread of administrative credentials within the organization. A lot of organizations put the same local admin account on every workstation, which makes the spread of NotPetya-style worms trivial (a toy sketch of unique per-machine local credentials follows this entry). They need to reevaluate trust relationships between domains, so that the admin of one can't infect the others.

These solutions are difficult, which is why news articles don't mention them. You don't have to know anything about security to proclaim "the problem is lack of patches". It's moral authority, chastising the weak, rather than a prescription of what to do. Solving supply chain hacks and Windows credential sharing, though, is hard. I don't know any universal solution to this -- I'd have to thoroughly analyze your network and business in order to
Tags: Ransomware, Malware, Patching, FedEx, NotPetya, Wannacry
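To make the shared-local-admin point concrete, here is a minimal, hedged sketch (not from the original post, and no substitute for a managed solution such as Microsoft LAPS): give every workstation its own random local-admin password, so a credential stolen from one machine doesn't log in anywhere else. The hostnames and file names are hypothetical.

```python
#!/usr/bin/env python3
"""Toy illustration: a distinct local-admin password per workstation."""
import csv
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length: int = 20) -> str:
    # Cryptographically strong random password, freshly generated per call.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def generate_per_host_credentials(hostnames, out_file="local_admin_creds.csv"):
    # One distinct password per machine: a credential lifted from one
    # workstation is useless for lateral movement to the others.
    with open(out_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["hostname", "local_admin_password"])
        for host in hostnames:
            writer.writerow([host, new_password()])

if __name__ == "__main__":
    # Hypothetical hostnames, for illustration only.
    generate_per_host_credentials(["WKS-001", "WKS-002", "WKS-003"])
```

In practice the generated secrets would live in a vault or directory attribute rather than a CSV; the point is only that no two machines share the credential that NotPetya-style worms replay via Mimikatz/PsExec.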