What's new around the internet


Src Date (GMT) Title Description Tags Stories Notes
ErrataRob.webp 2024-02-14 17:30:42 C can be memory safe, part 2 (direct link) This post from last year was submitted to a forum, so I thought I'd write up some rebuttals to the comments there.

The first comment is by David Chisnall, creator of CHERI C/C++, who proposes that we can solve the problem with CPU instruction-set extensions. It's a good idea, but after 14 years, CPUs haven't had their instruction sets extended. Even brand-new RISC-V processors have been created without such extensions.

Chisnall: "If your safety requires you to insert explicit checks, it's not safe." That's true from one point of view, false from another. My proposal has compilers spitting out warnings whenever bounds information doesn't exist. C is full of problems in theory that don't exist in practice, because the compiler spits out warnings telling programmers to fix them. Warnings can also flag cases where programmers have probably made mistakes. We can't get perfect guarantees, since programmers can still make mistakes, but we can certainly achieve "good enough".

Chisnall: ...thread safety... I'm not sure I understand the comment. I understand that CHERI can guarantee the atomicity of bounds checks, which would otherwise require (interruptible) instructions. The number of cases where this is a problem is small, and the C proposal would be no worse than other languages like Rust.

Chisnall: temporal safety... Many of Rust's "ownership" techniques can be applied to C with these annotations, namely, marking which variables own an allocation and which merely borrow it. I've looked at many famous use-after-free and double-free bugs, and most can be trivially fixed by annotation.

Chisnall: if you write a blog and have never tried to make a large (a million lines or more) C codebase memory safe, you are probably underestimating the difficulty by at least an order of magnitude. I'm both a programmer who has written a million lines of code in my life and a hacker with decades of experience hunting such bugs. The goal isn't to chase the ideal of a 100% safe language, but to get rid of 99% of the safety errors. Being 1% less safe makes the goal an order of magnitude easier to reach.

snej: This post seems to embody the common engineer trait of seeing any problem you haven't personally worked on as trivial. Sure, bro, you add a few patches to Clang and GCC and with these new attributes our C code will be safe. It'll only take a few weeks and nobody will need Rust anymore.

But I've spent decades working on this. The comment embodies the common trait of not realizing how much thought and expertise sits behind a post. I don't claim a few patches to Clang and GCC will make C as safe as Rust; the solution is much less safe than Rust. In fact, my proposal makes the code more interoperable with, and translatable to, Rust. Right now, translating C to Rust just creates a pile of 'unsafe' code that has to be cleaned up. With such annotations, a refactoring pass using existing test frameworks results in code that can be automatically and safely ported to Rust. As for existing Clang/GCC attributes, there are only a couple that correspond to the macros I propose ★★
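To make the ownership idea concrete, here is a minimal sketch of what such annotations might look like. The OWNS and BORROWS macros are hypothetical names invented for illustration (they are not the post's actual proposal or any real compiler feature); with today's compilers they expand to nothing, and a patched compiler could use them to warn about use-after-free and double-free:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical ownership annotations, in the spirit of the proposal.
     * An unpatched compiler ignores them; a patched one could check them. */
    #define OWNS     /* this pointer owns the allocation and must free it */
    #define BORROWS  /* this pointer merely borrows; it must never free it */

    static void log_name(const char *name BORROWS) {
        (void)name;  /* borrower: reads the string, never calls free() */
    }

    int main(void) {
        char *name OWNS = strdup("example");  /* owner: responsible for free() */
        if (name == NULL)
            return 1;
        log_name(name);  /* lending the pointer to a borrower is fine */
        free(name);      /* owner frees exactly once */
        /* a second free(name) here is a double-free a checker could flag */
        return 0;
    }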
ErrataRob.webp 2023-02-01 13:30:40 C can be memory-safe (direct link) The idea of memory-safe languages is in the news lately. C/C++ is famous for being the world's system language (that runs most things) but also infamous for being unsafe. Many want to solve this by hard-forking the world's system code, either by changing C/C++ into something that's memory-safe, or rewriting everything in Rust.

Forking is a foolish idea. The core principle of computer-science is that we need to live with legacy, not abandon it. And there's no need. Modern C compilers already have the ability to be memory-safe; we just need to make minor -- and compatible -- changes to turn it on. Instead of a hard-fork that abandons legacy systems, this would be a soft-fork that enables memory-safety for new systems.

Consider the most recent memory-safety flaw in OpenSSL. They fixed it by first adding a memory-bounds, then putting every access to the memory behind a macro PUSHC() that checks the memory-bounds. A better (but currently hypothetical) fix would be something like the following:

size_t maxsize CHK_SIZE(outptr) = out ? *outlen : 0;

This would link the memory-bounds maxsize with the memory outptr. The compiler could then be relied upon to do all the bounds checking to prevent buffer overflows; the rest of the code wouldn't need to be changed. An even better (and hypothetical) fix would be to change the function declaration like the following:

int ossl_a2ulabel(const char *in, char *out, size_t *outlen CHK_INOUT_SIZE(out));

That's the intent anyway: that *outlen is the memory-bounds of out on input, and receives a shorter bounds on output.

This specific feature isn't in compilers. But gcc and clang already have other similar features; they've only been halfway implemented. This feature would be relatively easy to add. I'm currently studying the code to see how I can add it myself. I could mostly copy what's done for the alloc_size attribute. But there's a considerable learning curve; I'd rather just persuade an existing developer of gcc or clang to add the new attributes for me.

Once you give the programmer the ability to fix memory-safety problems like the solution above, you can then enable warnings for unsafe code. The compiler knew the above code was unsafe, but since there was no practical way to fix it, it was pointless nagging the programmer about it. With these new features come warnings about failing to use them. In other words, it becomes compiler-guided refactoring. Forking code is hard; refactoring is easy.

As the above function shows, the OpenSSL code is already somewhat memory safe, just based upon the flawed principle of relying upon diligent programmers. We need the compi ★★★
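For reference, here is a small sketch of the existing GCC/Clang attributes the post alludes to: alloc_size, and the GCC 10+ access attribute, which comes close in spirit to the hypothetical CHK_INOUT_SIZE() above. These attributes are real; the function names (my_alloc, fill) are made up for illustration, and exactly which miscalls a given compiler version flags varies:

    #include <stdlib.h>
    #include <string.h>

    /* Real annotation: the returned buffer is n bytes long. */
    void *my_alloc(size_t n) __attribute__((alloc_size(1)));
    void *my_alloc(size_t n) { return malloc(n); }

    /* Real GCC 10+ annotation: argument 2 is a write-only buffer whose
     * size is given by argument 3.  -Wstringop-overflow can use this to
     * warn when a call passes a buffer smaller than the declared size. */
    __attribute__((access(write_only, 2, 3)))
    void fill(int value, char *out, size_t outlen) {
        memset(out, value, outlen);
    }

    int main(void) {
        char buf[8];
        fill('x', buf, sizeof(buf));  /* fine: sizes match */
        /* fill('x', buf, 100); */    /* a recent GCC can flag this call */
        return 0;
    }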
ErrataRob.webp 2023-01-25 16:09:34 I'm still bitter about Slammer (direct link) Today is the 20th anniversary of the Slammer worm. I'm still angry over it, so I thought I'd write up my anger. This post will be of interest to nobody; it's just me venting my bitterness and get-off-my-lawn!!

Back in the day, I wrote "BlackICE", an intrusion detection and prevention system that ran as both a desktop version and a network appliance. Most cybersec people from that time remember it as the desktop version, but the bulk of our sales came from the network appliance. The network appliance competed against other IDSs at the time, such as Snort, an open-source product. For much of the cybersec industry, IDS was Snort -- they had no knowledge of how intrusion-detection would work other than this product, because it was open-source. My intrusion-detection technology was radically different. The thing that makes me angry is that I couldn't explain the differences to the community because they weren't technical enough. When Slammer hit, Snort and Snort-like products failed. Mine succeeded extremely well. Yet I didn't get the credit for this.

The first difference is that I used a custom poll-mode driver instead of interrupts. This is now the norm in the industry, such as with Linux NAPI drivers. The problem with interrupts is that a computer could handle fewer than 50,000 interrupts-per-second. If network traffic arrived faster than this, the computer would hang, spending all its time in the interrupt handler doing no other useful work. By turning off interrupts and instead polling for packets, this problem is prevented. The cost is that if the computer isn't heavily loaded by network traffic, polling wastes CPU and electrical power. Linux NAPI drivers switch between the two: interrupts when traffic is light, polling when traffic is heavy.

The consequence is that a typical machine of the time (dual Pentium IIIs) could handle 2 million packets-per-second running my software, far better than the 50,000 packets-per-second of the competitors. When Slammer hit, it filled a 1-gbps Ethernet with 300,000 packets-per-second. As a consequence, pretty much all other IDS products fell over. Those that survived were attached to slower links -- 100-mbps was still common at the time. An industry luminary even gave a presentation at BlackHat saying that my claimed performance (2 million packets-per-second) was impossible, because everyone knew that computers couldn't handle traffic that fast. I couldn't combat that, even by explaining with very small words "but we disable interrupts".

Now this is the norm. All network drivers are written with polling in mind. Specialized drivers like PF_RING and DPDK do even better. Network appliances are now written using these things. Now you'd expect something like Snort to keep up and not get overloaded with interrupts. What makes me bitter is that back then, this was inexplicable magic. I wrote an article in PoC||GTFO 0x15 that shows how my portscanner masscan uses this driver, if you want more info.

The second difference with my product was how signatures were written. Everyone else used signatures that triggered on pattern-matching. Instead, my technology included protocol-analysis: code that parsed more than 100 protocols. The difference is that when there is an exploit of a buffer-overflow vulnerability, pattern-matching searches for patterns unique to the exploit. In my case, we'd measure the length of the buffer, triggering when it exceeded a certain length, finding any attempt to attack the vulnerability. The reason we could do this was through the use of state-machine parsers. Such analysis was considered heavy-weight and slow, which is why others avoided it. In fact, state-machines are faster than pattern-matching, many times faster. Better and faster.

Such parsers are no Vulnerability Guideline ★★
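For readers who have never seen a poll-mode receive loop, here is a minimal sketch using the real DPDK API mentioned above (rte_eth_rx_burst never blocks and never takes an interrupt). This is an illustration of the general technique, not BlackICE's actual driver; device setup (rte_eal_init, rte_eth_dev_configure, rte_eth_dev_start) is omitted and port 0 is assumed already running:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Poll-mode receive: spin asking the NIC "got packets?" in a tight
     * loop instead of waiting for an interrupt per packet. */
    static void rx_loop(uint16_t port_id) {
        struct rte_mbuf *pkts[32];
        for (;;) {
            /* returns immediately with 0..32 packets, never blocks */
            uint16_t n = rte_eth_rx_burst(port_id, 0, pkts, 32);
            for (uint16_t i = 0; i < n; i++) {
                /* inspect/parse pkts[i] here */
                rte_pktmbuf_free(pkts[i]);
            }
        }
    }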
ErrataRob.webp 2022-10-23 16:05:58 The RISC Deprogrammer (direct link) I should write up a larger technical document on this, but in the meanwhile here is this short(-ish) blogpost. Everything you know about RISC is wrong. It's some weird nerd cult. Techies frequently mention RISC in conversation, with other techies nodding their heads in agreement, but it's all wrong. Somehow everyone has been mind-controlled to believe in wrong concepts. An example is this recent blogpost which starts out saying that "RISC is a set of design principles". No, it wasn't. Let's start from this sort of viewpoint to discuss this odd cult.

What is RISC?

Because of the march of Moore's Law, every year, more and more parts of a computer could be included onto a single chip. When chip densities reached the point where we could almost fit an entire computer on a chip, designers made tradeoffs, discarding unimportant stuff to make the fit happen. They made tradeoffs, deciding what needed to be included, what needed to change, and what needed to be discarded. RISC is a set of creative tradeoffs, meaningful at the time (early 1980s), but which were meaningless by the late 1990s.

The interesting part of CPU evolution spans the decades between 1964, with IBM's System/360 mainframe, and 2007, with Apple's iPhone. The issue was a 32-bit core with memory-protection allowing isolation among different programs with virtual memory. These were real computers, from the modern perspective: real computers have at least 32 bits and an MMU (memory management unit). The year 1975 saw the release of the Intel 8080 and MOS 6502, but these were 8-bit systems without memory protection. This was at the point of Moore's Law where we could get a useful CPU onto a single chip. In 1977 we saw DEC release its VAX minicomputer, having a 32-bit CPU w/ MMU. Real computing had moved from insanely expensive mainframes filling entire rooms to less expensive devices that merely filled a rack. But the VAX was way too big to fit onto a chip at this time.

The real interesting evolution of real computing happened in 1980 with Motorola's 68000 (aka. 68k) processor, essentially the first microprocessor that supported real computing. But this comes with caveats. Making a microprocessor required creative work to decide what wasn't included. In the case of the 68k, it had only a 16-bit ALU. This meant adding two 32-bit registers required passing them twice through the ALU, adding each half separately. Because of this, many call the 68k a 16-bit rather than a 32-bit microprocessor. More importantly, only the lower 24 bits of the registers were valid for memory addresses. Since it's memory addressing that makes a real computer "real", this is the more important measure. But 24 bits allows for 16 megabytes of memory, which is all that anybody could afford to include in a computer anyway. It was more than enough to run a real operating system like Unix. In contrast, 16-bit processors could only address 64 kilobytes of memory, and weren't really practical for real computing.

The 68k didn't come with an MMU, but it allowed an extra MMU chip. Thus, the early 1980s saw an explosion of workstations and servers consisting of a 68k and an MMU. The most famous was Sun Microsystems, launched in 1982 with their own custom-designed MMU chip. Sun and its competitors transformed the industry running Unix. Many point to IBM's PC from 1981 as the transformative moment in computer history, but these were non-real 16-bit systems that struggled with more than 64k of memory. IBM PC computers wouldn't become real until 1993 with Microsoft's Windows NT, supporting full 32 bits, memory-protection, and pre-emptive multitasking. But except for Windows itself, the rest of computing is dominated by the Unix heritage. The phone in your hand, whether Android or iPhone, is a Unix compu Guideline Heritage
ErrataRob.webp 2022-07-03 19:02:22 DS620slim tiny home server (direct link) In this blogpost, I describe the Synology DS620slim. Mostly these are notes for myself, so when I need to replace something in the future, I can remember how I built the system. It's a "NAS" (network attached storage) server that has six hot-swappable bays for 2.5 inch laptop drives. That's right, laptop 2.5 inch drives. It makes this a tiny server that you can hold in your hand.

The purpose of a NAS is reliable storage. All disk drives eventually fail. If you stick a USB external drive on your desktop for backups, it'll eventually crash, losing any data on it. A failure is unlikely tomorrow, but a spinning disk will almost certainly fail some time in the next 10 years. If you want to keep things, like photos, for the rest of your life, you need to do something different. The solution is RAID, an array of redundant disks such that when one fails (or even two), you don't lose any data. You simply buy a new disk to replace the failed one and keep going. With occasional replacements (as failures happen) it can last decades. My older NAS is 10 years old and I've replaced all the disks, one slot replaced twice.

This can be expensive. A NAS requires a separate box in addition to lots of drives. In my case, I'm spending $1500 for 18 terabytes of disk space that would cost only $400 as an external USB drive. But amortized over the expected 10+ year lifespan, I'm paying $15/month for this home system. This unit is not just disk drives but also a server. Spending $500 just for a box to hold the drives is a bit expensive, but the advantage is that it's also a server that's powered on all the time. I can set up tasks to run on a regular basis that would break if I tried to regularly run them on a laptop or desktop computer.

There are lots of do-it-yourself solutions (like the Radxa Taco carrier board for a Raspberry Pi CM4 running Linux), but I'm choosing this solution because I want something that just works without any hassle, that's configured for exactly what I need. For example, eventually a disk will fail and I'll have to replace it, and I know now that this is something that will be effortless when it happens in the future, without having to relearn some arcane Linux commands that I forgot years ago. Despite this, I'm a geek who obsesses about things, so I'm still going to do possibly unnecessary things, like upgrading hardware: memory, network, and fan for an optimized system.

Here are all the components of my system:
$500 - DS620slim unit
$1000 - 6x Seagate Barracuda 5TB 2.5 inch laptop drives (ST5000LM000)
$100 - 2x Crucial 8GB DDR3 SODIMMs (CT2K102464BF186D)
$30 - 2.5gbps Ethernet USB adapter (CableCreation B07VNFLTLD)
$15 - Noctua NF-A8 ULN ultra-silent fan
$360 - WD Elements 18TB USB drive (WDBWLG0180HBK-NESN) Ransomware Guideline
ErrataRob.webp 2022-01-31 15:33:58 No, a researcher didn't find Olympics app spying on you (direct link) For the Beijing 2022 Winter Olympics, the Chinese government requires everyone to download an app onto their phone. It has many security/privacy concerns, as CitizenLab documents. However, another researcher goes further, claiming his analysis proves the app is recording all audio all the time. His analysis is fraudulent. He shows a lot of technical content that looks plausible, but nowhere does he show anything that substantiates his claims. Average techies may not be able to see this. It all looks technical. Therefore, I thought I'd describe one example of the problems with this data -- something the average techie can recognize.

His "evidence" consists of screenshots from reverse-engineering tools, with red arrows pointing to the suspicious bits. An example of one of these screenshots is this one:

This screenshot is that of a reverse-engineering tool (Hopper, I think) that takes code and "disassembles" it. When you dump something into a reverse-engineering tool, it'll make a few assumptions about what it sees. These assumptions are usually wrong. There's a process where the human user looks at the analyzed output, does a "sniff-test" on whether it looks reasonable, and works with the tool until it gets the assumptions correct. That's the red flag above: the researcher has dumped the results of a reverse-engineering tool without recognizing that something is wrong in the analysis. It fails the sniff test. Different researchers will notice different things first. Famed Google researcher Tavis Ormandy points out one flaw. In this post, I describe what jumps out first to me. That would be the 'imul' (multiplication) instruction shown in the blowup below:

It's obviously ASCII. In other words, it's a series of bytes. The tool has tried to interpret these bytes as Intel x86 instructions (like 'and', 'insd', 'das', 'imul', etc.). But it's obviously not Intel x86, because those instructions make no sense. That 'imul' instruction is multiplying something by the (hex) number 0x6b657479. That doesn't look like a number -- it looks like four lower-case ASCII letters. ASCII lower-case letters are in the range 0x61 through 0x7A, so it's not the single 4-byte number 0x6b657479 but the 4 individual bytes 6b 65 74 79, which map to the ASCII letters 'k', 'e', 't', 'y'. Tool
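The "is this immediate really ASCII?" sniff test is easy to automate. A minimal sketch in C (the constant is the one from the screenshot; the function name is made up for illustration):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Are all four bytes of this 32-bit "immediate" lowercase ASCII?
     * Lowercase ASCII is the range 0x61 ('a') through 0x7A ('z'). */
    static bool is_ascii_lowercase32(uint32_t x) {
        for (int i = 0; i < 4; i++) {
            uint8_t b = (x >> (i * 8)) & 0xFF;
            if (b < 0x61 || b > 0x7A)
                return false;
        }
        return true;
    }

    int main(void) {
        uint32_t imm = 0x6b657479;  /* the "multiplier" from the screenshot */
        if (is_ascii_lowercase32(imm))
            printf("0x%08x is ASCII: '%c%c%c%c'\n", imm,
                   (int)(imm >> 24), (int)(imm >> 16) & 0xFF,
                   (int)(imm >> 8) & 0xFF, (int)imm & 0xFF);
        return 0;
    }

Run on 0x6b657479, it prints 'k', 'e', 't', 'y' -- a strong hint that the disassembler is chewing on string data, not code.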
ErrataRob.webp 2021-12-07 20:39:22 Journalists: stop selling NFTs that you don't understand (direct link) The reason you don't really understand NFTs is because the journalists describing them to you don't understand them, either. We can see that when they attempt to sell an NFT as part of their stories (e.g. AP and NYTimes). They get important details wrong. The latest is Reason.com magazine selling an NFT. As libertarians, you'd think at least they'd get the technical details right. But they didn't. Instead of selling an NFT of the artwork, it's just an NFT of a URL. The URL points to OpenSea, which is known to remove artwork from its site (such as in response to DMCA takedown requests).

If you buy that Reason.com NFT, what you'll actually get is a token pointing to:
https://api.opensea.io/api/v1/metadata/0x495f947276749Ce646f68AC8c248420045cb7b5e/0x1F907774A05F9CD08975EBF7BF56BB4FF0A4EAF0000000000000060000000001
This is just the metadata, which in turn contains a link to the claimed artwork:
https://lh3.googleusercontent.com/8Q2OGcPuODtCxbTmlf3epFGOqbfCbs4fXZ2RcIMnLpRdTaYHgqKArk7uETRdSZmpRAFsNE8KB4sFJx6czKE5cBKB1pa7ovc4wBUdqQ
If either OpenSea or Google removes the linked content, then any connection between the NFT and the artwork disappears.

It doesn't have to be this way. The correct way to do NFT artwork is to point to a "hash" instead, which uniquely identifies the work regardless of where it's located. That $69 million Beeple piece was done this correct way. It's completely decentralized. If the entire Internet disappeared except for the Ethereum blockchain, that Beeple NFT would still work.

This is an analogy for the entire blockchain, cryptocurrency, and Dapp ecosystem: the hype you hear ignores technical details. They promise an entirely decentralized economy controlled by math and code, rather than any human entities. In practice, almost everything cheats, being tied to humans controlling things. In this case, the "Reason.com NFT artwork" is under the control of OpenSea and not the "owner" of the token.

Journalists have a problem. NFTs selling for millions of dollars are newsworthy, and it's the journalist's place to report news rather than make judgments, like whether or not it's a scam. But at the same time, journalists are trying to explain things they don't understand. Instead of standing outside the story, simply quoting sources, they insert themselves into the story, becoming advocates rather than reporters. They can no longer be trusted as objective observers. From a fraud perspective, it may not matter that the Reason.com NFT points to a URL instead of the promised artwork. The entire point of the blockchain is caveat emptor in action. Rules are supposed to be governed by code rather than companies, government, or the courts. There is no undoing of a transaction even if courts were to order it, because it's math. But from a journalistic point of view, this is important. They failed at an honest description of what the NFT actually contains. They've involved themselves in the story, creating a conflict of interest. It's now hard for them to point out NFT scams when they themselves have participated in something that, from a certain point of view, could be viewed as a scam.
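To make the "hash vs. URL" distinction concrete: content-addressing names the artwork by a digest of its bytes, so any copy can be verified no matter where it's hosted. A minimal illustration with a standard tool (artwork.png is a placeholder filename):

    $ sha256sum artwork.png    # prints a 64-hex-digit digest of the file's bytes

An NFT minted against that digest (or an IPFS-style content identifier derived from it) remains verifiable even if every hosting site disappears; a token minted against an https://api.opensea.io/... URL does not.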
ErrataRob.webp 2021-11-07 20:09:32 Example: forensicating the Mesa County system image (direct link) Tina Peters, the election clerk in Mesa County (Colorado), went rogue and dumped disk images of an election computer on the Internet. They are available on the Internet via BitTorrent [Mesa1][Mesa2]. The Colorado Secretary of State is now suing her over the incident. The lawsuit describes the facts of the case, how she entered the building with an accomplice on Sunday, May 23, 2021. I thought I'd do some forensics on the image to get more details. Specifically, I see from the Mesa1 image that she logged on at 4:24pm and was done acquiring the image by 4:30pm -- in and (presumably) out in under 7 minutes. In this blogpost, I go into more detail about how to get that information.

The image

To download the Mesa1 image, you need a program that can access BitTorrent, such as the Brave web browser or a BitTorrent client like qBittorrent. Either click on the "magnet" link or copy/paste it into the program you'll use to download. It takes a minute to gather all the "metadata" associated with the link, but it'll soon start the download. What you get is a file named EMSSERVER.E01. This is a container file that contains both the raw disk image as well as some forensics metadata, like the date it was collected, the forensics investigator, and so on. This container is in the well-known "EnCase Expert Witness" format. EnCase is a commercial product, but its container format is a quasi-standard in the industry. Some freeware utilities you can use to open this container and view the disk include "FTK Imager", "Autopsy", and, on the Linux command line, "ewf-tools".

However you access the E01 file, what you most want to look at is the Windows operating-system logs. These are located in the directory C:\Windows\System32\winevt\Logs. The standard Windows "Event Viewer" application can load these log files to help you view them. When a USB drive is inserted to create the disk image, these event files are updated and written to disk before the image is taken. Thus, we can see in the event files all the events that happened right before the disk image was made.

Disk image acquisition

Here's what the event logs on the Mesa1 image tell us about the acquisition of the disk image itself. The person taking the disk image logged in at 4:24:16pm, directly to the console (not remotely), on their second attempt after first typing an incorrect password. The account used was "emsadmin". Their NTLM password hash is 9e4ec70af42436e5f0abf0a99e908b7a. This is a "role-based" account rather than an individual's account, but I think Tina Peters is the person responsible for the "emsadmin" role. Then, at 4:26:10pm, they connected via USB a Western Digital "easystore™" portable drive that holds 5 terabytes. This was mounted as the F: drive. The program "Access Data FTK Imager 4.2.0.13" was run from the USB drive (F:\FTK Imager\FTK Imager.exe) in order to image the system. The image was taken around 4:30pm, local Mountain Time (10:30pm GMT). It's impossible to say from this image what happened after it was taken. Presumab ★★★
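If you want to follow along on Linux, the ewf-tools package mentioned above can inspect and mount the container. These are real libewf utilities; the mount point path is illustrative:

    $ ewfinfo EMSSERVER.E01            # show acquisition metadata: examiner, dates, hashes
    $ mkdir /mnt/ewf
    $ ewfmount EMSSERVER.E01 /mnt/ewf  # exposes the raw disk image as /mnt/ewf/ewf1

The raw image at /mnt/ewf/ewf1 can then be examined read-only with whatever forensics tool you prefer (Autopsy, for instance), including pulling out the event logs described above.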
ErrataRob.webp 2021-10-31 01:54:29 Debunking: that Jones Alfa-Trump report (direct link) The Alfa-Trump conspiracy-theory has gotten a new life. Among the new things is a report done by Democrat operative Daniel Jones [*]. In this blogpost, I debunk that report.

If you'll recall, the conspiracy-theory comes from anomalous DNS traffic captured by cybersecurity researchers. In the summer of 2016, while Trump was denying involvement with Russian banks, the Alfa Bank in Russia was doing lookups on the name "mail1.trump-email.com". During this time, additional lookups were also coming from two other organizations with suspicious ties to Trump, Spectrum Health and Heartland Payments. This is certainly suspicious, but people have taken it further. They have crafted a conspiracy-theory to explain the anomaly, namely that these organizations were secretly connecting to a Trump server.

We know this explanation to be false. There is no Trump server, no real server at all, and no connections. Instead, the name was created and controlled by Cendyn. The server the name points to exists for transmitting bulk email and isn't really configured to accept connections. It's built for outgoing spam, not incoming connections. The Trump Org had no control over the name or the server. As Cendyn explains, the contract with the Trump Org ended in March 2016, after which they re-used the IP address for other marketing programs, but since they hadn't changed the DNS settings, this caused lookups of the DNS name. This still doesn't answer why Alfa, Spectrum, Heartland, and nobody else were doing the lookups. That's still a question. But the answer isn't secret connections to a Trump server. The evidence is pretty solid on that point.

Daniel Jones and the Democracy Integrity Project

The report is from Daniel Jones and his Democracy Integrity Project. It's at this point that things get squirrely. All sorts of right-wing sites claim he's a front for George Soros, funds Fusion GPS, and is involved in the Steele Dossier. That's right-wing conspiracy-theory nonsense. But at the same time, he's clearly not an independent and objective analyst. He was hired to further the interests of Democrats. If the data and analysis held up, then partisan ties wouldn't matter. But they don't hold up. Jones is clearly trying to be deceptive.

The deception starts by repeatedly referring to the "Trump server". There is no Trump server. There is a Listrak server operated on behalf of Cendyn. Whether the Trump Org had any control over the name or the server is a key question the report should be trying to prove, not a premise. The report clearly understands this fact, so it can't be considered a mere mistake, but a deliberate deception. People make the assumption that a domain name like "trump-email.com" would be controlled by the Trump organization. It wasn't. When Trump Hotels hired Cendyn to do marketing for them, Cendyn did what they normally do in such cases: register a domain with their client's name for the sending of bulk emails. They did the same thing with hyatt-email.com, denihan-email.com, mjh-email.com, and so on. What is clear is that the Trump organization had no control over, and no direct ties to, this domain until after the conspiracy-theory hit the press.

Finding #1 - Alfa Bank, Spectrum Health, and Heartland account for nearly all of the DNS lookups for mail1.trump-email.com in the May-September timeframe. Yup, that's weird and unexplained. But the report concludes from this that there were connections, saying the following: "In the DNS environment, if 'computer X' does a DNS look-up of 'Computer Y,' it means that 'Computer X' is trying to connect to 'Computer Y'." This is false. That's certain
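The claim is easy to falsify yourself: a DNS lookup and a connection are separate operations, and plenty of software (spam filters, security scanners, link-preview bots) does the former without ever doing the latter. For example, with the standard dig tool (example.com used here as an illustrative name):

    $ dig +short example.com    # a DNS lookup happens; no connection to example.com is ever made

The resolver asks a DNS server for the name's records and stops there. Nothing in that exchange touches the host the name points to, which is why lookup logs alone cannot prove "Computer X connected to Computer Y".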
ErrataRob.webp 2021-10-24 19:46:46 Review: Dune (2021) (direct link) One of the most important classic sci-fi stories is the book "Dune" by Frank Herbert. It was recently made into a movie. I thought I'd write a quick review. The summary is this: just read the book. It's a classic for a good reason, and you'll be missing a lot by not reading it.

But the Dune (2021) movie is very good. The most important thing to know is to see it in IMAX. IMAX is this huge screen technology that partly wraps around the viewer, accompanied by huge speakers that overwhelm you with sound. If you watch it in some other format, what was visually stunning becomes merely very pretty. This is Villeneuve's trademark, which you can see in his other works, like his sequel to Blade Runner. The purpose is to marvel at the visuals in every scene. The storytelling is just enough to hold the visuals together. I mean, he also seems to do a good job with the storytelling, but it's just not the reason to go see the movie. (I can't tell -- I've read the book, so I see the story differently than those of you who haven't.)

Beyond the story and visuals, many of the actors' performances were phenomenal. Javier Bardem's "Stilgar" character steals his scenes. Stellan Skarsgård exudes evil. The two character actors playing the mentats were each perfect. I found the lead character (Timothée Chalamet) a bit annoying, but simply because he is at this point in the story. Villeneuve splits the book into two parts. This movie is only the first part. This presents a problem, because up until this point, the main character is just responding to events, not yet the hero who drives the events. It doesn't fit the traditional Hollywood accounting model. I really want to see the second film, even if the first part, released in the post-pandemic turmoil of the movie industry, doesn't perform well at the box office.

In short, if you haven't read the books, I'm not sure how well you'll follow the storytelling. But the visuals (seen at IMAX scale) and the characters are so great that I'm pretty sure most people will enjoy the movie. And go see it on IMAX in order to get the second movie made!! Guideline
ErrataRob.webp 2021-10-13 23:33:07 Fact check: that "forensics" of the Mesa image is crazy (direct link) Tina Peters, the elections clerk from Mesa County (Colorado), went rogue, creating a "disk-image" of the election server and posting that image to the public Internet. Conspiracy theorists have been analyzing the disk-image trying to find anomalies supporting their conspiracy-theories. A recent example is this "forensics" report. In this blogpost, I debunk that report.

I suppose calling somebody a "conspiracy theorist" is insulting, but there are three objective ways we can identify them as such.

The first is when they use the logic "everything we can't explain is proof of the conspiracy". In other words, since there's no other rational explanation, the only remaining explanation is the conspiracy-theory. But there can be other possible explanations -- just ones unknown to the person because they aren't smart enough to understand them. We see that here: the person writing this report doesn't understand some basic concepts, like "airgapped" networks.

This leads to the second way to recognize a conspiracy-theory: when it demands this one thing that'll clear things up. Here, it's demanding that a manual audit/recount of Mesa County be performed. But it won't satisfy them. The Maricopa audit in neighboring Arizona, whose recount found no fraud, didn't clear anything up -- it just found more anomalies demanding more explanation. It's like Obama's birth certificate. The reason he ignored demands to show it was that, first, there was no serious question (even if born in Kenya, he'd still be a natural-born citizen -- just like how Cruz was born in Canada and McCain in Panama), and second, showing the birth certificate wouldn't change anything at all, as they'd just claim it was fake. There is no possibility of showing a birth certificate that can be proven isn't fake.

The third way to objectively identify a conspiracy theory is when they repeat objectively crazy things. In this case, they keep demanding that the 2020 election be "decertified". That's not a thing. There is no regulation or law where that can happen. The most you can hope for is to use this information to prosecute the fraudster, prosecute the elections clerk who didn't follow procedure, or convince legislators to change the rules for the next election. But there's just no way to change the results of the last election even if widespread fraud is now proven.

The document makes 6 individual claims. Let's debunk them one-by-one.

#1 Data Integrity Violation

The report tracks some logs on how some votes were counted. It concludes: "If the reasons behind these findings cannot be adequately explained, then the county's election results are indeterminate and must be decertified." This neatly demonstrates two conditions I cited above. The analyst can't explain the anomaly not because something bad happened, but because they don't understand how Dominion's voting software works. This demand for an explanation is a common attribute of conspiracy theories -- the ignorant keep finding things they don't understand and demand somebody else explain them. Secondly, there's the claim that the election results must be "decertified". It's something that Trump and his supporters believe is a thing, that somehow the courts will overturn the past election and reinstate Trump. This isn't a rational claim. It's not how the courts or the law or the Constitution works.

#2 Intentional purging of Log Files

This is the issue that convinced Tina Peters to go rogue: the normal Dominion software update gets rid of all the old system-log files. She leaked two disk-images, before and after the update, to show the disappearance of system-logs. She believes this violates the law demanding that "election records" be preserved. She claims because o Guideline
ErrataRob.webp 2021-10-10 20:35:49 100 terabyte home NAS (direct link) So, as a nerd, let's say you need 100 terabytes of home storage. What do you do?

My solution would be a commercial NAS RAID, like from Synology, QNAP, or Asustor. I'm a nerd, and I have set up my own Linux systems with RAID, but I'd rather get a commercial product. When a disk fails, and a disk will always eventually fail, I want something that will loudly beep at me and make it easy to replace the drive and repair the RAID.

Some choices you have are:
vendor (Synology, QNAP, and Asustor are the vendors I know and trust the most)
number of bays (you want 8 to 12)
redundancy (you want at least 2 if not 3 disks)
filesystem (btrfs or ZFS) [not the builtin btrfs-RAID, but btrfs on top of RAID]
drives (NAS-optimized, between $20/tb and $30/tb)
networking (at least 2-gbps bonded, but the box probably can't use all of 10gbps)
backup (big external USB drives)

The products I link above all have at least 8 drive bays. When you google "NAS", you'll get a list of smaller products. You don't want them. You want somewhere between 8 and 12 drives. The reason is that you want two-drive redundancy like RAID6 or RAIDZ2, meaning two additional drives. Everyone tells you one-disk redundancy (like RAID5) is enough; they are wrong. It's just legacy thinking, because it was sufficient in the past when drives were small. Disks are so big nowadays that you really need two-drive redundancy. If you have a 4-bay unit, then half the drives are used for redundancy. If you have a 12-bay unit, then only 2 out of the 12 drives are being used for redundancy.

The next decision is the filesystem. There are only two choices, btrfs and ZFS. The reason is that they both support healing and snapshots. Note btrfs here means btrfs-on-RAID6, not btrfs-RAID, which is broken. In other words, btrfs contains its own RAID feature that you don't want to use. Over long periods of time, errors creep into the filesystem. You want to scrub the data occasionally. This means reading the entire filesystem, checksumming the files, and repairing them if there's a problem, which requires a filesystem that checksums each block of data (a scrub example follows below). Another thing you want is snapshots, to guard against things like ransomware. This means you mark the files you want to keep, and even if a workstation attempts to change or delete a file, it'll still be held on the disk. QNAP uses ZFS while others like Synology and Asustor use btrfs. I really don't know which is better.

It's cheaper to buy the NAS diskless and then add your own disk drives. If you can't do this, then you'll be helpless when a drive fails and needs to be replaced. Drives cost between $20/tb and $30/tb right now. This recent article has a good buying guide. You probably want to get a NAS-optimized hard drive. You probably want to double-check that it's CMR instead of SMR -- SMR is "shingled" vs. "conventional" magnetic recording. SMR is bad. There are only three hard drive makers (Seagate, Western Digital, and Toshiba), so there's not a big selection.

Working with such large data sets over 1-gbps is painful. These units allow 802.3ad link aggregation as well as faster Ethernet. Some have 10gbe built-in; others allow a PCIe adapter to be plugged in. However, due to the overhead of spinning disks, you are unlikely to get 10gbps speeds. I mention this because 10gbps copper Ethernet sucks, so it is not necessarily a buying criterion. You may prefer multigig/NBASE-T, which only does 5gbps but with relaxed cabling requirements and lower power consumption. This means that your NAS decision is going to be made with your home networki
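The periodic scrub described above is a one-line scheduled task on either filesystem. These are the real commands; the mount point and pool name (/volume1, tank) are illustrative, and commercial NAS GUIs typically let you schedule the equivalent without touching a shell:

    # btrfs: read and verify every checksummed block under the mount point
    btrfs scrub start /volume1

    # ZFS: the same idea, per pool
    zpool scrub tank

Both run in the background, repair blocks from redundancy when a checksum mismatch is found, and report progress (btrfs scrub status / zpool status).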
ErrataRob.webp 2021-09-24 03:51:21 Check: that Republican audit of Maricopa (direct link) Author: Robert Graham (@erratarob)

Later today (Friday, September 24, 2021), Republican auditors release their final report on what they found with the elections in Maricopa County. Draft copies have circulated online. In this blogpost, I write up my comments on the cybersecurity portions of their draft. https://arizonaagenda.substack.com/p/we-got-the-senate-audit-report

The three main problems are:
They misapply cybersecurity principles that are meaningful for normal networks, but which don't really apply to the air-gapped networks we see here.
They make some errors about technology, especially networking.
They are overstretching themselves to find dirt, claiming the things they don't understand are evidence of something bad.

In the parts below, I pick apart individual pieces from that document to demonstrate these criticisms. I focus on section 7, the cybersecurity section, and ignore the other parts of the document, where others are more qualified than I to opine. In short, when corrected, section 7 is nearly empty of any content.

7.5.2.1.1 Software and Patch Management, part 1

They claim Dominion is defective at one of the best-known cyber-security issues: applying patches. It's not true. The systems are "air gapped", disconnected from the typical sort of threat that exploits unpatched systems. The primary security of the system is physical. This is standard in other industries with hard reliability constraints, like industrial or medical. Patches in those systems can destabilize systems and kill people, so these industries are risk-averse. They prefer to mitigate the threat in other ways, such as with firewalls and air gaps. Yes, this approach is controversial. There are some in the cybersecurity community who use lack of patches as a bludgeon with which to bully any who don't apply every patch immediately. But this is because patching is more a political issue than a technical one. In the real, non-political world we live in, most things don't get immediately patched all the time.

7.5.2.1.1 Software and Patch Management, part 2

They claim new software executables were applied to the system, despite the rules against new software being applied. This isn't necessarily true. There are many reasons why Windows may create new software executables even when no new software is added. One reason is "Features on Demand" or FOD. You'll see new executables appear in C:\Windows\WinSxS for these. Another reason is the .NET language, which causes binary x86 executables to be created from bytecode. You'll see this in the C:\Windows\assembly directory. The auditors simply counted the number of new executables, with no indication of which category they fell in. Maybe they are right; maybe new software was installed or old software updated. It's just that their mere counting of executable files doesn't show understanding of these differences.

7.5.2.1.2 Log Management

The auditors claim that a central log management system should be used. This obviously wouldn't apply to "air gapped" systems, because it would need a connection to an external network. Dominion already designates their EMSERVER as the central log repository for their little air-gapped network. Important files from C: are copied to D:, a RAID10 drive. This is a perfectly adequate solution; adding yet another computer to their little network would be overkill, and would add as many security problems as it solved. One could argue that more Windows logs need to be preserved, but that would simply mean archiving them from the C: drive onto the D: drive, not that you need to connect to the Internet to centrally log files.

7.5.2.1.3 Credential Management

Like the other sections, this claim is out of place Threat Patching
ErrataRob.webp 2021-09-21 18:01:25 That Alfa-Trump Sussman indictment (direct link) Five years ago, online magazine Slate broke a story about how DNS packets showed secret communications between Alfa Bank in Russia and the Trump Organization, proving a link that Trump denied. I was the only prominent tech expert who debunked this as just a conspiracy-theory [*][*][*]. Last week, I was vindicated by the indictment of a lawyer involved, one Michael Sussman. It tells a story of where this data came from, and some problems with it.

But we should first avoid reading too much into this indictment. It cherry-picks data supporting its argument while excluding anything that disagrees with it. We see chat messages expressing doubt in the DNS data. If chat messages existed expressing confidence in the data, we wouldn't see them in the indictment. In addition, the indictment tries to make strong ties to the Hillary campaign and the Steele Dossier, but ultimately, it's weak. It looks to me like an outsider trying to ingratiate themselves with the Hillary campaign rather than being part of a grand Clinton-led conspiracy against Trump.

With these caveats, we do see some important things about where the data came from. We see how Tech-Executive-1 used his position at cyber-security companies to search private data (namely, private DNS logs) for anything that might link Trump to somebody nefarious, including Russian banks. In other words, a link between Trump and Alfa Bank wasn't something they accidentally found; it was one of the many thousands of links they looked for. Such a technique has long been known as a problem in science. If you cast the net wide enough, you are sure to find things that would otherwise be statistically unlikely. In other words, if you do hundreds of tests of hydroxychloroquine or ivermectin on Covid-19, you are sure to find results so statistically unlikely that they wouldn't happen more than 1% of the time. If you search world-wide DNS logs, you are certain to find weird anomalies that you can't explain. Unexplained computer anomalies happen all the time, as every user of computers can tell you.

We've seen from the start that the data was highly manipulated. It's likely that the data is real, that the DNS requests actually happened, but at the same time, it's been stripped of everything that might cast doubt on it. In this indictment we see why: before the data was found, the purpose was to smear Trump. The finders of the data didn't want people to come to the best explanation; they wanted only explanations that hurt Trump. Trump had no control over the domain in question, trump-email.com. Instead, it was created by a hotel marketing firm they hired, Cendyn. It's Cendyn who put Trump's name in the domain. A broader collection of DNS information including Cendyn's other clients would show whether this was normal or not. In other words, a possible explanation of the data, hints of a Trump-Alfa connection, has always been the dishonesty of those who collected it. The above indictment confirms they were at this level of dishonesty. It doesn't mean the DNS requests didn't happen, but their anomalous nature can be created by deletion of explanatory data. Lastly, we see in this indictment the problem with "experts". Guideline
ErrataRob.webp 2021-09-14 23:17:40 How not to get caught in law-enforcement geofence requests (direct link) I thought I'd write up a response to this question from well-known 4th Amendment and CFAA lawyer Orin Kerr:

"Question for tech people related to 'geofence' warrants served on Google: How easy is it for a cell phone user, either of an Android or an iPhone, to stop Google from generating the detailed location info needed to be responsive to a geofence warrant? What do you need to do?" - Orin Kerr (@OrinKerr) September 15, 2021

"(FWIW, I'm seeking info from people who actually know the answer based on their expertise, not from those who are just guessing, or who are now googling around to figure out what the answer may be.)" - Orin Kerr (@OrinKerr) September 15, 2021

First, let me address the second part of his tweet: whether I'm technically qualified to answer this. I'm not sure; I have only 80% confidence that I am. Hence, I'm writing this answer as a blogpost, hoping people will correct me if I'm wrong.

There is a simple answer and it's this: just disable "Location" tracking in the settings on the phone. Both iPhone and Android have a one-click button to tap that disables everything. The trick is knowing which thing to disable. On the iPhone it's called "Location Services". On Android, it's simply called "Location". If you do start googling around for answers, you'll find articles upset that Google is still tracking them. That's because they disabled "Location History" and not "Location". This left "Location Services" and "Web and App Activity" still tracking them. Disabling "Location" on the phone disables all these things [*]. It's that simple: one click and done, and Google won't be able to report your location in a geofence request. I'm pretty confident in this answer, despite what your googling around will tell you about Google's pernicious ways. But I'm only 80% confident in my answer. Technology is complex and constantly changing.

Note that the answer is very different for mobile phone companies, like AT&T or T-Mobile. They have their own ways of knowing your phone's location, independent of whatever Google or Apple do on the phone itself. Because of modern 4G/LTE, cell towers must estimate both your direction and distance from the tower. I've confirmed that they can know your location to within 50 feet. There are limitations to this; it depends upon whether you are simply in range of the tower or have an active phone call in progress. Thus, I think law enforcement prefers asking Google. Another example is how my car uses Google Maps all the time, and doesn't have privacy settings. I don't know what it reports to Google. So when I rob a bank, my phone won't betray me, but my car will.
ErrataRob.webp 2021-07-26 20:52:15 Of course you can't trust scientists on politics (direct link) Many people make the same claim as this tweet. It's obviously wrong. Yes, the right wing has a problem with science, but this isn't it.

"If you think you don't trust scientists, you're mistaken. You trust scientists in a million different ways every time you step on a plane, or for that matter turn on your tap or open a can of beans. The fact that you're unaware of this doesn't mean it's not so." - Paul Graham (@paulg) July 26, 2021

First of all, people trust airplanes because of their long track record of safety, not because of any claims made by scientists. Secondly, people distrust "scientists" when politics is involved, because of course scientists are human and can get corrupted by their political (or religious) beliefs. And thirdly, the concept of "trusting scientific authority" is wrong, since the bedrock principle of science is distrusting authority. What defines science is how often prevailing scientific beliefs are challenged. Carl Sagan has many quotes along these lines that eloquently express this: "A central lesson of science is that to understand complex issues (or even simple ones), we must try to free our minds of dogma and to guarantee the freedom to publish, to contradict, and to experiment. Arguments from authority are unacceptable." If you are "arguing from authority", like Paul Graham is doing above, then you are fundamentally misunderstanding both the principles of science and its history.

We know where this controversy comes from: politics. The above tweet isn't complaining about the $400 billion U.S. market for alternative medicines, a largely non-political example. It's complaining about political issues like vaccines, global warming, and evolution. The reason those on the right wing resist these things isn't because they are inherently anti-science; it's because the left wing is. The left has corrupted and politicized these topics. The "Green New Deal" contains very little that is "Green" and much that is "New Deal", for example. The left goes from the fact "carbon dioxide absorbs infrared" to justify "we need to promote labor unions".

Take Marjorie Taylor Greene's (MTG) claim that she doesn't believe in the Delta variant because she doesn't believe in evolution. Her argument is laughably stupid, of course, but it starts with the way the left has politicized the term "evolution". The "Delta" variant didn't arise from "evolution"; it arose from "mutation" and "natural selection". We know the "mutation" bit is true, because we can sequence the complete genome and detect that changes happen. We know that "selection" happens, because we see some variants overtake others in how fast they spread. Yes, "evolution" is synonymous with mutation plus selection, but it's also a politically loaded term that means a lot of additional things. The public doesn't understand mutation and natural selection, because these concepts are not really taught in school. Schools don't teach students to understand these things; they teach students to believe. The focus of science education in school is indoctrinating students into believing in "evolution" rather than teaching the mechanisms of "mutation" and "natural selection". We see the conflict in things like describing the evolution of the eyeball, which Creationists "reasonably" believe is too complex to have evolved this way. I put "reasonably" in quotes here because it's just the "God of the gaps" argument, which credits God for everything that science can't explain, which isn't very smart. But at the same time, science textbooks go too far, refusing to admit their gaps in knowledge here. The fossil record shows a lot of complexity arising over time through steady change -- it just doesn't show anything about eyeballs. In other words, it's
ErrataRob.webp 2021-07-21 18:11:58 Risk analysis for DEF CON 2021 (direct link) It's the second year of the pandemic and the DEF CON hacker conference wasn't canceled. However, the Delta variant is spreading. I thought I'd do a little bit of risk analysis. TL;DR: I'm not canceling my ticket, but I am changing my plans for what I do in Vegas during the convention.

First, a note about risk analysis. For many people, "risk" means something to avoid. They work in a binary world, labeling things as either "risky" (to be avoided) or "not risky". But real risk analysis is about shades of gray, trying to quantify things.

The Delta variant is a mutation out of India that, at the moment, is particularly affecting the UK. Cases are nearly up to their pre-vaccination peaks in that country. Note that the UK has already vaccinated nearly 70% of its population -- more than the United States. In both the UK and US there are few preventive measures in place (no lockdowns, no masks) other than vaccines. Thus, the UK graph is somewhat predictive of what will happen in the United States. If we time things from when the latest wave hit the same levels as the peak of the first wave, then it looks like the USA is only about 1.5 months behind the UK.

It's another interesting lesson about risk analysis. Most people experience these things as sudden changes. One moment, everything seems fine and cases are decreasing. The next moment, we are experiencing a major new wave of infections. It's especially jarring when the thing we are tracking is exponential. But we can compare the curves and see that things are totally predictable. In about another 1.5 months, the US will experience a wave that looks similar to the UK wave. Sometimes the problem is that the change is inconceivable. We saw that recently with 1-in-100-year floods in Germany. Weather forecasters predicted 1-in-100-level floods days in advance, but they still surprised many people.
ErrataRob.webp 2021-07-14 20:49:05 Ransomware: Quis custodiet ipsos custodes (direct link) Many claim that "ransomware" is due to cybersecurity failures. It's not really true. We are adequately protecting users and computers. The failure is in the inability of cybersecurity guardians to protect themselves. Ransomware doesn't make the news when it only accesses the files normal users have access to. The big ransomware news events happened because ransomware elevated itself to "administrator" over the network, giving it access to all files, including online backups.

Generic improvements in cybersecurity will help only a little, because they don't specifically address this problem. Likewise, blaming ransomware on how it breached perimeter defenses (phishing, patches, password reuse) will only produce marginal improvements. Ransomware solutions need to instead look at the typical human-operated ransomware killchain, identify how attackers typically achieve "administrator" credentials, and fix those problems. In particular, large organizations need to redesign how they handle Windows "domains" and "segment" their networks.

I read a lot of lazy op-eds on ransomware. Most of them claim that the problem is due to some sort of moral weakness (laziness, stupidity, greed, slovenliness, lust). They suggest things like "taking cybersecurity more seriously" or "doing better at basic cyber hygiene". These are "unfalsifiable" -- things that nobody would disagree with, meaning they are things the speaker doesn't really have to defend. They don't rest upon technical authority but moral authority: anybody, regardless of technical qualifications, can have an opinion on ransomware as long as they phrase it in such terms. Another flaw of these "unfalsifiable" solutions is that they are not measurable. There's no standard definition for "best practices" or "basic cyber hygiene", so there's no way to tell if you aren't already doing such things, or the gap you need to overcome to reach this standard. Worse, some people point to the "NIST Cybersecurity Framework" as the "basics" -- but that's a framework for all cybersecurity practices. In other words, anything short of doing everything possible is considered a failure to follow the basics. In this post, I try to focus on specifics, while at the same time making sure things are broadly applicable. It's detailed enough that people will disagree with my solutions.

The thesis of this blogpost is that we are failing to protect "administrative" accounts. The big ransomware attacks happen because the hackers got administrative control over the network, usually the Windows domain admin. It's with administrative control that they are able to cause such devastation, able to reach all the files in the network while also being able to delete backups.

The Kaseya attacks highlight this particularly well. The company produces a product that is in turn used by "Managed Service Providers" (MSPs) to administer the security of small and medium-sized businesses. Hackers found and exploited a vulnerability in the product, which gave them administrative control over more than 1000 small and medium-sized businesses around the world. The underlying problems start with the way this software gives indiscriminate administrative access over computers. Then, this software was written using standard software techniques, meaning with the standard vulnerabilities that most software has (such as "SQL injection"). It wasn't written in the paranoid, careful way that you'd hope for in software that poses this much danger. A good analogy is airplanes. A common joke refers to the "black box" flight-recorders that survive airplane crashes: maybe we should make the entire airplane out of that material. The reason we can't do this is that the airplane would be too heavy to fly. The same is true of software: airplane software is written with extreme paranoia, knowing that bugs can l Ransomware Vulnerability Guideline
ErrataRob.webp 2021-07-05 17:15:28 Some quick notes on SDR (lien direct) I'm trying to create perfect screen captures of SDR to explain the world of radio around us. In this blogpost, I'm going to discuss some of the imperfect captures I'm getting, specifically, some notes about WiFi and Bluetooth.An SDR is a "software defined radio" which digitally samples radio waves and uses number crunching to decode the signal into data. Among the simplest thing an SDR can do is look at a chunk of spectrum and see signal strength. This is shown below, where I'm monitoring part of the famous 2.4 GHz pectrum used by WiFi/Bluetooth/microwave-ovens:There are two panes. The top shows the current signal strength as graph. The bottom pane is the "waterfall" graph showing signal strength over time, display strength as colors: black means almost no signal, blue means some, and yellow means a strong signal.The signal strength graph is a bowl shape, because we are actually sampling at a specific frequency of 2.42 GHz, and the further away from this "center", the less accurate the analysis. Thus, the algorithms think there is more signal the further away from the center we are.What we do see here is two peaks, at 2.402 GHz toward the left and 2.426 GHz toward the right (which I've marked with the red line). These are the "Bluetooth beacon" channels. I was able to capture the screen at the moment some packets were sent, showing signal at this point. Below in the waterfall chart, we see packets constantly being sent at these frequencies.We are surrounded by devices giving off packets here: our phones, our watches, "tags" attached to devices, televisions, remote controls, speakers, computers, and so on. This is a picture from my home, showing only my devices and perhaps my neighbors. In a crowded area, these two bands are saturated with traffic.The 2.4 GHz region also includes WiFi. So I connected to a WiFi access-point to watch the signal.WiFi uses more bandwidth than Bluetooth. The term "bandwidth" is used today to mean "faster speeds", but it comes from the world of radio where it quite literally means the width of the band. The width of the Bluetooth transmissions seen above is 2 MHz, the width of the WiFi band shown here is 20 MHz.It took about 50 screenshots before getting these two. I had to hit the "capture" button right at the moment things were being transmitted. And easier way is a setting that graphs the current signal strength compared to the maximum recently seen as a separate line. That's shown below: the instant it was taken, there was no signal, but it shows the maximum of recent signals as a separate line:
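To illustrate what the SDR display is computing under the hood, here is a minimal Python sketch (using numpy, with made-up signal parameters) that produces the same kind of signal-strength-versus-frequency view: sample a signal, take an FFT, and look at the power in each frequency bin.

import numpy as np

# A minimal sketch of what an SDR display computes, with made-up numbers:
# sample a signal, FFT it, and report power per frequency bin.
sample_rate = 1_000_000                  # pretend we see 1 MHz of spectrum
t = np.arange(65536) / sample_rate       # time axis for 65536 samples

# Fake "received" signal: two tones (narrow transmissions) plus noise.
signal = (np.sin(2 * np.pi * 100_000 * t) +        # tone at 100 kHz
          0.5 * np.sin(2 * np.pi * 300_000 * t) +  # weaker tone at 300 kHz
          0.1 * np.random.randn(len(t)))           # background noise

spectrum = np.fft.rfft(signal)
power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)  # strength in dB
freqs = np.fft.rfftfreq(len(t), 1 / sample_rate)

# Print the strongest bins -- these are the "peaks" on the SDR display.
for i in power_db.argsort()[-3:][::-1]:
    print(f"{freqs[i] / 1000:8.1f} kHz  {power_db[i]:6.1f} dB")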
ErrataRob.webp 2021-06-20 20:34:42 When we'll get a 128-bit CPU (direct link) On Hacker News, this article claiming "You won't live to see a 128-bit CPU" is trending. Sadly, it was non-technical, so didn't really contain anything useful. I thought I'd write up some technical notes. The issue isn't the CPU, but memory. It's not about the size of computations, but about when CPUs will need more than 64 bits to address all the memory future computers will have. It's a simple question of math and Moore's Law. Today, Intel's server CPUs support 48-bit addresses, which is enough to address 256 terabytes of memory -- in theory. In practice, Amazon's AWS cloud servers are offered with up to 24 terabytes, or 45-bit addresses, in the year 2020. Doing the math, it means we have 19 bits, or 38 years, left before we exceed the 64-bit registers in modern processors. This means that by the year 2058, we'll exceed the current address size and need to move to 128 bits. Most people reading this blogpost will be alive to see that, though probably retired. There are lots of reasons to suspect that this event will come both sooner and later. It could come sooner if storage merges with memory. We are moving away from rotating platters of rust toward solid-state storage like flash. There are post-flash technologies like Intel's Optane that promise storage that can be accessed at speeds close to that of memory. We already have machines needing petabytes (at least 50 bits' worth) of storage. Addresses often contain more than just the memory address, but also some sort of description of the memory. For many applications, 56 bits is the maximum, as they use the remaining 8 bits for tags. Combining those two points, we may be only 12 years away from people starting to argue for 128-bit registers in the CPU. Or, it could come later because few applications need more than 64 bits, other than databases and file-systems. Previous transitions were delayed for this reason, as the x86 history shows. The first Intel CPUs were 16 bits addressing 20 bits of memory, and the Pentium Pro was 32 bits addressing 36 bits' worth of memory. The few applications that needed the extra memory could deal with the pain of needing to use multiple numbers for addressing. Databases used Intel's address extensions; almost nobody else did. It took 20 years, from the initial release of the MIPS R4000 in 1990 to the average Intel desktop processor shipped in 2010, for mainstream apps to need larger addresses. For the transition beyond 64 bits, it'll likely take even longer, and might never happen. Working with large datasets needing more than 64-bit addresses will be such a specialized discipline that it'll happen behind libraries or operating-systems anyway. So let's look at the internal cost of larger registers, if we expand registers to hold larger addresses. We already have 512-bit CPUs -- with registers that large. My laptop uses one. It supports AVX-512, a form of "SIMD" that packs multiple small numbers into one big register, so that it can perform identical computations on many numbers at once, in parallel, rather than sequentially. Indeed, even very low-end processors have been 128-bit for a long time -- for "SIMD". In other words, we can have a large register file with wide registers, and handle the bandwidth of shipping those registers around the CPU performing computations on them.
Today's processors already handle this for certain types of computations. But just because we can do many 64-bit computations at once ("SIMD") still doesn't mean we can do a 128-bit computation ("scalar"). Simple problems like "carry" get difficult as numbers get larger. Just because SIMD can do multiple small computations doesn't tell us what one large computation will cost. This was why it took an extra decade for Intel to make the transition -- they added 64-bit MMX registers for SIMD a decade before they added 64-bit for normal computations. The abo
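The address math above is easy to reproduce. A minimal sketch, assuming (as the post does) that Moore's Law adds roughly one address bit every two years:

import math

# Worked version of the blogpost's address math, assuming memory
# doubles (one more address bit) roughly every two years.
current_memory = 24 * 2**40          # 24 terabytes offered by AWS (2020)
bits_used = math.ceil(math.log2(current_memory))
bits_left = 64 - bits_used           # headroom in a 64-bit address
years_left = bits_left * 2           # Moore's Law: ~2 years per bit

print(f"bits used now : {bits_used}")   # ~45 bits
print(f"bits left     : {bits_left}")   # ~19 bits
print(f"years left    : {years_left}")  # ~38 years -> around 2058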
ErrataRob.webp 2021-04-29 04:15:50 Anatomy of how you get pwned (direct link) Today, somebody had a problem: they kept seeing a popup on their screen, an obvious scam trying to sell them McAfee anti-virus. Where was this coming from? In this blogpost, I follow this rabbit hole on down. It starts with "search engine optimization" links and leads to an entire industry of tricks and scams: exploiting popups, trying to infect your machine with viruses, and stealing emails or credit card numbers. Evidence of the attack first appeared with occasional popups like the following. The popup isn't part of any webpage. This is obviously a trick. But from where? How did it "get on the machine"? There are lots of possible answers. But the most obvious answer (to most people), that your machine is infected with a virus, is likely wrong. Viruses are generally silent, doing evil things in the background. When you see something like this, you aren't infected ... yet. Instead, things popping up with warnings is almost entirely due to evil websites. But that's confusing, since this popup doesn't appear within a web page. It's off to one side of the screen, nowhere near the web browser. Moreover, we spent some time diagnosing this. We restarted the webbrowser in "troubleshooting mode" with all extensions disabled and went to a clean website like Twitter. The popup still kept happening. As it turns out, he had another window with Firefox running under a different profile. So while he cleaned out everything in this one profile, he wasn't aware the other one was still running. This happens a lot in investigations. We first rule out the obvious things, and then struggle to find the less obvious explanation -- when it was the obvious thing all along. In this case, the reason the popup wasn't attached to a browser window is because it's a new type of popup notification that's supposed to act more like an app and less like a web page. It has a hidden web page underneath called a "service worker", so the popups keep happening when you think the webpage is closed. Once we figured out the mistake of the other Firefox profile, we quickly tracked this down and saw that indeed, it was in the Notification list with Permissions set to Allow. Simply changing this solved the problem. Note that the above picture of the popup has a little wheel in the lower right. We are taught not to click on dangerous things, so the user in this case was avoiding it. However, had the user clicked on it, it would've led him straight to the solution. Though, I can't recommend you click on such a thing and trust it, because that means in the future, malicious tricks will contain such safe-looking icons that aren't so safe. Anyway, the next question is: which website did this come from? The answer is Google. In the news today was the story of the Michigan guys who tried to kidnap the governor. The user googled "attempted kidnap sentencing guidelines". This search produced a pa Guideline
ErrataRob.webp 2021-04-21 17:27:21 Ethics: University of Minnesota's hostile patches (direct link) The University of Minnesota (UMN) got into trouble this week for a study in which they submitted deliberately vulnerable patches to open-source projects, in order to test whether hostile actors can do this to hack things. After a UMN researcher submitted a crappy patch to the Linux Kernel, kernel maintainers decided to rip out all recent UMN patches. Both things can be true: their study was an important contribution to the field of cybersecurity, and their study was unethical. It's like Nazi medical research on victims in concentration camps, or U.S. military research on unwitting soldiers. The research can be wildly unethical but at the same time produce useful knowledge. I'd agree that their paper is useful. I would not be able to immediately recognize their patches as adding a vulnerability -- and I'm an expert at such things. In addition, the sorts of bugs it exploits show a way forward in the evolution of programming languages. It's not clear that a "safe" language like Rust would be the answer. Linux kernel programming requires tracking resources in ways that Rust would consider inherently "unsafe". Instead, the C language needs to evolve with better safety features and better static analysis. Specifically, we need to be able to annotate the parameters and return statements of functions. For example, if a pointer can't be NULL, then it needs to be documented as a non-nullable pointer. (Imagine if pointers could be signed and unsigned, meaning, can sometimes be NULL or never be NULL). So I'm glad this paper exists. As a researcher, I'll likely cite it in the future. As a programmer, I'll be more vigilant in the future. In my own open-source projects, I should probably review some previous pull requests that I've accepted, since many of them have been of the same crappy quality, simply adding a (probably) unnecessary NULL-pointer check. The next question is whether this is ethical. Well, the paper claims to have sign-off from their university's IRB -- their Institutional Review Board that reviews the ethics of experiments. Universities created IRBs to deal with the fact that many medical experiments were done on either unwilling or unwitting subjects, such as the Tuskegee Syphilis Study. All medical research must have IRB sign-off these days. However, I think IRB sign-off for computer security research is stupid. Things like masscanning of the entire Internet are undecidable with traditional ethics. I regularly scan every device on the IPv4 Internet, including your own home router. If you paid attention to the packets your firewall drops, some of them would be from me. Some consider this a gross violation of basic ethics and get very upset that I'm scanning their computer. Others consider this to be the expected consequence of the end-to-end nature of the public Internet, that there's an inherent social contract that you must be prepared to receive any packet from anywhere. Kerckhoff's Principle from the 1800s suggests that the core ethic of cybersecurity is exposure to such things rather than trying to cover them up. The point isn't to argue whether masscanning is ethical. The point is to argue that it's undecided, and that your IRB isn't going to be able to answer the question better than anybody else. But here's the thing about masscanning: I'm honest and transparent about it.
My very first scan of the entire Internet came with a tweet "BTW, this is me scanning the entire Internet". A lot of ethical questions in other fields come down to honesty. If you have to lie about it or cover it up, then th Hack Vulnerability
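The annotation idea above (documenting which pointers may be NULL) has a loose analogy in optional typing. The sketch below shows that analogy in Python -- not the proposed C extension -- where a static checker such as mypy flags passing a possibly-None value where None isn't allowed:

from typing import Optional

# A rough analogy (in Python, not the proposed C extension): annotating
# whether a value may be None, so a static checker can catch mistakes.

def find_user(name: str) -> Optional[dict]:
    """May return None -- callers must check, or a checker complains."""
    users = {"alice": {"id": 1}}
    return users.get(name)

def greet(user: dict) -> str:
    """Declared non-None: passing a possibly-None value gets flagged."""
    return f"hello, user {user['id']}"

user = find_user("bob")
if user is not None:      # the annotation forces this check
    print(greet(user))
else:
    print("no such user")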
ErrataRob.webp 2021-03-26 14:51:59 A quick FAQ about NFTs (direct link) I thought I'd write up 4 technical questions about NFTs. They may not be the ones you ask, but they are the ones you should be asking. The questions: What does the token look like? How does it contain the artwork? (or, where is the artwork contained?) How are tokens traded? (How do they get paid? How do they get from one account to another?) What does the link from token to artwork mean? Does it give copyrights? I'm going to use 4 sample tokens that have been sold for outrageous prices as examples. #1 What does the token look like? An NFT token has a unique number, analogous to: your social security number (SSN#), your credit card number, the VIN# on your car, the serial number on a dollar bill, etc. This unique number is composed of two things: the contract number, identifying the contract that manages the token, and the unique token identifier within that contract. Here are some example tokens, listing the contract number (the long string) and token ID (the short number), as well as a link to a story on how much each sold for recently: 0x2a46f2ffd99e19a89476e2f62270e0a35bbf0756 - #40913 (Beeple $69m); 0xb47e3cd837dDF8e4c57F05d70Ab865de6e193BBB - #7804 ($7.6m CryptoPunks); 0x9fc4e38da3a5f7d4950e396732ae10c3f0a54886 - #1 (AP $180k); 0x06012c8cf97BEaD5deAe237070F9587f8E7A266d - #896775 ($170k CryptoKitty). With these two numbers, you can go find the token on the blockchain, and read the code to determine what the token contains, how it's traded, its current owner, and so on. #2 How do NFTs contain artwork? or, where is artwork contained? Tokens can't*** contain artwork -- art is too big to fit on the blockchain. That Beeple piece is 300 megabytes in size. Therefore, tokens point to artwork that is located somewhere other than on the blockchain. *** (footnote) This isn't actually true. It's just that it's very expensive to put artwork on the blockchain. That Beeple artwork would cost about $5 million to put onto the blockchain. Yes, this is less than a tenth of the purchase price of $69 million, but when you account for all the artwork for which people have created NFTs, the total exceeds the prices for all NFTs. So if artwork isn't on the blockchain, where is it located? And how do the NFTs link to it? Our four examples of NFTs mentioned above show four different answers to this question. Some are smart, others are stupid -- and by "stupid" I mean "tantamount to fraud". The correct way to link a token with a piece of digital art is through a hash, which can be used with th
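As a sketch of that hash-linking mechanism (using Python's standard hashlib; the file name here is hypothetical), this computes the kind of digest a token's metadata would embed:

import hashlib

# Sketch of linking a token to artwork by hash (hypothetical file name).
# Any change to the file changes the digest, so the link is tamper-evident.
def artwork_digest(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

print(artwork_digest("everydays.jpg"))  # the digest the metadata points at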
ErrataRob.webp 2021-03-20 23:52:47 Deconstructing that $69million NFT (direct link) "NFTs" have hit the mainstream news with the sale of an NFT-based digital artwork for $69 million. I thought I'd write up an explainer. Specifically, I deconstruct that huge purchase and show what actually was exchanged, down to the raw code. (The answer: almost nothing). The reason for this post is that every other description of NFTs describes what they pretend to be. In this blogpost, I drill down on what they actually are. Note that this example is about "NFT artwork", the thing that's been in the news. There are other uses of NFTs, which work very differently than what's shown here. tl;dr: I have a long bit of text explaining things. Here is the short form that allows you to drill down to the individual pieces: (1) Beeple created a piece of art in a file. (2) He created a hash that uniquely, and unhackably, identified that file. (3) He created a metadata file that included the hash of the artwork. (4) He created a hash of the metadata file. (5) He uploaded both files (metadata and artwork) to the IPFS darknet decentralized file-sharing service. (6) He created, or minted, a token governed by the MakersTokenV2 smart contract on the Ethereum blockchain. (7) Christie's created an auction for this token. (8) The auction was concluded with a payment of $69 million worth of Ether cryptocurrency. However, nobody has been able to find this payment on the Ethereum blockchain; the money was probably transferred through some private means. (9) Beeple transferred the token to the winner, who transferred it again to this final Metakovan account. Each of the links above allows you to drill down to exactly what's happening on the blockchain. The rest of this post discusses things in long form. Why do I care? Well, you don't. It makes you feel stupid that you haven't heard about it, when everyone is suddenly talking about it as if it's been a thing for a long time. But in reality, they didn't know what it was a month ago, either. Here is the Google Trends graph to prove this point -- interest has only exploded in the last couple of months. The same applies to me. I've been aware of them (since the CryptoKitties craze from a couple years ago) but haven't invested time reading source code until now. Much of this blogpost is written as notes as I discover for myself exactly what was purchased fo Tool Guideline
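The chain of hashes in the steps above can be sketched in a few lines of Python. This uses plain SHA-256 over stand-in content purely for illustration; real tokens typically rely on IPFS's own content addressing rather than a raw SHA-256 digest:

import hashlib, json

# Sketch of the hash chain described above (stand-in content; real
# tokens typically use IPFS content addressing, not plain SHA-256).
artwork = b"...300 megabytes of image data..."           # stand-in bytes
artwork_hash = hashlib.sha256(artwork).hexdigest()        # step 2: hash art

metadata = json.dumps({                                    # step 3: metadata
    "title": "EVERYDAYS: THE FIRST 5000 DAYS",
    "artwork_hash": artwork_hash,                          # embeds art hash
})
metadata_hash = hashlib.sha256(metadata.encode()).hexdigest()  # step 4

# The token on the blockchain stores only metadata_hash -- a few dozen
# bytes -- while the artwork and metadata live off-chain (e.g. on IPFS).
print("artwork :", artwork_hash)
print("metadata:", metadata_hash)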
ErrataRob.webp 2021-02-28 20:05:19 We are living in 1984 (ETERNALBLUE) (direct link) In the book 1984, the protagonist questions his sanity, because his memory differs from what appears to be everybody else's memory. The Party said that Oceania had never been in alliance with Eurasia. He, Winston Smith, knew that Oceania had been in alliance with Eurasia as short a time as four years ago. But where did that knowledge exist? Only in his own consciousness, which in any case must soon be annihilated. And if all others accepted the lie which the Party imposed -- if all records told the same tale -- then the lie passed into history and became truth. 'Who controls the past,' ran the Party slogan, 'controls the future: who controls the present controls the past.' And yet the past, though of its nature alterable, never had been altered. Whatever was true now was true from everlasting to everlasting. It was quite simple. All that was needed was an unending series of victories over your own memory. 'Reality control', they called it: in Newspeak, 'doublethink'. I know that EternalBlue didn't cause the Baltimore ransomware attack. When the attack happened, the entire cybersecurity community agreed that EternalBlue wasn't responsible. But this New York Times article said otherwise, blaming the Baltimore attack on EternalBlue. And there are hundreds of other news articles [eg] that agree, citing the New York Times. There are no news articles that dispute this. In a recent book, the author of that article admits it's not true, that EternalBlue didn't cause the ransomware to spread. But they defend the claim as essentially true, arguing that EternalBlue is responsible for a lot of bad things, even if, technically, not in this case. Such errors are justified, on the grounds that they are generalizations and simplifications needed for the mass audience. So we are left with the situation Orwell describes: all records tell the same tale -- when the lie passes into history, it becomes the truth. Orwell continues: He wondered, as he had many times wondered before, whether he himself was a lunatic. Perhaps a lunatic was simply a minority of one. At one time it had been a sign of madness to believe that the earth goes round the sun; today, to believe that the past is inalterable. He might be ALONE in holding that belief, and if alone, then a lunatic. But the thought of being a lunatic did not greatly trouble him: the horror was that he might also be wrong. I'm definitely a lunatic, alone in my beliefs. I sure hope I'm not wrong.
Update: Other lunatics document their struggles with Minitrue: When I was investigating the TJX breach, there were NYT articles citing unnamed sources that were made up & then outlets would publish citing the NYT. The TJX lawyers would require us to disprove the articles. Each time we would. It was maddening fighting lies for 8 months.— Nicholas J. Percoco (@c7five) March 1, 2021
Ransomware NotPetya Wannacry APT 32
ErrataRob.webp 2021-02-27 00:03:27 Review: Perlroth's book on the cyberarms market (direct link) New York Times reporter Nicole Perlroth has written a book on zero-days and nation-state hacking entitled “This Is How They Tell Me The World Ends”. Here is my review. I'm not sure what the book intends to be. The blurbs from the publisher imply a work of investigative journalism, in which case it's full of unforgivable factual errors. However, it reads more like a memoir, in which case errors are to be expected/forgivable, with content often from memory rather than rigorously fact-checked notes. But even with this more lenient interpretation, there are important flaws that should be pointed out. For example, the book claims the Saudis hacked Bezos with a zero-day. I claim that's bunk. The book claims zero-days are “God mode” compared to other hacking techniques; I claim they are no better than the alternatives, usually worse, and rarely used. Ransomware Guideline
ErrataRob.webp 2021-02-25 20:31:46 No, 1,000 engineers were not needed for SolarWinds (direct link) Microsoft estimates it would take 1,000 engineers to carry out the famous SolarWinds hacker attacks. This means in reality that it was probably fewer than 100 skilled engineers. I base this claim on the following Tweet: When asked why they think it was 1,000 devs, Brad Smith says they saw an elaborate and persistent set of work. Made an estimate of how much work went into each of these attacks, and asked their own engineers. 1,000 was their estimate.— Joseph Cox (@josephfcox) February 23, 2021 Yes, it would take Microsoft 1,000 engineers to replicate the attacks. But it takes a large company like Microsoft 10 times the effort to replicate anything. This is partly because Microsoft is a big, stodgy corporation. But it is mostly because this is a fundamental property of software engineering, where replicating something takes 10 times the effort of creating the original thing. It's like painting. The effort to produce a work is often less than the effort to reproduce it. I can throw some random paint strokes on canvas with almost no effort. It would take you an immense amount of work to replicate those same strokes -- even to figure out the exact color of paint that I randomly mixed together. Software engineering: The process of software engineering is about creating software that meets a certain set of requirements, or a specification. It is an extremely costly process to verify that the specification is correct. It's like if you build a bridge but forget a piece and the entire bridge collapses. But code slinging by hackers and open-source programmers works differently. They aren't building toward a spec. They are building whatever they can and whatever they want. It takes a tenth, or even a hundredth, of the effort of software engineering. Yes, it usually builds things that few people (other than the original programmer) want to use. But sometimes it produces gems that lots of people use. Take my most popular code-slinging effort, masscan. I've spent about 6 months of total effort writing it at this point. But if you run code analysis tools on it, they'll tell you that it would take several million dollars to replicate the amount of code I've written. And that's just measuring the bulk code, not the numerous clever capabilities and innovations in the code. According to these metrics, I'm either a 100x engineer (a hundred times better than the average engineer) or my claim is true that "code slinging" is a fraction of the effort of "software engineering". The same is true of everything the SolarWinds hackers produced. They didn't have to software-engineer code according to Microsoft's processes. They only had to sling code to satisfy their own needs. They don't have to train/hire engineers with the skills necessary to meet a specification; they can write the specification according to what their own engineers can produce. They can do whatever they want with the code because they don't have to satisfy somebody else's needs. Hacking: Something similar is true of hacking. Hacking a specific target, a specific way, is very hard. Hacking any target, any way, is easy. Like most well-known hackers, I regularly get those emails asking me to hack somebody's Facebook account. This is very hard. I can try a lot of things, and in the end, chances are I cannot succeed. On the other hand, if you ask me to hack anybody's Facebook account, I can do that in seconds. I can download one of the many ha Hack
ErrataRob.webp 2020-12-09 15:25:45 The deal with DMCA 1201 reform (direct link) There are two fights in Congress now against the DMCA, the "Digital Millennium Copyright Act". One is over Section 512 covering "takedowns" on the web. The other is over Section 1201 covering "reverse engineering", which weakens cybersecurity. Even before digital computers, since the 1880s, an important principle of cybersecurity has been openness and transparency ("Kerckhoff's Principle"). Only through making details public can security flaws be found, discussed, and fixed. This includes reverse-engineering to search for flaws. Cybersecurity experts have long struggled against the ignorant who hold the naive belief that we should instead cover up information, so that evildoers cannot find and exploit flaws. Surely, they believe, giving just anybody access to critical details of our security weakens it. The ignorant have little faith in technology, that it can be made secure. They have more faith in government's ability to control information. Technologists believe this information coverup hinders well-meaning people and protects the incompetent from embarrassment. When you hide information about how something works, you prevent people on your own side from discovering and fixing flaws. It also means that you can't hold people accountable for their security, since it's impossible to notice security flaws until after they've been exploited. At the same time, the information coverup does not do much to stop evildoers. Technology can work, it can be perfected, but only if we can search for flaws. It seems counterintuitive that revealing your encryption algorithms to your enemy is the best way to secure them, but history has proven time and again that this is indeed true. Encryption algorithms your enemy cannot see are insecure. The same is true of the rest of cybersecurity. Today, I'm composing and posting this blogpost securely from a public WiFi hotspot because the technology is secure. It's secure because of two decades of security researchers finding flaws in WiFi, publishing them, and getting them fixed. Yet in the year 1998, ignorance prevailed with the "Digital Millennium Copyright Act". Section 1201 makes reverse-engineering illegal. It attempts to secure copyright not through strong technological means, but by the heavy hand of government punishment. The law was not completely ignorant. It includes an exception to allow what it calls "security testing" -- in theory. But that exception does not work in practice, imposing too many conditions on such research to be workable. The U.S. Copyright Office has authority under the law to add its own exemptions every 3 years. It has repeatedly added exceptions for security research, but the process is unsatisfactory. It's a protracted political battle every 3 years to get the exception back on the list, and each time it can change slightly. These exemptions are still less than what we want. This causes a chilling effect on permissible research. It would be better if such exceptions were put directly into the law. You can understand the nature of the debate by looking at those on each side. Those lobbying for the exceptions are those trying to make technology more secure, such as Rapid7, Bugcrowd, Duo Security, Luta Security, and HackerOne.
These organizations have no interest in violating copyright -- their only concern is cybersecurity, finding and fixing flaws. The opposing side includes the copyright industry, as you'd expect, such as the "DVD" association which doesn't want hackers breaking the DRM on DVDs. However, much of the opposing side has nothing to do with copyright as such. This notably includes the three major voting machine suppliers in the United States: Dominion Voting, ES&S, and Hart InterCivic. Security professionals have been pointing out security flaws in their equipment for the past several years. These vendors are explicitly trying to cover up their security flaws by using the law to silence critics. This goes back to the struggle mentioned at the top of this Guideline
ErrataRob.webp 2020-10-25 21:05:55 Why Biden: Principle over Party (direct link) There exist many #NeverTrump Republicans who agree that while Trump would best achieve their Party's policies, he must nonetheless be opposed on Principle. The Principle in question isn't that Trump is a liar, a misogynist, a racist, or of low character (though all these are true). Instead, the Principle is that he's a populist autocrat who is eroding our liberal institutions ("liberal" as in the classical sense). Countries don't fail when there's a leftward shift in government policies. Many prosperous, peaceful European countries are to the left of Biden. What makes prosperous countries fail is when civic institutions break down, when a party or dear leader starts ruling by decree, as in the European countries of Russia or Hungary. Our system of government is like football. While the teams (parties) compete vigorously against each other, they largely respect the rules of the game, both written and unwritten traditions. They respect each other -- while doing their best to win (according to the rules), they nonetheless shake hands at the end of the match, and agree that their opponents are legitimate. The rules of the sport we are playing are described in the Wikipedia page on "liberal democracy". Sport matches can be enjoyable even if you don't understand the rules. The same is true of liberal democracy: there's little civic education in the country, so most don't know the rules of the game. Most are unaware even that there are rules. You see that in action with this concern over Trump conceding the election, his unwillingness to commit to a "peaceful transfer of power". His supporters widely believed this was a made-up controversy, a "principle" created on the spot as just another way to criticize Trump. But it's not a new principle. A "peaceful transfer of power" is the #1 bedrock principle from which everything else derives. It's the first way we measure whether a country is actually the "liberal democracy" it claims to be. For example, the fact that Putin has been in power for 20 years makes us doubt that Russia is really the "liberal democracy" it claims to be. The reason you haven't heard of it, the reason it isn't discussed much, is that it's so unthinkable that a politician would reject it the way Trump has. The historic importance of this principle can be seen when you go back and read the concession speeches of Hillary, McCain, Gore, Bush Sr., and Carter: all of them stressed the legitimacy of their opponent's win, and a commitment to a peaceful transfer of power. (It goes back further than that, to the founding of our country, but I can't link every speech). The following quote from Hillary's concession to Trump demonstrates this principle: But I still believe in America and I always will. And if you do, then we must accept this result and then look to the future. Donald Trump is going to be our president. We owe him an open mind and the chance to lead. Our constitutional democracy enshrines the peaceful transfer of power and we don't just respect that, we cherish it. It also enshrines other things; the rule of law, the principle that we are all equal in rights and dignity, freedom of worship and expression. We respect and cherish these values too and we must defend them. If this were Trump's only failure, then we could excuse it and work around it. As long as he defended all Guideline
ErrataRob.webp 2020-10-16 22:59:01 No, that's not how warranty expiration works (direct link) The NYPost Hunter Biden story has triggered a lot of sleuths obsessing over technical details, trying to prove it's a hoax. So far, these claims are wrong. The story is certainly bad journalism aiming to misinform readers, but it has not yet been shown to be a hoax. In this post, we look at the claim that the timelines don't match up with the manufacturing dates of the drives. Sleuths claim to prove the drives were manufactured after the events in question, based on serial numbers. What this post will show is that the theory is wrong. Manufacturers pad warranty periods. Thus, you can't assume a date of manufacture based upon the end of a warranty period. The story starts with Hunter Biden (or associates) dropping off a laptop at a repair shop because of water damage. The repair shop made a copy of the laptop's hard drive, stored on an external drive. Later, the FBI swooped in and confiscated both the laptop and that external drive. The serial numbers of both devices are listed in the subpoena published by the NYPost: You can enter these serial numbers in the support pages at Apple (FVFXC2MMHV29) and Western Digital (WX21A19ATFF3) to discover precisely what hardware this is, and when the warranty periods expire -- and presumably, when they started. In the case of that external drive, the 3-year warranty expires May 17, 2022 -- meaning the drive was manufactured on May 17, 2019 (or so they claim). This is a full month after the claimed date of April 12, 2019, when the laptop was dropped off at the repair shop. There are lots of explanations for this. One of them is that the drive subpoenaed by the government (on Dec 9, 2019) was a copy of the original drive. But a simpler explanation is this: warranty periods are padded by the manufacturer by several months. In other words, if the warranty ends May 17, it means the drive was probably manufactured in February. I can prove this. Coincidentally, I purchased a Western Digital drive a few days ago. If we used the same logic as above to work backward from warranty expiration, it would mean the drive was manufactured 7 days in the future. Here is a screenshot from Amazon.com showing I purchased the drive Oct 12.
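The sleuths' date arithmetic, and its flaw, can be made explicit with a small Python sketch (the padding figure below is an assumption based on the post's own Amazon example, not a published spec):

from datetime import date

# Sketch of the sleuths' (flawed) inference: subtract the warranty term
# from the expiration date to get a supposed manufacture date.
warranty_expires = date(2022, 5, 17)
warranty_years = 3
inferred_manufacture = warranty_expires.replace(
    year=warranty_expires.year - warranty_years)
print("inferred manufacture:", inferred_manufacture)   # 2019-05-17

# The flaw: vendors pad the warranty period. With ~3 months of padding
# (an assumed figure), the drive was really made months earlier --
# comfortably before the April 12, 2019 drop-off date.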
ErrataRob.webp 2020-10-16 17:45:28 No, font errors mean nothing in that NYPost article (direct link) The NYPost has an article on Hunter Biden emails. Critics claim that these don't look like emails, and that there are errors with the fonts, thus showing they are forgeries. This is false. This is how Apple's "Mail" app prints emails to a PDF file. The font errors are due to viewing PDF files within a web browser -- you don't see them in a PDF app. In this blogpost, I prove this. I'm going to do this by creating a forged email. The point isn't to prove the email wasn't forged; it could easily have been -- the NYPost didn't do the due diligence to prove the emails weren't forged. The point is simply that these inexplicable problems aren't evidence of forgery. All emails printed by the Mail app to a PDF, then displayed with Scribd, will look the same way. To start with, we are going to create a simple text file on the computer called "erratarob-conspire.eml". That's what email messages are at the core -- text files. I use Apple's "TextEdit" app on my MacBook to create the file. The structure of an email is simple. It has a block of "metadata" consisting of fields, each name separated from its value by a colon ":" character. This block ends with a blank line, after which we have the contents of the email. Clicking on the file launches Apple's "Mail" app. It opens the email and renders it on the screen like this: Notice how the "Mail" app has reformatted the metadata. In addition to displaying the email, it's making it simple to click on the names to add them to your address book. That's why there is a (VP) to the right on the screen -- it creates a placeholder icon for every account in your address book. One thing I can do with emails is to save them as a PDF document. This creates a PDF file on the disk that we can view like any other PDF file. Note that yet again, the app has reformatted the metadata, differently from both how it displayed it on the screen and how it appears in the original email text.
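A minimal sketch of that structure, using Python's standard email module and hypothetical addresses -- colon-separated metadata fields, a blank line, then the body:

from email.message import EmailMessage
from email import message_from_string

# A minimal .eml file is just text: "Name: value" metadata lines,
# a blank line, then the body. (Addresses here are hypothetical.)
raw = """\
From: alice@example.com
To: bob@example.com
Subject: lunch

Meet at noon?
"""

msg = message_from_string(raw)
print(msg["Subject"])          # -> lunch
print(msg.get_payload())       # -> Meet at noon?

# Building the same message programmatically:
out = EmailMessage()
out["From"] = "alice@example.com"
out["To"] = "bob@example.com"
out["Subject"] = "lunch"
out.set_content("Meet at noon?")
print(out.as_string())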
ErrataRob.webp 2020-10-14 19:34:25 Yes, we can validate leaked emails (direct link) When emails leak, we can know whether they are authentic or forged. It's the first question we should ask of today's leak of Hunter Biden emails. It has a definitive answer. Today's emails have "cryptographic signatures" inside the metadata. Such signatures have been common for the past decade as one way of controlling spam, to verify the sender is who they claim to be. These signatures verify not only the sender, but also that the contents have not been altered. In other words, they authenticate the document, who sent it, and when it was sent. Crypto works. The only way to bypass these signatures is to hack into the servers. In other words, when we see a 6-year-old message with a valid Gmail signature, we know either (a) it's valid or (b) they hacked into Gmail to steal the signing key. Since (b) is extremely unlikely -- and if they could hack Google, they could do a ton of more important stuff with the information -- we have to assume (a). Your email client normally hides this metadata from you, because it's boring and humans rarely want to see it. But it's still there in the original email document. An email message is simply a text document consisting of metadata followed by the message contents. It takes no special skills to see metadata. If the person has enough skill to export the email to a PDF document, they have enough skill to export the email source. If they can upload the PDF to Scribd (as in the story), they can upload the email source. I show how to below. To show how this works, I send an email using Gmail to my private email server (from gmail.com to robertgraham.com). The NYPost story shows the email printed as a PDF document. Thus, I do the same thing when the email arrives on my MacBook, using the Apple "Mail" app. It looks like the following: The "raw" form originally sent from my Gmail account is simply a text document that looked like the following: This is rather simple. Clients insert details like a "Message-ID" that humans don't care about. There are also internal formatting details, like the fact that this is a "plain text" message rather than an "HTML" email. But this raw document was the one sent by the Gmail web client. It then passed through Gmail's servers, then was passed across the Internet to my private server, where I finally retrieved it using my MacBook. As email messages pass through servers, the servers add their own metadata. When it arrived, the "raw" document looked like the following. None of the important bits changed, but a lot more metadata was added: Hack Guideline
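For the technically inclined, checking one of these DKIM signatures is nearly a one-liner with the third-party dkimpy package (assumptions: you have the raw message bytes with headers intact, and the sender's public key is still published in DNS):

# Verifying a DKIM signature with the third-party "dkimpy" package
# (pip install dkimpy). The message must be raw bytes with all headers
# intact -- a PDF printout won't do.
import dkim

with open("leaked-message.eml", "rb") as f:   # hypothetical file name
    raw = f.read()

if dkim.verify(raw):
    print("signature valid: headers and body unaltered since sending")
else:
    print("signature invalid or missing")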
ErrataRob.webp 2020-10-08 21:44:25 Factcheck: Regeneron's use of embryonic stem cells (direct link) This week, Trump's opponents misunderstood a Regeneron press release and concluded that the REGN-COV2 treatment (which may have saved his life) was created from stem cells. When that was proven false, his opponents nonetheless deliberately misinterpreted events to conclude there was still an ethical paradox. I've read the scientific papers and it seems like this is an issue that can be understood with basic high-school science, so I thought I'd write up a detailed discussion. The short answer is this: (1) The drug is not manufactured in any way from human embryonic tissues. (2) The drug was tested using fetal/embryonic cells, but ones almost 50 years old, not new ones. (3) Republicans want to stop using new embryos; the ethical issue here is the continued use of old embryos, which Republicans have consistently agreed to. (4) Yes, the drug is still tainted by the "embryonic stem cell" issue -- just not in any of the ways that people claim it is, and not in a way that makes Republicans inconsistent. (5) Almost all medical advances of the last few decades are similarly tainted. Now let's do the long, complicated answer. This starts with a discussion of the science of Regeneron's REGN-COV2 treatment. A well-known treatment that goes back decades is to take blood plasma from a recently recovered patient and give it to a recently infected patient. Blood plasma is what remains when the blood cells are removed, leaving behind water, salts, other particles, and most importantly, "antibodies". This is the technical concept behind the movie "Outbreak", though of course they completely distort the science. Antibodies are produced by the immune system to recognize and latch onto foreign things, including viruses (the rest of this discussion assumes "viruses"). They either deactivate the virus particle, or mark it to be destroyed by other parts of the immune system, or both. After an initial infection, it takes a while for the body to produce antibodies, allowing the disease to rage unchecked. A massive injection of antibodies during this time allows the disease to be stopped before it gets very far, letting the body's own defenses catch up. That's the premise behind Trump's treatment. An alternative to harvesting natural antibodies from recently recovered patients is to manufacture artificial antibodies using modern science. That's what Regeneron did. An antibody is just another "protein", the building block of the body. The protein is in the shape of a Y, with the two upper tips formed to lock onto the corresponding parts of a virus ("antigens"). Every new virus requires a new antibody with different tips. The SARS-CoV-2 virus has these "spike" proteins on its surface that allow it to invade the cells in our lungs. They act like a crowbar, jamming themselves into the cell wall, then opening up a hole to allow the rest of the virus inside. Since this is the important and unique prote Guideline
ErrataRob.webp 2020-07-19 17:07:57 How CEOs think (direct link) Recently, Twitter was hacked. CEOs who read about this in the news ask how they can protect themselves from similar threats. The following tweet expresses our frustration with CEOs: they don't listen to their own people, but instead want to buy a magic pill (a product) or listen to outside consultants (like Gartner). In this post, I describe how CEOs actually think. CEO : "I read about that Twitter hack. Can that happen to us?" Security : "Yes, but ..." CEO : "What products can we buy to prevent this?" Security : "But ..." CEO : "Let's call Gartner." *sobbing sounds* - Wim Remes (@wimremes) July 16, 2020 The only thing more broken than how CEOs view cybersecurity is how cybersecurity experts view cybersecurity. We have this flawed view that cybersecurity is a moral imperative, that it's an aim in itself. We are convinced that people are wrong for not taking security seriously. This isn't true. Security isn't a moral issue but simple cost vs. benefits, risk vs. rewards. Taking risks is more often the correct answer rather than having more security. Rather than experts dispensing unbiased advice, we've become advocates/activists, trying to convince people that they need to do more to secure things. This activism has destroyed our credibility in the boardroom. Nobody thinks we are honest. Much of our advice is actually about internal political battles. CEOs trust outside consultants mostly because outsiders don't have a stake in internal politics. Thus, the consultant can say the same thing as what you say, but be trusted. CEOs view cybersecurity the same way they view everything else about building the business, from investment in office buildings, to capital equipment, to HR policies, to marketing programs, to telephone infrastructure, to law firms, to .... everything. They divide their business into two parts: The first is the part they do well, the thing they are experts at, the things that define who they are as a company, their competitive advantage. The second is everything else, the things they don't understand. For the second part, they just want to be average in their industry, or at best, slightly above average. They want their manufacturing costs to be about average. They want the salaries paid to employees to be about average. They want the same video conferencing system as everybody else. Everything outside of core competency is average. I can't express this enough: if it's not their core competency, then they don't want to excel at it. Excelling at a thing comes with a price. They have to pay people more. They have to find the leaders with proven track records at excelling at it. They have to manage excellence. This goes all the way to the top. If it's something the company is going to excel at, then the CEO at the top has to have enough expertise themselves to understand who the best leaders are who can accomplish this goal. The CEO can't hire an excellent CSO unless they have enough competency to judge the qualifications of the CSO, and enough competency to hold the CSO accountable for the job they are doing. All this is a tradeoff. A focus of attention on one part of the business means less attention on other parts of the business. If your company excels at cybersecurity, it means not excelling at some other part of the business. So unless you are a company like Google, whose cybersecurity is a competitive advantage, you don't want to excel in cybersecurity. You want to be Ransomware Guideline NotPetya
ErrataRob.webp 2020-07-13 19:22:41 In defense of open debate (direct link) Recently, Harper's published a Letter on Justice and Open Debate. It's a rather boring defense of liberalism and the norm of tolerating differing points of view. Mike Masnick wrote a rebuttal on Techdirt. In this post, I'm going to rebut his rebuttal, writing a counter-counter-argument. The Letter said that the norms of liberalism tolerate disagreement, and that these norms are under attack by increasing illiberalism on both sides, both the left and the right. My point is this: Masnick avoids rebutting the letter. He's recycling his arguments against right-wingers who want their speech coddled, rather than addressing the concerns of (mostly) left-wingers worried about the fanaticism on their own side. Free speech: Masnick mentions "free speech" 19 times in his rebuttal -- but the term does not appear in the Harper's letter, not even once. This demonstrates my thesis that his rebuttal misses the point. The term "free speech" has lost its meaning. It's no longer useful for such conversations. Left-wingers want media sites like Facebook, YouTube, and the New York Times to remove "bad" speech, like right-wing "misinformation". But, as we've been taught, censoring speech is bad. Therefore, "censoring free speech" has to be redefined so as to not include the above effort. The redefinition claims that the term "free speech" now applies to governments, but not private organizations -- that stopping free speech happens only when state power or the courts are involved. In other words, "free speech" is supposed to equate with the "First Amendment", which really does only apply to government ("Congress shall make no law ... abridging the freedom of speech"). That this is false is demonstrated by things like the murders of Charlie Hebdo cartoonists for depicting Muhammad. We all agree this incident is a "free speech" issue, but no government was involved. Right-wingers agree to this new definition, sort of. In much the same way that left-wingers want to narrow "free speech" to mean only the First Amendment, right-wingers want to expand the "First Amendment" to mean protecting "free speech" against interference by both government and private platforms. They argue that platforms like Facebook have become so pervasive that they have become the "public square", and thus occupy the same space as government. They therefore want regulations that coddle their speech, preventing their content from being removed. The term "free speech" is therefore no longer useful in explaining the argument, because it has become the argument. The Letter avoids this play on words. It's not talking about "free speech", but the "norms of open debate and toleration of differences". It claims first of all that we have a liberal tradition where we tolerate differences of opinion and debate these opinions openly. It claims secondly that these norms are weakening, citing an "intolerant climate that has set in on all sides". In other words, those who attacked the NYTimes for publishing the Tom Cotton op-ed are criticized as demonstrating illiberalism and intolerance. This has nothing to do with whatever arguments you have about "free speech". Private platforms: Masnick's free speech argument continues that you can't force speech upon private platforms like the New York Times.
They have the freedom to make their own editorial decisions about what to publish, choosing some things, rejecting others. It's a good argument, but one that targets the arguments made by right-wingers hostile to the New York Times, and not arguments made by th
ErrataRob.webp 2020-06-16 18:38:07 Apple ARM Mac rumors (direct link) The latest rumor is that Apple is going to announce Macintoshes based on ARM processors at their developer conference. I thought I'd write up some perspectives on this. It's different this time: This would be Apple's fourth transition. Their original Macintoshes in 1984 used Motorola 68000 microprocessors. They moved to IBM's PowerPC in 1994, then to Intel's x86 in 2005. However, this history is almost certainly the wrong way to look at the situation. In those days, Apple had little choice. Each transition happened because the processor they were using was failing to keep up with technological change. They had no choice but to move to a new processor. This no longer applies. Intel's x86 is competitive on both speed and power efficiency. It's not going away. If Apple transitions away from x86, they'll still be competing against x86-based computers. Other companies have chosen to adopt both x86 and ARM, rather than one or the other. Microsoft's "Surface Pro" laptops come in either x86 or ARM versions. Amazon's AWS cloud servers come in either x86 or ARM versions. Google's Chromebooks come in either x86 or ARM versions. Instead of ARM replacing x86, Apple may be attempting to provide both as options, possibly an ARM CPU for cheaper systems and an x86 for more expensive and more powerful systems. ARM isn't more power-efficient than x86: Every news story, every single one, is going to repeat the claim that ARM chips are more power-efficient than Intel's x86 chips. Some will claim it's because they are RISC whereas Intel is CISC. This isn't true. RISC vs. CISC was a principle in the 1980s when chips were so small that instruction set differences meant architectural differences. Since 1995, with "out-of-order" processors, the instruction set has been completely separated from the underlying architecture. At most, instruction set differences account for no more than 5% of the difference in processor performance or efficiency. Mobile chips consume less power by simply being slower. When you scale mobile ARM CPUs up to desktop speeds, they consume the same power as desktops. Conversely, when you scale Intel x86 processors down to mobile power consumption levels, they are just as slow. You can test this yourself by comparing Intel's mobile-oriented "Atom" processor against ARM processors in the Raspberry Pi. Moreover, the CPU accounts for only a small part of overall power consumption. Mobile platforms care more about the graphics processor or video acceleration than they do about the CPU. Large differences in CPU efficiency mean small differences in overall platform efficiency. Apple certainly balances its chips so they work better in phones than an Intel x86 would, but these tradeoffs mean they'd work worse in laptops. While overall performance and efficiency will be similar, specific applications will perform differently. Thus, when ARM Macintoshes arrive, people will choose just the right benchmarks to "prove" their inherent superiority. It won't be true, but everyone will believe it to be true. No longer a desktop company: Venture capitalist Mary Meeker produces yearly reports on market trends. The desktop computer market has been stagnant for over a decade in the face of mobile growth. The Macintosh is only 10% of Apple's business -- so little that they could abandon the business without noticing a difference. This means investing in the Macintosh business is a poor business decision. Such investment isn't going to produce growth.
Investing in a major transition from x86 to ARM is therefore stupid -- it'll cost a lot of money without generating any return. In particular, despite having a mobile CPU for their iPhone, they still don't have a CPU optimized for laptops and desktops. The Macintosh market is just too small to fund Guideline
ErrataRob.webp 2020-05-31 17:35:03 What is Boolean? (direct link) My mother asks the following question, so I'm writing up a blogpost in response. I am watching a George Boole bio on Prime but still don't get it. I started watching the first few minutes of the "Genius of George Boole" on Amazon Prime, and it was garbage. It's the typical content that's been dumbed down so much that any useful content has been removed. It's the typical sort of hero-worshipping biography that credits the subject with everything it plausibly can. Boole was a mathematician who tried to apply the concepts of math to statements of "true" and "false", rather than numbers like 1, 2, 3, 4, ... He also did a lot of other mathematical work, but it's this work that continues to bear his name ("boolean logic" or "boolean algebra"). But what we know today as "boolean algebra" was really developed by others. They named it after him, but really all the important stuff was developed later. Moreover, the "1" and "0" of binary computers aren't precisely the same thing as the "true" and "false" of boolean algebra, though there is considerable overlap. Computers are built from things called "transistors" which act as tiny switches, able to turn "on" or "off". Thus, we have the same two-value system as "true" and "false", or "1" and "0". Computers represent any number using "base two" instead of the "base ten" we are accustomed to. The "base" of a number representation is the number of distinct digits. Which base we use is purely arbitrary. The Babylonians had a base-60 system, computers use base 2, but the math we humans use is base 10, probably because we have 10 fingers. We use a "positional" system. When we run out of digits, we put a '1' on the left side and start over again. Thus, "10" always represents the base itself. If it's base 8, then once you run out of the first eight digits 01234567, you wrap around and start again with "10", which is the value of eight in base 8. This is in contrast to something like the non-positional Roman numerals, which had symbols for ten (X), hundred (C), and thousand (M). A binary number is a string of 1s and 0s in base two. The number fifty-three, in binary, is 110101. Computers can perform normal arithmetic computations on these numbers, like addition (+), subtraction (−), multiplication (×), and division (÷). But there are also binary arithmetic operations we can do on them, like not (¬), or (∨), xor (⊕), and (∧), shift-left (<<), and shift-right (>>). That's what we refer to when we say "boolean" arithmetic. Let's take a look at the and operation. The and operator means that if both the left and right bits are 1, then the result is 1, but 0 otherwise. In other words: 0 ∧ 0 = 0; 0 ∧ 1 = 0; 1 ∧ 0 = 0; 1 ∧ 1 = 1. There are similar "truth tables" for the other operators. While the simplest form of such operators works on individual bits, they are more often applied to larger numbers containing many bits, many base-two binary digits. For example, we might have two 8-bit numbers and apply the and operator: 01011100 ∧ 11001101 = 01001100. The result is obtained by applying and to each pair of matching bits in the two numbers. Both numbers have a '1' as the second bit from the left, so the final result has a '1' in that position. Normal arithmetic computations are built from these binary operations. You can show how a sequence of and and or operations can combine to add two numbers. The entire computer chip is built from sequences of these binary operations -- billions and billions of them.
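The same operations in executable form -- a small Python sketch; note that Python spells the bitwise operators differently (& for and, | for or, ^ for xor):

# The blogpost's example, using Python's bitwise operators
# (& is "and", | is "or", ^ is "xor", << and >> are shifts).
a = 0b01011100
b = 0b11001101

print(f"{a & b:08b}")    # 01001100 -- matches the worked example above
print(f"{a | b:08b}")    # 11011101
print(f"{a ^ b:08b}")    # 10010001
print(f"{a << 1:09b}")   # shift-left: 010111000
print(bin(53))           # 0b110101 -- fifty-three in base two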
ErrataRob.webp 2020-05-19 18:03:23 Securing work-at-home apps (lien direct) In today's post, I answer the following question: Our customer's employees are now using our corporate application while working from home. They are concerned about security, protecting their trade secrets. What security feature can we add for these customers? The tl;dr answer is this: don't add gimmicky features, but instead take this opportunity to do security things you should already be doing, starting with a "vulnerability disclosure program" or "vuln program". Gimmicks: First of all, I'd like to discourage you from adding security gimmicks to your product. You are no more likely to come up with an exciting new security feature on your own than you are to come up with a miracle cure for COVID. Your sales and marketing people may get excited about the feature, and they may get the customer excited about it too, but the excitement won't last. Eventually, the customer's IT and cybersecurity teams will be brought in. They'll quickly identify your gimmick as snake oil, and you'll have made an enemy of them. They are already involved in securing the server side, the work-at-home desktop, the VPN, and all the other network essentials. You don't want them as your enemy, you want them as your friend. You don't want to send your salesperson into the maw of a technical meeting at the customer's site trying to defend the gimmick. You want to take the opposite approach: do something that the decision maker on the customer side won't necessarily understand, but which their IT/cybersecurity people will get excited about. You want them in the background as your champion rather than as your opposition. Vulnerability disclosure program: To accomplish the goal described above, the thing you want is known as a vulnerability disclosure program. If there's one thing that the entire cybersecurity industry agrees on (other than hating the term cybersecurity, preferring "infosec" instead), it's that you need this vulnerability disclosure program. Everything else you might want to do to add security features to your product comes after you have this thing. Your product has security bugs, known as vulnerabilities. This is true of everyone, no matter how good you are. Apple, Microsoft, and Google employ the brightest minds in cybersecurity and they have vulnerabilities. Every month you update their products with the latest fixes for these vulnerabilities. I just bought a new MacBook Air and it's already telling me I need to update the operating system to fix the bugs found after it shipped. These bugs come mostly from outsiders. These companies have internal people searching for such bugs, as well as consultants, and do a good job quietly fixing what they find. But this goes only so far. Outsiders have a wider set of skills and perspectives than the companies could ever hope to control themselves, so they find things that the companies miss. These outsiders are often not customers. This has been a chronic problem throughout the history of computers. Somebody calls up your support line and tells you there's an obvious bug that hackers can easily exploit. The customer support representative then ignores this because they aren't a customer.
It's foolish wasting time adding features to a product that no customer is asking for. But then this bug leaks out to the public, hackers widely exploit it, damaging customers, and angry customers now demand to know why you did nothing to fix the bug despite having been notified about it. The problem here is that nobody has the job of responding to such problems. The reason your company dropped the ball was that nobody was assigned to pick it up. All a vulnerability disclosure program means is that at least one person within the company has the responsibility of dealing with it. How to set up a vulnerability disclosure program Spam Vulnerability Threat Guideline ★★★
ErrataRob.webp 2020-05-13 15:31:34 CISSP is at most equivalent to a 2-year associates degree (lien direct) There are few college programs for "cybersecurity". Instead, people rely upon industry "certifications", programs that attempt to certify a person has the requisite skills. The most popular is known as the "CISSP". In the news today, European authorities decided a "CISSP was equivalent to a master's degree". I think this news is garbled. Looking into the details, studying things like "UK NARIC RQF level 11", it seems instead that equivalency isn't with master's "degrees" so much as with post-graduate professional awards and certifications that are common in industry. Even then, it places CISSP at too high a level: it's an entry-level certification that doesn't require a college degree, and teaches students only familiarity with buzzwords used in the industry rather than a deeper level of understanding of how things work. Recognition of equivalent qualifications and skills: The outrage over this has been "equivalent to a master's degree". I don't think this is the case. Instead, it seems "equivalent to professional awards and recognition". The background behind this is how countries recognize "equivalent" work done in other countries. For example, a German Diplom from a university is a bit more than a U.S. bachelor's degree, but a bit less than a U.S. master's degree. How, then, do you find an equivalent between the two? Part of this is occupational, vocational, and professional awards, certifications, and other forms of recognition. A lot of practical work experience is often equivalent to, and even better than, academic coursework. The press release here discusses the UK's NARIC RQF framework, putting the CISSP at level 11. This makes it equivalent to post-graduate coursework and various forms of professional recognition. I'm not sure it means it's the same as a "master's degree". At RQF level 11, there is a fundamental difference between an "award" requiring up to 120 hours of coursework, a "certificate", and a "degree" requiring more than 370 hours of coursework. Assuming everything else checks out, this would place the CISSP at the "award" level, not a "certificate" or "degree" level. The question here is whether the CISSP deserves recognition along with other professional certifications. Below I will argue that it doesn't. Superficial, not technical: The CISSP isn't a technical certification. It covers all the buzzwords in the industry so you know what they refer to, but doesn't explain how anything works. You are tested on the definition of the term "firewall" but you aren't tested on any detail about how firewalls work. This has an enormous impact on the cybersecurity industry, with hordes of "certified" professionals who are nonetheless non-technical, not knowing how things work. This places the CISSP clearly at some lower RQF level. The "RQF level 11" is reserved for people with a superior understanding of how things work, whereas the CISSP is really an entry-level certification. No college degree required: The other certifications at this level tend to require a college degree. They are a refinement of what was learned in college. The opposite is true of the CISSP. It requires no college degree. Now, I'm not a fan of college degrees. Idiots seem capable of getting such degrees without understanding the content, so they are not a good badge of expertise. But at least the majority of college programs take students deeper into understanding the theory of how things work rat Guideline
ErrataRob.webp 2020-04-02 01:23:55 About them Zoom vulns... (lien direct) Today a couple of vulnerabilities were announced in Zoom, the popular work-from-home conferencing app. Hackers can possibly exploit these to do evil things to you, such as steal your password. Because of COVID-19, these vulns have hit the mainstream media. This means my non-techy friends and relatives have been asking about it. I thought I'd write up a blogpost answering their questions. The short answer is that you don't need to worry about it. Unless you do bad things, like using the same password everywhere, it's unlikely to affect you. You should worry more about wearing pants on your Zoom video conferences in case you forget and stand up. Now is a good time to remind people to stop using the same password everywhere and to visit https://haveibeenpwned.com to view all the accounts where they've had their password stolen. Using the same password everywhere is the #1 vulnerability the average person is exposed to, and is a possible problem here. For critical accounts (Windows login, bank, email), use a different password for each. (Sure, for accounts you don't care about, use the same password everywhere; I use 'Foobar1234'.) Write these passwords down on paper and put that paper in a secure location. Don't print them, don't store them in a file on your computer. Writing it on a Post-It note taped under your keyboard is adequate security if you trust everyone in your household. If hackers use this Zoom method to steal your Windows password, then you aren't in much danger. They can't log into your computer because it's almost certainly behind a firewall. And they can't use the password on your other accounts, because it's not the same. Why you shouldn't worry: The reason you shouldn't worry about this password-stealing problem is because it's everywhere, not just Zoom. It's also here in this browser you are using. If you click on file://hackme.robertgraham.com/foo/bar.html, then I can grab your password in exactly the same way as if you clicked on that vulnerable link in Zoom chat. That's how the Zoom bug works: hackers post these evil links in the chat window during a Zoom conference. It's hard to say Zoom has a vulnerability when so many other applications have the same issue. Many home ISPs block such connections to the Internet, such as Comcast, AT&T, Cox, Verizon Wireless, and others. If this is the case, when you click on the above link, nothing will happen. Your computer will try to contact hackme.robertgraham.com, and fail. You may be protected from clicking on the above link without doing anything. If your ISP doesn't block such connections, you can configure your home router to do this. Go into the firewall settings and block "TCP port 445 outbound". Alternatively, you can configure Windows to only follow such links internal to your home network, but not to the Internet. If hackers (like me, if you click on the above link) get your password, then they probably can't use it. That's because while your home Internet router allows outbound connections, it (almost always) blocks inbound connections. Thus, if I steal your Windows password, I can't use it to log into your home computer unless I also break physically into your house. But if I can break into your computer physically, I can hack it without knowing your password. The same arguments apply to corporate desktops. Corporations should block such outbound connections. They Hack Vulnerability Threat
ErrataRob.webp 2020-03-06 15:57:01 Huawei backdoors explanation, explained (lien direct) Today Huawei published a video explaining the concept of "backdoors" in telco equipment. Many are criticizing the video for being tone deaf. I don't understand this concept of "tone deafness". Instead, I want to explore the facts. Does the word “#backdoor” seem frightening? That's because it's often used incorrectly – sometimes to deliberately create fear. Watch to learn the truth about backdoors and other types of network access. #cybersecurity pic.twitter.com/NEUXbZbcqw - Huawei (@Huawei) March 4, 2020 This video seems to be in response to last month's story about Huawei misusing law enforcement backdoors from the Wall Street Journal. All telco equipment has backdoors usable only by law enforcement; the accusation is that Huawei has a backdoor into this backdoor, so that Chinese intelligence can use it. That story was bogus. Sure, Huawei is probably guilty of providing backdoor access to the Chinese government, but something is deeply flawed with this particular story. We know something is wrong with the story because the U.S. officials cited are anonymous. We don't know who they are or what position they have in the government. If everything they said was true, they wouldn't insist on being anonymous, but would stand up and declare it in a press conference so that every newspaper could report it. When something is not true or spun, then they anonymously "leak" it to a corrupt journalist to report it their way. This is objectively bad journalism. The Society of Professional Journalists calls this the "Washington Game". They also discuss this on their Code of Ethics page. Yes, it's really common in Washington D.C. reporting, you see it all the time, especially with the NYTimes, Wall Street Journal, and Washington Post. But it happens because what the government says is news, regardless of whether it's false or propaganda, giving government officials the ability to influence journalists. Exclusive access to corrupt journalists is how they influence stories. We know the reporter is being especially shady because of the one quote in the story that is attributed to a named official: “We have evidence that Huawei has the capability secretly to access sensitive and personal information in systems it maintains and sells around the world,” said national security adviser Robert O'Brien. This quote is deceptive because O'Brien doesn't say any of the things that readers assume he's saying. He doesn't actually confirm any of the allegations in the rest of the story. It doesn't say: that Huawei has used that capability; that Huawei intentionally put that capability there; or that this is special to Huawei (rather than everywhere in the industry). In fact, this quote applies to every telco equipment maker. They all have law enforcement backdoors. These backdoors always have "controls" to prevent them from being misused. But these controls are always flawed, either in design or in how they are used in the real world. Moreover, all telcos have maintenance/service contracts with the equipment makers. When there are ways around such controls, it's the company's own support engineers who will know them. I absolutely believe Huawei that it has don Hack Threat
ErrataRob.webp 2020-03-04 15:05:04 A requirements spec for voting (lien direct) In software development, we start with a "requirements specification" defining what the software is supposed to do. Voting machine security is often in the news, with suspicion the Russians are trying to subvert our elections. Would blockchain or mobile phone voting work? I don't know. These things have tradeoffs that may or may not work, depending upon what the requirements are. I haven't seen the requirements written down anywhere. So I thought I'd write some. One requirement is that the results of an election must seem legitimate. That's why responsible candidates give a "concession speech" when they lose. When John McCain lost the election to Barack Obama, he started his speech with: "My friends, we have come to the end of a long journey. The American people have spoken, and they have spoken clearly. A little while ago, I had the honor of calling Sen. Barack Obama - to congratulate him on being elected the next president of the country that we both love." This was important. Many of his supporters were pointing out irregularities in various states, wanting to continue the fight. But there are always irregularities, or things that look like irregularities. In every election, if a candidate really wanted to, they could drag out an election indefinitely investigating these irregularities. Responsible candidates therefore concede with such speeches, telling their supporters to stop fighting. It's one of the problematic things in our current election system. Even before his likely loss to Hillary, Trump was already stirring up his voters to continue the fight after the election. He actually won that election, so the fight never occurred, but it was likely to occur. It's hard to imagine Trump ever conceding a fairly won election. I hate to single out Trump here (though he deserves criticism on this issue) because it seems these days both sides are convinced that the other side is cheating. The goal of adversaries like Putin's Russia isn't necessarily to get favored candidates elected, but to delegitimize the candidates who do get elected. As long as the opponents of the winner believe they have been cheated, then Russia wins. Is the actual requirement of election security that the elections are actually secure? Or is the requirement instead that they appear secure? After all, when two candidates each have nearly 50% of the real vote, then it doesn't really matter which one has mathematical legitimacy. It matters more which has political legitimacy. Another requirement is that the rules be fixed ahead of time. This was the big problem in the Florida recounts in the 2000 Bush election. Votes had ambiguities, like hanging chads. The legislature came up with rules for how to resolve the ambiguities, how to count the votes, after the votes had been cast. Naturally, the party in power that comes up with the rules will choose those that favor its own candidate. The state of Georgia recently passed a law on election systems. Computer scientists in election security criticized the law because it didn't have their favorite approach, voter-verifiable paper ballots. Instead, the ballot printed a bar code. But the bigger problem with the law is that it left open what happens if tampering is discovered. If an audit of the paper ballots finds discrepancies, what happens then? The answer is the legislature comes up with more rules.
You don't need to secretly tamper with votes; you can instead do so publicly, so that everyone knows the vote was tampered with. This then throws the problem to the state legislature to decide the victor. Even the most perfectly secured voting system proposed by academics doesn't solve the problem. It'll detect vote tampering, but doesn't resolve what happens when tampering is detected. What do you do with tampered votes? If you throw them out, it means one candidate wins. If you somehow fix them, it means the other candidate w Guideline
ErrataRob.webp 2020-01-28 16:53:00 There\'s no evidence the Saudis hacked Jeff Bezos\'s iPhone (lien direct) There's no evidence the Saudis hacked Jeff Bezos's iPhone. This is the conclusion of all the independent experts who have reviewed the public report behind the U.N.'s accusations. That report failed to find evidence proving the theory, but instead simply found unknown things it couldn't explain, which it pretended were evidence. This is a common flaw in such forensics reports. When there's evidence, it's usually found and reported. When there's no evidence, investigators keep looking. Today's devices are complex, so if you keep looking, you always find anomalies you can't explain. There are only two results from such investigations: proof of bad things, or anomalies that suggest bad things. There's never any proof that no bad things exist (at least, not in my experience). Bizarre and inexplicable behavior doesn't mean a hacker attack. Engineers trying to debug problems, and support technicians helping customers, find such behavior all the time. Pretty much every user of technology experiences this. Paranoid users often think there's a conspiracy against them when electronics behave strangely, but "behaving strangely" is perfectly normal. When you start with the theory that hackers are involved, then you have an explanation for all that's unexplainable. It's all consistent with the theory, thus proving it. This is called "confirmation bias". It's the same thing that props up conspiracy theories like UFOs: space aliens can do anything, thus anything unexplainable is proof of space aliens. Alternate explanations, like a skunkworks testing a new jet, never seem as plausible. The investigators were hired to confirm a bias. Their job wasn't to do an unbiased investigation of the phone, but instead to find evidence confirming the suspicion that the Saudis hacked Bezos. Remember, the story started in February of 2019 when the National Enquirer tried to extort Jeff Bezos with sexts between him and his paramour Lauren Sanchez. Bezos immediately accused the Saudis of being involved. Even after it was revealed that the sexts came from Michael Sanchez, the paramour's brother, Bezos's team doubled down on their accusations that the Saudis hacked Bezos's phone. The FTI report tells a story beginning with the Saudi Crown Prince sending Bezos a message using WhatsApp containing a video. The story goes: The downloader that delivered the 4.22MB video was encrypted, delaying or preventing further study of the code delivered along with the video. It should be noted that the encrypted WhatsApp file sent from MBS' account was slightly larger than the video itself. This story is invalid. Such messages use end-to-end encryption, which means that while nobody in between can decrypt them (not even WhatsApp), anybody with possession of the ends can. That's how the technology is supposed to work. If Bezos loses/breaks his phone and needs to restore a backup onto a new phone, the backup needs to have the keys used to decrypt the WhatsApp messages. Thus, the forensics image taken by the investigators had the necessary keys to decrypt the video -- the investigators simply didn't know about them. In a previous blogpost I explain these magical WhatsApp keys and where to find them so that anybody, even you at home, can forensics their own iPhone, retrieve these keys, and decrypt their own videos. Hack Uber
ErrataRob.webp 2020-01-28 14:24:42 How to decrypt WhatsApp end-to-end media files (lien direct) At the center of the "Saudis hacked Bezos" story is a mysterious video file investigators couldn't decrypt, sent by Saudi Crown Prince MBS to Bezos via WhatsApp. In this blog post, I show how to decrypt it. Once decrypted, we'll either have a smoking gun proving the Saudis' guilt, or exoneration showing that nothing in the report implicated the Saudis. I show how everyone can replicate this on their own iPhones. The steps are simple:
- backup the phone to your computer (macOS or Windows), using one of many freely available tools, such as Apple's own iTunes app
- extract the database containing WhatsApp messages from that backup, using one of many freely available tools, or just hunt for the specific file yourself
- grab the .enc file and decryption key from that database, using one of many freely available SQL tools
- decrypt the video, using a tool I just created on GitHub
End-to-end encrypted downloader: The FTI report says that within hours of receiving a suspicious video, Bezos's iPhone began behaving strangely. The report says: ...analysis revealed that the suspect video had been delivered via an encrypted downloader hosted on WhatsApp's media server. Due to WhatsApp's end-to-end encryption, the contents of the downloader cannot be practically determined. The phrase "encrypted downloader" is not a technical term but something the investigators invented. It sounds like a term we use in malware/viruses, where a first stage downloads later stages using encryption. But that's not what happened here. Instead, the file in question is simply the video itself, encrypted, with a few extra bytes due to encryption overhead (10 bytes of checksum at the start, up to 15 bytes of padding at the end). Now let's talk about "end-to-end encryption". This only means that those in the middle can't decrypt the file, not even WhatsApp's servers. But those on the ends can -- and that's what we have here, one of the ends. Bezos can upgrade his old iPhone X to a new iPhone XS by backing up the old phone and restoring onto the new phone, and still decrypt the video. That means the decryption key is somewhere in the backup. Specifically, the decryption key is in the file named 7c7fba66680ef796b916b067077cc246adacf01d in the backup, in the table named ZWAMEDIAITEM, as the first protobuf field in the field named ZMEDIAKEY. These details are explained below. WhatsApp end-to-end encryption of video: Let's discuss how videos are transmitted using text messages. We'll start with SMS, the old messaging system built into the phone system that predates modern apps. It can only send short text messages of a few hundred bytes at a time. These messages are too small to hold a complete video many megabytes in size. They are sent through the phone system itself, not via the Internet. When you send a video via SMS, what happens is that the video is uploaded to the phone company's servers via HTTP. Then, a text message is sent with a URL link to the video. When the recipient gets the message, their phone downloads the video from the URL. The text messages going through the phone system just contain the URL; an Internet connection is used to transfer the video. This happens transparently to the user. The user just sees the video and not the URL.
They'll only notice a difference when using ancient 2G mobile phones that can get the SMS messages but which can't actually connect to the Internet. A similar thing happens with WhatsApp, only with encryption added. The sender first encryp Malware Hack Tool
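To make the final decryption step concrete, here is a minimal sketch in Python (an illustration of mine, not the author's GitHub tool), assuming the media-key layout from WhatsApp's published security whitepaper: the 32-byte mediaKey is expanded with HKDF-SHA256 (no salt, a media-type info string) into an IV, an AES-256-CBC cipher key, and a MAC key, with a truncated 10-byte HMAC over the ciphertext:

# Sketch of WhatsApp .enc media decryption, assuming the whitepaper layout.
# Requires the "cryptography" package. File names and key are placeholders.
import hashlib
import hmac
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def decrypt_media(enc_path, media_key, out_path, info=b"WhatsApp Video Keys"):
    # Expand the 32-byte mediaKey into iv | cipherKey | macKey (| refKey).
    expanded = HKDF(algorithm=hashes.SHA256(), length=112, salt=None,
                    info=info).derive(media_key)
    iv, cipher_key, mac_key = expanded[:16], expanded[16:48], expanded[48:80]
    blob = open(enc_path, "rb").read()
    # The whitepaper appends a 10-byte truncated HMAC to the ciphertext;
    # the post above describes the checksum at the start -- adjust if needed.
    ciphertext, mac = blob[:-10], blob[-10:]
    expected = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()[:10]
    assert hmac.compare_digest(mac, expected), "MAC mismatch"
    decryptor = Cipher(algorithms.AES(cipher_key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    open(out_path, "wb").write(padded[:-padded[-1]])  # strip PKCS#7 padding

In this sketch, a call like decrypt_media("video.enc", media_key, "video.mp4") would use the 32-byte key pulled out of the ZMEDIAKEY protobuf field described above; for images, the info string would be "WhatsApp Image Keys" instead.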
ErrataRob.webp 2019-12-30 14:30:20 So that tweet was misunderstood (lien direct) I'm currently experiencing the toxic hell that is a misunderstood tweet going viral. It's a property of social media. The more they can deliberately misunderstand you, the more they can justify the toxicity of their response. Unfortunately, I had to delete it in order to stop all the toxic crud and threats of violence. The context is how politicians distort everything. It's like whenever they talk about sea level rise: it's always about some city like Miami or New Orleans that is sinking into the ocean already, even without global warming's help. Pointing this out isn't a denial of global warming, it's pointing out how we can't talk about the issue without exaggeration. Mankind's carbon emissions are indeed causing sea level to rise, but we should be talking about how this affects average cities, not dramatizing the issue with the worst cases. The same is true of health care. It's a flawed system that needs change. But we don't discuss the people making the best of all bad choices. Instead, we cherry-pick those who made the worst possible choice, and then blame the entire bad outcome on the system. My tweet is in response to this Elizabeth Warren reference to a story where somebody chose the worst of several bad choices: No one should have to choose between medication or housing. No one should be forced to ration insulin and risk dangerous complications. We need #MedicareForAll-and we need to tackle corruption and price gouging in drug manufacturing head on. https://t.co/yNxo7yUDri - Elizabeth Warren (@ewarren) September 23, 2019 My tweet is widely misunderstood as saying "here's a good alternative", when I meant "here's a less bad alternative". Maybe I was wrong and it's not "less bad", but nobody has responded that way. All the toxic spew on Twitter has been based on the interpretation that I was asserting it was "good". And the reason I chose this particular response is because I thought it was a Democrat talking point. As Bernie Sanders (a 2020 presidential candidate) puts it: “The original insulin patent expired 75 years ago. Instead of falling prices, as one might expect after decades of competition, three drugmakers who make different versions of insulin have continuously raised prices on this life-saving medication.” This is called "evergreening", as described in articles like this one that claim insulin makers have been making needless small improvements to keep their products patent-protected, so that they don't have to compete against generics whose patents have expired. It's Democrats like Bernie who claim expensive insulin is little different than cheaper insulin, not me. If you disagree, go complain to him, not me. Bernie is wrong, by the way. The more expensive "insulin analogs" result in dramatically improved blood sugar control for Type 1 diabetics. The results are life-changing, especially when combined with glucose monitors and insulin pumps. Drug companies deserve to recoup the billions spent on these advances. My original point is still true that "cheap insulin" is better than "no insulin", but it's also true that it's far worse than modern, more expensive insulin. Anyway, I wasn't really focused on that part of the argument but the other part: how list prices are an exaggeration. They are a fiction that nobody needs to pay, even those without insurance. They aren't the result of price gouging by drug manufacturers, as Elizabeth Warren claims. Bu APT 32
ErrataRob.webp 2019-12-13 16:50:17 This is finally the year of the ARM server (lien direct) "RISC" was an important architecture from the 1980s, when CPUs had fewer than 100,000 transistors. By simplifying the instruction set, they freed up transistors for more registers and better pipelining. It meant executing more instructions, but more than making up for this by executing them faster. But once CPUs exceeded a million transistors around 1995, they moved to Out-of-Order, superscalar architectures. OoO replaces RISC by decoupling the front-end instruction set from the back-end execution. A "reduced instruction set" no longer matters; the backend architecture differs little between Intel and competing RISC chips like ARM. Yet people have remained fixated on instruction set. The reason is simply politics. Intel has been the dominant instruction set for the computers we use on servers, desktops, and laptops. Many instinctively resist whoever dominates. In addition, colleges indoctrinate students on the superiority of RISC. Much college computer science instruction is decades out of date. For 10 years, the ignorant press has been championing the cause of ARM's RISC processors in servers. The refrain has always been that RISC has some inherent power-efficiency advantage, and that ARM processors with natural power efficiency from the mobile world will be more power efficient for the data center. None of this is true. There are plenty of RISC alternatives to Intel, like SPARC, POWER, and MIPS, and none of them ended up having a power-efficiency advantage. Mobile chips aren't actually power efficient. Yes, they consume less power, but only because they are slower. ARM's mobile chips have roughly the same computations-per-watt as Intel chips. When you scale them up to the same amount of computation as Intel's server chips, they end up consuming just as much power. People are essentially innumerate. They can't do this math. The only factor they know is that ARM chips consume less power. They can't factor into the equation that the chips are also doing fewer computations. There have been three attempts by chip makers to produce server chips to compete against Intel. The first attempt was the "flock of chickens" approach. Instead of one beefy OoO core, you make a chip with a bunch of wimpy traditional RISC cores. That's not a bad design for highly parallel, large-memory workloads. Such workloads spread themselves efficiently across many CPUs, and spend a lot of time halted, waiting for data to be returned from memory. But such chips didn't succeed in the market. The basic reason was that interconnecting all the cores introduced so much complexity and power consumption that it wasn't worth the effort. The second attempt was multi-threaded chips. Intel's chips support two threads per core, so that when one thread halts waiting for memory, the other thread can continue processing what's already stored in cache and in registers. It's a cheap way for processors to increase effective speed while adding few additional transistors to the chip. But it has decreasing marginal returns, which is why Intel only supports two threads. Vendors created chips with as many as 8 threads per core. Again, they were chasing the highly parallel workloads that waited on memory. Only with multithreaded chips could they avoid all that interconnect nastiness. This still didn't work.
The chips were quite good, but it turns out that these workloads are only a small portion of the market. Finally, chip makers decided to compete head-to-head with Intel by creating server chips optimized for the same workloads as Intel, with fast single-threaded performance. A good example was Qualcomm, who created a server chip that CloudFlare promised to use. They announced this to much fanfare, then abandoned it a few months later as nobody adopted it. The reason was simply that when you scale to Intel-like performance, you have Intel-like liabilities. Your only customers are the innumerate who can't do math, who believe like emperors Guideline
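To spell out the computations-per-watt arithmetic the post above keeps returning to, here is a back-of-the-envelope sketch with invented numbers (not measurements of any real chip):

# Invented numbers purely to illustrate the computations-per-watt point.
chips = {"mobile": {"watts": 5, "gigaops": 40},     # slower, lower power
         "server": {"watts": 100, "gigaops": 800}}  # faster, higher power
for name, c in chips.items():
    print(name, c["gigaops"] / c["watts"], "Gops/W")
# Both print 8.0 Gops/W: consuming less power while doing proportionally
# less work is not a power-efficiency advantage.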
ErrataRob.webp 2019-09-26 13:24:44 CrowdStrike-Ukraine Explained (lien direct) Trump's conversation with the President of Ukraine mentions "CrowdStrike". I thought I'd explain this. What was said? This is the text from the conversation covered in this: “I would like you to find out what happened with this whole situation with Ukraine, they say Crowdstrike... I guess you have one of your wealthy people... The server, they say Ukraine has it.” Personally, I occasionally interrupt myself while speaking, so I'm not sure I'd criticize Trump here for his incoherence. But at the same time, we aren't quite sure what was meant. It's only meaningful in the greater context. Trump has talked before about CrowdStrike's investigation being wrong, a rich Ukrainian owning CrowdStrike, and a "server". He's talked a lot about these topics before. Who is CrowdStrike? They are a cybersecurity firm that, among other things, investigates hacker attacks. If you've been hacked by a nation state, then CrowdStrike is the sort of firm you'd hire to come and investigate what happened, and help prevent it from happening again. Why is CrowdStrike mentioned? Because they were the lead investigators in the DNC hack who came to the conclusion that Russia was responsible. The pro-Trump crowd believes this conclusion is false. If the conclusion is false, then it must mean CrowdStrike is part of the anti-Trump conspiracy. Trump has always had a thing for CrowdStrike since their first investigation. It's intensified since the Mueller report, which solidified the ties between Trump and Russia, and between Russia and the DNC hack. Personally, I'm always suspicious of such investigations. Politics, either grand (on this scale) or small (internal company politics), seems to drive investigations, creating firm conclusions based on flimsy evidence. But CrowdStrike has made public some pretty solid information, such as Bitly accounts used both in the DNC hacks and against other (known) targets of state-sponsored Russian hackers. Likewise, the Mueller report had good data on Bitcoin accounts. I'm sure if I looked at all the evidence, I'd have more doubts, but at the same time, of the politicized hacking incidents out there, this one seems to have the best (public) support for its conclusion. What's the conspiracy? The basis of the conspiracy is that the DNC hack was actually an inside job. Some former intelligence officials led by Bill Binney claim they looked at some data and found that the files were copied "locally" instead of across the Internet, and that therefore it was an insider who did it and not a remote hacker. I debunk the claim here, but the short explanation is: of course the files were copied "locally", the hacker was inside the network. In my long experience investigating hacker intrusions, and performing them myself, I know this is how it's normally done. I mention my own experience because I'm technical and know these things, in contrast with Bill Binney and those other intelligence officials who have no experience with such things. He sounds impressive because he's formerly of the NSA, but he was a mid-level manager in charge of budgets. Binney has never performed a data breach investigation, and has never performed a pentest. There are other parts to the conspiracy. In the middle of all this, a DNC staffer was murdered on the street, possibly due to a mugging. Naturally this gets included as part of the conspiracy: this guy ("Seth Rich") must've been the "insider" in this attack, and mus Data Breach Hack Guideline NotPetya
ErrataRob.webp 2019-08-31 10:25:40 Thread on the OSI model is a lie (lien direct) I had a Twitter thread on the OSI model. Below, it's compiled into one blogpost.
Yea, I've got 3 hours to kill here in this airport lounge waiting for the next leg of my flight, so let's discuss the "OSI Model". There's no such thing. What they taught you is a lie, and they knew it was a lie, and they didn't care, because they are jerks. You know what REALLY happened when the kid pointed out the king was wearing no clothes? The kid was punished. Nobody cared. And the king went on wearing the same thing, which everyone agreed was made from the finest of cloth. The OSI Model was created by an international standards organization for an alternative internet that was too complicated to ever work, and which never worked, and which never came to pass. Sure, when they created the OSI Model, the Internet's layered model already existed, so they made sure to include today's Internet as part of their model. But the focus and intent of the OSI's efforts was on dumb networking concepts that worked differently from the Internet. OSI wanted a "connection-oriented network layer", one that worked like the telephone system, where every switch in between the ends knows about the connection. The Internet is based on a "connectionless network layer". Likewise, the big standards bo
ErrataRob.webp 2019-08-31 10:18:49 Thread on network input parsers (lien direct) This blogpost contains a long Twitter thread on input parsers. I thought I'd copy the thread here as a blogpost.
I am spending far too long on this chapter on "parsers". It's this huge gaping hole in Computer Science where academics don't realize it's a thing. It's like physics missing one of Newton's laws, or medicine ignoring broken bones, or chemistry ignoring fluorine. The problem is that without existing templates of how "parsing" should be taught, it's really hard coming up with a structure for describing it from scratch. "Langsec" has the best model, but at the same time, it's a bit abstract ("input is a language that drives computation"), so I want to ease into it with practical examples for programmers.
Guideline
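To ground the langsec idea ("input is a language that drives computation") with something concrete, here is a toy Python sketch of the kind of defensive input parsing the thread is about; the record format (2-byte big-endian length, then payload) is invented for the example:

# Toy parser for an invented length-prefixed record format. The langsec
# point: validate everything about untrusted input (here, the length
# field) before letting it drive any computation.
import struct

def parse_record(buf):
    """Return (payload, rest), or raise ValueError on malformed input."""
    if len(buf) < 2:
        raise ValueError("truncated header")
    (length,) = struct.unpack_from(">H", buf, 0)
    if len(buf) < 2 + length:
        raise ValueError("length field exceeds available input")
    return buf[2:2 + length], buf[2 + length:]

payload, rest = parse_record(b"\x00\x05helloEXTRA")
assert payload == b"hello" and rest == b"EXTRA"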
ErrataRob.webp 2019-08-10 15:43:30 Hacker Jeopardy, Wrong Answers Only Edition (lien direct) Among the evening entertainment at DEF CON is "Hacker Jeopardy", like the TV show Jeopardy, but with hacking tech/culture questions. In today's blog post, we are going to play the "Wrong Answers Only" version, in which I die upon the hill defending the wrong answer. The problem posed is this: YOU'LL LIKELY SHAKE YOUR HEAD WHEN YOU SEE TELNET AVAILABLE, NORMALLY SEEN ON THIS PORT. Apparently, people gave 21, 22, and 25 as the responses. The correct response, according to RFC assignments of well-known ports, is 23. A good wrong answer is this one, port 25, where the Morris Worm spread via port 25 (SMTP) via the DEBUG command. pre-1988 it was 25, but you had to type DEBUG after connecting 😉 - pukingmonkey🐒 (@pukingmonkey) August 10, 2019 But the real correct response is port 21. The problem posed wasn't about which port was assigned to Telnet (port 23), but where you normally see it these days. Port 21 is assigned to FTP, the file transfer protocol. A little-known fact about FTP is that it uses Telnet for its command channel on port 21. In other words, FTP isn't a text-based protocol like SMTP, HTTP, POP3, and so on. Instead, it's layered on top of Telnet. It says so right in RFC 959. When we look at the popular FTP implementations, we see that they do respond to Telnet control codes on port 21. There are a ton of FTP implementations, of course, so some don't respond to Telnet and instead treat it as a straight text protocol. But the vast majority of what's out there are implementations that do Telnet as defined. Consider network intrusion detection systems. When they decode FTP, they do so with their Telnet protocol parsers. You can see this in the Snort source code, for example. The question is "normally seen". Well, Telnet on port 23 has largely been replaced by SSH on port 22, so you don't normally see it on port 23. However, FTP is still popular. While I don't have a hard study to point to, in my experience, the amount of traffic seen on port 21 is vastly higher than that seen on port 23. QED: the port where Telnet is normally seen is port 21. But the original problem wasn't so much "traffic" seen, but "available". That's a problem we can study with port scanners -- especially mass port scans of the entire Internet. Rapid7 has their yearly Internet Exposure Report. According to that report, port 21 is three times as available on the public Internet as port 23.
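You can check the FTP-uses-Telnet claim yourself. Here is a small Python sketch (mine, not from the post) that connects to an FTP control channel and sends the Telnet IAC AYT ("Are You There") sequence; ftp.example.com is a placeholder for a server you're allowed to probe:

# Probe an FTP control channel (port 21) with a Telnet command sequence.
import socket

IAC, AYT = 255, 246  # Telnet "Interpret As Command" escape, "Are You There"

with socket.create_connection(("ftp.example.com", 21), timeout=5) as s:
    print(s.recv(1024).decode(errors="replace"))  # FTP banner, e.g. "220 ..."
    s.sendall(bytes([IAC, AYT]))
    # Telnet-aware servers react to the AYT; text-only implementations
    # typically ignore it or return a syntax error.
    print(s.recv(1024).decode(errors="replace"))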