Tuesday, December 31, 2013

Masscan: designing my own crypto

Like many programmers, one of the things I like to do is design my own crypto algorithms. Specifically, at the heart of my port-scanner masscan is a cryptographic algorithm for randomizing IP addresses and port numbers.

This algorithm has flaws. Well, it's good enough for port scanning, but it's not cryptographically secure. In this post, I describe how to graph the algorithm's output so that these flaws can be detected. Update: I added a second nmap sample to compare against.

Saturday, December 28, 2013

Masscan does ARP

My portscanner, masscan, also does ARP scanning. Sure, other ARP scanning tools exist (like arpscan), but I'm too lazy to learn how they work, so I just added the functionality to my own tool.

Here's how you use it. Right now I'm plugged into the local wired Ethernet. When I do an ifconfig, I get the following result:
inet netmask 0xffff0000 broadcast
That means there is an entire /16 of devices out there. I want to discover them, so I run masscan with the following parameters:
masscan --arp --source-ip --source-mac 66-55-44-33-22-11 --rate 1000
This produces results that look like:

Starting masscan 1.0 (http://bit.ly/14GZzcT) at 2013-12-28 04:39:52 GMT
 -- forced options: -sS -Pn -n --randomize-hosts -v --send-eth
Initiating SYN Stealth Scan
Scanning 65536 hosts [1 port/host]
Discovered open port 0/arp on 
Discovered open port 0/arp on
Discovered open port 0/arp on
Discovered open port 0/arp on
Discovered open port 0/arp on
Discovered open port 0/arp on

There's some weirdness in how it displays the results. It claims things are "ports" because masscan is a port scanner, even though ARP has no ports. I should probably fix this.

In the above example I spoof the source IP and MAC address. This isn't strictly necessary, because if you don't specify them, masscan will use the IP/MAC of your current configuration. But in general, masscan should always be used in spoofing mode, so I include them on general principles.

The reason I added it is that on the flight back from BruCon (a Belgian cybersecurity convention), I was upgraded to business class, which had an Ethernet port. Attaching a cable gave me link status, but no DHCP results. Also, tcpdump revealed no packets at all coming from the port. Therefore, I needed to scan the port to see if anything was listening, so I quickly added the ARP scanning code to masscan, then blasted the port. I first scanned the private address ranges, then the rest of the Internet address ranges. Despite the link being only 100-mbps, I could just about scan the entire 32-bit address space in the 10 hours of the flight. I didn't find anything, but that could be because transmitting 150,000 ARP requests per second can overwhelm whatever device on the other end might respond.
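A back-of-envelope check of that 10-hour figure (the 150,000 requests/second rate is the one mentioned above; everything else here is simple arithmetic):

```python
# How long does it take to ARP-scan the entire 32-bit address space
# at a given packet rate?

ADDRESS_SPACE = 2 ** 32          # every possible IPv4 address

def scan_hours(packets_per_second: int) -> float:
    """Hours needed to send one probe to every IPv4 address."""
    return ADDRESS_SPACE / packets_per_second / 3600

# At ~150,000 ARP requests per second (roughly what a 100-mbps link
# sustains with minimum-size frames):
hours = scan_hours(150_000)
print(f"{hours:.1f} hours")   # 8.0 hours -- comfortably within the flight
```

So the whole IPv4 space fits in roughly eight hours at that rate, matching the flight-time claim.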

That last point is pretty important: ARP requests are sent to the broadcast address (ff-ff-ff-ff-ff-ff), and thus are received by every machine on the local link. Masscan is very fast, usually faster than the receiving machines can handle. You can easily go too fast, causing receiving machines to drop packets. They cannot respond to a request if they drop it. Thus, at high speeds you'll miss results that you'd otherwise get at slow speeds. And you might DoS machines on the local segment, which will anger people.
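To make the broadcast point concrete, here's a minimal sketch of what an ARP request frame looks like on the wire (the helper name and sample addresses are my own, not masscan's code):

```python
import struct

def build_arp_request(src_mac: bytes, src_ip: bytes, target_ip: bytes) -> bytes:
    """Build a raw Ethernet frame carrying an ARP who-has request.

    Note the destination MAC: ff-ff-ff-ff-ff-ff, the broadcast address,
    which is why every host on the segment must process the frame.
    """
    broadcast = b"\xff" * 6
    eth_header = broadcast + src_mac + struct.pack("!H", 0x0806)  # EtherType = ARP
    arp_body = struct.pack(
        "!HHBBH6s4s6s4s",
        1,            # hardware type: Ethernet
        0x0800,       # protocol type: IPv4
        6, 4,         # MAC length, IP length
        1,            # opcode: request ("who-has")
        src_mac, src_ip,
        b"\x00" * 6,  # target MAC unknown -- that's what we're asking
        target_ip,
    )
    return eth_header + arp_body

frame = build_arp_request(b"\x66\x55\x44\x33\x22\x11",
                          b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02")
print(len(frame))  # 42: 14-byte Ethernet header + 28-byte ARP payload
```

Every host on the link receives and parses this frame, which is exactly why blasting them at high rates can overwhelm slow devices.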

One issue is the right source IP address to use. As far as I can tell, testing with Windows, Mac OS X, and Linux, any IP address can be used. In my tests, you can use a source IP address outside the target's subnet, and the target will still respond to the ARP request.

Anyway, if you use masscan, just remember that instead of port numbers, you can specify --arp, and it'll do pretty much what you'd expect it to do.

Friday, December 27, 2013

Target: just 'cause it's 3DES doesn't mean it's secure

In a blogpost referring to the recent breach of millions of debit cards, Target claims there is no danger, because the PIN is encrypted with Triple-DES at the terminal, and decrypted at the payment processor. Since hackers stole only the encrypted PINs, Target claims the debit card info is useless to the hackers.

This is wrong. Either Target doesn't understand cybersecurity, or they are willfully misleading the public, or they are leaving out important details. In all probability, it's the last: they left out the detail that there is salt.

Yes, Triple-DES cannot be broken by hackers. If they don't have the secret key, they can't decrypt the PINs. But here's the deal: hackers can get PINs without decrypting them, because two identical PINs encrypt to the same value.

For example, let's say the hacker shopped at Target before stealing the database. The hacker's own debit card information will be in the system. Let's say the hacker's PIN was 8473, and that it encrypts to 98hasdHOUa. The hacker now knows that everyone with the encrypted PIN "98hasdHOUa" has the same PIN, "8473". Since there are only 10,000 possible PINs, the hacker has now cracked the PINs of about 1,000 of the 10 million stolen debit cards.

That trick only cracks the cards sharing the hacker's own PIN. The hacker can crack the rest using the same property. The hacker simply starts with PIN "0000" and, using online sites, tries it against one card at a time until s/he gets a hit. On average, the hacker will have to try 10,000 cards before finding one. Once found, all debit cards with the same encrypted PIN are moved aside to the "known" category. The hacker then repeats the process with "0001", "0002", and so on for all combinations.

This process is further simplified by the fact that some PINs are vastly more common than others. People choose simple patterns (like "0000"), birthdays, and so on. The hacker can build a popularity distribution among the cracked PINs. Since "1234" is the most popular PIN, the hacker can look at the most common encrypted PIN and try "1234" against it first. It'll probably work, but if not, s/he can try the next most common encrypted PIN until a match for "1234" is found. The 100 most popular PINs can be discovered with only a few thousand attempts, giving over a million cracked debit cards to work with. This is something that could be done even by a person standing in front of an ATM for hours, trying one card after another.
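A toy simulation of this grouping attack. A keyed hash stands in for unsalted Triple-DES here -- the only property that matters is determinism (same PIN in, same ciphertext out) -- and the key, card names, and PINs are all made up:

```python
import hashlib
from collections import Counter

SECRET_KEY = b"hypothetical-hsm-key"   # the hacker never learns this

def encrypt_pin(pin: str) -> str:
    """Deterministic, unsalted 'encryption' of a 4-digit PIN."""
    return hashlib.sha256(SECRET_KEY + pin.encode()).hexdigest()[:10]

# The stolen database maps card -> encrypted PIN (no cleartext PINs stored).
stolen = {f"card{i}": encrypt_pin(pin)
          for i, pin in enumerate(["1234", "1234", "1234", "0000", "8473"])}

# The hacker's own card is in the database, and s/he knows its PIN is 8473:
my_ciphertext = stolen["card4"]

# Every card with the same ciphertext has the same PIN -- no decryption needed.
cracked = [card for card, ct in stolen.items() if ct == my_ciphertext]
print(cracked)   # ['card4']

# And the most frequent ciphertext almost certainly corresponds to the most
# popular PIN ("1234"), so it's the best place to start guessing.
most_common_ct, count = Counter(stolen.values()).most_common(1)[0]
print(count)     # 3 cards share the most popular ciphertext
```

The attack never touches the cryptography itself; it only exploits the fact that equal plaintexts produce equal ciphertexts.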

One way to correct this is to salt the encryption, such as using the credit card number as part of the key that encrypts the PIN, or as additional data prepended to the PIN. Done this way, every PIN now encrypts to a different value. If they did this, then it would indeed be as if no PIN information were stolen at all.
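A toy illustration of why salting works (a keyed hash stands in for the real Triple-DES construction, and the card numbers are made up): mixing the card number into the input means identical PINs no longer yield identical ciphertexts.

```python
import hashlib

SECRET_KEY = b"hypothetical-hsm-key"

def encrypt_pin_salted(card_number: str, pin: str) -> str:
    """Encrypt a PIN with the card number prepended as salt."""
    data = card_number.encode() + pin.encode()
    return hashlib.sha256(SECRET_KEY + data).hexdigest()[:10]

a = encrypt_pin_salted("4111111111111111", "1234")
b = encrypt_pin_salted("4222222222222222", "1234")
print(a != b)   # True: same PIN, different ciphertexts, so grouping
                # identical ciphertexts no longer reveals shared PINs
```

With salt, the frequency and grouping tricks collapse: every stolen ciphertext is unique, and knowing your own PIN tells you nothing about anyone else's.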

As Matthew Green describes, the Payment Card Industry (PCI) standards indeed call for salt, so this is probably what Target did.

It's nice that Target gives intermediate results of their investigation. Transparency like this should be commended. But they should just give us the raw information, like the specific PCI standard they follow, without the marketing spin about whether it's secure or not. I suppose I should've just known the PCI standard off the top of my head and filled in the blanks myself, but when I see incomplete info like this, it makes me distrust their honesty/competence instead.

Sunday, December 22, 2013

Trolling: what is it?

Over the holidays, one has to answer questions from one's family, like what is "bitcoin" or "twerking" or "trolling". In this post, I define the latter.

Trolling is like this tweet in response to a recent picture from the moon:

I am, of course, echoing the fringe who argue that the original moon landings were fake because there were no stars in the photographs.

I got a lot of responses calling me an idiot, but zero tweets with the correct explanation. The reason you can't see stars is exposure time: if you leave the shutter open long enough for stars to show up, then the foreground moonscape will be completely washed out. Presumably, if the Chinese had taken an iPhone into space with HDR enabled, which takes two photos with different exposure lengths, one might be able to get both a bright foreground and dim background stars in the same photograph. But sadly, the Chinese didn't have iPhones to send to the moon.

The payoff to trolling is responses like this one that gets the reason completely wrong:

As I explained trolling, my mother asked me, "Shouldn't you be worried that potential customers may see this, and not do business with you, thinking you are stupid?" Well, the short answer is no, because any customer who would judge me by this is a customer I wouldn't want anyway.

But what my mother really asked is "aren't you afraid of looking stupid?" And the answer to that question should always be "no". Raise your hand in class and ask your question. Dance like nobody's watching. Stop caring so much what people think. Success and happiness come only from being willing to look stupid. A good way to practice this is by actually being stupid. You have to get used to the naysayers. Even at your most brilliant, when you are changing the world, there will always be someone claiming you are an idiot, and that everyone knows what you are doing is stupid and wrong.

And so that is trolling.

Friday, December 20, 2013

Snakeoil vs. bounties

A "bounty" is what trustworthy companies offer hackers who break their stuff. This is not to be confused with "prizes", where companies create absurd challenges for hackers to break their stuff, but with rules that mean hackers will never win. Trustworthy companies regularly have to pay out on their bounties; untrustworthy companies selling snakeoil never pay out.

I mention this because of this press release saying:
"A challenge was issued to top hackers a week ago to break into secure cloud service, [XXXXX] for $25,000. 700 hackers from 49 countries already took up the hacking challenge, hailing from top universities like MIT, Stanford and Princeton and corporations like Vodafone and Tata Consulting."
This is nonsense. The contest isn't for their cloud service. Instead, the contest is for a separate, contest-specific network. It's a trick. It narrows the challenge to focus only on the most secure part of their system -- the part they know is secure. But hackers don't exploit the strongest part of any system; that would be stupid. Instead, hackers target the weakest link in the network, the part which isn't included in the contest.

In contrast, the bounty system of other companies puts everything under the microscope. It's totally out of their control what the hackers might hack. Since security is so hard, they often have to pay out. For example, Google Chrome is the most trusted, secure browser precisely because Google has had to pay out the most in bounties -- not because they constructed an invalid contest so they would never have to pay out.

A company that offers a $25k vulnerability bounty is trustworthy -- a company offering a $25k prize for some weird challenge isn't trustworthy at all. Either they are knowingly deceiving you, or are too stupid to understand that their challenge has no merit. Either answer means you should not trust them. They are not a security company that has won the respect of security professionals, they are a marketing company trying to hoodwink you.

Thursday, December 19, 2013

What good are lead lined rooms?

The recent 60 Minutes report on the NSA contains many factual errors (like those about BIOS). One thing that looks like an error is the description of "lead lined rooms". Are rooms at the NSA really lined with lead? Or is this just a mistake reflecting the common misconception about lead stopping radiation?

We all grew up with the stories that Superman's X-Ray vision can see through any substance other than lead, and that lead is used to block radiation in nuclear reactors. These stories aren't really true.

What stops X-rays is mass. The heavier the object between you and the X-ray source, the better, but the material doesn't really matter. Lead is often chosen because it's dense and cheap, not because it has any special blocking capabilities. A 1-inch thick lead wall is equivalent to 2.5 inches of steel, or 6 inches of concrete. I'm pretty sure building walls out of concrete and steel, which can bear their own weight, would be a better choice for the NSA than lining them with lead.

As a corollary, unlike in the movies, where any lead, no matter how thin, stops Superman's vision, the real issue is the amount. Twice as much mass stops X-rays twice as well. Thin lead foil in your undies to protect your modesty around Superman isn't going to work -- there's just not enough mass.

The major threat to the NSA isn't X-rays, but radio waves. The NSA wants to stop signals from getting out (TEMPEST), and radio waves from getting in (EMP). To stop these things, what you really want are electrical conductors. Sure, lead is a conductor, but copper and aluminum are far better. These are the metals you want to line your room with, rather than lead.

A minor threat is sound, preventing people from eavesdropping. Most sound protection is done by controlling the echoes, as in an anechoic chamber. But mass also helps, so lead is sometimes used for deadening sound, often in foams that both distort and block the sound. I'm not sure if this has any bearing on the NSA "lead lined rooms".

What I think happened here is that the NSA talked about SCIFs, which have to be TEMPEST hardened against leaking electromagnetic radiation, and the reporters at CBS just assumed this meant "lead" without checking the facts.

So my question for anybody is this: are rooms at the NSA actually lined with lead? And if so, why?

How to disable webcam light on Windows

In recent news, it was revealed the FBI has a "virus" that will record a suspect through the webcam secretly, without turning on the LED light. Some researchers showed this working on an older Macbook. In this post, we do it on Windows.

Hardware, firmware, driver, software

In theory, the indicator light should be a hardware function. When power is supplied to the sensor, it should also be supplied to an indicator light. This would make the light impossible to hack. However, I don't think anybody does this.

In some cases, it's a firmware function. Webcams have their own wimpy microprocessors and run code directly within the webcam. Controlling the light is one of those firmware functions. Some, like Steve Jobs, might describe this as "hardware" control, because it resides wholly within the webcam hardware, but it's still a form of software. This is especially true because firmware blobs are unsigned, and therefore can be hacked.

In some cases, it's the driver, either within the kernel mode driver that interfaces at a low-level with the hardware, or a DLL that interfaces at a high-level with software.

How to

As reverse engineers, we simply grab these bits of software/firmware/drivers and open them in our reverse engineering tools, like IDA Pro. It doesn't take us long to find something to hack.

For example, on our Dell laptop, we find the DLL that comes with the RealTek drivers for our webcam. We quickly zero in on the exported function "TurnOnOffLED()".

We can quickly make a binary edit to this routine, causing it to return immediately without turning on the light. Dave shows this in the video below. First, the light turns on as normal; then he stops the webcam, replaces the DLLs with his modified ones, and turns on the webcam again. As the video shows, after the change the webcam is recording him, but the light is no longer on.
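A sketch of the kind of binary edit described: overwrite the first byte of the target routine with an x86 `ret` (0xC3) so it returns immediately without doing anything. The file offset, the fake DLL bytes, and the helper name here are all hypothetical -- in practice you'd find the offset of TurnOnOffLED() with a disassembler first, and a stdcall function may need `ret imm16` to clean up its arguments rather than a bare `ret`.

```python
from pathlib import Path

RET = b"\xc3"  # x86 near-return opcode

def patch_function_entry(dll_path: str, file_offset: int) -> None:
    """Replace the byte at file_offset with a `ret` instruction,
    so the routine returns before doing any work."""
    data = bytearray(Path(dll_path).read_bytes())
    data[file_offset] = RET[0]
    Path(dll_path).write_bytes(bytes(data))

# Demo on a fake "DLL" so this is runnable anywhere:
Path("fake.dll").write_bytes(b"\x55\x8b\xec\x90\x90")  # push ebp; mov ebp,esp; nops
patch_function_entry("fake.dll", 0)
print(Path("fake.dll").read_bytes()[0] == 0xC3)  # True -- routine now returns at once
```

The real edit is exactly this simple in spirit: one byte changed, and the light-control routine becomes a no-op.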

The deal with USB

Almost all webcams, even those inside your laptop's screen, are USB devices. There is a standard for USB video cameras, the UVC standard. This means most hardware will run under standard operating systems (Windows, Mac, Linux) without drivers from the manufacturer -- at least enough to get Skype working. Only the more advanced features particular to each vendor need vendor specific drivers.

According to this standard, the LED indicator light is controlled by the host software. The UVC utilities that come with Linux let you control this light directly with a command-line tool, turning off the light while the camera is on.

To hack this on Windows appears to require a filter driver. We are too lazy to write one, which is why we just hacked the DLLs in the demonstration above. We believe this is what the FBI has done: a filter driver for the UVC standard would cover most webcam products, without the FBI having to write a custom hack for each vendor.

USB has lots of interesting features. It's designed with the idea that a person without root/administrator access may still want to plug in a device and use it. Therefore, there is the idea of "user-mode" drivers, where a non-administrator can nonetheless install drivers to access the USB device.

This can be exploited with the Device Firmware Update (DFU) standard. It means in many cases that in user-mode, without administrator privileges, the firmware of the webcam can be updated. The researchers in the paper above demonstrate this with a 2008 MacBook, but in principle, it should work on modern Windows 7 notebook computers as well, using most devices. The problem for a hacker is that they would have to build a hacked firmware for lots of different webcam chips. The upside is that they can do this all without getting root/administrator access to the machine.


In the above video, Dave shows that the story of the FBI virus secretly enabling the webcam can work on at least one Windows machine. In our research we believe it can be done generically across most any webcam, using most any operating system.

Special note: Modern MacBooks do not use USB cameras. Instead, the camera hangs directly off the PCIe bus. That means things like UVC and DFU aren't going to work on them. Maybe Steve Jobs actually did build a chip with a hardware light.

Wednesday, December 18, 2013

No, you've been correctly using "styrofoam"

On my twitter feed, I keep seeing this WaPo blogpost about "styrofoam". It's completely wrong.

Sure, Dow Chemical in theory holds the trademark for the word "Styrofoam", but the fact is that the word has become genericized. The public uses this word to refer to all forms of polystyrene foam. "Real styrofoam" is any polystyrene foam, not just the Dow product.

When everyone uses a word a certain way, it's not "wrong". Sure, people use the word in a way that a big corporation doesn't like, but that doesn't mean everyone else is wrong, it means the corporation is wrong. In this case, it means Dow's trademark on the word is now invalid, and indefensible. I have no idea why the Washington Post is such a corporate toady, helping Dow with their efforts to defend their trademark.

I like corporations. I believe trademarks are valuable. This blogpost isn't about how corporations are evil, but how journalists are idiots.

Threat level: puce

Errata Security is officially raising our cyberhack alert level to "puce".

In that 60 Minutes story about the NSA, they showed the "Cyber Operations Center", and the dashboard with "Current Cyber Alert Levels". The NSA gets their alerts from various organizations, like SANS, Symantec, and IT-ISAC.

These alert levels are never really as predictive as people would hope. They are more like the weatherman who tells you "it's raining" when you can simply open your window and see for yourself.

But a lot of time, we can predict that something is going to happen, such as right now. Germany's Chaos Computer Club (CCC) is having their yearly Congress in Hamburg in a few days. They are going to provide the conference with an unprecedented 100-gbps Internet connection. There's a good chance something interesting will happen.

Hacker conventions usually have fast connections, like last year's DEF CON, which provided 100-mbps to the Internet. But these connections really aren't interesting. I can already use bitcoins to rent an anonymous VPS with a 100-mbps connection, so when I go to DEF CON, my first thought isn't how I can exploit this free Internet access.

But the CCC network is a thousand times faster. Suddenly, it becomes very interesting, especially now that we have tools that can exploit that level of bandwidth. If things go according to plan, then everyone, everywhere on the Internet, is going to see a high level of incoming port scans over those days. In the past, slow scans at a mere 100-mbps have caused organizations to panic, waking people up for midnight emergency conference calls because they thought they were under attack. What's going to happen at the CCC congress will be a thousand times worse.
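The rough numbers behind "a thousand times worse", assuming minimum-size Ethernet frames (64-byte frame plus preamble and inter-frame gap, 84 bytes on the wire); the figures are my own arithmetic, not measurements:

```python
# A minimum-size Ethernet frame occupies 84 bytes on the wire, which
# bounds how many small probe packets a link can emit per second.

WIRE_BYTES_PER_MIN_FRAME = 84

def max_packets_per_second(link_bits_per_second: float) -> float:
    return link_bits_per_second / (WIRE_BYTES_PER_MIN_FRAME * 8)

pps_100m = max_packets_per_second(100e6)    # a DEF CON-style 100-mbps link
pps_100g = max_packets_per_second(100e9)    # the CCC congress link

print(f"{pps_100m:,.0f} pps")   # 148,810 pps
print(f"{pps_100g:,.0f} pps")   # 148,809,524 pps -- a thousand times more

# At that rate, one probe to every IPv4 address takes under half a minute:
print(f"{2**32 / pps_100g:.0f} s")   # 29 s
```

In other words, at 100-gbps the entire IPv4 address space can be swept in under thirty seconds, which is why everyone's firewall logs will light up.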

Therefore, I'm writing my own advisory. I'm setting the Errata Security Threat Level to "puce" to warn people that the dates Dec. 27 through Dec. 30, inclusive, are going to bring a high rate of incoming network traffic, primarily port scans. It's not necessarily a problem, so your firewall administrators shouldn't panic, fearing a cyberblitz from the Germans.

Monday, December 16, 2013

CCC, 100-gbps, and your own private Shodan

One of the oldest/biggest "hacker" conventions is the CCC congress every December in Germany. This year, they are promising 100-gbps connectivity to the Internet. That's 'g' as in 'giga', and as in 'omfg that's a lot of bandwidth'.

So, what shall we do with all this bandwidth? The answer is masscan: scan the entire Internet and create your own, private, Shodan-style database.

DoD address space: it's not a conspiracy

Recently on Cryptome (the better-leaks-than-wikileaks site), a paper appeared pointing out that BT (British Telecom) assigns all their modems an extra address in the 30.x.x.x address space, and then attaches SSH and SNMP to that address. This looks like what many ISPs do, assigning a second IP address for management, except for one thing: the 30.x.x.x block is assigned to the United States Department of Defense. This has caused a fevered round of speculation that this is actually a secret backdoor for the NSA/GCHQ, so that they can secretly monitor and control people's home networks.

Maybe, but it's probably not the case. The better explanation is that BT simply chose this address space because it's non-routable. While it's assigned as public address space, it's only used inside the private DoD military network. Try tracerouting to those addresses: you'll see that your packets go nowhere.

Thus, it's a good choice for pseudo-private address space.

This sort of thing happens a lot. I (or others I trust) have seen several other DoD blocks used this way. I can confirm that companies use DoD address space as private addresses. Just because it's DoD doesn't mean it routes to the DoD.

The reason all these address spaces are DoD is that the DoD is really the only source of unused IPv4 addresses left. All IPv4 address ranges have been assigned. The DoD holds about 20% of the IPv4 address space, but most of it is used within the DoD, on their own private networks, and is not routable to the outside world. Thus, if you are looking for a large chunk of "private" addresses that won't suddenly one day be assigned to Akamai or Amazon (and thus, explode in your face), DoD addresses are the way to go.

There are a couple good reasons for going with this approach. The first is that the existing private address spaces (,, are frequently used inside a home network, and thus might cause routing confusion if also used outside a home gateway. The second is that a large company like BT, with millions of customers, may have exhausted the private address space. The 10.x.x.x network has only 16 million possible addresses, and due to the way it needs to be carved up and routed, yields usable addresses for quite a bit fewer than that. Thus, they may need a few /8 chunks to adequately cover everyone with a management network.

What I'm trying to get to here is Occam's Razor. For many people, when they see 30.x.x.x addresses assigned to the DoD, the simplest explanation is that the DoD is spying on people's home modems. Those of us with more experience see that the most obvious explanation is that BT chose this as pseudo-private address space.

To be clear, that paper contains nothing that is evidence of NSA spying. I may have missed something, because I only skimmed it, skipping the paranoid ravings, but none of the technical details show anything amiss. Many ISPs provide custom firmwares for the modems they sell. These firmwares typically have a management "backdoor" so that the ISP can monitor and/or control the modem. Many, many networks use publicly allocated DoD addresses inside their network as private addresses.

Sunday, December 15, 2013

How we know the 60 Minutes NSA interview was crap

Regardless of where you stand on the Snowden/NSA debate, it's obvious tonight's "60 Minutes" was a travesty of journalism. In exchange for exclusive access to the NSA, CBS parroted dubious NSA statements as fact. We can see this in the way they described what they call the "BIOS plot", which they claim would have destroyed the economy of the United States had the NSA not saved us. The NSA spokesperson they quote, Debra Plunkett, is a liar.

There is probably some real event behind this, but it's hard to tell, because we don't have any details. The event has been distorted to serve the needs of propaganda. It's completely false in the message it is trying to convey. What comes out is gibberish, as any technical person can confirm.

Thursday, December 05, 2013

Literally the nicest thing I’ve ever done

I was out at a bar with Dave Maynor, my business partner, and the conversation went something like this:

Dave: This is literally the best burger ever.
Me: Give me a dollar.
Dave: What?
Me: You said “literally”, so give me a dollar.
Dave: wat
Me: It’s a new rule I’ve just come up with. I’m charging you a dollar every time you say the word “literally”.
(much reflection)
Dave: ok
(fishes a dollar out of his wallet and hands it to me)

Mandela was a great man

Mandela was a bad revolutionary. His side committed atrocities fighting against Apartheid.

Being a victim doesn't make one great. Just because he languished in jail for almost 30 years doesn't make him a great person.

What made Nelson Mandela one of the greatest people of the last 50 years is his presidency. In Africa, revolutionary leaders regularly became worse despots than the colonialists they replaced. What would have normally happened after the end of apartheid would have been righteous anger driving the country into ruin.

Instead, Mandela set his country on a path toward reconciliation, setting up a commission that exposed not only the evils of his political opponents, but the evils of his own side. He created a system to help blacks prosper on their own rather than confiscating the prosperity of whites.

And then, and this is the important part, he stepped down, and ceded power to the next president.

Wednesday, December 04, 2013

Swartz: wiring closets explained

A common misconception about the Aaron Swartz case is that he broke into the wiring closet in order to hack into the network. The recently released surveillance video shows this is not true.

What you see in this video is a typical wiring closet. At the top of the rack, behind it, are wires that fan out to all the nearby rooms. If you are in a classroom and connect your laptop via Ethernet to the wall, this is where the other end of the wire ends up.

The top of the rack is just a patch panel; there is no device there, just wires. The network switch equipment is in the middle of the rack. A bundle of patch cables connects the patch panel at the top to the network switch in the middle.

What you see is that one of the patch cables on the switch goes downward to Aaron's laptop instead of upward to the patch panel. As far as the network switch is concerned, it doesn't matter whether Aaron Swartz connected his laptop directly via a patch cable, or connected via one of the Ethernet jacks in classrooms/hallways.

The reason Aaron put his laptop in the closet, instead of connecting via the hallway jack five feet away, is so that somebody wouldn't steal it. It has nothing to do with nefarious activity trying to hack or break into the network. There are many issues in the Swartz case, and simply entering the closet may be physical trespass, regardless of what he did inside. I'm just pointing out he didn't enter the closet in order to hack the network.

I point this out because these technical details matter, and people keep getting them wrong, on both sides of the Swartz case. For example, Marcia Hoffman and Orin Kerr (two lawyers who have helped with Weev's appeal) got these details wrong in a talk they gave last year at ShmooCon.