Wednesday, May 26, 2010

"Popular Highlights" on the Amazon Kindle

The Amazon Kindle recently added a feature that shares "Popular Highlights" with other people. If many people highlight the same passage, you'll see the passage highlighted on your own Kindle. If you select it, you'll see a popup with a message like "11 other people highlighted this part of the book".

This is a possible privacy violation. I don't see anything wrong with it yet, but it's exactly the sort of thing that makes me uncomfortable. The Kindle shares your highlights this way by default, although you can turn the feature off.

Wednesday, May 19, 2010

Technical details of the Street View WiFi payload controversy

The latest privacy controversy with Google is that while scanning for WiFi access-points in their Street View cars, they may have inadvertently captured data payloads containing private information (URLs, fragments of e-mails, and so on).

Although some people are suspicious of its explanation, Google is almost certainly telling the truth when it claims this was an accident. The nature of WiFi scanning makes it easy to inadvertently capture too much information without realizing it.

This article discusses technically how such scanning works.

Wednesday, May 12, 2010

More "the air is full of packets"

I've posted on this topic before, but I thought I'd mention it again.

I've been at a bar for a few hours today, monitoring wifi broadcasts. Here is what I see for access-points:


Only three access-points are visible.

But what about wifi devices, like phones and laptops?

There are 59 devices in this list, 35 of which are "Apple" (almost all are iPhones, but some might be iPads or MacBooks) [I've sorted the list above, and let the Apple devices fall off the end].

Some of these devices are in the bar, some are further away. The devices looking for "attwifi" are probably down the block hanging out at Starbucks (which uses AT&T access points).

I regularly see the lesser phones like HTC, Samsung, and Palm, but the overwhelming majority of wifi devices I typically see are BlackBerrys and Apple phones, with Apple accounting for more than 50% of devices. That's the strange world we live in: monitor wifi broadcasts almost anywhere, and more than half the devices you see are likely to be from Apple.
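How do monitoring tools know a device is from Apple? The first three bytes of a MAC address (the OUI) identify the manufacturer. Here's a minimal sketch of the lookup; the OUI entries in this table are illustrative placeholders only - real tools consult the full IEEE registry.

```python
# Identify a device's manufacturer from the OUI (the first three bytes)
# of its MAC address. NOTE: the entries below are illustrative
# placeholders; real tools load the full IEEE OUI registry.
SAMPLE_OUIS = {
    "00:1B:63": "Apple",    # placeholder entries -- consult the IEEE
    "00:23:4D": "HTC",      # registry for authoritative assignments
    "00:1D:FE": "Palm",
}

def vendor_of(mac: str) -> str:
    """Return the vendor name for a MAC address, or 'unknown'."""
    oui = mac.upper()[:8]          # "aa:bb:cc:dd:ee:ff" -> "AA:BB:CC"
    return SAMPLE_OUIS.get(oui, "unknown")

print(vendor_of("00:1b:63:12:34:56"))  # Apple
print(vendor_of("02:00:5e:12:34:56"))  # unknown
```

Sorting a capture by this vendor field is how you get counts like "35 of 59 devices are Apple".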

I like criticizing Apple security, but they have implemented one of the most fantastically important security features ever: their devices don't broadcast the SSID they are looking for.

The SSID is the name of the access-point, like "linksys", "attwifi", or "Bob's Home". The normal operation is for devices to send out a broadcast looking for an access-point. For example, if you are connected to "Bob's Home" network most of the time, as you travel with your laptop, it will regularly send out a broadcast looking for "Bob's Home".

One of the evil things a hacker can do is set up a hostile access-point also named "Bob's Home". Let's say you are in an airport, and a hacker sees that your notebook is looking for that access-point. The hacker will quickly reconfigure an access-point to the same name. Within moments, your laptop will connect to it and start sending things across the network - such as passwords or private e-mails - that the hacker can intercept.

Apple does something clever. Instead of broadcasting the names of the access-points it's interested in, an Apple device sends out a broadcast looking for ANY access-point. It will only connect if an access-point responds with the correct name.

Thus, let's say that an Apple iPhone is looking for "Bob's Home". A hacker won't know this; the hacker will only see the blank broadcast. The hacker can attempt to guess the access-point your phone is looking for, such as by responding with "linksys" or "attwifi" (very common names), but if the guess fails, he cannot trick your phone.
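The difference between the two kinds of probes is visible right in the frame. A probe-request body is a list of tagged elements, and element ID 0 is the SSID: a directed probe carries the network name, while a wildcard ("ANY") probe carries a zero-length SSID. A minimal parser sketch:

```python
# Parse the SSID information element out of an 802.11 probe-request
# frame body. The body is a sequence of tagged elements:
#   [element ID (1 byte)][length (1 byte)][value (length bytes)]
# Element ID 0 is the SSID; a zero-length SSID is the "wildcard"
# probe (the kind Apple devices send).

def probe_ssid(body: bytes):
    """Return the SSID string, '' for a wildcard probe, None if absent."""
    i = 0
    while i + 2 <= len(body):
        elem_id, length = body[i], body[i + 1]
        if elem_id == 0:                      # SSID element
            value = body[i + 2:i + 2 + length]
            return value.decode("utf-8", errors="replace")
        i += 2 + length                       # skip to the next element
    return None

# A laptop probing for "Bob's Home" (SSID element, then another
# element such as supported rates):
directed = bytes([0, 10]) + b"Bob's Home" + bytes([1, 4, 2, 4, 11, 22])
# An Apple device probing for ANY network (zero-length SSID):
wildcard = bytes([0, 0]) + bytes([1, 4, 2, 4, 11, 22])

print(repr(probe_ssid(directed)))   # "Bob's Home"
print(repr(probe_ssid(wildcard)))   # '' -- nothing for a hacker to copy
```

A passive sniffer sees "Bob's Home" in the first frame and learns exactly what name to spoof; the second frame gives it nothing.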

We see names in the list above from Apple devices, but only when they've discovered a local access-point they are interested in. In the list above, I see Apple devices trying to connect to "attwifi", "quiznos8699", and "Royal Oak 1", because they have actively tried to connect to those access-points. I have no idea what the names of their home networks are.

The BlackBerrys with "tmobile" and "@Home" probes are interesting. They will reroute calls through your home access-point (if close) so you don't use cellphone minutes. That's gotta be insecure as heck - I need to buy one and find out what the security problems are.

It's not just the phones that are interesting, but other mobile devices. For example, you see a "Cisco" device in the list looking for "BR6#wlan". That's not a phone or a laptop. Instead, it's a bus (or at least, a device in a bus). In Atlanta, as in many cities, the local metro system puts computers on every bus that communicate via wifi. When the buses get back home to the bus yards, they will likely hook up with the home system and transfer information. Meanwhile, sitting in a bar in Atlanta monitoring broadcasts, you'll know a bus is driving by when you see one of these appear in your list.

The same is true of delivery vans and such. Also, many automobile manufacturers, like Ford, have announced wifi for automobiles that will automatically communicate both with the home network via wifi and with phones/laptops inside the car.

You may not need an SDL

This post at Securosis describes why Microsoft's SDL (Security Development Lifecycle) only works for Microsoft. Microsoft agrees in its own post. Both Securosis and Microsoft make fundamental errors about secure development.


Securosis makes the implicit assumption that "you need to be secure" - that it's some sort of higher moral duty for any organization, and that you are morally weak if you aren't pursuing some sort of secure development.

This is wrong. There is no such thing as perfect security. No matter what you do, you cannot be completely secure. The first step in a secure development process is to figure out what level of risk you are willing to accept, and what level of security you need. For many organizations, the correct answer is to ignore security altogether.

As a disinterested expert, I don't care if you are secure. However, I would advise you that certain types of development are prone to some high-risk errors. For example, if you do "website development", then you are at high-risk for "SQL injection", a problem that causes a lot of grief for a lot of organizations.
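SQL injection happens when user input is pasted directly into a query string, letting an attacker rewrite the query itself. Here's a minimal sketch of both the vulnerability and the standard fix (parameterized queries), using Python's built-in sqlite3; the table and data are made up for illustration.

```python
import sqlite3

# Set up a toy database (contents are hypothetical).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'alice-secret')")
db.execute("INSERT INTO users VALUES ('bob', 'bob-secret')")

# Attacker-supplied "username":
evil = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the WHERE
# clause, so the query returns EVERY row.
vulnerable = db.execute(
    "SELECT secret FROM users WHERE name = '%s'" % evil).fetchall()
print(len(vulnerable))  # 2 -- all secrets leak

# Safe: a parameterized query treats the input as data, not SQL.
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (evil,)).fetchall()
print(len(safe))  # 0 -- no user is literally named "nobody' OR '1'='1"
```

For a website, this one habit - never concatenating user input into SQL - eliminates the class of bug I'm warning about.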

On the other hand, if you are building internal applications, such as a utility for extracting data from one database, transforming it, and sticking it in another database, I don't have any particular recommendation. Sure, I'll be annoyed that you are probably embedding administrator passwords in the script and sending the data unencrypted over the wire. However, most organizations ignore security on this sort of thing and never suffer adverse consequences.


Microsoft makes a similar error, claiming that secure development saves money. This is a fallacy. It's like having a budget of $200 for clothes, but your wife comes home having spent $400. She claims, "But they were 75% off, so in reality I saved money."

Most development organizations cannot afford "secure development"; the costs would exceed their entire development budget. No amount of "savings" will make up the difference.

You could make claims like "but a data breach could cost $100 million", and surely an "ounce of prevention" is worth it in that case. Maybe, maybe not. The size of the potential problem is largely irrelevant. You can always construct a scenario where the entire company is at risk of going out of business. The question isn't whether such a scenario could happen, but how likely it is.

While developers tend to underestimate such risks, security types tend to overestimate them. How much money you can "save" in development depends on correctly estimating the risks.

CONCLUSION

Securosis closes their post with "Do what Microsoft did, not what they do", meaning to come up with your own path to secure development. I disagree. I don't think most organizations need secure development, and for those that do, it should be targeted at specific risks rather than be comprehensive.