Wednesday, July 31, 2013

I learn something new every DefCon -- from the taxi drivers. This year I asked the cabby about that annoying beeping thing to the left of the steering wheel. The device is shown in the picture below: it's the green display reading 0.9, with a red dot underneath, in the lower left of the picture.
His answer was that it's a tailgating meter from the insurance company. It shows red below the number because, at the moment, he was tailgating (due to the wide angle lens on my iPhone, the car in front is closer than it appears). He sped up to show that when he gets really close, the beeps get angrier.
He also pointed to other insurance-imposed sensors on the car, such as the "dash cams" above the rear view mirror, one pointing forward and one pointing backwards. In the event of an accident, the insurance company will have full telemetry to know precisely who is at fault.
I find this fascinating for two reasons.
The first is that such sensors are plummeting in price. In a few years, they will be standard equipment on cars -- not just taxis, but the cars we drive. In a couple of decades, the push toward automated (Google self-driving) cars will be all about liability and insurance, because drivers will not be willing to be caught on tape causing an accident.
The second reason this interests me is how it fits into the cybersecurity industry. Whenever insurance touches an industry, it starts to dictate rules: insurers will either refuse you as a customer or offer you price discounts, depending on whether you follow their rules. Thus, "cyber insurance" will impose rules like "firewall logging" on its customers.
Monday, July 22, 2013
No, the NSA can't track phones when they are "off"
According to recent stories, the NSA can track a mobile phone even when it's turned off [1 2]. This isn't true -- at least, it's not what you think. It depends upon your definition of "off", "track", and "phone".
The best way to track an "off" phone is to (secretly) install a chip, connected to the phone's battery supply. Thus, even when the phone is "off", that added chip would still be "on". In this case, it's not really the phone itself that's being tracked, but that chip. As long as there's a battery, the same tracking technique would work for laptops, your shoe, or even a gun. (This is how the ATF's "Fast and Furious" gun-tracking program was supposed to work -- but the batteries drained too fast.)
Another way of looking at the problem is defining, exactly, what "off" means. Conceptually, your mobile phone is "off" when you aren't using it. A secondary, ultra-low-power "baseband" processor remains "on" to listen to the cell tower. When the baseband processor detects an incoming call, it turns the rest of the phone back "on". Especially with older "feature phones", turning the phone "completely off" would sometimes leave the baseband processor still "on", thus allowing you to be tracked. For example, some phones had a timing circuit that would occasionally turn on the baseband to grab SMS messages every 10 minutes -- even though the phone was "off" enough that it couldn't receive incoming calls.
Even if the baseband is off, many phones still have an alarm clock that remains "on". As the Nokia 1100 manual states "If the alarm time is reached while the phone is switched off, the phone switches itself on". This timer circuit emits extremely low EMF that may be detectable. Given an area in the countryside where insurgents are hiding, it might be enough to locate them.
The moral of this is that just because you define the phone as "off" doesn't mean that it's 100% completely "off" all the time.
What does "track" mean? Sometimes it simply means "detect". Radio circuits are reactive -- even with the batteries removed. You can blast out a radio wave of a certain frequency and get radio patterns in response [example]. This detection can identify a specific model of cell phone, but it can't get personal information (such as phone number, IMSI, IMEI, ICCID) that would require some part of the device to be "on".
What I'm trying to show here is that while the statement "track phone while off" can be true depending on what is meant by "track", "phone", and "off", it's false in practice. If you turn your iPhone/Android off, the NSA cannot track you by your phone number (or the other personal IDs).
@collinrm @ErrataRob Read up on MASINT. Signature of EM radiation emitted (or resp.) by a specific, individual device can be fingerprinted.
— pbr90x (@pbr90x) July 23, 2013
.@ErrataRob Of course if the NSA elects to modify your phone's firmware, removing the battery is the only way to ensure it's actually "off".
— Marsh Ray (@marshray) July 23, 2013
The NSA is tracking @erratarob's shoes! http://t.co/4be2lvrM8c …
— Dave Piscitello (@securityskeptic) July 23, 2013
Friday, July 19, 2013
Randomized TSA screening is stupid
Cyber-pundit Bruce Schneier has a post saying that automated randomized screening by the TSA is a good idea. He's wrong; it's a stupid idea.
Most airport screening is for smuggling, not terrorism. Countries automate the process to stop corruption, so that airport security can't shake down passengers for money. None of this applies to stopping terrorism in the United States.
The reason randomized screening stops smugglers is that it changes the risk/reward ratio. It's not worth smuggling a $1-million of diamonds through the airport given a 5% chance of them getting confiscated. It's not worth smuggling $100 of cocaine through the airport if there is a 5% chance of going to jail. It stops professional smugglers, those who do it repeatedly, because it means they'll eventually get caught.
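To put a number on "they'll eventually get caught", here's a quick back-of-the-envelope calculation in Python (my own illustration, using the hypothetical 5% search rate from above):

# Probability a repeat smuggler gets searched at least once over n trips,
# assuming each trip has an independent 5% chance of a random search.
p_search = 0.05
for n in (1, 10, 14, 50, 100):
    p_caught = 1 - (1 - p_search) ** n
    print(f"{n:3d} trips: {p_caught:.0%} chance of being searched at least once")
# Roughly: 10 trips -> ~40%, 14 trips -> ~51%, 50 trips -> ~92%.
# A one-shot attacker only ever faces the single-trip 5%.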
This math works in the opposite manner for terrorists. Their goal is to die in a fiery crash. A 5% chance of getting caught means a 5% chance of living. For some weapons, like guns, they aren't likely to even go to jail, as a thousand people a year accidentally bring weapons onto planes without severe consequences. For other weapons, like C4 packed in a Koran, the press generated from the attempted terrorist attack will be nearly as good as a successful attack. In any case, if their first minion gets stopped in a randomized search, the terrorist organization will just send a second one.
Thus, in the words of Bruce Schneier, randomized screening is just security theater. It has little deterrent effect on terrorists.
Schneier says that the automation is good because it's free from bias or profiling. But that's not "security" speaking but "left-wing populism". Bias and profiling are good from a security perspective. Focusing your attention on Middle Eastern males is more secure. Punishing white grandmothers because you feel guilty about the unfairness of profiling is just stupid.
Certainly, profiling is bad for society as a whole. It's bad for crime, for example. If young black men believe they are going to jail anyway, fairly or unfairly, regardless of what they do, they are more likely to commit crime (as John Adams once pointed out). The more we treat an ethnic minority differently, the less they will assimilate, and the more likely their children will want to rebel. That the government profiles in some cases sanctions intolerable bigotry in others. Whatever your politics, there are good reasons to avoid profiling.
But just because profiling is bad in general doesn't detract from its value in the narrow case of "airport security". Automating selection certainly fixes the societal problem, but only by destroying whatever usefulness selective screening has in stopping terrorists in the first place. Therefore, the correct solution is to get rid of selective screening altogether, not automate it to assuage your guilt.
Update: The ever awesome Sergey Bratus points to this NYTimes article that says, and I'm not making this up:
"If terrorists learn that elderly white women from Iowa are exempt from screening, that’s exactly whom they will recruit."I think we have such a fear of being called "bigots" that we'll pretend to believe the plausibility of this statement (h/t Marsh Ray). We should just replace profiling with other security techniques, or simply live with the increased risk, not discard logic because we dislike the practice.
Update: Bruce Schneier responds, characterizing my argument as "profiling makes sense". No, my argument is "random selection doesn't make sense" -- regardless of the efficacy of profiling. I only mention profiling because I believe political correctness encourages people to think wrongly about random selection. The correct policy is to stop the invasive screening, whether by profiling or by random selection.
Thursday, July 18, 2013
NSA denies my FOIA request
I sent a Freedom of Information Act (FOIA) request asking for my phone metadata. I just got the rejection, which I append to the blog post below.
The first thing to notice about the rejection letter is how it includes their talking-points defending the program, highlighting their "strict controls to ensure that no U.S. person is targeted".
The second thing to notice is that this is a form letter sent to everyone who FOIAed their phone number. But I asked for the metadata associated with my IMEI, not my phone number. The IMEI is a lesser-known bit of metadata that was highlighted in the Verizon warrant leaked by Snowden. Obviously, they barely read my request.
Lastly, the denial is due to concerns over "national security". This is nonsense, of course. As the marketing points at the top demonstrate, their bigger concern is what citizens think about the program, not what the adversaries know about it.
Wednesday, July 17, 2013
How the Glass hack works
Recently a vulnerability was discovered in Google Glass. I thought I'd write up a description of how it works.
Glass has no normal input. Thus, there's no way to type in a password for WiFi. The way Google solved this is with QRCodes. This starts on your account page:
You then click on the "My Wifi networks" tile. (By the way, it has this tile interface so that you can easily use a touchscreen instead; during my indoctrination, they had a Google Chromebook Pixel laptop, which has a touchscreen.) Once you select this option, you get the following popup screen:
At this stage, you can use the laptop to type in the WiFi name and password, then touch "Generate Code", which then displays the following qrcode:
At this stage, you point your Glass at this qrcode and take a picture. It'll see the code, recognize it, and change the WiFi settings accordingly.
What Lookout found is that Glass searches any picture it takes, for any reason, for a qrcode, and that if it finds one with WiFi information, it'll switch the WiFi settings.
The problem isn't limited to WiFi settings. Glass uses qrcodes for a lot of reasons, not just WiFi settings. There are likely other problems lurking here.
What information does this qrcode contain? That's an easy question to answer: just point your phone at the above picture and run your qrcode-reader app. What you'll find is that this code contains:
WIFI:T:WPA;S:linksys;P:Password1234;;

Notice that there is nothing special about this information. You can print it out and paste it on your wall, allowing visitors to your home to easily use your WiFi.
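As an aside, here's a minimal sketch of how a payload like this could be decoded once a qrcode reader hands you the text; it's my own illustration of the structure, not Glass's actual parser:

def parse_wifi_payload(payload):
    """Turn 'WIFI:T:WPA;S:linksys;P:Password1234;;' into a dict
    like {'T': 'WPA', 'S': 'linksys', 'P': 'Password1234'}."""
    assert payload.startswith("WIFI:")
    fields = {}
    for field in payload[len("WIFI:"):].split(";"):
        if not field:            # the trailing ';;' yields empty entries
            continue
        name, _, value = field.partition(":")
        fields[name] = value
    return fields

print(parse_wifi_payload("WIFI:T:WPA;S:linksys;P:Password1234;;"))
# {'T': 'WPA', 'S': 'linksys', 'P': 'Password1234'}
# Note that this naive split breaks if a password contains ';' -- exactly
# the sort of ambiguity an attacker would probe.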
As a hacker, several things jump out at me.
The first is the possibility of buffer-overflows. According to the WiFi spec, the name (like "linksys") is supposed to be at most 32 bytes. I wonder what would happen if I put 33 letters into this field of the qrcode, or 100 letters, or 1000 letters? The same goes for the password. Also, not only can I play with the length, I can play with weird characters, like Unicode.
The second thing that jumps out is that there's an obvious format of "name:value;", where the name is separated by a colon and the value terminated by a semicolon. Moreover, fields can be nested inside other fields, which is why there are two semicolons at the end: one terminating the child field, the other terminating the parent field. This means I can probably insert "XYZ:pdq;" in arbitrary locations and not affect the overall effectiveness of the code -- but cause bad side-effects to happen.
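Here's a sketch of the malformed payloads I'd encode into qrcodes first, covering both the oversized-field idea and the field-injection idea above (the test strings are hypothetical examples of my own):

# Candidate payloads for probing the parser: oversized fields, odd
# characters, and injected "name:value;" pairs in unexpected places.
base = "WIFI:T:WPA;S:{ssid};P:{password};;"
payloads = [
    base.format(ssid="A" * 33, password="Password1234"),    # one byte past the 32-byte SSID limit
    base.format(ssid="A" * 1000, password="Password1234"),  # wildly oversized SSID
    base.format(ssid="linksys", password="B" * 1000),       # oversized password
    base.format(ssid="linksys\u202e", password="x"),        # odd Unicode in the SSID
    "WIFI:T:WPA;XYZ:pdq;S:linksys;P:Password1234;;",        # unknown field injected mid-payload
    "WIFI:T:WPA;S:linksys;P:Pass;word;;",                   # ';' inside the password value
]
for p in payloads:
    print(len(p), repr(p[:60]))
# Each string would then be rendered as a qrcode (e.g. with the Python
# 'qrcode' package) and shown to the device under test.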
The third thing that jumps out is that there are possibly other things controlled by the qrcode other than "WIFI". I need to reverse engineer the binary to find what other fields it knows about.
Is security a "super" right that trumps all other rights?
In this CBS interview with Julian Assange, the interviewers ask "don't people have a right to these secret programs monitoring phones to protect against terrorists?". In this interview with the German interior minister, he declares that security/safety is a "super-right" that trumps all other rights. This thinking is wrong.
We know it's wrong because this is always the argument that despotic governments use to suspend rights. Whether it's fiction like 1984 or real-world despots like Saddam Hussein and Muammar Gaddafi, national security is the primary reason they've suspended human rights. The evidence is clear: the more despotic a government, the more it hypes threats to "security".
That's how we know that the NSA surveillance is despotic. The government uses the threat of terrorism to justify the program. But here's the thing: terrorism is a minor threat. It's far down the list of threats we Americans face. You are more likely to die from your furniture falling on you than from a terrorist attack. That our government uses the minor threat of terrorism to justify a major intrusion on Americans conclusively proves that the surveillance is illegitimate and despotic.
The reason we have the Bill of Rights is that throughout history, your primary danger has always been the internal threat from your own government. External threats are relatively minor. That means it's free speech and privacy that are the "super-rights", and security only a secondary right.
Monday, July 15, 2013
The increasing cyberization of the NDAA
The yearly "National Defense Authorization Act" is the yearly defense budget for the U.S., currently over $600,000,000,000. It also sets defense priorities and authorizes defense related activities. For example, two years ago the NDAA infamously authorized the indefinite detention of American citizens suspected of terrorism (and this year it'll try to regulate the proliferation of cyber-weapons).
In the last few years, "cyber"-related topics have exploded. Prior to 2010, the word "cyber" was mentioned only once or twice a year. Since then, the word "cyber" has been mentioned 40 to 80 times each year. This can be seen in the following graph:
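If you want to reproduce the count yourself, here's a rough sketch, assuming you've saved each year's bill text locally as plain-text files (the "ndaa_YEAR.txt" filenames are hypothetical):

import re
from pathlib import Path

# Count mentions of "cyber" (including compounds like "cyberspace" and
# "cyber-weapon") in each year's NDAA text.
pattern = re.compile(r"\bcyber", re.IGNORECASE)
for path in sorted(Path(".").glob("ndaa_*.txt")):
    text = path.read_text(errors="ignore")
    print(path.stem, len(pattern.findall(text)))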
Sunday, July 14, 2013
Thanks EFF for outlawing code
This demonstrates the Orwellian nature of the EFF's populism. They don't stand for principle but for popularity. They abandoned their principle that the Internet is sovereign when they promoted Net Neutrality. They abandon their principle that code is free speech by suggesting that some code needs to be regulated.
The text of the NDAA, below, calls for the president to implement export controls on code:
SEC. 946. CONTROL OF THE PROLIFERATION OF CYBER WEAPONS.
(a) Interagency Process for Establishment of Policy- The President shall establish an interagency process to provide for the establishment of an integrated policy to control the proliferation of cyber weapons through unilateral and cooperative export controls, law enforcement activities, financial means, diplomatic engagement, and such other means as the President considers appropriate.
(b) Objectives- The objectives of the interagency process established under subsection (a) shall be as follows:
(1) To identify the types of dangerous software that can and should be controlled through export controls, whether unilaterally or cooperatively with other countries.
(2) To identify the intelligence, law enforcement, and financial sanctions tools that can and should be used to suppress the trade in cyber tools and infrastructure that are or can be used for criminal, terrorist, or military activities while preserving the ability of governments and the private sector to use such tools for legitimate purposes of self-defense.
(3) To establish a statement of principles to control the proliferation of cyber weapons, including principles for controlling the proliferation of cyber weapons that can lead to expanded cooperation and engagement with international partners.
(c) Recommendations- The interagency process established under subsection (a) shall develop, by not later than 270 days after the date of the enactment of this Act, recommendations on means for the control of the proliferation of cyber weapons, including a draft statement of principles and a review of applicable legal authorities.
Thursday, July 11, 2013
DEF CON fed uninvite: not everything is political
Everyone seems to think that the Dark Tangent is making a political statement by uninviting the feds to DEF CON. Maybe, but there's nothing political in his message. Whatever politics you read into it are the politics you brought with you.
People who run things, from corporate CEOs to con organizers, learn to keep themselves above the fray. They spend a lot of effort heading off conflict before it has a chance to start. They don't take sides. Those who are wedded to their side are sometimes unable to recognize this impartiality.
A highly visible fed presence is likely to trigger conflict with people upset over Snowden-gate. From shouting matches, to physical violence, to "hack the fed", something bad might occur. Or, simply attendees will choose to stay away. Any reasonable conference organizer, be they pro-fed or anti-fed, would want to reduce the likelihood of this conflict.
The easiest way to do this is by reducing the number of feds at DEF CON, by asking them not to come. This is horribly unfair to them, of course, since they aren't the ones who would be starting these fights. But here's the thing: it's not a fed convention but a hacker party. The feds don't have a right to be there -- the hackers do. If badly behaved hackers are going to stir up trouble with innocent feds, it's still the feds who have to go.
Friday, July 05, 2013
Scanning the Internet
- We have moved the process to a new, more powerful host, 216.75.60.203. The two previously given IPs (216.75.60.94, 66.240.192.147) are no longer being used for this project.
- We have mapped the space we intended and are now on a second round to update any changes. We are hoping to make this data available by the end of the year.
- We are looking for interesting ways to visualize the data, if you have suggestions I would love to hear them.
- We have collected nearly 600 gigs worth of data so far.
- World events are making me feel like an old man visiting the neighborhood I grew up in: "Look at that block, before the revolution it was all HPUX servers, now it's all Dell Blade servers with Win2K8... progress, they say..." *shakes fist in the air*
Tuesday, July 02, 2013
I'm hacking your website
A dream team of computer+law geeks has put together an appellate brief in Weev's defense. A major feature is the argument that merely "unwanted" access doesn't mean "unauthorized" under the law: just because you don't like what I do doesn't necessarily make me a criminal.
For example, I use "AdBlock" to block advertisements from websites. Since websites earn money from advertisements, my free-riding with AdBlock is unwanted access. But is this conduct prohibited under the CFAA? I don't think so, but then, I wouldn't have thought Weev's (adding one to a URL) or Lori Drew's (violating ToS) conduct illegal either.
Unwanted access
Weev's lawyers have filed their appeal. It's interesting, readable even for non-lawyers.
Part of the appeal is based on the obvious idea that public websites are, well, public. Just because some computer access isn't "wanted" doesn't necessarily mean that it's "unauthorized". Sure, physical trespass is a good analogy for private computers, but the analogy for public websites is a guest you've invited into your home who ignores your hints that they should leave, because you haven't explicitly told them to.
Take search engines as an example. They steal a website's content in order to profit by it. That's the definition of "search engine". Back when they were invented, they made people upset. They'd overload servers with their aggressiveness. They'd make public things that website owners didn't want to be so public. Somehow, zealous prosecutors avoided making felons out of search engineers, and search engines have become the social norm today -- even though these problems still persist.
The same is true of cyber-security research. I do unwanted things against websites all the time, such as my frequent testing of the Un.org website to see if it's still vulnerable to SQL injection. Those guys hate me. Yet, my blogposts have improved the situation (they fix whatever I post a few days later, and now they've got a WAF in front; I really need to play with that WAF, but I'm lazy).
The reason I'm writing this blogpost is to solicit other examples of unwanted behavior -- things you do that you know is unwanted, but which you believe is "authorized". Or, things that you would do, but aren't sure if you'll be crossing a line. Please add them to the comments below, or send me a tweet @ErrataRob.