Monday, July 26, 2021

Of course you can't trust scientists on politics

Many people make the same claim as this tweet. It's obviously wrong. Yes, the right wing has a problem with science, but this isn't it.

First of all, people trust airplanes because of their long track record of safety, not because of any claims made by scientists. Secondly, people distrust "scientists" when politics is involved because of course scientists are human and can get corrupted by their political (or religious) beliefs.

And thirdly, the concept of "trusting scientific authority" is wrong, since the bedrock principle of science is distrusting authority. What defines science is how often prevailing scientific beliefs are challenged.

Wednesday, July 21, 2021

Risk analysis for DEF CON 2021

It's the second year of the pandemic and the DEF CON hacker conference wasn't canceled. However, the Delta variant is spreading. I thought I'd do a little bit of risk analysis. TL;DR: I'm not canceling my ticket, but I am changing my plans for what I do in Vegas during the convention.

Wednesday, July 14, 2021

Ransomware: Quis custodiet ipsos custodes

Many claim that "ransomware" is due to cybersecurity failures. That's not really true. We are adequately protecting users and computers. The failure is the inability of cybersecurity guardians to protect themselves. Ransomware doesn't make the news when it only accesses the files normal users have access to. The big ransomware news events happened because the ransomware elevated its privileges to those of an "administrator" over the network, giving it access to all files, including online backups.

Generic improvements in cybersecurity will help only a little, because they don't specifically address this problem. Likewise, blaming ransomware on how it breached perimeter defenses (phishing, missing patches, password reuse) will produce only marginal improvements. Ransomware solutions need to instead focus on the typical human-operated ransomware kill chain, identify how attackers typically obtain "administrator" credentials, and fix those problems. In particular, large organizations need to redesign how they handle Windows "domains" and how they "segment" networks.

Monday, July 05, 2021

Some quick notes on SDR

I'm trying to create perfect screen captures from an SDR to explain the world of radio around us. In this blogpost, I'm going to discuss some of the imperfect captures I'm getting; specifically, some notes about WiFi and Bluetooth.

An SDR is a "software defined radio", which digitally samples radio waves and uses number crunching to decode the signal into data. Among the simplest things an SDR can do is look at a chunk of spectrum and measure signal strength. This is shown below, where I'm monitoring part of the famous 2.4 GHz spectrum used by WiFi/Bluetooth/microwave-ovens:
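As a hedged illustration of the number crunching behind such a display, here's a minimal Python sketch that turns a file of raw IQ samples into a power spectrum. The filename, center frequency, and sample rate are assumptions for illustration, not from my actual capture:

    import numpy as np

    center_hz = 2.44e9       # assumed tuner center frequency
    sample_rate = 20e6       # assumed 20 MHz of captured bandwidth
    iq = np.fromfile("capture.iq", dtype=np.complex64)  # hypothetical file

    # Chop the capture into FFT-sized chunks and average their power,
    # which is what a spectrum display computes frame by frame.
    fft_size = 4096
    chunks = iq[: len(iq) // fft_size * fft_size].reshape(-1, fft_size)
    spectrum = np.fft.fftshift(np.fft.fft(chunks, axis=1), axes=1)
    power_db = 10 * np.log10(np.mean(np.abs(spectrum) ** 2, axis=0) + 1e-12)

    # Map FFT bins back to absolute frequencies around the center.
    freqs = center_hz + np.fft.fftshift(np.fft.fftfreq(fft_size, d=1 / sample_rate))
    print(f"strongest signal near {freqs[np.argmax(power_db)] / 1e9:.4f} GHz")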

Sunday, June 20, 2021

When we'll get a 128-bit CPU

On Hacker News, this article claiming "You won't live to see a 128-bit CPU" is trending. Sadly, it was non-technical, so it didn't really contain anything useful. I thought I'd write up some technical notes.

The issue isn't the CPU, but memory. It's not about the size of computations, but about when CPUs will need more than 64 bits to address all the memory future computers will have. It's a simple question of math and Moore's Law.
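Here's the back-of-the-envelope version of that math, as a sketch; the starting point and doubling rate are illustrative assumptions:

    import math

    max_64bit = 2 ** 64        # bytes addressable by 64 bits (~18.4 exabytes)
    biggest_today = 2 ** 50    # assume ~1 petabyte in today's largest machines
    doubling_years = 2         # assume memory doubles roughly every 2 years

    doublings = math.log2(max_64bit / biggest_today)   # 14 doublings to go
    print(f"~{int(doublings * doubling_years)} years before even the biggest "
          "machines exhaust 64-bit addresses")

Under those assumptions, even the biggest machines have decades of headroom, and ordinary desktops with mere gigabytes of RAM are many more doublings behind that.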

Thursday, April 29, 2021

Anatomy of how you get pwned

Today, somebody had a problem: they kept seeing a popup on their screen, an obvious scam trying to sell them McAfee anti-virus. Where was this coming from?

In this blogpost, I follow this rabbit hole on down. It starts with "search engine optimization" links and leads to an entire industry of tricks and scams: exploitative popups, attempts to infect your machine with viruses, and theft of email addresses and credit card numbers.

Evidence of the attack first appeared with occasional popups like the following. The popup isn't part of any webpage.




This is obviously a trick. But from where? How did it "get on the machine"?

There are lots of possible answers. But the most obvious answer (to most people), that your machine is infected with a virus, is likely wrong. Viruses are generally silent, doing evil things in the background. When you see something like this, you aren't infected ... yet.

Instead, things popping up with warnings like this are almost entirely due to evil websites. But that's confusing, since this popup doesn't appear within a web page. It's off to one side of the screen, nowhere near the web browser.

Moreover, we spent some time diagnosing this. We restarted the web browser in "troubleshooting mode" with all extensions disabled and went to a clean website like Twitter. The popup still kept happening.

As it turns out, he had another window with Firefox running under a different profile. So while he'd cleaned out everything in this one profile, he wasn't aware the other one was still running.

This happens a lot in investigations. We first rule out the obvious things, and then struggle to find the less obvious explanation -- when it was the obvious thing all along.

In this case, the reason the popup wasn't attached to a browser window is that it's a new type of popup notification that's supposed to act more like an app and less like a web page. It has a hidden web page underneath, called a "service worker", so the popups keep happening even when you think the webpage is closed.

Once we figured out the mistake of the other Firefox profile, we quickly tracked this down and saw that indeed, the site was in the Notification list with Permissions set to Allow. Simply changing this solved the problem.

Note that the above picture of the popup has a little wheel in the lower right. We are taught not to click on dangerous things, so the user in this case was avoiding it. However, had the user clicked on it, it would've led him straight to the solution. I can't recommend you click on such a thing and trust it, because that just means future malicious tricks will contain similar safe-looking icons that aren't so safe.

Anyway, the next question is: which website did this come from?

The answer is Google.

In the news today was the story of the Michigan guys who tried to kidnap the governor. The user googled "attempted kidnap sentencing guidelines". This search produced a page with the following top result:


Google labels this a "featured snippet". It isn't an advertisement, nor a "promoted" result. It's just a link that Google's algorithms think is somehow more worthy than the rest.

This happened because hackers tricked Google's algorithms. It's been a constant cat-and-mouse game for 20 years, in an industry known as "search engine optimization" or SEO. People are always trying to trick Google into placing their content highest -- both legitimate companies and the quasi-illegitimate operators we see here. In this case, they seem to have succeeded.

The way this trick works is that the hackers posted a PDF instead of a webpage containing the desired text. Since PDF documents are much less often abused for SEO purposes, Google apparently trusts them more.

But the hackers have found a way to make PDFs more useful. They designed this one to look like a webpage with a standard CAPTCHA. You click anywhere on the page, such as on the "I'm not robot" button, and it takes you to the real website.



But where is the text I was promised in Google's search result? It's there, behind the image. PDF files have layers. You can put images on top that hide the text underneath. Humans only see the top layer, but Google's indexing spiders see all the layers, and will index the hidden text. You can verify this by downloading the PDF and using tools to examine the raw text:
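For example, a minimal sketch using the pdfminer.six library (the filename here is hypothetical) recovers the text layer regardless of the images stacked on top of it:

    # pip install pdfminer.six
    from pdfminer.high_level import extract_text

    # extract_text() walks the PDF's content streams directly, so it
    # returns the hidden text layer even when an image covers it.
    hidden_text = extract_text("fake-captcha.pdf")  # the downloaded PDF
    print(hidden_text[:500])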


If you click on the "I am not robot" in the fake PDF, it takes you to a page like the following:


Here's where the "hack" happened: the user accidentally misclicked on "Allow" instead of "Block". Once they did that, popups kept happening, even after the window appeared to go away.

The lesson here is that "misclicks happen". Even the most knowledgeable users, the smartest of cybersecurity experts, will eventually misclick.

As described above, once we identified this problem, we were able to safely turn off the popups by going to Firefox's "Notification Permissions".

Note that the screenshots above are a mixture of Firefox images from the original user, and pictures of Chrome where I tried to replicate the attack in one of my browsers. I didn't succeed -- I still haven't been able to get any popups appearing on my computer.

So I tried a bunch of different browsers: Firefox, Chrome, and Brave on both Windows and macOS.

Each browser produced a different result, a sort of A/B testing based on the User-Agent (the string sent to webservers that identifies which browser you are using). Sometimes following the hostile link from that PDF attempted to install the popup notifications from our original example, but sometimes it tried something else.
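I don't have the site's actual code, but as a hedged sketch, server-side User-Agent branching of this kind typically looks something like the following (Flask, with hypothetical route names):

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.route("/landing")
    def landing():
        # Pick the scam most likely to work on this browser,
        # like A/B testing. All routes here are hypothetical.
        ua = request.headers.get("User-Agent", "")
        if "Firefox" in ua:
            return redirect("/download/encrypted.zip")  # the virus download
        if "Chrome" in ua:
            return redirect("/enable-notifications")    # the popup trick
        return redirect("/free-movies-signup")          # the email/card funnel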

For example, on my Firefox, it tried to download a ZIP file containing a virus:


When I attempted the download, Firefox told me it's a virus -- probably because Firefox knows the site it came from is evil.

However, Microsoft's free anti-virus didn't catch it. One reason is that it comes as an encrypted zip file. In order to open the file, you have to first read the unencrypted text file to get the password -- something humans can do but anti-virus products can't (or at least, not well).


So I opened the password file to get the password ("257048169") and extracted the virus. This is mostly safe -- as long as I don't run it. Viruses are harmless sitting on your machine as long as they aren't running. I say "mostly" because even for experts, "misclicks happen", and if I'm not careful, I may infect my machine.
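Extracting such a file programmatically is simple with Python's standard zipfile module, assuming the archive uses the legacy ZipCrypto scheme the standard library supports (AES-encrypted zips need a third-party module). The archive and folder names here are hypothetical; the password is the one from the text file:

    import zipfile

    # The password ships in a plain-text file next to the archive --
    # trivial for a human to read, but opaque to automated scanners.
    with zipfile.ZipFile("download.zip") as zf:   # hypothetical filename
        zf.extractall(path="quarantine", pwd=b"257048169")
    # The extracted virus is inert until you run it. Don't.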

Anyway, I want to see what the virus actually is. The easiest way to do that is upload it to VirusTotal, a website that runs all the known anti-virus programs on a submission to see what triggers what. It tells me that somebody else uploaded the same sample 2 hours ago, and that a bunch of anti-virus vendors detect it, with the following names:


With VirusTotal, you can investigate why anti-virus products think it may be a virus. 

For example, anti-virus companies will run viruses to see what they do. They run them in "emulated" machines that are a lot slower, but safer. If viruses find themselves running in an emulated environment, they stop doing all the bad behaviors the anti-virus programs might detect. So they repeatedly check the timestamp to see how fast they are running -- if too slow, they assume emulation.
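The check itself is trivial. Here's a hedged sketch of the general technique (illustrative only, not this particular virus's code):

    import time

    def looks_emulated(iterations=1_000_000, budget_secs=0.5):
        # Time a busy loop. On real hardware this takes a fraction of
        # a second; inside a slow emulated sandbox it can take far
        # longer, which is the tell the virus looks for.
        start = time.monotonic()
        total = 0
        for i in range(iterations):
            total += i
        return (time.monotonic() - start) > budget_secs

    if looks_emulated():
        raise SystemExit  # malware would go dormant here, hiding its behavior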

But this itself is a bad behavior. This timestamp checking is one of the behaviors the anti-virus programs flagged as suspicious.


You can investigate on VirusTotal the other things it found with this virus.
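If you want to script such lookups, VirusTotal's v3 API exposes the same report. A minimal sketch, assuming you have your own API key (the hash below is a placeholder, not the real sample's):

    import requests

    API_KEY = "your-virustotal-api-key"
    SHA256 = "placeholder-hash-of-the-sample"

    # Fetch the existing analysis report for a file by its hash.
    resp = requests.get(
        "https://www.virustotal.com/api/v3/files/" + SHA256,
        headers={"x-apikey": API_KEY},
    )
    attrs = resp.json()["data"]["attributes"]
    print(attrs["last_analysis_stats"])  # malicious/suspicious/undetected counts
    for vendor, result in attrs["last_analysis_results"].items():
        if result["category"] == "malicious":
            print(vendor, result["result"])  # the per-vendor virus names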

Viruses and disconnected popups weren't the only tricks. In yet another attempt with web browsers, the hostile site attempted to open lots and lots of windows full of advertising. This is a direct way they earn money -- hacking the advertising companies rather than hacking you.

In yet another attempt with another browser, this time from my MacBook Air, it asked for an email address:

I happily obliged, giving it a fake address.

At this point, the hackers are going to try to use the same email and password to log into Gmail, into a few banks, and so on. Credential reuse is one of the top hacks these days (if not the most important one): since most people reuse the same password for everything, even though the site isn't asking for your Gmail or bank password, most of the time people will simply reuse them anyway. (This is why you need to keep important passwords separate from unimportant ones -- and write down your passwords or use a password manager.)

Anyway, I now get the next webpage. This is a straight-up attempt to steal my credit card -- maybe. It's a website called "AppCine.net" that promises streaming movies with free signup, but requires a credit card.

This may be a quasi-legitimate website. I say "quasi" because their goal isn't outright credit card fraud, but a "dark pattern" whereby they make it easy to sign up for the first month free with a credit card, then make it nearly impossible to stop the service, continuing to bill you month after month. As long as the charges are small each month, most people won't bother going through all the effort of canceling the service. And since it's not actually fraud, people won't call their credit card company and reverse the charges, since they actually did sign up for the service and haven't canceled it.

It's a slimy thing the Trump campaign did in the last election: their website asked for one-time donations but tricked people into unwittingly making them recurring donations. This caused a lot of "chargebacks" as people complained to their credit card company.

In truth, everyone uses the same pattern: make it easy to sign up, get you to sign up for more than you realize, and then make it hard to cancel. I thought I'd canceled an AT&T phone but found out they'd kept billing me for 3 years, despite the phone no longer existing or using their network.

They probably have a referral program. In other words, they aren't out there doing the SEO hacking of Google themselves. Instead, they pay others to do it for them, giving a percentage as profit -- either for incoming links, or more likely for "conversions": money whenever somebody actually enters their credit card number and signs up.

Those people are in turn layered through different middlemen. It probably goes like this:
  • somebody skilled at SEO, who sends links to a broker
  • a broker who then forwards those links to other middlemen
  • middlemen who then deliver those links to sites like AppCine.net that actually ask for an email address or credit card
There are probably even more layers -- like any fine-tuned industry, there are lots of specialists who focus on doing their job well.

Okay, I'll play along, and I enter a credit card number to see what happens (I have a bunch of used debit cards for playing this game). This leads to an error message saying the website is down and they can't deliver videos for me, but then another box pops up asking for my email, from yet another movie website:

This leads to yet another site:
It's an endless series. Once a site "converts" you, it then simply sells the link back to another middleman, who then forwards you on to the next. I could probably sit there all day with fake email addresses and credit cards and still not come to the end of it all.

Summary

So here's what we found.

First, there was a "search engine optimization" hacker who specializes in getting their content at the top of search results for random terms.

Second, they pass hits off to a broker who distributes the hits to various hackers who pay them. These hackers will try to exploit you with:
  • popups pretending to be anti-virus warnings that show up outside the browser
  • actual virus downloads in encrypted zips that try to evade anti-virus, but not well
  • endless new windows selling you advertising
  • theft of your email address and password, in hopes that you've simply reused one from legitimate websites, like Gmail or your bank
  • signups for free movie websites that try to get your credit card and charge you legally
Even experts get confused. I had trouble helping this user track down exactly where the popup was coming from. Also, any expert can misclick and make the wrong thing happen -- this user had been clicking the right thing ("Block") for years and accidentally hit "Allow" this one time.

Wednesday, April 21, 2021

Ethics: University of Minnesota's hostile patches

The University of Minnesota (UMN) got into trouble this week for a study in which they submitted deliberately vulnerable patches to open-source projects, in order to test whether hostile actors can do this to hack things. After a UMN researcher submitted a crappy patch to the Linux Kernel, kernel maintainers decided to rip out all recent UMN patches.

Both things can be true:

  • Their study was an important contribution to the field of cybersecurity.
  • Their study was unethical.
It's like Nazi medical research on victims in concentration camps, or U.S. military research on unwitting soldiers. The research can be wildly unethical and at the same time produce useful knowledge.

I'd agree that their paper is useful. I would not be able to immediately recognize their patches as adding a vulnerability -- and I'm an expert at such things.

In addition, the sorts of bugs it exploits show a way forward in the evolution of programming languages. It's not clear that a "safe" language like Rust would be the answer: Linux kernel programming requires tracking resources in ways that Rust would consider inherently "unsafe". Instead, the C language needs to evolve with better safety features and better static analysis. Specifically, we need to be able to annotate the parameters and return values of functions. For example, if a pointer can't be NULL, then it needs to be documented as a non-nullable pointer. (Imagine if pointers, like integers, came in two flavors -- ones that can sometimes be NULL and ones that can never be NULL.)
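Standard C doesn't have this yet (though Clang's _Nonnull/_Nullable annotations are a step in that direction). As a loose analogy, here's a sketch in Python, whose optional static checking already makes exactly this distinction:

    from typing import Optional

    def strlen(s: str) -> int:               # "non-nullable": s is never None
        return len(s)

    def lookup(key: str) -> Optional[str]:   # "nullable": result may be None
        return {"a": "1"}.get(key)

    value = lookup("b")
    strlen(value)        # a checker like mypy rejects this call before runtime
    strlen(value or "")  # forcing the NULL-check first makes it legal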

So I'm glad this paper exists. As a researcher, I'll likely cite it in the future. As a programmer, I'll be more vigilant in the future. In my own open-source projects, I should probably review some previous pull requests that I've accepted, since many of them have been of the same crappy quality, simply adding a (probably) unnecessary NULL-pointer check.

The next question is whether this is ethical. Well, the paper claims to have sign-off from their university's IRB -- their Institutional Review Board that reviews the ethics of experiments. Universities created IRBs to deal with the fact that many medical experiments were done on either unwilling or unwitting subjects, such as the Tuskegee Syphilis Study. All medical research must have IRB sign-off these days.

However, I think IRB sign-off for computer security research is stupid. Things like masscanning of the entire Internet are undecidable with traditional ethics. I regularly scan every device on the IPv4 Internet, including your own home router. If you paid attention to the packets your firewall drops, some of them would be from me. Some consider this a gross violation of basic ethics and get very upset that I'm scanning their computer. Others consider it the expected consequence of the end-to-end nature of the public Internet -- that there's an inherent social contract that you must be prepared to receive any packet from anywhere. Kerckhoffs's Principle from the 1800s suggests that the core ethic of cybersecurity is exposure to such things rather than trying to cover them up.

The point isn't to argue whether masscanning is ethical. The point is to argue that it's undecided, and that your IRB isn't going to be able to answer the question better than anybody else.

But here's the thing about masscanning: I'm honest and transparent about it. My very first scan of the entire Internet came with a tweet "BTW, this is me scanning the entire Internet".

A lot of ethical questions in other fields come down to honesty. If you have to lie about something or cover it up, then there's a good chance it's unethical.

For example, the west suffers a lot of cyberattacks from Russia and China. Therefore, as a lone wolf actor capable of hacking them back, is it ethical to do so? The easy test is: when discovered, would you say "yes, I did that, and I'm proud of it", or would you lie about it? I admit this is a difficult question, because it's posed in terms of whether you'd want to evade the disapproval of other people, when the reality is that you might not want to get novichoked by Putin.

The above research is based on a lie. Lying has consequences.

The natural consequence here is that now that UMN did this study, none of the patches they submit can be trusted. It's not just the one submitted patch. The kernel maintainers are taking a scorched-earth response, reverting all recent patches from the university and banning future ones. It may be a little hysterical, but at the same time, this is a new situation that no existing policy covers.

I partly disagree with the kernel maintainers' conclusion that the patches "obviously were _NOT_ created by a static analysis tool". This is exactly the sort of noise static analyzers have produced in the past. I reviewed the source file to see how a static analyzer might come to this conclusion, and found it's exactly the sort of thing one might produce.

But at the same time, it's obviously noise and bad output. If the researchers were developing a static analyzer, they should have recognized this as crap output and not submitted low-quality patches like this one. The main concern researchers need to focus on in static analysis isn't increasing detection of vulns, but decreasing noise.

In other words, the debate here is whether the researcher is incompetent or dishonest. Given that UMN has practiced dishonesty in the past, it's legitimate to believe they are doing so again. Indeed, "static analysis" research might also include research in automated ways to find subversive bugs. One might create a static analyzer to search code for ways to insert a NULL pointer check to add a vuln.

Now incompetence is actually a fine thing. That's the point of research: to learn things. Starting fresh without all the preconceptions of old work is also useful. That researcher has problems today, but a year or two from now they'll be an ultra-competent expert in their field. That's how one achieves competence -- by making mistakes, lots of them.

But either way, the Linux kernel maintainers' response of "we are not part of your research project" is valid. These patches are crap, regardless of which research project they are pursuing (static analyzer or malicious patch submissions).


Conclusion

I think the UMN research into bad-faith patches is useful to the community. I reject the idea that their IRB, which is focused on biomedical ethics rather than cybersecurity ethics, would be useful here. Indeed, it's done the reverse: IRB approval has tainted the entire university with the problem, rather than limiting the fallout to just the researchers, who could've been disavowed.

The natural consequence of being dishonest is that people can't trust you. In cybersecurity, trust is hard to win and easy to lose -- and UMN lost it. The researchers should have understood that "dishonesty" was going to be a problem.

I'm not sure there is a way to ethically be dishonest, so I'm not sure how such useful research can be done without the researchers or sponsors being tainted by it. I just know that "dishonesty" is an easily recognizable issue in cybersecurity that needs to be avoided. If anybody knows how to be ethically dishonest, I'd like to hear it.

Update: This person proposes a way this research could be conducted to ethically be dishonest: