There are two fights in Congress now against the DMCA, the "Digital Millennium Copyright Act". One is over Section 512 covering "takedowns" on the web. The other is over Section 1201 covering "reverse engineering", which weakens cybersecurity.
Even before digital computers, since the 1880s, an important principle of cybersecurity has been openness and transparency ("Kerckhoffs's Principle"). Only by making details public can security flaws be found, discussed, and fixed. This includes reverse-engineering to search for flaws.
Cybersecurity experts have long struggled against the ignorant who hold the naive belief that we should instead cover up information so that evildoers cannot find and exploit flaws. Surely, they believe, giving just anybody access to critical details of our security weakens it. The ignorant have little faith that technology can be made secure. They have more faith in government's ability to control information.
Technologists believe this information cover-up hinders well-meaning people and protects the incompetent from embarrassment. When you hide information about how something works, you prevent people on your own side from discovering and fixing flaws. It also means you can't hold anyone accountable for security, since it's impossible to notice security flaws until after they've been exploited. At the same time, the cover-up does little to stop evildoers. Technology can work, it can be perfected, but only if we can search for flaws.
It seems counterintuitive that revealing your encryption algorithms to your enemy is the best way to secure them, but history has proven time and again that this is indeed true. Encryption algorithms your enemy cannot see are insecure. The same is true of the rest of cybersecurity.
Today, I'm composing and posting this blogpost securely from a public WiFi hotspot because the technology is secure. It's secure because of two decades of security researchers finding flaws in WiFi, publishing them, and getting them fixed.
Yet in the year 1998, ignorance prevailed with the "Digital Millennium Copyright Act". Section 1201 makes reverse-engineering illegal. It attempts to secure copyright not through strong technological means, but by the heavy hand of government punishment.
The law was not completely ignorant. It includes an exception allowing what it calls "security testing" -- in theory. But that exception does not work in practice, imposing too many conditions on such research to be workable.
The U.S. Copyright Office has authority under the law to add its own exemptions every 3 years. It has repeatedly added exceptions for security research, but the process is unsatisfactory. It's a protracted political battle every 3 years to get the exception back on the list, and each time it can change slightly. These exemptions are still less than what we want. This causes a chilling effect on permissible research. It would be better if such exceptions were put directly into the law.
You can understand the nature of the debate by looking at those on each side.
Those lobbying for the exceptions are those trying to make technology more secure, such as Rapid7, Bugcrowd, Duo Security, Luta Security, and HackerOne. These organizations have no interest in violating copyright -- their only concern is cybersecurity, finding and fixing flaws.
The opposing side includes the copyright industry, as you'd expect, such as the "DVD" association, which doesn't want hackers breaking the DRM on DVDs.
However, much of the opposing side has nothing to do with copyright as such.
This notably includes the three major voting machine suppliers in the United States: Dominion Voting, ES&S, and Hart InterCivic. Security professionals have been pointing out security flaws in their equipment for the past several years. These vendors are explicitly trying to cover up their security flaws by using the law to silence critics.
This goes back to the struggle mentioned at the top of this post. The ignorant and naive believe that we need to cover up information so that hackers can't discover flaws. This is expressed in their filing opposing the latest 3-year exemption:
The proponents are wrong and misguided in their argument that the Register’s allowing independent hackers unfettered access to election software is a necessary – or even appropriate – way to address the national security issues raised by election system security. The federal government already has ways of ensuring election system security through programs conducted by the EAC and DHS. These programs, in combination with testing done in partnership between system providers, independent voting system test labs and election officials, provide a high degree of confidence that election systems are secure and can be used to run fair and accurate elections. Giving anonymous hackers a license to attack critical infrastructure would not serve the public interest.
Not only does this blatantly violate Kerckhoffs's Principle stated above, it was proven a fallacy at the last two DEF CON cybersecurity conferences. Organizers bought voting machines off eBay and presented them at the conference for anybody to hack. Widespread and typical vulnerabilities were found. These systems were certified as secure by state and federal governments, yet teenagers were able to trivially bypass their security.
The danger these companies fear is not that a nation-state actor will get to play with these systems, but that teenagers playing with them at DEF CON will embarrass the companies by pointing out their laughable security. This proves Kerckhoffs's Principle.
That's why the leading technology firms take the opposite approach to security from election system vendors. This includes Apple, Amazon, Microsoft, Google, and so on. They've gotten over their embarrassment. They are every bit as critical to modern infrastructure as election systems or the power grid. They publish their flaws roughly every month, along with a patch that fixes them. That's why you end up having to patch your software every month. Far from trying to cover up flaws and punish researchers, they publicly praise researchers and, in many cases, offer "bug bounties" to encourage them to find more bugs.
It's important to understand that the "security research" we are talking about is always "ad hoc" rather than formal.
These companies already do "formal" research and development. They invest billions of dollars in securing their technology. But no matter how much formal research they do, informal poking around by users, hobbyists, and hackers still finds unexpected things.
One reason is simply a corollary to the Infinite Monkey Theorem, which states that an infinite number of monkeys banging on an infinite number of typewriters will eventually reproduce the exact works of William Shakespeare. A large number of monkeys banging on your product will eventually find security flaws.
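To make the corollary concrete, here is a minimal, hypothetical sketch in Python of what "monkeys banging on your product" looks like in practice: random inputs thrown at a toy parser until an unhandled case turns up. The parser, its flaw, and the numbers are invented purely for illustration, not taken from any real product; real-world fuzzers like AFL apply the same idea at industrial scale.

```python
# Minimal sketch of the "many monkeys" idea: hammer a parser with random
# inputs until something breaks. parse_config() is a hypothetical stand-in
# for whatever input-handling code a product exposes.
import random
import string

def parse_config(data: str) -> dict:
    """Toy parser with a hidden flaw: it assumes every line contains '='."""
    result = {}
    for line in data.splitlines():
        key, value = line.split("=", 1)   # crashes on any line without '='
        result[key.strip()] = value.strip()
    return result

def random_input(max_len: int = 40) -> str:
    """Generate a short burst of monkey-typing."""
    alphabet = string.ascii_letters + string.digits + "=\n "
    return "".join(random.choice(alphabet)
                   for _ in range(random.randint(1, max_len)))

crashes = 0
for _ in range(10_000):                   # 10,000 monkeys banging on typewriters
    try:
        parse_config(random_input())
    except Exception:                     # any unhandled exception is a finding
        crashes += 1

print(f"unexpected crashes: {crashes} out of 10000 random inputs")
```

No individual random input is clever, but in aggregate the blind hammering reliably stumbles onto the case the developer never imagined.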
A common example is a parent who brings their kid to work; the kid plays around with a product, doing things that no reasonable person would ever conceive of, and accidentally breaks into the computer. Formal research and development focuses on known threats, but has trouble imagining unknown ones.
Another reason informal research is successful is how the modern technology stack works. Whether it's a mobile phone, a WiFi enabled teddy bear for the kids, a connected pacemaker jolting the grandparent's heart, or an industrial control computer controlling manufacturing equipment, all modern products share a common base of code.
Somebody can be an expert in an individual piece of code used in all these products without understanding anything about these products.
I experience this effect myself. I regularly scan the entire Internet looking for a particular flaw. All I see is the flaw itself, exposed to the Internet, but not anything else about the system I've probed. Maybe it's a robot. Maybe it's a car. Maybe it's somebody's television. Maybe it's any one of the billions of IoT ("Internet of Things") devices attached to the Internet. I'm clueless about the products -- but an expert about the flaw.
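For a sense of what such a probe looks like, here is a minimal, hypothetical sketch: it checks only whether a host presents the fingerprint of one flawed piece of software, and learns nothing else about what the device is. The banner string, port, and address are invented for illustration; tools like masscan apply the same idea across the entire Internet.

```python
# Minimal sketch of probing for one specific flaw without knowing anything
# else about the device behind the address. The banner string and target
# address are hypothetical examples, not real vulnerability signatures.
import socket

VULNERABLE_BANNER = b"ExampleHTTPd/1.0"   # hypothetical fingerprint of the flawed version

def probe(host: str, port: int = 80, timeout: float = 3.0) -> bool:
    """Return True if the host's banner matches the vulnerable version."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
            banner = s.recv(1024)
            return VULNERABLE_BANNER in banner
    except OSError:
        return False                      # unreachable, filtered, or not speaking HTTP

if __name__ == "__main__":
    # The probe tells me whether the flaw is present -- not whether the
    # device is a robot, a car, or somebody's television.
    print(probe("192.0.2.1"))             # TEST-NET address, for illustration only
```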
A company, even as big as Apple or Microsoft, cannot hire enough people to be experts in every piece of technology they use. Instead, they can offer bounties encouraging those who are experts in obscure bits of technology to come forward and examine their products.
This ad hoc nature is important when looking at solutions to the problem. Many think the research can be formalized, such as by requiring researchers to contact a company for permission before doing any reverse-engineering of its product.
This doesn't work. A security researcher will buy a bunch of used products off eBay to test out a theory. They don't know enough about the products or the original vendor to know whom to contact for permission. Tracking that down would take more effort than the research itself.
It's precisely this informal and ad hoc "research" that needs protection. The same logic applies to everything else built on openness and transparency. Imagine if we had freedom of the press, but only for journalists first licensed by the government. Imagine freedom of religion, but only for churches officially designated by the government.
Those companies selling voting systems they promise as being "secure" will never give permission. It's only through ad hoc and informal security research, hostile to the interests of those companies, that the public interest will be advanced.
The current exemptions have a number of "gotchas" that seem reasonable, but which create an unacceptable chilling effect.
For example, they allow informal security research "as long as no other laws are violated". That sounds reasonable, but with so many laws and regulations on the books, it's usually possible to argue that a researcher violated some obscure and meaningless one along the way. It means a security researcher is now threatened with years in jail under the DMCA for violating a regulation that on its own would've resulted in a $10 fine.
Exceptions to the DMCA need to be clear and unambiguous that finding security bugs is not a crime. If the researcher commits some other crime during research, then prosecute them for that crime, not for violating the DMCA.
The strongest opposition to a "security research exemption" in the DMCA is going to come from the copyright industry itself -- those companies who depend upon copyright for their existence, such as movies, television, music, books, and so on.
The United States' position in the world is driven by intellectual property. Hollywood is not simply the center of the American film industry, but of the world's film industry. Congress has an enormous incentive to protect these industries, and industry organizations like the RIAA and MPAA have enormous influence over it.
Many of us in tech believe copyright is already too strong. Repeated term extensions have made a mockery of the Constitution's requirement that copyrights last only for a "limited time": works copyrighted decades before you were born will still be under copyright decades after you die. Section 512 takedown notices are widely abused to silence speech.
Yet the copyright-protected industries perceive themselves as too weak. Once a copyrighted work is posted to the Internet for anybody to download, it becomes virtually impossible to remove (like removing pee from a pool). Takedown notices only remove content from the major websites, like YouTube. They do nothing to remove content from the "dark web".
Thus, they jealously defend against anything that would weaken their position. This includes "security research exemptions", which threaten the "DRM" technologies that prevent copying.
One fear is of security researchers themselves: that in the process of doing legitimate research, they'll find and disclose other secrets, such as the encryption keys built into every DVD player on the market that protect DVDs from being copied. There is some truth to that, as security researchers have indeed published information the industry didn't want published, such as the DVD encryption algorithm.
The bigger fear is that evildoers trying to break DRM will be free to do so, claiming their activities are just "security research". They would be free to openly collaborate with each other, because it's simply research, while privately pirating content.
But these fears are overblown. Commercial piracy is already forbidden by other laws, and underground piracy happens regardless of the law.
This law has little impact on whether reverse-engineering happens; its real impact is on whether the fruits of that research get published. And that's the key point: we call it "security research", but all that matters is "published security research".
In other words, we are talking about a minor cost to copyright compared with a huge cost to cybersecurity. The cybersecurity of voting machines is a prime example: voting security is bad, and it's not going to improve until we can publicly challenge it. But we can't easily challenge voting security without being prosecuted under the DMCA.
Conclusion
The only credible encryption algorithms are public ones. The only cybersecurity we trust is cybersecurity we can probe and test, where most details are publicly available. That such transparency is necessary to security has been recognized since the 1880s with Kerckhoffs's Principle. Yet the naive still believe in cover-ups. As the election industry claimed in its brief: "Giving anonymous hackers a license to attack critical infrastructure would not serve the public interest". On the contrary, giving anonymous hackers ad hoc, informal access to probe critical infrastructure like voting machines not only serves the public interest, it is necessary to the public interest. As has already been proven, voting machines have cybersecurity weaknesses their vendors are covering up, weaknesses that can only be revealed by anonymous hackers.
This research needs to be ad hoc and informal. Attempts at reforming the DMCA, like the Copyright Office's exemptions, keep getting narrowed into exemptions that cover only formal research. That ends up having the same chilling effect while claiming to allow research.
Copyright, like other forms of intellectual property, is important, and it's proper for government to protect it. Even radical anarchists in our industry want government to protect "copyleft", the use of copyright to keep open-source code open.
But copyright is not so important that it should be allowed to silence security research. Transparency and ad hoc testing are critical to that research, and they are more and more often being silenced using copyright law.