Our customer's employees are now using our corporate application while working from home. They are concerned about security and about protecting their trade secrets. What security feature can we add for these customers?
The tl;dr answer is this: don't add gimmicky features. Instead, take this opportunity to do the security things you should already be doing, starting with a "vulnerability disclosure program" or "vuln program".
Gimmicks
First of all, I'd like to discourage you from adding security gimmicks to your product. You are no more likely to come up with an exciting new security feature on your own than you are to come up with a miracle cure for covid. Your sales and marketing people may get excited about the feature, and they may get the customer excited about it too, but the excitement won't last.
Eventually, the customer's IT and cybersecurity teams will be brought in. They'll quickly identify your gimmick as snake oil, and you'll have made an enemy of them. They are already involved in securing the server side, the work-at-home desktop, the VPN, and all the other network essentials. You don't want them as your enemy, you want them as your friend. You don't want to send your salesperson into the maw of a technical meeting at the customer's site trying to defend the gimmick.
You want to take the opposite approach: do something that the decision maker on the customer side won't necessarily understand, but which their IT/cybersecurity people will get excited about. You want them in the background as your champion rather than as your opposition.
Vulnerability disclosure program
To accomplish the goal described above, the thing you want is known as a vulnerability disclosure program. If there's one thing the entire cybersecurity industry agrees on (other than hating the term "cybersecurity", preferring "infosec" instead), it's that you need a vulnerability disclosure program. Everything else you might want to do to add security to your product comes after you have this thing.
Your product has security bugs, known as vulnerabilities. This is true of everyone, no matter how good you are. Apple, Microsoft, and Google employ the brightest minds in cybersecurity and they have vulnerabilities. Every month you update their products with the latest fixes for these vulnerabilities. I just bought a new MacBook Air and it's already telling me I need to update the operating system to fix the bugs found after it shipped.
Reports of these bugs come mostly from outsiders. These companies have internal people searching for such bugs, as well as consultants, and they do a good job quietly fixing what they find. But this goes only so far. Outsiders have a wider set of skills and perspectives than the companies could ever hope to assemble themselves, so they find things that the companies miss.
These outsiders are often not customers.
This has been a chronic problem throughout the history of computers. Somebody calls up your support line and tells you there's an obvious bug that hackers can easily exploit. The customer support representative then ignores this because the caller isn't a customer. After all, it seems foolish to waste time adding features to a product that no customer is asking for.
But then the bug leaks out to the public, hackers widely exploit it, damaging customers, and angry customers demand to know why you did nothing to fix the bug despite having been notified about it.
The problem here is that nobody has the job of responding to such problems. The reason your company dropped the ball was that nobody was assigned to pick it up. All a vulnerability disclosure program means is that at least one person within the company has the responsibility of dealing with it.
How to set up a vulnerability disclosure program
The process is pretty simple.
First of all, assign somebody to be responsible for it. This could be somebody in engineering, in project management, or in tech support. There is management work involved (opening tickets, tracking them, and closing them), but at some point a technical person also needs to get involved to analyze the problem.
Second, figure out how the public contacts the company. The standard way is to set up two email addresses, "security@example.com" and "secure@example.com" (pointing to the same inbox). These are the standard addresses that most cybersecurity researchers will attempt to use when reporting a vulnerability to a company. They should point to a mailbox checked by the person assigned in the first step above. A web form for submitting information can also be used. In any case, googling "vulnerability disclosure [your company name]" should yield a webpage describing how to submit vulnerability information -- just like it does for Apple, Google, and Microsoft. (Go ahead, google them, see what they do, and follow their lead.)
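If you go the web form route, the intake can be as simple as the following sketch. This is a hypothetical Flask handler; the route, field names, and the file_ticket() helper are placeholders for whatever ticketing system you already use.

    # Minimal sketch of a vulnerability-report intake form, using Flask.
    # The route, field names, and file_ticket() are placeholders for
    # whatever ticketing system you already use.
    from flask import Flask, request

    app = Flask(__name__)

    def file_ticket(reporter_email: str, details: str) -> str:
        """Placeholder: open a ticket in your existing tracker, return its ID."""
        return "VULN-0001"

    @app.route("/security/report", methods=["POST"])
    def report_vulnerability():
        reporter = request.form.get("email", "anonymous")
        details = request.form.get("details", "")
        ticket_id = file_ticket(reporter, details)
        # The reporter should immediately learn the report was received
        # and which ticket tracks it.
        return {"status": "received", "ticket": ticket_id}, 200

    if __name__ == "__main__":
        app.run()

However the report arrives, the important part is the same as with the email addresses: it lands in front of the person assigned in step one, not in a generic support queue.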
Tech support needs to be trained that "vulnerability" is a magic word: when somebody calls in with a "vulnerability", it doesn't go through the normal customer support process (which starts with "are you a customer?"), but instead gets shunted over to the vulnerability disclosure process.
How to run a vuln disclosure program
Once you've done the steps above, let your program evolve with the experience you get from receiving vulnerability reports. You'll figure it out as you go along.
But let me describe some of the problems you are going to have along the way.
For specialty companies with high-value products and a small customer base, the problem is that almost nobody will ever use this channel. Lack of exercise leads to flab, and thus you'll have a broken process when a true problem arrives.
You'll get spam on this address. This is why even though "security@example.com" is the standard address, many companies prefer web forms instead, to reduce the noise. The danger is that whoever has the responsibility of checking the email inbox will get so accustomed to ignoring spam that they'll ignore legitimate emails. Spam filters help.
Even among notifications that are legitimately about security vulnerabilities, most will be nonsense. Vulnerability hunting is a fairly standard activity in the industry, practiced both by security professionals and by hackers. There are lots of tools to find common problems -- tools that any idiot can use.
Which means idiots will use these tools and, not understanding the results, will claim to have found a vulnerability and waste your time telling you about it.
At the same time, there are lots of non-native English speakers, and native speakers who are just really nerdy, who won't express themselves well. They will find real bugs, but you won't be able to tell, because their communication is so bad.
Thus, you get these reports, most of which are trash, but a few of which are gems, and you won't be able to easily tell the difference. It'll take work on your part, querying the vuln reporter for more information.
Most vuln reporters are emotional and immature. They are usually convinced that your company is evil and stupid. And they are right, after a fashion. When it's a real issue, it's probably something within their narrow expertise that your own engineers don't quite understand. Their motivation isn't necessarily to help your engineers understand the problem, but to watch you fumble the ball, proving their own superiority and your company's weakness. Just because they have glaring personality disorders doesn't mean they aren't right.
Then there is the issue of extortion. A lot of vuln reports will arrive as extortion threats, demanding money or else the person will make the vuln public or give it to hackers to cause havoc. Many of these threats will demand a "bounty". At this point, we should talk about vuln bounty programs.
Vulnerability bounty programs
Once you've had a vulnerability disclosure program for some years and have all the kinks worked out, you may want to consider a "bounty" program. Instead of simply responding to such reports, you may want to actively encourage people to find such bugs. It's a standard feature of big tech companies like Google, Microsoft, and Apple. Even the U.S. Department of Defense has a vuln bounty program.
This is not a marketing effort. Sometimes companies offer bounties that claim their product is so secure that hackers can't possibly find a bug, and they are offering (say) $100,000 to any hacker who thinks they can. This is garbage. All products have security vulnerabilities. Such bounties are full of small print such that any vulns hackers find won't match the terms, and thus, not get the payout. It's just another gimmick that'll get your product labeled snake oil.
Real vuln bounty programs pay out. Google offers $100,000 for certain kinds of bugs and has paid that sum many times. They have one of the best reputations for security in the industry not because they are so good that hackers can't find vulns, but because they are so responsive to the vulns that are found. They've probably paid more in bounties than any other company and are thus viewed as the most secure. You'd think that having so many bugs would make people think they were less secure, but the industry views them in the opposite light.
You don't want a bounty program. The best companies have vuln bounty programs, but you shouldn't. At least, you shouldn't until you've gotten the simpler vuln disclosure program running first. A bounty program will increase the problems I describe above tenfold. Unless you've had experience dealing with the normal level of trouble, you'll get overwhelmed by a bug bounty program.
Bounties are related to the extortion problem described above. Even if all you have is a mere disclosure program without bounties, people will still ask for bounties. These legitimate requests for money may sound like extortion.
The seven stages of denial
When doctors tell patients of a serious illness, the patients go through the seven stages of grief: disbelief, denial, bargaining, guilt, anger, depression, and acceptance/hope.
When real vulns appear in your program, you'll go through those same stages. Your techies will find ways of denying that a vuln is real.
This is the opposite of the problem I describe above. You'll get a lot of trash reports that aren't real bugs, but some bugs are real, and yet your engineers will still claim they aren't.
I've dealt with this problem for decades, helping companies with reports that I believe are blindingly obvious and real, but which their engineers claim are only "theoretical".
Take "SQL injection" as a good example. This is the most common bug in web apps (and REST applications). How it works is obvious -- yet in my experience, most engineers believe that it can't happen. Thus, it persists in applications. One reason engineers will deny it's a bug is that they'll convince themselves that nobody could practically reverse engineer the details out of their product in order to get it to work. In reality, such reverse engineering is easy. I can either use simple reverse engineering tools on the product's binary code, or I can intercept live requests to REST APIs within the running program. Security consultants and hackers are extremely experienced at this. In customer engagement, I've found impossible to find SQL vulnerabilities within 5 minutes of looking at the product. I'll have spent much longer trying to get your product installed on my computer, and I really won't have a clue about what your product actually does, and I'll already have found a vuln.
Server-side vulns are also easier to find than you expect. Unlike with the client application, we can assume that hackers won't have access to the server code. They can still find vulnerabilities. A good example is "blind SQL injection", a class of vulnerabilities that at first glance appear impossible to exploit.
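A rough sketch of how blind SQL injection gets probed in practice (the URL, parameter name, and payloads below are hypothetical): the attacker never sees any query output, they just watch whether the page behaves differently for a condition that is always true versus one that is always false, and then extract data one yes/no question at a time.

    # Rough sketch of boolean-based blind SQL injection probing.
    # The URL, parameter name, and injected expressions are hypothetical;
    # real tools automate this to extract data one bit at a time.
    import requests

    URL = "https://app.example.com/api/items"

    def page_for(payload: str) -> str:
        # The injected condition rides along in an ordinary-looking parameter.
        return requests.get(URL, params={"id": payload}, timeout=10).text

    always_true  = page_for("1 AND 1=1")
    always_false = page_for("1 AND 1=2")

    # If the two responses differ, the server is evaluating our condition,
    # and we can ask arbitrary yes/no questions about the database, e.g.:
    #   1 AND substr((SELECT password FROM users LIMIT 1), 1, 1) = 'a'
    if always_true != always_false:
        print("responses differ: parameter is likely injectable (blind)")
    else:
        print("no observable difference with this probe")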
Even the biggest/best companies struggle with this "denial". You are going to make this mistake repeatedly and ignore bug reports that eventually bite you. It's just a fact of life.
This denial is related to the extortion problem I described above. Take the case where your engineers are in the "denial" phase, claiming that the reported vuln can't practically be exploited by hackers. The person who reported the bug then offers to write a "proof-of-concept" (PoC) to prove that it can be exploited -- but warns that it would take several days of effort, and demands compensation before going through with it. Demanding money for work already done is illegitimate, especially when threats are involved. Asking for money before doing future work is legitimate -- people may rightly be unwilling to work without pay. (There's a hefty "no free bugs" movement in the community from people refusing to work for free.)
Full disclosure and Kerckhoffs's Principle
Your marketing people will make claims about the security of your product. Customers will ask for more details. Your marketing people will respond saying they can't reveal the details, because that's sensitive information that would help hackers get around the security.
This is the wrong answer. Only insecure products need to hide the details. Secure products publish the details. Some publish the entire source code, others publish enough details that everyone, even malicious hackers, can find ways around the security features -- if such ways exist. A good example is the detailed documents Apple publishes about the security of its iPhones.
This idea goes back to the 1880s and is known as Kerckhoffs's Principle in cryptography. It asserts that encryption algorithms should be public instead of secret, and that the only secret should be the password/key. Secrecy about the algorithm prevents your friends from pointing out obvious flaws but does little to stop the enemy from reverse engineering those flaws.
My grandfather was a cryptographer in WW II. He told a story about how the Germans were using an algorithmic "one time pad". Only, the "pad" wasn't "one time" as the Germans thought; it repeated. Through brilliant guesswork and reverse engineering, the Allies were able to discover that it repeated, and thus were able to completely break this encryption algorithm.
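The reason a repeating "one time" pad is fatal can be shown in a few lines: XOR two ciphertexts that reused the same key and the key cancels out, leaving only the two plaintexts combined, which is very guessable. A toy sketch (the messages and key are made up; real attacks use language statistics on the combined plaintexts):

    # Toy sketch of why reusing a "one time" pad breaks the encryption.
    # The messages and key are made up for illustration.
    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    key = os.urandom(16)                    # the pad, wrongly used twice
    c1 = xor(b"ATTACK AT DAWN!!", key)
    c2 = xor(b"RETREAT AT DUSK!", key)

    # XOR of the two ciphertexts: the key cancels out completely,
    # leaving only the XOR of the two plaintexts.
    combined = xor(c1, c2)
    assert combined == xor(b"ATTACK AT DAWN!!", b"RETREAT AT DUSK!")

    # An analyst who guesses part of one message recovers part of the other.
    guess = b"ATTACK "
    print(xor(combined[:len(guess)], guess))   # prints b'RETREAT'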
The same is true of your product. You can make it secure enough that even if hackers know everything about it, they still can't bypass its security. If your product isn't that secure, then hiding the details won't help you much, as hackers are very good at reverse engineering. I've tried to describe above how unexpectedly good they are. All hiding the details does is prevent your friends and customers from discovering those flaws first.
Ideally, this would mean publishing source code. In practice, commercial products won't do this for obvious reasons. But they can still publish enough information for customers to see what's going on. The more transparent you are about cybersecurity, the more customers will trust you are doing the right thing -- and the more vulnerability disclosure reports you'll get from people discovering you've done the wrong thing, so you can fix it.
This transparency continues after a bug has been found. It means communicating to your customers that the bug happened, its full danger, how to mitigate it without a patch, and how to apply the software patch you've developed to fix it.
Your sales and marketing people will hate admitting to customers that you had a security bug, but it's the norm in the industry. Every month, when you apply patches from Microsoft, Apple, and Google, they publish full documentation like this on the bugs. The best, most trusted companies in the world have long lists of vulnerabilities in their software. Transparency about their vulns is what makes them trusted.
Sure, your competitors will exploit this to try to win sales. The response is to point out that this vuln means you have a functioning vuln disclosure program, and that the lack of similar bugs from the competitor means they don't. When you yourself publish the information, it means you are trustworthy, that you aren't hiding anything. When a competitor doesn't publish such information, it means they are hiding something. Everyone has such vulnerabilities -- the best companies admit them.
I've been involved in many sales cycles where this has come up. I've never found it adversely affected sales. Sometimes it's been cited as a reason for not buying a product, but by customers who had already made the decision for other reasons (like how their CEO was a cousin of the salesperson) and were just looking for an excuse. I'm not sure I can confidently say that it swung sales the other direction, either, but my general impression is that such transparency has been more positive than negative.
All this is known as full disclosure, the fact that the details of the vuln will eventually become public. The person reporting the bug to you is just telling you first, eventually they will tell everyone else. It's accepted in the industry that full disclosure is the only responsible way to handle bugs, and that covering them up is irresponsible.
Google's policy is a good example of this. Their general policy is that anybody who notifies them of a vuln should go public in 90 days. This is a little unfair of Google: they use the same 90-day timeframe both for bugs reported in their own products and for bugs they report to other companies. Google has agile development processes and can easily release patches within 90 days, whereas most other companies have less agile processes and would struggle to release a patch within 6 months.
Your disclosure program should include timeframes. The first is when the discoverer is encouraged to make their vuln public, which should be less than 6 months. There should be other timeframes, such as how quickly they'll get a response to their notification (one business day) and how long it'll take engineering to confirm the bug (around a week). At every stage in the process, the person reporting the bug should know the timeframe for the next stage, and an estimate of the final date when they can go public with the bug, fully disclosing it. Ideally, the person discovering the bug never actually discloses it, because you disclose it first, publicly giving them credit for finding it.
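As a sketch, writing the timeframes down explicitly can be as simple as the following (the numbers are the suggestions from this post, not an industry mandate):

    # Sketch of the disclosure timeframes discussed above, written down
    # explicitly so every report gets the same treatment. The numbers are
    # the suggestions from this post, not an industry standard.
    DISCLOSURE_TIMELINE = {
        "acknowledge_report": "1 business day",  # the reporter hears back
        "confirm_or_reject":  "about 1 week",    # engineering triages the bug
        "public_disclosure":  "under 6 months",  # reporter is free to publish
    }

    def status_message(stage: str) -> str:
        """Tell the reporter what happens next and when."""
        return f"Next step '{stage}' is due within {DISCLOSURE_TIMELINE[stage]}."

    print(status_message("confirm_or_reject"))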
Full disclosure makes the "extortion" problem worse. This is because it'll appear that those notifying you of the bug are threatening to go public. Some are, and some will happily accept money to keep the bug secret. Others are simply following the standard assumption that it'll be made public eventually. In other words, that guy demanding money before making a PoC will still go public with his claims in 90 days if you don't pay him -- this is not actually an extortion threat though it sounds like one.
After the vuln disclosure program
Once you start getting a trickle of bug notifications, you'll start dealing with other issues.
For example, you'll be encouraged to do secure development. This means putting security in from the very start of development in the requirements specification. You'll do threat modeling, then create an architecture and design, and so on.
This is fiction. Every company does the steps in reverse order. They start by getting bug reports from the vuln disclosure program. They then patch the code. Then they eventually update their design documents to reflect the change. Then they update the requirements specification so product management can track the change.
Eventually, customers will ask if you have a "secure development" program of some sort. Once you've been responding to vuln reports for a while, you'll be able to honestly say "yes", as you've actually been doing this, in an ad-hoc manner.
Another thing that will come up for products like the one described in this post is zero trust. It's the latest buzzword in the cybersecurity industry and means a wide range of different things to different people. But it comes down to this: instead of using the product over a VPN, the customer should be able to use it securely without the VPN. It means the application, the authentication, and the communication channel are secure even without the added protections of the VPN.
When supporting workers-at-home, the IT/infosec department is probably following some sort of zero-trust model, either some custom solution, or using products from various companies to help it. They are probably going to demand changes in your product, such as integrating authentication/login with some other system.
Treat these features the same as vulnerability bugs. For example, if your product has its own username/password system with passwords stored on your application server, then that's essentially a security bug. You should instead integrate with other authentication frameworks. Actual passwords stored on your own servers are the toxic waste of the security industry and should be avoided.
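A minimal sketch of what "integrate with other authentication frameworks" can look like: the application never sees a password, it only verifies a signed token issued by the customer's identity provider. This assumes the PyJWT library; the JWKS URL and audience below are placeholders for whatever the customer's identity provider publishes.

    # Minimal sketch of verifying a bearer token from an external identity
    # provider (OIDC-style) instead of storing passwords ourselves.
    # Uses the PyJWT library; the JWKS URL and audience are placeholders.
    import jwt
    from jwt import PyJWKClient

    JWKS_URL = "https://idp.example.com/.well-known/jwks.json"   # hypothetical
    AUDIENCE = "corporate-app"                                   # hypothetical

    def verify_bearer_token(token: str) -> dict:
        """Return the token's claims if the signature and claims check out."""
        signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
        return jwt.decode(
            token,
            signing_key.key,
            algorithms=["RS256"],
            audience=AUDIENCE,
        )

    # claims = verify_bearer_token(authorization_header.split()[1])
    # claims["sub"] identifies the user; no password ever touches our server.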
At some point, people are going to talk about encryption. Most of it is nonsense. Whenever encryption gets put into a requirements spec, something gets added that doesn't really protect data -- but it doesn't matter, because it's optional and turned off anyway.
You should be using SSL/TLS to encrypt communications between the client application and the server. If communications happen in the clear, that's a bug. Beyond that, though, I'm not sure I have any clear ideas about where to encrypt things.
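On the client side, Python's standard ssl module already does the right thing if you let it: it verifies the server's certificate against the system CA store and checks the hostname. A minimal sketch (the host name below is hypothetical); the main point is to never turn those checks off.

    # Sketch of a TLS client connection using only the standard library.
    # create_default_context() verifies the server certificate and hostname;
    # don't disable either check. The host name is hypothetical.
    import socket
    import ssl

    HOST = "api.example.com"

    context = ssl.create_default_context()

    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("negotiated:", tls.version())   # e.g. 'TLSv1.3'
            tls.sendall(b"GET / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\n\r\n")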
For products using REST APIs, you should pay attention to the OWASP list of web app bugs. Sure, a custom Windows application isn't the same as a public web server, but most of the OWASP bugs apply to anything that uses REST APIs. That includes SQL injection, but also a bunch of other bugs. Hackers, security engineers, and customers are going to use the OWASP list when testing your product. If your engineers aren't knowledgeable about the OWASP list, it's certain your product has many of the listed bugs.
When you get a vuln notification for one of the OWASP bugs, then it's a good idea to start hunting down related ones in the same area of the product.
Outsourcing vuln disclosure
I describe the problems of vuln disclosure programs above. It's a simple process that nonetheless is difficult to get right.
There are companies that will deal with this pain for you, like BugCrowd or HackerOne. I don't have enough experience to recommend any of them. I'm often a critic, such as when they seem willing to help cover up bugs that I think should be fully disclosed. But they will have the experience you lack when setting up a vuln disclosure program, and they can be especially useful at filtering the incoming nonsense from the true reports. They are also somebody to blame if a true report gets improperly filtered.
Conclusion
Somebody asked how to secure their work-at-home application. My simple answer is: avoid gimmicks, and instead set up a vulnerability disclosure program. It's easy to get started: set up a "security@example.com" email account that goes to somebody who won't ignore it. It's hard to get right, but you'll figure it out as you go along.