To wrap up the series, we are going to bring you through a process of narrowing down the shortlist and then testing the products/services in play. With email, it’s less subjective since a malicious email is… well, malicious. But given the challenges of doing policy management at scale (discussed in the last post), you’ll want to ensure a capable UX and sufficient reporting capabilities as well.
Let’s revisit the first rule of buying anything – you drive the process. Vendors will want you to use their process, their RFI/RFP language, their PoC Guide, and their contract language. All of that is well and good if you want to buy their product. What you want is the best product to solve your problems, and that means you drive the process.
We made the case in the introduction post that a majority of attacks start with a malicious email. Thus selecting the best platform remains a critical imperative for enterprises. You want to ensure the chosen vendor addresses the email-borne threats of not just today, but tomorrow as well.
A simple fact of the buying process is that no vendor is going to say – “we’re terrible at X, but you should buy us because Y is what’s most important to you.” Even though they should. It’s up to you to figure out each vendor’s real strengths and weaknesses and line those capabilities up with your requirements. That’s why it’s critical to have a firm handle on your requirements before you start talking to vendors.
The first step is to define your short list of 2-3 vendors who appear to meet your needs. You accomplish this by talking to folks on all sides of the decision. Start with the vendors, but also talk to your friends, third parties (like us), and possibly resellers or managed service providers. When meeting with the vendors, stay focused on how their tool addresses the current threats and their expectations of the next wave of email attacks. Also, make it very clear if you have compliance or data protection issues (or both) since this will impact (rather significantly) the architecture and the capabilities you need to test.
Don’t be afraid to go deep with the vendors. You will spend a bunch of time testing the platforms, so you should ask every question to ensure you can make an educated decision. The point of the short list is to disqualify products that won’t work early in the process, so you don’t waste time later.
Once you have identified the short list, it’s time to get hands-on with the email security platforms and run each through its paces through a Proof of Concept (PoC) test. The proof of concept is where sales teams know they have a chance to win or lose, so they bring their best and brightest. They raise doubts about competitors and highlight their capabilities and successes. They have phone numbers for customer references handy. But for now, forget all that. You are running this show, and the PoC needs to follow your script, not theirs.
Vendors design PoC processes to highlight their product strengths and hide weaknesses. Before you start the PoC, be clear about the evaluation criteria. Your criteria don’t need to be complicated. Your requirements should spell out the key capabilities you need, with a plan to further evaluate each challenger on intangibles such as set-up/configuration, change management, customization, and user experience/ease of use.
Ultimately with email, it starts with accuracy. So you’ll want to see how well the email security platforms block malicious emails. Of course, you could stop there and determine the winner based on who blocks 99.4%, which is better than 99.1% blocking – right? Yes, we’re kidding. You also need to pay attention to manageability at scale.
The preparation involves figuring out the policies you’ll want to deploy on the product. These policies need to be consistent across all of the products/services you test. Here are some ideas on policies to think about:
- Email routing
- Blocked attacks (vs. quarantined)
- Spam/phishing reporting
- Use of email plug-in
- Threat intel feeds to integrate
- Disposition of emails violating content policies
- Attributes requiring email encryption
- Integration with enterprise security systems: SIEM, SOAR, help desk
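One way to keep the policy set consistent across every product you test is to codify it once, in one place, and check each vendor's configuration against it. The sketch below is illustrative only: the field names and values are our own invention, not any vendor's API, and `example.com` addresses are placeholders.

```python
# Illustrative PoC policy baseline. Field names and values are hypothetical,
# not any vendor's configuration schema. The point is a single canonical
# definition applied identically to every product under test.
POC_POLICY_BASELINE = {
    "routing": {"inbound_mx": "poc-gateway.example.com", "fail_open": False},
    "disposition": {
        "malware": "block",            # blocked outright, no user release
        "phishing": "block",
        "spam": "quarantine",          # user-releasable quarantine
        "content_violation": "quarantine",
    },
    "user_reporting": {"plugin_enabled": True, "report_address": "abuse@example.com"},
    "threat_intel_feeds": ["internal-ioc-feed"],   # same feeds for every candidate
    "encryption_triggers": ["contains_pii", "finance_keywords"],
    "integrations": {"siem": "syslog", "soar": None, "helpdesk": None},
}

def diff_policies(baseline: dict, vendor_config: dict) -> list:
    """List top-level policy areas where a vendor's config deviates from the baseline."""
    return [key for key in baseline if vendor_config.get(key) != baseline[key]]
```

A quick `diff_policies(POC_POLICY_BASELINE, vendor_config)` before each test run tells you whether a candidate is actually configured apples-to-apples with the others.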
And we’re sure there are a bunch of other policy drivers we missed. Work with each vendor’s sales team to make sure you can exercise the product/service to its fullest capabilities. Also track any policies a vendor supports above and beyond the common set you defined for all the competitors; you want an apples-to-apples comparison, but you also want to factor in additional capabilities one competitor may offer.
One more thing, we recommend investing in screen capture technology. It is hard to remember what each tool did and how — especially after you have worked a few unfamiliar tools through the same paces. Capture as much video as you can of the user experience — it will come in handy as you reach the decision point.
Without further ado, let’s jump into the PoC.
Almost every email system (Exchange, Office 365, G Suite, etc.) provides some means of blocking malicious email. So that is the base case for comparison. The next question becomes whether you want to take an active or passive approach during the PoC. In an active test, you introduce malicious messages into the environment to track whether the product/service catches messages that should be detected. A passive test runs the product against your actual mail stream, knowing that you will get a bunch of spam, phishes, and attacks.
To undertake an active test, you need to have access to these malicious messages, which isn’t a huge impediment as there are sites that can provide known phishing messages and plenty of places to get malware for testing purposes. Of course, you’ll want to take plenty of precautions to ensure you don’t create a self-inflicted outbreak.
There is risk in doing an active test, but it also lets you evaluate false negatives (missing a malicious message), which create far more damage than a false positive (flagging a legit message as malicious). Active versus passive remains a personal and cultural preference, though we believe every enterprise gets enough crap in its email to determine the effectiveness of an email security gateway without introducing malware during the test.
So what does the test look like? Let’s say (for illustrative purposes) you are testing two email security services, Service A and Service B. You’d need to run four tests to get a relative benchmark of each service’s capabilities:
- Service A baseline: In the baseline test, you see how Service A compares to your existing (or built-in) capabilities. You route the mail stream (which would typically go directly to employee inboxes) through Service A and see how much junk it catches. That represents the value-add of Service A.
- Service B baseline: Similar to the previous baseline test, you run the mail stream from your existing service through Service B and figure out how much junk that competitor catches.
- Service A->B comparative: Now we’re going to get complicated and string the two services together. You take the messages that have been deemed OK by Service A (and subsequently sent to inboxes) and run them through Service B. Got that? If Service A misses something that Service B catches, Service B is better. For that message anyway.
- Service B->A comparative: Now we’re going to send the stream first through Service B and then into Service A. If Service A catches anything, then that’s a mark against Service B.
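If you can line up blocked-message reports from both services over the same stream, the comparative legs reduce to set arithmetic: anything one service blocked that the other passed is a mark against the one that passed it. A minimal sketch, with hypothetical message IDs standing in for entries from the vendors' blocked-message reports:

```python
def compare_services(caught_by_a: set, caught_by_b: set) -> dict:
    """Given the sets of message IDs each service blocked over the same
    mail stream, summarize the comparative A->B and B->A results.
    Message IDs here are hypothetical; in practice you would pull them
    from each vendor's blocked-message report."""
    return {
        "caught_by_both": len(caught_by_a & caught_by_b),
        "A_only": len(caught_by_a - caught_by_b),  # misses by B: marks against Service B
        "B_only": len(caught_by_b - caught_by_a),  # misses by A: marks against Service A
    }

result = compare_services({"m1", "m2", "m3"}, {"m2", "m3", "m4", "m5"})
# result["B_only"] == 2: two messages Service A let through that Service B blocked
```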
This testing approach requires a sufficient number of emails to get a statistically significant sample to see the effectiveness of each service — especially given that each test will involve different messages. So you need to ensure enough time elapses for each service to see a similar number of malicious emails. How much time is that? It depends on your message volume and how many email-borne attacks you see daily. But figure it’s somewhere around two weeks for an enterprise, which is likely tens of millions of emails received.
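As a rough sanity check on what "statistically significant" means here, a normal approximation to a two-proportion comparison shows why small samples cannot separate two services that both block in the high 90s. The numbers below are illustrative, not benchmarks:

```python
import math

def min_detectable_diff(p: float, n: int, z: float = 1.96) -> float:
    """Approximate smallest difference in catch rate distinguishable at ~95%
    confidence when each service sees n malicious messages and the true catch
    rate is around p (normal approximation to a two-proportion comparison)."""
    return z * math.sqrt(2 * p * (1 - p) / n)

# With only 200 malicious messages per service, ~99% catch rates are
# indistinguishable unless they differ by roughly two percentage points:
min_detectable_diff(0.99, 200)     # ~0.02
# At 20,000 malicious messages the resolvable gap shrinks to roughly 0.2%:
min_detectable_diff(0.99, 20000)   # ~0.002
```

That is why the two-week window matters: at enterprise volumes it takes that long to accumulate enough malicious messages for differences between services to be more than noise.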
The vendor should be able to give you a report of the messages its service blocked and why. Remember that anything that the email security service blocks represents a failure of the base platform (baseline) or the other service (comparative). Especially if you do an active test, then you’ll know the gateway should have detected some messages. So these kinds of tests make the value of the security service pretty clear, pretty quickly.
You’ll also want to make sure to spot check quarantined messages to ensure the security service doesn’t generate too many false positives.
You’ll also want to test outbound messages for sensitive content, which requires a bit more of an active approach. Make sure some test messages carry sensitive content, both in the body of the email and in attachments. We don’t recommend putting actual company secrets in the test emails, but you can put together test messages that simulate sensitive data.
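One way to seed that outbound test without risking real secrets is to generate messages carrying clearly synthetic "sensitive" markers that match common DLP patterns. A hedged sketch using Python's standard email library; the marker values, addresses, and helper name are our own, not part of any product:

```python
from email.message import EmailMessage

# Synthetic "sensitive" markers: strings that match common DLP patterns
# but correspond to no real person or account.
FAKE_SSN = "000-12-3456"             # SSNs with a 000 area number are never issued
FAKE_CARD = "4111 1111 1111 1111"    # a well-known test card number

def build_dlp_test_message(to_addr: str, in_body: bool, as_attachment: bool) -> EmailMessage:
    """Build an outbound test email seeded with synthetic sensitive content,
    in the body, in an attachment, or both."""
    msg = EmailMessage()
    msg["From"] = "dlp-test@example.com"   # placeholder sender
    msg["To"] = to_addr
    msg["Subject"] = "DLP PoC test message"
    body = "Quarterly notes."
    if in_body:
        body += f"\nEmployee SSN: {FAKE_SSN}"
    msg.set_content(body)
    if as_attachment:
        msg.add_attachment(f"card_number,{FAKE_CARD}\n",
                           subtype="csv", filename="test_records.csv")
    return msg
```

Sending a batch with different combinations (body only, attachment only, both) tells you whether the service inspects attachments as thoroughly as message bodies.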
Your admin team should grade each service on ease of use and on managing malicious emails while memory is fresh and perceptions are raw, which means asking for their opinions multiple times during the 2-3 week test. After spending a week or two with the other service they won’t remember what they liked and didn’t like about the earlier test — another reason screen grabs are handy.
Given the importance of having employees use the security tools you give them, we suggest you pick a group of 15-20 employees and have them test the user-oriented features, like installing an agent, reporting spam/phishing, and the quarantine system. Do a quick survey of the test group to make sure the tool hits the mark in terms of effectiveness and user experience.
At the end of the test, evaluate both successes and failures of the PoC in terms of your use cases and requirements. Given the dashboards and reports the email security vendors provide, the relative effectiveness of each service should be reasonably clear. Of course, the vendors are going to want to walk you through the results and highlight how awesomely their service performed. Sigh. But that’s part of the game, so you’ll need to sit through an hour-long wrap-up.
The end goal is a recommendation, so you need to document what you think and then present it to the folks to secure your funding. You may not always be in the room when the final decision comes down, so your documentation must clearly articulate the reasons for your choice. We usually structure this artifact of the decision process as follows:
- Requirements: Tell them what you need and who said you need it. This shouldn’t be new information, but it’ll be a good refresher.
- Coverage: What works and doesn’t with the desired solution within the context of your requirements, both now and as you envision the requirements evolving. You want to make sure it’s clear that your choice meets the requirements you just laid out.
- Competition: What other vendors did you disqualify and why? What did you learn in the proof of concept? Are any of the competitors workable? Would you sacrifice any capabilities or features if another product was selected?
- Cost estimate: What would it cost to move to the new platform? How much is a capital expense, and what fraction is operational? What kind of investment in professional services would be required?
- Migration plan: What will the migration entail? How long will it take? Will the migration disrupt services in any way?
- Recommendation: Your entire document should be building to this point, where you put the best path down on paper. If it is a surprise to your audience, you did something wrong. This section is about telling them what they already know and making sure they have an opportunity to ask any remaining questions.
Once you have the thumbs-up from the internal team (let’s hope!), you need to negotiate with the vendor and get the deal done. We aren’t going to get into the specifics of negotiating (you likely have people to do that), but understand that you can use time-honored tactics like waiting until the end of the quarter, playing one vendor against the other (if either could meet your requirements), and possibly asking for non-cash add-ons (like professional services or product modules).
As you wrap up the buying process, let’s step back for a moment to focus on what’s important: getting stuff done as simply and efficiently as possible. The good news is that migration to a new enterprise security email service should go smoothly since you’ve already run email through the security service during the PoC. It’s just a matter of putting the winning service back inline and blocking the bad stuff (both inbound and outbound).
We recommend monthly check-ins with your vendor account team for at least the first six months. Are you getting the value you expected? What’s working well and what’s not, especially now that the service is running in a real production environment? You can move to quarterly check-ins once everything is working as advertised.
Around the time you feel comfortable with the system, you can open the discussion with the vendor about the adjacent services discussed in the last post, like security awareness training, DNS security, or archiving/eDiscovery. Walk before you think about running: get your base capabilities in place and working well, and then start thinking about how to add value to the email security platform.
So with that, now you should have a good sense of how to select your next enterprise email security platform. We’re going to package this up as a white paper over the next few weeks, so keep an eye out for it to appear in our research library later this year.
*** This is a Security Bloggers Network syndicated blog from Securosis Blog authored by [email protected] (Securosis). Read the original post at: http://securosis.com/blog/selecting-enterprise-email-security-the-buying-process