
Virtual Private Onions

If you’ve not checked out Security Onion (SO) yet, you really should. It’s a powerhouse Linux distro, running everything an analyst could need to carry out effective Network Security Monitoring (NSM). The latest beta is looking fantastic; watch the video and try it out. You won’t be sorry!

I’m not going to talk about SO itself (Mother Google has plenty of good things to say); instead I’m going to look at the underlying network infrastructure.

A major component of SO is Sguil, a distributed monitoring system that has sensor and server components that are used for monitoring the network and aggregating output from multiple sensors (you can see a Sguil schematic here to understand how it all fits together). The comms between the sensors and the server are protected by SSL, and SO builds on this with autossh tunnels to protect other traffic streams from interception. Ubuntu’s Uncomplicated Firewall (UFW) opens only the necessary ports to allow SO sensors to talk to their server, so all the installations are protected to a high degree.

If all of the connections between the sensors and the server are on your own infrastructure, this is fine (perhaps you have a management VLAN specifically for this purpose). There is, however, another use case I can think of where we like to use a slightly different method of sensor-to-server communication.

NSM As A Service?

Let’s say you’re in the business of selling NSM to clients, either short-term (“help me understand what’s on my network”) or long-term (“please be our NSM operators in return for huge bushels of cash”). This will probably involve dropping off SO sensor boxes on customer networks and having them all report in to an SO server back at base. The SO sensors will probably be behind the customer’s NAT and firewalls, but this doesn’t matter; the comms will still work. The network of sensors will look a bit like this:

This particular use case isn’t without issue, though – here is where I think that a slightly different approach to the underlying SO infrastructure has value. Consider the following:

  • The Sguil interface shows you that one of the agents on one of the sensors is down. How do you ssh in to fix it if the sensor is behind someone else’s NAT and firewalls? How do you even know which Internet IP address you should use to try?
  • You suspect one of the sensors is under excessive load. It’d be great to install an SNMP agent on the sensor and point Munin at it to try and find the stressed metric. Again, how do we contact the sensor from back at base?
  • You want to implement a custom information flow from the sensor back to the server. You need to think about the security of the new flow – can you implement another autossh tunnel? What if the flow is UDP (e.g., SNMP traps)?
  • One of the sensors is reported lost or stolen. How can you easily stop it from connecting back to base when it’s possibly now in the hands of threat actor du jour?

One possible answer is to build a custom “underlay” network to carry all of the traffic between SO sensors and the server, catering for existing flows and anything else you think of in the future. As a side benefit, the sensors will all have static and known IP addresses, and contacting them from the server will be no problem at all.

Virtual Private Onions

We’ll accomplish this by using OpenVPN to create a secure, privately- and statically-addressed cloud that the sensors and server will use to talk to each other (sorry for using the word “cloud”!). Take our starting point above, showing our distributed sensors at client sites behind client firewalls. By “underlaying” the SO comms with an OpenVPN cloud, what we’ll end up with is something like this:

The client’s firewalls and NAT are still there, but they’re now hidden by the static, bi-directional comms layer provided by OpenVPN. The sensors all talk over the VPN to the server at 10.0.0.1; the server in turn is free to contact any of the sensors via their static addresses in the 10.0.0.0/24 subnet.

Details

I don’t intend for this to be a HOWTO; I want to wait until people tell me that what I’m proposing isn’t a galactic waste of time and completely unnecessary! Conceptually, it works like this:

The SO Server takes on the additional role of an OpenVPN server. Using the easy-rsa scripts that ship with OpenVPN, the Server also becomes a Certificate Authority capable of issuing certificates to itself and to the sensors.
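
For the sake of illustration, here’s roughly what that looks like with the easy-rsa 2.x scripts that shipped with OpenVPN at the time of writing (the paths and the “sensor1” name are just examples):

    # on the Server (or, better still, on a separate offline signing box)
    cd /etc/openvpn/easy-rsa
    . ./vars                    # edit vars first to set KEY_ORG, KEY_SIZE, etc.
    ./clean-all                 # wipes the keys/ directory - first run only!
    ./build-ca                  # creates ca.crt and ca.key
    ./build-key-server server   # certificate and key for the OpenVPN server
    ./build-key sensor1         # one certificate and key per sensor
    ./build-dh                  # Diffie-Hellman parameters for the server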

OpenVPN gives the SO Server a routed tun0 interface with the IP address 10.0.0.1/24 (note routed – if we wanted to, we could give other subnets back at base that sit “behind” the Server access to the SO Sensors via the VPN).
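
As a minimal sketch (the directives are standard OpenVPN ones; the file names and the 10.0.0.0/24 subnet are just the examples used above), the server side could look like this:

    # /etc/openvpn/server.conf - minimal sketch
    port 1194
    proto udp
    dev tun                         # routed, not bridged
    topology subnet                 # one flat /24 rather than /30 pairs per client
    ca ca.crt
    cert server.crt
    key server.key
    dh dh2048.pem                   # or dh1024.pem, depending on easy-rsa's KEY_SIZE
    server 10.0.0.0 255.255.255.0   # server takes 10.0.0.1, clients get the rest
    client-config-dir ccd           # used below to pin each Sensor to a fixed address
    keepalive 10 120
    persist-key
    persist-tun
    status openvpn-status.log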

The Sensors become OpenVPN clients, using certificates from the Server to mutually authenticate themselves to it. The Sensors know to point their OpenVPN clients at the static Internet-facing IP address of the server back at base, getting a tun0 interface and a static IP address out of 10.0.0.0/24 in return.
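
Again as a sketch rather than a recipe (vpn.example.org stands in for the real Internet-facing address back at base, and the ccd file name must match the Common Name on the Sensor’s certificate):

    # /etc/openvpn/client.conf on a Sensor
    client
    dev tun
    proto udp
    remote vpn.example.org 1194     # static Internet-facing address back at base
    nobind
    ca ca.crt
    cert sensor1.crt
    key sensor1.key
    persist-key
    persist-tun

    # on the Server, /etc/openvpn/ccd/sensor1 pins this Sensor to a fixed address:
    #   ifconfig-push 10.0.0.11 255.255.255.0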

When setting up the SO components on the Sensor boxes, the sosetup script is told that the server is at 10.0.0.1 via tun0, ensuring that all SO comms go over the VPN.

The VPN carries all traffic between Sensors and Server, be it Sguil’s SSL connections, autossh tunnels, SNMP traps or queries, raw syslog forwarding, psychic lottery predictions, anything. The Server is also free to ssh onto the Sensors via their static VPN addresses.

Advantages?

We’ve implemented OpenVPN with SO in this manner, and it works well for us. I hope this approach has some advantages:

  • It makes for easier firewalling at the Server and Sensor ends. You only need to open up one port, 1194/UDP by default, and all of the VPN traffic is carried over it (there’s a quick sketch of the firewall rules after this list).
  • You have deterministic sensor IP addressing, something you wouldn’t otherwise have when your sensors are deployed on foreign networks behind NAT, firewalls, and dynamic IP addresses on customer DSL routers.
  • You don’t need to worry about setting up additional autossh tunnels when you dream up a new flow of information between Sensor and Server.
  • You can easily talk to the sensors from the server, for ssh sessions, SNMP polls, or anything else.
  • If a sensor is lost or stolen, all you need to do is revoke its certificate at the Server end and it won’t be able to connect to the VPN (more on sensor security in a moment).
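
On the firewalling point above, the rules really are that simple; something along these lines (the ssh rule on the Sensor is only needed because we want the Server to be able to reach it over the VPN):

    # on the SO Server back at base: only OpenVPN needs to be reachable
    sudo ufw allow 1194/udp

    # on a Sensor: nothing to open inbound on the Internet-facing side - it only
    # connects outbound, and the customer's firewall just needs to permit
    # outgoing UDP/1194. Let the Server in over the VPN for ssh:
    sudo ufw allow in on tun0 from 10.0.0.1 to any port 22 proto tcp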

There are of course some disadvantages:

  • Complexity, which as we all know is the enemy of security.
  • Double encryption. OpenVPN is encrypting stuff, including stuff already encrypted by Sguil and autossh.
  • <insert other glaring flaws here>

Security considerations

On the Server side, keep in mind that it is now a Certificate Authority capable of signing certificates that grant access to your SO VPN. Guard the CA key material well – maybe even use a separate system for certificate signing and key generation. Also, once you’ve generated and deployed certificates and private keys for your sensors, remove the private keys from the Server – these belong on each specific sensor and nowhere else.

On the Sensor side, we have a duty of care towards the clients purchasing the “NSM As A Service”. OpenVPN should be configured to prohibit comms between Sensor nodes, so that a compromise of one Sensor machine doesn’t automatically give the attacker a path onto other customer networks.
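
In OpenVPN terms that mostly means not enabling the directive that switches traffic between clients, plus a belt-and-braces firewall rule on the Server in case IP forwarding is turned on for other reasons:

    # in server.conf: simply do NOT add this directive
    # client-to-client

    # on the Server: drop anything trying to hairpin from one Sensor
    # to another through the tunnel interface
    iptables -A FORWARD -i tun0 -o tun0 -j DROP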

If a Sensor machine is lost or stolen whilst deployed at a customer site, we can of course revoke its certificate to prevent it from connecting to the VPN. That’s great, but the /nsm directory on it is still going to contain a whole bunch of data that the client wouldn’t want turning up on eBay along with the stolen sensor.
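
Revocation is a two-step affair with easy-rsa: revoke the certificate on the signing machine, then make sure the OpenVPN server is checking the resulting CRL. Roughly:

    # on the signing machine
    cd /etc/openvpn/easy-rsa
    . ./vars
    ./revoke-full sensor1        # updates keys/crl.pem

    # copy crl.pem to the OpenVPN server and make sure server.conf contains:
    #   crl-verify crl.pem
    # new connections (and renegotiations) from that certificate are then refused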

I have a plan for this, but I’ve not implemented it yet (comments welcome!). The plan is to have the /nsm directory encrypted (a TrueCrypt volume? something else?), but to have the decryption key stored on the SO Server rather than on the Sensor. When the Sensor starts up, it builds its OpenVPN connection to the Server before the SO services start. Using the VPN, the Sensor retrieves the decryption key (one unique key per Sensor) from the Server (wget? curl? something else?), taking care never to store it on the Sensor’s local disk. Decryption key in hand, the Sensor can decrypt /nsm and allow the SO services to kick off.
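
As a very rough, untested sketch of that boot-time sequence, using a LUKS volume via cryptsetup rather than TrueCrypt, and a purely hypothetical key-serving URL on the Server (the device name is also just an example):

    #!/bin/sh
    # runs on a Sensor once OpenVPN is up, before the SO services start

    # fetch this Sensor's key over the VPN; keep it in memory, never on disk
    # (hypothetical key service - validate the TLS certificate properly in real life)
    KEY="$(curl -s https://10.0.0.1/keys/$(hostname))"

    # unlock the encrypted volume backing /nsm, feeding the key via stdin
    printf '%s' "$KEY" | cryptsetup --key-file - luksOpen /dev/sdb1 nsm_crypt
    mount /dev/mapper/nsm_crypt /nsm

    # ...and only now start the Security Onion services as normal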

In this way, a stolen Sensor box represents minimal risk to the client: the /nsm directory is encrypted, and the key material necessary for decryption is stored on the Server. Provided that the theft is reported to the NSMaaS provider promptly, that key material will be out of reach of the thief once the OpenVPN certificate is revoked.

(Other, crazier, plans involve giving the Sensors GPS receivers and 3G comms for covert phone-home-in-an-emergency situations…)

So there you have it…

…Virtual Private Onions. Any comments?


Alec Waters is responsible for all things security at Dataline Software, and can be emailed at [email protected]
