Over time, we hired dedicated security staff, and with their help improved on the data center security model by segmenting servers within the data center based on the relative importance of the data. We built on that model for many years, eventually adding fine-grained network segmentation, dedicated jump server networks, dedicated management networks, and dedicated console networks. The intent was to isolate data center networks from desktops as much as possible and to prevent propagation of security incidents through the data center networks and across unrelated applications. Because the data center networks contained only known servers running known applications, we were able to implement a bidirectional ‘default deny’ network security policy. In other words, servers within the data center could not connect to addresses on the Internet unless the connection was specifically permitted by firewall policy.
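A minimal sketch of what a bidirectional default-deny policy looks like in practice follows; the networks, ports, and rules below are hypothetical, chosen only to illustrate the idea, not our actual policy:

```python
import ipaddress
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src: ipaddress.IPv4Network   # permitted source network
    dst: ipaddress.IPv4Network   # permitted destination network
    port: int                    # permitted destination TCP port

# Explicit allow list; any flow not matched here is dropped, in both
# directions. There is no blanket "outbound to anywhere" rule.
ALLOW = [
    # App servers may reach the database tier on its listener port.
    Rule(ipaddress.ip_network("10.1.0.0/24"), ipaddress.ip_network("10.2.0.0/24"), 1521),
    # Jump servers may reach app servers over SSH.
    Rule(ipaddress.ip_network("10.9.0.0/24"), ipaddress.ip_network("10.1.0.0/24"), 22),
]

def permitted(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    return any(src in r.src and dst in r.dst and dst_port == r.port for r in ALLOW)

# A data center server trying to reach an arbitrary Internet address is denied:
assert not permitted("10.1.0.5", "93.184.216.34", 443)
# The specifically permitted app-to-database flow is allowed:
assert permitted("10.1.0.5", "10.2.0.7", 1521)
```

The essential property is that the allow list enumerates every legitimate flow and everything else fails closed, which is only practical because the server population and its applications are fully known.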
The strict firewall policy mitigated many of the common attack vectors to which other organizations had succumbed. By restricting an application’s ability to connect out to arbitrary Internet IP addresses, we also lessened our dependency on application security – an area in which applications were (and still are) notoriously weak.
We developed a strong operating principle: “If it can surf the Internet, it cannot be secured”. In other words, when securing our applications and data we did not trust our own desktops. This principle was, and still is, validated by even the most casual following of desktop and application security news.
By following this principle we were able to move nearly all critical user data off of desktops and onto data center servers, where we felt reasonably confident in our ability to secure the data. We came up with methods for allowing remote access to data center servers and applications from what were relatively insecure desktops. We shut off all direct desktop access to database listeners by installing every application that required listener access onto remotely accessible servers; those servers ran what would otherwise have been a desktop application and managed the data that would normally have been downloaded to the desktop. The data never left the data centers, so it was far easier to secure than it would have been once scattered across desktops.
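In firewall terms, that model reduces to two narrow allow rules. A minimal sketch, again with hypothetical networks and ports (3389 standing in for whatever remote-display protocol the published-application servers used, 1521 for a database listener):

```python
import ipaddress

# Hypothetical address space for the remote-application model.
DESKTOPS  = ipaddress.ip_network("192.168.0.0/16")  # user desktops
APP_HOSTS = ipaddress.ip_network("10.1.0.0/24")     # published-application servers
DB_HOSTS  = ipaddress.ip_network("10.2.0.0/24")     # database tier

# (source network, destination network, destination port) allow list.
RULES = [
    (DESKTOPS,  APP_HOSTS, 3389),  # desktops reach only the remote-display port
    (APP_HOSTS, DB_HOSTS,  1521),  # only app servers reach the database listener
    # There is deliberately no (DESKTOPS, DB_HOSTS, ...) rule, so a desktop
    # can never open a connection to a listener; the default deny applies.
]

def permitted(src: str, dst: str, port: int) -> bool:
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return any(s in r[0] and d in r[1] and port == r[2] for r in RULES)

assert permitted("192.168.4.20", "10.1.0.5", 3389)      # desktop to published app
assert not permitted("192.168.4.20", "10.2.0.7", 1521)  # desktop to listener: denied
```

The point of the design is the rule that is absent: desktops interact only with the published application, and only the application servers ever speak to the databases.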
This was fairly complex and expensive to run, as it required a thorough understanding of exactly how every application and technology worked – in many cases something that not even the vendors who wrote the application understood. We often ran into vendors who told us that their application or technology could not be firewalled, or that they would not support us if we attempted to firewall it, or who told us how to firewall the app or technology but were wrong – they simply didn’t know how their own products worked.
This also required a significant effort to convince users that the inconvenience of having to remotely access their data in the data centers bought enough reduction in security risk to meet their obligations and responsibilities to the owners of the data. In most cases the user aspect of the problem was harder to solve than the technical aspect.