The safest computer system in the world is one that is dumped in concrete and thrown into the ocean.
It’s the safest, and also the most useless.
Security goals can’t be “keep data safe at all costs”; the business would be unable to function. DevOps processes have spread security across departments with different requirements and focuses: compliance requirements that are open to interpretation, technical requirements that may or may not be necessary, and a perpetual push for a smaller, tighter budget.
Most organizations focus on defensive security (i.e., preventing someone from accessing things they shouldn’t). But that becomes less of a burden when you invest in high-quality infrastructure, strong automation, and integration testing: the investment reduces security risk first, then lowers costs overall.
This approach has been formalized by a software development strategy, pioneered by Google and adopted by giants including Facebook and Amazon, that also applies to security practices. Dubbed “infrastructure first,” the strategy aligns the controls that yield reliable deployments with effective data protection, and it can be done without a massive budget.
One Problem, Not Three
Security, IT and infrastructure teams are often treated as having separate concerns. For instance, the security team decides that rolling out centralized logging infrastructure is their top priority and doesn’t consider that logging infrastructure, if built well, serves the entire engineering organization. The infrastructure team builds out infrastructure with very high levels of automation and availability, but doesn’t work with the security team to make sure the automation is also auditable. The IT team decides to roll out single-sign-on for their laptops and enterprise apps, but doesn’t involve the infrastructure team to ensure a highly available design, or the security team to ensure that access to the authentication system is properly secured and managed.
Infrastructure first designs treat infrastructure, security and IT teams as fundamentally solving the same problem: safe, reliable access to the organization’s services. Keeping these teams siloed means that the expertise each provides (automation, safety and customer service) isn’t effectively shared throughout the org. Who wouldn’t want a security team with better customer service? Who wouldn’t want an IT team with a security mindset? Collaboration via cross-functional teams removes the pretense of separate missions and instead looks for overlap. Think about it: The threats most organizations face today are things such as the CEO clicking on a phishing email.
To pave the way for this style of work, start by getting your team leaders together to talk frankly about the problems you all see. Then run a pilot project (maybe something such as deploying osquery to employee laptops) with engineers from each team working together. When we worked with the Department of Defense, a bureaucratic system of departments each with its own objectives and processes, weekly (not monthly) meetings with all stakeholders had an impact. Having everyone in the room for the same conversation, and agreeing on the same course of action, led to a much smoother project.
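A pilot like the osquery rollout gives each team a concrete stake in the same artifact. For example, a minimal osquery query pack (a sketch; the tables come from osquery’s standard schema, but the schedule name and interval are illustrative) lets security use the results for detection, lets infrastructure ship the file through existing configuration management, and gives IT visibility into fleet health:

```json
{
  "schedule": {
    "listening_ports_inventory": {
      "query": "SELECT p.name, lp.port, lp.address FROM listening_ports lp JOIN processes p ON lp.pid = p.pid;",
      "interval": 3600,
      "description": "Which processes are listening on network ports, collected hourly"
    }
  }
}
```

Because the pack is just a versioned config file, changes to what gets collected go through the same review process as any other code change.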
Additionally, have these three teams report to the right people. Having security, IT and infrastructure report to the same director can help businesses use Conway’s Law to their advantage. Another approach, which can be effective if you have a thoughtful CISO, is to have IT report into the security group (especially if you don’t need a strong infrastructure team).
Shift Your Budget Forward, Not Across
Truly high-quality infrastructure provides reliability for the customer, ease of deployment for the engineering team, and high levels of assurance for the security team.
The organizations we think of as tech behemoths have been so successful because they made enormous investments in infrastructure early on. Google, for example, poured engineering resources into its search infrastructure early to ensure it was rock solid, and that investment pays dividends to this day. The minimum viable product philosophy doesn’t preclude infrastructure investment; it simply requires infrastructure that allows rapid product iteration.
Thanks to modern technology, particularly cloud environments such as AWS or GCP, you don’t need a Google-level budget to make this happen. Investing in your infrastructure first looks more like ensuring your software engineering processes are disciplined, and that your infrastructure is represented as code. These investments pay off both in engineering hours and in infrastructure spending; in some cases, you can reduce your cost per application to pennies per hour.
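Representing infrastructure as code can start with a single declarative file. A minimal Terraform sketch (the region, AMI ID, and instance type are placeholder values, not recommendations):

```hcl
# Sketch: one AWS instance declared as code. Because this file lives in
# version control, every change to the infrastructure is peer-reviewed
# and auditable.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0c55b159cbfafe1f0"  # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name      = "app-server"
    ManagedBy = "terraform"
  }
}
```

The same file serves the infrastructure team’s automation goal and the security team’s audit goal at once: the deployed state is reproducible, and its history is a review log.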
By building infrastructure up front instead of at the end of a project, the team can take advantage of those tools and processes for the majority of the development time, using them to automate repetitive tasks such as deployments and integration testing.
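That automation can be as simple as one pipeline definition. A sketch of a GitHub Actions workflow (the job name, make target, and deploy script are hypothetical) that runs integration tests and deploys on every merge:

```yaml
name: ci
on:
  push:
    branches: [main]

jobs:
  test-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run integration tests
        run: make integration-test   # hypothetical make target
      - name: Deploy
        if: success()
        run: ./scripts/deploy.sh     # hypothetical deploy script
```

A pipeline like this does double duty: engineers get repeatable deployments, and security gets an audit trail of who changed what and when.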
Level Up Professional Development
The frameworks I’ve described are only now starting to catch on more widely, in part because the tech industry tends to focus on engineering specialization and doesn’t invest much in cross-training. In some cases, security, infrastructure and IT workers have no software engineering expertise, which makes writing software to automate systems a non-starter.
Training programs can be helpful if done properly, and that requires investment from the organization. Absent formal training, cross-functional teams at least let employees cross-train one another and share mindsets.
For many organizations implementing it, infrastructure first is a work in progress. Around 2000, the industry started to shift from system administrators toward infrastructure engineers and SREs. Today we have DevOps processes, and security is where infrastructure was 20 years ago. As we learn to integrate necessary special skill sets into the software engineering infrastructure as a whole, limits for scaling and delivery may be a thing of the past.