Application servers exist to provide services and make resources available to users. However, any server connected to the Internet is inevitably targeted by malicious actors probing its open listening ports. There are millions of such exposed ports across the Internet, which means plenty of opportunity to exploit the services behind them.
Hackers and people with malicious intent scan the IP address space of known public cloud platforms like AWS, hoping to find open listening ports they can use to launch network or application-level attacks and bring down services. Firewalls or security groups are typically used to block access to service ports that are not in use, but the question remains: what do you do about services you want to reach without exposing their ports to the entire Internet?
The obvious question: can we somehow hide these open ports (services) from the Internet, making them invisible, while still keeping the services available to end users?
Among many approaches, including source IP whitelisting and access control groups/policies, which don't really scale for a large and diverse set of users (remote or mobile workers), there is an architectural concept of hiding services behind a perimeter and opening them only for authorized users. One such technique, called port knocking, has been around for a while, and there is even an IETF Internet Draft from 2015 called [TCP Stealth|https://tools.ietf.org/html/draft-kirsch-ietf-tcp-stealth-01] based on a similar concept.
What’s Port Knocking?
Port knocking works by configuring a service to watch firewall logs or packet capture interfaces for connection attempts. If a specific sequence of predefined connection attempts (or “knocks”) is made, the service modifies the firewall rules to open a certain port on demand. This approach allows application services and assets to stay hidden behind perimeter firewalls until a legitimate user is authenticated and authorized to use the service.
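To make the mechanism concrete, here is a minimal sketch of the server-side state machine a port-knocking daemon might run. The knock sequence, timeout, and class name are illustrative assumptions, not part of any standard; a real daemon would feed this from firewall logs and, on success, insert a firewall rule for the source IP.

```python
import time

# Hypothetical knock sequence and timeout -- illustrative values only.
KNOCK_SEQUENCE = [7000, 8000, 9000]   # predefined "knocks", in order
KNOCK_TIMEOUT = 10.0                  # seconds allowed to finish the sequence

class KnockTracker:
    """Tracks connection attempts per source IP and authorizes a source
    once it hits the knock ports in the right order within the timeout."""

    def __init__(self, sequence=KNOCK_SEQUENCE, timeout=KNOCK_TIMEOUT):
        self.sequence = sequence
        self.timeout = timeout
        self.progress = {}   # src_ip -> (next_index, first_knock_time)

    def record_attempt(self, src_ip, port, now=None):
        """Feed one observed connection attempt (e.g. parsed from firewall
        logs). Returns True when the source completes the full sequence."""
        now = now if now is not None else time.monotonic()
        idx, started = self.progress.get(src_ip, (0, now))
        if now - started > self.timeout:          # stale sequence: start over
            idx, started = 0, now
        if port == self.sequence[idx]:            # correct next knock
            idx += 1
        else:                                     # wrong port resets progress,
            started = now                         # but may start a new attempt
            idx = 1 if port == self.sequence[0] else 0
        self.progress[src_ip] = (idx, started)
        if idx == len(self.sequence):
            del self.progress[src_ip]
            # A real implementation would now open the service port for
            # src_ip, e.g. by inserting an iptables/nftables rule.
            return True
        return False
```

Note that a wrong knock silently resets the sequence, so a scanner sweeping ports linearly never completes it, and an observer gets no feedback at any step.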
Today, the idea of treating the Internet as your corporate network has gained momentum with the introduction of Zero Trust, which allows only authenticated and authorized access to enterprise assets. In a Zero Trust architecture, a user’s physical location (remote or on-prem) does not by itself determine the level of trust or access privileges. Instead, a robust set of security controls and authentication techniques governs access to resources.
Software Defined Perimeter (SDP), a.k.a. Dark Cloud
One new approach in the industry to building a Zero Trust architecture is a concept called Software Defined Perimeter (SDP), also known as Dark Cloud because users have no knowledge of the application until they are authenticated and authorized. This concept is slightly different from port knocking: it makes services invisible, and a user must be authorized first.
The difference with SDP: instead of sending a combination of packets to open firewall ports yourself, you trust a controller sitting in the cloud. The controller brokers a mutual TLS (mTLS) session between the Initiating Host (IH), typically a client installed on a user device, and the Accepting Host (AH), a gateway that sits either in the DMZ or behind the firewall. The SDP gateway requires no listening ports; more importantly, it will not accept any connection unless the SDP controller authorizes it. The controller uses Single Packet Authorization (SPA), a variant of the port knocking technique. The result is an authorization fabric in which you control which hosts may connect, and on which ports, to your enterprise assets, creating a Dark Cloud for end users or bad guys looking for open ports to exploit known vulnerabilities in the application. This all fits nicely into a Zero Trust model.
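The SPA idea can be sketched as a single self-authenticating UDP payload that the gateway verifies silently. The field layout, key handling, and constants below are assumptions for illustration (loosely modeled on the general approach of SPA tools), not a wire-compatible format for any real SDP product.

```python
import hmac, hashlib, os, struct, time

# Assumed pre-shared key, distributed out of band -- illustrative only.
SHARED_KEY = b"pre-shared-key-distributed-out-of-band"
MAX_SKEW = 30  # seconds of allowed clock skew for freshness

def build_spa_packet(src_ip: str, requested_port: int, key=SHARED_KEY) -> bytes:
    """Client side: one payload of nonce | timestamp | port | src_ip | HMAC."""
    nonce = os.urandom(16)
    body = nonce + struct.pack("!QH", int(time.time()), requested_port) \
                 + src_ip.encode()
    tag = hmac.new(key, body, hashlib.sha256).digest()
    return body + tag

def verify_spa_packet(packet: bytes, seen_nonces: set, key=SHARED_KEY):
    """Gateway side: constant-time HMAC check, freshness, and replay
    detection. Returns (src_ip, port) on success, None otherwise -- and
    never sends a reply, so an unauthorized scanner learns nothing."""
    body, tag = packet[:-32], packet[-32:]
    if not hmac.compare_digest(hmac.new(key, body, hashlib.sha256).digest(), tag):
        return None                      # forged or tampered packet
    nonce, rest = body[:16], body[16:]
    if nonce in seen_nonces:
        return None                      # replayed packet
    ts, port = struct.unpack("!QH", rest[:10])
    if abs(time.time() - ts) > MAX_SKEW:
        return None                      # stale packet
    seen_nonces.add(nonce)
    return rest[10:].decode(), port
```

The key property is that verification failures produce no response at all, so from the outside the gateway is indistinguishable from a closed port.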
This diagram from the [Cloud Security Alliance (CSA) working group|https://downloads.cloudsecurityalliance.org/] explains the process of SDP.
The model itself makes a lot of sense: services should be accessible only to an authorized group of users and devices.
Challenges with the SDP Model
It also greatly reduces the attack surface: with no open ports, a malicious user cannot cause a denial of service because they cannot even connect. However:
- What would you do with authenticated and authorized devices infected with malware?
- How would you stop that malware from spreading to other applications the device has been authorized to access?
One of the biggest challenges with an SDP approach is the out-of-band controller, which doesn’t sit in the actual data path after the initial authentication request is completed.
Consider a situation where a user connects to an enterprise app. After initial authentication and authorization through an SDP controller, the client (IH) establishes a tunnel to application services via SDP gateways, based on the policies and port numbers. If that user's device then gets infected with malware, the malware can easily spread to other applications via the on-demand open ports, exploiting known vulnerabilities in applications the device is authorized to access, simply because the AH (gateway) has no knowledge or intelligence to block requests inside the tunnel.
I believe SDP is a modern form of client-based VPN that adds identity verification and policies before the user connects to any services, but it does nothing for service insertion, application performance, or truly “SaaSify-ing” enterprise applications. In addition, customer firewalls still need to be configured to accept inbound connections and allow traffic from the SDP gateways. Firewall rules introduce complexity, holes in the perimeter, and added IT maintenance.
The more sophisticated approach is to use an identity aware proxy (IAP) to build Zero Trust as your Next-Gen Access model. An IAP not only authenticates and authorizes users and devices; because application requests terminate at the proxy, they can be inspected, enabling granular application-level access controls rather than firewall rules. Configured policies can therefore reflect user and application intent, not just ports and IPs. And because this is a proxy model, you have the flexibility to layer on services like web application firewalls, bot protection, DLP, and acceleration (CDN).
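To illustrate what "policies reflect user and application intent, not just ports and IPs" can mean, here is a minimal sketch of the per-request authorization decision an IAP might make. The policy table, claims format, and application names are hypothetical; a production IAP would validate a signed identity token (e.g. OIDC) before this step and forward the request upstream only on allow.

```python
# Hypothetical per-application policy: who may access, and with which
# HTTP methods -- expressed in terms of identity and intent, not ports.
POLICY = {
    "wiki":    {"groups": {"employees"}, "methods": {"GET", "POST"}},
    "payroll": {"groups": {"hr"},        "methods": {"GET"}},
}

def authorize(claims: dict, app: str, method: str) -> bool:
    """Allow a request only if the identity is verified, belongs to a group
    permitted for this app, and uses a permitted HTTP method."""
    rule = POLICY.get(app)
    if rule is None or not claims.get("verified"):
        return False
    group_ok = bool(rule["groups"] & set(claims.get("groups", [])))
    return group_ok and method in rule["methods"]
```

Because every request passes through this decision point, a compromised but authenticated device is still confined to the specific applications and operations its identity permits, which is exactly the gap in the tunnel-based SDP model described above.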
Learn about Akamai’s approach to Zero Trust and how we are helping our customers migrate from traditional and modern VPN models to identity aware proxy (IAP).
*** This is a Security Bloggers Network syndicated blog from The Akamai Blog authored by Faraz Siddiqui. Read the original post at: http://feedproxy.google.com/~r/TheAkamaiBlog/~3/lYK0pzsjwDg/software-defined-perimeter---a-modern-vpn-with-traditional-challenges.html