The Government Accountability Office (GAO) says in a new study, GAO-19-105: Federal Information Security, that most federal agencies are falling behind on implementing federal cyber security standards. The study found that federal agencies need improvement and called on them to do a better job of protecting against intrusions. The GAO used the NIST Cybersecurity Framework (CSF) to define the parameters of the study. The CSF rests on five core functions, defined below:
- Identify: Develop an organizational understanding to manage cyber security risk to systems, people, assets, data, and capabilities.
- Protect: Develop and implement appropriate safeguards to ensure delivery of critical services.
- Detect: Develop and implement appropriate activities to identify the occurrence of a cyber security event.
- Respond: Develop and implement appropriate activities to act regarding a detected cyber security incident.
- Recover: Develop and implement appropriate activities to maintain plans for resilience and to restore capabilities or services that were impaired due to a cyber security incident.
In this blog post, I focus on the “detect” function of this study, and then give my analysis of what I think the situation really is. The report’s front page has already told us that federal agencies need to improve. I apologize in advance, but to justify my later remarks I need to go into some further detail. What we will see is that the “Managing Risk” rating for “Detect” doesn’t match what each agency’s Inspector General and the report itself state.
“Detect” Risk Rating and Report Notes
Based on the graphic below, the GAO seems justified in saying the federal agencies need to improve, but the “detect” function looks healthy, with 82% of agencies rated as effectively managing risk.
Looking further into the report, the Inspector General for each of the assessed agencies rated those agencies as follows:
For the above graphic, the maturity level of each of the CSF functions is assessed based on the Information Security Continuous Monitoring (ISCM) Maturity Model used for FISMA reporting. The ratings are defined as:
Level 1: Ad Hoc: ISCM policies, procedures, and strategy are not formalized; ISCM activities are performed in an ad hoc, reactive manner.
Level 2: Defined: ISCM policies, procedures, and strategy are formalized and documented, but not consistently implemented.
Level 3: Consistently Implemented: ISCM policies, procedures, and strategy are consistently implemented, and the agency performs validation testing. However, quantitative and qualitative effectiveness measures are lacking.
Level 4: Managed and Measurable: Quantitative and qualitative measures of the effectiveness of ISCM policies and procedures are collected across the organization and used to assess the ISCM program and make necessary changes.
By these criteria, the risk associated with detection is effectively managed by only two of the 23 agencies, versus the 19 of 23 that were reported. Let’s look at the specific areas mentioned in the report.
Intrusion Detection Capabilities Lacking
The report says federal agencies are not applying four critical capabilities recommended by NIST. While “only four” may not sound like much, the impact of each is large. The risk introduced by not having these capabilities is glaring and needs to be addressed across all organizations, not just the federal government.
- Cloud Monitoring Not Performed
Fewer than half of the agencies using cloud services monitored their cloud traffic. Cloud services (IaaS, PaaS, or SaaS) are marketed as complete cutover solutions, but regularly checking who has access to your cloud environment, and where they are connecting from, is critical to your cyber security posture. As part of our ActiveEye Cloud monitoring service, we regularly see login attempts from foreign countries where personnel are not located. Monitoring and responding to these access attempts must be part of any modern security program.
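As a rough illustration (not Delta Risk’s actual tooling), a check like the following could flag cloud sign-ins originating from countries where no personnel are located. The log format, field names, and country list are all assumptions for the sake of the sketch:

```python
# Hypothetical sketch: flag cloud sign-in events from unexpected countries.
# The event dictionaries and the EXPECTED_COUNTRIES set are illustrative
# assumptions, not the format of any specific cloud provider's logs.
EXPECTED_COUNTRIES = {"US", "CA"}  # where the workforce actually operates

def flag_suspicious_logins(events):
    """Return sign-in events whose source country is not in the expected set."""
    return [e for e in events if e.get("country") not in EXPECTED_COUNTRIES]

logins = [
    {"user": "alice", "country": "US", "ip": "203.0.113.10"},
    {"user": "bob",   "country": "RU", "ip": "198.51.100.7"},
]

for event in flag_suspicious_logins(logins):
    print(f"ALERT: {event['user']} signed in from {event['country']}")
```

In practice this kind of rule would feed an alert queue for analyst review rather than print to a console, but the core idea, comparing access origin against where your people actually are, is the same.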
- Host-Based Intrusion Prevention Systems Not Fully Deployed
Host-based intrusion prevention systems offer greater protection for endpoints. While 16 out of 23 agencies were using host-based intrusion prevention capabilities, and 15 used memory-based protection, only eight used host-based application whitelisting.
- External and Internal Traffic Not Monitored
Traffic monitoring should happen at the ingress/egress point as well as at key nodes in your network. Almost everyone is aware of monitoring external traffic; monitoring internal traffic, however, means deploying monitoring devices such as intrusion prevention systems or internal firewalls. That way you can see how traffic is moving internally, specifically to counter insider threats or intruders moving laterally.
Most networks that Delta Risk has assessed are flat, meaning that if you have access to one part of the network, you have unrestricted access across the entire internal network. Without internal traffic monitoring, unauthorized access that does not cross the external boundary will not be detected at all. The study also specifically noted that agencies are not monitoring encrypted traffic. Data exfiltration is often done over encrypted channels, and failing to monitor them could mean data is being stolen while no one is watching.
- Security Information and Event Management (SIEM) Capabilities Not Fully Utilized
Having a SIEM solution to log and correlate events and alert on suspicious activity is critical in today’s environment. While 21 out of 23 agencies reported having a SIEM, many did not have it tuned to match known suspicious traffic, generate automated alerts, or monitor the entire range of devices under their control. The key tenets of an ISCM program are that automated alerts are generated and acted upon, either through pre-determined actions or by being forwarded to an analyst for evaluation. Properly configuring and tuning a SIEM requires experienced personnel, which is why Delta Risk’s Managed Security Service, ActiveEye, has been popular in the commercial sector and why we are moving to have it FedRAMP certified.
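To make the idea of SIEM tuning concrete, here is a minimal sketch of one classic correlation rule: alert when a single source IP generates more failed logins than a threshold within a time window. The event shape, threshold, and window are assumptions for illustration, not values from the GAO report or any particular SIEM product:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative SIEM-style correlation rule. THRESHOLD and WINDOW are
# arbitrary example values; real deployments tune these to their baseline.
THRESHOLD = 5
WINDOW = timedelta(minutes=10)

def correlate_failed_logins(events):
    """Return source IPs exceeding THRESHOLD failed logins within WINDOW."""
    by_source = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        if e["outcome"] == "failure":
            by_source[e["src_ip"]].append(e["time"])
    alerts = set()
    for src, times in by_source.items():
        start = 0
        for end in range(len(times)):
            # Slide the window start forward until it fits inside WINDOW.
            while times[end] - times[start] > WINDOW:
                start += 1
            if end - start + 1 > THRESHOLD:
                alerts.add(src)
    return alerts
```

A tuned SIEM is essentially a large library of rules like this, matched to the organization’s actual devices and traffic, with the resulting alerts routed to automated responses or to an analyst.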
In conclusion, this study highlights that the ability of federal agencies to detect threats on their networks is poor. While the report’s findings are important, several similar reports released within the past 20 years have led only to incremental change. None of them provided any real impetus to change the architecture or to hold cyber security and agency leadership accountable (except in response to the OPM data breach).
The government, and most organizations if we are being honest, prioritizes checklists over operational security. We must make cyber security a true priority and realize that adopting policies and developing procedures is not enough to protect data; we must implement a mix of trained personnel, effective processes, and appropriate technology to make lasting changes.
*** This is a Security Bloggers Network syndicated blog from Blog – Delta Risk authored by Keith Melancon. Read the original post at: https://deltarisk.com/blog/gao-federal-agencies-still-vulnerable-to-cyber-attacks/