Our Chief Scientist at ShiftLeft, Fabian Yamaguchi, previously discussed language-neutral analysis using Code Property Graphs (CPGs) and how this innovative technology is leveraged in the ShiftLeft platform. At ShiftLeft, the science of static code analysis is transformed into the art of understanding your code’s behavior and generating a Security Profile — a means of describing the Security DNA of an application.
In this post I want to explore dynamic analysis of programs. Analyzing programs at runtime from a security perspective is a non-trivial task. The fact that security and reliability of code in production is of paramount importance puts constraints on what can be done as the code executes. The bedrock of any kind of dynamic analysis — whether it is for security or performance analysis — is the concept of code instrumentation.
What better way is there to learn about an application’s health and understand its behavior than to have the application tell you itself?
The concept of probes in an application has been employed in multiple sub-domains of computing, given fancy names, packaged, repackaged and resold. One such instrumentation technique on which many tools have been built is software tracing. Tracing lets us build tools that provide metrics such as program flows, data flows and function latency. While tracing techniques in their purest sense have been used for performance analysis and debugging, the nature of the source data (function call stacks, arguments, timestamps) also lets us implement security features such as Control Flow Integrity (CFI) and dynamic taint-tracking by maintaining state at runtime. Such advanced analyses have traditionally been the domain of security research teams, and the tools have predominantly focused on developer-centric features used during development or in-house testing. The modern DevOps scenario, however, demands this level of advanced security in production systems. The burden of robustness therefore falls on the specific instrumentation techniques used.
So, is it logging, eh? Nein. The expectations are different. While logging may be useful for recording important, usually infrequent events (such as conveying error messages), tracing can operate at function or instruction granularity, yielding more in-depth information about high-frequency events with the goal of negligible overhead.
The process of instrumentation involves inserting an extra piece of code into a code block either at compile time (such as -finstrument-functions in GCC or Clang) or at runtime (Cantrill et al.). Such probing is usually done at the entry and exit of the function under observation (shown as a bar in the graphic) and lets us gather certain information about functions as they execute.
This information can be timestamps (useful for profiling) or function arguments and return values (useful to identify what data is flowing through the function). Information collection happens inside a probe handler that is called when the instrumentation point is encountered. The handler performs filtering and data collection, and generates what we may call an event. The collection of events can be analyzed for meaningful data flowing through the application and can generate actionable information about its possible deviation from the norm.
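As a minimal sketch of this idea, the decorator below inserts entry and exit probes around a Python function; the handler records arguments, return values and a latency measurement as events that can be analyzed later. The `probe` name, the `lookup` target function and the event tuples are all illustrative, not part of any particular tool.

```python
import functools
import time

events = []  # collected events, to be analyzed later

def probe(func):
    """Wrap func with entry/exit probes whose handler emits events."""
    @functools.wraps(func)
    def handler(*args, **kwargs):
        start = time.perf_counter_ns()
        events.append(("enter", func.__name__, args))  # entry probe
        result = func(*args, **kwargs)
        # Exit probe: capture the return value and the elapsed time.
        events.append(("exit", func.__name__, result,
                       time.perf_counter_ns() - start))
        return result
    return handler

@probe
def lookup(user_id):
    return {"id": user_id}

lookup(42)
```

In a real tracer the handler would filter aggressively and ship events off the hot path, since the goal is negligible overhead on the instrumented function.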
Native Code: With native code written in C/C++, the instrumentation techniques can be either static or dynamic. In static instrumentation, the probe and handler code can be part of the target itself as it is built-in during compile time. In some cases, the compiler may allow insertion of special hooks at the entry and exit of functions which can be used to enable or disable probes dynamically (GCC’s mcount, for example). For pure dynamic instrumentation, binary instrumentation frameworks such as Dyninst can be useful as they allow modification of binaries at instruction level. In fact, such dynamic instrumentation techniques for security analysis have been explored before as well (Roundy et al.). While an elegant approach, binary modification is always tricky and may not perform well in all scenarios (such as obfuscated code where function boundaries need to be guessed).
Managed or Dynamic Code: Instrumenting managed or dynamic code like Java, Python or Ruby is somewhat easier due to standardization and built-in instrumentation in their respective virtual machines and interpreters. As an example, the JVM provides a rich set of instrumentation APIs which can be used to build tools for vulnerability detection or security policy enforcement (Hao et al.). Such tools rely on javaagent-based bytecode instrumentation, which allows the observation code to insert itself with near-zero overhead and collect interesting data from target functions. BTrace is a good example of a dynamic tracing tool for Java that allows safe insertion of analysis code using Java bytecode instrumentation. Similarly, Python and Ruby interpreters support static trace hooks known as USDT (User-level Statically Defined Tracing) probes, which can be useful to understand the application flow.
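For interpreter-provided hooks, CPython’s standard `sys.settrace` API is one concrete example: it lets observation code see every function call without modifying the target at all. This is a sketch of the mechanism only, not of any specific tool named above, and the traced function `parse` is illustrative.

```python
import sys

calls = []

def tracer(frame, event, arg):
    # The interpreter invokes this hook for frame events; we record
    # only "call" events, i.e. each function invocation.
    if event == "call":
        calls.append(frame.f_code.co_name)
    return None  # no per-line tracing needed

def parse(data):
    return data.strip()

sys.settrace(tracer)        # enable dynamic tracing
parse("  hello  ")          # this call is observed, not modified
sys.settrace(None)          # disable tracing again
```

Production tracers prefer lower-overhead mechanisms such as bytecode rewriting or USDT probes, because a global trace function slows down the whole interpreter while it is installed.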
Inserting Runtime Security
As I have discussed, various mechanisms exist to instrument code and obtain program and data flows from within applications. These have led to the development of techniques such as dynamic taint-tracking, which track tainted data through the application by statefully monitoring methods and call stacks as the program executes. This might be fine for testing, and indeed folks have experimented with Valgrind and shown promising results (Newsome et al., Enck et al.). But, as expected, the runtime overhead of some implementations can be prohibitive on production systems (Bell et al.). It makes sense, therefore, to have a reduced observation set. At ShiftLeft, we stretch the boundaries further and bring this level of analysis to bear at runtime, making it production-ready.
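The core idea of dynamic taint-tracking can be sketched in a few lines: mark untrusted input at a source, propagate the mark through operations on the data, and check it at security-sensitive sinks. The `Tainted` class, `propagate` helper and `sink` function below are hypothetical simplifications for illustration, not how any of the cited systems are implemented.

```python
class Tainted(str):
    """String subclass marking untrusted input; operations on it
    propagate the taint (a simplified sketch, not production-grade)."""

def taint(value):
    # Source: mark data arriving from an untrusted boundary.
    return Tainted(value)

def propagate(a, b):
    # Propagation: any operation involving a tainted operand
    # yields a tainted result.
    result = a + b
    if isinstance(a, Tainted) or isinstance(b, Tainted):
        return Tainted(result)
    return result

def sink(query):
    # Sink: refuse tainted data that was never sanitized.
    if isinstance(query, Tainted):
        raise ValueError("tainted data reached a sensitive sink")
    return query

user_input = taint("1 OR 1=1")  # e.g. an attacker-controlled parameter
query = propagate("SELECT * FROM users WHERE id=", user_input)
```

Real implementations track taint per byte or per object across the whole runtime (bytecode instrumentation in Phosphor, VM-level tracking in TaintDroid), which is exactly where the overhead discussed above comes from.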
For production systems, one of the goals is to minimize the overhead that the inserted analysis code itself introduces.
Monitor and Enforce
From a security viewpoint, apart from fine-grained analysis and monitoring alerts, coarse-grained observation is equally important. System-wide events such as privilege escalation, accessing restricted resources and unsafe network operations need to be tracked. This continuous security awareness reinforces confidence and helps one form a peripheral view of the application: how it interacts with the underlying system and hardware, and where the ingress and egress points for messaging and network operations lie. Along with fine-grained analysis, coarse-grained analysis lets us anticipate events, form a visual model of where the application conforms to its Security DNA, and detect if something is getting mutated.
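In CPython this kind of coarse-grained observation is available through standard audit hooks (PEP 578, `sys.addaudithook`): the runtime itself raises named events for file, network and process operations, which a hook can record. A minimal monitoring sketch, with an illustrative choice of events to watch:

```python
import sys

observed = []

def monitor(event, args):
    # Record coarse-grained runtime events of interest
    # (file opens and outbound network connections).
    if event in ("open", "socket.connect"):
        observed.append(event)

sys.addaudithook(monitor)

# Touching the filesystem now raises an "open" audit event,
# even though this particular open fails.
try:
    open("/nonexistent/path")
except OSError:
    pass
```

Note that audit hooks cannot be removed once installed, which is a deliberate design choice: the observation layer should not be trivially disabled by the code it observes.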
Kernel Assistance: Modern operating systems provide multiple mechanisms to probe runtime systems and provide monitoring of events and enforcement of security policies. The Linux kernel provides Linux Security Modules (LSMs) such as SELinux and AppArmor that allow process- as well as resource-level access control policies. Newer kernel features such as eBPF introduce programmability in the analysis of trace and security events, allowing further control and enforcement. As an example, Google Chrome uses the Secure Computing (seccomp) mechanism to implement a syscall-filtering sandbox that restricts which syscalls a process may execute. Linux namespaces also introduce resource separation in containers and, together with seccomp-bpf, provide an initial protection layer.
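CPython’s audit-hook mechanism (`sys.addaudithook`) can play an analogous enforcement role at the interpreter level, in the spirit of a seccomp-style filter: a hook that raises an exception on a denylisted event stops the operation before it executes. This is a sketch only, and the blocked-event set is an illustrative choice.

```python
import os
import sys

BLOCKED = {"os.system"}  # events we refuse, akin to a syscall denylist

def enforce(event, args):
    # Raising from an audit hook aborts the audited operation.
    if event in BLOCKED:
        raise RuntimeError(f"blocked event: {event}")

sys.addaudithook(enforce)

try:
    os.system("echo hi")  # never reaches the shell
    blocked = False
except RuntimeError:
    blocked = True
```

Unlike seccomp, this runs inside the same process it polices, so it is a defense-in-depth layer rather than a sandbox boundary; kernel mechanisms remain the stronger guarantee.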
At runtime, ShiftLeft leverages modern security constructs and instrumentation provided by the application itself, as well as dynamic code analysis, to provide granular control over application security.
Securing the Security
One of the most important pitfalls at runtime is the assumption that the code providing security is itself secure. As numerous incidents have shown, anti-virus systems can themselves become an attack surface. This suggests a few guidelines for developing runtime analysis code. It is important to leverage lessons learned over the past 25+ years of software development and to use modern programming languages such as Go and Rust, which provide inherent safety constructs. Event collection and aggregation should follow secure-communications best practices, and any instrumented code should be verified and checked. As examples, Dyninst checks for recursive trampolines while building analysis code for insertion, and eBPF has its own in-kernel verifier that checks for loops and illegal memory accesses. Another approach is to build on systems that have existed for a long time and are battle-tested, such as the Linux kernel and the JVM. This does not, of course, relieve the developer from auditing the application and testing it in-house.
To learn more about ShiftLeft and get started with a free trial, visit our website at https://www.shiftleft.io/.
References
[Cantrill et al. 2004] Dynamic Instrumentation of Production Systems, Proceedings of the USENIX Annual Technical Conference (ATEC ’04), 2004
[Roundy et al. 2010] Hybrid Analysis and Control of Malware, Proceedings of the 13th International Conference on Recent Advances in Intrusion Detection (RAID ’10), 2010
[Hao et al. 2013] On the Effectiveness of API-Level Access Control Using Bytecode Rewriting in Android, Proceedings of the 8th ACM SIGSAC Symposium on Information, Computer and Communications Security, 2013
[Newsome et al. 2005] Dynamic Taint Analysis for Automatic Detection, Analysis, and Signature Generation of Exploits on Commodity Software
[Enck et al. 2014] TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones, ACM Transactions on Computer Systems
[Bell et al. 2014] Phosphor: Illuminating Dynamic Data Flow in Commodity JVMs, 29th Annual ACM Conference on Object-Oriented Programming, Systems, Languages & Applications (OOPSLA ’14)
Dynamic Analysis of Modern Systems — Strategies and Pitfalls was originally published in ShiftLeft Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.
This is a Security Bloggers Network syndicated blog post authored by Suchakra Sharma. Read the original post at: ShiftLeft Blog - Medium