Addressing the Security of In-Memory Computing

In-memory computing is increasingly being tapped for its speed and security capabilities

As our world increasingly embraces online interactions, it’s important to pay close attention to the speed and security of the systems that drive those exchanges. Both requirements translate into a better user experience, which is a top priority for any business in today’s customer-centric environment. Speed, however, is not only about throughput, i.e., handling large volumes of interactions. It also entails low-latency processing that gives customers faster responses to their requests.

Yet security, an obvious requirement for data protection and privacy, often doesn’t get the level of attention it needs. Businesses today are challenged with addressing both requirements together, even if they sometimes conflict. So how do organizations properly approach both to ensure that growing online traffic is handled in a way that lets the business prosper?

Issues of Speed Vary by Organization

Let’s consider some of the issues regarding speed and security. Four main components determine the overall speed of any system. The first is computer hardware, especially the CPU and random-access memory (RAM); upgrading to faster components or adding more of them can provide an incremental performance boost. The second is storage media, as access to hard drives and solid-state drives (SSDs) contributes a significant amount of latency in enterprise systems. The third is the network, which can be a bottleneck in distributed systems that require ongoing internode communications. And finally, the applications themselves drive performance, with those architected for high concurrency and optimal use of available resources delivering the best results.

The first three components are seemingly easy to address since the solution is simply to purchase faster equipment. The associated costs can be an obvious obstacle, though with the decreasing price of RAM and the availability of cost-effective memory technologies such as Intel Optane, adding memory is a more viable option today. Deployment on the public cloud makes the upgrade process a bit easier, too, as you can change hardware instances with relative ease. However, upgrading equipment and instances is only a small part of gaining the performance levels you need.

Optimizations in application code can yield solid performance gains, but the process requires significant development and testing effort. Profiling tools help you quickly identify the bottlenecks so you can implement strategies to resolve them. For example, you might reduce the number of individual accesses to the network and storage media by batching requests together or caching data in memory. That coding effort requires a lot of analysis and refactoring, though, and the end gain may not be as great as expected.
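As a minimal sketch of those two tactics in plain Java, the `fetchPrice` and `fetchPriceBatch` methods below are hypothetical stand-ins for whatever network or storage access your application performs:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LatencyTactics {

    // Hypothetical remote lookup: one network/storage round trip per call.
    static double fetchPrice(String sku) { return 0.0; /* placeholder */ }

    // Hypothetical batch endpoint: one round trip for many keys.
    static Map<String, Double> fetchPriceBatch(List<String> skus) {
        return Map.of(); // placeholder
    }

    // Tactic 1: batch requests instead of issuing one call per key.
    static Map<String, Double> pricesFor(List<String> skus) {
        return fetchPriceBatch(skus); // one round trip instead of skus.size()
    }

    // Tactic 2: cache results in memory so repeat reads skip the round trip.
    private static final Map<String, Double> CACHE = new ConcurrentHashMap<>();

    static double cachedPrice(String sku) {
        return CACHE.computeIfAbsent(sku, LatencyTactics::fetchPrice);
    }
}
```

The gain comes from removing round trips, not from faster code per se, which is why the refactoring effort can be substantial relative to the payoff.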

Speed and Security Must Work Together

As you assess how to gain more performance, you must also look at security, particularly access controls, in your applications. Unfortunately, security controls typically reduce speed because of the extra processing needed to safeguard your systems. Checks such as verifying access permissions add delay to every request. Missing security controls can be just as big an inhibitor of speed. For example, some systems use built-in caches to speed up access to stored or precomputed data. If that cache lacks security capabilities, it leaves sensitive data vulnerable to unauthorized access, which could lead to far bigger problems. So even if a built-in cache offers improved performance, the lack of security controls means your security team will likely prohibit its use.
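As a rough illustration of where that overhead sits in the request path, the sketch below guards an in-memory read with a permission check; the `isAuthorized` method is a hypothetical stand-in for whatever access-control service you use:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GuardedCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Hypothetical access-control check; in practice this might call LDAP,
    // a token validator or a policy engine, adding a small delay per request.
    private boolean isAuthorized(String userId, String key) {
        return true; // placeholder
    }

    public String read(String userId, String key) {
        if (!isAuthorized(userId, key)) {   // the security check runs first...
            throw new SecurityException("Access denied for " + key);
        }
        return cache.get(key);              // ...then the fast in-memory read
    }
}
```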

Where In-Memory Computing Proves Essential

How can we address both speed and security? One class of technology that has been growing in recent years to boost application performance is the in-memory computing platform. The notion of in-memory computing, in which data resides entirely in RAM with no spillover to disk, has been around for many years, but it has mostly been relegated to only the most demanding applications. With more applications today facing a high bar of customer expectations, along with the aforementioned trends in memory technologies, in-memory computing is becoming a mainstream, and possibly even essential, choice for businesses.

Many IT professionals associate “in-memory” with caching, but technologies focused on basic caching use cases typically lack the security infrastructure needed to handle sensitive data common today. They also lack other provisions to take advantage of a distributed system and enable parallel processing.

In-memory computing platforms such as in-memory data grids (IMDGs), on the other hand, have evolved to support a comprehensive set of capabilities, including the security needed to run business-critical production deployments. Features such as TLS/SSL encryption of over-the-wire transmissions and authentication and authorization to define user permissions help secure sensitive data. And since IMDG data is stored in memory, the latency of disk access is largely eliminated. IMDGs also add performance by letting you submit “jobs,” applications that are automatically copied to each node in the cluster. This lets you run a parallelized, distributed application to tackle tasks that can be broken down into smaller subtasks.
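As a sketch of what submitting such a job can look like, the example below uses the Hazelcast Java API (the IMDG mentioned in the author’s bio), assuming a Hazelcast 5.x classpath; the map name and the counting logic are purely illustrative:

```java
import com.hazelcast.cluster.Member;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.core.IExecutorService;
import com.hazelcast.map.IMap;

import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

public class DistributedJobSketch {

    // The job is copied to every member and runs against that member's local data.
    static class CountLocalOrders
            implements Callable<Integer>, Serializable, HazelcastInstanceAware {
        private transient HazelcastInstance hz;

        @Override
        public void setHazelcastInstance(HazelcastInstance hz) { this.hz = hz; }

        @Override
        public Integer call() {
            IMap<String, String> orders = hz.getMap("orders"); // illustrative map name
            return orders.localKeySet().size();                // only this member's partitions
        }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IExecutorService executor = hz.getExecutorService("jobs");

        // Fan the job out to all members and aggregate the partial results.
        Map<Member, Future<Integer>> results =
                executor.submitToAllMembers(new CountLocalOrders());
        int total = 0;
        for (Future<Integer> partial : results.values()) {
            total += partial.get();
        }
        System.out.println("Orders across the cluster: " + total);
    }
}
```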

One big gain of this technique is data locality, in which processing is done on the same node as the data. Each instance of the application processes the local in-memory data, which means there are generally no network accesses to slow the system. In cases where some data must be retrieved over the network, a feature known as a near-cache copies the remote data to the local node, eliminating future network requests for that data.
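How a near-cache is enabled varies by product; as one hedged example, here is a minimal sketch using Hazelcast’s programmatic configuration (again assuming a 5.x classpath, with the map name purely illustrative):

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.NearCacheConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class NearCacheSketch {
    public static void main(String[] args) {
        // Keep a local copy of remotely owned entries on each reading member,
        // and invalidate that copy when the owning member changes the entry.
        NearCacheConfig nearCache = new NearCacheConfig();
        nearCache.setInvalidateOnChange(true);

        Config config = new Config();
        config.getMapConfig("catalog").setNearCacheConfig(nearCache); // illustrative map name

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        IMap<String, String> catalog = hz.getMap("catalog");

        catalog.get("item-1"); // may fetch from the owning member over the network
        catalog.get("item-1"); // subsequent reads are served from the local near-cache
    }
}
```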

Of course, speed and security are just one part of implementing an in-memory computing platform; it also needs to be simple to use. An easily accessible software package (especially open source), a familiar API based on industry standards, prebuilt connectors to various data sources and security integrations with existing systems such as LDAP all help. Ease of use around security is especially important, since difficulties with security features will cause IT professionals to put off the implementation until later, possibly when it is too late.

In-memory computing has long been associated with high costs, but with decreasing RAM prices, innovations such as Intel Optane technology and growing customer expectations for the user experience, the time for in-memory is now.


Dale Kim

Dale Kim is the Senior Director of Technical Solutions at Hazelcast and is responsible for product and go-to-market strategy for the in-memory computing platform. His background includes technical and management roles at IT companies in areas such as relational databases, search, content management, NoSQL, Hadoop/Spark, and big data analytics. Dale holds an MBA from Santa Clara, and a BA in computer science from Berkeley.
