Splunk Cloud: Determining Search Head Resources

One common task that comes up when troubleshooting Splunk search performance issues is validating that the correct resources are available. For on-premises Splunk Enterprise, you can easily do this through the Monitoring Console:

Settings -> Monitoring Console

The amount of memory and the number of CPU cores are displayed in the upper left corner.

This approach will work on any Splunk Enterprise host regardless of whether or not the Monitoring Console is configured in a distributed manner on another host. You just need to have administrator permissions to see this option.

However, on Splunk Cloud Platform this information is not as easy to find. The Monitoring Console is unavailable (it is replaced with the Cloud Monitoring Console app), and that app doesn’t convey this information. Fortunately, there are internal logs within Splunk that can help us figure out the sizing of our search head, and we can then use that information to deduce the AWS instance type that Splunk Cloud is using for your stack.

Begin by searching Splunk’s internal logs for the splunkd startup (loader) messages that record the detected hardware.

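Here’s a minimal sketch of such a search. The sourcetype, component, and host filters shown here are assumptions based on typical Splunk Cloud naming, so adjust them to match your stack:

index=_internal sourcetype=splunkd component=loader host="sh-*.splunkcloud.com" "Detected"
| dedup host
| table _time host _raw

The dedup command keeps only the most recent startup message for each host.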

Note: this search relies on startup messages that occur when Splunk restarts. Your instance must have been restarted in the past 30 days in order for these logs to appear. 

In your search results, you will see loader events from splunkd.log that contain the detected CPU and memory information for each host.

In these events, we’ll see a few different search heads with varying specifications. One host has 36 virtual CPUs, and another has 72 virtual CPUs:

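The relevant loader lines look something like the following. These values are illustrative rather than taken from a real stack, and the exact wording of the message can vary between Splunk versions:

05-10-2022 12:01:32.417 +0000 INFO  loader - Detected 36 (virtual) CPUs, 36 CPU cores, and 73727MB RAM
05-10-2022 12:03:05.662 +0000 INFO  loader - Detected 72 (virtual) CPUs, 72 CPU cores, and 147455MB RAM

Dividing the reported megabytes by 1024 gives roughly 72GB and 144GB of RAM, which are the figures we’ll match against AWS instance specifications.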

From here, we can reference the AWS instance types page to deduce what type of system matches those specifications. At the time of writing (May 2022), it appears that Splunk Cloud generally uses C5 instances for search heads.

Looking at the C5 instance types page, we can see that there are two instances that match the CPU/memory combinations:

  • c5.9xlarge = 36 vCPUs and 72GB of RAM
  • c5.18xlarge = 72 vCPUs and 144GB of RAM

Since these match the startup messages in Splunk, it’s a pretty good assumption that our search heads in this deployment are c5.9xlarge and c5.18xlarge instances, respectively.

If you’re curious about other instances in your environment (such as indexers), you can run the same type of search, simply changing the host filter from host="sh-*.splunkcloud.com" to something like host="idx-*.splunkcloud.com" instead. Note that indexers are typically i3 or i3en instances in most current Splunk Cloud stacks. The same approach also works for universal forwarders if you’re trying to identify the resources on any other system sending data to Splunk.
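For example, pointing the earlier sketch at indexers instead of search heads would look like this (again, the filters are assumptions, and the host pattern should match your stack’s naming):

index=_internal sourcetype=splunkd component=loader host="idx-*.splunkcloud.com" "Detected"
| dedup host
| table _time host _raw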

Conclusion 

Having a quick way to determine what CPU and memory resources are available on your Splunk search head can help you be better informed when troubleshooting potential issues in your environment. Happy Splunking! 
