Kansa: Autoruns data and analysis

I want your input.

With the “Trailer Park” release of Kansa marking a milestone for the core framework, I’m turning my focus to analysis scripts for data collected by the current set of modules. As of this writing there are 18 modules, with some overlap between them. I’m seeking more ideas for analysis scripts to package with Kansa and am hopeful that you will submit comments with novel, implementable ideas.

Existing modules can be divided into three categories:

  1. Auto Start Extension Points or ASEP data (persistence mechanisms)
  2. Network data (netstat, dns cache)
  3. Process data

In the interest of keeping posts to reasonable lengths, I’ll limit the scope of each post to a small number of modules or collectors.

ASEP collectors and analysis scripts


Get-Autorunsc.ps1
Runs Sysinternals Autorunsc.exe with arguments to collect all ASEPs (that Autoruns knows about) across all user profiles, including ASEP hashes, code signing information (Publisher) and command line arguments (LaunchString).

Current analysis scripts for Get-Autorunsc.ps1 data:

  1. Returns a frequency count of ASEPs, aggregated on ImagePath, LaunchString and MD5 hash.
  2. Same as the previous stacker, but filters out signed ASEPs.
  3. Returns a frequency count of ASEPs, aggregated on ImagePath, LaunchString and code signer (Publisher).
  4. Returns a frequency count of ASEPs, aggregated on ImagePath and LaunchString.
  5. Same as the previous, but filters out signed code.
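The stacking logic behind these scripts is simple to sketch outside of Kansa, too. Below is a minimal, illustrative Python sketch (not one of the Kansa scripts) that frequency-counts ASEP rows on ImagePath and LaunchString, optionally dropping signed entries; it assumes each row is a dict keyed by the Autorunsc field names listed later in this post:

```python
from collections import Counter

def stack_aseps(rows, keys=("ImagePath", "LaunchString"), unsigned_only=False):
    """Frequency-count ASEP rows on the given key fields.

    rows: iterable of dicts keyed by Autorunsc field names.
    unsigned_only: if True, drop rows that have a Publisher (signed code),
    mimicking the "unsigned" variants of the stackers.
    """
    counts = Counter()
    for row in rows:
        if unsigned_only and row.get("Publisher"):
            continue  # skip signed ASEPs
        counts[tuple(row.get(k, "") for k in keys)] += 1
    return counts.most_common()  # rarest entries end up at the bottom

# Example usage with made-up data:
sample = [
    {"ImagePath": r"c:\program files (x86)\7-zip\7-zip.dll",
     "LaunchString": "7-zip.dll", "Publisher": ""},
    {"ImagePath": r"c:\program files (x86)\7-zip\7-zip.dll",
     "LaunchString": "7-zip.dll", "Publisher": ""},
    {"ImagePath": r"c:\windows\system32\svchost.exe",
     "LaunchString": "-k netsvcs", "Publisher": "Microsoft Windows"},
]
print(stack_aseps(sample, unsigned_only=True))
```

In a real investigation you would feed this the parsed rows from every host's TSV and look hardest at the low-frequency entries, since legitimate ASEPs tend to be common across a fleet while malware tends to be rare.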

A picture is worth a few words; here's a sample of output from that last analysis script, run against data from a couple of systems:


The image above shows the unsigned ASEPs on the two hosts, aggregated by ImagePath and LaunchString. You may want to know which host a given ASEP came from; however, including the host in the output above would break the aggregation. To trace, say, the 7-zip ASEP back to the host it was found on, copy its ImagePath or LaunchString value and, from the Output\Autorunsc\ path where the script was run, use the PowerShell cmdlet Select-String:

Select-String -SimpleMatch -Pattern "c:\program files (x86)\7-zip\7-zip.dll" *autoruns.tsv

The result will show the files, and the lines within them, that match the pattern. Each filename contains the hostname the data came from, and the hostname also appears in the file's PSComputerName field.
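If you need to do this lookup in bulk, it can be scripted. The following is an illustrative Python sketch of the same idea (it is not a Kansa script, and the function name is mine): it walks a directory of *autoruns.tsv files and reports each file, and the PSComputerName recorded in matching rows, for a given substring:

```python
import csv
import glob
import os

def find_hosts_with_asep(output_dir, needle):
    """Return (filename, PSComputerName) pairs for TSV rows whose
    ImagePath or LaunchString contains the needle -- a case-insensitive
    substring match, similar to Select-String -SimpleMatch."""
    hits = []
    for path in sorted(glob.glob(os.path.join(output_dir, "*autoruns.tsv"))):
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f, delimiter="\t"):
                haystack = (row.get("ImagePath", "") + " " +
                            row.get("LaunchString", "")).lower()
                if needle.lower() in haystack:
                    hits.append((os.path.basename(path),
                                 row.get("PSComputerName", "")))
    return hits
```

Pointed at the Output\Autorunsc\ directory with the 7-zip path as the needle, it would return the hosts carrying that ASEP without breaking the aggregated view above.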

Get-Autorunsc.ps1 returns the following fields:
Time: Last modification time from the Registry or file system for the EntryLocation
EntryLocation: Registry or file system location for the Entry
Entry: The entry itself
Enabled: Enabled or disabled status
Category: Autorun category
Description: A description of the Autorun
Publisher: The publisher from the code signing certificate, if present
ImagePath: File system path to the Autorun
Version: PE version info
LaunchString: Command line arguments or class id from the Registry
MD5: MD5 hash of the ImagePath file
SHA1: SHA1 hash of the ImagePath file
PESHA1: SHA1 Authenticode hash of the ImagePath file
PESHA256: SHA256 Authenticode hash of the ImagePath file
SHA256: SHA256 hash of the ImagePath file
PSComputerName: The host where the entry came from
RunspaceId: The RunspaceId of the PowerShell job that collected the data
PSShowComputerName: A Boolean flag indicating whether PSComputerName is included in the output

These last three fields are artifacts of PowerShell remoting.

Given the available data and the currently available analysis scripts, what other analysis capabilities make sense for the Get-Autorunsc.ps1 output?

One idea is to build on a previous post of mine, “Finding Evil: Automating Autoruns Analysis.” The script would take an external dependency on a database of file hashes categorized as good, bad and unknown. It would match hashes in the Get-Autorunsc.ps1 output, discarding the good, alerting on the bad and submitting unknowns to VirusTotal to see what, if anything, is known about them. If VT says a hash is bad, insert it into the database and alert; if VT says it is good, insert it into the database and ignore it in future runs. If VT has no information on it, mark it for follow-up and send the file to Cuckoo Sandbox or similar for analysis.
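That triage loop could be sketched roughly as below. This is illustrative Python, not an existing Kansa script: the hash database is modeled as two in-memory sets, and the VirusTotal query is stubbed out as a callable you would supply (a real implementation would use the VT API and persist the sets to disk or a database):

```python
def triage_hashes(rows, known_good, known_bad, vt_lookup):
    """Classify ASEP rows by MD5 against a good/bad hash database.

    known_good / known_bad: mutable sets of lowercase hex digests.
    vt_lookup: callable(hash) -> "good", "bad" or "unknown"; stands in
    for a VirusTotal query. Verdicts are folded back into the sets so
    future runs skip them.
    Returns (alerts, follow_up): rows to alert on, and rows to send to a
    sandbox (e.g., Cuckoo) for analysis.
    """
    alerts, follow_up = [], []
    for row in rows:
        h = row.get("MD5", "").lower()
        if not h or h in known_good:
            continue  # discard the known-good
        if h in known_bad:
            alerts.append(row)  # alert on the known-bad
            continue
        verdict = vt_lookup(h)
        if verdict == "bad":
            known_bad.add(h)
            alerts.append(row)
        elif verdict == "good":
            known_good.add(h)  # ignored on future runs
        else:
            follow_up.append(row)  # mark for follow-up / sandbox
    return alerts, follow_up
```

The design choice worth noting is that the database grows on every run, so the analyst's attention is spent only on hashes the workflow has never resolved before.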

What ideas do you have? What would be helpful to you during IR?

Thanks for taking the time to read and comment with your thoughts and ideas.

If you found this information useful, please check out the SANS DFIR Summit where I’ll be speaking about Kansa, IR and analysis in June.

*** This is a Security Bloggers Network syndicated blog from trustedsignal -- blog authored by davehull. Read the original post at: