We have two exciting features for this release of MSTICpy: a Splunk data provider, and data uploaders for Azure Sentinel and Splunk.
Splunk Data Provider
We have added support for Splunk to our growing list of Data Providers/Connectors. The feature is built on top of the Splunk SDK for Python, with some customizations and enhancements.
The provider allows you to connect to a Splunk instance (on-premises or cloud) and query data from Jupyter notebooks and MSTICpy using a pattern similar to our existing data providers. The results of each query are returned as a pandas DataFrame, which makes it easy to use the query results with the existing analysis and visualization features of MSTICpy.
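As a sketch of this pattern (the provider name, import path, and `connect()`/`exec_query()` parameters below are assumptions modeled on MSTICpy's existing data-provider API; the host and credentials are placeholders), a query run might look like this, with the results arriving as a standard pandas DataFrame:

```python
import pandas as pd


def query_splunk(spl: str) -> "pd.DataFrame":
    """Connect to a Splunk instance and run an ad-hoc SPL query.

    NOTE: the import path, provider name, and connect() keyword
    arguments are assumptions -- check the MSTICpy documentation
    for the exact signatures.
    """
    from msticpy.data import QueryProvider  # assumed import path

    qry_prov = QueryProvider("Splunk")
    qry_prov.connect(
        host="splunk.example.com", username="admin", password="<password>"
    )
    return qry_prov.exec_query(spl)


# Because results are plain DataFrames, ordinary pandas analysis
# applies directly -- here aggregating a mocked result set:
results = pd.DataFrame(
    {"host": ["web01", "web02", "web01"], "bytes": [120, 340, 95]}
)
bytes_per_host = results.groupby("host")["bytes"].sum()
```

The deferred import inside the function keeps the sketch self-contained: no Splunk connection is made until the helper is actually called.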
Some highlights are:
- Built-in queries to perform common operations, such as listing all data types, alerts, and audit-trail logs
- Retrieve all of your saved searches and execute them as simple Python function calls (saved searches are added as function attributes to the QueryProvider object)
- A generic parameterized query with support for index, source, time range, and projected fields
- Run ad-hoc Splunk queries (queries as simple text strings)
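To illustrate the ad-hoc query style, a small helper can compose the SPL text before handing it to the provider. Only the string construction below is concrete; the `exec_query` call mentioned in the comment is an assumption about the provider API:

```python
def build_adhoc_query(index: str, hours: int, max_rows: int = 100) -> str:
    """Compose a simple ad-hoc SPL query string.

    The resulting text could then be passed to the provider, e.g.
    qry_prov.exec_query(build_adhoc_query("botsv2", 24))  # assumed call
    """
    return f'search index="{index}" earliest=-{hours}h | head {max_rows}'


spl = build_adhoc_query("botsv2", 24)
```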
The QueryProvider object also exposes the native Python Splunk client, so you can continue to use that directly alongside the MSTICpy features. We have put together a documentation notebook, Splunk-Data Connectors, which demonstrates these features.
Data Uploader
MSTICpy now includes a data uploader feature. This allows you to upload local datasets back to your remote data store for long-term storage, sharing, and easier correlation. Currently we support uploads to Azure Sentinel/LogAnalytics and Splunk. Each data uploader supports the upload of local data stored in either pandas DataFrames or local delimiter-separated files (e.g. CSV, TSV). You can upload a single file, or specify a folder to iterate over and upload the contents of each file it contains.
After initializing the required uploader, the .upload_df(), .upload_file(), and .upload_folder() methods can be used to upload your datasets. Depending on the target data store, parameters are required to specify the table, data source, or index for each dataset. Full details on each uploader can be found in our documentation.
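As a sketch of that flow (the uploader module path, class name, and keyword arguments below are assumptions based on the description above; the workspace values are placeholders, and only the CSV-to-DataFrame step is concrete), uploading a local delimited file to Azure Sentinel/LogAnalytics might look like:

```python
import io

import pandas as pd

# Read a local delimiter-separated file into a DataFrame
# (an in-memory CSV stands in for a file on disk here):
csv_text = "host,event_count\nweb01,3\nweb02,5\n"
events_df = pd.read_csv(io.StringIO(csv_text))


def upload_events(df: "pd.DataFrame") -> None:
    """Upload a DataFrame to Azure Sentinel/LogAnalytics.

    NOTE: the module path, class name, and keyword arguments are
    assumptions -- check the MSTICpy uploader documentation for the
    exact signatures before using this.
    """
    from msticpy.data.uploaders.loganalytics_uploader import LAUploader

    la_up = LAUploader(
        workspace="<workspace-id>", workspace_secret="<workspace-key>"
    )
    la_up.upload_df(data=df, table_name="NetworkEvents")
```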
The release also includes a number of fixes and minor enhancements. You can read more about these in the GitHub release notes.