At A Glance: Incident Enrichment in Azure Sentinel

Monday 16th November 2020

Source: Calum Finlayson, Cyber Security Analyst at Satisnet

Azure Sentinel is Microsoft’s cloud-based SIEM offering. When configured, it provides a single pane of glass into most Microsoft-related log sources, as well as just about any other custom log source that may offer useful security insight. Alongside log storage and dashboarding capabilities, one of the main features of Sentinel is rule-based incident creation. Rules are built on Kusto Query Language (KQL) queries and scheduled to generate incidents when the number of events returned by the query crosses a defined threshold. Sentinel also ships with a large set of out-of-the-box rules that is constantly being added to by both Microsoft and community members via the Sentinel GitHub page. In this blog post, we will focus on the incidents generated as output of these rules, and on a few ways we at Satisnet have been exploring automatic incident enrichment to speed up triage and remediation.

Firstly, let’s look at an analytic rule and the resulting output so we can see what would be available without any sort of enrichment. We will take a built-in rule that looks for data exfiltration by attempting to identify time periods with unusually high amounts of data transfer to a public network.
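
The built-in rule itself uses a more sophisticated time-series anomaly query, but as a rough, simplified sketch of the kind of KQL behind this type of detection (the table, columns and threshold below are illustrative rather than the actual rule logic), it essentially aggregates outbound volume per source and flags unusually large transfers:

    // Simplified sketch only - the built-in rule uses time-series anomaly detection,
    // but the general shape is: sum outbound bytes per hour and flag large spikes.
    CommonSecurityLog
    | where isnotempty(DestinationIP) and not(ipv4_is_private(DestinationIP))
    | summarize OutboundBytes = sum(SentBytes) by SourceIP, bin(TimeGenerated, 1h)
    | where OutboundBytes > 1000000000   // illustrative threshold, roughly 1 GB per hour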

Here we see the full incidents pane. On the right-hand side of the screen is a slightly expanded incident showing the user assigned to the incident (this one is currently unassigned), the severity as defined by the rule, the status, the description, some evidence, and the important entities involved. The entities are defined in the rule that generates the incident, as shown below:
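
As a rough illustration of how this typically looks in the rule’s KQL (the column names here are ours, not taken from the built-in rule), the values to be treated as entities are surfaced as dedicated output columns, which the rule wizard then maps to entity types such as IP and Account:

    // Illustrative fragment only: alias the columns Sentinel should treat as entities
    | extend IPCustomEntity = SourceIP, AccountCustomEntity = SourceUserName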

We will then leverage these entity values in the automated enrichment process discussed later. Going back to the incident itself, we can also see a button below it that allows us to investigate the incident. Clicking on this brings us to a page with an interactive, exploded view of the incident:

The initial overview of the incident is fairly comprehensive, but it may still not contain the data an analyst needs to determine whether it is a real threat or a false positive. One of the solutions we have found is to use the power of Azure Logic Apps, through the Playbooks tab in Sentinel. For this example we will set up an incident enrichment flow that scrapes any IP entities from an incident and cross-checks them against the threat intelligence we have fed into Sentinel (in this case, data from a Satisnet partner – Recorded Future). If an IP entity involved in the incident shows up as a potential indicator of compromise, the matching intelligence is then added to the incident as a comment. This is just a simple example of the potential of incident enrichment; we will discuss a few more interesting use cases at the end of this post.

The playbooks in Sentinel are Azure Logic Apps that can be triggered in response to the generation of an incident or alert in Sentinel. The Logic Apps themselves are workflows that can be created using a GUI in Azure, and are composed of flows of connectors. These connectors allow us to easily interface with Microsoft products as well as anything else that has a connector built for it, meaning we can call out to just about anything that exposes an API.
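
Under the hood, a Logic App is simply a JSON workflow definition, which can be inspected via the designer’s Code view. A heavily trimmed sketch of the general shape (the step names and type values below are placeholders rather than the exact values our finished playbook produces) looks something like this:

    {
      "definition": {
        "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
        "triggers": {
          "When_a_response_to_an_Azure_Sentinel_alert_is_triggered": { "type": "ApiConnectionWebhook" }
        },
        "actions": {
          "Entities_-_Get_IPs": { "type": "ApiConnection", "runAfter": {} },
          "For_each": {
            "type": "Foreach",
            "runAfter": { "Entities_-_Get_IPs": [ "Succeeded" ] }
          }
        }
      }
    }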

To set up a simple incident enrichment flow, we navigate to the Playbooks tab on the left of the Sentinel portal and choose to add a playbook. Simply give the playbook a name, click Create, then wait until we are given the option to “Go to resource”. Then, under Templates, we select a Blank Logic App.

Now we have access to all of the connectors on offer within the Logic App; it’s worth browsing through these to get an idea of its capabilities. A Logic App must always start with a ‘Trigger’ – and since in this case we want to run the app when an incident is generated, we search for ‘Azure Sentinel’ and choose the ‘When a response to an Azure Sentinel Alert is triggered’ option.

This will prompt us to choose our tenancy and sign in to enable access. Once that is done, we can add the next step in our workflow, which pulls the entity data from the incident that caused this flow to run. In this case we are looking for IP entities, so we again search for ‘Azure Sentinel’ – but this time we are looking for ‘Actions’ rather than a trigger.

We want to scrape the IPs from the incident, so we select the ‘Entities – Get IPs’ option. Clicking on this brings up a box that allows us to input a list of entities, along with a list of ‘Dynamic content’. This contains the output from the previous steps in our Logic App – in this case, the fields from the incident that triggered the run of this flow.

We can select the ‘Entities’ list to look for IPs.

Since there may be multiple IP entities involved with this incident, we want to make sure we look up each one, so we need a ‘For each’ loop to iterate over the collection of IPs. We add a new step, search for ‘Control’ and add a ‘For each’ element to our workflow.

We can then select the list of IPs from the previous step as our input into the loop.

Now we are able to check each element against our threat intelligence. We have imported IoCs via the Graph Security API, so we can use a connector to get indicators. Within our loop we select ‘Add an action’, search for Graph Security and add the ‘Get tiIndicators’ action.

We may need to sign in and grant permission to access Graph Security at this stage. Next, we want to add a condition so that an indicator is only returned if its IP matches the entity we have pulled from the incident. We can do this by filtering the list of indicators returned, as shown below.

We want to populate this with an OData filter (see the Microsoft Graph Security API documentation for more information on the API and on OData query parameters). This will look like the below. Note the apostrophes, and the use of dynamic content – this time the specific IP for the current iteration of our loop.
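
As a sketch of that filter (the tiIndicator property to match on – networkIPv4 here – depends on how your indicators were ingested, and the ‘For_each’ expression below is simply how the IPs ‘Address’ dynamic content token renders in code view rather than something you type by hand):

    networkIPv4 eq '@{items('For_each')?['Address']}'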

Next, we need to check to see if this search has returned anything. We want to add another action directly below our ‘Get tiIndicators’ step, and we will search for ‘Control’ again and add the ‘Condition’ action.

Now we essentially have an ‘if’ statement that allows us to branch to one flow if the condition is true, and another if it is false. In the ‘Choose a value’ field we want to check whether we have found any indicators in our threat intelligence that match our current IP entity. For some reason in testing, I wasn’t able to get the ‘TiIndicator Count’ to work as expected, so I worked around it by clicking ‘Expression’ within the ‘Dynamic content’ field and adding length().

I then pivoted back to the ‘Dynamic content’ field and added the array of returned indicators by clicking on ‘TiIndicators’ as shown below.

Now, we want to proceed if the length of this list is greater than 0 (i.e. at least one matching indicator was found), so we set the condition statement to the following value.
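
Put together, the left-hand side of the condition ends up as an expression along the following lines, compared against 0 using the ‘is greater than’ operator (the action name and output path here are a sketch and will depend on how the earlier step was named):

    length(body('Get_tiIndicators')?['value'])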

Now, within the ‘True’ branch of the condition we need to set up another loop. This will go through each indicator and add it as a comment to the incident. This is done as shown below.
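
As a rough idea of what the ‘Specify comment’ field of that step can contain (the inner loop name ‘For_each_2’ and the indicator property names below are illustrative – in the designer these values are picked from the dynamic content list rather than typed as raw expressions):

    Threat intel match for IP: @{items('For_each')?['Address']}
    Description: @{items('For_each_2')?['description']}
    Threat type: @{items('For_each_2')?['threatType']}
    Confidence: @{items('For_each_2')?['confidence']}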

All of these fields are populated with data from the initial trigger step of the playbook, except for the ‘Specify comment’ field, which contains data from the indicator we have found. Now we can save our Logic App and make sure it is enabled.

The next step is either to edit an existing analytic rule to associate the playbook with it, or to create a new rule. For the purposes of testing, I have created a new rule. All we need to do is make sure that it generates an IP entity, preferably one that we know will come back as an indicator when checked against our threat intelligence.

We navigate to ‘Analytics’ and create a rule, setting up the rule logic so that an IP entity is included.
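
A minimal sketch of such a test query might look like the following (the IP below is just a documentation-range placeholder – swap in an address you know exists in your threat intelligence – and the IPCustomEntity column is then mapped to the IP entity type in the rule wizard):

    // Test query only: always returns one row containing a known IP to enrich
    print TimeGenerated = now(), IPCustomEntity = "203.0.113.45"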

Now all we need to do to enable our automated enrichment is to click the ‘Automated response’ tab and select the playbook we just created.

Now, whenever this rule is triggered our playbook will run against it. Let’s have a look at what the output looks like from this sample rule. We can find an instance of our test rule triggering and click ‘View full details’ and then navigate to the ‘Comments’ tab (note that it may take a little while after incident generation for the comment to populate).

As you can see, the analyst is now presented with a snippet of data about the malicious IP contained within the entity field, without having to dive into any investigation. This is just one example of how Logic App workflows can help with incident enrichment.

It’s worth looking at what all of the possible actions are when it comes to modifying incidents from within Logic Apps, as this gives us a better idea of what can be achieved from within an automated workflow.

Summary

One of the most interesting aspects of this process that we have been exploring at Satisnet is the ability to dynamically change the severity of an incident based on data contained within it. This could prove useful, for instance, if you maintain a list of VIP accounts or crown-jewel IPs: these can be identified and the incident severity bumped up based on that contextual information. The same goes for integrating vulnerability scanner information – this automated incident-editing capability makes it possible to generate incidents at a low severity when a machine is well patched, and at a higher severity when the machine has known vulnerabilities.

At Satisnet, we have been working on using these automated workflows to ensure that analysts are given the maximum amount of relevant data when looking at an incident, and that the time they spend on an incident is actually time spent investigating. Logic App workflows within Azure Sentinel are proving very valuable in achieving this.

Feel free to get in touch with any questions around how we can help you get the most out of the Microsoft Security stack.