
QRadar Technical Blog: Suggested Deployment

Following the last blog on the use of data nodes, there have been requests for suggestions on how a deployment should look. First, we should say that there are many ways of deploying QRadar; while there are certainly some wrong ways, there are many ‘right’ ways, each depending upon the size of the network, the number of devices, activity levels and other metrics.

It may be beneficial to begin with a view of the data storage required, based on events received, event size and retention periods. This is an exercise that should be undertaken at the beginning of a SIEM project but frequently isn't.

The sizing metrics that apply are Events Per Second (EPS), event payload size, the event retention period, the number of indexes enabled and the duration required for quick searches.

EPS is the key metric, and there are various methods of calculating how many events are generated by each device in the network. That calculation isn't addressed here and may be the subject of a future offering; this blog concentrates on the sizing formulas across different levels of events.

The first algorithm is quite simple: Events per Second x average payload size x 60 x 60 x 24, which gives the number of bytes generated in a 24-hour period. As an example, 1000 EPS at an average size of 500 bytes generates roughly 0.043 TB per day; allowing for QRadar's on-disk compression, this blog works with figures of roughly 0.01 TB per day for 1000 EPS and 0.05 TB for 5000 EPS.
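To make the arithmetic concrete, here is a minimal Python sketch of the formula. The 4.3:1 compression ratio is an assumption chosen to reproduce the figures quoted above, not an official IBM sizing constant; the ratio achieved in practice will vary with the data.

    SECONDS_PER_DAY = 60 * 60 * 24

    def daily_event_storage_tb(eps, payload_bytes, compression_ratio=4.3):
        """Terabytes of event storage per day.

        compression_ratio is an assumed on-disk compression factor,
        picked to match this blog's examples.
        """
        raw_bytes = eps * payload_bytes * SECONDS_PER_DAY
        return raw_bytes / compression_ratio / 1e12

    print(daily_event_storage_tb(1000, 500))   # ~0.01 TB per day
    print(daily_event_storage_tb(5000, 500))   # ~0.05 TB per day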

The second algorithm covers the number of indexes enabled and the quick search retention. At 1000 EPS, with 20 indexes and quick search retained for 1 day, the additional storage would be approximately 0.014 TB per day; 5000 EPS would require approximately 0.069 TB.
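The blog does not spell this second formula out, but the quoted figures scale linearly at roughly 0.014 TB per 1000 EPS per day for this profile. A sketch using that factor, reverse-engineered from the examples above rather than taken from any official sizing guide:

    def daily_index_overhead_tb(eps, tb_per_1000_eps=0.014):
        """Additional daily storage for indexes and quick-search data.

        tb_per_1000_eps is an assumed factor derived from this blog's
        examples (20 indexes, 1-day quick search).
        """
        return (eps / 1000) * tb_per_1000_eps

    print(daily_index_overhead_tb(1000))   # ~0.014 TB per day
    print(daily_index_overhead_tb(5000))   # ~0.07 TB per day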

Using the above, it is relatively easy to arrive at the storage needed to support a large deployment. Assume 25,000 EPS and an average payload size of 500 bytes: the daily storage requirement is approximately 0.25 TB, and including the index and search requirement adds a further 0.35 TB per day. It then becomes a matter of deciding the retention requirements. The storage values given throughout are approximate.

For a retention period of 30 days, 20 indexes and a quick search period of 30 days, the total storage space would be approximately 25 TB.
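Combining the two daily figures with a retention period gives a simple, self-contained calculator. Note that the straight multiplication for the 30-day case comes out around 18 TB, so the ~25 TB quoted above already builds in some headroom; the 90-day figures discussed below line up more closely.

    def total_storage_tb(eps, payload_bytes, retention_days,
                         compression_ratio=4.3, tb_per_1000_eps=0.014):
        """Approximate total storage for a retention period, using the
        assumed factors from the sketches above."""
        per_day = (eps * payload_bytes * 86400 / compression_ratio / 1e12
                   + (eps / 1000) * tb_per_1000_eps)
        return per_day * retention_days

    print(total_storage_tb(25000, 500, 30))   # ~18 TB (the blog allows ~25 TB)
    print(total_storage_tb(25000, 500, 90))   # ~54 TB vs ~53 TB below
    print(total_storage_tb(40000, 500, 90))   # ~87 TB vs ~85 TB below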

Now let us look at the hardware available in 2017. The console choices are the 3105, 3129 and 3148, with storage of 6 TB, 48 TB and 16 TB respectively. For the event processors there are also three devices, the 1605, 1629 and 1648, rated at 20,000, 40,000 and 80,000 EPS respectively, with the same storage values as the consoles.

To return to the example of 25,000 EPS and 25 TB of storage, the requirement would be for a 3129 Console and a 1629 EP. Increasing the EPS to 40,000 while retaining the same event profile would increase the storage requirement to 40 TB, which would still be within the capacity of the two devices but would be nearing the EPS limit of the 1629.

Now consider the situation where the retention period rises to 90 days while the indexes remain at 20 and the quick search period remains at 30 days. For 25,000 EPS the storage requirement is now 53 TB, and for 40,000 EPS it becomes 85 TB. At 25,000 EPS this changes the hardware to a 3129 Console and a 1629 EP with a 1429 Data Node.
At 40,000 EPS the 1629 would again be nearing its EPS limit, so at this level the hardware would be a 3129 Console and a 1648 EP with two 1429 Data Nodes.
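As a rough illustration of the selection logic, the sketch below picks the smallest event processor that leaves EPS headroom and then adds 1429 data nodes until the storage requirement fits. The 85% EPS headroom borrows the utilisation rule of thumb given at the end of this blog, and the 40 TB per data node is purely an assumption, since the blog does not quote a 1429 capacity.

    # Event processor figures as quoted above: (model, max EPS, storage TB).
    EVENT_PROCESSORS = [
        ("1605", 20_000, 6),
        ("1629", 40_000, 48),
        ("1648", 80_000, 16),
    ]

    def pick_event_processor(eps, storage_tb, eps_headroom=0.85, data_node_tb=40):
        """Choose the smallest EP with EPS headroom to spare, then add
        1429 data nodes until the storage requirement is covered.

        eps_headroom and data_node_tb are illustrative assumptions,
        not official sizing figures.
        """
        for model, max_eps, ep_tb in EVENT_PROCESSORS:
            if eps <= max_eps * eps_headroom:
                data_nodes = 0
                while ep_tb + data_nodes * data_node_tb < storage_tb:
                    data_nodes += 1
                return model, data_nodes
        raise ValueError("EPS exceeds a single event processor; scale out")

    print(pick_event_processor(25_000, 53))   # ('1629', 1): 1629 EP plus one 1429
    print(pick_event_processor(40_000, 85))   # ('1648', 2): 1648 EP plus two 1429s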

Increasing the EPS or the retention period will inevitably increase the storage requirement: 50,000 EPS for 90 days raises the storage to 110 TB, 60,000 EPS raises it to 130 TB, and so on. It is recommended that the storage used be no more than 85% of the storage allocated, so if the requirement is for 130 TB then the storage allocated should be at least 150 TB.
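Finally, the 85% rule is easy to apply in the same style:

    def allocated_storage_tb(required_tb, max_utilisation=0.85):
        """Storage to provision so the requirement sits at no more than
        85% of what is allocated."""
        return required_tb / max_utilisation

    print(allocated_storage_tb(130))   # ~153 TB, i.e. at least 150 TB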