Critical alerts come before coffee
It’s 6:45 AM. Your alarm goes off, and you roll out of bed to a bedroom blanketed in the inky gray of dawn. The first thing you do is check the notifications on your phone. Shoot. A P0 alert came in overnight, and Danny, the EMEA-based analyst who worked the shift before you, has already logged off. You don’t blame him; his parents are visiting. So you rush through your morning routine and forgo putting the coffee pot on the burner. You’ll be fine; this happens often, and P0 alerts always rattle you enough to shake off that morning grogginess.
The P0 alert falls within your assigned geographic region and your area of ownership, EDR, so you assign it to yourself. Quickly, because time is of the essence with critical alerts, you hop onto the next step: investigation. Using the tools available on the platform the detection came from, you dig through logs and mentally work through the three main questions of any security investigation: 1) who did it, 2) what did they do, and 3) why was I alerted?
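In code terms, that first triage pass boils down to mapping raw log fields onto those three questions. Here is a minimal sketch in Python, assuming hypothetical EDR events as plain dicts; the field names are illustrative placeholders, not any real platform’s schema.

```python
# A minimal triage sketch. The event fields (user, host, action, target,
# rule_id) are hypothetical placeholders, not a real EDR schema.
def triage(event: dict) -> dict:
    """Map a raw detection event onto the three investigation questions."""
    return {
        "who":  event.get("user") or event.get("host"),            # 1) who did it
        "what": f"{event.get('action')} -> {event.get('target')}", # 2) what they did
        "why":  event.get("rule_id"),                              # 3) why you were alerted
    }

# Example: a hypothetical process-injection detection on a workstation.
print(triage({
    "user": "d.nguyen", "host": "ws-117",
    "action": "process_injection", "target": "lsass.exe",
    "rule_id": "EDR-7731",
}))
```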
About 45 minutes later, after flipping through multiple tools and scrolling through many lines of logs, you confirm that this detection is a benign true positive: there was suspicious activity, but it was not malicious. You remove the detection from the queue and make a note to write up a report on this alert.
You look at the clock; it’s 7:42 AM. Finally, time for coffee, and just in time for your stand-up meeting at 8 AM. You actively don’t think about the 10 other detections in the queue.
The detection that cried “wolf” was actually a wolf
A week later, you’re feeling good. You’ve gotten into somewhat of a groove even with the overwhelming number of alerts coming in; you’ve hit every prioritized alert within your shift and written some pretty neat rules to stop notifications for those benign true positives that keep coming through. Even your manager gave you a virtual pat on the back for catching on to the pattern of those detections and implementing a permanent solution to take the load off everyone’s shoulders.
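Those suppression rules might look something like the sketch below, in Python for illustration. Everything here is hypothetical: the Detection fields, the rule IDs, and the known-benign patterns are stand-ins, not any specific platform’s format. The key design choice is that each rule is deliberately narrow, matching exactly one verified benign pattern so it can never silence anything else.

```python
# A minimal sketch of narrow, per-pattern suppression rules.
# All field names and values are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Detection:
    host: str
    process: str
    rule_id: str

# Each entry suppresses one pattern previously verified as benign.
KNOWN_BENIGN = [
    {"process": "backup_agent.exe",   "rule_id": "EDR-1042"},
    {"process": "inventory_scan.ps1", "rule_id": "EDR-2210"},
]

def is_suppressed(d: Detection) -> bool:
    """True only if the detection exactly matches a known-benign pattern."""
    return any(
        d.process == p["process"] and d.rule_id == p["rule_id"]
        for p in KNOWN_BENIGN
    )
```

Note that each rule evaluates one detection in isolation; nothing in this scheme ever looks at two suppressed patterns side by side.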
Even as you’re basking in the afterglow of a successful SOC sprint and finally enjoying a lunch that doesn’t consist of alternating between bites of a sandwich and scrolling through logs, something doesn’t feel right. Maybe you missed a P1 alert. No, your colleague would have pinged you about it. Maybe you left the coffee burner on. You glance towards it; no, that was turned off hours ago.
It's quiet. The alert queue has its line of detections, but they’re not priorities and can be worked on after you finish your lunch. There are no glaring alarms and no red banners. It is quiet. You decide the quiet is what disturbs you.
An hour later, you’re blazing through the detections in the queue. Almost none of them are malicious. It feels like a walk in the park, and you get so much satisfaction from closing one after another. You hit refresh on your browser and sit back to watch the UI buffer and render. You contemplate rewarding yourself with an early end to your day. Maybe hit the gym you signed up for months ago but never had time to visit.
The UI finishes refreshing, and you are hit with a row of red banners indicating a multitude of high-priority detections. Your stomach drops. What in the world happened?
Your team is pinging the group chat. No one understands what is happening or why the platform was suddenly flooded with high-priority alerts. There is a rising sense of panic from everyone, including yourself. Maybe more so from you, because as you click into each detection, you see that a lot of the logs look familiar. The IDs, the devices, the operating systems: everything looks familiar. And of course, the suppression rules you wrote didn’t fire, because each one matched a single narrow pattern in isolation, and these glaringly red detections combine all of those patterns at once.
It all converged into a major, business-shattering security attack.
What went wrong
After days of working for hours on end to respond and remediate, you and your team have stopped the attack from doing any more damage. Nothing in the business ended up compromised, which is a win in anybody’s book. But that was stressful and overwhelming, and, dang it, what went wrong?
You first blame yourself for potentially mislabeling that detection as benign, but your work was checked and verified by two other people on your team. When you went back, it was a benign true positive, and it was dealt with accordingly. You then blame the platform: something in its algorithm or reporting didn’t let you do your job correctly. But you know you’re just using it as a scapegoat.
As you comb through the events and data from the previous three days of work and write up a post-mortem report, you notice how the detections converge into one unified attack, and you realize that it wasn’t you or the platform that caused the misdirection and confusion; the problem lies in the detections themselves.
Detections arrive in volume, siloed across various attack surfaces. You typically spend the most time on one platform that pulls in information from those attack surfaces, but it’s a lot of information, and you end up pivoting into the individual tools anyway. The real problem is the number of detections: there were simply too many, and most were low priority, so they never got the attention they needed.
But how could you have known? They were considered minor, and you had other things to do. So, what can fix this dilemma?
The bigger picture
Alerts and detections miss the bigger picture: attackers are smart and adaptable. They hop from one place to another, exploit gaps in coverage and unsecured backdoors, and surface as only moderately suspicious detections while they gather everything they need for a full-scale attack.
This is where entity-based prioritization can solve the problem. An entity is not an alert and not a detection. It is an accumulation of correlated events that all tie back to a single, well, entity: typically a host or an account, along with the detections associated with it. With entities, analysts can see the bigger picture and catch attackers who are playing the long game.
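To make that concrete, here is a minimal sketch of entity-based prioritization, again in Python with hypothetical fields. The entity keys, severity weights, and sample data are illustrative assumptions, not any product’s actual scoring model; the point is that detections are grouped by the entity they touch and ranked by cumulative score, rather than triaged one at a time.

```python
# A minimal sketch of entity-based prioritization. Entity keys, severity
# weights, and the sample data are hypothetical, not a real scoring model.
from collections import defaultdict

SEVERITY = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def prioritize_entities(detections: list[dict]) -> list[tuple[str, int]]:
    """Group detections by the entity they touch and rank entities by the
    cumulative severity of everything correlated to them."""
    scores: dict[str, int] = defaultdict(int)
    for d in detections:
        # Correlate on whatever identifies the entity: a host, an account...
        entity = d.get("host") or d.get("account")
        scores[entity] += SEVERITY.get(d["severity"], 0)
    # Highest cumulative score first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Ten "minor" detections on one workstation outrank a single
# medium-severity detection on another host.
queue = [{"host": "ws-042", "severity": "low"}] * 10
queue.append({"host": "srv-web-01", "severity": "medium"})
print(prioritize_entities(queue))  # [('ws-042', 10), ('srv-web-01', 3)]
```

In that ranking, the workstation accumulating many low-severity detections surfaces first, which is exactly the long-game pattern that per-alert triage misses.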
As a SOC analyst, you had it all wrong when you blamed yourself. It’s not you; it’s the way security technologies look at alerts and detections. Once technologies adopt a kind of prioritization that brings entities forward over alerts and detections, you’ll find yourself not only with fewer alerts to monitor and engage, but also catching more true positives, faster and with more confidence.
And, most importantly, you’ll be able to have your cup of coffee in the morning.