In the last few weeks we’ve heard a lot about the Log4j vulnerabilities, with the most prominent being Log4Shell, and unfortunately, this is just the beginning. The situation is very serious, as the recommendations from Microsoft’s latest blog indicate:
At this juncture, customers should assume broad availability of exploit code and scanning capabilities to be a real and present danger to their environments. Due to the many software and services that are impacted and given the pace of updates, this is expected to have a long tail for remediation, requiring ongoing, sustainable vigilance.1
Many organizations are under tremendous pressure, and quite honestly, many are afraid that they are vulnerable and are wondering whether they took all the preventive and detection measures properly. Then all the other questions start swirling:
- We set up the detection rules, but were they deployed correctly? Did we actually check the detection queries?
- Is copying a query we saw in one of the blogs right for our environment?
- Have we checked whether the fields in the query are relevant to our environment?
So, why are the Log4j vulnerabilities unique?
There’s no way to sugarcoat it: the widespread vulnerabilities in Apache Log4j will be exploited for some of the nastiest cyber attacks we have ever seen, and the worst of them may actually be months — or even years — into the future.
The main problem is that we have no idea how widespread this vulnerability is in any given environment. There is no way to know how many systems and products in the organization are vulnerable. The US CISA has documented some of the applications confirmed as vulnerable (https://github.com/cisagov/log4j-affected-db#software-list), but we don’t know whether that list covers 10% or 90% of the applications impacted.
What usually happens when a critical vulnerability surfaces is that the first order of business is patching the affected systems. Usually, this is relatively simple, as in the case of patching your Exchange server. Sometimes, it is a little harder, like when you are patching your physical routers. With the Log4j vulnerability, however, this is virtually impossible, because the library has a ubiquitous presence in almost all major Java-based enterprise apps and servers. You just don’t know which systems use Log4j, and you can’t patch something without knowing where it is. The only way to monitor this vulnerability and manage the risk associated with it is through robust detection.
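To see why inventorying Log4j is so hard, consider the kind of quick filesystem scan many teams ran in the first days. Below is a minimal sketch, assuming a filename-based search for log4j-core JARs; it misses shaded or "fat" JARs and copies bundled inside other applications, which is exactly why patching alone cannot close the gap.

```python
#!/usr/bin/env python3
"""Rough sketch: locate Log4j core JARs on a host by filename.

Assumption (not from this article): a filename-based scan is only a first
pass. It misses shaded/"fat" JARs and copies bundled inside applications,
so treat the output as a starting inventory, not proof of absence.
"""
import os
import re
import sys

# Matches e.g. log4j-core-2.14.1.jar and captures the version numbers.
LOG4J_JAR = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$", re.IGNORECASE)

def scan(root: str):
    """Yield (path, version_tuple) for every log4j-core JAR found under root."""
    for dirpath, _dirnames, filenames in os.walk(root, followlinks=False):
        for name in filenames:
            match = LOG4J_JAR.search(name)
            if match:
                version = tuple(int(part) for part in match.groups())
                yield os.path.join(dirpath, name), version

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "/"
    for path, version in scan(root):
        # 2.17.1 is used as a rough "known fixed" cut-off; always check the
        # current Apache advisories rather than hard-coding a number.
        status = "REVIEW" if version < (2, 17, 1) else "likely patched"
        print(f"{status}: {path} (version {'.'.join(map(str, version))})")
```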
Why detection is key for Log4j vulnerabilities
Once the organization’s security teams have connected all the relevant systems and endpoints to the organizational SIEM, they will have the ability to identify and monitor suspicious activity according to the rules they have created. Good detection rules capture suspicious activity in real time, indicating that an attacker is trying to exploit the vulnerability in your organization’s systems at that very moment.
We have seen a consistent increase in scanning for systems running vulnerable versions of Log4j since the vulnerability was identified. Alerting on exploitation attempts from outside the network can be very noisy; therefore, you should be looking for either successful exploitation or attempted exploitation from within the network. It is important to note that vulnerable software may sit on back-end servers, so organizations should not only examine externally exposed systems but also look for exploitation activity on internal systems.
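As an illustration of what an attempted exploitation looks like in practice, here is a minimal sketch that flags JNDI lookup strings in plain-text web or proxy logs. The default file path is hypothetical, and real payloads are heavily obfuscated, so a pattern like this supplements, rather than replaces, properly tuned SIEM rules.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag Log4Shell-style exploitation attempts in web logs.

Assumptions (not from this article): logs are plain text with one request per
line, and the default file path is hypothetical. Attackers obfuscate payloads
heavily, so a regex catches only the common forms.
"""
import re
import sys

# Matches ${jndi: plus simple nested-lookup obfuscations such as
# ${${lower:j}ndi: or ${${::-j}ndi:
JNDI_PATTERN = re.compile(
    r"\$\{\s*"
    r"(?:\$\{[^}]*\}|[jJ])(?:\$\{[^}]*\}|[nN])"
    r"(?:\$\{[^}]*\}|[dD])(?:\$\{[^}]*\}|[iI])"
    r"\s*:"
)

def suspicious_lines(path: str):
    """Yield (line_number, line) for each log line containing a JNDI lookup."""
    with open(path, errors="replace") as handle:
        for lineno, line in enumerate(handle, start=1):
            if JNDI_PATTERN.search(line):
                yield lineno, line.rstrip()

if __name__ == "__main__":
    log_path = sys.argv[1] if len(sys.argv) > 1 else "access.log"  # hypothetical path
    for lineno, line in suspicious_lines(log_path):
        print(f"{log_path}:{lineno}: {line}")
```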
If any evidence of post-exploitation activity is found, it indicates a potential compromise, and it is imperative to kick off Incident Response procedures immediately.
Why generic rules don’t always work, and organizational context is essential
Most third-party security services produce rules for all of their customers, and those rules are typically not customized. This “one size fits all” approach does not meet the needs of many organizations, because environments vary greatly. This means that things are going to get missed.
Typically, when a rule is implemented, no one reviews whether it is suitable for the environment, whether its fields map to the fields actually present in the environment, whether the rule is effective, and so on. Most customers are simply too overwhelmed to perform this additional review, so they put their trust in their security provider. The result is that they feel confident with the rule implemented in the environment, but it is a false sense of security, because the rule is probably not working correctly, if at all.
At CardinalOps, we have seen a number of customers implement rules from a third-party source. For example, one of our customers runs a Splunk SIEM and implemented a rule based on the Web data model. CardinalOps’ AI engine found the rule to be broken because the data model relies on fields that do not exist in the organization’s log sources, such as the http_user_agent field, so the rule will never alert. This particular customer had several different firewalls, each with its own log format, and only some of those logs contained the field that would trigger the alert based on this data model. Again, the customer felt secure because they had implemented a rule, but they never verified that the SIEM would parse the data correctly. These are the kind of blanket rules that many organizations are implementing, giving them a false sense of security.
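A simple way to avoid this trap is to verify field coverage before you trust a rule. The sketch below assumes you have exported a sample of parsed events from the SIEM as JSON lines (the file name, the sourcetype key, and the field list are all hypothetical) and reports, per log source, how often the fields a rule depends on actually appear.

```python
#!/usr/bin/env python3
"""Sketch: verify that the fields a detection rule depends on actually exist
in your parsed events before trusting the rule.

Assumptions (not from this article): a sample of events has been exported
from the SIEM as JSON, one object per line, with a "sourcetype"-style key
naming the log source. The file name and field list are hypothetical.
"""
import json
from collections import Counter, defaultdict

REQUIRED_FIELDS = ["http_user_agent", "url", "src_ip"]  # fields the rule queries

def field_coverage(path: str):
    """Return per-source event totals and per-source counts of each required field."""
    totals = Counter()
    present = defaultdict(Counter)
    with open(path) as handle:
        for line in handle:
            event = json.loads(line)
            source = event.get("sourcetype", "unknown")
            totals[source] += 1
            for field in REQUIRED_FIELDS:
                if field in event:
                    present[source][field] += 1
    return totals, present

if __name__ == "__main__":
    totals, present = field_coverage("exported_events.jsonl")  # hypothetical export
    for source, count in totals.items():
        for field in REQUIRED_FIELDS:
            pct = 100.0 * present[source][field] / count
            note = "" if pct > 0 else "  <-- a rule using this field will never match this source"
            print(f"{source}: {field} present in {pct:.0f}% of events{note}")
```

The same check can usually be done natively in the SIEM itself; the point is simply that field coverage has to be verified per log source, not assumed.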
There are several types of rules that exist and not every type of rule can suit your environment. This is why context is critical. The rules produced need to be created with your infrastructure, SIEM, and the technologies you are using in mind to truly be effective.
For example, here are the most common rule types and the context each one needs in order to be effective:
- Signature-based rules – Signature-based detection is one of the most common techniques used to address software threats. This type of detection relies on security platforms having a predefined repository of static signatures (fingerprints). Similar to the firewall example above, you have to make sure that the rule is configured correctly to alert on the signature. We have seen rules that apply to a product but look for signatures that relate to a different brand of that same product! Many security teams monitor so many variants of the same kind of product, such as several brands of firewall, that this issue slips through.
- IOC-based and Pattern-based rules – The known Indicators of Compromise (IOCs) relevant to this attack are IP addresses that have been observed attempting to exploit the vulnerability and the contents of the requests being sent. The log sources with the best visibility into these IOCs will be firewalls, intrusion detection systems, web application firewalls, and proxies. While these log sources can potentially provide detection for the initial exploit, keep in mind that IOCs change over time. In this case, it is critical to have an accurate inventory of your systems and a sound understanding of how their logs are parsed. Similar to IOCs, with Pattern-based detections you need to understand what you have in your environment, but also adjust for exclusions and for changes over time.
- Traffic and/or anomaly-based rules – Network traffic analysis (NTA) with behavioral anomaly detection examines network traffic to detect unusual or unauthorized activity. It is superior to relying only on static signatures and IOCs because it can detect living-off-the-land approaches used to execute key adversary tactics such as privilege escalation and lateral movement. To ensure the rule is contextualized for your environment, you need to adjust for exclusions as well as thresholds, as shown in the sketch after this list.
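To make that last point concrete, here is a rough sketch of a threshold-based check contextualized for one environment: it counts outbound LDAP/RMI connections per internal host from exported firewall logs, the kind of traffic Log4Shell exploitation typically generates. The CSV layout, port list, exclusion list, and threshold are all illustrative and would need to be tuned to your own environment.

```python
#!/usr/bin/env python3
"""Sketch of a threshold-based rule contextualized for one environment: flag
internal hosts making an unusual number of outbound LDAP/RMI connections,
which successful Log4Shell exploitation typically generates.

Assumptions (not from this article): firewall logs exported as CSV with
src_ip, dest_ip, and dest_port columns; the port list, exclusion list, and
threshold are illustrative and must be tuned to your environment.
"""
import csv
from collections import Counter

SUSPECT_PORTS = {389, 636, 1099, 1389}   # LDAP, LDAPS, Java RMI, common exploit LDAP port
EXCLUDED_HOSTS = {"10.0.0.5"}            # hypothetical: hosts that legitimately talk LDAP
THRESHOLD = 5                            # per-host connection count before alerting

def outbound_ldap_counts(path: str) -> Counter:
    """Count outbound connections to suspect ports per source host."""
    counts = Counter()
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            if int(row["dest_port"]) in SUSPECT_PORTS and row["src_ip"] not in EXCLUDED_HOSTS:
                counts[row["src_ip"]] += 1
    return counts

if __name__ == "__main__":
    for host, count in outbound_ldap_counts("firewall_connections.csv").items():  # hypothetical export
        if count > THRESHOLD:
            print(f"ALERT: {host} made {count} outbound LDAP/RMI connections")
```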
How can CardinalOps help?
The Threat Coverage Optimization Platform has several detection features which can help organizations manage the risks associated with these vulnerabilities:
- Evaluate your detection rules and ensure they are properly functioning with the right context for your environment
- Quantify your threat coverage gap by identifying broken and missing rules
- Optimize threat coverage with recommendations configured with the right context to improve the detection capabilities of the SIEM
- Seamlessly deploy changes to the environment through automation without negatively impacting other detection capabilities
- Simplify communications with senior leadership with a third-party audit metric based on your MITRE ATT&CK threat coverage, which you can use to demonstrate that you have taken the necessary steps to reduce the risk this vulnerability poses to your crown-jewel assets
- Improve the ROI of your SOC tools
Please contact us to learn how we can help you start detecting Log4j vulnerabilities in your environment TODAY: info@cardinalops.com.