Risk Before Popularity: 4 Factors for Determining Security Vulnerability Risk


Stephen Roostan, VP EMEA at Kenna Security, looks at how to identify which security vulnerabilities represent the greatest risk.

For all the wrong reasons, security breaches capture the headlines, especially when a high-profile organisation comes under attack. In 2020 alone, we’ve seen some of the best-known brands on the planet fall victim to external threats, including the likes of Zoom, Twitter and Nintendo. Each incident has generated a huge amount of buzz that has understandably left many businesses worried about their own ability to manage risk effectively.

Of course, an increased focus on an organisation’s own security and the external threats that it may fall victim to can be extremely beneficial, especially in the current environment, where many employees are working remotely and the attack surface has grown significantly. A level of internal reflection is important, but these headlines risk pulling the focus away from more dangerous vulnerabilities that don’t command the same level of media attention.

Research from Kenna Security and the Cyentia Institute shows that only 5 percent of vulnerabilities fall into the ‘high-risk’ category, indicating that they could be weaponised in some way. There will always be attacks that both garner a large amount of attention and warrant an equal amount of action – such as the Heartbleed bug, which put millions of websites at risk as a result of a vulnerability in the open-source OpenSSL cryptographic library. However, there are also vulnerabilities that can be just as catastrophic yet go largely unnoticed.

Broadening Your Gaze


Framing vulnerability management efforts around security news headlines puts security teams in a precarious position. As the news and hype around security vulnerabilities escalates, it is becoming increasingly difficult for security teams to stay current with the threat landscape and determine how best to prioritise their efforts.

Allocating precious time and energy where it yields the biggest dividends in reducing organisational risk depends on security teams being able to prioritise based on the factors that really matter. Rather than sinking valuable resources into remediating headline-grabbing vulnerabilities that may pose little or no threat to the organisation, identifying the right vulnerabilities to fix depends on an objective and consistent way of prioritising them.

Let’s take a look at the top four factors that security teams should consider when evaluating which vulnerabilities represent the greatest risk to a specific environment.


Remain wary of remote code execution

Remote code execution enables an attacker to access a computing device from anywhere in the world to make damaging changes, so it’s no surprise that remote code execution tops the wish list of hackers everywhere. Having established a way to run their code on a remote system, hackers then have the ability to inflict all kinds of chaos, including establishing bot networks, stealing data, or infiltrating networks.
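
To make the mechanics concrete, here is a minimal, hypothetical sketch of how an RCE flaw typically arises: user-controlled input reaching a command interpreter unsanitised. The web framework, route and parameter names are illustrative assumptions, not any real product’s code.

```python
# Hypothetical sketch: how unsanitised input becomes remote code execution.
# The route and parameter names are illustrative only.
import subprocess
from flask import Flask, request

app = Flask(__name__)

@app.route("/ping")
def ping():
    host = request.args.get("host", "")
    # VULNERABLE: user input goes straight to a shell, so a value such as
    # "127.0.0.1; cat /etc/passwd" runs arbitrary commands on the server.
    return subprocess.run(f"ping -c 1 {host}", shell=True,
                          capture_output=True, text=True).stdout

@app.route("/ping-safe")
def ping_safe():
    host = request.args.get("host", "")
    # Safer: no shell, argument passed as a list, so shell metacharacters are inert.
    return subprocess.run(["ping", "-c", "1", host],
                          capture_output=True, text=True).stdout
```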


Look out for Metasploit and blackhat exploits

Unfortunately, the same Metasploit framework that security teams use to pen-test their organisation’s defences and identify weaknesses has become the de facto standard for exploit development. When hackers use Metasploit, they’re not just creating tests, they’re creating real attacks. So whenever new modules appear in Metasploit, it’s a given that attackers are, or soon will be, leveraging them to exploit the corresponding vulnerabilities.

For that reason, any vulnerability identified with a Metasploit module should be at the top of an enterprise’s list of vulnerabilities to patch or mitigate. Regular patching, running applications or processes with least privileges, and limiting network access to only trusted hosts can all play a pivotal role in limiting a hacker’s ability to leverage Metasploit.
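
As a rough illustration of that prioritisation rule, the sketch below pushes any finding that carries a known Metasploit module to the front of the remediation queue. The record layout and the has_metasploit_module flag are assumptions made for the example, not the output format of any particular scanner.

```python
# Hypothetical sketch: bump vulnerabilities with Metasploit modules to the top
# of the remediation queue. The record fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float
    has_metasploit_module: bool

def remediation_order(vulns: list[Vuln]) -> list[Vuln]:
    # Weaponised vulnerabilities (a Metasploit module exists) come first;
    # everything else falls back to CVSS severity.
    return sorted(vulns, key=lambda v: (not v.has_metasploit_module, -v.cvss))

findings = [
    Vuln("CVE-2023-0001", 9.8, False),
    Vuln("CVE-2023-0002", 7.5, True),
]
for v in remediation_order(findings):
    print(v.cve_id, "patch now" if v.has_metasploit_module else "schedule")
```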

Security teams are also well advised to consider blackhat exploit kits. Although these kits proliferate far less widely than Metasploit, their intent is much clearer: an exploit taken from a blackhat kit is almost always used maliciously, and remediation decisions should weight it accordingly.


Keep a close eye on the ability to access networks

Network accessibility plays a major role in determining the severity of a security threat and the likelihood of a vulnerability being exploited. Today’s attackers leverage automation to execute attacks at scale and are on the lookout for network-accessible vulnerabilities that can form the basis of botnets and command-and-control communications.

Cross-site scripting, missing function-level access controls and patterns of excessive use are common examples of network-accessible weaknesses that should be prioritised for remediation.
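
One simple way to single these out is to look at the attack-vector component of a vulnerability’s CVSS v3 vector string, where AV:N denotes network reachability. The CVE identifiers and vector strings in this sketch are illustrative placeholders.

```python
# Sketch: flag vulnerabilities whose CVSS v3 attack vector is "Network" (AV:N).
# The CVE IDs and vector strings below are illustrative placeholders.
def is_network_accessible(cvss_vector: str) -> bool:
    metrics = dict(part.split(":", 1) for part in cvss_vector.split("/") if ":" in part)
    return metrics.get("AV") == "N"

findings = {
    "CVE-2023-0001": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",  # remotely reachable
    "CVE-2023-0002": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",  # local access only
}
network_facing = [cve for cve, vector in findings.items() if is_network_accessible(vector)]
print(network_facing)  # ['CVE-2023-0001']
```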


Always consider the Exploit Database

The Exploit Database is a comprehensive repository of exploits and proof-of-concept attacks. Unfortunately, just like Metasploit, the Exploit Database is an invaluable resource for security teams and attackers alike. Attackers use it to find an exploit that will help compromise a known vulnerability within a target system.

Until a vulnerability appears in the Exploit Database, it is less likely to emerge as a significant broad-based threat to organisations. As soon as it does appear, however, organisations need to act fast to remediate it.
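
A lightweight way to operationalise this check is to match your tracked CVEs against a locally downloaded copy of the Exploit Database index. The file name and column names in this sketch are assumptions about that export’s format and should be verified against the current repository.

```python
# Hypothetical sketch: check whether any tracked CVE has a public exploit in a
# locally downloaded copy of the Exploit-DB index. The filename and column
# names ("files_exploits.csv", "codes") are assumptions about the export format.
import csv

def cves_with_public_exploits(tracked_cves: set[str],
                              index_path: str = "files_exploits.csv") -> set[str]:
    hits = set()
    with open(index_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            # The "codes" field is assumed to hold semicolon-separated CVE identifiers.
            for code in row.get("codes", "").split(";"):
                if code.strip() in tracked_cves:
                    hits.add(code.strip())
    return hits

print(cves_with_public_exploits({"CVE-2021-44228", "CVE-2023-0001"}))
```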

Straightening Out Priorities


Today’s enterprise security teams have tens of thousands of vulnerabilities to remediate. The reality is that most vulnerabilities are likely to be exploited within 40 to 60 days, yet it can take security teams up to 120 days to put remediation in place. So the pressure is on for security teams to identify the vulnerabilities that pose the biggest risk of exploitation for their organisation and get to work on fixing those first.

As we’ve seen, while keeping up to date with security news is a great way of staying abreast of how the threat landscape is evolving, a vulnerability doesn’t need to be new or buzzworthy to pose a serious threat to the enterprise. All too often, headlines distract security teams from quickly and efficiently remediating risks that haven’t made it into the hall of fame. What organisations need to remember is that the most important factor is where a vulnerability sits within their ecosystem. For example, a high-risk vulnerability sitting in a low-risk environment poses less of a threat than a medium-risk vulnerability in a highly accessible environment. Ultimately, visibility and context are everything. Media headlines and a ranking under the Common Vulnerability Scoring System (CVSS) can have little bearing. What matters is the risk that the vulnerability poses to the individual organisation.
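
To picture that trade-off, the toy calculation below weights a vulnerability’s severity score by the exposure of the asset it sits on, so a medium-severity flaw on an internet-facing system can outrank a critical one on an isolated host. The weights are arbitrary illustrations, not Kenna’s or any other vendor’s scoring model.

```python
# Toy illustration: weight vulnerability severity by asset exposure.
# The exposure categories and weights are arbitrary, not any vendor's model.
EXPOSURE_WEIGHT = {"internet_facing": 1.0, "internal": 0.5, "isolated": 0.2}

def contextual_risk(cvss: float, exposure: str) -> float:
    return cvss * EXPOSURE_WEIGHT[exposure]

print(contextual_risk(9.8, "isolated"))        # 1.96 - critical flaw, low-risk environment
print(contextual_risk(6.5, "internet_facing")) # 6.5  - medium flaw, highly accessible environment
```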

At the end of the day, effective vulnerability management requires a risk-based approach to prioritising remediation efforts, so that the right vulnerabilities are addressed at the right time. That means streamlining and accelerating efforts by evaluating a vulnerability’s most critical aspects to figure out how much danger it really poses. In this way, the security team’s limited time and resources can be focused on the vulnerabilities that actually pose the most risk to the organisation.


Stephen Roostan

Stephen has over a decade of experience in cyber security and transformation projects, and his role at Kenna is to rapidly grow the EMEA organisation to meet the customer demand for risk-based vulnerability management. Prior to Kenna he held senior sales roles at Forcepoint, Citrix and Imperva, focusing on IT solutions for complex, enterprise requirements.
