
Evolution of Cybersecurity Testing
Today’s cybersecurity testing, as Sylvester and Cuervo remind us, can prompt us to recall its beginnings. Among them, for example, is the exercise the U.S. Navy carried out in the eighties: a pentesting procedure intended to reveal how easy it might be to breach one of its own bases. This type of process, framed within the activity of ethical hacking, has improved remarkably. Together with vulnerability assessment, it can be considered a ‘traditional’ form of security testing, one that continues to be enormously important across a variety of industries and organizations in our current society.
Nevertheless, according to the authors, something is wrong with this traditional procedure. Breaches in companies continue to grow exponentially and accumulate, despite the effort being put into their security with the tools and programs available. Many companies have taken months or even years to detect those ‘compromises’ or risks. The problem seems to stem from relying on the same processes that, years ago as today, are often seen as sufficient for cybersecurity assessment. What is required, then, as the authors suggest, is an evolution in cybersecurity testing.
Citrix, for example, is one of those companies. Even doing perhaps two cybersecurity tests a year, for 10 years, with traditional vulnerability scanning, it had not realized that adversaries were already inside its network; it had not noticed that its security had been breached. Traditional tests leave aside the fact that networks may already have been compromised.
The evolution being called for in cybersecurity, following the authors’ reports, can take as an example the methodological advances in the medical and aviation industries. Both have genuinely sought to learn from their mistakes and have gone through a long process of improvement, carried out through detailed research and aimed at the immediate remediation of any problem. These industries have tried to be open communities, sharing information between entities and establishing standardized processes. Their testing exercises continually strive toward ‘perfection,’ at least to the extent that some controllable variables can be adjusted, in the hope that their methods become ever more predictable and successful, always to the benefit of the user.
Therefore, we cannot conclude that a network is secure based on pentesting and vulnerability scanning alone. These processes focus on testing risks from outside the corporate network and serve as preventive techniques and assessments of defenses. Accordingly, they deliver information on potential attacks, not on confirmed ones. They can amount to work with a limited vision, starting from the ‘false hypothesis’ that the attacker is outside, without considering that the attacker may already be inside, a situation in which prevention is no longer useful.
Cuervo talks about a typical “misconception that all attacks occur on servers and databases,” when in fact most attacks start with emails sent to an organization’s employees by malicious actors (recall our ransomware blog post). These are intended to compromise certain devices first and then “move laterally until higher value assets are found.”
Image taken from Sylvester’s presentation.
These experts then propose, as indispensable, the analysis of network metadata to locate compromises. Next, its results should be compared with those of the traditional tests to verify whether it yields additional information. After that, the possible contribution of the findings to the organization’s risk posture must be analyzed. Finally, they suggest repeating compromise assessments continuously. Such assessments are expected to provide ongoing insight into when, where, and through which communication channels the infrastructure is in contact with adversaries and potential attackers.
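To make that cycle concrete, here is a minimal sketch, in Python, of how a recurring compromise assessment over network metadata could be structured. The metadata source, record layout, indicator list, and function names are assumptions made purely for illustration; they are not taken from the authors’ material.

```python
# Minimal sketch of a recurring compromise assessment over network metadata.
# The metadata source, record layout, and indicator list are hypothetical.
import time
from dataclasses import dataclass


@dataclass
class MetadataRecord:
    timestamp: str     # when the contact was observed
    source_host: str   # internal device that initiated it
    destination: str   # external domain or IP that was contacted


def collect_metadata() -> list[MetadataRecord]:
    """Placeholder for pulling DNS, firewall, and proxy records
    from the organization's log pipeline (hypothetical)."""
    return []


def assess(records: list[MetadataRecord], indicators: set[str]) -> list[MetadataRecord]:
    """Keep only the records whose destination matches a known-bad indicator."""
    return [r for r in records if r.destination in indicators]


def run_assessment_cycle(indicators: set[str], interval_seconds: int = 3600) -> None:
    """Repeat the assessment continuously, reporting when and where an
    internal host communicated with a suspected adversary."""
    while True:
        for finding in assess(collect_metadata(), indicators):
            print(f"{finding.timestamp}: {finding.source_host} "
                  f"contacted {finding.destination}")
        time.sleep(interval_seconds)
```

In practice, the collection step would read from the organization’s own log pipeline, and each finding would feed the comparison with traditional tests and the risk-posture analysis mentioned above.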
We cannot forget that an organization can generate an enormous amount of metadata. But keeping track of DNS queries, for example, can be a significant advantage in determining whether the organization’s network has already been compromised. The same goes for firewall and proxy data. Paying attention to this kind of data, and analyzing it, makes it possible to monitor the activity of adversaries already present within the network.
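As a simple illustration of what keeping track of DNS queries might look like in practice, the following sketch checks an exported DNS query log against a list of known-malicious domains. The file names and the CSV columns (timestamp, client_ip, domain) are hypothetical.

```python
# Minimal sketch: flag DNS queries to domains on a threat-intelligence
# blocklist. File names and the CSV layout are assumptions for illustration.
import csv


def load_blocklist(path: str) -> set[str]:
    """Read one known-malicious domain per line."""
    with open(path) as handle:
        return {line.strip().lower() for line in handle if line.strip()}


def flag_suspicious_queries(log_path: str, blocklist: set[str]) -> list[dict]:
    """Return the DNS log rows whose queried domain appears on the blocklist."""
    findings = []
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            domain = row["domain"].strip().lower().rstrip(".")
            if domain in blocklist:
                findings.append(row)
    return findings


if __name__ == "__main__":
    blocklist = load_blocklist("malicious_domains.txt")       # hypothetical file
    for hit in flag_suspicious_queries("dns_queries.csv", blocklist):  # hypothetical export
        print(f"{hit['timestamp']}  {hit['client_ip']} -> {hit['domain']}")
```

The same pattern can be applied to firewall and proxy logs: normalize the destination field and compare it against up-to-date threat indicators.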
It is therefore fundamental to carry out these measurements and then manage the findings. The development of more advanced cybersecurity testing is of vital importance, especially for the sustainability of organizations in this highly connected world.
Based on what we have learned from the aforementioned authors, it is advisable to hold the position that adversaries may already be inside our organization’s network. From there, and beyond traditional vulnerability scanning, we can move on to analyzing network metadata. We can complement traditional methods with compromise assessments. We can work constantly, with real-time data, on proving that our systems really are free of adversaries and potential attackers. And, if necessary, we can work on eliminating those compromises or present risks and thus improve the security of our systems. These are recommendations we should not simply ignore.
“It is not the strongest of the species that survives,
nor the most intelligent that survives.
It is the one that is most adaptable to change.”
(Quote mistakenly attributed to Charles Darwin.)
Do you have any questions about what Fluid Attacks can do for you as part of your organization’s cybersecurity process? Contact us.