# Book Review: Intentional Risk Management Through Complex Networks Analysis – Innovation for Infosec

This post provides a non-comprehensive summary of the multi-author book “Intentional Risk Management Through Complex Networks Analysis”, published in 2015.

I recommend this book to anyone looking for genuinely science-based innovation in Information Security. That statement is not a marketing slogan; it is a reality.

The authors of this book are, in alphabetical order: Victor Chapela, Regino Criado, Santiago Moral and Miguel Romance.

In this post I present some of the interesting points proposed by the authors. The ideas mentioned here come from the book. This summary is, above all, an invitation to read the book, digest its innovative proposals and start innovating in this demanding field of IT Security.

**Chapter 1. Intentional Risk and Cyber-Security: A Motivating Introduction**

The authors start distinguishing between **Static Risk** and **Dynamic Risk**. Static Risk is opportunistic risk (e.g. identity theft). Dynamic Risk is directed intentional risk that attempts to use potentially existing but unauthorised paths (e.g. using a vulnerability).

**Static Risk** is based on the probability that a user with authorised access to a specific application will abuse that access for personal gain. This risk can be deterred by reducing anonymity.

In **Dynamic Risk** the attacker tries to get the most valuable node via the least number of hops via authorised or unauthorised accesses.

Currently the main driver for a cyber-attack is the expected profit for the attacker. The book also links Intentionality Management with **Game Theory**, specifically with the **stability analysis of John Nash’s equilibrium**. The book uses **Complex Network Theory** (both in terms of structure and dynamics) to provide a physical and logical structure of where the game is played.

The authors consider **intentionality as the backbone for cyber-risk management**. They mention a figure, coming from a security provider, of around USD 400 billion as the latest annual cost of cyber-crime.

The authors make a distinction between:

– **Accidental risk management**, a field in which there is a cause that leads to an effect and attacks are prevented mostly with redundancy (e.g. in data centres) and

– **Intentional risk management**, in which we have to analyse the end goal of the attackers.

To prevent these attacks we can:

– Reduce the value of the asset.

– Increase the risk the attacker runs.

– Increase the cost for the attacker.

Traditionally, risk management methodologies are based on an actuarial approach, using the typical probability x impact formula, with the probability derived from the observed frequency of past events.

We need to assess which assets are the most valuable for attackers.

Using network theory, whose foundations are also covered in this blog in summaries posted between October 2015 and March 2016, the more connected a node is (or the more accessible a computer system is), the greater the risk of it being hacked.

A key point proposed by this book: **Calculated risk values should be intrinsic to the attributes of the network and require no expert estimates**. The authors break down attackers’ expected profit into these three elements:

– **Expected income** i.e. the value for them.

– The **expense** they incur (depending on accessibility, via either a technical or a non-technical user access).

– **Risk to the attacker** (related to anonymity and to deterrent legal, economic and social consequences).

An attacker prefers busy applications that are highly accessible, admin access privileges and critical remote execution vulnerabilities. The main driver for attackers is value for them. Attackers in the dynamic risk arena are not deterred by anonymity.

The authors relate anonymity to the number of users who have access to the same application.

**Chapter 2. Mathematical Foundations: Complex Networks and Graphs (A Review)**

Complex networks model the structure and non-linear dynamics of discrete complex systems.

The authors mention the difference between holism and reductionism. Reductionism works if the system is linear. Complexity depends on the degrees of freedom that a system has and whether linearity is present.

Networks are composed of vertices and edges. In complex networks small changes may have global consequences.

Euler walk: A path between two nodes for which every link appears exactly once. The degree of a node is the number of links the node shares.

If the number of nodes with odd degree is greater than 2, then no Euler walk exists.

If the number of nodes with odd degree equals 0, then there are Euler walks starting from any node.

If the number of nodes with odd degree equals 2, then every Euler walk starts at one of the odd-degree nodes and ends at the other.
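The three Euler-walk conditions can be sketched as a short check. This is an illustrative helper (not from the book) that assumes the graph is connected and given as an edge list:

```python
# Sketch: classify Euler-walk existence by counting odd-degree nodes.
# Assumes a connected undirected graph given as an edge list.
from collections import Counter

def euler_walk_status(edges):
    """Return where Euler walks exist, based on the odd-degree node count."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    if odd == 0:
        return "from any node"                    # Euler circuit exists
    if odd == 2:
        return "between the two odd-degree nodes"
    return "none"                                 # more than 2 odd nodes

# A path a-b-c has two odd-degree endpoints:
print(euler_walk_status([("a", "b"), ("b", "c")]))
```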

A graph is the mathematical representation of a network. The adjacency matrix of a graph is a way to determine the graph completely. A node with a low degree is weakly connected. A regular network is a network whose nodes have exactly the same degree.
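The adjacency-matrix representation and the degree of a node can be illustrated in a few lines; the node names and edges here are made up for the example:

```python
# Sketch: adjacency matrix and node degrees for a small undirected network.
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]  # a 4-cycle

index = {n: i for i, n in enumerate(nodes)}
A = [[0] * len(nodes) for _ in nodes]
for u, v in edges:
    A[index[u]][index[v]] = 1
    A[index[v]][index[u]] = 1  # undirected, so the matrix is symmetric

degrees = [sum(row) for row in A]
print(degrees)  # every node has degree 2, so this network is regular
```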

In a directed network the adjacency matrix is not necessarily symmetric. Paths do not allow repetition of vertices while walks do. A tree is a connected graph in which any two vertices are connected by exactly one path.

Structural vulnerability: How does the removal of a finite number of links and/or nodes affect the topology of a network?

Two nodes with a common neighbour are likely to connect to each other. The **clustering coefficient** measures it.
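The local clustering coefficient is the fraction of a node's neighbour pairs that are themselves connected. A minimal sketch, with an illustrative graph:

```python
# Sketch: local clustering coefficient of a node in an undirected graph
# stored as a dict of neighbour sets (illustrative data).
from itertools import combinations

graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}

def clustering(graph, node):
    neigh = graph[node]
    k = len(neigh)
    if k < 2:
        return 0.0
    # Count connected neighbour pairs out of the k*(k-1)/2 possible ones.
    links = sum(1 for u, v in combinations(neigh, 2) if v in graph[u])
    return links / (k * (k - 1) / 2)

print(clustering(graph, "b"))  # neighbours a, c, d; only a-c is connected
```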

The **eigenvector centrality** of a node is proportional to the sum of the centrality values of all its neighbouring nodes.

Spectral graph theory studies the eigenvalues of matrices that embody the graph structure.

Betweenness centrality: Edge betweenness of an edge is the fraction of shortest paths between pairs of vertices that run along it. **Degree distribution provides the probability of finding a node in G with degree k**.
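The degree distribution mentioned above can be computed empirically; the edge list is an illustrative example, not data from the book:

```python
# Sketch: empirical degree distribution P(k) — the probability of finding
# a node in the graph with degree k.
from collections import Counter

edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

n = len(degree)  # number of nodes
distribution = {k: count / n for k, count in Counter(degree.values()).items()}
print(distribution)  # e.g. half of the nodes have degree 2
```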

Complex network models

In random graphs, the probability that 2 neighbours of a node are connected is the probability that two randomly chosen nodes are linked. **Large scale random networks have no clustering in general**. The average distance in a random network is rather small.

Small world model

Some real networks like the Internet have characteristics which are not explained by uniformly random connectivity. **Small world property: The network diameter is much smaller than the number of nodes**. Most vertices can be reached from the others through a small number of edges.

Scale-free networks

The degree distribution does not follow a Poisson-like distribution but a power law, i.e. **the majority of nodes have low degree while some nodes, the hubs, have extremely high connectivity**.

Additionally, many systems are strongly clustered with many short paths between the nodes. They obey the small world property.

**Scale-free networks emerge in the context of a growing network in which new nodes prefer to connect to highly connected nodes**. When there are constraints limiting the addition of new edges, then broad-scale or single-scale networks appear.
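The growth mechanism described above (new nodes preferring highly connected nodes) can be sketched as a toy preferential-attachment simulation. The parameters and starting edge are illustrative assumptions:

```python
# Sketch: preferential attachment — each new node links to an existing node
# with probability proportional to that node's degree, so hubs emerge.
import random

def grow(n_nodes, seed=0):
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}          # start from a single edge 0-1
    targets = [0, 1]               # node i appears degree[i] times in this list
    for new in range(2, n_nodes):
        hub = rng.choice(targets)  # degree-proportional choice
        degree[new] = 1
        degree[hub] += 1
        targets.extend([new, hub])
    return degree

degree = grow(200)
# Hubs emerge: the maximum degree far exceeds the average (which stays ~2).
print(max(degree.values()), sum(degree.values()) / len(degree))
```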

**Assortative networks**: **Most edges connect nodes that exhibit similar degrees** (the opposite holds in disassortative networks).

A Hamiltonian cycle in a graph passes through all its nodes exactly once. **The line graph of a network is a new graph whose nodes are the edges of the original network**.

**Chapter 3. Random Walkers**

Two different types of random walkers: Uniform random walkers and random walkers with spectral jump (a personalisation vector).

Statistical mechanics: The frequency of all the nodes will be the same in all the random walkers developed. **In any type of random walker the most important element is the frequency with which each node appears**.

“If we move on a network in a random way, we will pass more often through the more accessible nodes”. This is the idea behind the PageRank algorithm used by Google. The difficulty lies in computing the frequency of each node. A random walker on a network can be modelled by a discrete-time Markov chain.
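The visit-frequency idea can be sketched as a small power-iteration PageRank. The damping factor 0.85 and the example network are illustrative assumptions, not values from the book:

```python
# Sketch: stationary visit frequency of a uniform random walker with jumps,
# computed by power iteration (the PageRank idea).
def pagerank(out_links, damping=0.85, iters=100):
    nodes = list(out_links)
    n = len(nodes)
    rank = {v: 1 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}  # random-jump mass
        for v, succ in out_links.items():
            for w in succ:
                new[w] += damping * rank[v] / len(succ)
        rank = new
    return rank

# The most-pointed-to node accumulates the highest visit frequency:
rank = pagerank({"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]})
print(max(rank, key=rank.get))
```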

**Multiplex networks: The edges of those networks are distributed among several layers. It is useful to model Dynamic Risk**.

Intentional risk analysis

Accessibility: Linked to the frequency of a uniform random walker with spectral jump in the weighted network of licit connections. Two types of nodes:

– Connection-generator nodes (e.g. Internet access, effective access of internal staff).

– Non connection-generator nodes (those through which the communication is processed).

Static intentional risk (it exists but, I assume, it is not so key): the accessibility of each connection comes at zero cost because the accesses are achieved by using the structure of the network.

**In dynamic intentional risk each connection or non-designed access increase entails a cost for the attacker who seeks access to the valuable information (the vaults)**.

Modelling accessibility

A biased random walker with spectral jumps, moving towards the nodes with an optimal cost/benefit ratio: the random walker makes movements approaching the vaults. Accessibility in dynamic intentional risk may be modelled using a biased random walker with no spectral jumps on a 3-layered multiplex network (the jumps become the first layer):

1. A first layer corresponding to spectral jumps (ending and starting connections).

2. A second layer with the existing connections registered by the sniffing.

3. A third layer with connections due to the existence of both vulnerabilities + affinities.
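The three layers above can be sketched as a multiplex structure, where a walker may use any layer's edges from its current node. The node names and edges are purely illustrative:

```python
# Sketch: a 3-layer multiplex network as a dict of per-layer edge lists,
# mirroring the three layers listed above (illustrative data).
multiplex = {
    "spectral_jumps": [("vault", "entry")],                 # ending/starting connections
    "sniffed":        [("entry", "app"), ("app", "db")],    # licit, observed traffic
    "vuln_affinity":  [("app", "admin"), ("admin", "db")],  # vulnerabilities + affinities
}

def moves(multiplex, node):
    # A walker on a multiplex can follow edges from any layer.
    return sorted({v for layer in multiplex.values() for u, v in layer if u == node})

print(moves(multiplex, "app"))
```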

**Chapter 4. The Role of Accessibility in the Static and Dynamic Risk Computation**

**The anonymity is computed for each edge of the intentionality network**. **The value and the accessibility are computed for each node**. Two ways to calculate the edge’s PageRank:

a. via the classic PageRank algorithm (frequency of access to an edge and the PageRank of its nodes).

b. via Line Graph i.e. the nodes are the edges of the original network.

The damping factor will be the jumping factor.

The outcome will be a weighted and directed network with n nodes and m edges. There are equivalent approaches using the personalization vector.
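Option (b) above, the line graph, can be sketched directly: its nodes are the edges of the original network, and edge (u,v) connects to edge (v,w). The example network is illustrative:

```python
# Sketch: line graph of a small directed network. In the line graph,
# (u,v) -> (v,w) whenever the head of one edge is the tail of the next.
edges = [("a", "b"), ("b", "c"), ("b", "d"), ("d", "a")]

line_nodes = edges  # each original edge becomes a node of the line graph
line_edges = [
    (e1, e2)
    for e1 in edges
    for e2 in edges
    if e1[1] == e2[0]  # head of e1 is the tail of e2
]
print(line_edges)
```

Running PageRank on this line graph ranks the edges of the original network, which is what the chapter uses to compute edge-level accessibility.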

**Chapter 5. Mathematical Model I: Static Intentional Risk**

**Static Risk: Opportunistic risk. Risk follows authorised paths.**

**Dynamic Risk: Directed intentional risk. Tendency to follow unauthorised paths, i.e. linked to the use of potentially existing but unauthorised paths in the network.**

The model is based on the information accessibility, on its value and on the anonymity level of the attacker.

Intentionality complex network for static risk. Elements:

– **Value**: How profitable the attack is.

– **Anonymity**: How easily the identity of the attacker can be determined.

– **Accessibility**: How easily the attack is carried out.

Every node has a resistance (a measure of the effort an attacker needs to get access). Value is located at certain nodes of the network called vaults. Different algorithms will be used: Max-path algorithm, value assignment algorithm and accessibility assignment algorithm.

Static risk intentionality network construction method:

1. Network construction from the table of connection catches.

2. Network collapsed and anonymity assignment.

3. Value assignment.

4. Accessibility assignment.

Two networks appear in this study, the users network and the admins network. **Network sniffing provides the connections between the IP nodes and the IP:port nodes**. Based on this sniffing, we get the number of users who use each of the edges. The inverse of that integer becomes the label for each edge. The max-path algorithm is executed to distribute the value from the vaults to all the nodes of the networks.

The inverse of the number of users on each edge is used as a value reduction factor. **The higher the number of users who access a node, the greater the value reduction potential attackers face at that node, but also the greater the anonymity they enjoy**.

Each edge is labelled with the frequency of access (the number of accesses). The accessibility of a node is linked to the accessibility of the edges connecting it. For each edge, the PageRank algorithm is calculated.

The higher the access frequency, the higher the probability that someone will misuse the information present in that node.

The higher the profit to risk ratio for the attacker, the greater the motivation for the attacker.

The paradigm shift is relevant: From the traditional risk = impact x probability to:

– Attacker income: Value for each element of the network.

– Attacker probability: Directly proportional to accessibility.

– Attacker risk: 1/anonymity.

**The value of each element resides in the node. Anonymity resides on the edge.**

**The profit-to-risk ratio for the attacker (PAR) = value x accessibility x (anonymity / k), where k is the potential punishment probability for the attacker.**
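The PAR formula above is straightforward to compute once value, accessibility and anonymity are assigned. The numbers below are illustrative, not from the book:

```python
# Sketch: profit-to-risk ratio for the attacker (PAR) as stated in the
# chapter: value x accessibility x (anonymity / k).
def par(value, accessibility, anonymity, k):
    """k is the potential punishment probability for the attacker."""
    return value * accessibility * (anonymity / k)

# A highly accessible, highly anonymous node is far more attractive
# than a poorly accessible, well-identified one of the same value:
exposed = par(value=100, accessibility=0.9, anonymity=1.0, k=0.5)
internal = par(value=100, accessibility=0.2, anonymity=0.1, k=0.5)
print(exposed, internal)
```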

**Chapter 6. Mathematical Model II: Dynamic Intentional Risk**

Zero-day attacks are not integrated in the model.

In static risk:

– The most important single attribute is Value. The value depends on the percentage of value accessible by the user.

– The attacker uses their authorised access.

– Anonymity is an important incentive. Lack of anonymity is a deterrent.

– Accessibility has no cost (the user is already authorised).

– There is a higher level of personal risk perception.

– The higher the number of users, the higher the attacker’s perceived anonymity.

In dynamic risk:

– The most important single attribute is accessibility.

– The degree of anonymity is not a deterrent (the user is not already authorised or known).

– The hacker tries to access the entire value.

– Typical values of anonymity: coming from the Internet, anonymity equals 1; from wireless, 0.5; from the intranet, 0.

Accessibility in Dynamic Risk

**Each jump of a non-authorised user from one element to another element increases the cost for the attacker. The greater the distance to the value, the more difficult and costly the attack is.**

Dynamic risk construction

First step: performing a vulnerability scan of the network to get all non-authorised paths (known vulnerabilities, open ports, application fingerprinting and so forth).

The vulnerability scanner used is Nessus.

Two types of potential connections:

– Affinities: Two nodes sharing e.g. OS, configurations and users.

– Vulnerabilities.

A modified version of the PageRank algorithm is used.

Dynamic Risk model

User network + admins network + affinities + vulnerabilities

**Anonymity does not play any role in Dynamic Risk; accessibility is the main parameter**.

Each edge has an associated weight. The dynamic risk of an element is the potential profit the attacker obtains by reaching that element. As anonymity is not relevant in the context of dynamic risk, it is not necessary to collapse its associated network.

The accessibility of an element of the Dynamic Risk Network is the value we get for the relative frequency of a biased random walker through that element.

– Dynamic risk = value x accessibility

– The dynamic risk of a network is the maximum dynamic risk value of its elements (interesting idea – why not the sum?)

– The dynamic risk average = the total value found in the vaults x the accessibility average (the root mean square of all accessibility values associated with elements of the network in the context of dynamic risk).
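The three dynamic-risk quantities listed above can be sketched as follows; the element values and accessibilities are illustrative:

```python
# Sketch: dynamic risk per element, for the network (the maximum, per the
# book), and the dynamic risk average (vault value x RMS of accessibilities).
def dynamic_risk(value, accessibility):
    return value * accessibility

elements = [(100, 0.5), (80, 0.9), (40, 0.2)]  # (value, accessibility) pairs

risks = [dynamic_risk(v, a) for v, a in elements]
network_risk = max(risks)  # the book takes the maximum, not the sum

vault_value = 100          # total value found in the vaults
accessibilities = [a for _, a in elements]
rms = (sum(a * a for a in accessibilities) / len(accessibilities)) ** 0.5
average_risk = vault_value * rms
print(network_risk, round(average_risk, 2))
```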

**Chapter 7. Towards the Implementation of the Model**

Source ports are not important in this model; they are mostly generated randomly.

Access levels: restricted and unrestricted.

The higher the level of privilege, the more information and functionality an attacker can access. Typically there are two types of accesses, based on different ports:

– Restricted end user access: Always authorised and mostly with low risk.

– Unrestricted technical access: Any access that allows a technical user or an external hacker to have unrestricted access to code, configuration or data. It can be authorised or gained via an exploit. It is high risk: using admin access in an application, an attacker can in most cases escalate privileges to gain control over the server and the network.

For static risk we need to find which accesses are already authorised and normal. The frequency of connections for each socket (especially for the frequently used sockets) informs about the busiest routes and how many hosts accessed a specific application.

For dynamic risk, we need to model the potential routes that a hacker might find and exploit. **For an attacker, sockets that are used normally are desirable since they are more anonymous**.

Attackers will select routes where they can obtain the most privileges with the least effort and get the closest to their end goal.

Other unknown risks are out of the scope of this proposal. This is a key point to understand.

**To calculate anonymity in the static risk network we need to collapse all the IP sources that connect to the same destination port**. Anonymity will be the inverse of the number of IP sources collapsed.
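The collapse step can be sketched in a few lines: group the distinct source IPs per destination socket and take the inverse of the count. The connection records are illustrative:

```python
# Sketch: anonymity label per destination socket as the inverse of the
# number of distinct collapsed source IPs (illustrative records).
connections = [
    ("10.0.0.1", "10.0.9.9:443"),
    ("10.0.0.2", "10.0.9.9:443"),
    ("10.0.0.3", "10.0.9.9:443"),
    ("10.0.0.1", "10.0.9.9:22"),
]

sources = {}
for src, dst in connections:
    sources.setdefault(dst, set()).add(src)  # collapse sources per destination

anonymity = {dst: 1 / len(srcs) for dst, srcs in sources.items()}
print(anonymity)
```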

Value: How much the data or functionality is worth for the attacker. It needs to be placed manually into those vault nodes.

And the book ends with the great news that the authors are working on a proof of concept.


*** This is a Security Bloggers Network syndicated blog from Security and risk authored by Unknown. Read the original post at: http://securityandrisk.blogspot.com/2016/05/book-review-intentional-risk-management.html