Further Enquiries

School of Computer and Mathematical Sciences
Ingkarni Wardli Building
The University of Adelaide
SA 5005
AUSTRALIA

Telephone: +61 8 8313 5586
Facsimile: +61 8 8313 4366

Algorithmic Game Theory and Cyber Security

Director: Dr Mingyu Guo

Application: Defending Active Directory Windows Domain Networks

Techniques: Fixed Parameter Analysis, Stackelberg Game, Graph Neural Network, Reinforcement Learning

Active Directory (AD) is the default security management system for Windows domain networks and is used by most large organisations worldwide, including our own university. An AD environment naturally describes a cyber attack graph in which nodes represent computers, user accounts, groups, etc., and directed edges represent accesses or known exploits: an edge from node A to node B means that an attacker with access to A can gain access to B. There are many open-source and commercial tools for generating, analysing and visualising AD graphs. These tools are used by real attackers for attack planning (e.g., BloodHound is a popular tool for generating "shortest attack paths") and by real defenders for defence planning, which often involves cleaning up unnecessary accesses that are potentially exploitable by attackers.
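
To make the graph view concrete, here is a minimal sketch (a hypothetical toy graph, not output from any real environment) that builds a small AD-style attack graph and asks for a BloodHound-style shortest attack path to the Domain Admin node; it assumes the networkx package.

    # Toy AD attack graph: nodes are computers/users/groups, directed edges are
    # accesses or known exploits ("an attacker on A can reach B").
    # Hypothetical example; requires the networkx package.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("USER_ALICE", "WORKSTATION_7"),   # Alice has a session on this machine
        ("WORKSTATION_7", "HELPDESK_GROUP"),
        ("HELPDESK_GROUP", "SERVER_12"),
        ("SERVER_12", "DOMAIN_ADMIN"),
        ("USER_ALICE", "SERVER_12"),       # a second, shorter route
    ])

    # A BloodHound-style query: shortest attack path from an entry node to Domain Admin.
    path = nx.shortest_path(G, source="USER_ALICE", target="DOMAIN_ADMIN")
    print(" -> ".join(path))  # USER_ALICE -> SERVER_12 -> DOMAIN_ADMIN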

We formulate the AD defence problem as Stackelberg games (on graphs) between attackers and defenders, and we exploit the structural features of practical AD graphs to design scalable defence algorithms. Example structural features include: 1) attack paths tend to be short, which echoes the observation from the Small World Experiment (i.e., any two individuals in the world are within six social connections of each other); 2) practical AD graphs are close to trees, where the tree roots are the Domain Admin nodes (i.e., the nodes with the highest privilege). We apply fixed parameter analysis to design scalable algorithms that can handle large graphs with tens of thousands of nodes.
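
As a deliberately naive illustration of the Stackelberg structure (toy graph and brute force only, not the fixed parameter algorithms described above): the defender leads by committing to a small budget of blocked edges, the attacker follows with a shortest surviving path, and the defender chooses the blocking plan that leaves the attacker worst off.

    # Naive Stackelberg sketch on a toy graph: the defender commits to blocking
    # `budget` edges, then the attacker best-responds with a shortest path from
    # the entry node to Domain Admin. Brute force only works because the example
    # is tiny; it is not the fixed-parameter approach described above.
    from itertools import combinations
    import networkx as nx

    edges = [("ENTRY", "A"), ("A", "DA"), ("ENTRY", "B"), ("B", "C"), ("C", "DA")]
    blockable = [("A", "DA"), ("B", "C"), ("C", "DA")]  # edges the defender may remove
    budget = 1

    def attacker_cost(blocked):
        G = nx.DiGraph(edges)
        G.remove_edges_from(blocked)
        try:
            return nx.shortest_path_length(G, "ENTRY", "DA")  # attacker's best response
        except nx.NetworkXNoPath:
            return float("inf")  # Domain Admin unreachable: ideal for the defender

    # Defender (leader) picks the blocking plan anticipating the attacker (follower).
    best = max(combinations(blockable, budget), key=attacker_cost)
    print(best, attacker_cost(best))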

Often, fixed parameter analysis alone is not enough to achieve practical scalability. We therefore combine the strengths of theoretical analysis with optimisation and learning techniques such as mixed integer programming, evolutionary computation, graph neural networks and reinforcement learning to achieve maximal scalability.
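
One hedged sketch of the mixed integer programming side (a toy formulation with hand-written inputs, not our production model): since practical attack paths are short, we can enumerate them and let a solver choose a budget-limited set of edges to block so that as many enumerated paths as possible are cut. The PuLP package is assumed.

    # Toy MIP: choose at most `budget` edges to block so that the largest number
    # of enumerated attack paths contain at least one blocked edge.
    # Illustrative formulation only; assumes the PuLP package is installed.
    from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

    paths = [  # enumerated (short) attack paths, each a list of edges
        [("ENTRY", "A"), ("A", "DA")],
        [("ENTRY", "B"), ("B", "C"), ("C", "DA")],
        [("ENTRY", "A"), ("A", "C"), ("C", "DA")],
    ]
    edges = sorted({e for p in paths for e in p})
    budget = 2

    block = {e: LpVariable(f"block_{i}", cat=LpBinary) for i, e in enumerate(edges)}
    cut = [LpVariable(f"cut_{i}", cat=LpBinary) for i in range(len(paths))]

    prob = LpProblem("ad_edge_blocking", LpMaximize)
    prob += lpSum(cut)                                  # objective: number of broken paths
    prob += lpSum(block.values()) <= budget             # defender's budget
    for i, p in enumerate(paths):
        prob += cut[i] <= lpSum(block[e] for e in p)    # path i is broken only if some edge on it is blocked

    prob.solve()
    print([e for e, v in block.items() if v.value() == 1])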

Selected Publications

Mechanism Design via Neural Networks

Mechanism design deals with economic systems involving multiple agents. A mechanism is a set of rules that maps the agents' preferences to social decisions. For example, a school allocation mechanism maps the students' preferences over schools to an assignment of students to schools. Sometimes the decision also involves payments. For example, an Internet advertisement auction mechanism maps the advertisers' preferences over different advertising slots to an arrangement of the advertisements, as well as the corresponding payments. Mechanism design studies how to design mechanism rules that optimise certain objectives while enforcing certain constraints. Typical objectives include welfare maximisation and revenue maximisation. A typical constraint is that the mechanism should encourage truthful reporting. It should be noted that the mechanism makes decisions based on the agents' reported preferences, so a manipulative agent may lie about his preference in order to reach a social decision that is better for himself but possibly bad for the others. A truthful mechanism ensures that under its rules, no agent can benefit by misreporting.
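
A classic concrete example of a truthful mechanism (standard textbook material, not specific to our work) is the second-price auction: the highest bidder wins and pays the second-highest bid, so no bidder can gain by misreporting. The small check below compares truthful bidding with several misreports for one bidder against fixed competing bids.

    # Second-price (Vickrey) auction: a standard example of a truthful mechanism.
    # The winner is the highest bidder and pays the second-highest bid.
    def second_price_utility(bids, me):
        """My utility (using my true value TRUE_VALUE) given everyone's submitted bids."""
        others = bids[:me] + bids[me + 1:]
        if bids[me] > max(others):
            return TRUE_VALUE - max(others)   # win, pay the second-highest bid
        return 0.0                            # lose, pay nothing

    TRUE_VALUE = 7.0
    other_bids = [5.0, 3.0]

    truthful = second_price_utility([TRUE_VALUE] + other_bids, me=0)
    for lie in [0.0, 4.0, 6.0, 8.0, 100.0]:   # misreports below and above the true value
        assert second_price_utility([lie] + other_bids, me=0) <= truthful
    print("no profitable misreport found; truthful utility =", truthful)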

Traditionally, economists have designed mechanisms manually. Automated mechanism design aims to automate the design process using computational techniques. The basic idea is to model the design process as a functional optimisation problem. Essentially, if we express the agents' preferences and the possible social decisions in numeric terms, then a mechanism is just a complex function whose domain and range are both multi-dimensional. This view of mechanism design motivates the application of neural networks to mechanism design.
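
As a hedged sketch of this "mechanism as a function" view (a made-up toy architecture, not any published design): a small network can map reported values for a single item to an allocation probability and a payment per agent, after which a design objective such as expected revenue can be optimised by ordinary gradient descent. PyTorch is assumed.

    # A mechanism viewed as a function: reported values in, allocation + payments out.
    # Toy illustration only (no truthfulness constraint is enforced here).
    import torch
    import torch.nn as nn

    class NeuralMechanism(nn.Module):
        def __init__(self, n_agents, hidden=32):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(n_agents, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
            self.alloc_head = nn.Linear(hidden, n_agents)   # allocation logits
            self.pay_head = nn.Linear(hidden, n_agents)     # payment fractions

        def forward(self, reports):
            h = self.body(reports)
            alloc = torch.softmax(self.alloc_head(h), dim=-1)        # one item, allocated probabilistically
            pay = torch.sigmoid(self.pay_head(h)) * reports * alloc  # pay at most the reported value when allocated
            return alloc, pay

    mech = NeuralMechanism(n_agents=3)
    reports = torch.rand(16, 3)            # a batch of reported valuations in [0, 1]
    alloc, pay = mech(reports)
    revenue = pay.sum(dim=-1).mean()       # one possible design objective
    (-revenue).backward()                  # gradient direction for revenue maximisation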

Neural network based mechanism design poses unique challenges. For some models, the optimal mechanisms involve iterative binary decisions, which makes gradient computation challenging. One idea we proposed is to "hard code" the analytical cumulative density function of the agents' prior distribution into the cost function to derive high-quality gradients. For worst-case design objectives, evaluating a neural network's worst-case performance is not scalable, so we are restricted to tiny networks that are often difficult to train. One idea we proposed is to rely on the Lottery Ticket Hypothesis to guide a systematic search process for tiny trainable networks.
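
A minimal, hypothetical illustration of the CDF idea, far simpler than the models in our papers: with one bidder whose value is drawn from Uniform[0, 1] and a one-parameter posted-price "mechanism", expected revenue is p(1 - F(p)); writing F analytically into the loss yields an exact, low-variance gradient rather than one estimated from sampled bids.

    # Hard-coding the analytical CDF of the prior into the cost function.
    # Toy setting: one bidder, value ~ Uniform[0, 1], mechanism = posted price p.
    # Expected revenue = p * (1 - F(p)) = p * (1 - p); its maximiser is p = 0.5.
    import torch

    p = torch.tensor(0.1, requires_grad=True)      # the "mechanism" is a single price
    opt = torch.optim.SGD([p], lr=0.1)

    def cdf_uniform(x):                            # analytical CDF of the Uniform[0, 1] prior
        return torch.clamp(x, 0.0, 1.0)

    for _ in range(200):
        opt.zero_grad()
        expected_revenue = p * (1.0 - cdf_uniform(p))   # exact objective, no sampling noise
        (-expected_revenue).backward()                  # maximise revenue
        opt.step()

    print(float(p))   # converges towards 0.5, the optimal posted price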

Selected Publications