
Machine Learning Training and Tutorials

This program is mainly intended as training for my students, but it is open to everyone who is interested. I hope it can propagate knowledge, inspire ideas, and foster innovation. I will be presenting the bulk of the program, and will try to make it valuable not only to students but also to postdocs and academics. Note that not all of the talks' content is written in the slides: roughly 40% is delivered on the whiteboard or verbally.
  1. Title: What is machine learning? From the shallow end to deep graph neural networks [pdf], 22 Nov. 2018. Speaker: Javen Shi.

    Abstract: I will cover the basics of machine learning. I will explain the concepts, theory, applications, and industry expectations. I will then move from traditional machine learning to deep learning. In particular, I will focus on DeepMind's latest work, deep graph neural networks (https://arxiv.org/abs/1806.01261), which can recover many very recent methods at the intersection of graphical models and deep learning. I will also share my thoughts on the challenges and opportunities. A toy message-passing sketch (Sketch 1) is included after this list.

  2. Title: Deep graph networks and Support Vector Machines [pdf1, pdf2], 29 Nov. 2018. Speaker: Javen Shi.

    Abstract: I will continue to cover DeepMind's graph networks (https://arxiv.org/abs/1806.01261), and fill in some background on graphical models. I will also cover Support Vector Machines and their related background, such as convexity and optimisation.

  3. Title: Support Vector Machines [pdf], 6 Dec. 2018. Speaker: Javen Shi.

    Abstract: I will continue to cover Support Vector Machines (SVMs), including the binary-class SVM and the one-class SVM (for novelty detection), and will briefly mention the multi-class SVM and the structured SVM. A toy sketch of the binary and one-class variants (Sketch 2) is included after this list.

  4. Title: Deep Generative Models --- Generative Adversarial Networks (GANs) and Beyond [pdf], 13 Dec. 2018. Speaker: Ehsan Abbasnejad.

    Abstract: Ehsan will cover Generative Adversarial Networks (GANs) and other deep generative models such as variational autoencoders (VAEs).

  5. Title: Uncertainty in Machine Learning [pdf], 20 Dec. 2018. Speaker: Ehsan Abbasnejad.

    Abstract: Machine learning has been successfully applied to a wide range of applications. However, state-of-the-art methods are not generally equipped with the means to quantify uncertainty. There are two main sources of uncertainty: in the data and in the assumptions about the model. For the latter, Bayesian methods are designed to address model uncertainty through explicit estimation of the distribution of the parameters. This is in contrast to the current practice of using a point estimate, which yields a single parameter value for the model. In this talk, we discuss various aspects of uncertainty in machine learning in general and deep learning in particular. A toy illustration of a point estimate versus a posterior (Sketch 3) is included after this list.

  6. Title: Memory Networks and Graph Attention Networks [pdf], 10 Jan. 2019. Speaker: Javen Shi.

    Abstract: Memory networks allow reasoning with long-term memory which can be read and written. They can also deal with variable-sized inputs (for example, videos with varying lengths), and can focus on the most relevant parts of the input to make decisions. They can also operate on graphs with additional attractive properties. I will explain the idea and essence of Memory Networks, with an example in face recognition to show how you may apply them to your own applications. I will also discuss Graph Attention Networks in depth, which are relevant to several ongoing and future projects in AIML. A toy graph-attention sketch (Sketch 4) is included after this list.

  7. Title: Introduction to Deep Reinforcement Learning [pdf], 17 Jan. 2019. Speaker: Ehsan Abbasnejad.

    Abstract: Deep reinforcement learning has gained significant attention in the past few years due to its tremendous success in various applications, most notably AlphaGo, which defeated the world champion. In this talk, we will briefly discuss the problem setup for reinforcement learning and how deep learning has been part of its success. A toy tabular Q-learning sketch (Sketch 5) is included after this list.

  8. Title: Combining Vision and Language [ppt], 24 Jan. 2019. Speaker: Qi Wu.

    Abstract: The fields of natural language processing (NLP) and computer vision (CV) have seen great advances in their respective goals of analysing and generating text, and of understanding images and videos. While both fields share a similar set of methods rooted in artificial intelligence and machine learning, they have historically developed separately. Recent years, however, have seen an upsurge of interest in problems that require a combination of linguistic and visual information. For example, image captioning and Visual Question Answering (VQA) are two important research topics in this area. Image captioning requires the machine to describe an image in human-readable sentences, while VQA asks a machine to answer language-based questions about the visual content. In this tutorial, I will first introduce the basic models and mechanisms in this area, such as the CNN-RNN model and the attention mechanism. Then I will outline some of the most recent progress and discuss trends in this field.

  9. Title: Probabilistic Graphical Models 1: Representation [pdf], 31 Jan. 2019. Speaker: Javen Shi.

    Abstract: I will start with the basics of probabilities, and introduce the history of Probabilistic Graphical Models (PGMs) and current trends. I will cover the basic concepts of PGMs such as the representation, the factorisation rules, and basic tasks. I will show how to reason about Bayesian networks by hand. I will try to make this talk self-contained. A toy worked example of a factorised Bayesian network (Sketch 6) is included after this list.

  10. Title: Probabilistic Graphical Models 2: Inference Basics [pdf], 7 Feb. 2019. Speaker: Javen Shi.

    Abstract: I will cover two basic types of inference tasks: marginal inference and MAP inference. I will also cover inference methods such as variable elimination, and the sum-product and max-product algorithms. A toy variable-elimination sketch (Sketch 7) is included after this list.

  11. Title: Probabilistic Graphical Models 3 and 4: Learning Parameters [pdf], Learning Structures [pdf], 14 Feb. 2019. Speaker: Javen Shi.

    Abstract: I will cover how to learn the parameters, starting with simple models such as Bayesian networks, and then moving on to Markov random fields, including techniques such as the structured SVM and conditional random fields. I will also cover how to learn structures, focusing on a classical method, the Chow-Liu tree algorithm. A toy structure-learning sketch (Sketch 8) is included after this list.
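
Toy Code Sketches

The Python sketches below are referenced from the abstracts above. They are minimal illustrations on made-up toy data, written for this page rather than taken from the talks or papers; all weights, names, and numbers are assumptions for demonstration only.

Sketch 1 (talks 1 and 2): one round of message passing on a tiny graph, in the spirit of the graph networks formalism (https://arxiv.org/abs/1806.01261). Random untrained weights stand in for learned update functions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy directed graph: 3 nodes, edges (sender -> receiver).
    senders = np.array([0, 1, 2])
    receivers = np.array([1, 2, 0])

    node_feats = rng.normal(size=(3, 4))  # a 4-d feature per node
    edge_feats = rng.normal(size=(3, 4))  # a 4-d feature per edge
    W_edge = rng.normal(size=(12, 4))     # edge-update weights (random, untrained)
    W_node = rng.normal(size=(8, 4))      # node-update weights (random, untrained)

    def relu(x):
        return np.maximum(x, 0.0)

    # 1) Edge update: each edge combines its own feature with its endpoints'.
    edge_in = np.concatenate(
        [edge_feats, node_feats[senders], node_feats[receivers]], axis=1)
    messages = relu(edge_in @ W_edge)

    # 2) Aggregation: sum incoming messages at each receiving node.
    agg = np.zeros_like(node_feats)
    np.add.at(agg, receivers, messages)

    # 3) Node update: combine old node features with aggregated messages.
    node_feats = relu(np.concatenate([node_feats, agg], axis=1) @ W_node)
    print(node_feats.shape)  # (3, 4): updated node representations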
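
Sketch 2 (talk 3): a binary SVM and a one-class SVM for novelty detection on toy Gaussian blobs, assuming scikit-learn is available.

    import numpy as np
    from sklearn.svm import SVC, OneClassSVM

    rng = np.random.default_rng(0)

    # Binary classification: two Gaussian blobs.
    X = np.vstack([rng.normal(-2, 1, size=(50, 2)),
                   rng.normal(+2, 1, size=(50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    clf = SVC(kernel="rbf", C=1.0).fit(X, y)  # soft-margin SVM, RBF kernel
    print(clf.predict([[-2, -2], [2, 2]]))    # expect [0 1]

    # Novelty detection: train on "normal" data only;
    # predict() returns +1 for inliers and -1 for novelties.
    normal = rng.normal(0, 1, size=(200, 2))
    nov = OneClassSVM(kernel="rbf", nu=0.05).fit(normal)
    print(nov.predict([[0, 0], [6, 6]]))      # expect [ 1 -1]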
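
Sketch 3 (talk 5): a point estimate versus a Bayesian posterior on a toy coin-flip (Beta-Bernoulli) model; the model choice is mine, not the talk's.

    import numpy as np

    heads, tails = 3, 1  # observed data: 3 heads, 1 tail

    # Point estimate (maximum likelihood): one number, no uncertainty.
    print("MLE:", heads / (heads + tails))  # 0.75

    # Bayesian treatment: with a Beta(1, 1) prior, the posterior over
    # the head probability is Beta(1 + heads, 1 + tails). Sampling it
    # shows how uncertain we remain after only four flips.
    rng = np.random.default_rng(0)
    samples = rng.beta(1 + heads, 1 + tails, size=100000)
    print("posterior mean:", samples.mean())              # about 0.67
    print("95% credible interval:", np.percentile(samples, [2.5, 97.5]))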
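
Sketch 4 (talk 6): a single graph-attention layer in the style of Graph Attention Networks, simplified to one head with random untrained parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    A = np.array([[1, 1, 0],     # adjacency with self-loops:
                  [1, 1, 1],     # node i attends over its neighbours j
                  [0, 1, 1]], dtype=float)
    H = rng.normal(size=(3, 4))  # input node features
    W = rng.normal(size=(4, 4))  # shared linear transform (untrained)
    a = rng.normal(size=(8,))    # attention vector (untrained)
    Z = H @ W

    def leaky_relu(x, slope=0.2):
        return np.where(x > 0, x, slope * x)

    # Unnormalised scores e_ij = LeakyReLU(a^T [z_i || z_j]) on edges only.
    scores = np.full((3, 3), -np.inf)
    for i in range(3):
        for j in range(3):
            if A[i, j] > 0:
                scores[i, j] = leaky_relu(a @ np.concatenate([Z[i], Z[j]]))

    # Softmax over each node's neighbourhood, then weighted combination.
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    print((alpha @ Z).shape)  # (3, 4): attended node representations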
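
Sketch 5 (talk 7): tabular Q-learning on a toy 5-state chain; deep reinforcement learning replaces the Q-table with a neural network.

    import numpy as np

    n_states, n_actions = 5, 2  # a chain of 5 states; actions: left, right
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.9, 0.3
    rng = np.random.default_rng(0)

    def step(s, a):
        """Move left (a=0) or right (a=1); reward 1 only at the far end."""
        s2 = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
        return s2, float(s2 == n_states - 1), s2 == n_states - 1

    for episode in range(500):
        s = 0
        for t in range(1000):  # cap episode length
            # epsilon-greedy action selection
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s2, r, done = step(s, a)
            # Q-learning update: bootstrap from the best next action
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
            if done:
                break

    print(Q.argmax(axis=1))  # expect 1 ("right") for states 0-3; state 4 is terminal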
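
Sketch 6 (talk 9): the classic rain/sprinkler/wet-grass Bayesian network (a standard textbook example, my choice), showing the factorisation P(R, S, W) = P(R) P(S | R) P(W | R, S) and a query answered by enumeration.

    # Conditional probability tables for the factorised joint.
    P_R = {True: 0.2, False: 0.8}
    P_S_given_R = {True: {True: 0.01, False: 0.99},
                   False: {True: 0.4, False: 0.6}}
    P_W_given_RS = {(True, True): 0.99, (True, False): 0.8,
                    (False, True): 0.9, (False, False): 0.0}

    def joint(r, s, w):
        """Factorised joint: P(R=r) P(S=s | R=r) P(W=w | R=r, S=s)."""
        pw = P_W_given_RS[(r, s)]
        return P_R[r] * P_S_given_R[r][s] * (pw if w else 1 - pw)

    # Query P(R = true | W = true) by enumeration: sum out S, then
    # normalise by the evidence P(W = true).
    bools = (True, False)
    num = sum(joint(True, s, True) for s in bools)
    den = sum(joint(r, s, True) for r in bools for s in bools)
    print("P(rain | wet grass) =", num / den)  # about 0.36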
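
Sketch 7 (talk 10): variable elimination on a chain A - B - C with toy potentials, checked against brute-force summation of the full joint.

    import numpy as np

    # Unnormalised potentials over binary variables on the chain.
    phi_A = np.array([1.0, 2.0])
    phi_AB = np.array([[3.0, 1.0],
                       [1.0, 3.0]])
    phi_BC = np.array([[2.0, 1.0],
                       [1.0, 2.0]])

    # Brute force: build the full joint, then sum over A and B.
    joint = phi_A[:, None, None] * phi_AB[:, :, None] * phi_BC[None, :, :]
    p_C_brute = joint.sum(axis=(0, 1))

    # Variable elimination: push the sums inside the product.
    m_A = phi_A @ phi_AB  # message to B: sum_A phi(A) phi(A,B)
    m_B = m_A @ phi_BC    # message to C: sum_B m_A(B) phi(B,C)

    print(np.allclose(p_C_brute, m_B))  # True: the same marginal
    print(m_B / m_B.sum())              # normalised P(C)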
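
Sketch 8 (talk 11): Chow-Liu structure learning on toy binary data: estimate pairwise mutual information, then keep a maximum-weight spanning tree.

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)

    # Toy binary data with chain dependence X0 -> X1 -> X2 -> X3.
    n = 5000
    X = np.zeros((n, 4), dtype=int)
    X[:, 0] = rng.random(n) < 0.5
    for j in range(1, 4):
        flip = rng.random(n) < 0.1  # each child copies its parent 90% of the time
        X[:, j] = np.where(flip, 1 - X[:, j - 1], X[:, j - 1])

    def mutual_info(x, y):
        """Plug-in estimate of I(X; Y) for binary variables, in nats."""
        mi = 0.0
        for a in (0, 1):
            for b in (0, 1):
                pxy = np.mean((x == a) & (y == b))
                if pxy > 0:
                    mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
        return mi

    # Sort pairs by mutual information, then add edges greedily while
    # skipping cycles (union-find): a maximum-weight spanning tree.
    edges = sorted(((mutual_info(X[:, i], X[:, j]), i, j)
                    for i, j in combinations(range(4), 2)), reverse=True)
    parent = list(range(4))
    def find(u):
        while parent[u] != u:
            u = parent[u]
        return u
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    print(tree)  # expect the chain edges (0,1), (1,2), (2,3) in some order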





University Courses



Past Tutorials

Probabilistic Graphical Models

  1. Representation [pdf], ACVT, UoA, April 15, 2011

  2. Inference [pdf], ACVT, UoA, May 6, 2011

  3. Learning [pdf], ACVT, UoA, May 27, 2011

  4. Sampling-based approximate inference [pdf], ACVT, UoA, June 10, 2011

  5. Temporal models [pdf], ACVT, UoA, August 12, 2011

Generalisation Bounds

  1. Basics [pdf], ACVT, UoA, April 13, 2012

  2. VC dimensions and bounds [pdf], ACVT, UoA, April 27, 2012

  3. Rademacher complexity and bounds [pdf], ACVT, UoA, August 17, 2012

  4. PAC-Bayesian bounds [pdf], ACVT, UoA, August 31, 2012

  5. Regret bounds for online learning [pdf], ACVT, UoA, November 2, 2012

Please email me if you find errors or typos in the slides.