Machine Learning Training and Tutorials

This program is mainly intended to provide training for my students, but it is open to everyone who is interested. I hope it can propagate knowledge, inspire ideas, and foster innovation. I will be presenting the bulk of the program, and will try to make it valuable not only to students but also to postdocs and academics. Note that not all of the content of the talks is written in the slides; roughly 40% is presented on the whiteboard or verbally.
  1. Title: What is machine learning? From the shallow end to deep graph neural networks [pdf], 22 Nov., 2018. Speaker: Javen Shi.

    Abstract: I will be covering the basics of machine learning. I will explain the concepts, theory, applications, and industry expectations. I will then move from traditional machine learning to deep learning. In particular, I will focus on DeepMind's latest work, deep graph neural networks (https://arxiv.org/abs/1806.01261), which can recover many recent methods at the intersection of graphical models and deep learning. I will also share my thoughts on the challenges and opportunities.

  2. Title: Deep graph networks and Support Vector Machines [pdf1, pdf2], 29 Nov., 2018. Speaker: Javen Shi.

    Abstract: I will continue to cover DeepMind's graph networks (https://arxiv.org/abs/1806.01261), and fill in some background on graphical models. I will also cover Support Vector Machines and related background such as convexity and optimisation.

  3. Title: Support Vector Machines [pdf], 6 Dec., 2018. Speaker: Javen Shi.

    Abstract: I will continue to cover Support Vector Machines (SVMs), including the binary-class SVM and the one-class SVM (for novelty detection), and will briefly mention the multi-class SVM and structured SVM.
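
    As a small illustration of the first two variants above, here is a sketch using scikit-learn on synthetic data; the library, kernels, and data sizes are my own illustrative choices rather than material from the slides.

      import numpy as np
      from sklearn.svm import SVC, OneClassSVM

      rng = np.random.RandomState(0)

      # Binary-class SVM on two synthetic Gaussian blobs.
      X = np.vstack([rng.randn(50, 2) + 2, rng.randn(50, 2) - 2])
      y = np.array([1] * 50 + [-1] * 50)
      clf = SVC(kernel="rbf", C=1.0).fit(X, y)
      print("binary SVM training accuracy:", clf.score(X, y))

      # One-class SVM for novelty detection: fit on "normal" data only,
      # then points far from it are flagged as outliers (-1).
      novelty = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(rng.randn(100, 2))
      print("predictions on shifted points:", novelty.predict(rng.randn(5, 2) + 4))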

  4. Title: Deep Generative Models --- Generative Adversarial Networks (GANs) and Beyond [pdf], 13 Dec., 2018. Speaker: Ehsan Abbasnejad.

    Abstract: Ehsan will cover Generative Adversarial Networks (GANs) and other deep generative models such as variational autoencoders (VAE).

  5. Title: Uncertainty in Machine Learning [pdf], 20 Dec. 2018. Speaker: Ehsan Abbasnejad.

    Abstract: Machine learning has been successfully applied to a wide range of applications. However, state-of-the-art methods are generally not equipped with the means to quantify uncertainty. There are two main sources of uncertainty: in the data and in the assumptions about the model. For the latter, Bayesian methods are designed to address model uncertainty through explicit estimation of the distribution of the parameters, as opposed to the current practice of using a point estimate, a single setting of the parameters, to explain the model. In this talk, we discuss various aspects of uncertainty in machine learning in general and deep learning in particular.
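
    As one concrete illustration of capturing model uncertainty beyond a point estimate, the sketch below uses Monte Carlo dropout (Gal and Ghahramani, 2016) to obtain a predictive mean and spread; the network, layer sizes, and number of samples are placeholders, not material from the talk.

      import torch
      import torch.nn as nn

      # A small regression network with dropout between layers.
      net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 1))

      x = torch.linspace(-3, 3, 10).unsqueeze(1)
      net.train()  # keep dropout active at test time to sample from an approximate posterior
      with torch.no_grad():
          samples = torch.stack([net(x) for _ in range(100)])  # 100 stochastic forward passes

      mean, std = samples.mean(dim=0), samples.std(dim=0)  # predictive mean and its spread
      print(mean.squeeze(), std.squeeze())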

  6. Title: Memory Networks and Graph Attention Networks [pdf], 10 Jan. 2019. Speaker: Javen Shi.

    Abstract: Memory networks allow reasoning with long-term memory that can be read and written. They can also deal with variable-sized inputs (for example, videos of varying lengths), and can focus on the most relevant parts of the input to make decisions. They can also operate on graphs with additional attractive properties. I will explain the idea and essence of memory networks, with an example in face recognition to show how you may apply them to your applications. I will also discuss Graph Attention Networks in depth, which are relevant to several ongoing and future projects in AIML.
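
    As a pointer to the second topic, below is a single-head sketch of the graph-attention layer of Velickovic et al. (2018): each node scores its neighbours, normalises the scores with a softmax, and aggregates their transformed features. All sizes and the adjacency matrix are made up for illustration.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      n_nodes, in_dim, out_dim = 4, 8, 6
      W = nn.Linear(in_dim, out_dim, bias=False)       # shared linear transform
      a = nn.Linear(2 * out_dim, 1, bias=False)        # attention scoring vector

      h = torch.randn(n_nodes, in_dim)                 # node features
      adj = torch.tensor([[1, 1, 0, 0],                # adjacency (with self-loops)
                          [1, 1, 1, 0],
                          [0, 1, 1, 1],
                          [0, 0, 1, 1]], dtype=torch.bool)

      Wh = W(h)                                                        # (N, out_dim)
      pairs = torch.cat([Wh.repeat_interleave(n_nodes, 0),
                         Wh.repeat(n_nodes, 1)], dim=1)                # all (i, j) pairs
      e = F.leaky_relu(a(pairs)).view(n_nodes, n_nodes)                # raw attention scores
      e = e.masked_fill(~adj, float("-inf"))                           # attend only to neighbours
      alpha = torch.softmax(e, dim=1)                                  # normalise per node
      h_new = alpha @ Wh                                               # attention-weighted aggregation
      print(h_new.shape)                                               # torch.Size([4, 6])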

  7. Title: Introduction to Deep Reinforcement Learning [pdf], 17 Jan. 2019. Speaker: Ehsan Abbasnejad.

    Abstract: Deep reinforcement learning has gained significant attention in the past few years due to its tremendous success in various applications, most notably AlphaGo, where the world champion was defeated by the AI. In this talk, we will briefly discuss the problem setup for reinforcement learning and how deep learning has been part of its success.
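
    To make the problem setup concrete, here is a toy sketch of tabular Q-learning on a hand-made five-state chain (states, actions, rewards, and a learned value table); it is deliberately not deep RL and not taken from the slides.

      import numpy as np

      n_states, n_actions = 5, 2            # actions: 0 = step left, 1 = step right
      Q = np.zeros((n_states, n_actions))   # tabular state-action value function
      alpha, gamma, eps = 0.1, 0.9, 0.1
      rng = np.random.RandomState(0)

      for episode in range(500):
          s = 0
          while s != n_states - 1:          # the rightmost state is terminal
              if rng.rand() < eps:          # epsilon-greedy exploration
                  a = rng.randint(n_actions)
              else:                         # greedy action, breaking ties randomly
                  a = rng.choice(np.flatnonzero(Q[s] == Q[s].max()))
              s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
              r = 1.0 if s_next == n_states - 1 else 0.0
              Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])  # TD update
              s = s_next

      print(Q)                              # "step right" should dominate in every state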

  8. Title: Combining Vision and Language [ppt], 24 Jan. 2019. Speaker: Qi Wu

    Abstract: The fields of natural language processing (NLP) and computer vision (CV) have seen great advances in their respective goals of analysing and generating text, and of understanding images and videos. While both fields share a similar set of methods rooted in artificial intelligence and machine learning, they have historically developed separately. Recent years, however, have seen an upsurge of interest in problems that require a combination of linguistic and visual information. For example, image captioning and visual question answering (VQA) are two important research topics in this area. Image captioning requires the machine to describe the image in human-readable sentences, while VQA asks a machine to answer language-based questions from the visual information. In this tutorial, I will first introduce the basic models and mechanisms in this area, such as the CNN-RNN model and the attention mechanism. Then I will outline some of the most recent progress and discuss the trends in this field.
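
    The sketch below shows the bare bones of the CNN-RNN pattern mentioned above: a CNN encodes the image into a feature vector that initialises an RNN language decoder trained with teacher forcing. The vocabulary size, feature dimensions, and random inputs are placeholders rather than values from the tutorial, and the attention mechanism is omitted for brevity.

      import torch
      import torch.nn as nn

      vocab_size, embed_dim, hidden_dim = 1000, 128, 256

      cnn = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, hidden_dim))
      embed = nn.Embedding(vocab_size, embed_dim)
      rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
      to_vocab = nn.Linear(hidden_dim, vocab_size)

      images = torch.randn(4, 3, 64, 64)              # a batch of 4 fake images
      captions = torch.randint(0, vocab_size, (4, 7)) # 7 ground-truth caption tokens each

      h0 = cnn(images).unsqueeze(0)                   # image feature initialises the RNN state
      out, _ = rnn(embed(captions), h0)               # teacher forcing over the caption tokens
      logits = to_vocab(out)                          # per-step distribution over the vocabulary
      print(logits.shape)                             # torch.Size([4, 7, 1000])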

  9. Title: Probabilistic Graphical Models 1: Representation [pdf], 31 Jan. 2019. Speaker: Javen Shi

    Abstract: I will start with the basics of probabilities and introduce the history of Probabilistic Graphical Models (PGMs) and current trends. I will cover the basic concepts of PGMs such as the representation, the factorisation rules, and the basic tasks. I will show how to reason with Bayesian networks by hand. I will try to make this talk self-contained.
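
    As a tiny worked example of the factorisation and hand reasoning mentioned above (my own illustration, with made-up probabilities): for the two-node network A -> B, the joint factorises as P(A, B) = P(A) P(B | A), and marginals and posteriors follow by summing out variables.

      import numpy as np

      p_a = np.array([0.7, 0.3])                 # P(A = 0), P(A = 1)
      p_b_given_a = np.array([[0.9, 0.1],        # P(B | A = 0)
                              [0.2, 0.8]])       # P(B | A = 1)

      joint = p_a[:, None] * p_b_given_a         # P(A, B) = P(A) P(B | A)
      p_b = joint.sum(axis=0)                    # marginal P(B): sum out A
      p_a_given_b1 = joint[:, 1] / p_b[1]        # posterior P(A | B = 1), i.e. Bayes' rule

      print("P(B) =", p_b)
      print("P(A | B = 1) =", p_a_given_b1)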

  10. Title: Probabilistic Graphical Models 2: Inference Basics [pdf], 7 Feb. 2019. Speaker: Javen Shi

    Abstract: I will cover two basic types of inference tasks: Marginal inference and MAP inference. I will also cover inference methods such as variable elimination, sum-product and max-product algorithms.
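
    Below is a minimal sketch of variable elimination on a chain A - B - C with made-up pairwise potentials: to obtain the marginal over C we sum out A, then B, multiplying in one factor at a time; MAP (max-product) inference follows the same pattern with max in place of sum.

      import numpy as np

      phi_ab = np.array([[1.0, 3.0], [2.0, 1.0]])   # pairwise potential over (A, B)
      phi_bc = np.array([[2.0, 1.0], [1.0, 4.0]])   # pairwise potential over (B, C)

      msg_b = phi_ab.sum(axis=0)                    # eliminate A: message to B
      tau_c = (msg_b[:, None] * phi_bc).sum(axis=0) # eliminate B: unnormalised marginal over C
      p_c = tau_c / tau_c.sum()                     # normalise

      # Max-product: same elimination order, with max in place of sum.
      map_c = (phi_ab.max(axis=0)[:, None] * phi_bc).max(axis=0).argmax()
      print("P(C) =", p_c, " MAP value of C =", map_c)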

  11. Title: Probabilistic Graphical Models 3 and 4: Learning Parameters [pdf], Learning Structures [pdf] , 14 Feb. 2019. Speaker: Javen Shi

    Abstract: I will cover how to learn the parameters, starting with simple models such as Bayesian networks, and then moving on to Markov random fields, including techniques such as Structured SVMs and Conditional Random Fields. I will also cover how to learn structures, focusing on a classical method, the Chow-Liu tree algorithm.
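
    The sketch below illustrates the Chow-Liu idea on random binary data (my own toy example, not from the slides): estimate pairwise mutual information from the samples, then keep a maximum-weight spanning tree over those weights.

      import numpy as np

      rng = np.random.RandomState(0)
      X = rng.randint(0, 2, size=(500, 4))     # 500 samples of 4 binary variables
      n = X.shape[1]

      def mutual_info(a, b):
          """Empirical mutual information between two binary columns."""
          mi = 0.0
          for va in (0, 1):
              for vb in (0, 1):
                  p_ab = np.mean((a == va) & (b == vb))
                  p_a, p_b = np.mean(a == va), np.mean(b == vb)
                  if p_ab > 0:
                      mi += p_ab * np.log(p_ab / (p_a * p_b))
          return mi

      # Pairwise mutual information as edge weights, heaviest first.
      edges = sorted(((mutual_info(X[:, i], X[:, j]), i, j)
                      for i in range(n) for j in range(i + 1, n)), reverse=True)

      # Kruskal-style greedy selection: keep the heaviest edges that do not create a cycle.
      parent = list(range(n))
      def find(u):
          while parent[u] != u:
              u = parent[u]
          return u

      tree = []
      for w, i, j in edges:
          ri, rj = find(i), find(j)
          if ri != rj:
              parent[ri] = rj
              tree.append((i, j))
      print("Chow-Liu tree edges:", tree)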

  12. Title: Recent Development in Semantic Image Segmentation using FCNs, 21 Feb. 2019. Speaker: Zifeng Wu

    Abstract: I will cover the original fully convolutional networks and the family of DeepLab networks, as well as several practical considerations in a nuclei segmentation task.

  13. Title: A tutorial on deep neural networks. From theory to code, 28 Feb. 2019. Speaker: Michele (Mike) Sasdelli

    Abstract: I will present all the basic ingredients of deep neural networks for computer vision, from perceptrons to modern CNN architectures. The presentation will be complemented with simple neural network examples written in PyTorch.
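
    In the same spirit (though the tutorial's own examples may differ), here is a tiny PyTorch sketch: a small CNN classifier and a single gradient step on random images and labels.

      import torch
      import torch.nn as nn

      model = nn.Sequential(
          nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
          nn.Flatten(), nn.Linear(32, 10),
      )
      optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
      criterion = nn.CrossEntropyLoss()

      images = torch.randn(8, 3, 32, 32)            # a batch of 8 random 32x32 RGB images
      labels = torch.randint(0, 10, (8,))           # random class labels in {0, ..., 9}

      loss = criterion(model(images), labels)
      optimiser.zero_grad()
      loss.backward()                               # backpropagation
      optimiser.step()                              # one parameter update
      print("loss after one step:", loss.item())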

  14. Title: Relational Reasoning and Relation Networks [pdf], 7 March 2019. Speaker: Javen Shi

    Abstract: Relational reasoning is a central component of general intelligence, but has been difficult for neural networks to perform and learn. I will cover recent advances in Relation Networks (RNs) that can work as a simple plug-and-play module to solve many problems that fundamentally hinge on relational reasoning.
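
    The core composition in Santoro et al.'s Relation Networks (2017) is RN(O) = f_phi( sum over all pairs (i, j) of g_theta(o_i, o_j) ); the sketch below wires this up for a handful of random object vectors, with all dimensions chosen arbitrarily for illustration.

      import torch
      import torch.nn as nn

      obj_dim, hidden, out_dim = 16, 64, 10
      g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU())  # relation over one pair
      f = nn.Sequential(nn.Linear(hidden, out_dim))                 # reasons over summed relations

      objects = torch.randn(5, obj_dim)                             # 5 "objects" (e.g. CNN features)
      pairs = torch.cat([
          torch.stack([objects[i] for i in range(5) for j in range(5)]),
          torch.stack([objects[j] for i in range(5) for j in range(5)]),
      ], dim=1)                                                     # all 25 ordered pairs, concatenated

      output = f(g(pairs).sum(dim=0))                               # sum over pairs, then apply f
      print(output.shape)                                           # torch.Size([10])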

  15. Title: Deep semi-supervised learning, 14 March 2019 (postponed to 21 March due to illness). Speaker: Lingqiao Liu

    Abstract: Semi-supervised learning is a long-standing problem in machine learning, and the resurgence of deep learning has inspired many new semi-supervised approaches for deep neural networks. In this talk, I will cover several recent developments in deep semi-supervised learning, including the temporal ensemble, the mean teacher, and virtual adversarial training.
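
    Of the methods listed, the mean teacher (Tarvainen and Valpola, 2017) is perhaps the easiest to sketch: the teacher is an exponential moving average (EMA) of the student, and unlabelled data contribute through a consistency loss between the two. The models and sizes below are placeholders, not the talk's code.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      student = nn.Linear(10, 3)
      teacher = nn.Linear(10, 3)
      teacher.load_state_dict(student.state_dict())   # start from the same weights

      x_unlabelled = torch.randn(16, 10)
      consistency = F.mse_loss(F.softmax(student(x_unlabelled), dim=1),
                               F.softmax(teacher(x_unlabelled), dim=1).detach())
      consistency.backward()                          # gradient flows only into the student

      ema_decay = 0.99
      with torch.no_grad():                           # teacher weights track the student by EMA
          for t_p, s_p in zip(teacher.parameters(), student.parameters()):
              t_p.mul_(ema_decay).add_(s_p, alpha=1 - ema_decay)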

  16. Title: Generalized Zero-Shot Learning - An Overview, 28 March 2019. Speaker: Rafael Felix

    Bio: Currently, I am a Ph.D. research student at the Australian Centre for Robotic Vision (ACRV) and the University of Adelaide, working under the supervision of Prof. Gustavo Carneiro and the co-supervision of Prof. Ian Reid. My research project aims to bring contributions to the fields of generalized zero-shot learning, zero-shot learning, and open-set recognition. Before that, I received an M.Sc. degree in Computer Engineering from the Universidade Presbiteriana Mackenzie (Brazil, 2015) and a B.Sc. in Information Systems from the Universidade Estadual de Montes Claros (Brazil, 2011). The talk is mainly based on the following papers: Xian, Yongqin, et al. "Zero-shot learning - a comprehensive evaluation of the good, the bad and the ugly." IEEE Transactions on Pattern Analysis and Machine Intelligence (2018); Felix, Rafael, et al. "Multi-modal cycle-consistent generalized zero-shot learning." Proceedings of the European Conference on Computer Vision (ECCV), 2018.

  17. Title: From R-CNN to YOLO: A review of the popular deep learning based object detectors [pdf1] [pdf2], 11 April 2019. Speaker: Hamid Rezatofighi

    Abstract: In this talk, I will present the trend of progress in recent object detection algorithms since the rise in popularity of deep neural networks. I will also discuss the pros and cons of each approach. If I have enough time, I will present some practical details about their implementations, such as the post-processing step and their regression losses, and also provide more insight into their limitations.
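
    As one concrete piece of that post-processing, below is a small sketch of greedy non-maximum suppression (NMS) over boxes in (x1, y1, x2, y2) format; the boxes, scores, and IoU threshold are illustrative values of my own.

      import numpy as np

      def iou(box, boxes):
          """IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format."""
          x1 = np.maximum(box[0], boxes[:, 0])
          y1 = np.maximum(box[1], boxes[:, 1])
          x2 = np.minimum(box[2], boxes[:, 2])
          y2 = np.minimum(box[3], boxes[:, 3])
          inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
          area_a = (box[2] - box[0]) * (box[3] - box[1])
          area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
          return inter / (area_a + area_b - inter)

      def nms(boxes, scores, iou_thresh=0.5):
          order = np.argsort(scores)[::-1]                # highest-scoring boxes first
          keep = []
          while len(order) > 0:
              best, rest = order[0], order[1:]
              keep.append(best)
              order = rest[iou(boxes[best], boxes[rest]) < iou_thresh]  # drop heavy overlaps
          return keep

      boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
      scores = np.array([0.9, 0.8, 0.7])
      print(nms(boxes, scores))                           # the second box is suppressed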





University Courses



Past Tutorials

Probabilistic Graphical Models

  1. Representation [pdf], ACVT, UoA, April 15, 2011

  2. Inference [pdf], ACVT, UoA, May 6, 2011

  3. Learning [pdf], ACVT, UoA, May 27, 2011

  4. Sampling-based approximate inference [pdf], ACVT, UoA, June 10, 2011

  5. Temporal models [pdf], ACVT, UoA, August 12, 2011

Generalisation Bounds

  1. Basics [pdf], ACVT, UoA, April 13, 2012

  2. VC dimensions and bounds [pdf], ACVT, UoA, April 27, 2012

  3. Rademacher complexity and bounds [pdf], ACVT, UoA, August 17, 2012

  4. PAC Bayesian bounds [pdf], ACVT, UoA, August 31, 2012

  5. Regret bounds for online learning [pdf], ACVT, UoA, Nov. 2, 2012

Please email me if you find errors or typos in the slides.