Machine Learning Training and Tutorials

This program is mainly to provide training for my students, but it is open to everyone who is interested. I hope it can propagate knowledge, inspire ideas and foster innovation. I will be presenting the bulk of the program, and will try to make it valuable not only to students, but also to postdocs and academics. Note that not all of the content of the talks is in the slides; roughly 40% is delivered on the whiteboard or verbally.
  1. Title: What is machine learning? From the shallow end to deep graph neural networks [pdf], 22 Nov. 2018. Speaker: Javen Shi.

    Abstract: I will be covering the basics of machine learning. I will explain the concepts, theory, applications, and industry's expectations. I will then move from traditional machine learning to deep learning. In particular, I will focus on DeepMind's latest work, deep graph neural networks (https://arxiv.org/abs/1806.01261), which can recover many very recent methods at the intersection of graphical models and deep learning. I will also share my thoughts on the challenges and opportunities.
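
    To make the graph networks idea concrete, here is a minimal sketch of one message-passing step in their spirit: update each edge from its endpoint nodes, then update each node from its aggregated incoming messages. This is my own illustration (all names, sizes and data are made up), not code from the paper.

```python
import torch
import torch.nn as nn

# One message-passing step: edges are updated from their endpoint nodes,
# then nodes are updated from the sum of their incoming edge messages.
class TinyGNBlock(nn.Module):
    def __init__(self, node_dim=8, edge_dim=8):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, edge_dim), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(node_dim + edge_dim, node_dim), nn.ReLU())

    def forward(self, nodes, edges, senders, receivers):
        # nodes: (N, node_dim); edges: (E, edge_dim)
        # senders/receivers: (E,) node indices for each edge
        edges = self.edge_mlp(torch.cat([nodes[senders], nodes[receivers], edges], dim=-1))
        agg = torch.zeros(nodes.shape[0], edges.shape[-1]).index_add(0, receivers, edges)
        nodes = self.node_mlp(torch.cat([nodes, agg], dim=-1))
        return nodes, edges

nodes, edges = TinyGNBlock()(torch.randn(4, 8), torch.randn(5, 8),
                             torch.tensor([0, 1, 2, 3, 0]), torch.tensor([1, 2, 3, 0, 2]))
```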

  2. Title: Deep graph networks and Support Vector Machines [pdf1, pdf2], 29 Nov. 2018. Speaker: Javen Shi.

    Abstract: I will continue to cover DeepMind's graph networks (https://arxiv.org/abs/1806.01261), and fill in some background on graphical models. I will also cover Support Vector Machines and related background such as convexity and optimisation.

  3. Title: Support Vector Machines [pdf], 6 Dec. 2018. Speaker: Javen Shi.

    Abstract: I will continue to cover Support Vector Machines (SVMs), including the binary SVM and the one-class SVM (for novelty detection), and will briefly mention multi-class SVMs and structured SVMs.
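
    As a concrete taster for the novelty-detection part, here is a minimal one-class SVM example using scikit-learn (my own illustration; the data and parameter values are made up):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(200, 2))             # "normal" data only
test = np.vstack([rng.normal(0.0, 1.0, size=(5, 2)),    # 5 normal points
                  rng.normal(6.0, 0.5, size=(5, 2))])   # 5 obvious outliers

# nu upper-bounds the fraction of training points treated as outliers.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(train)
print(clf.predict(test))  # +1 = looks normal, -1 = flagged as novel
```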

  4. Title: Deep Generative Models --- Generative Adversarial Networks (GANs) and Beyond [pdf], 13 Dec. 2018. Speaker: Ehsan Abbasnejad.

    Abstract: Ehsan will cover Generative Adversarial Networks (GANs) and other deep generative models such as variational autoencoders (VAE).

  5. Title: Uncertainty in Machine Learning [pdf], 20 Dec. 2018. Speaker: Ehsan Abbasnejad.

    Abstract: Machine learning has been successfully applied to a wide range of applications. However, state-of-the-art methods are not generally equipped with the means to express uncertainty. There are two main sources of uncertainty: in the data, and in the assumptions about the model. For the latter, Bayesian methods address model uncertainty by explicitly estimating a distribution over the parameters, as opposed to the current practice of using a point estimate, i.e. a single set of parameters to explain the data. In this talk, we discuss various aspects of uncertainty in machine learning in general and in deep learning in particular.
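
    One popular, cheap approximation to Bayesian model uncertainty is Monte Carlo dropout: keep dropout active at test time and average several stochastic forward passes. A minimal PyTorch sketch (my own illustration, not necessarily the methods covered in the talk):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 1))

x = torch.randn(10, 4)
model.train()  # keeps dropout stochastic (no weights are updated here)
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(50)])  # (50, 10, 1)
mean = samples.mean(0)  # the prediction
std = samples.std(0)    # spread across passes: a proxy for model uncertainty
```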

  6. Title: Memory Networks and Graph Attention Networks [pdf], 10 Jan. 2019. Speaker: Javen Shi.

    Abstract: Memory networks allow reasoning with a long-term memory that can be read and written. They can also deal with variable-sized inputs (for example, videos with varying lengths), and can focus on the most relevant parts of the input to make decisions. They can also operate on graphs with additional attractive properties. I will explain the idea and essence of memory networks, with an example in face recognition to show how you may apply them to your own applications. I will also discuss Graph Attention Networks in depth, which are relevant to several ongoing and future projects in AIML.

  7. Title: Introduction to Deep Reinforcement Learning [pdf], 17 Jan. 2019. Speaker: Ehsan Abbasnejad.

    Abstract: Deep reinforcement learning has gained significant attention in the past few years due to its tremendous success in various applications, most notably AlphaGo, which defeated the world champion in Go. In this talk, we will briefly discuss the problem setup for reinforcement learning and how deep learning has been part of its success.

  8. Title: Combining Vision and Language [ppt], 24 Jan. 2019. Speaker: Qi Wu

    Abstract: The fields of natural language processing (NLP) and computer vision (CV) have seen great advances in their respective goals of analysing and generating text, and of understanding images and videos. While both fields share a similar set of methods rooted in artificial intelligence and machine learning, they have historically developed separately. Recent years, however, have seen an upsurge of interest in problems that require a combination of linguistic and visual information. For example, image captioning and visual question answering (VQA) are two important research topics in this area. Image captioning requires the machine to describe an image using human-readable sentences, while VQA asks a machine to answer language-based questions based on the visual information. In this tutorial, I will first introduce the basic models and mechanisms in this area, such as the CNN-RNN model and the attention mechanism. Then I will outline some of the most recent progress and discuss trends in this field.

  9. Title: Probabilistic Graphical Models 1: Representation [pdf], 31 Jan. 2019. Speaker: Javen Shi

    Abstract: I will start with the basics of probability and introduce the history of Probabilistic Graphical Models (PGMs) and current trends. I will cover the basic concepts of PGMs such as the representation, the factorisation rules, and basic tasks. I will show how to reason about Bayesian networks by hand. I will try to make this talk self-contained.
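
    For concreteness, the central factorisation rule says the joint distribution of a Bayesian network is the product of one conditional per node given its parents; a small worked example (mine, not from the slides):

```latex
p(x_1,\dots,x_n) = \prod_{i=1}^{n} p\bigl(x_i \mid \mathrm{pa}(x_i)\bigr),
\qquad \text{e.g. for the chain } A \to B \to C:\quad
p(a,b,c) = p(a)\, p(b \mid a)\, p(c \mid b).
```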

  10. Title: Probabilistic Graphical Models 2: Inference Basics [pdf], 7 Feb. 2019. Speaker: Javen Shi

    Abstract: I will cover two basic types of inference tasks: Marginal inference and MAP inference. I will also cover inference methods such as variable elimination, sum-product and max-product algorithms.
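
    As a taster, here is marginal inference by variable elimination on a small chain A -> B -> C, written as a sum-product contraction in NumPy (my own illustration; the probability tables are made up):

```python
import numpy as np

p_a = np.array([0.6, 0.4])               # p(A)
p_b_given_a = np.array([[0.7, 0.3],      # p(B|A), rows indexed by A
                        [0.2, 0.8]])
p_c_given_b = np.array([[0.9, 0.1],      # p(C|B), rows indexed by B
                        [0.4, 0.6]])

# Eliminate A, then B: sum_a sum_b p(a) p(b|a) p(c|b) = p(c).
p_c = np.einsum("a,ab,bc->c", p_a, p_b_given_a, p_c_given_b)
print(p_c, p_c.sum())  # the marginal p(C); it sums to 1
```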

  11. Title: Probabilistic Graphical Models 3 and 4: Learning Parameters [pdf], Learning Structures [pdf], 14 Feb. 2019. Speaker: Javen Shi

    Abstract: I will cover how to learn the parameters, starting with simple models such as Bayesian networks, and then moving on to Markov random fields, including techniques such as structured SVMs and conditional random fields. I will also cover how to learn structures, focusing on a classical method, the Chow-Liu tree algorithm.
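
    A minimal sketch of the Chow-Liu idea: estimate pairwise mutual information from data, then take a maximum-weight spanning tree over it (my own illustration with made-up toy data):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mutual_info(x, y):
    """Empirical mutual information between two discrete samples."""
    joint = np.histogram2d(x, y, bins=(x.max() + 1, y.max() + 1))[0]
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return (joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum()

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 4))          # 4 binary variables
X[:, 1] = X[:, 0] ^ (rng.random(500) < 0.1)    # variable 1 depends on 0

n = X.shape[1]
W = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        W[i, j] = mutual_info(X[:, i], X[:, j])

# Maximum spanning tree over mutual information = minimum spanning tree
# over negated weights.
tree = minimum_spanning_tree(-W)
print(np.transpose(np.nonzero(tree.toarray())))  # edges of the learned tree
```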

  12. Title: Recent Development in Semantic Image Segmentation using FCNs, 21 Feb. 2019. Speaker: Zifeng Wu

    Abstract: I will cover the original fully convolutional networks and the family of DeepLab networks, as well as several practical considerations in a nuclei segmentation task.

  13. Title: A tutorial on deep neural networks. From theory to code, 28 Feb. 2019. Speaker: Michele (Mike) Sasdelli

    Abstract: I will present all the basic ingredients of deep neural networks for computer vision, from perceptrons to modern CNN architectures. The presentation will be complemented with simple neural network examples written in PyTorch.
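
    In the same spirit, a minimal PyTorch CNN for 28x28 grayscale images (my own sketch, not the talk's code; shapes and data are made up):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),   # 28 -> 14 -> 7 after the two poolings
)

x = torch.randn(8, 1, 28, 28)    # a fake batch of images
loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 10, (8,)))
loss.backward()                  # one backward pass on the fake batch
```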

  14. Title: Relational Reasoning and Relation Networks [pdf], 7 March 2019. Speaker: Javen Shi

    Abstract: Relational reasoning is a central component of general intelligence, but has been difficult for neural networks to perform and learn. I will cover recent advances in Relation Networks (RNs) that can work as a simple plug-and-play module to solve many problems that fundamentally hinge on relational reasoning.
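
    The core of a Relation Network fits in a few lines: a shared MLP g scores every ordered pair of "objects", the pair features are summed, and a second network f maps the sum to the output. A minimal PyTorch sketch (my own illustration; sizes are made up):

```python
import torch
import torch.nn as nn

class TinyRN(nn.Module):
    def __init__(self, obj_dim=8, hid=32, out=2):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hid), nn.ReLU())
        self.f = nn.Linear(hid, out)

    def forward(self, objs):                       # objs: (n, obj_dim)
        n = objs.shape[0]
        left = objs.repeat_interleave(n, dim=0)    # o_i of every pair (i, j)
        right = objs.repeat(n, 1)                  # o_j of every pair (i, j)
        pair_feats = self.g(torch.cat([left, right], dim=-1))
        return self.f(pair_feats.sum(dim=0))       # sum over pairs, then f

out = TinyRN()(torch.randn(5, 8))  # 5 objects in, one relational output
```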

  15. Title: Deep semi-supervised learning, 14 March 2019 (postponed to 21 March due to illness). Speaker: Lingqiao Liu

    Abstract: Semi-supervised learning is a long-standing problem in machine learning, and the resurgence of deep learning has inspired many new semi-supervised approaches for deep neural networks. In this talk, I will cover several recent developments in deep semi-supervised learning, including temporal ensembling, mean teacher, and virtual adversarial training.
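
    To give the flavour of one of these methods, here is the core of mean teacher: the teacher's weights are an exponential moving average (EMA) of the student's, and unlabelled data contributes a consistency loss between the two models' predictions. A minimal sketch (my own illustration; a real run would add a supervised loss on the labelled data):

```python
import copy
import torch
import torch.nn as nn

student = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.SGD(student.parameters(), lr=0.1)
x_unlabelled = torch.randn(16, 10)

# Consistency loss on unlabelled data: the student should match the teacher.
loss = ((student(x_unlabelled) - teacher(x_unlabelled)) ** 2).mean()
loss.backward()
opt.step()

ema = 0.99  # after each step, the teacher drifts slowly toward the student
with torch.no_grad():
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(ema).add_(ps, alpha=1 - ema)
```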

  16. Title: Generalized Zero-Shot Learning - An Overview, 28 March 2019. Speaker: Rafael Félix

    Bio: Currently, I am a Ph.D. research student at the Australian Centre for Robotic Vision (ACRV) and the University of Adelaide, working under the supervision of Prof. Gustavo Carneiro and the co-supervision of Prof. Ian Reid. My research project aims to contribute to the fields of generalized zero-shot learning, zero-shot learning, and open-set recognition. Before that, I received an M.Sc. degree in Computer Engineering from the Universidade Presbiteriana Mackenzie (Brazil, 2015) and a B.Sc. in Information Systems from the Universidade Estadual de Montes Claros (Brazil, 2011). The talk is mainly based on the following papers: Xian, Yongqin, et al. "Zero-shot learning - a comprehensive evaluation of the good, the bad and the ugly." IEEE Transactions on Pattern Analysis and Machine Intelligence (2018); and Felix, Rafael, et al. "Multi-modal cycle-consistent generalized zero-shot learning." Proceedings of the European Conference on Computer Vision (ECCV), 2018.

  17. Title: From R-CNN to YOLO: A review over the popular deep learning based object detectors [pdf1] [pdf2], 11 April 2019. Speaker: Hamid Rezatofighi

    Abstract: In this talk, I will present the progression of recent object detection algorithms since the rise in popularity of deep neural networks. I will also discuss the pros and cons of each approach. If I have enough time, I will present some practical details about their implementations, such as the post-processing step and their regression losses, and also provide more insights about their limitations.
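
    As a taste of the post-processing step shared by most of these detectors, here is greedy non-maximum suppression (NMS) in NumPy (my own sketch; the boxes and scores are made up):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS; boxes are (N, 4) arrays of x1, y1, x2, y2."""
    order = scores.argsort()[::-1]         # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # drop overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))  # [0, 2]: box 1 is suppressed
```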

  18. Title: Variational Optimisation, Evolution Strategies and Sampling [notes], 2 May 2019. Speaker: Javen Shi

    Abstract: How do we optimise a non-differentiable objective function (e.g. for a deep neural network) or even a 'black-box' objective function (whose mathematical form you may not know, but whose value you can evaluate for a given input)? Variational optimisation and evolution strategies can do so. As they involve sampling, I will cover some sampling techniques as well.
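
    A minimal evolution-strategies sketch: perturb the parameters with Gaussian noise, evaluate the black-box objective at each perturbation, and move in the fitness-weighted average direction (my own illustration with a made-up objective):

```python
import numpy as np

def f(theta):
    """A 'black-box' objective: we can evaluate it but not differentiate it."""
    return -np.sum((theta - 3.0) ** 2)   # maximised at theta = 3 everywhere

rng = np.random.default_rng(0)
theta = np.zeros(5)
sigma, lr, pop = 0.1, 0.03, 50

for step in range(300):
    eps = rng.standard_normal((pop, theta.size))             # noise directions
    fitness = np.array([f(theta + sigma * e) for e in eps])  # evaluate each
    fitness = (fitness - fitness.mean()) / (fitness.std() + 1e-8)  # baseline
    theta += lr / (pop * sigma) * eps.T @ fitness            # ES update

print(theta.round(2))  # roughly 3 in every coordinate
```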

  19. Title: Recent Advances in Neural Architecture Search [ppt], 9 May 2019. Speaker: Hao Chen

    Abstract: The goal of neural architecture search (NAS) is to automate the design of artificial neural networks. We have seen such methods surpass manually designed models in various areas such as language modelling, image classification, super-resolution, segmentation and detection. In this tutorial, I will cover the methods underlying the current state of the art in this fast-paced field, and introduce some speed-up strategies under reasonable computation limits.

  20. Title: Multi-Interaction with Charge Definition for Charge Prediction Based on Memory Network, 16 May 2019. Speaker: Liangyi Kang

    Abstract: Charge prediction, determining the charges for a fact description in criminal cases, plays a significant role in legal assistant systems. Existing works on charge prediction are usually based on a classification framework. However, the charge characteristics of a fact description are non-discriminative, since the description is lengthy and neither uniform nor normative, which prevents it from representing the facts accurately, especially in few-shot settings. To address this problem, we introduce the charge definitions of criminal law into charge prediction to make the non-discriminative fact representation more charge-like. In particular, we design a novel framework, the Multi-Interaction Memory Network, which normalises the fact representation at the context, text and word levels simultaneously to form a charge-like representation. Experimental results on the CAIL2018 dataset show that our model achieves significant improvements over the baselines, especially on few-shot classes. We also use visualization to show the effectiveness of our fact representation.

  21. Title: Attention Is All You Need, 16 May 2019. Speaker: Amin Parvaneh

    Abstract: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. The Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, can beat complex recurrent or convolutional models on many tasks. Amin will cover this famous work and its follow-ups.
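
    The paper's central equation, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, fits in a few lines of PyTorch; the full model adds multi-head projections, masking and positional encodings. A minimal sketch (the shapes are made up):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """softmax(QK^T / sqrt(d_k)) V, batched over the leading dimension."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v

q = torch.randn(2, 5, 16)   # (batch, queries, d_k)
k = torch.randn(2, 7, 16)   # (batch, keys, d_k)
v = torch.randn(2, 7, 32)   # (batch, keys, d_v)
out = scaled_dot_product_attention(q, k, v)  # (2, 5, 32)
```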

  22. Title: Predicting soccer players' future trajectories and moves, 6 June 2019. Speaker: Anthony Manchin

    Abstract: Anthony will give an overview of the pilot project in soccer that we started with the Australian Institute of Sport, on predicting defensive team players' future trajectories and moves, and discuss relevant techniques.

  23. Title: An Introduction to Imitation Learning, 13 June 2019. Speaker: Ehsan Abbasnejad

    Abstract: In this talk, we present a broad overview of imitation learning and applications. Imitation learning, also known as learning from demonstrations or apprenticeship learning, has recently gained attention due to better learning approaches and novel applications brought about by deep learning. Imitation learning bridges the gap between supervised learning and reinforcement learning.

  24. Title: DeepSightX using machine learning supported by best practice of geoscience to predict where to drill, 27 June 2019. Speakers: DeepSightX Team

    Abstract: Earlier this year, a team of machine learning scientists, engineers and geoscientists from the University of Adelaide came together with industry experts to form DeepSightX, and entered the Explorer Challenge, a global competition where data is used to predict mineral deposits in South Australia. This was the start of something unique and innovative: a group using AI supported by best-practice geoscience to predict where to drill. We will share with you the competition, what we achieved, and perhaps some behind-the-scenes stories.

  25. Title: Applying Machine Learning to Industry at Consilium Technology, 4 July 2019. Speaker: Sebastien Wong

    Abstract: Consilium Technology is an Adelaide-based machine learning and AI company established long before the recent rise of AI. It has many applications in defence, agtech, mining and so on, and has been growing fast over the past few years. Consilium Technology's revolutionary AgTech product, GAIA, was the only nominee to win across all three categories at the 2019 State iAwards: 1) Core Category Winner - Industrial & Primary Industries; 2) Cross Technology Category Winner - Automation Technologies Innovation of the Year; 3) Cross Stage Category Winner - Research & Development Project of the Year. GAIA recently completed the National Vineyard Scan, identifying and mapping all of Australia's vineyards using AI and satellite imagery. The use of this technology to map crop type for an entire continent is a world-first achievement. Sebastien will introduce GAIA among other products and explain how they apply machine learning to solve real industry problems.

  26. Title: Data assimilation and online social network analysis, 18 July 2019. Speaker: Lewis Mitchell

    Abstract: This will be a talk in two parts. In the first part, I will give an overview of ensemble data assimilation. This is the method by which organisations like the Bureau of Meteorology blend observations from weather stations with physics-based weather models to produce improved initial conditions for their numerical weather forecasts. There are some parallels with neural networks, which I will explain. In the second part, I will describe some of my recent computational social science research on online social networks, in particular how entropy-based measures can be used to place upper bounds on the predictability achievable by ML algorithms in online social systems. I will conclude with some potential ideas for future ML research in this direction.
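
    For a flavour of the first part, here is a toy analysis step of an ensemble Kalman filter, one common ensemble data assimilation scheme: blend an ensemble of model forecasts with a noisy observation via a gain computed from the ensemble covariance (my own illustration; all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_state = 50, 3
ensemble = rng.normal(1.0, 0.5, size=(n_ens, n_state))  # forecast ensemble
H = np.array([[1.0, 0.0, 0.0]])                         # we observe variable 0
R = np.array([[0.1]])                                   # observation noise var.
y = np.array([2.0])                                     # the observation

X = ensemble - ensemble.mean(0)                         # ensemble anomalies
P = X.T @ X / (n_ens - 1)                               # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)            # Kalman gain

# Perturbed-observation update: each member assimilates a jittered copy of y.
for i in range(n_ens):
    innovation = y + rng.normal(0, np.sqrt(R[0, 0]), 1) - H @ ensemble[i]
    ensemble[i] += (K @ innovation).ravel()

print(ensemble.mean(0))  # analysis mean, pulled toward the observation
```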

  27. Title: Robotics and IoT, 25 July 2019. Speaker: Tien-Fu Lu

    Abstract: Tien-Fu has a deep mechanical engineering background and leads a robotics lab that builds robots from scratch (instead of buying them), including exoskeletons. He builds both robotic arms as big as a bus (for mining) and small robots with nanoscale control (for surgery, medical applications, ...). He also builds IoT solutions from scratch for agtech, and is involved with us in the Wine Australia project on the IoT solution, leak detection and more.

  28. Title: Memorizing Normality to Detect Anomaly -- Memory-augmented Deep Autoencoder (MemAE) for Unsupervised Anomaly Detection, 1 Aug. 2019. Speaker: Dong Gong

    Abstract: This talk mainly covers some basic concepts of memory-augmented deep models and one of our recent works, a memory-augmented deep autoencoder (MemAE) for unsupervised anomaly detection. Deep autoencoders have been extensively used for anomaly detection: trained on normal data, an autoencoder is expected to produce a higher reconstruction error for abnormal inputs than for normal ones, which is adopted as a criterion for identifying anomalies. However, this assumption does not always hold in practice. It has been observed that sometimes the autoencoder "generalizes" so well that it can also reconstruct anomalies well, leading to missed detections. To mitigate this drawback of autoencoder-based anomaly detectors, we propose to augment the autoencoder with a memory module, yielding an improved model called the memory-augmented autoencoder, i.e. MemAE. Given an input, MemAE first obtains the encoding from the encoder and then uses it as a query to retrieve the most relevant memory items for reconstruction. At the training stage, the memory contents are updated and encouraged to represent the prototypical elements of the normal data. At the test stage, the learned memory is fixed, and the reconstruction is obtained from a few selected memory records of the normal data. The reconstruction will thus tend to be close to a normal sample, so the reconstruction errors on anomalies will be strengthened for anomaly detection. MemAE is free of assumptions on the data type and can thus be applied to different tasks. Experiments on various datasets demonstrate the generalization and effectiveness of the proposed MemAE.
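
    The gist of the memory module in a few lines: the encoding is used only as a query over a small dictionary of learned prototypes, and the decoder reconstructs from a sparse combination of them. A minimal sketch of the idea (my own simplification of MemAE, with made-up sizes; the paper's shrinkage and loss details differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMemAE(nn.Module):
    def __init__(self, in_dim=32, code_dim=8, mem_size=20, shrink=0.02):
        super().__init__()
        self.enc = nn.Linear(in_dim, code_dim)
        self.dec = nn.Linear(code_dim, in_dim)
        self.memory = nn.Parameter(torch.randn(mem_size, code_dim))
        self.shrink = shrink

    def forward(self, x):
        z = self.enc(x)                            # the query
        w = F.softmax(F.cosine_similarity(         # addressing weights
            z.unsqueeze(1), self.memory.unsqueeze(0), dim=-1), dim=-1)
        w = F.relu(w - self.shrink)                # hard shrinkage -> sparse
        w = w / (w.sum(-1, keepdim=True) + 1e-12)  # re-normalise
        return self.dec(w @ self.memory)           # decode from memory only

x = torch.randn(4, 32)
recon = TinyMemAE()(x)
anomaly_score = ((x - recon) ** 2).mean(-1)  # large for inputs unlike memory
```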

  29. Title: Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution, 15 Aug. 2019. Speaker: Javen Shi

    Abstract: I will share with you Judea Pearl's thoughts on mainstream machine learning methods and on causality (not yet so mainstream). Current machine learning systems operate, almost exclusively, in a statistical, or model-free, mode, which entails severe theoretical limits on their power and performance. Such systems cannot reason about interventions and retrospection and, therefore, cannot serve as the basis for strong AI. To achieve human-level intelligence, learning machines need the guidance of a model of reality, similar to the ones used in causal inference tasks. To demonstrate the essential role of such models, seven tasks are presented which are beyond the reach of current machine learning systems and which have been accomplished using the tools of causal modeling.

  30. Title: Exploring The Timber Industry With Machine Learning, 22 Aug. 2019. Speaker: Anthony Manchin

    Abstract: The timber industry contributed more than $16 billion to the Australian economy last year and is a truly fascinating industry. Sawmills, which process green logs, are full of mechanical engineering, sensors, and robotics. This places them in a unique position to capitalise on state-of-the-art advances in machine learning to improve efficiencies and optimise various workflows. This talk will discuss some of the opportunities for research in an industry that has traditionally been very mechanical... until now.

  31. Title: A critique of pure learning and what artificial neural networks can learn from animal brains (by Anthony M. Zador, Nature Communications, 2019), 29 Aug. 2019. Speaker: Javen Shi

    Abstract: Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better supervised learning algorithms. However, in stark contrast to young animals (including humans), training such networks requires enormous numbers of labeled examples, leading to the belief that animals must rely instead mainly on unsupervised learning. Here we argue that most animal behavior is not the result of clever learning algorithms --- supervised or unsupervised --- but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a ''genomic bottleneck''. The genomic bottleneck suggests a path toward ANNs capable of rapid learning.

University Courses



Past Tutorials

Probabilistic Graphical Models

  1. Representation [pdf], ACVT, UoA, April 15, 2011

  2. Inference [pdf], ACVT, UoA, May 6, 2011

  3. Learning [pdf], ACVT, UoA, May 27, 2011

  4. Sampling-based approximate inference [pdf], ACVT, UoA, June 10, 2011

  5. Temporal models [pdf], ACVT, UoA, August 12, 2011

Generalisation Bounds

  1. Basics [pdf], ACVT, UoA, April 13, 2012

  2. VC dimensions and bounds [pdf], ACVT, UoA, April 27, 2012

  3. Rademacher complexity and bounds [pdf], ACVT, UoA, August 17, 2012

  4. PAC Bayesian bounds [pdf], ACVT, UoA, August 31, 2012

  5. Regret bounds for online learning [pdf], ACVT, UoA, Nov. 2, 2012

Please email me if you find errors or typos in the slides.