UNDER THE HOOD
Understanding Deep Neural Networks & AI

"We're at an unprecedented point in human history where artificially intelligent machines could soon be making decisions that affect many aspects of our lives. But what if we don't know how they reached their decisions? Would it matter?"

Marianne Lehnis, Technology of Business reporter, BBC. 

In this open lecture series, you will get the opportunity to understand what AI, and specifically Deep Neural Networks, can do and how they function. Some of the best researchers at Campus UiO, Gaustadbekkdalen, will share their insights and be open for questions on these topics. Our goal is to facilitate a broader discussion between academic researchers, industry and public-sector practitioners, and entrepreneurs.
  • TARGET AUDIENCE - These lectures are intended for researchers/practitioners who already have a good understanding of, and preferably some practical experience with, Neural Networks.
  • TIME - Tuesdays 14:00-16:00 (with the exception of the first lecture, which is on a Wednesday, and the third, which is on a Thursday)
  • FORMAT - 2 hours with a short break in the middle
  • LOCATION - StartupLab / Oslo Science Park

SCHEDULE

"History and Evolution of (Deep) Neural Networks: what could they do before, what can they do now"

Prof. Ole Christian Lingjærde, UiO

A talk on the evolution of Neural Networks (and later Deep Neural Networks), with a focus on which properties can, and which cannot, be modelled with these methods.

Tuesday Aug 27

"A mathematical introduction to deep learning (DL): what do neural nets actually learn?"

Dr. Anders Hansen, UiO/Cambridge

DL has had unprecedented success and has revolutionised artificial intelligence (AI). However, despite this success, DL methods have a serious Achilles heel: they are universally unstable and thus demonstrate highly non-human-like behaviour. This has serious consequences, and Science recently published a paper warning about the potentially fatal consequences. The question is: why do AI algorithms based on deep learning become unstable and perform so differently from humans? Current mathematical theory on neural networks cannot explain this. We will demonstrate the reason for this discrepancy: neural networks do not learn the structures that humans learn, but completely different structures. These different (false) structures correlate well with the original structures that humans learn, hence the success; however, they are completely unstable, yielding non-human performance.
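
As a rough, illustrative sketch of the kind of instability referred to above (a toy linear classifier and random data in NumPy; this is an assumption for illustration, not the lecturer's code or results): in high dimensions, a tiny, gradient-aligned change to every input coordinate is enough to flip a confident prediction.

```python
# Illustrative sketch, not code from the lecture: for a toy linear classifier,
# the max-norm perturbation needed to flip a prediction is small relative to the
# input scale -- the basic mechanism behind adversarial instability.
import numpy as np

rng = np.random.default_rng(0)
d = 1000                                  # input dimension ("number of pixels")

w = rng.normal(size=d)                    # weights of a toy "trained" linear model
x = rng.normal(size=d)                    # an input it classifies with some margin
score = w @ x
print("original score:", round(score, 2), "-> class", int(np.sign(score)))

# Smallest per-coordinate (max-norm) step that crosses the decision boundary,
# taken along the sign of the gradient (FGSM-style).
eps = abs(score) / np.sum(np.abs(w))
x_adv = x - 1.01 * eps * np.sign(score) * np.sign(w)

print("perturbed score:", round(w @ x_adv, 2), "-> class", int(np.sign(w @ x_adv)))
print("per-coordinate change:", round(1.01 * eps, 4),
      "vs typical |x_i| of about", round(float(np.mean(np.abs(x))), 2))
```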

"Compressed sensing (CS) and its relation to neural networks and deep learning"

Dr. Anders Hansen, UiO/Cambridge

CS transformed medical imaging through the 2017 approval by the US Food and Drug Administration (FDA) of CS techniques in Magnetic Resonance Imaging (MRI), resulting in widespread use (scanners at the UiO hospitals are now run with CS). However, the success of deep learning (DL) has sparked a vast interest in the question: can DL outperform CS in image reconstruction? Indeed, in 2018 Nature published the paper "Image reconstruction by domain transform manifold learning", representing the tip of the iceberg of DL methods promising improved performance and "... observed superior immunity to noise…". Despite the promise, DL also becomes completely unstable for image reconstruction, as in the classification problem (however, for completely different reasons). The phenomenon can be understood by linking CS and DL. In fact, CS may be viewed as a constructive way, without any learning, to build stable neural networks for image reconstruction. The question then becomes: why do trained neural networks based on DL become unstable, yet the constructed (untrained) networks based on CS remain stable?
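
A minimal sketch of the CS-to-network link mentioned above (a 1-D toy problem with a random Gaussian measurement matrix; everything here is a simplified assumption, not the lecture's material): the classical ISTA iteration for sparse recovery has exactly the shape of a neural-network layer, a fixed linear map followed by a nonlinearity, yet nothing in it is learned.

```python
# Sketch under simplified assumptions (not lecture code): recover a sparse signal
# from few random measurements with ISTA. Each iteration is a fixed linear map
# followed by soft-thresholding -- an "untrained layer".
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 80, 5                       # signal length, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
y = A @ x_true                             # m << n linear measurements

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):                       # 500 identical, untrained "layers"
    x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```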

Tuesday Sept 3

"Will the box always be black for some people? Regulation and the limits of AI explanations"


Gudbrand Eggen, Data Scientist and Lab Lead at StartupLab

An “invisible hand” of algorithms is influencing us ever more. Some algorithms use our personal data, while other algorithms use the data of people similar to us. How can current and future regulation help us understand and control how we are influenced? Will such regulation let the most unscrupulous actors dominate the field? Is it always possible to explain why an algorithm made a decision in a way that makes sense to the person affected by it? Can such explanations be verified? Might such explanations reveal trade secrets? Also, will we be able to predict whether actions we take will cause an algorithm to make a different decision about us in the future?

"Bayesian approaches to neural networks"

Prof. Geir Storvik, UiO

Use of neural networks, both deep and shallow, requires learning a large number of parameters (weights) from observed data. Due to limited amounts of data, there is a danger of overfitting. Further, the amount of uncertainty in the estimated parameters can have a huge effect on prediction performance. In this seminar we will discuss the Bayesian approach to learning and fitting neural networks. Both the benefits of such approaches and the conceptual as well as computational challenges will be discussed.
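
A minimal sketch of the Bayesian idea in the simplest possible setting (conjugate Bayesian regression on fixed random tanh features, which can be read as a one-hidden-layer network where only the output weights are treated as random; the data and prior values are assumptions for illustration, not the lecture's material): instead of a single weight estimate, we obtain a posterior over weights, and every prediction comes with an explicit uncertainty.

```python
# Sketch with illustrative assumptions only: Bayesian treatment of the output
# weights of a tiny "network" (fixed random tanh features + linear readout).
import numpy as np

rng = np.random.default_rng(2)

# Small, noisy data set -- exactly the regime where overfitting threatens.
x_train = rng.uniform(-3, 3, size=20)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=20)

w_in = rng.normal(size=50)                    # input->hidden weights (fixed, not learned)
b_in = rng.normal(size=50)                    # hidden biases (fixed, not learned)
def features(x):
    return np.tanh(np.outer(x, w_in) + b_in)  # shape (len(x), 50)

Phi = features(x_train)
alpha, beta = 1.0, 100.0                      # prior precision, noise precision

# Conjugate Gaussian posterior over the readout weights.
cov = np.linalg.inv(alpha * np.eye(50) + beta * Phi.T @ Phi)
mean = beta * cov @ Phi.T @ y_train

# Posterior predictive mean and standard deviation on a test grid.
x_test = np.linspace(-6, 6, 7)
Phi_t = features(x_test)
pred_mean = Phi_t @ mean
pred_std = np.sqrt(1.0 / beta + np.sum(Phi_t @ cov * Phi_t, axis=1))

for xt, m, s in zip(x_test, pred_mean, pred_std):
    print(f"x = {xt:5.1f}:  prediction {m:6.2f}  +/- {2 * s:.2f}")
```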

Tuesday Sep 17

"Making sense of convolutions in DNNs for Image Recognition applications"

Dr. Arnt-Børre Salberg, Norwegian Computing Centre

In this lecture we aim to understand and explain the behavior of a deep convolutional neural network. By analyzing the response of each layer we gain knowledge of how the CNN works and what triggers a given neuron. We also perform a “sensitivity analysis” in order to explain which features or pixels in the input image are important for the corresponding network output (prediction), and which pixels the network is sensitive to.
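
A crude sketch of one such sensitivity analysis (occlusion-style, using a made-up toy image and a stand-in scoring function rather than a real trained CNN; both are assumptions for illustration): mask one image patch at a time and record how much the output changes; the patches that change the output the most are the ones the network is most sensitive to.

```python
# Sketch with made-up toy ingredients (not the lecture's code): occlusion-style
# sensitivity analysis over image patches.
import numpy as np

rng = np.random.default_rng(3)

# Toy 16x16 "image" with a bright square in the upper-left corner.
image = 0.1 * rng.random((16, 16))
image[2:6, 2:6] = 1.0

def score(img):
    # Stand-in for a trained CNN's class score: here it simply responds to the
    # mean brightness of the upper-left quadrant (an assumption for the demo).
    return img[:8, :8].mean()

base = score(image)
patch = 4
sensitivity = np.zeros((16 // patch, 16 // patch))

for i in range(0, 16, patch):
    for j in range(0, 16, patch):
        occluded = image.copy()
        occluded[i:i + patch, j:j + patch] = 0.0     # gray out one patch
        sensitivity[i // patch, j // patch] = base - score(occluded)

print("score drop per occluded patch (rows x cols of patches):")
print(np.round(sensitivity, 3))
```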

"Explaining and opening the black box – methods and research challenges"

Dr. Anders Løland, Norwegian Computing Centre

Many machine learning methods are more or less black boxes for the end user. Even those who develop the machine learning methods can struggle to explain how the methods actually work, or which variables are most decisive for a specific prediction. The latter is a GDPR requirement for automated individual decision-making. We will give an overview of current methods for opening black boxes. Some of them are useful, while others need more research to be more useful than harmful.
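
One simple method from that toolbox, sketched under toy assumptions (a hand-written linear "model" and synthetic data, not anything from the lecture): permutation importance, which scores each variable by how much predictive performance degrades when that variable's values are shuffled.

```python
# Toy sketch (hand-made model and data, not lecture code): permutation
# importance as a simple, model-agnostic way to rank input variables.
import numpy as np

rng = np.random.default_rng(4)
n = 500

X = rng.normal(size=(n, 4))                                    # four candidate variables
y = 3.0 * X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=n)   # only x0 and x2 matter

def model(X):
    # Stand-in for any trained black box; here, the true coefficients.
    return X @ np.array([3.0, 0.0, 0.5, 0.0])

def mse(X):
    return np.mean((model(X) - y) ** 2)

baseline = mse(X)
for j in range(4):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the link to variable j
    print(f"variable {j}: error increase when shuffled = {mse(X_perm) - baseline:.3f}")
```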

Tuesday Oct 8

"Machine Learning and the Physical Sciences"

Morten Hjorth-Jensen, Michigan State University, USA and University of Oslo
 
Machine Learning (ML) is one of the most exciting and dynamic areas of modern research and application. Here I will show how we can use physics-inspired deep learning algorithms, like Boltzmann machines, to study problems in quantum many-body physics, quantum computing, and chemical and material physics.
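
A bare-bones sketch of the kind of model mentioned (a tiny binary restricted Boltzmann machine trained with one-step contrastive divergence on two made-up binary patterns; the data, sizes and training settings are assumptions, not the lecturer's material):

```python
# Minimal sketch (toy data, not lecture code): a binary restricted Boltzmann
# machine trained with one-step contrastive divergence (CD-1).
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_visible, n_hidden, lr = 6, 3, 0.1
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
a = np.zeros(n_visible)                        # visible biases
b = np.zeros(n_hidden)                         # hidden biases

# Toy data: two repeating binary patterns.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

for epoch in range(100):
    for v0 in data:
        ph0 = sigmoid(v0 @ W + b)                       # positive phase
        h0 = (rng.random(n_hidden) < ph0).astype(float)
        pv1 = sigmoid(h0 @ W.T + a)                     # one Gibbs step back
        v1 = (rng.random(n_visible) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b)
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))   # CD-1 updates
        a += lr * (v0 - v1)
        b += lr * (ph0 - ph1)

# Reconstruction check: the machine should roughly reproduce the two patterns.
for v in [data[0], data[1]]:
    recon = sigmoid(sigmoid(v @ W + b) @ W.T + a)
    print(v.astype(int), "->", np.round(recon, 2))
```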

"Capsule Networks – next generation learning architecture"

Dr. Asbjørn Berge, SINTEF

To quote Geoffrey Hinton: "The pooling operation used in convolutional neural networks is a big mistake and the fact that it works so well is a disaster." Pooling is a necessity in most large convolutional nets, but it explicitly breaks hierarchical relationships in the data, or at least makes learning the underlying representations very inefficient. At the same time, almost all interesting problems have clear hierarchies or geometric relationships. One approach to efficiently learning hierarchical correspondences is Capsule Networks, where neurons are extended to output a tensor instead of a scalar. This has several benefits, for example the ability of each (extended) neuron to predict geometric transforms. Theory, use cases, training and extensions will be discussed.
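
A compact sketch of the core building block (the "squash" nonlinearity and a few rounds of routing-by-agreement between two toy capsule layers; the shapes and random prediction vectors are made up for illustration, not the lecture's code): each capsule outputs a vector whose length encodes presence and whose direction encodes pose, and routing strengthens couplings to the higher-level capsules whose predictions agree.

```python
# Toy sketch (made-up shapes and data, not lecture code): the "squash"
# nonlinearity and dynamic routing-by-agreement between two capsule layers.
import numpy as np

rng = np.random.default_rng(6)

def squash(v, axis=-1, eps=1e-9):
    # Shrinks short vectors toward 0 and long vectors toward unit length,
    # preserving direction (capsule output = "presence" x "pose").
    norm2 = np.sum(v ** 2, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + eps)

n_lower, n_upper, dim = 8, 3, 4
# Prediction vectors u_hat[i, j]: what lower capsule i predicts for upper capsule j
# (in a real network these come from learned transformation matrices).
u_hat = rng.normal(size=(n_lower, n_upper, dim))

logits = np.zeros((n_lower, n_upper))            # routing logits b_ij
for _ in range(3):                               # a few routing iterations
    c = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)   # couplings
    s = np.einsum('ij,ijd->jd', c, u_hat)        # weighted sum into each upper capsule
    v = squash(s)                                # upper-capsule outputs
    logits += np.einsum('ijd,jd->ij', u_hat, v)  # agreement updates the routing

print("upper-capsule output lengths (activation strengths):")
print(np.round(np.linalg.norm(v, axis=1), 3))
```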

Tuesday Oct 22

"Physics and machine learning"

Signe Moe and Signe Riemer-Sørensen, SINTEF

How can we use machine learning to infer physical properties of a system and how can physical properties be built into DNNs? What are the challenges for industrial application of DNNs? We will discuss the concepts of hybrid models and the importance of domain knowledge when implementing DNNs.
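
A small sketch of the hybrid-model idea (a made-up one-dimensional system with a deliberately simplified "physics" term; everything here is an assumption for illustration, not the lecturers' material): keep a known physical model for the part of the system we understand, and let the data-driven part learn only the residual.

```python
# Toy sketch (made-up system, not lecture code): a hybrid model keeps a known
# physics term and learns only a correction for what the physics leaves out.
import numpy as np

rng = np.random.default_rng(7)

# "True" system: a linear drag term (assumed known) plus a quadratic term we
# pretend is unknown and must be learned from data.
x = np.linspace(0, 5, 200)
y_obs = -0.8 * x - 0.15 * x ** 2 + 0.05 * rng.normal(size=x.size)

physics = -0.8 * x                          # domain knowledge: fixed, not fitted

# Data-driven correction fitted to the residual (a linear-in-features model
# stands in for a neural network, to keep the sketch dependency-free).
Phi = np.vstack([np.ones_like(x), x, x ** 2]).T
coef, *_ = np.linalg.lstsq(Phi, y_obs - physics, rcond=None)

print("learned correction coefficients [1, x, x^2]:", np.round(coef, 3))
# The x^2 coefficient should come out near -0.15: the data-driven part only has
# to learn the piece of the dynamics the physics term missed.
```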

"Crossroads: Bringing the pieces together"

Lecturer: TBA

In this lecture we aim to take multiple, simultaneous shots at DNNs to understand what goes on inside the boxes. We will aim to get researchers from different methodological angles (statistics, computational science, information theory) to spar in real time and try to make sense of when and how DNNs work, and why.

Tuesday Nov 5

"The sociology of Deep Neural Networks"

Lecturer: TBA

Every new area of research comes with its own unique sociology, heroes and villains, and tribal dynamics. In this talk we dive into the 'human side' of the DNN evolution. We try to establish a historical timeline of claims and delivered results, and home in on beliefs about the further evolution of DNNs and AI.

Tuesday Nov 26

"Impact: what happens if we understand the neural networks, and what happens if we don't?"

Professor Nils Christophersen, UiO

We will say that we understand a decision or a result from a neural network if we are able to perceive a reason or meaning behind what is presented. This may be a challenge, since neural networks learn in a different way from how we do, lacking, for instance, our common sense and ethical considerations. We present examples where an “AI-lash” (cf. “techlash”) may occur as a result of current use, and discuss sensible ways to avoid this.

SIGN UP FOR THE LECTURES

The Lecture Series is free of charge. All you have to do is sign up as a participant, and you will receive updates on the schedule.

Copyright (c) 2019 StartupLab