VMV – Invited Talks

Tobias Günther

Visualizing Motion: From Points to Fields

Abstract: Descriptions of motion are found everywhere in graphics, whether in computer animation, physics simulation, optical flow, or scientific visualization. The common denominator in all of the above is the mathematical language used to describe motion, namely differential equations. In this talk, we discuss how visualization can help us analyze motion. We begin with a brief introduction to the mathematical modeling of trajectories and their visualization through phase portraits. We then see how optimizations can be used to lower the dimensionality of phase spaces. Afterwards, we move from dynamical systems containing point objects to the visualization of continuous fields in motion, such as fluids.

Bio: Tobias Günther is a professor for Visual Computing at the Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany. His research concentrates on the development of novel algorithms and techniques for large-scale exploration of scientific data, optimization-based feature extraction, applications of machine learning, and rendering, both in real time and offline. He collaborates with domain scientists from various disciplines, including meteorology, climate science, biology, cosmology, architecture, sports, and engineering. He received his PhD from the University of Magdeburg and was a postdoc at ETH Zurich.

Laura Leal-Taixé

In defense of one-shot fine-tuning for video object segmentation

Abstract: Video Object Segmentation (VOS) is the task of segmenting a set of objects in all frames of a video. In the semi-supervised setting, the first-frame mask of each object of interest is provided at test time. Many VOS approaches follow the one-shot principle and separately fine-tune a segmentation model on each object’s given mask. However, recent VOS methods refrain from such test-time optimization, as it is considered to suffer from several shortcomings, including a high test runtime.

In this talk, I will present the efficient One-Shot Video Object Segmentation (e-OSVOS) framework. In contrast to most VOS approaches, e-OSVOS decouples the object detection task and predicts only local segmentation masks by applying a modified version of Mask R-CNN. The one-shot test runtime and performance are optimized without a laborious, handcrafted hyperparameter search. To this end, we meta-learn the model initialization and learning rates for the test-time optimization. We address the issue of degrading performance over the course of the sequence by continuously fine-tuning the model on previous mask predictions, supported by a bounding box propagation. The state-of-the-art results of e-OSVOS will hopefully convince you to give one-shot fine-tuning methods another look.

Bio: Prof. Dr. Laura Leal-Taixé is a tenure-track professor at the Technical University of Munich, leading the Dynamic Vision and Learning group. Before that, she spent two years as a postdoctoral researcher at ETH Zurich, Switzerland, and a year as a senior postdoctoral researcher in the Computer Vision Group at the Technical University of Munich. She obtained her PhD from the Leibniz University of Hannover in Germany, spending a year as a visiting scholar at the University of Michigan, Ann Arbor, USA. She pursued her B.Sc. and M.Sc. in Telecommunications Engineering at the Technical University of Catalonia (UPC) in her native city of Barcelona. She went to Boston, USA, to complete her Master’s thesis at Northeastern University with a fellowship from the Vodafone foundation. She is a recipient of the Sofja Kovalevskaja Award of 1.65 million euros as well as a Google Faculty Award.

Enkelejda Kasneci

Enhancing User Models through Visual Scanpath Analysis


Abstract: Our sense of sight allows us to take in the vast amount of information in the world around us. We perceive visual input based on a mixture of salient and contextual features, and our eyes move to process the way these features draw our attention. This pattern of fixations and saccades is known as the scanpath and is reflective of tasks, expertise, and even emotion. Since scanpaths convey a multitude of cognitive aspects, scanpath comparison and machine learning approaches that use scanpaths provide models for many applications. Our research furthers work in robust scanpath analysis using machine learning methods and has recently integrated deep learning for semantic understanding of a scene. This talk will first discuss the potential of efficient scanpath analysis for user modeling and provide an overview of state-of-the-art methodology for gaze behavior analysis coupled with scene semantics. Results and visualizations are based on challenging examples of user modeling from real-world tasks.

Bio: Enkelejda Kasneci is a Professor of Computer Science at the University of Tübingen, Germany, where she leads the Human-Computer Interaction Lab. As a BOSCH scholar, she received her M.Sc. degree in Computer Science from the University of Stuttgart in 2007. In 2013, she received her PhD in Computer Science from the University of Tübingen. For her PhD research, she was awarded the research prize of the Federation Südwestmetall in 2014. From 2013 to 2015, she was a postdoctoral researcher and a Margarete-von-Wrangell Fellow at the University of Tübingen. Her research revolves around the application of machine learning for intelligent and perceptual human-computer interaction. She served as academic editor for PLOS ONE and as a TPC member and reviewer for several major conferences and journals.

Michael Sedlmair

Machine Learning meets Visualization


Abstract: Based on our experience conducting projects at the intersection of machine learning (ML) and interactive visualization (Vis), my talk will reflect on and discuss the current relation between these two areas. For that purpose, the talk’s structure will follow two main streams. First, I will talk about Vis for ML, that is, the idea that visualization can help machine learning researchers and practitioners gain interesting insights into their models. In the second part, I will then turn the relationship around and discuss how ML for Vis can guide visualization designers and analysts towards interesting visual patterns in the data. The talk will conclude with research challenges that lie ahead of us and that will pave the way for future interfaces between humans and data.

Bio: Michael Sedlmair is a junior professor at the University of Stuttgart, where he works at the intersection of human-computer interaction, visualization, and data analysis. Previously, Michael worked at Jacobs University Bremen, the University of Vienna, the University of British Columbia, the University of Munich (where he received his PhD), and BMW Group Research and Technology. He also holds visiting positions at the Vienna University of Technology and Shandong University. His interests focus on information visualization, interactive machine learning, virtual and augmented reality, as well as the research and evaluation methodologies underlying them.


Wenzel Jakob

An Introduction to Physically Based Differentiable Rendering

Abstract: Progress on differentiable rendering over the last two years has been remarkable, making these methods serious contenders for solving truly hard inverse problems in computer graphics and beyond. In this talk, I will give an overview of physically based differentiable rendering and its fascinating applications, as well as future challenges in this rapidly evolving field.

Bio: Wenzel Jakob is an assistant professor at EPFL’s School of Computer and Communication Sciences, where he leads the Realistic Graphics Lab (https://rgl.epfl.ch/). His research interests revolve around inverse graphics, material appearance modeling, and physically based rendering algorithms. Wenzel is the recipient of the ACM SIGGRAPH Significant New Researcher Award, the Eurographics Young Researcher Award, and an ERC Starting Grant. He is also the lead developer of the Mitsuba renderer, a research-oriented rendering system, and one of the authors of the third edition of “Physically Based Rendering: From Theory to Implementation” (http://pbrt.org/).