Dr. Matthias Müller is Research Lead of the PhysX SDK team at
NVIDIA. PhysX is a GPU-accelerated, physically based simulation engine
for computer games. His research interests include the development of
methods for the simulation of rigid bodies, fracture, soft bodies,
cloth and fluids that are fast, controllable and robust enough to be
used in game environments. He is a pioneer in the field of position
based dynamics and has been contributing to this and other fields via
numerous publications in the major computer graphics conferences and
journals. Position based dynamics has become the standard for the
simulation of soft bodies and cloth in computer games and has been
adopted by the film industry as well.
Matthias Müller received his Ph.D. from ETH Zürich for
his work on the atomistic simulation of dense polymer systems. During
a two-year post-doc with the computer graphics group at MIT he changed
his research focus from atomistic offline simulations to macroscopic
real-time simulation in computer graphics. In 2002 he co-founded
Novodex, a company that developed a simulation engine for computer
games. In 2004 Novodex was acquired by AGEIA which, in turn, was
acquired by NVIDIA in 2008.
Physics in Games
Physical simulations have a long history in engineering, where they have been used successfully to complement real-world experiments. The main advantages computer simulations have over real experiments are the ability to study extreme conditions and to analyze very small time intervals. Accordingly, the accuracy of the models and of the results is central to engineering applications.
For more than three decades, physical simulations have also been used in computer graphics to increase the realism of animations and to free artists from animating secondary motion by hand. The two main applications are special effects in movies and physical effects in computer games. Here, accuracy is important only to the extent that plausible behavior is generated. There are, however, additional requirements, not present in the engineering world, that matter more than accuracy. One such requirement is controllability: movie directors and game developers want to control how a building collapses or what path a flood wave takes, in order to create the desired effect or to make sure game play does not get blocked. Another aspect that plays a major role, especially in games, is stability. The simulations need to be unconditionally stable even in unphysical situations such as characters turning 180 degrees in a single time step.
These new requirements are the reason why physically based simulation in computer graphics has become an important research field separate from scientific computing. In my talk I will present a variety of simulation methods we have developed to meet these requirements while still producing plausible physical behavior, including approaches for simulating soft bodies, clothing, destruction, and liquids.
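As background for the stability requirement above, the following is a minimal sketch of the core loop of position based dynamics (PBD), the method referenced in the biography: positions are predicted by explicit integration and then projected directly back onto the constraints, so even a violent input pose is corrected rather than amplified. The sketch assumes NumPy and a single distance-constraint type; the function and variable names are illustrative, not part of the PhysX API.

    import numpy as np

    def pbd_step(x, v, inv_mass, constraints, dt, iterations=10):
        # x, v: (n, 3) positions and velocities; inv_mass: (n,) inverse
        # masses (0 pins a particle); constraints: (i, j, rest_length) tuples.
        gravity = np.array([0.0, -9.81, 0.0])
        p = x + dt * v + dt * dt * gravity            # predicted positions
        for _ in range(iterations):                   # Gauss-Seidel projection
            for i, j, rest in constraints:
                d = p[i] - p[j]
                dist = np.linalg.norm(d)
                w = inv_mass[i] + inv_mass[j]
                if dist < 1e-9 or w == 0.0:
                    continue
                s = (dist - rest) / w                 # mass-weighted violation
                n = d / dist
                p[i] -= inv_mass[i] * s * n           # pull both particles back
                p[j] += inv_mass[j] * s * n           # onto the constraint
        v_new = (p - x) / dt                          # velocities follow positions
        return p, v_new

Because positions are moved directly onto the constraint set instead of being pushed by forces, an extreme event such as a character turning 180 degrees in one time step yields a damped correction rather than a numerical explosion, which is the sense in which such methods are unconditionally stable.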
Dr. Elizabeth Churchill is an applied social scientist working in
the area of social media, interaction design and mobile/ubiquitous
computing. She is currently Director of Human Computer Interaction at
eBay Research Labs (ERL) in San Jose, California. She was formerly a
Principal Research Scientist at Yahoo! Research, where she founded,
staffed and managed the Internet Experiences Group. Originally a
psychologist by training, throughout her career Elizabeth has focused
on understanding people's social and collaborative
interactions in their everyday digital and physical contexts. She has
studied, designed and collaborated in creating online collaboration
tools (e.g., virtual worlds, collaboration/chat spaces), applications
and services for mobile and personal devices, and media installations
in public spaces for distributed collaboration and communication.
Elizabeth has a B.Sc. in Experimental Psychology, an M.Sc. in Knowledge
Based Systems, both from the University of Sussex, and a Ph.D. in
Cognitive Science from the University of Cambridge. In 2010, she was
recognised as a Distinguished Scientist by the Association for
Computing Machinery (ACM). Elizabeth is the current Executive Vice
President of ACM SIGCHI (the Special Interest Group on Computer-Human
Interaction). She is a Distinguished Visiting Scholar at Stanford
University's Media X, the industry affiliate program to Stanford's
H-STAR Institute.
Foundations for Designing User Centered Systems: A framework and some case studies
Interactive technologies pervade every aspect of modern life. Web
sites, mobile devices, household gadgets, automotive controls,
aircraft flight decks; everywhere you look, people are interacting
with technologies. These interactions are governed by a combination
of: the users' capabilities, capacities, proclivities and
predilections; what the user(s) hope to do and/or are trying to do;
and the context in which the activities are taking place. From concept
to ideation to prototype and evaluation, when designing interactive
technologies and systems for use by people, it is critical that we
start with some understanding of who the users will be, what tasks and
experiences we are designing to support, and something about the
context(s) of use. In this talk, I will discuss a framework for
thinking about design, the ABCS. Using examples from my own work, I
will illustrate how this framework has been explicitly and/or tacitly
applied in the design, development and evaluation of interactive,
multimedia systems.
2014 CHCCS Achievement Award
Eugene Fiume, University of Toronto
Eugene Fiume is Professor and past Chair of the Department of Computer
Science at the University of Toronto, where he also co-directs the
Dynamic Graphics Project. He is Director of the Master of Science in
Applied Computing programme and Principal Investigator of a $6M
CFI/ORF project on the construction of a digital media and systems
lab. He has recently accepted the role of Scientific Director of the
GRAND NCE, beginning in 2015. Eugene's research interests include most
aspects of realistic computer graphics, including computer animation,
modelling natural phenomena, and illumination, as well as strong
interests in internet-based imaging, image repositories, software
systems and parallel algorithms. He has written two books and
(co-)authored over 130 papers on these topics. Fourteen doctoral
students and 45 masters students have graduated under his
supervision. He has won two teaching awards, as well as Innovation
Awards from ITRC for research in computer graphics, Burroughs-Wellcome
for biomedical research, and an NSERC Synergy Award for innovation and
industrial collaboration in visual modelling.
Following his B.Math. degree from the University of Waterloo and
M.Sc. and Ph.D. degrees from the University of Toronto, he was an
NSERC Postdoctoral Fellow and Maître Assistant at the University of
Geneva, Switzerland. He was awarded an NSERC University Research
Fellowship in 1987 and returned to the University of Toronto to a
faculty position. He was Associate Director of the Computer Systems
Research Institute, and a Visiting Professor at the University of
Grenoble, France.
Visual Models and Ontologies
Realistic computer graphics will change the way people think and
communicate. Achieving deeper success as a ubiquitous medium
will require a more resonant understanding of visual modelling that
must embrace mathematical, philosophical, cultural, perceptual
and social aspects. With an interleaved understanding, people
will be able to create visual ontologies that better align to their
expressive needs. In turn, this will naturally lead to ubiquitous
supporting technologies. First we need good visual models. A
model induces an ontology of things that inevitably omits aspects
of the phenomenon, whether desired or not. Thus modelling a
model's incompleteness is crucial, for it allows us to account for
artifacts, errors, and ontological surprises such as the "uncanny
valley". Over the years, my choice of tools to model models has
been mathematics. In this talk, I will speak to how little progress
we have made and how much broader our investigation must be.
2013 Alain Fournier Dissertation Award
Hua Li, University of North Carolina, Wilmington
Hua completed her B.Eng. in Mining Engineering and her M.Eng. in
Control Theory and Control Engineering, both at the University of
Science and Technology Beijing, and her Ph.D. in Computer Science
at Carleton University under the supervision of Professor David Mould. She has
co-authored a paper at Eurographics, two at Graphics Interface (one of
which received the best student paper award in graphics), two at NPAR, one
at ARTECH (honorable mention), as well as other publications; a number
of her contributions appeared as extended versions in journals. She
has been a regular reviewer for Graphics Interface and other top
computer graphics conferences and journals. She is now a faculty
member at the University of North Carolina, Wilmington.
Structure Preservation in Stylized Image Synthesis
Non-photorealistic rendering (NPR) aims to produce computer-generated
artistic images, e.g., in inked or stippled styles. Many automatic
approaches for stylized image synthesis have been proposed, often
chiefly concerned with tone matching. We observe that preserving
structure in image abstraction can help communicate image content even
with a small primitive count. For many years, in order to preserve
structural details, we have attacked the problem of image stylization
building on a foundation of priority-based, contrast-aware error
diffusion. In this talk, I will present a family of automatic
structure-preserving NPR methods we have developed, and show the
applications for different effects, including halftoning, screening,
stippling, and line art.
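As background for the foundation named above, the following is a minimal sketch of classic Floyd-Steinberg error diffusion, the standard halftoning baseline that priority-based, contrast-aware variants build on; it is background only, not the method presented in the talk. It assumes NumPy and a grayscale image with values in [0, 255].

    import numpy as np

    def floyd_steinberg(gray):
        # Threshold pixels in scanline order and push each pixel's
        # quantization error onto its not-yet-visited neighbours.
        img = gray.astype(np.float64).copy()
        h, w = img.shape
        out = np.zeros((h, w), dtype=np.uint8)
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = 255.0 if old >= 128.0 else 0.0
                out[y, x] = int(new)
                err = old - new
                if x + 1 < w:
                    img[y, x + 1] += err * 7 / 16        # right
                if y + 1 < h and x > 0:
                    img[y + 1, x - 1] += err * 3 / 16    # below-left
                if y + 1 < h:
                    img[y + 1, x] += err * 5 / 16        # below
                if y + 1 < h and x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16    # below-right
        return out

A fixed scanline order like this reproduces tone well but treats edges and flat regions identically; as the abstract suggests, a priority-based, contrast-aware variant instead orders processing by importance, so that structural details survive even when the primitive count is small.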
2013 Bill Buxton Dissertation Award
Xing-Dong Yang, University of Calgary
Xing-Dong Yang completed his Bachelor
of Computer Science in 2005 from the University of Manitoba. He earned
his Master of Computing Science with a specialization in Haptic
Interfaces in 2008 from the University of Alberta under the
supervision of Dr. Pierre Boulanger and Dr. Walter F. Bischof, and his
Doctorate in Computing Science with a specialization in Human-Computer
Interaction in 2013 from the same university where he worked under the
supervision of Dr. Pierre Boulanger. During his graduate work he was a
research intern at Autodesk Research in Toronto and Microsoft Research
Asia in Beijing. He has generated a large number of publications,
many in top-tier venues in HCI, including the ACM Conference on Human
Factors in Computing Systems (ACM CHI) and the ACM Symposium on User
Interface Software and Technology (ACM UIST). He has over twenty publications
in the fields of HCI, mobile computing, wearable technology, and haptic
interfaces. His work has been recognized through best paper
nominations at ACM CHI and ACM MobileHCI, has been featured in the popular
press by Discovery News, NBC, and New Scientist, and has led to five US
patent applications filed between 2010 and 2013. He is currently a
Postdoctoral Fellow in the iLab, at the University of Calgary working
with Dr. Tony Tang and Dr. Saul Greenberg.
Towards Mobile Interactions that go Beyond The Touchscreen
The ubiquitous touchscreen has become the primary means by which
users interact with mobile devices. Given the complex requirements of
mobile tasks, this channel can become a bottleneck for mobile
computing. In this talk, I present my work on extending the mobile device's
input space from on-the-display to off-the-device through three
proof-of-concept prototypes. My first approach uses
the device's rear surface as an input medium to gain fine-grained,
pixel-level control on mobile devices. This can be particularly useful
for the common one-handed mode of using mobile devices. My second
approach extends the input space to the device's periphery and its
vicinity. With this method, I introduce my vision for
the mobile device of the future which can 'see' its environment, in a
self-contained prototype called Surround-See. I describe
Surround-See's design and architecture, and demonstrate novel
applications that exploit peripheral 'seeing' capabilities during
active use of a mobile device. My third approach is to extend the
input space to any surface available to the user. I present Magic
Finger, a small device worn on the fingertip, which supports
always-available input. Magic Finger inverts the typical relationship
between the finger and an interactive surface: with Magic Finger, I
instrument the user's finger itself, rather than the surface it is
touching. Magic Finger senses touch through an optical mouse sensor,
enabling any surface to act as a touch screen. Magic Finger also
senses texture through a micro RGB camera, allowing contextual actions
to be carried out based on the particular surface being touched. I
present a number of novel interaction techniques that leverage its
unique capabilities and show how it can be exploited for use with
wearable technologies. At the end of my talk, I present my plan for
future research that is driven by my vision of how 'smarter' mobile
devices can be developed to improve people's daily activities.
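To make the contextual-action idea behind Magic Finger concrete, the following is a hypothetical sketch of the dispatch step: a label from a texture classifier (fed by the fingertip's micro RGB camera) selects the action bound to that surface. The labels, actions, and interface here are illustrative assumptions, not details of the actual prototype.

    from typing import Callable, Dict

    # Hypothetical surface-to-action bindings; the real prototype's differ.
    SURFACE_ACTIONS: Dict[str, Callable[[], None]] = {
        "desk": lambda: print("use the desk as a trackpad"),
        "jeans": lambda: print("toggle silent mode"),
        "palm": lambda: print("answer the incoming call"),
    }

    def on_touch(texture_label: str) -> None:
        # texture_label is assumed to come from the texture classifier.
        action = SURFACE_ACTIONS.get(texture_label)
        if action is not None:
            action()

    on_touch("jeans")  # prints: toggle silent mode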