
Date (Winter 2013)
Speaker and Abstract
2013/01/25 
Speaker: Renée Sieber
Affiliation: Department of Geography, McGill University
Title: The Epistemology of Big Data and Volunteered Geographic Information
Abstract:
An emergent field of research in Geospatial Information Science concerns the collection, analysis and visualization of Volunteered Geographic Information (VGI). VGI is user-generated content on Web 2.0 platforms that also contains a geolocation. It is characterized as big data from non-experts, citizen “sensors” who are presumably close to phenomena like crises and can quickly and massively contribute content to platforms like Twitter or Google Maps. VGI has become immensely salient in GIScience because of its potential for spatial data inferencing and data mining, and because of the apparent importance of geography in computer science and software engineering. In my talk, I will cover the potential of the big data of VGI in GIScience and discuss the challenges faced in handling this new data source. I will predominantly talk about the non-algorithmic side of these challenges. I will describe the way in which VGI represents both data AND an epistemology, that is, a path to truth, whether you call truth accuracy or authenticity. VGI is also promoted as being more participatory, with hackathons and citizen science as just two examples. It often fails at these ideals. The talk draws on my current research in VGI on Geospatial Web 2.0 platforms and compares it to my prior research in Public Participation Geographic Information Systems and Science (PPGIS). The comparison suggests that the ways VGI is framed (as truth, as participatory, as volunteered) make a purely algorithmic, extractive approach problematic.
Biography of Speaker:

Sieber conducts research at the intersection of social theory and software architecture, particularly where it concerns locational information. She is best known for her work in Public Participation Geographic Information Systems and Science (PPGIS), although she’s also collaborated on GIS applications in health, climate modeling, and the digital humanities. She is jointly appointed between the School of Environment and the Department of Geography and she is affiliated with the School of Computer Science.

2013/02/01 
Speaker: Theodore Perkins
Affiliation: Ottawa Hospital Research Institute
Title: Path analysis of continuous-time stochastic systems
Abstract: In analyzing a stochastic dynamical system, one of the most natural
questions we may ask is: What is the most probable path the system
takes between a point A and a point B? This basic question is at the
core of many applied computing problems, including speech recognition,
motion tracking, error correction, etc. When time is discrete, the
Viterbi algorithm provides an efficient and optimal solution. When
time is continuous, however, the problem has not been adequately
addressed. I will present several new results and ongoing work in the
area of path inference for continuous-time Markov chains, a class of
continuous-time, discrete-state stochastic systems. A key observation
is that the right solution to path inference in these systems is not
simply to discretize time and apply the Viterbi algorithm. Rather, one
must reformulate the problem appropriately for the continuous-time
case, and develop new solutions to that problem. I will describe these
solutions and demonstrate them on applications in fault diagnosis, HIV
evolution, and the dynamics of neural ion channels.
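For context on the discrete-time baseline that the abstract contrasts with, here is a minimal Viterbi sketch for a toy hidden Markov model. The model, variable names, and numbers are purely illustrative and not from the talk:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most probable state path of a discrete-time HMM, in log domain.

    pi: initial distribution (N,); A: transitions (N, N);
    B: emissions (N, M); obs: list of observation indices.
    """
    T, N = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])    # best log-prob ending in each state
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        cand = logd[:, None] + np.log(A)        # cand[i, j]: via state i into state j
        back[t] = cand.argmax(axis=0)
        logd = cand.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):               # follow the back-pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# With near-deterministic emissions, the decoded path tracks the observations.
pi = np.array([0.5, 0.5])
A = np.full((2, 2), 0.5)
B = np.array([[0.99, 0.01], [0.01, 0.99]])
print(viterbi(pi, A, B, [0, 1, 0, 1]))  # -> [0, 1, 0, 1]
```

As the abstract stresses, naively running this on a time-discretized continuous-time chain is not the right solution to the continuous-time path problem.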
Biography of Speaker:

Theodore Perkins obtained his PhD from the University of Massachusetts
Amherst in 2002, in Computer Science. From 2002 to 2005 he was a
Postdoctoral Fellow at McGill, in the School of Computer Science and
the Department of Physiology. From 2006 to 2008 he was an Assistant
Professor in the School of Computer Science at McGill. In 2009, he
moved to Ottawa, where he is a Scientist at the Ottawa Hospital
Research Institute, an Assistant Professor with the University of
Ottawa, and Director of the Ottawa Bioinformatics Core Facility. His
research spans bioinformatics, computational biology, mathematical
biology and machine learning. Much of his work focuses on developing
methods for the analysis of high-throughput sequencing data, and using
that data to unravel gene regulatory networks in stem cells. However,
he is also interested in understanding information processing at the
cellular level, and in general computational methods for analyzing
stochastic dynamical systems.

2013/02/08 
Speaker: Prakash Panangaden
Affiliation: School of Computer Science, McGill University
Title: Duality of State and Observation in Automata
Abstract: Duality is a fundamental concept appearing in mathematics, physics and
computer science in various guises. Examples are linear programming
duality, Stone duality (logic and topology), Gelfand duality (functional
analysis) as well as other more exotic dualities in mathematical physics.
In this talk I consider the problem of representing and reasoning about
systems, especially probabilistic systems, with hidden state. I consider
transition systems where the state is not completely visible to an outside
observer. Instead, there are observables that partly identify the state.
This yields something very much like an automaton as studied in
undergraduate classes.
I show that one can interchange the notions of state and observation and
obtain what can be called a dual system. The double dual gives a minimal
representation of the behaviour of the original system. I extend this to
probabilistic transition systems and to partially observable Markov
decision processes (POMDPs). In the case of finite automata restricted to
one observable, one obtains something similar to Brzozowski's algorithm for
minimizing finite-state language acceptors.
It turns out that there is a categorical way of understanding these results
and diverse examples fit into a pleasing general framework. This is joint
work with several people: Nick Bezhanishvili, Filippo Bonchi, Marcello
Bonsangue, Monica Dinculescu*, Helle Hvid Hansen, Chris Hundt*, Dexter
Kozen, Clemens Kupke, Kim Larsen, Radu Mardare, Joelle Pineau*, Doina
Precup*, Jan Rutten and Alexandra Silva.
Though I will be name-dropping as usual, the talk should be easily
accessible to a theoretically inclined undergraduate, perhaps to a graduate
student and possibly even to a professor.
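Since the abstract mentions Brzozowski's algorithm, here is a small sketch of the double-reversal construction it alludes to: reversing, determinizing, reversing, and determinizing again yields a minimal DFA. The automaton encoding below is my own illustration, not code from the talk:

```python
def reverse(aut):
    """Flip every transition and swap initial and final states."""
    states, delta, inits, finals, syms = aut
    rdelta = {}
    for (q, a), ts in delta.items():
        for t in ts:
            rdelta.setdefault((t, a), set()).add(q)
    return (states, rdelta, set(finals), set(inits), syms)

def determinize(aut):
    """Subset construction; states of the result are frozensets."""
    _, delta, inits, finals, syms = aut
    start = frozenset(inits)
    seen, work, ddelta = {start}, [start], {}
    while work:
        S = work.pop()
        for a in syms:
            T = frozenset(t for q in S for t in delta.get((q, a), ()))
            ddelta[(S, a)] = {T}
            if T not in seen:
                seen.add(T)
                work.append(T)
    dfinals = {S for S in seen if S & finals}
    return (seen, ddelta, {start}, dfinals, syms)

def brzozowski(aut):
    """Minimal DFA for the language of `aut` via double reversal."""
    return determinize(reverse(determinize(reverse(aut))))

# Strings over {a, b} that end in 'a': the minimal DFA has two states.
nfa = ({0, 1}, {(0, 'a'): {0, 1}, (0, 'b'): {0}}, {0}, {1}, {'a', 'b'})
states, _, _, finals, _ = brzozowski(nfa)
print(len(states), len(finals))  # -> 2 1
```

The talk's duality of state and observation generalizes this interchange of initial/final states and forward/backward transitions well beyond plain acceptors.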
Biography of Speaker:

Prakash works in the area between logic and probability and particularly on
approximating Markov processes on continuous state spaces. He also works
on programming language semantics, duality theory for automata,
relativistic effects in quantum information theory, some physics and some
mathematics. He is a member of the RL Lab at McGill University and
interacts also with the CQI Lab as well as the CompLogic group. He is also
an associate member of the Department of Mathematics and Statistics and an
affiliate member of the Perimeter Institute.
Prakash grew up in India and attended the Indian Institute of Technology,
Kanpur. Since he studied Physics rather than Management he was not sought
after in the Indian marriage market and went to the US to study, first at
the University of Chicago, then at the University of Wisconsin-Milwaukee
where he got a PhD in Physics. He later got an MS in computer science from
the University of Utah and wormed his way onto the faculty of the Computer
Science Department of Cornell University. There he met and married a
Canadian who complained that it was not cold enough in the US so he moved
to McGill University where he has spent a couple of delightful decades with
a bunch of fabulous colleagues and wonderful students.

2013/02/15 
Speaker: Costis Daskalakis
Affiliation: Massachusetts Institute of Technology
Title: Game Theory and Computational Complexity
Abstract:
Can Game Theory predict rational behavior? In two-player zero-sum games the Nash equilibrium makes rather convincing predictions, which can also be computed efficiently with linear programming. We show, however, that the Nash equilibrium is in general intractable, casting doubt on whether it always truly captures the behavior of computationally bounded agents. We also discuss ways to overcome this intractability barrier, including approximation, dynamics, and studying well-behaved families of games. We conclude with a broader discussion of the interactions of Game Theory with the Theory of Computation.
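The claim that two-player zero-sum equilibria are efficiently computable by linear programming can be sketched in a few lines. This toy solver assumes SciPy's `linprog` is available and is illustrative only, not code from the talk:

```python
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(A):
    """Row player's equilibrium strategy and the game value for payoff matrix A.

    Solves: maximize v subject to (x^T A)_j >= v for every column j,
    with x a probability vector, as one linear program.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                # minimize -v, i.e. maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (x^T A)_j <= 0 for each column
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                           # probabilities sum to 1; v is free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m], res.x[-1]

# Matching pennies: the value is 0 and the equilibrium mixes uniformly.
x, v = maximin_strategy([[1, -1], [-1, 1]])
print(x, v)
```

For general-sum games no such LP formulation is known, which is exactly the intractability gap the abstract describes.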
Biography of Speaker:

Constantinos Daskalakis is the X-Window Consortium Associate Professor of computer science at MIT. He holds a diploma in electrical and computer engineering from the National Technical University of Athens, and a Ph.D. in electrical engineering and computer sciences from UC Berkeley. His research interests lie in theoretical computer science and applied probability with a focus on the computational aspects of the Internet, online markets and social networks. Daskalakis has been honored with the 2007 Microsoft Graduate Research Fellowship, the 2008 ACM Doctoral Dissertation Award, the Game Theory and Computer Science Prize from the Game Theory Society, the 2010 Sloan Fellowship in Computer Science, the 2011 SIAM Outstanding Paper Prize, the 2011 Ruth and Joel Spira Award for Distinguished Teaching, and the 2012 Microsoft Research Faculty Fellowship.

2013/02/22 
Speaker: Matthias Christandl
Affiliation: Institute for Theoretical Physics, ETH Zürich
Area: Quantum computing
Title: Entanglement, Matrix Multiplication and Group Representations
Abstract: In quantum information theory, strong correlations between quantum particles, known as entanglement, are responsible for the security of quantum cryptography and the speedup in quantum computation. But how can we find out whether a state of two particles is entangled? Answering this question has kept the field of quantum information theory busy since its beginning. After an introduction to the subject, I will present the currently fastest algorithm for solving this question (Brandao, Christandl & Yard, STOC'11). I will then explain the surprising connection between the matrix multiplication problem and entanglement. This connection motivated us to employ quantum information tools (based on group representations) in algebraic complexity theory and led to a contribution to Mulmuley and Sohoni's effort toward solving Valiant's algebraic version of the P vs. NP problem (Christandl, Doran & Walter, FOCS'12).
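As a taste of what an entanglement test looks like in code: this is not the STOC'11 algorithm of the abstract, but the much simpler Peres-Horodecki partial-transpose criterion, which happens to be exact for two qubits. The example states are standard illustrations:

```python
import numpy as np

def entangled_ppt(rho, dA, dB):
    """Peres-Horodecki test: a negative eigenvalue of the partial transpose
    certifies entanglement; for dA*dB <= 6 the criterion is also necessary."""
    r = rho.reshape(dA, dB, dA, dB)
    # Transpose the second subsystem only: rho^{T_B}[(i,j),(k,l)] = rho[(i,l),(k,j)]
    rho_tb = r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)
    return bool(np.linalg.eigvalsh(rho_tb).min() < -1e-12)

# The Bell state is entangled; the maximally mixed two-qubit state is not.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(entangled_ppt(np.outer(bell, bell), 2, 2))  # -> True
print(entangled_ppt(np.eye(4) / 4, 2, 2))         # -> False
```

In larger dimensions there are entangled states this test misses, which is why deciding entanglement in general is the hard algorithmic question the talk addresses.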
Biography of Speaker:

Matthias Christandl is an assistant professor at the Institute for Theoretical Physics at ETH Zurich. He is an expert on quantum information theory and is known for his contributions to entanglement theory and quantum cryptography. Matthias received his diploma in physics from ETH Zurich in 2002. In 2006 he completed his PhD, supervised by Artur Ekert, at the University of Cambridge. He then took up the post of Thomas Nevile Research Fellow at Magdalene College, Cambridge. In 2008, he became Juniorprofessor at LMU Munich, before returning to ETH Zurich in 2010 in his current position. Matthias has been awarded the Cambridge University Hamilton Prize and a prize of the German Physical Society. Serving the needs of the growing community of quantum cryptographers, he co-founded QCRYPT, a series of conferences on this topic, and presided over its first edition in Zurich in 2011.

2013/03/01 
Speaker: Georges Gonthier
Affiliation: Microsoft Research
Title: Proof engineering, from the Four Color to the Odd Order Theorem.
Abstract: Thirty-five years ago computers made a dramatic debut in mathematics with the famous proof of the Four Color Theorem by Appel and Haken. Their role has been expanding recently, from computational devices to tools that can tackle deduction and proofs too complex for (most) human minds, such as the Kepler conjecture or the Classification of Finite Simple Groups.
These new “machine” proofs entail fundamental changes in the practice of mathematics: a shift from craftsmanship, where each argument is a tribute to the ingenuity of the mathematician who perfected it, to a form of engineering where proofs are created more systematically. In addition to formal definitions and theorems, mathematical theories also contain clever, context-sensitive notations, usage conventions, and proof methods. To mechanize advanced mathematical results it is essential to capture these more informal elements, replacing informal and flexible usage conventions with rigorous interfaces, and apprenticeship with precise algorithms. This can be difficult, requiring an array of techniques closer to software engineering than formal logic, but it is essential to obtaining formal proofs of graduate-level mathematics, and can give new insight as well.
In this talk we will give several examples of such empirical formal mathematics that we have encountered in the process of mechanizing a large corpus of Combinatorics and Algebra required by the proofs of the Four Colour and Odd Order Theorems.
Biography of Speaker:

Georges Gonthier is a Principal Researcher at Microsoft Research Cambridge. Dr. Gonthier has worked on the Esterel reactive programming language, techniques for the optimal computation of functional programs, the design and formal verification of a concurrent garbage collector, the join calculus model of concurrency, concurrency analysis of the Ariane 5 flight software, using full abstraction in the analysis of security properties, and a fully computer-checked proof of the famous Four Colour Theorem. He now heads the Mathematical Components project at the MSR Inria Joint Center, following up on the latter work with the development of a comprehensive library of formalized abstract algebra.

2013/03/15 
Speaker: Mihalis Yannakakis
Affiliation: Columbia University
Area: Discrete Math
Title: Computation of Least Fixed Points
Abstract: Many problems from different areas can be formulated as problems of computing
a fixed point of a suitable function. For many of these problems, the function
in the fixed point formulation is monotone, and the objects we want to compute
are given by a specific fixed point, namely the least fixed point of the function.
Many models and problems from a broad variety of areas can be thus expressed as least fixed point
computation problems, including in particular the analysis of various probabilistic models,
problems in stochastic optimization, and games.
It turns out that for some classes of functions we can compute the least fixed point
in polynomial time and thus we can analyze efficiently several of these models,
while for others there are strong indications that they cannot be solved in polynomial time,
and for yet others the question remains open.
In this talk we will discuss progress in this area.
The talk is based on a series of works with Kousha Etessami and Alistair Stewart.
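The basic computational approach behind such least-fixed-point problems is Kleene iteration from the bottom element. This scalar sketch, with a toy termination-probability equation of my own invention, shows the idea:

```python
def kleene_lfp(f, bottom=0.0, tol=1e-12, max_iter=100_000):
    """Iterate x <- f(x) from the bottom element; for monotone f the chain
    bottom <= f(bottom) <= f(f(bottom)) <= ... approaches the least fixed point."""
    x = bottom
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx - x) <= tol:
            return fx
        x = fx
    return x

# A toy branching process: a unit terminates with probability 0.4, otherwise
# spawns two independent copies. Its termination probability x satisfies
# x = 0.6*x^2 + 0.4, which has fixed points 2/3 and 1; the LEAST one, 2/3,
# is the actual termination probability.
p = kleene_lfp(lambda x: 0.6 * x * x + 0.4)
print(round(p, 6))  # -> 0.666667
```

Plain Kleene iteration can converge very slowly on such monotone polynomial systems; faster schemes, such as Newton-style iteration, are a central tool in the line of work the abstract refers to.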
Biography of Speaker:

Mihalis Yannakakis is the Percy K. and Vida L. W. Hudson Professor of Computer Science at Columbia University.
Prior to joining Columbia, he was Director of the Computing Principles Research Department
at Bell Labs and at Avaya Labs, and Professor of Computer Science at Stanford University.
Dr. Yannakakis received his PhD from Princeton University.
His research interests include algorithms, complexity, optimization, game theory, databases, testing and verification.
He has served on the editorial boards of several journals, including as past editor-in-chief
of the SIAM Journal on Computing, and has chaired various conferences, including
the IEEE Symposium on Foundations of Computer Science, the ACM Symposium on Theory of Computing
and the ACM Symposium on Principles of Database Systems.
Dr. Yannakakis is a recipient of the Knuth Prize, a member of the National Academy of Engineering,
a Fellow of the ACM, and a Bell Labs Fellow.

2013/03/22 
Speaker: Sven Dickinson
Affiliation: University of Toronto
Area: Computer vision
Title: Perceptual Grouping using Superpixels
Abstract: Perceptual grouping played a prominent role in support of early object recognition systems, which typically took an input image and a database of shape models and identified which of the models was visible in the image. When the database was large, local features were not sufficiently distinctive to prune down the space of models to a manageable number that could be verified. However, when causally related shape features were grouped, using intermediate-level shape priors, e.g., co-termination, symmetry, and compactness, they formed effective shape indices and allowed databases to grow in size. In recent years, the recognition (categorization) community has focused on the object detection problem, in which the input image is searched for a specific target object. Since indexing is not required to select the target model, perceptual grouping is not required to construct a discriminative shape index; the existence of a much stronger object-level shape prior precludes the need for a weaker intermediate-level shape prior. As a result, perceptual grouping activity at our major conferences has diminished. However, there are clear signs that the recognition community is moving from appearance back to shape, and from detection back to unexpected object recognition. Shape-based perceptual grouping will play a critical role in facilitating this transition. But while causally related features must be grouped, they also need to be abstracted before they can be matched to categorical models. In this talk, I will describe our recent progress on the use of intermediate shape priors in segmenting, grouping, and abstracting shape features. Specifically, I will describe the use of symmetry and non-accidental attachment to detect and group symmetric parts, the use of closure to separate figure from background, and the use of a vocabulary of simple shape models to group and abstract image contours.
Biography of Speaker:

Sven Dickinson received the B.A.Sc. degree in Systems Design Engineering from the University of Waterloo, in 1983, and the M.S. and Ph.D. degrees in Computer Science from the University of Maryland, in 1988 and 1991, respectively. He is currently Professor and Chair of the Department of Computer Science at the University of Toronto, where he has also served as Acting Chair (2008-2009), Vice Chair (2003-2006), and Associate Professor (2000-2007). From 1995-2000, he was an Assistant Professor of Computer Science at Rutgers University, where he also held a joint appointment in the Rutgers Center for Cognitive Science (RuCCS) and membership in the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS). From 1994-1995, he was a Research Assistant Professor in the Rutgers Center for Cognitive Science, and from 1991-1994, a Research Associate at the Artificial Intelligence Laboratory, University of Toronto. He has held affiliations with the MIT Media Laboratory (Visiting Scientist, 1992-1994), the University of Toronto (Visiting Assistant Professor, 1994-1997), and the Computer Vision Laboratory of the Center for Automation Research at the University of Maryland (Assistant Research Scientist, 1993-1994; Visiting Assistant Professor, 1994-1997). Prior to his academic career, he worked in the computer vision industry, designing image processing systems for Grinnell Systems Inc., San Jose, CA (1983-1984), and optical character recognition systems for DEST, Inc., Milpitas, CA (1984-1985).
Dr. Dickinson's research interests revolve around the problem of object recognition, in general, and generic object recognition, in particular. He has explored a multitude of generic shape representations, and their common representation as hierarchical graphs has led to his interest in inexact graph indexing and matching. His interest in shape representation and matching has also led to his research in perceptual grouping, object tracking, visionbased navigation, contentbased image retrieval, variable structured object representation and matching, category learning, and the use of language to guide perceptual grouping, object recognition, and motion analysis. One of the focal points of his research is the problem of image abstraction, which he believes is critical in bridging the representational gap between images and categorical shape models. He has published over 150 papers on these topics.
In 1996, Dr. Dickinson received the NSF CAREER award for his work in generic object recognition, and in 2002, received the Government of Ontario Premier's Research Excellence Award (PREA), also for his work in generic object recognition. In 2012, he received the Lifetime Research Achievement Award from the Canadian Image Processing and Pattern Recognition Society (CIPPRS). He was co-chair of the 1997, 1999, 2004, and 2007 IEEE International Workshops on Generic Object Recognition (or Object Categorization), which culminated in the interdisciplinary volume, Object Categorization: Computer and Human Vision Perspectives, in 2009, and was co-chair of the 2008, 2009, 2010, and 2011 International Workshops on Shape Perception in Human and Computer Vision. He will be General Co-Chair of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). He currently serves or has served on the editorial boards of the journals: IEEE Transactions on Pattern Analysis and Machine Intelligence; International Journal of Computer Vision; Computer Vision and Image Understanding; Image and Vision Computing; Graphical Models; Pattern Recognition Letters; IET Computer Vision; and the Journal of Electronic Imaging. He is also co-editor of the Synthesis Lectures on Computer Vision from Morgan & Claypool Publishers.

2013/04/05 
Speaker: Thomas Shultz
Affiliation: Department of Psychology, McGill University
Title: Computational modeling of human learning and cognitive development
Abstract: One of the major unsolved problems in cognitive development concerns the nature of transition mechanisms. How does the child progress from one cognitive stage to the next? We find that cognitive transitions can be modeled, and thus better understood, with constructive neural networks that grow as well as learn. We extended and applied Fahlman's cascade-correlation algorithm, which recruits new hidden units when network error cannot be further reduced by quantitative adjustment of connection weights, to model a wide range of psychological phenomena in development. This modeling work sheds some light on a variety of related issues. How is knowledge represented at various developmental stages? What accounts for the particular orders of stages? Why does development take a particular shape over time? What accounts for the typical mistakes that children make before they get it right? What is the difference between learning and development? How can past knowledge be used to guide new learning? How might the brain accomplish Bayesian inference and learning, and systematic deviations from Bayes’ rule? How close are we to capturing the autonomous nature of biological learners?
Biography of Speaker:

Thomas Shultz (PhD Yale, Psychology) is Professor of Psychology and Associate Member of the School of Computer Science at McGill. He is currently Deputy Chair of the Department of Psychology. He teaches courses in Computational Psychology and Cognitive Science. He is a Fellow of the Canadian Psychological Association, and a co-founder of McGill’s undergraduate programs in Cognitive Science and Neuroscience. Research interests include connectionism, cognitive science, cognitive development, evolution and learning, and relations between knowledge and learning. He has over 200 research publications in these areas. He is a Member of the IEEE Neural Networks Society Autonomous Mental Development Technical Committee and Chair of the AMD Task Force on Developmental Psychology. Journal editorships include: Editorial Board of Computational Intelligence and Neuroscience, Associate Editor of the IEEE Transactions on Autonomous Mental Development, and Review Editor of Frontiers in Neurorobotics.

2013/04/12 
Speaker: Martin De Lasa
Affiliation: Autodesk
Title: Feature-Based Controllers for Physics-Based Character Animation
Abstract:
Creating controllers for physics-based animation of characters is a
longstanding open problem in animation and robotics. Such controllers
would have numerous applications while also yielding insight into
human motion. However, creating controllers remains very difficult:
current approaches are either constrained to track motion capture
data, are not robust, or provide limited control over style.
In this talk, I'll present an approach to control of physics-based
characters based on high-level features of human movement, such as
center-of-mass, angular momentum, and end-effector motion. Objective
terms are used to control each feature, and are combined via
optimization. Using this approach, locomotion can be expressed in
terms of a small number of features that control balance and
end-effectors. This approach is used to build novel controllers for
human balancing, standing jumps, walking, and jogging.
These controllers provide numerous benefits: human-like qualities such
as arm-swing, heel-off, and hip-shoulder counter-rotation emerge
automatically during walking; controllers are robust to changes in
body parameters during movement; control parameters and goals may be
modified at runtime, during control; control parameters apply to
intuitive properties such as center-of-mass height; and controllers may
be mapped onto entirely new bipeds with different topology and mass
distribution, without modifications to the controller itself.
Transitions between multiple types of gaits, including walking,
jumping, and jogging, emerge automatically. Controllers can traverse
challenging terrain while following high-level user commands at
interactive rates.
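The idea of attaching an objective term to each feature and combining the terms by optimization can be caricatured as a single weighted least-squares solve. This is only a flavor of the setup (the controllers described in the talk use prioritized optimization subject to the character's dynamics), and every name and number below is made up for illustration:

```python
import numpy as np

def combine_feature_objectives(tasks):
    """Minimize sum_i w_i * ||J_i u - b_i||^2 over the control vector u
    by stacking the weighted tasks into one least-squares problem."""
    rows = np.vstack([np.sqrt(w) * J for w, J, b in tasks])
    rhs = np.concatenate([np.sqrt(w) * b for w, J, b in tasks])
    u, *_ = np.linalg.lstsq(rows, rhs, rcond=None)
    return u

# Two toy feature objectives on a 2-D control: track a center-of-mass
# target via the first coordinate and an end-effector target via the second.
com_task = (1.0, np.array([[1.0, 0.0]]), np.array([1.0]))
ee_task = (1.0, np.array([[0.0, 1.0]]), np.array([2.0]))
u = combine_feature_objectives([com_task, ee_task])
print(np.round(u, 3))  # -> [1. 2.]
```

With conflicting tasks the weights (or, in the prioritized formulation, strict priority levels) decide which feature wins, which is what makes a small set of features sufficient to describe locomotion.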
Biography of Speaker:

Dr. Martin de Lasa is Technical Lead and Software Development Manager at Autodesk, where he leads the Media and Entertainment Division's Animation Team. Prior to joining Autodesk, he completed doctoral studies in Computer Science at the University of Toronto (2010). He also holds a M.E.Sc in Electrical Engineering from McGill University (2000), a B.E.Sc in Mechanical Engineering from the University of Western Ontario (UWO) (1998), and a B.Sc. in Computer Science (1998) from UWO. Prior to his doctoral work he was Technical Lead, Architect, and Manager at Boston Dynamics, where he played a leading role in human simulation and robotics projects, including Digital Biomechanics, BigDog, and LittleDog. His research interests span the areas of computer graphics and robotics, focusing on leveraging optimal control, optimization, and machine learning methods to build physically-based models of motion to ease animation/control of simulated characters and robotic systems. He is the recipient of the CAIAC doctoral dissertation award for the top Canadian dissertation in artificial intelligence in 2011.

2013/04/17 
Speaker: Lenore Cowen
Affiliation: Tufts University
Area: Bioinformatics
Title: SMURFLite: combining simplified Markov random fields with simulated evolution improves remote homology detection for beta-structural proteins into the twilight zone
Abstract: One of the most successful methods to date for recognizing protein sequences
that are evolutionarily related has been profile Hidden Markov Models (HMMs).
However, these models do not capture pairwise statistical preferences of
residues that are hydrogen bonded in beta sheets. These dependencies have been
partially captured in the HMM setting by simulated evolution in the training
phase and can be fully captured by Markov Random Fields (MRFs). However, the
MRFs can be computationally prohibitive when beta strands are interleaved in
complex topologies.
We introduce SMURFLite, a method that combines both simplified Markov Random
Fields and simulated evolution to substantially improve remote homology
detection for beta structures. Unlike previous MRF-based methods, SMURFLite is
computationally feasible on any beta-structural motif.
We test SMURFLite on all propeller and barrel folds in the mainly-beta class
of the SCOP hierarchy in stringent cross-validation experiments. We show a
mean 26% (median 16%) improvement in AUC for beta-structural motif recognition
as compared to HMMER (a well-known HMM method) and a mean 33% (median 19%)
improvement as compared to RAPTOR (a well-known threading method), and even a
mean 18% (median 10%) improvement in AUC over HHpred (a profile-profile HMM
method), despite HHpred’s use of extensive additional training data. We
demonstrate SMURFLite’s ability to scale to whole genomes by running a
SMURFLite library of 207 beta-structural SCOP superfamilies against the entire
genome of Thermotoga maritima, and make over a hundred new fold predictions.
This is joint work with Noah Daniels, Raghavendra Hosur, and Bonnie Berger.
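To see what the pairwise terms add over a profile HMM, compare a score built from independent per-column emission terms with one that also scores hydrogen-bonded column pairs. This tiny toy, with an invented two-letter alphabet and invented potentials (nothing from SMURFLite itself), shows that only the pairwise model separates sequences the chain model ties:

```python
import math

def chain_score(seq, emit):
    """Profile-HMM-style score: independent per-column emission terms."""
    return sum(math.log(emit[i][c]) for i, c in enumerate(seq))

def pairwise_score(seq, emit, pairs, pair_pot):
    """MRF-style score: add a potential for each beta-paired column pair."""
    return chain_score(seq, emit) + sum(
        math.log(pair_pot[seq[i], seq[j]]) for i, j in pairs)

# Columns 0 and 3 are hydrogen bonded; pairing favors hydrophobic-hydrophobic
# ('h' = hydrophobic, 'p' = polar). Per-column emissions are uniform.
emit = [{'h': 0.5, 'p': 0.5}] * 4
pair_pot = {('h', 'h'): 0.4, ('h', 'p'): 0.2, ('p', 'h'): 0.2, ('p', 'p'): 0.2}
a, b = 'hpph', 'hppp'
print(chain_score(a, emit) == chain_score(b, emit))  # -> True: the HMM ties them
print(pairwise_score(a, emit, [(0, 3)], pair_pot) >
      pairwise_score(b, emit, [(0, 3)], pair_pot))   # -> True: the MRF does not
```

Inference over many such interleaved pairs is what makes general MRFs expensive, which is the cost the simplified models in SMURFLite are designed to tame.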
Biography of Speaker:

Dr. Lenore J. Cowen is a Professor in the Computer Science Department at Tufts University. She also has a courtesy appointment in the Tufts Mathematics Department. She received a BA in Mathematics from Yale and a Ph.D. in Mathematics from MIT. After finishing her Ph.D. in 1993, she was an NSF Postdoctoral Fellow and then joined the faculty of the Mathematical Sciences department (now renamed the Applied Mathematics and Statistics department) at Johns Hopkins University where she was promoted to the rank of Associate Professor in 2000. Lured by the Boston area, and the prospect of making an impact in a growing young department, she joined Tufts in September, 2001. Dr. Cowen has been named an ONR Young Investigator and a fellow of the Radcliffe Institute for Advanced Study. Her research interests span three areas: Discrete Mathematics (since high school), Algorithms (since 1991 in graduate school) and Computational Molecular Biology (since 2000). She is on the editorial board of SIAM Review.

