| Date (Winter 2013) | Category | Seminar Info |
| 2013/04/15 |
Graduate Seminar Series |
Place: MC103
Time: 12 - 12:30
Speaker:
Vladimir
Reinharz
Affiliation: McGill SOCS
Area:
Bioinformatics, RNA structure prediction and sequence design
Title: A weighted sampling algorithm for the design of RNA sequences with targeted secondary structure and nucleotides distribution
Abstract: The design of RNA sequences folding into predefined secondary structures is a milestone for many synthetic biology and gene therapy studies. Most current software uses similar local search strategies (i.e., a random seed is progressively adapted to acquire the desired folding properties) and, more importantly, does not allow the user to explicitly control the nucleotide distribution, such as the GC-content, of the designed sequences. However, the latter is an important criterion for large-scale applications, as it could presumably be used to design sequences with better transcription rates and/or structural plasticity.
We introduce a novel algorithm to design RNA sequences that fold into target secondary structures with a predefined nucleotide distribution. It uses a global sampling approach and weighted sampling techniques. We show that our approach is fast (i.e., running time comparable to or better than that of local search methods), seed-less (we remove the bias of the seed in local search heuristics), and successfully generates high-quality (i.e., thermodynamically stable) sequences for any GC-content. To complete this study, we develop a hybrid method combining our global sampling approach with local search strategies. Remarkably, our glocal methodology outperforms both the local and global approaches for sampling sequences with a specific GC-content and target structure.
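To make the weighting idea concrete, here is a minimal Python sketch (not the authors' dynamic-programming sampler): given a target structure in dot-bracket notation, paired positions are filled with complementary base pairs and unpaired positions with single nucleotides, both drawn from distributions in which a single weight biases G/C choices and therefore the expected GC-content. The function name and the gc_weight parameter are illustrative.

```python
import random

def sample_sequence(structure, gc_weight=1.0, rng=random):
    """Sample one RNA sequence compatible with a dot-bracket structure.

    Unpaired and paired positions are drawn from weighted distributions in
    which `gc_weight` biases the choice of G/C over A/U; raising gc_weight
    raises the expected GC-content of the sample.  Illustration of weighted
    sampling only, not the authors' algorithm.
    """
    # Match brackets to find base-paired positions.
    stack, partner = [], {}
    for i, c in enumerate(structure):
        if c == '(':
            stack.append(i)
        elif c == ')':
            j = stack.pop()
            partner[i], partner[j] = j, i

    unpaired = ['A', 'U', 'G', 'C']
    unpaired_w = [1.0, 1.0, gc_weight, gc_weight]
    pairs = [('A', 'U'), ('U', 'A'), ('G', 'C'), ('C', 'G'), ('G', 'U'), ('U', 'G')]
    pair_w = [1.0, 1.0, gc_weight, gc_weight, 1.0, 1.0]

    seq = [None] * len(structure)
    for i, c in enumerate(structure):
        if c == '.':
            seq[i] = rng.choices(unpaired, weights=unpaired_w)[0]
        elif c == '(':                      # fill both ends of the pair at once
            a, b = rng.choices(pairs, weights=pair_w)[0]
            seq[i], seq[partner[i]] = a, b
    return ''.join(seq)

if __name__ == '__main__':
    s = sample_sequence('((....))..', gc_weight=4.0)
    print(s, 'GC fraction:', sum(c in 'GC' for c in s) / len(s))
```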
|
| 2013/04/11 |
Faculty Candidate Talk |
Place: McConnell 437
Time: 10:00 - 11:00
Speaker:
Yang
Cai
Affiliation: PhD student, Massachusetts Institute of Technology (MIT)
Area:
Mathematical Economics
Title: Mechanism Design: a new algorithmic framework
Abstract: In his seminal paper, Myerson [1981] provides a revenue-optimal auction for a seller who is looking to sell a single item to multiple bidders. Extending this auction to simultaneously selling multiple heterogeneous items has been one of the central problems in Mathematical Economics. We provide such an extension that is also computationally efficient. Our solution proposes a novel framework for mechanism design by reducing mechanism design problems (where one optimizes an objective function on "rational inputs") to algorithm design problems (where one optimizes an objective function on "honest inputs"). Our reduction is generic and provides a framework for many other mechanism design problems.
At the end of my talk, I will also briefly overview some of my other work, including algorithmic pricing, a generalization of the min-max theorem for multiplayer games, and online matching.
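For context, the single-item auction from Myerson [1981] that the abstract builds on is simple to state. Below is a hedged Python sketch for the textbook case of i.i.d. Uniform[0,1] bidders, where the virtual value phi(v) = v - (1 - F(v))/f(v) = 2v - 1 and the optimal auction reduces to a second-price auction with reserve 0.5; the function name and example bids are illustrative, and the talk's computationally efficient multi-item extension is not reproduced here.

```python
def myerson_uniform(bids):
    """Myerson's revenue-optimal auction for i.i.d. Uniform[0,1] values.

    Virtual value phi(v) = v - (1 - F(v)) / f(v) = 2v - 1, so the optimal
    auction is a second-price auction with reserve phi^{-1}(0) = 0.5.
    Returns (winner_index, payment), or (None, 0.0) if no bid clears the
    reserve.  Illustrative single-item case only.
    """
    reserve = 0.5
    eligible = [(b, i) for i, b in enumerate(bids) if b >= reserve]
    if not eligible:
        return None, 0.0
    eligible.sort(reverse=True)
    winner = eligible[0][1]
    # Winner pays the smallest bid that would still have won:
    # the larger of the reserve and the second-highest eligible bid.
    runner_up = eligible[1][0] if len(eligible) > 1 else reserve
    return winner, max(reserve, runner_up)

print(myerson_uniform([0.3, 0.8, 0.6]))   # -> (1, 0.6)
```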
Biography of Speaker:
-
Yang Cai is currently finishing his Ph.D. at MIT under the supervision of Constantinos Daskalakis. He received his Bachelor's degree in Computer Science from Peking University. His research interests include Algorithmic Game Theory, Online Algorithms, and Logic.
|
| 2013/04/09 |
Faculty Candidate Talk |
Place: McConnell 437
Time: 10:00 - 11:00
Speaker:
Christopher
Pal
Affiliation: École Polytechnique de Montréal
Area:
Data Mining
Title: Big Data, Mining Multimedia and Semi-supervised Learning
Abstract: In this talk I’ll begin by presenting some recent work with my students on mining the text and images of Wikipedia biographies. I’ll present a technique to disambiguate faces and identities using richly structured probabilistic models. I’ll discuss the role of generative, discriminative and semi-supervised learning in both text and general data mining and some considerations regarding the importance and interaction of model complexity and the amount of data available. I’ll then present some of our work on large-scale recognition in the wild in which we scale up with more data through mining image search results and YouTube. I’ll discuss a semi-supervised learning technique based on a probabilistic sparse kernel method and the use of null categories to account for noisy labels. The technique allows us to boost performance on the standard labeled faces in the wild evaluation using faces mined from YouTube as well as boost accuracy on the task of tagging faces of celebrities within YouTube videos. I’ll discuss the issues involved with scaling to truly big data and our experiences working in collaboration with Google using a corpus of over 1.2 million YouTube videos and their text annotations.
Biography of Speaker:
-
Christopher Pal is an assistant professor in the department of « génie informatique et génie logiciel » at the École Polytechnique Montréal. Prior to arriving in Montréal, he was an assistant professor in the department of Computer Science at the University of Rochester. He has been a research scientist with the University of Massachusetts Amherst at the Center for Intelligent Information Retrieval and the Information Extraction and Synthesis Lab. He has also been affiliated with Microsoft Research's Interactive Visual Media Group and their Machine Learning and Adaptive Systems group. He holds four patents, his work has over 1100 citations on Google Scholar, and he has been the recipient of a Google research award. He earned his M.Math and Ph.D. from the University of Waterloo.
|
| 2013/04/08 |
Graduate Seminar Series |
Place: MC103
Time: 12 - 12:30
Speaker:
Annie
Ying
Affiliation: McGill
Title: Drawing graphs in R for “dummies”
Abstract: For those of us who have an empirical component in our research, there are ample opportunities to present experimental results in one graphical form or another. In this tutorial, I will present some examples of graphs I have produced for a paper. Bring your laptop and you can produce some graphs too.
I will be in MC103 at 11h30 to assist anyone who wants to set up R on their laptop. The SOCS machines also have R installed if you want to VNC in.
If you already know R, please contact me if you can help. It would be great to have a few people at 11h30 to help others with the installation and during the tutorial.
|
| 2013/03/28 |
Algorithms |
Place: FDA 232
Time: 10:00 - 11:00
Speaker:
Yuval
Filmus
Affiliation: PhD Candidate, University of Toronto, Computer Science
Title: The Unexpected Power of Local Search
Abstract: The difficulty of solving combinatorial optimization problems efficiently has led to the design of approximation algorithms. Unfortunately, approximation algorithms are often quite complicated and specialized, which limits their applicability.
In this talk, we discuss a fundamental combinatorial optimization problem, monotone submodular maximization over a matroid constraint (MSMM). This problem generalizes classical problems such as Maximum Coverage and Maximum Satisfiability.
The simple greedy algorithm achieves the best possible approximation ratio for Maximum Coverage. However, for the more general MSMM problem, achieving the best approximation ratio required the sophisticated continuous greedy algorithm.
We present an elementary non-oblivious local search algorithm, which is also theoretically optimal. Our algorithm is much simpler than the existing continuous greedy algorithm, and moreover it may generalize to new situations in which the continuous greedy approach fails.
Biography of Speaker:
-
Yuval Filmus is a Ph.D. candidate in Computer Science at the University of Toronto, advised by Prof. Toni Pitassi. His research interests include approximation algorithms, computational complexity, proof complexity, analysis of Boolean functions, and combinatorics. He received his master's degree in Computer Science from the Weizmann Institute.
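As a concrete reference point for the abstract's comparison, here is a minimal Python sketch of the classic greedy algorithm, shown for Maximum Coverage under a simple cardinality constraint (the special case where greedy is already optimal); it is illustrative background only, not the non-oblivious local search algorithm of the talk.

```python
def greedy_max_coverage(sets, k):
    """Classic greedy for Maximum Coverage: repeatedly add the set with the
    largest marginal gain.  Achieves the optimal (1 - 1/e) approximation for
    this special case of monotone submodular maximization under a
    cardinality constraint; the talk's algorithm targets the more general
    matroid constraint.
    """
    covered, chosen = set(), []
    for _ in range(k):
        gains = [(len(s - covered), i) for i, s in enumerate(sets) if i not in chosen]
        best_gain, best_i = max(gains)
        if best_gain == 0:          # nothing new can be covered
            break
        chosen.append(best_i)
        covered |= sets[best_i]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_max_coverage(sets, 2))   # -> ([2, 0], {1, 2, 3, 4, 5, 6})
```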
|
| 2013/03/27 |
Graduate Seminar Series |
Place: MC103
Time: 12 - 12:30
Speaker:
Hanqiang
Cheng
Affiliation: McGill SOCS
Title: Towards Safe Social Surfing: Effective Adult Account Detection in Twitter
Abstract: Over the past few years, Twitter has emerged as an increasingly influential platform for real-time information distribution and discovery. However, it has been taking a bit of a dark turn into adult content, which could significantly hinder the further popularity of Twitter and/or cause serious legal issues. Existing techniques for adult content detection are ill-suited for detecting adult accounts in Twitter.
To tackle this problem, we propose iterative social based classification (ISC), an effective solution for adult account detection in online social networks. ISC consists of three key components: (1) collective interest, a novel social-link-based feature for effective discrimination of adult accounts; (2) a tag-based label propagation algorithm which explores the tagged content embedded in tweets to further boost the detection accuracy; and (3) a social-based linear classifier that integrates social consistency based on the collective interest feature and maximal tag scores computed from tag-based label propagation. Evaluations using large-scale real-world Twitter data demonstrate that our ISC solution significantly outperforms existing methods in detecting adult accounts. It is able to identify adult accounts accurately among 1.07 million accounts using only 100 labeled adult accounts.
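A minimal Python sketch of the label-propagation ingredient, under the assumption that scores simply diffuse from a few labeled seed accounts over the follow graph; the graph, seed labels, and function name are hypothetical, and the full ISC pipeline (collective interest features and the social-based linear classifier) is not reproduced.

```python
def propagate_labels(graph, seeds, iters=20):
    """One plausible reading of iterative social-based classification:
    propagate an 'adult' score from a few labeled seed accounts to their
    neighbors over the follow graph.  `graph` maps account -> set of
    neighbors; `seeds` maps account -> 1.0 (adult) or 0.0 (benign).
    Illustrative sketch only.
    """
    score = {u: seeds.get(u, 0.5) for u in graph}            # 0.5 = unknown
    for _ in range(iters):
        new = {}
        for u, nbrs in graph.items():
            if u in seeds:                                    # clamp labeled seeds
                new[u] = seeds[u]
            elif nbrs:
                new[u] = sum(score[v] for v in nbrs) / len(nbrs)
            else:
                new[u] = score[u]
        score = new
    return score

graph = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
print(propagate_labels(graph, {'a': 1.0, 'd': 0.0}))
```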
|
| 2013/03/27 |
Theory |
Place: MC 437
Time: 10:00 - 11:00
Speaker:
Vida
Dujmovic
Affiliation: Assistant Professor, Carleton University, School of Mathematics and Statistics & Department of Systems and Computer Engineering
Area:
Theory
Title: Graph Layouts
Abstract: A 3D grid drawing of a graph is a placement of the vertices at distinct gridpoints in 3D such that the straight line-segments representing the edges are pairwise non-crossing. 3D grid drawings of small volume are closely related to a graph parameter called queue number. I will present the state of the art on the topic focusing on the most recent result that planar graphs have O(n log n) volume 3D grid drawings and O(log n) queue number. This improves the result by Di Battista, Frati and Pach [FOCS’10, pp. 365–374].
Biography of Speaker:
-
Vida Dujmovic is an Assistant Professor at the School of Mathematics and Statistics & Department of Systems and Computer Engineering of Carleton University. She obtained her PhD and MSc from McGill University and her BEng from the University of Zagreb in Croatia. She is a recipient of the NSERC Doctoral Prize and, more recently, of the Faculty Research Achievement Award of Carleton University.
|
| 2013/03/26 |
General |
Place: FDA 232
Time: 10:00 - 11:00
Speaker:
Edith
Law
Affiliation: Harvard
Area:
Machine Learning
Title: Towards Crowdsourcing Science
Abstract: Human computation is the study of intelligent systems where humans are an integral part of the computational process. Well-known examples of human computation systems include crowdsourcing marketplaces (e.g., Amazon Mechanical Turk) which coordinate workers to perform tasks for monetary rewards, games with a purpose (e.g., the ESP Game) which generate useful data through gameplay, and identity verification systems (e.g., reCAPTCHA) that accomplish remarkable feats (e.g., digitize millions of books) through users performing computation for access to online content.
One limitation of these human computation systems is that they are restricted to tasks that can easily be decomposed and solved by any person with basic perceptual capabilities and common-sense knowledge. There are many other tasks, however, that are complex and require expertise to solve, but for which human computation would be greatly beneficial. For example, many tasks that arise in the course of scientific inquiries involve data that is unfamiliar to people without formal training (e.g., images of micro-organisms, EEG signals). How can we enable a crowd of people to perform tasks that are difficult, and possibly hard to decompose?
In this talk, I will illustrate the key challenges of human computation in the scientific domain, draw connections to my previous work on games with a purpose and crowdware systems, as well as highlight new directions.
Biography of Speaker:
-
Edith Law is a CRCS postdoctoral fellow at the School of Engineering and Applied Sciences at Harvard University. She graduated from Carnegie Mellon University in 2012 with a Ph.D. in Machine Learning, where she studied human computation systems that harness the joint efforts of machines and humans. She is a Microsoft Graduate Research Fellow, co-authored the book "Human Computation" in the Morgan & Claypool Synthesis Lectures on Artificial Intelligence and Machine Learning, co-organized the Human Computation Workshop (HCOMP) Series at KDD and AAAI from 2009 to 2012, and helped create the first AAAI Conference on Human Computation and Crowdsourcing. Her work on games with a purpose and large-scale collaborative planning has received best paper honorable mentions at CHI.
|
| 2013/03/25 |
CQIL - Cryptography and Quantum Information |
Place: McConnell 103
Time: 16:00 - 17:00
Speaker:
Gus
Gutoski
Affiliation: University of Waterloo
Title: Parallel approximation of min-max problems
Abstract: This paper presents an efficient parallel approximation
scheme for a new class of min-max problems. The algorithm is derived
from the matrix multiplicative weights update method and can be used
to find near-optimal strategies for competitive two-party classical or quantum interactions in which a referee exchanges any number of
messages with one party followed by any number of additional messages
with the other. It considerably extends the class of interactions
which admit parallel solutions, demonstrating for the first time the
existence of a parallel algorithm for an interaction in which one
party reacts adaptively to the other.
As a consequence, we prove that several competing-provers complexity
classes, such as QRG(2), SQG, and two new classes called DIP and DQIP,
collapse to PSPACE. A special case of our result is a parallel
approximation scheme for a specific class of semidefinite programs
whose feasible region consists of lists of semidefinite matrices that
satisfy a transcript-like consistency condition. Applied to this
special case, our algorithm yields a direct polynomial-space
simulation of multi-message quantum interactive proofs resulting in a
first-principles proof of QIP=PSPACE.
Joint work with Xiaodi Wu. Based on http://arxiv.org/abs/1011.2787.
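For readers unfamiliar with the underlying primitive, here is a bare-bones numpy/scipy sketch of the matrix multiplicative weights update itself, maintaining a trace-one PSD matrix that is re-weighted by the exponential of accumulated losses; the toy adversary and parameters are illustrative, and the paper's actual min-max solver for quantum interactions is considerably more involved.

```python
import numpy as np
from scipy.linalg import expm

def matrix_mwu(loss_fn, dim, rounds=200, eta=0.1):
    """Bare-bones matrix multiplicative weights update.

    Maintains a density matrix rho (PSD, trace 1); each round an adversary
    returns a symmetric loss matrix with entries in [0, 1], and rho is
    updated as expm(-eta * cumulative_loss), normalized.  Illustrative
    sketch, not the paper's full parallel approximation scheme.
    """
    cumulative = np.zeros((dim, dim))
    rho = np.eye(dim) / dim
    for _ in range(rounds):
        loss = loss_fn(rho)                 # symmetric dim x dim matrix
        cumulative += loss
        w = expm(-eta * cumulative)
        rho = w / np.trace(w)
    return rho

# Toy adversary: always charges full loss on the first coordinate, so the
# learner's weight drifts toward the orthogonal subspace.
loss = lambda rho: np.diag([1.0, 0.0, 0.0])
print(np.round(np.diag(matrix_mwu(loss, 3)), 3))
```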
|
| 2013/03/20 |
Graduate Seminar Series |
Place: MC103
Time: 12 - 12:30
Speaker:
Amin
Ranjbar
Affiliation: McGill
Title: Confidentiality and Integrity Management in Online Systems
Abstract: The dominant role of social networking in the web is turning human
relations into conduits of information flow. This means that the way information
spreads on the web is determined to a large extent by human decisions. Consequently,
information security, confidentiality and integrity of shared data, lies on the
quality of the collective decisions made by the users. Recently, many access control
schemes have been proposed to control unauthorized propagation and modification of
information in online systems; however, there is still a need for mechanisms to
evaluate the risk of information leakage and unauthorized modifications within
online systems. First, the thesis focuses on the confidentiality of information in
online social networks. A novel community-centric confidentiality control mechanism
for information flow management on the social web is presented. A Monte Carlo based
algorithm is developed to determine the potential spread of a shared data object and
to inform the user of the risk of information leakage associated with different
sharing decisions she can make in a social network. The scheme also provides a
facility to reduce information flowing to a specific user (i.e., blacklisting a
specific user). Second, the thesis focuses on the integrity of artifacts in
crowdsourcing systems. A new approach for managing the integrity of contents created
in crowdsourcing repositories named Social Integrity Management (SIM) is presented.
SIM integrates two conflicting approaches to manage integrity in crowdsourcing
systems: owner-centric and owner-free schemes. The ownership bottleneck is relaxed
by including co-ownerships and having multiple versions. Finally, the thesis
presents a thorough analysis of the Stack Exchange sites as an example of widely
used crowdsourcing question answering systems. The dump datasets are used to analyze
various user behaviors in crowdsourcing question answering systems by considering
the effect of tagging, user reputation and user feedback. Observed characteristics
from the studies are used in the modeling and evaluation of social integrity
management.
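A minimal Python sketch of the kind of Monte Carlo risk estimate the first part describes, under an assumed independent-cascade style model in which each friend re-shares an item with a fixed probability; the graph, probability, and function name are illustrative, not the thesis' exact algorithm.

```python
import random

def leakage_risk(graph, source, target, p=0.2, trials=2000, rng=random):
    """Monte Carlo estimate of the chance that an item shared by `source`
    eventually reaches `target`, assuming each friend independently
    re-shares with probability p (an independent-cascade style model).
    Illustrative sketch of the risk estimate described in the abstract.
    """
    hits = 0
    for _ in range(trials):
        seen, frontier = {source}, [source]
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, ()):
                    if v not in seen and rng.random() < p:
                        seen.add(v)
                        nxt.append(v)
            frontier = nxt
        hits += target in seen
    return hits / trials

friends = {'alice': ['bob', 'carol'], 'bob': ['dave'], 'carol': ['dave'], 'dave': []}
print(leakage_risk(friends, 'alice', 'dave'))
```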
|
| 2013/03/15 |
General |
Place: MC 103
Time: 11:00 - 12:30
Speaker:
Jeff
Huang
Affiliation: PhD Candidate in Information Science at the University of Washington
Area:
Data Mining
Title: Interaction Data in Search and Beyond: By People, For People
Abstract: People generate an ever-growing amount of behavioral data when they interact with computer systems. Rather than treating these data purely as numbers or tokens, I will present projects that decode user behavior from the data and construct practical models. One project collects mouse cursor activity on a live search engine, and incorporates these data in two graphical models: one to understand visual attention to predict where people are looking without an eye-tracking device, and one that can be used to improve the relevance of search results. I will also provide examples of how interactions can drive research in games, mobile devices, reviews, and the web. Through this, we can better understand fundamental human behavior and help design systems that allow people to find information faster and easier.
Biography of Speaker:
-
Jeff Huang is a PhD Candidate in Information Science at the University of Washington. His research in data-driven information retrieval focuses on modeling users from interaction data. He has been awarded Best Paper at SIGIR 2010, two Honorable Mentions at CHI 2011, and the Facebook Fellowship. During his graduate studies, Jeff has conducted research at the University of Washington and five research groups at Microsoft Research and Google, and has received external funding from Google and Microryza. His work appears in venues such as SIGIR, CHI, AAAI, UIST, CIKM, WSDM, as well as in the Wall Street Journal, GeekWire, and the MIT Technology Review. Jeff earned his Master's and Bachelor's degrees in Computer Science at the University of Illinois.
|
| 2013/03/01 |
Algorithms |
Place: MC437
Time: 10:00 - 11:00
Speaker:
Kevyn
Collins-Thompson
Affiliation: Microsoft Research (Redmond) - Context, Learning, and User Experience for Search group
Title: Towards search engines that help people learn: models, algorithms, and large-scale analysis
Abstract: How can search engines help people learn? The computational challenges involved in this goal include: accurately modeling people's expertise and information-seeking goals from limited evidence; extracting meaning from unstructured text and data; making reliable decisions about information under uncertainty; and modeling how people learn from new information. In addition, real-world solutions to these challenges on the Web must cope with sparse, noisy data and scale effectively to billions of users and pages.
In this talk, I will present three advances from my recent research that illustrate progress toward this goal, using as an example the specific personalized search task of finding relevant Web pages at the right level of reading difficulty. First, I will give an overview of novel statistical learning models that can predict the reading difficulty of text with much greater reliability and flexibility than traditional methods. Second, I show how applying reading difficulty prediction to billions of Web pages opens up new and sometimes surprising possibilities for enriching our interactions with the Web, from predicting user and site expertise, to estimating searcher motivation. Third, to enable reliable personalization of search results, I describe a new family of risk-sensitive ranking algorithms that can jointly optimize both relevance and robustness. These developments point toward a future where human information seeking and learning benefit from continued advances in computational learning.
Biography of Speaker:
-
Kevyn Collins-Thompson is a Researcher in the Context, Learning and User Experience for Search (CLUES) group at Microsoft Research. His core research is in information retrieval and text mining, including a strong interplay with statistical machine learning, mathematical optimization, economics, and natural language processing. His research focuses on models, algorithms, and evaluation methods for improved search and text understanding, especially for human learning. Kevyn received his Ph.D. in Computer Science from the Language Technologies Institute at Carnegie Mellon University and B.Math. in Computer Science from the University of Waterloo. He is a co-recipient of an ACM SIGIR’12 outstanding paper award for his work with Lidan Wang and Paul Bennett on "Robust ranking via risk-sensitive optimization".
|
| 2013/02/28 |
Vision, Graphics, and Robotics |
Place: MC 437
Time: 11:30 - 12:30
Speaker:
Theo
Gevers
Affiliation: University of Amsterdam
Area:
Computer Vision
Title: Color in Computer Vision with Applications to Object Recognition and Facial Expression Analysis
Abstract: Visual information (images and videos) is one of the most promising sources of information, as it plays a key role in current communication frameworks. The immense growth in the use and exploitation of digital visual information demands advanced knowledge representations, learning systems, and visual search techniques. Although colour has proven to be a central topic in various disciplines (ranging from art and the humanities to biology and physics), the study of colour in computer vision, and the exploration of pictorial information based on it, has surprisingly only just begun.
In this talk, I will discuss various recent developments and state-of-the-art methods in color-based object recognition. The focus is on discussing the invariance properties and the distinctiveness of interest point detectors and color descriptors for object recognition and localization. Then, these methods are used to provide human activity recognition. Finally, facial expression analysis for emotion recognition and taste appreciation is discussed.
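As a small illustration of the photometric invariance the talk discusses, the numpy sketch below computes a normalized-rgb (chromaticity) histogram, one of the simplest color descriptors that is invariant to uniform changes in illumination intensity; the bin count and function name are illustrative.

```python
import numpy as np

def rg_chromaticity_histogram(image, bins=8):
    """Normalized-rgb (chromaticity) histogram: r = R/(R+G+B), g = G/(R+G+B).

    Dividing out the intensity makes the descriptor invariant to uniform
    changes in illumination intensity and shading.  `image` is an (H, W, 3)
    array; returns a flattened, L1-normalized 2-D histogram usable as a
    simple object-recognition feature.  Illustrative sketch only.
    """
    rgb = image.reshape(-1, 3).astype(float)
    total = rgb.sum(axis=1, keepdims=True)
    total[total == 0] = 1.0                       # avoid division by zero
    r, g = rgb[:, 0] / total[:, 0], rgb[:, 1] / total[:, 0]
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return hist.ravel() / hist.sum()

img = np.random.rand(32, 32, 3)
print(rg_chromaticity_histogram(img).shape)       # (64,)
```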
Biography of Speaker:
-
Theo Gevers is an Associate Professor of Computer Science at the University of Amsterdam, The Netherlands, and a Full Professor at the Computer Vision Center (UAB), Barcelona, Spain. At the University of Amsterdam he is a teaching director of the MSc in Artificial Intelligence. He currently holds a VICI award (for excellent researchers) from the Dutch Organisation for Scientific Research. His main research interests are in the fundamentals of content-based image retrieval, colour image processing and computer vision, specifically in the theoretical foundation of geometric and photometric invariants. He is co-chair of the Internet Imaging Conference (SPIE 2005, 2006), and co-organizer of the First International Workshop on Image Databases and Multi Media Search (1996), the International Conference on Visual Information Systems (1999, 2005), the Conference on Multimedia & Expo (ICME, 2005), and the European Conference on Colour in Graphics, Imaging, and Vision (CGIV, 2012). He is guest editor of the special issue on content-based image retrieval for the International Journal of Computer Vision (IJCV 2004) and the special issue on Colour for Image Indexing and Retrieval for the journal Computer Vision and Image Understanding (CVIU 2004). He has published over 100 papers on colour image processing, image retrieval and computer vision. He is a program committee member of numerous conferences and an invited speaker at major conferences. He has lectured post-doctoral courses at various major conferences (CVPR, ICPR, SPIE, CGIV). He is a member of the IEEE.
|
| 2013/02/27 |
Graduate Seminar Series |
Place: MC103
Time: 12 - 12:30
Speaker:
Emmanuel
Piuze
Affiliation: McGill SOCS
Area:
Shape Analysis
Title: Modeling Cardiac Fibers
Abstract: The heart is composed of elongated muscle cells that are grouped together to form cardiac muscle fibers, known as myofibers. In healthy hearts, they are arranged in an efficient manner to allow the pumping of blood to the whole body. Characterizing the geometry and variability of myofibers is central to our understanding of normal heart function. Originating with pioneering work by Streeter more than 40 years ago, research has mostly focused on large-scale descriptions of their arrangement. Recent advances in medical imaging, with the development of diffusion magnetic resonance imaging (dMRI), have made it possible to study myofibers non-invasively, in three dimensions, and at high resolution. This contrasts with traditional histological methods, which require tedious heart dissections and damage the cardiac tissue. dMRI has also revealed important local features of myofibers across many species, which are difficult to explain using global models of cardiac fiber architecture. Consequently, research has been increasingly shifting away from the shortcomings and inconsistencies found in global models, moving instead to local models of myofiber geometry. During this talk I will show how we can gain valuable insight into how myofibers are arranged by analyzing how they bundle and curve together, using a mathematical framework derived from differential geometry.
|
| 2013/02/26 |
Theory |
Place: MC437
Time: 10:00 - 11:00
Speaker:
Anastasios
Sidiropoulos
Affiliation: CS department at the University of Illinois at Urbana-Champaign.
Title:
Abstract: Large and complicated data sets have become ubiquitous in science and engineering, and have given rise to new challenges in Computer Science. In this new computational horizon, geometry has become an indispensable source of algorithmic tools and ideas. A main catalyst behind this development has been the theory of metric embeddings. A metric embedding is a mapping from a metric space X into some metric space Y that preserves all pairwise distances up to a small factor, called the distortion.
In this talk I will describe a general framework for obtaining low-distortion embeddings of complicated spaces into simpler ones. An immediate corollary is a general reduction from hard instances to corresponding simpler ones, for a large class of optimization problems on certain families of graphs. Moreover, these techniques yield improvements on the state of the art for problems such as Sparsest Cut, Treewidth, Crossing Number, Vertex Sparsification, 0-Extension, and the estimation of Laplacian eigenvalues for several interesting graph families.
Perhaps surprisingly, the above ideas can also be used to construct hard instances for certain problems. More precisely, guided by the above simplification machinery, we construct spaces that yield a nearly optimal integrality gap for a specific SDP relaxation of Sparsest Cut.
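The distortion defined in the first paragraph is easy to compute for finite spaces; the Python sketch below transcribes the definition directly and checks it on the textbook example of embedding a 4-cycle's shortest-path metric into the real line (distortion 3).

```python
from itertools import combinations

def distortion(points, d_X, d_Y, f):
    """Distortion of an embedding f between finite metric spaces:
    the worst-case expansion times the worst-case contraction over all
    pairs, i.e. max d_Y(f(x),f(y))/d_X(x,y) * max d_X(x,y)/d_Y(f(x),f(y)).
    A direct transcription of the definition in the abstract.
    """
    expansion = contraction = 0.0
    for x, y in combinations(points, 2):
        ratio = d_Y(f(x), f(y)) / d_X(x, y)
        expansion = max(expansion, ratio)
        contraction = max(contraction, 1.0 / ratio)
    return expansion * contraction

# Example: embedding a 4-cycle's shortest-path metric into the real line.
cycle = {(a, b): min(abs(a - b), 4 - abs(a - b)) for a in range(4) for b in range(4)}
d_X = lambda a, b: cycle[(a, b)]
d_Y = lambda a, b: abs(a - b)
print(distortion(range(4), d_X, d_Y, f=lambda v: v))   # -> 3.0
```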
Biography of Speaker:
-
Dr. Sidiropoulos received his M.S. and Ph.D. in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, under the supervision of Piotr Indyk. He has been a post-doctoral fellow at the University of Toronto, and the Toyota Technological Institute at Chicago. He is currently a post-doctoral fellow in the Computer Science department at the University of Illinois at Urbana-Champaign.
His main research interests are in algorithms, with an emphasis on metric embeddings, high-dimensional and low-dimensional computational geometry, graph theory, optimization, and applications of geometry and topology in computer science.
|
| 2013/02/20 |
Graduate Seminar Series |
Place: TR3070
Time: 12 - 12:30
Speaker:
Faiyaz
Zamal
Affiliation: McGill SOCS
Area:
Bioinformatics
Title: Activation-repression connectivity pattern of transcriptional regulatory networks and their impact on robustness
Abstract: Transcriptional regulatory networks, the biochemical systems controlling the transcription of genes into RNA in response to activating or repressing inputs from transcription factor molecules, are able to robustly retain their functionality against a wide array of environmental perturbations and evolutionary mutations. However, what injects such robustness into these systems remains largely unexplained. Previous studies have principally focused on identifying topological features of these networks that are unlikely to occur in random networks and then determining the impact of these features on robustness and other dynamical aspects of the system. While this approach has yielded significant insights into the design principles of robust biological systems, a comprehensive analysis of how these topological features act in conjunction with the parameters of the system has not yet been conducted. In this ongoing project, we first analyze how the activating and repressing connections are distributed within the E. coli transcriptional network to identify features that deviate from the behaviour expected under random connectivity; then, by generating random ensembles of networks and simulating their dynamics using a standard discrete-time Boolean network model, we determine the robustness-inducing effect of these features. Our results thus far indicate that both the first- and second-order aspects of this connectivity pattern affect the robustness of the network, suggesting a synergy between the topological and parametric features defining the system towards attaining robustness.
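For readers unfamiliar with the simulation model mentioned above, here is a minimal Python sketch of a synchronous, threshold-style discrete-time Boolean network, in which a gene turns on when its active activators outnumber its active repressors; the update rule, example wiring, and function names are illustrative and may differ from the exact model used in the study.

```python
def step(state, activators, repressors):
    """One synchronous update of a simple threshold Boolean network:
    a gene is ON next step iff it has more active activating inputs than
    active repressing inputs.  `state` maps gene -> 0/1; `activators` and
    `repressors` map gene -> list of regulator genes.  Illustrative sketch.
    """
    new = {}
    for g in state:
        act = sum(state[r] for r in activators.get(g, []))
        rep = sum(state[r] for r in repressors.get(g, []))
        new[g] = 1 if act > rep else 0
    return new

def attractor(state, activators, repressors, max_steps=100):
    """Iterate the dynamics until a previously seen state recurs."""
    seen = []
    for _ in range(max_steps):
        if state in seen:
            return state
        seen.append(state)
        state = step(state, activators, repressors)
    return state

acts = {'B': ['A'], 'C': ['B']}     # A activates B, B activates C
reps = {'A': ['C']}                 # C represses A
print(attractor({'A': 1, 'B': 0, 'C': 0}, acts, reps))
```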
|
| 2013/02/12 |
Faculty Candidate Talk |
Place: McConnell 437
Time: 10:00 - 11:00
Speaker:
Thomas
Vidick
Affiliation: MIT: Department of Computer Science and Artificial Intelligence Laboratory
Title: The complexity of entangled games: hardness results and approximation algorithms
Abstract: Multiplayer games, from their use in cryptography to their role in the proof of the PCP theorem, play a central role in modern theoretical computer science. Recently the introduction of a new element, quantum entanglement, has brought a second life to the study of these games.
Entanglement plays the role of a distributed resource that the players can tap in order to increase their odds of winning: how does its use affect the properties of the game? In turn, multiplayer games have proved a valuable tool in the study of fundamental properties of entanglement.
In this talk I will present some recent results on the complexity of entangled games. I will show that the class MIP*(3) of languages having three-prover entangled proof systems contains the class NEXP.
This resolves a long-standing open question in quantum complexity by showing that entangled-prover proof systems are no less powerful than their classical counterparts. The proof is based on adapting the multilinearity test from Babai et al.'s seminal result MIP = NEXP [BFL'91] to the entangled-player setting, and demonstrates the strength of one of the most interesting aspects of entanglement, its monogamy.
Time permitting I will briefly discuss applications of entangled games to two very different areas: device-independent proofs of security in cryptography and approximation algorithms based on the use of semidefinite programming.
Biography of Speaker:
-
Thomas Vidick is currently a postdoctoral associate in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. His research interests are in quantum computing and complexity theory, and he has made contributions to the theory of quantum multi-prover interactive proofs, quantum cryptography, pseudo-randomness, and approximation algorithms.
He completed his Ph.D. in Computer Science from UC Berkeley in Fall 2011, working with Umesh Vazirani. His thesis was awarded the Bernard Friedman Memorial Prize in applied mathematics. He is a co-recipient of the FOCS'12 best paper award for his paper with Tsuyoshi Ito on "A multi-prover interactive proof for NEXP sound against entangled provers".
|
| 2013/02/06 |
Graduate Seminar Series |
Place: MC103
Time: 12 - 12:30
Speaker:
Jonathan
Tremblay
Affiliation: McGill SOCS
Area:
Design of Computer Games
Title: Adaptivity in computer games
Abstract:
Non-player characters that act as companions for players are a common
feature of modern games. Designing a companion that reacts
appropriately to the player's experience, however, is not a trivial
task, and even current triple-A titles tend to provide companions
that are either static in behaviour or evince only superficial
connection to player activity. To address this issue I am going to
present an adaptive companion that analyses the player's in-game
experience and behaves accordingly. We evaluate our adaptive companion
in different, non-trivial scenarios, as well as compare our proposed
model to a straightforward approach to adaptivity based on Dynamic
Difficulty Adjustment. The data collected demonstrates that the
adaptive companion has more influence over the player's experience and
that there exists an orthogonality between our companion adaptivity
and the more traditional combat/health scaling approaches to
difficulty adjustment. Using adaptive companions is a step forward in
offering meaningful and engaging games to players.
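For contrast with the adaptive companion, the Dynamic Difficulty Adjustment baseline mentioned above can be sketched in a few lines of Python: a global enemy-strength multiplier is nudged up or down depending on how the player's health compares to a target level. All parameter names and values here are illustrative, not taken from the study.

```python
def adjust_difficulty(scale, player_health, target=0.6, rate=0.05,
                      lo=0.5, hi=1.5):
    """A bare-bones Dynamic Difficulty Adjustment rule of the kind the
    abstract uses as a comparison baseline: nudge a global enemy-strength
    multiplier toward harder when the player is above a target health level
    and toward easier when below it.  Illustrative parameters only.
    """
    if player_health > target:
        scale += rate          # player is comfortable -> harder
    elif player_health < target:
        scale -= rate          # player is struggling  -> easier
    return max(lo, min(hi, scale))

scale = 1.0
for health in [0.9, 0.8, 0.4, 0.3, 0.7]:   # player health after each encounter
    scale = adjust_difficulty(scale, health)
    print(round(scale, 2))
```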
|
| 2013/01/23 |
Graduate Seminar Series |
Place: MC103
Time: 12 - 12:30
Speaker:
Vineet
Kumar
Affiliation: McGill SABLE lab
Area:
Compilers
Title: Helping scientists: the compilers way!
Abstract:
MATLAB is the most popular language among scientists for solving numerically intensive problems, but it does not let them take full advantage of the supercomputers they now have access to. We address this problem by developing a MATLAB-to-X10 translator. X10 is a programming language being developed by IBM Research to provide a programming model that offers scalability and productivity on new-generation supercomputing architectures. X10 is based on the four basic principles of asynchrony, locality, atomicity, and order, developed on a type-safe, class-based, object-oriented foundation.
MiX10 is a part of the McLab project at the SABLE lab that provides a static backend with X10 as the target language for our MATLAB compiler. Two major challenges in translating MATLAB to X10 are (1) mapping programming constructs of a matrix-based, dynamic and “wild” language to those of a statically typed, object-oriented, parallel programming language, and (2) providing support for the huge number of MATLAB built-in methods that are a major reason behind its success as a scientific programming language. MiX10 builds upon McLab’s Tamer analysis framework and aims for performance and readability of the generated X10 code.
|