
Fall 2009 Colloquium Schedule
2009/09/11 Speaker: SOCS Professors
Affiliation: McGill University
Area: Computer Science
Title: SOCS Research
Abstract: This first colloquium of the semester aims to give an overview of the research being done within the various groups at SOCS. Several professors will present short 10-minute talks about their current research. Attendance is mandatory for new M.Sc. students, but all students and professors are invited to hear about what's happening in our department.
2009/09/18 Speaker: Kazuo Iwama
Affiliation: School of Informatics, Kyoto University
Area: Theory
Title: Stable Matching Problems
Abstract: The stable matching problem is one of the oldest problems studied from an algorithmic point of view. Its original version is defined as follows: an instance consists of N men, N women, and each person's preference list, which totally orders all members of the opposite sex according to his or her preference. For a matching M between men and women, a pair consisting of a man m and a woman w is called a blocking pair if both prefer each other to their current partners. A matching with no blocking pair is called stable. The famous Gale-Shapley algorithm, which finds a stable matching in linear time, was presented in 1962, and Knuth considered several related problems in 1976. The problem also has many real-world applications, including the National Resident Matching Program in the US, which started in 1952. Thus the problem has a long history of theoretical research and application, but many interesting questions remain. For example, for its most general version, allowing both ties and incomplete lists, nothing was known about its complexity until 1999. In 2007, Iwama, Miyazaki and Yamauchi gave the first approximation algorithm with an approximation factor strictly less than two; a year later Kiraly improved the factor to 1.67, and this year McDermid improved it to 1.5. This talk examines why this problem is so interesting and deep from several different angles, including the newest results mentioned above, different measures of the goodness of matchings, different definitions of stability, related problems such as the stable roommates problem and stable partition, and open problems. Biography of Speaker:

Kazuo Iwama received B.E., M.E. and Ph.D. degrees from the Department of Electrical Engineering, Kyoto University, in 1973, 1975 and 1980, respectively. He was with Kyoto Sangyo University from 1978 to 1990 and with Kyushu University from 1990 to 1997, and has been with Kyoto University since then. He is Editor-in-Chief of MDPI Algorithms, a member of the editorial board of ACM Transactions on Computation Theory, and President of the Asian Association for Algorithms and Computation (AAAC).
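For readers unfamiliar with the deferred-acceptance idea behind the Gale-Shapley algorithm mentioned in the abstract above, a minimal sketch follows; the data layout and names are illustrative and are not taken from the talk.

```python
# Minimal sketch of the Gale-Shapley (deferred acceptance) algorithm for the
# classical case: complete preference lists, no ties. Illustrative only.

def gale_shapley(men_prefs, women_prefs):
    """men_prefs[m] and women_prefs[w] list the opposite side, most preferred
    first. Returns a stable matching as a dict mapping woman -> man."""
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}  # index of next woman m proposes to
    engaged_to = {}                          # woman -> man
    free_men = list(men_prefs)

    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        current = engaged_to.get(w)
        if current is None:
            engaged_to[w] = m                # w was free: accept provisionally
        elif rank[w][m] < rank[w][current]:
            engaged_to[w] = m                # w prefers m: swap partners
            free_men.append(current)
        else:
            free_men.append(m)               # w rejects m; he stays free

    return engaged_to

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))  # {'w1': 'm2', 'w2': 'm1'} has no blocking pair
```

Since every man proposes to each woman at most once, the loop terminates after at most N^2 proposals, i.e., in time linear in the size of the N preference lists.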


2009/09/25 Speaker: Lui Sha
Affiliation: University of Illinois at Urbana Champaign
Title: Overview of the US Cyber-Physical Systems Initiative
Abstract: The Internet has made the world "flat" by transcending space. We can now interact with people and get useful information around the globe in a fraction of a second. The Internet has transformed how we conduct research, studies, business, services, and entertainment. However, there is still a serious gap between the cyber world, where information is exchanged and transformed, and the physical world in which we live. Emerging cyber-physical systems will enable a grand vision of societal-level services that transcend space and time at scales never possible before. CPS is now the top priority of US IT research, and I will give an overview of the CPS challenges presented in NSF CPS research planning workshops. Biography of Speaker:

Lui Sha received his BSEE from McGill in 1978 and his Ph.D. from Carnegie Mellon University in 1985. He is currently the Donald B. Gillies Chair Professor of Computer Science at the University of Illinois at Urbana-Champaign and a fellow of the ACM and IEEE. His work transformed the IEEE standards on real-time computing and was cited as a major accomplishment of computer science in the National Academy of Sciences' report "A Broader Agenda for Computer Science and Engineering". His work was also cited as a key enabling technology for many high-technology projects, including the GPS in-orbit upgrade, the Mars Pathfinder, and the International Space Station. He served on the US National Academy of Sciences' study committee on certifiably dependable software, and is a member of the organizing committee for the CPS Research Initiative.


2009/10/02 Speaker: Calton Pu
Affiliation: Georgia Institute of Technology
Title: Automated System Management through Experimental Measurements
Abstract: Large n-tier applications running in data centers have complex deployment requirements and dependencies that change frequently. The increasing complexity and scalability requirements of such applications demand automated configuration design, testing, deployment, and monitoring. In the Elba project, we have automated the n-tier application deployment, monitoring, and analysis phases through automated generation of benchmark scripts. Elba software tools include the Mulini generator, which creates deployment and monitoring scripts for several benchmarks such as RUBiS and RUBBoS. The scripts run the benchmarks through many different configurations (from 3-tier to 5-tier, and several software packages such as MySQL and PostgreSQL), producing detailed data on many system resource metrics (e.g., CPU and network utilization). Statistical analysis of these metrics identifies resource bottlenecks automatically, leading to automated adaptation. We will show detailed analyses of our data and discuss new research topics that can use the accumulated benchmark data and apply these techniques to other quality-of-service dimensions such as availability and power consumption. Biography of Speaker:

Calton Pu was born in Taiwan and grew up in Brazil. He received his PhD from the University of Washington in 1986 and served on the faculty of Columbia University and the Oregon Graduate Institute. He currently holds the position of Professor and John P. Imlay, Jr. Chair in Software at the College of Computing, Georgia Institute of Technology. He has worked on several projects in systems and database research. His contributions to systems research include program specialization and software feedback in the Synthesis, Synthetix, and Infosphere projects. His contributions to database research include extended transaction models and their implementation, such as Epsilon Serializability and the Reflective Transaction Framework. His recent research has focused on event processing (Continual Queries over the Internet), automated system management (the Elba project), and services computing (dependable systems software). His collaborations include applications of these techniques in scientific research on macromolecular structure data, weather data, environmental data, and health care. He has published more than 50 journal papers and book chapters and 150 conference and refereed workshop papers, and has served on more than 100 program committees, including as co-PC chair of SRDS'95, ICDE'99, COOPIS'02, SRDS'03, DOA'07, and DEBS'09, and co-general chair of ICDE'97, CIKM'01, ICDE'06, DEPSA'07, CEAS'07, SCC'08, and CollaborateCom'08.
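As a toy illustration of the kind of analysis the abstract above describes (flagging a resource bottleneck from utilization metrics gathered across load levels), one might simply look for the metric that saturates first as offered load increases. The sketch below is purely hypothetical: it is not Elba or Mulini code, and the metric names, values, and threshold are invented for illustration.

```python
# Toy sketch of bottleneck detection from resource-utilization measurements.
# Not Elba/Mulini code; data and the saturation threshold are invented.

# utilization[metric] = average utilization (0..1) at increasing load levels
utilization = {
    "app_cpu":    [0.21, 0.38, 0.55, 0.71, 0.83],
    "db_cpu":     [0.35, 0.58, 0.79, 0.95, 0.97],   # saturates first
    "network_tx": [0.10, 0.18, 0.25, 0.33, 0.40],
}

SATURATION = 0.90  # assumed saturation threshold

def find_bottleneck(util, threshold=SATURATION):
    """Return the metric that first crosses the saturation threshold,
    or None if nothing saturates at the measured load levels."""
    first_hit = {}
    for metric, series in util.items():
        for load_level, value in enumerate(series):
            if value >= threshold:
                first_hit[metric] = load_level
                break
    return min(first_hit, key=first_hit.get) if first_hit else None

print(find_bottleneck(utilization))   # -> 'db_cpu'
```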


2009/10/09 Speaker: Robert DeLine
Affiliation: Microsoft Research
Area: Software Engineering
Title: The Social Programmer
Abstract: Despite stereotypes to the contrary, software development is a very social activity. Developers spend a large fraction of their time in conversation with teammates, seeking information, coordinating activities, and building consensus about design decisions. The downside, though, is that their work is often interrupted or blocked on missing information, which leads to a frustratingly fragmented work day. In this talk, we'll examine this behavior in detail, with an eye toward productivity drains. I'll describe several prototype development tools from Microsoft Research that help developers find the people and information they need and cope with interruptions. Biography of Speaker:

Robert DeLine is a Principal Researcher with Microsoft Research. He leads the Human Interactions in Programming group (HIP).


2009/10/23 Speaker: Martin Chabbert and Martin Piotte
Affiliation: Pragmatic Theory
Title: Advances in Collaborative Filtering and the Netflix Prize
Abstract: In October 2006, the online DVD rental company Netflix released a large dataset of customer-generated movie ratings as part of a $1,000,000 competition to significantly improve its movie recommendation system. The goal was to develop an algorithm that can predict future customer ratings based on past behaviour, with a root mean square rating error 10% better than Netflix's existing system on a held-out set. The unprecedented size and quality of the dataset, coupled with the high-profile prize, drew over 50,000 contestants and triggered significant advances in the field of collaborative filtering. The presentation explores the challenges involved in the task, the approaches used to solve the problem, and the key advances made to collaborative filtering during the course of the competition: matrix factorization, Restricted Boltzmann Machines, neighbourhood methods, integrated models with implicit feedback, time-dependent models, and blending. Biography of Speaker:

Martin Piotte obtained a bachelor's degree in electrical engineering from École Polytechnique de Montréal. Since his graduation in 1989, he has held various software development positions related to control systems and telecommunications. Martin Chabbert graduated from École Polytechnique de Montréal in 1998 with a bachelor's degree in computer engineering. Since then, he has worked as a software developer for various companies, mainly in the telecommunications field. Martin and Martin reside in Montreal and are both currently employed by BroadSoft, the world's leading provider of Voice over IP application software. They formed team PragmaticTheory and started working on the Netflix Prize competition in March 2008, in their spare time. A year later, PragmaticTheory became the leading team in the competition.
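Of the techniques listed in the abstract above, matrix factorization is the easiest to sketch: learn low-dimensional user and item factor vectors whose dot product approximates the observed ratings. The snippet below is a textbook stochastic-gradient-descent baseline with made-up data and hyperparameters, not the Pragmatic Theory model.

```python
# Minimal matrix-factorization sketch for rating prediction (SGD on squared
# error with L2 regularization). A textbook baseline; data and hyperparameters
# are invented for illustration.
import numpy as np

ratings = [  # (user, item, rating) triples; toy data
    (0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0), (2, 2, 5.0),
]
n_users, n_items, k = 3, 3, 2          # k latent factors (assumed)
lr, reg, epochs = 0.01, 0.05, 500      # assumed hyperparameters

rng = np.random.default_rng(0)
P = 0.1 * rng.standard_normal((n_users, k))   # user factor vectors
Q = 0.1 * rng.standard_normal((n_items, k))   # item factor vectors

for _ in range(epochs):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]                  # prediction error on one rating
        pu = P[u].copy()
        P[u] += lr * (err * Q[i] - reg * pu)   # gradient step on user factors
        Q[i] += lr * (err * pu - reg * Q[i])   # gradient step on item factors

print(round(float(P[0] @ Q[0]), 2))            # predicted rating for (user 0, item 0)
```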


2009/10/30 Speaker: Sridhar Mahadevan
Affiliation: University of Massachusetts-Amherst
Area: Machine Learning, AI
Title: New Frontiers in Representation Discovery
Abstract: A longstanding challenge in artificial intelligence is how to automatically construct representations by analyzing state spaces. This problem has attracted renewed interest in a variety of areas, including compression and dimensionality reduction, planning and reinforcement learning, and semi-supervised and unsupervised learning. This talk will describe a family of spectral approaches to constructing new representations by diagonalizing or dilating Laplacian operators over a state space. A number of challenges and tradeoffs will be discussed, including task-specific vs. task-invariant representations; flat vs. multi-scale approaches; orthogonal vs. redundant bases; and dealing with the curse of dimensionality. A range of case studies from our recent research will be presented, including a paradigm for solving Markov decision processes where representation and control are learned simultaneously; a spectral method for clustering of text documents where the topic hierarchy is automatically constructed; and a compression method for computer graphics based on multiscale analysis of object geometry. Biography of Speaker:

Sridhar Mahadevan is Co-Director of the Autonomous Learning Laboratory at the Department of Computer Science, University of Massachusetts, Amherst. His research interests are in artificial intelligence and machine learning. He is an associate editor of the Journal of Machine Learning Research, and was a tutorial speaker at AAAI 2007, IJCAI 2007, and ICML 2006. In 2008, he authored a book on representation discovery as part of the Synthesis Lectures in AI and Machine Learning series published by Morgan & Claypool.
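To make the "diagonalizing Laplacian operators" idea from the abstract concrete: on a graph over the state space, the eigenvectors of the graph Laplacian associated with the smallest eigenvalues are smooth functions of the state and can serve as basis functions for representing value functions. The sketch below uses a 10-state chain graph chosen purely for illustration; it is not code from the speaker's work.

```python
# Sketch: Laplacian eigenvectors as basis functions over a state-space graph.
# A 10-state chain graph is used purely for illustration.
import numpy as np

n = 10
A = np.zeros((n, n))
for s in range(n - 1):                 # chain graph: state s <-> state s+1
    A[s, s + 1] = A[s + 1, s] = 1.0

D = np.diag(A.sum(axis=1))             # degree matrix
L = D - A                              # combinatorial graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)   # eigenpairs, sorted by eigenvalue
basis = eigvecs[:, :4]                 # the 4 smoothest basis functions

print(np.round(eigvals[:4], 3))        # smallest eigenvalues (0 first; connected graph)
print(basis.shape)                     # (10, 4): 4 basis functions over 10 states
```

A value function over the 10 states could then be approximated as a linear combination of these columns.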


2009/11/13 Speaker: Allan Borodin
Affiliation: University of Toronto
Area: Theory
Title: The Power and Limitations of Greedy Mechanism Design for Combinatorial Auctions
Abstract: We study combinatorial allocation problems in which objects are sold to selfish agents, each having a private valuation function that expresses values for sets of objects. In particular, we consider the social welfare objective, where the goal of the auctioneer is to allocate the objects so as to maximize the sum of the values of the allocated sets. Our specific interest is to understand the power and limitations of "conceptually simple" allocation algorithms in this regard. For single-minded combinatorial auctions (where agents only value and bid on a single set), there is a known truthful deterministic mechanism (Lehmann, O'Callaghan and Shoham) using a greedy allocation that achieves an O(sqrt(m)) approximation ratio to the social welfare, where m is the total number of objects. However, for general (multi-minded) combinatorial auctions the best known deterministic truthful efficient mechanism has approximation ratio O(m/sqrt(log m)) (Blumrosen and Nisan). We model greedy algorithms by "priority algorithms" and show that no truthful priority mechanism obtains a sub-linear approximation ratio for two-minded agents. In contrast, we show that every c-approximation monotone greedy algorithm for a combinatorial allocation problem can be implemented as a mechanism that achieves a c + O(log c) approximation at every Bayesian Nash equilibrium, and if we assume that agents do not overbid, then this "price of anarchy" is c + 1. Finally, we discuss recent results by Brendan Lucier concerning the use of such simple algorithms in repeated games. Joint work with Brendan Lucier. Biography of Speaker:

http://en.wikipedia.org/wiki/Allan_Borodin
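The greedy allocation underlying the Lehmann, O'Callaghan and Shoham mechanism referred to in the abstract is simple to state: sort the bids by value divided by the square root of the bundle size, then accept each bid whose bundle is still entirely available. A rough sketch follows; the critical-value payments that make the mechanism truthful are omitted, and the example bids are invented.

```python
# Sketch of the Lehmann-O'Callaghan-Shoham greedy allocation for single-minded
# combinatorial auctions: rank bids by value / sqrt(|bundle|), allocate greedily.
# Payments (critical values), which make the mechanism truthful, are omitted.
from math import sqrt

def greedy_allocation(bids):
    """bids: list of (bidder, value, bundle) with bundle a set of object ids.
    Returns the set of winning bidders."""
    order = sorted(bids, key=lambda b: b[1] / sqrt(len(b[2])), reverse=True)
    allocated, winners = set(), set()
    for bidder, value, bundle in order:
        if bundle.isdisjoint(allocated):   # all requested objects still free
            winners.add(bidder)
            allocated |= bundle
    return winners

bids = [("a", 10.0, {1, 2, 3, 4}), ("b", 6.0, {1}), ("c", 5.0, {2, 3})]
# Ranks: b = 6.0, a = 5.0, c ~ 3.54, so b wins, a conflicts on object 1, c wins.
print(greedy_allocation(bids))   # winners: b and c
```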


2009/11/20 Speaker: Borzoo Bonakdarpour
Affiliation: Verimag Laboratory, Grenoble
Area: Verification
Title: From High-Level Component-Based Models to Distributed Implementations
Abstract: Design and implementation of distributed algorithms often involve many subtleties due to their complex structure, nondeterminism, and low atomicity, as well as the occurrence of unanticipated physical events such as faults. Thus, constructing correct distributed systems has always been a challenge and is often subject to serious errors. We propose a method for generating distributed implementations from high-level component-based models that employ only simple synchronization primitives. The method is a sequence of three transformations preserving observational equivalence: (1) a transformation from a global-state to a partial-state model, (2) a transformation which replaces multi-party strong synchronization primitives in atomic components by point-to-point send/receive primitives based on asynchronous message passing, and (3) a final transformation to a concrete distributed implementation based on the platform and architecture. These transformations may use additional mechanisms ensuring mutual exclusion between conflicting interactions. We propose different distributed implementations, from fully centralized to fully decentralized ones. We study the properties of the different transformations, in particular performance criteria such as degree of parallelism and overhead for coordination. In our case studies, we observe that different types of distributed and parallel algorithms may behave significantly differently in terms of performance under different types of transformations. Biography of Speaker:

Borzoo Bonakdarpour is currently a post-doctoral researcher at the Verimag Laboratory in Grenoble, France, working on the BIP project led by Dr. Joseph Sifakis, a Turing Award winner. The aim of the project is to develop theory, methods, and tools for building real-time and distributed systems consisting of heterogeneous components. His other research interests include compositional verification of embedded systems, program synthesis and, in particular, program revision. He is the main developer of the tool SYCRAFT, which is capable of synthesizing fault-tolerant distributed programs with 10^80 reachable states and beyond. Borzoo Bonakdarpour obtained his Ph.D. from the Department of Computer Science and Engineering at Michigan State University in 2008, which he joined as a Master's student in 2002. His Ph.D. dissertation, "Automated Revision of Distributed and Real-Time Programs", studies a wide range of revision problems in closed and open systems. He completed his Bachelor of Science in computer engineering at the University of Esfahan, Iran, in 1998. His B.Sc. project on queueing-theoretic analysis and implementation of voice over Ethernet won the President of Iran's Khwarizmi Research Prize.
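As a toy picture of the second transformation described in the abstract (replacing a multi-party strong synchronization with point-to-point send/receive messages), the sketch below lets two components announce readiness to a coordinator, which grants the interaction once all participants are ready. It is an illustration under simplifying assumptions, not the BIP tool chain or the transformations studied in the talk.

```python
# Toy sketch: a two-party rendezvous implemented with asynchronous send/receive
# messages through a coordinator. Illustrative only; not the BIP transformation.
import threading, queue

coordinator_in = queue.Queue()
component_in = {"A": queue.Queue(), "B": queue.Queue()}

def component(name):
    coordinator_in.put(("ready", name))     # send: announce readiness
    grant = component_in[name].get()        # receive: wait for the go-ahead
    print(f"{name} executes interaction {grant}")

def coordinator(participants, interaction):
    ready = set()
    while ready != set(participants):       # collect readiness messages
        msg, name = coordinator_in.get()
        if msg == "ready":
            ready.add(name)
    for name in participants:               # grant the interaction to everyone
        component_in[name].put(interaction)

threads = [threading.Thread(target=component, args=(n,)) for n in ("A", "B")]
threads.append(threading.Thread(target=coordinator, args=(("A", "B"), "sync1")))
for t in threads:
    t.start()
for t in threads:
    t.join()
```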