Speaker and Abstract
Affiliation: School of Computer Science, McGill University
Title: Research Talks by SOCS Professors
Affiliation: McGill University
Title: An example-based study of the verification of model transformations
Abstract: Model transformations are everywhere in software development, implicitly or explicitly. Typical model transformations are, for example, language converters, small compilers or small language interpreters. Model transformations became first-class citizens of the software world with the advent of Model Driven Development (MDD) — with explicit models also came explicit model transformations. If we generically consider a model transformation as an algorithm describing a set of computations, then proving some of its properties or testing it can be envisaged. However, unlike reactive systems — for which many verification techniques have been proposed — a model transformation performs particular computations where: (1) it typically operates on models, thus on data at a high level of abstraction rich in semantics and (2) from a pragmatic point of view, often only the initial input and the final output are of interest and none of the intermediate steps matter. In this talk we will set the stage for an advanced study of model transformation verification techniques, having as background a pragmatic real-world example.
We will start by introducing a Model Driven Development case study, where the control software for an automobile's power window is developed relying on model transformation techniques. This example will allow us to introduce the principles of Model Driven Development. We will start from a set of models of the various systems involved in the operation of a power window, described using Domain Specific Languages (DSLs). The example exposes a set of model transformations that automate, or contribute to the automation of, the MDD development of power window software. After precisely identifying the properties of the model transformations involved in the power window example, we will propose model transformation verification techniques that can be used to ensure the transformations' correctness regarding those properties. Because the existing literature in model transformation verification is still scattered and preliminary, the verification techniques mentioned in the presentation will come from a taxonomy of model transformation verification techniques recently built by us. We will wrap up the presentation by discussing a set of possible directions for this research.
Biography of Speaker:
Levi is currently a postdoctoral fellow in the Modeling, Simulation, and Design Lab at McGill University, working with Professor Hans Vangheluwe. His current research is on model transformation verification for the NECSIS (Network on Engineering Complex Software Intensive Systems for Automotive Systems) project.
Before joining McGill, Levi worked as a research associate at the University of Luxembourg (LASSY group), managing a project on resilient software systems. Before that, he was a postdoc at the Universidade Nova de Lisboa, Portugal (SOLAR group), developing a model transformation language and a transformation verification technique. Levi completed his Ph.D. in 2008 at the University of Geneva, Switzerland (SMV group), where he formally defined a language for test case specification in the context of concurrent system models.
Levi's research is about bridging formal techniques and software engineering. Some of Levi's current concrete areas of interest are model transformation languages, the verification of model transformations, correctness by construction, models of concurrency (in particular Algebraic Petri Nets), software evolution and model based testing.
Affiliation: Kyoto University
Title: Metric Learning, From Mahalanobis Distances to Transportation Distances
Abstract: K-nearest neighbors methods can be used in a wide variety of supervised machine learning tasks such as regression or classification. A key ingredient of such methods lies in the definition of a distance between observations. It has been widely observed that selecting a relevant distance to compare observations is key to obtaining good performance. For about a decade now, researchers in machine learning have proposed to select such a distance automatically through examples, that is, by only using a training set of labeled vectors. To do so, all of these techniques have in common that they consider the parameterized family of Mahalanobis distances as the set of candidate distances to choose from. I will present in the first part of this talk a few of these metric learning methods. I argue, however, that metric learning is not necessarily limited to Mahalanobis distances, but can also be applied to other families of distances. I consider in this talk the family of transportation distances, which were proposed two centuries ago to compare probability distributions and, more simply, histograms of features. Transportation distances are popular in computer vision, where, under the name of Earth Mover's Distance, they have been used to compare images seen as histograms of colors, SIFT or GIST features. I will show in the second half of this talk that the parameters of transportation distances can also be tuned automatically through a labeled database of histograms and will present empirical evidence that such algorithms perform better than Mahalanobis metric learning in that case.
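As a brief illustration of the distance family the first part of the talk is about (the vectors and matrices below are invented, not taken from the talk): a Mahalanobis distance is parameterized by a positive semidefinite matrix M, with d_M(x, y) = sqrt((x - y)^T M (x - y)); metric learning chooses M from labeled data, and M = I recovers the ordinary Euclidean distance.

```python
import math

def mahalanobis(x, y, M):
    """d_M(x, y) = sqrt((x - y)^T M (x - y)) for a PSD matrix M
    (given as a list of rows)."""
    d = [xi - yi for xi, yi in zip(x, y)]
    # quadratic form d^T M d
    q = sum(d[i] * M[i][j] * d[j]
            for i in range(len(d)) for j in range(len(d)))
    return math.sqrt(q)

# M = identity recovers the ordinary Euclidean distance.
I = [[1.0, 0.0], [0.0, 1.0]]
print(mahalanobis([1.0, 0.0], [0.0, 1.0], I))  # → 1.4142... (sqrt(2))

# A "learned" M reweights (and, off-diagonal, correlates) features;
# this hypothetical diagonal M counts the first coordinate 4x as much.
M = [[4.0, 0.0], [0.0, 1.0]]
print(mahalanobis([1.0, 0.0], [0.0, 1.0], M))  # → 2.2360... (sqrt(5))
```

Metric learning methods differ in how they pick M (e.g., from pairs labeled "similar"/"dissimilar"), but all search within this one matrix-parameterized family, which is what the talk contrasts with transportation distances.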
Biography of Speaker:
Professor Cuturi received his Ph.D. in applied maths in 2005 from the Ecole des Mines de Paris under the supervision of Jean-Philippe Vert. He worked as a post-doctoral researcher at the Institute of Statistical Mathematics, Tokyo, between 2005 and 2007. Between 2007 and 2008 he worked for a hedge fund owned by Credit Suisse in Tokyo. After working at the ORFE department of Princeton University between 2009 and 2010 as a lecturer, he joined the Graduate School of Informatics at Kyoto University as an associate professor.
Affiliation: University of Toronto
Title: Programming FPGAs with Software via Overlay Architectures
Abstract: Field-Programmable Gate Arrays (FPGAs) provide very low-risk access to VLSI
technology and faster time-to-market compared to fully-fabricated chips, and
hence are becoming more common in computing products. However, the use of
FPGAs has faced two increasingly daunting barriers: (i) the relative difficulty
of designing hardware, which is typically done at the Register-Transfer-Level
(RTL) in hardware description languages by specialized hardware engineers; and
(ii) the large computational effort needed by the CAD tools to translate and
optimize the RTL design for an FPGA, which can take up to a day for a large
design (i.e., a very slow turn-around time for design iteration).
In this talk I will describe my recent research to address these problems via
"overlay architectures": pre-designed soft structures for an FPGA that are
themselves programmable, but more easily and more quickly. The simplest
example of an overlay architecture is a "soft processor"---i.e., an instruction
set processor built using the programmable logic of an FPGA. We have developed
several scalable, customizable, and easy-to-program overlays for different
computing domains. In particular I will summarize our work on customizable soft
processors, soft vector processors, a super-fast multithreaded processor, and
our ongoing work on an "application-specific GPU" that will support programming
Biography of Speaker:
Greg Steffan is an Associate Professor in the Department of Electrical and
Computer Engineering at the University of Toronto. His research focus is on
making multicore processors and FPGAs easier to program. He completed his
doctorate at Carnegie Mellon University in 2003, after undergraduate and
Masters degrees at the University of Toronto. He is a recipient of the Ontario
Ministry of Research and Innovation Early Researcher Award (2007), a Siebel
Scholar (2002), an IBM CAS Faculty Fellow and Visiting Scientist, and a senior
member of both the IEEE and ACM.
Affiliation: McGill University
Title: Computational challenges in genomics
Abstract: Advances in sequencing technologies have paved the way to a revolution in genomics and medicine. Ten years ago, sequencing the first human genome cost more than 1 billion dollars. Now? You could have your own genome sequenced in a few days for just over 1 thousand dollars. Coupled with these technological breakthroughs is the need for new algorithmic and computational strategies to mine high volumes of data and gain insights into different biological systems. In this presentation, I will discuss a few interesting algorithmic problems in genomics. Specifically, I will present the problem of reconstructing the architecture of the ancestral genome of a set of extant species. I will also show results that demonstrate the ubiquitous role that repetitive DNA plays in our genome and discuss computational challenges associated with using short sequencing reads to characterize these unusual regions of our genome. Finally, I will discuss some of the technical difficulties associated with computing, storing and sharing hundreds of terabytes worth of data generated by next-generation sequencing technologies.
Biography of Speaker:
Guillaume Bourque is an Associate Professor in the Department of Human Genetics at McGill University and the Director of Bioinformatics at the McGill University & Genome Quebec Innovation Center. During his PhD, he worked on genome rearrangements in evolution with Pavel Pevzner at the University of Southern California. In 2002, he did postdoctoral research on gene regulatory networks at the University of Montreal with David Sankoff. From 2004 to 2010, he worked at the Genome Institute of Singapore, where he was a Senior Group Leader and the Associate Director of Computational & Mathematical Biology. His research interests are in comparative and functional genomics with a special emphasis on applications of next-generation sequencing technologies.
Affiliation: The Right Stuff of Tahoe
2012 Alumni Lecture
Title: Computational Connectivity
Abstract: When shopping for what to view on your home plasma screen, perhaps you've seen a Netflix recommendation like: "Because you enjoyed the film There Will Be Blood, you might want to watch My Left Foot." Or maybe you saw the 1993 movie Six Degrees of Separation. But why six degrees of separation? Why not five, or seven? Is there anything profound to this, or, for that matter, to heuristics for recommending movies? Or is it all just bovine? Despite the hype and frequent lack of rigor, there is (I claim) an underlying sponge-worthy theme: connectivity. Nerds and geeks have their "Erdős Numbers" (e.g., Erdős [David Avis] = 1). Hollywood-ites have their "Bacon Numbers". From Paul Erdős and six degrees of separation, to Hollywood and six degrees of Kevin Bacon, computations about connectivity pervade our world. To that end, computational connectivity is an interdisciplinary work in progress. Interdisciplinary does not mean undisciplined. Computational connectivity draws from graph theory, computational geometry, optimization, signal processing, materials science, and more. Less visible, but just as important as Internet search engines and social media: applications of computational connectivity to electronic communications systems. In particular, computational connectivity is a boon to cross-layer optimization of wireless networks, my most recent area of research (and as well the expertise of Steven Low, scheduled to speak at this colloquium on 13-Apr-2012).
So please join me for a whirlwind sampling of computational connectivity. We'll start with a tribute to historian James Burke, creator of the documentary series Connections: an Alternative View of Change. We'll look at advances due to luminaries such as Richard Hamming (as in the Hamming distance and Hamming codes); Godfried Toussaint (one of McGill's own!); and Frank Harary (famous graph theorist whom Godfried sponsored in a memorable visit to McGill). I'll tell a story (okay, maybe two or more of 'em) about these and other legendary characters, all somehow connected to the McGill School of Computer Science. I look forward to connecting with you!
Biography of Speaker:
Laurence E. LaForge (Larry) is president of the Right Stuff of Tahoe (www.The-Right-Stuff.com), an R&D firm in Reno, Nevada. An aging arthritic alpinist and vagabond, he has been on faculty with the University of Nevada (where he was booted out on his ear) and Embry-Riddle Aeronautical University (where he wasn't). Only rarely does anyone read any of his publications in (for example) the IEEE Transactions on Computers, the IEEE Transactions on Reliability, or the IEEE Transactions on Parallel and Distributed Systems. He was an unthanked guest editor for the IEEE Transactions on Components, Packaging, and Manufacturing Technology, and unpaid program chair for the IEEE International Conference on Innovative Systems in Silicon. He is a three-time NASA Fellow, twice at the Jet Propulsion Laboratory (what were they thinking?); there, despite being the dumbest guy in the room on most days, he managed to contribute to Deep Space 1, the first probe with an ion engine (that, by the way, was way cool!).
Larry eked out a degree of proficiency in French (à vrai dire?), along with his PhD, from McGill University; his doctoral research in fault tolerant arrays spanned electrical engineering, computer science, and mathematics. He escaped the Massachusetts Institute of Technology with a baccalaureate in mathematics. A Principal Engineer for Hewlett Packard (née Digital Equipment Corporation), he was part of the slave labor that helped to build the VAX/VMS 8600, as well as CHAS, a pioneer CAD/CAM suite for integrating VLSI design domains – all this and a bag of chips (get it?). Larry is a member of the Computer Society of the IEEE – they haven't caught on (yet), and so continue to let him renew his membership (proof positive that these organizations are mostly about collecting dues, and that their professional standards are only perfunctorily related to whom they maintain on their rosters).
Affiliation: École Polytechnique and GERAD
Title: (Semi)Definitely Going Global!
Abstract: Semidefinite optimization is the optimization of a linear function over the positive semidefinite matrices.
This talk will demonstrate the potential and the growing impact of semidefinite optimization for solving global optimization problems. After providing a user-friendly introduction to semidefinite optimization, we will survey some of the main modeling and computational developments in the application of semidefinite optimization. Finally we will highlight ongoing promising research in this area.
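For readers who want the definition above in symbols, the textbook standard form of a semidefinite program (not specific to this talk) is:

```latex
\begin{aligned}
\min_{X \in \mathbb{S}^n} \quad & \langle C, X \rangle \\
\text{s.t.} \quad & \langle A_i, X \rangle = b_i, \qquad i = 1, \dots, m, \\
& X \succeq 0,
\end{aligned}
```

where \(\langle C, X \rangle = \operatorname{trace}(C^{\mathsf{T}} X)\) is the linear objective and \(X \succeq 0\) restricts \(X\) to the cone of positive semidefinite matrices; linear and second-order cone programs arise as special cases.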
Biography of Speaker:
Prof. Miguel F. Anjos holds the Canada Research Chair in Discrete Nonlinear Optimization in Engineering at École Polytechnique de Montréal, Canada. Dr. Anjos obtained his B.Sc. (Hons) from McGill University, his M.S. from Stanford University and his Ph.D. from the University of Waterloo. His academic research interests are in the theory, algorithms and applications of mathematical optimization, especially in the areas of conic, semidefinite and polynomial optimization. He is also interested in the application of optimization techniques to problems in the areas of smart grids and facility layout. He is an Associate Editor for Discrete Applied Mathematics and a member of the Editorial Board of Optimization and Engineering. He is also on the Mitacs Research Council and is Program Director for the SIAM Optimization Interest Group. Dr. Anjos is the author or co-author of over forty refereed publications, and is co-editor of the brand new Handbook on Semidefinite, Conic and Polynomial Optimization. He was awarded a Humboldt Research Fellowship for Experienced Researchers in 2009.
Affiliation: Newmont Mining
Title: Mine Planning
Abstract: Mine design and long-term production scheduling is a critically important part of mining ventures and deals with the efficient management of cash flows in the order of hundreds of millions of dollars. Mine design and production scheduling determine the technical plan to be followed from mine development to mine closure. It is an intricate and complex problem to address due to its large scale, the unavailability of a truly optimal net present value (NPV) solution, and uncertainty in the key parameters involved (geological and mining, financial, political and environmental). Computer optimization algorithms for mine design and production scheduling have been researched and used in practice over the past 50 years. The most celebrated and widely used algorithm is that of Lerchs and Grossmann for finding the "ultimate pit": a graph theory algorithm that finds the optimal shape of an open pit mine given economic values of discrete blocks of earth (waste blocks having negative value and ore blocks having positive value) along with a set of slope constraints. The slope constraints ensure that the pit produced will not collapse from walls that are too steep. This talk will discuss variations of the Lerchs-Grossmann algorithm and why better algorithms are needed to produce long-term plans that are not only better from an NPV standpoint but also provide more certainty of obtaining that solution, given the uncertainty of the inputs available. Optimization algorithms and opportunities for other areas of mining outside of long-term planning for a single mine will also be discussed.
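The ultimate-pit problem described above is a maximum-weight closure problem: pick the set of blocks with the largest total value such that every chosen block's overlying (prerequisite) blocks are also chosen, which is how slope constraints are encoded. Lerchs and Grossmann solve this efficiently with graph methods; the sketch below solves a tiny hypothetical 5-block instance by brute force, purely to make the objective and the precedence constraints concrete (block numbers, values and precedences are invented).

```python
from itertools import combinations

# Hypothetical instance: block value (negative = waste, positive = ore),
# and precedences: mining block b requires mining every block in prereqs[b]
# (the blocks sitting above it, per the slope constraints).
values = {0: -1.0, 1: -1.0, 2: -1.0, 3: 5.0, 4: -2.0}
prereqs = {3: [0, 1, 2], 4: [1, 2]}

def best_closure(values, prereqs):
    """Brute-force maximum-weight closure (what Lerchs-Grossmann computes
    efficiently); fine here since there are only 2^5 candidate pits."""
    blocks = list(values)
    best, best_set = 0.0, frozenset()   # empty pit has value 0
    for r in range(len(blocks) + 1):
        for subset in combinations(blocks, r):
            s = set(subset)
            # closed set: every chosen block's prerequisites are also chosen
            if all(set(prereqs.get(b, [])) <= s for b in s):
                v = sum(values[b] for b in s)
                if v > best:
                    best, best_set = v, frozenset(s)
    return best, best_set

value, pit = best_closure(values, prereqs)
print(value, sorted(pit))  # → 2.0 [0, 1, 2, 3]
```

Note that ore block 3 is worth mining even though it forces three waste blocks (net value 5 - 3 = 2), while ore is absent under blocks 1 and 2 alone, so block 4 stays unmined; this waste-versus-ore trade-off is exactly what the talk's NPV discussion builds on.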
Biography of Speaker:
Conor Meagher works as an Innovation Specialist for Newmont Mining Corporation. He received his Ph.D. and M.Sc. degrees in computer science from McGill University and holds a B.Math from the University of Waterloo. While at McGill he worked as a research assistant in the Cosmo Mine Planning laboratory, where he learned about optimization problems related to the mining industry. He considers himself lucky to have worked in many economic bubbles just prior to their burst (but not during). These include the dot-com, the Celtic tiger and the US mortgage and housing bubble (gold currently has a spot price near $1700 an ounce). At Newmont he primarily works on software development of optimization tools for mine planning and scheduling. Newmont is the world's second largest gold producer, with significant assets or operations in the United States, Australia, Peru, Indonesia, Ghana, Canada, New Zealand and Mexico. Founded in 1921 and publicly traded since 1925, Newmont became the first gold company selected to be part of the Dow Jones Sustainability World Index in 2007. Headquartered near Denver, Colorado, the company has over 34,000 employees and contractors worldwide.
Affiliation: Bell Labs
Inventory Theory, Stochastic Optimization
Title: A Stochastic Programming Based Approach to Assemble-to-Order Inventory Systems
Abstract: The assemble-to-order system is a classical model in inventory theory, where multiple components are used to produce multiple products. All components are obtained from an uncapacitated supplier after a deterministic (component dependent) lead time, while demand for the products is random. The optimal control for this system (where the goal is to minimize the long run average inventory + backlog cost) is not known except for some very special cases. In this talk I will describe an approach to solving this problem using a particular multi-stage stochastic linear program with complete recourse, which we have shown provides a lower bound on achievable cost in the inventory system. Our approach requires translating the solution of this stochastic program into a control policy for the inventory system. I will describe how to do so in some special cases. Finally, I will provide a set of sufficient conditions under which a control policy achieves the lower bound (and is hence optimal), and show some examples where this occurs.
(Based on joint work with Mustafa Dogru and Qiong Wang)
Biography of Speaker:
Marty Reiman is a distinguished member of the technical staff in the Industrial Mathematics and Operations Research department of Alcatel-Lucent Bell Labs in Murray Hill, NJ. He has been at Bell Labs since receiving his Ph.D. in operations research from Stanford University in 1977. His research has focused on the analysis, optimization and control of stochastic service systems, with an emphasis on fluid and diffusion limits. This work has been motivated by problems in communication, computing and manufacturing systems. Dr. Reiman has been an associate editor and area editor of Mathematics of Operations Research, and is currently an associate editor of the Annals of Applied Probability. He has served as chair of the Applied Probability Society of INFORMS and is an INFORMS Fellow. He is a member of the IMS and IFIP WG 7.3.
Affiliation: University of Virginia
Title: Wireless Sensor Networks for Home Medical Care
Abstract: Wireless sensor networks (WSNs) composed of large numbers of small devices (called motes or dust) can self-organize and be used for a wide variety of applications. In particular, these systems can be used to improve the quality of health care, be applied in the home or in large-scale assisted living facilities, and significantly contribute to longitudinal studies. I will present AlarmNet, a novel system for assisted living and residential monitoring that uses front-end body networks, intermediate environmental sensing and communication networks, and back-end context-aware protocols that are tailored to residents' individual living patterns. In the back-end, programs have been implemented to infer behaviors and use that information for improved health care. For example, these programs infer medical issues such as depression. In this talk I will describe the overall AlarmNet architecture, various front-end body networks, the intermediate sensor network, and the back-end databases and analysis. Key issues addressed include: flexible and evolvable heterogeneous configurations, measuring sleep motion and quality, subjective and objective measures for depression monitoring, privacy, and a real-time query system.
Biography of Speaker:
Professor John A. Stankovic is the BP America Professor in the Computer Science Department at the University of Virginia. He served as Chair of the department, completing two terms (8 years). He is a Fellow of both the IEEE and the ACM. He won the IEEE Real-Time Systems Technical Committee's Award for Outstanding Technical Contributions and Leadership. He also won the IEEE Distributed Processing Technical Committee’s Award for Distinguished Achievement (inaugural winner). He has won four best paper awards in wireless sensor networks research. Professor Stankovic also served on the Board of Directors of the Computer Research Association for 9 years. He also serves on the U.S. National Academy’s Computer Science and Telecommunications Board. Recently, he won the University of Virginia, School of Engineering Distinguished Faculty Award. Before joining the University of Virginia, Professor Stankovic taught at the University of Massachusetts where he won an outstanding scholar award. He was the Editor-in-Chief for the IEEE Transactions on Distributed and Parallel Systems and was a founder and co-editor-in-chief for the Real-Time Systems Journal. His research interests are in wireless sensor networks, cyber physical systems, distributed computing, and real-time systems. Prof. Stankovic received his PhD from Brown University.
Affiliation: California Institute of Technology
Title: Optimal Power Flow and Demand Response
Abstract: Optimal power flow (OPF) problems determine the most efficient power generation, reactive power for voltage support, or demand response.
They are well known to be nonconvex and hence NP-hard. In the first
part of the talk, we propose relaxations that can be solved efficiently, focusing on radial networks. Recently a sufficient condition was proved for general mesh networks under which a semidefinite relaxation is exact.
We prove that, if the network is radial (tree), then the sufficient condition
is always satisfied and hence the semidefinite relaxation is always exact,
provided the constraints on power flows satisfy a simple pattern. Using
the branch flow model for radial networks, we propose a simple SOCP
relaxation to OPF, and prove that it is exact. We apply this result to
control voltage and reactive power in distribution networks, and present
results from realistic simulation of a Southern California distribution circuit.
In the second part of the talk, we describe a simple model that integrates two-period electricity markets, uncertainty in renewable generation, and real-time dynamic demand response. A load serving entity decides its
day-ahead procurement to optimize expected social welfare a day before
energy delivery. At delivery time when renewable generation is realized,
it coordinates with users, in a decentralized manner, to manage load and
purchase real-time balancing power in the real-time market, if necessary.
We derive the optimal day-ahead decision, propose a real-time demand
response algorithm, and study the effect of the volume and variability of
renewable generation on the optimal social welfare.
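To give the flavor of the SOCP relaxation mentioned in the first part (a sketch of the standard branch flow formulation, not a transcription of the talk's results): with squared voltage magnitudes \(v_i = |V_i|^2\), squared line currents \(\ell_{ij} = |I_{ij}|^2\) and complex line power \(S_{ij} = P_{ij} + \mathbf{i} Q_{ij}\), the nonconvex branch flow equality is relaxed to an inequality, which is a second-order cone constraint:

```latex
\ell_{ij} = \frac{P_{ij}^2 + Q_{ij}^2}{v_i}
\quad\longrightarrow\quad
\ell_{ij} \ge \frac{P_{ij}^2 + Q_{ij}^2}{v_i}
\;\Longleftrightarrow\;
\left\| \begin{pmatrix} 2P_{ij} \\ 2Q_{ij} \\ \ell_{ij} - v_i \end{pmatrix} \right\|_2
\le \ell_{ij} + v_i .
```

The relaxation is exact when its optimum satisfies the inequality with equality, in which case the convex solution recovers a feasible (and optimal) power flow; the talk's result is that this holds for radial networks under the stated conditions on power flow constraints.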
Joint work with Subhomesh Bose, Mani Chandy, Masoud Farivar, Lingwen
Gan, Dennice Gayme, Libin Jiang, Javad Lavaei, Caltech, and Chris Clarke.
Biography of Speaker:
Steven H. Low is a Professor of the Computing & Mathematical
Sciences and Electrical Engineering Departments at Caltech,
and an adjunct professor of both Swinburne University, Australia
and Shanghai Jiao Tong University, China. He was a co-recipient
of IEEE best paper awards, the R&D 100 Award, and an Okawa
Foundation Research Grant. He is an IEEE Fellow. He received his
BS from Cornell and PhD from Berkeley, both in EE.
Affiliation: Dept. of Computing, Goldsmiths College, University of London
Title: Walking along the edge: from perception and art to robotics, games and visualisations
Abstract: How should one explore and better understand human perception, and get a good grasp of what may lie behind the concept of consciousness? Artists, like their scientific peers (in psychophysics, cognitive sciences and AI), become experts in perception, albeit often with a more intuitive grasp of the subject. The artist is further interested in manipulating our percepts and their potential content and narratives, and will seek to re-invent how we perceive the world and ourselves, while producing presentations which remain comprehensible to the (more naive) observer. In this talk I will present current ongoing work where art meets science along the digital edge.
Work presented will include (time permitting):
o AIkon: Perception, art and robotics (joint work with artist/roboticist Patrick Tresset)
o ProGen: An interactive platform for designing cities in games (joint work with computer games studio Rebellion Inc. and with UCL/CS)
o FoldSynth: An interactive platform for the study of proteins and other molecular strands (joint work with Imperial College/Structural Bioinformatics)
Biography of Speaker:
Professor of Computing Frederic Fol Leymarie is co-director of the postgraduate MSc program in Computer Games and Entertainment at Goldsmiths College, which he founded with William Latham in 2008. He previously created and led the MSc Arts Computing (2004-7).
Frederic has initiated several "shape-based" projects mixing the Arts, Humanities, Social Sciences, and Computing, including CyberCity and CyberMonument (late 1990's), Digital sculpting (with the Mid-Ocean Studio, 2002-5), and Digital archaeology (co-founder of the SHAPE lab. at Brown University, established in 1999).
He received his B.Eng. in Electrical Engineering, with honors in aeronautics, from the University of Montreal, his M.Eng. from McGill University in Computer Vision and Biomedical imagery, and his Ph.D. from Brown University (in 3D shape representation and computational geometry).
His current research interests incorporate ideas from computer vision, together with the physics of waves and shocks and their modelling in modern mathematics via singularity theory. Frederic is also working on perceptual models grounded in geometry, based in part on Gestalt theory.