

Fall 2007 Colloquium Schedule
2007/09/07 Speaker: SOCS Professors
Affiliation: McGill University
Title: SOCS Research Overview - Part I
Abstract: This Colloquium, as well as the next one on Sept. 14th, aims at giving an overview of the research being done within the various Computer Science research groups at SOCS. Attendance at both events is mandatory for new M.Sc. students, but all students and professors are invited to hear about what's happening in our department.


2007/09/14 Speaker: SOCS Professors
Affiliation: McGill University
Title: SOCS Research Overview - Part II
Abstract: This Colloquium aims at giving an overview of the research being done within the various Computer Science research groups at SOCS.
IMPORTANT: The colloquium talks on Friday the 14th of September are held in MC 103!


2007/09/21 Speaker: Michael Shamos
Affiliation: Carnegie Mellon University
Title: What happened to 18,000 votes: the results of the Sarasota source code audit
Abstract: On Election Day in the U.S. in 2006, Republican Vern Buchanan beat Democrat Christine Jennings in Florida's 13th District U.S. Congressional race. The winning margin was 369 votes out of a total of 238,249 cast. However, when the results were tallied, 18,412 electronic ballots from Sarasota County, more than 15% of the ballots, were found to contain no vote for either candidate. Either those votes were properly cast but lost or misrecorded somehow, or they were never cast at all. Sarasota used iVotronic direct-recording electronic machines without a voter-verified paper trail.
Within 24 hours after the election, Florida's Secretary of State started an investigation. Jennings filed a lawsuit in Florida to overturn the election, alleging that the machines were flawed. The U.S. House of Representatives has appointed a subcommittee to look into the matter and another investigation by the Government Accountability Office is underway. Florida formed a task force of security and election experts to determine whether anything in the machines' source code could have caused such a high undervote.
The speaker was a member of that task force along with seven others, including Alec Yasinsac of Florida State University, Matt Bishop of U.C. Davis and David Wagner of U.C. Berkeley. The talk will detail the methodology used by the task force and its findings.

The colloquium talks are held in McConnell 11 during the fall semester! Biography of Speaker:

The speaker is Distinguished Career Professor in the Institute for Software Research of the School of Computer Science at Carnegie Mellon University, where he directs graduate programs in eBusiness. He has been associated with Carnegie Mellon since 1975. Since 1980, he has been an examiner of computerized voting systems for Pennsylvania, Texas, West Virginia, Delaware, Nevada and Massachusetts and has conducted more than 120 voting system examinations. He has testified on electronic voting five times before the U.S. Congress and before several state legislatures.
Dr. Shamos has been an expert witness in five recent lawsuits involving electronic voting. In August 2007 the highest court of Maryland pronounced him the “true voice of reason” on electronic voting matters. He was the author in 1993 of “Electronic Voting — Evaluating the Threat” and in 2004 of “Paper v. Electronic Voting Records — An Assessment,” both presented at the ACM Conference on Computers, Freedom & Privacy.


2007/10/05 Speaker: Mihaela Ulieru
Affiliation: University of New Brunswick
Title: Complex Networks as Control Paradigm for Complex Systems: Design for Resilience of Networked Critica
Abstract: With ICT pervading everyday objects and infrastructures, the ‘Future Internet’ is envisioned to leap towards a radical transformation from how we know it today (a mere communication highway) into a vast hybrid network seamlessly integrating physical (mobile or static) systems to power, control or operate virtually any device, appliance or system/infrastructure. Manipulation of the physical world occurs locally, but control and observability are enabled globally across an overlay network that we refer to as an ‘eNetwork’. eNetworks enable the spontaneous creation of collaborative societies of artifacts, referred to as “cyber-physical ecosystems” (CPE). In such “opportunistic ecosystems”, single devices, departments or enterprises become part of a larger and more complex infrastructure in which the individual properties or attributes of single entities are dynamically combined to achieve an emergent desired behavior of the ecosystem. While we are busy crafting these technological advances, they have started to live a life of their own and have already caught us in their agendas unexpectedly, mainly because they have evolved to an unanticipated complexity that transcends the boundaries of the disciplines under which these artifacts were conceived and incrementally crafted. Take, for example, the Internet: when a computer network grows so much that it can be looked at as a statistical ensemble, understanding the computer hardware architecture or its operating system design is simply not relevant to the task of identifying emergent collective qualities of the network. It is extremely hard, if not impossible, to control a large-scale eNetworked CPE by building a global-logic ‘top-down’ system able to adapt rapidly and adequately to changes by instructing each element what to do at each step. Using the latest knowledge of complexity science, however, we can envision strategies that mimic the natural adaptation of highly evolved robust systems.
When one gets a collective behavior as an emergent character of a multitude of elements, adaptation comes naturally, and only in the regions where it is needed. To date, no standard theory and practice exist for designing systems whose self-organized complexity emerges bottom-up into a resilient structure ensuring overall system robustness under unexpected (cascading) failure or malicious attack. The challenge is to design the right interaction protocols and feedback mechanisms that will ensure self-organization of the system’s structure in an optimal way. Rooted in my experience with adaptive manufacturing networks, the talk will propose a unified methodology for designing resilient eNetworked infrastructures, embracing scalability, robust control, formal validation and computation. The methodology realizes self-organization by integrating modeling and analysis tools that enable the identification of scalability properties of complex networks with the latest standards for robust control and formal validation of reconfigurable production systems, and with computing formalisms for distributed intelligent systems (multi-agent and intelligent-agent paradigms). Biography of Speaker:

Professor Mihaela Ulieru has held the NSERC (Natural Sciences and Engineering Research Council)-funded Canada Research Chair in Adaptive Information Infrastructures for the eSociety in the Faculty of Computer Science at the University of New Brunswick since 2005, when she also established (with Canada Foundation for Innovation funding) the Adaptive Risk Management Laboratory (ARM Lab), which she leads, researching Complex Systems as a Control Paradigm for Complex Networks to develop Holistic Security Ecosystems. Her current research is focused on the cyberengineering of resilient eNetworks (Cyber-Physical Ecosystems) and their applications to security (critical infrastructure protection, emergency response management), e-Health (pandemic mitigation) and networked manufacturing. One highlight of her most recent endeavors is a collaborative project on 'Emulating the Mind'.


2007/10/19 Speaker: Henry Wolkowicz
Affiliation: University of Waterloo
Title: Sensor Network Localization, Euclidean Distance Matrix Completions, and Graph Realization
Abstract: Many applications use ad hoc wireless sensor networks for monitoring information, e.g., earthquake detection, ocean current flows, and weather. Typical networks include a large number of sensor nodes that gather data and communicate among themselves. The locations of a subset of the sensors are known; these sensor nodes are called anchors. From intercommunication among sensor nodes within a given (radio) range, we are able to establish approximate distances between a subset of the sensors and anchors. The sensor network localization problem, SNL, is to determine/estimate the locations of all the sensors from this partial information on the distances.
We model the problem by treating it as a nearest Euclidean Distance Matrix, EDM, problem with lower bounds formed using the radio range between pairs of nodes for which no distance exists. We also allow for existing upper bounds. When solving for the sensor locations, the problem is nonconvex and hard to solve exactly.
We study the semidefinite programming, SDP, relaxation of the sensor localization problem. The main point of the talk is to view SNL as a (nearest) EDM completion problem and to show the advantages for using this latter, well studied model.
The existence of anchors in the problem is not special. The set of anchors simply corresponds to a given fixed clique for the graph of the EDM problem. This results in the failure of the Slater constraint qualification for the SDP relaxation. We then show that we can take advantage of this liability. We can find the smallest face of the SDP cone that contains the feasible set and project the problem onto this face.
We next propose a method of projection when a large clique or a dense subgraph is identified in the underlying graph. This projection reduces the size, and improves the stability, of the relaxation.
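The EDM-to-coordinates link underlying this abstract can be illustrated with a minimal numpy sketch. This is not the speakers' algorithm (their work concerns SDP relaxations of *incomplete* EDMs); it only shows the classical multidimensional-scaling step that recovers point positions from a complete matrix of squared distances via the Gram matrix. The function name `points_from_edm` is mine.

```python
import numpy as np

def points_from_edm(D, dim):
    """Recover point coordinates (up to rigid motion) from a complete
    Euclidean Distance Matrix D of *squared* distances, via classical MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    G = -0.5 * J @ D @ J                  # Gram matrix of the centered points
    w, V = np.linalg.eigh(G)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]       # keep the top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Example: four points at the corners of the unit square.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.square(np.linalg.norm(X[:, None] - X[None, :], axis=-1))
Y = points_from_edm(D, dim=2)
# The recovered configuration reproduces the original pairwise distances.
D2 = np.square(np.linalg.norm(Y[:, None] - Y[None, :], axis=-1))
assert np.allclose(D, D2)
```

In the SNL setting discussed in the talk, D is only partially known, which is why the completion is posed as an SDP over the cone of positive semidefinite Gram matrices rather than solved by a direct eigendecomposition as above.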
(This research is in collaboration with: Yichuan Ding, Nathan Krislock, Veronica Piccialli, Jiawei Qian.) Biography of Speaker:

Henry Wolkowicz received his B.Sc., M.Sc., and Ph.D. from McGill University (the latter in 1978). He has been a Professor of Mathematics in the Department of Combinatorics and Optimization at the University of Waterloo since 1986. He has also been a Professor or visiting Professor at Dalhousie University, the University of Alberta, the University of Maryland, Emory University, the University of Delaware, and Princeton University. He was elected Chair of the SIAM Activity Group on Optimization for 2003-2006 and more recently was elected a SIAM officer.
Dr. Wolkowicz's research deals mainly with the development, theory, and testing of numerical algorithms for solving various optimization problems. In particular, his recent research deals with semidefinite programming problems. Applications of the algorithms include VLSI design; various scheduling problems, such as transit scheduling; and various constrained and unconstrained nonlinear minimization problems.


2007/10/26 Speaker: Joseph O'Rourke
Affiliation: Smith College
Area: Computational Geometry
Title: Geometric Folding Algorithms: Linkages, Origami, Polyhedra
Abstract: I will provide a sample of geometric folding algorithms in three areas, roughly one-dimensional (1D), 2D, and 3D. The folding of 1D linkages finds applications from robotics to protein folding. I will describe the recent resolution of a 25-year-old open problem, showing that a chain cannot lock in the plane, and connect this result to morphing in computer graphics. Folding 2D paper leads to questions in mathematical origami. Here I’ll describe the one-cut theorem: any straight-line drawing may be cut out of a folded piece of paper via one scissors cut. Unfolding the surface of 3D polyhedra has application to manufacturing, where shapes are cut out of sheets of aluminum and folded by metal-bending machines into 3D. I will highlight a long-unsolved problem, and discuss the recent resolution of a special case, unfolding polyhedra whose faces meet at right angles. Biography of Speaker:

Joseph O'Rourke obtained a Bachelors degree from St. Joseph's University in physics and mathematics. He then studied computer science at the University of Pennsylvania, from which he received the Ph.D. in 1980, after which he joined the faculty of Johns Hopkins University as an Assistant Professor. He was promoted to Associate Professor in 1985, and in 1988 left to found and chair the Computer Science department of Smith College, as the Olin Professor of Computer Science. Currently he is again chair of the department, recently completing a one-year appointment as Interim Director of Engineering.
He has received several grants and awards, including a Presidential Young Investigator Award in 1984, a Guggenheim Fellowship in 1987, and the NSF Director's Award for Distinguished Teaching Scholars in 2001. His research is in the field of computational geometry, where he has published a monograph (Oxford, 1987), a textbook (Cambridge, 1994; 2nd ed. 1998), coedited the 1500-page "Handbook of Discrete and Computational Geometry" (CRC Press, 1997; 2004), and most recently, a monograph on Folding & Unfolding, coauthored with Erik Demaine (Cambridge, 2007). Thirty-one of his more than one hundred papers published in journals and conference proceedings are coauthored with undergraduates.


2007/11/09 Speaker: Sven Dickinson
Affiliation: University of Toronto
Area: Computer Vision
Title: Object Categorization and the Need for Many-to-Many Matching
Abstract: Object recognition systems have their roots in the AI community, and originally addressed the problem of object categorization. These early systems, however, were limited by their inability to bridge the representational gap between low-level image features and high-level object models, hindered by the assumption of one-to-one correspondence between image and model features. Over the next thirty years, the mainstream recognition community moved steadily in the direction of exemplar recognition while narrowing the representational gap. The community is now returning to the categorization problem, and faces the same representational gap as its predecessors did. We review the evolution of object recognition systems and argue that bridging this representational gap requires an ability to match image and model features many-to-many. We review three formulations of the many-to-many matching problem as applied to model acquisition and object recognition. Biography of Speaker:

Sven Dickinson received the B.A.Sc. degree in systems design engineering from the University of Waterloo, in 1983, and the M.S. and Ph.D. degrees in computer science from the University of Maryland, in 1988 and 1991, respectively. He is currently Professor of Computer Science at the University of Toronto, where he served as Departmental Vice Chair from 2003-2006 and as Associate Professor from 2000-2007. From 1995-2000, he was an Assistant Professor of Computer Science at Rutgers University, where he also held a joint appointment in the Rutgers Center for Cognitive Science (RuCCS) and membership in the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS). From 1994-1995, he was a Research Assistant Professor in the Rutgers Center for Cognitive Science, and from 1991-1994, a Research Associate at the Artificial Intelligence Laboratory, University of Toronto.
Dr. Dickinson's research interests revolve around the problem of object recognition, in general, and generic object recognition, in particular. He has explored a multitude of generic shape representations, and their common representation as hierarchical graphs has led to his interest in inexact graph indexing and matching. His interest in shape representation and matching has also led to his research in object tracking, vision-based navigation, content-based image retrieval, and the use of language to guide perceptual grouping and object recognition. One of the focal points of his research is the problem of image abstraction, which he believes is critical in bridging the representational gap between exemplar-based and generic object recognition.