Speaker and Abstract
Affiliation: RWTH Aachen University
Title: Integrators for Consistency Management or How to Keep the Results of Different Developers Consistent?
Abstract: Within the Collaborative Research Center 476 IMPROVE and the Transfer Center 61, both funded by the German Research Foundation (DFG) and comparable to a center of excellence, we have studied novel tool support for design processes in chemical engineering. Both projects were carried out jointly by engineering and computer science groups at RWTH Aachen University. The concepts and results are also applicable to software construction processes.
One subproject addresses the question of how to keep the results of different developers consistent with each other. Consistency, or traceability, is a current topic, especially in connection with model-driven development. We have been studying interactive and incremental tools for achieving consistency, called integrators, for many years. The tools are systematically derived from graph specifications using a meta-tool environment. Rules for consistency management can be specified before the tool construction process, but also by the user during tool application. Our findings are being used in industry.
In the first part of the talk, we report on the activities of the Software Engineering group. In the second part, we give an overview of the CRC and TC activities. In the third part, we discuss our findings on integrator tool construction and maintenance.
Biography of Speaker:
Prof. Nagl is the author of five books and editor/coeditor of about 20 books. He has authored or coauthored 125 articles in scientific journals, proceedings, and books.
He authored the first book worldwide on graph rewriting, the first book worldwide on software architectures, and the first book on Ada in Germany.
He was speaker of the DFG research group SUKITS (1990-1997), speaker of the DFG Collaborative Research Center SFB 476 (1997-2006), and speaker of the Transfer Center TB 61, also financed by DFG (2006-2009). SUKITS was on tool integration in mechanical engineering; the SFB and the TB were on tool integration in chemical and plastics engineering.
The books on these projects are outstanding.
At RWTH, Computer Science 3, many further application-oriented projects based on graph technology, tools, and architectures were and are being carried out, in areas such as telecommunication, automotive, business administration, civil engineering, eHome, and authoring.
Prof. Nagl has supervised about 300 Master's theses and about 50 doctoral dissertations. Sixteen former Ph.D. students of his group are now professors.
Academic Positions and Honors
Prof. Nagl was speaker of Computer Science at RWTH Aachen University from Oct. 2005 to December 2007, after having had various other positions in the department, faculty, and university.
He was vice-speaker of the Aachen Informatics Industry Club (REGINA) from March 2000 to December 2007, and is now an honorary member, and he was a Member of the Board of Forum Informatik at RWTH Aachen University from 1988 to 2007.
He was Head of the “Fakultätentag Informatik” (German Informatics Faculty Deans' Conference) from January 2006 to December 2008, and he is now on the board of that association.
He was Head of 4ING, the Association of the Faculty Conferences of Mechanical Engineering/Process Engineering, Electrical Engineering/Information Technology, Civil Engineering/Geodesy, and Informatics in Germany, from January 2007 to December 2008; he is now on the board of that association and an honorary member.
He is a Fellow of the Gesellschaft für Informatik, Germany's association for computing. He is Deputy Speaker of the Accreditation Committee of EQANIE, the European Quality Assurance Network for Informatics Education.
He is on the Steering Committee for Faculty Evaluations of Informatics Europe. He received an honorary doctorate from the University of Paderborn in 2010.
Affiliation: McGill University
All Computer Science
Title: SOCS Research Overviews
Abstract: This colloquium aims at giving an overview of the research being done within the various groups at SOCS. Several professors will present short 5-10 minute talks about their exciting research. Attendance at the colloquium is mandatory for new M.Sc. students. All students and professors are invited.
Biography of Speaker:
Affiliation: University of Ottawa
Graphics / Vision / Robotics
Title: Measurement-based Modelling of Haptic Textures
Abstract: Haptic texturing is a widely used and effective technique to increase
the realism of virtual reality applications. The sensation of stroking
a physical surface with a tool is caused by the geometrical and
physical attributes of the surface; attributes that we need to measure
and model to improve virtual reality applications. In this
presentation, I discuss an interactive and mobile scanning system for
the acquisition and synthesis of haptic textures that consists of a
visually tracked hand-held touch probe. We discuss two models that we
have estimated from these measurements: one model is based on the
measured height-profiles of the surface, while the other directly
models the accelerations recorded with the probe. We estimate a
spatial distribution of infinite-impulse-response filters that operate
in the time domain to record the vibratory accelerations. Our results
show that the IIR model is effective in representing varying roughness
characteristics of both regular-patterned surfaces and stochastic
surfaces. Besides roughness, we have also developed a method to estimate
the contact stiffness of an object based solely on the accelerations
and forces measured while stroking a surface with the hand-held
probe. We were able to show an experimental relationship between the
estimated stiffness and the contact stiffness observed during
compression. We conclude our presentation with a haptic application
utilizing our texture models.
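As a rough illustration of the modelling approach described above (a spatial distribution of IIR filters operating in the time domain), the sketch below builds a toy grid of per-patch IIR filters and synthesizes an acceleration signal for a probe position. The filter coefficients, class names, and patch layout are all invented for illustration; this is not the speaker's system.

```python
# Illustrative sketch: one IIR filter per surface patch; the probe position
# selects a filter, which shapes an excitation into a vibratory acceleration.

def iir_filter(b, a, x):
    """Apply a direct-form IIR filter with numerator b and denominator a
    (a[0] assumed to be 1.0) to the input sequence x."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

class TexturePatchGrid:
    """Map a 2D surface position to the IIR filter of its patch."""
    def __init__(self, filters, patch_size):
        self.filters = filters          # dict: (i, j) -> (b, a)
        self.patch_size = patch_size

    def synthesize(self, x_pos, y_pos, excitation):
        i, j = int(x_pos // self.patch_size), int(y_pos // self.patch_size)
        b, a = self.filters[(i, j)]
        return iir_filter(b, a, excitation)

# Two patches with made-up coefficients giving different "roughness":
grid = TexturePatchGrid(
    filters={(0, 0): ([0.2], [1.0, -0.5]),   # smoother: decaying response
             (1, 0): ([1.0], [1.0, 0.8])},   # rougher: oscillatory response
    patch_size=10.0)

impulse = [1.0] + [0.0] * 4
smooth = grid.synthesize(2.0, 3.0, impulse)   # patch (0, 0)
rough = grid.synthesize(12.0, 3.0, impulse)   # patch (1, 0)
```

In the measurement-based setting, the per-patch coefficients would instead be estimated from recorded accelerations rather than chosen by hand.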
Biography of Speaker:
Jochen Lang received the M.Sc. degree in computer science from York
University, Canada, and the Ph.D. degree in computer science from the
University of British Columbia, in 2001. From 2002 to 2004, he was a
Postdoctoral Researcher with the Max-Planck-Institut für Informatik,
Saarbrücken, Germany. He is currently an Associate Professor with the
School of Electrical Engineering and Computer Science, University of
Ottawa, where he is a member of the Distributed and Collaborative
Virtual Environments Research Laboratory and the Vision, Imaging,
Video and Audio Research Laboratory. His research focuses on
measurement-based modeling in the areas of computer graphics and
haptics. He is working on image-based models, physical models of 3D
deformable objects for tracking, and on modelling and rendering for haptics.
Affiliation: University of Ottawa
Title: Contribution of applied algorithms to applied computing
Abstract: There are many attempts to bring together computer scientists, applied mathematicians, and engineers to discuss advanced computing for scientific, engineering, and practical problems. This talk is about the role and contribution of applied algorithms within applied computing. It will discuss some specific areas where the design and analysis of algorithms is believed to be the key
ingredient in solving problems that are often large and complex and must cope with tight timing constraints. The talk is based on the recent Handbook of Applied Algorithms (Wiley, March 2008), co-edited by the speaker. The featured application areas for algorithms and discrete mathematics include computational biology, computational chemistry, wireless networks, Internet data streams, computer vision, and emergent systems. Techniques identified as important include graph theory, game theory, data mining, and evolutionary, combinatorial, cryptographic, routing, and localized algorithms.
Biography of Speaker:
Ivan Stojmenovic received his Ph.D. degree in mathematics. He has held regular and visiting positions in Serbia, Japan, the USA, Canada, France, Mexico, Spain, the UK (as Chair in Applied Computing at the University of Birmingham), Hong Kong, Brazil, Taiwan, and China, and is a Full Professor at the University of Ottawa, Canada, and an Adjunct Professor at the University of Novi Sad, Serbia. He has published over 250 papers, and has edited seven books on wireless, ad hoc, sensor and actuator networks and applied algorithms with Wiley. He is an editor of over a dozen journals, editor-in-chief of IEEE Transactions on Parallel and Distributed Systems (from January 2010), and founder and editor-in-chief of three journals (MVLSC, IJPEDS, and AHSWN). Stojmenovic is one of about 500 computer science researchers with an h-index of at least 40, and has over 11,000 citations. He received three best paper awards and a Fast Breaking Paper award for October 2003 from Thomson ISI ESI. He is a recipient of the Royal Society Research Merit Award, UK. He was elected to IEEE Fellow status (Communications Society, class of 2008), and is an IEEE CS Distinguished Visitor for 2010-12. He received the Excellence in Research Award of the University of Ottawa in 2009. Stojmenovic has chaired and/or organized over 60 workshops and conferences, and has served on over 200 program committees. He was program co-chair at IEEE PIMRC 2008, IEEE AINA-07, IEEE MASS-04&07, EUC-05&08-10, AdHocNow08, IFIP WSAN08, WONS-05, MSN-05&06, and ISPA-05&07, founded workshop series at IEEE MASS, ICDCS, DCOSS, WoWMoM, ACM Mobihoc, IEEE/ACM CPSCom, FCST, and MSN, and is or was Workshop Chair at IEEE INFOCOM 2011, IEEE MASS-09, and ACM Mobihoc-07&08.
Affiliation: INRIA-Saclay
Title: Towards a broad spectrum proof certificate
Abstract: Computational logic systems, such as theorem provers and model
checkers, produce evidence of a successful proof in an assortment of (often ad hoc) formats. Unfortunately, the evidence generated by one prover is seldom readable by another prover or even by a future version of itself. As a result, provers seldom trust and use proofs from other provers. This situation is made all the more regrettable given that logic and (formal) proof are certainly candidates for universally accepted standards. I will outline some recent work on designing documents, called proof certificates, that satisfy the following requirements: they must be (i) checkable by simple proof
checkers, (ii) flexible enough that existing provers can conveniently produce such certificates from their internal evidence of proof, (iii) directly related to proof formalisms used within the structural proof theory literature, and (iv) able to elide some proof information with the expectation that a proof checker can reconstruct the missing information using bounded and structured proof search. We consider various consequences of these desiderata, including how they
can mix computation and deduction and what they mean for the
establishment of marketplaces and libraries of proofs.
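The requirements above can be made concrete with a toy example. The sketch below (an invented format, not the speaker's design) encodes a certificate as a list of modus ponens steps over implication formulas, checked by a very simple checker; a "gap" step elides its justification, and the checker reconstructs it with a bounded, one-step proof search, in the spirit of requirements (i) and (iv).

```python
# Toy proof certificate checker: Hilbert-style modus ponens plus "gap"
# steps that the checker fills in by a bounded (one-step) search.

def check(premises, certificate, goal):
    """Return True iff the certificate derives goal from the premises.
    Formulas are tuples: atoms are strings, ('->', p, q) is implication."""
    derived = list(premises)
    for step in certificate:
        kind = step[0]
        if kind == "mp":                 # step: ("mp", index of p->q, index of p)
            imp, ant = derived[step[1]], derived[step[2]]
            if not (isinstance(imp, tuple) and imp[0] == "->" and imp[1] == ant):
                return False
            derived.append(imp[2])
        elif kind == "gap":              # step: ("gap", claimed formula)
            target = step[1]             # reconstruct via one modus ponens step
            found = any(
                isinstance(imp, tuple) and imp[0] == "->"
                and imp[2] == target and imp[1] in derived
                for imp in derived)
            if not found:
                return False
            derived.append(target)
        else:
            return False
    return goal in derived

# p, p->q, q->r |- r, with the derivation of q elided as a gap.
p, q, r = "p", "q", "r"
premises = [p, ("->", p, q), ("->", q, r)]
cert = [("gap", q), ("mp", 2, 3)]
ok = check(premises, cert, r)
```

Note that the checker, not the producing prover, does the small amount of search needed to close the gap; a real broad-spectrum format would bound and structure that search far more carefully.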
Biography of Speaker:
Dale Miller received his PhD in Mathematics in 1983 from Carnegie
Mellon University. He has been a professor at the University of
Pennsylvania and Ecole Polytechnique (France) and a Department Head at the Pennsylvania State University. He is currently Director of Research at INRIA-Saclay, where he is the Scientific Leader of the INRIA project Parsifal.
Miller is the Editor-in-Chief of the ACM Transactions on Computational Logic and has editorial duties with the J. of Automated Reasoning, J. of Applied Logic, J. of Logic and Computation, and Theory and Practice of Logic Programming. He has given numerous invited talks at major conferences (e.g., CLMPS 2011, APLAS 2010, IJCAR 2006, CSL 2004) as well as numerous invited tutorials (e.g., PSL 2011, ISCL 2011, Wollic 2002). He is the recipient of the 2011 LICS Test-of-Time award for a paper he co-authored in LICS 1991.
Miller works on many topics in the general area of computational logic, including automated reasoning, logic programming, proof theory, unification theory, operational semantics, and, most recently, proof certificates. He is probably best known for his work demonstrating computational uses of linear logic and higher-order logic.
Affiliation: University of Toronto
Title: Getting a Grip on Delays in Packet Networks
Abstract: Delay analysis for packet networks is notoriously hard. Statistical properties of traffic, link scheduling, and subtle correlations between traffic at different nodes increase the difficulty of characterizing the variable portion of delays. This talk discusses recent progress on the end-to-end delay analysis for a traffic flow in a packet network, using a stochastic network calculus approach. We seek answers to the following questions: What is the relative impact of scheduling and statistical multiplexing on determining delays at a packet switch? What are the scaling properties of end-to-end delays as the number of traversed switches increases? Does the impact of packet scheduling algorithms diminish when network paths grow large? A key finding is that the delays of a flow traversing a sequence of nodes and experiencing cross traffic at each node scale faster than linearly in the number of nodes when the traffic does not satisfy independence assumptions. More precisely, for exponentially bounded packetized traffic, delays are shown to grow as O(H log H), where H is the number of nodes on the network path.
This superlinear scaling of delays is qualitatively different from the scaling behavior predicted by a worst-case analysis or by a
probabilistic analysis assuming independence of traffic arrivals.
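To see what "superlinear" means concretely, the small sketch below compares a bound that is linear in the path length H with one that grows as H log H (the constant is arbitrary and purely illustrative): the per-node share of a linear bound stays flat as the path grows, while the per-node share of an O(H log H) bound keeps increasing.

```python
# Numeric illustration of linear vs. O(H log H) growth in the path length H.
import math

def linear_bound(H, c=1.0):
    """A bound that grows linearly in the number of nodes H."""
    return c * H

def superlinear_bound(H, c=1.0):
    """A bound that grows as H log H, as in the talk's scaling result."""
    return c * H * math.log(H)

# Per-node contribution (bound / H) for increasing path lengths:
per_node_linear = {H: linear_bound(H) / H for H in (2, 8, 32, 128)}
per_node_super = {H: superlinear_bound(H) / H for H in (2, 8, 32, 128)}
```

The linear per-node share is constant; the H log H per-node share grows like log H, which is the qualitative difference between the independence-based prediction and the correlated-traffic result.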
Biography of Speaker:
Jörg Liebeherr received the Ph.D. degree in Computer Science from the Georgia Institute of Technology in 1991. He is currently with the Department of Electrical and Computer Engineering of the University of Toronto as the Nortel Chair of Network Architecture and Services. He is a Fellow of the IEEE. He served on the Board of Governors of the IEEE Communications Society in 2003-2005, and as chair of the IEEE Communications Society Technical Committee on Computer Communications in 2004-2005. He was Editor-in-Chief of IEEE Network in 1999-2000, and an Associate Editor of IEEE/ACM Transactions on Networking and several other journals. He received an NSF CAREER award in 1996, a best paper award (as co-author) at ACM Sigmetrics 2005, and an Outstanding Service award from the IEEE ComSoc Technical Committee on Computer Communications in 2006. He was co-chair of ACM Sigcomm 2011, which took place in Toronto in August.
Affiliation: Technische Universität Darmstadt
Title: Modular Reasoning in Aspect-Oriented Programs through Join Point Interfaces
Abstract: Important security concerns, such as access control and runtime
monitoring, can be conveniently implemented using aspect-oriented
programming languages. While aspect-oriented programming supports the
modular definition of such crosscutting concerns, most approaches to
aspect-oriented programming fail, however, to improve, or even
preserve, modular reasoning. The main problem is that aspects usually
carry, through their pointcuts, explicit references to the base code.
These dependencies make programs fragile and hard to reason about.
In this talk, we discuss how to separate base code and aspects using
Join Point Interfaces, which are contracts between aspects and base
code. Base code can define pointcuts that expose selected join points
through a Join Point Interface. Conversely, an aspect can offer to
advise join points that provide a given Join Point Interface.
Crucially, however, aspects themselves cannot contain pointcuts, and
hence cannot refer to base code elements.
In addition, we will discuss Closure Joinpoints, a mechanism that
allows base code to explicitly announce events to aspects. We will
discuss why many previous attempts to define such a language
construct fail, and how Closure Joinpoints integrate with Join Point
Interfaces to yield a language construct with a clear semantics,
avoiding unwanted surprises.
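The decoupling idea behind Join Point Interfaces can be sketched in plain Python (not an aspect language, and not the speaker's implementation; all names are invented): the base code announces join points only through a named interface, and advice is registered against that interface, never against base code elements.

```python
# Sketch of the JPI idea: the interface is the only contract shared by
# base code and aspects, so neither side refers to the other directly.

class JoinPointInterface:
    """Contract between base code and aspects: a named event exposing a
    fixed set of parameters."""
    def __init__(self, name, params):
        self.name, self.params = name, tuple(params)
        self._advice = []

    def advise(self, fn):
        """Aspect side: register advice against the interface only."""
        self._advice.append(fn)
        return fn

    def announce(self, **kwargs):
        """Base code side: expose a join point through the interface."""
        assert set(kwargs) == set(self.params), "interface contract violated"
        for fn in self._advice:
            fn(**kwargs)

# Base code declares the interface and announces join points through it.
CheckedAccess = JoinPointInterface("CheckedAccess", ["user", "resource"])

audit_log = []

@CheckedAccess.advise          # aspect: mentions only CheckedAccess
def audit(user, resource):
    audit_log.append((user, resource))

def open_resource(user, resource):   # base code: no reference to aspects
    CheckedAccess.announce(user=user, resource=resource)
    return "opened " + resource

open_resource("alice", "/secret")
```

Because the advice mentions only `CheckedAccess`, the base code can be refactored freely as long as it honors the interface, which is the modular-reasoning benefit the talk describes.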
Biography of Speaker:
Eric Bodden is currently a research group leader at the European
Center for Security and Privacy by Design (EC SPRIDE) and a principal
investigator at the Center for Advanced Security Research Darmstadt
(CASED). Eric's work focuses on how programming tools and languages
can be enhanced to allow programmers to more easily design their
programs in a secure way. Eric received his PhD from the McGill School
of Computer Science in 2009, under the supervision of Prof. Laurie
Hendren, for his work on the Clara framework for partially evaluating
AspectJ-based runtime monitors ahead of time.
Affiliation: Dept. of Computer Science, University of Massachusetts, Amherst
Title: Multithreaded Programming for Mere Mortals
Abstract: The shift from single to multiple core architectures means that
programmers will increasingly be forced to write concurrent,
multithreaded programs to increase application performance.
Unfortunately, it is challenging to write multithreaded programs that
are both correct and fast. This talk presents two software-only systems
that aim to dramatically simplify both tasks.
The key problem with getting multithreaded programs right is
non-determinism. Programs with data races behave differently depending
on the vagaries of thread scheduling: different runs of the same
multithreaded program can unexpectedly produce different results. These
"Heisenbugs" greatly complicate debugging, and eliminating them requires
extensive testing to account for possible thread interleavings.
We attack the problem of non-determinism with Dthreads, an efficient
deterministic multithreading system for general-purpose, unmodified C/C++
programs. Dthreads directly replaces the pthreads library and eliminates
races by making all executions deterministic. Not only does Dthreads
dramatically outperform a state-of-the-art deterministic runtime system,
it often matches—and occasionally exceeds—the performance of pthreads.
While correctness is important, it is not enough. Multithreaded
applications also need to be efficient and scalable. Key to achieving
high performance and scalability is reducing contention for shared
resources. However, even when sharing has been reduced to a minimum,
threads can still suffer from false sharing. Multiple objects that are
not logically shared can end up on the same cache line, leading to
invalidation traffic. False sharing is insidious: not only can it be
disastrous to performance—causing performance to plummet by as much as
an order of magnitude—but it is also difficult to diagnose and track down.
We have developed two systems to attack the problem of false sharing:
Sheriff-Detect and Sheriff-Protect. Sheriff-Detect is a false sharing detection
tool that is precise (no false positives), runs with low overhead (on
average, 20%), and is accurate, pinpointing the exact objects involved
in false sharing. When rewriting a program to fix false sharing is
infeasible (source code is unavailable, or padding objects would consume
too much memory), programmers can instead use Sheriff-Protect.
Sheriff-Protect is a runtime system that automatically eliminates most
of the performance impact of false sharing. Sheriff-Protect can improve
performance by up to 9X without the need for programmer intervention.
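The false sharing condition described above can be stated very simply: two logically independent objects falsely share when their bytes land on the same cache line. The sketch below (not the Sheriff tools themselves; a 64-byte line size is assumed) checks that condition for given object offsets and shows why padding to a full cache line is the classic fix.

```python
# Back-of-the-envelope check for the false sharing condition, assuming
# 64-byte cache lines.

LINE = 64  # assumed cache line size in bytes

def cache_lines(offset, size):
    """Set of cache-line indices an object at byte `offset` occupies."""
    return set(range(offset // LINE, (offset + size - 1) // LINE + 1))

def falsely_share(off_a, size_a, off_b, size_b):
    """True if the two objects overlap on at least one cache line."""
    return bool(cache_lines(off_a, size_a) & cache_lines(off_b, size_b))

# Two per-thread 8-byte counters packed back to back: both sit on line 0,
# so writes by one thread invalidate the other's cached copy.
packed = falsely_share(0, 8, 8, 8)

# The classic fix: pad each counter out to its own cache line.
padded = falsely_share(0, 8, LINE, 8)
```

Tools like the detection tool described in the talk have to do this analysis for actual heap layouts at runtime, where the offending objects are not visible in the source code.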
Biography of Speaker:
Emery Berger is an Associate Professor in the Department of Computer
Science at the University of Massachusetts Amherst. He graduated with a
Ph.D. in Computer Science from the University of Texas at Austin in
2002. Professor Berger has been a Visiting Scientist at Microsoft
Research and at the Universitat Politècnica de Catalunya (UPC) /
Barcelona Supercomputing Center (BSC).
Professor Berger's research spans programming languages, runtime
systems, and operating systems, with a particular focus on systems that
transparently improve reliability and performance. He is the creator of
various widely-used software systems including Hoard, a fast and
scalable memory manager that accelerates multithreaded applications (and
on which the Mac OS X memory manager is based), and DieHard, an
error-avoiding memory manager that directly influenced the design of the
Windows 7 Fault-Tolerant Heap.
His honors include a Microsoft Research Fellowship (2001), an NSF CAREER
Award (2003), a Lilly Teaching Fellowship (2006), and a Best Paper Award
at FAST 2007. Professor Berger served as the General Chair of the Memory
Systems Performance and Correctness workshop (MSPC 2008), Program Chair
of the 2010 ACM SIGPLAN/SIGOPS International Conference on Virtual
Execution Environments (VEE 2010), and is currently Program Chair for
the Workshop on Determinism and Correctness in Parallel Programming,
and an Associate Editor of the ACM Transactions on Programming Languages
and Systems. He has served on numerous program committees.
In his spare time, Professor Berger rides his bicycle, travels to
foreign countries, converses in a variety of Romance languages, consumes
copious amounts of espresso, and continues his work on a cure for the
common cold (which he is certain must somehow involve coffee).
Affiliation: University of Antwerp
Title: Promises and Challenges of Model-Driven Engineering
Abstract: The complexity of the (software-intensive) systems we build, as well as the demands put on the quality, safety, and maintainability of these systems, has grown drastically over the last decades. To tackle this complexity, Model-Driven Engineering (MDE) treats models, in various formalisms, as first-class artifacts. Such models may be obtained by reverse engineering of existing software artifacts, for the purposes of analysis, optimization, and evolution.
Increasingly, however, software is no longer the primary artifact but rather synthesized from more abstract models. In an attempt to minimize "accidental complexity", the most appropriate modelling languages or formalisms are used for each specific (sub-)problem and phase in the development process.
Domain-Specific Modelling (DSM) in particular tries to bridge the gap between the problem domain and the technical solution domain. This has led to a proliferation of (software) modelling languages. This talk will introduce MDE concepts and techniques, as well as the research challenges they introduce.
Biography of Speaker:
Hans Vangheluwe is a professor in the Department of Mathematics and Computer Science at the University of Antwerp, Belgium and an adjunct professor in the School of Computer Science at McGill University in Montreal, Canada. He holds M.Sc. degrees in theoretical physics and in computer science as well as a D.Sc., all from Ghent University, Belgium.
He works on theoretical foundations as well as on techniques, such as meta-modelling and model transformation, and tools for the modelling and simulation-based design of complex software-intensive systems. Current application domains include automotive, smartphone applications, and the Statechart-based modelling, analysis, and synthesis of browser-based complex user interfaces. With Juan de Lara, he developed AToM3, A Tool for Multi-formalism and Meta-Modelling, which he uses to prototype new MDE solutions.