Speaker and Abstract
Affiliation: Toyota Technological Institute at Chicago
Title: Probabilistic Graphical Model for Protein Structure Prediction
Abstract: If we know the primary sequence of a protein, can we predict its three-dimensional structure by computational methods? This is one of the most important and difficult problems in computational molecular biology and has tremendous implications for protein functional study and drug discovery.
Existing computational methods for protein structure prediction can be broadly classified into two categories: template-based modeling (i.e., protein threading and homology modeling) and template-free modeling (i.e., ab initio folding). Template-based modeling predicts the structure of a protein using experimentally determined structures in the Protein Data Bank (PDB) as templates, while template-free modeling predicts protein structure without depending on a template.
This talk will present new probabilistic graphical models for knowledge-based protein structure prediction. In particular, it will present a regression-tree-based Conditional Random Fields (CRF) method for template-based modeling and a Conditional Random Fields/Conditional Neural Fields (CRF/CNF) method for template-free modeling. Experimental results indicate that our template-based method performs extremely well, especially on hard template-based modeling targets, and that our template-free method is also very promising for mainly-alpha proteins.
Biography of Speaker:
Born in Jiangxi, China, Jinbo Xu received his B.S. in Computer Science from the University of Science and Technology of China in 1996, his M.Sc. from the Chinese Academy of Sciences in 1999, and his Ph.D. from the University of Waterloo in 2003. He then spent one year as a research assistant professor at the University of Waterloo and one year as a postdoctoral fellow in the Department of Mathematics and the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.
Professor Xu's primary research interests are computational biology and bioinformatics, including homology search, protein structure prediction, and protein interaction prediction. He has developed several protein structure prediction tools, such as RAPTOR, ACE, and TreePack.
Affiliation: University of South Carolina
Title: Robotic Planning With Limited Sensing
Abstract: The utility of autonomous robots is limited by how effectively they can sense and interact with their environments. Because the information provided by sensors is limited and sometimes incorrect, such robots are often confronted with substantial and difficult-to-resolve uncertainty about the state of the world. This talk will present two lines of research that make progress toward autonomy in spite of this kind of uncertainty. First, we will describe new methods for navigation that allow mobile robots with limited sensing capabilities and noisy actuators to move through their environments in provably reliable ways. Second, we will discuss target tracking applications in which a robot or team of robots seeks to follow one or more moving targets under several different sensing and motion constraints. The overall theme is that many important tasks in robotics can be completed with surprisingly little sensing.
Biography of Speaker:
Jason O'Kane is an Assistant Professor in the Department of Computer Science and Engineering at the University of South Carolina. He earned his Ph.D. (2007) and M.S. (2005) degrees from the University of Illinois and his B.S. (2001) degree from Taylor University, all in Computer Science. His research spans algorithmic robotics, planning under uncertainty, and computational geometry.
Affiliation: Brandeis University
Title: Efficient Synthesis of a Class of Boolean Programs from I-O Data: Application to Reverse Engineering of Genetic Networks
Abstract: Genetic networks are used by biologists to express interactions among genes. These interactions determine the amount of each gene's product as time evolves. The amounts are represented by time series and correspond roughly to the data obtained from microarray experiments. The reverse engineering problem is to infer a genetic network from such time series. Current approaches to reverse engineering usually involve costly exhaustive searches.
We propose a new method, based on discrete Jacobians, that is significantly more efficient than existing ones. From a theoretical perspective, it is convenient to describe the advocated scheme as the synthesis (i.e., inference) of a certain kind of Boolean program from specified input-output data.
In this talk I will present a comparative study of exhaustive search approaches and the suggested discrete Jacobian method. I will also put forward some possible hybrid techniques.
The discrete Jacobian approach may require biological experiments that are presently difficult to perform when the network comprises a substantial number of genes. Nevertheless, computational gains can be considerable, especially when each gene is influenced by a relatively small number of other genes. The described scheme can be extended to I-O data consisting of bounded integers instead of Booleans.
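To give a concrete feel for the synthesis problem, the following sketch (purely illustrative; the function names and the binary encoding are invented here, and it implements only the exhaustive-search baseline, not the discrete Jacobian method) infers a Boolean update rule for one gene from a binary time series:

```python
from itertools import combinations

def infer_rule(series, target, max_inputs=2):
    """Exhaustive-search baseline: find a small set of regulator genes and a
    truth table f such that x_target(t+1) = f(regulators at time t) holds for
    every observed transition. Cost grows combinatorially with max_inputs."""
    n = len(series[0])
    transitions = list(zip(series, series[1:]))
    for k in range(1, max_inputs + 1):
        for regulators in combinations(range(n), k):
            table, consistent = {}, True
            for state, nxt in transitions:
                key = tuple(state[g] for g in regulators)
                # A contradiction means these regulators cannot explain the data.
                if table.setdefault(key, nxt[target]) != nxt[target]:
                    consistent = False
                    break
            if consistent:
                return regulators, table
    return None

# Toy 3-gene time series in which gene 2 follows gene 0 AND gene 1.
series = [(0,0,0), (0,1,0), (1,1,0), (1,0,1), (1,1,0), (0,0,1)]
regulators, table = infer_rule(series, target=2)
```

Roughly speaking, a discrete Jacobian approach instead examines pairs of input states differing in a single coordinate to determine which variables a rule actually depends on, sidestepping the enumeration above.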
The talk should also be of interest to computer scientists working in machine learning and data mining.
Biography of Speaker:
Jacques Cohen has been at Brandeis University since 1968 and holds the TJX/Feldberg Chair in Computer Science. He has a broad interest in the field, his publications covering research topics in analysis of algorithms, parsing and compiling, memory management, logic and constraint logic programming, and parallelism.
In the past six years Professor Cohen has concentrated his research and teaching in the area of computational biology, or bioinformatics. Within that area, his topics of interest are grammars for gene finding, the inverse protein folding problem, and the simulation and modeling of cell regulation.
Affiliation: Boston College
Title: Some Recent Contributions to RNAomics
Abstract: In this talk, we survey some recent results of our group concerning RNAomics. Our findings are grouped into four themes: energetics, structure, kinetics, and gene finding. Energetics: We describe how to compute the partition function of the ensemble of locally optimal secondary structures, which form kinetic traps in the folding landscape. We then describe an application of Wang-Landau non-Boltzmannian sampling to estimate the partition function, and hence the ensemble free energy, heat capacity, and melting temperature. The advantage of the Wang-Landau approach is that it can be applied to structures that include pseudoknots, which render the partition function computation an NP-hard problem. Structure: We describe how to segment large RNA sequences into coherent domains, and illustrate this approach for both secondary and tertiary structure. We exactly compute (rather than approximate) the expected distance between the 5′ and 3′ ends of an RNA molecule, and compute the asymptotic expected distance using complex analysis. Kinetics: We describe how to compute a near-optimal folding pathway between two low-energy secondary structures, such as the gene-on and gene-off structures of a riboswitch. Gene Finding: We describe novel parametric partition function computations of hairpin and multiloop formation, and illustrate the use of these features in support vector machines to classify various RNA families from the Rfam database.
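For readers unfamiliar with partition functions over RNA secondary structures, the following minimal sketch (a McCaskill-style recursion with a deliberately crude energy model of -1 kcal/mol per base pair, not the energy model used in the work above) computes the Boltzmann partition function for pseudoknot-free structures:

```python
import math

def partition_function(seq, theta=3, T=310.15):
    """Z = sum over all pseudoknot-free secondary structures of exp(-E/RT),
    where E is -1 kcal/mol per base pair (toy model). theta is the minimum
    number of unpaired bases enclosed by any pair (hairpin loop size)."""
    RT = 0.0019872 * T  # gas constant in kcal/(mol*K) times temperature in K
    can_pair = {("A","U"), ("U","A"), ("C","G"), ("G","C"), ("G","U"), ("U","G")}
    n = len(seq)
    # Z[i][j]: partition function of seq[i..j]; empty/short intervals count as 1
    Z = [[1.0] * n for _ in range(n)]
    for span in range(theta + 1, n):
        for i in range(n - span):
            j = i + span
            total = Z[i][j - 1]              # case: position j is unpaired
            for k in range(i, j - theta):    # case: j pairs with some position k
                if (seq[k], seq[j]) in can_pair:
                    left = Z[i][k - 1] if k > i else 1.0
                    total += left * Z[k + 1][j - 1] * math.exp(1.0 / RT)
            Z[i][j] = total
    return Z[0][n - 1]
```

The ensemble free energy mentioned above is then G = -RT ln Z. Once pseudoknots are allowed, this kind of dynamic program no longer applies, which is why sampling approaches such as Wang-Landau become attractive.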
The talk summarizes six recent papers, constituting joint work with the following collaborators: Y. Ding, I. Dotu, V. Ilyin, W.A. Lorenz, F. Lou, Y. Ponty, J.-M. Steyaert, P. Van Hentenryck.
Biography of Speaker:
With a background in mathematics and computer science, I moved into the Biology Department of Boston College in Fall 2002 after more than 20 years in Mathematics and Computer Science Departments: 1979-84 in Mathematics at the University of Paris VII and since 1984 in Computer Science at Boston College, with a hiatus where I held the Gentzen Chair of Theoretical Computer Science at the Ludwig-Maximilians-Universitaet in Munich and was the primary instigator of a graduate Bioinformatics Program which accepted students in 1999). Prior to moving into Biology, my primary research interest was in the interface of theoretical computer science and mathematics, including the following: Editorial Board of Notre Dame Journal of Formal Logic 1991-2003, publication of numerous journal and proceedings articles, co-organizer of three international meetings, co-editor of three research monographs, author of the 600 page research monograph Boolean Functions and Computation Models, jointly written with E. Kranakis, and published in Oct 2002. Tenure as Gentzen Chair of Theoretical Computer Science at Ludwig-Maximilians-Universitaet in Munich allowed a rapid move into Computational Biology/Bioinformatics, leading to the book Computational Molecular Biology: An Introduction, by P. Clote and R. Backofen (an assistant in my Munich group, who completed his post-Ph.D. Habilitationsschrift under my direction on the topic of protein structure prediction using constraint programming methods). On sabbatical in Fall 2000 in the Mathematics Department of M.I.T., I started the MIT Bioinformatics Seminar, and since that time have continued to co-organize this weekly seminar with Bonnie Berger of M.I.T. As a past co-chair for the session Proteins: Structure, Function and Evolution at Pacific Symposium on Biocomputing , I am now Editor of the Bioinformatics Programming Paradigms section of Wiley's Bioinformatics Encyclopedia, a project currently in preparation.
Affiliation: University of British Columbia
Title: Good Enough Software Systems: Tolerating Hardware Faults in Software
Abstract: Technology scaling combined with increasing design complexity has
increased the susceptibility of modern electronics to errors and
variations. In the past, such errors were handled by introducing
redundancy at the circuit or micro-architectural layers, thus masking
the errors from software. However, tight energy constraints and cost
margins will make such solutions infeasible in the future. Therefore,
the only viable solution is to expose the errors to the software
layers, and let them handle the errors. Modern software systems are
however ill-equipped to handle hardware errors, and are brittle in the
presence of unexpected variations. Our goal is to build software
systems that work satisfactorily in spite of a wide range of hardware
errors and variations. We call these "good enough software systems".
This talk will discuss two systems, Flikker and BackTrack, that embody
the good enough approach. Flikker is a software approach to save
energy in mobile systems by lowering the refresh rates of DRAMs.
Flikker partitions an application's state into critical and
non-critical data, and maps the two kinds of data to DRAM memories
having different refresh rates. We show that Flikker can achieve 30%
energy savings in a mobile system, with only marginal degradation in
performance and reliability. We also show that Flikker requires only
modest effort from programmers.
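To make the critical/non-critical split concrete, here is a toy software model of the idea (purely illustrative; Flikker itself operates at the level of DRAM refresh rates and memory mapping, and every name below is invented for this sketch). Reads from the notional low-refresh region may suffer occasional bit flips, while critical data is always returned intact:

```python
import random

class PartitionedMemory:
    """Toy model of Flikker-style partitioning: values tagged "noncritical"
    live in a notional low-refresh region, so each byte read back from them
    may have one bit flipped with a small probability; values tagged
    "critical" are always returned unchanged."""
    def __init__(self, flip_prob=0.001, seed=0):
        self.flip_prob = flip_prob
        self.rng = random.Random(seed)
        self.store = {}

    def write(self, key, data, tag="critical"):
        self.store[key] = (bytes(data), tag)

    def read(self, key):
        data, tag = self.store[key]
        if tag == "noncritical":
            # Flip one random bit in each byte, with probability flip_prob.
            data = bytes(
                b ^ (1 << self.rng.randrange(8))
                if self.rng.random() < self.flip_prob else b
                for b in data)
        return data

mem = PartitionedMemory(flip_prob=0.01)
mem.write("header", b"JPEG", tag="critical")          # must never be corrupted
mem.write("pixels", bytes(10000), tag="noncritical")  # occasional noise is tolerable
```

The design point is that the application, not the hardware, decides which data can tolerate errors, so reliability is spent only where it matters.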
BackTrack is a software technique to perform automated diagnosis of
intermittent hardware faults in a multi-core processor. Based only on
the crash dump of a program, BackTrack can isolate the faulty
instruction and functional units on the core on which the program was
executed. BackTrack needs no hardware support, nor does it require
additional tests to be run on the faulty core. We show that BackTrack
can diagnose over 75% of intermittent faults with high accuracy, which,
in turn, can facilitate fine-grained recovery and reconfiguration.
Biography of Speaker:
Karthik Pattabiraman is an assistant professor of electrical and
computer engineering at the University of British Columbia (UBC). He
received his M.S. and Ph.D. degrees from the University of Illinois at
Urbana-Champaign (UIUC) in 2004 and 2009, respectively. He then spent a
post-doctoral year at Microsoft Research (MSR), before joining UBC in
2010. Karthik's research interests are in the design of fault-tolerant
and secure software systems through innovations in languages,
compilers and architecture. Karthik was awarded the William C. Carter
award by the IEEE Technical Committee on Fault-tolerant Computing
(TC-FTC) and the IFIP Working group on Dependability (10.4) in 2008
based on his dissertation research.