Temporal Graph Reading Group

TGL logo

Every Thursday, 11am-12pm EST

Join us via Zoom

Follow us on Twitter @tempgraph_rg
Sign up here to receive email communications about the reading group
Visit our YouTube channel for past recordings
Join the TGL Slack to discuss with the TG community: here.
Contact us here if there are any issues with the invite link.

Upcoming Talks

[Feb 22nd, 2024]

  • Temporal Graph Analysis with TGX WSDM 2024 Demo Track
    Presenter: Shenyang Huang McGill University, Mila
    Shenyang Huang is a PhD student at McGill University and Mila, focusing on temporal graph learning (supervised by Prof. Reihaneh Rabbany and Prof. Guillaume Rabusseau). He is interested in representation learning on temporal graphs, anomaly detection and graph representation learning. He was the Organization Chair for the Temporal Graph Learning Workshop @ NeurIPS 2022. His previous research includes change point detection on temporal graphs, COVID-19 disease modeling with temporal contact graphs and link prediction on temporal graphs. He also enjoys writing medium blog posts about temporal graph learning.

    Real-world networks, with their evolving relations, are best captured as temporal graphs. However, existing software libraries are largely designed for static graphs where the dynamic nature of temporal graphs is ignored. Bridging this gap, we introduce TGX, a Python package specially designed for analysis of temporal networks that encompasses an automated pipeline for data loading, data processing, and analysis of evolving graphs. TGX provides access to eleven built-in datasets and eight external Temporal Graph Benchmark (TGB) datasets as well as any novel datasets in the .csv format. Beyond data loading, TGX facilitates data processing functionalities such as discretization of temporal graphs and node subsampling to accelerate working with larger datasets. For comprehensive investigation, TGX offers network analysis by providing a diverse set of measures, including average node degree and the evolving number of nodes and edges per timestamp. Additionally, the package consolidates meaningful visualization plots indicating the evolution of temporal patterns, such as Temporal Edge Appearance (TEA) and Temporal Edge Traffic (TET) plots. The TGX package is a robust tool for examining the features of temporal graphs and can be used in various areas like studying social networks, citation networks, and tracking user interactions. We plan to continuously support and update TGX based on community feedback. TGX is publicly available.
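    The discretization step mentioned in the abstract can be sketched in a few lines: bin a timestamped edge list into snapshots. This is an illustrative helper, not TGX's actual API; the function and variable names are hypothetical.

```python
from collections import defaultdict

def discretize(edges, bin_size):
    """Group timestamped edges (u, v, t) into snapshots of width bin_size.

    Hypothetical helper illustrating the idea; not the TGX API.
    """
    snapshots = defaultdict(list)
    for u, v, t in edges:
        snapshots[t // bin_size].append((u, v))
    return dict(snapshots)

edges = [(0, 1, 0), (1, 2, 5), (0, 2, 12), (2, 3, 14)]
snaps = discretize(edges, bin_size=10)
# snapshot 0 holds edges with t in [0, 10), snapshot 1 those with t in [10, 20)
```

    Coarser bins trade temporal resolution for faster snapshot-based analysis, which is the point of discretizing in the first place.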
  • [Feb 29th, 2024]

  • Graph Neural Networks for Temporal Graphs: State of the Art, Open Challenges, and Opportunities TMLR
    Presenter: Veronica Lachi University of Siena, Siena, Italy
    TBD

    Graph Neural Networks (GNNs) have become the leading paradigm for learning on (static) graph-structured data. However, many real-world systems are dynamic in nature, since the graph and node/edge attributes change over time. In recent years, GNN-based models for temporal graphs have emerged as a promising area of research to extend the capabilities of GNNs. In this work, we provide the first comprehensive overview of the current state of the art of temporal GNNs, introducing a rigorous formalization of learning settings and tasks and a novel taxonomy categorizing existing approaches in terms of how the temporal aspect is represented and processed. We conclude the survey with a discussion of the most relevant open challenges for the field, from both research and application perspectives.
  • Past Talks, Winter 2024

    [January 18th, 2024]

  • Intensity Profile Projection: A Framework for Continuous-Time Representation Learning for Dynamic Networks NeurIPS 2023
    Presenter: Alexander Modell Imperial College London and Patrick Rubin-Delanchy University of Bristol
    Patrick Rubin-Delanchy obtained his PhD in Statistics at Imperial College London in 2008, supervised by Prof. Andrew Walden. He was awarded a Heilbronn research fellowship in Data Science in 2012, which he first held at the University of Bristol and then at the University of Oxford. He became assistant professor of Statistics at the University of Bristol in July 2017, and was promoted to full professor in July 2022. As of January 2024, he is the chair of Statistical Learning at the University of Edinburgh. His research interests include data exploration, graph embedding and unsupervised learning.
    Alex Modell is a research associate in Statistics at Imperial College London. His research is in statistical modelling and large-scale inference for high-dimensional and graph-structured data, with a particular interest in unsupervised learning with temporal data. He is currently funded by the EPSRC grant “Network Time Series and Stochastic Processes”.

    We present a new representation learning framework, Intensity Profile Projection, for continuous-time dynamic network data. Given triples (i,j,t), each representing a time-stamped (t) interaction between two entities (i,j), our procedure returns a continuous-time trajectory for each node, representing its behaviour over time. The framework consists of three stages: estimating pairwise intensity functions, e.g. via kernel smoothing; learning a projection which minimises a notion of intensity reconstruction error; and constructing evolving node representations via the learned projection. The trajectories satisfy two properties, known as structural and temporal coherence, which we see as fundamental for reliable inference. Moreover, we develop estimation theory providing tight control on the error of any estimated trajectory, indicating that the representations could even be used in quite noise-sensitive follow-on analyses. The theory also elucidates the role of smoothing as a bias-variance trade-off, and shows how we can reduce the level of smoothing as the signal-to-noise ratio increases on account of the algorithm `borrowing strength' across the network.
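    The first stage described above, kernel smoothing of pairwise intensities, can be sketched as follows. This is a minimal illustration with a Gaussian kernel, not the authors' code; the function name and defaults are hypothetical.

```python
import math

def intensity(event_times, t, bandwidth=1.0):
    """Kernel-smoothed estimate of the interaction intensity at time t
    for one node pair, given its interaction timestamps (illustrative only)."""
    c = 1.0 / (bandwidth * math.sqrt(2 * math.pi))
    return sum(c * math.exp(-0.5 * ((t - s) / bandwidth) ** 2)
               for s in event_times)

# The estimate peaks near observed interaction times and decays away from them.
times = [1.0, 1.2, 5.0]
assert intensity(times, 1.1) > intensity(times, 3.0)
```

    The bandwidth plays exactly the bias-variance role the abstract mentions: wider kernels give smoother but more biased intensity estimates.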
  • [January 25th, 2024]

  • DistTGL: Distributed Memory-Based Temporal Graph Neural Network Training SC 2023
    Presenter: Hongkuan Zhou University of Southern California
    Hongkuan Zhou is a Ph.D. student at Ming Hsieh Department of Electrical and Computer Engineering at University of Southern California. His research interest lies in the acceleration and application of GNNs and Temporal GNNs on general-purpose hardware.

    Memory-based Temporal Graph Neural Networks are powerful tools in dynamic graph representation learning and have demonstrated superior performance in many real-world applications. However, their node memory favors smaller batch sizes to capture more dependencies in graph events and needs to be maintained synchronously across all trainers. As a result, existing frameworks suffer from accuracy loss when scaling to multiple GPUs. Even worse, the tremendous overhead of synchronizing the node memory makes it impractical to deploy the solution in GPU clusters. In this work, we propose DistTGL --- an efficient and scalable solution to train memory-based TGNNs on distributed GPU clusters. DistTGL has three improvements over existing solutions: an enhanced TGNN model, a novel training algorithm, and an optimized system. In experiments, DistTGL achieves near-linear convergence speedup, outperforming the state-of-the-art single-machine method by 14.5% in accuracy and 10.17× in training throughput.
  • [Feb 1st, 2024]

    **cancelled**
  • Temporal Graphs and Temporal Network Characteristics for Bio-Inspired Networks during Optimization
    Presenter: Abdel Isakovic Colgate University
    Abdel F. Isakovic is a visiting professor of physics at Colgate University. He was previously Assoc. Prof. of Physics and Interim Assoc. Dean for Research at Khalifa University, Abu Dhabi (UAE). Prof. Isakovic obtained his PhD from the University of Minnesota, and has been affiliated with Cornell University and Brookhaven National Laboratory. He works on computational and experimental problems in condensed matter and biophysics, and on education research projects.

    Temporal network analysis and time evolution of network characteristics are powerful tools in describing the changing topology of dynamic networks. This paper uses such approaches to better visualize and provide analytical measures for the changes in performance that we observed in Voronoi-type spatial coverage, particularly for the example of time evolving networks with a changing number of wireless sensors being deployed. Specifically, our analysis focuses on the role different combinations of impenetrable obstacles and environmental noise play in connectivity and overall network structure. It is shown how the use of (i) temporal network graphs, and (ii) network centrality and regularity measures illustrate the differences between various options developed for the balancing act of energy and time efficiency in network coverage. Lastly, we compare the outcome of these measures with the less abstract classification variables, such as percent area covered, and cumulative distance travelled.
  • [Feb 8th, 2024]

  • Taming Local Effects in Graph-based Spatiotemporal Forecasting NeurIPS 2023
    Presenter: Ivan Marisca, The Swiss AI Lab IDSIA USI-SUPSI, Università della Svizzera italiana
    Ivan Marisca is a Ph.D. student at the Swiss AI lab IDSIA and USI - Università della Svizzera italiana (CH), under the supervision of Prof. Cesare Alippi. He received his MSc (2020) degree in Computer Science and Engineering from Politecnico di Milano (IT). Ivan's research focuses on graph-based learning from irregular spatiotemporal data, with applications in prediction, imputation, and control on sensor networks. His work has been published in top-tier conferences of the field, such as NeurIPS, ICLR, and AAAI. In addition to his research, Ivan regularly works as a teaching assistant and holds lectures in machine learning courses for MSc students at USI. He is also a co-creator and maintainer of Torch Spatiotemporal.

    Spatiotemporal graph neural networks have been shown to be effective in time series forecasting applications, achieving better performance than standard univariate predictors in several settings. These architectures take advantage of a graph structure and relational inductive biases to learn a single (global) inductive model to predict any number of the input time series, each associated with a graph node. Despite the gain achieved in computational and data efficiency w.r.t. fitting a set of local models, relying on a single global model can be a limitation whenever some of the time series are generated by a different spatiotemporal stochastic process. The main objective of this paper is to understand the interplay between globality and locality in graph-based spatiotemporal forecasting, while contextually proposing a methodological framework to rationalize the practice of including trainable node embeddings in such architectures. We ascribe to trainable node embeddings the role of amortizing the learning of specialized components. Moreover, embeddings allow for 1) effectively combining the advantages of shared message-passing layers with node-specific parameters and 2) efficiently transferring the learned model to new node sets. Supported by strong empirical evidence, we provide insights and guidelines for specializing graph-based models to the dynamics of each time series and show how this aspect plays a crucial role in obtaining accurate predictions.
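    The pairing of a shared (global) predictor with trainable node-specific embeddings can be sketched in a few lines. This is an illustrative NumPy toy, not the paper's architecture; all names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, window, emb_dim = 3, 4, 2

# One learnable embedding per node; a single weight vector shared by all nodes.
node_emb = rng.normal(size=(n_nodes, emb_dim))
W = rng.normal(size=(window + emb_dim,))

def forecast(history):
    """history: (n_nodes, window) past values; returns one-step forecasts.
    Each node's input is its history concatenated with its own embedding,
    so the shared weights W can still specialize per node."""
    inputs = np.concatenate([history, node_emb], axis=1)
    return inputs @ W

preds = forecast(rng.normal(size=(n_nodes, window)))
assert preds.shape == (n_nodes,)
```

    The embeddings amortize per-node specialization: the forecasting weights stay global, while each node's embedding shifts its predictions.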
  • Past Talks, Fall 2023

    [September 14th, 2023]

  • Edge Directionality Improves Learning on Heterophilic Graphs 2023
    Presenter: Emanuele Rossi Imperial College London
    Emanuele Rossi is a final-year Ph.D. student at Imperial College London, working on Graph Neural Networks and supervised by Prof. Michael Bronstein. His research explores various aspects of graph neural networks, such as scalability, dynamic graphs, and learning with missing node features. Before starting his Ph.D., Emanuele spent time working at Twitter and Fabula AI, which was bought by Twitter in June 2019. He also holds an MPhil from the University of Cambridge and a BEng from Imperial College London, both in Computer Science.

    Graph Neural Networks (GNNs) have become the de-facto standard tool for modeling relational data. However, while many real-world graphs are directed, the majority of today's GNN models discard this information altogether by simply making the graph undirected. The reasons for this are historical: 1) many early variants of spectral GNNs explicitly required undirected graphs, and 2) the first benchmarks on homophilic graphs did not find significant gain from using direction. In this paper, we show that in heterophilic settings, treating the graph as directed increases the effective homophily of the graph, suggesting a potential gain from the correct use of directionality information. To this end, we introduce Directed Graph Neural Network (Dir-GNN), a novel general framework for deep learning on directed graphs. Dir-GNN can be used to extend any Message Passing Neural Network (MPNN) to account for edge directionality information by performing separate aggregations of the incoming and outgoing edges. We prove that Dir-GNN matches the expressivity of the Directed Weisfeiler-Lehman test, exceeding that of conventional MPNNs. In extensive experiments, we validate that while our framework leaves performance unchanged on homophilic datasets, it leads to large gains over base models such as GCN, GAT and GraphSage on heterophilic benchmarks, outperforming much more complex methods and achieving new state-of-the-art results.
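    The core operation described above, aggregating incoming and outgoing edges separately, can be sketched like this. It is a simplified mean-aggregation layer for illustration, not the paper's implementation.

```python
import numpy as np

def dir_layer(x, edges, n):
    """One simplified Dir-GNN-style step: average in-neighbour and
    out-neighbour features separately, then concatenate with self features."""
    h_in = np.zeros_like(x)
    h_out = np.zeros_like(x)
    cin = np.zeros(n)
    cout = np.zeros(n)
    for u, v in edges:                   # directed edge u -> v
        h_in[v] += x[u]; cin[v] += 1     # v aggregates its in-neighbours
        h_out[u] += x[v]; cout[u] += 1   # u aggregates its out-neighbours
    h_in /= np.maximum(cin, 1)[:, None]
    h_out /= np.maximum(cout, 1)[:, None]
    return np.concatenate([x, h_in, h_out], axis=1)

x = np.eye(3)                            # one-hot features for 3 nodes
out = dir_layer(x, [(0, 1), (1, 2)], 3)
assert out.shape == (3, 9)
```

    Making the graph undirected would collapse h_in and h_out into one aggregate; keeping them separate is what preserves the directionality information.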
  • [September 21st, 2023]

  • Temporal Graph Learning for Financial World: Algorithms, Scalability, Explainability & Fairness AAAI 2022
    Presenter: Karamjit Singh, AI Garage and Mastercard
    Karamjit Singh is currently the Director of Artificial Intelligence at Mastercard, where he focuses on creating AI-driven products in the payment space. With 11 years of experience spanning various industries, Karamjit holds dual master's degrees, including an M.Tech in Computer Applications from IIT Delhi and a Master’s in Mathematics. He has authored over 20 publications in AI/ML conferences and holds more than 15 patents. Karamjit has achieved recognition through accomplishments like securing the 2nd Rank in CIKM 2020 Analytics Cup and winning the IEEE VGTC VPG International Data-Visualization Contest. He has also contributed to data science conferences and workshops as part of program committees and organized the MUFin workshop at CIKM 2021, ECML PAKDD 2022, AAAI 2023.

    The most intuitive way to model a transaction in the financial world is through a Graph. Every transaction can be considered as an edge between two vertices, one of which is the paying party and another is the receiving party. Properties of these nodes and edges directly map to business problems in the financial world. The problem of detecting a fraudulent transaction can be considered as a property of the edge. The problem of money laundering can be considered as a path-detection in the Graph. The problem of a merchant going delinquent can be considered as the property of a node. While there are many such examples, the above help in realising the direct mapping of Graph properties with the financial problems in the real-world. This tutorial is based on the potential of using Graph Neural Network-based learning for solving business problems in the financial world.
  • [September 28th, 2023]

  • Temporal Graph Benchmark for Machine Learning on Temporal Graphs NeurIPS 2023 Datasets and Benchmarks Track
    Presenter: Shenyang Huang, Farimah Poursafaei McGill University, Mila
    Shenyang Huang is a PhD student at McGill University and Mila, focusing on temporal graph learning (supervised by Prof. Reihaneh Rabbany and Prof. Guillaume Rabusseau). He is interested in representation learning on temporal graphs, anomaly detection and graph representation learning. He was the Organization Chair for the Temporal Graph Learning Workshop @ NeurIPS 2022. His previous research includes change point detection on temporal graphs, COVID-19 disease modeling with temporal contact graphs and link prediction on temporal graphs. He also enjoys writing medium blog posts about temporal graph learning.
    Farimah Poursafaei is a PostDoc at McGill University and Mila. She conducts research on dynamic graph neural networks and temporal graphs. She completed her PhD at McGill University in Computer Engineering, where she worked on anomaly detection on cryptocurrency transaction networks. She served as the Reviewing Chair in Temporal Graph Learning Workshop @ NeurIPS 2022.

    We present the Temporal Graph Benchmark (TGB), a collection of challenging and diverse benchmark datasets for realistic, reproducible, and robust evaluation of machine learning models on temporal graphs. TGB datasets are of large scale, spanning years in duration, incorporate both node and edge-level prediction tasks and cover a diverse set of domains including social, trade, transaction, and transportation networks. For both tasks, we design evaluation protocols based on realistic use-cases. We extensively benchmark each dataset and find that the performance of common models can vary drastically across datasets. In addition, on dynamic node property prediction tasks, we show that simple methods often achieve superior performance compared to existing temporal graph models. We believe that these findings open up opportunities for future research on temporal graphs. Finally, TGB provides an automated machine learning pipeline for reproducible and accessible temporal graph research, including data loading, experiment setup and performance evaluation. TGB will be maintained and updated on a regular basis and welcomes community feedback. TGB datasets, data loaders, example codes, evaluation setup, and leaderboards are publicly available
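    Ranking-based evaluation of the kind used for dynamic link prediction, scoring each positive edge against sampled negatives and computing mean reciprocal rank, can be sketched as follows. This is an illustrative implementation, not TGB's evaluator.

```python
def mrr(pos_scores, neg_scores):
    """Mean reciprocal rank: for each positive edge, rank its score
    against the scores of its sampled negative edges (illustrative only)."""
    total = 0.0
    for pos, negs in zip(pos_scores, neg_scores):
        rank = 1 + sum(1 for s in negs if s > pos)
        total += 1.0 / rank
    return total / len(pos_scores)

# First positive outranks both negatives (rank 1); the second is beaten
# by one negative (rank 2), so the mean is (1.0 + 0.5) / 2 = 0.75.
assert mrr([0.9, 0.5], [[0.1, 0.2], [0.7, 0.3]]) == 0.75
```

    Ranking against negatives is what makes the protocol realistic: a model must score the true edge above plausible alternatives, not merely assign it a high absolute score.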
  • [October 5th, 2023]

  • HAGEN: Homophily-Aware Graph Convolutional Recurrent Network for Crime Forecasting AAAI 2022
    Presenter: Chenyu Wang and Zongyu Lin Tsinghua University
    Zongyu Lin is now a CS Ph.D. student at UCLA, having graduated from Tsinghua University. His research interest lies broadly in AI for content generation (e.g., large language models and diffusion models) and AI for science (e.g., time series forecasting).
    Chenyu Wang is now a Ph.D student at MIT EECS, advised by Prof. Caroline Uhler and Prof. Tommi Jaakkola. Prior to that, she attained her bachelor's degree at Tsinghua University. Her research interest lies broadly in machine learning, representation learning, and AI for science (especially computational biology).

    The goal of the crime forecasting problem is to predict different types of crimes for each geographical region (like a neighborhood or census tract) in the near future. Since nearby regions usually have similar socioeconomic characteristics which indicate similar crime patterns, recent state-of-the-art solutions constructed a distance-based region graph and utilized Graph Neural Network (GNN) techniques for crime forecasting, because the GNN techniques could effectively exploit the latent relationships between neighboring region nodes in the graph if the edges reveal high dependency or correlation. However, this distance-based pre-defined graph cannot fully capture crime correlation between regions that are far from each other but share similar crime patterns. Hence, to make a more accurate crime prediction, the main challenge is to learn a better graph that reveals the dependencies between regions in crime occurrences and meanwhile captures the temporal patterns from historical crime records. To address these challenges, we propose an end-to-end graph convolutional recurrent network called HAGEN with several novel designs for crime prediction. Specifically, our framework could jointly capture the crime correlation between regions and the temporal crime dynamics by combining an adaptive region graph learning module with the Diffusion Convolution Gated Recurrent Unit (DCGRU). Based on the homophily assumption of GNN (i.e., graph convolution works better where neighboring nodes share the same label), we propose a homophily-aware constraint to regularize the optimization of the region graph so that neighboring region nodes on the learned graph share similar crime patterns, thus fitting the mechanism of diffusion convolution. Empirical experiments and comprehensive analysis on two real-world datasets showcase the effectiveness of HAGEN.
  • [October 12th, 2023]

  • A Survey on Graph Neural Networks for Time Series: Forecasting, Classification, Imputation, and Anomaly Detection
    Presenter: Ming Jin Monash University
    Ming Jin is a Ph.D. Candidate in Computer Science at Monash University, where he is currently being supervised by Prof. Shirui Pan and A/Prof. Yuan-Fang Li. He is also a Research Staff at Metso Outotec and a Research Assistant at Monash University. Ming completed his Bachelor's and Master's degrees at Hebei University of Technology and The University of Melbourne in 2017 and 2019, respectively. He has published over ten top-ranked conference and journal papers and regularly served as a PC member and reviewer of major AI/ML/DM conferences and journals, including IJCAI, PAKDD, TNNLS, TKDE, and more. Ming's research interests mainly focus on graph neural networks and geometric deep learning, with a particular emphasis on temporal settings such as dynamic graph neural networks. He is particularly passionate about applying these techniques to tackle practical challenges such as time series forecasting, industrial process modeling, and anomaly detection.

    Time series are the primary data type used to record dynamic system measurements and generated in great volume by both physical sensors and online processes (virtual sensors). Time series analytics is therefore crucial to unlocking the wealth of information implicit in available data. With the recent advancements in graph neural networks (GNNs), there has been a surge in GNN-based approaches for time series analysis. These approaches can explicitly model inter-temporal and inter-variable relationships, which traditional and other deep neural network-based methods struggle to do. In this survey, we provide a comprehensive review of graph neural networks for time series analysis (GNN4TS), encompassing four fundamental dimensions: forecasting, classification, anomaly detection, and imputation. Our aim is to guide designers and practitioners to understand, build applications, and advance research of GNN4TS. At first, we provide a comprehensive task-oriented taxonomy of GNN4TS. Then, we present and discuss representative research works and introduce mainstream applications of GNN4TS. A comprehensive discussion of potential future research directions completes the survey. This survey, for the first time, brings together a vast array of knowledge on GNN-based time series research, highlighting foundations, practical applications, and opportunities of graph neural networks for time series analysis.
  • [October 19th, 2023]

  • GRAFENNE: Learning on Graphs with Heterogeneous and Dynamic Feature Sets ICML 2023
    Presenter: Sahil Manchanda Department of Computer Science at the Indian Institute of Technology Delhi
    Sahil Manchanda is a PhD scholar at the Department of Computer Science at the Indian Institute of Technology Delhi, working under the supervision of Prof. Sayan Ranu. Sahil works in the area of Learning to solve graph optimization problems with focus on Combinatorial Optimization, Graph Neural Networks, Lifelong Learning and Generative modeling. He is also interested in applications of Computer Science concepts in other fields such as VLSI Chip Design and Material Science to discover new materials. He has interned at prestigious institutes such as the University of Tokyo, Naver Labs- France and Qualcomm AI research Amsterdam. His research works have been published in conferences such as NeurIPS, AAAI, ICML and ECML-PKDD. Additionally he has one US patent granted to his name. Sahil has been the recipient of the Qualcomm Innovation Fellowship for the year 2022.

    Graph neural networks (GNNs), in general, are built on the assumption of a static set of features characterizing each node in a graph. This assumption is often violated in practice. Existing methods partly address this issue through feature imputation. However, these techniques (i) assume uniformity of feature set across nodes, (ii) are transductive by nature, and (iii) fail to work when features are added or removed over time. In this work, we address these limitations through a novel GNN framework called GRAFENNE. GRAFENNE performs a novel allotropic transformation on the original graph, wherein the nodes and features are decoupled through a bipartite encoding. Through a carefully chosen message passing framework on the allotropic transformation, we make the model parameter size independent of the number of features and thereby inductive to both unseen nodes and features. We prove that GRAFENNE is at least as expressive as any of the existing message-passing GNNs in terms of Weisfeiler-Leman tests, and therefore, the additional inductivity to unseen features does not come at the cost of expressivity. In addition, as demonstrated over four real-world graphs, GRAFENNE empowers the underlying GNN with high empirical efficacy and the ability to learn in continual fashion over streaming feature sets.
  • [October 26th, 2023]

  • Along the Time: Timeline-traced Embedding for Temporal Knowledge Graph Completion CIKM 2022
    Presenter: Fuwei Zhang and Zhao Zhang Institute of Computing Technology, Chinese Academy of Sciences
    Fuwei Zhang is now a PhD student at the Institute of Artificial Intelligence, Beihang University. His research interests include data mining and applied machine learning, with a special focus on the representation and application of knowledge graphs.
    Zhao Zhang is an assistant professor at the Institute of Computing Technology, Chinese Academy of Sciences. His research interest lies in data mining and machine learning, especially the representation and application of knowledge graphs.

    Recent years have witnessed remarkable progress on knowledge graph embedding (KGE) methods to learn the representations of entities and relations in static knowledge graphs (SKGs). However, knowledge changes over time. In order to represent the facts happening in a specific time, temporal knowledge graph (TKG) embedding approaches are put forward. However, most existing models ignore the independence of semantic and temporal information. We empirically find that current models have difficulty distinguishing representations of the same entity or relation at different timestamps. In this regard, we propose a TimeLine-Traced Knowledge Graph Embedding method (TLT-KGE) for temporal knowledge graph completion. TLT-KGE aims to embed the entities and relations with timestamps as a complex vector or a quaternion vector. Specifically, TLT-KGE models semantic information and temporal information as different axes of complex number space or quaternion space. Meanwhile, two specific components carving the relationship between semantic and temporal information are devised to buoy the modeling. In this way, the proposed method can not only distinguish the independence of the semantic and temporal information, but also establish a connection between them. Experimental results on the link prediction task demonstrate that TLT-KGE achieves substantial improvements over state-of-the-art competitors. The source code will be available on https://github.com/zhangfw123/TLT-KGE.
  • [November 2nd, 2023]

  • My journey in the land of temporal networks and dynamic graphs
    Presenter: Petter Holme Aalto University
    Petter Holme is a Professor of Network Science at the Department of Computer Science, Aalto University, Finland, and a Research Fellow at The Center of Computational Social Science, Kobe University, Japan. Formerly, Holme held faculty positions at the Tokyo Institute of Technology, Japan, Sungkyunkwan University, Korea, and Umeå University and the Royal Institute of Technology, Sweden. His research covers a broad scientific ground in the borderland between the social and formal sciences. Among many other things, Holme pioneered the study of temporal networks. Holme has over 190 publications and over 19,000 citations.

    I will cover some highlights and insights in my two-decade-long research on how to understand systems through their combined temporal and topological structure. I will discuss: The use of randomization techniques to understand diffusion dynamics on temporal networks. The profound difficulties in generalizing insights from network science and graph theory to temporal graphs. The philosophy of community structures in graphs. Why there hasn't been any discovery of ubiquitous structures that are both temporal and topological.
  • [November 9th, 2023]

  • Temporal Dynamics-Aware Adversarial Attacks on Discrete-Time Dynamic Graph Models KDD 2023
    Presenter: Kartik Sharma Georgia Institute of Technology
    Kartik Sharma is a third-year PhD student at Georgia Tech, where he is advised by Prof. Srijan Kumar. He is interested in studying the robustness and alignment of machine learning models. His works are focused on studying these models for structured data such as social networks, temporal graphs, and knowledge bases. He has published in venues such as KDD, AAAI, CIKM, and WSDM.

    Real-world graphs such as social networks, communication networks, and rating networks are constantly evolving over time. Many deep learning architectures have been developed to learn effective node representations using both graph structure and dynamics. While being crucial for practical applications, the robustness of these representation learners for dynamic graphs in the presence of adversarial attacks is highly understudied. In this work, we design a novel adversarial attack on discrete-time dynamic graph models where we desire to perturb the input graph sequence in a manner that preserves the temporal dynamics of the graph while dropping the performance of representation learners. To this end, we motivate a novel Temporal Dynamics-Aware Perturbation (TDAP) constraint, which ensures that perturbations introduced at each time step are restricted to only a small fraction of the number of changes in the graph since the previous time step. We present a theoretically-motivated Projected Gradient Descent approach for dynamic graphs to find effective perturbations under the TDAP constraint. Experiments on two tasks, dynamic link prediction and node classification, show that our approach is up to 4x more effective than the baseline methods for attacking these models. We extend our approach to a more practical online setting where graphs become available in real-time and show up to 5x superior performance over baselines. We also show that our approach successfully evades state-of-the-art neural approaches for anomaly detection, thereby promoting the need to study robustness as a part of representation-learning approaches for dynamic graphs.
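    The TDAP constraint described in the abstract can be checked in a few lines: the perturbations at step t must number at most an eps fraction of the natural edge changes since step t-1. This is an illustrative sketch, not the authors' code.

```python
def tdap_ok(prev_edges, curr_edges, perturbed_edges, eps):
    """True if the number of perturbed edges at this step is within an
    eps fraction of the natural edge changes since the previous step."""
    natural_changes = len(prev_edges ^ curr_edges)  # symmetric difference
    return len(perturbed_edges) <= eps * natural_changes

prev = {(0, 1), (1, 2)}
curr = {(0, 1), (2, 3), (3, 4)}   # 3 natural changes relative to prev
assert tdap_ok(prev, curr, {(0, 2)}, eps=0.5)              # 1 <= 1.5
assert not tdap_ok(prev, curr, {(0, 2), (1, 3)}, eps=0.5)  # 2 > 1.5
```

    Tying the attack budget to the graph's own rate of change is what keeps the perturbations plausible: a static graph admits no changes, while a rapidly evolving one can absorb more.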
  • [November 16th, 2023] **cancelled, to be rescheduled**

  • Temporal Graphs and Temporal Network Characteristics for Bio-Inspired Networks during Optimization
    Presenter: Abdel Isakovic Colgate University
    Abdel Isakovic is a Visiting Assistant Professor of Physics at Colgate University.

    Temporal network analysis and time evolution of network characteristics are powerful tools in describing the changing topology of dynamic networks. This paper uses such approaches to better visualize and provide analytical measures for the changes in performance that we observed in Voronoi-type spatial coverage, particularly for the example of time evolving networks with a changing number of wireless sensors being deployed. Specifically, our analysis focuses on the role different combinations of impenetrable obstacles and environmental noise play in connectivity and overall network structure. It is shown how the use of (i) temporal network graphs, and (ii) network centrality and regularity measures illustrate the differences between various options developed for the balancing act of energy and time efficiency in network coverage. Lastly, we compare the outcome of these measures with the less abstract classification variables, such as percent area covered, and cumulative distance travelled.
  • [November 23rd, 2023]

  • ULTRA: Foundation Models for Knowledge Graph Reasoning
    Presenter: Michael Galkin Intel AI Lab
    Michael Galkin is a Research Scientist at Intel AI Lab in San Diego working on Graph Machine Learning and Geometric Deep Learning. Previously, he was a postdoc at Mila – Quebec AI Institute with Will Hamilton, Reihaneh Rabbany, and Jian Tang, focusing on many graph representation learning problems. Sometimes, Mike writes long blog posts on Medium about graph learning.

    Foundation models in language and vision have the ability to run inference on any textual and visual inputs thanks to transferable representations such as a vocabulary of tokens in language. Knowledge graphs (KGs) have different entity and relation vocabularies that generally do not overlap. The key challenge of designing foundation models on KGs is to learn such transferable representations that enable inference on any graph with arbitrary entity and relation vocabularies. In this work, we take a step towards such foundation models and present ULTRA, an approach for learning universal and transferable graph representations. ULTRA builds relational representations as a function conditioned on their interactions. Such a conditioning strategy allows a pre-trained ULTRA model to inductively generalize to any unseen KG with any relation vocabulary and to be fine-tuned on any graph. Conducting link prediction experiments on 57 different KGs, we find that the zero-shot inductive inference performance of a single pre-trained ULTRA model on unseen graphs of various sizes is often on par with, or better than, strong baselines trained on specific graphs. Fine-tuning further boosts the performance.
  • [November 30th, 2023]

  • Self-Supervised Dynamic Graph Representation Learning via Temporal Subgraph Contrast TKDD
    Presenter: Ke-Jia Chen Nanjing University of Posts and Telecommunications
    Ke-Jia Chen is an associate professor at Nanjing University of Posts and Telecommunications, China. She received her PhD from the Université de Technologie de Compiègne, France, and her M.S. and B.S. degrees from Nanjing University, China. Her current research focuses on graph machine learning and complex network analysis. She has published papers in TOIS, TKDD, ECML, ICDM, ASONAM, etc.

    Self-supervised learning on graphs has recently drawn a lot of attention due to its independence from labels and its robustness in representation. Current studies on this topic mainly use static information such as graph structures but cannot well capture dynamic information such as the timestamps of edges. Realistic graphs are often dynamic, which means the interaction between nodes occurs at a specific time. This paper proposes a self-supervised dynamic graph representation learning framework (DySubC), which defines a temporal subgraph contrastive learning task to simultaneously learn the structural and evolutional features of a dynamic graph. Specifically, a novel temporal subgraph sampling strategy is first proposed, which takes each node of the dynamic graph as the central node and uses both neighborhood structures and edge timestamps to sample the corresponding temporal subgraph. The subgraph representation function is then designed according to the influence of neighborhood nodes on the central node after encoding the nodes in each subgraph. Finally, structural and temporal contrastive losses are defined to maximize the mutual information between node representation and temporal subgraph representation. Experiments on five real-world datasets demonstrate that (1) DySubC performs better than the related baselines, including two graph contrastive learning models and four dynamic graph representation learning models, in the downstream link prediction task, and (2) the use of temporal information can not only sample more effective subgraphs but also learn better representations via the temporal contrastive loss.
  • Past Talks, Summer 2023

    [May 4th, 2023]

  • De Bruijn Goes Neural: Causality-Aware Graph Neural Networks for Time Series Data on Dynamic Graphs LOG 2022
    Presenter: Ingo Scholtes and Lisi Qarkaxhija Center for Artificial Intelligence and Data Science of Julius-Maximilians-Universität Würzburg, Germany
    Ingo Scholtes Ingo Scholtes is a Full Professor for Machine Learning in Complex Networks at the Center for Artificial Intelligence and Data Science of Julius-Maximilians-Universität Würzburg, Germany, as well as SNSF Professor for Data Analytics at the Department of Computer Science at the University of Zürich, Switzerland. He has a background in computer science and mathematics and obtained his doctorate degree from the University of Trier, Germany. At CERN, he developed a large-scale data distribution system, which is currently used to monitor particle collision data from the ATLAS detector. After finishing his doctorate, he was a postdoctoral researcher at the interdisciplinary Chair of Systems Design at ETH Zürich from 2011 till 2016. In 2016 he held an interim professorship for Applied Computer Science at the Karlsruhe Institute of Technology, Germany. In 2017 he returned to ETH Zürich as a senior assistant and lecturer. In 2019 he was appointed Full Professor at the University of Wuppertal. Since 2021 he has held the Chair of Computer Science XV - Machine Learning for Complex Networks at Julius-Maximilians-Universität Würzburg, Germany.
    Lisi Qarkaxhija Lisi Qarkaxhija is a Computer Science PhD candidate at the Chair of Machine Learning for Complex Networks, Center for Artificial Intelligence and Data Science (CAIDAS), Julius-Maximilians-Universität Würzburg, under the supervision of Prof. Ingo Scholtes. His research focuses on developing higher-order graph neural network models and their applications in interdisciplinary fields. Prior to his PhD studies, Lisi completed his Master's in Data Science at the University of Primorska (FAMNIT) in Koper, Slovenia, and his Bachelor's in Mathematics from the same institution.

    We introduce De Bruijn Graph Neural Networks (DBGNNs), a novel time-aware graph neural network architecture for time-resolved data on dynamic graphs. Our approach accounts for temporal-topological patterns that unfold in the causal topology of dynamic graphs, which is determined by causal walks, i.e., temporally ordered sequences of links by which nodes can influence each other over time. Our architecture builds on multiple layers of higher-order De Bruijn graphs, an iterative line graph construction where nodes in a De Bruijn graph of order k represent walks of length k-1, while edges represent walks of length k. We develop a graph neural network architecture that utilizes De Bruijn graphs to implement a message passing scheme that considers non-Markovian characteristics of causal walks, which enables us to learn patterns in the causal topology of dynamic graphs. Addressing the issue that De Bruijn graphs with different orders k can be used to model the same data, we apply statistical model selection to determine the optimal graph to be used for message passing. An evaluation on synthetic and empirical data sets suggests that DBGNNs can leverage temporal patterns in dynamic graphs, which substantially improves performance in a node classification task.
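    The De Bruijn construction in the abstract can be sketched directly: nodes of the order-k graph are walks of length k-1 (tuples of k vertices), and each edge corresponds to an observed walk of length k. This is only an illustration of the construction, not the paper's full pipeline, which additionally performs statistical order selection.

```python
from collections import defaultdict

def de_bruijn_graph(causal_walks, k):
    """Build the order-k De Bruijn graph from causal walks (node
    sequences). Edge weights count how often each length-k walk occurs."""
    edges = defaultdict(int)
    for walk in causal_walks:
        for i in range(len(walk) - k):
            u = tuple(walk[i:i + k])          # walk of length k-1
            v = tuple(walk[i + 1:i + k + 1])  # shifted by one step
            edges[(u, v)] += 1
    return dict(edges)
```

For example, the causal walks 1→2→3 and 2→3→4 yield an order-2 graph with nodes (1,2), (2,3), (3,4) and one edge for each length-2 walk.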
  • [May 11th, 2023]

  • Complex Evolutional Pattern Learning for Temporal Knowledge Graph Reasoning, ACL 2022
    Presenter: Zixuan Li, Chinese Academy of Sciences
    Zixuan Li is currently an assistant professor at the CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences. Before that, he obtained his Ph.D. degree from the Institute of Computing Technology, Chinese Academy of Sciences. His current research interests include knowledge graphs and natural language processing. He regularly publishes in top-tier conferences and journals of the field, including SIGIR, ACL, EMNLP, AAAI, KIS and IEEE TKDE.

    A Temporal Knowledge Graph (TKG) is a sequence of KGs corresponding to different timestamps. TKG reasoning aims to predict potential facts in the future given the historical KG sequences. One key to this task is to mine and understand the evolutional patterns of facts from these sequences. The evolutional patterns are complex in two aspects: length diversity and time variability. Existing models for TKG reasoning focus on modeling fact sequences of a fixed length, which cannot discover complex evolutional patterns that vary in length. Furthermore, these models are all trained offline, and so cannot adapt well to changes in the evolutional patterns over time. Thus, we propose a new model, called Complex Evolutional Network (CEN), which uses a length-aware Convolutional Neural Network (CNN) to handle evolutional patterns of different lengths via an easy-to-difficult curriculum learning strategy. Besides, we propose to learn the model under the online setting so that it can adapt to changes in the evolutional patterns over time. Extensive experiments demonstrate that CEN obtains substantial performance improvements under both the traditional offline and the proposed online settings.
  • [May 25th, 2023]

  • Graph Kalman Filters
    Presenter: Daniele Zambon, The Swiss AI Lab IDSIA & Università della Svizzera italiana, Switzerland.
    Daniele Zambon is currently a postdoctoral researcher with the Swiss AI Lab IDSIA, Università della Svizzera italiana USI, Lugano, Switzerland. He received his Ph.D. degree from USI, with a thesis on anomaly and change detection in sequences of graphs. He holds MSc and BSc degrees in mathematics from Università degli Studi di Milano, Milan, Italy. He has been a Visiting Researcher/Intern at the University of Florida, Gainesville, FL, USA, the University of Exeter, Exeter, U.K., and STMicroelectronics, Geneva, Switzerland. His research interests include machine learning on graph-structured data, time series analysis, and learning in nonstationary environments. He regularly publishes in and is in the program committee of top-tier journals and conferences of the field, including IEEE TPAMI, IEEE TNNLS, IEEE TSP, NeurIPS, ICML, and ICLR.

    The well-known Kalman filters model dynamical systems by relying on state-space representations with the next state updated, and its uncertainty controlled, by fresh information associated with newly observed system outputs. This paper generalizes, for the first time in the literature, Kalman and extended Kalman filters to discrete-time settings where inputs, states, and outputs are represented as attributed graphs whose topology and attributes can change with time. The setup also allows the framework to be adapted to cases where the output is a vector or a scalar (node/graph-level tasks). Within the proposed theoretical framework, the unknown state-transition and readout functions are learned end-to-end along with the downstream prediction task.
  • [June 8th, 2023]

  • Fast and Attributed Change Detection on Dynamic Graphs with Density of States PAKDD 2023
    Presenter: Shenyang Huang, Mila and School of Computer Science, McGill University
    Shenyang Huang is a PhD student at McGill University and Mila, focusing on temporal graph learning (supervised by Prof. Reihaneh Rabbany and Prof. Guillaume Rabusseau). He is interested in representation learning on temporal graphs, anomaly detection and graph representation learning. He was the Organization Chair for the Temporal Graph Learning Workshop @ NeurIPS 2022. His previous research includes change point detection on temporal graphs, COVID-19 disease modeling with temporal contact graphs and link prediction on temporal graphs.

    How can we detect traffic disturbances from international flight transportation logs or changes to collaboration dynamics in academic networks? These problems can be formulated as detecting anomalous change points in a dynamic graph. Current solutions do not scale well to large real-world graphs, lack robustness to large amounts of node additions/deletions, and overlook changes in node attributes. To address these limitations, we propose a novel spectral method: Scalable Change Point Detection (SCPD). SCPD generates an embedding for each graph snapshot by efficiently approximating the distribution of the Laplacian spectrum at each step. SCPD can also capture shifts in node attributes by tracking correlations between attributes and eigenvectors. Through extensive experiments using synthetic and real-world data, we show that SCPD (a) achieves state-of-the-art performance, (b) is significantly faster than the state-of-the-art methods and can easily process millions of edges in a few CPU minutes, (c) can effectively tackle a large quantity of node attributes, additions or deletions and (d) discovers interesting events in large real-world graphs.
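    The per-snapshot spectral embedding can be illustrated with a toy version: summarize each snapshot by a histogram of its normalized-Laplacian eigenvalues (all of which lie in [0, 2]). The exact eigendecomposition below is a slow stand-in for the fast density-of-states approximation SCPD actually uses; consecutive embeddings can then be compared to flag change points.

```python
import numpy as np

def spectrum_embedding(adj, bins=10):
    """Toy snapshot embedding: histogram of normalized-Laplacian
    eigenvalues (exact eigendecomposition; SCPD approximates this)."""
    deg = adj.sum(axis=1)
    d_inv = np.zeros_like(deg)
    d_inv[deg > 0] = deg[deg > 0] ** -0.5
    lap = np.eye(len(adj)) - d_inv[:, None] * adj * d_inv[None, :]
    evals = np.clip(np.linalg.eigvalsh(lap), 0.0, 2.0)
    hist, _ = np.histogram(evals, bins=bins, range=(0.0, 2.0))
    return hist / max(hist.sum(), 1)
```

A large distance between `spectrum_embedding` vectors of adjacent snapshots would then suggest an anomalous change point.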
  • [June 15th, 2023]

  • Chartalist: Labeled Graph Datasets for UTXO and Account-based Blockchains NeurIPS 2022 Datasets and Benchmarks Track
    Presenter: Kiarash Shamsi and Cuneyt Gurcan Akcora, University of Manitoba, Canada
    Kiarash Shamsi is a Ph.D. student in computer science at the University of Manitoba in Canada, and attained his Master's and Bachelor's degrees in computer software engineering from the University of Science and Culture and the University of Science and Technology in Iran. Kiarash's research interests revolve around machine learning, deep learning, data analysis, and blockchain technologies, focusing on their practical applications. Kiarash is contributing to the advancement of these areas through research and publication, with his work being featured in prestigious conferences and journals such as NeurIPS, ICBC, and IEEE Access.
    Cuneyt Gurcan Akcora is an assistant professor of computer science and statistics at the University of Manitoba in Canada. He received his Ph.D. from the University of Insubria, Italy. His research interests include data science on complex networks and large-scale graph analysis, with applications in social, biological, IoT, and blockchain networks. Akcora has been awarded a Fulbright Scholarship and has published his research in leading conferences and journals, including IEEEtran, KDD, NeurIPS, VLDB, ICDM, SDM, IJCAI, and ICDE.

    Topic 1 Abstract Over the last couple of years, the Bitcoin cryptocurrency and the Blockchain technology that forms the basis of Bitcoin have received unprecedented attention. Designed to facilitate a secure distributed platform without central regulation, Blockchain is heralded as a novel paradigm that will be as powerful as Big Data, Cloud Computing, and Machine Learning. The Blockchain technology attracts ever-increasing interest from researchers in various domains that benefit from scalable cooperation among trust-less parties. As Blockchain applications proliferate, so does the complexity and volume of data stored by Blockchains. Analyzing this data has emerged as an important research topic, already leading to methodological advancements in the information sciences. In this tutorial, we offer a holistic view on applied Data Science on Blockchains. Starting with the core components of Blockchain, we will detail the state of the art in Blockchain data analytics for the graph, security and finance domains. Our examples will answer questions such as: "how to parse, extract and clean the data stored in blockchains?", "how to store and query Blockchain data?" and "what features could be computed from blockchains?"
    Topic 2 Abstract (Topological Data Analysis on Networks, Applications and Scalability issues) Over the last couple of years, Topological Data Analysis (TDA) has seen a growing interest from Data Scientists of diverse backgrounds. TDA is an emerging field at the interface of algebraic topology, statistics, and computer science. The key rationale in TDA is that the observed data are sampled from some metric space and the underlying unknown geometric structure of this space is lost because of sampling. TDA recovers the lost underlying topology. We aim at adapting TDA algorithms to work on networks and overcoming the scalability issues that arise while working on large networks. In this talk, I will outline our three alternative approaches in applying Persistent Homology and TDAMapper based Topological Data Analysis algorithms to Blockchain networks.
    see more from the following papers:
    BitcoinHeist: Topological Data Analysis for Ransomware Prediction on the Bitcoin Blockchain
    ChainNet: Learning on Blockchain Graphs with Topological Features
    Dissecting Ethereum Blockchain Analytics: What We Learn from Topology and Geometry of the Ethereum Graph?
  • [June 22nd, 2023]

  • Comparing Apples and Oranges? On the Evaluation of Methods for Temporal Knowledge Graph Forecasting ECML 2023 and NeurIPS 2022 TGL workshop
    Presenter: Julia Gastinger NEC Laboratories Europe and Data and Web Science Group at University of Mannheim
    Julia Gastinger is a research scientist at NEC Laboratories Europe and a second year Ph.D. student in the Data and Web Science Group at the University of Mannheim, supervised by Professor Heiner Stuckenschmidt. Her research primarily focuses on Temporal Knowledge Graphs; specifically, she is interested in Temporal Knowledge Graph Forecasting and the evaluation of methods in this research field. Julia actively contributes to the research community as a co-organizer of the temporal graph learning reading group.

    Due to its ability to incorporate and leverage time information in relational data, Temporal Knowledge Graph (TKG) learning has become an increasingly studied research field. To predict the future based on TKG, researchers have presented innovative methods for Temporal Knowledge Graph Forecasting. However, the experimental procedures employed in this research area exhibit inconsistencies that significantly impact empirical results, leading to distorted comparisons among models. This paper focuses on the evaluation of TKG Forecasting models: We examine the evaluation settings commonly used in this research area and highlight the issues that arise. To make different approaches to TKG Forecasting more comparable, we propose a unified evaluation protocol and apply it to re-evaluate state-of-the-art models on the most commonly used datasets. Ultimately, we demonstrate the significant difference in results caused by different evaluation settings. We believe this work provides a solid foundation for future evaluations of TKG Forecasting models, thereby contributing to advancing this growing research area.
  • Past Talks, Winter 2023

    [Feb. 16th, 2023]

  • Neighborhood-aware Scalable Temporal Network Representation Learning , LOG 2022 Conference Best Paper
    Presenter: Yuhong Luo, University of Massachusetts
    Yuhong Luo is a first year master's student at UMass Amherst. Prior to that, he worked as a software engineer at Airbnb and worked with professor Pan Li of Purdue University as a research intern. His research interests include ML, graph representation learning, ML fairness, etc.

    Temporal networks have been widely used to model real-world complex systems such as financial systems and e-commerce systems. In a temporal network, the joint neighborhood of a set of nodes often provides crucial structural information useful for predicting whether they may interact at a certain time. However, recent representation learning methods for temporal networks often fail to extract such information or depend on online construction of structural features, which is time-consuming. To address the issue, this work proposes the Neighborhood-Aware Temporal network model (NAT). For each node in the network, NAT abandons the commonly-used single-vector-based representation and instead adopts a novel dictionary-type neighborhood representation. Such a dictionary representation records a downsampled set of the neighboring nodes as keys, and allows fast construction of structural features for a joint neighborhood of multiple nodes. We also design a dedicated data structure termed N-cache to support parallel access and update of those dictionary representations on GPUs. NAT is evaluated over seven real-world large-scale temporal networks. NAT not only outperforms all cutting-edge baselines by an average of 1.2% and 4.2% in transductive and inductive link prediction accuracy, respectively, but also remains scalable, achieving a speed-up of 4.1-76.7× against the baselines that adopt joint structural features and a speed-up of 1.6-4.0× against the baselines that cannot adopt those features. The link to the code: https://github.com/Graph-COM/Neighborhood-Aware-Temporal-Network.
  • [Feb. 23rd, 2023]

  • Direct Embedding of Temporal Network Edges via Time-Decayed Line Graphs , ICLR 2023
    Presenter: Sudhanshu (Dan) Chanpuriya, University of Massachusetts, Amherst
    Sudhanshu Chanpuriya is a PhD student at UMass Amherst's Theoretical Computer Science Group, where he is advised by Cameron Musco. His research interests lie at the intersection of machine learning and spectral graph theory. Prior to joining UMass, he studied computer science and engineering physics at Dartmouth College.

    Temporal networks model a variety of important phenomena involving timed interactions between entities. Existing methods for machine learning on temporal networks generally exhibit at least one of two limitations. First, time is assumed to be discretized, so if the time data is continuous, the user must determine the discretization and discard precise time information. Second, edge representations can only be calculated indirectly from the nodes, which may be suboptimal for tasks like edge classification. We present a simple method that avoids both shortcomings: construct the line graph of the network, which includes a node for each interaction, and weight the edges of this graph based on the difference in time between interactions. From this derived graph, edge representations for the original network can be computed with efficient classical methods. The simplicity of this approach facilitates explicit theoretical analysis: we can constructively show the effectiveness of our method's representations for a natural synthetic model of temporal networks. Empirical results on real-world networks demonstrate our method's efficacy and efficiency on both edge classification and temporal link prediction.
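    The line-graph construction described above can be sketched in a few lines: one node per timed interaction, with edges between interactions that share an endpoint, weighted by a function of their time difference. The exponential kernel below is an illustrative choice of decay function, not necessarily the one used in the paper.

```python
import math
from itertools import combinations

def time_decayed_line_graph(interactions, decay=1.0):
    """interactions: list of (u, v, t). Returns weighted line-graph
    edges keyed by interaction-index pairs; two interactions that share
    an endpoint are linked, with weight decaying in their time gap."""
    edges = {}
    pairs = combinations(enumerate(interactions), 2)
    for (i, (u1, v1, t1)), (j, (u2, v2, t2)) in pairs:
        if {u1, v1} & {u2, v2}:  # the interactions share an endpoint
            edges[(i, j)] = math.exp(-decay * abs(t1 - t2))
    return edges
```

Classical embedding methods can then be run on this weighted derived graph to obtain one representation per original edge.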
  • [March 2nd, 2023]

  • Neural Temporal Walks: Motif-Aware Representation Learning on Continuous-Time Dynamic Graphs, NeurIPS 2022
    Presenter: Ming Jin, Monash University
    Ming Jin is a Ph.D. candidate in Computer Science at Monash University, where he is currently supervised by Prof. Shirui Pan and A/Prof. Yuan-Fang Li. He is also a research staff member at Metso Outotec and a Research Assistant at Monash University. Ming completed his Bachelor's and Master's degrees at Hebei University of Technology and The University of Melbourne in 2017 and 2019, respectively. He has published over ten top-ranked conference and journal papers and regularly serves as a PC member and reviewer for major AI/ML/DM conferences and journals, including IJCAI, PAKDD, TNNLS, TKDE, and more. Ming's research interests mainly focus on graph neural networks and geometric deep learning, with a particular emphasis on temporal settings such as dynamic graph neural networks. He is particularly passionate about applying these techniques to tackle practical challenges such as time series forecasting, industrial process modeling, and anomaly detection.

    Continuous-time dynamic graphs naturally abstract many real-world systems, such as social and transactional networks. While the research on continuous-time dynamic graph representation learning has made significant advances recently, neither graph topological properties nor temporal dependencies have been well-considered and explicitly modeled in capturing dynamic patterns. In this paper, we introduce a new approach, Neural Temporal Walks (NeurTWs), for representation learning on continuous-time dynamic graphs. By considering not only time constraints but also structural and tree traversal properties, our method conducts spatiotemporal-biased random walks to retrieve a set of representative motifs, enabling temporal nodes to be characterized effectively. With a component based on neural ordinary differential equations, the extracted motifs allow for irregularly-sampled temporal nodes to be embedded explicitly over multiple different interaction time intervals, enabling the effective capture of the underlying spatiotemporal dynamics. To enrich supervision signals, we further design a harder contrastive pretext task for model optimization. Our method demonstrates overwhelming superiority under both transductive and inductive settings on six real-world datasets.
  • [March 9th, 2023]

  • Learnable Spectral Wavelets on Dynamic Graphs to Capture Global Interactions , AAAI 2023
    Presenter: Anson Bastos, Indian Institute of Technology, Hyderabad (IITH, India) and Abhishek Nadgeri, RWTH Aachen, Germany
    Anson Bastos is a PhD student at the Indian Institute of Technology, Hyderabad (IITH, India) under the supervision of Dr. Manish Singh (IITH, India) and Dr. Toyotaro Suzumura (The University of Tokyo, Japan). His research interests are in the areas of geometric machine learning, graph signal processing, topology, knowledge graphs, etc.
    Abhishek Nadgeri is currently pursuing his master's at RWTH Aachen, Germany; his research interests are Graph ML, time series, and NLP.

    Learning on evolving (dynamic) graphs has caught the attention of researchers as static methods exhibit limited performance in this setting. The existing methods for dynamic graphs learn spatial features by local neighborhood aggregation, which essentially only captures low-pass signals and local interactions. In this work, we go beyond current approaches to incorporate global features for effectively learning representations of a dynamically evolving graph. We propose to do so by capturing the spectrum of the dynamic graph. Since static methods to learn the graph spectrum would not consider the history of the evolution of the spectrum as the graph evolves with time, we propose a novel approach to learn graph wavelets that capture this evolving spectrum. Further, we propose a framework that integrates the dynamically captured spectra, in the form of these learnable wavelets, into spatial features for incorporating local and global interactions. Experiments on eight standard datasets show that our method significantly outperforms related methods on various tasks for dynamic graphs.
  • [March 16th, 2023]

  • Do We Really Need Complicated Model Architectures For Temporal Networks? , ICLR 2023
    Presenter: Weilin Cong, Pennsylvania State University
    Weilin Cong is a 4th year Ph.D. candidate at Penn State University. His research focuses on fundamental problems in graph representation learning, including optimization, generalization, expressive power, and model architecture design. He has published as first author at AI conferences including NeurIPS, ICLR, AISTATS, KDD, and SDM on graph representation learning.

    Recurrent neural networks (RNNs) and self-attention mechanisms (SAMs) are the de facto methods to extract spatial-temporal information for temporal graph learning. Interestingly, we found that although both RNN and SAM could lead to good performance, in practice neither of them is always necessary. In this paper, we propose GraphMixer, a conceptually and technically simple architecture that consists of three components: (1) a link-encoder that is only based on multi-layer perceptrons (MLPs) to summarize the information from temporal links, (2) a node-encoder that is only based on neighbor mean-pooling to summarize node information, and (3) an MLP-based link classifier that performs link prediction based on the outputs of the encoders. Despite its simplicity, GraphMixer attains outstanding performance on temporal link prediction benchmarks with faster convergence and better generalization performance. These results motivate us to rethink the importance of simpler model architectures. Code is in the supplementary.
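    The three components listed in the abstract are simple enough to sketch as untrained functions (weights below would be learned in practice; the aggregation details are illustrative, not the paper's exact design):

```python
import numpy as np

def link_encoder(link_feats, w1, w2):
    """MLP-only summary of a node's recent temporal links (no RNN or
    attention); here mean-aggregated over the sampled links."""
    return (np.maximum(link_feats @ w1, 0.0) @ w2).mean(axis=0)

def node_encoder(neighbor_feats):
    """Neighbor mean-pooling to summarize node information."""
    return neighbor_feats.mean(axis=0)

def link_classifier(h_u, h_v, w):
    """MLP-based link score from the two encoder outputs."""
    z = np.concatenate([h_u, h_v])
    return 1.0 / (1.0 + np.exp(-(z @ w)))
```

The point of the paper is precisely that this MLP-plus-pooling pipeline, trained end to end, can match or beat RNN- and attention-based temporal models.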
  • [March 23rd, 2023]

  • Representation Learning in Continuous-Time Dynamic Signed Networks, CIKM 2023
    Presenter: Kartik Sharma, Georgia Institute of Technology
    Kartik Sharma is a 2nd year PhD student at the Georgia Institute of Technology, advised by Prof. Srijan Kumar. Before joining Georgia Tech, Kartik graduated from IIT-Delhi in 2021, where he was fortunate to be advised by Prof. Sayan Ranu. His research focuses on learning and optimization for graph-structured data. Through his research, his goal is to (1) study the effectiveness of these models in difficult and practical (e.g., adversarial, constrained, dynamic) scenarios, and (2) develop new models that can circumvent these problems with a theoretical grounding.

    Signed networks allow us to model conflicting relationships and interactions, such as friend/enemy and support/oppose. These signed interactions happen in real time. Modeling such dynamics of signed networks is crucial to understanding the evolution of polarization in the network and enabling effective prediction of the signed structure (i.e., link signs and signed weights) in the future. However, existing works have modeled either (static) signed networks or dynamic (unsigned) networks but not dynamic signed networks. Since both sign and dynamics inform the graph structure in different ways, it is non-trivial to model how to combine the two features. In this work, we propose a new Graph Neural Network (GNN)-based approach to model dynamic signed networks, named SEMBA: Signed link's Evolution using Memory modules and Balanced Aggregation. Here, the idea is to incorporate the signs of temporal interactions using separate modules guided by balance theory and to evolve the embeddings from a higher-order neighborhood. Experiments on 4 real-world datasets and 4 different tasks demonstrate that SEMBA consistently and significantly outperforms the baselines by up to 80% on the tasks of predicting signs of future links while matching the state-of-the-art performance on predicting the existence of these links in the future. We find that this improvement is due specifically to the superior performance of SEMBA on the minority negative class.
  • [March 30th, 2023]

  • Towards Better Evaluation for Dynamic Link Prediction (NeurIPS 2022 Datasets and Benchmarks Track)
    Presenter: Farimah Poursafaei, McGill University, Mila
    Farimah Poursafaei (she/her) is a PostDoc at McGill University and Mila. She conducts research on dynamic graph neural networks and temporal graphs. She completed her PhD at McGill University in Computer Engineering. During her PhD, she worked on anomaly detection on cryptocurrency transaction networks. She served as the Reviewing Chair for the Temporal Graph Learning Workshop @ NeurIPS 2022.

    Despite the prevalence of recent success in learning from static graphs, learning from time-evolving graphs remains an open challenge. In this work, we design new, more stringent evaluation procedures for link prediction specific to dynamic graphs, which reflect real-world considerations, to better compare the strengths and weaknesses of methods. First, we create two visualization techniques to understand the reoccurring patterns of edges over time and show that many edges reoccur at later time steps. Based on this observation, we propose a pure memorization-based baseline called EdgeBank. EdgeBank achieves surprisingly strong performance across multiple settings which highlights that the negative edges used in the current evaluation are easy. To sample more challenging negative edges, we introduce two novel negative sampling strategies that improve robustness and better match real-world applications. Lastly, we introduce six new dynamic graph datasets from a diverse set of domains missing from current benchmarks, providing new challenges and opportunities for future research. Our code repository is accessible at github
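    The EdgeBank baseline described above is simple enough to state in full: remember every edge seen so far and predict positive iff a queried edge is in memory. The sketch below is the unlimited-memory variant; the paper also studies a variant that remembers only a recent time window.

```python
class EdgeBank:
    """Pure memorization baseline for dynamic link prediction:
    predict an edge as positive iff it has been observed before."""

    def __init__(self):
        self.memory = set()

    def update(self, edges):
        """Store observed edges; each edge is (u, v) plus optional
        extras such as a timestamp, which this variant ignores."""
        for u, v, *_ in edges:
            self.memory.add((u, v))

    def predict(self, u, v):
        return 1.0 if (u, v) in self.memory else 0.0
```

That such a memorization-only predictor scores well is the paper's evidence that standard negative sampling makes current evaluation too easy.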
  • [April 6th, 2023]

  • Scalable Spatiotemporal Graph Neural Networks, AAAI 2023 (also NeurIPS 2022 Temporal Graph Learning Workshop best paper)
    Presenter: Andrea Cini and Ivan Marisca, IDSIA, Università della Svizzera italiana
    Andrea Cini is a Ph.D. student at IDSIA and a visiting researcher at Imperial College London. Previously, he worked as a machine learning engineer in the aerospace industry and obtained his MSc and BSc in Computer Science at Politecnico di Milano. Andrea’s research has been published in top-tier machine-learning venues such as JMLR, NeurIPS, ICLR and AAAI. His main research interests are in methods to process spatiotemporal data by exploiting graph representations and graph deep learning; applications are in time series analysis, with a focus on spatiotemporal forecasting. Personal website: https://andreacini.github.io/
    Ivan Marisca is a Ph.D. student at the Graph Machine Learning Group within the Swiss AI lab IDSIA at Università della Svizzera italiana (USI), under the supervision of Prof. Cesare Alippi. He received his BSc (2017) and MSc (2020) degrees in Computer Science and Engineering from Politecnico di Milano. Ivan's research focuses on graph-based learning from irregular spatiotemporal data, with applications in prediction, imputation, and control on sensor networks. His works have been published in top-tier conferences such as NeurIPS, ICLR, and AAAI. In addition to his research, Ivan is an enthusiastic teaching assistant and lecturer in machine learning courses for MSc students at USI. Personal website: https://marshka.github.io/

    Slidelive video recording from NeurIPS 2022 TGL workshop
    Neural forecasting of spatiotemporal time series drives both research and industrial innovation in several relevant application domains. Graph neural networks (GNNs) are often the core component of the forecasting architecture. However, in most spatiotemporal GNNs, the computational complexity scales up to a quadratic factor with the length of the sequence times the number of links in the graph, hence hindering the application of these models to large graphs and long temporal sequences. While methods to improve scalability have been proposed in the context of static graphs, few research efforts have been devoted to the spatiotemporal case. To fill this gap, we propose a scalable architecture that exploits an efficient encoding of both temporal and spatial dynamics. In particular, we use a randomized recurrent neural network to embed the history of the input time series into high-dimensional state representations encompassing multi-scale temporal dynamics. Such representations are then propagated along the spatial dimension using different powers of the graph adjacency matrix to generate node embeddings characterized by a rich pool of spatiotemporal features. The resulting node embeddings can be efficiently pre-computed in an unsupervised manner, before being fed to a feed-forward decoder that learns to map the multi-scale spatiotemporal representations to predictions. The training procedure can then be parallelized node-wise by sampling the node embeddings without breaking any dependency, thus enabling scalability to large networks. Empirical results on relevant datasets show that our approach achieves results competitive with the state of the art, while dramatically reducing the computational burden.
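    The spatial-propagation step described above can be sketched as follows: per-node state vectors are propagated with successive powers of a row-normalized adjacency matrix, and the results are concatenated into one multi-scale embedding per node. A hedged sketch only; function and variable names are ours, and the paper's actual pipeline includes a randomized recurrent encoder and a learned decoder that are omitted here.

    ```python
    import numpy as np

    def multiscale_embeddings(adj, states, k):
        """Concatenate A^0 H, A^1 H, ..., A^(k-1) H along the feature axis."""
        # Row-normalize so repeated propagation averages neighbor states.
        norm = adj / np.maximum(adj.sum(axis=1, keepdims=True), 1e-8)
        blocks, h = [], states
        for _ in range(k):
            blocks.append(h)
            h = norm @ h  # one more hop of spatial propagation
        return np.concatenate(blocks, axis=1)

    # Usage: 3 nodes on a line graph, 2-dimensional per-node temporal states.
    A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
    H = np.random.randn(3, 2)
    Z = multiscale_embeddings(A, H, k=3)  # shape (3, 6)
    ```

    Because these embeddings need no gradients, they can be precomputed once and nodes sampled independently during decoder training, which is what makes the approach scale.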
  • [April 13th, 2023]

  • Spatio-Temporal Meta-Graph Learning for Traffic Forecasting, AAAI 2023
    DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction, CIKM 2021 Best Resource Paper Runner-Up
    Presenter: Renhe Jiang, Center for Spatial Information Science, The University of Tokyo
    Renhe Jiang is a full-time lecturer at the Center for Spatial Information Science, The University of Tokyo. He received his B.S. degree in Software Engineering from Dalian University of Technology, China, in 2012, M.S. degree in Information Science from Nagoya University, Japan, in 2015, and Ph.D. degree in Civil Engineering from The University of Tokyo, Japan, in 2019. From 2019 to 2022, he was an assistant professor at the Information Technology Center, The University of Tokyo. His research interests include data mining and machine learning, especially spatiotemporal data science, spatiotemporal AI, multivariate time series, urban computing, and intelligent transportation systems. You can find his website here.

    Spatio-Temporal Meta-Graph Learning for Traffic Forecasting (AAAI 2023). Traffic forecasting, as a canonical task of multivariate time series forecasting, has been a significant research topic in the AI community. To address the spatio-temporal heterogeneity and non-stationarity implied in the traffic stream, in this study, we propose Spatio-Temporal Meta-Graph Learning as a novel Graph Structure Learning mechanism on spatio-temporal data. Specifically, we implement this idea into Meta-Graph Convolutional Recurrent Network (MegaCRN) by plugging the Meta-Graph Learner powered by a Meta-Node Bank into the GCRN encoder-decoder. We conduct a comprehensive evaluation on two benchmark datasets (i.e., METR-LA and PEMS-BAY) and a new large-scale traffic speed dataset called EXPY-TKY that covers 1843 expressway road links in Tokyo. Our model outperformed state-of-the-art methods on all three datasets. Besides, through a series of qualitative evaluations, we demonstrate that our model can explicitly disentangle the road links and time slots with different patterns and be robustly adaptive to any anomalous traffic situations. Codes and datasets are available at https://github.com/deepkashiwa20/MegaCRN.
    DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction (CIKM 2021). Nowadays, with the rapid development of IoT (Internet of Things) and CPS (Cyber-Physical Systems) technologies, big spatiotemporal data are being generated from mobile phones, car navigation systems, and traffic sensors. By leveraging state-of-the-art deep learning technologies on such data, urban traffic prediction has drawn a lot of attention in AI and Intelligent Transportation System community. The problem can be uniformly modeled with a 3D tensor (T, N, C), where T denotes the total time steps, N denotes the size of the spatial domain (i.e., mesh-grids or graph-nodes), and C denotes the channels of information. According to the specific modeling strategy, the state-of-the-art deep learning models can be divided into three categories: grid-based, graph-based, and multivariate time-series models. In this study, we first synthetically review the deep traffic models as well as the widely used datasets, then build a standard benchmark to comprehensively evaluate their performances with the same settings and metrics. Our study named DL-Traff is implemented with two most popular deep learning frameworks, i.e., TensorFlow and PyTorch, which is already publicly available as two GitHub repositories https://github.com/deepkashiwa20/DL-Traff-Grid and https://github.com/deepkashiwa20/DL-Traff-Graph. With DL-Traff, we hope to deliver a useful resource to researchers who are interested in spatiotemporal data analysis.
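    The uniform (T, N, C) tensor framing described above is typically turned into supervised forecasting samples by sliding a window over the time axis: use the past P steps to predict the next Q steps. A minimal sketch of that convention, with illustrative names not taken from DL-Traff:

    ```python
    def sliding_windows(series, p, q):
        """series: list of T per-step observations; returns (history, horizon) pairs."""
        samples = []
        for t in range(len(series) - p - q + 1):
            samples.append((series[t:t + p], series[t + p:t + p + q]))
        return samples

    # Usage: T=6 steps of scalar readings from a single sensor (N=1, C=1 flattened).
    pairs = sliding_windows([1, 2, 3, 4, 5, 6], p=3, q=1)
    # pairs[0] == ([1, 2, 3], [4])
    ```

    Grid-based, graph-based, and multivariate time-series models all consume samples of this shape; they differ only in how the N spatial locations interact inside the model.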
  • [April 20th, 2023]

  • Graph Neural Networks for Link Prediction with Subgraph Sketching, ICLR 2023 Notable top 5%
    Presenters: Benjamin Paul Chamberlain, Charm Therapeutics, and Sergey Shirobokov, ShareChat
    Dr Ben Chamberlain is a Principal Scientist at Charm Therapeutics, where he works on 3D machine learning to develop cancer drugs. He was previously a staff machine learning researcher within the graph ML group at Twitter and the Head of Machine Learning at ASOS.com. He did his PhD at Imperial College with Professor Marc Deisenroth on large-scale graph ML.
    Dr Sergey Shirobokov is a Senior Machine Learning Scientist at ShareChat, where he works on improving the company's recommender algorithms. He was previously a Senior Machine Learning Researcher on the Twitter Cortex team. He did his PhD at Imperial College London on simulator-based optimisation algorithms for high-energy physics.

    Many Graph Neural Networks (GNNs) perform poorly compared to simple heuristics on Link Prediction (LP) tasks. This is due to limitations in expressive power such as the inability to count triangles (the backbone of most LP heuristics) and because they cannot distinguish automorphic nodes (those having identical structural roles). Both expressiveness issues can be alleviated by learning link (rather than node) representations and incorporating structural features such as triangle counts. Since explicit link representations are often prohibitively expensive, recent works have resorted to subgraph-based methods, which have achieved state-of-the-art performance for LP, but suffer from poor efficiency due to high levels of redundancy between subgraphs. We analyze the components of subgraph GNN (SGNN) methods for link prediction. Based on our analysis, we propose a novel full-graph GNN called ELPH (Efficient Link Prediction with Hashing) that passes subgraph sketches as messages to approximate the key components of SGNNs without explicit subgraph construction. ELPH is provably more expressive than Message Passing GNNs (MPNNs). It outperforms existing SGNN models on many standard LP benchmarks while being orders of magnitude faster. However, it shares the common GNN limitation that it is only efficient when the dataset fits in GPU memory. Accordingly, we develop a highly scalable model, called BUDDY, which uses feature precomputation to circumvent this limitation without sacrificing predictive performance. Our experiments show that BUDDY also outperforms SGNNs on standard LP benchmarks while being highly scalable and faster than ELPH.
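    The sketching idea behind the approach can be illustrated with a generic MinHash example: estimate the overlap of two nodes' neighbor sets (the basis of common-neighbor heuristics) from small per-node signatures instead of materializing subgraphs. This sketches the general technique only, not ELPH's actual sketch data structures; all names are ours.

    ```python
    import hashlib

    def minhash_signature(items, num_hashes=64):
        """One minimum per seeded hash function; approximates the set compactly."""
        sig = []
        for i in range(num_hashes):
            best = None
            for x in items:
                h = int(hashlib.sha1(f"{i}:{x}".encode()).hexdigest(), 16)
                if best is None or h < best:
                    best = h
            sig.append(best)
        return sig

    def jaccard_estimate(sig_a, sig_b):
        # The fraction of slots where two signatures agree estimates the
        # Jaccard similarity of the underlying neighbor sets.
        matches = sum(a == b for a, b in zip(sig_a, sig_b))
        return matches / len(sig_a)

    # Usage: two nodes with overlapping neighbor sets (true Jaccard = 2/6).
    na, nb = {1, 2, 3, 4}, {3, 4, 5, 6}
    est = jaccard_estimate(minhash_signature(na), minhash_signature(nb))
    ```

    Passing such sketches as messages lets a full-graph GNN approximate structural features like common-neighbor counts without per-link subgraph construction.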
  • [April 27th, 2023]

  • Temporal Knowledge Graph Reasoning with Historical Contrastive Learning AAAI 2023
    Presenter: Yi Xu, Shanghai Jiao Tong University
    Yi Xu is a third-year Ph.D. candidate at Shanghai Jiao Tong University under the supervision of Prof. Luoyi Fu and Prof. Xinbing Wang. His research interests include natural language processing, knowledge graphs and data mining. Specifically, he is devoted to the research of large language models and temporal knowledge graphs.

    Temporal knowledge graph, serving as an effective way to store and model dynamic relations, shows promising prospects in event forecasting. However, most temporal knowledge graph reasoning methods are highly dependent on the recurrence or periodicity of events, which brings challenges to inferring future events related to entities that lack historical interaction. In fact, the current moment is often the combined effect of a small part of historical information and those unobserved underlying factors. To this end, we propose a new event forecasting model called Contrastive Event Network (CENET), based on a novel training framework of historical contrastive learning. CENET learns both the historical and non-historical dependency to distinguish the most potential entities that can best match the given query. Simultaneously, it trains representations of queries to investigate whether the current moment depends more on historical or non-historical events by launching contrastive learning. The representations further help train a binary classifier whose output is a boolean mask to indicate related entities in the search space. During the inference process, CENET employs a mask-based strategy to generate the final results. We evaluate our proposed model on five benchmark graphs. The results demonstrate that CENET significantly outperforms all existing methods in most metrics, achieving at least 8.3% relative improvement of Hits@1 over previous state-of-the-art baselines on event-based datasets.
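    The mask-based inference step described above can be sketched generically: a boolean mask over candidate entities suppresses those the binary classifier deems unrelated before the final ranking. A hedged illustration of the general mechanism, not CENET's actual code; names are ours.

    ```python
    def masked_rank(scores, mask):
        """scores: per-entity query scores; mask: True where the classifier keeps the entity.
        Returns the index of the best unmasked entity."""
        penalized = [s if m else float("-inf") for s, m in zip(scores, mask)]
        return max(range(len(penalized)), key=lambda i: penalized[i])

    # Usage: entity 0 has the highest raw score but is masked out as unrelated.
    best = masked_rank([0.9, 0.8, 0.7], [False, True, True])
    ```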
  • Organizers

    profile picture

    Farimah Poursafaei (she/her) is a PostDoc at McGill University and Mila. She conducts research on dynamic graph neural networks and temporal graphs. She completed her PhD in Computer Engineering at McGill University, where she worked on anomaly detection on cryptocurrency transaction networks. She served as the Reviewing Chair for the Temporal Graph Learning Workshop @ NeurIPS 2022.
    Website, Google Scholar, Linkedin

    profile picture

    Julia Gastinger (she/her) is a research scientist in the AI Innovations group at NEC Laboratories Europe and a Ph.D. student at Mannheim University, supervised by Professor Heiner Stuckenschmidt. Her research primarily focuses on graph-based Machine Learning – she is interested in how to incorporate the time aspect in knowledge graph representations. Prior to joining NEC Laboratories Europe, Julia graduated with a master’s degree from Stuttgart University, where she studied Engineering Cybernetics with a focus on Autonomous Systems and Control Theory.
    Google Scholar, LinkedIn, NEC Laboratories
    Most Recent Publication: On the Evaluation of Methods for Temporal Knowledge Graph Forecasting

    profile picture

    Shenyang Huang (he/him) is a PhD student at McGill University and Mila, focusing on temporal graph learning (supervised by Prof. Reihaneh Rabbany and Prof. Guillaume Rabusseau). He is interested in representation learning on temporal graphs, anomaly detection and graph representation learning. He was the Organization Chair for the Temporal Graph Learning Workshop @ NeurIPS 2022. His previous research includes change point detection on temporal graphs, COVID-19 disease modeling with temporal contact graphs and link prediction on temporal graphs. He also enjoys writing medium blog posts about temporal graph learning.
    Website, Google Scholar, Twitter, Linkedin