Different Methods in Geometric Deep Learning — Part 1


Here we start a series discussing various geometric deep learning algorithms for different network types.

Introduction

In this series of stories, we will explain different geometric deep learning (GDL) algorithms that are based on different approaches. From the next story on, we will devote a complete story to each algorithm and try to cover as many ideas as possible. At the moment, the plan is to discuss DeepWalk, GraphSAGE, and NeoDTI, each prominent in GDL in a different respect. Nonetheless, this list could expand, shrink, or change. In this story, though, we will lay the ground for the next ones and define some terminology. Thus, if you are already familiar with geometric deep learning, feel free to skip this part.

Motivation

Machine learning (ML) is the art of prediction given a set of observations. The observations can come in various forms such as text, images, and videos, and can have explicit labels such as topics, classes, and annotations. Deep learning (DL) is a recently emerged subfield of ML that focuses mostly on neural networks. DL algorithms are more complex than traditional ML algorithms and can leverage subtle and indirect relations in the data. By capturing these relations, DL models have improved performance on a vast range of tasks.

In most ML and DL applications, observations are treated independently of each other. For instance, in an image labeling problem, each image is treated as a separate instance, and instances do not share any information. Though the independence assumption holds for a wide range of problems, observations can have various types of relations or interactions with each other. Leveraging these interactions means providing enriched information to the model and capturing relations that could not be captured otherwise. Given that the improvement brought by DL is due to capturing subtle interactions, explicitly representing the interactions can help model performance as well.

To represent complex interactions between input instances, we need an input representation scheme that is more powerful than putting all of the instances into a set and treating them independently. Networks, or graphs, are widely used data structures for modeling interaction-rich data.

Networks

Networks consist of nodes, which represent entities, and edges, which denote the relationships or interactions between them. Their modeling scope includes, but is not limited to, roads, texts, social platforms, and protein-protein and protein-drug interactions. Now let us construct an example citation network step by step.

In citation networks, the entities are papers. Thus, we can model each paper by a node. We can insert an edge between two nodes if one of the papers cites the other. From the example below, we can infer that papers A and B have a citation relation, whereas A and D are disjoint works.

Each node is a paper and each edge corresponds to a citation.
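
To make this concrete, here is a minimal sketch of the citation network in Python with networkx. The exact edge list is an assumption for illustration; the text only fixes that A and B share a citation while A and D are disjoint.

```python
import networkx as nx

# One node per paper; edges denote citations. The edge list beyond A-B
# is assumed for illustration.
G = nx.Graph()
G.add_nodes_from(["A", "B", "C", "D", "E"])
G.add_edges_from([("A", "B"), ("B", "C"), ("D", "E")])

print(G.has_edge("A", "B"))      # True: A and B have a citation relation
print(nx.has_path(G, "A", "D"))  # False: A and D are disjoint works
```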

We can enrich the model by adding features to nodes or edges. For citation networks, we can use the distributed representation of each document, or any other feature, as node features and incorporate more information into our representation. In the next schema, we can see each document associated with its distributed representation.

We can enrich nodes with external information.
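
Continuing the sketch, node features can be stored as node attributes. The 4-dimensional random vectors below are placeholders for real document embeddings (e.g., from doc2vec or TF-IDF):

```python
import numpy as np
import networkx as nx

# Rebuild the toy citation graph and attach a feature vector per paper.
# Random 4-dimensional vectors stand in for real distributed representations.
G = nx.Graph([("A", "B"), ("B", "C"), ("D", "E")])
rng = np.random.default_rng(0)
for paper in G.nodes:
    G.nodes[paper]["feature"] = rng.normal(size=4)

print(G.nodes["A"]["feature"])  # the feature vector of paper A
```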

Until now, we have seen how to model instances with their features and inner interactions. Yet the indispensable component of supervised machine learning is still missing: the labels! For a citation network, paper topics can be considered as labels, and we can integrate them into our model as node labels. Below, each paper is colored according to its topic. We can deduce that papers A, B, and C share the same topic, whereas D and E belong to a different one.

We can associate each node with labels.
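
The topic labels can be attached in the same way; here they are read off the figure, with A, B, and C in one topic and D and E in another:

```python
import networkx as nx

# Attach a topic label to every paper, following the coloring in the figure.
G = nx.Graph([("A", "B"), ("B", "C"), ("D", "E")])
topics = {"A": "topic_1", "B": "topic_1", "C": "topic_1",
          "D": "topic_2", "E": "topic_2"}
nx.set_node_attributes(G, topics, name="label")

print(G.nodes["C"]["label"])  # topic_1
```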

We have shown how to represent the features and labels of instances alongside the interactions between them. Although representing interactions between instances already gives us more power than a set representation, we can go further and introduce different types of entities into our model. For example, we can add authors as a node type distinct from papers, and link authors to each other if they have co-authored a work before. Additionally, we can link an author to a paper if he or she is among the paper's authors.

Networks with multiple entity and/or interaction types are called heterogeneous networks, whereas networks with a single entity and interaction type are called homogeneous networks. Below we can see a heterogeneous citation network with authors and papers. We can deduce that Bob and Alice have co-authored a paper before and that Bob is an author of paper A. Note that, for brevity, the figure contains neither all of the papers of an author nor all of the authors of a paper. Distributed representations are also hidden for the same reason.

We can introduce different entity and relation types in networks. Each relation type is emphasized with a different color.
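
One simple way to encode such a heterogeneous network is to type the nodes and edges via attributes. Dedicated GDL libraries offer first-class heterogeneous graph objects, but a plain attributed graph already captures the idea; the sketch below covers only the relations named above:

```python
import networkx as nx

# Typed nodes (papers vs. authors) and typed edges (co-authorship,
# authorship), reflecting only the relations stated in the text.
H = nx.Graph()
H.add_node("A", node_type="paper")
H.add_node("Bob", node_type="author")
H.add_node("Alice", node_type="author")
H.add_edge("Bob", "Alice", edge_type="co-authored")
H.add_edge("Bob", "A", edge_type="authored")

print(H.nodes["Bob"]["node_type"])       # author
print(H.edges["Bob", "A"]["edge_type"])  # authored
```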

Here we should note the difference between node labels and entity types. Labels are classes of instances of the same type: though the labels of two papers may differ, they share the same entity type, namely being a paper. Authors and papers, on the other hand, are two separate notions with different interaction types, features, and so on.

Geometric Deep Learning

Geometric Deep Learning (GDL) is a developing field that focuses on neural networks that explicitly leverage the network structure of the input. Such neural networks are called graph neural networks, and they are drawing huge attention. The chart below shows that graph neural network is the keyword that gained the most popularity in ICLR 2020 submissions. Note that ICLR is among the top conferences in DL.

Keyword usage change in ICLR submissions [1].

With such a strong modeling technique, we can define four broad tasks directly on networks, all under the umbrella of GDL.

Node Classification

As in the citation network example, each node can be associated with a label. In node classification, the aim is to predict unknown labels given the known ones. Though this problem could be formulated by treating each node as an independent instance with an explicit label, the network-based formulation and graph neural networks let us utilize the network structure and share information between nodes.

An example use case would be to tag papers with topics given their references and citations in a citation network.
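
To illustrate how information can be shared between nodes, here is a minimal numpy sketch of one graph-convolution step in the style of a GCN layer (Kipf and Welling); this is one of many possible propagation rules, not the only one:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN-style propagation step: each node mixes its own features
    with its neighbors' before a shared linear transform."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W    # normalized propagation

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)  # a tiny 3-node citation graph
X = rng.normal(size=(3, 4))             # node features
W = rng.normal(size=(4, 2))             # weights for 2 candidate topics
print(gcn_layer(A, X, W).shape)         # (3, 2): one score vector per node
```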

Link Prediction

Links, or edges, are core components of networks. In link prediction, the goal is to predict whether a link could exist between two nodes, even though it is not stated. Note that this problem could also be formulated as a binary classification task, yet that would not allow utilizing the network structure.

As an example use case, we can state friend recommendation in social networks. Representing users as nodes and friendships as links between them, link prediction corresponds to predicting whether two users could become friends.
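
As a concrete (non-neural) baseline, classical link prediction scores a candidate pair by neighborhood overlap; networkx ships several such heuristics. Graph-neural-network approaches instead score pairs of learned node embeddings.

```python
import networkx as nx

# Score a candidate friendship by the Jaccard similarity of the two users'
# neighborhoods: a classical heuristic baseline for link prediction.
G = nx.Graph([("u", "v"), ("v", "w"), ("u", "x"), ("w", "x")])
for u, v, score in nx.jaccard_coefficient(G, [("u", "w")]):
    print(f"link score for ({u}, {v}): {score:.2f}")  # 1.00: same neighbors
```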

Node Clustering

Node clustering is the network counterpart of the clustering we are already familiar with. The aim of node clustering is to group similar nodes with respect to a pre-defined similarity metric. Node clustering can be used for community detection in social platforms or paper clustering in citation networks.
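
As a quick sketch, modularity-based community detection in networkx groups densely connected nodes; embedding-based pipelines would instead cluster learned node representations:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Two tightly knit triangles joined by a single bridge edge:
# modularity-based clustering recovers them as two communities.
G = nx.Graph([(0, 1), (1, 2), (0, 2),   # community one
              (3, 4), (4, 5), (3, 5),   # community two
              (2, 3)])                  # bridge edge
communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])  # e.g. [[0, 1, 2], [3, 4, 5]]
```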

Network Classification

This task differs from the previous ones in that it is defined on multiple networks rather than a single one. In network classification, the goal is to associate each network with a label based on known graph-label pairs.

We can use network classification to map molecules to chemical properties, modeling each molecule as a separate network with atoms as the nodes and bonds as the edges. In this scenario, we can infer the properties of novel chemicals with few or no experiments.
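
The core computational idea can be sketched with a simple readout: pool the node (atom) features of each molecule into a single vector, then apply a shared classifier. Real models would first refine the node features with graph convolutions; the weights here are random placeholders.

```python
import numpy as np

def classify_graph(node_features, W):
    """Mean-pool the node features into one graph vector, then score it."""
    graph_vector = node_features.mean(axis=0)
    return graph_vector @ W  # one logit per candidate property

rng = np.random.default_rng(0)
molecule = rng.normal(size=(5, 8))        # 5 atoms, 8 features each
W = rng.normal(size=(8, 3))               # placeholder weights, 3 properties
print(classify_graph(molecule, W).shape)  # (3,)
```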

Conclusion

In this story, we introduced geometric deep learning and discussed how it can be useful in various tasks. We used networks to explicitly represent the interactions between a model's input instances and proposed that leveraging these interactions can increase model performance. In the next stories, we will see different approaches that aggregate information from different nodes and leverage the interactions explicitly.

References

[1] https://twitter.com/prlz77/status/1178662575900368903