Graph neural networks

The birth of a paper

Jimmy (xiaoke) Shen
5 min read · Apr 19, 2020

GNNs are a hot new topic. I am studying them, and I will share here whatever I find helpful along the way, as a record of my learning process.

Nice introduction videos

GNN

MUST SEE
Very clearly explained. After watching this video, I had a very clear picture of what a GNN is.

This is a pretty nice talk by Alexander Gaunt from Microsoft Research.

Alexander Gaunt

I have a master's and a PhD in experimental quantum physics from the University of Cambridge and am a Junior Research Fellow at Trinity College. During my PhD I developed a method for trapping and studying cold atomic clouds in holograms, culminating in publications in Science, Nature and Nature Physics. I was an early adopter of CUDA for scientific simulations, which led to an internship and postdoc positions in the MIP group at MSRC.

Formal academic lectures

Learning the Structure of Graph Neural Networks

The above talk is delivered by a research scientist from NEC. It is very clear and informative, and a must-see even though it is about an hour and a half long.

Graph Representation Learning (Stanford University) part 1

Libraries that might be useful

PyTorch Geometric

Deep Graph Library

graph_nets

If you are a TensorFlow user, please check out graph_nets, developed by DeepMind.

Get started by developing your first GNN project

This video gives a great explanation of how to get started.

However, I decided to follow this video and post to complete my first GNN project.

I downloaded the data and code from here.

As the original repository includes too many files, I created a trimmed-down copy and put it in a repo on my GitHub. You can get my version from here:

git clone https://github.com/liketheflower/gnn.git

The project Jupyter notebook is here

https://github.com/liketheflower/gnn/blob/master/dgl/01_karate_club/karate_club.ipynb

The training process and final prediction can be found here:

Loss curve

Prediction results

If you want to fully understand the system, please check out the following code. It has fewer nodes, so you can print out all the intermediate values to get a better understanding of the model.
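To give a flavor of what such a small example looks like, here is a minimal sketch of my own (assuming DGL with a PyTorch backend; the tiny 4-node graph, labels, and layer sizes are illustrative choices, not the original snippet):

# Minimal semi-supervised GCN on a tiny toy graph (illustrative only).
import dgl
import torch
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import GraphConv

# A 4-node ring (0-1, 1-2, 2-3, 3-0), with both edge directions added
src = torch.tensor([0, 1, 2, 3, 1, 2, 3, 0])
dst = torch.tensor([1, 2, 3, 0, 0, 1, 2, 3])
g = dgl.graph((src, dst), num_nodes=4)

class GCN(nn.Module):
    def __init__(self, in_feats, hidden, num_classes):
        super().__init__()
        self.conv1 = GraphConv(in_feats, hidden)
        self.conv2 = GraphConv(hidden, num_classes)

    def forward(self, g, x):
        h = F.relu(self.conv1(g, x))
        return self.conv2(g, h)

feats = torch.eye(4)              # one-hot node features
labeled = torch.tensor([0, 2])    # only two nodes carry labels
labels = torch.tensor([0, 1])

model = GCN(4, 8, 2)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(50):
    logits = model(g, feats)
    loss = F.cross_entropy(logits[labeled], labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if epoch % 10 == 0:
        # print intermediate values to see how the node embeddings evolve
        print(epoch, loss.item(), logits.detach().numpy())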

Alternative

The tutorial from DGL is a good alternative for building a first GNN model.

First practice of using VGAE (April 20, 2020)

Task

Reproduce the experimental results of the paper Variational Graph Auto-Encoders.

Experimental results from the Variational Graph Auto-Encoders paper

Dataset: Cora

The Cora dataset consists of 2708 scientific publications, each classified into one of seven classes. The citation network consists of 5429 links. Each publication in the dataset is described by a 0/1-valued word vector indicating the absence/presence of the corresponding word from the dictionary. The dictionary consists of 1433 unique words.

The seven classes are:

  • Case_Based
  • Genetic_Algorithms
  • Neural_Networks
  • Probabilistic_Methods
  • Reinforcement_Learning
  • Rule_Learning
  • Theory

Details about this dataset

The directory contains two files:

The .content file contains descriptions of the papers in the following format:

<paper_id> <word_attributes>+ <class_label>

The first entry in each line contains the unique string ID of the paper followed by binary values indicating whether each word in the vocabulary is present (indicated by 1) or absent (indicated by 0) in the paper. Finally, the last entry in the line contains the class label of the paper.

The .cites file contains the citation graph of the corpus. Each line describes a link in the following format:

<ID of cited paper> <ID of citing paper>

Each line contains two paper IDs: the first is the ID of the paper being cited, and the second is the ID of the paper containing the citation. The direction of the link is therefore right to left: a line "paper1 paper2" represents the link "paper2 -> paper1".

One sample of the .content file
Samples of the .cites file
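To make the format concrete, here is a small parsing sketch of my own (the file names cora.content and cora.cites are assumptions; adjust the paths to wherever you unpacked the dataset):

# Hedged parsing sketch; not the project's original loading code.
import numpy as np

ids, feats, labels = [], [], []
with open("cora.content") as f:
    for line in f:
        parts = line.strip().split("\t")
        ids.append(parts[0])                         # <paper_id>
        feats.append([int(v) for v in parts[1:-1]])  # 1433 binary word attributes
        labels.append(parts[-1])                     # <class_label>

idx = {pid: i for i, pid in enumerate(ids)}
X = np.array(feats)                                  # shape (2708, 1433)

edges = []
with open("cora.cites") as f:
    for line in f:
        cited, citing = line.split()
        # direction is citing -> cited ("paper2 -> paper1")
        edges.append((idx[citing], idx[cited]))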

Code I used

Visualization of the built graph

The graph has:

  • 2708 nodes
  • 10556 edges

Visualization of the initial graph by Jimmy Shen

I created this visualization by modifying the code from the link mentioned above. Pretty beautiful, right?
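For reference, here is a hedged sketch of this kind of visualization, building a NetworkX graph from the edge list parsed earlier and drawing it with a force-directed layout (the layout and styling are my guesses, not the original plotting code):

# Hedged visualization sketch; `edges` comes from the parsing sketch above.
import networkx as nx
import matplotlib.pyplot as plt

nx_g = nx.Graph()
nx_g.add_edges_from(edges)
print(nx_g.number_of_nodes(), nx_g.number_of_edges())  # sanity-check the sizes

pos = nx.spring_layout(nx_g, seed=42)  # force-directed layout
nx.draw(nx_g, pos, node_size=10, width=0.2)
plt.show()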

Training process

  • Model summary

The model used in this project contains two GCN layers and one decoder (InnerProductDecoder); a code sketch is given after the model illustration below.

in_features is 1433 because "the dictionary consists of 1433 unique words."

Details of the model are described in the Variational Graph Auto-Encoders paper.

Model description from the Variational Graph Auto-Encoders
Why this model

Reference [4] in the excerpt above is the paper Semi-Supervised Classification with Graph Convolutional Networks.

Output of the model printed by PyTorch

Model summary of the GAE used for the training

Model illustration
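As promised above, here is a minimal sketch matching this description: two GCN layers produce node embeddings Z, and an inner-product decoder reconstructs the adjacency logits as ZZ^T. DGL's GraphConv is used as a stand-in for the GCN layer defined in the original code, and the hidden sizes are illustrative:

# Minimal GAE sketch: two GCN layers plus an inner-product decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import GraphConv

class GAE(nn.Module):
    def __init__(self, in_feats=1433, hidden=32, z_dim=16):
        super().__init__()
        self.conv1 = GraphConv(in_feats, hidden)  # in_features = vocabulary size
        self.conv2 = GraphConv(hidden, z_dim)

    def forward(self, g, x):
        h = F.relu(self.conv1(g, x))
        z = self.conv2(g, h)          # node embeddings Z, shape (N, z_dim)
        return z @ z.t()              # InnerProductDecoder: adjacency logits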
  • Training loss curve

In the original code, the L2 loss is calculated between the estimated adj_logits and adj. However, the estimated adj_logits come from a linear activation function, which means their values range over (-inf, +inf):

loss = loss_function(adj_logits, adj, pos_weight=pos_weight)

I changed this by first passing the logits through a sigmoid, which works better:

def f(x):
    return torch.sigmoid(x)

loss = loss_function(f(adj_logits), adj, pos_weight=pos_weight)
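For concreteness, here is one plausible reading of loss_function (a guess on my part, since the helper itself is not shown above): a pos_weight-ed L2 loss over all entries of the adjacency matrix, where pos_weight up-weights the rare edge entries:

def loss_function(pred, adj, pos_weight):
    # weighted L2 loss over all adjacency entries: pos_weight up-weights the
    # rare 1-entries (existing edges) of the sparse adjacency matrix
    weight = torch.ones_like(adj)
    weight[adj == 1] = pos_weight
    return (weight * (pred - adj) ** 2).mean()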
Training loss curve using the original loss provided in the code
Loss curve using the new loss function

Others

I plan to keep moving forward quickly. If you are also interested in this topic, drop me a comment, and let's improve together. Cheers.

Zero to hero plan

Achievements

  • Read basic tutorials
  • Do a simple project using a GNN

TO-DO List

  • Read 50 GNN-related papers (not yet)
  • Fully understand two other people's projects (yes)
  • Do my own project (yes)
  • Publish a paper using a GNN (on the way; the paper is ready. Cheers!)

Top researchers

https://jimmylba.github.io/

References

Links are provided inline throughout the article.
