Documentation | Paper | Colab Notebooks and Video Tutorials | External Resources | OGB Examples
PyG (PyTorch Geometric) is a library built upon PyTorch to easily write and train Graph Neural Networks (GNNs) for a wide range of applications related to structured data.
It consists of various methods for deep learning on graphs and other irregular structures, also known as geometric deep learning, from a variety of published papers. In addition, it provides easy-to-use mini-batch loaders for operating on many small and single giant graphs, multi-GPU support, a large number of common benchmark datasets (based on simple interfaces to create your own), the GraphGym experiment manager, and helpful transforms, both for learning on arbitrary graphs as well as on 3D meshes or point clouds.
Click here to join our Slack community!
Whether you are a machine learning researcher or first-time user of machine learning toolkits, here are some reasons to try out PyG for machine learning on graph-structured data.
In this quick tour, we highlight the ease of creating and training a GNN model with only a few lines of code.
In the first glimpse of PyG, we implement the training of a GNN for classifying papers in a citation graph. For this, we load the Cora dataset, and create a simple 2-layer GCN model using the pre-defined GCNConv:
import torch
from torch import Tensor
from torch_geometric.nn import GCNConv
from torch_geometric.datasets import Planetoid

dataset = Planetoid(root='.', name='Cora')

class GCN(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels, out_channels):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, out_channels)

    def forward(self, x: Tensor, edge_index: Tensor) -> Tensor:
        # x: Node feature matrix of shape [num_nodes, in_channels]
        # edge_index: Graph connectivity matrix of shape [2, num_edges]
        x = self.conv1(x, edge_index).relu()
        x = self.conv2(x, edge_index)
        return x

model = GCN(dataset.num_features, 16, dataset.num_classes)
import torch.nn.functional as F

data = dataset[0]
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(200):
    pred = model(data.x, data.edge_index)
    loss = F.cross_entropy(pred[data.train_mask], data.y[data.train_mask])

    # Backpropagation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
More information about evaluating final model performance can be found in the corresponding example.
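As a minimal sketch (not the full linked example), evaluation could continue from the model and data objects defined above by measuring accuracy on the held-out test nodes:

model.eval()
with torch.no_grad():
    pred = model(data.x, data.edge_index).argmax(dim=-1)

# Compare predictions against the ground-truth labels of the test split.
correct = (pred[data.test_mask] == data.y[data.test_mask]).sum()
acc = int(correct) / int(data.test_mask.sum())
print(f'Test accuracy: {acc:.4f}')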
In addition to the easy application of existing GNNs, PyG makes it simple to implement custom Graph Neural Networks (see here for the accompanying tutorial). For example, this is all it takes to implement the edge convolutional layer from Wang et al.:
import torch
from torch import Tensor
from torch.nn import Sequential, Linear, ReLU
from torch_geometric.nn import MessagePassing

class EdgeConv(MessagePassing):
    def __init__(self, in_channels, out_channels):
        super().__init__(aggr="max")  # "Max" aggregation.
        self.mlp = Sequential(
            Linear(2 * in_channels, out_channels),
            ReLU(),
            Linear(out_channels, out_channels),
        )

    def forward(self, x: Tensor, edge_index: Tensor) -> Tensor:
        # x: Node feature matrix of shape [num_nodes, in_channels]
        # edge_index: Graph connectivity matrix of shape [2, num_edges]
        return self.propagate(edge_index, x=x)  # shape [num_nodes, out_channels]

    def message(self, x_j: Tensor, x_i: Tensor) -> Tensor:
        # x_j: Source node features of shape [num_edges, in_channels]
        # x_i: Target node features of shape [num_edges, in_channels]
        edge_features = torch.cat([x_i, x_j - x_i], dim=-1)
        return self.mlp(edge_features)  # shape [num_edges, out_channels]
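As a quick usage sketch (the point cloud tensor and the value of k below are made up for illustration), this layer can be applied to a k-nearest-neighbor graph built with knn_graph from the optional torch-cluster package:

import torch
from torch_geometric.nn import knn_graph

conv = EdgeConv(in_channels=3, out_channels=64)

pos = torch.randn(100, 3)         # 100 points with 3D coordinates (illustrative)
edge_index = knn_graph(pos, k=6)  # dynamic graph construction; requires torch-cluster
out = conv(pos, edge_index)       # shape [100, 64]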
GraphGym allows you to manage and launch GNN experiments, using a highly modularized pipeline (see here for the accompanying tutorial).
git clone https://github.com/pyg-team/pytorch_geometric.git
cd pytorch_geometric/graphgym
bash run_single.sh # run a single GNN experiment (node/edge/graph-level)
bash run_batch.sh # run a batch of GNN experiments, using different GNN designs/datasets/tasks
Users are highly encouraged to check out the documentation, which contains additional tutorials on the essential functionalities of PyG, including data handling, creation of datasets and a full list of implemented methods, transforms, and datasets. For a quick start, check out our examples in examples/.
PyG provides a multi-layer framework that enables users to build Graph Neural Network solutions on both low and high levels. It is built on top of the low-level extension libraries torch-scatter, torch-sparse and torch-cluster.
We list the currently supported PyG models, layers and operators according to category:
GNN layers: All Graph Neural Network layers are implemented via the nn.MessagePassing interface. A GNN layer specifies how to perform message passing, i.e. by designing different message, aggregation and update functions as defined here. These GNN layers can be stacked together to create Graph Neural Network models.
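For instance (a minimal sketch with illustrative channel sizes), pre-defined layers can be composed into a model with torch_geometric.nn.Sequential:

from torch.nn import Linear, ReLU
from torch_geometric.nn import GCNConv, Sequential

# Two stacked GCN layers followed by a linear classifier (channel sizes are illustrative).
model = Sequential('x, edge_index', [
    (GCNConv(16, 64), 'x, edge_index -> x'),
    ReLU(inplace=True),
    (GCNConv(64, 64), 'x, edge_index -> x'),
    ReLU(inplace=True),
    Linear(64, 7),
])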
Pooling layers: Graph pooling layers combine the vectorial representations of a set of nodes in a graph (or a subgraph) into a single vector representation that summarizes the properties of its nodes. They are commonly applied to graph-level tasks, which require combining node features into a single graph representation.
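As a small sketch (the toy tensors below stand in for a real mini-batch of graphs), a readout such as global_mean_pool reduces node embeddings to one vector per graph:

import torch
from torch_geometric.nn import global_mean_pool

x = torch.randn(6, 32)                        # embeddings of 6 nodes (illustrative)
batch = torch.tensor([0, 0, 0, 1, 1, 1])      # nodes 0-2 belong to graph 0, nodes 3-5 to graph 1
graph_embedding = global_mean_pool(x, batch)  # shape [2, 32]: one vector per graph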
GNN models: Our supported GNN models incorporate multiple message passing layers, and users can directly use these pre-defined models to make predictions on graphs. Unlike simple stacking of GNN layers, these models could involve pre-processing, additional learnable parameters, skip connections, graph coarsening, etc.
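As a hedged sketch (assuming a PyG version that ships torch_geometric.nn.models, and reusing the Cora dataset and data objects from above; the hidden size is illustrative), one of the bundled models can be applied directly:

from torch_geometric.nn.models import GraphSAGE

model = GraphSAGE(
    in_channels=dataset.num_features,
    hidden_channels=64,
    num_layers=2,
    out_channels=dataset.num_classes,
)
out = model(data.x, data.edge_index)  # shape [num_nodes, num_classes]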
GNN operators and utilities: PyG comes with a rich set of neural network operators that are commonly used in many GNN models. They follow an extensible design: it is easy to apply these operators and graph utilities to existing GNN layers and models to further enhance model performance.
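For example (a small sketch reusing the Cora data object from above), common graph utilities such as adding self-loops or computing node degrees are one-liners:

from torch_geometric.utils import add_self_loops, degree

edge_index, _ = add_self_loops(data.edge_index, num_nodes=data.num_nodes)  # append self-loop edges
deg = degree(edge_index[0], num_nodes=data.num_nodes)                      # degree of every node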
Scalable GNNs: PyG supports the implementation of Graph Neural Networks that can scale to large-scale graphs. Such applications are challenging since the entire graph, its associated features and the GNN parameters cannot fit into GPU memory. Many state-of-the-art scalability approaches tackle this challenge by sampling neighborhoods for mini-batch training, by graph clustering and partitioning, or by using simplified GNN models. These approaches have been implemented in PyG, and can benefit from the above GNN layers, operators and models.
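As one hedged sketch of neighborhood sampling (assuming a PyG version that provides torch_geometric.loader.NeighborLoader, and reusing the model and Cora data object defined earlier), mini-batch training could be set up roughly like this:

from torch_geometric.loader import NeighborLoader

loader = NeighborLoader(
    data,                          # the Cora data object from above
    num_neighbors=[25, 10],        # sample at most 25 / 10 neighbors in the 1st / 2nd hop
    batch_size=128,
    input_nodes=data.train_mask,   # seed nodes to sample around
)

for batch in loader:
    out = model(batch.x, batch.edge_index)  # predictions for the sampled subgraph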
Update: You can now install PyG via Anaconda for all major OS/PyTorch/CUDA combinations:
conda install pyg -c pyg -c conda-forge
We alternatively provide pip wheels for all major OS/PyTorch/CUDA combinations, see here.
To install the binaries for PyTorch 1.9.0, simply run
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.9.0+${CUDA}.html
pip install torch-sparse -f https://data.pyg.org/whl/torch-1.9.0+${CUDA}.html
pip install torch-geometric
where ${CUDA} should be replaced by either cpu, cu102, or cu111 depending on your PyTorch installation (torch.version.cuda).
|         | cpu | cu102 | cu111 |
|---------|-----|-------|-------|
| Linux   | ✅  | ✅    | ✅    |
| Windows | ✅  | ✅    | ✅    |
| macOS   | ✅  |       |       |
For additional but optional functionality, run
pip install torch-cluster -f https://data.pyg.org/whl/torch-1.9.0+${CUDA}.html
pip install torch-spline-conv -f https://data.pyg.org/whl/torch-1.9.0+${CUDA}.html
To install the binaries for PyTorch 1.8.0 and 1.8.1, simply run
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.8.0+${CUDA}.html
pip install torch-sparse -f https://data.pyg.org/whl/torch-1.8.0+${CUDA}.html
pip install torch-geometric
where ${CUDA} should be replaced by either cpu, cu101, cu102, or cu111 depending on your PyTorch installation (torch.version.cuda).
|         | cpu | cu101 | cu102 | cu111 |
|---------|-----|-------|-------|-------|
| Linux   | ✅  | ✅    | ✅    | ✅    |
| Windows | ✅  | ✅    | ✅    | ✅    |
| macOS   | ✅  |       |       |       |
For additional but optional functionality, run
pip install torch-cluster -f https://data.pyg.org/whl/torch-1.8.0+${CUDA}.html
pip install torch-spline-conv -f https://data.pyg.org/whl/torch-1.8.0+${CUDA}.html
Note: Binaries of older versions are also provided for PyTorch 1.4.0, PyTorch 1.5.0, PyTorch 1.6.0 and PyTorch 1.7.0/1.7.1 (following the same procedure).
In case you want to experiment with the latest PyG features which are not fully released yet, ensure that torch-scatter and torch-sparse are installed by following the steps mentioned above, and install PyG from master via:
pip install git+https://github.com/pyg-team/pytorch_geometric.git
Please cite our paper (and the respective papers of the methods used) if you use this code in your own work:
@inproceedings{Fey/Lenssen/2019,
  title={Fast Graph Representation Learning with {PyTorch Geometric}},
  author={Fey, Matthias and Lenssen, Jan E.},
  booktitle={ICLR Workshop on Representation Learning on Graphs and Manifolds},
  year={2019},
}
Feel free to email us if you wish your work to be listed in the external resources. If you notice anything unexpected, please open an issue and let us know. If you have any questions or are missing a specific feature, feel free to discuss them with us. We are motivated to constantly make PyG even better.