Training a graph neural network (GNN) to create embeddings using spektral

Date: 2021-03-10 09:23:59

Tags: python machine-learning neural-network

I am working on building a graph neural network (GNN) that produces an embedding of an input graph, so that the embedding can be used in other applications such as reinforcement learning.

I started from the TUDataset classification with GIN example in the spektral library and modified it to split the network into two parts. The first part produces the embedding and the second part produces the classification. My goal is to train this network with supervised learning on a dataset with graph labels, e.g. TUDataset, and then reuse the trained first part (embedding generation) in other applications.

My approach gives different results on two different datasets: on TUDataset the new approach improves both the loss and the accuracy, while on another, local dataset the loss increases significantly.

Could I get some feedback on whether my way of creating the embedding is appropriate, and any suggestions for further improvement?

Here is the code I use to generate the graph embeddings:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.metrics import categorical_accuracy
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import Adam

from spektral.data import DisjointLoader
from spektral.datasets import TUDataset
from spektral.layers import GINConv, GlobalAvgPool

################################################################################
# PARAMETERS
################################################################################
learning_rate = 1e-3  # Learning rate
channels = 128  # Hidden units
layers = 3  # GIN layers
epochs = 300  # Number of training epochs
batch_size = 32  # Batch size

################################################################################
# LOAD DATA
################################################################################
dataset = TUDataset("PROTEINS", clean=True)

# Parameters
F = dataset.n_node_features  # Dimension of node features
n_out = dataset.n_labels  # Dimension of the target

# Train/test split
idxs = np.random.permutation(len(dataset))
split = int(0.9 * len(dataset))
idx_tr, idx_te = np.split(idxs, [split])
dataset_tr, dataset_te = dataset[idx_tr], dataset[idx_te]

loader_tr = DisjointLoader(dataset_tr, batch_size=batch_size, epochs=epochs)
loader_te = DisjointLoader(dataset_te, batch_size=batch_size, epochs=1)

################################################################################
# BUILD MODEL
################################################################################
class GIN0(Model):
    def __init__(self, channels, n_layers):
        super().__init__()
        self.conv1 = GINConv(channels, epsilon=0, mlp_hidden=[channels, channels])
        self.convs = []
        for _ in range(1, n_layers):
            self.convs.append(
                GINConv(channels, epsilon=0, mlp_hidden=[channels, channels])
            )
        self.pool = GlobalAvgPool()
        self.dense1 = Dense(channels, activation="relu")

    def call(self, inputs):
        x, a, i = inputs
        x = self.conv1([x, a])
        for conv in self.convs:
            x = conv([x, a])
        x = self.pool([x, i])
        return self.dense1(x)


# Build model
model = GIN0(channels, layers)
model_op = Sequential()
model_op.add(Dropout(0.5, input_shape=(channels,)))
model_op.add(Dense(n_out, activation="softmax"))
opt = Adam(lr=learning_rate)
loss_fn = CategoricalCrossentropy()


################################################################################
# FIT MODEL
################################################################################
@tf.function(input_signature=loader_tr.tf_signature(), experimental_relax_shapes=True)
def train_step(inputs, target):
    with tf.GradientTape(persistent=True) as tape:
        node2vec = model(inputs, training=True)
        predictions = model_op(node2vec, training=True)
        loss = loss_fn(target, predictions)
        loss += sum(model.losses)
    gradients = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(gradients, model.trainable_variables))
    gradients2 = tape.gradient(loss, model_op.trainable_variables)
    opt.apply_gradients(zip(gradients2, model_op.trainable_variables))
    acc = tf.reduce_mean(categorical_accuracy(target, predictions))
    return loss, acc


print("Fitting model")
current_batch = 0
model_lss = model_acc = 0
for batch in loader_tr:
    lss, acc = train_step(*batch)

    model_lss += lss.numpy()
    model_acc += acc.numpy()
    current_batch += 1
    if current_batch == loader_tr.steps_per_epoch:
        model_lss /= loader_tr.steps_per_epoch
        model_acc /= loader_tr.steps_per_epoch
        print("Loss: {}. Acc: {}".format(model_lss, model_acc))
        model_lss = model_acc = 0
        current_batch = 0

################################################################################
# EVALUATE MODEL
################################################################################
# Convert the softmax outputs (two classes for PROTEINS) into a list of tuples
def tolist(predictions):
    result = []
    for item in predictions:
        result.append((float(item[0]), float(item[1])))
    return result

loss_data = []
print("Testing model")
model_lss = model_acc = 0
for batch in loader_te:
    inputs, target = batch
    node2vec = model(inputs, training=False)
    predictions = model_op(node2vec, training=False)
    predictions_list = tolist(predictions)
    loss_data.append(zip(target, predictions_list))
    model_lss += loss_fn(target, predictions)
    model_acc += tf.reduce_mean(categorical_accuracy(target, predictions))
model_lss /= loader_te.steps_per_epoch
model_acc /= loader_te.steps_per_epoch
print("Done. Test loss: {}. Test acc: {}".format(model_lss, model_acc))
for batchi in loss_data:
    for item in batchi:
        print(list(item),'\n')

1 Answer:

Answer 0: (score: 0)

Your approach to generating the graph embedding is correct; the GIN0 model will return a vector for a given graph.
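To illustrate, here is a minimal sketch (not part of the original post, and assuming model has been trained as above) of how those per-graph vectors could be collected for a downstream application:

# Reuse a DisjointLoader to batch graphs and collect one embedding per graph
embed_loader = DisjointLoader(dataset_te, batch_size=batch_size, epochs=1)
embeddings = []
for inputs, _ in embed_loader:
    emb = model(inputs, training=False)  # shape: (graphs_in_batch, channels)
    embeddings.append(emb.numpy())
embeddings = np.concatenate(embeddings, axis=0)  # one row per input graph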

However, this part of the code looks odd:

gradients = tape.gradient(loss, model.trainable_variables)
opt.apply_gradients(zip(gradients, model.trainable_variables))
gradients2 = tape.gradient(loss, model_op.trainable_variables)
opt.apply_gradients(zip(gradients2, model_op.trainable_variables))

What you are doing here is updating the weights of model twice, and the weights of model_op once.

When you compute the loss inside the context of tf.GradientTape, every computation that contributes to the final value is tracked. This means that if you call loss = foo(bar(x)) and then compute a training step with that loss, the weights of both foo and bar will be updated.
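In other words, a single gradient call over the concatenated variable lists is enough. As a rough sketch (using the same names as in the question, with a non-persistent tape), the training step could look like this:

@tf.function(input_signature=loader_tr.tf_signature(), experimental_relax_shapes=True)
def train_step(inputs, target):
    with tf.GradientTape() as tape:  # non-persistent: only one gradient call is needed
        node2vec = model(inputs, training=True)
        predictions = model_op(node2vec, training=True)
        loss = loss_fn(target, predictions)
        loss += sum(model.losses)
    # Compute and apply gradients for both parts of the network in one pass
    variables = model.trainable_variables + model_op.trainable_variables
    gradients = tape.gradient(loss, variables)
    opt.apply_gradients(zip(gradients, variables))
    acc = tf.reduce_mean(categorical_accuracy(target, predictions))
    return loss, acc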

Other than that, I don't see anything wrong with the code, so it mostly comes down to the local dataset that you are using.

Cheers
