Graph Gaussian Process
2017 NIPS CGP
[Convolutional Gaussian Processes] :red_flag: Main Contribution: 1. the construction of an inter-domain inducing-point approximation that is well-tailored to the convolutional kernel; 2. Additive GP: the same patch-response function is applied to every patch of the input, which lets adjacent pixels interact but imposes an additive structure otherwise.
:star: Idea: 1. Several variations of the convolutional kernel ["We do so by constructing the kernel in tandem with a suitable domain to place the inducing variables in."]: the kernel is built from a patch-response function g, with the overall function f defined as the sum of all patch responses (see the sketch below this entry); 2. the marginal likelihood can be used to find an optimal weighting between convolutional and RBF kernels.
:pencil2: Exp. Datasets: MNIST, CIFAR-10.
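A minimal NumPy sketch of the patch-sum construction, assuming an RBF patch-response kernel k_g and MNIST-style 28×28 images with 5×5 patches (the helper names `extract_patches` / `conv_kernel` are illustrative, not from the paper's code):

```python
import numpy as np

def extract_patches(x, img_size=28, patch_size=5):
    """All overlapping patch vectors of a flattened square image."""
    img = x.reshape(img_size, img_size)
    P = img_size - patch_size + 1
    return np.stack([img[i:i + patch_size, j:j + patch_size].ravel()
                     for i in range(P) for j in range(P)])  # (num_patches, patch_size**2)

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """RBF kernel matrix between the row vectors of a and b."""
    sq = (a**2).sum(1)[:, None] + (b**2).sum(1)[None, :] - 2.0 * a @ b.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def conv_kernel(x1, x2, **kw):
    """k_f(x, x') = sum_p sum_q k_g(x^[p], x'^[q]): since f is the sum of
    the patch responses g(x^[p]), its kernel is the double sum of the
    patch kernel over all patch pairs."""
    return rbf(extract_patches(x1), extract_patches(x2), **kw).sum()
```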
Conv. kernels
- conv kernel: approximate the true posterior using a small set of inducing points [inter-domain inducing points]
- weighted conv kernel: weights [kernel hyperparameters] adjust the importance of the response from each patch.
- conv/w_conv + rbf kernel: f(x)=f_conv(x)+f_rbf(x) --> use the marginal likelihood to automatically weigh how much of the process each component should explain --> represent the inducing inputs and outputs separately [inducing inputs for the RBF lie in the space of images, while inducing patches for the convolutional kernel lie in patch space] --> the variational lower bound must contain contributions from both component Gaussian processes = [either a full CM×CM covariance matrix over all inducing variables, or a mean-field approximation requiring only C M×M matrices], here C=2.
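Continuing the sketch above (reusing `extract_patches` and `rbf`), a hedged illustration of the weighted variant and of the conv + RBF sum; the two variance hyperparameters stand in for the weighting that the marginal likelihood finds automatically:

```python
def weighted_conv_kernel(x1, x2, weights, **kw):
    """Weighted variant: k(x, x') = sum_p sum_q w_p w_q k_g(x^[p], x'^[q]).
    The per-patch weights are kernel hyperparameters, optimised together
    with the lengthscale/variance by maximising the marginal likelihood."""
    Kg = rbf(extract_patches(x1), extract_patches(x2), **kw)
    return weights @ Kg @ weights

def conv_plus_rbf_kernel(x1, x2, weights, var_conv=1.0, var_rbf=1.0):
    """Sum kernel for f = f_conv + f_rbf; the two variance parameters let
    the marginal likelihood decide how much each component explains."""
    k_conv = weighted_conv_kernel(x1, x2, weights)
    k_rbf = rbf(x1[None, :], x2[None, :])[0, 0]  # RBF acting on whole images
    return var_conv * k_conv + var_rbf * k_rbf
```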
2018 AAAI DepthLGP
[DepthLGP: Learning Embeddings of Out-of-Sample Nodes in Dynamic Networks] :red_flag: Motivation: in a dynamic network, applying a high-order Laplacian Gaussian process (hLGP) with a purpose-designed kernel to out-of-sample nodes, followed by a DNN, produces embeddings whose features are consistent with those of the original graph's nodes (i.e., they lie in the same feature space).
:star: Idea: 1. hLGP: latent function h, where each dimension of h(·) is independent and has its own GP prior (with a 2-hop Laplacian kernel); 2. embedding function f(v)=g(h(v)), with g a DNN; 3. Predict: infer h at the out-of-sample nodes by maximizing its posterior under the hLGP prior (the GP conditional; see the sketch after this entry); 4. Train: empirical risk minimization: sample subgraphs from G, then treat a small fraction of each subgraph's nodes as out-of-sample.
:pencil2: Exp. Datasets: DBLP, PPI, BlogCatalog.
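A minimal NumPy sketch of the two GP steps, assuming a simple 2-hop kernel built from the normalized adjacency (the paper's actual hLGP kernel is parameterised differently; the helper names are illustrative):

```python
import numpy as np

def two_hop_laplacian_kernel(A, sigma2=1.0, jitter=1e-6):
    """Hedged stand-in for the paper's high-order Laplacian kernel: mix
    0-, 1-, and 2-hop terms of the normalized adjacency. DepthLGP's exact
    parameterisation differs; this only illustrates the 'graph kernel
    over nodes' role it plays."""
    d = A.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ A @ D_inv_sqrt              # normalized adjacency
    K = sigma2 * (np.eye(len(A)) + S + S @ S)    # up to 2 hops
    return K + jitter * np.eye(len(A))

def predict_latent(K, obs_idx, new_idx, H_obs):
    """GP conditional mean of the latent h at out-of-sample nodes, with
    one independent GP per latent dimension (columns of H_obs)."""
    K_oo = K[np.ix_(obs_idx, obs_idx)]
    K_no = K[np.ix_(new_idx, obs_idx)]
    return K_no @ np.linalg.solve(K_oo, H_obs)   # (num_new, dim) means
```

The predicted latent values would then be passed through the DNN g to produce the final embeddings f(v)=g(h(v)).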
Inter-domain variational GPs: place inducing points in the input space of patches, rather than images.
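In code terms (reusing the helpers from the first sketch), the inter-domain trick only changes the cross-covariance: the inducing input z is a patch rather than an image. A hedged sketch:

```python
def k_fu(x, z, **kw):
    """Cross-covariance between f(x), where x is an image, and g(z),
    where z is an inducing *patch*: k_fu(x, z) = sum_p k_g(x^[p], z).
    Because z lives in patch space instead of image space, the inducing
    variables are inter-domain."""
    return rbf(extract_patches(x), z[None, :], **kw).sum()
```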
Additive GPs: additive models construct a prior GP as a sum of functions over subsets of the input dimensions, resulting in a kernel with the same additive structure.
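A toy version of that additive structure, assuming the simplest case of one base kernel per input dimension (the base kernels are supplied by the caller):

```python
def additive_kernel(x1, x2, base_kernels):
    """Additive prior f(x) = sum_d f_d(x_d) implies
    k(x, x') = sum_d k_d(x_d, x'_d) over the chosen input subsets
    (here: one dimension per subset)."""
    return sum(k(x1[d], x2[d]) for d, k in enumerate(base_kernels))
```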
2020 AAAI UAGGP
:red_flag: Motivation is twofold: (1) quantifying the uncertainty of the graph model, and (2) label-smoothing regularization. The regularizer is a Mahalanobis distance, but it is not made clear how the graph model's uncertainty actually enters it (one hedged guess is sketched below).
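One plausible reading, sketched under loud assumptions (the exact UAGGP formulation is not given in these notes): penalise a Mahalanobis distance between the GP predictive means of linked nodes, using the predictive variances as the metric, so that high-uncertainty pairs contribute less to the smoothing loss:

```python
import numpy as np

def mahalanobis_smoothing(mu, var, edges):
    """Hypothetical label-smoothing regularizer: for each edge (i, j),
    a Mahalanobis distance between predictive means mu[i] and mu[j],
    with the diagonal predictive variances as the metric. This is only
    one way the GP uncertainty *could* enter; the paper's form may differ."""
    loss = 0.0
    for i, j in edges:
        diff = mu[i] - mu[j]
        loss += diff @ (diff / (var[i] + var[j]))
    return loss
```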