GraphMAE: Self-Supervised Masked Graph Autoencoders

Abstract

  • Graph autoencoders (GAEs) for self-supervised learning (SSL)
  • Addresses problems of existing GAEs with the reconstruction objective, training robustness, and error metric
  • Achieves SOTA results with a generative approach rather than a contrastive one
  • Focuses on reconstructing node features rather than graph structure (see the sketch below)
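A minimal sketch of the masked feature reconstruction idea described above: node features (not edges) are masked with a learnable [MASK] token and reconstructed, and the loss is a scaled cosine error rather than MSE. The linear encoder/decoder, `mask_rate`, and `gamma` below are illustrative placeholders, not the paper's exact implementation (which uses GNN encoders/decoders and re-mask decoding).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedFeatureAutoencoder(nn.Module):
    """Sketch of masked node-feature reconstruction (GraphMAE-style).

    Graph structure is left untouched; only node features are masked and
    reconstructed. Plain linear layers stand in for the GNN encoder/decoder.
    """

    def __init__(self, in_dim, hid_dim, mask_rate=0.5, gamma=2.0):
        super().__init__()
        self.mask_rate = mask_rate  # fraction of nodes whose features are masked
        self.gamma = gamma          # exponent of the scaled cosine error
        self.mask_token = nn.Parameter(torch.zeros(1, in_dim))  # learnable [MASK] feature
        self.encoder = nn.Linear(in_dim, hid_dim)  # placeholder for a GNN encoder
        self.decoder = nn.Linear(hid_dim, in_dim)  # placeholder for a GNN decoder

    def forward(self, x):
        n = x.size(0)
        # 1) sample nodes to mask and replace their features with the [MASK] token
        num_mask = int(self.mask_rate * n)
        mask_idx = torch.randperm(n)[:num_mask]
        x_masked = x.clone()
        x_masked[mask_idx] = self.mask_token

        # 2) encode the corrupted features, then decode to reconstruct the originals
        h = F.relu(self.encoder(x_masked))
        x_rec = self.decoder(h)

        # 3) scaled cosine error, computed on masked nodes only:
        #    loss = mean_i (1 - cos(x_i, x_rec_i))^gamma
        cos = F.cosine_similarity(x[mask_idx], x_rec[mask_idx], dim=-1)
        loss = ((1.0 - cos).clamp(min=1e-8) ** self.gamma).mean()
        return loss


# toy usage: 100 nodes with 32-dim features
model = MaskedFeatureAutoencoder(in_dim=32, hid_dim=64)
loss = model(torch.randn(100, 32))
loss.backward()
```

The scaled cosine error down-weights easy, already well-reconstructed features (via the exponent gamma) and is scale-invariant, which is part of how the paper addresses the error-metric problem of MSE-based GAEs.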
