Towards General-Purpose Representation Learning of Polygonal Geometries

09/29/2022
by Gengchen Mai, et al.

Neural network representation learning for spatial data is a common need for geographic artificial intelligence (GeoAI) problems. In recent years, many advances have been made in representation learning for points, polylines, and networks, whereas little progress has been made for polygons, especially complex polygonal geometries. In this work, we focus on developing a general-purpose polygon encoding model that can encode a polygonal geometry (with or without holes, single or multipolygon) into an embedding space. The resulting embeddings can be used directly, or fine-tuned, for downstream tasks such as shape classification and spatial relation prediction. To guarantee model generalizability, we identify several desirable properties: loop origin invariance, trivial vertex invariance, part permutation invariance, and topology awareness. We explore two encoder designs: one derives all representations in the spatial domain, while the other leverages spectral-domain representations. For the spatial domain approach, we propose ResNet1D, a 1D CNN-based polygon encoder that uses circular padding to achieve loop origin invariance on simple polygons. For the spectral domain approach, we develop NUFTspec, which builds on the Non-Uniform Fourier Transformation (NUFT) and naturally satisfies all of the desired properties. We conduct experiments on two tasks: 1) shape classification based on MNIST; 2) spatial relation prediction based on two new datasets, DBSR-46K and DBSR-cplx46K. Our results show that NUFTspec and ResNet1D outperform multiple existing baselines by significant margins. While ResNet1D suffers performance degradation after shape-invariant geometry modifications, NUFTspec is very robust to these modifications due to the nature of the NUFT.
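As a rough illustration of how circular padding can give a 1D CNN polygon encoder loop origin invariance, the sketch below shows a minimal PyTorch example. It is not the paper's ResNet1D: the `PolygonEncoder` class, its layer sizes, and the toy square polygon are assumptions made here for illustration. The only point is that a circularly padded convolution is equivariant to cyclic shifts of the vertex list, so a pooling operator that commutes with such shifts (global average pooling) yields an embedding that does not depend on which vertex the exterior ring starts from.

```python
# Minimal sketch (not the paper's ResNet1D): a 1D CNN polygon encoder that
# combines circular padding with global average pooling so that changing the
# loop origin of a simple polygon does not change the embedding.
import torch
import torch.nn as nn

class PolygonEncoder(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, embed_dim: int = 32):
        super().__init__()
        self.conv = nn.Sequential(
            # padding_mode="circular" wraps the convolution around the vertex
            # loop, which is what makes the feature map shift-equivariant.
            nn.Conv1d(2, 16, kernel_size=3, padding=1, padding_mode="circular"),
            nn.ReLU(),
            nn.Conv1d(16, embed_dim, kernel_size=3, padding=1, padding_mode="circular"),
            nn.ReLU(),
        )

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        # xy: (batch, num_vertices, 2) coordinates of the exterior ring.
        h = self.conv(xy.transpose(1, 2))   # (batch, embed_dim, num_vertices)
        return h.mean(dim=-1)               # global average pool -> (batch, embed_dim)

# Toy check: rolling the starting vertex should leave the embedding unchanged.
poly = torch.tensor([[[0., 0.], [1., 0.], [1., 1.], [0., 1.]]])
enc = PolygonEncoder().eval()
with torch.no_grad():
    e1 = enc(poly)
    e2 = enc(torch.roll(poly, shifts=1, dims=1))  # same polygon, different origin
print(torch.allclose(e1, e2, atol=1e-6))  # expected: True
```

The actual ResNet1D encoder in the paper uses residual 1D convolution blocks; the sketch only isolates the interaction between circular padding and a shift-invariant pooling step that underlies loop origin invariance.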

