DANets: Deep Abstract Networks for Tabular Data Classification and Regression

by Jintai Chen, et al.

Tabular data are ubiquitous in real-world applications. Although the machine learning community has developed many widely used neural components (e.g., convolution) and extensible neural networks (e.g., ResNet), few of them are effective for tabular data, and few designs are adequately tailored to tabular data structures. In this paper, we propose a novel and flexible neural component for tabular data, called the Abstract Layer (AbstLay), which learns to explicitly group correlated input features and generate higher-level features for semantic abstraction. We also design a structure re-parameterization method to compress AbstLay, reducing the computational complexity by a clear margin in the inference phase. A special basic block is built using AbstLays, and we construct a family of Deep Abstract Networks (DANets) for tabular data classification and regression by stacking such blocks. In DANets, a special shortcut path is introduced to fetch information from the raw tabular features, assisting feature interactions across different levels. Comprehensive experiments on seven real-world tabular datasets show that our AbstLay and DANets are effective for tabular data classification and regression, with lower computational complexity than competitive methods. Besides, we evaluate the performance gains of DANet as it goes deeper, verifying the extensibility of our method. Our code is available at https://github.com/WhatAShot/DANet.
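The structure re-parameterization mentioned in the abstract rests on a general fact: a composition of purely linear maps can be folded into a single matrix ahead of inference, so the fused model computes the same outputs at lower per-pass cost. The sketch below illustrates only this general folding idea with NumPy; the matrix names and shapes are illustrative assumptions, and the paper's actual compression of AbstLay (which also involves feature-selection masks and nonlinearities between blocks) is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training-time form: two stacked linear maps (a stand-in for
# successive projections; hypothetical shapes for illustration).
W1 = rng.normal(size=(16, 8))   # first projection: 8 -> 16
W2 = rng.normal(size=(4, 16))   # second projection: 16 -> 4

x = rng.normal(size=(8,))
y_train = W2 @ (W1 @ x)         # two matrix multiplies per forward pass

# Inference-time form: fold the two maps into one matrix once,
# so each forward pass costs a single multiply.
W_fused = W2 @ W1               # (4, 8)
y_infer = W_fused @ x

print(np.allclose(y_train, y_infer))  # True
```

Note that this folding is exact only when no nonlinearity sits between the fused maps; re-parameterization schemes in the literature are designed so that the training-time structure reduces to such a linear composition at inference.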




