JITuNE: Just-In-Time Hyperparameter Tuning for Network Embedding Algorithms

by Mengying Guo, et al.

Network embedding (NE) can generate succinct node representations for massive-scale networks, enabling the direct application of common machine learning methods to the network structure. Various NE algorithms have been proposed and used in applications such as node classification and link prediction. NE algorithms typically contain hyperparameters that are key to performance, but the tuning process can be time-consuming, so it is desirable to have the hyperparameters tuned within a specified length of time. Although AutoML methods have been applied to the hyperparameter tuning of NE algorithms, the problem of tuning hyperparameters within a given time budget had not previously been studied for NE algorithms. In this paper, we propose JITuNE, a just-in-time hyperparameter tuning framework for NE algorithms. JITuNE enables time-constrained hyperparameter tuning by tuning over hierarchical network synopses and transferring the knowledge obtained on the synopses to the whole network. The hierarchical generation of synopses and a time-constrained tuning method together bound the overall tuning time. Extensive experiments demonstrate that JITuNE significantly improves the performance of NE algorithms, outperforming state-of-the-art methods within the same number of algorithm runs.
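The abstract's core idea can be sketched in a few lines: build a hierarchy of progressively coarser network synopses, spend the time budget searching hyperparameters coarsest-first (where each trial is cheap), and carry the incumbent configuration up toward the full network. The sketch below is a minimal illustration under stated assumptions, not the paper's actual algorithm: the synopsis step is a simple random edge contraction, the search is plain random search, and `proxy_score` is a hypothetical stand-in for embedding quality on a synopsis.

```python
import random
import time

def coarsen(graph):
    """One synopsis level via random edge contraction.

    `graph` is an adjacency dict {node: set(neighbors)}. This simple
    contraction stands in for the paper's hierarchical synopsis method,
    whose details are not given in the abstract.
    """
    merged = {}                      # node -> representative after contraction
    nodes = list(graph)
    random.shuffle(nodes)
    for u in nodes:
        if u in merged:
            continue
        merged[u] = u
        for v in graph[u]:
            if v not in merged:      # contract the edge (u, v)
                merged[v] = u
                break
    coarse = {}
    for u, nbrs in graph.items():
        ru = merged[u]
        coarse.setdefault(ru, set())
        for v in nbrs:
            rv = merged[v]
            if rv != ru:
                coarse[ru].add(rv)
    return coarse

def tune_just_in_time(graph, search_space, score, budget_s, levels=3):
    """Random-search hyperparameters over synopses within a time budget.

    Searches coarsest-first; the best configuration found so far (the
    "knowledge" transferred between levels) survives into finer levels.
    """
    synopses = [graph]
    for _ in range(levels):
        synopses.append(coarsen(synopses[-1]))
    deadline = time.monotonic() + budget_s
    # Guarantee at least one evaluated configuration on the coarsest synopsis.
    best = {k: random.choice(v) for k, v in search_space.items()}
    best_score = score(synopses[-1], best)
    for i, g in enumerate(reversed(synopses)):
        # Split the remaining budget evenly over the remaining levels.
        slice_end = min(deadline, time.monotonic()
                        + (deadline - time.monotonic()) / (len(synopses) - i))
        while time.monotonic() < slice_end:
            cand = {k: random.choice(v) for k, v in search_space.items()}
            s = score(g, cand)
            if s > best_score:
                best, best_score = cand, s
    return best

# Usage on a toy ring network with a hypothetical quality proxy.
ring = {i: {(i - 1) % 20, (i + 1) % 20} for i in range(20)}
space = {"dim": [32, 64, 128], "walk_len": [10, 40, 80]}

def proxy_score(g, hp):
    # Hypothetical stand-in for embedding quality; peaks at dim=64, walk_len=40.
    return -abs(hp["dim"] - 64) - abs(hp["walk_len"] - 40) / 10

best = tune_just_in_time(ring, space, proxy_score, budget_s=0.1)
```

In the real framework the score function would be a full NE training-and-evaluation run, which is exactly why searching on small synopses first makes a hard wall-clock deadline feasible.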


Importance of Tuning Hyperparameters of Machine Learning Algorithms

The performance of many machine learning algorithms depends on their hyp...

Tuning structure learning algorithms with out-of-sample and resampling strategies

One of the challenges practitioners face when applying structure learnin...

High Per Parameter: A Large-Scale Study of Hyperparameter Tuning for Machine Learning Algorithms

Hyperparameters in machine learning (ML) have received a fair amount of ...

Hyperparameter Selection for Subsampling Bootstraps

Massive data analysis becomes increasingly prevalent, subsampling method...

Value Function Based Difference-of-Convex Algorithm for Bilevel Hyperparameter Selection Problems

Gradient-based optimization methods for hyperparameter tuning guarantee ...

Assessing the Effects of Hyperparameters on Knowledge Graph Embedding Quality

Embedding knowledge graphs into low-dimensional spaces is a popular meth...

Hybrid Algorithm Selection and Hyperparameter Tuning on Distributed Machine Learning Resources: A Hierarchical Agent-based Approach

Algorithm selection and hyperparameter tuning are critical steps in both...