A tool framework for tweaking features in synthetic datasets

01/11/2018
by J. W. Zhang, et al.

Researchers and developers use benchmarks to compare their algorithms and products. A database benchmark must include a dataset D, and for the benchmark to be application-specific, D should be empirical. However, D may be too small, or too large, for the benchmarking experiments, so it must be scaled to the desired size. To ensure the scaled dataset D' remains similar to D, previous work typically specifies or extracts a fixed set of features F = F_1, F_2, . . . , F_n from D, then uses F to generate synthetic data for D'. However, this approach (D -> F -> D') becomes increasingly intractable as F grows larger, so a new solution is necessary. Unlike existing approaches, this paper proposes ASPECT, which scales D and then enforces similarity. ASPECT first uses a size-scaler (S0) to scale D to D'. The user then selects a set of desired features F'_1, . . . , F'_n. For each desired feature F'_k, a tweaking tool T_k modifies D' to ensure that D' has the required feature F'_k. ASPECT coordinates the application of T_1, . . . , T_n to D', so that T_n(. . .(T_1(D')). . .) has the required features F'_1, . . . , F'_n. By shifting from D -> F -> D' to D -> D' -> F', data scaling becomes flexible: users can customise the scaled dataset with the features they are interested in. Extensive experiments on real datasets show that ASPECT enforces similarity on the scaled dataset effectively and efficiently.
