Comparing (Empirical-Gramian-Based) Model Order Reduction Algorithms

02/27/2020
by Christian Himpe, et al.

In this work, the empirical-Gramian-based model reduction methods empirical poor man's truncated balanced realization, empirical approximate balancing, empirical dominant subspaces, empirical balanced truncation, and empirical balanced gains are compared, each in one non-parametric and two parametric variants, via ten error measures: the approximate Lebesgue L_0, L_1, L_2, and L_∞ norms, the Hardy H_2 and H_∞ norms, the Hankel and Hilbert-Schmidt-Hankel norms, and the modified induced primal and dual norms, for variants of the thermal block model reduction benchmark. The comparison is condensed by a new meta-measure for model reducibility called the MORscore.
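The precise definitions of the error norms and of the MORscore are given in the paper; as a rough illustration only, the Python sketch below shows how sampled output-error trajectories could be collapsed into approximate Lebesgue norms, and how an error-versus-reduced-order curve could be condensed into a single MORscore-like scalar. The helper names, the trapezoidal quadrature, and the exact MORscore normalization used here are assumptions for illustration, not the paper's definitions.

    import numpy as np

    def _trapezoid(y, x):
        # Composite trapezoidal rule, kept local so the sketch does not
        # depend on a particular NumPy integration helper.
        y, x = np.asarray(y, float), np.asarray(x, float)
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    def lebesgue_error_norms(t, y_full, y_red):
        # Approximate Lebesgue norms of the output error e(t) = y(t) - y_r(t),
        # given sample times t (nt,) and output trajectories (nt, ny) of the
        # full-order and reduced-order models.
        e = np.linalg.norm(y_full - y_red, axis=1)   # pointwise error magnitude
        return {
            "L1":   _trapezoid(e, t),                # integral of |e(t)|
            "L2":   np.sqrt(_trapezoid(e**2, t)),    # sqrt of integral of e(t)^2
            "Linf": float(np.max(e)),                # peak error over the horizon
        }

    def morscore_like(orders, rel_errors, eps=np.finfo(float).eps):
        # Assumed construction of a MORscore-like scalar: normalize the
        # reduced order by its maximum, map the relative error to [0, 1] via
        # log10(error) / log10(eps), and return the area under this normalized
        # decay curve, so faster error decay yields a larger score.
        r = np.asarray(orders, float) / float(np.max(orders))
        decay = np.log10(np.maximum(rel_errors, eps)) / np.log10(eps)
        return _trapezoid(np.clip(decay, 0.0, 1.0), r)

Under these assumptions, calling morscore_like(range(1, 21), errors) with errors holding, say, the relative L_2 output errors for reduced orders 1 through 20 yields a scalar in [0, 1] that could be tabulated per method and per error norm, which is the kind of summary the MORscore provides.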
