Machine Learning Model of the Swift/BAT Trigger Algorithm for Long GRB Population Studies

09/03/2015
by Philip B. Graff, et al.

To draw inferences about gamma-ray burst (GRB) source populations from Swift observations, it is essential to understand the detection efficiency of the Swift Burst Alert Telescope (BAT). This study models the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train several classifiers: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models achieve accuracies of ≳97% (≲3% error), a significant improvement over a simple cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which in turn enables Bayesian parameter estimation of the GRB rate distribution. We find a local GRB rate density of n_0 ∼ 0.48^{+0.41}_{-0.23} Gpc^{-3} yr^{-1}, with power-law indices of n_1 ∼ 1.7^{+0.6}_{-0.5} and n_2 ∼ -5.9^{+5.7}_{-0.1} for GRBs below and above a break redshift of z_1 ∼ 6.8^{+2.8}_{-3.2}. This methodology improves upon earlier studies by modeling Swift detection more accurately and using the result for fully Bayesian model fitting. The code used in this analysis is publicly available online (https://github.com/PBGraff/SwiftGRB_PEanalysis).
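The core technical step is framing BAT detection as a binary classification problem: each simulated burst from the Lien et al. (2014) sample, labeled by the outcome of the full trigger pipeline, becomes a training example. The sketch below illustrates this setup with scikit-learn; the feature matrix, toy labels, and flux-cut threshold are placeholders for the real simulated quantities, and a single random forest stands in for the paper's full suite of classifiers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix for simulated GRBs: in the real analysis the
# columns would be burst properties (e.g. peak photon flux, redshift,
# duration, spectral parameters) from the Lien et al. (2014) simulations,
# and y = 1 if the full BAT trigger pipeline detected the burst, else 0.
rng = np.random.default_rng(0)
n_bursts = 10_000
X = rng.normal(size=(n_bursts, 4))      # stand-in for real burst features
flux = X[:, 0]                          # treat column 0 as a log peak flux
y = (flux + 0.3 * rng.normal(size=n_bursts) > 0).astype(int)  # toy labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Baseline: a single threshold on flux, analogous to the flux cut that
# reaches 89.6% accuracy in the paper.
cut = np.median(X_train[y_train == 1, 0])
baseline_acc = accuracy_score(y_test, (X_test[:, 0] > cut).astype(int))

# Machine-learning model: once trained, it replaces the computationally
# expensive trigger simulation with a fast detected/missed prediction.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
rf_acc = accuracy_score(y_test, clf.predict(X_test))

print(f"flux-cut accuracy:      {baseline_acc:.3f}")
print(f"random-forest accuracy: {rf_acc:.3f}")
```

On the real simulations, the paper reports that classifiers of this kind reach ≳97% accuracy, versus 89.6% for the flux cut alone.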
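The fitted parameters quoted above describe the comoving GRB rate density as a broken power law in (1 + z). The exact parameterization is not given in the abstract; the form below, continuous at the break z_1, is an assumption following the convention of Lien et al. (2014), with the quoted posterior medians as defaults:

n(z) = n_0 (1 + z)^{n_1} for z ≤ z_1,  and  n(z) = n_0 (1 + z_1)^{n_1 - n_2} (1 + z)^{n_2} for z > z_1.

```python
import numpy as np

def grb_rate_density(z, n0=0.48, n1=1.7, n2=-5.9, z1=6.8):
    """Comoving long-GRB rate density in Gpc^-3 yr^-1.

    Assumes a broken power law in (1 + z), continuous at the break z1.
    Defaults are the posterior medians quoted in the abstract; the
    functional form itself is an assumed convention, not stated there.
    """
    z = np.asarray(z, dtype=float)
    low = n0 * (1.0 + z) ** n1                            # z <= z1 branch
    high = n0 * (1.0 + z1) ** (n1 - n2) * (1.0 + z) ** n2  # z > z1 branch
    return np.where(z <= z1, low, high)

# Example: evaluate the fitted rate density at a few redshifts.
for z in (0.0, 2.0, 6.8, 9.0):
    print(f"z = {z:4.1f}: n(z) = {float(grb_rate_density(z)):10.3g} Gpc^-3 yr^-1")
```

Under this parameterization, the strongly negative n_2 implies a steep decline in the GRB rate beyond z_1 ∼ 6.8, though its large upper error bar leaves the high-z behavior poorly constrained.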
