Eventdisplay is a software package for the analysis and reconstruction of data and Monte Carlo events from ground-based gamma-ray observatories such as VERITAS and CTA. It was originally developed as a display tool for data from the VERITAS prototype telescope, but has evolved into a full analysis package with routines for calibration, FADC trace integration, image and stereo parameter analysis, response function calculation, and high-level analysis steps. Eventdisplay combines an image-parameter analysis with gamma-hadron separation methods based on multivariate algorithms. An overview of the reconstruction methods and some selected results are presented in this contribution.
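As an illustration of the image-parameter step, the sketch below computes standard second-moment (Hillas-style) parameters from a cleaned camera image. It is a minimal NumPy example under simple assumptions (pixel coordinates and calibrated signals supplied as flat arrays); the function and variable names are illustrative and do not reflect the actual Eventdisplay C++/ROOT implementation.

```python
import numpy as np

def hillas_parameters(x, y, signal):
    """Second-moment (Hillas-style) image parameters for a cleaned camera image.

    x, y   : pixel coordinates in the camera plane (e.g. degrees)
    signal : integrated, calibrated pixel charges after image cleaning
    """
    x, y, signal = map(np.asarray, (x, y, signal))
    size = signal.sum()                      # total image amplitude ("size")

    # First moments: centre of gravity of the image
    mean_x = np.sum(signal * x) / size
    mean_y = np.sum(signal * y) / size

    # Second central moments
    sxx = np.sum(signal * (x - mean_x) ** 2) / size
    syy = np.sum(signal * (y - mean_y) ** 2) / size
    sxy = np.sum(signal * (x - mean_x) * (y - mean_y)) / size

    # Eigenvalues of the 2x2 moment matrix give the ellipse axes
    d = np.sqrt((sxx - syy) ** 2 + 4.0 * sxy ** 2)
    length = np.sqrt((sxx + syy + d) / 2.0)
    width = np.sqrt((sxx + syy - d) / 2.0)

    # Orientation of the major axis and distance of the centroid from the camera centre
    phi = 0.5 * np.arctan2(2.0 * sxy, sxx - syy)
    dist = np.hypot(mean_x, mean_y)

    return {"size": size, "cen_x": mean_x, "cen_y": mean_y,
            "length": length, "width": width, "phi": phi, "dist": dist}
```

Parameters such as length, width, and the major-axis orientation computed per telescope image are the inputs to the subsequent stereo reconstruction and multivariate gamma-hadron separation.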
Muons from extensive air showers appear as rings in images taken with imaging atmospheric Cherenkov telescopes such as VERITAS. These muon-ring images are used for the calibration of the VERITAS telescopes; however, the calibration accuracy can be improved with a more efficient muon-identification algorithm. Convolutional neural networks (CNNs) are used in many state-of-the-art image-recognition systems and are well suited to muon-image identification, once trained on a suitable dataset with labels for muon images. However, a CNN trained on a dataset labelled by existing algorithms would be limited by the suboptimal muon-identification efficiency of those algorithms. Muon Hunters 2 is a citizen science project that asks users to label grids of VERITAS telescope images, stating which images contain muon rings. Each image is labelled 10 times by independent volunteers, and the votes are aggregated and used to assign a 'muon' or 'non-muon' label to the corresponding image. An analysis was performed using an expert-labelled dataset to determine the optimal vote-percentage cut-offs for assigning labels to each image for CNN training; the cut-offs were optimised to identify as many muon images as possible while avoiding false positives. The CNN trained on these labels greatly improves on existing muon-identification algorithms, identifying approximately 30 times the number of muon images found by the current algorithm implemented in VEGAS (the VERITAS Gamma-ray Analysis Suite) and roughly 2.5 times the number found by the Hough transform method, and significantly outperforming a CNN trained on VEGAS-labelled data.
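To make the vote-aggregation step concrete, the following is a minimal sketch of how a vote-percentage cut-off could be scanned against an expert-labelled sample, retaining as many true muon images as possible while keeping the false-positive rate low. The function name, threshold grid, and false-positive limit are illustrative assumptions and are not the values or code used in the Muon Hunters 2 analysis.

```python
import numpy as np

def scan_vote_cutoffs(vote_fraction, expert_is_muon, max_false_positive_rate=0.01):
    """Scan vote-percentage cut-offs for assigning 'muon' labels.

    vote_fraction  : fraction of the 10 volunteers who marked each image as a muon ring
    expert_is_muon : boolean expert labels for the same images
    Returns the cut-off that keeps the false-positive rate below the chosen
    limit while labelling as many true muon images as possible.
    """
    vote_fraction = np.asarray(vote_fraction, dtype=float)
    expert_is_muon = np.asarray(expert_is_muon, dtype=bool)

    best = None
    for cutoff in np.arange(0.0, 1.01, 0.1):         # 10 votes per image -> steps of 0.1
        predicted = vote_fraction >= cutoff
        tp = np.sum(predicted & expert_is_muon)       # true muons kept at this cut-off
        fp = np.sum(predicted & ~expert_is_muon)      # non-muons mislabelled as muons
        n_non_muon = np.sum(~expert_is_muon)
        fpr = fp / n_non_muon if n_non_muon else 0.0
        if fpr <= max_false_positive_rate and (best is None or tp > best[1]):
            best = (cutoff, tp, fpr)
    return best  # (cut-off, true positives retained, false-positive rate)
```

The cut-off returned by such a scan on the expert-labelled subsample can then be applied to the full set of volunteer votes to produce the 'muon'/'non-muon' training labels for the CNN.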