
2020 Accelerated Artificial Intelligence for Big-Data Experiments Conference

Presentations

Classifying Supernovae in PLAsTiCC Data with snmachine

Catarina Alves
University College London
Graduate Student

Abstract: In 2022, the Rubin Observatory Legacy Survey of Space and Time (LSST) will start repeatedly surveying the sky, observing ~10 million time-domain events per night. Given this data volume, the classification and analysis of events need to be performed automatically using photometric information, as it is impossible to obtain spectra for most events. In preparation for LSST, an open data challenge was hosted in 2018 to classify simulated time-varying astronomical sources, generated under realistic observing conditions, into different classes: the Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC).

In this study we use machine learning to classify the light curves of SNe Ia, SNe Ibc and SNe II in PLAsTiCC data with snmachine, a flexible and modular library for classifying different types of SNe, developed for Lochner et al. (2016). Our approach focuses on modelling the light curves with two-dimensional Gaussian Processes, extracting features from the light curves by performing a wavelet decomposition, and classifying with Gradient Boosting Decision Trees. The wavelet features are model-independent, can characterize a wide range of transients, and do not require significant changes when applied to considerably different classes of events.
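As an illustration of this kind of pipeline, the sketch below runs the same three stages on synthetic data: Gaussian-process regression of an irregularly sampled light curve, wavelet feature extraction, and a gradient-boosted classifier. It is not the snmachine implementation; the study fits two-dimensional Gaussian Processes across time and wavelength, whereas for brevity this example fits a single band, and all data, kernels, and class labels here are invented.

```python
# Minimal sketch of a GP -> wavelet -> GBDT light-curve classifier.
import numpy as np
import pywt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

def wavelet_features(times, fluxes, grid=np.linspace(0, 100, 64)):
    """GP-interpolate one band onto a regular grid, then wavelet-decompose."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0), alpha=0.1)
    gp.fit(times.reshape(-1, 1), fluxes)
    smooth = gp.predict(grid.reshape(-1, 1))
    coeffs = pywt.wavedec(smooth, "sym2", level=2)  # multi-level decomposition
    return np.concatenate(coeffs)

# Fake training set: 40 light curves from two invented transient classes.
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        t = np.sort(rng.uniform(0, 100, 30))
        f = np.exp(-0.5 * ((t - 50) / (10 + 15 * label)) ** 2) \
            + 0.1 * rng.normal(size=30)
        X.append(wavelet_features(t, f))
        y.append(label)

clf = GradientBoostingClassifier().fit(np.array(X), y)
print("training accuracy:", clf.score(np.array(X), y))
```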

Machine Learning in the Multi-Messenger Era

Michael Coughlin
University of Minnesota
Faculty/Staff

Abstract: With the rise of multi-messenger astronomy, it has become essential to exploit all available data streams to maximize their scientific impact. But as observational facilities now regularly stream significant amounts of data in real time, and low-latency identification of phenomena of interest is required to study them, real computational and data science challenges are posed. In this talk, I will introduce the key physics drivers across the spectrum of messengers and identify the areas where machine learning is already playing a significant role and will continue to grow as new observatories are built and come online. I will highlight two key areas, gravitational-wave detection and optical time-domain surveys, where machine learning is already contributing, and close with some of the synergistic technologies currently being built to facilitate its use.

Particle Tracking with Graph Neural Networks Accelerated on FPGAs

Gage DeZoort
Princeton University
Graduate Researcher

Abstract: The tracking of charged particles produced in high-energy particle collisions is a crucial aspect of the science program in hadron collider experiments. One of the primary challenges for the high-luminosity running of CERN's Large Hadron Collider is the ability to perform tracking efficiently, accurately, and rapidly in collision events with large interaction pile-up. This work aims to improve charged-particle tracking in the ATLAS and CMS experiments through the use of accelerators such as Field-Programmable Gate Arrays (FPGAs) and machine learning algorithms such as Graph Neural Networks (GNNs).

In this talk, we will present and compare a variety of graph construction methods, GNN architectures, and post-processing track construction methods in terms of their performance on publicly available simulated data. We will also describe our recent work on implementing these networks on FPGAs, including some of the challenges of running GNN algorithms on resource-limited devices and the opportunities for accelerated inference to advance science.
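A minimal sketch of the underlying idea follows, on invented data and not from the actual ATLAS/CMS pipeline: hits are graph nodes, candidate hit pairs are edges, and a network scores each edge as "same track" or not. Real pipelines use message-passing architectures; this single-step edge network is the simplest instance.

```python
# Toy GNN-style edge classifier for tracking: score hit pairs as track segments.
import torch
import torch.nn as nn

class EdgeClassifier(nn.Module):
    def __init__(self, node_dim=3, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * node_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, nodes, edge_index):
        # nodes: (N, node_dim) hit coordinates; edge_index: (2, E) hit pairs
        src, dst = edge_index
        pairs = torch.cat([nodes[src], nodes[dst]], dim=1)
        return torch.sigmoid(self.mlp(pairs)).squeeze(-1)  # edge scores in (0, 1)

# Toy event: 100 hits in 3D, 300 candidate edges with random truth labels.
nodes = torch.randn(100, 3)
edge_index = torch.randint(0, 100, (2, 300))
labels = torch.randint(0, 2, (300,)).float()

model = EdgeClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.binary_cross_entropy(model(nodes, edge_index), labels)
    opt.zero_grad(); loss.backward(); opt.step()
print("final loss:", float(loss))
```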

Working with Terascale Alert Streams for Less than $1 a Day

Matthew Graham
California Institute of Technology
Faculty/Staff

Abstract: The time domain is the fastest-developing area of modern astronomy, with the past decade seeing an explosion in new instruments and facilities dedicated to repeated observations of large areas of sky. Each night now sees hundreds of thousands of detections of transient and variable phenomena distributed in real time to the community, and these alert streams are only set to increase in scale. Popular opinion holds that significant computing and networking resources are required to handle such large data volumes, particularly when applying machine learning algorithms. However, advances in edge computing have seen the development of commodity hardware accelerators for deep learning inference that promise to be a game changer in this landscape. In this talk, we will describe a lightweight alert processing system, aimed primarily at the Rubin Observatory alert stream, that can provide the expected functionality and performance of a major data center for a fraction of the price. It also fosters community reuse of training data sets and trained models for more general scientific application.

Complete Parameter Inference for Binary Black Hole Coalescences using Deep Learning

Stephen Green
Albert Einstein Institute Potsdam
Postdoctoral Researcher

Abstract: The LIGO and Virgo gravitational-wave observatories have detected many exciting events over the past five years. As the rate of detections grows with detector sensitivity, this poses a growing computational challenge for data analysis. With this in mind, in this work we apply deep learning techniques to perform fast likelihood-free Bayesian inference for gravitational waves. We train a neural-network conditional density estimator to model posterior probability distributions over the full 15-dimensional space of binary black hole system parameters, given detector strain data from multiple detectors. We use the method of normalizing flows—specifically, a neural spline normalizing flow—which allows for rapid sampling and density estimation. Training the network is likelihood-free, requiring samples from the data generative process, but no likelihood evaluations. Through training, the network learns a global set of posteriors: it can generate thousands of independent posterior samples per second for any strain data consistent with the prior and detector noise characteristics used for training. By training with the detector noise power spectral density estimated at the time of GW150914, and conditioning on the event strain data, we use the neural network to generate accurate posterior samples consistent with analyses using conventional sampling techniques. We thereby establish deep learning as a tool to confront the growing challenges of gravitational-wave inference.
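A from-scratch sketch of the core technique appears below; it is not the authors' code. The work uses a neural spline flow over the full 15-dimensional parameter space, whereas this example uses simple affine coupling layers on a two-dimensional toy problem with an invented "detector" simulator, to show the likelihood-free training loop and posterior sampling.

```python
# Toy conditional normalizing flow (affine couplings) for p(theta | x).
import torch
import torch.nn as nn

class Coupling(nn.Module):
    """Affine coupling: transform half of theta, conditioned on the rest and x."""
    def __init__(self, dim, context_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def _scale_shift(self, a, x):
        s, t = self.net(torch.cat([a, x], dim=1)).chunk(2, dim=1)
        return torch.tanh(s), t           # bounded scale for stability

    def forward(self, theta, x):          # theta -> z, with log|det J|
        a, b = theta[:, :self.half], theta[:, self.half:]
        s, t = self._scale_shift(a, x)
        return torch.cat([a, b * torch.exp(s) + t], dim=1), s.sum(dim=1)

    def inverse(self, z, x):              # z -> theta
        a, b = z[:, :self.half], z[:, self.half:]
        s, t = self._scale_shift(a, x)
        return torch.cat([a, (b - t) * torch.exp(-s)], dim=1)

class ConditionalFlow(nn.Module):
    def __init__(self, dim=2, context_dim=2, n_layers=4):
        super().__init__()
        self.dim = dim
        self.layers = nn.ModuleList(Coupling(dim, context_dim)
                                    for _ in range(n_layers))
        self.base = torch.distributions.Normal(0.0, 1.0)

    def log_prob(self, theta, x):
        logdet = 0.0
        for layer in self.layers:
            theta, ld = layer(theta, x)
            theta = theta.flip(dims=[1])  # alternate which half is transformed
            logdet = logdet + ld
        return self.base.log_prob(theta).sum(dim=1) + logdet

    @torch.no_grad()
    def sample(self, x, n):
        z = self.base.sample((n, self.dim))
        x = x.expand(n, -1)
        for layer in reversed(self.layers):
            z = layer.inverse(z.flip(dims=[1]), x)
        return z

# Likelihood-free training: draw (theta, x) pairs from the generative process.
flow = ConditionalFlow()
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for _ in range(500):
    theta = torch.randn(256, 2)                 # toy prior
    x = theta + 0.3 * torch.randn(256, 2)       # toy "detector" simulator
    loss = -flow.log_prob(theta, x).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Rapid amortized inference: sample the posterior for one observed x.
posterior = flow.sample(torch.tensor([[0.5, -0.2]]), n=1000)
print("posterior mean:", posterior.mean(dim=0))
```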

Predicting the Properties of Coalescing Black Holes for Gravitational Waves Physics

Leïla Haegel
Astroparticles and Cosmology Laboratory, University of Paris
Postdoctoral Researcher

Abstract: Black holes in binary systems orbit each other and draw closer as the system loses energy by emitting gravitational waves, before eventually coalescing into a remnant black hole. The properties of the remnant black hole can be determined during the numerical computation of the gravitational waves emitted in the last stage of the coalescence. Because numerically solving the Einstein equations for coalescing black holes is a computing-intensive process, the available simulations do not span the whole parameter space of binary systems that exist in nature. This talk presents a new method based on neural networks to provide a generic estimate of the masses and spin magnitudes of black holes resulting from binary coalescence, including for precessing systems. Knowledge of the remnant properties is of prime importance for designing phenomenological models of gravitational waves and performing tests of fundamental physics. The method presented here is therefore currently being implemented in the algorithm library used by the LIGO-Virgo-KAGRA collaboration to analyse gravitational-wave signals.
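As a minimal illustration of this kind of regression (not the author's implementation; the training data below are an invented stand-in for numerical-relativity catalogs), a small neural network can map binary parameters to remnant mass and spin:

```python
# Toy remnant-property regression with a small neural network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical stand-in for a numerical-relativity catalog:
# inputs = (mass ratio, six spin components, eccentricity proxy) per binary.
X = rng.uniform(-1, 1, size=(500, 8))
# Targets: remnant mass fraction and spin magnitude (placeholder formula).
y = np.column_stack([0.95 + 0.02 * X[:, 0], np.abs(X[:, 2])])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64),
                                   max_iter=2000, random_state=0))
model.fit(X, y)
print(model.predict(X[:3]))  # estimated (M_f/M, |a_f|) per binary
```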

Morpheus: A Deep Learning Framework For Pixel-Level Analysis of Astronomical Image Data

Ryan Hausen
University of California, Santa Cruz
Graduate Student

Abstract: Astronomy is on the cusp of a big data revolution. Upcoming facilities like the Rubin Observatory will produce terabytes of data nightly. Data analysis at this scale is beyond the realm of purely human effort, motivating astronomers to explore techniques based on advances in machine learning and computer vision. An essential aspect of astronomical image analysis is source detection and morphological classification. In this presentation, I will introduce Morpheus, a framework for pixel-level astronomical image analysis, and demonstrate its efficacy in both source detection and morphological classification at the scale of billions of pixels. More information about Morpheus, including data and code, is available at https://morpheus-project.github.io/morpheus.
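A toy sketch of pixel-level classification in the spirit of Morpheus follows (see the project page above for the real code; the network, shapes, and class list here are invented): a small fully convolutional network that outputs a class probability for every pixel.

```python
# Toy fully convolutional network for per-pixel classification.
import torch
import torch.nn as nn

N_CLASSES = 5  # e.g. spheroid, disk, irregular, point source, background

fcn = nn.Sequential(                        # input: (batch, 1, H, W) image
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, N_CLASSES, 1),            # 1x1 conv: per-pixel class scores
)

image = torch.randn(1, 1, 128, 128)
logits = fcn(image)                         # (1, N_CLASSES, 128, 128)
probs = logits.softmax(dim=1)               # per-pixel class probabilities
print(probs.shape, bool(probs.sum(dim=1).allclose(torch.ones(1, 128, 128))))
```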

Artificial Intelligence and Extreme Scale Computing for Multi-Messenger Astrophysics

Asad Khan
University of Illinois at Urbana-Champaign
Graduate Student

Abstract: We summarize recent accomplishments of the NCSA Gravity Group harnessing the big data revolution and extreme scale computing to address computational grand challenges in Multi-Messenger Astrophysics.

Machine Learning as a Tool for Reducing Spitzer IRAC Exoplanet Light Curves

Jessica Krick
California Institute of Technology/Infrared Processing & Analysis Center (IPAC)
Faculty/Staff

Abstract: We present a new method employing machine learning techniques, specifically Random Forests, for measuring astrophysical features by correcting systematics in IRAC high-precision photometry. The main systematic in IRAC light curve data is position change due to unavoidable telescope motions coupled with an intrapixel response function. We aim to use the large amount of publicly available calibration data for the single pixel used for this type of work (the "sweet spot" pixel) to make a fast, easy-to-use, accurate correction to science data. Training the correction on calibration data has the advantage of using an independent dataset, instead of using the science data on itself, which has the disadvantage of including astrophysical variations. After focusing on feature engineering and hyperparameter optimization, we show that a boosted random forest model can reduce the data such that we measure the median of ten archival eclipse observations of XO-3b to be 1459 ± 200 parts per million. This depth is comparable to the average of literature values obtained by seven different methods; however, the spread in our measurements is 30-100% larger than in those literature values, depending on the reduction method. We also caution others attempting similar methods to check their results against the fiducial dataset of XO-3b, as we were able to find models that initially gave excellent scores on their internal test datasets but significantly underestimated the eclipse depth of that planet.
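As an illustration of the technique (not the IPAC pipeline; the intrapixel response and all data below are synthetic), one can learn the position-dependent systematic from calibration data with a random forest and divide it out of a science light curve:

```python
# Toy intrapixel-systematics correction with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def intrapixel(x, y):                      # invented "sweet spot" response
    return 1.0 + 0.02 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

# Calibration star: constant source, flux modulated only by pixel position.
xy_cal = rng.uniform(0.3, 0.7, size=(5000, 2))
flux_cal = intrapixel(xy_cal[:, 0], xy_cal[:, 1]) + 0.002 * rng.normal(size=5000)
model = RandomForestRegressor(n_estimators=200).fit(xy_cal, flux_cal)

# Science target: eclipse signal on top of the same positional systematic.
xy_sci = rng.uniform(0.3, 0.7, size=(1000, 2))
idx = np.arange(1000)
eclipse = 1.0 - 0.0015 * ((idx > 400) & (idx < 600))
flux_sci = eclipse * intrapixel(xy_sci[:, 0], xy_sci[:, 1])

corrected = flux_sci / model.predict(xy_sci)   # systematic divided out
print("eclipse depth estimate (ppm):",
      1e6 * (1 - corrected[450:550].mean() / corrected[:400].mean()))
```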

The Computational Challenge of Anomaly Detection

Benjamin Nachman
Lawrence Berkeley National Laboratory
Staff

Abstract: Despite extensive experimental and theoretical evidence for new particles and forces of nature, there have been no new discoveries since the Higgs boson in 2012. One possibility is that we are not looking at our data in the right way to identify new fundamental structure. Machine learning techniques offer an exciting opportunity to explore our complex data in their natural high dimensionality. A variety of less-than-supervised methods have been proposed to be as model-agnostic as possible in this search (see e.g. https://indico.desy.de/indico/event/25341). The interpretation of these analyses can require expensive computational resources. Using a recent result from the ATLAS experiment at the LHC as an example (arXiv:2005.02983), I will discuss this challenge and how HPC with heterogeneous computing environments is essential. The final result in arXiv:2005.02983 required training 20,000 neural networks!
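To make the computational scaling concrete: analyses of this kind scan many signal-region hypotheses and train classifiers for each, which is where the thousands of networks come from. The sketch below shows a generic weakly supervised step on invented data (a classifier distinguishing a signal region from sidebands, whose score becomes an anomaly score); it is an illustration of the family of methods, not the ATLAS analysis itself.

```python
# Toy weakly supervised anomaly detection: signal region vs. sidebands.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(7)

background = rng.normal(0, 1, size=(20000, 4))
signal = rng.normal(1.5, 0.5, size=(500, 4))     # small injected anomaly

sideband = background[:10000]                     # assumed signal-free
signal_region = np.vstack([background[10000:], signal])

X = np.vstack([sideband, signal_region])
y = np.concatenate([np.zeros(len(sideband)), np.ones(len(signal_region))])

clf = HistGradientBoostingClassifier().fit(X, y)
# High-score events in the signal region are anomaly candidates.
scores = clf.predict_proba(signal_region)[:, 1]
print("fraction of signal region kept at score > 0.6:", (scores > 0.6).mean())
```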

Looking at the Night Sky with Artificially Intelligent Eyes

Gautham Narayan
University of Illinois at Urbana-Champaign
Assistant Professor

Abstract: Wide-field optical surveys are now producing too many observations for humans to look through. Buried in these petabytes of pixels are rare and exotic sources like kilonovae, but discovering them is now akin to looking for a needle amongst thousands of haystacks. To cope with the deluge of data, we have been employing artificial intelligence. I will cover how the same algorithms that underly driverless cars, voice recognition, and your bank's ability to detect fraudulent transactions is now telling us about the variable sky, and ultimately the nature of dark energy and the fate of the Universe.

Reduced Precision Strategies for Deep Learning: A GAN Use Case from High Energy Physics

Florian Rehm
CERN
Doctoral Student

Using an Optical Processing Unit for Tracking and Calorimetry at the Large Hadron Collider

David Rousseau
Université Paris-Saclay
Senior Scientist

Abstract: Experiments at the HL-LHC and beyond will have ever higher read-out rates, so it is essential to explore new hardware paradigms for large-scale computation. We have considered the Optical Processing Unit (OPU) from LightOn, an analog device that multiplies a binary one-megapixel image by a (fixed) 1E6x1E6 random matrix, yielding a megapixel image at a 2 kHz rate. It could serve the whole branch of machine learning that relies on random matrices, in particular dimensionality reduction (a sketch of this random-projection idea follows the list below). In this talk, we explore the potential of the OPU for two typical HEP use cases:

  1. "Tracking": high energy proton collisions at the LHC yield billions of records with typically 100,000 3D points corresponding to the trajectory of 10,000 particles. Using two datasets from previous tracking challenges, we investigate the OPU potential to solve similar or related problems in high-energy physics, in terms of dimensionality reduction, data representation, and preliminary results.
  2. "Calorimeter Event classification": high energy proton collision at the Large Hadron Collider have been simulated, each collision being recorded as an image representing the energy flux in the detector. The task is to train a classifier to separate signal from the background. The OPU allows fast end-to-end classification without building intermediate objects (like jets).

Brokering Alerts in Real-Time in the Big-Data Era

Monika Soraisam
National Center for Supercomputing Applications
University of Illinois at Urbana-Champaign
Postdoctoral Researcher

Abstract: Current and upcoming optical surveys, first and foremost VRO/LSST, are poised to open up new avenues in almost all fields of astronomy, particularly time-domain astronomy, by going deeper, faster, and wider in panchromatic passbands. Taming the expected onslaught of their data is one of the biggest data challenges in astronomy. Up to 10 million alerts per night are expected from LSST, hidden among which will be rare time-critical events requiring immediate follow-up. ANTARES is an automated software system for sifting through this barrage of data and selecting events deemed high-priority by the community. It is online and performs real-time filtering of the public alert stream of the ZTF survey, which can be considered a precursor to LSST. In this talk, I will describe the various features of the ANTARES system. Within such a broker, an efficient and effective algorithm for selecting rare and novel events is crucial in the big-data era; I will briefly describe such an algorithm that I have designed in preparation for the LSST alerts.
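As a deliberately generic illustration of stream filtering (not the ANTARES API; the field names and thresholds below are invented), a broker can be thought of as a chain of predicate functions applied to every incoming alert, forwarding only those that pass:

```python
# Toy alert-stream filtering: predicates applied to each incoming alert.
from typing import Callable, Iterable

def rising_fast(alert: dict) -> bool:
    """Example filter: large brightening since the previous detection."""
    return alert.get("dmag_dt", 0.0) < -0.3       # mag/day; negative = rising

def run_filters(stream: Iterable[dict],
                filters: list[Callable[[dict], bool]]):
    for alert in stream:
        if all(f(alert) for f in filters):
            yield alert                            # forward high-priority alert

toy_stream = [{"id": 1, "dmag_dt": -0.5}, {"id": 2, "dmag_dt": 0.1}]
print([a["id"] for a in run_filters(toy_stream, [rising_fast])])   # [1]
```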

Deep Learning for Pion Identification and Energy Calibration with the ATLAS Detector at the LHC

Maximilian Swiatlowski
TRIUMF
Faculty/Staff

Abstract: Separating charged from neutral pions and calibrating the pion energy response are core components of reconstruction in the ATLAS calorimeter for particles produced in pp collisions at the Large Hadron Collider. This presentation investigates deep learning techniques for these tasks, representing the signal in the ATLAS calorimeter layers as pixelated images. Deep learning approaches outperform the classification applied in the baseline local hadronic calibration and improve the energy resolution over a wide range of particle momenta, especially for low-energy pions. This work demonstrates the potential of deep-learning-based low-level hadronic calibrations to significantly improve the quality of particle reconstruction in the ATLAS calorimeter.
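A toy sketch of the classification task follows (not the ATLAS software; shapes and data are invented): energy deposits in the calorimeter layers become image channels for a small CNN, and the calibration task would replace the two-class output with a regression head.

```python
# Toy CNN treating calorimeter layers as image channels.
import torch
import torch.nn as nn

N_LAYERS = 4                                  # calorimeter layers as channels
cnn = nn.Sequential(
    nn.Conv2d(N_LAYERS, 32, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 2),                          # charged vs. neutral pion
)

images = torch.randn(8, N_LAYERS, 16, 16)      # batch of pixelated showers
labels = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(cnn(images), labels)
loss.backward()
print(float(loss))
```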

DeepShadows: Separating Low Surface Brightness Galaxies from Artifacts using Deep Learning

Dimitrios Tanoglidis
University of Chicago
Graduate Student

Abstract: Low-surface-brightness galaxies (LSBGs) are expected to dominate the galaxy population by number and may account for a significant fraction of the dynamical mass budget in the present-day Universe. By definition (galaxies whose surface brightness is at least one magnitude fainter than that of the dark sky), these objects are hard to detect, and large galaxy surveys have only just started exploring this territory. Searches for LSBGs are plagued by a large number of low-surface-brightness artifacts (faint, compact objects blended in the diffuse light from nearby bright stars or giant elliptical galaxies, bright regions of galactic cirrus, tidal ejecta connected to high-surface-brightness host galaxies), and so far all such searches have included a visual inspection component. With the advent of surveys like Euclid and the Legacy Survey of Space and Time (LSST) on the Vera C. Rubin Observatory, such a step will be practically infeasible. In this talk we will show how the process can be automated using Convolutional Neural Networks trained on a new, manually annotated set of 20,000 LSBGs and 2,000 artifacts from the Dark Energy Survey (DES). We'll discuss the optimal CNN architecture, performance on DES test data, and transfer learning from DES to the Hyper Suprime-Cam survey to test how well a model trained on DES can generalize and automate the discovery of LSBGs in other surveys.
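As a minimal illustration of the transfer-learning step (not the DeepShadows code; the architecture and data are invented), one can freeze the convolutional features of a network trained on one survey and retrain only the classification head on the other:

```python
# Toy transfer learning: freeze features, retrain the classification head.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                        # LSBG vs. artifact
)
# ... imagine cnn has been trained on DES cutouts at this point ...

for p in cnn.parameters():                   # freeze everything,
    p.requires_grad = False
cnn[-1] = nn.Linear(32, 2)                   # then replace + retrain the head

opt = torch.optim.Adam(cnn[-1].parameters(), lr=1e-3)
new_images = torch.randn(16, 3, 64, 64)      # stand-in for HSC cutouts
new_labels = torch.randint(0, 2, (16,))
loss = nn.functional.cross_entropy(cnn(new_images), new_labels)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```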

Neural Networks for Gravitational-Wave Trigger Selection in Single-Detector Periods

Agata Trovato
Centre National de la Recherche Scientifique (Laboratoire Astroparticule et Cosmologie)
Postdoctoral Researcher

Abstract: The search for gravitational-wave transient sources with LIGO and Virgo is mainly limited by non-Gaussian transient noise artefacts of a wide variety of origins, such as seismic, acoustic, and electromagnetic disturbances. The contamination by these "instrumental glitches" can be partially mitigated by requiring temporal coincidence in two or more detectors, as their accidental co-occurrence probability is low. When only one detector is operating, this strategy cannot be used, and in past science runs single-detector time amounted to a significant fraction of observing time. Glitches vary widely in rate, duration, frequency range, and morphology. For this reason, statistical modelling of the non-Gaussian and non-stationary component of the noise has not been feasible so far. We propose machine learning strategies, and in particular deep learning, to separate glitches from astrophysical signals. In this presentation, we show the performance of deep learning algorithms in selecting triggers and reducing the impact of transient noise during single-detector data-taking periods.
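A toy sketch of the idea follows (not the search pipelines themselves; the data below are random stand-ins): a 1-D convolutional network that scores whitened single-detector strain segments as astrophysical or glitch-like.

```python
# Toy 1-D CNN scoring strain segments: astrophysical signal vs. glitch.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=16, stride=2), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=16, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),                        # astrophysical vs. glitch
)
segments = torch.randn(4, 1, 2048)           # batch of whitened strain segments
print(net(segments).softmax(dim=1))          # per-segment class probabilities
```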

SuperRAENN: A New Supernova Light Curve Classifier

Ashley Villar
Columbia University
Postdoctoral Researcher

Abstract: Automated classification of supernovae (SNe) based on optical photometric light curve information is essential in the upcoming era of wide-field time domain surveys, such as the Legacy Survey of Space and Time (LSST) conducted by the Rubin Observatory. Photometric classification can enable real-time identification of interesting events for extended multi-wavelength follow-up, as well as archival population studies. Here I will describe a new data-driven classification pipeline, dubbed SuperRAENN, based on a recurrent autoencoder neural network.
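As a rough illustration of the architecture named in the title (not the SuperRAENN code; shapes and training data are invented), a recurrent autoencoder compresses each light curve into a low-dimensional latent vector that can then feed a downstream classifier:

```python
# Toy recurrent autoencoder: GRU encoder/decoder over light-curve sequences.
import torch
import torch.nn as nn

class RAE(nn.Module):
    def __init__(self, n_bands=6, latent=8, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(n_bands, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent)
        self.decoder = nn.GRU(latent, hidden, batch_first=True)
        self.to_flux = nn.Linear(hidden, n_bands)

    def forward(self, seq):                      # seq: (batch, T, n_bands)
        _, h = self.encoder(seq)
        z = self.to_latent(h[-1])                # (batch, latent) features
        # Repeat the latent vector at every time step for the decoder.
        dec_in = z.unsqueeze(1).expand(-1, seq.size(1), -1)
        out, _ = self.decoder(dec_in)
        return self.to_flux(out), z

model = RAE()
lc = torch.randn(10, 50, 6)                      # 10 light curves, 50 epochs
recon, features = model(lc)
loss = nn.functional.mse_loss(recon, lc)         # train to reconstruct
loss.backward()
print(features.shape)                            # (10, 8) -> classifier input
```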

Simulation-Based Gravitational-Wave Population Inference with Normalizing Flow

Kaze W. K. Wong
Johns Hopkins University
Graduate Student

Abstract: Running population synthesis simulations can be time-consuming. To constrain the physical parameters characterizing the simulations, we must compare them to the data at numerous sample points in the physical parameter space. This comparison requires a large number of simulations, which is often computationally impractical. In this talk, I will present a deep learning technique (normalizing flows) to emulate population synthesis simulations at much faster speed. The emulator can be used in the population inference process, opening up the possibility of constraining astrophysics directly using the observed gravitational-wave population.
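A compact sketch of the emulator idea is below, written against the nflows package (https://github.com/bayesiains/nflows) as an assumption on our part rather than a statement about the author's implementation; the toy hyperparameter, observables, and "simulation bank" are invented. The flow learns the conditional density of simulated observables given the simulation hyperparameters, after which evaluating that density replaces rerunning simulations during inference.

```python
# Toy normalizing-flow emulator of population synthesis simulations.
import torch
from nflows.flows.base import Flow
from nflows.distributions.normal import StandardNormal
from nflows.transforms.autoregressive import MaskedAffineAutoregressiveTransform
from nflows.transforms.base import CompositeTransform

# Observables (e.g. masses, spins) conditioned on a population hyperparameter.
transform = CompositeTransform([
    MaskedAffineAutoregressiveTransform(features=2, hidden_features=32,
                                        context_features=1)
    for _ in range(4)])
flow = Flow(transform, StandardNormal(shape=[2]))
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

for _ in range(200):                       # toy "simulation bank"
    lam = torch.rand(256, 1)               # hyperparameter of each simulation
    obs = lam + 0.1 * torch.randn(256, 2)  # stand-in for simulated binaries
    loss = -flow.log_prob(obs, context=lam).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Population inference: the emulated density replaces rerunning simulations.
events = 0.7 + 0.1 * torch.randn(50, 2)    # "observed" gravitational-wave events
grid = torch.linspace(0, 1, 21).unsqueeze(1)
loglike = [flow.log_prob(events, context=l.expand(50, 1)).sum() for l in grid]
print("best-fit hyperparameter:", float(grid[torch.stack(loglike).argmax()]))
```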

Deep Learning Insights on the Morphologies and Evolution of Galaxies

John Wu
Space Telescope Science Institute
Postdoctoral Researcher

Abstract: The growth of galaxies is regulated by the amount of cold gas available to form stars. In order to constrain galaxy evolution models, it is critical to measure the interstellar gas mass and the abundance of heavy elements (metallicity) in the gas phase for large samples of galaxies. However, these properties are observationally difficult to measure, and galaxies' cold gas reservoirs are mostly invisible at optical wavelengths. One way to circumvent these challenges is to rely on the morphologies of galaxies, which are linked to their star formation and chemical enrichment histories. I will present deep learning methods for estimating the gas content and metallicity of galaxies from imaging data alone, including an overview of convolutional neural networks and their use cases in the astronomical image domain. I will also discuss novel ways to probe galaxy evolution using artificial intelligence and visualization algorithms. Interpretable and accurate deep learning tools will enable us to multiply the scientific returns of large astronomical surveys in the coming decade.
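As one illustration of interpretability for such models (not necessarily the author's method; the network and data below are invented), a saliency map takes the gradient of the predicted quantity with respect to the input pixels, showing which image regions drive the estimate:

```python
# Toy saliency map for a CNN regressing a galaxy property from an image.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                       # regression: e.g. gas metallicity
)

image = torch.randn(1, 3, 64, 64, requires_grad=True)  # galaxy cutout stand-in
cnn(image).sum().backward()                 # d(prediction)/d(pixel)
saliency = image.grad.abs().max(dim=1).values           # (1, 64, 64) map
print(saliency.shape)
```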

Improved Radio Pulsar Searches with Artificial Intelligence

Christine Ye
University of Washington Bothell
Undergraduate Student

Abstract: High-resolution all-sky searches for pulsars with radio observatories have the potential to improve our understanding of the galactic pulsar population, increase the sensitivity of pulsar timing arrays to gravitational waves, and find interesting new systems such as pulsar-black hole binaries. Pulsar searches currently employ two main methods: Fourier-domain searches for periodic signals and time-domain searches for single bright pulse events. Both methods produce complex outputs and plots of candidates that are roughly filtered and then reviewed by hand, a process which is time-consuming and will be especially problematic for future, more sensitive radio facilities. I discuss artificial intelligence and machine learning methods for radio pulsar searches, focusing on my recent work on searches for single bright pulses incorporating Bayesian hyperparameter optimization, density-based clustering, supervised learning, and object detection with deep convolutional neural networks. Models are trained on real pulsars, noise, and radio frequency interference from recent surveys at the Arecibo, Green Bank, and Parkes Observatories, as well as artificially generated pulsar time series. Other popular methods incorporate a variety of simple unsupervised and supervised learning techniques, as well as deep learning methods such as image recognition, to accelerate the process of finding pulsars. Applied at scale, these algorithms greatly reduce the human workload and may facilitate increased and streamlined detection of pulsars in ongoing surveys, archival data, and searches with next-generation radio arrays.
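As a sketch of one step named above, density-based clustering, the example below groups single-pulse candidates in (arrival time, dispersion measure) space with DBSCAN: a genuine pulse appears as a tight cluster across trial DMs while noise scatters diffusely. The data and thresholds are invented, and this is not the author's pipeline.

```python
# Toy DBSCAN clustering of single-pulse candidates in (time, DM) space.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)

# Candidates in (arrival time [s], dispersion measure [pc/cm^3]) space.
noise = np.column_stack([rng.uniform(0, 600, 300), rng.uniform(0, 500, 300)])
pulse = np.column_stack([123.0 + 0.02 * rng.normal(size=30),
                         250.0 + 2.0 * rng.normal(size=30)])
candidates = np.vstack([noise, pulse])

labels = DBSCAN(eps=5.0, min_samples=10).fit_predict(candidates)
for k in set(labels) - {-1}:                      # -1 marks unclustered noise
    cluster = candidates[labels == k]
    print(f"cluster {k}: {len(cluster)} candidates "
          f"near t = {cluster[:, 0].mean():.1f} s")
```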

Using an Artificial Intelligence to Generate Big Data

Peter Yoachim
University of Washington
Faculty/Staff

Abstract: The Rubin Observatory will take 2.2 million observations, generating 15 PB of data over its 10-year survey. We have developed an AI scheduler, based on a Markov Decision Process, to schedule Rubin observations in real time. This system successfully balances our desires for a uniform survey, the deepest possible images, and minimal slew time. We have also built a framework for analyzing the science performance of simulated surveys to ensure Rubin will contribute to 1) understanding the nature of dark matter and dark energy, 2) cataloging the solar system, 3) observing the variable sky, and 4) measuring the structure of the Milky Way.
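A heavily simplified sketch of the scheduling idea follows (not the Rubin scheduler itself; the fields, reward terms, and weights are invented): at each decision step, a greedy Markov-Decision-Process policy picks the pointing that maximizes a weighted reward trading off survey uniformity, expected depth, and slew cost.

```python
# Toy greedy MDP scheduler: pick the next field by maximizing a reward.
import numpy as np

rng = np.random.default_rng(0)
n_fields = 100
coords = rng.uniform(0, 360, size=(n_fields, 2))     # toy field positions
visits = np.zeros(n_fields)                          # state: visit counts

def next_field(current):
    slew = np.linalg.norm(coords - coords[current], axis=1)
    uniformity = visits.max() - visits               # favor under-visited fields
    depth = rng.uniform(0.5, 1.0, n_fields)          # stand-in for sky quality
    reward = 1.0 * uniformity + 0.5 * depth - 0.05 * slew
    reward[current] = -np.inf                        # force a move
    return int(reward.argmax())

current = 0
for _ in range(1000):                                # one toy "night"
    current = next_field(current)
    visits[current] += 1
print("visit spread:", visits.min(), "-", visits.max())
```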

This workshop is funded by the NSF through award NSF 1931561.

