ADASS XXII University of Illinois
November 4-8, 2012

ADASS XXII Conference

Posters

Posters can be found in the Chancellor Ballroom.

Anderson, K. R. P01: CyberSKA Radio Imaging Metadata and VO Compliance Engineering
Arviset, Christophe P02: Results from ESAC Science Archives Survey
Barache, C. P03: VO-compatible Architecture for managing and processing images of moving celestial bodies - Application to the Gaia-GBOT project
Bell, Graham P04: Crab: a dashboard system for monitoring archiving tasks
Berry, David P05: Starlink 2012 - The Kapuahi Release
Berthoud, Marc G. P06: Online data reduction and quicklook tool for HAWC
Bomans, Dominik J. P07: Geometric data exploration in large scale astronomical surveys
Cardiel, Nicolas P08: Searching for Deeper Blank Fields in the sky with TESELA
Ceballos, M. Teresa P09: Data Flow design for event detection and qualification in X-ray detectors based on Transition Edge Sensors technology
Delmotte, Nausicaa P10: ESO archive: usage statistics
Dencheva, Nadia P11: HEADERLETS: Share HST Astrometric Solutions Without The Data
Friedel, Douglas N. P12: The CARMA Data Reduction Pipeline
Gheller, Claudio P13: CUDA-Splotch: HPC visualization of astrophysical data
Hartung, Steven P14: Scalable Large Volume Image Differencing Pipeline Using Hybrid MPI-OMP-GPU Design
Ibarra, Aitor P15: XMM-Newton mobile web App
Krueger, Tony P16: OpTIIX Mission Overview and Education/Public Outreach
Le Fèvre, Jean-Paul P17: Using WebGL to visualize results of numerical simulations
Michel, Laurent P18: A New Administration Tool for SAADA
Mink, Jessica P19: The Past 15 Years of RVSAO
Laurino, Omar P20: Extending Iris, the VAO SED Analysis tool
Pascual, Sergio P21: Software toolkit for MEGARA, the Integral-Field Unit and Multi-Object Spectrograph for GTC
Pérez Navarro, Óscar P22: Migrating an in-operation space observatory data processing chain towards a SOA oriented architecture
Radhuber, Mary L. P23: New Spectral Analysis Software for Global Analysis of Broadband Line Surveys in MATLAB
Rauch, Thomas P24: TheoSSA: A Virtual Observatory Service for Synthetic Stellar Spectra
Redman, Russell O. P25: Implementing a Common Database Architecture at the CADC using CAOM-2
Saunders, Eric P26: Architecture of the LCOGT network scheduler
Shortridge, Keith P27: The Software System for the AAO's HERMES Spectrograph
Smareglia, Riccardo P28: Harmonize pipeline and archiving system: PESSTO@IA2 use case.
Taylor, Mark P29: TOPCAT Visualisation Improvements
Tibbetts, M. P30: NEOview: Near Earth Object Data Discovery and Query
van Elteren, Arjen P31: Introduction to the Astrophysical Multipurpose Software Environment (AMUSE)
Verkouter, Harro P32: Building a distributed, scalable and fault-tolerant monitor and control system using Erlang: experiences from an FPGA based VLBI correlator
Vovchenko, Alexey P33: Data Intensive Science Doesn't Necessarily Imply Resource Driven Problem Solving
Wu, Chen P34: The MWA Archive - A Multi-tiered Dataflow and Storage System
Yamauchi, Chisato P35: 2MASS Catalog Server Kit version 2.1
Zhao, Jun-Hui P36: Submillimeter array data handling in Miriad
Zubarev, Sergey P37: Comparison of modern methods for calculation of effective temperature and bolometric corrections for stars
Pramskiy, Alexander P38: Binocular observations with LUCI at the LBT: scheduling and synchronization
Stephens, Thomas E. P39: A Mobile Data Application for the Fermi Mission
Lee, Matthias A. P40: Cross-Identification of Astronomical Catalogs on Multiple GPUs
Becker, Glenn P41: Better Living Through Metadata: Examining Observatory Archive Usage
Dowell, Jayce P42: Software and Computing at LWA1
Ibsen, Jorge P43: Connecting the ALMA Observatory site, in northern Chile, with the ALMA central offices, in Santiago, by means of an optical link.
Landais, Gilles P44: TAPVizieR: a new way to access the VizieR database
Yang, Lin P45: A GPU-based visualization method for computing the dark matter annihilation signal
Fan, Dongwei P46: Efficient Catalog Matching with Dropout Identification
Kawasaki, Wataru P47: Vissage: an ALMA-VO Desktop Application
Lorente, Nuria P. F. P48: Automating Plug-Plate Configuration for SAMI
Kuemmel, Martin P49: Early photometric studies for EUCLID
Shuping, Ralph P50: Overview of the SOFIA Data Cycle System: An integrated set of tools and services for the SOFIA General Investigator
Feigelson, Eric D. P51: New Organizations to Support Astroinformatics and Astrostatistics
Viallefond, Francois P52: Formal semantics to model experimental data
Swade, Daryl P53: OpTIIX Data Management System
Diaz, R.I. P54: HST Cycle 21 Exposure Time Calculator
Liang, Feng P55: New Probabilistic Galaxy Classification in Large Photometric Surveys
Jenness, Tim P56: PAL - A Positional Astronomy Library
Wang, Y. P57: Quantifying Systematic Effects on Galaxy Clustering
Chilingarian, Igor P58: Data reduction pipeline for the MMT Magellan Infrared Spectrograph
Teuben, Peter P59: Science Mining and Characterization of ALMA Large Data Cubes
Economou, Frossie P60: Astronomy Data Centres: data storage approaches
Currie, Malcolm J. P61: Automated removal of bad-baseline spectra from ACSIS/HARP heterodyne time series
Ball, Nicholas M. P62: CANFAR + Skytree: A Cloud Computing and Data Mining System for Astronomy
Royer, Frédéric P63: The GIRAFFE Archive : 1D and 3D spectra
Moins, Christophe P64: ESO Catalog Facility design and performance
Song, Yihan P65: A New Python Library for Spectroscopic Analysis with MIDAS Style
Masters, Joe P66: The Green Bank Telescope Spectral Pipeline

P01: CyberSKA Radio Imaging Metadata and VO Compliance Engineering

Anderson, K. R. University of British Columbia, Okanagan Campus
Rosolowsky, E. W. University of British Columbia, Okanagan Campus

We have written a specification for the metadata encapsulation of radio astronomy data products, for insertion into the VO-compliant Common Archive Observation Model (CAOM) database hosted by the Canadian Astronomy Data Centre (CADC). This specification accommodates radio FITS Image and UV Visibility data, as well as pure CASA Tables Imaging and Visibility Measurement Sets. To extract and engineer radio metadata, we have authored two software packages: metaData (v0.5.0) and mddb (v1.0). Together, these Python packages can convert all the above data format types into concise FITS-like header files, engineer the metadata to conform to the CAOM data model, and then insert these engineered data into the CADC database, which is subsequently published through the Canadian Virtual Observatory. The metaData and mddb packages have, for the first time, published ALMA imaging data on VO services. Our ongoing work aims to integrate visibility data from ALMA and the SKA into VO services and to enable user-submitted radio data to move seamlessly into the Virtual Observatory.
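
As a rough illustration of the first step, the sketch below reduces a radio FITS image header to a concise metadata record of the kind that could be mapped onto CAOM fields. This is not the actual metaData/mddb code: the keyword selection, field names and file name are illustrative assumptions.

    # Illustrative sketch only; field names are hypothetical, not the mddb schema.
    from astropy.io import fits

    def extract_metadata(path):
        header = fits.getheader(path)
        # Keep a concise subset of keywords relevant to the observation model.
        return {
            "target":    header.get("OBJECT"),
            "telescope": header.get("TELESCOP"),
            "ra":        header.get("CRVAL1"),    # deg, assuming an RA axis
            "dec":       header.get("CRVAL2"),    # deg, assuming a Dec axis
            "rest_freq": header.get("RESTFREQ"),  # Hz, radio-specific keyword
            "date_obs":  header.get("DATE-OBS"),
        }

    record = extract_metadata("alma_image.fits")  # placeholder file name
    print(record)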

This research is funded by a grant from CANARIE to the CyberSKA collaboration and a North American ALMA Science Center ALMA Development Study. This research used the facilities of the CADC operated by the National Research Council of Canada with the support of the Canadian Space Agency.

P02: Results from ESAC Science Archives Survey

Arviset, Christophe ESA/ESAC
Osuna, Pedro ESA/ESAC
Baines, Deborah ESA/ESAC

Most of ESA's Space Science Archives are currently hosted at ESAC, the European Space Astronomy Centre, located near Madrid, Spain. These include the ISO Data Archive (IDA), the XMM-Newton Science Archive (XSA), the Integral SOC Science Data Archive (ISDA), all of ESA's planetary mission archives (Rosetta, Mars Express, Venus Express, Smart-1, Huygens and Giotto) (PSA), the Herschel Science Archive (HSA), the SOHO Science Archive (SSA), the EXOSAT Science Archive (EXSA), the Planck Legacy Archive (PLA) and, most recently in 2012, the European HST Archive. More archives are currently under development, including Gaia, Cluster, Ulysses and BepiColombo, and archives for future ESA science missions such as Euclid and Solar Orbiter are being studied. All these science archives are designed, developed, operated and maintained by a dedicated Science Archives and VO Team at ESAC, providing support to all science operations centres at ESAC.

At the end of 2011, a questionnaire was sent to everyone who had used the ESAC Science Archives in the previous five years, asking them about their usage frequency, their satisfaction level, the type of interfaces they use (GUI, scriptable interface or others) and the purposes for which they use the archives, and optionally allowing them to provide qualitative feedback.

This paper presents the main results from this questionnaire, either globally or per specific archive.

The authors want to thank the Science Archives Team at ESAC and the corresponding Archive Scientists in the project teams for their work in this context.

P03: VO-compatible Architecture for managing and processing images of moving celestial bodies - Application to the Gaia-GBOT project

Barache, C. Observatoire de Paris / SyRTE / France
Bouquillon, S. Observatoire de Paris / SyRTE / France
Carlucci, T. Observatoire de Paris / SyRTE / France
Taris, F. Observatoire de Paris / SyRTE / France
Michel, L. Observatoire de Strasbourg / France
Altmann, M. Zentrum für Astronomie der Universität Heidelberg / ARI / Germany

The Ground Based Optical Tracking (GBOT) group is part of the Data Processing and Analysis Consortium (DPAC), the large consortium of over 400 scientists from many European countries charged by ESA with the scientific conduct of the Gaia mission. The GBOT group is in charge of the optical tracking of the Gaia satellite, which is necessary for the Gaia mission to fully reach its astrometric precision goals. These observations will be made daily, throughout the five years of the mission, using optical CCD frames taken by a small network of 1-2m class telescopes located all over the world. The required accuracy of the satellite position determination, with respect to the stars in the field of view, is 20 mas. For this purpose, we developed a set of accurate astrometric reduction programs specially adapted to tracking moving objects. The inputs of these programs are, for each tracked target, an ephemeris and a set of FITS images. The outputs are, for each image: a file containing all information about the detected objects, a catalogue file used for calibration, a TIFF file giving a visual summary of the reduction result, and an improved FITS image header. The final result is an overview file containing only the data related to the target, extracted from all the images. These programs are written in GNU Fortran 95 and provide results in VOTable format (supported by Virtual Observatory protocols). All these results are sent automatically to the GBOT database, which is built with the SAADA freeware. Users of this database can archive and query the data but can also, thanks to the delegate option provided by SAADA, select a set of images and run the GBOT reduction programs directly through a dedicated Web interface. For more information about SAADA (an automatic system for astronomy data archives, GPL-licensed and VO-compatible), see the related presentation by L. Michel.

P04: Crab: a dashboard system for monitoring archiving tasks

Bell, Graham Joint Astronomy Centre Hawaii
Jenness, Tim Joint Astronomy Centre Hawaii
Agarwal, A. Joint Astronomy Centre Hawaii

At the Joint Astronomy Centre we use a large number of cron jobs to perform data archiving tasks such as backing up data, transferring it to the Canadian Astronomy Data Centre and ensuring sufficient disk space is available before each night of observing. The cron scheduler runs the jobs on a given schedule and sends any output by email. However, as the number of machines and of cron jobs running on them increases to cope with the demands of modern instruments such as SCUBA-2, this can lead to an unmanageable amount of email traffic. A related problem is that it is not obvious when a cron job has not been run, for whatever reason.

We therefore designed and implemented a dashboard system, written in Python, for monitoring the progress of the tasks. Each task reports its status by sending messages to a server when it starts and finishes. The finish message can include a status to indicate whether the task was successful, along with any output. To allow existing jobs to easily be brought into this system, we have written a wrapper script which acts like a shell. A whole crontab is activated simply by setting the SHELL variable to the path to this script. We also wrote a client utility to send messages to the server directly, and this allows crontab files to be imported so that the system can detect when a job has been missed.
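
The sketch below conveys the reporting idea in miniature. It is not the actual Crab wrapper or protocol: the endpoint URL and JSON message format are invented, and the real wrapper acts as a drop-in shell, whereas this simplified client takes the command as its arguments.

    # Hypothetical reporting wrapper: run a job, report start/finish via HTTP.
    import json, subprocess, sys, urllib.request

    SERVER = "http://dashboard.example.org/report"  # invented endpoint

    def report(event, **extra):
        msg = dict(event=event, command=" ".join(sys.argv[1:]), **extra)
        req = urllib.request.Request(SERVER, json.dumps(msg).encode(),
                                     {"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    report("start")
    job = subprocess.run(sys.argv[1:], capture_output=True, text=True)
    report("finish", status=job.returncode, output=job.stdout + job.stderr)
    sys.exit(job.returncode)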

The status of all of the tasks can be monitored on the dashboard's web interface. There is also a configurable notification system. This allows us to issue a single daily summary email message, and also to be alerted immediately if certain tasks fail.

P05: Starlink 2012 - The Kapuahi Release

Berry, David Joint Astronomy Centre, Hawaii
Currie, Malcolm Joint Astronomy Centre, Hawaii
Jenness, Tim Joint Astronomy Centre, Hawaii
Draper, Peter University of Durham, UK
Bell, Graham Joint Astronomy Centre, Hawaii
Tilanus, Remo Joint Astronomy Centre, Hawaii

We present details of the most recent release of the Starlink Software Collection, code-named "Kapuahi".

P06: Online data reduction and quicklook tool for HAWC

Berthoud, Marc G. University of Chicago

The High-resolution Airborne Wide-band Camera (HAWC) is the facility far-infrared imager for SOFIA, the Stratospheric Observatory For Infrared Astronomy. For best science return during SOFIA flights, rapid inspection of reduced data is crucial. To optimize this process, we have developed a web-based data viewing and analysis tool that allows astronomers to view reduced files. This web viewer works in conjunction with the automatic data reduction pipeline: as soon as the instrument closes a raw data file, the automatic pipeline reduces the file and the reduced FITS files are copied to the on-board web server. Connected to the experimenter's network, scientists and engineers can call up the web viewer in their browsers to analyze all data products from the current flight's observations. The web viewer pages are generated by a Python script running on an Apache web server, with client-side operations in JavaScript. The software architecture is highly modular, so various parts can be reconfigured or used in other projects. The HAWC auto-reduction pipeline and the web viewer were used during our last instrument tests in the spring of 2012. These tools significantly improved the efficiency of lab operations. We expect similar gains when using HAWC during flight operations on SOFIA, currently scheduled to begin in 2015.
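
A minimal sketch of the trigger logic, with invented paths and a stand-in for the pipeline call (the real system reacts to the instrument closing a file rather than polling):

    # Poll for newly closed raw files, reduce them, publish to the web server.
    import glob, shutil, time

    def reduce_file(raw_path):
        # Stand-in for the HAWC reduction pipeline; here it simply
        # pretends the reduced product is the input file itself.
        return raw_path

    seen = set()
    while True:
        for raw in sorted(glob.glob("/data/raw/*.fits")):   # invented path
            if raw not in seen:
                seen.add(raw)
                product = reduce_file(raw)
                shutil.copy(product, "/srv/quicklook/")     # invented path
        time.sleep(10)                                      # poll every 10 s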

P07: Geometric data exploration in large scale astronomical surveys

Bomans, Dominik J. Department of Computer Science, Univ. Muenster
Thom, Andreas Department of Computer Science, Univ. Muenster
Vahrenhold, Jan Department of Computer Science, Univ. Muenster

We present results from applying (semi-)automated geometric approaches to the exploration of large astronomical data sets. Currently, our analyses are based on the Sloan Digital Sky Survey (SDSS), but the methods are designed to be applicable to other spectroscopic and photometric survey projects as well. Our first method is based on the principle of connected-component analysis in geometric space and can identify regions of high density based on photometric data (or any arithmetical expression of photometric features). We use the approach to automatically detect stellar streams near globular clusters and dwarf galaxies in the Milky Way halo.

In a related, yet orthogonal, project we present automated classification approaches to discriminate starburst and post-starburst galaxies from quiescent galaxies based on problem-specific features extracted from the SDSS spectroscopic data base. We use supervised learning techniques, in particular k-nearest neighbours (KNN) and support vector machines, for the classification process.
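
A minimal sketch of this classification step, with placeholder features and labels standing in for the spectroscopically derived features described above:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))             # placeholder feature vectors
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder class labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    for clf in (KNeighborsClassifier(n_neighbors=5), SVC(kernel="rbf")):
        clf.fit(X_train, y_train)
        print(type(clf).__name__, "accuracy:", clf.score(X_test, y_test))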

P08: Searching for Deeper Blank Fields in the sky with TESELA

Cardiel, Nicolas Universidad Complutense de Madrid, Spain
Jiménez-Esteban, F.M. Centro de Astrobiología (INTA-CSIC), Departamento de Astrofísica, Spain
Cabrera-Lavers, A. Instituto de Astrofísica de Canarias, Spain
Alacid, J.M. Centro de Astrobiología (INTA-CSIC), Departamento de Astrofísica, Spain

TESELA is a Virtual Observatory tool which provides a simple interface that allows the user to retrieve a list of Blank Fields - regions devoid of bright stars down to a given threshold magnitude - available near a given position in the sky. The initial version of this tool, already presented at ADASS 2011, made use of the Delaunay triangulation to determine a tessellation of the whole celestial sphere, using the astrometric and photometric information for the 2.5 million brightest stars provided by the Tycho-2 stellar catalogue. More recently we have used the Delaunay triangulation to search for deeper Blank Fields, with a minimum diameter of 10 arcmin and an increasing threshold magnitude ranging from 15 to 18 in the USNO-B R band. However, instead of tessellating the whole sky at once, and considering the exponentially growing number of stars when moving to fainter stellar magnitudes, we started the search by exploring the subsample of the initial Blank Field list derived from the Tycho-2 catalogue that contains Blank Fields larger than 10 arcmin. Some of these new Deep Blank Fields are being tested with the 10.4 m Gran Telescopio Canarias, proving to be extremely useful for medium- and large-size telescopes. The new catalogue is also available through the TESELA interface which, in addition, provides galactic extinction information from the NASA/IPAC Infrared Science Archive.
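
The geometric idea can be sketched in a few lines: in a flat-sky approximation (TESELA itself works on the celestial sphere), the circumcentre of each Delaunay triangle of the star positions is a local maximum of the distance to the nearest star, i.e. a candidate blank-field centre.

    import numpy as np
    from scipy.spatial import Delaunay

    stars = np.random.default_rng(1).uniform(0, 10, size=(2000, 2))  # toy field (deg)
    tri = Delaunay(stars)

    def circumcircle(p0, p1, p2):
        # Circumcentre and circumradius of the triangle (p0, p1, p2).
        ax, ay = p0; bx, by = p1; cx, cy = p2
        d = 2 * (ax*(by-cy) + bx*(cy-ay) + cx*(ay-by))
        ux = ((ax**2+ay**2)*(by-cy) + (bx**2+by**2)*(cy-ay) + (cx**2+cy**2)*(ay-by)) / d
        uy = ((ax**2+ay**2)*(cx-bx) + (bx**2+by**2)*(ax-cx) + (cx**2+cy**2)*(bx-ax)) / d
        return (ux, uy), np.hypot(ax - ux, ay - uy)

    fields = [circumcircle(*stars[s]) for s in tri.simplices]
    centre, radius = max(fields, key=lambda f: f[1])
    print("largest blank field: centre", centre, "radius %.3f deg" % radius)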

P09: Data Flow design for event detection and qualification in X-ray detectors based on Transition Edge Sensors technology

Ceballos, M. Teresa Instituto de Física de Cantabria (CSIC-UC)
Cobo, Beatriz Instituto de Física de Cantabria (CSIC-UC)
van der Kuur, Jan Netherlands Institute for Space Research (SRON)
Schuurmans, Jaap Netherlands Institute for Space Research (SRON)
Fraga-Encinas, Raquel Instituto de Física de Cantabria (CSIC-UC)

The current and forthcoming research lines in X-ray astronomy will require unprecedented spectral resolution with imaging capabilities. The most promising detectors able to provide these capabilities are calorimeters based on Transition Edge Sensor (TES) technology, like the one that has been under development for the proposed ATHENA X-ray space mission. These new detectors require a different approach to event detection: instead of detecting the charge generated by the X-ray photon impact (as in traditional CCDs), they must detect the electrical pulses that are the response to an abrupt change of resistance in the device. This abrupt transition is caused by the increase in temperature that follows the absorption of an X-ray photon in the device. These new detection products also require new detection and processing software. We present here the data flow designed for one such instrument, covering the pulse detection algorithms that extract the events from the noisy signal (and cope with possible pile-up of pulses), the event qualification (event grade) according to the event arrival time and proximity to other events, and finally the filtering process applied to these pulses to obtain their energy content, and thus the spectrum of the astronomical source.
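
As a toy illustration of the detection step, the sketch below finds pulse arrival times in a simulated TES-like stream by locating upward threshold crossings of the low-pass-filtered derivative; the pulse shape, noise level and threshold are arbitrary, not the instrument's values.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(20000)
    signal = rng.normal(0.0, 0.1, t.size)                # baseline noise
    for t0 in (3000, 7500, 7600, 15000):                 # includes a near pile-up
        signal += np.exp(-(t - t0) / 200.0) * (t >= t0)  # abrupt rise, slow decay

    smooth = np.convolve(signal, np.ones(25) / 25, mode="same")  # low-pass filter
    deriv = np.diff(smooth)
    # An event starts where the filtered derivative crosses the threshold upward.
    events = np.flatnonzero((deriv[1:] > 0.025) & (deriv[:-1] <= 0.025))
    print("detected arrival samples:", events)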

P10: ESO archive: usage statistics

Delmotte, Nausicaa ESO
Arnaboldi, Magda ESO
Dobrzycki, Adam ESO
Fourniol, Nathalie ESO
Haase, Jonas ESO
Micol, Alberto ESO
Retzlaff, Joerg ESO
Romaniello, Martino ESO
Stoehr, Felix ESO
Vera, Ignacio ESO
Vuong, Myha ESO

The ESO archive has expanded its services for archive users: these services now include the publication of advanced data products from ESO public surveys and the online direct download of raw data. Access by archive users needs to be monitored and logged to provide statistics on data downloads and to plan upgrades of the different applications. In recent years, several archive applications have been (re-)configured to use a central logging table, which makes it significantly easier to derive and analyse statistics on ESO archive usage. The intent of this poster is to present the main results of such an analysis, focusing primarily on the usage of data discovery services and user download activities.

P11: HEADERLETS: Share HST Astrometric Solutions Without The Data

Dencheva, Nadia STScI
Hack, Warren STScI
Fruchter, Andrew STScI

A file format for storing the astrometric metadata of HST images is presented. A software implementation of the format and methods for working with it are described. Possible applications and availability within the HST archive are discussed.

P12: The CARMA Data Reduction Pipeline

Friedel, Douglas N. University of Illinois

The Combined Array for Millimeter-wave Astronomy (CARMA) data reduction pipeline has been developed to give investigators a first look at a fully reduced set of their data. It runs automatically on all data produced by the telescope as they arrive in the data archive. The pipeline is written in Python and uses Python wrappers for MIRIAD subroutines for direct access to the data. It applies passband, gain and flux calibration to the data sets and produces a set of continuum and spectral-line maps in both MIRIAD and FITS formats. The pipeline has been in production for a year, and this poster will discuss its current capabilities and planned improvements.

P13: CUDA-Splotch: HPC visualization of astrophysical data

Gheller, Claudio CSCS-ETH
Rivi, Marzia CINECA
Krokos, Mel University of Portsmouth

Visual data exploration and discovery can be a valuable support to science, since it provides a prompt and intuitive insight into very large-scale data sets, such as those produced by current observations and numerical simulations, allowing regions and/or features of interest to be identified before time-consuming algorithms are applied. Furthermore, this approach can be an extremely effective and ready way of discovering and understanding correlations, similarities and data patterns, or of identifying anomalous behaviours, saving resources in an ongoing experiment. Finally, visualization is also an effective means of presenting scientific results both to experts and to the general public. In order to visualize huge datasets, suitable tools must be available that are able to exploit High Performance Computing (hereafter HPC) devices. In this paper we focus on Splotch, our previously developed ray-casting algorithm. Splotch was created for effective high-performance visualization of large-scale astrophysical data sets coming from particle-based computer simulations. The software is specialized in the high-quality, high-performance rendering of point-like data such as those produced in cosmology by N-body numerical simulations. Splotch, however, has also been successfully adopted in other application fields, such as the visualization of real galaxy systems whose 3D shape is carefully reconstructed from observational data. In the development of Splotch, specific care has been taken over performance issues. The software is optimized to require the minimum possible memory and to exploit vector architectures, multi-core processors and multi-node supercomputers. In this paper we present the work accomplished to enable Splotch to exploit GPU-powered architectures. In recent years, GPUs have acquired more and more popularity both in the graphics and in the HPC communities, since they can provide extraordinary performance on suitable classes of algorithms, with speed-up factors of about one order of magnitude with respect to a standard multicore CPU and with comparable power consumption. In order to exploit these additional computing resources, we have implemented a GPU version of Splotch. This task proved to be challenging, leading to a full refactoring of the code, necessary to get the expected performance out of the GPU implementation. In the talk we give the details of the design of the algorithm and present the results of benchmarks performed to verify the performance in various configurations. Finally, we show an example of an animation generated with Splotch for the visualization of observational data.
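
The core of the rendering is an accumulation over particles, which is what the GPU version parallelizes. A much-simplified CPU sketch follows (single-pixel footprints and an orthographic projection; Splotch itself ray-casts smoothed footprints with full colour mapping):

    import numpy as np

    rng = np.random.default_rng(3)
    xyz = rng.normal(size=(100000, 3))          # toy particle positions
    lum = rng.uniform(0.5, 1.0, len(xyz))       # toy luminosities

    N = 256
    img = np.zeros((N, N))
    # Orthographic projection onto the x-y plane, mapped to pixel indices.
    px = ((xyz[:, 0] + 4) / 8 * N).astype(int).clip(0, N - 1)
    py = ((xyz[:, 1] + 4) / 8 * N).astype(int).clip(0, N - 1)
    np.add.at(img, (py, px), lum)               # unbuffered per-particle accumulation
    print("total accumulated flux:", img.sum())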

P14: Scalable Large Volume Image Differencing Pipeline Using Hybrid MPI-OMP-GPU Design

Hartung, Steven Centre for Astronomy, James Cook University, Townsville, Australia
Shukla, Hemant Lawrence Berkeley National Laboratory, Berkeley, CA, USA

Many useful image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Driven by advances in camera design, astronomy image sizes have increased to over a gigapixel per exposure in some cases, and exposure times per image have decreased from hours to seconds, or even down to video frame rates, often generating terabytes of data per night. Additionally, there is a great deal of science being done in reprocessing the ever-growing tera-scale to peta-scale archives of survey data. The application of emerging low-cost parallel computing methods to established image processing techniques provides a practical present-day solution to this data crisis. Utilizing many-core graphics processing unit (GPU) technology in hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd-order spatially varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single node, or on an MPI configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the image differencing times for large images can be accelerated by orders of magnitude.
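
A schematic of the delta-basis kernel fit at the heart of this OIS variant: with a delta-function basis each kernel pixel is a free coefficient, so the best-fit kernel is a linear least-squares solution whose basis vectors are shifted copies of the reference image. This toy version assumes a constant kernel and wraps shifts at the image edges; the pipeline itself fits a 2nd-order spatially varying kernel on GPUs.

    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(4)
    ref = rng.normal(100.0, 10.0, (64, 64))     # toy reference image
    true_k = np.zeros((5, 5))
    true_k[2, 2], true_k[2, 3] = 0.7, 0.3       # toy convolution kernel
    sci = convolve2d(ref, true_k, mode="same") + rng.normal(0, 0.5, ref.shape)

    # Design matrix: one shifted copy of the reference per kernel pixel.
    cols = [np.roll(ref, (u - 2, v - 2), axis=(0, 1)).ravel()
            for u in range(5) for v in range(5)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, sci.ravel(), rcond=None)
    kernel = coeffs.reshape(5, 5)

    diff = sci - convolve2d(ref, kernel, mode="same")   # difference image
    print("residual rms:", diff.std())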

P15: XMM-Newton mobile web App

Ibarra, Aitor XMM-Newton SOC. ESAC/ESA
Kennedy, Mark XMM-Newton SOC. ESAC/ESA
Rodríguez, Pedro XMM-Newton SOC. ESAC/ESA
Hernandez, Cristina XMM-Newton SOC. ESAC/ESA
Saxton, Richard XMM-Newton SOC. ESAC/ESA
Gabriel, Carlos XMM-Newton SOC. ESAC/ESA

We present the first XMM-Newton mobile web application, coded using new web technologies such as HTML5 and the jQuery Mobile framework.

This new web application has been optimized for mobile devices and focuses on formatted content extracted directly from the XMM-Newton web pages.

Taking advantage of mobile device features such as GPS, gyroscope and accelerometer, we have developed new functionalities such as:

  • XMM-Newton Locator: Show where the XMM-Newton satellite is in the sky.
  • XMM-Newton Target: Show where the XMM-Newton satellite is pointing.

The application also includes a new dynamic graphical version of the target visibility checker. This functionality, coded in pure JavaScript, uses the Data-Driven Documents (D3) library to allow an instant check of the XMM-Newton visibility of a target.

The main goals of this development were to reach all kinds of handheld devices and operating systems while minimizing software maintenance. The application has therefore been developed as a mobile web application rather than as a more costly native application. New functionality will be added regularly.

P16: OpTIIX Mission Overview and Education/Public Outreach

Krueger, Tony STScI

The Optical Testbed and Integration on ISS eXperiment (OpTIIX) is a technology demonstration to design, develop, deliver, robotically assemble, and successfully operate an observatory on the International Space Station (ISS). An OpTIIX Education and Public Outreach (EPO) program is being designed to bring OpTIIX and its discoveries to amateur observers, students, educators, and the public. In addition, OpTIIX will be available to the professional community for additional tests using the assembled OpTIIX configuration.

OpTIIX will provide a very capable three-mirror anastigmat telescope. The primary mirror has a 1.45 meter aperture with six hexagonal, deformable segments. Detectors for imaging, fine guidance, and wavefront sensing are included in the telescope. The imaging camera has seven filters that cover visible wavelengths and one opaque filter position.

The Space Telescope Science Institute will serve as the OpTIIX Mission Operations Center. This poster will provide an overview of the OpTIIX mission and its EPO plans.

P17: Using WebGL to visualize results of numerical simulations

Le Fèvre, Jean-Paul CEA Irfu

WebGL is a new API, callable from JavaScript, which can be used inside any modern browser supporting the emerging HTML5 standard. It allows users to display 3D scenes in their favorite browser without having to install extra plug-ins. Moreover, users can interact easily and rapidly with the display in order, for instance, to change the point of view, to zoom in or out, or to render different parts of a volume. This technology turns out to be interesting for quickly verifying the results of numerical simulations in astrophysics or other fields of physics. It is also a worthwhile tool for enriching EPO web sites. At CEA/Irfu we have been experimenting with WebGL for one year. Our demos are presented on the poster.

P18: A New Administration Tool for SAADA

Michel, Laurent Observatoire Astronomique de Strasbourg
Mantelet, Grégory CDS
Werner, Emilie Observatoire Astronomique de Strasbourg
Motch, Christian Observatoire Astronomique de Strasbourg

Saada transforms a set of heterogeneous FITS files or VOTables of various categories (images, tables, spectra...) into a powerful database deployed on the Web, without writing code. Saada can mix data of various categories in multiple collections. Data collections can be linked to each other, creating relevant browsing paths and allowing data-mining-oriented queries. Saada supports four VO services (spectra, images, sources and TAP). Data collections can be published immediately after the deployment of the Web interface. The poster presents the new administration interface (beta) shipped with Saada databases. Key points are:
  • A better look and feel.
  • A good connection with the script mode.
  • An advanced interface to help with the VO publication of data collections (SIA/SSA/CS/TAP/ObsCore).

P19: The Past 15 Years of RVSAO

Mink, Jessica Smithsonian Astrophysical Observatory

Since the last paper describing the IRAF RVSAO radial velocity package was published almost 15 years ago, this collection of tasks dealing with wavelength-shifted spectra has continued to grow and evolve. Many changes have been made to improve robustness and accuracy in the XCSAO cross-correlation and EMSAO emission-line fitting tasks. Two new tasks, PXCSAO and PEMSAO, find radial velocities using the same methods, but save the results as task parameters, making them easier to use in CL scripts. Spectra can now be cross-correlated in pixel or wavelength space as well as velocity space. A major task, EQWIDTH, based on the IRAF BANDS task, computes equivalent widths in redshifted spectra. SUMSPEC, which rebins, normalizes, re-redshifts, sums, and/or stacks spectra, has proven to be useful in multi-fiber spectrograph reduction pipelines. LINESPEC lists spectra in a variety of user-selected formats, and several CL scripts plot spectra with labelled emission and absorption lines.
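
The principle behind XCSAO-style cross-correlation can be shown in miniature: on a logarithmic wavelength grid a Doppler shift is a constant pixel offset, so the peak of the cross-correlation against a template gives the velocity directly (the toy template and shift below are invented):

    import numpy as np

    c = 299792.458                                    # speed of light, km/s
    loglam = np.linspace(np.log(4000.0), np.log(7000.0), 4000)
    dv = (loglam[1] - loglam[0]) * c                  # km/s per pixel

    template = np.exp(-0.5 * ((loglam - np.log(5000.0)) / 5e-4) ** 2)
    shift_pix = 30                                    # true shift in pixels
    observed = np.roll(template, shift_pix)

    xcorr = np.correlate(observed - observed.mean(),
                         template - template.mean(), mode="full")
    lag = xcorr.argmax() - (len(template) - 1)
    print("recovered %.1f km/s (true %.1f)" % (lag * dv, shift_pix * dv))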

P20: Extending Iris, the VAO SED Analysis tool

Laurino, Omar SAO
Busko, Ivo STScI
Cresitello-Dittmar, Mark SAO
D'Abrusco, Raffaele SAO
Doe, Stephen SAO
Evans, Janet SAO
Pevunova, Olga NASA/IPAC

We present Iris, a tool developed by the Virtual Astronomical Observatory for building and analyzing Spectral Energy Distributions (SEDs). Iris was designed to be extensible, so that new components and models can be developed by third parties and then included at runtime. Iris can be extended in three different ways: new file readers allow users to integrate data in custom formats into Iris SEDs, and new models can be fitted to the data, in the form of template libraries for template fitting, data tables, or arbitrary Python functions. The interoperability-centered design of Iris and the Virtual Observatory standards and protocols can enable new science functionalities involving SED data. The Virtual Astronomical Observatory (VAO) was established as a partnership of the Associated Universities, Inc. and the Association of Universities for Research in Astronomy, Inc. The VAO is sponsored by the National Science Foundation and the National Aeronautics and Space Administration.

P21: Software toolkit for MEGARA, the Integral-Field Unit and Multi-Object Spectrograph for GTC

Pascual, Sergio UCM
Eliche-Moral, Carmen UCM
Villar, Victor UCM
Castillo, Africa UCM
Gruel, Nicolas CEFCA
Cardiel, Nicolas UCM
Carrasco, Esperanza INAOE
Gallego, Jesus UCM
Gil de Paz, Armando UCM
Sanchez-Moreno, Francisco M. UPM
Vilchez, Jose M IAA-CSIC

MEGARA is an optical Integral-Field Unit (IFU) and Multi-Object Spectrograph (MOS) designed for the GTC 10.4m telescope in La Palma. The MEGARA IFU mode will offer two different bundles, one covering 14" x 12" with a spaxel size of 0.685" and another one covering 10" x 8" with a spaxel size of 0.480". The MEGARA MOS will allow observing up to 100 objects in a region of 3.5' x 3.5' around the two IFU bundles.

We present here the software tools developed to prepare and process observations taken with MEGARA: the image simulator, the exposure-time calculator and the reduction pipeline.

P22: Migrating an in-operation space observatory data processing chain towards a SOA oriented architecture

Pérez Navarro, Óscar GMV AS
Vallejo Chavarino, Juan Carlos GMV AS
Pérez Moreno, Rubén Francisco GMV AS

XMM-Newton is an ESA X-ray cornerstone observatory mission, operating continuously with great success since 2000. A continuous migration effort is underway on the XMM Science Control System (XSCS), which is in charge of the L0/L1 level products. The main goal is to preserve the processing capabilities over the whole operational lifetime and beyond. A second goal is to simplify, modernize and ease control of the operational data flow, and a third is to allow external users and systems to interact with the operational side of the processing. These activities are carried out in coordination with the XSCS Proposal Handling side, the SAS Analysis System and the XSA mission archive activities, following a modular ground segment paradigm. Within this abstraction, every module can be migrated using different techniques, different base technologies and different calendar schedules. As a consequence, an in-operations replacement and overhaul of the whole set of subsystems is possible with no interruption to the data processing flow. An SOA architecture is being implemented to allow new access and commanding capabilities on top of the heritage internal core functionality. The processing chain software components are set up as individual functionalities with a clearly defined set of interfaces. The starting baseline is an existing three-tier application architecture:
  • The data access layer represents the very old core of the XSCS, where new handlers and interfaces have been implemented according to the new business logic, which resides in the current business layer.
  • The business layer has also been enhanced to bridge to the new XSCS presentation layer, where the logic of the services is implemented by reusing the infrastructure of the existing remote web Proposal Handler facilities. Moreover, it will also offer processing services to non-XSCS, external user-driven systems.
  • Thanks to these improvements, operational XSCS uplink and downlink (science monitor) systems, as well as non-XSCS systems, will enjoy broad and updated communication with the core legacy systems.
The initial prototype services are discovery of newly available L1 observation products and requests for operational archive data. Additional services on the uplink side are status monitoring of observations and remote execution of observation-editing facilities. Data flow monitoring tools are common to uplink and downlink.

These activities are being carried out in synergy with similar projects. In particular, this project benefits from the conclusions of GMV's modular-SOC studies and from data preservation and processing architectures for Earth Observation missions. Specifically, the results of these activities contribute to the EO Long Term Data Preservation (LTDP) guidelines, and follow the trends and recommendations identified by the LTDP studies carried out so far (e.g. service-oriented architectures, system-of-systems methodologies, etc.).

P23: New Spectral Analysis Software for Global Analysis of Broadband Line Surveys in MATLAB

Radhuber, Mary L. Emory University
Widicus Weaver, Susanna L. Emory University

New observatories with broadband spectral capabilities such as the Herschel Space Observatory, the Stratospheric Observatory for Infrared Astronomy, and the Atacama Large Millimeter Array necessitate the development of more robust and efficient spectral assignment methods. The broadband spectra obtained provide the opportunity to tightly constrain physical conditions in astronomical sources through the large number of transitions observed for a given molecule in a single dataset. However, these datasets also present a daunting analysis challenge, due to the vast amount of data and the presence of blended line features, that cannot be met by traditional line identification and rotation diagram analysis. To meet this new challenge, a program has been written in the MATLAB numerical computing environment. This new program has been designed to achieve simultaneous multi-molecule, multi-component fitting for an entire broadband spectrum. It improves on existing analysis methods in that it fits an observational spectrum iteratively, using a global analysis method. This global analysis includes global fitting for multiple molecules, and for single molecules with multiple density, temperature, and velocity components. The advantage of this global method over traditional line-by-line analysis is that it is less sensitive to the blended features that are so prevalent in many of these datasets. The current version of the program is limited to the local thermodynamic equilibrium (LTE) approximation, because radiative transfer information is not available for all of the complex molecules that are now routinely observed. However, an advantage of the LTE approximation is that its simplicity allows for rapid analysis of a line survey. Analyses of several synthetic surveys and observational spectra will be presented to demonstrate the speed and reliability of this new analysis software.
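
The essence of the global fit, in a Python sketch for brevity (the poster's implementation is in MATLAB): the whole band is modelled at once as a sum over molecules and shared velocity components, so blended features constrain the fit rather than confusing it. A real LTE model would compute line strengths from catalog data and an excitation temperature; here each "molecule" is just a list of rest frequencies.

    import numpy as np
    from scipy.optimize import least_squares

    freq = np.linspace(230.0, 231.0, 2000)                       # toy band, GHz
    lines = {"mol A": [230.2, 230.55], "mol B": [230.5, 230.8]}  # blended pair

    def model(p):
        amps, vlsr, width = p[:len(lines)], p[-2], p[-1]
        spec = np.zeros_like(freq)
        for amp, rest in zip(amps, lines.values()):
            for f0 in rest:
                fc = f0 * (1 - vlsr / 299792.458)    # Doppler-shifted centre
                spec += amp * np.exp(-0.5 * ((freq - fc) / width) ** 2)
        return spec

    truth = model(np.array([1.0, 0.6, 10.0, 0.01]))  # amps, v (km/s), width (GHz)
    data = truth + np.random.default_rng(5).normal(0, 0.05, freq.size)
    fit = least_squares(lambda p: model(p) - data, x0=[0.5, 0.5, 0.0, 0.02])
    print("fitted amplitudes, v_lsr, width:", np.round(fit.x, 3))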

P24: TheoSSA: A Virtual Observatory Service for Synthetic Stellar Spectra

Rauch, Thomas Institute for Astronomy and Astrophysics, Kepler Center for Astro and Particle Physics, Eberhard Karls University, Tübingen, Germany
Reindl, Nicole Institute for Astronomy and Astrophysics, Kepler Center for Astro and Particle Physics, Eberhard Karls University, Tübingen, Germany

In a collaboration between the German Astrophysical Virtual Observatory (GAVO) and AstroGrid-D, the German Astronomy Community Grid (GACG), we provide the registered VO service TheoSSA for the access to and calculation of stellar spectral energy distributions (SEDs). Presently, these calculations include opacities of H, He, C, N, O, Ne, and Mg.

We present examples of different aspects of a spectral analysis with VO tools, including quality control as well as automatic spectral classification.

P25: Implementing a Common Database Architecture at the CADC using CAOM-2

Redman, Russell O. National Research Council of Canada
Dowler, Pat National Research Council of Canada

Over the last year, significant progress has been made in implementing a common database architecture based on the Common Archive Object Model (CAOM-2). The first small archives have been successfully ingested into the new database structure, with the larger and more dynamic archives to follow over the next few months.

P26: Architecture of the LCOGT network scheduler

Saunders, Eric Las Cumbres Observatory Global Telescope
Lampoudi, Sotiria University of California, Santa Barbara
Walker, Zachary Las Cumbres Observatory Global Telescope
Becker, Michelle Las Cumbres Observatory Global Telescope

Las Cumbres Observatory Global Telescope (LCOGT) is developing a worldwide network of fully robotic optical telescopes dedicated to time-domain astronomy. Observatory automation, longitudinal spacing of the sites, and a centralised network scheduler enable a range of observing modes impossible with traditional manual observing from a single location. These include continuous coverage of targets across sites, simultaneous observing with multiple resources, and cadenced time-series monitoring without diurnal gaps. The network also provides resource redundancy, with the potential for observations to be rescheduled in response to changing weather conditions. The scheduling model supports a wide variety of observing programs, which typically have very different constraints, goals, contingencies and timescales.

Heterogeneous requests in a networked observing environment present specific, unusual challenges for telescope scheduling that do not arise with single-resource schedulers. Here, we discuss the design of the prototype LCOGT network scheduler. We outline the scheduler's modular architecture, describe the implementation of its components, and highlight its current and planned capabilities.

P27: The Software System for the AAO's HERMES Spectrograph

Shortridge, Keith Australian Astronomical Observatory
Farrell, Tony Australian Astronomical Observatory
Vuong, Minh Australian Astronomical Observatory
Birchall, Michael Australian Astronomical Observatory
Heald, Ron Australian Astronomical Observatory

The AAO's HERMES spectrograph will start operation in 2013. Its primary project will be a Galactic Archaeology survey (GALAH), which aims to reconstruct the early history of our Galaxy through precise measurements of the chemical abundances of one million stars. This paper describes some of the software aspects of the HERMES project: how it has evolved from the earlier AAO 2dF system, the extensive use of simulation for testing, the overall observing system, and the operation of the data reduction pipeline.

P28: Harmonize pipeline and archiving system: PESSTO@IA2 use case.

Smareglia, Riccardo INAF - OATs
Knapic, Cristina INAF - OATs
Molinaro, Marco INAF - OATs
Young, David Queen's University Belfast
Valenti, Stefano INAF - OAPd

The Italian Astronomical Archives Center (IA2) is a research infrastructure project that aims at coordinating different national and international initiatives to improve the quality of astrophysical data services. IA2 is now also involved in the PESSTO (Public ESO Spectroscopic Survey of Transient Objects) collaboration, developing a complete archiving system to store calibrated post-processed data (including sensitive intermediate products), a user interface to access private data, and Virtual Observatory (VO) compliant web services to access public fast-reduction data via VO tools. The archive system relies on the PESSTO Marshall to provide file data and the associated metadata output by the PESSTO data-reduction pipeline. To harmonize the object repository, data handling and archiving system, new tools are under development. These systems must interact closely without increasing the complexity of any single task, in order to improve the performance of the whole system, and must have a sturdy logic in order to perform all operations in coordination with the other PESSTO tools. MySQL replication technology and triggers are used to synchronize new data in an efficient, fault-tolerant manner. A general-purpose library is under development to manage data from raw observations through to final calibrated products, open to overriding for different sources, formats, management fields, and storage and publication policies. Configuration for the whole system is stored in a dedicated schema (no configuration files), but can be easily updated via a planned Archiving System Configuration Interface (ASCI).

P29: TOPCAT Visualisation Improvements

Taylor, Mark University of Bristol

TOPCAT is a widely used tool for manipulation of astronomical catalogues and other tables. In Version 4 the visualisation capabilities have been overhauled to deliver new and improved functionality. The new plotting model allows overplots of density maps, contours, scatter plots, analytic functions and more on various two- and three-dimensional axes including a selection of sky projections. Non-positional data characteristics can be visualised in numerous ways including error bars, vectors, ellipses, text labels and coding markers by colour, shape and size. The framework is extensible, facilitating addition of more options in the future either as part of the core package or as third-party plugins.

This provides a rich range of options for interactive investigation of small or large data sets. Tables of several million rows can be comfortably handled and meaningfully visualised. Plot appearance is highly configurable and includes features suited to publication-quality output such as optional LaTeX formatting of labels and configurable legend placement.

P30: NEOview: Near Earth Object Data Discovery and Query

Tibbetts, M. Smithsonian Astrophysical Observatory
Harbo, P. Smithsonian Astrophysical Observatory
Van Stone, D. Smithsonian Astrophysical Observatory
Zografou, P. Smithsonian Astrophysical Observatory

Missions to Near Earth Objects (NEOs) figure prominently in NASA's "Flexible Path" approach to human space exploration. NEOs offer possible insight into both the origins of the Solar System and of life, as well as a source of materials for future missions. NEOview is a software system that illustrates how standards-based interfaces facilitate NEO data discovery and research. With the NEOview framework, scientists can locate NEO datasets, explore metadata provided by the archives, and query or combine disparate NEO datasets in the search for NEO candidates for exploration.

NEOview software follows a client-server architecture. The server is a configurable implementation of the Table Access Protocol (TAP), a general interface for tabular data access, which can be deployed as a front end to existing NEO datasets. The TAP client, seleste, is a graphical interface which provides an intuitive means of discovering NEO providers, exploring dataset metadata to identify fields of interest, and constructing queries to retrieve or combine data. It features a powerful, graphical query builder capable of easing the user's introduction to RDBMS table joins.
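
Because TAP is plain HTTP, a client needs nothing more than a POST to a service's /sync endpoint. A hedged example (the service URL, table and column names are placeholders; only the REQUEST/LANG/QUERY parameters come from the TAP standard):

    import urllib.parse, urllib.request

    service = "http://example.org/tap/sync"        # placeholder TAP service
    adql = """SELECT TOP 10 name, epoch, a, e, i
              FROM neo.orbits
              WHERE e < 0.3 AND a BETWEEN 0.9 AND 1.1"""
    params = urllib.parse.urlencode({
        "REQUEST": "doQuery", "LANG": "ADQL",
        "FORMAT": "votable", "QUERY": adql,
    })
    with urllib.request.urlopen(service, params.encode()) as resp:
        votable = resp.read()                      # query result as a VOTable
    print(votable[:200])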

Through science use cases, NEOview has proven capable of finding NEO rendezvous targets by combining NEO data from complementary sources: orbits from the Minor Planet Center at the Smithsonian Astrophysical Observatory (SAO), lightcurves from the Asteroid Lightcurve Database at the Palmer Divide Observatory and spectra from the MIT-UH-IRTF Joint Campaign. Through deployment and operations, it has also shown that the software components are data independent and configurable to many different data servers. As such, NEOview's TAP server and seleste TAP client can be used to create a seamless environment for data discovery and exploration for tabular data in any astronomical archive.

P31: Introduction to the Astrophysical Multipurpose Software Environment (AMUSE)

van Elteren, Arjen Leiden University
Portegies Zwart, Simon Leiden University
McMillan, Steve Drexel University
Pelupessy, Inti Leiden University

Over the last several years we have been working on the development of a generalized tool for speeding up code development in computational astrophysics. Our framework, called the Astrophysics Multipurpose Software Environment (AMUSE), enables researchers and students to quickly write production-quality codes for performing simulations. In this talk we will introduce the motivation behind, the implementation history of, and the real-world use of AMUSE.

In computational astrophysics we have seen an exponential growth of simulation codes in the last 40 years. Many codes have grown from simple models, with small numbers of bodies or grid cells and limited physics, to large N and complex physical models. These codes have been used for theoretical experiments and model determination, and have been found to be stable and useful. Unfortunately, they have often been developed by one researcher or a small group of researchers, and often include only one physical model (such as stellar evolution, gravitational dynamics, radiative transfer or hydrodynamics), or are limited to a specific method (for example, smoothed particle hydrodynamics or grids). Many interesting, real-world astrophysical phenomena can only be modeled by combining different physical models and simulation methods. Several approaches are possible for combining them: implement additional physics in an existing code, build a combined-physics code from the ground up, or create a framework linking individual codes to form an integrated code. In the AMUSE project we have taken the last approach: more than 20 existing codes (called community codes in AMUSE) are combined using a framework written in Python and MPI. The AMUSE framework has evolved from a first prototype, written during two workshops by a small community effort (MODEST) more than 6 years ago, to the current implementation developed over the last 3 years.

In the AMUSE framework, codes are integrated by writing a set of functions for each code; this set of functions follows the same pattern for all codes, producing similar interfaces for codes doing similar physics. As a result, different codes that employ different simulation methods for the same physics become interchangeable: the same script can simulate a gravitational N-body problem using a direct code, a tree code or a modern hierarchical code. The similarity of the interfaces is also used in the framework to implement generic coupling strategies between codes, and hence between physical models. For example, we implemented a bridge class capable of coupling the gravitational field between N-body and SPH codes. We currently have a number of strategies for coupling different codes and are actively working on new ones. A primer that presents increasingly complex code combinations is in preparation, and we will show some examples from the primer as well as a real-world simulation of a small stellar cluster with gas and stellar evolution.
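
The interface pattern can be caricatured in a few lines of Python. This is not actual AMUSE code (class and method names are invented for illustration); it only shows why codes sharing an interface become interchangeable in a driver script:

    class GravityCode:
        # Common interface implemented by each community-code wrapper.
        def initialize(self, particles): ...
        def evolve_model(self, t_end): ...
        def get_positions(self): ...

    class DirectNBody(GravityCode):
        def initialize(self, particles): self.p = list(particles)
        def evolve_model(self, t_end): pass   # direct-summation integration here
        def get_positions(self): return [q[:3] for q in self.p]

    class TreeCode(GravityCode):
        def initialize(self, particles): self.p = list(particles)
        def evolve_model(self, t_end): pass   # Barnes-Hut integration here
        def get_positions(self): return [q[:3] for q in self.p]

    def run(code, particles, t_end=1.0):
        # The driver never needs to know which solver it is using.
        code.initialize(particles)
        code.evolve_model(t_end)
        return code.get_positions()

    stars = [(0.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 1.0)]   # x, y, z, mass
    for solver in (DirectNBody(), TreeCode()):              # interchangeable
        print(type(solver).__name__, run(solver, stars))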

P32: Building a distributed, scalable and fault-tolerant monitor and control system using Erlang: experiences from an FPGA based VLBI correlator

Verkouter, Harro Joint Institute for VLBI in Europe

The Erlang system has been around for over twenty years. It consists of a functional programming language with a runtime environment and an extensive ecosystem of utilities and libraries built, initially, for very reliable soft real-time hardware monitoring-and-control (MAC).

It was designed and built by Ericsson AB of Sweden to control telco-grade telephony switches. The fact that it was successful at this, achieving nine nines of uptime for that equipment, is indicative of what the system can do. The company behind the popular smartphone application WhatsApp runs Erlang on its servers to provide its services to millions of simultaneous users.

At the heart of the system lies the Erlang functional programming language. It is a small language by design, so that it is easy to learn. Yet it contains several useful features not generally found in other functional languages: transparent and lock-free multi-core as well as multi-machine concurrency, and a syntax for encoding and decoding binary data.

The Erlang runtime is responsible for the fault tolerance infrastructure, hot code swapping and good support for I/O to allow communication with other systems.

Over its twenty-years-and-counting lifetime, a set of libraries for building complex distributed systems has been added, as well as utilities for debugging and code analysis. The functional nature of the system allows easy generalization of oft-recurring MAC patterns: generalized servers and state machines are but two of these patterns.

The Joint Institute for VLBI in Europe (JIVE) is leading an international project called UniBoard, which has designed and produced a generic, high-performance, FPGA-based computing platform for radio astronomy. The development at JIVE of a VLBI correlator based on this board is currently ongoing.

A single UniBoard based system may contain up to 256 FPGAs which must be individually monitored and controlled. A suitable MAC must be able to scale up to this without problems.

The FPGAs are controlled using the unreliable UDP protocol so the MAC system must be prepared to deal with communication failures, even in the full system.

The protocol used to communicate with the FPGAs is a simple binary protocol, allowing 32-bit-wide reads and writes to the FPGA memory without any protection at all. This level of abstraction is far too crude for a complex MAC, so a more symbolic approach, with safeguards against accidentally overwriting bits, is desirable.
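
In Python for brevity (the actual system is written in Erlang, and the packet layout, register names and acknowledgement below are invented), such a symbolic layer might wrap raw 32-bit word writes in a named register map and handle UDP's unreliability with timeouts and retries:

    import socket, struct

    REGISTERS = {"correlator_enable": 0x0000, "integration_time": 0x0004}

    class FpgaClient:
        def __init__(self, host, port=5000):
            self.addr = (host, port)
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.sock.settimeout(1.0)    # UDP is unreliable: time out, retry

        def write(self, name, value, retries=3):
            # Symbolic names prevent writes to arbitrary memory addresses.
            payload = struct.pack("!II", REGISTERS[name], value)
            for _ in range(retries):
                self.sock.sendto(payload, self.addr)
                try:
                    ack, _ = self.sock.recvfrom(16)   # assumed acknowledgement
                    return ack
                except socket.timeout:
                    continue
            raise IOError("no response after %d attempts" % retries)

    # client = FpgaClient("fpga0.example.org")   # placeholder host
    # client.write("integration_time", 1024)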

The VLBI correlator can process data from up to 32 stations in total. The data for each telescope is typically stored on an individual file server, as the raw data sets can amount to several TB per station. Consequently, up to 32 file servers must be controlled to divide the data over the many FPGAs.

With this list of requirements it seemed that the Erlang system would make a very good fit.

In this talk I will present our experiences up until now and show that indeed it is a good fit.

P33: Data Intensive Science Doesn't Necessarily Imply Resource Driven Problem Solving

Vovchenko, Alexey The Institute of Informatics Problems of the Russian Academy of Sciences (IPI RAN)
Avvakumova, E. Department of Astronomy and Geodesy, Ural Federal University
Kalinichenko, L. The Institute of Informatics Problems of the Russian Academy of Sciences (IPI RAN)
Kaygorodov, P. Institute of Astronomy of the Russian Academy of Sciences (INASAN)
Malkov, O. Institute of Astronomy of the Russian Academy of Sciences (INASAN)
Skvortsov, N. The Institute of Informatics Problems of the Russian Academy of Sciences (IPI RAN)
Stupnikov, S. The Institute of Informatics Problems of the Russian Academy of Sciences (IPI RAN)

Astronomy, as a data intensive science, is becoming increasingly dependent on massive data resources that accumulate measurements obtained in different epochs and passbands by numerous instruments. Astronomers typically use a "resource-driven" approach to solving research problems: the definition of a problem starts with the selection of the specific data resources to be used for solving it, and problems are defined in terms of these preselected resources. This causes problem specifications and implementations to be tightly linked to the specific resources and their associated services and tools. Such an approach entails at least the following deficiencies:
  • a problem specification and implementation cannot be reused with another set of resources (e.g., ones created by applying new instruments);
  • the low level of data analysis method specifications prevents their accumulation for reuse and sharing in new research;
  • the domination of the resource-driven approach preserves the view of a computer as a "number crunching" device, instead of taking advantage of its knowledge-oriented capabilities.
The last deficiency is clearly visible in virtual observatories that have turned into data archives while neglecting to provide data analysis methods applicable to diverse classes of entities in the Universe, rather than only to the data entities in specific archive collections.

This paper attempts to draw the attention of the astronomical community to another, conceptual approach to problem domain modeling, which allows problems to be defined in terms of domain concepts and methods without mentioning specific resources and services. As an example, the problem domain of binary and multiple stars is considered. The domain draws on related knowledge from different areas (e.g., photometry, spectrometry) and defines concepts and specific methods for the description of binary and multiple systems. In the subdomain of eclipsing binaries, the problem of discovering stars of unusual evolutionary type is formulated over the domain specification. Other problems over the domain, such as the classification of eclipsing binaries and the creation of a data warehouse on binaries from heterogeneous databases, are also considered. It is shown how specific data resources and services can be mapped and integrated into the conceptual specification using the information mediation approach, so that executable specifications can be formed.

It is shown that the conceptual, "knowledge-driven" approach is fundamentally opposed to the resource-driven one. The conceptual approach relies on problem domain knowledge to allow a scientist to define a problem and its data analysis methods declaratively, thus avoiding the disadvantages of the resource-driven approach. It is expected that, by following the conceptual approach, reusable knowledge-based specifications for various problem domains can gradually be formed to synthesize theory, experiment and computation with advanced data management, in accordance with the principles of data intensive science.

P34: The MWA Archive - A Multi-tiered Dataflow and Storage System

    
Wu, Chen ICRAR, University of Western Australia
Wicenec, Andreas ICRAR, University of Western Australia
Pallot, Dave ICRAR, Curtin University
Checcucci, Alessio ICRAR, University of Western Australia

The Murchison Widefield Array (MWA) is a next-generation radio telescope, generating raw data continuously at 5 GB/s. The entire MWA archive consists of dataflow and storage sub-systems distributed across three tiers. At Tier 0 (MRO - Murchison Radio Observatory, Western Australia), station beam data is processed online, producing visibility streams at an aggregated rate of 384 MB/s. The visibility data is instantly captured, partitioned, and stored at the online archive facility. At Tier 1 (Perth, 700 km south of the MRO), the staging archive ingests visibility splits from Tier 0 (through either WAN transfer or disk transport) and hands them over to both long-term archiving and offline processing. A hierarchical storage management (HSM) system is deployed to balance storage cost/capacity, transfer bandwidth, and access latency at the long-term archive. A data-intensive HPC cluster (Fornax) is used for offline data processing, which schedules data movement from and to the long-term archive on demand. At Tier 2 (MIT, USA), the mirrored archive facility subscribes to specific data products from Tier 1, regularly ingesting updated data streams of the relevant data types. In this paper we examine the detailed dataflow design, software architecture and storage techniques of the MWA archive systems. We also discuss archiving performance, and illustrate how the archive system will meet the requirements of data retrieval and reprocessing for MWA science.

P35: 2MASS Catalog Server Kit version 2.1

    
Yamauchi, Chisato Astronomy Data Center, National Astronomical Observatory of Japan

2MASS Catalog Server Kit is software for easily constructing a high-performance search server for important astronomical catalogs. The kit utilizes the open-source RDBMS PostgreSQL; therefore, any user can set up the database on a local computer using the complete dataset and a step-by-step installation guide. The optimized stored functions for positional searches provided by the kit, together with the powerful SQL environment of PostgreSQL, will meet a wide range of user demands.
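
As a hedged illustration of what a positional query against such a local database could look like (the table and function names below are placeholders, not necessarily those shipped with the kit; consult its documentation for the actual stored functions):

    import psycopg2

    # Hypothetical cone search: 2 arcmin around (ra, dec) = (266.417, -29.008).
    conn = psycopg2.connect(dbname="twomass")
    cur = conn.cursor()
    cur.execute(
        """SELECT ra, dec, j_m, h_m, k_m
             FROM twomass_psc
            WHERE within_circle(ra, dec, %s, %s, %s)""",   # placeholder function
        (266.417, -29.008, 2.0 / 60.0))
    for row in cur.fetchall():
        print(row)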

We released 2MASS Catalog Server Kit version 2.1 in May 2012; it supports the latest WISE All-Sky catalog (563,921,584 rows) and nine further catalogs:
2MASS PSC (470,992,970 rows),
USNO-B1.0 (1,045,175,762 rows),
GSC-2.3.2 (945,592,683 rows),
UCAC3 (100,766,420 rows),
PPMXL (910,468,710 rows),
Tycho-2 (2,539,913 rows),
AKARI IRC PSC (870,973 rows),
AKARI FIS BSC (427,071 rows), and
IRAS PSC (245,889 rows). Local databases are needed by observatories with unstable or narrow-bandwidth networks, and for personal studies that use huge numbers of catalog entries; the 2MASS Kit is well suited to such purposes. For example, the CFHT observatory and some Japanese institutes/observatories employ our 2MASS Kit. Recently, users have often used our Hard Drive Copy Service, through which they can quickly obtain the latest 2MASS Kit database on a 3 TB hard drive. Please refer to http://www.ir.isas.jaxa.jp/~cyamauch/2masskit/ for details.

In this poster, we will present a description of the kit's easy-to-use SQL interface and its cost-effective software design.

P36: Submillimeter array data handling in Miriad

    
Zhao, Jun-Hui Harvard-Smithsonian CfA

We will describe the Miriad reduction process for Submillimeter Array observations. The software implementation of calibration for high-frequency data will also be discussed.

P37: Comparison of modern methods for calculation of effective temperature and bolometric corrections for stars

    
Zubarev, Sergey Ural Federal University, Yekaterinburg, Russia
Martyushev, Leonid Ural Federal University, Yekaterinburg, Russia

The calculation of stellar effective temperatures and bolometric corrections (T_eff-BC) is one of the problems of modern astrophysics, stemming from the limited range of applicability of the available empirical formulae based on photometric information. We perform an analysis of T_eff-BC as functions of either the color index B-V [Aller, 1976; Cameron, 1998; Soderblom, 1993; Torres, 2010; Valenti, 2005] or B-V together with the metallicity [Fe/H] [Ramirez, 2005; Sekiguchi, 2000]. The handbook T_eff-BC data for different star classes as functions of B-V (Allen, 1973) are used for additional verification of the approximation accuracy.

We show that determinations of the effective temperature using the methods of [Ramirez, 2005; Sekiguchi, 2000] are the best when [Fe/H] is known. The bolometric correction methods of Torres [2010] and Cameron [1998] are the best for Giants/Main-Sequence (MS) stars and for Supergiants, respectively. When [Fe/H] is unknown, the best relationship between effective temperature and B-V is given by Cameron's method [1998] for MS stars; by the Cameron and Soderblom methods [1998, 1993] for Giants; and by the Soderblom and Torres methods [1993, 2010] for Supergiants. Additional information about the areas of application of each method (constraints and the corresponding B-V intervals) is available on the webpage [http://starclusters.narod2.ru/].

Using the best method for each star class, we find that the maximum relative error is less than 0.22 at the borders of the B-V intervals, and less than 0.02 for MS stars with B-V from 0 to 1.45.

The proposed algorithm for T_eff-BC determination is integrated into the freely downloadable software Star Clusters [http://starclusters.narod2.ru/].

P38: Binocular observations with LUCI at the LBT: scheduling and synchronization

    
Pramskiy, Alexander Ruhr University Bochum, Germany
Polsterer, Kai Lars Ruhr University Bochum, Germany

LUCI is a pair of NIR spectrographs and imagers for the Large Binocular Telescope (LBT) working in the wavelength range 0.85-2.6 microns. Currently only one instrument is available at the LBT, so at this moment just one instrument needs to be controlled by the software. The LUCI software is a set of distributed Java applications based on the Remote Method Invocation (RMI) architecture. The second instrument will be installed at the LBT in the coming months; therefore the next software version has to support synchronized binocular observations with both LUCI instruments. An observation preparation software component will help with the planning of a scientific program for the binocular mode. It will run observation cycles synchronously for both telescope sides. A scheduler software component is used to process an observation queue, analyze setup requirements, and send a given setup to the telescope and both instruments; both synchronous and asynchronous processing are supported. The queue of the scheduler contains a sequence of execution tasks. These tasks can define abstract actions as well as concrete hardware setups. A sequence of tasks can be created and validated in the observation preparation tool. The scheduler sends setups to the telescope, both instruments, and both readout manager services. The manager services are software sub-components that are responsible for controlling the telescope, both instruments, and both detectors. Each manager converts top-level setup commands into instructions for the low-level software services that control the hardware directly. The managers are also responsible for syntactic and semantic validation of the setup properties. An additional requirement for the scheduler is to support the heterogeneous use of "non-LUCI" instruments on the second side of the telescope. In this case the queue of the scheduler contains tasks for LUCI and an external instrument, and the scheduler synchronously sends the required instructions to LUCI and the "non-LUCI" instrument. Currently a simulation mode is used for testing the different software components; it can also be used for verifying prepared observations before going to the telescope.

P39: A Mobile Data Application for the Fermi Mission

    
Stephens, Thomas E. Wyle ST&E/NASA Goddard

With the ever increasing use of smartphones and tablets among scientists and the world at large, it becomes increasingly important for projects and missions to have mobile-friendly access to their data. This access could come in the form of mobile-friendly websites and/or native mobile applications that allow users to explore or access the data. The Fermi Gamma-ray Space Telescope mission has begun work along the latter path.

In this poster I present the initial version of the Fermi Mobile Data Portal, a native application for both Android and iOS devices that provides access to various high-level public data products from the Fermi Science Support Center (FSSC), the Gamma-ray Coordinates Network (GCN), and other sources. While network access is required to download data, most of the data served by the app are stored locally and remain available even when a network connection is not. This poster discusses the application's features as well as the development experience and lessons learned so far along the way.

P40: Cross-Identification of Astronomical Catalogs on Multiple GPUs

    
Lee, Matthias A. Johns Hopkins University
Budavári, Tamás Johns Hopkins University

One of the most fundamental problems in observational astronomy is the cross-identification of sources. Observations are made at different wavelengths, at different times, and from different locations and instruments, resulting in a large set of independent observations. The scientific outcome is often limited by our ability to quickly perform meaningful associations between detections. The matching, however, is difficult scientifically and statistically as well as computationally. The former two require detailed physical modeling and advanced probabilistic concepts; the latter is due to the large volumes of data and the problem's combinatorial nature. In order to tackle the computational challenge and to prepare for future surveys, whose measurements will grow exponentially in size past the scale of feasible CPU-based solutions, we developed a new implementation that addresses the issue by performing the associations on multiple Graphics Processing Units (GPUs). Our implementation utilizes up to 6 GPUs in combination with the Thrust library to achieve a more than 40x speedup versus the previous best implementation running on a multi-CPU SQL Server.

P41: Better Living Through Metadata: Examining Observatory Archive Usage

    
Becker, Glenn Smithsonian Astrophysical Observatory
Winkelman, Sherry Smithsonian Astrophysical Observatory
Rots, Arnold Smithsonian Astrophysical Observatory

The primary purpose of an observatory's archive is to provide access to the data through various interfaces. User interactions with the archive are recorded in server logs, which can be used to answer basic questions like: Who has downloaded dataset X? When did she do this? Which tools did she use? The answers to questions like these fill in patterns of data access (e.g., how many times dataset X has been downloaded in the past three years). Analysis of server logs yields metrics of archive usage and provides feedback on interface use that can guide future interface development.
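
As a toy illustration of the kind of bookkeeping involved (the log format here is invented; real server logs require per-backend parsing), counting downloads and downloaders per dataset is a few lines of Python:

    from collections import Counter

    # Invented log format: each line is "<timestamp> <user> <tool> <dataset>".
    downloads = Counter()
    users = {}
    with open("downloads.log") as log:
        for line in log:
            timestamp, user, tool, dataset = line.split()
            downloads[dataset] += 1
            users.setdefault(dataset, set()).add(user)

    # Who has downloaded dataset X, and how many times?
    print(users.get("obsid_00123", set()), downloads["obsid_00123"])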

The Chandra X-ray Observatory is fortunate in that a database to track data access and downloads has been continuously recording such transactions for years. Created early in the mission, the Chandra system consists of a small number of Perl scripts that copy and analyze logfiles from a range of server backends, and which update the database tables accordingly. Although this system continues to provide useful metrics, the scripts have not kept pace with updates to the logs, putting us in a position to ask: what additional information can we obtain from the logs, and how can we adapt our current system to address these new requirements?

We will detail changes we hope to effect, and the differences the changes may make to our usage metadata picture. We plan to gather more information about the geographic location of users without compromising privacy; create improved archive statistics; and track and assess the impact of web "crawlers" and other scripted access methods on the archive. With the improvement to our downloads tracking we hope to gain a better understanding of the dissemination of Chandra's data, how effectively it is being done, and perhaps discover ideas for new services.

This work is supported by NASA contract 8-03060.

P42: Software and Computing at LWA1

    
Dowell, Jayce University of New Mexico

The first completed station of the Long Wavelength Array, LWA1, is currently operating in New Mexico in the 10 to 88 MHz frequency range. LWA1 consists of 258 crossed-polarization dipole pairs. These pairs are combined within the digital processing electronics to provide raw time-series voltages from all dipoles as well as four independently steerable delay-and-sum beams. These two modes support a variety of scientific research, including studies of the decametric emission from Jupiter at high temporal and spectral resolution, searches for the first stars through their influence on the HI spin temperature, and searches for radio transients.

In order to take advantage of the flexibility of the voltage data, and to handle the complexities of large-N and large data volumes from the telescope, we have developed the Long Wavelength Array Software Library (LSL). LSL is distributed as a Python module and includes routines that allow observers to convert the data into the frequency domain and to apply a variety of signal processing techniques, including post-acquisition beam forming from the all-dipoles data and removal of broadband RFI from the time-domain data. In addition to the core functionality provided with LSL, there are also four extensions to the module that provide more specific functions. For example, the GPU extension aims to increase the speed at which data can be analyzed by running part of the signal processing on GPUs.
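
As a conceptual sketch of what post-acquisition beam forming from all-dipoles data involves (this is not the LSL API, just a frequency-domain delay-and-sum in NumPy under simplified assumptions):

    import numpy as np

    def delay_and_sum(voltages, delays, rate):
        """Form one beam from per-dipole time series by applying each
        dipole's geometric delay as a phase ramp in the frequency domain
        and summing coherently.

        voltages: (ndipoles, nsamples) raw time series
        delays:   per-dipole delays in seconds, shape (ndipoles,)
        rate:     sample rate in Hz
        """
        ndip, nsamp = voltages.shape
        freqs = np.fft.rfftfreq(nsamp, d=1.0 / rate)
        spectra = np.fft.rfft(voltages, axis=1)
        phases = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
        beam = (spectra * phases).sum(axis=0)
        return np.fft.irfft(beam, n=nsamp)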

LSL will also serve as the analysis software base available on the LWA1 User Computing Facility. The User Computing Facility is a cluster of six machines designed for signal processing applications. Each node sports a hexacore processor and two GPUs, and will be connected to the LWA1 via a 10 Gb/s link. Once connected, this will enable a variety of real-time and off-line processing options for the LWA1 data streams as well as serve as a test bed for correlating data from other LWA stations with LWA1.

Construction of the LWA has been supported by the Office of Naval Research under Contract N00014-07-C-0147. Support for operations and continuing development of the LWA1 is provided by the National Science Foundation under grant AST-1139974 of the University Radio Observatory program.

P43: Connecting the ALMA Observatory site, in northern Chile, with the ALMA central offices, in Santiago, by means of an optical link.

    
Ibsen, Jorge ALMA, Alonso de Cordova 3107, Vitacura, Santiago, Chile
Filippi, Giorgio ALMA, Alonso de Cordova 3107, Vitacura, Santiago, Chile
Liello, Fernando Consortium GARR, Via dei Tizii 6, Roma, Italy
Jaque, Sandra REUNA, Canadá 239, Providencia - Santiago de Chile, Chile

The Atacama Large Millimeter/submillimeter Array (ALMA), an international partnership of Europe, North America and East Asia in cooperation with the Republic of Chile, is the largest astronomical project in existence. ALMA is initially composed of 66 high-precision antennas located on the Chajnantor plateau, at 5000 meters altitude in northern Chile. The ALMA central offices are located in Santiago, about 1400 km from the observatory. The project presented in this poster aims to create a gigabit-capable communication infrastructure, based on dedicated optical links operated with DWDM technology, linking the two locations and able not only to cope with current and projected data transfer needs, but also to support all present and foreseeable future communication requirements, such as virtual presence and remote activities. This new infrastructure will build on the existing EVALSO infrastructure, which provides the academic network backbone between Antofagasta and Santiago and is operated by REUNA, the Chilean National Research and Academic Network (NREN). The system is expected to deliver a dedicated 2.5 Gbps channel end to end and to be in operation within the first quarter of 2014.

P44: TAPVizieR: a new way to access the VizieR database

    
Landais, Gilles CDS, Strasbourg Observatory
Ochsenbein, Francois CDS, Strasbourg Observatory

VizieR is a component of the Virtual Observatory: it provides catalogue tables to external software using VO standards such as the VOTable output. Access to VizieR through the ADQL/TAP standard represents a new milestone in VizieR's accessibility.

The TAP implementation in VizieR had to be adapted to the heterogeneity and the huge volume of the VizieR contents.

The PostgreSQL engine has been chosen to store the data; it provides a solid database with utilities for managing SQL and for creating the customized HEALPix indexing "H3C", which permits fast access by sky coordinates. The TAP standard, however, was not designed to accommodate databases managing tens of thousands of tables like VizieR, and some compromises with the TAP standard were necessary in this first version of TAPVizieR.
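
The general idea behind such HEALPix-style indexing can be sketched as follows (this is generic, not the actual "H3C" scheme, and the column and table names are invented):

    import healpy as hp

    nside = 1024                      # resolution of the positional index
    ra, dec = 210.8, 54.3             # search position, degrees

    # Index of the pixel containing the position, plus its neighbours,
    # which together cover a small search region around it.
    pix = hp.ang2pix(nside, ra, dec, nest=True, lonlat=True)
    neigh = hp.get_all_neighbours(nside, ra, dec, nest=True, lonlat=True)
    pixels = [int(pix)] + [int(p) for p in neigh if p >= 0]

    # The indexed column turns a positional search into a cheap IN-list;
    # an exact distance test is then applied to the few rows returned.
    sql = ("SELECT * FROM some_table WHERE hpx_index IN (%s)"
           % ",".join(str(p) for p in pixels))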

P45: A GPU-based visualization method for computing the dark matter annihilation signal

    
Yang, Lin Johns Hopkins University
Szalay, Alex Johns Hopkins University

We present a novel GPU-based visualization method for computing the dark matter annihilation signal in cosmological dark matter simulations. The technique increases the rendering speed by more than a factor of 1,000. In a previous study, using a code running on regular CPUs, each particle's contribution was explicitly calculated pixel by pixel over a HEALPix map, then remapped onto a Mollweide projection. For the Via Lactea II simulation (~400M particles), a single-threaded CPU (~3 GHz) code takes more than 7 hours to compute an all-sky map with a resolution of nside=512. Each particle is weighted by its local density before the flux accumulation, and a projected radial density profile is applied. Our method is based on a separate stereographic projection for each hemisphere and a hardware-accelerated rendering pipeline on a GPU (OpenGL). We project the particles, instead of the celestial sphere, onto the tangent plane, with a skewed flux profile appropriate for the stereographic projection. OpenGL's Point Sprite feature and shader language allow us to render these eccentric circular flux profiles at a rate of more than 10M particles per second. The new method can process a single snapshot of the Via Lactea II data in less than 1 minute with a single NVIDIA GTX 480 GPU, including I/O, with an effective rendering time of less than 24 seconds. Using an approximate normalization for the flux, accurate to 2.5% in total flux, the rendering can be done in less than 13 seconds. The stereographic images corresponding to the two hemispheres are then warped into an all-sky image in the Mollweide projection, and are in perfect agreement with the result from the regular CPU code at the same resolution. The overall speedup is remarkable.
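
For readers unfamiliar with the projection step, a minimal NumPy sketch of a per-hemisphere stereographic projection is given below (illustrative only: the flux-profile weighting, point sprites, and OpenGL pipeline of the actual method are omitted):

    import numpy as np

    def stereographic(xyz):
        """Project unit vectors with z > 0 onto the plane tangent at the
        north pole, projecting from the south pole: (u, v) = 2(x, y)/(1+z)."""
        x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
        return np.column_stack([2.0 * x / (1.0 + z), 2.0 * y / (1.0 + z)])

    # Example: random particles on the unit sphere, northern hemisphere only.
    p = np.random.randn(100000, 3)
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    uv = stereographic(p[p[:, 2] > 0])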

P46: Efficient Catalog Matching with Dropout Identification

    
Fan, Dongwei Johns Hopkins University
Budavári, Tamás Johns Hopkins University

Source catalogs extracted from astronomy images come with sky coverage information from the original exposures. The detections and the coverage together capture all the critical information in the images; neither is complete without the other. We present a novel method for catalog matching that inherently builds on sky coverage. A modified version of the Zones algorithm is introduced for matching partially overlapping observations, where irrelevant parts of the data are excluded up front for efficiency. Our design enables searches to focus on specific areas of the sky to further speed up the process. Another important advantage of the new method over traditional techniques is its ability to quickly identify dropouts, i.e., sources that lie in the observed regions of the celestial sphere but did not reach the detection limit. These often provide invaluable insight into the spectral energy distribution of matched sources, but are rarely available in traditional associations.
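
The flavour of the underlying Zones idea can be conveyed with a small sketch (illustrative only: the authors' modified version additionally exploits coverage information and identifies dropouts, and a real implementation would also partition in RA):

    import numpy as np

    def angular_sep(ra1, dec1, ra2, dec2):
        """Great-circle separation in degrees (haversine formula)."""
        r1, d1, r2, d2 = np.radians([ra1, dec1, ra2, dec2])
        a = (np.sin((d2 - d1) / 2.0) ** 2
             + np.cos(d1) * np.cos(d2) * np.sin((r2 - r1) / 2.0) ** 2)
        return np.degrees(2.0 * np.arcsin(np.sqrt(a)))

    def zone_match(ra1, dec1, ra2, dec2, radius):
        """Bucket catalog 2 into declination zones at least as tall as the
        match radius; candidates for a source then lie only in its own
        zone and the two adjacent ones."""
        zones = {}
        for j, d in enumerate(dec2):
            zones.setdefault(int((d + 90.0) // radius), []).append(j)
        matches = []
        for i, (r, d) in enumerate(zip(ra1, dec1)):
            z = int((d + 90.0) // radius)
            for zz in (z - 1, z, z + 1):
                for j in zones.get(zz, []):
                    if angular_sep(r, d, ra2[j], dec2[j]) <= radius:
                        matches.append((i, j))
        return matches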

P47: Vissage: an ALMA-VO Desktop Application

    
Kawasaki, Wataru National Astronomical Observatory of Japan
Eguchi, Satoshi National Astronomical Observatory of Japan
Shirasaki, Yuji National Astronomical Observatory of Japan
Komiya, Yutaka National Astronomical Observatory of Japan
Kosugi, George National Astronomical Observatory of Japan
Ohishi, Masatoshi National Astronomical Observatory of Japan
Mizumoto, Yoshihiko National Astronomical Observatory of Japan

One year after the start of scientific operations based on the selected observing proposals, the Atacama Large Millimeter/submillimeter Array (ALMA) will soon begin releasing its results to the public, in addition to the Science Verification data that are already available. ALMA's unprecedented performance, especially once it reaches its full specifications in the near future, leads us to the obvious expectation that ALMA data products, even individual ones, could be as large as terabyte scale.

We therefore developed a new mechanism and software to cope with such huge datasets within the framework of the Virtual Observatory system. Our project covers both the server side and the desktop side: the former has been implemented as new functionality of the Japanese Virtual Observatory (JVO) ALMA Data Service, including the ALMA Web Quick Look System (Web QL; cf. Eguchi et al.'s talk and Shirasaki et al.'s demo at this conference).

For the latter, we would like to introduce Vissage (VISualisation Software for Astronomical Gigantic data cubEs), a brand-new FITS data cube browser. Vissage is now available from the JVO portal site and runs on all platforms with JRE (Java Runtime Environment) 6 or newer. A front-end program is provided as well for Windows users, enabling launch via drag and drop onto a shortcut and setting an appropriate heap size for the Java VM.

Vissage currently offers basic functionality for viewing several major two-dimensional representations of a data cube, including the integrated intensity map, moment maps, channel maps, position-velocity diagram, and so on. It is also tightly connected to the JVO ALMA Data Service and ALMA Web QL, helping users seamlessly search for and obtain the datasets most appropriate for their scientific purposes. In addition to ALMA data downloaded from JVO as the primary target, data from other telescopes spanning the full wavelength range are planned to be viewable as well in the near future.

In this paper, we will describe the present development status of Vissage, its aims and future plans.

P48: Automating Plug-Plate Configuration for SAMI

    
Lorente, Nuria P. F. Australian Astronomical Observatory (AAO)

The Sydney-AAO Multi-object Integral field spectrograph (SAMI) is a prototype wide-field system at the Anglo-Australian Telescope (AAT) deploying 13 x 61-core imaging fibre bundles (hexabundles) over a 1-degree field of view. The hexabundles, together with ancillary sky and calibration fibres, are mounted on a plug plate located at the prime focus of the telescope. Each plate is pre-drilled with holes corresponding to the on-sky positions of targets for typically 3 observations.

The process of determining the positions of the plate holes involves defining 3 stacked observing fields, each consisting of a guide star placed at the centre of the plate and 13 prioritised targets located in the resulting 1-degree field of view, taking into account separation constraints between targets in the same field and in the two other fields (simplistically, plug-holes should not overlap). 26 blank-sky positions are then allocated to each field, with the additional constraint that only 26 sky holes are to be drilled in the plate, so the sky positions of the 3 fields must map to the same physical holes.
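
The simplest of these constraints, hole overlap, amounts to a minimum-separation test over the drilled positions, roughly as in the following sketch (a simplified stand-in for the real validation, which works on sky coordinates across all three stacked fields):

    import numpy as np

    def holes_clash(xy_mm, min_sep_mm):
        """Return True if any two plate holes are closer than min_sep_mm.

        xy_mm: (nholes, 2) array of hole positions on the plate in mm."""
        xy = np.asarray(xy_mm, dtype=float)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)      # ignore self-distances
        return bool((d < min_sep_mm).any())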

As part of a prototype project this configuration process has until now been a painstaking task involving the use of several software packages, scripts, stand-alone code and a lot of manual configuration and checking. As SAMI evolves from a technology demonstrator to a survey instrument with an expected observing catalogue of several thousand targets, this approach will no longer be feasible, for reasons of both efficiency and the increased likelihood of error.

In this paper we present an automated process for configuring SAMI plates. This consists of a C++ layer which carries out the optimisation of target and sky positions for each field and plate, and applies the required atmospheric and telescope models to convert between sky and plate positions. This process is controlled by a Java layer which also provides visualisation of the process to the user by means of Aladin, driven using SAMP and using VOTables as the data transport mechanism. The aim is to take away the tedium of plate configuration, whilst giving the user control over the process, by presenting them with a way of easily checking the validity of an automatically generated plate and allowing them to drive subsequently finer configuration cycles until a satisfactory plate configuration is achieved.

P49: Early photometric studies for EUCLID

    
Kuemmel, Martin Universitaets-Sternwarte Muenchen

Euclid is a medium-class mission candidate for launch in 2019 in the Cosmic Vision 2015-2025 programme. Euclid will investigate the distance-redshift relationship and the evolution of cosmic structures by measuring the shapes and redshifts of galaxies and clusters of galaxies. Data from its two instruments, the Visible Imaging Channel (VIS) and the Near IR Spectrometer and imaging Photometer (NISP), will be merged with ground-based imaging data from large surveys such as the Dark Energy Survey (DES). In this contribution we discuss the strategies and concepts for achieving the necessary high-precision photometry on the entire dataset. We also present the results of a preliminary study to replace the 'traditional photometry' based on co-added images with measurements taken on the ensemble of individual images.

P50: Overview of the SOFIA Data Cycle System: An integrated set of tools and services for the SOFIA General Investigator

    
Shuping, Ralph Space Science Inst./USRA-SOFIA
Vacca, William USRA-SOFIA
Lin, Lan USRA-SOFIA
Sun, Li USRA-SOFIA
Krzaczek, Robert Rochester Inst. of Technology

The Stratospheric Observatory for Infrared Astronomy (SOFIA) is an airborne astronomical observatory comprising a 2.5 meter infrared telescope mounted in the aft section of a Boeing 747SP aircraft that flies at operational altitudes between 37,000 and 45,000 feet, above 99% of atmospheric water vapor. During routine operations, a host of instruments will be available to the astronomical community, including cameras and spectrographs in the near- to far-IR, a sub-mm heterodyne receiver, and a high-speed occultation imager. One of the challenges for SOFIA (and all observatories in general) is providing a uniform set of tools that enable the non-expert General Investigator (GI) to propose, plan, and obtain observations using a variety of very different instruments in an easy and seamless manner. The SOFIA Data Cycle System (DCS) is an integrated set of services and user tools for the SOFIA Science and Mission Operations GI Program designed to address this challenge. Program activities supported by the DCS include:
  • proposal preparation and submission by the GI
  • proposal evaluation by the telescope allocation committee and observatory staff
  • Astronomical Observation Request (AOR) preparation and submission by the GI
  • observation and mission planning by observatory staff
  • data processing and archiving
  • data product distribution
In this poster we present an overview of the DCS concepts, architecture, and user tools that are (or soon will be) available in routine SOFIA operations. In addition, we present experience from the SOFIA Basic Science program, and planned upgrades.

P51: New Organizations to Support Astroinformatics and Astrostatistics

    
Feigelson, Eric D. Pennsylvania State University
Ivezic, Zeljko University of Washington
Hilbe, Joseph Arizona State University

Astronomers are increasingly turning attention to advanced methodologies for data and science analysis, particularly for large-scale surveys. The use of sophisticated statistical and computational methods is the purview of the cross-disciplinary fields astroinformatics and astrostatistics involving collaboration of astronomers, computer scientists and statisticians. We describe here four new organizations that are now emerging to support progress in these areas:

  1. American Astronomical Society Working Group in Astroinformatics and Astrostatistics (Z. Ivezic, Chair)
  2. International Astronomical Union Working Group in Astrostatistics and Astroinformatics (E. Feigelson, Chair)
  3. International Astrostatistics Association affiliated with the International Statistical Institute (J. Hilbe, President)
  4. Astrostatistics and Astroinformatics Portal (E. Feigelson & J. Hilbe, editors)
These organizations will promote the use of known advanced statistical and computational methods in astronomical research, encourage the development of new procedures and algorithms, organize multi-disciplinary meetings, and provide educational and professional resources to the wider community. AAS members are encouraged to browse the Portal (http://asaip.psu.edu) and join one or more of these organizations.

P52: Formal semantics to model experimental data

    
Viallefond, Francois LERMA Observatoire de Paris

Modeling data by transforming the objects defined in our human language into mathematical objects leads to robust, efficient and expressive applications for data processing in information systems. In this approach we find that the overall structure modeling the measurement sets acquired at the telescopes of observatories is a simplicial complex with an inner hexagonal shape. This structure is ubiquitous, allowing all the steps of a physical experiment to be described.

During the development of an application that transforms models described in XMLSchema into C++ classes, we realized that this structure also organizes the concepts defined in the XMLSchema language itself. The following presents a diagram of this structure; this XMLSchema example gives insight into its meaning. We propose connections with Borromean logic (a tri-partition in sets) and with the identity type in type theory.

P53: OpTIIX Data Management System

    
Swade, Daryl Space Telescope Science Institute
Krueger, Tony Space Telescope Science Institute

The Optical Testbed and Integration on ISS eXperiment (OpTIIX) is a technology demonstration to design, develop, deliver, robotically assemble, and successfully operate an observatory on the International Space Station (ISS). An OpTIIX Education and Public Outreach (EPO) program is being designed to bring OpTIIX and its discoveries to amateur observers, students, educators, and the public. In addition, OpTIIX will be available to the professional community for additional tests using the assembled OpTIIX configuration.

OpTIIX will provide a very capable three-mirror anastigmatic telescope. The primary mirror has a 1.45 meter aperture with six hexagonal, deformable segments. Detectors for imaging, fine guidance, and wavefront sensing are included in the telescope. The imaging camera has seven filters that cover visible wavelengths and one opaque filter position.

The Space Telescope Science Institute will serve as the OpTIIX Mission Operations Center. Within the Mission Operations Center data will be formatted, calibrated, and archived. A data pipeline will combine multiple dithered exposures through the same filter, as well as combine multiple dither products through different filters to produce color images. All data will be publicly accessible through a Mikulski Archive for Space Telescopes (MAST) EPO portal.

P54: HST Cycle 21 Exposure Time Calculator

    
Diaz, R.I. Space Telescope Science Institute, 3700 San Martin Dr, Baltimore MD 21218
Laidler, Victoria G. Computer Sciences Corporation at STScI, 3700 San Martin Dr, Baltimore MD 21218
Busko, Ivo Space Telescope Science Institute, 3700 San Martin Dr, Baltimore MD 21218
Davis, Matt Space Telescope Science Institute, 3700 San Martin Dr, Baltimore MD 21218
Hanley, Chris Space Telescope Science Institute, 3700 San Martin Dr, Baltimore MD 21218
Sienkiewicz, Mark Space Telescope Science Institute, 3700 San Martin Dr, Baltimore MD 21218
Sontag, Chris Space Telescope Science Institute, 3700 San Martin Dr, Baltimore MD 21218
York, Brian Space Telescope Science Institute, 3700 San Martin Dr, Baltimore MD 21218

We will present the most recent updates that will be part of ETC 21.1. We will discuss the end results, new models, updates to the calibration spectra, and the ETC warning system, which helps observers identify problems with their observations and make better decisions on how to achieve their science objectives. We also touch on some of the future improvements being considered for the next cycle.

P55: New Probabilistic Galaxy Classification in Large Photometric Surveys

    
Liang, Feng Department of Statistics, University of Illinois
Brunner, Robert Department of Astronomy, University of Illinois

A number of different projects have mapped or soon will map the sky, in part to better constrain the cosmological parameters driving the evolution of our Universe. One of the most important and least quantified steps in this process is the task of efficiently identifying galaxies within these large data sets. Generally, simple parameter cuts have been used, for example the SDSS cut on the difference between the PSF and model magnitudes in the r-band. While this approach can be efficiently implemented and is easy to understand, it has been shown to be ineffective at brighter magnitudes than originally suspected. In response, we are applying powerful statistical techniques such as support vector machines and non-parametric Bayesian clustering to this challenge, with the goal of developing a probabilistic galaxy classification that can be reliably extended to fainter magnitudes, thereby increasing the precision of future cosmological measurements.
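
For concreteness, a minimal scikit-learn sketch of a probabilistic SVM classifier on a single photometric feature is shown below (the feature and the synthetic training data are invented for illustration; they are not the paper's training set):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Synthetic psfMag - modelMag values: ~0 for stars, larger for galaxies.
    stars = rng.normal(0.0, 0.03, (500, 1))
    galaxies = rng.normal(0.3, 0.15, (500, 1))
    X = np.vstack([stars, galaxies])
    y = np.r_[np.zeros(500), np.ones(500)]       # 0 = star, 1 = galaxy

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, y)
    print(clf.predict_proba([[0.15]]))           # [P(star), P(galaxy)]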

P56: PAL - A Positional Astronomy Library

    
Jenness, Tim Joint Astronomy Centre
Berry, David S. Joint Astronomy Centre

PAL is a new positional astronomy library written in C that attempts to retain the SLALIB API but is distributed under an open-source GPL license. The library depends on the IAU SOFA library wherever a SOFA routine exists and uses the most recent nutation and precession models. Currently about 100 of the 200 SLALIB routines are available. Interfaces are also available from Perl and Python. PAL is freely available via GitHub.
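
A hedged usage sketch, assuming the Python binding ("palpy") follows the SLALIB-style naming that PAL retains (check the PAL/palpy documentation for the routines actually exposed):

    import math
    import palpy as pal   # assumed binding name

    ra = math.radians(266.41683)    # J2000 equatorial coordinates in radians
    dec = math.radians(-29.00781)
    gl, gb = pal.eqgal(ra, dec)     # equatorial -> galactic, SLALIB-style name
    print(math.degrees(gl), math.degrees(gb))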

P57: Quantifying Systematic Effects on Galaxy Clustering

    
Wang, Y. Department of Astronomy, University of Illinois, 1002 W. Green St., Urbana, IL 61801, USA
Brunner, R. J. Department of Astronomy, University of Illinois, 1002 W. Green St., Urbana, IL 61801, USA

We present techniques for quantifying the effects of observational systematics on galaxy clustering measurements from large photometric surveys. These techniques can leverage both pixelized and point-based systematics, and can be quickly calculated for large data volumes as a function of observational pattern and galactic coordinate. The actual measurements are performed via a correlation function, either in pixel space or in real space. As a demonstration, we present a measurement of the systematic effects of seeing, extinction, and stellar density on the SDSS DR7 photometric galaxy clustering signal. We conclude with a discussion of how this work can be extended to future surveys such as DES and LSST.

P58: Data reduction pipeline for the MMT Magellan Infrared Spectrograph

    
Chilingarian, Igor SAO/CfA
Brown, Warren SAO/CfA
Fabricant, Daniel SAO/CfA
McLeod, Brian SAO/CfA
Roll, John SAO/CfA
Szentgyorgyi, Andrew SAO/CfA

We describe a new spectroscopic data pipeline for the MMT/Magellan Infrared Spectrograph (MMIRS). MMIRS can operate at the f/5 foci of either the MMT or the Magellan Clay 6.5m telescopes. MMIRS addresses a 4 by 7 arcminute field of view for multi-object spectroscopy and is equipped with a 2Kx2K HAWAII-2 array. The pipeline handles data obtained in multi-slit and single-object long-slit modes. All of the pipeline blocks are implemented in IDL, with the exception that up-the-ramp fitting of a sequence of raw frames is performed in a C++ routine. Up-the-ramp fitting allows us to reject cosmic ray events and to correct non-linearity and saturated pixels. The most sophisticated algorithm is sky subtraction, where we take a hybrid approach that uses both the "classical" dithered difference-image approach and a modified version of the Kelson (2003) sky subtraction technique. Our tests show that the pipeline comes close to Poisson-limited sky subtraction quality. The final data products include flux-calibrated 2D and extracted 1D spectra corrected for telluric absorption. Data files are made available as classical "stacked spectra" and in the Euro3D-FITS format with Virtual Observatory compliant metadata. We will describe the principal components of the pipeline and present examples of fully reduced scientific data.
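
For readers unfamiliar with up-the-ramp fitting, a simplified NumPy sketch of the core per-pixel slope fit is given below (the MMIRS C++ routine additionally rejects cosmic-ray jumps and corrects non-linearity and saturation):

    import numpy as np

    def up_the_ramp_slope(ramp):
        """Least-squares slope per pixel from non-destructive reads.

        ramp: (nreads, ny, nx) array of raw counts.
        Returns counts per read interval for each pixel."""
        nreads = ramp.shape[0]
        t = np.arange(nreads, dtype=float)
        t_c = t - t.mean()
        d_c = ramp.astype(float) - ramp.mean(axis=0)
        return (t_c[:, None, None] * d_c).sum(axis=0) / (t_c ** 2).sum()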

P59: Science Mining and Characterization of ALMA Large Data Cubes

    
Teuben, Peter University of Maryland
Ip, Cheuk Yiu University of Maryland
Mundy, Lee University of Maryland
Varshney, Amitabh University of Maryland

We are using Multilevel Segmentation of Intensity-Gradient Histograms and Description Vectors to show unique ways to visualize and analyse complex structures in large ALMA data cubes. In particular, higher-dimensional data cubes with many spectral lines are a challenge both algorithmically and visually. In this poster we show examples of both theoretical and observational data cubes, and how novel techniques developed outside our field can be applied to astronomy.

P60: Astronomy Data Centres: data storage approaches

    
Economou, Frossie NOAO
Hoblitt, Joshua NOAO
Scott, Derec NOAO

We present NOAO Science Data Management's experience with storing NOAO's archive data using GPFS and iRODS. We are surveying other data centers for alternative approaches to large-scale astronomical data storage and will be presenting these results at a new meeting on Astronomy Data Centre Technologies. This meeting is conceived as a workshop to allow system administrators and operations staff at data centers to share experiences with file storage technologies, database platforms, authentication mechanisms, system monitoring and other topics of interest to data center personnel. Please visit the poster exhibit for details.

P61: Automated removal of bad-baseline spectra from ACSIS/HARP heterodyne time series

    
Currie, Malcolm J. Joint Astronomy Centre

Heterodyne time-series spectral data often exhibit distorted or noisy baselines. These are either transient, due to external interference or pickup, or affect a receptor throughout an observation or an extended period, possibly due to a poor cable connection. While such spectra can be excluded manually, this is time consuming and prone to omission, especially for high-frequency interference affecting just one or two spectra out of typically ten to twenty thousand, which can nevertheless produce undesirable artefacts in the reduced spectral cube. Furthermore, astronomers have tended to reject an entire receptor if any of its spectra are suspect; as a consequence the reduced products have lower signal-to-noise, and enhanced graticule patterns due to the variable coverage and relative detector sensitivities.

This poster illustrates the types of aberrant spectra seen with ACSIS/HARP on the James Clerk Maxwell Telescope and the algorithms, applied within the ORAC-DR pipeline, used to identify and remove them, and compares integrated maps with and without baseline filtering.
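
A toy version of one such criterion, flagging spectra whose emission-free baseline noise is an outlier with respect to the whole time series, might look as follows (illustrative only; the ORAC-DR recipes use more elaborate tests):

    import numpy as np

    def flag_bad_baselines(spectra, n_sigma=4.0):
        """Return a boolean mask of spectra to reject.

        spectra: (nspec, nchan) array with line channels already masked
                 as NaN, so the RMS measures baseline noise only."""
        rms = np.nanstd(spectra, axis=1)
        med = np.median(rms)
        mad = np.median(np.abs(rms - med)) * 1.4826   # robust sigma estimate
        return rms > med + n_sigma * mad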

P62: CANFAR + Skytree: A Cloud Computing and Data Mining System for Astronomy

    
Ball, Nicholas M. National Research Council Canada

To date, computing systems have allowed either sophisticated analysis of small datasets, as exemplified by most astronomy software, or simple analysis of large datasets, such as database queries. At the Canadian Astronomy Data Centre, we have combined our cloud computing system, the Canadian Advanced Network for Astronomical Research (CANFAR), with the world's most advanced machine learning software, Skytree, to create the world's first cloud computing system for data mining in astronomy. CANFAR provides a generic environment for the storage and processing of large datasets, removing the requirement for an individual or project to set up and maintain a computing system when implementing an extensive undertaking such as a survey pipeline. 500 processor cores and several hundred terabytes of persistent storage are currently available to users, and both the storage and processing infrastructure are expandable. The storage is implemented via the International Virtual Observatory Alliance's VOSpace protocol, and is available as a mounted filesystem accessible both interactively, and to all processing jobs. The user interacts with CANFAR by utilizing virtual machines, which appear to them as equivalent to a desktop. Each machine is replicated as desired to perform large-scale parallel processing. Such an arrangement carries far more flexibility than other cloud systems, because it enables the user to immediately install and run the same astronomy code that they already utilize, in the same way as on a desktop. Skytree is installed and run just as any other software on the system, and thus acts as a library of command line data mining functions that can be integrated into one's wider analysis. Thus we have created a generic environment for large-scale analysis by data mining, in the same way that CANFAR itself has done for storage and processing. Because Skytree scales to large data in linear runtime, this allows the full sophistication of the huge fields of data mining and machine learning to be applied to the hundreds of millions of objects that make up current large datasets. We demonstrate the utility of the CANFAR + Skytree system by showing science results obtained, including assigning photometric redshifts to the MegaPipe reductions of the Canada-France-Hawaii Telescope Legacy Wide and Deep surveys. This project involves producing, handling, and running data mining on a catalog of over 13 billion object instances. This is comparable in size to those expected from next-generation surveys, such as the Large Synoptic Survey Telescope. The CANFAR + Skytree system is open for use by any interested member of the astronomical community.

P63: The GIRAFFE Archive : 1D and 3D spectra

    
Royer, Frédéric GEPI - Observatoire de Paris
Jégouzo, Isabelle GEPI - Observatoire de Paris
Tajahmady, Françoise GEPI - Observatoire de Paris
Normand, Jonathan VO Paris Data Center - Observatoire de Paris
Chilingarian, Igor Harvard-Smithsonian Center for Astrophysics

The GIRAFFE Archive (http://giraffe-archive.obspm.fr) contains the reduced spectra observed with GIRAFFE, the intermediate- and high-resolution multi-fibre spectrograph installed at VLT/UT2 (ESO). In its multi-object configuration GIRAFFE produces 1D spectra, while the different integral field unit configurations produce 3D spectra.

We present here the status of the archive and the different functionalities for selecting and downloading both 1D and 3D data products, as well as its present content. The two collections are available in the VO: the 1D spectra (summed in the case of integral field observations) and the 3D field observations. The latter products can be explored using the VO Paris Euro3D Client (http://voplus.obspm.fr/~chil/Euro3D/).

P64: ESO Catalogue Facility design and performance

    
Moins, Christophe ESO
Retzlaff, Joerg ESO
Arnaboldi, Magda ESO
Zampieri, Stefano ESO
Delmotte, Nausicaa ESO
Forchi, Vincenzo ESO
Klein Gebbinck, Maurice ESO
Lockhart, John ESO
Micol, Alberto ESO
Vera Sequeiros, Ignacio ESO
Bierwirth, Thomas ESO
Peron, Michele ESO
Romaniello, Martino ESO
Suchar, Dieter ESO

The ESO Phase3 Catalogue Facility provides investigators with the possibility to ingest catalogues resulting from ESO public surveys and large programs, and to query and download their content according to positional and non-positional criteria. It relies on a chain of tools that covers the complete workflow from submission through validation and ingestion into the ESO archive and catalogue repository, and on a web application to browse and query catalogues.

This repository consists of two components. One is a Sybase ASE relational database where catalogue metadata are stored. The second is a Sybase IQ data warehouse where the content of each catalogue is ingested into a specific table that returns all records matching a user's query. Spatial indexing has been leveraged in Sybase IQ to speed up positional queries; it relies on the Spherical library from Johns Hopkins University, which implements the Hierarchical Triangular Mesh (HTM) algorithm. HTM is based on a recursive decomposition of the celestial sphere into spherical triangles, with an index assigned to each of them. It has been complemented with optimized indexes on the non-positional columns that are likely to be frequently used as query constraints.
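
The query pattern this enables looks roughly as follows (illustrative only; the schema, column names and trixel ranges below are invented): a cone on the sky is first covered by a small set of HTM trixel ranges, which become cheap range predicates on the indexed column before the exact distance test is applied.

    # Ranges as produced by an HTM cover routine for the search cone
    # (values invented for illustration).
    ranges = [(1048576, 1048831), (1049088, 1049343)]
    predicate = " OR ".join("htmid BETWEEN %d AND %d" % r for r in ranges)
    sql = ("SELECT ra, dec FROM catalogue WHERE (%s) "
           "AND angular_distance(ra, dec, 210.80, 54.35) < 0.05" % predicate)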

First tests performed on catalogues such as 2MASS have confirmed that this approach provides a very good level of performance and a smooth user experience that are likely to facilitate the scientific exploitation of catalogues.

P65: A New Python Library for Spectroscopic Analysis with MIDAS Style

    
Song, Yihan National Astronomical Observatories, Chinese Academy of Sciences

ESO MIDAS is a system for astronomers to analyze data, and many astronomers are accustomed to it. Python is a high-level scripting language with many applications in astronomical data processing. We have built a Python library based on the ESO MIDAS functions. The library makes it easy for people who are familiar with MIDAS to implement their algorithms. We call the library PydasLib.

P66: The Green Bank Telescope Spectral Pipeline

    
Masters, Joe National Radio Astronomy Observatory

Recently, historic Green Bank Telescope (GBT) datasets were made available to the user community through a data archive service. In support of the archive, and to offer quick access to GBT data, NRAO has instituted the GBT Pipeline Project to provide reduced spectra from public GBT datasets to the user community.

The GBT Pipeline Project has two goals. The first is to produce an automated data processing pipeline that can generate quick-look spectra for about 80% of all spectral line data observed with the GBT, including data observed with the VEGAS spectrometer in standard observing modes. For certain types of science data, this project will produce archive-ready data products, including a "summary sheet" for each reduced spectrum with accompanying header and statistical information. A second goal of the Pipeline Project is to prepare tools that assist with certain potentially labor-intensive data processing tasks, such as calibrating and mapping K-band Focal Plane Array observations.

Through various approved observing programs, the GBT has taken pointed observations to measure HI profiles toward several thousand galaxies. Here we show early results of the quick-look pipeline for extragalactic HI observations. The GBT Pipeline project plans to provide HI spectra of galaxies to the NRAO data archive and to the NASA Extragalactic Database later this year.