[ { "text": "Calibration of the NEVOD-EAS array for detection of extensive air\n showers: In this paper we discuss the calibration of the NEVOD-EAS array which is a\npart of the Experimental Complex NEVOD, as well as the results of studying the\nresponse features of its scintillation detectors. We present the results of the\ndetectors energy calibration, performed by comparing their response to\ndifferent types of particles obtained experimentally and simulated with the\nGeant4 software package, as well as of the measurements of their timing\nresolution. We also discuss the results of studies of the light collection\nnon-uniformity of the NEVOD-EAS detectors and of the accuracy of air-shower\narrival direction reconstruction, which have been performed using other\nfacilities of the Experimental Complex NEVOD: the muon hodoscope URAGAN and the\ncoordinate-tracking detector DECOR.", "category": "astro-ph_IM" }, { "text": "The RATT PARROT: serendipitous discovery of a peculiarly scintillating\n pulsar in MeerKAT imaging observations of the Great Saturn-Jupiter\n Conjunction of 2020. I. Dynamic imaging and data analysis: We report on a radiopolarimetric observation of the Saturn-Jupiter Great\nConjunction of 2020 using the MeerKAT L-band system, initially carried out for\nscience verification purposes, which yielded a serendipitous discovery of a\npulsar. The radiation belts of Jupiter are very bright and time variable:\ncoupled with the sensitivity of MeerKAT, this necessitated development of\ndynamic imaging techniques, reported on in this work. We present a deep radio\n\"movie\" revealing Jupiter's rotating magnetosphere, a radio detection of\nCallisto, and numerous background radio galaxies. We also detect a bright radio\ntransient in close vicinity to Saturn, lasting approximately 45 minutes.\nFollow-up deep imaging observations confirmed this as a faint compact variable\nradio source, and yielded detections of pulsed emission by the commensal\nMeerTRAP search engine, establishing the object's nature as a radio emitting\nneutron star, designated PSR J2009-2026. A further observation combining deep\nimaging with the PTUSE pulsar backend measured detailed dynamic spectra for the\nobject. While qualitatively consistent with scintillation, the magnitude of the\nmagnification events and the characteristic timescales are odd. We are\ntentatively designating this object a pulsar with anomalous refraction\nrecurring on odd timescales (PARROT). As part of this investigation, we present\na pipeline for detection of variable sources in imaging data, with dynamic\nspectra and lightcurves as the products, and compare dynamic spectra obtained\nfrom visibility data with those yielded by PTUSE. We discuss MeerKAT's\ncapabilities and prospects for detecting more of such transients and variables.", "category": "astro-ph_IM" }, { "text": "Enhanced models for stellar Doppler noise reveal hints of a 13-year\n activity cycle of 55 Cancri: We consider the impact of Doppler noise models on the statistical robustness\nof the exoplanetary radial-velocity fits. We show that the traditional model of\nthe Doppler noise with an additive jitter can generate large non-linearity\neffects, decreasing the reliability of the fit, especially in the cases when a\ncorreleated Doppler noise is involved. 
We introduce a regularization of the\nadditive noise model that can gracefully eliminate its singularities together\nwith the associated non-linearity effects.\n We apply this approach to Doppler time-series data of several exoplanetary\nsystems. This demonstrates that our new regularized noise model yields orbital\nfits that have either increased or at least the same statistical robustness, in\ncomparison with the simple additive jitter. Various statistical uncertainties\nin the parametric estimations are often reduced, while planet detection\nsignificance is often increased.\n Concerning the 55 Cnc five-planet system, we show that its Doppler data\ncontain significant correlated (\"red\") noise. Its correlation timescale is in\nthe range from days to months, and its magnitude is much larger than the effect\nof the planetary N-body perturbations in the radial velocity (these\nperturbations thus appear undetectable). Characteristics of the red noise\ndepend on the spectrograph/observatory, and also show a cyclic time variation\nin phase with the public Ca II H & K and photometry measurements. We interpret\nthis modulation as a hint of the long-term activity cycle of 55 Cnc, similar to\nthe Solar 11-year cycle. We estimate the 55 Cnc activity period as\n$12.6^{+2.5}_{-1.0}$ yrs, with the nearest minimum presumably expected in 2014\nor 2015.", "category": "astro-ph_IM" }, { "text": "Analysis of active optics correction for a large honeycomb mirror: In the development of space-based large telescope systems, having the\ncapability to perform active optics correction allows the correction of wavefront\naberrations caused by thermal perturbations so as to achieve\ndiffraction-limited performance with relaxed stability requirements. We present\na method of active optics correction used for current ground-based telescopes\nand simulate its effectiveness for a large honeycomb primary mirror in space.\nWe use a finite-element model of the telescope to predict misalignments of the\noptics and primary mirror surface errors due to thermal gradients. These\npredicted surface error data are plugged into a Zemax ray trace analysis to\nproduce wavefront error maps at the image plane. For our analysis, we assume\nthat tilt, focus and coma in the wavefront error are corrected by adjusting the\npointing of the telescope and moving the secondary mirror. Remaining mid- to\nhigh-order errors are corrected through physically bending the primary mirror\nwith actuators. The influences of individual actuators are combined to form\nbending modes that increase in stiffness from low-order to high-order\ncorrection. The number of modes used is a variable that determines the accuracy\nof correction and magnitude of forces. We explore the degree of correction that\ncan be made within limits on actuator force capacity and stress in the mirror.\nWhile remaining within these physical limits, we are able to demonstrate sub-25\nnm RMS surface error over 30 hours of simulated data. The results from this\nsimulation will be part of an end-to-end simulation of telescope optical\nperformance that includes dynamic perturbations, wavefront sensing, and active\ncontrol of alignment and mirror shape with realistic actuator performance.", "category": "astro-ph_IM" }, { "text": "The Carnegie Astrometric Planet Search Program: We are undertaking an astrometric search for gas giant planets and brown\ndwarfs orbiting nearby low mass dwarf stars with the 2.5-m du Pont telescope at\nthe Las Campanas Observatory in Chile.
We have built two specialized\nastrometric cameras, the Carnegie Astrometric Planet Search Cameras (CAPSCam-S\nand CAPSCam-N), using two Teledyne Hawaii-2RG HyViSI arrays, with the cameras'\ndesign having been optimized for high accuracy astrometry of M dwarf stars. We\ndescribe two independent CAPSCam data reduction approaches and present a\ndetailed analysis of the observations to date of one of our target stars, NLTT\n48256. Observations of NLTT 48256 taken since July 2007 with CAPSCam-S imply\nthat astrometric accuracies of around 0.3 milliarcsec per hour are achievable,\nsufficient to detect a Jupiter-mass companion orbiting 1 AU from a late M dwarf\n10 pc away with a signal-to-noise ratio of about 4. We plan to follow about 100\nnearby (primarily within about 10 pc) low mass stars, principally late M, L,\nand T dwarfs, for 10 years or more, in order to detect very low mass companions\nwith orbital periods long enough to permit the existence of habitable,\nEarth-like planets on shorter-period orbits. These stars are generally too\nfaint and red to be included in ground-based Doppler planet surveys, which are\noften optimized for FGK dwarfs. The smaller masses of late M dwarfs also yield\ncorrespondingly larger astrometric signals for a given mass planet. Our search\nwill help to determine whether gas giant planets form primarily by core\naccretion or by disk instability around late M dwarf stars.", "category": "astro-ph_IM" }, { "text": "Unrolling PALM for sparse semi-blind source separation: Sparse Blind Source Separation (BSS) has become a well established tool for a\nwide range of applications - for instance, in astrophysics and remote sensing.\nClassical sparse BSS methods, such as the Proximal Alternating Linearized\nMinimization (PALM) algorithm, nevertheless often suffer from a difficult\nhyperparameter choice, which undermines their results. To bypass this pitfall,\nwe propose in this work to build on the thriving field of algorithm\nunfolding/unrolling. Unrolling PALM makes it possible to leverage the data-driven\nknowledge stemming from realistic simulations or ground-truth data by learning\nboth PALM hyperparameters and variables. In contrast to most existing unrolled\nalgorithms, which assume a fixed known dictionary during the training and\ntesting phases, this article further emphasizes the ability to deal with\nvariable mixing matrices (a.k.a. dictionaries). The proposed Learned PALM\n(LPALM) algorithm thus enables semi-blind source separation, which\nis key to increasing the generalization of the learnt model in real-world\napplications. We illustrate the relevance of LPALM in astrophysical\nmultispectral imaging: the algorithm not only needs up to $10^4-10^5$ times\nfewer iterations than PALM, but also improves the separation quality, while\navoiding the cumbersome hyperparameter and initialization choice of PALM. We\nfurther show that LPALM outperforms other unrolled source separation methods in\nthe semi-blind setting.", "category": "astro-ph_IM" }, { "text": "Thermal control of long delay lines in a high-resolution astrophotonic\n spectrograph: High-resolution astronomical spectroscopy carried out with a photonic Fourier\ntransform spectrograph (FTS) requires long asymmetrical optical delay lines\nthat can be dynamically tuned. For example, to achieve a spectral resolution of\nR = 30,000, a delay line as long as 1.5 cm would be required. Such delays are\ninherently prone to phase errors caused by temperature fluctuations.
This is\ndue to the relatively large thermo-optic coefficient and long lengths of the\nwaveguides, in this case composed of SiN, resulting in thermally dependent\nchanges to the optical path length. To minimize phase error to the order of\n0.05 radians, thermal stability of the order of 0.05{\\deg} C is necessary. A\nthermal control system capable of stability such as this would require a fast\nthermal response and minimal overshoot/undershoot. With a PID temperature\ncontrol loop driven by a Peltier cooler and thermistor, we minimized\ninterference fringe phase error to +/- 0.025 radians and achieved temperature\nstability on the order of 0.05{\\deg} C. We present a practical system for\nprecision temperature control of a foundry-fabricated and packaged FTS device\non a SiN platform with delay lines ranging from 0.5 to 1.5 cm in length using\ninexpensive off-the-shelf components, including design details, control loop\noptimization, and considerations for thermal control of integrated photonics.", "category": "astro-ph_IM" }, { "text": "Characterization Of Inpaint Residuals In Interferometric Measurements of\n the Epoch Of Reionization: Radio Frequency Interference (RFI) is one of the systematic challenges\npreventing 21cm interferometric instruments from detecting the Epoch of\nReionization. To mitigate the effects of RFI on data analysis pipelines,\nnumerous inpaint techniques have been developed to restore RFI-corrupted data.\nWe examine the qualitative and quantitative errors introduced into the\nvisibilities and power spectrum due to inpainting. We perform our analysis on\nsimulated data as well as real data from the Hydrogen Epoch of Reionization\nArray (HERA) Phase 1 upper limits. We also introduce a convolutional neural\nnetwork that is capable of inpainting RFI-corrupted data in interferometric\ninstruments. We train our network on simulated data and show that our network\nis capable of inpainting real data without needing to be retrained. We find\nthat techniques that incorporate high wavenumbers in delay space in their\nmodeling are best suited for inpainting over narrowband RFI. We also show that,\nwith our fiducial parameters, Discrete Prolate Spheroidal Sequences (DPSS) and\nCLEAN provide the best performance for intermittent ``narrowband'' RFI, while\nGaussian Process Regression (GPR) and Least Squares Spectral Analysis (LSSA)\nprovide the best performance for larger RFI gaps. However, we caution that these\nqualitative conclusions are sensitive to the chosen hyperparameters of each\ninpainting technique. We find these results to be consistent in both simulated\nand real visibilities. We show that all inpainting techniques reliably\nreproduce foreground-dominated modes in the power spectrum. Since the\ninpainting techniques should not be capable of reproducing noise realizations,\nwe find that the largest errors occur in the noise-dominated delay modes. We\nshow that in the future, as the noise level of the data comes down, CLEAN and\nDPSS are most capable of reproducing the fine frequency structure in the\nvisibilities of HERA data.", "category": "astro-ph_IM" }, { "text": "Analysis Methods for Gamma-ray Astronomy: The launch of the Fermi satellite in 2008, with its Large Area Telescope\n(LAT) on board, has opened a new era for the study of gamma-ray sources at GeV\n($10^9$ eV) energies.
Similarly, the commissioning of the third generation of\nimaging atmospheric Cherenkov telescopes (IACTs) - H.E.S.S., MAGIC, and VERITAS\n- in the mid-2000s has firmly established the field of TeV ($10^{12}$ eV)\ngamma-ray astronomy. Together, these instruments have revolutionised our\nunderstanding of the high-energy gamma-ray sky, and they continue to provide\naccess to it over more than six decades in energy. In recent years, the\nground-level particle detector arrays HAWC, Tibet, and LHAASO have opened a new\nwindow to gamma rays of the highest energies, beyond 100 TeV. Soon,\nnext-generation facilities such as CTA and SWGO will provide even better\nsensitivity, thus promising a bright future for the field. In this chapter, we\nprovide a brief overview of methods commonly employed for the analysis of\ngamma-ray data, focusing on those used for Fermi-LAT and IACT observations. We\ndescribe the standard data formats, explain event reconstruction and selection\nalgorithms, and cover in detail high-level analysis approaches for imaging and\nextraction of spectra, including aperture photometry as well as advanced\nlikelihood techniques.", "category": "astro-ph_IM" }, { "text": "Coherent Imaging with Photonic Lanterns: Photonic Lanterns (PLs) are tapered waveguides that gradually transition from\na multi-mode fiber geometry to a bundle of single-mode fibers (SMFs). They can\nefficiently couple multi-mode telescope light into a multi-mode fiber entrance\nat the focal plane and convert it into multiple single-mode beams. Thus, each\nSMF samples its unique mode (lantern principal mode) of the telescope light in\nthe pupil, analogous to subapertures in aperture masking interferometry (AMI).\nCoherent imaging with PLs can be enabled by interfering SMF outputs and\napplying phase modulation, which can be achieved using a photonic chip beam\ncombiner at the backend (e.g., the ABCD beam combiner). In this study, we\ninvestigate the potential of coherent imaging by interfering SMF outputs of a\nPL with a single telescope. We demonstrate that the visibilities that can be\nmeasured from a PL are mutual intensities incident on the pupil weighted by the\ncross-correlation of a pair of lantern modes. From numerically simulated\nlantern principal modes of a 6-port PL, we find that interferometric\nobservables using a PL behave similarly to separated-aperture visibilities for\nsimple models on small angular scales ($<\\lambda/D$) but with greater\nsensitivity to symmetries and capability to break phase-angle degeneracies.\nFurthermore, we present simulated observations with wavefront errors and\ncompare them to AMI. Despite the redundancy caused by extended lantern\nprincipal modes, spatial filtering offers stability against wavefront errors. Our\nsimulated observations suggest that PLs may offer significant benefits in the\nphoton noise-limited regime and in resolving small angular scales in the\nlow-contrast regime.", "category": "astro-ph_IM" }, { "text": "The ASTRO-H X-ray Astronomy Satellite: The joint JAXA/NASA ASTRO-H mission is the sixth in a series of highly\nsuccessful X-ray missions developed by the Institute of Space and Astronautical\nScience (ISAS), with a planned launch in 2015. The ASTRO-H mission is equipped\nwith a suite of sensitive instruments with the highest energy resolution ever\nachieved at E > 3 keV and a wide energy range spanning four decades in energy\nfrom soft X-rays to gamma-rays.
The simultaneous broad band pass, coupled with\nthe high spectral resolution of Delta E < 7 eV of the micro-calorimeter, will\nenable a wide variety of important science themes to be pursued. ASTRO-H is\nexpected to provide breakthrough results in scientific areas as diverse as the\nlarge-scale structure of the Universe and its evolution, the behavior of matter\nin the gravitational strong field regime, the physical conditions in sites of\ncosmic-ray acceleration, and the distribution of dark matter in galaxy clusters\nat different redshifts.", "category": "astro-ph_IM" }, { "text": "JUDE (Jayant's UVIT Data Explorer) Pipeline User Manual: We have written a reference manual to use JUDE (Jayant's UVIT Data Explorer)\ndata pipeline software for processing and reducing the Ultraviolet Imaging\nTelescope (UVIT) Level~1 data into event lists and images -- Level~2 data. The\nJUDE pipeline is written in the GNU Data Language (GDL) and released as\nopen source, and it may be freely used and modified. GDL was chosen because it is\nan interpreted language allowing interactive analysis of data; thus in the\npipeline, each step can be checked and run interactively. This manual is\nintended as a guide to data reduction and calibration for the users of the UVIT\ndata.", "category": "astro-ph_IM" }, { "text": "Reconstruction of Cherenkov radiation signals from extensive air showers\n of cosmic rays using data of a wide field-of-view telescope: The operation of a wide field-of-view (WFOV) Cherenkov telescope is\ndescribed. The detection of extensive air showers (EAS) of cosmic rays (CR) is\nbased upon the coincidence with signals from the Yakutsk array. The data\nacquisition system of the telescope yields signals connected with EAS\ndevelopment parameters: presumably, shower age and position of shower maximum\nin the atmosphere. Here we describe the method of signal processing used to\nreconstruct Cherenkov radiation signals induced by CR showers. An analysis of\nsignal parameters results in the confirmation of the known correlation of the\nduration of the Cherenkov radiation signal with the distance to the shower\ncore. The measured core distance dependence is used to set an upper limit to\nthe dimensions of the area along the EAS axis where the Cherenkov radiation\nintensity is above half-peak amplitude.", "category": "astro-ph_IM" }, { "text": "Technical Note: Asteroid Detection Demonstration from SkySat-3 B612 Data\n using Synthetic Tracking: We report results from analyzing the B612 asteroid observation data taken by\nthe sCMOS cameras on board Planet's SkySat-3 using the synthetic tracking\ntechnique. The analysis demonstrates the expected sensitivity improvement in\nthe signal-to-noise ratio of the asteroids from properly stacking the\nshort-exposure images in post-processing.", "category": "astro-ph_IM" }, { "text": "AMEGO-X: MeV gamma-ray Astronomy in the Multimessenger Era: Recent detections of gravitational wave signals and neutrinos from gamma-ray\nsources have ushered in the era of multi-messenger astronomy, while\nhighlighting the importance of gamma-ray observations for this emerging field.\nAMEGO-X, the All-sky Medium Energy Gamma-Ray Observatory eXplorer, is an MeV\ngamma-ray instrument that will survey the sky in the energy range from hundreds\nof keV to one GeV with unprecedented sensitivity. AMEGO-X will detect gamma-ray\nphotons both via Compton interactions and pair production processes, bridging\nthe \"sensitivity gap\" between hard X-rays and high-energy gamma rays.
AMEGO-X\nwill provide important contributions to multi-messenger science and time-domain\ngamma-ray astronomy, studying e.g. high-redshift blazars, which are probable\nsources of astrophysical neutrinos, and gamma-ray bursts. I will present an\noverview of the instrument and science program.", "category": "astro-ph_IM" }, { "text": "Multi-Chroic Feed-Horn Coupled TES Polarimeters: Multi-chroic polarization sensitive detectors offer an avenue to increase\nboth the spectral coverage and sensitivity of instruments optimized for\nobservations of the cosmic-microwave background (CMB) or sub-mm sky. We report\non an effort to adapt the Truce Collaboration horn coupled bolometric\npolarimeters for operation over an octave bandwidth. Development is focused on\ndetectors operating in both the 90 and 150 GHz bands which offer the highest\nCMB polarization to foreground ratio. We plan to deploy an array of 256\nmulti-chroic 90/150 GHz polarimeters with 1024 TES detectors on ACTPol in 2013,\nand there are proposals to use this technology for balloon-borne instruments.\nThe combination of excellent control of beam systematics and sensitivity makes\nthis technology ideal for future ground, balloon, and space missions.", "category": "astro-ph_IM" }, { "text": "AnisoCADO: a python package for analytically generating adaptive optics\n point spread functions for the Extremely Large Telescope: AnisoCADO is a Python package for generating images of the point spread\nfunction (PSF) for the European Extremely Large Telescope (ELT). The code\nallows the user to set many of the most important atmospheric and observational\nparameters that influence the shape and Strehl ratio of the resulting PSF,\nincluding but not limited to: the atmospheric turbulence profile, the guide\nstar position for a single conjugate adaptive optics (SCAO) solution,\ndifferential telescope pupil transmission, etc. Documentation can be found at\nhttps://anisocado.readthedocs.io/en/latest/", "category": "astro-ph_IM" }, { "text": "AstroDAbis: Annotations and Cross-Matches for Remote Catalogues: Astronomers are good at sharing data, but poorer at sharing knowledge.\n Almost all astronomical data ends up in open archives, and access to these is\nbeing simplified by the development of the global Virtual Observatory (VO).\nThis is a great advance, but the fundamental problem remains that these\narchives contain only basic observational data, whereas all the astrophysical\ninterpretation of that data -- which source is a quasar, which a low-mass star,\nand which an image artefact -- is contained in journal papers, with very little\nlinkage back from the literature to the original data archives. It is therefore\ncurrently impossible for an astronomer to pose a query like \"give me all\nsources in this data archive that have been identified as quasars\" and this\nlimits the effective exploitation of these archives, as the user of an archive\nhas no direct means of taking advantage of the knowledge derived by its\nprevious users.\n The AstroDAbis service aims to address this, in a prototype service enabling\nastronomers to record annotations and cross-identifications in the AstroDAbis\nservice, annotating objects in other catalogues.
We have deployed two\ninterfaces to the annotations, namely an astronomy-specific one using the TAP\nprotocol, and a second exploiting generic Linked Open Data (LOD) and RDF\ntechniques.", "category": "astro-ph_IM" }, { "text": "Building models for extended radio sources: implications for Epoch of\n Reionisation science: We test the hypothesis that limitations in the sky model used to calibrate an\ninterferometric radio telescope, where the model contains extended radio\nsources, will generate bias in the Epoch of Reionisation (EoR) power spectrum.\nThe information contained in a calibration model about the spatial and spectral\nstructure of an extended source is incomplete because a radio telescope cannot\nsample all Fourier components. Application of an incomplete sky model to\ncalibration of EoR data will imprint residual error in the data, which\npropagates forward to the EoR power spectrum. This limited information is\nstudied in the context of current and future planned instruments and surveys at\nEoR frequencies, such as the Murchison Widefield Array (MWA), Giant Metrewave\nRadio Telescope (GMRT) and the Square Kilometre Array (SKA1-Low). For the MWA\nEoR experiment, we find that both the additional short baseline $uv$-coverage\nof the compact EoR array, and the additional long baselines provided by TGSS\nand planned MWA expansions, are required to obtain sufficient information on\nall relevant scales. For SKA1-Low, arrays with maximum baselines of 49~km and\n65~km yield comparable performance at 50~MHz and 150~MHz, while 39~km, 14~km\nand 4~km arrays yield degraded performance.", "category": "astro-ph_IM" }, { "text": "CAPTURE: A continuum imaging pipeline for the uGMRT: We present the first fully automated pipeline for making images from the\ninterferometric data obtained from the upgraded Giant Metrewave Radio Telescope\n(uGMRT) called CAsa Pipeline-cum-Toolkit for Upgraded Giant Metrewave Radio\nTelescope data REduction - CAPTURE. It is a Python program that uses tasks from\nthe NRAO Common Astronomy Software Applications (CASA) to perform the steps of\nflagging of bad data, calibration, imaging and self-calibration. The salient\nfeatures of the pipeline are: i) a fully automatic mode to go from the raw data\nto a self-calibrated continuum image, ii) specialized flagging strategies for\nshort and long baselines that ensure minimal loss of extended structure, iii)\nflagging of persistent narrow band radio frequency interference (RFI), iv)\nflexibility for the user to configure the pipeline for step-by-step analysis or\nspecial cases and v) analysis of data from the legacy GMRT. CAPTURE is\navailable publicly on github (https://github.com/ruta-k/uGMRT-pipeline, release\nv1.0.0). The primary beam correction for the uGMRT images produced with CAPTURE\nis made separately available at https://github.com/ruta-k/uGMRTprimarybeam. We\nshow examples of using CAPTURE on uGMRT and legacy GMRT data. In principle,\nCAPTURE can be tailored for use with radio interferometric data from other\ntelescopes.", "category": "astro-ph_IM" }, { "text": "Arm-Locking with the GRACE Follow-On Laser Ranging Interferometer: Arm-locking is a technique for stabilizing the frequency of a laser in an\ninter-spacecraft interferometer by using the spacecraft separation as the\nfrequency reference.
A candidate technique for future space-based gravitational\nwave detectors such as the Laser Interferometer Space Antenna (LISA),\narm-locking has been extensively studied in this context through analytic models,\ntime-domain simulations, and hardware-in-the-loop laboratory demonstrations. In\nthis paper we show that the Laser Ranging Interferometer instrument flying aboard\nthe upcoming Gravity Recovery and Climate Experiment Follow-On (GRACE-FO)\nmission provides an appropriate platform for an on-orbit demonstration of the\narm-locking technique. We describe an arm-locking controller design for the\nGRACE-FO system and a series of time-domain simulations that demonstrate its\nfeasibility. We conclude that it is possible to achieve laser frequency noise\nsuppression of roughly two orders of magnitude around a Fourier frequency of\n1 Hz with conservative margins on the system's stability. We further demonstrate\nthat `pulling' of the master laser frequency due to fluctuating Doppler shifts\nand lock acquisition transients is less than $100\\,$MHz over several GRACE-FO\norbits. These findings motivate further study of the implementation of such a\ndemonstration.", "category": "astro-ph_IM" }, { "text": "Correcting for Telluric Absorption: Methods, Case Studies, and Release\n of the TelFit Code: Ground-based astronomical spectra are contaminated by the Earth's atmosphere\nto varying degrees in all spectral regions. We present a Python code that can\naccurately fit a model to the telluric absorption spectrum present in\nastronomical data, with residuals of $\\sim 3-5\\%$ of the continuum for\nmoderately strong lines. We demonstrate the quality of the correction by\nfitting the telluric spectrum in a nearly featureless A0V star, HIP 20264, as\nwell as to a series of dwarf M star spectra near the 819 nm sodium doublet. We\ndirectly compare the results to an empirical telluric correction of HIP 20264\nand find that our model-fitting procedure is at least as good and sometimes\nmore accurate. The telluric correction code, which we make freely available to\nthe astronomical community, can be used as a replacement for telluric standard\nstar observations for many purposes.", "category": "astro-ph_IM" }, { "text": "PandExo: A Community Tool for Transiting Exoplanet Science with JWST &\n HST: As we approach the James Webb Space Telescope (JWST) era, several studies\nhave emerged that aim to: 1) characterize how the instruments will perform and\n2) determine what atmospheric spectral features could theoretically be detected\nusing transmission and emission spectroscopy. To some degree, all these studies\nhave relied on modeling of JWST's theoretical instrument noise. With under two\nyears left until launch, it is imperative that the exoplanet community begins\nto digest and integrate these studies into their observing plans, as well as\nthink about how to leverage the Hubble Space Telescope (HST) to optimize JWST\nobservations. In order to encourage this and to allow all members of the\ncommunity access to JWST & HST noise simulations, we present here an\nopen-source Python package and online interface for creating observation\nsimulations of all observatory-supported time-series spectroscopy modes. This\nnoise simulator, called PandExo, relies on some aspects of Space Telescope\nScience Institute's Exposure Time Calculator, Pandeia. We describe PandExo and\nthe formalism for computing noise sources for JWST.
Then, we benchmark\nPandExo's performance against each instrument team's independently written\nnoise simulator for JWST, and previous observations for HST. We find that\nPandExo is within 10% agreement for HST/WFC3 and for all JWST\ninstruments.", "category": "astro-ph_IM" }, { "text": "Differential HBT Method for Binary Stars: Two-photon correlations are studied for a binary star system. It is\ninvestigated how the differential Hanbury Brown and Twiss (HBT) approach can be\nused to determine orbital parameters of a binary star.", "category": "astro-ph_IM" }, { "text": "4MOST: Project overview and information for the First Call for Proposals: We introduce the 4-metre Multi-Object Spectroscopic Telescope (4MOST), a new\nhigh-multiplex, wide-field spectroscopic survey facility under development for\nthe four-metre-class Visible and Infrared Survey Telescope for Astronomy\n(VISTA) at Paranal. Its key specifications are: a large field of view (FoV) of\n4.2 square degrees and a high multiplex capability, with 1624 fibres feeding\ntwo low-resolution spectrographs ($R = \\lambda/\\Delta\\lambda \\sim 6500$), and\n812 fibres transferring light to the high-resolution spectrograph ($R \\sim\n20\\,000$). After a description of the instrument and its expected performance,\na short overview is given of its operational scheme and planned 4MOST\nConsortium science; these aspects are covered in more detail in other articles\nin this edition of The Messenger. Finally, the processes, schedules, and\npolicies concerning the selection of ESO Community Surveys are presented,\ncommencing with a singular opportunity to submit Letters of Intent for Public\nSurveys during the first five years of 4MOST operations.", "category": "astro-ph_IM" }, { "text": "Optimizing Gravitational-Wave Detector Design for Squeezed Light: Achieving the quantum noise targets of third-generation detectors will\nrequire 10 dB of squeezed-light enhancement as well as megawatt laser power in\nthe interferometer arms - both of which require unprecedented control of the\ninternal optical losses. In this work, we present a novel optimization approach\nto gravitational-wave detector design aimed at maximizing the robustness to\ncommon, yet unavoidable, optical fabrication and installation errors, which\nhave caused significant loss in Advanced LIGO. As a proof of concept, we employ\nthese techniques to perform a two-part optimization of the LIGO A+ design.\nFirst, we optimize the arm cavities for reduced scattering loss in the presence\nof point absorbers, which currently limit the operating power of Advanced LIGO.\nThen, we optimize the signal recycling cavity for maximum squeezing\nperformance, accounting for realistic errors in the positions and radii of\ncurvature of the optics. Our findings suggest that these techniques can be\nleveraged to achieve substantially greater quantum noise performance in current\nand future gravitational-wave detectors.", "category": "astro-ph_IM" }, { "text": "SORA: Stellar Occultation Reduction and Analysis: The stellar occultation technique provides competitive accuracy in\ndetermining the sizes, shapes, astrometry, etc., of the occulting body,\ncomparable to in-situ observations by spacecraft. With the increase in the\nnumber of known Solar System objects expected from the LSST, the highly precise\nastrometric catalogues, such as Gaia, and the improvement of ephemerides,\noccultation observations will become more common with a higher number of\nchords in each observation.
In the context of the Big Data era, we developed\nSORA, an open-source Python library to reduce and analyse stellar occultation\ndata efficiently. It includes routines ranging from the prediction of such events to the\ndetermination of Solar System bodies' sizes, shapes, and positions.", "category": "astro-ph_IM" }, { "text": "Astrophysics Source Code Library: Here we grow again!: The Astrophysics Source Code Library (ASCL) is a free online registry of\nresearch codes; it is indexed by ADS and Web of Science and has over 1300 code\nentries. Its entries are increasingly used to cite software; citations have\nbeen doubling each year since 2012 and every major astronomy journal accepts\ncitations to the ASCL. Codes in the resource cover all aspects of astrophysics\nresearch and many programming languages are represented. In the past year, the\nASCL added dashboards for users and administrators, started minting Digital\nObject Identifiers (DOIs) for software it houses, and added metadata fields\nrequested by users. This presentation covers the ASCL's growth in the past year\nand the opportunities afforded it as one of the few domain libraries for\nscience research codes.", "category": "astro-ph_IM" }, { "text": "Seeing Black Holes: from the Computer to the Telescope: Astronomical observations are about to deliver the very first telescopic\nimage of the massive black hole lurking at the Galactic Center. The mass of\ndata collected in one night by the Event Horizon Telescope network, exceeding\neverything that has ever been done in any scientific field, should provide a\nrecomposed image during 2018. All this, forty years after the first numerical\nsimulations done by the present author.", "category": "astro-ph_IM" }, { "text": "Cn2 profile from Shack-Hartmann data with CO-SLIDAR data processing: Cn2 profile monitoring usually makes use of wavefront slope correlations or\nof scintillation pattern correlations. Wavefront slope correlations provide\nsensitivity to layers close to the receiving plane. In addition, scintillation\ncorrelations allow a better sensitivity to high turbulence layers. Wavefront\nslope and scintillation correlations are therefore complementary. Slopes and\nscintillation being recorded simultaneously with a Shack-Hartmann wavefront\nsensor (SHWFS), we propose here to exploit their correlation to retrieve the\nCn2 profile. The measurement method named COupled SLodar scIDAR (CO-SLIDAR)\nuses correlations of SHWFS data from two separated stars. A maximum-likelihood\nmethod is developed to estimate precisely the positions and intensities\ncorresponding to each SHWFS spot, which are used as inputs for CO-SLIDAR. First\nresults are presented using real SHWFS data from a binary star.", "category": "astro-ph_IM" }, { "text": "On Optimal Geometry for Space Interferometers: This paper examines options for orbit configurations for a space\ninterferometer. In contrast to previously presented concepts for space very\nlong baseline interferometry, we propose a combination of regular and\nretrograde near-Earth circular orbits in order to achieve a faster filling of\n$(u,v)$ coverage. With the rapid relative motion of the telescopes, it will be\npossible to quickly obtain high-quality images of supermassive black holes.
As\na result of such an approach, it will be possible for the first time to conduct\nhigh-quality studies of the close surroundings of supermassive black holes in a\ndynamic fashion.", "category": "astro-ph_IM" }, { "text": "WAHRSIS: A Low-cost, High-resolution Whole Sky Imager With Near-Infrared\n Capabilities: Cloud imaging using ground-based whole sky imagers is essential for a\nfine-grained understanding of the effects of cloud formations, which can be\nuseful in many applications. Some such imagers are available commercially, but\ntheir cost is relatively high, and their flexibility is limited. Therefore, we\nbuilt a new daytime Whole Sky Imager (WSI) called Wide Angle High-Resolution\nSky Imaging System. The strengths of our new design are its simplicity, low\nmanufacturing cost and high resolution. Our imager captures the entire\nhemisphere in a single high-resolution picture via a digital camera using a\nfish-eye lens. The camera was modified to capture light across the visible as\nwell as the near-infrared spectral ranges. This paper describes the design of\nthe device as well as the geometric and radiometric calibration of the imaging\nsystem.", "category": "astro-ph_IM" }, { "text": "The 4m International Liquid Mirror Telescope project: The International Liquid Mirror Telescope (ILMT) project is a scientific\ncollaboration in observational astrophysics between the Li{\\`e}ge Institute of\nAstrophysics and Geophysics (Li{\\`e}ge University, Belgium), the Aryabhatta\nResearch Institute of observational sciencES (ARIES, Nainital, India) and\nseveral Canadian universities (British Columbia, Laval, Montr{\\'e}al, Toronto,\nVictoria and York). Meanwhile, several other institutes have joined the\nproject: the Royal Observatory of Belgium, the National University of\nUzbekistan and the Ulugh Beg Astronomical Institute (Uzbekistan) as well as the\nPozna{\\'n} Observatory (Poland). The Li{\\`e}ge company AMOS (Advanced\nMechanical and Optical Systems) has fabricated the telescope structure that has\nbeen erected on the ARIES site in Devasthal (Uttarakhand, India). It is the\nfirst liquid mirror telescope dedicated to astronomical observations.\nFirst light was obtained on 29 April 2022 and commissioning is being conducted\nat the present time. In this short article, we describe and illustrate the main\ncomponents of the ILMT. We also highlight the ILMT papers presented during the\nthird BINA workshop, which discuss various aspects of the ILMT science\nprograms.", "category": "astro-ph_IM" }, { "text": "The Unified Astronomy Thesaurus: The Unified Astronomy Thesaurus (UAT) is an open, interoperable and\ncommunity-supported thesaurus which unifies the existing divergent and isolated\nAstronomy & Astrophysics vocabularies into a single high-quality,\nfreely-available open thesaurus formalizing astronomical concepts and their\ninter-relationships.
The UAT builds upon the existing IAU Thesaurus with major\ncontributions from the astronomy portions of the thesauri developed by the\nInstitute of Physics Publishing, the American Institute of Physics, and SPIE.\nWe describe the effort behind the creation of the UAT and the process through\nwhich we plan to keep the document updated through broad community\nparticipation.", "category": "astro-ph_IM" }, { "text": "The International Pulsar Timing Array: The International Pulsar Timing Array (IPTA) is an organisation whose raison\nd'etre is to facilitate collaboration between the three main existing PTAs (the\nEPTA in Europe, NANOGrav in North America and the PPTA in Australia) in order\nto realise the benefits of combined PTA data sets in reaching the goals of PTA\nprojects. Currently, shared data sets for 39 pulsars are available for\nIPTA-based projects. Operation of the IPTA is administered by a Steering\nCommittee consisting of six members, two from each PTA, plus the immediate past\nChair in a non-voting capacity. A Constitution and several Agreements define\nthe framework for the collaboration. Web pages provide information both to\nmembers of participating PTAs and to the general public. With support from an\nNSF PIRE grant, the IPTA facilitates the organisation of annual Student\nWorkshops and Science Meetings. These are very valuable both in training new\nstudents and in communicating current results from IPTA-based research.", "category": "astro-ph_IM" }, { "text": "Systematics in the ALMA Proposal Review Rankings: The results from the ALMA proposal peer review process in Cycles 0-6 are\nanalyzed to identify any systematics in the scientific rankings that may\nsignify bias. Proposal rankings are analyzed with respect to the experience\nlevel of a Principal Investigator (PI) in submitting ALMA proposals, regional\naffiliation (Chile, East Asia, Europe, North America, or Other), and gender.\nThe analysis was conducted for both the Stage 1 rankings, which are based on\nthe preliminary scores from the reviewers, and the Stage 2 rankings, which are\nbased on the final scores from the reviewers after participating in a\nface-to-face panel discussion. Analysis of the Stage 1 results shows that PIs\nwho submit an ALMA proposal in multiple cycles have systematically better\nproposal ranks than PIs who have submitted proposals for the first time. In\nterms of regional affiliation, PIs from Europe and North America have better\nStage 1 rankings than PIs from Chile and East Asia. Consistent with Lonsdale et\nal. (2016), proposals led by men have better Stage 1 rankings than those led by\nwomen when averaged over all cycles. This trend was most noticeable in Cycle 3,\nbut no discernible differences in the Stage 1 rankings are present in recent\ncycles. Nonetheless, in each cycle to date, women have had a lower proposal\nacceptance rate than men even after differences in demographics are considered.\nComparison of the Stage 1 and Stage 2 rankings reveals no significant changes in\nthe distribution of proposal ranks by experience level, regional affiliation,\nor gender as a result of the panel discussions, although the proposal ranks for\nEast Asian PIs show a marginally significant improvement from Stage 1 to Stage\n2 when averaged over all cycles.
Thus, any systematics in the proposal rankings\nare introduced primarily in the Stage 1 process and not from the face-to-face\ndiscussions.", "category": "astro-ph_IM" }, { "text": "RSM detection map for direct exoplanet detection in ADI sequences: Beyond the choice of wavefront control systems or coronagraphs, advanced data\nprocessing methods play a crucial role in disentangling potential planetary\nsignals from bright quasi-static speckles. Among these methods, angular\ndifferential imaging (ADI) for data sets obtained in pupil tracking mode (ADI\nsequences) is one of the foremost research avenues, considering the many\nobserving programs performed with ADI-based techniques and the associated\ndiscoveries. Inspired by the field of econometrics, here we propose a new\ndetection algorithm for ADI sequences, deriving from the regime-switching model\nfirst proposed in the 1980s. The proposed model is very versatile as it allows\nthe use of PSF-subtracted data sets (residual cubes) provided by various\nADI-based techniques, separately or together, to provide a single detection\nmap. The temporal structure of the residual cubes is used for the detection as\nthe model is fed with a concatenated series of pixel-wise time sequences. The\nalgorithm provides a detection probability map by considering two possible\nregimes for concentric annuli, the first one accounting for the residual noise\nand the second one for the planetary signal in addition to the residual noise.\nThe algorithm's performance is tested on data sets from two instruments, VLT/NACO\nand VLT/SPHERE. The results show an overall better performance in the receiver\noperating characteristic space when compared with standard\nsignal-to-noise-ratio maps for several state-of-the-art ADI-based\npost-processing algorithms.", "category": "astro-ph_IM" }, { "text": "The star catalogue of Wilhelm IV, Landgraf von Hessen-Kassel: Accuracy\n of the catalogue and of the measurements: We analyse a manuscript star catalogue by Wilhelm IV, Landgraf von\nHessen-Kassel, from 1586. From measurements of altitudes and of angles between\nstars, given in the catalogue, we find that the measurement accuracy averages\n26 arcsec for eight fundamental stars, compared to 49 arcsec for the\nmeasurements by Brahe. The computation involved in converting altitudes to declinations\nand angles between stars to celestial positions is very accurate, with errors\nnegligible with respect to the measurement errors. Due to an offset in the\nposition of the vernal equinox, the positional error of the catalogue is\nslightly worse than that of Brahe's catalogue, but when correction is made for\nthe offset -- which was known to 17th century astronomers -- the catalogue is\nmore accurate than that of Brahe by a factor of two. We provide machine-readable\nTables of the catalogue.", "category": "astro-ph_IM" }, { "text": "Efficient least-squares basket-weaving: We report on a novel method to solve the basket-weaving problem.\nBasket-weaving is a technique that is used to remove scan-line patterns from\nsingle-dish radio maps. The new approach applies linear least-squares and works\non gridded maps from arbitrarily sampled data, which greatly improves\ncomputational efficiency and robustness. It also allows masking of bad data,\nwhich is useful for cases where radio frequency interference is present in the\ndata.
We evaluate the algorithms using simulations and real data obtained with\nthe Effelsberg 100-m telescope.", "category": "astro-ph_IM" }, { "text": "A template method for measuring the iron spectrum in cosmic rays with\n Cherenkov telescopes: The energy-dependent abundance of elements in cosmic rays plays an important\nrole in understanding their acceleration and propagation. Most current results\nare obtained either from direct measurements by balloon- or satellite-borne\ndetectors, or from indirect measurements by air shower detector arrays on the\nEarth's surface. Imaging Atmospheric Cherenkov Telescopes (IACTs), used\nprimarily for $\\gamma$-ray astronomy, can also be used for cosmic-ray physics.\nThey are able to measure Cherenkov light emitted both by heavy nuclei and by\nsecondary particles produced in air showers, and are thus sensitive to the\ncharge and energy of cosmic ray particles with energies of tens to hundreds of\nTeV. A template-based method, which can be used to reconstruct the charge and\nenergy of primary particles simultaneously from images taken by IACTs, will be\nintroduced. Heavy nuclei, such as iron, can be separated from lighter cosmic\nrays with this method, and thus the abundance and spectrum of these nuclei can\nbe measured in the range of tens to hundreds of TeV.", "category": "astro-ph_IM" }, { "text": "Characterization and correction of charge-induced pixel shifts in DECam: Interaction of charges in CCDs with the already accumulated charge\ndistribution causes both a flux dependence of the point-spread function (an\nincrease of observed size with flux, also known as the brighter/fatter effect)\nand pixel-to-pixel correlations of the Poissonian noise in flat fields. We\ndescribe these effects in the Dark Energy Camera (DECam) with charge dependent\nshifts of effective pixel borders, i.e. the Antilogus et al. (2014) model,\nwhich we fit to measurements of flat-field Poissonian noise correlations. The\nlatter fall off approximately as a power law $r^{-2.5}$ with pixel separation $r$,\nare isotropic except for an asymmetry in the direct neighbors along rows and\ncolumns, are stable in time, and are weakly dependent on wavelength. They show\nvariations from chip to chip at the 20% level that correlate with the silicon\nresistivity. The charge shifts predicted by the model cause biased shape\nmeasurements, primarily due to their effect on bright stars, at levels\nexceeding weak lensing science requirements. We measure the flux dependence of\nstar images and show that the effect can be mitigated by applying the reverse\ncharge shifts at the pixel level during image processing. Differences in\nstellar size, however, remain significant due to residuals at larger distance\nfrom the centroid.", "category": "astro-ph_IM" }, { "text": "Optimal Probabilistic Catalogue Matching for Radio Sources: Cross-matching catalogues from radio surveys to catalogues of sources at\nother wavelengths is extremely hard, because radio sources are often extended,\noften consist of several spatially separated components, and often no radio\ncomponent is coincident with the optical/infrared host galaxy. Traditionally,\nthe cross-matching is done by eye, but this does not scale to the millions of\nradio sources expected from the next generation of radio surveys.
We present an\ninnovative automated procedure, using Bayesian hypothesis testing, that models\ntrial radio-source morphologies with putative positions of the host galaxy.\nThis new algorithm differs from an earlier version by allowing more complex\nradio source morphologies, and performing a simultaneous fit over a large\nfield. We show that this technique performs well in an unsupervised mode.", "category": "astro-ph_IM" }, { "text": "Reaching Diverse Groups in Long-Term Astronomy Public Engagement Efforts: Professional astronomy is historically not an environment of diverse\nidentities. Recognizing that public outreach efforts affect career outcomes\nfor young people, it is important to assess the demographics of those being\nreached and continually consider strategies for successfully engaging\nunderrepresented groups. One such outreach event, the International\nAstronomical Youth Camp (IAYC), has a 50-year history and has reached ~1700\nparticipants from around the world. We find that the IAYC is doing well in\nterms of gender (59% female, 4.7% non-binary at the most recent camp) and LGBT+\nrepresentation, whereas black and ethnic minorities are lacking. In this\nproceeding, we report the current landscape of demographics applying to and\nattending the IAYC; the efforts we are making to increase diversity amongst\nparticipants; the challenges we face; and our future plans to bridge these\ngaps, not only for the benefit of the camp but for society overall.", "category": "astro-ph_IM" }, { "text": "Analysis of a Custom Support Vector Machine for Photometric Redshift\n Estimation and the Inclusion of Galaxy Shape Information: Aims: We present a custom support vector machine classification package for\nphotometric redshift estimation, including comparisons with other methods. We\nalso explore the efficacy of including galaxy shape information in redshift\nestimation. Support vector machines, a type of machine learning, utilize\noptimization theory and supervised learning algorithms to construct predictive\nmodels based on the information content of data in a way that can treat\ndifferent input features symmetrically.\n Methods: The custom support vector machine package we have developed is\ndesignated SPIDERz and made available to the community. As test data for\nevaluating performance and comparison with other methods, we apply SPIDERz to\nfour distinct data sets: 1) the publicly available portion of the PHAT-1\ncatalog based on the GOODS-N field with spectroscopic redshifts in the range $z\n< 3.6$, 2) 14365 galaxies from the COSMOS bright survey with photometric band\nmagnitudes, morphology, and spectroscopic redshifts inside $z < 1.4$, 3) 3048\ngalaxies from the overlap of COSMOS photometry and morphology with 3D-HST\nspectroscopy extending to $z < 3.9$, and 4) 2612 galaxies with five-band\nphotometric magnitudes and morphology from the All-wavelength Extended Groth\nStrip International Survey and $z < 1.57$.\n Results: We find that SPIDERz achieves results competitive with other\nempirical packages on the PHAT-1 data, and performs quite well in estimating\nredshifts with the COSMOS and AEGIS data, including in the cases of a large\nredshift range ($0 < z < 3.9$). We also determine from analyses with both the\nCOSMOS and AEGIS data that the inclusion of morphological information does not\nhave a statistically significant benefit for photometric redshift estimation\nwith the techniques employed here.", "category": "astro-ph_IM" }, { "text": "COMAP Early Science: II.
Pathfinder Instrument: Line intensity mapping (LIM) is a new technique for tracing the global\nproperties of galaxies over cosmic time. Detection of the very faint signals\nfrom redshifted carbon monoxide (CO), a tracer of star formation, pushes the\nlimits of what is feasible with a total-power instrument. The CO Mapping\nProject (COMAP) Pathfinder is a first-generation instrument aiming to prove the\nconcept and develop the technology for future experiments, as well as\ndelivering early science products. With 19 receiver channels in a hexagonal\nfocal plane arrangement on a 10.4 m antenna, and an instantaneous 26-34 GHz\nfrequency range with 2 MHz resolution, it is ideally suited to measuring\nCO($J$=1-0) from $z\\sim3$. In this paper we discuss strategies for designing\nand building the Pathfinder and the challenges that were encountered. The\ndesign of the instrument prioritized LIM requirements over those of ancillary\nscience. After a couple of years of operation, the instrument is well\nunderstood, and the first year of data is already yielding useful science\nresults. Experience with this Pathfinder will drive the design of the next\ngenerations of experiments.", "category": "astro-ph_IM" }, { "text": "Stout: Cloudy's Atomic and Molecular Database: We describe a new atomic and molecular database we developed for use in the\nspectral synthesis code Cloudy. The design of Stout is driven by the data needs\nof Cloudy, which simulates molecular, atomic, and ionized gas with kinetic\ntemperatures 2.8 K < T < $10^{10}$ K and densities spanning the low- to high-density\nlimits. The radiation field between photon energies $10^{-8}$ Ry and 100 MeV is\nconsidered, along with all atoms and ions of the lightest 30 elements, and ~100\nmolecules. For ease of maintenance, the data are stored in a format as close as\npossible to the original data sources. Few data sources include the full range\nof data we need. We describe how we fill in the gaps in the data or extrapolate\nrates beyond their tabulated range. We tabulate data sources both for the\natomic spectroscopic parameters and for collision data for the next release of\nCloudy. This is not intended as a review of the current status of atomic data,\nbut rather a description of the features of the database which we will build\nupon.", "category": "astro-ph_IM" }, { "text": "Adapting the PyCBC pipeline to find and infer the properties of\n gravitational waves from massive black hole binaries in LISA: The Laser Interferometer Space Antenna (LISA), due for launch in the\nmid-2030s, is expected to observe gravitational waves (GWs) from merging massive\nblack hole binaries (MBHBs). These signals can last from days to months,\ndepending on the masses of the black holes, and are expected to be observed\nwith high signal-to-noise ratios (SNRs) out to high redshifts. We have adapted\nthe PyCBC software package to enable a template bank search and inference of\nGWs from MBHBs. The pipeline is tested on Challenge 2a (\"Sangria\") of the LISA\nData Challenge (LDC), which contains MBHBs and thousands of\ngalactic binaries (GBs) in simulated instrumental LISA noise. Our search\nidentifies all 6 MBHB signals with more than $92\\%$ of the optimal SNR. The\nsubsequent parameter inference step recovers the masses and spins within their\n$90\\%$ confidence interval. The sky position parameters have 8 high-likelihood\nmodes, which are recovered, but our posteriors often favour the incorrect sky\nmode.
We observe that the addition of GBs biases the parameter recovery of\nmasses and spins away from the injected values, reinforcing the need for a\nglobal fit pipeline which will simultaneously fit the parameters of the GB\nsignals before estimating the parameters of MBHBs.", "category": "astro-ph_IM" }, { "text": "Astrobiological Complexity with Probabilistic Cellular Automata: The search for extraterrestrial life and intelligence constitutes one of the\nmajor endeavors in science, but has so far been quantitatively modeled only\nrarely, and then in a cursory and superficial fashion. We argue that probabilistic cellular\nautomata (PCA) represent the best quantitative framework for modeling\nastrobiological history of the Milky Way and its Galactic Habitable Zone. The\nrelevant astrobiological parameters are to be modeled as the elements of the\ninput probability matrix for the PCA kernel. With the underlying simplicity of\nthe cellular automata constructs, this approach enables a quick analysis of\na large and ambiguous input parameter space. We perform a simple clustering\nanalysis of typical astrobiological histories and discuss the relevant boundary\nconditions of practical importance for planning and guiding actual empirical\nastrobiological and SETI projects. In addition to showing how the present\nframework is adaptable to more complex situations and updated observational\ndatabases from current and near-future space missions, we demonstrate how\nnumerical results could offer a cautious rationale for continuation of\npractical SETI searches.", "category": "astro-ph_IM" }, { "text": "A Novel Greedy Approach To Harmonic Summing Using GPUs: Incoherent harmonic summing is a technique which is used to improve the\nsensitivity of Fourier domain search methods. A one-dimensional harmonic sum is\nused in time-domain radio astronomy as part of the Fourier domain periodicity\nsearch, a type of search used to detect isolated single pulsars. The main\nproblem faced when implementing the harmonic sum on many-core architectures,\nlike GPUs, is the very unfavourable memory access pattern of the harmonic sum\nalgorithm. The memory access pattern gets worse as the dimensionality of the\nharmonic sum increases. Here we present a set of algorithms for calculating the\nharmonic sum that are suited to many-core architectures such as GPUs. We\npresent an evaluation of the sensitivity of these different approaches, and\ntheir performance. This work forms part of the AstroAccelerate project which is\na GPU accelerated software package for processing time-domain radio astronomy\ndata.", "category": "astro-ph_IM" }, { "text": "The Simons Observatory 220 and 280 GHz Focal-Plane Module: Design and\n Initial Characterization: The Simons Observatory (SO) will detect and map the temperature and\npolarization of the millimeter-wavelength sky from Cerro Toco, Chile across a\nrange of angular scales, providing rich data sets for cosmological and\nastrophysical analysis. The SO focal planes will be tiled with compact\nhexagonal packages, called Universal Focal-plane Modules (UFMs), in which the\ntransition-edge sensor (TES) detectors are coupled to 100 mK\nmicrowave-multiplexing electronics. Three different types of dichroic TES\ndetector arrays with bands centered at 30/40, 90/150, and 220/280 GHz will be\nimplemented across the 49 planned UFMs. The 90/150 GHz and 220/280 GHz arrays\neach contain 1,764 TESes, which are read out with two 910x multiplexer\ncircuits. 
The modules contain a series of densely routed silicon chips, which\nare packaged together in a controlled electromagnetic environment with robust\nheat-sinking to 100 mK. Following an overview of the module design, we report\non early results from the first 220/280 GHz UFM, including detector yield, as\nwell as readout and detector noise levels.", "category": "astro-ph_IM" }, { "text": "High-Contrast Testbeds for Future Space-Based Direct Imaging Exoplanet\n Missions: Instrumentation techniques in the field of direct imaging of exoplanets have\ngreatly advanced over the last two decades. Two of the four NASA-commissioned\nlarge concept studies involve a high-contrast instrument for the imaging and\nspectral characterization of exo-Earths from space: LUVOIR and HabEx. This\nwhitepaper describes the status of 8 optical testbeds in the US and France\ncurrently in operation to experimentally validate the necessary technologies to\nimage exo-Earths from space. They explore two complementary axes of research:\n(i) coronagraph designs and manufacturing and (ii) active wavefront correction\nmethods and technologies. Several instrument architectures are currently being\nanalyzed in parallel to provide more degrees of freedom for designing the\nfuture coronagraphic instruments. The necessary level of performance has\nalready been demonstrated in the laboratory for clear off-axis telescopes\n(HabEx-like), and important efforts are currently in development to reproduce\nthis accomplishment on segmented and/or on-axis telescopes (LUVOIR-like) over\nthe next two years.", "category": "astro-ph_IM" }, { "text": "Planck-LFI radiometers tuning: \"This paper is part of the Prelaunch status LFI papers published on JINST:\nhttp://www.iop.org/EJ/journal/-page=extra.proc5/jinst\"\n This paper describes the Planck Low Frequency Instrument tuning activities\nperformed through the ground test campaigns, from Unit to Satellite Levels.\nTuning is key to achieving the best possible instrument performance, and tuning\nparameters strongly depend on thermal and electrical conditions. For this\nreason, tuning has been repeated several times during ground tests and it has\nbeen repeated in flight before starting nominal operations. The paper discusses\nthe tuning philosophy, the activities and the obtained results, highlighting\ndevelopments and changes that occurred during test campaigns. The paper concludes\nwith an overview of tuning performed during the satellite cryogenic test\ncampaign (Summer 2008) and of the plans for the just started in-flight\ncalibration.", "category": "astro-ph_IM" }, { "text": "A tomographic algorithm to determine tip-tilt information from laser\n guide stars: Laser Guide Stars (LGS) have greatly increased the sky-coverage of Adaptive\nOptics (AO) systems. Due to the up-link turbulence experienced by LGSs, a\nNatural Guide Star (NGS) is still required, preventing full sky-coverage. We\npresent a method of obtaining partial tip-tilt information from LGSs alone in\nmulti-LGS tomographic LGS AO systems. The method of LGS up-link tip-tilt\ndetermination is derived using a geometric approach, then an alteration to the\nLearn and Apply algorithm for tomographic AO is made to accommodate up-link\ntip-tilt. Simulation results are presented, verifying that the technique shows\ngood performance in correcting high-altitude tip-tilt, but not that from low\naltitudes. 
We suggest that the method be combined with multiple far off-axis\ntip-tilt NGSs to provide gains in performance and sky-coverage over current\ntomographic AO systems.", "category": "astro-ph_IM" }, { "text": "Information field theory: Non-linear image reconstruction and signal analysis deal with complex inverse\nproblems. To tackle such problems in a systematic way, I present information\nfield theory (IFT) as a means of Bayesian, data based inference on spatially\ndistributed signal fields. IFT is a statistical field theory, which permits the\nconstruction of optimal signal recovery algorithms even for non-linear and\nnon-Gaussian signal inference problems. IFT algorithms exploit spatial\ncorrelations of the signal fields and benefit from techniques developed to\ninvestigate quantum and statistical field theories, such as Feynman diagrams,\nre-normalisation calculations, and thermodynamic potentials. The theory can be\nused in many areas, and applications in cosmology and numerics are presented.", "category": "astro-ph_IM" }, { "text": "Optimum Acceptance Regions for Direct Dark Matter Searches: Most experiments that search for direct interactions of WIMP dark matter with\na target can distinguish the dominant electron-recoil background from the\nnuclear recoil signal, based on some discrimination parameter. An acceptance\nregion is defined in the parameter space spanned by the recoil energy and this\ndiscrimination parameter. In the absence of a clear signal in this region, a\nlimit is calculated on the dark matter scattering cross section. Here, an\nalgorithm is presented that allows one to define the acceptance region a priori\nsuch that the experiment has the best sensitivity. This is achieved through\noptimized acceptance regions for each WIMP model and WIMP mass that is to be\nprobed. Using recent data from the CRESST-II experiment as an example, it is\nshown that the resulting limits can be substantially stronger than those from a\nconventional acceptance region. In an experiment with a segmented target, the\nalgorithm developed here can yield different acceptance regions for the\nindividual subdetectors. Hence, it is shown how to combine the data\nconsistently within the usual Maximum Gap or Optimum Interval framework.", "category": "astro-ph_IM" }, { "text": "The BINGO Project II: Instrument Description: The measurement of diffuse 21-cm radiation from the hyperfine transition of\nneutral hydrogen (HI signal) at different redshifts is an important tool for\nmodern cosmology. However, detecting this faint signal with non-cryogenic\nreceivers in single-dish telescopes is a challenging task. The BINGO (Baryon\nAcoustic Oscillations from Integrated Neutral Gas Observations) radio telescope\nis an instrument designed to detect baryonic acoustic oscillations (BAOs) in\nthe cosmological HI signal, in the redshift interval $0.127 \le z \le 0.449$.\nThis paper describes the BINGO radio telescope, including the current status of\nthe optics, receiver, observational strategy, calibration, and the site. BINGO\nhas been carefully designed to minimize systematics, being a transit instrument\nwith no moving dishes and 28 horns operating in the frequency range $980 \le\n\nu \le 1260$ MHz. 
Comprehensive laboratory tests were conducted for many of\nthe BINGO subsystems, and the prototypes of the receiver chain, horn, polarizer,\nmagic tees, and transitions have been successfully tested between 2018 and 2020.\nThe survey was designed to cover $\sim 13\%$ of the sky, with the primary\nmirror pointing at declination $\delta=-15^{\circ}$. The telescope will see an\ninstantaneous declination strip of $14.75^{\circ}$. The results of the\nprototype tests closely match those obtained during the modeling process,\nsuggesting BINGO will perform according to our expectations. After one year of\nobservations with a $60\%$ duty cycle and 28 horns, BINGO should achieve an\nexpected sensitivity of 102 $\mu K$ per 9.33 MHz frequency channel, one\npolarization, and be able to measure the HI power spectrum in a competitive\ntime frame.", "category": "astro-ph_IM" }, { "text": "Automated Speckle Interferometry of Known Binaries: Astronomers have been measuring the separations and position angles between\nthe two components of binary stars since William Herschel began his\nobservations in 1781. In 1970, Anton Labeyrie pioneered a method, speckle\ninterferometry, that overcomes the usual resolution limits induced by\natmospheric turbulence by taking hundreds or thousands of short exposures and\nreducing them in Fourier space. Our 2022 automation of speckle interferometry\nallowed us to use a fully robotic 1.0-meter PlaneWave Instruments telescope,\nlocated at the El Sauce Observatory in the Atacama Desert of Chile, to obtain\nobservations of many known binaries with established orbits. The long-term\nobjective of these observations is to establish the precision, accuracy, and\nlimitations of this telescope's automated speckle interferometry measurements.\nThis paper provides an early overview of the Known Binaries Project and provides\nexample results on a small-separation (0.27\") binary, WDS 12274-2843 B 228.", "category": "astro-ph_IM" }, { "text": "LIGO series, dimension of embedding and Kolmogorov's complexity: The interpretation of the series recorded by the Laser Interferometer\nGravitational Wave Observatory is a very important issue. Naturally, it is not\nfree of controversy. Here we apply two methods widely used in the study of\nnonlinear dynamical systems, namely, the calculation of Takens' dimension of\nembedding and the spectrum of Kolmogorov's complexity, to the series recorded\nin event GW150914. An increase of the former and a drop of the latter are\nobserved, consistent with the claimed appearance of a gravitational wave. We\npropose these methods as additional tools to help identify signals of\ncosmological interest.", "category": "astro-ph_IM" }, { "text": "4MOST Consortium Survey 10: The Time-Domain Extragalactic Survey (TiDES): The Time-Domain Extragalactic Survey (TiDES) is focused on the spectroscopic\nfollow-up of extragalactic optical transients and variable sources selected\nfrom forthcoming large sky surveys such as that from the Large Synoptic Survey\nTelescope (LSST). TiDES contains three sub-surveys: (i) spectroscopic\nobservations of supernova-like transients; (ii) comprehensive follow-up of\ntransient host galaxies to obtain redshift measurements for cosmological\napplications; and (iii) repeat spectroscopic observations to enable the\nreverberation mapping of active galactic nuclei. 
Our simulations predict we\nwill be able to classify transients down to $r = 22.5$ magnitudes (AB) and,\nover five years of 4MOST operations, obtain spectra for up to 30,000 live\ntransients to redshift $z \sim 0.5$, measure redshifts for up to 50,000\ntransient host galaxies to $z \sim 1$ and monitor around 700 active galactic\nnuclei to $z \sim 2.5$.", "category": "astro-ph_IM" }, { "text": "Miniature X-Ray Solar Spectrometer (MinXSS) - A Science-Oriented,\n University 3U CubeSat: The Miniature X-ray Solar Spectrometer (MinXSS) is a 3-Unit (3U) CubeSat\ndeveloped at the Laboratory for Atmospheric and Space Physics (LASP) at the\nUniversity of Colorado, Boulder (CU). Over 40 students contributed to the\nproject with professional mentorship and technical contributions from\nprofessors in the Aerospace Engineering Sciences Department at CU and from LASP\nscientists and engineers. The scientific objective of MinXSS is to study\nprocesses in the dynamic Sun, from quiet-Sun to solar flares, and to further\nunderstand how these changes in the Sun influence the Earth's atmosphere by\nproviding unique spectral measurements of solar soft x-rays (SXRs). The\nenabling technology providing the advanced solar SXR spectral measurements is\nthe Amptek X123, a commercial-off-the-shelf (COTS) silicon drift detector\n(SDD). The Amptek X123 has a low mass (~324 g after modification), modest power\nconsumption (~2.50 W), and small volume (6.86 cm x 9.91 cm x 2.54 cm), making\nit ideal for a CubeSat. This paper provides an overview of the MinXSS mission:\nthe science objectives, project history, subsystems, and lessons learned that\ncan be useful for the small-satellite community.", "category": "astro-ph_IM" }, { "text": "Daemons: Detection at Pulkovo, Gran Sasso, and Soudan: During a week of the March maximum in 2011, two oppositely installed\ndirection-sensitive TEU-167d Dark Electron Multipliers (DEMs) recorded a flux\nof daemons from the near-Earth almost circular heliocentric orbits (NEACHOs).\nThe flux measured from above is f \approx (8\pm3)\times10^-7 cm^-2 s^-1, and\nthat from below is half as large. The difference may be due both to specific\ndesign features of the TEUs themselves, and to dissimilarities in the slope of\ntrajectories along which objects arrive from above or from below. It is\nshown that the daemon paradigm enables a quantitative interpretation of DAMA\nand CoGeNT experiments with no additional hypotheses. Both experiments\nrecord a daemon flux of f ~ 10^-6 cm^-2 s^-1 from strongly elongated\nEarth-crossing heliocentric orbits (SEECHOs), predecessors of NEACHOs.\nRecommendations are given for the processing of DAMA/LIBRA data, which\nunambiguously suggest that, in approximately half of cases (when double events\noccur in the detector and are rejected in processing under a single-hit\ncriterion), the signals being recorded are successively excited by a single\nSEECHO object along a path of ~1 m, i.e., this is not a WIMP. It is noted that\ndue regard to cascade events and pair interaction of ions will weaken the\nadverse influence exerted by the blocking effect on the channeling of iodine\nions knocked out in a NaI(Tl) crystal. 
This influence will be less\ncatastrophic than simplified semi-analytical models of the\nprocess suggest: one might expect the energy of up to ~10% of the primary recoil iodine\nions to be converted into scintillation light.", "category": "astro-ph_IM" }, { "text": "On the Estimation of the Depth of Maximum of Extensive Air Showers Using\n the Steepness Parameter of the Lateral Distribution of Cherenkov Radiation: Using Monte Carlo simulation of extensive air showers, we showed that the\nmaximum depth of showers, $X_{max}$, can be estimated using $P=Q(100)/Q(200)$,\nthe ratio of Cherenkov photon densities at 100 and 200 meters from the shower\ncore, which is known as the steepness parameter of the lateral distribution of\nCherenkov radiation on the ground. A simple quadratic model has been fitted to\na set of data from simulated extensive air showers, relating the steepness\nparameter and the shower maximum depth. Then the model has been tested on\nanother set of simulated showers. The average difference between the actual\nmaximum depth of the simulated showers and the maximum depth obtained from the\nlateral distribution of Cherenkov light is about 9 $g/cm^2$. In addition, the\npossibility of a more direct estimation of the mass of the initial particle\nfrom $P$ has been investigated. An exponential relation between these two\nquantities has been fitted. Applying the model to another set of showers, we\nfound that the average difference between the estimated and the actual mass of\nprimary particles is less than 0.5 atomic mass unit.", "category": "astro-ph_IM" }, { "text": "Applied Machine-Learning Models to Identify Spectral Sub-Types of M\n Dwarfs from Photometric Surveys: M dwarfs are the most abundant stars in the Solar Neighborhood and they are\nprime targets for searching for rocky planets in habitable zones. Consequently,\na detailed characterization of these stars is in demand. The spectral sub-type\nis one of the parameters that is used for the characterization, and it is\ntraditionally derived from the observed spectra. However, obtaining the spectra\nof M dwarfs is expensive in terms of observation time and resources due to\ntheir intrinsic faintness. We study the performance of four machine-learning\n(ML) models: K-Nearest Neighbor (KNN), Random Forest (RF), Probabilistic Random\nForest (PRF), and Multilayer Perceptron (MLP), in identifying the spectral\nsub-types of M dwarfs at a grand scale by deploying broadband photometry in the\noptical and near-infrared. We trained the ML models by using the\nspectroscopically identified M dwarfs from the Sloan Digital Sky Survey (SDSS)\nData Release 7, together with their photometric colors that were derived from\nthe SDSS, Two-Micron All-Sky Survey, and Wide-field Infrared Survey Explorer.\nWe found that the RF, PRF, and MLP give a comparable prediction accuracy, 74%,\nwhile the KNN provides slightly lower accuracy, 71%. We also found that these\nmodels can predict the spectral sub-type of M dwarfs with ~99% accuracy within\n+/-1 sub-type. The five most useful features for the prediction are r-z, r-i,\nr-J, r-H, and g-z, and hence lacking data in all SDSS bands substantially\nreduces the prediction accuracy. However, we can achieve an accuracy of over\n70% when the r and i magnitudes are available. Since the stars in this study\nare nearby (d~1300 pc for 95% of the stars), the dust extinction can reduce the\nprediction accuracy by only 3%. 
Finally, we used our optimized RF models to\npredict the spectral sub-types of M dwarfs from the Catalog of Cool Dwarf\nTargets for TESS, and we provide the optimized RF models for public use.", "category": "astro-ph_IM" }, { "text": "The SVOM gamma-ray burst mission: We briefly present the science capabilities, the instruments, the operations,\nand the expected performance of the SVOM mission. SVOM (Space-based multiband\nastronomical Variable Objects Monitor) is a Chinese-French space mission\ndedicated to the study of Gamma-Ray Bursts (GRBs) in the next decade. The SVOM\nmission encompasses a satellite carrying four instruments to detect and\nlocalize the prompt GRB emission and measure the evolution of the afterglow in\nthe visible band and in X-rays, a VHF communication system enabling the fast\ntransmission of SVOM alerts to the ground, and a ground segment including a\nwide-angle camera and two follow-up telescopes. The pointing strategy of the\nsatellite has been optimized to favor the detection of GRBs located in the\nnight hemisphere. This strategy enables the study of the optical emission in\nthe first minutes after the GRB with robotic observatories and the early\nspectroscopy of the optical afterglow with large telescopes to measure the\nredshifts. The study of GRBs in the next decade will benefit from a number of\nlarge facilities at all wavelengths that will contribute to increasing the\nscientific return of the mission. Finally, SVOM will operate in the era of the\nnext generation of gravitational wave detectors, greatly contributing to\nsearches for the electromagnetic counterparts of gravitational wave triggers at\nX-ray and gamma-ray energies.", "category": "astro-ph_IM" }, { "text": "Optimal detuning for quantum filter cavities: Vacuum quantum fluctuations impose a fundamental limit on the sensitivity of\ngravitational-wave interferometers, which rank among the most sensitive\nprecision measurement devices ever built. The injection of conventional\nsqueezed vacuum reduces quantum noise in one quadrature at the expense of\nincreasing noise in the other. While this approach improved the sensitivity of\nthe Advanced LIGO and Advanced Virgo interferometers during their third\nobserving run (O3), future improvements in arm power and squeezing levels will\nbring radiation pressure noise to the forefront. Installation of a filter\ncavity for frequency-dependent squeezing provides broadband reduction of\nquantum noise through the mitigation of this radiation pressure noise, and it\nis the baseline approach planned for all of the future gravitational-wave\ndetectors currently conceived. The design and operation of a filter cavity\nrequires careful consideration of interferometer optomechanics as well as\nsqueezing degradation processes. In this paper, we perform an in-depth analysis\nto determine the optimal operating point of a filter cavity. We use our model\nalongside numerical tools to study the implications for filter cavities to be\ninstalled in the upcoming \"A+\" upgrade of the Advanced LIGO detectors.", "category": "astro-ph_IM" }, { "text": "Sub-Kelvin Cooling for the BICEP Array Project: In the field of astrophysics, the faint signal from distant galaxies and\nother dim cosmological sources at millimeter and submillimeter wavelengths\nrequires the use of high-sensitivity experiments. 
Cryogenics and the use of\nlow-temperature detectors are essential to the accomplishment of the scientific\nobjectives, allowing lower detector noise levels and improved instrument\nstability. Bolometric detectors are usually cooled to temperatures below 1K,\nand the constraints on the instrument are stringent, whether the experiment is\na space-based platform or a ground-based telescope. The latter are usually\ndeployed in remote and harsh environments such as the South Pole, where\nmaintenance needs to be kept minimal. CEA-SBT has acquired a strong heritage in\nthe development of vibration-free multistage helium-sorption coolers, which can\nprovide cooling down to 200 mK when mounted on a cold stage at temperatures\n<5K. In this paper, we focus on the development of a three-stage cooler\ndedicated to the BICEP Array project led by Caltech/JPL, which aims to study\nthe birth of the Universe and specifically the unique B-mode pattern imprinted\nby primordial gravitational waves on the polarization of the Cosmic Microwave\nBackground. Several cryogenic receivers are being developed, each featuring one\nsuch helium-sorption cooler operated from a 4K stage cooled by a Cryomech\npulse-tube with heat lifts of >1.35W at 4.2K and >36W at 45K. The major\nchallenge of this project is the large masses to be cooled to sub-kelvin\ntemperatures (26 kg at 250mK) and the resulting long cool-down time, which in\nthis novel cooler design is kept to a minimum with the implementation of\npassive and active thermal links between different temperature stages. A first\nunit has been sized to provide 230, 70 and 2{\\mu}W of net heat lifts at the\nmaximum temperatures of 2.8K, 340 and 250mK, respectively, for a minimum\nduration of 48 hours.", "category": "astro-ph_IM" }, { "text": "Concept of multiple-cell cavity for axion dark matter search: In cavity-based axion dark matter search experiments exploring high mass\nregions, multiple-cavity design is considered to increase the detection volume\nwithin a given magnet bore. We introduce a new idea, referred to as\nmultiple-cell cavity, which provides various benefits including a larger\ndetection volume, simpler experimental setup, and easier phase-matching\nmechanism. We present the characteristics of this concept and demonstrate the\nexperimental feasibility with an example of a double-cell cavity.", "category": "astro-ph_IM" }, { "text": "KilonovaNet: Surrogate Models of Kilonova Spectra with Conditional\n Variational Autoencoders: Detailed radiative transfer simulations of kilonova spectra play an essential\nrole in multimessenger astrophysics. Using the simulation results in parameter\ninference studies requires building a surrogate model from the simulation\noutputs to use in algorithms requiring sampling. In this work, we present\nKilonovaNet, an implementation of conditional variational autoencoders (cVAEs)\nfor the construction of surrogate models of kilonova spectra. This method can\nbe trained on spectra directly, removing overhead time of pre-processing\nspectra, and greatly speeds up parameter inference time. We build surrogate\nmodels of three state-of-the-art kilonova simulation data sets and present\nin-depth surrogate error evaluation methods, which can in general be applied to\nany surrogate construction method. By creating synthetic photometric\nobservations from the spectral surrogate, we perform parameter inference for\nthe observed light curve data of GW170817 and compare the results with previous\nanalyses. 
Given the speed with which KilonovaNet performs during parameter\ninference, it will serve as a useful tool in future gravitational wave\nobserving runs to quickly analyze potential kilonova candidates.", "category": "astro-ph_IM" }, { "text": "Reduced Order Estimation of the Speckle Electric Field History for\n Space-Based Coronagraphs: In high-contrast space-based coronagraphs, one of the main limiting factors\nfor imaging the dimmest exoplanets is the time-varying nature of the residual\nstarlight (speckles). Modern methods try to differentiate between the\nintensities of starlight and other sources, but none incorporate models of\nspace-based systems which can take into account actuations of the deformable\nmirrors. Instead, we propose formulating the estimation problem in terms of the\nelectric field while allowing for dithering of the deformable mirrors. Our\nreduced-order approach is similar to intensity-based PCA (e.g. KLIP) although,\nunder certain assumptions, it requires a considerably lower number of modes of\nthe electric field. We illustrate this by a FALCO simulation of the WFIRST\nhybrid Lyot coronagraph.", "category": "astro-ph_IM" }, { "text": "A High Sensitivity Fourier Transform Spectrometer for Cosmic Microwave\n Background Observations: The QUIJOTE Experiment was developed to study the polarization in the Cosmic\nMicrowave Background (CMB) over the frequency range of 10-50 GHz. Its first\ninstrument, the Multi Frequency Instrument (MFI), measures in the range 10-20\nGHz, which coincides with one of the naturally transparent windows in the\natmosphere. The Tenerife Microwave Spectrometer (TMS) has been designed to\ninvestigate the spectrum between 10-20 GHz in more detail. The MFI bands are 2\nGHz wide whereas the TMS bands will be 250 MHz wide covering the complete 10-20\nGHz range with one receiver chain and Fourier spectral filter bank. It is\nexpected that the relative calibration between frequency bands will be better\nknown than for the MFI channels and that the higher resolution will provide\nessential information on narrow band interference and features such as ozone.\nThe TMS will study the atmospheric spectra as well as provide key information\non the viability of ground-based absolute spectral measurements. Here the novel\nFourier transform spectrometer design is described showing its suitability for\nwide-band measurement and its $\sqrt{N}$ advantage over the usual scanning\ntechniques.", "category": "astro-ph_IM" }, { "text": "A Parallel Monte Carlo Code for Simulating Collisional N-body Systems: We present a new parallel code for computing the dynamical evolution of\ncollisional N-body systems with up to N~10^7 particles. Our code is based on\nthe Henon Monte Carlo method for solving the Fokker-Planck equation, and\nmakes assumptions of spherical symmetry and dynamical equilibrium. The\nprincipal algorithmic developments involve optimizing data structures, and the\nintroduction of a parallel random number generation scheme, as well as a\nparallel sorting algorithm, required to find nearest neighbors for interactions\nand to compute the gravitational potential. The new algorithms we introduce\nalong with our choice of decomposition scheme minimize communication costs and\nensure optimal distribution of data and workload among the processing units.\nThe implementation uses the Message Passing Interface (MPI) library for\ncommunication, which makes it portable to many different supercomputing\narchitectures. 
We validate the code by calculating the evolution of clusters\nwith initial Plummer distribution functions up to core collapse with the number\nof stars, N, spanning three orders of magnitude, from 10^5 to 10^7. We find\nthat our results are in good agreement with self-similar core-collapse\nsolutions, and the core collapse times generally agree with expectations from\nthe literature. Also, we observe good total energy conservation, within less\nthan 0.04% throughout all simulations. We analyze the performance of the code,\nand demonstrate near-linear scaling of the runtime with the number of\nprocessors up to 64 processors for N=10^5, 128 for N=10^6 and 256 for N=10^7.\nThe runtime reaches a saturation with the addition of more processors beyond\nthese limits, which is a characteristic of the parallel sorting algorithm. The\nresulting maximum speedups we achieve are approximately 60x, 100x, and 220x,\nrespectively.", "category": "astro-ph_IM" }, { "text": "The LSST era of supermassive black holes accretion-disk reverberation\n mapping: The Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) will\ndetect an unprecedentedly large sample of actively accreting supermassive black\nholes with typical accretion disk (AD) sizes of a few light days. This brings\nus to face challenges in the reverberation mapping (RM) measurement of AD sizes\nin active galactic nuclei (AGNs) using interband continuum delays. We examine\nthe effect of LSST cadence strategies on AD RM using our metric\nAGNTimeLagMetric. It accounts for redshift, cadence, the magnitude limit, and\nmagnitude corrections for dust extinction. Running our metric on different LSST\ncadence strategies, we produce an atlas of the performance estimations for LSST\nphotometric RM measurements. We provide an upper limit on the estimated number\nof quasars for which the AD time lag can be computed within 0<z<7 (>1000 sources in each Deep Drilling field (DDF,\n10 sq. deg) in any filter), with the redshift distribution of these sources\npeaking at z~1. We find the LSST observation strategies with a good cadence (~\n5 days) and a long cumulative season (~9 yr), as proposed for LSST DDF, are\nfavored for the AD size measurement. We create synthetic LSST light curves for\nthe most suitable DDF cadences and determine RM time lags to demonstrate the\nimpact of the best cadences based on the proposed metric.", "category": "astro-ph_IM" }, { "text": "Application of the optimised next neighbour image cleaning method to the\n VERITAS array: Imaging atmospheric Cherenkov telescopes, such as the VERITAS array, are\nsubject to the Night Sky Background (NSB) and electronic noise, which\ncontribute to the total signal of pixels in the telescope camera. The\ncontribution of noise photons in event images is reduced with the application\nof image cleaning methods. Conventionally, high thresholds must be employed to\nensure the removal of pixels containing noise signal. On that account,\nlow-energy gamma-ray showers might be suppressed during the cleaning. We\npresent here the application of an optimised next neighbour image cleaning for\nthe VERITAS array. With this technique, differential noise rates are estimated\nfor each individual observation and thus changes in the NSB and afterpulsing\nare consistently accounted for. 
We show that this method increases the\noverall rate of reconstructed gamma-rays, lowers the energy threshold of the\narray, and allows the reconstruction of low-energy (E > 70 GeV) source events\nwhich were suppressed by the conventional cleaning method.", "category": "astro-ph_IM" }, { "text": "Using multiobjective optimization to reconstruct interferometric data\n (II): polarimetry and time dynamics: In Very Long Baseline Interferometry (VLBI), signals from multiple antennas\ncombine to create a sparsely sampled virtual aperture, its effective diameter\ndetermined by the largest antenna separation. The inherent sparsity makes VLBI\nimaging an ill-posed inverse problem, prompting the use of algorithms like the\nMultiobjective Evolutionary Algorithm by Decomposition (MOEA/D), as proposed in\nthe first paper of this series. This study focuses on extending MOEA/D to\npolarimetric and time dynamic reconstructions, particularly relevant for the\nVLBI community and the Event Horizon Telescope Collaboration (EHTC). MOEA/D's\nsuccess in providing a unique, fast, and largely unsupervised representation of\nimage structure serves as the basis for exploring these extensions. The\nextension involves incorporating penalty terms specific to total intensity\nimaging, time-variable, and polarimetric variants within MOEA/D's\nmultiobjective, evolutionary framework. The Pareto front, representing\nnon-dominated solutions, is computed, revealing clusters of proximities.\nTesting MOEA/D with synthetic datasets representative of EHTC's main targets\ndemonstrates successful recovery of polarimetric and time-dynamic signatures\ndespite sparsity and realistic data corruptions. MOEA/D's extension proves\neffective in the anticipated EHTC setting, offering an alternative and\nindependent claim to existing methods. It not only explores the problem\nglobally but also eliminates the need for parameter surveys, distinguishing it\nfrom Regularized Maximum Likelihood (RML) methods. MOEA/D emerges as a novel\nand useful tool for robustly characterizing polarimetric and dynamic signatures\nin VLBI datasets with minimal user-based choices. Future work aims to address\nthe last remaining limitation of MOEA/D, specifically regarding the number of\npixels and numerical performance, to establish it within the VLBI data\nreduction pipeline.", "category": "astro-ph_IM" }, { "text": "Astronomy & Astrophysics in ICAD History: The International Conference on Auditory Display (ICAD) is a significant\nevent for researchers and practitioners interested in exploring the use of\nsound in conveying information and data. Since its inception in 1994, the\nconference has served as a vital forum for exchanging ideas and presenting\nresearch findings in the field of auditory display. While the conference\nprimarily focuses on auditory display and sound design, astronomy has made its\npresence felt in the proceedings of the conference over the years. However, it\nis not until the current ICAD conference that astronomy features a dedicated\nsession. This paper aims to provide a statistical overview of the presence of\nastronomy in the ICAD conference's history from 1994 to 2022, highlighting some\nof the contributions made by researchers in this area, as well as the topics of\ninterest that have captured the attention of sound artists.", "category": "astro-ph_IM" }, { "text": "Broadband spectroscopy of astrophysical ice analogs. I. 
Direct\n measurement of complex refractive index of CO ice using terahertz time-domain\n spectroscopy: Context: Reliable, directly measured optical properties of astrophysical ice\nanalogs in the infrared (IR) and terahertz (THz) range are missing. These\nparameters are of great importance to model the dust continuum radiative\ntransfer in dense and cold regions, where thick ice mantles are present, and are\nnecessary for the interpretation of future observations planned in the far-IR\nregion. Aims: Coherent THz radiation allows direct measurement of the complex\ndielectric function (refractive index) of astrophysically relevant ice species\nin the THz range. Methods: The time-domain waveforms and the frequency-domain\nspectra of reference samples of CO ice, deposited at a temperature of 28.5 K\nand annealed to 33 K at different thicknesses, have been recorded. A new\nalgorithm is developed to reconstruct the real and imaginary parts of the\nrefractive index from the time-domain THz data. Results: The complex refractive\nindex in the wavelength range of 1 mm - 150 ${\mu}$m (0.3 - 2.0 THz) has been\ndetermined for the studied ice samples, and compared with available data found\nin the literature. Conclusions: The developed algorithm for reconstructing the\nreal and imaginary parts of the refractive index from the time-domain THz data\nenables, for the first time, the determination of optical properties of\nastrophysical ice analogs without using the Kramers-Kronig relations. The\nobtained data provide a benchmark to interpret the observational data from\ncurrent ground-based facilities as well as future space telescope missions, and\nhave been used to estimate the opacities of the dust grains in the presence of CO\nice mantles.", "category": "astro-ph_IM" }, { "text": "Response of the XENON100 Dark Matter Detector to Nuclear Recoils: Results from the nuclear recoil calibration of the XENON100 dark matter\ndetector installed underground at the Laboratori Nazionali del Gran Sasso\n(LNGS), Italy, are presented. Data from measurements with an external 241AmBe\nneutron source are compared with a detailed Monte Carlo simulation which is\nused to extract the energy-dependent charge-yield Qy and relative scintillation\nefficiency Leff. A very good level of absolute spectral matching is achieved in\nboth observable signal channels - scintillation S1 and ionization S2 - along\nwith agreement in the 2-dimensional particle discrimination space. The results\nconfirm the validity of the derived signal acceptance in earlier reported dark\nmatter searches of the XENON100 experiment.", "category": "astro-ph_IM" }, { "text": "Charge Transfer Inefficiency in the Hubble Space Telescope since\n Servicing Mission 4: We update a physically-motivated model of radiation damage in the Hubble\nSpace Telescope Advanced Camera for Surveys/Wide Field Channel, using data up\nto mid 2010. We find that Charge Transfer Inefficiency increased dramatically\nbefore shuttle Servicing Mission 4, with ~1.3 charge traps now present per\npixel. During detector readout, charge traps spuriously drag electrons behind\nall astronomical sources, degrading image quality in a way that affects object\nphotometry, astrometry and morphology. 
Our detector readout model is robust to\nchanges in operating temperature and background level, and can be used to\niteratively remove the trailing by pushing electrons back to where they belong.\nThe result is data taken in mid-2010 that recovers the quality of imaging\nobtained within the first six months of orbital operations.", "category": "astro-ph_IM" }, { "text": "Explicit expansion of the three-body disturbing function for arbitrary\n eccentricities and inclinations: Since the original work of Hansen and Tisserand in the XIXth century, there\nhave been many variations in the analytical expansion of the three-body\ndisturbing function in series of the semi-major axis ratio. With the increasing\nnumber of planetary systems of large eccentricity, these expansions are even\nmore interesting as they allow us to obtain, for the secular systems, finite\nexpressions that are valid for all eccentricities and inclinations. We\nrevisited the derivation of the disturbing function in Legendre polynomials,\nwith a special focus on the secular system. We provide here expressions of the\ndisturbing function for the planar and spatial case at any order with respect\nto the ratio of the semi-major axes. Moreover, for orders in the ratio of\nsemi-major axes up to ten in the planar case and five in the spatial case, we\nprovide explicit expansions of the secular system, and simple algorithms with\nminimal computation to extend this to higher order, as well as the algorithms\nfor the computation of non-secular terms.", "category": "astro-ph_IM" }, { "text": "Agile Earth observation satellite scheduling over 20 years:\n formulations, methods and future directions: Agile satellites with advanced attitude maneuvering capability are the new\ngeneration of Earth observation satellites (EOSs). The continuous improvement\nin satellite technology and decrease in launch cost have boosted the\ndevelopment of agile EOSs (AEOSs). To efficiently employ the increasing number of\norbiting AEOSs, the AEOS scheduling problem (AEOSSP), aiming to maximize the\nentire observation profit while satisfying all complex operational constraints,\nhas received much attention over the past 20 years. The objectives of this\npaper are thus to summarize current research on AEOSSP, identify main\naccomplishments and highlight potential future research directions. To this\nend, general definitions of AEOSSP with operational constraints are described\ninitially, followed by its three typical variations including different\ndefinitions of observation profit, multi-objective function and autonomous\nmodel. A detailed literature review from 1997 up to 2019 is then presented in\nline with four different solution methods, i.e., exact method, heuristic,\nmetaheuristic and machine learning. Finally, we discuss a number of topics\nworth pursuing in the future.", "category": "astro-ph_IM" }, { "text": "Research Performance of Turkish Astronomers in the Period of 1980-2010: We investigated the development of astronomy and astrophysics research\nproductivity in Turkey in terms of publication output and their impacts as\nreflected in the Science Citation Index (SCI) for the period 1980-2010. It\ncomprises 838 refereed publications, including 801 articles, 16 letters, 15\nreviews, and six research notes. The number of papers increased prominently\nafter 2000, and the average number of papers per researcher is\ncalculated as 0.89. 
The total number of citations received by the 838 papers is 6938,\nwhile the number of citations per paper is approximately 8.3 over 30 years.\nThe publication performance of Turkish astronomers and astrophysicists was compared\nwith that of seven countries that have similar gross domestic expenditures on\nresearch and development and are members of the Organization for Economic Co-operation\nand Development (OECD). Our study reveals that the output of astronomy and\nastrophysics research in Turkey has gradually increased over the years.", "category": "astro-ph_IM" }, { "text": "Astronomical Imagery: Considerations For a Contemporary Approach with\n JPEG2000: The new wide-field radio telescopes, such as ASKAP, MWA, LOFAR, eVLA and\nSKA, will produce spectral-imaging data-cubes (SIDCs) of unprecedented size --\non the order of hundreds of petabytes. Serving such data as images to the\nend-user in traditional formats is likely to encounter\nsignificant performance bottlenecks. We discuss the requirements for extremely\nlarge SIDCs, and in this light we analyse the applicability of the approach\ntaken in the JPEG2000 (ISO/IEC 15444) standards. We argue the case for the\nadaptation of contemporary industry standards and technologies versus the\nmodification of legacy astronomy standards or the development of new ones from scratch.", "category": "astro-ph_IM" }, { "text": "Tokyo Axion Helioscope: The idea of a magnetic axion helioscope was first proposed by Pierre Sikivie\nin 1983. The Tokyo axion helioscope was built exploiting this detection principle,\nwith a dedicated cryogen-free superconducting magnet and PIN photodiodes as\nx-ray detectors. Solar axions, if they exist, would be converted into x-ray photons\nin the magnetic field. Conversion is coherently enhanced even for massive\naxions by filling the conversion region with helium gas. Its start-up, search\nresults so far, and prospects are presented.", "category": "astro-ph_IM" }, { "text": "Dome C site testing: long term statistics of integrated optical\n turbulence parameters at ground level: We present long-term site-testing statistics obtained at Dome C, Antarctica\nwith various experiments deployed within the Astroconcordia programme since\n2003. We give values of integrated turbulence parameters in the visible at\nground level and above the surface layer, vertical profiles of the structure\nconstant Cn2 and statistics of the thickness of the turbulent surface layer.", "category": "astro-ph_IM" }, { "text": "Classification methods for noise transients in advanced\n gravitational-wave detectors II: performance tests on Advanced LIGO data: The data taken by the advanced LIGO and Virgo gravitational-wave detectors\ncontains short-duration noise transients that limit the significance of\nastrophysical detections and reduce the duty cycle of the instruments. As the\nadvanced detectors are reaching sensitivity levels that allow for multiple\ndetections of astrophysical gravitational-wave sources, it is crucial to achieve\na fast and accurate characterization of non-astrophysical transient noise\nshortly after it occurs in the detectors. Previously, we presented three methods\nfor the classification of transient noise sources. They are Principal Component\nAnalysis for Transients (PCAT), Principal Component LALInference Burst (PC-LIB)\nand Wavelet Detection Filter with Machine Learning (WDF-ML). In this study we\ncarry out the first performance tests of these algorithms on gravitational-wave\ndata from the Advanced LIGO detectors. 
We use the data taken between the 3rd of\nJune 2015 and the 14th of June 2015 during the 7th engineering run (ER7), and\noutline the improvements made to increase the performance and lower the latency\nof the algorithms on real data. This work provides an important test for\nunderstanding the performance of these methods on real, non-stationary data in\npreparation for the second advanced gravitational-wave detector observation\nrun, planned for later this year. We show that all methods can classify\ntransients in non-stationary data with a high level of accuracy and show the\nbenefits of using multiple classifiers.", "category": "astro-ph_IM" }, { "text": "A Novel Source of Tagged Low-Energy Nuclear Recoils: For sufficiently wide resonances, nuclear resonance fluorescence (NRF) behaves like\nelastic photo-nuclear scattering while retaining the large cross-section\ncharacteristic of resonant photo-nuclear absorption. We show that NRF may be\nused to characterize the signals produced by low-energy nuclear recoils by\nserving as a novel source of tagged low-energy nuclear recoils. Understanding\nthese signals is important in determining the sensitivity of direct WIMP\ndark-matter and coherent neutrino-nucleus scattering searches.", "category": "astro-ph_IM" }, { "text": "Agile Software Engineering and Systems Engineering at SKA Scale: Systems Engineering (SE) is the set of processes and documentation required\nfor successfully realising large-scale engineering projects, but the classical\napproach is not a good fit for software-intensive projects, especially when the\nneeds of the different stakeholders are not fully known from the beginning, and\nrequirement priorities might change. The SKA is the ultimate software-enabled\ntelescope, with enormous amounts of computing hardware and software required to\nperform its data reduction. We give an overview of the system and software\nengineering processes in the SKA1 development, and the tension between\nclassical and agile SE.", "category": "astro-ph_IM" }, { "text": "The Transients Handler System for the Cherenkov Telescope Array\n Observatory: The Cherenkov Telescope Array Observatory (CTAO) will be the largest and most\nadvanced ground-based facility for gamma-ray astronomy. Several dozen\ntelescopes will be operated in both the Northern and Southern Hemispheres. With\nthe advent of multi-messenger astronomy, many new large science infrastructures\nwill start science operations, and target-of-opportunity observations will play\nan important role in the operation of the CTAO. The Array Control and Data\nAcquisition (ACADA) system deployed on each CTAO site will feature a dedicated\nsub-system to manage external and internal scientific alerts: the Transients\nHandler. It will receive, validate, and process science alerts in order to\ndetermine if target-of-opportunity observations can be triggered or need to be\nupdated. Various tasks defined by proposal-based configurations are processed\nby the Transients Handler. 
These tasks include, among others, the evaluation of\nobservability of targets and their correlation with known sources or objects.\nThis contribution will discuss the concepts and design of the Transients\nHandler and its integration into the ACADA system.", "category": "astro-ph_IM" }, { "text": "A Study of the Effect of Molecular and Aerosol Conditions in the\n Atmosphere on Air Fluorescence Measurements at the Pierre Auger Observatory: The air fluorescence detector of the Pierre Auger Observatory is designed to\nperform calorimetric measurements of extensive air showers created by cosmic\nrays with energies above 10^18 eV. To correct these measurements for the effects\nintroduced by atmospheric fluctuations, the Observatory contains a group of\nmonitoring instruments to record atmospheric conditions across the detector\nsite, an area exceeding 3,000 km^2. The atmospheric data are used extensively\nin the reconstruction of air showers, and are particularly important for the\ncorrect determination of shower energies and the depths of shower maxima. This\npaper contains a summary of the molecular and aerosol conditions measured at\nthe Pierre Auger Observatory since the start of regular operations in 2004, and\nincludes a discussion of the impact of these measurements on air shower\nreconstructions. Between 10^18 and 10^20 eV, the systematic uncertainties due\nto all atmospheric effects increase from 4% to 8% in measurements of shower\nenergy, and from 4 g/cm^2 to 8 g/cm^2 in measurements of the shower maximum.", "category": "astro-ph_IM" }, { "text": "WOMBAT: A Scalable and High Performance Astrophysical MHD Code: We present a new code for astrophysical magneto-hydrodynamics specifically\ndesigned and optimized for high performance and scaling on modern and future\nsupercomputers. We describe a novel hybrid OpenMP/MPI programming model that\nemerged from a collaboration between Cray, Inc. and the University of\nMinnesota. This design utilizes MPI-RMA optimized for thread scaling, which\nallows the code to run extremely efficiently at very high thread counts, ideal\nfor the latest generation of multi-core and many-core architectures. Such\nperformance characteristics are needed in the era of \"exascale\" computing. We\ndescribe and demonstrate our high-performance design in detail with the intent\nthat it may be used as a model for other, future astrophysical codes intended\nfor applications demanding exceptional performance.", "category": "astro-ph_IM" }, { "text": "FACT: Towards Robotic Operation of an Imaging Air Cherenkov Telescope: The First G-APD Cherenkov Telescope (FACT) became operational at La Palma in\nOctober 2011. Since summer 2012, due to very smooth and stable operation, it is\nthe first telescope of its kind that is routinely operated remotely, without\nthe need for a data-taking crew on site. In addition, many standard tasks of\noperation are executed automatically without the need for manual interaction.\nBased on the experience gained so far, some alterations to improve the safety\nof the system are under development to allow robotic operation in the future.\nWe present the setup and precautions used to implement remote operations and\nthe experience gained so far, as well as the work towards robotic operation.", "category": "astro-ph_IM" }, { "text": "The Large Array Survey Telescope -- Pipeline. I. 
Basic image reduction\n and visit coaddition: The Large Array Survey Telescope (LAST) is a wide-field telescope designed to\nexplore the variable and transient sky with a high cadence and to be a test-bed\nfor cost-effective telescope design. A LAST node is composed of 48 (32 already\ndeployed) 28-cm f/2.2 telescopes. A single telescope has a 7.4 deg^2 field of\nview and reaches a 5-sigma limiting magnitude of 19.6 (21.0) in 20s (20x20s)\n(filter-less), while the entire system provides a 355 deg^2 field of view. The\nbasic strategy of LAST is to obtain multiple 20-s consecutive exposures of each\nfield (a visit). Each telescope carries a 61 Mpix camera, and the system\nproduces, on average, about 2.2 Gbit/s. This high data rate is analyzed in near\nreal-time at the observatory site, using limited computing resources (about 700\ncores). Given this high data rate, we have developed a new, efficient data\nreduction and analysis pipeline. The data pipeline includes two major parts:\n(i) Processing and calibration of single images, followed by a coaddition of\nthe visit's exposures. (ii) Building the reference images and performing image\nsubtraction and transient detection. Here we describe in detail the first part\nof the pipeline. Among the products of this pipeline are photometrically and\nastrometrically calibrated single and coadded images, 32-bit mask images\nmarking a wide variety of problems and states of each pixel, source catalogs\nbuilt from individual and coadded images, Point Spread Function (PSF)\nphotometry, merged source catalogs, proper motion and variability indicators,\nminor-planet detection, calibrated light curves, and matching with external\ncatalogs. The entire pipeline code is made public. Finally, we demonstrate the\npipeline performance on real data taken by LAST.", "category": "astro-ph_IM" }, { "text": "matvis: A matrix-based visibility simulator for fast forward modelling\n of many-element 21 cm arrays: Detection of the faint 21 cm line emission from the Cosmic Dawn and Epoch of\nReionisation will require not only exquisite control over instrumental\ncalibration and systematics to achieve the necessary dynamic range of\nobservations but also validation of analysis techniques to demonstrate their\nstatistical properties and signal loss characteristics. A key ingredient in\nachieving this is the ability to perform high-fidelity simulations of the kinds\nof data that are produced by the large, many-element, radio interferometric\narrays that have been purpose-built for these studies. The large scale of these\narrays presents a computational challenge, as one must simulate a detailed sky\nand instrumental model across many hundreds of frequency channels, thousands of\ntime samples, and tens of thousands of baselines for arrays with hundreds of\nantennas. In this paper, we present a fast matrix-based method for simulating\nradio interferometric measurements (visibilities) at the necessary scale. We\nachieve this through judicious use of primary beam interpolation, fast\napproximations for coordinate transforms, and a vectorised outer product to\nexpand per-antenna quantities to per-baseline visibilities, coupled with\nstandard parallelisation techniques. 
We validate the results of this method,\nimplemented in the publicly available matvis code, against a high-precision\nreference simulator, and explore its computational scaling on a variety of\nproblems.", "category": "astro-ph_IM" }, { "text": "SSTRED: Data- and metadata-processing pipeline for CHROMIS and CRISP: Context: Data from ground-based, high-resolution solar telescopes can only be\nused for science with calibrations and processing, which requires detailed\nknowledge about the instrumentation. [...] Aims: We aim to provide observers\nwith a user-friendly data pipeline for data from the Swedish 1-meter Solar\nTelescope (SST) that delivers science-ready data together with the metadata\nneeded for proper interpretation and archiving. Methods: We briefly describe\nthe [instrumentation]. We summarize the processing steps from raw data to\nscience-ready data cubes in FITS files. We report calibrations and\ncompensations for data imperfections in detail. Misalignment of \ion{Ca}{ii}\ndata due to wavelength-dependent dispersion is identified, characterized, and\ncompensated for. We describe intensity calibrations that remove or reduce the\neffects of filter transmission profiles as well as solar elevation changes. We\npresent REDUX, a new version of the MOMFBD image restoration code, with\nmultiple enhancements and new features. [...] We describe how image restoration\nis used [...]. The science-ready output is delivered in FITS files, with\nmetadata compliant with the SOLARNET recommendations. Data cube coordinates are\nspecified within the World Coordinate System (WCS). Cavity errors are specified\nas distortions of the WCS wavelength coordinate with an extension of existing\nWCS notation. We establish notation for specifying the reference system for\nStokes vectors [...]. [CRISPEX] has been extended to accept SSTRED output\n[...]. Results: SSTRED is a mature data-processing pipeline for imaging\ninstruments, developed and used for the SST/CHROMIS imaging spectrometer and\nthe SST/CRISP spectropolarimeter. SSTRED delivers well-characterized,\nscience-ready, archival-quality FITS files with well-defined metadata. The\nSSTRED code, as well as REDUX and CRISPEX, is freely available through git\nrepositories.", "category": "astro-ph_IM" }, { "text": "Measurements of diffusion of volatiles in amorphous solid water:\n application to interstellar medium environments: The diffusion of atoms and molecules in ices covering dust grains in dense\nclouds in interstellar space is an important but poorly characterized step in\nthe formation of complex molecules in space. Here we report the measurement of\ndiffusion of simple molecules in amorphous solid water (ASW), an analog of\ninterstellar ices, which are amorphous and made mostly of water molecules. The\nnew approach that we used relies on measuring in situ the change in band\nstrength and position of mid-infrared features of OH dangling bonds as\nmolecules move through pores and channels of ASW. We obtained the Arrhenius\npre-exponents and activation energies for diffusion of CO, O$_2$, N$_2$,\nCH$_4$, and Ar in ASW. The diffusion energy barriers of H$_2$ and D$_2$ were\nalso measured, but only upper limits were obtained. These values constitute the\nfirst comprehensive set of diffusion parameters of simple molecules on the pore\nsurface of ASW, and can be used in simulations of the chemical evolution of ISM\nenvironments, thus replacing unsupported estimates.
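Since the diffusion study above reports Arrhenius pre-exponents and activation energies, a one-function sketch shows how such parameters would enter a chemical-evolution simulation; the numbers below are invented placeholders, not the paper's measured values:

```python
import numpy as np

def diffusion_rate(nu0_hz, e_act_K, temp_K):
    """Arrhenius hopping rate k(T) = nu0 * exp(-E_a / T).

    nu0_hz  : pre-exponent (attempt frequency) in Hz  -- placeholder value
    e_act_K : activation energy expressed in kelvin (E/k_B)
    temp_K  : ice temperature in kelvin
    """
    return nu0_hz * np.exp(-e_act_K / temp_K)

# Hypothetical numbers for a CO-like adsorbate, for illustration only.
for T in (10, 15, 20):
    print(T, diffusion_rate(1e12, 350.0, T))
```

Because the rate depends exponentially on E_a/T, even modest temperature changes shift diffusion timescales by orders of magnitude, which is why laboratory-measured parameters matter for ISM chemistry models.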
We also present a set of\nargon temperature-programmed desorption experiments to determine the desorption\nenergy distribution of argon on non-porous ASW.", "category": "astro-ph_IM" }, { "text": "Towards 10 cm/s radial velocity accuracy on the Sun using a Fourier\n transform spectrometer: The IAG solar observatory is producing high-fidelity, ultra-high-resolution\nspectra (R>500000) of the spatially resolved surface of the Sun using a Fourier\nTransform spectrometer (FTS). The radial velocity (RV) calibration of these\nspectra is currently performed using absorption lines from Earth's atmosphere,\nlimiting the precision and accuracy. To improve the frequency calibration\nprecision and accuracy, we plan to use, in combination, a Fabry-Perot etalon\n(FP) setup that is an evolution of the CARMENES FP design and an iodine cell.\nTo create an accurate wavelength solution, the iodine cell is measured in\nparallel with the FP. The FP can then be used to transfer the accurate\nwavelength solution provided by the iodine via simultaneous calibration of\nsolar observations. To verify the stability and precision of the FTS we perform\nparallel measurements of the FP and an iodine cell. The measurements show an\nintrinsic stability of the FTS at a level of 1 m/s over 90 hours. The\ndifference between the FP RVs and the iodine cell RVs shows no significant\ntrends during the same time span. The RMS of the RV difference between FP and\niodine cell is 10.7 cm/s, which can be largely attributed to the intrinsic RV\nprecisions of the iodine cell and the FP (10.2 cm/s and 1.0 cm/s,\nrespectively). This shows that we can calibrate the FTS to a level of 10 cm/s,\ncompetitive with current state-of-the-art precision RV instruments. Based on\nthese results we argue that the spectrum of iodine can be used as an absolute\nreference to reach an RV accuracy of 10 cm/s.", "category": "astro-ph_IM" }, { "text": "Ray-tracing 3D dust radiative transfer with DART-Ray: code upgrade and\n public release: We present an extensively updated version of the purely ray-tracing 3D dust\nradiation transfer code DART-Ray. The new version includes five major upgrades:\n1) a series of optimizations for the ray-angular density and the scattered\nradiation source function; 2) the implementation of several data and task\nparallelizations using hybrid MPI+OpenMP schemes; 3) the inclusion of dust\nself-heating; 4) the ability to produce surface brightness maps for observers\nwithin the models in HEALPix format; 5) the possibility to set the expected\nnumerical accuracy already at the start of the calculation. We tested the\nupdated code with benchmark models where the dust self-heating is not\nnegligible. Furthermore, we performed a study of the extent of the source\ninfluence volumes, using galaxy models, which are critical in determining the\nefficiency of the DART-Ray algorithm. The new code is publicly available,\ndocumented for both users and developers, and accompanied by several programmes\nto create input grids for different model geometries and to import the results\nof N-body and SPH simulations.
These programmes can be easily adapted to\ndifferent input geometries, and to different dust models or stellar emission\nlibraries.", "category": "astro-ph_IM" }, { "text": "Design, pointing control, and on-sky performance of the mid-infrared\n vortex coronagraph for the VLT/NEAR experiment: Vortex coronagraphs have been shown to be a promising avenue for\nhigh-contrast imaging in the close-in environment of stars at thermal infrared\n(IR) wavelengths. They are included in the baseline design of METIS. To ensure\ngood performance of these coronagraphs, a precise control of the centering of\nthe star image in real time is needed. We previously developed and validated\nthe quadrant analysis of coronagraphic images for tip-tilt sensing (QACITS)\npointing estimator to address this issue. While this approach is not\nwavelength-dependent in theory, it had never been implemented for mid-IR\nobservations, which entail specific challenges and limitations. Here, we\npresent the design of the mid-IR vortex coronagraph for the new Earths in the\n$\alpha$ Cen Region (NEAR) experiment with the VLT/VISIR instrument and assess\nthe performance of the QACITS estimator for the centering control of the star\nimage onto the vortex coronagraph. We use simulated data and on-sky data\nobtained with VLT/VISIR, which was recently upgraded for observations assisted\nby adaptive optics in the context of the NEAR experiment. We demonstrate that\nthe QACITS-based correction loop is able to control the centering of the star\nimage onto the NEAR vortex coronagraph with a stability down to $0.015\n\lambda/D$ rms over 4h in good conditions. These results show that QACITS is a\nrobust approach for precisely controlling in real time the centering of vortex\ncoronagraphs for mid-IR observations.", "category": "astro-ph_IM" }, { "text": "Spread spectrum for imaging techniques in radio interferometry: We consider the probe of astrophysical signals through radio interferometers\nwith a small field of view and baselines with a non-negligible and constant\ncomponent in the pointing direction. In this context, the measured visibilities\nessentially identify with a noisy and incomplete Fourier coverage of the\nproduct of the planar signals with a linear chirp modulation. In light of the\nrecent theory of compressed sensing and in the perspective of defining the best\npossible imaging techniques for sparse signals, we analyze the related spread\nspectrum phenomenon and suggest its universality relative to the sparsity\ndictionary. Our results rely both on theoretical considerations related to the\nmutual coherence between the sparsity and sensing dictionaries, as well as on\nnumerical simulations.", "category": "astro-ph_IM" }, { "text": "Novel Back-coated Glass Mirrors for the MAGIC Telescopes: The mirrors installed on Imaging Atmospheric Cherenkov Telescopes like the\nMAGIC telescopes in La Palma, Canary Islands, are constantly exposed to the\nharsh environment. They have to withstand wind-induced corrosion from dust and\nsand, changing temperatures, and rain. Because of the size of the telescope,\nprotecting the structure with a dome is not practical. The current mirrors used\nin MAGIC are aluminum front-coated glass mirrors, covered by a thin quartz\nlayer. But even with this protective layer, a significant decrease in\nreflectivity can be seen on timescales of several years. The quartz layer is\nvery delicate and can be easily scratched or damaged, which also makes cleaning\nthe mirrors almost impossible.
We have tested a novel design of glass mirrors\nthat can be easily cleaned and should show almost no degradation in\nreflectivity due to environmental influences. The protective layer is an\nultra-thin glass sheet which is back-coated with aluminum, making it possible\nto simply wipe the mirror with household cleaning tools. In this contribution\nwe will present results from laboratory tests of reflectivity and focusing\nproperties of prototype mirrors, as well as long-term tests on-site at the\nMAGIC telescopes. We will also outline plans for exchanging a large fraction of\nMAGIC mirrors with this novel design, guaranteeing a peak performance of MAGIC\nfor the coming years.", "category": "astro-ph_IM" }, { "text": "Segment-level thermal sensitivity analysis for exo-Earth imaging: We present a segment-level wavefront stability error budget for space\ntelescopes essential for exoplanet detection. We use a detailed finite element\nmodel to relate the temperature gradient at the location of the primary mirror\nto wavefront variations on each of the segments. We apply the PASTIS\nsensitivity model forward approach to allocate static tolerances in physical\nunits for each segment, and transfer these tolerances to the temporal domain\nvia a model of the WFS&C architecture in combination with a Zernike phase\nsensor and science camera. We finally estimate the closed-loop variance and\nlimiting contrast for the segments' thermo-mechanical modes.", "category": "astro-ph_IM" }, { "text": "Analysis techniques and performance of the Domino Ring Sampler version 4\n based readout for the MAGIC telescopes: Recently the readout of the MAGIC telescopes has been upgraded to a new\nsystem based on the Domino Ring Sampler version 4 chip. We present the analysis\ntechniques and the signal extraction performance studies of this system. We\nstudy the behaviour of the baseline, the noise, the cross-talk, the linearity\nand the time resolution. We also investigate the optimal signal extraction. In\naddition we show some of the analysis techniques specific to the readout based\non the Domino Ring Sampler version 2 chip, previously used in the MAGIC II\ntelescope.", "category": "astro-ph_IM" }, { "text": "A Template-Based Approach to the Photometric Classification of SN\n 1991bg-like Supernovae in the SDSS-II Supernova Survey: The use of Type Ia Supernovae (SNe Ia) to measure cosmological parameters has\ngrown significantly over the past two decades. However, there exists a\nsignificant diversity in the SN Ia population that is not well understood.\nOver-luminous SN 1991T-like and sub-luminous SN 1991bg-like objects are two\ncharacteristic examples of peculiar SNe. The identification and classification\nof such objects is an important step in studying what makes them unique from\nthe remaining SN population. With the upcoming Vera C. Rubin Observatory\npromising on the order of a million new SNe over a ten-year survey,\nspectroscopic classifications will be possible for only a small subset of\nobserved targets. As such, photometric classification has become an\nincreasingly important concern in preparing for the next generation of\nastronomical surveys. Using observations from the Sloan Digital Sky Survey II\n(SDSS-II) SN Survey, we apply here an empirically based classification\ntechnique targeted at the identification of SN 1991bg-like SNe in photometric\ndata sets. By performing dedicated fits to photometric data in the rest-frame\nredder and bluer bandpasses, we classify 16 previously unidentified 91bg-like\nSNe.
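The red/blue template comparison used in the SDSS-II classification entry above reduces, schematically, to chi-square model selection between a normal and a 91bg-like template in each band. The sketch below illustrates only that logic; the templates, light curve and noise level are invented, and the real fits would include free amplitude and stretch parameters per band before the scores are compared:

```python
import numpy as np

def chi2(flux, err, model_flux):
    """Chi-square of a light curve against a template sampled at the same epochs."""
    return np.sum(((flux - model_flux) / err) ** 2)

def classify(red_lc, blue_lc, normal_tpl, sn91bg_tpl):
    """Fit both templates in the rest-frame red and blue bands; keep the better one.

    Each *_lc is a (t, flux, err) tuple; each template maps epochs -> model flux.
    All inputs here are hypothetical placeholders for illustration.
    """
    score_normal = sum(chi2(f, e, tpl(t)) for (t, f, e), tpl in
                       [(red_lc, normal_tpl), (blue_lc, normal_tpl)])
    score_91bg = sum(chi2(f, e, tpl(t)) for (t, f, e), tpl in
                     [(red_lc, sn91bg_tpl), (blue_lc, sn91bg_tpl)])
    return "91bg-like" if score_91bg < score_normal else "normal"

# Toy demo with invented templates and a noisy "91bg-like" light curve.
t = np.linspace(-10, 30, 15)
normal_tpl = lambda t: np.exp(-0.5 * (t / 15.0) ** 2)
sn91bg_tpl = lambda t: 0.6 * np.exp(-0.5 * (t / 9.0) ** 2)
lc = (t, sn91bg_tpl(t) + 0.01 * np.random.default_rng(7).normal(size=t.size),
      np.full(t.size, 0.01))
print(classify(lc, lc, normal_tpl, sn91bg_tpl))   # -> "91bg-like"
```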
Using SDSS-II host-galaxy measurements, we find that these SNe are\npreferentially found in host galaxies having an older average stellar age than\nthe hosts of normal SNe Ia. We also find that these SNe are located at larger\nphysical distances from the centers of their host galaxies. We find no\nstatistically significant bias in host galaxy mass or specific star formation\nrate for these targets.", "category": "astro-ph_IM" }, { "text": "Potential for measuring the longitudinal and lateral profile of muons in\n TeV air showers with IACTs: Muons are copiously produced within hadronic extensive air showers (EAS)\noccurring in the Earth's atmosphere, and are used by particle air shower\ndetectors as a means of identifying the primary cosmic ray which initiated the\nEAS. Imaging Atmospheric Cherenkov Telescopes (IACTs), designed for the\ndetection of gamma-ray initiated EAS for the purposes of Very High Energy (VHE)\ngamma-ray astronomy, are subject to a considerable background signal due to\nhadronic EAS. Although hadronic EAS are typically rejected for gamma-ray\nanalysis purposes, single muons produced within such showers generate clearly\nidentifiable signals in IACTs, and muon images are routinely retained and used\nfor calibration purposes. For IACT arrays operating with a stereoscopic\ntrigger, when a muon triggers one telescope, the other telescopes in the array\nusually detect the associated hadronic EAS. We demonstrate for the first time\nthe potential of IACT arrays for competitive measurements of the muon content\nof air showers, their lateral distribution, and the longitudinal profile of\nproduction slant heights in the TeV energy range. Such information can provide\nuseful input to hadronic interaction models.", "category": "astro-ph_IM" }, { "text": "Angular control noise in Advanced Virgo and implications for the\n Einstein Telescope: With significantly improved sensitivity, the Einstein Telescope (ET), along\nwith other upcoming gravitational wave detectors, will mark the beginning of\nprecision gravitational wave astronomy. However, the pursuit of surpassing\ncurrent detector capabilities requires careful consideration of technical\nconstraints inherent in existing designs. The significant improvement of ET\nlies in the low-frequency range, where it anticipates a one million-fold\nincrease in sensitivity compared to current detectors. Angular control noise is\na primary limitation for LIGO detectors in this frequency range, originating\nfrom the need to maintain optical alignment. Given the expected improvements in\nET's low-frequency range, precise assessment of angular control noise becomes\ncrucial for achieving target sensitivity. To address this, we developed a model\nof the angular control system of Advanced Virgo, closely matching experimental\ndata and providing a robust foundation for modeling future-generation\ndetectors. Our model, for the first time, enables replication of the measured\ncoupling level between angle and length. Additionally, our findings confirm\nthat Virgo, unlike LIGO, is not constrained by alignment control noise, even if\nthe detector were operating at full power.", "category": "astro-ph_IM" }, { "text": "A High-Resolution Atlas of Uranium-Neon in the H Band: We present a high-resolution (R ~ 50 000) atlas of a uranium-neon (U/Ne)\nhollow-cathode spectrum in the H-band (1454 nm to 1638 nm) for the calibration\nof near-infrared spectrographs.
We obtained this U/Ne spectrum simultaneously\nwith a laser-frequency comb spectrum, which we used to provide a first-order\ncalibration to the U/Ne spectrum. We then calibrated the U/Ne spectrum using\nthe recently published uranium line list of Redman et al. (2011), which is\nderived from high-resolution Fourier transform spectrometer measurements. These\ntwo independent calibrations allowed us to easily identify emission lines in\nthe hollow cathode lamp that do not correspond to known (classified) lines of\neither uranium or neon, and to compare the achievable precision of each source.\nOur frequency comb precision was limited by modal noise and detector effects,\nwhile the U/Ne precision was limited primarily by the signal-to-noise ratio\n(S/N) of the observed emission lines and our ability to model blended lines.\nThe standard deviation in the dispersion solution residuals from the\nS/N-limited U/Ne hollow cathode lamp was 50% larger than the standard deviation\nof the dispersion solution residuals from the modal-noise-limited laser\nfrequency comb. We advocate the use of U/Ne lamps for precision calibration of\nnear-infrared spectrographs, and this H-band atlas makes these lamps\nsignificantly easier to use for wavelength calibration.", "category": "astro-ph_IM" }, { "text": "The Cherenkov Telescope Array: layout, design and performance: The Cherenkov Telescope Array (CTA) will be the next generation\nvery-high-energy gamma-ray observatory. CTA is expected to provide substantial\nimprovement in accuracy and sensitivity with respect to existing instruments\nthanks to a tenfold increase in the number of telescopes and their\nstate-of-the-art design. Detailed Monte Carlo simulations are used to further\noptimise the number of telescopes and the array layout, and to estimate the\nobservatory performance using updated models of the selected telescope designs.\nThese studies are presented in this contribution for the two CTA stations\nlocated on the island of La Palma (Spain) and near Paranal (Chile) and for\ndifferent operation and observation conditions.", "category": "astro-ph_IM" }, { "text": "Photometric Data-driven Classification of Type Ia Supernovae in the Open\n Supernova Catalog: We propose a novel approach for the machine-learning-based detection of\nType Ia supernovae using photometric information. Unlike other approaches, only\nreal observational data are used during training. Despite being trained on a\nrelatively small sample, the method shows good results on real data from the\nOpen Supernova Catalog. We also investigate model transfer from the PLAsTiCC\nsimulated training dataset to real data, and the reverse, and find that the\nperformance significantly decreases in both cases, highlighting the existing\ndifferences between simulated and real data.", "category": "astro-ph_IM" }, { "text": "Shrinkage MMSE estimators of covariances beyond the zero-mean and\n stationary variance assumptions: We tackle covariance estimation in low-sample scenarios, employing a\nstructured covariance matrix with shrinkage methods. These involve convexly\ncombining a low-bias/high-variance empirical estimate with a biased\nregularization estimator, striking a bias-variance trade-off. The literature\nprovides optimal settings of the regularization amount through risk\nminimization between the true covariance and its shrunk counterpart. Such\nestimators were derived for zero-mean statistics with i.i.d. diagonal\nregularization matrices accounting solely for the average sample variance.
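The baseline shrinkage estimator that the entry above starts from (zero-mean statistics, a scaled-identity target carrying the average sample variance) can be written in a few lines; the shrinkage weight below is left free, whereas the literature fixes it by risk minimization:

```python
import numpy as np

def shrink_cov(X, rho):
    """Convex combination of the empirical covariance with a scaled identity.

    X   : (n_samples, p) zero-mean data matrix
    rho : shrinkage weight in [0, 1]; a free parameter here, whereas optimal
          settings minimize the risk to the (unknown) true covariance.
    """
    n, p = X.shape
    S = X.T @ X / n                          # empirical (high-variance) estimate
    target = (np.trace(S) / p) * np.eye(p)   # average-sample-variance identity target
    return (1.0 - rho) * S + rho * target

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 50))                # low-sample regime: n < p
Sigma_hat = shrink_cov(X, rho=0.3)
# The shrunk estimate is far better conditioned than the singular empirical one.
print(np.linalg.cond(X.T @ X / 20), np.linalg.cond(Sigma_hat))
```

The extension described next replaces the scalar target with diagonal matrices carrying the individual sample variances and handles non-centered samples by plugging in the empirical mean.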
We\nextend these results to regularization matrices accounting for the sample\nvariances both for centered and non-centered samples. In the latter case, the\nempirical estimate of the true mean is incorporated into our shrinkage\nestimators. Introducing confidence weights into the statistics also enhances\nestimator robustness against outliers. We compare our estimators to other\nshrinkage methods both on numerical simulations and on real data to solve a\ndetection problem in astronomy.", "category": "astro-ph_IM" }, { "text": "Calculation of the Cherenkov light yield from low energetic secondary\n particles accompanying high-energy muons in ice and water with Geant 4\n simulations: In this work we investigate and parameterize the amount and angular\ndistribution of Cherenkov photons generated by the low-energy secondary\nparticles (typically $\lesssim 500$\,MeV) that accompany a muon track in water\nor ice. These secondary particles originate from small energy loss processes.\nWe investigate the contributions of the different energy loss processes as a\nfunction of the muon energy and the maximum transferred energy. For the\ncalculation of the angular distribution we have developed a generic\ntransformation method, which allows us to derive the angular distribution of\nCherenkov photons for an arbitrary distribution of track directions and their\nvelocities.", "category": "astro-ph_IM" }, { "text": "A New Method for Determining Geometry of Planetary Images: This paper presents a novel semi-automatic image processing technique to\nestimate accurately, and objectively, the disc parameters of a planetary body\non an astronomical image. The method relies on the detection of the limb and/or\nthe terminator of the planetary body with the VOronoi Image SEgmentation\n(VOISE) algorithm (Guio and Achilleos, 2009). The resulting map of the\nsegmentation is then used to identify the visible boundary of the planetary\ndisc. The segments comprising this boundary are then used to perform a "best"\nfit to an algebraic expression for the limb and/or terminator of the body. We\nfind that we are able to locate the centre of the planetary disc with an\naccuracy of a few tenths of one pixel. The method thus represents a useful\nprocessing stage for auroral "imaging" based studies.", "category": "astro-ph_IM" }, { "text": "Analysis of defect formation in semiconductor cryogenic bolometric\n detectors created by heavy dark matter: Cryogenic detectors in the form of bolometers are presently used for\ndifferent applications, in particular for the search for very rare or\nhypothetical events associated with new forms of matter, specifically related\nto the existence of Dark Matter. In the detection of particles with a\nsemiconductor as target and detector, usually two signals are measured:\nionization and heat. The amplification of the thermal signal is obtained with\nthe prescriptions of the Luke-Neganov effect. The energy deposited in the\nsemiconductor lattice as stable defects in the form of Frenkel pairs at\ncryogenic temperatures, following the interaction of a dark matter particle, is\nevaluated, and the consequences for measured quantities are discussed. This\ncontribution is included in the energy balance of the Luke effect.
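A toy version of the Luke-effect energy balance just mentioned, with the defect (Frenkel-pair) term entering as a subtraction from the heat channel; all numbers below are illustrative placeholders rather than the paper's model values:

```python
# Toy Luke-Neganov energy balance (illustrative values only).
E_recoil = 10.0      # keV, energy deposited by the recoiling nucleus
eps_eh = 0.003       # keV per electron-hole pair (~3 eV in Ge/Si)
V_bias = 4.0         # V, applied bias voltage
E_defects = 0.1      # keV, energy stored in stable Frenkel pairs (model input)

ionization_yield = 0.3            # fraction of recoil energy producing e-h pairs
n_pairs = ionization_yield * E_recoil / eps_eh

# Heat read out = deposited energy - energy locked in defects
#               + Luke (Neganov-Trofimov-Luke) heat from drifting the carriers,
#   where each pair gains e*V, i.e. V electron-volts = V * 1e-3 keV.
E_luke = n_pairs * V_bias * 1e-3  # keV
E_heat = E_recoil - E_defects + E_luke
print(n_pairs, E_luke, E_heat)
```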
Applying the present model\nto germanium and silicon, we found that for the same incident weakly\ninteracting massive particle the energy deposited in defects in germanium is\nabout twice the value for silicon.", "category": "astro-ph_IM" }, { "text": "Last performances improvement of the C-RED One camera using the 320x256\n e-APD infrared Saphira detector: We present here the latest results obtained with the C-RED One camera\ndeveloped by First Light Imaging for fast, ultra-low-noise infrared\napplications. This camera uses the Leonardo Saphira e-APD 320x256 infrared\nsensor in an autonomous cryogenic environment with a low-vibration pulse tube\nand an embedded readout electronics system. Several recent improvements were\nmade to the camera. The first important one concerns the total noise of the\ncamera. With the wavelength cut-off limited to 1.75 microns by proper cold\nfilters, and looking at a blackbody at room temperature through an f/4 beam\naperture, we now measure a total noise down to 0.6 e at gain 50 in CDS mode at\n1720 FPS, dividing the previous noise figure by a factor of 2. A total camera\nbackground of 30-400 e/s is now achieved, a reduction by a factor of 3, with\nthe camera again looking at a room-temperature blackbody with an f/4 beam\naperture. Image bias oscillations, due to the electronics grounding scheme,\nwere carefully analyzed and removed. Focal plane detector vibrations\ntransmitted by the pulse tube cooling machine were also analyzed, damped and\nmeasured down to 0.3 microns RMS, reducing focal plane vibrations by a factor\nof 3. In addition, a vacuum getter of higher capacity is now used, allowing the\ncamera to operate for months without pumping. The camera's main characteristics\nare detailed: pulse tube cooling at 80K with limited vibrations, a permanent\nvacuum solution, an ultra-low latency Cameralink full data interface, safety\nmanagement of the camera by firmware, online firmware updates, ambient liquid\ncooling and a reduced weight of 20 kg.", "category": "astro-ph_IM" }, { "text": "(H)DPGMM: A Hierarchy of Dirichlet Process Gaussian Mixture Models for\n the inference of the black hole mass function: We introduce (H)DPGMM, a hierarchical Bayesian non-parametric method based on\nthe Dirichlet Process Gaussian Mixture Model, designed to infer data-driven\npopulation properties of astrophysical objects without committing to any\nspecific physical model. We investigate the efficacy of our model on simulated\ndatasets and demonstrate its capability to correctly reconstruct a variety of\npopulation models without the need for fine-tuning of the algorithm. We apply\nour method to the problem of inferring the black hole mass function given a set\nof gravitational wave observations from LIGO and Virgo, and find that (H)DPGMM\ninfers a binary black hole mass function that is consistent with previous\nestimates without requiring a theoretically motivated parametric model.\nAlthough the number of systems observed is still too small for a robust\ninference, (H)DPGMM confirms the presence of at least two distinct modes in the\nobserved merging black hole mass function, hence suggesting in a\nmodel-independent fashion the presence of at least two classes of binary black\nhole systems.", "category": "astro-ph_IM" }, { "text": "Reconstruction of radio signals from air-showers with autoencoder: The Tunka Radio Extension (Tunka-Rex) is a digital antenna array (63 antennas\ndistributed over 1 km^2) co-located with the TAIGA observatory in Eastern\nSiberia.
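The Dirichlet-process mixture idea underlying (H)DPGMM, described two entries above, can be illustrated with scikit-learn's truncated stick-breaking implementation on fabricated one-dimensional "mass" data. This is only an analogy to the non-hierarchical building block of the method, not the authors' code:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Fabricated "black hole mass" sample drawn from two modes (solar masses).
rng = np.random.default_rng(2)
masses = np.concatenate([rng.normal(10, 2, 300), rng.normal(35, 4, 100)])

# Truncated Dirichlet-process GMM: unused components receive ~zero weight,
# so the number of modes is inferred from the data rather than fixed a priori.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(masses.reshape(-1, 1))

print(np.round(dpgmm.weights_, 3))  # most of the 10 components collapse to ~0
```

The hierarchical layer of the actual method additionally pools information across individual gravitational-wave events, which this single-level class does not do.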
Tunka-Rex measures the radio\nemission of air showers induced by ultra-high energy cosmic rays in the\nfrequency band of 30-80 MHz. The air-shower signal is a short (tens of\nnanoseconds) broadband pulse. Using the time positions and amplitudes of these\npulses, we reconstruct the parameters of air showers and of the primary cosmic\nrays. The amplitudes of low-energy events (E<10^17 eV) cannot be used for\nsuccessful reconstruction because the background dominates. To lower the energy\nthreshold of the detection and increase the efficiency, we use an autoencoder\nneural network which removes noise from the measured data. This work describes\nour approach to denoising raw data and to the subsequent reconstruction of\nair-shower parameters. We also present results of low-energy event\nreconstruction with the autoencoder.", "category": "astro-ph_IM" }, { "text": "Analysing Astronomy Algorithms for GPUs and Beyond: Astronomy depends on ever-increasing computing power. Processor clock-rates\nhave plateaued, and increased performance is now appearing in the form of\nadditional processor cores on a single chip. This poses significant challenges\nto the astronomy software community. Graphics Processing Units (GPUs), now\ncapable of general-purpose computation, exemplify both the difficult learning\ncurve and the significant speedups exhibited by massively-parallel hardware\narchitectures. We present a generalised approach to tackling this paradigm\nshift, based on the analysis of algorithms. We describe a small collection of\nfoundation algorithms relevant to astronomy and explain how they may be used to\nease the transition to massively-parallel computing architectures. We\ndemonstrate the effectiveness of our approach by applying it to four well-known\nastronomy problems: Hogbom CLEAN, inverse ray-shooting for gravitational\nlensing, pulsar dedispersion and volume rendering. Algorithms with well-defined\nmemory access patterns and high arithmetic intensity stand to receive the\ngreatest performance boost from massively-parallel architectures, while those\nthat involve a significant amount of decision-making may struggle to take\nadvantage of the available processing power.", "category": "astro-ph_IM" }, { "text": "RFI excision using a higher order statistics analysis of the power\n spectrum: A method of radio frequency interference (RFI) suppression in radio astronomy\nspectral observations is described, based on the analysis of the probability\ndistribution of an instantaneous spectrum. This method allows the separation of\nthe gaussian component due to the natural radio source from the non-gaussian\nRFI signal. Examples are presented in the form of computer simulations of this\nmethod of RFI suppression and of WSRT observations with this method applied.\nThe application of real-time digital signal processing for RFI suppression is\nfound to be effective for radio astronomy telescopes operating in a worsening\nspectral environment.", "category": "astro-ph_IM" }, { "text": "Discovery and Characterization of a Faint Stellar Companion to the A3V\n Star Zeta Virginis: Through the combination of high-order Adaptive Optics and coronagraphy, we\nreport the discovery of a faint stellar companion to the A3V star zeta\nVirginis. This companion is ~7 magnitudes fainter than its host star in the\nH-band, and infrared imaging spanning 4.75 years over five epochs indicates\nthis companion has common proper motion with its host star.
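In the spirit of the higher-order-statistics RFI excision entry above, which separates the Gaussian sky signal from non-Gaussian interference, the following sketch flags channels by their excess kurtosis; the data, injection and threshold are all fabricated for illustration and do not reproduce the WSRT implementation:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
n_spectra, n_chan = 2000, 64

# Channelized voltage samples: Gaussian sky noise everywhere ...
volt = rng.normal(size=(n_spectra, n_chan))
# ... plus intermittent, strongly non-Gaussian RFI in channel 20.
volt[rng.random(n_spectra) < 0.05, 20] += 12.0

# For pure Gaussian noise the excess kurtosis per channel is ~0;
# channels carrying non-Gaussian RFI stand out as strongly leptokurtic.
k = kurtosis(volt, axis=0)
flagged = np.where(k > 1.0)[0]    # ad-hoc threshold for this toy example
print(flagged)                     # -> [20]
```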
Using evolutionary\nmodels, we estimate its mass to be 0.168 +/- 0.016 solar masses, giving a mass\nratio for this system of q = 0.082. Assuming the two objects are coeval, this\nmass suggests an M4V-M7V spectral type for the companion, which is confirmed\nthrough integral field spectroscopic measurements. We see clear evidence for\norbital motion from this companion and are able to constrain the semi-major\naxis to be greater than 24.9 AU, the period to be greater than 124 yrs, and the\neccentricity to be greater than 0.16. Multiplicity studies of higher-mass stars\nare relatively rare, and binary companions such as this one at the extreme low\nend of the mass ratio distribution are useful additions to surveys incomplete\nat such a low mass ratio. Moreover, the frequency of binary companions can help\nto discriminate between binary formation scenarios that predict an abundance of\nlow-mass companions forming from the early fragmentation of a massive\ncircumstellar disk. A system such as this may provide insight into the\nanomalous X-ray emission from A stars, hypothesized to be from unseen late-type\nstellar companions. Indeed, we calculate that the presence of this M-dwarf\ncompanion easily accounts for the X-ray emission from this star detected by\nROSAT.", "category": "astro-ph_IM" }, { "text": "HiFLEx -- a highly flexible package to reduce cross-dispersed echelle\n spectra: We describe a flexible data reduction package for high resolution\ncross-dispersed echelle data. This open-source package is developed in Python\nand includes optional GUIs for most of the steps. It does not require any prior\nknowledge about the form or position of the echelle orders. It has been tested\non cross-dispersed echelle spectrographs between 13k and 115k resolution (the\nbifurcated fiber-fed spectrograph ESO-HARPS and the single fiber-fed\nspectrograph TNT-MRES). HiFLEx can be used to determine radial velocities and\nis designed to use the TERRA package, but it can also control radial velocity\npackages such as CERES and SERVAL to perform the radial velocity analysis.\nTests on HARPS data indicate radial velocity results within 3 m/s of the\nliterature pipelines without any fine-tuning of extraction parameters.", "category": "astro-ph_IM" }, { "text": "A Versatile Technique to Enable sub-milli-Kelvin Instrument Stability\n for Precise Radial Velocity Measurements: Tests with the Habitable-zone\n Planet Finder: Insufficient instrument thermo-mechanical stability is one of the many\nroadblocks for achieving 10 cm/s Doppler radial velocity (RV) precision, the\nprecision needed to detect Earth-twins orbiting Solar-type stars. Highly\ntemperature- and pressure-stabilized spectrographs allow us to better calibrate\nout instrumental drifts, thereby helping in distinguishing instrumental noise\nfrom astrophysical stellar signals. We present the design and performance of\nthe Environmental Control System (ECS) for the Habitable-zone Planet Finder\n(HPF), a high-resolution (R=50,000) fiber-fed near infrared (NIR) spectrograph\nfor the 10m Hobby Eberly Telescope at McDonald Observatory. HPF will operate at\n180K, driven by the choice of an H2RG NIR detector array with a 1.7 micron\ncutoff. This ECS has demonstrated 0.6 mK RMS stability over 15 days at both\n180K and 300K, and maintained a high-quality vacuum (<$10^{-7}$ Torr) over\nmonths, during long-term stability tests conducted without a planned passive\nthermal enclosure surrounding the vacuum chamber.
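A stability figure like the 0.6 mK RMS quoted above for the HPF ECS is simply the root-mean-square of the temperature record about its mean; the toy computation below demonstrates the metric on a fabricated sensor trace:

```python
import numpy as np

rng = np.random.default_rng(4)

# Fabricated 15-day temperature log at 1-minute sampling around a 180 K setpoint.
t_minutes = np.arange(15 * 24 * 60)
temps_K = 180.0 + 0.0006 * rng.standard_normal(t_minutes.size)  # ~0.6 mK jitter

rms_mK = 1e3 * np.sqrt(np.mean((temps_K - temps_K.mean()) ** 2))
print(f"RMS stability: {rms_mK:.2f} mK over {t_minutes.size / (24 * 60):.0f} days")
```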
This control scheme is versatile and\ncan be applied as a blueprint to stabilize future NIR and optical\nhigh-precision Doppler instruments over a wide temperature range from ~77K to\nelevated room temperatures. A similar ECS is being implemented to stabilize\nNEID, the NASA/NSF NN-EXPLORE spectrograph for the 3.5m WIYN telescope at Kitt\nPeak, operating at 300K. A full SolidWorks 3D-CAD model and a comprehensive\nparts list of the HPF ECS are included with this manuscript to facilitate the\nadaptation of this versatile environmental control scheme in the broader\nastronomical community.", "category": "astro-ph_IM" }, { "text": "Mitigating radio frequency interference in CHIME/FRB real-time intensity\n data: Extragalactic fast radio bursts (FRBs) are a new class of astrophysical\ntransients with unknown origins that have become a main focus of radio\nobservatories worldwide. FRBs are highly energetic ($\sim 10^{36}$-$10^{42}$\nergs) flashes that last for about a millisecond. Thanks to its broad bandwidth\n(400-800 MHz), large field of view ($\sim$200 sq. deg.), and massive data rate\n(1500 TB of coherently beamformed data per day), the Canadian Hydrogen\nIntensity Mapping Experiment / Fast Radio Burst (CHIME/FRB) project has\nincreased the total number of discovered FRBs by over a factor of 10 in 3 years\nof operation. CHIME/FRB observations are hampered by the constant exposure to\nradio frequency interference (RFI) from artificial devices (e.g., cellular\nphones, aircraft), resulting in $\sim$20% loss of bandwidth. In this work, we\ndescribe our novel technique for mitigating RFI in CHIME/FRB real-time\nintensity data. We mitigate RFI through a sequence of iterative operations,\nwhich mask out statistical outliers from frequency-channelized intensity data\nthat have been effectively high-pass filtered. Keeping false positive and false\nnegative rates at very low levels, our approach is useful for any\nhigh-performance survey of radio transients in the future.", "category": "astro-ph_IM" }, { "text": "Prototype Open Event Reconstruction Pipeline for the Cherenkov Telescope\n Array: The Cherenkov Telescope Array (CTA) is the next-generation gamma-ray\nobservatory currently under construction. It will improve over the current\ngeneration of imaging atmospheric Cherenkov telescopes (IACTs) by a factor of\nfive to ten in sensitivity and it will be able to observe the whole sky from a\ncombination of two sites: a northern site in La Palma, Spain, and a southern\none in Paranal, Chile. CTA will also be the first open gamma-ray observatory.\nAccordingly, the data analysis pipeline is developed as open-source software.\nThe event reconstruction pipeline accepts raw data from the telescopes and\nprocesses it to produce suitable input for the higher-level science tools. Its\nprimary tasks include reconstructing the physical properties of each recorded\nshower and providing the corresponding instrument response functions. ctapipe\nis a framework providing algorithms and tools to facilitate raw data\ncalibration, image extraction, image parameterization and event reconstruction.\nIts main focus is currently the analysis of simulated data but it has also been\nsuccessfully applied to the analysis of data obtained with the first CTA\nprototype telescopes, such as the Large-Sized Telescope 1 (LST-1). pyirf is a\nlibrary to calculate IACT instrument response functions, needed to obtain\nphysics results like spectra and light curves, from the reconstructed event\nlists.
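Going back to the CHIME/FRB entry above, its iterative mask-the-outliers scheme on high-pass-filtered, frequency-channelized intensities can be sketched generically as follows (the detrending choice, sigma threshold and iteration count are invented placeholders, not the survey's actual settings):

```python
import numpy as np

def rfi_mask(intensity, n_iter=3, n_sigma=5.0):
    """Iteratively mask outliers in (time, frequency) intensity data.

    A per-channel median subtraction stands in for the high-pass filter;
    statistics are re-estimated on the unmasked samples after each pass.
    """
    data = intensity - np.median(intensity, axis=0)
    mask = np.zeros(data.shape, dtype=bool)
    for _ in range(n_iter):
        good = ~mask
        mu = data[good].mean()
        sigma = data[good].std()
        mask |= np.abs(data - mu) > n_sigma * sigma
    return mask

rng = np.random.default_rng(5)
x = rng.normal(size=(1000, 256))
x[::97, 100] += 40.0                 # inject sporadic narrowband interference
print(rfi_mask(x).sum())             # number of flagged (time, channel) samples
```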
Building on these two, protopipe is a prototype for the event\nreconstruction pipeline for CTA. Recent developments in these software packages\nwill be presented.", "category": "astro-ph_IM" }, { "text": "Unit panel nodes detection by CNN on FAST reflector: The 500-meter Aperture Spherical Radio Telescope (FAST) has an active\nreflector. During observations, the reflector is deformed into a 300-meter\nparaboloid. To improve its surface accuracy, we propose a photogrammetric\nscheme to measure the positions of 2226 nodes on the reflector. Detecting the\nnodes in the photos is the key problem in photogrammetry. This paper applies a\nConvolutional Neural Network (CNN) with candidate regions to detect the nodes\nin the photos. The experimental results show a high recognition rate of 91.5%,\nwhich is much higher than the recognition rate of traditional edge detection.", "category": "astro-ph_IM" }, { "text": "An optimal method for scheduling observations of large sky error regions\n for finding optical counterparts to transients: The discovery and subsequent study of optical counterparts to transient\nsources is crucial for their complete astrophysical understanding. Various\ngamma ray burst (GRB) detectors, and more notably the ground-based\ngravitational wave detectors, typically have large uncertainties in the sky\npositions of detected sources. Searching these large sky regions spanning\nhundreds of square degrees is a formidable challenge for most ground-based\noptical telescopes, which can usually image less than tens of square degrees of\nthe sky in a single night. We present algorithms for optimal scheduling of such\nfollow-up observations in order to maximize the probability of imaging the\noptical counterpart, based on the all-sky probability distribution of the\nsource position. We incorporate realistic observing constraints like the\ndiurnal cycle, telescope pointing limitations, available observing time, and\nthe rising/setting of the target at the observatory location. We use\nsimulations to demonstrate that our proposed algorithms outperform the default\ngreedy observing schedule used by many observatories. Our algorithms are\napplicable for follow-up of other transient sources with large positional\nuncertainties, like Fermi-detected GRBs, and can easily be adapted for\nscheduling radio or space-based X-ray follow-up.", "category": "astro-ph_IM" }, { "text": "Gamma-Ray Telescopes (in "400 Years of Astronomical Telescopes"): The last half-century has seen dramatic developments in gamma-ray telescopes,\nfrom their initial conception and development through to their blossoming into\nfull maturity as a potent research tool in astronomy. Gamma-ray telescopes are\nleading research in diverse areas such as gamma-ray bursts, blazars, Galactic\ntransients, and the Galactic distribution of aluminum-26.", "category": "astro-ph_IM" }, { "text": "A deep learning framework for jointly extracting spectra and\n source-count distributions in astronomy: Astronomical observations typically provide three-dimensional maps, encoding\nthe distribution of the observed flux in (1) the two angles of the celestial\nsphere and (2) energy/frequency. An important task regarding such maps is to\nstatistically characterize populations of point sources too dim to be\nindividually detected.
As the properties of a single dim source will be poorly\nconstrained, one instead commonly studies the population as a whole, inferring\na source-count distribution (SCD) that describes the number density of sources\nas a function of their brightness. Statistical and machine learning methods for\nrecovering SCDs exist; however, they typically entirely neglect spectral\ninformation associated with the energy distribution of the flux. We present a\ndeep learning framework able to jointly reconstruct the spectra of different\nemission components and the SCD of point-source populations. In a\nproof-of-concept example, we show that our method accurately extracts even\ncomplex-shaped spectra and SCDs from simulated maps.", "category": "astro-ph_IM" }, { "text": "Laue lenses: Focusing optics for hard X/soft Gamma-ray Astronomy: Hard X-/soft Gamma-ray astronomy is a key field for the study of important\nastrophysical phenomena such as the electromagnetic counterparts of\ngravitational waves, gamma-ray bursts, black hole physics and many more.\nHowever, the spatial localization, imaging capabilities and sensitivity of the\nmeasurements are strongly limited for the energy range $>$70 keV due to the\nlack of focusing instruments operating in this energy band. A new generation of\ninstruments suitable to focus hard X-/soft Gamma-rays is necessary to shed\nlight on the nature of astrophysical phenomena which are still unclear due to\nthe limitations of current direct-viewing telescopes. Laue lenses can be the\nanswer to those needs. A Laue lens is an optical device consisting of a large\nnumber of properly oriented crystals which are capable, through Laue\ndiffraction, of concentrating the radiation into the common Laue lens focus. In\ncontrast with the grazing incidence telescopes commonly used for softer X-rays,\nthe transmission configuration of the Laue lenses allows us to obtain a\nsignificant sensitive area even at energies of hundreds of keV. At the\nUniversity of Ferrara we are actively working on the modelling and construction\nof a broad-band Laue lens. In this work we will present the main concepts\nbehind Laue lenses and the latest technological developments of the TRILL\n(Technological Readiness Increase for Laue Lenses) project, devoted to the\nadvancement of the technological readiness of Laue lenses by developing the\nfirst prototype of a lens sector made of cylindrically bent germanium\ncrystals.", "category": "astro-ph_IM" }, { "text": "On-sky measurements of atmospheric dispersion: I. Method validation: Observations with ground-based telescopes are affected by differential\natmospheric dispersion due to the wavelength-dependent index of refraction of\nthe atmosphere. The use of an Atmospheric Dispersion Corrector (ADC) is\nfundamental to compensate for this effect. Atmospheric dispersion correction\nresiduals above the level of ~ 100 milli-arcseconds (mas) will affect\nastronomical observations, in particular through radial velocity errors and\nflux losses. The design of an ADC is based on atmospheric models. To the best\nof our knowledge, those models have never been tested on-sky. In this paper, we\npresent a new method to measure the atmospheric dispersion on-sky in the\noptical range. We require an accuracy better than 50 mas, which is equal to the\ndifference between atmospheric models. The method is based on the use of\ncross-dispersed spectrographs to determine the position of the centroid of the\nspatial profile at each wavelength of each spectral order.
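The centroid measurement at the heart of the atmospheric-dispersion method above is a flux-weighted first moment along the spatial axis at each wavelength; here is a self-contained sketch on a fabricated echelle-order cutout (array shapes, profile width and drift are invented):

```python
import numpy as np

def spatial_centroids(order_image):
    """Flux-weighted centroid of the spatial profile at each wavelength column.

    order_image : (n_spatial, n_wavelength) cutout of one echelle order.
    Returns the centroid position in pixels for every wavelength bin; the
    drift of the centroid with wavelength traces the atmospheric dispersion.
    """
    y = np.arange(order_image.shape[0])[:, None]
    flux = order_image.sum(axis=0)
    return (y * order_image).sum(axis=0) / flux

# Fabricated order: a Gaussian profile whose centre drifts with wavelength.
ny, nw = 21, 500
drift = np.linspace(0.0, 1.5, nw)                     # pixels of dispersion
yy = np.arange(ny)[:, None]
img = np.exp(-0.5 * ((yy - (10 + drift)) / 2.0) ** 2)
print(spatial_centroids(img)[[0, -1]])                # ~[10.0, 11.5]
```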
The method is validated using\ncross-dispersed spectroscopic data acquired with the slit spectrograph UVES. We\nmeasure an instrumental dispersion of 47 mas in the blue arm, and of 15 mas and\n23 mas in the two ranges of the red arm. We also measure a 4% deviation in the\npixel scale from the value cited in the UVES manual. The accuracy of the method\nis ~ 17 mas in the range of 315-665 nm. At this level, we can compare and\ncharacterize different atmospheric dispersion models for better future ADC\ndesigns.", "category": "astro-ph_IM" }, { "text": "Calibration of Radio Interferometers Using a Sparse DoA Estimation\n Framework: The calibration of modern radio interferometers is a significant challenge,\nspecifically at low frequencies. In this perspective, we propose a novel\niterative calibration algorithm, which employs the popular sparse\nrepresentation framework, in the regime where the propagation conditions shift\nthe directions of the sources dissimilarly. More precisely, our algorithm is\ndesigned to estimate the apparent directions of the calibration sources, their\npowers, the directional and undirectional complex gains of the array elements\nand their noise powers, with a reasonable computational complexity. Numerical\nsimulations reveal that the proposed scheme is statistically efficient at low\nSNR and even with additional non-calibration sources at unknown directions.", "category": "astro-ph_IM" }, { "text": "Status of the TREND project: The Tianshan Radio Experiment for Neutrino Detection (TREND) is a\nSino-French collaboration (CNRS/IN2P3 and the Chinese Academy of Sciences)\ndeveloping an autonomous antenna array for the detection of high-energy\nExtensive Air Showers (EAS) on the site of the 21CMA radio observatory. The\nautonomous detection and identification of EAS were achieved by TREND with a\nprototype array in 2009. This result was confirmed soon after, when EAS\nradio-candidates could be tagged as cosmic ray events by an array of particle\ndetectors running in parallel at the same location. This result is an important\nmilestone for TREND and, more generally, for the maturation of the EAS\nradio-detection technique. The array is now composed of 50 antennas covering a\ntotal area of ~1.2 km^2 and has been running in steady conditions since March\n2011. We are presently processing the data to identify EAS radio-candidates. In\nthe long term, TREND is intended to search for high-energy tau neutrinos. Here\nwe only report on the results achieved so far by TREND.", "category": "astro-ph_IM" }, { "text": "Application of the TPB Wavelength Shifter to the DEAP-3600 Spherical\n Acrylic Vessel Inner Surface: DEAP-3600 uses liquid argon contained in a spherical acrylic vessel as a\ntarget medium to perform a sensitive spin-independent dark matter search. Argon\nscintillates in the vacuum ultraviolet spectrum, which requires wavelength\nshifting to convert the VUV photons to visible light so they can be transmitted\nthrough the acrylic light guides and detected by the surrounding\nphotomultiplier tubes. The wavelength shifter 1,1,4,4-tetraphenyl-1,3-butadiene\nwas evaporatively deposited onto the inner surface of the acrylic vessel under\nvacuum. Two evaporations were performed on the DEAP-3600 acrylic vessel,\nresulting in an estimated coating thickness of 3.00 $\pm$ 0.02 $\mu$m, which is\nsuccessfully wavelength shifting with liquid argon in the detector.
Details on the\nwavelength shifter coating requirements, deposition source, testing, and final\nperformance are presented.", "category": "astro-ph_IM" }, { "text": "Imaging Atmospheric Cherenkov Telescopes pointing determination using\n the trajectories of the stars in the field of view: We present a new approach to the pointing determination of Imaging\nAtmospheric Cherenkov Telescopes (IACTs). This method is universal and can be\napplied to any IACT with minor modifications. It uses the trajectories of the\nstars in the field of view of the IACT's main camera and requires neither\ndedicated auxiliary hardware nor a specific data-taking mode. The method\nconsists of two parts: firstly, we reconstruct individual star positions as a\nfunction of time, taking into account the point spread function of the\ntelescope; secondly, we perform a simultaneous fit of all reconstructed star\ntrajectories using the orthogonal distance regression method. The method does\nnot assume any particular star trajectories, does not require a long\nintegration time, and can be applied to any IACT observation mode. The\nperformance of the method is assessed with commissioning data of the\nLarge-Sized Telescope prototype (LST-1), showing the method's stability and the\nremarkable pointing performance of the LST-1 telescope.", "category": "astro-ph_IM" }, { "text": "The Gaia Mission, Binary Stars and Exoplanets: On the 19th of December 2013, the Gaia spacecraft was successfully launched\nby a Soyuz rocket from French Guiana and started its amazing journey to map and\ncharacterise one billion celestial objects with its one billion pixel camera.\nIn this presentation, we briefly review the general aims of the mission and\ndescribe what has happened since launch, including the Ecliptic Pole scanning\nmode. We also focus especially on binary stars, starting with some basic\nobservational aspects, and then turning to the remarkable harvest that Gaia is\nexpected to yield for these objects.", "category": "astro-ph_IM" }, { "text": "The Radio Sky on Short Timescales with LOFAR: Pulsars and Fast\n Transients: LOFAR, the "low-frequency array", will be one of the first in a new\ngeneration of radio telescopes and Square Kilometer Array (SKA) pathfinders\nthat are highly flexible in capability because they are largely software\ndriven. LOFAR will not only open up a mostly unexplored spectral window, the\nlowest frequency radio light observable from the Earth's surface, but it will\nalso be an unprecedented tool with which to monitor the transient radio sky\nover a large field of view and down to timescales of milliseconds or less. Here\nwe discuss LOFAR's current and upcoming capabilities for observing fast\ntransients and pulsars, and briefly present recent commissioning observations\nof known pulsars.", "category": "astro-ph_IM" }, { "text": "Search for extreme energy cosmic ray candidates in the TUS orbital\n experiment data: TUS (Track Ultraviolet Setup) is the first space experiment aimed at checking\nthe possibility of registering extreme energy cosmic rays (EECRs) at E>50 EeV\nby measuring the fluorescence signal of extensive air showers in the\natmosphere. The detector has operated as a part of the scientific payload of\nthe Lomonosov satellite for more than a year.
We describe an algorithm for searching\nfor EECR events in the TUS data and briefly discuss a number of candidates\nselected by formal criteria.", "category": "astro-ph_IM" }, { "text": "Optical Astronomical Facilities at Nainital, India: The Aryabhatta Research Institute of Observational Sciences (acronym ARIES)\nhas operated a 1-m aperture optical telescope at Manora Peak, Nainital, since\n1972. Considering the need for, and the potential of, a moderate-size optical\ntelescope with spectroscopic capability at the geographical longitude of India,\nARIES plans to establish a 3.6m new-technology optical telescope at a new site\ncalled Devasthal. This telescope will have instruments providing\nhigh-resolution spectral and seeing-limited imaging capabilities at visible and\nnear-infrared bands. A few other observing facilities with very specific goals\nare also being established. A 1.3m aperture optical telescope to monitor\noptically variable sources was installed at Devasthal in the year 2010, and a\n0.5-m wide-field (25 square degrees) Baker-Nunn Schmidt telescope to produce a\ndigital map of the Northern sky at optical bands was installed at Manora Peak\nin 2011. A 4-m liquid mirror telescope for a deep-sky survey of transient\nsources is planned at Devasthal. These optical facilities with specialized\nback-end instruments are expected to become operational within the next few\nyears and can be used for optical studies of a wide variety of astronomical\ntopics, including follow-up studies of sources identified in the radio by GMRT\nand in the UV/X-ray by ASTROSAT.", "category": "astro-ph_IM" }, { "text": "Using transfer learning to detect galaxy mergers: We investigate the use of deep convolutional neural networks (deep CNNs) for\nautomatic visual detection of galaxy mergers. Moreover, we investigate the use\nof transfer learning in conjunction with CNNs, by retraining networks first\ntrained on pictures of everyday objects. We test the hypothesis that transfer\nlearning is useful for improving classification performance for small training\nsets. This would make transfer learning useful for finding rare objects in\nastronomical imaging datasets. We find that these deep learning methods perform\nsignificantly better than current state-of-the-art merger detection methods\nbased on nonparametric systems like CAS and GM$_{20}$. Our method is end-to-end\nand robust to image noise and distortions; it can be applied directly without\nimage preprocessing. We also find that transfer learning can act as a\nregulariser in some cases, leading to better overall classification accuracy\n($p = 0.02$). Transfer learning on our full training set lowers the error rate\nfrom 0.038 down to 0.032, a relative improvement of 15%. Finally, we perform a\nbasic sanity-check by creating a merger sample with our method and comparing it\nwith an already existing, manually created merger catalogue in terms of\ncolour-mass distribution and stellar mass function.", "category": "astro-ph_IM" }, { "text": "fcmaker: automating the creation of ESO-compliant finding charts for\n Observing Blocks on p2: fcmaker is a python module that creates astronomical finding charts for\nObserving Blocks (OBs) on the p2 web server from the European Southern\nObservatory (ESO). It provides users with the ability to automate the creation\nof ESO-compliant finding charts for Service Mode and/or Visitor Mode OBs at the\nVery Large Telescope (VLT).
The design of the fcmaker finding charts, based on\nan intimate knowledge of VLT observing procedures, is fine-tuned to best\nsupport night-time operations. As an automated tool, fcmaker also provides\nobservers with the means to independently and visually check the observing\nsequence coded inside an OB. This includes, for example, the signs of telescope\nand position angle offsets. VLT instruments currently supported by fcmaker\ninclude MUSE (WFM-AO, WFM-NOAO, NFM), HAWK-I (AO, NOAO), and X-shooter (full\nsupport). The fcmaker code is published on a dedicated Github repository under\nthe GNU General Public License, and is also available via pypi.", "category": "astro-ph_IM" }, { "text": "The Infrared Imaging Spectrograph (IRIS) for TMT: Prototyping of\n cryogenic compatible stage for the Imager: The IRIS Imager requires opto-mechanical stages that are operable in a vacuum\nand cryogenic environment. The stage for the IRIS Imager is also required to\nsurvive for 10 years without maintenance. To achieve these requirements, we\ndecided to prototype a two-axis stage with an 80 mm clear aperture. The\nprototype was designed as a double-deck stage, with an upper rotary stage and a\nlower linear stage. Most components were selected to take advantage of heritage\nfrom existing astronomical instruments. In contrast, mechanical components with\nlubricants, such as bearings, linear motion guides and ball screws, were\nmodified to survive the cryogenic environment. A performance-proving test was\ncarried out to evaluate errors such as wobbling and rotary and linear\npositioning errors. We achieved 0.002 $\rm deg_{rms}$ wobbling, 0.08 $\rm\ndeg_{0-p}$ rotational positioning error and 0.07 $\rm mm_{0-p}$ translational\npositioning error. A durability test under the anticipated load conditions was\nalso conducted. In this article, we report the details of the mechanical\ndesign, fabrication, performance and durability of the prototype.", "category": "astro-ph_IM" }, { "text": "Adaptive pupil masking for quasi-static speckle suppression: Quasi-static speckles are a current limitation to faint companion imaging of\nbright stars. Here we show through simulation and theory that an adaptive pupil\nmask can be used to reduce these speckles and increase the visibility of faint\ncompanions. This is achieved by placing an adaptive mask in the conjugate pupil\nplane of the telescope. The mask consists of a number of independently\ncontrollable elements which can either allow the light in the subaperture to\npass or block it. This actively changes the shape of the telescope pupil and\nhence the diffraction pattern in the focal plane. By randomly blocking\nsubapertures we force the quasi-static speckles to become dynamic. The\nlong-exposure PSF is then smooth, free of quasi-static speckles. However, as\nthe PSF will now contain a larger halo due to the blocking, the signal-to-noise\nratio (SNR) is reduced, requiring longer exposure times to detect the\ncompanion. For example, in the specific case of a faint companion at\n5xlambda/D, the exposure time to achieve the same SNR will be increased by a\nfactor of 1.35. In addition, we show that the visibility of companions can be\ngreatly enhanced in comparison to long exposures when the dark speckle method\nis applied to short-exposure images taken with the adaptive pupil mask.
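The random subaperture blocking described in the adaptive pupil masking entry above is straightforward to demonstrate with a Fraunhofer toy model: averaging focal-plane images over many random binary masks turns fixed speckle structure into a smooth halo. Grid size, subaperture layout and blocking fraction below are arbitrary choices, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 256                                   # pupil grid size
yy, xx = np.mgrid[-1:1:n*1j, -1:1:n*1j]
pupil = (xx**2 + yy**2) <= 1.0            # circular aperture

def psf(mask):
    """Fraunhofer PSF: squared modulus of the FFT of the (masked) pupil."""
    field = np.fft.fftshift(np.fft.fft2(pupil * mask))
    return np.abs(field) ** 2

def random_mask(block_frac=0.2, n_sub=8):
    """Divide the pupil into n_sub x n_sub subapertures; randomly block some."""
    keep = rng.random((n_sub, n_sub)) > block_frac
    return np.kron(keep, np.ones((n // n_sub, n // n_sub)))

# Long exposure = average over many short exposures with different masks.
long_exp = np.mean([psf(random_mask()) for _ in range(100)], axis=0)
print(long_exp.shape)   # (256, 256); speckle structure averages into a halo
```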
We show that the contrast\nratio between the PSF peak and the halo is then increased by a factor of\napproximately 100 (5 magnitudes), and we detect companions 11 magnitudes\nfainter than the star at $5\lambda/D$ and up to 18 magnitudes fainter at\n$22.5\lambda/D$.", "category": "astro-ph_IM" }, { "text": "Comparative Analysis of the Observational Properties of Fast Radio\n Bursts at the Frequencies of 111 and 1400 MHz: A comparative analysis of the observational characteristics of fast radio\nbursts at the frequencies 111 and 1400 MHz is carried out. The distributions of\nradio bursts by the dispersion measure are constructed. At both frequencies,\nthey are described by a lognormal distribution with the parameters $\mu = 6.2$ and\n$\sigma = 0.7$. The dependence $\tau_{sc}(DM)$ of the scattering value on the\ndispersion measure at 111 MHz and 1400 MHz is also constructed. This dependence\nis fundamentally different from the dependence for pulsars. A comparative\nanalysis of the relationship between the scattering of pulses and the\ndispersion measure at 1400 MHz and 111 MHz showed that for both frequencies it\nhas the form $\tau_{sc}(DM)\sim DM^k$, where $k = 0.49 \pm 0.18$ and $k = 0.43\n\pm 0.15$ for the frequencies 111 and 1400 MHz, respectively. The obtained\ndependence is explained within the framework of the assumption of an\nextragalactic origin of fast radio bursts and an almost uniform\ndistribution of matter in intergalactic space. From the dependence\n$\tau_{sc}(DM)$, a combined estimate of the contribution of the halo of our Galaxy\nand of the host galaxy to $DM$ is obtained: $DM_{halo} +\n\frac{DM_{host}}{1+z}\approx 60\;{\rm pc/cm}^3$. Based on the LogN - LogS\ndependence, the average spectral index of radio bursts is derived to be $\alpha = -\n0.63 \pm 0.20$, provided that the statistical properties of these samples at 111\nand 1400 MHz are the same.", "category": "astro-ph_IM" }, { "text": "Applications for Microwave Kinetic Induction Detectors in Advanced\n Instrumentation: In recent years Microwave Kinetic Inductance Detectors (MKIDs) have emerged\nas one of the most promising novel low temperature detector technologies. Their\nunrivaled scalability makes them very attractive for many modern applications\nand scientific instruments. In this paper we intend to give an overview of how\nand where MKIDs are currently being used or are suggested to be used in the\nfuture. MKID based projects are ongoing or proposed for observational\nastronomy, particle physics, material science and THz imaging, and the goal of\nthis review is to provide an easily usable and thorough list of possible\nstarting points for more in-depth literature research on the many areas\nprofiting from kinetic inductance detectors.", "category": "astro-ph_IM" }, { "text": "X-ray performance of a customized large-format scientific CMOS detector: In recent years, the performance of Scientific Complementary Metal Oxide\nSemiconductor (sCMOS) sensors has been improved significantly. Compared with CCD\nsensors, sCMOS sensors have various advantages, making them potentially better\ndevices for optical and X-ray detection, especially in time-domain astronomy.\nAfter a series of tests of sCMOS sensors, we proposed a new dedicated\nhigh-speed, large-format X-ray detector in 2016, in cooperation with Gpixel Inc.\nThis new sCMOS sensor has a physical size of 6 cm by 6 cm, with an array of\n4096 by 4096 pixels and a pixel size of 15 um.
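Returning to the scattering-dispersion relation $\tau_{sc}(DM)\sim DM^k$ quoted above: such an index is typically obtained from a straight-line fit in log-log space. A minimal sketch with synthetic, invented data points (not survey data):

```python
# Fitting tau_sc(DM) ~ DM^k as a straight line in log-log space.
# The data points below are synthetic placeholders for illustration.
import numpy as np

dm = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])        # pc/cm^3
rng = np.random.default_rng(1)
tau = 0.01 * dm ** 0.45 * np.exp(0.1 * rng.normal(size=5))  # mock scattering times

k, log_a = np.polyfit(np.log10(dm), np.log10(tau), 1)
print(f"best-fit index k = {k:.2f}")   # scatters around the input 0.45
```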
The frame rate is 20.1 fps under\nthe current configuration and can be boosted to a maximum value around 100 fps. The\nepitaxial thickness is increased to 10 um compared to the previous sCMOS\nproduct. We show the results of its first taped-out product in this work. The\ndark current of this sCMOS is lower than 10 e/pixel/s at 20$^{\circ}$C, and lower than\n0.02 e/pixel/s at -30$^{\circ}$C. The Fixed Pattern Noise (FPN) and the readout noise are\nlower than 5 e in the high-gain setting and show a small increase at low\ntemperature. The energy resolution reaches 180.1 eV (3.1%) at 5.90 keV for\nsingle-pixel events and 212.3 eV (3.6%) for all split events. The continuous\nX-ray spectrum measurement shows that this sensor is able to respond to X-ray\nphotons from 500 eV to 37 keV. The excellent performance, as demonstrated by\nthese test results, makes the sCMOS sensor an ideal detector for X-ray imaging and\nspectroscopic applications.", "category": "astro-ph_IM" }, { "text": "Design and operation of the ATLAS Transient Science Server: The Asteroid Terrestrial impact Last Alert System (ATLAS) system consists of\ntwo 0.5m Schmidt telescopes with cameras covering 29 square degrees at a plate\nscale of 1.86 arcsec per pixel. Working in tandem, the telescopes routinely\nsurvey the whole sky visible from Hawaii (above $\delta > -50^{\circ}$) every\ntwo nights, exposing four times per night, typically reaching $o < 19$\nmagnitude per exposure when the moon is illuminated and $c < 19.5$ per exposure\nin dark skies. Construction is underway of two further units to be sited in\nChile and South Africa, which will result in an all-sky daily cadence from 2021.\nInitially designed for detecting potentially hazardous near-Earth objects,\nATLAS produces data that enable a range of astrophysical time domain science. To extract\ntransients from the data stream requires a computing system to process the\ndata, assimilate detections in time and space and associate them with known\nastrophysical sources. Here we describe the hardware and software\ninfrastructure to produce a stream of clean, real, astrophysical transients in\nreal time. This involves machine learning and boosted decision tree algorithms\nto identify extragalactic and Galactic transients. Typically we detect 10-15\nsupernova candidates per night, which we immediately announce publicly. The\nATLAS discoveries not only enable rapid follow-up of interesting sources but\nwill provide complete statistical samples within the local volume of 100 Mpc.\nThe detected supernova rate within 100 Mpc, with no corrections for\ncompleteness, is already significantly higher (by a factor of 1.5 to 2) than the\ncurrently accepted rates.", "category": "astro-ph_IM" }, { "text": "Planck LFI flight model feed horns: this paper is part of the Prelaunch status LFI papers published on JINST:\nhttp://www.iop.org/EJ/journal/-page=extra.proc5/jinst The Low Frequency\nInstrument is optically interfaced with the ESA Planck telescope through 11\ncorrugated feed horns, each connected to the Radiometer Chain Assembly (RCA).\nThis paper describes the design, the manufacturing and the testing of the\nflight model feed horns. They have been designed to optimize the LFI optical\ninterfaces, taking into account the tight mechanical requirements imposed by the\nPlanck focal plane layout.
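The machine-learning filtering step in the ATLAS description above can be caricatured as follows; this is a generic boosted-decision-tree real/bogus classifier with invented features, assuming scikit-learn, and is not the production ATLAS code:

```python
# Sketch of a real/bogus classification step in a transient server.
# Features and labels here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Hypothetical per-candidate features, e.g. FWHM, ellipticity,
# PSF-fit chi^2, proximity to a known star.
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=5000)) > 0

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
clf.fit(X[:4000], y[:4000])
scores = clf.predict_proba(X[4000:])[:, 1]   # "realness" score in [0, 1]
keep = scores > 0.8   # candidates above a tuned threshold go to humans
```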
All eleven units have been successfully tested\nand integrated with the Ortho-Mode Transducers.", "category": "astro-ph_IM" }, { "text": "Investigation of Correction Method of the Spacecraft Low Altitude\n Ranging: The gamma-ray altitude control system is an important piece of equipment for deep-space\nexploration and sample-return missions. Its main purpose is low-altitude\nmeasurement of the spacecraft, based on the Compton effect, at the moment when it\nlands on an extraterrestrial body or returns samples to Earth, and\ncorrection of the ignition altitude of the spacecraft's retrograde landing rocket at\ndifferent landing speeds. This paper presents an ignition-altitude correction\nmethod for the spacecraft at different landing speeds, based on the gradient of\nthe reflected gamma-ray particle-number field. A theoretical model is\nestablished, the feasibility of the algorithm is proved by mathematical\nderivation and verified by experiment, and the adaptability of the algorithm\nunder different parameters is described. The method is of practical value for\nlanding control of deep-space exploration spacecraft landing on planetary\nsurfaces.", "category": "astro-ph_IM" }, { "text": "Amplitude Correction Factors of KVN Observations: We report results of an investigation of amplitude calibration for very long\nbaseline interferometry (VLBI) observations with the Korean VLBI Network (KVN).\nAmplitude correction factors are estimated based on a comparison of KVN\nobservations at 22~GHz, correlated by the Daejeon hardware correlator and the DiFX\nsoftware correlator at the Korea Astronomy and Space Science Institute (KASI), with\nVery Long Baseline Array (VLBA) observations at 22~GHz processed by the DiFX software\ncorrelator at the National Radio Astronomy Observatory (NRAO). We used the\nobservations of the compact radio sources 3C 454.3, NRAO 512, OJ 287, BL Lac, 3C\n279, 1633+382, and 1510-089, which are almost unresolved for baselines in a\nrange of 350-477 km. Visibility data of the sources obtained with similar\nbaselines at KVN and VLBA are selected, fringe-fitted, calibrated, and compared\nfor their amplitudes. We found that visibility amplitudes of KVN observations\nshould be corrected by factors of 1.10 and 1.35 when correlated by the DiFX and\nDaejeon correlators, respectively. These correction factors are attributed to\nthe combination of two steps of 2-bit quantization in the KVN observing systems and\ncharacteristics of the Daejeon correlator.", "category": "astro-ph_IM" }, { "text": "High-speed X-ray imaging spectroscopy system with Zynq SoC for solar\n observations: We have developed a system combining a back-illuminated\nComplementary-Metal-Oxide-Semiconductor (CMOS) imaging sensor and a Xilinx Zynq\nSystem-on-Chip (SoC) device for soft X-ray (0.5-10 keV) imaging spectroscopy\nobservations of the Sun to investigate the dynamics of the solar corona. Because\ntypical timescales of energy release phenomena in the corona span a few minutes\nat most, we aim to obtain the corresponding energy spectra and derive the\nphysical parameters, i.e., temperature and emission measure, every few tens of\nseconds or less for future solar X-ray observations. An X-ray photon-counting\ntechnique, with a frame rate of a few hundred frames per second or more, can\nachieve such results. We used the Zynq SoC device to achieve the requirements.\nZynq contains an ARM processor core, which is also known as the Processing\nSystem (PS) part, and a Programmable Logic (PL) part in a single chip.
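The X-ray photon-counting technique just mentioned amounts to finding isolated charge peaks in each fast frame and summing their split charge. A minimal sketch, assuming NumPy/SciPy, with a simplified threshold and event definition (illustrative, not the flight algorithm):

```python
# Photon counting on a single pedestal-subtracted CMOS frame: pixels above
# threshold that are local maxima are taken as photon hits; summed charge
# in a 3x3 patch gives the pulse height for the energy spectrum.
import numpy as np
from scipy.ndimage import maximum_filter

def extract_events(frame, threshold=50.0):
    """Return per-photon pulse heights from one frame."""
    is_peak = (frame == maximum_filter(frame, size=3)) & (frame > threshold)
    ys, xs = np.nonzero(is_peak)
    energies = []
    for y, x in zip(ys, xs):
        patch = frame[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        energies.append(patch[patch > 0].sum())   # collect split charge
    return np.array(energies)

# At a few hundred frames per second each frame is sparse enough that
# individual photons rarely overlap, which is what makes this work.
```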
We use\nthe PL to control the sensor and the PS for seamless recording of data to a storage\nsystem. We aim to use the system for the third flight of the\nFocusing Optics Solar X-ray Imager (FOXSI-3) sounding rocket experiment for the\nfirst photon-counting X-ray imaging and spectroscopy of the Sun.", "category": "astro-ph_IM" }, { "text": "The PRL 2.5m Telescope and its First Light Instruments: FOC & PARAS-2: We present the design and performance of the recently\ncommissioned 2.5-meter telescope at the PRL Mount Abu Observatory, located at\nGurushikhar, Mount Abu, India. The telescope has been successfully installed at\nthe site, and the Site Acceptance Test (SAT) was completed in October 2022. It\nis one of the most advanced telescopes in India, featuring the\nRitchey-Chr$\acute{e}$tien optical configuration with primary mirror active\noptics, tip-tilt on the side-port, and wavefront correction sensors. Along with\nthe telescope, its two first-light instruments, namely the Faint Object Camera\n(FOC) and PARAS-2, were also integrated and attached to it in June 2022.\nFOC is a camera that uses a 4096 x 4112 pixel detector with enhanced-transmission\nSDSS-type filters known as u', g', r', i', z'. It has a limiting\nmagnitude of 21 mag in a 10-minute exposure in the r'-band. The other first-light\ninstrument, PARAS-2, is a state-of-the-art high-resolution fiber-fed\nspectrograph operating in the 380-690 nm waveband, aimed at unveiling\nsuper-Earth-like worlds. The spectrograph works at a resolution of $\sim$107,000,\nmaking it the highest-resolution spectrograph in Asia to date; it is kept under an\nultra-stable temperature and pressure environment, at 22.5 $\pm$ 0.001\n$^{\circ}$C and 0.005 $\pm$ 0.0005 mbar, respectively. Initial calibration\ntests of the spectrograph using a Uranium Argon Hollow Cathode Lamp (UAr HCL)\nhave yielded intrinsic instrumental RV stability down to 30 cm s$^{-1}$.", "category": "astro-ph_IM" }, { "text": "Investigation of dust grains by optical tweezers for space applications: Cosmic dust plays a dominant role in the universe, especially in the\nformation of stars and planetary systems. Furthermore, the surface of cosmic\ndust grains is the workbench on which molecular hydrogen and simple organic\ncompounds are formed. We manipulate individual dust particles in water solution\nby contactless and non-invasive techniques such as standard and Raman tweezers,\nto characterize their response to mechanical effects of light (optical forces\nand torques) and to determine their mineral compositions. Moreover, we show\naccurate optical force calculations in the T-matrix formalism, highlighting the\nkey role of composition and complex morphology in the optical trapping of cosmic\ndust particles. This opens perspectives for future applications of optical\ntweezers in curation facilities for sample return missions or in\nextraterrestrial environments.", "category": "astro-ph_IM" }, { "text": "The Chinese space millimeter-wavelength VLBI array - a step toward\n imaging the most compact astronomical objects: The Shanghai Astronomical Observatory (SHAO) of the Chinese Academy of\nSciences (CAS) is studying a space VLBI (Very Long Baseline Interferometer)\nprogram. The ultimate objective of the program is to image the immediate\nvicinity of the supermassive black holes (SMBHs) in the hearts of galaxies with\na space-based VLBI array working at sub-millimeter wavelengths and to gain\nultrahigh angular resolution.
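A quick consistency check of the resolution figures that follow, treating the resolution as the diffraction limit $\lambda/B$; the baseline value used here is an assumption of order the apogee height plus the Earth's radius:

```python
# Back-of-envelope check: a space-ground baseline of ~7e7 m at 43 GHz
# corresponds to roughly 20 micro-arcseconds of angular resolution.
import numpy as np

c = 299_792_458.0                 # m/s
freq = 43e9                       # Hz
wavelength = c / freq             # ~7 mm
baseline = 7.2e7                  # m, assumed order-of-magnitude value

theta_rad = wavelength / baseline
theta_uas = theta_rad / (np.pi / 180 / 3600) * 1e6   # rad -> micro-arcsec
print(f"{theta_uas:.0f} micro-arcseconds")            # ~20 uas
```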
To achieve this ambitious goal, the mission plan\nis divided into three stages. The first phase of the program is called the Space\nMillimeter-wavelength VLBI Array (SMVA), consisting of two satellites, each\ncarrying a 10-m diameter radio telescope into elliptical orbits with an apogee\nheight of 60000 km and a perigee height of 1200 km. The VLBI telescopes in\nspace will work at three frequency bands, 43, 22 and 8 GHz. The 43- and 22-GHz\nbands will be equipped with cryogenic receivers. The space telescopes,\nobserving together with ground-based radio telescopes, enable the highest\nangular resolution of 20 micro-arcseconds ($\mu$as) at 43 GHz. The SMVA is\nexpected to conduct a broad range of high-resolution observational research,\ne.g. imaging the shadow (dark region) of the supermassive black hole in the\nheart of the galaxy M87 for the first time, studying the kinematics of water\nmegamasers surrounding the SMBHs, and exploring the power source of active\ngalactic nuclei. Pre-research funding was granted by the CAS in October\n2012 to support scientific and technical feasibility studies. These studies\nalso include the manufacturing of a prototype of the deployable 10-m\nspace-based telescope and a 22-GHz receiver. Here we report on the latest\nprogress of the SMVA project.", "category": "astro-ph_IM" }, { "text": "A Floating Octave Bandwidth Cone-Disc Antenna for Detection of Cosmic\n Dawn: The critical component of radio astronomy radiometers built to detect\nredshifted 21-cm signals from Cosmic Dawn is the antenna element. We describe\nthe design and performance of an octave bandwidth cone-disc antenna built to\ndetect this signal in the band 40 to 90 MHz. The Cosmic Dawn signal is\npredicted to be a wideband spectral feature orders of magnitude weaker than sky\nand ground radio brightness. Thus, the engineering challenge is to design a\nlow-frequency antenna that is able to deliver the faint cosmological signal,\nalong with the foreground sky, to the receiver with high fidelity. The antenna\ncharacteristics must not compromise detection by imprinting any confusing\nspectral features on the celestial radiation, ground emission or receiver\nnoise. An innovation in the present design is making the antenna electrically\nsmaller than a half wavelength and operating it on the surface of a sufficiently\nlarge water body. The homogeneous and high-permittivity medium beneath the\nsmall cone-disc antenna results in an achromatic beam pattern, high radiation\nefficiency and minimal unwanted confusing spectral features. The antenna design\nwas optimized in WIPL-D and FEKO. A prototype was constructed and deployed on a\nlake to validate its performance with field measurements.\n Index Terms: Antenna measurements, radio astronomy, reflector antennas.", "category": "astro-ph_IM" }, { "text": "Reconstruction of signals with unknown spectra in information field\n theory with parameter uncertainty: The optimal reconstruction of cosmic metric perturbations and other signals\nrequires knowledge of their power spectra and other parameters. If these are\nnot known a priori, they have to be measured simultaneously from the same data\nused for the signal reconstruction. We formulate the general problem of signal\ninference in the presence of unknown parameters within the framework of\ninformation field theory.
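Since the approaches enumerated next all reduce to Wiener-filter-like operations, a minimal 1-D Wiener filter is a useful reference point. A NumPy sketch with an assumed red signal spectrum and white noise (toy construction, not the paper's formalism):

```python
# 1-D Wiener filter: optimal linear reconstruction of a Gaussian signal
# from noisy data when the signal and noise spectra are known.
import numpy as np

rng = np.random.default_rng(0)
n = 1024
k = np.fft.fftfreq(n)

# Correlated ("red") signal drawn by shaping white noise in Fourier space.
shape = 1.0 / (0.01 + np.abs(k))
signal = np.fft.ifft(shape * np.fft.fft(rng.normal(size=n))).real
noise = rng.normal(scale=signal.std(), size=n)
data = signal + noise

# Wiener filter W = P_s / (P_s + P_n); with this construction the mode-wise
# spectra are proportional to shape**2 (signal) and the noise variance.
W = shape**2 / (shape**2 + noise.var())
reconstruction = np.fft.ifft(W * np.fft.fft(data)).real
```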
We develop a generic parameter uncertainty\nrenormalized estimation (PURE) technique and address the problem of\nreconstructing Gaussian signals with unknown power spectrum with five different\napproaches: (i) separate maximum-a-posteriori power spectrum measurement and\nsubsequent reconstruction, (ii) maximum-a-posteriori power reconstruction with\nmarginalized power spectrum, (iii) maximizing the joint posterior of signal and\nspectrum, (iv) guessing the spectrum from the variance in the Wiener filter\nmap, and (v) renormalization flow analysis of the field theoretical problem\nproviding the PURE filter. In all cases, the reconstruction can be described or\napproximated as Wiener filter operations with assumed signal spectra derived\nfrom the data according to the same recipe, but with differing coefficients.\nAll of these filters, except the renormalized one, exhibit a perception\nthreshold in the case of a Jeffreys prior for the unknown spectrum. Data modes\nwith variance below this threshold do not affect the signal reconstruction at\nall. Filter (iv) seems to be similar to the so-called Karhunen-Loeve and\nFeldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology,\nwhich therefore should also exhibit a marginal perception threshold if\ncorrectly implemented. We present statistical performance tests and show that\nthe PURE filter is superior to the others.", "category": "astro-ph_IM" }, { "text": "Real-time Adaptive Optics with pyramid wavefront sensors: Accurate\n wavefront reconstruction using iterative methods: In this paper, we address the inverse problem of fast, stable, and\nhigh-quality wavefront reconstruction from pyramid wavefront sensor data for\nAdaptive Optics systems on Extremely Large Telescopes. For solving the\nindicated problem we apply well-known iterative mathematical algorithms, namely\nconjugate gradient, steepest descent, Landweber, Landweber-Kaczmarz and\nsteepest descent-Kaczmarz iterations, based on theoretical studies of the pyramid\nwavefront sensor. We compare the performance (in terms of correction quality\nand speed) of these algorithms in end-to-end numerical simulations of a closed\nadaptive loop. The comparison is performed in the context of a high-order SCAO\nsystem for METIS, one of the first-light instruments currently under design for\nthe Extremely Large Telescope. We show that, though being iterative, the\nanalyzed algorithms, when applied in the studied context, can be implemented in\na very efficient manner, which reduces the related computational effort\nsignificantly. We demonstrate that the suggested analytically developed\napproaches involving iterative algorithms provide comparable quality to\nstandard matrix-vector-multiplication methods while being computationally\ncheaper.", "category": "astro-ph_IM" }, { "text": "Two-element interferometer for millimeter-wave solar flare observations: In this paper, we present the design and implementation of a two-element\ninterferometer working in the millimeter wave band (39.5 GHz - 40 GHz) for\nobserving solar radio emissions through nulling interference. The system is\ncomposed of two 50 cm aperture Cassegrain antennas mounted on a common\nequatorial mount, with a separation of 230 wavelengths. The cross-correlation\nof the received signals effectively cancels the quiet-Sun component of the\nlarge flux density (~3000 sfu), which otherwise raises the detection limit through\natmospheric fluctuations.
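The nulling idea just stated can be seen in a toy model: a component common to both antennas survives the cross-correlation, while components uncorrelated between them average away. An illustrative NumPy sketch with made-up amplitudes:

```python
# Toy model of correlation nulling: a compact flare is common to both
# antennas; the (resolved) quiet-Sun contribution and receiver noise are
# modeled as uncorrelated between them.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
flare = 0.2 * rng.normal(size=n)        # common compact component
quiet1 = rng.normal(size=n)             # stand-in for emission resolved
quiet2 = rng.normal(size=n)             #   out on the long baseline
v1 = flare + quiet1 + 0.5 * rng.normal(size=n)
v2 = flare + quiet2 + 0.5 * rng.normal(size=n)

total_power = np.mean(v1 * v1)          # single-antenna output: ~1.29
cross_power = np.mean(v1 * v2)          # interferometer output:  ~0.04
print(total_power, cross_power)         # flare survives, quiet Sun nulled
```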
The measured system performance is as follows: the\nnoise figure of the analog front end (AFE) in the observation band is less than 2.1 dB,\nthe system sensitivity is approximately 12.4 K (~34 sfu) with an integration time constant\nof 0.1 ms (default), the frequency resolution is 153 kHz, and the dynamic range\nis larger than 30 dB. In actual testing, the nulling interferometer\nobserves the quiet Sun with a low level of output fluctuations (of up to 50 sfu)\nand shows a significantly lower radiation flux variability (of up to 190 sfu)\nthan an equivalent single-antenna system, even under thick cloud cover. As a\nresult, this new design can effectively improve observation sensitivity by\nreducing the impact of atmospheric and system fluctuations during observation.", "category": "astro-ph_IM" }, { "text": "Quasar Microlensing Models with Constraints on the Quasar Light Curves: Quasar microlensing analyses implicitly generate a model of the variability\nof the source quasar. The implied source variability may be unrealistic, yet its\nlikelihood is generally not evaluated. We used the damped random walk (DRW)\nmodel for quasar variability to evaluate the likelihood of the source\nvariability and applied the revised algorithm to a microlensing analysis of the\nlensed quasar RX J1131-1231. We compared the estimates of the source quasar\ndisk size and average lens galaxy stellar mass with and without applying the DRW\nlikelihoods for the source variability model and found no significant effect on\nthe estimated physical parameters. The most likely explanation is that\nunrealistic source light curve models are generally associated with poor\nmicrolensing fits that already make a negligible contribution to the\nprobability distributions of the derived parameters.", "category": "astro-ph_IM" }, { "text": "SPHERE on-sky performance compared with budget predictions: The SPHERE (spectro-photometric exoplanet research) extreme-AO planet hunter\nsaw first light at the VLT observatory on Mount Paranal in May 2014 after ten\nyears of development. Great efforts were put into modelling its performance,\nparticularly in terms of achievable contrast, and into budgeting instrumental\nfeatures such as wave front errors and optical transmission to each of the\ninstrument's three focal planes, the near infrared dual imaging camera IRDIS,\nthe near infrared integral field spectrograph IFS and the visible polarimetric\ncamera ZIMPOL. In this paper we aim at comparing predicted performance with\nmeasured performance. In addition to comparing on-sky contrast curves and\ncalibrated transmission measurements, we also compare the PSD-based wave front\nerror budget with in-situ wave front maps obtained thanks to a Zernike phase\nmask, ZELDA, implemented in the infrared coronagraph wheel. One of the most\ncritical elements of the SPHERE system is its high-order deformable mirror, a\nprototype 40x40 actuator piezo stack design developed in parallel with the\ninstrument itself.
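For reference, the damped-random-walk likelihood invoked in the microlensing abstract above is a Gaussian-process likelihood with an exponential covariance; a compact sketch (generic formulation, not the paper's code):

```python
# Scoring a light curve under the DRW model: Gaussian process with
#   C_ij = sigma^2 * exp(-|t_i - t_j| / tau)  plus measurement errors.
import numpy as np

def drw_loglike(t, mag, sigma, tau, err):
    """Gaussian log-likelihood of a light curve under the DRW model."""
    C = sigma**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
    C += np.diag(err**2)                 # add measurement variances
    resid = mag - mag.mean()             # crude mean subtraction
    _, logdet = np.linalg.slogdet(C)
    chi2 = resid @ np.linalg.solve(C, resid)
    return -0.5 * (chi2 + logdet + len(t) * np.log(2 * np.pi))
```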
The development was a success, as witnessed by the\ninstrument performance, in spite of some bad surprises discovered on the way.\nThe devastating effects of operating without properly taking into account the\nloss of several actuators and the thermally and temporally induced variations\nin the DM shape will be analysed, and the actions taken to mitigate these\ndefects through the introduction of specially designed Lyot stops and\nactivation of one of the mirrors in the optical train will be described.", "category": "astro-ph_IM" }, { "text": "Detailed Studies of Atmospheric Calibration in Imaging Cherenkov\n Astronomy: The current generation of Imaging Atmospheric Cherenkov telescopes is\nallowing the sky to be probed with greater sensitivity than ever before in the\nenergy range around and above 100 GeV. To minimise the systematic errors on\nderived fluxes, a full calibration of the atmospheric properties is important\ngiven the calorimetric nature of the technique. In this paper we discuss an\napproach to address this problem by using a ceilometer co-pointed with the\nH.E.S.S. telescopes and present the results of the application of this method\nto a set of observational data taken on the active galactic nucleus (AGN) PKS\n2155-304 in 2004.", "category": "astro-ph_IM" }, { "text": "Visualising three-dimensional volumetric data with an arbitrary\n coordinate system: Astronomical data do not always use Cartesian coordinates. Both all-sky\nobservational data and simulations of rotationally symmetric systems, such as\naccretion and protoplanetary discs, may use spherical polar or other coordinate\nsystems. Standard displays rely on Cartesian coordinates, but converting\nnon-Cartesian data into Cartesian format causes distortion of the data and loss\nof detail. I here demonstrate a method using standard techniques from computer\ngraphics that avoids these problems with 3D data in arbitrary coordinate\nsystems. The method adds minimal computational cost to the display process and\nis suitable for both real-time, interactive content and producing fixed rendered\nimages and videos. Proof-of-concept code is provided which works for data in\nspherical polar coordinates.", "category": "astro-ph_IM" }, { "text": "Fast gravitational wave parameter estimation without compromises: We present a lightweight, flexible, and high-performance framework for\ninferring the properties of gravitational-wave events. By combining likelihood\nheterodyning, automatically-differentiable and accelerator-compatible\nwaveforms, and gradient-based Markov chain Monte Carlo (MCMC) sampling enhanced\nby normalizing flows, we achieve full Bayesian parameter estimation for real\nevents like GW150914 and GW170817 within a minute of sampling time. Our\nframework does not require pretraining or explicit reparameterizations and can\nbe generalized to handle higher dimensional problems. We present the details of\nour implementation and discuss trade-offs and future developments in the\ncontext of other proposed strategies for real-time parameter estimation. Our\ncode for running the analysis is publicly available on GitHub\nhttps://github.com/kazewong/jim.", "category": "astro-ph_IM" }, { "text": "Application of Deep Learning methods to analysis of Imaging Atmospheric\n Cherenkov Telescopes data: Ground-based gamma-ray observations with Imaging Atmospheric Cherenkov\nTelescopes (IACTs) play a significant role in the discovery of very high energy\n(E > 100 GeV) gamma-ray emitters.
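As a toy version of the gradient-based MCMC ingredient in the gravitational-wave framework mentioned above, here is a Metropolis-adjusted Langevin sampler on a placeholder 2-D Gaussian target, written with JAX's automatic differentiation. This shows only the pattern, not the jim implementation:

```python
# Gradient-based MCMC (MALA) sketch: automatic differentiation of the
# log-posterior supplies the drift term of each proposal.
import jax
import jax.numpy as jnp

def log_post(x):                         # placeholder 2-D Gaussian target
    return -0.5 * jnp.sum(x**2 / jnp.array([1.0, 0.1]))

grad_lp = jax.grad(log_post)
eps = 0.1

def log_q(x_new, x_old):                 # proposal density (up to a constant)
    mu = x_old + 0.5 * eps**2 * grad_lp(x_old)
    return -jnp.sum((x_new - mu) ** 2) / (2 * eps**2)

key, x, samples = jax.random.PRNGKey(0), jnp.zeros(2), []
for _ in range(1000):
    key, k1, k2 = jax.random.split(key, 3)
    prop = x + 0.5 * eps**2 * grad_lp(x) + eps * jax.random.normal(k1, (2,))
    log_alpha = (log_post(prop) + log_q(x, prop)
                 - log_post(x) - log_q(prop, x))
    if jnp.log(jax.random.uniform(k2)) < log_alpha:   # MH accept/reject
        x = prop
    samples.append(x)
```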
The analysis of IACT data demands a highly\nefficient background rejection technique, as well as methods to accurately\ndetermine the energy of the recorded gamma ray and the position of its source\nin the sky. We present results for background rejection and signal direction\nreconstruction from first studies of a novel data analysis scheme for IACT\nmeasurements. The new analysis is based on a set of Convolutional Neural\nNetworks (CNNs) applied to images from the four H.E.S.S. phase-I telescopes. As\nthe H.E.S.S. camera pixels are arranged in a hexagonal array, we demonstrate\ntwo ways to use such image data to train CNNs: by resampling the images to a\nsquare grid and by applying modified convolution kernels that conserve the\nhexagonal grid properties.\n The networks were trained on sets of Monte-Carlo simulated events and tested\non both simulations and measured data from the H.E.S.S. array. A comparison\nof the CNN analysis with current state-of-the-art algorithms reveals a clear\nimprovement in background rejection performance. When applied to H.E.S.S.\nobservation data, the CNN direction reconstruction performs at a similar level\nto traditional methods. These results serve as a proof-of-concept for the\napplication of CNNs to the analysis of events recorded by IACTs.", "category": "astro-ph_IM" }, { "text": "METAPHOR: Probability density estimation for machine learning based\n photometric redshifts: We present METAPHOR (Machine-learning Estimation Tool for Accurate\nPHOtometric Redshifts), a method able to provide a reliable PDF for photometric\ngalaxy redshifts estimated through empirical techniques. METAPHOR is a modular\nworkflow, mainly based on the MLPQNA neural network as internal engine to\nderive photometric galaxy redshifts, but giving the possibility to easily\nreplace MLPQNA with any other method to predict photo-z's and their PDF. We\npresent here the results of a validation test of the workflow on galaxies\nfrom SDSS-DR9, also showing the universality of the method by replacing MLPQNA\nwith KNN and Random Forest models. The validation test also includes a\ncomparison with the PDFs derived from a traditional SED template\nfitting method (Le Phare).", "category": "astro-ph_IM" }, { "text": "Characterization and Optimization of Skipper CCDs for the SOAR Integral\n Field Spectrograph: We present results from the characterization and optimization of six Skipper\nCCDs for use in a prototype focal plane for the SOAR Integral Field\nSpectrograph (SIFS). We tested eight Skipper CCDs and selected six for SIFS\nbased on performance results. The Skipper CCDs are 6k $\times$ 1k, 15 $\mu$m\npixels, thick, fully-depleted, $p$-channel devices that have been thinned to\n$\sim 250 \mu$m, backside processed, and treated with an antireflective\ncoating. We optimize readout time to achieve $<4.3$ e$^-$ rms/pixel in a single\nnon-destructive readout and $0.5$ e$^-$ rms/pixel in $5 \%$ of the detector. We\ndemonstrate single-photon counting with $N_{\rm samp}$ = 400 ($\sigma_{\rm\n0e^-} \sim$ 0.18 e$^-$ rms/pixel) for all 24 amplifiers (four amplifiers per\ndetector). We also perform conventional CCD characterization measurements such\nas cosmetic defects ($ <0.45 \%$ ``bad" pixels), dark current ($\sim 2 \times\n10^{-4}$ e$^-$/pixel/sec.), charge transfer inefficiency ($3.44 \times 10^{-7}$\non average), and charge diffusion (PSF $< 7.5 \mu$m). We report on\ncharacterization and optimization measurements that are only enabled by\nphoton-counting.
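The photon-counting capability above rests on a simple statistical fact: averaging $N_{\rm samp}$ independent non-destructive reads scales the read noise as $1/\sqrt{N_{\rm samp}}$, so 4.3 e$^-$ single-read noise approaches $\sim$0.2 e$^-$ at $N_{\rm samp}=400$, consistent with the quoted $\sigma_{0e^-}$. A NumPy check of the scaling:

```python
# Averaging N_samp non-destructive reads of the same pixel charge
# suppresses read noise by 1/sqrt(N_samp).
import numpy as np

rng = np.random.default_rng(0)
true_charge = 7.0                        # electrons in one pixel
single_read_noise = 4.3                  # e- rms per non-destructive read
n_samp = 400

reads = true_charge + rng.normal(scale=single_read_noise, size=n_samp)
print(reads.mean())                            # close to 7.0
print(single_read_noise / np.sqrt(n_samp))     # expected ~0.21 e- rms
```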
Such results include voltage optimization to achieve full-well\ncapacities of $\sim 40,000-63,000$ e$^-$ while maintaining photon-counting\ncapabilities, clock-induced-charge optimization, and non-linearity measurements at\nlow signals (a few tens of electrons). Furthermore, we perform measurements of\nthe brighter-fatter effect and absolute quantum efficiency ($\gtrsim\, 80 \%$\nbetween 450 nm and 980 nm; $\gtrsim\,90 \%$ between 600 nm and 900 nm) using\nSkipper CCDs.", "category": "astro-ph_IM" }, { "text": "IVOA Recommendation: Simple Cone Search Version 1.03: This specification defines a simple query protocol for retrieving records\nfrom a catalog of astronomical sources. The query describes sky position and an\nangular distance, defining a cone on the sky. The response returns a list of\nastronomical sources from the catalog whose positions lie within the cone,\nformatted as a VOTable. This version of the specification is essentially a\ntranscription of the original Cone Search specification in order to move it\ninto the IVOA standardization process.", "category": "astro-ph_IM" }, { "text": "Effect of filters on the time-delay interferometry residual laser noise\n for LISA: The Laser Interferometer Space Antenna (LISA) is a European Space Agency\nmission that aims to measure gravitational waves in the millihertz range. Laser\nfrequency noise enters the interferometric measurements and dominates the\nexpected gravitational signals by many orders of magnitude. Time-delay\ninterferometry (TDI) is a technique that reduces this laser noise by\nsynthesizing virtual equal-arm interferometric measurements. Laboratory\nexperiments and numerical simulations have confirmed that this reduction is\nsufficient to meet the scientific goals of the mission in proof-of-concept\nsetups. In this paper, we show that the on-board antialiasing filters play an\nimportant role in TDI's performance when the flexing of the constellation is\naccounted for. This coupling was neglected in previous studies. To reach an\noptimal reduction level, filters with vanishing group delays must be used on\nboard or synthesized off-line. We propose a theoretical model of the residual\nlaser noise including this flexing-filtering coupling. We also use two\nindependent simulators to produce realistic measurement signals and compute the\ncorresponding TDI Michelson variables. We show that our theoretical model\nagrees with the simulated data with exquisite precision. Using these two\ncomplementary approaches, we confirm TDI's ability to reduce laser frequency\nnoise in a more realistic mission setup. The theoretical model provides insight\non filter design and implementation.", "category": "astro-ph_IM" }, { "text": "The BlueMUSE data reduction pipeline: lessons learned from MUSE and\n first design choices: BlueMUSE is an integral field spectrograph in an early development stage for\nthe ESO VLT. For our design of the data reduction software for this instrument,\nwe are first reviewing capabilities and issues of the pipeline of the existing\nMUSE instrument. MUSE has been in operation at the VLT since 2014 and led to\ndiscoveries published in more than 600 refereed scientific papers. While\nBlueMUSE and MUSE have many common properties, we briefly point out a few key\ndifferences between both instruments. We outline a first version of the\nflowchart for the science reduction, and discuss the necessary changes due to\nthe blue wavelength range covered by BlueMUSE.
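For the Simple Cone Search protocol summarized above, a query is just an HTTP GET with RA, DEC and SR parameters that returns a VOTable. A sketch assuming astropy is available; the service URL is a placeholder, not a real endpoint:

```python
# Minimal Simple Cone Search call: RA/DEC in degrees, SR = cone radius
# in degrees; the response is parsed as a VOTable.
from urllib.parse import urlencode
from urllib.request import urlopen
from astropy.io.votable import parse_single_table

base = "https://example.org/conesearch?"          # hypothetical service
query = urlencode({"RA": 180.0, "DEC": -30.0, "SR": 0.25})

with urlopen(base + query) as response:
    table = parse_single_table(response).to_table()
print(table.colnames)   # catalog sources whose positions lie in the cone
```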
We also detail specific new\nfeatures, for example, how the pipeline and subsequent analysis will benefit\nfrom improved handling of the data covariance, and a more integrated approach\nto the line-spread function, as well as improvements regarding the wavelength\ncalibration, which is of particular importance in the blue optical range. We finally\ndiscuss how simulations of BlueMUSE datacubes are being implemented and how\nthey will be used to prepare the science of the instrument.", "category": "astro-ph_IM" }, { "text": "Ground-based gamma-ray telescopes as ground stations in deep-space\n lasercom: As the amount of information to be transmitted from deep space rapidly\nincreases, radiofrequency technology has become a bottleneck in space\ncommunications. RF is already limiting the scientific outcome of deep-space\nmissions and could be a significant obstacle in the development of manned\nmissions. Lasercom holds the promise of solving this problem, as it will\nconsiderably increase the data rate while decreasing the energy, mass and\nvolume of onboard communication systems. In RF deep-space communications, where\nthe received power is the main limitation, the traditional approach to boost\nthe data throughput has been increasing the receiver's aperture, e.g. the 70-m\nantennas in the NASA's Deep Space Network. Optical communications also can\nbenefit from this strategy, thus 10-m class telescopes have typically been\nsuggested to support future deep-space links. However, the cost of big\ntelescopes increases exponentially with their aperture, and new ideas are needed\nto optimize this cost-to-performance ratio. Here, the use of ground-based\ngamma-ray telescopes, known as Cherenkov telescopes, is suggested. These are\noptical telescopes designed to maximize the receiver's aperture at a minimum cost\nwith some relaxed requirements. As they are used in an array configuration and\nmultiple identical units need to be built, each element of the telescope is\ndesigned to minimize its cost. Furthermore, the native array configuration would\nfacilitate the joint operation of Cherenkov and lasercom telescopes. These\ntelescopes offer very large apertures, ranging from several meters to almost 30\nmeters, which could greatly improve the performance of optical ground stations.\nThe key elements of these telescopes have been studied as applied to lasercom,\nreaching the conclusion that it could be an interesting strategy to include them\nin the future development of an optical deep-space network.", "category": "astro-ph_IM" }, { "text": "Characterization of a dense aperture array for radio astronomy: EMBRACE@Nancay is a prototype instrument consisting of an array of 4608\ndensely packed antenna elements creating a fully sampled, unblocked aperture.\nThis technology is proposed for the Square Kilometre Array and has the\npotential of providing an extremely large field of view, making it an ideal\nsurvey instrument. We describe the system, calibration procedures, and results\nfrom the prototype.", "category": "astro-ph_IM" }, { "text": "(Very)-High-Energy Gamma-Ray Astrophysics: the Future: Several projects planned or proposed can significantly expand our knowledge\nof the high-energy Universe in gamma rays. Construction of the Cherenkov\nTelescope Array (CTA) has started, and other detectors which will use the\nreconstruction of extensive air showers are planned.
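The aperture argument in the lasercom abstract above is easy to put in numbers, since received optical power scales with collecting area (the diameters below are illustrative values within the range quoted there):

```python
# Received power ~ aperture area: a ~28 m Cherenkov-class dish versus a
# 10 m station is worth ~(28/10)^2, i.e. roughly 9 dB of link margin.
import math

d_station, d_cherenkov = 10.0, 28.0     # m, illustrative diameters
gain = (d_cherenkov / d_station) ** 2
print(f"{gain:.1f}x  ({10 * math.log10(gain):.1f} dB)")
```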
This report explores the near\nfuture and possible evolutions over the longer term.", "category": "astro-ph_IM" }, { "text": "Importance of charge capture in inter-phase regions during readout of\n charge-coupled devices: The current understanding of charge transfer dynamics in Charge-Coupled\nDevices (CCDs) is that charge is moved so quickly from one phase to the next in\na clocking sequence and with a density so low that trapping of charge in the\ninter-phase regions is negligible. However, new simulation capabilities\ndeveloped at the Centre for Electronic Imaging, which include direct input of\nelectron density simulations, have made it possible to investigate this\nassumption further. As part of the radiation testing campaign of the Euclid\nCCD273 devices, data have been obtained using the trap-pumping method, which can\nbe used to identify and characterise single defects in CCDs. Combining these data\nwith simulations, we find that trapping during the transfer of charge between\nphases is indeed necessary in order to explain the results of the data\nanalysis. This result could influence not only trap pumping theory and how trap\npumping should be performed, but also how a radiation-damaged CCD is read out\nin the most optimal way.", "category": "astro-ph_IM" }, { "text": "Spectral and polarimetric characterization of the Gas Pixel Detector\n filled with dimethyl ether: The Gas Pixel Detector belongs to the very limited class of gas detectors\noptimized for the measurement of X-ray polarization in the emission of\nastrophysical sources. The choice of the gas mixture in which X-ray photons are\nabsorbed and photoelectrons propagate strongly affects both the energy range of\nthe instrument and its performance in terms of gain, track dimension and,\nultimately, polarimetric sensitivity. Here we present the characterization of\nthe Gas Pixel Detector with a 1 cm thick cell filled with dimethyl ether (DME)\nat 0.79 atm, selected among other mixtures for its very low diffusion\ncoefficient. Almost completely polarized and monochromatic photons were\nproduced at the calibration facility built at INAF/IASF-Rome exploiting Bragg\ndiffraction at nearly 45 degrees. For the first time ever, we measured the\nmodulation factor and the spectral capabilities of the instrument at energies\nas low as 2.0 keV, but also at 2.6 keV, 3.7 keV, 4.0 keV, 5.2 keV and 7.8 keV.\nThese measurements cover almost the entire energy range of the instrument\nand allow a comparison of the achieved sensitivity with that of the standard\nmixture, composed of helium and DME.", "category": "astro-ph_IM" }, { "text": "X-ray Astronomy in the Laboratory with a Miniature Compact Object\n Produced by Laser-Driven Implosion: Laboratory spectroscopy of non-thermal equilibrium plasmas photoionized by\nintense radiation is a key to understanding compact objects, such as black\nholes, based on astronomical observations. This paper describes an experiment\nto study photoionizing plasmas in the laboratory under well-defined and genuine\nconditions. Photoionized plasma is here generated using a 0.5-keV Planckian\nx-ray source created by means of a laser-driven implosion. The measured x-ray\nspectrum from the photoionized silicon plasma resembles those observed from the\nbinary stars Cygnus X-3 and Vela X-1 with the Chandra x-ray satellite.
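The polarimetric sensitivity discussed in the Gas Pixel Detector abstract above is quantified by the modulation factor, extracted by fitting a $\cos^2$ curve to the photoelectron azimuth histogram. A sketch on synthetic counts, assuming SciPy (generic method, not the instrument pipeline):

```python
# Fit N(phi) = A + B cos^2(phi - phi0) to an azimuth histogram; the
# modulation factor is (Nmax - Nmin) / (Nmax + Nmin) = B / (2A + B).
import numpy as np
from scipy.optimize import curve_fit

def modulation(phi, a, b, phi0):
    return a + b * np.cos(phi - phi0) ** 2

rng = np.random.default_rng(0)
phi = np.linspace(0, 2 * np.pi, 36, endpoint=False)
counts = modulation(phi, 100.0, 80.0, 0.6) + rng.normal(scale=3.0, size=36)

(a, b, phi0), _ = curve_fit(modulation, phi, counts, p0=[90.0, 50.0, 0.0])
mu = b / (2 * a + b)
print(f"modulation factor = {mu:.2f}")
```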
This\ndemonstrates that an extreme radiation field was produced in the laboratory;\nhowever, the theoretical interpretation of the laboratory spectrum\nsignificantly contradicts the generally accepted explanations in x-ray\nastronomy. This model experiment offers a novel test bed for validation and\nverification of computational codes used in x-ray astronomy.", "category": "astro-ph_IM" }, { "text": "Robust period estimation using mutual information for multi-band light\n curves in the synoptic survey era: The Large Synoptic Survey Telescope (LSST) will produce an unprecedented\nnumber of light curves using six optical bands. Robust and efficient methods\nthat can aggregate data from multidimensional sparsely-sampled time series are\nneeded. In this paper we present a new method for light curve period estimation\nbased on the quadratic mutual information (QMI). The proposed method does not\nassume a particular model for the light curve or its underlying probability\ndensity, and it is robust to non-Gaussian noise and outliers. By combining the\nQMI from several bands the true period can be estimated even when no\nsingle-band QMI yields the period. Period recovery performance as a function of\naverage magnitude and sample size is measured using 30,000 synthetic multi-band\nlight curves of RR Lyrae and Cepheid variables generated by the LSST Operations\nand Catalog simulators. The results show that aggregating information from\nseveral bands is highly beneficial in LSST sparsely-sampled time series,\nyielding an absolute increase in period recovery rate of up to 50%. We also show\nthat the QMI is more robust to noise and light curve length (sample size) than\nthe multiband generalizations of the Lomb-Scargle and Analysis of Variance\nperiodograms, recovering the true period in 10-30% more cases than its\ncompetitors. A Python package containing efficient Cython implementations of\nthe QMI and other methods is provided.", "category": "astro-ph_IM" }, { "text": "The PAU Survey: Narrow-band image photometry: PAUCam is an innovative optical narrow-band imager mounted at the William\nHerschel Telescope built for the Physics of the Accelerating Universe Survey\n(PAUS). Its set of 40 filters results in images that are complex to calibrate,\nwith specific instrumental signatures that cannot be processed with traditional\ndata reduction techniques. In this paper we present two pipelines developed by\nthe PAUS data management team with the objective of producing science-ready\ncatalogues from the uncalibrated raw images. The Nightly pipeline takes care of\nall image processing, with bespoke algorithms for photometric calibration and\nscattered-light correction. The Multi-Epoch and Multi-Band Analysis (MEMBA)\npipeline performs forced photometry over a reference catalogue to optimize the\nphotometric redshift performance. We verify against spectroscopic observations\nthat the current approach delivers an inter-band photometric calibration of\n0.8% across the set of 40 narrow bands. The large volume of data produced every\nnight and the rapid survey strategy feedback constraints require operating both\npipelines in the Port d'Informaci\'o Cientifica data centre with intense\nparallelization.
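The QMI estimator above is involved to implement; as a point of comparison, the single-band Lomb-Scargle periodogram it is benchmarked against takes a few lines with astropy. Synthetic sparse sampling for illustration:

```python
# Baseline single-band period search with the Lomb-Scargle periodogram,
# one of the comparison methods named above. Synthetic RR-Lyrae-like data.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 365, 60))          # sparse, irregular epochs
true_period = 0.61                             # days
mag = 0.3 * np.sin(2 * np.pi * t / true_period) \
      + rng.normal(scale=0.05, size=60)

frequency, power = LombScargle(t, mag).autopower(maximum_frequency=5.0)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"recovered period: {best_period:.3f} d")
```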
While alternative algorithms for further improvements in\nphoto-z performance are under investigation, the image calibration and\nphotometry presented in this work already enable state-of-the-art photometric\nredshifts down to $i_{AB}=23.0$.", "category": "astro-ph_IM" }, { "text": "The design of the Ali CMB Polarization Telescope receiver: Ali CMB Polarization Telescope (AliCPT-1) is the first CMB degree-scale\npolarimeter to be deployed on the Tibetan plateau at 5,250m above sea level.\nAliCPT-1 is a 90/150 GHz 72 cm aperture, two-lens refracting telescope cooled\ndown to 4 K. Alumina lenses, 800mm in diameter, image the CMB in a 33.4 deg\nfield of view on a 636mm wide focal plane. The modularized focal plane consists\nof dichroic polarization-sensitive Transition-Edge Sensors (TESes). Each module\nincludes 1,704 optically active TESes fabricated on a 150mm diameter silicon\nwafer. Each TES array is read out with a microwave multiplexing readout system\ncapable of a multiplexing factor up to 2,048. Such a large multiplexing factor\nhas allowed the practical deployment of tens of thousands of detectors,\nenabling the design of a receiver that can operate up to 19 TES arrays for a\ntotal of 32,376 TESes. AliCPT-1 leverages the technological advancements in the\ndetector design from multiple generations of previously successful\nfeedhorn-coupled polarimeters, and in the instrument design from BICEP-3, but\napplied on a larger scale. The cryostat receiver is currently under integration\nand testing. During the first deployment year, the focal plane will be\npopulated with up to 4 TES arrays. Further TES arrays will be deployed in the\nfollowing years, fully populating the focal plane with 19 arrays in the fourth\ndeployment year. Here we present the AliCPT-1 receiver design, and how the\ndesign has been optimized to meet the experimental requirements.", "category": "astro-ph_IM" }, { "text": "BATATA: A device to characterize the punch-through observed in\n underground muon detectors and to operate as a prototype for AMIGA: BATATA is a hodoscope comprising three X-Y planes of plastic scintillation\ndetectors. This system of buried counters is complemented by an array of 3\nwater-Cherenkov detectors, located at the vertices of an equilateral triangle\nwith 200 m sides. This small surface array is triggered by extensive air\nshowers. The BATATA detector will be installed at the centre of the AMIGA\narray, where it will be used to quantify the electromagnetic contamination of\nthe muon signal as a function of depth, and so to validate, in situ, the\nnumerical estimates made of the optimal depth for the AMIGA muon detectors.\nBATATA will also serve as a prototype to aid the design of these detectors.", "category": "astro-ph_IM" }, { "text": "DIPol-UF: simultaneous three-color ($BVR$) polarimeter with EM CCDs: We describe a new instrument capable of high precision ($10^{-5}$)\npolarimetric observations simultaneously in three passbands ($BVR$). The\ninstrument utilizes electron-multiplying (EM) CCD cameras for high efficiency and\nfast image readout. The key features of DIPol-UF are: (i) optical design with\nhigh throughput and inherent stability; (ii) great versatility which makes the\ninstrument optimally suitable for observations of bright and faint targets;\n(iii) control system which allows using the polarimeter remotely.
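The reduction behind any such broad-band polarimeter ends with the normalized Stokes parameters; the degree and angle of polarization follow directly (the example numbers below are invented):

```python
# From normalized Stokes q and u to polarization degree and angle.
import numpy as np

q, u = 1.2e-4, -0.8e-4                        # example normalized Stokes values
p = np.hypot(q, u)                            # polarization degree
theta = 0.5 * np.degrees(np.arctan2(u, q))    # position angle in degrees
print(f"p = {p:.2e}, theta = {theta:.1f} deg")
```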
Examples are\ngiven of the first results obtained from high signal-to-noise observations of\nbright nearby stars and of fainter sources such as X-ray binaries in their\nquiescent states.", "category": "astro-ph_IM" }, { "text": "QUBIC V: Cryogenic system design and performance: Current experiments aimed at measuring the polarization of the Cosmic\nMicrowave Background (CMB) use cryogenic detector arrays and cold optical\nsystems to boost the mapping speed of the sky survey. For these reasons, large\nvolume cryogenic systems, with large optical windows, working continuously for\nyears, are needed. Here we report on the cryogenic system of the QUBIC (Q and U\nBolometric Interferometer for Cosmology) experiment: we describe its design,\nfabrication, experimental optimization and validation in the Technological\nDemonstrator configuration. The QUBIC cryogenic system is based on a large\nvolume cryostat, using two pulse-tube refrigerators to cool at ~3K a large (~1\nm^3) volume, heavy (~165kg) instrument, including the cryogenic polarization\nmodulator, the corrugated feedhorn array, and the lower temperature stages; a\n4He evaporator cooling at ~1K the interferometer beam combiner; a 3He\nevaporator cooling at ~0.3K the focal-plane detector arrays. The cryogenic\nsystem has been tested and validated for more than 6 months of continuous\noperation. The detector arrays have reached a stable operating temperature of\n0.33K, while the polarization modulator has been operated from a ~10K base\ntemperature. The system has been tilted to cover the boresight elevation range\n20-90 deg without significant temperature variations. The instrument is\nnow ready for deployment to the high Argentinean Andes.", "category": "astro-ph_IM" }, { "text": "Inferring kilonova population properties with a hierarchical Bayesian\n framework I : Non-detection methodology and single-event analyses: We present ${\tt nimbus}$: a hierarchical Bayesian framework to infer the\nintrinsic luminosity parameters of kilonovae (KNe) associated with\ngravitational-wave (GW) events, based purely on non-detections. This framework\nmakes use of GW 3-D distance information and electromagnetic upper limits from\nmultiple surveys for multiple events, and self-consistently accounts for finite\nsky-coverage and probability of astrophysical origin. The framework is agnostic\nto the brightness evolution assumed and can account for multiple\nelectromagnetic passbands simultaneously. Our analyses highlight the importance\nof accounting for model selection effects, especially in the context of\nnon-detections. We show our methodology using a simple, two-parameter linear\nbrightness model, taking the follow-up of GW190425 with the Zwicky Transient\nFacility (ZTF) as a single-event test case for two different prior choices of\nmodel parameters -- (i) uniform/uninformative priors and (ii) astrophysical\npriors based on surrogate models of Monte Carlo radiative transfer simulations\nof KNe. We present results under the assumption that the KN is within the\nsearched region to demonstrate functionality and the importance of prior\nchoice. Our results show consistency with ${\tt simsurvey}$ -- an astronomical\nsurvey simulation tool used previously in the literature to constrain the\npopulation of KNe. While our results based on uniform priors strongly constrain\nthe parameter space, those based on astrophysical priors are largely\nuninformative, highlighting the need for deeper constraints.
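The non-detection ingredient of the framework above can be caricatured with a Monte Carlo over the GW distance posterior; all numbers below are invented for illustration and the brightness model is deliberately trivial:

```python
# Toy non-detection probability: chance that a source of absolute
# magnitude M_abs would have evaded a survey with limiting magnitude m_lim,
# marginalized over mock GW distance samples.
import numpy as np

rng = np.random.default_rng(0)
dist_mpc = rng.normal(160.0, 40.0, 100_000)    # mock GW distance posterior
dist_mpc = dist_mpc[dist_mpc > 0]
m_lim = 20.5                                    # assumed survey depth

def p_nondetection(M_abs):
    apparent = M_abs + 5 * np.log10(dist_mpc * 1e6 / 10.0)  # distance modulus
    return np.mean(apparent > m_lim)            # fraction fainter than limit

print(p_nondetection(-16.0))   # brighter KN -> harder to miss
print(p_nondetection(-12.0))   # fainter KN -> easily missed
```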
Future studies\nwith multiple events having electromagnetic follow-up from multiple surveys\nshould make it possible to constrain the KN population further.", "category": "astro-ph_IM" }, { "text": "EAGLE Spectroscopy of Resolved Stellar Populations Beyond the Local\n Group: We give an overview of the science case for spectroscopy of resolved stellar\npopulations beyond the Local Group with the European Extremely Large Telescope\n(E-ELT). In particular, we present science simulations undertaken as part of\nthe EAGLE Phase A design study for a multi-integral-field-unit, near-infrared\nspectrograph. EAGLE will exploit the unprecedented primary aperture of the\nE-ELT to deliver AO-corrected spectroscopy across a large (38.5 sq. arcmin)\nfield, truly revolutionising our view of stellar populations in the Local\nVolume.", "category": "astro-ph_IM" }, { "text": "How to Calculate Molecular Column Density: The calculation of the molecular column density from molecular spectral\n(rotational or ro-vibrational) transition measurements is one of the most basic\nquantities derived from molecular spectroscopy. Starting from first principles,\nwhere we describe the basic physics behind the radiative and collisional\nexcitation of molecules and the radiative transfer of their emission, we derive\na general expression for the molecular column density. As the calculation of\nthe molecular column density involves a knowledge of the molecular energy level\ndegeneracies, rotational partition functions, dipole moment matrix elements,\nand line strengths, we include generalized derivations of these\nmolecule-specific quantities. Given that approximations to the column density\nequation are often useful, we explore the optically thin, optically thick, and\nlow-frequency limits to our derived general molecular column density relation.\nWe also evaluate the limitations of the common assumption that the molecular\nexcitation temperature is constant, and address the distinction between beam-\nand source-averaged column densities. We conclude our discussion of the\nmolecular column density with worked examples for C$^{18}$O, C$^{17}$O,\nN$_2$H$^+$, NH$_3$, and H$_2$CO. Ancillary information on some subtleties\ninvolving line profile functions, conversion between integrated flux and\nbrightness temperature, the calculation of the uncertainty associated with an\nintegrated intensity measurement, the calculation of spectral line optical\ndepth using hyperfine or isotopologue measurements, the calculation of the\nkinetic temperature from a symmetric molecule excitation temperature\nmeasurement, and relative hyperfine intensity calculations for NH$_3$ is\npresented in appendices. The intent of this document is to provide a reference\nfor researchers studying astrophysical molecular spectroscopic measurements.", "category": "astro-ph_IM" }, { "text": "Faint objects in motion: the new frontier of high precision astrometry: Sky survey telescopes and powerful targeted telescopes play complementary\nroles in astronomy. In order to investigate the nature and characteristics of\nthe motions of very faint objects, a flexibly-pointed instrument capable of\nhigh astrometric accuracy is an ideal complement to current astrometric surveys\nand a unique tool for precision astrophysics. Such a space-based mission will\npush the frontier of precision astrometry from evidence of Earth-mass habitable\nworlds around the nearest stars, to distant Milky Way objects, and out to the\nLocal Group of galaxies.
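To fix the scale of the astrometric signals at stake here: the stellar wobble induced by a planet is $\alpha = (M_p/M_*)(a/d)$, and an Earth analogue around a Sun-like star at 10 pc displaces the star by a few tenths of a micro-arcsecond. A worked computation:

```python
# Astrometric signature of an Earth analogue at 10 pc.
# With a in AU and d in pc, a/d is directly in arcseconds.
m_planet_earths = 1.0
m_star_suns = 1.0
a_au, d_pc = 1.0, 10.0

mass_ratio = m_planet_earths * 3.0e-6 / m_star_suns   # M_earth/M_sun ~ 3e-6
alpha_arcsec = mass_ratio * (a_au / d_pc)
print(f"{alpha_arcsec * 1e6:.2f} micro-arcseconds")   # ~0.30
```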
As we enter the era of the James Webb Space Telescope\nand the new ground-based, adaptive-optics-enabled giant telescopes, by\nobtaining these high precision measurements on key objects that Gaia could not\nreach, a mission that focuses on high precision astrometry science can\nconsolidate our theoretical understanding of the local Universe, enable\nextrapolation of physical processes to remote redshifts, and derive a much more\nconsistent picture of cosmological evolution and the likely fate of our cosmos.\nSeveral missions have already been proposed to address the science case of\nfaint objects in motion using high precision astrometry: NEAT, proposed\nfor the ESA M3 opportunity; micro-NEAT for the S1 opportunity; and Theia for\nthe M4 and M5 opportunities. Additional new mission configurations adapted with\ntechnological innovations could be envisioned to pursue accurate measurements\nof these extremely small motions. The goal of this White Paper is to address\nthe fundamental science questions that are at stake when we focus on the\nmotions of faint sky objects and to briefly review instrumentation and mission\nprofiles.", "category": "astro-ph_IM" }, { "text": "Towards Extremely Precise Radial Velocities: I. Simulated Solar Spectra\n for Testing Exoplanet Detection Algorithms: Recent and upcoming stabilized spectrographs are pushing the frontier for\nDoppler spectroscopy to detect and characterize low-mass planets.\nSpecifications for these instruments are so impressive that intrinsic stellar\nvariability is expected to limit their Doppler precision for most target stars\n(Fischer et al. 2016). To realize their full potential, astronomers must\ndevelop new strategies for distinguishing true Doppler shifts from intrinsic\nstellar variability. Stellar variability due to star spots, faculae and other\nrotationally-linked variability is particularly concerning, as the stellar\nrotation period is often included in the range of potential planet orbital\nperiods. To robustly detect and accurately characterize low-mass planets via\nDoppler planet surveys, the exoplanet community must develop statistical models\ncapable of jointly modeling planetary perturbations and intrinsic stellar\nvariability. Towards this effort, this note presents simulations of extremely\nhigh resolution, solar-like spectra created with SOAP 2.0 (arXiv:1409.3594)\nthat include multiple evolving star spots. We anticipate this data set will\ncontribute to future studies developing, testing, and comparing statistical\nmethods for measuring physical radial velocities amid contamination by stellar\nvariability.", "category": "astro-ph_IM" }, { "text": "The CASA software for radio astronomy: status update from ADASS 2019: CASA, the Common Astronomy Software Applications package, is the primary data\nprocessing software for the Atacama Large Millimeter/submillimeter Array (ALMA)\nand NSF's Karl G. Jansky Very Large Array (VLA), and is frequently used also\nfor other radio telescopes. The CASA software can process data from both\nsingle-dish and aperture-synthesis telescopes, and one of its core\nfunctionalities is to support the data reduction and imaging pipelines for\nALMA, VLA and the VLA Sky Survey (VLASS).
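A basic operation behind the simulated spectra described above is imposing a known radial velocity on a high-resolution spectrum; a minimal NumPy version, with an invented Gaussian absorption line as the test spectrum:

```python
# Doppler-shift a spectrum by velocity v: wavelengths scale by (1 + v/c),
# then resample onto the original wavelength grid.
import numpy as np

C = 299_792_458.0                        # m/s

def doppler_shift(wave, flux, v_mps):
    """Return flux resampled so the spectrum appears shifted by v_mps."""
    shifted_wave = wave * (1.0 + v_mps / C)
    return np.interp(wave, shifted_wave, flux)

wave = np.linspace(5000.0, 5010.0, 4000)             # Angstrom
flux = 1.0 - 0.6 * np.exp(-0.5 * ((wave - 5005.0) / 0.05) ** 2)
flux_shifted = doppler_shift(wave, flux, 10.0)       # 10 m/s, planet-like
```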
CASA has recently undergone several\nexciting new developments, including an increased flexibility in Python (CASA\n6), support of Very Long Baseline Interferometry (VLBI), performance gains\nthrough parallel imaging, data visualization with the new Cube Analysis\nRendering Tool for Astronomy (CARTA), enhanced reliability and testing, and\nmodernized documentation. These proceedings of the 2019 Astronomical Data\nAnalysis Software & Systems (ADASS) conference give an update on the CASA\nproject, and detail how these new developments will enhance user experience of\nCASA.", "category": "astro-ph_IM" }, { "text": "A dipole amplifier for electric dipole moments, axion-like particles and\n a dense dark matter hairs detector: A tool that can constrain, in minutes, beyond-the-standard-model parameters\nlike electric dipole moments (EDM) down to a lower-bound\n$d_\text{e}^{\cal{N}}<10^{-37}\text{e}\cdot\text{cm}$ in bulk materials, or the\ncoupling of axion-like particles (ALP) to photons down to\n$|G_{a\gamma\gamma}|<10^{-16}$~GeV$^{-1}$, is described. Best limits are\n$d^n_e<3\cdot10^{-26}\text{e}\cdot\text{cm}$ for neutron EDM and\n$|G_{a\gamma\gamma}|<6.6\cdot10^{-11}$~GeV$^{-1}$. The {\it dipole amplifier}\nis built from a superconducting loop immersed in a toroidal magnetic field,\n$\vec{B}$. When nuclear magnetic moments in the London penetration depth align\nwith $\vec{B}$, the bulk magnetization is always accompanied by an EDM-induced\nbulk electric field $\vec{E}\propto\vec{B}$ that generates detectable\noscillatory supercurrents with a characteristic frequency\n$\omega_{\text{D}}\propto d_\text{e}^{\cal{N}}$. Cold dark matter (CDM) ALPs are\nformally similar, where $\omega_\text{D}\propto\n|G_{a\gamma\gamma}|\sqrt{n_a/(2m_a)}$ with $m_a$ the ALP mass and $n_a$ its\nnumber density. A space probe traversing a dark matter hair with a dipole\namplifier is sensitive enough to detect ALP density variations if\n$|G_{a\gamma\gamma}|\sqrt{n_h/(2m_a)}>4.9\cdot10^{-27}$ where $n_h$ is the ALP\nnumber density in the hair.", "category": "astro-ph_IM" }, { "text": "Generating Electron Beam Lithography Write Parameters from the FORTIS\n Holographic Grating Solution: The Far-UV Off Rowland-circle Telescope for Imaging and Spectroscopy (FORTIS)\nhas been successful in maturing technologies for carrying out multi-object\nspectroscopy in the far-UV, including the successful implementation of the\nNext Generation of Microshutter Arrays; large-area microchannel plate\ndetectors; and an aspheric \"dual-order\" holographically ruled diffraction\ngrating with curved, variably-spaced grooves with a laminar (rectangular)\nprofile. These optical elements were used to construct an efficient and\nminimalist \"two-bounce\" spectro-telescope in a Gregorian configuration.\nHowever, the susceptibility to Lyman alpha (Ly$\alpha$) scatter inherent to the\ndual order design has been found to be intractably problematic, motivating our\nmove to an \"Off-Axis\" design. OAxFORTIS will mitigate its susceptibility to\nLy$\alpha$ by enclosing the optical path, so the detector only receives light\nfrom the grating. The new design reduces the collecting area by a factor of 2,\nbut the overall effective area can be regained and improved through the use of\nnew high efficiency reflective coatings, and with the use of a blazed\ndiffraction grating. 
This latter key technology has been enabled by recent\nadvancements in creating very high efficiency blazed gratings with impressive\nsmoothness using electron beam lithography and chemical etching to create\ngrooves in crystalline silicon. Here we discuss the derivation for the\nOAxFORTIS grating solution as well as methods used to transform the FORTIS\nholographic grating recording parameters (following the formalism of Noda et\nal. 1974a,b) into curved and variably-spaced rulings required to drive the\nelectron beam lithography write-head in three dimensions. We will also discuss\nthe process for selecting silicon wafers with the proper orientation of the\ncrystalline planes and give an update on our fabrication preparations.", "category": "astro-ph_IM" }, { "text": "A detection system to measure muon-induced neutrons for direct Dark\n Matter searches: Muon-induced neutrons constitute a prominent background component in a number\nof low count rate experiments, namely direct searches for Dark Matter. In this\nwork we describe a neutron detector to measure this background in an\nunderground laboratory, the Laboratoire Souterrain de Modane. The system is\nbased on 1 m$^3$ of Gd-loaded scintillator and it is linked with the muon veto of\nthe EDELWEISS-II experiment for coincident muon detection. The system was\ninstalled in autumn 2008 and has since passed a number of commissioning tests,\nproving its full functionality. The data-taking is continuously ongoing and a\ncount rate of the order of 1 muon-induced neutron per day has been achieved.", "category": "astro-ph_IM" }, { "text": "Automatic quantitative morphological analysis of interacting galaxies: The large number of galaxies imaged by digital sky surveys reinforces the\nneed for computational methods for analyzing galaxy morphology. While the\nmorphology of most galaxies can be associated with a stage on the Hubble\nsequence, the morphology of galaxy mergers is far more complex due to the\ncombination of two or more galaxies with different morphologies and the\ninteraction between them. Here we propose a computational method based on\nunsupervised machine learning that can quantitatively analyze morphologies of\ngalaxy mergers and associate galaxies by their morphology. The method works by\nfirst generating multiple synthetic galaxy models for each galaxy merger, and\nthen extracting a large set of numerical image content descriptors for each\ngalaxy model. These numbers are weighted using Fisher discriminant scores, and\nthen the similarities between the galaxy mergers are deduced using a variation\nof Weighted Nearest Neighbor analysis such that the Fisher scores are used as\nweights. The similarities between the galaxy mergers are visualized using\nphylogenies to provide a graph that reflects the morphological similarities\nbetween the different galaxy mergers, and thus quantitatively profile the\nmorphology of galaxy mergers.", "category": "astro-ph_IM" }, { "text": "Refine Neutrino Events Reconstruction with BEiT-3: Neutrino Events Reconstruction has always been crucial for IceCube Neutrino\nObservatory. In the Kaggle competition \"IceCube -- Neutrinos in Deep Ice\", many\nsolutions use Transformer. We present ISeeCube, a pure Transformer model based\non TorchScale (the backbone of BEiT-3). With a roughly equal number of\ntotal trainable parameters, our model outperforms the 2nd place solution. By\nusing TorchScale, the lines of code drop sharply by about 80% and a lot of new\nmethods can be tested by simply adjusting configs. 
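As an illustration of the Fisher-weighted nearest-neighbour step in the merger-morphology entry above, here is a minimal numpy sketch; the feature matrix, the two-class toy labels, and this particular two-class Fisher-score formula are assumptions made for demonstration, not the paper's pipeline.

```python
# Minimal sketch: weight image descriptors by Fisher scores, then use the
# weights in a nearest-neighbour distance.
import numpy as np

def fisher_scores(X, y):
    """One score per feature: between-class separation over within-class scatter."""
    c0, c1 = X[y == 0], X[y == 1]
    num = (c0.mean(axis=0) - c1.mean(axis=0)) ** 2
    den = c0.var(axis=0) + c1.var(axis=0) + 1e-12
    return num / den

def weighted_nn(X, weights, query):
    """Index of the sample most similar to `query` under weighted distance."""
    d2 = ((X - query) ** 2 * weights).sum(axis=1)
    return int(np.argmin(d2))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))       # 100 galaxy models, 16 image descriptors
y = rng.integers(0, 2, size=100)     # two morphology groups (toy labels)
w = fisher_scores(X, y)
print(weighted_nn(X, w, X[3]))       # -> 3: a sample is closest to itself
```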
We compared two fundamental\nmodels for predictions on a continuous space, regression and classification,\ntrained with MSE Loss and CE Loss, respectively. We also propose a new metric,\noverlap ratio, to evaluate the performance of the model. Since the model is\nsimple enough, it has the potential to be used for more purposes such as energy\nreconstruction, and many new methods such as combining it with GraphNeT can be\ntested more easily. The code and pretrained models are available at\nhttps://github.com/ChenLi2049/ISeeCube", "category": "astro-ph_IM" }, { "text": "Understanding our Galaxy - key contributions from the Parkes telescope: Young massive stars, with their spectacular masers and HII regions, dominate\nour Galaxy, and are a cornerstone for understanding Galactic structure. I will\nhighlight the role of Parkes in contributing to these studies - past, present\nand future.", "category": "astro-ph_IM" }, { "text": "Summary of the 14th IACHEC Meeting: We summarize the 14th meeting of the International Astronomical Consortium\nfor High Energy Calibration (IACHEC) held at \textit{Shonan Village} (Kanagawa,\nJapan) in May 2019. Sixty scientists directly involved in the calibration of\noperational and future high-energy missions gathered during 3.5 days to discuss\nthe status of the cross-calibration between the current international\ncomplement of X-ray observatories, and the possibilities to improve it. This\nsummary consists of reports from the various WGs with topics ranging from the\nidentification and characterization of standard calibration sources,\nmulti-observatory cross-calibration campaigns, appropriate and new statistical\ntechniques, calibration of instruments and characterization of background,\ncommunication and preservation of knowledge, and results for the benefit of the\nastronomical community.", "category": "astro-ph_IM" }, { "text": "On-sky validation of image-based adaptive optics wavefront sensor\n referencing: Differentiating between an exoplanet signal and residual speckle noise is a\nkey challenge in high-contrast imaging. Speckles are due to a combination of\nfast, slow and static wavefront aberrations introduced by atmospheric\nturbulence and instrument optics. While wavefront control techniques developed\nover the last decade have shown promise in minimizing fast atmospheric\nresiduals, slow and static aberrations such as non-common path aberrations\n(NCPAs) remain a key limiting factor for exoplanet detection. NCPAs are not seen\nby the wavefront sensor (WFS) of the adaptive optics (AO) loop, hence the\ndifficulty in correcting them. We propose to improve the identification and\nrejection of those aberrations. The DrWHO algorithm performs frequent\ncompensation of static and quasi-static aberrations to boost image contrast. By\nchanging the WFS reference at every iteration of the algorithm, DrWHO changes\nthe AO point of convergence to lead it towards a compensation of the static and\nslow aberrations. References are calculated using an iterative lucky-imaging\napproach, where each iteration updates the WFS reference, ultimately favoring\nhigh-quality focal plane images. We validate this concept through numerical\nsimulations and on-sky testing on the SCExAO instrument at the 8.2-m Subaru\ntelescope. Simulations show a rapid convergence towards the correction of 82%\nof the NCPAs. On-sky tests are performed over a 10-minute run in the visible\n(750 nm). 
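For the regression-versus-classification comparison in the ISeeCube entry above, the following toy numpy sketch shows the two loss framings on a continuous target; the bin count, array shapes, and the untrained stand-in predictors are arbitrary assumptions, not the competition model.

```python
# Toy illustration: predict a continuous angle either by regression (MSE)
# or by classifying the angle into discrete bins (cross entropy).
import numpy as np

def mse_loss(pred, target):
    return np.mean((pred - target) ** 2)

def ce_loss(logits, target_bin):
    """Cross entropy of softmax(logits) against integer bin labels."""
    z = logits - logits.max(axis=1, keepdims=True)      # numerical stability
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_softmax[np.arange(len(target_bin)), target_bin].mean()

n_bins = 64
angles = np.random.uniform(0, 2 * np.pi, size=8)        # true angles
bins = (angles / (2 * np.pi) * n_bins).astype(int)      # discretised targets

pred_angles = angles + 0.1                              # a slightly-off regressor
logits = np.random.normal(size=(8, n_bins))             # an untrained classifier
print(mse_loss(pred_angles, angles), ce_loss(logits, bins))
```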
We introduce a flux concentration (FC) metric to quantify the point\nspread function (PSF) quality and measure a 15.7% improvement. The DrWHO\nalgorithm is a robust focal-plane wavefront sensing calibration method that has\nbeen successfully demonstrated on sky. It does not rely on a model, nor does it require\nwavefront sensor calibration or linearity. It is compatible with different\nwavefront control methods, and can be further optimized for speed and\nefficiency.", "category": "astro-ph_IM" }, { "text": "DESCQA: An Automated Validation Framework for Synthetic Sky Catalogs: The use of high-quality simulated sky catalogs is essential for the success\nof cosmological surveys. The catalogs have diverse applications, such as\ninvestigating signatures of fundamental physics in cosmological observables,\nunderstanding the effect of systematic uncertainties on measured signals and\ntesting mitigation strategies for reducing these uncertainties, aiding analysis\npipeline development and testing, and survey strategy optimization. The list of\napplications is growing with improvements in the quality of the catalogs and\nthe details that they can provide. Given the importance of simulated catalogs,\nit is critical to provide rigorous validation protocols that enable both\ncatalog providers and users to assess the quality of the catalogs in a\nstraightforward and comprehensive way. For this purpose, we have developed the\nDESCQA framework for the Large Synoptic Survey Telescope Dark Energy Science\nCollaboration as well as for the broader community. The goal of DESCQA is to\nenable the inspection, validation, and comparison of an inhomogeneous set of\nsynthetic catalogs via the provision of a common interface within an automated\nframework. In this paper, we present the design concept and first\nimplementation of DESCQA. In order to establish and demonstrate its full\nfunctionality we use a set of interim catalogs and validation tests. We\nhighlight several important aspects, both technical and scientific, that\nrequire thoughtful consideration when designing a validation framework,\nincluding validation metrics and how these metrics impose requirements on the\nsynthetic sky catalogs.", "category": "astro-ph_IM" }, { "text": "Measurement of low-energy background events due to $^{222}$Rn\n contamination on the surface of a NaI(Tl) crystal: It has been known that decays of daughter elements of $^{222}$Rn on the\nsurface of a detector cause significant background at energies below 10 keV. In\nparticular, $^{210}$Pb and $^{210}$Po decays on the crystal surface result in\nsignificant background for dark matter search experiments with NaI(Tl)\ncrystals. In this report, measurements of $^{210}$Pb and $^{210}$Po decays on\nsurfaces are obtained by using a $^{222}$Rn contaminated crystal. Alpha decay\nevents of $^{210}$Po on the surface are measured by coincidence requirements of\ntwo attached crystals. Due to recoiling of $^{206}$Pb, rapid nuclear recoil\nevents are observed. A mean time characterization demonstrates that $^{206}$Pb\nrecoil events can be statistically separated from those of sodium or iodine\nnuclear recoil events, as well as electron recoil events.", "category": "astro-ph_IM" }, { "text": "Daytime Seeing and Solar Limb Positions: A method to measure the seeing from video made during drift-scan solar\ntransits is proposed. The limb of the Sun is projected onto a regular, evenly\nspaced grid. 
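A schematic of the mean-time statistic used in the NaI(Tl) entry above to separate $^{206}$Pb recoils from other event classes; the decay constants and waveform shapes below are invented for illustration, not the measured scintillation parameters.

```python
# Sketch: amplitude-weighted mean arrival time of a scintillation pulse.
# Faster-decaying (nuclear-recoil-like) pulses give smaller mean times on
# average, so a cut on this one number separates populations statistically.
import numpy as np

def mean_time(times_ns, amplitudes):
    a = np.asarray(amplitudes, dtype=float)
    t = np.asarray(times_ns, dtype=float)
    return (t * a).sum() / a.sum()

t = np.arange(0, 600, 2.0)              # ns
pulse_er = np.exp(-t / 230.0)           # electron-recoil-like decay (toy)
pulse_nr = np.exp(-t / 180.0)           # faster, recoil-like decay (toy)
print(mean_time(t, pulse_er), mean_time(t, pulse_nr))
```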
The temporal dispersion of the time intervals among the contacts\nbetween the solar limb and the grid's rows is proportional to the atmospheric seeing.\nSeeing effects on the position of the inflexion point of the limb's luminosity\nprofile are calculated numerically with the Fast Fourier Transform. Observational\nexamples from Locarno and Paris Observatories are presented to show the\nasymmetric contributions of the seeing at the beginning and the end of each\ndrift-scan transit.", "category": "astro-ph_IM" }, { "text": "Gravitational Microlensing Events as a Target for SETI project: Detection of signals from a possible extrasolar technological civilization is\none of the most challenging efforts of science. In this work, we propose using\nnatural telescopes made of single or binary gravitational lensing systems to\nmagnify leakage of electromagnetic signals from a remote planet that harbours an\nExtra-Terrestrial Intelligent (ETI) technology. The gravitational microlensing\nsurveys are monitoring a large area of the Galactic bulge in search of\nmicrolensing events, and they find more than $2000$ events per year. These\nlenses are capable of playing the role of natural telescopes and on some\noccasions they can magnify radio band signals from the planets orbiting around\nthe source stars in gravitational microlensing systems. Assuming that the frequency\nof electromagnetic waves used for telecommunication in ETIs is similar to ours,\nwe propose follow-up observations of microlensing events with radio telescopes\nsuch as the Square Kilometre Array (SKA), Low Frequency Demonstrators (LFD) and\nMileura Wide-Field Array (MWA). Amplifying signals from the leakage of\nbroadcasting by Earth-like civilizations will allow us to detect them out to the\ncenter of the Milky Way galaxy. Our analysis shows that in binary microlensing\nsystems, the probability of amplification of signals from ETIs is higher than\nthat in single microlensing events. Finally, we propose a target-of-opportunity\nmode for follow-up observations of binary microlensing events with SKA as a new\nobservational program for searching for ETIs. Using optimistic values for the\nfactors of the Drake equation yields the detection of about one event per year.", "category": "astro-ph_IM" }, { "text": "Probing Radio Intensity at high-Z from Marion: 2017 Instrument: We introduce Probing Radio Intensity at high-Z from Marion (PRIZM), a new\nexperiment designed to measure the globally averaged sky brightness, including\nthe expected redshifted 21 cm neutral hydrogen absorption feature arising from\nthe formation of the first stars. PRIZM consists of two dual-polarization\nantennas operating at central frequencies of 70 and 100 MHz, and the experiment\nis located on Marion Island in the sub-Antarctic. We describe the initial\ndesign and configuration of the PRIZM instrument that was installed in 2017,\nand we present preliminary data that demonstrate that Marion Island offers an\nexceptionally clean observing environment, with essentially no visible\ncontamination within the FM band.", "category": "astro-ph_IM" }, { "text": "Improved Measurement of the Spectral Index of the Diffuse Radio\n Background Between 90 and 190 MHz: We report absolutely calibrated measurements of diffuse radio emission\nbetween 90 and 190 MHz from the Experiment to Detect the Global EoR Signature\n(EDGES). EDGES employs a wide beam zenith-pointing dipole antenna centred on a\ndeclination of -26.7$^\circ$. 
We measure the sky brightness temperature as a\nfunction of frequency averaged over the EDGES beam from 211 nights of data\nacquired from July 2015 to March 2016. We derive the spectral index, $\\beta$,\nas a function of local sidereal time (LST) and find -2.60 > $\\beta$ > -2.62\n$\\pm$0.02 between 0 and 12 h LST. When the Galactic Centre is in the sky, the\nspectral index flattens, reaching $\\beta$ = -2.50 $\\pm$0.02 at 17.7 h. The\nEDGES instrument is shown to be very stable throughout the observations with\nnight-to-night reproducibility of $\\sigma_{\\beta}$ < 0.003. Including\nsystematic uncertainty, the overall uncertainty of $\\beta$ is 0.02 across all\nLST bins. These results improve on the earlier findings of Rogers & Bowman\n(2008) by reducing the spectral index uncertainty from 0.10 to 0.02 while\nconsidering more extensive sources of errors. We compare our measurements with\nspectral index simulations derived from the Global Sky Model (GSM) of de\nOliveira-Costa et al. (2008) and with fits between the Guzm\\'an et al. (2011)\n45 MHz and Haslam et al. (1982) 408 MHz maps. We find good agreement at the\ntransit of the Galactic Centre. Away from transit, the GSM tends to\nover-predict (GSM less negative) by 0.05 < $\\Delta_{\\beta} =\n\\beta_{\\text{GSM}}-\\beta_{\\text{EDGES}}$ < 0.12, while the 45-408 MHz fits tend\nto over-predict by $\\Delta_{\\beta}$ < 0.05.", "category": "astro-ph_IM" }, { "text": "Auto-RSM: an automated parameter-selection algorithm for the RSM map\n exoplanet detection algorithm: Most of the high-contrast imaging (HCI) data-processing techniques used over\nthe last 15 years have relied on the angular differential imaging (ADI)\nobserving strategy, along with subtraction of a reference point spread function\n(PSF) to generate exoplanet detection maps. Recently, a new algorithm called\nregime switching model (RSM) map has been proposed to take advantage of these\nnumerous PSF-subtraction techniques; RSM uses several of these techniques to\ngenerate a single probability map. Selection of the optimal parameters for\nthese PSF-subtraction techniques as well as for the RSM map is not\nstraightforward, is time consuming, and can be biased by assumptions made as to\nthe underlying data set. We propose a novel optimisation procedure that can be\napplied to each of the PSF-subtraction techniques alone, or to the entire RSM\nframework. The optimisation procedure consists of three main steps: (i)\ndefinition of the optimal set of parameters for the PSF-subtraction techniques\nusing the contrast as performance metric, (ii) optimisation of the RSM\nalgorithm, and (iii) selection of the optimal set of PSF-subtraction techniques\nand ADI sequences used to generate the final RSM probability map. The\noptimisation procedure is applied to the data sets of the exoplanet imaging\ndata challenge (EIDC), which provides tools to compare the performance of HCI\ndata-processing techniques. The data sets consist of ADI sequences obtained\nwith three state-of-the-art HCI instruments: SPHERE, NIRC2, and LMIRCam. 
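A minimal sketch of the spectral-index estimate underlying the EDGES result above, assuming a pure power law $T = T_0(\nu/\nu_0)^\beta$ and synthetic data; the normalisation, noise level, and frequency grid below are illustrative.

```python
# Sketch: beta is the slope of log T_sky versus log frequency.
import numpy as np

def spectral_index(freq_mhz, t_sky_k):
    """Least-squares beta from a log-log straight-line fit."""
    slope, _ = np.polyfit(np.log(freq_mhz), np.log(t_sky_k), 1)
    return slope

nu = np.linspace(90.0, 190.0, 50)                 # MHz
t_sky = 300.0 * (nu / 150.0) ** -2.55             # synthetic sky, beta = -2.55
t_sky *= 1.0 + 0.001 * np.random.randn(nu.size)   # mild noise
print(spectral_index(nu, t_sky))                  # -> close to -2.55
```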
The\nresults of our analysis demonstrate the value of the proposed optimisation\nprocedure, with better performance metrics compared to the earlier version of\nRSM, as well as to other HCI data-processing techniques.", "category": "astro-ph_IM" }, { "text": "Safeguarding Old and New Journal Tables for the VO: Status for\n Extragalactic and Radio Data: Independent of established data centers, and partly for my own research,\nsince 1989 I have been collecting the tabular data from over 2600 articles\nconcerned with radio sources and extragalactic objects in general. Optical\ncharacter recognition (OCR) was used to recover tables from 740 papers. Tables\nfrom only 41 percent of the 2600 articles are available in the CDS or CATS\ncatalog collections, and only slightly better coverage is estimated for the NED\ndatabase. This fraction is not better for articles published electronically\nsince 2001. Both object databases (NED, SIMBAD, LEDA) as well as catalog\nbrowsers (VizieR, CATS) need to be consulted to obtain the most complete\ninformation on astronomical objects. More human resources at the data centers\nand better collaboration between authors, referees, editors, publishers, and\ndata centers are required to improve data coverage and accessibility. The\ncurrent efforts within the Virtual Observatory (VO) project, to provide\nretrieval and analysis tools for different types of published and archival data\nstored at various sites, should be balanced by an equal effort to recover and\ninclude large amounts of published data not currently available in this way.", "category": "astro-ph_IM" }, { "text": "Wavelet transforms of microlensing data: Denoising, extracting intrinsic\n pulsations, and planetary signals: Wavelets are waveform functions that describe transient and unstable\nvariations, such as noise. In this work, we study the advantages of discrete\nand continuous wavelet transforms (DWT and CWT) of microlensing data to denoise\nthem and extract their planetary signals and intrinsic pulsations hidden by\nnoise. We first generate synthetic microlensing data and apply wavelet\ndenoising to them. For these simulated microlensing data with ideal Gaussian\nnoise based on the OGLE photometric accuracy, denoising with DWT reduces\nstandard deviations of data from real models by $0.044$-$0.048$ mag. The\nefficiency to regenerate real models and planetary signals with denoised data\nstrongly depends on the observing cadence and decreases from $37\%$ to $0.01\%$\nas the cadence worsens from $15$ min to $6$ hrs. We then apply denoising on $100$\nmicrolensing events discovered by the OGLE group. On average, wavelet denoising\nfor these data improves standard deviations and $\chi^{2}_{\rm n}$ of data with\nrespect to the best-fitted models by $0.023$ mag and $1.16$, respectively. The\nbest-performing wavelets (based on either the highest signal-to-noise ratio's\npeak ($\rm{SNR}_{\rm{max}}$), or the highest Pearson's correlation, or the\nlowest Root Mean Squared Error (RMSE) for denoised data) are from the 'Symlet'\nand 'Biorthogonal' wavelet families for simulated and OGLE data, respectively. In\nsome denoised data, intrinsic stellar pulsations or small planetary-like\ndeviations appear that were hidden by noise in the raw data. However, DWT\ndenoising reconstructs flattened and wide planetary signals better than sharp\nones. 
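A bare-bones sketch of DWT denoising in the spirit of the microlensing study above, using PyWavelets; the 'sym8' wavelet, the decomposition level, and the universal threshold are illustrative choices rather than the paper's tuned setup, and the light curve is synthetic.

```python
# Sketch: decompose, soft-threshold the detail coefficients, reconstruct.
import numpy as np
import pywt

def dwt_denoise(mag, wavelet="sym8", level=4):
    coeffs = pywt.wavedec(mag, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise from finest scale
    thr = sigma * np.sqrt(2 * np.log(len(mag)))         # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(mag)]

t = np.linspace(-20, 20, 1024)                          # days
model = 1.0 / np.sqrt((t / 10.0) ** 2 + 0.1)            # toy magnification bump
noisy = model + 0.05 * np.random.randn(t.size)
print(np.std(noisy - model), np.std(dwt_denoise(noisy) - model))
```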
CWT and 3D frequency-power-time maps can\nindicate the existence of sharp signals.", "category": "astro-ph_IM" }, { "text": "The DESI Experiment Part II: Instrument Design: DESI (Dark Energy Spectroscopic Instrument) is a Stage IV ground-based dark\nenergy experiment that will study baryon acoustic oscillations and the growth\nof structure through redshift-space distortions with a wide-area galaxy and\nquasar redshift survey. The DESI instrument is a robotically-actuated,\nfiber-fed spectrograph capable of taking up to 5,000 simultaneous spectra over\na wavelength range from 360 nm to 980 nm. The fibers feed ten three-arm\nspectrographs with resolution $R= \lambda/\Delta\lambda$ between 2000 and 5500,\ndepending on wavelength. The DESI instrument will be used to conduct a\nfive-year survey designed to cover 14,000 deg$^2$. This powerful instrument\nwill be installed at prime focus on the 4-m Mayall telescope in Kitt Peak,\nArizona, along with a new optical corrector, which will provide a three-degree\ndiameter field of view. The DESI collaboration will also deliver a\nspectroscopic pipeline and data management system to reduce and archive all\ndata for eventual public use.", "category": "astro-ph_IM" }, { "text": "$\texttt{GWFAST}$: a Fisher information matrix Python code for\n third-generation gravitational-wave detectors: We introduce $\texttt{GWFAST}$, a Fisher information matrix $\texttt{Python}$\ncode that allows easy and efficient estimation of signal-to-noise ratios and\nparameter measurement errors for large catalogs of resolved sources observed by\nnetworks of gravitational-wave detectors. In particular, $\texttt{GWFAST}$\nincludes the effects of the Earth's motion during the evolution of the signal,\nsupports parallel computation, and relies on automatic differentiation rather\nthan on finite-difference techniques, which allows the computation of\nderivatives with accuracy close to machine precision. We also release the\nlibrary $\texttt{WF4Py}$ implementing state-of-the-art gravitational-wave\nwaveforms in $\texttt{Python}$. In this paper we provide documentation of\n$\texttt{GWFAST}$ and $\texttt{WF4Py}$ with practical examples and tests of\nperformance and reliability. In a companion paper we present forecasts for the\ndetection capabilities of the second and third generation of ground-based\ngravitational-wave detectors, obtained with $\texttt{GWFAST}$.", "category": "astro-ph_IM" }, { "text": "A fast new catadioptric design for fiber-fed spectrographs: The next generation of massively multiplexed multi-object spectrographs\n(DESpec, SUMIRE, BigBOSS, 4MOST, HECTOR) demand fast, efficient and affordable\nspectrographs, with higher resolutions (R = 3000-5000) than current designs.\nBeam-size is a (relatively) free parameter in the design, but the properties of\nVPH gratings are such that, for fixed resolution and wavelength coverage, the\neffect of beam-size on overall VPH efficiency is very small. For\nall-transmissive cameras, this suggests modest beam-sizes (say 80-150mm) to\nminimize costs; while for catadioptric (Schmidt-type) cameras, much larger\nbeam-sizes (say 250mm+) are preferred to improve image quality and to minimize\nobstruction losses. Schmidt designs have benefits in terms of image quality,\ncamera speed and scattered light performance, and recent advances such as MRF\ntechnology mean that the required aspherics are no longer a prohibitive cost or\nrisk. 
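To make the Fisher-matrix forecasting idea in the GWFAST entry above concrete, here is a generic toy using finite differences; GWFAST itself uses automatic differentiation and real waveform and noise models, so everything below (the signal model, flat noise, parameter values) is an assumption for illustration.

```python
# Sketch: F_ij = sum_t (dh/dtheta_i)(dh/dtheta_j) / sigma^2, then invert
# for the Cramer-Rao forecast of parameter errors.
import numpy as np

def model(theta, t):
    amp, freq = theta
    return amp * np.sin(2 * np.pi * freq * t)

def fisher(theta, t, sigma=1.0, eps=1e-6):
    theta = np.asarray(theta, dtype=float)
    grads = []
    for i in range(theta.size):
        d = np.zeros_like(theta); d[i] = eps
        grads.append((model(theta + d, t) - model(theta - d, t)) / (2 * eps))
    g = np.array(grads)
    return g @ g.T / sigma**2

t = np.linspace(0.0, 10.0, 2000)
cov = np.linalg.inv(fisher([1.0, 2.0], t))
print(np.sqrt(np.diag(cov)))            # 1-sigma forecast errors
```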
A new Schmidt/Maksutov-derived design is presented, which differs from\nprevious designs in having the detector package outside the camera, and\nadjacent to the spectrograph pupil. The telescope pupil already contains a hole\nat its center, because of the obstruction from the telescope top-end. With a\n250mm beam, it is possible to largely hide a 6 cm $\times$ 6 cm detector package\nand its dewar within this hole. This means that the design achieves a very high\nefficiency, competitive with transmissive designs. The optics are excellent, at\nleast as good as classic Schmidt designs, allowing F/1.25 or even faster\ncameras. The principal hardware has been costed at $300K per arm, making the\ndesign affordable.", "category": "astro-ph_IM" }, { "text": "Three-dimensional extinction mapping using Gaussian random fields: We present a scheme for using stellar catalogues to map the three-dimensional\ndistributions of extinction and dust within our Galaxy. Extinction is modelled\nas a Gaussian random field, whose covariance function is set by a simple\nphysical model of the ISM that assumes a Kolmogorov-like power spectrum of\nturbulent fluctuations. As extinction is modelled as a random field, the\nspatial resolution of the resulting maps is set naturally by the data\navailable; there is no need to impose any spatial binning. We verify the\nvalidity of our scheme by testing it on simulated extinction fields and show\nthat its precision is significantly improved over previous dust-mapping\nefforts. The approach we describe here can make use of any photometric,\nspectroscopic or astrometric data; it is not limited to any particular survey.\nConsequently, it can be applied to a wide range of data from both existing and\nfuture surveys.", "category": "astro-ph_IM" }, { "text": "INTEGRAL/IBIS nine-year Galactic Hard X-Ray Survey: Context. The INTEGRAL observatory operating in the hard X-ray/gamma-ray domain has\ngathered a large observational data set over nine years starting in 2003. Most\nof the observing time was dedicated to the Galactic source population study,\nmaking possible the deepest Galactic survey in hard X-rays ever compiled. Aims.\nWe aim to perform a Galactic survey that can be used as the basis of Galactic\nsource population studies, and perform mapping of the Milky Way in hard X-rays\nover the maximum exposure available at |b|<17.5 deg. Methods. We used sky\nreconstruction algorithms especially developed for the high quality imaging of\nINTEGRAL/IBIS data. Results. We present sky images, sensitivity maps, and\ncatalogs of detected sources in the three energy bands 17-60, 17-35, and 35-80\nkeV in the Galactic plane at |b|<17.5 deg. The total number of sources in the\nreference 17-60 keV band includes 402 objects exceeding a 4.7 sigma detection\nthreshold on the nine-year time-averaged map. Among the identified sources with\nknown and tentatively identified natures, 253 are Galactic objects (108\nlow-mass X-ray binaries, 82 high-mass X-ray binaries, 36 cataclysmic variables,\nand 27 are of other types), and 115 are extragalactic objects, including 112\nactive galactic nuclei (AGNs) and 3 galaxy clusters. The sample of Galactic\nsources with S/N>4.7 sigma has an identification completeness of ~92%, which is\nvaluable for population studies. 
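A toy 1D analogue of the Gaussian-random-field extinction prior in the entry above; the paper works in 3D with a Kolmogorov-motivated covariance function, while the spectral exponent, normalisation, and the crude line-of-sight integral here are purely illustrative.

```python
# Sketch: draw a random field with a power-law spectrum via FFT, then
# integrate a non-negative density proxy along a sightline.
import numpy as np

def grf_1d(n, dx, slope=-11.0 / 3.0, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=dx)
    power = np.zeros_like(k)
    power[1:] = k[1:] ** slope                        # power-law spectrum, no k=0
    phases = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
    field = np.fft.irfft(np.sqrt(power) * phases, n=n)
    return field / field.std()                        # unit-variance fluctuations

rho = grf_1d(4096, 1.0)                               # dust-density fluctuations
extinction = np.cumsum(np.clip(rho + 3.0, 0, None))   # toy line-of-sight integral
print(extinction[-1])
```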
Since the survey is based on the nine-year sky\nmaps, it is optimized for persistent sources and may be biased against finding\ntransients.", "category": "astro-ph_IM" }, { "text": "gSeaGen: a GENIE-based code for neutrino telescopes: The gSeaGen code is a GENIE-based application to generate neutrino-induced\nevents in an underwater neutrino detector. The gSeaGen code is able to generate\nevents induced by all neutrino flavours, taking into account topological\ndifferences between track-type and shower-like events. The neutrino interaction\nis simulated taking into account the density and the composition of the media\nsurrounding the detector. The main features of gSeaGen will be presented\ntogether with some examples of its application within ANTARES and KM3NeT.", "category": "astro-ph_IM" }, { "text": "Spectro-photometric distances to stars: a general-purpose Bayesian\n approach: We developed a code that estimates distances to stars using measured\nspectroscopic and photometric quantities. We employ a Bayesian approach to\nbuild the probability distribution function over stellar evolutionary models\ngiven these data, delivering estimates of model parameters for each star\nindividually. The code was first tested on simulations, successfully recovering\ninput distances to mock stars with <1% bias. The method-intrinsic random\ndistance uncertainties for typical spectroscopic survey measurements amount to\naround 10% for dwarf stars and 20% for giants, and are most sensitive to the\nquality of $\log g$ measurements. The code was validated by comparing our\ndistance estimates to parallax measurements from the Hipparcos mission for\nnearby stars (< 300 pc), to asteroseismic distances of CoRoT red giant stars,\nand to known distances of well-studied open and globular clusters. The external\ncomparisons confirm that our distances are subject to very small systematic\nbiases with respect to the fundamental Hipparcos scale (+0.4% for dwarfs, and\n+1.6% for giants). The typical random distance scatter is 18% for dwarfs, and\n26% for giants. For the CoRoT-APOGEE sample, the typical random distance\nscatter is ~15%, both for the nearby and farther data. Our distances are\nsystematically larger than the CoRoT ones by about +9%, which can mostly be\nattributed to the different choice of priors. The comparison to known distances\nof star clusters from SEGUE and APOGEE has led to significant systematic\ndifferences for many cluster stars, but with opposite signs, and with\nsubstantial scatter. Finally, we tested our distances against those previously\ndetermined for a high-quality sample of giant stars from the RAVE survey, again\nfinding a small systematic trend of +5% and an rms scatter of 30%.", "category": "astro-ph_IM" }, { "text": "Smoothed Particle Radiation Hydrodynamics: Two-Moment method with Local\n Eddington Tensor Closure: We present a new radiative transfer method (SPH-M1RT) that is coupled\ndynamically with smoothed particle hydrodynamics (SPH). We implement it in the\n(task-based parallel) SWIFT galaxy simulation code but it can be\nstraightforwardly implemented in other SPH codes. Our moment-based method\nsimultaneously solves the radiation energy and flux equations in SPH, making it\nadaptive in space and time. We modify the M1 closure relation to stabilize\nradiation fronts in the optically thin limit. 
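A schematic of the Bayesian machinery in the spectro-photometric distance entry above, on an invented toy model grid; the flat prior, Gaussian errors, and made-up grid ranges are assumptions, not the released code's isochrones.

```python
# Sketch: weight each stellar model by exp(-chi^2/2) given observed
# Teff/logg/[Fe/H], then average the implied distance moduli.
import numpy as np

rng = np.random.default_rng(1)
# toy model grid: columns = Teff [K], logg, [Fe/H], absolute magnitude M
grid = np.column_stack([
    rng.uniform(4500, 6500, 5000),
    rng.uniform(1.0, 5.0, 5000),
    rng.uniform(-2.0, 0.5, 5000),
    rng.uniform(-1.0, 6.0, 5000),
])

obs = np.array([5777.0, 4.44, 0.0])      # observed spectroscopic parameters
err = np.array([100.0, 0.1, 0.1])        # their uncertainties
m_app = 10.0                             # observed apparent magnitude

chi2 = (((grid[:, :3] - obs) / err) ** 2).sum(axis=1)
w = np.exp(-0.5 * chi2)                  # flat prior over the toy grid
mu = m_app - grid[:, 3]                  # distance modulus per model
mu_mean = (w * mu).sum() / w.sum()
print(10 ** (mu_mean / 5.0 + 1.0), "pc") # posterior-mean distance
```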
We also introduce anisotropic\nartificial viscosity and high-order artificial diffusion schemes, which allow\nthe code to handle radiation transport accurately in both the optically thin\nand optically thick regimes. Non-equilibrium thermo-chemistry is solved using a\nsemi-implicit sub-cycling technique. The computational cost of our method is\nindependent of the number of sources and can be lowered further by using the\nreduced speed of light approximation. We demonstrate the robustness of our\nmethod by applying it to a set of standard tests from the cosmological\nradiative transfer comparison project of Iliev et al. The SPH-M1RT scheme is\nwell-suited for modelling situations in which numerous sources emit ionising\nradiation, such as cosmological simulations of galaxy formation or simulations\nof the interstellar medium.", "category": "astro-ph_IM" }, { "text": "Dealing with missing data in the MICROSCOPE space mission: An adaptation\n of inpainting to handle colored-noise data: The MICROSCOPE space mission, launched on April 25, 2016, aims to test the\nweak equivalence principle (WEP) with a 10^-15 precision. To reach this\nperformance requires an accurate and robust data analysis method, especially\nsince the possible WEP violation signal will be dominated by a strongly colored\nnoise. An important complication is brought by the fact that some values will\nbe missing; therefore, the measured time series will not be strictly regularly\nsampled. Those missing values induce a spectral leakage that significantly\nincreases the noise in Fourier space, where the WEP violation signal is looked\nfor, thereby complicating scientific returns. Recently, we developed an\ninpainting algorithm to correct the MICROSCOPE data for missing values. This\ncode has been integrated in the official MICROSCOPE data processing pipeline\nbecause it enables us to significantly measure an equivalence principle\nviolation (EPV) signal in a model-independent way, in the inertial satellite\nconfiguration. In this work, we present several improvements to the method that\nmay allow us now to reach the MICROSCOPE requirements for both inertial and\nspin satellite configurations. The main improvement has been obtained using a\nprior on the power spectrum of the colored-noise that can be directly derived\nfrom the incomplete data. We show that after reconstructing missing values with\nthis new algorithm, a least-squares fit may allow us to significantly measure\nan EPV signal with a 0.96x10^-15 precision in the inertial mode and 1.2x10^-15\nprecision in the spin mode. Although the inpainting method presented in this\npaper has been optimized for the MICROSCOPE data, it remains sufficiently\ngeneral to be used in the general context of missing data in time series\ndominated by an unknown colored noise. The improved inpainting software, called\nICON, is freely available at http://www.cosmostat.org/software/icon.", "category": "astro-ph_IM" }, { "text": "Data Release 2 of S-PLUS: accurate template-fitting based photometry\n covering $\sim$1000 square degrees in 12 optical filters: The Southern Photometric Local Universe Survey (S-PLUS) is an ongoing survey\nof $\sim$9300 deg$^2$ in the southern sky in a 12-band photometric system. This\npaper presents the second data release (DR2) of S-PLUS, consisting of 514 tiles\ncovering an area of 950 deg$^2$. The data has been fully calibrated using a new\nphotometric calibration technique suitable for the new generation of wide-field\nmulti-filter surveys. 
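The general sparsity-inpainting idea behind the ICON entry above can be sketched as iterative Fourier thresholding; this is a generic illustration only, not ICON's actual algorithm, which additionally exploits a colored-noise power-spectrum prior derived from the incomplete data.

```python
# Sketch: alternately enforce the observed samples and threshold the
# Fourier coefficients with a decreasing threshold.
import numpy as np

def inpaint(data, mask, n_iter=100):
    """mask == True where samples exist; gaps are reconstructed."""
    x = np.where(mask, data, 0.0)
    for i in range(n_iter):
        coef = np.fft.rfft(x)
        thr = np.abs(coef).max() * (1.0 - (i + 1) / n_iter)
        coef[np.abs(coef) < thr] = 0.0          # keep only strong modes
        x = np.fft.irfft(coef, n=len(data))
        x[mask] = data[mask]                    # re-impose real samples
    return x

t = np.arange(2048, dtype=float)
signal = np.sin(2 * np.pi * t / 128.0) + 0.3 * np.sin(2 * np.pi * t / 37.0)
mask = np.random.rand(t.size) > 0.1             # ~10% missing values
print(np.std(inpaint(signal, mask)[~mask] - signal[~mask]))
```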
This technique consists of a $\chi^2$ minimisation to fit\nsynthetic stellar templates to already calibrated data from other surveys,\neliminating the need for standard stars and reducing the survey duration by\n$\sim$15\%. We compare the template-predicted and S-PLUS instrumental\nmagnitudes to derive the photometric zero-points (ZPs). We show that these ZPs\ncan be further refined by fitting the stellar templates to the 12 S-PLUS\nmagnitudes, which better constrain the models by adding the narrow-band\ninformation. We use the STRIPE82 region to estimate ZP errors, which are\n$\lesssim10$ mmags for filters J0410, J0430, $g$, J0515, $r$, J0660, $i$, J0861\nand $z$; $\lesssim 15$ mmags for filter J0378; and $\lesssim 25$ mmags for\nfilters $u$ and J0395. We describe the complete data flow of the S-PLUS/DR2\nfrom observations to the final catalogues and present a brief characterisation\nof the data. We show that, for a minimum signal-to-noise threshold of 3, the\nphotometric depths of the DR2 range from 19.9 mag to 21.3 mag (measured in\nPetrosian apertures), depending on the filter. The S-PLUS DR2 can be accessed\nfrom the website: https://splus.cloud.", "category": "astro-ph_IM" }, { "text": "Fast and Automated Peak Bagging with DIAMONDS (FAMED): Stars of low and intermediate mass that exhibit oscillations may show tens of\ndetectable oscillation modes each. Oscillation modes are a powerful tool to\nconstrain the internal structure and rotational dynamics of the star, hence\nallowing one to obtain an accurate stellar age. The tens of thousands of\nsolar-like oscillators that have been discovered thus far are representative of\nthe large diversity of fundamental stellar properties and evolutionary stages\navailable. Because of the wide range of oscillation features that can be\nrecognized in such stars, it is particularly challenging to properly\ncharacterize the oscillation modes in detail, especially in light of large\nstellar samples. Overcoming this issue requires an automated approach, which\nhas to be fast, reliable, and flexible at the same time. In addition, this\napproach should not only be capable of extracting the oscillation mode\nproperties of frequency, linewidth, and amplitude from stars in different\nevolutionary stages, but also able to assign a correct mode identification for\neach of the modes extracted. Here we present the new freely available pipeline\nFAMED (Fast and AutoMated pEak bagging with DIAMONDS), which is capable of\nperforming an automated and detailed asteroseismic analysis in stars ranging\nfrom the main sequence up to the core-Helium-burning phase of stellar\nevolution. This, therefore, includes subgiant stars, stars evolving along the\nred giant branch (RGB), and stars likely evolving toward the early asymptotic\ngiant branch. In this paper, we additionally show how FAMED can detect rotation\nfrom dipolar oscillation modes in main sequence, subgiant, low-luminosity RGB,\nand core-Helium-burning stars. FAMED can be downloaded from its public GitHub\nrepository (https://github.com/EnricoCorsaro/FAMED).", "category": "astro-ph_IM" }, { "text": "Searching for changing-state AGNs in massive datasets -- I: applying\n deep learning and anomaly detection techniques to find AGNs with anomalous\n variability behaviours: The classic classification scheme for Active Galactic Nuclei (AGNs) was\nrecently challenged by the discovery of the so-called changing-state\n(changing-look) AGNs (CSAGNs). 
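A toy version of the zero-point step in the S-PLUS calibration above: once template-predicted magnitudes are in hand for the stars in a tile, the $\chi^2$-minimising ZP reduces to a simple offset. The synthetic magnitudes, noise level, and the use of a plain mean (a robust median is a common alternative) are assumptions.

```python
# Sketch: the offset minimising sum((m_template - (m_instrumental + zp))^2)
# is just the mean difference between the two magnitude sets.
import numpy as np

def zero_point(m_template, m_instrumental):
    return np.mean(m_template - m_instrumental)

rng = np.random.default_rng(2)
m_true = rng.uniform(14.0, 19.0, 300)                # template-predicted mags
m_inst = m_true - 25.3 + rng.normal(0, 0.02, 300)    # instrumental mags + noise
print(zero_point(m_true, m_inst))                    # -> close to 25.3
```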
The physical mechanism behind this phenomenon is\nstill a matter of open debate and the samples are too small and of a\nserendipitous nature to provide robust answers. In order to tackle this\nproblem, we need to design methods that are able to detect AGNs right in the act\nof changing state. Here we present an anomaly detection (AD) technique designed\nto identify AGN light curves with anomalous behaviors in massive datasets. The\nmain aim of this technique is to identify CSAGNs at different stages of the\ntransition, but it can also be used for more general purposes, such as cleaning\nmassive datasets for AGN variability analyses. We used light curves from the\nZwicky Transient Facility data release 5 (ZTF DR5), containing a sample of\n230,451 AGNs of different classes. The ZTF DR5 light curves were modeled with a\nVariational Recurrent Autoencoder (VRAE) architecture, which allowed us to\nobtain a set of attributes from the VRAE latent space that describe the\ngeneral behaviour of our sample. These attributes were then used as features\nfor an Isolation Forest (IF) algorithm, which is an anomaly detector for \"one\nclass\" problems. We used the VRAE reconstruction errors and the IF\nanomaly score to select a sample of 8,809 anomalies. These anomalies are\ndominated by bogus candidates, but we were able to identify 75 promising CSAGN\ncandidates.", "category": "astro-ph_IM" }, { "text": "Simulation of ionizing radiation in cell phone camera image sensors: The Distributed Electronic Cosmic-ray Observatory (DECO) is a cell phone app\nthat uses a cell phone camera image sensor to detect cosmic-ray particles and\nparticles from radioactive decay. Images recorded by DECO are classified by a\nconvolutional neural network (CNN) according to their morphology. In this\nproject, we develop a GEANT4-derived simulation of particle interactions inside\nthe CMOS sensor using the Allpix$^2$ modular framework. We simulate muons,\nelectrons, and photons with energies ranging from 10 keV to 100 GeV, and their deposited\nenergy agrees well with expectations. Simulated events are recorded and\nprocessed in a similar way to data images taken by DECO, and the results show\nboth image morphologies similar to those of data events and good quantitative data-Monte\nCarlo agreement.", "category": "astro-ph_IM" }, { "text": "Design and optimization of dihedral angle offsets for the next\n generation lunar retro-reflectors: Lunar laser ranging (LLR) to the Apollo retro-reflectors, which constitutes the\nlongest-lasting experiment in testing General Relativity theories, has\nremained operational over the past four decades. To date, with significant\nimprovement of ground observatory conditions, the bottleneck of LLR accuracy\nlies in the retro-reflectors. A new generation of large aperture\nretro-reflectors with intended dihedral angle offsets has been suggested and\nimplemented based on NASA's recent lunar projects to reduce the ranging\nuncertainty to less than 1.0 mm. The technique relies on the\nretro-reflector's ability to offset its relative angular velocity with regard\nto a ground LLR observatory (LLRO), so that the LLR accuracy can be ensured\nalong with the larger area of beam reflection. In deployment, solid corner-cube\nreflectors (CCRs) based on empirical successes of the Apollo 11 and 15 arrays\nhave been selected for the next generation lunar reflectors (NGLRs) due to\ntheir stability against heat and dust problems on the Moon. 
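A sketch of the anomaly-scoring stage in the changing-state AGN entry above, running scikit-learn's Isolation Forest on stand-in latent features; the feature count, the injected outliers, and the forest settings are assumptions, not the paper's configuration.

```python
# Sketch: score latent-space attributes with an Isolation Forest and pull
# out the most anomalous light-curve representations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
latent = rng.normal(size=(10000, 16))            # "normal" AGN representations
outliers = rng.normal(loc=4.0, size=(50, 16))    # a few anomalous behaviours
X = np.vstack([latent, outliers])

clf = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
clf.fit(X)
score = clf.score_samples(X)                     # lower = more anomalous
print(np.argsort(score)[:10])                    # indices of the top anomalies
```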
In this work, we\npresent the optical effects involved in designing the new retro-reflectors given various\nsets of intended dihedral angle offsets (DAOs), and support the design\nprinciples with the measurements of two manufactured NGLRs.", "category": "astro-ph_IM" }, { "text": "Atmospheric effects on extensive air showers observed with the Surface\n Detector of the Pierre Auger Observatory: Atmospheric parameters, such as pressure (P), temperature (T) and density,\naffect the development of extensive air showers initiated by energetic cosmic\nrays. We have studied the impact of atmospheric variations on extensive air\nshowers by means of the surface detector of the Pierre Auger Observatory. The\nrate of events shows a ~10% seasonal modulation and a ~2% diurnal one. We find\nthat the observed behaviour is explained by a model including the effects\nassociated with the variations of pressure and density. The former affects the\nlongitudinal development of air showers while the latter influences the Moliere\nradius and hence the lateral distribution of the shower particles. The model is\nvalidated with full simulations of extensive air showers using atmospheric\nprofiles measured at the site of the Pierre Auger Observatory.", "category": "astro-ph_IM" }, { "text": "Prototype Schwarzschild-Couder Telescope for the Cherenkov Telescope\n Array: Commissioning Status of the Optical System: The Cherenkov Telescope Array (CTA), with more than 100 telescopes, will be\nthe largest ever ground-based gamma-ray observatory and is expected to greatly\nimprove on both gamma-ray detection sensitivity and energy coverage compared to\ncurrent-generation detectors. The 9.7-m Schwarzschild-Couder telescope (SCT) is\none of the two candidates for the medium size telescope (MST) design for CTA.\nThe novel aplanatic dual-mirror SCT design offers a wide field-of-view with a\ncompact plate scale, allowing for a large number of camera pixels that improve\nthe angular resolution and reduce the night sky background noise per pixel\ncompared to the traditional single-mirror Davies-Cotton (DC) design of\nground-based gamma-ray telescopes. The production, installation, and alignment\nof the segmented aspherical mirrors are the main challenges for the\nrealization of the SCT optical system. In this contribution, we report on the\ncommissioning status, the alignment procedures, and initial alignment results\nduring the initial commissioning phase of the optical system of the prototype\nSCT.", "category": "astro-ph_IM" }, { "text": "PulsarX: a new pulsar searching package -I. A high performance folding\n program for pulsar surveys: Pulsar surveys with modern radio telescopes are becoming increasingly\ncomputationally demanding. This is particularly true for wide field-of-view\npulsar surveys with radio interferometers, and those conducted in real or\nquasi-real time. These demands result in data analysis bottlenecks that can\nlimit the parameter space covered by the surveys and diminish their scientific\nreturn. In this paper, we address the computational challenge of `candidate\nfolding' in pulsar searching, presenting a novel, efficient approach designed\nto optimise the simultaneous folding of large numbers of pulsar candidates. We\nprovide a complete folding pipeline appropriate for large-scale pulsar surveys\nincluding radio frequency interference (RFI) mitigation, dedispersion, folding\nand parameter optimization. By leveraging the Fast Discrete Dispersion Measure\nTransform (FDMT) algorithm proposed by Zackay et al. 
(2017), we have developed\nan optimized and cache-friendly implementation that we term the pruned FDMT\n(pFDMT). The pFDMT approach efficiently reuses intermediate processing results\nand prunes the unused computation paths, resulting in a significant reduction\nin arithmetic operations. In addition, we propose a novel folding algorithm\nbased on the Tikhonov-regularised least squares method (TLSM) that can improve\nthe time resolution of the pulsar profile. We present the performance of its\nreal-world application as an integral part of two major pulsar search projects\nconducted with the MeerKAT telescope: the MPIfR-MeerKAT Galactic Plane Survey\n(MMGPS) and the Transients and Pulsars with MeerKAT (TRAPUM) project. In our\nprocessing, for approximately 500 candidates, the theoretical number of\ndedispersion operations can be reduced by a factor of around 50 when compared\nto brute-force dedispersion, which scales with the number of candidates.", "category": "astro-ph_IM" }, { "text": "A new ray-tracing scheme for 3D diffuse radiation transfer on highly\n parallel architectures: We present a new numerical scheme to solve the transfer of diffuse radiation\non three-dimensional mesh grids which is efficient on processors with highly\nparallel architecture such as recently popular GPUs and CPUs with multi- and\nmany-core architectures. The scheme is based on the ray-tracing method and the\ncomputational cost is proportional to $N_{\rm m}^{5/3}$ where $N_{\rm m}$ is\nthe number of mesh grids, and is devised to compute the radiation transfer\nalong each light-ray completely in parallel with appropriate grouping of the\nlight-rays. We find that the performance of our scheme scales well with the\nnumber of adopted CPU cores and GPUs, and also that our scheme is nicely\nparallelized on a multi-node system by adopting the multiple wave front scheme,\nand the performance scales well with the amount of the computational resources.\nAs numerical tests to validate our scheme and to give a physical criterion for\nthe angular resolution of our ray-tracing scheme, we perform several numerical\nsimulations of the photo-ionization of neutral hydrogen gas by ionizing\nradiation sources without the \"on-the-spot\" approximation, in which the\ntransfer of diffuse radiation by radiative recombination is incorporated in a\nself-consistent manner.", "category": "astro-ph_IM" }, { "text": "HAWC response to atmospheric electricity activity: The HAWC Gamma Ray observatory consists of 300 water Cherenkov detectors\n(WCD) instrumented with four photomultiplier tubes (PMTs) per WCD. HAWC is\nlocated between two of the highest mountains in Mexico. The high altitude (4100\nm asl), the relatively short distance to the Gulf of Mexico (~100 km), the\nlarge detecting area (22 000 m$^2$) and its high sensitivity make HAWC a good\ninstrument to explore the acceleration of particles due to the electric fields\nexisting inside storm clouds. In particular, the scaler system of HAWC records\nthe output of each one of the 1200 PMTs as well as the 2, 3, and 4-fold\nmultiplicities (logic AND in a time window of 30 ns) of each WCD with a\nsampling rate of 40 Hz. Using the scaler data, we have identified 20\nenhancements of the observed rate during periods when storm clouds were over\nHAWC but without cloud-earth discharges. These enhancements can be produced by\nelectrons with energies of tens of MeV, accelerated by the electric fields of\ntens of kV/m measured at the site during the storm periods. 
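For context on the dedispersion cost that the pFDMT in the PulsarX entry above optimises, here is a brute-force incoherent dedispersion sketch; the 4.149 ms dispersion constant is standard, while the band, channel count, and sampling time are illustrative stand-ins.

```python
# Sketch: shift each frequency channel by the cold-plasma delay relative to
# the top of the band, then sum over frequency. pFDMT avoids redoing the
# shared partial sums that this naive loop recomputes for every DM.
import numpy as np

def dedisperse(dynspec, freqs_ghz, dm, dt_s):
    """dynspec: (n_chan, n_samp); returns the frequency-summed time series."""
    f_ref = freqs_ghz.max()
    delays = 4.149e-3 * dm * (freqs_ghz**-2 - f_ref**-2)   # seconds
    shifts = np.round(delays / dt_s).astype(int)
    out = np.zeros(dynspec.shape[1])
    for chan, s in zip(dynspec, shifts):
        out += np.roll(chan, -s)
    return out

freqs = np.linspace(0.856, 1.712, 64)     # GHz, a MeerKAT-like L band
data = np.random.rand(64, 4096)
print(dedisperse(data, freqs, dm=100.0, dt_s=1e-3).shape)
```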
In this work, we\npresent the recorded data, the method of analysis and our preliminary\nconclusions on the electron acceleration by the electric fields inside the\nclouds.", "category": "astro-ph_IM" }, { "text": "Measurements of Charge Transfer Efficiency in a Proton-irradiated Swept\n Charge Device: Charge Coupled Devices (CCDs) have been successfully used in several low\nenergy X-ray astronomical satellites over the past two decades. Their high\nenergy resolution and high spatial resolution make them a perfect tool for low\nenergy astronomy topics such as the formation of galaxy clusters and the environments of black\nholes. The Low Energy X-ray Telescope (LE) group is developing a Swept Charge\nDevice (SCD) for the Hard X-ray Modulation Telescope (HXMT) satellite. The SCD is a\nspecial low energy X-ray CCD, which can be read out a thousand times faster\nthan traditional CCDs while keeping excellent energy resolution. A\ntest method for measuring the charge transfer efficiency (CTE) of a prototype\nSCD has been set up. Studies of the charge transfer inefficiency (CTI) have\nbeen performed over the operational temperature range, with a proton-irradiated\nSCD.", "category": "astro-ph_IM" }, { "text": "First Impressions: Early-Time Classification of Supernovae using Host\n Galaxy Information and Shallow Learning: Substantial effort has been devoted to the characterization of transient\nphenomena from photometric information. Automated approaches to this problem\nhave taken advantage of complete phase-coverage of an event, limiting their use\nfor triggering rapid follow-up of ongoing phenomena. In this work, we introduce\na neural network with a single recurrent layer designed explicitly for early\nphotometric classification of supernovae. Our algorithm leverages transfer\nlearning to account for model misspecification, host galaxy photometry to solve\nthe data scarcity problem soon after discovery, and a custom weighted loss to\nprioritize accurate early classification. We first train our algorithm using\nstate-of-the-art transient and host galaxy simulations, then adapt its weights\nand validate it on the spectroscopically-confirmed SNe Ia, SNe II, and SNe Ib/c\nfrom the Zwicky Transient Facility Bright Transient Survey. On observed data,\nour method achieves an overall accuracy of $82 \pm 2$% within 3 days of an\nevent's discovery, and an accuracy of $87 \pm 5$% within 30 days of discovery.\nAt both early and late phases, our method achieves comparable or superior\nresults to the leading classification algorithms with a simpler network\narchitecture. These results help pave the way for rapid photometric and\nspectroscopic follow-up of scientifically-valuable transients discovered in\nmassive synoptic surveys.", "category": "astro-ph_IM" }, { "text": "What could KIDSpec, a new MKID spectrograph, do on the ELT?: Microwave Kinetic Inductance Detectors (MKIDs) are beginning to become more\nprominent in astronomical instrumentation, due to their sensitivity, low noise,\nhigh pixel count for superconducting detectors, and inherent energy and time\nresolving capability. The Kinetic Inductance Detector Spectrometer (KIDSpec)\nwill take advantage of these features; KIDSpec is a medium resolution MKID\nspectrograph for the optical/near-infrared. 
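A toy version of the CTI measurement logic in the Swept Charge Device entry above: the measured line energy declines as $(1-\mathrm{CTI})^{n}$ after $n$ transfers, so a log-linear fit recovers CTI. The Fe-55 line energy is a standard calibration value; the CTI value and noise are invented.

```python
# Sketch: log(signal) = log(Q0) + n*log(1 - CTI) ~ log(Q0) - n*CTI,
# so the fitted slope gives the charge transfer inefficiency.
import numpy as np

cti_true = 1e-5
n_transfers = np.arange(100, 3000, 100)
signal = 5895.0 * (1.0 - cti_true) ** n_transfers        # Mn K-alpha line, eV
signal *= 1.0 + 1e-4 * np.random.randn(n_transfers.size)

slope, _ = np.polyfit(n_transfers, np.log(signal), 1)
print("CTI ~", -slope)                                   # -> ~1e-5
```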
KIDSpec will contribute to many\nscience areas, particularly those involving short and/or faint observations.\nWhen short period binary systems are found, typical CCD detectors will struggle\nto characterise these systems due to the very short exposures required, causing\nerrors as large as the estimated parameter itself. The KIDSpec Simulator (KSIM)\nhas been developed to investigate how much KIDSpec could improve on this.\nKIDSpec was simulated on an ELT class telescope to find the extent of its\npotential, and it was found that KIDSpec could observe a $m_{V}\approx{24}$ source\nwith an SNR of 5 in a 10 s exposure at a spectral resolution of 1420. This would\nmean that KIDSpec on an ELT class telescope could spectroscopically follow up\non any LSST photometric discoveries of LISA verification sources.", "category": "astro-ph_IM" }, { "text": "A Digital Broadband Beamforming Architecture for 2-PAD: We describe a hierarchical, frequency-domain beamforming architecture for\nsynthesising a sky beam from the wideband antenna feeds of digital aperture\narrays. The development of densely-packed, all-digital aperture arrays is an\nimportant area of research required for the Square Kilometre Array (SKA) radio\ntelescope. The design of real-time signal processing systems for digital\naperture arrays is currently a central challenge in pathfinder projects\nworldwide. In particular, this work describes a specific implementation of the\nbeamforming architecture to the 2-Polarisation All-Digital (2-PAD) aperture\narray demonstrator.", "category": "astro-ph_IM" }, { "text": "Even simpler modeling of quadruply lensed quasars (and random quartets)\n using Witt's hyperbola: Witt (1996) has shown that for an elliptical potential, the four images of a\nquadruply lensed quasar lie on a rectangular hyperbola that passes through the\nunlensed quasar position and the center of the potential as well. Wynne and\nSchechter (2018) have shown that, for the singular isothermal elliptical\npotential (SIEP), the four images also lie on an `amplitude' ellipse centered\non the quasar position with axes parallel to the hyperbola's asymptotes. Witt's\nhyperbola arises from equating the directions of both sides of the lens\nequation. The amplitude ellipse derives from equating the magnitudes. One can\nmodel any four points as an SIEP in three steps. 1. Find the rectangular\nhyperbola that passes through the points. 2. Find the aligned ellipse that also\npasses through them. 3. Find the hyperbola with asymptotes parallel to those of\nthe first that passes through the center of the ellipse and the pair of images\nclosest to each other. The second hyperbola and the ellipse give an SIEP that\npredicts the positions of the two remaining images where the curves intersect.\nPinning the model to the closest pair guarantees a four-image model. Such\nmodels permit rapid discrimination between gravitationally lensed quasars and\nrandom quartets of stars.", "category": "astro-ph_IM" }, { "text": "A Cyber Infrastructure for the SKA Telescope Manager: The Square Kilometre Array Telescope Manager (SKA TM) will be responsible for\nassisting the SKA Operations and Observation Management, carrying out System\ndiagnosis and collecting Monitoring & Control data from the SKA sub-systems and\ncomponents. 
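Step 1 of the three-step recipe in the Witt's-hyperbola entry above can be written in a few lines: the rectangular conic $A(x^2-y^2)+Bxy+Cx+Dy+E=0$ (rectangular because the $x^2$ and $y^2$ coefficients are opposite) through four given image positions follows from a null-space solve. The image positions below are made up for illustration.

```python
# Sketch: four points give four linear equations in the five conic
# coefficients (up to scale), so the SVD null-space vector is the curve.
import numpy as np

def witt_hyperbola(xy):
    """xy: (4, 2) image positions; returns (A, B, C, D, E) up to scale."""
    x, y = xy[:, 0], xy[:, 1]
    M = np.column_stack([x**2 - y**2, x * y, x, y, np.ones(4)])
    _, _, vt = np.linalg.svd(M)
    return vt[-1]

images = np.array([[1.1, 0.2], [-0.9, 0.4], [0.1, -1.0], [0.3, 1.2]])
A, B, C, D, E = witt_hyperbola(images)
resid = (A * (images[:, 0]**2 - images[:, 1]**2) + B * np.prod(images, 1)
         + C * images[:, 0] + D * images[:, 1] + E)
print(np.max(np.abs(resid)))          # ~0: all four images lie on the curve
```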
To provide adequate compute resources, scalability, operation continuity and high availability, as well as strict Quality of Service, the TM cyber-infrastructure (embodied in the Local Infrastructure - LINFRA) consists of COTS hardware and infrastructural software (for example: server monitoring software, host operating system, virtualization software, device firmware), providing a specially tailored Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) solution. The TM infrastructure provides services in the form of computational power, software defined networking, power, storage abstractions, and high level, state of the art IaaS and PaaS management interfaces. This cyber platform will be tailored to each of the two SKA Phase 1 telescope instances (SKA_MID in South Africa and SKA_LOW in Australia), each presenting different computational and storage infrastructures and conditioned by its location. This cyber platform will provide a compute model enabling TM to manage the deployment and execution of its multiple components (observation scheduler, proposal submission tools, M&C components, forensic tools, several databases, etc.). In this sense, the TM LINFRA is primarily focused on the provision of isolated instances, mostly resorting to virtualization technologies, while defaulting to bare hardware if specifically required due to performance, security, availability, or other requirements.", "category": "astro-ph_IM" }, { "text": "Mid-band gravitational wave detection with precision atomic sensors: We assess the science reach and technical feasibility of a satellite mission based on precision atomic sensors configured to detect gravitational radiation. Conceptual advances in the past three years indicate that a two-satellite constellation with science payloads consisting of atomic sensors based on laser cooled atomic Sr can achieve scientifically interesting gravitational wave strain sensitivities in a frequency band between the LISA and LIGO detectors, roughly 30 mHz to 10 Hz. The discovery potential of the proposed instrument ranges from the observation of new astrophysical sources (e.g. black hole and neutron star binaries) to searches for cosmological sources of stochastic gravitational radiation and searches for dark matter.", "category": "astro-ph_IM" }, { "text": "Focus Demo: CANFAR+Skytree: A Cloud Computing and Data Mining System for Astronomy: This is a companion Focus Demonstration article to the CANFAR+Skytree poster (Ball 2012), demonstrating the usage of the Skytree machine learning software on the Canadian Advanced Network for Astronomical Research (CANFAR) cloud computing system. CANFAR+Skytree is the world's first cloud computing system for data mining in astronomy.", "category": "astro-ph_IM" }, { "text": "Study of hadron and gamma-ray acceptance of the MAGIC telescopes: towards an improved background estimation: The MAGIC telescopes are an array of two imaging atmospheric Cherenkov telescopes (IACTs) studying the gamma-ray sky at very high energies (VHE; E>100 GeV). The observations are performed in stereoscopic mode, with both telescopes pointing at the same position in the sky. The MAGIC field of view (FoV) acceptance for hadrons and gamma rays has a complex shape, which depends on several parameters such as the azimuth and zenith angle of the observations.
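To make the acceptance-characterisation idea of the MAGIC abstract above concrete, here is a hedged sketch (not the collaboration's actual analysis): build a camera-coordinate acceptance map from background-dominated events and normalise it for use in background estimation. The field-of-view size, binning, and event distribution below are hypothetical.

```python
import numpy as np

def acceptance_map(det_x, det_y, fov_deg=3.5, nbins=40):
    """Binned, peak-normalised acceptance in camera coordinates (deg),
    estimated from background-dominated (hadron-like) events."""
    edges = np.linspace(-fov_deg / 2, fov_deg / 2, nbins + 1)
    counts, _, _ = np.histogram2d(det_x, det_y, bins=[edges, edges])
    return counts / counts.max(), edges

# Hypothetical background events with a radially declining acceptance.
rng = np.random.default_rng(1)
r = np.abs(rng.normal(0.0, 1.0, 100_000))
phi = rng.uniform(0.0, 2 * np.pi, r.size)
acc, edges = acceptance_map(r * np.cos(phi), r * np.sin(phi))

# A background estimate for a signal region then follows from the ratio
# of acceptance integrals between the signal and off-source regions.
print(acc.shape, acc.max())
```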
In the standard MAGIC analysis, the strategy adopted for estimating this acceptance is not optimal in the case of complex FoVs. In this contribution we present the results of systematic studies intended to characterise the acceptance for the entire FoV. These studies open up the possibility of applying improved background estimation methods to the MAGIC data, which are useful for investigating the morphology of extended or multiple sources.", "category": "astro-ph_IM" }, { "text": "Demonstrating the Concept of Parallax with James Webb Space Telescope: We measured the parallax of the James Webb Space Telescope based on near-simultaneous observations using the Lulin One-meter Telescope and the GROWTH India Telescope, separated by a distance of ~4214 km. This serves as a great demonstration of the concept of parallax commonly taught in introductory astronomy courses.", "category": "astro-ph_IM" }, { "text": "High Cadence Optical Transient Searches using Drift Scan Imaging III: Development of an Inexpensive Drive Control System and Characterisation and Correction of Drive System Periodic Errors: In order to further develop and implement novel drift scan imaging experiments to undertake wide field, high time resolution surveys for millisecond optical transients, an appropriate telescope drive system is required. This paper describes the development of a simple and inexpensive hardware and software system to monitor, characterise, and correct the primary category of telescope drive errors, periodic errors due to imperfections in the drive and gear chain. A model for the periodic errors is generated from direct measurements of the telescope drive shaft rotation, verified by comparison to astronomical measurements of the periodic errors. The predictive model is generated and applied in real-time in the form of corrections to the drive rate. A demonstration of the system shows that inherent periodic errors of peak-to-peak amplitude ~100'' are reduced to below the seeing limit of ~3''. This demonstration allowed an estimate of the uncertainties on the transient sensitivity timescales of the prototype survey of Tingay & Joubert (2021), with the nominal timescale sensitivity of 21 ms revised to be in the range of 20 - 22 ms, which does not significantly affect the results of the experiment. The correction system will be adopted into the final version of the high cadence imaging experiment, which is currently under construction. The correction system is inexpensive (<$A100), composed of readily available hardware, and readily adaptable to other applications. Design details and codes are therefore made publicly available.", "category": "astro-ph_IM" }, { "text": "Electric sail control mode for amplified transverse thrust: The electric solar wind sail produces thrust by centrifugally spanned high voltage tethers interacting with the solar wind protons. The sail attitude can be controlled and attitude maneuvers are possible by tether voltage modulation synchronous with the sail rotation. In particular, the sail can be inclined with respect to the solar wind direction to obtain transverse thrust to change the osculating orbit angular momentum. Such an inclination has to be maintained by a continual control voltage modulation. Consequently, the tether voltage available for the thrust is less than the maximum voltage provided by the power system.
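As a hedged sketch of the periodic-error modelling described in the drift-scan abstract above (the real system works from drive-shaft measurements; the worm period, harmonic count, and error data here are assumptions): fit a few harmonics of the worm period to the measured tracking error, then evaluate the model's time derivative as a real-time drive-rate correction.

```python
import numpy as np

def fit_periodic_error(t, err, worm_period, n_harm=3):
    """Least-squares fit of sin/cos harmonics of the worm period to the
    measured tracking error (arcsec); returns a callable model."""
    w = 2 * np.pi / worm_period

    def design(tt):
        cols = [np.ones_like(tt)]
        for k in range(1, n_harm + 1):
            cols += [np.sin(k * w * tt), np.cos(k * w * tt)]
        return np.column_stack(cols)

    coeffs, *_ = np.linalg.lstsq(design(t), err, rcond=None)
    return lambda tt: design(tt) @ coeffs

# Hypothetical worm period of 479 s and ~100'' peak-to-peak error.
t = np.linspace(0, 4 * 479, 2000)
err = 50 * np.sin(2 * np.pi * t / 479) + 8 * np.sin(4 * np.pi * t / 479 + 0.7)
model = fit_periodic_error(t, err, 479.0)

# Drive-rate correction = time derivative of the model; a simple
# numerical derivative suffices at control-loop cadence.
dt = 0.5
rate_corr = (model(t + dt) - model(t)) / dt     # arcsec/s to subtract
print(f"residual after correction: {np.std(err - model(t)):.2e} arcsec")
```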
Using a spherical pendulum as a model for a single rotating tether, we derive analytical estimates of the control efficiency for two separate sail control modes. One is a continuous control modulation that corresponds to strictly planar tether tip motion. The other is an on-off modulation with the tether tip moving along a closed loop on a saddle surface. The novel on-off mode is introduced here to both amplify the transverse thrust and reduce the power consumption. During the rotation cycle, the maximum voltage is applied to the tether only over two thrusting arcs, when most of the transverse thrust is produced. In addition to the transverse thrust, we obtain the thrusting angle and electric power consumption for the two control modes. It is concluded that while the thrusting angle is about half of the sail inclination for the continuous modulation, it approximately equals the inclination angle for the on-off modulation. The efficiency of the on-off mode is emphasized when power consumption is considered, and the on-off mode can be used to improve the propulsive acceleration through the reduced power system mass.", "category": "astro-ph_IM" }, { "text": "Current status of Shanghai VLBI correlator: Shanghai Astronomical Observatory upgraded its DiFX cluster to 420 CPU cores and a 432-TB storage system at the end of 2014. An international network connection for raw data transfer has also been established. The routine operations for IVS sessions including the CRF, AOV, and APSG series began in early 2015. In addition to the IVS observations, the correlator is dedicated to astrophysical and astrometric programs with the Chinese VLBI Network and international joint VLBI observations. It also worked with the newly built Tianma 65-m radio telescope and successfully found fringes at bands as high as X/Ka and Q in late 2015. A more powerful platform is planned for the high data rate and massive data correlation tasks in the future.", "category": "astro-ph_IM" }, { "text": "The SFXC software correlator for Very Long Baseline Interferometry: Algorithms and Implementation: In this paper a description is given of the SFXC software correlator, developed and maintained at the Joint Institute for VLBI in Europe (JIVE). The software is designed to run on generic Linux-based computing clusters. The correlation algorithm is explained in detail, as are some of the novel modes that software correlation has enabled, such as wide-field VLBI imaging through the use of multiple phase centres and pulsar gating and binning. This is followed by an overview of the software architecture. Finally, the performance of the correlator as a function of the number of CPU cores, telescopes and spectral channels is shown.", "category": "astro-ph_IM" }, { "text": "An optical test bench for the precision characterization of absolute quantum efficiency for the TESS CCD detectors: The Transiting Exoplanet Survey Satellite (TESS) will search for planets transiting bright stars with Ic<13. TESS has been selected by NASA for launch in 2018 as an Astrophysics Explorer mission, and is expected to discover a thousand or more planets that are smaller in size than Neptune. TESS will employ four wide-field optical charge-coupled device (CCD) cameras with a band-pass of 650 nm-1050 nm to detect temporary drops in brightness of stars due to planetary transits. The 1050 nm limit is set by the quantum efficiency (QE) of the CCDs.
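A hedged sketch of the absolute-QE bookkeeping implied by the TESS test-bench abstract above and continuing below (the actual bench calibration is more involved; the photodiode power, flux levels, and geometry factor here are hypothetical): QE is the ratio of detected photoelectrons to photons incident on the CCD, with the incident photon rate inferred from a calibrated photodiode.

```python
H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)

def absolute_qe(ccd_electrons_per_s, diode_power_w, wavelength_m,
                area_ratio=1.0):
    """QE = detected electron rate / incident photon rate.

    diode_power_w : optical power on a calibrated photodiode
    area_ratio    : CCD aperture area / photodiode aperture area,
                    assuming a uniform beam over both apertures
    """
    photon_energy = H * C / wavelength_m                  # J per photon
    photons_per_s = diode_power_w / photon_energy * area_ratio
    return ccd_electrons_per_s / photons_per_s

# Hypothetical numbers at 1000 nm, where the QE rolls off.
print(f"QE = {absolute_qe(1.6e9, 1.0e-9, 1000e-9):.3f}")
```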
The detector assembly consists of four back-illuminated MIT Lincoln Laboratory CCID-80 devices. Each CCID-80 device consists of a 2048x2048 imaging array and a 2048x2048 frame-store region. Very precise on-ground calibration and characterization of the CCD detectors will significantly assist in the analysis of the science data obtained in space. The characterization of the absolute QE of the CCD detectors is a crucial part of this process because QE significantly affects the performance of the CCD at the redder wavelengths at which TESS will be operating. An optical test bench with very high photometric stability has been developed to perform precise QE measurements. The design of the test setup along with key hardware, methodology, and results from the test campaign are presented.", "category": "astro-ph_IM" }, { "text": "Summary of the 3rd BINA Workshop: BINA-3 was the third workshop of this series involving scientists from India and Belgium, aimed at fostering future joint research in view of cutting-edge observatories and advances in theory. BINA-3 was held at the Graphic Era Hill University, 22-24 March 2023 at Bhimtal (near Nainital), Uttarakhand, India. A major event was the inauguration of the International Liquid-Mirror Telescope (ILMT), the first liquid mirror telescope devoted exclusively to astronomy. BINA-3 provided impressive highlights encompassing topics of both general astrophysics and solar physics. Research results and future projects were featured in invited and contributed talks and poster presentations.", "category": "astro-ph_IM" }, { "text": "Simulation of ultra-high energy photon propagation with PRESHOWER 2.0: In this paper we describe a new release of the PRESHOWER program, a tool for Monte Carlo simulation of propagation of ultra-high energy photons in the magnetic field of the Earth. The PRESHOWER program is designed to calculate magnetic pair production and bremsstrahlung and should be used together with other programs to simulate extensive air showers induced by photons. The main new features of the PRESHOWER code include a much faster algorithm in the procedures simulating gamma conversion and bremsstrahlung, an update of the geomagnetic field model, and a minor correction. The new simulation procedure increases the flexibility of the code so that it can also be applied to other magnetic field configurations, such as those encountered in the vicinity of the Sun or neutron stars.", "category": "astro-ph_IM" }, { "text": "Gaia astrometry for stars with too few observations - a Bayesian approach: Gaia's astrometric solution aims to determine at least five parameters for each star, together with appropriate estimates of their uncertainties and correlations. This requires at least five distinct observations per star. In the early data reductions the number of observations may be insufficient for a five-parameter solution, and even after the full mission many stars will remain under-observed, including faint stars at the detection limit and transient objects. In such cases it is reasonable to determine only the two position parameters. Their formal uncertainties would however grossly underestimate the actual errors, due to the neglected parallax and proper motion. We aim to develop a recipe to calculate sensible formal uncertainties that can be used in all cases of under-observed stars.
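A hedged toy version of the Bayesian idea in the Gaia abstract that begins above and continues below (the actual solution is a global fit; the one-dimensional design matrix, prior widths, and data here are hypothetical): treating the poorly constrained parallax and proper motion as Gaussian nuisance parameters with priors, the posterior covariance of the position follows from the standard Gaussian update and yields realistically inflated position uncertainties.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D astrometry: abscissa = position + pm*t + parallax*f(t) + noise.
t = np.array([0.10, 0.15, 0.20])                 # only 3 epochs (years)
f = np.sin(2 * np.pi * t)                        # toy parallax factor
A = np.column_stack([np.ones_like(t), t, f])     # [position, pm, parallax]
sigma = 0.5                                      # per-obs. uncertainty (mas)
y = A @ np.array([10.0, 3.0, 2.0]) + rng.normal(0, sigma, t.size)

W = np.eye(t.size) / sigma**2
# Zero-mean Gaussian priors: pm ~ 30 mas/yr, parallax ~ 10 mas,
# position effectively unconstrained (zero prior precision).
P_inv = np.diag([0.0, 1 / 30.0**2, 1 / 10.0**2])

cov = np.linalg.inv(A.T @ W @ A + P_inv)         # posterior covariance
x_hat = cov @ (A.T @ W @ y)                      # posterior mean
print("position: %.2f +/- %.2f mas" % (x_hat[0], np.sqrt(cov[0, 0])))
```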
Prior information about the typical ranges of stellar parallaxes and proper motions is incorporated in the astrometric solution by means of Bayes' rule. Numerical simulations based on the Gaia Universe Model Snapshot (GUMS) are used to investigate how the prior influences the actual errors and formal uncertainties when different numbers of Gaia observations are available. We develop a criterion for the optimum choice of priors, apply it to a wide range of cases, and derive a global approximation of the optimum prior as a function of magnitude and galactic coordinates. The feasibility of the Bayesian approach is demonstrated through global astrometric solutions of simulated Gaia observations. With an appropriate prior it is possible to derive sensible positions with realistic error estimates for any number of available observations. Even though this recipe also works for well-observed stars, it should not be used where a good five-parameter astrometric solution can be obtained without a prior. Parallaxes and proper motions from a solution using priors are always biased and should not be used.", "category": "astro-ph_IM" }, { "text": "Visible astro-comb filtered by a passively-stabilized Fabry-Perot cavity: We demonstrate a compact 29.3 GHz visible astro-comb covering the spectrum from 560 nm to 700 nm. An 837 MHz Yb:fiber laser frequency comb phase-locked to a Rb clock served as the seed comb to ensure frequency stability and a high side-mode suppression ratio. After the visible super-continuum generation, a cavity-length-fixed Fabry-Perot cavity made of ultra-low-expansion glass was utilized to filter the comb teeth, eliminating rapid active dithering. The mirrors were a home-made complementary chirped-mirror pair with zero dispersion and high reflection to guarantee no mode skipping. These filtered comb teeth were clearly resolved in an astronomical spectrograph with a resolution of 49,000, exhibiting sharp line shapes, a zero noise floor, and uniform exposure amplitude.", "category": "astro-ph_IM" }, { "text": "Characterization Of Inpaint Residuals In Interferometric Measurements of the Epoch Of Reionization: Radio Frequency Interference (RFI) is one of the systematic challenges preventing 21cm interferometric instruments from detecting the Epoch of Reionization. To mitigate the effects of RFI on data analysis pipelines, numerous inpainting techniques have been developed to restore RFI-corrupted data. We examine the qualitative and quantitative errors introduced into the visibilities and power spectrum due to inpainting. We perform our analysis on simulated data as well as real data from the Hydrogen Epoch of Reionization Array (HERA) Phase 1 upper limits. We also introduce a convolutional neural network that is capable of inpainting RFI-corrupted data in interferometric instruments. We train our network on simulated data and show that it is capable of inpainting real data without retraining. We find that techniques that incorporate high wavenumbers in delay space in their modeling are best suited for inpainting over narrowband RFI. We also show that with our fiducial parameters Discrete Prolate Spheroidal Sequences (DPSS) and CLEAN provide the best performance for intermittent ``narrowband'' RFI, while Gaussian Process Regression (GPR) and Least Squares Spectral Analysis (LSSA) provide the best performance for larger RFI gaps.
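As a hedged illustration of one inpainting approach named in the abstract above (a DPSS least-squares fill-in; the bandwidth product, gap location, and toy visibility are assumptions, not the paper's settings):

```python
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(2)

# Hypothetical single-baseline visibility spectrum with an RFI gap.
nchan = 256
x = np.arange(nchan)
vis = np.exp(2j * np.pi * 0.01 * x) + 0.05 * (
    rng.normal(size=nchan) + 1j * rng.normal(size=nchan))
flags = np.zeros(nchan, bool)
flags[100:120] = True                  # flagged (RFI-corrupted) channels

# DPSS basis confined to low delays; NW sets the delay half-bandwidth.
basis = dpss(nchan, NW=4, Kmax=8).T    # shape (nchan, 8)

# Fit on unflagged channels only, then evaluate the model in the gap.
good = ~flags
coeff, *_ = np.linalg.lstsq(basis[good], vis[good], rcond=None)
inpainted = np.where(flags, basis @ coeff, vis)

# Here `vis` is known everywhere, so the gap residual measures accuracy.
print(f"gap residual RMS: {np.abs(inpainted - vis)[flags].std():.3f}")
```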
However, we caution that these qualitative conclusions are sensitive to the chosen hyperparameters of each inpainting technique. We find these results to be consistent in both simulated and real visibilities. We show that all inpainting techniques reliably reproduce foreground-dominated modes in the power spectrum. Since the inpainting techniques should not be capable of reproducing noise realizations, we find that the largest errors occur in the noise-dominated delay modes. We show that in the future, as the noise level of the data comes down, CLEAN and DPSS are most capable of reproducing the fine frequency structure in the visibilities of HERA data.", "category": "astro-ph_IM" }, { "text": "Simultaneous analysis of large INTEGRAL/SPI datasets: optimizing the computation of the solution and its variance using sparse matrix algorithms: Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X-gamma-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage.", "category": "astro-ph_IM" }, { "text": "Why should we keep measuring zenital dependence of muon flux? Results obtained at Campinas (SP) BR: The zenith-angle dependence of the muon flux reaching the Earth's surface is well known to be proportional to cos^n(\theta). Generally, for practical purposes and simplicity in calculations, n is taken as 2. However, compilations of measurements show a dependence on the geographical location of the experiments as well as on the muons' energy range.
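Sketching the measurement that the muon-telescope abstract above and below describes (hedged: the counts are hypothetical, and a real telescope must first correct the rates for geometric acceptance and solid angle): given coincidence rates R(theta) at several zenith angles, the exponent n follows from a straight-line fit of ln R against ln cos(theta).

```python
import numpy as np

# Hypothetical coincidence rates (Hz) at several zenith angles,
# already corrected for telescope acceptance.
theta_deg = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
rate_hz   = np.array([1.00, 0.93, 0.75, 0.49, 0.24])

# R(theta) = R0 * cos^n(theta)  =>  ln R = ln R0 + n * ln cos(theta),
# so n is the slope of a straight-line fit in log-log variables.
ln_cos = np.log(np.cos(np.radians(theta_deg)))
n, ln_r0 = np.polyfit(ln_cos, np.log(rate_hz), 1)
print(f"n = {n:.2f}, vertical rate R0 = {np.exp(ln_r0):.2f} Hz")
```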
Since analytical solutions appear to be increasingly less necessary because of the greater accessibility of low-cost computational power, an accurate and precise determination of the value of the exponent n under different conditions can be useful in the calculations needed to estimate signals and backgrounds, for both terrestrial and underground experiments. In this work we discuss a method for measuring n using a simple muon telescope and the results obtained for measurements taken at Campinas (SP), Brazil. After validation of the method, we intend, given its simplicity, to extend the measurements to different geographic locations, and thus collect more values of n than currently exist in compilations of general data on cosmic rays.", "category": "astro-ph_IM" }, { "text": "Optical calibration of large format adaptive mirrors: Adaptive (or deformable) mirrors are widely used as wavefront correctors in adaptive optics systems. The optical calibration of an adaptive mirror is a fundamental step during its life-cycle: the process is in fact required to compute a set of known commands to operate the adaptive optics system, to compensate alignment and non common-path aberrations, and to run chopped or field-stabilized acquisitions. In this work we present the sequence of operations for the optical calibration of adaptive mirrors, with a specific focus on large aperture systems such as the adaptive secondaries. Such systems will be one of the core components of the extremely large telescopes. Beyond presenting the optical procedures, we discuss in detail the actors, their functional requirements and the mutual interactions. A specific emphasis is put on automation, through a clear identification of inputs, outputs and quality indicators for each step: due to a high degrees-of-freedom count (thousands of actuators), an automated approach is preferable to constrain the cost and schedule. In the end we present some algorithms for the evaluation of the measurement noise; this point is particularly important since the calibration setup is typically a large facility in an industrial environment, where the noise level may be a major show-stopper.", "category": "astro-ph_IM" }, { "text": "Numerical Strategies of Computing the Luminosity Distance: We propose two efficient numerical methods of evaluating the luminosity distance in the spatially flat {\Lambda}CDM universe. The first method is based on the Carlson symmetric form of elliptic integrals, which is highly accurate and can replace numerical quadratures. The second method, using a modified version of Hermite interpolation, is less accurate but involves only basic numerical operations and can be easily implemented. We compare our methods with other numerical approximation schemes and explore their respective features and limitations. Possible extensions of these methods to other cosmological models are also discussed.", "category": "astro-ph_IM" }, { "text": "Investigation of Residual Blaze Functions in Slit-Based Echelle Spectrograph: We have studied the Residual Blaze Functions (RBF) resulting from the division of individual echelle orders by the extracted flat field in spectra obtained with the slit-fed OES spectrograph of the 2m telescope of the Ond\v{r}ejov observatory, Czech Republic.
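For reference against the luminosity-distance abstract above: the quantity both of the paper's methods accelerate is the comoving integral below. This sketch is only the straightforward quadrature baseline (not the Carlson-form or Hermite-interpolation methods themselves); the cosmological parameters are illustrative.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458            # speed of light (km/s)

def luminosity_distance(z, h0=70.0, omega_m=0.3):
    """D_L in Mpc for a spatially flat LambdaCDM universe (quadrature)."""
    integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp)**3 + 1 - omega_m)
    dc, _ = quad(integrand, 0.0, z)       # dimensionless comoving integral
    return (1 + z) * (C_KM_S / h0) * dc

print(f"D_L(z=1) = {luminosity_distance(1.0):.1f} Mpc")
```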
We have eliminated the dependence on target and observation conditions by semiautomatic fitting of a global response function, thus obtaining the instrument-only part, which may be easily incorporated into the data reduction pipeline. The improvement in the reliability of continuum estimation for spectra of targets with wide and shallow lines is noticeable, and the merging of all orders into one long spectrum gives much more reliable results.", "category": "astro-ph_IM" }, { "text": "FACT - The First G-APD Cherenkov Telescope: Status and Results: The First G-APD Cherenkov Telescope (FACT) is the first telescope using silicon photon detectors (G-APD, aka SiPM). It is built on the mount of the HEGRA CT3 telescope, still located at the Observatorio del Roque de los Muchachos, and has been successfully in operation since Oct. 2011. The use of silicon devices promises a higher photon detection efficiency, more robustness and higher precision than photo-multiplier tubes. The FACT collaboration is investigating the precision with which these devices can be operated over the long term. Currently, the telescope is successfully operated remotely, and robotic operation is under development. During the past months of operation, the foreseen monitoring program of the brightest known TeV blazars has been carried out, and first physics results have been obtained, including a strong flare of Mrk 501. An instantaneous flare alert system is already in a testing phase. This presentation will give an overview of the project and summarize its goals, status and first results.", "category": "astro-ph_IM" }, { "text": "Dishing up the Data: A Decade of Space Missions: The past decade has seen Parkes once again involved in a wide range of space tracking activities that have added to its illustrious legacy. This contribution is a personal recollection of those tracking efforts - both real and celluloid. We begin in a light-hearted vein with some behind-the-scenes views of the popular film, \"The DISH\", and then turn to more serious contributions, discussing the vital role of the telescope in alleviating the great \"traffic jam\" at Mars in 2003/04 and salvaging the Doppler Wind Experiment as the Huygens probe descended through the atmosphere of Saturn's largest moon, Titan, in mid-decade. We cap off the decade with a discussion of the search for the missing Apollo 11 slow-scan TV tapes.", "category": "astro-ph_IM" }, { "text": "High Contrast and High Angular Imaging at Subaru Telescope: Adaptive Optics projects at Subaru Telescope span a wide field of capabilities ranging from ground-layer adaptive optics (GLAO) providing partial correction over a 20 arcmin FOV to extreme adaptive optics (ExAO) for exoplanet imaging. We describe in this paper current and upcoming narrow field-of-view capabilities provided by the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system and its instrument modules, as well as the upcoming 3000-actuator upgrade of the Nasmyth AO system.", "category": "astro-ph_IM" }, { "text": "POLARIX: a pathfinder mission of X-ray polarimetry: Since the birth of X-ray astronomy, spectral, spatial and timing observations have improved dramatically, procuring a wealth of information on the majority of the classes of celestial sources. Polarimetry, instead, has remained basically unprobed. X-ray polarimetry promises to provide additional information by procuring two new observable quantities, the degree and the angle of polarization.
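To connect the POLARIX sensitivity numbers quoted below with the standard figure of merit: the minimum detectable polarization at 99% confidence is commonly written as MDP99 = 4.29 / (mu * R_s) * sqrt((R_s + R_b) / T), where mu is the modulation factor and R_s, R_b are source and background count rates. A hedged sketch with illustrative inputs (not POLARIX's actual values):

```python
import numpy as np

def mdp99(mu, rate_src, rate_bkg, t_obs):
    """Minimum detectable polarization (99% confidence) for an X-ray
    polarimeter: mu = modulation factor, rates in counts/s, t_obs in s."""
    return 4.29 / (mu * rate_src) * np.sqrt((rate_src + rate_bkg) / t_obs)

# Hypothetical values: mu = 0.4, 10 counts/s from the source,
# 0.5 counts/s of background, 1e5 s of observing time.
print(f"MDP99 = {100 * mdp99(0.4, 10.0, 0.5, 1e5):.1f} %")
```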
POLARIX is a mission dedicated to X-ray polarimetry. It exploits the polarimetric response of a Gas Pixel Detector which, combined with position sensitivity at the focus of a telescope, results in a huge increase in sensitivity. Three Gas Pixel Detectors are coupled with three X-ray optics which are the heritage of the JET-X mission. POLARIX will measure time-resolved X-ray polarization with an angular resolution of about 20 arcsec in a field of view of 15 arcmin $\times$ 15 arcmin and with an energy resolution of 20 % at 6 keV. The Minimum Detectable Polarization is 12 % for a source having a flux of 1 mCrab and 10^5 s of observing time. The satellite will be placed in an equatorial orbit at an altitude of 505 km by a Vega launcher. The telemetry down-link station will be Malindi. The pointing of the POLARIX satellite will be gyroless, and it will perform double pointing during the Earth occultation of one source, thus maximizing the scientific return. 75 % of the POLARIX data are open to the community, while 25 %, plus the SVP (Science Verification Phase, 1 month of operation), are dedicated to a core program activity open to the contribution of associated scientists. The planned duration of the mission is one year plus three months of commissioning and SVP, sufficient to perform most of the basic science within the reach of this instrument.", "category": "astro-ph_IM" }, { "text": "S-ACF: A selective estimator for the autocorrelation function of irregularly sampled time series: We present a generalised estimator for the autocorrelation function, S-ACF, which is an extended version of the standard estimator of the autocorrelation function (ACF). S-ACF is a versatile definition that can robustly and efficiently extract periodicity and signal shape information from a time series, independent of the time sampling and with minimal assumptions about the underlying process. Calculating the autocorrelation of irregularly sampled time series becomes possible by generalising the lag of the standard estimator of the ACF to a real parameter and introducing the notion of selection and weight functions. We show that the S-ACF reduces to the standard ACF estimator for regularly sampled time series. Using a large number of synthetic time series we demonstrate that the performance of the S-ACF is as good as or better than commonly used Gaussian and rectangular kernel estimators, and is comparable to a combination of interpolation and the standard estimator. We apply the S-ACF to astrophysical data by extracting rotation periods for the spotted star KIC 5110407, and compare our results to Gaussian process (GP) regression and Lomb-Scargle (LS) periodograms. We find that the S-ACF periods typically agree better with those from GP regression than from LS periodograms, especially in cases where there is evolution in the signal shape. The S-ACF has a wide range of potential applications and should be useful in quantitative science disciplines where irregularly sampled time series occur. A Python implementation of the S-ACF is available under the MIT license.", "category": "astro-ph_IM" }, { "text": "Optimising LSST Observing Strategy for Weak Lensing Systematics: The LSST survey will provide unprecedented statistical power for measurements of dark energy. Consequently, controlling systematic uncertainties is becoming more important than ever.
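A hedged sketch in the spirit of the generalised estimator described in the S-ACF abstract above (this is a simplified Gaussian-weight variant, not the paper's exact definition): for a real-valued lag tau, every pair (i, j) contributes its product of residuals, weighted by how close t_j - t_i is to tau.

```python
import numpy as np

def weighted_acf(t, x, lags, width):
    """ACF of an irregularly sampled series at arbitrary real lags.

    Each pair (i, j) is weighted by a Gaussian in (t_j - t_i) - lag;
    `width` acts as the kernel bandwidth (selection function)."""
    r = x - x.mean()
    dt = t[None, :] - t[:, None]          # all pairwise time differences
    prod = r[None, :] * r[:, None]
    acf = np.empty(len(lags))
    for k, lag in enumerate(lags):
        w = np.exp(-0.5 * ((dt - lag) / width) ** 2)
        acf[k] = np.sum(w * prod) / np.sum(w)
    return acf / np.var(x)

# Hypothetical irregular sampling of a noisy periodic signal.
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 100, 400))
x = np.sin(2 * np.pi * t / 7.3) + 0.2 * rng.normal(size=t.size)
lags = np.linspace(0, 30, 301)
acf = weighted_acf(t, x, lags, width=0.5)
print(f"ACF peak near lag {lags[np.argmax(acf[50:]) + 50]:.1f} (true period 7.3)")
```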
The LSST observing strategy will affect the statistical uncertainty and systematics control for many science cases; here, we focus on weak lensing systematics. The fact that the LSST observing strategy involves hundreds of visits to the same sky area provides new opportunities for systematics mitigation. We explore these opportunities by testing how different dithering strategies (pointing offsets and rotational angle of the camera in different exposures) affect additive weak lensing shear systematics on a baseline operational simulation, using the $\rho-$statistics formalism. Some dithering strategies improve systematics control at the end of the survey by a factor of up to $\sim 3-4$ relative to others. We find that a random translational dithering strategy, applied with random rotational dithering at every filter change, is the most effective of those strategies tested in this work at averaging down systematics. Adopting this dithering algorithm, we explore the effect of varying the area of the survey footprint, exposure time, number of exposures in a visit, and exposure to the Galactic plane. We find that any change that increases the average number of exposures (in filters relevant to weak lensing) reduces the additive shear systematics. Some ways to achieve this increase may not be favorable for the weak lensing statistical constraining power or for other probes, and we explore the relative trade-offs between these options given constraints on the overall survey parameters.", "category": "astro-ph_IM" }, { "text": "Astronomy and the new SI: In 2019 the International System of Units (SI) conceptually re-invented itself. This was necessary because quantum-electronic devices had become so precise that the old SI could no longer calibrate them. The new system defines values of fundamental constants (including $c,h,k,e$ but not $G$) and allows units to be realized from the defined constants through any applicable equation of physics. In this new and more abstract SI, units can take on new guises --- for example, the kilogram is at present best implemented as a derived electrical unit. Relevant to astronomy, however, is that several formerly non-SI units, such as electron-volts, light-seconds, and what we may call "gravity seconds" $GM/c^3$, can now be interpreted not as themselves units, but as shorthand for volts and seconds being used with particular equations of physics. Moreover, the classical astronomical units have exact and rather convenient equivalents in the new SI: zero AB magnitude amounts to $\simeq5\times10^{10}$ photons $\rm m^{-2}\,s^{-1}$ per logarithmic frequency or wavelength interval, $\rm 1\,au\simeq 500$ light-seconds, $\rm 1\,pc\simeq 10^8$ light-seconds, while a solar mass $\simeq5$ gravity-seconds. As a result, the unit conversions ubiquitous in astrophysics can now be eliminated, without introducing other problems, as the old-style SI would have done. We review a variety of astrophysical processes illustrating the simplifications possible with the new-style SI, with special attention to gravitational dynamics, where care is needed to avoid propagating the uncertainty in $G$.
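A small worked check of the unit equivalents quoted in the new-SI abstract above (which continues below), using the exact defining constants and IAU nominal values; the solar "gravity time" GM_sun/c^3 evaluates to about 4.9e-6 s:

```python
# Unit equivalents discussed in the new-SI abstract, from defining/IAU values.
C  = 299_792_458.0          # m/s (exact, defining)
AU = 1.495978707e11         # m (IAU 2012 definition)
PC = 648_000.0 / 3.141592653589793 * AU   # parsec from its definition
GM_SUN = 1.32712440018e20   # m^3/s^2 (IAU nominal solar mass parameter)

print(f"1 au = {AU / C:.1f} light-seconds")    # ~499, i.e. ~500
print(f"1 pc = {PC / C:.3e} light-seconds")    # ~1.03e8
print(f"GM_sun/c^3 = {GM_SUN / C**3:.3e} s")   # ~4.93e-6 s
```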
Well-known systems\n(GPS satellites, GW170817, and the M87 black hole) are used as examples\nwherever possible.", "category": "astro-ph_IM" }, { "text": "Astro2020 Science White Paper: Science Platforms for Resolved Stellar\n Populations in the Next Decade: Over the past decade, research in resolved stellar populations has made great\nstrides in exploring the nature of dark matter, in unraveling the star\nformation, chemical enrichment, and dynamical histories of the Milky Way and\nnearby galaxies, and in probing fundamental physics from general relativity to\nthe structure of stars. Large surveys have been particularly important to the\nbiggest of these discoveries. In the coming decade, current and planned surveys\nwill push these research areas still further through a large variety of\ndiscovery spaces, giving us unprecedented views into the low surface brightness\nUniverse, the high surface brightness Universe, the 3D motions of stars, the\ntime domain, and the chemical abundances of stellar populations. These\ndiscovery spaces will be opened by a diverse range of facilities, including the\ncontinuing Gaia mission, imaging machines like LSST and WFIRST, massively\nmultiplexed spectroscopic platforms like DESI, Subaru-PFS, and MSE, and\ntelescopes with high sensitivity and spatial resolution like JWST, the ELTs,\nand LUVOIR. We do not know which of these facilities will prove most critical\nfor resolved stellar populations research in the next decade. We can predict,\nhowever, that their chance of success will be maximized by granting use of the\ndata to broad communities, that many scientific discoveries will draw on a\ncombination of data from them, and that advances in computing will enable\nincreasingly sophisticated analyses of the large and complex datasets that they\nwill produce. We recommend that Astro2020 1) acknowledge the critical role that\ndata archives will play for stellar populations and other science in the next\ndecade, 2) recognize the opportunity that advances in computing will bring for\nsurvey data analysis, and 3) consider investments in Science Platform\ntechnology to bring these opportunities to fruition.", "category": "astro-ph_IM" }, { "text": "The Zwicky Transient Facility: The Zwicky Transient Facility (ZTF) is a next-generation optical synoptic\nsurvey that builds on the experience and infrastructure of the Palomar\nTransient Factory (PTF). Using a new 47 deg$^2$ survey camera, ZTF will survey\nmore than an order of magnitude faster than PTF to discover rare transients and\nvariables. I describe the survey and the camera design. Searches for young\nsupernovae, fast transients, counterparts to gravitational-wave detections, and\nrare variables will benefit from ZTF's high cadence, wide area survey.", "category": "astro-ph_IM" }, { "text": "Reconstructing inclined extensive air showers from radio measurements: We present a reconstruction algorithm for extensive air showers with zenith\nangles between 65$^\\circ$ and 85$^\\circ$ measured with radio antennas in the\n30-80 MHz band. Our algorithm is based on a signal model derived from CoREAS\nsimulations which explicitly takes into account the asymmetries introduced by\nthe superposition of charge-excess and geomagnetic radiation as well as by\nearly-late effects. We exploit correlations among fit parameters to reduce the\ndimensionality and thus ensure stability of the fit procedure. 
Our approach reaches a reconstruction efficiency near 100%, with an intrinsic resolution for the reconstruction of the electromagnetic energy of well below 5%. It can be employed in upcoming large-scale radio detection arrays using the 30-80 MHz band, in particular the AugerPrime Radio detector of the Pierre Auger Observatory, and can likely be adapted to experiments such as GRAND operating at higher frequencies.", "category": "astro-ph_IM" }, { "text": "PolarLight: a CubeSat X-ray Polarimeter based on the Gas Pixel Detector: The gas pixel detector (GPD) is designed and developed for high-sensitivity astronomical X-ray polarimetry, which is a new window about to open in a few years. Due to the small mass, low power, and compact geometry of the GPD, we propose a CubeSat mission, Polarimeter Light (PolarLight), to demonstrate and test the technology directly in space. There are no optics, only a collimator to constrain the field of view to 2.3 degrees. Filled with pure dimethyl ether (DME) at 0.8 atm and sealed by a beryllium window 100 microns thick, with a sensitive area of about 1.4 mm by 1.4 mm, PolarLight allows us to observe the brightest X-ray sources on the sky, with a count rate of, e.g., ~0.2 counts/s from the Crab nebula. PolarLight is 1U in size and is mounted in a 6U CubeSat, which was launched into a low Earth Sun-synchronous orbit on October 29, 2018, and is currently under test. More launches with improved designs are planned in 2019. These tests will help increase the technology readiness for future missions such as the enhanced X-ray Timing and Polarimetry mission (eXTP), help us better understand the orbital background, and may help constrain the physics through observations of the brightest objects.", "category": "astro-ph_IM" }, { "text": "The Jiao Tong University Spectroscopic Telescope Project: The Jiao Tong University Spectroscopic Telescope (JUST) is a 4.4-meter f/6.0 segmented-mirror telescope dedicated to spectroscopic observations. The JUST primary mirror is composed of 18 hexagonal segments, each with a diameter of 1.1 m. JUST provides two Nasmyth platforms for placing science instruments. One Nasmyth focus fits a field of view of 10 arcmin and the other has an extended field of view of 1.2 deg with correction optics. A tertiary mirror is used to switch between the two Nasmyth foci. JUST will be installed at a site at Lenghu in Qinghai Province, China, and will conduct spectroscopic observations with three types of instruments to explore the dark universe, trace the dynamic universe, and search for exoplanets: (1) a multi-fiber (2000 fibers) medium-resolution spectrometer (R=4000-5000) to spectroscopically map galaxies and large-scale structure; (2) an integral field unit (IFU) array of 500 optical fibers and/or a long-slit spectrograph dedicated to fast follow-ups of transient sources for multimessenger astronomy; (3) a high-resolution spectrometer (R~100000) designed to identify Jupiter analogs and Earth-like planets, with the capability to characterize the atmospheres of hot exoplanets.", "category": "astro-ph_IM" }, { "text": "Towards a data-driven model of the sky from low Earth orbit as observed by the Hubble Space Telescope: The sky observed by space telescopes in Low Earth Orbit (LEO) can be dominated by stray light from multiple sources including the Earth, Sun and Moon. This stray light presents a significant challenge to missions that aim to make a secure measurement of the Extragalactic Background Light (EBL).
In this work we quantify the impact of stray light on sky observations made by the Hubble Space Telescope (HST) Advanced Camera for Surveys. By selecting on orbital parameters we successfully isolate images whose sky contains minimal and high levels of Earthshine. In addition, we find that weather observations from the CERES satellites correlate with the observed HST sky surface brightness, indicating the value of incorporating such data to characterise the sky. Finally, we present a machine learning model of the sky, trained on the data used in this work, to predict the total observed sky surface brightness. We demonstrate that our initial model is able to predict the total sky brightness under a range of conditions to within 3.9% of the true measured sky. Moreover, we find that the model matches the stray light-free observations better than current physical Zodiacal light models.", "category": "astro-ph_IM" }, { "text": "Lattice Boltzmann Method for Electromagnetic Wave Propagation: We present a new Lattice Boltzmann (LB) formulation to solve the Maxwell equations for electromagnetic (EM) waves propagating in a heterogeneous medium. By using a pseudo-vector discrete Boltzmann distribution, the scheme is shown to reproduce the continuum Maxwell equations. The technique compares well with a pseudo-spectral method at solving for two-dimensional wave propagation in a heterogeneous medium, which by design contains substantial contrasts in the refractive index. The extension to three dimensions follows naturally and, owing to the recognized efficiency of LB schemes for parallel computation in irregular geometries, it gives a powerful method to numerically simulate a wide range of problems involving EM wave propagation in complex media.", "category": "astro-ph_IM" }, { "text": "Characterizing Variable Stars in a Single Night with LSST: Stars exhibit a bewildering variety of variable behaviors ranging from explosive magnetic flares to stochastically changing accretion to periodic pulsations or rotations. The principal LSST surveys will have cadences too sparse and irregular to capture most of these phenomena. A novel idea is proposed here to observe a single Galactic field, rich in unobscured stars, in a continuous sequence of $\sim 15$ second exposures for one long winter night in a single photometric band. The result will be a unique dataset of $\sim 1$ million regularly spaced stellar lightcurves. The lightcurves will give a particularly comprehensive collection of dM star variability. A powerful array of statistical procedures can be applied to the ensemble of lightcurves from the long-standing fields of time series analysis, signal processing and econometrics. Dozens of `features' describing the variability can be extracted and subjected to machine learning classification, giving a unique authoritative objective classification of rapidly variable stars. The most effective features can then inform the wider LSST community on the best approaches to variable star identification and classification from the sparse, irregular cadences that dominate the LSST project.", "category": "astro-ph_IM" }, { "text": "Tails: Chasing Comets with the Zwicky Transient Facility and Deep Learning: We present Tails, an open-source deep-learning framework for the identification and localization of comets in the image data of the Zwicky Transient Facility (ZTF), a robotic optical time-domain survey currently in operation at the Palomar Observatory in California, USA.
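A hedged sketch of the kind of data-driven sky model described in the HST abstract above; the feature names, toy relationship, and regressor choice are hypothetical stand-ins, not the paper's actual inputs or architecture.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000

# Hypothetical per-exposure features: Sun altitude, Earth-limb angle,
# ecliptic latitude, and a CERES-like cloud/albedo index.
X = np.column_stack([
    rng.uniform(-90, 90, n),   # sun_alt (deg)
    rng.uniform(0, 120, n),    # limb_angle (deg)
    rng.uniform(-90, 90, n),   # ecl_lat (deg)
    rng.uniform(0, 1, n),      # albedo_index
])
# Toy "truth": a zodiacal-like term plus Earthshine growing near the limb.
y = (22.0 + 0.01 * np.abs(X[:, 2]) - 0.005 * X[:, 1]
     + 0.5 * X[:, 3] + 0.05 * rng.normal(size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
frac_err = np.abs(model.predict(X_te) - y_te) / y_te
print(f"median fractional error: {100 * np.median(frac_err):.2f} %")
```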
Tails employs a custom EfficientDet-based architecture and is capable of finding comets in single images in near real time, rather than requiring multiple epochs as with traditional methods. The system achieves state-of-the-art performance with 99% recall, a 0.01% false positive rate, and a 1-2 pixel root mean square error in the predicted position. We report the initial results of the Tails efficiency evaluation in a production setting on the data of the ZTF Twilight survey, including the first AI-assisted discovery of a comet (C/2020 T2) and the recovery of a comet (P/2016 J3 = P/2021 A3).", "category": "astro-ph_IM" }, { "text": "Extreme-value modelling for the significance assessment of periodogram peaks: I propose a new procedure to estimate the False Alarm Probability, the measure of significance for peaks of periodograms. The key element of the new procedure is the use of generalized extreme-value distributions, the limiting distribution for maxima of variables from most continuous distributions. This technique allows reliable extrapolation to the very high probability levels required by multiple hypothesis testing, and enables the derivation of confidence intervals of the estimated levels. The estimates are stable against deviations from distributional assumptions, which are otherwise usually made either about the observations themselves or about the theoretical univariate distribution of the periodogram. The quality and the performance of the procedure are demonstrated on simulations and on two multimode variable stars from Sloan Digital Sky Survey Stripe 82.", "category": "astro-ph_IM" }, { "text": "HCF (HREXI Calibration Facility): Mapping out sub-pixel level responses from high resolution Cadmium Zinc Telluride (CZT) imaging X-ray detectors: The High Resolution Energetic X-Ray Imager (HREXI) CZT detector development program at Harvard is aimed at developing tiled arrays of finely pixelated CZT detectors for use in wide-field coded aperture 3-200 keV X-ray telescopes. A pixel size of $\simeq$ 600 $\mu m$ has already been achieved in the ProtoEXIST2 (P2) detector plane with CZT read out by the NuSTAR ASIC. This paves the way for even smaller 300 $\mu m$ pixels in the next generation HREXI detectors. This article describes a new HREXI calibration facility (HCF) which enables a high resolution sub-pixel level (100 $\mu m$) 2D scan of a 256 $cm^2$ tiled array of 2 $\times$ 2 cm CZT detectors illuminated by a bright X-ray AmpTek Mini-X tube source on timescales of around a day. HCF is a significant improvement over the previous apparatus used for scanning these detectors, which took $\simeq$ 3 weeks to complete a 1D scan of a similar detector plane. Moreover, HCF has the capability to scan a large tiled array of CZT detectors ($32cm \times 32cm$) at 100 $\mu m$ resolution in the 10 - 50 keV energy range, which was not possible previously. This paper describes the design, construction, and implementation of HCF for the calibration of the P2 detector plane.", "category": "astro-ph_IM" }, { "text": "Disentangled Representation Learning for Astronomical Chemical Tagging: Modern astronomical surveys are observing spectral data for millions of stars. These spectra contain chemical information that can be used to trace the Galaxy's formation and chemical enrichment history. However, extracting the information from spectra and making precise and accurate chemical abundance measurements are challenging.
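As a hedged sketch of the extreme-value idea in the false-alarm-probability abstract above (not the author's exact procedure; the white-noise simulation and peak value are illustrative): fit a generalized extreme-value distribution to the maxima of noise-only periodograms, then read the FAP of an observed peak from the fitted survival function.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)
n, n_sim = 512, 2000

# Maxima of noise-only periodograms (classical normalized periodogram
# of white Gaussian noise via the FFT).
maxima = np.empty(n_sim)
for i in range(n_sim):
    x = rng.normal(size=n)
    p = np.abs(np.fft.rfft(x))**2 / (n * x.var())
    maxima[i] = p[1:].max()

# Fit the generalized extreme-value distribution to the simulated maxima.
shape, loc, scale = genextreme.fit(maxima)

# FAP of an observed peak = GEV survival function; this extrapolates to
# very small probabilities far more reliably than counting exceedances.
peak = 15.0
print(f"FAP({peak}) = {genextreme.sf(peak, shape, loc=loc, scale=scale):.2e}")
```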
Here, we present a data-driven method for isolating the chemical factors of variation in stellar spectra from those of other parameters (i.e. \teff, \logg, \feh). This enables us to build a spectral projection for each star with these parameters removed. We do this with no ab initio knowledge of elemental abundances themselves, and hence bypass the uncertainties and systematics associated with modeling that relies on synthetic stellar spectra. To remove known non-chemical factors of variation, we develop and implement a neural network architecture that learns a disentangled spectral representation. We simulate our recovery of chemically identical stars using the disentangled spectra in a synthetic APOGEE-like dataset. We show that this recovery declines as a function of the signal-to-noise ratio, but that our neural network architecture outperforms simpler modeling choices. Our work demonstrates the feasibility of data-driven, abundance-free chemical tagging.", "category": "astro-ph_IM" }, { "text": "Design and modeling of a moderate-resolution astronomic spectrograph with volume-phase holographic gratings: We present an optical design of an astronomical spectrograph based on a cascade of volume-phase holographic gratings. The cascade consists of three gratings. Each of them provides moderately high spectral resolution in a narrow range of 83 nm. Thus the spectrum image consists of three lines covering the region 430-680 nm. Two versions of the scheme are described: a full-scale one with an estimated resolving power of 5300-7900, and a small-sized one, intended for the creation of a lab prototype, which provides a resolving power of 1500-3000. Diffraction efficiency modeling confirms that the system throughput can reach 75 %, while stray light caused by grating crosstalk is negligible. We also propose a design for an image slicer and focal reducer that allows the instrument to be coupled with the 6-m telescope. Finally, we present the concept of the opto-mechanical design.", "category": "astro-ph_IM" }, { "text": "Using Virtual Observatory with Python: querying remote astronomical databases: This tutorial is devoted to extending an existing catalogue with data taken elsewhere, either from the CDS Vizier or the Simbad database. As an example, we used the so-called 'Spectroscopic Survey of Stars in the Solar Neighborhood' (aka S4N; Allende Prieto et al. 2004) in order to retrieve all objects with available data for the fundamental stellar parameters effective temperature, surface gravity and metallicity. Then for each object in this dataset we query the Simbad database to retrieve the projected rotational velocity. This combines Vizier and Simbad queries made using the Python astroquery module. The tutorial covers remote database access, filtering tables with arbitrary criteria, creating and writing your own tables, and the basics of plotting in Python.", "category": "astro-ph_IM" }, { "text": "Synthesizing carbon nanotubes in space: Context. As the 4th most abundant element in the universe, carbon (C) is widespread in the interstellar medium (ISM) in various allotropic forms (e.g., fullerenes have been identified unambiguously in many astronomical environments, the presence of polycyclic aromatic hydrocarbon molecules in space has been commonly admitted, and presolar graphite as well as nanodiamonds have been identified in meteorites).
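In the spirit of the astroquery tutorial summarised above, a hedged minimal example; the VizieR catalogue identifier for S4N, the table column name, and the Simbad votable field are assumptions that should be checked against current astroquery/VizieR conventions.

```python
from astroquery.vizier import Vizier
from astroquery.simbad import Simbad

# Fetch the S4N table from VizieR (catalogue ID assumed from
# Allende Prieto et al. 2004, A&A 420, 183; verify on VizieR).
viz = Vizier(columns=["Name", "Teff", "logg", "[Fe/H]"], row_limit=-1)
tables = viz.get_catalogs("J/A+A/420/183")
s4n = tables[0]

# Ask Simbad for the projected rotational velocity of each star
# ("rot" is the rotational-velocity votable field in classic astroquery).
sim = Simbad()
sim.add_votable_fields("rot")
result = sim.query_objects(list(s4n["Name"]))
print(result.colnames)
```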
As stable allotropes of these species, whether or not carbon nanotubes (CNTs) and their hydrogenated counterparts are also present in the ISM is unknown. Aims. We explore the possible routes for the formation of CNTs in the ISM and calculate their fingerprint vibrational spectral features in the infrared (IR). Methods. We study the hydrogen-abstraction/acetylene-addition (HACA) mechanism and investigate the synthesis of nanotubes using density functional theory (DFT). The IR vibrational spectra of CNTs and hydrogenated nanotubes (HNTs), as well as their cations, have also been obtained with DFT. Results. We find that CNTs could be synthesized in space through a feasible formation pathway. CNTs and cationic CNTs, as well as their hydrogenated counterparts, exhibit intense vibrational transitions in the IR. Their possible presence in the ISM could be investigated by comparing the calculated vibrational spectra with astronomical observations made by the Infrared Space Observatory, Spitzer Space Telescope, and particularly the upcoming James Webb Space Telescope.", "category": "astro-ph_IM" }, { "text": "Statistical framework for estimating GNSS bias: We present a statistical framework for estimating global navigation satellite system (GNSS) non-ionospheric differential time delay bias. The biases are estimated by examining differences of measured line-integrated electron densities (TEC) that are scaled to equivalent vertical integrated densities. The spatio-temporal variability, instrumentation-dependent errors, and errors due to inaccurate ionospheric altitude profile assumptions are modeled as structure functions. These structure functions determine how the TEC differences are weighted in the linear least-squares minimization procedure, which is used to produce the bias estimates. A method for automatic detection and removal of outlier measurements that do not fit into a model of receiver bias is also described. The same statistical framework can be used for a single receiver station, but it also scales to a large global network of receivers. In addition to the Global Positioning System (GPS), the method is also applicable to other dual-frequency GNSS systems, such as GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema). The use of the framework is demonstrated in practice through several examples. A specific implementation of the methods presented here is used to compute GPS receiver biases for measurements in the MIT Haystack Madrigal distributed database system. Results of the new algorithm are compared with the current MIT Haystack Observatory MAPGPS bias determination algorithm. The new method is found to produce estimates of receiver bias that have reduced day-to-day variability and more consistent coincident vertical TEC values.", "category": "astro-ph_IM" }, { "text": "Atmospheric transparency in the optical and near IR range above the Shatdzhatmaz summit: The study of atmospheric extinction based on the MASS data has been carried out using the classical photometric pairs method. The extinction in the V band can be estimated at 0.19 mag. The water vapour content has been derived from GPS measurements. The median value of PWV for clear nights is equal to 7.7 mm.", "category": "astro-ph_IM" }, { "text": "Non-LTE radiation hydrodynamics in PLUTO: Modeling the dynamics of most astrophysical structures requires an adequate description of the radiation-matter interaction.
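A heavily simplified, hedged sketch of the weighted least-squares step in the GNSS-bias abstract above (real implementations map slant to vertical TEC and derive weights from structure functions; the network, weights, and data below are toy stand-ins): differences of vertical-equivalent TEC between co-observing receivers are linear in the receiver biases, so the biases follow from a weighted normal-equation solve with one reference receiver pinned.

```python
import numpy as np

# Toy network: 4 receivers; each row gives (rx i, rx j, observed
# vertical-TEC difference in TECu, weight from a structure function).
obs = [
    (0, 1, -2.1, 1.0),
    (0, 2,  0.9, 0.5),
    (1, 2,  3.1, 1.0),
    (1, 3,  5.0, 0.8),
    (2, 3,  2.0, 1.0),
]

n_rx = 4
A = np.zeros((len(obs), n_rx))
d = np.zeros(len(obs))
w = np.zeros(len(obs))
for k, (i, j, diff, wk) in enumerate(obs):
    A[k, i], A[k, j] = 1.0, -1.0   # each difference is b_i - b_j
    d[k], w[k] = diff, wk

# Differences only constrain bias *differences*: pin receiver 0 to zero.
A = A[:, 1:]
W = np.diag(w)
b = np.linalg.solve(A.T @ W @ A, A.T @ W @ d)
print("estimated biases b1..b3 (b0 = 0):", np.round(b, 2))
```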
Several numerical (magneto)hydrodynamics codes have been upgraded with a radiation module to fulfill this requirement. However, those among them that use either the flux-limited diffusion (FLD) or the M1 radiation moment approach are restricted to local thermodynamic equilibrium (LTE). This assumption may not be valid in some astrophysical cases. We present an upgraded version of the LTE radiation-hydrodynamics module implemented in the PLUTO code, originally developed by Kolb et al. (2013), which we have extended to handle non-LTE regimes. Starting from the general frequency-integrated comoving-frame equations of radiation hydrodynamics (RHD), we have justified all the assumptions made to obtain the non-LTE equations actually implemented in the module, under the FLD approximation. An operator-split method is employed, with two substeps: the hydrodynamic part is solved with an explicit method by the solvers already available in PLUTO, while the non-LTE radiation diffusion and energy exchange part is solved with an implicit method. The module is implemented in the PLUTO environment. It uses databases of radiative quantities that can be provided independently by the user: the radiative power loss, and the Planck and Rosseland mean opacities. Our implementation has been validated through different tests, in particular radiative shock tests. The agreement with the semi-analytical solutions (when available) is good, with a maximum error of 7%. Moreover, we have proved that the non-LTE approach is of paramount importance for properly modeling accretion shock structures. Our radiation FLD module represents a step toward general non-LTE RHD modeling. The module is available, under request, for the community.", "category": "astro-ph_IM" }, { "text": "Timing Calibration of the NuSTAR X-ray Telescope: The Nuclear Spectroscopic Telescope Array (NuSTAR) mission is the first focusing X-ray telescope in the hard X-ray (3-79 keV) band. Among the phenomena that can be studied in this energy band, some require high time resolution and stability: rotation-powered and accreting millisecond pulsars, fast variability from black holes and neutron stars, X-ray bursts, and more. Moreover, a good alignment of the timestamps of X-ray photons to UTC is key for multi-instrument studies of fast astrophysical processes. In this paper, we describe the timing calibration of the NuSTAR mission. In particular, we present a method to correct the temperature-dependent frequency response of the on-board temperature-compensated crystal oscillator. Together with measurements of the spacecraft clock offsets obtained during downlink passes, this allows a precise characterization of the behavior of the oscillator. The calibrated NuSTAR event timestamps for a typical observation are shown to be accurate to a precision of ~65 microseconds.", "category": "astro-ph_IM" }, { "text": "Solar diameter with 2012 Venus transit: The role of Venus and Mercury transits is crucial for knowing the past history of the solar diameter. Through the W parameter, the logarithmic derivative of the radius with respect to the luminosity, the past values of the solar luminosity can be recovered. The black drop phenomenon affects the evaluation of the instants of internal and external contacts between the planetary disk and the solar limb. By comparing these observed instants with the ephemerides, the value of the solar diameter is recovered.
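A hedged toy version of the clock correction described in the NuSTAR timing abstract above (the real calibration uses a measured temperature-to-frequency model combined with ground-station clock offsets; the quadratic model and telemetry below are made up): given the oscillator's fractional frequency error as a function of temperature, the timestamp correction is the accumulated integral of that error over on-board time.

```python
import numpy as np

def clock_correction(t_onboard, temperature, freq_error_model):
    """Accumulated timestamp correction (s) from a temperature-dependent
    fractional frequency error df/f(T), integrated over on-board time."""
    dff = freq_error_model(temperature)   # fractional error at each sample
    dt = np.diff(t_onboard)
    # Trapezoidal accumulation of the drift between telemetry samples.
    drift = np.concatenate([[0.0], np.cumsum(0.5 * (dff[1:] + dff[:-1]) * dt)])
    return drift

# Hypothetical quadratic df/f(T) around a turnover temperature of 15 C.
model = lambda T: -3e-8 * (T - 15.0)**2 / 100.0
t = np.linspace(0.0, 86400.0, 1441)                   # one day, 1-min telemetry
temp = 15.0 + 8.0 * np.sin(2 * np.pi * t / 5820.0)    # orbital thermal cycle
corr = clock_correction(t, temp, model)
print(f"end-of-day correction: {corr[-1] * 1e6:.1f} microseconds")
```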
The black drop and seeing effects are\novercome with two fitting circles, to Venus and to the Sun, drawn in the\nundistorted part of the image. Corrections of the ephemerides due to\natmospheric refraction will also be taken into account. The forthcoming transit\nof Venus will allow an accuracy on the solar diameter of better than 0.01\narcsec, with good images of the ingress and of the egress taken each second.\nChinese solar observatories are in optimal conditions to obtain valuable\ndata for the measurement of the solar diameter with the Venus transit of 5/6\nJune 2012 with an unprecedented accuracy, and with absolute calibration given\nby the ephemerides.", "category": "astro-ph_IM" }, { "text": "Simons Observatory Large Aperture Telescope Receiver Design Overview: The Simons Observatory (SO) will make precision temperature and polarization\nmeasurements of the cosmic microwave background (CMB) using a series of\ntelescopes which will cover angular scales between one arcminute and tens of\ndegrees and sample frequencies between 27 and 270 GHz. Here we present the\ncurrent design of the large aperture telescope receiver (LATR), a 2.4 m\ndiameter cryostat that will be mounted on the SO 6 m telescope and will be the\nlargest CMB receiver to date. The cryostat size was chosen to take advantage of\nthe large focal plane area having high Strehl ratios, which is inherent to the\nCross-Dragone telescope design. The LATR will be able to accommodate thirteen\noptics tubes, each having a 36 cm diameter aperture and illuminating several\nthousand transition-edge sensor (TES) bolometers. This set of equipment will\nprovide an opportunity to make measurements with unparalleled sensitivity.\nHowever, the size and complexity of the LATR also pose numerous technical\nchallenges. In the following paper, we present the design of the LATR and\ninclude how we address these challenges. The solutions we develop in the\nprocess of designing the LATR will be informative for the general CMB\ncommunity, and for future CMB experiments like CMB-S4.", "category": "astro-ph_IM" }, { "text": "Digital Signal Processing in Cosmology: We address the problem of discretizing continuous cosmological signals such\nas a galaxy distribution for further processing with Fast Fourier techniques.\nDiscretizing, in particular representing continuous signals by discrete sets of\nsample points, introduces an enormous loss of information, which has to be\nunderstood in detail if one wants to make inferences from the discretely\nsampled signal about actual physical quantities. We therefore review the\nmathematics of discretizing signals and the application of Fast Fourier\nTransforms to demonstrate how the interpretation of the processed data can be\naffected by these procedures. It is also a well-known fact that any practical\nsampling method introduces sampling artifacts and false information in the form\nof aliasing. These sampling artifacts, especially aliasing, make further\nprocessing of the sampled signal difficult. For this reason we introduce a fast\nand efficient supersampling method, frequently applied in 3D computer graphics,\nto cosmological applications such as matter power spectrum estimation. 
This\nmethod consists of two filtering steps which allow for a much better\napproximation of the ideal sampling procedure, while at the same time being\ncomputationally very efficient. Thus, it provides discretely sampled signals\nwhich are greatly cleaned of aliasing contributions.", "category": "astro-ph_IM" }, { "text": "A Frequency Selective Surface based focal plane receiver for the OLIMPO\n balloon-borne telescope: We describe here a focal plane array of Cold-Electron Bolometer (CEB)\ndetectors integrated in a Frequency Selective Surface (FSS) for the 350 GHz\ndetection band of the OLIMPO balloon-borne telescope. In our architecture, the\ntwo-terminal CEB has been integrated in the periodic unit cell of the FSS\nstructure, is impedance-matched to the embedding impedance seen by it, and\nprovides a resonant interaction with the incident sub-mm radiation. The\ndetector array has been designed to operate in background-noise-limited\nconditions for incident powers of 20 pW to 80 pW, making it possible to use the\nsame pixel in both photometric and spectrometric configurations. We present\nhigh-frequency and dc simulations of our system, together with fabrication\ndetails. The frequency response of the FSS array, and optical response\nmeasurements with a hot/cold load in front of the optical window and with a\nvariable-temperature black-body source inside the cryostat, are presented. A\ncomparison of the optical response to the CEB model and estimations of the\nNoise Equivalent Power (NEP) are also presented.", "category": "astro-ph_IM" }, { "text": "Efficient Catalog Matching with Dropout Detection: Source catalogs are not the only products extracted from astronomical\nobservations. Their sky coverage is always carefully recorded and used in\nstatistical analyses, such as correlation and luminosity function studies. Here\nwe present a novel method for catalog matching, which inherently builds on the\ncoverage information for better performance and completeness. A modified\nversion of the Zones Algorithm is introduced for matching partially overlapping\nobservations, where irrelevant parts of the data are excluded up front for\nefficiency. Our design enables searches to focus on specific areas on the sky\nto further speed up the process. Another important advantage of the new method\nover traditional techniques is its ability to quickly detect dropouts, i.e.,\nthe missing components that are in the observed regions of the celestial sphere\nbut did not reach the detection limit in some observations. These often provide\ninvaluable insight into the spectral energy distribution of the matched sources\nbut are rarely available in traditional associations.", "category": "astro-ph_IM" }, { "text": "Reproducibility of the First Image of a Black Hole in the Galaxy M87\n from the Event Horizon Telescope (EHT) Collaboration: This paper presents an interdisciplinary effort aiming to develop and share\nsustainable knowledge necessary to analyze, understand, and use published\nscientific results to advance reproducibility in multi-messenger astrophysics.\nSpecifically, we target the breakthrough work associated with the generation of\nthe first image of a black hole in the galaxy M87. The image was computed by\nthe Event Horizon Telescope Collaboration. Based on the artifacts made\navailable by EHT, we deliver documentation, code, and a computational\nenvironment to reproduce the first image of a black hole. 
Our deliverables support new\ndiscovery in multi-messenger astrophysics by providing all the necessary tools\nfor generalizing methods and findings from the EHT use case. Challenges\nencountered while reproducing the EHT results are reported. The result\nof our effort is an open-source, containerized software package that enables\nthe public to reproduce the first image of a black hole in the galaxy M87.", "category": "astro-ph_IM" }, { "text": "Rapid search for massive black hole binary coalescences using deep\n learning: The coalescences of massive black hole binaries are one of the main targets\nof space-based gravitational wave observatories. Such gravitational wave\nsources are expected to be accompanied by electromagnetic emissions.\nLow-latency detection of massive black hole mergers provides a starting point\nfor a global-fit analysis to explore the large parameter space of signals\nsimultaneously present in the data, but at great computational cost. To\nalleviate this issue, we present a deep learning method for rapidly searching\nfor signals of massive black hole binaries in gravitational wave data. Our\nmodel is capable of processing a year of data, simulated from the LISA data\nchallenge, in only several seconds, while identifying all coalescences of\nmassive black hole binaries with no false alarms. We further demonstrate that\nthe model shows robust resistance to a wide range of generalization cases,\nincluding various waveform families and updated instrumental configurations.\nThis method offers an effective approach that leverages advances in artificial\nintelligence to open a new pathway for space-based gravitational wave\nobservations.", "category": "astro-ph_IM" }, { "text": "Experimental study of clusters in dense granular gas and implications\n for the particle stopping time in protoplanetary disks: In protoplanetary disks, zones of dense particle configuration promote planet\nformation. Solid particles in dense clouds alter their motion through\ncollective effects and back-reaction to the gas. The effect of particle-gas\nfeedback with ambient solid-to-gas ratios $\\epsilon > 1$ on the stopping time\nof particles is investigated. In experiments on board the International Space\nStation we studied the evolution of a dense granular gas while interacting with\nair. We observed diffusion of clusters released at the onset of an experiment\nbut also the formation of new dynamical clusters. The solid-to-gas mass ratio\noutside the cluster varied in the range of about $\\epsilon_{\\rm avg} \\sim 2.5 -\n60$. We find that the concept of gas drag in a viscous medium still holds, even\nif the medium is strongly dominated in mass by solids. However, a collective\nfactor has to be used, depending on $\\epsilon_{\\rm avg}$, i.e. the drag force\nis reduced by a factor of 18 at the highest mass ratios. Therefore, flocks of\ngrains in protoplanetary disks move faster and collide faster than their\nconstituents might suggest.", "category": "astro-ph_IM" }, { "text": "Performance update of an event-type based analysis for the Cherenkov\n Telescope Array: The Cherenkov Telescope Array (CTA) will be the next-generation observatory\nin the field of very-high-energy (20 GeV to 300 TeV) gamma-ray astroparticle\nphysics. The traditional approach to data analysis in this field is to apply\nquality cuts, optimized using Monte Carlo simulations, on the data acquired to\nmaximize sensitivity. 
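The cut-optimization step mentioned above can be pictured as a one-dimensional scan: pick the cut on a gamma/hadron separation score that maximizes a significance proxy on the simulated event samples. A minimal sketch on synthetic scores follows; the score distributions and the S/sqrt(S+B) figure of merit are illustrative assumptions, not CTA's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic separation scores from a Monte Carlo (invented distributions):
gammas = rng.normal(0.7, 0.15, 5_000)     # signal-like events score high
hadrons = rng.normal(0.3, 0.20, 50_000)   # background events score low

cuts = np.linspace(0.0, 1.0, 201)
significance = []
for c in cuts:
    s = np.sum(gammas > c)                # surviving signal
    b = np.sum(hadrons > c)               # surviving background
    significance.append(s / np.sqrt(s + b) if s + b > 0 else 0.0)

best = cuts[int(np.argmax(significance))]
print(f"cut maximizing S/sqrt(S+B): score > {best:.2f}")
```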
Subsequent steps of the analysis typically use the\nsurviving events to calculate one set of instrument response functions (IRFs)\nto physically interpret the results. However, an alternative approach is the\nuse of event types, as implemented in experiments such as the Fermi-LAT. This\napproach divides events into sub-samples based on their reconstruction quality,\nand a set of IRFs is calculated for each sub-sample. The sub-samples are then\ncombined in a joint analysis, treating them as independent observations. In\nprevious works we demonstrated that event types, classified using Machine\nLearning methods according to their expected angular reconstruction quality,\nhave the potential to significantly improve the CTA angular and energy\nresolution of a point-like source analysis. We have now validated the\nproduction of event-type-wise full-enclosure IRFs, ready to be used with\nscience tools (such as Gammapy and ctools). We will report on the impact of\nusing such an event-type classification on CTA high-level performance, compared\nto the traditional procedure.", "category": "astro-ph_IM" }, { "text": "Polarized wavelets and curvelets on the sphere: The statistics of the temperature anisotropies in the primordial cosmic\nmicrowave background radiation field provide a wealth of information for\ncosmology and for estimating cosmological parameters. An even more acute\ninference should stem from the study of maps of the polarization state of the\nCMB radiation. Measuring the extremely weak CMB polarization signal requires\nvery sensitive instruments. The full-sky maps of both temperature and\npolarization anisotropies of the CMB to be delivered by the upcoming Planck\nSurveyor satellite experiment are hence being awaited with excitement.\nMultiscale methods, such as isotropic wavelets, steerable wavelets, or\ncurvelets, have been proposed in the past to analyze the CMB temperature map.\nIn this paper, we contribute to enlarging the set of available transforms for\npolarized data on the sphere. We describe a set of new multiscale\ndecompositions for polarized data on the sphere, including decimated and\nundecimated Q-U or E-B wavelet transforms and Q-U or E-B curvelets. The\nproposed transforms are invertible and so allow for applications in data\nrestoration and denoising.", "category": "astro-ph_IM" }, { "text": "HEIDI: An Automated Process for the Identification and Extraction of\n Photometric Light Curves from Astronomical Images: The production of photometric light curves from astronomical images is a very\ntime-consuming task. Larger data sets improve the resolution of the light\ncurve; however, the time requirement scales with data volume. The data analysis\nis often made more difficult by factors such as a lack of suitable calibration\nsources and the need to correct for variations in observing conditions from one\nimage to another. Often these variations are unpredictable and corrections are\nbased on experience and intuition.\n The High Efficiency Image Detection & Identification (HEIDI) pipeline\nsoftware rapidly processes sets of astronomical images. HEIDI automatically\nselects multiple sources for calibrating the images using an algorithm that\nprovides a reliable means of correcting for variations between images in a time\nseries. The algorithm takes into account that some sources may intrinsically\nvary on short time scales and excludes these from being used as calibration\nsources. 
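The calibration-source selection just described can be sketched simply: normalize each star's lightcurve, rank stars by relative scatter, and keep only the quietest ones as an ensemble calibrator. This is a hedged toy on an invented flux matrix, not HEIDI's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical fluxes: rows are images in a night, columns are stars.
flux = rng.normal(1000.0, 5.0, size=(120, 40))
flux[:, 3] *= 1.0 + 0.2 * np.sin(np.linspace(0.0, 6.0, 120))  # intrinsic variable

norm = flux / np.median(flux, axis=0)        # per-star normalization
scatter = np.std(norm, axis=0)               # relative variability per star
calibrators = np.argsort(scatter)[:15]       # keep the 15 quietest stars

# Per-image correction factor: median behavior of the calibration ensemble.
correction = np.median(norm[:, calibrators], axis=1)
calibrated = flux / correction[:, None]
```

The variable star injected in column 3 acquires a large scatter and is automatically excluded from the calibrator set, which is the behavior the selection step is designed to guarantee.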
HEIDI processes a set of images from an entire night of observation,\nanalyses the variations in brightness of the target objects and produces a\nlight curve, all in a matter of minutes.\n HEIDI has been tested on three different time series of asteroid 939 Isberga\nand has produced consistent high-quality photometric light curves in a fraction\nof the usual processing time. The software can also be used for other transient\nsources, e.g. gamma-ray burst optical afterglows.\n HEIDI is implemented in Python and processes time-series astronomical images\nwith minimal user interaction. HEIDI processes up to 1000 images per run in the\nstandard configuration. This limit can be easily increased. HEIDI is not\ntelescope-dependent and will process images even in the case that no telescope\nspecifications are provided. HEIDI has been tested on various Linux\ndistributions. HEIDI is very portable and extremely versatile with minimal\nhardware requirements.", "category": "astro-ph_IM" }, { "text": "Exploring a search for long-duration transient gravitational waves\n associated with magnetar bursts: Soft gamma repeaters and anomalous X-ray pulsars are thought to be magnetars,\nneutron stars with strong magnetic fields of order $\\mathord{\\sim}\n10^{13}$--$10^{15} \\, \\mathrm{gauss}$. These objects emit intermittent bursts\nof hard X-rays and soft gamma rays. Quasiperiodic oscillations in the X-ray\ntails of giant flares imply the existence of neutron star oscillation modes\nwhich could emit gravitational waves powered by the magnetar's magnetic energy\nreservoir. We describe a method to search for transient gravitational-wave\nsignals associated with magnetar bursts with durations of tens to thousands of\nseconds. The sensitivity of this method is estimated by adding simulated\nwaveforms to data from the sixth science run of the Laser Interferometer\nGravitational-wave Observatory (LIGO). We find a search sensitivity in terms of\nthe root sum square strain amplitude of $h_{\\mathrm{rss}} = 1.3 \\times 10^{-21}\n\\, \\mathrm{Hz}^{-1/2}$ for a half sine-Gaussian waveform with a central\nfrequency $f_0 = 150 \\, \\mathrm{Hz}$ and a characteristic time $\\tau = 400 \\,\n\\mathrm{s}$. This corresponds to a gravitational wave energy of\n$E_{\\mathrm{GW}} = 4.3 \\times 10^{46} \\, \\mathrm{erg}$, the same order of\nmagnitude as the 2004 giant flare which had an estimated electromagnetic energy\nof $E_{\\mathrm{EM}} = \\mathord{\\sim} 1.7 \\times 10^{46} (d/ 8.7 \\,\n\\mathrm{kpc})^2 \\, \\mathrm{erg}$, where $d$ is the distance to SGR 1806-20. We\npresent an extrapolation of these results to Advanced LIGO, estimating a\nsensitivity to a gravitational wave energy of $E_{\\mathrm{GW}} = 3.2 \\times\n10^{43} \\, \\mathrm{erg}$ for a magnetar at a distance of $1.6 \\, \\mathrm{kpc}$.\nThese results suggest that this search method can probe significantly below the\nenergy budgets for magnetar burst emission mechanisms such as crust cracking\nand hydrodynamic deformation.", "category": "astro-ph_IM" }, { "text": "The EPOCH Project: I. Periodic variable stars in the EROS-2 LMC database: The EPOCH (EROS-2 periodic variable star classification using machine\nlearning) project aims to detect periodic variable stars in the EROS-2 light\ncurve database. In this paper, we present the first result of the\nclassification of periodic variable stars in the EROS-2 LMC database. 
To\nclassify these variables, we first built a training set by compiling known\nvariables in the Large Magellanic Cloud area from the OGLE and MACHO surveys.\nWe crossmatched these variables with the EROS-2 sources and extracted 22\nvariability features from 28 392 light curves of the corresponding EROS-2\nsources. We then used the random forest method to classify the EROS-2 sources\nin the training set. We designed the model to separate not only $\\delta$ Scuti\nstars, RR Lyraes, Cepheids, eclipsing binaries, and long-period variables, the\nsuperclasses, but also their subclasses, such as RRab, RRc, RRd, and RRe for RR\nLyraes, and similarly for the other variable types. The model trained using\nonly the superclasses shows 99% recall and precision, while the model trained\non all subclasses shows 87% recall and precision. We applied the trained model\nto the entire EROS-2 LMC database, which contains about 29 million sources, and\nfound 117 234 periodic variable candidates. Out of these 117 234 periodic\nvariables, 55 285 have not been discovered by either OGLE or MACHO variability\nstudies. This set comprises 1 906 $\\delta$ Scuti stars, 6 607 RR Lyraes, 638\nCepheids, 178 Type II Cepheids, 34 562 eclipsing binaries, and 11 394\nlong-period variables. A catalog of these EROS-2 LMC periodic variable stars\nwill be available online at http://stardb.yonsei.ac.kr and at the CDS website\n(http://vizier.u-strasbg.fr/viz-bin/VizieR).", "category": "astro-ph_IM" }, { "text": "Towards time symmetric N-body integration: Computational efficiency demands discretised, hierarchically organised, and\nindividually adaptive time-step sizes (known as the block-step scheme) for the\ntime integration of N-body models. However, most existing N-body codes adapt\nindividual step sizes in a way that violates time symmetry (and symplecticity),\nresulting in artificial secular dissipation (and often secular growth of energy\nerrors). Using single-orbit integrations, I investigate various possibilities\nto reduce or eliminate irreversibility from the time stepping scheme.\nSignificant improvements over the standard approach are possible at little\nextra effort. However, in order to reduce irreversible step-size changes to\nnegligible amounts, as is suitable for long-term integrations of planetary\nsystems, more computational effort is needed, while exact time reversibility\nappears elusive for discretised individual step sizes.", "category": "astro-ph_IM" }, { "text": "Search for Continuous Gravitational Wave Signals in Pulsar Timing\n Residuals: A New Scalable Approach with Diffusive Nested Sampling: Detecting continuous nanohertz gravitational waves (GWs) generated by\nindividual close binaries of supermassive black holes (CB-SMBHs) is one of the\nprimary objectives of pulsar timing arrays (PTAs). The detection sensitivity is\nslated to increase significantly as the number of well-timed millisecond\npulsars increases by more than an order of magnitude with the advent of\nnext-generation radio telescopes. Currently, the Bayesian analysis pipeline\nusing parallel tempering Markov chain Monte Carlo has been applied in multiple\nstudies for CB-SMBH searches, but it may be challenged by the high\ndimensionality of the parameter space for future large-scale PTAs. 
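Returning briefly to the EPOCH pipeline described above: its core classification step is a standard supervised random forest over per-source variability features. A minimal sketch with scikit-learn follows; the feature matrix and labels are synthetic stand-ins for the 22 extracted features and the superclass labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic stand-in: 2000 sources x 22 variability features, 5 superclasses.
X = rng.normal(size=(2000, 22))
y = rng.integers(0, 5, size=2000)
X += y[:, None] * 0.5      # give the classes some artificial separation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```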
One solution\nis to reduce the dimensionality by maximizing or marginalizing over\nuninformative parameters semi-analytically, but it is not clear whether this\napproach can be extended to more complex signal models without making overly\nsimplified assumptions. Recently, the method of diffusive nested (DNest)\nsampling has shown the capability of coping with high dimensionality and\nmultimodality effectively in Bayesian analysis. In this paper, we apply DNest\nto search for continuous GWs in simulated pulsar timing residuals and find that\nit performs well in terms of accuracy, robustness, and efficiency for a PTA\nincluding $\\mathcal{O}(10^2)$ pulsars. DNest also allows an elegant\nsimultaneous search for multiple sources, which demonstrates its scalability\nand general applicability. Our results show that it is convenient and highly\nbeneficial to include DNest in current toolboxes of PTA analysis.", "category": "astro-ph_IM" }, { "text": "The POLARBEAR-2 and Simons Array Focal Plane Fabrication Status: We present the status of POLARBEAR-2 A (PB2-A) focal plane fabrication.\nPB2-A is the first of three telescopes in the Simons Array (SA), which is an\narray of three cosmic microwave background (CMB) polarization-sensitive\ntelescopes located at the POLARBEAR (PB) site in Northern Chile. As the\nsuccessor to the PB experiment, each telescope and receiver combination is\nnamed PB2-A, PB2-B, and PB2-C. PB2-A and -B will have nearly identical\nreceivers operating at 90 and 150 GHz while PB2-C will house a receiver\noperating at 220 and 270 GHz. Each receiver contains a focal plane consisting\nof seven close-hex-packed, lenslet-coupled sinuous-antenna transition-edge\nsensor bolometer arrays. Each array contains 271 dichroic optical pixels, each\nof which has four TES bolometers, for a total of 7588 detectors per receiver.\nWe have produced a set of two types of candidate arrays for PB2-A. The first we\ncall Version 11 (V11); it uses silicon oxide (SiOx) for the transmission lines\nand a cross-over process for orthogonal polarizations. The second we call\nVersion 13 (V13); it uses silicon nitride (SiNx) for the transmission lines and\na cross-under process for orthogonal polarizations. We have produced enough of\neach type of array to fully populate the focal plane of the PB2-A receiver. The\naverage wirebond yield for V11 and V13 arrays is 93.2% and 95.6% respectively.\nThe V11 arrays had a superconducting transition temperature (Tc) of 452 +/- 15\nmK, a normal resistance (Rn) of 1.25 +/- 0.20 Ohms, and saturation powers of\n5.2 +/- 1.0 pW and 13 +/- 1.2 pW for the 90 and 150 GHz bands respectively. The\nV13 arrays had a superconducting transition temperature (Tc) of 456 +/- 6 mK, a\nnormal resistance (Rn) of 1.1 +/- 0.2 Ohms, and saturation powers of 10.8 +/-\n1.8 pW and 22.9 +/- 2.6 pW for the 90 and 150 GHz bands respectively.", "category": "astro-ph_IM" }, { "text": "A Method To Characterize the Wide-Angle Point Spread Function of\n Astronomical Images: Uncertainty in the wide-angle Point Spread Function (PSF) at large angles\n(tens of arcseconds and beyond) is one of the dominant sources of error in a\nnumber of important quantities in observational astronomy. Examples include the\nstellar mass and shape of galactic halos and the maximum extent of starlight in\nthe disks of nearby galaxies. However, modeling the wide-angle PSF has long\nbeen a challenge in astronomical imaging. In this paper, we present a\nself-consistent method to model the wide-angle PSF in images. 
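Before the method's details (described next), a toy version of its basic ingredient: at large radii the scattered-light wing of a PSF is roughly a power law, whose slope can be fitted to a stacked stellar radial profile. A hedged sketch with SciPy on synthetic data; the actual method fits many stars and a background simultaneously in a Bayesian, pixel-level framework.

```python
import numpy as np
from scipy.optimize import curve_fit

def psf_wing(r, amplitude, slope):
    """Power-law model for the scattered-light wing of the PSF."""
    return amplitude * r**(-slope)

# Synthetic stacked radial profile of bright stars (radii in arcsec).
rng = np.random.default_rng(7)
r = np.linspace(30.0, 1200.0, 60)
profile = psf_wing(r, 5e4, 2.6) * rng.normal(1.0, 0.05, r.size)

popt, pcov = curve_fit(psf_wing, r, profile, p0=(1e4, 2.0))
print(f"fitted wing slope: {popt[1]:.2f} +/- {np.sqrt(pcov[1, 1]):.2f}")
```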
Scattered light\nfrom multiple bright stars is fitted simultaneously with a background model to\ncharacterize the extended wing of the PSF using a Bayesian framework operating\nat the pixel level. The method is demonstrated using our software\nelderflower and is applied to data from the Dragonfly Telephoto Array to model\nits PSF out to 20-25 arcminutes. We compare the wide-angle PSF of Dragonfly to\nthat of a number of other telescopes, including the SDSS PSF, and show that on\nscales of arcminutes the scattered light in the Dragonfly PSF is markedly lower\nthan that of other wide-field imaging telescopes. The energy in the wings of\nthe Dragonfly point-spread function is sufficiently low that optical\ncleanliness plays an important role in defining the PSF. This component of the\nPSF can be modelled accurately, highlighting the power of our self-contained\napproach.", "category": "astro-ph_IM" }, { "text": "Systematic Serendipity: A Test of Unsupervised Machine Learning as a\n Method for Anomaly Detection: Advances in astronomy are often driven by serendipitous discoveries. As\nsurvey astronomy continues to grow, the size and complexity of astronomical\ndatabases will increase, and the ability of astronomers to manually scour data\nand make such discoveries will decrease. In this work, we introduce a machine\nlearning-based method to identify anomalies in large datasets to facilitate\nsuch discoveries, and apply this method to long cadence lightcurves from NASA's\nKepler Mission. Our method clusters data based on density, identifying\nanomalies as data that lie outside of dense regions. This work serves as a\nproof-of-concept case study and we test our method on four quarters of the\nKepler long cadence lightcurves. We use Kepler's most notorious anomaly,\nBoyajian's Star (KIC 8462852), as a rare `ground truth' for testing outlier\nidentification to verify that objects of genuine scientific interest are\nincluded among the identified anomalies. We evaluate the method's ability to\nidentify known anomalies by identifying unusual behavior in Boyajian's Star, we\nreport the full list of identified anomalies for these quarters, and present a\nsample subset of identified outliers that includes unusual phenomena, objects\nthat are rare in the Kepler field, and data artifacts. By identifying <4% of\neach quarter as outlying data, we demonstrate that this anomaly detection\nmethod enables a more targeted approach in searching for rare and novel\nphenomena.", "category": "astro-ph_IM" }, { "text": "The Radio Detector of the Pierre Auger Observatory -- status and\n expected performance: As part of the ongoing AugerPrime upgrade of the Pierre Auger Observatory, we\nare deploying short aperiodic loaded loop antennas measuring radio signals from\nextensive air showers in the 30-80 MHz band on each of the 1,660 surface\ndetector stations. This new Radio Detector of the Observatory allows us to\nmeasure the energy in the electromagnetic cascade of inclined air showers with\nzenith angles larger than $\\sim 65^\\circ$. The water-Cherenkov detectors, in\nturn, perform a virtually pure measurement of the muon component of inclined\nair showers. The combination of both thus extends the mass-composition\nsensitivity of the upgraded Observatory to high zenith angles and therefore\nenlarges the sky coverage of mass-sensitive measurements at the highest\nenergies while at the same time allowing us to cross-check the performance of\nthe established detectors with an additional measurement technique. 
In this\ncontribution, we outline the concept and design of the Radio Detector, report\non its current status and initial results from the first deployed stations, and\nillustrate its expected performance with a detailed, end-to-end simulation\nstudy.", "category": "astro-ph_IM" }, { "text": "Radio Weak Lensing Shear Measurement in the Visibility Domain - II.\n Source Extraction: This paper extends the method introduced in Rivi et al. (2016b) to measure\ngalaxy ellipticities in the visibility domain for radio weak lensing surveys.\nIn that paper we focused on the development and testing of the method for the\nsimple case of individual galaxies located at the phase centre, and proposed to\nextend it to the realistic case of many sources in the field of view by\nisolating the visibilities of each source with a faceting technique. In this\nsecond paper we present a detailed algorithm for source extraction in the\nvisibility domain and show its effectiveness as a function of the source number\ndensity by running simulations of SKA1-MID observations in the band 950-1150\nMHz and comparing original and measured values of galaxies' ellipticities.\nShear measurements from a realistic population of 10^4 galaxies randomly\nlocated in a field of view of 1 deg^2 (i.e. the source density expected for the\ncurrent radio weak lensing survey proposal with SKA1) are also performed. At\nSNR >= 10, the multiplicative bias is only a factor of 1.5 worse than that\nfound when analysing individual sources, and is still comparable to the bias\nvalues reported for similar measurement methods at optical wavelengths. The\nadditive bias is unchanged from the case of individual sources, but is\nsignificantly larger than typically found in optical surveys. This bias depends\non the shape of the uv coverage and we suggest that a uv-plane weighting scheme\nproducing a more isotropic shape could reduce and control the additive bias.", "category": "astro-ph_IM" }, { "text": "The ARCADE 2 Instrument: The second generation Absolute Radiometer for Cosmology, Astrophysics, and\nDiffuse Emission (ARCADE 2) instrument is a balloon-borne experiment to measure\nthe radiometric temperature of the cosmic microwave background and Galactic and\nextra-Galactic emission at six frequencies from 3 to 90 GHz. ARCADE 2 utilizes\na double-nulled design where emission from the sky is compared to that from an\nexternal cryogenic full-aperture blackbody calibrator by cryogenic switching\nradiometers containing internal blackbody reference loads. 
In order to further\nminimize sources of systematic error, ARCADE 2 features a cold, fully open\naperture with all radiometrically active components maintained at near 2.7 K\nwithout windows or other warm objects, achieved through a novel thermal design.\nWe discuss the design and performance of the ARCADE 2 instrument in its 2005\nand 2006 flights.", "category": "astro-ph_IM" }, { "text": "Science Platforms for Heliophysics Data Analysis: We recommend that NASA maintain and fund science platforms that enable\ninteractive and scalable data analysis in order to maximize the scientific\nreturn of data collected from space-based instruments.", "category": "astro-ph_IM" }, { "text": "MuSCAT2: four-color Simultaneous Camera for the 1.52-m Telescopio Carlos\n Sánchez: We report the development of a 4-color simultaneous camera for the 1.52 m\nTelescopio Carlos Sánchez (TCS) in the Teide Observatory, Canaries, Spain.\nThe new instrument, named MuSCAT2, is capable of 4-color simultaneous\nimaging in the $g$ (400--550 nm), $r$ (550--700 nm), $i$ (700--820 nm), and\n$z_s$ (820--920 nm) bands. MuSCAT2 is equipped with four 1024$\\times$1024 pixel\nCCDs, having a field of view of 7.4$\\times$7.4 arcmin$^2$ with a pixel scale of\n0.44 arcsec per pixel. The principal purpose of MuSCAT2 is to perform\nhigh-precision multi-color exoplanet transit photometry. We have demonstrated\nphotometric precisions of 0.057%, 0.050%, 0.060%, and 0.076% as\nroot-mean-square residuals of 60 s binning in the $g$, $r$, $i$ and $z_s$\nbands, respectively, for the G0 V star WASP-12 ($V=11.57\\pm0.16$). MuSCAT2 has\nbeen in science operation since January 2018, with over 250 telescope nights\nper year. MuSCAT2 is expected to become a reference tool for exoplanet transit\nobservations, and will substantially contribute to the follow-up of the TESS\nand PLATO space missions.", "category": "astro-ph_IM" }, { "text": "Rejection criteria based on outliers in the KiDS photometric redshifts\n and PDF distributions derived by machine learning: The Probability Density Function (PDF) provides an estimate of the\nphotometric redshift (zphot) prediction error. It is crucial for current and\nfuture sky surveys, characterized by strict requirements on the zphot\nprecision, reliability and completeness. The present work stands on the\nassumption that properly defined rejection criteria, capable of identifying and\nrejecting potential outliers, can increase the precision of zphot estimates and\nof their cumulative PDF, without sacrificing much in terms of completeness of\nthe sample. We provide a way to assess rejection through proper cuts on the\nshape descriptors of a PDF, such as the width and height of the PDF's maximum\npeak. In this work we applied these rejection criteria to galaxies with\nphotometry extracted from the Kilo Degree Survey (KiDS) ESO Data Release 4,\nproving that such an approach can lead to significant improvements in the zphot\nquality: e.g., for the clipped sample showing the best trade-off between\nprecision and completeness, we achieve a reduction in the outlier fraction of\n$\\simeq 75\\%$ and an improvement of $\\simeq 6\\%$ in NMAD, with respect to the\noriginal data set, while preserving $\\simeq 93\\%$ of its content.", "category": "astro-ph_IM" }, { "text": "The Effects of Improper Lighting on Professional Astronomical\n Observations: Europe and a number of countries in the world are investing significant\namounts of public money to operate and maintain large, ground-based\nastronomical facilities. 
Even larger projects are under development to observe\nthe faintest and most remote astrophysical sources in the universe. As of\ntoday, there are very few sites on the planet that satisfy all the demanding\ncriteria for such sensitive and expensive equipment, including a low level of\nlight pollution. Because of the uncontrolled growth of improper illumination,\neven these protected and usually remote sites are at risk. Although the reasons\nfor intelligent lighting reside in energy saving and environmental effects, the\nimpact on scientific research cannot be neglected or underestimated, because of\nits high cultural value for the progress of all mankind. After setting\nthe stage, in this paper I review the effects of improper lighting on\nprofessional optical and near-UV astronomical data, and discuss possible\nsolutions to both preserve the natural darkness of the night sky and produce\nefficient and cost-effective illumination.", "category": "astro-ph_IM" }, { "text": "The surface detector array of the Telescope Array experiment: The Telescope Array (TA) experiment, located in the western desert of\nUtah, USA, is designed for observation of extensive air showers from extremely\nhigh energy cosmic rays. The experiment has a surface detector array surrounded\nby three fluorescence detectors to enable simultaneous detection of shower\nparticles at ground level and fluorescence photons along the shower track. The\nTA surface detectors and fluorescence detectors started full hybrid observation\nin March 2008. In this article we describe the design and technical features\nof the TA surface detector.", "category": "astro-ph_IM" }, { "text": "Soft proton scattering at grazing incidence from X-ray mirrors: analysis\n of experimental data in the framework of the non-elastic approximation: Astronomical X-ray observatories with grazing incidence optics face the\nproblem of pseudo-focusing of low energy protons from the mirrors towards the\nfocal plane. Those protons constitute a variable, unpredictable component of\nthe non-X-ray background that strongly affects astronomical observations; a\ncorrect estimation of their flux at the focal plane is therefore essential. For\nthis reason, we investigate how they are scattered from the mirror surfaces\nwhen impacting at grazing angles. We compare the non-elastic model of\nreflectivity of particles at grazing incidence proposed by Remizovich et al.\n(1980) with the few available experimental measurements of proton scattering\nfrom X-ray mirrors. We develop a semi-empirical analytical model based on the\nfit of those experimental data with the Remizovich solution. We conclude that\nthe scattering probability weakly depends on the energy of the impinging\nprotons and that the relative energy losses are necessary to correctly model\nthe data. The model we propose assumes no dependence on the incident energy and\ncan be implemented in particle transport simulation codes to generate, for\ninstance, proton response matrices for specific X-ray missions. 
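To illustrate how such a semi-empirical model plugs into a transport code: for each grazing impact one draws an exit angle (and, in the full model, a relative energy loss) from the fitted distribution and accumulates the results into a response matrix. The kernel below is an invented near-specular placeholder, not the fitted Remizovich-type solution.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_exit_angles(theta_in_deg, n, width_frac=0.3):
    """Placeholder scattering kernel: exit angles clustered near specular
    reflection, with a spread proportional to the incidence angle. Draws
    below the surface plane are treated as absorbed. The real kernel would
    be the Remizovich-type distribution fitted to the laboratory data."""
    return rng.normal(theta_in_deg, width_frac * theta_in_deg, n)

theta_in = 0.5                      # degrees, grazing incidence
draws = sample_exit_angles(theta_in, 100_000)
scattered = draws[draws > 0.0]      # protons that actually leave the mirror
print(f"scattering efficiency: {scattered.size / draws.size:.3f}")
print(f"mean exit angle: {scattered.mean():.3f} deg")
```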
Further laboratory measurements at lower\nenergies and on other mirror samples, such as the ATHENA Silicon Pore Optics,\nwill improve the resolution of the model and will allow us to build proper\nproton response matrices for a wider sample of X-ray observatories.", "category": "astro-ph_IM" }, { "text": "Studying the Impact of Optical Aberrations on Diffraction-Limited Radial\n Velocity Instruments: Spectrographs nominally contain a degree of quasi-static optical aberrations\nresulting from the quality of manufactured component surfaces, imperfect\nalignment, design residuals, thermal effects, and other associated\nphenomena involved in the design and construction process. Aberrations that\nchange over time can mimic the line centroid motion of a Doppler shift,\nintroducing radial velocity (RV) uncertainty that increases time-series\nvariability. Even when instrument drifts are tracked using a precise wavelength\ncalibration source, barycentric motion of the Earth leads to a wavelength shift\nof stellar light causing a translation of the spectrum across the focal plane\narray by many pixels. The wavelength shift allows absorption lines to\nexperience different optical propagation paths and aberrations over observing\nepochs. We use physical optics propagation simulations to study the impact of\naberrations on precise Doppler measurements made by diffraction-limited,\nhigh-resolution spectrographs. We quantify the uncertainties that\ncross-correlation techniques introduce in the presence of aberrations and\nbarycentric RV shifts. We find that aberrations which shift the PSF\nphoto-center in the dispersion direction, in particular primary horizontal coma\nand trefoil, are the most concerning. To maintain aberration-induced RV errors\nless than 10 cm/s, phase errors for these particular aberrations must be held\nwell below 0.05 waves at the instrument operating wavelength. Our simulations\nfurther show that wavelength calibration only partially compensates for\ninstrumental drifts, owing to a difference in how cross-correlation techniques\nhandle aberrations in starlight versus calibration light. Identifying subtle\nphysical effects that influence RV errors will help ensure that\ndiffraction-limited planet-finding spectrographs are able to reach their full\nscientific potential.", "category": "astro-ph_IM" }, { "text": "On the reconstruction of motion of a binary star moving in the external\n gravitational field of Kerr black hole by its redshift: We present a study of the time evolution of the redshift of light received\nfrom a binary star that moves in the external gravitational field of a Kerr\nblack hole. We formulate a method for solving the inverse problem: calculating\nthe parameters of the relative motion of the stars in the binary system from\nthe redshift data. The formalism places no restrictions on the character of the\nmotion of the center of mass of the compact binary and can be applied even in\nthe case of binary motion close to the event horizon of a supermassive black\nhole. 
The efficiency of the method is illustrated with a\nnumerical model using plausible parameters for the binary system and for the\nsupermassive black hole located at the center of our Galaxy.", "category": "astro-ph_IM" }, { "text": "Towards an overall astrometric error budget with MICADO-MCAO: MICADO is the Multi-AO Imaging Camera for Deep Observations, and it will be\none of the first-light instruments of the Extremely Large Telescope (ELT).\nDoing high-precision multi-object differential astrometry behind the ELT is\nparticularly effective given the increased flux and small diffraction limit.\nThanks to its robust design with fixed mirrors and a cryogenic environment,\nMICADO aims to provide 50 $\\mu$as absolute differential astrometry (measuring\nstar-to-star distances in absolute $\\mu$as units) over a 53\" FoV in the range\n1.2-2.5 $\\mu$m. Tackling high-precision astrometry over a large FoV requires\nMulti Conjugate Adaptive Optics (MCAO) and an accurate distortion calibration.\nThe MICADO control scheme relies on the separate calibration of the ELT, MAORY\nand MICADO systematics and distortions, to ensure the best disentanglement and\ncorrection of all the contributions. From a system perspective, we are\ndeveloping an astrometric error budget supported by optical simulations to\nassess the impact of the main astrometric errors induced by the telescope and\nits optical tolerances, the MCAO distortions and the opto-mechanical errors\nbetween internal optics of ELT, MAORY and MICADO. The development of an overall\nastrometric error budget will pave the road to an efficient calibration\nstrategy complementing the design of the MICADO calibration unit. At the focus\nof this work are a number of opto-mechanical error terms which have particular\nrelevance for MICADO astrometry applications, and interface to the MCAO design.", "category": "astro-ph_IM" }, { "text": "In-flight performance and calibration of the Grating Wheel Assembly\n sensors (NIRSpec/JWST): The Near-Infrared Spectrograph (NIRSpec) on board the James Webb Space\nTelescope will be the first multi-object spectrograph in space, offering\n~250,000 configurable micro-shutters, apart from being equipped with an\nintegral field unit and fixed slits. At its heart, the NIRSpec grating wheel\nassembly is a cryogenic mechanism equipped with six dispersion gratings, a\nprism, and a mirror. The finite angular positioning repeatability of the wheel\ncauses small but measurable displacements of the light beam on the focal plane,\nprecluding a static solution to predict the light path. To address that, two\nmagneto-resistive position sensors are used to measure the tip and tilt\ndisplacement of the selected GWA element each time the wheel is rotated. The\ncalibration of these sensors is a crucial component of the model-based approach\nused for NIRSpec for calibration, spectral extraction, and target placement in\nthe micro-shutters. In this paper, we present the results of the evolution of\nthe GWA sensors' performance and calibration from ground to space\nenvironments.", "category": "astro-ph_IM" }, { "text": "Nanosatellite aerobrake maneuvering device: In this paper, we present the project of a heliogyro solar sail unit for the\ndeployment of a CubeSat constellation and for satellite deorbiting. The\nballistic calculations show that the constellation deployment period can vary\nfrom 0.18 years for a 450 km initial orbit and 2 CubeSats up to 1.4 years for a\n650 km initial orbit and 8 CubeSats. 
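A back-of-the-envelope version of that ballistic estimate: integrate the drag-driven decay of a circular orbit through an exponential atmosphere. All spacecraft and atmosphere numbers below are assumptions for illustration, not the paper's values; real lifetime analyses use proper atmosphere models and solar-activity dependence.

```python
import numpy as np

MU = 3.986e14    # m^3/s^2, Earth's gravitational parameter
R_E = 6.371e6    # m, Earth radius

def decay_time_days(alt0_m, area_m2, mass_kg, cd=2.2, dt=3600.0):
    """Crude circular-orbit decay time under drag with an assumed
    exponential atmosphere (rho ~ 3.8e-12 kg/m^3 at 450 km, H = 60 km)."""
    alt, t = alt0_m, 0.0
    while alt > 120e3:                    # stop at the re-entry interface
        rho = 3.8e-12 * np.exp(-(alt - 450e3) / 60e3)
        r = R_E + alt
        # Standard drag decay rate of the semi-major axis for a circular orbit.
        dadt = -rho * cd * area_m2 / mass_kg * np.sqrt(MU * r)
        alt += dadt * dt
        t += dt
    return t / 86400.0

# 3U CubeSat (~4 kg) with a deployed sail of ~0.5 m^2 effective area (assumed).
print(f"decay from 450 km: {decay_time_days(450e3, 0.5, 4.0):.0f} days")
```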
We also describe the structural and electrical design of the\nunit and consider aspects of its integration into a standard CubeSat frame.", "category": "astro-ph_IM" }, { "text": "Radio interferometric imaging of spatial structure that varies with time\n and frequency: The spatial-frequency coverage of a radio interferometer is increased by\ncombining samples acquired at different times and observing frequencies.\nHowever, astrophysical sources often contain complicated spatial structure that\nvaries within the time range of an observation, or the bandwidth of the\nreceiver being used, or both. Image reconstruction algorithms can be designed\nto model time and frequency variability in addition to the average intensity\ndistribution, and provide an improvement over traditional methods that ignore\nall variability. This paper describes an algorithm designed for such\nstructures, and evaluates it in the context of reconstructing three-dimensional\ntime-varying structures in the solar corona from radio interferometric\nmeasurements between 5 GHz and 15 GHz using existing telescopes such as the\nEVLA and at angular resolutions better than that allowed by traditional\nmulti-frequency analysis algorithms.", "category": "astro-ph_IM" }, { "text": "Fundamentals of impulsive energy release in the corona: It is essential that there be coordinated and co-optimized observations in\nX-rays, gamma-rays, and EUV during the peak of solar cycle 26 (~2036) to\nsignificantly advance our understanding of impulsive energy release in the\ncorona. The open questions include: What are the physical origins of\nspace-weather events? How are particles accelerated at the Sun? How is\nimpulsively released energy transported throughout the solar atmosphere? How is\nthe solar corona heated? Many of the processes involved in triggering, driving,\nand sustaining solar eruptive events -- including magnetic reconnection,\nparticle acceleration, plasma heating, and energy transport in magnetized\nplasmas -- also play important roles in phenomena throughout the Universe. This\nset of observations can be achieved through a single flagship mission or, with\nforeplanning, through a combination of major missions (e.g., the previously\nproposed FIERCE mission concept).", "category": "astro-ph_IM" }, { "text": "The outreach activities in the astronomical research institutions and\n the role of librarians: what happens in Italy: Outreach activities can be considered a new frontier for all the main\nastronomical research institutions worldwide and are a part of their mission\nthat earns great appreciation from the general public. Here the situation at\nINAF, the Italian National Institute for Astrophysics, is examined and a more\nactive role for librarians is proposed.", "category": "astro-ph_IM" }, { "text": "The image slicer for the Subaru Telescope High Dispersion Spectrograph: We report on the design, manufacturing, and performance of the image slicer\nfor the High Dispersion Spectrograph (HDS) on the Subaru Telescope. This\ninstrument is a Bowen-Walraven type image slicer providing five 0.3 arcsec x\n1.5 arcsec images with a resolving power of R = 110,000. The resulting\nresolving power and line profiles are investigated in detail, including\nestimates of the defocusing effect on the resolving power. 
The throughput in the wavelength\nrange from 400 to 700 nm is higher than 80%, thereby improving the efficiency\nof the spectrograph by a factor of 1.8 for 0.7 arcsec seeing.", "category": "astro-ph_IM" }, { "text": "The Power of Simultaneous Multi-frequency Observations for mm-VLBI:\n Beyond Frequency Phase Transfer: Atmospheric propagation effects at millimeter wavelengths can significantly\nalter the phases of radio signals and reduce the coherence time, putting tight\nconstraints on high-frequency Very Long Baseline Interferometry (VLBI)\nobservations. In previous works, it has been shown that non-dispersive (e.g.\ntropospheric) effects can be calibrated with the frequency phase transfer (FPT)\ntechnique. The coherence time can thus be significantly extended. Ionospheric\neffects, which can still be significant, remain however uncalibrated after FPT,\nas do the instrumental effects. In this work, we implement a further phase\ntransfer between two FPT residuals (the so-called FPT-square) to calibrate the\nionospheric effects based on their frequency dependence. We show that after\nFPT-square, the coherence time at 3 mm can be further extended beyond 8 hours,\nand the residual phase errors can be sufficiently canceled by applying the\ncalibration of another source, which can have a large angular separation from\nthe target (>20 deg) and significant temporal gaps. Calibrations for all-sky\ndistributed sources with a few calibrators are also possible after FPT-square.\nOne of the unique strengths of this calibration strategy is its suitability for\nhigh-frequency all-sky survey observations, including very weak sources. We\ndiscuss the future introduction of a pulse calibration system to calibrate the\nremaining instrumental effects and allow the possibility of imaging the source\nstructure at high frequencies with FPT-square, where all phases are fully\ncalibrated without involving any additional sources.", "category": "astro-ph_IM" }, { "text": "Enhancing Science from Future Space Missions and Planetary Radar with\n the SKA: Both Phase 1 of the Square Kilometre Array (SKA1) and the full SKA have the\npotential to dramatically increase the science return from future astrophysics,\nheliophysics, and especially planetary missions, primarily due to the greater\nsensitivity (A_eff/T_sys) compared with existing or planned spacecraft tracking\nfacilities. While this is not traditional radio astronomy, it is an opportunity\nfor productive synergy between the large investment in the SKA and the even\nlarger investments in space missions to maximize the total scientific value\nreturned to society. Specific applications include short-term increases in\ndownlink data rate during critical mission phases or spacecraft emergencies,\nenabling new mission concepts based on small probes with low power and small\nantennas, high-precision angular tracking via VLBI phase referencing using\nin-beam calibrators, and greater range and signal/noise ratio for bi-static\nplanetary radar observations. Future use of higher frequencies (e.g., 32 GHz\nand optical) for spacecraft communications will not eliminate the need for high\nsensitivities at lower frequencies. Many atmospheric probes and any spacecraft\nusing low-gain antennas require frequencies below a few GHz. The SKA1 baseline\ndesign covers VHF/UHF frequencies appropriate for some planetary atmospheric\nprobes (band 1) as well as the standard 2.3 GHz deep space downlink frequency\nallocation (band 3). 
SKA1-MID also covers the most widely used deep space\ndownlink allocation at 8.4 GHz (band 5). Even a 50% deployment of SKA1-MID will\nstill result in a factor of several increase in sensitivity compared to the\ncurrent 70-m Deep Space Network tracking antennas, along with an advantageous\ngeographic location. The assumptions of a 10X increase in sensitivity and a 20X\nincrease in angular resolution for the SKA result in a truly unique and\nspectacular future spacecraft tracking capability.", "category": "astro-ph_IM" }, { "text": "Small Telescope Exoplanet Transit Surveys: XO: The XO project aims at detecting transiting exoplanets around bright stars\nfrom the ground using small telescopes. The original configuration of XO\n(McCullough et al. 2005) has been changed and extended as described here. The\ninstrumental setup consists of three identical units located at different\nsites, each composed of two lenses equipped with CCD cameras mounted on the\nsame mount. We observed two strips of the sky covering an area of 520 deg$^2$\nfor two periods of nine months each. We built lightcurves for ~20,000 stars up\nto magnitude R~12.5 using a custom-made photometric data reduction pipeline.\nThe photometric precision is around 1-2% for most stars, and the large quantity\nof data allows us to reach a millimagnitude precision when folding the\nlightcurves on timescales that are relevant to exoplanetary transits. We search\nfor periodic signals and identify several hundred variable stars and a few tens\nof transiting planet candidates. Follow-up observations are underway to confirm\nor reject these candidates. We have found two close-in gas giant planets so\nfar, in line with the expected yield.", "category": "astro-ph_IM" }, { "text": "Fireball streak detection with minimal CPU processing requirements for\n the Desert Fireball Network data processing pipeline: The detection of fireball streaks in astronomical imagery can be carried out\nby a variety of methods. The Desert Fireball Network--DFN--uses a network of\ncameras to track and triangulate incoming fireballs to recover meteorites with\norbits. Fireball detection is done on-camera, but due to the design constraints\nimposed by remote deployment, the cameras are limited in processing power and\ntime. We describe the processing software used for fireball detection under\nthese constrained circumstances. A cascading approach was implemented, whereby\ncomputationally simple filters are used to discard uninteresting portions of\nthe images, allowing for more computationally expensive analysis of the\nremainder. This allows a full night's worth of data (over 1000 36-megapixel\nimages) to be processed each day using a low-power single-board computer. The\nalgorithms chosen give a single-camera detection rate for large fireballs of\nbetter than 96 percent, when compared to manual inspection, although\nsignificant numbers of false positives are generated. The overall network\ndetection rate for triangulated large fireballs is estimated to be better than\n99.8 percent, achieved by ensuring that there are multiple double-station\nchances to detect each fireball.", "category": "astro-ph_IM" }, { "text": "On the Angular Resolution of Pair-Conversion γ-Ray Telescopes: I present a study of the several contributions to the single-photon angular\nresolution of pair telescopes in the MeV energy range. 
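Such a single-photon angular resolution is commonly budgeted as a quadrature sum of independent contributions; at MeV energies the unmeasured nuclear recoil and multiple Coulomb scattering of the pair in the tracker dominate. A hedged sketch of such a budget follows, using the Highland multiple-scattering formula; the layer thickness, energy sharing, and the crude recoil scale are assumptions, not values from the study.

```python
import numpy as np

def multiple_scattering_deg(p_mev, x_over_x0):
    """Highland formula: RMS projected scattering angle (in degrees) for a
    relativistic electron of momentum p (MeV/c) crossing x/X0 radiation
    lengths of material."""
    theta_rad = (13.6 / p_mev) * np.sqrt(x_over_x0) * (
        1 + 0.038 * np.log(x_over_x0))
    return np.degrees(theta_rad)

e_gamma = 10.0               # MeV photon
p_lepton = e_gamma / 2.0     # assume equal energy sharing within the pair
x_layer = 0.01               # radiation lengths crossed before the first
                             # measurement plane (assumed)

theta_ms = multiple_scattering_deg(p_lepton, x_layer)
theta_recoil = np.degrees(0.511 / e_gamma)   # crude recoil scale ~ m_e c^2 / E
total = np.hypot(theta_ms, theta_recoil)     # quadrature sum
print(f"MS: {theta_ms:.1f} deg, recoil: {theta_recoil:.1f} deg, "
      f"total: {total:.1f} deg")
```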
I examine some test\ncases: the presently active Fermi LAT, the "pure-silicon" projects\nASTROGAM and AMEGO-X, and the emulsion-based project GRAINE.", "category": "astro-ph_IM" }, { "text": "Search for Ultra-High Energy Photons with the Pierre Auger Observatory: One of the key scientific objectives of the Pierre Auger Observatory is the\nsearch for ultra-high energy photons. Such photons could originate either in\nthe interactions of energetic cosmic-ray nuclei with the cosmic microwave\nbackground (so-called cosmogenic photons) or in exotic scenarios, e.g. those\nassuming the production and decay of hypothetical super-massive particles. The\nlatter category of models would imply relatively large fluxes of photons with\nultra-high energies at Earth, while the former, involving interactions of\ncosmic-ray nuclei with the microwave background, implies just the contrary:\nvery small fractions. The investigations of the data collected so far at the\nPierre Auger Observatory have led to very stringent limits on ultra-high energy\nphoton fluxes: below the predictions of most of the exotic models and nearing\nthe predicted fluxes of the cosmogenic photons. In this paper the status of\nthese investigations and perspectives for further studies are summarized.", "category": "astro-ph_IM" }, { "text": "Comparison of Different Trigger and Readout Approaches for Cameras in\n the Cherenkov Telescope Array Project: The Cherenkov Telescope Array (CTA) is a next-generation ground-based\nobservatory for gamma-rays with energies between a few tens of GeV and a few\nhundred TeV. CTA is currently in the advanced design phase and will consist of\narrays of prime-focus Cherenkov telescopes of different sizes, to ensure a\nproper energy coverage from the threshold up to the highest energies. The\nextension of the CTA array with double-mirror Schwarzschild-Couder telescopes\nis planned to improve the array angular resolution over a wider field of view.\nWe present an end-to-end Monte-Carlo comparison of trigger concepts for the\ndifferent imaging cameras that will be used on the Cherenkov telescopes. The\ncomparison comprises three alternative trigger schemes (analog, majority,\nflexible pattern analysis) for each camera design. The study also addresses the\ninfluence of the properties of the readout system (analog bandwidth of the\nelectronics, length of the readout window in time) and uses an offline shower\nreconstruction to investigate the impact on key performance figures such as the\nenergy threshold and flux sensitivity.", "category": "astro-ph_IM" }, { "text": "Spectrum Sharing Dynamic Protection Area Neighborhoods for Radio\n Astronomy: To enforce incumbent protection through a spectrum access system (SAS) or a\nfuture centralized shared spectrum system, dynamic protection area (DPA)\nneighborhood distances are employed. These distances are radii within which\ncitizen broadband radio service devices (CBSDs) are considered potential\ninterferers for the incumbent spectrum users. The goal of this paper is to\ncreate an algorithm to define DPA neighborhood distances for radio astronomy\n(RA) facilities, with the intent of incorporating those distances into existing\nSASs and adopting them in future frameworks to increase national spectrum\nsharing. This paper first describes an algorithm to calculate sufficient\nneighborhood distances. 
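In essence, a neighborhood distance is the radius beyond which a device can no longer raise the interference at the protected receiver above its threshold. The toy below inverts a free-space (Friis) path-loss model for a single device; it is only a sketch, since the actual algorithm uses statistical terrain-aware propagation and aggregate interference, and the protection threshold shown is purely illustrative.

```python
import numpy as np

def free_space_path_loss_db(d_km, f_mhz):
    """Friis free-space path loss in dB."""
    return 20 * np.log10(d_km) + 20 * np.log10(f_mhz) + 32.44

def neighborhood_distance_km(eirp_dbm, threshold_dbm, f_mhz):
    """Distance at which a single device's free-space received power drops
    to the protection threshold (smooth-Earth toy model)."""
    loss_needed = eirp_dbm - threshold_dbm
    return 10 ** ((loss_needed - 32.44 - 20 * np.log10(f_mhz)) / 20)

# Category B CBSD (47 dBm/10 MHz EIRP) against an illustrative -110 dBm
# protection threshold at 3.6 GHz:
print(f"{neighborhood_distance_km(47.0, -110.0, 3600.0):.0f} km")
```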
Verifying this algorithm by recalculating previously\ncalculated and currently used neighborhood distances for existing DPAs then\nproves its viability for extension to radio astronomy facilities. Applying the\nalgorithm to the Hat Creek Radio Observatory (HCRO) with customized parameters\nresults in recommended distances of 112 kilometers for category A (devices\nwith 30 dBm/10 MHz max EIRP) and 144 kilometers for category B (devices with 47\ndBm/10 MHz max EIRP) for HCRO's inclusion into a SAS, and shows that the\nalgorithm can be applied to RA facilities in general. Calculating these\ndistances identifies currently used but likely out-of-date metrics and\nassumptions that should be revisited for the benefit of spectrum sharing.", "category": "astro-ph_IM" }, { "text": "ORIGIN: Blind detection of faint emission line galaxies in MUSE\n datacubes: One of the major science cases of the MUSE integral field spectrograph is the\ndetection of Lyman-alpha emitters at high redshifts. The on-going and planned\ndeep-field observations will provide a large sample of these sources. An\nefficient tool to perform blind detection of faint emitters in MUSE datacubes\nis a prerequisite of such an endeavor.\n Several line detection algorithms exist but their performance during the\ndeepest MUSE exposures is hard to quantify, in particular with respect to their\nactual false detection rate, or purity. The aim of this work is to design and\nvalidate an algorithm that efficiently detects faint spatial-spectral emission\nsignatures, while allowing for a stable false detection rate over the data cube\nand providing at the same time an automated and reliable estimation of the\npurity.\n Results on simulated data cubes providing ground truth show that the method\nreaches its aims in terms of purity and completeness. When applied to the deep\n30-hour exposure MUSE datacube in the Hubble Ultra Deep Field, the algorithm\nallows for the confirmed detection of 133 intermediate-redshift galaxies and\n248 Lyman-alpha emitters, including 86 sources with no HST counterpart.\n The algorithm fulfills its aims in terms of detection power and reliability.\nIt is consequently implemented as a Python package whose code and documentation\nare available on GitHub and readthedocs.", "category": "astro-ph_IM" }, { "text": "Historic evolution of the optical design of the Multi Conjugate Adaptive\n Optics Relay for the Extremely Large Telescope: The optical design of the Multi Conjugate Adaptive Optics Relay for the\nExtremely Large Telescope has undergone many modifications since the conclusion\nof Phase A in late 2009. These modifications were due to the evolution of the\ntelescope design, the increasingly accurate results of the performance\nsimulations, and the variations of the opto-mechanical interfaces with both the\ntelescope and the client instruments. Moreover, in light of feedback from the\noptics manufacturing assessment, the optical design underwent a global\nsimplification with respect to the former versions. Integration, alignment,\naccessibility and maintenance issues also took a crucial role in the design\ntuning during the last phases of its evolution. This paper intends to describe\nthe most important steps in the evolution of the optical design, whose\nrationale has always been to have a feasible and robust instrument, fulfilling\nall the requirements and interfaces. 
Throughout the wide exploration of possible\nsolutions, all the presented designs are compliant with the high-level\nscientific requirements concerning the maximum residual wavefront error and\nthe geometrical distortion at the exit ports. The outcome of this decennial\nwork is the design chosen as baseline at the kick-off of Phase B in 2016\nand subsequently slightly modified, after requests and inputs from the alignment\nand maintenance side.", "category": "astro-ph_IM" }, { "text": "Precise measurement of the absolute fluorescence yield of the 337 nm\n band in atmospheric gases: A measurement of the absolute fluorescence yield of the 337 nm nitrogen band,\nrelevant to ultra-high energy cosmic ray (UHECR) detectors, is reported. Two\nindependent calibrations of the fluorescence emission induced by a 120 GeV\nproton beam were employed: Cherenkov light from the beam particle and\ncalibrated light from a nitrogen laser. The fluorescence yield in air at a\npressure of 1013 hPa and temperature of 293 K was found to be $Y_{337} =\n5.61\pm 0.06_{stat} \pm 0.21_{syst}$ photons/MeV. When compared to the\nfluorescence yield currently used by UHECR experiments, this measurement\nimproves the uncertainty by a factor of three, and has a significant impact on\nthe determination of the energy scale of the cosmic ray spectrum.", "category": "astro-ph_IM" }, { "text": "Inference of Unresolved Point Sources At High Galactic Latitudes Using\n Probabilistic Catalogs: Detection of point sources in images is a fundamental operation in\nastrophysics, and is crucial for constraining population models of the\nunderlying point sources or characterizing the background emission. Standard\ntechniques fall short in the crowded-field limit, losing sensitivity to faint\nsources and failing to track their covariance with close neighbors. We\nconstruct a Bayesian framework to perform inference of faint or overlapping\npoint sources. The method involves probabilistic cataloging, where samples are\ntaken from the posterior probability distribution of catalogs consistent with\nan observed photon count map. In order to validate our method we sample random\ncatalogs of the gamma-ray sky in the direction of the North Galactic Pole (NGP)\nby binning the data in energy and Point Spread Function (PSF) classes. Using\nthree energy bins spanning $0.3 - 1$, $1 - 3$ and $3 - 10$ GeV, we identify\n$270\substack{+30 \\ -10}$ point sources inside a $40^\circ \times 40^\circ$\nregion around the NGP above our point-source inclusion limit of $3 \times\n10^{-11}$/cm$^2$/s/sr/GeV in the $1-3$ GeV energy bin. Modeling the flux\ndistribution as a power law, we infer the slope to be $-1.92\substack{+0.07 \\\n-0.05}$ and estimate the contribution of point sources to the total emission as\n$18\substack{+2 \\ -2}$\%. These uncertainties in the flux distribution are\nfully marginalized over the number as well as the spatial and spectral\nproperties of the unresolved point sources. This marginalization allows a\nrobust test of whether the apparently isotropic emission in an image is due to\nunresolved point sources or is of truly diffuse origin.", "category": "astro-ph_IM" }, { "text": "An FFT-based Solution Method for the Poisson Equation on 3D Spherical\n Polar Grids: The solution of the Poisson equation is a ubiquitous problem in computational\nastrophysics. Most notably, the treatment of self-gravitating flows involves\nthe Poisson equation for the gravitational field.
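For reference, the equation at issue in the FFT-based solver abstract above is the gravitational Poisson equation,

$$ \nabla^2 \Phi(\mathbf{r}) = 4\pi G\, \rho(\mathbf{r}), $$

relating the gravitational potential $\Phi$ to the mass density $\rho$. As the abstract continues below, the method solves the discretized form of this equation exactly by expanding in the full set of eigenfunctions of the discrete Laplacian, rather than truncating a multipole expansion at a finite $N_\ell$.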
In hydrodynamics codes using\nspherical polar grids, one often resorts to a truncated spherical harmonics\nexpansion for an approximate solution. Here we present a non-iterative method\nthat is similar in spirit, but uses the full set of eigenfunctions of the\ndiscretized Laplacian to obtain an exact solution of the discretized Poisson\nequation. This allows the solver to handle density distributions for which the\ntruncated multipole expansion fails, such as off-center point masses. In three\ndimensions, the operation count of the new method is competitive with a naive\nimplementation of the truncated spherical harmonics expansion with $N_\ell\n\approx 15$ multipoles. We also discuss the parallel implementation of the\nalgorithm. The serial code and a template for the parallel solver are made\npublicly available.", "category": "astro-ph_IM" }, { "text": "Radio interferometric gain calibration as a complex optimization problem: Recent developments in optimization theory have extended some traditional\nalgorithms for least-squares optimization of real-valued functions\n(Gauss-Newton, Levenberg-Marquardt, etc.) into the domain of complex functions\nof a complex variable. This employs a formalism called the Wirtinger\nderivative, and derives a full-complex Jacobian counterpart to the conventional\nreal Jacobian. We apply these developments to the problem of radio\ninterferometric gain calibration, and show how the general complex Jacobian\nformalism, when combined with conventional optimization approaches, yields a\nwhole new family of calibration algorithms, including those for the polarized\nand direction-dependent gain regime. We further extend the Wirtinger calculus\nto an operator-based matrix calculus for describing the polarized calibration\nregime. Using approximate matrix inversion results in computationally efficient\nimplementations; we show that some recently proposed calibration algorithms\nsuch as StefCal and peeling can be understood as special cases of this, and\nplace them in the context of the general formalism. Finally, we present an\nimplementation and some applied results of CohJones, another specialized\ndirection-dependent calibration algorithm derived from the formalism.", "category": "astro-ph_IM" }, { "text": "Near-IR and visual high resolution polarimetric imaging with AO systems: Many spectacular polarimetric images have been obtained in recent years with\nadaptive optics (AO) instruments at large telescopes because they profit\nsignificantly from the high spatial resolution. This paper summarizes some\nbasic principles for AO polarimetry, discusses challenges and limitations of\nthese systems, and describes results which illustrate the performance of AO\npolarimeters for the investigation of circumstellar disks, of dusty winds from\nevolved stars, and for the search for reflecting extra-solar planets.", "category": "astro-ph_IM" }, { "text": "Analysis of the Bayesian Cramer-Rao lower bound in astrometry: Studying\n the impact of prior information in the location of an object: Context. The best precision that can be achieved to estimate the location of\na stellar-like object is a topic of permanent interest in the astrometric\ncommunity.\n Aims. We analyse bounds for the best position estimation of a stellar-like\nobject on a CCD detector array in a Bayesian setting where the position is\nunknown, but where we have access to a prior distribution.
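The astrometry abstract that begins above and continues below analyses exactly this setting. For orientation, in its standard scalar (van Trees) form the Bayesian Cramer-Rao bound states that the mean square error of any estimator $\hat{x}(y)$ of a random position $x$ given data $y$ satisfies

$$ \mathbb{E}\big[(\hat{x}(y)-x)^2\big] \;\geq\; \Big(\mathbb{E}_x\big[I_{y|x}(x)\big] + I_p\Big)^{-1}, \qquad I_p = \mathbb{E}\!\left[\Big(\tfrac{\partial \ln p(x)}{\partial x}\Big)^{2}\right], $$

where $I_{y|x}$ is the Fisher information of the likelihood and $I_p$ the information contributed by the prior $p(x)$; the $I_p$ term is precisely the gain over the parametric (prior-free) bound. This is the textbook form, quoted here for orientation rather than taken from the paper.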
In contrast to a\nparametric setting where we estimate a parameter from observations, the\nBayesian approach estimates a random object (i.e., the position is a random\nvariable) from observations that are statistically dependent on the position.\n Methods. We characterize the Bayesian Cramer-Rao (CR) bound on the minimum\nmean square error (MMSE) of the best estimator of the position of a point\nsource on a linear CCD-like detector, as a function of the properties of the\ndetector, the source, and the background.\n Results. We quantify and analyse the increase in astrometric performance from\nthe use of a prior distribution of the object position, which is not available\nin the classical parametric setting. This gain is shown to be significant for\nvarious observational regimes, in particular in the case of faint objects or\nwhen the observations are taken under poor conditions. Furthermore, we present\nnumerical evidence that the MMSE estimator of this problem tightly achieves the\nBayesian CR bound. This is a remarkable result, demonstrating that all the\nperformance gains presented in our analysis can be achieved with the MMSE\nestimator.\n Conclusions. The Bayesian CR bound can be used as a benchmark indicator of the\nexpected maximum positional precision of a set of astrometric measurements in\nwhich prior information can be incorporated. This bound can be achieved through\nthe conditional mean estimator, in contrast to the parametric case where no\nunbiased estimator precisely reaches the CR bound.", "category": "astro-ph_IM" }, { "text": "Development of HPD Clusters for MAGIC-II: MAGIC-II is the second imaging atmospheric Cherenkov telescope of the MAGIC\nobservatory, which has recently been inaugurated on the Canary island of La Palma.\nWe are currently developing a new camera based on clusters of hybrid photon\ndetectors (HPD) for the upgrade of MAGIC-II. The photon detectors feature a\nGaAsP photocathode and an avalanche diode as an electron-bombarded anode with\ninternal gain, and were supplied by Hamamatsu Photonics K.K. (R9792U-40). The\nHPD camera with high quantum efficiency will increase the MAGIC-II sensitivity\nand lower the energy threshold. The basic performance of the HPDs has been\nmeasured and a prototype of an HPD cluster has been developed to be mounted on\nMAGIC-II. Here we report on the status of the HPD cluster and the project of\neventually using HPD clusters in the central area of the MAGIC-II camera.", "category": "astro-ph_IM" }, { "text": "The RoboPol sample of optical polarimetric standards: Optical polarimeters are typically calibrated using measurements of stars\nwith known and stable polarization parameters. However, there is a lack of such\nstars available across the sky. Many of the currently available standards are\nnot suitable for medium and large telescopes due to their high brightness.\nMoreover, as we find, some of the polarimetric standards in use are in fact\nvariable or have polarization parameters that differ from their cataloged\nvalues. Our goal is to establish a sample of stable standards suitable for\ncalibrating linear optical polarimeters with an accuracy down to $10^{-3}$ in\nfractional polarization. For five years, we have been running a monitoring\ncampaign of a sample of standard candidates comprising 107 stars distributed\nacross the northern sky. We analyzed the variability of the linear polarization\nof these stars, taking into account the non-Gaussian nature of fractional\npolarization measurements.
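The non-Gaussianity just mentioned in the RoboPol abstract has a classic consequence: since $p=\sqrt{q^2+u^2}$ is positive definite, noise biases the measured fractional polarization high. A minimal sketch of the widely used first-order (Wardle and Kronberg style) debiasing follows; it is illustrative only and not necessarily the estimator adopted by the authors.

```python
import numpy as np

def debias_p(q, u, sigma):
    """Debias fractional polarization p = sqrt(q^2 + u^2).

    Noise in (q, u) biases the positive-definite p upward; the standard
    first-order correction subtracts sigma^2 in quadrature and clips
    non-detections to zero.
    """
    p_obs = np.hypot(q, u)
    return np.sqrt(np.clip(p_obs**2 - sigma**2, 0.0, None))

# An unpolarized star (true p = 0) still yields p_obs ~ sigma on average;
# debiasing pulls the ensemble back toward zero:
rng = np.random.default_rng(1)
q = rng.normal(0.0, 1e-3, 100000)
u = rng.normal(0.0, 1e-3, 100000)
print(np.hypot(q, u).mean(), debias_p(q, u, 1e-3).mean())
```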
For a subsample of nine stars, we also performed\nmultiband polarization measurements. We created a new catalog of 65 stars (see\nTable 2) that are stable, have small uncertainties of measured polarimetric\nparameters, and can be used as calibrators of polarimeters at medium- and\nlarge-size telescopes.", "category": "astro-ph_IM" }, { "text": "The infrared imaging spectrograph (IRIS) for TMT: electronics-cable\n architecture: The InfraRed Imaging Spectrograph (IRIS) is a first-light instrument for the\nThirty Meter Telescope (TMT). It combines a diffraction limited imager and an\nintegral field spectrograph. This paper focuses on the electrical system of\nIRIS. With an instrument of the size and complexity of IRIS we face several\nelectrical challenges. Many of the major controllers must be located directly\non the cryostat to reduce cable lengths, and others require multiple bulkheads\nand must pass through a large cable wrap. Cooling and vibration due to the\nrotation of the instrument are also major challenges. We will present our\nselection of cables and connectors for both room temperature and cryogenic\nenvironments, packaging in the various cabinets and enclosures, and techniques\nfor complex bulkheads including for large detectors at the cryostat wall.", "category": "astro-ph_IM" }, { "text": "Spectropolarimeter on-board the Aditya-L1: Polarization Modulation and\n Demodulation: One of the major science goals of the Visible Emission Line Coronagraph\n(VELC) payload aboard the Aditya-L1 mission is to map the coronal magnetic\nfield topology and to quantitatively estimate the longitudinal magnetic field on a\nroutine basis. The infrared (IR) channel of VELC is equipped with a\npolarimeter to carry out full Stokes spectropolarimetric observations in the Fe\nXIII line at 1074.7~nm. The polarimeter is in a dual-beam setup with a continuously\nrotating waveplate as the polarization modulator. Detection of circular\npolarization due to the Zeeman effect and depolarization of linear polarization in\nthe presence of a magnetic field due to the saturated Hanle effect in the Fe~{\sc\nxiii} line require a high signal-to-noise ratio (SNR). Due to the limited number of\nphotons, long integration times are expected to build up the required SNR. In\nother words, signals from a large number of modulation cycles have to be averaged\nto achieve the required SNR. This poses several difficulties. One of them is\nthe increase in data volume, and the other is the change in the modulation\nmatrix in successive modulation cycles. The latter effect arises due to a\nmismatch between the retarder's rotation period and the length of the signal\ndetection time in the case of the VELC spectropolarimeter (VELC/SP). It is shown in\nthis paper that by appropriately choosing the number of samples per half\nrotation the data volume can be optimized. A potential solution is suggested to\naccount for the modulation matrix variation from one cycle to the next.", "category": "astro-ph_IM" }, { "text": "Galaxy And Mass Assembly (GAMA): autoz spectral redshift measurements,\n confidence and errors: The Galaxy And Mass Assembly (GAMA) survey has obtained spectra of over\n230000 targets using the Anglo-Australian Telescope. To homogenise the redshift\nmeasurements and improve the reliability, a fully automatic redshift code was\ndeveloped (autoz). The measurements were made using a cross-correlation method\nfor both absorption-line and emission-line spectra.
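Before the autoz abstract continues below, the core operation is worth making concrete. On a log-wavelength grid a Doppler shift becomes a simple translation, so the redshift can be read off the peak of a cross-correlation. This toy sketch omits everything that makes autoz robust (high-pass filtering, clipping, and the figure of merit described next) and is only a schematic illustration.

```python
import numpy as np

def xcorr_redshift(spec, template, dlnlam):
    """Cross-correlate a continuum-subtracted spectrum with a template
    sampled on the same log-wavelength grid; the peak lag gives z."""
    spec = (spec - spec.mean()) / spec.std()
    template = (template - template.mean()) / template.std()
    lags = np.arange(-len(spec) + 1, len(spec))
    cc = np.correlate(spec, template, mode="full")
    best = lags[np.argmax(cc)]
    return np.exp(best * dlnlam) - 1.0  # z from the best-fit shift

# Toy check: shift a fake emission line by a known amount and recover it.
dlnlam = 1e-4
grid = np.arange(4000)
template = np.exp(-0.5 * ((grid - 2000) / 5.0) ** 2)
spec = np.roll(template, 300)                  # 300 pixels, z = e^0.03 - 1
print(xcorr_redshift(spec, template, dlnlam))  # ~0.0305
```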
Large deviations in the\nhigh-pass filtered spectra are partially clipped in order to be robust against\nuncorrected artefacts and to reduce the weight given to single-line matches. A\nsingle figure of merit (FOM) was developed that puts all template matches onto\na similar confidence scale. The redshift confidence as a function of the FOM\nwas fitted with a tanh function using a maximum likelihood method applied to\nrepeat observations of targets. The method could be adapted to provide robust\nautomatic redshifts for other large galaxy redshift surveys. For the GAMA\nsurvey, there was a substantial improvement in the reliability of assigned\nredshifts and a lowering of redshift uncertainties, with a median velocity\nuncertainty of 33 km/s.", "category": "astro-ph_IM" }, { "text": "A new sky subtraction technique for low surface brightness data: We present a new approach to sky subtraction for long-slit spectra\nsuitable for low-surface brightness objects based on the controlled\nreconstruction of the night sky spectrum in Fourier space using twilight or\narc-line frames as references. It can be easily adapted for FLAMINGOS-type\nmulti-slit data. Compared to existing sky subtraction algorithms, our technique\ntakes into account variations of the spectral line spread along the slit,\nthus qualitatively improving the sky subtraction quality for extended targets.\nAs an example, we show how the stellar metallicity and stellar velocity\ndispersion profiles in the outer disc of the spiral galaxy NGC 5440 are\naffected by the sky subtraction quality. Our technique is used in the survey of\nearly-type galaxies carried out at the Russian 6-m telescope, and it strongly\nincreases the scientific potential of large amounts of long-slit data for\nnearby galaxies available in major data archives.", "category": "astro-ph_IM" }, { "text": "Cosmic Ray in the Northern Hemisphere: Results from the Telescope Array\n Experiment: The Telescope Array (TA) is the largest ultrahigh energy (UHE) cosmic ray\nobservatory in the northern hemisphere. TA is a hybrid experiment with a unique\ncombination of fluorescence detectors and a stand-alone surface array of\nscintillation counters. We will present the spectrum measured by the surface\narray alone, along with those measured by the fluorescence detectors in\nmonocular, hybrid, and stereo mode. The composition results from stereo TA data\nwill be discussed. Our report will also include results from the search for\ncorrelations of the pointing directions of cosmic rays, seen by the TA\nsurface array, with active galactic nuclei.", "category": "astro-ph_IM" }, { "text": "Speckle correction in polychromatic light with the self-coherent camera\n for the direct detection of exoplanets: Direct detection is a very promising field in exoplanet science. It allows\nthe detection of companions at large separations and enables their spectral\nanalysis. A few planets have already been detected and are under spectral\nanalysis. But the full spectral characterization of smaller and colder planets\nrequires higher contrast levels over large spectral bandwidths. Coronagraphs\ncan be used to reach these contrasts, but their efficiency is limited by\nwavefront aberrations. These deformations induce speckles, i.e. starlight\nleaks, in the focal plane downstream of the coronagraph. The wavefront aberrations should be\nestimated directly in the science image to avoid the usual limitations from\ndifferential aberrations in classical adaptive optics.
In this context, we\nintroduce the Self-Coherent Camera (SCC). The SCC uses the coherence of the\nstar light to produce a spatial modulation of the speckles in the focal plane\nand estimate the associated complex electric field. By controlling the wavefront\nwith a deformable mirror, high contrasts have already been reached in\nmonochromatic light with this technique. The performance of the current version\nof the SCC is limited when widening the spectral bandwidth. We will present a\ntheoretical analysis of these issues and their possible solution. Finally, we\nwill present test bench performance in polychromatic light.", "category": "astro-ph_IM" }, { "text": "MeerKATHI -- an end-to-end data reduction pipeline for MeerKAT and other\n radio telescopes: MeerKATHI is the current development name for a radio-interferometric data\nreduction pipeline, assembled by an international collaboration. We create a\npublicly available end-to-end continuum- and line-imaging pipeline for MeerKAT\nand other radio telescopes. We implement advanced techniques that are suitable\nfor producing high-dynamic-range continuum images and spectroscopic data cubes.\nUsing containerization, our pipeline is platform-independent. Furthermore, we\nare applying a standardized approach for using a number of different\nadvanced software suites, partly developed within our group. We aim to use\ndistributed computing approaches throughout our pipeline to enable the user to\nreduce larger data sets like those provided by radio telescopes such as\nMeerKAT. The pipeline also delivers a set of imaging quality metrics that give\nthe user the opportunity to efficiently assess the data quality.", "category": "astro-ph_IM" }, { "text": "On the efficiency of techniques for the reduction of impulsive noise in\n astronomical images: The impulsive noise in astronomical images originates from various sources.\nIt develops as a result of thermal generation in pixels, collisions of cosmic\nrays with the image sensor, or may be induced by the high readout voltage in an\nElectron Multiplying CCD (EMCCD). It is usually efficiently removed by employing\ndark frames or by averaging several exposures. Unfortunately, there are some\ncircumstances when either the observed objects or the positions of impulsive\npixels evolve, and therefore each obtained image has to be filtered\nindependently. In this article we present an overview of impulsive noise\nfiltering methods and compare their efficiency for the purpose of astronomical\nimage enhancement. The employed set of noise templates consists of dark frames\nobtained from CCD and EMCCD cameras working on the ground and in space. The\nexperiments conducted on synthetic and real images allowed for drawing\nnumerous conclusions about the usefulness of several filtering methods for\nvarious (1) widths of stellar profiles, (2) signal-to-noise ratios, (3) noise\ndistributions and (4) applied imaging techniques. The results of the presented\nevaluation are especially valuable for the selection of the most efficient\nfiltering scheme in astronomical image processing pipelines.", "category": "astro-ph_IM" }, { "text": "The Qatar Exoplanet Survey: The Qatar Exoplanet Survey (QES) is discovering hot Jupiters and aims to\ndiscover hot Saturns and hot Neptunes that transit in front of relatively\nbright host stars. QES currently operates a robotic wide-angle camera system to\nidentify promising transiting exoplanet candidates among which are the\nconfirmed exoplanets Qatar 1b and 2b.
This paper describes the first-generation\nQES instrument, observing strategy, data reduction techniques, and follow-up\nprocedures. The QES cameras in New Mexico complement the SuperWASP cameras in\nthe Canary Islands and South Africa, and we have developed tools to enable the\nQES images and light curves to be archived and analysed using the same methods\ndeveloped for the SuperWASP datasets. With its larger aperture, finer pixel\nscale, and comparable field of view, and with plans to deploy similar systems\nat two further sites, the QES, in collaboration with SuperWASP, should help to\nspeed the discovery of smaller-radius planets transiting bright stars in\nnorthern skies.", "category": "astro-ph_IM" }, { "text": "Your data is your dogfood: DevOps in the astronomical observatory: DevOps is the contemporary term for a software development culture that\npurposefully blurs the distinction between software development and IT operations\nby treating \"infrastructure as code.\" DevOps teams typically implement\npractices summarised by the colloquial directive to \"eat your own dogfood;\"\nmeaning that software tools developed by a team should be used internally\nrather than thrown over the fence to operations or users. We present a brief\noverview of how DevOps techniques bring proven software engineering practices\nto IT operations. We then discuss the application of these practices to\nastronomical observatories.", "category": "astro-ph_IM" }, { "text": "An Automated Pipeline for the VST Data Log Analysis: The VST Telescope Control Software continuously logs detailed information\nabout the telescope and instrument operations. Commands, telemetry, errors,\nweather conditions, and anything that may be relevant for instrument maintenance\nand the identification of problem sources are regularly saved. All information\nis recorded in textual form. These log files are often examined individually\nby the observatory personnel for specific issues and for tackling problems\nraised during the night. Thus, only a minimal part of the information is\nnormally used for daily maintenance. Nevertheless, the analysis of the archived\ninformation collected over a long time span can be exploited to reveal useful\ntrends and statistics about the telescope, which would otherwise be overlooked.\nGiven the large size of the archive, a manual inspection and handling of the\nlogs is cumbersome. An automated tool with an adequate user interface has been\ndeveloped to scrape specific entries within the log files, process the data and\ndisplay it in a comprehensible way. This pipeline has been used to scan the\ninformation collected over 5 years of telescope activity.", "category": "astro-ph_IM" }, { "text": "Serendipitous Science from the K2 Mission: The K2 mission is a repurposed use of the Kepler spacecraft to perform\nhigh-precision photometry of selected fields in the ecliptic. We have developed\nan aperture photometry pipeline for K2 data which performs dynamic automated\naperture mask selection, background estimation and subtraction, and positional\ndecorrelation to minimize the effects of spacecraft pointing jitter. We also\nidentify secondary targets in the K2 \"postage stamps\" and produce light curves\nfor those targets as well. Pipeline results will be made available to the\ncommunity.
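Before the K2 abstract concludes below, the positional decorrelation step it names lends itself to a compact illustration: regress the raw flux against the measured centroid positions and divide out the fit. This sketch is a generic stand-in assuming a simple second-order polynomial in the centroids; it is not the actual pipeline code.

```python
import numpy as np

def decorrelate_position(flux, x, y):
    """Remove flux trends correlated with the centroid position (x, y),
    e.g. pointing jitter moving the star across pixels of unequal
    sensitivity, by dividing out a least-squares quadratic model."""
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, flux, rcond=None)
    return flux / (A @ coeffs)  # normalized, jitter-corrected light curve
```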
Here we describe our pipeline and the photometric precision we are\ncapable of achieving with K2, and illustrate its utility with asteroseismic\nresults from the serendipitous secondary targets.", "category": "astro-ph_IM" }, { "text": "Cosmic Microwave Background Mapmaking with a Messenger Field: We apply a messenger field method to solve the linear minimum-variance\nmapmaking equation in the context of Cosmic Microwave Background (CMB)\nobservations. In simulations, the method produces sky maps that converge\nsignificantly faster than those from a conjugate gradient descent algorithm\nwith a diagonal preconditioner, even though the computational cost per\niteration is similar. The messenger method recovers large scales in the map\nbetter than conjugate gradient descent, and yields a lower overall $\chi^2$. In\nthe single pencil-beam approximation, each iteration of the messenger\nmapmaking procedure produces an unbiased map, and the iterations become more\noptimal as they proceed. A variant of the method can handle differential data\nor perform deconvolution mapmaking. The messenger method requires no\npreconditioner, but a high-quality solution needs a cooling parameter to\ncontrol the convergence. We study the convergence properties of this new\nmethod, and discuss how the algorithm is feasible for the large data sets of\ncurrent and future CMB experiments.", "category": "astro-ph_IM" }, { "text": "KLLR: A scale-dependent, multivariate model class for regression\n analysis: The underlying physics of astronomical systems governs the relation between\ntheir measurable properties. Consequently, quantifying the statistical\nrelationships between system-level observable properties of a population offers\ninsights into the astrophysical drivers of that class of systems. While purely\nlinear models capture behavior over a limited range of system scale, the fact\nthat astrophysics is ultimately scale-dependent implies the need for a more\nflexible approach to describing population statistics over a wide dynamic\nrange. For such applications, we introduce and implement a class of\nKernel-Localized Linear Regression (KLLR) models. KLLR is a natural extension\nto the commonly-used linear models that allows the parameters of the linear\nmodel -- normalization, slope, and covariance matrix -- to be scale-dependent.\nKLLR performs inference in two steps: (1) it estimates the mean relation\nbetween a set of independent variables and a dependent variable; and (2) it\nestimates the conditional covariance of the dependent variables given a set of\nindependent variables. We demonstrate the model's performance in a simulated\nsetting and showcase an application of the proposed model in analyzing the\nbaryonic content of dark matter halos. As a part of this work, we publicly\nrelease a Python implementation of the KLLR method.", "category": "astro-ph_IM" }, { "text": "Deep sub-arcsecond widefield imaging of the Lockman Hole field at 144\n MHz: High quality low-frequency radio surveys hold the promise of advancing our\nunderstanding of many important topics in astrophysics, including the life\ncycle of active galactic nuclei (AGN), particle acceleration processes in jets,\nthe history of star formation, and exoplanet magnetospheres. Currently leading\nlow-frequency surveys reach an angular resolution of a few arcseconds. However,\nthis resolution is not yet sufficient to study the more compact and distant\nsources in detail.
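Stepping back to the KLLR abstract above: in one dimension the first of its two steps reduces to an ordinary weighted linear fit repeated at each evaluation scale, with a kernel centered there. The following minimal sketch (Gaussian kernel, mean relation and slope only; the public package also returns localized covariances) illustrates the idea and is not the released implementation.

```python
import numpy as np

def kllr_1d(x, y, x_eval, width=0.2):
    """Kernel-Localized Linear Regression, 1-D sketch: at each scale x0,
    fit a straight line with Gaussian weights exp(-(x-x0)^2 / 2 width^2),
    so normalization and slope become smooth functions of scale."""
    norms, slopes = [], []
    for x0 in x_eval:
        w = np.exp(-0.5 * ((x - x0) / width) ** 2)
        # polyfit weights multiply unsquared residuals, hence sqrt(w)
        b, a = np.polynomial.polynomial.polyfit(x, y, 1, w=np.sqrt(w))
        norms.append(b + a * x0)   # local mean relation at x0
        slopes.append(a)           # local slope at x0
    return np.array(norms), np.array(slopes)

# Toy use: a relation whose slope steepens with scale.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2, 2000)
y = x**2 + rng.normal(0, 0.1, x.size)   # local slope ~ 2x
xg = np.linspace(0.2, 1.8, 9)
print(kllr_1d(x, y, xg)[1].round(2))    # slopes grow roughly as 2*xg
```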
Sub-arcsecond resolution is therefore the next milestone in\nadvancing these fields. The biggest challenge at low radio frequencies is the\nionosphere. If not adequately corrected for, ionospheric seeing blurs the\nimages to arcsecond or even arcminute scales. Additionally, the required image\nsize to map the degree-scale field of view of low-frequency radio telescopes at\nthis resolution is far greater than what typical software and hardware are\ncurrently capable of handling. Here we present for the first time (to the best\nof our knowledge) widefield sub-arcsecond imaging at low radio frequencies. We\nderive ionospheric corrections in a few dozen individual directions and apply\nthose during imaging efficiently using a recently developed imaging algorithm\n(arXiv:1407.1943, arXiv:1909.07226). We demonstrate our method by applying it\nto an eight-hour observation with the International LOw Frequency ARray (LOFAR)\nTelescope (ILT) (arXiv:1305.3550). In doing so, we have made a sensitive $7.4\ \mathrm{deg}^2$ $144\ \mathrm{MHz}$ map at a resolution of $0.3''$ reaching\n$25\ \mu\mathrm{Jy\ beam}^{-1}$ near the phase centre. The estimated $250,000$\ncore hours used to produce this image fit comfortably in the budget of\navailable computing facilities. This result will enable future mapping of the\nentire northern low-frequency sky at sub-arcsecond resolution.", "category": "astro-ph_IM" }, { "text": "Multi-messenger Astronomy: a Bayesian approach: After the discovery of gravitational waves and the observation of\nneutrinos of cosmic origin, we have entered a new and exciting era where cosmic\nrays, neutrinos, photons and gravitational waves will be used simultaneously to\nstudy the highest energy phenomena in the Universe. Here we present a fully\nBayesian approach to the challenge of combining and comparing the wealth of\nmeasurements from existing and upcoming experimental facilities. We discuss the\nprocedure from a theoretical point of view and, using simulations, we also\ndemonstrate the feasibility of the method by incorporating the use of\ninformation provided by different theoretical models and different experimental\nmeasurements.", "category": "astro-ph_IM" }, { "text": "MOSAIX: a tool to build large mosaics from GALEX images: Large sky surveys are providing a huge amount of information for studies of\nthe interstellar medium, the galactic structure or the cosmic web. Setting into\na common frame information coming from different wavelengths, over large fields\nof view, is needed for this kind of research. GALEX is the only nearly all-sky\nsurvey at ultraviolet wavelengths and contains fundamental information for all\ntypes of studies. The GALEX field of view is circular, embedded in a square matrix\nof 3840 x 3840 pixels. This fact makes it hard to get GALEX images properly\noverlapped using existing astronomical tools such as Aladin or Montage. We\ndeveloped our own software for this purpose. In this article, we describe this\nsoftware and make it available to the community.", "category": "astro-ph_IM" }, { "text": "The Experiment for Cryogenic Large-aperture Intensity Mapping (EXCLAIM): The EXperiment for Cryogenic Large-Aperture Intensity Mapping (EXCLAIM) is a\ncryogenic balloon-borne instrument that will survey galaxy and star formation\nhistory over cosmological time scales. Rather than identifying individual\nobjects, EXCLAIM will be a pathfinder to demonstrate an intensity mapping\napproach, which measures the cumulative redshifted line emission.
EXCLAIM will\noperate at 420-540 GHz with a spectral resolution R=512 to measure the\nintegrated CO and [CII] in redshift windows spanning 0 < z < 3.5. CO and [CII]\nline emissions are key tracers of the gas phases in the interstellar medium\ninvolved in star-formation processes. EXCLAIM will shed light on questions such\nas why the star formation rate declines at z < 2, despite continued clustering\nof the dark matter. The instrument will employ an array of six superconducting\nintegrated grating-analog spectrometers (micro-spec) coupled to microwave\nkinetic inductance detectors (MKIDs). Here we present an overview of the\nEXCLAIM instrument design and status.", "category": "astro-ph_IM" }, { "text": "UNI Astronomical Observatory - OAUNI: First light: We present the current status of the project to implement the Astronomical\nObservatory of the National University of Engineering (OAUNI), including its\nfirst light. The OAUNI was successfully installed at the site of the Huancayo\nObservatory in the Peruvian central Andes. At this time, we are finishing the\ncommissioning phase, which includes the testing of all the instruments: optical\ntube, robotic mount, CCD camera, filter wheel, remote access system, etc. The\nfirst light, gathered from a stellar field, was very promising. The next step\nwill be to start the scientific programs and to support the undergraduate\ncourses in observational astronomy at the Faculty of Sciences of UNI.", "category": "astro-ph_IM" }, { "text": "Real-time exposure control and instrument operation with the NEID\n spectrograph GUI: The NEID spectrograph on the WIYN 3.5-m telescope at Kitt Peak has completed\nits first full year of science operations and is reliably delivering sub-m/s\nprecision radial velocity measurements. The NEID instrument control system uses\nthe TIMS package (Bender et al. 2016), which is a client-server software system\nbuilt around the Twisted Python software stack. During science observations,\ninteraction with the NEID spectrograph is handled through a pair of graphical\nuser interfaces (GUIs), written in PyQt, which wrap the underlying instrument\ncontrol software and provide straightforward and reliable access to the\ninstrument. Here, we detail the design of these interfaces and present an\noverview of their use for NEID operations. Observers can use the NEID GUIs to\nset the exposure time, signal-to-noise ratio (SNR) threshold, and other\nrelevant parameters for observations, configure the calibration bench and\nobserving mode, track or edit observation metadata, and monitor the current\nstate of the instrument. These GUIs facilitate automatic spectrograph\nconfiguration and target ingestion from the nightly observing queue, which\nimproves operational efficiency and consistency across epochs. By interfacing\nwith the NEID exposure meter, the GUIs also allow observers to monitor the\nprogress of individual exposures and trigger the shutter on user-defined SNR\nthresholds.
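SNR-triggered shutter control of the kind just described (the NEID abstract concludes below) reduces to a simple polling loop in the photon-noise limit, where the cumulative SNR grows as the square root of the integrated exposure-meter counts. The sketch below is schematic only: read_counts() is a hypothetical callback standing in for the exposure-meter interface, not the actual NEID API.

```python
import math
import time

def expose_until_snr(read_counts, snr_target, max_time, poll=1.0):
    """Poll cumulative exposure-meter counts and stop the exposure once
    the photon-noise SNR estimate sqrt(N) reaches the requested target,
    or once the maximum exposure time elapses."""
    start, total = time.monotonic(), 0.0
    while time.monotonic() - start < max_time:
        total += read_counts()            # counts since the last poll
        if math.sqrt(total) >= snr_target:
            break                         # threshold reached: close shutter
        time.sleep(poll)
    return time.monotonic() - start       # achieved exposure time
```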
In addition, inset plots of the instantaneous and cumulative\nexposure meter counts as each observation progresses allow for rapid diagnosis\nof changing observing conditions as well as guiding failures and other emergent\nissues.", "category": "astro-ph_IM" }, { "text": "Transforming the Canada France Hawaii Telescope (CFHT) into the Maunakea\n Spectroscopic Explorer (MSE): A Conceptual Observatory Building and\n Facilities Design: The Canada France Hawaii Telescope Corporation (CFHT) plans to repurpose its\nobservatory on the summit of Maunakea and operate a new wide field\nspectroscopic survey telescope, the Maunakea Spectroscopic Explorer (MSE). MSE\nwill upgrade the observatory with a larger 11.25m aperture telescope and equip\nit with dedicated instrumentation to capitalize on the site, which has some of\nthe best seeing in the northern hemisphere, and offer its user community the\nability to do transformative science. The knowledge and experience of the\ncurrent CFHT staff will contribute greatly to the engineering of this new\nfacility. MSE will reuse the same building and telescope pier as CFHT. However,\nit will be necessary to upgrade the support pier to accommodate a bigger\ntelescope and replace the current dome since a wider slit opening of 12.5\nmeters in diameter is needed. Once the project is completed the new facility\nwill be almost indistinguishable on the outside from the current CFHT\nobservatory. MSE will build upon CFHT's pioneering work in remote operations,\nwith no staff at the observatory during the night, and use modern technologies\nto reduce daytime maintenance work. This paper describes the design approach\nfor redeveloping the CFHT facility for MSE including the infrastructure and\nequipment considerations required to support and facilitate nighttime\nobservations. The building will be designed so existing equipment and\ninfrastructure can be reused wherever possible while meeting new requirements.\nPast experience and lessons learned will be used to create a modern, optimized,\nand logical layout of the facility. The purpose of this paper is to\nprovide information to readers involved in the MSE project or organizations\ninvolved with the redevelopment of an existing observatory facility for a new\nmission.", "category": "astro-ph_IM" }, { "text": "21 cm Intensity Mapping: Using the 21 cm line, observed all-sky and across the redshift range from 0\nto 5, the large scale structure of the Universe can be mapped in three\ndimensions. This can be accomplished by studying specific intensity with\nresolution ~ 10 Mpc, rather than via the usual galaxy redshift survey. The data\nset can be analyzed to determine Baryon Acoustic Oscillation wavelengths, in\norder to address the question: 'What is the nature of Dark Energy?' In\naddition, the study of Large Scale Structure across this range addresses the\nquestions: 'How does Gravity affect very large objects?' and 'What is the\ncomposition of our Universe?' The same data set can be used to search for and\ncatalog time-variable and transient radio sources.", "category": "astro-ph_IM" }, { "text": "Flowdown of the TMT astrometry error budget(s) to the IRIS design: TMT has defined the accuracy to be achieved for both absolute and\ndifferential astrometry in its top-level requirements documents. Because of the\ncomplexities of different types of astrometric observations, these requirements\ncannot be used to specify system design parameters directly.
The TMT astrometry\nworking group therefore developed detailed astrometry error budgets for a\nvariety of science cases. These error budgets detail how astrometric errors\npropagate through the calibration, observing and data reduction processes. The\nbudgets need to be condensed into sets of specific requirements that can be\nused by each subsystem team for design purposes. We show how this flowdown from\nerror budgets to design requirements is achieved for the case of TMT's\nfirst-light Infrared Imaging Spectrometer (IRIS) instrument.", "category": "astro-ph_IM" }, { "text": "A Study of the Compact Water Vapor Radiometer for Phase Calibration of\n the Karl G. Jansky Very Large Array: We report on laboratory test results of the Compact Water Vapor Radiometer\n(CWVR) prototype for the NSF's Karl G. Jansky Very Large Array (VLA), a\nfive-channel design centered around the 22 GHz water vapor line. Fluctuations\nin precipitable water vapor cause fluctuations in atmospheric brightness\nemission, which are assumed to be proportional to phase fluctuations of the\nastronomical signal seen by an antenna. Water vapor radiometry consists of\nusing a radiometer to measure variations in the atmospheric brightness emission\nto correct for the phase fluctuations. The CWVR channel isolation requirement\nof < -20 dB is met, indicating < 1% power leakage between any two channels.\nGain stability tests indicate that Channel 1 needs repair, and that the\nfluctuations in output counts for Channels 2 to 5 are negatively correlated with\nthe CWVR enclosure ambient temperature, with a change of ~ 405 counts per 1\ndegree C change in temperature. With temperature correction, the single-channel\nand channel-difference gain stability is < 2 x 10^-4, and the observable gain\nstability is < 2.5 x 10^-4 over t = 2.5 - 10^3 sec, all of which meet the\nrequirements. Overall, the test results indicate that the CWVR meets\nspecifications for dynamic range, channel isolation, and gain stability to be\ntested on an antenna. Future work consists of building more CWVRs and testing\nthe phase correlations on the VLA antennas to evaluate the use of WVR for not\nonly the VLA, but also the Next Generation Very Large Array (ngVLA).", "category": "astro-ph_IM" }, { "text": "Spatial intensity interferometry on three bright stars: The present article reports on the first spatial intensity interferometry\nmeasurements on stars since the observations at Narrabri Observatory by Hanbury\nBrown et al. in the 1970s. Taking advantage of recent progress in\nphoton-counting detectors and fast electronics, we were able to measure the\nzero-time-delay intensity correlation $g^{(2)}(\tau = 0, r)$ between the light\ncollected by two 1-m optical telescopes separated by 15 m. Using two marginally\nresolved stars ($\alpha$ Lyr and $\beta$ Ori) with R magnitudes of 0.01 and\n0.13 respectively, we demonstrate that 4-hour correlation exposures provide\nreliable visibilities, whilst a significant loss of contrast is found on\n$\alpha$ Aur, in agreement with its binary-star nature.", "category": "astro-ph_IM" }, { "text": "Optimisation of a Hydrodynamic SPH-FEM Model for a Bioinspired\n Aerial-aquatic Spacecraft on Titan: Titan, Saturn's largest moon, supports a dense atmosphere, numerous bodies of\nliquid on its surface, and as a richly organic world is a primary focus for\nunderstanding the processes that support the development of life.
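Returning to the CWVR abstract above: the temperature correction it reports (output counts anti-correlated with enclosure temperature at roughly 405 counts per degree C) amounts to a linear decorrelation. The sketch below illustrates that step in generic form; the slope is fit from the data rather than hard-coded, and this is not the actual test-bench software.

```python
import numpy as np

def temperature_correct(counts, temp_c):
    """Regress radiometer output counts on enclosure temperature and
    subtract the fitted linear trend (the CWVR tests report a slope of
    roughly -405 counts per deg C), leaving temperature-independent
    fluctuations for the gain-stability analysis."""
    slope, _ = np.polyfit(temp_c, counts, 1)
    return counts - slope * (temp_c - temp_c.mean())
```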
In-situ\nexploration to follow that of the Huygens probe is intended in the form of the\ncoming NASA Dragonfly mission, acting as a demonstrator for powered flight on\nthe moon and aiming to answer some key questions about the atmosphere, surface,\nand potential for habitability. While a quadcopter presents one of the most\nambitious outer Solar System mission profiles to date, this paper aims to\npresent the case for an aerial vehicle also capable of in-situ liquid sampling\nand show some of the attempts currently being made to model the behaviour of\nthis spacecraft.", "category": "astro-ph_IM" }, { "text": "The Low Earth Orbit Satellite Population and Impacts of the SpaceX\n Starlink Constellation: I discuss the current low Earth orbit artificial satellite population and\nshow that the proposed `megaconstellation' of circa 12,000 Starlink internet\nsatellites would dominate the lower part of Earth orbit, below 600 km, with a\nlatitude-dependent areal number density of between 0.005 and 0.01 objects per\nsquare degree at airmass < 2. Such large, low-altitude satellites appear\nvisually bright to ground observers, and the initial Starlinks are naked-eye\nobjects. I model the expected number of illuminated satellites as a function of\nlatitude, time of year, and time of night and summarize the range of possible\nconsequences for ground-based astronomy. In winter at lower latitudes typical\nof major observatories, the satellites will not be illuminated for six hours in\nthe middle of the night. However, at low elevations near twilight at\nintermediate latitudes (45-55 deg, e.g. much of Europe) hundreds of satellites\nmay be visible at once to naked-eye observers at dark sites.", "category": "astro-ph_IM" }, { "text": "Initial follow-up of optical transients with COLORES using the BOOTES\n network: The Burst Observer and Optical Transient Exploring System (BOOTES) is a\nnetwork of telescopes that allows the continuous monitoring of transient\nastrophysical sources. It was originally devoted to the study of the optical\nemission from gamma-ray bursts (GRBs) that occur in the Universe. In this paper\nwe show the initial results obtained using the spectrograph COLORES (mounted on\nBOOTES-2) when observing optical transients (OTs) of diverse nature.", "category": "astro-ph_IM" }, { "text": "Short Spacing Considerations for the ngVLA: The next generation Very Large Array project (ngVLA) would represent a major\nstep forward in sensitivity and resolution for radio astronomy, with the ability\nto achieve 2 milli-arcsec resolution at 100 GHz (assuming a maximum baseline of\n300 km). For science on spatial scales of >~ 1 arcsec, the ngVLA project should\nconsider the use of a large single-dish telescope to provide short-spacing\ndata. Large single-dish telescopes are complementary to interferometers and are\ncrucial to providing sensitivity to spatial scales lost by interferometry.\nAssuming the current vision of the ngVLA (300 18m dishes) and by studying\npossible array configurations, I argue that a single dish with a diameter of >=\n45m with approximately 20-element receiver systems would be well matched to the\nngVLA for mapping observations.", "category": "astro-ph_IM" }, { "text": "ESA Voyage 2050 white paper -- Faint objects in motion: the new frontier\n of high precision astrometry: Sky survey telescopes and powerful targeted telescopes play complementary\nroles in astronomy.
In order to investigate the nature and characteristics of\nthe motions of very faint objects, a flexibly-pointed instrument capable of\nhigh astrometric accuracy is an ideal complement to current astrometric surveys\nand a unique tool for precision astrophysics. Such a space-based mission will\npush the frontier of precision astrometry from evidence of Earth-mass\nhabitable worlds around the nearest stars to distant Milky Way objects, up to\nthe Local Group of galaxies. As we enter the era of the James Webb Space\nTelescope and the new ground-based, adaptive-optics-enabled giant telescopes,\nby obtaining these high precision measurements on key objects that Gaia could\nnot reach, a mission that focuses on high precision astrometry science can\nconsolidate our theoretical understanding of the local universe, enable\nextrapolation of physical processes to remote redshifts, and derive a much more\nconsistent picture of cosmological evolution and the likely fate of our cosmos.\nSeveral missions have already been proposed to ESA to address the science case\nof faint objects in motion using high precision astrometry: NEAT for M3,\nmicro-NEAT for the S1 mission, and Theia for M4 and M5. Additional new mission\nconfigurations adapted with technological innovations could be envisioned to\npursue accurate measurements of these extremely small motions. The goal of this\nwhite paper is to address the fundamental science questions that are at stake\nwhen we focus on the motions of faint sky objects and to briefly review\ninstrumentation and mission profiles.", "category": "astro-ph_IM" }, { "text": "Cadmium Zinc Telluride Imager onboard AstroSat: a multi-faceted hard\n X-ray instrument: The AstroSat satellite is designed to make multi-waveband observations of\nastronomical sources and the Cadmium Zinc Telluride Imager (CZTI) instrument of\nAstroSat covers the hard X-ray band. CZTI has a large-area position-sensitive\nhard X-ray detector equipped with a Coded Aperture Mask, thus enabling\nsimultaneous background measurement. The ability to record simultaneous\ndetections of ionizing interactions in multiple detector elements is a special\nfeature of the instrument, and this is exploited to provide polarization\ninformation in the 100 - 380 keV region. CZTI provides sensitive spectroscopic\nmeasurements in the 20 - 100 keV region, and acts as an all-sky hard X-ray\nmonitor and polarimeter above 100 keV. During the first year of operation, CZTI\nhas recorded several gamma-ray bursts, measured the phase-resolved hard X-ray\npolarization of the Crab pulsar, and the hard X-ray spectra of many bright\nGalactic X-ray binaries. The excellent timing capability of the instrument has\nbeen demonstrated with simultaneous observations of the Crab pulsar with radio\ntelescopes like the GMRT and the Ooty radio telescope.", "category": "astro-ph_IM" }, { "text": "Solving Kepler's equation with CORDIC double iterations: In a previous work, we developed the idea to solve Kepler's equation with a\nCORDIC-like algorithm, which does not require any division, but still requires\nmultiplications in each iteration. Here we overcome this major shortcoming and\nsolve Kepler's equation using only bitshifts, additions, and one initial\nmultiplication. We prescale the initial vector with the eccentricity and the\nscale correction factor. The rotation direction is decided without correction\nfor the changing scale.
We find that double CORDIC iterations are\nself-correcting and compensate for possible wrong rotations in subsequent\niterations. The algorithm needs 75\% more iterations and delivers the eccentric\nanomaly and its sine and cosine terms times the eccentricity. The algorithm can\nbe adapted for the hyperbolic case, too. The new shift-and-add algorithm brings\nKepler's equation close to hardware and makes it possible to solve it with cheap\nand simple hardware components.", "category": "astro-ph_IM" }, { "text": "Theia: Faint objects in motion or the new astrometry frontier: In the context of the ESA M5 (medium mission) call we proposed a new\nsatellite mission, Theia, based on relative astrometry and extreme precision to\nstudy the motion of very faint objects in the Universe. Theia is primarily\ndesigned to study the local dark matter properties, the existence of Earth-like\nexoplanets in our nearest star systems and the physics of compact objects.\nFurthermore, about 15 $\%$ of the mission time was dedicated to an open\nobservatory for the wider community to propose complementary science cases.\nWith its unique metrology system and \"point and stare\" strategy, Theia's\nprecision would have reached the sub-micro-arcsecond level. This is about 1000\ntimes better than ESA/Gaia's accuracy for the brightest objects and represents\na factor 10-30 improvement for the faintest stars (depending on the exact\nobservational program). In the version submitted to ESA, we proposed an optical\n(350-1000nm) on-axis TMA telescope. Due to ESA technology readiness level\nrequirements, the camera's focal plane would have been made of CCD detectors,\nbut we anticipated an upgrade to CMOS detectors. Photometric measurements would\nhave been performed during slew time and stabilisation phases needed for\nreaching the required astrometric precision.", "category": "astro-ph_IM" }, { "text": "Speckle Space-Time Covariance in High-Contrast Imaging: We introduce a new framework for point-spread function (PSF) subtraction\nbased on the spatio-temporal variation of speckle noise in high-contrast\nimaging data where the sampling timescale is faster than the speckle evolution\ntimescale. One way that space-time covariance arises in the pupil is as\natmospheric layers translate across the telescope aperture and create small,\ntime-varying perturbations in the phase of the incoming wavefront. The\npropagation of this field to the focal plane preserves some of that space-time\ncovariance. To utilize this covariance, our new approach uses a\nKarhunen-Lo\'eve transform on an image sequence, as opposed to a set of single\nreference images as in previous applications of Karhunen-Lo\'eve Image\nProcessing (KLIP) for high-contrast imaging. With the recent development of\nphoton-counting detectors, such as microwave kinetic inductance detectors\n(MKIDs), this technique now has the potential to improve contrast when used as\na post-processing step. Preliminary testing on simulated data shows this\ntechnique can improve contrast by at least 10-20% from the original image, with\nsignificant potential for further improvement. For certain choices of\nparameters, this algorithm may provide larger contrast gains than spatial-only\nKLIP.", "category": "astro-ph_IM" }, { "text": "The Cherenkov Telescope Array On-Site integral sensitivity: observing\n the Crab: The Cherenkov Telescope Array (CTA) is the future large observatory in the\nvery high energy (VHE) domain.
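To make the CORDIC Kepler abstract above concrete: the solver walks the eccentric anomaly by halving angles, picking each rotation's direction from the sign of the Kepler residual and applying every angle twice (the self-correcting double iterations). The sketch below illustrates the control flow only; it precomputes the cosines and sines of the halving angles with the math library instead of using the pure shift-and-add arithmetic and prescaling described in the paper.

```python
import math

def kepler_cordic(M, e, n=29):
    """Solve M = E - e*sin(E) with CORDIC-like double iterations:
    rotate E by halving angles a_k, choosing the direction from the
    sign of the residual E - e*sin(E) - M; applying each a_k twice
    lets later steps compensate an occasional wrong rotation."""
    E, cosE, sinE = math.pi, -1.0, 0.0     # start at the interval midpoint
    for k in range(1, n + 1):
        a = math.pi / 2**k
        ca, sa = math.cos(a), math.sin(a)  # table lookups in hardware
        for _ in range(2):                 # the double iteration
            if E - e * sinE > M:           # rotate by -a
                E, cosE, sinE = E - a, cosE * ca + sinE * sa, sinE * ca - cosE * sa
            else:                          # rotate by +a
                E, cosE, sinE = E + a, cosE * ca - sinE * sa, sinE * ca + cosE * sa
    return E

E = kepler_cordic(2.0, 0.9)
print(E - 0.9 * math.sin(E))  # ~2.0, i.e. the mean anomaly is recovered
```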
Operating from 20 GeV to 300 TeV, it will be\ncomposed of tens of Imaging Air Cherenkov Telescopes (IACTs) deployed over a\nlarge area of a few square kilometers in both the southern and northern\nhemispheres. The CTA/DATA On-Site Analysis (OSA) is the system devoted to the\ndevelopment of dedicated pipelines and algorithms to be used at the CTA site\nfor the reconstruction, data quality monitoring, science monitoring and\nreal-time science alerting during observations. The OSA integral sensitivity is\ncomputed here for the most studied source in gamma-rays, the Crab Nebula, for a\nset of exposures ranging from 1000 seconds to 50 hours, using the full CTA\nSouthern array. The reason for the Crab Nebula selection as the first example\nof OSA integral sensitivity is twofold: (i) this source is characterized by a\nbroad spectrum covering the entire CTA energy range; (ii) it represents, at the\ntime of writing, the standard candle in the VHE domain and is often used as the\nunit for IACT sensitivity. The effect of different Crab Nebula emission models on\nthe CTA integral sensitivity is evaluated, to emphasize the need for\nrepresentative spectra of the CTA science targets in the evaluation of the OSA\nuse cases. Using the most complete model as input to the OSA integral\nsensitivity, we obtain a significant detection of the Crab Nebula (at about 10%\nof its flux) even for a 1000-second exposure, for an energy threshold of less\nthan 10 TeV.", "category": "astro-ph_IM" }, { "text": "Super-resolution Full Polarimetric Imaging for Radio Interferometry with\n Sparse Modeling: We propose a new technique for radio interferometry to obtain\nsuper-resolution full polarization images in all four Stokes parameters using\nsparse modeling. The proposed technique reconstructs the image in each Stokes\nparameter from the corresponding full-complex Stokes visibilities by utilizing\ntwo regularization functions: the $\ell _1$-norm and total variation (TV) of\nthe brightness distribution. As an application of this technique, we present\nsimulated linear polarization observations of two physically motivated models\nof M87 with the Event Horizon Telescope (EHT). We confirm that $\ell _1$+TV\nregularization can achieve an optimal resolution of $\sim 25-30$\% of the\ndiffraction limit $\lambda/D_{\rm max}$, which is the nominal spatial\nresolution of a radio interferometer, for both the total intensity (i.e. Stokes\n$I$) and linear polarizations (i.e. Stokes $Q$ and $U$). This optimal\nresolution is better than that obtained from the widely used Cotton-Schwab\nCLEAN algorithm or from using $\ell _1$ or TV regularizations alone.\nFurthermore, we find that $\ell _1$+TV regularization can achieve much better\nimage fidelity in linear polarization than other techniques over a wide range\nof spatial scales, not only in the super-resolution regime, but also on scales\nlarger than the diffraction limit. Our results clearly demonstrate that sparse\nreconstruction is a useful choice for high-fidelity full-polarimetric\ninterferometric imaging.", "category": "astro-ph_IM" }, { "text": "The Tianlai project: a 21cm cosmology experiment: In my talk at the 2nd Galileo-Xu Meeting, I presented several different\ntopics in 21cm cosmology for which I have done research. These include the\n21cm signature of the first stars[1,2], the 21cm signal from the IGM and\nminihalos[3], the effect of dark matter annihilations on the 21cm signal[4],\nthe 21cm forest by ionized/neutral regions[5], and the 21cm forest by minihalos\nand the earliest galaxies[6,7].
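Schematically, and up to the exact weighting conventions of the authors, the reconstruction described in the sparse-modeling abstract above solves, for each Stokes image $x$ with observed visibilities $v$ and Fourier sampling operator $F$,

$$ \hat{x} \;=\; \underset{x}{\arg\min}\; \lVert v - Fx \rVert_2^2 \;+\; \lambda_{\ell_1}\, \lVert x \rVert_1 \;+\; \lambda_{\rm TV}\, \mathrm{TV}(x), $$

where the $\ell_1$ term favors sparse brightness distributions, the total-variation term favors piecewise-smooth ones, and the $\lambda$'s balance the two against the data fidelity.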
In this conference proceeding I shall not repeat these\ndiscussions, but instead focus on the last part of my talk, i.e. the Tianlai\nproject, an experimental effort on low-redshift 21cm intensity mapping\nobservations for dark energy measurements.", "category": "astro-ph_IM" }, { "text": "What Does a Successful Postdoctoral Fellowship Publication Record Look\n Like?: Obtaining a prize postdoctoral fellowship in astronomy and astrophysics\ninvolves a number of factors, many of which cannot be quantified. One criterion\nthat can be measured is the publication record of an applicant. The publication\nrecords of past fellowship recipients may, therefore, provide some quantitative\nguidance for future prospective applicants. We investigated the publication\npatterns of recipients of the NASA prize postdoctoral fellowships in the\nHubble, Einstein, and Sagan programs from 2014 through 2017, using the NASA ADS\nreference system. We tabulated their publications at the point where fellowship\napplications were submitted, and we find that the 133 fellowship recipients in\nthat time frame had a median of 6 +/- 2 first-author publications and 14 +/- 6\nco-authored publications. The full range of first-author papers is 1 to 15, and\nthat of all papers is 2 to 76, indicating very diverse publication patterns.\nThus, while fellowship recipients generally have strong publication records,\nthe distribution of both first-author and co-authored papers is quite broad;\nthere is no apparent threshold of publications necessary to obtain these\nfellowships. We also examined the post-PhD publication rates for each of the\nthree fellowship programs, between male and female recipients, across the four\nyears of the analysis and find no consistent trends. We hope that these\nfindings will prove a useful reference to future junior scientists.", "category": "astro-ph_IM" }, { "text": "Event reconstruction with the proposed large area Cherenkov air shower\n detector SCORE: The proposed SCORE detector consists of a large array of light collecting\nmodules designed to sample the Cherenkov light front of extensive air showers\nin order to detect high energy gamma-rays. A large spacing of the detector\nstations makes it possible to cover a huge area with a reasonable effort, thus\nachieving a good sensitivity up to energies of a few tens of PeV. In this\npaper the event reconstruction algorithm for SCORE is presented and used to\nobtain the anticipated performance of the detector in terms of angular\nresolution, energy resolution, shower depth resolution and gamma/hadron\nseparation.", "category": "astro-ph_IM" }, { "text": "Design and performance of the Spider instrument: Here we describe the design and performance of the Spider instrument. Spider\nis a balloon-borne cosmic microwave background polarization imager that will\nmap part of the sky at 90, 145, and 280 GHz with sub-degree resolution and high\nsensitivity. This paper discusses the general design principles of the\ninstrument inserts, mechanical structures, optics, focal plane architecture,\nthermal architecture, and magnetic shielding of the TES sensors and SQUID\nmultiplexer.
We also describe the optical, noise, and magnetic shielding\nperformance of the 145 GHz prototype instrument insert.", "category": "astro-ph_IM" }, { "text": "Entering into the Wide Field Adaptive Optics Era on Maunakea: As part of the National Science Foundation funded \"Gemini in the Era of\nMultiMessenger Astronomy\" (GEMMA) program, Gemini Observatory is developing\nGNAO, a widefield adaptive optics (AO) facility for Gemini-North on Maunakea,\nthe only 8m-class open-access telescope available to the US astronomers in the\nnorthern hemisphere. GNAO will provide the user community with a queue-operated\nMulti-Conjugate AO (MCAO) system, enabling a wide range of innovative solar\nsystem, Galactic, and extragalactic science with a particular focus on\nsynergies with JWST in the area of time-domain astronomy. The GNAO effort\nbuilds on institutional investment and experience with the more limited\nblock-scheduled Gemini Multi-Conjugate System (GeMS), commissioned at Gemini\nSouth in 2013. The project involves close partnerships with the community\nthrough the recently established Gemini AO Working Group and the GNAO Science\nTeam, as well as external instrument teams. The modular design of GNAO will\nenable a planned upgrade to a Ground Layer AO (GLAO) mode when combined with an\nAdaptive Secondary Mirror (ASM). By enhancing the natural seeing by an expected\nfactor of two, GLAO will vastly improve Gemini North's observing efficiency for\nseeing-limited instruments and strengthen its survey capabilities for\nmulti-messenger astronomy.", "category": "astro-ph_IM" }, { "text": "Nanosatellite aerobrake maneuvering device: In this paper, we present the project of the heliogyro solar sail unit for\ndeployment of CubeSat constellation and satellite deorbiting. The ballistic\ncalculations show that constellation deployment period can vary from 0.18 years\nfor 450km initial orbit and 2 CubeSats up to 1.4 years for 650km initial orbit\nand 8 CubeSats. We also describe the structural and electrical design of the\nunit and consider aspects of its integration into a standard CubeSat frame.", "category": "astro-ph_IM" }, { "text": "A multi-method approach to radial-velocity measurement for single-object\n spectra: The derivation of radial velocities from large numbers of spectra that\ntypically result from survey work, requires automation. However, except for the\nclassical cases of slowly rotating late-type spectra, existing methods of\nmeasuring Doppler shifts require fine-tuning to avoid a loss of accuracy due to\nthe idiosyncrasies of individual spectra. The radial velocity spectrometer\n(RVS) on the Gaia mission, which will start operating very soon, prompted a new\nattempt at creating a measurement pipeline to handle a wide variety of spectral\ntypes.\n The present paper describes the theoretical background on which this software\nis based. However, apart from the assumption that only synthetic templates are\nused, we do not rely on any of the characteristics of this instrument, so our\nresults should be relevant for most telescope-detector combinations.\n We propose an approach based on the simultaneous use of several alternative\nmeasurement methods, each having its own merits and drawbacks, and conveying\nthe spectral information in a different way, leading to different values for\nthe measurement. 
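Among the alternative Doppler-shift measurement methods referred to in the radial-velocity paper above, the most familiar is cross-correlation of the observed spectrum against a synthetic template over a grid of trial velocities. A minimal sketch, using an invented single-line toy spectrum rather than anything from that paper:

```python
import numpy as np

def ccf_radial_velocity(wave, flux, template, v_grid_kms):
    """Cross-correlate an observed spectrum with a synthetic template.

    For each trial velocity the template is Doppler-shifted
    (lambda -> lambda * (1 + v/c)), resampled onto the observed grid,
    and correlated with the observed flux. The velocity of the CCF peak
    is refined with a parabolic fit to the peak and its two neighbours.
    """
    c = 299792.458  # km/s
    f = flux - flux.mean()
    ccf = np.empty(len(v_grid_kms))
    for i, v in enumerate(v_grid_kms):
        shifted = np.interp(wave, wave * (1.0 + v / c), template)
        ccf[i] = np.dot(f, shifted - shifted.mean())
    k = np.argmax(ccf)
    if 0 < k < len(ccf) - 1:  # parabolic interpolation of the peak
        denom = ccf[k - 1] - 2.0 * ccf[k] + ccf[k + 1]
        k = k + 0.5 * (ccf[k - 1] - ccf[k + 1]) / denom
    return np.interp(k, np.arange(len(v_grid_kms)), v_grid_kms)

# Toy spectrum: one Gaussian absorption line, shifted by +12 km/s
wave = np.linspace(5000.0, 5010.0, 2000)
template = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 5005.0) / 0.05) ** 2)
obs = np.interp(wave, wave * (1.0 + 12.0 / 299792.458), template)
print(ccf_radial_velocity(wave, obs, template, np.linspace(-50, 50, 201)))
```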
A comparison or a combination of the various results either\nleads to a \"best estimate\" or indicates to the user that the observed spectrum\nis problematic and should be analysed manually.\n We selected three methods and analysed the relationships and differences\nbetween them from a unified point of view; with each method an appropriate\nestimator for the individual random error is chosen. We also develop a\nprocedure for tackling the problem of template mismatch in a systematic way.\nFurthermore, we propose several tests for studying and comparing the\nperformance of the various methods as a function of the atmospheric parameters\nof the observed objects. Finally, we describe a procedure for obtaining a\nknowledge-based combination of the various Doppler-shift measurements.", "category": "astro-ph_IM" }, { "text": "First results about on-ground calibration of the Silicon Tracker for the\n AGILE satellite: The AGILE scientific instrument has been calibrated with a tagged\n$\gamma$-ray beam at the Beam Test Facility (BTF) of the INFN Laboratori\nNazionali di Frascati (LNF). The goal of the calibration was the measurement of the\nPoint Spread Function (PSF) as a function of photon energy and incidence\nangle, and the validation of the Monte Carlo (MC) simulation of the silicon\ntracker operation. The calibration setup is described and some preliminary\nresults are presented.", "category": "astro-ph_IM" }, { "text": "In-flight performance of the DAMPE silicon tracker: DAMPE (DArk Matter Particle Explorer) is a spaceborne high-energy cosmic ray\nand gamma-ray detector, successfully launched in December 2015. It is designed\nto probe astroparticle physics in the broad energy range from a few GeV to 100\nTeV. The scientific goals of DAMPE include the identification of possible\nsignatures of Dark Matter annihilation or decay, the study of the origin and\npropagation mechanisms of cosmic-ray particles, and gamma-ray astronomy. DAMPE\nconsists of four sub-detectors: a plastic scintillator strip detector, a\nSilicon-Tungsten tracKer-converter (STK), a BGO calorimeter and a neutron\ndetector. The STK is composed of six double layers of single-sided silicon\nmicro-strip detectors interleaved with three layers of tungsten for photon\nconversions into electron-positron pairs. The STK is a crucial component of\nDAMPE, allowing the direction of incoming photons to be determined, the tracks\nof cosmic rays to be reconstructed, and their absolute charge (Z) to be\nestimated. We present the\nin-flight performance of the STK based on two years of in-flight DAMPE data,\nwhich includes the noise behavior, signal response, thermal and mechanical\nstability, alignment and position resolution.", "category": "astro-ph_IM" }, { "text": "New Dark Matter Detector using Nanoscale Explosives: We present nanoscale explosives as a novel type of dark matter detector and\nstudy the ignition properties. When a Weakly Interacting Massive Particle (WIMP)\nfrom the Galactic Halo elastically scatters off a nucleus in the detector,\nthe small amount of energy deposited can trigger an explosion. For specificity,\nthis paper focuses on a type of two-component explosive known as a\nnanothermite, consisting of a metal and an oxide in close proximity. When the\ntwo components interact they undergo a rapid exothermic reaction --- an\nexplosion. As a specific example, we consider metal nanoparticles of 5 nm\nradius embedded in an oxide.
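The energy scale that decides whether such a detector fires is the elastic nuclear recoil energy. For a head-on collision, E_R,max = 2 mu^2 v^2 / m_N, with mu the WIMP-nucleus reduced mass; a quick numerical check is sketched below (the aluminium target and halo velocity are illustrative assumptions, not parameters from the paper).

```python
def max_recoil_energy_kev(m_wimp_gev, a_mass_number, v_kms=220.0):
    """Maximum nuclear recoil energy for elastic WIMP-nucleus scattering.

    E_R,max = 2 mu^2 v^2 / m_N with mu the reduced mass (head-on
    collision, non-relativistic; factors of c absorbed into units).
    """
    c_kms = 299792.458
    m_n = 0.9315 * a_mass_number            # nuclear mass in GeV/c^2
    mu = m_wimp_gev * m_n / (m_wimp_gev + m_n)
    e_r_gev = 2.0 * mu**2 * (v_kms / c_kms) ** 2 / m_n
    return e_r_gev * 1e6                    # GeV -> keV

# A 100 GeV WIMP on aluminium (A = 27) at a typical halo velocity:
print(max_recoil_energy_kev(100.0, 27))     # ~17 keV, above a ~10 keV threshold
```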
One cell contains more than a few million\nnanoparticles, and a large number of cells adds up to a total of 1 kg detector\nmass. A WIMP interacts with a metal nucleus of the nanoparticles, depositing\nenough energy to initiate a reaction at the interface between the two layers.\nWhen one nanoparticle explodes it initiates a chain reaction throughout the\ncell. A number of possible thermite materials are studied. Excellent background\nrejection can be achieved because of the nanoscale granularity of the detector:\nwhereas a WIMP will cause a single cell to explode, backgrounds will instead\nset off multiple cells.\n If the detector operates at room temperature, we find that WIMPs with masses\nabove 100 GeV (or for some materials above 1 TeV) could be detected; they\ndeposit enough energy ($>$10 keV) to cause an explosion. When operating\ncryogenically at liquid nitrogen or liquid helium temperatures, the nano\nexplosive WIMP detector can detect energy deposits as low as 0.5 keV, making\nit more sensitive to very light ($<$10 GeV) WIMPs than other dark matter\ndetectors.", "category": "astro-ph_IM" }, { "text": "Cosmic Inference: Constraining Parameters With Observations and Highly\n Limited Number of Simulations: Cosmological probes pose an inverse problem where the measurement result is\nobtained through observations, and the objective is to infer values of model\nparameters which characterize the underlying physical system -- our Universe.\nModern cosmological probes increasingly rely on measurements of the small-scale\nstructure, and the only way to accurately model physical behavior on those\nscales, roughly 65 Mpc/h or smaller, is via expensive numerical simulations. In\nthis paper, we provide a detailed description of a novel statistical framework\nfor obtaining accurate parameter constraints by combining observations with a\nvery limited number of cosmological simulations. The proposed framework\nutilizes multi-output Gaussian process emulators that are adaptively\nconstructed using Bayesian optimization methods. We compare several approaches\nfor constructing multi-output emulators that enable us to take possible\ninter-output correlations into account while maintaining the efficiency needed\nfor inference. Using the Lyman alpha forest flux power spectrum, we demonstrate\nthat our adaptive approach requires considerably fewer --- by a factor of a few\nin the Lyman alpha P(k) case considered here --- simulations compared to the\nemulation based on Latin hypercube sampling, and that the method is more robust\nin reconstructing parameters and their Bayesian credible intervals.", "category": "astro-ph_IM" }, { "text": "aTmcam: A Simple Atmospheric Transmission Monitoring Camera For Sub 1%\n Photometric Precision: Traditional color and airmass corrections can typically achieve ~0.02 mag\nprecision in photometric observing conditions. A major limiting factor is the\nvariability in atmospheric throughput, which changes on timescales of less than\na night. We present preliminary results for a system to monitor the throughput\nof the atmosphere, which, when coupled to more traditional techniques, should\nenable photometric precision of better than 1% in photometric conditions. The\nsystem, aTmCam, consists of a set of imagers each with a narrow-band filter\nthat monitors the brightness of suitable standard stars.
Each narrowband filter\nis selected to monitor a different wavelength region of the atmospheric\ntransmission, including regions dominated by precipitable water absorption\nand aerosol scattering. We have built a prototype system to test the notion\nthat an atmospheric model derived from a few color-index measurements can be\nan accurate representation of the true atmospheric transmission. We have\nmeasured the atmospheric transmission with both narrowband photometric\nmeasurements and spectroscopic measurements; we show that the narrowband\nimaging approach can predict the changes in the throughput of the atmosphere to\nbetter than ~10% across a broad wavelength range, so as to achieve photometric\nprecision of better than 0.01 mag.", "category": "astro-ph_IM" }, { "text": "Two-index model for characterizing site-specific night sky brightness\n patterns: Determining the all-sky radiance distribution produced by artificial light\nsources is a computationally demanding task requiring an\nintensive calculation load. We develop in this work an analytic formulation\nthat provides the all-sky radiance distribution produced by an artificial light\nsource as an explicit and analytic function of the observation direction,\ndepending on just two parameters that characterize the overall effects of the\natmosphere. One of these parameters is related to the effective attenuation of\nthe light beams, whereas the other accounts for the overall asymmetry of the\ncombined scattering processes in molecules and aerosols. By means of this\nformulation a wide range of all-sky radiance distributions can be efficiently\nand accurately calculated in a short time. This substantial reduction in the\nnumber of required parameters, in comparison with other currently used\napproaches, is expected to facilitate the development of new applications in\nthe field of light pollution research.", "category": "astro-ph_IM" }, { "text": "Per aspera ad astra simul: Through difficulties to the stars together: In this article, we detail the strategic partnerships \"Per Aspera Ad Astra\nSimul\" and \"European Collaborating Astronomers Project:\nEspa\~na-Czechia-Slovakia\". These strategic partnerships were conceived to\nfoster international collaboration for educational activities (aimed at all\nlevels) as well as to support the development and growth of early-career\nresearchers. The activities, carried out under the auspices of these strategic\npartnerships, demonstrate that Key Action 2 of the Erasmus+ programme can be an\nextremely valuable resource for supporting international educational projects,\nas well as the great impact that such projects can have on the general public\nand on the continued development of early-career researchers. We strongly\nencourage other educators to make use of the opportunities offered by the\nErasmus+ scheme.", "category": "astro-ph_IM" }, { "text": "From Photometric Redshifts to Improved Weather Forecasts: machine\n learning and proper scoring rules as a basis for interdisciplinary work: The amount, size, and complexity of astronomical data-sets and databases have\ngrown rapidly in recent decades, due to new technologies and dedicated\nsurvey telescopes. Besides dealing with poly-structured and complex data,\nsparse data has become a field of growing scientific interest. A specific field\nof Astroinformatics research is the estimation of redshifts of extra-galactic\nsources by using sparse photometric observations.
Many techniques have been\ndeveloped to produce those estimates with increasing precision. In recent\nyears, models have been favored which, instead of providing only a point\nestimate, are able to generate probability density functions (PDFs) in order to\ncharacterize and quantify the uncertainties of their estimates.\n Crucial to the development of those models is a proper, mathematically\nprincipled way to evaluate and characterize their performances, based on\nscoring functions as well as on tools for assessing calibration. Still, the\nliterature often relies on inappropriate methods to express the quality of the\nestimates; these are frequently insufficient and can generate misleading\ninterpretations. In this work we summarize how to correctly evaluate errors and\nforecast quality when dealing with PDFs. We describe the use of the\nlog-likelihood, the continuous ranked probability score (CRPS) and the\nprobability integral transform (PIT) to characterize the calibration as well as\nthe sharpness of predicted PDFs. We present what we achieved when using proper\nscoring rules to train deep neural networks as well as to evaluate the model\nestimates and how this work led from well-calibrated redshift estimates to\nimprovements in probabilistic weather forecasting. The presented work is an\nexample of interdisciplinarity in data science and illustrates how methods can\nhelp to bridge gaps between different fields of application.", "category": "astro-ph_IM" }, { "text": "Revisiting the Solar Research Cyberinfrastructure Needs: A White Paper\n of Findings and Recommendations: Solar and heliospheric physics are areas of remarkable data-driven\ndiscovery. Recent advances in high-cadence, high-resolution multiwavelength\nobservations, growing amounts of data from realistic modeling, and operational\nneeds for uninterrupted science-quality data coverage generate the demand for\nsolar metadata standardization and an overall healthy data infrastructure. This\nwhite paper is prepared as an effort of the working group \"Uniform Semantics\nand Syntax of Solar Observations and Events\" created within the \"Towards\nIntegration of Heliophysics Data, Modeling, and Analysis Tools\" EarthCube\nResearch Coordination Network (@HDMIEC RCN), with primary objectives to discuss\ncurrent advances and identify future needs for the solar research\ncyberinfrastructure. The white paper summarizes presentations and discussions\nheld during the special working group session at the EarthCube Annual Meeting\non June 19th, 2020, as well as community contributions gathered during a series\nof preceding workshops and subsequent RCN working group sessions. The authors\nprovide examples of the current standing of the solar research\ncyberinfrastructure, and describe the problems related to current data handling\napproaches. The list of the top-level recommendations agreed by the authors of\nthe current white paper is presented at the beginning of the paper.", "category": "astro-ph_IM" }, { "text": "Measurement of the atmospheric primary aberrations by 4-aperture DIMM: The present paper investigates and discusses the ability of the Hartmann test\nwith a 4-aperture DIMM to measure the atmospheric primary aberrations which, in\nturn, can be used to calculate the atmospheric coherence time. By\nperforming numerical simulations, we show that the 4-aperture DIMM is able to\nmeasure the defocus and astigmatism terms correctly, while its results are not\nreliable for the coma.
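For Gaussian predictive PDFs, both scores named in the photometric-redshift abstract above have simple closed forms: the CRPS follows the Gneiting & Raftery (2007) expression, and the PIT is just the predictive CDF evaluated at the truth, whose histogram should be flat for calibrated forecasts. A minimal sketch with simulated estimates (the toy "redshifts" below are invented):

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(mu, sigma, x):
    """CRPS for Gaussian predictive PDFs (closed form, Gneiting & Raftery 2007)."""
    z = (x - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

def pit_values(mu, sigma, x):
    """Probability integral transform; uniform on [0, 1] iff calibrated."""
    return norm.cdf(x, loc=mu, scale=sigma)

rng = np.random.default_rng(1)
truth = rng.normal(0.5, 0.1, size=5000)           # toy "true redshifts"
mu = truth + rng.normal(0.0, 0.02, size=5000)     # predicted means
crps = crps_gaussian(mu, 0.02, truth).mean()      # sharp, calibrated forecasts
pit = pit_values(mu, 0.02, truth)
print(crps, np.histogram(pit, bins=10, range=(0, 1))[0])  # flat histogram expected
```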
The most important limitation in the measurement of the\nprimary aberrations by the 4-aperture DIMM is the centroid displacement of the\nspots caused by higher-order aberrations. This effect is\nnegligible when calculating the defocus and astigmatism terms, while it cannot be\nignored in the calculation of the coma.", "category": "astro-ph_IM" }, { "text": "A Hybrid Algorithm of Fast Invariant Imbedding and Doubling-Adding\n Methods for Efficient Multiple Scattering Calculations: An efficient hybrid numerical method for multiple scattering calculations is\nproposed. We use the well-established doubling-adding method to find the\nreflection function of the lowermost homogeneous slab comprising the atmosphere\nof our interest. This reflection function provides the initial value for the\nfast invariant imbedding method of Sato et al. (1977), with which layers are\nadded until the final reflection function of the entire atmosphere is obtained.\nEven in the cases least suited to the fast invariant imbedding method, the\nexecution speed of this hybrid method is no less than half that of the\ndoubling-adding method, probably the fastest algorithm available. The\nefficiency of the proposed method increases rapidly with the number of\natmospheric slabs and the optical thickness of each slab. For some cases, its\nexecution speed is approximately four times faster than the doubling-adding\nmethod. This work has been published in NAIS Journal (ISSN 1882-9392) Vol. 7,\n5-16 (2012).", "category": "astro-ph_IM" }, { "text": "GALAXY package for N-body simulation: This posting announces public availability of the GALAXY software package\ndeveloped by the author over the past 40 years. It is a highly efficient code\nfor the evolution of (almost) isolated, collisionless stellar systems, both\ndisk-like and ellipsoidal. In addition to the N-body code galaxy, which offers\neleven different methods to compute the gravitational accelerations, the\npackage also includes sophisticated set-up and analysis software. This paper\ngives an outline of the contents of the package and provides links to the\nsource code and a comprehensive on-line manual. While not as versatile as tree\ncodes, the particle-mesh methods in this package are shown, for certain\nrestricted applications, to be between 50 and 200 times faster than a\nwidely-used tree code.", "category": "astro-ph_IM" }, { "text": "Impact of infrasound atmospheric noise on gravity detectors used for\n astrophysical and geophysical applications: Density changes in the atmosphere produce a fluctuating gravity field that\naffects gravity strainmeters or gravity gradiometers used for the detection of\ngravitational waves and for geophysical applications. This work addresses the\nimpact of the atmospheric local gravity noise on such detectors, extending\nprevious analyses. In particular we present the effect introduced by the\nbuilding housing the detectors, and we analyze local gravity-noise suppression\nby constructing the detector underground. We also present new sound spectra and\ncorrelation measurements.
The results obtained are important for the design of\nfuture gravitational-wave detectors and gravity gradiometers used to detect\nprompt gravity perturbations from earthquakes.", "category": "astro-ph_IM" }, { "text": "Camera Calibration of the CTA-LST prototype: The Cherenkov Telescope Array (CTA) is the next-generation gamma-ray\nobservatory that is expected to reach one order of magnitude better sensitivity\nthan that of current telescope arrays. The Large-Sized Telescopes (LSTs) have\nan essential role in extending the energy range down to 20 GeV. The prototype\nLST (LST-1) proposed for CTA was built in La Palma, the northern site of CTA,\nin 2018. LST-1 is currently in its commissioning phase and moving towards\nscientific observations. The LST-1 camera consists of 1855 photomultiplier\ntubes (PMTs) which are sensitive to Cherenkov light. PMT signals are recorded\nas waveforms sampled at 1 GHz rate with Domino Ring Sampler version 4 (DRS4)\nchips. Fast sampling is essential to achieve a low energy threshold by\nminimizing the integration of background light from the night sky. Absolute\ncharge calibration can be performed by the so-called F-factor method, which\nallows calibration constants to be monitored even during observations. A\ncalibration pipeline of the camera readout has been developed as part of the\nLST analysis chain. The pipeline performs DRS4 pedestal and timing corrections,\nas well as the extraction and calibration of charge and time of pulses for\nsubsequent higher-level analysis. The performance of each calibration step is\nexamined, and especially charge and time resolution of the camera readout are\nevaluated and compared to CTA requirements. We report on the current status of\nthe calibration pipeline, including the performance of each step through to\nsignal reconstruction, and the consistency with Monte Carlo simulations.", "category": "astro-ph_IM" }, { "text": "Cryogenic characterization of the Planck sorption cooler system flight\n model: This paper is part of the Prelaunch status LFI papers published on JINST:\nhttp://www.iop.org/EJ/journal/-page=extra.proc5/1748-0221\n Two continuous closed-cycle hydrogen Joule-Thomson (J-T) sorption coolers\nhave been fabricated and assembled by the Jet Propulsion Laboratory (JPL) for\nthe European Space Agency (ESA) Planck mission. Each refrigerator has been\ndesigned to provide a total of ~ 1W of cooling power at two instrument\ninterfaces: they directly cool the Planck Low Frequency Instrument (LFI) around\n20K while providing a pre-cooling stage for a 4 K J-T mechanical refrigerator\nfor the High Frequency Instrument (HFI). After sub-system level validation at\nJPL, the cryocoolers have been delivered to ESA in 2005. In this paper we\npresent the results of the cryogenic qualification and test campaigns of the\nNominal Unit on the flight model spacecraft performed at the CSL (Centre\nSpatial de Liege) facilities in 2008. Test results in terms of input power,\ncooling power, temperature, and temperature fluctuations over the flight\nallowable ranges for these interfaces are reported and analyzed with respect to\nmission requirements.", "category": "astro-ph_IM" }, { "text": "Real-Time Analysis sensitivity evaluation of the Cherenkov Telescope\n Array: The Cherenkov Telescope Array (CTA), the new generation very high-energy\ngamma-ray observatory, will improve the flux sensitivity of the current\nCherenkov telescopes by an order of magnitude over a continuous range from\nabout 10 GeV to above 100 TeV. 
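The F-factor method mentioned in the LST-1 camera abstract above infers the number of photoelectrons, and from it the gain, using only the mean and variance of flat-field pulse charges: N_pe = F^2 <Q>^2 / Var(Q). A minimal sketch under a simple PMT noise model; the excess-noise factor and the simulated numbers are illustrative assumptions, not LST-1 values.

```python
import numpy as np

def f_factor_calibration(charges, f_squared=1.2, pedestal_var=0.0):
    """Estimate photoelectrons and gain from flat-field pulse charges.

    F-factor method: N_pe = F^2 * mean(Q)^2 / (var(Q) - var(pedestal)),
    gain = mean(Q) / N_pe.  f_squared is the PMT excess-noise factor.
    """
    q = np.asarray(charges, dtype=float)
    var = q.var() - pedestal_var           # remove electronic-noise variance
    n_pe = f_squared * q.mean() ** 2 / var
    return n_pe, q.mean() / n_pe

# Toy flat-field run: ~100 p.e. pulses at a gain of 20 ADC counts per p.e.,
# with 20% single-p.e. resolution (hence F^2 = 1.2 in this model).
rng = np.random.default_rng(0)
npe_true = rng.poisson(100.0, size=20000)
charges = rng.normal(20.0 * npe_true, 20.0 * np.sqrt(0.2 * npe_true))
print(f_factor_calibration(charges))       # ~ (100, 20)
```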
With tens of telescopes distributed in the\nNorthern and Southern hemispheres, the large effective area and field of view\ncoupled with the fast pointing capability make CTA a crucial instrument for the\ndetection and understanding of the physics of transient, short-timescale\nvariability phenomena (e.g. Gamma-Ray Bursts, Active Galactic Nuclei, gamma-ray\nbinaries, serendipitous sources). The key CTA system for the fast\nidentification of flaring events is the Real-Time Analysis (RTA) pipeline, a\nscience alert system that will automatically detect and generate science alerts\nwith a maximum latency of 30 seconds with respect to the triggering event\ncollection and ensure fast communication to/from the astrophysics community.\nAccording to the CTA design requirements, the RTA search for a true transient\nevent should be performed on multiple time scales (from minutes to hours) with\na sensitivity not worse than three times the nominal CTA sensitivity. Given the\nCTA requirement constraints on the RTA efficiency and the fast response ability\ndemanded by the transient science, we perform a preliminary evaluation of the\nRTA sensitivity as a function of the CTA high-level technical performance (e.g.\neffective area, point spread function) and the observing time. This preliminary\napproach allows the exploration of the complex parameter space defined by the\nscientific and technological requirements, with the aim of defining the\nfeasibility range of the input parameters and the minimum background rejection\ncapability of the RTA pipeline.", "category": "astro-ph_IM" }, { "text": "Performance of the ARIANNA Hexagonal Radio Array: Installation of the ARIANNA Hexagonal Radio Array (HRA) on the Ross Ice Shelf\nof Antarctica has been completed. This detector serves as a pilot program to\nthe ARIANNA neutrino telescope, which aims to measure the diffuse flux of very\nhigh energy neutrinos by observing the radio pulse generated by\nneutrino-induced charged particle showers in the ice. All HRA stations ran\nreliably and took data during the entire 2014-2015 austral summer season. A new\nradio signal direction reconstruction procedure is described, and is observed\nto have a resolution better than a degree. The reconstruction is used in a\npreliminary search for potential neutrino candidate events in the data from one\nof the newly installed detector stations. Three cuts are used to separate radio\nbackgrounds from neutrino signals. The cuts are found to filter out all data\nrecorded by the station during the season while preserving 85.4% of simulated\nneutrino events that trigger the station. This efficiency is similar to that\nfound in analyses of previous HRA data taking seasons.", "category": "astro-ph_IM" }, { "text": "Consistent SPH Simulations of Protostellar Collapse and Fragmentation: We study the consistency and convergence of smoothed particle hydrodynamics\n(SPH), as a function of the interpolation parameters, namely the number of\nparticles $N$, the number of neighbors $n$, and the smoothing length $h$, using\nsimulations of the collapse and fragmentation of protostellar rotating cores.\nThe calculations are made using a modified version of the GADGET-2 code that\nemploys an improved scheme for the artificial viscosity and power-law\ndependences of $n$ and $h$ on $N$, as was recently proposed by Zhu et al.,\nwhich comply with the combined limit $N\\to\\infty$, $h\\to 0$, and $n\\to\\infty$\nwith $n/N\\to 0$ for full SPH consistency, as the domain resolution is\nincreased. 
We apply this realization to the \"standard isothermal test case\" in\nthe variant calculated by Burkert & Bodenheimer and the Gaussian cloud model of\nBoss to investigate the response of the method to adaptive smoothing lengths in\nthe presence of large density and pressure gradients. The degree of consistency\nis measured by tracking how well the estimates of the consistency integral\nrelations reproduce their continuous counterparts. In particular, $C^{0}$ and\n$C^{1}$ particle consistency is demonstrated, meaning that the calculations are\nclose to second-order accuracy. As long as $n$ is increased with $N$, mass\nresolution also improves as the minimum resolvable mass $M_{\rm min}\sim\nn^{-1}$. This aspect allows proper calculation of small-scale structures in the\nflow associated with the formation and instability of protostellar disks around\nthe growing fragments, which are seen to develop a spiral structure and\nfragment into close binary/multiple systems as supported by recent\nobservations.", "category": "astro-ph_IM" }, { "text": "SPECULOOS exoplanet search and its prototype on TRAPPIST: One of the most significant goals of modern science is establishing whether\nlife exists around other suns. The most direct path towards its achievement is\nthe detection and atmospheric characterization of terrestrial exoplanets with\npotentially habitable surface conditions. The nearest ultracool dwarfs (UCDs),\ni.e. very-low-mass stars and brown dwarfs with effective temperatures lower\nthan 2700 K, represent a unique opportunity to reach this goal within the next\ndecade. The potential of the transit method for detecting potentially habitable\nEarth-sized planets around these objects is drastically increased compared to\nEarth-Sun analogs. Furthermore, only a terrestrial planet transiting a nearby\nUCD would be amenable to a thorough atmospheric characterization, including\nthe search for possible biosignatures, with near-future facilities such as the\nJames Webb Space Telescope. In this chapter, we first describe the physical\nproperties of UCDs as well as the unique potential they offer for the detection\nof potentially habitable Earth-sized planets suitable for atmospheric\ncharacterization. Then, we present the SPECULOOS ground-based transit survey,\nwhich will search for Earth-sized planets transiting the nearest UCDs, as well\nas its prototype survey on the TRAPPIST telescopes. We conclude by discussing\nthe prospects offered by the recent detection by this prototype survey of a\nsystem of seven temperate Earth-sized planets transiting a nearby UCD,\nTRAPPIST-1.", "category": "astro-ph_IM" }, { "text": "An Electron-Tracking Compton Telescope for a Survey of the Deep Universe\n by MeV gamma-rays: Photon imaging in the MeV band faces serious difficulties due to huge\nbackgrounds and blurred images, which originate from the incomplete\ndetermination of the physical parameters of Compton scattering in detection,\ne.g., the lack of directional information on the recoil electrons. The most\nrecent major mission/instrument in the MeV band, the Compton Gamma Ray\nObservatory/COMPTEL, a Compton Camera (CC), detected a mere $\sim30$\npersistent sources. This is in stark contrast with the $\sim$2000 sources in the GeV\nband. Here we report the performance of an Electron-Tracking Compton Camera\n(ETCC), and demonstrate that it has good potential to break through this stagnation\nin MeV gamma-ray astronomy.
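The Compton kinematics underlying both CCs and ETCCs relate the two measured energies to the photon scattering angle; what the electron track adds is the azimuth around the resulting cone (the SPD). A minimal sketch of the energy-to-angle step (the 662 keV example is illustrative):

```python
import numpy as np

M_E = 511.0  # electron rest energy, keV

def compton_scatter_angle(e_gamma, e_electron):
    """Photon scattering angle from the measured energies (keV).

    cos(theta) = 1 - m_e c^2 * (1/E_gamma' - 1/E_gamma),
    with E_gamma' = E_gamma - E_e the scattered-photon energy.
    Without the recoil-electron direction this only constrains the
    source to a cone; an ETCC's electron track fixes the azimuth.
    """
    e_scattered = e_gamma - e_electron
    cos_theta = 1.0 - M_E * (1.0 / e_scattered - 1.0 / e_gamma)
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# A 662 keV photon depositing 300 keV on the recoil electron
print(compton_scatter_angle(662.0, 300.0))  # ~69 deg
```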
The ETCC provides all the parameters of\nCompton-scattering by measuring 3-D recoil electron tracks; then the Scatter\nPlane Deviation (SPD) lost in CCs is recovered. The energy loss rate (dE/dx),\nwhich CCs cannot measure, is also obtained, and is found to be indeed helpful\nto reduce the background under conditions similar to space. Accordingly the\nsignificance in gamma detection is improved severalfold. On the other hand, SPD\nis essential to determine the point-spread function (PSF) quantitatively. The\nSPD resolution is improved close to the theoretical limit for multiple\nscattering of recoil electrons. With such a well-determined PSF, we demonstrate\nfor the first time that it is possible to provide reliable sensitivity in\nCompton imaging without utilizing an optimization algorithm. As such, this\nstudy highlights the fundamental weak-points of CCs. In contrast we demonstrate\nthe possibility of ETCC reaching the sensitivity below $1\\times10^{-12}$ erg\ncm$^{-2}$ s$^{-1}$ at 1 MeV.", "category": "astro-ph_IM" }, { "text": "Optimized Large-Scale CMB Likelihood And Quadratic Maximum Likelihood\n Power Spectrum Estimation: We revisit the problem of exact CMB likelihood and power spectrum estimation\nwith the goal of minimizing computational cost through linear compression. This\nidea was originally proposed for CMB purposes by Tegmark et al.\\ (1997), and\nhere we develop it into a fully working computational framework for large-scale\npolarization analysis, adopting \\WMAP\\ as a worked example. We compare five\ndifferent linear bases (pixel space, harmonic space, noise covariance\neigenvectors, signal-to-noise covariance eigenvectors and signal-plus-noise\ncovariance eigenvectors) in terms of compression efficiency, and find that the\ncomputationally most efficient basis is the signal-to-noise eigenvector basis,\nwhich is closely related to the Karhunen-Loeve and Principal Component\ntransforms, in agreement with previous suggestions. For this basis, the\ninformation in 6836 unmasked \\WMAP\\ sky map pixels can be compressed into a\nsmaller set of 3102 modes, with a maximum error increase of any single\nmultipole of 3.8\\% at $\\ell\\le32$, and a maximum shift in the mean values of a\njoint distribution of an amplitude--tilt model of 0.006$\\sigma$. This\ncompression reduces the computational cost of a single likelihood evaluation by\na factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust\nlikelihood by implicitly regularizing nearly degenerate modes. Finally, we use\nthe same compression framework to formulate a numerically stable and\ncomputationally efficient variation of the Quadratic Maximum Likelihood\nimplementation that requires less than 3 GB of memory and 2 CPU minutes per\niteration for $\\ell \\le 32$, rendering low-$\\ell$ QML CMB power spectrum\nanalysis fully tractable on a standard laptop.", "category": "astro-ph_IM" }, { "text": "India's first robotic eye for time domain astrophysics: the GROWTH-India\n telescope: We present the design and performance of the GROWTH-India telescope, a 0.7 m\nrobotic telescope dedicated to time-domain astronomy. The telescope is equipped\nwith a 4k back-illuminated camera giving a 0.82-degree field of view and\nsensitivity of m_g ~20.5 in 5-min exposures. Custom software handles\nobservatory operations: attaining high on-sky observing efficiencies (>~ 80%)\nand allowing rapid response to targets of opportunity. 
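The signal-to-noise eigenvector compression described in the likelihood paper above amounts to solving a generalized eigenproblem for the signal and noise covariance matrices and keeping the highest-eigenvalue modes. A minimal sketch with invented covariances (not the WMAP matrices):

```python
import numpy as np
from scipy.linalg import eigh

def sn_compress(signal_cov, noise_cov, data, n_modes):
    """Compress data into the top signal-to-noise eigenmodes.

    Solves the generalized eigenproblem S v = lambda N v; eigenvectors
    with large lambda carry the most signal information per mode
    (the Karhunen-Loeve / signal-to-noise basis).
    """
    lam, vecs = eigh(signal_cov, noise_cov)   # ascending eigenvalues
    basis = vecs[:, ::-1][:, :n_modes]        # keep the highest-S/N modes
    return basis.T @ data, basis

# Toy example: 200-pixel data with a smooth signal covariance
n = 200
x = np.arange(n)
S = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 20.0) ** 2)
N = 0.1 * np.eye(n)
d = np.random.default_rng(2).multivariate_normal(np.zeros(n), S + N)
compressed, basis = sn_compress(S, N, d, n_modes=30)
print(compressed.shape)  # (30,): most of the information in 15% of the modes
```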
The data processing\npipelines are capable of performing PSF photometry as well as image subtraction\nfor transient searches. We also present an overview of the GROWTH-India\ntelescope's contributions to the studies of Gamma-ray Bursts, the\nelectromagnetic counterparts to gravitational wave sources, supernovae, novae\nand solar system objects.", "category": "astro-ph_IM" }, { "text": "Three editions of the Star Catalogue of Tycho Brahe: Tycho Brahe completed his catalogue with the positions and magnitudes of 1004\nfixed stars in 1598. This catalogue circulated in manuscript form. Brahe edited\na shorter version with 777 stars, printed in 1602, and Kepler edited the full\ncatalogue of 1004 stars, printed in 1627. We provide machine-readable versions\nof the three versions of the catalogue, describe the differences between them\nand briefly discuss their accuracy on the basis of comparison with modern data\nfrom the Hipparcos Catalogue. We also compare our results with earlier analyses\nby Dreyer (1916) and Rawlins (1993), finding good overall agreement. The\nmagnitudes given by Brahe correlate well with modern values, his longitudes and\nlatitudes have error distributions with widths of about 2 arcmin, with excess\nnumbers of stars with larger errors (as compared to Gaussian distributions), in\nparticular for the faintest stars. Errors in positions larger than 10 arcmin,\nwhich comprise about 15 per cent of the entries, are likely due to computing or\ncopying errors.", "category": "astro-ph_IM" }, { "text": "Overview of the SAPHIRA Detector for AO Applications: We discuss some of the unique details of the operation and behavior of\nLeonardo SAPHIRA detectors, particularly in relation to their usage for\nadaptive optics wavefront sensing. SAPHIRA detectors are 320$\\times$256@24\n$\\mu$m pixel HgCdTe linear avalanche photodiode arrays and are sensitive to\n0.8-2.5 $\\mu m$ light. SAPHIRA arrays permit global or line-by-line resets, of\nthe entire detector or just subarrays of it, and the order in which pixels are\nreset and read enable several readout schemes. We discuss three readout modes,\nthe benefits, drawbacks, and noise sources of each, and the observational modes\nfor which each is optimal. We describe the ability of the detector to read\nsubarrays for increased frame rates, and finally clarify the differences\nbetween the avalanche gain (which is user-adjustable) and the charge gain\n(which is not).", "category": "astro-ph_IM" }, { "text": "A semi-supervised Machine Learning search for never-seen\n Gravitational-Wave sources: By now, tens of gravitational-wave (GW) events have been detected by the LIGO\nand Virgo detectors. These GWs have all been emitted by compact binary\ncoalescence, for which we have excellent predictive models. However, there\nmight be other sources for which we do not have reliable models. Some are\nexpected to exist but to be very rare (e.g., supernovae), while others may be\ntotally unanticipated. So far, no unmodeled sources have been discovered, but\nthe lack of models makes the search for such sources much more difficult and\nless sensitive. We present here a search for unmodeled GW signals using\nsemi-supervised machine learning. We apply deep learning and outlier detection\nalgorithms to labeled spectrograms of GW strain data, and then search for\nspectrograms with anomalous patterns in public LIGO data. We searched $\\sim\n13\\%$ of the coincident data from the first two observing runs. No candidates\nof GW signals were detected in the data analyzed. 
Using simulated signals, we evaluate the sensitivity\nof the search, show that it can detect\nspectrograms containing unusual or unexpected GW patterns, and report the\nwaveforms and amplitudes for which a $50\%$ detection rate is achieved.", "category": "astro-ph_IM" }, { "text": "A Site Evaluation Campaign for a Ground Based Atmospheric Cherenkov\n Telescope in Romania: Around the world, several scientific projects share an interest in a global\nnetwork of small Cherenkov telescopes for monitoring observations of the\nbrightest blazars - the DWARF network. A small, ground-based imaging\natmospheric Cherenkov telescope of the latest generation is intended to be installed\nand operated in Romania as a component of the DWARF network. To prepare the\nconstruction of the observatory, two support projects have been initiated.\nWithin the framework of these projects, we have assessed a number of possible\nsites where the observatory could be located. In this paper we briefly report\non the general characteristics of the four best sites, selected by applying\ncriteria on local infrastructure, nearby facilities and social impact.", "category": "astro-ph_IM" }, { "text": "The Future of Astronomical Data Infrastructure: Meeting Report: The astronomical community is grappling with the increasing volume and\ncomplexity of data produced by modern telescopes, due to difficulties in\nreducing, accessing, analyzing, and combining archives of data. To address this\nchallenge, we propose the establishment of a coordinating body, an \"entity,\"\nwith the specific mission of enhancing the interoperability, archiving,\ndistribution, and production of both astronomical data and software. This\nreport is the culmination of a workshop held in February 2023 on the Future of\nAstronomical Data Infrastructure. Attended by 70 scientists and software\nprofessionals from ground-based and space-based missions and archives spanning\nthe entire spectrum of astronomical research, the group deliberated on the\nprevailing state of software and data infrastructure in astronomy, identified\npressing issues, and explored potential solutions. In this report, we describe\nthe ecosystem of astronomical data, its existing flaws, and the many gaps,\nduplication, inconsistencies, barriers to access, drags on productivity, missed\nopportunities, and risks to the long-term integrity of essential data sets. We\nalso highlight the successes and failures in a set of deep dives into several\ndifferent illustrative components of the ecosystem, included as an appendix.", "category": "astro-ph_IM" }, { "text": "Bayesian jackknife tests with a small number of subsets: Application to\n HERA 21cm power spectrum upper limits: We present a Bayesian jackknife test for assessing the probability that a\ndata set contains biased subsets, and, if so, which of the subsets are likely\nto be biased. The test can be used to assess the presence and likely source of\nstatistical tension between different measurements of the same quantities in an\nautomated manner. Under certain broadly applicable assumptions, the test is\nanalytically tractable. We also provide an open-source code, CHIBORG, that\nperforms both analytic and numerical computations of the test on general\nGaussian-distributed data.
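As a simplified frequentist stand-in for the consistency question that the Bayesian jackknife addresses (this is deliberately not the CHIBORG test itself), one can ask whether independent subset measurements are consistent with a single common mean, via an inverse-variance weighted mean and a chi-squared statistic:

```python
import numpy as np
from scipy.stats import chi2

def subset_consistency(means, variances):
    """Chi-squared test that independent subset measurements share one mean.

    Small p-values flag potentially biased subsets; a crude analogue of
    the question the Bayesian jackknife answers with posterior odds.
    """
    means = np.asarray(means, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    mu = np.sum(w * means) / np.sum(w)          # inverse-variance weighted mean
    stat = np.sum(w * (means - mu) ** 2)
    p = chi2.sf(stat, df=len(means) - 1)
    return mu, stat, p

# Four hypothetical sub-seasons, one of them biased high
print(subset_consistency([1.0, 1.1, 0.9, 2.5], [0.04, 0.04, 0.04, 0.04]))
```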
After exploring the information-theoretic aspects\nof the test and its performance with an array of simulations, we apply it to\ndata from the Hydrogen Epoch of Reionization Array (HERA) to assess whether\ndifferent sub-seasons of observing can justifiably be combined to produce a\ndeeper 21cm power spectrum upper limit. We find that, with a handful of\nexceptions, the HERA data in question are statistically consistent and this\ndecision is justified. We conclude by pointing out the wide applicability of\nthis test, including to CMB experiments and the $H_0$ tension.", "category": "astro-ph_IM" }, { "text": "The Astro-WISE Optical Image Pipeline: Development and Implementation: We have designed and implemented a novel way to process wide-field\nastronomical data within a distributed environment of hardware resources and\nhumanpower. The system is characterized by integration of archiving,\ncalibration, and post-calibration analysis of data from raw, through\nintermediate, to final data products. It is a true integration thanks to\ncomplete linking of data lineage from the final catalogs back to the raw data.\nThis paper describes the pipeline processing of optical wide-field astronomical\ndata from the WFI (http://www.eso.org/lasilla/instruments/wfi/) and OmegaCAM\n(http://www.astro-wise.org/~omegacam/) instruments using the Astro-WISE\ninformation system (the Astro-WISE Environment or simply AWE). This information\nsystem is an environment of hardware resources and humanpower distributed over\nEurope. AWE integrates the archiving, calibration, and post-calibration\nanalysis of raw, intermediate, and final data\nproducts. The true integration enables a complete data processing cycle from\nthe raw data up to the publication of science-ready catalogs. The advantages of\nthis system for very large datasets are in the areas of survey operations\nmanagement, quality control, calibration analyses, and massive processing.", "category": "astro-ph_IM" }, { "text": "3C84, BL Lac. Earth based VLBI test for the RADIOASTRON project: Results of the processing of data from a VLBI experiment titled RAPL01 are\npresented. These VLBI observations were made on 4th February, 2010 at 6.28 cm\nbetween the 100-m antenna of the Max Planck Institute (Effelsberg, Germany),\nthe Puschino 22-m antenna (Astro Space Center (ASC), Russia), and two 32-m antennas\nof the Istituto di Radioastronomia di Bologna (Bologna, Italy) in Noto and\nMedicina. Two well-known sources, 3C84 (0316+413) and BL Lac (2200+420), were\nincluded in the schedule of observations. Each of them was observed for 1\nhour at all the stations. The Mark-5A recording system was used at the three\nEuropean antennas. The alternative recording system known as RDR\n(RADIOASTRON Data Recorder) was used in Puschino. The Puschino data were\nrecorded in the RDF format (RADIOASTRON Data Format). Two standard recording modes,\ndesignated 128-4-1 (one-bit) and 256-4-2 (two-bit), were used in the\nexperiment. All the Mark-5A data from the European antennas were successfully\nconverted into the RDF format. Then, the correlation function was estimated at\nthe ASC software correlator. A similar correlation function was also estimated\nat the Bonn correlator. The Bonn correlator reads Mark5A data; the RDF data\nwere converted into Mark5B format before correlation.
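At its core, the correlation step described above multiplies the Fourier transforms of two station voltage streams and picks up the fringe; a toy FX-style delay search (a sketch only, not the ASC or Bonn correlator implementation) looks like this:

```python
import numpy as np

def fringe_delay(sig_a, sig_b, sample_rate):
    """Estimate the relative delay between two station voltage streams.

    Correlates via the frequency domain (an FX-style operation): the peak
    of the inverse FFT of A * conj(B) gives the lag, in samples, at which
    the two recordings line up.
    """
    n = len(sig_a)
    spec = np.fft.fft(sig_a) * np.conj(np.fft.fft(sig_b))
    xcorr = np.fft.ifft(spec)
    lag = int(np.argmax(np.abs(xcorr)))
    if lag > n // 2:                      # wrap around to negative lags
        lag -= n
    return lag / sample_rate

# Common noise signal recorded with a 25-sample offset at station B
rng = np.random.default_rng(3)
s = rng.normal(size=4096)
print(fringe_delay(s, np.roll(s, -25), sample_rate=1e6))  # 2.5e-05 s (+25 samples)
```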
The goal of the experiment\nwas to check the functioning and data analysis of the ground based radio\ntelescopes for the RADIOASTRON SVLBI mission", "category": "astro-ph_IM" }, { "text": "IVOA Recommendation: Space-Time Coordinate Metadata for the Virtual\n Observatory Version 1.33: This document provides a complete design description of the Space-Time\nCoordinate (STC) metadata for the Virtual Observatory. It explains the various\ncomponents, highlights some implementation considerations, presents a complete\nset of UML diagrams, and discusses the relation between STC and certain other\nparts of the Data Model. Two serializations are discussed: XML Schema (STC-X)\nand String (STC-S); the former is an integral part of this Recommendation.", "category": "astro-ph_IM" }, { "text": "Verification of commercial motor performance for WEAVE at the William\n Herschel Telescope: WEAVE is a 1000-fiber multi-object spectroscopic facility for the 4.2~m\nWilliam Herschel Telescope. It will feature a double-headed pick-and-place\nfiber positioning robot comprising commercially available robotic axes. This\npaper presents results on the performance of these axes, obtained by testing a\nprototype system in the laboratory. Positioning accuracy is found to be better\nthan the manufacturer's published values for the tested cases, indicating that\nthe requirement for a maximum positioning error of 8.0~microns is achievable.\nField reconfiguration times well within the planned 60 minute observation\nwindow are shown to be likely when individual axis movements are combined in an\nefficient way.", "category": "astro-ph_IM" }, { "text": "Low-order wavefront control using a Zernike sensor through Lyot\n coronagraphs for exoplanet imaging: Combining large segmented space telescopes, coronagraphy and wavefront\ncontrol methods is a promising solution to produce a dark hole (DH) region in\nthe coronagraphic image of an observed star and study planetary companions. The\nthermal and mechanical evolution of such a high-contrast facility leads to\nwavefront drifts that degrade the DH contrast during the observing time, thus\nlimiting the ability to retrieve planetary signals. Lyot-style coronagraphs are\nstarlight suppression systems that remove the central part of the image for an\nunresolved observed star, the point spread function, with an opaque focal plane\nmask (FPM). When implemented with a flat mirror containing an etched pinhole,\nthe mask rejects part of the starlight through the pinhole which can be used to\nretrieve information about low-order aberrations. We propose an active control\nscheme using a Zernike wavefront sensor (ZWFS) to analyze the light rejected by\nthe FPM, control low-order aberrations, and stabilize the DH contrast. The\nconcept formalism is first presented before characterizing the sensor behavior\nin simulations and in laboratory. We then perform experimental tests to\nvalidate a wavefront control loop using a ZWFS on the HiCAT testbed. By\ncontrolling the first 11 Zernike modes, we show a decrease in wavefront error\nstandard deviation by a factor of up to 9 between open- and closed-loop\noperations using the ZWFS. In the presence of wavefront perturbations, we show\nthe ability of this control loop to stabilize a DH contrast around 7x10^-8 with\na standard deviation of 7x10^-9. 
Active control with a ZWFS proves a promising\nsolution in Lyot coronagraphs with an FPM-filtered beam to control and\nstabilize low-order wavefront aberrations and DH contrast for exoplanet imaging\nwith future space missions.", "category": "astro-ph_IM" }, { "text": "AstroDS -- A Distributed Storage for Astrophysics of Cosmic Rays.\n Current Status: Currently, the processing of scientific data in astroparticle physics is\nbased on various distributed technologies, the most common of which are Grid\nand cloud computing. The most frequently discussed approaches are focused on\nlarge and even very large scientific experiments, such as Cherenkov Telescope\nArray. We, by contrast, offer a solution designed for small to medium\nexperiments such as TAIGA. In such experiments, as a rule, historically\ndeveloped specific data processing methods and specialized software are used.\nWe have specifically designed a distributed (cloud) data storage for\nastroparticle physics data collaboration in medium-sized experiments. In this\narticle, we discuss the current state of our work using the example of the\nTAIGA and CASCADE experiments. A feature of our approach is that we provide our\nusers with scientific data in the form to which they are accustomed to in\neveryday work on local resources.", "category": "astro-ph_IM" }, { "text": "How to write and develop your astronomy research paper: Writing is a vital component of a modern career in scientific research. But\nhow to write correctly and effectively is often not included in the training\nthat young astronomers receive from their supervisors and departments. We offer\na step-by-step guide to tackle this deficiency, published as a set of two\npapers. In the first, we addressed how to plan and outline your paper and\ndecide where to publish. In the current second paper, we describe the various\nsections that constitute a typical research paper in astronomy, sharing best\npractice for the most efficient use of each of them. We also discuss a\nselection of issues that often cause trouble to writers, from sentence to\nparagraph structure, the `writing mechanics' used to develop a manuscript. Our\ntwo-part guide is aimed primarily at master's and PhD level students who are\npresented with the daunting task of writing their first scientific paper, but\nmore senior researchers or writing instructors may well find the ideas\npresented here useful.", "category": "astro-ph_IM" }, { "text": "Focus diverse phase retrieval testbed development of continuous\n wavefront sensing for space telescope applications: Continuous wavefront sensing on future space telescopes allows relaxation of\nstability requirements while still allowing on-orbit diffraction-limited\noptical performance. We consider the suitability of phase retrieval to\ncontinuously reconstruct the phase of a wavefront from on-orbit irradiance\nmeasurements or point spread function (PSF) images. As phase retrieval\nalgorithms do not require reference optics or complicated calibrations, it is a\npreferable technique for space observatories, such as the Hubble Space\nTelescope or the James Webb Space Telescope. To increase the robustness and\ndynamic range of the phase retrieval algorithm, multiple PSF images with known\namount of defocus can be utilized. In this study, we describe a recently\nconstructed testbed including a 97 actuator deformable mirror, changeable\nentrance pupil stops, and a light source. The aligned system wavefront error is\nbelow ~30nm. 
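Phase retrieval of the kind used on such testbeds is an alternating-projection iteration between pupil and focal planes. The single-plane Gerchberg-Saxton loop below illustrates that structure; practical pipelines add the defocus diversity discussed above for robustness and uniqueness, and the pupil and aberration here are invented for the demo.

```python
import numpy as np

def gerchberg_saxton(pupil_amp, psf, n_iter=200, seed=0):
    """Recover a pupil phase from one in-focus PSF (Gerchberg-Saxton).

    Alternates FFTs between pupil and focal planes, enforcing the known
    pupil amplitude and the measured focal-plane amplitude sqrt(PSF).
    Single-plane retrieval carries piston and twin-image ambiguities;
    focus diversity (multiple defocused PSFs) removes them in practice.
    """
    rng = np.random.default_rng(seed)
    focal_amp = np.sqrt(psf)
    field = pupil_amp * np.exp(1j * rng.uniform(-np.pi, np.pi, pupil_amp.shape))
    for _ in range(n_iter):
        focal = np.fft.fft2(field)
        focal = focal_amp * np.exp(1j * np.angle(focal))   # impose PSF amplitude
        field = np.fft.ifft2(focal)
        field = pupil_amp * np.exp(1j * np.angle(field))   # impose pupil amplitude
    return np.angle(field)

# Toy setup: circular pupil with a small astigmatic aberration
n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] / (n / 4)
pupil = ((x**2 + y**2) <= 1.0).astype(float)
phase_true = 0.3 * (x**2 - y**2) * pupil
psf = np.abs(np.fft.fft2(pupil * np.exp(1j * phase_true))) ** 2
phase_est = gerchberg_saxton(pupil, psf)  # compare to phase_true up to ambiguities
```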
We applied various methods to generate a known wavefront error,\nsuch as defocus and/or other aberrations, and found the accuracy and precision\nof the root mean squared error of the reconstructed wavefronts to be less than\n~10 nm and ~2 nm, respectively. Further, we discuss the signal-to-noise ratios\nrequired for continuous dynamic wavefront sensing. We also simulate the case of\nspacecraft drift and verify the performance of the phase retrieval algorithm\nfor continuous wavefront sensing in the presence of realistic disturbances.", "category": "astro-ph_IM" }, { "text": "An improved method for polarimetric image restoration in interferometry: Interferometric radio astronomy data require the effects of limited coverage\nin the Fourier plane to be accounted for via a deconvolution process. For the\nlast 40 years this process, known as `cleaning', has been performed almost\nexclusively on all Stokes parameters individually as if they were independent\nscalar images. However, here we demonstrate that, for the case of the linear\npolarisation $\mathcal{P}$, this approach fails to properly account for the\ncomplex vector nature of the emission, resulting in a process which is dependent\non the axis along which the deconvolution is performed. We present here an improved method,\n`Generalised Complex CLEAN', which properly accounts for the complex vector\nnature of polarised emission and is invariant under rotations of the\ndeconvolution axis. We use two Australia Telescope Compact Array datasets to\ntest standard and complex CLEAN versions of the H\\\"{o}gbom and SDI CLEAN\nalgorithms. We show that in general the Complex CLEAN version of each algorithm\nproduces more accurate clean components with fewer spurious detections and,\nowing to the reduced number of iterations, a lower computational cost than the\ncurrent methods. In particular we find that the Complex SDI CLEAN produces the best\nresults for diffuse polarised sources as compared with standard CLEAN algorithms and other\nComplex CLEAN algorithms. Given the move to wide-field, high-resolution\npolarimetric imaging with future telescopes such as the Square Kilometre Array,\nwe suggest that Generalised Complex CLEAN should be adopted as the\ndeconvolution method for all future polarimetric surveys and in particular that\nthe complex version of the SDI CLEAN should be used.", "category": "astro-ph_IM" }, { "text": "UVscope and its application aboard the ASTRI-Horn telescope: UVscope is an instrument, based on a multi-pixel photon detector, developed\nto support experimental activities for high-energy astrophysics and cosmic ray\nresearch. The instrument, working in single-photon-counting mode, is designed\nto directly measure light flux in the wavelength range 300-650~nm. The\ninstrument can be used in a wide range of applications where knowledge\nof the nocturnal environmental luminosity is required. Currently, one UVscope\ninstrument is mounted on the external structure of the ASTRI-Horn Cherenkov\ntelescope, devoted to gamma-ray astronomy at very high energies.
Being\nco-aligned with the ASTRI-Horn camera axis, UVscope can measure the diffuse\nemission of the night sky background simultaneously with the ASTRI-Horn camera,\nwithout any interference with the main telescope data-taking procedures.\nUVscope is properly calibrated and is used as an independent reference\ninstrument for testing and diagnostics of the novel ASTRI-Horn telescope.", "category": "astro-ph_IM" }, { "text": "CHIME FRB: An application of FFT beamforming for a radio telescope: We have developed FFT beamforming techniques for the CHIME radio telescope,\nto search for and localize the astrophysical signals from Fast Radio Bursts\n(FRBs) over a large instantaneous field-of-view (FOV) while maintaining the\nfull angular resolution of CHIME. We implement a hybrid beamforming pipeline in\na GPU correlator, synthesizing 256 FFT-formed beams in the North-South\ndirection by four formed beams along East-West via exact phasing, tiling a sky\narea of ~250 square degrees. A zero-padding approximation is employed to\nimprove chromatic beam alignment across the wide bandwidth of 400 to 800 MHz.\nWe up-channelize the data in order to achieve fine spectral resolution of\n$\Delta\nu$=24 kHz and time cadence of 0.983 ms, desirable for detecting\ntransient and dispersed signals such as those from FRBs.", "category": "astro-ph_IM" }, { "text": "A key-formula to compute the gravitational potential of inhomogeneous\n discs in cylindrical coordinates: We have established the exact expression for the gravitational potential of a\nhomogeneous polar cell - an elementary pattern used in hydrodynamical\nsimulations of gravitating discs. This formula, which is in closed form, works\nfor any opening angle and radial extension of the cell. It is valid at any\npoint in space, i.e. in the plane of the distribution (inside and outside) as\nwell as off-plane, thereby generalizing the results reported by Durand (1953)\nfor the circular disc. The three components of the gravitational acceleration\nare given. The mathematical demonstration proceeds from the \"incomplete version\nof Durand's formula\" for the potential (based on complete elliptic integrals).\nWe first determine the potential due to the circular sector (i.e. a pie-slice\nsheet), and then deduce that of the polar cell (from convenient radial scaling\nand subtraction). As a by-product, we generate an integral theorem stating that\n\"the angular average of the potential of any circular sector along its tangent\ncircle is 2/PI times the value at the corner\". A few examples are presented.\nFor numerical resolutions and cell shapes commonly used in disc simulations, we\nquantify the importance of curvature effects by performing a direct comparison\nbetween the potential of the polar cell and that of the Cartesian (i.e.\nrectangular) cell having the same mass. Edge values are found to deviate by\nroughly 2E-3 x N/256 in relative terms (N is the number of grid points in the\nradial direction), while the agreement is typically four orders of magnitude\nbetter for values at the cell's center. We also produce a reliable\napproximation for the potential, valid in the cell's plane, inside and close to\nthe cell.
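FFT beamforming of the kind CHIME uses, described above, exploits the fact that for a uniform linear array, phasing to a regular grid of directions is exactly a DFT across the feeds, so all beams are formed at once. A minimal sketch for an invented linear array (the feed count, spacing, and frequency are illustrative, not CHIME's exact parameters):

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def fft_beams(voltages, freq_hz, spacing_m):
    """Form beams from a uniform linear array by an FFT over the feed axis.

    voltages : complex array of shape (n_feeds, n_samples)
    Beam directions satisfy sin(theta) = kappa * c / (freq * spacing),
    with kappa = np.fft.fftfreq(n_feeds) the per-feed spatial frequency,
    so a single FFT synthesizes all n_feeds beams at once.
    """
    n = voltages.shape[0]
    beams = np.fft.fft(voltages, axis=0)   # one beam per FFT channel
    kappa = np.fft.fftfreq(n)              # cycles per feed
    sin_theta = kappa * C / (freq_hz * spacing_m)
    return beams, sin_theta

# 256 feeds at 0.3 m spacing; a 600 MHz point source 5 deg off zenith
n, f, d = 256, 600e6, 0.3
phase = 2j * np.pi * f * d * np.arange(n) * np.sin(np.radians(5.0)) / C
v = np.exp(phase)[:, None] * np.ones(64)
beams, sin_theta = fft_beams(v, f, d)
power = (np.abs(beams) ** 2).sum(axis=1)
print(np.degrees(np.arcsin(sin_theta[np.argmax(power)])))  # ~4.9 deg (nearest beam)
```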
Its remarkable accuracy, about 5E-4 x N/256 in relative terms, is\nsufficient to estimate the cell's self-acceleration.", "category": "astro-ph_IM" }, { "text": "Design and Performance of the GAMMA-400 Gamma-Ray Telescope for the Dark\n Matter Searches: The GAMMA-400 gamma-ray telescope is designed to measure the fluxes of gamma\nrays and cosmic-ray electrons + positrons, which can be produced by\nannihilation or decay of the dark matter particles, as well as to survey the\ncelestial sphere in order to study point and extended sources of gamma rays,\nmeasure energy spectra of Galactic and extragalactic diffuse gamma-ray\nemission, gamma-ray bursts, and gamma-ray emission from the Sun. The GAMMA-400\ncovers the energy range from 100 MeV to 3000 GeV. Its angular resolution is\n~0.01 deg (E{\gamma} > 100 GeV), the energy resolution ~1% (E{\gamma} > 10\nGeV), and the proton rejection factor ~10E6. GAMMA-400 will be installed on the\nRussian space platform Navigator. The beginning of observations is planned for\n2018.", "category": "astro-ph_IM" }, { "text": "Theory and Simulations of Refractive Substructure in Resolved\n Scatter-Broadened Images: At radio wavelengths, scattering in the interstellar medium distorts the\nappearance of astronomical sources. Averaged over a scattering ensemble, the\nresult is a blurred image of the source. However, Narayan & Goodman (1989) and\nGoodman & Narayan (1989) showed that for an incomplete average, scattering\nintroduces refractive substructure in the image of a point source that is both\npersistent and wideband. We show that this substructure is quenched but not\nsmoothed by an extended source. As a result, when the scatter-broadening is\ncomparable to or exceeds the unscattered source size, the scattering can\nintroduce spurious compact features into images. In addition, we derive\nefficient strategies to numerically compute realistic scattered images, and we\npresent characteristic examples from simulations. Our results show that\nrefractive substructure is an important consideration for ongoing missions at\nthe highest angular resolutions, and we discuss specific implications for\nRadioAstron and the Event Horizon Telescope.", "category": "astro-ph_IM" }, { "text": "SciCodes: Astronomy Research Software and Beyond: The Astrophysics Source Code Library (ASCL ascl.net), started in 1999, is a\nfree open registry of software used in refereed astronomy research. Over the\npast few years, it has spearheaded an effort to form a consortium of scientific\nsoftware registries and repositories. In 2019 and 2020, ASCL contacted editors\nand maintainers of discipline and institutional software registries and\nrepositories in math, biology, neuroscience, geophysics, remote sensing, and\nother fields to develop a list of best practices for these research software\nresources. At the completion of that project, performed as a Task Force for a\nFORCE11 working group, members decided to form SciCodes as an ongoing\nconsortium. This presentation covered the consortium's work so far, what it is\ncurrently working on, what it hopes to achieve for making scientific research\nsoftware more discoverable across disciplines, and how the consortium can\nbenefit astronomers.", "category": "astro-ph_IM" }, { "text": "Interstellar Now!
Missions to and Sample Returns from Nearby\n Interstellar Objects: The first high-velocity hyperbolic objects recently discovered passing\nthrough the Solar System, 1I/'Oumuamua and 2I/Borisov, have raised the question\nof near-term missions to Interstellar Objects. In situ spacecraft\nexploration of these objects will allow the direct determination of both their\nstructure and their chemical and isotopic composition, enabling an entirely new\nway of studying small bodies from outside our solar system. In this paper, we\nmap various Interstellar Object classes to mission types, demonstrating that\nmissions to a range of Interstellar Object classes are feasible, using existing\nor near-term technology. We describe flyby, rendezvous and sample return\nmissions to interstellar objects, showing various ways to explore these bodies,\ncharacterizing their surface, dynamics, structure and composition. Interstellar\nobjects likely formed very far from the solar system in both time and space;\ntheir direct exploration will constrain their formation and history, situating\nthem within the dynamical and chemical evolution of the Galaxy. These mission\ntypes also provide the opportunity to explore solar system bodies and perform\nmeasurements in the far outer solar system.", "category": "astro-ph_IM" }, { "text": "Impact of particles on the Planck HFI detectors: Ground-based\n measurements and physical interpretation: The Planck High Frequency Instrument (HFI) surveyed the sky continuously from\nAugust 2009 to January 2012. Its noise and sensitivity performance were\nexcellent, but the rate of cosmic ray impacts on the HFI detectors was\nunexpectedly high. Furthermore, collisions of cosmic rays with the focal plane\nproduced transient signals in the data (glitches) with a wide range of\ncharacteristics. A study of cosmic ray impacts on the HFI detector modules has\nbeen undertaken to categorize and characterize the glitches, to correct the HFI\ntime-ordered data, and to understand the residual effects on Planck maps and data\nproducts. This paper presents an evaluation of the physical origins of glitches\nobserved by the HFI detectors. In order to better understand the glitches\nobserved by HFI in flight, several ground-based experiments were conducted with\nflight-spare HFI bolometer modules. The experiments were conducted between 2010\nand 2013 with HFI test bolometers in different configurations using varying\nparticles and impact energies. The bolometer modules were exposed to 23 MeV\nprotons from the Orsay IPN TANDEM accelerator, and to $^{241}$Am and $^{244}$Cm\n$\alpha$-particle and $^{55}$Fe radioactive X-ray sources. The calibration data\nfrom the HFI ground-based preflight tests were used to further characterize the\nglitches and compare glitch rates with statistical expectations under\nlaboratory conditions. Test results provide strong evidence that the dominant\nfamily of glitches observed in flight are due to cosmic ray absorption by the\nsilicon die substrate on which the HFI detectors reside. Glitch energy is\npropagated to the thermistor by ballistic phonons, while there is also a\nthermal diffusion contribution.
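Illustrative aside (not from the record above): a toy version of such a glitch waveform, with a prompt ballistic-phonon spike plus a slower thermal-diffusion tail, can be written as a two-exponential template. Amplitudes and time constants below are placeholders, not fitted HFI values:

    import numpy as np

    def glitch_template(t, a_fast=1.0, tau_fast=0.01, a_slow=0.2, tau_slow=0.5):
        # Two-exponential toy model of a particle hit (times in seconds).
        t = np.asarray(t, dtype=float)
        pulse = a_fast*np.exp(-t/tau_fast) + a_slow*np.exp(-t/tau_slow)
        return np.where(t < 0, 0.0, pulse)

    t = np.linspace(-0.1, 2.0, 500)
    signal = glitch_template(t)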
The implications of these results for future\nsatellite missions, especially those in the far-infrared to sub-millimetre and\nmillimetre regions of the electromagnetic spectrum, are discussed.", "category": "astro-ph_IM" }, { "text": "Sparsity and the Bayesian Perspective: Sparsity has been recently introduced in cosmology for weak-lensing and CMB\ndata analysis for different applications such as denoising, component\nseparation or inpainting (i.e. filling the missing data or the mask). Although\nit gives very nice numerical results, CMB sparse inpainting has been severely\ncriticized by top researchers in cosmology, based on arguments derived from a\nBayesian perspective. Trying to understand their point of view, we realize that\ninterpreting a regularization penalty term as a prior in a Bayesian framework\ncan lead to erroneous conclusions. This paper is by no means against the\nBayesian approach, which has proven to be very useful for many applications,\nbut warns about a Bayesian-only interpretation in data analysis, which can be\nmisleading in some cases.", "category": "astro-ph_IM" }, { "text": "Frequency chirped continuous-wave sodium laser guide stars: We numerically study a method to increase the photon return flux of\ncontinuous-wave laser guide stars using one-dimensional atomic cooling\nprinciples. The method relies on chirping the laser towards higher frequencies\nfollowing the change in velocity of sodium atoms due to recoil, which raises\natomic populations available for laser excitation within the Doppler\ndistribution. The efficiency of this effect grows with the average number of\natomic excitations between two atomic collisions in the mesosphere. We find the\nparameters for maximizing the return flux and evaluate the performance of\nchirping for operation at La Palma. According to our simulations, the optimal\nchirp rate lies between 0.8-1.0 MHz/$\mu$s and an increase in the fluorescence\nof the sodium guide star up to 60% can be achieved with current 20 W-class\nguide star lasers.", "category": "astro-ph_IM" }, { "text": "DiskFM: A Forward Modeling Tool for Disk Analysis with Coronagraphic\n Instruments: Because of bright starlight leakage in coronagraphic raw images, faint\nastrophysical objects such as exoplanets can only be detected using powerful\npoint spread function (PSF) subtraction algorithms. However, these algorithms\nhave strong effects on faint objects of interest, and often prevent precise\nspectroscopic analysis and scattering property measurements of circumstellar\ndisks. For this reason, PSF-subtraction effects are currently the main\nlimitation to the precise characterization of exoplanetary dust with\nscattered-light imaging. Forward-modeling techniques have long been developed\nfor point source objects. However, forward-modeling with disks is complicated\nby the fact that the disk cannot be simplified using a simple point source\nconvolved with the PSF as the astrophysical model; all hypothetical disk\nmorphologies must be explored to understand the subtle and non-linear effects\nof the PSF subtraction algorithm on the shape and local geometry of these\nsystems. Because of their complex geometries, the forward-modeling process has\nto be repeated tens or hundreds of thousands of times on disks with slightly\ndifferent physical properties. All of these geometries are then compared to the\nPSF-subtracted image of the data, within an MCMC or a Chi-square wrapper.
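Schematically (an illustrative sketch, not the authors' code), the comparison step amounts to scoring each forward-modelled geometry against the PSF-subtracted data; forward_model below stands in for the hypothetical disk-plus-PSF-subtraction pipeline:

    import numpy as np

    def chi2(data, model, noise):
        # Pixelwise chi-square, ignoring NaN-masked pixels.
        return np.nansum(((data - model) / noise) ** 2)

    # best_geometry = min(candidate_geometries,
    #                     key=lambda g: chi2(data, forward_model(g), noise))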
In\nthis paper, we present DiskFM, a new open-source algorithm included in the\nPSF subtraction algorithms package pyKLIP. This code enables fast\nforward-modeling for a variety of observation strategies (ADI, SDI, ADI+SDI,\nRDI). pyKLIP has already been used for SPHERE/IRDIS and GPI data. It is readily\navailable on all instruments supported by pyKLIP (SPHERE/IFS, SCExAO/CHARIS),\nand can be quickly adapted for other coronagraphic instruments.", "category": "astro-ph_IM" }, { "text": "A Lightweight Space-based Solar Power Generation and Transmission\n Satellite: We propose a novel design for a lightweight, high-performance space-based\nsolar power array combined with power beaming capability for operation in\ngeosynchronous orbit and transmission of power to Earth. We use a modular\nconfiguration of small, repeatable unit cells, called tiles, that each\nindividually perform power collection, conversion, and transmission. Sunlight\nis collected via lightweight parabolic concentrators and converted to DC\nelectric power with high efficiency III-V photovoltaics. Several CMOS\nintegrated circuits within each tile generate and control the phase of\nmultiple independently-controlled microwave sources using the DC power. These\nsources are coupled to multiple radiating antennas which act as elements of a\nlarge phased array to beam the RF power to Earth. The power is sent to Earth at\na frequency chosen in the range of 1-10 GHz and collected with ground-based\nrectennas at a local intensity no larger than ambient sunlight. We achieve\nsignificantly reduced mass compared to previous designs by taking advantage of\nsolar concentration, current CMOS integrated circuit technology, and ultralight\nstructural elements. Of note, the resulting satellite has no movable parts once\nit is fully deployed and all beam steering is done electronically. Our design\nis safe, scalable, and able to be deployed and tested with progressively larger\nconfigurations starting with a single unit cell that could fit on a cube\nsatellite. The design reported on here has an areal mass density of 160 g/m2\nand an end-to-end efficiency of 7-14%. We believe this is a significant step\nforward to the realization of space-based solar power, a concept once of\nscience fiction.", "category": "astro-ph_IM" }, { "text": "DISCO: a Spatio-Spectral Recombiner for Pupil Remapping Interferometry: Pupil-remapping is a new high-dynamic range imaging technique that has\nrecently demonstrated feasibility on sky. The current prototypes, however,\nsuffer from disappointing limiting magnitudes, restricting their current use to the\nbrightest stars in the sky. We propose to combine pupil-remapping with\nspatio-spectral encoding, a technique first applied to the VEGA/CHARA\ninterferometer. The result is an instrument proposal, called \"Dividing\nInterferometer for Stars Characterizations and Observations\" (DISCO). The idea\nis to take advantage of wavelength multiplexing when using a spectrograph in order\nto pack the available information as densely as possible, yet providing a\npotential boost of 1.5 magnitudes if used in existing prototypes. We detail in\nthis paper the potential of such a concept.", "category": "astro-ph_IM" }, { "text": "Machine learning applications in astrophysics: Photometric redshift\n estimation: Machine learning has risen to become an important research tool in the past\ndecade, and its application has expanded to almost all disciplines.
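Illustrative aside (a toy, not from the record above): a minimal photometric-redshift regression in Python, mapping mock five-band magnitudes to redshift with a random forest; real work would use survey photometry and a spectroscopic training set.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    mags = rng.uniform(18.0, 25.0, size=(2000, 5))            # mock u,g,r,i,z
    z = 0.4 + 0.1*(mags[:, 1] - mags[:, 3]) + rng.normal(0, 0.02, 2000)

    model = RandomForestRegressor(n_estimators=200).fit(mags[:1500], z[:1500])
    print(np.std(model.predict(mags[1500:]) - z[1500:]))      # held-out scatter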
In particular, the use of machine learning in astrophysics\nresearch had a humble beginning in the early 1980s; it has since risen to become\nwidely used in many sub-fields today, driven by the vast availability of free\nastronomical data online. In this short review, we narrow our discussion to a\nsingle topic in astrophysics - the estimation of photometric redshifts of\ngalaxies and quasars, where we discuss its background, significance, and how\nmachine learning has been used to improve its estimation methods in the past 20\nyears. We also show examples of some recent machine learning photometric\nredshift work done in Malaysia, affirming that machine learning is a viable and\naccessible way for a developing nation to contribute towards general research in\nastronomy and astrophysics.", "category": "astro-ph_IM" }, { "text": "Systematic biases in low frequency radio interferometric data due to\n calibration: the LOFAR EoR case: The redshifted 21 cm line of neutral hydrogen is a promising probe of the\nEpoch of Reionization (EoR). However, its detection requires a thorough\nunderstanding and control of the systematic errors. We study two systematic\nbiases observed in the LOFAR EoR residual data after calibration and\nsubtraction of bright discrete foreground sources. The first effect is a\nsuppression in the diffuse foregrounds, which could potentially mean a\nsuppression of the 21 cm signal. The second effect is an excess of noise beyond\nthe thermal noise. The excess noise shows fluctuations on small frequency\nscales, and hence it cannot be easily removed by foreground removal or\navoidance methods. Our analysis suggests that sidelobes of residual sources due\nto the chromatic point spread function and ionospheric scintillation cannot be\nthe dominant causes of the excess noise. Rather, both the suppression of\ndiffuse foregrounds and the excess noise can occur due to calibration with an\nincomplete sky model containing predominantly bright discrete sources. We show\nthat calibrating only on bright sources can cause suppression of other signals\nand introduce an excess noise in the data. The levels of the suppression and\nexcess noise depend on the relative flux of sources which are not included in\nthe model with respect to the flux of modeled sources. We discuss possible\nsolutions such as using only long baselines to calibrate the interferometric\ngain solutions as well as simultaneous multi-frequency calibration along with\ntheir benefits and shortcomings.", "category": "astro-ph_IM" }, { "text": "Performance analysis of the Least-Squares estimator in Astrometry: We characterize the performance of the widely-used least-squares estimator in\nastrometry in terms of a comparison with the Cramer-Rao lower variance bound.\nIn this inference context the performance of the least-squares estimator does\nnot offer a closed-form expression, but a new result is presented (Theorem 1)\nwhere both the bias and the mean-square-error of the least-squares estimator\nare bounded and approximated analytically, in the latter case in terms of a\nnominal value and an interval around it. From the predicted nominal value we\nanalyze how efficient the least-squares estimator is in comparison with the\nminimum variance Cramer-Rao bound. Based on our results, we show that, for the\nhigh signal-to-noise ratio regime, the performance of the least-squares\nestimator is significantly poorer than the Cramer-Rao bound, and we\ncharacterize this gap analytically.
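The gap can also be probed empirically (an illustrative sketch, not the paper's analysis): fit a one-dimensional Gaussian PSF to Poisson-noisy counts many times and inspect the scatter of the fitted positions; PSF width, flux and background below are arbitrary.

    import numpy as np
    from scipy.optimize import curve_fit

    def pixel_model(x, x0, flux):
        # Gaussian source (sigma = 1.5 px) on a flat background of 10 counts.
        return flux*np.exp(-0.5*((x - x0)/1.5)**2) + 10.0

    x = np.arange(-20.0, 21.0)
    rng = np.random.default_rng(1)
    fits = []
    for _ in range(500):
        counts = rng.poisson(pixel_model(x, 0.3, 200.0))
        popt, _ = curve_fit(pixel_model, x, counts, p0=[0.0, 150.0])
        fits.append(popt[0])
    print(np.mean(fits), np.std(fits))   # empirical bias and scatter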
On the positive side, we show that for the\nchallenging low signal-to-noise regime (attributed to either a weak\nastronomical signal or a noise-dominated condition) the least-squares estimator\nis near optimal, as its performance asymptotically approaches the Cramer-Rao\nbound. However, we also demonstrate that, in general, there is no unbiased\nestimator for the astrometric position that can precisely reach the Cramer-Rao\nbound. We validate our theoretical analysis through simulated digital-detector\nobservations under typical observing conditions. We show that the nominal value\nfor the mean-square-error of the least-squares estimator (obtained from our\ntheorem) can be used as a benchmark indicator of the expected statistical\nperformance of the least-squares method under a wide range of conditions. Our\nresults are valid for an idealized linear (one-dimensional) array detector\nwhere intra-pixel response changes are neglected, and where flat-fielding is\nachieved with very high accuracy.", "category": "astro-ph_IM" }, { "text": "Science with the Murchison Widefield Array: Significant new opportunities for astrophysics and cosmology have been\nidentified at low radio frequencies. The Murchison Widefield Array is the first\ntelescope in the Southern Hemisphere designed specifically to explore the\nlow-frequency astronomical sky between 80 and 300 MHz with arcminute angular\nresolution and high survey efficiency. The telescope will enable new advances\nalong four key science themes, including searching for redshifted 21 cm\nemission from the epoch of reionisation in the early Universe; Galactic and\nextragalactic all-sky southern hemisphere surveys; time-domain astrophysics;\nand solar, heliospheric, and ionospheric science and space weather. The\nMurchison Widefield Array is located in Western Australia at the site of the\nplanned Square Kilometre Array (SKA) low-band telescope and is the only\nlow-frequency SKA precursor facility. In this paper, we review the performance\nproperties of the Murchison Widefield Array and describe its primary scientific\nobjectives.", "category": "astro-ph_IM" }, { "text": "GRID: a Student Project to Monitor the Transient Gamma-Ray Sky in the\n Multi-Messenger Astronomy Era: The Gamma-Ray Integrated Detectors (GRID) is a space mission concept\ndedicated to monitoring the transient gamma-ray sky in the energy range from 10\nkeV to 2 MeV using scintillation detectors onboard CubeSats in low Earth\norbits. The primary targets of GRID are the gamma-ray bursts (GRBs) in the\nlocal universe. The scientific goal of GRID is, in synergy with ground-based\ngravitational wave (GW) detectors such as LIGO and VIRGO, to accumulate a\nsample of GRBs associated with the merger of two compact stars and study jets\nand related physics of those objects. It also involves observing and studying\nother gamma-ray transients such as long GRBs, soft gamma-ray repeaters,\nterrestrial gamma-ray flashes, and solar flares. With multiple CubeSats in\nvarious orbits, GRID is unaffected by Earth occultation and serves as a\nfull-time and all-sky monitor. Assuming a horizon of 200 Mpc for ground-based\nGW detectors, we expect to see a few associated GW-GRB events per year. With\nabout 10 CubeSats in operation, GRID is capable of localizing a faint GRB like\n170817A with a 90% error radius of about 10 degrees, through triangulation and\nflux modulation.
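The triangulation idea in one line of algebra (an illustrative sketch with invented numbers, not GRID's pipeline): an arrival-time delay dt between two detectors separated by a baseline d confines the source to a cone with cos(theta) = c*dt/d.

    import numpy as np

    c = 299792.458                     # km/s
    d = 7000.0                         # km, baseline between two CubeSats
    dt = 0.015                         # s, measured arrival-time delay
    theta = np.degrees(np.arccos(np.clip(c*dt/d, -1.0, 1.0)))
    print(f"source lies on a cone {theta:.1f} deg from the baseline")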
GRID has been proposed and developed by students, with considerable\ncontributions from undergraduate students, and will continue to be operated as a student\nproject in the future. The current GRID collaboration involves more than 20\ninstitutes and keeps growing. On August 29th, the first GRID detector onboard a\nCubeSat was launched into a Sun-synchronous orbit and is currently under test.", "category": "astro-ph_IM" }, { "text": "LSST is Not \"Big Data\": LSST promises to be the largest optical imaging survey of the sky. If we were\nfortunate enough to have the equivalent of LSST today, it would represent a\n\"fire hose\" of data that would be difficult to store, transfer, and analyze\nwith available compute resources.\n LSST parallels the SDSS compute task which was ambitious yet tractable. By\nalmost any measure relative to computers that will be available (thanks to the\nsteady progression of Moore's Law), LSST will be a small data set. LSST will\nnever fill more than 22 hard drives. Individual investigators will be able to\nmaintain their own data copies to analyze as they choose.", "category": "astro-ph_IM" }, { "text": "galmask: A Python package for unsupervised galaxy masking: Galaxy morphological classification is a fundamental aspect of galaxy\nformation and evolution studies. Various machine learning tools have been\ndeveloped for automated pipeline analysis of large-scale surveys, enabling a\nfast search for objects of interest. However, crowded regions in the image may\npose a challenge as they can lead to bias in the learning algorithm. In this\nResearch Note, we present galmask, an open-source package for unsupervised\ngalaxy masking to isolate the central object of interest in the image. galmask\nis written in Python and can be installed from PyPI via the pip command.", "category": "astro-ph_IM" }, { "text": "The next generation Cherenkov Telescope Array observatory: CTA: The Cherenkov Telescope Array (CTA) is a large collaborative effort aimed at\nthe design and operation of an observatory dedicated to VHE gamma-ray\nastrophysics in the energy range 30 GeV-100 TeV, which will improve by about\none order of magnitude the sensitivity with respect to the current major arrays\n(H.E.S.S., MAGIC, and VERITAS). In order to achieve such improved performance,\nfor both the northern and southern CTA sites, four units of 23m diameter Large\nSize Telescopes (LSTs) will be deployed close to the centre of the array with\ntelescopes separated by about 100m. A larger number (about 25 units) of 12m\nMedium Size Telescopes (MSTs, separated by about 150m), will cover a larger\narea. The southern site will also include up to 24 Schwarzschild-Couder\ndual-mirror medium-size Telescopes (SCTs) with the primary mirror diameter of\n9.5m. Above a few TeV, the Cherenkov light intensity is such that showers can\nbe detected even well outside the light pool by telescopes significantly\nsmaller than the MSTs. To achieve the required sensitivity at high energies, a\nhuge area on the ground needs to be covered by Small Size Telescopes (SSTs)\nwith a FOV of about 10 deg and an angular resolution of about 0.2 deg, making\nthe dual-mirror configuration very effective. The SST sub-array will be\ncomposed of 50-70 telescopes with a mirror area of about 5-10 square meters and\nabout 300m spacing, distributed across an area of about 10 square kilometers.\nWe will focus on the innovative solution for the optical design of the medium\nand small size telescopes based on a dual-mirror configuration.
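The benefit of a short effective focal length is easy to quantify (illustrative sketch; the focal length below is an assumption, not a CTA specification): the linear plate scale is F times the field angle in radians, so a compact dual-mirror design shrinks the camera for a given field of view.

    import numpy as np

    def plate_scale_mm_per_deg(focal_length_m):
        # Linear plate scale at the focal plane.
        return focal_length_m * 1000.0 * np.radians(1.0)

    print(plate_scale_mm_per_deg(5.6))   # ~98 mm/deg for F = 5.6 m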
This layout\nwill allow us to reduce the dimensions and the weight of the camera at the focal\nplane of the telescope, to adopt SiPMs as light detectors thanks to the reduced\nplate-scale, and to have an optimal imaging resolution over a wide FOV.", "category": "astro-ph_IM" }, { "text": "The CALSPEC Stars P177D and P330E: Multicolor photometric data are presented for the CALSPEC stars P177D and\nP330E. Together with previously published photometry for nine other CALSPEC\nstandards, the photometric observations and synthetic photometry from HST/STIS\nspectrophotometry agree in the B, V, R, and I bands to better than $\sim$1\%\n(10 mmag).", "category": "astro-ph_IM" }, { "text": "Subsystem Development for the All-Sky Medium Energy Gamma-ray\n Observatory (AMEGO) prototype: The gamma-ray sky from several hundred keV to $\sim$ a hundred MeV has\nremained largely unexplored due to the challenging nature of detecting gamma\nrays in this regime. At lower energies, Compton scattering is the dominant\ninteraction process whereas at higher energies pair production dominates, with\na crossover at about 10 MeV depending on the material. Thus, an instrument\ndesigned to work in this energy range must be optimized for both Compton and\npair-production events. The All-sky Medium Energy Gamma-ray Observatory (AMEGO)\nis a NASA Probe-class mission concept being submitted to the Astro2020 review.\nThe instrument is designed to operate from 200 keV to $>$10 GeV and is made of\nfour major subsystems: a plastic anti-coincidence detector for rejecting\ncosmic-ray events, a silicon tracker for tracking pair-production products and\ntracking and measuring the energies of Compton-scattered electrons, a CZT\ncalorimeter for measuring the energy and location of Compton scattered photons,\nand a CsI calorimeter for measuring the energy of the pair-production products\nat high energies. A prototype instrument comprising each subsystem is currently\nbeing developed in preparation for a beam test and a balloon flight. In this\ncontribution we discuss the current status of the prototype subsystems.", "category": "astro-ph_IM" }, { "text": "Software solutions for numerical modeling of wide-field telescopes: This paper presents an integrated modeling software to analyze the PSF of\nwide-field telescopes affected by misalignments. Even relatively small\nmisalignments in the optical system of a telescope can significantly\ndeteriorate the image quality by introducing large aberrations. In particular,\nwide-field telescopes are critically affected by these errors, so much so that\na closed-loop active optics system is usually adopted for continuous\ncorrection, rather than for sporadic alignment procedures. Typically,\nray-tracing software such as Zemax OpticStudio is employed to accurately\nanalyze the system during the optical design. However, an analytical model of\nthe optical system is preferable when the PSF of the telescope must be\nreconstructed quickly for algorithmic purposes. Here the analytical model is\nderived through a hybrid approach and developed in a custom software package,\ndesigned to be general and flexible in order to be tailored to different\noptical configurations. First, leveraging the Zemax OpticStudio API, the\nray-tracing software is integrated into a Matlab pipeline. This makes it possible to\nperform a statistical analysis by automatically simulating the system response\nin a variety of misaligned working conditions.
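Conceptually, that step reduces to a Monte Carlo loop over perturbed alignments (generic Python sketch, not the authors' Matlab code; trace_system is a placeholder for the call into the ray-tracing engine):

    import numpy as np

    rng = np.random.default_rng(42)

    def trace_system(misalignments):
        # Placeholder for the ray-trace call; returns mock wavefront
        # coefficients instead of a real OpticStudio evaluation.
        return rng.normal(0.0, 50e-9, size=10)

    # 1000 random misalignment draws (6 rigid-body degrees of freedom)
    dataset = np.array([trace_system(rng.normal(0.0, 20e-6, size=6))
                        for _ in range(1000)])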
Then, the resulting dataset is\nemployed to populate a database of parameters describing the model.", "category": "astro-ph_IM" }, { "text": "Improved Acceleration of the GPU Fourier Domain Acceleration Search\n Algorithm: We present an improvement of our implementation of the Correlation Technique\nfor the Fourier Domain Acceleration Search (FDAS) algorithm on Graphics\nProcessor Units (GPUs) (Dimoudi & Armour 2015; Dimoudi et al. 2017). Our new,\nimproved convolution code, which uses our custom GPU FFT code, is between 2.5 and\n3.9 times faster than our cuFFT-based implementation (on an NVIDIA P100)\nand allows for a wider range of filter sizes than our previous version. By\nusing this new version of our convolution code in FDAS we have achieved a 44%\nperformance increase over our previous best implementation. It is also\napproximately 8 times faster than the existing PRESTO GPU implementation of\nFDAS (Luo 2013). This work is part of the AstroAccelerate project (Armour et\nal. 2002), a many-core accelerated time-domain signal processing library for\nradio astronomy.", "category": "astro-ph_IM" }, { "text": "Visualising three-dimensional volumetric data with an arbitrary\n coordinate system: Astronomical data does not always use Cartesian coordinates. Both all-sky\nobservational data and simulations of rotationally symmetric systems, such as\naccretion and protoplanetary discs, may use spherical polar or other coordinate\nsystems. Standard displays rely on Cartesian coordinates, but converting\nnon-Cartesian data into Cartesian format causes distortion of the data and loss\nof detail. I here demonstrate a method using standard techniques from computer\ngraphics that avoids these problems with 3D data in arbitrary coordinate\nsystems. The method adds minimum computational cost to the display process and\nis suitable for both realtime, interactive content and producing fixed rendered\nimages and videos. Proof-of-concept code is provided which works for data in\nspherical polar coordinates.", "category": "astro-ph_IM" }, { "text": "Concept validation of a high dynamic range point-diffraction\n interferometer for wavefront sensing in adaptive optics: The direct detection and imaging of exoplanets requires the use of\nhigh-contrast adaptive optics (AO). In these systems quasi-static aberrations\nneed to be highly corrected and calibrated. In order to achieve this, the\npupil-modulated point-diffraction interferometer (m-PDI) was presented in an\nearlier paper. The present paper focuses on m-PDI concept validation through\nthree experiments. First, the instrument's accuracy and dynamic range are\ncharacterised by measuring the spatial transfer function at all spatial\nfrequencies and at different amplitudes. Then, using visible monochromatic\nlight, an adaptive optics control loop is closed on the system's systematic\nbias to test for precision and completeness. In a central section of the pupil\ncontaining 72% of the total radius, the residual error is 7.7 nm rms. Finally, the\ncontrol loop is run using polychromatic light with a spectral FWHM of 77 nm\naround the R-band. The control loop shows no drop in performance with respect\nto the monochromatic case, reaching a final Strehl ratio larger than 0.7.", "category": "astro-ph_IM" }, { "text": "A Small Satellite Version of a Broad-band Soft X-ray Polarimeter: We describe a new implementation of a broad-band soft X-ray polarimeter,\nsubstantially based on a previous design.
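For the volumetric-display problem mentioned above, the standard trick is inverse mapping (an illustrative sketch, not the paper's proof-of-concept code): for every Cartesian display sample, convert to the data's native spherical polar coordinates and interpolate there, rather than resampling the data onto a Cartesian cube.

    import numpy as np

    def cart_to_sph(x, y, z):
        # Inverse mapping used per display sample: (x,y,z) -> (r, theta, phi).
        x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
        r = np.sqrt(x*x + y*y + z*z)
        theta = np.arccos(np.divide(z, r, out=np.zeros_like(r), where=r > 0))
        phi = np.arctan2(y, x)
        return r, theta, phi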
This implementation, the Pioneer Soft\nX-ray Polarimeter (PiSoX), is a SmallSat designed for NASA's call for\nAstrophysics Pioneers: small missions that could be CubeSats, balloon\nexperiments, or SmallSats. As in the REDSoX Polarimeter, the grating\narrangement is designed optimally for the purpose of polarimetry with\nbroad-band focussing optics by matching the dispersion of the spectrometer\nchannels to laterally graded multilayers (LGMLs). The system can achieve\npolarization modulation factors over 90%. For PiSoX, the optics are lightweight\nSi mirrors in a one-bounce parabolic configuration. High efficiency, blazed\ngratings from opposite sectors are oriented to disperse to an LGML forming a\nchannel covering the wavelength range from 35 to 75 Angstroms (165 - 350 eV).\nUpon satellite rotation, the intensities of the dispersed spectra, after\nreflection and polarization by the LGMLs, give the three Stokes parameters needed\nto determine a source's linear polarization fraction and orientation. The\ndesign can be extended to higher energies as LGMLs are developed further. We\ndescribe examples of the potential scientific return from instruments based on\nthis design.", "category": "astro-ph_IM" }, { "text": "Variable Star Classification Using Multi-View Metric Learning: Our multi-view metric learning framework enables robust characterization of\nstar categories by directly learning to discriminate in a multi-faceted feature\nspace, thus eliminating the need to combine feature representations prior to\nfitting the machine learning model. We also demonstrate how to extend standard\nmulti-view learning, which employs multiple vectorized views, to the\nmatrix-variate case, which allows very novel variable star signature\nrepresentations. The performance of our proposed methods is evaluated on the\nUCR Starlight and LINEAR datasets. Both the vector and matrix-variate versions\nof our multi-view learning framework perform favorably --- demonstrating the\nability to discriminate variable star categories.", "category": "astro-ph_IM" }, { "text": "Combined Opto-Acoustical Sensor Modules for KM3NeT: KM3NeT is a future multi-cubic-kilometre water Cherenkov neutrino telescope\ncurrently entering a first construction phase. It will be located in the\nMediterranean Sea and comprise about 600 vertical structures called detection\nunits. Each of these detection units has a length of several hundred metres and\nis anchored to the sea bed on one side and held taut by a buoy on the other\nside. The detection units are thus subject to permanent movement due to sea\ncurrents. Modules holding photosensors and additional equipment are equally\ndistributed along the detection units. The relative positions of the\nphotosensors have to be known with an uncertainty below $20\,$cm in order to\nachieve the necessary precision for neutrino astronomy. These positions can be\ndetermined with an acoustic positioning system: dedicated acoustic emitters\nlocated at known positions and acoustic receivers along each detection unit.\nThis article describes the approach to combine an acoustic receiver with the\nphotosensors inside one detection module using a common power supply and data\nreadout.
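The positioning principle reduces to acoustic trilateration (an illustrative sketch with invented geometry, not KM3NeT's software): with a sound speed near 1500 m/s, travel times from emitters at known sea-bed positions pin down a receiver by least squares.

    import numpy as np
    from scipy.optimize import least_squares

    emitters = np.array([[0., 0., 0.], [200., 0., 0.],
                         [0., 200., 0.], [200., 200., 0.]])   # m, on the sea bed
    true_pos = np.array([80.0, 60.0, 150.0])                  # receiver on a string
    c = 1500.0                                                # m/s
    t_obs = np.linalg.norm(emitters - true_pos, axis=1) / c

    fit = least_squares(lambda p: np.linalg.norm(emitters - p, axis=1)/c - t_obs,
                        x0=[50.0, 50.0, 100.0])
    print(fit.x)    # recovers true_pos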
The advantage of this approach\nlies in a reduction of underwater connectors and module configurations as well\nas in the compactification of the detection units, which integrate the auxiliary\ndevices necessary for their successful operation.", "category": "astro-ph_IM" }, { "text": "Accelerating Multiframe Blind Deconvolution via Deep Learning: Ground-based solar image restoration is a computationally expensive procedure\nthat involves nonlinear optimization techniques. The presence of atmospheric\nturbulence produces perturbations in individual images that make it necessary\nto apply blind deconvolution techniques. These techniques rely on the\nobservation of many short exposure frames that are used to simultaneously infer\nthe instantaneous state of the atmosphere and the unperturbed object. We have\nrecently explored the use of machine learning to accelerate this process, with\npromising results. We build upon this previous work to propose several\ninteresting improvements that lead to better models. In addition, we propose a new\nmethod to accelerate the restoration, based on algorithm unrolling. In this\nmethod, the image restoration problem is solved with a gradient descent method\nthat is unrolled and accelerated with the aid of a few small neural networks. The role\nof the neural networks is to correct the estimation of the solution at each\niterative step. The model is trained to perform the optimization in a small\nfixed number of steps with a curated dataset. Our findings demonstrate that\nboth methods significantly reduce the restoration time compared to the standard\noptimization procedure. Furthermore, we showcase that these models can be\ntrained in an unsupervised manner using observed images from three different\ninstruments. Remarkably, they also exhibit robust generalization capabilities\nwhen applied to new datasets. To foster further research and collaboration, we\nopenly provide the trained models, along with the corresponding training and\nevaluation code, as well as the training dataset, to the scientific community.", "category": "astro-ph_IM" }, { "text": "AI and extreme scale computing to learn and infer the physics of higher\n order gravitational wave modes of quasi-circular, spinning, non-precessing\n binary black hole mergers: We use artificial intelligence (AI) to learn and infer the physics of higher\norder gravitational wave modes of quasi-circular, spinning, non precessing\nbinary black hole mergers. We trained AI models using 14 million waveforms,\nproduced with the surrogate model NRHybSur3dq8, that include modes up to $\ell\n\leq 4$ and $(5,5)$, except for $(4,0)$ and $(4,1)$, and that describe binaries\nwith mass-ratios $q\leq8$, individual spins $s^z_{\{1,2\}}\in[-0.8, 0.8]$, and\ninclination angle $\theta\in[0,\pi]$. Our probabilistic AI surrogates can\naccurately constrain the mass-ratio, individual spins, effective spin, and\ninclination angle of numerical relativity waveforms that describe this signal\nmanifold. We compared the predictions of our AI models with Gaussian process\nregression, random forest, k-nearest neighbors, and linear regression, and with\ntraditional Bayesian inference methods through the PyCBC Inference toolkit,\nfinding that AI outperforms all these approaches in terms of accuracy and is\nbetween three and four orders of magnitude faster than traditional Bayesian\ninference methods.
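For the algorithm-unrolling scheme described above, the skeleton is a fixed number of gradient steps on a data-fidelity term, each followed by a small learned correction (a minimal sketch; here the correction is an identity placeholder rather than a trained network):

    import numpy as np

    def unrolled_solver(A, y, n_steps=10, lr=0.1, correct=lambda x: x):
        # Unrolled gradient descent on ||Ax - y||^2; `correct` stands in
        # for the per-step neural correction used in the real method.
        x = np.zeros(A.shape[1])
        for _ in range(n_steps):
            x = correct(x - lr * A.T @ (A @ x - y))
        return x

    A = np.random.default_rng(0).normal(size=(20, 5))
    y = A @ np.ones(5)
    print(unrolled_solver(A, y, n_steps=200, lr=0.01))   # approaches all-ones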
Our AI surrogates were trained within 3.4 hours using\ndistributed training on 1,536 NVIDIA V100 GPUs in the Summit supercomputer.", "category": "astro-ph_IM" }, { "text": "Stellar populations in the ELT perspective: We discuss the impact that the next generation of Extremely Large Telescopes\nwill have on the open astrophysical problems of resolved stellar populations.\nIn particular, we address the interplay between multiband photometry and\nspectroscopy.", "category": "astro-ph_IM" }, { "text": "Detectability of Galactic Faraday Rotation in Multi-wavelength CMB\n Observations: A Cross-Correlation Analysis of CMB and Radio Maps: We introduce a new cross-correlation method to detect and verify the\nastrophysical origin of Faraday Rotation (FR) in multiwavelength surveys. FR is\nwell studied in radio astronomy from radio point sources but the $\lambda^{2}$\nsuppression of FR makes detecting and accounting for this effect difficult at\nmillimeter and sub-millimeter wavelengths. Therefore, statistical methods are\nused to attempt to detect FR in the cosmic microwave background (CMB). Most\nestimators of the FR power spectrum rely on single frequency data. In contrast,\nwe investigate the correlation of polarized CMB maps with FR measure maps from\nradio point sources. We show a factor of $\sim30$ increase in sensitivity over\nsingle frequency estimators and predict detections exceeding $10\sigma$\nsignificance for a CMB-S4 like experiment. Improvements in observations of FR\nfrom current and future radio polarization surveys will greatly increase the\nusefulness of this method.", "category": "astro-ph_IM" }, { "text": "Using weighting algorithms to refine source direction determinations in\n all-sky gravitational wave burst searches with two-detector networks: I explore the possibility of resurrecting an old, non-Bayesian computational\napproach for inferring the source direction of a gravitational wave from the\noutput of a two-detector network. The method gives the beam pattern response\nfunctions and time delay, and performs well even in the presence of noise and\nunexpected signal forms. I further suggest an improvement to this method in the\nform of a weighting algorithm that usefully improves its accuracy beyond what\ncan be achieved with simple best-fit methods, validating the new procedure with\nseveral small-scale simulations. The approach is identified as complementary to\n-- rather than in competition with -- the now-standard Bayesian approach\ntypically used by the LIGO network in parameter determination. Finally, I\nbriefly discuss the possible applications of this method in the world of\nthree-or-more detector networks and some directions for future work.", "category": "astro-ph_IM" }, { "text": "Correcting for the ionosphere in the uv-plane: In radio astronomy, the correlator measures intensity in visibility space. In\naddition, the EoR power spectrum measured by an experiment such as the MWA is\nconstructed in visibility space. Thus, correcting for the ionosphere in the\nuv-plane instead of real space could potentially save computation. In this\npaper, we study this technique. The mathematical formula for obtaining the\nunperturbed data from the ionospherically reflected data is non-local in the\nuv-plane. Moreover, an analytic solution for the unperturbed intensity may only\nbe obtained for a limited number of expansions of the ionospheric\nperturbations. We numerically study one of these expansions (with perturbations\nas sinusoidal modes).
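The lambda-squared scaling behind the Faraday-rotation discussion above is worth a number (illustrative sketch; the rotation measure is an arbitrary choice): chi = RM * lambda^2, so an angle swing of ~130 deg at 1.4 GHz shrinks to ~0.01 deg in a CMB band.

    import numpy as np

    def rotation_deg(rm_rad_m2, freq_ghz):
        lam = 299792458.0 / (freq_ghz * 1e9)       # wavelength in m
        return np.degrees(rm_rad_m2 * lam**2)

    print(rotation_deg(50.0, 1.4))     # ~131 deg at radio frequencies
    print(rotation_deg(50.0, 150.0))   # ~0.01 deg at 150 GHz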
Obtaining an analytic solution for this expansion\nrequired a Taylor expansion, and we investigate the optimal order of this\nexpansion. We also propose a number of potential computation saving techniques,\nand evaluate their pros and cons.", "category": "astro-ph_IM" }, { "text": "Sparse aperture masking at the VLT I. Faint companion detection limits\n for the two debris disk stars HD 92945 and HD 141569: Observational data on companion statistics around young stellar systems are\nneeded to flesh out the formation pathways for extrasolar planets and brown\ndwarfs. Aperture masking is a new technique that is able to address an\nimportant part of this discovery space. We observed the two debris disk systems\nHD 92945 and HD 141569 with sparse aperture masking (SAM), a new mode offered\non the NaCo instrument at the VLT. A search for faint companions was performed\nusing a detection strategy based on the analysis of closure phases recovered\nfrom interferograms recorded on the Conica camera. Our results demonstrate that\nSAM is a very competitive mode in the field of companion detection. We obtained\n5 sigma high-contrast detection limits at lambda/D of 2.5x10^{-3} (\Delta L' =\n6.5) for HD 92945 and 4.6x10^{-3} (\Delta L' = 5.8) for HD 141569. According to\nbrown dwarf evolutionary models, our data impose upper mass boundaries for any\ncompanion of the two stars of, respectively, 18 and 22 Jupiter masses at\nminimum separations of 1.5 and 7 AU. The detection limits are mostly independent\nof angular separation until the diffraction limit of the telescope is reached.\nWe have placed upper limits on the existence of companions to our target\nsystems that fall close to the planetary mass regime. This demonstrates the\npotential of the SAM mode to contribute to studies of faint companions. We\nfurthermore show that the final dynamic range obtained is directly proportional\nto the error on the closure phase measurement. At the present performance\nlevels of 0.28 degree closure phase error, SAM is among the most competitive\ntechniques for recovering companions at scales of one to several times the\ndiffraction limit of the telescope. Further improvements to the detection\nthreshold can be expected with more accurate phase calibration.", "category": "astro-ph_IM" }, { "text": "Millimeter/submillimeter VLBI with a Next Generation Large Radio\n Telescope in the Atacama Desert: The proposed next generation Event Horizon Telescope (ngEHT) concept\nenvisions the imaging of various astronomical sources on scales of\nmicroarcseconds in unprecedented detail with at least two orders of magnitude\nimprovement in image dynamic range by extending the Event Horizon\nTelescope (EHT). A key technical component of ngEHT is the utilization of large\naperture telescopes to anchor the entire array, allowing the connection of less\nsensitive stations through highly sensitive fringe detections to form a dense\nnetwork across the planet. Here, we introduce two projects for planned next\ngeneration large radio telescopes in the 2030s on the Chajnantor Plateau in the\nAtacama desert in northern Chile, the Large Submillimeter Telescope (LST) and\nthe Atacama Large Aperture Submillimeter Telescope (AtLAST). Both are designed\nto have a 50-meter diameter and operate at the planned ngEHT frequency bands of\n86, 230 and 345\,GHz.
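The leverage of such stations comes from the diffraction limit, theta ~ lambda/B; a minimal sketch of the angular resolution for an assumed Earth-scale 10,000 km baseline at the bands quoted above:

    import numpy as np

    def resolution_uas(freq_ghz, baseline_km):
        lam = 299792458.0 / (freq_ghz * 1e9)
        return np.degrees(lam / (baseline_km * 1e3)) * 3.6e9   # microarcsec

    for f in (86.0, 230.0, 345.0):
        print(f, resolution_uas(f, 10000.0))   # ~72, ~27, ~18 uas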
A large aperture of 50\\,m that is co-located with two\nexisting EHT stations, the Atacama Large Millimeter/Submillimeter Array (ALMA)\nand the Atacama Pathfinder Experiment (APEX) Telescope in the excellent\nobserving site of the Chajnantor Plateau, will offer excellent capabilities for\nhighly sensitive, multi-frequency, and time-agile millimeter very long baseline\ninterferometry (VLBI) observations with accurate data calibration relevant to\nkey science cases of ngEHT. In addition to ngEHT, its unique location in Chile\nwill substantially improve angular resolutions of the planned Next Generation\nVery Large Array in North America or any future global millimeter VLBI arrays\nif combined. LST and AtLAST will be a key element enabling transformative\nscience cases with next-generation millimeter/submillimeter VLBI arrays.", "category": "astro-ph_IM" }, { "text": "The KAGRA underground environment and lessons for the Einstein Telescope: The KAGRA gravitational-wave detector in Japan is the only operating detector\nhosted in an underground infrastructure. Underground sites promise a greatly\nreduced contribution of the environment to detector noise thereby opening the\npossibility to extend the observation band to frequencies well below 10 Hz. For\nthis reason, the proposed next-generation infrastructure Einstein Telescope in\nEurope would be realized underground aiming for an observation band that\nextends from 3 Hz to several kHz. However, it is known that ambient noise in\nthe low-frequency band 10 Hz - 20 Hz at current surface sites of the Virgo and\nLIGO detectors is predominantly produced by the detector infrastructure. It is\nof utmost importance to avoid spoiling the quality of an underground site with\nnoisy infrastructure, at least at frequencies where this noise can turn into a\ndetector-sensitivity limitation. In this paper, we characterize the KAGRA\nunderground site to determine the impact of its infrastructure on environmental\nfields. We find that while excess seismic noise is observed, its contribution\nin the important band below 20 Hz is minor preserving the full potential of\nthis site to realize a low-frequency gravitational-wave detector. Moreover, we\nestimate the Newtonian-noise spectra of surface and underground seismic waves\nand of the acoustic field inside the caverns. We find that these will likely\nremain a minor contribution to KAGRA's instrument noise in the foreseeable\nfuture.", "category": "astro-ph_IM" }, { "text": "Real-time Data Ingestion at the Keck Observatory Archive (KOA): Since February of this year, KOA began to prepare, transfer, and ingest data\nas they were acquired in near-real time; in most cases data are available to\nobservers through KOA within one minute of acquisition. Real-time ingestion\nwill be complete for all active instruments by the end of Summer 2022. The\nobservatory is supporting the development of modern Python data reduction\npipelines, which when delivered, will automatically create science-ready data\nsets at the end of each night for ingestion into the archive. This presentation\nwill describe the infrastructure developed to support real-time data ingestion,\nitself part of a larger initiative at the Observatory to modernize end-to-end\noperations.\n During telescope operations, the software at WMKO is executed automatically\nwhen a newly acquired file is recognized through monitoring a keyword-based\nobservatory control system; this system is used at Keck to execute virtually\nall observatory functions. 
The monitor uses callbacks built into the control\nsystem to begin data preparation of files for transmission to the archive on an\nindividual basis: scheduling scripts or file system related triggers are\nunnecessary. An HTTP-based system built on the Flask micro-framework enables\nfile transfers between WMKO and NExScI and triggers data ingestion at NExScI.\nThe ingestion system at NExScI is a compact (4 KLOC), highly fault-tolerant,\nPython-based system. It uses a shared file system to transfer data from WMKO to\nNExScI. The ingestion code is instrument agnostic, with instrument parameters\nread from configuration files. It replaces an unwieldy (50 KLOC) C-based system\nthat had been in use since 2004.", "category": "astro-ph_IM" }, { "text": "Inferring the properties of a population of compact binaries in presence\n of selection effects: Shortly after a new class of objects is discovered, the attention shifts from\nthe properties of the individual sources to the question of their origin: do\nall sources come from the same underlying population, or are several populations\nrequired? What are the properties of these populations? As the detection of\ngravitational waves is becoming routine and the size of the event catalog\nincreases, finer and finer details of the astrophysical distribution of compact\nbinaries are now within our grasp. This Chapter presents a pedagogical\nintroduction to the main statistical tool required for these analyses:\nhierarchical Bayesian inference in the presence of selection effects. All key\nequations are obtained from first principles, followed by two examples of\nincreasing complexity. Although many remarks made in this Chapter refer to\ngravitational-wave astronomy, the write-up is generic enough to be useful to\nresearchers and graduate students from other fields.", "category": "astro-ph_IM" }, { "text": "Design of the KOSMOS oil-coupled spectrograph camera lenses: We present the design details of oil-coupled lens groups used in the KOSMOS\nspectrograph camera. The oil-coupled groups use silicone rubber O-rings in a\nunique way to accurately center lens elements with high radial and axial\nstiffness while also allowing easy assembly. The O-rings robustly seal the oil\nwithin the lens gaps to prevent oil migration. The design of an expansion\ndiaphragm to compensate for differential expansion due to temperature changes\nis described. The issues of lens assembly, lens gap shimming, oil filling and\ndraining, bubble mitigation, material compatibility, mechanical inspection, and\noptical testing are discussed.", "category": "astro-ph_IM" }, { "text": "Introducing Astronomy into Mozambican Society: Mozambique has been proposed as a host for one of the future Square Kilometre\nArray stations in Southern Africa. However, Mozambique does not possess a\nuniversity astronomy department and only recently has there been interest in\ndeveloping one. South Africa has been funding students at the MSc and PhD\nlevel, as well as researchers. Additionally, Mozambicans with Physics degrees\nhave been funded at the MSc level. With the advent of the International Year of\nAstronomy, there has been a very strong drive, from these students, to\nestablish a successful astronomy department in Mozambique. The launch of the\ncommemorations during the 2008 World Space Week was very successful and\nMozambique is to be used to motivate similar African countries who lack funds\nbut are still trying to take part in the International Year of Astronomy.
There\nhave been limited resources and funding; however, there is a strong will to\ncarry this momentum into 2009 and, with it, to influence the Government to\nintroduce Astronomy into the national curriculum and at university level.\nMozambique's motto for the International Year of Astronomy is \"Descobre o teu\nUniverso\".", "category": "astro-ph_IM" }, { "text": "Electrode level Monte Carlo model of radiation damage effects on\n astronomical CCDs: Current optical space telescopes rely upon silicon Charge Coupled Devices\n(CCDs) to detect and image the incoming photons. The performance of a CCD\ndetector depends on its ability to transfer electrons through the silicon\nefficiently, so that the signal from every pixel may be read out through a\nsingle amplifier. This process of electron transfer is highly susceptible to\nthe effects of solar proton damage (or non-ionizing radiation damage). This is\nbecause charged particles passing through the CCD displace silicon atoms,\nintroducing energy levels into the semi-conductor bandgap which act as\nlocalized electron traps. The reduction in Charge Transfer Efficiency (CTE)\nleads to signal loss and image smearing. The European Space Agency's\nastrometric Gaia mission will make extensive use of CCDs to create the most\ncomplete and accurate stereoscopic map to date of the Milky Way. In the context\nof the Gaia mission CTE is referred to with the complementary quantity Charge\nTransfer Inefficiency (CTI = 1-CTE). CTI is an extremely important issue that\nthreatens Gaia's performance. We present here a detailed Monte Carlo model\nwhich has been developed to simulate the operation of a damaged CCD at the\npixel electrode level. This model implements a new approach to both the charge\ndensity distribution within a pixel and the charge capture and release\nprobabilities, which allows the reproduction of CTI effects on a variety of\nmeasurements for a large signal-level range, in particular for signals of the\norder of a few electrons. A running version of the model as well as brief\ndocumentation and a few examples are readily available at\nhttp://www.strw.leidenuniv.nl/~prodhomme/cemga.php as part of the CEMGA java\npackage (CTI Effects Models for Gaia).", "category": "astro-ph_IM" }, { "text": "Giant Radio Array for Neutrino Detection (GRAND): GRAND is a newly proposed series of radio arrays with a combined area of\n200,000 square km, to be deployed in mountainous areas. Its primary goal is to\nmeasure cosmic ultra-high-energy tau-neutrinos (E>1 EeV), through the\ninteraction of these neutrinos in rock and the decay of the tau-lepton in the\natmosphere. This decay creates an air shower, whose properties can be inferred\nfrom the radio signal it creates. The huge area of GRAND makes it the most\nsensitive instrument proposed to date, guaranteeing the measurement of neutrinos in all\nreasonable models of cosmic ray production and propagation. At the same time,\nGRAND will be a very versatile observatory with enormous exposure to\nultra-high-energy cosmic rays and photons.
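As context for the charge-transfer discussion above (an illustrative first-order model, with non-Gaia numbers): after N transfers a charge packet retains a fraction (1 - CTI)^N of its signal.

    def surviving_charge(signal_e, cti, n_transfers):
        # First-order CTI loss model; ignores trap release into trailing
        # pixels, which the full Monte Carlo treats explicitly.
        return signal_e * (1.0 - cti) ** n_transfers

    print(surviving_charge(100.0, 1e-4, 4500))   # ~64 e- of 100 survive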
This talk covers the scientific\nmotivation, as well as the staged R\&D approach required to arrive at\na final design that will make the construction, deployment and operation of\nthis vast detector affordable.", "category": "astro-ph_IM" }, { "text": "New SST Optical Sensor of Pampilhosa da Serra: studies on image\n processing algorithms and multi-filter characterization of Space Debris: As part of the Portuguese Space Surveillance and Tracking (SST) System, two\nnew Wide Field of View (2.3deg x 2.3deg) small aperture (30cm) telescopes will\nbe deployed in 2021, at the Pampilhosa da Serra Space Observatory (PASO),\nlocated in the center of the continental Portuguese territory, in the heart of\na certified Dark Sky area. These optical systems will provide added-value\ncapabilities to the Portuguese SST network, complementing the optical\ntelescopes currently being commissioned in Madeira and the Azores. These telescopes\nare optimized for GEO and MEO survey operations and besides the required SST\noperational capability, they will also provide an important development\ncomponent to the Portuguese SST network. The telescopes will be equipped with\nfilter wheels, being able to perform observations in several optical bands\nincluding white light, BVRI bands and narrow band filters such as H(alpha) and\nO[III] to study the potentially different albedos of objects. This configuration\nenables us to conduct a study on space debris classification/characterization\nusing combinations of different colors, aiming at the production of improved color-index\nschemes to be incorporated in the automatic pipelines for the classification\nof space debris. This optical sensor will also be used to conduct studies on\nimage processing algorithms, including source extraction and classification\nsolutions through the application of machine learning techniques. Since SST\ndedicated telescopes produce a large quantity of data per observation night,\nfast, efficient and automatic image processing techniques are mandatory. A\nplatform like this one, dedicated to the development of Space Surveillance\nstudies, will add a critical capability to keep the Portuguese SST network\nupdated, and as a consequence it may provide useful developments to the\nEuropean SST network as well.", "category": "astro-ph_IM" }, { "text": "The Solar Probe ANalyzers -- Electrons on Parker Solar Probe: Electrostatic analyzers of different designs have been used since the\nearliest days of the space age, beginning with the very earliest solar wind\nmeasurements made by Mariner 2 en route to Venus in 1962. The Parker Solar\nProbe (PSP) mission, NASA's first dedicated mission to study the innermost\nreaches of the heliosphere, makes its thermal plasma measurements using a suite\nof instruments called the Solar Wind Electrons, Alphas, and Protons (SWEAP)\ninvestigation. SWEAP's electron Parker Solar Probe Analyzer (SPAN-E)\ninstruments are a pair of top-hat electrostatic analyzers on PSP that are\ncapable of measuring the electron distribution function in the solar wind from\n2 eV to 30 keV. For the first time, in-situ measurements of thermal electrons\nprovided by SPAN-E will help reveal the heating and acceleration mechanisms\ndriving the evolution of the solar wind at the points of acceleration and\nheating, closer than ever before to the Sun.
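The colour-index schemes mentioned above boil down to magnitude differences between bands (a minimal sketch; the fluxes are invented):

    import numpy as np

    def colour_index(flux_band1, flux_band2):
        # Instrumental colour in magnitudes, e.g. B - V.
        return -2.5 * np.log10(flux_band1 / flux_band2)

    print(colour_index(1200.0, 1500.0))   # ~0.24 mag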
This paper details the design of\nthe SPAN-E sensors and their operation, data formats, and measurement caveats\nfrom Parker Solar Probe's first two close encounters with the Sun.", "category": "astro-ph_IM" }, { "text": "Optical Design and Characterization of 40-GHz Detector and Module for\n the BICEP Array: Families of cosmic inflation models predict a primordial gravitational-wave\nbackground that imprints a B-mode polarization pattern in the Cosmic Microwave\nBackground (CMB). High-sensitivity instruments with wide frequency coverage and\nwell-controlled systematic errors are needed to constrain the faint B-mode\namplitude. We have developed antenna-coupled Transition Edge Sensor (TES)\narrays for high-sensitivity polarized CMB observations over a wide range of\nmillimeter-wave bands. BICEP Array, the latest phase of the BICEP/Keck\nexperiment series, is a multi-receiver experiment designed to search for\ninflationary B-mode polarization to a precision $\sigma$(r) between 0.002 and\n0.004 after 3 full years of observations, depending on foreground complexity\nand the degree of lensing removal. We describe the electromagnetic design and\nmeasured performance of the BICEP Array low-frequency 40-GHz detectors, their\npackaging in focal plane modules, and optical characterization including\nefficiency and beam matching between polarization pairs. We summarize the\ndesign and simulated optical performance, including an approach to improve the\noptical efficiency by reducing mismatch losses. We report the measured beam maps for\na new broad-band corrugation design to minimize beam differential ellipticity\nbetween polarization pairs caused by interactions with the module housing\nframe, which helps minimize polarized beam mismatch that converts CMB\ntemperature to polarization ($T \rightarrow P$) anisotropy in CMB maps.", "category": "astro-ph_IM" }, { "text": "Physically constrained causal noise models for high-contrast imaging of\n exoplanets: The detection of exoplanets in high-contrast imaging (HCI) data hinges on\npost-processing methods to remove spurious light from the host star. So far,\nexisting methods for this task hardly utilize any of the available domain\nknowledge about the problem explicitly. We propose a new approach to HCI\npost-processing based on a modified half-sibling regression scheme, and show\nhow we use this framework to combine machine learning with existing scientific\ndomain knowledge. On three real data sets, we demonstrate that the resulting\nsystem performs clearly better (both visually and in terms of the SNR) than one\nof the currently leading algorithms. If further studies can confirm these\nresults, our method could have the potential to allow significant discoveries\nof exoplanets both in new and archival data.", "category": "astro-ph_IM" }, { "text": "Automated Real-Time Classification and Decision Making in Massive Data\n Streams from Synoptic Sky Surveys: The nature of scientific and technological data collection is evolving\nrapidly: data volumes and rates grow exponentially, with increasing complexity\nand information content, and there has been a transition from static data sets\nto data streams that must be analyzed in real time. Interesting or anomalous\nphenomena must be quickly characterized and followed up with additional\nmeasurements via optimal deployment of limited assets. 
Modern astronomy\npresents a variety of such phenomena in the form of transient events in digital\nsynoptic sky surveys, including cosmic explosions (supernovae, gamma ray\nbursts), relativistic phenomena (black hole formation, jets), potentially\nhazardous asteroids, etc. We have been developing a set of machine learning\ntools to detect, classify and plan a response to transient events for astronomy\napplications, using the Catalina Real-time Transient Survey (CRTS) as a\nscientific and methodological testbed. The ability to respond rapidly to the\npotentially most interesting events is a key bottleneck that limits the\nscientific returns from the current and anticipated synoptic sky surveys.\nSimilar challenges arise in other contexts, from environmental monitoring using\nsensor networks to autonomous spacecraft systems. Given the exponential growth\nof data rates, and the time-critical response, we need a fully automated and\nrobust approach. We describe the results obtained to date, and the possible\nfuture developments.", "category": "astro-ph_IM" }, { "text": "Long term measurements from the M\u00e1tra Gravitational and Geophysical\n Laboratory: A summary of the long-term data taking related to one of the proposed\nlocations of a next-generation ground-based gravitational-wave detector is\npresented here. Results of seismic and infrasound noise, electromagnetic attenuation and cosmic\nmuon radiation measurements are reported in the underground Matra Gravitational\nand Geophysical Laboratory near Gy\"ongy\"osoroszi, Hungary. The collected\nseismic data of more than two years are evaluated from the point of view of the\nEinstein Telescope, a proposed third generation underground gravitational wave\nobservatory. Applying our results to the site selection will significantly\nimprove the signal-to-noise ratio of the multi-messenger astrophysics era,\nespecially in the low-frequency regime.", "category": "astro-ph_IM" }, { "text": "Imfit: A Fast, Flexible New Program for Astronomical Image Fitting: I describe a new, open-source astronomical image-fitting program called\nImfit, specialized for galaxies but potentially useful for other sources, which\nis fast, flexible, and highly extensible. A key characteristic of the program\nis an object-oriented design which allows new types of image components (2D\nsurface-brightness functions) to be easily written and added to the program.\nImage functions provided with Imfit include the usual suspects for galaxy\ndecompositions (Sersic, exponential, Gaussian), along with Core-Sersic and\nbroken-exponential profiles, elliptical rings, and three components which\nperform line-of-sight integration through 3D luminosity-density models of disks\nand rings seen at arbitrary inclinations.\n Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead\nsimplex, and Differential Evolution, allowing trade-offs between speed and\ndecreased sensitivity to local minima in the fit landscape. Minimization can be\ndone using the standard chi^2 statistic (using either data or model values to\nestimate per-pixel Gaussian errors, or else user-supplied error images) or\nPoisson-based maximum-likelihood statistics; the latter approach is\nparticularly appropriate for cases of Poisson data in the low-count regime. 
I\nshow that fitting low-S/N galaxy images using chi^2 minimization and\nindividual-pixel Gaussian uncertainties can lead to significant biases in\nfitted parameter values, which are avoided if a Poisson-based statistic is\nused; this is true even when Gaussian read noise is present.", "category": "astro-ph_IM" }, { "text": "Experimental study on Modified Linear Quadratic Gaussian Control for\n Adaptive Optics: To achieve high-resolution imaging, the standard control algorithm used for\nclassical adaptive optics (AO) is the simple but efficient\nproportional-integral (PI) controller. The goal is to minimize the root mean\nsquare (RMS) error of the residual wave front. However, with the PI controller\none does not reach this minimum. A possibility to achieve it is to use Linear\nQuadratic Gaussian Control (LQG). In practice, however, this control algorithm\nstill encounters one unexpected problem, leading to the divergence of control\nin AO. In this paper we propose a Modified LQG (MLQG) to solve this issue. The\ncontroller is analyzed explicitly. Tests in the lab show strong stability and\nhigh precision compared to the classical control.", "category": "astro-ph_IM" }, { "text": "Simulations of astrometric planet detection in Alpha Centauri by\n intensity interferometry: Recent dynamical studies indicate that the possibility of an Earth-like\nplanet around $\alpha\;$Cen A or B should be taken seriously. Such a planet, if\nit exists, would perturb the orbital astrometry by $<10 \ {\mu}\rm as$, which\nis $10^{-6}$ of the separation between the two stars. We assess the feasibility\nof detecting such perturbations using ground-based intensity interferometry. We\nsimulate a dedicated setup consisting of four 40-cm telescopes equipped with\nphoton counters and correlators with time resolution $0.1\,\rm ns$, and a sort\nof matched filter implemented through an aperture mask. The astrometric error\nfrom one night of observing $\alpha\;$Cen AB is $\approx0.5\,\rm mas$. The\nerror decreases if longer observing times and multiple spectral channels are\nused, as $(\hbox{channels}\times\hbox{nights})^{-1/2}$.", "category": "astro-ph_IM" }, { "text": "The Habitable Zone Planet Finder: A Proposed High Resolution NIR\n Spectrograph for the Hobby Eberly Telescope to Discover Low Mass Exoplanets\n around M Dwarfs: The Habitable Zone Planet Finder (HZPF) is a proposed instrument for the 10m-class\nHobby Eberly Telescope that will be capable of discovering low mass\nplanets around M dwarfs. HZPF will be fiber-fed, provide a spectral resolution\nR~50,000 and cover the wavelength range 0.9-1.65{\mu}m, the Y, J and H NIR\nbands where most of the flux is emitted by mid-late type M stars, and where\nmost of the radial velocity information is concentrated. Enclosed in a chilled\nvacuum vessel with active temperature control, fiber scrambling and mechanical\nagitation, HZPF is designed to achieve a radial velocity precision < 3m/s, with\na desire to obtain <1m/s for the brightest targets. This instrument will enable\na study of the properties of low mass planets around M dwarfs, discover planets\nin the habitable zones around these stars, as well as serve as an essential radial\nvelocity confirmation tool for astrometric and transit detections around late M\ndwarfs. Radial velocity observation in the near-infrared (NIR) will also enable\na search for close-in planets around young active stars, complementing the\nsearch space enabled by upcoming high-contrast imaging instruments like GPI,\nSPHERE and PALM3K. 
Tests with a prototype Pathfinder instrument have already\ndemonstrated the ability to recover radial velocities at 7-10 m/s precision\nfrom integrated sunlight and ~15-20 m/s precision on stellar observations at\nthe HET. These tests have also demonstrated the ability to work in the NIR Y\nand J bands with an uncooled instrument. We will also discuss lessons learned\nabout calibration and performance from our tests and how they impact the\noverall design of the HZPF.", "category": "astro-ph_IM" }, { "text": "High contrast imaging at the photon noise limit with self-calibrating\n WFS/C systems: High contrast imaging (HCI) systems rely on active wavefront control (WFC) to\ndeliver deep raw contrast in the focal plane, and on calibration techniques to\nfurther enhance contrast by identifying planet light within the residual\nspeckle halo. Both functions can be combined in an HCI system and we discuss a\npath toward designing HCI systems capable of calibrating residual starlight at\nthe fundamental contrast limit imposed by photon noise. We highlight the value\nof deploying multiple high-efficiency wavefront sensors (WFSs) covering a wide\nspectral range and spanning multiple optical locations. We show how their\ncombined information can be leveraged to simultaneously improve WFS sensitivity\nand residual starlight calibration, ideally making it impossible for an image\nplane speckle to hide from WFS telemetry. We demonstrate residual starlight\ncalibration in the laboratory and on-sky, using both a coronagraphic setup and\na nulling spectro-interferometer. In both cases, we show that bright starlight\ncan calibrate residual starlight.", "category": "astro-ph_IM" }, { "text": "Interferometric Beam Combination with a Triangular Tricoupler Photonic\n Chip: Beam combiners are important components of an optical/infrared astrophysical\ninterferometer, with many variants as to how to optimally combine two or more\nbeams of light to fringe-track and obtain the complex fringe visibility. One\nsuch method is the use of an integrated optics chip that can instantaneously\nprovide the measurement of the visibility without temporal or spatial\nmodulation of the optical path. Current asymmetric planar designs are complex,\nresulting in a throughput penalty, and so here we present developments of a\nthree-dimensional triangular tricoupler that can provide the required\ninterferometric information with a simple design and only three outputs. Such a\nbeam combiner is planned to be integrated into the upcoming $\textit{Pyxis}$\ninterferometer, where it can serve as a high-throughput beam combiner with a\nlow size footprint. Results of the characterisation of such a coupler are\npresented, highlighting a throughput of 85$\pm$7% and a flux splitting ratio\nbetween 33:33:33 and 52:31:17 over a 20% bandpass. We also show the response of\nthe chip to changes in optical path, obtaining an instantaneous complex\nvisibility and group delay estimate at each input delay.", "category": "astro-ph_IM" }, { "text": "Archival Legacy Investigations of Circumstellar Environments (ALICE):\n Statistical assessment of point source detections: The ALICE program, for Archival Legacy Investigation of Circumstellar\nEnvironment, is currently conducting a virtual survey of about 400 stars, by\nre-analyzing the HST-NICMOS coronagraphic archive with advanced post-processing\ntechniques. 
We present here the strategy that we adopted to identify detections\nand potential candidates for follow-up observations, and we give a preliminary\noverview of our detections. We also present a statistical analysis conducted to\nevaluate the confidence level of these detections and the completeness of our\ncandidate search.", "category": "astro-ph_IM" }, { "text": "A Condition Monitoring Concept Studied at the MST Prototype for the\n Cherenkov Telescope Array: The Cherenkov Telescope Array (CTA) is a future ground-based gamma-ray\nobservatory that will provide unprecedented sensitivity and angular resolution\nfor the detection of gamma rays with energies above a few tens of GeV. In\ncomparison to existing instruments (like H.E.S.S., MAGIC, and VERITAS) the\nsensitivity will be improved by installing two extended arrays of telescopes in\nthe northern and southern hemisphere, respectively. A large number of planned\ntelescopes (>100 in total) motivates the application of predictive maintenance\ntechniques to the individual telescopes. A constant and automatic condition\nmonitoring of the mechanical telescope structure and of the drive system\n(motors, gears) is considered for this purpose. The condition monitoring system\naims at detecting degradations well before critical errors occur; it should\nhelp to ensure long-term operation and to reduce the maintenance efforts of the\nobservatory. We present approaches for the condition monitoring of the\nstructure and the drive system of Medium-Sized Telescopes (MSTs).\nThe overall concept has been developed and tested at the MST prototype for CTA\nin Berlin. The sensors used, the joint data acquisition system, possible\nanalysis methods (like Operational Modal Analysis, OMA, and Experimental Modal\nAnalysis, EMA) and first performance results are discussed.", "category": "astro-ph_IM" }, { "text": "KLLR: A scale-dependent, multivariate model class for regression\n analysis: The underlying physics of astronomical systems governs the relation between\ntheir measurable properties. Consequently, quantifying the statistical\nrelationships between system-level observable properties of a population offers\ninsights into the astrophysical drivers of that class of systems. While purely\nlinear models capture behavior over a limited range of system scale, the fact\nthat astrophysics is ultimately scale-dependent implies the need for a more\nflexible approach to describing population statistics over a wide dynamic\nrange. For such applications, we introduce and implement a class of\nKernel-Localized Linear Regression (KLLR) models. KLLR is a natural extension\nto the commonly-used linear models that allows the parameters of the linear\nmodel -- normalization, slope, and covariance matrix -- to be scale-dependent.\nKLLR performs inference in two steps: (1) it estimates the mean relation\nbetween a set of independent variables and a dependent variable, and (2) it\nestimates the conditional covariance of the dependent variables given a set of\nindependent variables. We demonstrate the model's performance in a simulated\nsetting and showcase an application of the proposed model in analyzing the\nbaryonic content of dark matter halos. 
As a part of this work, we publicly\nrelease a Python implementation of the KLLR method.", "category": "astro-ph_IM" }, { "text": "The Venus ground-based image Active Archive: a database of amateur\n observations of Venus in ultraviolet and infrared light: The Venus ground-based image Active Archive is an online database designed to\ncollect ground-based images of Venus in such a way that they are optimally\nuseful for science. The Archive was built to support ESA's Venus Amateur\nObserving Project, which utilises the capabilities of advanced amateur\nastronomers to collect filtered images of Venus in ultraviolet, visible and\nnear-infrared light. These images complement the observations of the Venus\nExpress spacecraft, which cannot continuously monitor the northern hemisphere\nof the planet due to its elliptical orbit with apocentre above the south pole.\nWe present the first set of observations available in the Archive and assess\nthe usability of the dataset for scientific purposes.", "category": "astro-ph_IM" }, { "text": "Optimal Dithering Configuration Mitigating\n Rayleigh-Backscattering-Induced Distortion in Radioastronomic Optical Fiber\n Systems: In the context of Radioastronomic applications where the Analog\nRadio-over-Fiber technology is used for the antenna downlink, detrimental\nnonlinearity effects arise because of the interference between the forward\nsignal generated by the laser and the Rayleigh backscattered one which is\nre-forwarded by the laser itself toward the photodetector.\n The adoption of the so-called dithering technique, which involves the direct\nmodulation of the laser with a sinusoidal tone and takes advantage of the laser\nchirping phenomenon, has been proven to reduce such Rayleigh-backscattering-induced\nnonlinearities. The frequency and the amplitude of the dithering tone\nshould both be as low as possible, in order to avoid undesired collateral\neffects on the received spectrum as well as to keep the global\nenergy consumption at low levels.\n Through a comprehensive analysis of dithered Radio over Fiber systems, it is\ndemonstrated that a progressive reduction of the dithering tone frequency\naffects in a peculiar fashion both the chirping characteristics of the field\nemitted by the laser and the spectrum pattern of the received signal at the\nfiber end.\n Accounting for the concurrent effects caused by such phenomena, optimal\noperating conditions are identified for the implementation of the dithering\ntone technique in radioastronomic systems.", "category": "astro-ph_IM" }, { "text": "H2 distribution during 2-phase Molecular Cloud Formation: We performed high-resolution 3D MHD simulations and compared them to\nobservations of translucent molecular clouds. We show that the observed\npopulations of rotational levels of H2 can arise as a consequence of the\nmulti-phase structure of the ISM.", "category": "astro-ph_IM" }, { "text": "Characterization and Physical Explanation of Energetic Particles on\n Planck HFI Instrument: The Planck High Frequency Instrument (HFI) has been surveying the sky\ncontinuously from the second Lagrangian point (L2) between August 2009 and\nJanuary 2012. It operates with 52 high-impedance bolometers cooled to 100mK in\na frequency range between 100 GHz and 1 THz with unprecedented sensitivity, but\nstrong coupling with cosmic radiation. 
At L2, the particle flux is about 5\n$\mathrm{cm^{-2}\,s^{-1}}$ and is dominated by protons incident on the spacecraft.\nProtons with an energy above 40 MeV can penetrate the focal plane unit box,\ncausing two different effects: glitches in the raw data from direct interaction\nof cosmic rays with detectors (producing a data loss of about 15% at the end of\nthe mission) and thermal drifts in the bolometer plate at 100mK adding\nnon-Gaussian noise at frequencies below 0.1Hz. The HFI consortium has made\nstrong efforts to correct for these effects in the time-ordered data and\nfinal Planck maps. This work presents the physical explanation\nof the glitches observed in the HFI instrument in flight. To reach this goal,\nwe performed several ground-based experiments using protons and $\alpha$\nparticles to test the impact of particles on the HFI spare bolometers with\nbetter control of the environmental conditions with respect to the in-flight\ndata. We have shown that the dominant part of the glitches observed in the data\ncomes from the impact of cosmic rays on the silicon die frame supporting the\nmicro-machined bolometric detectors, propagating energy mainly by ballistic\nphonons and by thermal diffusion. The implications of these results for future\nsatellite missions will be discussed.", "category": "astro-ph_IM" }, { "text": "Stellar distances from spectroscopic observations: a new technique: A Bayesian approach to the determination of stellar distances from\nphotometric and spectroscopic data is presented and tested both on pseudodata,\ndesigned to mimic data for stars observed by the RAVE survey, and on the real\nstars from the Geneva-Copenhagen survey. It is argued that this method is\noptimal in the sense that it brings to bear all available information and that\nits results are limited only by observational errors and the underlying physics\nof stars. The method simultaneously returns the metallicities, ages and masses\nof programme stars. Remarkably, the uncertainty in the output metallicity is\ntypically 44 per cent smaller than the uncertainty in the input metallicity.", "category": "astro-ph_IM" }, { "text": "4MOST - 4-metre Multi-Object Spectroscopic Telescope: The 4MOST consortium is currently halfway through a Conceptual Design study\nfor ESO with the aim of developing a wide-field (>3 square degree, goal >5 square\ndegree), high-multiplex (>1500 fibres, goal 3000 fibres) spectroscopic survey\nfacility for an ESO 4m-class telescope (VISTA). 4MOST will run permanently on\nthe telescope to perform a 5-year public survey yielding more than 20 million\nspectra at resolution R~5000 ({\lambda}=390-1000 nm) and more than 2 million\nspectra at R~20,000 (395-456.5 nm & 587-673 nm). The 4MOST design is especially\nintended to complement three key all-sky, space-based observatories of prime\nEuropean interest: Gaia, eROSITA and Euclid. Initial design and performance\nestimates for the wide-field corrector concepts are presented. We consider two\nfibre positioner concepts, a well-known Phi-Theta system and a new R-Theta\nconcept with a large patrol area. The spectrographs are fixed-configuration\ntwo-arm spectrographs, with dedicated spectrographs for the high and low\nresolutions. A full facility simulator is being developed to guide trade-off\ndecisions regarding the optimal field-of-view, number of fibres needed, and the\nrelative fraction of high-to-low resolution fibres. 
Mock catalogues with\ntemplate spectra from seven Design Reference Surveys are simulated to verify\nthe science requirements of 4MOST. The 4MOST consortium aims to deliver the\nfull 4MOST facility by the end of 2018 and start delivering high-level data\nproducts for both consortium and ESO community targets a year later, with yearly\nincrements.", "category": "astro-ph_IM" }, { "text": "Creating A Galactic Plane Atlas With Amazon Web Services: This paper describes by example how astronomers can use cloud-computing\nresources offered by Amazon Web Services (AWS) to create new datasets at scale.\nWe have created from existing surveys an atlas of the Galactic Plane at 16\nwavelengths from 1 {\mu}m to 24 {\mu}m with pixels co-registered at spatial\nsampling of 1 arcsec. We explain how open source tools support management and\noperation of a virtual cluster on AWS platforms to process data at scale, and\ndescribe the technical issues that users will need to consider, such as\noptimization of resources, resource costs, and management of virtual machine\ninstances.", "category": "astro-ph_IM" }, { "text": "Driving unmodelled gravitational-wave transient searches using\n astrophysical information: Transient gravitational-wave searches can be divided into two main families\nof approaches: modelled and unmodelled searches, based on matched filtering\ntechniques and time-frequency excess power identification respectively. The\nformer, mostly applied in the context of compact binary searches, relies on the\nprecise knowledge of the expected gravitational-wave phase evolution. This\ninformation is not always available at the required accuracy for all plausible\nastrophysical scenarios, e.g., in the presence of orbital precession or\neccentricity. The other search approach imposes few priors on the targeted\nsignal. We propose an intermediate route based on a modification of unmodelled\nsearch methods in which time-frequency pattern matching is constrained by\nastrophysical waveform models (but not requiring accurate prediction for the\nwaveform phase evolution). The set of astrophysically motivated patterns is\nconveniently encapsulated in a graph that encodes the time-frequency pixels\nand their co-occurrence. This allows the use of efficient graph-based\noptimization techniques to perform the pattern search in the data. We show in\nthe example of black-hole binary searches that such an approach leads to an\naverage increase in the distance reach (+7-8\%) for this specific source over\nstandard unmodelled searches.", "category": "astro-ph_IM" }, { "text": "Speckle Suppression Through Dual Imaging Polarimetry, and a Ground-Based\n Image of the HR 4796A Circumstellar Disk: We demonstrate the versatility of a dual imaging polarimeter working in\ntandem with a Lyot coronagraph and Adaptive Optics to suppress the highly\nstatic speckle noise pattern--the greatest hindrance to ground-based direct\nimaging of planets and disks around nearby stars. Using a double difference\ntechnique with the polarimetric data, we quantify the level of speckle\nsuppression, and hence improved sensitivity, by placing an ensemble of\nartificial faint companions into real data, with given total brightness and\npolarization. For highly polarized sources within 0.5 arcsec, we show that we\nachieve 3 to 4 magnitudes greater sensitivity through polarimetric speckle\nsuppression than simply using a coronagraph coupled to a high-order Adaptive\nOptics system. 
Using such a polarimeter with a classical Lyot coronagraph at\nthe 3.63m AEOS telescope, we have obtained a 6.5 sigma detection in the H-band\nof the 76 AU diameter circumstellar debris disk around the star HR 4796A. Our\ndata represent the first definitive, ground-based, near-IR polarimetric image\nof the HR 4796A debris disk and clearly show the two outer ansae of the disk,\nevident in Hubble Space Telescope NICMOS/STIS imaging. We derive a lower limit\nto the fractional linear polarization of 29% caused by dust grains in the disk.\nIn addition, we fit simple morphological models of optically thin disks to our\ndata allowing us to constrain the dust disk scale height to $2.5^{+5.0}_{-1.3}$ AU\nand the scattering asymmetry parameter ($g=0.20^{+0.07}_{-0.10}$). These values are\nconsistent with several lines of evidence suggesting that the HR 4796A disk is\ndominated by a micron-sized dust population, and are indeed typical of disks in\ntransition between those surrounding the Herbig Ae stars and those associated\nwith Vega-like stars.", "category": "astro-ph_IM" }, { "text": "Wide-band Profile Domain Pulsar Timing Analysis: We extend profile domain pulsar timing to incorporate wide-band effects such\nas frequency-dependent profile evolution and broadband shape variation in the\npulse profile. We also incorporate models for temporal variations in both pulse\nwidth and in the separation in phase of the main pulse and interpulse. We\nperform the analysis with both nested sampling and Hamiltonian Monte Carlo\nmethods. In the latter case we introduce a new parameterisation of the\nposterior that is extremely efficient in the low signal-to-noise regime and can\nbe readily applied to a wide range of scientific problems. We apply this\nmethodology to a series of simulations, and to between seven and nine yr of\nobservations for PSRs J1713$+$0747, J1744$-$1134, and J1909$-$3744 with\nfrequency coverage that spans 700-3600 MHz. We use a smooth model for profile\nevolution across the full frequency range, and compare smooth and piecewise\nmodels for the temporal variations in DM. We find the profile domain framework\nconsistently results in improved timing precision compared to the standard\nanalysis paradigm by as much as 40% for timing parameters. Incorporating\nsmoothness in the DM variations into the model further improves timing\nprecision by as much as 30%. For PSR J1713+0747 we also detect pulse shape\nvariation uncorrelated between epochs, which we attribute to variation\nintrinsic to the pulsar at a level consistent with previously published\nanalyses. Not accounting for this shape variation biases the measured arrival\ntimes at the level of $\sim$30 ns, the same order of magnitude as the expected\nshift due to gravitational waves in the pulsar timing band.", "category": "astro-ph_IM" }, { "text": "Microarcsecond VLBI pulsar astrometry with PSR$\u03c0$ II. parallax\n distances for 57 pulsars: We present the results of PSR$\pi$, a large astrometric project targeting\nradio pulsars using the Very Long Baseline Array (VLBA). From our astrometric\ndatabase of 60 pulsars, we have obtained parallax-based distance measurements\nfor all but 3, with a parallax precision of typically 40 $\mu$as and\napproaching 10 $\mu$as in the best cases. Our full sample doubles the number of\nradio pulsars with a reliable ($\gtrsim$5$\sigma$) model-independent distance\nconstraint. 
Importantly, many of the newly measured pulsars are well outside\nthe solar neighbourhood, and so PSR$\pi$ brings a near-tenfold increase in the\nnumber of pulsars with a reliable model-independent distance at $d>2$ kpc.\nUsing our sample along with previously published results, we show that even the\nmost recent models of the Galactic electron density distribution contain\nsignificant shortcomings, particularly at high Galactic latitudes. When\ncomparing our results to pulsar timing, two of the four millisecond pulsars in\nour sample exhibit significant discrepancies in the estimates of proper motion\nobtained by at least one pulsar timing array. With additional VLBI observations\nto improve the absolute positional accuracy of our reference sources and an\nexpansion of the number of millisecond pulsars, we will be able to extend the\ncomparison of proper motion discrepancies to a larger sample of pulsar\nreference positions, which will provide a much more sensitive test of the\napplicability of the solar system ephemerides used for pulsar timing. Finally,\nwe use our large sample to estimate the typical accuracy attainable for\ndifferential astrometry with the VLBA when observing pulsars, showing that for\nsufficiently bright targets observed 8 times over 18 months, a parallax\nuncertainty of 4 $\mu$as per arcminute of separation between the pulsar and\ncalibrator can be expected.", "category": "astro-ph_IM" }, { "text": "The Design and Performance of IceCube DeepCore: The IceCube neutrino observatory in operation at the South Pole, Antarctica,\ncomprises three distinct components: a large buried array for ultrahigh energy\nneutrino detection, a surface air shower array, and a new buried component\ncalled DeepCore. DeepCore was designed to lower the IceCube neutrino energy\nthreshold by over an order of magnitude, to energies as low as about 10 GeV.\nDeepCore is situated primarily 2100 m below the surface of the icecap at the\nSouth Pole, at the bottom center of the existing IceCube array, and began\ntaking physics data in May 2010. Its location takes advantage of the\nexceptionally clear ice at those depths and allows it to use the surrounding\nIceCube detector as a highly efficient active veto against the principal\nbackground of downward-going muons produced in cosmic-ray air showers. DeepCore\nhas a module density roughly five times higher than that of the standard\nIceCube array, and uses photomultiplier tubes with a new photocathode featuring\na quantum efficiency about 35% higher than standard IceCube PMTs. Taken\ntogether, these features of DeepCore will increase IceCube's sensitivity to\nneutrinos from WIMP dark matter annihilations, atmospheric neutrino\noscillations, galactic supernova neutrinos, and point sources of neutrinos in\nthe northern and southern skies. In this paper we describe the design and\ninitial performance of DeepCore.", "category": "astro-ph_IM" }, { "text": "Radio Astronomy Data Transfer and eVLBI using KAREN: The Kiwi Advanced Research and Education Network (KAREN) has been used to\ntransfer large volumes of radio astronomical data between the Radio\nAstronomical Observatory at Warkworth, New Zealand and various international\norganizations involved in joint projects and VLBI observations. Here we report\non the current status of connectivity and on the results of testing different\ndata transfer protocols. We investigate new UDP protocols such as 'tsunami' and\nUDT and demonstrate that the UDT protocol is more efficient than 'tsunami' and\n'ftp'. 
We also report on tests of direct data streaming from the radio\ntelescope receiving system to the correlation centre without intermediate\nbuffering or recording (real-time eVLBI).", "category": "astro-ph_IM" }, { "text": "CUTE solutions for two-point correlation functions from large\n cosmological datasets: With the advent of new large galaxy surveys, which will produce enormous\ndatasets with hundreds of millions of objects, new computational techniques are\nnecessary in order to extract from them any two-point statistic, the\ncomputational time of which grows with the square of the number of objects to\nbe correlated. Fortunately, technology now provides multiple means to massively\nparallelize this problem. Here we present a free, open-source code specifically\ndesigned for this kind of calculation. Two implementations are provided: one\nfor execution on shared-memory machines using OpenMP and one that runs on\ngraphical processing units (GPUs) using CUDA. The code is available at\nhttp://members.ift.uam-csic.es/dmonge/CUTE.html.", "category": "astro-ph_IM" }, { "text": "Using the Astrophysics Source Code Library: Find, cite, download, parse,\n study, and submit: The Astrophysics Source Code Library (ASCL) contains 3000 metadata records\nabout astrophysics research software and serves primarily as a registry of\nsoftware, though it also can and does accept code deposit. Though the ASCL was\nstarted in 1999, many astronomers, especially those new to the field, are not\nvery familiar with it. This hands-on virtual tutorial was geared to new users\nof the resource to teach them how to use the ASCL, with a focus on finding\nsoftware and information about software not only in this resource, but also by\nusing Google and NASA's Astrophysics Data System (ADS). With computational\nmethods so important to research, finding these methods is useful for examining\n(for transparency) and possibly reusing the software (for reproducibility or to\nenable new research). Metadata about software is useful, for example, for\nknowing how to cite software when it is used for research and studying trends\nin the computational landscape. Though the tutorial was primarily aimed at new\nusers, advanced users were also likely to learn something new.", "category": "astro-ph_IM" }, { "text": "MAGIC-II Camera Slow Control Software: The Imaging Atmospheric Cherenkov Telescope MAGIC I has recently been\nextended to a stereoscopic system by adding a second 17 m telescope, MAGIC-II.\nOne of the major improvements of the second telescope is an improved camera.\nThe Camera Control Program is embedded in the telescope control software as an\nindependent subsystem.\n The Camera Control Program is effective software for monitoring and controlling\nthe camera values and their settings, and is written in the visual programming\nlanguage LabVIEW. The two main parts, the Central Variables File, which stores\nall information on the pixel and other camera parameters, and the Comm Control\nRoutine, which controls changes in possible settings, provide reliable\noperation. A safety routine protects the camera from misuse through accidental\ncommands, from bad weather conditions and from hardware errors by reacting\nautomatically.", "category": "astro-ph_IM" }, { "text": "On the use of asymmetric PSF on NIR images of crowded stellar fields: We present data collected using the camera PISCES coupled with the First Light\nAdaptive Optics (FLAO) system mounted at the Large Binocular Telescope (LBT). 
The\nimages were collected using two natural guide stars with an apparent magnitude\nof R<13 mag. During these observations the seeing was on average ~0.9\". The AO\nperformed very well: the images display a mean FWHM of 0.05 arcsec and of 0.06\narcsec in the J- and in the Ks-band, respectively. The Strehl ratio on the\nquoted images reaches 13-30% (J) and 50-65% (Ks), in the off-center and central\npointings respectively. On the basis of this sample we have reached a J-band\nlimiting magnitude of ~22.5 mag and the deepest Ks-band limiting magnitude ever\nobtained in a crowded stellar field: Ks~23 mag.\n J-band images display a complex change in the shape of the PSF when moving to\nlarger radial distances from the natural guide star. In particular, the stellar\nimages become more elongated when approaching the corners of the J-band images,\nwhereas the Ks-band images are more uniform. We discuss in detail the strategy\nused to perform accurate and deep photometry in these very challenging images.\nIn particular we will focus our attention on the use of an updated version of\nROMAFOT based on asymmetric and analytical Point Spread Functions.\n The quality of the photometry allowed us to properly identify a feature that\nclearly shows up in NIR bands: the main sequence knee (MSK). The MSK is\nindependent of the evolutionary age; therefore, its difference in magnitude\nfrom the canonical clock used to constrain the cluster age, the main sequence\nturn-off (MSTO), provides an estimate of the absolute age of the cluster. The key\nadvantage of this new approach is that the error decreases by a factor of two\nwhen compared with the classical one. Combining ground-based Ks with space\nF606W photometry, we estimate the absolute age of M15 to be 13.70±0.80 Gyr.", "category": "astro-ph_IM" }, { "text": "The ATA Digital Processing Requirements are Driven by RFI Concerns: As a new generation radio telescope, the Allen Telescope Array (ATA) is a\nprototype for the Square Kilometre Array (SKA). Here we describe recently\ndeveloped design constraints for the ATA digital signal processing chain as a\ncase study for SKA processing. As radio frequency interference (RFI) becomes\nincreasingly problematic for radio astronomy, radio telescopes must support a\nwide range of RFI mitigation strategies including online adaptive RFI nulling.\nWe observe that the requirements for digital accuracy and control speed are not\ndriven by astronomical imaging but by RFI. This can be understood from the fact\nthat high dynamic range and digital precision are necessary to remove strong RFI\nsignals from the weak astronomical background, and because RFI signals may\nchange rapidly compared with celestial sources. We review and critique lines of\nreasoning that lead us to some of the design specifications for ATA digital\nprocessing, including these: beamformer coefficients must be specified with at\nleast 1{\deg} precision and at least once per millisecond to enable flexible\nRFI excision.", "category": "astro-ph_IM" }, { "text": "Measurement of South Pole ice transparency with the IceCube LED\n calibration system: The IceCube Neutrino Observatory, approximately 1 km^3 in size, is now\ncomplete with 86 strings deployed in the Antarctic ice. IceCube detects the\nCherenkov radiation emitted by charged particles passing through or created in\nthe ice. To realize the full potential of the detector, the properties of light\npropagation in the ice in and around the detector must be well understood. 
This\nreport presents a new method of fitting the model of light propagation in the\nice to a data set of in-situ light source events collected with IceCube. The\nresulting set of derived parameters, namely the measured values of scattering\nand absorption coefficients vs. depth, is presented and a comparison of IceCube\ndata with simulations based on the new model is shown.", "category": "astro-ph_IM" }, { "text": "Multiplexing lobster-eye optics: a concept for wide-field X-ray\n monitoring: We propose a concept of multiplexing lobster-eye (MuLE) optics to achieve\nsignificant reductions in the number of focal plane imagers in lobster-eye (LE)\nwide-field X-ray monitors. In the MuLE configuration, an LE mirror is divided\ninto several segments and the X-rays reflected on each of these segments are\nfocused on a single image sensor in a multiplexed configuration. If each LE\nsegment assumes a different rotation angle, the azimuthal rotation angle of a\ncross-like image reconstructed from a point source by the LE optics identifies\nthe specific segment that focuses the X-rays on the imager. With a focal length\nof 30 cm and LE segments with areas of 10 x 10 cm^2, ~1 sr of the sky can be\ncovered with 36 LE segments and only four imagers (with total areas of 10 x 10\ncm^2). A ray tracing simulation was performed to evaluate the nine-segment MuLE\nconfiguration. The simulation showed that the flux (0.5 to 2 keV) associated\nwith the 5-sigma detection limit was ~2 x 10^-10 erg cm^-2 s^-1 (10 mCrab) for\na transient with a duration of 100 s. The simulation also showed that the\ndirection of the transient for flux in the range of 14 to 17 mCrab at 0.6 keV\nwas determined correctly at the 99.7% confidence level. We conclude that the MuLE\nconfiguration can become an effective on-board device for small satellites for\nfuture X-ray wide-field transient monitoring.", "category": "astro-ph_IM" }, { "text": "Multi-messenger Astronomy: a Bayesian approach: After the discovery of gravitational waves and the observation of\nneutrinos of cosmic origin, we have entered a new and exciting era where cosmic\nrays, neutrinos, photons and gravitational waves will be used simultaneously to\nstudy the highest energy phenomena in the Universe. Here we present a fully\nBayesian approach to the challenge of combining and comparing the wealth of\nmeasurements from existing and upcoming experimental facilities. We discuss the\nprocedure from a theoretical point of view and using simulations, we also\ndemonstrate the feasibility of the method by incorporating the use of\ninformation provided by different theoretical models and different experimental\nmeasurements.", "category": "astro-ph_IM" }, { "text": "Proximity Operators for Phase Retrieval: We present a new formulation of a family of proximity operators that\ngeneralize the projector step for phase retrieval. These proximity operators\nfor noisy intensity measurements can replace the classical \"noise free\"\nprojection in any projection-based algorithm. They are derived from a maximum\nlikelihood formulation and admit closed form solutions for both the Gaussian\nand the Poisson cases. In addition, we extend these proximity operators to\nundersampled intensity measurements. To assess their performance, these\noperators are exploited in a classical Gerchberg-Saxton algorithm. 
We present\nnumerical experiments showing that reconstruction of the complex amplitudes with\nthese proximity operators always performs better than with the classical\nintensity projector, while their computational overhead is moderate.", "category": "astro-ph_IM" }, { "text": "Progress with the LOFAR Imaging Pipeline: One of the science drivers of the new Low Frequency Array (LOFAR) is\nlarge-area surveys of the low-frequency radio sky. Realizing this goal requires\nautomated processing of the interferometric data, such that fully calibrated\nimages are produced by the system during survey operations. The LOFAR Imaging\nPipeline is the tool intended for this purpose, and is now undergoing\nsignificant commissioning work. The pipeline is now functional as an automated\nprocessing chain. Here we present several recent LOFAR images that have been\nproduced during the still ongoing commissioning period. These early LOFAR\nimages are representative of some of the science goals of the commissioning\nteam members.", "category": "astro-ph_IM" }, { "text": "Diffractive Microlensing: A New Probe of the Local Universe: Diffraction is important when nearby substellar objects gravitationally lens\ndistant stars. If the wavelength of the observation is comparable to the\nSchwarzschild radius of the lensing object, diffraction leaves an observable\nimprint on the lensing signature. The SKA may have sufficient sensitivity to\ndetect the typical sources, giant stars in the bulge. The diffractive\nsignatures in a lensing event break the degeneracies between the mass of the\nlens, its distance and proper motion.", "category": "astro-ph_IM" }, { "text": "Compensation of tropospheric and ionospheric effects in gravitational\n sessions of the spacecraft RadioAstron: The possibility of compensating for atmospheric influence in an experiment on\nprecision measurement of gravitational redshift using the \"RadioAstron\"\nspacecraft (SC) is discussed. When a signal propagates from a ground-based\ntracking station to a spacecraft and back, interaction with the ionosphere and\ntroposphere makes a considerable contribution to the frequency shift. A brief\noverview of the physical effects determining this contribution is given, and\nthe principles of calculation and compensation of the corresponding frequency\ndistortions of radio signals are described. Then these approaches are used to\nreduce the atmospheric frequency shift of the \"RadioAstron\" spacecraft signal.\nThe spacecraft hardware allows working in two communication modes: \"one-way\"\nand \"two-way\"; in addition, two communication channels at different frequencies\nwork simultaneously. In the \"one-way\" (SC - ground-based tracking station)\ncommunication mode, the signal is synchronized by the on-board hydrogen frequency\nstandard. The \"two-way\" (SC - ground-based tracking station - SC) mode is\nsynchronized by the ground hydrogen standard. The calculations performed allow\nus to compare the quality of compensation of atmospheric fluctuations performed\nby various methods and choose the optimal one.", "category": "astro-ph_IM" }, { "text": "RISTRETTO: coronagraph and AO designs enabling High Dispersion\n Coronagraphy at 2 lambda/D: RISTRETTO is the evolution of the original idea of coupling the VLT\ninstruments SPHERE and ESPRESSO, aiming at High Dispersion Coronagraphy.\nRISTRETTO is a visitor instrument that should enable the characterization of\nthe atmospheres of nearby exoplanets in reflected light, by using the technique\nof high-contrast, high-resolution spectroscopy. 
Its goal is to observe Prox Cen\nb and other planets placed at about 35mas from their star, i.e. 2lambda/D at\nlambda=750nm. The instrument is composed of an extreme adaptive optics system, a\ncoronagraphic Integral Field Unit, and a diffraction-limited spectrograph\n(R=140,000, lambda=620-840 nm).\n We present the status of our studies regarding the coronagraphic IFU and the\nXAO system. The former in particular is based on a modified version of the PIAA\napodizer, allowing nulling on the first diffraction ring. Our proposed design\nhas the potential to reach > 50% coupling and <1E-4 contrast at 2lambda/D in\nmedian seeing conditions.", "category": "astro-ph_IM" }, { "text": "The angular resolution of GRAPES-3 EAS array after correction for the\n shower front curvature: The angular resolution of an extensive air shower (EAS) array plays a\ncritical role in determining its sensitivity for the detection of point\n$\gamma$-ray sources in the multi-TeV energy range. GRAPES-3, an EAS array\nlocated at Ooty in India (11.4$^{\circ}$N, 76.7$^{\circ}$E, 2200 m altitude), is\ndesigned to study $\gamma$-rays in the TeV-PeV energy range. It comprises a\ndense array of 400 plastic scintillators deployed over an area of 25000 m$^2$\nand a large-area (560 m$^2$) muon telescope. A new statistical method allowed\nreal-time determination of the propagation delay of each detector in the\nGRAPES-3 array. The shower front is known to be curved, and here the details of\na new method developed for accurate measurement of the shower front curvature\nare presented. These two developments have led to a sizable\nimprovement in the angular resolution of the GRAPES-3 array. It is shown that the\ncurvature depends on the size and age of an EAS. By employing two different\ntechniques, namely, the odd-even and the left-right methods, independent\nestimates of the angular resolution are obtained. The odd-even method estimates\nthe best achievable resolution of the array. For obtaining the angular\nresolution, the left-right method is used after implementing the size- and\nage-dependent curvature corrections. A comparison of the angular resolution as a\nfunction of EAS energy by these two methods shows them to be virtually\nindistinguishable. The angular resolution of the GRAPES-3 array is 47$^{\prime}$\nfor energies E$>$5 TeV and improves to 17$^{\prime}$ at E$>$100 TeV, finally\napproaching 10$^{\prime}$ at E$>$500 TeV.", "category": "astro-ph_IM" }, { "text": "The Graphical User Interface of the Operator of the Cherenkov Telescope\n Array: The Cherenkov Telescope Array (CTA) is the next generation gamma-ray\nobservatory. CTA will incorporate about 100 imaging atmospheric Cherenkov\ntelescopes (IACTs) at a southern site, and about 20 in the north. Previous IACT\nexperiments have used up to five telescopes. Consequently, the design of a\ngraphical user interface (GUI) for the operator of CTA poses an interesting\nchallenge. In order to create an effective interface, the CTA team is\ncollaborating with experts from the field of Human-Computer Interaction. We\npresent here our GUI prototype. The back-end of the prototype is a Python Web\nserver. It is integrated with the observation execution system of CTA, which is\nbased on the Alma Common Software (ACS). The back-end incorporates a redis\ndatabase, which facilitates synchronization of GUI panels. Redis is also used\nto buffer information collected from various software components and databases.\nThe front-end of the prototype is based on Web technology. 
Communication\nbetween the Web server and clients is performed using WebSockets, and graphics\nare generated with the d3.js JavaScript library.", "category": "astro-ph_IM" }, { "text": "Gemini Planet Imager Observational Calibrations II: Detector Performance\n and Calibration: The Gemini Planet Imager is a newly commissioned facility instrument designed\nto measure the near-infrared spectra of young extrasolar planets in the solar\nneighborhood and obtain imaging polarimetry of circumstellar disks. GPI's\nscience instrument is an integral field spectrograph that utilizes a HAWAII-2RG\ndetector with a SIDECAR ASIC readout system. This paper describes the detector\ncharacterization and calibrations performed by the GPI Data Reduction Pipeline\nto compensate for effects including bad/hot/cold pixels, persistence,\nnon-linearity, vibration-induced microphonics and correlated read noise.", "category": "astro-ph_IM" }, { "text": "First tests of a 1 megapixel near-infrared avalanche photodiode array\n for ultra-low background space astronomy: Spectroscopic observations of Earth-like exoplanets and ultra-faint galaxies are\npriority science cases for the coming decades. Here, broadband source flux rates are\nmeasured in photons per square meter per hour, imposing extreme demands on\ndetector performance, including dark currents lower than 1 e-/pixel/kilosecond,\nread noise less than 1 e-/pixel/frame, and large formats. There are currently\nno infrared detectors that meet these requirements. The University of Hawaii\nand industrial partners are developing one promising technology, linear mode\navalanche photodiodes (LmAPDs), using fine control over the HgCdTe bandgap\nstructure to enable noise-free charge amplification and minimal glow.\n Here we report first results of a prototype megapixel format LmAPD operated\nin our cryogenic testbed. At 50 Kelvin, we measure a dark current of about 3\ne-/pixel/kilosecond, which is due to an intrinsic dark current consistent with\nzero (best estimate of 0.1 e-/pixel/kilosecond) and a ROIC glow of 0.08\ne-/pixel/frame. The read noise of these devices is about 10 e-/pixel/frame at 3\nvolts, and decreases by 30% with each additional volt of bias, reaching 2 e- at\n8 volts. Upcoming science-grade devices are expected to substantially improve\nupon these figures, and address other issues uncovered during testing.", "category": "astro-ph_IM" }, { "text": "Temporal spectrum of multi-conjugate adaptive optics residuals and\n impact of tip-tilt anisoplanatism on astrometric observations: Multi-conjugate adaptive optics (MCAO) will assist a new era of ground-based\nastronomical observations with the extremely large telescopes and the Very\nLarge Telescope. High-precision relative astrometry is among the main science\ndrivers of these systems and challenging requirements have been set for the\nastrometric measurements. A clear understanding of the astrometric error budget\nis needed and the impact of the MCAO correction has to be taken into account.\nIn this context, we propose an analytical formulation to estimate the residual\nphase produced by an MCAO correction in any direction of the scientific field\nof view. The residual phase, computed in the temporal frequency domain, allows\nus to consider the temporal filtering of the turbulent phase from the MCAO loop\nand to extract the temporal spectrum of the residuals, as well as to include\nother temporal effects such as the scientific integration time. 
The formulation\nis kept general and allows one to consider specific frameworks by setting the\ntelescope diameter, the turbulence profile, the guide star constellation, the\ndeformable mirror configuration, the modes sensed and corrected, and the\ntomographic reconstruction algorithm. The formalism is presented for both\nclosed-loop and pseudo-open-loop control. We use our results to investigate\nthe effect of tip-tilt residuals on MCAO-assisted astrometric observations. We\nderive an expression for the differential tilt jitter power spectrum that also\nincludes the dependence on the scientific exposure time. Finally, we\ninvestigate the contribution of the differential tilt jitter error to\nfuture astrometric observations with MAVIS and MAORY.", "category": "astro-ph_IM" }, { "text": "Physical properties of the interstellar medium using high-resolution\n Chandra spectra: O K-edge absorption: Chandra high-resolution spectra toward eight low-mass Galactic binaries have\nbeen analyzed with a photoionization model that is capable of determining the\nphysical state of the interstellar medium. Particular attention is given to the\naccuracy of the atomic data. Hydrogen column densities are derived with a\nbroadband fit that takes into account pileup effects, and in general are in\ngood agreement with previous results. The dominant features in the oxygen-edge\nregion are O I and O II K$\alpha$ absorption lines whose simultaneous fits lead\nto average values of the ionization parameter of $\log\xi=-2.90$ and oxygen\nabundance of $A_{\rm O}=0.70$. The latter is relative to the standard by\nGrevesse & Sauval (1998), but a rescaling with the revision by Asplund et al.\n(2009) would lead to an average abundance value fairly close to solar. The low\naverage oxygen column density ($N_{\rm O}=9.2 \times 10^{17}$ cm$^{-2}$)\nsuggests a correlation with the low ionization parameters, the latter also\nbeing in evidence in the column density ratios OII/OI and OIII/OI that are\nestimated to be less than 0.1. We do not find conclusive evidence for\nabsorption by any species other than atomic oxygen.", "category": "astro-ph_IM" }, { "text": "Upgrading electron temperature and electron density diagnostic diagrams\n of forbidden line emission: Diagnostic diagrams of forbidden lines have been a useful tool for observers\nin astrophysics for many decades now. They are used to obtain information on\nthe basic physical properties of thin gaseous nebulae. Some diagnostic diagrams\nare in wavelength domains which were difficult to observe, either due to missing\nwavelength coverage or the low resolution of older spectrographs. Furthermore, most\nof the diagrams were calculated using just the species involved as a single\natom gas, although several are affected by well-known fluorescence mechanisms\nas well. Additionally, the atomic data have improved up to the present time. The\naim of this work was a recalculation of well-known, but also of sparsely used,\nunnoted diagnostic diagrams. The new diagrams provide observers with modern,\neasy-to-use recipes to determine electron temperatures and densities. The new\ndiagnostic diagrams are calculated using large grids of parameter space in the\nphotoionization code CLOUDY. For a given basic parameter (e.g. electron density\nor temperature) the solutions with cooling-heating equilibrium are chosen to\nderive the diagnostic diagrams. Empirical numerical functions are fitted to\nprovide formulas usable in e.g. data reduction pipelines. 
The resulting diagrams differ significantly from those used up to now and will improve thermodynamic calculations. To our knowledge, this is the first time that detailed, directly applicable fit formulas are given that lead from the line ratios to the electron temperature or density.", "category": "astro-ph_IM" }, { "text": "Time-Dependent Behavior of Linear Polarization in Unresolved Photospheres, With Applications for The Hanle Effect: Aims: This paper extends previous studies in modeling time-varying linear polarization due to axisymmetric magnetic fields in rotating stars. We use the Hanle effect to predict variations in net line polarization, and use geometric arguments to generalize these results to linear polarization due to other mechanisms. Methods: Building on the work of Lopez Ariste et al., we use simple analytic models of rotating stars that are symmetric except for an axisymmetric magnetic field to predict the polarization lightcurve due to the Hanle effect. We highlight the effects on the variable line polarization as a function of viewing inclination and field-axis obliquity. Finally, we use geometric arguments to generalize our results to linear polarization from the weak transverse Zeeman effect. Results: We derive analytic expressions to demonstrate that the variable polarization lightcurve for an oblique magnetic rotator is symmetric. This holds for any axisymmetric field distribution and arbitrary viewing inclination to the rotation axis. Conclusions: For the situation under consideration, the amplitude of the polarization variation is set by the Hanle effect, but the shape of the variation in polarization with phase depends largely on geometrical projection effects. Our work generalizes the applicability of the results described in Lopez Ariste et al., inasmuch as the assumptions of a spherical star and an axisymmetric field hold, and provides a strategy for separating the effects of perspective from the Hanle effect itself when interpreting polarimetric lightcurves.", "category": "astro-ph_IM" }, { "text": "Roman CCS White Paper: Optimizing the HLTDS Cadence at Fixed Depth: The current proposal for the High Latitude Time Domain Survey (HLTDS) is two tiers (wide and deep) of multi-band imaging and prism spectroscopy with a cadence of five days (Rose et al., 2021). The five-day cadence is motivated by the desire to measure mid-redshift SNe, where time dilation is modest, as well as to better photometrically characterize the detected transients. This white paper does not provide a conclusion as to the best cadence for the HLTDS. Rather, it collects a set of considerations that should be used for a careful study of cadence by a future committee optimizing the Roman survey. This study should optimize the HLTDS for both SN Ia cosmology and other transient science.", "category": "astro-ph_IM" }, { "text": "Optimizing Gravitational-Wave Detector Design for Squeezed Light: Achieving the quantum noise targets of third-generation detectors will require 10 dB of squeezed-light enhancement as well as megawatt laser power in the interferometer arms - both of which require unprecedented control of the internal optical losses. In this work, we present a novel optimization approach to gravitational-wave detector design aimed at maximizing the robustness to common, yet unavoidable, optical fabrication and installation errors, which have caused significant loss in Advanced LIGO. 
As a proof of concept, we employ these techniques to perform a two-part optimization of the LIGO A+ design. First, we optimize the arm cavities for reduced scattering loss in the presence of point absorbers, which currently limit the operating power of Advanced LIGO. Then, we optimize the signal recycling cavity for maximum squeezing performance, accounting for realistic errors in the positions and radii of curvature of the optics. Our findings suggest that these techniques can be leveraged to achieve substantially greater quantum noise performance in current and future gravitational-wave detectors.", "category": "astro-ph_IM" }, { "text": "Design and optimization of a dispersive unit based on cascaded volume phase holographic gratings: We describe a dispersive unit consisting of cascaded volume-phase holographic gratings for spectroscopic applications. Each of the gratings provides high diffraction efficiency in a relatively narrow wavelength range and transmits the rest of the radiation into the 0th diffraction order. The spectral lines formed by the different gratings are centered in the longitudinal direction and separated in the transverse direction owing to tilts of the gratings around two axes. We present a technique for the design and optimization of such a scheme. It allows one to define the refractive-index modulation and the thickness of the holographic layer for each of the gratings, as well as their fringe frequencies and inclination angles. At the first stage, the grating parameters are found approximately using analytical expressions from Kogelnik's coupled-wave theory. Then each of the gratings, starting from the longwave sub-range, is optimized separately with a numerical optimization procedure and rigorous coupled-wave analysis to achieve a high diffraction-efficiency profile with a steep shortwave edge. In parallel, targets such as ray aiming and maintenance of the linear dispersion are controlled by means of ray tracing. We demonstrate this technique on the example of a small-sized spectrograph for astronomical applications, which works in the range of 500-650 nm, uses three gratings covering 50 nm each, and has a spectral resolution of 6130-12548. We show that an asymmetrical efficiency curve can be obtained with dichromated gelatin and a photopolymer, and that changing the curve shape increases the filling coefficient for the target sub-range by up to a factor of 2.3.", "category": "astro-ph_IM" }, { "text": "IVOA Recommendation: Simple Cone Search Version 1.03: This specification defines a simple query protocol for retrieving records from a catalog of astronomical sources. The query describes a sky position and an angular distance, defining a cone on the sky. The response returns a list of astronomical sources from the catalog whose positions lie within the cone, formatted as a VOTable. This version of the specification is essentially a transcription of the original Cone Search specification in order to move it into the IVOA standardization process.", "category": "astro-ph_IM" },
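Because the Cone Search protocol above is fully specified by a sky position (RA, DEC) and a search radius (SR), all in decimal degrees, a client fits in a few lines. The sketch below is ours, with a placeholder service URL; only the three query parameters and the VOTable response format come from the specification.

```python
# Minimal Simple Cone Search client sketch (endpoint is hypothetical).
import requests

def cone_search(base_url: str, ra_deg: float, dec_deg: float, sr_deg: float) -> str:
    """Query an SCS service and return the VOTable document as text."""
    params = {"RA": ra_deg, "DEC": dec_deg, "SR": sr_deg}  # decimal degrees
    resp = requests.get(base_url, params=params, timeout=60)
    resp.raise_for_status()
    return resp.text  # a VOTable listing the catalog sources inside the cone

# Example call against a hypothetical service:
# votable = cone_search("https://example.org/scs", ra_deg=180.0, dec_deg=-0.5, sr_deg=0.1)
```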
{ "text": "Intrinsic Instrumental Polarization and High-Precision Pulsar Timing: Radio telescopes are used to accurately measure the time of arrival (ToA) of radio pulses in pulsar timing experiments that target mostly millisecond pulsars (MSPs) due to their high rotational stability. This allows for detailed study of MSPs and forms the basis of experiments to detect gravitational waves. Apart from intrinsic and propagation effects, such as pulse-to-pulse jitter and dispersion variations in the interstellar medium, timing precision is limited in part by the following: polarization purity of the telescope's orthogonally polarized receptors, the signal-to-noise ratio (S/N) of the pulsar profile, and the polarization fidelity of the system. Using simulations, we show how fundamental limitations in recovering the true polarization reduce the precision of ToA measurements. Any real system will respond differently to each source observed, depending on the unique pulsar polarization profile. Using the profiles of known MSPs, we quantify the limits of observing-system specifications that yield satisfactory ToA measurements, and we place a practical design limit beyond which improvement of the system results in diminishing returns. Our aim is to justify limits for the front-end polarization characteristics of next-generation radio telescopes, leading to the Square Kilometre Array (SKA).", "category": "astro-ph_IM" }, { "text": "Effects of transients in LIGO suspensions on searches for gravitational waves: This paper presents an analysis of the transient behavior of the Advanced LIGO suspensions used to seismically isolate the optics. We have characterized the transients in the longitudinal motion of the quadruple suspensions during Advanced LIGO's first observing run. Propagation of transients between stages is consistent with modelled transfer functions, such that transient motion originating at the top of the suspension chain is significantly reduced in amplitude at the test mass. We find that there are transients seen by the longitudinal motion monitors of quadruple suspensions, but they are not significantly correlated with transient motion above the noise floor in the gravitational wave strain data, and therefore do not present a dominant source of background noise in the searches for transient gravitational wave signals.", "category": "astro-ph_IM" }, { "text": "The Eleventh and Twelfth Data Releases of the Sloan Digital Sky Survey: Final Data from SDSS-III: The third generation of the Sloan Digital Sky Survey (SDSS-III) took data from 2008 to 2014 using the original SDSS wide-field imager, the original and an upgraded multi-object fiber-fed optical spectrograph, a new near-infrared high-resolution spectrograph, and a novel optical interferometer. All the data from SDSS-III are now made public. In particular, this paper describes Data Release 11 (DR11) including all data acquired through 2013 July, and Data Release 12 (DR12) adding data acquired through 2014 July (including all data included in previous data releases), marking the end of SDSS-III observing. Relative to our previous public release (DR10), DR12 adds one million new spectra of galaxies and quasars from the Baryon Oscillation Spectroscopic Survey (BOSS) over an additional 3000 sq. deg of sky, more than triples the number of H-band spectra of stars as part of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE), and includes repeated accurate radial velocity measurements of 5500 stars from the Multi-Object APO Radial Velocity Exoplanet Large-area Survey (MARVELS). The APOGEE outputs now include measured abundances of 15 different elements for each star. In total, SDSS-III added 2350 sq. 
deg of ugriz imaging; 155,520 spectra of 138,099 stars as part of the Sloan Exploration of Galactic Understanding and Evolution 2 (SEGUE-2) survey; 2,497,484 BOSS spectra of 1,372,737 galaxies, 294,512 quasars, and 247,216 stars over 9376 sq. deg; 618,080 APOGEE spectra of 156,593 stars; and 197,040 MARVELS spectra of 5,513 stars. Since its first light in 1998, SDSS has imaged over 1/3 of the celestial sphere in five bands and obtained over five million astronomical spectra.", "category": "astro-ph_IM" }, { "text": "Probing the Spacetime Around Supermassive Black Holes with Ejected Plasma Blobs: Millimeter-wavelength VLBI observations of the supermassive black holes in Sgr A* and M87 by the Event Horizon Telescope could potentially trace the dynamics of ejected plasma blobs in real time. We demonstrate that the trajectory and tidal stretching of these blobs can be used to test general relativity and set new constraints on the mass and spin of these black holes.", "category": "astro-ph_IM" }, { "text": "SISPO: Space Imaging Simulator for Proximity Operations: This paper describes the architecture and demonstrates the capabilities of a newly developed, physically-based imaging simulator environment called SISPO, developed for small solar system body fly-by and terrestrial planet surface mission simulations. The image simulator utilises the open-source 3D visualisation system Blender and its Cycles rendering engine, which supports physically based rendering capabilities and procedural micropolygon displacement texture generation. The simulator concentrates on realistic surface rendering and has supplementary models to produce realistic dust- and gas-environment optical models for comets and active asteroids. The framework also includes tools to simulate the most common image aberrations, such as tangential and sagittal astigmatism, internal and external comatic aberration, and simple geometric distortions. The model framework's primary objective is to support small-body space mission design by allowing better simulations for characterisation of imaging instrument performance, assisting mission planning, and developing computer-vision algorithms. SISPO allows the simulation of trajectories, light parameters and the camera's intrinsic parameters.", "category": "astro-ph_IM" }, { "text": "The PLATO Payload Data Processing System SpaceWire network: PLATO has been selected and adopted by ESA as the third medium-class Mission (M3) of the Cosmic Vision Program, to be launched in 2026 with a Soyuz-Fregat rocket from French Guiana. Its Payload is based on a suite of 26 telescopes and cameras in order to discover and characterise, thanks to ultra-high-accuracy photometry and the transit method, new exoplanets down to the range of Earth analogues. Each camera is composed of 4 CCDs working in full-frame or frame-transfer mode. 24 cameras out of 26 host 4510 x 4510 pixel CCDs, operated in full-frame mode with a pixel depth of 16 bits and a cadence of 25 s. Given the huge data volume to be managed, the PLATO Payload relies on an efficient Data Processing System (DPS) whose Units perform image windowing, cropping and compression. Each camera and DPS Unit is connected to a fast SpaceWire network running at 100 MHz and interfaced to the satellite On-Board Computer by means of an Instrument Control Unit (ICU), performing data collection and compression.", "category": "astro-ph_IM" },
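The numbers quoted in the PLATO abstract above imply, as a back-of-envelope check (our arithmetic, not a figure from the paper), a raw pixel rate of roughly 1.25 Gbit/s, which is why on-board windowing, cropping and compression are essential:

```python
# Raw data rate implied by the quoted numbers: 24 full-frame cameras x 4 CCDs,
# 4510 x 4510 pixels, 16 bits per pixel, one readout every 25 s.
n_cameras, ccds_per_camera = 24, 4
pixels_per_ccd = 4510 * 4510
bits_per_pixel, cadence_s = 16, 25.0

bits_per_cadence = n_cameras * ccds_per_camera * pixels_per_ccd * bits_per_pixel
print(f"raw rate ~ {bits_per_cadence / cadence_s / 1e9:.2f} Gbit/s")  # ~1.25 Gbit/s
```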
{ "text": "Strong Lens Time Delay Challenge: I. Experimental Design: The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters. The number of lenses with measured time delays is growing rapidly; the upcoming \emph{Large Synoptic Survey Telescope} (LSST) will monitor $\sim10^3$ strongly lensed quasars. In an effort to assess the present capabilities of the community to accurately measure the time delays, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we have invited the community to take part in a \"Time Delay Challenge\" (TDC). The challenge is organized as a set of \"ladders,\" each containing a group of simulated datasets to be analyzed blindly by participating teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs' datasets increasing in complexity and realism. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of datasets, and is designed to be used as a practice set by the participating teams. The (non-mandatory) deadline for completion of TDC0 was the TDC1 launch date, December 1, 2013. The TDC1 deadline was July 1, 2014. Here we give an overview of the challenge, we introduce a set of metrics that will be used to quantify the goodness-of-fit, efficiency, precision, and accuracy of the algorithms, and we present the results of TDC0. Thirteen teams participated in TDC0 using 47 different methods. Seven of those teams qualified for TDC1, which is described in the companion paper II.", "category": "astro-ph_IM" }, { "text": "IVOA Recommendation: IVOA Astronomical Data Query Language Version 2.00: This document describes the Astronomical Data Query Language (ADQL). ADQL has been developed based on SQL92. This document describes the subset of the SQL grammar supported by ADQL. Special restrictions and extensions to SQL92 have been defined in order to support generic and astronomy-specific operations.", "category": "astro-ph_IM" }, { "text": "Woofer-tweeter deformable mirror control for closed-loop adaptive optics: theory and practice: Deformable mirrors with very high order correction generally have a smaller dynamic range of motion than what is required to correct seeing over large-aperture telescopes. As a result, systems will need to have an architecture that employs two deformable mirrors in series, one for the low-order but large-excursion parts of the wavefront and one for the finer and smaller-excursion components. The closed-loop control challenge is to a) keep the overall system stable, b) avoid the two mirrors using control energy to cancel each other's correction, c) resolve actuator saturations stably, and d) assure that on average the mirrors are each correcting their assigned region of spatial frequency space. We present the control architecture and techniques for assuring that it is linear and stable according to the above criteria. We derive the analytic forms for stability and performance and show results from simulations and on-sky testing using the new ShaneAO system on the Lick 3-meter telescope.", "category": "astro-ph_IM" },
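Criterion (d) of the woofer-tweeter abstract above, splitting the correction by region of spatial-frequency space, reduces in its simplest form to projecting the total command onto the woofer's low-order modal subspace and sending the residual to the tweeter. The sketch below is our minimal illustration of that idea, not the ShaneAO implementation; the modal basis and names are assumptions.

```python
import numpy as np

def split_command(phase, low_order_modes):
    """Split a wavefront-correction vector between a woofer and a tweeter.

    phase           : (n,) total correction to apply
    low_order_modes : (n, k) columns spanning the woofer's low-order subspace
    Returns (woofer, tweeter) with phase == woofer + tweeter.
    """
    coeffs, *_ = np.linalg.lstsq(low_order_modes, phase, rcond=None)
    woofer = low_order_modes @ coeffs   # large-stroke, low-spatial-frequency part
    tweeter = phase - woofer            # small-stroke, high-frequency residual
    return woofer, tweeter
```

Because the tweeter term is the least-squares residual, it is orthogonal to the woofer subspace, which is one simple way to keep the two mirrors from spending control energy cancelling each other (criterion b).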
{ "text": "Background analysis and status of the ANAIS dark matter project: ANAIS (Annual modulation with NaI Scintillators) is a project aiming to set up, at the new facilities of the Canfranc Underground Laboratory (LSC), a large-scale NaI(Tl) experiment in order to explore the DAMA/LIBRA annual-modulation positive result using the same target and technique. Two NaI(Tl) crystals of 12.5 kg each, provided by Alpha Spectra, took data at the LSC in the ANAIS-25 set-up. The comparison of the background model for the ANAIS-25 prototypes with the experimental results is presented. ANAIS crystal radiopurity goals have been achieved for the Th-232 and U-238 chains, but an out-of-equilibrium Pb-210 contamination was identified, whose origin has been studied. The high light-collection efficiency obtained with these prototypes allows us to anticipate an energy threshold of the order of 1 keVee. A new detector, with improved performance, was received in March 2015 and very preliminary results are shown.", "category": "astro-ph_IM" }, { "text": "Gaia Data Release 3: Gaia scan-angle dependent signals and spurious periods: Context: Gaia DR3 time series data may contain spurious signals related to the time-dependent scan angle. Aims: We aim to explain the origin of scan-angle dependent signals and how they can lead to spurious periods, provide statistics to identify them in the data, and suggest how to deal with them in Gaia DR3 data and in future releases. Methods: Using real Gaia data, alongside numerical and analytical models, we visualise and explain the features observed in the data. Results: We demonstrated with Gaia data that source structure (multiplicity or extendedness) or pollution from close-by bright objects can cause biases in the image parameter determination from which photometric, astrometric and (indirectly) radial velocity time series are derived. These biases are a function of the time-dependent scan direction of the instrument and thus can introduce scan-angle dependent signals, which in turn can result in specific spurious periodic signals. Numerical simulations qualitatively reproduce the general structure observed in the spurious period and spatial distribution of photometry and astrometry. A variety of statistics allows for identification of affected sources. Conclusions: The origin of the scan-angle dependent signals and subsequent spurious periods is well understood and is mostly caused by fixed-orientation optical pairs with separation <0.5\" (amongst which binaries with P>>5y) and (cores of) distant galaxies. Though the majority of sources with affected derived parameters have been filtered out from the Gaia archive, there remain Gaia DR3 data that should be treated with care (e.g. gaia_source was untouched). Finally, the various statistics discussed in the paper can not only be used to identify and filter affected sources, but alternatively reveal new information about them not available through other means, especially in terms of binarity on sub-arcsecond scales.", "category": "astro-ph_IM" }, { "text": "An elastic lidar system for the H.E.S.S. Experiment: The H.E.S.S. 
experiment in Namibia, Africa, is a high-energy gamma-ray telescope sensitive in the energy range from 100 GeV to a few tens of TeV, via the use of the atmospheric Cherenkov technique. To minimize the systematic errors on the derived fluxes of the measured sources, one has to calculate the impact of the atmospheric properties, in particular the extinction parameter of the Cherenkov light (300-650 nm) exploited to observe and reconstruct atmospheric particle showers initiated by gamma-ray photons. A lidar can provide this kind of information for some given wavelengths within this range. In this paper we report on the hardware components, operation and data acquisition of such a system installed at the H.E.S.S. site.", "category": "astro-ph_IM" }, { "text": "Large-aperture wide-bandwidth antireflection-coated silicon lenses for millimeter wavelengths: The increasing scale of cryogenic detector arrays for sub-millimeter and millimeter wavelength astrophysics has led to the need for large-aperture, high index of refraction, low loss, cryogenic refracting optics. Silicon with n = 3.4, low loss, and relatively high thermal conductivity is a nearly optimal material for these purposes, but requires an antireflection (AR) coating with broad bandwidth, low loss, low reflectance, and a matched coefficient of thermal expansion. We present an AR coating for curved silicon optics comprised of subwavelength features cut into the lens surface with a custom three-axis silicon dicing saw. These features constitute a metamaterial that behaves as a simple dielectric coating. We have fabricated and coated silicon lenses as large as 33.4 cm in diameter with coatings optimized for use between 125-165 GHz. Our design reduces average reflections to a few tenths of a percent for angles of incidence up to 30 degrees with low cross-polarization. We describe the design, tolerance, manufacture, and measurements of these coatings and present measurements of the optical properties of silicon at millimeter wavelengths at cryogenic and room temperatures. This coating and lens fabrication approach is applicable from centimeter to sub-millimeter wavelengths and can be used to fabricate coatings with greater than octave bandwidth.", "category": "astro-ph_IM" },
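For a sense of the scales involved in the coating above: if the machined subwavelength layer is treated as an ideal single-layer quarter-wave coating (our textbook idealization; the actual metamaterial design is optimized numerically over the full band), the matching condition at the 145 GHz band center gives

```latex
n_{\mathrm{AR}} \simeq \sqrt{n_{\mathrm{Si}}} = \sqrt{3.4} \approx 1.84, \qquad
t \simeq \frac{\lambda_0}{4\,n_{\mathrm{AR}}}
      = \frac{c/(145\ \mathrm{GHz})}{4 \times 1.84} \approx 0.28\ \mathrm{mm},
```

i.e. the features must be cut a few tenths of a millimeter deep, a scale well matched to dicing-saw machining.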
{ "text": "Non-linear parameter estimation for the LTP experiment: analysis of an operational exercise: The precursor ESA mission LISA-Pathfinder, to be flown in 2013, aims at demonstrating the feasibility of the free fall necessary for LISA, the upcoming space-borne gravitational-wave observatory. The LISA Technology Package (LTP) is planned to carry out a number of experiments, whose main targets are to identify and measure the disturbances on each test mass, in order to reach an unprecedentedly low residual force noise. To fulfill this plan, it is then necessary to correctly design, set up and optimize the experiments to be performed in flight and to do a full system parameter estimation. Here we describe the progress on the non-linear analysis using the methods developed in the framework of the \textit{LTPDA Toolbox}, an object-oriented MATLAB data analysis environment: the effort is to identify the critical parameters and remove the degeneracy by properly combining the results of different experiments coming from a closed-loop system like LTP.", "category": "astro-ph_IM" }, { "text": "Galaxy Image Classification using Hierarchical Data Learning with Weighted Sampling and Label Smoothing: With the development of a series of galaxy sky surveys in recent years, observations have increased rapidly, which makes the research of machine learning methods for galaxy image recognition a hot topic. Available automatic galaxy image recognition studies are plagued by large differences in similarity between categories, the imbalance of data between different classes, and the discrepancy between the discrete representation of galaxy classes and the essentially gradual changes from one morphological class to the adjacent class (DDRGC). These limitations have motivated several astronomers and machine learning experts to design projects with improved galaxy image recognition capabilities. Therefore, this paper proposes a novel learning method, \"Hierarchical Imbalanced data learning with Weighted sampling and Label smoothing\" (HIWL). The HIWL consists of three key techniques, respectively dealing with the three problems mentioned above: (1) a hierarchical galaxy classification model based on an efficient backbone network; (2) a weighted sampling scheme to deal with the imbalance problem; and (3) a label smoothing technique to alleviate the DDRGC problem. We applied this method to galaxy photometric images from the Galaxy Zoo-The Galaxy Challenge, exploring the recognition of completely round smooth, in-between smooth, cigar-shaped, edge-on and spiral galaxies. The overall classification accuracy is 96.32\%, and some superiorities of the HIWL are shown based on recall, precision, and F1-Score in comparison with related works. In addition, we also explored the visualization of the galaxy image features and model attention to understand the foundations of the proposed scheme.", "category": "astro-ph_IM" },
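The two generic ingredients named in points (2) and (3) above are easy to make concrete. The numpy sketch below is our illustration of the standard recipes (inverse-frequency sampling weights, uniformly smoothed one-hot targets), not the HIWL code itself:

```python
import numpy as np

def sampling_weights(labels):
    """Inverse-frequency weights, so under-represented galaxy classes
    are drawn more often during training."""
    classes, counts = np.unique(labels, return_counts=True)
    inv_freq = {c: counts.sum() / n for c, n in zip(classes, counts)}
    return np.array([inv_freq[y] for y in labels])

def smooth_targets(labels, n_classes, eps=0.1):
    """Label smoothing: (1 - eps) on the true class and eps/K elsewhere,
    softening the hard boundaries between adjacent morphological classes."""
    onehot = np.eye(n_classes)[labels]
    return (1.0 - eps) * onehot + eps / n_classes

# Example: five morphology classes with imbalanced toy labels.
y = np.array([0, 0, 0, 1, 2, 2, 3, 4])
w = sampling_weights(y)    # rare classes receive larger weights
t = smooth_targets(y, 5)   # each row sums to 1; the true class gets 0.92
```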
{ "text": "STROBE-X: X-ray Timing and Spectroscopy on Dynamical Timescales from Microseconds to Years: We present the Spectroscopic Time-Resolving Observatory for Broadband Energy X-rays (STROBE-X), a probe-class mission concept selected for study by NASA. It combines huge collecting area, high throughput, broad energy coverage, and excellent spectral and temporal resolution in a single facility. STROBE-X offers an enormous increase in sensitivity for X-ray spectral timing, extending these techniques to extragalactic targets for the first time. It is also an agile mission capable of rapid response to transient events, making it an essential X-ray partner facility in the era of time-domain, multi-wavelength, and multi-messenger astronomy. Optimized for study of the most extreme conditions found in the Universe, its key science objectives include: (1) Robustly measuring mass and spin and mapping inner accretion flows across the black hole mass spectrum, from compact stars to intermediate-mass objects to active galactic nuclei. (2) Mapping out the full mass-radius relation of neutron stars using an ensemble of nearly two dozen rotation-powered pulsars and accreting neutron stars, and hence measuring the equation of state for ultradense matter over a much wider range of densities than explored by NICER. (3) Identifying and studying X-ray counterparts (in the post-Swift era) for multiwavelength and multi-messenger transients in the dynamic sky through cross-correlation with gravitational wave interferometers, neutrino observatories, and high-cadence time-domain surveys in other electromagnetic bands. (4) Continuously surveying the dynamic X-ray sky with a large duty cycle and high time resolution to characterize the behavior of X-ray sources over an unprecedentedly vast range of time scales. STROBE-X's formidable capabilities will also enable a broad portfolio of additional science.", "category": "astro-ph_IM" }, { "text": "Historical astronomical data: urgent need for preservation, digitization enabling scientific exploration: Over the past decades and even centuries, the astronomical community has accumulated a significant heritage of recorded observations of a great many astronomical objects. Those records contain irreplaceable information about long-term evolutionary and non-evolutionary changes in our Universe, and their preservation and digitization are vital. Unfortunately, most of those data risk becoming degraded and thence totally lost. We hereby call upon the astronomical community and US funding agencies to recognize the gravity of the situation, and to commit to international preservation and digitization efforts through comprehensive long-term planning supported by adequate resources, prioritizing where the expected scientific gains, vulnerability of the originals and availability of relevant infrastructure so dictate. The importance and urgency of this issue has been recognized recently by General Assembly XXX of the International Astronomical Union (IAU) in its Resolution B3: \"on preservation, digitization and scientific exploration of historical astronomical data\". We outline the rationale of this promotion, provide examples of new science through successful recovery efforts, and review the potential losses to science if nothing is done.", "category": "astro-ph_IM" }, { "text": "Homography-Based Correction of Positional Errors in MRT Survey: The Mauritius Radio Telescope (MRT) images show systematics in the positional errors of sources when compared to source positions in the Molonglo Reference Catalogue (MRC). We have applied two-dimensional homography to correct positional errors in the image domain and avoid re-processing the visibility data. Positions of bright (above 15-$\sigma$) sources, common to the MRT and MRC catalogues, are used to set up an over-determined system to solve for the 2-D homography matrix. After correction, the errors are found to be within 10% of the beamwidth for these bright sources and the systematics are eliminated from the images.", "category": "astro-ph_IM" },
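The over-determined system mentioned in the homography abstract is conventionally solved with the direct linear transform (DLT): each matched source contributes two linear constraints on the nine homography entries, and the SVD yields the least-squares null vector. The sketch below is a generic rendering of that textbook procedure, not the authors' code:

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares 3x3 homography from matched positions via the DLT.
    src, dst : (n, 2) arrays of corresponding coordinates, n >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)      # null-space solution, defined up to scale

def apply_homography(h, pts):
    """Map (n, 2) positions through h, renormalising the projective scale."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ h.T
    return p[:, :2] / p[:, 2:]
```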
{ "text": "SAT.STFR.FRQ (UWA) Detail Design Report (MID): The Square Kilometre Array (SKA) project is an international effort to build the world's most sensitive radio telescope operating in the 50 MHz to 14 GHz frequency range. Construction of the SKA is divided into phases, with the first phase (SKA1) accounting for the first 10% of the telescope's receiving capacity. During SKA1, a Low-Frequency Aperture Array (LFAA) comprising over a hundred thousand individual dipole antenna elements will be constructed in Western Australia (SKA1-LOW), while an array of 197 parabolic-receptor antennas, incorporating the 64 receptors of MeerKAT, will be constructed in South Africa (SKA1-MID).
 Radio telescope arrays, such as the SKA, require phase-coherent reference signals to be transmitted to each antenna site in the array. In the case of the SKA, these reference signals are generated at a central site and transmitted to the antenna sites via fibre-optic cables up to 175 km in length. Environmental perturbations affect the optical path length of the fibre and act to degrade the phase stability of the reference signals received at the antennas, which has the ultimate effect of reducing the fidelity and dynamic range of the data. Given the combination of long fibre distances and relatively high frequencies of the transmitted reference signals, the SKA needs to employ actively-stabilised frequency transfer technologies to suppress the fibre-optic link noise in order to maintain phase coherence across the array.", "category": "astro-ph_IM" }, { "text": "Understanding synthesis imaging dynamic range: We develop a general framework for quantifying the many different contributions to the noise budget of an image made with an array of dishes or aperture array stations. Each noise contribution to the visibility data is associated with a relevant correlation timescale and frequency bandwidth so that the net impact on a complete observation can be assessed. All quantities are parameterised as functions of observing frequency and visibility baseline length. We apply the resulting noise budget analysis to a wide range of existing and planned telescope systems that will operate between about 100 MHz and 5 GHz to ascertain the magnitude of the calibration challenges that they must overcome to achieve thermal noise limited performance. We conclude that the calibration challenges are increased in several respects by small dimensions of the dishes or aperture array stations. It will be more challenging to achieve thermal noise limited performance using 15 m class dishes rather than the 25 m dishes of current arrays. Some of the performance risks are mitigated by the deployment of phased array feeds, and more by the choice of an (alt,az,pol) mount, although a larger dish diameter offers the best prospects for risk mitigation. Many improvements to imaging performance can be anticipated at the expense of greater complexity in calibration algorithms. However, a fundamental limitation is ultimately imposed by an insufficient number of data constraints relative to calibration variables. The upcoming aperture array systems will be operating in a regime that has never previously been addressed, where a wide range of effects are expected to exceed the thermal noise by two to three orders of magnitude. Achieving routine thermal noise limited imaging performance with these systems presents an extreme challenge. The magnitude of that challenge is inversely related to the aperture array station diameter.", "category": "astro-ph_IM" }, { "text": "Unveiling the Dynamic Infrared Sky with Gattini-IR: While optical and radio transient surveys have enjoyed a renaissance over the past decade, the dynamic infrared sky remains virtually unexplored. 
The infrared is a powerful tool for probing transient events in dusty regions that have high optical extinction, and for detecting the coolest of stars that are bright only at these wavelengths. The fundamental roadblocks in studying the infrared time domain have been the overwhelmingly bright sky background (250 times brighter than optical) and the narrow field of view of infrared cameras (the largest is 0.6 sq deg). To begin to address these challenges and open a new observational window in the infrared, we present Palomar Gattini-IR: a 25 sq deg, 300 mm aperture, infrared telescope at Palomar Observatory that surveys the entire accessible sky (20,000 sq deg) to a depth of 16.4 AB mag (J band, 1.25 um) every night. Palomar Gattini-IR is wider in area than every existing infrared camera by more than a factor of 40 and is able to survey large areas of sky multiple times. We anticipate the potential for otherwise infeasible discoveries, including, for example, the elusive electromagnetic counterparts to gravitational wave detections. With dedicated hardware in hand, and an F/1.44 telescope available commercially and cost-effectively, Palomar Gattini-IR will be on-sky in early 2017 and will survey the entire accessible sky every night for two years. Palomar Gattini-IR will pave the way for a dual-hemisphere, infrared-optimized, ultra-wide-field, high-cadence machine called Turbo Gattini-IR. To take advantage of the low sky background at 2.5 um, two identical systems will be located at the polar sites of the South Pole, Antarctica and near Eureka on Ellesmere Island, Canada. Turbo Gattini-IR will survey 15,000 sq deg to a depth of 20 AB mag, the same depth as the VISTA VHS survey, every 2 hours with a survey efficiency of 97%.", "category": "astro-ph_IM" }, { "text": "Multimessenger Astronomy and Astrophysics Synergies: A budget-neutral strategy is proposed for NSF to lead the implementation of multimessenger astronomy and astrophysics, as outlined in the Astro2010 Decadal Survey. The emerging capabilities for simultaneous measurements of physical and astronomical data through the different windows of electromagnetic, hadronic and gravitational radiation processes call for a vigorous pursuit of new synergies. The proposed approach is aimed at the formation of new collaborations and multimessenger data analysis, to transcend the scientific inquiries made within a single window of observations. In view of budgetary constraints, we propose to include the multimessenger dimension in the ranking of proposals submitted under existing NSF programs.", "category": "astro-ph_IM" }, { "text": "Sensitivity curves for searches for gravitational-wave backgrounds: We propose a graphical representation of detector sensitivity curves for stochastic gravitational-wave backgrounds that takes into account the increase in sensitivity that comes from integrating over frequency in addition to integrating over time. This method is valid for backgrounds that have a power-law spectrum in the analysis band. We call these graphs \"power-law integrated curves.\" For simplicity, we consider cross-correlation searches for unpolarized and isotropic stochastic backgrounds using two or more detectors. We apply our method to construct power-law integrated sensitivity curves for second-generation ground-based detectors such as Advanced LIGO, space-based detectors such as LISA and the Big Bang Observer, and timing residuals from a pulsar timing array. The code used to produce these plots is available at https://dcc.ligo.org/LIGO-P1300115/public for researchers interested in constructing similar sensitivity curves.", "category": "astro-ph_IM" },
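The construction described above reduces, for a single cross-correlation baseline and a simplified SNR formula, to: assume a power law $\Omega_{\rm gw}(f)=\Omega_\beta\,(f/f_{\rm ref})^\beta$, solve for the amplitude giving a reference SNR, and take the envelope over $\beta$. The linked DCC page hosts the reference implementation; the sketch below is only our simplified illustration of the idea:

```python
import numpy as np

def pi_curve(freqs, omega_noise, t_obs, snr_ref=1.0,
             betas=np.linspace(-8.0, 8.0, 81), f_ref=100.0):
    """Power-law integrated sensitivity curve (single-baseline sketch).

    freqs       : (n,) analysis band [Hz]
    omega_noise : (n,) effective noise energy density Omega_n(f) of the pair
    t_obs       : observation time [s]
    Any power-law background lying above the returned envelope would
    accumulate SNR > snr_ref after time t_obs.
    """
    curves = []
    for beta in betas:
        shape = (freqs / f_ref) ** beta
        # simplified cross-correlation SNR: SNR^2 = 2 T int df (Omega_gw/Omega_n)^2
        integral = np.trapz((shape / omega_noise) ** 2, freqs)
        curves.append(snr_ref / np.sqrt(2.0 * t_obs * integral) * shape)
    return np.max(curves, axis=0)   # envelope over power-law indices
```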
{ "text": "The performance of the bolometer array and readout system during the 2012/2013 flight of the E and B experiment (EBEX): EBEX is a balloon-borne telescope designed to measure the polarization of the cosmic microwave background radiation. During its eleven-day science flight in the austral summer of 2012, it operated 955 spider-web transition edge sensor (TES) bolometers separated into bands at 150, 250 and 410 GHz. This is the first time that an array of TES bolometers has been used on a balloon platform to conduct science observations. Polarization sensitivity was provided by a wire grid and a continuously rotating half-wave plate. The balloon implementation of the bolometer array and readout electronics presented unique development requirements. Here we present an outline of the readout system, the remote tuning of the bolometers and Superconducting QUantum Interference Device (SQUID) amplifiers, and preliminary current noise of the bolometer array and readout system.", "category": "astro-ph_IM" }, { "text": "Detectability of Galactic Faraday Rotation in Multi-wavelength CMB Observations: A Cross-Correlation Analysis of CMB and Radio Maps: We introduce a new cross-correlation method to detect and verify the astrophysical origin of Faraday Rotation (FR) in multiwavelength surveys. FR is well studied in radio astronomy from radio point sources, but the $\lambda^{2}$ suppression of FR makes detecting and accounting for this effect difficult at millimeter and sub-millimeter wavelengths. Therefore, statistical methods are used to attempt to detect FR in the cosmic microwave background (CMB). Most estimators of the FR power spectrum rely on single-frequency data. In contrast, we investigate the correlation of polarized CMB maps with FR measure maps from radio point sources. We show a factor of $\sim30$ increase in sensitivity over single-frequency estimators and predict detections exceeding $10\sigma$ significance for a CMB-S4-like experiment. Improvements in observations of FR from current and future radio polarization surveys will greatly increase the usefulness of this method.", "category": "astro-ph_IM" }, { "text": "Conditional Density Estimation Tools in Python and R with Applications to Photometric Redshifts and Likelihood-Free Cosmological Inference: It is well known in astronomy that propagating non-Gaussian prediction uncertainty in photometric redshift estimates is key to reducing bias in downstream cosmological analyses. Similarly, likelihood-free inference approaches, which are beginning to emerge as a tool for cosmological analysis, require a characterization of the full uncertainty landscape of the parameters of interest given observed data. However, most machine learning (ML) or training-based methods with open-source software target point prediction or classification, and hence fall short in quantifying uncertainty in complex regression and parameter inference settings. 
As an alternative to methods that focus on predicting the response (or parameters) $\mathbf{y}$ from features $\mathbf{x}$, we provide nonparametric conditional density estimation (CDE) tools for approximating and validating the entire probability density function (PDF) $\mathrm{p}(\mathbf{y}|\mathbf{x})$ of $\mathbf{y}$ given (i.e., conditional on) $\mathbf{x}$. As there is no one-size-fits-all CDE method, the goal of this work is to provide a comprehensive range of statistical tools and open-source software for nonparametric CDE and method assessment which can accommodate different types of settings and be easily fit to the problem at hand. Specifically, we introduce four CDE software packages in $\texttt{Python}$ and $\texttt{R}$ based on ML prediction methods adapted and optimized for CDE: $\texttt{NNKCDE}$, $\texttt{RFCDE}$, $\texttt{FlexCode}$, and $\texttt{DeepCDE}$. Furthermore, we present the $\texttt{cdetools}$ package, which includes functions for computing a CDE loss function for tuning and assessing the quality of individual PDFs, along with diagnostic functions. We provide sample code in $\texttt{Python}$ and $\texttt{R}$ as well as examples of applications to photometric redshift estimation and likelihood-free cosmological inference via CDE.", "category": "astro-ph_IM" }, { "text": "Efficient generation and optimization of stochastic template banks by a neighboring cell algorithm: Placing signal templates (grid points) as efficiently as possible to cover a multi-dimensional parameter space is crucial in computing-intensive matched-filtering searches for gravitational waves, but also in similar searches in other fields of astronomy. To generate efficient coverings of arbitrary parameter spaces, stochastic template banks have been advocated, where templates are placed at random while rejecting those too close to others. However, in this simple scheme, for each new random point its distance to every template in the existing bank is computed. This rapidly increasing number of distance computations can render the acceptance of new templates computationally prohibitive, particularly for wide parameter spaces or in large dimensions. This work presents a neighboring cell algorithm that can dramatically improve the efficiency of constructing a stochastic template bank. By dividing the parameter space into sub-volumes (cells), for an arbitrary point an efficient hashing technique is exploited to obtain the index of its enclosing cell along with the parameters of its neighboring templates. Hence only distances to these neighboring templates in the bank are computed, massively lowering the overall computing cost, as demonstrated in simple examples. Furthermore, we propose a novel method based on this technique to increase the fraction of covered parameter space solely by directed template shifts, without adding any templates. As is demonstrated in examples, this method can be highly effective.", "category": "astro-ph_IM" },
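The cell-hashing idea above is compact enough to sketch. With the cell width set to the minimal template distance, a proposal can only conflict with occupants of its own or directly adjacent cells, so only those distances need to be computed. This is our minimal flat-metric illustration (a real bank would use the proper parameter-space metric):

```python
import itertools
import numpy as np

def build_stochastic_bank(proposals, d_min):
    """Keep proposals farther than d_min from all accepted templates,
    testing each one only against the 3^d neighbouring cells."""
    bank, grid = [], {}          # grid maps an integer cell index -> templates
    dim = proposals.shape[1]
    offsets = list(itertools.product((-1, 0, 1), repeat=dim))
    for p in proposals:
        cell = tuple(np.floor(p / d_min).astype(int))
        nearby = (q for off in offsets
                    for q in grid.get(tuple(np.add(cell, off)), ()))
        if all(np.linalg.norm(p - q) >= d_min for q in nearby):
            bank.append(p)
            grid.setdefault(cell, []).append(p)
    return np.asarray(bank)

# Example: cover the unit square with a minimal separation of 0.05.
rng = np.random.default_rng(0)
bank = build_stochastic_bank(rng.random((20000, 2)), 0.05)
```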
{ "text": "Space very long baseline interferometry in China: Space very long baseline interferometry (VLBI) has unique applications in high-resolution imaging of the fine structure of astronomical objects and in high-precision astrometry, due to the key long space-Earth or space-space baselines beyond the Earth's diameter. China has been actively involved in the development of space VLBI in recent years. This review briefly summarizes China's research progress in space VLBI and its future development plan.", "category": "astro-ph_IM" }, { "text": "A linearized approach to radial velocity extraction: High-precision radial velocity (RV) measurements are crucial for exoplanet detection and characterisation. Efforts to achieve ~10 cm/s precision have been made over the recent decades, with significant advancements in instrumentation, data reduction techniques, and statistical inference methods. However, despite these efforts, RV precision is currently limited to ~50 cm/s. This value exceeds state-of-the-art spectrographs' expected instrumental noise floor and is mainly attributed to RV signals induced by stellar variability. In this work, we propose a factorisation method to overcome this limitation. The factorisation is particularly suitable for controlling the effect of localised changes in the stellar emission profile, assuming some smooth function of a few astrophysical parameters governs them. We use short-time Fourier transforms (STFT) to infer the RV in a procedure equivalent to least-squares minimisation in the wavelength domain and demonstrate the effectiveness of our method in treating arbitrary temperature fluctuations on the star's surface. The proposed prescription can be naturally generalised to account for other effects, either intrinsic to the star, such as magnetic fields, or extrinsic to it, such as telluric contamination. As a proof of concept, we empirically derive a set of factorisation terms describing the Solar centre-to-limb variation and apply them to a set of realistic SOAP-GPU spectral simulations. We discuss the method's capability to mitigate variability-induced RV signals and its potential extensions to serve as a tomographic tool.", "category": "astro-ph_IM" }, { "text": "Estimating Extinction using Unsupervised Machine Learning: Dust extinction is the most robust tracer of the gas distribution in the interstellar medium, but measuring extinction is limited by the systematic uncertainties involved in estimating the intrinsic colors of background stars. In this paper we present a new technique, PNICER, that estimates intrinsic colors and extinction for individual stars using unsupervised machine learning algorithms. This new method aims to be free from any priors with respect to the column density and intrinsic color distribution. It is applicable to any combination of parameters and works in arbitrary numbers of dimensions. Furthermore, it is not restricted to color space. Extinction towards single sources is determined by fitting Gaussian Mixture Models along the extinction vector to (extinction-free) control field observations. In this way it becomes possible to describe the extinction for observed sources with probability densities. PNICER effectively eliminates known biases found in similar methods and outperforms them in cases of deep observational data where the number of background galaxies is significant, or when a large number of parameters is used to break degeneracies in the intrinsic color distributions. This new method remains computationally competitive, making it possible to correctly de-redden millions of sources within a matter of seconds. With the ever-increasing number of large-scale high-sensitivity imaging surveys, PNICER offers a fast and reliable way to efficiently calculate extinction for arbitrary parameter combinations without prior information on source characteristics. PNICER also offers access to the well-established NICER technique in a simple unified interface and is capable of building extinction maps including the NICEST correction for cloud substructure. PNICER is offered to the community as an open-source software solution and is entirely written in Python.", "category": "astro-ph_IM" },
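The core step described above, fitting a Gaussian Mixture Model to extinction-free control-field colors and sliding each science source along the extinction vector until its de-reddened colors are most probable, can be rendered in a few lines of scikit-learn. This is our conceptual sketch of that idea (the grid search, names and defaults are ours), not the PNICER package itself:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_intrinsic_colors(control_colors, n_components=5):
    """Model the extinction-free color distribution of a control field."""
    return GaussianMixture(n_components=n_components).fit(control_colors)

def estimate_extinction(gmm, colors, reddening_vector,
                        grid=np.linspace(0.0, 10.0, 501)):
    """For each source, return the extinction that maximises the likelihood
    of its de-reddened colors under the control-field mixture model."""
    best = np.empty(len(colors))
    for i, c in enumerate(colors):
        dereddened = c - np.outer(grid, reddening_vector)  # one row per trial A_V
        best[i] = grid[np.argmax(gmm.score_samples(dereddened))]
    return best
```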
{ "text": "Supersonic turbulence simulations with GPU-based high-order Discontinuous Galerkin hydrodynamics: We investigate the numerical performance of a Discontinuous Galerkin (DG) hydrodynamics implementation when applied to the problem of driven, isothermal supersonic turbulence. While the high-order element-based spectral approach of DG is known to efficiently produce accurate results for smooth problems (exponential convergence with expansion order), physical discontinuities in solutions, like shocks, prove challenging and may significantly diminish DG's applicability to practical astrophysical applications. We consider whether DG is able to retain its accuracy and stability for highly supersonic turbulence, characterized by a network of shocks. We find that our new implementation, which regularizes shocks at sub-cell resolution with artificial viscosity, still performs well compared to standard second-order schemes for moderately high Mach number turbulence, provided we also employ an additional projection of the primitive variables onto the polynomial basis to regularize the extrapolated values at cell interfaces. However, the accuracy advantage of DG diminishes significantly in the highly supersonic regime. Nevertheless, in turbulence simulations with a wide dynamic range that start with supersonic Mach numbers and can resolve the sonic point, the low numerical dissipation of DG schemes still proves advantageous in the subsonic regime. Our results thus support the practical applicability of DG schemes for demanding astrophysical problems that involve strong shocks and turbulence, such as star formation in the interstellar medium. We also discuss the substantial computational cost of DG when going to high order, which needs to be weighed against the resulting accuracy gain. For problems containing shocks, this favours the use of comparatively low DG order.", "category": "astro-ph_IM" }, { "text": "Mechanical cryocooler noise observed in the ground testing of the Resolve X-ray microcalorimeter onboard XRISM: Low-temperature detectors often use mechanical coolers as part of the cooling chain in order to reach sub-Kelvin operating temperatures. The microphonics noise caused by the mechanical coolers is a general and inherent issue for these detectors. We have observed this effect in the ground test data obtained with the Resolve instrument to be flown on the XRISM satellite. Resolve is a cryogenic X-ray microcalorimeter spectrometer with a required energy resolution of 7 eV at 6 keV. Five mechanical coolers are used to cool from ambient temperature to about 4 K: four two-stage Stirling coolers (STC) driven nominally at 15 Hz and a Joule-Thomson cooler (JTC) driven nominally at 52 Hz. In 2019, we operated the flight-model instrument for two weeks, in which we also obtained accelerometer data inside the cryostat at a low-temperature stage (He tank). X-ray detector and accelerometer data were obtained continuously while changing the JTC drive frequency, which produced a unique data set for investigating how the vibration from the cryocoolers propagates to the detector. In the detector noise spectra, we observed harmonics of both the STCs and the JTC. More interestingly, we also observed low-frequency (<20 Hz) beats between the 4th JTC and 14th STC harmonics and between the 7th JTC and the 23rd--24th STC harmonics. We present here a description and interpretation of these measurements.", "category": "astro-ph_IM" },
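The beat frequencies quoted above follow directly from the nominal drive frequencies (our arithmetic):

```latex
4 f_{\mathrm{JTC}} = 4 \times 52 = 208\ \mathrm{Hz}, \quad
14 f_{\mathrm{STC}} = 14 \times 15 = 210\ \mathrm{Hz}
  \;\Rightarrow\; \mathrm{beat\ at}\ |210-208| = 2\ \mathrm{Hz}; \\
7 f_{\mathrm{JTC}} = 364\ \mathrm{Hz}, \quad
23 f_{\mathrm{STC}} = 345\ \mathrm{Hz},\ 24 f_{\mathrm{STC}} = 360\ \mathrm{Hz}
  \;\Rightarrow\; \mathrm{beats\ at}\ 19\ \mathrm{Hz}\ \mathrm{and}\ 4\ \mathrm{Hz},
```

all below the quoted 20 Hz bound.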
{ "text": "Low-frequency wideband timing of InPTA pulsars observed with the uGMRT: High-precision measurements of the pulsar dispersion measure (DM) are possible using telescopes with low-frequency wideband receivers. We present an initial study of the application of the wideband timing technique, which can simultaneously measure the pulsar times of arrival (ToAs) and DMs, for a set of five pulsars observed with the upgraded Giant Metrewave Radio Telescope (uGMRT) as part of the Indian Pulsar Timing Array (InPTA) campaign. We have used the observations with the 300-500 MHz band of the uGMRT for this purpose. We obtain DM measurements with precisions of the order of 10^{-6} cm^{-3} pc. The ToAs obtained have sub-{\mu}s precision, and the root-mean-square of the post-fit ToA residuals is in the sub-{\mu}s range. We find that the uncertainties in the DMs and ToAs obtained with this wideband technique, applied to low-frequency data, are consistent with the results obtained with traditional pulsar timing techniques and comparable to high-frequency results from other PTAs. This work opens up an interesting possibility of using low-frequency wideband observations for precision pulsar timing and gravitational wave detection with a precision similar to that of the conventionally used high-frequency observations.", "category": "astro-ph_IM" }, { "text": "Flexure updates to MOSFIRE on the Keck I telescope: We present a recent evaluation of, and updates applied to, the Multi-Object Spectrometer For Infra-Red Exploration (MOSFIRE) on the Keck I telescope. Over the course of long integrations, when MOSFIRE sits on one mask for $>$4 hours, a slight drift of the mask stars has been measured. While this does not affect all science cases done with MOSFIRE, the drift can smear out signal for observers whose science objective depends upon lengthy integrations. This effect was determined to be the possible result of three factors: the internal flexure compensation system (FCS), the guider camera flexure system, and/or the differential atmospheric refraction (DAR) corrections. In this work, we summarize the three systems, walk through the testing done so far to narrow down the possible culprit of this drift, and highlight future testing to be done.", "category": "astro-ph_IM" }, { "text": "First on-sky results of a FIOS prototype, a Fabry Perot Based Instrument for Oxygen Searches: The upcoming Extremely Large Telescopes (ELTs) are expected to have the collecting area required to detect potential biosignature gases such as molecular oxygen, $\mathrm{O_2}$, in the atmosphere of terrestrial planets around nearby stars. One of the most promising detection methods is transmission spectroscopy. To maximize our capability to detect $\mathrm{O_2}$ using this method, spectral resolutions $\mathrm{R}\geq 300,000$ are required to fully resolve the absorption lines in an Earth-like exoplanet atmosphere and disentangle the signal from telluric lines. Current high-resolution spectrographs typically achieve a spectral resolution of $\mathrm{R}\sim100,000$. 
Increasing the resolution in seeing-limited observations/instruments requires drastically larger optical components, making these instruments even more expensive and hard to fabricate and assemble. Instead, we demonstrate a new approach to high-resolution spectroscopy. We implemented an ultra-high spectral resolution booster to be coupled in front of a high-resolution spectrograph. The instrument is based on a chained Fabry-Perot array which generates a hyperfine spectral profile. We present on-sky telluric observations with a lab demonstrator. Depending on the configuration, this two-arm prototype reaches a resolution of R=250,000-350,000. After carefully modeling the prototype's behavior, we propose a Fabry-Perot Interferometer (FPI) design for an eight-arm array configuration aimed at ELTs, capable of exceeding R=300,000. The novel FPI resolution booster can be plugged in at the front end of an existing R=100,000 spectrograph to overwrite the spectral profile with a higher resolution for exoplanet atmosphere studies.", "category": "astro-ph_IM" }, { "text": "Development of Dual-Gain SiPM Boards for Extending the Energy Dynamic Range: Astronomical observations with gamma rays in the range of several hundred keV to hundreds of MeV currently represent the least explored energy range. To address this so-called MeV gap, we designed and built a prototype CsI:Tl calorimeter instrument using commercial off-the-shelf (COTS) SiPMs and front-ends, which may serve as a subsystem for a larger gamma-ray mission concept. During development, we observed significant non-linearity in the energy response. Additionally, using the COTS readout, the calorimeter could not cover the four orders of magnitude in energy range required for the telescope. We therefore developed dual-gain silicon photomultiplier (SiPM) boards that make use of two SiPM species that are read out separately to increase the dynamic energy range of the readout. In this work, we investigate the SiPM response with regard to active area ($3\times3 \ \mathrm{mm}^2$ and $1 \times 1 \ \mathrm{mm}^2$) and various microcell sizes ($10$, $20$, and $35 \ \mu \mathrm{m}$). We read out $3\times3\times6 \ \mathrm{cm}^3$ CsI:Tl chunks using dual-gain SiPMs that utilize $35 \ \mu \mathrm{m}$ microcells for both SiPM species and demonstrate the concept when tested with high-energy gamma-ray and proton beams. We also studied the response of $17 \times 17 \times 100 \ \mathrm{mm}^3$ CsI bars to high-energy protons. With the COTS readout, we demonstrate a sensitivity to $60 \ \mathrm{MeV}$ protons, with the two SiPM species overlapping at a range of around $2.5-30 \ \mathrm{MeV}$. This development aims to demonstrate the concept for future scintillator-based high-energy calorimeters with applications in gamma-ray astrophysics.", "category": "astro-ph_IM" }, { "text": "First 230 GHz VLBI Fringes on 3C 279 using the APEX Telescope: We report on a 230 GHz very long baseline interferometry (VLBI) fringe finder observation of the blazar 3C 279 with the APEX telescope in Chile, the phased Submillimeter Array (SMA), and the SMT of the Arizona Radio Observatory (ARO). We installed VLBI equipment and measured the APEX station position to 1 cm accuracy (1 sigma). We then observed 3C 279 on 2012 May 7 in a 5 hour 230 GHz VLBI track with baseline lengths of 2800 M$\lambda$ to 7200 M$\lambda$ and a finest fringe spacing of 28.6 micro-arcseconds. 
Fringes were detected on all\nbaselines with SNRs of 12 to 55 in 420 s. The correlated flux density on the\nlongest baseline was ~0.3 Jy/beam, out of a total flux density of 19.8 Jy.\nVisibility data suggest an emission region <38 uas in size, and at least two\ncomponents, possibly polarized. We find a lower limit of the brightness\ntemperature of the inner jet region of about 10^10 K. Lastly, we find an upper\nlimit of 20% on the linear polarization fraction at a fringe spacing of ~38\nuas. With APEX, the angular resolution of 230 GHz VLBI improves to 28.6 uas.\nThis allows one to resolve the last-photon ring around the Galactic Center\nblack hole event horizon, expected to be 40 uas in diameter, and probe radio\njet launching at unprecedented resolution, down to a few gravitational radii in\ngalaxies like M 87. To probe the structure in the inner parsecs of 3C 279 in\ndetail, follow-up observations with APEX and five other mm-VLBI stations have\nbeen conducted (March 2013) and are being analyzed.", "category": "astro-ph_IM" }, { "text": "First observations and magnitude measurement of Starlink's Darksat: We measure the Sloan g' magnitudes of Starlink's STARLINK-1130 (Darksat) and\nSTARLINK-1113 LEO communication satellites and determine the effectiveness of the\nDarksat darkening treatment at 475.4\\,nm. Two observations of Starlink's\nDarksat LEO communication satellite were conducted on 2020/02/08 and 2020/03/06\nusing a Sloan r' and g' filter, respectively, while a second satellite,\nSTARLINK-1113, was observed on 2020/03/06 using a Sloan g' filter. The initial\nobservation on 2020/02/08 was a test observation when Darksat was still\nmanoeuvring to its nominal orbit and orientation. Based on the successful test\nobservation, the first main observation was conducted on 2020/03/06 along with\nan observation of the second Starlink satellite. The calibration, image\nprocessing and analysis of the Darksat Sloan g' image give an estimated Sloan\ng' magnitude of $7.46\\pm0.04$ at a range of 976.50\\,km. For STARLINK-1113, an\nestimated Sloan g' magnitude of $6.59\\pm0.05$ at a range of 941.62\\,km was\nfound. When scaled to a range of 550\\,km and corrected for the solar and\nobserver phase angles, a reduction by a factor of two is seen in the reflected\nsolar flux between Darksat and STARLINK-1113. The data and results presented in\nthis work show that the special darkening coating used by Starlink for Darksat\nhas darkened the Sloan g' magnitude by $0.77\\pm0.05$\\,mag, when the range is\nequal to a nominal orbital height (550\\,km). This result will serve members of\nthe astronomical community modelling the satellite mega-constellations to\nascertain their true impact on both the amateur and professional astronomical\ncommunities. Concurrent and further observations are planned to cover the full\noptical and NIR spectrum, from an ensemble of instruments, telescopes and\nobservatories.", "category": "astro-ph_IM" }, { "text": "The ROAD to discovery: machine learning-driven anomaly detection in\n radio astronomy spectrograms: As radio telescopes increase in sensitivity and flexibility, so do their\ncomplexity and data-rates. For this reason, automated system health management\napproaches are becoming increasingly critical to ensure nominal telescope\noperations.
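The range scaling used in the Darksat record above is plain inverse-square photometry, which makes for a compact worked example. The sketch below reproduces only the range term, reusing the magnitudes and ranges quoted in the record; the solar and observer phase-angle corrections the authors also apply are deliberately left out.

```python
# Range-only magnitude scaling (inverse-square law); phase-angle corrections
# from the record are omitted here.
import math

def mag_at_nominal_range(mag_obs, range_km, nominal_km=550.0):
    # Flux falls as 1/d^2, so magnitudes shift by 5*log10(d/d0).
    return mag_obs - 5.0 * math.log10(range_km / nominal_km)

m_darksat = mag_at_nominal_range(7.46, 976.50)   # values quoted above
m_1113 = mag_at_nominal_range(6.59, 941.62)
print(f"Darksat: {m_darksat:.2f} mag, STARLINK-1113: {m_1113:.2f} mag")
```

The raw gap between the two scaled magnitudes is about 0.8 mag; the phase-angle terms then bring the darkening estimate to the quoted 0.77 +/- 0.05 mag.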
We propose a new machine learning anomaly detection framework for\nboth classifying commonly occurring anomalies in radio telescopes and\ndetecting unknown rare anomalies that the system has potentially not yet seen.\nTo evaluate our method, we present a dataset consisting of 7050\nautocorrelation-based spectrograms from the Low Frequency Array (LOFAR)\ntelescope and assign 10 different labels relating to the system-wide anomalies\nfrom the perspective of telescope operators. This includes electronic failures,\nmiscalibration, solar storms, and network and compute hardware errors, among many\nmore. We demonstrate how a novel Self Supervised Learning (SSL) paradigm, which\nutilises both context prediction and reconstruction losses, is effective in\nlearning the normal behaviour of the LOFAR telescope. We present the Radio\nObservatory Anomaly Detector (ROAD), a framework that combines both SSL-based\nanomaly detection and supervised classification, thereby enabling both\nclassification of commonly occurring anomalies and detection of unseen\nanomalies. We demonstrate that our system is real-time in the context of the\nLOFAR data processing pipeline, requiring <1ms to process a single spectrogram.\nFurthermore, ROAD obtains an anomaly detection F-2 score of 0.92 while\nmaintaining a false positive rate of ~2\\%, as well as a mean per-class\nclassification F-2 score of 0.89, outperforming other related works.", "category": "astro-ph_IM" }, { "text": "Effects of the Hunga Tonga-Hunga Ha'apai Volcanic Eruption on\n Observations at Paranal Observatory: The Hunga Tonga-Hunga Ha'apai volcano erupted on 15 January 2022 with an\nenergy equivalent to around 61 megatons of TNT. The explosion was bigger than\nany other volcanic eruption so far in the 21st century. Huge quantities of\nparticles, including dust and water vapour, were released into the atmosphere.\nWe present the results of a preliminary study of the effects of the explosion\non observations taken at Paranal Observatory using a range of instruments.\nThese effects were not immediately transitory in nature, and a year later\nstunning sunsets are still being seen at Paranal.", "category": "astro-ph_IM" }, { "text": "A Constrained Transport Scheme for MHD on Unstructured Static and Moving\n Meshes: Magnetic fields play an important role in many astrophysical systems and a\ndetailed understanding of their impact on the gas dynamics requires robust\nnumerical simulations. Here we present a new method to evolve the ideal\nmagnetohydrodynamic (MHD) equations on unstructured static and moving meshes\nthat preserves the magnetic field divergence-free constraint to machine\nprecision. The method overcomes the major problems of using a cleaning scheme\non the magnetic fields instead, which is non-conservative, not fully Galilean\ninvariant, does not eliminate divergence errors completely, and may produce\nincorrect jumps across shocks. Our new method is a generalization of the\nconstrained transport (CT) algorithm used to enforce the $\\nabla\\cdot\n\\mathbf{B}=0$ condition on fixed Cartesian grids. Preserving $\\nabla\\cdot\n\\mathbf{B}=0$ at the discretized level is necessary to maintain the\northogonality between the Lorentz force and $\\mathbf{B}$.
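The headline metrics in the ROAD record above are F-2 scores, i.e. the F-beta measure with beta = 2, which weights recall four times as heavily as precision. A self-contained reimplementation for a binary anomaly mask follows; the toy labels are invented, and the result should match sklearn.metrics.fbeta_score with beta=2.

```python
# F-beta score with beta = 2 for a binary anomaly mask (toy labels).
import numpy as np

def fbeta(y_true, y_pred, beta=2.0):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta**2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 0])
print(f"F-2 = {fbeta(y_true, y_pred):.3f}")
```

For per-class scores on a multi-class problem, one applies the same formula to each one-vs-rest mask and averages.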
The possibility of\nperforming CT on a moving mesh provides several advantages over static mesh\nmethods due to the quasi-Lagrangian nature of the former (i.e., the mesh\ngenerating points move with the flow), such as making the simulation\nautomatically adaptive and significantly reducing advection errors. Our method\npreserves magnetic fields and fluid quantities in pure advection exactly.", "category": "astro-ph_IM" }, { "text": "Optical Cross Correlation Filters: An Economical Approach for\n Identifying SNe Ia and Estimating their Redshifts: Large photometric surveys of transient phenomena, such as Pan-STARRS and\nLSST, will locate thousands to millions of type Ia supernova candidates per\nyear, a rate prohibitive for acquiring spectroscopy to determine each\ncandidate's type and redshift. In response, we have developed an economical\napproach to identifying SNe Ia and their redshifts using an uncommon type of\noptical filter which has multiple, discontinuous passbands on a single\nsubstrate. Observation of a supernova through a specially designed pair of\nthese `cross-correlation filters' measures the approximate amplitude and phase\nof the cross-correlation between the spectrum and a SN Ia template, a quantity\ntypically used to determine the redshift and type of a high-redshift SN Ia.\nSimulating the use of these filters, we obtain a sample of SNe Ia which is ~98%\npure, with individual redshifts measured to 0.01 precision. The advantages of\nthis approach over standard broadband photometric methods are that it is\ninsensitive to reddening, independent of the color data used for subsequent\ndistance determinations, which reduces selection or interpretation bias, and,\nbecause it makes use of the spectral features, its reliability is greater. A\ngreat advantage over long-slit spectroscopy comes from increased throughput,\nenhanced multiplexing and reduced set-up time, resulting in a net gain in speed\nof up to ~30 times. This approach is also insensitive to host galaxy\ncontamination. Prototype filters were built and successfully used on Magellan\nwith LDSS-3 to characterize three SNLS candidates. We discuss how these filters\ncan provide critical information for the upcoming photometric supernova\nsurveys.", "category": "astro-ph_IM" }, { "text": "Autocollimating compensator for controlling aspheric optical surfaces: A compensator (null-corrector) for testing aspheric optical surfaces is\nproposed, which enables i) independent verification of optical elements and\nassembling of the compensator itself, and ii) ascertaining the compensator\nposition in a control layout for a specified aspheric surface. The compensator\nconsists of three spherical lenses made of the same glass. In this paper, the\nscope of the compensator is expanded to a surface speed of ~f/2.3; a conceptual\nexample for a nominal primary of the Hubble Space Telescope is given. The\nautocollimating design allows a significant reduction of the difficulties associated\nwith the practical use of lens compensators.", "category": "astro-ph_IM" }, { "text": "An optimized algorithm for multi-scale wideband deconvolution of radio\n astronomical images: We describe a new multi-scale deconvolution algorithm that can also be used\nin multi-frequency mode. The algorithm only affects the minor clean loop. In\nsingle-frequency mode, the minor loop of our improved multi-scale algorithm is\nover an order of magnitude faster than the CASA multi-scale algorithm, and\nproduces results of similar quality.
For multi-frequency deconvolution, a\ntechnique named joined-channel cleaning is used. In this mode, the minor loop\nof our algorithm is 2-3 orders of magnitude faster than CASA MSMFS. We extend\nthe multi-scale mode with automated scale-dependent masking, which allows\nstructures to be cleaned below the noise. We describe a new scale-bias function\nfor use in multi-scale cleaning. We test a second deconvolution method that is\na variant of the MORESANE deconvolution technique, and uses a convex\noptimisation technique with isotropic undecimated wavelets as a dictionary. On\nsimple, well-calibrated data the convex optimisation algorithm produces\nvisually more representative models. On complex or imperfect data, the convex\noptimisation algorithm has stability issues.", "category": "astro-ph_IM" }, { "text": "An Overview of CHIME, the Canadian Hydrogen Intensity Mapping Experiment: The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is a drift scan\nradio telescope operating across the 400-800 MHz band. CHIME is located at the\nDominion Radio Astrophysical Observatory near Penticton, BC, Canada. The\ninstrument is designed to map neutral hydrogen over the redshift range 0.8 to\n2.5 to constrain the expansion history of the Universe. This goal drives the\ndesign features of the instrument. CHIME consists of four parallel cylindrical\nreflectors, oriented north-south, each 100 m $\\times$ 20 m and outfitted with a\n256 element dual-polarization linear feed array. CHIME observes a two-degree-wide\nstripe covering the entire meridian at any given moment, observing 3/4 of\nthe sky every day due to the Earth's rotation. An FX correlator utilizes FPGAs and\nGPUs to digitize and correlate the signals, with different correlation products\ngenerated for cosmological, fast radio burst, pulsar, VLBI, and 21 cm absorber\nbackends. For the cosmology backend, the $N_\\mathrm{feed}^2$ correlation matrix\nis formed for 1024 frequency channels across the band every 31 ms. A data\nreceiver system applies calibration and flagging and, for our primary\ncosmological data product, stacks redundant baselines and integrates for 10 s.\nWe present an overview of the instrument, its performance metrics based on the\nfirst three years of science data, and we describe the current progress in\ncharacterizing CHIME's primary beam response. We also present maps of the sky\nderived from CHIME data; we are using versions of these maps for a cosmological\nstacking analysis as well as for investigation of Galactic foregrounds.", "category": "astro-ph_IM" }, { "text": "The JEM-EUSO Mission: Contributions to the ICRC 2013: Contributions of the JEM-EUSO Collaboration to the 33rd International Cosmic\nRay Conference (The Astroparticle Physics Conference), Rio de Janeiro, July\n2013.", "category": "astro-ph_IM" }, { "text": "Data mining techniques on astronomical spectra data. I : Clustering\n Analysis: Clustering is an effective tool for astronomical spectral analysis, used to mine\nclustering patterns among data. With the implementation of large sky surveys,\nmany clustering methods have been applied to tackle spectroscopic and\nphotometric data effectively and automatically. Meanwhile, the performance of\nclustering methods under different data characteristics varies greatly. With\nthe aim of summarizing astronomical spectral clustering algorithms and laying\nthe foundation for further research, this work gives a review of clustering\nmethods applied to astronomical spectra data in three parts.
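To make the multi-scale minor loop in the deconvolution record above concrete, here is a deliberately naive sketch: smooth the residual at each scale, weight the peaks with a toy scale-bias, and subtract a Gaussian blob at the winning peak. The scales, gain, and bias weights are placeholders, and the bare-blob subtraction stands in for the proper PSF-convolved component subtraction; nothing here reflects the optimized algorithm's internals.

```python
# Naive greedy multi-scale minor loop (toy only; a real minor loop subtracts
# components convolved with the dirty beam).
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_multiscale_minor_loop(residual, scales=(0, 2, 4), gain=0.2, niter=200):
    residual = residual.copy()
    model = np.zeros_like(residual)
    for _ in range(niter):
        # smoothed views of the residual, one per scale
        views = [gaussian_filter(residual, s) if s else residual for s in scales]
        # toy scale-bias: mildly favour the smaller scales
        biases = [0.8**i for i in range(len(scales))]
        peaks = [b * v.flat[np.abs(v).argmax()] for b, v in zip(biases, views)]
        k = int(np.argmax(np.abs(peaks)))
        v = views[k]
        iy, ix = np.unravel_index(np.abs(v).argmax(), v.shape)
        blob = np.zeros_like(residual)
        blob[iy, ix] = 1.0
        if scales[k]:
            blob = gaussian_filter(blob, scales[k])
            blob /= blob.max()            # unit-peak component at this scale
        amp = gain * v[iy, ix]
        residual -= amp * blob
        model += amp * blob
    return model, residual

rng = np.random.default_rng(0)
dirty = rng.normal(0.0, 0.01, (64, 64))
dirty[32, 32] += 1.0                      # a point source
ext = np.zeros((64, 64)); ext[40, 20] = 30.0
dirty += gaussian_filter(ext, 4)          # an extended source
model, res = toy_multiscale_minor_loop(dirty)
print("residual rms:", res.std())
```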
First, many\nclustering methods for astronomical spectra are investigated and analysed\ntheoretically, looking at algorithmic ideas, applications, and features.\nSecond, experiments are carried out on unified datasets constructed using\nthree criteria (spectra data type, spectra quality, and data volume) to compare\nthe performance of typical algorithms; spectra data are selected from the Large\nSky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST) survey and Sloan\nDigital Sky Survey (SDSS). Finally, source codes of the compared clustering\nalgorithms and manuals for their usage and improvement are provided on GitHub.", "category": "astro-ph_IM" }, { "text": "Advanced Kelvin Probe Operational Methodology for Space Applications: We present a novel methodology for the operation of macroscopic Kelvin Probe\ninstruments. The methodology is based on the use of a harmonic backing\npotential signal to drive the tip-sample variable capacitance and on a Fourier\nrepresentation of the tip current; it allows for the operation of the instrument\nunder full control and improves its scanning performance by a factor of 60 or\nmore over that of currently available commercial instruments.", "category": "astro-ph_IM" }, { "text": "Improvements to the Search for Cosmic Dawn Using the Long Wavelength\n Array: We present recent improvements to the search for the global Cosmic Dawn\nsignature using the Long Wavelength Array station located on the Sevilleta\nNational Wildlife Refuge in New Mexico, USA (LWA-SV). These improvements are\nboth in the methodology of the experiment and the hardware of the station. An\nimproved observing strategy, along with more sophisticated temperature\ncalibration and foreground modelling schemes, has led to improved residual RMS\nlimits. A large improvement over previous work using LWA-SV is the use of a\nnovel achromatic beamforming technique which has been developed for LWA-SV. We\npresent results from an observing campaign which contains 29 days of\nobservations between March $10^{\\rm{th}}$, 2021 and April $10^{\\rm{th}}$, 2021.\nThe reported residual RMS limits are 6 times above the amplitude of the\npotential signal reported by the Experiment to Detect the Global EoR Signature\n(EDGES) collaboration.", "category": "astro-ph_IM" }, { "text": "Effects of $150-1000$ eV Electron Impacts on Pure Carbon Monoxide Ices\n using the Interstellar Energetic-Process System (IEPS): Pure CO ice has been irradiated with electrons of energy in the range\n$150-1000$~eV with the Interstellar Energetic-Process System (IEPS). The main\nproducts of irradiation are carbon chains C$_n$ ($n=3$, 5, 6, 8, 9, 10, 11,\n12), suboxides, C$_n$O ($n=2$, 3, 4, 5, 6, 7), and C$_n$O$_2$ ($n=1$, 3, 4, 5,\n7) species. \\ce{CO2} is by far the most abundant reaction product in all the\nexperiments. The destruction cross-section of CO peaks at about 250 eV,\ndecreases with the energy of the electrons, and is more than one order of\nmagnitude higher than for gas-phase CO ionization. The production cross-section\nof carbon dioxide has also been derived and is characterized by the competition\nbetween chemistry and desorption.\n Desorption of CO and of new species during the radiolysis follows the\nelectron distribution in the ice. Low-energy electrons, having short penetration\ndepths, induce significant desorption. Finally, as the ice thickness approaches\nthe electron penetration depth, the abundance of the products starts to\nsaturate.
Implications for the atmospheric photochemistry of cold planets\nhosting surface CO ices are also discussed.", "category": "astro-ph_IM" }, { "text": "Bayesian inference for pulsar timing models: The extremely regular, periodic radio emission from millisecond pulsars makes\nthem useful tools for studying neutron star astrophysics, general relativity,\nand low-frequency gravitational waves. These studies require that the observed\npulse times of arrival be fit to complex timing models that describe numerous\neffects such as the astrometry of the source, the evolution of the pulsar's\nspin, the presence of a binary companion, and the propagation of the pulses\nthrough the interstellar medium. In this paper, we discuss the benefits of\nusing Bayesian inference to obtain pulsar timing solutions. These benefits\ninclude the validation of linearized least-squares model fits when they are\ncorrect, and the proper characterization of parameter uncertainties when they\nare not; the incorporation of prior parameter information and of models of\ncorrelated noise; and the Bayesian comparison of alternative timing models. We\ndescribe our computational setup, which combines the timing models of Tempo2\nwith the nested-sampling integrator MultiNest. We compare the timing solutions\ngenerated using Bayesian inference and linearized least-squares for three\npulsars: B1953+29, J2317+1439, and J1640+2224, which demonstrate a variety of\nthe benefits that we posit.", "category": "astro-ph_IM" }, { "text": "VOEvent Standard for Fast Radio Bursts: Fast radio bursts are a new class of transient radio phenomena currently\ndetected as millisecond radio pulses with very high dispersion measures. As new\nradio surveys begin searching for FRBs, a large population is expected to be\ndetected in real-time, triggering a range of multi-wavelength and\nmulti-messenger telescopes to search for repeating bursts and/or associated\nemission. Here we propose a method for disseminating FRB triggers using Virtual\nObservatory Events (VOEvents). This format was developed and is used\nsuccessfully for transient alerts across the electromagnetic spectrum and for\nmulti-messenger signals such as gravitational waves. In this paper we outline a\nproposed VOEvent standard for FRBs that includes the essential parameters of\nthe event and where these parameters should be specified within the structure\nof the event. An additional advantage to the use of VOEvents for FRBs is that\nthe events can automatically be ingested into the FRB Catalogue (FRBCAT),\nenabling real-time updates for public use. We welcome feedback from the\ncommunity on the proposed standard outlined here and encourage those\ninterested to join the nascent working group forming around this topic.", "category": "astro-ph_IM" }, { "text": "Study on the gain and photon detection efficiency drops of silicon\n photomultipliers under bright background conditions: The use of silicon photomultipliers (SiPMs) in imaging atmospheric Cherenkov\ntelescopes is expected to extend the observation times of very-high-energy\ngamma-ray sources, particularly within the highest energy domain of 50-300 TeV,\nwhere the Cherenkov signal from celestial gamma rays is adequate even under\nbright moonlight background conditions. Unlike conventional photomultiplier\ntubes, SiPMs do not exhibit quantum efficiency or gain degradation, which can\nbe observed after long exposures to bright illumination.
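The Bayesian-timing record above contrasts linearized least squares with full posterior inference (Tempo2 plus MultiNest). A one-parameter toy shows the basic point that the two agree when the model really is linear with Gaussian noise; the grid posterior below is only an illustration, and all numbers are invented.

```python
# One-parameter toy: linearized least squares vs a brute-force grid posterior
# (a stand-in for the Tempo2 + MultiNest machinery).
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)          # observing epochs, arbitrary units
sigma = 1e-6                           # ToA uncertainty [s]
a_true = 3.0e-7                        # slope of a linear timing term
resid = a_true * t + rng.normal(0.0, sigma, t.size)

# Linearized (here: exactly linear) least squares
a_ls = np.sum(t * resid) / np.sum(t**2)

# Bayesian posterior on a grid, flat prior
a_grid = np.linspace(a_ls - 2e-6, a_ls + 2e-6, 4001)
chi2 = np.sum((resid - a_grid[:, None] * t)**2, axis=1) / sigma**2
post = np.exp(-0.5 * (chi2 - chi2.min()))
post /= post.sum()
a_bayes = np.sum(a_grid * post)
print(f"LS: {a_ls:.4e}  Bayes mean: {a_bayes:.4e}")  # agree: linear + Gaussian
```

The interesting regime in the record is precisely where this agreement breaks down: non-linear models or non-Gaussian/correlated noise, where the grid (or nested sampling) posterior characterizes the uncertainties properly while the linearized fit does not.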
However, under bright\nconditions, the photon detection efficiency of a SiPM can undergo temporary\ndegradation because a fraction of its avalanche photodiode cells can saturate\nowing to photons from the night-sky background (NSB). In addition, the large\ncurrent generated by the high NSB rate can increase the temperature of the\nsilicon substrate, resulting in shifts in the SiPM breakdown voltages and\nconsequent gain changes. Moreover, this large current changes the effective\nbias voltage because it causes a voltage drop across the protection resistor of\n100-1000 {\\Omega}. Hence, these three factors, namely the avalanche photodiode\n(APD) saturation, the Si temperature, and the voltage drop, must be carefully\ncompensated for and/or considered in the energy calibration of Cherenkov\ntelescopes with SiPM cameras. In this study, we measured the signal output\ncharge of a SiPM and its variation as a function of different NSB-like\nbackground conditions up to 1 GHz/pixel. The results verify that the product of\nthe SiPM gain and photon detection efficiency is well characterized by these\nthree factors.", "category": "astro-ph_IM" }, { "text": "Evolution of Data Formats in Very-High-Energy Gamma-ray Astronomy: Most major scientific results produced by ground-based gamma-ray telescopes\nin the last 30 years have been obtained by expert members of the collaborations\noperating these instruments. This is due to the proprietary data and software\npolicies adopted by these collaborations. However, the advent of the next\ngeneration of telescopes and their operation as observatories open to the\nastronomical community, along with a generally increasing demand for open\nscience, confronts gamma-ray astronomers with the challenge of sharing their\ndata and analysis tools. As a consequence, in the last few years, the\ndevelopment of open-source science tools has progressed in parallel with the\nendeavour to define a standardised data format for astronomical gamma-ray data.\nThe latter constitutes the main topic of this review. Common data\nspecifications provide equally important benefits to the current and future\ngeneration of gamma-ray instruments: they allow the data from different\ninstruments, including legacy data from decommissioned telescopes, to be easily\ncombined and analysed within the same software framework. In addition,\nstandardised data accessible to the public, and analysable with open-source\nsoftware, grant fully reproducible results. In this article we provide an\noverview of the evolution of the data format for gamma-ray astronomical data,\nfocusing on its progression from private and diverse specifications to\nprototypical open and standardised ones. The latter have already been\nsuccessfully employed in a number of publications, paving the way to the\nanalysis of data from the next generation of gamma-ray instruments, and to an\nopen and reproducible way of conducting gamma-ray astronomy.", "category": "astro-ph_IM" }, { "text": "Perspectives on Reproducibility and Sustainability of Open-Source\n Scientific Software from Seven Years of the Dedalus Project: As the Science Mission Directorate contemplates establishing an open code\npolicy, we consider it timely to share our experiences as the developers of the\nopen-source partial differential equation solver Dedalus. Dedalus is a flexible\nframework for solving partial differential equations. Its development team\nprimarily uses it for studying stellar and planetary astrophysics.
Dedalus was\ndeveloped originally for astrophysical fluid dynamics (AFD), though it has\nfound a much broader user base, including applied mathematicians, plasma\nphysicists, and oceanographers. Here, we will focus on issues related to\nopen-source software from the perspective of AFD. We use the term AFD with the\nunderstanding that astrophysics simulations are inherently multi-physics: fluid\ndynamics coupled with some combination of gravitational dynamics, radiation\ntransfer, relativity, and magnetic fields. In practice, a few well-known\nopen-source simulation packages represent a large fraction of published work in\nthe field. However, we will argue that an open-code policy should encompass not\njust these large simulation codes, but also the input files and analysis\nscripts. It is in our interest that NASA adopt an open-code policy because,\nwithout it, reproducibility in computational science is needlessly hampered.", "category": "astro-ph_IM" }, { "text": "A broadband scalar optical vortex coronagraph: In recent years, new coronagraphic schemes have been proposed, the most\npromising being the optical vortex phase mask coronagraphs. In our work, a new\nscheme for a broadband scalar optical vortex coronagraph is proposed and\ncharacterized experimentally in the laboratory. Our setup employs a pair of\ncomputer-generated phase gratings (one of them containing a singularity) to\ncontrol the chromatic dispersion of the phase plates and achieves a constant\npeak-to-peak attenuation below 1:1000 over a bandwidth of 120 nm centered at\n700 nm. An inner working angle of $\\lambda$/D is demonstrated along with a raw\ncontrast of 11.5\\,magnitudes at 2$\\lambda$/D. A more compact setup achieves a\npeak-to-peak attenuation below 1:1000 over a bandwidth of 60 nm with the other\nresults remaining the same.", "category": "astro-ph_IM" }, { "text": "The LOFAR View of Cosmic Magnetism: The origin of magnetic fields in the Universe is an open problem in\nastrophysics and fundamental physics. Polarization observations with the\nforthcoming large radio telescopes will open a new era in the observation of\nmagnetic fields and should help to understand their origin. At low frequencies,\nLOFAR (10-240 MHz) will allow us to map the structure of weak magnetic fields\nin the outer regions and halos of galaxies, in galaxy clusters and in the Milky\nWay via their synchrotron emission. Even weaker magnetic fields can be measured\nat low frequencies with the help of Faraday rotation measures. A detailed view of\nthe magnetic fields in the local Milky Way will be derived from Faraday rotation\nmeasures of pulsars. First promising images with LOFAR have been obtained for\nthe Crab pulsar-wind nebula, the spiral galaxy M51, the radio galaxy M87 and\nthe galaxy clusters A2255 and A2256. With the help of the polarimetric technique of\n\"Rotation Measure Synthesis\", diffuse polarized emission has been detected from\na magnetic bubble in the local Milky Way. Polarized emission and rotation\nmeasures have been measured for more than 20 pulsars so far.", "category": "astro-ph_IM" }, { "text": "Optimal PSF modeling for weak lensing: complexity and sparsity: We investigate the impact of point spread function (PSF) fitting errors on\ncosmic shear measurements using the concepts of complexity and sparsity.\nComplexity, introduced in a previous paper, characterizes the number of degrees\nof freedom of the PSF.
For instance, fitting an underlying PSF with a model\nwith low complexity will lead to small statistical errors on the model\nparameters; however, these parameters could suffer from large biases.\nAlternatively, fitting with a large number of parameters will tend to reduce\nbiases at the expense of statistical errors. We perform an optimisation of\nscatters and biases by studying the mean squared error of a PSF model. We also\ncharacterize a model's sparsity, which describes how efficiently the model is\nable to represent the underlying PSF using a limited number of free parameters.\nWe present the general case and illustrate it for a realistic example of a PSF\nfitted with shapelet basis sets. We derive the relation between the complexity and\nsparsity of the PSF model, the signal-to-noise ratio of stars and the systematic errors\non cosmological parameters. With the constraint of maintaining the systematics\nbelow the statistical uncertainties, this leads to a relation between the\nrequired number of stars to calibrate the PSF and the sparsity. We discuss the\nimpact of our results on current and future cosmic shear surveys. In the\ntypical case where the biases can be represented as a power law of the\ncomplexity, we show that current weak lensing surveys can calibrate the PSF\nwith a few stars, while future surveys will require hard constraints on the\nsparsity in order to calibrate the PSF with 50 stars.", "category": "astro-ph_IM" }, { "text": "Design and performance of a Collimated Beam Projector for telescope\n transmission measurement using a broadband light source: Type Ia supernovae are the most direct cosmological probe to study dark\nenergy in the recent Universe, for which the photometric calibration of\nastronomical instruments remains one major source of systematic uncertainties.\nTo address this, recent advancements introduce Collimated Beam Projectors\n(CBP), aiming to enhance calibration by precisely measuring a telescope's\nthroughput as a function of wavelength. This work describes the performance of\na prototype portable CBP. The experimental setup consists of a broadband Xenon\nlight source replacing a more customary but much more demanding high-power\nlaser source, coupled with a monochromator emitting light inside an integrating\nsphere monitored with a photodiode and a spectrograph. Light is injected at the\nfocus of the CBP telescope projecting a collimated beam onto a solar cell whose\nquantum efficiency has been obtained by comparison with a NIST-calibrated\nphotodiode. The throughput and signal-to-noise ratio achieved by comparing the\nphotocurrent signal in the CBP photodiode to the one in the solar cell are\ncomputed. We prove that the prototype, in its current state of development, is\ncapable of achieving 1.2 per cent and 2.3 per cent precision on the integrated\ng and r bands of the ZTF photometric filter system, respectively, in a\nreasonable amount of integration time. Central wavelength determination\naccuracy is kept below $\\sim$0.91 nm and $\\sim$0.58 nm for the g and r bands.\nThe expected photometric uncertainty caused by filter throughput measurement is\napproximately 5 mmag on the zero-point magnitude. Several straightforward\nimprovement paths are discussed to upgrade the current setup.", "category": "astro-ph_IM" }, { "text": "Gravitational Waves from Orphan Memory: Gravitational-wave memory manifests as a permanent distortion of an idealized\ngravitational-wave detector and arises generically from energetic astrophysical\nevents.
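The complexity/sparsity argument in the PSF record above is, at heart, a bias-variance decomposition of the model's mean squared error. The sketch below encodes only the schematic version: a variance term growing with the number of fitted parameters and, as in the power-law case the authors single out, a bias term falling with it. The amplitudes, exponent, and variance scaling are all hypothetical.

```python
# Schematic MSE tradeoff for a PSF model: variance rises with the number of
# parameters, bias falls as a power law (all coefficients hypothetical).
import numpy as np

def psf_mse(n_params, snr_star, n_star, bias_amp=1e-2, alpha=2.0):
    var = n_params / (n_star * snr_star**2)   # statistical scatter term
    bias = bias_amp * n_params**(-alpha)      # power-law bias term
    return var + bias**2

p = np.arange(1, 200)
for n_star in (50, 500):
    mse = psf_mse(p, snr_star=50.0, n_star=n_star)
    print(n_star, "stars -> optimal complexity:", p[np.argmin(mse)])
```

Running it shows the optimum shifting to higher complexity as more calibration stars become available, which is the qualitative content of the star-count versus sparsity relation in the record.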
For example, binary black hole mergers are expected to emit memory\nbursts a little more than an order of magnitude smaller in strain than the\noscillatory parent waves. We introduce the concept of \"orphan memory\":\ngravitational-wave memory for which there is no detectable parent signal. In\nparticular, high-frequency gravitational-wave bursts ($\\gtrsim$ kHz) produce\norphan memory in the LIGO/Virgo band. We show that Advanced LIGO measurements\ncan place stringent limits on the existence of high-frequency gravitational\nwaves, effectively increasing the LIGO bandwidth by orders of magnitude. We\ninvestigate the prospects for and implications of future searches for orphan\nmemory.", "category": "astro-ph_IM" }, { "text": "The Lexington Benchmarks for Numerical Simulations of Nebulae: We present the results of a meeting on numerical simulations of ionized\nnebulae held at the University of Kentucky in conjunction with the celebration\nof the 70th birthdays of Profs. Donald Osterbrock and Michael Seaton.", "category": "astro-ph_IM" }, { "text": "The KaVA and KVN Pulsar Project: We present our work towards using the Korean VLBI (Very Long Baseline\nInterferometer) Network (KVN) and VLBI Exploration of Radio Astronomy (VERA)\narrays combined into the KVN and VERA Array (KaVA) for observations of radio\npulsars at high frequencies ($\\simeq$22-GHz). Pulsar astronomy is generally\nfocused on frequencies from approximately 0.3 to several GHz, and pulsars are usually\ndiscovered and monitored with large, single-dish, radio telescopes. For most\npulsars, reduced radio flux is expected at high frequencies due to their steep\nspectrum, but there are exceptions where high-frequency observations can be\nuseful. Moreover, some pulsars are observable at high frequencies only, such as\nthose close to the Galactic Center. The discoveries of a radio-bright magnetar\nand a few dozen extended Chandra sources within 15 arc-minutes of the Galactic\nCenter provide strong motivations to make use of the KaVA frequency band for\nsearching for pulsars in this region. Here, we describe the science targets and\nreport the progress made from the KVN test observations for known pulsars. We\nthen discuss why KaVA pulsar observations are compelling.", "category": "astro-ph_IM" }, { "text": "Separating the EoR Signal with a Convolutional Denoising Autoencoder: A\n Deep-learning-based Method: When applying the foreground removal methods to uncover the faint\ncosmological signal from the epoch of reionization (EoR), the foreground\nspectra are assumed to be smooth. However, this assumption can be seriously\nviolated in practice since the unresolved or mis-subtracted foreground sources,\nwhich are further complicated by the frequency-dependent beam effects of\ninterferometers, will generate significant fluctuations along the frequency\ndimension. To address this issue, we propose a novel deep-learning-based method\nthat uses a 9-layer convolutional denoising autoencoder (CDAE) to separate the\nEoR signal. After being trained on SKA images simulated with realistic beam\neffects, the CDAE achieves excellent performance as the mean correlation\ncoefficient ($\\bar{\\rho}$) between the reconstructed and input EoR signals\nreaches $0.929 \\pm 0.045$.
In comparison, the two representative traditional\nmethods, namely the polynomial fitting method and the continuous wavelet\ntransform method, both have difficulties in modelling and removing the\nforeground emission complicated by the beam effects, yielding only\n$\\bar{\\rho}_{\\text{poly}} = 0.296 \\pm 0.121$ and $\\bar{\\rho}_{\\text{cwt}} =\n0.198 \\pm 0.160$, respectively. We conclude that, by hierarchically learning\nsophisticated features through multiple convolutional layers, the CDAE is a\npowerful tool that can be used to overcome the complicated beam effects and\naccurately separate the EoR signal. Our results also exhibit the great\npotential of deep-learning-based methods in future EoR experiments.", "category": "astro-ph_IM" }, { "text": "The Signal-to-Noise Ratio for Photon Counting After Photometric\n Corrections: Photon counting is a mode of processing astronomical observations of\nlow-signal targets that have been observed using an electron-multiplying\ncharge-coupled device (EMCCD). In photon counting, the EMCCD amplifies the\nsignal, and a thresholding technique effectively selects for the signal\nelectrons while drastically reducing relative noise sources. Photometric\ncorrections have been developed which result in the extraction of a more\naccurate estimate of the signal electrons, and the Nancy Grace Roman\nTelescope will utilize a theoretical expression for the signal-to-noise ratio\n(SNR) given these corrections, based on well-calibrated noise parameters, to plan\nobservations taken by its coronagraph instrument. I derive here analytic\nexpressions for the SNR for the method of photon counting, before and after\nthese photometric corrections have been applied.", "category": "astro-ph_IM" }, { "text": "Multi-Chroic Feed-Horn Coupled TES Polarimeters: Multi-chroic polarization-sensitive detectors offer an avenue to increase\nboth the spectral coverage and sensitivity of instruments optimized for\nobservations of the cosmic microwave background (CMB) or sub-mm sky. We report\non an effort to adapt the Truce Collaboration horn-coupled bolometric\npolarimeters for operation over an octave bandwidth. Development is focused on\ndetectors operating in both the 90 and 150 GHz bands, which offer the highest\nCMB polarization-to-foreground ratio. We plan to deploy an array of 256\nmulti-chroic 90/150 GHz polarimeters with 1024 TES detectors on ACTPol in 2013,\nand there are proposals to use this technology for balloon-borne instruments.\nThe combination of excellent control of beam systematics and sensitivity makes\nthis technology ideal for future ground, balloon, and space missions.", "category": "astro-ph_IM" }, { "text": "fastRESOLVE: fast Bayesian imaging for aperture synthesis in radio\n astronomy: The standard imaging algorithm for interferometric radio data, CLEAN, is\noptimal for point source observations, but suboptimal for diffuse emission.\nRecently, RESOLVE, a new Bayesian algorithm, has been developed, which is ideal\nfor extended source imaging. Unfortunately, RESOLVE is computationally very\nexpensive. In this paper we present fastRESOLVE, a modification of RESOLVE\nbased on an approximation of the interferometric likelihood that allows us to\navoid expensive gridding routines and consequently gain a factor of roughly 100\nin computation time. Furthermore, we include a Bayesian estimation of the\nmeasurement uncertainty of the visibilities into the imaging, a procedure not\napplied in aperture synthesis before.
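For the CDAE record above, the training setup matters more than the exact layer stack: the network sees total-emission maps and is regressed onto the EoR-only maps. The PyTorch sketch below is much shallower than the published 9-layer network and uses random tensors as stand-ins for the simulated SKA images; it only illustrates that setup.

```python
# Shallow stand-in for the CDAE: train on (foregrounds + EoR), regress onto
# the EoR maps. Architecture and data are placeholders.
import torch
import torch.nn as nn

class TinyCDAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = TinyCDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
fg = torch.randn(8, 1, 64, 64)            # stand-in: bright foreground maps
eor = 0.01 * torch.randn(8, 1, 64, 64)    # stand-in: faint EoR maps
for step in range(5):                     # a real run trains far longer
    opt.zero_grad()
    loss = loss_fn(model(fg + eor), eor)  # separate EoR from the total signal
    loss.backward()
    opt.step()
print(float(loss))
```

Evaluating such a model with the record's metric amounts to computing the correlation coefficient between the reconstructed and input EoR maps, e.g. via numpy.corrcoef on the flattened images.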
The algorithm requires little to no user\ninput compared to the standard method CLEAN while being superior for extended\nand faint emission. We apply the algorithm to VLA data of Abell 2199 and show\nthat it resolves more detailed structures.", "category": "astro-ph_IM" }, { "text": "Atacama Compact Array Correlator for Atacama Large\n Millimeter/submillimeter Array: We have developed an FX-architecture digital spectro-correlator, the Atacama\nCompact Array Correlator, for the Atacama Large Millimeter/submillimeter Array.\nThe ACA Correlator processes four pairs of dual polarization signals, whose\nbandwidth is 2 GHz, from up to sixteen antennas, and calculates auto- and\ncross-correlation spectra, including cross-polarization, in all combinations of\nsixteen antennas. We report the detailed design of the correlator and the\nverification results of the correlator hardware.", "category": "astro-ph_IM" }, { "text": "The P\u00e9gase.3 code of spectrochemical evolution of galaxies:\n documentation and complements: P\\'egase.3 is a Fortran 95 code modeling the spectral evolution of galaxies\nfrom the far-ultraviolet to submillimeter wavelengths. It also follows the\nchemical evolution of their stars, gas and dust.\n For a given scenario (a set of parameters defining the history of mass\nassembly, the star formation law, the initial mass function...), P\\'egase.3\nconsistently computes the following:\n * the star formation, infall, outflow and supernova rates from 0 to 20 Gyr;\n * the stellar metallicity, the abundances of main elements in the gas and the\ncomposition of dust;\n * the unattenuated stellar spectral energy distribution (SED);\n * the nebular SED, using nebular continua and emission lines precomputed with\nthe code Cloudy (Ferland et al. 2017);\n * the attenuation in star-forming clouds and the diffuse interstellar medium,\nby absorption and scattering on dust grains, of the stellar and nebular SEDs.\nFor this, the code uses grids of the transmittance for spiral and spheroidal\ngalaxies. We precomputed these grids through Monte Carlo simulations of\nradiative transfer based on the method of virtual interactions;\n * the re-emission by grains of the light they absorbed, taking into account\nstochastic heating.\n The main innovation compared to P\\'egase.2 is the modeling of dust emission\nand its evolution. The computation of nebular emission has also been entirely\nupgraded to take into account metallicity effects and infrared lines.\n Other major differences are that complex scenarios of evolution (derived for\ninstance from cosmological simulations), with several episodes of star\nformation, infall or outflow, may now be implemented, and that the detailed\nevolution of the most important elements -- not only the overall metallicity --\nis followed.", "category": "astro-ph_IM" }, { "text": "On-sky performance of the SPT-3G frequency-domain multiplexed readout: Frequency-domain multiplexing (fMux) is an established technique for the\nreadout of large arrays of transition edge sensor (TES) bolometers. Each TES in\na multiplexing module has a unique AC voltage bias that is selected by a\nresonant filter. This scheme enables the operation and readout of multiple\nbolometers on a single pair of wires, reducing thermal loading onto sub-Kelvin\nstages.
The current receiver on the South Pole Telescope, SPT-3G, uses a 68x\nfMux system to operate its large-format camera of $\\sim$16,000 TES bolometers.\nWe present here the successful implementation and performance of the SPT-3G\nreadout as measured on-sky. Characterization of the noise reveals a median\npair-differenced 1/f knee frequency of 33 mHz, indicating that low-frequency\nnoise in the readout will not limit SPT-3G's measurements of sky power on large\nangular scales. Measurements also show that the median readout white noise\nlevel in each of the SPT-3G observing bands is below the expectation for photon\nnoise, demonstrating that SPT-3G is operating in the photon-noise-dominated\nregime.", "category": "astro-ph_IM" }, { "text": "Pre-flight integration and characterization of the SPIDER balloon-borne\n telescope: We present the results of integration and characterization of the SPIDER\ninstrument after the 2013 pre-flight campaign. SPIDER is a balloon-borne\npolarimeter designed to probe the primordial gravitational wave signal in the\ndegree-scale $B$-mode polarization of the cosmic microwave background. With six\nindependent telescopes housing over 2000 detectors in the 94 GHz and 150 GHz\nfrequency bands, SPIDER will map 7.5% of the sky with a depth of 11 to 14\n$\\mu$K$\\cdot$arcmin at each frequency, which is a factor of $\\sim$5 improvement\nover Planck. We discuss the integration of the pointing, cryogenic,\nelectronics, and power sub-systems, as well as pre-flight characterization of\nthe detectors and optical systems. SPIDER is well prepared for a December 2014\nflight from Antarctica, and is expected to be limited by astrophysical\nforeground emission, and not instrumental sensitivity, over the survey region.", "category": "astro-ph_IM" }, { "text": "BICEP3: a 95 GHz refracting telescope for degree-scale CMB polarization: BICEP3 is a 550 mm-aperture refracting telescope for polarimetry of radiation\nin the cosmic microwave background at 95 GHz. It adopts the methodology of\nBICEP1, BICEP2 and the Keck Array experiments - it possesses sufficient\nresolution to search for signatures of the inflation-induced cosmic\ngravitational-wave background while utilizing a compact design for ease of\nconstruction and to facilitate the characterization and mitigation of\nsystematics. However, BICEP3 represents a significant breakthrough in\nper-receiver sensitivity, with a focal plane area 5$\\times$ larger than a\nBICEP2/Keck Array receiver and faster optics ($f/1.6$ vs. $f/2.4$).\nLarge-aperture infrared-reflective metal-mesh filters and infrared-absorptive\ncold alumina filters and lenses were developed and implemented for its optics.\nThe camera consists of 1280 dual-polarization pixels; each is a pair of\northogonal antenna arrays coupled to transition-edge sensor bolometers and read\nout by multiplexed SQUIDs. Upon deployment at the South Pole during the 2014-15\nseason, BICEP3 will have a survey speed comparable to that of Keck Array 150 GHz\n(2013), and will significantly enhance spectral separation of primordial B-mode power\nfrom that of possible galactic dust contamination in the BICEP2 observation\npatch.", "category": "astro-ph_IM" }, { "text": "The optimization of satellite orbit for Space-VLBI observation: By sending one or more telescopes into space, Space-VLBI (SVLBI) is able to\nachieve even higher angular resolution and is therefore the trend of the VLBI\ntechnique.
For an SVLBI program, the design of satellite orbits plays an important\nrole in the success of the planned observations. In this paper, we present our\norbit optimization scheme, so as to facilitate the design of satellite orbits\nfor SVLBI observations. To achieve that, we characterize the $uv$ coverage with\na measure index and minimize it by finding the corresponding orbit\nconfiguration. In this way, the design of the satellite orbit is converted into an\noptimization problem. We show that, with an appropriate global minimization\nmethod, the best orbit configuration can be found within a reasonable time.\nBesides that, we demonstrate that this scheme can be used for the scheduling of\nSVLBI observations.", "category": "astro-ph_IM" }, { "text": "HIDE & SEEK: End-to-End Packages to Simulate and Process Radio Survey\n Data: As several large single-dish radio surveys begin operation within the coming\ndecade, a wealth of radio data will become available and provide a new window\nto the Universe. In order to fully exploit the potential of these data sets, it\nis important to understand the systematic effects associated with the\ninstrument and the analysis pipeline. A common approach to tackle this is to\nforward-model the entire system - from the hardware to the analysis of the data\nproducts. For this purpose, we introduce two newly developed, open-source\nPython packages: the HI Data Emulator (HIDE) and the Signal Extraction and\nEmission Kartographer (SEEK) for simulating and processing single-dish radio\nsurvey data. HIDE forward-models the process of collecting astronomical radio\nsignals in a single-dish radio telescope instrument and outputs pixel-level\ntime-ordered data. SEEK processes the time-ordered data, removes artifacts from\nRadio Frequency Interference (RFI), automatically applies flux calibration, and\naims to recover the astronomical radio signal. The two packages can be used\nseparately or together depending on the application. Their modular and flexible\nnature allows easy adaptation to other instruments and data sets. We describe\nthe basic architecture of the two packages and examine in detail the noise and\nRFI modeling in HIDE, as well as the implementation of gain calibration and RFI\nmitigation in SEEK. We then apply HIDE & SEEK to forward-model a Galactic\nsurvey in the frequency range 990 - 1260 MHz based on data taken at the Bleien\nObservatory. For this survey, we expect to cover 70% of the full sky and\nachieve a median signal-to-noise ratio of approximately 5 - 6 in the cleanest\nchannels including systematic uncertainties. However, we also point out the\npotential challenges of high RFI contamination and baseline removal when\nexamining the early data from the Bleien Observatory.", "category": "astro-ph_IM" }, { "text": "An Extension of the Athena++ Framework for Fully Conservative\n Self-Gravitating Hydrodynamics: Numerical simulations of self-gravitating flows evolve a momentum equation\nand an energy equation that account for accelerations and gravitational energy\nreleases due to a time-dependent gravitational potential. In this work, we\nimplement a fully conservative numerical algorithm for self-gravitating flows,\nusing source terms, in the astrophysical magnetohydrodynamics framework\nAthena++. We demonstrate that properly evaluated source terms are conservative\nwhen they are equivalent to the divergence of a corresponding \"gravity flux\"\n(i.e., a gravitational stress tensor or a gravitational energy flux).
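The orbit-design record above reduces to a familiar pattern: define a scalar figure of merit on the uv coverage, then hand it to a global optimizer. The toy below does exactly that with a placeholder two-angle circular-orbit model and a placeholder coverage index (mean distance from uv-plane probe points to the nearest sampled point); neither is the paper's actual parametrization or measure index.

```python
# Orbit design as optimization, in toy form.
import numpy as np
from scipy.optimize import differential_evolution

def uv_tracks(inc, raan, n=200, radius=2.0):
    """uv points of a toy circular orbit projected on the sky."""
    phase = np.linspace(0.0, 2.0 * np.pi, n)
    x = radius * np.cos(phase)
    y = radius * np.sin(phase) * np.cos(inc)   # inclination foreshortening
    u = x * np.cos(raan) - y * np.sin(raan)
    v = x * np.sin(raan) + y * np.cos(raan)
    return np.column_stack([u, v])

def coverage_index(params):
    uv = uv_tracks(*params)
    gx, gy = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    d = np.linalg.norm(grid[:, None, :] - uv[None, :, :], axis=-1)
    return d.min(axis=1).mean()   # smaller = more uniform coverage

res = differential_evolution(coverage_index, [(0.0, np.pi), (0.0, np.pi)], seed=0)
print("best (inc, raan):", res.x, " index:", res.fun)
```

The design choice worth noting is the global optimizer: a coverage index over orbit parameters is generally multi-modal, which is why the record stresses an appropriate global minimization method rather than a local one.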
We\nprovide test problems that demonstrate several advantages of the\nsource-term-based algorithm, including second-order convergence and the\nconservation of total momentum and total energy to round-off error. The fully conservative\nscheme suppresses anomalous accelerations that arise when applying a common\nnumerical discretization of the gravitational stress tensor that does not\nguarantee curl-free gravity.", "category": "astro-ph_IM" }, { "text": "Performance of multi-detector hybrid statistic in targeted compact\n binary coalescence search: In this paper we compare the performance of two likelihood-ratio-based\ndetection statistics, namely the maximum likelihood ratio statistic and the {\\it hybrid}\nstatistic, designed for the detection of gravitational waves from compact binary\ncoalescence using multiple interferometric detector networks. We perform\nsimulations with non-spinning double neutron star binary systems and neutron\nstar-black hole binary systems with spinning as well as non-spinning black hole\ncomponents. The binary injections are distributed uniformly in volume up\nto 1 Gpc. We observe that, on average, the maximum likelihood ratio statistic\nrecovers $\\sim 34.45\\%$, $\\sim 49.69\\%$, $\\sim 61.25\\%$ and $\\sim 69.67\\%$ of\ninjections in 2, 3, 4 and 5 detector networks, respectively, in the case of\nneutron star-black hole injections for a fixed false alarm probability of\n$10^{-7}$ in Gaussian noise. Further, we note that, compared to the maximum\nlikelihood ratio statistic, the {\\it hybrid} statistic recovers $\\sim 7.45\\%$,\n$\\sim 4.57\\%$, $\\sim 2.56\\%$ and $\\sim 1.22\\%$ more injections in 2, 3, 4 and 5\ndetector networks, respectively, for the same false alarm probability in Gaussian\nnoise. On the other hand, among binary neutron star injections, the maximum\nlikelihood ratio statistic recovers $\\sim 5.587\\%$, $\\sim 9.917\\%$, $\\sim\n14.73\\%$ and $\\sim 19.86\\%$ of injections in 2, 3, 4 and 5 detector networks,\nrespectively, and the {\\it hybrid} statistic recovers $\\sim 14.63\\%$, $\\sim\n12.91\\%$, $\\sim 11.49\\%$ and $\\sim 10.29\\%$ more injections compared to the maximum\nlikelihood ratio statistic in 2, 3, 4 and 5 detector networks, respectively.", "category": "astro-ph_IM" }, { "text": "A Joint Roman Space Telescope and Rubin Observatory Synthetic Wide-Field\n Imaging Survey: We present and validate 20 deg$^2$ of overlapping synthetic imaging surveys\nrepresenting the full depth of the Nancy Grace Roman Space Telescope\nHigh-Latitude Imaging Survey (HLIS) and five years of observations of the Vera\nC. Rubin Observatory Legacy Survey of Space and Time (LSST). The two synthetic\nsurveys are summarized, with reference to the existing 300 deg$^2$ of LSST\nsimulated imaging produced as part of Dark Energy Science Collaboration (DESC)\nData Challenge 2 (DC2). Both synthetic surveys observe the same simulated DESC\nDC2 universe. For the synthetic Roman survey, we simulate for the first time\nfully chromatic images along with the detailed physics of the Sensor Chip\nAssemblies derived from lab measurements using the flight detectors. The\nsimulated imaging and resulting pixel-level measurements of photometric\nproperties of objects span a wavelength range of $\\sim$0.3 to 2.0 $\\mu$m. We\nalso describe updates to the Roman simulation pipeline, changes in how\nastrophysical objects are simulated relative to the original DC2 simulations,\nand the resulting simulated Roman data products.
We use these simulations to\nexplore the relative fraction of unrecognized blends in LSST images, finding\nthat 20-30% of objects identified in LSST images with $i$-band magnitudes\nbrighter than 25 can be identified as multiple objects in Roman images. These\nsimulations provide a unique testing ground for the development and validation\nof joint pixel-level analysis techniques of ground- and space-based imaging\ndata sets in the second half of the 2020s -- in particular the case of joint\nRoman--LSST analyses.", "category": "astro-ph_IM" }, { "text": "Kinematic Modelling of Disc Galaxies using Graphics Processing Units: With large-scale Integral Field Spectroscopy (IFS) surveys of thousands of\ngalaxies currently underway or planned, the astronomical community is in need\nof methods, techniques and tools that will allow the analysis of huge amounts\nof data. We focus on the kinematic modelling of disc galaxies and investigate\nthe potential use of massively parallel architectures, such as the Graphics\nProcessing Unit (GPU), as an accelerator for the computationally expensive\nmodel-fitting procedure. We review the algorithms involved in model-fitting and\nevaluate their suitability for GPU implementation. We employ different\noptimization techniques, including the Levenberg-Marquardt and Nested Sampling\nalgorithms, but also a naive brute-force approach based on Nested Grids. We\nfind that the GPU can accelerate the model-fitting procedure up to a factor of\n~100 when compared to a single-threaded CPU, and up to a factor of ~10 when\ncompared to a multi-threaded dual CPU configuration. Our method's accuracy,\nprecision and robustness are assessed by successfully recovering the kinematic\nproperties of simulated data, and also by verifying the kinematic modelling\nresults of galaxies from the GHASP and DYNAMO surveys as found in the\nliterature. The resulting GBKFIT code is available for download from:\nhttp://supercomputing.swin.edu.au/gbkfit.", "category": "astro-ph_IM" }, { "text": "A proposal for relative in-flight flux self-calibrations for\n spectro-photometric surveys: We present a method for the in-flight relative flux self-calibration of a\nspectro-photometer instrument, general enough to be applied to any upcoming\ngalaxy survey on a satellite. The instrument response function, which accounts for\na smooth continuous variation due to the telescope optics, on top of a\ndiscontinuous effect due to the segmentation of the detector, is inferred with\n$\\chi^2$ statistics. The method provides unbiased inference of the source\ncount rates and of the reconstructed relative response function, in the limit\nof high count rates. We simulate a simplified sequence of observations\nfollowing a spatial random pattern and realistic distributions of sources and\ncount rates, with the purpose of quantifying the relative importance of the\nnumber of sources and exposures for correctly reconstructing the instrument\nresponse.
We present a validation of the method, with the definition of figures\nof merit to quantify the expected performance, in plausible scenarios.", "category": "astro-ph_IM" }, { "text": "Target Detection Framework for Lobster Eye X-Ray Telescopes with Machine\n Learning Algorithms: Lobster eye telescopes are ideal monitors to detect X-ray transients, because\nthey can observe celestial objects over a wide field of view in the X-ray band.\nHowever, images obtained by lobster eye telescopes are modified by their unique\npoint spread functions, making it hard to design a high-efficiency target\ndetection algorithm. In this paper, we integrate several machine learning\nalgorithms to build a target detection framework for data obtained by lobster\neye telescopes. Our framework first generates two 2D images with\ndifferent pixel scales according to the positions of photons on the detector. Then,\nan algorithm based on morphological operations and two neural networks is\nused to detect candidates of celestial objects with different fluxes from these\n2D images. Finally, a random forest algorithm is used to pick up the final\ndetection results from the candidates obtained in the previous steps. Tested with\nsimulated data of the Wide-field X-ray Telescope onboard the Einstein Probe,\nour detection framework can achieve over 94% purity and over 90% completeness\nfor targets with fluxes of more than 3 mCrab (9.6 * 10^{-11} erg/cm^2/s), and more than\n94% purity and moderate completeness for targets with lower fluxes, at an acceptable\ntime cost. The framework proposed in this paper could be used as a reference for\ndata processing methods developed for other lobster eye X-ray telescopes.", "category": "astro-ph_IM" }, { "text": "Searching for high-energy neutrinos in coincidence with gravitational\n waves with the ANTARES and VIRGO/LIGO detectors: Cataclysmic cosmic events can be plausible sources of both gravitational\nwaves (GW) and high-energy neutrinos (HEN). Both GW and HEN are alternative\ncosmic messengers that may escape very dense media and travel unaffected over\ncosmological distances, carrying information from the innermost regions of the\nastrophysical engines. For the same reasons, such messengers could also reveal\nnew, hidden sources that were not observed by conventional photon astronomy.\n Requiring consistency between the GW and HEN detection channels enables\nnew searches, as one has significant additional information about the common\nsource. A neutrino telescope such as ANTARES can accurately determine the time\nand direction of high-energy neutrino events, while a network of gravitational\nwave detectors such as LIGO and VIRGO can also provide timing/directional\ninformation for gravitational wave bursts. By combining the information from\nthese totally independent detectors, one can search for cosmic events that may\narrive from common astrophysical sources.", "category": "astro-ph_IM" }, { "text": "A Laser Frequency Comb System for Absolute Calibration of the VTT\n Echelle Spectrograph: A wavelength calibration system based on a laser frequency comb (LFC) was\ndeveloped in a co-operation between the Kiepenheuer-Institut f\\\"ur\nSonnenphysik, Freiburg, Germany, and the Max-Planck-Institut f\\\"ur Quantenoptik,\nGarching, Germany, for permanent installation at the German Vacuum Tower\nTelescope (VTT) on Tenerife, Canary Islands. The system was installed\nsuccessfully in October 2011.
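The first, morphology-based stage of the lobster-eye detection framework described above can be sketched with standard tools: bin the photon list into an image, threshold, dilate to merge the cruciform arms of the lobster-eye PSF, and keep labelled components above a crude flux cut. The neural-network and random-forest stages are not reproduced, and every number below is invented.

```python
# First-stage candidate detection (morphology only).
import numpy as np
from scipy import ndimage

def detect_candidates(photon_xy, shape=(256, 256), nsigma=5.0, min_counts=10.0):
    img, _, _ = np.histogram2d(photon_xy[:, 0], photon_xy[:, 1], bins=shape,
                               range=[[0, shape[0]], [0, shape[1]]])
    mask = img > img.mean() + nsigma * img.std()
    mask = ndimage.binary_dilation(mask, iterations=2)  # merge PSF arms
    labels, n = ndimage.label(mask)
    sums = ndimage.sum(img, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sums) if s >= min_counts]
    return ndimage.center_of_mass(img, labels, keep)

rng = np.random.default_rng(3)
bkg = rng.uniform(0, 256, (5000, 2))       # diffuse background photons
src = rng.normal(128.0, 1.5, (300, 2))     # one bright point-like source
print(detect_candidates(np.vstack([bkg, src])))   # ~[(128, 128)]
```

In the full framework this candidate list would then be scored by the neural networks and vetted by the random forest; here the flux cut plays that role crudely.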
By simultaneously recording the spectra of the\nSun and the LFC, a calibration curve suitable for absolute calibration at the\nmeter-per-second level can be derived for each exposure from the known\nfrequencies of the comb modes. We briefly summarize some topics in solar\nphysics that benefit from absolute spectroscopy and point out the advantages of\nthe LFC compared to traditional calibration techniques. We also sketch the basic\nsetup of the VTT calibration system and its integration with the existing\nechelle spectrograph.", "category": "astro-ph_IM" }, { "text": "A Novel Technique to Observe Rapidly Pulsating Objects Using Spectral\n Wave-Interaction Effects: Conventional techniques that measure rapid time variations are inefficient or\ninadequate to discover and observe rapidly pulsating astronomical sources. It\nis therefore conceivable that there exist some classes of objects pulsating\nwith extremely short periods that have not yet been discovered. This article\nstarts from the fact that rapid flux variations generate a spectral modulation\nthat can be detected in the beat spectrum of the output current fluctuations of\na quadratic detector. The telescope could observe at any frequency, although\nlower frequencies would have the advantage of lower photon noise. The\ntechnique would allow us to find and observe extremely fast time variations,\nopening up a new time window in Astronomy. The current fluctuation technique,\nlike intensity interferometers, uses second-order correlation effects and fits\ninto the current renewal of interest in intensity interferometry. An\ninteresting aspect it shares with intensity interferometry is that it can use\ninexpensive large telescopes that have low-quality mirrors, like Cherenkov\ntelescopes. It has other advantages over conventional techniques that measure\ntime variations, foremost of which is its simplicity. Consequently, it could be\nused for extended monitoring of astronomical sources, something that is\ndifficult to do with conventional telescopes. Arguably, the most interesting\nscientific justification for the technique comes from serendipity.", "category": "astro-ph_IM" }, { "text": "A Cryogenic Integrated Noise Calibration and Coupler Module Using a MMIC\n LNA: A new cryogenic noise calibration source for radio astronomy receivers is\npresented. Dissipated power is only 4.2 mW, allowing it to be integrated with\nthe cold part of the receiver. Measured long-term stability, sensitivity to\nbias voltages, and noise power output versus frequency are presented. The\nmeasured noise output versus frequency is compared to a warm noise diode\ninjected into a cryogenic K-band receiver and shows the integrated noise module\nto have less frequency structure, which will result in more accurate\nastronomical flux calibrations. It is currently in operation on the new\n7-element K-band focal plane array receiver on the NRAO Robert C. Byrd Green\nBank Telescope (GBT).", "category": "astro-ph_IM" }, { "text": "Deep learning method for identifying mass composition of\n ultra-high-energy cosmic rays: We introduce a novel method for identifying the mass composition of\nultra-high-energy cosmic rays using deep learning. The key idea of the method\nis to use a chain of two neural networks. The first network predicts the type\nof a primary particle for individual events, while the second infers the mass\ncomposition of an ensemble of events.
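To make the two-stage idea concrete, the sketch below implements a minimal stand-in for the second stage: given per-event labels from a hypothetical first-stage classifier with a known migration matrix C, the ensemble composition is recovered from the observed label histogram by a constrained linear solve. This is a generic unfolding illustration under invented numbers, not the paper's networks.

```python
import numpy as np
from scipy.optimize import nnls

# Stand-in for stage two of a two-network chain: infer ensemble composition
# from per-event predictions. C[p, t] = P(predicted class p | true class t)
# plays the role of the first network's (assumed known) migration matrix.
rng = np.random.default_rng(0)
C = np.array([[0.80, 0.15, 0.05, 0.02],
              [0.15, 0.70, 0.15, 0.08],
              [0.04, 0.12, 0.65, 0.25],
              [0.01, 0.03, 0.15, 0.65]])   # columns sum to one
f_true = np.array([0.5, 0.2, 0.2, 0.1])    # e.g. p, He, N, Fe fractions

# Simulate 10^4 events: draw a true species, then a predicted label.
true = rng.choice(4, size=10_000, p=f_true)
pred = np.array([rng.choice(4, p=C[:, t]) for t in true])
h = np.bincount(pred, minlength=4) / len(pred)

# The observed label histogram satisfies h ~ C @ f; solve with non-negativity.
f_hat, _ = nnls(C, h)
f_hat /= f_hat.sum()
print("inferred fractions:", np.round(f_hat, 3), "true:", f_true)
```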
We apply this method to the Monte Carlo\ndata for the Telescope Array Surface Detector readings, on which it yields an\nunprecedentedly low error of 7% for a 4-component approximation. We also discuss\nthe problems of applying the developed method to the experimental data, and the\nways in which they can be resolved.", "category": "astro-ph_IM" }, { "text": "Aerosol characterization using satellite remote sensing of light\n pollution sources at night: A demanding challenge in atmospheric research is the night-time\ncharacterization of aerosols using passive techniques, that is, by extracting\ninformation from scattered light that has not been emitted by the observer.\nSatellite observations of artificial night-time lights have been used to\nretrieve some basic integral parameters, like the aerosol optical depth.\nHowever, a thorough analysis of the scattering processes allows one to obtain\nsubstantially more detailed information on aerosol properties. In this Letter\nwe demonstrate a practicable approach for determining the aerosol particle size\nnumber distribution function in the air column, based on the measurement of the\nangular radiance distribution of the scattered light emitted by night-time\nlights of cities and towns, recorded from low Earth orbit. The method is\nself-calibrating and does not require the knowledge of the absolute city\nemissions. The input radiance data are readily available from several\nspaceborne platforms, like the VIIRS-DNB radiometer onboard the Suomi-NPP\nsatellite.", "category": "astro-ph_IM" }, { "text": "Wavefront error tolerancing for direct imaging of exo-Earths with a\n large segmented telescope in space: Direct imaging of exo-Earths and the search for life is one of the most\nexciting and challenging objectives for future space observatories. Segmented\napertures in space will be required to reach the needed large diameters beyond\nthe capabilities of current or planned launch vehicles. These apertures present\nadditional challenges for high-contrast coronagraphy, not only in terms of\nstatic phasing but also in terms of their stability. The Pair-based Analytical\nmodel for Segmented Telescope Imaging from Space (PASTIS) was developed to\nmodel the effects of segment-level optical aberrations on the final image\ncontrast. In this paper, we extend the original PASTIS propagation model from a\npurely analytical to a semi-analytical method, in which we substitute the use\nof analytical images with numerically simulated images. The inversion of this\nmodel yields a set of orthonormal modes that can be used to determine\nsegment-level wavefront tolerances. We present results in the case of\nsegment-level piston error applied to the baseline coronagraph design of LUVOIR\nA, with minimum and maximum wavefront error constraints between 56 pm and 290 pm\nper segment. The analysis is readily generalizable to other segment-level\naberration modes, and can also be expanded to establish stability tolerances\nfor these missions.", "category": "astro-ph_IM" }, { "text": "A Visible-light Lyot Coronagraph for SCExAO/VAMPIRES: We describe the design and initial results from a visible-light Lyot\ncoronagraph for SCExAO/VAMPIRES. The coronagraph comprises four hard-edged,\npartially transmissive focal plane masks with inner working angles of 36 mas,\n55 mas, 92 mas, and 129 mas. The Lyot stop is a reflective, undersized design\nwith a geometric throughput of 65.7%. Our preliminary on-sky contrast ranges\nfrom 1e-2 at 0.1\" to 1e-4 at 0.75\" for all mask sizes.
The coronagraph was deployed in early 2022 and is available for open\nuse.", "category": "astro-ph_IM" }, { "text": "SST-GATE: A dual mirror telescope for the Cherenkov Telescope Array: The Cherenkov Telescope Array (CTA) will be the world's first open\nobservatory for very high energy gamma-rays. Around a hundred telescopes of\ndifferent sizes will be used to detect the Cherenkov light that results from\ngamma-ray induced air showers in the atmosphere. Amongst them, a large number\nof Small Size Telescopes (SST), with a diameter of about 4 m, will ensure\nunprecedented coverage of the high energy end of the electromagnetic spectrum\n(above ~1 TeV to beyond 100 TeV) and will open up a new window on the\nnon-thermal sky. Several concepts for the SST design are currently being\ninvestigated with the aim of combining a large field of view (~9 degrees) with\na good resolution of the shower images, as well as minimizing costs. These\ninclude a Davies-Cotton configuration with a Geiger-mode avalanche photodiode\n(GAPD) based camera, as pioneered by FACT, and a novel and as yet untested\ndesign based on the Schwarzschild-Couder configuration, which uses a secondary\nmirror to reduce the plate-scale and to allow for a wide field of view with a\nlight-weight camera, e.g. using GAPDs or multi-anode photomultipliers. One\nobjective of the GATE (Gamma-ray Telescope Elements) programme is to build one\nof the first Schwarzschild-Couder prototypes and to evaluate its performance.\nThe construction of the SST-GATE prototype on the campus of the Paris\nObservatory in Meudon is under way. We report on the current status of the\nproject and provide details of the opto-mechanical design of the prototype, the\ndevelopment of its control software, and simulations of its expected\nperformance.", "category": "astro-ph_IM" }, { "text": "Synthetic tracking using ZTF Long Dwell Datasets: The Zwicky Transient Facility (ZTF) is a powerful time-domain survey facility\nwith a large field of view. We apply the synthetic tracking technique to\nintegrate a ZTF long-dwell dataset, which consists of 133 nominal 30-second\nexposure frames spanning about 1.5 hours, to search for slowly moving asteroids\ndown to approximately 23rd magnitude. We found more than one thousand objects\nfrom searching 40 CCD-quadrant subfields, each of which covers a field size of\n$\\sim$0.73 deg$^2$. While most of the objects are main-belt asteroids, there\nare also asteroids belonging to the Trojan, Hilda, Hungaria, and Phocaea\nfamilies, as well as near-Earth asteroids. Such an approach is effective and\nproductive. Here we report the data processing and results; the shift-and-stack\nidea at the core of synthetic tracking is sketched below.", "category": "astro-ph_IM" }, { "text": "PESummary: the code agnostic Parameter Estimation Summary page builder: PESummary is a Python software package for processing and visualising data\nfrom any parameter estimation code. The easy-to-use Python executable scripts\nand extensive online documentation have resulted in PESummary becoming a key\ncomponent in the international gravitational-wave analysis toolkit. PESummary\nhas been developed to be more than just a post-processing tool, with all outputs\nfully self-contained. PESummary has become central to making gravitational-wave\ninference analysis open and easily reproducible.", "category": "astro-ph_IM" }, { "text": "Gaussian phase autocorrelation as an accurate compensator for FFT-based\n atmospheric phase screen simulations: Accurately simulating the atmospheric turbulence behaviour is always\nchallenging.
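For readers unfamiliar with the synthetic tracking technique referenced in the ZTF abstract above, the toy below co-adds noisy frames along trial sky motions so that a mover far below the single-frame detection limit adds up coherently at the correct trial velocity. The data, geometry and thresholds are all invented for illustration.

```python
import numpy as np

# Shift-and-stack toy: a moving point source with per-frame SNR ~ 1 becomes
# detectable when frames are co-added along the correct trial velocity.
rng = np.random.default_rng(2)
n_frames, size, dt = 30, 64, 30.0          # frames, image size, seconds/frame
vx_true, vy_true = 0.02, -0.01             # pixels per second

frames = rng.normal(0.0, 1.0, (n_frames, size, size))
for k in range(n_frames):
    x = int(round(32 + vx_true * k * dt))
    y = int(round(32 + vy_true * k * dt))
    frames[k, y, x] += 1.0                 # invisible in any single frame

def peak_significance(vx, vy):
    """Co-add frames shifted along a trial velocity; return the peak in sigma."""
    acc = np.zeros((size, size))
    for k in range(n_frames):
        sy = int(round(vy * k * dt))
        sx = int(round(vx * k * dt))
        acc += np.roll(frames[k], (-sy, -sx), axis=(0, 1))
    return acc.max() / acc.std()

grid = np.linspace(-0.03, 0.03, 13)        # trial velocities [px/s]
best = max((peak_significance(vx, vy), vx, vy) for vx in grid for vy in grid)
print("best trial velocity (px/s):", best[1:], "peak [sigma]: %.1f" % best[0])
```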
The well-known FFT-based method falls short in correctly\npredicting both the low- and high-frequency behaviours. Sub-harmonic\ncompensation aids in low-frequency correction, but does not solve the problem\nfor all ratios of screen size to outer scale parameter (G/$L_0$): FFT-based\nsimulation gives accurate results only for relatively large values of G/$L_0$.\nIn this work, we have introduced a Gaussian phase autocorrelation matrix to\ncompensate for the residual errors that remain after applying a modified\nsubharmonic compensation. With this, we have solved problems such as\nundersampling in the high-frequency range, unequal sampling/weights for the\nsubharmonic addition in the low-frequency range, and the patch normalization\nfactor. Our approach reduces the maximum error in the simulated phase structure\nfunction with respect to the theoretical prediction to within 1.8\% for\nG/$L_0$ = 1/1000. A sketch of the baseline FFT generator that these corrections\nimprove upon is given below.", "category": "astro-ph_IM" }, { "text": "The Near Infrared Imager and Slitless Spectrograph for the James Webb\n Space Telescope -- IV. Aperture Masking Interferometry: The James Webb Space Telescope's Near Infrared Imager and Slitless\nSpectrograph (JWST-NIRISS) flies a 7-hole non-redundant mask (NRM), the first\nsuch interferometer in space, operating at 3-5 \\micron~wavelengths, with a\nbright limit of $\\simeq 4$ magnitudes in W2. We describe the NIRISS Aperture\nMasking Interferometry (AMI) mode to help potential observers understand its\nunderlying principles, present some sample science cases, explain its\noperational observing strategies, indicate how AMI proposals can be developed\nwith data simulations, and how AMI data can be analyzed. We also present key\nresults from commissioning AMI. Since the allied Kernel Phase Imaging (KPI)\ntechnique benefits from AMI operational strategies, we also cover NIRISS KPI\nmethods and analysis techniques, including a new user-friendly KPI pipeline.\nThe NIRISS KPI bright limit is $\\simeq 8$ W2 magnitudes. AMI (and KPI) achieve\nan inner working angle of $\\sim 70$ mas that is well inside the $\\sim 400$ mas\nNIRCam inner working angle for its circular occulter coronagraphs at comparable\nwavelengths.", "category": "astro-ph_IM" }, { "text": "On Surface Brightness and Flux Calibration for Point and Compact\n Extended Sources in the AKARI Far-IR All-Sky Survey (AFASS) Maps: The AKARI Infrared Astronomical Satellite produced the all-sky survey (AFASS)\nmaps in the far-IR at roughly arc-minute spatial resolution, enabling us to\ninvestigate the whole sky in the far-IR for objects having surface brightnesses\ngreater than a few to a couple of dozen MJy/sr. While the AFASS maps are\nabsolutely calibrated against large-scale diffuse emission, it was uncertain\nwhether or not an additional flux correction for point sources was necessary.\nHere, we verify that the calibration for point-source photometry in the AFASS\nmaps is proper. With the aperture correction method based on the empirical\npoint-spread-function templates derived directly from the AFASS maps, fluxes in\nthe AKARI bright source catalogue (BSC) are reproduced. The AKARI BSC fluxes\nare also satisfactorily recovered with the 1 sigma aperture, which is the\nempirical equivalent of an infinite aperture.
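The baseline FFT phase-screen generator referenced in the simulation abstract above can be sketched as follows. This is the standard von Kármán recipe whose low-frequency deficit motivates subharmonic and autocorrelation corrections (not shown here); grid sizes and turbulence parameters are illustrative.

```python
import numpy as np

# Standard FFT phase screen with a von Karman spectrum. Low spatial
# frequencies are under-represented on a finite grid, which is the deficit
# that subharmonic / autocorrelation compensation schemes address.
def fft_phase_screen(n=256, dx=0.02, r0=0.15, L0=25.0, seed=0):
    """n x n phase screen [rad]; dx pixel size [m], r0 Fried parameter [m]."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=dx)                    # spatial frequency [1/m]
    fxx, fyy = np.meshgrid(fx, fx)
    f2 = fxx**2 + fyy**2
    # von Karman phase PSD: 0.023 r0^(-5/3) (f^2 + 1/L0^2)^(-11/6)
    psd = 0.023 * r0 ** (-5.0 / 3.0) * (f2 + 1.0 / L0**2) ** (-11.0 / 6.0)
    psd[0, 0] = 0.0                                 # remove piston
    df = 1.0 / (n * dx)
    c = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) * np.sqrt(psd) * df
    return np.real(np.fft.ifft2(c)) * n**2          # DFT synthesis convention

screen = fft_phase_screen()
print("phase RMS [rad]: %.2f" % screen.std())
```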
These results confirm that far-IR photometry can be properly performed in the\nAFASS maps by using the aperture correction method for point sources, and by\nsumming all pixel values within an appropriately defined aperture around the\nintended target (i.e., the aperture photometry method) for extended sources.", "category": "astro-ph_IM" }, { "text": "Waveguide-Type Multiplexer for Multiline Observation of Atmospheric\n Molecules using Millimeter-Wave Spectroradiometer: In order to better understand the variation mechanism of ozone abundance in\nthe middle atmosphere, the simultaneous monitoring of ozone and other minor\nmolecular species, which are related to ozone depletion, is the most\nfundamental and critical method. A waveguide-type multiplexer was developed to\nexpand the observation frequency range of a millimeter-wave spectroradiometer\nfor the simultaneous observation of multiple molecular spectral lines. The\nproposed multiplexer contains a cascaded four-stage sideband-separating filter\ncircuit. The waveguide circuit was designed based on electromagnetic analysis,\nand the pass frequency bands of Stages 1-4 were 243-251 GHz, 227-235 GHz,\n197-205 GHz, and 181-189 GHz. The insertion and return losses of the\nmultiplexer were measured using vector network analyzers; each observation band\nwas well-defined, and the bandwidths were appropriately specified. Moreover,\nthe receiver noise temperature and the image rejection ratio (IRR) using the\nsuperconducting mixer at 4 K were measured. As a result, the increase in\nreceiver noise due to the multiplexer, compared with that of the mixer alone,\ncan be attributed to the transmission loss of the waveguide circuit in the\nmultiplexer. The IRRs were higher than 25 dB at the center of each observation\nband. This indicates that a high and stable IRR performance can be achieved by\nthe waveguide-type multiplexer for the separation of sideband signals.", "category": "astro-ph_IM" }, { "text": "Gaia Early Data Release 3. Building the Gaia DR3 source list --\n Cross-match of Gaia observations: The Gaia Early Data Release 3 (Gaia EDR3) contains results derived from 78\nbillion individual field-of-view transits of 2.5 billion sources collected by\nthe European Space Agency's Gaia mission during its first 34 months of\ncontinuous scanning of the sky. We describe the input data, which have the form\nof onboard detections, and the modeling and processing involved in\ncross-matching these detections to sources. For the cross-match, we formed\nclusters of detections that were all linked to the same physical light source\non the sky. As a first step, onboard detections that were deemed spurious were\ndiscarded. The remaining detections were then preliminarily associated with one\nor more sources in the existing source list in an observation-to-source match.\nAll candidate matches that directly or indirectly were associated with the same\nsource form a match candidate group. The detections from the same group were\nthen subject to a cluster analysis. Each cluster was assigned a source\nidentifier that normally was the same as the identifiers from Gaia DR2. Because\nthe number of individual detections is very high, we also describe the\nefficient organisation of the processing. We present results and statistics for\nthe final cross-match with particular emphasis on the more complicated cases\nthat are relevant for the users of the Gaia catalogue.
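The grouping of candidate matches into "match candidate groups" described above is, at its core, a connected-components problem. A minimal, hypothetical illustration using union-find over invented detection-source links:

```python
# Minimal union-find illustration of forming "match candidate groups":
# detections and sources are nodes, candidate matches are edges, and each
# connected component is one group. The links below are invented toy data.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]   # path halving
            a = self.parent[a]
        return a

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

n_det, n_src = 5, 5
links = [(0, 0), (1, 0), (1, 1), (2, 2), (3, 3), (4, 3), (4, 4)]
uf = UnionFind(n_det + n_src)            # sources are indexed after detections
for d, s in links:
    uf.union(d, n_det + s)

groups = {}
for d, s in links:
    root = uf.find(d)
    groups.setdefault(root, set()).update({("det", d), ("src", s)})
for members in groups.values():
    print(sorted(members))               # one match candidate group per line
```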
We describe the\nimprovements over the earlier Gaia data releases, in particular for stars of\nhigh proper motion, for the brightest sources, for variable sources, and for\nclose source pairs.", "category": "astro-ph_IM" }, { "text": "Cleaning radio interferometric images using a spherical wavelet\n decomposition: The deconvolution, or cleaning, of radio interferometric images often\ninvolves computing model visibilities from a list of clean components, in order\nthat the contribution from the model can be subtracted from the observed\nvisibilities. This step is normally performed using a forward fast Fourier\ntransform (FFT), followed by a 'degridding' step that interpolates over the uv\nplane to construct the model visibilities. An alternative approach is to\ncalculate the model visibilities directly by summing over all the members of\nthe clean component list, which is a more accurate method that can also be much\nslower. However, if the clean components are used to construct a model image on\nthe surface of the celestial sphere, then the model visibilities can be\ngenerated directly from the wavelet coefficients, and the sparsity of the model\nmeans that most of these coefficients are zero, and can be ignored. We have\nconstructed a prototype imager that uses a spherical-wavelet representation of\nthe model image to generate model visibilities during each major cycle, and\nfind empirically that the execution time scales with the wavelet resolution\nlevel, J, as $O(1.07^J)$, and with the number of distinct clean components,\n$N_C$, as $O(N_C)$. The prototype organises the wavelet coefficients into a tree\nstructure, and does not store or process the zero wavelet coefficients.", "category": "astro-ph_IM" }, { "text": "Human Contrast Threshold and Astronomical Visibility: The standard visibility model in light pollution studies is the formula of\nHecht (1947), as used e.g. by Schaefer (1990). However, it is applicable only to\npoint sources and is shown to be of limited accuracy. A new visibility model is\npresented for uniform achromatic targets of any size against background\nluminances ranging from zero to full daylight, produced by a systematic\nprocedure applicable to any appropriate data set (e.g. Blackwell 1946), and\nbased on a simple but previously unrecognized empirical relation between\ncontrast threshold and adaptation luminance. The scotopic luminance correction\nfor variable spectral radiance (colour index) is calculated. For point sources\nthe model is more accurate than Hecht's formula and is verified using\ntelescopic data collected at Mount Wilson by Bowen (1947), enabling the sky\nbrightness at that time to be determined. The result is darker than the\ncalculation by Garstang (2004), implying that light pollution grew more rapidly\nin subsequent decades than has been supposed. The model is applied to the\nnebular observations of William Herschel, enabling his visual performance to be\nquantified. Proposals are made regarding sky quality indicators for public use.", "category": "astro-ph_IM" }, { "text": "Design and Implementation of the wvrgcal Program: This memo describes the software engineering and technical details of the\ndesign and implementation of the wvrgcal program and associated libraries. This\nprogram performs off-line correction of atmospheric phase fluctuations in ALMA\nobservations, using the 183 GHz Water Vapour Radiometers (WVRs) installed on\nthe ALMA 12 m dishes.
The memo can be used as a guide for detailed study of the\nsource code of the program for purposes of further development or maintenance.", "category": "astro-ph_IM" }, { "text": "Upgrade of the VERITAS Cherenkov Telescope Array: The VERITAS Cherenkov telescope array has been fully operational since Fall\n2007 and has met or outperformed its design specifications. We are\npreparing an upgrade program with the goal of lowering the energy threshold and\nimproving the sensitivity of VERITAS at all accessible energies. In the baseline\nprogram of the upgrade we will relocate one of the four telescopes, replace the\nphoto-sensors with higher-efficiency photomultipliers and install a new trigger\nsystem. In the enhanced program of the upgrade we foresee, in addition, the\nconstruction of a fifth telescope and installation of an active mirror\nalignment system.", "category": "astro-ph_IM" }, { "text": "Laboratory gas-phase infrared spectra of two astronomically relevant PAH\n cations: diindenoperylene, C$_{32}$H$_{16}$$^+$ and dicoronylene,\n C$_{48}$H$_{20}$$^+$: The first gas-phase infrared spectra of two isolated astronomically relevant\nand large PAH cations - diindenoperylene (DIP) and dicoronylene (DC) - in the\n530$-$1800 cm$^{-1}$ (18.9$-$5.6 $\\mu$m) range - are presented. Vibrational\nband positions are determined for comparison to the aromatic infrared bands\n(AIBs). The spectra are obtained via infrared multiphoton dissociation (IRMPD)\nspectroscopy of ions stored in a quadrupole ion trap (QIT) using the intense\nand tunable radiation of the free electron laser for infrared experiments\n(FELIX). DIP$^{+}$ shows its main absorption peaks at 737 (13.57), 800 (12.50),\n1001 (9.99), 1070 (9.35), 1115 (8.97), 1152 (8.68), 1278 (7.83), 1420 (7.04)\nand 1550 (6.45) cm$^{-1}$($\\mu$m), in good agreement with DFT calculations that\nare uniformly scaled to take anharmonicities into account. DC$^+$ has its main\nabsorption peaks at 853 (11.72), 876 (11.42), 1032 (9.69), 1168 (8.56), 1300\n(7.69), 1427 (7.01) and 1566 (6.39) cm$^{-1}$($\\mu$m), which also agree well\nwith the scaled DFT results presented here.\n The DIP$^+$ and DC$^+$ spectra are compared with the prominent infrared\nfeatures observed towards NGC 7023. This results in both matches and clear\ndeviations. Moreover, in the 11.0$-$14.0 $\\mu$m region, specific bands can be\nlinked to CH out-of-plane (oop) bending modes of different CH edge structures\nin large PAHs. The molecular origin of these findings and their astronomical\nrelevance are discussed.", "category": "astro-ph_IM" }, { "text": "Temperature dependence of radiation damage annealing of Silicon\n Photomultipliers: The last decade has increasingly seen the use of silicon photomultipliers\n(SiPMs) instead of photomultiplier tubes (PMTs). This is due to various\nadvantages of the former over the latter, such as their smaller size, lower\noperating voltage, higher detection efficiency, insensitivity to magnetic\nfields and mechanical robustness to launch vibrations. All these features make\nSiPMs ideal for use on space-based experiments where the detectors need to be\ncompact, lightweight and capable of surviving launch conditions. A downside of\nthe use of this novel type of detector in space conditions is its\nsusceptibility to radiation damage. In order to understand the lifetime of\nSiPMs in space, both the damage sustained due to radiation and the subsequent\nrecovery, or annealing, from this damage have to be studied.
Here we present these studies\nfor three different types of SiPMs from the Hamamatsu S13360 series. We present\nboth their behaviour after sustaining radiation equivalent to 2 years in low\nEarth orbit on a typical mission, and the recovery of these detectors while\nstored in different conditions. The storage conditions varied in temperature as\nwell as in operating voltage. The study found that the annealing depends\nsignificantly on the temperature of the detectors, with those stored at high\ntemperatures recovering significantly faster and closer to their original\nperformance. Additionally, no significant effect of a reasonable bias voltage\non the annealing was observed. Finally, the annealing rate as a function of\ntemperature is presented, along with various operating strategies for the\nfuture SiPM-based astrophysical detector POLAR-2 as well as for other future\nSiPM-based space-borne missions.", "category": "astro-ph_IM" }, { "text": "Stratospheric Imaging of Polar Mesospheric Clouds: A New Window on\n Small-Scale Atmospheric Dynamics: Instabilities and turbulence extending to the smallest dynamical scales play\nimportant roles in the deposition of energy and momentum by gravity waves\nthroughout the atmosphere. However, these dynamics and their effects have been\nimpossible to quantify to date due to lack of observational guidance.\nSerendipitous optical images of polar mesospheric clouds at ~82 km obtained by\nstar cameras aboard a cosmology experiment deployed on a stratospheric balloon\nprovide a new observational tool, revealing instability and turbulence\nstructures extending to spatial scales < 20 m. At 82 km, this resolution\nprovides sensitivity extending to the smallest turbulence scale not strongly\ninfluenced by viscosity: the \"inner scale\" of turbulence,\n$l_0 \\sim 10(\\nu^3/\\epsilon)^{1/4}$. Such images represent a new window\ninto small-scale dynamics that occur throughout the atmosphere but are\nimpossible to observe in such detail at any other altitude. We present a sample\nof images revealing a range of dynamics features, and employ numerical\nsimulations that resolve these dynamics to guide our interpretation of several\nobserved events.", "category": "astro-ph_IM" }, { "text": "Cosmological surveys with multi-object spectrographs: Multi-object spectroscopy has been a key technique contributing to the\ncurrent era of 'precision cosmology'. From the first exploratory surveys of the\nlarge-scale structure and evolution of the universe to the current generation\nof superbly detailed maps spanning a wide range of redshifts, multi-object\nspectroscopy has been a fundamentally important tool for mapping the rich\nstructure of the cosmic web and extracting cosmological information of\nincreasing variety and precision. This will continue to be true for the\nforeseeable future, as we seek to map the evolving geometry and structure of\nthe universe over the full extent of cosmic history in order to obtain the most\nprecise and comprehensive measurements of cosmological parameters. Here I\nbriefly summarize the contributions that multi-object spectroscopy has made to\ncosmology so far, then review the major surveys and instruments currently in\nplay and their prospects for pushing back the cosmological frontier.
Finally, I\nexamine some of the next generation of instruments and surveys to explore how\nthe field will develop in coming years, with a particular focus on specialised\nmulti-object spectrographs for cosmology and the capabilities of multi-object\nspectrographs on the new generation of extremely large telescopes.", "category": "astro-ph_IM" }, { "text": "Processing Images from Multiple IACTs in the TAIGA Experiment with\n Convolutional Neural Networks: Extensive air showers created by high-energy particles interacting with the\nEarth's atmosphere can be detected using imaging atmospheric Cherenkov\ntelescopes (IACTs). The IACT images can be analyzed to distinguish between the\nevents caused by gamma rays and by hadrons and to infer the parameters of the\nevent such as the energy of the primary particle. We use convolutional neural\nnetworks (CNNs) to analyze Monte Carlo-simulated images from the telescopes of\nthe TAIGA experiment. The analysis includes selecting the images corresponding\nto showers caused by gamma rays and estimating the energy of the gamma rays. We\ncompare the performance of CNNs using images from a single telescope with that\nof CNNs using images from two telescopes as inputs.", "category": "astro-ph_IM" }, { "text": "Equalizing the Pixel Response of the Imaging Photoelectric Polarimeter\n On-Board the IXPE Mission: The Gas Pixel Detector is a gas detector, sensitive to the polarization of\nX-rays, currently flying on-board IXPE - the first observatory dedicated to\nX-ray polarimetry. It detects X-rays and their polarization by imaging the\nionization tracks generated by photoelectrons absorbed in the sensitive volume,\nand then reconstructing the initial direction of the photoelectrons. The\nprimary ionization charge is multiplied and ultimately collected on a\nfinely-pixellated ASIC specifically developed for X-ray polarimetry. The signal\nof individual pixels is processed independently and gain variations can be\nsubstantial, of the order of 20%. Such variations need to be equalized to\ncorrectly reconstruct the track shape, and therefore its polarization\ndirection. The method for such an equalization is presented here; it is based\non comparing the mean charge of a pixel with that of the other pixels for\nequivalent events. The method is shown to finely equalize the response of the\ndetectors on board IXPE, allowing a better track reconstruction and energy\nresolution, and can in principle be applied to any imaging detector based on\ntracks. A toy version of this mean-charge equalization is sketched below.", "category": "astro-ph_IM" }, { "text": "An Advanced Atmospheric Dispersion Corrector: The Magellan Visible AO\n Camera: In addition to the BLINC/MIRAC IR science instruments, the Magellan adaptive\nsecondary AO system will have an EEV CCD47 that can be used both for visible AO\nscience and as a wide-field acquisition camera. The effects of atmospheric\ndispersion on the elongation of the diffraction-limited Magellan adaptive\noptics system point spread function (PSF) are significant in the near IR. This\nelongation becomes particularly egregious at visible wavelengths, culminating\nin a PSF that is 2000 $\\mu$m long in one direction and diffraction limited\n(30-60 $\\mu$m) in the other over the wavelength band 0.5-1.0 $\\mu$m for a source\nat 45$^\\circ$ zenith angle. The planned Magellan AO system consists of a deformable\nsecondary mirror with 585 actuators.
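A toy numerical version of the mean-charge equalization described in the IXPE abstract above (synthetic charges and gains; not the flight pipeline):

```python
import numpy as np

# Toy mean-charge gain equalization: each pixel's relative gain is estimated
# as its mean recorded charge over the global mean for equivalent events,
# then divided out. All numbers are synthetic and illustrative.
rng = np.random.default_rng(3)
n_pix, n_events = 300, 20_000
true_gain = rng.normal(1.0, 0.2, n_pix)            # ~20% gain spread

pix = rng.integers(0, n_pix, n_events)             # pixel hit by each event
q_true = rng.normal(100.0, 20.0, n_events)         # "equivalent" deposits
q_meas = q_true * true_gain[pix] + rng.normal(0.0, 2.0, n_events)

sums = np.bincount(pix, weights=q_meas, minlength=n_pix)
cnts = np.bincount(pix, minlength=n_pix).clip(min=1)
gain_hat = (sums / cnts) / q_meas.mean()           # relative gain estimate
q_equalized = q_meas / gain_hat[pix]

print("relative gain spread before: %.3f  after: %.3f"
      % (np.std(true_gain), np.std(true_gain / gain_hat)))
```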
This number of actuators should be\nsufficient to Nyquist-sample the atmospheric turbulence and correct images to\nthe diffraction limit at wavelengths as short as 0.7 $\\mu$m, with useful science\nbeing possible as low as 0.5 $\\mu$m. In order to achieve diffraction-limited\nperformance over this broad band, 2000 $\\mu$m of lateral color must be corrected\nto better than 10 $\\mu$m. The traditional atmospheric dispersion corrector (ADC)\nconsists of two identical counter-rotating cemented doublet prisms that correct\nthe primary chromatic aberration. We propose two new ADC designs: the first\nconsisting of two identical counter-rotating prism triplets, and the second\nconsisting of two pairs of cemented counter-rotating prism doublets that use\nboth normal dispersion and anomalous dispersion glass in order to correct both\nprimary and secondary chromatic aberration. The two designs perform 58% and\n68%, respectively, better than the traditional two-doublet design. We also\npresent our design for a custom removable wide-field lens that will allow our\nCCD47 to switch back and forth between an 8.6\" FOV for AO science and a 28.5\"\nFOV for acquisition.", "category": "astro-ph_IM" }, { "text": "Design and Initial Performance of the Prototype for the BEACON\n Instrument for Detection of Ultrahigh Energy Particles: The Beamforming Elevated Array for COsmic Neutrinos (BEACON) is a planned\nneutrino telescope designed to detect radio emission from upgoing air showers\ngenerated by ultrahigh energy tau neutrino interactions in the Earth. This\ndetection mechanism provides a measurement of the tau flux of cosmic neutrinos.\nWe have installed an 8-channel prototype instrument at high elevation at\nBarcroft Field Station, which has been running since 2018, and consists of 4\ndual-polarized antennas sensitive between 30-80 MHz, whose signals are\nfiltered, amplified, digitized, and saved to disk using a custom data\nacquisition system (DAQ). The BEACON prototype is at high elevation to maximize\neffective volume and uses a directional beamforming trigger to improve\nrejection of anthropogenic background noise at the trigger level. Here we\ndiscuss the design, construction, and calibration of the BEACON prototype\ninstrument. We also discuss the radio frequency environment observed by the\ninstrument, and categorize the types of events seen by the instrument,\nincluding a likely cosmic ray candidate event.", "category": "astro-ph_IM" }, { "text": "Detecting and analysing the topology of the cosmic web with spatial\n clustering algorithms I: Methods: In this paper we explore the use of spatial clustering algorithms as a new\ncomputational approach for modeling the cosmic web. We demonstrate that such\nalgorithms are efficient in terms of the computing time needed. We explore three\ndistinct spatial methods which we suitably adjust for (i) detecting the\ntopology of the cosmic web and (ii) categorizing various cosmic structures as\nvoids, walls, clusters and superclusters based on a variety of topological and\nphysical criteria such as the physical distance between objects, their masses\nand local densities. The methods explored are (1) a new spatial method called\nGravity Lattice; (2) a modified version of another spatial clustering\nalgorithm, the ABACUS; and (3) the well-known spatial clustering algorithm\nHDBSCAN. We utilize HDBSCAN in order to detect cosmic structures and categorize\nthem using their overdensity.
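As a minimal illustration of the HDBSCAN-based step described above, the sketch below clusters a synthetic 3D point set (two overdensities plus a uniform background). scikit-learn >= 1.3 ships an HDBSCAN implementation, and the standalone hdbscan package offers a similar interface; the data and parameters here are invented.

```python
import numpy as np
from sklearn.cluster import HDBSCAN   # or: from hdbscan import HDBSCAN

# Two synthetic overdensities embedded in a uniform background "field".
rng = np.random.default_rng(4)
halo1 = rng.normal([20, 20, 20], 1.0, (300, 3))
halo2 = rng.normal([60, 55, 40], 2.0, (500, 3))
field = rng.uniform(0, 100, (2000, 3))
pos = np.vstack([halo1, halo2, field])

labels = HDBSCAN(min_cluster_size=50).fit_predict(pos)
for lab in np.unique(labels):
    kind = "background / void candidate" if lab == -1 else "structure %d" % lab
    print(kind, ":", int(np.sum(labels == lab)), "points")
# A per-structure overdensity (e.g. member count over occupied volume) could
# then be used to categorize components as clusters, walls or superclusters.
```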
We demonstrate that the ABACUS method can be\ncombined with the classic DTFE method to obtain similar results in terms of the\nachieved accuracy with about an order of magnitude less computation time. To\nfurther solidify our claims, we draw insights from the computer science domain\nand compare the quality of the results with and without the application of our\nmethod. Finally, we further extend our experiments and verify their\neffectiveness by showing their ability to scale well with different cosmic web\nstructures that formed at different redshifts.", "category": "astro-ph_IM" }, { "text": "Seeing Science: The ability to represent scientific data and concepts visually is becoming\nincreasingly important due to the unprecedented exponential growth of\ncomputational power during the present digital age. The data sets and\nsimulations scientists in all fields can now create are literally thousands of\ntimes as large as those created just 20 years ago. Historically successful\nmethods for data visualization can, and should, be applied to today's huge data\nsets, but new approaches, also enabled by technology, are needed as well.\nIncreasingly, \"modular craftsmanship\" will be applied, as relevant\nfunctionality from the graphically and technically best tools for a job is\ncombined as needed, without low-level programming.", "category": "astro-ph_IM" }, { "text": "How to Scale a Code in the Human Dimension: As scientists' needs for computational techniques and tools grow, they cease\nto be supportable by software developed in isolation. In many cases, these\nneeds are being met by communities of practice, where software is developed by\ndomain scientists to reach pragmatic goals and satisfy distinct and enumerable\nscientific goals. We present techniques that have been successful in growing\nand engaging communities of practice, specifically in the yt and Enzo\ncommunities.", "category": "astro-ph_IM" }, { "text": "Background assessment for the TREX Dark Matter experiment: TREX-DM is conceived to look for low-mass Weakly Interacting Massive\nParticles (WIMPs) using a gas Time Projection Chamber equipped with micromegas\nreadout planes at the Canfranc Underground Laboratory. The detector can hold\n20 l of gas pressurized up to 10 bar in the active volume, corresponding to 0.30\nkg of Ar or 0.16 kg of Ne. The micromegas are read with a self-triggered\nacquisition, allowing for thresholds below 0.4 keV (electron equivalent). A low\nbackground level in the lowest energy region is another essential requirement.\nTo assess the expected background, all the relevant sources have been\nconsidered, including the measured fluxes of gamma radiation, muons and\nneutrons at the Canfranc Laboratory, together with the activity of most of the\ncomponents used in the detector and ancillary systems, obtained in a complete\nassay program. The background contributions have been simulated by means of a\ndedicated application based on Geant4 and a custom-made code for the detector\nresponse. The background model developed for the detector presently installed\nin Canfranc points to levels from 1 to 10 counts keV$^{-1}$ kg$^{-1}$ d$^{-1}$\nin the region of interest, making TREX-DM competitive in the search for\nlow-mass WIMPs. A roadmap to further decrease it down to 0.1 counts keV$^{-1}$\nkg$^{-1}$ d$^{-1}$ is underway.", "category": "astro-ph_IM" }, { "text": "Neutrino Astronomy - A Review of Future Experiments: Current-generation neutrino telescopes cover an energy range from about 10\nGeV to beyond $10^9$ GeV.
IceCube sets the scale for future experiments to make\nimprovements. Strategies for future upgrades will be discussed in three energy\nranges. At the low-energy end, an infill detector to IceCube's DeepCore would\nadd sensitivity in the energy range from a few to a few tens of GeV with the\nprimary goal of measuring the neutrino mass hierarchy. In the central energy\nrange of classical optical neutrino telescopes, next-generation detectors are\nbeing pursued in the Mediterranean and at Lake Baikal. The KM3NeT detector in\nits full scale would establish a substantial increase in sensitivity over\nIceCube. At the highest energies, radio detectors in ice are among the most\npromising and pursued technologies to increase exposure at $10^9$ GeV by more\nthan an order of magnitude compared to IceCube.", "category": "astro-ph_IM" }, { "text": "Towards an astronomical foundation model for stars with a\n Transformer-based model: Rapid strides are currently being made in the field of artificial\nintelligence using Transformer-based models like Large Language Models (LLMs).\nThe potential of these methods for creating a single, large, versatile model in\nastronomy has not yet been explored. In this work, we propose a framework for\ndata-driven astronomy that uses the same core techniques and architecture as\nused by LLMs. Using a variety of observations and labels of stars as an\nexample, we build a Transformer-based model and train it in a self-supervised\nmanner with cross-survey data sets to perform a variety of inference tasks. In\nparticular, we demonstrate that a $\\textit{single}$ model can perform both\ndiscriminative and generative tasks even if the model was not trained or\nfine-tuned to do any specific task. For example, on the discriminative task of\nderiving stellar parameters from Gaia XP spectra, we achieve an accuracy of 47\nK in $T_\\mathrm{eff}$, 0.11 dex in $\\log{g}$, and 0.07 dex in $[\\mathrm{M/H}]$,\noutperforming an expert $\\texttt{XGBoost}$ model in the same setting. But the\nsame model can also generate XP spectra from stellar parameters, inpaint\nunobserved spectral regions, extract empirical stellar loci, and even determine\nthe interstellar extinction curve. Our framework demonstrates that building and\ntraining a $\\textit{single}$ foundation model without fine-tuning using data\nand parameters from multiple surveys to predict unmeasured observations and\nparameters is well within reach. Such \"Large Astronomy Models\" trained on large\nquantities of observational data will play a large role in the analysis of\ncurrent and future large surveys.", "category": "astro-ph_IM" }, { "text": "Modelling multimodal photometric redshift regression with noisy\n observations: In this work, we are trying to extend the existing photometric redshift\nregression models from modeling pure photometric data back to the spectra\nthemselves. To that end, we developed a PCA that is capable of describing the\ninput uncertainty (including missing values) in a dimensionality reduction\nframework. With this \"spectrum generator\" at hand, we are capable of treating\nthe redshift regression problem in a fully Bayesian framework, returning a\nposterior distribution over the redshift. This approach therefore allows us to\ntreat the multimodal regression problem in an adequate fashion.
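The value of returning a full posterior rather than a point estimate can be seen in a toy grid-Bayes version of the problem. The generative model below is invented and stands in for the paper's PCA-based spectrum generator: when the model's colour track is non-monotonic in redshift, the posterior is genuinely multimodal.

```python
import numpy as np

# Toy multimodal redshift posterior on a grid: a (made-up) generative model
# maps z to three magnitudes; Bayes' rule with a flat prior gives p(z | data).
def model_mags(z):
    z = np.asarray(z, dtype=float)
    # Non-monotonic colour track, so distinct redshifts can look alike.
    return np.stack([22 + 2 * np.sin(2 * z), 22 + z, 23 - np.cos(3 * z)], -1)

rng = np.random.default_rng(5)
z_true, sigma = 1.1, 0.15                       # true z, photometric error [mag]
data = model_mags(z_true) + rng.normal(0, sigma, 3)

zgrid = np.linspace(0.0, 3.0, 1500)
chi2 = np.sum((model_mags(zgrid) - data) ** 2, axis=1) / sigma**2
post = np.exp(-0.5 * (chi2 - chi2.min()))       # flat prior on z
post /= np.trapz(post, zgrid)

is_peak = np.r_[False, (post[1:-1] > post[:-2]) & (post[1:-1] > post[2:]), False]
modes = zgrid[is_peak & (post > 0.1 * post.max())]
print("posterior modes near z =", np.round(modes, 2))
```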
In addition,\ninput uncertainty on the magnitudes can be included quite naturally. Lastly,\nthe proposed algorithm in principle allows predictions outside the training\nvalues, which makes it a fascinating opportunity for the detection of\nhigh-redshift quasars.", "category": "astro-ph_IM" }, { "text": "Long-baseline horizontal radio-frequency transmission through polar ice: We report on the analysis of englacial radio-frequency (RF) pulser data\nreceived over horizontal baselines of 1--5 km, based on broadcasts from two\nsets of transmitters deployed to depths of up to 1500 meters at the South Pole.\nFirst, we analyze data collected using two RF bicone transmitters 1400 meters\nbelow the ice surface, and frozen into boreholes drilled for the IceCube\nexperiment in 2011. Additionally, in December 2018, a fat-dipole antenna, fed\nby one of three high-voltage (~1 kV), fast (~1-5 ns) signal generators, was\nlowered into the 1700-m deep icehole drilled for the South Pole Ice Core\nExperiment (SPICE), approximately 3 km from the geographic South Pole. Signals\nfrom transmitters were recorded on the five englacial multi-receiver ARA\nstations, with receiver depths between 60--200 m. We confirm the long, >1 km RF\nelectric field attenuation length, test our observed signal arrival timing\ndistributions against models, and measure birefringent asymmetries at the 0.15%\nlevel.", "category": "astro-ph_IM" }, { "text": "Accurate, Meshless Methods for Magneto-Hydrodynamics: Recently, we developed a pair of meshless finite-volume Lagrangian methods\nfor hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite\nvolume' (MFV) methods. These capture advantages of both smoothed-particle\nhydrodynamics (SPH) and adaptive mesh-refinement (AMR) schemes. Here, we extend\nthese to include ideal magneto-hydrodynamics (MHD). The MHD equations are\nsecond-order consistent and conservative. We augment these with a\ndivergence-cleaning scheme, which maintains div B ~ 0 to high accuracy. We\nimplement these in the code GIZMO, together with a state-of-the-art\nimplementation of SPH MHD. In every one of a large suite of test problems, the\nnew methods are competitive with moving-mesh and AMR schemes using constrained\ntransport (CT) to ensure div B = 0. They are able to correctly capture the growth\nand structure of the magneto-rotational instability (MRI), MHD turbulence, and\nthe launching of magnetic jets, in some cases converging more rapidly than AMR\ncodes. Compared to SPH, the MFM/MFV methods exhibit proper convergence at fixed\nneighbor number, sharper shock capturing, and dramatically reduced noise, div B\nerrors, and diffusion. Still, 'modern' SPH is able to handle most of our tests,\nat the cost of much larger kernels and 'by hand' adjustment of artificial\ndiffusion parameters. Compared to AMR, the new meshless methods exhibit\nenhanced 'grid noise' but reduced advection errors and numerical diffusion,\nvelocity-independent errors, and superior angular momentum conservation and\ncoupling to N-body gravity solvers. As a result they converge more slowly on\nsome problems (involving smooth, slowly-moving flows) but more rapidly on\nothers (involving advection or rotation).
In all cases, divergence-control\nbeyond the popular Powell 8-wave approach is necessary, or else all methods we\nconsider will systematically converge to unphysical solutions.", "category": "astro-ph_IM" }, { "text": "Next-generation telescopes with curved focal surface for ultra-low\n surface brightness surveys: In spite of major advances in both ground- and space-based instrumentation,\nthe ultra-low-surface brightness universe (ULSB) still remains a largely\nunexplored volume in observational parameter space. ULSB observations provide\nunique constraints on a wide variety of objects, from the Zodiacal light all\nthe way to the optical cosmological background radiation, through dust cirri,\nmass loss shells in giant stars, LSB galaxies and the intracluster light. These\nsurface brightness levels (>28-29 mag arcsec$^{-2}$) are observed by maximising\nthe efficiency of the surveys and minimising or removing the systematics\narising in the measurement of surface brightness. Based on full-system photon\nMonte Carlo simulations, we present here the performance of a ground-based\ntelescope aimed at carrying out ULSB observations, with a curved focal surface\ndesign. Its off-axis optical design maximises the field of view while\nminimising the focal ratio. No lenses are used, as their multiple internal\nscatterings increase the wings of the point spread function (PSF), and the\nusual requirement of a flat focal plane is relaxed through the use of curved\nCCD detectors. The telescope has only one unavoidable single refractive\nsurface, the cryostat window, and yet it delivers a PSF with ultra-compact\nwings, which allows the detection, for a given exposure time, of surface\nbrightness levels nearly three orders of magnitude fainter than any other\ncurrent telescope.", "category": "astro-ph_IM" }, { "text": "Systematic biases in low frequency radio interferometric data due to\n calibration: the LOFAR EoR case: The redshifted 21 cm line of neutral hydrogen is a promising probe of the\nEpoch of Reionization (EoR). However, its detection requires a thorough\nunderstanding and control of the systematic errors. We study two systematic\nbiases observed in the LOFAR EoR residual data after calibration and\nsubtraction of bright discrete foreground sources. The first effect is a\nsuppression in the diffuse foregrounds, which could potentially mean a\nsuppression of the 21 cm signal. The second effect is an excess of noise beyond\nthe thermal noise. The excess noise shows fluctuations on small frequency\nscales, and hence it cannot be easily removed by foreground removal or\navoidance methods. Our analysis suggests that sidelobes of residual sources due\nto the chromatic point spread function and ionospheric scintillation cannot be\nthe dominant causes of the excess noise. Rather, both the suppression of\ndiffuse foregrounds and the excess noise can occur due to calibration with an\nincomplete sky model containing predominantly bright discrete sources. We show\nthat calibrating only on bright sources can cause suppression of other signals\nand introduce an excess noise in the data. The levels of the suppression and\nexcess noise depend on the relative flux of sources which are not included in\nthe model with respect to the flux of modeled sources.
We discuss possible\nsolutions, such as using only long baselines to calibrate the interferometric\ngain solutions, as well as simultaneous multi-frequency calibration, along with\ntheir benefits and shortcomings.", "category": "astro-ph_IM" }, { "text": "Iris: an Extensible Application for Building and Analyzing Spectral\n Energy Distributions: Iris is an extensible application that provides astronomers with a\nuser-friendly interface capable of ingesting broad-band data from many\ndifferent sources in order to build, explore, and model spectral energy\ndistributions (SEDs). Iris takes advantage of the standards defined by the\nInternational Virtual Observatory Alliance, but hides the technicalities of\nsuch standards by implementing different layers of abstraction on top of them.\nSuch intermediate layers provide hooks that users and developers can exploit in\norder to extend the capabilities provided by Iris. For instance, custom Python\nmodels can be combined in arbitrary ways with the Iris built-in models or with\nother custom functions. As such, Iris offers a platform for the development and\nintegration of SED data, services, and applications, either from the user's\nsystem or from the web. In this paper we describe the built-in features\nprovided by Iris for building and analyzing SEDs. We also explore in some\ndetail the Iris framework and software development kit, showing how astronomers\nand software developers can plug their code into an integrated SED analysis\nenvironment.", "category": "astro-ph_IM" }, { "text": "A simple and efficient solver for self-gravity in the DISPATCH\n astrophysical simulation framework: We describe a simple and effective algorithm for solving Poisson's equation\nin the context of self-gravity within the DISPATCH astrophysical fluid\nframework. The algorithm leverages the fact that DISPATCH stores multiple time\nslices and uses asynchronous time-stepping to produce a scheme that does not\nrequire any explicit global communication or sub-cycling, only the normal,\nlocal communication between patches and the iterative solution to Poisson's\nequation. We demonstrate that the implementation is suitable both for\ncollections of patches of a single resolution and for hierarchies of adaptively\nresolved patches. Benchmarks are presented that demonstrate the accuracy,\neffectiveness and efficiency of the scheme. A minimal example of the kind of\niterative Poisson kernel such a scheme builds on is sketched below.", "category": "astro-ph_IM" }, { "text": "The conceptual design of GMagAO-X: visible wavelength high contrast\n imaging with GMT: We present the conceptual design of GMagAO-X, an extreme adaptive optics\nsystem for the 25 m Giant Magellan Telescope (GMT). We are developing GMagAO-X\nto be available at or shortly after first light of the GMT, to enable early\nhigh contrast exoplanet science in response to the Astro2020 recommendations. A\nkey science goal is the characterization of nearby potentially habitable\nterrestrial worlds. GMagAO-X is a woofer-tweeter system, with integrated segment\nphasing control. The tweeter is a 21,000-actuator segmented deformable mirror,\ncomposed of seven 3000-actuator segments. A multi-stage wavefront sensing\nsystem provides for bootstrapping, phasing, and high-order sensing. The entire\ninstrument is mounted in a rotator to provide gravity invariance. After the\nmain AO system, visible (g to y) and near-IR (Y to H) science channels contain\nintegrated coronagraphic wavefront control systems.
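A minimal, self-contained illustration of the iterative Poisson kernel referenced in the DISPATCH abstract above (plain Jacobi on one periodic patch; production solvers use faster smoothers or multigrid, and patch-local communication):

```python
import numpy as np

# Jacobi iteration for laplacian(phi) = 4*pi*G*rho on a periodic 2D grid.
# Illustrative only; not the DISPATCH implementation.
n, dx, G = 64, 1.0, 1.0
rho = np.zeros((n, n))
rho[28:36, 28:36] = 1.0
rho -= rho.mean()                      # periodic box needs a zero-mean source

rhs = 4.0 * np.pi * G * rho
phi = np.zeros_like(rho)
for it in range(20000):
    nb = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
          np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi_new = (nb - dx**2 * rhs) / 4.0
    if np.max(np.abs(phi_new - phi)) < 1e-9:
        break
    phi = phi_new

lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
       np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi) / dx**2
print("iterations:", it, " max residual:", float(np.max(np.abs(lap - rhs))))
```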
The fully corrected and,\noptionally, coronagraphically filtered beams will then be fed to a suite of\nfocal plane instrumentation including imagers and spectrographs. This will\ninclude existing facility instruments at GMT via fiber feeds. We have developed\nan end-to-end frequency-domain modeling framework to assess the performance of\nGMagAO-X. The dynamics of the many closed-loop feedback control systems are\nthen modeled. Finally, we employ a frequency-domain model of post-processing\nalgorithms to analyze the final post-processed sensitivity. The CoDR for\nGMagAO-X was held in September 2021. Here we present an overview of the science\ncases, instrument design, expected performance, and concept of operations for\nGMagAO-X.", "category": "astro-ph_IM" }, { "text": "REXPACO: an algorithm for high contrast reconstruction of the\n circumstellar environment by angular differential imaging: Aims. The purpose of this paper is to describe a new post-processing\nalgorithm dedicated to the reconstruction of the spatial distribution of light\nreceived from off-axis sources, in particular from circumstellar disks.\n Methods. Built on the recent PACO algorithm dedicated to the detection of\npoint-like sources, the proposed method is based on the local learning of patch\ncovariances capturing the spatial fluctuations of the stellar leakages. From\nthis statistical modeling, we develop a regularized image reconstruction\nalgorithm (REXPACO) following an inverse problem approach based on a forward\nimage formation model of the off-axis sources in the ADI sequences.\n Results. Injections of fake circumstellar disks in ADI sequences from the\nVLT/SPHERE-IRDIS instrument show that both the morphology and the photometry of\nthe disks are better preserved by REXPACO compared to standard postprocessing\nmethods like cADI. In particular, the modeling of the spatial covariances\nproves useful in reducing typical ADI artifacts and in better disentangling\nthe signal of these sources from the residual stellar contamination. The\napplication to stars hosting circumstellar disks with various morphologies\nconfirms the ability of REXPACO to produce images of the light distribution\nwith reduced artifacts. Finally, we show how REXPACO can be combined with PACO\nto disentangle the signal of circumstellar disks from the signal of candidate\npoint-like sources.\n Conclusions. REXPACO is a novel post-processing algorithm producing\nnumerically deblurred images of the circumstellar environment. It exploits the\nspatial covariances of the stellar leakages and of the noise to efficiently\neliminate this nuisance term.", "category": "astro-ph_IM" }, { "text": "MegaPipe astrometry for the New Horizons spacecraft: The New Horizons spacecraft, launched by NASA in 2006, will arrive in the\nPluto-Charon system on July 14, 2015. There, it will spend a few hours imaging\nPluto and its moons. It will then have a small amount of reserve propellant\nwhich will be used to direct the probe on to a second, yet-to-be-discovered\nobject in the Kuiper Belt. Data from the MegaPrime camera on CFHT were used to\nbuild a precise, high-density astrometric reference frame for both the final\napproach into the Pluto system and the search for the secondary target. Pluto\ncurrently lies in the galactic plane. This is a hindrance in that there are\npotential problems with confusion.
However, it is also a benefit, since it\nallows the use of the UCAC4 astrometric reference catalog, which is normally\ntoo sparse for use with MegaCam images. The astrometric accuracy of the final\ncatalogs, as measured by the residuals, is 0.02 arcseconds.", "category": "astro-ph_IM" }, { "text": "Machine learning techniques to distinguish near-field interference and\n far-field astrophysical signals in radio telescopes: The CHIME radio telescope operates in the frequency band of 400 to 800\nMHz. The CHIME/FRB collaboration has a data pipeline that analyzes the data in\nreal time, suppresses radio frequency interference (RFI) and searches for\nFRBs. However, the existing RFI removal techniques work best for broadband and\nnarrow FRBs. We wish to create an RFI removal technique that works without\nmaking assumptions about the characteristics of the FRB signal. In this thesis\nwe first explore the intensity data generated by the CHIME/FRB backend. After\nbecoming familiar with the structure and organisation of the data, we present a\nnovel method for RFI removal using unsupervised machine learning clustering\ntechniques applied to multiple beams of the CHIME telescope. We attempt to use\nthe analogy of the theory of interference for RFI removal by distinguishing\nnear-field RFI and far-field astrophysical signals in the data. We explored\nmany clustering techniques, such as K-means and DBSCAN, but one technique,\nHDBSCAN, looks particularly promising. Using the HDBSCAN clustering technique\nwe have developed a new method for RFI removal. So far, the removal technique\nhas been developed using 3 beams of the CHIME telescope. The idea is still at\nan early stage, and we will soon try to include more beams in our new RFI\nremoval method. We have visually observed that RFI has been considerably\nremoved from our data. In the future we will do further calculations to measure\nthe signal-to-noise ratio (SNR) of the FRB signal after RFI removal, and we\nwill compare it with the SNR obtained using the current RFI removal technique\nin the CHIME/FRB data pipeline.", "category": "astro-ph_IM" }, { "text": "An iterative wave-front sensing algorithm for high-contrast imaging\n systems: Wave-front sensing from multiple focal-plane images is a promising technique\nfor high-contrast imaging systems. However, the wave-front error of an optics\nsystem can be properly reconstructed only when it is very small. This paper\npresents an iterative optimization algorithm for the measurement of large\nstatic wave-front errors directly from only one focal-plane image. We first\nmeasure the intensity of the pupil image to get the pupil function of the\nsystem and acquire the aberrated image on the focal plane with a phase error\nthat is to be measured. Then we induce a dynamic phase in the tested pupil\nfunction and calculate the associated intensity of the reconstructed image on\nthe focal plane. The algorithm minimizes the intensity difference between the\nreconstructed image and the tested aberrated image on the focal plane, with\nthe induced phase as the variable of the optimization.
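In the same spirit as the iterative algorithm described above, the sketch below fits a trial ("induced") phase, parameterized by a few low-order modes, by minimizing the mismatch between measured and modeled focal-plane intensities. Everything here (pupil, basis, optimizer) is a simplified stand-in, not the authors' implementation; note also the well-known twin-image sign ambiguity of even modes for a single in-focus image, one reason a known induced phase is useful in practice.

```python
import numpy as np
from scipy.optimize import minimize

# Toy focal-plane phase retrieval: minimize the difference between a measured
# PSF and the PSF modeled from the pupil with a trial phase (3 low-order modes).
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
pupil = (x**2 + y**2 <= 1.0).astype(float)
modes = np.array([x * pupil,                         # tilt
                  y * pupil,                         # tip
                  (2 * (x**2 + y**2) - 1) * pupil])  # defocus-like

def psf(coeffs):
    phase = np.tensordot(coeffs, modes, axes=1)
    field = pupil * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

true_c = np.array([0.6, -0.4, 0.8])                  # radians per mode
data = psf(true_c)                                   # "measured" aberrated image

cost = lambda c: np.sum((psf(c) - data) ** 2)
res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
# The defocus coefficient may be recovered with either sign (twin ambiguity).
print("recovered coefficients:", np.round(res.x, 3))
```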
{ "text": "An iterative wave-front sensing algorithm for high-contrast imaging systems: Wave-front sensing from multiple focal plane images is a promising technique for high-contrast imaging systems. However, the wave-front error of an optics system can be properly reconstructed only when it is very small. This paper presents an iterative optimization algorithm for the measurement of large static wave-front errors directly from only one focal plane image. We first measure the intensity of the pupil image to obtain the pupil function of the system, and acquire the aberrated image on the focal plane with a phase error that is to be measured. We then introduce a trial phase to the measured pupil function and calculate the associated intensity of the reconstructed image on the focal plane. The algorithm minimizes the intensity difference between the reconstructed image and the measured aberrated image on the focal plane, with the introduced phase as the variable of the optimization. The simulation shows that the wave-front of an optics system can in theory be reconstructed with high precision, which indicates that such an iterative algorithm may be an effective way to perform wave-front sensing for high-contrast imaging systems.", "category": "astro-ph_IM" },
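A minimal numerical illustration of this single-image scheme: model the focal-plane intensity as |FFT(P e^{i phi})|^2, parameterize the introduced phase by a few modes, and minimize the intensity mismatch. The basis modes and optimizer are our own choices, not the paper's; odd modes are used so the toy avoids the well-known sign ambiguity of even aberrations in a single in-focus image.

```python
# Toy focal-plane phase retrieval (illustrative, not the paper's code):
# fit mode coefficients so the modeled image matches the measured one.
import numpy as np
from scipy.optimize import least_squares

N = 64
y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
P = (x**2 + y**2 <= 1).astype(float)             # circular pupil
# Odd modes only (tilt-like and coma-like), to sidestep the even-mode
# sign degeneracy of single in-focus images
modes = np.array([x, y, (x**2 + y**2) * x])

def image(coeffs):
    phi = np.tensordot(coeffs, modes, axes=1)    # trial phase (radians)
    field = P * np.exp(1j * phi)
    return np.abs(np.fft.fftshift(np.fft.fft2(field)))**2

c_true = np.array([0.30, -0.20, 0.15])           # "unknown" aberration
I_meas = image(c_true)                           # one measured image

res = least_squares(lambda c: (image(c) - I_meas).ravel(), x0=np.zeros(3))
print("recovered coefficients:", np.round(res.x, 3))
```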
{ "text": "Analysis Framework for Multi-messenger Astronomy with IceCube: Combining observational data from multiple instruments for multi-messenger astronomy can be challenging due to the complexity of the instrument response functions and likelihood calculation. We introduce a python-based unbinned-likelihood analysis package called i3mla (IceCube Maximum Likelihood Analysis). i3mla is designed to be compatible with the Multi-Mission Maximum Likelihood (3ML) framework, which enables multi-messenger astronomy analyses by combining the likelihood across different instruments. By making it possible to use IceCube data in the 3ML framework, we aim to facilitate the use of neutrino data in multi-messenger astronomy. In this work we illustrate how to use the i3mla package with 3ML and present preliminary sensitivities using the i3mla package and 3ML through a joint fit with the public HAWC dataset.", "category": "astro-ph_IM" }, { "text": "A partially dimensionally-split approach to numerical MHD: We modify an existing magnetohydrodynamics algorithm to make it more compatible with a dimensionally-split (DS) framework. It is based on the standard reconstruct-solve-average strategy (using a Riemann solver), and relies on constrained transport to ensure that the magnetic field remains divergence-free (div B = 0). The DS approach, combined with the use of a single, cell-centred grid (for both the fluid quantities and the magnetic field), means that the algorithm can be easily added to existing DS hydrodynamics codes. This makes it particularly useful for mature astrophysical codes, which often model more complicated physical effects on top of an underlying DS hydrodynamics engine, and therefore cannot be restructured easily. Several test problems have been included to demonstrate the accuracy of the algorithm, and illustrative source code has been made freely available online.", "category": "astro-ph_IM" }, { "text": "Advanced Architectures for Astrophysical Supercomputing: Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding up graphics rendering in video games is now achieving speed-ups of $O(100\times)$ in general-purpose computation -- performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.", "category": "astro-ph_IM" }, { "text": "KSIM: simulating KIDSpec, a Microwave Kinetic Inductance Detector spectrograph for the optical/NIR: KIDSpec, the Kinetic Inductance Detector Spectrometer, is a proposed optical to near-IR Microwave Kinetic Inductance Detector (MKID) spectrograph. MKIDs are superconducting photon-counting detectors which are able to resolve the energy of incoming photons and their time of arrival. KIDSpec will use these detectors to separate incoming spectral orders from a grating, thereby not requiring a cross-disperser. In this paper we present a simulation tool for assessing KIDSpec's potential performance upon construction and for optimising a given design. This simulation tool is the KIDSpec Simulator (KSIM), a Python package designed to simulate a variety of KIDSpec and observation parameters. A range of astrophysical objects are simulated: stellar objects, an SDSS-observed galaxy, a Seyfert galaxy, and a mock galaxy spectrum from the JAGUAR catalogue. Multiple medium spectral resolution designs for KIDSpec are simulated. The possible impacts of MKID energy resolution variance and dead pixels were simulated, with the effect on KIDSpec performance quantified using the Reduced Chi-Squared (RCS) value. Using dead pixel percentages from current instruments, the RCS result was found to only increase to 1.21 at worst for one of the designs simulated. SNR comparisons of object simulations between KSIM and X-Shooter's ETC were also performed. KIDSpec offers a particular improvement over X-Shooter for short and faint observations. For a Seyfert galaxy ($m_{R}=21$) simulation with a 180s exposure, KIDSpec had an average SNR of 4.8, in contrast to 1.5 for X-Shooter. Using KSIM the design of KIDSpec can be optimised to improve the instrument further.", "category": "astro-ph_IM" }, { "text": "A New Method for Band-limited Imaging with Undersampled Detectors: Since its original use on the Hubble Deep Field, \"Drizzle\" has become a de facto standard for the combination of images taken by the Hubble Space Telescope. However, the drizzle algorithm was developed with small, faint, partially resolved sources in mind, and is not the best possible algorithm for high signal-to-noise unresolved objects. Here, a new method for creating band-limited images from undersampled data is presented. The method uses a drizzled image as a first order approximation and then rapidly converges toward a band-limited image which fits the data given the statistical weighting provided by the drizzled image. The method, named iDrizzle, for iterative Drizzle, effectively eliminates both the small high-frequency artifacts and the convolution with an interpolant kernel that can be introduced by drizzling. The method works well in the presence of geometric distortion, and can easily handle cosmic rays, bad pixels, or other missing data.
It can combine images taken with random dithers, though the number of dithers required to obtain a good final image depends in part on the quality of the dither placements. iDrizzle may prove most beneficial for producing high-fidelity point spread functions from undersampled images, and could be particularly valuable for future Dark Energy missions such as WFIRST and EUCLID, which will likely attempt to do high-precision supernova photometry and lensing experiments with undersampled detectors.", "category": "astro-ph_IM" },
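A one-dimensional toy of the iterate-toward-a-band-limited-image idea (our own simplification, not the published algorithm, which operates on 2D images with geometric distortion): start from an empty combination, repeatedly add the averaged residuals at the dithered sample points, and re-impose the band limit in Fourier space.

```python
# 1D toy of iterative band-limited combination (illustrative only).
# Four dithered, undersampled sample sets of a band-limited "scene"
# are combined by Papoulis-Gerchberg-style iterations.
import numpy as np

rng = np.random.default_rng(2)
Nf, band = 256, 30                        # fine grid, band limit in FFT bins
t = np.arange(Nf) / Nf
# freq 20 exceeds the single-dither Nyquist (16 bins for stride 8),
# so no single undersampled set can recover it alone
truth = np.sin(2*np.pi*5*t) + 0.5*np.cos(2*np.pi*20*t)

def bandlimit(sig):
    F = np.fft.rfft(sig)
    F[band:] = 0
    return np.fft.irfft(F, Nf)

# Undersampled measurements: every 8th fine pixel at 4 random offsets
samples = [(np.arange(o, Nf, 8), truth[np.arange(o, Nf, 8)])
           for o in rng.choice(8, size=4, replace=False)]

est = np.zeros(Nf)
for _ in range(300):                      # iterate toward a band-limited fit
    resid = np.zeros(Nf); hits = np.zeros(Nf)
    for idx, val in samples:
        resid[idx] += val - est[idx]; hits[idx] += 1
    resid[hits > 0] /= hits[hits > 0]
    est = bandlimit(est + resid)          # add correction, re-impose band limit

print("max reconstruction error:", np.abs(est - truth).max())
```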
{ "text": "Partially Coherent Optical Modelling of the Ultra-Low-Noise Far-Infrared Imaging Arrays on the SPICA Mission: We have developed a range of theoretical and numerical techniques for modeling the multi-mode, 34-210 micron, ultra-low-noise Transition Edge Sensors that will be used on the SAFARI instrument on the ESA/JAXA cooled-aperture FIR space telescope SPICA. The models include a detailed analysis of the resistive and reactive properties of thin superconducting absorbing films, and a partially coherent mode-matching analysis of patterned films in multi-mode waveguide. The technique allows the natural optical modes, modal responsivities, and Stokes maps of complicated structures comprising patterned films in profiled waveguides and cavities to be determined.", "category": "astro-ph_IM" }, { "text": "Fully Automated Approaches to Analyze Large-Scale Astronomy Survey Data: Observational astronomy has changed drastically in the last decade: manually driven target-by-target instruments have been replaced by fully automated robotic telescopes. Data acquisition methods have advanced to the point that terabytes of data are flowing in and being stored on a daily basis. At the same time, the vast majority of analysis tools in stellar astrophysics still rely on manual expert interaction. To bridge this gap, we foresee that the next decade will witness a fundamental shift in the approaches to data analysis: case-by-case methods will be replaced by fully automated pipelines that will process the data from their reduction stage, through analysis, to storage. While major effort has been invested in data reduction automation, automated data analysis has mostly been neglected despite the urgent need. Scientific data mining will face serious challenges to identify, understand and eliminate the sources of systematic errors that will arise from this automation. As a special case, we present an artificial intelligence (AI) driven pipeline that is prototyped in the domain of stellar astrophysics (eclipsing binaries in particular), current results and the challenges still ahead.", "category": "astro-ph_IM" }, { "text": "Automatic morphological classification of galaxy images: We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral, and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple Weighted Nearest Neighbor rule such that the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically into spiral, elliptical and edge-on galaxies with an accuracy of ~90% compared to classifications carried out by the author. Full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects.", "category": "astro-ph_IM" },
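The classification rule just described (Fisher scores as feature weights inside a nearest-neighbour rule) is compact enough to sketch. The two-feature toy data below are invented, and we use a plain 1-NN rule where the paper uses a Weighted Nearest Neighbor variant.

```python
# Toy sketch of Fisher-weighted nearest-neighbour classification
# (illustrative, not the authors' code).
import numpy as np

def fisher_scores(X, y):
    """Per-feature ratio of between-class to within-class variance."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = sum(np.sum(y == c) * (X[y == c].mean(0) - overall)**2
                  for c in classes)
    within = sum(((X[y == c] - X[y == c].mean(0))**2).sum(0) for c in classes)
    return between / np.maximum(within, 1e-12)

def classify(X_train, y_train, x, w):
    d2 = (w * (X_train - x)**2).sum(axis=1)   # Fisher-weighted distance
    return y_train[np.argmin(d2)]

rng = np.random.default_rng(3)
# Feature 0 separates the classes; feature 1 is pure noise
X = np.vstack([rng.normal([0, 0], 1, (100, 2)),
               rng.normal([3, 0], 1, (100, 2))])
y = np.r_[np.zeros(100, int), np.ones(100, int)]
w = fisher_scores(X, y)
print("Fisher weights:", np.round(w, 2))        # noise feature gets ~0 weight
print("prediction for [2.8, 5.0]:", classify(X, y, np.array([2.8, 5.0]), w))
```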
{ "text": "Study of cosmogenic activation above ground of Ar for DarkSide-20k: The production of long-lived radioactive isotopes due to the exposure to cosmic rays on the Earth's surface is a hazard for experiments searching for rare events like the direct detection of galactic dark matter particles. The use of large amounts of liquid argon is foreseen in different projects, like the DarkSide-20k experiment, intended to look for Weakly Interacting Massive Particles at the Laboratori Nazionali del Gran Sasso. Here, results from the study of the cosmogenic activation of argon carried out in the context of DarkSide-20k are presented. The induced activity of several isotopes, including 39Ar, and the expected counting rates in the detector have been deduced, considering exposure conditions as realistic as possible.", "category": "astro-ph_IM" }, { "text": "The analysis of effective galaxies number count for Chinese Space Station Optical Survey(CSS-OS) by image simulation: The Chinese Space Station Optical Survey (CSS-OS) is a mission to explore the vast universe. This mission will equip a 2-meter space telescope to perform a multi-band NUV-optical large area survey (over 40% of the sky) and deep survey (~1% of the sky) for cosmological and astronomical goals. Galaxy detection is one of the most important methods to achieve the scientific goals. In this paper, we evaluate the galaxy number density for CSS-OS in the i band (depth, i ~26 for the large area survey and ~27 for the deep survey, point source, 5-sigma) by the method of image simulation. We also compare the galaxies detected by CSS-OS with those of LSST (i~27, point source, 5-sigma). In our simulation, the HUDF galaxy catalogs are used to create mock images, since the integration time is long enough to meet the completeness requirements of the galaxy analysis for CSS-OS and LSST. The galaxy surface profile and spectrum are produced from the morphological information, photometric redshift and SEDs from the catalogs. The instrumental features and the environmental conditions are also considered in producing the mock galaxy images. The galaxies of CSS-OS and LSST are both extracted by SExtractor from the mock i-band image and matched with the original catalog. Through the analysis of the extracted galaxies, we find that the effective galaxy number count is ~13 arcmin^-2, ~40 arcmin^-2 and ~42 arcmin^-2 for the CSS-OS large area survey, the CSS-OS deep survey and LSST, respectively. Moreover, CSS-OS shows an advantage in small galaxy detection thanks to its high spatial resolution, especially for the deep survey: about 20% of the galaxies detected by the CSS-OS deep survey are not detected by LSST, and they have a small effective radius of re < 0.3\".", "category": "astro-ph_IM" }, { "text": "Matched filter in the low-number count Poisson noise regime: an efficient and effective implementation: The matched filter (MF) is widely used to detect signals hidden within the noise. If the noise is Gaussian, its performance is well known and can be described in an elegant analytical form. The treatment of non-Gaussian noise is often cumbersome, as in most cases there is no analytical framework. This is true also for Poisson noise which, especially in the low-number count regime, presents the additional difficulty of being discrete. For this reason, methods based on heuristic or semi-heuristic arguments have been proposed in the past. Recently, an analytical form of the MF has been introduced, but the computation of the probability of false detection or false alarm (PFA) is based on numerical simulations. To overcome this inefficient and time-consuming approach we propose here an effective method to compute the PFA based on the saddle point approximation (SA). We provide the theoretical framework and support our findings by means of numerical simulations. We also discuss the limitations of the MF in practical applications.", "category": "astro-ph_IM" },
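For a known template s over a flat background b, the Poisson matched-filter statistic reduces to a weighted pixel sum, T(x) = sum_i x_i ln(1 + s_i/b). The sketch below computes this statistic and estimates the PFA by brute-force Monte Carlo, which is exactly the expensive step the saddle-point approximation is designed to replace (the SA itself is not implemented here); all numbers are invented.

```python
# Matched filtering in low-count Poisson noise (illustrative sketch,
# not the paper's implementation).
import numpy as np

rng = np.random.default_rng(4)
n, b = 25, 0.5                                   # pixels, background rate
s = 2.0 * np.exp(-0.5 * ((np.arange(n) - 12) / 2.0)**2)  # Gaussian template
w = np.log1p(s / b)                              # Poisson MF weights

# Null (background-only) distribution of T via Monte Carlo; the
# saddle-point approximation replaces this step analytically
T0 = rng.poisson(b, size=(200_000, n)) @ w
thresh = np.quantile(T0, 1 - 1e-3)               # threshold for PFA = 1e-3

# Detection probability with the source present
T1 = rng.poisson(b + s, size=(20_000, n)) @ w
print(f"threshold={thresh:.2f}, detection prob={np.mean(T1 > thresh):.2f}")
```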
{ "text": "Investigating the In-Flight Performance of the UVIT Payload on ASTROSAT: We have studied the performance of the Ultraviolet Imaging Telescope payload on AstroSat and derived a calibration of the FUV and NUV instruments on board. We find that the sensitivity of both the FUV and NUV channels is as expected from ground calibrations, with the FUV effective area about 35% and the NUV effective area about the same as that of GALEX. The point spread function of the instrument is on the order of 1.2-1.6 arcsec. We have found that pixel-to-pixel variations in the sensitivity are less than 10%, with spacecraft motion compensating for most of the flat-field variations. We derived a distortion correction but recommend that it be applied post-processing as part of an astrometric solution.", "category": "astro-ph_IM" }, { "text": "The Importance of Telescope Training in Data Interpretation: In this State of the Profession Consideration, we will discuss the state of hands-on observing within the profession, including: information about professional observing trends; student telescope training, beginning at the undergraduate and graduate levels, as a key to ensuring a base level of technical understanding among astronomers; the role that amateurs can take moving forward; the impact of telescope training on using survey data effectively; and the need for modest investments in new, standard instrumentation at mid-size aperture telescope facilities to ensure their usefulness for the next decade.", "category": "astro-ph_IM" }, { "text": "Fully-Automated Reduction of Longslit Spectroscopy with the Low Resolution Imaging Spectrometer at Keck Observatory: I present and summarize a software package (\"LPipe\") for completely automated, end-to-end reduction of both bright and faint sources with the Low-Resolution Imaging Spectrometer (LRIS) at Keck Observatory. It supports all gratings, grisms, and dichroics, and also reduces imaging observations, although it does not include multislit or polarimetric reduction capabilities at present. It is suitable for on-the-fly quicklook reductions at the telescope, for large-scale reductions of archival datasets, and (in many cases) for science-quality post-run reductions of PI data. To demonstrate its capabilities the pipeline is run in fully-automated mode on all LRIS longslit data in the Keck Observatory Archive acquired during the 12-month period between August 2016 and July 2017. The reduced spectra (of 675 single-object targets, totaling ~200 hours of on-source integration time in each camera), and the pipeline itself, are made publicly available to the community.", "category": "astro-ph_IM" }, { "text": "Pulsar scattering in space and time: We report on a recent global VLBI experiment in which we study the scatter broadening of pulsars in the spatial and time domain simultaneously. Depending on the distribution of scattering screen(s), geometry predicts that the less spatially broadened parts of the signal arrive earlier than the more broadened parts. This means that over one pulse period the size of the scattering disk should grow from pointlike to the maximum size. An equivalent description is that the pulse profile shows less temporal broadening on the longer baselines. This contribution presents first results that are consistent with the expected expanding rings. We also briefly discuss how the autocorrelations can be used for amplitude calibration. This requires a thorough investigation of the digitisation and the sampler statistics and is not fully solved yet.", "category": "astro-ph_IM" }, { "text": "Antenna characterization for the HIRAX experiment: The Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) aims to improve constraints on the dark energy equation of state through measurements of large-scale structure at high redshift ($0.8 < z < 2.5$). Noise temperature measurements of the HIRAX feeds were performed in a custom apparatus built at Yale. In this system, identical loads, one cryogenic and the other at room temperature, are used to take a differential (Y-factor) measurement from which the noise of the system is inferred. Several measurement sets have been conducted using the system, involving CHIME feeds as well as four of the HIRAX active feeds. These measurements give the first noise temperature measurements of the HIRAX feed, revealing a $\sim$60K noise temperature (relative to the 30K target) with 40K peak-to-peak frequency-dependent features, and provide the first demonstration of feed repeatability. Both findings inform current and future feed designs.", "category": "astro-ph_IM" }, { "text": "Exploring the Capabilities of Gibbs Sampling in Pulsar Timing Arrays: We explore the use of Gibbs sampling in estimating the noise properties of individual pulsars and illustrate its effectiveness using the NANOGrav 11-year data set. We find that Gibbs-sampling noise modeling (GM) is more efficient than the current standard Bayesian techniques (SM) for single pulsar analyses, yielding model parameter posteriors with an average effective-sample-size ratio (GM/SM) of 6 across all parameters and pulsars. Furthermore, the output of GM contains posteriors for the Fourier coefficients that can be used to characterize the underlying red noise process of any pulsar's timing residuals, which are absent in current implementations of SM. Through simulations, we demonstrate the potential for such coefficients to measure the spatial cross-correlations between pulsar pairs produced by a gravitational wave background.", "category": "astro-ph_IM" },
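A toy blocked Gibbs sampler with the same two-block structure (Fourier coefficients of a red-noise process, plus a variance hyperparameter) shows why the approach is efficient: both conditionals are conjugate and are drawn exactly, with no proposal tuning. The model and priors below are our own simplification of a pulsar noise model, not the paper's.

```python
# Toy Gibbs sampler for residuals r = F a + white noise, with Gaussian
# prior a_k ~ N(0, v) on the Fourier coefficients and unknown variance v
# (illustrative only; conjugate normal / inverse-gamma blocks).
import numpy as np

rng = np.random.default_rng(5)
N, K, sw = 200, 10, 0.3                      # epochs, modes, white-noise rms
t = np.linspace(0, 1, N)
F = np.hstack([np.sin(2*np.pi*np.outer(t, np.arange(1, K+1))),
               np.cos(2*np.pi*np.outer(t, np.arange(1, K+1)))])
a_true = rng.normal(0, 0.5, 2*K)
r = F @ a_true + rng.normal(0, sw, N)

v, draws = 1.0, []
for it in range(2000):
    # Block 1: coefficients | v  -- exact Gaussian conditional draw
    A = F.T @ F / sw**2 + np.eye(2*K) / v    # posterior precision
    mean = np.linalg.solve(A, F.T @ r / sw**2)
    a = mean + np.linalg.solve(np.linalg.cholesky(A).T, rng.normal(size=2*K))
    # Block 2: v | coefficients -- inverse-gamma conditional (weak prior)
    v = 1.0 / rng.gamma(shape=K + 1.0, scale=1.0 / (0.5 * a @ a + 1e-3))
    draws.append(v)

print(f"posterior mean variance: {np.mean(draws[500:]):.3f} "
      f"(true prior variance {0.5**2:.3f})")
```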
{ "text": "EMC design for the actuators of FAST reflector: The active reflector is one of the three main innovations of the Five-hundred-meter Aperture Spherical radio Telescope (FAST). The deformation of such a huge spherically shaped reflector into different transient parabolic shapes is achieved by using 2225 hydraulic actuators which change the position of the 2225 nodes through the connected down-tied cables. For each different tracking process of the telescope, more than 1/3 of these 2225 actuators must be in operation to tune the parabolic aperture accurately to meet the surface error restriction. This means that some of these actuators are inevitably located within the main beam of the receiver, and the Electromagnetic Interference (EMI) from the actuators must be mitigated to ensure the scientific output of the telescope. Based on the threshold level of interference detrimental to radio astronomy presented in ITU-R Recommendation RA.769 and on EMI measurements, the shielding efficiency (SE) requirement of each actuator is set to 80dB in the frequency range from 70MHz to 3GHz. Therefore, Electromagnetic Compatibility (EMC) was taken into account in the actuator design through measures such as power line filters, optical fibers, shielding enclosures and other structural measures. In 2015, all the actuators were installed at the FAST site. To date, no apparent EMI from the actuators has been detected by the receiver, which demonstrates the effectiveness of these EMC measures.", "category": "astro-ph_IM" }, { "text": "Rethinking the modeling of the instrumental response of telescopes with a differentiable optical model: We propose a paradigm shift in the data-driven modeling of the instrumental response field of telescopes. By adding a differentiable optical forward model into the modeling framework, we change the data-driven modeling space from the pixels to the wavefront. This allows us to transfer a great deal of complexity from the instrumental response into the forward model, while still being able to adapt to the observations and remaining data-driven. Our framework offers a way forward to building powerful models that are physically motivated, interpretable, and that do not require special calibration data. We show that for a simplified setting of a space telescope, this framework represents a real performance breakthrough compared to existing data-driven approaches, with reconstruction errors decreasing 5-fold at observation resolution and more than 10-fold for a 3x super-resolution. We successfully model chromatic variations of the instrument's response using only noisy broad-band in-focus observations.", "category": "astro-ph_IM" }, { "text": "Radiative transfer and molecular data for astrochemistry (Review): The estimation of molecular abundances in interstellar clouds from spectroscopic observations requires radiative transfer calculations, which depend on basic molecular input data. This paper reviews recent developments in the fields of molecular data and radiative transfer. The first part is an overview of radiative transfer techniques, along with a \"road map\" showing which technique should be used in which situation. The second part is a review of measurements and calculations of molecular spectroscopic and collisional data, with a summary of recent collisional calculations and suggested modeling strategies if collision data are unavailable.
The paper concludes with an overview of future developments and needs in the areas of radiative transfer and molecular data.", "category": "astro-ph_IM" }, { "text": "Performance results of HESP physical model: As a continuation of the published work on the model-based calibration technique with HESP (Hanle Echelle Spectrograph) as a case study, in this paper we present the performance results of the technique. We also describe how the open parameters were chosen in the model for optimization, the accuracy of the glass data, and the handling of discrepancies. It is observed through simulations that discrepancies in the glass data can be identified but not quantified. Having accurate glass data is therefore important, and such data can be obtained from the glass manufacturers. The model's performance in various aspects is presented using the ThAr calibration frames from HESP during its pre-shipment tests. The accuracy of the model predictions, the comparison of its wavelength calibration with conventional empirical fitting, the behaviour of the open parameters in optimization, the model's ability to track instrumental drifts in the spectrum, and the performance of the double fibres are discussed. It is observed that the optimized model is able to predict to high accuracy the drifts in the spectrum caused by environmental fluctuations. It is also observed that the pattern in the spectral drifts across the 2D spectrum, which varies from image to image, is predictable with the optimized model. We also discuss the possible science cases where the model can contribute.", "category": "astro-ph_IM" }, { "text": "Estimating effective wind speed from Gemini Planet Imager's adaptive optics data using covariance maps: The Earth's turbulent atmosphere results in speckled and blurred images of astronomical objects when observed by ground-based visible and near-infrared telescopes. Adaptive optics (AO) systems are employed to reduce these atmospheric effects by using wavefront sensors (WFS) and deformable mirrors. Some AO systems are not fast enough to correct for strong, fast, high-turbulence wind layers, leading to the wind butterfly effect, or wind-driven halo, reducing contrast capabilities in coronagraphic images. Estimating the effective wind speed of the atmosphere allows us to calculate the atmospheric coherence time. This is not only an important parameter to understand for site characterization but could also be used to help remove the wind butterfly in post-processing. Here we present a method for estimating the atmospheric effective wind speed from spatio-temporal covariance maps generated from pseudo open-loop (POL) WFS data. POL WFS data are used as they aim to reconstruct the full wavefront information when operating in closed loop. The covariance maps show how different atmospheric turbulent layers traverse the telescope. Our method successfully recovered the effective wind speed from simulated WFS data generated with the soapy python library. The simulated atmospheric turbulence profiles consist of two turbulent layers of varying strengths and velocities. The method has also been applied to Gemini Planet Imager (GPI) AO WFS data. This gives insight into how the effective wind speed can affect the wind-driven halo seen in the AO image point spread function.
In this paper, we will present results from simulated and GPI WFS data.", "category": "astro-ph_IM" }, { "text": "The Scaling of the RMS with Dwell Time in NANOGrav Pulsars: Pulsar Timing Arrays (PTAs) are collections of well-timed millisecond pulsars that are being used as detectors of gravitational waves (GWs). Given current sensitivity, projected improvements in PTAs and the predicted strength of the GW signals, the detection of GWs with PTAs could occur within the next decade. One way we can improve a PTA is to reduce the measurement noise present in the pulsar timing residuals. If the pulsars included in the array display uncorrelated noise, the root mean square (RMS) of the timing residuals is predicted to scale as $\mathrm{T}^{-1/2}$, where T is the dwell time per observation. In this case, the sensitivity of the array can be increased by increasing T. We studied the 17 pulsars in the five-year North American Nanohertz Observatory for Gravitational Waves (NANOGrav) data set to determine if the noise in the timing residuals of the pulsars observed was consistent with this property. For comparison, we performed the same analysis on PSR B1937+21, a pulsar that is known to display red noise. With this method, we find that 15 of the 17 NANOGrav pulsars have timing residuals consistent with the inverse-square-root law. The data also suggest that these 15 pulsars can be observed for up to eight times as long while still exhibiting an RMS that scales as $\mathrm{T}^{-1/2}$.", "category": "astro-ph_IM" },
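The quoted scaling follows from a standard averaging argument (not spelled out in the abstract itself): for white noise, a dwell of length $T$ contains $N = T/\delta t$ independent samples of variance $\sigma^2$, so

$$\bar r = \frac{1}{N}\sum_{i=1}^{N} r_i, \qquad \operatorname{Var}(\bar r) = \frac{\sigma^2}{N} \;\Longrightarrow\; \mathrm{RMS}(\bar r) = \frac{\sigma}{\sqrt{N}} = \sigma\sqrt{\frac{\delta t}{T}} \;\propto\; \mathrm{T}^{-1/2}.$$

Red noise, as in PSR B1937+21, correlates the samples and breaks the $1/\sqrt{N}$ averaging, which is why the RMS of such pulsars stops improving with longer dwell times.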
{ "text": "Citizen Science Astronomy with a Network of Small Telescope: The Launch and Deployment of JWST: We present a coordinated campaign of observations to monitor the brightness of the James Webb Space Telescope (JWST) as it travels toward the second Earth-Sun Lagrange point and unfolds, using the network of Unistellar digital telescopes. These observations, collected by citizen astronomers across the world, allowed us to detect specific phases such as the separation from the booster, glare due to a change of orientation after a maneuver, the unfurling of the sunshield, and deployment of the primary mirror. After deployment of the sunshield on January 6, 2022, the 6-h lightcurve has a significant amplitude and shows small variations due to the artificial rotation of the space telescope during commissioning. These variations could be due to the deployment of the primary mirror or to changes in the orientation of the space telescope. This work illustrates the power of a worldwide array of small telescopes, operated by citizen astronomers, to conduct large scientific campaigns over a long timeframe. In the future, our network and others will continue to monitor JWST to detect potential degradation caused by the space environment by comparing the evolution of the lightcurve.", "category": "astro-ph_IM" }, { "text": "A convergent blind deconvolution method for post-adaptive-optics astronomical imaging: In this paper we propose a blind deconvolution method which applies to data perturbed by Poisson noise. The objective function is a generalized Kullback-Leibler divergence, depending on both the unknown object and unknown point spread function (PSF), without the addition of regularization terms; constrained minimization, with suitable convex constraints on both unknowns, is considered. The problem is nonconvex and we propose to solve it by means of an inexact alternating minimization method, whose global convergence to stationary points of the objective function has been recently proved in a general setting. The method is iterative and each iteration, also called outer iteration, consists of alternating an update of the object and the PSF by means of fixed numbers of iterations, also called inner iterations, of the scaled gradient projection (SGP) method. The use of SGP has two advantages: first, it allows us to prove global convergence of the blind method; second, it allows the introduction of different constraints on the object and the PSF. The specific constraint on the PSF, besides non-negativity and normalization, is an upper bound derived from the so-called Strehl ratio, which is the ratio between the peak values of an aberrated and of a perfect wavefront. Therefore a typical application is the imaging by modern telescopes equipped with adaptive optics systems for partial correction of the aberrations due to atmospheric turbulence. In the paper we describe the algorithm and we recall the results leading to its convergence. Moreover we illustrate its effectiveness by means of numerical experiments whose results indicate that the method, pushed to convergence, is very promising in the reconstruction of non-dense stellar clusters. The case of more complex astronomical targets is also considered, but in this case regularization by early stopping of the outer iterations is required.", "category": "astro-ph_IM" },
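The alternating structure (outer iterations that cycle fixed numbers of inner updates on the object and on the PSF, with constraints re-imposed on the PSF) can be sketched with plain Richardson-Lucy multiplicative updates standing in for the paper's SGP inner solver; RL also maintains non-negativity under Poisson data. Sizes, the toy star field and iteration counts below are invented.

```python
# Simplified alternating blind deconvolution under Poisson noise
# (illustrative; RL updates replace the paper's SGP inner iterations).
import numpy as np

def conv(a, b):
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def corr(a, b):   # adjoint of circular convolution by b
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

def rl_step(est, other, data):
    """One Richardson-Lucy multiplicative update of `est`."""
    ratio = data / np.maximum(conv(est, other), 1e-12)
    return est * corr(ratio, other) / max(other.sum(), 1e-12)

rng = np.random.default_rng(6)
n = 32
obj = np.zeros((n, n)); obj[[8, 20, 13], [9, 22, 25]] = [200, 150, 180]
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-0.5 * (((yy - n//2) / 1.5)**2 + ((xx - n//2) / 1.5)**2))
psf = np.fft.ifftshift(psf / psf.sum())
data = rng.poisson(np.maximum(conv(obj, psf), 0)).astype(float)

o = np.full((n, n), data.mean())              # flat starting guesses
p = np.full((n, n), 1.0 / n**2)
for outer in range(100):                      # outer iterations
    for _ in range(3): o = rl_step(o, p, data)   # inner: object block
    for _ in range(3): p = rl_step(p, o, data)   # inner: PSF block
    p = np.clip(p, 0, None); p /= p.sum()        # PSF constraints

print("mean |model - data| after blind iterations:",
      round(float(np.abs(conv(o, p) - data).mean()), 2))
```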
{ "text": "The Near-Infrared Spectrograph (NIRSpec) on the James Webb Space Telescope IV. Capabilities and predicted performance for exoplanet characterization: The Near-Infrared Spectrograph (NIRSpec) on the James Webb Space Telescope (JWST) is a very versatile instrument, offering multiobject and integral field spectroscopy with varying spectral resolution ($\sim$30 to $\sim$3000) over a wide wavelength range from 0.6 to 5.3 micron, enabling scientists to study many science themes ranging from the first galaxies to bodies in our own Solar System. In addition to its integral field unit and support for multiobject spectroscopy, NIRSpec features several fixed slits and a wide aperture specifically designed to enable high-precision time-series and transit as well as eclipse observations of exoplanets. In this paper we present its capabilities regarding time-series observations, in general, and transit and eclipse spectroscopy of exoplanets in particular. Due to JWST's large collecting area and NIRSpec's excellent throughput, spectral coverage, and detector performance, this mode will allow scientists to characterize the atmospheres of exoplanets with unprecedented sensitivity.", "category": "astro-ph_IM" }, { "text": "DQSEGDB: A time-interval database for storing gravitational wave observatory metadata: The Data Quality Segment Database (DQSEGDB) software is a database service, backend API, frontend graphical web interface, and client package used by the Laser Interferometer Gravitational-Wave Observatory (LIGO), Virgo, GEO600 and the Kamioka Gravitational wave detector for storing and accessing metadata describing the status of their detectors. The DQSEGDB has been used in the analysis of all published detections of gravitational waves in the advanced detector era. The DQSEGDB currently stores roughly 600 million metadata entries and responds to roughly 600,000 queries per day with an average response time of 0.223 ms.", "category": "astro-ph_IM" },
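The core operation behind a segment database of this kind is interval arithmetic on [start, stop) GPS segments, for example intersecting a detector's science-mode segments with a data-quality flag. A minimal sketch of segment intersection (illustrative only; this is not the DQSEGDB API, and the segment values are invented):

```python
# Intersect two sorted, disjoint lists of [start, stop) time segments.
def intersect(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo, hi = max(a[i][0], b[j][0]), min(a[i][1], b[j][1])
        if lo < hi:
            out.append((lo, hi))          # overlapping part, if any
        if a[i][1] < b[j][1]:             # advance the segment ending first
            i += 1
        else:
            j += 1
    return out

science = [(1000, 1800), (2000, 3000)]    # hypothetical GPS segments
analysis_ready = [(900, 1200), (1500, 2500)]
print(intersect(science, analysis_ready))
# -> [(1000, 1200), (1500, 1800), (2000, 2500)]
```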
{ "text": "A Case Study in Astronomical 3-D Printing: The Mysterious Eta Carinae: 3-D printing moves beyond interactive 3-D graphics and provides an excellent tool for both visual and tactile learners, since 3-D printing can now easily communicate complex geometries and full color information. Some limitations of interactive 3-D graphics are also alleviated by 3-D printable models, including issues of limited software support, portability, accessibility, and sustainability. We describe the motivations, methods, and results of our work on using 3-D printing (1) to visualize and understand the Eta Car Homunculus nebula and central binary system and (2) for astronomy outreach and education, specifically, with visually impaired students. One new result we present is the ability to 3-D print full-color models of Eta Car's colliding stellar winds. We also demonstrate how 3-D printing has helped us communicate our improved understanding of the detailed structure of Eta Car's Homunculus nebula and central binary colliding stellar winds, and their links to each other. Attached to this article are full-color 3-D printable files of both a red-blue Homunculus model and the Eta Car colliding stellar winds at orbital phase 1.045. 3-D printing could prove to be vital to how astronomers reach out and share their work with each other, the public, and new audiences.", "category": "astro-ph_IM" }, { "text": "A Fourier optics approach to evaluate the astrometric performance of MICADO: We present our investigation into the impact of wavefront errors on high-accuracy astrometry using Fourier optics. MICADO, the upcoming near-IR imaging instrument for the Extremely Large Telescope, will offer capabilities for relative astrometry with an accuracy of 50 microarcseconds ($\mu$as). Due to the large size of the point spread function (PSF) compared to the astrometric requirement, the detailed shape and position of the PSF on the detector must be well understood. Furthermore, because the atmospheric dispersion corrector of MICADO is a moving component within an otherwise mostly static instrument, it might not be sufficient to perform a simple pre-observation calibration. Therefore, we have built a Fourier optics framework, allowing us to evaluate the small changes in the centroid position of the PSF as a function of wavefront error. For a complete evaluation, we model both the low-order surface form errors, using Zernike polynomials, and the mid- and high-spatial frequencies, using Power Spectral Density analysis. The described work will then make it possible, performing full diffractive beam propagation, to assess the expected astrometric performance of MICADO.", "category": "astro-ph_IM" }, { "text": "Status of the Medium-Sized Telescope for the Cherenkov Telescope Array: The Cherenkov Telescope Array (CTA) is an international project for the next-generation ground-based observatory for gamma-ray astronomy in the energy range from 20 GeV to 300 TeV. The sensitivity in the core energy range will be dominated by up to 40 Medium-Sized Telescopes (MSTs). The MSTs, of Davies-Cotton type with a 12 m diameter reflector, are currently in the prototype phase. A full-size mechanical telescope structure has been assembled in Berlin. The telescope is partially equipped with different mirror prototypes, which are currently being tested and evaluated for performance characteristics. A report concentrating on the details of the telescope structure, the drive assemblies and the optics of the MST prototype will be given.", "category": "astro-ph_IM" }, { "text": "The antinucleus annihilation reconstruction algorithm of the GAPS experiment: The General AntiParticle Spectrometer (GAPS) is an Antarctic balloon-borne detector designed to measure low-energy cosmic antinuclei (< 0.25 GeV/n), with a specific focus on antideuterons, as a distinctive signal from dark matter annihilation or decay in the Galactic halo. The instrument consists of a tracker, made up of ten planes of lithium-drifted silicon Si(Li) detectors, surrounded by a plastic scintillator Time-of-Flight system. GAPS uses a novel particle identification method based on exotic atom capture and decay with the emission of pions, protons, and atomic X-rays from a common annihilation vertex.
 An important ingredient for the antinuclei identification is the reconstruction of the \"annihilation star\" topology. A custom antinucleus annihilation reconstruction algorithm, called the \"star-finding\" algorithm, was developed to reconstruct the annihilation star fully, determining the annihilation vertex position and reconstructing the tracks of the primary and secondary charged particles. The reconstruction algorithm and its performance were studied on simulated data obtained with the Geant4-based GAPS simulation software, which fully reproduces the detector geometry. This custom algorithm was found to have better performance in vertex resolution and reconstruction efficiency compared with a standard Hough-3D algorithm.", "category": "astro-ph_IM" }, { "text": "Adaptive Kernel Density Estimation proposal in gravitational wave data analysis: The Markov chain Monte Carlo approach is frequently used within the Bayesian framework to sample the target posterior distribution. Its efficiency strongly depends on the proposal used to build the chain. The best jump proposal is the one that closely resembles the unknown target distribution; therefore, we suggest an adaptive proposal based on Kernel Density Estimation (KDE). We group parameters of the model according to their correlation and build a KDE based on the already-accepted points for each group. We adapt the KDE-based proposal until it stabilizes. We argue that such a proposal could be helpful in applications where the data volume is increasing and in hyper-model sampling. We tested it on several astrophysical datasets (IPTA and LISA) and have shown that in some cases the KDE-based proposal also helps to reduce the autocorrelation length of the chains. The efficiency of this proposal is reduced in the case of strong correlations among a large group of parameters.", "category": "astro-ph_IM" },
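A toy version of the adaptive KDE proposal just described: run a short random-walk warm-up, then periodically rebuild a `scipy.stats.gaussian_kde` on the accepted samples and use it as an independence proposal, with the KDE density entering the Hastings ratio. The target, adaptation schedule and all constants are invented; adaptation is frozen after a fixed point so the late chain targets the correct distribution.

```python
# Adaptive KDE independence proposal in Metropolis-Hastings
# (illustrative sketch, not the paper's code).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
# Banana-shaped toy target: x0 ~ N(0,1), x1 | x0 ~ N(x0^2, 0.5)
log_target = lambda x: -0.5 * x[0]**2 - (x[1] - x[0]**2)**2

x = np.zeros(2); chain = [x]; kde = None
for it in range(20_000):
    if kde is None:                          # warm-up: random-walk proposal
        prop = x + 0.5 * rng.normal(size=2)
        log_alpha = log_target(prop) - log_target(x)
    else:                                    # independence proposal from KDE
        prop = kde.resample(1).ravel()
        log_alpha = (log_target(prop) + kde.logpdf(x)[0]
                     - log_target(x) - kde.logpdf(prop)[0])
    if np.log(rng.uniform()) < log_alpha:
        x = prop
    chain.append(x)
    if it in (2_000, 5_000, 10_000):         # adapt, then freeze the proposal
        kde = gaussian_kde(np.array(chain[len(chain)//2:]).T)

print("posterior mean:", np.round(np.mean(chain[10_000:], axis=0), 2))
# expect roughly [0, 1] for this target
```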
{ "text": "The LAUE project for broadband gamma-ray focusing lenses: We present the LAUE project, devoted to developing advanced technology for building a long-focal-length Laue lens for soft gamma-ray astronomy (80-600 keV). The final goal is to develop focusing optics that can improve the current sensitivity in the above energy band by two orders of magnitude.", "category": "astro-ph_IM" }, { "text": "All-sky Radio SETI: Over the last decade, Aperture Arrays (AA) have successfully replaced parabolic dishes as the technology of choice at low radio frequencies - good examples are the MWA, LWA and LOFAR. Aperture Array based telescopes present several advantages, including sensitivity to the sky over a very wide field-of-view. As digital and data processing systems continue to advance, an all-sky capability is set to emerge, even at GHz frequencies. We argue that, assuming SETI events are both rare and transitory in nature, an instrument with a large field-of-view, operating around the so-called water-hole (1-2 GHz), might offer several advantages over contemporary searches. Sir Arthur C. Clarke was the first to recognise the potential importance of an all-sky radio SETI capability, as presented in his book, Imperial Earth. As part of the global SKA (Square Kilometre Array) project, a Mid-Frequency Aperture Array (MFAA) prototype known as MANTIS (Mid-Frequency Aperture Array Transient and Intensity-Mapping System) is now being considered as a precursor for SKA-2. MANTIS can be seen as a first step towards an all-sky radio SETI capability at GHz frequencies. This development has the potential to transform the field of SETI research, in addition to several other scientific programmes.", "category": "astro-ph_IM" }, { "text": "Scattering efficiencies measurements of soft protons at grazing incidence from an Athena Silicon Pore Optics sample: Soft protons are a potential threat for X-ray missions using grazing incidence optics, as once focused onto the detectors they can contribute to increasing the background and possibly induce radiation damage as well. The assessment of these undesired effects is especially relevant for the future ESA X-ray mission Athena, due to its large collecting area. To prevent degradation of the instrumental performance, which ultimately could compromise some of the scientific goals of the mission, the adoption of ad-hoc magnetic diverters is envisaged. Dedicated laboratory measurements are fundamental to understand the mechanisms of proton forward scattering, validate the application of the existing physical models to the Athena case and support the design of the diverters. In this paper we report on scattering efficiency measurements of soft protons impinging at grazing incidence onto a Silicon Pore Optics sample, conducted in the framework of the EXACRAD project. Measurements were taken at two different energies, ~470 keV and ~170 keV, and at four different scattering angles between 0.6 deg and 1.2 deg. The results are generally consistent with previous measurements conducted on eROSITA mirror samples, and as expected the peak of the scattering efficiency is found around the angle of specular reflection.", "category": "astro-ph_IM" }, { "text": "Mining for Strong Gravitational Lenses with Self-supervised Learning: We employ self-supervised representation learning to distill information from 76 million galaxy images from the Dark Energy Spectroscopic Instrument Legacy Imaging Surveys' Data Release 9. Targeting the identification of new strong gravitational lens candidates, we first create a rapid similarity search tool to discover new strong lenses given only a single labelled example.
We then show how training a simple linear classifier on the self-supervised representations, requiring only a few minutes on a CPU, can automatically classify strong lenses with great efficiency. We present 1192 new strong lens candidates that we identified through a brief visual identification campaign, and release an interactive web-based similarity search tool and the top network predictions to facilitate crowd-sourced rapid discovery of additional strong gravitational lenses and other rare objects: https://github.com/georgestein/ssl-legacysurvey.", "category": "astro-ph_IM" },
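Both steps above reduce to cheap linear algebra once the self-supervised embeddings exist. The sketch below fakes the embeddings with random vectors (the real ones come from the trained network) and shows the single-example cosine-similarity search plus a linear probe; the mock labels are invented for the demo.

```python
# Similarity search and linear probe on frozen embeddings
# (illustrative sketch; embeddings and labels are mocked).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
emb = rng.normal(size=(100_000, 128))            # stand-in for 76M embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

query = emb[123]                                 # the single labelled lens
sims = emb @ query                               # cosine similarity
top = np.argsort(sims)[::-1][1:11]               # 10 nearest neighbours
print("similarity-search candidates:", top)

# Linear probe on a small labelled subset (mock labels)
idx = rng.choice(len(emb), 2_000, replace=False)
labels = (emb[idx, 0] > 0.05).astype(int)
clf = LogisticRegression(max_iter=1_000).fit(emb[idx], labels)
scores = clf.decision_function(emb)              # rank the full sample
print("top-ranked by linear probe:", np.argsort(scores)[::-1][:10])
```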
{ "text": "The Radar Echo Telescope for Cosmic Rays: Pathfinder Experiment for a Next-Generation Neutrino Observatory: The Radar Echo Telescope for Cosmic Rays (RET-CR) is a recently initiated experiment designed to detect the englacial cascade of a cosmic-ray initiated air shower via in-ice radar, toward the goal of a full-scale, next-generation experiment to detect ultra-high-energy neutrinos in polar ice. For cosmic rays with a primary energy greater than 10 PeV, roughly 10% of an air-shower's energy reaches the surface of a high-elevation ice sheet ($\gtrsim$2 km) concentrated into a radius of roughly 10 cm. This penetrating shower core creates an in-ice cascade many orders of magnitude more dense than the preceding in-air cascade. This dense cascade can be detected via the radar echo technique, where transmitted radio is reflected from the ionization deposit left in the wake of the cascade. RET-CR will test the radar echo method in nature, with the in-ice cascade of a cosmic-ray initiated air shower serving as a test beam. We present the projected event rate and sensitivity based upon a three-part simulation using CORSIKA, GEANT4, and RadioScatter. RET-CR expects $\sim$1 radar echo event per day.", "category": "astro-ph_IM" }, { "text": "AstroCloud: A Distributed Cloud Computing and Application Platform for Astronomy: Virtual Observatory (VO) is a data-intensive online astronomical research and education environment, which takes advantage of advanced information technologies to achieve seamless and global access to astronomical information. AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) project, and also a physically distributed platform which integrates many tasks such as telescope access proposal management, data archiving, data quality control, data release and open access, and cloud-based data processing and analysis. It consists of five application channels, i.e. observation, data, tools, cloud and public, and acts as a full-lifecycle management system and gateway for astronomical data and telescopes. Physically, the platform is currently hosted in six cities, i.e. Beijing, Nanjing, Shanghai, Kunming, Lijiang and Urumqi, and serves more than 17 thousand users. Achievements from international Virtual Observatories and Cloud Computing are heavily adopted. In this paper, the background of the project, its architecture, the Cloud Computing environment, key features of the system, current status and future plans are introduced.", "category": "astro-ph_IM" }, { "text": "Portable Adaptive Optics for Exoplanet Imaging: The Portable Adaptive Optics (PAO) is a low-cost and compact system, designed for 4-meter-class telescopes that have no adaptive optics (AO) because of the physical space limitation at the Nasmyth or Cassegrain focus and the historically high cost of conventional AO. The initial scientific observations of the PAO are focused on the direct imaging of exoplanets and sub-stellar companions. This paper discusses the PAO concept and the associated high-contrast imaging performance in our recent observational runs. PAO is delivering a Strehl ratio better than 0.6 in H band under median seeing conditions of 1 arcsec. Combined with our dedicated image rotation and subtraction (IRS) technique and the optimized IRS (O-IRS) algorithm, the average contrast ratio for a Vmag 5-9 primary star is 1.3x10^-5 and 3.3x10^-6 at an angular distance of 0.36 arcsec for exposure times of 7 minutes and 2 hours, respectively. PAO has successfully recovered the known exoplanet kappa And b in our recent observation at the 3.5-meter ARC telescope at Apache Point Observatory. We have performed the associated astrometry and photometry analysis of the recovered kappa And b planet, which gives a projected separation of 0.984 +/- 0.05 arcsec, a position angle of 51.1 +/- 0.5 degrees, and a mass of 10.15 (+2.19/-1.255) MJup. These results demonstrate that PAO can be used for direct imaging of exoplanets with medium-sized telescopes.", "category": "astro-ph_IM" },
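The IRS step itself is simple to illustrate: the stellar halo is approximately centro-symmetric, so subtracting a 180-degree-rotated copy of the frame suppresses it while a planet survives as a positive/negative pair. This toy (invented image, scipy rotation) shows only that basic step, not the optimized O-IRS algorithm:

```python
# Toy image rotation and subtraction (IRS) for halo removal
# (illustrative only).
import numpy as np
from scipy.ndimage import rotate

n = 101
yy, xx = np.mgrid[:n, :n] - n // 2
halo = 1e4 * np.exp(-(xx**2 + yy**2) / (2 * 12.0**2))       # symmetric halo
planet = 50.0 * np.exp(-((xx - 15)**2 + yy**2) / (2 * 1.5**2))
image = halo + planet

residual = image - rotate(image, 180, reshape=False)         # IRS step
print("halo peak %.0f -> residual at center %.2f"
      % (halo.max(), abs(residual[n//2, n//2])))
print("planet signal surviving in residual: %.1f" % residual[n//2, n//2 + 15])
```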
{ "text": "Finding the UV-Visible Path Forward: Proceedings of the Community Workshop to Plan the Future of UV/Visible Space Astrophysics: We present the science cases and technological discussions that came from the workshop entitled \"Finding the UV-Visible Path Forward\" held at NASA GSFC June 25-26, 2015. The material presented outlines the compelling science that can be enabled by a next generation space-based observatory dedicated for UV-visible science, the technologies that are available to include in that observatory design, and the range of possible alternative launch approaches that could also enable some of the science. The recommendations to the Cosmic Origins Program Analysis Group from the workshop attendees on possible future development directions are outlined.", "category": "astro-ph_IM" }, { "text": "The possibility of determining open-cluster parameters from BVRI photometry: In the last decades we have witnessed an increase in studies of open clusters of the Galaxy, especially because of the good determination, over a wide range of values, of parameters such as age, distance, reddening, and proper motion. The reliable determination of the parameters strongly depends on the photometry available, and especially on the U filter, which is used to obtain the color excess E(B-V) through the (U-B) vs. (B-V) color-color diagram by fitting a zero-age main sequence. Owing to the difficulty of performing photometry in the U band, many authors have tried to obtain E(B-V) without the filter. But because of the near linearity of the color-color diagrams that use the other bands, combined with the fact that most fitting procedures are highly subjective (many done \"by eye\"), the reliability of those results has always been questioned. Our group has recently developed a tool that performs isochrone fitting in open-cluster photometric data with a global optimization algorithm, which removes the need to visually perform the fits and thus removes most of the related subjectivity. Here we apply our method to a set of synthetic clusters and two observed open clusters (Trumpler 1 and Melotte 105) using only photometry for the BVRI bands. Our results show that, considering the cluster structural variance caused only by photometric and Poisson sampling errors, our method is able to recover the synthetic cluster parameters with errors of less than 10% for a wide range of ages, distances, and reddening, which clearly demonstrates its potential. The results obtained for Trumpler 1 and Melotte 105 also agree well with previous literature values.", "category": "astro-ph_IM" }, { "text": "A Simple Proposal for Radial 3D Needlets: We present here a simple construction of a wavelet system for the three-dimensional ball, which we label \emph{Radial 3D Needlets}. The construction envisages a data collection environment where an observer located at the centre of the ball is surrounded by concentric spheres with the same pixelization at different radial distances, for any given resolution. The system is then obtained by weighting the projection operator built on the corresponding set of eigenfunctions, and performing a discretization step which turns out to be computationally very convenient. The resulting wavelets can be shown to have very good localization properties in the real and harmonic domains; their implementation is computationally very convenient, and they allow for exact reconstruction as they form a tight frame system. Our theoretical results are supported by an extensive numerical analysis.", "category": "astro-ph_IM" }, { "text": "The upcoming spectroscopic powerhouses at the Isaac Newton Group of Telescopes: The Isaac Newton Group of Telescopes is completing a strategic change for the scientific use of its two telescopes, the 4.2-m William Herschel Telescope (WHT) and the 2.5-m Isaac Newton Telescope (INT). After more than 30 years operating as multi-purpose telescopes, the telescopes will soon complete their shift to nearly single-instrument operation dominated by large surveys.
 At the WHT, the WEAVE multi-fibre spectrograph is being commissioned in late 2022. Science surveys are expected to launch in 2023. 30% of the available time will be offered as open time. For the INT, construction of HARPS-3, a high-resolution ultra-stable spectrograph for extra-solar planet studies, is underway, with deployment planned for late 2024. The INT itself is being modernised and will operate as a robotic telescope. An average of 40% of the time will be offered as open time.
 The ING will maintain its student programme. Plans call for moving student work from the INT to the WHT once the INT starts operating robotically.", "category": "astro-ph_IM" }, { "text": "Audible universe: A multi-disciplinary team recently came together online to discuss the application of sonification in astronomy, focussing on the effective use of sound for scientific discovery and for improving accessibility to astronomy research and education. Here we provide a meeting report.", "category": "astro-ph_IM" }, { "text": "Hierarchical approach to matched filtering using a reduced basis: Searching for gravitational waves from compact binary coalescence (CBC) is performed by matched filtering the observed strain data from gravitational-wave observatories against a discrete set of waveform templates which are designed to accurately approximate the expected gravitational-wave signal and are chosen to efficiently cover a target search region. The computational cost of matched filtering scales with both the number of templates required to cover a parameter space and the in-band duration of the waveform.
Both of these factors increase in difficulty as the current observatories improve in sensitivity, especially at low frequencies, and may pose challenges for third-generation observatories. Reducing the cost of matched filtering would make searches of future detector data more tractable. In addition, it would be easier to conduct searches that incorporate the effects of eccentricity or precession, or that target light sources (e.g. subsolar masses). We present a hierarchical scheme based on a reduced basis method to decrease the computational cost of conducting a matched-filter based search. Compared to the current methods, we estimate, without any loss in sensitivity, a speedup by a factor of ~10 for sources with a signal-to-noise ratio (SNR) of at least 6.0, and a factor of ~6 for an SNR of at least 5. Our method is dominated by linear operations, which are highly parallelizable. Therefore, we implement our algorithm using graphical processing units (GPUs) and evaluate commercially motivated metrics to demonstrate the efficiency of GPUs in CBC searches. Our scheme can be extended to generic CBC searches and allows for efficient matched filtering using GPUs.", "category": "astro-ph_IM" },
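A compact sketch of the reduced-basis hierarchy (our own toy, not the paper's pipeline): compress a bank of similar templates with an SVD, filter the data against the few basis vectors, reconstruct approximate per-template SNRs, and run exact filters only on templates that cross a lowered first-stage threshold. The chirp-like bank, thresholds and sizes below are invented.

```python
# Hierarchical matched filtering with a reduced (SVD) basis
# (illustrative sketch on white noise; real searches whiten the data).
import numpy as np

rng = np.random.default_rng(9)
n, m, k = 4096, 500, 20                      # samples, templates, basis size
t = np.linspace(0, 1, n)
bank = np.array([np.sin(2*np.pi*(50 + 0.2*j)*t**2) for j in range(m)])
bank /= np.linalg.norm(bank, axis=1, keepdims=True)

U, S, Vt = np.linalg.svd(bank, full_matrices=False)
basis = Vt[:k]                               # top-k reduced basis
coeffs = bank @ basis.T                      # template expansion coefficients

data = rng.normal(size=n) + 8.0 * bank[250]  # noise + a quiet signal
snr_basis = basis @ data                     # k filters instead of m
snr_approx = coeffs @ snr_basis              # approximate SNR, every template

stage1 = np.flatnonzero(np.abs(snr_approx) > 4.0)  # lowered 1st-stage cut
snr_full = bank[stage1] @ data                     # exact follow-up filters
best = stage1[np.argmax(np.abs(snr_full))]
print(f"followed up {len(stage1)}/{m} templates; best match: template {best}")
```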
We demonstrated the capabilities\nof this new DOE through observations of SN 2013fj as a case study at the Asiago\nCopernico Telescope, where the AFOSC spectrograph is available.", "category": "astro-ph_IM" }, { "text": "The fiber-fed preslit of GIANO at T.N.G: GIANO is a cryogenic spectrograph located at the T.N.G. (Spain) and commissioned\nin 2013. It works in the range 950-2500 nm with a resolving power of 50000.\nThis instrument was designed and built for direct feeding from the telescope\n[2]. However, due to constraints imposed on the telescope interfacing during\nthe pre-commissioning phase, it had to be positioned on the rotating building,\nfar from the telescope focus. Therefore, a new interface to the telescope,\nbased on IR-transmitting ZBLAN fibers with an 85 $\mu$m core, was developed.\nOriginally designed to work directly at the $f/11$ Nasmyth focus of the\ntelescope, in 2011 it was decided to feed it with fibers instead. The beam from\nthe telescope is focused on a double fiber bundle by a Preslit Optical Bench\nattached to the Nasmyth A interface of the telescope. This Optical Bench\ncontains the fiber feeding system and other important features such as a guiding\nsystem, a fiber viewer, a fiber-feed calibration lamp and a nodding facility\nbetween the two fibers. The use of two fibers allows us to record in the\nechellogram two spectra side by side in the same acquisition: one of the star\nand the other of the sky, or, simultaneously, of the star and a calibration\nlamp. Before entering the cryostat, the light from the fiber is collected by a\nsecond Preslit Optical Bench attached directly to the GIANO cryostat: on this\nbench the correct f-number to illuminate the cold stop is generated, and an\nimage slicer is placed on the same bench to increase the efficiency of the\nsystem.", "category": "astro-ph_IM" }, { "text": "Standard FITS template for simulated astrophysical scenes with the\n WFIRST coronagraph: The science investigation teams (SITs) for the WFIRST coronagraphic\ninstrument have begun studying the capabilities of the instrument to directly\nimage light reflected from exoplanets at contrasts down to ~10^-9 with respect\nto the stellar flux. Detection of point sources at these high contrasts\nrequires yield estimates and detailed modeling of the image of the planetary\nsystem as it propagates through the telescope optics. While the SITs might\ngenerate custom astrophysical scenes, the integrated modeling, which propagates\nthose scenes through the internal speckle field, is typically done at JPL. In\nthis white paper, we present a standard file format to ensure a single\ndistribution system between those who produce the raw astrophysical scenes, and\nJPL modelers who incorporate those scenes into their optical modeling. At its\ncore, our custom file format uses FITS files, and incorporates standards on\npackaging astrophysical scenes. This includes spectral and astrometric\ninformation for planetary and stellar point sources, zodiacal light and\nextragalactic sources that may appear as contaminants.
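As a schematic of what packaging such a scene can look like in practice, the sketch below writes a toy scene with astropy: a primary header plus a binary table of point sources carrying positions and gridded spectra. All keyword and column names are invented for illustration and are not the standard defined in the entry:

```python
import numpy as np
from astropy.io import fits

# Toy scene: three point sources, each with a 64-sample spectrum
nsrc, nlam = 3, 64
lam = np.linspace(0.5, 1.0, nlam)                     # microns
spectra = np.ones((nsrc, nlam)) * [[1e-9], [5e-10], [2e-10]]

primary = fits.PrimaryHDU()
primary.header["SCENE"] = ("toy_scene_v0", "illustrative scene identifier")
primary.header["LAMMIN"] = (lam[0], "wavelength grid start [micron]")
primary.header["LAMMAX"] = (lam[-1], "wavelength grid end [micron]")

sources = fits.BinTableHDU.from_columns(
    [fits.Column(name="RA", format="D", unit="deg",
                 array=np.array([10.0, 10.001, 9.999])),
     fits.Column(name="DEC", format="D", unit="deg",
                 array=np.array([-5.0, -5.0005, -4.9995])),
     fits.Column(name="SPECTRUM", format=f"{nlam}D", array=spectra)],
    name="POINT_SOURCES")

fits.HDUList([primary, sources]).writeto("toy_scene.fits", overwrite=True)
```

The point of such a convention is that a scene producer and an optical modeler can exchange a single self-describing file rather than ad hoc arrays.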
Adhering to such a\nuniform data distribution format is necessary, as it ensures a seamless workflow\nbetween the SITs and modelers at JPL for the goals of understanding the limits of\nthe WFIRST coronagraphic instrument.", "category": "astro-ph_IM" }, { "text": "The performance of the MAGIC telescopes using deep convolutional neural\n networks with CTLearn: The Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescope system is\nlocated on the Canary Island of La Palma and inspects the very high-energy\n(VHE, few tens of GeV and above) gamma-ray sky. MAGIC consists of two imaging\natmospheric Cherenkov telescopes (IACTs), which capture images of the air\nshowers originating from the absorption of gamma rays and cosmic rays by the\natmosphere, through the detection of Cherenkov photons emitted in the shower.\nThe sensitivity of IACTs to gamma-ray sources is mainly determined by the\nability to reconstruct the properties (type, energy, and arrival direction) of\nthe primary particle generating the air shower. The state-of-the-art IACT\npipeline for shower reconstruction is based on the parameterization of the\nshower images, extracting geometric and stereoscopic features, combined with\nmachine-learning algorithms like random forests or boosted decision trees. In this\ncontribution, we explore deep convolutional neural networks applied directly to\nthe pixelized images of the camera as a promising method for IACT full-event\nreconstruction and present the performance of the method on observational data\nusing CTLearn, a package for IACT event reconstruction that exploits deep\nlearning.", "category": "astro-ph_IM" }, { "text": "Polarization loss in reflecting coating: In laser gravitational-wave detectors, optical loss restricts sensitivity. We\ndiscuss polarization scattering as one more possible mechanism of optical\nloss. The light circulating inside the interferometer is polarized, and after\nreflection its plane of polarization can rotate slightly because the reflective\ncoating of a mirror can have a slightly different refractive index along the\naxes $x,\, y$ in the plane of the mirror surface (optical anisotropy). This\nanisotropy can be produced during manufacture of the coating (elasto-optic\neffect). This orthogonally polarized light, enhanced in the cavity, produces\npolarization optical loss. A polarization map of the mirrors is therefore very\nimportant, and we propose to measure it. Polarization loss can be important in\ndifferent precision optical experiments based on the use of polarized light,\nfor example, in a quantum speed meter.", "category": "astro-ph_IM" }, { "text": "LSST Target of Opportunity proposal for locating a core collapse\n supernova in our galaxy triggered by a neutrino supernova alert: A few times a century, a core collapse supernova (CCSN) occurs in our galaxy.\nWhen such a galactic CCSN happens, over 99\% of its gravitational binding energy\nis released in the form of neutrinos. Over a period of tens of seconds, a\npowerful neutrino flux is emitted from the collapsing star. When the exploding\nshock wave finally reaches the surface of the star, optical photons escaping\nthe expanding stellar envelope leave the star and eventually arrive at Earth as\na visible brightening. Crucially, although the neutrino signal is prompt, the\ntime to the shock wave breakout can be minutes to many hours later. This means\nthat the neutrino signal will serve as an alert, warning the optical astronomy\ncommunity that the light from the explosion is coming.
Quickly identifying the\nlocation of the supernova on the sky and disseminating it to all available\nground- and space-based instruments will be critical to learn as much as\npossible about the event. Some neutrino experiments can report pointing\ninformation for these galactic CCSNe. In particular, the Super-Kamiokande\nexperiment can point to a few degrees for CCSNe near the center of our galaxy.\nA CCSN located 10 kpc from Earth is expected to result in a pointing resolution\non the order of 3 degrees. LSST's field of view (FOV) is well matched to this\ninitial search box. LSST's depth is also uniquely suited for identifying CCSNe\neven if they fail or are obscured by the dust of the galactic plane. This is a\nproposal to, upon receipt of such an alert, prioritize the use of LSST for a\nfull day of observing to continuously monitor a pre-identified region of sky\nand, by using difference imaging, identify and announce the location of the\nsupernova.", "category": "astro-ph_IM" }, { "text": "Measuring the Evolution of the NuSTAR Detector Gains: This memo describes the methods used to track the long-term gain variations in\nthe NuSTAR detectors. It builds on the analysis presented in Madsen et al.\n(2015), using the deployable calibration source to measure the gain drift in the\nNuSTAR CdZnTe detectors. This is intended to be a live document that is\nperiodically updated as new entries are required in the NuSTAR gain CALDB\nfiles. This document covers analysis up through early-2022 and the gain v011\nCALDB file released in version 20240226.", "category": "astro-ph_IM" }, { "text": "Versatile Directional Searches for Gravitational Waves with Pulsar\n Timing Arrays: By regularly monitoring the most stable millisecond pulsars over many years,\npulsar timing arrays (PTAs) are positioned to detect and study correlations in\nthe timing behaviour of those pulsars. Gravitational waves (GWs) from\nsupermassive black hole binaries (SMBHBs) are an exciting, potentially\ndetectable source of such correlations. We describe a straightforward\ntechnique by which a PTA can be \"phased-up\" to form time series of the two\npolarisation modes of GWs coming from a particular direction of the sky. Our\ntechnique requires no assumptions regarding the time-domain behaviour of a GW\nsignal. This method has already been used to place stringent bounds on GWs from\nindividual SMBHBs in circular orbits. Here, we describe the methodology and\ndemonstrate the versatility of the technique in searches for a wide variety of\nGW signals including bursts with unmodeled waveforms. Using the first six years\nof data from the Parkes Pulsar Timing Array, we conduct an all-sky search for a\ndetectable excess of GW power from any direction. For the lines of sight to\nseveral nearby massive galaxy clusters, we carry out a more detailed search for\nGW bursts with memory, which are distinct signatures of SMBHB mergers. In all\ncases, we find that the data are consistent with noise.", "category": "astro-ph_IM" }, { "text": "The ATLAS All-Sky Stellar Reference Catalog: The Asteroid Terrestrial-impact Last Alert System (ATLAS) observes most of\nthe sky every night in search of dangerous asteroids. Its data are also used to\nsearch for photometric variability, where sensitivity to variability is limited\nby photometric accuracy.
Since each exposure spans 7.6 deg corner to corner,\nvariations in atmospheric transparency in excess of 0.01 mag are common, and\n0.01 mag photometry cannot be achieved by using a constant flat field\ncalibration image. We have therefore assembled an all-sky reference catalog of\napproximately one billion stars to m~19 from a variety of sources to calibrate\neach exposure's astrometry and photometry. Gaia DR2 is the source of astrometry\nfor this ATLAS Refcat2. The sources of g, r, i, z photometry include Pan-STARRS\nDR1, the ATLAS Pathfinder photometry project, ATLAS re-flattened APASS data,\nSkyMapper DR1, APASS DR9, the Tycho-2 catalog, and the Yale Bright Star\nCatalog. We have attempted to make this catalog at least 99% complete to m<19,\nincluding the brightest stars in the sky. We believe that the systematic errors\nare no larger than 5 millimag RMS, although errors are as large as 20 millimag\nin small patches near the galactic plane.", "category": "astro-ph_IM" }, { "text": "Data Multiplexing in Radio Interferometric Calibration: New and upcoming radio interferometers will produce unprecedented amounts of\ndata that demand extremely powerful computers for processing. This is a\nlimiting factor due to the large computational power and energy costs involved.\nSuch limitations restrict several key data processing steps in radio\ninterferometry. One such step is calibration, where systematic errors in the\ndata are determined and corrected. Accurate calibration is an essential\ncomponent in reaching many scientific goals in radio astronomy, and the use of\nconsensus optimization that exploits the continuity of systematic errors across\nfrequency significantly improves calibration accuracy. In order to reach full\nconsensus, data at all frequencies need to be calibrated simultaneously. In the\nSKA regime, this can become intractable if the available compute agents do not\nhave the resources to process data from all frequency channels simultaneously.\nIn this paper, we propose a multiplexing scheme that is based on the\nalternating direction method of multipliers (ADMM) with cyclic updates. With\nthis scheme, it is possible to simultaneously calibrate the full dataset using\nfar fewer compute agents than the number of frequencies at which data are\navailable. We give simulation results to show the feasibility of the proposed\nmultiplexing scheme in simultaneously calibrating a full dataset when a limited\nnumber of compute agents are available.", "category": "astro-ph_IM" }, { "text": "A distributed data warehouse system for astroparticle physics: A distributed data warehouse system is one of the pressing issues in the field\nof astroparticle physics. Well-known experiments, such as TAIGA and\nKASCADE-Grande, produce tens of terabytes of data measured by their\ninstruments. It is critical to have a smart data warehouse system on-site to\nstore the collected data effectively for further distribution. It is also vital\nto provide scientists with a handy and user-friendly interface to access the\ncollected data with proper permissions, not only on-site but also online. The\nlatter is particularly useful when scientists need to combine data from\ndifferent experiments for analysis. In this work, we describe an approach to\nimplementing a distributed data warehouse system that allows scientists to\nacquire just the necessary data from different experiments via the Internet on\ndemand.
The implementation is based on\nCernVM-FS, with additional components developed by us to search through all\navailable data sets and deliver their subsets to users' computers.", "category": "astro-ph_IM" }, { "text": "Swift publication statistics: a comparison with other major\n observatories: Swift is a satellite equipped with gamma-ray, X-ray, and optical-UV\ninstruments aimed at discovering, localizing and collecting data from gamma-ray\nbursts (GRBs). Launched at the end of 2004, this small-size mission finds about\na hundred GRBs per year, totaling more than 700 events as of 2012. In addition\nto GRBs, Swift observes other energetic events, such as AGNs, novae, and\nsupernovae. Here we look at its success using bibliometric tools; that is, the\nnumber of papers using Swift data and their impact (i.e., the number of citations\nto those papers). We derived these for the publication years 2005 to 2011, and\ncompared them with the same numbers for other major observatories. Swift\nprovided data for 1101 papers in the interval 2005-2011, from 24 in the first\nyear to 287 in the last year. In 2011, Swift had more than double the number\nof publications of Subaru, it overtook Gemini by a large fraction, and reached\nKeck. It is getting closer to the ~400 publications of the successful\nhigh-energy missions XMM-Newton and Chandra, but is still far from the most\nproductive telescopes VLT (over 500) and HST (almost 800). The overall average\nnumber of citations per paper, as of November 2012, is 28.3, which is\ncomparable to the others, but lower than Keck's (41.8). The science topics\ncovered by Swift publications have changed since the first year, when over 80%\nof the papers were about GRBs, whereas in 2011 the fraction was less than 30%.", "category": "astro-ph_IM" }, { "text": "Status of predictive wavefront control on Keck II adaptive optics bench:\n on-sky coronagraphic results: The behavior of an adaptive optics (AO) system for ground-based high contrast\nimaging (HCI) dictates the achievable contrast of the instrument. In conditions\nwhere the coherence time of the atmosphere is short compared to the speed of\nthe AO system, the servo-lag error becomes the dominant error term of the AO\nsystem. While the AO system measures the wavefront error and subsequently\napplies a correction (taking a total of 1 to 2 milliseconds), the atmospheric\nturbulence above the telescope has changed. In addition to reducing the Strehl\nratio, the servo-lag error causes a build-up of speckles along the direction of\nthe dominant wind vector in the coronagraphic image, severely limiting the\ncontrast at small angular separations. One strategy to mitigate this problem is\nto predict the evolution of the turbulence over the delay. Our predictive\nwavefront control algorithm minimizes the delay in a mean square sense and has\nbeen implemented on the Keck II AO bench. In this paper we report on the latest\nresults of our algorithm and discuss updates to the algorithm itself. We\nexplore how to tune various filter parameters on the basis of both daytime\nlaboratory tests and on-sky tests. We show a reduction in residual mean-square\nwavefront error for the predictor compared to the leaky integrator implemented\non Keck. Finally, we present contrast improvements for both daytime and on-sky\ntests.
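The servo-lag idea in the preceding entry can be illustrated in miniature: on a synthetic AR(1) "turbulence" signal with a one-frame delay, a least-squares linear predictor can be compared against a leaky integrator. This toy is not the Keck controller; every parameter below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, a = 20000, 0.995
turb = np.zeros(n)
for i in range(1, n):                     # AR(1) "turbulence"
    turb[i] = a * turb[i - 1] + rng.normal(scale=0.1)

# Leaky integrator acting on the one-frame-delayed measurement
leak, gain = 0.99, 0.5
cmd = np.zeros(n)
for i in range(1, n):
    cmd[i] = leak * cmd[i - 1] + gain * (turb[i - 1] - cmd[i - 1])
resid_leaky = turb[1:] - cmd[1:]

# Least-squares linear predictor built from the k previous samples
k = 5
X = np.column_stack([turb[j:j + n - k] for j in range(k)])
w, *_ = np.linalg.lstsq(X, turb[k:], rcond=None)
resid_pred = turb[k:] - X @ w

print("rms residual, leaky integrator:", resid_leaky.std())
print("rms residual, linear predictor:", resid_pred.std())
```

On this strongly correlated toy signal the predictor typically attains a lower residual, which is the qualitative effect a predictive controller exploits.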
Using the L-band vortex coronagraph for Keck's NIRC2 instrument, we find\na contrast gain of 2.03 at a separation of 3~$\lambda/D$ and up to 3 for larger\nseparations (4-6~$\lambda/D$).", "category": "astro-ph_IM" }, { "text": "hammurabi X: Simulating Galactic Synchrotron Emission with Random\n Magnetic Fields: We present version X of the hammurabi package, the HEALPix-based numeric\nsimulator for Galactic polarized emission. Improving on its earlier design, we\nhave fully renewed the framework with modern C++ standards and features.\nMulti-threading support has been built in to meet the growing computational\nworkload in future research. For the first time, we present precision profiles\nof the hammurabi line-of-sight integral kernel with multi-layer HEALPix shells. In\naddition to fundamental improvements, this report focuses on simulating\npolarized synchrotron emission with Gaussian random magnetic fields. Two fast\nmethods are proposed for realizing divergence-free random magnetic fields\neither on the Galactic scale, where a field alignment and strength modulation\nare imposed, or on a local scale, where more physically motivated models like\nparameterized magneto-hydrodynamic (MHD) turbulence can be applied. As an\nexample application, we discuss the phenomenological implications of Gaussian\nrandom magnetic fields for high Galactic latitude synchrotron foregrounds. In\nthis, we numerically find B/E polarization mode ratios lower than unity based\non Gaussian realizations of either MHD turbulent spectra or spatially\naligned magnetic fields.", "category": "astro-ph_IM" }, { "text": "Modelling astronomical adaptive optics performance with\n temporally-filtered Wiener reconstruction of slope data: We build on a long-standing tradition in astronomical adaptive optics (AO) of\nspecifying performance metrics and error budgets using linear systems modeling\nin the spatial-frequency domain. Our goal is to provide a comprehensive tool\nfor the calculation of error budgets in terms of residual temporally filtered\nphase power spectral densities and variances. In addition, the fast simulation\nof AO-corrected point spread functions (PSFs) provided by this method can be\nused as inputs for simulations of science observations with next-generation\ninstruments and telescopes, in particular to predict post-coronagraphic\ncontrast improvements for planet finder systems. We extend the previous results\nand propose the synthesis of a distributed Kalman filter to mitigate both\naniso-servo-lag and aliasing errors whilst minimizing the overall residual\nvariance. We discuss applications to (i) analytic AO-corrected PSF modeling in\nthe spatial-frequency domain, (ii) post-coronagraphic contrast enhancement,\n(iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF\nreconstruction from system telemetry.
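In the spirit of the frequency-domain error budgets just described, a one-dimensional toy budget can be formed by filtering an input turbulence PSD with a controller rejection function and integrating the residual. The power-law PSD and pure-delay integrator below are illustrative stand-ins, not the distributed Kalman formalism of the entry:

```python
import numpy as np

# Toy temporal PSD of the turbulent phase: a steep power law (arbitrary units)
nu = np.linspace(0.01, 500.0, 100000)              # temporal frequency [Hz]
dnu = nu[1] - nu[0]
psd = nu ** (-8.0 / 3.0)

def rejection(nu, gain, rate, delay):
    """Squared magnitude of the closed-loop rejection of a discrete
    integrator controller with a pure loop delay."""
    zinv = np.exp(-2j * np.pi * nu / rate)          # one-sample delay, z^-1
    ctrl = gain / (1.0 - zinv)                      # integrator C(z)
    return np.abs(1.0 / (1.0 + ctrl * np.exp(-2j * np.pi * nu * delay))) ** 2

open_loop = np.sum(psd) * dnu
for delay in (1e-3, 2e-3):
    resid = np.sum(psd * rejection(nu, 0.4, 1000.0, delay)) * dnu
    print(f"delay {delay * 1e3:.0f} ms: residual fraction {resid / open_loop:.2e}")
```

Summing filtered PSDs term by term is exactly how such budgets attribute residual variance to servo-lag, aliasing, and the other error contributors.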
Under perfect knowledge of wind\nvelocities, we show that $\sim$60 nm rms error reduction can be achieved with\nthe distributed Kalman filter embodying anti-aliasing reconstructors on 10 m\nclass high-order AO systems, leading to contrast improvement factors of up to\nthree orders of magnitude at a few ${\lambda}/D$ separations\n($\sim1-5{\lambda}/D$) for a 0 magnitude star and reaching close to one order\nof magnitude for a 12 magnitude star.", "category": "astro-ph_IM" }, { "text": "A Bayesian approach to high fidelity interferometric calibration II:\n demonstration with simulated data: In a companion paper, we presented BayesCal, a mathematical formalism for\nmitigating sky-model incompleteness in interferometric calibration. In this\npaper, we demonstrate the use of BayesCal to calibrate the degenerate gain\nparameters of full-Stokes simulated observations with a HERA-like hexagonal\nclose-packed redundant array, for three assumed levels of completeness of the a\npriori known component of the calibration sky model. We compare the BayesCal\ncalibration solutions to those recovered by calibrating the degenerate gain\nparameters with only the a priori known component of the calibration sky model,\nboth with and without imposing physically motivated priors on the gain\namplitude solutions, and for two choices of baseline length range over which to\ncalibrate. We find that BayesCal provides calibration solutions with up to four\norders of magnitude lower power in spurious gain amplitude fluctuations than\nthe calibration solutions derived for the same data set with the alternative\napproaches, and between $\sim10^7$ and $\sim10^{10}$ times smaller than in the\nmean degenerate gain amplitude on the full range of spectral scales accessible\nin the data. Additionally, we find that in the scenarios modelled only BayesCal\nhas sufficiently high-fidelity calibration solutions for unbiased recovery of\nthe 21 cm power spectrum on large spectral scales ($k_\parallel \lesssim\n0.15~h\mathrm{Mpc}^{-1}$). In all other cases, in the completeness regimes\nstudied, those scales are contaminated.", "category": "astro-ph_IM" }, { "text": "Point Source Detection and Flux Determination with PGWave: One of the largest uncertainties in Point Source (PS) studies at\nFermi-LAT energies is the uncertainty in the diffuse background. In general\nthere are two approaches for PS analysis: background-dependent methods, which\ninclude modeling of the diffuse background, and background-independent methods.\nIn this work we study PGWave, which is one of the background-independent\nmethods, based on wavelet filtering to find significant clusters of gamma rays.\nPGWave is already used in the Fermi-LAT catalog pipeline for finding candidate\nsources. We test PGWave not only for source detection, but especially to\nestimate the flux without the need for a background model. We use Monte Carlo\n(MC) simulations to study the accuracy of PS detection and estimation of the\nflux. We present preliminary results of these MC studies.", "category": "astro-ph_IM" }, { "text": "Is HDF5 a good format to replace UVFITS?: The FITS (Flexible Image Transport System) data format was developed in the\nlate 1970s for storage and exchange of astronomy-related image data. Since\nthen, it has become a standard file format not only for images, but also for\nradio interferometer data (e.g. UVFITS, FITS-IDI). But is FITS the right format\nfor next-generation telescopes to adopt?
The newer Hierarchical Data Format\nversion 5 (HDF5) offers considerable advantages over FITS, but has yet to\ngain widespread adoption within radio astronomy. One of the major obstacles is\nthat HDF5 is not well supported by data reduction software packages. Here, we\npresent a comparison of FITS, HDF5, and the MeasurementSet (MS) format for\nstorage of interferometric data. In addition, we present a tool for converting\nbetween formats. We show that the underlying data model of FITS can be ported\nto HDF5, a first step toward achieving wider HDF5 support.", "category": "astro-ph_IM" }, { "text": "2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for\n Cosmological Simulation: We report on improvements made over the past two decades to our adaptive\ntreecode N-body method (HOT). A mathematical and computational approach to the\ncosmological N-body problem is described, with performance and scalability\nmeasured up to 256k ($2^{18}$) processors. We present error analysis and\nscientific application results from a series of more than ten 69 billion\n($4096^3$) particle cosmological simulations, accounting for $4 \times 10^{20}$\nfloating point operations. These results include the first simulations using\nthe new constraints on the standard model of cosmology from the Planck\nsatellite. Our simulations set a new standard for accuracy and scientific\nthroughput, while meeting or exceeding the computational efficiency of the\nlatest generation of hybrid TreePM N-body methods.", "category": "astro-ph_IM" }, { "text": "Apertif, Phased Array Feeds for the Westerbork Synthesis Radio Telescope: We describe the APERture Tile In Focus (Apertif) system, a phased array feed\n(PAF) upgrade of the Westerbork Synthesis Radio Telescope which has transformed\nthis telescope into a high-sensitivity, wide field-of-view L-band imaging and\ntransient survey instrument. Using novel PAF technology, up to 40 partially\noverlapping beams can be formed on the sky simultaneously, significantly\nincreasing the survey speed of the telescope. With this upgraded instrument, an\nimaging survey covering an area of 2300 deg2 is being performed which will\ndeliver both continuum and spectral line data sets, of which the first data\nhave been publicly released. In addition, a time domain transient and pulsar\nsurvey covering 15,000 deg2 is in progress. An overview of the Apertif science\ndrivers, hardware and software of the upgraded telescope is presented, along\nwith its key performance characteristics.", "category": "astro-ph_IM" }, { "text": "On Point Spread Function modelling: towards optimal interpolation: Point Spread Function (PSF) modeling is a central part of any astronomy data\nanalysis relying on measuring the shapes of objects. It is especially crucial\nfor weak gravitational lensing, in order to beat down systematics and allow one\nto reach the full potential of weak lensing in measuring dark energy. A PSF\nmodeling pipeline is made of two main steps: the first one is to assess its\nshape on stars, and the second is to interpolate it at any desired position\n(usually galaxies). We focus on the second part, and compare different\ninterpolation schemes, including polynomial interpolation, radial basis\nfunctions, Delaunay triangulation and Kriging. For that purpose, we develop\nsimulations of PSF fields, in which stars are built from a set of basis\nfunctions defined from a Principal Components Analysis of a real ground-based\nimage.
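The PCA-basis construction mentioned in the last sentence can be sketched directly: stack star cutouts as vectors, take an SVD of the mean-subtracted matrix, and keep the leading components as basis functions. The Gaussian stamps below are synthetic placeholders for the real ground-based stars:

```python
import numpy as np

rng = np.random.default_rng(2)
npix, nstar = 16, 200

# Toy "stars": elliptical Gaussians with slowly varying widths
yy, xx = np.mgrid[:npix, :npix] - (npix - 1) / 2.0
stamps = []
for _ in range(nstar):
    sx = 2.0 + 0.3 * rng.normal()
    sy = 2.0 + 0.3 * rng.normal()
    stamps.append(np.exp(-0.5 * ((xx / sx) ** 2 + (yy / sy) ** 2)))
stamps = np.array([s / s.sum() for s in stamps])

# PCA via SVD of the mean-subtracted stamp matrix
flat = stamps.reshape(nstar, -1)
mean = flat.mean(axis=0)
u, s, vt = np.linalg.svd(flat - mean, full_matrices=False)
basis = vt[:4]                               # leading basis functions

# Reconstruct one star from the truncated basis
coeffs = (flat[0] - mean) @ basis.T
recon = mean + coeffs @ basis
print("max reconstruction error:", np.abs(recon - flat[0]).max())
```

Stars simulated as combinations of such basis functions inherit realistic shape variations, which is what makes them useful test fields for interpolation schemes.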
We find that Kriging gives the most reliable interpolation,\nsignificantly better than the traditionally used polynomial interpolation. We\nalso note that although a Kriging interpolation on individual images is enough\nto control systematics at the level necessary for current weak lensing surveys,\nmore elaborate techniques will have to be developed to reach future ambitious\nsurveys' requirements.", "category": "astro-ph_IM" }, { "text": "Two modified ILC methods to detect point sources in Cosmic Microwave\n Background maps: We propose two detection techniques that take advantage of a small sky area\napproximation and are based on modifications of the \"internal linear\ncombination\" (ILC) method, an approach widely used in cosmology for the\nseparation of the various components that contribute to the microwave\nbackground. The main advantage of the proposed approach, especially in handling\nmulti-frequency maps of the same region, is that it does not require a priori\nknowledge of the spatial power spectrum of either the CMB or the Galactic\nforeground. Hence, it is more robust, and easier and more intuitive to use. The\nperformance of the proposed algorithms is tested with numerical experiments\nthat mimic the physical scenario expected for high Galactic latitude\nobservations with the Atacama Large Millimeter/submillimeter Array\n(ALMA).", "category": "astro-ph_IM" }, { "text": "Three recipes for improving the image quality with optical long-baseline\n interferometers: BFMC, LFF, \& DPSC: We present here three recipes for getting better images with optical\ninterferometers. Two of them, Low-Frequencies Filling and Brute-Force Monte\nCarlo, were used in our participation in the Interferometry Beauty Contest this\nyear and can be applied to classical imaging using $V^2$ and closure phases.\nThese two additions to image reconstruction provide a way of obtaining more\nreliable images. The last recipe is similar in principle to the\nself-calibration technique used in radio interferometry. We also call it\nself-calibration, but it uses the wavelength-differential phase as a proxy\nfor the object phase to build up a full-featured complex visibility set of the\nobserved object. This technique needs a first image-reconstruction run with\navailable software, using closure phases and squared visibilities only. We used\nit for two scientific papers with great success. We discuss here the pros and\ncons of this imaging technique.", "category": "astro-ph_IM" }, { "text": "NEBULAR: A simple synthesis code for the hydrogen and helium nebular\n spectrum: NEBULAR is a lightweight code to synthesize the spectrum of an ideal, mixed\nhydrogen and helium gas in ionization equilibrium, over a useful range of\ndensities, temperatures and wavelengths. Free-free, free-bound and two-photon\ncontinua are included as well as parts of the HI, HeI and HeII line series.\nNEBULAR interpolates over publicly available data tables; it can be used to\neasily extract information from these tables without prior knowledge about\ntheir data structure. The resulting spectra can be used to, e.g., determine\nequivalent line widths, constrain the contribution of the nebular continuum to\na bandpass, and for educational purposes.
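For example, an equivalent width of the kind such synthetic spectra can be used to determine is just the integral of the continuum-normalized flux deficit; a minimal version, with a made-up Gaussian emission line, is:

```python
import numpy as np

def equivalent_width(wavelength, flux, continuum):
    """EW = integral of (1 - F/Fc) d(lambda); positive for absorption,
    negative for emission, in the units of `wavelength`."""
    depth = 1.0 - flux / continuum
    return np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(wavelength))

lam = np.linspace(4000.0, 4100.0, 2001)               # Angstrom
continuum = np.ones_like(lam)
line = 0.6 * np.exp(-0.5 * ((lam - 4050.0) / 2.0) ** 2)
print(f"EW ~ {equivalent_width(lam, continuum + line, continuum):.2f} A")
# ~ -3.0 A here: negative, i.e. an emission line
```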
NEBULAR can resample the spectrum on\na user-defined wavelength grid for direct comparison with an observed spectrum;\nhowever, it cannot be used to fit an observed spectrum.", "category": "astro-ph_IM" }, { "text": "Prospects for a radio air-shower detector at South Pole: IceCube is currently not only the largest neutrino telescope but also one of\nthe world's most competitive instruments for studying cosmic rays in the PeV to\nEeV regime, where the transition from galactic to extra-galactic sources should\noccur. Further augmenting this observatory with an array of radio sensors in\nthe 10-100 MHz regime will additionally permit observation of the geomagnetic\nradio emission from the air shower. Yielding complementary information on the\nshower development, a triple-technology array consisting of radio sensors, the\nground sampling stations of IceTop and the in-ice optical modules of IceCube\nshould significantly improve the understanding of cosmic rays, as well as\nenhance many aspects of the physics reach of the observatory. Here we present\nfirst results from two exploratory setups deployed at the South Pole. Noise\nmeasurements from data taken in two consecutive seasons show a very good\nagreement between the predicted and observed response of the antennas designed\nspecifically for this purpose. The radio background is found to be highly\ndominated by galactic noise, with a striking absence of anthropogenic radio\nemitters in the frequency band from 25-300 MHz. Motivated by the excellent\nsuitability of the location, we present first performance studies of a proposed\nRadio Air-Shower Test Array (RASTA) using detailed Monte Carlo simulations and\ndiscuss the prospects for its installation.", "category": "astro-ph_IM" }, { "text": "Unrolling PALM for sparse semi-blind source separation: Sparse Blind Source Separation (BSS) has become a well-established tool for a\nwide range of applications - for instance, in astrophysics and remote sensing.\nClassical sparse BSS methods, such as the Proximal Alternating Linearized\nMinimization (PALM) algorithm, nevertheless often suffer from a difficult\nhyperparameter choice, which undermines their results. To bypass this pitfall,\nwe propose in this work to build on the thriving field of algorithm\nunfolding/unrolling. Unrolling PALM enables us to leverage the data-driven\nknowledge stemming from realistic simulations or ground-truth data by learning\nboth PALM hyperparameters and variables. In contrast to most existing unrolled\nalgorithms, which assume a fixed known dictionary during the training and\ntesting phases, this article further emphasizes the ability to deal with\nvariable mixing matrices (a.k.a. dictionaries). The proposed Learned PALM\n(LPALM) algorithm thus enables semi-blind source separation, which\nis key to increasing the generalization of the learnt model in real-world\napplications. We illustrate the relevance of LPALM in astrophysical\nmultispectral imaging: the algorithm not only needs up to $10^4-10^5$ times\nfewer iterations than PALM, but also improves the separation quality, while\navoiding the cumbersome hyperparameter and initialization choice of PALM. We\nfurther show that LPALM outperforms other unrolled source separation methods in\nthe semi-blind setting.", "category": "astro-ph_IM" }, { "text": "ATLAS Probe: Breakthrough Science of Galaxy Evolution, Cosmology, Milky\n Way, and the Solar System: ATLAS (Astrophysics Telescope for Large Area Spectroscopy) is a concept for a\nNASA probe-class space mission.
It is the spectroscopic follow-up mission to\nWFIRST, boosting its scientific return by obtaining deep NIR & MIR slit\nspectroscopy for most of the galaxies imaged by the WFIRST High Latitude Survey\nat z>0.5. ATLAS will measure accurate and precise redshifts for ~200M galaxies\nout to z=7 and beyond, and deliver spectra that enable a wide range of\ndiagnostic studies of the physical properties of galaxies over most of cosmic\nhistory. ATLAS and WFIRST together will produce a definitive 3D map of the\nUniverse over 2000 sq deg. ATLAS Science Goals are: (1) Discover how galaxies\nhave evolved in the cosmic web of dark matter from cosmic dawn through the peak\nera of galaxy assembly. (2) Discover the nature of cosmic acceleration. (3)\nProbe the Milky Way's dust-enshrouded regions, reaching the far side of our\nGalaxy. (4) Discover the bulk compositional building blocks of planetesimals\nformed in the outer Solar System. These flow down to the ATLAS Scientific\nObjectives: (1A) Trace the relation between galaxies and dark matter with less\nthan 10% shot noise on relevant scales at 1 [...] $10^{11}$ GeV is derived from\nacoustic data taken over eight months.", "category": "astro-ph_IM" }, { "text": "Data downloaded via parachute from a NASA super-pressure balloon: In April to May 2023, the superBIT telescope was lifted to the Earth's\nstratosphere by a helium-filled super-pressure balloon, to acquire astronomical\nimaging from above (99.5% of) the Earth's atmosphere. It was launched from New\nZealand and then, for 40 days, circumnavigated the globe five times at latitudes\n40 to 50 degrees South. Attached to the telescope were four 'DRS' (Data\nRecovery System) capsules containing 5 TB of solid state data storage, plus a GNSS\nreceiver, Iridium transmitter, and parachute. Data from the telescope were\ncopied to these, and two were dropped over Argentina. They drifted 61 km\nhorizontally while they descended 32 km, but we predicted their descent vectors\nwithin 2.4 km: in this location, the discrepancy appears irreducible below 2 km\nbecause of high-speed, gusty winds and local topography. The capsules then\nreported their own locations to within a few metres. We recovered the capsules\nand successfully retrieved all of superBIT's data - despite the telescope\nitself being later destroyed on landing.", "category": "astro-ph_IM" }, { "text": "21 cm observations: calibration, strategies, observables: This chapter aims to provide a review of the basics of 21 cm interferometric\nobservations and their methodologies. A summary of the main concepts of radio\ninterferometry and their connection with the 21 cm observables - power spectra\nand images - is presented. I then provide a review of interferometric\ncalibration and its interplay with foreground separation, including the current\nopen challenges in calibration of 21 cm observations. Finally, a review of 21\ncm instrument designs in the light of calibration choices and observing\nstrategies follows.", "category": "astro-ph_IM" }, { "text": "On-sky closed loop correction of atmospheric dispersion for\n high-contrast coronagraphy and astrometry: Adaptive optics (AO) systems delivering high levels of wavefront correction\nare now common at observatories. One of the main limitations to image quality\nafter wavefront correction comes from atmospheric refraction.
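The size of the refraction effect can be previewed with a simplified model: in a plane-parallel atmosphere the refraction angle is roughly R(lambda) ~ 206265 (n(lambda) - 1) tan z arcsec, and atmospheric dispersion is its difference across the observing band. The refractivity approximation and site values below are commonly quoted textbook numbers, not the calibration of the instrument described next:

```python
import numpy as np

def refractivity(lam_um, pressure_mbar=600.0, temp_k=273.0):
    """Approximate optical refractivity (n - 1), using the often-quoted
    (n - 1)*1e6 ~ 77.6*(P/T)*(1 + 7.52e-3/lam^2), lam in microns."""
    return 77.6e-6 * (pressure_mbar / temp_k) * (1.0 + 7.52e-3 / lam_um ** 2)

def refraction_arcsec(lam_um, zenith_deg):
    """Plane-parallel atmosphere: R ~ 206265 * (n - 1) * tan(z) arcsec."""
    return 206265.0 * refractivity(lam_um) * np.tan(np.radians(zenith_deg))

# Dispersion across H band at 30 deg from zenith, in milliarcseconds
blue, red = refraction_arcsec(np.array([1.50, 1.80]), 30.0)
print(f"H-band dispersion ~ {(blue - red) * 1e3:.1f} mas")
```

With these illustrative numbers the dispersion comes out at the tens-of-milliarcseconds level, which is why residuals must be driven below 1 mas for sub-milliarcsecond work.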
An atmospheric\ndispersion compensator (ADC) is employed to correct for atmospheric refraction.\nThe correction is applied based on a look-up table consisting of dispersion\nvalues as a function of telescope elevation angle. The look-up-table-based\ncorrection of atmospheric dispersion results in imperfect compensation, leading\nto the presence of residual dispersion in the point-spread function (PSF), and\nis insufficient when sub-milliarcsecond precision is required. The presence of\nresidual dispersion can limit the achievable contrast when employing\nhigh-performance coronagraphs or can compromise high-precision astrometric\nmeasurements. In this paper, we present the first on-sky closed-loop correction\nof atmospheric dispersion by directly using science path images. The concept\nbehind the measurement of dispersion utilizes the chromatic scaling of focal\nplane speckles. An adaptive speckle grid generated with a deformable mirror\n(DM) that has a sufficiently large number of actuators is used to accurately\nmeasure the residual dispersion and subsequently correct it by driving the ADC.\nWe have demonstrated with the Subaru Coronagraphic Extreme AO (SCExAO) system\non-sky closed-loop correction of residual dispersion to < 1 mas across H-band.\nThis work will aid in the direct detection of habitable exoplanets with\nupcoming extremely large telescopes (ELTs) and also provide a diagnostic tool\nto test the performance of instruments which require sub-milliarcsecond\ncorrection.", "category": "astro-ph_IM" }, { "text": "MKID, an energy sensitive superconducting detector for the next\n generation of XAO: Selected for the next generation of adaptive optics (AO) systems, the pyramid\nwavefront sensor (PWFS) is recognised for its closed AO loop performance. As\nnew technologies are emerging, it is necessary to explore new methods to\nimprove it. Microwave Kinetic Inductance Detectors (MKIDs) are photon-counting\ndevices that measure the arrival time and energy of each incident photon,\nproviding new capabilities over existing detectors and significant AO\nperformance benefits. After developing a multi-wavelength PWFS simulation, we\nstudy the benefits of using an energy sensitive detector, analyse the PWFS\nperformance according to wavelength and explore the possibility of using\nfainter natural guide stars by widening the bandpass of the wavefront sensor.", "category": "astro-ph_IM" }, { "text": "Experimental Realization of an Achromatic Magnetic Mirror based on\n Metamaterials: Our work relates to the use of metamaterials engineered to realize a\nmeta-surface approaching the exotic properties of an ideal object not observed\nin nature, a \"magnetic mirror\". Previous realizations were based on resonant\nstructures which implied narrow bandwidths and large losses. The working\nprinciple of our device is ideally frequency-independent, it does not involve\nresonances and it does not rely on a specific technology. The performance of\nour prototype, working at millimetre wavelengths, has never been achieved\nbefore and is superior to that of any other device reported in the literature,\nboth in the microwave and optical regions. The device inherently has a large\nbandwidth (144%), low losses (<1%) and is almost independent of incidence angle\nand polarization state, and thus approaches the behaviour of an ideal magnetic\nmirror. Applications of magnetic mirrors range from low-profile antennas and\nabsorbers to optoelectronic devices.
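As a quick arithmetic check on what a 144% bandwidth means: with the standard fractional-bandwidth definition 2(f2 - f1)/(f2 + f1), the band-edge ratio follows directly (the example frequencies below are arbitrary, chosen only to show the arithmetic):

```python
# Fractional bandwidth FBW = 2*(f2 - f1)/(f2 + f1); solving for the edges
# gives f2/f1 = (2 + FBW)/(2 - FBW), i.e. ~6.1 for FBW = 1.44
fbw = 1.44
ratio = (2 + fbw) / (2 - fbw)
f1 = 15.0                       # GHz, arbitrary lower band edge
print(f"band-edge ratio f2/f1 ~ {ratio:.2f}; e.g. {f1:.0f}-{ratio * f1:.0f} GHz")
```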
Our device can be realised using different\ntechnologies to operate in other spectral regions.", "category": "astro-ph_IM" }, { "text": "Extended-Path Intensity Correlation: Microarcsecond Astrometry with an\n Arcsecond Field of View: We develop in detail a recently proposed optical-path modification of\nastronomical intensity interferometers. Extended-Path Intensity Correlation\n(EPIC) introduces a tunable path extension, enabling differential astrometry of\nmultiple compact sources such as stars and quasars at separations of up to a\nfew arcseconds. Combined with other recent technological advances in\nspectroscopy and fast single-photon detection, a ground-based intensity\ninterferometer array can achieve microarcsecond resolution and even better\nlight-centroiding accuracy on bright sources of magnitude $m \lesssim 15$. We\nlay out the theory and technical requirements of EPIC, and discuss the\nscientific potential. Promising applications include astrometric lensing of\nstars and quasar images, binary-orbit characterization, exoplanet detection,\nGalactic acceleration measurements and calibration of the cosmic distance\nladder. The introduction of the path extension thus significantly increases the\nscope of intensity interferometry while reaching unprecedented levels of\nrelative astrometric precision.", "category": "astro-ph_IM" }, { "text": "Deep learning of quasar lightcurves in the LSST era: Deep learning techniques are required for the analysis of synoptic\n(multi-band and multi-epoch) light curves in the massive data sets of quasars\nexpected from the Vera C. Rubin Observatory Legacy Survey of Space and Time\n(LSST). In this follow-up study, we introduce an upgraded version of a\nconditional neural process (CNP) embedded in a multistep approach for the\nanalysis of large quasar data sets in the LSST Active Galactic Nuclei Scientific\nCollaboration data challenge database. We present a case study of a stratified\nset of the u-band light curves for 283 quasars with very low variability $\sim 0.03$.\nIn this sample, the CNP average mean-square error is found to be $\sim 5\%$\n($\sim 0.5$ mag). Interestingly, besides a similar level of variability, there are\nindications that individual light curves show flare-like features. According to\na preliminary structure function analysis, these occurrences may be associated\nwith microlensing events with time scales of $5-10$ years.", "category": "astro-ph_IM" }, { "text": "GAMMA-LIGHT: High-Energy Astrophysics above 10 MeV: High-energy phenomena in the cosmos, and in particular processes leading to\nthe emission of gamma-rays in the energy range 10 MeV - 100 GeV, play a very\nspecial role in the understanding of our Universe. This energy range is indeed\nassociated with non-thermal phenomena and challenging particle acceleration\nprocesses. The technology involved in detecting gamma-rays is challenging and\ndrives our ability to develop improved instruments for a large variety of\napplications. GAMMA-LIGHT is a Small Mission which aims at an unprecedented\nadvance of our knowledge in many sectors of astrophysical and Earth studies\nresearch. The Mission will open a new observational window in the low-energy\ngamma-ray range 10-50 MeV, and is configured to make substantial advances\ncompared with the previous and current gamma-ray experiments (AGILE and Fermi).\nThe improvement is based on an exquisite angular resolution achieved by\nGAMMA-LIGHT using state-of-the-art silicon technology with innovative data\nacquisition.
GAMMA-LIGHT will address all astrophysics issues left open by the\ncurrent generation of instruments. In particular, the breakthrough angular\nresolution in the energy range 100 MeV - 1 GeV is crucial to resolve patchy and\ncomplex features of diffuse sources in the Galaxy as well as to increase the\npoint-source sensitivity. This proposal addresses scientific topics of great\ninterest to the community, with particular emphasis on multifrequency\ncorrelation studies involving radio, optical, IR, X-ray, soft gamma-ray and TeV\nemission. At the end of this decade several new observatories will be\noperational, including LOFAR, SKA, ALMA, HAWC, and CTA. GAMMA-LIGHT will \"fill\nthe vacuum\" in the 10 MeV-10 GeV band, and will provide invaluable data for the\nunderstanding of cosmic and terrestrial high-energy sources.", "category": "astro-ph_IM" }, { "text": "A Real-time Coherent Dedispersion Pipeline for the Giant Metrewave Radio\n Telescope: A fully real-time coherent dedispersion system has been developed for the\npulsar back-end at the Giant Metrewave Radio Telescope (GMRT). The dedispersion\npipeline uses the single phased-array voltage beam produced by the existing\nGMRT software back-end (GSB) to produce coherently dedispersed intensity output\nin real time, for the currently operational bandwidths of 16 MHz and 32 MHz.\nProvision has also been made to coherently dedisperse voltage beam data from\nobservations recorded on disk.\n We discuss the design and implementation of the real-time coherent\ndedispersion system, describing the steps carried out to optimise the\nperformance of the pipeline. Presently functioning on an Intel Xeon X5550 CPU\nequipped with an NVIDIA Tesla C2075 GPU, the pipeline allows dispersion-free,\nhigh-time-resolution data to be obtained in real time. We illustrate the\nsignificant improvements over the existing incoherent dedispersion system at\nthe GMRT, and present some preliminary results obtained from studies of pulsars\nusing this system, demonstrating its potential as a useful tool for low\nfrequency pulsar observations.\n We describe the salient features of our implementation, comparing it with\nother recently developed real-time coherent dedispersion systems. This\nimplementation of a real-time coherent dedispersion pipeline for a large, low\nfrequency array instrument like the GMRT will enable long-term observing\nprograms using coherent dedispersion to be carried out routinely at the\nobservatory. We also outline possible improvements for such a pipeline,\nincluding prospects for the upgraded GMRT, which will have bandwidths about ten\ntimes larger than at present.", "category": "astro-ph_IM" }, { "text": "A graph-based spectral classification of Type II supernovae: Given the ever-increasing number of time-domain astronomical surveys,\nemploying robust, interpretable, and automated data-driven classification\nschemes is pivotal. Based on graph theory, we present new data-driven\nclassification heuristics for spectral data. A spectral classification scheme\nof Type II supernovae (SNe II) is proposed based on the phase relative to the\nmaximum light in the $V$ band and the end of the plateau phase. We utilize a\ncompiled optical data set that comprises 145 SNe and 1595 optical spectra in\nthe range 4000-9000 \AA. Our classification method naturally\nidentifies outliers and arranges the different SNe in terms of their major\nspectral features.
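One generic way to picture a graph-based arrangement of spectra, not necessarily the authors' exact heuristics: compute pairwise distances between normalized spectra, build the minimum spanning tree, and flag as outliers the nodes whose nearest tree neighbour is unusually far:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(3)
grid = np.linspace(4000.0, 9000.0, 256)

# Two smooth spectral families plus one deliberate outlier (index 20)
spectra = [np.exp(-0.5 * ((grid - mu) / 600.0) ** 2)
           + 0.02 * rng.normal(size=grid.size)
           for mu in [5500.0] * 10 + [6500.0] * 10]
spectra.append(np.sin(grid / 200.0))
spectra = np.array([s / np.linalg.norm(s) for s in spectra])

# Pairwise Euclidean distances, then the minimum spanning tree
d = np.sqrt(((spectra[:, None, :] - spectra[None, :, :]) ** 2).sum(-1))
mst = minimum_spanning_tree(d).toarray()
sym = mst + mst.T                       # undirected view of the tree

# Outliers: nodes whose nearest tree neighbour is unusually distant
nearest = np.where(sym > 0, sym, np.inf).min(axis=1)
print("suspected outlier:", int(np.argmax(nearest)))   # expect 20
```

Walking along the tree also orders the remaining spectra by similarity, which is the kind of continuous arrangement the entry describes.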
We compare our approach to the off-the-shelf UMAP manifold\nlearning and show that both strategies are consistent with a continuous\nvariation of spectral types rather than discrete families. The automated\nclassification naturally reflects the fast evolution of Type II SNe around\nmaximum light while showcasing their homogeneity close to the end of the\nplateau phase. The scheme we develop could be more widely applicable to\nunsupervised time series classification or characterisation of other functional\ndata.", "category": "astro-ph_IM" }, { "text": "Single-pulse classifier for the LOFAR Tied-Array All-sky Survey: Searches for millisecond-duration, dispersed single pulses have become a\nstandard tool used during radio pulsar surveys in the last decade. They have\nenabled the discovery of two new classes of sources: rotating radio transients\nand fast radio bursts. However, we are now in a regime where the sensitivity to\nsingle pulses in radio surveys is often limited more by the strong background\nof radio frequency interference (RFI, which can greatly increase the\nfalse-positive rate) than by the sensitivity of the telescope itself. To\nmitigate this problem, we introduce the Single-pulse Searcher (SpS). This is a\nnew machine-learning classifier designed to identify astrophysical signals in a\nstrong RFI environment, and optimized to process the large data volumes\nproduced by the new generation of aperture array telescopes. It has been\nspecifically developed for the LOFAR Tied-Array All-Sky Survey (LOTAAS), an\nongoing survey for pulsars and fast radio transients in the northern\nhemisphere. During its development, SpS discovered 7 new pulsars and blindly\nidentified ~80 known sources. The modular design of the software offers the\npossibility to easily adapt it to other studies with different instruments and\ncharacteristics. Indeed, SpS has already been used in other projects, e.g. to\nidentify pulses from the fast radio burst source FRB 121102. The software\ndevelopment is complete and SpS is now being used to re-process all LOTAAS data\ncollected to date.", "category": "astro-ph_IM" }, { "text": "Unsupervised self-organised mapping: a versatile empirical tool for\n object selection, classification and redshift estimation in large surveys: We present an application of unsupervised machine learning - the\nself-organised map (SOM) - as a tool for visualising, exploring and mining the\ncatalogues of large astronomical surveys. Self-organisation culminates in a\nlow-resolution representation of the 'topology' of a parameter volume, and this\ncan be exploited in various ways pertinent to astronomy. Using data from the\nCosmological Evolution Survey (COSMOS), we demonstrate two key astronomical\napplications of the SOM: (i) object classification and selection, using the\nexample of galaxies with active galactic nuclei as a demonstration, and (ii)\nphotometric redshift estimation, illustrating how SOMs can be used as totally\nempirical predictive tools. With a training set of ~3800 galaxies with\nz_spec<1, we achieve photometric redshift accuracies competitive with other\n(mainly template fitting) techniques that use a similar number of photometric\nbands (sigma(Dz)=0.03 with a ~2% outlier rate when using u*-band to 8um\nphotometry). We also test the SOM as a photo-z tool using the PHoto-z Accuracy\nTesting (PHAT) synthetic catalogue of Hildebrandt et al. (2010), which compares\nseveral different photo-z codes using a common input/training set.
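A toy version of the SOM photo-z workflow described above fits in plain numpy: train a small map on galaxy colours, then label each cell with the median spectroscopic redshift of the training galaxies it attracts. The colours and redshifts below are synthetic, not COSMOS data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic training set: redshift drives a smooth trend in 4 "colours"
n_train, n_feat, side = 3000, 4, 12
z_spec = rng.uniform(0.0, 1.0, n_train)
colours = (z_spec[:, None] * np.linspace(0.5, 2.0, n_feat)
           + 0.05 * rng.normal(size=(n_train, n_feat)))

# Train the SOM: pull the best-matching unit and its neighbours over
weights = rng.normal(size=(side * side, n_feat))
gy, gx = np.divmod(np.arange(side * side), side)
n_steps = 30000
for step in range(n_steps):
    x = colours[rng.integers(n_train)]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
    frac = step / n_steps
    lr = 0.5 * (1.0 - frac) + 0.01                  # decaying learning rate
    sigma = 3.0 * (1.0 - frac) + 0.5                # shrinking neighbourhood
    d2 = (gy - gy[bmu]) ** 2 + (gx - gx[bmu]) ** 2
    weights += lr * np.exp(-d2 / (2.0 * sigma ** 2))[:, None] * (x - weights)

# Calibrate: median z_spec per cell becomes that cell's photo-z estimate
cells = np.array([np.argmin(((weights - c) ** 2).sum(axis=1)) for c in colours])
cell_z = np.full(side * side, np.nan)
for c in np.unique(cells):
    cell_z[c] = np.median(z_spec[cells == c])
print("sigma(dz) ~", np.std(cell_z[cells] - z_spec).round(3))
```

The method is entirely empirical: no spectral templates enter, which is the property the entry highlights.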
We find that\nthe SOM can deliver accuracies that are competitive with many of the\nestablished template-fitting and empirical methods. This technique is not\nwithout clear limitations, which are discussed, but we suggest it could be a\npowerful tool in the era of extremely large - 'petabyte' - databases where\nefficient data-mining is a paramount concern.", "category": "astro-ph_IM" }, { "text": "The Burke-Gaffney Observatory: A fully roboticized remote-access\n observatory with a low resolution spectrograph: We describe the current state of the Burke-Gaffney Observatory (BGO) at Saint\nMary's University - a unique fully roboticized remote-access observatory that\nallows students to carry out imaging, photometry, and spectroscopy projects\nremotely from anywhere in the world via a web browser or social media. Stellar\nspectroscopy is available with the ALPY 600 low resolution grism spectrograph\nequipped with a CCD detector. We describe our custom CCD spectroscopy reduction\nprocedure written in the Python programming language and demonstrate the\nquality of fits of synthetic spectra computed with the ChromaStarServer (CSS)\ncode to BGO spectra. The facility, along with the accompanying Python BGO\nspectroscopy reduction package and the CSS spectrum synthesis code, provides an\naccessible means for students anywhere to carry out projects at the\nundergraduate honours level. BGO web pages for potential observers are at the\nsite: observatory.smu.ca/bgo-useme. All codes are available from the OpenStars\nwww site: openstars.smu.ca/", "category": "astro-ph_IM" }, { "text": "Towards the MICADO@ELT PSF-R with simulated and real data: Observations close to the diffraction limit, with high Strehl ratios, from\nAdaptive Optics (AO)-assisted instruments mounted on ground-based telescopes\nare a reality and will become even more widespread with the next-generation\ninstruments that will equip 30-meter-class telescopes. This results in a growing\ninterest in tools and methods to accurately reconstruct the observed Point\nSpread Function (PSF) of AO systems. We will discuss the performance of the PSF\nreconstruction (PSF-R) software developed in the context of the MICADO\ninstrument of the Extremely Large Telescope. In particular, we have recently\nimplemented a novel algorithm for reconstructing off-axis PSFs. In every case,\nthe PSF is reconstructed from AO telemetry, without making use of science\nexposures. We will present the results coming from end-to-end simulations and\nreal AO observations, covering a wide range of observing conditions.\nSpecifically, the spatial variation of the PSF has been studied with different\nAO-reference star magnitudes. The reconstructed PSFs are observed to match the\nreference ones with a relative error in Strehl ratio and full-width at half\nmaximum below 10% over a field of view of the order of one arcmin, making the\nproposed PSF-R method an appealing tool to assist observation analysis and\ninterpretation.", "category": "astro-ph_IM" }, { "text": "First Light for the First Station of the Long Wavelength Array: The first station of the Long Wavelength Array (LWA1) was completed in April\n2011 and is currently performing observations resulting from its first call for\nproposals in addition to a continuing program of commissioning and\ncharacterization observations. The instrument consists of 258 dual-polarization\ndipoles, which are digitized and combined into beams.
Four\nindependently-steerable dual-polarization beams are available, each with two\n\"tunings\" of 16 MHz bandwidth that can be independently tuned to any frequency\nbetween 10 MHz and 88 MHz. The system equivalent flux density for zenith\npointing is ~3 kJy and is approximately independent of frequency; this\ncorresponds to a sensitivity of ~5 Jy/beam (5sigma, 1 s), making it one of the\nmost sensitive meter-wavelength radio telescopes. LWA1 also has two \"transient\nbuffer\" modes which allow coherent recording from all dipoles simultaneously,\nproviding an instantaneous all-sky field of view. LWA1 provides versatile and\nunique new capabilities for Galactic science, pulsar science, solar and\nplanetary science, space weather, cosmology, and searches for astrophysical\ntransients. LWA1 observations will detect or tightly constrain the presence of\nhot Jupiters within 50 parsecs of Earth. LWA1 will provide excellent resolution\nin frequency and in time to examine phenomena such as solar bursts, and pulsars\nover a 4:1 frequency range that includes the poorly understood turnover and\nsteep-spectrum regimes. Observations to date have proven LWA1's potential for\npulsar observing, and just a few seconds with the completed 256-dipole LWA1\nprovide the most sensitive images of the sky at 23 MHz obtained yet. We are\noperating LWA1 as an open skies radio observatory, offering ~2000 beam-hours\nper year to the general community.", "category": "astro-ph_IM" }, { "text": "Pipe3D, a pipeline to analyze Integral Field Spectroscopy data: I. New\n fitting philosophy of FIT3D: We present an improved version of FIT3D, a fitting tool for the analysis of\nthe spectroscopic properties of the stellar populations and the ionized gas\nderived from moderate-resolution spectra of galaxies. FIT3D is a tool developed\nto analyze Integral Field Spectroscopy data and it is the basis of Pipe3D, a\npipeline already used in the analysis of datasets like CALIFA, MaNGA, and SAMI.\nWe describe the philosophy behind the fitting procedure and, in detail, each of\nthe different steps in the analysis. We present an extensive set of simulations\nin order to estimate the precision and accuracy of the derived parameters for\nthe stellar populations. In summary, we find that using different stellar\npopulation templates we reproduce the mean properties of the stellar population\n(age, metallicity, and dust attenuation) within ~0.1 dex. A similar approach is\nadopted for the ionized gas, where a set of simulated emission-line systems\nwas created. Finally, we compare the results of the analysis using FIT3D with\nthose provided by other widely used packages for the analysis of the stellar\npopulation (Starlight, Steckmap, and analysis based on stellar indices) using\nreal high-S/N data. In general we find that the parameters for the stellar\npopulations derived by FIT3D are fully compatible with those derived using\nthese other tools.", "category": "astro-ph_IM" }, { "text": "The Ecological Impact of High-performance Computing in Astrophysics: The importance of computing in astronomy continues to increase, and so does its\nimpact on the environment. When analyzing data or performing simulations, most\nresearchers raise concerns about the time to reach a solution rather than its\nimpact on the environment. Luckily, a reduced time-to-solution due to faster\nhardware or optimizations in the software generally also leads to a smaller\ncarbon footprint.
This is not the case when the reduced wall-clock time is\nachieved by overclocking the processor, or when using supercomputers.\n The increasing popularity of interpreted scripting languages and the\ngeneral availability of high-performance workstations pose a considerable\nthreat to the environment. A similar concern can be raised about the trend of\nrunning single-core code instead of adopting efficient many-core programming\nparadigms.\n In astronomy, computing is among the top producers of greenhouse gases,\nsurpassing telescope operations. Here I hope to raise the awareness of the\nenvironmental impact of running non-optimized code on overpowered computer\nhardware.", "category": "astro-ph_IM" }, { "text": "emcee: The MCMC Hammer: We introduce a stable, well-tested Python implementation of the\naffine-invariant ensemble sampler for Markov chain Monte Carlo (MCMC) proposed\nby Goodman & Weare (2010). The code is open source and has already been used in\nseveral published projects in the astrophysics literature. The algorithm behind\nemcee has several advantages over traditional MCMC sampling methods and it has\nexcellent performance as measured by the autocorrelation time (or function\ncalls per independent sample). One major advantage of the algorithm is that it\nrequires hand-tuning of only 1 or 2 parameters compared to $\sim N^2$ for a\ntraditional algorithm in an N-dimensional parameter space. In this document, we\ndescribe the algorithm and the details of our implementation and API.\nExploiting the parallelism of the ensemble method, emcee permits any user to\ntake advantage of multiple CPU cores without extra effort. The code is\navailable online at http://dan.iel.fm/emcee under the MIT License.", "category": "astro-ph_IM" }, { "text": "Analysis of Galactic molecular cloud polarization maps: a review of the\n methods: The Davis-Chandrasekhar-Fermi (DCF) method using the Angular Dispersion\nFunction (ADF), the Histogram of Relative Orientations (HROs) and the\nPolarization-Intensity Gradient Relation (P-IGR) are the most common tools used\nto analyse maps of linearly polarized emission by thermal dust grains at\nsubmillimeter wavelengths in molecular clouds and star-forming regions. A short\nreview of these methods is given. The combination of these methods will provide\nvaluable tools to shed light on the impact of the magnetic fields on the\nformation and evolution of subparsec scale hub-filaments that will be mapped\nwith the NIKA2 camera and future experiments.", "category": "astro-ph_IM" }, { "text": "ASTRO2020 White Paper: JWST: Probing the Epoch of Reionization with a\n Wide Field Time-Domain Survey: A major scientific goal of JWST is to probe the epoch of re-ionization of the\nUniverse at z above 6, and up to 20 and beyond. At these redshifts, galaxies\nare just beginning to form and the observable objects are early black holes,\nsupernovae, and the cosmic infrared background. The JWST has the necessary\nsensitivity to observe these targets individually, but a public deep and wide\nscience enabling survey in the wavelength range from 2-5 $\mu$m is needed to\ndiscover these black holes and supernovae and to cover an area large enough\nfor the cosmic infrared background to be reliably studied.
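The emcee entry above describes a concrete, widely used API, so a minimal usage sketch is easy to ground. It assumes the version 3 call signatures of the public emcee package (EnsembleSampler, run_mcmc, get_chain); the toy Gaussian target log_prob and all numbers are illustrative:

```python
import numpy as np
import emcee

def log_prob(theta):
    """Toy target density: an isotropic Gaussian in N dimensions."""
    return -0.5 * np.sum(theta**2)

ndim, nwalkers = 5, 32                    # N-dimensional space, ensemble size
p0 = np.random.randn(nwalkers, ndim)      # one starting point per walker
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
chain = sampler.get_chain(discard=500, flat=True)   # flattened posterior draws
print(chain.mean(axis=0))
print(sampler.get_autocorr_time(quiet=True))        # performance diagnostic
```

Note the only user-set knobs here are the number of walkers and, optionally, the stretch-move scale, consistent with the abstract's claim of only 1 or 2 hand-tuned parameters.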
This enabling survey\nwill also discover a large number of other transients and enable science cases\nsuch as supernova cosmology up to z $\sim$ 5, star formation history at high\nredshift through supernova explosions, faint stellar objects in the Milky Way,\nand galaxy evolution up to z approaching 10. The results of this survey will\nalso serve as an invaluable target feeder for the upcoming era of ELT and SKA.", "category": "astro-ph_IM" }, { "text": "Testing the variation of fundamental constants by astrophysical methods:\n overview and prospects: By measuring the fundamental constants in astrophysical objects one can test\nbasic physical principles, such as the space-time invariance of physical laws,\nalong with probing the applicability limits of the standard model of particle\nphysics. The latest constraints on the fine structure constant alpha and the\nelectron-to-proton mass ratio mu obtained from observations at high redshifts\nand in the Milky Way disk are reviewed. In the optical range, the most accurate\nmeasurements have already reached the sensitivity limit of available\ninstruments, and further improvements will be possible only with the next\ngeneration of telescopes and receivers. New methods of wavelength\ncalibration should be realized to control systematic errors at the sub-pixel\nlevel. In the radio sector, the main tasks are the search for galactic and\nextragalactic objects suitable for precise molecular spectroscopy as well as\nhigh resolution laboratory measurements of molecular lines to provide accurate\nfrequency standards. The expected progress in the optical and radio\nastrophysical observations is quantified.", "category": "astro-ph_IM" }, { "text": "On the measurements of numerical viscosity and resistivity in Eulerian\n MHD codes: We propose a simple ansatz for estimating the value of the numerical\nresistivity and the numerical viscosity of any Eulerian MHD code. We test this\nansatz with the help of simulations of the propagation of (magneto)sonic waves,\nAlfven waves, and the tearing mode instability using the MHD code Aenus. By\ncomparing the simulation results with analytical solutions of the\nresistive-viscous MHD equations and an empirical ansatz for the growth rate of\ntearing modes, we measure the numerical viscosity and resistivity of Aenus. The\ncomparison shows that the fast-magnetosonic speed and wavelength are the\ncharacteristic velocity and length, respectively, of the aforementioned\n(relatively simple) systems. We also determine the dependence of the numerical\nviscosity and resistivity on the time integration method, the spatial\nreconstruction scheme and (to a lesser extent) the Riemann solver employed in\nthe simulations. From the measured results we infer the numerical resolution\n(as a function of the spatial reconstruction method) required to properly\nresolve the growth and saturation level of the magnetic field amplified by the\nmagnetorotational instability in the post-collapsed core of massive stars. Our\nresults show that it is most advantageous to resort to ultra-high-order\nmethods (e.g., the 9th-order Monotonicity Preserving method) to tackle this\nproblem properly, in particular in three-dimensional simulations.", "category": "astro-ph_IM" }, { "text": "The Pierre Auger Observatory Upgrade - Preliminary Design Report: The Pierre Auger Observatory has begun a major Upgrade of its already\nimpressive capabilities, with an emphasis on improved mass composition\ndetermination using the surface detectors of the Observatory.
Known as\nAugerPrime, the upgrade will include new 4 m$^2$ plastic scintillator detectors\non top of all 1660 water-Cherenkov detectors, updated and more flexible surface\ndetector electronics, a large array of buried muon detectors, and an extended\nduty cycle for operations of the fluorescence detectors. This Preliminary\nDesign Report was produced by the Collaboration in April 2015 as an internal\ndocument and as information for funding agencies. It outlines the scientific and\ntechnical case for AugerPrime. We now release it to the public via the arXiv\nserver. We invite you to review the large number of fundamental results already\nachieved by the Observatory and our plans for the future.", "category": "astro-ph_IM" }, { "text": "Possibility of Terahertz Observations at the ALMA site: Observational rates under terahertz (THz) opacities less than 3.0 and 2.0 at\nthe Atacama Large Millimeter/submillimeter Array (ALMA) site have been\ncalculated using the 225 GHz tipping radiometer monitoring data and the opacity\ncorrelation between 225 GHz and THz opacities. The observational rate for THz\nopacity less than 3.0 is 12.4% in a year, and in winter (November -\nApril) it is about twice as high as in summer (May - October). This\nobservational rate shows a large sinusoidal annual variation, and it seems to\nbe related to the El Ni\~no and La Ni\~na phenomena; the La Ni\~na years\ntend to have high observational rates, but the El Ni\~no years show low rates.\nOn the other hand, the observational rate for THz opacity less\nthan 2.0 is only 1.9%, and no obvious annual and seasonal variations are\nobserved. This indicates that THz observations under low-opacity conditions of\nless than 2.0 at the ALMA site are very difficult to perform.", "category": "astro-ph_IM" }, { "text": "Detecting Dispersed Radio Transients in Real Time Using Convolutional\n Neural Networks: We present a methodology for automated real-time analysis of a radio image\ndata stream with the goal of finding transient sources. Contrary to previous\nworks, the transients we are interested in occur on a time-scale where\ndispersion starts to play a role, so we must search a higher-dimensional data\nspace and yet work fast enough to keep up with the data stream in real time.\nThe approach consists of five main steps: quality control, source detection,\nassociation, flux measurement, and physical parameter inference. We present\nparallelized methods based on convolutions and filters that can be accelerated\non a GPU, allowing the pipeline to run in real time. In the parameter inference\nstep, we apply a convolutional neural network to dynamic spectra that were\nobtained from the preceding steps. It infers physical parameters, among them\nthe dispersion measure of the transient candidate. Based on critical values of\nthese parameters, an alert can be sent out and data will be saved for further\ninvestigation. Experimentally, the pipeline is applied to simulated data and\nimages from AARTFAAC (Amsterdam Astron Radio Transients Facility And Analysis\nCentre), a transients facility based on the Low-Frequency Array (LOFAR).\nResults on simulated data show the efficacy of the pipeline, and from real data\nit discovered dispersed pulses. The current work targets transients on time\nscales that are longer than the fast transients of beam-formed search, but\nshorter than slow transients in which dispersion matters less.
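The transient-detection entry above searches a data space where dispersion matters; a self-contained sketch of the standard incoherent dedispersion step underlying any such search may clarify what the network has to learn. This is a textbook version, not code from that pipeline; the 4.149e3 s constant is the usual cold-plasma dispersion constant for frequencies in MHz and DM in pc cm^-3:

```python
import numpy as np

def dedisperse(dynspec, freqs_mhz, dm, dt):
    """Shift each frequency channel of a dynamic spectrum (n_chan, n_time)
    by the cold-plasma dispersion delay for a trial dispersion measure dm,
    then collapse in frequency to recover the pulse profile."""
    f_ref = freqs_mhz.max()                    # align to the highest channel
    profile = np.zeros(dynspec.shape[1])
    for channel, f in zip(dynspec, freqs_mhz):
        delay_s = 4.149e3 * dm * (f**-2.0 - f_ref**-2.0)   # seconds
        profile += np.roll(channel, -int(round(delay_s / dt)))
    return profile

# A brute-force search evaluates this over a grid of trial DMs and keeps the
# one maximizing signal-to-noise; the CNN described above replaces that
# explicit search with a learned parameter inference.
```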
This fills a\nmethodological gap that is relevant for the upcoming Square-Kilometer Array\n(SKA). Additionally, since real-time analysis can be performed, only data with\npromising detections can be saved to disk, providing a solution to the big-data\nproblem that modern astronomy is dealing with.", "category": "astro-ph_IM" }, { "text": "The Compton-Pair telescope: A prototype for a next-generation MeV\n $\u03b3$-ray observatory: The Compton Pair (ComPair) telescope is a prototype that aims to develop the\nnecessary technologies for future medium energy gamma-ray missions and to\ndesign, build, and test the prototype in a gamma-ray beam and balloon flight.\nThe ComPair team has built an instrument that consists of 4 detector\nsubsystems: a double-sided silicon strip detector Tracker, a novel\nhigh-resolution virtual Frisch-grid cadmium zinc telluride Calorimeter, and a\nhigh-energy hodoscopic cesium iodide Calorimeter, all of which are surrounded\nby a plastic scintillator anti-coincidence detector. These subsystems together\ndetect and characterize photons via Compton scattering and pair production,\nenable a veto of cosmic rays, and are a proof-of-concept for a space telescope\nwith the same architecture. A future medium-energy gamma-ray mission enabled\nthrough ComPair will address many questions posed in the Astro2020 Decadal\nsurvey in both the New Messengers and New Physics and the Cosmic Ecosystems\nthemes. In this contribution, we will give an overview of the ComPair project\nand steps forward to the balloon flight.", "category": "astro-ph_IM" }, { "text": "Effects of the Number of Active Receiver Channels on the Sensitivity of\n a Reflector Antenna System with a Multi-Beam Wideband Phased Array Feed: A method for modeling a reflector antenna system with a wideband phased array\nfeed is presented and used to study the effects of the number of active antenna\nelements and associated receiving channels on the sensitivity of the system.\nNumerical results are shown for a practical system named APERTIF that is\ncurrently under development at The Netherlands Institute for Radio Astronomy\n(ASTRON).", "category": "astro-ph_IM" }, { "text": "Optical amplification for astronomical imaging at higher resolution: Heisenberg's uncertainty principle tells us that it is impossible to\ndetermine simultaneously the position of a photon crossing a telescope's\naperture and its momentum. Super-resolution imaging techniques rely on\nmodification of the observed sample, or on entangling photons. In astronomy we\nhave no access to the object, but resolution may be improved by optical\namplification. Unfortunately, spontaneous emission contributes noise and\nnegates the possible gain from stimulated emissions. We show that it is\npossible to increase the weight of the stimulated photons by considering photon\nstatistics, and observe an improvement in resolution. Most importantly, we\ndemonstrate a method which can apply to all imaging purposes.", "category": "astro-ph_IM" }, { "text": "MCAO for the European Solar Telescope: first results: We analyse the efficiency of wavefront reconstruction in the MultiConjugate\nAdaptive Optics system for the European Solar Telescope (EST). We present\npreliminary results derived from numerical simulations. We study a 4-meter\nclass telescope with multiple deformable mirrors conjugated at variable\nheights.
Along with common issues, difficulties peculiar to the solar case have\nto be considered, such as the low contrast and extended nature of the natural\nguide features. Our findings identify basic requirements for the EST Adaptive\nOptics system and show some of its capabilities.", "category": "astro-ph_IM" }, { "text": "Simulation of Stray Light Contamination on CHEOPS Detector: The aim of this work is to quantify the amount of Earth stray light that\nreaches the CHEOPS (CHaracterising ExOPlanets Satellite) detector. It will\ncarry out follow-up measurements on transiting planets. This requires exquisite\ndata that can be acquired only by a space-borne observatory and with well\nunderstood and mitigated sources of noise. Earth stray light is one of them,\nand it becomes the most prominent noise source for faint stars.\n A software suite was developed to evaluate the contamination by stray\nlight. As the satellite will be launched in late 2017, the year 2018 is\nanalysed for three different altitudes. Given a visible region at any time,\nthe stray light contamination is simulated at the entrance of the telescope.\nThe amount that reaches the detector is, however, much lower, as it is reduced\nby the point source transmittance function.\n Information about the faintest star visible in any direction in the sky is\ntherefore available and is compared to a potential list of targets. The\ninfluences of both the visibility region and the unavoidable South Atlantic\nAnomaly are also studied, as well as the effect of a changing optical assembly.\nA methodology to compute the visible region of the sky and the stray light flux\nis described. Techniques to prepare the scheduling of the observations and a\npossible way of calibrating the dark current and the map of hot pixels are\npresented.\n The simulations show that there are seasonal variations in the amount of flux\nreceived and a dependence on the altitude. However, the South Atlantic Anomaly\nimpacts higher orbits more severely. This high-radiation region demands the\ninterruption of science operations. Even if the viewing zone at low altitude is\nsmaller, the availability of the instrument is greater. There exist two\nfavoured regions for the observations; the field of view is widest when the\nplane of the orbit and the plane of the terminator merge.", "category": "astro-ph_IM" }, { "text": "The Gaia mission: Gaia is a cornerstone mission in the science programme of the European Space\nAgency (ESA). The spacecraft construction was approved in 2006, following a\nstudy in which the original interferometric concept was changed to a\ndirect-imaging approach. Both the spacecraft and the payload were built by\nEuropean industry. The involvement of the scientific community focusses on data\nprocessing for which the international Gaia Data Processing and Analysis\nConsortium (DPAC) was selected in 2007. Gaia was launched on 19 December 2013\nand arrived at its operating point, the second Lagrange point of the\nSun-Earth-Moon system, a few weeks later. The commissioning of the spacecraft\nand payload was completed on 19 July 2014. The nominal five-year mission\nstarted with four weeks of special, ecliptic-pole scanning and subsequently\ntransferred into full-sky scanning mode. We recall the scientific goals of Gaia\nand give a description of the as-built spacecraft that is currently (mid-2016)\nbeing operated to achieve these goals. We pay special attention to the payload\nmodule, the performance of which is closely related to the scientific\nperformance of the mission.
We provide a summary of the commissioning\nactivities and findings, followed by a description of the routine operational\nmode. We summarise scientific performance estimates on the basis of in-orbit\noperations. Several intermediate Gaia data releases are planned and the data\ncan be retrieved from the Gaia Archive, which is available through the Gaia\nhome page at http://www.cosmos.esa.int/gaia.", "category": "astro-ph_IM" }, { "text": "Conditions for Coronal Observation at the Lijiang Observatory in 2011: The sky brightness is a critical parameter for estimating the coronal\nobservation conditions for a solar observatory. As part of a site-survey project\nin Western China, we measured the sky brightness continuously at the Lijiang\nObservatory in Yunnan province in 2011. A sky brightness monitor (SBM) was\nadopted to measure the sky brightness in a region extending from 4.5 to 7.0\napparent solar radii based on the experience of the Daniel K. Inouye Solar\nTelescope (DKIST) site survey. Every month, the data were collected manually\nfor at least one week. We collected statistics of the sky brightness at four\nbandpasses located at 450, 530, 890, and 940 nm. The results indicate that\naerosol scattering is of great importance for the diurnal variation of the sky\nbrightness. For most of the year, the sky brightness remains under 20\nmillionths per airmass before local noon. On average, sky brightness of less\nthan 20 millionths accounts for 40.41% of the total observing time on a clear\nday. The best observation time is from 9:00 to 13:00 (Beijing time).\nLijiang Observatory is therefore suitable for coronagraphs investigating the\nstructures and dynamics of the corona.", "category": "astro-ph_IM" }, { "text": "Deep reinforcement learning for smart calibration of radio telescopes: Modern radio telescopes produce unprecedented amounts of data, which are\npassed through many processing pipelines before the delivery of scientific\nresults. Hyperparameters of these pipelines need to be tuned by hand to produce\noptimal results. Because many thousands of observations are taken during the\nlifetime of a telescope and because each observation will have its own unique\nsettings, the fine tuning of pipelines is a tedious task. In order to automate\nthis process of hyperparameter selection in data calibration pipelines, we\nintroduce the use of reinforcement learning. We test two reinforcement learning\ntechniques, twin delayed deep deterministic policy gradient (TD3) and soft\nactor-critic (SAC), to train an autonomous agent to perform this fine tuning.\nFor the sake of generalization, we consider the pipeline to be a black-box\nsystem where the summarized state of the performance of the pipeline is used by\nthe autonomous agent. The autonomous agent trained in this manner is able to\ndetermine optimal settings for diverse observations and is therefore able to\nperform 'smart' calibration, minimizing the need for human intervention.", "category": "astro-ph_IM" }, { "text": "Optimal Filtration and a Pulsar Time Scale: An algorithm is proposed for constructing a group (ensemble) pulsar time\nbased on the application of optimal Wiener filters. This algorithm makes it\npossible to separate the contributions of variations of the atomic time scale\nand of the pulsar rotation to barycentric residual deviations of the pulse\narrival times.
The method is applied to observations of the pulsars PSR\nB1855+09 and PSR B1937+21, and is used to obtain corrections to UTC relative to\nthe group pulsar time PT$_{\rm ens}$. Direct comparison of the terrestrial time\nTT(BIPM06) and the group pulsar time PT$_{\rm ens}$ shows that they disagree by\nno more than $0.4\pm 0.17\; \mu$s. Based on the fractional instability of the\ntime difference TT(BIPM06) -- PT$_{\rm ens}$, a new limit for the energy\ndensity of the gravitational-wave background is established at the level\n$\Omega_g {h}^2\sim 10^{-9}$.", "category": "astro-ph_IM" }, { "text": "On-sky demonstration of optical polaroastrometry: A method for measuring the difference between centroids of polarized flux and\ntotal flux of an astronomical object - {\it polaroastrometry} - is proposed.\nThe deviation of the centroid of flux corresponding to Stokes parameter $Q$ or\n$U$ from the centroid of total flux, multiplied by the dimensionless Stokes\nparameter $q$ or $u$ respectively, was used as a signal. The efficiency of the\nmethod is demonstrated on the basis of observations made in the $V$ band by\nusing an instrument combining features of a two-beam polarimeter with a\nrotating half-wave plate and a speckle interferometer. The polaroastrometric\nsignal noise is 60-70 $\mu$as rms for a total number of accumulated\nphotoelectrons $N_e$ of $10^9$ from a 70-cm telescope; this corresponds to a\ntotal integration time of 500 sec and an object magnitude $V=6$ mag. At smaller\n$N_e$ the noise increases as $\approx 1.7^{\prime\prime}/\sqrt{N_e}$, while at\nlarger $N_e$ it remains the same owing to imperfection of the half-wave plate.\nFor main-sequence stars, both unpolarized and polarized by interstellar dust,\nand for the Mira-type variable R Tri, the signal was undetectable. For the\nMira-type variable $\chi$ Cyg the polaroastrometric signal is found to be\n$310\pm70$ and $300\pm70$ $\mu$as for Stokes $Q$ and $U$ respectively; for $o$\nCet these values are $490\pm100$ and $1160\pm100$ $\mu$as. The significant\nvalue of the polaroastrometric signal provides evidence of the asymmetry of the\npolarized flux distribution.", "category": "astro-ph_IM" }, { "text": "Correlated magnetic noise in global networks of gravitational-wave\n interferometers: observations and implications: One of the most ambitious goals of gravitational-wave astronomy is to observe\nthe stochastic gravitational-wave background. Correlated noise in two or more\ndetectors can introduce a systematic error, which limits the sensitivity of\nstochastic searches. We report on measurements of correlated magnetic noise\nfrom Schumann resonances at the widely separated LIGO and Virgo detectors. We\ninvestigate the effect of this noise on a global network of interferometers and\nderive a constraint on the allowable coupling of environmental magnetic fields\nto test mass motion in gravitational-wave detectors. We find that while\ncorrelated noise from global electromagnetic fields could be safely ignored for\ninitial LIGO stochastic searches, it could severely impact Advanced LIGO and\nthird-generation detectors.", "category": "astro-ph_IM" }, { "text": "High-resolution Solar Image Reconstruction Based on Non-rigid Alignment: Suppressing the interference of atmospheric turbulence and obtaining\nobservational data with a high spatial resolution is an urgent issue for\nground-based observations. One way to solve this problem is to perform a\nstatistical reconstruction of short-exposure speckle images.
Combining the\nrapidity of Shift-Add and the accuracy of speckle masking, this paper proposes\na novel reconstruction algorithm, NASIR (Non-rigid Alignment based Solar Image\nReconstruction). NASIR reconstructs the phase of the object image at each\nfrequency by building a computational model between geometric distortion and\nintensity distribution, and reconstructs the modulus of the object image on the\naligned speckle images by speckle interferometry. We analyzed the performance\nof NASIR by using the correlation coefficient, power spectrum, and coefficient\nof variation of intensity profile (CVoIP) in processing data obtained by the\nNVST (1m New Vacuum Solar Telescope). The reconstruction experiments and\nanalysis results show that the quality of images reconstructed by NASIR is\nclose to that of speckle masking when the seeing is good, while NASIR has\nexcellent robustness when the seeing conditions become worse. Furthermore,\nNASIR reconstructs the entire field of view in parallel in one go, without\nphase recursion and block-by-block reconstruction, so its computation time is\nless than half that of speckle masking. Therefore, we consider NASIR to be a\nrobust, high-quality, and fast reconstruction method that can serve as an\neffective tool for data filtering and quick look.", "category": "astro-ph_IM" }, { "text": "Astro2020 Project White Paper: PolyOculus -- Low-cost Spectroscopy for\n the Community: As astronomy moves into the era of large-scale time-domain surveys, we are\nseeing a flood of new transient and variable sources which will reach biblical\nproportions with the advent of LSST. A key strategic challenge for astronomy in\nthis era is the lack of suitable spectroscopic followup facilities. In response\nto this need, we have developed the PolyOculus approach for producing\nlarge-area-equivalent telescopes by using fiber optics to link modules of\nmultiple semi-autonomous, small, inexpensive, commercial-off-the-shelf\ntelescopes. Crucially, this scalable design has construction costs which are\n$>10x$ lower than equivalent traditional large-area telescopes. In addition,\nPolyOculus is inherently highly automated and well-suited for remote\noperations. Development of this technology will enable the expansion of major\nresearch efforts in the LSST era to a host of smaller universities and\ncolleges, including primarily-undergraduate institutions, for budgets\nconsistent with their educational expenditures on similar facilities. We\npropose to develop and deploy a 1.6-m prototype demonstrator at the Mt. Laguna\nObservatory in California, followed by a full-scale 5-meter-class PolyOculus\nfacility for linkage to existing and upcoming time-domain surveys.", "category": "astro-ph_IM" }, { "text": "Towards a cosmic-ray mass-composition study at Tunka Radio Extension\n (ARENA 2016): The Tunka Radio Extension (Tunka-Rex) is a radio detector at the TAIGA\nfacility located in Siberia nearby the southern tip of Lake Baikal. Tunka-Rex\nmeasures air-showers induced by high-energy cosmic rays, in particular, the\nlateral distribution of the radio pulses. The depth of the air-shower maximum,\nwhich statistically depends on the mass of the primary particle, is determined\nfrom the slope of the lateral distribution function (LDF). Using a\nmodel-independent approach, we have studied possible features of the\none-dimensional slope method and tried to find improvements for the\nreconstruction of primary mass.
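For the NASIR entry above, the Shift-Add ingredient it builds on is simple enough to sketch. This is the generic textbook operation, not the NASIR code; aligning each frame on its brightest pixel is the simplest possible variant:

```python
import numpy as np

def shift_and_add(frames):
    """Classic Shift-Add speckle reconstruction: recentre every
    short-exposure frame on its brightest speckle and average,
    partially undoing the atmospheric tip/tilt."""
    h, w = frames[0].shape
    stack = np.zeros((h, w), dtype=float)
    for frame in frames:
        py, px = np.unravel_index(np.argmax(frame), frame.shape)
        stack += np.roll(np.roll(frame, h // 2 - py, axis=0), w // 2 - px, axis=1)
    return stack / len(frames)
```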
To study the systematic uncertainties given by\ndifferent primary particles, we have performed simulations using the CONEX and\nCoREAS software packages of the recently released CORSIKA v7.5, including the\nmodern high-energy hadronic models QGSJet-II.04 and EPOS-LHC. The simulations\nhave shown that the largest systematic uncertainty in the energy deposit is due\nto the unknown primary particle. Finally, we studied the relation between the\npolarization and the asymmetry of the LDF.", "category": "astro-ph_IM" }, { "text": "The advantages of using a Lucky Imaging camera for observations of\n microlensing events: In this work, we study the advantages of using a Lucky Imaging camera for the\nobservations of potential planetary microlensing events. Our aim is to reduce\nthe blending effect and enhance exoplanet signals in binary lensing systems\ncomposed of an exoplanet and the corresponding parent star. We simulate\nplanetary microlensing light curves based on present microlensing surveys and\nfollow-up telescopes where one of them is equipped with a Lucky Imaging camera.\nThis camera is used at the Danish $1.54$-m follow-up telescope. Using a\nspecific observational strategy, for an Earth-mass planet in the resonance\nregime, where the detection probability in crowded fields is smaller, Lucky\nImaging observations improve the detection efficiency, which reaches 2 per\ncent. Given the difficulty of detecting the signal of an Earth-mass planet in\ncrowded-field imaging even in the resonance regime with conventional cameras,\nwe show that Lucky Imaging can substantially improve the detection efficiency.", "category": "astro-ph_IM" }, { "text": "Hunting electromagnetic counterparts of gravitational-wave events using\n the Zwicky Transient Facility: Detections of coalescing binary black holes by LIGO have opened a new window\nof transient astronomy. With increasing sensitivity of LIGO and participation\nof the Virgo detector in Cascina, Italy, we expect to soon detect coalescence\nof compact binary systems with one or more neutron stars. These are the prime\ntargets for electromagnetic follow-up of gravitational wave triggers, which\nholds enormous promise of rich science. However, hunting for electromagnetic\ncounterparts of gravitational wave events is a non-trivial task due to the\nsheer size of the error regions, which could span hundreds of square degrees.\nThe Zwicky Transient Facility (ZTF), scheduled to begin operation in 2017, is\ndesigned to cover such large sky-localization areas. In this work, we present\nthe strategies of efficiently tiling the sky to facilitate the observation of\nthe gravitational wave error regions using ZTF. To do this we used simulations\nconsisting of 475 binary neutron star coalescences detected using a mix of two-\nand three-detector networks. Our studies reveal that, using two overlapping\nsets of ZTF tiles and a (modified) ranked-tiling algorithm, we can cover the\ngravitational-wave sky-localization regions with half as many pointings as a\nsimple contour-covering algorithm. We then incorporated the ranked-tiling\nstrategy to study our ability to observe the counterparts. This requires\noptimization of observation depth and localization area coverage. Our results\nshow that observation in r-band with ~600 seconds of integration time per\npointing seems to be optimum for typical assumed brightnesses of\nelectromagnetic counterparts, if we plan to spend an equal amount of time per\npointing.
However, our results also reveal that we can gain by as much as 50%\nin detection efficiency if we linearly scale our integration time per pointing\nbased on the tile probability.", "category": "astro-ph_IM" }, { "text": "Monte-Carlo Imaging for Optical Interferometry: We present a flexible code created for imaging from the bispectrum and\nvisibility-squared. By using a simulated annealing method, we limit the\nprobability of converging to local chi-squared minima as can occur when\ntraditional imaging methods are used on data sets with limited phase\ninformation. We present the results of our code used on a simulated data set\nutilizing a number of regularization schemes including maximum entropy. Using\nthe statistical properties from Monte-Carlo Markov chains of images, we show\nhow this code can place statistical limits on image features such as unseen\nbinary companions.", "category": "astro-ph_IM" }, { "text": "r-Java 2.0: the astrophysics: [Context:] This article is the second in a two part series introducing r-Java\n2.0, a nucleosynthesis code for open use that performs r-process calculations\nand provides a suite of other analysis tools. [Aims:] The first paper discussed\nthe nuclear physics inherent to r-Java 2.0 and in this article the astrophysics\nincorporated into the software will be detailed. [Methods:] R-Java 2.0 allows\nthe user to specify the density and temperature evolution for an r-process\nsimulation. Defining how the physical parameters (temperature and density)\nevolve can effectively simulate the astrophysical conditions for the r-process.\nWithin r-Java 2.0 the user has the option to select astrophysical environments\nwhich have unique sets of input parameters available for the user to adjust. In\nthis work we study three proposed r-process sites; neutrino-driven winds around\na proto-neutron star, ejecta from a neutron star merger and ejecta from a quark\nnova. The underlying physics that define the temperature and density evolution\nfor each site is described in this work. [Results:] In this paper a survey of\nthe available parameters for each astrophysical site is undertaken and the\neffect on final r-process abundance is compared. The resulting abundances for\neach site are also compared to solar observations both independently and in\nconcert. R-Java 2.0 is available for download from the website of the\nQuark-Nova Project: http://quarknova.ucalgary.ca/", "category": "astro-ph_IM" }, { "text": "Radio Detection of High Energy Neutrinos in Ice: Radio-based detection of high-energy particles is growing in maturity. In\nthis chapter, we focus on the detection of neutrinos with energies in excess of\n10 PeV that interact in the thick, radio-transparent ice found in the polar\nregions. High-energy neutrinos interacting in the ice generate short duration,\nradio-frequency flashes through the Askaryan effect that can be measured with\nantennas installed at shallow depths. The abundant target material and the long\nattenuation lengths of around 1 km allow cost-effective instrumentation of huge\nvolumes with a sparse array of radio detector stations. This detector\narchitecture provides sufficient sensitivity to the low flux of\nultra-high-energy neutrinos to probe the production of ultra-high-energy cosmic\nrays whose origin is one of the longest-standing riddles in astroparticle\nphysics. We describe the signal characteristics, propagation effects, detector\nsetup, suitable detection sites, and background processes. 
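The ranked-tiling strategy in the ZTF entry above reduces to a short greedy selection once each tile carries a precomputed localization probability; a minimal sketch under that assumption (not the authors' code, and tile_probs is a hypothetical input array):

```python
import numpy as np

def ranked_tiles(tile_probs, coverage=0.9):
    """Greedily pick the highest-probability tiles until the summed
    sky-localization probability reaches the requested coverage."""
    order = np.argsort(tile_probs)[::-1]        # tiles sorted by probability
    enclosed = np.cumsum(tile_probs[order])
    n_tiles = int(np.searchsorted(enclosed, coverage)) + 1
    return order[:n_tiles]

# Example: ranked_tiles(np.array([0.4, 0.3, 0.2, 0.1]), 0.9) returns the
# first three tile indices. The probability-scaled observing variant the
# entry above mentions is then one line per tile:
# t_i = t_total * tile_probs[i] / tile_probs.sum()
```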
We give an overview\nof the current experimental landscape and an outlook into the future where\nalmost the entire sky can be viewed by a judicious choice of detector\nlocations.", "category": "astro-ph_IM" }, { "text": "STATCONT: A statistical continuum level determination method for\n line-rich sources: STATCONT is a python-based tool designed to determine the continuum emission\nlevel in spectral data, in particular for sources with a line-rich spectrum.\nThe tool inspects the intensity distribution of a given spectrum and\nautomatically determines the continuum level by using different statistical\napproaches. The different methods included in STATCONT are tested against\nsynthetic data. We conclude that the sigma-clipping algorithm provides the most\naccurate continuum level determination, together with information on the\nuncertainty in its determination. This uncertainty can be used to correct the\nfinal continuum emission level, resulting in what we here call the `corrected\nsigma-clipping method' or c-SCM. The c-SCM has been tested against more than\n750 different synthetic spectra reproducing typical conditions found towards\nastronomical sources. The continuum level is determined with a discrepancy of\nless than 1% in 50% of the cases, and less than 5% in 90% of the cases,\nprovided at least 10% of the channels are line free. The main products of\nSTATCONT are the continuum emission level, together with a conservative value\nof its uncertainty, and datacubes containing only spectral line emission, i.e.,\ncontinuum-subtracted datacubes. STATCONT also includes the option to estimate\nthe spectral index, when different files covering different frequency ranges\nare provided.", "category": "astro-ph_IM" }, { "text": "Measurement of the cosmic-ray energy spectrum above $10^{16}$ eV with\n the LOFAR Radboud Air Shower Array: The energy reconstruction of extensive air showers measured with the LOFAR\nRadboud Air Shower Array (LORA) is presented in detail. LORA is a particle\ndetector array located in the center of the LOFAR radio telescope in the\nNetherlands. The aim of this work is to provide an accurate and independent\nenergy measurement for the air showers measured through their radio signal with\nthe LOFAR antennas. The energy reconstruction is performed using a\nparameterized relation between the measured shower size and the cosmic-ray\nenergy obtained from air shower simulations. In order to illustrate the\ncapabilities of LORA, the all-particle cosmic-ray energy spectrum has been\nreconstructed, assuming that cosmic rays are composed only of protons or iron\nnuclei in the energy range between $\sim2\times10^{16}$ and $2\times10^{18}$\neV. The results are compatible with literature values and a changing mass\ncomposition in the transition region from a galactic to an extragalactic origin\nof cosmic rays.", "category": "astro-ph_IM" }, { "text": "The Faulkes Telescope Project: Not Just Pretty Pictures: The Faulkes Telescope (FT) Project is an educational and research arm of the\nLas Cumbres Observatory Global Telescope Network (LCOGTN).
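The sigma-clipping algorithm that the STATCONT entry above identifies as most accurate can be written in a few lines; this is the generic iterative estimator rather than the STATCONT implementation, with an illustrative threshold and iteration cap:

```python
import numpy as np

def sigma_clip_continuum(spectrum, nsigma=3.0, max_iter=20):
    """Iteratively discard channels far from the running mean (i.e. the
    spectral lines), returning the continuum level and a scatter-based
    uncertainty estimate."""
    data = np.asarray(spectrum, dtype=float)
    mask = np.ones(data.size, dtype=bool)
    for _ in range(max_iter):
        mu, sd = data[mask].mean(), data[mask].std()
        new_mask = np.abs(data - mu) < nsigma * sd
        if new_mask.sum() == mask.sum():
            break                      # converged: no more channels clipped
        mask = new_mask
    return data[mask].mean(), data[mask].std()
```

The "corrected" step described in the abstract then uses the returned uncertainty to adjust the final level; that correction is specific to STATCONT and is not reproduced here.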
As well as producing\nspectacular images of galaxies, nebulae, supernova remnants, star clusters,\netc., the FT team is involved in several projects pursuing scientific goals.\nMany of these projects also incorporate data collected and analysed by schools\nand amateur astronomers.", "category": "astro-ph_IM" }, { "text": "Bokeh Mirror Alignment for Cherenkov Telescopes: Imaging Atmospheric Cherenkov Telescopes (IACTs) need imaging optics with\nlarge apertures and high image intensities to map the faint Cherenkov light\nemitted from cosmic ray air showers onto their image sensors. Segmented\nreflectors fulfill these needs and, composed from mass-produced mirror facets,\nthey are inexpensive and lightweight. However, as the overall image is a\nsuperposition of the individual facet images, alignment remains a challenge.\nHere we present a simple, yet extendable, method to align a segmented reflector\nusing its Bokeh. Bokeh alignment does not need a star or good weather nights\nbut can be done even during daytime. Bokeh alignment optimizes the facet\norientations by comparing the segmented reflector's Bokeh to a predefined\ntemplate. The optimal Bokeh template is tightly constrained by the reflector's\naperture and is easily accessible. The Bokeh is observed using the out-of-focus\nimage of a nearby point-like light source at a distance of about 10 focal\nlengths. We introduce Bokeh alignment on segmented reflectors and demonstrate\nit on the First Geiger-mode Avalanche Cherenkov Telescope (FACT) on La Palma,\nSpain.", "category": "astro-ph_IM" }, { "text": "Reconstruction methods for acoustic particle detection in the deep sea\n using clusters of hydrophones: This article focuses on techniques for acoustic noise reduction, signal\nfilters and source reconstruction. For noise reduction, bandpass filters and\ncross correlations are found to be efficient and fast ways to improve the\nsignal to noise ratio and identify a possible neutrino-induced acoustic signal.\nThe reconstruction of the position of an acoustic point source in the sea is\nperformed by using small-volume clusters of hydrophones (about 1 cubic meter)\nfor direction reconstruction by a beamforming algorithm. The directional\ninformation from a number of such clusters allows for position reconstruction.\nThe algorithms for data filtering, direction and position reconstruction are\nexplained and demonstrated using simulated data.", "category": "astro-ph_IM" }, { "text": "Overview of the Advanced X-ray Imaging Satellite (AXIS): The Advanced X-ray Imaging Satellite (AXIS) is a Probe-class concept that\nwill build on the legacy of the Chandra X-ray Observatory by providing\nlow-background, arcsecond-resolution imaging in the 0.3-10 keV band across a\n450 arcminute$^2$ field of view, with an order of magnitude improvement in\nsensitivity. AXIS utilizes breakthroughs in the construction of lightweight\nsegmented X-ray optics using single-crystal silicon, and developments in the\nfabrication of large-format, small-pixel, high readout rate CCD detectors with\ngood spectral resolution, allowing a robust and cost-effective design. Further,\nAXIS will be responsive to target-of-opportunity alerts and, with onboard\ntransient detection, will be a powerful facility for studying the time-varying\nX-ray universe, following on from the legacy of the Neil Gehrels (Swift) X-ray\nobservatory that revolutionized studies of the transient X-ray Universe.
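The hydrophone-cluster entry above relies on beamforming for direction reconstruction; a schematic delay-and-sum version with whole-sample shifts is sketched below. The array geometry, sampling rate and sound speed are placeholders, and real implementations interpolate sub-sample delays:

```python
import numpy as np

def delay_and_sum(signals, positions, direction, fs, c=1500.0):
    """Coherently sum hydrophone traces for a trial arrival direction.
    signals: (n_sensors, n_samples); positions: (n_sensors, 3) in metres;
    direction: unit vector towards the source; fs: sampling rate in Hz;
    c: nominal sound speed in sea water (m/s)."""
    delays = positions @ direction / c          # relative delays in seconds
    beam = np.zeros(signals.shape[1])
    for trace, tau in zip(signals, delays):
        beam += np.roll(trace, -int(round(tau * fs)))
    return beam / len(signals)

# Scanning `direction` over a grid and maximizing the beam power yields the
# source direction for one cluster; intersecting the directions from several
# clusters then gives the source position, as described above.
```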
In\nthis paper, we present an overview of AXIS, highlighting the prime science\nobjectives driving the AXIS concept and how the observatory design will achieve\nthese objectives.", "category": "astro-ph_IM" }, { "text": "Spectrally resolved imaging with the solar gravitational lens: We consider the optical properties of the solar gravitational lens (SGL)\ntreating the Sun as a massive compact body. Using our previously developed\nwave-optical treatment of the SGL, we convolve it with a thin lens representing\nan optical telescope, and estimate the power spectral density and associated\nphoton flux at individual pixel locations on the image sensor at the focal\nplane of the telescope. We also consider the solar corona, which is the\ndominant noise source when imaging faint objects with the SGL. We evaluate the\nsignal-to-noise ratio at individual pixels as a function of wavelength. To\nblock out the solar light, we contrast the use of a conventional internal\ncoronagraph with a Lyot-stop to an external occulter (i.e., starshade). An\nexternal occulter, not being subject to the diffraction limit of the\nobserving telescope, makes it possible to use small telescopes (e.g., $\sim\n40$~cm) for spatially and spectrally resolved imaging with the SGL in a broad\nrange of wavelengths from optical to mid-infrared (IR) and without the\nsubstantial loss of optical throughput that is characteristic of internal\ndevices. Mid-IR observations are especially interesting as planets are\nself-luminous at these wavelengths, producing a strong signal, while there is\nsignificantly less noise from the solar corona. This part of the spectrum\ncontains numerous features of interest for exobiology and biosignature\ndetection. We develop tools that may be used to estimate instrument\nrequirements and devise optimal observing strategies to use the SGL for\nhigh-resolution, spectrally resolved imaging, ultimately improving our ability\nto confirm and study the presence of life on a distant world.", "category": "astro-ph_IM" }, { "text": "The Maunakea Spectroscopic Explorer: Thousands of Fibers, Infinite\n Possibilities: The Maunakea Spectroscopic Explorer (MSE) is a massively multiplexed\nspectroscopic survey facility that will replace the Canada-France-Hawaii\nTelescope over the next two decades. This 12.5-meter telescope, with its 1.5\nsquare degree field-of-view, will observe 18,000-20,000 astronomical targets in\nevery pointing from 0.36-1.80 microns at low/moderate resolution (R~3,000,\n6,000) and from 0.36-0.90 microns at high resolution (R~30,000). Parallel\npositioning of all fibers in the field will occur, providing simultaneous\nfull-field coverage for both resolution modes. Unveiling the composition and\ndynamics of the faint Universe, MSE will impact nearly every field of\nastrophysics across all spatial scales, from individual stars to the largest\nscale structures in the Universe, including (i) the ultimate Gaia follow-up\nfacility for understanding the chemistry and dynamics of the distant Milky Way,\nincluding the distant halo at high spectral resolution, (ii) the unparalleled\nstudy of galaxy formation and evolution at cosmic noon, (iii) the determination\nof the neutrino mass, and (iv) the generation of insights into inflationary\nphysics through a cosmological redshift survey that probes a large volume of\nthe Universe with a high galaxy density.
Initially, CFHT will build a\nPathfinder instrument to fast-track the development of MSE technology while\nproviding multi-object and IFU spectroscopic capability.", "category": "astro-ph_IM" }, { "text": "How proper are Bayesian models in the astronomical literature?: The well-known Bayes theorem assumes that a posterior distribution is a\nprobability distribution. However, the posterior distribution may no longer be\na probability distribution if an improper prior distribution (non-probability\nmeasure) such as an unbounded uniform prior is used. Improper priors are often\nused in the astronomical literature to reflect a lack of prior knowledge, but\nchecking whether the resulting posterior is a probability distribution is\nsometimes neglected. It turns out that 23 articles out of 75 articles (30.7%)\npublished online in two renowned astronomy journals (ApJ and MNRAS) between Jan\n1, 2017 and Oct 15, 2017 make use of Bayesian analyses without rigorously\nestablishing posterior propriety. A disturbing aspect is that a Gibbs-type\nMarkov chain Monte Carlo (MCMC) method can produce a seemingly reasonable\nposterior sample even when the posterior is not a probability distribution\n(Hobert and Casella, 1996). In such cases, researchers may erroneously make\nprobabilistic inferences without noticing that the MCMC sample is from a\nnon-existing probability distribution. We review why checking posterior\npropriety is fundamental in Bayesian analyses, and discuss how to set up\nscientifically motivated proper priors.", "category": "astro-ph_IM" }, { "text": "Direct measurement of the intra-pixel response function of Kepler Space\n Telescope's CCDs: Space missions designed for high precision photometric monitoring of stars\noften under-sample the point-spread function, with much of the light landing\nwithin a single pixel. Missions like MOST, Kepler, BRITE, and TESS, do this to\navoid uncertainties due to pixel-to-pixel response nonuniformity. This approach\nhas worked remarkably well. However, individual pixels also exhibit response\nnonuniformity. Typically, pixels are most sensitive near their centers and less\nsensitive near the edges, with a difference in response of as much as 50%. The\nexact shape of this fall-off, and its dependence on the wavelength of light, is\nthe intra-pixel response function (IPRF). A direct measurement of the IPRF can\nbe used to improve the photometric uncertainties, leading to improved\nphotometry and astrometry of under-sampled systems. Using the spot-scan\ntechnique, we measured the IPRF of a flight spare e2v CCD90 imaging sensor,\nwhich is used in the Kepler focal plane. Our spot scanner generates spots with\na full-width at half-maximum of $\lesssim$5 microns across the range of 400 nm\n- 900 nm. We find that Kepler's CCD shows similar IPRF behavior to other\nback-illuminated devices, with a decrease in responsivity near the edges of a\npixel by $\sim$50%.
The IPRF also depends on wavelength, exhibiting a large\namount of diffusion at shorter wavelengths and becoming much more defined by\nthe gate structure in the near-IR. This method can also be used to measure the\nIPRF of the CCDs used for TESS, which borrows much from the Kepler mission.", "category": "astro-ph_IM" }, { "text": "The wavefront sensing making-of for THEMIS solar telescope: An adaptive optics system with a single deformable mirror is being\nimplemented on the THEMIS 90cm solar telescope. This system is designed to\noperate in the visible and is required to be as robust as possible in order to\ndeliver the best possible correction in any atmospheric conditions, even if\nwavefronts are sensed on some low-contrast solar granulation. In extreme\nconditions, the images given by the subapertures of the Shack-Hartmann\nwavefront sensor get randomly blurred in space, in the set of subapertures, and\nthe distribution of blurred images is rapidly changing in time, some of them\npossibly fading away. The algorithms we have developed for such harsh\nconditions rely on inverse problem approach. As an example, with the gradients\nof the wavefronts, the wavefront sensor also estimates their errors, including\ntheir covariance. This information allows the control loop to promptly optimize\nitself to the fast varying conditions, both in space (wavefront reconstruction)\nand in time. A major constraint is to fit the calculations in a low-cost\nmulti-core CPU. An overview of the algorithms in charge of implementing this\nstrategy is presented, focusing on wavefront sensing.", "category": "astro-ph_IM" }, { "text": "HCGrid: A Convolution-based Gridding Framework for Radio Astronomy in\n Hybrid Computing Environments: Gridding operation, which is to map non-uniform data samples onto a uniformly\ndistributed grid, is one of the key steps in radio astronomical data reduction\nprocess. One of the main bottlenecks of gridding is the poor computing\nperformance, and a typical solution for such performance issue is the\nimplementation of multi-core CPU platforms. Although such a method could\nusually achieve good results, in many cases, the performance of gridding is\nstill restricted to an extent due to the limitations of CPU, since the main\nworkload of gridding is a combination of a large number of single instruction,\nmulti-data-stream operations, which is more suitable for GPU, rather than CPU\nimplementations. To meet the challenge of massive data gridding for the modern\nlarge single-dish radio telescopes, e.g., the Five-hundred-meter Aperture\nSpherical radio Telescope (FAST), inspired by existing multi-core CPU gridding\nalgorithms such as Cygrid, here we present an easy-to-install,\nhigh-performance, and open-source convolutional gridding framework, HCGrid, in\nCPU-GPU heterogeneous platforms. It optimises data search by employing\nmulti-threading on CPU, and accelerates the convolution process by utilising\nmassive parallelisation of GPU. In order to make HCGrid a more adaptive\nsolution, we also propose the strategies of thread organisation and coarsening,\nas well as optimal parameter settings under various GPU architectures.
A thorough analysis of computing time and performance gain with\nseveral GPU parallel optimisation strategies shows that it can lead to\nexcellent performance in hybrid computing environments.", "category": "astro-ph_IM" }, { "text": "The Locus Algorithm III: A Grid Computing system to generate catalogues\n of optimised pointings for Differential Photometry: This paper discusses the hardware and software components of the Grid\nComputing system used to implement the Locus Algorithm to identify optimum\npointings for differential photometry of 61,662,376 stars and 23,799 quasars.\nThe scale of the data, together with initial operational assessments, demanded\na High Performance Computing (HPC) system to complete the data analysis. Grid\ncomputing was chosen as the HPC solution, as it was the optimum choice\navailable within this project. The physical and logical structure of the\nNational Grid computing Infrastructure informed the approach that was taken.\nThat approach was one of layered separation of the different project components\nto enable maximum flexibility and extensibility.", "category": "astro-ph_IM" }, { "text": "Effects of transients in LIGO suspensions on searches for gravitational\n waves: This paper presents an analysis of the transient behavior of the Advanced\nLIGO suspensions used to seismically isolate the optics. We have characterized\nthe transients in the longitudinal motion of the quadruple suspensions during\nAdvanced LIGO's first observing run. Propagation of transients between stages\nis consistent with modelled transfer functions, such that transient motion\noriginating at the top of the suspension chain is significantly reduced in\namplitude at the test mass. We find that there are transients seen by the\nlongitudinal motion monitors of quadruple suspensions, but they are not\nsignificantly correlated with transient motion above the noise floor in the\ngravitational wave strain data, and therefore do not present a dominant source\nof background noise in the searches for transient gravitational wave signals.", "category": "astro-ph_IM" }, { "text": "Investigation of the radio wavefront of air showers with LOPES\n measurements and CoREAS simulations (ARENA 2014): We investigated the radio wavefront of cosmic-ray air showers with LOPES\nmeasurements and CoREAS simulations: the wavefront is of approximately\nhyperbolic shape and its steepness is sensitive to the shower maximum. For this\nstudy we used 316 events with an energy above 0.1 EeV and zenith angles below\n$45^\circ$ measured by the LOPES experiment. LOPES was a digital radio\ninterferometer consisting of up to 30 antennas on an area of approximately 200\nm x 200 m at an altitude of 110 m above sea level. Triggered by KASCADE-Grande,\nLOPES measured the radio emission between 43 and 74 MHz, and our analysis might\nstrictly hold only for such conditions. Moreover, we used CoREAS simulations\nmade for each event, which show much clearer results than the measurements\nsuffering from high background. A detailed description of our result is\navailable in our recent paper published in JCAP09(2014)025.
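For the convolution-based gridding of the HCGrid entry above, a serial CPU sketch may clarify the kernel-accumulation step that Cygrid and HCGrid parallelise; the real codes add HEALPix-based neighbour lookup and CPU/GPU parallelism, none of which appears in this deliberately naive version:

```python
import numpy as np

def grid_samples(lon, lat, values, grid_lon, grid_lat, sigma):
    """Map irregular sky samples onto a regular grid by accumulating each
    sample into nearby cells with a Gaussian convolution kernel."""
    glon, glat = np.meshgrid(grid_lon, grid_lat)
    weighted = np.zeros_like(glon, dtype=float)
    weights = np.zeros_like(glon, dtype=float)
    for x, y, v in zip(lon, lat, values):
        k = np.exp(-((glon - x) ** 2 + (glat - y) ** 2) / (2.0 * sigma**2))
        weighted += k * v
        weights += k
    with np.errstate(invalid="ignore"):
        return weighted / weights          # NaN where nothing contributes

# This brute-force loop is O(n_samples * n_cells); Cygrid/HCGrid gain their
# speed by touching only cells within a kernel radius of each sample and by
# parallelising that inner loop on CPU threads (Cygrid) or GPU blocks (HCGrid).
```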
The present\nproceeding contains a summary and focuses on some additional aspects, e.g., the\nasymmetry of the wavefront: According to the CoREAS simulations the wavefront\nis slightly asymmetric, but on a much weaker level than the lateral\ndistribution of the radio amplitude.", "category": "astro-ph_IM" }, { "text": "Scaling of collision strengths for highly-excited states of ions of the\n H- and He-like sequences: Emission lines from highly-excited states (n >= 5) of H- and He-like ions\nhave been detected in astrophysical sources and fusion plasmas. For such\nexcited states, R-matrix or distorted wave calculations for electron-impact\nexcitation are very limited, due to the large size of the atomic basis set\nneeded to describe them. Calculations for n >= 6 are also not generally\navailable. We study the behaviour of the electron-impact excitation collision\nstrengths and effective collision strengths for the most important transitions\nused to model electron collision dominated astrophysical plasmas (solar, for\nexample). We investigate the dependence on the relevant parameters: the\nprincipal quantum number n or the nuclear charge Z. We also estimate the\nimportance of coupling to highly-excited states and the continuum by comparing\nthe results of different-sized calculations. We provide analytic formulae to\ncalculate the electron-impact excitation collision strengths and effective\ncollision strengths to highly-excited states (n >= 8) of H- and He-like ions.\nThese extrapolated effective collision strengths can be used to interpret\nastrophysical and fusion plasmas via collisional-radiative modelling.", "category": "astro-ph_IM" }, { "text": "Gaia: unraveling the chemical and dynamical history of our Galaxy: The Gaia astrometric mission - the Hipparcos successor - is described in some\ndetail, with its three instruments: the two (spectro)photometers (BP and RP)\ncovering the range 330-1050 nm, the white light (G-band) imager dedicated to\nastrometry, and the radial velocity spectrometer (RVS) covering the range\n847-874 nm at a resolution R \simeq 11500. The whole sky will be scanned\nrepeatedly, providing data for ~10^9 point-like objects, down to a magnitude of\nV \simeq 20, aiming at the full 6D reconstruction of the Milky Way kinematical\nand dynamical structure with unprecedented precision. The horizon of\nscientific questions that can find an answer with such a set of data is vast,\nincluding, besides the Galaxy: Solar system studies, stellar astrophysics,\nexoplanets, supernovae, Local group physics, unresolved galaxies, Quasars, and\nfundamental physics. The Italian involvement in the mission preparation is\nbriefly outlined.", "category": "astro-ph_IM" }, { "text": "Inequalities faced by women in access to permanent positions in\n astronomy in France: We investigate inequalities in access to permanent positions in professional\nastronomy in France, focusing on the hiring stage. We use results from a\nnational survey conducted on behalf of the French society of astronomy and\nastrophysics (SF2A) aimed at young astronomers holding a PhD obtained in\nFrance, and answered by over 300 researchers. We find that women are nearly two\ntimes less likely than men to be selected by the (national or local) committees\nattributing permanent positions ($p=0.06$).
We also find that applicants who\ndid their undergraduate studies in an elite school (\"Grande \'Ecole\"), where\nwomen are largely under-represented, rather than in a university, are nearly\nthree times more likely to succeed in obtaining a position ($p=0.0026$). Our\nanalysis suggests the existence of two biases in committees attributing\npermanent positions in astronomy in France: a gender bias, and a form of\nelitism. These biases against women in their professional life also impact their\npersonal life, as our survey shows that a larger fraction of them declare that\nhaving children can have a negative effect on their careers. Half as many\nwomen as men in the sample have children. National committees (such as the\nCNRS) have acknowledged this issue for several years now, hence one can hope\nthat changes will be seen in the next decade.", "category": "astro-ph_IM" }, { "text": "New Gapless COS G140L Mode Proposed for Background-Limited Far-UV\n Observations: Here we describe the observation and calibration procedure for a new G140L\nobserving mode for the Cosmic Origins Spectrograph (COS) aboard the Hubble\nSpace Telescope (HST). This mode, CENWAV = 800, is designed to move the far-UV\nband fully onto the Segment A detector, allowing for more efficient observation\nand analysis by simplifying calibration management between the two channels,\nand reducing the astigmatism in this wavelength region. We also describe some\nof the areas of scientific interest for which this new mode will be especially\nsuited.", "category": "astro-ph_IM" }, { "text": "First low frequency all-sky search for continuous gravitational wave\n signals: In this paper we present the results of the first low frequency all-sky\nsearch for continuous gravitational wave signals conducted on Virgo VSR2 and\nVSR4 data. The search covered the full sky, a frequency range between 20 Hz and\n128 Hz with a range of spin-down between $-1.0 \times 10^{-10}$ Hz/s and $+1.5\n\times 10^{-11}$ Hz/s, and was based on a hierarchical approach. The starting\npoint was a set of short Fast Fourier Transforms (FFT), of length 8192 seconds,\nbuilt from the calibrated strain data. Aggressive data cleaning, both in the\ntime and frequency domains, has been done in order to remove, as much as\npossible, the effect of disturbances of instrumental origin. On each dataset a\nnumber of candidates has been selected, using the FrequencyHough transform in\nan incoherent step. Only coincident candidates among VSR2 and VSR4 have been\nexamined in order to strongly reduce the false alarm probability, and the most\nsignificant candidates have been selected. Selected candidates have been\nsubjected to a follow-up by constructing a new set of longer FFTs followed by a\nfurther incoherent analysis, still based on the FrequencyHough transform. No\nevidence for continuous gravitational wave signals was found, therefore we have\nset a population-based joint VSR2-VSR4 90$\%$ confidence level upper limit on\nthe dimensionless gravitational wave strain in the frequency range between 20\nHz and 128 Hz. This is the first all-sky search for continuous gravitational\nwaves conducted, on data of ground-based interferometric detectors, at\nfrequencies below 50 Hz. We set upper limits in the range between about\n$10^{-24}$ and $2\times 10^{-23}$ at most frequencies.
Our upper limits on\nsignal strain show an improvement of up to a factor of $\\sim$2 with respect to\nthe results of previous all-sky searches at frequencies below $80~\\mathrm{Hz}$.", "category": "astro-ph_IM" }, { "text": "Modeling the Optical Cherenkov Signals by Cosmic Ray Extensive Air\n Showers Directly Observed from Sub-Orbital and Orbital Altitudes: Future experiments based on the observation of Earth's atmosphere from\nsub-orbital and orbital altitudes plan to include optical Cherenkov cameras to\nobserve extensive air showers produced by high-energy cosmic radiation via its\ninteraction with both the Earth and its atmosphere. As discussed elsewhere,\nparticularly relevant is the case of upward-moving showers initiated by\nastrophysical neutrinos skimming and interacting in the Earth. The Cherenkov\ncameras, by looking above Earth's limb, can also detect cosmic rays with\nenergies starting from less than a PeV up to the highest energies (tens of\nEeV). Using a customized computation scheme to determine the expected optical\nCherenkov signal from these high-energy cosmic rays, we estimate the\nsensitivity and event rate for balloon-borne and satellite-based instruments,\nfocusing our analysis on the Extreme Universe Space Observatory aboard a Super\nPressure Balloon 2 (EUSO-SPB2) and the Probe of Extreme Multi-Messenger\nAstrophysics (POEMMA) experiments. We find the expected event rates to be\nlarger than hundreds of events per hour of experimental live time, enabling a\npromising overall test of the Cherenkov detection technique from sub-orbital\nand orbital altitudes as well as a guaranteed signal that can be used for\nunderstanding the response of the instrument.", "category": "astro-ph_IM" }, { "text": "PRAXIS: low thermal emission high efficiency OH suppressed fibre\n spectrograph: PRAXIS is a second generation instrument that follows on from GNOSIS, which\nwas the first instrument using fibre Bragg gratings for OH background\nsuppression. The Bragg gratings reflect the NIR OH lines while being\ntransparent to light between the lines. This gives a much higher signal-noise\nratio at low resolution but also at higher resolutions by removing the\nscattered wings of the OH lines. The specifications call for high throughput\nand very low thermal and detector noise so that PRAXIS will remain sky noise\nlimited. The optical train is made of fore-optics, an IFU, a fibre bundle, the\nBragg grating unit, a second fibre bundle and a spectrograph. GNOSIS used the\npre-existing IRIS2 spectrograph while PRAXIS will use a new spectrograph\nspecifically designed for the fibre Bragg grating OH suppression and optimised\nfor 1470 nm to 1700 nm (it can also be used in the 1090 nm to 1260 nm band by\nchanging the grating and refocussing). This results in a significantly higher\ntransmission due to high efficiency coatings, a VPH grating at low incident\nangle and low absorption glasses. The detector noise will also be lower.\nThroughout the PRAXIS design special care was taken at every step along the\noptical path to reduce thermal emission or stop it leaking into the system.\nThis made the spectrograph design challenging because practical constraints\nrequired that the detector and the spectrograph enclosures be physically\nseparate by air at ambient temperature. At present, the instrument uses the\nGNOSIS fibre Bragg grating OH suppression unit. We intend to soon use a new OH\nsuppression unit based on multicore fibre Bragg gratings which will allow\nincreased field of view per fibre. 
Theoretical calculations show that the gain\nin interline sky background signal-noise ratio over GNOSIS may very well be as\nhigh as 9 with the GNOSIS OH suppression unit and 17 with the multicore fibre\nOH suppression unit.", "category": "astro-ph_IM" }, { "text": "A passive FPAA based RF scatter meteor detector: In this article we present a hardware meteor detector. The detection principle\nis based on the electromagnetic wave reflection from the ionized meteor trail\nin the atmosphere. The detector uses the ANADIGM field programmable analogue\narray (FPAA), which is an attractive alternative to the typically used detection\nequipment - a PC with dedicated software. We implement an analog\nsignal path using most of the available FPAA resources to obtain precise audio\nsignal detection. Our new detector was verified in collaboration with the\nPolish Fireball Network - the organization which monitors meteor activity in\nPoland. When compared with the currently used signal-processing PC software on\nreal radio meteor scatter signals, our low-cost detector proved to be\nmore precise and reliable. Due to its superior cost and efficiency compared with the\ncurrent solution, the presented module is going to be implemented in the\nplanned distributed detector system.", "category": "astro-ph_IM" }, { "text": "\u03bc-Spec Spectrometers for the EXCLAIM Instrument: The EXperiment for Cryogenic Large-Aperture Intensity Mapping (EXCLAIM) is a\ncryogenic balloon-borne instrument that will map carbon monoxide and\nsingly-ionized carbon emission lines across redshifts from 0 to 3.5, using an\nintensity mapping approach. EXCLAIM will broaden our understanding of these\nelemental and molecular gases and the role they play in star formation\nprocesses across cosmic time scales. The focal plane of EXCLAIM's cryogenic\ntelescope features six {\mu}-Spec spectrometers. {\mu}-Spec is a compact,\nintegrated grating-analog spectrometer, which uses meandered superconducting\nniobium microstrip transmission lines on a single-crystal silicon dielectric to\nsynthesize the grating. It features superconducting aluminum microwave kinetic\ninductance detectors (MKIDs), also in a microstrip architecture. The\nspectrometers for EXCLAIM couple to the telescope optics via a hybrid planar\nantenna coupled to a silicon lenslet. The spectrometers operate from 420 to 540\nGHz with a resolving power R={\lambda}/{\Delta}{\lambda}=512 and employ an\narray of 355 MKIDs on each spectrometer. The spectrometer design targets a\nnoise equivalent power (NEP) of 2x10^{-18} W/\sqrt{Hz} (defined at the input to the\nmain lobe of the spectrometer lenslet beam, within a 9-degree half width),\nenabled by the cryogenic telescope environment, the sensitive MKID detectors,\nand the low dielectric loss of single-crystal silicon. We report on these\nspectrometers under development for EXCLAIM, providing an overview of the\nspectrometer and component designs, the spectrometer fabrication process,\nfabrication developments since previous prototype demonstrations, and the\ncurrent status of their development for the EXCLAIM mission.", "category": "astro-ph_IM" }, { "text": "Data model as agile basis for evolving calibration software: We design the imaging data calibration and reduction software for MICADO, the\nFirst Light near-IR instrument on the Extremely Large Telescope.
In this\nprocess we have hit the limit of what can be achieved with a detailed software\ndesign that is primarily captured in PDF/Word documents.\n Trade-offs between hardware and calibration software are required to meet\nstringent science requirements. To support such trade-offs, more software needs\nto be developed in the early phases of the project: simulators, archives,\nprototype recipes and pipelines. This requires continuous and efficient\nexchange of evolving designs between the software and hardware groups, which is\nhard to achieve with manually maintained documents. Both this exchange and the\nmaintenance of consistency between the design documents and the various software\ncomponents are possible with a machine-readable version of the design.\n We construct a detailed design that is readable by both software and humans.\nFrom this the design documentation, prototype pipelines and data archives are\ngenerated automatically. We present the implementation of such an approach for\nthe detailed design of the calibration software for the ELT MICADO imager, which is\nbased on expertise and lessons learned in earlier projects (e.g. OmegaCAM,\nMUSE, Euclid).", "category": "astro-ph_IM" }, { "text": "Photometric Redshift Biases from Galaxy Evolution: Proposed cosmological surveys will make use of photometric redshifts of\ngalaxies that are significantly fainter than those in any complete spectroscopic\nredshift survey available to train the photo-z methods. We investigate the\nphoto-z biases that result from known differences between the faint and bright\npopulations: a rise in AGN activity toward higher redshift, and a metallicity\ndifference between intrinsically luminous and faint early-type galaxies. We\nfind that even very small mismatches between the mean photometric target and\nthe training set can induce photo-z biases large enough to corrupt derived\ncosmological parameters significantly. A metallicity shift of ~0.003 dex in an\nold population, or contamination of any galaxy spectrum with ~0.2% AGN flux, is\nsufficient to induce a 10^-3 bias in photo-z. These results highlight the\ndanger in extrapolating the behavior of bright galaxies to a fainter\npopulation, and the desirability of a spectroscopic training set that spans all\nof the characteristics of the photo-z targets, i.e. extending to the galaxies of\n25th magnitude or fainter that will be used in future surveys.", "category": "astro-ph_IM" }, { "text": "New Generation Stellar Spectral Libraries in the Optical and\n Near-Infrared I: The Recalibrated UVES-POP Library for Stellar Population\n Synthesis: We present re-processed flux-calibrated spectra of 406 stars from the\nUVES-POP stellar library in the wavelength range 320-1025 nm, which can be used\nfor stellar population synthesis. The spectra are provided in two versions,\nwith spectral resolving powers of R=20,000 and R=80,000. Raw spectra from the ESO\ndata archive were re-reduced using the latest version of the UVES data\nreduction pipeline with some additional algorithms that we developed. The most\nsignificant improvements in comparison with the original UVES-POP release are:\n(i) an updated Echelle order merging, which eliminates \"ripples\" present in the\npublished spectra, (ii) a full telluric correction, (iii) merging of\nnon-overlapping UVES spectral setups taking into account the global continuum\nshape, (iv) a spectrophotometric correction and absolute flux calibration, and\n(v) estimates of the interstellar extinction.
For 364 stars from our sample, we\ncomputed atmospheric parameters $T_\mathrm{eff}$, surface gravity log $g$,\nmetallicity [Fe/H], and $\alpha$-element enhancement [$\alpha$/Fe] by using a\nfull spectrum fitting technique based on a grid of synthetic stellar\natmospheres and a novel minimization algorithm. We also provide projected\nrotational velocity $v\sin i$ and radial velocity $v_{rad}$ estimates. The\noverall absolute flux uncertainty in the re-processed dataset is better than 2%,\nwith sub-% accuracy for about half of the stars. A comparison of the\nrecalibrated UVES-POP spectra with other spectral libraries shows very good\nagreement in flux; at the same time, $Gaia$ DR3 BP/RP spectra are often\ndiscrepant with our data, which we attribute to spectrophotometric calibration\nissues in $Gaia$ DR3.", "category": "astro-ph_IM" }, { "text": "Prediction of Apophis Asteroid Flyby Optimal Trajectories and Data\n Fusion of Earth-Apophis Mission Launch Windows using Deep Neural Networks: In recent years, our understanding of asteroids has shifted from points of light to\ngeological worlds, thanks to exploration by modern spacecraft and to advanced radar and\ntelescopic surveys. The Apophis flyby in 2029 will be an opportunity to conduct an\ninternal geophysical study and test the current hypothesis on the effects of\ntidal forces on asteroids. The Earth-Apophis mission is driven by additional\nfactors and scientific goals beyond the unique opportunity for natural\nexperimentation. However, the internal geophysical structures remain largely\nunknown. Understanding the strength and internal integrity of asteroids is not\njust a matter of scientific curiosity. It is a practical imperative to advance\nknowledge for planetary defense against the possibility of an asteroid impact.\nThis paper presents a conceptual robotics system required for efficiency at\nevery stage from entry to post-landing and for asteroid monitoring. In short,\nasteroid surveillance missions are futuristic frontiers, with the potential for\ntechnological growth that could revolutionize space exploration. Advanced space\ntechnologies and robotic systems are needed to minimize risk and prepare these\ntechnologies for future missions. A neural network model is implemented to\ntrack and predict asteroids' orbits. Advanced algorithms are also needed to\nnumerically predict orbital events to minimize error.", "category": "astro-ph_IM" }, { "text": "Dawes Review 5: Australian Aboriginal Astronomy and Navigation: The traditional cultures of Aboriginal Australians include a significant\nastronomical component, perpetuated through oral tradition, ceremony, and art.\nThis astronomical knowledge includes a deep understanding of the motion of\nobjects in the sky, which was used for practical purposes such as constructing\ncalendars and for navigation. There is also evidence that traditional\nAboriginal Australians made careful records and measurements of cyclical\nphenomena, recorded unexpected phenomena such as eclipses and meteorite\nimpacts, and could determine the cardinal points to an accuracy of a few\ndegrees. Putative explanations of celestial phenomena appear throughout the\noral record, suggesting traditional Aboriginal Australians sought to\nunderstand the natural world around them, in the same way as modern scientists,\nbut within their own cultural context. There is also a growing body of evidence\nfor sophisticated navigational skills, including the use of astronomically\nbased songlines.
Songlines are effectively oral maps of the landscape, and are\nan efficient way of transmitting oral navigational skills in cultures that do\nnot have a written language. The study of Aboriginal astronomy has had an\nimpact extending beyond mere academic curiosity, facilitating cross-cultural\nunderstanding, demonstrating the intimate links between science and culture,\nand helping students to engage with science.", "category": "astro-ph_IM" }, { "text": "Performances of an upgraded front-end-board for the NectarCAM camera: The Front-End Board (FEB) is a key component of the NectarCAM camera, which\nhas been developed for the Medium-Sized Telescopes (MSTs) of the Cherenkov\nTelescope Array Observatory (CTAO). The FEB is responsible for reading and\nconverting the signals from the camera's photo-multiplier tubes (PMTs) into\ndigital data, as well as generating module-level trigger signals. This\ncontribution provides an overview of the design and performances of a new\nversion of the FEB that utilizes an improved version of the NECTAr chip. The\nNECTAr chip includes a switched capacitor array for sampling signals at 1 GHz,\nand a 12-bit analog-to-digital converter (ADC) for digitizing each sample when\nthe trigger signal is received. The integration of this advanced NECTAr chip\nreduces the deadtime of NectarCAM by an order of magnitude as\ncompared to the previous version. This contribution also presents the results\nof laboratory testing of the new FEB, including measurements of timing\nperformance, linearity, dynamic range, and deadtime.", "category": "astro-ph_IM" }, { "text": "Realizing the potential of astrostatistics and astroinformatics: This Astro2020 State of the Profession Consideration White Paper highlights\nthe growth of astrostatistics and astroinformatics in astronomy, identifies key\nissues hampering the maturation of these new subfields, and makes\nrecommendations for structural improvements at different levels that, if acted\nupon, will make significant positive impacts across astronomy.", "category": "astro-ph_IM" }, { "text": "Extension of the Bayesian searches for anisotropic stochastic\n gravitational-wave background with non-tensorial polarizations: The recent announcement of strong evidence for a stochastic\ngravitational-wave background (SGWB) by various pulsar timing array\ncollaborations has highlighted this signal as a promising candidate for future\nobservations. Although the SGWB has not been detected by ground-based detectors such as\nAdvanced LIGO and Advanced Virgo, Callister \textit{et\nal.}~\cite{tom_nongr_method} developed a Bayesian formalism to search for an\nisotropic SGWB with non-tensorial polarizations, imposing constraints on the signal\namplitude in those components that violate general relativity using LIGO's\ndata. Since our ultimate aim is to estimate the spatial distribution of\ngravitational-wave sources, we have extended this existing method to allow for\nanisotropic components in signal models. We then examined the potential\nbenefits from including these additional components. Using injection campaigns,\nwe found that introducing anisotropic components into a signal model led to\nmore significant identification of the signal itself and violations of general\nrelativity. Moreover, the results of our Bayesian parameter estimation\nsuggested that anisotropic components aid in breaking down degeneracies between\ndifferent polarization components, allowing us to infer model parameters more\nprecisely than through an isotropic analysis.
In contrast, constraints on\nsignal amplitude remained comparable in the absence of such a signal. Although\nthese results might depend on the assumed source distribution on the sky, such\nas the Galactic plane, the formalism presented in this work has laid a\nfoundation for establishing a generalized Bayesian analysis for an SGWB,\nincluding its anisotropies and non-tensorial polarizations.", "category": "astro-ph_IM" }, { "text": "Photoprocessing of formamide ice: route towards prebiotic chemistry in\n space: Aims. Formamide (HCONH2) is the simplest molecule containing the peptide bond,\nfirst detected in the gas phase in Orion-KL and SgrB2. In recent years, it has\nbeen observed in high temperature regions such as hot corinos, where thermal\ndesorption is responsible for the sublimation of frozen mantles into the gas\nphase. The interpretation of observations can benefit from information gathered\nin the laboratory, where it is possible to simulate the thermal desorption\nprocess and to study formamide under simulated space conditions such as UV\nirradiation. Methods. Here, two laboratory analyses are reported: we studied\nformamide photo-stability under UV irradiation when it is adsorbed by\nspace-relevant minerals at 63 K and in the vacuum regime. We also investigated\ntemperature programmed desorption of pure formamide ice in the presence of TiO2\ndust before and after UV irradiation. Results. Through these analyses, the\neffects of UV degradation and the interaction between formamide and different\nminerals are compared. We find that silicates, both hydrated and anhydrous,\noffer molecules a higher level of protection from UV degradation than mineral\noxides. The desorption temperature found for pure formamide is 220 K. The\ndesorption temperature increases to 250 K when the formamide desorbs from the\nsurface of TiO2 grains. Conclusions. Through the experiments outlined here, it\nis possible to follow the desorption of formamide and its fragments, simulate\nthe desorption process in star forming regions and hot corinos, and constrain\nparameters such as the thermal desorption temperature of formamide and its\nfragments and the binding energies involved. Our results offer support to\nobservational data and improve our understanding of the role of the grain\nsurface in enriching the chemistry in space.", "category": "astro-ph_IM" }, { "text": "Geometric calibration of Colour and Stereo Surface Imaging System of\n ESA's Trace Gas Orbiter: There are many geometric calibration methods for \"standard\" cameras. These\nmethods, however, cannot be used for the calibration of telescopes with large\nfocal lengths and complex off-axis optics. Moreover, specialized calibration\nmethods for telescopes are scarce in the literature. We describe the\ncalibration method that we developed for the Colour and Stereo Surface Imaging\nSystem (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO).\nAlthough our method is described in the context of CaSSIS, with camera-specific\nexperiments, it is general and can be applied to other telescopes. We further\nencourage re-use of the proposed method by making our calibration code and data\navailable on-line.", "category": "astro-ph_IM" }, { "text": "T35: a small automatic telescope for long-term observing campaigns: The T35 is a small telescope (14\") equipped with a large format CCD camera\ninstalled at the Sierra Nevada Observatory (SNO) in Southern Spain.
This\ntelescope will be a useful tool for detecting and studying pulsating stars,\nparticularly in open clusters. In this paper, we describe the automation\nprocess of the T35 and also show some images taken with the new\ninstrumentation.", "category": "astro-ph_IM" }, { "text": "Advanced Environmentally Resistant Lithium Fluoride Mirror Coatings for\n the Next-Generation of Broadband Space Observatories: Recent advances in the physical vapor deposition (PVD) of protective fluoride\nfilms have raised the far-ultraviolet (FUV: 912-1600 {\AA}) reflectivity of\naluminum-based mirrors closer to the theoretical limit. The greatest gains, at\nmore than 20%, have come for lithium fluoride-protected aluminum, which has the\nshortest wavelength cutoff of any conventional overcoat. Despite the success of\nthe NASA FUSE mission, the use of lithium fluoride (LiF)-based optics is rare,\nas LiF is hygroscopic and requires handling procedures that can drive risk.\nWith NASA now studying two large mission concepts for astronomy, the Large\nUV-Optical-IR Surveyor (LUVOIR) and the Habitable Exoplanet Imaging Mission\n(HabEx), which mandate throughput down to 1000 {\AA}, the development of\nLiF-based coatings becomes crucial. This paper discusses steps that are being\ntaken to qualify these new enhanced LiF-protected aluminum (eLiF) mirror\ncoatings for flight. In addition to quantifying the hygroscopic degradation, we\nhave developed a new method of protecting eLiF with an ultrathin (10-20 {\AA})\ncapping layer of a non-hygroscopic material to increase durability. We report\non the performance of eLiF-based optics and assess the steps that need to be\ntaken to qualify such coatings for LUVOIR, HabEx, and other FUV-sensitive space\nmissions.", "category": "astro-ph_IM" }, { "text": "AstroSat - a multi-wavelength astronomy satellite: AstroSat is a multi-wavelength astronomy satellite, launched on 2015\nSeptember 28. It carries a suite of scientific instruments for multi-wavelength\nobservations of astronomical sources. It is a major Indian effort in space\nastronomy and the context of AstroSat is examined in a historical perspective.\nThe Performance Verification phase of AstroSat has been completed and all\ninstruments are working flawlessly and as planned. Some brief highlights of the\nscientific results are also given here.", "category": "astro-ph_IM" }, { "text": "The Zadko Telescope: A Southern Hemisphere Telescope for Optical\n Transient Searches, Multi-Messenger Astronomy and Education: The new 1-m f/4 fast-slew Zadko Telescope was installed in June 2008 about 70\nkm north of Perth, Western Australia. It is the only metre-class optical\nfacility at this southern latitude between the east coast of Australia and\nSouth Africa, and can rapidly image optical transients at a longitude not\nmonitored by other similar facilities. We report on first imaging tests of a\npilot program of minor planet searches, and Target of Opportunity observations\ntriggered by the Swift satellite. In 12 months, 6 gamma-ray burst afterglows\nwere detected, with estimated magnitudes; two of them, GRB 090205 (z = 4.65)\nand GRB 090516 (z = 4.11), are among the most distant optical transients imaged\nby an Australian telescope. Many asteroids were observed in a systematic\n3-month search.
In September 2009, an automatic telescope control system was\ninstalled, which will be used to link the facility to a global robotic\ntelescope network; future targets will include fast optical transients\ntriggered by high-energy satellites, radio transient detections, and LIGO\ngravitational wave candidate events. We also outline the importance of the\nfacility as a potential tool for education, training, and public outreach.", "category": "astro-ph_IM" }, { "text": "Comparative performance of some popular ANN algorithms on benchmark and\n function approximation problems: We report an inter-comparison of some popular algorithms within the\nartificial neural network domain (viz., local search algorithms, global search\nalgorithms, higher-order algorithms and hybrid algorithms) by applying them\nto standard benchmarking problems like the IRIS data, XOR/N-Bit parity and\nTwo Spiral. Apart from giving a brief description of these algorithms, the\nresults obtained for the above benchmark problems are presented in the paper.\nThe results suggest that while the Levenberg-Marquardt algorithm yields the lowest\nRMS error for the N-bit Parity and the Two Spiral problems, the Higher Order\nNeurons algorithm gives the best results for the IRIS data problem. The best\nresults for the XOR problem are obtained with the Neuro Fuzzy algorithm. The\nabove algorithms were also applied for solving several regression problems such\nas cos(x) and a few special functions like the Gamma function, the\ncomplementary error function and the upper tail cumulative\n$\chi^2$-distribution function. The results of these regression problems\nindicate that, among all the ANN algorithms used in the present study, the\nLevenberg-Marquardt algorithm yields the best results. Keeping in view the\nhighly non-linear behaviour and the wide dynamic range of these functions, it\nis suggested that these functions can also be considered as standard benchmark\nproblems for function approximation using artificial neural networks.", "category": "astro-ph_IM" }, { "text": "Ultra-fast model emulation with PRISM; analyzing the Meraxes galaxy\n formation model: We demonstrate the potential of an emulator-based approach to analyzing\ngalaxy formation models in the domain where constraining data is limited. We\nhave applied the open-source Python package PRISM to the galaxy formation model\nMeraxes. Meraxes is a semi-analytic model, purposefully built to study the\ngrowth of galaxies during the Epoch of Reionization (EoR). Constraining such\nmodels is however complicated by the scarcity of observational data in the EoR.\nPRISM's ability to rapidly construct accurate approximations of complex\nscientific models using minimal data is therefore key to performing this\nanalysis well.\n This paper provides an overview of our analysis of Meraxes using measurements\nof galaxy stellar mass densities; luminosity functions; and color-magnitude\nrelations. We demonstrate the power of using PRISM instead of a full Bayesian\nanalysis when dealing with highly correlated model parameters and a scarce set\nof observational data. Our results show that the various observational data\nsets constrain Meraxes differently and do not necessarily agree with each\nother, signifying the importance of using multiple observational data types\nwhen constraining such models. Furthermore, we show that PRISM can detect when\nmodel parameters are too correlated or cannot be constrained effectively.
We\nconclude that a mixture of different observational data types, even when they\nare scarce or inaccurate, is a priority for understanding galaxy formation and\nthat emulation frameworks like PRISM can guide the selection of such data.", "category": "astro-ph_IM" }, { "text": "Application of a Regional Model to Astronomical Site Testing in Western\n Antarctica: The quality of ground-based astronomical observations is significantly\naffected by telluric conditions, and the search for the best sites has led to the\nconstruction of observatories at remote locations, including recent initiatives\non the high plateaus of E Antarctica, where the calm, dry and cloud-free\nconditions during winter are recognized as amongst the best. Site selection is\nan important phase of any observatory development project, and candidate sites\nmust be tested with specialized equipment, a process both time-consuming and\ncostly. A potential way to screen site locations before embarking on field\ntesting is through the use of climate models. Here, we describe the application\nof the Polar version of the Weather Research and Forecast (WRF) model to the\npreliminary site suitability assessment of an unstudied region in W Antarctica.\nNumerical simulations with WRF were carried out for the winter of 2011 at 3 km\nand 1 km spatial resolution over a region centered on the Ellsworth mountain\nrange. Comparison with observations of surface wind speed and direction,\ntemperature and specific humidity at nine automatic weather stations indicates\nthat the model succeeds in capturing the mean and time variability of these\nvariables. Credible features shown by the model include zones of high winds\nover the southernmost part of the Ellsworth Mountains, a deep thermal inversion\nover the Ronne-Filchner Ice Shelf and a strong west-to-east moisture gradient\nacross the entire study area. Comparison of simulated cloud fraction with a\nspaceborne lidar climatology indicates that the model may underestimate cloud\noccurrence, a problem that has been noted in previous studies. A simple scoring\nsystem was applied to reveal the most promising locations. The results of this\nstudy indicate that the WRF model is capable of providing useful guidance\nduring the initial site selection stage of project development.", "category": "astro-ph_IM" }, { "text": "AMIDAS-II: Upgrade of the AMIDAS Package and Website for Direct Dark\n Matter Detection Experiments and Phenomenology: In this paper, we give a detailed user's guide to the AMIDAS (A\nModel-Independent Data Analysis System) package and website, which was developed\nfor online simulations and data analyses for direct Dark Matter detection\nexperiments and phenomenology. Recently, the whole AMIDAS package and website\nsystem has been upgraded to the second phase, AMIDAS-II, to include the newly\ndeveloped Bayesian analysis technique.\n AMIDAS has the ability to do full Monte Carlo simulations as well as to\nanalyze real/pseudo data sets either generated by other event-generating\nprograms or recorded in direct DM detection experiments.
Moreover, the\nAMIDAS-II package can incorporate several \"user-defined\" functions into the main\ncode: the (fitting) one-dimensional WIMP velocity distribution function, the\nnuclear form factors for spin-independent and spin-dependent cross sections,\nartificial/experimental background spectra for both the simulation and data\nanalysis procedures, as well as different distribution functions needed in\nBayesian analyses.", "category": "astro-ph_IM" }, { "text": "Monte-Carlo Imaging for Optical Interferometry: We present a flexible code created for imaging from the bispectrum and\nvisibility-squared. By using a simulated annealing method, we limit the\nprobability of converging to local chi-squared minima as can occur when\ntraditional imaging methods are used on data sets with limited phase\ninformation. We present the results of our code used on a simulated data set\nutilizing a number of regularization schemes including maximum entropy. Using\nthe statistical properties from Monte-Carlo Markov chains of images, we show\nhow this code can place statistical limits on image features such as unseen\nbinary companions.", "category": "astro-ph_IM" }, { "text": "Long-term stability of fibre-optic transmission for multi-object\n spectroscopy: We present an analysis of the long-term stability of fibre-optic transmission\nproperties for fibre optics in astronomy. Data from six years of operation of\nthe AAOmega multi-object spectrograph at the Anglo-Australian Telescope are\npresented. We find no evidence for significant degradation in the bulk\ntransmission properties of the 38 m optical fibre train. Significant losses\n(<20% relative, 4% absolute) are identified and associated with the end\ntermination of the optical fibres in the focal plane. Improved monitoring and\nmaintenance can rectify the majority of this performance degradation.", "category": "astro-ph_IM" }, { "text": "Observation of axisymmetric standard magnetorotational instability in\n the laboratory: We report the first direct evidence for the axisymmetric standard\nmagnetorotational instability (SMRI) from a combined experimental and numerical\nstudy of a magnetized liquid-metal shear flow in a Taylor-Couette cell with\nindependently rotating and electrically conducting end caps. When a uniform\nvertical magnetic field $B_i$ is applied along the rotation axis, the measured\nradial magnetic field $B_r$ on the inner cylinder increases linearly with a\nsmall magnetic Reynolds number $Rm$ due to the magnetization of the residual\nEkman circulation. Onset of the axisymmetric SMRI is identified from the\nnonlinear increase of $B_r$ beyond a critical $Rm$ in both experiments and\nnonlinear numerical simulations. The axisymmetric SMRI exists only at\nsufficiently large $Rm$ and intermediate $B_i$, a feature consistent with\ntheoretical predictions. Our simulations further show that the axisymmetric\nSMRI causes the velocity and magnetic fields to contribute an outward flux of\naxial angular momentum in the bulk region, just as it should in accretion\ndisks.", "category": "astro-ph_IM" }, { "text": "Investigating the Efficiency of the Beijing Faint Object Spectrograph\n and Camera (BFOSC) of the Xinglong 2.16-m Reflector: The Beijing Faint Object Spectrograph and Camera (BFOSC) is one of the most\nimportant instruments of the 2.16-m telescope of the Xinglong Observatory.\nEvery year there are ~ 20 SCI papers published based on the observational data\nof this telescope.
In this work, we have systematically measured the total\nefficiency of the BFOSC of the 2.16-m reflector, based on the observations of\ntwo ESO flux standard stars. We have obtained the total efficiencies of the\nBFOSC instrument for different grisms with various slit widths over almost all\nwavelength ranges, and analysed the factors which affect the efficiency of the\ntelescope and spectrograph. For astronomical observers, these results will be useful for\nselecting a suitable slit width, depending on their scientific goals and the\nweather conditions during the observation; for the technicians, the results will\nhelp to systematically determine the real efficiency of the telescope and\nspectrograph, and to further improve the total efficiency and observing\ncapacity of the telescope.", "category": "astro-ph_IM" }, { "text": "Apertif, Phased Array Feeds for the Westerbork Synthesis Radio Telescope: We describe the APERture Tile In Focus (Apertif) system, a phased array feed\n(PAF) upgrade of the Westerbork Synthesis Radio Telescope, which has transformed\nthis telescope into a high-sensitivity, wide field-of-view L-band imaging and\ntransient survey instrument. Using novel PAF technology, up to 40 partially\noverlapping beams can be formed on the sky simultaneously, significantly\nincreasing the survey speed of the telescope. With this upgraded instrument, an\nimaging survey covering an area of 2300 deg2 is being performed, which will\ndeliver both continuum and spectral line data sets, of which the first data has\nbeen publicly released. In addition, a time domain transient and pulsar survey\ncovering 15,000 deg2 is in progress. An overview of the Apertif science\ndrivers, hardware and software of the upgraded telescope is presented, along\nwith its key performance characteristics.", "category": "astro-ph_IM" }, { "text": "Understanding Instrumental Stokes Leakage in Murchison Widefield Array\n Polarimetry: This paper offers an electromagnetic, more specifically array theory,\nperspective on understanding strong instrumental polarization effects for\nplanar low-frequency \"aperture arrays\", with the Murchison Widefield Array (MWA)\nas an example. A long-standing issue that has been seen here is significant\ninstrumental Stokes leakage after calibration, particularly in Stokes Q at high\nfrequencies. A simple model that accounts for inter-element mutual coupling is\npresented which explains the prominence of Q leakage seen when the array is\nscanned away from zenith in the principal planes. On these planes, the model\npredicts current imbalance in the X (E-W) and Y (N-S) dipoles and hence the Q\nleakage. Although helpful in concept, we find that this model is inadequate to\nexplain the full details of the observation data. This finding motivates\nfurther experimentation with more rigorous models that account for both mutual\ncoupling and embedded element patterns. Two more rigorous models are discussed:\nthe \"full\" and \"average\" embedded element patterns. The viability of the \"full\"\nmodel is demonstrated by simulating current MWA practice of using a Hertzian\ndipole model as a Jones matrix estimate. We find that these results replicate\nthe observed Q leakage to approximately 2 to 5%. Finally, we offer more direct\nindication for the level of improvement expected from upgrading the Jones\nmatrix estimate with more rigorous models.
Using the \"average\" embedded pattern\nas an estimate for the \"full\" model, we find that Q leakage of a few percent is\nachievable.", "category": "astro-ph_IM" }, { "text": "WTF? Discovering the Unexpected in next-generation radio continuum\n surveys: Most major discoveries in astronomy have come from unplanned discoveries made\nby surveying the Universe in a new way, rather than by testing a hypothesis or\nconducting an investigation with planned outcomes. Next generation radio\ncontinuum surveys such as the Evolutionary Map of the Universe (EMU: the radio\ncontinuum survey on the new Australian SKA Pathfinder telescope), will\nsignificantly expand the volume of observational phase space, so we can be\nreasonably confident that we will stumble across unexpected new phenomena or\nnew types of object. However, the complexity of the instrument and the large\ndata volumes mean that it may be non-trivial to identify them. On the other\nhand, if we don't, then we may be missing out on the most exciting science\nresults from EMU. We have therefore started a project called \"WTF\", which\nexplicitly aims to mine EMU data to discover unexpected science that is not\npart of our primary science goals, using a variety of machine-learning\ntechniques and algorithms. Although targeted specifically at EMU, we expect\nthis approach will have broad applicability to astronomical survey data.", "category": "astro-ph_IM" }, { "text": "Radio Weak Lensing Shear Measurement in the Visibility Domain - II.\n Source Extraction: This paper extends the method introduced in Rivi et al. (2016b) to measure\ngalaxy ellipticities in the visibility domain for radio weak lensing surveys.\nIn that paper we focused on the development and testing of the method for the\nsimple case of individual galaxies located at the phase centre, and proposed to\nextend it to the realistic case of many sources in the field of view by\nisolating visibilities of each source with a faceting technique. In this second\npaper we present a detailed algorithm for source extraction in the visibility\ndomain and show its effectiveness as a function of the source number density by\nrunning simulations of SKA1-MID observations in the band 950-1150 MHz and\ncomparing original and measured values of galaxies' ellipticities. Shear\nmeasurements from a realistic population of 10^4 galaxies randomly located in a\nfield of view of 1 deg^2 (i.e. the source density expected for the current\nradio weak lensing survey proposal with SKA1) are also performed. At SNR >= 10,\nthe multiplicative bias is only a factor 1.5 worse than what found when\nanalysing individual sources, and is still comparable to the bias values\nreported for similar measurement methods at optical wavelengths. The additive\nbias is unchanged from the case of individual sources, but is significantly\nlarger than typically found in optical surveys. 
This bias depends on the shape\nof the uv coverage, and we suggest that a uv-plane weighting scheme to produce a\nmore isotropic shape could reduce and control the additive bias.", "category": "astro-ph_IM" }, { "text": "Calibration of force actuators on an adaptive secondary prototype: In the context of the Large Binocular Telescope project, we present the\nresults of force actuator calibrations performed on an adaptive secondary\nprototype called P45, a thin deformable glass with magnets glued onto its back.\nElectromagnetic actuators, controlled in a closed loop with a system of\ninternal metrology based on capacitive sensors, continuously deform its shape\nto correct the distortions of the wavefront. Calibrations of the force\nactuators are needed because of the differences between driven forces and\nmeasured forces. We describe the calibration procedures and the results,\nobtained with errors of less than 1.5%.", "category": "astro-ph_IM" }, { "text": "Starbugs: all-singing, all-dancing fibre positioning robots: Starbugs are miniature piezoelectric 'walking' robots with the ability to\nsimultaneously position many optical fibres across a telescope's focal plane.\nTheir simple design incorporates two piezoceramic tubes to form a pair of\nconcentric 'legs' capable of taking individual steps of a few microns, yet with\nthe capacity to move a payload several millimetres per second. The Australian\nAstronomical Observatory has developed this technology to enable fast and\naccurate field reconfigurations without the inherent limitations of more\ntraditional positioning techniques, such as the 'pick and place' robotic arm.\nWe report on our recent successes in demonstrating Starbug technology, driven\nprincipally by R&D efforts for the planned MANIFEST (many instrument\nfibre-system) facility for the Giant Magellan Telescope. Significant\nperformance gains have resulted from improvements to the Starbug system,\nincluding i) the use of a vacuum to attach Starbugs to the underside of a\ntransparent field plate, ii) optimisation of the control electronics, iii) a\nsimplified mechanical design with high-sensitivity piezo actuators, and iv) the\nconstruction of a dedicated laboratory 'test rig'. A method of reliably\nrotating Starbugs in steps of several arcminutes has also been devised, which\nintegrates with the pre-existing x-y movement directions and offers greater\nflexibility while positioning. We present measured performance data from a\nprototype system of 10 Starbugs under full closed-loop control, at field\nplate angles of 0-90 degrees.", "category": "astro-ph_IM" }, { "text": "Discovering Strongly-lensed QSOs From Unresolved Light Curves: We present a new method of discovering galaxy-scale, strongly-lensed QSO\nsystems from unresolved light curves using the autocorrelation function. The\nmethod is tested on five rungs of simulated light curves from the Time Delay\nChallenge 1 that were designed to match the light-curve qualities from\nexisting, ongoing, and forthcoming time-domain surveys such as the Medium Deep\nSurvey of the Panoramic Survey Telescope And Rapid Response System 1, the\nZwicky Transient Facility, and the Rubin Observatory Legacy Survey of Space and\nTime. Among simulated lens systems for which time delays can be successfully\nmeasured by current best algorithms, our method achieves an overall true\npositive rate of 28--58% for doubly-imaged QSOs (doubles) and 36--60% for\nquadruply-imaged QSOs (quads) while maintaining $\lesssim$10% false positive\nrates.
We also apply the method to observed light curves of 22 known\nstrongly-lensed QSOs, and recover 20% of doubles and 25% of quads. The tests\ndemonstrate the capability of our method for discovering strongly-lensed QSOs\nfrom major time-domain surveys. The performance of our method can be further\nimproved by analysing multi-filter light curves and supplementing with\nmorphological, colour, and/or astrometric constraints. More importantly, our\nmethod is particularly useful for discovering small-separation strongly-lensed\nQSOs, complementary to traditional imaging-based methods.", "category": "astro-ph_IM" }, { "text": "Matched filtering for gravitational wave detection without template bank\n driven by deep learning template prediction model bank: The existing matched filtering method for gravitational wave (GW) search\nrelies on a template bank. The computational cost of this method scales\nwith the number of templates within the bank. Higher-order modes and\neccentricity will play an important role when third-generation detectors\noperate in the future. In this case, traditional GW search methods will hit\ncomputational limits. To improve the computational efficiency of GW search, we\npropose the utilization of a deep learning (DL) model bank as a substitute for\nthe template bank. This model bank predicts the latent templates embedded in\nthe strain data. Combining an envelope extraction network and an astrophysical\norigin discrimination network, we realize a novel GW search framework. The\nframework can predict the GW signal's matched filtering signal-to-noise ratio\n(SNR). Unlike end-to-end DL-based GW search methods, our statistical SNR\nholds greater physical interpretability than the $p_{score}$ metric. Moreover,\nthe intermediate results generated by our approach, including the predicted\ntemplate, offer valuable assistance in subsequent GW data processing tasks such\nas parameter estimation and source localization. Compared to the traditional\nmatched filtering method, the proposed method can realize real-time analysis.\nWith minor improvements, the proposed method may in the future be extended to\nother domains of GW search, such as GWs emitted by supernova explosions.", "category": "astro-ph_IM" }, { "text": "Camera Calibration for the IceCube Upgrade and Gen2: An upgrade to the IceCube Neutrino Telescope is currently under construction.\nFor this IceCube Upgrade, seven new strings will be deployed in the central\nregion of the 86-string IceCube detector to enhance the capability to detect\nneutrinos in the GeV range. One of the main science objectives of the IceCube\nUpgrade is an improved calibration of the IceCube detector to reduce systematic\nuncertainties related to the optical properties of the ice. We have developed a\nnovel optical camera and illumination system that will be part of 700 newly\ndeveloped optical modules to be deployed with the IceCube Upgrade. A\ncombination of transmission and reflection photographic measurements will be\nused to measure the optical properties of bulk ice between strings and refrozen\nice in the drill hole, to determine module positions, and to survey the local\nice environments surrounding the sensor module.
In this contribution we present\nthe production design, acceptance testing, and plan for post-deployment\ncalibration measurements with the camera system.", "category": "astro-ph_IM" }, { "text": "Asymptotic Orbits in Barred Spiral Galaxies: We study the formation of the spiral structure of barred spiral galaxies,\nusing an $N$-body model. The evolution of this $N$-body model in the adiabatic\napproximation maintains a strong spiral pattern for more than 10 bar rotations.\nWe find that this longevity of the spiral arms is mainly due to the phenomenon\nof stickiness of chaotic orbits close to the unstable asymptotic manifolds\noriginating from the main unstable periodic orbits, both inside and outside\ncorotation. The stickiness along the manifolds corresponding to different\nenergy levels supports parts of the spiral structure. The loci of the disc\nvelocity minima (where the particles spend most of their time, in the\nconfiguration space) reveal the density maxima and therefore the main\nmorphological structures of the system. We study the relation of these loci\nwith those of the apocentres and pericentres at different energy levels. The\ndiffusion of the sticky chaotic orbits outwards is slow and depends on the\ninitial conditions and the corresponding Jacobi constant.", "category": "astro-ph_IM" }, { "text": "Photostability of gas- and solid-phase biomolecules within dense\n molecular clouds due to soft X-rays: An experimental photochemistry study involving gas- and solid-phase amino\nacids (glycine, DL-valine, DL-proline) and nucleobases (adenine and uracil)\nunder soft X-rays was performed. The aim was to test the molecular stabilities\nof essential biomolecules against ionizing photon fields inside dense molecular\nclouds and protostellar disk analogs. In these environments, the main energy\nsources are cosmic rays and soft X-rays. The measurements were taken at the\nBrazilian Synchrotron Light Laboratory (LNLS), employing 150 eV photons.\nIn-situ sample analysis was performed with a time-of-flight mass spectrometer\n(TOF-MS) and a Fourier transform infrared (FTIR) spectrometer for gas- and\nsolid-phase analysis, respectively. The half-life of solid-phase amino acids,\nassumed to be present in grain mantles, is at least 3E5 years and 3E8 years\ninside dense molecular clouds and protoplanetary disks, respectively. We\nestimate that for gas-phase compounds these values increase by one order of\nmagnitude, since the dissociation cross section of glycine is lower in the gas\nphase than in the solid phase for the same photon energy. The half-life of solid-phase\nnucleobases is about 2-3 orders of magnitude higher than that found for amino acids.\nThe results indicate that nucleobases are much more resistant to ionizing\nradiation than amino acids. We consider these implications for the survival and\ntransfer of biomolecules in space environments.", "category": "astro-ph_IM" }, { "text": "Time-division SQUID multiplexers with reduced sensitivity to external\n magnetic fields: Time-division SQUID multiplexers are used in many applications that require\nexquisite control of systematic error. One potential source of systematic error\nis the pickup of external magnetic fields in the multiplexer. We present\nmeasurements of the field sensitivity figure of merit, effective area, for both\nthe first stage and second stage SQUID amplifiers in three NIST SQUID\nmultiplexer designs.
These designs include a new variety with improved\ngradiometry that significantly reduces the effective area of both the first and\nsecond stage SQUID amplifiers.", "category": "astro-ph_IM" }, { "text": "eROSITA on SRG: a X-ray all-sky survey mission: eROSITA (extended ROentgen Survey with an Imaging Telescope Array) is the\ncore instrument on the Russian Spektrum-Roentgen-Gamma (SRG) mission, which is\nscheduled for launch in late 2012. eROSITA is fully approved and funded by the\nGerman Space Agency DLR and the Max-Planck-Society. The design driving science\nis the detection of 50-100 thousand clusters of galaxies up to redshift z ~\n1.3 in order to study the large-scale structure in the Universe and test\ncosmological models, especially Dark Energy. This will be accomplished by an\nall-sky survey lasting for four years plus a phase of pointed observations.\neROSITA consists of seven Wolter-I telescope modules, each equipped with 54\nWolter-I shells having an outer diameter of 360 mm. This would provide an\neffective area at 1.5 keV of ~ 1500 cm2 and an on-axis PSF HEW of 15\", which\nwould provide an effective angular resolution of 25\"-30\". In the focus of each\nmirror module, a fast frame-store pn-CCD will provide a field of view of 1 deg\nin diameter for an active FOV of ~ 0.83 deg^2. At the time of writing, the\ninstrument development is in phase C/D.", "category": "astro-ph_IM" }, { "text": "Gaia reference frame amid quasar variability and proper motion patterns\n in the data: Gaia's very accurate astrometric measurements will allow the International\nCelestial Reference Frame (ICRF) to be improved by a few orders of magnitude in\nthe optical. Several sets of quasars are used to define a kinematically stable\nnon-rotating reference frame with the barycentre of the Solar System as its\norigin. Gaia will also observe a large number of galaxies, for which accurate\npositions and proper motions could be obtained although they are not point-like. The\noptical stability of the quasars is critical, and we investigate how accurately\nthe reference frame can be recovered. Various proper motion patterns are also\npresent in the data; the best known is caused by the acceleration of the Solar\nSystem Barycentre, presumably towards the Galactic centre. We review some\nother less-well-known effects that are not part of standard astrometric models.\nWe model quasars and galaxies using realistic sky distributions, magnitudes and\nredshifts. Position variability is introduced using a Markov chain model. The\nreference frame is determined using the algorithm developed for the Gaia\nmission, which also determines the acceleration of the Solar System. We also\ntest a method to measure the velocity of the Solar System barycentre in a\ncosmological frame. We simulate the recovery of the reference frame and the\nacceleration of the Solar System and conclude that they are not significantly\ndisturbed in the presence of quasar variability, which is statistically\naveraged. However, the effect of a non-uniform sky distribution of the quasars\ncan result in a correlation between the reference frame and the acceleration, which\ndegrades the solution.
Our results suggest that an attempt should be made to\nastrometrically determine the redshift-dependent apparent drift of galaxies due\nto our velocity relative to the CMB, which in principle could allow the\ndetermination of the Hubble parameter.", "category": "astro-ph_IM" }, { "text": "MAORY AO performances: The Multi-conjugate Adaptive Optics RelaY (MAORY) should provide 30% SR in K\nband (50% goal) on half of the sky at the South Galactic Pole. Assessing its\nperformance and the sensitivity to parameter variations during the design phase\nis a fundamental step for the engineering of such a complex system. This step,\ncentered on numerical simulations, is the connection between the performance\nrequirements and the Adaptive Optics system configuration. In this work we\npresent the MAORY configuration and performance, and we justify the Adaptive\nOptics system design choices.", "category": "astro-ph_IM" }, { "text": "A way to deal with the fringe-like pattern in VIMOS-IFU data: The use of integral field units is now commonplace at all major observatories,\noffering efficient means of obtaining spectral as well as imaging information\nat the same time. IFU instrument designs are complex and spectral images\ntypically have highly condensed formats, therefore presenting challenges for\nthe IFU data reduction pipelines. In the case of the VLT VIMOS-IFU, a\nfringe-like pattern affecting the spectra well into the optical and blue\nwavelength regime, as well as artificial intensity variations, requires\nadditional reduction steps beyond standard pipeline processing. In this\nresearch note we propose an empirical method for the removal of the fringe-like\npattern in the spectral domain and the intensity variations in the imaging\ndomain. We also demonstrate the potential consequences for data analysis if the\neffects are not corrected. Here we use the example of deriving stellar\nvelocity, velocity dispersion and absorption line-strength maps for early-type\ngalaxies. We derive for each spectrum, reduced by the ESO standard VIMOS\npipeline, a correction-spectrum by using the median of the eight surrounding\nspectra as a proxy for the unaffected, underlying spectrum. This method relies\non the fact that our science targets (nearby ETGs) cover the complete FoV of\nthe VIMOS-IFU with slowly varying spectral properties and that the exact shape\nof the fringe-like pattern varies strongly and nearly independently between\nneighboring spatial positions. We find that the proposed correction methods for\nthe removal of the fringe-like pattern and the intensity variations in\nVIMOS-IFU data-cubes are suitable to allow for meaningful data analysis in our\nsample of nearby early-type galaxies.
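A minimal sketch of the neighbour-median step described in the VIMOS-IFU note above; the cube layout and the use of a plain ratio are our assumptions, and the paper's additional intensity-variation correction is not reproduced:

```python
import numpy as np

def correction_spectra(cube):
    """cube: (ny, nx, nlambda) IFU data-cube (assumed layout).

    For each inner spaxel, the median of the eight surrounding spectra
    serves as a proxy for the unaffected, underlying spectrum; the ratio
    of the observed spectrum to this proxy is that spaxel's
    correction-spectrum, i.e. the fringe-like pattern to divide out.
    """
    ny, nx, nl = cube.shape
    corr = np.ones_like(cube)
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            block = cube[j - 1:j + 2, i - 1:i + 2].reshape(9, nl)
            proxy = np.median(np.delete(block, 4, axis=0), axis=0)  # drop centre
            corr[j, i] = cube[j, i] / proxy
    return corr  # dividing the cube by this removes the pattern
```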
Since the method relies on the properties of the\nscience targets, it is not suitable for general implementation in the pipeline\nsoftware for VIMOS.", "category": "astro-ph_IM" }, { "text": "A Near Infrared Laser Frequency Comb for High Precision Doppler Planet\n Surveys: We discuss the laser frequency comb as a near infrared astronomical\nwavelength reference, and describe progress towards a near infrared laser\nfrequency comb at the National Institute of Standards and Technology and at the\nUniversity of Colorado, where we are operating a laser frequency comb suitable\nfor use with a high resolution H band astronomical spectrograph.", "category": "astro-ph_IM" }, { "text": "An Inexpensive Liquid Crystal Spectropolarimeter for the Dominion\n Astrophysical Observatory Plaskett Telescope: A new, inexpensive polarimetric unit has been constructed for the Dominion\nAstrophysical Observatory (DAO) 1.8-m Plaskett telescope. It is implemented as\na plug-in module for the telescope's existing Cassegrain spectrograph, and\nenables medium resolution (R~10,000) circular spectropolarimetry of point\nsources. A dual-beam design together with fast switching of the wave plate at\nrates up to 100Hz, and synchronized with charge shuffling on the CCD, is used\nto significantly reduce instrumental effects and achieve high-precision\nspectropolarimetric measurements for a very low cost. The instrument is\noptimized to work in the wavelength range 4700 - 5300A to simultaneously detect\npolarization signals in the H beta line as well as nearby metallic lines. In\nthis paper we describe the technical details of the instrument, our observing\nstrategy and data reduction techniques, and present tests of its scientific\nperformance.", "category": "astro-ph_IM" }, { "text": "Astronomical seeing and ground-layer turbulence in the Canadian High\n Arctic: We report results of a two-year campaign of measurements, during arctic\nwinter darkness, of optical turbulence in the atmospheric boundary-layer above\nthe Polar Environment Atmospheric Laboratory in northern Ellesmere Island\n(latitude +80 deg N). The data reveal that the ground-layer turbulence in the\nArctic is often quite weak, even at the comparatively low 610 m altitude of\nthis site. The median and 25th percentile ground-layer seeing, at a height of\n20 m, are found to be 0.57 and 0.25 arcsec, respectively. When combined with a\nfree-atmosphere component of 0.30 arcsec, the median and 25th percentile total\nseeing for this height are 0.68 and 0.42 arcsec, respectively. The median total\nseeing from a height of 7 m is estimated to be 0.81 arcsec. These values are\ncomparable to those found at the best high-altitude astronomical sites.", "category": "astro-ph_IM" }, { "text": "pyCallisto: A Python Library To Process The CALLISTO Spectrometer Data: CALLISTO is a radio spectrometer designed to monitor the transient radio\nemissions/bursts originating from the solar corona in the frequency range\n$45-870$ MHz. At present, there are $\gtrsim 150$ stations (which together form\nthe e-CALLISTO network) around the globe continuously monitoring the Sun 24\nhours a day. We have developed pyCallisto, a Python library to process the\nCALLISTO data observed by all stations of the e-CALLISTO network. In this\narticle, we demonstrate, with suitable examples, various useful functions that\nare routinely used to process the CALLISTO data.
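pyCallisto's own API is not shown in the entry above; as a generic illustration, a CALLISTO spectrogram can be opened with astropy alone, assuming the conventional e-CALLISTO FITS layout (dynamic spectrum in the primary HDU, time/frequency axes in a binary-table extension) — the file name here is hypothetical:

```python
from astropy.io import fits
import matplotlib.pyplot as plt

# Hypothetical file name; any e-CALLISTO FITS spectrogram should work.
with fits.open("BLEN7M_20110809_080004_25.fit") as hdul:
    dyn = hdul[0].data          # (n_freq, n_time) dynamic spectrum
    axes = hdul[1].data         # binary table holding the axis vectors
    time = axes["TIME"][0]      # seconds since the file start
    freq = axes["FREQUENCY"][0] # MHz, typically in descending order

plt.imshow(dyn, aspect="auto",
           extent=[time[0], time[-1], freq[-1], freq[0]])
plt.xlabel("Time [s]")
plt.ylabel("Frequency [MHz]")
plt.show()
```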
This library is not only efficient in\nprocessing the data but also plays a significant role in developing automatic\nclassification algorithms for different types of solar radio bursts.", "category": "astro-ph_IM" }, { "text": "Correcting Bandwidth Depolarization by Extreme Faraday Rotation: Measurements of the polarization of radio emission are subject to a number of\ndepolarization effects such as bandwidth depolarization, which is caused by the\naveraging effect of a finite channel bandwidth combined with the\nfrequency-dependent polarization caused by Faraday rotation. There have been\nvery few mathematical treatments of bandwidth depolarization, especially in the\ncontext of the rotation measure (RM) synthesis method for analyzing radio\npolarization data. We have found a simple equation for predicting if bandwidth\ndepolarization is significant for a given observational configuration. We have\nderived and tested three methods of modifying RM synthesis to correct for\nbandwidth depolarization. From these tests we have developed a new algorithm\nthat can detect bandwidth-depolarized signals with higher signal-to-noise than\nconventional RM synthesis and recover the correct source polarization\nproperties (RM and polarized intensity). We have verified that this algorithm\nworks as expected with real data from the LOFAR Two-metre Sky Survey. To make\nthis algorithm available to the community, we have added it as a new tool in\nthe RM-Tools polarization analysis package.", "category": "astro-ph_IM" }, { "text": "The Multi-slit Approach to Coronal Spectroscopy with the Multi-slit\n Solar Explorer (MUSE): The Multi-slit Solar Explorer (MUSE) is a proposed mission aimed at\nunderstanding the physical mechanisms driving the heating of the solar corona\nand the eruptions that are at the foundation of space weather. MUSE contains\ntwo instruments, a multi-slit EUV spectrograph and a context imager. It will\nsimultaneously obtain EUV spectra (along 37 slits) and context images with the\nhighest resolution in space (0.33-0.4 arcsec) and time (1-4 s) ever achieved\nfor the transition region and corona. The MUSE science investigation will\nexploit major advances in numerical modeling, and observe at the spatial and\ntemporal scales on which competing models make testable and distinguishable\npredictions, thereby leading to a breakthrough in our understanding of coronal\nheating and the drivers of space weather. By obtaining spectra in 4 bright EUV\nlines (Fe IX 171A, Fe XV 284A, Fe XIX-XXI 108A) covering a wide range of\ntransition region and coronal temperatures along 37 slits simultaneously, MUSE\nwill be able to \"freeze\" the evolution of the dynamic coronal plasma. We\ndescribe MUSE's multi-slit approach and show that the optimization of the\ndesign minimizes the impact of spectral lines from neighboring slits, generally\nallowing line parameters to be accurately determined. We also describe a\nSpectral Disambiguation Code to resolve multi-slit ambiguity in locations where\nsecondary lines are bright.
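The "simple equation" itself is not quoted in the bandwidth-depolarization entry above, but the standard channel-averaging result (the sinc factor of Brentjens & de Bruyn 2005) conveys the idea:

```latex
% Fractional polarization surviving a channel of width delta(lambda^2)
% for a source with Faraday depth phi:
%   p_obs / p_0 = | sin( phi * dlam2 ) / ( phi * dlam2 ) | ,
% so bandwidth depolarization becomes significant once
% phi * delta(lambda^2) approaches ~1 radian.
\frac{p_{\mathrm{obs}}}{p_0} = \left|\frac{\sin\!\left(\phi\,\delta\lambda^{2}\right)}{\phi\,\delta\lambda^{2}}\right|
```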
We use simulations of the corona and eruptions to\nperform validation tests and show that the multi-slit disambiguation approach\nallows accurate determination of MUSE observables in locations where\nsignificant multi-slit contamination occurs.", "category": "astro-ph_IM" }, { "text": "The Road to Quasars: Although the extragalactic nature of 3C 48 and other quasi-stellar radio\nsources was discussed as early as 1960 by John Bolton and others, it was\nrejected largely because of preconceived ideas about what appeared to be\nunrealistically high radio and optical luminosities. Not until the 1962\noccultations of the strong radio source 3C 273 at Parkes, which led Maarten\nSchmidt to identify 3C 273 with an apparent stellar object at a redshift of\n0.16, was the true nature understood. Successive radio and optical measurements\nquickly led to the identification of other quasars with increasingly large\nredshifts and the general, although for some decades not universal, acceptance\nof quasars as the very luminous nuclei of galaxies. Curiously, 3C 273, which is\none of the strongest extragalactic sources in the sky, was first cataloged in\n1959 and the magnitude 13 optical counterpart was observed at least as early as\n1887. Since 1960, much fainter optical counterparts were being routinely\nidentified using accurate radio interferometer positions which were measured\nprimarily at the Caltech Owens Valley Radio Observatory. However, 3C 273 eluded\nidentification until the series of lunar occultation observations led by Cyril\nHazard. Although an accurate radio position had been obtained earlier with the\nOVRO interferometer, inexplicably 3C 273 was initially misidentified with a\nfaint galaxy located about an arc minute away from the true quasar position.", "category": "astro-ph_IM" }, { "text": "Pulsar Candidate Identification Using Semi-Supervised Generative\n Adversarial Networks: Machine learning methods are increasingly helping astronomers identify new\nradio pulsars. However, they require a large amount of labelled data, which is\ntime-consuming to produce and prone to bias. Here we describe a Semi-Supervised\nGenerative Adversarial Network (SGAN) which achieves better classification\nperformance than the standard supervised algorithms using datasets that are\nmostly unlabelled. We achieved an accuracy and mean F-Score of 94.9% trained on\nonly 100 labelled candidates and 5000 unlabelled candidates, compared to our\nstandard supervised baseline which scored 81.1% and 82.7%, respectively. Our\nfinal model trained on a much larger labelled dataset achieved an accuracy and\nmean F-score value of 99.2% and a recall rate of 99.7%. This technique allows\nfor high quality classification during the early stages of pulsar surveys on\nnew instruments when limited labelled data are available. We open-source our\nwork along with a new pulsar-candidate dataset produced from the High Time\nResolution Universe - South Low Latitude Survey. This dataset has the largest\nnumber of pulsar detections of any public dataset and we hope it will be a\nvaluable tool for benchmarking future machine learning models.", "category": "astro-ph_IM" }, { "text": "Geometric calibration of Colour and Stereo Surface Imaging System of\n ESA's Trace Gas Orbiter: There are many geometric calibration methods for \"standard\" cameras. These\nmethods, however, cannot be used for the calibration of telescopes with large\nfocal lengths and complex off-axis optics.
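For reference, the "standard" camera model that such conventional calibration methods fit is the pinhole projection (our summary, not taken from the CaSSIS paper); it is this model that becomes ill-conditioned for long-focal-length, off-axis telescope optics:

```latex
% Pinhole model: a homogeneous world point X maps to pixel x via
%   x ~ K [R | t] X ,
% with the intrinsic matrix
%   K = [ f_x  s  c_x ; 0  f_y  c_y ; 0  0  1 ].
% For a telescope, the effective focal length is huge and the optics
% are off-axis, so jointly estimating K, distortion and pose from
% checkerboard-style images becomes degenerate.
\mathbf{x} \sim K\,[R\,|\,\mathbf{t}]\,\mathbf{X}, \qquad K = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
```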
Moreover, specialized calibration\nmethods for telescopes are scarce in the literature. We describe the\ncalibration method that we developed for the Colour and Stereo Surface Imaging\nSystem (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO).\nAlthough our method is described in the context of CaSSIS, with camera-specific\nexperiments, it is general and can be applied to other telescopes. We further\nencourage re-use of the proposed method by making our calibration code and data\navailable on-line.", "category": "astro-ph_IM" }, { "text": "Fundamental limits to high-contrast wavefront control: The current generation of ground-based coronagraphic instruments uses\ndeformable mirrors to correct for phase errors and to improve contrast levels\nat small angular separations. Building on these techniques, several space- and\nground-based instruments are currently being developed that use two deformable\nmirrors to correct for both phase and amplitude errors. However, as wavefront\ncontrol techniques improve, more complex telescope pupil geometries (support\nstructures, segmentation) will soon be a limiting factor for these next\ngeneration coronagraphic instruments. In this paper we discuss fundamental\nlimits associated with wavefront control with deformable mirrors in\nhigh-contrast coronagraphs. We start with an analytic prescription of wavefront\nerrors, along with their wavelength dependence, and propagate them through\ncoronagraph models. We then consider a few wavefront control architectures, the\nnumber of deformable mirrors and their placement in the optical train of the\ninstrument, and algorithms that can be used to cancel the starlight scattered\nby these wavefront errors over a finite bandpass. For each configuration we\nderive the residual contrast as a function of bandwidth and of the properties\nof the incoming wavefront. This result has consequences when setting the\nwavefront requirements, along with the wavefront control architecture, of\nfuture high-contrast instruments, both on the ground and in space. In\nparticular we show that these limits can severely affect the effective Outer\nWorking Angle that can be achieved by a given coronagraph instrument.", "category": "astro-ph_IM" }, { "text": "Short Spacing Synthesis from a Primary Beam Scanned Interferometer: Aperture synthesis instruments providing a generally highly uniform sampling\nof the visibility function often leave an unsampled hole near the origin of the\n(u,v)-plane. In this paper, originally published in 1979, we first describe the\ncommon solution of retrieving the information from scans made with a large\nsingle-dish telescope. However, this is not the only means by which short\nspacing visibility data can be obtained. We propose an alternative technique\nthat employs a short-baseline interferometer to scan the entire primary beam\narea. The obvious advantage is that a short-baseline pair from the synthesis\ninstrument can be used, ensuring uniformity in instrumental characteristics.\nThis technique is the basis for the mosaicing algorithms now commonly used in\naperture synthesis radio astronomy imaging.", "category": "astro-ph_IM" }, { "text": "Trend Filtering -- II. Denoising Astronomical Signals with Varying\n Degrees of Smoothness: Trend filtering---first introduced into the astronomical literature in Paper\nI of this series---is a state-of-the-art statistical tool for denoising\none-dimensional signals that possess varying degrees of smoothness.
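A minimal sketch of (k = 1) trend filtering as named in the entry above, written as the usual convex program; the synthetic signal, the penalty weight and the use of cvxpy are our illustrative choices, not the paper's implementation:

```python
import numpy as np
import cvxpy as cp

# Synthetic piecewise-smooth signal with noise (illustrative).
rng = np.random.default_rng(0)
n = 500
truth = np.concatenate([np.linspace(0.0, 1.0, 250),
                        np.linspace(1.0, 0.2, 250)])
y = truth + 0.05 * rng.standard_normal(n)

# Linear trend filtering: an l1 penalty on second differences yields a
# piecewise-linear fit whose knots adapt to the local smoothness.
D = np.diff(np.eye(n), n=2, axis=0)      # (n-2, n) second-difference matrix
x = cp.Variable(n)
lam = 1.0
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - x)
                       + lam * cp.norm1(D @ x))).solve()
denoised = x.value
```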
In this\nwork, we demonstrate the broad utility of trend filtering to observational\nastronomy by discussing how it can contribute to a variety of spectroscopic and\ntime-domain studies. The observations we discuss are (1) the Lyman-$\alpha$\nforest of quasar spectra; (2) more general spectroscopy of quasars, galaxies,\nand stars; (3) stellar light curves with planetary transits; (4) eclipsing\nbinary light curves; and (5) supernova light curves. We study the\nLyman-$\alpha$ forest in the greatest detail---using trend filtering to map the\nlarge-scale structure of the intergalactic medium along quasar-observer lines\nof sight. The remaining studies share broad themes of: (1) estimating\nobservable parameters of light curves and spectra; and (2) constructing\nobservational spectral/light-curve templates. We also briefly discuss the\nutility of trend filtering as a tool for one-dimensional data reduction and\ncompression.", "category": "astro-ph_IM" }, { "text": "Processing System for Coherent Dedispersion of Pulsar Radio Emission: This work describes a system for converting VLBI observation data using the\nalgorithms of coherent dedispersion and compensation of two-bit signal\nsampling. Coherent dedispersion is important for processing pulsar observations\nto obtain the best temporal resolution, while correction for signal sampling\nmakes it possible to get rid of a number of parasitic effects that interfere\nwith the analysis of the diffraction pattern of pulsars. A pipeline has been\nestablished that uses the developed converter and the ASC Software Correlator,\nwhich will allow reprocessing of all archived data of Radioastron pulsar\nobservations and a search for giant pulses, which requires the best temporal\nresolution.", "category": "astro-ph_IM" }, { "text": "Dual Purpose Lyot Coronagraph Masks for Simultaneous High-Contrast\n Imaging and High-Resolution Wavefront Sensing: Directly imaging Earth-sized exoplanets with a visible-light coronagraph\ninstrument on a space telescope will require a system that can achieve\n$\sim10^{-10}$ raw contrast and maintain it for the duration of observations\n(on the order of hours or more). We are designing, manufacturing, and testing\nDual Purpose Lyot coronagraph (DPLC) masks that allow for simultaneous\nwavefront sensing and control using out-of-band light to maintain high contrast\nin the science focal plane. Our initial design uses a tiered metallic focal\nplane occulter to suppress starlight in the transmitted coronagraph channel and\na dichroic-coated substrate to reflect out-of-band light to a wavefront sensing\ncamera. The occulter design introduces a phase shift such that the reflected\nchannel is a Zernike wavefront sensor. The dichroic coating allows higher-order\nwavefront errors to be detected, which is especially critical for compensating\nfor residual drifts from an actively-controlled segmented primary mirror. A\nsecond-generation design concept includes a metasurface to create\npolarization-dependent phase shifts in the reflected beam, which has several\nadvantages including an extended dynamic range.
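For context on the coherent-dedispersion entry above, the usual approach (following Hankins & Rickett; our summary, with the standard dispersion constant, up to sign convention) deconvolves the interstellar "chirp" from the voltage data:

```latex
% Coherent dedispersion: in a band centred on f0, a frequency offset f
% acquires the dispersive phase
%   theta(f) = 2*pi*D * f^2 / ( f0^2 (f0 + f) ) ,
% with D = k_DM * DM and k_DM ~ 4.149 x 10^3 MHz^2 pc^-1 cm^3 s.
% Multiplying the baseband spectrum by exp(-i*theta(f)) removes the
% dispersive smearing exactly within the band, giving the best
% attainable temporal resolution.
\theta(f) = \frac{2\pi D f^{2}}{f_{0}^{2}\,(f_{0}+f)}, \qquad D = k_{\mathrm{DM}}\,\mathrm{DM}
```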
We will present the focal plane\nmask designs, characterization, and initial testing at NASA's High Contrast\nImaging Testbed (HCIT) facility.", "category": "astro-ph_IM" }, { "text": "Updated Inflight Calibration of Hayabusa2's Optical Navigation Camera\n (ONC) for Scientific Observations during the Cruise Phase: The Optical Navigation Cameras (ONC-T, ONC-W1, ONC-W2) onboard Hayabusa2 are\nalso being used for scientific observations of the mission target, C-complex\nasteroid 162173 Ryugu. Science observations and analyses require rigorous\ninstrument calibration. In order to meet this requirement, we have conducted\nextensive inflight observations during the 3.5 years of cruise after the launch\nof Hayabusa2 on 3 December 2014. In addition to the first inflight calibrations\nby Suzuki et al. (2018), we conducted an additional series of calibrations,\nincluding read-out smear, electronic-interference noise, bias, dark current,\nhot pixels, sensitivity, linearity, flat-field, and stray light measurements\nfor the ONC. Moreover, the calibrations, especially flat-fields and\nsensitivities, of ONC-W1 and -W2 are updated for the analysis of the\nlow-altitude (i.e., high-resolution) observations, such as the gravity\nmeasurement, touchdowns, and the descents for MASCOT and MINERVA-II payload\nreleases. The radiometric calibration for ONC-T is also updated in this study\nbased on star and Moon observations. Our updated inflight sensitivity\nmeasurements suggest that the absolute radiometric calibration contains less\nthan 1.8% error for the ul-, b-, v-, Na-, w-, and x-bands based on star\ncalibration observations and ~5% for the p-band based on lunar calibration\nobservations. The radiance spectra of the Moon, Jupiter, and Saturn from the\nONC-T show good agreement with the spacecraft-based observations of the Moon\nfrom SP/SELENE and WAC/LROC and with ground-based telescopic observations for\nJupiter and Saturn.", "category": "astro-ph_IM" }, { "text": "Optical NEP in Hot-Electron Nanobolometers: For the first time, we have measured the optical noise equivalent power (NEP)\nin titanium (Ti) superconducting hot-electron nanobolometers (nano-HEBs). The\nbolometers were 2 {\mu}m x 1 {\mu}m x 20 nm and 1 {\mu}m x 1 {\mu}m x 20 nm\nplanar antenna-coupled devices. The measurements were done at {\lambda} = 460\n{\mu}m using a cryogenic black body radiation source delivering optical power\nfrom a fraction of a femtowatt to a few hundred femtowatts. A record low NEP =\n3x10^{-19} W/Hz^{1/2} at 50 mK has been achieved. This sensitivity meets the\nrequirements for the SAFARI instrument on the SPICA telescope. Ways to further\nimprove the nano-HEB detector sensitivity are discussed.", "category": "astro-ph_IM" }, { "text": "Scintillation Pulse Shape Discrimination in a Two-Phase Xenon Time\n Projection Chamber: The energy and electric field dependence of pulse shape discrimination in\nliquid xenon have been measured in a 10 g two-phase xenon time projection\nchamber. We have demonstrated the use of the pulse shape and charge-to-light\nratio simultaneously to obtain a leakage below that achievable by either\ndiscriminant alone. A Monte Carlo is used to show that the dominant fluctuation\nin the pulse shape quantity is statistical in nature, and to project the\nperformance of these techniques in larger detectors.
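A toy version of the statistical-fluctuation argument in the xenon pulse-shape entry above; all numbers are illustrative assumptions, not values from that paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def prompt_fraction(mean_f, mean_photons, n_events):
    """Pulse-shape quantity from photon counting: each event detects
    N ~ Poisson(mean_photons) photons, of which a Binomial fraction
    falls in the prompt window."""
    n = rng.poisson(mean_photons, n_events)
    return rng.binomial(n, mean_f) / np.maximum(n, 1)

er = prompt_fraction(0.3, 50, 200_000)   # electron recoils (illustrative)
nr = prompt_fraction(0.6, 50, 200_000)   # nuclear recoils (illustrative)

cut = np.quantile(nr, 0.5)               # keep 50% of nuclear recoils
print("ER leakage past the cut:", np.mean(er > cut))
# More detected photons (larger detectors, higher energy) narrow both
# distributions and shrink the leakage - the statistical scaling the
# abstract's Monte Carlo projects to larger detectors.
```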
Although the performance\nis generally weak at low energies relevant to elastic WIMP recoil searches, the\npulse shape can be used in probing for higher energy inelastic WIMP recoils.", "category": "astro-ph_IM" }, { "text": "Ultra-sensitive Super-THz Microwave Kinetic Inductance Detectors for\n future space telescopes: Future actively cooled space-borne observatories for the far-infrared,\nloosely defined as a 1--10 THz band, can potentially reach a sensitivity\nlimited only by background radiation from the Universe. This will result in an\nincrease in observing speed of many orders of magnitude. A spectroscopic\ninstrument on such an observatory requires large arrays of detectors with a\nsensitivity expressed as a noise equivalent power NEP = 3 $\times 10^{-20}$\nW$/\sqrt{\mathrm{Hz}}$. We present the design, fabrication, and\ncharacterisation of microwave kinetic inductance detectors (MKIDs) for this\nfrequency range reaching the required sensitivity. The devices are based on\nthin-film NbTiN resonators which use lens-antenna coupling to a submicron-width\naluminium transmission line at the shorted end of the resonator where the\nradiation is absorbed. We optimised the MKID geometry for a low NEP by using a\nsmall aluminium volume of $\approx$ 1$\mu m^3$ and fabricating the aluminium\nsection on a very thin (100 nm) SiN membrane. Both methods of optimisation also\nreduce the effect of excess noise by increasing the responsivity of the device,\nwhich is further increased by reducing the parasitic geometrical inductance of\nthe resonator. We measure the sensitivity of eight MKIDs with respect to the\npower absorbed in the detector using a thermal calibration source filtered in a\nnarrow band around 1.55 THz. We obtain a NEP$_{exp}(P_{abs}) =\n3.1\pm0.9\times10^{-20}$ W$/\sqrt{\mathrm{Hz}}$ at a modulation frequency of\n200 Hz averaged over all measured MKIDs. The NEP is limited by quasiparticle\ntrapping. The measured sensitivity is sufficient for spectroscopic observations\nfrom future, actively cooled space-based observatories. Moreover, the presented\ndevice design and assembly can be adapted for frequencies up to $\approx$ 10\nTHz and can be readily implemented in kilopixel arrays.", "category": "astro-ph_IM" }, { "text": "New Periodograms Separating Orbital Radial Velocities and Spectral Shape\n Variation: We present new periodograms that are effective in distinguishing Doppler\nshift from spectral shape variability in astronomical spectra. These\nperiodograms, building upon the concept of partial distance correlation,\nseparate the periodic radial velocity modulation induced by orbital motion from\nthat induced by stellar activity. These tools can be used to explore large\nspectroscopic databases in search of targets in which spectral shape variations\nobscure the orbital motion; such systems include active planet-hosting stars or\nbinary systems with an intrinsically variable component. We provide a detailed\nprescription for calculating the periodograms, demonstrate their performance\nvia simulations and real-life case studies, and provide a public Python\nimplementation.", "category": "astro-ph_IM" }, { "text": "A new method of testing the gravitational redshift effect with radio\n interferometers: We propose a new method to measure the gravitational redshift effect using\nsimultaneous interferometric observations of a distant radio source to\nsynchronize clocks. The first-order in $v/c$ contribution to the signal (the\nclassical Doppler effect) is automatically canceled in our setup.
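The effect isolated by the interferometric clock synchronization in the entry above is, to leading order, the standard gravitational clock shift (a textbook result, not specific to that paper):

```latex
% Fractional frequency shift between two clocks at potentials U_1, U_2,
% once the first-order Doppler term has been cancelled:
%   (f_2 - f_1)/f = (U_2 - U_1)/c^2 + O(v^2/c^2) ,
% of order 7 x 10^-10 between the ground and a distant spacecraft such
% as RadioAstron, hence the quoted ~10^-3 relative precision target.
\frac{\Delta f}{f} = \frac{\Delta U}{c^{2}} + \mathcal{O}\!\left(\frac{v^{2}}{c^{2}}\right)
```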
When other\ncontributions from the velocities of the clocks, clock imperfection and the\natmosphere are properly taken into account, the residual gravitational redshift\ncan be measured with a relative precision of $\sim 10^{-3}$ for the RadioAstron\nspace-to-ground interferometer, or with a precision down to a few $10^{-5}$\nwith the next generation of space radio interferometers.", "category": "astro-ph_IM" }, { "text": "The performance of SiPM-based gamma-ray detector (GRD) of GECAM-C: As a new member of GECAM mission, the GECAM-C (also called High Energy Burst\nSearcher, HEBS) is a gamma-ray all-sky monitor onboard the SATech-01 satellite,\nwhich was launched on July 27th, 2022 to detect gamma-ray transients from 6 keV\nto 6 MeV, such as Gamma-Ray Bursts (GRBs), high energy counterpart of\nGravitational Waves (GWs) and Fast Radio Bursts (FRBs), and Soft Gamma-ray\nRepeaters (SGRs). Together with GECAM-A and GECAM-B launched in December 2020,\nGECAM-C will greatly improve the monitoring coverage, localization, as well as\ntemporal and spectral measurements of gamma-ray transients. GECAM-C employs 12\nSiPM-based Gamma-Ray Detectors (GRDs) to detect gamma-ray transients. In this\npaper, we first give a brief description of the design of the GECAM-C GRDs, and\nthen focus on the on-ground tests and in-flight performance of the GRDs. We\nalso compare the in-flight SiPM performance of GECAM-C and GECAM-B. The results\nshow that the GECAM-C GRDs work as expected and are ready to make scientific\nobservations.", "category": "astro-ph_IM" }, { "text": "PhotoNs-GPU: A GPU accelerated cosmological simulation code: We present a GPU-accelerated cosmological simulation code, PhotoNs-GPU, based\non the Particle-Mesh Fast Multipole Method (PM-FMM) algorithm, and focus on GPU\nutilization and optimization. An interpolation method for the truncated gravity\nis introduced to speed up the special functions in the kernels. We verify the\nGPU code in mixed precision and at different levels of interpolation. A run\nwith single precision is roughly two times faster than double precision for\ncurrent practical cosmological simulations, but it can induce a small, unbiased\nnoise in the power spectrum. Compared with the CPU version of PhotoNs and with\nGadget-2, the efficiency of the new code is significantly improved. With all\nthe optimizations of memory access, kernel functions and concurrency management\nactivated, the peak performance of our test runs reaches 48% of the theoretical\nspeed and the average performance approaches 35% on the GPU.", "category": "astro-ph_IM" }, { "text": "Optical Characterization & Testbed Development for \u03bc-Spec Integrated\n Spectrometers: This paper describes a cryogenic optical testbed developed to characterize\nu-Spec spectrometers in a dedicated dilution refrigerator (DR) system. u-Spec\nis a far-infrared integrated spectrometer that is an analog to a Rowland-type\ngrating spectrometer. It employs a single-crystal silicon substrate with\nniobium microstrip lines and aluminum kinetic inductance detectors (KIDs).\nCurrent designs with a resolution of 512 are in fabrication for the EXCLAIM\n(Experiment for Cryogenic Large Aperture Intensity Mapping) balloon mission.\nThe primary spectrometer performance and design parameters are efficiency, NEP,\ninter-channel isolation, spectral resolution, and frequency response for each\nchannel.
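The single- versus double-precision trade-off noted in the PhotoNs-GPU entry above can be sketched with a toy direct force summation (illustrative only; unrelated to the PM-FMM kernels themselves):

```python
import numpy as np

rng = np.random.default_rng(42)
pos = rng.random((20_000, 3))

def net_force_x(pos, dtype):
    """Direct-sum x-component of a softened inverse-square force on
    particle 0, accumulated at the requested precision."""
    p = pos.astype(dtype)
    d = p[1:] - p[0]
    r2 = (d * d).sum(axis=1) + dtype(1e-6)   # softening avoids blow-ups
    return (d[:, 0] / r2 ** dtype(1.5)).sum()

f64 = net_force_x(pos, np.float64)
f32 = net_force_x(pos, np.float32)
print("relative float32 round-off:", abs(f32 - f64) / abs(f64))
# The discrepancy acts like an extra, unbiased noise term: negligible
# for many statistics but visible in sensitive ones such as the matter
# power spectrum, as the abstract notes.
```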
Here we present the development and design of an optical\ncharacterization facility and preliminary validation of that facility with\nearlier prototype R=64 devices. We describe initial optical measurements of the\nR = 64 devices, conducted using a swept photomixer line source. We also\ndiscuss the test plan for optical characterization of the EXCLAIM R = 512\nu-Spec devices in this new testbed.", "category": "astro-ph_IM" }, { "text": "Measurement of turbulence profile from defocused ring images: A defocused image of a bright single star in a small telescope contains rich\ninformation on the optical turbulence, i.e. the seeing. The concept of a novel\nturbulence monitor based on recording sequences of ring-like intrafocal images\nand their analysis is presented. It can be implemented using standard\ninexpensive telescopes and cameras. Statistics of intensity fluctuations in the\nrings and their radial motion allow measurement of the low-resolution\nturbulence profile, the total seeing, and the atmospheric time constant. The\nalgorithm of processing the images and extracting the turbulence parameters is\ndeveloped and extensively tested by numerical simulation. Prescriptions to\ncorrect for finite exposure time and partially saturated scintillation are\ngiven. A prototype instrument with a 0.13-m aperture was tested on the sky. The\nRINGSS (Ring-Image Next Generation Scintillation Sensor) can be used as a\nportable turbulence monitor for site testing and as an upgrade of existing\nseeing monitors.", "category": "astro-ph_IM" }, { "text": "Thermal architecture for the QUBIC cryogenic receiver: QUBIC, the QU Bolometric Interferometer for Cosmology, is a novel forthcoming\ninstrument to measure the B-mode polarization anisotropy of the Cosmic\nMicrowave Background. The detection of the B-mode signal will be extremely\nchallenging; QUBIC has been designed to address this with a novel approach,\nnamely bolometric interferometry. The receiver cryostat is exceptionally large\nand cools complex optical and detector stages to 40 K, 4 K, 1 K and 350 mK\nusing two pulse tube coolers, a novel 4He sorption cooler and a double-stage\n3He/4He sorption cooler. We discuss the thermal and mechanical design of the\ncryostat, modelling and thermal analysis, and laboratory cryogenic testing.", "category": "astro-ph_IM" }, { "text": "Cryogenic cooling with cryocooler on a rotating system: We developed a system that continuously operates a cryocooler for long\nperiods on a rotating table. A cryostat that holds the cryocooler is set on the\ntable. A compressor is located on the ground and supplies high-purity (>\n99.999%) and high-pressure (1.7 MPa) helium gas and electricity to the\ncryocooler. The operation of the cryocooler and other instruments requires the\ndevelopment of interface components between the ground and rotating table. A\ncombination of access holes at the center of the table and two rotary joints\nallows simultaneous circulation of electricity and helium gas. The developed\nsystem provides two innovative functions under rotation: cooling from room\ntemperature and the maintenance of a cold condition for long periods. We have\nconfirmed these abilities, as well as temperature stability, under continuous\nrotation at 20 revolutions per minute. The developed system can be applied in\nvarious fields, e.g., tests of Lorentz invariance, searches for axions, radio\nastronomy and cosmology, and radar systems.
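As a flavour of the ring-image statistics used by RINGSS in the turbulence-monitor entry above, the scintillation index is the standard normalized intensity variance (the per-sector layout is our assumption about how a ring would be binned):

```python
import numpy as np

def scintillation_indices(frames):
    """frames: (n_t, n_sectors) mean ring intensity per azimuthal sector
    over a sequence of short exposures.

    Returns the scintillation index s^2 = var(I)/<I>^2 per sector and
    for the whole ring; the spatial and temporal behaviour of such
    statistics is what encodes the turbulence profile and time constant.
    """
    m = frames.mean(axis=0)
    s2_sector = frames.var(axis=0) / m**2
    total = frames.sum(axis=1)
    s2_total = total.var() / total.mean()**2
    return s2_sector, s2_total
```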
In particular, there is a plan to use this system for a radio\ntelescope observing cosmic microwave background radiation.", "category": "astro-ph_IM" }, { "text": "A dual-mask coronagraph for observing faint companions to binary stars: Observing binary stars for faint companions with conventional coronagraphic\nmethods is challenging, as both targets will be bright enough to obscure any\nnearby faint companions if their scattered light is not suppressed. We propose\ncoronagraphic examination of binary stars using an apodized pupil Lyot\ncoronagraph and a pair of actively-controlled image plane masks to suppress\nboth stars simultaneously. The performance is compared to imaging with a\nband-limited mask, a dual-mask Lyot coronagraph and with no coronagraph at all.\nAn imaging procedure and control system for the masks are also described.", "category": "astro-ph_IM" }, { "text": "PRAXIS: low thermal emission high efficiency OH suppressed fibre\n spectrograph: PRAXIS is a second-generation instrument that follows on from GNOSIS, which\nwas the first instrument using fibre Bragg gratings for OH background\nsuppression. The Bragg gratings reflect the NIR OH lines while being\ntransparent to light between the lines. This gives a much higher\nsignal-to-noise ratio at low resolution but also at higher resolutions by\nremoving the scattered wings of the OH lines. The specifications call for high\nthroughput and very low thermal and detector noise so that PRAXIS will remain\nsky noise limited. The optical train is made of fore-optics, an IFU, a fibre\nbundle, the Bragg grating unit, a second fibre bundle and a spectrograph.\nGNOSIS used the pre-existing IRIS2 spectrograph while PRAXIS will use a new\nspectrograph specifically designed for the fibre Bragg grating OH suppression\nand optimised for 1470 nm to 1700 nm (it can also be used in the 1090 nm to\n1260 nm band by changing the grating and refocussing). This results in a\nsignificantly higher transmission due to high efficiency coatings, a VPH\ngrating at low incident angle and low absorption glasses. The detector noise\nwill also be lower. Throughout the PRAXIS design special care was taken at\nevery step along the optical path to reduce thermal emission or stop it leaking\ninto the system. This made the spectrograph design challenging because\npractical constraints required that the detector and the spectrograph\nenclosures be physically separated by air at ambient temperature. At present,\nthe instrument uses the GNOSIS fibre Bragg grating OH suppression unit. We\nintend soon to use a new OH suppression unit based on multicore fibre Bragg\ngratings, which will allow an increased field of view per fibre. Theoretical\ncalculations show that the gain in interline sky background signal-to-noise\nratio over GNOSIS may very well be as high as 9 with the GNOSIS OH suppression\nunit and 17 with the multicore fibre OH suppression unit.", "category": "astro-ph_IM" }, { "text": "Deep Generative Models of Gravitational Waveforms via Conditional\n Autoencoder: We construct a few deep generative models of gravitational waveforms based on\nthe semi-supervised scheme of conditional autoencoders and their variational\nextensions. Once the training is done, we find that our best waveform model can\ngenerate the inspiral-merger waveforms of binary black hole coalescence with\nmore than $97\%$ average overlap matched filtering accuracy for the mass ratio\nbetween $1$ and $10$.
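The overlap figure of merit quoted in the waveform-model entry above is, in its simplest flat-noise form, a normalized inner product maximized over time shifts; a minimal sketch (no detector PSD weighting, circular shifts assumed):

```python
import numpy as np

def overlap(h1, h2):
    """Flat-PSD match between two equal-length waveforms, maximized over
    circular time shifts via FFT cross-correlation."""
    a = h1 / np.linalg.norm(h1)
    b = h2 / np.linalg.norm(h2)
    corr = np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), n=len(a))
    return np.abs(corr).max()

# Example: a generated waveform that is a shifted, rescaled copy of the
# target still scores an overlap of ~1.
t = np.linspace(0.0, 1.0, 4096)
target = np.sin(200 * t**2) * np.exp(-((t - 0.9) / 0.2) ** 2)
model = 1.1 * np.roll(target, 37)
print(overlap(target, model))   # ~1.0
```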
Moreover, generating a single waveform takes\nabout one millisecond, which is about $10$ to $100$ times faster than the EOBNR\nalgorithm running on the same computing facility. These models can also help to\nexplore the space of waveforms. That is, with mainly the low-mass-ratio\ntraining set, the resultant trained model is capable of generating a large\nnumber of accurate high-mass-ratio waveforms. This result implies that our\ngenerative model can speed up the waveform generation for the low-latency\nsearch of gravitational wave events. With the improvement of the accuracy in\nfuture work, the generative waveform model may also help to speed up the\nparameter estimation and can assist numerical relativity in generating\nwaveforms of higher mass ratio by progressively self-training.", "category": "astro-ph_IM" }, { "text": "Ultra-Low-Frequency Radio Astronomy Observations from a Selenocentric\n Orbit: first results of the Longjiang-2 experiment: This paper introduces the first results of observations with the\nUltra-Long-Wavelength (ULW) -- Low Frequency Interferometer and Spectrometer\n(LFIS) on board the selenocentric satellite Longjiang-2. We present a brief\ndescription of the satellite and focus on the LFIS payload. The in-orbit\ncommissioning confirmed a reliable operational status of the instrumentation.\nWe also present results of a transition observation, which offers unique\nmeasurements of several novel aspects. We estimate the radio frequency\ninterference (RFI) suppression required for such radio astronomy\ninstrumentation at lunar distances from Earth to be of the order of 80 dB. We\nanalyse a method of separating Earth- and satellite-originated RFI. It is found\nthat the RFI level at frequencies lower than a few MHz is below the receiver\nnoise floor.", "category": "astro-ph_IM" }, { "text": "Geopolitical Implications of a Successful SETI Program: We discuss the recent \"realpolitik\" analysis of Wisian & Traphagan (2020,\nW&T) of the potential geopolitical fallout of the success of SETI. They\nconclude that \"passive\" SETI involves an underexplored yet significant risk\nthat, in the event of a successful, passive detection of extraterrestrial\ntechnology, state-level actors could seek to gain an information monopoly on\ncommunications with an ETI. These attempts could lead to international conflict\nand potentially disastrous consequences. In response to this possibility, they\nargue that scientists and facilities engaged in SETI should preemptively engage\nin significant security protocols to forestall this risk.\n We find several flaws in their analysis. While we do not dispute that a\nrealpolitik response is possible, we uncover concerns with W&T's presentation\nof the realpolitik paradigm, and we argue that sufficient reason is not given\nto justify treating this potential scenario as action-guiding over other\ncandidate geopolitical responses. Furthermore, even if one assumes that a\nrealpolitik response is the most relevant geopolitical response, we show that\nit is highly unlikely that a nation could successfully monopolize communication\nwith ETI. Instead, the real threat that the authors identify is based on the\nperception by state actors that an information monopoly is likely. However, as\nwe show, this perception is based on an overly narrow contact scenario.\n Overall, we critique W&T's argument and resulting recommendations on\ntechnical, political, and ethical grounds.
Ultimately, we find that not only\nare W&T's recommendations unlikely to work, they may also precipitate the very\nills that they foresee. As an alternative, we recommend transparency and data\nsharing (which are consistent with currently accepted best practices), further\ndevelopment of post-detection protocols, and better education of policymakers\nin this space.", "category": "astro-ph_IM" }, { "text": "Multi-Band Feeds: A Design Study: Broadband antenna feeds are of particular interest to existing and future\nradio telescopes for multi-frequency studies of astronomical sources. Although\na 1:15 range in frequency is difficult to achieve, the well-known Eleven feed\ndesign offers a relatively uniform response over such a range, and reasonably\nwell-matched responses in E & H planes. However, given the severe Radio\nFrequency Interference in several bands over such a wide spectral range, one\ndesires to selectively reject the corresponding bands. With this in view, we\nhave explored the possibility of a multi-band feed antenna spanning a wide\nfrequency range, but having good response only in a number of pre-selected,\n(relatively) RFI-free windows for a particular telescope site. The designs we\nhave investigated use the basic configuration of pairs of dipoles as in the\nEleven feed, but use simple wire dipoles instead of the folded dipoles used in\nthe latter. Of the two designs we have investigated, we find that the one with\nfeed-lines constructed from co-axial lines shows good rejection of the unwanted\nparts of the spectrum and control over the locations of the resonant bands.", "category": "astro-ph_IM" }, { "text": "Potential for Observing Methane on Mars Using Earth-based Extremely\n Large Telescopes: The Red Planet has fascinated humans for millennia, especially for the last\nfew centuries, and particularly during the Space Age. The nagging suspicion of\nextant Martian life is both fed by and drives the many space missions to Mars,\nand recent detections of large, seasonal volumes of atmospheric methane have\nre-fuelled the discussion. Methane's strongest vibrational band (around 3.3\nmicron) lies in the lower half of astronomers' L band in the near infrared, and\nis readily detectable in the Martian atmosphere from ground-based spectrographs\nat high, dry locations such as Hawaii and Chile. However, the resolution of\nspecific spectral absorption lines that categorically identify methane is\ndisputed in the literature, as are their origins. With the proposed\nconstruction of extremely large telescopes operating in the optical/NIR, the\nquestion became: could these ELTs supplement, or even replace, space-based\ninstruments trained on Martian methane? A 2012 review of immediate-past,\npresent and future NIR spectrometers on Earth, in the air, in Earth orbit, in\nsolar orbit, in L2 orbit, in Mars orbit, and on Mars, revealed a wide range of\ncapabilities and limitations. Spatial, spectral, radiometric and temporal\nresolutions were all considered and found to be complex, inter-related and\nhighly instrument-specific. The Giant Magellan Telescope, the Thirty Meter\nTelescope and the European Extremely Large Telescope will each have at least\none L Band spectrometer supported by state-of-the-art adaptive optics and be\ncapable of extreme spatial, spectral and radiometric resolution.
Replicating\nobservations over time will provide a critical constraint on theoretical\nconsiderations about the biotic or abiotic origins of any detected methane. It\nis recommended that existing datasets be mined, that science cases for the ELTs\ninclude Martian methane, and that collaboration between science teams be\nenhanced.", "category": "astro-ph_IM" }, { "text": "A Bayesian approach to high fidelity interferometric calibration II:\n demonstration with simulated data: In a companion paper, we presented BayesCal, a mathematical formalism for\nmitigating sky-model incompleteness in interferometric calibration. In this\npaper, we demonstrate the use of BayesCal to calibrate the degenerate gain\nparameters of full-Stokes simulated observations with a HERA-like hexagonal\nclose-packed redundant array, for three assumed levels of completeness of the a\npriori known component of the calibration sky model. We compare the BayesCal\ncalibration solutions to those recovered by calibrating the degenerate gain\nparameters with only the a priori known component of the calibration sky model,\nboth with and without imposing physically motivated priors on the gain\namplitude solutions and for two choices of baseline length range over which to\ncalibrate. We find that BayesCal provides calibration solutions with up to four\norders of magnitude lower power in spurious gain amplitude fluctuations than\nthe calibration solutions derived for the same data set with the alternate\napproaches, and between $\sim10^7$ and $\sim10^{10}$ times smaller than in the\nmean degenerate gain amplitude on the full range of spectral scales accessible\nin the data. Additionally, we find that, in the scenarios modelled, only\nBayesCal yields calibration solutions of sufficiently high fidelity for\nunbiased recovery of the 21 cm power spectrum on large spectral scales\n($k_\parallel \lesssim 0.15~h\mathrm{Mpc}^{-1}$). In all other cases, in the\ncompleteness regimes studied, those scales are contaminated.", "category": "astro-ph_IM" }, { "text": "Combine User's Manual: {\sc Combine} is an add-on to {\sc SigSpec} and {\sc Cinderella}. A {\sc\nSigSpec} result file or a file generated by {\sc Cinderella} contains the\nsignificant sinusoidal signal components in a time series. In this file, {\sc\nCombine} checks each frequency in turn for being a linear combination of\npreviously examined frequencies. If this attempt fails, the corresponding\nfrequency is considered ``genuine''. Only genuine frequencies are used to form\nlinear combinations subsequently. A purely heuristic model is employed to\nassign a reliability to each linear combination and to decide whether to\nconsider a frequency genuine or a linear combination.", "category": "astro-ph_IM" }, { "text": "VAST: An ASKAP Survey for Variables and Slow Transients: The Australian Square Kilometre Array Pathfinder (ASKAP) will give us an\nunprecedented opportunity to investigate the transient sky at radio\nwavelengths. In this paper we present VAST, an ASKAP survey for Variables and\nSlow Transients. VAST will exploit the wide-field survey capabilities of ASKAP\nto enable the discovery and investigation of variable and transient phenomena\nfrom the local to the cosmological, including flare stars, intermittent\npulsars, X-ray binaries, magnetars, extreme scattering events, interstellar\nscintillation, radio supernovae and orphan afterglows of gamma-ray bursts.
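A toy version of the frequency-screening logic described in the Combine entry above; the tolerance, coefficient range and example frequencies are our illustrative choices, and Combine's heuristic reliability model is not reproduced:

```python
import itertools
import numpy as np

def as_combination(f, genuine, max_coeff=3, tol=1e-3):
    """Return integer coefficients c with f ~ sum(c_i * genuine_i),
    or None if no such small-integer combination exists."""
    if not genuine:
        return None
    rng = range(-max_coeff, max_coeff + 1)
    for coeffs in itertools.product(rng, repeat=len(genuine)):
        if any(coeffs) and abs(np.dot(coeffs, genuine) - f) < tol:
            return coeffs
    return None

genuine = []
for f in [5.31, 7.02, 12.33, 1.71]:    # frequencies, most significant first
    combo = as_combination(f, genuine)
    if combo is None:
        genuine.append(f)               # only genuine frequencies are reused
    else:
        print(f, "= combination", combo, "of", genuine)
print("genuine:", genuine)              # 12.33 and 1.71 are flagged
```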
In\naddition, it will allow us to probe unexplored regions of parameter space where\nnew classes of transient sources may be detected. In this paper we review the\nknown radio transient and variable populations and the current results from\nblind radio surveys. We outline a comprehensive program based on a multi-tiered\nsurvey strategy to characterise the radio transient sky through detection and\nmonitoring of transient and variable sources on the ASKAP imaging timescales of\nfive seconds and greater. We also present an analysis of the expected source\npopulations that we will be able to detect with VAST.", "category": "astro-ph_IM" }, { "text": "A Novel Hybrid Algorithm for Lucky Imaging: Lucky imaging is a high-resolution astronomical image recovery technique with\ntwo classic implementation algorithms, i.e., selecting, shifting and adding\nimages in image space, and selecting data and synthesizing images in Fourier\nspace. This paper proposes a novel lucky imaging algorithm in which, with the\nspace-domain and frequency-domain selection rates as a link, the two classic\nalgorithms are successfully combined, making each a proper subset of the novel\nhybrid algorithm. Experimental results show that, with the same experiment\ndataset and platform, the high-resolution image obtained by the proposed\nalgorithm is superior to those obtained by the two classic algorithms. This\npaper also proposes a new lucky image selection and storage scheme, which\ngreatly saves computer memory and enables the lucky imaging algorithm to be\nimplemented on a common desktop or laptop with small memory and to process\nastronomical images with more frames and larger sizes. In addition, through\nsimulation analysis, this paper discusses the binary star detection limits of\nthe novel lucky imaging algorithm and the traditional ones under different\natmospheric conditions.", "category": "astro-ph_IM" }, { "text": "A Statistical Framework for the Utilization of Simultaneous Pupil Plane\n and Focal Plane Telemetry for Exoplanet Imaging, Part I: Accounting for\n Aberrations in Multiple Planes: A new generation of telescopes with mirror diameters of 20 m or more, called\nextremely large telescopes (ELTs), has the potential to provide unprecedented\nimaging and spectroscopy of exo-planetary systems, if the difficulties in\nachieving the extremely high dynamic range required to differentiate the\nplanetary signal from the star can be overcome to a sufficient degree. Fully\nutilizing the potential of ELTs for exoplanet imaging will likely require\nsimultaneous and self-consistent determination of both the planetary image and\nthe unknown aberrations in multiple planes of the optical system, using\nstatistical inference based on the wavefront sensor and science camera data\nstreams. This approach promises to overcome the most important systematic\nerrors inherent in the various schemes based on differential imaging, such as\nADI and SDI. This paper is the first in a series on this subject, in which a\nformalism is established for the exoplanet imaging problem, setting the stage\nfor the statistical inference methods to follow in the future. Every effort has\nbeen made to be rigorous and complete, so that the validity of approximations\nto be made later can be assessed. Here, the polarimetric image is expressed in\nterms of aberrations in the various planes of a polarizing telescope with an\nadaptive optics system.
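A bare-bones version of the classic image-space arm (select, shift, add) named in the lucky-imaging entry above; the selection fraction and peak-brightness quality metric are our illustrative choices:

```python
import numpy as np

def lucky_select_shift_add(frames, keep_frac=0.1):
    """frames: (n_frames, ny, nx) short exposures of the same target.

    Keep the sharpest fraction (peak intensity as a crude Strehl proxy),
    re-centre each frame on its brightest pixel, and average."""
    scores = frames.max(axis=(1, 2))
    n_keep = max(1, int(len(frames) * keep_frac))
    best = frames[np.argsort(scores)[-n_keep:]]
    ny, nx = best.shape[1:]
    out = np.zeros((ny, nx))
    for f in best:
        py, px = np.unravel_index(f.argmax(), f.shape)
        out += np.roll(f, (ny // 2 - py, nx // 2 - px), axis=(0, 1))
    return out / n_keep
```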
Further, it is shown that current methods that utilize focal\nplane sensing to correct the speckle field, e.g., electric field conjugation,\nrely on the tacit assumption that aberrations on multiple optical surfaces can\nbe represented as an aberration on a single optical surface, ultimately\nlimiting their potential effectiveness for ground-based astronomy.", "category": "astro-ph_IM" }, { "text": "The GAMMA-400 gamma-ray telescope characteristics. Angular resolution\n and electrons/protons separation: The measurements of gamma-ray fluxes and of cosmic-ray electrons and\npositrons in the energy range from 100 MeV to several TeV, which will be\nimplemented by the specially designed GAMMA-400 gamma-ray telescope, address\nthe following broad range of science topics: searching for signatures of dark\nmatter, surveying the celestial sphere in order to study gamma-ray point and\nextended sources, measuring the energy spectra of Galactic and extragalactic\ndiffuse gamma-ray emission, studying gamma-ray bursts and gamma-ray emission\nfrom the Sun, as well as measuring with high precision the spectra of\nhigh-energy electrons and positrons, protons and nuclei up to the knee. To\naddress these scientific problems with new experimental data, the GAMMA-400\ngamma-ray telescope possesses unique physical characteristics compared with\nprevious and present experiments. For gamma-ray energies above 100 GeV,\nGAMMA-400 provides an energy resolution of ~1% and an angular resolution better\nthan 0.02 deg. This paper presents the methods developed to reconstruct the\ndirection of the incident gamma photon and investigates the capability of the\nGAMMA-400 gamma-ray telescope to distinguish electrons and positrons from\nprotons in cosmic rays.", "category": "astro-ph_IM" }, { "text": "Two phase mixtures in SPH - A new approach: We present a new approach to simulating mixtures of gas and dust in smoothed\nparticle hydrodynamics (SPH). We show how the two-fluid equations can be\nrewritten to describe a single-fluid 'mixture' moving with the barycentric\nvelocity, with each particle carrying a dust fraction. We show how this\nformulation can be implemented in SPH while preserving the conservation\nproperties (i.e. conservation of mass of each phase, momentum and energy). We\nalso show that the method solves two key issues with the two-fluid approach: it\navoids over-damping of the mixture when the drag is strong and prevents a\nproblem with dust particles becoming trapped below the resolution of the gas.\n We also show how the general one-fluid formulation can be simplified in the\nlimit of strong drag (i.e. small grains) to the usual SPH equations plus a\ndiffusion equation for the evolution of the dust fraction that can be evolved\nexplicitly and does not require any implicit timestepping. We present tests of\nthe simplified formulation showing that it is accurate in the small\ngrain/strong drag limit. We discuss some of the issues we have had to solve\nwhile developing this method and finally present a preliminary application to\ndust settling in protoplanetary discs.", "category": "astro-ph_IM" }, { "text": "GAPS: A New Cosmic Ray Anti-matter Experiment: The General AntiParticle Spectrometer (GAPS) is a balloon-borne instrument\ndesigned to detect cosmic-ray antimatter using the novel exotic atom technique,\nobviating the strong magnetic fields required by experiments like AMS, PAMELA,\nor BESS.
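The one-fluid variables in the SPH dust/gas entry above are built from the two phases as follows (standard definitions, consistent with that abstract's description of a mixture moving with the barycentric velocity):

```latex
% One-fluid ("mixture") variables for a gas/dust pair:
%   total density        rho = rho_g + rho_d ,
%   dust fraction        eps = rho_d / rho   (carried by each particle),
%   barycentric velocity v   = (rho_g v_g + rho_d v_d) / rho ,
% chosen so that the mass of each phase, the total momentum and the
% energy remain conserved in the SPH discretization.
\rho = \rho_{\rm g} + \rho_{\rm d}, \qquad \epsilon = \frac{\rho_{\rm d}}{\rho}, \qquad \mathbf{v} = \frac{\rho_{\rm g}\mathbf{v}_{\rm g} + \rho_{\rm d}\mathbf{v}_{\rm d}}{\rho}
```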
It will be sensitive to primary antideuterons with kinetic energies of\n$\approx0.05-0.2$ GeV/nucleon, providing some overlap with the previously\nmentioned experiments at the highest energies. For $3\times35$ day balloon\nflights, and standard classes of primary antideuteron propagation models, GAPS\nwill be sensitive to $m_{\mathrm{DM}}\approx10-100$ GeV c$^{-2}$ WIMPs with a\ndark-matter flux to astrophysical flux ratio approaching 100. This clean\nprimary channel is a key feature of GAPS and is crucial for a rare event\nsearch. Additionally, the antiproton spectrum will be extended with high\nstatistics measurements to cover the $0.07 \leq E \leq 0.25 $ GeV domain. For\n$E>0.2$ GeV GAPS data will be complementary to existing experiments, while\n$E<0.2$ GeV explores a new regime. The first flight is scheduled for late 2020\nin Antarctica. These proceedings describe the astrophysical processes and\nbackgrounds relevant to the dark matter search, briefly discuss detector\noperation, and report the construction progress made to date.", "category": "astro-ph_IM" }, { "text": "Earth-Moon VLBI project. Modeling of scientific outcome: Modern radio astrometry has reached the limit of resolution that is\ndetermined by the size of the Earth. The only way to overcome that limit is to\nplace radio telescopes beyond our planet. It is proposed to build an autonomous\nremote-controlled radio observatory on the Moon. Working together with the\nexisting radio telescopes on Earth in the VLBI mode, the new observatory will\nform interferometer baselines of up to 410000 km, enhancing the present\nastrometric and geodetic capabilities of VLBI. We perform numerical simulations\nof Earth-Moon VLBI observations operating simultaneously with the international\nVLBI network. It is shown that these observations will significantly improve\nthe precision of determination of the Moon's orbital motion, its libration\nangles, the ICRF, and relativistic parameters.", "category": "astro-ph_IM" }, { "text": "Uploading User-Defined Functions onto the AMIDAS Website: The AMIDAS website has been established as an online interactive tool for\nrunning simulations and analyzing data in direct Dark Matter detection\nexperiments. In the first phase of building the website, only some commonly\nused WIMP velocity distribution functions and elastic nuclear form factors were\nincluded in the AMIDAS code. To make the options for the velocity distribution\nas well as for the nuclear form factors more flexible, we have extended the\nAMIDAS code to accept user-uploaded files containing users' own functions. In\nthis article, I describe how to prepare files of user-defined functions for\nupload to the AMIDAS website. Some examples are also given.", "category": "astro-ph_IM" }, { "text": "Minimum-variance multitaper spectral estimation on the sphere: We develop a method to estimate the power spectrum of a stochastic process on\nthe sphere from data of limited geographical coverage. Our approach can be\ninterpreted either as estimating the global power spectrum of a stationary\nprocess when only a portion of the data are available for analysis, or\nestimating the power spectrum from local data under the assumption that the\ndata are locally stationary in a specified region.
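As an example of the "commonly used WIMP velocity distribution functions" mentioned in the AMIDAS entry above, the simple (non-truncated) Maxwellian speed distribution reads as follows; this is the standard halo-model form, with $v_0 \approx 220$ km/s the usual Solar-circle value, and a user-supplied alternative (e.g. truncated at the escape speed) would replace this functional form in an uploaded file:

```latex
% Isotropic Maxwellian halo: one-dimensional speed distribution,
% normalized so that the integral over v from 0 to infinity is 1:
%   f_1(v) = (4/sqrt(pi)) * (v^2 / v0^3) * exp( -v^2 / v0^2 ) .
f_1(v) = \frac{4}{\sqrt{\pi}}\,\frac{v^{2}}{v_{0}^{3}}\, e^{-v^{2}/v_{0}^{2}}
```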
Restricting a global\nfunction to a spatial subdomain -- whether by necessity or by design -- is a\nwindowing operation, and a convolution-like equation in the spectral domain\nrelates the expected value of the windowed power spectrum to the underlying\nglobal power spectrum and the known power spectrum of the localization window.\nThe best windows for the purpose of localized spectral analysis have their\nenergy concentrated in the region of interest while possessing as small an\neffective bandwidth as possible. Solving an optimization problem in the sense\nof Slepian (1960) yields a family of orthogonal windows of diminishing\nspatiospectral localization, the best concentrated of which we propose to use\nto form a weighted multitaper spectrum estimate in the sense of Thomson (1982).\nSuch an estimate is both more representative of the target region and of lower\nestimation variance when compared to estimates formed by any single\nbandlimited window. We describe how the weights applied to the individual\nspectral estimates in forming the multitaper estimate can be chosen such that\nthe variance of the estimate is minimized.", "category": "astro-ph_IM" }, { "text": "Noise reduction on single-shot images using an autoencoder: We present an application of autoencoders to the problem of noise reduction\nin single-shot astronomical images and explore its suitability for upcoming\nlarge-scale surveys. Autoencoders are a machine learning model that summarises\nan input to identify its key features, then from this knowledge predicts a\nrepresentation of a different input. The broad aim of our autoencoder model is\nto retain morphological information (e.g., non-parametric morphological\ninformation) from the survey data whilst simultaneously reducing the noise\ncontained in the image. We implement an autoencoder with convolutional and\nmaxpooling layers. We test our implementation on images from the Panoramic\nSurvey Telescope and Rapid Response System (Pan-STARRS) that contain varying\nlevels of noise and report how successful our autoencoder is by considering\nMean Squared Error (MSE), Structural Similarity Index (SSIM), the second-order\nmoment of the brightest 20 percent of the galaxy's flux (M20), and the Gini\ncoefficient, whilst noting how the results vary between the original images,\nstacked images, and noise-reduced images. We show that we are able to reduce\nnoise across many different observational targets whilst retaining each\ngalaxy's morphology, evaluating the metrics on a target-by-target basis. We\nestablish that this process achieves a positive result in a matter of minutes\nusing only a single-shot image, whereas other noise-reduction techniques\nrequire multiple survey images.", "category": "astro-ph_IM" }, { "text": "Numerical Error in Interplanetary Orbit Determination Software: The core of every orbit determination process is the comparison between the\nmeasured observables and their predicted values, computed using the adopted\nmathematical models, and the minimization, in a least-squares sense, of their\ndifferences, known as residuals. In interplanetary orbit determination, Doppler\nobservables, obtained by measuring the average frequency shift of the received\ncarrier signal over a certain count time, are compared against their predicted\nvalues, usually computed by differencing two round-trip light-times.
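Why differencing two nearly equal round-trip light-times is numerically delicate can be seen with a few lines of Python. This is a back-of-the-envelope sketch, not the paper's error model; the light-time magnitude and count time are assumed values chosen only to set the scale.

```python
import numpy as np

c = 299792458.0       # speed of light, m/s
light_time = 5000.0   # round-trip light-time, s (assumed, ~tens of light-minutes)
count_time = 60.0     # Doppler count time, s (assumed)

# Smallest representable float64 increment near this light-time value:
ulp = np.spacing(light_time)               # ~9e-13 s

# A Doppler observable built as ~c * (lt2 - lt1) / (2 * Tc) inherits
# this quantization as an irreducible round-off noise floor:
floor = c * ulp / (2 * count_time)
print(f"round-off floor ~ {floor * 1e3:.1e} mm/s")   # ~2e-3 mm/s
```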
This\nformulation is known to be sensitive to round-off errors, caused by the use of\nfinite arithmetic in the computation, giving rise to additional noise in the\nresiduals, called numerical noise, that degrades the accuracy of the orbit\ndetermination solution. This paper presents a mathematical model for the\nexpected numerical errors in two-way and three-way Doppler observables,\ncomputed using the differenced light-time formulation. The model was validated\nby comparing its predictions to the actual noise in the computed observables,\nobtained by NASA/Jet Propulsion Laboratory's Orbit Determination Program. The\nmodel proved to be accurate within $3 \times 10^{-3} \,\text{mm/s}$ at $60\n\,\text{s}$ integration time. It was then applied to the case studies of\nCassini's and Juno's nominal trajectories, proving that numerical errors can\nassume values up to $6 \times 10^{-2} \,\text{mm/s}$ at $60 \,\text{s}$\nintegration time, and consequently that they are an important noise source in\nDoppler-based orbit determination processes. Three alternative strategies\nare proposed and discussed in the paper to mitigate the effects of numerical\nnoise.", "category": "astro-ph_IM" }, { "text": "Efficient modeling of correlated noise II. A flexible noise model with\n fast and scalable methods: Correlated noise affects most astronomical datasets, and neglecting to account\nfor it can lead to spurious signal detections, especially in low\nsignal-to-noise conditions, which is often the context in which new discoveries\nare pursued. For instance, in the realm of exoplanet detection with radial\nvelocity time series, stellar variability can induce false detections. However,\na white-noise approximation is often used because accounting for correlated\nnoise when analyzing data implies a more complex analysis. Moreover, the\ncomputational cost can be prohibitive as it typically scales as the cube of the\ndataset size.\n For some restricted classes of correlated noise models, there are specific\nalgorithms that can be used to help bring down the computational cost. This\nimprovement in speed is particularly useful in the context of Gaussian process\nregression; however, it comes at the expense of the generality of the noise\nmodel.\n Here, we present the S+LEAF noise model, which allows us to account for a\nlarge class of correlated noise with a linear scaling of the computational\ncost with respect to the size of the dataset. The S+LEAF model includes, in\nparticular, mixtures of quasiperiodic kernels and calibration noise. This\nefficient modeling is made possible by a sparse representation of the\ncovariance matrix of the noise and the use of dedicated algorithms for matrix\ninversion, solving, determinant computation, etc.\n We applied the S+LEAF model to reanalyze the HARPS radial velocity time\nseries of HD 136352. We illustrate the flexibility of the S+LEAF model in\nhandling various sources of noise. We demonstrate the importance of taking\ncorrelated noise into account, and especially calibration noise, to correctly\nassess the significance of detected signals.\n We provide an open-source implementation of the S+LEAF model, available at\nhttps://gitlab.unige.ch/jean-baptiste.delisle/spleaf.", "category": "astro-ph_IM" }, { "text": "A flexible method for estimating luminosity functions via Kernel Density\n Estimation -- II.
Generalization and Python implementation: We propose a generalization of our previous KDE (kernel density estimation)\nmethod for estimating luminosity functions (LFs). This upgrade further extends\nthe application scope of our KDE method, making it a very flexible approach\nsuitable for most bivariate LF calculation problems. From the mathematical\npoint of view, the LF calculation can usually be abstracted as a density\nestimation problem in the bounded domain of\n$\{Z_1<z<Z_2,\ f>f_{\mathrm{lim}}(z) \}$. We use the transformation-reflection\nKDE method ($\hat{\phi}$) to solve the problem, and introduce an approximate\nmethod ($\hat{\phi}_{\mathrm{1}}$) based on one-dimensional KDE to deal with\nthe small-sample-size case. In practical applications, the different versions\nof LF estimators can be flexibly chosen according to the Kolmogorov-Smirnov\ntest criterion. Based on 200 simulated samples, we find that, whether or not\nthe sample is divided into redshift bins (especially in the latter case), our\nmethod performs significantly better than the traditional binning method\n$\hat{\phi}_{\mathrm{bin}}$. Moreover, as the sample size $n$ increases,\nour LF estimator converges to the true LF remarkably faster than\n$\hat{\phi}_{\mathrm{bin}}$. To implement our method, we have developed a\npublic, open-source Python Toolkit, called \texttt{kdeLF}. With the support of\n\texttt{kdeLF}, our KDE method is expected to be a competitive alternative to\nexisting nonparametric estimators, due to its high accuracy and excellent\nstability. \texttt{kdeLF} is available at\n\url{http://github.com/yuanzunli/kdeLF} with extensive documentation available\nat \url{http://kdelf.readthedocs.org/en/latest/}.", "category": "astro-ph_IM" }, { "text": "Cyclic Spectral Analysis of Radio Pulsars: Cyclic spectral analysis is a signal processing technique designed to deal\nwith stochastic signals whose statistics vary periodically with time. Pulsar\nradio emission is a textbook example of this signal class, known as\ncyclostationary signals. In this paper, we discuss the application of cyclic\nspectral analysis methods to pulsar data, and compare the results with the\ntraditional filterbank approaches used for almost all pulsar observations to\ndate. In contrast to standard methods, the cyclic spectrum preserves phase\ninformation of the radio signal. This feature allows us to determine the\nimpulse response of the interstellar medium and the intrinsic, unscattered\npulse profile directly from a single observation. We illustrate these new\nanalysis techniques using real data from an observation of the millisecond\npulsar B1937+21.", "category": "astro-ph_IM" }, { "text": "Entering into the Wide Field Adaptive Optics Era on Maunakea: As part of the National Science Foundation funded \"Gemini in the Era of\nMultiMessenger Astronomy\" (GEMMA) program, Gemini Observatory is developing\nGNAO, a widefield adaptive optics (AO) facility for Gemini-North on Maunakea,\nthe only 8m-class open-access telescope available to US astronomers in the\nnorthern hemisphere. GNAO will provide the user community with a queue-operated\nMulti-Conjugate AO (MCAO) system, enabling a wide range of innovative solar\nsystem, Galactic, and extragalactic science with a particular focus on\nsynergies with JWST in the area of time-domain astronomy.
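Looking back at the KDE luminosity-function abstract above: the boundary treatment it relies on is easy to miniaturize. The sketch below is a generic one-dimensional reflection KDE under assumed toy data and bandwidth; it is not the kdeLF implementation, which handles the full bivariate, redshift-dependent boundary.

```python
import numpy as np

def reflection_kde(x, grid, h, lower):
    """Gaussian KDE on [lower, inf): probability mass that a plain KDE
    would leak below the boundary is folded back by adding a mirrored
    copy of every data point about `lower`."""
    x = np.asarray(x)
    pts = np.concatenate([x, 2 * lower - x])     # data + reflected data
    u = (grid[:, None] - pts[None, :]) / h
    dens = np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))
    return np.where(grid >= lower, dens, 0.0)

# Toy sample truncated at a limit of 0.2 (assumed values throughout):
rng = np.random.default_rng(1)
sample = np.abs(rng.normal(1.0, 0.5, size=500)) + 0.2
grid = np.linspace(0.2, 3.0, 200)
phi_hat = reflection_kde(sample, grid, h=0.1, lower=0.2)
```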
The GNAO effort\nbuilds on institutional investment and experience with the more limited\nblock-scheduled Gemini Multi-Conjugate System (GeMS), commissioned at Gemini\nSouth in 2013. The project involves close partnerships with the community\nthrough the recently established Gemini AO Working Group and the GNAO Science\nTeam, as well as external instrument teams. The modular design of GNAO will\nenable a planned upgrade to a Ground Layer AO (GLAO) mode when combined with an\nAdaptive Secondary Mirror (ASM). By improving the natural seeing by an expected\nfactor of two, GLAO will vastly improve Gemini North's observing efficiency for\nseeing-limited instruments and strengthen its survey capabilities for\nmulti-messenger astronomy.", "category": "astro-ph_IM" }, { "text": "Improved Image Quality Over 10' Fields with the `Imaka Ground Layer\n Adaptive Optics Experiment: `Imaka is a ground layer adaptive optics (GLAO) demonstrator on the\nUniversity of Hawaii 2.2m telescope with a 24'x18' field-of-view, nearly an\norder of magnitude larger than previous AO instruments. In 15 nights of\nobserving with natural guide star asterisms ~16' in diameter, we measure median\nAO-off and AO-on empirical full-widths at half-maximum (FWHM) of 0''95 and\n0''64 in R-band, 0''81 and 0''48 in I-band, and 0''76 and 0''44 at 1 micron.\nThis factor of 1.5-1.7 reduction in the size of the point spread function (PSF)\nresults from correcting both the atmosphere and telescope tracking errors. The\nAO-on PSF is uniform out to field positions ~5' off-axis, with a typical\nstandard deviation in the FWHM of 0''018. Images exhibit variation in FWHM of\n4.5% across the field, which has been applied as a correction to the\naforementioned quantities. The AO-on PSF is also 10x more stable in time\ncompared to the AO-off PSF. In comparing the delivered image quality to proxy\nmeasurements, we find that in both AO-off and AO-on data, delivered image\nquality is correlated with `imaka's telemetry, with R-band correlation\ncoefficients of 0.68 and 0.70, respectively. At the same wavelength, the data\nare correlated to DIMM and MASS seeing with coefficients of 0.45 and 0.55. Our\nresults are an essential first step to implementing facility-class, wide-field\nGLAO on Maunakea telescopes, enabling new opportunities to study extended\nastronomical sources, such as deep galaxy fields, nearby galaxies or star\nclusters, at high angular resolution.", "category": "astro-ph_IM" }, { "text": "Collisionless Stellar Hydrodynamics as an Efficient Alternative to\n N-body Methods: For simulations that deal only with dark matter or stellar systems, the\nconventional N-body technique is fast, memory efficient, and relatively simple\nto implement. However, when including the effects of gas physics, mesh codes\nare at a distinct disadvantage compared to SPH. Whilst implementing the N-body\napproach into SPH codes is fairly trivial, the particle-mesh technique used in\nmesh codes to couple collisionless stars and dark matter to the gas on the\nmesh has a series of significant scientific and technical limitations. These\ninclude spurious entropy generation resulting from discreteness effects, poor\nload balancing, and increased communication overhead, which spoil the excellent\nscaling in massively parallel grid codes.\n We propose the use of the collisionless Boltzmann moment equations as a means\nto model collisionless material as a fluid on the mesh, implementing it into\nthe massively parallel FLASH AMR code.
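For orientation, the first two moments of the collisionless Boltzmann equation that such a scheme evolves take the familiar Jeans form (written schematically in our notation, not necessarily that of the paper):

\[
\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u_j)}{\partial x_j} = 0,
\qquad
\frac{\partial (\rho u_i)}{\partial t}
  + \frac{\partial (\rho u_i u_j + \Pi_{ij})}{\partial x_j}
  = -\rho \frac{\partial \Phi}{\partial x_i},
\]

where $\rho$ and $u_i$ are the density and mean streaming velocity of the collisionless component, $\Pi_{ij}$ is the velocity-dispersion tensor playing the role of an anisotropic pressure, and $\Phi$ is the gravitational potential. The system mirrors the hydrodynamic equations, which is what lets the same mesh solver and parallelisation be reused, but it requires a closure for $\Pi_{ij}$.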
This approach, which we term\n\"collisionless stellar hydrodynamics\", enables us to do away with the\nparticle-mesh approach. Since the parallelisation scheme is identical to that\nused for the hydrodynamics, it preserves the excellent scaling of the FLASH\ncode already demonstrated on petaflop machines.\n We find the classic hydrodynamic equations and Boltzmann moment equations can\nbe reconciled under specific conditions, allowing us to generate analytic\nsolutions for collisionless systems using conventional test problems. We\nconfirm the validity of our approach using a suite of demanding test problems,\nincluding the use of a modified Sod shock test. We conclude by demonstrating\nthe ability of our code to model complex phenomena by simulating the evolution\nof a spiral galaxy whose properties agree with those predicted by swing\namplification theory. (Abridged)", "category": "astro-ph_IM" }, { "text": "Preliminary Astrometric Results from Kepler: Although not designed as an astrometric instrument, Kepler is expected to\nproduce astrometric results of a quality appropriate to support many of the\nastrophysical investigations enabled by its photometric results. On the basis\nof data collected during the first few months of operation, the astrometric\nprecision for a single 30-minute measure appears to be better than 4\nmilliarcseconds (0.001 pixel). Solutions for stellar parallax and proper\nmotions await more observations, but the analysis of the astrometric residuals\nfrom a local solution in the vicinity of a star has already proved to be an\nimportant tool in the process of confirming the hypothesis of a planetary\ntransit.", "category": "astro-ph_IM" }, { "text": "Provenance of astronomical data: In the context of Open Science, provenance has become a decisive piece of\ninformation to provide along with astronomical data. Provenance is explicitly\ncited in the FAIR principles, which aim to make research data Findable,\nAccessible, Interoperable and Reusable. The IVOA Provenance Data Model,\npublished in 2020, puts in place the foundations for structuring and managing\ndetailed provenance information, from the acquisition of raw data to the\ndissemination of final products. The ambition is to provide for each\nastronomical dataset sufficiently fine-grained and detailed provenance\ninformation that end-users understand the quality, reliability and\ntrustworthiness of the data. This would ensure that the Reusable principle is\nrespected.", "category": "astro-ph_IM" }, { "text": "First Generation Heterodyne Instrumentation Concepts for the Atacama\n Large Aperture Submillimeter Telescope: (abridged) The Atacama Large Aperture Submillimeter Telescope (AtLAST)\nproject aims to build a 50-m-class submm telescope with $>1^\circ$ field of\nview, high in the Atacama Desert, providing fast and detailed mapping of the\nmm/submm sky. It will thus serve as a strong complement to existing facilities\nsuch as ALMA. ALMA's small field of view ($<15^{\prime\prime}$ at 350 GHz)\nlimits its mapping speed for large surveys. Instead, a single dish with a large\nfield of view such as the AtLAST concept can host large multi-element\ninstruments that can more efficiently map large portions of the sky.\nSmall-aperture survey instruments (typically $<3\times$ the size of an\ninterferometric array element) can mitigate this somewhat but lack the\nresolution for accurate recovery of source location and have small collecting\nareas.
Furthermore, small-aperture survey instruments do not sample sufficient\noverlap in spatial scales to allow a complete reconstruction of extended\nsources (i.e.\ the zero-spacing information is incomplete in $u,v$-space).\nThe heterodyne instrumentation for the AtLAST telescope that we consider here\nwill take advantage of extensive developments in the past decade improving the\nperformance and pixel count of heterodyne focal plane arrays. Such\ninstrumentation, with higher pixel counts, has already begun to take advantage\nof integration in the focal planes to increase packaging efficiency over\nsimply stacking modular mixer blocks in the focal plane. We extrapolate from\nthe current state of the art to present first-generation heterodyne design\nconcepts for AtLAST.", "category": "astro-ph_IM" }, { "text": "Improving Planet-Finding Spectrometers: Like the miniaturization of modern computers, next-generation radial velocity\ninstruments will be significantly smaller and more powerful than their\npredecessors.", "category": "astro-ph_IM" }, { "text": "An in-depth exploration of LAMOST Unknown spectra based on density\n clustering: LAMOST (Large Sky Area Multi-Object Fiber Spectroscopic Telescope) has\ncompleted the observation of nearly 20 million celestial objects, including a\nclass of spectra labeled `Unknown'. Besides low signal-to-noise ratio, these\nspectra often show some anomalous features that do not work well with current\ntemplates. In this paper, a total of 638,000 `Unknown' spectra from LAMOST DR5\nare selected, and an unsupervised analytical framework for `Unknown' spectra,\nnamed SA-Frame (Spectra Analysis-Frame), is provided to explore their origins\nfrom different perspectives. The SA-Frame is composed of three parts:\nNAPC-Spec clustering, characterization and origin analysis. First, NAPC-Spec\n(Nonparametric density clustering algorithm for spectra) characterizes\ndifferent features in the `Unknown' spectra by adjusting the influence space\nand divergence distance to minimize the effects of noise and high\ndimensionality, resulting in 13 types. Second, characteristic extraction and\nrepresentation of the clustering results are carried out based on spectral\nlines and continuum, where these 13 types are characterized as regular spectra\nwith low S/Ns, splicing problems, suspected galactic emission signals,\ncontamination from city light, and an un-gregarious type, respectively. Third,\na preliminary analysis of their origins is made from the characteristics of\nthe observational targets, contamination from the sky, and the working status\nof the instruments. These results would be valuable for improving the overall\ndata quality of large-scale spectral surveys.", "category": "astro-ph_IM" }, { "text": "Expectation Maximization for Hard X-ray Count Modulation Profiles: This paper is concerned with the image reconstruction problem when the\nmeasured data are solar hard X-ray modulation profiles obtained from the\nReuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Our\ngoal is to demonstrate that a statistical iterative method classically applied\nto the image deconvolution problem is very effective when utilized for the\nanalysis of count modulation profiles in solar hard X-ray imaging based on\nRotating Modulation Collimators. The algorithm described in this paper solves\nthe maximum likelihood problem iteratively, encoding a positivity constraint\ninto the iterative optimization scheme.
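The iteration named here is, at its core, the classical multiplicative EM (Richardson-Lucy) update for Poisson data, which keeps the image non-negative by construction. A minimal generic Python sketch (the toy response matrix and sizes are assumptions; this is not the RHESSI code):

```python
import numpy as np

def em_poisson(y, A, n_iter=100):
    """EM for y ~ Poisson(A @ x) with x >= 0: the multiplicative update
    preserves positivity and increases the Poisson likelihood monotonically."""
    x = np.full(A.shape[1], y.sum() / A.shape[1])   # flat positive start
    norm = A.sum(axis=0)                            # A^T 1
    for _ in range(n_iter):
        model = A @ x
        x *= (A.T @ (y / np.maximum(model, 1e-30))) / norm
    return x

# Toy problem: 64 modulation-profile bins, 16 image pixels (assumed sizes).
rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(64, 16))
y = rng.poisson(A @ rng.uniform(0.0, 10.0, size=16))
x_hat = em_poisson(y, A)
```

In practice the fixed n_iter above is exactly where the abstract's stopping rule enters: halting the iteration early is what regularizes the solution.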
The result is therefore a classical\nExpectation Maximization method, this time applied not to an image\ndeconvolution problem but to image reconstruction from count modulation\nprofiles. The technical reason that makes our implementation particularly\neffective in this application is the use of a very reliable stopping rule,\nwhich is able to regularize the solution while providing, at the same time, a\nvery satisfactory Cash statistic (C-statistic). The method is applied both to\nreproduce synthetic flaring configurations and to reconstruct images from\nexperimental data corresponding to three real events. In this second case, the\nperformance of Expectation Maximization, when compared to Pixon image\nreconstruction, shows comparable accuracy and a notably reduced computational\nburden; when compared to CLEAN, it shows better fidelity with respect to the\nmeasurements with comparable computational effectiveness. If optimally\nstopped, Expectation Maximization represents a very reliable method for image\nreconstruction in the RHESSI context when count modulation profiles are used\nas input data.", "category": "astro-ph_IM" }, { "text": "LUCI: A Python package for SITELLE spectral analysis: High-resolution optical integral field units (IFUs) are rapidly expanding our\nknowledge of extragalactic emission nebulae in galaxies and galaxy clusters. By\nstudying the spectra of these objects -- which include classic HII regions,\nsupernova remnants, planetary nebulae, and cluster filaments -- we are able to\nconstrain their kinematics (velocity and velocity dispersion). In conjunction\nwith additional tools, such as the BPT diagram, we can further classify\nemission regions based on strong emission-line flux ratios. LUCI is a\nsimple-to-use Python module intended to facilitate the rapid analysis of IFU\nspectra. LUCI does this by integrating well-developed pre-existing Python tools\nsuch as astropy and scipy with new machine learning tools for spectral analysis\n(Rhea et al. 2020). Furthermore, LUCI provides several easy-to-use tools to\naccess and fit SITELLE data cubes.", "category": "astro-ph_IM" }, { "text": "Automatic Classification of Variable Stars in Catalogs with missing data: We present an automatic classification method for astronomical catalogs with\nmissing data. We use Bayesian networks, a probabilistic graphical model that\nallows us to perform inference to predict missing values given observed data\nand dependency relationships between variables. To learn a Bayesian network\nfrom incomplete data, we use an iterative algorithm that utilises sampling\nmethods and expectation maximization to estimate the distributions and\nprobabilistic dependencies of variables from data with missing values. To test\nour model we use three catalogs with missing data (SAGE, 2MASS and UBVI) and\none complete catalog (MACHO). We examine how classification accuracy changes\nwhen information from missing-data catalogs is included, how our method\ncompares to traditional missing-data approaches, and at what computational\ncost. Integrating these catalogs with missing data, we find that\nclassification of variable objects improves by a few percent, and by 15% for\nquasar detection, while keeping the computational cost the same.", "category": "astro-ph_IM" }, { "text": "Apodized Lyot Coronagraph for VLT-SPHERE: Laboratory tests and\n performances of a first prototype in the visible: We present some of the High Dynamic Range Imaging activities developed around\nthe coronagraphic test-bench of the Laboratoire A. H.
Fizeau (Nice). They\nconcern research and development of an Apodized Lyot Coronagraph (ALC) for the\nVLT-SPHERE instrument and experimental results from our testbed working in the\nvisible domain. We determined by numerical simulations the specifications of\nthe apodizing filter and searched for the best technological process to\nmanufacture it. We present the results of the experimental tests on the first\napodizer prototype in the visible and the resulting ALC nulling performance.\nThe tests particularly concern the apodizer characterization (average radial\ntransmission profile, global reflectivity and transmissivity in the visible),\nthe ALC nulling performance compared with expectations, and the sensitivity of\nthe ALC performance to misalignments of its components.", "category": "astro-ph_IM" }, { "text": "Automated Adaptive Optics: Large area surveys will dominate the forthcoming decades of astronomy and\ntheir success requires characterizing thousands of discoveries through\nadditional observations at higher spatial or spectral resolution, and at\ncomplementary cadences or periods. Only the full automation of adaptive optics\nsystems will enable high-acuity, high-sensitivity follow-up observations of\nseveral tens of thousands of these objects per year, maximizing on-sky time.\nAutomation will also enable rapid response to target-of-opportunity events\nwithin minutes, minimizing the time between discovery and characterization.\n In June 2012, we demonstrated the first fully automated operation of an\nastronomical adaptive optics system by observing 125 objects in succession with\nthe Robo-AO system. Efficiency has increased ever since, with a typical night\ncomprising 200-250 automated observations at the visible diffraction limit. By\nobserving tens of thousands of targets in the largest-ever adaptive-optics\nsurveys, Robo-AO has demonstrated the ability to address the follow-up needs of\ncurrent and future large astronomical surveys.", "category": "astro-ph_IM" }, { "text": "CREDO project: The Cosmic-Ray Extremely Distributed Observatory (CREDO) is a project created\na few years ago at the Institute of Nuclear Physics PAS in Krak\'ow and\ndedicated to global studies of extremely extended cosmic-ray phenomena. The\nmain reason for creating such a project was that cosmic-ray ensembles (CRE)\nare beyond the capabilities of existing detectors and observatories. Until\nnow, cosmic-ray studies, even in major observatories, have been limited to the\nrecording and analysis of individual air showers; therefore, ensembles of\ncosmic rays, which may spread over a significant fraction of the Earth, were\nneither recorded nor analyzed. In this paper the status and perspectives of\nthe CREDO project are presented.", "category": "astro-ph_IM" }, { "text": "Speckle Space-Time Covariance in High-Contrast Imaging: We introduce a new framework for point-spread function (PSF) subtraction\nbased on the spatio-temporal variation of speckle noise in high-contrast\nimaging data where the sampling timescale is faster than the speckle evolution\ntimescale. One way that space-time covariance arises in the pupil is as\natmospheric layers translate across the telescope aperture and create small,\ntime-varying perturbations in the phase of the incoming wavefront. The\npropagation of this field to the focal plane preserves some of that space-time\ncovariance.
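The way such covariance can be exploited is previewed by the following minimal numpy sketch, a Karhunen-Loève decomposition of an image sequence with the leading modes projected out (toy data and mode count are assumptions, not the authors' pipeline), which the abstract goes on to describe:

```python
import numpy as np

def klip_sequence(cube, n_modes):
    """Remove the strongest temporally correlated structure from an
    image sequence by projecting out its leading KL modes.

    cube: (n_frames, ny, nx) stack sampled faster than the speckles evolve.
    """
    n_frames = cube.shape[0]
    X = cube.reshape(n_frames, -1)       # one flattened image per row
    X = X - X.mean(axis=0)               # remove the static PSF component
    C = X @ X.T / n_frames               # frame-by-frame covariance
    w, V = np.linalg.eigh(C)             # eigenvalues in ascending order
    Z = V[:, -n_modes:].T @ X            # leading KL eigenimages
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)
    return (X - (X @ Z.T) @ Z).reshape(cube.shape)

# Toy sequence: 50 frames of 32x32 correlated noise (assumed shape).
rng = np.random.default_rng(3)
cube = rng.normal(size=(50, 32, 32)).cumsum(axis=0) / 10.0
residual = klip_sequence(cube, n_modes=5)
```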
To utilize this covariance, our new approach uses a\nKarhunen-Lo\'eve transform on an image sequence, as opposed to a set of single\nreference images as in previous applications of Karhunen-Lo\'eve Image\nProcessing (KLIP) for high-contrast imaging. With the recent development of\nphoton-counting detectors, such as microwave kinetic inductance detectors\n(MKIDs), this technique now has the potential to improve contrast when used as\na post-processing step. Preliminary testing on simulated data shows this\ntechnique can improve contrast by at least 10-20% relative to the original\nimage, with significant potential for further improvement. For certain choices\nof parameters, this algorithm may provide larger contrast gains than\nspatial-only KLIP.", "category": "astro-ph_IM" }, { "text": "Finding faint HI structure in and around galaxies: scraping the barrel: Soon-to-be-operational HI survey instruments such as APERTIF and ASKAP will\nproduce large datasets. These surveys will provide information about the HI in\nand around hundreds of galaxies with a typical signal-to-noise ratio of $\sim$\n10 in the inner regions and $\sim$ 1 in the outer regions. In addition, such\nsurveys will make it possible to probe faint HI structures, typically located\nin the vicinity of galaxies, such as extra-planar gas, tails and filaments.\nThese structures are crucial for understanding galaxy evolution, particularly\nwhen they are studied in relation to the local environment. Our aim is to find\noptimized kernels for the discovery of faint and morphologically complex HI\nstructures. Therefore, using HI data from a variety of galaxies, we explore\nstate-of-the-art filtering algorithms. We show that the intensity-driven\ngradient filter, due to its adaptive characteristics, is the optimal choice. In\nfact, this filter requires only minimal tuning of the input parameters to\nenhance the signal-to-noise ratio of faint components. In addition, it does not\ndegrade the resolution of the high signal-to-noise component of a source. The\nfiltering process must be fast and be embedded in an interactive visualization\ntool in order to support fast inspection of a large number of sources. To\nachieve such interactive exploration, we implemented a multi-core CPU (OpenMP)\nand a GPU (OpenGL) version of this filter in a 3D visualization environment\n($\tt{SlicerAstro}$).", "category": "astro-ph_IM" }, { "text": "The NRL Program in X-ray Navigation: This chapter describes the development of X-ray Navigation at the Naval\nResearch Laboratory (NRL) within its astrophysics research programs. The\nprospects for applications emerged from early discoveries of X-ray source\nclasses and their properties. Starting around 1988, some NRL X-ray astronomy\nprograms included navigation as one of the motivations. The USA experiment\n(1999) was the first flight payload with an explicit X-ray navigation theme.\nSubsequently, NRL has continued to work in this area through participation in\nDARPA and NASA programs. Throughout, the general concept of X-ray navigation\n(XRNAV) has been broad enough to encompass many different uses of X-ray source\nobservations for attitude determination, position determination, and\ntimekeeping.
Pulsar-based X-ray navigation (XNAV) is a special case.", "category": "astro-ph_IM" }, { "text": "The nature of the near-infrared interline sky background using fibre\n Bragg grating OH suppression: We analyse the near-infrared interline sky background, OH and O2 emission in\n19 hours of H band observations with the GNOSIS OH suppression unit and the\nIRIS2 spectrograph at the 3.9-m AAT. We find that the temporal behaviour of OH\nemission is best described by a gradual decrease during the first half of the\nnight followed by a gradual increase during the second half of the night,\nfollowing the behaviour of the solar elevation angle. We measure the interline\nbackground at 1.520 microns, where the instrumental thermal background is very\nlow, and study its variation with zenith distance, time after sunset, ecliptic\nlatitude, lunar zenith angle and lunar distance to determine the presence of\nnon-thermal atmospheric emission, zodiacal scattered light and scattered\nmoonlight. Zodiacal scattered light is too faint to be detected in the summed\nobservations. Scattered moonlight due to Mie scattering by atmospheric aerosols\nis seen at small lunar distances (< 11 deg), but is otherwise too faint to\ndetect. Except at very small lunar distances, the interline background at a\nresolving power of R~2400 when using OH suppression fibres is dominated by a\nnon-thermal atmospheric source with a temporal behaviour that resembles\natmospheric OH emission, suggesting that the interline background contains\ninstrumentally scattered OH. However, the interline background dims more\nrapidly than OH early in the night, suggesting contributions from rapidly\ndimming molecules. The absolute interline background is 560 +/- 120 photons\ns^-1 m^-2 micron^-1 arcsec^-2 under dark conditions. This value is similar to\nprevious measurements without OH suppression, suggesting that non-suppressed\natmospheric emission is responsible for the interline background. Future OH\nsuppression fibre designs may address this by suppressing more sky lines,\nusing more accurate sky-line measurements taken from high-resolution spectra.", "category": "astro-ph_IM" }, { "text": "A Gaussian process cross-correlation approach to time delay estimation\n in active galactic nuclei: We present a probabilistic cross-correlation approach to estimate time delays\nin the context of reverberation mapping (RM) of Active Galactic Nuclei (AGN).\nWe reformulate the traditional interpolated cross-correlation method as a\nstatistically principled model that delivers a posterior distribution for the\ndelay. The method employs Gaussian processes as a model for observed AGN light\ncurves. We describe the mathematical formalism and demonstrate the new approach\nusing both simulated light curves and available RM observations. The proposed\nmethod delivers a posterior distribution for the delay that accounts for\nobservational noise and the non-uniform sampling of the light curves. This\nfeature allows us to fully quantify its uncertainty and propagate it to\nsubsequent calculations of dependent physical quantities, e.g., black hole\nmasses. It delivers out-of-sample predictions, which enables us to subject it\nto model selection, and it can calculate the joint posterior delay for more\nthan two light curves. Because of the numerous advantages of our reformulation\nand the simplicity of its application, we anticipate that our method will find\nfavour not only in the specialised community of RM, but in all fields where\ncross-correlation analysis is performed.
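A stripped-down version of the idea (two light curves modelled as draws from one Gaussian process, with the delay scanned over a grid and scored by the GP marginal likelihood) fits in a few dozen lines of numpy. Everything below (kernel, hyperparameters, grid) is an assumption for illustration; it is not the GPCC package, which is written in Julia and also infers amplitudes and hyperparameters rather than fixing them.

```python
import numpy as np

def log_marginal(t, y, sigma, ell, amp):
    """GP log marginal likelihood with a squared-exponential kernel."""
    d = t[:, None] - t[None, :]
    K = amp**2 * np.exp(-0.5 * (d / ell) ** 2) + np.diag(sigma**2)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))

def delay_posterior(t1, y1, s1, t2, y2, s2, taus, ell=20.0, amp=1.0):
    """Posterior over the delay tau (flat prior): shift the second light
    curve by tau and treat both as samples of a single latent GP."""
    y = np.concatenate([y1, y2])
    y = y - y.mean()
    s = np.concatenate([s1, s2])
    logp = np.array([log_marginal(np.concatenate([t1, t2 - tau]), y, s,
                                  ell, amp) for tau in taus])
    p = np.exp(logp - logp.max())
    return p / p.sum()
```

The posterior returned over the grid of delays can then be propagated directly into downstream quantities such as black-hole masses, which is the feature the abstract emphasizes.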
We provide the algorithms and examples\nof their application as part of our Julia GPCC package.", "category": "astro-ph_IM" }, { "text": "Near-UV Spectroscopy with the VLT: The 39-meter European Extremely Large Telescope (E-ELT) is expected to have\nvery low throughput in the blue part of the visible spectrum. Because of that,\na blue-optimised spectrograph at the 8-meter Very Large Telescope could\npotentially be competitive against the E-ELT at wavelengths shorter than 400\nnm. A concept study for such an instrument was concluded in 2012. This would be\na high-throughput, medium-resolution (R $\sim$ 20\,000) spectrograph, operating\nbetween 300 and 400 nm. It is currently expected that construction of this\ninstrument will start in the next few years. In this contribution, I present a\nsummary of the instrument concept and of some of the possible Galactic and\nextragalactic science cases that motivate such a spectrograph.", "category": "astro-ph_IM" }, { "text": "Sub-kilometre scale ionospheric studies at the SKA-Low site, using MWA\n extended baselines: The ambitious scientific goals of the SKA require a matching capability for\ncalibration of instrumental and atmospheric propagation contributions as\nfunctions of time, frequency and position. The development of novel calibration\nalgorithms to meet these requirements is an active field of research. In this\nwork we aim to characterize these, focusing on the spatial and temporal\nstructure scales of the ionospheric effects; ultimately, these provide the\nguidelines for designing the optimum calibration strategy. We used empirical\nionospheric measurements at the site where the SKA-Low will be built, using\nMWA Phase-2 Extended baseline observations and the station-based Low-frequency\nExcision of Atmosphere in Parallel (LEAP) calibration algorithm. We have done\nthis via direct regression analysis of the ionospheric screens and by forming\nthe full and detrended structure functions. We found that 50% of the screens\nshow significant non-linear structures at scales >0.6 km that dominate at\n>2 km, and 1% show significant sub-minute temporal changes, provided that there\nis sufficient sensitivity. Even at the moderate sensitivity and baseline\nlengths of MWA, non-linear corrections are required at 88 MHz during moderate\nweather and at 154 MHz during poor weather, or for high-SNR measurements.\nTherefore we predict that improvements will come from correcting for\nhigher-order defocusing effects in observations with MWA Phase-2, and further\nwith new developments in MWA Phase-3. Because of the giant leap in\nsensitivity, the correction for complex ionospheric structures will be\nmandatory on SKA-Low, for both imaging and tied-array beam formation.", "category": "astro-ph_IM" }, { "text": "Direct Imaging in the Habitable Zone and the Problem of Orbital Motion: High contrast imaging searches for exoplanets have been conducted on 2.4-10 m\ntelescopes, typically at H band (1.6 microns), using exposure times of ~1 hr\nto search for planets with semi-major axes of > ~10 AU. We are beginning to\nplan for surveys using extreme-AO systems on the next generation of 30-meter\nclass telescopes, where we hope to begin probing the habitable zones (HZs) of\nnearby stars. Here we highlight a heretofore ignorable problem in direct\nimaging: planets orbit their stars.
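Before the abstract continues, the scale of the problem is worth quantifying. A rough Python estimate under assumed values (a 1 AU circular orbit around a solar-mass star at 5 pc, observed with a 30-m aperture at H band) compares the sky motion accumulated during a long exposure with the diffraction limit:

```python
import numpy as np

# Assumed, illustrative parameters (not taken from the paper):
a_AU, M_star = 1.0, 1.0       # circular HZ orbit; stellar mass in M_sun
d_pc = 5.0                    # distance to the star
t_exp_hr = 20.0               # accumulated exposure time
D_m, lam_um = 30.0, 1.6       # aperture diameter and wavelength

v_orb = 29.8 * np.sqrt(M_star / a_AU)          # orbital speed, km/s
arc_AU = v_orb * t_exp_hr * 3600.0 / 1.496e8   # path length, AU
motion_mas = arc_AU / d_pc * 1e3               # sky motion, mas
lam_over_D_mas = 1e-6 * lam_um / D_m * 206265e3

print(f"motion ~ {motion_mas:.1f} mas; lambda/D ~ {lam_over_D_mas:.1f} mas")
# ~3 mas of motion against an ~11 mas diffraction limit: not negligible.
```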
Under the parameters of current surveys,\norbital motion is negligible over the duration of a typical observation.\nHowever, this motion is not negligible when using large-diameter telescopes to\nobserve at relatively close stellar distances (1-10 pc), over the long exposure\ntimes (10-20 hrs) necessary for direct detection of older planets in the HZ. We\nshow that this motion will limit our achievable signal-to-noise ratio and\ndegrade observational completeness. Even on current 8-m class telescopes,\norbital motion will need to be accounted for in an attempt to detect HZ planets\naround the nearest sun-like stars alpha Cen A & B, a binary system now known to\nharbor at least one planet. Here we derive some basic tools for analyzing this\nproblem, and ultimately show that the prospects are good for de-orbiting a\nseries of shorter exposures to correct for orbital motion.", "category": "astro-ph_IM" }, { "text": "Neutrino direction and energy resolution of Askaryan detectors: Detection of high-energy neutrinos via the radio technique allows for an\nexploration of the neutrino energy range from $\sim10^{16}$ eV to\n$\sim10^{20}$ eV with unprecedented precision. These Askaryan detectors have\nmatured in two pilot arrays (ARA and ARIANNA), and the construction of a\nlarge-scale detector is actively discussed in the community. In this\ncontribution, we present reconstruction techniques to determine the neutrino\ndirection and energy from the observed few-nanosecond radio flashes, and\nquantify the resolution of one such detector. The reconstruction of the\nneutrino direction requires a precise measurement of both the signal direction\nand the signal polarization. The reconstruction of the neutrino energy\nrequires, in addition, the measurement of the vertex distance, obtainable from\nthe time difference of two signal paths through the ice, and the viewing angle\nof the in-ice shower via the frequency spectrum. We discuss the required\nalgorithms and quantify the resolution using a detailed Monte Carlo simulation\nstudy.", "category": "astro-ph_IM" }, { "text": "Radon backgrounds in the DEAP-1 liquid-argon-based Dark Matter detector: The DEAP-1 \SI{7}{kg} single-phase liquid argon scintillation detector was\noperated underground at SNOLAB in order to test the techniques and measure the\nbackgrounds inherent to single-phase detection, in support of the\n\mbox{DEAP-3600} Dark Matter detector. Backgrounds in DEAP are controlled\nthrough material selection, construction techniques, pulse shape discrimination\nand event reconstruction. This report details the analysis of background events\nobserved in three iterations of the DEAP-1 detector, and the measures taken to\nreduce them.\n The $^{222}$Rn decay rate in the liquid argon was measured to be between 16\nand \SI{26}{\micro\becquerel\per\kilogram}. We found that the background\nspectrum near the region of interest for Dark Matter detection in the DEAP-1\ndetector can be described considering events from three sources: radon\ndaughters decaying on the surface of the active volume, the expected rate of\nelectromagnetic events misidentified as nuclear recoils due to inefficiencies\nin the pulse shape discrimination, and leakage of events from outside the\nfiducial volume due to imperfect position reconstruction.
These backgrounds\nstatistically account for all observed events, and they will be strongly\nreduced in the DEAP-3600 detector due to its higher light yield and simpler\ngeometry.", "category": "astro-ph_IM" }, { "text": "AtmoHEAD 2013 workshop / Atmospheric Monitoring for High-Energy\n Astroparticle Detectors: A 3-day international workshop on atmospheric monitoring and calibration for\nhigh-energy astroparticle detectors, with a view towards next-generation\nfacilities. The atmosphere is an integral component of many high-energy\nastroparticle detectors. Imaging atmospheric Cherenkov telescopes (IACTs) and\ncosmic-ray extensive air shower (EAS) detectors are the two instrument classes\ndriving the rapidly evolving fields of very-high- and ultra-high-energy\nastrophysics. In these instruments, the atmosphere is used as a giant\ncalorimeter where cosmic rays and gamma rays deposit their energy and initiate\nEASs; it is also the medium through which the resulting Cherenkov light\npropagates. Uncertainties in real-time atmospheric conditions and in the fixed\natmospheric models typically dominate all other systematic errors. With the\nimproved sensitivity of upgraded IACTs such as H.E.S.S.-II and MAGIC-II and\nfuture facilities like the Cherenkov Telescope Array (CTA) and JEM-EUSO,\nstatistical uncertainties are expected to be significantly reduced, leaving\nthe atmosphere as the limiting factor in the determination of astroparticle\nspectra. Varying weather conditions necessitate the development of suitable\natmospheric monitoring to be integrated in the overall instrument calibration,\nincluding Monte Carlo simulations. With expertise distributed across multiple\ncollaborations and scientific domains, an interdisciplinary workshop is being\nconvened to advance progress on this critical and timely topic.", "category": "astro-ph_IM" }, { "text": "Dark Ages Radio Explorer Mission: Probing the Cosmic Dawn: The period between the creation of the cosmic microwave background at a\nredshift of ~1000 and the formation of the first stars and black holes that\nre-ionize the intergalactic medium at redshifts of 10-20 is currently\nunobservable. The baryonic component of the universe during this period is\nalmost entirely neutral hydrogen, which falls into local regions of higher dark\nmatter density. This seeds the formation of large-scale structures including\nthe cosmic web that we see today in the filamentary distribution of galaxies\nand clusters of galaxies. The only detectable signal from these dark ages is\nthe 21-cm spectral line of hydrogen, redshifted down to frequencies of\napproximately 10-100 MHz. Space-based observations of this signal will allow us\nto determine the formation epoch and physics of the first sources of ionizing\nradiation, and potentially detect evidence for the decay of dark matter\nparticles. JPL is developing deployable low frequency antenna and receiver\nprototypes to enable both all-sky spectral measurements of neutral hydrogen and\nultimately to map the spatial distribution of the signal as a function of\nredshift. Such observations must be done from space because of Earth's\nionosphere and ubiquitous radio interference. A specific application of these\ntechnologies is the Dark Ages Radio Explorer (DARE) mission. This small\nExplorer class mission is designed to measure the sky-averaged hydrogen signal\nfrom the shielded region above the far side of the Moon.
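The frequency band quoted for these observations follows directly from redshifting the 21-cm rest frequency; a short Python check (the redshift endpoints match those mentioned in the abstract, the arithmetic is elementary):

```python
NU_REST_MHZ = 1420.406  # rest frequency of the neutral-hydrogen 21-cm line

def observed_mhz(z):
    """Observed frequency of the 21-cm line emitted at redshift z."""
    return NU_REST_MHZ / (1.0 + z)

print(observed_mhz(10), observed_mhz(20))   # ~129 and ~68 MHz (re-ionization)
print(observed_mhz(140))                    # ~10 MHz, the deep dark ages
```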
These data will\ncomplement ground-based radio observations of the final stages of intergalactic\nre-ionization at higher frequencies. DARE will also serve as a scientific\nprecursor for space-based interferometry missions to image the distribution of\nhydrogen during the cosmic dark ages.", "category": "astro-ph_IM" }, { "text": "A method to develop mission critical data processing systems for\n satellite based instruments. The spinning mode case: Modern satellite-based experiments are often very complex real-time systems,\ncomposed of flight and ground segments, that have challenging resource-related\nconstraints in terms of size, weight, power, requirements for real-time\nresponse, fault tolerance, and specialized input/output hardware-software, and\nthey must be certified to high levels of assurance. Hardware-software data\nprocessing systems have to be responsive to system degradation and to changes\nin the data acquisition modes, and actions have to be taken to change the\norganization of the mission operations. A big research and development effort\nby a team composed of scientists and technologists can lead to software\nsystems able to optimize the hardware to reach very high levels of\nperformance, or to push degraded hardware to maintain satisfactory\ncapabilities. We show real-life examples describing a system processing the\ndata of an X-ray detector on a satellite-based mission in spinning mode.", "category": "astro-ph_IM" }, { "text": "Bayesian modelling of scattered light in the LIGO interferometers: Excess noise from scattered light poses a persistent challenge in the\nanalysis of data from gravitational wave detectors such as LIGO. We integrate a\nphysically motivated model for the behavior of these \"glitches\" into a standard\nBayesian analysis pipeline used in gravitational wave science. This allows for\nthe inference of the free parameters in this model, and subtraction of these\nmodels to produce glitch-free versions of the data. We show that this inference\nis an effective discriminator of the presence of the features of these\nglitches, even when those features may not be discernible in standard\nvisualizations of the data.", "category": "astro-ph_IM" }, { "text": "Trigonometric Extension of the Geometric Correction Factor: Prototype\n for adding precision to adaptive ray tracing in ENZO: In this paper, we describe a method designed to add precision to radiation\nsimulations in the adaptive mesh refinement cosmological hydrodynamics code\nENZO. We build upon the geometric correction factor described in\n\textit{ENZO+MORAY: radiation hydrodynamics adaptive mesh refinement\nsimulations with adaptive ray tracing} (Wise and Abel 2011), which accounts for\nthe partial coverage of a ray's solid angle by a cube. Because the methods\nonly approximate this geometric mismatch, artifacts appear in the radiation\nfield. Here, we address the two-dimensional extension, which acts as a\nsufficient estimate both of the three-dimensional case and, in practice, of\nthe Hierarchical Equal Area isoLatitude Pixelization of the sphere (HEALPix;\nGorski 2005).
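The bookkeeping behind HEALPix-based adaptive ray tracing is compact enough to sketch. Assuming the common requirement that every cell of width dx at distance r from the source be oversampled by some factor phi of rays (the sampling criterion used in Abel & Wandelt-style schemes; the value of phi and the units below are arbitrary assumptions):

```python
import numpy as np

def healpix_level(r, dx, phi=5.1):
    """Smallest HEALPix level l whose rays still oversample, by a factor
    phi, a cell of width dx seen from distance r.

    At level l there are 12 * 4**l rays covering 4*pi sr, so each ray
    subtends Omega_ray = 4*pi / (12 * 4**l); the cell subtends roughly
    Omega_cell ~ (dx / r)**2. Require Omega_cell / Omega_ray >= phi;
    in an adaptive scheme, rays are split into 4 children whenever the
    criterion fails at their current level.
    """
    omega_cell = (dx / r) ** 2
    l = 0.5 * np.log2(phi * 4.0 * np.pi / (12.0 * omega_cell))
    return int(np.ceil(max(l, 0.0)))

for r in (1.0, 10.0, 100.0):        # distances in units of the cell width
    print(r, healpix_level(r, dx=1.0))
```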
We will demonstrate the value of an extension to the geometric\ncorrection factor and lay the groundwork for a future implementation in ENZO\nto improve simulations of radiation from point sources.", "category": "astro-ph_IM" }, { "text": "An Analysis of DES Cluster Simulations through the IMCAT and Shapelets\n Weak Lensing Pipelines: We have run two completely independent weak lensing analysis pipelines on a\nset of realistic simulated images of a massive galaxy cluster with a singular\nisothermal sphere profile (galaxy velocity dispersion sigma_v=1250 km/sec).\nThe suite of images was constructed using the simulation tools developed by the\nDark Energy Survey. We find that both weak lensing pipelines can accurately\nrecover the velocity dispersion of our simulated clusters, suggesting that\ncurrent weak lensing tools are accurate enough for measuring the shear profile\nof massive clusters in upcoming large photometric surveys. We also demonstrate\nhow choices of some cuts influence the final shear profile and sigma_v\nmeasurement. Analogously to the STEP program, we make all of these cluster\nsimulation images publicly available for other groups to analyze through their\nown weak lensing pipelines.", "category": "astro-ph_IM" }, { "text": "Noise statistics in a fast digital radio receiver: the Bedlam backend\n for the Parkes Radio Telescope: The digital record of the voltage in a radio telescope receiver, after\nfrequency conversion and sampling at a finite rate, is not a perfect\nrepresentation of the original analog signal. To detect and characterise a\ntransient event with a duration comparable to the inverse bandwidth, it is\nnecessary to compensate for these effects, which modifies the statistics of the\nsignal, making it difficult to determine the significance of a potential\ndetection. We present an analysis of these modified statistics and demonstrate\nthem with experimental results from Bedlam, a new digital backend for the\nParkes radio telescope.", "category": "astro-ph_IM" }, { "text": "A 50 mK test bench for demonstration of the readout chain of\n Athena/X-IFU: The X-IFU (X-ray Integral Field Unit) onboard the large ESA mission Athena\n(Advanced Telescope for High ENergy Astrophysics), planned to be launched in\nthe mid 2030s, will be a cryogenic X-ray imaging spectrometer operating at 55\nmK. It will provide unprecedented spatially resolved high-resolution\nspectroscopy (2.5 eV FWHM up to 7 keV) in the 0.2-12 keV energy range thanks to\nits array of more than 2k TES (Transition Edge Sensor) microcalorimeter pixels.\nThe detection chain of the instrument is developed by an international\ncollaboration: the detector array by NASA/GSFC, the cold electronics by NIST,\nthe cold amplifier by VTT, the WFEE (Warm Front-End Electronics) by APC, the\nDRE (Digital Readout Electronics) by IRAP and a focal plane assembly by SRON.\nTo assess the operation of the complete readout chain of the X-IFU, a 50 mK\ntest bench based on a kilo-pixel array of microcalorimeters from NASA/GSFC has\nbeen developed at IRAP in collaboration with CNES. Validation of the test bench\nhas been performed with an intermediate detection chain entirely from NIST and\nGoddard.
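Returning briefly to the weak-lensing simulations above: for a singular isothermal sphere the lensing strength follows from the velocity dispersion alone, so the recovered sigma_v maps directly onto a shear profile. A quick Python evaluation (the lens-to-source distance ratio is an assumed, illustrative value):

```python
import numpy as np

C_KMS = 299792.458

def sis_einstein_radius_arcsec(sigma_v_kms, dls_over_ds):
    """Einstein radius of a singular isothermal sphere:
    theta_E = 4 pi (sigma_v / c)^2 * D_ls / D_s."""
    theta_rad = 4.0 * np.pi * (sigma_v_kms / C_KMS) ** 2 * dls_over_ds
    return theta_rad * 206265.0

theta_E = sis_einstein_radius_arcsec(1250.0, dls_over_ds=0.5)
print(f"theta_E ~ {theta_E:.0f} arcsec")
# ~22 arcsec here; the SIS tangential shear then falls off as
# gamma(theta) = theta_E / (2 * theta).
```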
Next planned activities include the integration of DRE and WFEE\nprototypes in order to perform an end-to-end demonstration of a complete X-IFU\ndetection chain.", "category": "astro-ph_IM" }, { "text": "An update to the EVEREST K2 pipeline: Short cadence, saturated stars,\n and Kepler-like photometry down to Kp = 15: We present an update to the EVEREST K2 pipeline that addresses various\nlimitations in the previous version and improves the photometric precision of\nthe de-trended light curves. We develop a fast regularization scheme for\nthird-order pixel level decorrelation (PLD) and adapt the algorithm to include\nthe PLD vectors of neighboring stars to enhance the predictive power of the\nmodel and minimize overfitting, particularly for faint stars. We also modify\nPLD to work for saturated stars and improve its performance on extremely\nvariable stars. On average, EVEREST 2.0 light curves have 10-20% higher\nphotometric precision than those in the previous version, yielding the\nhighest-precision light curves at all Kp magnitudes of any publicly available\nK2 catalog. For most K2 campaigns, we recover the original Kepler precision to\nat least Kp = 14, and to at least Kp = 15 for campaigns 1, 5, and 6. We also\nde-trend all short-cadence targets observed by K2, obtaining even higher\nphotometric precision for these stars. All light curves for campaigns 0-8 are\navailable online in the EVEREST catalog, which will be continuously updated\nwith future campaigns. EVEREST 2.0 is open source and is coded in a general\nframework that can be applied to other photometric surveys, including Kepler\nand the upcoming TESS mission.", "category": "astro-ph_IM" }, { "text": "Characterization of a multi-etalon array for ultra-high resolution\n spectroscopy: The upcoming Extremely Large Telescopes (ELTs) are expected to have the\ncollecting area required to detect potential biosignature gases in the\natmosphere of rocky planets around nearby low-mass stars. Some efforts are\ncurrently focusing on searching for molecular oxygen (O2), since O2 is a known\nbiosignature on Earth. One of the most promising methods to search for O2 is\ntransmission spectroscopy, in which high-resolution spectroscopy is combined\nwith cross-correlation techniques. In this method, high spectral resolution is\nrequired both to resolve the exoplanet's O2 lines and to separate them from\nforeground telluric absorption. While current astronomical spectrographs\ntypically achieve a spectral resolution of 100,000, recent studies show that\nresolutions of 300,000 -- 400,000 are optimal to detect O2 in the atmosphere of\nearth analogs with the ELTs. Fabry-Perot interferometer (FPI) arrays have been\nproposed as a relatively low-cost way to reach these resolutions. In this\npaper, we present performance results for our 2-FPI array lab prototype, which\nreaches a resolving power of 600,000. We further discuss the use of\nmulti-cavity etalons (dualons) as resolution boosters for existing\nspectrographs.", "category": "astro-ph_IM" }, { "text": "Optimal detuning for quantum filter cavities: Vacuum quantum fluctuations impose a fundamental limit on the sensitivity of\ngravitational-wave interferometers, which rank among the most sensitive\nprecision measurement devices ever built. The injection of conventional\nsqueezed vacuum reduces quantum noise in one quadrature at the expense of\nincreasing noise in the other.
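The quadrature trade-off stated in the last sentence is the uncertainty-principle bookkeeping of a squeezed state, and is easy to make concrete (the squeeze factor below is an arbitrary example):

```python
import numpy as np

def squeezed_variances(r):
    """Quadrature variances of squeezed vacuum relative to unsqueezed
    vacuum: e^{-2r} (squeezed) and e^{+2r} (anti-squeezed). Their
    product is unity, so lowering one necessarily raises the other."""
    return np.exp(-2.0 * r), np.exp(2.0 * r)

v_minus, v_plus = squeezed_variances(1.15)
print(10 * np.log10(v_minus), 10 * np.log10(v_plus))  # ~ -10 dB and +10 dB
```

A filter cavity, as discussed next, rotates which quadrature is squeezed as a function of frequency, so the reduction lands on shot noise at high frequencies and on radiation pressure noise at low frequencies.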
While this approach improved the sensitivity of\nthe Advanced LIGO and Advanced Virgo interferometers during their third\nobserving run (O3), future improvements in arm power and squeezing levels will\nbring radiation pressure noise to the forefront. Installation of a filter\ncavity for frequency-dependent squeezing provides broadband reduction of\nquantum noise through the mitigation of this radiation pressure noise, and it\nis the baseline approach planned for all of the future gravitational-wave\ndetectors currently conceived. The design and operation of a filter cavity\nrequires careful consideration of interferometer optomechanics as well as\nsqueezing degradation processes. In this paper, we perform an in-depth analysis\nto determine the optimal operating point of a filter cavity. We use our model\nalongside numerical tools to study the implications for filter cavities to be\ninstalled in the upcoming \"A+\" upgrade of the Advanced LIGO detectors.", "category": "astro-ph_IM" }, { "text": "The Power Board of the KM3NeT Digital Optical Module: design, upgrade,\n and production: The KM3NeT Collaboration is building an underwater neutrino observatory at\nthe bottom of the Mediterranean Sea consisting of two neutrino telescopes, both\ncomposed of a three-dimensional array of light detectors, known as digital\noptical modules. Each digital optical module contains a set of 31 three-inch\nphotomultiplier tubes distributed over the surface of a 0.44 m diameter\npressure-resistant glass sphere. The module also includes calibration\ninstruments and electronics for power, readout and data acquisition. The power\nboard was developed to supply power to all the elements of the digital optical\nmodule. The design of the power board began in 2013, and several prototypes\nwere produced and tested. After an exhaustive validation process in various\nlaboratories within the KM3NeT Collaboration, a mass production batch began,\nresulting in the construction of over 1200 power boards so far. These boards\nwere integrated in the digital optical modules that have already been produced\nand deployed: 828 as of October 2023. In 2017, an upgrade of the power board,\nto increase reliability and efficiency, was initiated. After the validation of\na pre-production series, a production batch of 800 upgraded boards is currently\nunderway. This paper describes the design, architecture, upgrade, validation,\nand production of the power board, including the reliability studies and tests\nconducted to ensure safe operation at the bottom of the Mediterranean Sea\nthroughout the observatory's lifespan.", "category": "astro-ph_IM" }, { "text": "Reduction of CCD observations made with a scanning Fabry--Perot\n interferometer. III. Wavelength scale refinement: We describe the recent modifications to the data reduction technique for\nobservations acquired with the scanning Fabry-Perot interferometer (FPI)\nmounted on the 6-m telescope of the Special Astrophysical Observatory, which\nallow the wavelength scale to be correctly computed in the case of large\nmutual offsets of studied objects in interferograms.
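For orientation, the wavelength scale in question derives from the classical interference condition of a Fabry-Perot etalon; schematically (our notation, for context only):

\[
m\,\lambda = 2\,n\,t\,\cos\theta,
\qquad
\cos\theta = \bigl(1 + (r/f)^2\bigr)^{-1/2},
\]

so for interference order $m$, gap $t$ and refractive index $n$, the transmitted wavelength decreases with radius $r$ from the optical axis (focal length $f$). An object that lands at different radii in different interferograms therefore acquires a shifted wavelength zero-point, which is the effect the refined reduction must account for.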
We also consider the parameters of the\nscanning FPIs used in the SCORPIO-2 multimode focal reducer.", "category": "astro-ph_IM" }, { "text": "Modeling Results and Baseline Design for an RF-SoC-Based Readout System\n for Microwave Kinetic Inductance Detectors: Building upon existing signal processing techniques and open-source software,\nthis paper presents a baseline design for an RF System-on-Chip Frequency\nDivision Multiplexed readout for a spatio-spectral focal plane instrument based\non low temperature detectors. A trade-off analysis of different FPGA carrier\nboards is presented in an attempt to find an optimum next-generation solution\nfor reading out larger arrays of Microwave Kinetic Inductance Detectors\n(MKIDs). The ZCU111 RF SoC FPGA board from Xilinx was selected, and it is shown\nhow this integrated system promises to increase the number of pixels that can\nbe read out (per board), which enables a reduction in the readout cost per\npixel, the mass and volume, and power consumption, all of which are important\nin making MKID instruments more feasible for both ground-based and space-based\nastrophysics. The on-chip logic capacity is shown to form a primary constraint\non the number of MKIDs which can be read, channelised, and processed with this\nnew system. As such, novel signal processing techniques are analysed, including\nDigitally Down Converted (DDC)-corrected sub-maximally decimated sampling, in\nan effort to reduce logic requirements without compromising signal-to-noise\nratio. It is also shown how combining the ZCU111 board with a secondary FPGA\nboard will allow all 8 ADCs and 8 DACs to be utilised, providing enough\nbandwidth to read up to 8,000 MKIDs per board-set, an eight-fold improvement\nover the state-of-the-art, and important in pursuing 100,000 pixel arrays.\nFinally, the feasibility of extending the operational frequency range of MKIDs\nto the 5 - 10 GHz regime (or possibly beyond) is investigated, and some\nbenefits and consequences of doing so are presented.", "category": "astro-ph_IM" }, { "text": "A superconducting focal plane array for ultraviolet, optical, and\n near-infrared astrophysics: Microwave Kinetic Inductance Detectors, or MKIDs, have proven to be a\npowerful cryogenic detector technology due to their sensitivity and the ease\nwith which they can be multiplexed into large arrays. An MKID is an energy\nsensor based on a photon-variable superconducting inductance in a lithographed\nmicroresonator, and is capable of functioning as a photon detector across the\nelectromagnetic spectrum as well as a particle detector. Here we describe the\nfirst successful effort to create a photon-counting, energy-resolving\nultraviolet, optical, and near infrared MKID focal plane array. These new\nOptical Lumped Element (OLE) MKID arrays have significant advantages over\nsemiconductor detectors like charge coupled devices (CCDs). They can count\nindividual photons with essentially no false counts and determine the energy\nand arrival time of every photon with good quantum efficiency. Their physical\npixel size and maximum count rate are well matched to large telescopes. These\ncapabilities enable powerful new astrophysical instruments usable from the\nground and space.
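The detection principle sketched above can be reduced to one standard relation for the resonant frequency of each pixel (L_g the geometric inductance, L_k the kinetic inductance, C the resonator capacitance):

```latex
f_r = \frac{1}{2\pi \sqrt{\left(L_g + L_k\right) C}}
```

A photon with energy above the Cooper-pair-breaking threshold (about twice the superconducting gap) increases L_k, pulling f_r downward; the size of the transient shift encodes the photon energy and its onset the arrival time, which is what makes large arrays naturally frequency-multiplexable.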
MKIDs could eventually supplant semiconductor detectors for\nmost astronomical instrumentation, and will be useful for other disciplines\nsuch as quantum optics and biological imaging.", "category": "astro-ph_IM" }, { "text": "Light Curve Classification with DistClassiPy: a new distance-based\n classifier: The rise of synoptic sky surveys has ushered in an era of big data in\ntime-domain astronomy, making data science and machine learning essential tools\nfor studying celestial objects. Tree-based (e.g. Random Forests) and deep\nlearning models represent the current standard in the field. We explore the use\nof different distance metrics to aid in the classification of objects. For\nthis, we developed a new distance-metric-based classifier called DistClassiPy.\nThe direct use of distance metrics is an approach that has not been explored in\ntime-domain astronomy, but distance-based methods can aid in increasing the\ninterpretability of the classification result and decrease the computational\ncosts. In particular, we classify light curves of variable stars by comparing\nthe distances between objects of different classes. Using 18 distance metrics\napplied to a catalog of 6,000 variable stars in 10 classes, we demonstrate\nclassification and dimensionality reduction. We show that this classifier matches\nstate-of-the-art performance but has lower computational requirements and\nimproved interpretability. We have made DistClassiPy open-source and accessible\nat https://pypi.org/project/distclassipy/ with the goal of broadening its\napplications to other classification scenarios within and beyond astronomy.", "category": "astro-ph_IM" }, { "text": "A Bayesian method for the analysis of deterministic and stochastic time\n series: I introduce a general, Bayesian method for modelling univariate time series\ndata assumed to be drawn from a continuous, stochastic process. The method\naccommodates arbitrary temporal sampling, and takes into account measurement\nuncertainties for arbitrary error models (not just Gaussian) on both the time\nand signal variables. Any model for the deterministic component of the\nvariation of the signal with time is supported, as is any model of the\nstochastic component on the signal and time variables. Models illustrated here\nare constant and sinusoidal models for the signal mean combined with a Gaussian\nstochastic component, as well as a purely stochastic model, the\nOrnstein-Uhlenbeck process. The posterior probability distribution over model\nparameters is determined via Monte Carlo sampling. Models are compared using\nthe \"cross-validation likelihood\", in which the posterior-averaged likelihoods\nfor different partitions of the data are combined. In principle this is more\nrobust to changes in the prior than is the evidence (the prior-averaged\nlikelihood). The method is demonstrated by applying it to the light curves of\n11 ultra-cool dwarf stars, claimed by a previous study to show statistically\nsignificant variability. This is reassessed here by calculating the\ncross-validation likelihood for various time series models, including a null\nhypothesis of no variability beyond the error bars. Ten of the 11 light curves are\nconfirmed as being significantly variable, and one of these seems to be\nperiodic, with two plausible periods identified.
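A minimal sketch of that model-comparison statistic, assuming hypothetical user-supplied posterior_sampler and log_like callables and numpy-array inputs (the paper's own partitioning scheme may differ):

```python
import numpy as np

def cross_validation_log_likelihood(t, y, sigma, log_like, posterior_sampler, k=5, seed=0):
    """K-fold 'cross-validation likelihood': for each partition, average the
    likelihood of the held-out points over posterior samples fit to the
    remaining data, then combine the partitions in log space."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(t))
    total = 0.0
    for test in np.array_split(idx, k):
        train = np.setdiff1d(idx, test)
        thetas = posterior_sampler(t[train], y[train], sigma[train])  # (n_samples, n_params)
        # posterior-averaged likelihood of the held-out data (log-sum-exp for stability)
        logL = np.array([log_like(th, t[test], y[test], sigma[test]) for th in thetas])
        total += np.logaddexp.reduce(logL) - np.log(len(logL))
    return total
```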
Another object is best\ndescribed by the Ornstein-Uhlenbeck process, a conclusion which is obviously\nlimited to the set of models actually tested.", "category": "astro-ph_IM" }, { "text": "Bias-Free Estimation of Signals on Top of Unknown Backgrounds: We present a method for obtaining unbiased signal estimates in the presence\nof a significant background, eliminating the need for a parametric model for\nthe background itself. Our approach is based on a minimal set of conditions for\nobservation and background estimators, which are typically satisfied in\npractical scenarios. To showcase the effectiveness of our method, we apply it\nto simulated data from the planned dielectric axion haloscope MADMAX.", "category": "astro-ph_IM" }, { "text": "Arbitrary Transform Telescopes: The Generalization of Interferometry: The basic principle of astronomical interferometry is to derive the angular\ndistribution of radiation in the sky from the Fourier transform of the electric\nfield on the ground. What is so special about the Fourier transform? Nothing,\nit turns out. I consider the possibility of performing other transforms on the\nelectric field with digital technology. The Fractional Fourier Transform (FrFT)\nis useful for interpreting observations of sources that are close to the\ninterferometer (in the atmosphere for radio interferometers). Essentially,\napplying the FrFT focuses the array somewhere nearer than infinity. Combined\nwith the other Linear Canonical Transforms, any homogeneous linear optical\nsystem with thin elements can be instantiated. The time variation of the\nelectric field can also be decomposed into other bases besides the Fourier\nmodes, which is especially useful for dispersed transients or quick pulses. I\ndiscuss why the Fourier basis is so commonly used, and suggest it is partly\nbecause most astrophysical sources vary slowly in time.", "category": "astro-ph_IM" }, { "text": "The Rapid Imaging Planetary Spectrograph: The Rapid Imaging Planetary Spectrograph (RIPS) was designed as a long-slit\nhigh-resolution spectrograph for the specific application of studying\natmospheres of spatially extended solar system bodies. With heritage in\nterrestrial airglow instruments, RIPS uses an echelle grating and order-sorting\nfilters to obtain optical spectra at resolving powers up to R~127,000. An\nultra-narrowband image from the reflective slit jaws is captured concurrently\nwith each spectrum on the same EMCCD detector. The \"rapid\" portion of RIPS'\nmoniker stems from its ability to capture high frame rate data streams, which\nenables the established technique known as \"lucky imaging\" to be extended to\nspatially resolved spectroscopy. Resonantly scattered emission lines of alkali\nmetals, in particular, are sufficiently bright to be measured in short\nintegration times. RIPS has mapped the distributions of Na and K emissions in\nMercury's tenuous exosphere, which exhibit dynamic behavior coupled to the\nplanet's plasma and meteoroid environment. An important application is daylight\nobservations of Mercury at solar telescopes since synoptic context on the\nexosphere's distribution comprises valuable ground-based support for the\nupcoming BepiColombo orbital mission. As a conventional long slit spectrograph,\nRIPS has targeted the Moon's surface-bound exosphere where structure in\nlinewidth and brightness as a function of tangent altitude are observed. 
At the\nGalilean moons, RIPS can study the plasma interaction with Io and place new\nconstraints on the sputtered atmosphere of Europa, which in turn provides\ninsight into the salinity of Europa's subsurface ocean. The instrumental design\nand construction are described herein, and these astronomical observations are\npresented to illustrate RIPS' performance as a visiting instrument at three\ndifferent telescope facilities.", "category": "astro-ph_IM" }, { "text": "Experiments with calibrated digital sideband separating downconversion: This article reports on the first step in a focused program to re-optimize\nradio astronomy receiver architecture to better take advantage of the latest\nadvancements in commercial digital technology. Specifically, an L-Band\nsideband-separating downconverter has been built using a combination of careful\n(but ultimately very simple) analog design and digital signal processing to\nachieve wideband downconversion of an RFI-rich frequency spectrum to baseband\nin a single mixing step, with a fixed-frequency Local Oscillator and stable\nsideband isolation exceeding 50 dB over a 12 degree C temperature range.", "category": "astro-ph_IM" }, { "text": "A Lunar L2-Farside Exploration and Science Mission Concept with the\n Orion Multi-Purpose Crew Vehicle and a Teleoperated Lander/Rover: A novel concept is presented in this paper for a human mission to the lunar\nL2 (Lagrange) point that would be a proving ground for future exploration\nmissions to deep space while also overseeing scientifically important\ninvestigations. In an L2 halo orbit above the lunar farside, the astronauts\naboard the Orion Crew Vehicle would travel 15% farther from Earth than did the\nApollo astronauts and spend almost three times longer in deep space. Such a\nmission would serve as a first step beyond low Earth orbit and prove out\noperational spaceflight capabilities such as life support, communication, high\nspeed re-entry, and radiation protection prior to more difficult human\nexploration missions. On this proposed mission, the crew would teleoperate\nlanders and rovers on the unexplored lunar farside, which would obtain samples\nfrom the geologically interesting farside and deploy a low radio frequency\ntelescope. Sampling the South Pole-Aitken basin, one of the oldest impact\nbasins in the solar system, is a key science objective of the 2011 Planetary\nScience Decadal Survey. Observations at low radio frequencies to track the\neffects of the Universe's first stars/galaxies on the intergalactic medium are\na priority of the 2010 Astronomy and Astrophysics Decadal Survey. Such\ntelerobotic oversight would also demonstrate capability for human and robotic\ncooperation on future, more complex deep space missions such as exploring Mars.", "category": "astro-ph_IM" }, { "text": "Optical capabilities of the Multichannel Subtractive Double Pass (MSDP)\n for imaging spectroscopy and polarimetry at the Meudon Solar Tower: The Meudon Solar Tower (MST) is a 0.60 m telescope dedicated to spectroscopic\nobservations of solar regions. It includes a 14-meter focal length spectrograph\nwhich offers high spectral resolution. The spectrograph works either in\nclassical thin slit mode (R > 300000) or 2D imaging spectroscopy (60000 < R <\n180000). This specific mode is able to provide high temporal resolution\nmeasurements (1 min) of velocities and magnetic fields upon a 2D field of view,\nusing the Multichannel Subtractive Double Pass (MSDP) system. 
The purpose of\nthis paper is to describe the capabilities of the MSDP at MST with available\nslicers for broad and thin lines. The goal is to produce multichannel\nspectra-images, from which cubes of instantaneous data (x, y, $\lambda$) are\nderived, in order to study the plasma dynamics and magnetic fields (with\npolarimetry).", "category": "astro-ph_IM" }, { "text": "The small size telescope projects for the Cherenkov Telescope Array: The small size telescopes (SSTs), spread over an area of several square km,\ndominate the CTA sensitivity in the photon energy range from a few TeV to over\n100 TeV, enabling detailed exploration of the very high energy\ngamma-ray sky. The proposed telescopes are innovative designs providing a wide\nfield of view. Two of them, the ASTRI (Astrophysics con Specchi a Tecnologia\nReplicante Italiana) and the GCT (Gamma-ray Cherenkov Telescope) telescopes,\nare based on dual-mirror Schwarzschild-Couder optics, with primary mirror\ndiameters of 4 m. The third, SST-1M, is a Davies-Cotton design with a 4 m\ndiameter mirror. Progress with the construction and testing of prototypes of\nthese telescopes is presented. The SST cameras use silicon photomultipliers,\nwith preamplifier and readout/trigger electronics designed to optimize the\nperformance of these sensors for (atmospheric) Cherenkov light. The status of\nthe camera developments is discussed. The SST sub-array will consist of about\n70 telescopes at the CTA southern site. Current plans for the implementation of\nthe array are presented.", "category": "astro-ph_IM" }, { "text": "The Footprint Database and Web Services of the Herschel Space\n Observatory: Data from the Herschel Space Observatory are freely available to the public\nbut no uniformly processed catalogue of the observations has been published so\nfar. To date, the Herschel Science Archive does not contain the exact sky\ncoverage (footprint) of individual observations and supports searches for\nmeasurements based on bounding circles only. Drawing on previous experience in\nimplementing footprint databases, we built the Herschel Footprint Database and\nWeb Services for the Herschel Space Observatory to provide efficient search\ncapabilities for typical astronomical queries. The database was designed with\nthe following main goals in mind: (a) provide a unified data model for\nmeta-data of all instruments and observational modes, (b) quickly find\nobservations covering a selected object and its neighbourhood, (c) quickly find\nevery observation in a larger area of the sky, (d) allow for finding solar\nsystem objects crossing observation fields. As a first step, we developed a\nunified data model of observations of all three Herschel instruments for all\npointing and instrument modes. Then, using telescope pointing information and\nobservational meta-data, we compiled a database of footprints. As opposed to\nmethods using pixellation of the sphere, we represent sky coverage in an exact\ngeometric form allowing for precise area calculations. For easier handling of\nHerschel observation footprints with rather complex shapes, two algorithms were\nimplemented to reduce the outline. Furthermore, a new visualisation tool to\nplot footprints with various spherical projections was developed. Indexing of\nthe footprints using Hierarchical Triangular Mesh makes it possible to quickly\nfind observations based on sky coverage, time and meta-data.
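To illustrate the exact-geometry representation mentioned above, here is a toy spherical-polygon area computation via Girard's theorem (a sketch assuming a convex footprint with consistently ordered vertices; the production database handles far more general outlines):

```python
import numpy as np

def spherical_polygon_area(ra_deg, dec_deg):
    """Steradian area of a convex spherical polygon with consistently ordered
    vertices, via Girard's theorem: area = sum(interior angles) - (n - 2) * pi."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    v = np.stack([np.cos(dec) * np.cos(ra),
                  np.cos(dec) * np.sin(ra),
                  np.sin(dec)], axis=1)          # unit vectors of the vertices
    n = len(v)
    total = 0.0
    for i in range(n):
        a, b, c = v[i - 1], v[i], v[(i + 1) % n]
        ta = a - b * np.dot(a, b)                # tangent at vertex b toward a
        tc = c - b * np.dot(c, b)                # tangent at vertex b toward c
        cosang = np.dot(ta, tc) / (np.linalg.norm(ta) * np.linalg.norm(tc))
        total += np.arccos(np.clip(cosang, -1.0, 1.0))
    return total - (n - 2) * np.pi
```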
The database is\naccessible via a web site (http://herschel.vo.elte.hu) and also as a set of\nREST web service functions.", "category": "astro-ph_IM" }, { "text": "The Chinese space millimeter-wavelength VLBI array - a step toward\n imaging the most compact astronomical objects: The Shanghai Astronomical Observatory (SHAO) of the Chinese Academy of\nSciences (CAS) is studying a space VLBI (Very Long Baseline Interferometer)\nprogram. The ultimate objective of the program is to image the immediate\nvicinity of the supermassive black holes (SMBHs) in the hearts of galaxies with\na space-based VLBI array working at sub-millimeter wavelengths and to gain\nultrahigh angular resolution. To achieve this ambitious goal, the mission plan\nis divided into three stages. The first phase of the program is called the Space\nMillimeter-wavelength VLBI Array (SMVA), consisting of two satellites, each\ncarrying a 10-m diameter radio telescope into elliptical orbits with an apogee\nheight of 60000 km and a perigee height of 1200 km. The VLBI telescopes in\nspace will work at three frequency bands, 43, 22 and 8 GHz. The 43- and 22-GHz\nbands will be equipped with cryogenic receivers. The space telescopes,\nobserving together with ground-based radio telescopes, enable the highest\nangular resolution of 20 micro-arcseconds ($\mu$as) at 43 GHz. The SMVA is\nexpected to conduct a broad range of high-resolution observational research,\ne.g. imaging the shadow (dark region) of the supermassive black hole in the\nheart of the galaxy M87 for the first time, studying the kinematics of water\nmegamasers surrounding the SMBHs, and exploring the power source of active\ngalactic nuclei. Pre-research funding was granted by the CAS in October\n2012 to support scientific and technical feasibility studies. These studies\nalso include the manufacturing of a prototype of the deployable 10-m\nspace-based telescope and a 22-GHz receiver. Here we report on the latest\nprogress of the SMVA project.", "category": "astro-ph_IM" }, { "text": "Initial simulation study on high-precision radio measurements of the\n depth of shower maximum with SKA1-low: As LOFAR has shown, using a dense array of radio antennas for detecting\nextensive air showers initiated by cosmic rays in the Earth's atmosphere makes\nit possible to measure the depth of shower maximum for individual showers with\na statistical uncertainty less than $20\,g/cm^2$. This allows detailed studies\nof the mass composition in the energy region around $10^{17}\,eV$ where the\ntransition from a Galactic to an Extragalactic origin could occur. Since\nSKA1-low will provide a much denser and very homogeneous antenna array with a\nlarge bandwidth of $50-350\,MHz$ it is expected to reach an uncertainty on the\n$X_{\max}$ reconstruction of less than $10\,g/cm^2$. We present first results\nof a simulation study with a focus on the potential to reconstruct the depth of\nshower maximum for individual showers to be measured with SKA1-low.
In\naddition, possible influences of various parameters such as the number of\nantennas included in the analysis or the considered frequency bandwidth will be\ndiscussed.", "category": "astro-ph_IM" }, { "text": "The GLENDAMA Database: This is the first version (v1) of the Gravitational LENses and DArk MAtter\n(GLENDAMA) database, accessible at http://grupos.unican.es/glendama/database. The\nnew database contains more than 6000 ready-to-use (processed) astronomical\nframes corresponding to 15 objects that fall into three classes: (1) lensed QSO\n(8 objects), (2) binary QSO (3 objects), and (3) accretion-dominated radio-loud\nQSO (4 objects). Data are also divided into two categories: freely available\nand available upon request. The second category includes observations related\nto our yet unpublished analyses. Although this v1 of the GLENDAMA archive\nincorporates an X-ray monitoring campaign for a lensed QSO in 2010, the rest of the\nframes (imaging, polarimetry and spectroscopy) were taken with NUV, visible and\nNIR facilities over the period 1999-2014. The monitoring and follow-up\nobservations of lensed QSOs are key tools for discussing the accretion flow in\ndistant QSOs, the redshift and structure of intervening (lensing) galaxies, and\nthe physical properties of the Universe as a whole.", "category": "astro-ph_IM" }, { "text": "Gammapy - A Python package for \u03b3-ray astronomy: In the past decade imaging atmospheric Cherenkov telescope arrays such as\nH.E.S.S., MAGIC, VERITAS, as well as the Fermi-LAT space telescope have\nprovided us with detailed images and spectra of the gamma-ray universe for the\nfirst time. Currently the gamma-ray community is preparing to build the\nnext-generation Cherenkov Telescope Array (CTA), which will be operated as an\nopen observatory. Gammapy (available at https://github.com/gammapy/gammapy\nunder the open-source BSD license) is a new in-development Astropy affiliated\npackage for high-level analysis and simulation of astronomical gamma-ray data.\nIt is built on the scientific Python stack (Numpy, Scipy, matplotlib and\nscikit-image) and makes use of other open-source astronomy packages such as\nAstropy, Sherpa and Naima to provide a flexible set of tools for gamma-ray\nastronomers. We present an overview of the current Gammapy features and example\nanalyses on real as well as simulated gamma-ray datasets. We would like Gammapy\nto become a community-developed project and a place of collaboration between\nscientists interested in gamma-ray astronomy with Python. Contributions\nwelcome!", "category": "astro-ph_IM" }, { "text": "An investigation of lucky imaging techniques: We present an empirical analysis of the effectiveness of frame selection\n(also known as Lucky Imaging) techniques for high resolution imaging. A\nhigh-speed image recording system has been used to observe a number of bright\nstars. The observations were made over a wide range of values of D/r0 and\nexposure time. The improvement in Strehl ratio of the stellar images due to\naligning frames and selecting the best frames was evaluated as a function of\nthese parameters. We find that improvement in Strehl ratio by factors of 4 to 6\ncan be achieved over a range of D/r0 from 3 to 12, with a slight peak at D/r0 ~\n7. The best Strehl improvement is achieved with exposure times of 10 ms or less\nbut significant improvement is still obtained at exposure times as long as 640\nms. Our results are consistent with previous investigations but cover a much\nwider range of parameter space.
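The frame-selection procedure being evaluated can be sketched in a few lines, assuming an integer-pixel shift-and-add and the brightest pixel as a crude Strehl proxy (real pipelines register frames to sub-pixel accuracy):

```python
import numpy as np

def lucky_stack(frames, select_fraction=0.1):
    """Rank short exposures by peak intensity, keep the best fraction,
    recentre each frame on its peak, and co-add."""
    frames = np.asarray(frames, dtype=float)          # shape (n_frames, ny, nx)
    n_keep = max(1, int(select_fraction * len(frames)))
    best = np.argsort(frames.max(axis=(1, 2)))[::-1][:n_keep]
    ny, nx = frames.shape[1:]
    stack = np.zeros((ny, nx))
    for i in best:
        py, px = np.unravel_index(np.argmax(frames[i]), (ny, nx))
        # integer-pixel shift-and-add: roll the peak to the image centre
        stack += np.roll(np.roll(frames[i], ny // 2 - py, axis=0), nx // 2 - px, axis=1)
    return stack / n_keep
```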
We show that Strehl ratios of >0.7 can be\nachieved in appropriate conditions, whereas previous studies have generally shown\nmaximum Strehl ratios of ~0.3. The results are in reasonable agreement with the\nsimulations of Baldwin et al. (2008).", "category": "astro-ph_IM" }, { "text": "Early Science Results from SOFIA, the World's Largest Airborne\n Observatory: The Stratospheric Observatory For Infrared Astronomy, or SOFIA, is the\nlargest flying observatory ever built, consisting of a 2.7-meter diameter\ntelescope embedded in a modified Boeing 747-SP aircraft. SOFIA is a joint\nproject between NASA and the German Aerospace Center Deutsches Zentrum für Luft-\nund Raumfahrt (DLR). By flying at altitudes up to 45000 feet, the observatory\ngets above 99.9 percent of the infrared-absorbing water vapor in the Earth's\natmosphere. This opens up an almost uninterrupted wavelength range from\n0.3-1600 microns that is in large part obscured from ground-based\nobservatories. Since its 'Initial Science Flight' in December 2010, SOFIA has\nflown several dozen science flights, and has observed a wide array of objects\nfrom Solar System bodies, to stellar nurseries, to distant galaxies. This paper\nreviews a few of the exciting new science results from these first flights,\nwhich were made by three instruments: the mid-infrared camera FORCAST, the\nfar-infrared heterodyne spectrometer GREAT, and the optical occultation\nphotometer HIPO.", "category": "astro-ph_IM" }, { "text": "Energy spectra of abundant cosmic-ray nuclei in the NUCLEON experiment: The NUCLEON satellite experiment is designed to directly investigate the\nenergy spectra of cosmic-ray nuclei and the chemical composition (Z=1-30) in\nthe energy range of 2-500 TeV. The experimental results are presented,\nincluding the energy spectra of different abundant nuclei measured using the\nnew Kinematic Lightweight Energy Meter (KLEM) technique. The primary energy is\nreconstructed by registering the spatial density of the secondary particles.\nThe particles are generated by the first hadronic inelastic interaction in a\ncarbon target. Then additional particles are produced in a thin tungsten\nconverter, by electromagnetic and hadronic interactions.", "category": "astro-ph_IM" }, { "text": "Measurement errors and scaling relations in astrophysics: a review: This review article considers some of the most common methods used in\nastronomy for regressing one quantity against another in order to estimate the\nmodel parameters or to predict an observationally expensive quantity using\ntrends between object values. These methods have to tackle some of the awkward\nfeatures prevalent in astronomical data, namely heteroscedastic\n(point-dependent) errors, intrinsic scatter, non-ignorable data collection and\nselection effects, data structure and non-uniform population (often called\nMalmquist bias), non-Gaussian data, outliers and mixtures of regressions. We\noutline how least-squares fits, weighted least-squares methods, Maximum\nLikelihood, survival analysis, and Bayesian methods have been applied in the\nastrophysics literature when one or more of these features is present. In\nparticular we concentrate on errors-in-variables regression and we advocate\nBayesian techniques.", "category": "astro-ph_IM" }, { "text": "The optical imager Galileo (OIG): The present paper describes the construction, installation and\noperation of the Optical Imager Galileo (OIG), a scientific instrument\ndedicated to imaging in the visible.
OIG was the first instrument\ninstalled on the focal plane of the Telescopio Nazionale Galileo (TNG) and it\nhas been extensively used for the functional verification of several parts of\nthe telescope (for example, the optical quality, the rejection of spurious\nlight, the active optics and the tracking). Several parts\nof the TNG informatics system (instrument commanding, telemetry and data\narchiving) have likewise been verified making extensive use of OIG. This paper also provides\na framework for further development of the imaging-dedicated\ninstrumentation at the TNG. OIG, coupled with the first near-IR camera\n(ARNICA), has been the 'workhorse instrument' during the first period of\ntelescope experimental and scientific scheduling.", "category": "astro-ph_IM" }, { "text": "VERITAS Telescope 1 Relocation: Details and Improvements: The first VERITAS telescope was installed in 2002-2003 at the Fred Lawrence\nWhipple Observatory and was originally operated as a prototype instrument.\nSubsequently the decision was made to locate the full array at the same site,\nresulting in an asymmetric array layout. As anticipated, this resulted in less\nthan optimal sensitivity due to the loss in effective area and the increase in\nbackground due to local muon-initiated triggers. In the summer of 2009, the\nVERITAS collaboration relocated Telescope 1 to improve the overall array\nlayout. This has provided a 30% improvement in sensitivity, corresponding to a\n60% reduction in the time needed to detect a source.", "category": "astro-ph_IM" }, { "text": "An Atmospheric Cerenkov Telescope Simulation System: A detailed numerical procedure has been developed to simulate the mechanical\nconfigurations and optical properties of Imaging Atmospheric Cerenkov Telescope\nsystems. To test these procedures a few existing ACT arrays are simulated.\nFirst results from these simulations are presented.", "category": "astro-ph_IM" }, { "text": "The ASTROID Simulator Software Package: Realistic Modelling of\n High-Precision High-Cadence Space-Based Imaging: The preparation of a space mission that carries out any kind of imaging to\ndetect high-precision low-amplitude variability of its targets requires a\nrobust model for the expected performance of its instruments. This model cannot\nbe derived from a simple addition of noise properties due to the complex\ninteraction between the various noise sources. While it is not feasible to\nbuild and test a prototype of the imaging device on-ground, realistic numerical\nsimulations in the form of an end-to-end simulator can be used to model the\nnoise propagation in the observations. These simulations not only allow\nstudying the performance of the instrument, its noise source response and its\ndata quality, but also the instrument design verification for different types\nof configurations, the observing strategy and the scientific feasibility of an\nobserving proposal. In this way, a complete description and assessment of the\nobjectives to expect from the mission can be derived. We present a\nhigh-precision simulation software package, designed to simulate photometric\ntime-series of CCD images by including realistic models of the CCD and its\nelectronics, the telescope optics, the stellar field, the jitter movements of\nthe spacecraft, and all important natural noise sources.
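As a toy illustration of how such noise sources combine in an end-to-end simulation (all numerical values and the jitter model below are illustrative assumptions, not ASTROID's actual models):

```python
import numpy as np

def simulate_photometry(flux_e, n_exp, exptime=30.0, dark_rate=2.0,
                        read_noise=8.0, jitter_loss=0.005, seed=1):
    """Combine photon noise, dark current, read noise, and a multiplicative
    flux loss from pointing jitter into one simulated light curve (in e-)."""
    rng = np.random.default_rng(seed)
    stars = rng.poisson(flux_e * exptime, n_exp).astype(float)       # photon noise
    dark = rng.poisson(dark_rate * exptime, n_exp)                   # dark current
    read = rng.normal(0.0, read_noise, n_exp)                        # readout noise
    jitter = 1.0 - np.abs(rng.normal(0.0, jitter_loss, n_exp))       # aperture losses
    return stars * jitter + dark + read
```

The point of composing the terms rather than adding noise variances is visible even in this sketch: the jitter term multiplies the signal, so its effect depends on the stellar flux and cannot be captured by a simple quadrature sum.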
This formalism has\nbeen implemented in a software tool, dubbed ASTROID Simulator.", "category": "astro-ph_IM" }, { "text": "denmarf: a Python package for density estimation using masked\n autoregressive flow: Masked autoregressive flow (MAF) is a state-of-the-art non-parametric density\nestimation technique. It is based on the idea (known as a normalizing flow)\nthat a simple base probability distribution can be mapped into a complicated\ntarget distribution that one wishes to approximate, using a sequence of\nbijective transformations. The denmarf package provides a scikit-learn-like\ninterface in Python for researchers to effortlessly use MAF for density\nestimation in their applications to evaluate probability densities of the\nunderlying distribution of a set of data and generate new samples from the\ndata, on either a CPU or a GPU, as simple as \"from denmarf import\nDensityEstimate; de = DensityEstimate().fit(X)\". The package also implements\nlogistic transformations to facilitate the fitting of bounded distributions.", "category": "astro-ph_IM" }, { "text": "SHIMM: A Versatile Seeing Monitor for Astronomy: Characterisation of atmospheric optical turbulence is crucial for the design\nand operation of modern ground-based optical telescopes. In particular, the\neffective application of adaptive optics correction on large and extremely\nlarge telescopes relies on a detailed knowledge of the prevailing atmospheric\nconditions, including the vertical profile of the optical turbulence strength\nand the atmospheric coherence timescale. The Differential Image Motion Monitor\n(DIMM) has been employed as a facility seeing monitor at many astronomical\nobserving sites across the world for several decades, providing a reliable\nestimate of the seeing angle. Here we present the Shack-Hartmann Image Motion\nMonitor (SHIMM), which is a development of the DIMM instrument, in that it\nexploits differential image motion measurements of bright target stars.\nHowever, the SHIMM employs a Shack-Hartmann wavefront sensor in place of the\ntwo-hole aperture mask utilised by the DIMM. This allows the SHIMM to provide\nan estimate of the seeing, unbiased by shot noise or scintillation effects. The\nSHIMM also produces a low-resolution (three-layer) measure of the vertical\nturbulence profile, as well as an estimate of the coherence timescale. The\nSHIMM is designed as a low-cost, portable, instrument. It is comprised of\noff-the-shelf components so that it is easy to duplicate and well-suited for\ncomparisons of atmospheric conditions within and between different observing\nsites. Here, the SHIMM design and methodology for estimating key atmospheric\nparameters will be presented, as well as initial field test results with\ncomparisons to the Stereo-SCIDAR instrument.", "category": "astro-ph_IM" }, { "text": "Pan-STARRS Photometric and Astrometric Calibration: We present the details of the photometric and astrometric calibration of the\nPan-STARRS1 $3\\pi$ Survey. The photometric goals were to reduce the systematic\neffects introduced by the camera and detectors, and to place all of the\nobservations onto a photometric system with consistent zero points over the\nentire area surveyed, the ~30,000 square degrees north of $\\delta$ = -30\ndegrees. The astrometric calibration compensates for similar systematic effects\nso that positions, proper motions, and parallaxes are reliable as well. 
The\nPan-STARRS Data Release 2 (DR2) astrometry is tied to the Gaia DR1 release.", "category": "astro-ph_IM" }, { "text": "The Photometric LSST Astronomical Time-series Classification Challenge\n (PLAsTiCC): Selection of a performance metric for classification\n probabilities balancing diverse science goals: Classification of transient and variable light curves is an essential step in\nusing astronomical observations to develop an understanding of their underlying\nphysical processes. However, upcoming deep photometric surveys, including the\nLarge Synoptic Survey Telescope (LSST), will produce a deluge of low\nsignal-to-noise data for which traditional labeling procedures are\ninappropriate. Probabilistic classification is more appropriate for the data\nbut is incompatible with the traditional metrics used on deterministic\nclassifications. Furthermore, large survey collaborations intend to use these\nclassification probabilities for diverse science objectives, indicating a need\nfor a metric that balances a variety of goals. We describe the process used to\ndevelop an optimal performance metric for an open classification challenge that\nseeks probabilistic classifications and must serve many scientific interests.\nThe Photometric LSST Astronomical Time-series Classification Challenge\n(PLAsTiCC) is an open competition aiming to identify promising techniques for\nobtaining classification probabilities of transient and variable objects by\nengaging a broader community both within and outside astronomy. Using mock\nclassification probability submissions emulating archetypes of those\nanticipated for PLAsTiCC, we compare the sensitivity of metrics of\nclassification probabilities under various weighting schemes, finding that they\nyield qualitatively consistent results. We choose as a metric for PLAsTiCC a\nweighted modification of the cross-entropy because it can be meaningfully\ninterpreted. Finally, we propose extensions of our methodology to ever more\ncomplex challenge goals and suggest some guiding principles for approaching the\nchoice of a metric of probabilistic classifications.", "category": "astro-ph_IM" }, { "text": "AstroInformatics: Recommendations for Global Cooperation: Policy Brief on \"AstroInformatics, Recommendations for Global Collaboration\",\ndistilled from panel discussions during the S20 Policy Webinar on Astroinformatics\nfor Sustainable Development held on 6-7 July 2023.\n The deliberations encompassed a wide array of topics, including broad\nastroinformatics, sky surveys, large-scale international initiatives, global\ndata repositories, space-related data, regional and international collaborative\nefforts, as well as workforce development within the field. These discussions\ncomprehensively addressed the current status, notable achievements, and the\nmanifold challenges that the field of astroinformatics currently confronts.\n The G20 nations present a unique opportunity due to their abundant human and\ntechnological capabilities, coupled with their widespread geographical\nrepresentation. Leveraging these strengths, significant strides can be made in\nvarious domains. These include, but are not limited to, the advancement of STEM\neducation and workforce development, the promotion of equitable resource\nutilization, and contributions to fields such as Earth Science and Climate\nScience.\n We present a concise overview, followed by specific recommendations that\npertain to both ground-based and space data initiatives.
Our team remains\nreadily available to furnish further elaboration on any of these proposals as\nrequired. Furthermore, we anticipate further engagement during the upcoming G20\npresidencies in Brazil (2024) and South Africa (2025) to ensure the continued\ndiscussion and realization of these objectives.\n The policy webinar took place during the G20 presidency in India (2023).\nNotes based on the seven panels will be separately published.", "category": "astro-ph_IM" }, { "text": "Searching for Extraterrestrial Intelligence with the Square Kilometre\n Array: The vast collecting area of the Square Kilometre Array (SKA), harnessed by\nsensitive receivers, flexible digital electronics and increased computational\ncapacity, could permit the most sensitive and exhaustive search for\ntechnologically-produced radio emission from advanced extraterrestrial\nintelligence (SETI) ever performed. For example, SKA1-MID will be capable of\ndetecting a source roughly analogous to terrestrial high-power radars (e.g. air\nroute surveillance or ballistic missile warning radars; EIRP, the equivalent\nisotropic radiated power, ~10^17 erg sec^-1) at 10 pc in less than 15 minutes,\nand with a modest four-beam SETI observing system could, in one minute, search\nevery star in the primary beam out to ~100 pc for radio emission comparable to\nthat emitted by the Arecibo Planetary Radar (EIRP ~2 x 10^20 erg sec^-1). The\nflexibility of the signal detection systems used for SETI searches with the SKA\nwill allow new algorithms to be employed that will provide sensitivity to a\nmuch wider variety of signal types than previously searched for.\n Here we discuss the astrobiological and astrophysical motivations for radio\nSETI and describe how the technical capabilities of the SKA will explore the\nradio SETI parameter space. We detail several conceivable SETI experimental\nprograms on all components of SKA1, including commensal, primary-user, targeted\nand survey programs, and project the enhancements to them possible with SKA2. We\nalso discuss target selection criteria for these programs, and in the case of\ncommensal observing, how the varied use cases of other primary observers can be\nused to full advantage for SETI.", "category": "astro-ph_IM" }, { "text": "VISION: A Six-Telescope Fiber-Fed Visible Light Beam Combiner for the\n Navy Precision Optical Interferometer: Visible-light long baseline interferometry holds the promise of advancing a\nnumber of important applications in fundamental astronomy, including the direct\nmeasurement of the angular diameters and oblateness of stars, and the direct\nmeasurement of the orbits of binary and multiple star systems. To advance, the\nfield of visible-light interferometry requires development of instruments\ncapable of combining light from 15 baselines (6 telescopes) simultaneously. The\nVisible Imaging System for Interferometric Observations at NPOI (VISION) is a\nnew visible-light beam combiner for the Navy Precision Optical Interferometer\n(NPOI) that uses single-mode fibers to coherently combine light from up to six\ntelescopes simultaneously with an image-plane combination scheme. It features a\nphotometric camera for calibrations and spatial filtering from single-mode\nfibers with two Andor Ixon electron-multiplying CCDs. This paper presents the\nVISION system, results of laboratory tests, and results of commissioning on-sky\nobservations.
A new set of corrections has been determined for the power\nspectrum and bispectrum by taking into account non-Gaussian statistics and read\nnoise present in electron-multiplying CCDs to enable measurement of visibilities\nand closure phases in the VISION post-processing pipeline. The post-processing\npipeline has been verified via new on-sky observations of the O-type supergiant\nbinary $\zeta$ Orionis A, obtaining a flux ratio of $2.18\pm0.13$ mag with a\nposition angle of $223.9\pm1.0^{\circ}$ and separation $40.6\pm1.8$ mas over\n570-750 nm, in good agreement with expectations from the previously published\norbit.", "category": "astro-ph_IM" }, { "text": "Errors, chaos and the collisionless limit: We simultaneously study the dynamics of the growth of errors and the question\nof the faithfulness of simulations of $N$-body systems. The errors are\nquantified through the numerical reversibility of small-$N$ spherical systems,\nand by comparing fixed-timestep runs with different stepsizes. The errors add\nrandomly, before exponential divergence sets in, with exponentiation rate\nvirtually independent of $N$, but scale saturating as $\sim 1/\sqrt{N}$, in\nline with theoretical estimates presented. In a third phase, the growth rate is\ninitially driven by multiplicative enhancement of errors, as in the exponential\nstage. It is then qualitatively different for the phase space variables and\nmean field conserved quantities (energy and momentum); for the former, the\nerrors grow systematically through phase mixing, for the latter they grow\ndiffusively. For energy, the $N$-variation of the `relaxation time' of error\ngrowth follows the $N$-scaling of two-body relaxation. This is also true for\nangular momentum in the fixed stepsize runs, although the associated error\nthreshold is higher and the relaxation time smaller. Due to shrinking\nsaturation scales, the information loss associated with the exponential\ninstability decreases with $N$ and the dynamical entropy vanishes at any finite\nresolution as $N \rightarrow \infty$. A distribution function depending on the\nintegrals of motion in the smooth potential is decreasingly affected. In this\nsense there is convergence to the collisionless limit, despite the persistence\nof exponential instability on infinitesimal scales. Nevertheless, the slow\n$N$-variation in its saturation points to the slowness of the convergence.", "category": "astro-ph_IM" }, { "text": "Expectations on the mass determination using astrometric microlensing by\n Gaia: Context. Astrometric gravitational microlensing can be used to determine the\nmass of a single star (the lens) with an accuracy of a few percent. To do so,\nprecise measurements of the angular separations between lens and background\nstar with an accuracy below 1 milli-arcsecond at different epochs are needed.\nHence only the most accurate instruments can be used. However, since the\ntimescale is on the order of months to years, the astrometric deflection might\nbe detected by Gaia, even though each star is only observed at a low cadence.\nAims. We want to show how accurately Gaia can determine the mass of the lensing\nstar. Methods. Using conservative assumptions based on the results of the\nsecond Gaia Data Release, we simulated the individual Gaia measurements for 501\npredicted astrometric microlensing events during the Gaia era (2014.5 -\n2026.5). For this purpose we use the astrometric parameters of Gaia DR2, as\nwell as an approximate mass based on the absolute G magnitude.
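For context, the standard relation that underlies such a mass measurement connects the angular Einstein radius to the lens mass M and the lens and source distances D_L and D_S (a textbook result, quoted here for orientation):

```latex
\theta_E = \sqrt{\frac{4 G M}{c^2}\,\frac{D_S - D_L}{D_L D_S}}
\quad\Longrightarrow\quad
M = \frac{c^2}{4 G}\,\theta_E^2\,\frac{D_L D_S}{D_S - D_L}
```

The astrometric deflection of the background star scales with theta_E, so measuring it together with the parallaxes of lens and source fixes the mass.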
By fitting the\nmotion of lens and source simultaneously, we then reconstruct the 11 parameters\nof the lensing event. For lenses passing by multiple background sources, we\nalso fit the motion of all background sources and the lens simultaneously.\nUsing a Monte-Carlo simulation we determine the achievable precision of the\nmass determination. Results. We find that Gaia can detect the astrometric\ndeflection for 114 events. Further, for 13 events Gaia can determine the mass\nof the lens with a precision better than 15% and for 13 + 21 = 34 events with a\nprecision of 30% or better.", "category": "astro-ph_IM" }, { "text": "Light curve completion and forecasting using fast and scalable Gaussian\n processes (MuyGPs): Temporal variations of apparent magnitude, called light curves, are\nobservational statistics of interest captured by telescopes over long periods\nof time. Light curves afford the exploration of Space Domain Awareness (SDA)\nobjectives such as object identification or pose estimation as latent variable\ninference problems. Ground-based observations from commercial off-the-shelf\n(COTS) cameras remain inexpensive compared to higher precision instruments;\nhowever, limited sensor availability combined with noisier observations can\nproduce gappy time-series data that can be difficult to model. These external\nfactors confound the automated exploitation of light curves, which makes light\ncurve prediction and extrapolation a crucial problem for applications.\nTraditionally, image or time-series completion problems have been approached\nwith diffusion-based or exemplar-based methods. More recently, Deep Neural\nNetworks (DNNs) have become the tool of choice due to their empirical success\nat learning complex nonlinear embeddings. However, DNNs often require large\ntraining sets that are not necessarily available when looking at unique\nfeatures of a light curve of a single satellite.\n In this paper, we present a novel approach to predicting missing and future\ndata points of light curves using Gaussian Processes (GPs). GPs are non-linear\nprobabilistic models that infer posterior distributions over functions and\nnaturally quantify uncertainty. However, the cubic scaling of GP inference and\ntraining is a major barrier to their adoption in applications. In particular, a\nsingle light curve can feature hundreds of thousands of observations, which is\nwell beyond the practical realization limits of a conventional GP on a single\nmachine. Consequently, we employ MuyGPs, a scalable framework for\nhyperparameter estimation of GP models that uses nearest neighbors\nsparsification and local cross-validation. MuyGPs...", "category": "astro-ph_IM" }, { "text": "Radiative Cooling II: Effects of Density and Metallicity: This work follows Lykins et al.'s discussion of the classic plasma cooling function\nat low density and solar metallicity. Here we focus on how the cooling function\nchanges over a wide range of density (n_H < 10^12 cm^(-3)) and metallicity\n(Z < 30 Z_sun). We find that high densities enhance the ionization of elements such as\nhydrogen and helium until they reach local thermodynamic equilibrium. By charge\ntransfer, the metallicity changes the ionization of hydrogen when it is\npartially ionized. We describe the total cooling function as a sum of four\nparts: those due to H&He, the heavy elements, electron-electron bremsstrahlung\nand grains.
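In schematic form, the decomposition just described, with the factorized density and metallicity dependences detailed in the next sentences (a schematic reading of this abstract, not the paper's exact notation):

```latex
\Lambda(T, n_{\rm H}, Z) \;=\;
\Lambda_{\rm H,He} + \Lambda_{\rm metals} + \Lambda_{ee,\rm brems} + \Lambda_{\rm grains},
\qquad
\Lambda_i \;\approx\; \Lambda_i^{(0)}(T)\; f_{n,i}(n_{\rm H},T)\; f_{Z,i}(Z,T)
```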
For the first three parts, we provide a low-density limit cooling\nfunction, a density dependence function, and a metallicity dependence function.\nThese functions are given with numerical tables and analytical fit functions.\nFor grain cooling, we discuss only the ISM case. We then obtain a total cooling\nfunction that depends on density, metallicity and temperature. As expected,\ncollisional de-excitation suppresses the heavy-element cooling. Finally, we\nprovide a function giving the electron fraction, which can be used to convert\nthe cooling function into a cooling rate.", "category": "astro-ph_IM" }, { "text": "The SKA and the Unknown Unknowns: As new scientists and engineers join the SKA project and as the pressures\ncome on to maintain costs within a chosen envelope, it is worth restating and\nupdating the rationale for the 'Exploration of the Unknown' (EoU). Maintaining\nan EoU philosophy will prove a vital ingredient for realizing the SKA's\ndiscovery potential. Since people make the discoveries enabled by technology, a\nfurther axis in capability parameter space, the 'human bandwidth', is emphasised.\nUsing the morphological approach pioneered by Zwicky, a currently unexploited\nregion of observational parameter space can be identified, viz. time-variable\nspectral patterns on all spectral and angular scales; one interesting example\nwould be 'spectral transients'. We should be prepared to build up to 10 percent\nless collecting area for a given overall budget in order to enhance the ways in\nwhich SKA1 can be flexibly utilized.", "category": "astro-ph_IM" }, { "text": "Inviscid SPH: In smooth-particle hydrodynamics (SPH), artificial viscosity is necessary for\nthe correct treatment of shocks, but often generates unwanted dissipation away\nfrom shocks. We present a novel method of controlling the amount of artificial\nviscosity, which uses the total time derivative of the velocity divergence as\na shock indicator and aims at completely eliminating viscosity away from shocks.\nWe subject the new scheme to numerous tests and find that the method works at\nleast as well as any previous technique in the strong-shock regime, but becomes\nvirtually inviscid away from shocks, while still maintaining particle order. In\nparticular, sound waves or oscillations of gas spheres are hardly damped over\nmany periods.", "category": "astro-ph_IM" }, { "text": "RAFTER: Ring Astrometric Field Telescope for Exo-planets and Relativity: High-precision astrometry aims at source position determination to a very\nsmall fraction of the diffraction image size, in the high-SNR regime. One of the\nkey limitations to this goal is the optical response variation of the telescope\nover a sizeable FOV, required to ensure bright reference objects for any\nselected target. The issue translates into severe calibration constraints,\nand/or the need for complex telescope and focal plane metrology. We propose an\ninnovative system approach derived from the established TMA telescope concept,\nextended to achieve a high filling factor of an annular field of view around the\noptical axis of the telescope. The proposed design is a very compact, 1 m class\ntelescope compatible with modern CCD and CMOS detectors (EFL = 15 m). We\ndescribe the concept implementation guidelines and the optical performance of\nthe current optical design.
The diffraction-limited FOV exceeds 1.25 square\ndegrees, and the detector occupies the best 0.25 square degree with 66 devices.", "category": "astro-ph_IM" }, { "text": "The Electromagnetic Characteristics of the Tianlai Cylindrical\n Pathfinder Array: A great challenge for 21 cm intensity mapping experiments is the strong\nforeground radiation which is orders of magnitude brighter than the 21cm\nsignal. Removal of the foreground takes advantage of the fact that its\nfrequency spectrum is smooth while the redshifted 21cm signal spectrum is\nstochastic. However, a complication is the non-smoothness of the instrument\nresponse. This paper describes the electromagnetic simulation of the Tianlai\ncylinder array, a pathfinder for 21 cm intensity mapping experiments. Due to\nthe vast scales involved, a direct simulation requires a large amount of\ncomputing resources. We have made the simulation practical by using a\ncombination of methods: we first simulate a single feed, then an array of feed\nunits, and finally the feed array and a cylindrical reflector together, to\nobtain the response for a single cylinder. We studied its radiation pattern,\nbandpass response and the effects of mutual coupling between feed units, and\ncompared the results with observations. Many features seen in the measurement\nresult are well reproduced in the simulation, especially the oscillatory\nfeatures which are associated with the standing waves on the reflector. The\nmutual coupling between feed units is quantified with S-parameters, which\ndecrease as the distance between the two feeds increases. Based on the\nsimulated S-parameters, we estimate the correlated noise which has been seen in\nthe visibility data; the results show very good agreement with the data in both\nmagnitude and frequency structures. These results provide useful insights into\nthe problem of 21cm signal extraction for real instruments.", "category": "astro-ph_IM" }, { "text": "A fast 2D image reconstruction algorithm from 1D data for the Gaia\n mission: A fast 2-dimensional image reconstruction method is presented, which takes as\ninput 1-dimensional data acquired from scans across a central source in\ndifferent orientations. The resultant reconstructed images do not show\nartefacts due to non-uniform coverage in the orientations of the scans across\nthe central source, and are successful in avoiding a high background due to\ncontamination of the flux from the central source across the reconstructed\nimage. Due to the weighting scheme employed this method is also naturally\nrobust to hot pixels. This method was developed specifically with Gaia data in\nmind, but should be useful in combining data with mismatched resolutions in\ndifferent directions.", "category": "astro-ph_IM" }, { "text": "Basic Survey Scheduling for the Wide Field Survey Telescope (WFST): Aiming at improving the survey efficiency of the Wide Field Survey Telescope,\nwe have developed a basic scheduling strategy that takes into account the\ntelescope characteristics, observing conditions, and weather conditions at the\nLenghu site. The sky area is divided into rectangular regions, referred to as\n`tiles', with a size of 2.577 deg * 2.634 deg, slightly smaller than the focal\narea of the mosaic CCDs. These tiles are continuously filled in annuli\nparallel to the equator. The brightness of the sky background, which varies\nwith the moon phase and distance from the moon, plays a significant role in\ndetermining the accessible survey fields.
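A minimal sketch of such an equal-declination tiling, under the assumption that the RA step is simply widened by 1/cos(dec) so each annulus holds an integer number of tiles (the real WFST scheme's edge and overlap handling will differ):

```python
import numpy as np

def tile_sky(tile_ra=2.577, tile_dec=2.634, dec_min=-90.0, dec_max=90.0):
    """Fill rectangular tiles along annuli of constant declination; the RA
    step grows as 1/cos(dec) so tiles keep roughly constant true size."""
    tiles = []
    for dec in np.arange(dec_min + tile_dec / 2.0, dec_max, tile_dec):
        step = tile_ra / max(np.cos(np.radians(dec)), 1e-6)
        n = max(1, int(np.ceil(360.0 / step)))          # tiles in this annulus
        tiles += [((i + 0.5) * 360.0 / n, dec) for i in range(n)]
    return tiles

# e.g. about 360/2.577 ~ 140 tiles per annulus near the equator, fewer toward the poles
```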
Approximately 50 connected tiles are\ngrouped into one block for observation. To optimize the survey schedule, we\nperform simulations by taking into account the length of exposures, data\nreadout, telescope slewing, and all relevant observing conditions. We utilize\na greedy algorithm for scheduling optimization. Additionally, we propose a\ndedicated dithering pattern to cover the gaps between CCDs and the four corners\nof the mosaic CCD array, which are located outside of the 3 deg field of view.\nThis dithering pattern helps to achieve relatively uniform exposure maps for\nthe final survey outputs.", "category": "astro-ph_IM" }, { "text": "Baryon acoustic oscillations from Integrated Neutral Gas Observations:\n Broadband corrugated horn construction and testing: The Baryon acoustic oscillations from Integrated Neutral Gas Observations\n(BINGO) telescope is a 40-m class radio telescope under construction that has\nbeen designed to measure the large-angular-scale intensity of HI emission at\n980--1260 MHz and hence to constrain dark energy parameters. A large focal\nplane array comprising 1.7-metre diameter, 4.3-metre length corrugated feed\nhorns is required in order to optimally illuminate the telescope. Additionally,\nvery clean beams with low sidelobes across a broad frequency range are\nrequired, in order to facilitate the separation of the faint HI emission from\nbright Galactic foreground emission. Using novel construction methods, a\nfull-sized prototype horn has been assembled. It has an average insertion loss\nof around 0.15 dB across the band, with a return loss around -25 dB. The main\nbeam is Gaussian with the first sidelobe at around -25 dB. A septum polariser\nto separate the signal into the two hands of circular polarization has also\nbeen designed, built and tested.", "category": "astro-ph_IM" }, { "text": "Hi-fi phenomenological description of eclipsing binary light variations\n as the basis for their period analysis: In-depth analysis of eclipsing binary (EB) observational data collected for\nseveral decades can inform us about many astrophysically interesting\nprocesses taking place in the systems. We have developed a wide-ranging method\nfor the phenomenological modelling of eclipsing binary phase curves that\nenables us to combine even very disparate sources of phase information. This\napproach is appropriate for the processing of both standard photometric series\nof eclipses and data from photometric surveys of all kinds. We conclude that\nmid-eclipse times, determined using the latest version of our 'hi-fi'\nphenomenological light curve models, as well as their accuracy, are nearly the\nsame as the values obtained using much more complex standard physical EB\nmodels.", "category": "astro-ph_IM" }, { "text": "Calibration database for the Murchison Widefield Array All-Sky Virtual\n Observatory: We present a calibration component for the Murchison Widefield Array All-Sky\nVirtual Observatory (MWA ASVO) utilising a newly developed PostgreSQL database\nof calibration solutions. Since its inauguration in 2013, the MWA has recorded\nover thirty-four petabytes of data archived at the Pawsey Supercomputing\nCentre. According to the MWA Data Access policy, data become publicly available\neighteen months after collection. Therefore, most of the archival data are now\navailable to the public.
Access to public data was provided in 2017 via the MWA\nASVO interface, which allowed researchers worldwide to download MWA\nuncalibrated data in standard radio astronomy data formats (CASA measurement\nsets or UV FITS files). The addition of the MWA ASVO calibration feature opens\na new, powerful avenue for researchers without a detailed knowledge of the MWA\ntelescope and data processing to download calibrated visibility data and create\nimages using standard radio-astronomy software packages. In order to populate\nthe database with calibration solutions from the last six years we developed\nfully automated pipelines. A near-real-time pipeline has been used to process\nnew calibration observations as soon as they are collected and upload\ncalibration solutions to the database, which enables monitoring of the\ninterferometric performance of the telescope. Based on this database we present\nan analysis of the stability of the MWA calibration solutions over long time\nintervals.", "category": "astro-ph_IM" }, { "text": "Polarimetric characterization of segmented mirrors: We study the impact of the loss of axial symmetry around the optical axis on\nthe polarimetric properties of a telescope with a segmented primary mirror when\neach segment is in a different aging stage. The different oxidation stage of\neach segment, as segments are substituted over time, leads to non-negligible\ncrosstalk terms. This effect is wavelength dependent and is mainly\ndetermined by the properties of the reflecting material. For an aluminum\ncoating, the worst polarimetric behavior due to oxidation is found for the blue\npart of the visible. In contrast, dust -- as modeled in this work -- does not\nsignificantly change the polarimetric behavior of the optical system.\nDepending on the telescope, there might be segment substitution sequences that\nstrongly attenuate this instrumental polarization.", "category": "astro-ph_IM" }, { "text": "Simulation and Analysis Chain for Acoustic Ultra-high Energy Neutrino\n Detectors in Water: Acoustic neutrino detection is a promising approach for large-scale ultra-high\nenergy neutrino detectors in water. In\nthis article, a Monte Carlo simulation chain for acoustic neutrino detection\ndevices in water will be presented. The simulation chain covers the generation\nof the acoustic pulse produced by a neutrino interaction and its propagation to\nthe sensors within the detector. Currently, ambient and transient noise models\nfor the Mediterranean Sea and simulations of the data acquisition hardware,\nequivalent to the one used in ANTARES/AMADEUS, are implemented. A pre-selection\nscheme for neutrino-like signals based on matched filtering is employed, as it\nis used for on-line filtering. To simulate the whole processing chain for\nexperimental data, signal classification and acoustic source reconstruction\nalgorithms are integrated in an analysis chain. An overview of design and\ncapabilities of the simulation and analysis chain will be presented and\npreliminary studies will be discussed.", "category": "astro-ph_IM" }, { "text": "A new method of CCD dark current correction via extracting the dark\n information from scientific images: We have developed a new method to correct dark current at relatively high\ntemperatures for Charge-Coupled Device (CCD) images when dark frames cannot be\nobtained on the telescope. 
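A matched-filter pre-selection of neutrino-like acoustic signals, as used in the simulation chain above, can be illustrated with NumPy. The bipolar pulse template, injection amplitude and noise model below are simplifying assumptions, not the ANTARES/AMADEUS filter.

```python
import numpy as np

# Sketch of a matched-filter pre-selection (assumed template and noise;
# the actual on-line filter is more elaborate).
def matched_filter(trace, template):
    """Correlate a noisy pressure trace with a normalized pulse template."""
    tmpl = template - template.mean()
    tmpl /= np.sqrt(np.sum(tmpl**2))
    return np.correlate(trace - trace.mean(), tmpl, mode="same")

rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 2001)
template = -np.gradient(np.exp(-t**2 / 0.01))    # bipolar pulse shape
trace = rng.normal(0.0, 1.0, 20000)              # ambient noise
trace[10000:12001] += 5 * template               # injected "neutrino-like" pulse
snr = matched_filter(trace, template)
print(int(np.argmax(np.abs(snr))))               # peak near sample 11000
```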
For images taken with the Antarctic Survey\nTelescopes (AST3) in 2012, due to the low cooling efficiency, the median CCD\ntemperature was -46$^\circ$C, resulting in a high dark current level of about\n3$e^-$/pix/sec, even comparable to the sky brightness (10$e^-$/pix/sec). If not\ncorrected, the nonuniformity of the dark current could even outweigh the\nphoton noise of the sky background. However, dark frames could not be obtained\nduring the observing season because the camera was operated in frame-transfer\nmode without a shutter, and the telescope was unattended in winter. Here we\npresent an alternative, simple and effective method to derive the dark\ncurrent frame from the scientific images. Then we can scale this dark frame to\nthe temperature at which the scientific images were taken, and apply the dark\nframe corrections to the scientific images. We have applied this method to the\nAST3 data, and demonstrated that it can reduce the noise to a level roughly as\nlow as the photon noise of the sky brightness, solving the high noise problem\nand improving the photometric precision. This method will also be helpful for\nother projects that suffer from similar issues.", "category": "astro-ph_IM" }, { "text": "Exploiting the geomagnetic distortion of the inclined atmospheric\n showers: We propose a novel approach for the determination of the nature of ultra-high\nenergy cosmic rays by exploiting the geomagnetic deviation of muons in nearly\nhorizontal showers. The distribution of the muons at ground level is well\ndescribed by a simple parametrization providing a few shape parameters tightly\ncorrelated to $X^\mu_\mathrm{max}$, the depth of maximal muon production, which\nis a mass indicator tightly correlated to the usual parameter $X_\mathrm{max}$,\nthe depth of maximal development of the shower. We show that some constraints\ncan be set on the predictions of hadronic models, especially by combining the\ngeomagnetic distortion with standard measurement of the longitudinal profile.\nWe discuss the precision needed to obtain significant results and we propose a\nschematic layout of a detector.", "category": "astro-ph_IM" }, { "text": "Astrometric and photometric standard candidates for the upcoming 4-m\n ILMT survey: The International Liquid Mirror Telescope (ILMT) is a 4-meter class survey\ntelescope that has recently achieved first light and is expected to swing into\nfull operations by 1st January 2023. It scans the sky in a fixed 22' wide strip\ncentered at the declination of $+29^{\circ}21'41''$ and works in Time Delay\nIntegration (TDI) mode. We present a full catalog of sources in the ILMT strip\nthat can serve as astrometric calibrators. The characteristics of the sources\nfor astrometric calibration are extracted from Gaia EDR3 as it provides a very\nprecise measurement of astrometric properties such as RA ($\alpha$), Dec\n($\delta$), parallax ($\pi$), and proper motions ($\mu_{\alpha^{*}}$ &\n$\mu_{\delta}$). We have crossmatched the Gaia EDR3 with SDSS DR17 and\nPanSTARRS-1 (PS1) and supplemented the catalog with apparent magnitudes of\nthese sources in g, r, and i filters. We also present a catalog of\nspectroscopically confirmed white dwarfs with SDSS magnitudes that may serve as\nphotometric calibrators. The catalogs generated are stored in a SQLite database\nfor query-based access. 
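The SQLite storage of the ILMT calibrator catalogs supports simple query-based access of the kind sketched below. The table layout and column names here are hypothetical, chosen only to illustrate the pattern; an in-memory database stands in for the real catalog file.

```python
import sqlite3

# Hypothetical query pattern for a calibrator catalog stored in SQLite;
# the schema below is an assumption, not the published one.
con = sqlite3.connect(":memory:")          # stand-in for the catalog file
con.execute("CREATE TABLE calibrators (ra REAL, dec REAL, r_mag REAL)")
con.executemany(
    "INSERT INTO calibrators VALUES (?, ?, ?)",
    [(150.1, 29.36, 16.2), (151.4, 29.35, 17.8), (151.9, 29.37, 19.1)],
)

# Select bright astrometric calibrators in an RA window of the ILMT strip.
rows = con.execute(
    "SELECT ra, dec, r_mag FROM calibrators "
    "WHERE ra BETWEEN ? AND ? AND r_mag < ? ORDER BY r_mag",
    (150.0, 152.0, 18.0)).fetchall()
print(rows)   # [(150.1, 29.36, 16.2), (151.4, 29.35, 17.8)]
con.close()
```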
We also report the offsets in equatorial positions\ncompared to Gaia for an astrometrically calibrated TDI frame observed with the\nILMT.", "category": "astro-ph_IM" }, { "text": "The AstroSat Observatory: AstroSat is India's first Ultra-violet (UV) and X-ray astronomy observatory\nin space. The satellite was launched by the Indian Space Research Organisation\non a Polar Satellite Launch Vehicle on 28 September 2015 from Sriharikota Range\nnorth of Chennai on the eastern coast of India. AstroSat carries five\nscientific instruments and one auxiliary instrument. Four of these consist of\nco-aligned telescopes and detectors mounted on a common deck of the satellite\nto observe stars and galaxies simultaneously in the near- and far-UV\nwavelengths and a broad range of X-ray energies (0.3 to 80 keV). The fifth\ninstrument consists of three X-ray detectors and is mounted on a rotating\nplatform on a side that is oriented 90 degrees with respect to the other\ninstruments to scan the sky for X-ray transients. An auxiliary instrument\nmonitors the charged particle environment in the path of the satellite.", "category": "astro-ph_IM" }, { "text": "Recovering simulated planet and disk signals using SCALES aperture\n masking: The Slicer Combined with Array of Lenslets for Exoplanet Spectroscopy\n(SCALES) instrument is a lenslet-based integral field spectrograph that will\noperate at 2 to 5 microns, imaging and characterizing colder (and thus older)\nplanets than current high-contrast instruments. Its spatial resolution for\ndistant science targets and/or close-in disks and companions could be improved\nvia interferometric techniques such as sparse aperture masking. We introduce a\nnascent Python package, NRM-artist, that we use to design several SCALES masks\nto be non-redundant and to have uniform coverage in Fourier space. We generate\nhigh-fidelity mock SCALES data using the scalessim package for SCALES' low\nspectral resolution modes across its 2 to 5 micron bandpass. We include\nrealistic noise from astrophysical and instrument sources, including Keck\nadaptive optics and Poisson noise. We inject planet and disk signals into the\nmock datasets and subsequently recover them to test the performance of SCALES\nsparse aperture masking and to determine the sensitivity of various mask\ndesigns to different science signals.", "category": "astro-ph_IM" }, { "text": "Spatial intensity interferometry on three bright stars: The present article reports on the first spatial intensity interferometry\nmeasurements on stars since the observations at Narrabri Observatory by Hanbury\nBrown et al. in the 1970s. Taking advantage of the progress in recent years\nin photon-counting detectors and fast electronics, we were able to measure the\nzero-time delay intensity correlation $g^{(2)}(\tau = 0, r)$ between the light\ncollected by two 1-m optical telescopes separated by 15 m. Using two marginally\nresolved stars ($\alpha$ Lyr and $\beta$ Ori) with R magnitudes of 0.01 and\n0.13, respectively, we demonstrate that 4-hour correlation exposures provide\nreliable visibilities, whilst a significant loss of contrast is found on\n$\alpha$ Aur, in agreement with its binary-star nature.", "category": "astro-ph_IM" }, { "text": "Overview of lunar detection of ultra-high energy particles and new plans\n for the SKA: The lunar technique is a method for maximising the collection area for\nultra-high-energy (UHE) cosmic ray and neutrino searches. 
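The zero-delay intensity correlation measured above has a simple estimator on synchronized photon-count streams. The toy below simulates two detectors seeing common intensity fluctuations; the fluctuation level and count rates are arbitrary assumptions, and a real analysis would also correct for detector and accidental-coincidence effects.

```python
import numpy as np

# Toy estimator of the zero-delay intensity correlation g2(0) from two
# photon-count time series.
def g2_zero(n1, n2):
    """g2(0) = <n1 n2> / (<n1><n2>) for synchronized count streams."""
    return np.mean(n1 * n2) / (np.mean(n1) * np.mean(n2))

rng = np.random.default_rng(1)
intensity = 1.0 + 0.05 * rng.standard_normal(100000)   # common fluctuations
n1 = rng.poisson(5.0 * np.clip(intensity, 0, None))    # telescope 1 counts
n2 = rng.poisson(5.0 * np.clip(intensity, 0, None))    # telescope 2 counts
print(g2_zero(n1, n2))   # slightly above 1 for correlated (bunched) light
```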
The method uses\neither ground-based radio telescopes or lunar orbiters to search for Askaryan\nemission from particles cascading near the lunar surface. While experiments\nusing the technique have made important advances in the detection of\nnanosecond-scale pulses, only at the very highest energies has the lunar\ntechnique achieved competitive limits. This is expected to change with the\nadvent of the Square Kilometre Array (SKA), the low-frequency component of\nwhich (SKA-low) is predicted to be able to detect an unprecedented number of\nUHE cosmic rays.\n In this contribution, the status of lunar particle detection is reviewed,\nwith particular attention paid to outstanding theoretical questions, and the\ntechnical challenges of using a giant radio array to search for nanosecond\npulses. The activities of SKA's High Energy Cosmic Particles Focus Group are\ndescribed, as is a roadmap by which this group plans to incorporate this\ndetection mode into SKA-low observations. Estimates for the sensitivity of\nSKA-low phases 1 and 2 to UHE particles are given, along with the achievable\nscience goals with each stage. Prospects for near-future observations with\nother instruments are also described.", "category": "astro-ph_IM" }, { "text": "High-resolution wide-band Fast Fourier Transform spectrometers: We describe the performance of our latest generations of sensitive wide-band\nhigh-resolution digital Fast Fourier Transform Spectrometers (FFTS). Their\ndesign, optimized for a wide range of radio astronomical applications, is\npresented. Developed for operation with the GREAT far infrared heterodyne\nspectrometer on-board SOFIA, the eXtended bandwidth FFTS (XFFTS) offers a high\ninstantaneous bandwidth of 2.5 GHz with 88.5 kHz spectral resolution and has\nbeen in routine operation during SOFIA's Basic Science since July 2011. We\ndiscuss the advanced field programmable gate array (FPGA) signal processing\npipeline, with an optimized multi-tap polyphase filter bank algorithm that\nprovides a nearly loss-less time-to-frequency data conversion with\nsignificantly reduced frequency scalloping and fast sidelobe fall-off. Our\ndigital spectrometers have been proven to be extremely reliable and robust,\neven under the harsh environmental conditions of an airborne observatory, with\nAllan-variance stability times of several thousand seconds. An enhancement of\nthe present 2.5 GHz XFFTS will double the number of spectral channels (to 64k),\noffering spectroscopy with even better resolution during Cycle 1 observations.", "category": "astro-ph_IM" }, { "text": "The ADS All-Sky Survey: The ADS All-Sky Survey (ADSASS) is an ongoing effort aimed at turning the\nNASA Astrophysics Data System (ADS), widely known for its unrivaled value as a\nliterature resource for astronomers, into a data resource. The ADS is not a\ndata repository per se, but it implicitly contains valuable holdings of\nastronomical data, in the form of images, tables and object references\ncontained within articles. The objective of the ADSASS effort is to extract\nthese data and make them discoverable and available through existing data\nviewers. 
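The multi-tap polyphase filter bank at the heart of such FFT spectrometers can be prototyped in a few lines of NumPy. The sketch below assumes 4 taps, 1024 channels and a sinc-times-Hann prototype window; the real XFFTS implements this on an FPGA with different windowing details.

```python
import numpy as np

# Minimal polyphase filter bank (PFB) spectrometer sketch (assumed
# parameters, not the XFFTS firmware).
def pfb_spectrum(x, n_chan=1024, n_taps=4):
    win = np.sinc(np.arange(n_taps * n_chan) / n_chan - n_taps / 2)
    win *= np.hanning(n_taps * n_chan)          # prototype low-pass window
    n_frames = len(x) // n_chan - n_taps + 1
    spec = np.zeros(n_chan)
    for k in range(n_frames):
        seg = x[k * n_chan:(k + n_taps) * n_chan] * win
        frame = seg.reshape(n_taps, n_chan).sum(axis=0)   # sum the taps
        spec += np.abs(np.fft.fft(frame))**2
    return spec / n_frames

rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 0.1237 * np.arange(2**16)) + rng.normal(0, 1, 2**16)
print(np.argmax(pfb_spectrum(x)))   # channel near 0.1237 * 1024 ~ 127
```

Compared with a plain FFT, the tap summation sharply reduces scalloping loss between channels and speeds up sidelobe fall-off, which is the property the abstract highlights.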
The resulting ADSASS data layer promises to greatly enhance workflows\nand enable new research by tying astronomical literature and data assets into\none resource.", "category": "astro-ph_IM" }, { "text": "Modern middleware for the data acquisition of the Cherenkov Telescope\n Array: The data acquisition system (DAQ) of the future Cherenkov Telescope Array\n(CTA) must be efficient, modular and robust to be able to cope with the very\nlarge data rate of up to 550 Gbps coming from many telescopes with different\ncharacteristics. The use of modern middleware, namely ZeroMQ and Protocol\nBuffers, can help to achieve these goals while keeping the development effort\nto a reasonable level. Protocol Buffers are used as an on-line data format,\nwhile ZeroMQ is employed to communicate between processes. The DAQ will be\ncontrolled and monitored by the Alma Common Software (ACS). Protocol Buffers\nfrom Google are a way to define high-level data structures through an\ninterface description language (IDL) and a meta-compiler. ZeroMQ is a\nmiddleware that augments the capabilities of TCP/IP sockets. It does not\nimplement very high-level features like those found in CORBA for example, but\nmakes use of sockets easier, more robust and almost as effective as raw TCP.\nThe use of these two middlewares enabled us to rapidly develop a robust\nprototype of the DAQ including data persistence to compressed FITS files.", "category": "astro-ph_IM" }, { "text": "Geostationary Antenna for Disturbance-Free Laser Interferometry (GADFLI): We present a mission concept, the Geostationary Antenna for Disturbance-Free\nLaser Interferometry (GADFLI), for a space-based gravitational-wave\ninterferometer consisting of three satellites in geostationary orbit around the\nEarth. Compared to the nominal design of the Laser Interferometer Space Antenna\n(LISA), this concept has the advantages of significantly decreased requirements\non the telescope size and laser power, decreased launch mass, substantially\nimproved shot noise resulting from the shorter 73000 km armlengths, simplified\nand less expensive communications, and an overall lower cost which we (roughly)\nestimate at $1.2B. GADFLI preserves much of the science of LISA, particularly\nthe observation of massive black-hole binary coalescences, although the SNR is\ndiminished for all masses in the potential designs we consider.", "category": "astro-ph_IM" }, { "text": "Betelgeuse scope: Single-mode-fibers-assisted optical interferometer\n design for dedicated stellar activity monitoring: Betelgeuse has gone through a sudden shift in its brightness and dimmed\nmysteriously. This is likely caused by a hot blob of plasma ejected from\nBetelgeuse which then cooled into obscuring dust. If true, it is a remarkable\nopportunity to directly witness the formation of dust around a red supergiant\nstar. Today's optical telescope facilities are not optimized for time-evolution\nmonitoring of the Betelgeuse surface, so in this work, we propose a low-cost\noptical interferometer. The facility will consist of $12 \times 4$-inch optical\ntelescopes mounted on the surface of a large radio dish for interferometric\nimaging; polarization-maintaining single-mode fibers will carry the coherent\nbeams from the individual optical telescopes to an all-in-one beam combiner. A\nfast-steering-mirror-assisted fiber injection system guides the flux into\nfibers. 
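The ZeroMQ-plus-Protocol-Buffers pattern described above reduces to a handful of calls in practice. The sketch below uses a pyzmq PUSH/PULL pair within one process; raw bytes stand in for an encoded Protocol Buffers message, since the real message classes come from a compiled IDL definition not shown here.

```python
import zmq

# Sketch of inter-process messaging in the style described above (assumed
# endpoint and payload; the real CTA DAQ topology differs).
ctx = zmq.Context()

sender = ctx.socket(zmq.PUSH)        # e.g. a camera-server process
sender.bind("tcp://127.0.0.1:5555")

receiver = ctx.socket(zmq.PULL)      # e.g. an event-builder process
receiver.connect("tcp://127.0.0.1:5555")

sender.send(b"\x08\x2a")             # stand-in for a serialized protobuf event
print(receiver.recv())               # -> b'\x08*'

sender.close(); receiver.close(); ctx.term()
```

ZeroMQ handles reconnection and message framing, which is exactly the "easier, more robust" socket behavior the abstract credits it with.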
A metrology system senses vibration-induced piston errors in optical\nfibers, and these errors are corrected using fast-steering delay lines. We will\npresent the design.", "category": "astro-ph_IM" }, { "text": "STARFORGE: Toward a comprehensive numerical model of star cluster\n formation and feedback: We present STARFORGE (STAR FORmation in Gaseous Environments): a new\nnumerical framework for 3D radiation MHD simulations of star formation that\nsimultaneously follow the formation, accretion, evolution, and dynamics of\nindividual stars in massive giant molecular clouds (GMCs) while accounting for\nstellar feedback, including jets, radiative heating and momentum, stellar\nwinds, and supernovae. We use the GIZMO code with the MFM mesh-free Lagrangian\nMHD method, augmented with new algorithms for gravity, timestepping, sink\nparticle formation and accretion, stellar dynamics, and feedback coupling. We\nsurvey a wide range of numerical parameters/prescriptions for sink formation\nand accretion and find very small variations in star formation history and the\nIMF (except for intentionally-unphysical variations). Modules for\nmass-injecting feedback (winds, SNe, and jets) inject new gas elements\non-the-fly, eliminating the lack of resolution in diffuse feedback cavities\notherwise inherent in Lagrangian methods. The treatment of radiation uses\nGIZMO's radiative transfer solver to track 5 frequency bands (IR, optical, NUV,\nFUV, ionizing), coupling direct stellar emission and dust emission with gas\nheating and radiation pressure terms. We demonstrate accurate solutions for\nSNe, winds, and radiation in problems with known similarity solutions, and show\nthat our jet module is robust to resolution and numerical details, and agrees\nwell with previous AMR simulations. STARFORGE can scale up to massive ($>10^5\nM_\odot $) GMCs on current supercomputers while predicting the stellar\n($\gtrsim 0.1 M_\odot$) range of the IMF, permitting simulations of both high-\nand low-mass cluster formation in a wide range of conditions.", "category": "astro-ph_IM" }, { "text": "The Simons Observatory: A fully remote controlled calibration system\n with a sparse wire grid for cosmic microwave background telescopes: For cosmic microwave background (CMB) polarization observations, calibration\nof detector polarization angles is essential. We have developed a fully remote\ncontrolled calibration system with a sparse wire grid that reflects linearly\npolarized light along the wire direction. The new feature is a\nremote-controlled system for regular calibration, which was not possible with\nthe sparse wire grid calibrators of past experiments. The remote control can be\nachieved by two electric linear actuators that load or unload the sparse wire\ngrid into a position centered on the optical axis of a telescope between the\ncalibration time and CMB observation. Furthermore, the sparse wire grid can be\nrotated by a motor. A rotary encoder and a gravity sensor are installed on the\nsparse wire grid to monitor the wire direction. They allow us to achieve\ndetector angle calibration with an expected systematic error of $0.08^{\circ}$.\nThe calibration system will be installed in small-aperture telescopes at the\nSimons Observatory.", "category": "astro-ph_IM" }, { "text": "Ground Layer Adaptive Optics: PSF effects on ELT scales: To a certain extent, the behavior of the Adaptive Optics correction for\nExtremely Large Telescopes scales with the telescope diameter. 
But in Ground Layer\nAdaptive Optics, the combined effect of a large field of view and the large\noverlap of the guide stars' pupil footprints at high atmospheric altitudes\nchanges the behavior of the correction substantially, returning a very\ndifferent distribution of the energy as one moves from the familiar 8-10 m\ndiameters to 100 m. In this paper we identify the reasons for, and the nature\nof, these different behaviors.", "category": "astro-ph_IM" }, { "text": "The Generalized Spectral Kurtosis Estimator: Due to its conceptual simplicity and its proven effectiveness in real-time\ndetection and removal of radio frequency interference (RFI) from radio\nastronomy data, the Spectral Kurtosis (SK) estimator is likely to become a\nstandard tool of a new generation of radio telescopes. However, the SK\nestimator in its original form must be developed from instantaneous power\nspectral density (PSD) estimates, and hence cannot be employed as an RFI\nexcision tool downstream of the data pipeline in existing instruments where any\ntime averaging is performed. In this letter, we develop a generalized estimator\nwith wider applicability for both instantaneous and averaged spectral data,\nwhich extends its practical use to a much larger pool of radio instruments.", "category": "astro-ph_IM" }, { "text": "An Information Theory Approach on Deciding Spectroscopic Follow Ups: Classification and characterization of variable phenomena and transient\nphenomena are critical for astrophysics and cosmology. These objects are\ncommonly studied using photometric time series or spectroscopic data. Given\nthat many ongoing and future surveys operate in the time domain, and given that\nadding spectra provides further insight but requires more observational\nresources, it would be valuable to know which objects we should prioritize for\nspectroscopy in addition to time series. We propose a methodology in a\nprobabilistic setting that determines a priori which objects are worth taking a\nspectrum of, where we define 'insight' as the type of the object\n(classification). Objects whose spectra we query are reclassified\nusing their full spectral information. We first train two classifiers, one that\nuses photometric data and another that uses photometric and spectroscopic data\ntogether. Then for each photometric object we estimate the probability of each\npossible spectrum outcome. We combine these models in various probabilistic\nframeworks (strategies) which are used to guide the selection of follow-up\nobservations. The best strategy depends on the intended use, whether it is\ngetting more confidence or accuracy. For a given number of candidate objects\n(127, equal to 5% of the dataset) for taking spectra, we improve class\nprediction accuracy by 37%, as opposed to the 20% achieved by the best\nnon-naive (non-random) baseline strategy. Our approach provides a general\nframework for follow-up strategies and can be extended beyond classification to\ninclude other forms of follow-up beyond spectroscopy.", "category": "astro-ph_IM" }, { "text": "Robust dimensionality reduction for interferometric imaging of Cygnus A: Extremely high data rates expected in next-generation radio interferometers\nnecessitate a fast and robust way to process measurements in a big data\ncontext. Dimensionality reduction can alleviate the computational load needed\nto process these data, in terms of both computing speed and memory usage. 
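The generalized SK estimator has a compact closed form (Nita & Gary 2010) that applies to spectra that have already been averaged on board, which is what makes it usable downstream of the data pipeline. The sketch below implements that published formula; the noise and RFI simulation around it is a toy assumption.

```python
import numpy as np

# Generalized Spectral Kurtosis estimator (Nita & Gary 2010):
#   SK = (M*N*d + 1)/(M - 1) * (M * S2 / S1**2 - 1),
# with S1, S2 the sums of M averaged power samples and of their squares,
# N the number of on-board averages, and d the shape factor (d = 1 for
# instantaneous PSD estimates).  E[SK] = 1 for RFI-free Gaussian noise.
def generalized_sk(power, N=1, d=1.0):
    M = len(power)
    s1 = np.sum(power)
    s2 = np.sum(power**2)
    return (M * N * d + 1) / (M - 1) * (M * s2 / s1**2 - 1)

rng = np.random.default_rng(3)
clean = rng.exponential(1.0, 1024)                 # Gaussian noise channel
rfi = clean + (rng.random(1024) < 0.01) * 50.0     # sparse strong RFI added
print(generalized_sk(clean), generalized_sk(rfi))  # ~1 versus >> 1
```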
In this\narticle, we present image reconstruction results from highly reduced\nradio-interferometric data, following our previously proposed data\ndimensionality reduction method, $\mathrm{R}_{\mathrm{sing}}$, based on\nstudying the distribution of the singular values of the measurement operator.\nThis method comprises a simple weighted, subsampled discrete Fourier transform\nof the dirty image. Additionally, we show that an alternative gridding-based\nreduction method works well for target data sizes of the same order as the\nimage size. We reconstruct images from well-calibrated VLA data to showcase the\nrobustness of our proposed method down to very low data sizes in a 'real data'\nsetting. We show through comparisons with the conventional reduction method of\ntime- and frequency-averaging, that our proposed method produces more accurate\nreconstructions while reducing data size much further, and is particularly\nrobust when data sizes are aggressively reduced to low fractions of the image\nsize. $\mathrm{R}_{\mathrm{sing}}$ can function in a block-wise fashion, and\ncould be used in the future to process incoming data by blocks in real-time,\nthus opening up the possibility of performing 'on-line' imaging as the data are\nbeing acquired. MATLAB code for the proposed dimensionality reduction method is\navailable on GitHub.", "category": "astro-ph_IM" }, { "text": "Tidal Accelerometry: Exploring the Cosmos Via Gravitational Correlations: Newtonian gravitation is non-radiative but is extremely pervasive and\npenetrates equally into every medium because it cannot be shielded.\nExtraterrestrial gravity is responsible for the Earth's trajectory. However,\nits correlation, or geodesic deviation, is manifested as semi-diurnal and\ndiurnal tides. Tidal signals, A(t), are temporal modulations of the field\ndifferential that can be observed in a wide variety of natural and laboratory\nsituations. A(t) is a quasi-static, low-frequency signal which arises from the\nrelative changes in positions of the detector and source and is not part of\nthe electromagnetic spectrum. Isaac Newton was the first to recognize the\nimportance of tides in astrometry and attempted to estimate the lunar mass\nfrom ocean tides. Through a case study we show how the systematics of the\ngravitational correlation can be used for calibration and de-trending, which\ncan significantly increase the confidence level of high precision experiments.\nA(t) can also be used to determine the distribution of celestial masses\nindependently of the \"1-2-3\" law. Guided by modern advances in\ngravitational-wave detectors we argue that it is important to develop high\nprecision accelerometry. With a resolution of about a nanometre it will be\npossible to determine solar system masses and detect the SMBH at the center of\nour galaxy. Observations of the gravitational correlation can potentially open\nup yet-to-be-explored vistas of the cosmos.", "category": "astro-ph_IM" }, { "text": "The miniJPAS Survey: A Study on Wavelength Dependence of the Photon\n Response Non-uniformity of the JPAS-{\it Pathfinder} Camera: Understanding the origins of small-scale flats of CCDs and their\nwavelength-dependent variations plays an important role in high-precision\nphotometric, astrometric, and shape measurements of astronomical objects. Based\non the unique flat data of 47 narrow-band filters provided by JPAS-{\it\nPathfinder}, we analyze the variations of small-scale flats as a function of\nwavelength. 
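The core of the reduction idea above, a weighted, subsampled Fourier transform of the dirty image, can be caricatured in a few lines. The weights below are mock stand-ins for the singular-value-based weights of $\mathrm{R}_{\mathrm{sing}}$ (which derive from the actual measurement operator), so this is a schematic of the data flow, not the published method.

```python
import numpy as np

# Schematic weighted, subsampled DFT reduction of a dirty image
# (mock weights; the real R_sing weights come from the measurement operator).
def reduce_dirty_image(dirty, weights, keep_fraction=0.05):
    vis = np.fft.fft2(dirty)                    # back to the Fourier plane
    flat = (weights * vis).ravel()
    order = np.argsort(np.abs(weights).ravel())[::-1]
    sel = order[: int(keep_fraction * flat.size)]
    return flat[sel], sel                       # reduced data + embedding

rng = np.random.default_rng(4)
dirty = rng.normal(size=(256, 256))
weights = np.abs(np.fft.fft2(rng.normal(size=(256, 256))))   # mock weights
y_red, idx = reduce_dirty_image(dirty, weights)
print(y_red.size, "of", dirty.size, "coefficients kept")
```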
We find moderate variations (from about $1.0\%$ at 390 nm to\n$0.3\%$ at 890 nm) of small-scale flats among different filters, increasing\ntowards shorter wavelengths. Small-scale flats of two filters with close\ncentral wavelengths are strongly correlated. We then use a simple physical\nmodel to reproduce the observed variations to a precision of about $\pm\n0.14\%$, by considering the variations of charge collection efficiencies,\neffective areas and thicknesses between CCD pixels. We find that the\nwavelength-dependent variations of small-scale flats of the JPAS-{\it\nPathfinder} camera originate from inhomogeneities of the quantum efficiency\n(particularly charge collection efficiency) as well as the effective area and\nthickness of CCD pixels. The former dominates the variations at short\nwavelengths while the latter two dominate at longer wavelengths. The effects on\nproper flat-fielding as well as on photometric/flux calibrations for\nphotometric/slit-less spectroscopic surveys are discussed, particularly in blue\nfilters/wavelengths. We also find that different model parameters are sensitive\nto flats of different wavelengths, depending on the relations between the\nelectron absorption depth, the photon absorption length and the CCD thickness.\nIn order to model the wavelength-dependent variations of small-scale flats, a\nsmall number (around ten) of small-scale flats with well-selected wavelengths\nis sufficient to reconstruct small-scale flats at other wavelengths.", "category": "astro-ph_IM" }, { "text": "Forward Global Photometric Calibration of the Dark Energy Survey: Many scientific goals for the Dark Energy Survey (DES) require calibration of\noptical/NIR broadband $b = grizY$ photometry that is stable in time and uniform\nover the celestial sky to one percent or better. It is also necessary to limit\nto similar accuracy systematic uncertainty in the calibrated broadband\nmagnitudes due to uncertainty in the spectrum of the source. Here we present a\n\"Forward Global Calibration Method (FGCM)\" for photometric calibration of the\nDES, and we present results of its application to the first three years of the\nsurvey (Y3A1). The FGCM combines data taken with auxiliary instrumentation at\nthe observatory with data from the broad-band survey imaging itself and models\nof the instrument and atmosphere to estimate the spatial- and time-dependence\nof the passbands of individual DES survey exposures. \"Standard\" passbands are\nchosen that are typical of the passbands encountered during the survey. The\npassband of any individual observation is combined with an estimate of the\nsource spectral shape to yield a magnitude $m_b^{\mathrm{std}}$ in the standard\nsystem. This \"chromatic correction\" to the standard system is necessary to\nachieve sub-percent calibrations. The FGCM achieves reproducible and stable\nphotometric calibration of standard magnitudes $m_b^{\mathrm{std}}$ of stellar\nsources over the multi-year Y3A1 data sample with residual random calibration\nerrors of $\sigma=5-6\,\mathrm{mmag}$ per exposure. The accuracy of the\ncalibration is uniform across the $5000\,\mathrm{deg}^2$ DES footprint to\nwithin $\sigma=7\,\mathrm{mmag}$. The systematic uncertainties of magnitudes in\nthe standard system due to the spectra of sources are less than\n$5\,\mathrm{mmag}$ for main sequence stars with $0.5 < g-i < 3.0$.
", "category": "astro-ph_IM" }, { "text": "Prediction on detection and characterization of Galactic disk\n microlensing events by LSST: The upcoming LSST survey gives an unprecedented opportunity for studying\npopulations of intrinsically faint objects using the microlensing technique.\nIts large field of view and aperture allow effective time-series observations\nof many stars in the Galactic disk and bulge. Here, we combine Galactic models\n(for |b|<10 deg) and simulations of LSST observations to study how different\nobserving strategies affect the number and properties of microlensing events\ndetected by LSST. We predict that LSST will mostly observe long-duration\nmicrolensing events due to source stars with average magnitudes around 22 in\nthe r-band, rather than high-magnification events due to fainter source stars.\nIn Galactic bulge fields, LSST should detect on the order of 400 microlensing\nevents per square degree as compared to 15 in disk fields. Improving the\ncadence increases the number of detectable microlensing events, e.g., improving\nthe cadence from 6 to 2 days approximately doubles the number of microlensing\nevents throughout the Galaxy. According to the current LSST strategy, it will\nobserve some fields 900 times during a 10-year survey with an average cadence\nof ~4 days (I) and other fields (mostly toward the Galactic disk) around 180\ntimes during a 1-year survey with an average 1-day cadence (II). We anticipate\nthat the numbers of events corresponding to these strategies are 7900 and\n34000, respectively. Toward similar lines of sight, LSST with the first\nobserving strategy (I) will detect more and on average longer microlensing\nevents than those observable with the second strategy. If LSST spends enough\ntime observing near the Galactic plane, then the large number of microlensing\nevents will allow studying the Galactic distribution of planets and finding\nisolated black holes among a wealth of other science cases.", "category": "astro-ph_IM" }, { "text": "Evaluating the efficacy of sonification for signal detection in\n univariate, evenly sampled light curves using astronify: Sonification is the technique of representing data with sound, with potential\napplications in astronomy research for aiding discovery and accessibility.\nSeveral astronomy-focused sonification tools have been developed; however,\nefficacy testing is extremely limited. We performed testing of astronify, a\nprototype tool for sonification functionality within the Barbara A. Mikulski\nArchive for Space Telescopes (MAST). We created synthetic light curves\ncontaining zero, one, or two transit-like signals with a range of\nsignal-to-noise ratios (SNRs=3-100) and applied the default mapping of\nbrightness to pitch. We performed remote testing, asking participants to count\nsignals when presented with light curves as a sonification, visual plot, or\ncombination of both. We obtained 192 responses, of which 118 self-classified as\nexperts in astronomy and data analysis. For high SNRs (=30 and 100), experts\nand non-experts performed well with sonified data (85-100% successful signal\ncounting). At low SNRs (=3 and 5) both groups were consistent with guessing\nwith sonifications. At medium SNRs (=7 and 10), experts performed no better\nthan non-experts with sonifications but significantly better (factor of ~2-3)\nwith visuals. 
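A brightness-to-pitch mapping of the kind tested above is straightforward to sketch. The note range and the linear scaling below are assumptions for illustration, not astronify's exact defaults.

```python
import numpy as np

# Toy brightness-to-pitch mapping in the spirit of the default tested above
# (frequency range and linear scaling are assumptions).
def flux_to_frequency(flux, f_lo=200.0, f_hi=1000.0):
    """Map normalized flux linearly onto a frequency interval in Hz."""
    f = (flux - flux.min()) / np.ptp(flux)
    return f_lo + f * (f_hi - f_lo)

time = np.linspace(0, 10, 300)
flux = 1.0 - 0.02 * np.exp(-0.5 * ((time - 5) / 0.2) ** 2)   # transit-like dip
freqs = flux_to_frequency(flux)
print(freqs.min(), freqs.max())   # the dip maps to the lowest pitch
```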
We infer that sonification training, like that experienced by\nexperts for visual data inspection, will be important if this sonification\nmethod is to be useful for moderate SNR signal detection within astronomical\narchives and broader research. Nonetheless, we show that even a very simple,\nand non-optimised, sonification approach allows users to identify high SNR\nsignals. A more optimised approach, for which we present ideas, would likely\nyield higher success for lower SNR signals.", "category": "astro-ph_IM" }, { "text": "A high precision technique to correct for residual atmospheric\n dispersion in high-contrast imaging systems: Direct detection and spectroscopy of exoplanets require high contrast\nimaging. For habitable exoplanets in particular, located at small angular\nseparation from the host star, it is crucial to employ small inner working\nangle (IWA) coronagraphs that efficiently suppress starlight. These\ncoronagraphs, in turn, require careful control of the wavefront, which directly\nimpacts their performance. For ground-based telescopes, atmospheric refraction\nis also an important factor, since it results in a smearing of the PSF that\ncan no longer be efficiently suppressed by the coronagraph. Traditionally,\natmospheric refraction is compensated for by an atmospheric dispersion\ncompensator (ADC). ADC control relies on an a priori model of the atmosphere\nwhose parameters are solely based on the pointing of the telescope, which can\nresult in imperfect compensation. For a high contrast instrument like the\nSubaru Coronagraphic Extreme Adaptive Optics (SCExAO) system, which employs\nvery small IWA coronagraphs, refraction-induced smearing of the PSF has to be\nless than 1 mas in the science band for optimum performance. In this paper, we\npresent the first on-sky measurement and correction of residual atmospheric\ndispersion. Atmospheric dispersion is measured from the science image directly,\nusing an adaptive grid of artificially introduced speckles as a diagnostic to\nfeed back to the telescope's ADC. With our current setup, we were able to\nreduce the initial residual atmospheric dispersion from 18.8 mas to 4.2 mas in\nbroadband light (y- to H-band), and to 1.4 mas in H-band only. This work is\nparticularly relevant to the upcoming extremely large telescopes (ELTs) that\nwill require fine control of their ADC to reach their full high contrast\nimaging potential.", "category": "astro-ph_IM" }, { "text": "On the coherence loss in phase-referenced VLBI observations: Context: Phase referencing is a standard calibration technique in radio\ninterferometry, particularly suited for the detection of weak sources close to\nthe sensitivity limits of the interferometers. However, effects from a changing\natmosphere and inaccuracies in the correlator model may affect the\nphase-referenced images, leading to wrong estimates of source flux densities\nand positions. A systematic observational study of signal decoherence in phase\nreferencing, and its effects in the image plane, has not been performed yet.\n Aims: We systematically studied how the signal coherence in\nVery-Long-Baseline-Interferometry (VLBI) observations is affected by a\nphase-reference calibration at different frequencies and for different\ncalibrator-to-target separations. 
The results obtained should be of interest\nfor a correct interpretation of many phase-referenced observations with VLBI.\n Methods: We observed a set of 13 strong sources (the S5 polar cap sample) at\n8.4 and 15 GHz in phase-reference mode, with 32 different calibrator/target\ncombinations spanning angular separations between 1.5 and 20.5 degrees. We\nobtained phase-referenced images and studied how the dynamic range and peak\nflux density depend on observing frequency and source separation.\n Results: We obtained dynamic ranges and peak flux densities of the\nphase-referenced images as a function of frequency and separation from the\ncalibrator. We compared our results with models and phenomenological equations\npreviously reported.\n Conclusions: The dynamic range of the phase-referenced images is strongly\nlimited by the atmosphere at all frequencies and for all source separations.\nThe limiting dynamic range is inversely proportional to the sine of the\ncalibrator-to-target separation. We also find that the peak flux densities,\nrelative to those obtained with the self-calibrated images, decrease with\nsource separation.", "category": "astro-ph_IM" }, { "text": "Optimization of an Optical Testbed for Characterization of EXCLAIM\n u-Spec Integrated Spectrometers: We describe a testbed to characterize the optical response of compact\nsuperconducting on-chip spectrometers in development for the Experiment for\nCryogenic Large-Aperture Intensity Mapping (EXCLAIM) mission. EXCLAIM is a\nballoon-borne far-infrared experiment to probe the CO and CII emission lines in\ngalaxies from redshift 3.5 to the present. The spectrometer, called u-Spec,\ncomprises a diffraction grating on a silicon chip coupled to kinetic inductance\ndetectors (KIDs) read out via a single microwave feedline. We use a prototype\nspectrometer for EXCLAIM to demonstrate our ability to characterize the\nspectrometer's spectral response using a photomixer source. We utilize an\non-chip reference detector to normalize relative to spectral structure from the\noff-chip optics and a silicon etalon to calibrate the absolute frequency.", "category": "astro-ph_IM" }, { "text": "Characterization of Skipper CCDs for Cosmological Applications: We characterize the response of a novel 250 $\mu$m thick, fully-depleted\nSkipper Charge-Coupled Device (CCD) to visible/near-infrared light with a\nfocus on potential applications for astronomical observations. We achieve\nstable, single-electron resolution with readout noise $\sigma \sim 0.18$\ne$^{-}$ rms/pix from 400 non-destructive measurements of the charge in each\npixel. We verify that the gain derived from photon transfer curve measurements\nagrees with the gain calculated from the quantized charge of individual\nelectrons to within < 1%. We also perform relative quantum efficiency\nmeasurements and demonstrate high relative quantum efficiency at\noptical/near-infrared wavelengths, as is expected for a thick, fully depleted\ndetector. Finally, we demonstrate the ability to perform multiple\nnon-destructive measurements and achieve sub-electron readout noise over\nconfigurable subregions of the detector. 
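The scaling reported above, a limiting dynamic range inversely proportional to the sine of the calibrator-to-target separation, lends itself to a one-parameter least-squares check. The mock data and single global amplitude below are assumptions for illustration; the published analysis treats each band separately.

```python
import numpy as np

# Fit D = k / sin(separation) to mock dynamic-range measurements.
def fit_dr_model(sep_deg, dynamic_range):
    x = 1.0 / np.sin(np.radians(sep_deg))
    return np.sum(x * dynamic_range) / np.sum(x * x)   # least-squares k

rng = np.random.default_rng(5)
sep = np.array([1.5, 3.0, 5.0, 8.0, 12.0, 20.5])       # degrees
dr = 40.0 / np.sin(np.radians(sep)) * (1 + 0.1 * rng.standard_normal(sep.size))
print(fit_dr_model(sep, dr))                            # recovers k ~ 40
```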
This work is the first step toward\ndemonstrating the utility of Skipper CCDs for future astronomical and\ncosmological applications.", "category": "astro-ph_IM" }, { "text": "The Possible Detection of Dark Energy on Earth Using Atom Interferometry: This paper describes the concept and the beginning of an experimental\ninvestigation of whether it is possible to directly detect dark energy density\non Earth using atom interferometry. The concept is to null out the\ngravitational force using a double interferometer. This research provides a\nnon-astronomical path for research on dark energy. The application of this\nmethod to other hypothetical weak forces and fields is also discussed. In the\nfinal section I discuss the advantages of carrying out a dark energy density\nsearch in a satellite in Earth orbit, where more precise nulling of\ngravitational forces can be achieved.", "category": "astro-ph_IM" }, { "text": "Deformable mirror-based pupil chopping for exoplanet imaging and\n adaptive optics: Due to turbulence in the atmosphere, images taken by ground-based telescopes\nbecome distorted. With adaptive optics (AO), images can be given greater\nclarity, allowing for better observations with existing telescopes; AO is also\nessential for ground-based coronagraphic exoplanet imaging instruments. A\ndisadvantage of many AO systems is that they use sensors that cannot correct\nfor non-common path aberrations. We have developed a new focal plane wavefront\nsensing technique to address this problem, called deformable mirror (DM)-based\npupil chopping. The process involves a coronagraphic or non-coronagraphic\nscience image and a deformable mirror, which modulates the phase by applying a\nlocal tip/tilt every other frame, enabling correction of leftover aberrations\nin the wavefront after a conventional AO correction. We validate this technique\nwith both simulations (for coronagraphic and non-coronagraphic images) and\ntesting (for non-coronagraphic images) on UCSC's Santa Cruz Extreme AO\nLaboratory (SEAL) testbed. We demonstrate that with as low as 250 nm of DM\nstroke to apply the local tip/tilt this wavefront sensor is linear for\nlow-order Zernike modes and enables real-time control, in principle up to kHz\nspeeds to correct for residual atmospheric turbulence.", "category": "astro-ph_IM" }, { "text": "Collision-free motion planning for fiber positioner robots:\n discretization of velocity profiles: The next generation of large-scale spectroscopic survey experiments such as\nDESI will use thousands of fiber positioner robots packed on a focal plate. In\norder to maximize the observing time with this robotic system we need to move\nthe fiber ends of all positioners in parallel from the previous to the next\ntarget coordinates. Direct trajectories are not feasible due to collision risks\nthat could damage the robots and impact the survey operation and\nperformance. We have previously developed a motion planning method based on a\nnovel decentralized navigation function for collision-free coordination of\nfiber positioners. The navigation function takes into account the configuration\nof positioners as well as their envelope constraints. The motion planning\nscheme has linear complexity and short motion duration (~2.5 seconds with a\nmaximum positioner speed of 30 rpm), which is independent of the number\nof positioners. 
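The sub-electron noise quoted for the Skipper CCD above follows from plain averaging of repeated non-destructive reads: with N independent samples of the same pixel charge, the effective read noise falls as sigma_1/sqrt(N). The single-read noise below is an assumed value chosen so that 400 reads land near the quoted 0.18 e- rms.

```python
import numpy as np

# Monte Carlo check of sigma_N = sigma_1 / sqrt(N) for repeated reads
# (sigma_1 = 3.6 e- is an assumption for illustration).
rng = np.random.default_rng(6)
sigma_1 = 3.6                       # single-read noise [e- rms], assumed
true_charge = 42.0                  # electrons in the pixel
for n_samples in (1, 16, 400):
    reads = true_charge + rng.normal(0.0, sigma_1, (100000, n_samples))
    est = reads.mean(axis=1)        # per-pixel charge estimate
    print(n_samples, round(est.std(), 3))   # ~3.6, ~0.9, ~0.18 e- rms
```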
These two key advantages of the decentralization make the\nmethod a promising solution for the collision-free motion-planning problem\nin the next generation of fiber-fed spectrographs. In a framework where a\ncentralized computer communicates with the positioner robots, communication\noverhead can be reduced significantly by using velocity profiles consisting of\na few bits only. We present here the discretization of velocity profiles to\nensure the feasibility of real-time coordination for a large number of\npositioners. The modified motion planning method that generates piecewise\nlinearized position profiles guarantees collision-free trajectories for all the\nrobots. The velocity profiles fit in a few bits at the expense of higher\ncomputational costs.", "category": "astro-ph_IM" }, { "text": "Gaia space mission and quasars: Quasars are often considered to be point-like objects. This is largely true\nand allows for an excellent alignment of the optical positional reference frame\nof the ongoing ESA mission Gaia with the International Celestial Reference\nFrame. But the presence of optical jets in quasars can cause shifts of the\noptical photo-centers at levels detectable by Gaia. Similarly, the motion of\nemitting blobs in the jet can be detected as proper motion shifts. Gaia's\nmeasurements of the spectral energy distributions of around a million distant\nquasars are useful to determine their redshifts and to assess their variability\non timescales from hours to years. The spatial resolution of Gaia makes it\npossible to build a complete magnitude-limited sample of strongly lensed\nquasars. The mission had its first public data release in September 2016 and is\nscheduled to have the next and much more comprehensive one in April 2018. Here\nwe briefly review the capabilities and current results of the mission. Gaia's\nunique contributions to the studies of quasars are already being published, a\nhighlight being the discovery of a number of quasars with optical jets.", "category": "astro-ph_IM" }, { "text": "Adapting astronomical source detection software to help detect animals\n in thermal images obtained by unmanned aerial systems: In this paper we describe an unmanned aerial system equipped with a\nthermal-infrared camera and software pipeline that we have developed to monitor\nanimal populations for conservation purposes. Taking a multi-disciplinary\napproach to tackle this problem, we use freely available astronomical source\ndetection software and the associated expertise of astronomers, to efficiently\nand reliably detect humans and animals in aerial thermal-infrared footage.\nCombining this astronomical detection software with existing machine learning\nalgorithms into a single, automated, end-to-end pipeline, we test the software\nusing aerial video footage taken in a controlled, field-like environment. We\ndemonstrate that the pipeline works reliably and describe how it can be used to\nestimate the completeness of different observational datasets to objects of a\ngiven type as a function of height, observing conditions etc. -- a crucial step\nin converting video footage to scientifically useful information such as the\nspatial distribution and density of different animal species. 
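Discretizing a positioner velocity profile so it fits in a few bits, as discussed above, amounts to uniform quantization of the speed command at each time step. The bit depth, step length and smooth ramp below are assumptions for illustration, not the published scheme.

```python
import numpy as np

# Quantize a velocity profile to n_bits levels for cheap broadcast
# (assumed levels and timing, not the published discretization).
def quantize_profile(v, n_bits=3, v_max=30.0):
    levels = 2**n_bits - 1
    codes = np.rint(np.clip(v, 0, v_max) / v_max * levels).astype(np.uint8)
    return codes, codes / levels * v_max        # codes to send, decoded rpm

t = np.linspace(0, 2.5, 126)                    # 2.5 s move, 20 ms steps
v = 30.0 * np.sin(np.pi * t / 2.5) ** 2         # smooth ramp up/down [rpm]
codes, v_q = quantize_profile(v)
print(codes.nbytes, "bytes;", round(np.max(np.abs(v - v_q)), 2), "rpm max error")
```

The trade-off named in the abstract is visible here: coarser codes shrink the message but force the planner to re-verify that the quantized, piecewise-linear trajectories remain collision-free.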
Finally, having\ndemonstrated the potential utility of the system, we describe the steps we are\ntaking to adapt the system for work in the field, in particular systematic\nmonitoring of endangered species at National Parks around the world.", "category": "astro-ph_IM" }, { "text": "The High Inclination Solar Mission: The High Inclination Solar Mission (HISM) is a concept for an\nout-of-the-ecliptic mission for observing the Sun and the heliosphere. The\nmission profile is largely based on the Solar Polar Imager concept: initially\nspiraling in to a 0.48 AU ecliptic orbit, then increasing the orbital\ninclination at a rate of $\sim 10$ degrees per year, ultimately reaching a\nheliographic inclination of $>$75 degrees. The orbital profile is achieved\nusing solar sails derived from the technology being developed for the Solar\nCruiser mission.\n HISM remote sensing instruments comprise an imaging spectropolarimeter\n(Doppler imager / magnetograph) and a visible light coronagraph. The in-situ\ninstruments include a Faraday cup, an ion composition spectrometer, and\nmagnetometers. Plasma wave measurements are made with electrical antennas and\nhigh-speed magnetometers.\n The $7,000\,\mathrm{m}^2$ sail used in the mission assessment is a direct\nextension of the 4-quadrant $1,666\,\mathrm{m}^2$ Solar Cruiser design and\nemploys the same type of high-strength composite boom, deployment mechanism,\nand membrane technology. The sail system modelled is spun (~1 rpm) to ensure\nthe required boom characteristics with margin. The spacecraft bus features a\nfine-pointing 3-axis stabilized instrument platform that allows full science\nobservations as soon as the spacecraft reaches a solar distance of 0.48 AU.", "category": "astro-ph_IM" }, { "text": "Removing visual bias in filament identification: a new goodness-of-fit\n measure: Different combinations of input parameters to filament identification\nalgorithms, such as Disperse and FilFinder, produce numerous different output\nskeletons. The skeletons are a one-pixel-wide representation of the filamentary\nstructure in the original input image. However, these output skeletons may not\nnecessarily be a good representation of that structure. Furthermore, a given\nskeleton may not be as good a representation as another. Previously there has\nbeen no mathematical `goodness-of-fit' measure to compare output skeletons to\nthe input image. Thus far this has been assessed visually, introducing visual\nbias. We propose the application of the mean structural similarity index\n(MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the\nMSSIM to find the output skeletons most mathematically similar to the original\ninput image (the optimum, or `best', skeletons) for a given algorithm, and\nindependently of the algorithm. This measure makes possible systematic\nparameter studies, aimed at finding the subset of input parameter values\nreturning optimum skeletons. It can also be applied to the output of\nnon-skeleton based filament identification algorithms, such as the Hessian\nmatrix method. The MSSIM removes the need to visually examine thousands of\noutput skeletons, and eliminates the visual bias, subjectivity, and limited\nreproducibility inherent in that process, representing a major improvement on\nexisting techniques. 
Importantly, it also allows further automation in the\npost-processing of output skeletons, which is crucial in this era of `big\ndata'.", "category": "astro-ph_IM" }, { "text": "Viewpoints: A high-performance high-dimensional exploratory data\n analysis tool: Scientific data sets continue to increase in both size and complexity. In the\npast, dedicated graphics systems at supercomputing centers were required to\nvisualize large data sets, but as the price of commodity graphics hardware has\ndropped and its capability has increased, it is now possible, in principle, to\nview large complex data sets on a single workstation. To do this in practice,\nan investigator will need software that is written to take advantage of the\nrelevant graphics hardware. The Viewpoints visualization package described\nherein is an example of such software. Viewpoints is an interactive tool for\nexploratory visual analysis of large, high-dimensional (multivariate) data. It\nleverages the capabilities of modern graphics boards (GPUs) to run on a single\nworkstation or laptop. Viewpoints is minimalist: it attempts to do a small set\nof useful things very well (or at least very quickly) in comparison with\nsimilar packages today. Its basic feature set includes linked scatter plots\nwith brushing, dynamic histograms, normalization and outlier detection/removal.\nViewpoints was originally designed for astrophysicists, but it has since been\nused in a variety of fields that range from astronomy, quantum chemistry, fluid\ndynamics, machine learning, bioinformatics, and finance to information\ntechnology server log mining. In this article, we describe the Viewpoints\npackage and show examples of its usage.", "category": "astro-ph_IM" }, { "text": "Overview of the Instrumentation for the Dark Energy Spectroscopic\n Instrument: The Dark Energy Spectroscopic Instrument (DESI) has embarked on an ambitious\nfive-year survey to explore the nature of dark energy with spectroscopy of 40\nmillion galaxies and quasars. DESI will determine precise redshifts and employ\nthe Baryon Acoustic Oscillation method to measure distances from the nearby\nuniverse to z > 3.5, as well as measure the growth of structure and probe\npotential modifications to general relativity. In this paper we describe the\nsignificant instrumentation we developed for the DESI survey. The new\ninstrumentation includes a wide-field, 3.2-deg diameter prime-focus corrector\nthat focuses the light onto 5020 robotic fiber positioners on the 0.812 m\ndiameter, aspheric focal surface. The positioners and their fibers are divided\namong ten wedge-shaped petals. Each petal is connected to one of ten\nspectrographs via a contiguous, high-efficiency, nearly 50 m fiber cable\nbundle. The ten spectrographs each use a pair of dichroics to split the light\ninto three channels that together record the light from 360 - 980 nm with a\nresolution of 2000 to 5000. We describe the science requirements, technical\nrequirements on the instrumentation, and management of the project. DESI was\ninstalled at the 4-m Mayall telescope at Kitt Peak, and we also describe the\nfacility upgrades to prepare for DESI and the installation and functional\nverification process. DESI has achieved all of its performance goals, and the\nDESI survey began in May 2021. 
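Ranking candidate skeletons by mean structural similarity, as the MSSIM-based measure above proposes, is directly available through scikit-image. The mock image and thresholded "skeletons" below are stand-ins; real use would compare the input map with each algorithm's output skeleton.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Score candidate skeletons against the input image with mean SSIM
# (mock data; thresholded maps stand in for algorithm output skeletons).
rng = np.random.default_rng(7)
image = rng.random((128, 128))
skeleton_a = (image > 0.80).astype(float)
skeleton_b = (image > 0.95).astype(float)

scores = {name: structural_similarity(image, skel, data_range=1.0)
          for name, skel in [("a", skeleton_a), ("b", skeleton_b)]}
best = max(scores, key=scores.get)
print(scores, "-> optimum skeleton:", best)
```

Because the comparison is a single scalar per skeleton, thousands of parameter combinations can be scored and sorted automatically, which is the automation the text points to.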
Some performance highlights include RMS\npositioner accuracy better than 0.1\", SNR per \sqrt{\AA} > 0.5 for a z > 2\nquasar with flux 0.28e-17 erg/s/cm^2/A at 380 nm in 4000s, and median SNR = 7\nof the [OII] doublet at 8e-17 erg/s/cm^2 in a 1000s exposure for emission line\ngalaxies at z = 1.4 - 1.6. We conclude with highlights from the on-sky\nvalidation and commissioning of the instrument, key successes, and lessons\nlearned. (abridged)", "category": "astro-ph_IM" }, { "text": "Generating artificial light curves: Revisited and updated: The production of artificial light curves with known statistical and\nvariability properties is of great importance in astrophysics. Consolidating\nthe confidence levels during cross-correlation studies, understanding the\nartefacts induced by sampling irregularities, and establishing detection limits\nfor future observatories are just some of the applications of simulated data\nsets. Currently, the widely used methodology of amplitude and phase\nrandomisation is able to produce artificial light curves which have a given\nunderlying power spectral density (PSD) but which are strictly Gaussian\ndistributed. This restriction is a significant limitation, since the majority\nof light curves, e.g. those of active galactic nuclei, X-ray binaries, and\ngamma-ray bursts, show strong deviations from Gaussianity, exhibiting\n`burst-like' events that yield long-tailed probability distribution functions\n(PDFs). In this study we propose a simple method which is able to precisely\nreproduce light curves which match both the PSD and the PDF of either an\nobserved light curve or a theoretical model. The PDF can be representative of\neither the parent distribution or the actual distribution of the observed data,\ndepending on the study to be conducted for a given source. The final artificial\nlight curves contain all of the statistical and variability properties of the\nobserved source or theoretical model, i.e. the same PDF and PSD, respectively.\nWithin the framework of Reproducible Research, the code, together with the\nillustrative example used in this manuscript, are both made publicly available\nin the form of an interactive Mathematica notebook.", "category": "astro-ph_IM" } ]
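The "amplitude and phase randomisation" baseline that the abstract refines is the Timmer & Koenig (1995) recipe: draw Fourier amplitudes from the target PSD with random phases, yielding a strictly Gaussian light curve. The sketch below implements that baseline; the PDF-matching iteration the abstract proposes would then repeatedly exchange amplitudes and ranked fluxes, and is not shown.

```python
import numpy as np

# Timmer & Koenig (1995) Gaussian light-curve generator for a target PSD.
def tk95_lightcurve(psd, n, dt=1.0, rng=None):
    rng = rng or np.random.default_rng()
    freqs = np.fft.rfftfreq(n, dt)[1:]           # positive frequencies
    re = rng.standard_normal(freqs.size) * np.sqrt(0.5 * psd(freqs))
    im = rng.standard_normal(freqs.size) * np.sqrt(0.5 * psd(freqs))
    spec = np.concatenate(([0.0], re + 1j * im)) # zero-mean light curve
    if n % 2 == 0:
        spec[-1] = spec[-1].real                 # Nyquist bin must be real
    return np.fft.irfft(spec, n)

lc = tk95_lightcurve(lambda f: f**-2.0, 4096)    # red-noise (slope -2) curve
print(round(lc.mean(), 6), round(lc.std(), 4))   # Gaussian by construction
```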