Matthias Samland (Stockholm University / MPIA), Jeroen Bouwman (MPIA), David W. Hogg (New York University / Flatiron Institute), Wolfgang Brandner (MPIA), Thomas Henning (MPIA)

Here we see the type of data we are dealing with. Due to the rotation of the field of view, the companion signal moves along an arc through the background noise pattern.

*Figure 1: SPHERE data with injected bright signal north of host star, showing the sky rotation's effect on a companion*
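The arc traced by the companion follows directly from the parallactic angles: in pupil-stabilized mode the speckle pattern stays (nearly) fixed while the sky rotates. A minimal sketch, with the separation, position angle, and rotation range all invented for illustration:

```python
import numpy as np

# Illustrative values only, not from the data shown in Figure 1.
parang = np.linspace(-20.0, 20.0, 100)   # parallactic angles over the sequence (deg)
separation, pa0 = 30.0, 90.0             # assumed separation (px) and initial position angle (deg)

# Companion position in detector coordinates at each time step:
# the sky rotation moves it along an arc around the host star.
theta = np.deg2rad(pa0 + parang)
x = separation * np.cos(theta)
y = separation * np.sin(theta)

# Constant separation, changing position angle: an arc.
radius = np.sqrt(x**2 + y**2)
```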

In this work we show that we can replace this spatial model (a combination of images) with a temporal model (a combination of pixel lightcurves) by constructing a forward model for the time series that a companion signal produces in each pixel. This process is similar to lightcurve modeling in transit observations.

Modeling confounding systematics in time-series data is a well-studied problem (Schölkopf et al. 2015). As long as the systematics share a common underlying cause, a regression approach can model a particular instance of the systematics function using other, similarly affected time series (half-sibling regression). Instead of image patches at the position of the planet displaced in time, we use non-local pixels to train the temporal model. Figure 2 shows the geometry we use (similar separation, surrounding reduction area, and mirrored area). The area affected by the planet signal is explicitly excluded, so self-subtraction is ruled out. Instead of using the lightcurves themselves, we use a principal component decomposition to reduce collinearity.

*Figure 2: Selection of non-local training data. The white area shows the pixels whose time-series will serve as the training data for the temporal systematics model.*
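The decomposition step can be sketched as follows, using mock reference lightcurves that share a common systematic trend (all shapes, the trend, and the noise level are invented for illustration): the mean-subtracted training time series are decomposed via SVD, and the leading temporal components serve as the systematics basis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: n_frames time steps, n_ref non-local training
# pixels selected as in Figure 2 (not the real SPHERE data).
n_frames, n_ref = 100, 50

# Mock reference lightcurves sharing a common systematic trend.
trend = np.sin(np.linspace(0.0, 3.0, n_frames))
refs = (trend[:, None] * rng.uniform(0.5, 1.5, n_ref)
        + 0.05 * rng.standard_normal((n_frames, n_ref)))

# Principal-component decomposition of the mean-subtracted lightcurves
# yields a low-dimensional, less collinear temporal basis.
refs_centered = refs - refs.mean(axis=0)
U, s, Vt = np.linalg.svd(refs_centered, full_matrices=False)
n_components = 10
basis = U[:, :n_components] * s[:n_components]   # temporal components
```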

In addition to this temporal systematics model, we fit the planet signal that would arise if a planet were located at the corresponding sky position. The problem for a single pixel is shown in Figure 3.

*Figure 3: Example schematic of the system of linear equations that we solve. We simultaneously fit our systematics model (principal components) as well as the forward model for the planet signal.*
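A toy version of this single-pixel linear system, with the shapes, component lightcurves, planet model, and noise level all invented for illustration: the design matrix stacks the temporal principal components next to the planet forward model, and both are fitted simultaneously by linear least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_components = 100, 10

# Hypothetical inputs: temporal principal components from the training
# pixels, and a forward-modelled planet lightcurve for this pixel
# (the PSF passing over the pixel as the field rotates).
components = rng.standard_normal((n_frames, n_components))
planet_model = np.exp(-0.5 * ((np.arange(n_frames) - 50) / 5.0) ** 2)

true_amplitude = 2.0
pixel_lightcurve = (components @ rng.standard_normal(n_components)
                    + true_amplitude * planet_model
                    + 0.01 * rng.standard_normal(n_frames))

# Design matrix: systematics basis plus the planet forward model,
# solved simultaneously by linear least squares.
A = np.column_stack([components, planet_model])
coeffs, residuals, rank, _ = np.linalg.lstsq(A, pixel_lightcurve, rcond=None)
amplitude = coeffs[-1]                     # fitted planet amplitude

# Amplitude uncertainty from the least-squares covariance.
dof = n_frames - A.shape[1]
sigma2 = residuals[0] / dof
cov = sigma2 * np.linalg.inv(A.T @ A)
amplitude_err = np.sqrt(cov[-1, -1])
```

Because the systematics basis and the planet model are fitted jointly, the planet amplitude is not biased by the systematics removal, unlike a sequential subtract-then-fit procedure.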

The planet amplitude and its uncertainty are determined for each pixel, and the final contrast value is the noise-weighted average over all pixels. This process is repeated for a grid of possible planet positions, resulting in a detection map, which is then normalized empirically to account for the simplifying assumptions of the linear least-squares fit.
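The noise-weighted average is a standard inverse-variance combination; a minimal sketch with invented per-pixel values for one trial sky position:

```python
import numpy as np

# Hypothetical per-pixel amplitude fits and their uncertainties
# (values invented for illustration).
amplitudes = np.array([1.90, 2.10, 2.05, 1.95])
errors = np.array([0.10, 0.20, 0.15, 0.10])

# Inverse-variance (noise-weighted) average over all affected pixels.
weights = 1.0 / errors**2
contrast = np.sum(weights * amplitudes) / np.sum(weights)
contrast_err = 1.0 / np.sqrt(np.sum(weights))

snr = contrast / contrast_err
```

Combining pixels this way lets the well-fitted pixels dominate, so the combined uncertainty is smaller than that of any single pixel.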

We compare our results to the ANDROMEDA algorithm (Cantalloube et al. 2015), because it follows a similar forward-modelling approach and produces 2D detection maps analogous to ours. Furthermore, ANDROMEDA has been shown to provide good results for the SPHERE data sets we used for testing (e.g. 51 Eridani b, Samland et al. 2017).

*Figure 4: SNR map for beta Pic b using TRAP and ANDROMEDA.*

Figure 4 shows the results for beta Pic b, a dataset with short integration times (4 seconds). We see a fourfold improvement in the signal-to-noise ratio, resulting from a temporal model that does not require a temporal exclusion criterion. The systematics can be modelled even on the shortest timescales, whereas in a spatial approach even small protection angles would exclude at least several minutes of data around each frame. Binning the data down to 64-second integrations, as is (unfortunately) common for high-contrast imaging observations, TRAP and ANDROMEDA achieve a similar SNR, confirming that the primary cause of the gain in SNR is temporal sampling.

*Figure 5: Contrast curve comparison between TRAP, using different numbers of principal components (as a fraction of the maximum number), and ANDROMEDA, using the standard protection angle of 0.5 λ/D. The shaded area shows the 14–86 percentile range along the azimuth.*

There are many more results to show; too many for a quick poster. If you are interested in seeing more, please contact me (or wait a short while for the paper to appear on arXiv in all its glory). 😊

The code will be published on GitHub when the paper comes out.

Finally, we would like to emphasise that the temporal regression approach is not necessarily in conflict with spatial approaches, as they optimize two different quantities (temporal similarity vs. spatial similarity). It is entirely possible to use a LOCI-like (Locally Optimized Combination of Images, Lafrenière et al. 2007) approach on the residuals of TRAP, which are decorrelated and white in time, but not necessarily in space. Ultimately, combining both approaches should make optimal use of all the information available in the data. The approach can further be extended to make use of polychromatic data in the training process.