TRAP: Applying transit techniques to direct imaging data

Matthias Samland (Stockholm University / MPIA), Jeroen Bouwman (MPIA), David W. Hogg (New York University / Flatiron Institute), Wolfgang Brandner (MPIA), Thomas Henning (MPIA)

The Problem: Traditional methods for post-processing direct imaging data using sky rotation are limited in performance close to the inner working angle, because a larger field rotation is required to displace a source on the detector. This imposes a temporal exclusion criterion (protection angle), which reduces the training data that can be used.
The Solution: We have developed a data-driven causal temporal systematics model based on non-local reference pixel lightcurves that circumvents these limitations. The approach is similar to methods used in transit spectroscopy. Our systematics model is valid under the assumption that the underlying causes of the systematics affect multiple image areas, which is generally the case for the speckle pattern in high-contrast imaging. We simultaneously fit a forward model of a planet signal "transiting" over detector pixels and reference lightcurves describing the temporal trends of the speckle pattern to find the best-fitting model for the signal.
The Result: With our implementation of a non-local, temporal systematics model, called TRAP, we show that it is possible to gain up to a factor of six in contrast at close separations (smaller than 3 lambda/D) compared to a spatial model with a temporal exclusion criterion. We further demonstrate that the temporal sampling has a strong impact on the achievable contrast: shorter exposure times result in significantly better contrasts. For beta Pic data taken with VLT/SPHERE at short integration times (4 seconds), the approach improves the SNR of the planet by a factor of four compared to the spatial systematics model. This approach opens the possibility of taking methods originally developed for transit spectroscopy and applying them to direct imaging data, bringing the two fields closer together and exploiting previously unexplored synergies. Coming to arXiv soon (Samland et al. 2020, submitted).


Here we see the type of data we are dealing with. Due to the rotation of the field of view, the companion signal moves along an arc through the background noise pattern.

Figure 1: SPHERE data with injected bright signal north of host star, showing the sky rotation's effect on a companion

Traditionally, one would model the background at the location of the planet using other frames displaced in time by enough that the planet has moved sufficiently. Subtracting such frames from each other (or building a linear model from them) can then remove the background noise without overly impacting the companion signal. This, however, has the drawback that the background has to remain stable over the timescale we have to wait for the signal to move.
In this work we show that we can replace this spatial model (a combination of images) with a temporal model (a combination of pixel lightcurves), constructing a forward model for the time series that a companion signal produces in each pixel. This process is similar to lightcurve modeling in transit observations.
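As a concrete illustration of the forward model for a single pixel: given the parallactic angles of the sequence and an assumed sky position, the planet's track on the detector determines the lightcurve it imprints on each pixel. Below is a minimal sketch in Python, assuming a circular Gaussian PSF for simplicity; the function name and parameters are illustrative, not the actual TRAP interface:

```python
import numpy as np

def planet_pixel_lightcurve(sep_px, pa0_deg, parang_deg, pixel_xy,
                            center_xy=(0.0, 0.0), fwhm_px=4.0):
    """Forward-model the time series a planet imprints on one detector pixel.

    A planet fixed on sky at separation `sep_px` (pixels) and position angle
    `pa0_deg` moves across the detector as the field rotates through the
    parallactic angles `parang_deg`.  With a Gaussian PSF (an illustrative
    simplification), each pixel sees a smooth rise and fall as the signal
    "transits" it.
    """
    parang = np.deg2rad(np.asarray(parang_deg, dtype=float))
    theta = np.deg2rad(pa0_deg) - parang            # on-detector position angle
    px = center_xy[0] + sep_px * np.sin(theta)      # planet x(t) on detector
    py = center_xy[1] + sep_px * np.cos(theta)      # planet y(t) on detector
    sigma = fwhm_px / 2.355                         # Gaussian width from FWHM
    d2 = (px - pixel_xy[0]) ** 2 + (py - pixel_xy[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))         # unit-amplitude model

# Example: a pixel near the planet's mid-rotation position brightens
# and fades as the arc sweeps through it.
angles = np.linspace(-15.0, 15.0, 100)              # field rotation of sequence
lc = planet_pixel_lightcurve(sep_px=20.0, pa0_deg=0.0,
                             parang_deg=angles, pixel_xy=(0.0, 20.0))
```

The resulting `lc` peaks mid-sequence and falls to nearly zero at the ends, which is the transit-like shape fitted per pixel.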


Modeling confounding systematics in time-series data is a well-studied problem (Schölkopf et al. 2015). As long as the systematics share a common underlying cause, it is possible to use a regression approach to model a particular instance of the systematics function using other, similarly affected time series (half-sibling regression). Instead of image patches at the position of the planet displaced in time, we use non-local pixels to train the temporal model. Figure 2 shows the geometry we use (similar separation, surrounding reduction area, and mirrored area). The area affected by the planet signal is explicitly excluded, so self-subtraction is ruled out by construction. Instead of using the lightcurves themselves, we use a principal component decomposition to reduce collinearity.
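The construction of the temporal basis from non-local pixels can be sketched as follows: stack the reference pixel time series into a matrix and keep the leading principal components as regressors for the shared systematics. A minimal sketch of the idea, not the paper's actual implementation:

```python
import numpy as np

def systematics_basis(ref_lightcurves, n_components):
    """Build a low-rank temporal basis from non-local reference pixels.

    `ref_lightcurves` has shape (n_time, n_ref): one column per reference
    pixel, drawn from regions unaffected by the planet signal.  A principal
    component decomposition (via SVD) of the mean-subtracted time series
    reduces collinearity; the leading components serve as regressors for
    the speckle systematics shared across the detector.
    """
    X = ref_lightcurves - ref_lightcurves.mean(axis=0)   # per-pixel mean removed
    U, s, Vt = np.linalg.svd(X, full_matrices=False)     # temporal modes in U
    return U[:, :n_components]                           # orthonormal time basis
```

The returned columns are orthonormal by construction, which keeps the subsequent linear fit numerically well behaved.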

Figure 2: Selection of non-local training data. The white area shows the pixels whose time-series will serve as the training data for the temporal systematics model.

In addition to this temporal systematics model, we fit the planet signal that would arise if a planet were located at the corresponding sky position. The problem for a single pixel is shown in Figure 3.

Figure 3: Example schematic of the system of linear equations that we solve. We simultaneously fit our systematics model (principal components) as well as the forward model for the planet signal.

The planet amplitude and its uncertainty are determined for each pixel, and the final contrast value is computed as the noise-weighted average over all pixels. This process is repeated for a grid of possible planet positions, resulting in a detection map, which is then normalized empirically to account for the simplifying assumptions in the linear least-squares fit.
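The per-pixel fit and the noise-weighted combination described above reduce to one least-squares problem per pixel. The helper names and the white, homoscedastic-noise assumption below are illustrative simplifications, not TRAP's actual code:

```python
import numpy as np

def fit_pixel(y, basis, planet_model):
    """Jointly fit the systematics basis and planet model to one pixel.

    Design-matrix columns: a constant offset, the principal components,
    and the planet's forward-model lightcurve; the planet amplitude is
    the last coefficient.  The uncertainty comes from the ordinary
    least-squares covariance, assuming white noise per pixel.
    """
    A = np.column_stack([np.ones_like(y), basis, planet_model])
    coef, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    noise_var = resid @ resid / (len(y) - A.shape[1])    # residual variance
    cov = noise_var * np.linalg.inv(A.T @ A)
    return coef[-1], np.sqrt(cov[-1, -1])                # amplitude, 1-sigma

def combine_pixels(amps, errs):
    """Inverse-variance weighted mean over all pixels crossed by the planet."""
    a = np.asarray(amps, dtype=float)
    w = 1.0 / np.asarray(errs, dtype=float) ** 2
    return np.sum(w * a) / np.sum(w), 1.0 / np.sqrt(np.sum(w))
```

Evaluating `combine_pixels` on a grid of trial sky positions then yields the detection map described above.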


We compare our results to the ANDROMEDA algorithm (Cantalloube et al. 2015), because it follows a similar forward-modelling approach and produces 2D detection maps analogous to ours. Furthermore, ANDROMEDA has been shown to provide good results for the SPHERE data sets we used for testing (e.g. 51 Eridani b, Samland et al. 2017).

Figure 4: SNR map for beta Pic b using TRAP and ANDROMEDA.

Figure 4 shows the results for beta Pic b, a dataset with short integration times (4 seconds). We see a factor-of-four improvement in the signal-to-noise ratio from using a temporal model that does not require a temporal exclusion criterion. The systematics can be modelled even on the shortest timescales, whereas in a spatial approach even small protection angles would exclude at least several minutes of data around each frame. When binning the data down to 64-second integrations, as is (unfortunately) common for high-contrast imaging observations, TRAP and ANDROMEDA achieve a similar SNR, confirming that the primary cause of the gain in SNR is the temporal sampling.

Figure 5: Contrast curve comparison of TRAP, using different numbers of principal components (as a fraction of the maximum number), and ANDROMEDA, using the standard protection angle of 0.5 lambda/D. The shaded area shows the 14-86 percentile range along the azimuth.

Figure 5 shows the detection-limit contrast curves for TRAP and ANDROMEDA. At the shortest separations, where protection angles become dominant, the gain from using a temporal model is highest. However, for short-integration data, TRAP provides an advantage even at larger separations, where we still gain a factor of two in contrast by reducing the systematic noise.
There are many more results to show; too many for a quick poster. If you are interested in seeing more, please contact me (or wait another short while for the paper to appear on arXiv in all its glory). 😊
The code will be published on GitHub when the paper comes out.


We showed that using a temporal model yields a particularly important boost in contrast at small angular separations, the region of parameter space most interesting for finding Jupiter-like planets with direct imaging. We further showed that shorter integration times can prove advantageous and open the road to exploiting new data reduction strategies more effectively. Shorter integration times were previously of little use, as protection angles made it difficult to actually exploit the higher temporal sampling when modelling the systematics. The technique described here may also help with data that has only a small field-of-view rotation and/or space-based observations taken at discrete roll angles.
Finally, we would like to emphasise that the temporal regression approach is not necessarily in conflict with spatial approaches, as they optimize two different quantities (temporal similarity vs. spatial similarity). It is entirely possible to use a LOCI-like (Locally Optimized Combination of Images, Lafrenière et al. 2007) approach on the residuals of TRAP, which are decorrelated and white in time, but not necessarily in space. Ultimately, combining both approaches should make optimal use of all the information available in the data. The approach can further be extended to make use of polychromatic data in the training process.


Cantalloube, F. & Mugnier, L. M., A&A, 582, A89 (2015), ADS
Lafrenière, D. & Marois, C., ApJ, 660, 1 (2007), ADS
Samland, M. & Mollière, P., A&A, 603, A57 (2017), ADS
Schölkopf, B. & Hogg, D. W., arXiv (2015), ADS