Efficient Learned Deconvolution of Radio Interferometric Images Using Deep Unrolling
Jonathan Kern, Jérome Bobin, Tobias Liaudat, Christophe Kervazo
GT ICR SKA, 14 March 2025
Towards large-scale radio data
SKA-Mid: 197 planned dishes. SKA-Low: 131,072 planned antennas.
A massive data-processing challenge: 20 terabits per second of data generated, 1,000× the data rate of ALMA.
Jonathan Kern, Jérome Bobin, Tobias Liaudat, Christophe Kervazo — GT ICR SKA — 20/06/2024
Image reconstruction problem
arg min_x ‖y − Ax‖₂², with y the visibilities and A the forward operator.
An ill-posed problem: we need to add constraints on x:
arg min_x ‖y − Ax‖₂² + R(x)
Different types of constraint are possible: sparsity (in various domains), learned regularizers, ...
Existing methods
▪ Detection of one peak in the image at a time
▪ Compressive sensing methods (Wiaux, 2009): iterative algorithms using proximal operators to minimize the regularization term, i.e. with R(x) = λ‖x‖₁:
z_{t+1} = x_t + 2α A*(y − A x_t) (forward step)
x_{t+1} = ST_{λα}(z_{t+1}) (backward step)
(α: gradient step, λ: threshold)
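The forward/backward steps above can be sketched as a minimal NumPy ISTA on a toy sparse recovery problem. The random matrix, step size and threshold are illustrative assumptions, not the RI forward operator:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(y, A, alpha, lam, n_iter=300):
    """Minimize ||y - A x||_2^2 + lam*||x||_1 by forward-backward splitting.

    alpha: gradient step; must satisfy alpha <= 1 / (2 * ||A||^2) to converge.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x + 2 * alpha * A.T @ (y - A @ x)  # forward (gradient) step
        x = soft_threshold(z, lam * alpha)     # backward (proximal) step
    return x

# Toy problem: recover a 3-sparse vector from noisy compressed measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[3, 50, 97]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(y, A, alpha=0.08, lam=0.05)
```

The same loop structure carries over to RI imaging once `A` is the measurement operator mapping images to visibilities.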
Existing methods with learned components
The prox operator can be replaced by a pretrained DNN:
x_{t+1} = D(x_t + 2α A*(y − A x_t))
End-to-end DNN (Schuler, 2013) / learned post-processing: apply a DNN to the dirty image.
▪ A lot faster than iterative algorithms
▪ Requires a lot of training data
▪ Not model-based → worse performance
We need a method as fast as DNNs that takes the physics of the reconstruction model into account for the best performance.
Algorithm unrolling
Truncate an iterative algorithm to a fixed number of iterations and convert it into an L-layer DNN: one layer ↔ one iteration.
Algorithm unrolling for RI
PGD iteration: x̃_{l+1} = prox_R(x̃_l + 2α A*(ỹ − A x̃_l)), until convergence or the maximum number of iterations is reached (α: gradient step).
We unroll the PGD algorithm to obtain a Learned PGD:
x̃_{l+1} = g_l(x̃_l + w_l ∗ A*(ỹ − A x̃_l)), for l from 1 to L,
with g_l a trainable or pre-trained operator and w_l a trainable scalar or convolution kernel.
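The unrolled iteration can be sketched as follows. The per-layer parameters are hand-set here for illustration; in the method itself the w_l (possibly convolution kernels) and the g_l parameters are learned end-to-end by backpropagating through the L layers:

```python
import numpy as np

def lpgd(y, A, weights, thresholds):
    """Unrolled (learned) PGD: one loop pass = one network layer.

    weights[l]   : per-layer step w_l (a scalar here; a conv kernel in general)
    thresholds[l]: per-layer shrinkage level of g_l
    """
    x = np.zeros(A.shape[1])
    for w, t in zip(weights, thresholds):                 # l = 1 .. L
        z = x + w * A.T @ (y - A @ x)                     # learned gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)   # g_l: learned shrinkage
    return x

# Toy problem: L = 10 layers, mimicking ~10 iterations instead of hundreds.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[10, 60]] = [2.0, -1.0]
y = A @ x_true
L = 10
x_hat = lpgd(y, A, weights=[0.15] * L, thresholds=[0.02] * L)
```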
Different kinds of regularization operators
Soft-thresholding (for R(x) = λ‖x‖₁):
g_l(x) = 0 if |x| ≤ γ_l, sign(x)(|x| − γ_l) otherwise,
where γ_l is a trainable parameter.
For more complex sources: R(x) = Σ_i ‖Ψ_i x‖₁. We seek sparsity in a dictionary of wavelet bases (for example the first eight Daubechies wavelets).
Wavelet denoiser: g_l(x) = argmin_u ‖u − x‖₂² + λ_l Σ_i ‖Ψ_i u‖₁,
where λ_l is the learned regularization/thresholding parameter.
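The wavelet denoiser can be illustrated with a single-level orthonormal Haar transform, a minimal stand-in for the Daubechies dictionary mentioned above; `lam` plays the role of the learned λ_l:

```python
import numpy as np

def haar_fwd(x):
    """One level of the orthonormal 1D Haar transform (len(x) must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def haar_inv(a, d):
    """Inverse of haar_fwd: interleave the reconstructed even/odd samples."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wavelet_denoise(x, lam):
    """Prox of lam*||Psi x||_1 for an orthonormal Psi:
    transform, soft-threshold the detail coefficients, transform back."""
    a, d = haar_fwd(x)
    d = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)
    return haar_inv(a, d)

# Piecewise-constant signal + noise: thresholding the Haar details denoises it.
rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0, -0.5, 2.0], 64)
noisy = clean + 0.2 * rng.standard_normal(clean.size)
denoised = wavelet_denoise(noisy, lam=0.3)
```

With a redundant multi-wavelet dictionary (the Σ_i above), the prox no longer has this closed form and is itself computed iteratively.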
PnP prior: g_l(x) = D(x, σ), where D is a pretrained deep denoiser (for example a UNet) and σ is a learned noise-level parameter.
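A sketch of the Plug-and-Play iteration, with a simple moving-average filter standing in for the pretrained UNet D; the selection operator `A` (a toy inpainting problem), the smooth signal, and all parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy inpainting problem: A observes a random half of the samples.
n = 100
obs = np.sort(rng.choice(n, size=n // 2, replace=False))
A = np.eye(n)[obs]                        # 50 x 100 selection matrix

def box_denoiser(x, sigma):
    """Stand-in for a pretrained deep denoiser D(x, sigma): a moving average.
    A UNet conditioned on the learned noise level sigma would be plugged in here."""
    w = 3                                 # fixed window; a real D adapts to sigma
    return np.convolve(x, np.ones(w) / w, mode="same")

def pnp_pgd(y, A, alpha, sigma, n_iter=100):
    """Plug-and-Play PGD: the denoiser g_l(x) = D(x, sigma) replaces the prox."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x + 2 * alpha * A.T @ (y - A @ x)  # gradient step on the data term
        x = box_denoiser(z, sigma)             # denoiser plays the prox role
    return x

x_true = np.sin(2 * np.pi * np.arange(n) / 50)  # smooth ground truth
y = A @ x_true
x_hat = pnp_pgd(y, A, alpha=0.4, sigma=0.05)
```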
Experiments setup
▪ Point sources: images generated with 1% of sources following an absolute Gaussian distribution for each image
▪ Galaxy simulations
Simulated UV coverage based on the MeerKAT antennas, 1 h observation time.
Faster reconstruction: algorithm unrolling
Unrolling advantages:
▪ Fast reconstruction (∼10 iterations)
▪ Great reconstruction quality
Unrolling drawbacks:
▪ Lost interpretation of the reconstruction: is it the fixed point of an equation? Is the reconstruction related to a posterior probability distribution?
▪ UQ is missing
These drawbacks limit its scientific application.
CARB: Conformalized Augmented Radio Bootstrap
Equivariant bootstrap (2024). Given an observation model y = Ax + n (e.g. RI imaging), group actions {T_g}_{g∈G} such that T_g x ∈ X, and a reconstruction method x̂(y) = f(y) (e.g. our unrolling algorithm):
For i = 1, ..., N:
1. Draw a transform g_i from G and sample noise n_i ∼ N(0, σ²I)
2. Build the bootstrap measurement ỹ_i = A T_{g_i} x̂(y) + n_i
3. Reconstruct x̃_i = T_{g_i}^{-1} x̂(ỹ_i)
4. Collect the error estimate e_i = ‖x̂(y) − x̃_i‖₂
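Steps 1–4 above can be sketched as follows, with toy stand-ins for the forward operator A, the reconstruction f, and a small set of flip/rotation group actions (all of these are illustrative assumptions, not the operators of the method):

```python
import numpy as np

rng = np.random.default_rng(4)

def A(x):
    """Toy forward model: mild blur via neighbour averaging (stands in for RI)."""
    return (x + np.roll(x, 1, 0) + np.roll(x, 1, 1)) / 3

def f(y):
    """Stand-in reconstruction; the method uses the unrolled network here."""
    return y

# Group actions T_g and their inverses: identity, flips, a 90-degree rotation.
transforms = [
    (lambda x: x,           lambda x: x),
    (np.flipud,             np.flipud),
    (np.fliplr,             np.fliplr),
    (lambda x: np.rot90(x), lambda x: np.rot90(x, -1)),
]

def equivariant_bootstrap(y, sigma, N=200):
    x_hat = f(y)
    errors = []
    for _ in range(N):
        Tg, Tg_inv = transforms[rng.integers(len(transforms))]  # step 1: draw g_i
        n = sigma * rng.standard_normal(y.shape)                # step 1: noise
        y_tilde = A(Tg(x_hat)) + n                              # step 2: measurement
        x_tilde = Tg_inv(f(y_tilde))                            # step 3: reconstruct
        errors.append(np.linalg.norm(x_hat - x_tilde))          # step 4: error
    return np.array(errors)

x = rng.random((16, 16))
y = A(x) + 0.01 * rng.standard_normal((16, 16))
errs = equivariant_bootstrap(y, sigma=0.01)
```

Each bootstrap sample is independent of the others, which is what makes the loop trivially parallel.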
▪ Independent of the reconstruction method, and each sample can run in parallel
▪ Well suited to ultra-fast reconstruction methods, e.g. unrolled algorithms
▪ Carefully selected group transforms let us explore the large null space of the RI imaging forward operator and better characterise the errors
The CARB method consists of:
1. A fast reconstruction algorithm
2. An equivariant bootstrap framework
3. Group actions adapted to the RI imaging problem
4. A conformalisation procedure to guarantee coverage
Group actions used:
1. Translations not exceeding 2 pixels
2. Image flips over the horizontal and vertical axes
3. Rotations by multiples of 90 degrees
4. Invertible 2D radially-symmetric filters, in the specific form of low-shelving and high-shelving filters with varying cutoff frequencies
[Figure: amplitude responses of the low- and high-shelving filter transformations as a function of frequency]
Each time, we apply a random composition of these transformations, where each transformation is applied with a given probability.
Pixel-wise UQ maps: from the collection of N bootstrap samples {x̃_i}_{i=1}^N, we build confidence regions C_α for x (the ground truth) using q_α, the top α-quantile of the samples {|x̂(y) − x̃_i|}_{i=1}^N, with C_α = {x : |x − x̂(y)| < q_α}.
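The pixel-wise map q_α can be computed directly from the bootstrap samples. A sketch, where `confidence_map` and the synthetic data are illustrative:

```python
import numpy as np

def confidence_map(x_hat, x_tildes, alpha=0.1):
    """Pixel-wise half-width q_alpha of the confidence region
    C_alpha = {x : |x - x_hat| < q_alpha}: the top alpha-quantile of the
    absolute bootstrap deviations, computed independently at each pixel."""
    devs = np.abs(x_hat[None] - np.stack(x_tildes))  # shape (N, H, W)
    return np.quantile(devs, 1 - alpha, axis=0)      # q_alpha per pixel

# Synthetic check: unit-Gaussian deviations around x_hat, so the 90% map
# should sit near the 0.9 quantile of |N(0,1)| (about 1.645) everywhere.
rng = np.random.default_rng(5)
x_hat = np.zeros((8, 8))
x_tildes = [x_hat + rng.standard_normal((8, 8)) for _ in range(2000)]
q = confidence_map(x_hat, x_tildes, alpha=0.1)
```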
CARB: Results

Method                      | Interval size | Empirical coverage
--------------------------- | ------------- | ------------------
Quantile Regression (QR)    | 0.15          | 14%
Conformalized QR            | 204.08        | 92%
Parametric Bootstrap        | 0.07          | 0%
Equivariant Bootstrap       | 0.13          | 7%
Augmented Radio Bootstrap   | 0.29          | 87%
CARB                        | 0.34          | 91%

[Figure: empirical coverage vs. confidence level for equivariant bootstrap methods with different group actions — parametric; rotations; rotations and flips; rotations, flips and 1-, 2- or 3-pixel shifts; Augmented Radio Bootstrap; ideal]
Tight intervals and very good coverage! The results showcase:
▪ the importance of selecting adapted group actions,
▪ that conformalisation is useful once the intervals are already good.
We still need to validate the method in higher dimensions.
Conclusion and perspectives
A deconvolution method using algorithm unrolling:
▪ Faster than current iterative methods (∼10 iterations)
▪ Capable of working directly with visibilities
▪ Can be combined with an uncertainty quantification method
Perspectives:
▪ Apply the unrolling method and UQ to bigger, realistic images
▪ Improve the network design for better performance and adaptability