Image restoration — Convex approaches: penalties and constraints —

An example in astronomy

Jean-François Giovannelli
[email protected]
Groupe Signal – Image
Laboratoire de l'Intégration du Matériau au Système
Univ. Bordeaux – CNRS – BINP

Summary

Direct model and inverse problem
  Interpolation-extrapolation / deconvolution / Fourier synthesis
  Indeterminacy, non-invertibility

Prior information and regularized solution
  Positivity and possible support
  Point sources on a smooth background, double model

Algorithmic aspects and numerical optimisation

Data processing results
  Simulated data
  NRH data

Conclusions and extensions

Interferometry: principles of measurement

Physical principle [Thompson, Moran, Swenson, 2001]
  Antenna array: large aperture
  Frequency band, e.g., 164 MHz
  Pair of antennas: interference, i.e., one measurement in the Fourier plane

[Figure: picture of the site (NRH); antenna positions, West-East vs North-South (km); Fourier-plane coverage, frequency (u) vs frequency (v)]

Knowledge of the sun: magnetic activity, eruptions, sunspots, ...
Forecast of sun events and their impact, ...

Interferometer: example of measurements

[Figure: extended source (ES) and point sources (PS); the map x, its Fourier transform Fx, the truncated transform TFx, and the data y = TFx + ε]

Instrument model

Truncated and noisy Fourier transform: y = TFx + ε
  x ∈ R^N: unknown image
  y, ε ∈ C^M: measurements, errors
  F: Fourier matrix (N × N)
  T: truncation matrix (M × N), e.g.,

      T = [ 0 1 0 0 0 0 0
            0 0 1 0 0 0 0
            0 0 0 0 1 0 0 ]

[Block diagram: x → H → (+ ε) → y]

Difficulties: M ≪ N, noise
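A minimal numerical sketch of this direct model, assuming a 1-D image and an orthonormal DFT for F; the sizes, the selected frequency indices and the noise level are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: N-pixel 1-D image, M Fourier samples with M << N (both arbitrary here)
N, M = 128, 16
x = np.zeros(N)
x[30:60] = 1.0          # a toy "extended source"
x[90] = 5.0             # a toy "point source"

# F: orthonormal DFT matrix (N x N); T: selection of M rows (truncation)
F = np.fft.fft(np.eye(N), norm="ortho")
idx = np.sort(rng.choice(N, size=M, replace=False))   # measured frequencies (arbitrary)
T = np.eye(N)[idx]

# y = TFx + eps, with complex noise
sigma = 0.05
eps = sigma * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = T @ F @ x + eps
print(y.shape)          # (16,): M complex measurements
```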

Different formulations

Fourier synthesis (original formulation)
  y = TFx + ε

Interpolation – extrapolation: change of variable x° = Fx
  y = Tx° + ε

Deconvolution: transformation of the data, ȳ° = F†Tᵗy and H = F†TᵗTF
  ȳ° = Hx + ε̃

A few simple properties
  F†F = FF† = I_N: orthonormality
  Tᵗ: zero-padding matrix, M → N (Tᵗ extends)
  TᵗT: (diagonal) projection matrix, N → N (TᵗT nullifies)
  TTᵗ = I_M
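These identities are easy to check numerically; a small sketch under the same 1-D toy conventions as above (dense matrices are used only for verification).

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 128, 16
F = np.fft.fft(np.eye(N), norm="ortho")               # orthonormal DFT matrix
T = np.eye(N)[np.sort(rng.choice(N, size=M, replace=False))]

# Orthonormality and selection / zero-padding identities
assert np.allclose(F.conj().T @ F, np.eye(N))          # F†F = I_N
assert np.allclose(T @ T.T, np.eye(M))                 # T Tᵗ = I_M
P = T.T @ T
assert np.allclose(P, np.diag(np.diag(P)))             # TᵗT is a diagonal 0/1 projection

# Dirty map and dirty beam (noiseless data, just to check the algebra)
x = rng.standard_normal(N)
y = T @ F @ x
y_dirty = F.conj().T @ T.T @ y                         # ȳ° = F†Tᵗy
H = F.conj().T @ T.T @ T @ F                           # H = F†TᵗTF
assert np.allclose(y_dirty, H @ x)                     # ȳ° = Hx when ε = 0
```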

Interferometry: illustration

[Figure: true map ES, true map PS, dirty beam, dirty map ES, dirty map PS, dirty map PS + ES]

Data-based inversion and ill-posed character

Deficient rank, missing data
  TF, T, H: singular value 1 with multiplicity M and 0 with multiplicity N − M

Infinity of least-squares solutions
  Q_LS(x) = ‖y − TFx‖²
  Other solutions: minimum-norm solution, TSVD, (quasi-)Wiener, ...
  Q_LS(ȳ°) = 0 and Q_LS(ȳ° + u − F†TᵗTFu) = 0, for any map u

Necessity of other information
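A quick numerical illustration of this non-uniqueness, under the same toy conventions as before: the dirty map already attains a zero least-squares criterion, and adding any null-space component leaves it unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 64, 8
F = np.fft.fft(np.eye(N), norm="ortho")
T = np.eye(N)[np.sort(rng.choice(N, size=M, replace=False))]
A = T @ F                                    # forward operator TF, rank M < N
y = A @ rng.standard_normal(N)               # some data

def Q_LS(x):
    return np.linalg.norm(y - A @ x) ** 2

y_dirty = F.conj().T @ T.T @ y               # ȳ° = F†Tᵗy (the dirty map)
u = rng.standard_normal(N)                   # any map
x_other = y_dirty + u - F.conj().T @ T.T @ T @ F @ u   # ȳ° + (I − F†TᵗTF)u

print(np.linalg.matrix_rank(A), "nonzero singular values out of", N)
print(Q_LS(y_dirty))    # ~0: the dirty map is a least-squares solution
print(Q_LS(x_other))    # ~0 too: infinitely many least-squares solutions
```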

Taking constraints into account: positivity and support

Notation
  M: index set of the image pixels
  S, D ⊂ M: index sets of parts of the image pixels

Investigated constraints here
  Positivity  C_p: ∀p ∈ M,  x_p ≥ 0
  Support     C_s: ∀p ∈ S̄,  x_p = 0

Extensions (not investigated here)
  Template: ∀p ∈ M,  t_p⁻ ≤ x_p ≤ t_p⁺
  Partially known map: ∀p ∈ D,  x_p = m_p

Point sources + extended source

Double model [Ciuciu02, Samson03] and [Magain98, Pirzkal00]
  x = x_e + x_p

Direct model
  y = TF(x_e + x_p) + ε
  New indeterminations

Appropriate regularisation
  R_e(x_e) = Σ_n [x_e(n+1) − x_e(n)]²
  R_p(x_p) = Σ_n |x_p(n)| = Σ_n x_p(n)   (the two coincide under the positivity constraint)
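A short sketch of these two penalties (the function names are mine): quadratic smoothness for the extended map and a separable L1 term for the spikes, which reduces to a plain sum under positivity.

```python
import numpy as np

def R_e(x_e):
    # Quadratic smoothness on the extended component: sum of squared first differences
    return np.sum(np.diff(x_e) ** 2)

def R_p(x_p):
    # Separable L1 penalty on the point-source component;
    # equals a plain sum when x_p >= 0 (the constrained case)
    return np.sum(np.abs(x_p))

x_e = np.linspace(0.0, 1.0, 16)          # a smooth background
x_p = np.zeros(16); x_p[5] = 3.0         # a single spike
print(R_e(x_e), R_p(x_p))
```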

Frequential analysis

[Figure: three frequency responses plotted against reduced frequency, from 0 to 0.5]

Regularized criterion and regularized solution

Criterion: penalized, quadratic, strictly convex

  Q(x_e, x_p) = ‖y − TF(x_e + x_p)‖²
                + λ_e Σ_n [x_e(n+1) − x_e(n)]² + λ_p Σ_n x_p(n)
                + ε_e Σ_n x_e(n)² + ε_p Σ_n x_p(n)²

Solution: unique constrained minimizer (x = [x_e; x_p])

  (x̂_e, x̂_p) = arg min Q(x_e, x_p)  s.t. (C)
              = arg min (1/2) xᵗQx + qᵗx  s.t.  x_p = 0 for p ∈ S̄  and  x_p ≥ 0 for p ∈ M

Constraints: some illustrations


Positivity: one variable

One variable: α(t − t̄)² + γ

[Figure: the parabola and its minimizer, without and with the positivity constraint]

Non-constrained solution: t̂ = t̄
Constrained solution: t̂ = max[0, t̄]
Active and inactive constraints

Positivity: two variables (1)

Two variables: α₁(t₁ − t̄₁)² + α₂(t₂ − t̄₂)² + β(t₂ − t₁)² + γ

[Figure: level sets and minimizers in a favourable and an unfavourable case ("Glop" / "Pas glop")]

Sometimes / often difficult to deduce the constrained minimiser from the non-constrained one

Positivity: two variables (2)

Two variables: α₁(t₁ − t̄₁)² + α₂(t₂ − t̄₂)² + β(t₂ − t₁)² + γ

[Figure: level sets and minimizers for cases 1, 2a and 2b]

Constrained solution = non-constrained solution (case 1)
Constrained solution ≠ non-constrained solution (case 2) ... so active constraints

Positivity: two variables (3)

Two variables: α₁(t₁ − t̄₁)² + α₂(t₂ − t̄₂)² + β(t₂ − t₁)² + γ

[Figure: level sets and minimizers for cases 2a and 2b]

Constrained solution ≠ non-constrained solution (case 2) ... so active constraints
Constrained solution ≠ projected non-constrained solution (case 2a):
  (t̂₁, t̂₂) ≠ (max[0, t̄₁], max[0, t̄₂])
Constrained solution = projected non-constrained solution (case 2b):
  (t̂₁, t̂₂) = (max[0, t̄₁], max[0, t̄₂])
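A minimal numerical check of case 2a, with coupling and target values chosen only for illustration: the constrained minimizer of the coupled quadratic is found by enumerating the possible active sets (exact for a convex quadratic in two variables) and differs from the clipped unconstrained minimizer.

```python
import numpy as np

# f(t) = a1 (t1 - tb1)^2 + a2 (t2 - tb2)^2 + b (t2 - t1)^2, minimized over t >= 0
# (the additive constant γ is dropped); all parameter values are illustrative
a1, a2, b = 1.0, 1.0, 5.0
tb1, tb2 = -3.0, 1.0

def f(t1, t2):
    return a1 * (t1 - tb1) ** 2 + a2 * (t2 - tb2) ** 2 + b * (t2 - t1) ** 2

# Unconstrained minimizer: solve the 2x2 linear system grad f = 0
A = np.array([[a1 + b, -b], [-b, a2 + b]])
t_unc = np.linalg.solve(A, np.array([a1 * tb1, a2 * tb2]))

# Constrained minimizer over t >= 0: enumerate the possible active sets
candidates = [t_unc,
              np.array([0.0, max(0.0, a2 * tb2 / (a2 + b))]),   # t1 = 0 active
              np.array([max(0.0, a1 * tb1 / (a1 + b)), 0.0]),   # t2 = 0 active
              np.array([0.0, 0.0])]                              # both active
feasible = [t for t in candidates if np.all(t >= 0)]
t_con = min(feasible, key=lambda t: f(*t))

t_proj = np.maximum(t_unc, 0.0)    # projection of the unconstrained minimizer
print("unconstrained:", t_unc)
print("projected    :", t_proj, " f =", f(*t_proj))
print("constrained  :", t_con, " f =", f(*t_con))   # lower value at a different point
```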

Numerical optimisation: state of the art

Problem
  Quadratic optimisation with linear constraints

Difficulties
  N ∼ 1 000 000
  Constraints ⊕ non-separable variables

Existing algorithms
  Existing tools with guaranteed convergence [Bertsekas 95, 99; Nocedal 00, 08; Boyd 04, 11]
  Gradient projection methods, constrained gradient method
  Broyden-Fletcher-Goldfarb-Shanno (BFGS) and limited-memory variants
  Interior points and barrier methods
  Pixel-wise descent
  Augmented Lagrangian, ADMM: constrained but separable + non-separable but non-constrained
  Partial solutions still through FFT
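As an example of the first family listed above, a gradient-projection sketch for the constrained quadratic problem (positivity on a support, zero elsewhere); the step size and problem data are illustrative.

```python
import numpy as np

def projected_gradient(Q, q, support, n_iter=500):
    """Minimize (1/2) x'Qx + q'x over {x >= 0 on the support, x = 0 elsewhere}
    by gradient steps followed by projection onto the constraint set."""
    S = np.asarray(support, dtype=bool)
    step = 1.0 / np.linalg.norm(Q, 2)            # 1/L, L = largest eigenvalue of Q
    x = np.zeros(len(q))
    for _ in range(n_iter):
        x = x - step * (Q @ x + q)               # gradient step on the quadratic
        x = np.where(S, np.maximum(x, 0.0), 0.0) # projection: clip on S, zero elsewhere
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 5))
Q = A.T @ A + 0.1 * np.eye(5)                    # a small SPD Hessian
q = rng.standard_normal(5)
print(projected_gradient(Q, q, support=[True, True, True, True, False]))
```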

Lagrangians and penalisation

Equality constraints (x_p = 0 for p ∈ S̄)

  − Σ_{p∈S̄} ℓ_p x_p + (c/2) Σ_{p∈S̄} x_p²

Inequality constraints (x_p ≥ 0 for p ∈ S), with slack variables (s_p − x_p = 0; s_p ≥ 0)

  − Σ_{p∈S} ℓ_p (x_p − s_p) + (c/2) Σ_{p∈S} (x_p − s_p)²

Globally

  L(x, s, ℓ) = (1/2) xᵗQx + qᵗx − ℓᵗ(x − s) + (c/2) (x − s)ᵗ(x − s)

Iterative algorithm

  L(x, s, ℓ) = (1/2) xᵗQx + qᵗx − ℓᵗ(x − s) + (c/2) (x − s)ᵗ(x − s)

Iterate three steps

1. Unconstrained minimization of L w.r.t. x   (≡ FFT)

     x̃ = −(Q + c I_N)⁻¹ (q − [ℓ + c s])

2. Minimization of L w.r.t. s, s.t. s_p ≥ 0

     s̃_p = max(0, c x_p − ℓ_p)/c   for p ∈ S
     s̃_p = 0                       for p ∈ S̄

3. Update of ℓ

     ℓ̃_p = max(0, ℓ_p − c x_p)   for p ∈ S
     ℓ̃_p = ℓ_p − c x_p           for p ∈ S̄
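A dense toy transcription of these three steps (an ADMM-style augmented-Lagrangian loop); in the slides the x-step is carried out with FFTs rather than an explicit matrix inverse, and the problem data below are arbitrary.

```python
import numpy as np

def auglag_qp(Q, q, support, c=1.0, n_iter=300):
    """Minimize (1/2) x'Qx + q'x  s.t.  x >= 0 on `support` and x = 0 elsewhere,
    by iterating the three steps of the augmented-Lagrangian scheme."""
    S = np.asarray(support, dtype=bool)          # True on S, False on S-bar
    N = len(q)
    x, s, ell = np.zeros(N), np.zeros(N), np.zeros(N)
    Minv = np.linalg.inv(Q + c * np.eye(N))      # dense stand-in for the FFT-based solve
    for _ in range(n_iter):
        x = -Minv @ (q - (ell + c * s))                               # 1) min over x
        s = np.where(S, np.maximum(0.0, c * x - ell) / c, 0.0)        # 2) min over s >= 0
        ell = np.where(S, np.maximum(0.0, ell - c * x), ell - c * x)  # 3) update of ell
    return x                                     # approximately feasible at convergence (x -> s)

# Toy example: 4 variables, the last one excluded from the support
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
Q = A.T @ A + 0.1 * np.eye(4)                    # SPD Hessian
q = rng.standard_normal(4)
print(auglag_qp(Q, q, support=[True, True, True, False]))
```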

Details about Q and q

q — the dirty map

  q = ∂J/∂x |_{x_e = x_p = 0} = [ ∂J/∂x_e ; ∂J/∂x_p ] = −2 [ ȳ° ; ȳ° − λ_p 1/2 ]

Q — the dirty beam

  Q = ∂²J/∂x² = [ ∂²J/∂x_e²       ∂²J/∂x_e ∂x_p ]  =  [ H + λ_e DᵗD    H         ]
                [ ∂²J/∂x_p ∂x_e   ∂²J/∂x_p²     ]     [ H              H + ε_p I ]
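The reason the x-step reduces to FFTs: the dirty-beam operator H = F†TᵗTF acts as an FFT, a binary mask on the measured frequencies, and an inverse FFT. A sketch with an arbitrary 1-D frequency coverage, checked against the dense definition.

```python
import numpy as np

N = 256
measured = np.zeros(N, dtype=bool)
measured[[0, 3, 7, 12, 40, 41, 42, 100]] = True     # illustrative Fourier coverage

def apply_H(x):
    """Apply H = F†TᵗTF: FFT, keep only the measured frequencies, inverse FFT."""
    X = np.fft.fft(x, norm="ortho")
    X[~measured] = 0.0
    return np.fft.ifft(X, norm="ortho")

x = np.random.default_rng(3).standard_normal(N)
Hx = apply_H(x)

# Consistency check against the dense definition of H
F = np.fft.fft(np.eye(N), norm="ortho")
T = np.eye(N)[np.flatnonzero(measured)]
assert np.allclose(Hx, F.conj().T @ T.T @ T @ F @ x)
```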

Simulated data results

[Figure: true extended object x_e*, estimated extended object x̂_e, dirty map, true point object x_p*, estimated point object x̂_p]

Simulated data results

[Figure: true extended object x_e*, estimated extended object x̂_e, dirty map, true point object x_p*, estimated point object x̂_p]

Real data: first results

[Figure: dirty map, estimated extended object x̂_e, estimated point object x̂_p]

Real data: first results

[Figure: dirty map, estimated extended object x̂_e, estimated point object x̂_p]

Conclusions

Synthesis
  Direct model and inverse problem
  Interpolation-extrapolation / deconvolution / Fourier synthesis
  Double model and appropriate regularisation: points / background
  Positivity and support
  Optimisation: Lagrangians
  Simulations and real data: interferometry in radio-astronomy

Perspectives
  Quantitative assessment
  Case of a single map: an extended source / a set of point sources
  Non-quadratic penalty for the background
  Resolution enhancement
  Data and/or sources "off the grid"
  Hyperparameter estimation