Image restoration: linear solutions — Inverse filtering and Wiener filtering —

Jean-François Giovannelli — [email protected] — Groupe Signal-Image, Laboratoire de l'Intégration du Matériau au Système, Univ. Bordeaux – CNRS – BINP


Topics
- Image restoration, deconvolution
- Motivating examples: medical, astrophysical, industrial, ...
- Various problems: Fourier synthesis, deconvolution, ...
- Missing information: ill-posed character and regularisation

Three types of regularised inversion
1. Quadratic penalties and linear solutions: closed-form expression; computation through FFT; numerical optimisation, gradient algorithm
2. Non-quadratic penalties and edge preservation: half-quadratic approaches, including computation through FFT; numerical optimisation, gradient algorithm
3. Constraints, positivity and support: augmented Lagrangian and ADMM

Bayesian strategy: a few incursions
- Tuning hyperparameters, instrument parameters, ...
- Hidden / latent parameters, segmentation, detection, ...

Inversion: standard question

y = H(x) + ε = Hx + ε = h ⋆ x + ε        →        estimate x̂ = x̂(y)

Restoration, deconvolution-denoising
- General problem: ill-posed inverse problems, i.e., lack of information
- Methodology: regularisation, i.e., information compensation
- Specificity of the inversion / reconstruction / restoration methods
- Trade-off and tuning parameters
- Limited quality results

Example due to Hunt ("square" response) [1970]

Convolutive model (sample averaging): y = h ⋆ x + ε

[Figure: Input (x), Response (h), Output (y)]

Example: photographed photographer ("square" response)

Convolutive model (pixel averaging): y = h ⋆ x + ε
Fourier domain: ẙ(ν) = h̊(ν) x̊(ν) + ε̊(ν)

[Figure: Spatial response, Frequency response, Input (x), Output (y)]

Example: photographed photographer ("motion blur")

Convolutive model (pixel averaging): y = h ⋆ x + ε
Fourier domain: ẙ(ν) = h̊(ν) x̊(ν) + ε̊(ν)

[Figure: Spatial response, Frequency response, Input (x), Output (y)]

Convolution

Examples of response: square, motion, Gaussian, diffraction, punctual

Convolutive model
1D:  z(n) = Σ_{p=−P}^{+P} h(p) x(n − p)
2D:  z(n, m) = Σ_{p=−P}^{+P} Σ_{q=−Q}^{+Q} h(p, q) x(n − p, m − q)

Response h(p) or h(p, q): impulse response, convolution kernel, ... point spread function, stain image
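The 1D model above can be checked numerically; the following sketch (with an assumed toy triangular response) implements the sum z(n) = Σ_p h(p) x(n − p) directly and compares it with NumPy's built-in convolution.

```python
import numpy as np

def convolve_1d(h, x, P):
    """z(n) = sum_{p=-P}^{+P} h(p) x(n-p); h is indexed as h[p + P]."""
    N = len(x)
    z = np.zeros(N)
    for n in range(N):
        for p in range(-P, P + 1):
            if 0 <= n - p < N:              # zero padding outside the borders
                z[n] += h[p + P] * x[n - p]
    return z

# Assumed toy response of half-width P = 1: [h(-1), h(0), h(1)]
h = np.array([0.25, 0.5, 0.25])
x = np.zeros(16)
x[8] = 1.0                                   # impulse input
z = convolve_1d(h, x, P=1)                   # reproduces the response around n = 8
```

An impulse input returns the response itself, shifted to the impulse position.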

Matrix form: 1D convolution

Linear matrix relation: z = Hx
- Shift invariance → Toeplitz structure
- Short response → band structure

z_n = h_P x_{n−P} + ... + h_1 x_{n−1} + h_0 x_n + h_{−1} x_{n+1} + ... + h_{−P} x_{n+P}

H is a banded Toeplitz matrix: each row carries the reversed response [h_P ... h_0 ... h_{−P}], shifted one column to the right with respect to the row above; entries farther than P from the diagonal are zero.
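A small numerical sketch (assumed toy sizes N = 8, P = 1 and response values) makes the banded Toeplitz structure concrete: H[n, m] = h_{n−m} for |n − m| ≤ P, and zero elsewhere.

```python
import numpy as np

N, P = 8, 1
h = {1: 0.25, 0: 0.5, -1: 0.25}              # h_p for p = -P..P (assumed toy values)

H = np.zeros((N, N))
for n in range(N):
    for m in range(N):
        if abs(n - m) <= P:
            H[n, m] = h[n - m]               # z_n = sum_p h_p x_{n-p} => H[n, n-p] = h_p

x = np.arange(N, dtype=float)
z = H @ x                                    # matrix form of the convolution
```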

Dealing with side effects


Circulant case / circulant approximation

- Extend the convolution matrix into a circulant matrix
- Approximation: "periodic objects"

[The banded Toeplitz matrix is completed in its corners: the response coefficients wrap around the borders, so that every row is a cyclic shift of the previous one.]

Circulant case: diagonalization

Circulant matrices are diagonalized in the Fourier basis: H̄ = F† Λ_h F, with
F = N^{−1/2} [ e^{−2iπ(k−1)(l−1)/N} ],  k, l ∈ 1, ..., N

Reminder of properties
- F F† = F† F = I_N, hence F^{−1} = F†
- F^t = F, F† = F*
- F x = FFT(x), F† x = IFFT(x) (up to the normalisation)
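These properties are easy to verify numerically. With the N^{−1/2} scaling, F is unitary and symmetric, and √N·Fx coincides with the unnormalised FFT (the slide's F x = FFT(x) holds up to that scaling). A quick check:

```python
import numpy as np

N = 8
k, l = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
F = np.exp(-2j * np.pi * k * l / N) / np.sqrt(N)   # F = N^{-1/2} [e^{-2iπ(k-1)(l-1)/N}]

identity = F @ F.conj().T                          # F F† = I_N, hence F^{-1} = F†
x = np.arange(N, dtype=float)
lhs = np.sqrt(N) * (F @ x)                         # √N F x ...
rhs = np.fft.fft(x)                                # ... equals the unnormalised FFT
```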

Circulant case: eigenvalues

Eigenvalues ∼ frequency response:

h̊ = [h̊_0, h̊_1, ..., h̊_{N−1}]^t = √N F h̄ = fft(h, N)

where h̄ is the impulse response zero-padded to length N (anticausal coefficients wrapped circularly).

Eigenvalues "read" on the frequency response graph

[Figure: frequency response]

Comment: conditioning of H and low-pass character
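A minimal sketch (assumed causal toy response) confirming that the circulant matrix built from the zero-padded response has the DFT values fft(h, N) as eigenvalues, with Fourier vectors as eigenvectors:

```python
import numpy as np

N = 16
h = np.array([0.5, 0.3, 0.2])                # assumed causal response h_0, h_1, h_2
c = np.zeros(N)
c[:len(h)] = h                               # zero-padded response, as in fft(h, N)

# Circulant matrix: C[n, m] = c[(n - m) mod N]
n, m = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = c[(n - m) % N]

lam = np.fft.fft(c)                          # candidate eigenvalues h̊ = fft(h, N)
k = 3
v = np.exp(2j * np.pi * k * np.arange(N) / N)   # k-th Fourier vector = eigenvector
```

The eigenvalue at zero frequency is the sum of the response, here 1 (unit gain).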

Circulant case: convolution through FFT

Matrix form of the convolution through FFT:
z = Hx = F† Λ_h F x
Fz = Λ_h F x
z̊ = Λ_h x̊
z̊_n = h̊_n x̊_n   for n = 1, ..., N

[Figure: frequency response]

- Frequency attenuations, low-pass character, ill-conditioned character
- Possibly non-invertible system
- Remark: exact convolution calculus is always possible by FFT, but ...
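The identity z̊_n = h̊_n x̊_n says that multiplying by the circulant matrix and pointwise multiplication in the Fourier domain are the same operation; a sketch with assumed toy data:

```python
import numpy as np

N = 32
h = np.array([0.5, 0.3, 0.2])                      # assumed toy response
x = np.sin(2 * np.pi * np.arange(N) / N) + 0.1 * np.arange(N)

# Way 1: matrix product z = H̄ x with the circulant matrix
c = np.zeros(N)
c[:len(h)] = h
n, m = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
z_mat = c[(n - m) % N] @ x

# Way 2: pointwise product in the Fourier domain, z̊_n = h̊_n x̊_n
z_fft = np.fft.ifft(np.fft.fft(h, N) * np.fft.fft(x)).real
```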

First approach for restoration


First restoration: least squares

Compare observations y and model output Hx
- Unknown: x
- Known: H and y

Quadratic criterion: distance between observation and model output
J_LS(x) = ‖y − Hx‖²

Least squares solution
x̂_LS = arg min_x J_LS(x)

The solution x̂_LS is the best one to reproduce the data: "faut que ça colle", it must fit, it must match.

Calculational aspects

Linear model + least squares → quadratic criterion:
J_LS(x) = ‖y − Hx‖² = (y − Hx)^t (y − Hx) = x^t H^t H x − 2 y^t H x + y^t y

Gradient calculus (linear):
g(x) = ∂J_LS/∂x = 2 H^t H x − 2 H^t y = −2 H^t (y − Hx)

And Hessian calculus (constant, ⩾ 0 or > 0):
Q = ∂²J_LS/∂x² = 2 H^t H

Gradient nullification → linear system → matrix inversion:
(H^t H) x̂_LS = H^t y
x̂_LS = (H^t H)^{−1} H^t y
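The derivation can be checked on an assumed well-conditioned toy problem: the solution of the normal system recovers the unknown in the noiseless case, and the gradient vanishes at the minimiser.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
A = rng.standard_normal((N, N))
H = A.T @ A / N + np.eye(N)                 # assumed SPD, well-conditioned toy H
x_true = rng.standard_normal(N)
y = H @ x_true                              # noiseless data

# Normal system (H^t H) x = H^t y and its solution
x_ls = np.linalg.solve(H.T @ H, H.T @ y)

# Gradient g(x) = -2 H^t (y - Hx), which must vanish at the minimiser
g = -2 * H.T @ (y - H @ x_ls)
```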

Computational aspects and implementation

Various options and many relationships ...
- Direct calculus, compact (closed) form, matrix inversion
- Algorithms for linear system solution: Gauss, Gauss-Jordan, substitution, triangularisation, ...
- Numerical optimisation: gradient descent (descent in the direction opposite to the gradient) and various modifications
- Special algorithms, especially for the 1D case: recursive least squares, Kalman smoother or filter (and fast versions ...)
- Diagonalization: circulant approximation and diagonalization by FFT

Computations through FFT

x̂ = (H^t H)^{−1} H^t y
  ≃ (H̄^t H̄)^{−1} H̄^t y
  = (F† Λ†_h F F† Λ_h F)^{−1} F† Λ†_h F y
  = F† Λ_h^{−1} F y

F x̂ = Λ_h^{−1} F y,  i.e.,  (F x̂)_n = ẙ_n / h̊_n   for n = 1, ..., N

It is just inverse filtering!

Matlab pseudo-code:
ObjetEstLS = IFFT( FFT(Data,N) ./ FFT(IR,N) )
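A Python transcription of the pseudo-code (assumed toy response and object). In this noiseless case, with a response whose DFT has no zeros, inverse filtering recovers the object exactly; the following slides show why it fails as soon as noise is present.

```python
import numpy as np

N = 64
IR = np.array([0.5, 0.3, 0.2])               # assumed toy response, no spectral zeros
x = np.zeros(N)
x[20:30] = 1.0                               # boxcar object
Data = np.fft.ifft(np.fft.fft(IR, N) * np.fft.fft(x)).real   # noiseless circular blur

# ObjetEstLS = IFFT( FFT(Data,N) ./ FFT(IR,N) )
ObjetEstLS = np.fft.ifft(np.fft.fft(Data, N) / np.fft.fft(IR, N)).real
```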

Least squares solution (Hunt and photographer)

[Figure: Input, Observations, LS solution; the LS solution oscillates over roughly ±4, far outside the input range [0, 1.2]]

Advantages / disadvantages: very general, hyper-fast, no parameter to tune ... but it does not work ... (except if ...)

Frequency analysis

[Figure: frequency response (2D and cross-section); spectra of the original image, the observations, and the LS solution]

Shortcomings of the least squares solution

Unacceptable solution: explosive, noise-dominated

Three analyses: signal-image, numerical, statistical
- Bandwidth: non-observed or badly observed frequencies
- Badly scaled matrix, eigenvalues, numerical instabilities
- Strong variance (even if minimum variance and unbiased)

Ill-posed problem
Missing information
Regularisation
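The noise amplification can be made fully deterministic with a toy response (assumed values below) whose DFT nearly vanishes at ν = 1/2: a perturbation at that frequency comes out of the inverse filter amplified by exactly 1/|h̊(1/2)| = 50.

```python
import numpy as np

N = 128
h = np.array([0.25, 0.49, 0.26])             # assumed toy response: h̊(1/2) = 0.02
x = np.zeros(N)
x[40:60] = 1.0
h_f = np.fft.fft(h, N)
y = np.fft.ifft(h_f * np.fft.fft(x)).real    # noiseless blurred data

eps = 1e-3 * (-1.0) ** np.arange(N)          # tiny perturbation at frequency ν = 1/2
x_ls = np.fft.ifft(np.fft.fft(y + eps) / h_f).real   # inverse filtering

amplification = np.linalg.norm(x_ls - x) / np.linalg.norm(eps)
```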

Regularisation: generalities

1. Data insufficiently informative
2. Account for prior information

Question of compromise, of competition; specificity of methods

Here: smoothness of images, ideally edge-preserving

Making prior information explicit: regularisation through penalty, constraint, re-parametrisation ... or other ideas ... but regularisation anyhow

Regularisation by penalty
J_PLS(x) = ‖y − Hx‖² + µ P(x)

Restored image
x̂_PLS = arg min_x J_PLS(x)

Quadratic penalty (1)

Differences, higher-order derivatives, generalizations, ...
P(x) = Σ_n (x_{n+1} − x_n)²
P(x) = Σ_n (x_{n+1} − 2 x_n + x_{n−1})²
P(x) = Σ_n (α x_{n+1} − x_n + α′ x_{n−1})²
P(x) = Σ_n (α_n^t x)²

Linear combinations (wavelets and other "-let" decompositions, ...):
P(x) = Σ_n (w_n^t x)² = Σ_n ( Σ_m w_{nm} x_m )²

Redundant or not; link with the Haar wavelet and others

Quadratic penalty (2)

2D aspects: derivative, finite differences, gradient approximations
P(x) = Σ_{p∼q} (x_p − x_q)² = Σ_{n,m} (x_{n+1,m} − x_{n,m})² + Σ_{n,m} (x_{n,m+1} − x_{n,m})²

Norms of filtered images
- Through rows and columns: difference kernels [1; −1] and [1, −1], or the Sobel kernels
  [1 0 −1; 2 0 −2; 1 0 −1] and [1 2 1; 0 0 0; −1 −2 −1]
- Jointly both of them: Laplacian kernel [0 −1 0; −1 4 −1; 0 −1 0]

Remarks
- Notion of neighborhood and Markov field
- Any high-pass filter, contour detector (Prewitt, Sobel, ...)
- Linear combinations: wavelet, contourlet and other "-let" decompositions, ...

Quadratic penalty (3)

Other possibilities (slightly different)
- Enforcement towards a known shape x₀: P(x) = (x − x₀)^t (x − x₀)
- Usual Euclidean norm (separable terms): P(x) = x^t x
- More general form: P(x) = (x − x₀)^t M (x − x₀)

In the following developments:
P₁(x) = Σ (x_p − x_q)² = ‖Dx‖² = x^t D^t D x,  with D = ...
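The identity P₁(x) = ‖Dx‖² = x^t D^t D x is easy to verify with an explicit first-difference matrix D (here the non-circular (N−1) × N version, an assumed toy size):

```python
import numpy as np

N = 10
x = np.arange(N, dtype=float) ** 2           # arbitrary test vector

# First-difference matrix: (Dx)_n = x_{n+1} - x_n
D = np.zeros((N - 1, N))
for n in range(N - 1):
    D[n, n], D[n, n + 1] = -1.0, 1.0

P_sum = np.sum((x[1:] - x[:-1]) ** 2)        # Σ (x_{n+1} - x_n)^2
P_norm = np.linalg.norm(D @ x) ** 2          # ‖Dx‖²
P_quad = x @ (D.T @ D) @ x                   # x^t D^t D x
```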

Penalised least squares restoration

Remind the criterion ...
J_PLS(x) = ‖y − Hx‖² + µ ‖Dx‖²

... and its gradient ...
g(x) = ∂J_PLS/∂x = −2 H^t (y − Hx) + 2µ D^t D x

... and its Hessian ...
Q = ∂²J_PLS/∂x² = 2 H^t H + 2µ D^t D

... the normal system of equations ...
(H^t H + µ D^t D) x̂_PLS = H^t y

... and the minimiser ...
x̂_PLS = (H^t H + µ D^t D)^{−1} H^t y
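A sketch on an assumed toy problem (Gaussian-blur H, first-difference D): solving the normal system gives the minimiser, where the gradient vanishes and the penalised criterion is no larger than at any other point, e.g. the true object.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 30
# Assumed toy smoothing matrix H and first-difference matrix D
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
H = np.exp(-0.5 * (i - j) ** 2)
D = (np.eye(N, k=1) - np.eye(N))[:-1]        # (N-1) x N differences
x_true = np.cumsum(rng.standard_normal(N))
y = H @ x_true + 0.01 * rng.standard_normal(N)

mu = 0.1
x_pls = np.linalg.solve(H.T @ H + mu * D.T @ D, H.T @ y)

# Gradient -2 H^t (y - Hx) + 2µ D^t D x vanishes at the minimiser
g = -2 * H.T @ (y - H @ x_pls) + 2 * mu * D.T @ (D @ x_pls)
J = lambda x: np.sum((y - H @ x) ** 2) + mu * np.sum((D @ x) ** 2)
```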

Computational aspects and implementation

Various options and many relationships ...
- Direct calculus, compact (closed) form, matrix inversion
- Algorithms for linear system solution: Gauss, Gauss-Jordan, substitution, triangularisation, ...
- Special algorithms, especially for the 1D case: recursive least squares, Kalman smoother or filter (and fast versions)
- Numerical optimisation: gradient descent (descent in the direction opposite to the gradient) and various modifications
- Diagonalization: circulant approximation and diagonalization by FFT

Circulant form

Circulant approximation for H as previously; for D in addition, one extra term of interaction:

D̄ = the N × N first-difference matrix with 1 on the diagonal, −1 on the subdiagonal, and −1 in the top-right corner (circular wrap)

Diagonalization of D̄: D̄ = F† Λ_d F
Eigenvalues by FFT: d̊ = fft([-1,1],N)

Computations by FFT

x̂ = (H̄^t H̄ + µ D̄^t D̄)^{−1} H̄^t y
  = (F† Λ†_h F F† Λ_h F + µ F† Λ†_d F F† Λ_d F)^{−1} F† Λ†_h F y
  = F† (Λ†_h Λ_h + µ Λ†_d Λ_d)^{−1} Λ†_h F y

F x̂ = (Λ†_h Λ_h + µ Λ†_d Λ_d)^{−1} Λ†_h ẙ,  i.e.,

(F x̂)_n = h̊*_n ẙ_n / ( |h̊_n|² + µ |d̊_n|² )   for n = 1, ..., N

This is just Wiener filtering!

Matlab pseudo-code:
Gain = conj(fft(IR,N)) ./ ( abs(fft(IR,N)).^2 + mu*abs(fft([-1,1],N)).^2 )
ObjetEstPLS = IFFT( FFT(Data,N) .* Gain )
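A Python transcription of the pseudo-code (assumed toy response, object, and µ), cross-checked against the normal system solved with explicit circulant matrices H̄ and D̄: both routes give the same estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32
IR = np.array([0.25, 0.49, 0.26])            # assumed toy low-pass response
x = np.zeros(N)
x[10:20] = 1.0
h_f = np.fft.fft(IR, N)
Data = np.fft.ifft(h_f * np.fft.fft(x)).real + 1e-3 * rng.standard_normal(N)

# Gain = conj(fft(IR,N)) ./ (|fft(IR,N)|.^2 + mu*|fft([-1,1],N)|.^2)
mu = 1e-2
d_f = np.fft.fft([-1.0, 1.0], N)
Gain = np.conj(h_f) / (np.abs(h_f) ** 2 + mu * np.abs(d_f) ** 2)
ObjetEstPLS = np.fft.ifft(np.fft.fft(Data) * Gain).real

# Cross-check: normal system with explicit circulant H̄ and D̄
idx = (np.arange(N)[:, None] - np.arange(N)[None, :]) % N
c = np.zeros(N); c[:len(IR)] = IR
Hbar = c[idx]                                # circulant convolution matrix
dcol = np.zeros(N); dcol[0], dcol[1] = 1.0, -1.0
Dbar = dcol[idx]                             # circulant first-difference matrix
x_direct = np.linalg.solve(Hbar.T @ Hbar + mu * Dbar.T @ Dbar, Hbar.T @ Data)
```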

Solutions by quadratic penalties (Hunt)

Evolution with µ

[Figure: restored signals for µ = 0, 10^{−3}, 10^{−2}, 10^{−1}, 1, 10^{+1}; the case µ = 0 reproduces the explosive LS solution]

Photographer example (zoom)

[Figure: Input, Data, Penalised Least Squares]

Photographer example (total)

[Figure: Input, Data, Penalised Least Squares]

Solution without circulant approximation

[Figure: Input, Data, Penalised Least Squares]

Frequency analysis: equalisation

[Figure: restoration gain and spectra as a function of frequency, for µ = 0, µ = +, µ = ++, µ = +++]

Depending on the considered frequency:
- equalisation in the correctly observed bandwidth
- nullification in the incorrectly observed bandwidth

Bias / Variance (1)

[Figure: bias and variance as a function of frequency]

Bias / Variance (2)

[Figure: bias, variance, and mean square error as a function of µ]

Synthesis and extensions

Synthesis
- Image deconvolution; also available for non-invariant linear direct models, and for the signal case ...
- Quadratic penalty and smoothness of the solution; gradient of gray levels and extensions; other decompositions
- Closed-form expression, linear w.r.t. the data
- Numerical computations: circulant approximation (diagonalization, FFT only); numerical optimisation (gradient, ...)

Extensions (next lecture)
- Extension to non-quadratic penalties → better image resolution
- Including constraints → better image resolution
- Hyperparameter estimation, instrument parameter estimation, ...