Super-Resolution: A Bayesian Approach

Ali Mohammad-Djafari
Laboratoire des Signaux et Systèmes, UMR 8506 CNRS-SUPELEC-Univ Paris-Sud 11
SUPELEC, 91192 Gif-sur-Yvette, France
Email: [email protected]
http://djafari.free.fr

A. Mohammad-Djafari, superesolution, Cours Master Montpellier 2013
Contents

- Super-Resolution problems: Single Input Single Output (SISO), Multiple Input Single Output (MISO), Multiple Input Multiple Output (MIMO)
- Forward models
- SISO: Image deconvolution (restoration)
- Convolution, Identification, Deconvolution and Blind Deconvolution
- Algebraic deterministic methods
- Bayesian approach
- MISO or MIMO: Classical SR methods
- Bayesian approach
- A new method using hidden Markov modeling of the HR image
- Some results
- Conclusions
- New challenges in SR
Examples of applications

- Embedded low-resolution imaging devices: increasing the resolution
- Thermal cameras: increasing the resolution
- Multi-camera and multi-view recording in aerial or satellite imaging: registration and image fusion
- Medical and biological imaging systems: multi-modal image fusion
- Holographic and 3D TV imaging: 3D from 2D
- 3D photography and surface modeling for 3D scenes
SISO (Single Input Single Output) SR problem

    f(r) --> [B] --> g(r)

- B: Blurring (needs image restoration)

  $g(x,y) = \iint f(x',y')\, h(x-x',y-y')\, dx'\, dy'$
  $g(r) = \iint f(r')\, h(r-r')\, dr'$

- $h(r) = h(x,y)$: point spread function
- Convolution / Deconvolution
SISO (Single Input Single Output) SR problem

    f(r) --> [B] --> [D] --> g(r)

- B: Blurring (needs image restoration)
- D: Down-sampling (needs interpolation and up-sampling)
MISO (Multi Input Single Output) SR problem

    f(r) --> [B] --> [Mk] --> [D] --> gk(r)

- B: Blurring (needs image restoration)
- Mk: Movement (needs registration and image fusion)
- D: Down-sampling (needs interpolation and up-sampling)
MIMO (Multi Input Multi Output) SR problem

    fk(r) --> [B] --> [Mk] --> [D] --> gk(r)

- B: Blurring (needs image restoration)
- Mk: Movement (needs registration and image fusion)
- D: Down-sampling (needs interpolation and up-sampling)
MISO 3D SR problem

    f(r) --> [B] --> [Mk] --> [D] --> gk(r)

- Non Destructive Testing (NDT) using Computed Tomography (CT)
- Multi-modal medical imaging
Forward model for SISO SR problem

    f(r) --> [B] --> g(r)

- B: Blurring (needs image restoration)

  $g(x,y) = \iint f(x',y')\, h(x-x',y-y')\, dx'\, dy'$
  $g(r) = \iint f(r')\, h(r-r')\, dr'$

- $h(r) = h(x,y)$: point spread function
- Convolution / Deconvolution
Image Restoration (Deconvolution)

- Forward problem (convolution):
  $g(x,y) = \iint f(x',y')\, h(x-x',y-y')\, dx'\, dy'$
- Inverse problem: given g and h, find f (deconvolution)
- Fourier-based methods:
  $F(u,v) = \iint f(x,y) \exp\{-j(ux+vy)\}\, dx\, dy$
  $f(x,y) = \iint F(u,v) \exp\{+j(ux+vy)\}\, du\, dv$
  $g(x,y) = f(x,y) * h(x,y) \longrightarrow G(u,v) = H(u,v)\, F(u,v)$
- Inverse filtering: $F(u,v) = \frac{1}{H(u,v)}\, G(u,v)$
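As a minimal numerical sketch of inverse filtering (all names, sizes and the kernel are my own choices, not from the slides): under circular convolution G = H·F, so dividing by H recovers f exactly as long as there is no noise and H has no zeros.

```python
import numpy as np

# Noise-free 1D example of inverse filtering F = G / H (circular convolution).
n = 64
rng = np.random.default_rng(0)
f = rng.standard_normal(n)            # "true" signal to recover
h = np.zeros(n)
h[:5] = 1.0 / 5.0                     # 5-tap moving-average blur

H = np.fft.fft(h)
G = H * np.fft.fft(f)                 # g = h (*) f in the Fourier domain
f_hat = np.real(np.fft.ifft(G / H))   # inverse filtering (this H has no zeros)

err = np.max(np.abs(f_hat - f))
print(err)                            # near machine precision: exact recovery without noise
```

With noise, or when H(u,v) has zeros, the division amplifies errors: this is exactly the difficulty noted above and the motivation for the Wiener filter.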
Deconvolution: 1D and 2D cases

    f(t) --> [h(t)] --> (+) <-- ε(t) --> g(t)

  $g(t) = \int f(t')\, h(t-t')\, dt' + \epsilon(t)$

- f(t), g(t) and ε(t) are modelled as Gaussian random signals.

    f(x,y) --> [h(x,y)] --> (+) <-- ε(x,y) --> g(x,y)

  $g(x,y) = \iint f(x',y')\, h(x-x',y-y')\, dx'\, dy' + \epsilon(x,y)$

- f(x,y), g(x,y) and ε(x,y) are modelled as homogeneous Gaussian random fields.
Wiener Filtering: 1D Case

    f(t) --> [h(t)] --> (+) <-- ε(t) --> g(t)

  $g(t) = h(t) * f(t) + \epsilon(t)$

- Expected values: $E\{g(t)\} = h(t) * E\{f(t)\} + E\{\epsilon(t)\}$
- Auto- and cross-correlation functions:
  $R_{gg}(\tau) = E\{g(t)\, g(t+\tau)\}$
  $R_{ff}(\tau) = E\{f(t)\, f(t+\tau)\}$
  $R_{\epsilon f}(\tau) = R_{f\epsilon}(-\tau) = E\{\epsilon(t)\, f(t+\tau)\}$
  $R_{gf}(\tau) = R_{fg}(-\tau) = E\{g(t)\, f(t+\tau)\}$
Wiener Filtering: 1D Case

    f(t) --> [h(t)] --> (+) <-- ε(t) --> g(t)

- Auto- and cross-correlation functions:
  $R_{gg}(\tau) = h(\tau) * h(-\tau) * R_{ff}(\tau) + R_{\epsilon\epsilon}(\tau)$
  $R_{gf}(\tau) = h(\tau) * R_{ff}(\tau)$
- Spectral density functions:
  $S_{gg}(\omega) = |H(\omega)|^2 S_{ff}(\omega) + S_{\epsilon\epsilon}(\omega)$
  $S_{gf}(\omega) = H(\omega)\, S_{ff}(\omega)$
  $S_{fg}(\omega) = H^*(\omega)\, S_{ff}(\omega)$
- Wiener filtering: $g(t) \to [w(t)] \to \hat f(t)$, or $G(\omega) \to [W(\omega)] \to \hat F(\omega)$
Wiener Filtering: 1D Case

  $\mathrm{MSE} = E\left\{[f(t) - \hat f(t)]^2\right\} = E\left\{[f(t) - w(t) * g(t)]^2\right\}$
  $\frac{\partial\, \mathrm{MSE}}{\partial w} = -2\, E\{[f(t) - w(t) * g(t)]\, g(t+\tau)\} = 0, \quad \forall \tau$
  $\longrightarrow R_{fg}(\tau) = w(\tau) * R_{gg}(\tau)$

  $W(\omega) = \frac{S_{fg}(\omega)}{S_{gg}(\omega)} = \frac{H^*(\omega)\, S_{ff}(\omega)}{|H(\omega)|^2 S_{ff}(\omega) + S_{\epsilon\epsilon}(\omega)} = \frac{1}{H(\omega)}\, \frac{|H(\omega)|^2}{|H(\omega)|^2 + \frac{S_{\epsilon\epsilon}(\omega)}{S_{ff}(\omega)}}$
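A minimal NumPy sketch of the closed-form Wiener filter above, assuming (my simplification) white signal and noise spectra $S_{ff} = 1$ and $S_{\epsilon\epsilon} = \sigma^2$; it compares the Wiener estimate against naive inverse filtering on noisy data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
f = rng.standard_normal(n)            # zero-mean white signal, S_ff ~ 1
h = np.zeros(n)
h[:7] = 1.0 / 7.0                     # blur kernel (circular convolution)
sigma = 0.05                          # noise std, S_ee = sigma**2

g = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f)))
g += sigma * rng.standard_normal(n)

H = np.fft.fft(h)
S_ff, S_ee = 1.0, sigma**2
# Wiener filter W = H* S_ff / (|H|^2 S_ff + S_ee):
W = np.conj(H) * S_ff / (np.abs(H)**2 * S_ff + S_ee)
f_wiener = np.real(np.fft.ifft(W * np.fft.fft(g)))
f_inverse = np.real(np.fft.ifft(np.fft.fft(g) / H))   # naive inverse filter 1/H

mse_wiener = np.mean((f_wiener - f)**2)
mse_inverse = np.mean((f_inverse - f)**2)
print(mse_wiener, mse_inverse)        # the Wiener filter is far more stable
```

Near the zeros of H(ω) the inverse filter blows the noise up, while the Wiener filter automatically attenuates those frequencies.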
Wiener Filtering: 2D Case

- Linear estimation: $\hat f(x,y)$ is such that:
  - it depends on g(x,y) in a linear way:
    $\hat f(x,y) = \iint g(x',y')\, w(x-x', y-y')\, dx'\, dy'$
  - w(x,y) is the impulse response of the Wiener filter,
  - it minimizes the MSE $E\{|f(x,y) - \hat f(x,y)|^2\}$.
- Orthogonality condition:
  $(f(x,y) - \hat f(x,y)) \perp g(x',y') \longrightarrow E\{(f(x,y) - \hat f(x,y))\, g(x',y')\} = 0$
  $\hat f = g * w \longrightarrow E\{(f(x,y) - g(x,y) * w(x,y))\, g(x+\alpha_1, y+\alpha_2)\} = 0$
  $R_{fg}(\alpha_1,\alpha_2) = (R_{gg} * w)(\alpha_1,\alpha_2) \xrightarrow{\ \mathrm{FT}\ } S_{fg}(u,v) = S_{gg}(u,v)\, W(u,v)$
  $\Longrightarrow W(u,v) = \frac{S_{fg}(u,v)}{S_{gg}(u,v)}$
Wiener filtering: 1D and 2D Cases

  Signal: $W(\omega) = \frac{S_{fg}(\omega)}{S_{gg}(\omega)}$        Image: $W(u,v) = \frac{S_{fg}(u,v)}{S_{gg}(u,v)}$

Particular case: f(x,y) and ε(x,y) are assumed centered and uncorrelated:

  $S_{fg}(u,v) = H^*(u,v)\, S_{ff}(u,v)$
  $S_{gg}(u,v) = |H(u,v)|^2 S_{ff}(u,v) + S_{\epsilon\epsilon}(u,v)$
  $W(u,v) = \frac{H^*(u,v)\, S_{ff}(u,v)}{|H(u,v)|^2 S_{ff}(u,v) + S_{\epsilon\epsilon}(u,v)}$

  Signal: $W(\omega) = \frac{1}{H(\omega)}\, \frac{|H(\omega)|^2}{|H(\omega)|^2 + \frac{S_{\epsilon\epsilon}(\omega)}{S_{ff}(\omega)}}$        Image: $W(u,v) = \frac{1}{H(u,v)}\, \frac{|H(u,v)|^2}{|H(u,v)|^2 + \frac{S_{\epsilon\epsilon}(u,v)}{S_{ff}(u,v)}}$
Convolution, Deconvolution, Identification and Blind Deconvolution in signal processing

    f(t) --> [h(t)] --> (+) <-- ε(t) --> g(t)

[Figure: example input signal x(t) and output signal y(t).]

  $g(t) = \int f(t')\, h(t-t')\, dt' + \epsilon(t) = \int h(t')\, f(t-t')\, dt' + \epsilon(t)$

- Convolution: given f and h, compute g
- Identification: given f and g, estimate h
- Deconvolution: given g and h, estimate f
- Blind deconvolution: given g, estimate both h and f
Convolution: Discretization

    f(t) --> [h(t)] --> (+) <-- ε(t) --> g(t)

  $g(t) = \int f(t')\, h(t-t')\, dt' + \epsilon(t) = \int h(t')\, f(t-t')\, dt' + \epsilon(t)$

- The signals f(t), g(t), h(t) are discretized with the same sampling period $\Delta T = 1$.
- The impulse response is finite (FIR): h(t) = 0 for $t < -q\Delta T$ or $t > p\Delta T$.

  $g(m) = \sum_{k=-q}^{p} h(k)\, f(m-k) + \epsilon(m), \quad m = 0, \dots, M$
Convolution: Discretized matrix-vector form

  $\begin{bmatrix} g(0) \\ g(1) \\ \vdots \\ g(M) \end{bmatrix} =
  \begin{bmatrix}
  h(p) & \cdots & h(0) & \cdots & h(-q) & & \\
       & h(p) & \cdots & h(0) & \cdots & h(-q) & \\
       &      & \ddots &      & \ddots &       & \ddots \\
       &      &        & h(p) & \cdots & h(0) \; \cdots \; h(-q)
  \end{bmatrix}
  \begin{bmatrix} f(-p) \\ \vdots \\ f(0) \\ f(1) \\ \vdots \\ f(M) \\ \vdots \\ f(M+q) \end{bmatrix}$

  $g = H f + \epsilon$

- g is an (M+1)-dimensional vector,
- f has dimension M + p + q + 1,
- h = [h(p), ..., h(0), ..., h(-q)] has dimension p + q + 1,
- H has dimensions (M+1) × (M+p+q+1).
Convolution: Discretized matrix-vector form

- If the system is causal (q = 0) we obtain

  $\begin{bmatrix} g(0) \\ g(1) \\ \vdots \\ g(M) \end{bmatrix} =
  \begin{bmatrix}
  h(p) & \cdots & h(0) & & \\
       & h(p) & \cdots & h(0) & \\
       &      & \ddots &      & \ddots \\
       &      &        & h(p) & \cdots \; h(0)
  \end{bmatrix}
  \begin{bmatrix} f(-p) \\ \vdots \\ f(0) \\ f(1) \\ \vdots \\ f(M) \end{bmatrix}$

- g is an (M+1)-dimensional vector,
- f has dimension M + p + 1,
- h = [h(p), ..., h(0)] has dimension p + 1,
- H has dimensions (M+1) × (M+p+1).
Convolution: Causal systems and causal input

  $\begin{bmatrix} g(0) \\ g(1) \\ \vdots \\ g(M) \end{bmatrix} =
  \begin{bmatrix}
  h(0) & & & \\
  h(1) & h(0) & & \\
  \vdots & & \ddots & \\
  h(p) & \cdots & h(0) & \\
   & \ddots & & \ddots \\
   & & h(p) \; \cdots & \; h(0)
  \end{bmatrix}
  \begin{bmatrix} f(0) \\ f(1) \\ \vdots \\ f(M) \end{bmatrix}$

- g is an (M+1)-dimensional vector,
- f has dimension M + 1,
- h = [h(p), ..., h(0)] has dimension p + 1,
- H has dimensions (M+1) × (M+1) (lower-triangular Toeplitz).
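The causal, causal-input case above can be checked numerically. This sketch (sizes and values are my own) builds the lower-triangular Toeplitz H and verifies that H f reproduces the sums $g(m) = \sum_k h(k) f(m-k)$ computed by np.convolve.

```python
import numpy as np

p, M = 3, 9
rng = np.random.default_rng(2)
h = rng.standard_normal(p + 1)        # h[k] = h(k), k = 0..p (causal FIR)
f = rng.standard_normal(M + 1)        # causal input f(0..M)

# Lower-triangular Toeplitz H: H[m, j] = h(m - j) for 0 <= m - j <= p
H = np.zeros((M + 1, M + 1))
for m in range(M + 1):
    for j in range(max(0, m - p), m + 1):
        H[m, j] = h[m - j]

g = H @ f
g_ref = np.convolve(h, f)[:M + 1]     # g(m) = sum_k h(k) f(m-k), truncated to m <= M
print(np.allclose(g, g_ref))
```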
Convolution, Identification, Deconvolution and Blind deconvolution problems

  $g(t) = \int f(t')\, h(t-t')\, dt' + \epsilon(t) = \int h(t')\, f(t-t')\, dt' + \epsilon(t)$

    f(t) --> [h(t)] --> (+) <-- ε(t) --> g(t)
    F(ω) --> [H(ω)] --> (+) <-- E(ω) --> G(ω)

  $G(\omega) = H(\omega)\, F(\omega) + E(\omega)$

  Deconvolution: $\hat F(\omega) = \frac{G(\omega)}{H(\omega)} = F(\omega) + \frac{E(\omega)}{H(\omega)}$
  Identification: $\hat H(\omega) = \frac{G(\omega)}{F(\omega)} = H(\omega) + \frac{E(\omega)}{F(\omega)}$

- Convolution: given h and f, compute g
- Identification: given f and g, estimate h
- Simple deconvolution: given h and g, estimate f
- Blind deconvolution: given g, estimate h and f
Deconvolution: Given g and h, estimate f

- Direct computation: f = deconv(g, h)
- Fourier domain: inverse filtering $F(\omega) = \frac{G(\omega)}{H(\omega)}$
  - compute $H(\omega)$ and $G(\omega)$, then $F(\omega) = \frac{G(\omega)}{H(\omega)}$;
  - compute f(t) by inverse FT of $F(\omega)$.
- Main difficulties: division by zero and noise amplification
Identification: Given g and f, estimate h

- Direct computation:
  - $f(t) = \delta(t) \longrightarrow g(t) = h(t)$
- Fourier domain: inverse filtering $H(\omega) = \frac{G(\omega)}{F(\omega)}$
  - compute $F(\omega)$ and $G(\omega)$, then $H(\omega) = \frac{G(\omega)}{F(\omega)}$;
  - compute h(t) by inverse FT of $H(\omega)$.
- Main difficulties: division by zero and noise amplification
Convolution: Discretization for Identification (causal systems and causal input)

  $\begin{bmatrix} g(0) \\ g(1) \\ \vdots \\ g(M) \end{bmatrix} =
  \begin{bmatrix}
  0 & \cdots & 0 & f(0) \\
  0 & \cdots & f(0) & f(1) \\
  \vdots & & & \vdots \\
  f(M-p) & \cdots & & f(M)
  \end{bmatrix}
  \begin{bmatrix} h(p) \\ h(p-1) \\ \vdots \\ h(0) \end{bmatrix}$

  $g = F h + \epsilon$

- g is an (M+1)-dimensional vector,
- F has dimensions (M+1) × (p+1).
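With this matrix form, identification reduces to a linear least-squares problem in h. A minimal sketch (sizes and noise level are my own; for simplicity the h vector here is ordered [h(0), ..., h(p)] rather than reversed as on the slide):

```python
import numpy as np

p, M = 4, 199
rng = np.random.default_rng(3)
h_true = rng.standard_normal(p + 1)
f = rng.standard_normal(M + 1)
g = np.convolve(h_true, f)[:M + 1] + 0.01 * rng.standard_normal(M + 1)

# F[m, k] multiplies h(k): g(m) = sum_k h(k) f(m - k)
F = np.zeros((M + 1, p + 1))
for k in range(p + 1):
    F[k:, k] = f[:M + 1 - k]

h_hat, *_ = np.linalg.lstsq(F, g, rcond=None)   # solve g = F h in the LS sense
print(np.max(np.abs(h_hat - h_true)))           # small: limited only by the noise
```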
Convolution in imaging systems

    f(x,y) --> [h(x,y)] --> (+) <-- ε(x,y) --> g(x,y)

  $g(x,y) = \iint f(x',y')\, h(x,y;x',y')\, dx'\, dy' + \epsilon(x,y)$
2D Convolution for image restoration

    f(x,y) --> [h(x,y)] --> (+) <-- ε(x,y) --> g(x,y)

  $g(x,y) = \iint_D f(x',y')\, h(x-x', y-y')\, dx'\, dy' + \epsilon(x,y)$

  $g(m\Delta x, n\Delta y) = \sum_{i=-I}^{+I} \sum_{j=-J}^{+J} h(i\Delta x, j\Delta y)\, f((m-i)\Delta x, (n-j)\Delta y), \quad m = 1,\dots,M, \; n = 1,\dots,N$

  With $\Delta x = \Delta y = 1$:

  $g(m,n) = \sum_{i=-I}^{+I} \sum_{j=-J}^{+J} h(i,j)\, f(m-i, n-j)$
2D Convolution for image restoration

Two characteristics of

  $g(m,n) = \sum_{i=-I}^{+I} \sum_{j=-J}^{+J} h(i,j)\, f(m-i, n-j)$

- g(m,n) depends on f(k,l) for (k,l) in a neighborhood of pixels around pixel (m,n) → no causality;
- the border effects cannot be neglected as easily as in the 1D case.

Vector form:

  $g(m,n), \; m = 1,\dots,M, \; n = 1,\dots,N$
  $f(k,l), \; k = 1,\dots,K, \; l = 1,\dots,L$
  $h(i,j), \; i = 1,\dots,I, \; j = 1,\dots,J$

  $g = H f$
2D Convolution for image restoration

  $g = [g(1,1), \dots, g(M,1), \; g(1,2), \dots, g(M,2), \; \dots, \; g(1,N), \dots, g(M,N)]^t$
  $f = [f(1,1), \dots, f(K,1), \; f(1,2), \dots, f(K,2), \; \dots, \; f(1,L), \dots, f(K,L)]^t$

The structure of the matrix H depends on the domains $D_h$, $D_f$ and $D_g$:

- Image > Object
- Image = Object
- Image < Object
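The three Image/Object cases correspond directly to the `full`, `same` and `valid` modes of standard 2D convolution routines. A small check with SciPy (array sizes are my own choices):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(4)
f = rng.standard_normal((5, 6))       # object, K x L
h = rng.standard_normal((3, 3))       # kernel, I x J

g_full = convolve2d(f, h, mode="full")    # Image > Object: M = K+I-1, N = L+J-1
g_same = convolve2d(f, h, mode="same")    # Image = Object: M = K, N = L
g_valid = convolve2d(f, h, mode="valid")  # Image < Object: M = K-I+1, N = L-J+1
print(g_full.shape, g_same.shape, g_valid.shape)

# Direct evaluation of one sample of the sum g(m,n) = sum_ij h(i,j) f(m-i, n-j):
m, n = 3, 4
s = sum(h[i, j] * f[m - i, n - j]
        for i in range(3) for j in range(3)
        if 0 <= m - i < 5 and 0 <= n - j < 6)
print(np.isclose(s, g_full[m, n]))
```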
2D Convolution for image restoration: Image > Object

[Figure: image domain, kernel domain and object domain, with $D_g > D_f$.]

  $g(m,n), \; m = 1,\dots,M, \; n = 1,\dots,N$
  $f(k,l), \; k = 1,\dots,K, \; l = 1,\dots,L$
  $h(i,j), \; i = 1,\dots,I, \; j = 1,\dots,J$
  $M = K + I - 1, \quad N = L + J - 1$

  $g = [g(1,1), \dots, g(M,1), \; \dots, \; g(1,N), \dots, g(M,N)]^t$
  $f = [f(1,1), \dots, f(K,1), \; \dots, \; f(1,L), \dots, f(K,L)]^t$
2D Convolution for image restoration: Image > Object

  $H = \begin{bmatrix}
  H_1 & & \\
  \vdots & \ddots & \\
  H_I & \cdots & H_1 \\
   & \ddots & \vdots \\
   & & H_I
  \end{bmatrix}, \qquad
  H_i = \begin{bmatrix}
  h(i,J) & & \\
  \vdots & \ddots & \\
  h(i,1) & \cdots & h(i,J) \\
   & \ddots & \vdots \\
   & & h(i,1)
  \end{bmatrix}$

  → Toeplitz-block-Toeplitz structure: H is block-Toeplitz in the blocks $H_1, \dots, H_I$, and each block $H_i$ is itself a Toeplitz matrix built from $h(i,1), \dots, h(i,J)$.
2D Convolution for image restoration: Image < Object

[Figure: object domain, kernel domain and image domain, with $D_g < D_f$.]

  $g(m,n), \; m = 1,\dots,M, \; n = 1,\dots,N$
  $f(k,l), \; k = 1,\dots,K, \; l = 1,\dots,L$
  $h(i,j), \; i = 1,\dots,I, \; j = 1,\dots,J$
  $M = K - I + 1, \quad N = L - J + 1$

  g and f are vectorized column by column as before.
2D Convolution for image restoration: Image < Object

  $H = \begin{bmatrix}
  H_1 & H_2 & \cdots & H_I & & \\
   & H_1 & H_2 & \cdots & H_I & \\
   & & \ddots & & & \ddots \\
   & & & H_1 & H_2 \; \cdots \; H_I
  \end{bmatrix}$

  with

  $H_i = \begin{bmatrix}
  h(i,1) & h(i,2) & \cdots & h(i,J) & & \\
   & h(i,1) & h(i,2) & \cdots & h(i,J) & \\
   & & \ddots & & & \ddots \\
   & & & h(i,1) & h(i,2) \; \cdots \; h(i,J)
  \end{bmatrix}$
2D Convolution for image restoration: Image = Object

[Figure: kernel domain, object domain and image domain, with $D_g = D_f$.]

  $g(m,n), \; m = 1,\dots,M, \; n = 1,\dots,N$
  $f(k,l), \; k = 1,\dots,K, \; l = 1,\dots,L$
  $h(i,j), \; i = 1,\dots,I, \; j = 1,\dots,J$
  $M = K, \quad N = L$

  g and f are vectorized column by column as before.
2D Convolution for image restoration: Circulant form

  $g(m,n), \; m = 1,\dots,M, \; n = 1,\dots,N$; $\quad f(k,l), \; k = 1,\dots,K, \; l = 1,\dots,L$; $\quad h(i,j), \; i = 1,\dots,I, \; j = 1,\dots,J$
  $P = K + I - 1, \quad Q = L + J - 1$

Zero-pad f, g and h to the common size [P, Q]:

  $\tilde f = \begin{bmatrix} f & 0 \\ 0 & 0 \end{bmatrix}, \quad \tilde g = \begin{bmatrix} g & 0 \\ 0 & 0 \end{bmatrix}, \quad \tilde h = \begin{bmatrix} h & 0 \\ 0 & 0 \end{bmatrix}, \qquad \dim(\tilde f) = \dim(\tilde g) = \dim(\tilde h) = [P, Q]$
2D Convolution for image restoration: Circulant form

  $H = \begin{bmatrix}
  H_1 & H_2 & \cdots & \cdots & H_P \\
  H_P & H_1 & H_2 & \cdots & H_{P-1} \\
  \vdots & \ddots & \ddots & & \vdots \\
  H_2 & \cdots & \cdots & H_P & H_1
  \end{bmatrix}$  (block-circulant)

  $H_i = \begin{bmatrix}
  h(i,1) & h(i,2) & \cdots & \cdots & h(i,P) \\
  h(i,P) & h(i,1) & h(i,2) & \cdots & h(i,P-1) \\
  \vdots & \ddots & \ddots & & \vdots \\
  h(i,2) & \cdots & \cdots & h(i,P) & h(i,1)
  \end{bmatrix}$  (circulant)

  → Circulant-block-circulant structure, diagonalized by the 2D DFT.
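Because a circulant-block-circulant matrix is diagonalized by the 2D DFT, applying H reduces to a pointwise product of 2D FFTs. A minimal numerical check (sizes are my own):

```python
import numpy as np

rng = np.random.default_rng(5)
P, Q = 8, 8
f = rng.standard_normal((P, Q))
h = np.zeros((P, Q))
h[:3, :3] = rng.standard_normal((3, 3))   # zero-padded kernel, as on the previous slide

# H f as a 2D circular convolution via the 2D DFT:
g_fft = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(f)))

# Direct 2D circular convolution for comparison:
g_dir = np.zeros((P, Q))
for m in range(P):
    for n in range(Q):
        g_dir[m, n] = sum(h[i, j] * f[(m - i) % P, (n - j) % Q]
                          for i in range(P) for j in range(Q))

print(np.allclose(g_fft, g_dir))
```

This is why the circulant form is computationally attractive: multiplication by H (and by its inverse, when it exists) costs only FFTs.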
Classification of the signal and image restoration methods

- Analytical methods

  $f(t) \to [\mathcal{H}] \to g(t) = \mathcal{H}[f(t)]$
  $f(x,y) \to [\mathcal{H}] \to g(x,y) = \mathcal{H}[f(x,y)]$    (linear operator $\mathcal{H}$)

  $g(t) \to [\mathcal{G}] \to \hat f(t) = \mathcal{H}^{-1}[g(t)]$
  $g(x,y) \to [\mathcal{G}] \to \hat f(x,y) = \mathcal{H}^{-1}[g(x,y)]$    (linear operator $\mathcal{G}$ approximating $\mathcal{H}^{-1}$)

  - Inverse filtering
  - Pseudo-inverse filtering
  - Wiener filtering
Classification of the signal and image restoration methods

- Algebraic methods

  $g(t) = \mathcal{H}[f(t)] \to$ discretization $\to g = Hf$
  $g(x,y) = \mathcal{H}[f(x,y)] \to$ discretization $\to g = Hf$

  Ideal case: H invertible $\to \hat f = H^{-1} g$
  More general case: H is not invertible:
  - Generalized inversion
  - Least Squares (LS) and minimum-norm LS
  - Regularization

- Probabilistic methods
  - Wiener filtering
  - Kalman filtering
  - General Bayesian approach
Algebraic Approaches

  Signal: $f(t) \to [h(t)] \to g(t)$        Image: $f(x,y) \to [h(x,y)] \to g(x,y)$

  Discretization $\Rightarrow g = Hf$

- Ideal case: H invertible $\to \hat f = H^{-1} g$
- M > N, Least Squares: $\hat f = \arg\min_f \{J(f)\}$ with
  $J(f) = \|g - Hf\|^2 = [g - Hf]^t [g - Hf]$
  $\nabla J = -2 H^t [g - Hf] = 0 \longrightarrow H^t H \hat f = H^t g \longrightarrow \hat f = (H^t H)^{-1} H^t g$
- M < N, minimum-norm solution: $\hat f = H^t (H H^t)^{-1} g$
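The two closed forms above can be verified numerically; both coincide with the Moore-Penrose pseudo-inverse in their respective full-rank cases (the random test matrices below are my own construction):

```python
import numpy as np

rng = np.random.default_rng(6)

# Overdetermined case (M > N): least squares
H1 = rng.standard_normal((12, 5))
g1 = rng.standard_normal(12)
f_ls = np.linalg.inv(H1.T @ H1) @ H1.T @ g1       # (H^t H)^{-1} H^t g

# Underdetermined case (M < N): minimum-norm solution
H2 = rng.standard_normal((5, 12))
g2 = rng.standard_normal(5)
f_mn = H2.T @ np.linalg.inv(H2 @ H2.T) @ g2       # H^t (H H^t)^{-1} g

print(np.allclose(f_ls, np.linalg.pinv(H1) @ g1))  # matches the pseudo-inverse
print(np.allclose(H2 @ f_mn, g2))                  # exact data fit
print(np.allclose(f_mn, np.linalg.pinv(H2) @ g2))  # and minimal norm
```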
Regularization

- M > N, Least Squares: $\hat f = \arg\min_f \|g - Hf\|^2$
- M < N, minimum-norm solution: $\hat f = \arg\min_{Hf = g} \|f\|^2$
- Regularization: $\hat f = \arg\min_f \{\|g - Hf\|^2 + \lambda \|f\|^2\}$, or more generally
  $J(f) = \|g - Hf\|^2 + \lambda \|Df\|^2$
  with, for example, first- or second-difference matrices

  $D = \begin{bmatrix} 1 & & & \\ -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \end{bmatrix}
  \quad \text{or} \quad
  D = \begin{bmatrix} 1 & & & & \\ -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \end{bmatrix}$

  $\nabla J = -2 H^t [g - Hf] + 2\lambda D^t D f = 0$
  $[H^t H + \lambda D^t D]\, \hat f = H^t g \longrightarrow \hat f = [H^t H + \lambda D^t D]^{-1} H^t g$
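A minimal sketch of the closed-form Tikhonov solution $\hat f = [H^t H + \lambda D^t D]^{-1} H^t g$ with the first-difference D of the slide (the test problem, λ and the noise level are my own choices):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 50
t = np.linspace(0.0, 1.0, N)
f_true = np.sin(2 * np.pi * t)                    # smooth unknown signal

# Severely ill-conditioned Gaussian blur H, plus noise
H = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.05) ** 2)
H /= H.sum(axis=1, keepdims=True)
g = H @ f_true + 0.01 * rng.standard_normal(N)

D = np.eye(N) - np.eye(N, k=-1)                   # first-difference matrix of the slide

lam = 1e-2
f_naive = np.linalg.solve(H.T @ H + 1e-12 * np.eye(N), H.T @ g)  # (almost) unregularized LS
f_reg = np.linalg.solve(H.T @ H + lam * (D.T @ D), H.T @ g)      # Tikhonov solution

err_naive = np.linalg.norm(f_naive - f_true)
err_reg = np.linalg.norm(f_reg - f_true)
print(err_reg, err_naive)                         # regularization stabilizes the inversion
```

The unregularized solution is destroyed by noise amplification along the small singular values of H; the λ‖Df‖² term suppresses exactly those directions.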
Bayesian estimation approach

  $\mathcal{M}: \quad g = Hf + \epsilon$

- Observation model $\mathcal{M}$ + hypothesis on the noise $\epsilon \longrightarrow p(g|f; \mathcal{M}) = p_\epsilon(g - Hf)$
- A priori information: $p(f|\mathcal{M})$
- Bayes:
  $p(f|g; \mathcal{M}) = \frac{p(g|f; \mathcal{M})\, p(f|\mathcal{M})}{p(g|\mathcal{M})}$

Link with regularization — Maximum A Posteriori (MAP):

  $\hat f = \arg\max_f \{p(f|g)\} = \arg\max_f \{p(g|f)\, p(f)\} = \arg\min_f \{-\ln p(g|f) - \ln p(f)\}$

  with $Q(g, Hf) = -\ln p(g|f)$ and $\lambda \Omega(f) = -\ln p(f)$
Case of linear models and Gaussian priors

  $g = Hf + \epsilon$

- Hypothesis on the noise: $\epsilon \sim \mathcal{N}(0, \sigma_\epsilon^2 I)$:
  $p(g|f) \propto \exp\left\{-\frac{1}{2\sigma_\epsilon^2}\|g - Hf\|^2\right\}$
- Hypothesis on f: $f \sim \mathcal{N}(0, \sigma_f^2 (D^t D)^{-1})$:
  $p(f) \propto \exp\left\{-\frac{1}{2\sigma_f^2}\|Df\|^2\right\}$
- A posteriori:
  $p(f|g) \propto \exp\left\{-\frac{1}{2\sigma_\epsilon^2}\|g - Hf\|^2 - \frac{1}{2\sigma_f^2}\|Df\|^2\right\} \propto \exp\left\{-\frac{1}{2\sigma_\epsilon^2}(f - \hat f)^t \hat P^{-1} (f - \hat f)\right\}$
- MAP: $\hat f = \arg\max_f \{p(f|g)\} = \arg\min_f \{J(f)\}$
  with $J(f) = \|g - Hf\|^2 + \lambda \|Df\|^2, \quad \lambda = \frac{\sigma_\epsilon^2}{\sigma_f^2}$
- Advantage: characterization of the solution:
  $f|g \sim \mathcal{N}(\hat f, \hat P)$ with $\hat f = \hat P H^t g$, $\hat P = [H^t H + \lambda D^t D]^{-1}$
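The "characterization of the solution" above can be sketched numerically: drawing f from the stated prior and g from the likelihood, the MAP (= posterior mean) estimate comes with a posterior covariance $\sigma_\epsilon^2 [H^t H + \lambda D^t D]^{-1}$, whose diagonal gives per-component error bars (sizes, seed and σ values are my own choices).

```python
import numpy as np

rng = np.random.default_rng(8)
N = 40
H = rng.standard_normal((N, N)) / np.sqrt(N)
D = np.eye(N) - np.eye(N, k=-1)                  # first-difference prior operator
sigma_eps, sigma_f = 0.1, 1.0
lam = sigma_eps**2 / sigma_f**2

# f ~ N(0, sigma_f^2 (D^t D)^{-1}), i.e. f = sigma_f * D^{-1} z with z white
f_true = sigma_f * np.linalg.solve(D, rng.standard_normal(N))
g = H @ f_true + sigma_eps * rng.standard_normal(N)

A = H.T @ H + lam * (D.T @ D)
f_map = np.linalg.solve(A, H.T @ g)              # posterior mean = MAP
P_hat = sigma_eps**2 * np.linalg.inv(A)          # posterior covariance
post_std = np.sqrt(np.diag(P_hat))               # per-component uncertainty

# The error should mostly lie within +/- 3 posterior standard deviations:
inside = np.mean(np.abs(f_map - f_true) <= 3 * post_std)
print(inside)
```

This is the practical payoff of the Gaussian posterior: the estimate is delivered together with a quantified uncertainty, not as a point estimate alone.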
MAP estimation with other priors

  $\hat f = \arg\min_f \{J(f)\}$ with $J(f) = \|g - Hf\|^2 + \lambda \Omega(f)$

Separable priors:

- Gaussian: $p(f_j) \propto \exp\{-\alpha |f_j|^2\} \longrightarrow \Omega(f) = \alpha \sum_j |f_j|^2$
- Gamma: $p(f_j) \propto f_j^\alpha \exp\{-\beta f_j\} \longrightarrow \Omega(f) = \alpha \sum_j \ln f_j + \beta f_j$
- Beta: $p(f_j) \propto f_j^\alpha (1 - f_j)^\beta \longrightarrow \Omega(f) = \alpha \sum_j \ln f_j + \beta \sum_j \ln(1 - f_j)$
- Generalized Gaussian: $p(f_j) \propto \exp\{-\alpha |f_j|^p\}, \; 1 < p \le 2 \longrightarrow \Omega(f) = \alpha \sum_j |f_j|^p$

Markovian models:

  $p(f_j|f) \propto \exp\left\{-\alpha \sum_{i \in N_j} \phi(f_j, f_i)\right\} \longrightarrow \Omega(f) = \alpha \sum_j \sum_{i \in N_j} \phi(f_j, f_i)$
Main advantages of the Bayesian approach

- MAP = regularization
- Posterior mean? Marginal MAP?
- More information in the posterior law than only its mode or its mean
- Meaning and tools for estimating the hyperparameters
- Meaning and tools for model selection
- More specific and specialized priors, in particular through hidden variables
- More computational tools:
  - Expectation-Maximization for computing the maximum-likelihood parameters
  - MCMC for posterior exploration
  - Variational Bayes for analytical computation of the posterior marginals
  - ...
Blind Deconvolution: Bayesian approach

  Deconvolution: $g = Hf + \epsilon$        Identification: $g = Fh + \epsilon$
  $p(g|f) = \mathcal{N}(Hf, \Sigma_\epsilon = \sigma_\epsilon^2 I)$        $p(g|h) = \mathcal{N}(Fh, \Sigma_\epsilon = \sigma_\epsilon^2 I)$
  $p(f) = \mathcal{N}(0, \Sigma_f = \sigma_f^2 (D_f^t D_f)^{-1})$        $p(h) = \mathcal{N}(0, \Sigma_h = \sigma_h^2 (D_h^t D_h)^{-1})$
  $p(f|g) = \mathcal{N}(\hat f, \hat\Sigma_f)$        $p(h|g) = \mathcal{N}(\hat h, \hat\Sigma_h)$
  $\hat\Sigma_f = [H^t H + \lambda_f D_f^t D_f]^{-1}$        $\hat\Sigma_h = [F^t F + \lambda_h D_h^t D_h]^{-1}$
  $\hat f = [H^t H + \lambda_f D_f^t D_f]^{-1} H^t g$        $\hat h = [F^t F + \lambda_h D_h^t D_h]^{-1} F^t g$

- Joint posterior law:
  $p(f, h|g) \propto p(g|f, h)\, p(f)\, p(h) \propto \exp\{-J(f, h)\}$
  with $J(f, h) = \|g - h * f\|^2 + \lambda_f \|D_f f\|^2 + \lambda_h \|D_h h\|^2$
- Alternating iterative algorithm
Blind Deconvolution: Bayesian Joint MAP criterion

- Joint posterior law:
  $p(f, h|g) \propto p(g|f, h)\, p(f)\, p(h) \propto \exp\{-J(f, h)\}$
  with $J(f, h) = \|g - h * f\|^2 + \lambda_f \|D_f f\|^2 + \lambda_h \|D_h h\|^2$
- Alternating iterative algorithm:

  Deconvolution step: $p(g|f, H) = \mathcal{N}(Hf, \Sigma_\epsilon)$, $p(f) = \mathcal{N}(0, \Sigma_f)$,
  $p(f|g, H) = \mathcal{N}(\hat f, \hat\Sigma_f)$ with $\hat\Sigma_f = [H^t H + \lambda_f D_f^t D_f]^{-1}$, $\hat f = \hat\Sigma_f H^t g$

  Identification step: $p(g|h, F) = \mathcal{N}(Fh, \Sigma_\epsilon)$, $p(h) = \mathcal{N}(0, \Sigma_h)$,
  $p(h|g, F) = \mathcal{N}(\hat h, \hat\Sigma_h)$ with $\hat\Sigma_h = [F^t F + \lambda_h D_h^t D_h]^{-1}$, $\hat h = \hat\Sigma_h F^t g$
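The alternating joint-MAP algorithm can be sketched on a toy 1D circular problem (all sizes, the λ values, and the identity regularizers $D_f = D_h = I$ are my simplifications): each pass solves the two quadratic subproblems of the slide in turn.

```python
import numpy as np

def circ_mat(v):
    """Matrix of circular convolution with v: column j is v rolled by j."""
    return np.column_stack([np.roll(v, j) for j in range(len(v))])

rng = np.random.default_rng(9)
n, p = 32, 4
f_true = np.zeros(n)
f_true[[5, 12, 20]] = [1.0, -0.7, 0.5]                 # spiky signal
h_true = np.exp(-0.5 * (np.arange(p) - 1.5) ** 2)
h_true /= h_true.sum()                                  # smooth normalized PSF

h_pad = np.zeros(n); h_pad[:p] = h_true
g = np.real(np.fft.ifft(np.fft.fft(h_pad) * np.fft.fft(f_true)))
g += 0.001 * rng.standard_normal(n)

lam_f, lam_h = 1e-3, 1e-3
h = np.full(p, 1.0 / p)                                 # flat initial PSF guess
for _ in range(50):
    # deconvolution step: min_f ||g - H f||^2 + lam_f ||f||^2
    hp = np.zeros(n); hp[:p] = h
    Hm = circ_mat(hp)
    f = np.linalg.solve(Hm.T @ Hm + lam_f * np.eye(n), Hm.T @ g)
    # identification step: min_h ||g - F h||^2 + lam_h ||h||^2
    Fm = np.column_stack([np.roll(f, k) for k in range(p)])
    h = np.linalg.solve(Fm.T @ Fm + lam_h * np.eye(p), Fm.T @ g)

resid = np.linalg.norm(g - Fm @ h)
print(resid)
```

Each step minimizes the joint criterion J(f, h) over one variable with the other fixed, so J is non-increasing; note that blind deconvolution still has inherent scale and shift ambiguities that this toy sketch does not resolve.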
Blind Deconvolution: Marginalization and EM algorithm

- Joint posterior law: $p(f, h|g) \propto p(g|f, h)\, p(f)\, p(h)$
- Marginalization:
  $p(h|g) = \int p(f, h|g)\, df$
  $\hat h = \arg\max_h \{p(h|g)\} \longrightarrow \hat f = \arg\max_f \{p(f|g, \hat h)\}$
- The expression of p(h|g) and its maximization are complex.
- Expectation-Maximization algorithm, with
  $-\ln p(f, h|g) \propto J(f, h) = \|g - h * f\|^2 + \lambda_f \|D_f f\|^2 + \lambda_h \|D_h h\|^2$:
  - Expectation: compute $Q(h, h^{(k-1)}) = \langle \ln p(f, h|g) \rangle_{p(f|h^{(k-1)}, g)}$
  - Maximization: $h^{(k)} = \arg\max_h Q(h, h^{(k-1)})$
Blind Deconvolution: Variational Bayesian Approximation

- Joint posterior law: $p(f, h|g) \propto p(g|f, h)\, p(f)\, p(h)$
- Approximate $p(f, h|g)$ by a separable $q(f, h) = q_1(f)\, q_2(h)$
- Criterion of approximation: Kullback-Leibler divergence
  $\mathrm{KL}(q\|p) = \int q \ln\frac{q}{p} = \int q_1 q_2 \ln\frac{q_1 q_2}{p}$
  $\mathrm{KL}(q_1 q_2\|p) = \int q_1 \ln q_1 + \int q_2 \ln q_2 - \int q \ln p = -\mathcal{H}(q_1) - \mathcal{H}(q_2) + \langle -\ln p(f, h|g)\rangle_q$
- Once the expressions of $q_1$ and $q_2$ are obtained, use them for inference.
Variational Bayesian Approximation algorithm

- Kullback-Leibler criterion:
  $\mathrm{KL}(q_1 q_2\|p) = \int q_1 \ln q_1 + \int q_2 \ln q_2 - \int q \ln p = -\mathcal{H}(q_1) - \mathcal{H}(q_2) + \langle -\ln p(f, h|g)\rangle_q$
- Free energy: $\mathcal{F}(q_1 q_2) = \langle -\ln p(f, h|g)\rangle_{q_1 q_2}$
- Equivalence between the optimization of $\mathrm{KL}(q_1 q_2\|p)$ and of $\mathcal{F}(q_1 q_2)$
- Alternate optimization:
  $\hat q_1 = \arg\min_{q_1} \{\mathrm{KL}(q_1 q_2\|p)\} = \arg\min_{q_1} \{\mathcal{F}(q_1 q_2)\}$
  $\hat q_2 = \arg\min_{q_2} \{\mathrm{KL}(q_1 q_2\|p)\} = \arg\min_{q_2} \{\mathcal{F}(q_1 q_2)\}$
Deconvolution results

[Figure: deconvolution results.]
Forward modeling: from a HR image to LR images

[Figure: a high-resolution image f on a fine grid and several low-resolution images gk obtained from it by movement, blurring and down-sampling.]

  $g_k(r) = [D B M_k f](r)$
Modeling forward problems of super-resolution

  $g_k(r) = [H_k f](r) + \epsilon_k(r) = [D B M_k f](r) + \epsilon_k(r)$

- B: Blurring effects (needs deconvolution)
- Mk: Movement effects (needs registration)
- D: Sub-sampling effects (needs interpolation)
- Two models: $g_k(r) = [D B M_k f](r) + \epsilon_k(r)$ or $g_k(r) = [D M_k B f](r) + \epsilon_k(r)$
Two models for MISO SR problems

[Figure: HR image f(r); first model: $f \to Bf \to M_k B f \to D M_k B f$ (LR images); second model: $f \to M_k f \to B M_k f \to D B M_k f$ (LR images).]
Classical methods

- Many classical methods are based on adjoint operators:
  $g_k(r) = [D B M_k f](r) \longrightarrow \hat f(r) = \sum_k [M_k' B' D' g_k](r)$
- Three basic operations: interpolation, registration and image fusion.
  - Interpolation: an ad hoc way of up-sampling.
  - Registration: compensation for the movements.
    Correlation-based methods: if $f_2(r) = f_1(r - d)$, then
    $C(r') = \int f_1(r)\, f_2(r - r')\, dr = \delta(r' - d)$ (ideally)
    Fourier-domain methods: $f(r) \leftrightarrow F(\omega)$, $\quad f(r - d) \leftrightarrow F(\omega) \exp\{-j\omega^t d\}$
  - Image fusion: linear (mean) or nonlinear (median).
Classical methods: adjoint operators

[Figure: a 4×4 HR image f(r) transformed into LR images $g_k(r) = [D_0 M_k f](r)$ by shifts $M_k$ and down-sampling $D_0$, then brought back to the HR grid by the adjoint operators: summing gives $\hat f(r) = \sum_k [M_k' D_0' g_k](r) = f(r)$, an exact reconstruction. With a blurring down-sampler $D_1$, $g_k(r) = [D_1 M_k f](r)$ and the same adjoint reconstruction gives $\hat f(r) \ne f(r)$.]
Interpolation, HR registration and image fusion

[Figure: four LR images $g_k$ are interpolated to the HR grid, registered, and fused into an estimated HR image $\hat f$.]
Classical SR methods

Three main methods:

- $\hat f(r) = \sum_k [B' D' M_k' g_k](r)$:
  - sub-pixel LR registration,
  - interpolation to the HR grid,
  - mean or median image fusion.
- $\hat f(r) = \sum_k [M_k' B' D' g_k](r)$:
  - interpolation of all LR images to the HR grid,
  - HR registration,
  - mean or median image fusion.
- Iterative backprojection methods:
  $f^{(i+1)} = f^{(i)} + \alpha \sum_k [B' D' M_k'] \left(g_k - M_k D B f^{(i)}\right)$
Classical SR methods

- Unwrapping and interpolation → linear or nonlinear combination
- Sub-pixel registration + interpolation + mean or median image fusion
- Interpolation + HR registration + mean or median image fusion
General inversion methods

  $g_k(r) = [H_k f](r) + \epsilon_k(r) = [D M_k B f](r) + \epsilon_k(r)$
  $g_k = H_k f + \epsilon = D M_k B f + \epsilon$

- Least squares (LS) methods: $\hat f = \arg\min_f \{J(f)\}$ with
  $J(f) = \sum_k \|g_k - H_k f\|^2 = \sum_k \sum_{r \in R} |g_k(r) - [H_k f](r)|^2$
- Iterative algorithms:
  $\hat f^{(i+1)} = \hat f^{(i)} - \alpha \nabla J(\hat f^{(i)}) = \hat f^{(i)} + 2\alpha \sum_k H_k' (g_k - H_k \hat f^{(i)})$
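A minimal sketch of the gradient iteration above on a 1D SR toy problem (the operators, shifts and step size α are my own choices): each $H_k$ shifts the HR signal and keeps every third sample, and the complementary shifts make the joint system well posed.

```python
import numpy as np

rng = np.random.default_rng(10)
n_hr, n_lr = 30, 10

def make_Hk(shift):
    """H_k = D M_k: circular shift by `shift`, then keep every 3rd sample."""
    M = np.roll(np.eye(n_hr), shift, axis=1)   # (M f)[i] = f[(i + shift) % n_hr]
    D = np.zeros((n_lr, n_hr))
    for i in range(n_lr):
        D[i, 3 * i] = 1.0
    return D @ M

Hs = [make_Hk(s) for s in (0, 1, 2, 5)]        # shifts covering all HR samples
f_true = np.sin(2 * np.pi * np.arange(n_hr) / n_hr)
gs = [Hk @ f_true + 0.01 * rng.standard_normal(n_lr) for Hk in Hs]

# Gradient descent on J(f) = sum_k ||g_k - H_k f||^2
f = np.zeros(n_hr)
alpha = 0.1
for _ in range(500):
    f = f + 2 * alpha * sum(Hk.T @ (gk - Hk @ f) for Hk, gk in zip(Hs, gs))

err = np.linalg.norm(f - f_true) / np.linalg.norm(f_true)
print(err)                                     # small relative error, set by the noise
```

With fewer or non-complementary shifts, $\sum_k H_k^t H_k$ becomes singular and the LS problem needs the regularization discussed on the next slide.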
General inversion methods

- Regularization methods (Tikhonov):
  $J(f) = \sum_k \|g_k - H_k f\|^2 + \lambda \|Df\|^2$
- Robust estimation (RE) [7, 8, 9, 10]:
  $J(f) = \sum_k \|g_k - H_k f\|^{\beta_1} + \lambda \|Df\|^{\beta_2}, \quad 1 < \beta_1, \beta_2 \le 2$
- Bayesian MAP estimation methods [1, 2, 3, 4, 5, ?]:
  $J(f) = -\ln p(f|g) = -\ln p(g|f) - \ln p(f) + c$
  with
  $p(g|f) \propto \exp\left\{-\frac{1}{2\sigma_\epsilon^2} \sum_k \|g_k - H_k f\|^2\right\}$
  and p(f) the prior law.
General Bayesian inference

- Use the forward and error models to obtain the likelihood:
  $g_k = H_k f + \epsilon = D M_k B f + \epsilon$
  $p(g|f) \propto \exp\left\{-\frac{1}{2\sigma_\epsilon^2} \sum_k \|g_k - H_k f\|^2\right\}$
- Use prior knowledge to assign the prior law p(f)
- Obtain the expression of the posterior law:
  $p(f|g) = \frac{p(g|f)\, p(f)}{p(g)}$
- Use it to make inference, MAP or posterior mean (PM):
  $\hat f = \arg\max_f \{p(f|g)\} \quad \text{or} \quad \hat f = \int f\, p(f|g)\, df$
Hidden Markov model variables

[Figure: image f(r), region labels z(r), contours q(r).]

  $p(f(r)|z(r) = k) = \mathcal{N}(m_k, v_k^2), \qquad P(z(r) = k) = \alpha_k$
  $p(f(r)) = \sum_{k=1}^{K} \alpha_k\, \mathcal{N}(m_k, v_k^2)$

  $R_k = \cup_l R_{kl}$ with $\cap_l R_{kl} = \emptyset$, $\cap_k R_k = \emptyset$, $\cup_k R_k = R$

  $p(z(r)|z(r'), r' \in V(r)) \propto \exp\left\{\gamma \sum_{r' \in V(r)} \delta(z(r) - z(r'))\right\}$
Expression of the posterior law

- Likelihood: $p(g|f, \sigma_\epsilon^2) \propto \exp\left\{-\frac{1}{2\sigma_\epsilon^2} \sum_k \|g_k - H_k f\|^2\right\}$
- Prior laws:
  $p(f|z, \{m_k, v_k\}) \propto \prod_{k=1}^{K} \prod_{r \in R_k} \exp\left\{-\frac{1}{2 v_k^2} (f(r) - m_k)^2\right\}$
  $p(z|\gamma) \propto \exp\left\{\gamma \sum_{r \in R} \sum_{r' \in V(r)} \delta(z(r) - z(r'))\right\}$
  $p(\theta) = p(\sigma_\epsilon^2)\, \prod_k p(m_k)\, p(v_k)\, p(\gamma)$
- Joint posterior law of all the unknowns:
  $p(f, z, \theta|g) \propto p(g|f, \sigma_\epsilon^2)\, p(f|z, \{m_k, v_k\})\, p(z|\gamma)\, p(\theta)$
Bayesian computation

  $p(f, z, \theta|g) \propto p(g|f, \sigma_\epsilon^2)\, p(f|z, \{m_k, v_k\})\, p(z|\gamma)\, p(\theta)$

- Joint MAP (optimization):
  $(\hat f, \hat z, \hat\theta) = \arg\max_{(f, z, \theta)} \{p(f, z, \theta|g)\}$
- Posterior means (integration):
  $\hat f = E\{f|g\}, \quad \hat\theta = E\{\theta|g\}, \quad \hat z = E\{z|g\}$
- General iterative algorithms:
  - Estimation: $\hat f \sim p(f|\hat z, \hat\theta, g)$
  - Segmentation: $\hat z \sim p(z|\hat f, \hat\theta, g)$
  - Hyperparameters: $\hat\theta \sim p(\theta|\hat f, \hat z, g)$
Bayesian SR
◮ In real SR problems, we also have to estimate the PSF h and the movement (registration) parameters d_k:
    p(f, z, θ, h, {d_k}|g) ∝ p(g|f, σε²) p(f|z, {m_k, v_k}) p(z|γ) p(θ) p(h) Π_k p(d_k)
◮ General iterative algorithm:
    ◮ Update the PSF h
    ◮ Update the registration parameters d_k
    ◮ Update the segmentation z and contours q
    ◮ Update the hyperparameters θ
    ◮ Update the HR image f
A first algorithm
◮ Initialization:
    1. Estimate the sub-pixel translational movements d_k between the LR images g_k(r);
    2. Estimate a first HR image f̂(r) based on LS or quadratic regularization.
◮ Iterations:
    1. Estimate a segmentation ẑ(r) of the HR image f̂(r) based on the Potts Markov modeling;
    2. Estimate the parameters θ̂ of the Gaussian mixture;
    3. Update the HR image by minimizing:
       -ln p(f|ẑ, θ̂, g) = 1/(2σε²) Σ_k ||g_k - H_k f||² + Σ_k Σ_{r∈R_k} (f(r) - m_k)² / (2v_k²) + const.
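Step 3, the HR image update, can be sketched by gradient descent on this criterion. In the toy code below, the segmentation ẑ and mixture parameters are held fixed, the variance scale factors are dropped, and the operators H_k are random matrices standing in for the real D M_k B — all illustrative assumptions:

```python
import numpy as np

# One gradient-descent minimization of
#   J(f) = sum_k ||g_k - H_k f||^2 + sum_r ((f(r) - m_{z(r)}) / v_{z(r)})^2
# for fixed segmentation z and mixture parameters.

rng = np.random.default_rng(2)
n, m_lr, K_imgs = 12, 6, 3

Hk = [rng.normal(size=(m_lr, n)) / np.sqrt(n) for _ in range(K_imgs)]
f_true = rng.normal(size=n)
gk = [H @ f_true + 0.01 * rng.normal(size=m_lr) for H in Hk]

z = rng.integers(0, 2, size=n)        # fixed toy segmentation (2 classes)
m_cls = np.array([-1.0, 1.0])         # fixed class means
v_cls = np.array([1.0, 1.0])          # fixed class std deviations

def J(f):
    data = sum(np.sum((g - H @ f) ** 2) for H, g in zip(Hk, gk))
    prior = np.sum(((f - m_cls[z]) / v_cls[z]) ** 2)
    return data + prior

def grad_J(f):
    data = sum(2 * H.T @ (H @ f - g) for H, g in zip(Hk, gk))
    prior = 2 * (f - m_cls[z]) / v_cls[z] ** 2
    return data + prior

f = np.zeros(n)
for _ in range(500):
    f = f - 0.02 * grad_J(f)          # fixed small step size

print(J(f) < J(np.zeros(n)))          # True: the criterion decreased
```

In practice a conjugate-gradient solver would replace the fixed-step descent, since J is quadratic in f for fixed ẑ and θ̂.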
New algorithm
◮ Initialization:
    1. Estimate a first HR image f̂(r) by interpolating the first LR image.
◮ Iterations:
    1. Estimate the translational movements d_k between the HR image f̂(r) and each newly entered LR image g_k(r), interpolated to the HR dimensions;
    2. Estimate the blurring PSF h;
    3. Estimate a segmentation ẑ(r) of the HR image;
    4. Estimate the parameters θ̂ of the Gaussian mixture;
    5. Update the HR image as before.
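The translation-estimation step is classically done by phase correlation. The sketch below recovers only the integer part of a cyclic shift; the actual algorithm would refine to subpixel accuracy by interpolating around the correlation peak:

```python
import numpy as np

# Integer-shift estimation by phase correlation: the peak of the inverse
# FFT of the normalized cross-power spectrum gives the translation.

def phase_correlation_shift(a, b):
    """Estimate the cyclic shift (dy, dx) such that a ~ roll(b, (dy, dx))."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map the peak location to signed shifts
    h, w = a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(3)
img = rng.normal(size=(32, 32))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(phase_correlation_shift(shifted, img))   # (3, -5)
```

Normalizing the cross-power spectrum makes the method robust to intensity changes between frames, which is why it is the standard tool for this registration step.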
Results (first example)
[Figures: original HR image f0; K = 3 LR images; one of the LR images; the two reconstructions.]
Relative error δf = ||f̂ - f0||² / ||f0||²:
◮ Robust regularization: δf = 9%
◮ Proposed method: δf = 7%
Results (second example)
[Figures: original HR image f0; K = 3 LR images; one of the LR images; the two reconstructions.]
Relative error δf = ||f̂ - f0||² / ||f0||²:
◮ Robust regularization: δf = 4.9%
◮ Proposed method: δf = 2.8%
Conclusions
◮ The Bayesian approach is appropriate for any inverse problem, and thus for the SR problem.
◮ This approach can account for any available prior knowledge.
◮ The uncertainties of each step are propagated to the following steps in a natural way, through the probability laws.
◮ The resulting methods perform better when the forward and prior models are more appropriate.
◮ In general, the computational cost is higher than for classical methods, but not prohibitively so.
Challenges
◮ Forward modeling: translation, rotation, zooming and other projective models must be accounted for in the registration step.
◮ Prior modeling: accounting for textures within each region.
◮ More efficient and robust estimation algorithms for movement or, more generally, registration parameters.
◮ More efficient PSF estimation algorithms.
◮ More efficient optimization algorithms and, more generally, Bayesian computation methods (MCMC, variational methods).
Questions and comments ?