Statistical Modeling for Computer Vision
Sources: Forsyth+Ponce, Chap. 7; Stanford Vision & Modeling
Statistical Modeling for Computer Vision: Agenda
• Statistical Models (see Forsyth+Ponce, Chap. 7)
  - Bayesian Decision Theory
  - Density Estimation
• PCA (Principal Component Analysis)
• EM (Expectation Maximization)
Example application of statistical models: segmentation with EM
Color Segmentation
Example of recognition with PCA: Face Recognition with PCA (Turk+Pentland):
Example of contour tracking with Bayes' theorem
Snake Tracking
(snake energy E + βΩ corresponds to the log posterior ln p(x|c) + ln p(c))
Statistical Models / Probability Theory
• Statistical model: a model that represents uncertainty and variability
• Probability theory: describes the mechanics of uncertainty
• See the examples in the electronic-book PDF on the CD (Forsyth+Ponce, Chap. 6)
Statistical Models / Probability Theory
• The central fact: everything is a random variable
Introduction to Optimal Bayes Decisions, with various applications to classification
Bayes Decision Theory
Example: Character Recognition:
Goal: classify characters so as to minimize the probability of misclassification.
Bayes Decision Theory
• Concept #1: Priors P(C_k) (prior probabilities)
Observed characters: aababaaba baaaabaaba abaaaabba babaabaa
Unknown next character: ?
Priors: P(a) = 0.75, P(b) = 0.25
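To make the idea concrete, here is a minimal Python sketch (not from the original slides) that estimates the priors as relative character frequencies in the observed sequence:

```python
# Estimate class priors P(C_k) as relative frequencies of the labels
# observed so far.
from collections import Counter

samples = "aababaaba" + "baaaabaaba" + "abaaaabba" + "babaabaa"

counts = Counter(samples)
total = sum(counts.values())
priors = {c: n / total for c, n in counts.items()}
print(priors)  # relative frequencies of 'a' and 'b' estimate P(a), P(b)
```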
Bayes Decision Theory
• Concept #2: Conditional Probability / Likelihood P(X | C_k)
(figure: likelihood curves P(X | a) and P(X | b) over the feature X = number of black pixels)
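As an illustration of estimating P(X | C_k), the sketch below (an assumption, not slide code) builds per-class histograms of the feature X = number of black pixels; the pixel counts are made-up data:

```python
# Estimate the class-conditional likelihoods P(X | a) and P(X | b) with
# per-class histograms of the feature X = number of black pixels.
import numpy as np

x_a = np.array([5, 6, 6, 7, 7, 7, 8])    # illustrative counts for class 'a'
x_b = np.array([8, 9, 9, 10, 10, 11])    # illustrative counts for class 'b'

bins = np.arange(0, 16)                  # one bin per integer pixel count
p_x_given_a, _ = np.histogram(x_a, bins=bins, density=True)
p_x_given_b, _ = np.histogram(x_b, bins=bins, density=True)

x = 7
print(p_x_given_a[x], p_x_given_b[x])    # likelihood of X = 7 under each class
```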
Bayes Decision Theory
• Example: (figure: likelihood curves P(X | a) and P(X | b))
For X = 7, which class: C_k = ?
Bayes Decision Theory
• Example: the same likelihood curves; now X = 8. Which class: C_k = ?
Bayes Decision Theory
• Example: (figure: likelihood curves P(X | a) and P(X | b))
For X = 8, decide C_k = a, because the priors P(a) = 0.75, P(b) = 0.25 favor a.
Bayes Decision Theory
• Example: (figure: likelihood curves P(X | a) and P(X | b))
For X = 9, which class: C_k = ? (priors P(a) = 0.75, P(b) = 0.25)
Bayes Decision Theory
• Bayes' theorem:
P(C_k | X) = P(X | C_k) P(C_k) / P(X)
Bayes Decision Theory
• Bayes' theorem:
P(C_k | X) = P(X | C_k) P(C_k) / P(X)
           = P(X | C_k) P(C_k) / Σ_j P(X | C_j) P(C_j)
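A small Python sketch of this formula (the priors are the slide's P(a) = 0.75, P(b) = 0.25; the likelihood values are assumptions):

```python
# Bayes' theorem: posterior = likelihood * prior, normalized over classes.
def posterior(likelihoods, priors):
    """likelihoods[k] = P(X | C_k), priors[k] = P(C_k)."""
    joint = {k: likelihoods[k] * priors[k] for k in likelihoods}
    p_x = sum(joint.values())            # P(X) = sum_j P(X | C_j) P(C_j)
    return {k: v / p_x for k, v in joint.items()}

print(posterior({"a": 0.10, "b": 0.30}, {"a": 0.75, "b": 0.25}))
# {'a': 0.5, 'b': 0.5}: a three-times-larger prior cancels a
# three-times-smaller likelihood.
```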
Bayes Decision Theory
• Bayes' theorem:
Posterior = (Likelihood × Prior) / Normalization factor
Bayes Decision Theory
• Example: (figure: likelihood curves P(X | a) and P(X | b))
Bayes Decision Theory
• Example: (figure: curves of likelihood × prior, P(X | a) P(a) and P(X | b) P(b))
Bayes Decision Theory
• Example: (figure: posterior curves P(a | X) and P(b | X))
For X > 8 the posterior of b dominates, so the input is assigned to class b.
Bayes Decision Theory
Goal: classify characters so as to minimize the probability of misclassification.
Decision boundaries:
Decide C_k when P(C_k | x) > P(C_j | x) for all j ≠ k
Bayes Decision Theory
Decision boundaries:
P(C_k | x) > P(C_j | x) for all j ≠ k
⇔ P(x | C_k) P(C_k) > P(x | C_j) P(C_j) for all j ≠ k
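In code, this decision rule is a one-line argmax; the sketch below (with assumed Gaussian class densities) mirrors the X = 8 example, where the larger prior tips the decision to class a:

```python
# Bayes decision rule: pick the class maximizing P(x | C_k) P(C_k);
# the common normalizer P(x) can be dropped.
import math

def gauss(mu, sigma):
    return lambda x: math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) \
        / (sigma * math.sqrt(2 * math.pi))

def decide(x, likelihood_fns, priors):
    return max(priors, key=lambda k: likelihood_fns[k](x) * priors[k])

fns = {"a": gauss(7.0, 1.0), "b": gauss(10.0, 1.0)}   # assumed class densities
print(decide(8.0, fns, {"a": 0.75, "b": 0.25}))       # 'a': the prior wins
```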
Bayes Decision Theory
Decision regions R_1, ..., R_c
(figure: the feature space partitioned into regions R_1, R_2, R_3)
Bayes Decision Theory
Goal: minimize the probability of misclassification
P(error) = P(x ∈ R_2, C_1) + P(x ∈ R_1, C_2)
Bayes Decision Theory
Goal: minimize the probability of misclassification
P(error) = P(x ∈ R_2, C_1) + P(x ∈ R_1, C_2)
         = P(x ∈ R_2 | C_1) P(C_1) + P(x ∈ R_1 | C_2) P(C_2)
Bayes Decision Theory
Goal: minimize the probability of misclassification
P(error) = P(x ∈ R_2, C_1) + P(x ∈ R_1, C_2)
         = P(x ∈ R_2 | C_1) P(C_1) + P(x ∈ R_1 | C_2) P(C_2)
         = ∫_{R_2} p(x | C_1) P(C_1) dx + ∫_{R_1} p(x | C_2) P(C_2) dx
Bayes Decision Theory
Goal: minimize the probability of misclassification
P(error) = ∫_{R_2} p(x | C_1) P(C_1) dx + ∫_{R_1} p(x | C_2) P(C_2) dx
This is minimized by assigning each x to the class with the larger p(x | C_k) P(C_k).
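The error integral can be checked numerically; this sketch (with assumed 1-D Gaussian class densities) evaluates it on a grid, using the Bayes rule itself to define the regions R_1 and R_2:

```python
# Numerically evaluate P(error) = integral over R2 of p(x|C1)P(C1)
# plus integral over R1 of p(x|C2)P(C2), for two 1-D Gaussian classes.
import numpy as np

xs = np.linspace(-5.0, 20.0, 20001)
dx = xs[1] - xs[0]

def gauss(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

p1, P1 = gauss(xs, 7.0, 1.0), 0.75     # class C1 (assumed parameters)
p2, P2 = gauss(xs, 10.0, 1.0), 0.25    # class C2 (assumed parameters)

in_R1 = p1 * P1 > p2 * P2              # Bayes rule defines the regions
p_error = np.sum(np.where(in_R1, p2 * P2, p1 * P1)) * dx
print(p_error)                         # small: the classes barely overlap
```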
Bayes Decision Theory
Why is p(x | C_k) P(C_k) (the posterior probability, up to normalization) so important?
Bayes Decision Theory: why is p(x | C_k) P(C_k) so important?
Example #1: Speech Recognition
(figure: speech signal → FFT → mel-scale filter bank → feature vector x; classes y ∈ [/ah/, /eh/, ..., /uh/]; vocabulary: apple, ..., zebra)
Bayes Decision Theory
Example #1: Speech Recognition
(figure: FFT mel-scale filter-bank outputs for acoustically similar inputs, labeled /t/, /aal/, /aol/, /owl/)
Bayes Decision Theory
Example #1: Speech Recognition
How do humans recognize speech so easily? Can a machine do the same?
Bayes Decision Theory
Example #1: Speech Recognition
(figure: the same pipeline, signal → FFT → mel-scale filter bank → feature vector x → class y, scored by the likelihood p(x | C_k))
Bayes Decision Theory
Example #1: Speech Recognition with a Language Model
(figure: signal → FFT → mel-scale filter bank → x → y; the acoustic model gives the likelihood p(x | C_k), the language model gives the prior P(C_k))
P("wreck a nice beach") = 0.001, P("recognize speech") = 0.02
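A toy sketch of this effect (the language-model priors are the slide's numbers; the acoustic likelihoods are assumptions):

```python
# The acoustic model alone slightly prefers the wrong transcript, but
# multiplying in the language-model prior P(C_k) flips the decision.
acoustic = {"wreck a nice beach": 0.40,     # assumed p(x | C_k)
            "recognize speech": 0.38}
lm_prior = {"wreck a nice beach": 0.001,    # P(C_k) from the slide
            "recognize speech": 0.02}

best = max(acoustic, key=lambda c: acoustic[c] * lm_prior[c])
print(best)  # 'recognize speech'
```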
Bayes Decision Theory: why is p(x | C_k) P(C_k) important?
Example #2: Computer Vision
Low-level image measurements provide the likelihood p(x | C_k); high-level model knowledge provides the prior P(C_k).
Bayes Decision Theory: why is p(x | C_k) P(C_k) important?
Example #3: Curve Fitting
The fitting energy E + βΩ corresponds to the log posterior ln p(x|c) + ln p(c): the data term E matches the log-likelihood ln p(x|c), and the smoothness penalty βΩ matches the log-prior ln p(c).
Bayes Decision Theory: why is p(x | C_k) P(C_k) important?
Example #4: Snake Tracking
As in curve fitting, the snake energy E + βΩ corresponds to the log posterior ln p(x|c) + ln p(c).
Density Estimation
• Statistical Models (Forsyth+Ponce, Chap. 6)
  - Bayesian Decision Theory
  - Density Estimation
Probability Density Estimation
Collected data: x1, x2, x3, x4, x5, ...
Estimate: the density p(x | C) that generated the samples
(figure: sample points on the x axis and the estimated density curve above them)
Probability Density Estimation
Estimation methods:
• Parametric Representations
• Non-Parametric Representations
• Mixture Models
Probability Density Estimation
• Parametric Representations
  - Normal Distribution (Gaussian)
  - Maximum Likelihood
  - Bayesian Learning
Normal Distribution
p(x) = (1 / (σ √(2π))) exp(−(x − µ)² / (2σ²)), with µ = mean and σ² = variance
Multivariate Normal Distribution
p(x) = (2π)^(−d/2) |Σ|^(−1/2) exp(−½ (x − µ)ᵀ Σ⁻¹ (x − µ)), with mean µ and covariance Σ
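A minimal sketch of evaluating this density in Python (the parameters below are arbitrary illustrations):

```python
# Evaluate the multivariate normal density at a point x.
import numpy as np

def mvn_pdf(x, mu, cov):
    d = len(mu)
    diff = x - mu
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return norm * np.exp(-0.5 * diff @ np.linalg.solve(cov, diff))

mu = np.array([0.0, 0.0])
cov = np.array([[2.0, 0.5],
                [0.5, 1.0]])
print(mvn_pdf(np.array([1.0, -1.0]), mu, cov))
```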
Multivariate Normal Distribution
Why Gaussians, what is special about them?
• Simple properties:
  - linear transformations of Gaussians are again Gaussian
  - marginal and conditional densities of Gaussians are Gaussian
  - the moments of a Gaussian density are explicit functions of µ and Σ
• "Good" model of nature:
  - Central Limit Theorem: the mean of M random variables is distributed normally in the limit.
Multivariate Normal Distribution
Discriminant functions:
y_k(x) = ln p(x | C_k) + ln P(C_k)
Multivariate Normal Distribution
Discriminant functions:
y_k(x) = ln p(x | C_k) + ln P(C_k)
With equal priors and equal covariances, comparing the y_k(x) reduces to comparing the Mahalanobis distance (x − µ_k)ᵀ Σ⁻¹ (x − µ_k) to each class mean.
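A sketch of this special case (the class means and shared covariance below are assumed values): with equal priors and one shared Σ, the only class-dependent terms are the Mahalanobis distances:

```python
# Classify by the smallest squared Mahalanobis distance to a class mean,
# which equals maximizing y_k(x) under equal priors and a shared covariance.
import numpy as np

def mahalanobis_sq(x, mu, cov):
    diff = x - mu
    return diff @ np.linalg.solve(cov, diff)

means = {"a": np.array([0.0, 0.0]),    # assumed class means
         "b": np.array([3.0, 3.0])}
cov = np.eye(2)                        # assumed shared covariance

x = np.array([1.0, 1.2])
print(min(means, key=lambda k: mahalanobis_sq(x, means[k], cov)))  # 'a'
```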
Multivariate Normal Distribution
How do we "learn" from examples? Two approaches:
• Maximum Likelihood
• Bayesian Learning
Maximum Likelihood
How do we "learn" from examples?
(figure: sample points on the x axis; the underlying density is unknown)
Maximum Likelihood
The likelihood of a density model θ generating the data X:
L(θ) ≡ p(X | θ) = Π_{n=1}^{N} p(x_n | θ)
Maximum Likelihood
The likelihood of a density model θ generating the data X:
L(θ) ≡ p(X | θ) = Π_{n=1}^{N} p(x_n | θ)
More convenient: E = −ln L(θ) = −Σ_{n=1}^{N} ln p(x_n | θ)
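A sketch of evaluating E for a 1-D Gaussian model θ = (µ, σ) (the data values are illustrative); lower E means the model explains the data better:

```python
# Negative log-likelihood E = -sum_n ln p(x_n | theta) for a 1-D Gaussian.
import numpy as np

def neg_log_likelihood(data, mu, sigma):
    return np.sum(0.5 * np.log(2 * np.pi * sigma ** 2)
                  + (data - mu) ** 2 / (2 * sigma ** 2))

data = np.array([6.8, 7.1, 7.4, 6.9, 7.3])
print(neg_log_likelihood(data, mu=7.1, sigma=0.3))   # good fit: small E
print(neg_log_likelihood(data, mu=10.0, sigma=0.3))  # poor fit: large E
```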
Maximum Likelihood
"Learning" = an optimization process (maximizing the likelihood / minimizing E):
E = −ln L(θ) = −Σ_{n=1}^{N} ln p(x_n | θ)
Maximum Likelihood
Maximum Likelihood for a Gaussian density:
E = −ln L(θ) = −Σ_{n=1}^{N} ln p(x_n | θ)
The closed-form solution:
µ̂ = (1/N) Σ_{n=1}^{N} x_n
Σ̂ = (1/N) Σ_{n=1}^{N} (x_n − µ̂)(x_n − µ̂)ᵀ
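These closed-form estimators are two lines of numpy; the random data below is only an illustration. Note that np.cov defaults to the unbiased 1/(N-1) estimator, so bias=True is needed to match the 1/N maximum-likelihood form:

```python
# Maximum-likelihood estimates of a Gaussian's mean and covariance.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=[1.0, -2.0], scale=1.5, size=(500, 2))  # N samples, d dims

mu_hat = X.mean(axis=0)                        # (1/N) sum_n x_n
centered = X - mu_hat
sigma_hat = centered.T @ centered / len(X)     # (1/N) sum (x_n - mu)(x_n - mu)^T

print(mu_hat)
print(np.allclose(sigma_hat, np.cov(X.T, bias=True)))  # True
```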
Probability Density Estimation
• Parametric Representations
• Non-Parametric Representations
• Mixture Models