Video Denoising via Adaptive Quantization

Huipin Zhang

May 8, 2010

Contents

1 Introduction
2 Algorithm
  2.1 Formulation
  2.2 Wiener filter
  2.3 Simplification
  2.4 Scaling via Quantization in H.264
3 Implementation
  3.1 Model simplification
  3.2 Noise variance estimation
4 Conclusions
5 References


1 Introduction

We introduce a Motion Compensated Temporal Filtering (MCTF) based denoising algorithm that is easy to implement in an H.264 video encoder. It turns out that we can use the adaptive quantization mechanism in the current encoder to remove noise. The implementation is simple, but the effectiveness of the noise removal still needs to be tested.

2 Algorithm

2.1 Formulation

Denote by $S_n$ and $R_n$, respectively, the original frame $n$ and its reference frame in a Motion Compensation (MC) video encoder. Denote by $\tilde{S}_n$ the denoised frame $n$, which can be obtained via a simple MCTF:

\[ \tilde{S}_n(i,j) = \omega\, S_n(i,j) + (1-\omega)\, R_n(i+x,\,j+y), \qquad (1) \]

where $v = (x, y)$ represents the motion vector for location $(i,j)$ used in the video encoder, and $\omega$ is a smoothing factor (weight) which takes values from 0 to 1. According to (1), the MC residual corresponding to the denoised frame $n$, $\tilde{r}_n(i,j) = \tilde{S}_n(i,j) - R_n(i+x,\,j+y)$, is

\[ \tilde{r}_n(i,j) = \omega\, r_n(i,j), \qquad (2) \]

where $r_n(i,j) = S_n(i,j) - R_n(i+x,\,j+y)$ is the MC residual corresponding to the original frame $n$. Equation (2) indicates that the MC residual used in the video encoder is a scaled version of the MC residual obtained from the original source frame.
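As a quick numerical check (a sketch only, using NumPy and toy data rather than encoder buffers; the block size and weight below are arbitrary), the temporal filter (1) and the residual relation (2) can be written and verified as follows:

```python
import numpy as np

def mctf_blend(S_n, R_mc, w):
    """Temporal filter of equation (1): blend the current frame S_n with its
    motion-compensated reference R_mc using a weight w in [0, 1].
    R_mc is assumed to be R_n already shifted by the motion vectors,
    i.e. R_mc[i, j] = R_n[i + x, j + y]."""
    return w * S_n + (1.0 - w) * R_mc

# Toy data standing in for one block of a frame and its MC prediction.
S_n  = np.random.rand(16, 16) * 255.0
R_mc = S_n + np.random.randn(16, 16) * 5.0
w    = 0.7

r_orig     = S_n - R_mc                       # MC residual of the original frame
r_denoised = mctf_blend(S_n, R_mc, w) - R_mc  # MC residual of the denoised frame

# Equation (2): the denoised residual is the original residual scaled by w.
assert np.allclose(r_denoised, w * r_orig)
```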

2.2 Wiener filter

Equation (2) is equivalent to filtering the motion compensated residual signals. In the literature there are various filtering techniques for denoising, such as hard thresholding and soft thresholding. The Wiener filter, however, is optimal in the sense that it minimizes the mean square error (MMSE criterion) between the signal and the observation. More explicitly, under the additive noise model, for an observed MC residual signal $r(i,j)$, the filtered MC residual signal $\tilde{r}(i,j)$ is

\[ \tilde{r}(i,j) = \frac{\hat{\sigma}_r^2(i,j)}{\hat{\sigma}_r^2(i,j) + \sigma_n^2}\, r(i,j), \qquad (3) \]

where $\hat{\sigma}_r^2(i,j)$ is the estimated variance of the residual signal and $\sigma_n^2$ is the variance of the noise. Therefore, the scaling factor is represented as

\[ \omega = \frac{\hat{\sigma}_r^2(i,j)}{\hat{\sigma}_r^2(i,j) + \sigma_n^2}. \qquad (4) \]
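A rough sketch of the pixel-wise Wiener scaling in (3) and (4) is given below. The sliding-window estimate of the local residual variance and the window size are illustrative choices of this sketch, not part of the derivation above.

```python
import numpy as np
from scipy.ndimage import uniform_filter  # box filter for the local variance estimate

def wiener_scale_residual(r, sigma_n2, win=5):
    """Pixel-wise Wiener scaling of an MC residual, following (3) and (4).
    r        : 2-D array of motion-compensated residuals
    sigma_n2 : noise variance sigma_n^2
    win      : window size for the local variance estimate (illustrative)
    """
    # Local estimate of E[r^2]; the MC residual is roughly zero-mean, so this
    # serves as the estimated residual variance sigma_r^2(i, j) in (4).
    local_var = uniform_filter(r * r, size=win)
    w = local_var / (local_var + sigma_n2)  # scaling factor omega(i, j), eq. (4)
    return w * r                            # filtered residual, eq. (3)
```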

2.3 Simplification

The filtering in (3) involves operations at every pixel location in an image. It is thus desirable to reduce the complexity of the filtering operation. For computational simplicity, we may apply the same scaling factor to all residuals in a macroblock (MB), i.e., $\omega$ is constant for an MB,

\[ \omega = \frac{\hat{\sigma}_r^2}{\hat{\sigma}_r^2 + \sigma_n^2}, \qquad (5) \]

where $\hat{\sigma}_r^2$ is the variance of the residual signal of the MB. This simplification may be well justified for MBs whose signals are similarly compensated, i.e., whose residual signals have approximately the same variance. For MBs with high or irregular motion the simplification is therefore questionable; however, we believe that noise is much less noticeable in such areas than in other areas. Even more aggressively, $\omega$ may be further simplified to be based on the mean-absolute-difference (MAD) of the MB,

\[ \omega = \frac{\mathrm{MAD}}{\mathrm{MAD} + \sigma_n}, \qquad (6) \]

where the noise variance is accordingly replaced by the noise standard deviation $\sigma_n$. Note that a similar model for the scaling factor is introduced in [1]; that model is illustrated in Figure 1.

Figure 1: The model for the scaling factor $\omega$ in [1], where $T$ is a threshold.

2.4 Scaling via Quantization in H.264

Considering the linearity of the DCT, equation (2) further indicates that a DCT coefficient $\tilde{C}$ to be quantized and encoded is a scaled version of the noisy coefficient $C$, i.e., $\tilde{C} = \omega C$. Assume the quantization step size for the block is $Q$. Then $\tilde{C}$ is quantized as follows:

\[ d = \frac{\tilde{C}}{Q} = \frac{\omega C}{Q} = \frac{C}{Q/\omega}. \]

We therefore conclude that the denoising process can be equivalently converted into quantization with a revised quantization step size. Since the quantization step size in H.264 doubles with every increase of 6 in QP, the step size revision is equivalent to an additive offset in the Quantization Parameter (QP) domain:

\[ \Delta\mathrm{QP} = -6 \log_2 \omega. \qquad (7) \]

3 Implementation

3.1 Model simplification

Combining equations (6) and (7), we have

\[ \Delta\mathrm{QP} = -6 \log_2 \frac{\mathrm{MAD}}{\mathrm{MAD} + \sigma_n}. \qquad (8) \]

To avoid the expensive computation of the logarithm, we can further simplify the model, for example as

\[ \Delta\mathrm{QP} = \frac{20\,\sigma_n}{2\,\mathrm{MAD} + \sigma_n}, \]

where the maximal $\Delta\mathrm{QP}$ is 20, which is achieved when $\mathrm{MAD} = 0$. Clearly, similar models can also be tried. Figure 2 illustrates two approximation models together with the original model shown in (8).

Figure 2: Illustration of the log model with two approximation models.
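To make the comparison behind Figure 2 concrete, the sketch below evaluates the log model (8) and the rational approximation above at a few MAD values; the numbers are arbitrary and not taken from the figure.

```python
import numpy as np

def dqp_log(mad, sigma_n):
    """Exact model of equation (8): delta QP = -6 * log2(MAD / (MAD + sigma_n))."""
    return -6.0 * np.log2(mad / (mad + sigma_n))

def dqp_approx(mad, sigma_n):
    """Rational approximation avoiding the logarithm; it reaches its maximum
    of 20 at MAD = 0 and decays as MAD grows."""
    return 20.0 * sigma_n / (2.0 * mad + sigma_n)

sigma_n = 4.0  # illustrative noise standard deviation
for mad in (1.0, 2.0, 4.0, 8.0, 16.0, 32.0):
    print(f"MAD={mad:5.1f}  log model={dqp_log(mad, sigma_n):5.2f}  "
          f"approx={dqp_approx(mad, sigma_n):5.2f}")
```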

3.2 Noise variance estimation

It remains to discuss how to estimate the noise variance. There are sophisticated estimation methods based on statistical models, and there are also simple algorithms using statistics from the images. For simplicity, we prefer algorithms that are easy to implement and based on statistics easily accessible in a video encoder. The noise variance is usually computed from a set of data that is believed to be mostly noise. The separation of signal and noise is usually achieved by high-pass filtering the original noisy signal; the high-passed signal is then considered to be noise. The high-pass filtering can be the result of a wavelet transform, a DCT, or a heuristic filter such as $f = [1, -1]/2$. For video processing, we may consider motion compensation on the encoder side to be a high-pass filtering process for slow-moving or static regions. Therefore, the noise variance can be estimated using the MC residuals of the slow-moving or static MBs only, which the current encoder can easily accommodate.
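As one concrete instance of the simple high-pass approach mentioned above, the sketch below estimates the noise standard deviation from horizontal differences filtered with $f = [1, -1]/2$. The robust median-based spread estimate and the scaling back to $\sigma_n$ are assumptions of this sketch (i.i.d. noise, locally smooth content), not the encoder-side method described next.

```python
import numpy as np

def estimate_noise_std_highpass(frame):
    """Crude noise estimate from horizontal pixel differences, f = [1, -1] / 2."""
    d = (frame[:, 1:] - frame[:, :-1]) / 2.0  # high-passed signal
    # Robust spread estimate: 1.4826 maps the median absolute deviation of a
    # Gaussian sample to its standard deviation.
    med_abs_dev = np.median(np.abs(d - np.median(d)))
    std_d = 1.4826 * med_abs_dev
    # For i.i.d. noise, var((N1 - N2) / 2) = sigma_n^2 / 2, so scale back up.
    return np.sqrt(2.0) * std_d
```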

Denote by $\bar{S}_n$ the clean frame $n$ containing no noise. Under the additive noise model, we have

\[ S_n(i,j) = \bar{S}_n(i,j) + N_n(i,j), \]

where $N_n(i,j)$ is the noise component of pixel $(i,j)$ in frame $n$. The corresponding MC residual $r_n(i,j) = S_n(i,j) - R_n(i+x,\,j+y)$ can now be expressed as

\[ r_n(i,j) = \bigl(\bar{S}_n(i,j) - R_n(i+x,\,j+y)\bigr) + N_n(i,j). \]

For slow-moving or static regions we can assume $\bar{S}_n(i,j) \approx R_n(i+x,\,j+y)$, and therefore $r_n(i,j) \approx N_n(i,j)$. Consequently, the noise variance $\sigma_n^2$ is estimated as

\[ \sigma_n^2 = \mathrm{var}\bigl(N_n(i,j)\bigr) = \mathrm{var}\bigl(r_n(i,j)\bigr). \]
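A minimal encoder-side sketch of this estimate is shown below; it assumes the encoder can flag the slow-moving or static MBs, and the names `residuals` and `is_static` are hypothetical rather than taken from an actual encoder.

```python
import numpy as np

def estimate_noise_variance(residuals, is_static):
    """Estimate sigma_n^2 from the MC residuals of slow-moving / static MBs only.
    residuals : list of 2-D arrays, one MC residual block per macroblock
    is_static : list of booleans flagging the MBs treated as static
                (e.g. near-zero motion vectors and small residual energy)
    """
    selected = [r.ravel() for r, s in zip(residuals, is_static) if s]
    if not selected:
        return 0.0                      # no static MBs found; caller should fall back
    samples = np.concatenate(selected)
    return float(np.var(samples))       # sigma_n^2 ~= var(r_n) over static regions
```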

Note that the noise variance estimation does not have to be performed for every frame. Rather, it can be done once, or periodically, within a meeting session. It is not yet known, but can be tested, whether we may heuristically determine the noise level for some cameras, or even for all cameras. Of course, this has to be evaluated very carefully by reviewing the final quality of the encoded video.

4 Conclusions

We have developed a simple method to remove noise in a video encoder that can be implemented together with adaptive quantization. It is based on MCTF and is MB-adaptive. It still needs to be tested and fine-tuned. The removal of noise in intra frames is not considered, but it should not be as critical as removing noise from inter frames.

5 References

[1] Byung Cheol Song and Kong Wook Chun, "Motion-compensated temporal prefiltering for noise reduction in a video encoder," IEEE International Conference on Image Processing, vol. 2, pp. 1221-1224, 2004.

[2] M. Kivanc Mihcak, Igor Kozintsev, and Kannan Ramchandran, "Spatially adaptive statistical modeling of wavelet image coefficients and its application to denoising," Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 6, pp. 3253-3256, 1999.

