The Application of Colour Filtering to Real-Time Person Tracking

N. T. Siebel and S. J. Maybank

Computational Vision Group

Department of Computer Science

The University of Reading, Whiteknights,

Reading, Berkshire, RG6 6AY, UK


January 2, 2002

Abstract

We present results from multiple experiments with colour filtering methods in order to improve robustness in an integrated surveillance system to track people in a subway station. The system is designed to operate in real-time in a distributed local network of off-the-shelf computers, resulting in practical constraints not found in developmental systems. We show that the quality of colour information is degraded by electrical interference and image compression to such an extent that it is no longer useful for local edge detection. We give a recommendation as to what methods can be used to filter out most of the image noise influencing local edge detection and show how using these methods increases robustness of tracking.

Figure 1: View from surveillance camera

[…] robust involve the introduction of prior knowledge, for example the use of complex statistical models to detect image motion [2], and the use of detailed 3D models of the human body to give more accurate tracking results [4, 7]. However, automated visual surveillance systems have to operate in real-time and with a minimum of hardware requirements if the system is to be economical and scalable. Even with today's computer speeds this limits the complexity of methods for detection and tracking. Haritaoglu et al. [5] show that real-time performance can be obtained with a simplified person model in low-resolution images.

In this paper we use simple image filtering techniques to improve a people tracking method in which the person model is of medium complexity. We explore whether it is possible to use the colour information which is available on many systems today. We show image filtering methods which can enhance edge detection, thereby improving the robustness of image analysis software for people tracking.

1 People Tracking

The tracking system used for our experiments is an extension of the Leeds People Tracker, which was developed by Baumberg and Hogg [1]. We have ported the tracker from an SGI platform to a PC running GNU/Linux to make economic system integration feasible.

The tracker uses an active shape model [1] for the contour of a person in the image. A space of suitable models is learnt in a training stage using a set of video images containing walking pedestrians. Detected person outline shapes are represented using cubic B-splines, giving a large set of points in a high-dimensional parameter space. Principal Component Analysis (PCA) is applied to the obtained set of shapes to generate a lower-dimensional subspace S which explains the most significant modes of shape variation, and which is a suitable state space for the tracker.
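The construction of the shape space S can be sketched as follows. This is a minimal NumPy illustration of PCA on flattened control-point vectors, not the tracker's actual implementation; the function names and the number of retained modes are illustrative.

```python
import numpy as np

def learn_shape_space(shapes, n_modes=8):
    """Learn a low-dimensional shape space S via PCA.

    shapes: (N, 2K) array of N training outlines, each a flattened
    vector of K B-spline control points (x1, y1, ..., xK, yK).
    Returns the mean shape and the top principal modes of variation.
    """
    mean = shapes.mean(axis=0)
    centred = shapes - mean
    # SVD of the centred data yields the principal components.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_modes]          # rows of `modes` span S

def project_to_S(shape, mean, modes):
    """Project a detected outline into S (its PCA coordinates)."""
    return modes @ (shape - mean)

def reconstruct(coords, mean, modes):
    """Map PCA coordinates back to an outline in image space."""
    return mean + coords @ modes
```

A detected outline is projected into S with `project_to_S`, tracked as a low-dimensional state vector, and mapped back to the image with `reconstruct`.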

1.1 Basic detection and tracking

People tracking is performed in multiple stages. The tracker maintains a background image which is automatically updated by median-filtering the sequence of video images over time. To detect new people, a motion detector subtracts the background from the current video image. Thresholding of this difference image yields a binary image containing regions where the pixel values of the current image differ significantly from the pixel values of the background image. The detected foreground regions that match certain criteria for size and shape are examined. Their outline shape is approximated by a B-spline and projected into the PCA space S of trained pedestrian outlines. The new shape obtained in this process is then used as a starting point for further shape analysis. Once people are recognised they are tracked using the trained shape model. This is done with the help of Kalman filtering using second order motion models for their movements. The state of the tracker includes the current outline shape as a point in S, which is updated as the observed outline changes during tracking.
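The detection stage (median-filtered background, differencing, thresholding) can be sketched as below. The buffer length and threshold are illustrative values, not the system's actual parameters.

```python
import numpy as np
from collections import deque

class MotionDetector:
    """Temporal-median background model with thresholded differencing.

    A minimal sketch of the detection stage described above, working
    on grey-level frames; buffer length and threshold are illustrative.
    """

    def __init__(self, buffer_len=25, threshold=30):
        self.frames = deque(maxlen=buffer_len)
        self.threshold = threshold

    def update_background(self, frame):
        self.frames.append(frame.astype(np.int16))
        # The pixelwise median over recent frames approximates the
        # static scene even while people move through it.
        return np.median(np.stack(self.frames), axis=0)

    def detect(self, frame):
        background = self.update_background(frame)
        diff = np.abs(frame.astype(np.int16) - background)
        # Binary foreground mask: pixels differing significantly
        # from the background image.
        return diff > self.threshold
```

The resulting binary mask would then be filtered by the size and shape criteria before B-spline approximation.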


Figure 2: Edge search for shape fitting

1.2 Fitting a person shape

In order to adjust the outline shape to each new image of a person we use an iterative optimisation method. The current estimate of the shape in S is projected onto the image. It is then adjusted to the actual shape in an optimisation loop by searching for edges of the person's outline at each spline control point around the shape. Figure 2 illustrates this process, showing the original camera image, in this case with a fairly dark person against a light background.
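One step of this edge search can be sketched as follows: at a single control point, sample the filtered grey-level image along the outline normal and move towards the strongest grey-level change. Coordinate conventions and the search length are assumptions; the tracker's actual measurement process is more elaborate.

```python
import numpy as np

def fit_control_point(image, point, normal, search_len=10):
    """Search along the outline normal at one spline control point
    for the strongest edge; a sketch of one iteration of the
    shape-fitting optimisation loop.

    image:  2-D grey-level array (the filtered image).
    point:  (row, col) position of the control point.
    normal: unit normal to the outline at that point.
    """
    h, w = image.shape
    best_offset, best_grad = 0, -1.0
    for t in range(-search_len, search_len):
        # Two neighbouring sample positions along the normal.
        p0 = np.round(point + t * normal).astype(int)
        p1 = np.round(point + (t + 1) * normal).astype(int)
        if not (0 <= p0[0] < h and 0 <= p0[1] < w and
                0 <= p1[0] < h and 0 <= p1[1] < w):
            continue
        # Edge strength: grey-level change between the two samples.
        grad = abs(float(image[p1[0], p1[1]]) - float(image[p0[0], p0[1]]))
        if grad > best_grad:
            best_grad, best_offset = grad, t
    # Move the control point towards the strongest edge found.
    return point + best_offset * normal
```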

It is in this measurement process that image noise and background clutter severely disturb the search for the best-fitting outline.

2 Colour Filtering

In order to make edge detection more robust we filter the image in such a way that foreground objects become more salient. Normally the search for edges is done in the pixelwise absolute difference image obtained by subtracting the background image from the video image, to improve contrast [1].

2.1 Image Quality

When dealing with colour information contained in video images, their quality is an important issue. Before interpreting colour information obtained from image measurements it is important to trace the path the image took from camera to computer. The colour images used in our system originate from interlaced colour cameras installed in subway stations. They are transferred over long cables and affected by interference from electrical equipment in the station. The image sequence used for the experiments shown was recorded in a London Underground station, stored on analogue (S-VHS PAL) video tape, which degraded the colour information. The image quality was further reduced by the JPEG image compression needed for transmission over a local network (Ethernet) in the integrated surveillance system. Even though the human eye does not notice much of the image degradation, the JPEG compression can create considerable problems for image processing algorithms. The reason is that JPEG compression techniques usually sub-sample colour information (the 2 "chrominance channels", denoted C_B and C_R) [6], thereby causing problems in algorithms which rely on the correctness of this colour information. JPEG stores the image brightness ("luminance", Y) at a higher resolution than chrominance because this component makes up the main part of the information extracted from images by humans.
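The effect of chroma sub-sampling can be illustrated as below: RGB is converted to Y, C_B, C_R (here using the JFIF/BT.601 convention common in JPEG implementations, which may differ in detail from the codec in the deployed system), and each chrominance channel is averaged over 2x2 blocks, halving its spatial resolution while Y keeps full resolution.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split an RGB image into luminance Y and the two chrominance
    channels C_B and C_R (JFIF convention, ITU-R BT.601 weights)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def subsample_420(channel):
    """Emulate 4:2:0 chroma sub-sampling: average each 2x2 block and
    replicate it back, i.e. halve the chrominance resolution."""
    h, w = channel.shape
    blocks = channel[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    coarse = blocks.mean(axis=(1, 3))
    return np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
```

Applying `subsample_420` to C_B and C_R blurs colour edges that do not coincide with the 2x2 block grid, which is precisely why colour edge cues become unreliable after compression.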

2.2 Filtering Techniques

Our investigations involved a large number of colour filtering techniques, to find out how they influence edge contrast and how they behave in the presence of the image noise described earlier. The filters used project each pixel in the image from the 3-dimensional RGB colour space down to a 1-dimensional subspace, giving a grey level image (see below for examples). The resulting image can then be used by our edge detection algorithm to search for those edges included in the outlines of people. We have applied both linear (weighted sums of RGB values) and non-linear mappings (for example, mapping to the hue component in HSV space). However, given the low colour resolution in our images some filters are not feasible in our system. In the following examinations we focus on the results obtained from mappings that are least prone to colour noise.

Another problem for edge detection algorithms is the low contrast of people against the background. The image in Figure 1, chosen as an example, shows a person with a white jacket against a light yellow background. In the colour difference image the jacket appears as dark blue, and most edge detection methods will not be able to detect it as the edge contrast at the jacket outline is too low.
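As an example of a non-linear mapping of the kind mentioned above, the hue component of HSV space can be computed per pixel and rescaled to a grey level. This is a generic sketch of such a filter, not the paper's implementation; note that hue is exactly the kind of mapping that suffers when chrominance is noisy.

```python
import numpy as np

def hue_filter(rgb):
    """Non-linear RGB -> grey mapping: project each pixel to its HSV
    hue, rescaled to [0, 255]. Grey pixels are assigned hue 0."""
    rgb = rgb.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cmax, cmin = rgb.max(axis=-1), rgb.min(axis=-1)
    delta = np.where(cmax > cmin, cmax - cmin, 1.0)  # avoid /0 on greys
    # Standard sector-based hue formula, in units of 60 degrees.
    hue = np.select(
        [cmax == r, cmax == g],
        [((g - b) / delta) % 6, (b - r) / delta + 2],
        default=(r - g) / delta + 4,
    )
    hue = np.where(cmax == cmin, 0.0, hue / 6.0)     # normalise to [0, 1)
    return (hue * 255).astype(np.uint8)
```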

2.3 Filtering the difference image

Many algorithms for colour edge detection start by taking the absolute difference in each colour channel to get a new colour image. Each pixel value p = (R, G, B) of this difference is then mapped to a scalar using a function of its components. The mappings use a scaled Euclidean norm, ‖p‖₂ = √((R² + G² + B²)/3), or a weighted sum, w₁R + w₂G + w₃B with Σᵢ wᵢ = 1.
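These two mappings can be sketched as below. The division by 3 inside the norm is one reading of the "scaled" Euclidean norm (it keeps the result in the usual grey-level range); the uniform default weights are illustrative.

```python
import numpy as np

def colour_difference(image, background):
    """Pixelwise absolute difference in each RGB channel."""
    return np.abs(image.astype(np.float64) - background.astype(np.float64))

def map_euclidean(diff):
    """Scaled Euclidean norm of each difference pixel p = (R, G, B):
    sqrt((R^2 + G^2 + B^2) / 3), so a uniform difference d maps to d."""
    return np.sqrt((diff ** 2).sum(axis=-1) / 3.0)

def map_weighted(diff, w=(1 / 3, 1 / 3, 1 / 3)):
    """Weighted sum w1*R + w2*G + w3*B with the weights summing to 1."""
    return diff @ np.asarray(w)
```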

The image in Figure 3 shows the result for a simple linear mapping from RGB space onto the luminance (Y) subspace, using the Y projection from CIE recommendation 709, denoted here as Y709 (taken from [3]):

Y := Y709 = 0.2125 R + 0.7154 G + 0.0721 B.    (1)

The weights are chosen in order to mimic the way humans perceive the brightness of a colour image. However, this weighting only […]
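Applying equation (1) to both the video image and the background before differencing can be sketched as follows, producing a luminance-only difference image of the kind shown in Figure 4. The function name is illustrative.

```python
import numpy as np

# Rec. 709 luminance weights from equation (1); they sum to 1.
REC709 = np.array([0.2125, 0.7154, 0.0721])

def luminance_difference(image, background):
    """Map both RGB images to Y709 luminance first, then take the
    pixelwise absolute difference -- a luminance-only filtering of
    the difference image."""
    y_img = image.astype(np.float64) @ REC709
    y_bg = background.astype(np.float64) @ REC709
    return np.abs(y_img - y_bg)
```

Because the chrominance channels never enter the computation, their noise cannot disturb the subsequent edge search.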

Figure 4: Absolute difference of luminance Y

[…] person's coat and trousers we see that these are now more uniformly shaded, which means that edge detection is also less prone to being distracted by edges located inside the body.

This method of filtering also makes background edges stand out more clearly. For motion detection algorithms this could create difficulties. Our edge detector, however, operates locally in the image, so this does not pose a problem.

3 Discussion

In this paper we have presented an approach to improve local edge detection for people tracking systems, in order to increase robustness in the presence of image noise. The main problem in an integrated system like ours is the noise found in the chrominance channels of the image, as seen in Figure 5. This is due to electrical interference in a subway station (analogue camera cabling) and image compression techniques (such as JPEG) within our local computer network.

We have shown that image differencing using only luminance gives better results than the standard routines, thereby improving local edge detection, which is a crucial part of our people detection and tracking system.

Figure 5: Chrominance channels C_B and C_R

It should be noted that even in a system like ours, colour information can still be of good use for the identification of people once they are tracked. Examining the influence of strong colour noise on the identification of tracked people is part of our ongoing research.

References

[1] Adam M. Baumberg. Learning Deformable Models for Tracking Human Motion. PhD thesis, School of Computer Studies, University of Leeds, October 1995.

[2] Ahmed Elgammal, David Harwood, and Larry Davis. Non-parametric model for background subtraction. In David Vernon, editor, ECCV 2000, 6th European Conference on Computer Vision, pages 751–767. Springer Verlag, 2000.

[3] Adrian Ford and Alan Roberts. Colour Space Conversions, August 1998.

[4] Dariu M. Gavrila and Larry S. Davis. Tracking of humans in action: A 3-D model-based approach. In ARPA Image Understanding Workshop, pages 737–746, Palm Springs, February 1996.

[5] Ismail Haritaoglu, David Harwood, and Larry S. Davis. W4: Real-time surveillance of people and their actions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):809–830, August 2000.

[6] Thomas G. Lane. IJG JPEG Library: System Architecture. Independent JPEG Group, 1991–1995. Part of the Independent JPEG Group's JPEG software documentation.

[7] Hedvig Sidenbladh, Michael J. Black, and David J. Fleet. Stochastic tracking of 3D human figures using 2D image motion. In David Vernon, editor, ECCV 2000, 6th European Conference on Computer Vision, pages 702–718. Springer Verlag, 2000.

