
Contour and boundary detection improved by surround suppression of texture edges

Cosmin Grigorescu, Nicolai Petkov *, Michel A. Westenberg

Institute of Mathematics and Computing Science, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands

Received 18 February 2003; received in revised form 15 December 2003; accepted 16 December 2003

Abstract

We propose a computational step, called surround suppression, to improve detection of object contours and region boundaries in natural scenes. This step is inspired by the mechanism of non-classical receptive field inhibition that is exhibited by most orientation selective neurons in the primary visual cortex and that influences the perception of groups of edges or lines. We illustrate the principle and the effect of surround suppression by adding this step to the Canny edge detector. The resulting operator responds strongly to isolated lines and edges, region boundaries, and object contours, but exhibits a weaker or no response to texture edges. Additionally, we introduce a new post-processing method that further suppresses texture edges. We use natural images with associated subjectively defined desired output contour and boundary maps to evaluate the performance of the proposed additional steps. In a contour detection task, the Canny operator augmented with the proposed suppression and post-processing step achieves better results than the traditional Canny edge detector and the SUSAN edge detector. The performance gain is highest at scales for which these latter operators strongly react to texture in the input image. © 2003 Elsevier B.V. All rights reserved.

Keywords: Edge; Region boundary; Contour detection; Texture; Inhibition; Non-classical receptive field; Surround suppression; Context; Canny; SUSAN

1. Introduction

Edge detection is considered a fundamental operation in image processing and computer vision, with a large number of studies published in the last two decades. In the context of this paper, the term 'edge' stands for a local luminance change for which a gradient can be defined and which is of sufficient strength to be considered important in a given task. Examples of edge detectors are operators that incorporate linear filtering [6,13,25,35,41], local orientation analysis [17,36,59], fitting of analytical models to the image data [8,16,22,40] and local energy [12,24,30,39]. Some of these methods were biologically motivated [24,25,35,39]. Since these operators do not make any difference between various types of edges, such as texture edges vs. object contours and region boundaries, they are known as non-contextual or, simply, general edge detectors [58].

Other studies propose more elaborate edge detection techniques that take into account additional information around an edge, such as local image statistics, image topology, perceptual differences in local cues (e.g. texture, colour), edge continuity and density, etc. Examples are dual frequency band analysis [48], statistical analysis of the gradient field [2,38], anisotropic diffusion [4,9,45,57], complementary analysis of boundaries and regions [32–34], use of edge density information [10] and biologically motivated surround modulation [19,31,46,47]. These operators are not aimed at detecting all luminance changes in an image but rather at selectively enhancing only those of them that are of interest in the context of a specific computer vision task, such as the outlines of tissues in medical images, object contours in natural image scenes, boundaries between different texture regions, etc. Such methods are usually referred to as contextual edge detectors. The human visual system differentiates in its early stages of visual information processing between isolated edges, such as object contours and region boundaries, on the one hand, and edges in a group, such as those in texture, on the other. Various psychophysical studies have shown that the perception of an oriented stimulus, e.g. a line segment, can be influenced by the presence of other such stimuli

0262-8856/$ - see front matter © 2003 Elsevier B.V. All rights reserved.

doi:10.1016/j.imavis.2003.12.004

Image and Vision Computing 22 (2004) 609–622


* Corresponding author. Address: Department of Computing Science, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands. Tel.: +31-50-363-7129; fax: +31-50-363-3800.

E-mail addresses: petkov@cs.rug.nl (N. Petkov), cosmin@cs.rug.nl (C. Grigorescu), michel@cs.rug.nl (M.A. Westenberg).

(distractors) in its neighbourhood. This influence can, for instance, manifest itself in the decreased saliency of a contour in presence of surrounding texture [11,27] (Fig. 1), in the so-called orientation contrast pop-out effect [43], or in the decreased visibility of letters, object icons, and bars embedded in texture [47,54]. These visual perception effects are in agreement with the results of neurophysiological measurements on neural cells in the primary visual cortex. These studies show that the response of an orientation selective visual neuron to an optimal bar stimulus in its receptive field¹ is reduced by the addition of other oriented stimuli to the surround [28,29,44]. Neurophysiologists refer to this effect as non-classical receptive field (non-CRF) inhibition [29,44] or, equivalently, surround suppression [26]. Statistical data [26,29,44] reveals that about 80% of the orientation selective cells in the primary visual cortex show this inhibitory effect. In approximately 30% of all orientation selective cells, surround stimuli of orientation that is orthogonal to the optimal central stimulus have a weaker suppression effect than stimuli of the same orientation (anisotropic inhibitory behavior), see Fig. 2(a)–(c). In 40% of the cells, the suppression effect manifests itself irrespective of the relative orientation between the surrounding stimuli and the central stimulus (isotropic inhibitory behavior), see Fig. 2(e)–(g).

In [19,47], it was suggested that the biological utility of non-CRF inhibition is contour enhancement in natural images rich in background texture. In that study, contour detection operators were proposed that combine two biologically motivated steps: Gabor energy edge detection followed by non-CRF inhibition. In the current study, we demonstrate that the usefulness of non-CRF inhibition is not limited to biologically motivated contour detection operators only. We incorporate a non-CRF inhibition step into a typical gradient-based edge detector, the Canny operator that is widely used in image processing and computer vision, and show that this results in better enhancement of object contours and region boundaries in presence of texture. Since the terminology related to receptive fields is less appropriate in this more general computer vision context, throughout the paper, we refer to the mechanism inspired by non-CRF inhibition as surround suppression. Furthermore, we propose a new post-processing method based on hysteresis thresholding of the suppression slope, a measure characteristic of the context in which an edge appears (stand-alone contour vs. a texture edge).

The paper is organized as follows. Section 2 reviews gradient-based edge detection, describes two mechanisms of surround suppression, anisotropic and isotropic, and introduces the suppression slope thresholding technique. In Section 3, we use a measure defined elsewhere [19] to evaluate the performance of the proposed contour and boundary enhancement steps. The Canny edge detector augmented with these steps is compared with the traditional Canny edge detector [6] and the SUSAN operator [53]. Finally, in Section 4, we summarize the results, review similar work, and draw conclusions.

2. Surround suppression augmented operators

In the following, we propose two gradient-based contour and boundary detection operators that incorporate surround suppression. As a first step in our method, we compute a scale-dependent gradient, a technique similar to that proposed by Canny [6]. We start by reviewing scale-dependent gradient computation briefly, and then introduce two types of surround suppression and a new post-processing step.

2.1. Scale-dependent gradient computation

Gradient methods for edge detection compute the luminance gradient for each pixel of the image. When using finite differences in a very small neighborhood, the gradient is susceptible to image noise and discretization effects. In order to diminish such influences, it is customary to apply first some type of smoothing. For example, in Canny's original formulation, the input image f(x, y) is first smoothed by convolving it with a two-variate Gaussian function g_σ(x, y):

f_\sigma(x, y) = (f \ast g_\sigma)(x, y),   (1)

where

g_\sigma(x, y) = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right).   (2)

The scale-dependent gradient of f(x, y), defined as the gradient of the smoothed function f_σ(x, y),

\nabla f_\sigma(x, y) = \left( \frac{\partial f_\sigma(x, y)}{\partial x}, \frac{\partial f_\sigma(x, y)}{\partial y} \right),

¹ The concept of receptive field or, more precisely, classical receptive field (CRF) used in neurophysiology corresponds to the concept of support of the impulse response used in image processing. It is the area in which an impulse stimulus affects the firing rate of the neuron. In neurophysiological practice, the CRF of an orientation selective neuron is determined by using a bar stimulus of certain optimal size and orientation.


is then computed using finite differences. However, this method of differentiation has the drawback of being analytically ill-posed. The derivative of a mathematical distribution (in our case an image) can be obtained by convolving the distribution with the derivative of a smooth test function (e.g. a Gaussian) [49]. In agreement with this proposition and following [56], we choose to compute ∇f_σ(x, y) by evaluating the right-hand side of the following equation:

\nabla f_\sigma(x, y) = (f \ast \nabla g_\sigma)(x, y),   (3)

which has the advantage that ∇g_σ(x, y) is analytically well-defined and no finite difference computations are needed. Let ∇_x f_σ(x, y) and ∇_y f_σ(x, y) be the x- and y-components of the scale-dependent gradient, Eq. (3):

\nabla_x f_\sigma(x, y) = \left( f \ast \frac{\partial g_\sigma}{\partial x} \right)(x, y), \qquad \nabla_y f_\sigma(x, y) = \left( f \ast \frac{\partial g_\sigma}{\partial y} \right)(x, y).   (4)
The scale-dependent gradient magnitude M_σ(x, y) and orientation Θ_σ(x, y) are then given by:

M_\sigma(x, y) = \sqrt{ (\nabla_x f_\sigma(x, y))^2 + (\nabla_y f_\sigma(x, y))^2 }, \qquad \Theta_\sigma(x, y) = \arctan \frac{\nabla_y f_\sigma(x, y)}{\nabla_x f_\sigma(x, y)}.   (5)

The local maxima of the gradient magnitude M_σ(x, y) in orientation Θ_σ(x, y) are good indicators of possible edge locations in an image. The derivative of a Gaussian is an optimal step-edge detector in that it maximizes the signal-to-noise ratio in presence of Gaussian noise while maintaining good localization of the response, as first

shown by Canny [6] and further studied by Tagare and de Figueiredo [56].

2.2. Surround suppression

Next, we extend the gradient magnitude operator defined above with a term which takes into account the contextual influence of the surroundings of a given point. Let DoG_σ(x, y) be the following difference of two Gaussian functions:

\mathrm{DoG}_\sigma(x, y) = \frac{1}{2\pi(4\sigma)^2} \exp\left( -\frac{x^2 + y^2}{2(4\sigma)^2} \right) - \frac{1}{2\pi\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right).   (6)

We define a weighting function w_σ(x, y) as follows:

w_\sigma(x, y) = \frac{H(\mathrm{DoG}_\sigma(x, y))}{\lVert H(\mathrm{DoG}_\sigma) \rVert_1},   (7)

where

H(z) = \begin{cases} 0, & z < 0 \\ z, & z \geq 0 \end{cases}

and ‖·‖₁ is the L¹ norm.
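A direct transcription of Eqs. (6)–(7) into code; the kernel truncation radius is our own choice (the equations themselves extend to infinity):

```python
import numpy as np

def dog_weights(sigma, size=None):
    """Half-wave rectified, L1-normalised DoG weighting function w_sigma
    (Eqs. (6)-(7)). Kernel width defaults to about 4 outer standard deviations."""
    if size is None:
        size = int(4 * 4 * sigma) | 1      # force an odd width
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    d2 = x**2 + y**2
    dog = (np.exp(-d2 / (2.0 * (4 * sigma)**2)) / (2 * np.pi * (4 * sigma)**2)
           - np.exp(-d2 / (2.0 * sigma**2)) / (2 * np.pi * sigma**2))
    w = np.maximum(dog, 0.0)               # half-wave rectification H(.)
    return w / w.sum()                     # L1 normalisation

w = dog_weights(sigma=2.0)
```

Because the narrow Gaussian dominates near the origin, the rectified weights vanish in the centre: the function is non-zero only on an annular surround, as intended.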

We implement surround suppression by computing an inhibition term for every point of an image. This term is a weighted sum of the values of the gradient in the suppression surround of the concerned point (Fig. 3). The distance between this point and a surround point is taken into account by the weighting function w_σ. In the following subsections, we introduce operators that deploy surround suppression in two different ways: anisotropic and isotropic.

Fig. 2. Responses of two visual neurons showing anisotropic (left) and isotropic (right) inhibitory behavior, respectively (redrawn from [44], courtesy of H.C. Nothdurft, J. Gallant, D.C. van Essen, and Cambridge University Press). (a), (e) Response to a single bar of optimal size and orientation inside the CRF, delineated by a dotted rectangle. (b), (f) Decreased response is recorded when texture consisting of identical bars with the same orientation is present in the area outside the CRF. (c) For one of the cells (left) the inhibitory effect is small when the orientation of the surrounding bars is orthogonal to that of the optimal stimulus in the CRF (anisotropic surround inhibition). (g) For the other neuron (right), the inhibitory effect does not depend on the relative difference in the orientation between the surrounding bars and optimal stimulus in the CRF (isotropic surround inhibition). (d), (h) In absence of the optimal stimulus, the response of both cells is reduced to the level of spontaneous activity.


2.2.1. Anisotropic surround suppression

In the case of anisotropic suppression, the difference between the gradient orientations at the central point and at a surround point is taken into account by an additional factor. For a point (x, y) in the image with a gradient orientation Θ_σ(x, y) and a point (x − u, y − v) in the suppression surround with a gradient orientation Θ_σ(x − u, y − v), we define this factor as follows:

\Delta_{\Theta,\sigma}(x, y, x - u, y - v) = \lvert \cos( \Theta_\sigma(x, y) - \Theta_\sigma(x - u, y - v) ) \rvert.   (8)

If the gradient orientations at points (x, y) and (x − u, y − v) are identical, the weighting factor takes a maximum (Δ_{Θ,σ} = 1); the value of the factor decreases with the angle difference Θ_σ(x, y) − Θ_σ(x − u, y − v), and reaches a minimum (Δ_{Θ,σ} = 0) when the two gradient orientations are orthogonal. In this way, edges in the surround of point (x, y) which have the same orientation as an edge at point (x, y) will have a maximal inhibitory effect. The visual cell whose response to various oriented stimuli is illustrated by the left diagram in Fig. 2 exhibits this type of behavior.

For each image point (x, y) we now define an anisotropic suppression term t^A_σ(x, y) as the following weighted sum of the gradient magnitude values in the suppression surround of that point:

t^A_\sigma(x, y) = \iint_\Omega M_\sigma(x - u, y - v)\, w_\sigma(u, v)\, \lvert \cos( \Theta_\sigma(x, y) - \Theta_\sigma(x - u, y - v) ) \rvert \, du\, dv,   (9)

where Ω is the image coordinate domain. The two weighting factors (w_σ(u, v) and the cosine) take into account the distance and the gradient orientation difference, respectively. This integral can be computed efficiently by convolution, as described in detail in Appendix A.

We now introduce an operator C^A_σ(x, y) which takes as its inputs the gradient magnitude M_σ(x, y) and the suppression term t^A_σ(x, y):

C^A_\sigma(x, y) = H( M_\sigma(x, y) - \alpha\, t^A_\sigma(x, y) ),   (10)

with the half-wave rectification function H(z) defined as in Eq. (7). The factor α controls the strength of the suppression of the surround on the gradient magnitude. If there is no texture in the surroundings of a given point, the response of this operator at that point will be equal to the gradient magnitude response M_σ(x, y). An edge passing through that point will be detected by this operator in the same way as it is detected by the gradient magnitude. However, if there are many other edges of the same orientation in the surroundings, the suppression term t^A_σ(x, y) may become so strong that it cancels out the contribution of the gradient magnitude, resulting in a zero response. Defined in this way, the operator will respond to isolated lines and edges and to (texture) region boundaries, but it will not respond to groups of such stimuli that make part of texture of the same orientation, see Fig. 4(c). The response at texture boundaries is higher than the response in the interior of a texture region because the inhibition term takes smaller values at such boundaries.
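Eqs. (8)–(10) can be sketched by direct summation over the surround (the paper computes Eq. (9) efficiently via convolutions, Appendix A; here we loop, and we substitute a simple normalised annulus for w_σ so the example is self-contained):

```python
import numpy as np

def anisotropic_suppress(M, theta, w, alpha=1.0):
    """Suppression term of Eq. (9) by direct summation, then C^A of Eq. (10)."""
    r = w.shape[0] // 2
    Mp = np.pad(M, r)                      # zero outside the image domain
    Tp = np.pad(theta, r)
    t = np.zeros_like(M)
    for y in range(M.shape[0]):
        for x in range(M.shape[1]):
            pM = Mp[y:y + 2 * r + 1, x:x + 2 * r + 1]
            pT = Tp[y:y + 2 * r + 1, x:x + 2 * r + 1]
            d = np.abs(np.cos(theta[y, x] - pT))   # orientation factor, Eq. (8)
            t[y, x] = np.sum(pM * w * d)
    return np.maximum(M - alpha * t, 0.0)          # half-wave rectification H

# stand-in annular surround weights (illustrative only, not the DoG of Eq. (6))
r = 8
yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
dist = np.sqrt(xx**2 + yy**2)
w = ((dist > 2) & (dist <= r)).astype(float)
w /= w.sum()

# a uniform same-orientation texture is cancelled; an orthogonal surround is not
M = np.ones((31, 31))
C_same = anisotropic_suppress(M, np.zeros_like(M), w)
theta_orth = np.full_like(M, np.pi / 2); theta_orth[15, 15] = 0.0
C_orth = anisotropic_suppress(M, theta_orth, w)
```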

Fig. 3. The central region with radius r₁, r₁ ≈ 2σ, can be considered as the support of the scale-dependent gradient operator ∇f_σ. The suppression originates from an annular surround with inner radius r₁. The contribution of points at distances larger than r₂ = 4r₁ can be neglected, so that r₂ can be thought of as the outer radius of the suppression surround.

Fig. 4. (a) Synthetic input image. (b) The gradient magnitude operator detects all lines and edges independently of the context, i.e., the surroundings in which these lines and edges are embedded. (c) The gradient magnitude operator augmented with anisotropic surround suppression responds selectively to isolated lines and edges, to lines that are surrounded by a grating of a different orientation, and to region boundaries. Interior texture edges are suppressed. (d) The gradient magnitude operator with isotropic surround suppression responds selectively only to isolated lines and edges and also to (texture) region boundaries. Interior texture edges and lines embedded in texture of any orientation are suppressed.


2.2.2. Isotropic surround suppression

We implement isotropic surround suppression by computing a suppression term t^I_σ(x, y) that does not depend on the orientation of surround edges; only the distance to such edges is taken into account. The suppression term t^I_σ(x, y) is defined as a convolution of the gradient magnitude M_σ(x, y) with the weighting function w_σ(x, y):

t^I_\sigma(x, y) = \iint_\Omega M_\sigma(x - u, y - v)\, w_\sigma(u, v)\, du\, dv.   (11)

We introduce a second contour operator C^I_σ(x, y) which takes as its inputs the gradient magnitude and the isotropic suppression term t^I_σ(x, y):

C^I_\sigma(x, y) = H( M_\sigma(x, y) - \alpha\, t^I_\sigma(x, y) ).   (12)

As before, the factor α controls the strength of suppression exercised by the surround on the gradient magnitude. This operator responds to isolated lines and edges in the same way as the operator with anisotropic suppression, but it does not respond to groups of such stimuli of any orientation that make part of the interior of a texture region, see Fig. 4(d). At the boundary of an edge texture region with another region that is not rich in edges, this operator will respond more strongly than to the texture interior. In this way, such boundaries will be enhanced in the operator output. Boundaries of two texture regions that are defined by orientation contrast will, however, not be detected by this operator.
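Because Eq. (11) is a plain convolution, the isotropic operator is a few lines of SciPy (again with an illustrative annular stand-in for w_σ):

```python
import numpy as np
from scipy import ndimage

def isotropic_suppress(M, w, alpha=1.0):
    """t^I of Eq. (11) as a convolution, then C^I of Eq. (12)."""
    t = ndimage.convolve(M, w, mode='constant')    # Eq. (11), zero outside image
    return np.maximum(M - alpha * t, 0.0)          # Eq. (12)

# stand-in annular surround weights (illustrative only, not the DoG of Eq. (6))
r = 8
yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
dist = np.sqrt(xx**2 + yy**2)
w = ((dist > 2) & (dist <= r)).astype(float)
w /= w.sum()

M_texture = np.ones((31, 31))                         # dense edge texture interior
M_isolated = np.zeros((31, 31)); M_isolated[15, 15] = 1.0
C_texture = isotropic_suppress(M_texture, w)
C_isolated = isotropic_suppress(M_isolated, w)
```

The two toy inputs reproduce the behaviour described above: the texture interior is cancelled, the isolated response passes through unchanged.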

2.3. Binary map computation

Binary contour and boundary maps can be extracted from the surround suppressed responses C^A_σ(x, y) and C^I_σ(x, y) by non-maxima suppression and hysteresis thresholding, similar to the way this is done using the gradient [6,55]. In the following, we will use the shorthand notation C_σ(x, y) for either C^A_σ(x, y) or C^I_σ(x, y). For briefness, we will use the term contour for either an object contour or a region boundary.

2.3.1. Thinning by non-maxima suppression

Non-maxima suppression thins the areas in which C_σ(x, y) is non-zero to one-pixel wide candidate contours as follows. For each position (x, y), two responses C_σ(x′, y′) and C_σ(x″, y″) in adjacent positions (x′, y′) and (x″, y″), which are intersection points of a line passing through (x, y) in orientation Θ_σ(x, y) and a square defined by the diagonal points of an 8-neighbourhood, are computed by linear interpolation, cf. Fig. 5. If the response C_σ(x, y) at (x, y) is greater than both these values (i.e. it is a local maximum along the concerned line), it is retained; otherwise it is assigned the value zero.
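A simplified sketch of this thinning step: instead of the linear interpolation between diagonal neighbours described above, the orientation is quantised to the nearest of four directions (a common approximation in Canny implementations):

```python
import numpy as np

def nonmax_suppress(C, theta):
    """Keep a pixel only if it is a maximum along its gradient orientation.
    Orientation is quantised to 4 directions (simplification of the paper's
    interpolation scheme). Border pixels are left at zero for brevity."""
    offsets = {0: (0, 1), 1: (1, 1), 2: (1, 0), 3: (1, -1)}  # (dy, dx) per 45-deg bin
    out = np.zeros_like(C)
    q = np.round(theta / (np.pi / 4)).astype(int) % 4
    for y in range(1, C.shape[0] - 1):
        for x in range(1, C.shape[1] - 1):
            dy, dx = offsets[q[y, x]]
            if C[y, x] >= C[y + dy, x + dx] and C[y, x] >= C[y - dy, x - dx]:
                out[y, x] = C[y, x]
    return out

# a 3-pixel wide vertical ridge thins to its 1-pixel crest
C = np.zeros((9, 9)); C[:, 3] = 1.0; C[:, 4] = 2.0; C[:, 5] = 1.0
thin = nonmax_suppress(C, np.zeros_like(C))
```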

2.3.2. Hysteresis thresholding using the contour strength

Next, a binary map is computed from the candidate contour pixels by hysteresis thresholding. This process involves two threshold values t_l and t_h, t_l < t_h. Commonly, the high threshold value t_h is computed as a (1 − p)-quantile of the distribution of the response values at the candidate contour pixels, where p is the minimum fraction of candidate pixels to be retained in the contour map. Candidate contour pixels with responses higher than t_h are definitely retained in the contour map, while the ones with responses below the low threshold t_l are discarded. Candidate contour pixels with responses between t_l and t_h are retained if they can be connected to any candidate contour pixel with a response higher than t_h through a chain of other candidate contour pixels with responses larger than t_l.
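The chain condition maps naturally onto connected-component labelling: a weak component survives exactly when it contains at least one strong pixel. A sketch (the quantile-based t_h as described above; t_l = 0.5 t_h is the ratio used later in Section 3):

```python
import numpy as np
from scipy import ndimage

def hysteresis(response, p=0.1, low_ratio=0.5):
    """Hysteresis thresholding of a thinned response map. t_h is the
    (1 - p)-quantile of the non-zero candidate responses; t_l = low_ratio * t_h."""
    vals = response[response > 0]
    t_h = np.quantile(vals, 1.0 - p)
    weak = response >= low_ratio * t_h
    # 8-connected components of weak pixels; keep those touching a strong pixel
    labels, _ = ndimage.label(weak, structure=np.ones((3, 3)))
    strong_labels = np.unique(labels[response >= t_h])
    return np.isin(labels, strong_labels) & weak

response = np.zeros((6, 10))
response[2, 1], response[2, 2], response[2, 3] = 1.0, 0.6, 0.6  # chain, one strong pixel
response[4, 8] = 0.6                                            # isolated weak pixel
binary = hysteresis(response, p=0.1)
```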

2.3.3. Hysteresis thresholding using the suppression slope

An additional processing step that we present in the following further improves contour detection results. Consider the synthetic input image presented in Fig. 6(a) and two points A and B in the image. These are points in which the gradient magnitude (after the application of surround suppression) has local maxima and are, thus, potential contour points.

Note that, where positive, the surround suppressed response C_σ(x, y) depends linearly on the suppression factor α, cf. Eqs. (10) and (12). From this linear dependence, it follows that the ratio

\frac{C_\sigma(x, y)}{M_\sigma(x, y)} = H\!\left( 1 - \alpha \frac{t_\sigma(x, y)}{M_\sigma(x, y)} \right),   (13)

as a function of α, takes values on a line with a slope g(x, y), that we call the suppression slope, given by:

g(x, y) = \arctan \frac{t_\sigma(x, y)}{M_\sigma(x, y)}.   (14)

The suppression slope g(x, y) depends on the amount of texture surrounding the concerned point. For instance, the slope at the contour point A is smaller than the slope at a point in a textured area, like B, see Fig. 6(b).

If the value of the suppression slope is large at a given point, this means that the surround suppression is significant at that point. Consequently, the concerned point is considered to lie in a texture region and can be eliminated from the contour map. A threshold condition can be imposed on the value of the suppression slope g(x, y) to discriminate between contour and texture points: points at which this slope takes values that are larger than a given threshold value can be eliminated from the contour map. A large threshold value will eliminate only a small number of potential texture edges, while a small threshold value will eliminate such edges more substantially.

Although a single threshold has the advantage of simplicity, it leads in most cases to a streaking effect in the final result (discontinuous segments originating from the same contour). To reduce this effect, we apply hysteresis thresholding on the values of g(x, y). A low hysteresis threshold g_l is computed as a p(g)-quantile of the distribution of suppression slope values, where p(g) is the minimum fraction of contour pixels to be definitely retained in the final contour map. (Only pixels obtained by the first hysteresis thresholding operation are considered.) A high suppression slope threshold value g_h is selected as a multiple of g_l. In our experiments, we choose a fixed ratio g_h = 2g_l. Points with g(x, y) < g_l are labelled as contour points and retained in the final contour map; the points with g(x, y) > g_h are considered texture edge points and are eliminated. Finally, those points with g_l < g(x, y) < g_h which can be connected through a chain of other similar points to a contour point are retained, and otherwise eliminated.

To summarize, we perform the following post-processing steps on the surround suppressed response of the gradient magnitude C_σ(x, y):

(i) thinning by non-maxima suppression of C_σ(x, y);
(ii) binarization by (hysteresis) thresholding applied on the result of (i);
(iii) selection of contour pixels from the result of (ii) by (hysteresis) thresholding of the suppression slope.

This process is illustrated for a natural image in Fig. 7.

Fig. 6. (a) Points of local maxima of the surround suppressed gradient magnitude response: a point A from a contour and a point B inside a texture region. (b) At each point (x, y), the ratio C_σ(x, y)/M_σ(x, y) is a linear function of α with a slope g(x, y) that is determined by the gradient values in the surroundings of (x, y) and is different for a contour point and a texture point: the slope for the contour point A is smaller than the slope for the texture point B, g_A < g_B.

Fig. 7. (a) Original input image (512 × 512 pixels). (b) Gradient magnitude M_σ(x, y) for σ = 1.6. (c) Anisotropic and (d) isotropic surround suppressed responses for α = 1.0. (e) Binary map obtained from (b) by non-maxima suppression and hysteresis thresholding (p = 0.1) as in Canny's algorithm. (f), (g) Binary maps extracted from (c) and (d), respectively, by non-maxima suppression and hysteresis thresholding (p = 0.1) and subsequent contour pixel selection by hysteresis thresholding of the suppression slope (p(g) = 0.1).


3. Experimental results

3.1. Subjectively specified desired output

Most state-of-the-art methods for performance evaluation of edge and contour detectors use natural images (photographs) with associated desired operator output that is subjectively specified by an observer [5]. Some recent studies [50–52] show that the performance of such an operator must be considered task dependent. For object recognition, for example, some operators may perform better than others despite similar performance on synthetic images. The proposed surround suppression mechanisms aim explicitly at better detection of object contours and region boundaries in natural scenes.

We took 40 images which depict either man-made objects on textured backgrounds or animals in their natural habitat. For each image, an associated desired output binary contour map was drawn by hand². A pixel is marked as a contour pixel in the desired output if (i) it is a part of an occluding contour of an object or it belongs to a contour in the interior of an object, or if (ii) it makes part of a boundary between two (textured) regions, e.g. sky and grass or water and sky. The desired output is thus defined subjectively, similar to the way this is done for image segmentation in [37]. However, our procedure for defining the desired output is different in two aspects: (i) we obtain contour and boundary maps and not region maps; (ii) we use more explicit selection criteria. Fig. 8 presents three natural images from the evaluation database together with their corresponding desired output contour maps.

3.2. Performance measure

We use the performance measure introduced in [19], and first review this measure briefly. Let E_DO and B_DO be the set of contour and background pixels³, respectively, of the desired output contour map, and E_D and B_D be the set of contour and background pixels of the contour image generated by a given operator. The set of correctly detected contour pixels is defined as E = E_D ∩ E_DO. False negatives, i.e. desired output contour pixels missed by the operator, comprise the set E_FN = B_D ∩ E_DO. False positives, i.e. pixels for which the detector indicates the presence of a contour while they belong to the background of the desired output, define the set E_FP = E_D ∩ B_DO.

The performance measure introduced in [19] is defined as follows:

P = \frac{\mathrm{card}(E)}{\mathrm{card}(E) + \mathrm{card}(E_{FP}) + \mathrm{card}(E_{FN})},   (15)

where card(X) denotes the number of elements of set X. The performance measure P is a scalar taking values in the interval [0, 1]. If all desired output contour pixels are correctly detected and no background pixels are falsely detected as contour pixels, then P = 1. For all other cases, the performance measure takes values smaller than one, being closer to zero as more contour pixels are falsely detected and/or missed by the operator.
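Eq. (15) in code, for binary masks and with exact pixel-wise matching (the small displacement tolerance described next is omitted here):

```python
import numpy as np

def performance(detected, desired):
    """Performance measure P of Eq. (15) on boolean contour maps."""
    E = np.logical_and(detected, desired)       # correctly detected contour pixels
    E_FP = np.logical_and(detected, ~desired)   # false positives
    E_FN = np.logical_and(~detected, desired)   # false negatives
    return E.sum() / (E.sum() + E_FP.sum() + E_FN.sum())

desired = np.zeros((5, 5), dtype=bool); desired[2, 0:4] = True
detected = np.zeros((5, 5), dtype=bool); detected[2, 1:4] = True; detected[4, 4] = True
P = performance(detected, desired)   # card(E)=3, card(E_FP)=1, card(E_FN)=1
```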

Since a subjectively identified contour does not always exactly coincide with a local maximum of the gradient magnitude operator (an effect known from psychophysics), we consider that a contour pixel is correctly detected by the operator if a corresponding desired output contour pixel is present in a 7 × 7 square neighbourhood centered at the concerned pixel. In our implementation, we take a pixel from a list of contour pixels generated by the operator and look for a matching pixel (within the mentioned neighbourhood) in a list of desired output contour pixels. If such a match is found, both pixels are removed from the corresponding lists. After the whole list of contour pixels generated by the operator is processed in this way, the pixels which remain on that list are considered as false positives. Such a pixel was marked by the operator as a contour pixel while it has no counterpart contour pixel in the desired output. The pixels that remain on the desired output list after the elimination process are the false negatives: these are the positions which the operator wrongly failed to mark as contour pixels.
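The greedy list-matching procedure just described can be sketched directly (function and variable names are ours):

```python
def match_contours(detected_pts, desired_pts, radius=3):
    """Greedy matching with a (2*radius+1) x (2*radius+1) tolerance window
    (7 x 7 for radius=3). Matched pairs leave both lists; returns the
    remainders: (false positives, false negatives)."""
    remaining = list(desired_pts)
    false_pos = []
    for (y, x) in detected_pts:
        match = next((d for d in remaining
                      if abs(d[0] - y) <= radius and abs(d[1] - x) <= radius), None)
        if match is not None:
            remaining.remove(match)      # both pixels are removed from their lists
        else:
            false_pos.append((y, x))
    return false_pos, remaining

# (10, 10) matches (12, 12) within the 7x7 window; (0, 0) has no counterpart
fp, fn = match_contours([(10, 10), (0, 0)], [(12, 12), (20, 20)])
```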

3.3. Performance evaluation

We compare the performance of the surround suppression augmented operators defined above with two other operators: the traditional Canny and the SUSAN edge detector. Our choice of these detectors is motivated, for the former, by its wide acceptance, and for the latter by a recent study [52], which shows that it performs best in an object recognition task based on edge information when compared with six other operators.

The SUSAN edge detector [53] is based on nonlinear processing performed on a circular neighbourhood. Given an image pixel and a disk of a certain radius centered at that pixel, the method counts the number of pixels inside the disk that have intensity values within a certain threshold difference t from the central pixel. An edge strength is estimated by subtracting this pixel count from a fraction (usually three quarters) of the disk area. When this difference is negative, the edge strength is assumed to be 0. Edge direction is found by computing a local axis of symmetry (second order x-axis and y-axis moments) on the support of the disk. The final binary edge map is computed by thinning and binarization. For noise removal, a nonlinear smoothing operation which preserves edge location can first be applied in a given neighbourhood. We computed SUSAN edges by running the program first in the so-called smoothing mode and then applying the edge detector. In our experiments, the parameters were: d, the radius of the neighbourhood in which nonlinear smoothing is applied, called by the authors of SUSAN the distance threshold (in pixels), and the above mentioned threshold luminance difference t.

² The database of images and their desired output contour maps is available at: http://www.cs.rug.nl/~imaging.
³ The subscript GT (ground truth) was used in [19] instead of DO (desired output).
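The USAN-counting core of the edge-strength computation described above can be sketched as follows (a naive reimplementation from the description, not the SUSAN program itself; parameter names are ours):

```python
import numpy as np

def susan_edge_strength(image, radius=3, t=20.0, frac=0.75):
    """Count the pixels in a disk whose intensity lies within t of the centre
    (the 'USAN'), then subtract that count from a fraction of the disk area;
    negative differences are clipped to zero, as described in the text."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x**2 + y**2) <= radius**2
    area = disk.sum()
    H, W = image.shape
    out = np.zeros((H, W))
    for cy in range(radius, H - radius):
        for cx in range(radius, W - radius):
            patch = image[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
            usan = np.sum(disk & (np.abs(patch - image[cy, cx]) <= t))
            out[cy, cx] = max(frac * area - usan, 0.0)
    return out

# a luminance step of 100 with t=20: uniform regions give zero strength,
# pixels next to the step give a positive response
img = np.zeros((16, 16)); img[:, 8:] = 100.0
out = susan_edge_strength(img, radius=3, t=20.0)
```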

In our experiments, the Canny edge detector has two parameters: the standard deviation σ of the Gaussian derivative kernel used for gradient computation, and p, the minimum fraction of candidate edge pixels which must be retained in the final edge map, further used to compute a high threshold value t_h. We work with a low threshold value t_l = 0.5 t_h.

Finally, the proposed surround suppression augmented operators have the same parameters as Canny's operator and, additionally, a suppression factor α and a fraction p(g) of the edge pixels (after thinning and gradient strength thresholding) which are definitely considered to be contour pixels. For the additional post-processing step described in Section 2.3, we fixed the value of the parameter p(g) to p(g) = 0.10. Notice that the Canny operator can be obtained as a special case of the surround suppression augmented operators for α = 0 and p(g) = 1.

The values of various parameters were chosen as follows. For the Canny edge detector, we used 8 scales, σ_k = (√2)^k, k ∈ {−1, …, 6}. For the surround suppression operator we used 4 scales covering the same domain, sampled at even values of k, k ∈ {0, 2, 4, 6}, and 2 surround suppression factors, α ∈ {1.0, 1.2}. For both methods, we applied 5 high hysteresis threshold values on the gradient, corresponding to p ∈ {0.5, 0.4, 0.3, 0.2, 0.1}. This results in 40 parameter combinations for each of these methods.

For the SUSAN edge detector we also chose 40combinations of parameters,from eight values of the distance threshold d k twice as big as the values of s k used in Canny’s case,d k ?2s k ;k [{–1…6};and ?ve threshold luminance difference values t [{5;10;15;20;25}:The distance threshold values d k lead to a comparable spatial support (in pixels)of the two types of operator.

We evaluated the performance of the operators as formulated in Eq. (15). For each image, we computed a binary contour map for each combination of parameters and calculated the corresponding performance value by comparing this contour map with the subjectively defined desired output. In this way, a set of 40 performance values was obtained for each input image and each operator.
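The evaluation loop just described can be sketched as follows. Here `performance` stands in for the measure of Eq. (15), which is defined earlier in the paper, and `detector` for any of the operators under test; both are placeholders supplied by the caller, not part of the paper:

```python
# Sketch of the evaluation protocol. For each image and each parameter
# combination, the detector's binary contour map is compared with the
# subjectively defined desired output.

def evaluate(images, desired, param_grid, detector, performance):
    """Return, per image, the list of performance values, one per
    parameter combination (e.g. 40 values per image)."""
    results = {}
    for name, img in images.items():
        scores = []
        for params in param_grid:
            contour_map = detector(img, *params)   # binary contour map
            scores.append(performance(contour_map, desired[name]))
        results[name] = scores
    return results
```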

Fig. 9 shows examples of the best contour maps (i.e. the maps with the largest value of the performance) obtained at small, medium, and large scales for all possible values of the post-processing parameters. At small scales (k ∈ {−1, 0}) that, for these particular images, correspond to the high spatial frequencies present in the texture background, the best contour maps produced by the two suppression augmented operators consist mainly of the real object contours and region boundaries (see Fig. 9, left column). In contrast, the Canny and SUSAN operators produce output that is rich in texture edges. At medium scales (k ∈ {1, 2, 3, 4}), this difference is less pronounced (Fig. 9, middle column), and at large scales (k = 5, 6) the outputs of the suppression augmented operators and the Canny operator are very similar (Fig. 9, right column). As can be seen from Fig. 9, the suppression augmented operators give comparable output across all scales, while the Canny and SUSAN operators are vulnerable to texture at scales for which operator support and texture details are in a certain size agreement.

Fig. 8. Natural images (first row) and their associated desired output contour maps (second row).


This behavior is also revealed by a closer analysis of the performance values at different scales. Fig. 10 shows the performance values as statistical box-and-whisker plots computed for small scales (top part), large scales (middle part), and all scales used in our experiment (bottom part). For each method, the best performance value is represented by the top end of the corresponding whisker. Indeed, at small scales, the isotropic and anisotropic surround suppression augmented operators substantially outperform the Canny and SUSAN edge detectors. The same conclusion does not hold for large scales, mainly because the support of the Gaussian function used in the Canny detector or the area of smoothing used by the SUSAN detector is large enough to average out, and thus eliminate, the high-frequency edges originating from texture areas.

Fig. 9. A natural input image and its desired output contour map (first row). Best binary contour maps obtained for the Canny edge detector (second row), the SUSAN edge detector (third row), and the anisotropic and isotropic surround suppression augmented operators (fourth and fifth row, respectively). The best contour map is the one that results in the best performance value over all combinations of post-processing parameters at small scales (k ∈ {−1, 0}; left column), medium scales (k ∈ {1, 2, 3, 4}; middle column), and large scales (k ∈ {5, 6}; right column).

Over all scales, however, the median values obtained for isotropic and anisotropic surround suppression are larger than the ones delivered by the Canny and SUSAN detectors. Thus, in circumstances in which no information regarding the best set of parameters is available, by choosing a random set of parameter values, there is a higher probability that the results delivered by the surround suppression operators will be better than those obtained by Canny and SUSAN.

An interesting case is the synthetic image presented in Fig. 11(a). Our perceptual interpretation of the image, two lines superimposed on a grating of parallel lines of a different orientation, is only mimicked by the anisotropic suppression operator, Fig. 11(e). The traditional Canny operator, SUSAN, and the isotropic suppression operator, Fig. 11(c), (d) and (f), respectively, do not deliver results that match human perception.

4. Summary and discussion

4.1. Summary

We have shown how a biologically motivated processing step, called surround suppression, can be added to a traditional gradient-based edge detector to achieve better contour detection in natural images. The model of surround suppression we use is simple and straightforward: the response of an edge detection operator in a given point is suppressed by a weighted sum of the responses of the same operator in an annular neighborhood of that point. In this way, the proposed additional step acts as a feature contrast computation, with edges being the features involved. This step contributes to better contour detection not by responding more strongly to contours as compared with a traditional non-contextual edge detector, but rather by suppressing texture edges. The result of this suppression of texture edges is better contour visibility in the operator output. We considered two types of suppression, isotropic and anisotropic, and showed that they give comparable results on natural images. Certain perceptual effects related to orientation contrast can, however, be explained only by the anisotropic suppression mechanism. Furthermore, we introduced a new post-processing step, which we call hysteresis thresholding of the suppression slope, aimed at further elimination of the operator response to edges which originate from textured regions.
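A minimal sketch of this suppression scheme is given below. It assumes a half-wave rectified difference-of-Gaussians ring as the surround weight; the kernel size and the 4σ surround width are illustrative choices of ours, not prescriptions from the paper:

```python
import numpy as np

def dog_annulus(sigma, size=9):
    """Half-wave rectified difference of Gaussians: zero in the centre,
    positive in a ring around it (the annular surround)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    def g(s):
        return np.exp(-(x ** 2 + y ** 2) / (2.0 * s ** 2)) / (2.0 * np.pi * s ** 2)
    w = np.maximum(g(4.0 * sigma) - g(sigma), 0.0)  # keep only the positive ring
    return w / w.sum()                              # unit total weight

def convolve_same(img, k):
    """Plain 'same'-size 2-D convolution (slow loops, fine for a demo)."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    kf = k[::-1, ::-1]
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * kf)
    return out

def surround_suppress(response, sigma=1.0, alpha=1.0):
    """Each response is reduced by an alpha-weighted average of the
    responses in its annular surround; negative values are clipped."""
    t = convolve_same(response, dog_annulus(sigma))
    return np.maximum(response - alpha * t, 0.0)

# An isolated peak survives (its surround is empty), while a uniform
# "texture" of equally strong responses is suppressed almost completely.
isolated = np.zeros((21, 21)); isolated[10, 10] = 1.0
print(surround_suppress(isolated)[10, 10])           # close to 1
print(surround_suppress(np.ones((21, 21)))[10, 10])  # close to 0
```

The contrast between the two printed values is the feature-contrast effect described above: responses that stand alone are preserved, responses embedded in a field of similar responses are cancelled.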

Our experiments with a large set of natural images show that for images rich in texture background, surround suppression effectively separates contours from texture. This is important at scales for which the spatial support of the deployed edge detection operator is comparable to some characteristic size of the texture available in the input image. At such scales, a non-contextual edge detector, such as the traditional Canny operator or SUSAN, generates strong responses to the texture regions. Object contours can be difficult to identify in the output of such an operator. In contrast, an edge detector that is augmented with the proposed surround suppression step does not respond strongly to texture edges while it responds to object contours. Consequently, the proposed suppression augmented operators considerably outperform non-contextual edge detectors in terms of a performance measure that favors the detection of contours only. Specifically, we showed that for a broad range of scales, the proposed surround suppression operators perform better than the Canny and SUSAN edge detectors. The performance difference is particularly large at scales for which the latter operators respond strongly to texture available in an input image.

Fig. 10. Box-and-whisker plots of the performance of the Canny edge detector (denoted by C), the SUSAN edge detector (denoted by S), and the anisotropic (denoted by A) and isotropic (denoted by I) surround suppression augmented operators for some of the test images. Each box is a concise representation of essential features of the statistical distribution of performance values obtained for a given operator, a given input image, and all possible parameter combinations. The plots display separately the values of the performance for small scales (k ∈ {−1, 0}; top), large scales (k ∈ {5, 6}; middle), and across all values of the scales (k ∈ {−1, …, 6}; bottom).

4.2. Related work

A distinction between different types of luminance transitions, such as texture edges on the one hand vs. edges that arise from surface discontinuities and occluding boundaries on the other, was proposed as early as 1982 [48]. The authors of that work formulated a method to select only some of the zero-crossings obtained by two difference-of-Gaussians (DoG) filters, one with a high-bandpass and the other with a low-bandpass characteristic. The method is based on the observation that texture edges induce a strong response only in the high-bandpass filter. Only luminance changes, such as a step edge, that induce strong responses in both filters are retained. This method, however, has the drawback that together with texture edges it removes the contours of small objects and lines that are narrow (compared to the support of the low-pass filter). Since the deployed DoG filters involve no orientation dependence, this method will furthermore fail to detect region boundaries defined by orientation contrast.

In [10] it is proposed to make a distinction between different types of edges: dust (short isolated line segments), (isolated) curves, flow (dense edge patterns with locally parallel orientation) and turbulence (dense edge patterns with locally random orientation). Two local complexity measures, normal and tangential complexity (essentially the densities of edges in the normal and tangential orientation of a given edge), are proposed and deployed for classification of edges into one of these four categories. The authors of that work succeed in explaining certain perceptual effects, and from their curve map illustrations one can infer that their method can be used for contour enhancement. However, since the goal of that work seems to be quite different, no quantitative analysis and algorithm evaluation for contour detection was made. Furthermore, the referred method is quite complex and it is not clear how crucial parameter selection (e.g. for tangent statistics extraction) is for success.

A very comprehensive model of intracortical interactions in area V1 of the visual cortex was proposed in [31]. Next to inhibition, it incorporates enhancement and dynamical aspects. That model takes into account most of the knowledge available in psychophysics and physiology and is able to explain a number of effects known in these areas. This was also the very purpose of that work, which also includes a very good discussion of previous studies in that direction [7,20,21]. In contrast, the current article is not focused on unveiling the biological role of intracortical interactions; we addressed this problem elsewhere [19,46,47]. In this paper we propose a simple algorithmic step that can be added to almost any edge detector in order to achieve improved contour detection. In the context of this study, obtaining a practical computer vision algorithm is an aspect that is considered more important than the original biological motivation. Therefore a performance comparison of the two approaches is not appropriate. Instead, we only point out some essential differences between the two methods. Since our approach has no time dimension, it is computationally less demanding: we compute the result in a single step instead of multiple steps that correspond to a sequence of time steps. Similarly, taking into account enhancement, as is done in [31], implies considerable additional computational effort in each step that would improve contour detection results only incrementally. Finally, only anisotropic inhibition is taken into account in [31].

Fig. 11. (a) A synthetic input image (after [14]). (b) Associated desired contour output that agrees with common contour perception. Responses of the (c) Canny and (d) SUSAN edge detectors, and the (e) anisotropic and (f) isotropic surround suppression augmented operators. Only the anisotropic suppression operator (e) mimics common contour perception (b).

Other contextual edge detection techniques based on suppression have been previously proposed within the framework of anisotropic diffusion [4,42,45]. For instance, in [45] a locally adapted smoothing factor controls the amount of suppression applied to the gradient map computed at a given scale. Smoothing is more pronounced at image locations where the gradient magnitude is small, favoring the high-contrast edges over the low-contrast ones. In these approaches, however, suppression has no effect on nearby edges which have equally strong gradients. When applied to images such as the one shown in Fig. 4(a), for instance, they will not suppress the lines which are part of the gratings. Consequently, anisotropic diffusion seems more suitable for edge enhancement regardless of the underlying perceptual context (texture vs. contours). Our technique is particularly intended for texture suppression and better contour detection.

Many methods of comparing edge detection algorithms have been proposed in the literature, often deploying a multitude of different evaluation criteria [5,23,50–52]. We used a single comparison method because the inclusion of additional evaluation criteria would, in a way, bring the study out of focus. The additional suppression and post-processing steps we propose are aimed at eliminating texture edges, so that object contours can pop out. Consequently, the performance measure we use is conceived to quantify the improvement in this specific respect. The proposed steps are not intended to improve (or worsen) any of the other properties of edge detectors, e.g. edge localization, which is often used for comparison in the edge detection literature [5]. The localization properties of our contour detectors are, in fact, very similar to those of Canny's edge detector.

4.3. Discussion and conclusions

Normally, edge and contour detection are considered to be intermediate operations: the results they provide are used as input to further processing operations aimed at the completion of some more complex task such as object identification. It is of interest how the proposed suppression and post-processing steps would affect the ultimate result. As the proposed steps will eliminate texture edges, it is evidently not appropriate to deploy them in tasks in which texture edges are essential, e.g. texture classification or region-based segmentation. In other tasks, such as shape-based object identification, the proposed suppression and post-processing steps can make a very important contribution to the quality of the final result. This is achieved through the simplification effect that these steps have on edge maps (see left column of images in Fig. 9). Clean object contour maps, free of texture edges, are of primary importance for shape-based object recognition methods that rely on contour information. Typically, in such methods some local descriptor is computed for each contour point, determined by the geometrical arrangement of other contour points in the neighborhood [1,3,15,18]. The local descriptors of the contour points of a reference object are compared with the local descriptors of a test object in order to establish point correspondences. Subsequently, a measure of similarity between the two objects is computed and a decision is taken whether they belong to the same category. Texture edges in the background of a test object change the local descriptors to such an extent that no correspondences can be found to the contour points of an identical reference object. Consequently, texture edges in the background have a devastating effect on such shape recognition methods.

In conclusion, surround suppression can be incorporated as an additional processing step not only in the Canny operator, but also in virtually any edge detection operator that relies on some form of enhancement of luminance transitions based on feature extraction using spatially limited support. The suppression step may be expected to improve contour detection performance in images that contain objects of interest on a cluttered or textured background.

Acknowledgements

We thank the three anonymous reviewers for their comments and suggestions.

Appendix A

In the following we present a method for the efficient computation of the anisotropic suppression term $t^A_\sigma(x,y)$ introduced in Eq. (9). For this purpose we rewrite Eq. (9) as follows:

$$
\begin{aligned}
t^A_\sigma(x,y) &= \iint_\Omega M_\sigma(x-u,\,y-v)\,w_\sigma(u,v)\,\bigl|\cos\bigl(\Theta_\sigma(x,y)-\Theta_\sigma(x-u,\,y-v)\bigr)\bigr|\,du\,dv \\
&= \iint_\Omega M_\sigma(x-u,\,y-v)\,w_\sigma(u,v)\,\cos\bigl(\Theta_\sigma(x,y)-\Theta_\sigma(x-u,\,y-v)\bigr)\,du\,dv \\
&= \iint_\Omega M_\sigma(x-u,\,y-v)\,w_\sigma(u,v)\,\cos\Theta_\sigma(x,y)\,\cos\Theta_\sigma(x-u,\,y-v)\,du\,dv \\
&\quad+ \iint_\Omega M_\sigma(x-u,\,y-v)\,w_\sigma(u,v)\,\sin\Theta_\sigma(x,y)\,\sin\Theta_\sigma(x-u,\,y-v)\,du\,dv \\
&= \cos\Theta_\sigma(x,y)\iint_\Omega M_\sigma(x-u,\,y-v)\,\cos\Theta_\sigma(x-u,\,y-v)\,w_\sigma(u,v)\,du\,dv \\
&\quad+ \sin\Theta_\sigma(x,y)\iint_\Omega M_\sigma(x-u,\,y-v)\,\sin\Theta_\sigma(x-u,\,y-v)\,w_\sigma(u,v)\,du\,dv
\end{aligned}
\tag{A1}
$$

Note that:

$$
\nabla_x f_\sigma(x,y) = M_\sigma(x,y)\cos\Theta_\sigma(x,y), \qquad
\nabla_y f_\sigma(x,y) = M_\sigma(x,y)\sin\Theta_\sigma(x,y)
\tag{A2}
$$

Substituting Eq. (A2) in (A1), we further obtain:

$$
\begin{aligned}
t^A_\sigma(x,y) &= \cos\Theta_\sigma(x,y)\iint_\Omega \nabla_x f_\sigma(x-u,\,y-v)\,w_\sigma(u,v)\,du\,dv \\
&\quad+ \sin\Theta_\sigma(x,y)\iint_\Omega \nabla_y f_\sigma(x-u,\,y-v)\,w_\sigma(u,v)\,du\,dv \\
&= \cos\Theta_\sigma(x,y)\,(\nabla_x f_\sigma * w_\sigma)(x,y) + \sin\Theta_\sigma(x,y)\,(\nabla_y f_\sigma * w_\sigma)(x,y)
\end{aligned}
\tag{A3}
$$

and the right-hand side of this relation can be evaluated efficiently using convolutions.
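The convolution form of Eq. (A3) can be checked numerically: on a discrete grid, the two-convolution expression must agree with the direct weighted sums of M·cosΘ and M·sinΘ, because the gradient components satisfy Eq. (A2) by construction. The sketch below uses an arbitrary uniform surround weight `w` (the choice of weight does not affect the identity):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((16, 16))

# Gradient of f, its magnitude M and orientation Theta.
gy, gx = np.gradient(f)
M = np.hypot(gx, gy)
Theta = np.arctan2(gy, gx)

# Any surround weight works for the identity; use a small uniform one.
w = np.ones((5, 5)) / 25.0

def convolve_same(img, k):
    """Plain 'same'-size 2-D convolution (slow loops, fine for a demo)."""
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros(img.shape)
    kf = k[::-1, ::-1]
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * kf)
    return out

# Last line of Eq. (A1): weighted sums of M*cos(Theta) and M*sin(Theta),
# combined with cos/sin of the orientation at the centre pixel.
t_direct = (np.cos(Theta) * convolve_same(M * np.cos(Theta), w)
            + np.sin(Theta) * convolve_same(M * np.sin(Theta), w))

# Eq. (A3): the same quantity from two convolutions of the gradient
# components, since grad_x f = M cos(Theta) and grad_y f = M sin(Theta).
t_conv = (np.cos(Theta) * convolve_same(gx, w)
          + np.sin(Theta) * convolve_same(gy, w))

print(np.allclose(t_direct, t_conv))  # True
```

In practice the two convolutions in Eq. (A3) are computed once per image with a fast (e.g. FFT-based) routine, which is what makes this form efficient.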

References

[1] Y. Amit, D. Geman, K. Wilder, Joint induction of shape features and tree classifiers, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (11) (1997) 1300–1305.
[2] S. Ando, Image field categorization and edge/corner detection from gradient covariance, IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (2) (2000) 179–190.
[3] S. Belongie, J. Malik, J. Puzicha, Shape matching and object recognition using shape contexts, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (4) (2002) 509–522.
[4] M.J. Black, G. Sapiro, D. Marimont, D. Heeger, Robust anisotropic diffusion, IEEE Transactions on Image Processing 7 (3) (1998) 421–432.
[5] K.W. Bowyer, C. Kranenburg, A. Dougherty, Edge detector evaluation using empirical ROC curves, Computer Vision and Image Understanding 84 (2001) 77–103.
[6] J.F. Canny, A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence 8 (6) (1986) 679–698.
[7] G. Carpenter, S. Grossberg, A massively parallel architecture for a self-organizing neural pattern recognition machine, Computer Vision and Graphical Image Processing 37 (1987) 54–115.
[8] G. Chen, Y.H.H. Yang, Edge detection by regularized cubic B-spline fitting, IEEE Transactions on Systems, Man, and Cybernetics 25 (4) (1995) 635–642.
[9] Y. Chen, C.A.Z. Barcelos, B.A. Mair, Smoothing and edge detection by time-varying coupled nonlinear diffusion equations, Computer Vision and Image Understanding 82 (2) (2001) 85–100.
[10] B. Dubuc, S.W. Zucker, Complexity, confusion and perceptual grouping. Part II: mapping complexity, International Journal on Computer Vision 42 (1/2) (2001) 83–105.
[11] D.J. Field, A. Hayes, R.F. Hess, Contour integration by the human visual system: evidence for a local association field, Vision Research 33 (2) (1993) 173–193.
[12] T. Folsom, R. Pinter, Primitive features by steering, quadrature and scale, IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (11) (1998) 1161–1173.
[13] W. Frei, C. Chen, Fast boundary detection: a generalization and a new algorithm, IEEE Transactions on Computers 26 (2) (1977) 988–998.
[14] A. Galli, A. Zama, Untersuchungen über die Wahrnehmung ebener geometrischer Figuren, die ganz oder teilweise von anderen geometrischen Figuren verdeckt sind, Zeitschrift für Psychologie 31 (1931) 308–348.
[15] D.M. Gavrila, Multi-feature hierarchical template matching using distance transforms, IEEE International Conference on Pattern Recognition (ICPR'98), Brisbane, Australia (1998) 439–444.
[16] S. Ghosal, R. Mehrotra, Detection of composite edges, IEEE Transactions on Image Processing 3 (1) (1994) 14–25.
[17] P.H. Gregson, Using angular dispersion of gradient direction for detecting edge ribbons, IEEE Transactions on Pattern Analysis and Machine Intelligence 15 (7) (1993) 682–696.
[18] C. Grigorescu, N. Petkov, Distance sets for shape filters and shape recognition, IEEE Transactions on Image Processing 12 (10) (2003) 1274–1286.
[19] C. Grigorescu, N. Petkov, M.A. Westenberg, Contour detection based on nonclassical receptive field inhibition, IEEE Transactions on Image Processing 12 (7) (2003) 729–739.
[20] S. Grossberg, E. Mingolla, Neural dynamics of perceptual grouping: textures, boundaries, and emergent segmentation, Perception & Psychophysics 38 (1985) 141–171.
[21] S. Grossberg, E. Mingolla, W. Ross, Visual brain and visual perception: how does the cortex do perceptual grouping?, Trends in Neuroscience 20 (1997) 106–111.
[22] R.M. Haralick, Digital step edges from zero-crossings of second directional derivatives, IEEE Transactions on Pattern Analysis and Machine Intelligence 6 (1) (1984) 58–68.
[23] M. Heath, S. Sarkar, T. Sanocki, K.W. Bowyer, A robust visual method for assessing the relative performance of edge-detection algorithms, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (12) (1997) 1338–1359.
[24] F. Heitger, Feature detection using suppression and enhancement, Technical Report TR-163, Communication Technology Laboratory, Swiss Federal Institute of Technology, 1995.
[25] E.C. Hildreth, The detection of intensity changes by computer and biological vision systems, Computer Vision, Graphics and Image Processing 22 (1983) 1–27.
[26] H.E. Jones, K.L. Grieve, W. Wang, A.M. Sillito, Surround suppression in primate V1, Journal of Neurophysiology 86 (10) (2001) 2011–2028.
[27] G. Kanizsa, Organization in Vision: Essays on Gestalt Perception, Praeger, New York, 1979.
[28] M.K. Kapadia, G. Westheimer, C.D. Gilbert, Spatial distribution of contextual interactions in primary visual cortex and in visual perception, Journal of Neurophysiology 84 (4) (2000) 2048–2062.
[29] J.J. Knierim, D.C. van Essen, Neuronal responses to static texture patterns in area V1 of the alert macaque monkey, Journal of Neurophysiology 67 (1992) 961–980.
[30] P. Kovesi, Image features from phase congruency, Videre: Journal of Computer Vision Research 1 (3) (1999) 2–27.
[31] Z. Li, Visual segmentation by contextual influences via intra-cortical interactions in the primary visual cortex, Network: Computation in Neural Systems 10 (1999) 187–212.
[32] W.-Y. Ma, B.S. Manjunath, EdgeFlow: a technique for boundary detection and image segmentation, IEEE Transactions on Image Processing 9 (8) (2000) 1375–1388.
[33] J. Malik, S. Belongie, T. Leung, J. Shi, Contour and texture analysis for image segmentation, International Journal on Computer Vision 43 (1) (2001) 7–27.
[34] B.S. Manjunath, R. Chellappa, A unified approach to boundary perception: edges, textures, and illusory contours, IEEE Transactions on Neural Networks 4 (1) (1993) 96–107.
[35] D. Marr, E.C. Hildreth, Theory of edge detection, Proceedings of the Royal Society of London B 207 (1980) 187–217.
[36] J.-B. Martens, Local orientation analysis in images by means of the Hermite transform, IEEE Transactions on Image Processing 6 (8) (1997) 1103–1116.
[37] D. Martin, C. Fowlkes, D. Tal, J. Malik, A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics, Proceedings of the International Conference on Computer Vision (2001) 416–423.
[38] P. Meer, B. Georgescu, Edge detection with embedded confidence, IEEE Transactions on Pattern Analysis and Machine Intelligence 23 (12) (2001) 1351–1365.
[39] M.C. Morrone, R.A. Owens, Feature detection from local energy, Pattern Recognition Letters 6 (1987) 303–313.
[40] V.S. Nalwa, T.O. Binford, On detecting edges, IEEE Transactions on Pattern Analysis and Machine Intelligence 8 (6) (1986) 699–714.
[41] R. Nevatia, K. Babu, Linear feature extraction and description, Computer Graphics and Image Processing 13 (1980) 257–269.
[42] M. Nitzberg, T. Shiota, Nonlinear image filtering with edge and corner enhancement, IEEE Transactions on Pattern Analysis and Machine Intelligence 14 (8) (1992) 826–833.
[43] H.C. Nothdurft, Texture segmentation and pop-out from orientation contrast, Vision Research 31 (1991) 1073–1078.
[44] H.C. Nothdurft, J. Gallant, D.C. van Essen, Response modulation by texture surround in primate area V1: correlates of "popout" under anesthesia, Visual Neuroscience 16 (1) (1999) 15–34.
[45] P. Perona, J. Malik, Scale-space and edge detection using anisotropic diffusion, IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (7) (1990) 629–639.
[46] N. Petkov, P. Kruizinga, Computational models of visual neurons specialised in the detection of periodic and aperiodic oriented visual stimuli: bar and grating cells, Biological Cybernetics 76 (2) (1997) 83–96.
[47] N. Petkov, M.A. Westenberg, Suppression of contour perception by band-limited noise and its relation to non-classical receptive field inhibition, Biological Cybernetics 88 (2003) 236–246.
[48] W. Richards, H.K. Nishihara, B. Dawson, CARTOON: a biologically motivated edge detection algorithm, MIT A.I. Memo No. 668; see also W. Richards (Ed.), Natural Computation, MIT Press, 1988, pp. 55–69, Chapter 4.
[49] L. Schwartz, Théorie des Distributions, Vol. I, II of Actualités scientifiques et industrielles, L'Institut de Mathématique de l'Université de Strasbourg, 1950–51.
[50] M.C. Shin, D. Goldgof, K.W. Bowyer, An objective comparison methodology of edge detection algorithms for structure from motion task, in: Empirical Evaluation Techniques in Computer Vision, IEEE CS Press, 1998, pp. 235–254.
[51] M.C. Shin, D.B. Goldgof, K.W. Bowyer, Comparison of edge detectors using an object recognition task, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, Fort Collins, CO, 1999, pp. 360–365.
[52] M.C. Shin, D.B. Goldgof, K.W. Bowyer, Comparison of edge detectors using an object recognition task, Computer Vision and Image Understanding 84 (1) (2001) 160–178.
[53] S.M. Smith, J.M. Brady, SUSAN – a new approach to low level image processing, International Journal on Computer Vision 23 (1) (1997) 45–78.
[54] J.A. Solomon, D.G. Pelli, The visual filter mediating letter identification, Nature 369 (1994) 395–397.
[55] M. Sonka, V. Hlavac, R. Boyle, Image Processing, Analysis, and Machine Vision, Brooks/Cole Publishing Company, Pacific Grove, CA, 1999.
[56] H. Tagare, R.J.P. de Figueiredo, On the localization performance measure and optimal edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (12) (1990) 1186–1190.
[57] J. Weickert, A review of nonlinear diffusion filtering, in: Scale-space Theory in Computer Vision, volume 1252 of Lecture Notes in Computer Science, 1997, pp. 3–28.
[58] D. Ziou, S. Tabbone, Edge detection techniques – an overview, International Journal on Pattern Recognition and Image Analysis 8 (4) (1998) 537–559.
[59] O.A. Zuniga, R.M. Haralick, Integrated directional derivative gradient operator, IEEE Transactions on Systems, Man, and Cybernetics 17 (3) (1987) 508–517.

