Current Biology 17, 26–31, January 9, 2007 © 2007 Elsevier Ltd. All rights reserved. DOI 10.1016/j.cub.2006.10.050

Report

Interference with Bottom-Up Feature Detection by Higher-Level Object Recognition

Li Zhaoping1,* and Nathalie Guyader1

1Department of Psychology
University College London
London WC1E 6BT
United Kingdom

Summary

Drawing portraits upside down is a trick that allows novice artists to reproduce lower-level image features, e.g., contours, while reducing interference from higher-level face cognition. Limiting the available processing time to suffice for lower- but not higher-level operations is a more general way of reducing interference. We elucidate this interference in a novel visual-search task to find a target among distractors. The target had a unique lower-level orientation feature but was identical to distractors in its higher-level object shape. Through bottom-up processes, the unique feature attracted gaze to the target [1–3]. Subsequently, recognizing the attended object as identically shaped as the distractors, viewpoint-invariant object recognition [4, 5] interfered. Consequently, gaze often abandoned the target to search elsewhere. If the search stimulus was extinguished at time T after the gaze arrived at the target, reports of target location were more accurate for shorter (T < 500 ms) presentations. This object-to-feature interference, though perhaps unexpected, could underlie common phenomena such as the visual-search asymmetry that finding a familiar letter N among its mirror images is more difficult than the converse [6]. Our results should enable additional examination of known phenomena and interactions between different levels of visual processes.

Results and Discussion

Among the 45° left-tilted bars in Figure 1, a uniquely right-tilted bar, 45° or 20° from the vertical in conditions A_simple or B_simple, pops out. However, superposing a horizontal or vertical bar on each original bar makes the uniquely tilted bar much harder to find in condition A than condition B of Figure 1. The target object in condition A but not B is a rotated and sometimes also a mirror-reversed version of all distractor objects, easily confused with the distractors because object recognition is typically rotationally or viewpoint invariant. We suggest that the higher-level perception of the object comprising the two intersecting bars interferes with the task of locating it based on its unique lower-level orientation feature component.
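For concreteness, here is a minimal sketch (ours, not from the paper) of the bar orientations making up one distractor and one target object in each condition, as detailed further in the Figure 1 caption and experiment I below. Angles are in degrees clockwise from vertical, and the target's oblique bar is taken to be right-tilted, one of the two possibilities used in the experiments:

```python
import random

def distractor(condition):
    """Bar orientations (deg clockwise from vertical) of one distractor."""
    bars = [-45.0]                               # all distractor obliques tilt left
    if condition in ("A", "B"):                  # superposed task-irrelevant bar
        bars.append(random.choice([0.0, 90.0]))  # vertical or horizontal
    return bars

def target(condition):
    """Bar orientations of the target; here its oblique bar tilts right."""
    if condition == "A_simple":
        return [45.0]                            # 45 deg from vertical
    if condition == "B_simple":
        return [20.0]                            # 20 deg from vertical
    irrelevant = random.choice([0.0, 90.0])      # random horizontal or vertical
    angle = 45.0 if condition == "A" else 20.0   # angle between the two bars
    oblique = irrelevant + angle if irrelevant == 0.0 else irrelevant - angle
    return [oblique, irrelevant]
```

With the 45° angle of condition A, the target's two bars are congruent with every distractor's two bars up to rotation (and sometimes mirror reversal); the 20° angle of condition B yields a shape unique in the display.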

Primitive features, like the orientations of small bars, of visual inputs are first extracted by the primary visual cortex (V1) [7]. Then these features are combined into objects, e.g., composed of two intersecting bars [8, 9], by higher cortical areas, including the inferotemporal (IT) cortex, whose neurons are selective to object shapes [10–15]. V1 is not only a way station; its activities also highlight salient items because of its sensitivity to unique low-level features such as orientation [16–18]. In addition to driving the higher visual areas such as V4, which combines bottom-up and top-down factors [19–21], V1's saliency signal also evokes cognitive decisions by driving the superior colliculus, which controls saccades [3]. Behaviorally and preattentively, unique image features such as orientation and color can pop out [1], and an object's basic features such as "vertical" and "red," but not its overall shape, can be obtained [22]. Meanwhile, an important characteristic of the progression from feature to object processing is making object recognition viewpoint independent [23] and thus achieving object invariance. Some IT neurons are indeed insensitive to viewpoint [10–12]. IT activities also correlate with the planning of saccades [24]. There is thus a hierarchy of levels of cognition and their consequent decisions and actions. Behaviorally, attentive exposure to an object's image can prime its subsequent recognition regardless of viewpoint but can prime its recognition only in the same view if the exposure was unattended [4].

The observations above suggest the following relevant processing stages: (1) an early preattentive stage that processes image features, e.g., orientations of object components, and makes unique features salient [1]; and (2) a later, attentive [4] stage that creates a viewpoint-invariant object representation [1, 5], e.g., a shape from two intersecting bars. For locating a target possessing a uniquely oriented bar in the display, the early stage suffices because the salient unique orientation can attract gaze. The later, attentive object-processing stage is commonly expected to facilitate processing of the components of the objects through top-down feedback [25]. However, when differently oriented but otherwise identical distractors are present, as in condition A but not B of Figure 1, viewpoint-invariant object recognition could make search harder. If so, briefer stimulus viewings (within a time window), preventing invariant object recognition, should improve target localization in condition A but not B. We show exactly this below.

In experiment I, subjects searched among 660 objects, in a display extending 46° × 34° of visual angle, for the object with the uniquely tilted oblique bar. The search stimulus was of conditions A_simple, B_simple, A, or B (Figure 1) or control conditions. The nonoblique, task-irrelevant bar in the target of condition A or B was randomly either horizontal or vertical (the task-relevant bar in condition B was always 20° from this irrelevant bar). The subjects were a priori informed about the uniquely oriented target bar and that this unique orientation could be randomly tilted to the left or right in each trial. They were asked to press a left or right button quickly to indicate whether the target was in the left or right half of the display. Their eye positions were tracked.

*Correspondence: z.li@ucl.ac.uk

Figure 2 shows that reaction times (RTs) for the subject's first gaze arrival at the target, RT_eye values, were comparable in conditions A and B. This is unsurprising because the target in both conditions had the uniquely oriented bar. This bar is salient preattentively [1, 2], attracting both attention and gaze, the latter because of the mandatory link between the directions of attention and gaze in free viewing [26]. These RT_eye values were longer than those in conditions A_simple and B_simple mainly because the nonuniform orientations of the task-irrelevant (horizontal and vertical) bars reduced the target's saliency [27]. However, the RTs for reporting the target location by button press, RT_hand values, were typically 1–2 s longer in condition A than in B, even though A and B had comparable button-response accuracies. In condition A, after gaze first reached the target, it often dawdled around the target before the button press or even abandoned the target to search elsewhere before returning to it prior to the button press (Figure 2A). Such arrive-abandon-return (AAR) scan paths were much rarer in condition B. Even for the non-AAR trials, the eye-to-hand latency RT_hand − RT_eye was much longer in condition A than in B. These observations are consistent with the hypothesis that decision processes vetoed the first guess by the feature detectors in condition A because the attended object was recognized as having the same shape as the distractors, i.e., invariant object recognition could be interfering.

Figure 1. Small Portions of Visual-Search Displays
The target possesses the uniquely left- or right-tilted (as in these examples) bar in the entire display. In conditions A_simple and B_simple (top), all bars were 45° from vertical, except the target bar in B_simple, which was 20° from vertical (in this example) or horizontal. Conditions A and B (bottom), derived from A_simple and B_simple, differed only in the angle, 45° and 20°, respectively, between the two bars in the target. Task-irrelevant, horizontal and vertical, bars made the orientation singleton much harder to find in condition A than in condition B.

Figure 2. Hand and Gaze Responses in Experiment I
(A and B) Examples of gaze scan paths. The one in (A) is for an arrive-abandon-return (AAR) trial. Asterisks and open circles mark the locations of targets and fixation points, respectively; the grid frames the spatial extent of the stimuli. Blue and red scan paths, respectively, are those before and after the first gaze's arrival at the target and before the button press.
(C) Data for three subjects, denoted by red, green, and blue hues, respectively. Asterisks indicate significant differences between conditions for the subject. The left graphs show RT_hand and RT_eye (top of lighter- and darker-colored bars, respectively) for button responses and first gaze at target, respectively. The right graphs show task performance, percentage of arrive-abandon-return (AAR) scan paths (e.g., [A]), and eye-to-hand latencies in non-AAR trials for conditions A and B only. All error bars show SEM.
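For readers who want to reproduce the scan-path classification, the following sketch labels one trial from its gaze samples. It uses the paper's 2.3° arrival criterion (see Experimental Procedures), but the abandon/return rule below is our reading of the description, not the authors' exact algorithm:

```python
import numpy as np

def classify_scanpath(gaze_t, gaze_xy, target_xy, radius=2.3):
    """Label one trial from gaze samples taken before the button press.

    gaze_t: sample times; gaze_xy: (N, 2) gaze positions; target_xy: target
    center, positions in deg of visual angle. Returns (rt_eye, is_aar).
    """
    near = np.linalg.norm(gaze_xy - target_xy, axis=1) <= radius
    if not near.any():
        return None, False                    # nonarrival trial
    i_arrive = int(np.argmax(near))           # first sample at the target
    rt_eye = gaze_t[i_arrive]
    after = near[i_arrive:]
    if not (~after).any():                    # gaze stayed at the target
        return rt_eye, False
    i_leave = int(np.argmax(~after))          # gaze abandoned the target...
    is_aar = bool(after[i_leave:].any())      # ...and later returned to it
    return rt_eye, is_aar
```

The eye-to-hand latency of a non-AAR trial is then simply RT_hand − rt_eye.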

An alternative explanation consistent with the data could be that targets in condition A but not condition B somehow become less visible under foveal viewing than under peripheral viewing. To test between the hypotheses of interference by invariant object recognition and of foveal visibility reduction, we examined conditions A and B in experiment II, in which the search stimulus was masked after a seemingly random time interval from its onset (Figure 3A). The subjects button-pressed for the target location as before but could respond without time pressure, before or after the mask onset, and guess if they had to. The mask (Figure 3B) covered each original object, whether target or distractor, with a star-shaped object and made the original object imperceptible. A random half of the trials in each session were gaze-contingent trials, in which mask onset occurred, reducing visibility of the original stimulus to zero, at one of several predetermined time intervals T after gaze first arrived at the target. The other trials had random mask-onset times; some were gaze-opposite trials, in which the gaze position at mask onset was on one (e.g., left) side of the display center and the target was on the opposite (e.g., right) side, and were designed to prevent subjects' awareness of any link between mask onset and eye position (see Experimental Procedures).

Figure 3C shows that for condition A, target localization worsened with longer gaze-to-mask viewing time T ≤ 1–2 s. This is not because the button presses tended to agree with the eye positions at mask onset; among the gaze-opposite trials, only 56% of the button presses agreed with the eye positions at mask onset. Furthermore, the performance for T = 0, when target visibility became zero immediately upon foveal viewing, is comparable to that without the mask in experiment I, when the stimulus was viewed as long as was deemed necessary by the subjects. This suggests that the extra viewing time T > 0, or a longer duration of target visibility (even if reduced), is unnecessary and can be detrimental for target localization for some T. Apparently, the subjects had a good first guess of the target location based on image features (orientations of the bars) alone before they got confused by invariant object recognition, which likely caused them to abandon the target (the non-GSBM trials in Figure 3C) and give incorrect responses. Eventually, their confusion subsided. Some subjects reported that sometimes they thought they found the target, only for it to disappear when they took a second look. In experimental sessions interleaving conditions A and B (for another group of subjects), extra viewing time T > 0 improved performance in condition B marginally but worsened performance in A (Figure 3D). Meanwhile, the fact that the performances for the two conditions at T = 0 are comparable is consistent with the comparable RT_eye values in these conditions in experiment I (Figure 2).

Figure 3. Experiment II: The Longer One Looks, the Worse One "Sees"
(A) Sequence of events in a gaze-contingent trial.
(B) A small portion of an example of a mask stimulus.
(C) With longer gaze-to-mask latency T, target localization in condition A (in blocked sessions) worsened, and the gazes are more likely to have abandoned the target before mask onset. Asterisks denote data points significantly smaller in value than that for T = 0. GSBM trials are those in which gaze stayed (at target) before mask onset.
(D) In sessions interleaving conditions A and B (for another subject group), performances in conditions A and B are comparable for T = 0. Combining both T > 0 values, performance in B is significantly better than that in A (p = 0.01). Error bars show SEM.

Our finding is the first we know of providing quantitative psychophysical data to suggest that deeper cognitive processing can be detrimental to some visual cognitive tasks, a likely explanation for the portrait-drawing trick. In particular, invariant object recognition interfered with lower-level feature processes' abilities to detect unique salient features. Here, the later-stage processes for object recognition are at best unnecessary for our task. Our findings suggest that they actually overwrite or interfere with the decisions of the necessary and earlier feature processes, even though, in principle, they do not have to do so. The uniqueness of the orientation of the target's component bar is sufficient to make the target location salient. Previous physiological and computational studies [2, 16–18] have indicated that V1 can detect and highlight such a salient feature and direct gaze to it via the superior colliculus [3].

Although some forms of object recognition can occur quickly [28, 29] and without attention or awareness [5, 30], psychophysical data have indicated that viewpoint-invariant object representation needs attention [4, 22]. Accordingly, our findings suggest that the later, interfering stage not only constructs objects from features but also allows top-down attention to build invariant object representations. This is consistent with the mandatory link between the directions of gaze and attention in free viewing [26]. Thus, our finding can also be seen as the interference of top-down attentional processes with bottom-up processes, and this interference introduces nontrivial complexity to the temporal and performance differences between higher- and lower-level processes [31–33]. Our finding also contrasts with backward visual masking [34], in which inattention enables a mask to impair object recognition. Figure 3C suggests that building the invariant object representation requires at least 100 ms of attentive viewing for objects in our stimuli.

Our analysis suggests the following factors as conducive to interference: (1) tasks being feature based, not requiring object recognition; and (2) object recognition or top-down knowledge, or both, introducing additional signals with sufficient weight to counteract the low-level feature's contribution to task-relevant decisions. Comparing condition A in blocked versus interleaved (with condition B) sessions (Figure 4A) suggests that an increased expectation of a unique target shape (in the interleaved sessions) increases interference. This is unsurprising because the expectation should increase the weight of factor (2) above. Analogously, we can reduce interference by increasing the weight of the bottom-up factor and thus decreasing the relative weight of factor (2). For instance, when the task-irrelevant bars in conditions A and B are all horizontal or all vertical, so that distractors are uniformly oriented, the target becomes more salient. We call these modified conditions A′ and B′, respectively. This reduces RT_eye significantly. Consequently, the feature-level influences could more strongly push the task-decision process, so that the decision threshold could be reached before object-to-feature interference becomes significant. Hence, in experiment I, interleaving conditions A′, B′, A, and B, RT_hand − RT_eye for A′ is much shorter than for A, although RT_hand − RT_eye for A′ is still significantly longer than the two comparable RT_hand − RT_eye values for B and B′ (Figure 4C). Conversely, when the orientation variability of distractors is increased in condition A_simple, such that randomly 1/3 of the distractor bars become oriented horizontally and another 1/3 vertically, we call the resulting stimulus condition A′_simple. In this condition, object-to-feature interference arises, as indicated by an RT_hand − RT_eye longer than that in condition B (Figure 4C). This suggests that even a simple bottom-up orientation feature can, given sufficient processing time, be treated as a viewpoint-invariant object, making the target bar a rotated version of all the distractor objects.

Figure 4. Factors Affecting Object-to-Feature Interference
(A) In experiment II, performance across T > 0 for condition A was somewhat better (p = 0.08) in blocked sessions (one session per subject) than in sessions interleaved with condition B (two sessions per subject), in which subjects had higher expectations of a uniquely shaped target.
(B) Reduction of interference with experience: for longer gaze-to-mask time T in experiment II, performance for condition A improved in the second experimental session (significantly at the T value with an asterisk next to the data points). The data in Figure 3D are replotted here according to the two separate sessions.
(C) Stronger or weaker object-to-feature interference, manifested in RT_hand − RT_eye in experiment I, caused by, respectively, higher or lower orientation variability of the distractors, which reduces or enhances the bottom-up pop-out strength manifested in RT_eye (same three subjects as in Figure 2, denoted by blue, red, and green). The RT_hand − RT_eye in condition A′, although reduced from that of A, is significantly longer (p = 0.002) than that of B′. In the right graph, data points of different conditions are plotted in different colors. Stimulus examples of conditions A′, B′, and A′_simple are shown in the Supplemental Data. Error bars show SEM.

Our data also suggest that subjects can quickly learn to remove the interference in condition A within two data sessions involving no more than 260 trials per subject in experiment II (Figure 4B). Subjects reported discovering helpful strategies of trusting their instincts, defocusing the image, or letting the target pop out while fixating on the center of the display away from the peripheral target. The peripheral visual field is more heavily sampled by the magnocellular pathway, which, compared to the parvocellular pathway, is faster and processes coarser-resolution inputs [35, 36]. Hence, the magno pathway likely plays a greater role in detecting unique features and driving gaze in a bottom-up manner. This is consistent with the idea that the slower attentive process is associated with finer spatial resolution than the faster bottom-up processes. Defocusing and peripheral viewing probably reduce the object-to-feature interference by selectively emphasizing the magno pathway to speed up the bottom-up process while removing the finer input details to attenuate the attentive object-formation processes. Although removing finer resolution could make two intersecting bars resemble a single bar of the averaged orientation, the observed object-to-feature interference in condition A′_simple (which has only disconnected single-bar stimuli) suggests that viewing the objects as single bars could not remove the interference if attentive object formation proceeded. Hence, we predict that lesions (clinical or by transcranial magnetic stimulation) of the cortical areas responsible for attentive-object processes (perhaps the parietal cortex, which has been implicated in building objects from features [5]) could improve performance in our task. Our findings reveal only a fraction of the rich interactions between lower- and higher-level cognitive processes. The results of such interactions are unexpected if we assume that deepening of processing should always lead to improved perception.

Different degrees of object-to-feature interference may underlie common observations of visual-search asymmetry between familiar and unfamiliar targets. For example, a search for a familiar letter N among its mirror reversals is performed more slowly than a search for a mirror reversal among normal N's [6, 37, 38]. Both searches require the same low-level processes for detecting orientation contrast between left- and right-tilted bars and do not require letter recognition. However, familiarity of the letters should affect the object- rather than feature-level processing. Hence, the object-to-feature interference, manifested in our task and likely behind the portrait-drawing trick, can enable additional examination of many known phenomena.

Experimental Procedures

Stimuli

Each stimulus display, viewed at a distance of 40 cm, had 660 object items, each at a position randomly displaced, by up to ±0.24° of visual angle horizontally and vertically, from its corresponding position in a regular grid of 22 rows × 30 columns, spanning correspondingly 34° × 46° of visual angle. Each stimulus bar was 0.12° × 1.1° of visual angle, with a luminance of 48 cd/m². The background was black. The target's grid location was randomly one of those closest to the circle of about 15° eccentricity, and beyond 12° of horizontal eccentricity, from the display center. The fixation stimulus was a bright disk of 0.3° diameter at the display center.
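A sketch of this display geometry as we read it (Python with NumPy; not the authors' code). The 1° tolerance for "closest to the 15° circle" is our guess:

```python
import numpy as np

rng = np.random.default_rng(0)

N_ROWS, N_COLS = 22, 30          # 660 grid positions
SPAN_X, SPAN_Y = 46.0, 34.0      # display extent, deg of visual angle
JITTER = 0.24                    # max horizontal/vertical jitter, deg

# Regular grid centered on the display, jittered independently per item.
xs = (np.arange(N_COLS) - (N_COLS - 1) / 2) * (SPAN_X / N_COLS)
ys = (np.arange(N_ROWS) - (N_ROWS - 1) / 2) * (SPAN_Y / N_ROWS)
grid = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2)
items = grid + rng.uniform(-JITTER, JITTER, size=grid.shape)

# Target grid location: near the 15-deg-eccentricity circle, restricted to
# beyond 12 deg of horizontal eccentricity from the display center.
ecc = np.linalg.norm(grid, axis=1)
lateral = np.abs(grid[:, 0]) > 12.0
ring = np.where(lateral & (np.abs(ecc - 15.0) < 1.0))[0]
target_idx = int(rng.choice(ring))
```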

Procedures

Gazes were tracked by a 50 Hz infrared video eye tracker from Cambridge Research Systems. Tracking calibration was performed before each data session to a precision typically within 0.5° of visual angle. After being shown two examples of each stimulus condition, untrained subjects were instructed to fixate centrally until the stimulus onset and to move their eyes freely afterwards for target searching. The sequence of events in a trial was as follows: (1) With the fixation stimulus present, the subject pressed a button to start the trial and eye tracking. (2) After 0.6 s, upon the subject's continuous fixation for 40 ms within 3° of the fixation point, a blank screen replaced the fixation stimulus for 200 ms and was followed by the onset (designated as time zero) of the search stimulus. (3) In experiment I, the search stimulus remained until after the subject's button press. In experiment II, a mask replaced the search stimulus at a time determined as follows: In a gaze-contingent trial, the mask onset occurred at time T after the first gaze arrival at the target. The criterion for arrival was that the gaze was within 2.3° of visual angle from the target's center position. T was randomly chosen from the set T = (0, 100, 500, 1000, 2000) ms for data sessions contributing to Figure 3C and, for a different group of subjects, from the set T = (0, 1500) ms or T = (0, 1000, 1500) ms for sessions contributing to Figure 3D and Figures 4A and 4B. For each non-gaze-contingent trial, a time t was chosen randomly and uniformly from the time window 200–1700 ms. The mask onset occurred upon the first gaze arrival at the opposite (laterally from the center) side from the target after 200 ms from stimulus onset, or at time t, whichever was sooner. The mask, once displayed, remained until after the subject's button press. Each session of experiment I had 200 trials, randomly interleaving conditions A_simple, B_simple, A, B, A′, B′, and A′_simple and other control conditions. In experiment II, each blocked session for condition A had 130 or 60 trials, and each interleaving session (of conditions A and B) had 100 trials. After each session of experiment II, we verified that subjects did not notice any links between the mask onsets and the gaze positions. Different subjects participated in experiments I and II.
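The mask-timing rules lend themselves to a compact sketch (ours; the attributes trial.gaze_contingent and trial.target_xy and the gaze-stream format are illustrative assumptions, not the authors' code):

```python
import random
from math import dist

def mask_onset_time(trial, gaze_stream):
    """Return the mask-onset time (ms from search-stimulus onset) for a trial.

    gaze_stream yields (t_ms, x, y) samples, with positions in deg of visual
    angle relative to the display center.
    """
    if trial.gaze_contingent:
        T = random.choice([0, 100, 500, 1000, 2000])   # set used for Figure 3C
        for t, x, y in gaze_stream:
            if dist((x, y), trial.target_xy) <= 2.3:   # first arrival at target
                return t + T
    else:
        t_random = random.uniform(200, 1700)           # random mask-onset time
        for t, x, y in gaze_stream:
            if t >= t_random:                          # time t reached first
                return t
            if t >= 200 and x * trial.target_xy[0] < 0:
                return t                               # gaze-opposite arrival
    return None                                        # gaze never triggered it
```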

Data Analysis

A trial was defined as a bad trial and removed from further analysis if gaze was untracked in more than 10% of the eye tracker's video frames within the time window (0, RT_hand) or if RT_hand < 100 ms. Data from a subject or session in which bad trials comprised more than 10% of all trials were removed from further analysis. Sufficiently large gaze-tracking error can lead to failures in detecting gaze arrivals at the target. A trial is called a nonarrival trial if the gaze never arrived at the target by our arrival criterion according to the tracker measurements. We thus removed from further analysis subjects and data sessions having more than 11% nonarrival trials among all trials in experiment I or among the gaze-contingent trials in experiment II. Results in figures were based on the gaze-arrival trials only. The RTs plotted were based on trials with correct button responses. The error bars plotted represent the standard error of the mean (SEM).
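In code, the exclusion rules read roughly as follows (a sketch; the trial fields are placeholders for whatever the tracker log provides):

```python
def keep_trial(trial):
    """Bad-trial criteria from the text; field names are illustrative."""
    tracked = trial.frames_tracked / trial.frames_total   # within (0, RT_hand)
    return tracked >= 0.9 and trial.rt_hand >= 100        # ms

def keep_session(trials, experiment):
    """Apply the session-level criteria; returns the analyzable trials."""
    good = [t for t in trials if keep_trial(t)]
    if len(good) < 0.9 * len(trials):                     # >10% bad trials
        return None
    pool = good if experiment == "I" else [t for t in good if t.gaze_contingent]
    arrived = [t for t in pool if t.arrived_at_target]
    if len(arrived) < 0.89 * len(pool):                   # >11% nonarrival trials
        return None
    return arrived                                        # analyses use these
```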

Statistical tests for differences between conditions in Figure 2 were two-tailed t tests, whereas those in Figures 3 and 4 were one-tailed matched-sample t tests.
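With SciPy, the two kinds of tests could look like this (synthetic placeholder data; SciPy's paired test returns a two-tailed p, which we halve for the one-tailed version):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder data standing in for the measured quantities.
rt_a = rng.normal(3.0, 0.8, 40)               # per-trial RT_hand, condition A (s)
rt_b = rng.normal(1.8, 0.5, 40)               # per-trial RT_hand, condition B (s)
acc_t0 = rng.uniform(0.7, 0.9, 8)             # per-subject accuracy at T = 0
acc_t1 = acc_t0 - rng.uniform(0.0, 0.2, 8)    # per-subject accuracy at T > 0

t2, p2 = stats.ttest_ind(rt_a, rt_b)          # two-tailed, Figure 2 contrasts
t1, p1 = stats.ttest_rel(acc_t0, acc_t1)      # paired; SciPy gives two-tailed p
p_one = p1 / 2 if t1 > 0 else 1 - p1 / 2      # one-tailed, predicted direction
```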

Supplemental Data

Supplemental Data include additional Experimental Procedures and can be found with this article online at http://www.current-biology.com/cgi/content/full/17/1/26/DC1/.

Acknowledgments

Work was supported by the Gatsby Charitable Foundation. We thank Keith May for help in programming the stimulus, and him, Peter Dayan, Chris Frith, Uta Frith, Sheng He, Li Jingling, and Alex Lewis for conversations and comments on our works, manuscripts, and presentations. Comments by the three anonymous reviewers are also much appreciated.

Received: June 22, 2006
Revised: October 12, 2006
Accepted: October 24, 2006
Published: January 8, 2007

References

1. Treisman, A.M., and Gelade, G. (1980). A feature-integration theory of attention. Cognit. Psychol. 12, 97–136.
2. Li, Z. (2002). A saliency map in primary visual cortex. Trends Cogn. Sci. 6, 9–16.
3. Tehovnik, E.J., Slocum, W.M., and Schiller, P.H. (2003). Saccadic eye movements evoked by microstimulation of striate cortex. Eur. J. Neurosci. 17, 870–878.
4. Stankiewicz, B.J., Hummel, J.E., and Cooper, E.E. (1998). The role of attention in priming for left-right reflections of object images: Evidence for a dual representation of object shape. J. Exp. Psychol. 24, 732–744.
5. Treisman, A.M., and Kanwisher, N.G. (1998). Perceiving visually presented objects: Recognition, awareness, and modularity. Curr. Opin. Neurobiol. 8, 218–226.
6. Frith, U. (1974). A curious effect with reversed letters explained by a theory of schema. Percept. Psychophys. 16, 113–116.
7. Hubel, D.H., and Wiesel, T.N. (1968). Receptive fields and functional architecture of monkey striate cortex. J. Physiol. 195, 215–243.
8. Kahneman, D., Treisman, A., and Gibbs, B.J. (1992). The reviewing of object files: Object-specific integration of information. Cognit. Psychol. 24, 175–219.
9. Riesenhuber, M., and Poggio, T. (2003). How the visual cortex recognizes objects: The tales of the standard model. In The Visual Neurosciences, Volume 2, L.M. Chalupa and J.S. Werner, eds. (Cambridge, MA: MIT Press), pp. 1640–1653.
10. Tanaka, K. (2003). Inferotemporal response properties. In The Visual Neurosciences, Volume 2, L.M. Chalupa and J.S. Werner, eds. (Cambridge, MA: MIT Press), pp. 1151–1164.
11. Rolls, E.T. (2003). Invariant object and face recognition. In The Visual Neurosciences, Volume 2, L.M. Chalupa and J.S. Werner, eds. (Cambridge, MA: MIT Press), pp. 1165–1178.
12. Logothetis, N.K., Pauls, J., and Poggio, T. (1995). Shape representation in the inferior temporal cortex of monkeys. Curr. Biol. 5, 552–563.
13. Humphreys, G.W., Riddoch, M.J., and Price, C.J. (1997). Top-down processes in object identification: Evidence from experimental psychology, neuropsychology and functional anatomy. Philos. Trans. R. Soc. Lond. B Biol. Sci. 352, 1275–1282.
14. Kourtzi, Z., and Kanwisher, N. (2001). Representation of perceived object shape by the human lateral occipital complex. Science 293, 1506–1509.
15. Grill-Spector, K., Kourtzi, Z., and Kanwisher, N. (2001). The lateral occipital complex and its role in object recognition. Vision Res. 41, 1409–1422.
16. Knierim, J.J., and Van Essen, D.C. (1992). Neuronal responses to static texture patterns in area V1 of the alert macaque monkey. J. Neurophysiol. 67, 961–980.
17. Sillito, A.M., Grieve, K.L., Jones, H.E., Cudeiro, J., and Davis, J. (1995). Visual cortical mechanisms detecting focal orientation discontinuities. Nature 378, 492–496.
18. Nothdurft, H.C., Gallant, J.L., and Van Essen, D.C. (1999). Response modulation by texture surround in primate area V1: Correlates of "popout" under anesthesia. Vis. Neurosci. 16, 15–34.
19. Schiller, P.H., and Lee, K. (1991). The role of the primate extrastriate area V4 in vision. Science 251, 1251–1253.
20. Mazer, J.A., and Gallant, J.L. (2003). Goal-related activity in V4 during free viewing visual search: Evidence for a ventral stream visual salience map. Neuron 40, 1241–1250.
21. Ogawa, T., and Komatsu, H. (2004). Target selection in area V4 during a multidimensional visual search task. J. Neurosci. 24, 6371–6382.
22. Wolfe, J.M., and Bennett, S.C. (1997). Preattentive object files: Shapeless bundles of basic features. Vision Res. 37, 25–43.
23. Ungerleider, L.G., and Mishkin, M. (1982). Two cortical visual systems. In Analysis of Visual Behavior, D.J. Ingle, M.A. Goodale, and R.J.W. Mansfield, eds. (Cambridge, MA: MIT Press), pp. 549–586.
24. Chelazzi, L., Miller, E.K., Duncan, J., and Desimone, R. (1993). A neural basis for visual search in inferior temporal cortex. Nature 363, 345–347.
25. Motter, B.C. (1993). Focal attention produces spatially selective processing in visual cortical areas V1, V2, and V4 in the presence of competing stimuli. J. Neurophysiol. 70, 909–919.
26. Hoffman, J.E. (1998). Visual attention and eye movements. In Attention, H. Pashler, ed. (London: University College London Press), pp. 119–154.
27. Duncan, J., and Humphreys, G.W. (1989). Visual search and stimulus similarity. Psychol. Rev. 96, 433–458.
28. Potter, M.C. (1976). Short-term conceptual memory for pictures. J. Exp. Psychol. [Hum. Learn.] 5, 509–522.
29. Thorpe, S., Fize, D., and Marlot, C. (1996). Speed of processing in the human visual system. Nature 381, 520–522.
30. Luck, S.J., Vogel, E.K., and Shapiro, K.L. (1996). Word meanings can be accessed but not reported during the attentional blink. Nature 382, 616–618.
31. van Zoest, W., and Donk, M. (2004). Bottom-up and top-down control in visual search. Perception 33, 927–937.
32. Gilchrist, I.D., Heywood, C.A., and Findlay, J.M. (2003). Visual sensitivity in search tasks depends on the response requirement. Spat. Vis. 16, 277–293.
33. Fang, F., and He, S. (2005). Cortical responses to invisible objects in the human dorsal and ventral pathways. Nat. Neurosci. 8, 1380–1385.
34. Enns, J.T., and Di Lollo, V. (2000). What's new in visual masking? Trends Cogn. Sci. 4, 345–352.
35. Frazor, R.A., Albrecht, D.G., Geisler, W.S., and Crane, A.M. (2004). Visual cortex neurons of monkeys and cats: Temporal dynamics of the spatial frequency response function. J. Neurophysiol. 91, 2607–2627.
36. Li, Z. (1992). Different retinal ganglion cells have different functional goals. Int. J. Neural Syst. 3, 237–248.
37. Richards, J.T., and Reicher, G.M. (1978). The effect of background familiarity in visual search: An analysis of underlying factors. Percept. Psychophys. 23, 499–505.
38. Shen, J., and Reingold, E.M. (2001). Visual search asymmetry: The influence of stimulus familiarity and low-level features. Percept. Psychophys. 63, 464–475.

