
A New Hybrid Approach to Video Organization for Content-Based Indexing


Madirakshi Das
Department of Computer Science
University of Massachusetts
Amherst, MA
e-mail: mdas@...

Shih-Ping Liou
Multimedia and Video Technology
Siemens Corporate Research
Princeton, NJ
e-mail: liou@...

Abstract

Video organization is a key step in the content-based indexing of video archives. The objective of video organization is to capture the semantic structure of a video in a form which is meaningful to the user. We present a hybrid approach to video organization which automatically processes video, creating a video table of contents (VTOC), while providing easy-to-use interfaces for verification, correction and augmentation of the automatically extracted video structure. Algorithms are developed to solve the sub-problems of shot detection, shot grouping and VTOC generation without making very restrictive assumptions about the structure or content of the video. We use a nonstationary time series model of difference metrics for shot boundary detection, color and edge similarities for shot grouping, and observations about the structure of a wide class of videos for the generation of the VTOC. The use of automatic processing in conjunction with input from the user allows us to produce meaningful video organization efficiently.

1. Introduction

For a multimedia information system to better meet the users' needs, it must capture the semantics and terminology of specific user domains and allow users to retrieve information according to such semantics. This requires the development of a content-based indexing mechanism that is rich in its semantic capabilities for abstraction of multimedia information and also able to provide canonical representation of complex scenes in terms of objects and their spatio-temporal behavior. A key initial stage in this content-based indexing process is video organization. The objective of video organization is to capture the structure present in the video, providing a video table of contents analogous to the table of contents in a book.

The rest of the paper is organized as follows. Section 2 reviews related work. A brief summary of the novel aspects of the proposed method is given in Section 3. Section 4 describes the algorithms used in the automatic generation of the table of contents from the raw video. Section 5 describes the interactive aspects of video organization, including the user interfaces. Finally, Section 6 concludes with comments on future directions.

2. Literature review

There has been work in extracting the semantic structure of video using strong domain knowledge. Zhang et al. [18] use known templates of anchor-person shots to separate news stories. Swanberg et al. [13] use the known structure of news programs in addition to models of anchor-person shots to parse news videos. The presence of the channel logo, skin tones of the anchor person and the structure of a news episode have been used in [6]. These approaches only create a hierarchy with a fixed number of levels. Content-based indexing at the level of shots using motion is described in [3], without developing a high-level description of the video. Although domain knowledge, in general, constrains the problem and makes it possible to provide a reliable solution, it can never account for all possible scenarios, even for a simple domain such as news video. For example, it is not possible to define an anchor-person image model that is independent of broadcast stations.

Recently, Yeung and Yeo [15] presented a domain-independent approach that extracts story units (the top level in a hierarchy) for video browsing applications and creates a scene transition graph (Fig. 3). Such a graph leads to a compact representation that serves as a summary of the story and may also provide useful information for automatic classification of video types. However, this representation reveals little information about the semantic structure within a story unit, e.g. an entire news broadcast is classified as a single story, making it difficult to browse through individual news stories. The clustering strategy proposed in this work uses temporal constraints, making it difficult to cluster similar shots which are temporally far apart, e.g. the anchor-person shots in a news broadcast are usually scattered throughout the video.

The problem of capturing the semantic structure in a video also requires solutions to both the cut detection and the shot grouping problems. Most existing cut detection algorithms are based on preset thresholds or assumptions that reduce their applicability to a limited range of video types [2, 7, 10]. For example, it is often assumed that both the incoming and outgoing shots are static scenes and the transition only lasts for a period of less than half a second. This type of model is too simple for modeling the gradual shot transitions that are often present in films/videos. Another frequent assumption is that the frame difference signal computed at each individual pixel can be modeled by a stationary, independent, identically distributed random variable which obeys a known probability distribution [17]. This assumption is generally not true, as shown in Figure 1. Neither a Gaussian nor a Laplace distribution fits both curves well. A Gamma function fits the curve on the left, but not the one on the right. In addition, existing methods assume that time-series difference metrics are stationary, when actually such metrics are highly correlated time signals.

Figure 1. The histogram of a typical inter-frame difference that does not correspond to a shot change. The shape of the curve changes as the camera moves slowly (left) versus fast (right).

Since many videos are converted from films and the two media are played at different frame rates, creating videos from films requires making every other film frame a little bit longer. This process results in video frames that are made up of two fields with totally different (although consecutive) pictures in them. As a result, the digitization produces duplicate video frames and almost zero inter-frame differences at five-frame intervals. A similar problem occurs in animated videos, where an almost zero inter-frame difference is produced as often as every other frame.

In grouping visually similar shots, most existing approaches use color histograms [12]. However, the commonly used RGB and HSV color spaces are sensitive to the illumination to varying degrees, and uniform quantization of the color space is against the principles of human perception [14].

3. Our approach

Our approach to the video organization problem is illustrated in Figure 2. This hybrid approach consists of a set of automatic video organization algorithms and a collection of interactive video organization interfaces. Our approach differs from existing approaches in the following aspects.

Figure 2. Our hybrid approach to video organization

First, none of the existing algorithms provides manual feedback mechanisms during the automatic creation of the video structure. Mistakes made by either the automatic cut detection and/or shot grouping algorithms will ruin the final video structure produced by the algorithm. Therefore, it is important to provide interfaces so that a human operator can verify and correct the results produced automatically at every step of the processing.

Second, we use the observation that repeating shots which are similar in some way, alternating or interleaving with other shots, are often used to convey parallel events in a scene or to signal the beginning of a semantically meaningful unit. Since we do not use pre-defined models for similar shots, this observation can be used for a wide variety of videos for which organization is particularly relevant, e.g. news, sports events, interviews, documentaries etc. News and documentaries have the anchor person appearing before each new segment of the story to introduce it. Interviews have the interviewer appearing to ask each new question. Sports events have sports action between shots of the stadium or the commentator. So, it is possible to create a forest structure directly from the list of identified recurring shots. This forest structure (video table of contents) preserves the time order among shots. It also captures the syntactic structure of the video, which is a hierarchy composed of stories and, under stories, sub-plots, which may have further sub-plots embedded in them. For most structured videos, it provides interesting insights into the semantic context of the video and is therefore a more useful representation than the scene transition graph. An example is shown in Fig. 3.

Third, we use a name-based color system, ISCC-NBS [9], to describe the color of images during the shot grouping process. The NBS color system divides the Munsell color space into irregularly shaped regions and assigns a color name to each region based on common usage. This enables the use of color histograms with perceptually based color space quantization and allows us to construct a color description independent of the illumination of the image.

Finally, unlike existing cut detection methods, which have no notion of time series and non-stationarity, we treat a sequence of difference metrics as nonstationary time series signals and model the time trend deterministically. The sequence of difference metrics is just like any economic or statistical data collected over time. In this view, shot changes as well as the film-to-video conversion process will both create observation outliers in the time series, while gradual shot transitions and gradual camera moves will produce innovation outliers. Fox [5] defines an observation outlier to be one that is caused by a gross error of observation or a recording error, and it only affects a single observation. Similarly, an innovation outlier is one that corresponds to the situation in which a single "innovation" is extreme. This type of outlier affects not only the particular observation but also subsequent observations.

Figure 3. Scene transition graph (left) and video table of contents (right) of the same video clip

4. Automatic organization of video

We have developed a set of automatic algorithms that can produce an organized structure from a raw video. The first task in the automatic organization of video is to recover the shots present in the video. We have developed a scene change detection algorithm which provides good shot detection in the presence of outliers and difficult shot boundaries like fades and zooms. Each shot is completely defined by a start and end frame, and a list of all shots in a video is stored in a shotlist. Each shot is represented by a single frame, the representative frame, which is stored as an image (icon). The more complex task is to organize the shots into a higher level structure reflecting the semantics of the video. We have automated the process of inferring the semantics of the video using the grouping of shots by similarity and the observation made in the earlier section about the relevance of repetition of similar shots. In the following sub-sections, we describe the cut detection, shot grouping and video table of contents creation algorithms respectively.

4.1. Cut detection

Pixel-based metrics (e.g. inter-frame difference) and distribution-based difference metrics (e.g. statistics) respond differently to different types of shots and shot transitions. For example, the former are very sensitive to camera moves but are a good indicator of shot changes; the latter are relatively insensitive to camera and object motion, but can produce little response when two shots look quite different but have similar distributions. We feel that it is necessary to combine both measures in cut detection.
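As an illustration, the sketch below computes one metric of each family for a pair of grayscale frames. The mean absolute difference and the L1 histogram distance used here are illustrative choices of a pixel-based and a distribution-based metric, not the exact metrics used in our system.

import numpy as np

def pixel_difference(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Mean absolute inter-frame difference (a pixel-based metric)."""
    return float(np.mean(np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32))))

def histogram_distance(frame_a: np.ndarray, frame_b: np.ndarray, bins: int = 64) -> float:
    """L1 distance between normalized intensity histograms (a distribution-based metric)."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.abs(ha - hb).sum())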

We model the sequence of difference metrics as nonstationary time series signals. There are standard methods [1, 5, 8] that detect both innovation and observation outliers based on estimates of the time trend and autoregressive coefficients. These standard methods, however, cannot be applied to the cut detection problem directly, for the following three reasons. First, most methods require intensive computation (e.g. least squares) to estimate the time trend and autoregressive coefficients. Second, the observation outliers created by slow motion and the film-to-video conversion process can occur as often as once in every other sample, making time trend and autoregressive coefficient estimation an extremely difficult process. Finally, since gradual shot transitions and gradual camera moves are indistinguishable in most cases, locating gradual shot transitions requires not only the detection of innovation outliers but also an extra camera motion estimation step.

In our solution, we use a zeroth-order autoregressive model and a piecewise-linear function to model the time trend. With this simplification, samples from both the past and the future must be used in order to improve the robustness of the time trend estimation. We also need to discard more than half the samples, because the observation outliers created by slow motion and the film-to-video conversion process can occur as often as once in every other sample. Fortunately, these types of observation outliers are the smallest in value, and hence easily identifiable. After removing the time trend, the remaining value is tested against a normal distribution whose parameters can be estimated recursively or in advance.
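The sketch below illustrates this idea on a 1-D array of difference metrics: for each sample, the smallest-valued neighbors (the easily identified observation outliers) are discarded, a local trend level is estimated from the remaining past and future samples, and the detrended value is tested against a normal model. The window size, discard fraction and threshold are illustrative values, and the local mean stands in for the piecewise-linear trend fit.

import numpy as np

def detect_cuts(metrics: np.ndarray, window: int = 5, keep_frac: float = 0.5,
                z_thresh: float = 4.0) -> list:
    """Flag samples whose detrended value is an extreme observation outlier."""
    cuts = []
    n = len(metrics)
    for t in range(window, n - window):
        neigh = np.concatenate([metrics[t - window:t], metrics[t + 1:t + 1 + window]])
        # Discard the smallest-valued samples: near-zero differences caused by
        # slow motion or film-to-video conversion are the easiest to identify.
        kept = np.sort(neigh)[int(len(neigh) * (1 - keep_frac)):]
        trend = kept.mean()          # local stand-in for the piecewise-linear trend
        sigma = kept.std() + 1e-9
        if (metrics[t] - trend) / sigma > z_thresh:
            cuts.append(t)           # extreme positive residual: candidate cut
    return cuts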

To make the cut detection method more robust, we apply the Kolmogorov-Smirnov test to eliminate false positives. This test is chosen because it does not assume a priori knowledge of the underlying distribution function. The traditional Kolmogorov-Smirnov test procedure compares the computed test metric with a preset significance level (normally at 95%), and has been used [11] to detect cuts in videos. This use of a single pre-selected significance level ignores the non-stationary nature of the cut detection problem. We feel that the use of the Kolmogorov-Smirnov test should take into account the non-stationary nature of the problem, i.e. the significance level should be automatically adjusted to different types of video content.

One way to represent video content is to use measurements in both the spatial and the temporal domain together. For example, image contrast is a good spatial domain measurement, and the amount of intensity change across two neighboring frames measures video content in the temporal domain. The adjustment should be made such that:

- the higher the image contrast is, the more sensitive the cut detection mechanism should be, and

- the more changes occur in two consecutive images, the less sensitive the detection mechanism should be.
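One plausible realization of such an adjustment is sketched below; the scaling constants and the linear form are illustrative only, chosen to move the significance level in the directions stated above.

import numpy as np

def adjusted_significance(frame: np.ndarray, prev_frame: np.ndarray,
                          base_alpha: float = 0.05) -> float:
    """Adapt the K-S significance level to the video content (illustrative)."""
    contrast = frame.std() / 128.0                        # spatial measurement
    motion = np.mean(np.abs(frame.astype(float) -
                            prev_frame.astype(float))) / 255.0  # temporal measurement
    # Higher contrast -> larger alpha (more sensitive test);
    # more inter-frame change -> smaller alpha (less sensitive test).
    alpha = base_alpha * (1.0 + contrast) / (1.0 + 4.0 * motion)
    return float(np.clip(alpha, 0.001, 0.2))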

The traditional Kolmogorov-Smirnov test also cannot differentiate a long shot from a close-up of the same scene. To guard against such transitions, we propose a hierarchical Kolmogorov-Smirnov test. In this test, each frame is divided into four rectangular regions of equal size, and the traditional Kolmogorov-Smirnov test is applied to every pair of regions as well as to the entire image. This test therefore produces five binary numbers that indicate whether there is a change in the entire image as well as in each of the four sub-images.
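A sketch of the hierarchical test is given below, using the two-sample Kolmogorov-Smirnov test from scipy on the entire image and on its four equal-size sub-images of a pair of grayscale frames; it returns the five binary change indicators. The fixed significance level is a placeholder for the content-adjusted one discussed above.

import numpy as np
from scipy.stats import ks_2samp

def hierarchical_ks(frame_a: np.ndarray, frame_b: np.ndarray,
                    alpha: float = 0.05) -> list:
    """Five change flags: whole image plus four equal-size sub-images."""
    h, w = frame_a.shape
    regions = [(slice(0, h), slice(0, w)),            # entire image
               (slice(0, h // 2), slice(0, w // 2)),  # four quadrants
               (slice(0, h // 2), slice(w // 2, w)),
               (slice(h // 2, h), slice(0, w // 2)),
               (slice(h // 2, h), slice(w // 2, w))]
    flags = []
    for rs, cs in regions:
        _, p_value = ks_2samp(frame_a[rs, cs].ravel(), frame_b[rs, cs].ravel())
        flags.append(bool(p_value < alpha))  # True: the two distributions differ
    return flags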

Finally, instead of directly using these five binary numbers to eliminate false positives, the significance of the test result at a shot change frame is compared against that of its neighboring frames.

4.2. Shot grouping

Similarity between shots is determined by comparing the representative frame images of the shots. We have used color to cluster the shots into initial groups, and then used edge information within each group to refine the clustering results.

4.2.1. Using color:

The use of color in computing image similarity has found wide use in image retrieval systems, color histograms [12] being especially popular. The ability of the color histogram to detect similarity in the presence of illumination variations is greatly affected by the color space used, as well as by how the color space is quantized.

In our solution, we have selected a name-based color system, ISCC-NBS [9], which is constructed from the human perception of color. Since the color names are based on common usage, the results are more likely to agree with the user's perception of color similarity, and natural language user interfaces can be developed in the future.

Each color name in the NBS system has two components: the hue name and the hue modifier. Fig. 4 shows a list of hue names and Fig. 5 shows the hue modifiers used in the NBS system, e.g. 'very deep purplish blue' is a possible color name. However, not all combinations of hue names and modifiers are valid; there are a total of 267 valid color names, obtained by dividing the Munsell color space into irregularly shaped regions.

Since we would like to keep the description of an image independent of the illumination, we use only the hue names instead of the full color names. In our experiments with this color system, we have observed that replacing the color names containing the modifiers specially marked in Fig. 5 by 'black' or 'white' results in better classification of the color, e.g. 'very pale green' is in fact closer to white, and 'very dark green' is closer to black than to green. We have further reduced the number of colors to 14 by merging some of the colors into their more dominant component, e.g. 'reddish orange' is considered to be 'orange'. Fig. 6 shows images labelled using 14 colors. Reducing the number of colors improves the chances of two similar images being clustered in the same group. Fig. 7 shows the histograms obtained from two images of a soccer match. There is variation in the green color of the field. Using 14 colors, all types of green are labelled 'green' (color #2) and therefore the histograms are very similar. However, when all hue names are used, the green label is divided into 'olive green', 'yellowish green', 'bluish green' etc. The histograms now appear different, since the proportions of the different shades of green are not the same in both images.

Figure 4. ISCC-NBS hue names (red, purplish red, reddish purple, violet, purplish blue, greenish blue, green, greenish yellow, yellow green, olive, olive brown, yellowish brown, brownish orange, orange yellow)

Figure 5. ISCC-NBS hue modifiers (vivid, brilliant, very light, light, pale, very pale, grayish, light grayish, moderate, strong, deep, very deep, dark, very dark, dark grayish, blackish), arranged by lightness (Munsell Value) and saturation (Munsell Chroma)

Figure 6. (top row) The original images; (bottom row) images labeled using 14 colors

Figure 7. Effect of number of colors on similarity: (a) 14 colors (b) all hue names

Figure 8. Color histograms of similar images

The normalized histogram bin counts are used as the feature vector to describe the color content of an image. Fig. 8 shows the color histograms of shots grouped together on the basis of similar color content.
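A minimal sketch of this feature vector is given below. It assumes each pixel has already been labeled with one of the 14 color indices by an ISCC-NBS style name lookup, which is not shown; the particular list of 14 names is illustrative.

import numpy as np

# Illustrative 14-color label set; in our system these come from merging
# ISCC-NBS hue names, which partition Munsell space irregularly.
NUM_COLORS = 14

def color_feature(color_labels: np.ndarray) -> np.ndarray:
    """Normalized histogram over the 14 color labels of an image.

    color_labels: H x W array of integer labels in [0, 13], assumed to be
    produced by an ISCC-NBS style color-name lookup (not shown here).
    """
    counts = np.bincount(color_labels.ravel(), minlength=NUM_COLORS)
    return counts / counts.sum()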

4.2.2. Using edges:

When images are grouped only on the basis of their color histograms, visually different images with similar color distributions may be grouped together. Edge information is used as a filter to remove shots which have been incorrectly grouped using color alone.

We use a straightforward global measure of the edge information in an image. Each edge pixel is classified as belonging to one of four directions, based on the sign and relative magnitude of its responses to edge operators along the x and y directions. The histogram of pixel counts along each of the four directions is used as a feature vector to describe the edge information in the image. The gross edge information is computed after simplifying the image by quantizing it to a few levels (4 or 8) and converting the quantized image to an intensity image. This edge information is sufficient to filter out most of the false shots in a group.
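The following sketch illustrates this measure. The particular rule for assigning the four direction classes from the signs and relative magnitudes of the two responses is an illustrative choice, as are the Sobel operators, the quantization levels and the edge threshold.

import numpy as np
from scipy.ndimage import sobel

def edge_feature(gray: np.ndarray, levels: int = 4, threshold: float = 50.0) -> np.ndarray:
    """Histogram of edge pixels over four coarse direction classes."""
    # Simplify the image by quantizing it to a few intensity levels first.
    quantized = np.floor(gray / 256.0 * levels) * (255.0 / (levels - 1))
    gx = sobel(quantized, axis=1)
    gy = sobel(quantized, axis=0)
    edge = np.hypot(gx, gy) > threshold
    # Four classes from the sign and relative magnitude of the two responses.
    horiz = np.abs(gx) >= np.abs(gy)
    same_sign = (gx * gy) >= 0
    direction = np.where(horiz, np.where(same_sign, 0, 1),
                         np.where(same_sign, 2, 3))
    return np.bincount(direction[edge], minlength=4).astype(float)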

4.2.3. Clustering strategy:

We are constrained in the choice of clustering strategy by having no a priori knowledge of the number or the nature of the clusters. We cannot make the assumption that similar images will be temporally close to each other in the video, since the repeating shots are likely to be scattered throughout the video. Therefore, clustering strategies which involve comparisons among all points in a limited window [15] are not suitable in our case. We also do not know the number of potential clusters a priori, so K-means clustering and other strategies using this a priori information are also not useful. Moreover, it would be advantageous if the clustering strategy were not offline, i.e. did not require all the shots to be present before starting. This would allow us to process shots as they are generated.

The clustering algorithm we have used is based on nearest neighbor classification, combined with a threshold criterion. The initial clusters are generated based on the color feature vectors of the shots. Each cluster is specified by a feature vector which is the mean of the vectors of its members. When a new shot is available, the city block distance between its color feature vector and the means of the existing clusters is computed. The new shot is grouped into the cluster with the minimum distance from its feature vector, provided the minimum distance is less than a threshold. If an existing cluster is found for the new shot, the mean of the cluster is updated to include the feature vector of the new shot. Otherwise, a new cluster is created with the feature vector of the new shot as its mean. The threshold is selected based on the percentage of image pixels that need to match in color in order to call two images similar.
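The following sketch illustrates this incremental clustering; the class and method names are our own. Each cluster keeps the running mean of its members' color feature vectors, the city block (L1) distance decides membership, and a new cluster is created when no mean is close enough.

import numpy as np

class OnlineClusterer:
    """Incremental nearest-neighbor clustering with a distance threshold."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.means = []    # mean feature vector of each cluster
        self.counts = []   # number of members in each cluster

    def add(self, feature: np.ndarray) -> int:
        """Assign one shot's feature vector; return its cluster index."""
        if self.means:
            dists = [np.abs(feature - m).sum() for m in self.means]  # city block
            best = int(np.argmin(dists))
            if dists[best] < self.threshold:
                # Update the cluster mean to include the new member.
                n = self.counts[best]
                self.means[best] = (self.means[best] * n + feature) / (n + 1)
                self.counts[best] = n + 1
                return best
        self.means.append(feature.astype(float))
        self.counts.append(1)
        return len(self.means) - 1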

Shots are deleted from the clusters produced above if the distance of their edge feature vector from the mean edge vector of the group is greater than a threshold, starting with the shot furthest from the mean. The mean is recomputed each time a member is deleted from the cluster. This is continued until all the edge feature vectors of the members in the cluster are within the threshold from the mean of the cluster, or there is a single member left in the cluster. The threshold in this case is a multiple of the variance of the edge vectors of the cluster members. So, the final clusters are based on color as well as edge similarity, allowing the color feature to be the main criterion in determining the clusters.
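A sketch of this refinement step is given below; the multiple k of the variance and the use of a summed per-dimension variance are illustrative parameter choices.

import numpy as np

def prune_by_edges(edge_vectors: list, k: float = 2.0) -> list:
    """Return indices of cluster members kept after edge-based refinement."""
    kept = list(range(len(edge_vectors)))
    while len(kept) > 1:
        vecs = np.array([edge_vectors[i] for i in kept])
        mean = vecs.mean(axis=0)
        dists = np.abs(vecs - mean).sum(axis=1)
        # Threshold as a multiple of the variance of the members' edge vectors.
        limit = k * vecs.var(axis=0).sum()
        worst = int(np.argmax(dists))
        if dists[worst] <= limit:
            break                # all members are within the threshold
        kept.pop(worst)          # delete the furthest member; mean is recomputed
    return kept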

A mergelist is produced by the automatic clustering, identifying the group number of each shot in the shotlist. This information is used for the automatic construction of the video structure.

4.3. VTOC creation

We have found that a hierarchical tree structure captures the organization of the video adequately and is easy for the user to understand and work with. The whole video is the root node, which can have a number of child nodes, each corresponding to a separate 'story' in the video. A story is a self-contained unit which deals with a single subject or related subjects. Sub-plots are the different elements in a story unit or sub-plot unit. The tree structure has nodes of different types to provide semantic information about its contents. Each node also has a visible representative icon to allow browsing without having to unravel the full structure. Each new story starts with a story node, which consists of sub-plot nodes for each sub-plot. Similar nodes are used to bind together all consecutive frames found to be in the same cluster. Frequently, such a node may be replaced by any one of its members by merging the other shots. The leaf nodes contain the shots from the shotlist.

The algorithm contains two major functions. A modified version of the algorithm presented in [15] finds all story units, creates a story node for each story, and calls the second function, Find-structure, to find the structure within each story. Each story unit extends to the last re-occurrence of a shot within the body of the story. Find-structure is a function that takes a segment of shot indices and traverses the segment, creating a node for each shot until it finds one shot that recurs later. At this point, it divides the rest of the segment into sub-segments, each of which is led by the recurring shot as a sub-plot node, and recursively calls itself to process each sub-segment. If consecutive shots are found to be similar, they are grouped under a single similar node. The structure produced by Find-structure is attached as a child of the story node for which it was called.
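A simplified sketch of Find-structure is given below, operating on the sequence of cluster labels of the shots in one story. The nested-list node representation is an illustrative simplification, and the grouping of consecutive similar shots under similar nodes is omitted for brevity.

def find_structure(labels, start=0, end=None):
    """Build a nested structure from shot cluster labels (simplified sketch)."""
    if end is None:
        end = len(labels)
    node = []
    i = start
    while i < end:
        # Does this shot's cluster label recur later in the segment?
        recur = [j for j in range(i + 1, end) if labels[j] == labels[i]]
        if not recur:
            node.append(("shot", i))   # ordinary shot node
            i += 1
            continue
        # Divide the rest of the segment into sub-segments, each led by an
        # occurrence of the recurring shot, and recurse on each sub-segment.
        leads = [i] + recur
        for a, b in zip(leads, leads[1:] + [end]):
            node.append(("subplot", labels[a], find_structure(labels, a + 1, b)))
        break
    return node

# Example: a recurring anchor-person shot (label 0) leading two sub-plots.
print(find_structure([0, 1, 2, 0, 3, 4]))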

5. Interactive organization

We now describe the tools provided to the user to modify the results of automatic organization at both the shot and VTOC levels. The steps in the interactive generation of the final organized video can be summarized as follows:

1. Automatic construction of the shotlist from the raw video.

2. *Viewing and editing the shot structure to add new shots or merge shots.

3. Automatic clustering of shots into visually similar groups.

4. *Viewing and editing the clusters generated automatically to create new clusters and modify existing clusters.

5. Automatic generation of a tree structure describing the high-level units in the video, using the cluster information from the earlier step.

6. *Viewing and modifying the tree structure to reorganize the video.

The steps marked with an asterisk in the above list need user interaction, and therefore an interface needs to be provided with the required functionality. Since these interfaces work with higher level representations of the video, a separate component is also provided to view the raw video accompanied by its audio. The interactive video organization environment provides three main interfaces to the user, which communicate with each other so that changes made using one component produce the appropriate updates in the other interfaces.

Browser: This interface is used to view and modify the shot list. The video stream is represented as a composite image, making it easy to detect shot boundaries visually. The results of automatic shot detection are displayed alongside using colored bars, and can be altered easily.

TreeView: This interface serves a dual purpose. Before the creation of the tree structure of the video, it is used to view and alter the automatic clustering of shots based on visual similarity. Once the similarity-based grouping results have been finalized, it is used as an interface to organize the video into a tree structure.

VideoPlayer: This interface is used for playing the video, with audio, from any point in the video. It has the functionality of a VCR, including fast forward, rewind, pause and step.

These components allow easy manipulation of the results obtained by automatic processing to fit the user's interpretation of the video. Fig. 9 shows the components of this system along with the interactions between them. Fig. 11 shows the graphical interface presented to the user by each component.

Figure 9. System Overview

If the steps in the construction of the tree structure were followed sequentially, there would be very little interaction between the interfaces in the environment. However, it would be very useful for the user to see the effect of changes made at one level propagate to the other levels, and to be able to move between levels. Our environment aims to provide a fully integrated system where change is reflected throughout the system, wherever possible.

The environment is designed so that each of the three interfaces can run individually or in combination with the others. Each interface has the capability of starting up the other interfaces. Any one of them can be started first, provided the required files are present. The shotlist is needed to start the Browser. TreeView starts with the group structure if only the mergelist is present; otherwise it starts with the tree structure, which is stored in the treelist. When more than one interface is running simultaneously, the issue of interactions has to be considered. Fig. 9 shows the communications between the interfaces in this case. Each interface and its communications with the other interfaces is described in greater detail in the following sections.

5.1. Browser

The browser is used to view and alter the grouping of frames into shots. The video is presented to the user in the form of a composite image, which is constructed by taking a horizontal and a vertical single-pixel slice from the center lines of each frame in the video and stacking them along the time axis. This form makes it easier to check the shots produced automatically, which are depicted visually by colored bars alongside the composite image, as shown in Fig. 11(d).
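The following sketch illustrates how such a composite image can be assembled, assuming the frames arrive as equally sized H x W grayscale arrays; shot boundaries appear as abrupt vertical seams in the stacked slices. The function name is our own.

import numpy as np

def composite_slices(frames) -> tuple:
    """Stack the center row and center column of each frame along time."""
    h_slices, v_slices = [], []
    for frame in frames:                    # frame: H x W grayscale array
        h, w = frame.shape
        h_slices.append(frame[h // 2, :])   # horizontal center-line slice
        v_slices.append(frame[:, w // 2])   # vertical center-line slice
    # Each column is one frame; cuts show up as vertical discontinuities.
    return np.stack(h_slices, axis=1), np.stack(v_slices, axis=1)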

Automatic shot boundary detection may produce unsatisfactory results in the case of slow wipes from one shot to the next, momentary changes in illumination (flashes), zooms, etc. Changes may be necessary in the automatically generated shotlist to include additional shots or to merge two shots. Any change made in the shot boundaries changes the shotlist and the set of representative icon images. The following operations are implemented to make it simple to modify the shot boundaries.

Split: A shot can be split into two shots by marking the point of the split. A representative frame for the new shot is generated, and the internal shot structure of the Browser is updated. This operation is used to incorporate shots which were not detected automatically.

Split Ahead: When there is a gradual shot change, it may not be easy for the user to locate the point where the shot should be split to get a good representative icon for the new shot. Using this operation, any point selected in the transition region produces a correct split. The point where the transition is completed is detected by processing the region following the point selected by the user.

Figure 10. Split Ahead Operation: (a) Example icons (b) Corresponding intensity plot (total intensity versus frame number)

As an example, the middle icon image in Fig. 10(a) shows the frame selected by the user based on visual inspection of the shot boundary displayed in the browser. This image does not represent the next shot, as it still contains an overlap from the earlier shot. The last icon image in Fig. 10(a) shows the frame correctly picked by the split ahead operation, in which the dissolve process is complete. This is achieved by following the gradient along the smoothed intensity plot [Fig. 10(b)] until the gradient direction changes or the gradient becomes negligible.
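A sketch of this search is given below; the smoothing width and the negligibility threshold are illustrative values, and the total-intensity curve is assumed to be available as a 1-D array indexed by frame number.

import numpy as np

def split_ahead(total_intensity: np.ndarray, selected: int,
                smooth: int = 5, eps: float = 1.0) -> int:
    """Find the frame where a gradual transition completes (sketch)."""
    kernel = np.ones(smooth) / smooth
    curve = np.convolve(total_intensity, kernel, mode="same")  # smoothed plot
    grad = np.gradient(curve)
    t = selected
    sign = np.sign(grad[t])
    # Follow the gradient until its direction changes or it becomes negligible.
    while t + 1 < len(curve) and np.sign(grad[t + 1]) == sign and abs(grad[t + 1]) > eps:
        t += 1
    return t  # last frame on the monotone intensity ramp; the dissolve ends here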

Merge: This operation is used to merge a shot into either of its adjoining shots. The shot to be merged is specified by selecting any frame within it. The icon representing the merged shot is deleted by this operation.

Play video: The actual video can be played in the VideoPlayer from any selected frame. This playback may be needed to determine the content of the shots and to detect subtle boundaries. While the video is playing, the Browser may track the video to keep the frame currently playing at the center of the viewing area.

Interactions: The Browser can produce a change in the shotlist. This information is provided to the TreeView via a message, and the change becomes visible immediately, i.e. a new shot appears at the specified location or a shot gets deleted automatically. This helps the user to actually see the icons representing the shots that are being created or deleted. In fact, the visual information from the tree can be used to determine actions taken in the browser, e.g. when two consecutive representative icons shown in the TreeView cover very similar subject matter, the user may choose to merge them using the Browser.

The user may opt to reload the TreeView interface using the new shotlist to redo the clustering, when there have been enough changes in the shotlist to make the earlier tree organization obsolete. The Browser can store the modified shotlist containing the changes made by the user and trigger the automatic clustering of the shots in the shotlist, to produce a mergelist which is used by TreeView.

5.2. TreeView: correction of VTOC

The TreeView interface is used to interact with the tree description of the video generated from the mergelist. The user is allowed full freedom in restructuring the tree, since semantic information can often be missed or misinterpreted by automatic processing. The following operations are provided to facilitate changes in the structure of the tree.

Move nodes: Nodes can be moved either one at a time or in groups by selecting the node(s) to be moved and the destination node. The moved node(s) are added as siblings of the selected destination node.

Add nodes: New leaf nodes can only be added through changes in the shotlist using the Browser. However, all types of non-leaf nodes can be added as parents of existing nodes.

Update: This operation is a utility to make it easier to move a large number of nodes. It repeats the previous operation on all siblings of the node which was selected for the previous operation. For example, if a subplot node is to be deleted by moving all its children to another subplot node, just one child needs to be explicitly moved; the others can be forced to move by using the update operation.

Interactions: When changes are made in the order of the shots using TreeView and the user wants to see these changes reflected in the Browser, (s)he can opt to send a signal to the Browser to reload the rearranged shotlist after saving it from TreeView. The VideoPlayer can be started from this interface exactly as in the Browser interface.

No explicit delete function is provided in this interface, since leaf nodes can only be deleted through changes in the shotlist using the Browser. All other (non-leaf) nodes are deleted automatically when they have no children.

The user can invoke these operations to regroup the shots into more meaningful stories and sub-plots. The order of the shots can also be changed from their usual temporal order to a more logical sequence. When used along with the Browser, all possible changes to the content and organization of the tree are supported.

The tree structure is stored as a treelist file so that organized videos can display the tree structure without going through the processing steps again. Modifications made by the user to the tree structure are also saved in the treelist.

5.3. TreeView: modifying shot groups

Though the primary function of the TreeView interface is to interact with the high-level structure, the same interface can also be used to view and modify the groups generated by the automatic clustering process. In this case, there are only two types of nodes attached to the root node. If a group contains a single member, the member shot is attached as a leaf node to the root. For groups containing more than one member, an intermediate Group node is attached, which contains the member shots as its children.

The tree operations described in the earlier section can be used to move shots out of existing groups or to create new groups. A modified mergelist can be generated which reflects the changes made by the user. This step needs to be performed before the tree structure is loaded, since the tree structure is constructed from the mergelist.

6. Conclusion and future work

We have proposed a hybrid video organization methodology which combines the benefits of automatic processing with human input. Algorithms for producing a table of contents automatically from the raw video have been developed by solving the sub-problems of cut detection, shot grouping and generation of the table of contents. Automatic processing reduces the user's work from going through the whole process manually to just providing error checking and higher level semantics. We have constructed an environment which provides effective tools to the user to guide the video organization process. The system has been tested on news stories and sports videos, and produces reasonable organization of the videos, requiring only small alterations from the user.

An important direction of future work is to have the system utilize the feedback from the user in an intelligent way. When the user merges two groups which were found to be different at the color or edge level, this may be due to the fact that some parts of the images from both groups match. Partial match templates could be generated from this information. Alternatively, the similarity may be based on audio, which could be found by comparing the audio streams associated with the merged groups. Using the cues from the user will further reduce the work which is needed from the user to modify the index structure produced automatically.

References

[1] B. Abraham and A. Chuang. Outlier detection and time series modeling. Technometrics, 31(2):241–248, May 1989.

[2] P. Aigrain and P. Joly. The automatic real-time analysis of film editing and transition effects and its applications. Computer and Graphics, 18(1):93–103, 1994.

[3] F. Arman, R. Depommier, A. Hsu, and M. Y. Chiu. Content-based browsing of video sequences. ACM Multimedia, pages 97–103, Aug 1994.

[4] P. England, R. B. Allen, S. M., and H. A. I/Browse: the Bellcore video library toolkit. Storage and Retrieval for Still Image and Video Databases, SPIE, IV:254–264, Feb 1996.

[5] A. J. Fox. Outliers in time series. Journal of the Royal Statistical Society, Series B(34):350–363, 1972.

[6] B. Gunsel, A. M. Ferman, and A. M. Tekalp. Video indexing through integration of syntactic and semantic features. IEEE Multimedia Systems, pages 90–95, 1996.

[7] H. Hampapur, R. Jain, and T. Weymouth. Digital video segmentation. Proc. of ACM Multimedia Conference, pages 357–363, 1994.

[8] L. K. Hotta and M. M. C. Neves. A brief review of tests for detection of time series outliers. ESTADISTICA, 44(142):103–148, 1992.

[9] K. L. Kelly and D. B. Judd. The ISCC-NBS method of designating colors and a dictionary of color names. National Bureau of Standards Circular, (553), Nov 1 1955.

[10] J. Meng, Y. Juan, and S. F. Chang. Scene change detection in a MPEG compressed video sequence. Digital Video Compression Algorithms and Technologies, SPIE, pages 14–25, Feb 1995.

[11] I. K. Sethi and N. Patel. A statistical approach to scene change detection. Storage and Retrieval for Image and Video Databases, SPIE, III:329–338, Feb 1995.

[12] M. J. Swain and D. H. Ballard. Indexing via color histograms. Third International Conference on Computer Vision, pages 390–393, 1990.

[13] D. Swanberg, C. F. Shu, and R. Jain. Knowledge guided parsing in video databases. Storage and Retrieval for Image and Video Databases, SPIE, 1908:13–25, Feb 1993.

[14] G. Wyszecki and W. S. Stiles. Color Science: Concepts and Methods, Quantitative Data and Formulae. John Wiley & Sons, Inc., 1982.

[15] M. M. Yeung and B. L. Yeo. Time-constrained clustering for segmentation of video into story units. International Conference on Pattern Recognition, pages 375–380, 1996.

[16] M. M. Yeung, B. L. Yeo, W. Wolf, and B. Liu. Video browsing using clustering and scene transitions on compressed sequences. Multimedia Computing and Networking, SPIE, 2417:399–413, 1995.

[17] H. Zhang, A. Kankanhalli, and S. W. Smoliar. Automatic parsing of full-motion video. ACM Multimedia Systems, 1:10–28, 1993.

[18] H. J. Zhang, Y. H. Gong, S. W. Smoliar, and S. Y. Liu. Automatic parsing of news video. International Conference on Multimedia Computing and Systems, pages 45–54, 1994.

Figure 11. The graphical interface of each component, including the Video Player and (d) the Browser
