Sparse Solution of Underdetermined Linear Equations

by Stagewise Orthogonal Matching Pursuit

David L. Donoho (1), Yaakov Tsaig (2), Iddo Drori (1), Jean-Luc Starck (3)

March 2006

Abstract

Finding the sparsest solution to underdetermined systems of linear equations y = Φx is NP-hard in general. We show here that for systems with 'typical'/'random' Φ, a good approximation to the sparsest solution is obtained by applying a fixed number of standard operations from linear algebra.

Our proposal, Stagewise Orthogonal Matching Pursuit (StOMP), successively transforms the signal into a negligible residual. Starting with initial residual r_0 = y, at the s-th stage it forms the 'matched filter' Φ^T r_{s−1}, identifies all coordinates with amplitudes exceeding a specially-chosen threshold, solves a least-squares problem using the selected coordinates, and subtracts the least-squares fit, producing a new residual. After a fixed number of stages (e.g. 10), it stops. In contrast to Orthogonal Matching Pursuit (OMP), many coefficients can enter the model at each stage in StOMP while only one enters per stage in OMP; and StOMP takes a fixed number of stages (e.g. 10), while OMP can take many (e.g. n). StOMP runs much faster than competing proposals for sparse solutions, such as ℓ1 minimization and OMP, and so is attractive for solving large-scale problems.

We use phase diagrams to compare algorithm performance. The problem of recovering a k-sparse vector x0 from (y, Φ), where Φ is random n × N and y = Φx0, is represented by a point (n/N, k/n) in this diagram; here the interesting range is k < n < N. We find that StOMP finds (a good approximation to) the sparsest solution of y = Φx over a region of the sparsity/indeterminacy plane comparable to the region where ℓ1 minimization is successful. In fact, StOMP outperforms both ℓ1 minimization and OMP for extremely underdetermined problems.

We rigorously derive a conditioned Gaussian distribution for the matched filtering coefficients at each stage of the procedure and rigorously establish a large-system limit for the performance variables of StOMP. We precisely calculate large-sample phase transitions; these provide asymptotically precise limits on the number of samples needed for approximate recovery of a sparse vector by StOMP.

We give numerical examples showing that StOMP rapidly and reliably finds sparse solutions in compressed sensing, decoding of error-correcting codes, and overcomplete representation.

Keywords: compressed sensing, decoding error-correcting codes, sparse overcomplete representation, phase transition, large-system limit, random matrix theory, Gaussian approximation, ℓ1 minimization, stepwise regression, thresholding, false discovery rate, false alarm rate, MIMO channel, multiple-access interference, successive interference cancellation, iterative decoding.

Acknowledgements This work was supported by grants from NIH, ONR-MURI, a DARPA BAA, and NSF DMS 00-77261, DMS 01-40698 (FRG) and DMS 05-05303.

1: Department of Statistics, Stanford University, Stanford CA, 94305

2: Institute for Computational Mathematics in Engineering, Stanford University, Stanford CA, 94305

3: DAPNIA/SEDI-SAP, Service d'Astrophysique, Centre Europeen d'Astronomie/Saclay, F-91191 Gif-sur-Yvette Cedex, France.

1 Introduction

The possibility of exploiting sparsity in signal processing is attracting growing attention. Over the years, several applications have been found where signals of interest have sparse representations and exploiting this sparsity offers striking benefits; see for example [11,28,26,25,7]. At the ICASSP 2005 conference a special session addressed the theme of exploiting sparsity, and a recent international workshop, SPARS05, was largely devoted to this topic.

Very recently, considerable attention has focused on the following Sparse Solutions Problem (SSP). We are given an n × N matrix Φ which is in some sense 'random', for example a matrix with iid Gaussian entries. We are also given an n-vector y and we know that y = Φx0 where x0 is an unknown sparse vector. We wish to recover x0; however, crucially, n < N, so the system of equations is underdetermined, and it is only the sparsity of x0 that makes recovery conceivable. This problem arises in several applications:

App1: Compressed Sensing. x0 represents the coefficients of a signal or image in a known basis which happens to sparsely represent that signal or image. Φ encodes a measurement operator, i.e. an operator yielding linear combinations of the underlying object. Here n < N means that fewer measurements are taken than the number of underlying coefficients; the sparsity of x0 nevertheless allows accurate reconstruction of the object from these few measurements.

App2: Error Correction. Information is transmitted in a coded block in which a small fraction of the entries may be corrupted. From the received data, one constructs a system y = Φx0; here x0 represents the values of errors which must be identified/corrected, y represents (generalized) check sums, and Φ represents a generalized checksum operator. It is assumed that the number of errors is smaller than a threshold, and so x0 is sparse. This sparsity allows perfect correction of the gross errors [9,48,28].

App3: Sparse Overcomplete Representation. x0 represents the synthesis coefficients of a signal y, which is assumed to be sparsely represented from terms in an overcomplete expansion; those terms are the columns of Φ. The sparsity allows recovery of a unique representation using only a few terms, despite the fact that the representation is in general nonunique [43,11,21,20,50,51].

In these applications, several algorithms are available to pursue sparse solutions; in some cases attractive theoretical results are known, guaranteeing that the solutions found are the sparsest possible solutions. For example, consider the algorithm of ℓ1 minimization, which finds the solution to y = Φx having minimal ℓ1 norm. Also called Basis Pursuit (BP) [11], this method enjoys some particularly striking theoretical properties, such as rigorous proofs of exact reconstruction under seemingly quite general circumstances [21,35,32,7,16,8,17,18].

Unfortunately, some of the most powerful theoretical results are associated with fairly heavy computational burdens. The research reported here began when, in applying the theory of compressed sensing to NMR spectroscopy, we found that a straightforward application of the ℓ1 minimization ideas in [17,58] required several days of solution time per (multidimensional) spectrum. This seemed a prohibitive computational expense to us, even though computer time is relatively less precious than spectrometer time.

In fact, numerous researchers have claimed that general-purpose ℓ1 minimization is much too slow for large-scale applications. Some have advocated a heuristic approach, Orthogonal Matching Pursuit (OMP) (also called greedy approximation and stepwise regression in other fields) [43,52,53,55,54], which, though often effective in empirical work, does not offer the strong theoretical guarantees that attach to ℓ1 minimization. (For other heuristic approaches, see [50,51,29].)

In this paper we describe Stagewise Orthogonal Matching Pursuit (StOMP), a method for approximate sparse solution of underdetermined systems with the property either that Φ is 'random' or that the nonzeros in x0 are randomly located, or both. StOMP is significantly faster than the earlier methods BP and OMP on large-scale problems with sparse solutions. Moreover, StOMP permits a theoretical analysis showing that StOMP is comparably successful to BP at finding sparse solutions.

Our analysis uses the notion of comparison of phase transitions as a performance metric. We consider the phase diagram, a 2D graphic with coordinates measuring the relative sparsity of x0 (number of nonzeros in x0 / number of rows in Φ), as well as the indeterminacy of the system y = Φx (number of rows in Φ / number of columns in Φ). StOMP's large-n performance exhibits two phases (success/failure) in this diagram, as does the performance of BP. The 'success phase' (the region in the phase diagram where StOMP successfully finds sparse solutions) is large and comparable in size to the success phase for ℓ1 minimization. In a sense StOMP is more effective at finding sparse solutions to large extremely underdetermined problems than either ℓ1 minimization or OMP; its phase transition boundary is even higher at extreme sparsity than those of the other methods. Moreover, StOMP takes a few seconds to solve problems that may require days for ℓ1 solution. As a result StOMP is well suited to large-scale applications. Indeed we give several explicitly worked-out examples of realistic size illustrating the performance benefits of this approach.

Our analysis suggests the slogan

noiseless underdetermined problems behave like noisy well-determined problems,

i.e. coping with incompleteness of the measurement data is (for 'random Φ') similar to coping with Gaussian noise in complete measurements. At each StOMP stage, the usual set of matched filter coefficients is a mixture of 'noise' caused by cross-talk (non-orthogonality) and true signal; setting an appropriate threshold, we can subtract identified signal, causing a reduction in cross-talk at the next iteration. This is more than a slogan; we develop a theoretical framework for rigorous asymptotic analysis. Theorems 1-3 below allow us to track explicitly the successful capture of signal, and the reduction in cross-talk, stage by stage, rigorously establishing (asymptotic) success below phase transition, together with the failure that occurs above phase transition. The theory agrees with empirical finite-n results.

Our paper is organized as follows. Section 2 presents background on the sparse solutions problem; Section 3 introduces the StOMP algorithm and documents its favorable performance; Section 4 develops a performance measurement approach based on the phase diagram and phase transition. Section 5 analyzes the computational complexity of the algorithm. Section 6 develops an analytic large-system limit for predicting phase transitions which agree with empirical performance characteristics of StOMP. Section 7 develops the Gaussian noise viewpoint which justifies our thresholding rules. Section 8 describes the performance of StOMP under variations [adding noise, varying the distribution of nonzero coefficients, varying the matrix ensemble] and documents the good performance of StOMP under all these variations.

Section 9 presents computational examples in applications App1-App3 that document the success of the method in simulated model problems. Section 10 describes the available software package which reproduces the results in this paper, and Section 11 discusses the relationship of our results to related ideas in multiuser detection theory and to previous work on the sparse solutions problem.

2 Sparse Solution Preliminaries

Recall the Sparse Solutions Problem (SSP) mentioned in the introduction. In the SSP, an unknown vector x0 ∈ R^N is of interest; it is assumed sparse, which is to say that the number k of nonzeros is much smaller than N. We have the linear measurements y = Φx0 where Φ is a known n by N matrix, and wish to recover x0.

Of course, if Φ were a nonsingular square matrix, with n = N, we could easily recover x from y; but we are interested in the case where n < N. Then the system is underdetermined, and the sparsity of x0 must supply the missing information.

For now, we consider a specific collection of matrices where sparsity proves valuable. Until we say otherwise, let Φ be a random matrix taken from the Uniform Spherical Ensemble (USE); the columns of Φ are iid points on the unit sphere S^{n−1} [16,17]. Later, several other ensembles will be introduced.

3 Stagewise Orthogonal Matching Pursuit

StOMP aims to achieve an approximate solution to y = Φx0, where Φ comes from the USE and x0 is sparse. In this section, we describe its basic ingredients. In later sections we document and analyse its performance.

Figure 1: Schematic Representation of the StOMP algorithm.

3.1 The Procedure

StOMP operates in S stages, building up a sequence of approximations x_0, x_1, ... by removing detected structure from a sequence of residual vectors r_1, r_2, .... Figure 1 gives a diagrammatic representation. StOMP starts with initial 'solution' x_0 = 0 and initial residual r_0 = y. The stage counter s starts at s = 1. The algorithm also maintains a sequence of estimates I_1, ..., I_s of the locations of the nonzeros in x_0.

The s-th stage applies matched filtering to the current residual, getting a vector of residual correlations

c_s = Φ^T r_{s−1},

which we think of as containing a small number of significant nonzeros in a vector disturbed by Gaussian noise in each entry. The procedure next performs hard thresholding to find the significant nonzeros; the thresholds are specially chosen based on the assumption of Gaussianity [see below]. Thresholding yields a small set J_s of 'large' coordinates:

J_s = { j : |c_s(j)| > t_s σ_s };

here σs is a formal noise level and t s is a threshold parameter.We merge the subset of newly selected coordinates with the previous support estimate,thereby updating the estimate:

I s =I s ?1∪J s .

We then project the vector y on the columns of Φ belonging to the enlarged support. Letting Φ_I denote the n × |I| matrix with columns chosen using index set I, we have the new approximation x_s supported in I_s with coefficients given by

(x_s)_{I_s} = (Φ_{I_s}^T Φ_{I_s})^{−1} Φ_{I_s}^T y.

The updated residual is

r_s = y − Φ x_s.

We check a stopping condition and, if it is not yet time to stop, we set s := s + 1 and go to the next stage of the procedure. If it is time to stop, we set x̂_S = x_s as the final output of the procedure.

Remarks:

1. The procedure resembles Orthogonal Matching Pursuit (known to statisticians as Forward Stepwise Regression). In fact the two would give identical results if S were equal to n and if, by coincidence, the threshold t_s were set in such a way that a single new term were obtained in J_s at each stage.

2. The thresholding strategy used in StOMP (to be described below) aims to have numerous terms enter at each stage, and aims to have a fixed number of stages. Hence the results will be different from OMP.

3. The formal noise level is σ_s = ‖r_s‖_2 / √n, and typically the threshold parameter takes values in the range 2 ≤ t_s ≤ 3.

4. There are strong connections to: stagewise/stepwise regression in statistical model building; successive interference cancellation multiuser detectors in digital communications; and iterative decoders in error-control coding. See the discussion in Section 11.
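To make the procedure concrete, here is a minimal sketch of the stages just described, in Python/NumPy. It is an illustration, not the authors' released software (see Section 10): a fixed threshold parameter t stands in for the CFAR/CFDR threshold-selection rules of Section 3.4, and a plain least-squares solve stands in for the CG solver discussed in Section 5.

```python
import numpy as np

def stomp(Phi, y, S=10, t=2.5, tol=1e-6):
    """Stagewise Orthogonal Matching Pursuit (illustrative sketch).

    Phi : (n, N) matrix; y : (n,) measurement vector.
    S   : fixed number of stages; t : threshold parameter (typically 2..3).
    """
    n, N = Phi.shape
    x = np.zeros(N)
    r = y.astype(float).copy()               # r_0 = y
    I = np.zeros(N, dtype=bool)              # support estimate I_s
    for s in range(1, S + 1):
        c = Phi.T @ r                        # matched filter: c_s = Phi^T r_{s-1}
        sigma = np.linalg.norm(r) / np.sqrt(n)   # formal noise level sigma_s
        J = np.abs(c) > t * sigma            # hard thresholding: J_s
        I |= J                               # I_s = I_{s-1} U J_s
        if not I.any():
            break
        # least-squares projection of y onto the selected columns
        coef, *_ = np.linalg.lstsq(Phi[:, I], y, rcond=None)
        x = np.zeros(N)
        x[I] = coef
        r = y - Phi @ x                      # r_s = y - Phi x_s
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break                            # residual negligible: stop early
    return x
```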

Our recommended choice of S (namely S = 10) and our recommended threshold-setting procedures (see Section 3.4 below) have been designed so that, when x0 is sufficiently sparse, the following two conditions are likely to hold upon termination:

• All nonzeros in x0 are selected in I_S.

• I_S has no more than n entries.

The next lemma motivates this design criterion. Recall that Φ is sampled from the USE and so the columns of Φ are in general position. The following is proved in Appendix A.

Lemma 3.1 Let the columns of Φ be in general position. Let I_S denote the support of x̂_S. Suppose that the support I_0 of x_0 is a subset of I_S. Suppose in addition that #I_S ≤ n. Then we have perfect recovery:

x̂_S = x_0.  (3.1)

3.2 An Example

We give a simple example showing that the procedure works in a special case.

We generated a coefficient vector x0 with k = 32 nonzeros, having amplitudes uniformly distributed on [0,1]. We sampled a matrix Φ at random from the USE with n = 256, N = 1024, and computed a linear measurement vector y = Φx0. Thus the problem of recovering x0 given y is 1:4 underdetermined (i.e. δ = n/N = .25), with underlying sparsity measure ρ = k/n = .125. To this SSP, we applied StOMP coupled with the CFAR threshold selection rule to be discussed below. The results are illustrated in Figure 2.

Panels (a)-(i) depict each matched filtering output, its hard thresholding and the evolving approximation. As can be seen, after 3 stages a result is obtained which is quite sparse and, as the final panel shows, matches well the object x0 which truly generated the data. In fact, the error at the end of the third stage measures ‖x̂_3 − x0‖_2 / ‖x0‖_2 = 0.022, i.e. a mere 3 stages were required to achieve an accuracy of 2 decimal digits.
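The setting of this example can be reproduced along the following lines, using the sketch from Section 3.1 (with its fixed-threshold variant standing in for the CFAR rule):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, k = 256, 1024, 32
Phi = rng.standard_normal((n, N))
Phi /= np.linalg.norm(Phi, axis=0)         # USE: columns iid uniform on S^{n-1}
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.uniform(0, 1, k)  # amplitudes in [0,1]
y = Phi @ x0

xhat = stomp(Phi, y)                        # sketch from Section 3.1
print(np.linalg.norm(xhat - x0) / np.linalg.norm(x0))       # relative l2 error
```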

3.3 Approximate Gaussianity of Residual MAI

At the heart of our procedure are two thresholding schemes often used in Gaussian noise removal. (N.B. at this point we assume there is no noise in y!) To explain the relevance of Gaussian 'noise' concepts, note that at stage 1 the algorithm is computing

x̃ = Φ^T y;

this is essentially the usual matched filter estimate of x0. If y = Φx0 and x0 vanishes except in one coordinate, the matched filter output x̃ equals x0 perfectly. Hence z = x̃ − x0 is a measure of the disturbance to exact reconstruction caused by multiple nonzeros in x0. The same notion arises in digital communications, where it is called Multiple-Access Interference (MAI) [60]. Perhaps surprisingly - because there is no noise in the problem - the MAI in our setting typically has a Gaussian behavior.

Figure 2: Progression of the StOMP algorithm. Panels (a),(d),(g): successive matched filtering outputs c_1, c_2, c_3; Panels (b),(e),(h): successive thresholding results; Panels (c),(f),(i): successive partial solutions. In this example, k = 32, n = 256, N = 1024.

More specifically, if Φ is a matrix from the USE and if n and N are both large, then the entries in the MAI vector z have a histogram which is nearly Gaussian with standard deviation

σ ≈ ‖x0‖_2 / √n.  (3.2)

The heuristic justification is as follows. The MAI has the form

z(j) = x̃(j) − x0(j) = Σ_{ℓ≠j} ⟨φ_j, φ_ℓ⟩ x0(ℓ).

The thing we regard as 'random' in this expression is the matrix Φ. The term ξ_k^j ≡ ⟨φ_j, φ_k⟩ measures the projection of a random point on the sphere S^{n−1} onto another random point. This random variable has approximately a Gaussian distribution N(0, 1/n). For Φ from the USE, for a given fixed φ_j, the different random variables (ξ_k^j : k ≠ j) are independently distributed. Hence the quantity z(j) is an iid sum of approximately normal r.v.'s, and so, by standard arguments, should be approximately normal with mean 0 and variance

σ_j^2 = Var[ Σ_{ℓ≠j} ξ_ℓ^j x0(ℓ) ] = ( Σ_{ℓ≠j} x0(ℓ)^2 ) · Var(ξ_1^j) ≈ n^{−1} ‖x0‖_2^2.

Setting σ^2 = ‖x0‖_2^2 / n, this justifies (3.2).

Computational experiments validate the Gaussian approximation for the MAI. In Figure 3, Panels (a),(d),(g) display Gaussian QQ-plots of the MAI in the sparse case with k/n = .125, .1875 and .25, in the Uniform Spherical Ensemble with n = 256 and N = 1024. In each case, the QQ-plot appears straight, as the Gaussian model would demand.
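The flavor of this experiment can be reproduced in a few lines; the empirical standard deviation of the MAI should match (3.2). This is an illustrative check, not the paper's exact QQ-plot code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, k = 256, 1024, 32
Phi = rng.standard_normal((n, N))
Phi /= np.linalg.norm(Phi, axis=0)          # USE: unit-norm columns
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
z = Phi.T @ (Phi @ x0) - x0                 # MAI: z = x_tilde - x0
print(z.std(), np.linalg.norm(x0) / np.sqrt(n))   # the two should nearly agree
```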

Through the rest of this paper, the phrase Gaussian approximation means that the MAI has an approximately Gaussian marginal distribution. (The reader interested in formal proofs of Gaussian approximation can consult the literature of multiuser detection, e.g. [46,61,12]; such a proof is implicit in the proofs of Theorems 1 and 2 below. The connection between our work and MUD theory will be amplified in Section 11 below.)

Properly speaking, the term 'MAI' applies only at stage 1 of StOMP. At later stages there is residual MAI, i.e. MAI which has not yet been cancelled. This can be defined as

z_s(j) = x0(j) − φ_j^T r_s / ‖P^⊥_{I_{s−1}} φ_j‖_2^2,  j ∉ I_{s−1};

Figure 3: QQ plots comparing MAI with Gaussian distribution. Left column: k/n = .125, middle column: k/n = .1875, right column: k/n = .25. Top row: USE, middle row: RSE, bottom row: URP. The RSE and URP ensembles are discussed in Section 8. The plots all appear nearly linear, indicating that the MAI has a nearly Gaussian distribution.

the coordinates j ∈ I_{s−1} are ignored at stage s; the residual in those coordinates is deterministically 0.

Empirically, residual MAI also has a Gaussian behavior. Figure 4 shows quantile-quantile plots for the first few stages of the CFAR variant, comparing the residual MAI with a standard normal distribution. The plots are effectively straight lines, illustrating the Gaussian approximation. Later, we provide theoretical support for a perturbed Gaussian approximation to residual MAI.

3.4 Threshold Selection

Our threshold selection proposal is inspired by the Gaussian behavior of residual MAI. We view the vector of correlations c_s at stage s as consisting of a small number of 'truly nonzero' entries, combined with a large number of 'Gaussian noise' entries. The problem of separating 'signal' from 'noise' in such problems has generated a large literature, including the papers [24,27,26,1,23,37], which influenced our way of thinking.

We adopt language from statistical decision theory [39] and the field of multiple comparisons [38]. Recall that the support I_0 of x_0 is being (crudely) estimated in the StOMP algorithm. If a coordinate belonging to I_0 does not appear in I_S, we call this a missed detection. If a coordinate not in I_0 does appear in I_S, we call this a false alarm. The coordinates in I_S we call discoveries, and the coordinates in I_S \ I_0 we call false discoveries. (Note: false alarms are also false discoveries. The terminological distinction is relevant when we normalize to form a rate; thus the false alarm rate is the number of false alarms divided by the number of coordinates not in I_0; the false discovery rate is the fraction of false discoveries within I_S.)

We propose two strategies for setting the threshold. Ultimately, each strategy should land us in a position to apply Lemma 3.1: i.e. to arrive at a state where #I_S ≤ n and there are no missed detections. Then, Lemma 3.1 assures us, we perfectly recover: x̂_S = x_0. The two strategies are:

• False Alarm Control. We attempt to guarantee that the number of total false alarms, across all stages, does not exceed the natural codimension of the problem, defined as n − k. Subject to this, we attempt to make the maximal number of discoveries possible. To do so, we choose a threshold so the false alarm rate at each stage does not exceed a per-stage budget.

• False Discovery Control. We attempt to arrange that the number of false discoveries cannot exceed a fixed fraction q of all discoveries, and to make the maximum number of discoveries possible subject to that constraint. This leads us to consider Simes' rule [2,1].

Figure 4: QQ plots comparing residual MAI with Gaussian distribution. Quantiles of residual MAI at different stages of StOMP are plotted against Gaussian quantiles. Near-linearity indicates approximate Gaussianity.

The False Alarm Control strategy requires knowledge of the number of nonzeros k, or some upper bound. False Discovery Control does not require such knowledge, which makes it more convenient for applications, if slightly more complex to implement and substantially more complex to analyse [1]. The choice of strategy matters; the basic StOMP algorithm behaves differently depending on the threshold strategy, as we will see below.
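The following sketch shows one way to implement the two rules under the Gaussian model; it reflects our reading of the strategies (with scipy.stats.norm supplying Gaussian tail quantiles), while the authors' exact implementation is in the software of Section 10. For CFAR, the per-stage false-alarm budget of Section 4.3, (n−k)/(S(N−k)), would be passed as the rate; CFDR uses the Simes step-up rule with fraction q.

```python
import numpy as np
from scipy.stats import norm

def cfar_threshold(far_per_stage):
    """CFAR: threshold parameter t with P(|N(0,1)| > t) = far_per_stage,
    so each null coordinate is a false alarm at the budgeted rate."""
    return norm.isf(far_per_stage / 2)

def cfdr_select(c, sigma, q=0.5):
    """CFDR: select coordinates via the Simes (step-up) rule so the
    false discovery rate is controlled at fraction q."""
    p = 2 * norm.sf(np.abs(c) / sigma)       # two-sided p-values under N(0, sigma^2)
    m = len(p)
    order = np.argsort(p)
    passed = np.nonzero(p[order] <= q * np.arange(1, m + 1) / m)[0]
    sel = np.zeros(m, dtype=bool)
    if passed.size:
        sel[order[:passed.max() + 1]] = True  # all p-values up to the last crossing
    return sel
```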

Implementation details are available by downloading the software we have used to generate the results in this paper; see Section 10 below.

4 Performance Analysis by Phase Transition

When does StOMP work? To discuss this, we use the notions of phase diagram and phase transition.

4.1 Problem Suites, Performance Measures

By problem suite S(k,n,N) we mean a collection of Sparse Solution Problems defined by two ingredients: (a) an ensemble of random matrices Φ of size n by N; (b) an ensemble of k-sparse vectors x0. By standard problem suite S_st(k,n,N) we mean the suite with Φ sampled from the Uniform Spherical Ensemble, with x0 a random variable having k nonzeros sampled iid from a standard N(0,1) distribution.

For a given problem suite, a specific algorithm can be run numerous times on instances sampled from the problem suite. Its performance on each realization can then be measured according to some numerical or qualitative criterion. If we are really ambitious, and insist on perfect recovery, we use the performance measure 1_{x̂_S = x0}. More quantitative is the ℓ0-norm, ‖x̂_S − x0‖_0, the number of sites at which the two vectors disagree. Both these measures are inappropriate for use with floating point arithmetic, which does not produce exact agreement. We prefer to use instead ℓ_{0,ε}, the number of sites at which the reconstruction and the target disagree by more than ε = 10^{−4}. We can also use the quantitative measure relerr_2 = ‖x̂_S − x0‖_2 / ‖x0‖_2, declaring success when the measure is smaller than a fixed threshold (say ε).

For a qualitative performance indicator we simply report the fraction of realizations where the qualitative condition was true; for a quantitative performance measure, we present the mean value across instances at a given (k,n,N).
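In code, the two quantitative measures are one-liners (ε = 10^{−4} as above):

```python
import numpy as np

def l0_eps(xhat, x0, eps=1e-4):
    """l_{0,eps}: number of sites where reconstruction and target
    disagree by more than eps."""
    return int(np.sum(np.abs(xhat - x0) > eps))

def relerr2(xhat, x0):
    """relerr_2 = ||xhat - x0||_2 / ||x0||_2."""
    return np.linalg.norm(xhat - x0) / np.linalg.norm(x0)
```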

Figure 5: Phase diagram for ℓ1 minimization. Shaded attribute is the number of coordinates of the reconstruction which differ from the optimally sparse solution by more than 10^{−4}. The diagram displays a rapid transition from perfect reconstruction to perfect disagreement. Overlaid red curve is the theoretical curve ρ_{ℓ1}.

4.2 Phase Diagram

A phase diagram depicts the performance of an algorithm at a sequence of problem suites S(k,n,N). The average value of some performance measure is displayed as a function of ρ = k/n and δ = n/N. Both of these variables ρ, δ ∈ [0,1], so the diagram occupies the unit square.

To illustrate such a phase diagram, consider a well-studied case where something interesting happens. Let x_1 solve the optimization problem:

(P1)  min ‖x‖_1 subject to y = Φx.

As mentioned earlier, if y = Φx0 where x0 has k nonzeros, we may find that x_1 = x0 exactly when k is small enough. Figure 5 displays a grid of δ-ρ values, with δ ranging through 50 equispaced points in the interval [.05,.95] and ρ ranging through 50 equispaced points in [.05,.95]; here N = 800. Each point on the grid shows the mean number of coordinates at which original and reconstruction differ by more than 10^{−4}, averaged over 100 independent realizations of the standard problem suite S_st(k,n,N). The experimental setting just described, i.e. the δ-ρ grid setup, the value of N, and the number of realizations, is used to generate phase diagrams later in this paper, although the problem suite being used may change.
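A sketch of this experimental protocol follows, with solve standing in for whichever algorithm is under study (ℓ1 minimization here, StOMP in Section 4.3) and l0_eps as defined above; both names are placeholders, not the paper's code:

```python
import numpy as np

def standard_suite_instance(k, n, N, rng):
    """One draw from S_st(k,n,N): Phi from the USE (iid columns uniform
    on the unit sphere), x0 with k iid N(0,1) nonzeros at random sites."""
    Phi = rng.standard_normal((n, N))
    Phi /= np.linalg.norm(Phi, axis=0)
    x0 = np.zeros(N)
    x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    return Phi, x0

# 50 x 50 grid over (delta, rho) in [.05,.95]^2, N = 800, 100 reps per cell
N, reps = 800, 100
deltas = np.linspace(0.05, 0.95, 50)
rhos = np.linspace(0.05, 0.95, 50)
diagram = np.zeros((50, 50))
rng = np.random.default_rng(0)
for i, delta in enumerate(deltas):
    for j, rho in enumerate(rhos):
        n = int(round(delta * N)); k = max(1, int(round(rho * n)))
        cell = [l0_eps(solve(Phi, Phi @ x0), x0)
                for Phi, x0 in (standard_suite_instance(k, n, N, rng)
                                for _ in range(reps))]
        diagram[j, i] = np.mean(cell)   # shaded attribute of Figure 5
```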

This diagram displays a phase transition. For small ρ, it seems that high-accuracy reconstruction is obtained, while for large ρ reconstruction fails. The transition from success to failure occurs at different ρ for different values of δ.

This empirical observation is explained by a theory that accurately predicts the location of the observed phase transition and shows that, asymptotically for large n, this transition is perfectly sharp. Suppose that the problem (y, Φ) is drawn at random from the standard problem suite, and consider the event E_{k,n,N} that x0 = x1, i.e. that ℓ1 minimization exactly recovers x0. The paper [19] defines a function ρ_{ℓ1}(δ) (called there ρ_W) with the following property. Consider sequences of (k_n), (N_n) obeying k_n/n → ρ and n/N_n → δ. Suppose that ρ < ρ_{ℓ1}(δ). Then as n → ∞

Prob(E_{k_n,n,N_n}) → 1.

On the other hand, suppose that ρ > ρ_{ℓ1}(δ). Then as n → ∞

Prob(E_{k_n,n,N_n}) → 0.

The theoretical curve (δ, ρ_{ℓ1}(δ)) described there is overlaid on Figure 5, showing good agreement between asymptotic theory and experimental results.

Figure 6: Phase diagram for CFAR thresholding. Overlaid red curve is the heuristically-derived analytical curve ρ_FAR (see Appendix B). Shaded attribute: number of coordinates wrong by more than 10^{−4} relative error.

4.3 Phase Diagrams for StOMP

We now use phase diagrams to study the behavior of StOMP. Figure 6 displays the performance of StOMP with CFAR thresholding, with per-iteration false alarm rate (n−k)/(S(N−k)). The problem suite and underlying problem size, N = 800, are the same as in Figure 5. The shaded attribute again portrays the number of entries where the reconstruction misses by more than 10^{−4}. Once again, for very sparse problems (ρ small), the algorithm is successful at recovering (a good approximation to) x0, while for less sparse problems (ρ large), the algorithm fails. Superposed on this display is the graph of a heuristically-derived function ρ_FAR, which we call the predicted phase transition for CFAR thresholding. Again the agreement between the simulation results and the predicted transition is reasonably good. Appendix B explains the calculation of this predicted transition, although it is best read only after first reading Section 6.

Figure 7 shows the number of mismatches for the StOMP algorithm based on CFDR thresholding with false discovery rate q = 1/2. Here N = 800 and the display shows that, again, for very sparse problems (ρ small), the algorithm is successful at recovering (a good approximation to) x0, while for less sparse problems (ρ large), the algorithm fails. Superposed on this display is the graph of a heuristically-derived function ρ_FDR, which we call the predicted phase transition for CFDR thresholding. Again the agreement between the simulation results and the predicted transition is reasonably good, though visibly not quite as good as in the CFAR case.

5 Computation

Since StOMP seems to work reasonably well, it makes sense to study how rapidly it runs.

5.1 Empirical Results

Table 1 shows the running times for StOMP equipped with CFAR and CFDR thresholding, solving an instance of the problem suite S_st(k,n,N). We compare these figures with the time needed to solve the same problem instance via ℓ1 minimization and OMP. Here ℓ1 minimization is implemented using Michael Saunders' PDCO solver [49]. The simulations used to generate the figures in the table were all executed on a 3 GHz Xeon workstation, comparable with current desktop CPUs.

Table 1 suggests that a tremendous saving in computation time is achieved when using the StOMP scheme over traditional ℓ1 minimization. The conclusion is that CFAR- and CFDR-based methods have a large domain of applicability for sparsely solving random underdetermined systems, while running much faster than other methods at problem sizes of current interest.

Figure 7: Phase diagram for CFDR thresholding. Overlaid red curve is the heuristically-derived curve ρ_FDR (see Appendix B). Shaded attribute: number of coordinates wrong by more than 10^{−4} relative error.

Problem Suite (k,n,N)   | ℓ1      | OMP    | CFAR   | CFDR
(10,100,1000)           | 0.97    | 0.37   | 0.02   | 0.03
(100,500,1000)          | 22.79   | 0.79   | 0.42   | 0.32
(100,1000,10000)        | 482.22  | 7.98   | 5.24   | 3.28
(500,2500,10000)        | 7767.16 | 151.81 | 126.36 | 88.48

Table 1: Comparison of execution times (in seconds) for instances of the random problem suite S_st(k,n,N).


5.2 Complexity Analysis

The timing studies are supported by a formal analysis of the asymptotic complexity. In this analysis, we consider two scenarios.

• Dense Matrices. In this scenario, the matrix Φ defining an underdetermined linear system is an explicit n × N dense matrix stored in memory. Thus, applying Φ to an N-vector x involves nN flops.

• Fast Operators. Here, the linear operator Φ is not explicitly represented in matrix form. Rather, it is implemented as an operator taking a vector x and returning Φx. Classical examples of this type include the Fourier transform, Hadamard transform, and wavelet transform, to name just a few; all of these operators are usually implemented without matrix multiplication. Such fast operators are of key importance in large-scale applications. As a concrete example, consider an imaging scenario where the data is a d × d array, and Φ is an n by d^2 partial Fourier operator, with n = μd^2 proportional to d^2. Direct application of Φ would involve nd^2 = O(d^4) operations, whereas applying a 2-D FFT followed by random sampling would require merely O(d^2 log(d)) flops; the computational gain is evident. In our analysis below, we let V denote the cost of one application of a linear operator or its adjoint (corresponding to one matrix-vector multiplication).

In fact, as we will now see, the structure of the StOMP algorithm makes it a prime choice when fast operators are available, as nearly all its computational effort is invested in solving partial least-squares systems involving Φ and Φ^T. In detail, assume we are at the s-th stage of execution. StOMP starts by applying matched filtering to the current residual, which amounts to one application of Φ^T, at a cost of nN flops. Next, it applies hard thresholding to the residual correlations and updates the active set accordingly, using at most 2N additional flops. The core of the computation lies in calculating the projection of y onto the subset of columns Φ_{I_s}, to get a new approximation x_s. This is implemented via a Conjugate Gradient (CG) solver [34]. Each CG iteration involves application of Φ_{I_s} and Φ_{I_s}^T, costing at most 2nN + O(N) flops. The number of CG iterations used is a small constant, independent of n and N, which we denote ν. In our implementation we use ν = 10. Finally, we compute the new residual by applying Φ to the new approximation, requiring an additional nN flops. Summarizing, the total operation count per StOMP stage amounts to (ν+2)nN + O(N). The total number of StOMP stages, S, is a prescribed constant, independent of the data; in the simulations in this paper we set S = 10.

Readers familiar with OMP have by now doubtless recognized the evident parallelism in the algorithmic structure of StOMP and OMP. Indeed, much like StOMP, at each stage OMP computes residual correlations and solves a least-squares problem for the new solution estimate. Yet, unlike StOMP, OMP builds up the active set one element at a time. Hence, an efficient implementation would necessarily maintain a Cholesky factorization of the active set matrix and update it at each stage, thereby reducing the cost of solving the least-squares system. In total, k steps of OMP would take at most 4k^3/3 + knN + O(N) flops. Without any sparsity assumptions on the data, OMP takes at most n steps; thus, its worst-case performance is bounded by 4n^3/3 + n^2 N + O(N) operations. A key difference between StOMP and OMP is that the latter needs to store the Cholesky factor of the active set matrix in explicit form, taking up to n^2/2 memory elements. When n is large, as is often the case in 2- and 3-D image-reconstruction scenarios, this greatly hinders the applicability of OMP. In contrast, StOMP has very modest storage requirements. At any given point of the algorithm's execution, one needs only store the current estimate x_s, the current residual vector r_s, and the current active set I_s. This makes StOMP very attractive for use in large-scale applications.

Table 2 summarizes our discussion so far, offering a comparison of the computational complexity of StOMP, OMP and ℓ1 minimization via linear programming (LP). For the LP solver, we use a primal-dual barrier method for convex optimization (PDCO) developed by Michael Saunders [49]. The estimates listed in the table all assume worst-case behavior. Examining the bounds in the dense matrix case closely, we notice that StOMP is the only algorithm of the three admitting quadratic-order complexity estimates. In contrast, OMP and PDCO require cubic-order estimates for their worst-case performance bounds. Therefore, for large-scale problems StOMP can dominate due to its simple structure and efficiency. In the case where fast operators are applicable, StOMP yet again prevails; it is the only algorithm of the three requiring a constant number (S·(ν+2)) of matrix-vector multiplications to reach a solution.

Algorithm           | Dense Matrices           | Fast Operators
StOMP               | S(ν+2)nN + O(N)          | S(ν+2)·V + O(N)
OMP                 | 4n^3/3 + n^2 N + O(N)    | 4n^3/3 + 2n·V + O(N)
ℓ1 min. with PDCO   | S(2N)^3/3 + O(nN)        | 2N·V + O(nN)

Table 2: Worst-case complexity bounds for StOMP, OMP and PDCO. S denotes the maximum number of stages, ν denotes the maximum number of CG iterations employed per stage of StOMP, and V stands for the cost of one matrix-vector product (implemented as a fast operator).

To convey the scale of computational benefits in large problems, we conduct a simple experiment in a setting where Φ can be implemented as a fast operator. We consider a system y = Φx where Φ is made from only n = δN rows of the Fourier matrix. Φ can be implemented by application of a Fast Fourier Transform followed by a coordinate selection. Table 3 gives the results. Clearly the advantage of StOMP is even more convincing.

Problem Suite (k,n,N)        | ℓ1     | OMP    | CFAR | CFDR
S_PFE(500,10000,20000)       | 237.69 | 53.64  | 2.07 | 3.16
S_PFE(1000,20000,50000)      | 810.39 | 299.54 | 5.63 | 9.47

Table 3: Comparison of execution times (in seconds) in the random partial Fourier suite S_PFE(k,n,N). Because of the fast operator, StOMP outperforms OMP.
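The partial Fourier operator used in this experiment illustrates the fast-operator pattern: Φ is never stored, only applied. A sketch follows; the row-selection and normalization details are our illustrative choices, not the paper's:

```python
import numpy as np

def make_partial_fourier(n, N, rng):
    """Phi = n randomly chosen rows of the (unitary) N-point Fourier matrix.
    Returns closures applying Phi and its adjoint in O(N log N) flops."""
    rows = rng.choice(N, n, replace=False)
    def Phi(x):                      # FFT, then coordinate selection
        return np.fft.fft(x)[rows] / np.sqrt(N)
    def PhiT(y):                     # adjoint: embed, then inverse FFT
        z = np.zeros(N, dtype=complex)
        z[rows] = y
        return np.fft.ifft(z) * np.sqrt(N)
    return Phi, PhiT
```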

Figure 8: Empirical transition behaviors, varying n. (a) Fraction of cases with termination before stage S. (b) Fraction of missed detections. Averages of 1000 trials with n = 400, 800, 1600 and k = ⌈ρn⌉, N = ⌈n/δ⌉, δ = 1/4 and ρ varying. Sharpness of each transition seems to be increasing with n.

To make the comparison still more vivid, we point ahead to an imaging example from Section 9.1 below. There an image of dimensions d × d is viewed as a vector x of length N = d^2. Again the system is y = Φx where Φ is made from only n = δN rows of the Fourier matrix. One matrix-vector product costs V = 4N log N = 8d^2 log d.

How do the three algorithms compare in this setting? Plugging in S = 10, ν = 10, and V as above, we see that the leading term in the complexity bound for StOMP is 960·d^2 log d. In contrast, for OMP the leading term in the worst-case bound becomes (4δ^3/3)d^6 + 16δd^4 log d, and for ℓ1 minimization the leading term is 16d^4 log d. The computational gains from StOMP are indeed substantial. Moreover, to run OMP in this setting, we may need up to (δ^2/2)d^4 memory elements to store the Cholesky factorization, which renders it unusable for anything but the smallest d. In Section 9.1, we present actual running times of the different algorithms.

6 The Large-System Limit

Figures 6 and 7 suggest phase transitions in the behavior of StOMP, which would imply a certain well-defined asymptotic 'system capacity' below which StOMP successfully finds a sparse solution, and above which it fails. In this section, we review the empirical evidence for a phase transition in the large-system limit and develop theory that rigorously establishes it. We consider the problem suite S(k,n,N; USE, ±1) defined by random Φ sampled from the USE, and with y generated as y = Φx0, where x0 has k nonzero coefficients in random positions having entries ±1. This ensemble generates a slightly 'lower' transition than the ensemble used for Figures 6 and 7, where the nonzeros in x0 had iid Gaussian N(0,1) entries.

6.1 Evidence for Phase Transition

Figure 8 presents results of simulations at fixed ratios δ = n/N but increasing n. Two different quantities are considered: in panel (a), the probability of early termination, i.e. termination before stage S = 10 because the residual has been driven nearly to zero; in panel (b), the missed detection rate, i.e. the fraction of nonzeros in x0 that are not supported in the reconstruction x̂_S. Both quantities undergo transitions in behavior near ρ = .2.

Significantly, the transitions become more sharply defined with increasing n. As n increases, the early termination probability behaves increasingly like a raw discontinuity 1_{k/n ≥ ρ_FAR(n/N)} as n → ∞, while the fraction of missed detections behaves increasingly like a discontinuity in derivative, (k/n − ρ_FAR(n/N))_+. In statistical physics such limiting behaviors are called first-order and second-order phase transitions, respectively.

(k,n,N)         | ρ1    ρ2    ρ3    ρ4    | d1    d2    d3    d4    | ν1    ν2    ν3    ν4
(70,280,800)    | 0.250 0.130 0.074 0.041 | 1.000 0.773 0.630 0.521 | 2.857 2.630 2.487 2.377
(105,420,1200)  | 0.250 0.130 0.075 0.043 | 1.000 0.775 0.633 0.524 | 2.857 2.632 2.490 2.381
(140,560,1600)  | 0.250 0.131 0.075 0.043 | 1.000 0.776 0.632 0.522 | 2.857 2.633 2.489 2.379
(210,840,2400)  | 0.250 0.130 0.075 0.043 | 1.000 0.776 0.630 0.522 | 2.856 2.632 2.486 2.378
(78,280,800)    | 0.279 0.158 0.106 0.078 | 1.000 0.778 0.643 0.546 | 2.857 2.635 2.499 2.403
(117,420,1200)  | 0.279 0.157 0.104 0.073 | 1.000 0.781 0.646 0.546 | 2.857 2.638 2.502 2.403
(156,560,1600)  | 0.279 0.159 0.105 0.076 | 1.000 0.782 0.645 0.545 | 2.857 2.639 2.502 2.401
(235,840,2400)  | 0.280 0.159 0.106 0.076 | 1.000 0.778 0.644 0.542 | 2.856 2.635 2.501 2.399

Table 4: Dimension-normalized ratios ρ_s, d_s, ν_s. Problems of different sizes (k,n,N) with identical ratios ρ = k/n and δ = n/N. Note the similarity of entries in adjacent rows within each half of the table. Top half of table: just below phase transition; bottom half: just above phase transition.

(k,n,N)         | α1    α2    α3    α4    | β1    β2    β3    β4
(70,280,800)    | 0.041 0.035 0.032 0.031 | 0.481 0.429 0.444 0.421
(105,420,1200)  | 0.040 0.035 0.032 0.033 | 0.479 0.423 0.424 0.480
(140,560,1600)  | 0.040 0.035 0.032 0.031 | 0.478 0.428 0.427 0.481
(210,840,2400)  | 0.040 0.036 0.031 0.031 | 0.478 0.423 0.429 0.485
(78,280,800)    | 0.039 0.034 0.029 0.028 | 0.434 0.331 0.261 0.221
(117,420,1200)  | 0.038 0.033 0.029 0.030 | 0.436 0.341 0.295 0.248
(156,560,1600)  | 0.038 0.033 0.030 0.028 | 0.429 0.341 0.277 0.225
(235,840,2400)  | 0.039 0.033 0.030 0.028 | 0.433 0.332 0.278 0.220

Table 5: Dimension-normalized detector operating characteristics α_s, β_s. Problems of different sizes (k,n,N) with identical ratios ρ = k/n and δ = n/N. Note the similarity of entries in adjacent rows within each half of the table. Top half of table: just below phase transition; bottom half: just above phase transition.

6.2 Evidence for Intensivity

In statistical physics, a system property is called intensive when it tends to a stable limit as the system size increases. Many properties of StOMP, when expressed as ratios to the total system size, are intensive. Such properties include: the number of detections at each stage, the number of true detections, the number of false alarms, and the squared norm of the residual r_s. When sampling from the standard problem suite, all these properties - after normalization by the problem size n - behave as intensive quantities.

Table 4 illustrates this by considering 8 different combinations of (k,n,N), all at the same value of indeterminacy δ = n/N, in two groups of four, each group at one common value of ρ. Within each group with common values of δ = n/N and ρ = k/n, we considered four different problem sizes n.

Stage s of StOMP considers an n_s-dimensional subspace, using k_s nonzeros out of N_s possible terms, where

k_s = k − #true discoveries prior to stage s,
n_s = n − #discoveries prior to stage s,
N_s = N − #discoveries prior to stage s.

The table presents dimension-normalized ratios ρ_s = k_s/n, d_s = n_s/n, ν_s = N_s/n. If these quantities are intensive, they will behave similarly at the same stage even at different problem sizes. The evidence of the table suggests that they are indeed intensive.

Also important in what follows are two threshold detector operating characteristics: the stage-specific false-alarm rate

α_s = Prob{ |⟨φ_j, r_s⟩| > t_s σ_s | j ∈ I_0^c ∩ I_{s−1}^c }

and the stage-specific correct detection rate

β_s = Prob{ |⟨φ_j, r_s⟩| > t_s σ_s | j ∈ I_0 ∩ I_{s−1}^c }.

There is also evidence of intensivity for these quantities; see Table 5.

6.3 Limit Quantities

We have seen that the dimension-normalized quantities ρ_s = k_s/n_s and d_s = n_s/n are empirically nearly constant for large n. We now present a theoretical result to explain this.

For our result, we fix S > 0 and consider the CFAR algorithm designed for that specified S. We also fix ρ, δ ∈ (0,1). Let k = k_n = ⌈ρn⌉, N_n = ⌈n/δ⌉. Run StOMP on an instance of the problem suite S(k,n,N; USE, ±1). Let ‖r_s‖_2 denote the norm of the residual at stage s.

Recall the notation p.lim for limit in probability; a sequence of random variables (v_n : n ≥ 1) has the nonstochastic limit v̄ in probability, written v̄ = p.lim_{n→∞} v_n, if, for each ε > 0, Prob{|v_n − v̄| > ε} → 0 as n → ∞. In the result below, let k_{s,n} denote the random quantity k_s on a problem from the standard suite at size n; similarly for r_{s,n}, d_{s,n}, n_{s,n}, α_{s,n}, β_{s,n}. Also, if n_{s,n} = 0, the iteration stops immediately, and the monitoring variables at that and all later stages up to stage S are assigned values in the obvious way: r_{s,n} = 0, β_{s,n} = 0, α_{s,n} = 0, etc.

Theorem 1 (Large-System Limit). There are constants σ̄_s, μ̄_s depending on s = 1,...,S, on δ and on ρ, so that

σ̄_s^2 = p.lim_{n→∞} ‖r_{s,n}‖_2^2 / n,  μ̄_s = p.lim_{n→∞} ‖r_{s,n}‖_2^2 / k_{s,n},  s = 1,...,S.

We also have large-system limits in probability for the detector operating characteristics

ᾱ_s = p.lim_{n→∞} α_{s,n},  β̄_s = p.lim_{n→∞} β_{s,n},  s = 1,...,S,

where the limits depend on s, δ and ρ. Finally, the normalized dimensions also have large-system limits:

ρ̄_s = p.lim_{n→∞} k_{s,n} / n_{s,n},  d̄_s = p.lim_{n→∞} n_{s,n} / n,  s = 1,...,S,

with limits depending on δ and on ρ.

See Appendix C for the proof. It is best studied after first becoming familiar with Section 7.

6.4 The Predicted Phase Transition

Fix a small η > 0; we say that StOMP is successful if, at termination of the S-stage StOMP algorithm:

• the active set I_S contains all but a fraction η of the elements of I_0: |I_0 \ I_S| ≤ η|I_0|; and

• the active set I_S contains no more than (1−η)n elements: |I_S| ≤ (1−η)n.

Lemma 3.1 motivates this definition (in the case η = 0). When this property holds, it is typically the case that x̂_S ≈ x_0, as experiments have shown.

The existence of large-system limits allows us to derive phase transitions in the 'success' property; the corresponding curves ρ_FAR and ρ_FDR decorate Figures 6 and 7. Empirically, these transitions happen at about the same place as apparent transitions for other candidate definitions of 'success', such as exact equality x̂_S = x_0. The key point is that the transitions in this property can be calculated analytically, and are rigorously in force at large n, whereas empirical phase transitions are simply interpretations.

This analytic calculation works by tracking the large-system limit variables (ρ̄_s, d̄_s, ν̄_s, σ̄_s) as a function of s; thus we use dimension-normalized units, 1 ↔ n, ρ ↔ k, 1/δ ↔ N, and this state vector is initialized to (ρ, 1, 1/δ, ρ).

The heart of the calculation is an iteration over s = 1,...,S. At stage s, we first calculate the model false alarm rate and the model true detect rate:

ᾱ_s = p.lim_{n→∞} Prob{ |⟨φ_j, r_s⟩| > t_s σ_{s,n} | j ∈ I_0^c ∩ I_{s−1}^c },  (6.1)

β̄_s = p.lim_{n→∞} Prob{ |⟨φ_j, r_s⟩| > t_s σ_{s,n} | j ∈ I_0 ∩ I_{s−1}^c }.  (6.2)

δ           | .05    | .10    | .15    | .20    | .25    | .35    | .50    | .65    | .80
Empirical   | 0.1250 | 0.1562 | 0.1792 | 0.2000 | 0.2225 | 0.2607 | 0.3212 | 0.3663 | 0.3852
Theoretical | 0.1247 | 0.1498 | 0.1703 | 0.1869 | 0.2076 | 0.2518 | 0.2913 | 0.3567 | 0.4008

Table 6: Empirical and theoretical phase transitions. Comparisons at several values of indeterminacy δ. Top row: empirical calculation (N = 1600); bottom row: theoretical calculation.

This part of the calculation requires theoretical developments from the next section; specifically Corollaries 7.1 and 7.2. We then update the limit quantities in the obvious way:

d̄_s = d̄_{s−1} − β̄_s ρ̄_s − ᾱ_s (ν̄_s − ρ̄_s),

ρ̄_{s+1} = ρ̄_s (1 − β̄_s),  ν̄_{s+1} = ν̄_s − ᾱ_s (ν̄_s − ρ̄_s) + (ρ̄_{s+1} − ρ̄_s).

The calculation announces success if, at or before stage S,

d̄_s ≥ η,  ρ̄_s ≤ ηρ.

Otherwise, it announces failure.

This calculation evaluates a specific parameter combination (δ, ρ) for success or failure. We are really interested in the boundary ρ_{FAR,S}(δ) which separates the 'success' region from the 'failure' region. By binary search, we obtain a numerical approximation to this boundary.
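In code, the iteration and the binary search take the following shape. The function alpha_beta, which must evaluate (6.1)-(6.2) via the conditioned Gaussian quantities of Section 7 (Corollaries 7.1-7.2), is assumed given; the sketch only encodes the state evolution above:

```python
def predicted_success(rho, delta, alpha_beta, S=10, eta=1e-3):
    """State evolution for the predicted CFAR phase transition (sketch).

    alpha_beta(s, rho_s, d_s, nu_s) must return the limit rates
    (alpha_s, beta_s) of (6.1)-(6.2); evaluating them needs the
    conditioned Gaussian quantities of Section 7 and is assumed given.
    """
    rho_s, d_s, nu_s = rho, 1.0, 1.0 / delta   # state init: (rho, 1, 1/delta)
    for s in range(1, S + 1):
        alpha, beta = alpha_beta(s, rho_s, d_s, nu_s)
        d_s = d_s - beta * rho_s - alpha * (nu_s - rho_s)   # d update
        rho_next = rho_s * (1 - beta)                       # undetected nonzeros
        nu_s = nu_s - alpha * (nu_s - rho_s) + (rho_next - rho_s)
        rho_s = rho_next
        if d_s >= eta and rho_s <= eta * rho:
            return True                        # success announced
    return False                               # failure announced

def rho_far_boundary(delta, alpha_beta, iters=30, **kw):
    """Binary search for the boundary rho_FAR,S(delta)."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if predicted_success(mid, delta, alpha_beta, **kw):
            lo = mid                           # still in the success region
        else:
            hi = mid
    return 0.5 * (lo + hi)
```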

In this calculation, there is no notion of problem size n; in principle the calculation is applicable to all large problem sizes. The assumption being made is that certain variables (such as the empirical false alarm rate) are intensive and, though random, can be approximated by a limit quantity. This has been established for the relevant variables by Theorem 1.

Table 6 compares the calculations made by this approach with the results of a StOMP simulation. The degree of match is apparent. The difference between the empirical transition and the theoretical prediction is smaller than the width of the transition; compare Figure 8. Since the empirical transition point is not a sharply defined quantity, the degree of match seems quite acceptable.

7 The Conditioned Gaussian Limit

Underlying Theorem 1 and the subsequent phase-transition calculations is a particular model for the statistical behavior of the coefficients ⟨φ_j, r_s⟩. We now introduce and derive that model.

7.1 The Conditioned Gaussian Model

Our model considers the quantities ⟨φ_j, r_s⟩ driving the StOMP algorithm. There are two kinds of behavior: one for j ∉ I_0 - the null case - and one for j ∈ I_0 - the non-null case.

7.1.1 Null Case

Define jointly Gaussian random variables Z_0, Z_1, ..., Z_S, with means zero and variances σ̄_s^2 defined by Theorem 1. The variances are decreasing: σ̄_s^2 > σ̄_{s+1}^2. The random variables have the covariance structure

Cov(Z_u, Z_s) = σ̄_{max(u,s)}^2.

That is to say, the process (Z_s : s = 1,...,S) behaves as a time-reversed martingale.

Consider the coefficient ⟨φ_j, r_s⟩ obtained by matched filtering of the s-th residual, and suppose that j is a truly null coordinate, i.e. j is not in the support of x_0. For a random variable X let L(X) denote the probability law of X. Consider the (USE, ±) problem suite with given values of δ = n/N and ρ = k/n, and n large. Our conditioned Gaussian model says that, in the CFAR case,

L(⟨φ_j, r_s⟩ | j ∉ I_0 ∪ I_{s−1}) ≈ L(Z_s | |Z_i| < t_i σ̄_i, 1 ≤ i < s).

In words, we model each null coefficient as a certain Gaussian random variable conditioned on certain non-exceedance events involving other, correlated random variables. In particular, we do not model it simply as a Gaussian random variable (except if s = 1). To enforce this distinction, we let Ẑ_s denote the random variable Z_s conditioned on {|Z_i| < t_i σ̄_i, 1 ≤ i < s}.

7.1.2 Non-Null Case

Define jointly Gaussian random variables X_0, X_1, ..., X_S, with means μ̄_s and variances σ̄_s^2 again deriving from Theorem 1. There is again the covariance appropriate to a time-reversed martingale:

Cov(X_u, X_s) = σ̄_{max(u,s)}^2.

Consider now the coefficient ⟨φ_j, r_s⟩ obtained by matched filtering of the s-th residual, where j is a truly non-null coordinate, i.e. j is in the support of x_0. Consider again the standard problem suite with given values of δ and ρ and n large. The conditioned Gaussian model says that

L(⟨φ_j, r_s⟩ | j ∈ I_0 ∩ I_{s−1}^c) ≈ L(X_s | |X_i| < t_i σ̄_i, 1 ≤ i < s).

In words, we model each non-null coefficient at stage s as a certain nonzero-mean Gaussian random variable conditioned on a sequence of non-exceedances at earlier stages. In this case, the conditioning event looks the same as in the null case; however, the random variables X_i do not have mean zero. We let X̂_s denote the random variable X_s conditioned on {|X_i| < t_i σ̄_i, 1 ≤ i < s}.
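Both conditioned laws are easy to simulate, which gives a direct check on the model and a crude way to evaluate ᾱ_s and β̄_s: draw the jointly Gaussian vector with the time-reversed-martingale covariance, keep the draws surviving all earlier thresholds, and look at the last coordinate. A Monte Carlo sketch (the rejection approach is our illustration, not the paper's calculation procedure):

```python
import numpy as np

def conditioned_tail(sigma2, t, mu=None, draws=200_000, rng=None):
    """Sample Z_s (mu=None: null case) or X_s (mu given: non-null case)
    conditioned on |Z_i| < t_i * sigma_i for i < s, where s = len(sigma2),
    and return the conditional exceedance rate P{|Z_s| >= t_s * sigma_s}.

    Covariance: Cov(Z_u, Z_v) = sigma2[max(u, v)] (time-reversed martingale)."""
    rng = rng or np.random.default_rng()
    s = len(sigma2)
    C = np.array([[sigma2[max(u, v)] for v in range(s)] for u in range(s)])
    m = np.zeros(s) if mu is None else np.asarray(mu, dtype=float)
    Z = rng.multivariate_normal(m, C, size=draws)
    thr = np.asarray(t) * np.sqrt(sigma2)
    survive = np.all(np.abs(Z[:, :-1]) < thr[:-1], axis=1)  # earlier non-exceedances
    return np.mean(np.abs(Z[survive, -1]) >= thr[-1])       # alpha_s or beta_s estimate
```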

7.1.3 The Gaussian Approximation

The CG model, which will later be seen to be highly accurate, explains why the Gaussian approximation sometimes works. The model has the following consequence. Let p_{Ẑ_s}(z) denote the marginal probability density of the CG variable Ẑ_s and let g_{σ̄_s}(z) denote the probability density of a Gaussian N(0, σ̄_s^2). Under a simple normal approximation, we would have p_{Ẑ_s}(z) ≈ g_{σ̄_s}(z). Under our model,

p_{Ẑ_s}(z) = Prob{ |Z_i| < t_i σ̄_i, 1 ≤ i < s | Z_s = z } g_{σ̄_s}(z) / Prob{ |Z_i| < t_i σ̄_i, 1 ≤ i < s }.

We have the identity p_{Ẑ_s}(z) = λ_s(z) g_{σ̄_s}(z), where

λ_s(z) = Prob{ |Z_i| < t_i σ̄_i, 1 ≤ i < s | Z_s = z } / Prob{ |Z_i| < t_i σ̄_i, 1 ≤ i < s }.

A parallel definition for the random variables X̂_s sets

ξ_s(x) = p_{X̂_s}(x) / p_{X_s}(x).

In Figure 9, Panel (a), we display exact results for λ_s under our model, with a choice of σ̄ obtained from analyzing the case δ = .2, ρ = .2. As one can see, each λ_s is effectively 1 near zero, and drops to zero in the tails. In this case, each underlying σ̄_s is small and each g_{σ̄_s} is effectively concentrated over the region where λ_s is nearly 1. Hence the Gaussian approximation to the conditioned Gaussian model is not bad, for the parameters ρ and δ underlying this situation. Panel (b) depicts ξ_s with a choice of μ̄, τ̄ obtained from analyzing the case δ = .2, ρ = .2. Now we have only a vaguely Gaussian shape.

7.2 Derivation of the Model

The first part of this section will prove:

Theorem 2 Let Ẑ_s be as defined in Section 7.1.1. Then, for w ∈ R,

P{ ⟨φ_j, r_s⟩ ≤ w | j ∉ I_0 & j ∉ I_{s−1} } → P{ Ẑ_s ≤ w },  n → ∞,  s = 1,...,S.

We immediately gain a formula for computing the limiting threshold false-alarm probability:

Figure 9: (a) λ_s(z), the factor multiplying the standard normal density to get the conditioned Gaussian density, null case. (b) ξ_s(z), the factor multiplying the (non-standard) normal density to get the conditioned Gaussian density, non-null case. Here s = 2,3,4, δ = 1/4 and ρ = .2.

Corollary 7.1

ᾱ_s = P{ |Ẑ_s| ≥ t_s σ̄_s }.  (7.3)

The comparable result in the non-null case is:

Theorem 3 Let X̂_s be as defined in Section 7.1.2. Then

P{ ⟨φ_j, r_s⟩ ≤ w | j ∈ I_0 & j ∉ I_{s−1} } → P{ X̂_s ≤ w },  n → ∞.

We obtain a formula for computing the limiting threshold correct detection rate:

Corollary 7.2

β̄_s = P{ |X̂_s| ≥ t_s σ̄_s }.  (7.4)

The formulas (7.3) and (7.4) explain how to perform in principle the calculations (6.1)-(6.2) needed for calculating phase transitions in Section 6.4. For complete documentation of the calculation procedure, see Section 10.

7.2.1 Null Case

Suppose we are given a deterministic vector v_0 ∈ R^n and a sequence of deterministic orthoprojectors Q_1, Q_2, Q_3, ..., where Q_1 = Id. We are given a random vector ψ_0, Gaussian distributed with mean zero and diagonal covariance matrix (1/n) I_n. Define v_i = Q_i v_0.

Lemma 7.1 Define random variables Z_s = ⟨ψ_0, v_s⟩, s = 1,...,S. Then (Z_s, s = 1,...,S) is a jointly Gaussian sequence, with mean zero, variances σ_s^2 = ‖v_s‖_2^2 / n and covariances Cov(Z_s, Z_u) = σ_{max(s,u)}^2.

This is self-evident, and we omit the proof.

Now introduce the random variable

φ_0 = ψ_0 / ‖ψ_0‖_2.  (7.5)

Of course φ_0 is a random point on S^{n−1}, in fact uniformly distributed.

Lemma 7.2 Define random variables W_s = ⟨φ_0, v_s⟩, s = 1,...,S. For fixed S, the sequence (W_s, s = 1,...,S) is asymptotically well-approximated by a joint Gaussian sequence (Z_s), with variances σ_s^2 = ‖v_s‖_2^2 / n and covariances Cov(Z_s, Z_u) = σ_{max(s,u)}^2. More precisely, for a sequence ε_n depending on n only, E(W_s − Z_s)^2 / σ_s^2 ≤ ε_n → 0, n → ∞.

Proof. Of course, the Gaussian sequence we have in mind is just (Z_s) introduced above, linked to W_s by (7.5). Then W_s − Z_s = Z_s (‖ψ_0‖_2^{−1} − 1). Now √n · ‖ψ_0‖_2 has a χ_n distribution, so that ‖ψ_0‖_2 converges in probability to 1 as n → ∞. In fact, using the analytic expression for the χ_n^2 density in terms of beta functions, one can show that E(‖ψ_0‖_2^{−1} − 1)^4 → 0. Moreover, EZ_s^4 = 3σ̄_s^4. Hence

E(W_s − Z_s)^2 = E Z_s^2 (‖ψ_0‖_2^{−1} − 1)^2 ≤ (E Z_s^4)^{1/2} · (E(‖ψ_0‖_2^{−1} − 1)^4)^{1/2} → 0,  n → ∞.

Putting ε_n = (3 E(‖ψ_0‖_2^{−1} − 1)^4)^{1/2} gives the claimed result.

gives the claimed result. It is useful to restate the last lemma.Let H 1,...,s ;n (w 1,...,w s ;σ)denote the joint distribution function of W 1,...,W s ,conditional on σ1= v 1 2/√

n ,...,σs = v s 2/√n .Let G 1,...,s ;n (w 1,...,w s ;σ)denote the joint distribution function of Z 1,...,Z s ,again conditional on σ1= v 1 2/√n ,...,σs = v s 2/√n .Then the last lemma implies

H 1,...,s ;n (w 1,...,w s ;σ)→G 1,...,s ;n (w 1,...,w s ;σ),n →∞.

However, a certain degree of uniformity is present, which will be important for us. In the following result, σ = (σ_1,...,σ_s), and σ > 0 means σ_1 > 0, σ_2 > 0, ..., σ_s > 0.

Lemma 7.3 The family of functions H_{1,...,s;n}(w_1,...,w_s; σ) is uniformly continuous in w, uniformly in n > n_0, locally uniformly at each σ > 0. The family G_{1,...,s}(w_1,...,w_s; σ) is uniformly continuous in w and locally uniformly continuous in σ at each σ > 0.

The result is a tedious exercise using standard ideas and we omit the proof.

Suppose that we have a sample (y, Φ) from the standard problem suite and that the random variable φ_0 introduced above is stochastically independent of (y, Φ). Suppose that StOMP has been run through s−1 stages. Recall the values y, r_1, r_2, etc. produced by the StOMP algorithm, and condition on those results, defining v_0 = y, v_1 = r_1, etc. As φ_0 is a random point on S^{n−1} but not a column of the matrix Φ, the probability distribution of ⟨φ_0, r_s⟩, conditional on y, r_1, ..., is exactly that of W_s above, with parameters σ_1 = ‖y‖_2/√n, σ_2 = ‖r_2‖_2/√n, etc. Now let p_{s,n}(σ) denote the probability density of σ = (σ_1,...,σ_s) induced by StOMP, and let P_{s,n} denote the corresponding probability measure. Let F_{1,...,s;n} denote the joint cumulative distribution function of the random variables ⟨φ_0, y⟩, ⟨φ_0, r_2⟩, ..., ⟨φ_0, r_s⟩. Then we have the exact formula

F_{1,...,s;n}(w_1,...,w_s) = ∫ H_{1,...,s;n}(w_1,...,w_s; σ) p_{s,n}(σ) dσ.  (7.6)

Now by Theorem 1 there exist constants σ̄_s so that, for ε > 0,

P{ |σ̄_1 − ‖y‖_2/√n| < ε,  |σ̄_s − ‖r_s‖_2/√n| < ε, s = 2,...,S } → 1.  (7.7)

Combining this with the uniform equicontinuity of Lemma 7.3 and the scale mixture identity (7.6), we conclude that

F_{1,...,s;n}(w_1,...,w_s) → G_{1,...,s}(w_1,...,w_s; σ̄),  n → ∞.  (7.8)

Of course φ_0 is of no interest to us per se. Consider now a column φ_j from Φ, and suppose that j is both a null coordinate - j ∉ I_0 - and a surviving coordinate at stage s - j ∉ I_{s−1}. It might seem that ⟨φ_j, r_s⟩ would have the same distribution as ⟨φ_0, r_s⟩, but this is only true for s = 1. At later stages s > 1 of the algorithm, ⟨φ_j, r_s⟩ behaves as W_s subjected to conditioning:

L(⟨φ_j, r_s⟩ | j ∉ I_0 ∪ I_{s−1}) = L(⟨φ_0, r_s⟩ | |⟨φ_0, r_i⟩| < t_i σ_i, 1 ≤ i < s).  (7.9)

We now make the observation that probabilities of hyper-rectangles can be computed simply from the joint cumulative distribution function. We state without proof an elementary fact:

Lemma 7.4 Let U_1,...,U_s denote random variables with joint cumulative distribution function H_{1,...,s}(u_1,...,u_s). The rectangle probability R_{1,...,s}(u_1,...,u_s; H_{1,...,s}) ≡ Prob{ |U_i| < u_i, 1 ≤ i ≤ s } obeys

R_{1,...,s}(u_1,...,u_s; H_{1,...,s}) = Σ_{±_i} c_{1,...,s}(±_1,...,±_s) H_{1,...,s}(±_1 u_1,...,±_s u_s),

with suitable coefficients c_{1,...,s}(±_1,...,±_s). The rectangle probability Q^s_{1,...,s−1}(u_1,...,u_s) ≡ Prob{ U_s ≤ u_s, |U_i| < u_i, 1 ≤ i < s } obeys

Q^s_{1,...,s−1}(u_1,...,u_s; H_{1,...,s}) = Σ_{±_i} c^s_{1,...,s−1}(±_1,...,±_s) H_{1,...,s}(±_1 u_1,...,±_{s−1} u_{s−1}, u_s).
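To make the coefficient pattern concrete, consider the case s = 2, where inclusion-exclusion gives

R_{1,2}(u_1, u_2; H_{1,2}) = H_{1,2}(u_1, u_2) − H_{1,2}(−u_1, u_2) − H_{1,2}(u_1, −u_2) + H_{1,2}(−u_1, −u_2),

so each coefficient is just the product of the signs ±_1 ±_2.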

It follows that, if we have a sequence H_{1,...,s;n} of such CDFs converging uniformly to a limit CDF H_{1,...,s}, then we also have convergence of the corresponding rectangle probabilities just mentioned. A conditional probability is a ratio of two such terms:

Prob{ Z_s ≤ w | |Z_i| < w_i, 1 ≤ i < s } = Q^s_{1,...,s−1}(w_1,...,w_{s−1}, w; G_{1,...,s}) / R_{1,...,s−1}(w_1,...,w_{s−1}; G_{1,...,s−1}).

The probability law given on the right-hand side of (7.9) has cumulative distribution function

F̂_{1,...,s;n}(w) = Prob{ ⟨φ_0, r_s⟩ ≤ w | |⟨φ_0, r_i⟩| < t_i σ_i, 1 ≤ i < s }.

Arguing as above,

F̂_{1,...,s;n}(w) = ∫ [ Q^s_{1,...,s−1}(t_1 σ_1,...,t_{s−1} σ_{s−1}, w; G_{1,...,s;n}(·; σ)) / R_{1,...,s−1}(t_1 σ_1,...,t_{s−1} σ_{s−1}; G_{1,...,s−1;n}(·; σ)) ] p_{s,n}(σ) dσ

→ Q^s_{1,...,s−1}(t_1 σ̄_1,...,t_{s−1} σ̄_{s−1}, w; G_{1,...,s}(·; σ̄)) / R_{1,...,s−1}(t_1 σ̄_1,...,t_{s−1} σ̄_{s−1}; G_{1,...,s−1}(·; σ̄)),  n → ∞,

= Prob{ Z_s ≤ w | |Z_i| < t_i σ̄_i, 1 ≤ i < s }.

The middle step invoked the fact that, in the sense of convergence in probability, G_{1,...,s;n}(·; σ) →_P G_{1,...,s}(·; σ̄) in uniform norm, locally uniformly in σ > 0.

7.2.2 Non-Null Case

The technical side of the argument parallels the null case, and we will not repeat it. The only point we clarify here is the calculation of the means μ_s and standard deviations τ_s. For this calculation, we propose to model y as Σ ±_i ψ_i, where the ±_i are arbitrary signs, and the ψ_i are Gaussian random vectors. This model corresponds to 'Gaussianizing' the SSP instance (y, Φ).

A vector v uniformly distributed on the unit sphere in R n is Gaussianized by multiplying it by an independent scalar random variable n ?1/2χn where χn is Chi-distributed on n degrees of freedom.The resulting vector n ?1/2χn ·v is distributed N (0,I n ).

Now apply such Gaussianization independently to each of the columns of Φ,producing the columns of a matrix Ψ,the vector y =Ψx 0has indeed the distribution of ±i ψi .We will make some computations using this Gaussianization;the result,exactly true in the Gaussian case,is asymptotically correct for the original pre-Gaussianization model.The same approach was used,less explicitly,in the last subsection.Gaussianization has also been heavily used in the Banach space literature;see also [16,17]for examples in the present spirit.
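A quick simulation confirms the Gaussianization step (our sketch, with parameters chosen arbitrarily): scaling a uniform point on the sphere by an independent $\chi_n/\sqrt{n}$ factor reproduces an $N(0, \frac{1}{n} I_n)$ vector.

```python
# Sketch (ours): Gaussianize a point v uniform on S^{n-1} by an independent
# chi_n / sqrt(n) scale; the product is distributed N(0, (1/n) I_n).
import numpy as np

rng = np.random.default_rng(3)
n, trials = 200, 20_000

g = rng.standard_normal((trials, n))
v = g / np.linalg.norm(g, axis=1, keepdims=True)             # uniform on sphere
scale = np.sqrt(rng.chisquare(df=n, size=(trials, 1)) / n)   # chi_n / sqrt(n)
psi = scale * v

print(psi[:, 0].mean(), psi[:, 0].var(), 1.0 / n)  # per-coordinate N(0, 1/n)
```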

We start with a typical Bayesian calculation.

Lemma 7.5 Suppose that $\psi_1, \dots, \psi_k$ are Gaussian vectors in $\mathbf{R}^n$ distributed $N(0, \frac{1}{n} I_n)$. Suppose that $y = \sum_{i=1}^k \psi_i$. Given $y$, $\psi_1$ has a Gaussian conditional distribution:

$$\mathcal{L}(\psi_1 \mid y) = N\Big(\frac{y}{k}, \ \frac{k-1}{k} \cdot \frac{1}{n} I_n\Big).$$

We omit the proof of this well-known fact.
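Lemma 7.5 is easy to confirm by simulation: the residual $\psi_1 - y/k$ should be uncorrelated with $y$ and have per-coordinate variance $\frac{k-1}{k}\cdot\frac{1}{n}$. A minimal sketch (ours, with arbitrary $n$ and $k$):

```python
# Sketch (ours): check Lemma 7.5. With psi_1,...,psi_k i.i.d. N(0, (1/n) I_n)
# and y = sum_i psi_i, the residual psi_1 - y/k has per-coordinate variance
# ((k-1)/k)(1/n) and is uncorrelated with y.
import numpy as np

rng = np.random.default_rng(4)
n, k, trials = 100, 5, 20_000

psi = rng.standard_normal((trials, k, n)) / np.sqrt(n)
y = psi.sum(axis=1)
resid = psi[:, 0, :] - y / k                      # psi_1 minus its posterior mean

print(resid[:, 0].var(), (k - 1) / (k * n))       # conditional variance
print(np.corrcoef(resid[:, 0], y[:, 0])[0, 1])    # ~ 0: residual independent of y
```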

Consider now a deterministic vector $v_0$ and deterministic orthoprojectors $Q_1, Q_2, \dots, Q_S$, yielding vectors $v_i = Q_i v_0 \in \mathbf{R}^n$. Because projections of Gaussians are Gaussian and linear combinations of Gaussians are Gaussian, we immediately obtain:

Lemma 7.6 Let $\psi_0$ be a random vector in $\mathbf{R}^n$ with Gaussian distribution $N(v_0/k, \frac{k-1}{k} \cdot \frac{1}{n} I_n)$. Define random variables $X_s = \langle \psi_0, v_s \rangle$. Their marginal distribution is precisely

$$\mathcal{L}(X_s) = N\Big(\frac{\|v_s\|_2^2}{k}, \ \frac{k-1}{k} \cdot \frac{1}{n} \|v_s\|_2^2\Big).$$
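Lemma 7.6 can likewise be checked by simulation; in the sketch below (ours), a coordinate projector stands in for the deterministic orthoprojector $Q_s$. Note that the mean uses $\langle v_0/k, v_s \rangle = \|Q_s v_0\|_2^2/k = \|v_s\|_2^2/k$, since $Q_s$ is symmetric and idempotent.

```python
# Sketch (ours): check Lemma 7.6. With psi_0 ~ N(v0/k, ((k-1)/k)(1/n) I_n)
# and v_s = Q_s v0 an orthoprojection of v0, the variable X_s = <psi_0, v_s>
# is N(||v_s||^2/k, ((k-1)/k)(1/n)||v_s||^2).
import numpy as np

rng = np.random.default_rng(5)
n, k, trials = 200, 5, 20_000

v0 = rng.standard_normal(n)
Q = np.zeros((n, n))
Q[: n // 2, : n // 2] = np.eye(n // 2)      # orthoprojector onto first n/2 coords
vs = Q @ v0                                  # v_s = Q_s v_0

sd = np.sqrt((k - 1) / (k * n))
psi0 = v0 / k + sd * rng.standard_normal((trials, n))
X = psi0 @ vs

print(X.mean(), vs @ vs / k)                     # mean = ||v_s||_2^2 / k
print(X.var(), (k - 1) / (k * n) * (vs @ vs))    # variance matches the lemma
```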
