
Fast decorrelated neural network ensembles with random weights


Monther Alhamdoosh, Dianhui Wang *

Department of Computer Science and Computer Engineering, La Trobe University, Melbourne, VIC 3086, Australia

Article history:
Received 6 November 2013
Received in revised form 27 November 2013
Accepted 20 December 2013
Available online 31 December 2013

Keywords:
Negative correlation learning
Neural network ensembles
Random vector functional link networks
Data regression

Abstract

Negative correlation learning (NCL) aims to produce ensembles with sound generalization capability through controlling the disagreement among base learners' outputs. Such a learning scheme is usually implemented by using feed-forward neural networks with error back-propagation algorithms (BPNNs). However, it suffers from slow convergence, the local minima problem and model uncertainties caused by the initial weights and the setting of learning parameters. To achieve a better solution, this paper employs the random vector functional link (RVFL) networks as base components, and incorporates the NCL strategy for building neural network ensembles. The basis functions of the base models are generated randomly and the parameters of the RVFL networks can be determined by solving a linear equation system. An analytical solution is derived for these parameters, where a cost function defined for NCL and the well-known least squares method are used. To examine the merits of our proposed algorithm, a comparative study is carried out with nine benchmark datasets. Results indicate that our approach outperforms other ensembling techniques on the testing datasets in terms of both effectiveness and efficiency.

Crown Copyright © 2013 Published by Elsevier Inc. All rights reserved.

1. Introduction

In the last two decades, the ensemble learning framework has received considerable attention and resulted in many novel machine learning techniques, for instance, bagging, boosting and random forests [1–4]. The base models of an ensemble can be trained individually or collectively. Ensemble methods also differ in the way that they handle the training data during the learning phase. For example, simple averaging ensembles [5,6] use the whole training data to learn the base models, but with different parameter settings for each, while bagging, boosting and random forests use different parts of the training dataset for each base model to gain more diversity among ensemble components. Bagging randomly resamples replicates from a given training dataset [1], while boosting sequentially resamples the training dataset based on a mis-classification probability distribution calculated from previously learnt models [2,3]. Random forests, however, create the training data variability based on the input features rather than the training examples: the splitting features for each node in the base decision trees are selected from a random set of input features [4]. On the other hand, the merging weights of base models can be set equally [7], or learnt by using heuristic means or by minimizing a cost function [8,9]. Note that boosting ensembles adapt their averaging weights during the course of training weak learners [3]. It is worth mentioning that ensemble learning is essential for learning complex domain problems that have complex nonlinear relationships among input and output variables. In such cases, the hypotheses space of base models is very large and one single hypothesis cannot model the underlying data distribution.

0020-0255/$ - see front matter Crown Copyright © 2013 Published by Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.ins.2013.12.016

* Corresponding author. Tel.: +61 3 9479 3034; fax: +61 3 9479 3060.

E-mail addresses: mal-hamdoosh@… (M. Alhamdoosh), dh.wang@… (D. Wang).

An important issue in ensemble learning algorithms is how to avoid duplication among base models, i.e., modeling different regions of the feature space by different base models without deteriorating their individual accuracies. Meanwhile, the overall performance of the ensemble should be maintained [7]. This is called ensemble diversity in the literature [7,10]. Evolutionary approaches are widely used to select ensemble component networks with maximum disagreement among their outputs [8,11–13]. However, such evolutionary-based approaches take a long time to converge. In [7] the ensemble base models were trained by using cross-validation datasets and a diversity metric was introduced to trade off the bias–variance–covariance. The bias–variance–covariance decomposition raised attention to finding ensemble models with minimum covariance among the base models so that the individual model performance can be retained [10].

Negative correlation learning (NCL) [14] amends the cost function with a penalty term that weakens the relationship with other individuals and controls the trade-off among the bias, variance and covariance in ensemble learning [10]. Later, this idea was extended in [15]. It has been noticed that the approach proposed in [13] can automatically determine an optimal size of ensemble by thoroughly exploring the ensemble hypotheses space using NCL. Recently, a regularized version of the negative correlation learning technique was proposed in [16], aiming at reducing the overfitting risk for noisy data.

In addition to the diversity concern in ensemble learning, the way of generating the component models is crucial. Neural networks with back-propagation learning algorithms are mostly employed for this purpose. Usually, single-layer feed-forward (SLFN) networks are sufficient for problem solving due to their universal approximation capability [17–19]. Unfortunately, the gradient-based learning algorithms for training SLFNs suffer from the local minima problem, slow convergence and very poor sensitivity to the learning rate setting. To overcome these difficulties, random vector functional-link (RVFL) networks were proposed [20–22], where the weights between the input layer and the hidden layer can be randomly assigned and need not be tuned. Then, the well-known least squares methods can be used to calculate the output weights [23]. This flat-net architecture universally approximates any continuous function and dramatically reduces the training time [22,24]. Our recent work reported in [12] shows that RVFL-based ensembles have some advantages compared against other existing ensembling methods. It is interesting to see that in most of the cases the resulting ensemble is composed of few component models (4–12). Note that this finding is obtained by selecting the best RVFL candidates from a pool. Because the best candidate selection is implemented by using genetic algorithms, it is a time consuming process to achieve the final ensemble model. Indeed, this motivates us to do further studies in this direction.

Based on our previous studies, this paper aims to develop a fast solution for building neural network ensembles. Our proposed algorithm (termed DNNE) randomly initializes the hidden layer parameters of the base RVFL networks, and then employs the least squares method with the negative correlation learning scheme to analytically calculate the output weights of these base networks. A minimum norm least squares solution is derived and formulated in a matrix form for computational exercises. A comparative study on data regression is carried out. Results over the testing datasets are promising, and support a positive assessment of its performance against bagging, boosting, simple ensembles and random forests.

The remainder of this paper is organized as follows. Basics on RVFL networks, ensemble learners and negative correlation learning are reviewed in Section 2. Our solution, with a detailed description of the proposed learning algorithm, is given in Section 3. To evaluate the performance of our approach, some benchmark datasets are employed in this study and results with comparisons and discussions are presented in Section 4. Finally, we conclude this work in the last section.

2. Background

Learning from data is a process of finding a hypothesis $f(x;\theta)$ that estimates an unknown target function $\psi(x)$, where $\theta$ denotes the model parameters to be tuned. The learnt hypothesis can be a single or a composite model (i.e., an ensemble of base hypotheses). Only a few works have reported the use of random weights in ensemble base networks [10,20], while it has been widely investigated in single neural networks [21,22,25,26]. The theoretical foundation of randomness in neural networks can be deeply understood from the function approximation task with Monte-Carlo (MC) methods. It has been shown in [22] that any continuous function defined on a compact set can be represented by a limit-integral of a multivariate continuous function with integration in the parameters space. The MC method approximates this multiple integral by drawing random samples of the parameter vector from a uniform distribution defined over the limit-integral domain. The Monte-Carlo estimation accuracy improves proportionally with an increase of the number of random samples, and the approximation error tends to zero as this number goes to infinity. A special case of this general approximation theory is the random vector functional link (RVFL) network, which can be represented as a single-layer feed-forward network (SLFN). The multivariate continuous functions in the MC method play the role of the activation functions of hidden neurons in the SLFN, and the input weights and the hidden layer biases of the SLFN correspond to the sampled parameter vectors in the MC method. Eventually, the output weights of the SLFN function as the estimated parameters in the MC method.

2.1. Review of random basis function approximators

It has been proved that RVFL networks are universal approximators for continuous functions on compact sets and the approximation error converges to zero at rate $O(C/\sqrt{L})$, where $L$ is the number of basis functions (hidden neurons) and $C$ is a constant [22,26]. An RVFL network can be defined as an SLFN model,


$$f(x;\beta)=\sum_{k=1}^{L}\beta_k\,G\bigl(w_k^{T}x+b_k\bigr), \qquad (1)$$

where $\beta=[\beta_1,\beta_2,\ldots,\beta_L]^{T}\in\mathbb{R}^{L}$ is a parameter vector (the output layer weights); $x\in\mathbb{R}^{d}$ is the input features vector; $w_k\in\mathbb{R}^{d}$ and $b_k\in\mathbb{R}$ are nonlinear parameters (input weights and hidden layer biases); $d$ is the number of network inputs; and $G(\cdot)$ is a basis function.

In an RVFL network, the parameters of the hidden layer ($w_k$ and $b_k$) are assigned randomly and independently of the training data, while the linear parameters $\beta_k$ of the output layer can be tuned using quadratic optimization techniques [22].

Building an RVFL network model requires a training dataset of size $N$, denoted by $D_t=\{(x_1,y_1),(x_2,y_2),\ldots,(x_N,y_N)\}$, where $(x_n,y_n)\in\mathbb{R}^{d}\times\mathbb{R}$ are pairs of observations. The pairs $(x_i,y_i)$ are i.i.d. samples drawn from an unknown joint probability distribution $p(x,y)$. This implies that $D_t$ is a realization of a random sequence $D_N=\{Z_1,Z_2,\ldots,Z_N\}$ whose elements are random vectors $Z_n=(X_n,Y_n)$ for $n=1,\ldots,N$. Now, the mean squares learning error of an RVFL network model $f$ over $D_t$ can be defined as

$$E_f(D_t)=\frac{1}{N}\sum_{n=1}^{N}\bigl(f(x_n;\beta(D_t))-y_n\bigr)^{2}. \qquad (2)$$

We write $\beta(D_t)$ to clarify that the values of the parameter vector $\beta$, and thus the hypothesis $f$, are learnt on the training dataset $D_t$. Therefore, we will write $f(x_n;D_t)$ instead of $f(x_n;\beta(D_t))$. Commonly, the parameters vector may be estimated through minimizing the following cost function $E_f$, i.e.,

$$\hat{\beta}=\arg\min_{\beta}\frac{1}{N}\sum_{n=1}^{N}\frac{1}{2}\bigl(f(x_n;D_t)-y_n\bigr)^{2}. \qquad (3)$$

Let $D_v=\{(\bar{x}_1,\bar{y}_1),(\bar{x}_2,\bar{y}_2),\ldots,(\bar{x}_N,\bar{y}_N)\}$ be a validation dataset of size $N$, whose elements are distributed identically to, but independently of, $Z_n$ for all training examples. Then, the generalization error $E_f(D_v)$ of the RVFL network model $f$ can be defined as the mean squares error (MSE) averaged over all possible realizations of $D_N$, that is,

$$E_f(D_v)=E_{D_N}\!\left\{\frac{1}{N}\sum_{n=1}^{N}\bigl(f(\bar{x}_n;D_N)-\bar{y}_n\bigr)^{2}\right\}, \qquad (4)$$

where $E_{D_N}\{\cdot\}$ is the expectation with respect to the distribution of the random sequence $D_N$.

Theorem 1. The generalization error, given in Eq. (4), of an RVFL learner $f$ can be expressed using the bias/variance dilemma as follows

$$E_f(D_v)=\frac{1}{N}\sum_{n=1}^{N}\Bigl[\mathrm{Var}(f;\bar{x}_n)+\mathrm{Bias}(f;\bar{x}_n)^{2}\Bigr], \qquad (5)$$

where

$$\mathrm{Var}(f;\bar{x}_n)=E_{D_N}\Bigl\{\bigl(f(\bar{x}_n;D_N)-E_{D_N}\{f(\bar{x}_n;D_N)\}\bigr)^{2}\Bigr\},\qquad
\mathrm{Bias}(f;\bar{x}_n)=E_{D_N}\{f(\bar{x}_n;D_N)\}-\bar{y}_n. \qquad (6)$$

The proof of Theorem 1 can be found in [27]. If a learner model is unbiased, this does not imply good generalization, because it may suffer from a large variance, and vice versa. Actually, a balanced trade-off between the bias and variance is required in order to obtain the best performance. Interestingly, the bias and variance of a learner can be realized even when a fixed training dataset is used, but with different parameter initializations, as in RVFL networks.
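To make this last point concrete, the variance and squared bias of Eqs. (5) and (6) can be estimated numerically by retraining an RVFL network many times on one fixed training set with different random seeds, i.e., with the randomness taken over initializations rather than over dataset realizations. A sketch of such an experiment (our own, reusing the `rvfl_fit`/`rvfl_predict` helpers sketched above; the number of repetitions is an arbitrary choice):

```python
import numpy as np

def bias_variance_over_inits(X_tr, y_tr, X_val, y_val, L, n_runs=50):
    """Monte-Carlo estimate of Eq. (5): average Var + Bias^2 over random initializations."""
    preds = []
    for seed in range(n_runs):
        W, b, beta = rvfl_fit(X_tr, y_tr, L, rng=np.random.default_rng(seed))
        preds.append(rvfl_predict(X_val, W, b, beta))
    preds = np.asarray(preds)                            # shape: n_runs x N_val
    mean_pred = preds.mean(axis=0)                       # E{f(x_n)} over initializations
    variance = ((preds - mean_pred) ** 2).mean(axis=0)   # Var(f; x_n)
    bias = mean_pred - y_val                             # Bias(f; x_n)
    return float(np.mean(variance + bias ** 2))          # estimate of Eq. (5)
```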

Recently, the feasibility of RVFL networks for learning from data was investigated in [26], and some issues should be paid careful attention in practice. For instance, the number of basis elements should be sufficiently large and supervised initialization is needed in order to model and compensate system uncertainties. However, these issues can be relaxed when a collection of RVFL networks are combined using the Pincus formula [28] and a Monte-Carlo approximation procedure, so that it results in a model with less variance [20,22]. Alternatively, the problems mentioned in [26] can be overcome by combining multiple RVFL networks trained using negative correlation learning [14].

2.2. Review of negative correlation learning

Given an ensemble of $M$ base models and a training dataset $D_t$ of $N$ instances, the output of the $i$th network on the $n$th data example is designated by $f_i(x_n;D_t)\in\mathbb{R}$ and the ensemble collective output is given by

$$\bar{f}(x_n;D_t)=\sum_{i=1}^{M}a_i\,f_i(x_n;D_t), \qquad (7)$$


where $a_i$ is the averaging weight of the $i$th component network and reflects the contribution of this component in the final ensemble decision. In other words, the averaging weights must satisfy $\sum_{i=1}^{M}a_i=1$ and $0\le a_i\le 1$. For simplicity, however, uniform averaging weights are adopted in our study, i.e., $a_i=\frac{1}{M},\ i=1,\ldots,M$.

The learning errors of the ensemble base models and the ensemble model can be defined by Eq. (2), by considering the base models independently and the ensemble model as a whole single model, respectively, as follows

$$E_i=\sum_{n=1}^{N}\frac{1}{2}\bigl(f_i(x_n)-y_n\bigr)^{2},\quad i=1,\ldots,M, \qquad (8)$$

$$E_{ens}=\frac{1}{N}\sum_{n=1}^{N}\bigl(\bar{f}(x_n)-y_n\bigr)^{2}. \qquad (9)$$

Using Eqs. (4) and (7), the generalization error of the ensemble model $\bar{f}$, averaged over all possible training datasets $D_t$ of size $N$, is written as follows

$$E_{\bar{f}}(D_v)=E_{D_N}\!\left\{\frac{1}{N}\sum_{n=1}^{N}\bigl(\bar{f}(\bar{x}_n;D_N)-\bar{y}_n\bigr)^{2}\right\}, \qquad (10)$$

where $\bar{f}(\bar{x}_n;D_N)=\frac{1}{M}\sum_{i=1}^{M}f_i(\bar{x}_n;D_N)$ is the ensemble output given an input $\bar{x}_n$ and learnt on a realization of $D_N$.

In traditional ensemble learning algorithms, the base models can be trained independently in a parallel way, as adopted in bagging [1], or sequentially, as used in boosting [2,3]. However, such learning algorithms focus only on minimizing the individual learning errors given in Eq. (8). In order to deeply understand the ensemble generalization abilities, Theorem 2 is recalled from [29] and formulated as follows.

Theorem 2. The generalization error of an ensemble, given in Eq. (10), in which all components are trained using the same dataset, can be expressed using the following bias/variance/covariance decomposition

$$E_{\bar{f}}(D_v)=\frac{1}{N}\sum_{n=1}^{N}\left[\frac{1}{M}\overline{\mathrm{Var}}(\bar{x}_n)+\frac{M-1}{M}\overline{\mathrm{Cov}}(\bar{x}_n)+\overline{\mathrm{Bias}}(\bar{x}_n)^{2}\right], \qquad (11)$$

where

$$\overline{\mathrm{Var}}(\bar{x}_n)=\frac{1}{M}\sum_{i=1}^{M}E_{D_N}\bigl\{(f_i-E_{D_N}\{f_i\})^{2}\bigr\},$$

$$\overline{\mathrm{Cov}}(\bar{x}_n)=\frac{1}{M(M-1)}\sum_{i=1}^{M}\sum_{j\ne i}E_{D_N}\bigl\{(f_i-E_{D_N}\{f_i\})(f_j-E_{D_N}\{f_j\})\bigr\},$$

$$\overline{\mathrm{Bias}}(\bar{x}_n)=\frac{1}{M}\sum_{i=1}^{M}\bigl(E_{D_N}\{f_i\}-\bar{y}_n\bigr).$$

Notice that $f_i$, $f_i(\bar{x}_n;D_N)$ and $f_i(\bar{x}_n)$ are used interchangeably to denote the output of the model $f_i$, trained on a realization of $D_N$, given an input observation $\bar{x}_n$. $\overline{\mathrm{Var}}(\bar{x}_n)$ and $\overline{\mathrm{Bias}}(\bar{x}_n)$ are actually the average individual variances and biases, respectively. The proof of Theorem 2 can be read in [29]. In fact, managing the covariance term $\overline{\mathrm{Cov}}(\bar{x}_n)$ explicitly helps in controlling the disagreement among ensemble components' outputs and hence producing a better generalized ensemble model. Some details can be found in [10].

Negative correlation learning (NCL) was proposed to reduce the covariance among ensemble individuals while the variance and bias terms are not increased. Unlike traditional ensemble learning approaches, NCL was introduced to train base models simultaneously in a cooperative manner that decorrelates individual errors $E_i$ [14,30]. Mathematically, the learning error of the $i$th base model, given in Eq. (8), was modified to include a decorrelation penalty term $p_i$ as follows

$$e_i=\sum_{n=1}^{N}\left[\frac{1}{2}\bigl(f_i(x_n)-y_n\bigr)^{2}+\lambda\,p_i(x_n)\right], \qquad (12)$$

where $\lambda\in[0,1]$ is a regularizing factor. The penalty term $p_i$ can be designed in different ways depending on whether the ensemble networks are trained sequentially or in parallel. For instance, it could decorrelate the current learning network with all previously learned networks [14]

$$p_i(x_n)=\bigl(f_i(x_n)-y_n\bigr)\sum_{j=1}^{i-1}\bigl(f_j(x_n)-y_n\bigr). \qquad (13)$$

The penalty term in Eq. (12) can be designed in a way that preserves decorrelation between pairs of networks yet allows alternate networks to be trained independently [14]. In [30], a new penalty term was proposed and formulated as follows


$$p_i(x_n)=\bigl(f_i(x_n)-\bar{f}(x_n)\bigr)\sum_{j\ne i}\bigl(f_j(x_n)-\bar{f}(x_n)\bigr). \qquad (14)$$

Notice that the penalty term in Eq. (14) reduces the correlation mutually among all ensemble individuals by using the actual ensemble output $\bar{f}(x_n)$ instead of the target function $y_n$. The work in [10] thoroughly analyzed the characteristics of the NCL technique and showed that it really works for multilayer perceptron (MLP) and radial basis function (RBF) networks. More explanation of why negative correlation learning does work for building good ensembles can be found in [10].

3. Decorrelated neural-net ensembles with random weights

Assuring a well-balanced trade-off between the bias and variance of a single RVFL learner is a difficult task due to the uncertainties in the learning process caused by the random initialization of the basis functions. Different initial settings for the RVFL networks lead to different solutions. In order to reduce the impact of these factors on the generalization error of a learning system, a cluster of RVFL networks are combined together to produce efficient predictions [5,22]. In this paper, we propose a new ensemble learning approach that uses RVFL networks as ensemble components and fits them into the negative correlation learning framework. Since RVFL networks do not require learning all their parameters (basis functions are set randomly), we seek a simple and fast solution to calculate the output weights of the base RVFL networks. This solution should take into account the correlation among the base RVFL networks in the output space and keep it at a minimum. Although DNNE aims at encouraging diversity among ensemble components by reducing the correlation among their outputs, it still maintains an overall good ensemble accuracy. Next, we present our method in detail.

Given an ensemble of RVFL networks and a training dataset $D_t$ of $N$ instances, by substituting the penalty term $p_i$ in Eq. (12) with (14), the decorrelated error of the $i$th individual is expressed as follows

$$e_i=\sum_{n=1}^{N}e_i(x_n)=\sum_{n=1}^{N}\left[\frac{1}{2}\bigl(f_i(x_n)-y_n\bigr)^{2}-\lambda\bigl(f_i(x_n)-\bar{f}(x_n)\bigr)^{2}\right]. \qquad (15)$$

Note that we use uniform averaging weights to combine the ensemble base RVFL networks, i.e.,

$$\bar{f}(x_n)=\frac{1}{M}\sum_{i=1}^{M}f_i(x_n)\quad\text{and}\quad\sum_{j\ne i}\bigl(f_j(x_n)-\bar{f}(x_n)\bigr)=-\bigl(f_i(x_n)-\bar{f}(x_n)\bigr). \qquad (16)$$

Here, we relax the assumption that the ensemble output $\bar{f}(x_n)$ is constant. Moreover, since RVFL networks are used to populate our ensemble, the output of the $i$th base network, stimulated with an instance $x_n$, is given by

$$f_i(x_n)=\sum_{j=1}^{L}\beta_{ij}\,g_{ij}(x_n), \qquad (17)$$

where $L$ is the number of basis functions (hidden neurons) in the $i$th individual RVFL network; $\beta_{ij}$ is the output weight connecting the $j$th hidden neuron with the output neuron in the $i$th base model; $g_{ij}(x_n)=G_{ij}(w_j,b_j,x_n)$ is the output of the $j$th hidden neuron in the $i$th base model; and $G$ can be any squashing basis function.

In this study, we assume homogeneous hidden nodes for all base networks, i.e., one type of squashing function is used for all hidden neurons. As mentioned earlier, the parameters ($w_j$, $b_j$) of the basis functions $g_{ij}$ are randomly set while the only parameters to be tuned are the output weights $\beta_{ij}$. Therefore, NCL ensemble models attain an optimal performance when the gradient of the error function in Eq. (15) vanishes with respect to the output weights $\beta_{ij}$, that is,

$$\nabla e_i=0,\quad\text{for } i=1,\ldots,M,$$

which leads to

$$\frac{\partial e_i}{\partial\beta_{ij}}=\sum_{n=1}^{N}\frac{\partial e_i(x_n)}{\partial\beta_{ij}}=0,\quad\text{for } j=1,\ldots,L. \qquad (18)$$

We assume here that all base networks have a similar architecture and the same dataset is used to train all of them. Using calculus, we have

$$\begin{aligned}\frac{\partial e_i(x_n)}{\partial\beta_{ij}}&=\frac{\partial}{\partial\beta_{ij}}\left[\frac{1}{2}\bigl(f_i(x_n)-y_n\bigr)^{2}-\lambda\bigl(f_i(x_n)-\bar{f}(x_n)\bigr)^{2}\right]\\
&=\bigl(f_i(x_n)-y_n\bigr)g_{ij}(x_n)-2\lambda\bigl(f_i(x_n)-\bar{f}(x_n)\bigr)\Bigl(1-\frac{1}{M}\Bigr)g_{ij}(x_n)\\
&=g_{ij}(x_n)\Bigl[\bigl(f_i(x_n)-y_n\bigr)-\gamma\bigl(f_i(x_n)-\bar{f}(x_n)\bigr)\Bigr], \qquad (19)\end{aligned}$$

where

$$\gamma=2\lambda\,\frac{M-1}{M}. \qquad (20)$$

Substituting Eq. (16) in Eq. (19), we get

$$\frac{\partial e_i(x_n)}{\partial\beta_{ij}}=g_{ij}(x_n)\left[\Bigl(1-\gamma+\frac{\gamma}{M}\Bigr)f_i(x_n)+\frac{\gamma}{M}\sum_{\substack{l=1\\ l\ne i}}^{M}f_l(x_n)-y_n\right]. \qquad (21)$$


Using Eq. (17), Eq. (21) can be rewritten as

$$\frac{\partial e_i(x_n)}{\partial\beta_{ij}}=g_{ij}(x_n)\left[\Bigl(1-\gamma+\frac{\gamma}{M}\Bigr)\sum_{k=1}^{L}\beta_{ik}\,g_{ik}(x_n)+\frac{\gamma}{M}\sum_{\substack{l=1\\ l\ne i}}^{M}\sum_{k=1}^{L}\beta_{lk}\,g_{lk}(x_n)-y_n\right]. \qquad (22)$$

Rearranging Eq. (22) gives

$$\frac{\partial e_i(x_n)}{\partial\beta_{ij}}=\Bigl(1-\gamma+\frac{\gamma}{M}\Bigr)\sum_{k=1}^{L}\beta_{ik}\,g_{ij}(x_n)g_{ik}(x_n)+\frac{\gamma}{M}\sum_{\substack{l=1\\ l\ne i}}^{M}\sum_{k=1}^{L}\beta_{lk}\,g_{ij}(x_n)g_{lk}(x_n)-g_{ij}(x_n)\,y_n. \qquad (23)$$

From Eqs. (15) and (22), we have

$$\frac{\partial e_i}{\partial\beta_{ij}}=\Bigl(1-\gamma+\frac{\gamma}{M}\Bigr)\sum_{k=1}^{L}\beta_{ik}\sum_{n=1}^{N}g_{ij}(x_n)g_{ik}(x_n)+\frac{\gamma}{M}\sum_{\substack{l=1\\ l\ne i}}^{M}\sum_{k=1}^{L}\beta_{lk}\sum_{n=1}^{N}g_{ij}(x_n)g_{lk}(x_n)-\sum_{n=1}^{N}g_{ij}(x_n)\,y_n. \qquad (24)$$

Finally, by Eqs. (18) and (24), we obtain

$$\sum_{k=1}^{L}C_1\,\varphi(i,j,i,k)\,\beta_{ik}+\sum_{\substack{l=1\\ l\ne i}}^{M}\sum_{k=1}^{L}C_2\,\varphi(i,j,l,k)\,\beta_{lk}=\theta(i,j), \qquad (25)$$

where $i=1,\ldots,M$; $j=1,\ldots,L$ and

$$C_1=1-\gamma+\frac{\gamma}{M},\qquad C_2=\frac{\gamma}{M}, \qquad (26)$$

$$\varphi(i,j,k,l)=\sum_{n=1}^{N}g_{ij}(x_n)\,g_{kl}(x_n),\qquad \theta(i,j)=\sum_{n=1}^{N}g_{ij}(x_n)\,y_n. \qquad (27)$$

Here, $C_1$ and $C_2$ are two constants; $\varphi(i,j,k,l)$ represents the correlation between the $j$th hidden neuron of the $i$th individual RVFL network and the $l$th hidden neuron of the $k$th individual RVFL network; and $\theta(i,j)$ models the correlation between the $j$th hidden neuron of the $i$th base network and the target function $\psi(x)$. By Eqs. (20) and (26), the two constants $C_1$ and $C_2$ can be expressed as follows

$$C_1=1-2\lambda\,\frac{(M-1)^{2}}{M^{2}},\qquad C_2=2\lambda\,\frac{M-1}{M^{2}}. \qquad (28)$$

Applying Eq. (25) to all individual errors $e_i$ and all output weights $\beta_{ij}$ ($i=1,\ldots,M$; $j=1,\ldots,L$), a linear system of $M\times L$ equations is derived. Therefore, solving this linear system with respect to $\beta_{ij}$ yields an RVFL ensemble model. To facilitate this computational task, we write this linear system in a matrix form as follows

$$H_{corr}\,B_{ens}=T_h, \qquad (29)$$

where $H_{corr}$ is called the hidden correlation matrix, $B_{ens}$ is the global output weights matrix and $T_h$ is the hidden-target matrix. $H_{corr}$ is defined as follows

$$H_{corr}=\begin{bmatrix}
C_1\varphi(1,1,1,1) & \cdots & C_1\varphi(1,1,1,L) & \cdots & C_2\varphi(1,1,M,1) & \cdots & C_2\varphi(1,1,M,L)\\
\vdots & & \vdots & & \vdots & & \vdots\\
C_1\varphi(1,L,1,1) & \cdots & C_1\varphi(1,L,1,L) & \cdots & C_2\varphi(1,L,M,1) & \cdots & C_2\varphi(1,L,M,L)\\
C_2\varphi(2,1,1,1) & \cdots & C_2\varphi(2,1,1,L) & \cdots & C_2\varphi(2,1,M,1) & \cdots & C_2\varphi(2,1,M,L)\\
\vdots & & \vdots & & \vdots & & \vdots\\
C_2\varphi(2,L,1,1) & \cdots & C_2\varphi(2,L,1,L) & \cdots & C_2\varphi(2,L,M,1) & \cdots & C_2\varphi(2,L,M,L)\\
\vdots & & \vdots & & \vdots & & \vdots\\
C_2\varphi(M,1,1,1) & \cdots & C_2\varphi(M,1,1,L) & \cdots & C_1\varphi(M,1,M,1) & \cdots & C_1\varphi(M,1,M,L)\\
\vdots & & \vdots & & \vdots & & \vdots\\
C_2\varphi(M,L,1,1) & \cdots & C_2\varphi(M,L,1,L) & \cdots & C_1\varphi(M,L,M,1) & \cdots & C_1\varphi(M,L,M,L)
\end{bmatrix}_{(ML\times ML)}$$

or

$$H_{corr}(p,q)=\begin{cases}C_1\,\varphi(m,n,k,l) & \text{if } m=k,\\ C_2\,\varphi(m,n,k,l) & \text{otherwise},\end{cases}$$

where $p,q=1,\ldots,M\times L$; $m=\lceil p/L\rceil$; $n=((p-1)\bmod L)+1$; $k=\lceil q/L\rceil$; $l=((q-1)\bmod L)+1$; and $\bmod$ is the modulo operation. $B_{ens}$ and $T_h$ are defined as follows

$$B_{ens}=[\beta_{11},\ldots,\beta_{1L},\beta_{21},\ldots,\beta_{2L},\ldots,\beta_{M1},\ldots,\beta_{ML}]^{T}_{ML\times 1},$$

$$T_h=[\theta(1,1),\ldots,\theta(1,L),\theta(2,1),\ldots,\theta(2,L),\ldots,\theta(M,1),\ldots,\theta(M,L)]^{T}_{ML\times 1}.$$

Instead of using gradient descent-based quadratic optimization techniques to tune the linear parameters $\beta_{ij}$, an analytical solution can be derived by using Eq. (29), that is

$$\hat{B}_{ens}=H_{corr}^{-1}\,T_h. \qquad (30)$$

Algorithm 1. DNNE algorithm.

Require: Training dataset $D_t$; a basis function $G:\mathbb{R}\mapsto\mathbb{R}$ (default is Sigmoid); scaling coefficient $\lambda\in[0,1]$; the size of the base models $L$; and the number of base models $M$.
Ensure: Trained DNNE model.
1: Initialize the architecture of $M$ SLFNs to be the ensemble base models.
2: Initialize the basis functions of the base models randomly, i.e., $w_j$, $b_j$ are randomly set.
3: Calculate the outputs of the hidden layers of the base models for all examples in $D_t$, i.e., calculate $g_{ij}(x_n)$.
4: Calculate the constants $C_1$ and $C_2$.
5: for $p \gets 1$ to $M\times L$ do
6:   $m \gets \lceil p/L\rceil$
7:   $n \gets ((p-1)\bmod L)+1$
8:   for $q \gets 1$ to $M\times L$ do
9:     $k \gets \lceil q/L\rceil$
10:    $l \gets ((q-1)\bmod L)+1$
11:    if $m=k$ then
12:      $H_{corr}[p,q] \gets C_1\,\varphi(m,n,k,l)$
13:    else
14:      $H_{corr}[p,q] \gets C_2\,\varphi(m,n,k,l)$
15:    end if
16:  end for
17: end for
18: $k \gets 1$
19: for $i \gets 1$ to $M$ do
20:   for $j \gets 1$ to $L$ do
21:     $T_h[k] \gets \theta(i,j)$
22:     $k \gets k+1$
23:   end for
24: end for
25: Calculate $H_{corr}^{\dagger}$, the pseudo-inverse of $H_{corr}$.
26: Calculate the estimated global output weights matrix $\hat{B}_{ens}$ from Eq. (31).
27: for $i \gets 1$ to $M$ do
28:   Output weights of base model $i \gets \hat{B}_{ens}[(i-1)L+1 : iL]$
29: end for
30: return Ensemble model (DNNE).

It is known that the matrix $H_{corr}$ tends to be ill-conditioned when dealing with real datasets in practice, especially when two or more basis functions are linearly correlated. In that case, the magnitudes of the resulting parameter values $\beta_{ij}$ will be quite large, which is not desirable. To overcome this technical trouble, as done in numerical analysis, we employ the well-known singular value decomposition (SVD) techniques [31] to evaluate the parameters vector in this study. Denote by $H_{corr}^{\dagger}$ the generalized pseudo-inverse of the matrix $H_{corr}$ [23]. Thus, the estimated parameters vector given by Eq. (30) can be evaluated as

$$\hat{B}_{ens}=H_{corr}^{\dagger}\,T_h. \qquad (31)$$

Algorithm 1 describes our proposed learning scheme for building neural-net ensembles with random weights. Obviously, our proposed NCL-based ensemble technique with a fast component model builder outperforms other NCL-based ensemble techniques in which classical training algorithms, such as the back-propagation algorithm, are employed. Based on this understanding, it does not make sense to take long runs to make performance comparisons against BP-based ensembles.

Remark 1. It should be pointed out that, to the best of our knowledge, feed-forward neural networks with random weight assignment were originally proposed by Schmidt and his co-workers in [21]. Such an idea was further explored by Igelnik and Pao in [22], where a significant result on the universal approximation theorem was established. Recently, this kind of learner model with random parameters has received considerable attention due to its applicability for real-time data processing. In [26], Tyukin and Prokhorov regarded this type of neural network as random basis function approximators. They provided some deeper insights on the feasibility of such random models for modeling and control applications. Their reported results are quite informative and useful for a better understanding of the problems associated with the randomness in the learner model. In this paper, we employ the same approach as used in [21] to derive an analytic expression of the weights in the output layer, but with a different objective cost function.

Remark 2. The random input weights of base RVFL networks can be selected in different ways in order to improve the performance of the ensemble model. For example, the input weights can be selected so that the feature vectors generated from the hidden neurons are distributed on the surface of a ball centered at the origin of the feature space, but in a higher dimensional space. However, this paper does not make any assumption on the selection of input weights. DNNE randomly initializes input weights using a uniform distribution over $[-1,+1]$. On the other hand, the hidden feature vectors are scattered in a cube $[0,1]^d$ when a sigmoidal function is used as a basis function. Ideally, the hidden feature vectors of each base RVFL network in the ensemble model cover different regions of the feature space.

4. Performance evaluation

This section investigates the performance of DNNE ensembles and then compares it with other popular ensembling approaches: simple averaging, bagging, boosting and random forests. We tried different configurations for the tested ensembles to figure out the effect of each design parameter on the generalization performance of each ensemble approach.

4.1. Datasets

Nine datasets have been selected to analyze the performance of our method on regression tasks. Among these datasets, four are used to justify the function approximation capabilities of our ensemble method. These functions are: 3-d Mexican Hat, Friedman #1, Friedman #3 and Multi. The domain variables of these functions are randomly generated using a uniform probability distribution $U[a,b]$, where $a$ and $b$ are the lower and upper bounds of the corresponding variables, respectively. The size of each dataset and the constraints on its variables are listed in Table 1. A random Gaussian noise with zero mean, $N(0,0.02)$, was also added to all examples during training in order to reliably measure the learning capabilities of the tested methods. However, testing examples are noise-free. The remaining datasets model real-world applications obtained from the UCI repository [32], the StatLib repository and the Delve repository; below is a brief summary of each dataset.

California Housing dataset contains 20,640 observations for estimating the median house prices in California, U.S. Each observation is characterized by eight continuous features (median income, housing median age, total rooms, total bedrooms, population, households, latitude, and longitude).

Quake dataset contains information for 2178 earthquakes that occurred between January 1964 and February 1986; each one is described by three real variables (focal depth, latitude and longitude).

Table 1
Summary of regression datasets used in our study.

Dataset | Function/dependent variable | Attributes | Size
3-d Mex. hat | $y=\mathrm{sinc}\sqrt{x_1^2+x_2^2}=\dfrac{\sin\sqrt{x_1^2+x_2^2}}{\sqrt{x_1^2+x_2^2}}+\epsilon$ | $x_i\sim U[-4\pi,4\pi]$ | 3000
Friedman #1 | $y=10\sin(\pi x_1x_2)+20(x_3-0.5)^2+10x_4+5x_5+\epsilon$ | $x_i\sim U[0,1]$ | 5000
Friedman #3 | $y=\tan^{-1}\!\left[\dfrac{x_2x_3-\frac{1}{x_2x_4}}{x_1}\right]+\epsilon$ | $x_1\sim U[0,100]$, $x_2\sim U[40\pi,560\pi]$, $x_3\sim U[0,1]$, $x_4\sim U[1,11]$ | 3000
Multi | $y=0.79+1.27x_1x_2+1.56x_1x_4+3.42x_2x_5+2.06x_3x_4x_5+\epsilon$ | $x_i\sim U[0,1]$ | 4000
Cal. housing | House price | $[x_1,\ldots,x_8]\in\mathbb{R}^8$ | 20,640
Quake | Earthquake magnitude | $[x_1,\ldots,x_3]\in\mathbb{R}^3$ | 2178
Space ga | Proportion of votes cast per county | $[x_1,\ldots,x_6]\in\mathbb{R}^6$ | 3107
Abalone | Abalone age | $[x_1,\ldots,x_8]\in\mathbb{R}^8$ | 4177
Comp. activ. | CPU running time in user mode | $[x_1,\ldots,x_{12}]\in\mathbb{R}^{12}$ | 8192


Space ga dataset holds 3107 spatial data records on the U.S. county votes cast in the 1980 presidential elections; each record is composed of six input attributes (the population in each county, the population in each county with a 12th grade or higher education, the number of owner-occupied housing units, the aggregate income, the X coordinate of the county, and the Y coordinate of the county) and one output dependent variable (the logarithm of the proportion of votes cast per county). Abalone dataset is used to predict the age of abalone (number of rings in the shell) from eight physical measurements (sex, length, diameter, height, whole weight, shucked weight, viscera weight and shell weight) of 4177 abalones.

Computer Activity dataset describes the portion of time that CPUs run in user mode, based on 8192 computer system activities collected from a Sun SPARCstation 20/712 with 2 CPUs and 128 MB of memory running in a multi-user university department. Each system activity is evaluated using 12 system measures (number of reads and number of writes between system memory and user memory, number of system calls of all types, number of system read calls, number of system write calls, number of system fork calls, number of system exec calls, number of characters transferred by read calls, number of characters transferred by write calls, process run queue size, number of memory pages available to user processes, and number of disk blocks available for page swapping).

A summary of all dataset information is presented in Table 1. Note that all input and output attributes were normalized as follows

$$\bar{x}=\frac{x-x_{min}}{x_{max}-x_{min}}, \qquad (32)$$

where $\bar{x}$ is the new normalized value of the attribute $x$; and $x_{min}$ and $x_{max}$ are the minimum and maximum values of the attribute $x$, respectively.
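For reference, the normalization in Eq. (32) applied column-wise to a data matrix is simply (a small sketch of our own; it assumes every attribute satisfies $x_{max}>x_{min}$):

```python
import numpy as np

def minmax_normalize(X):
    """Column-wise min-max scaling to [0, 1], as in Eq. (32)."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min)
```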

4.2. Experiments setup

Different datasets and different ensemble methods require different parameter values. In all methods, we use the RVFL network with the Sigmoid basis function $G(x)=1/(1+e^{-x})$ as a base learner. In simple averaging, bagging, boosting and ERWNE, the component RVFL networks are trained using the standard least squares method as described on page 142 of [31], while the CART algorithm [33] is used to train the base trees of random forest ensembles. In fact, there are three parameters to be set for our ensemble method DNNE (number of basis functions in RVFL networks $L$, ensemble size $M$, and penalty coefficient $\lambda$), two parameters for the simple averaging ensemble ($L$ and $M$), three parameters for bagging ($L$, $M$, and bootstrap sampling size), and four parameters for boosting ($L$, $M$, bootstrap sampling size, and threshold $\phi$ for weighting incorrect predictions). Note that the adaboost.RT version [34] is used to compare the boosting algorithm with our method, since it is one of the best boosting algorithms for data regression.

An exhaustive linear search strategy was adopted to find the optimal values for these parameters; a sketch of that search is given below. The ensemble size $M$ was searched in the range $[2,15]$ with step 1 and the number of basis functions $L$ in RVFL networks was searched in the range $[5,50]$ with step 5. Similarly, the penalty coefficient $\lambda$ and the boosting threshold $\phi$ were searched in the range $[0,1]$ with steps 0.1 and 0.01, respectively. Moreover, 60% of the training examples were repeatedly sampled for bagging and boosting ensembles. On the other hand, the component decision trees of random forests were used with no pruning and the number of base trees was searched in the range $[5,100]$ with step 5. The random selection ratio of features was searched in the range $[0.1,1]$ with step 0.1. All parameter values can be further tuned using other model selection methods, but we are only interested in highlighting the relative performance of the DNNE ensemble in comparison with other ensembling methods. It is worth mentioning that we use the same initial weights for the base networks in DNNE and simple averaging ensemble models. This demonstrates the effectiveness of negative correlation learning against simple independent learning. Note that the same RVFL network architecture is used for all ensemble individuals.
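For DNNE, this exhaustive linear search amounts to a plain grid search over $(M, L, \lambda)$. A sketch of that loop (our own helper, reusing the `dnne_fit`/`dnne_predict` sketch from Section 3 and a single held-out validation split; the actual selection in this study used cross-validation folds):

```python
import numpy as np
from itertools import product

def grid_search_dnne(X_tr, y_tr, X_val, y_val):
    """Exhaustive search over ensemble size M, hidden size L and penalty lambda."""
    best_cfg, best_rmse = None, np.inf
    for M, L, lam in product(range(2, 16),                 # M in [2, 15], step 1
                             range(5, 55, 5),              # L in [5, 50], step 5
                             np.round(np.arange(0.0, 1.01, 0.1), 2)):  # lambda in [0, 1]
        params = dnne_fit(X_tr, y_tr, M, L, lam)
        rmse = np.sqrt(np.mean((dnne_predict(X_val, *params) - y_val) ** 2))
        if rmse < best_rmse:
            best_cfg, best_rmse = (M, L, lam), rmse
    return best_cfg, best_rmse
```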

All methods were assessed using a 10-fold cross-validation procedure. Each dataset is randomly partitioned into ten subsets, which initiate 10 runs for each experiment. For each run, two subsets are used for testing and parameter selection and the remaining subsets are used all together for training. The performance measurements from all folds are collected and averaged. Due to the randomness of weights and data sampling, each experiment is simulated ten times for more reliable results. Then, the results from all simulations are averaged. The Root Mean Squares Error (RMSE) metric is calculated on the testing subsets in order to evaluate the generalization capabilities of the tested methods, and it is given by

$$E=\sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(\frac{1}{M}\sum_{i=1}^{M}f_i(x_n)-y_n\right)^{2}}, \qquad (33)$$

where $N$ is the number of testing examples; $M$ is the number of base networks; and $f_i(x_n)$ is the output of the $i$th base network given an input $x_n$.
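Equivalently, given the $M\times N$ array of base-network outputs on the test set, the metric of Eq. (33) can be computed as follows (a small sketch with our own variable names):

```python
import numpy as np

def ensemble_rmse(preds, y):
    """Testing RMSE of Eq. (33); preds has shape (M, N): one row per base network."""
    ensemble_out = preds.mean(axis=0)          # uniform averaging over the M networks
    return float(np.sqrt(np.mean((ensemble_out - y) ** 2)))
```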

Eventually, the simulations were executed on a computer with 2 processors, each running at a 3.0 GHz frequency, and 4.0 GB of RAM. A multiprocessing strategy was used to run the simulations efficiently.


4.3. Results and discussion

4.3.1. Performance analysis of the DNNE algorithm

We thoroughly investigate the performance of our proposed approach in different experimental designs. Fig. 1 plots the testing generalization error of DNNE models (solid lines) and the testing generalization error of simple averaging ensemble models (dashed lines) for different ensemble sizes and for all datasets. At each ensemble size in Fig. 1, the best penalty coefficient and best number of hidden neurons in the RVFL networks are used. We observe from Fig. 1 that increasing the number of base networks is not always beneficial for the ensemble generalization accuracy, and that the performance of the DNNE algorithm is always better than the performance of the simple averaging method regardless of the ensemble size. However, the Quake dataset reveals an unexpected behavior at four base networks. It can be seen for four datasets in Fig. 1 that our ensemble approach DNNE gains in accuracy when the ensemble expands up to 12 components (see Quake, Space ga, Abalone and Computer Activity). This can be due to the different dataset complexities. Apparently, 4–12 base networks are sufficient to constitute well generalized ensemble models with random weights using our algorithm.

The ensemble size is not the only factor impacting the generalization performance of an ensembling method. Fig. 2 illustrates how the number of basis functions in the base networks can affect the accuracy of DNNE models (solid lines). In most of the experiments (3-d Mexican Hat, Friedman #1, Friedman #3, California Housing, Space ga, Abalone, and Computer Activity), enlarging the hidden layer of base networks is reflected in a better performance. However, the testing RMSE of DNNE tends to become quite stable as the number of basis functions in RVFL networks approaches 40. In the other two datasets, the DNNE method benefits from merging the ensemble components rather than from increasing the number of basis functions. 25 and 5 basis functions were sufficient to gain the best accuracy for the Multi and Quake problems, respectively. These observations are expected because our algorithm tries to map the problem input space in a way that each base network learns a different part of the feature space. In addition, Fig. 2 shows that ensemble models outperform single models even when random weights are used (compare the solid and dashed lines with the dotted lines). It is also evident from Fig. 2 that DNNE models attain better generalization accuracies than simple averaging models regardless of the size of the hidden layers in the base networks. The Quake dataset, however, shows comparable performance for both approaches.

Besides the ensemble size and the number of basis functions in RVFL networks, the penalty coefficient of negative correlation learning plays a key role in the performance of our DNNE algorithm. Fig. 3 presents the testing RMSE of our method in terms of the penalty coefficient and for different base network complexities. The solid, dotted and dashed lines represent the performance measures for 10, 30, and 50 basis functions in the RVFL networks, respectively. Although the penalty coefficient values were searched in the range $[0,1]$, we do not display the whole scale, for the sake of graphic clarity, and only show the efficient ranges. Fig. 3 demonstrates that the testing generalization error slightly declines as the penalty coefficient value increases. However, the performance of DNNE dramatically improves when $\lambda\ge 0.5$. This result can be interpreted as due to the scaling factor $1/2$ in the first term of Eq. (15): it forces the learning algorithm to focus on the first term rather than the penalty term when $\lambda<0.5$. Fig. 3 is also consistent with the results of Fig. 2, that is, increasing the number of basis functions is beneficial for most study cases (as can be seen from the solid lines).

It is worth noting that although the basis functions of the base RVFL networks (hidden neurons) work in a linear mode, the proposed learning model (DNNE) can model nonlinear relationships between the input and output variables in the investigated datasets. This linearity emerges because the input weights and biases are randomly fixed and only the output weights of the base networks are updated.

4.3.2. Comparison with other ensemble methods

Tables 2 and 3 compare the performance of our method with four popular ensemble approaches: simple averaging, bagging, adaboost.RT and random forests, in addition to single RVFL networks and our previous work (denoted by ERWNE in Table 2). The reported results were collected based on the best parameter values for each method and each dataset (shown in Table 4). They were selected by using the same search criteria explained in Section 4.2. From Tables 2 and 3, it is evident that our proposed ensemble method DNNE always performs better than the other ensemble approaches, including our previous work ERWNE [12]. Moreover, the complexity of DNNE models is much lower than that of other ensemble models in five cases and comparable in four cases (compare the $M_1$, $M_2$, $M_3$ and $M_4$ columns in Table 4). Since DNNE and random forest ensemble models have different base learners, it is useful to compare their performance closely. Table 2 shows that DNNE greatly outperforms random forests in all datasets except the Quake dataset, where DNNE performs slightly better. The performance of random forests greatly degrades in the four artificial function approximation datasets (3-d Mexican Hat, Friedman #1, Friedman #3, and Multi), and this is due to its high sensitivity to noisy data.


Table 2
Comparison of testing RMSEs of DNNE, ERWNE, single RVFL network and random forest ensembles.

Dataset | DNNE | ERWNE | Random forest | RVFL network
3-d Mex. hat | 0.0201 ± 0.0024 | 0.0847 ± 0.0012 | 0.1380 ± 0.0001 | 0.1005 ± 0.0036
Friedman #1 | 0.0571 ± 0.0032 | 0.3976 ± 0.0151 | 4.89163 ± 0.001597 | 0.7738 ± 0.1220
Friedman #3 | 0.0465 ± 0.0020 | 0.0690 ± 0.0012 | 0.1380 ± 0.0008 | 0.0731 ± 0.0020
Multi | 0.0048 ± 0.0003 | 0.0383 ± 0.0034 | 1.1510 ± 0.0006 | 0.1127 ± 0.0268
Cal. housing | 0.1285 ± 0.0011 | 0.1290 ± 0.0015 | 0.2275 ± 0.0020 | 0.1320 ± 0.0015
Quake | 0.171420 ± 0.0001 | 0.171478 ± 0.0002 | 0.171970 ± 0.0002 | 0.171478 ± 0.0003
Space ga | 0.0334 ± 0.0003 | 0.0342 ± 0.0006 | 0.0618 ± 0.0001 | 0.0346 ± 0.0008
Abalone | 0.0750 ± 0.0006 | 0.0757 ± 0.0011 | 0.1017 ± 0.0005 | 0.0769 ± 0.0024
Comp. activ. | 0.0326 ± 0.0004 | 0.0353 ± 0.0010 | 0.0829 ± 0.0001 | 0.0443 ± 0.0042

Table 3
Comparison of testing RMSEs of DNNE, simple and bagging ensembles and Adaboost.RT.

Dataset | DNNE | Simple | Bagging | Adaboost.RT
3-d Mex. hat | 0.0201 ± 0.0024 | 0.0920 ± 0.0017 | 0.0910 ± 0.0012 | 0.0908 ± 0.0012
Friedman #1 | 0.0571 ± 0.0032 | 0.5571 ± 0.0488 | 0.5322 ± 0.0327 | 0.5326 ± 0.0250
Friedman #3 | 0.0465 ± 0.0020 | 0.0710 ± 0.0009 | 0.0705 ± 0.0007 | 0.0722 ± 0.0008
Multi | 0.0048 ± 0.0003 | 0.0640 ± 0.0092 | 0.0126 ± 0.0008 | 0.0495 ± 0.0074
Cal. housing | 0.1285 ± 0.0011 | 0.1299 ± 0.0007 | 0.1292 ± 0.0005 | 0.1290 ± 0.0004
Quake | 0.171420 ± 0.0001 | 0.171421 ± 0.0001 | 0.171434 ± 0.0002 | 0.171445 ± 0.0002
Space ga | 0.0334 ± 0.0003 | 0.0339 ± 0.0002 | 0.0339 ± 0.0004 | 0.0346 ± 0.0007
Abalone | 0.0750 ± 0.0006 | 0.0753 ± 0.0006 | 0.0756 ± 0.0011 | 0.0757 ± 0.0010
Comp. activ. | 0.0326 ± 0.0004 | 0.0357 ± 0.0005 | 0.0358 ± 0.0007 | 0.0378 ± 0.0010

In terms of ensemble size, the size of random forest models is much greater than that of DNNE models, i.e., few networks are used in DNNE models compared with the many decision trees used in RF models.

As mentioned in Section 1, we are looking for an efficient solution that reduces the training time of ensemble models. Fig. 4 illustrates the average training times of our current DNNE algorithm and our previous ERWNE algorithm developed in [12], along with the dataset sizes. It is evident that DNNE runs efficiently regardless of the dataset size, while ERWNE suffers when the dataset size dramatically increases, as can be seen from the Cal. Housing and Comp. Act. datasets. Note that the training time of DNNE for Comp. Act. is greater than the training time for Cal. Housing because the number of base models in Comp. Act. ensembles is larger (see Table 4).

5. Conclusions

Evolutionary approaches combined with gradient-based learning algorithms are mainly employed to build neural-net ensembles with high diversity among their base networks, while the overall ensemble accuracy is well maintained. However, these approaches, including our previous work in [12], are time consuming and exhaustively explore the ensemble hypotheses space. In this paper, we proposed an effective and efficient solution to build ensemble models in a very short time. The majority of the RVFL networks' weights (hidden layer weights and biases) are randomly assigned, and the rest of the weights can be computed analytically based on negative correlation learning (NCL) and least squares methods. The results show that our proposed algorithm is promising for data regression applications. It should be pointed out that this is the first study to investigate the effectiveness of the least squares method in building ensemble models with random weights.

Although the DNNE algorithm is effective and efficient, it still has some limitations. For instance, the performance of DNNE is sensitive to the value of the regularizing factor $\lambda$. It is interesting to do further research on how to reduce the influence of this free parameter on the algorithm performance. Another limitation of DNNE is the high computational complexity when it attempts to solve problems that require a large number of RVFL networks with more hidden neurons.

Table 4
Optimal parameter values of each dataset for the results in Tables 2 and 3.

Dataset | DNNE: M1 (a), L1 (b), λ (c) | Bagging: M2 (a), L2 (b) | Adaboost.RT: M3 (a), L3 (b), φ (d) | Forest: M4 (a), f (e)
3-d Mex. hat | 6, 50, 0.6 | 15, 50 | 15, 50, 0.15 | 30, 0.1
Friedman #1 | 6, 50, 0.6 | 15, 50 | 15, 50, 0.05 | 75, 1.0
Friedman #3 | 6, 50, 0.6 | 14, 50 | 15, 50, 0.05 | 75, 1.0
Multi | 6, 25, 0.6 | 15, 50 | 15, 35, 0.05 | 100, 0.9
Cal. housing | 4, 50, 0.55 | 14, 50 | 15, 50, 0.05 | 15, 0.1
Quake | 12, 5, 0.1 | 11, 5 | 15, 5, 0.1 | 60, 0.3
Space ga | 4, 50, 0.5 | 12, 50 | 15, 50, 0.05 | 25, 1.0
Abalone | 12, 45, 0.5 | 15, 45 | 15, 45, 0.05 | 60, 0.1
Comp. activ. | 12, 50, 0.5 | 14, 50 | 15, 50, 0.05 | 85, 1.0

(a) The ensemble size.
(b) The number of basis functions in the RVFL base networks.
(c) The penalty coefficient of the negative correlation learning formula.
(d) The incorrect predictions threshold.
(e) The features random selection percentage.


The computational cost is mainly dominated by computing the matrix $H_{corr}$ and its pseudo-inverse. One way to alleviate this computational burden is to partition the whole training dataset into some subsets for building compact RVFL networks. Moreover, the pseudo-inverse matrix can be calculated using more efficient algorithms.

This work sets a basis for future research on building advanced neural-net ensembles with random weights. As can be seen, other basis functions, such as the kernel functions used in support vector machines (SVMs) and radial basis functions (RBFs), can also be employed in RVFL networks. As for whether such a replacement can result in better performance, further empirical studies with comparisons are needed. Furthermore, with some adjustments, the proposed algorithm in this paper can be easily extended to data classification problems.

References

[1] L. Breiman, Bagging predictors, Mach. Learn. 24 (2) (1996) 123–140.
[2] R.E. Schapire, The strength of weak learnability, Mach. Learn. 5 (2) (1990) 197–227.
[3] Y. Freund, Boosting a weak learning algorithm by majority, in: Proceedings of the 3rd Annual Workshop on Computational Learning Theory, Rochester NY, USA, 1990, pp. 202–216.
[4] L. Breiman, Random Forests – Random Features, Tech. Rep. 567, University of California, Berkeley CA, USA, 1999.
[5] L. Hansen, P. Salamon, Neural network ensembles, IEEE Trans. Pattern Anal. Mach. Intell. 12 (10) (1990) 993–1001.
[6] M.P. Perrone, L.N. Cooper, When Networks Disagree: Ensemble Methods for Hybrid Neural Networks, Tech. Rep. 61, Brown University, Providence RI, USA, 1993.
[7] A. Krogh, J. Vedelsby, Neural network ensembles, cross validation and active learning, in: Advances in Neural Information Processing Systems, MIT Press, 1995, pp. 231–238.
[8] X. Yao, Y. Liu, Making use of population information in evolutionary artificial neural networks, IEEE Trans. Syst., Man Cybernet. 28 (3) (1998) 417–425.
[9] D.H. Wolpert, Stacked generalization, Neural Netw. 5 (1992) 241–259.
[10] G. Brown, J.L. Wyatt, P. Tiňo, Managing diversity in regression ensembles, J. Mach. Learn. Res. 6 (2005) 1621–1650.
[11] D.W. Opitz, J.W. Shavlik, Generating accurate and diverse members of a neural network ensemble, in: Advances in Neural Information Processing Systems, MIT Press, 1996, pp. 535–541.
[12] D. Wang, M. Alhamdoosh, Evolutionary extreme learning machine ensembles with size control, Neurocomputing 102 (2013) 98–110.
[13] Y. Liu, X. Yao, T. Higuchi, Evolutionary ensembles with negative correlation learning, IEEE Trans. Evol. Comput. 4 (4) (2000) 380–387.
[14] B. Rosen, Ensemble learning using decorrelated neural networks, Connect. Sci. 8 (1996) 373–384.
[15] Y. Liu, X. Yao, Ensemble learning via negative correlation, Neural Netw. 12 (10) (1999) 1399–1404.
[16] H. Chen, X. Yao, Regularized negative correlation learning for neural network ensembles, IEEE Trans. Neural Netw. 20 (12) (2009) 1962–1979.
[17] K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators, Neural Netw. 2 (5) (1989) 359–366.
[18] M. Leshno, S. Schocken, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function, Neural Netw. 6 (1993) 861–867.
[19] J. Park, I.W. Sandberg, Universal approximation using radial-basis-function networks, Neural Comput. 3 (2) (1991) 246–257.
[20] B. Igelnik, Y.-H. Pao, S. LeClair, C.Y. Shen, The ensemble approach to neural-network learning and generalization, IEEE Trans. Neural Netw. 10 (1) (1999) 19–30.
[21] W. Schmidt, M. Kraaijveld, R. Duin, Feedforward neural networks with random weights, in: Proceedings of the 11th IAPR International Conference on Pattern Recognition Methodology and Systems, 1992, pp. 1–4.
[22] B. Igelnik, Y.-H. Pao, Stochastic choice of basis functions in adaptive function approximation and the functional-link net, IEEE Trans. Neural Netw. 6 (6) (1995) 1320–1329.
[23] C.R. Rao, S.K. Mitra, Generalized Inverse of Matrices and Its Applications, Wiley, New York, 1971.
[24] Y.-H. Pao, Y. Takefuji, Functional-link net computing: theory, system architecture, and functionalities, Comput. Mag. 25 (5) (1992) 76–79.
[25] D.J. Albers, J.C. Sprott, W.D. Dechert, Routes to chaos in neural networks with random weights, Int. J. Bifurcat. Chaos 8 (7) (1998) 1463–1478.
[26] I. Tyukin, D. Prokhorov, Feasibility of random basis function approximators for modeling and control, in: Proceedings of the IEEE Multi-Conference on Systems and Control, Saint Petersburg, Russia, 2009, pp. 1391–1396.
[27] S. Geman, E. Bienenstock, R. Doursat, Neural networks and the bias/variance dilemma, Neural Comput. 4 (1) (1992) 1–58.
[28] M. Pincus, A closed form solution of certain types of constrained optimization problems, Oper. Res. 16 (1968) 690–694.
[29] N. Ueda, R. Nakano, Generalization error of ensemble estimators, in: Proceedings of the IEEE International Conference on Neural Networks, Washington DC, USA, 1996, pp. 90–95.
[30] Y. Liu, X. Yao, Negatively correlated neural networks can produce best ensembles, Aust. J. Intell. Inform. Process. Syst. 4 (3/4) (1997) 176–185.
[31] C.M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics), Springer-Verlag, New York, Inc., Secaucus, NJ, USA, 2006.
[32] A. Frank, A. Asuncion, UCI Machine Learning Repository, 2010.
[33] L. Breiman, J.H. Friedman, R.A. Olshen, C.J. Stone, Classification and Regression Trees, Wadsworth and Brooks/Cole Advanced Books and Software, Monterey CA, USA, 1984.
[34] D.L. Shrestha, D.P. Solomatine, Experiments with AdaBoost.RT, an improved boosting scheme for regression, Neural Comput. 18 (7) (2006) 1678–1710.


对等网络配置及网络资源共享

物联网技术与应用 对等网络配置及网络资源共享 实验报告 组员:

1.实验目的 (1)了解对等网络基本配置中包含的协议,服务和基本参数 (2)了解所在系统网络组件的安装和卸载方法 (3)学习所在系统共享目录的设置和使用方法 (4)学习安装远程打印机的方法 2.实验环境 Window8,局域网 3.实验内容 (1)查看所在机器的主机名称和网络参数,了解网络基本配置中包含的协议,服务和基本参数 (2)网络组件的安装和卸载方法 (3)设置和停止共享目录 (4)安装网络打印机 4.实验步骤 首先建立局域网络,使网络内有两台电脑 (1)“我的电脑”→“属性”,查看主机名,得知两台计算机主机名为“idea-pc”和“迦尴专属”。 打开运行输入cmd,进入窗口输入ipconfig得到相关网络参数。局域网使用的是无线局域网。 (2)网络组件的安装和卸载方法:“网络和共享中心”→“本地连接”→“属

性”即可看到网络组件,可看其描述或卸载。 “控制面板”→“卸载程序”→“启用和关闭windows功能”,找到internet 信息服务,即可启用或关闭网络功能。 (3)设置和停止共享目录(由于windows版本升高,加强了安全措施和各种权

限,所以操作增加很多) 使用电脑“idea-pc”。“打开网络和共享中心”→“更改高级选项设置”。将专用网络,来宾或公用,所有网络中均选择启用文件夹共享选项,最下面的密码保护项选择关闭,以方便实验。 分享文件夹“第一小组实验八”,“右键文件夹属性”→“共享”→“共享”,选择四个中的一个并添加,此处选择everyone,即所有局域网内人均可以共享。

销售管理软件操作手册

前言 本《操作手册》内容是按该软件主界面上第一横排从左至右的顺序对各个功能加以介绍的,建议初学者先对第一章系统设置作初步了解,从第二章基础资料读起,回头再读第一章。该管理软件的重点与难点是第二章,望读者详读。 第一章系统设置 打开此管理软件,在主界面上的左上方第一栏就是【系统设置】,如下图所示: 点击【系统设置】,在系统设置下方会显示【系统设置】的内容,包括操作员管理、数据初始化、修改我的登录密码、切换用户、选项设置、单据报表设置、导入数据、数据库备份、数据库恢复、压缩和修复数据库、退出程序。下面分别将这些功能作简要介绍: 1.1操作员管理 新建、删除使用本软件的操作员,授权他们可以使用哪些功能。此功能只有系统管理员可以使用。 1.1.1 进入界面 单击【系统设置】,选择其中的【操作员管理】,画面如下:

1.1.2、增加操作员 单击【新建】按钮,画面如下: 输入用户名称、初始密码、选择用户权限,可对用户进行适当描述,按【保存】后就点【退出】,就完成了新操作员的添加,效果如下图。

1.1.3 删除操作员 选择要删除的操作员,单击【删除】按钮。 1.1.4 修改操作员 选择要修改的操作员,单击【修改】按钮,可对操作员作相应修改,修改后需保存。 1.1.5 用户操作权限 选择要修改的操作员,单击【修改】按钮,出现以下画面,点击【用户权限】栏下的编辑框,出现对号后点【保存】,该操作员就有了此权限。 1.2数据初始化 1.2.1进入界面 单击【系统设置】,选择其中的【数据初始化】,画面如下:

1.2.2数据清除 选择要清除的数据,即数据前出现对号,按【确定】后点【退出】,就可清除相应数据。 1.3 修改我的登录密码 1.3.1进入界面 单击【系统设置】,选择其中的【修改我的登录密码】,画面如下: 1.3.2密码修改 输入原密码、现密码,然后对新密码进行验证,按【确定】后关闭此窗口,就可完成密码修改。 1.4 切换用户 1.4.1进入界面 单击【系统设置】,选择其中的【切换用户】,画面如下:

对等网络(P2P)总结整理解析

对等网络(P2P 一、概述 (一定义 对等网络(P2P网络是分布式系统和计算机网络相结合的产物,在应用领域和学术界获得了广泛的重视和成功,被称为“改变Internet的新一代网络技术”。 对等网络(P2P:Peer to Peer。peer指网络结点在: 1行为上是自由的—任意加入、退出,不受其它结点限制,匿名; 2功能上是平等的—不管实际能力的差异; 3连接上是互联的—直接/间接,任两结点可建立逻辑链接,对应物理网上的一条IP路径。 (二P2P网络的优势 1、充分利用网络带宽 P2P不通过服务器进行信息交换,无服务器瓶颈,无单点失效,充分利用网络带宽,如BT下载多个文件,可接近实际最大带宽,HTTP及FTP很少有这样的效果 2、提高网络工作效率 结构化P2P有严格拓扑结构,基于DHT,将网络结点、数据对象高效均匀地映射到覆盖网中,路由效率高 3、开发了每个网络结点的潜力 结点资源是指计算能力及存储容量,个人计算机并非永久联网,是临时性的动态结点,称为“网络边缘结点”。P2P使内容“位于中心”转变为“位于边缘”,计算模式由“服务器集中计算”转变为“分布式协同计算”。

4、具有高可扩展性(scalability 当网络结点总数增加时,可进行可扩展性衡量。P2P网络中,结点间分摊通信开销,无需增加设备,路由跳数增量小。 5、良好的容错性 主要体现在:冗余方法、周期性检测、结点自适应状态维护。 二、第一代混合式P2P网络 (一主要代表 混合式P2P网络,它是C/S和P2P两种模式的混合;有两个主要代表: 1、Napster——P2P网络的先驱 2、BitTorrent——分片优化的新一代混合式P2P网络 (二第一代P2P网络的特点 1、拓扑结构 1混合式(C/S+P2P 2星型拓扑结构,以服务器为核心 2、查询与路由 1用户向服务器发出查询请求,服务器返回文件索引 2用户根据索引与其它用户进行数据传输 3路由跳数为O(1,即常数跳 3、容错性:取决于服务器的故障概率(实际网络中,由于成本原因,可用性较低。

多文件夹的自动同步和各向同步工具

多文件夹的自动同步和各向同步工具 出处:小建の软件园作者:佚名日期:2008-06-25 关键字:同步 对于经常需要备份文件,同步文件的网友,Allway Sync 可谓不可多得,虽然不能激活其专业版,对文件数量多和经常性的同步操作可能会超过免费版的限制,不过对于一般文件数量不多同步操作可以完全满足,Allway Sync 使用相当简单,多种同步方式能满足你不同需求。对重要文件进行备份是文件恢复最好的方法,而 Allway Sync 可以简化你许多备份的过程,能实现自动备份,如果你“胃口”不大,免费版应当已经可以满足。 下载地址:https://www.wendangku.net/doc/1c9235294.html,/soft/23495.html Allway Sync 可以进行自动同步,可以对的文件/文件夹进行筛选,只备份需要的东东。

Allway Sync 备份方式介绍 - 同步方式有源文件夹同步和各向同步两种方式: 1、源文件夹同步方式将以一个文件夹为基准,删除或覆盖其余文件夹与源文件相比较不相同的文件。 2、各向同步方式则自动将更新的文件覆盖几个同步文件夹中的旧文件。软件带有一个小型数据库,监视每次更新后的文件状态。如果在一次同步之后,你删除了同步文件夹中某些文件,它在同步的时候将其它的几个文件夹的副本也删除,而不会将不需要的未删除文件重复拷贝到已更新的文件夹。由于软件自己会对文件进行删除和覆盖,它提供了使用回收站进行文件备份的措施,使用者可以在不慎执行错误的同步动作之后,从回收站将错误删除或覆盖的文件找回来(默认禁用该功能,请到软件选项处激活相应设置)。 主程序在 AllwaySync\Bin\里面,此为多国语言版,在语音选项那里选择中文即可。不过退出的时候会有错误提示(貌似没影响?)

博客系统需求分析报告1

系统需求分析和概要设计 1 系统需求分析 1.1 开发背景 过去很多人都喜欢写文章写日记以及交流自己的文章和作品,以求实现相互间的沟通、展现自己的才华和让别人了解自己的想法观点。现在的网络已经成为人们生活中不可或缺的一个元素,所以自然而然诞生了博客这样一个新兴事物,它不仅仅能取代前面所说的功能,还能加入图片,而且使得作者更能无所拘束地生动地写出自己想写的,旁人也能非常便捷地阅读并且加以评论,并且它还能作为展示个人个性的窗户。个人博客现在已经成为很多人生活中必不可少的一个部分,方便了人与人之间的沟通和交流。 1.2 系统实现目标概述 基于个人博客以上的特点,本系统要实现个人博客的主要基本功能有主界面,博客用户登录发表文章(心情、日志),用户登录/退出,游客发表评论,分页浏览文章和评论等。这里其中比较主要的是区分了个人博客用户和游客。博客用户可以在任何时候写下自己的主张,记录下自己的点点滴滴。而游客主要的权限是阅读博客所有注册用户写的文章,阅读后可以发表评论和留言,还可以分页浏览所有注册用户上传的图片。以上是个人博客的系统功能目标,当然由于个人博客的网络流行特点以及个人个性的展示,还适当要求界面比较漂亮轻快,直观便捷,操作方式简单以及人性化。 1.3 系统功能需求 根据对系统的特点和应用的分析,可以得到本系统主要有如下功能: (1)登录 这部分功能又分为用户登录、用户退出两个部分。 登录:主要用于验证博客网站用户信息的真实身份,以便对博客网站进行管

理和维护。通过系统管理员写入的用户名,密码登录到网站。网站检测用户的用户名,密码并给予其相应的权限对博客网站进行操作。 用户退出:已经登陆的用户可以退出,释放自己所占有的各种信息资源。 (2)文章管理 文章管理主要有文章的发表、查询、浏览、评论和删除功能。 博客的系统管理员除了可以查询、浏览和评论文章外,还可以对系统中的所有文章以及评论进行修改、删除操作。这些维护和管理拥有最高权限,并且系统自动更新在服务器端数据库中的数据。 文章的发表:博客用户可以发表自己的文章,文章包括主题、正文、表情、图片等信息,作者通过各种元素来展示自己的想法和思想。系统接受这些信息并且存储在服务器端的数据库中。 文章的删除:博客用户可以删除自己已经发表的文章内容和各项信息,系统自动在服务器端数据库中删除这些记录。 文章的浏览:游客和博客用户根据所获得的用户权限获取服务器端数据存储的各篇文章并且浏览阅读文章的所有信息,包括标题、正文、表情、图片以及其它读者的留言评论。 文章的评论:文章的读者可以评论和回复所阅读的文章,发表自己的看法。系统自动将这些评论存储在服务器端的数据库中,并且可供博客作者以及其它读者浏览。 文章的查询:博客用户可以按文章题目或作者来查询想要查的文章。 文章中还可能包含一些图片视频等多媒体,所以文章管理中还包含了网站中媒体的管理。 媒体管理有添加,浏览、删除和查询功能。博客用户可以添加自己喜欢的图片或视频等,还可以查询和浏览系统中的所有媒体信息。游客只能浏览博客系统中的媒体信息。系统管理员拥有以上的所有权限,除此之外还可以删除媒体信息。 (3)博客管理员管理 博客管理员可以添加、删除新用户,用户的角色又分为订阅者、作者、编辑、投稿者、管理员。 还可以对博客主页的外观、博客使用的插件、工具进行添加、删除、设置。

博客作用

1.过滤信息 在这个网络信息泛滥的时代,网上的信息太多、太杂、太乱,学习者无法判别哪些信息是有价值的,哪些是重要的。教师可以通过博客将经过过滤过的信息传递给学生,而学生也可以通过博客将信息传递给他的伙伴。通过浏览别人的博客日志,知识获取的效率将得到很大的提高。 2.提供学习的丰富情境 通常的教辅网站,只是提供一些参考资料的链接,而博客则提供更多的评价,更广泛的背景资料。有一些学者通过博客日志反映他们对某些问题的认识,开始对于这些问题的看法可能也是粗糙的,但是他们将这些思想表达出来,然后在博客上发表后续的看法。在这一过程中,专家可以将最近看了哪些书,读了哪些人的文章,听取了哪些意见都通过博客方式表达出来。这样,阅读者了解的不仅仅是专家静态的、目前的观点,而重要的是可以把握专家思想的流程。同样,这一方式对于学生来讲也是有效的,学生的博客日志可以反映出他们在学习过程中产生的问题、关于问题的想法与思路、问题的解决过程,使得教师可以更有效地了解学生的学习状况。 3.提高学生的媒体文化水平 博客(blog)的个人化使得博客们(blogger)在信息发布的过程中,要采用最适当的方式对信息进行过滤与说明,使得他的博客日志能够为更多的人接受,使得他的思想和资源为更多的人所了解。与传统BBS相比,博客日志具有更强的规范性,博客们具有更强的自律性。由于博客一般是由个人或小组拥有的,通常具有共同的主题,所谓敝帚自珍,所以在博客的世界中,很少出现在BBS中常见的不负责任的"胡说八道"。 4.鼓励参与者发表自己不同的观点 博客的模式是平等的,博客更看重的是参与的过程而不是结果。对于教师或书本上的观点,学生可以通过博客的方式发表他对于这些问题的理解,博客并不要求意见的统一,但要求意见的针对性和独立性。另外,在课程设置的过程中可以设置多个不同的议题,允许学生自由地选择他们感兴趣的议题。 5.提供对信息的评价 博客的重要特征就是对信息的过滤,使得信息可以转换成有用的知识。但是

对等网络的网络弹性分析

对等网络的网络弹性分析 摘要:网络弹性研究的是网络在节点失效或被有意攻击下所表现出来的特征。分析Gnutella网络的网络弹性,包括对于随机攻击的容错性和对于选择性攻击的抗攻击性,并与ER模型和EBA模型进行了对比。Gnutella网络对于随机攻击具有很好的容错性,但是对于选择性攻击却显得脆弱。最后对网络弹性进行了理论分析,给出了网络在出现最大集团临界点之前的平均集团大小的公式解。 关键词:对等网络;无标度;网络弹性;脆弱性 中图分类号:TP393.02文献标识码:A 文章编号:1001-9081(2007)04-0784-04 0 引言 在过去的40多年里,科学家习惯于将所有复杂网络看作是随机网络。随机网络中绝大部分节点的连结数目会大致相同。1998年开展的一个描绘互联网的项目却揭示了令人惊诧的事实:基本上,互联网是由少数高连结性的页面串联起来的,80%以上页面的连结数不到4个,而只占节点总数不到万分之一的极少数节点,例如门户网Yahoo和搜索引擎Google等类似网站,却高达上百万乃至几十亿个链接。研究者把包含这种重要集散节点的网络称为无标度网络[1]。

具有集散节点和集群结构的无标度网络,对意外故障具有极强的承受能力,但面对蓄意的攻击和破坏却不堪一击[2]。在随机网络中,如果大部分节点发生瘫痪,将不可避免地导致网络的分裂。无标度网络的模拟结果则展现了全然不同的情况,随意选择高达80%的节点使之失效,剩余的网络还可能组成一个完整的集群并保持任意两点间的连接,但是只要5%―10%的集散节点同时失效,就可导致互联网溃散成孤立无援的小群路由器。 许多复杂网络系统显示出惊人的容错特性,例如复杂通信网络也常常显示出很强的健壮性,一些关键单元的局部失效很少会导致全局信息传送的损失。但并不是所有的网络都具有这样的容错特性,只有那些异构连接的网络,即无标度网络才有这种特性,这样的网络包括WWW、因特网、社会网络等。虽然无标度网络具有很强的容错性,但是对于那些有意攻击,无标度网络却非常脆弱。容错性和抗攻击性是通信网络的基本属性,可以用这两种属性来概括网络弹性。 对等网络技术和复杂网络理论的进展促使对现有对等 网络的拓扑结构进行深入分析。对网络弹性的认识可以使从网络拓扑的角度了解网络的脆弱点,以及如何设计有效的策略保护、减小攻击带来的危害。本文研究Gnutella网络的网络弹性,并与ER模型和EBA模型进行了比较,对比不同类 型的复杂网络在攻击中的网络弹性。当网络受到攻击达到某

备份软件使用方法v1.0

备份软件使用方法 一Bestsync2012使用说明 1 软件运行 点击BestSync2012运行软件 2 设置任务 在编辑菜单下点击追加任务(如果任务列表下没有任务可以在文件菜单下选择新建任务选项) 软件会弹出任务窗口,用来设置同步任务

以其中一个任务为例

选择好同步的文件夹和同步方向,点击下一步,按照要求设置任务即可。 3 查看任务 在以有任务中点击设置任务(任务必须是未在同步状态,否者不能点击设置任务选项)

点击后软件会弹出设置同步任务窗口,在这里可以在里面进行任务修改和设置

目前我们设置的同步任务只需要修改一般和日程两个窗口下的内容,其他暂时不需要修改。 BestSync2012这款同步软件目前还不是很稳定,需要不定期检查一下软件是否运行正常,如果发现软件出错,就关闭软件后在打开BestSync2012软件,因为打开软件后软件不会自动启动同步功能,所有需要手动启动所有任务 注意: 1 在修改任务在开启后,必须将修改的任务停止一下在开启,不然同步任务不能正常同步。 2 现有BestSync2012同步软件在16.15和151.247这两台机器上。

二Backup Exec 2010 R2 SP1使用说明 1 软件运行 点击Backup Exec 2010运行软件 2 设置任务 在作业设置选项中可以看到作业的作业名称、策略名称和备份选这项列表。 其中作业名称里放有现有作业,双击其中一个作业就可以看到作业属性。作业属性默认显示设备和介质窗口,在设备和介质窗口下可以选择设备和介质集。目前设备选项中因为只有一台磁带机工作,所有只有一个选项,而介质集一般选择永久保留数据-不允许覆盖选项。

博客需求分析

博客系统需求分

一、系统概述 “博客”一词是从英文单词Blog音译(不是翻译)而来。Blog是Weblog 的简称,而Weblog则是由Web和Log两个英文单词组合而成。 Weblog就是在网络上发布和阅读的流水记录,通常称为“网络日志”,简称为“网志”。博客(BLOGGER)概念解释为网络出版(Web Publishing)、发表和张贴(Post-这个字当名词用时就是指张贴的文章)文章,是个急速成长的网络活动,现在甚至出现了一个用来指称这种网络出版和发表文章的专有名词——Weblog,或Blog。 在网络上发表Blog的构想始于1998年,但到了2000年才开始真正流行。而2000年博客开始进入中国,并迅速发展,但都业绩平平。直到2004年木子美事件,才让中国民众了解到了博客,并运用博客。2005年,国内各门户网站,如新浪、搜狐,原不看好博客业务,也加入博客阵营,开始进入博客春秋战国时代。起初,Bloggers将其每天浏览网站的心得和意见记录下来,并予以公开,来给其他人参考和遵循。但随着Blogging快速扩张,它的目的与最初已相去甚远。目前网络上数以千计的Bloggers发表和张贴Blog的目的有很大的差异。不过,由于沟通方式比电子邮件、讨论群组更简单和容易,Blog已成为家庭、公司、部门和团队之间越来越盛行的沟通工具,因为它也逐渐被应用在企业内部网络(Intranet)。目前,国内优秀的中文博客网有:新浪博客,搜狐博客,中国博客网,腾讯博客,博客中国等。 二、需求分析 博客系统是一个多用户、多界面的系统,主要包括以下几个模块组成。 1.匿名用户模块

博客简介

漫漫教学路,博客伴我行能在互联网上拥有一个真正属于自己的空间,是我的梦想,而 今天这个梦想在“博客”中实现了。我怀着一颗好奇心,在博客上流连,申请了一方属于自己的免费空间,置身于梦幻秋天的背景下,设置自己喜欢的几个栏目,于是我便拥有了博客。当我第一次在博客中添加文章的时候,兴奋得无法入眠。我想:平素与网络无缘的我也终于拥有了一个网上家园。一个可以让我任意挥洒激情、记录人生轨迹的网上家园。感谢博客给我一块自由的空间,让我展翅飞翔!在与博客“亲密接触”的日子里,我深深地感觉到博客对教育的促进,对自身专业化成长的帮助。 在开始的时候,我也只是摘录一些自己感兴趣的信息,很少有经过自己思考的原创日志。随着对博客认识的加深,以及浏览其他著名博客所受到的启示,我也试着把自己在教育中的思考及时记录下来。就这样我在博客里“书写着,记录着,思考着,分享着,品味着,学习着”,在不断地积累中感受着学习的乐趣。在博客里写作已经逐渐成为我的一种习惯,在博客中我不断地阅读、书写,在阅读、书写中释放心情,这让我感到在博客中学习竟是如此快乐。我在博客中开设了心灵随笔,教学案例,教学反思,教学相长,教学设计,教学论文,主题中队会等栏目…… 作为一名教师,我深深地认识到:要想鼓励、指导学生写出好的文章,教师必须要有过硬的写作基本功,博客中的心灵随笔这个栏目正好为我提供了这样一个平台,我在这个栏目中及时捕捉教学生活中细微的瞬间,从中悟出深刻的道理,并马上形成文字。例如:《让心灵跟着爱飞翔》、《如何赏识学生》、《感动》、《怎样转变学生的不良习惯》等文章。心灵的感悟,出乎意料的发挥了作用,有了这样的历练,对学生进行写作指导和评改,就驾轻就熟了。学生也会在老师的指导下逐渐明白,写作并不是一件很难的事情,只要真实的记录自己在生活中的所见、所闻,有了自己的感悟,慢慢就会写出具有真情实感的文章。 在博客中记录教学过程是一个不断充实自己,提高自己的过程,教学中我也曾遇到过很多困惑,于是把这些困惑书写到我的博客中,期待与博友们交流和切磋。博友们的热心触发了我很多灵感,常常使我茅塞顿开……我现在博客中的教学案例就是平时点滴的积累。《位置与方向》《那只松鼠》《笔算除法》《商中间、末

管理软件使用说明书

目录 1 软件介绍...................................................... 1 2 软件运行环境 ................................................. 1 3 软件安装步骤 ................................................. 1 4 软件卸载步骤 ................................................. 4 5 软件使用...................................................... 45.1、创建数据库.............................................................................................................................. 4 5.2、创建数据数据表................................................................................................................... 6 5.3、历史数据读取 ........................................................................................................................ 7 5.4、查看历史数据、通道信息.............................................................................................. 8 5.5、打印数据、曲线或图片输出 .................................................................................... 13 5.6、数据实时采集 .................................................................................................................... 15 6 软件使用中可能出现的问题与解决方法.................. 186.1、不出现对话框 .................................................................................................................... 18 6.2、数据库不能建立............................................................................................................... 18 6.3、U盘不能数据转存........................................................................................................... 18 6.4、U盘上没有文件 ................................................................................................................ 18 6.5、U盘数据不能导入计算机;...................................................................................... 18

相关文档