
Neural Network Overview: Translation of Foreign Literature

Neural Network Introduction

1. Objectives

As you read these words you are using a complex biological neural network. You have a highly interconnected set of some 10^11 neurons to facilitate your reading, breathing, motion and thinking. Each of your biological neurons, a rich assembly of tissue and chemistry, has the complexity, if not the speed, of a microprocessor. Some of your neural structure was with you at birth. Other parts have been established by experience.

Scientists have only just begun to understand how biological neural networks operate. It is generally understood that all biological neural functions, including memory, are stored in the neurons and in the connections between them. Learning is viewed as the establishment of new connections between neurons or the modification of existing connections.

This leads to the following question: Although we have only a rudimentary understanding of biological neural networks, is it possible to construct a small set of simple artificial “neurons” and perhaps train them to serve a useful function? The answer is “yes.” This book, then, is about artificial neural networks.

The neurons that we consider here are not biological. They are extremely simple abstractions of biological neurons, realized as elements in a program or perhaps as circuits made of silicon. Networks of these artificial neurons do not have a fraction of the power of the human brain, but they can be trained to perform useful functions. This book is about such neurons, the networks that contain them and their training.

2. History

The history of artificial neural networks is filled with colorful, creative individuals from many different fields, many of whom struggled for decades to develop concepts that we now take for granted. This history has been documented by various authors. One particularly interesting book is Neurocomputing: Foundations of Research by John Anderson and Edward Rosenfeld. They have collected and edited a set of some 43 papers of special historical interest. Each paper is preceded by an introduction that puts the paper in historical perspective.

Histories of some of the main neural network contributors are included at the beginning of various chapters throughout this text and will not be repeated here. However, it seems appropriate to give a brief overview, a sample of the major developments.

At least two ingredients are necessary for the advancement of a technology: concept and implementation. First, one must have a concept, a way of thinking about a topic, some view of it that gives clarity not there before. This may involve a simple idea, or it may be more specific and include a mathematical description. To illustrate this point, consider the history of the heart. It was thought to be, at various times, the center of the soul or a source of heat. In the 17th century medical practitioners finally began to view the heart as a pump, and they designed experiments to study its pumping action. These experiments revolutionized our view of the circulatory system. Without the pump concept, an understanding of the heart was out of grasp.

Concepts and their accompanying mathematics are not sufficient for a technology to mature unless there is some way to implement the system. For instance, the mathematics necessary for the reconstruction of images from computer-aided tomography (CAT) scans was known many years before the availability of high-speed computers and efficient algorithms finally made it practical to implement a useful CAT system.

The history of neural networks has progressed through both conceptual innovations and implementation developments. These advancements, however, seem to have occurred in fits and starts rather than by steady evolution.

Some of the background work for the field of neural networks occurred in the late 19th and early 20th centuries. This consisted primarily of interdisciplinary work in physics, psychology and neurophysiology by such scientists as Hermann von Helmholtz, Ernst Mach and Ivan Pavlov. This early work emphasized general theories of learning, vision, conditioning, etc., and did not include specific mathematical models of neuron operation.

The modern view of neural networks began in the 1940s with the work of Warren McCulloch and Walter Pitts [McPi43], who showed that networks of artificial neurons could, in principle, compute any arithmetic or logical function. Their work is often acknowledged as the origin of the neural network field.
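
To make the idea concrete, here is a minimal sketch of a McCulloch-Pitts-style threshold unit computing two simple logical functions. It is not taken from [McPi43] itself; the weights and thresholds are chosen by hand for illustration.

```python
# A McCulloch-Pitts-style threshold unit (illustrative; the weights and
# thresholds below are chosen by hand, not taken from [McPi43]).

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted input sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

cases = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Logical AND: both inputs must be active for the unit to fire.
assert [mp_neuron(x, [1, 1], threshold=2) for x in cases] == [0, 0, 0, 1]

# Logical OR: any single active input is enough.
assert [mp_neuron(x, [1, 1], threshold=1) for x in cases] == [0, 1, 1, 1]
```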

McCulloch and Pitts were followed by Donald Hebb [Hebb49], who proposed that classical conditioning (as discovered by Pavlov) is present because of the properties of individual neurons. He proposed a mechanism for learning in biological neurons.
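
Hebb's postulate is usually read mathematically as "strengthen a connection when its input and the cell's output are active together." The sketch below assumes the common formulation Δw = η·x·y; the text itself gives no formula, and the numbers are purely illustrative.

```python
# One common mathematical reading of Hebb's postulate (the text gives no
# formula): w <- w + eta * x * y, so a synapse is strengthened whenever its
# input x and the cell's output y are active together. Numbers are made up.

def hebb_update(w, x, y, eta=0.1):
    """Return weights strengthened where input and output co-occur."""
    return [wi + eta * xi * y for wi, xi in zip(w, x)]

w = [0.0, 0.0]
for _ in range(5):                   # five paired presentations: first input
    w = hebb_update(w, [1, 0], y=1)  # on, and the cell firing at the same time
print(w)                             # -> [0.5, 0.0]: only the co-active synapse grows
```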

The first practical application of artificial neural networks came in the late 1950s, with the invention of the perceptron network and associated learning rule by Frank Rosenblatt [Rose58]. Rosenblatt and his colleagues built a perceptron network and demonstrated its ability to perform pattern recognition. This early success generated a great deal of interest in neural network research. Unfortunately, it was later shown that the basic perceptron network could solve only a limited class of problems. (See Chapter 4 for more on Rosenblatt and the perceptron learning rule.)
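
The perceptron rule itself is developed in Chapter 4; as a preview, the following sketch applies the standard rule w ← w + η(t − y)x to a hand-made logical AND data set. The data, learning rate and pass count are illustrative.

```python
# A sketch of the standard perceptron learning rule, w <- w + eta*(t - y)*x,
# applied to logical AND. The data, learning rate and pass count are
# illustrative; the rule itself is developed in Chapter 4 of the book.

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w, b, eta = [0.0, 0.0], 0.0, 0.5

for _ in range(10):                       # a few passes suffice on this set
    for x, target in data:
        error = target - predict(w, b, x)
        w = [wi + eta * error * xi for wi, xi in zip(w, x)]
        b += eta * error

print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```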

At about the same time, Bernard Widrow and Ted Hoff [WiHo60] introduced a new learning algorithm and used it to train adaptive linear neural networks, which were similar in structure and capability to Rosenblatt's perceptron. The Widrow-Hoff learning rule is still in use today. (See Chapter 10 for more on Widrow-Hoff learning.) Unfortunately, both Rosenblatt's and Widrow's networks suffered from the same inherent limitations, which were widely publicized in a book by Marvin Minsky and Seymour Papert [MiPa69]. Rosenblatt and Widrow were aware of these limitations and proposed new networks that would overcome them. However, they were not able to successfully modify their learning algorithms to train the more complex networks.
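
The Widrow-Hoff (LMS) rule differs from the perceptron rule in that it trains a linear unit on the raw error rather than on a thresholded output. A minimal sketch follows; the linear target and step size are made up for illustration.

```python
# The Widrow-Hoff (LMS) rule trains a linear unit on the raw error,
# w <- w + eta * (t - w.x) * x. The linear target y = 2*x1 - x2 and the
# step size are made up for illustration.

def lms_step(w, x, target, eta=0.1):
    error = target - sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * error * xi for wi, xi in zip(w, x)]

samples = [((1, 0), 2), ((0, 1), -1), ((1, 1), 1), ((2, 1), 3)]
w = [0.0, 0.0]
for _ in range(200):                      # cycle repeatedly over the samples
    for x, t in samples:
        w = lms_step(w, x, t)
print(w)                                  # converges near [2.0, -1.0]
```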

Many people, influenced by Minsky and Papert, believed that further research on neural networks was a dead end. This, combined with the fact that there were no powerful digital computers on which to experiment, caused many researchers to leave the field. For a decade neural network research was largely suspended. Some important work, however, did continue during the 1970s. In 1972 Teuvo Kohonen [Koho72] and James Anderson [Ande72] independently and separately developed new neural networks that could act as memories. Stephen Grossberg [Gros76] was also very active during this period in the investigation of self-organizing networks.

Interest in neural networks had faltered during the late 1960s because of the lack of new ideas and powerful computers with which to experiment. During the 1980s both of these impediments were overcome, and research in neural networks increased dramatically. New personal computers and workstations, which rapidly grew in capability, became widely available. In addition, important new concepts were introduced.

Two new concepts were most responsible for the rebirth of neural networks. The first was the use of statistical mechanics to explain the operation of a certain class of recurrent network, which could be used as an associative memory. This was described in a seminal paper by physicist John Hopfield [Hopf82].
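
In the spirit of [Hopf82], a recurrent network with symmetric weights can store patterns as stable states and recover them from corrupted probes. The sketch below uses the common outer-product storage rule; the patterns are invented for illustration.

```python
# A Hopfield-style associative memory in the spirit of [Hopf82]: bipolar
# patterns are stored in a symmetric weight matrix by the outer-product
# rule, then recalled from a corrupted probe. The patterns are invented.

import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
n = patterns.shape[1]

W = sum(np.outer(p, p) for p in patterns) / n   # Hebbian storage
np.fill_diagonal(W, 0)                          # no self-connections

state = np.array([1, -1, 1, -1, 1, 1])          # first pattern, last bit flipped
for _ in range(5):                              # iterate until a stable state
    state = np.where(W @ state >= 0, 1, -1)
print(state)                                    # -> [ 1 -1  1 -1  1 -1]
```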

The second key development of the 1980s was the backpropagation algorithm for training multilayer perceptron networks, which was discovered independently by several different researchers. The most influential publication of the backpropagation algorithm was by David Rumelhart and James McClelland [RuMc86]. This algorithm was the answer to the criticisms Minsky and Papert had made in the 1960s. (See Chapters 11 and 12 for a development of the backpropagation algorithm.) These new developments reinvigorated the field of neural networks. In the last ten years, thousands of papers have been written, and neural networks have found many applications. The field is buzzing with new theoretical and practical work. As noted below, it is not clear where all of this will lead us.
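
A compact way to see why backpropagation answered Minsky and Papert is to train a small multilayer sigmoid network on XOR, a problem a single-layer perceptron cannot solve. The sketch below is illustrative only (network size, seed, step size and iteration count are arbitrary), not the book's own development, which appears in Chapters 11 and 12.

```python
# Backpropagation on a tiny 2-4-1 sigmoid network learning XOR, the problem
# a single-layer perceptron cannot solve. Everything here (network size,
# seed, step size, iteration count) is illustrative.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)            # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)            # hidden -> output
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
eta = 1.0

for _ in range(5000):
    H = sig(X @ W1 + b1)                      # forward pass
    Y = sig(H @ W2 + b2)
    dY = (Y - T) * Y * (1 - Y)                # output-layer delta
    dH = (dY @ W2.T) * H * (1 - H)            # delta propagated backwards
    W2 -= eta * H.T @ dY; b2 -= eta * dY.sum(axis=0)
    W1 -= eta * X.T @ dH; b1 -= eta * dH.sum(axis=0)

print(Y.round(2).ravel())                     # typically approaches [0 1 1 0]
```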

The brief historical account given above is not intended to identify all of the major contributors, but is simply to give the reader some feel for how knowledge in the neural network field has progressed. As one might note, the progress has not always been "slow but sure." There have been periods of dramatic progress and periods when relatively little has been accomplished.

Many of the advances in neural networks have had to do with new concepts, such as innovative architectures and training. Just as important has been the availability of powerful new computers on which to test these new concepts.

Well, so much for the history of neural networks to this date. The real question is, "What will happen in the next ten to twenty years?" Will neural networks take a permanent place as a mathematical/engineering tool, or will they fade away as have so many promising technologies? At present, the answer seems to be that neural networks will not only have their day but will have a permanent place, not as a solution to every problem, but as a tool to be used in appropriate situations. In addition, remember that we still know very little about how the brain works. The most important advances in neural networks almost certainly lie in the future.

Although it is difficult to predict the future success of neural networks, the large number and wide variety of applications of this new technology are very encouraging. The next section describes some of these applications.

3. Applications

A recent newspaper article described the use of neural networks in literature research by Aston University. It stated that "the network can be taught to recognize individual writing styles, and the researchers used it to compare works attributed to Shakespeare and his contemporaries." A popular science television program recently documented the use of neural networks by an Italian research institute to test the purity of olive oil. These examples are indicative of the broad range of applications that can be found for neural networks. The applications are expanding because neural networks are good at solving problems, not just in engineering, science and mathematics, but in medicine, business, finance and literature as well. Their application to a wide variety of problems in many fields makes them very attractive. Also, faster computers and faster algorithms have made it possible to use neural networks to solve complex industrial problems that formerly required too much computation.

The following note and Table of Neural Network Applications are reproduced here from the Neural Network Toolbox for MATLAB with the permission of The MathWorks, Inc.

The 1988 DARPA Neural Network Study [DARP88] lists various neural network applications, beginning with the adaptive channel equalizer in about 1984. This device, which is an outstanding commercial success, is a single-neuron network used in long distance telephone systems to stabilize voice signals. The DARPA report goes on to list other commercial applications, including a small word recognizer, a process monitor, a sonar classifier and a risk analysis system.

Neural networks have been applied in many fields since the DARPA report was written. A list of some applications mentioned in the literature follows.

Aerospace

High performance aircraft autopilots, flight path simulations, aircraft control systems, autopilot enhancements, aircraft component simulations, aircraft component fault detectors

Automotive

Automobile automatic guidance systems, warranty activity analyzers

Banking

Check and other document readers, credit application evaluators

Defense

Weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification

Electronics

Code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling

Entertainment

Animation, special effects, market forecasting

Financial

Real estate appraisal, loan advisor, mortgage screening, corporate bond rating, credit line use analysis, portfolio trading program, corporate financial analysis, currency price prediction

Insurance

Policy application evaluation, product optimization

Manufacturing

Manufacturing process control, product design and analysis, process and machine diagnosis, real-time particle identification, visual quality inspection systems, beer testing, welding quality analysis, paper quality prediction, computer chip quality analysis, analysis of grinding operations, chemical product design analysis, machine maintenance analysis, project bidding, planning and management, dynamic modeling of chemical process systems

Medical

Breast cancer cell analysis, EEG and ECG analysis, prosthesis design, optimization of transplant times, hospital expense reduction, hospital quality improvement, emergency room test advisement

Oil and Gas

Exploration

Robotics

Trajectory control, forklift robot, manipulator controllers, vision systems

Speech

Speech recognition, speech compression, vowel classification, text to speech synthesis

Securities

Market analysis, automatic bond rating, stock trading advisory systems

Telecommunications

Image and data compression, automated information services, real-time translation of spoken language, customer payment processing systems

Transportation

Truck brake diagnosis systems, vehicle scheduling, routing systems

Conclusion

The number of neural network applications, the money that has been invested in neural network software and hardware, and the depth and breadth of interest in these devices have been growing rapidly.

4. Biological Inspiration

The artificial neural networks discussed in this text are only remotely related to their biological counterparts. In this section we will briefly describe those characteristics of brain function that have inspired the development of artificial neural networks.

The brain consists of a large number (approximately 10^11) of highly connected elements (approximately 10^4 connections per element) called neurons. For our purposes these neurons have three principal components: the dendrites, the cell body and the axon. The dendrites are tree-like receptive networks of nerve fibers that carry electrical signals into the cell body. The cell body effectively sums and thresholds these incoming signals. The axon is a single long fiber that carries the signal from the cell body out to other neurons. The point of contact between an axon of one cell and a dendrite of another cell is called a synapse. It is the arrangement of neurons and the strengths of the individual synapses, determined by a complex chemical process, that establishes the function of the neural network. Figure 6.1 is a simplified schematic diagram of two biological neurons.

Figure 6.1 Schematic Drawing of Biological Neurons
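
This biological picture maps directly onto the standard artificial neuron: synapse strengths become weights, the cell body becomes a summer, and a transfer function stands in for thresholding. A minimal sketch with made-up numbers:

```python
# The biological description above, as a standard artificial neuron:
# synapse strengths become weights, the cell body sums, and a transfer
# function stands in for thresholding. All numbers here are made up.

import math

def neuron(inputs, weights, bias):
    """Weighted sum of 'dendritic' inputs passed through a sigmoid."""
    n = sum(w * p for w, p in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-n))     # smooth stand-in for a threshold

# Three incoming signals across synapses of differing strength and sign.
print(neuron(inputs=[0.5, 1.0, -0.2], weights=[1.2, -0.8, 2.0], bias=0.1))
```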

Some of the neural structure is defined at birth. Other parts are developed through learning, as new connections are made and others waste away. This development is most noticeable in the early stages of life. For example, it has been shown that if a young cat is denied use of one eye during a critical window of time, it will never develop normal vision in that eye.

Neural structures continue to change throughout life. These later changes tend to consist mainly of strengthening or weakening of synaptic junctions. For instance, it is believed that new memories are formed by modification of these synaptic strengths. Thus, the process of learning a new friend's face consists of altering various synapses.

Artificial neural networks do not approach the complexity of the brain. There are, however, two key similarities between biological and artificial neural networks. First, the building blocks of both networks are simple computational devices (although artificial neurons are much simpler than biological neurons) that are highly interconnected. Second, the connections between neurons determine the function of the network. The primary objective of this book will be to determine the appropriate connections to solve particular problems.

It is worth noting that even though biological neurons are very slow when compared to electrical circuits (on the order of 10^-3 s versus 10^-9 s), the brain is able to perform many tasks much faster than any conventional computer. This is in part because of the massively parallel structure of biological neural networks; all of the neurons are operating at the same time. Artificial neural networks share this parallel structure. Even though most artificial neural networks are currently implemented on conventional digital computers, their parallel structure makes them ideally suited to implementation using VLSI, optical devices and parallel processors.

In the following chapter we will introduce our basic artificial neuron and will explain how we can combine such neurons to form networks. This will provide a background for Chapter 3, where we take our first look at neural networks in action.

