
Software Database: An Object-Oriented Perspective (Software Engineering Undergraduate Thesis: Foreign Literature Translation and Original Text)


Graduation Project (Thesis)

Foreign Literature Translation

Chinese title of the literature: Software Database: An Object-Oriented Perspective

English title of the literature: Software Database An Object-Oriented Perspective

Source of the literature:

Date of publication:

School (Department):

Major: Software Engineering

Class:

Name:

Student ID:

Supervisor:

Date of translation: 2017.02.14

A HISTORICAL PERSPECTIVE

From the earliest days of computers, storing and manipulating data have been a major application focus. The first general-purpose DBMS was designed by Charles Bachman at General Electric in the early 1960s and was called the Integrated Data Store. It formed the basis for the network data model, which was standardized by the Conference on Data Systems Languages (CODASYL) and strongly influenced database systems through the 1960s. Bachman was the first recipient of ACM's Turing Award (the computer science equivalent of a Nobel prize) for work in the database area; he received the award in 1973.


In the late 1960s, IBM developed the Information Management System (IMS) DBMS, used even today in many major installations. IMS formed the basis for an alternative data representation framework called the hierarchical data model. The SABRE system for making airline reservations was jointly developed by American Airlines and IBM around the same time, and it allowed several people to access the same data through a computer network. Interestingly, today the same SABRE system is used to power popular Web-based travel services such as Travelocity!

In 1970, Edgar Codd, at IBM’s San Jose Research Laboratory, proposed a new data representation framework called the relational data model. This proved to be a watershed in the development of database systems: it sparked rapid development of several DBMSs based on the relational model, along with a rich body of theoretical results that placed the field on a firm foundation. Codd won the 1981 Turing Award for his seminal work. Database systems matured as an academic discipline, and the popularity of relational DBMSs changed the commercial landscape. Their benefits were widely recognized, and the use of DBMSs for managing corporate data became standard practice.

In the 1980s, the relational model consolidated its position as the dominant DBMS paradigm, and database systems continued to gain widespread use. The SQL query language for relational databases, developed as part of IBM's System R project, is now the standard query language. SQL was standardized in the late 1980s, and the current standard, SQL-92, was adopted by the American National Standards Institute (ANSI) and International Standards Organization (ISO). Arguably, the most widely used form of concurrent programming is the concurrent execution of database programs (called transactions). Users write programs as if they are to be run by themselves, and the responsibility for running them concurrently is given to the DBMS. James Gray won the 1999 Turing Award for his contributions to the field of transaction management in a DBMS.

In the late 1980s and the 1990s, advances were made in many areas of database systems. Considerable research has been carried out into more powerful query languages and richer data models, and there has been a big emphasis on supporting complex analysis of data from all parts of an enterprise. Several vendors (e.g., IBM's DB2, Oracle 8, Informix UDS) have extended their systems with the ability to store new data types such as images and text, and with the ability to ask more complex queries. Specialized systems have been developed by numerous vendors for creating data warehouses, consolidating data from several databases, and for carrying out specialized analysis.

An interesting phenomenon is the emergence of several enterprise resource planning (ERP) and management resource planning (MRP) packages, which add a substantial layer of application-oriented features on top of a DBMS. Widely used packages include systems from Baan, Oracle, PeopleSoft, SAP, and Siebel. These packages identify a set of common tasks (e.g., inventory management, human resources planning, financial analysis) encountered by a large number of organizations and provide a general application layer to carry out these tasks. The data is stored in a relational DBMS, and the application layer can be customized to different companies, leading to lower overall costs for the companies, compared to the cost of building the application layer from scratch.

Most significantly, perhaps, DBMSs have entered the Internet Age. While the first generation of Web sites stored their data exclusively in operating system files, the use of a DBMS to store data that is accessed through a Web browser is becoming widespread. Queries are generated through Web-accessible forms and answers are formatted using a markup language such as HTML, in order to be easily displayed in a browser. All the database vendors are adding features to their DBMSs aimed at making them more suitable for deployment over the Internet.

Database management continues to gain importance as more and more data is brought on-line and made ever more accessible through computer networking. Today the field is being driven by exciting visions such as multimedia databases, interactive video, digital libraries, a host of scientific projects such as the human genome mapping effort and NASA's Earth Observation System project, and the desire of companies to consolidate their decision-making processes and mine their data repositories for useful information about their businesses. Commercially, database management systems represent one of the largest and most vigorous market segments. Thus the study of database systems could prove to be richly rewarding in more ways than one!

INTRODUCTION TO PHYSICAL DATABASE DESIGN

Like all other aspects of database design, physical design must be guided by the nature of the data and its intended use. In particular, it is important to understand the typical workload that the database must support; the workload consists of a mix of queries and updates. Users also have certain requirements about how fast certain queries or updates must run or how many transactions must be processed per second. The workload description and users’ performance requirements are the basis on which a number of decisions have to be made during physical database design.

To create a good physical database design and to tune the system for performance in response to evolving user requirements, the designer needs to understand the workings of a DBMS, especially the indexing and query processing techniques supported by the DBMS. If the database is expected to be accessed concurrently by many users, or is a distributed database, the task becomes more complicated, and other features of a DBMS come into play.

DATABASE WORKLOADS

The key to good physical design is arriving at an accurate description of the expected workload. A workload description includes the following elements:

1. A list of queries and their frequencies, as a fraction of all queries and updates.

2. A list of updates and their frequencies.

3. Performance goals for each type of query and update.

For each query in the workload, we must identify:

Which relations are accessed.

Which attributes are retained (in the SELECT clause).

Which attributes have selection or join conditions expressed on them (in the WHERE clause) and how selective these conditions are likely to be.

Similarly, for each update in the workload, we must identify:

Which attributes have selection or join conditions expressed on them (in the WHERE clause) and how selective these conditions are likely to be.

The type of update (INSERT, DELETE, or UPDATE) and the updated relation.

For UPDATE commands, the fields that are modified by the update.

Remember that queries and updates typically have parameters; for example, a debit or credit operation involves a particular account number. The values of these parameters determine the selectivity of the selection and join conditions.
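As a minimal sketch of such a workload entry, using a hypothetical Accounts table and named parameters (neither appears in the original text): a debit or credit transaction might consist of a lookup query and an update, and the value bound to the account-number parameter determines how selective the WHERE condition is.

-- Look up one account: accesses Accounts, retains balance, equality condition on account_no (very selective)
SELECT balance
FROM Accounts
WHERE account_no = :account_no;

-- Post a debit: an UPDATE on Accounts that modifies only the balance field
UPDATE Accounts
SET balance = balance - :amount
WHERE account_no = :account_no;

Listing the frequency of each such statement, together with the performance goal (for example, the number of debit/credit transactions to be processed per second), completes the workload description for this fragment.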

Updates have a query component that is used to find the target tuples. This component can benefit from a good physical design and the presence of indexes. On the other hand, updates typically require additional work to maintain indexes on the attributes that they modify. Thus, while queries can only benefit from the presence of an index, an index may either speed up or slow down a given update. Designers should keep this trade-off in mind when creating indexes.
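To make the trade-off concrete, here is a small hedged sketch (the Accounts table and index names are hypothetical): an index on account_no speeds up the query component of the debit/credit update, while an index on the modified balance field must itself be maintained by every such update.

-- Helps the query component of updates that locate rows by account number
CREATE INDEX idx_accounts_account_no ON Accounts (account_no);

-- An index on the modified field slows this update down,
-- because every balance change must also be reflected in the index
CREATE INDEX idx_accounts_balance ON Accounts (balance);

UPDATE Accounts
SET balance = balance - 100
WHERE account_no = 12345;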

NEED FOR DATABASE TUNING

Accurate, detailed workload information may be hard to come by while doing the initial design of the system. Consequently, tuning a database after it has been designed and deployed is important—we must refine the initial design in the light of actual usage patterns to obtain the best possible performance.

The distinction between database design and database tuning is somewhat arbitrary.

We could consider the design process to be over once an initial conceptual schema is designed and a set of indexing and clustering decisions is made. Any subsequent changes to the conceptual schema or the indexes, say, would then be regarded as a tuning activity. Alternatively, we could consider some refinement of the conceptual schema (and physical design decisions affected by this refinement) to be part of the physical design process.

Where we draw the line between design and tuning is not very important.

OVERVIEW OF DATABASE TUNING

After the initial phase of database design, actual use of the database provides a valuable source of detailed information that can be used to refine the initial design. Many of the original assumptions about the expected workload can be replaced by observed usage patterns; in general, some of the initial workload specification will be validated, and some of it will turn out to be wrong. Initial guesses about the size of data can be replaced with actual statistics from the system catalogs (although this information will keep changing as the system evolves). Careful monitoring of queries can reveal unexpected problems; for example, the optimizer may not be using some indexes as intended to produce good plans.
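Most systems provide some way to inspect the plan the optimizer has chosen; the exact command is vendor-specific, but a sketch along the following lines (table and column names hypothetical) shows the kind of check that such monitoring involves:

-- Ask the DBMS to display its chosen plan (syntax varies by vendor:
-- EXPLAIN, EXPLAIN PLAN FOR, SET SHOWPLAN_ALL ON, and so on)
EXPLAIN
SELECT E.name
FROM Employees E
WHERE E.age > 40;

-- If the plan shows a full table scan even though an index on age exists,
-- the index definition, the statistics, or the query itself may need tuning.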

Continued database tuning is important to get the best possible performance.

TUNING THE CONCEPTUAL SCHEMA

In the course of database design, we may realize that our current choice of relation schemas does not enable us to meet our performance objectives for the given workload with any (feasible) set of physical design choices. If so, we may have to redesign our conceptual schema (and re-examine physical design decisions that are affected by the changes that we make).

We may realize that a redesign is necessary during the initial design process or later, after the system has been in use for a while. Once a database has been designed and populated with data, changing the conceptual schema requires a significant effort in terms of mapping the contents of relations that are affected. Nonetheless, it may sometimes be necessary to revise the conceptual schema in light of experience with the system. We now consider the issues involved in conceptual schema (re)design from the point of view of performance.

Several options must be considered while tuning the conceptual schema:

We may decide to settle for a 3NF design instead of a BCNF design.

If there are two ways to decompose a given schema into 3NF or BCNF, our choice should be guided by the workload.

Sometimes we might decide to further decompose a relation that is already in BCNF.

In other situations we might denormalize. That is, we might choose to replace a collection of relations obtained by a decomposition from a larger relation with the original (larger) relation, even though it suffers from some redundancy problems. Alternatively, we might choose to add some fields to certain relations to speed up some important queries, even if this leads to a redundant storage of some information (and consequently, a schema that is in neither 3NF nor BCNF).
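A hedged sketch of the denormalization option (the Orders/Customers schema and the rating column are hypothetical, not taken from the text): copying a frequently needed customer attribute into the orders relation avoids a join in an important query, at the cost of redundant storage that must be kept consistent.

-- Normalized design: the customer rating lives only in Customers, so the query needs a join:
-- SELECT O.order_id, C.rating FROM Orders O JOIN Customers C ON O.cust_id = C.cust_id;

-- Denormalized alternative: store the rating redundantly in Orders
ALTER TABLE Orders ADD COLUMN cust_rating INTEGER;

-- The important query no longer needs the join ...
SELECT order_id, cust_rating FROM Orders WHERE order_id = 1001;

-- ... but every change to a customer's rating must now also be applied to Orders.
UPDATE Orders SET cust_rating = 5 WHERE cust_id = 42;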

This discussion of normalization has concentrated on the technique of decomposition, which amounts to vertical partitioning of a relation. Another technique to consider is horizontal partitioning of a relation, which would lead to our having two relations with identical schemas. Note that we are not talking about physically partitioning the tuples of a single relation; rather, we want to create two distinct relations (possibly with different constraints and indexes on each).
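A brief sketch of horizontal partitioning at the schema level (the contract tables and columns are hypothetical): instead of one Contracts relation, we keep two relations with identical schemas, so that each can carry its own constraints and indexes.

-- Two distinct relations with identical schemas, split by a property of the rows
CREATE TABLE ActiveContracts (
    contract_id INTEGER PRIMARY KEY,
    cust_id     INTEGER NOT NULL,
    end_date    DATE
);

CREATE TABLE ExpiredContracts (
    contract_id INTEGER PRIMARY KEY,
    cust_id     INTEGER,
    end_date    DATE
);

-- Each relation can now be indexed to match its own query pattern,
-- for example an index on end_date only where expiry queries are frequent
CREATE INDEX idx_active_end_date ON ActiveContracts (end_date);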

Incidentally, when we redesign the conceptual schema, especially if we are tuning an existing database schema, it is worth considering whether we should create views to mask these changes from users for whom the original schema is more natural.
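For example, after a horizontal split like the one sketched above, a view can present the original single-relation schema to users for whom it is more natural (names again hypothetical):

-- Recreate the original schema as a view over the two new relations
CREATE VIEW Contracts AS
    SELECT contract_id, cust_id, end_date FROM ActiveContracts
    UNION ALL
    SELECT contract_id, cust_id, end_date FROM ExpiredContracts;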

TUNING QUERIES AND VIEWS

If we notice that a query is running much slower than we expected, we have to examine the query carefully to find the problem. Some rewriting of the query, perhaps in conjunction with some index tuning, can often fix the problem. Similar tuning may be called for if queries on some view run slower than expected.

When tuning a query, the first thing to verify is that the system is using the plan that you expect it to use. It may be that the system is not finding the best plan for a variety of reasons. Some common situations that are not handled efficiently by many optimizers follow:

A selection condition involving null values.

Selection conditions involving arithmetic or string expressions or conditions using the OR connective. For example, if we have a condition E.age = 2*D.age in the WHERE clause, the optimizer may correctly utilize an available index on E.age but fail to utilize an available index on D.age. Replacing the condition by E.age/2 = D.age would reverse the situation (a sketch of this rewrite appears after this list).

Inability to recognize a sophisticated plan such as an index-only scan for an aggregation query involving a GROUP BY clause.
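A minimal sketch of the arithmetic-condition case mentioned above (the Emp/Dept tables and the name column are hypothetical; only the aliases E and D come from the text): the form of the condition determines which side of the join the optimizer can cover with an index.

-- With the condition written this way, an index on E.age can be used,
-- but an index on D.age typically cannot:
SELECT E.name
FROM Emp E, Dept D
WHERE E.age = 2 * D.age;

-- Rewriting the condition reverses the situation: an index on D.age becomes usable,
-- while the expression on the left now prevents use of an index on E.age.
SELECT E.name
FROM Emp E, Dept D
WHERE E.age / 2 = D.age;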

If the optimizer is not smart enough to find the best plan (using access methods and evaluation strategies supported by the DBMS), some systems allow users to guide the choice of a plan by providing hints to the optimizer; for example, users might be able to force the use of a particular index or choose the join order and join method. A user who wishes to guide optimization in this manner should have a thorough understanding of both optimization and the capabilities of the given DBMS.
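Hint syntax is entirely vendor-specific; as one hedged illustration (table, index, and column names hypothetical), an Oracle-style comment hint and a SQL Server-style table hint both let the user force a particular index:

-- Oracle-style comment hint forcing a specific index
SELECT /*+ INDEX(E emp_age_idx) */ E.name
FROM Emp E
WHERE E.age > 40;

-- SQL Server-style table hint with the same intent
SELECT E.name
FROM Emp E WITH (INDEX(emp_age_idx))
WHERE E.age > 40;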

OTHER TOPICS

MOBILE DATABASES

The availability of portable computers and wireless communications has created a new breed of nomadic database users. At one level these users are simply accessing a database through a network, which is similar to distributed DBMSs. At another level the network as well as data and user characteristics now have several novel properties, which affect basic assumptions in many components of a DBMS, including the query engine, transaction manager, and recovery manager.

Users are connected through a wireless link whose bandwidth is ten times less than Ethernet and 100 times less than ATM networks. Communication costs are therefore significantly higher in proportion to I/O and CPU costs.

Users' locations are constantly changing, and mobile computers have a limited battery life. Therefore, the true communication cost includes connection time and battery usage in addition to bytes transferred, and changes constantly depending on location. Data is frequently replicated to minimize the cost of accessing it from different locations.

As a user moves around, data could be accessed from multiple database servers within a single transaction. The likelihood of losing connections is also much greater than in a traditional network. Centralized transaction management may therefore be impractical, especially if some data is resident at the mobile computers. We may in fact have to give up on ACID transactions and develop alternative notions of consistency for user programs.

MAIN MEMORY DATABASES

The price of main memory is now low enough that we can buy enough main memory to hold the entire database for many applications; with 64-bit addressing, modern CPUs also have very large address spaces.
