School of Computing Science,

University of Newcastle upon Tyne

Trust as a key to improving Recommendation Systems

Georgios Pitsilis and Lindsay Marshall

Technical Report Series

CS-TR-875

November 2004

Copyright © 2004 University of Newcastle upon Tyne

Published by the University of Newcastle upon Tyne, School of Computing Science, Claremont Tower, Claremont Road, Newcastle upon Tyne, NE1 7RU, UK.

Trust as a key to improving Recommendation Systems

Georgios Pitsilis1 and Lindsay Marshall

School of Computing Science, University of Newcastle upon Tyne

Newcastle upon Tyne, NE1 7RU, UK.

{Georgios.Pitsilis,Lindsay.Marshall}@ncl.ac.uk

Abstract. In this paper we propose a method that can be used to avoid the problem of sparsity in recommendation systems and thus to provide improved quality recommendations. The concept is based on the idea of using trust relationships to support the prediction of user preferences. We present the method as used in a centralized environment; we discuss its efficiency and compare its performance with other existing approaches. Finally, we give a brief outline of the potential application of this approach to a decentralized environment.

Keywords: Recommendation Systems, Subjective Logic, Trust Modeling

1 Introduction

This paper introduces a method that can be used to improve the performance of recommendation systems using trust-based neighbourhood formation schemes. In recommendation systems, a typical neighbourhood formation scheme uses correlation and similarity as measures of proximity. With this approach, relationships between members of the community can be found only when common properties or a common purpose exist; such properties can be, for example, common experiences expressed in the form of opinions about an assessed item. This requirement is a problem for the formation of communities, especially at the beginning of the process, when the number of individual experiences is generally low and therefore the likelihood of common experiences existing is also low. Our main idea is to exploit information which at first glance may seem extraneous, in a way that benefits the community. In a recommendation system, that benefit appears as improved accuracy, as well as improved capability in providing recommendations. We make use of any common experiences that two entities might have to establish hypothetical trust relationships between them, through which they can then relate to other entities. This makes it possible for entities that were previously unlinked to link together and use each other's experiences for their mutual benefit.

1 Scholar of the Greek State Scholarships Foundation (IKY)

However, recommendation systems face two challenges which appear to be in conflict with each other: scalability and accuracy. Accuracy is proportional to the amount of data that is available, but improving it works at the expense of scalability, since more time is needed to search for those data. In this paper we deal only with accuracy, leaving the scalability issues for future work.

Our claim is that discovering trust relationships and thereby linking the users of a recommendation system together can have a positive impact on the performance of such a system.

To support our hypothesis we ran experiments on a small community of 100 nodes and compared the recommendations of our system against those that a plain collaborative filtering method would give. We also performed a comparison against the output that we would get if the choices were solely based on intuition. In our study we chose a centralized environment as our test-bed for evaluating the algorithms and for carrying out the processing of data. However, we do discuss the requirements, benefits and pitfalls if deploying it in a decentralized system.

The rest of the paper is organized as follows. In the next section there is a general description of recommendation systems, as well as related work in the field. In section 3 we explain the main idea of our work, the logic and calculus we have used in our model and we also focus on the trust modeling we have done for the purpose of our experiments. In Section 4 there are some performance measurements showing the benefits and the drawbacks of our system. In sections 5 and 6 we explain roughly how such a model might work in a decentralized environment showing some major drawbacks that have to do with the information discovery process and we also propose solutions that might help overcome these problems.

2 Motivation

2.1 Background Research

Recommender systems [1] are widely used nowadays; in simple terms, their purpose is to suggest items, usually products, to those who might be interested in them. The main idea is to correlate the users that participate in such a system based on the opinions they have expressed in the past, and then to provide suggestions as lists of products that might be of interest or, in the simplest form, predictions of the ratings of services or products they want to know about.

Recommender systems often exist as services embedded into web sites which provide support for e-commerce activities, eBay [4] and other popular commercial sites [2][3] among them. Some others, such as GroupLens [5], have been built with the sole purpose of supporting research activities in this area. Technologies that have been applied to recommender systems include nearest-neighbour methods (which include collaborative filtering (CF)), Bayesian networks, and clustering. Bayesian networks create a decision tree based on a set of user ratings. Despite their speed in providing recommendations, they are not practical for environments in which user preferences are updated regularly. In clustering, users are grouped by their similarity in preferences, and predictions are made with regard to a user's participation in some cluster. In the case of participation in multiple clusters, the prediction is a weighted average. As is shown in [6], algorithms based on clustering have worse accuracy than nearest-neighbour ones; therefore pre-clustering is recommended.

The basic idea behind CF is to make predictions of scores based on the heuristic that people who agreed (or disagreed) in the past will probably agree (or disagree) again. Even though such a heuristic can be sufficient to correlate numerous users with each other, systems that employ this method still appear to be highly sparse and thus are not always effective at making accurate predictions. This ineffectiveness is proportional to how sparse the dataset is. By sparsity we mean a lack of the data required for a CF system to work, where in this specific case the data take the form of experiences which users share with each other through the system. Sparsity appears mostly because users are not willing to invest much time and effort in rating items. Conventional recommendation systems face other problems too, such as the cold-start problem [7] and their vulnerability to attacks. The latter comes from the centralized nature of a collaborative filtering system and the fact that there are always users who have malicious intent and want to influence the system. An attacker can simply create a fake user with preferences very similar to those of the targeted user, and thus he/she becomes highly influential on the victim. The cold-start problem is due to the low number of ratings that new users contribute to the system; such users thus become isolated and cannot receive good-quality recommendations. Developing other types of relations between the users, especially new ones, could help increase their connectivity base and thus their contribution to the system.
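To illustrate why plain CF cannot relate users without co-rated items, here is a minimal sketch of Pearson correlation computed over co-rated items only (user and item names are invented; the two-item threshold is just the mathematical minimum, not the filter used later in the paper):

```python
import math

def pearson(ratings_a, ratings_b):
    """Pearson correlation over co-rated items; None when too few items overlap."""
    common = set(ratings_a) & set(ratings_b)
    if len(common) < 2:
        return None  # no basis for correlation: the sparsity problem
    mean_a = sum(ratings_a[i] for i in common) / len(common)
    mean_b = sum(ratings_b[i] for i in common) / len(common)
    num = sum((ratings_a[i] - mean_a) * (ratings_b[i] - mean_b) for i in common)
    den = math.sqrt(sum((ratings_a[i] - mean_a) ** 2 for i in common) *
                    sum((ratings_b[i] - mean_b) ** 2 for i in common))
    return num / den if den else 0.0

alice = {"m1": 5, "m2": 3, "m3": 4}
bob   = {"m1": 4, "m2": 2, "m3": 5}
carol = {"m9": 5}                      # no overlap with alice

print(pearson(alice, bob))    # correlated through common items
print(pearson(alice, carol))  # None: plain CF cannot link these users
```

Users like carol, with no co-rated items, simply fall outside the neighbourhood, which is exactly the gap the trust graph is meant to bridge.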

2.2 Trust and Reputation in Computing Systems

Trust and reputation have always been a concern for computer scientists, and much work has been done to formalize them in computing environments [8]. In computing, trust has been the subject of investigation in distributed applications in order to enable service providers and consumers to know how much reliance to place on each other. Reputation is a commonly held belief about an agent's trustworthiness; it is mainly derived from the recommendations of various agents.

Yahalom et al. in [9] distinguish between directly trusting an entity about a particular subject and trusting an entity to express the trustworthiness of a third entity with respect to a subject. These two types of trust are known as direct and indirect (or derived) trust. This raises the issue of how one can traverse a whole chain of intermediate entities to find a trust value for a distant one. In fact, there is a debate as to whether it is valid to consider transitive trust relationships at all. Even though it has been shown that trust is not necessarily transitive [10], there are various requirements, such as the context (otherwise known as the trust purpose), that need to be specified and which indicate the ability to recommend [11]. This ability, if it exists, makes indirect trust possible. Assuming that this ability is present along a long chain, a recommendation can be made, as indirect trust can be calculated along the chain.

2.3 Trust Modeling

Trust can be thought of as the level of belief established between two entities in relation to a certain context. In the theory of uncertain probabilities [12], belief is expressed with a metric called an opinion. Because opinions are based on observations, knowledge is always imperfect and therefore it is impossible to know for certain the real (objective) behaviour of the examined entity. This theory introduces the notion of uncertainty to describe this gap of knowledge, or else the absence of belief and disbelief. Uncertainty is important in trust modeling, as it is always present in human beliefs, and the theory is thus suitable for expressing these kinds of beliefs.

A framework for artificial reasoning that makes use of the uncertainty in the expression of beliefs is called Subjective Logic [13]. It has its basis in uncertain probabilities theory and provides some logical operators for combining beliefs and deriving conclusions in cases where there is insufficient evidence.

From a probabilistic point of view there would be a certain amount of belief and of disbelief, which together express the level of trustworthiness with absolute certainty. As such absolute certainty can never exist, the uncertainty property (u) has been introduced to fill this gap and deal with the absence of both belief and disbelief. The probabilistic approach would treat trustworthiness by observing the pattern of an entity's behaviour using only two properties, belief (b) and disbelief (d), where b + d = 1 and b, d ∈ [0,1]. Binary calculus treats statements of trust as dual-valued: either true or false. As such, Subjective Logic can be seen as an extension of both binary calculus and probability calculus. The relationship between b, d and u is expressed as b + d + u = 1, which is known as the Belief Function Additivity Theorem [13]. Subjective Logic also provides the traditional logical operators for combining opinions (e.g. ∧, ∨), as well as some non-standard ones, such as Suggestion and Consensus, which are useful for combining series of opinions serially or in parallel. A complete reference on the algebra of Subjective Logic and on how it is applied to b, d and u can be found in [14].
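The serial and parallel combination of opinions mentioned above can be written in closed form. The following sketch implements Jøsang's discounting (suggestion) and consensus operators on (b, d, u) triples; the class name and the example values are ours:

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty; b + d + u = 1

def discount(ab, bc):
    """Suggestion (recommendation): A's derived opinion of C via recommender B."""
    return Opinion(ab.b * bc.b, ab.b * bc.d, ab.d + ab.u + ab.b * bc.u)

def consensus(x, y):
    """Consensus: fuse two independent opinions about the same target."""
    k = x.u + y.u - x.u * y.u  # assumes the opinions are not both fully certain
    return Opinion((x.b * y.u + y.b * x.u) / k,
                   (x.d * y.u + y.d * x.u) / k,
                   (x.u * y.u) / k)

a_b = Opinion(0.7, 0.1, 0.2)   # A's opinion of recommender B
b_c = Opinion(0.6, 0.2, 0.2)   # B's opinion of target C
derived = discount(a_b, b_c)   # serial combination along the chain A -> B -> C
```

Both operators preserve b + d + u = 1, which is what allows chains and parallel paths of opinions to be collapsed into a single well-formed opinion.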

Even though opinions in the form (b,d,u) are more manageable due to the flexible calculus that opinion space provides, evidence is usually available in other forms, that are easier for humans to understand. In [13] there is a mapping between Evidence Spaces to Opinion Spaces based on the idea of coding the observations as elements of the Beta Distribution probability function. In this approach the uncertainty property (u) appears to be exclusively dependent on the quantity of observations. [15] has an alternative mapping that uses both quantitative and qualitative measures to transform observations into opinions.
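The mapping from evidence space to opinion space in [13] has a simple closed form: with r positive and s negative observations, uncertainty depends only on the amount of evidence. A sketch (the function name is ours; this is the quantitative mapping of [13], not the alternative of [15]):

```python
def opinion_from_evidence(r, s):
    """Map r positive and s negative observations to a (b, d, u) opinion,
    following the Beta-distribution mapping: u shrinks as evidence accumulates."""
    total = r + s + 2
    return (r / total, s / total, 2 / total)

print(opinion_from_evidence(0, 0))  # no evidence at all: full uncertainty (0, 0, 1)
print(opinion_from_evidence(8, 2))  # mostly positive evidence, some certainty gained
```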

In contrast, other similarity-based approaches [16] use the idea of linking users indirectly with each other through predictability measures, but these have not been tested in real environments.

As we mentioned, for trust to become transitive in long chains, a common purpose must exist along the chain. Only the last relationship in the chain should concern trust in a certain purpose; all the other trust relationships should be with respect to the entities' recommending abilities for that purpose.

It is worth mentioning another approach to making recommendation systems trust-enabled [17], which does not distinguish between functional and recommender trust.

3 Our Approach

3.1 Using Trust in Recommendation Systems

As we mentioned in 2.1, in standard collaborative filtering the correlation of ratings is done on a nearest-neighbour basis, which means that only users who have common experiences can be correlated. In that scheme, only knowledge within a radius of one hop from the referenced node can become useful. For example, in the simple scenario of figure 1, where 3 services are experienced by 4 entities, there is no way under the standard method for any knowledge from entity A to be used in providing recommendations to entity D.

Figure 1. Using Trust to link A, B, C and D together

Our idea is to exploit information from any experiences that can be reached beyond the barriers of the local neighborhood for the benefit of the querying entities. We deal with this issue by utilizing the trust that could exist between the entities and in this way build a web of trust within the system.

Then, those entities that may have experienced the services in question become reachable through the web of trust, providing additional information to the querying entities for making predictions about services they might like but for which no relevant experiences have been found within their local neighbourhood. However, this requires some way of mapping the experiences of each entity into trust measures and, in the opposite direction, transforming the derived trust into some form of information that can provide useful recommendations.

Once all these issues are sorted out, the quality of the recommendation system should improve, since more queries can be answered and the problems of sparsity are thus avoided.

3.2 Our Trust Modeling

In general, trust models are used to enable the parties involved in a trust relationship to know how much reliance to place on each other and there are a few models that have been proposed to calculate trust, for instance [13][18].

The problem that emerges when trust is to be used in a recommendation system is that the entities involved provide ratings of items rather than trust estimates of other entities. Making the model trust-enabled would therefore require that all the information that has been expressed in the form of ratings be transformed into trust values, and this requires a transformation method.

In [15] we proposed a technique for modeling trust relationships between entities, derived from the evidence that characterizes their past experiences. Our model aims to provide a method for estimating how much trust two entities should place in each other, given the similarity between them. In this model, entities are considered more similar if they can more accurately predict each other's ratings. Predictions can be made using Resnick's formula [19] or some alternative to it [16][6].

p(a,i) = r̄a + [ Σ(u=1..n) w(a,u) · (r(u,i) − r̄u) ] / [ Σ(u=1..n) |w(a,u)| ]    (1)

where r̄a is the average rating of the querying user a; w(a,u) is the coefficient of the similarity correlation of user a with user u (which appears as the weight of the deviation); r̄u is the average rating of each of the n entities that provide recommendations; r(u,i) is entity u's rating of item i; and p(a,i) is the predicted rating.

As can be seen from the formula, the prediction depends on the number of correlated entities n and becomes noisy and unreliable when 5 or fewer entities are involved. Hence, higher accuracy becomes possible as more entities are incorporated in the calculation.
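Formula (1) can be sketched as follows. The division by the sum of absolute weights follows the usual statement of Resnick's formula; the variable names are ours:

```python
def predict(avg_a, neighbours):
    """Predict user a's rating of an item.
    neighbours: list of (w_au, r_ui, avg_u) for the n correlated users, where
    w_au is the similarity weight, r_ui the neighbour's rating of the item
    and avg_u the neighbour's average rating."""
    den = sum(abs(w) for w, _, _ in neighbours)
    if den == 0:
        return avg_a  # no usable neighbours: fall back to a's own mean
    num = sum(w * (r_ui - avg_u) for w, r_ui, avg_u in neighbours)
    return avg_a + num / den

# two neighbours who both rated the item above their own averages
print(predict(3.5, [(0.8, 4.0, 3.0), (0.4, 5.0, 4.5)]))
```

With both neighbours rating the item above their personal means, the prediction for a lands above a's own mean of 3.5, as expected.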

In contrast to modeling trust using the Beta distribution function [13], our similarity-based modeling technique can also be used in the opposite direction, to estimate similarities given the trust between the entities. Using this characteristic, a querying entity that receives ratings from some distant entities, for which it can derive a trust estimate through the graph, will be able to use them in its prediction scheme.

4 Our Experiments

As we mentioned in section 2.3, there is a requirement for a common trust purpose that has to be met in order to regard trust as transitive. In the trust graph we have used to bring users together, we assumed as the common purpose the involved parties' ability to recommend. This ability follows from the way the trust relationships have been formed. Hereafter, in this experiment, we assume that the entities do in fact have the ability to provide recommendations, provided they appear to have a common taste in things.

We performed a series of tests to examine how efficient our system might be if applied to a real recommender system. Efficiency is measured by how successfully the system can predict consumers' preferences. In our experiments we used data from the publicly available dataset of the MovieLens project [20]. MovieLens is a film recommendation system based on collaborative filtering, established at the University of Minnesota. The publicly available dataset contains around one million ratings of movies made by 6,040 users who joined MovieLens in the year 2000. In our experiment we used a subset of 100 users, which comprises only around 13,000 ratings.

To avoid poor performance due to the noisy behaviour of the Pearson correlation, we applied some filtering to the existing relationships: relationships built upon 5 or fewer common experiences were not considered in our calculations.

The dataset also contains timestamps for every rating indicating when the rating took place, but that information was not considered at all in our correlations since at this stage we intended to study the static behaviour of the model. The timestamps might be useful in some future experiment if used as a secondary criterion for choosing ratings to be considered in the trust relationships. For example, only ratings that have been issued by both counterparts within a certain amount of time will be considered in a trust relationship.

In our analysis, we demonstrate how such a system would perform in comparison to standard collaborative filtering. We also made a comparison against a system that involves no recommendation system at all, where the users make the choices themselves using their intuition. For this comparison, every user's predictions were guided solely by personal past experiences. Needless to say, such a scheme is meaningful only for those users who have a significant number of personal experiences.

We introduce two notions that will be used in our measurements:

Computability (C): We define this as the total number of services for which a user can find opinions through the trust graph, divided by the total number of services that have been rated by all counterparts in the sample. This normalization value should be seen as a performance limit, as no more services can be reached by any of the counterparts. The Computability value is specific to a particular user, since the number of services that can be reached depends entirely on the user's position in the graph as well as on the number of his or her own experiences.

Recommendation Error (E): We define this as the average error a user makes when trying to predict his or her own impressions of the services reachable through the reputation system. For a single prediction, it can be defined as the predicted rating divided by the rating given after the actual experience. Like Computability, this measure is specific to a particular user.

To provide a unique metric of effectiveness, we also introduce the Normalized Coverage factor F. This measure combines Recommendation Error and Computability into a single value and is expressed as:

F = C · (1 − E)    (2)

where E is the average recommendation error for a particular user and C is the Computability value for that user.

F represents how much a user benefits from his participation in the community. High values of F should mean that participation is beneficial for the user.
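Reading equation (2) as F = C · (1 − E), which is consistent with a high F meaning high coverage together with low error, the factor is straightforward to compute per user (a sketch under that assumption):

```python
def normalized_coverage(C, E):
    """F = C * (1 - E): fraction of reachable services, discounted by the
    average recommendation error. Assumes C and E both lie in [0, 1]."""
    return C * (1.0 - E)

# a user reaching 60% of the services with a 20% average error:
print(round(normalized_coverage(0.6, 0.2), 2))  # 0.48
```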

4.1 Testing Method

For evaluation purposes we used two algorithms, one for the calculation of Computability and one for the Recommendation Error. Due to the static nature of the data used in the experiment, there was no way to simulate a real environment of users experiencing services. For that reason, to be able to measure the difference between a prediction and the actual experience, we used the leave-one-out technique: one rating is kept hidden and the system tries to predict it.

The pseudocode we used to evaluate the Recommendation Error of the recommendations is given in figure 2.

The difference between the real rating (mentioned here as the post-experience rating) and the predicted rating gives the error. Setting k=1 in the algorithm returns the prediction error of the plain CF method, which examines the nearest neighbours only. In the same way, k=0 gives the error when users make choices guided by their intuition alone. In our experiments we ran tests for k ranging from 0 to 3. For Computability (or Coverage), we also ran evaluations for k=0,1,2,3. The pseudocode of the algorithm we used is shown in figure 3.
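The leave-one-out loop can be sketched as follows; `mean_of_rest` is only a stand-in predictor (the real evaluation plugs in the k-hop trust/CF prediction of section 3):

```python
def leave_one_out_error(ratings, predictor):
    """Hide each of a user's ratings in turn, predict it from the rest,
    and return the average absolute error."""
    errors = []
    for item, actual in ratings.items():
        rest = {i: r for i, r in ratings.items() if i != item}
        errors.append(abs(predictor(rest, item) - actual))
    return sum(errors) / len(errors)

# toy predictor standing in for the k-hop prediction: mean of the remaining ratings
mean_of_rest = lambda rest, item: sum(rest.values()) / len(rest)
print(leave_one_out_error({"m1": 4, "m2": 5, "m3": 3}, mean_of_rest))  # 1.0
```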

For the calculation of trust between any two entities in the trust graph we used a parser to discover the accessible trust paths between the trustor entity and the trustee. Then we applied subjective logic rules (the consensus and recommendation operators) to simplify the resulting graphs into a single opinion that the trustor would hold for the trustee at distance k. It was necessary to do this separately for every individual entity to prevent any opinion dependency problems [14] that can be caused when hidden topologies exist. Therefore, the calculation of the resulting trust was left to be carried out by every trustor individually.
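The path-discovery step can be sketched as a bounded depth-first search over the trust graph. This is a simplified illustration (the graph encoding and names are ours); the actual parser must additionally deal with the opinion dependency and pruning issues discussed in this section:

```python
def trust_paths(graph, trustor, trustee, max_hops):
    """Enumerate simple paths (no repeated entity) of at most max_hops edges
    from trustor to trustee. graph: dict mapping entity -> trusted entities."""
    paths, stack = [], [(trustor, [trustor])]
    while stack:
        node, path = stack.pop()
        if node == trustee:
            paths.append(path)
            continue
        if len(path) - 1 >= max_hops:  # hop budget used up
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return paths

g = {"A": ["B", "C"], "B": ["C"], "C": ["D"]}
print(trust_paths(g, "A", "D", 3))  # both A-C-D and A-B-C-D, in some order
```

Each discovered path would then be collapsed with the recommendation operator, and parallel paths fused with consensus, to yield the trustor's single derived opinion of the trustee.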

Moreover, in cases where trust paths couldn’t be analyzed and simplified further by just using these two operators, we applied a simple pruning technique to remove those opinions that were found to cause problems in the simplification process. In this study, the selection of the pruned links was done at random, but we leave for future work an extensive study of the consequences of using pruning in trust calculations as well as the formulation of a policy that would minimize these consequences.

4.2 Results

Figures 4, 5 and 6 show the results of the experiments we performed. Figure 4 shows how the prediction error changes with hop distance under various belief-filtering policies for the propagation of trust. In total we compared three filtering policies (b > 0.5, b > 0.6 and b > 0.7), where b is the belief property. Under such a policy, trust is not allowed to propagate to entities that are not considered trustworthy as defined by the filter, so path exploration proceeds only up to the point where the belief property of some neighbouring entity does not exceed the value set in the filter.

The same diagram also shows the results from the plain CF method (1 hop) as well as the case where users make choices using only their intuition. For the latter, there is no categorization for various trust filters since there is no use of trust at all. The results represent average values taken over the series of 100 entities.

It seems that, on average, intuitive rating has the lowest error but, as we will see, this criterion alone is inadequate for judging performance. An equally interesting fact is that in our method (hop distance > 1) the error is not significantly affected as the hop distance increases, which means there is no loss of precision when using the trust graph.

Figure 5 shows the average Coverage ratio, that is, the number of reachable services for the group of 100 users divided by the total number of services about which opinions can be expressed. In all cases, our method appears to perform better than both the intuitive choice and the plain CF method. A strong filtering policy, though, has a negative impact, especially at short hop distances, whereas applying a moderate filter (0.6) improves the situation at hop distance 2.

Figure 4. Accuracy of Recommendations

Figure 5. Computability of Recommendations

Finally, figure 6 presents the Normalized Coverage Factor we introduced, which can be thought of as the total gain from using a given policy. From the graph it seems that the participants do not benefit when strong filtering policies are applied. Strong filtering, though, consumes fewer resources, due to the simpler graphs that have to be explored and simplified; a comparison that does not weigh this cost against the benefit would therefore not be fair. We leave a full performance analysis to find the best policy as future research.

5 Discussion

The increase in computability that our method achieves also has a positive effect on the reduction of sparsity: our measurements show a significant fall in sparseness of 9.5%. The original 100-user dataset was found to be 97.85% sparse. This calculation of sparseness is based on the 100 users we used in the evaluations over the whole set of items (6,040), where the total number of ratings expressed by those 100 users was 12,976.
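The quoted sparseness figure can be checked directly from the counts above:

```python
# 100 users x 6,040 items, 12,976 ratings actually expressed
users, items, ratings = 100, 6040, 12976
sparsity = 1 - ratings / (users * items)  # fraction of empty user-item cells
print(f"{sparsity:.2%}")  # 97.85%
```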

Using our method, only 30% of the users benefited from the trust graph; the remaining 70% received the same benefit as with the plain CF system (1 hop). This is because in this dataset it was already likely for two users to have common experiences, which depends on how clustered the user communities in the dataset are. For example, if there were more than one clustered community of users, then on average the benefit of using the graph would be higher, because a very small number of users would be enough to bridge the gap between the separate clusters and thus to increase the number of recommendations that can be received on the other side.

That extra 30% benefit characterizes the potential of the proposed system with respect to the dataset used in the experiment. In terms of sparseness, that potential over the 100-user base corresponds to a dataset that is 88.33% sparse.

The results support the explanation we gave in section 2: the plain method suffers from reduced coverage due to the small number of ratings that close neighbours can provide, because nearest-neighbour algorithms rely upon exact matches.

For the prediction error, a comparison against a random choice of ratings instead of predicted ones shows our method to be better even at hop distances of 3. Using our dataset, randomly generated values would give error rates as low as 24.5%, but such a comparison would be unfair for two reasons: first, because it requires access to global knowledge, which is unlikely to be available, and second, because the error is highly dependent on the distribution of ratings over the rating classes.

As can be seen, even though our method increases the system output by increasing the quality of recommendations as compared to the plain method, the algorithms do not scale to large amounts of data and thus performance degrades for a large number of users. In other words, the design will not lead to a system capable of providing quality recommendations in real-time. Even if the complex and expensive computation of direct trust vectors is done off-line there will be a bottleneck in calculating the indirect trust relationships and discovery of trust paths. This is because a direct trust vector needs to be recalculated whenever a new rating is introduced by either of the two sides in a trust relationship. However, these re-calculations could be done off-line as background processes, preserving the computing power for the graph analysis. This is feasible since in such recommendation systems the user and the item data do not change frequently.

For the above reasons, the method does not seem suitable for use in centralized systems. Caching techniques might provide some extra speed in calculations, provided that changes in the virtual trust infrastructure do not happen frequently.

6 Future Issues

In the future, we intend to perform a comparison of our method with alternatives such as Horting [16] or approaches based on dimensionality reduction such as Singular Value Decomposition [21]. As regards the depth of the graph analysis, we anticipate performing more analyses using greater depths than the 3 hops we used in this experiment. This would help us to study how performance increases with the depth of search and to find the optimum depth, given the high computational load that deep searching requires.

As we mentioned in the Discussion section, our method is not suitable for application in centralized systems because it is highly compute-intensive. However, a promising way to overcome this weakness would be to restructure the centralized system as a peer-to-peer recommendation system. The benefit of such an approach is twofold: it distributes the computational load, and it provides higher robustness and lower vulnerability to the kinds of attacks we described in section 2. This architecture is also closer to the natural way that recommendations take place within groups of people.

As regards the requirement for a common purpose to exist in order for trust relationships to be used transitively, we intend to alter the assumptions we have made about the existence of a common purpose and re-run the experiment using pure recommender trust in the transitive chains. That requires, though, that we can somehow model the trust placed in a recommender's abilities for the given purpose, using the existing evidence.

We also plan to investigate the model from a graph-theoretic perspective and examine how the clustering coefficient of the trust graph might affect the quality of recommendations. A closer analysis of every user separately could also show which users benefit most from their participation in the system.

7 Conclusion

We proposed a method based on the idea of using trust relationships to extend the knowledge basis of the members of groups so that they can receive recommendations of better quality. In this study we applied a model that uses quantitative and qualitative parameters to build trust relationships between entities based on their common choices. We used an algebra for relating users together through the transitive trust relationships that can be built between them, extending in this way their neighbouring basis. For the evaluations we used real data from a publicly available centralized recommendation system; we presented some preliminary results on the performance of our method and discussed its benefits and pitfalls. Our first results show that, despite seeming incapable of providing recommendations in real time, the method improves the efficiency of the system, which translates into increased computability without significant impact on the accuracy of predictions. We also pointed out how the disadvantages could be overcome if the method is applied in decentralized environments such as peer-to-peer systems.

8 References

[1] P. Resnick and H. R. Varian, "Recommender Systems", Communications of the ACM, 40(3): 56-58, 1997.

[2] https://www.wendangku.net/doc/2310907425.html

[3] https://www.wendangku.net/doc/2310907425.html

[4] https://www.wendangku.net/doc/2310907425.html

[5] P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom and J. Riedl, "GroupLens: An Open Architecture for Collaborative Filtering of Netnews", In Proceedings of the ACM 1994 Conference on Computer Supported Cooperative Work.

[6] J. S. Breese, D. Heckerman and C. Kadie (1998). "Empirical Analysis of Predictive Algorithms for Collaborative Filtering". In Proc. of the 14th Conference on Uncertainty in Artificial Intelligence, pp. 43-52.

[7] D. Maltz and K. Ehrlich, "Pointing the Way: Active Collaborative Filtering", In Proc. of CHI-95.

[8] S. Marsh, "Formalising Trust as a Computational Concept", PhD Thesis, University of Stirling, Scotland, 1994.

[9] R. Yahalom, B. Klein and T. Beth, "Trust Relationships in Secure Systems – A Distributed Authentication Perspective", In Proc. of the 1993 IEEE Symposium on Research in Security and Privacy.

[10] B. Christianson and W. Harbison, "Why Isn't Trust Transitive?", In Proc. of the Security Protocols Workshop, pp. 171-176, 1996.

[11] A. Jøsang, E. Gray and M. Kinateder, "Analysing Topologies of Transitive Trust", In Proceedings of the Workshop of Formal Aspects of Security and Trust (FAST 2003), Pisa, September 2003.

[12] G. Shafer, "A Mathematical Theory of Evidence", Princeton University Press, 1976.

[13] A. Jøsang, "A Logic for Uncertain Probabilities", International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, Vol. 9, No. 3, June 2001.

[14] A. Jøsang, "An Algebra for Assessing Trust in Certification Chains", In Proceedings of NDSS'99, Network and Distributed System Security Symposium, The Internet Society, San Diego, 1999.

[15] G. Pitsilis and L. Marshall, "A Model of Trust Derivation from Evidence for Use in Recommendation Systems", School of Computing Science, University of Newcastle Technical Report Series, November 2004.

[16] C. C. Aggarwal, J. L. Wolf, K. Wu and P. S. Yu (1999). "Horting Hatches an Egg: A New Graph-theoretic Approach to Collaborative Filtering". In Proceedings of the ACM KDD'99 Conference, San Diego, CA, pp. 201-212.

[17] P. Massa and P. Avesani, "Trust-aware Collaborative Filtering for Recommender Systems", CoopIS/DOA/ODBASE (1) 2004: 492-508.

[18] A. Abdul-Rahman and S. Hailes, "Supporting Trust in Virtual Communities", Proceedings of the Hawaii International Conference on System Sciences, Jan 4-7, 2000, Hawaii.

[19] P. Resnick and H. R. Varian (1997). "Recommender Systems". Special issue of Communications of the ACM, 40(3).

[20] B. N. Miller, I. Albert, S. K. Lam, J. A. Konstan and J. Riedl (2003). "MovieLens Unplugged: Experiences with an Occasionally Connected Recommender System". In Proceedings of the ACM 2003 International Conference on Intelligent User Interfaces (IUI'03), January 2003.

[21] B. M. Sarwar, G. Karypis, J. A. Konstan and J. T. Riedl, "Application of Dimensionality Reduction in Recommender Systems – A Case Study", WebKDD Workshop, August 20, 2000.

相关文档 最新文档