The Research Report should have 1,000 words (+/- 100 words):
the explanation of the topic's importance should contain about 200 words, and the brief literature review about 800 words.
In total: Introduction 200 words, Main body 600 words, Conclusion 200 words.
Find the differences between the articles.
Explain how the findings differ.
Analyse and compare (the biggest part).
Then summarise, just a little bit.
Conclusion.
The references that have to be used are the three uploaded files plus this link (Chapter 9):
All references should be in Harvard style.
https://books.google.ch/books?id=KDshCgAAQBAJ&pg=PA261&dq=social+influence+in+group+decision+making+research&hl=en&sa=X&ved=2ahUKEwiKqcXh0uP3AhWrgf0HHUGqBBUQ6AF6BAgIEAI#v=onepage&q&f=false
Information Sciences 478 (2019) 461–475
A review on trust propagation and opinion dynamics in social networks and group decision making frameworks
Raquel Ureña a,∗, Gang Kou b, Yucheng Dong c, Francisco Chiclana a,d,∗, Enrique Herrera-Viedma d,e,∗
a Institute of Artificial Intelligence (IAI), School of Computer Science and Informatics, De Montfort University, Leicester, UK
b School of Business Administration, Southwestern University of Finance and Economics, Chengdu, China
c Business School, Sichuan University, Chengdu, China
d Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain
e Peoples' Friendship University of Russia (RUDN University), Moscow, Russian Federation
Article info
Article history:
Received 19 August 2018
Revised 12 November 2018
Accepted 17 November 2018
Available online 19 November 2018
Keywords:
Trust
Reputation
Influence
Social networks
Decision making
Opinion dynamics
Abstract
On-line platforms foster the communication capabilities of the Internet to develop large-scale influence networks in which the quality of the interactions can be evaluated based on trust and reputation. So far, this technology is well known for building trust and harnessing cooperation in on-line marketplaces, such as Amazon (www.amazon.com) and eBay (www.ebay.es). However, these mechanisms are poised to have a broader impact on a wide range of scenarios, from large-scale decision making procedures, such as the ones implied in e-democracy, to trust-based recommendations in the e-health context or influence and performance assessment in e-marketing and e-learning systems. This contribution surveys the progress in understanding the new possibilities and challenges that trust and reputation systems pose. To do so, it discusses trust, reputation and influence, which are important measures in network-based communication mechanisms to support the worthiness of information, products, services, opinions and recommendations. The existing mechanisms to estimate and propagate trust and reputation in distributed networked scenarios, and how these measures can be integrated in decision making to reach consensus among the agents, are analysed. Furthermore, it also provides an overview of the relevant work on opinion dynamics and influence assessment in social networks. Finally, it identifies challenges and research opportunities on how the so-called trust-based network can be leveraged as an influence measure to foster decision making processes and recommendation mechanisms in complex social network scenarios with uncertain knowledge, like those mentioned in e-health and e-marketing frameworks.
© 2019 The Authors. Published by Elsevier Inc.
This is an open access article under the CC BY license.
( http://creativecommons.org/licenses/by/4.0/ )
https://doi.org/10.1016/j.ins.2018.11.037
1. Introduction
Virtual interactions between people and services without any previous real-world relationship have experienced an exponential increase with the availability of interactive on-line sites, including the so-called social networks. These interconnected platforms present very diverse functionalities: from the well-known social networks Facebook (www.facebook.com), Instagram (www.instagram.com) or Twitter (www.twitter.com), where people share pictures and thoughts with their friends and followers, to e-commerce platforms like Amazon or eBay; crowd-sourcing platforms to share knowledge and expertise, such as Wikipedia (www.wikipedia.com), Slashdot (www.slashdot.com), or Quora (www.quora.com); and on-line facility-sharing networks such as UBER (www.uber.com) and Blablacar (www.blablacar.com) for cars or Airbnb (www.airbnb.com/) for accommodation. In spite of the diversity of the available on-line communities, all of them share the common characteristic of having a vast amount of users interacting with each other under a virtual identity. These emerging social media channels permit users to build various explicit or implicit social relationships and to leverage the network as a worldwide showcase to disseminate and share products, services, information, opinions and recommendations.
In these digital media scenarios, the evaluation of the credibility of the information constitutes a more challenging problem than in conventional media, because of their inherently anonymous, open nature that is characterised by a lack of strong governance structures [23]. In fact, this anonymity offers a favourable environment for malicious users to spread false information, viruses, or even malicious files in the case of Peer-to-Peer (P2P) networks [26]. Therefore, it is necessary to use mechanisms that allow users to choose the peers with whom to interact, as well as to effectively identify and isolate the malicious ones.
The ideal solution would be to develop a web of reputation and trust, either at a local level (individual websites, for example) or across the whole web, that would enable users to express and propagate trust in others to the entire network, allowing other users to assess the quality of the information or service provided even without a prior interaction with the agent in question. The ultimate goal here is to estimate, for each agent, a reputation level or score. In this sense, reputation can be understood as a predictor of future behaviour based on previous interactions; that is, any agent will be considered highly regarded if it has consistently performed satisfactorily in the past, assuming that the service can be trusted to perform as expected in the future as well. Therefore, reputation and trust based management will ultimately help to minimise poor and dishonest agents as well as to encourage reliable and trustworthy behaviour [17,22,30], which, in turn, makes reputation one key mechanism for social governance on the Internet [49].
When it comes to estimating reputation and propagating trust, we can recognise two main differences between traditional and on-line environments: (i) the traditional indicators that allow trust and reputation to be estimated in the physical world are missing in on-line environments, and so electronic substitutes are required; (ii) in the physical world, communicating and sharing information related to trust and reputation is relatively difficult, since this knowledge is constrained to local communities. On the contrary, in the case of IT systems, the Internet can be leveraged to design efficient ways to calculate and propagate this information on a world-wide scale [30].
Some examples of these on-line reputation networks are included in e-commerce platforms such as Amazon and eBay, or even in accommodation booking platforms like Airbnb, where both hosts and guests are rated with the aim of providing a global reputation score publicly available to all users. In this regard, it has been observed that a reliable reputation system increases users' trust in the Web [9]. In fact, for the case of e-auction systems, several studies confirm that reputation mechanisms benefit both sellers and buyers [27]. More concretely, it has been concluded that part of eBay's commercial success can be attributed to its reputation mechanism, eBay's Feedback Forum, which also allows dishonest behaviour to be rated negatively [19].
On the other hand, trust has also been considered an important factor influencing decision making and consensus reaching between multiple users in group decision making (GDM) scenarios [5,33,38,39,41]. These procedures allow agents to negotiate in order to arrive at mutually acceptable agreements [3,32,46], and so trust here can be used to spread experts' opinions and to provide recommendations [44,45,47]. A recent survey on social network based consensus approaches [11] points out that trust based GDM approaches are still at an early stage and that they lack the necessary tools to dynamically calculate inter-agent trust and influence.
The aim of this article is to survey the main mechanisms to generate and propagate trust and reputation in social networks and to point out several open questions concerning the integration of trust and reputation based measures and opinion dynamics procedures in the context of decision making approaches. To do so, this contribution is organised as follows: Section 2 analyses the main characteristics of on-line social communities. Section 3 introduces the concepts of trust and reputation, and the differences between them; the main existing procedures to estimate reputation and to propagate trust are also explained in this section. Section 4 focuses on trust based GDM approaches, while the main approaches to carry out opinion spreading in social networks are reviewed in Section 5. The main challenges and research opportunities on how the analysed trust and reputation based systems can be leveraged as an influence measure to foster decision making processes and recommendation mechanisms in complex social network scenarios with uncertain knowledge are pointed out in Section 6. Finally, conclusions are presented in Section 7.
2. Social networks
A social network can be considered as a set of people (or even groups of people) that participate and interact, sharing different kinds of information with the purpose of friendship, marketing or business exchange. Social network modelling refers to the analysis of the different structures in the network to understand the underlying pattern that may either facilitate or impede knowledge creation in this type of interconnected communities [21].
2.1. Characteristics of real world social networks
A multi-person social network has some specific characteristics, when compared with a random graph of nodes, that have to be considered in order to properly understand opinion dynamics, trust propagation and influence. In the following, some of the most important ones are pointed out:
A small-world network
The two main structural properties that define this type of network are a high clustering coefficient and an average path length that scales with the logarithm of the number of nodes. The clustering coefficient is also known as transitivity, that is, a friend of a friend is likely to be my friend, and the average path length is the minimum number of nodes to traverse to move from A to B [21].
Scale-free network
This implies that a few nodes present an elevated number of connections (degree) whereas the majority of them present very few connections. These types of networks have no characteristic scale for the degrees.
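The two structural properties above can be checked numerically. The following is a minimal sketch, assuming the networkx library and illustrative generator parameters; Watts-Strogatz and Barabási-Albert graphs are used here only as stand-ins for small-world and scale-free networks.

```python
# Illustrative sketch (assumed library: networkx; parameter values are arbitrary).
import networkx as nx

# A connected Watts-Strogatz graph approximates a small-world network:
# high clustering with a short average path length.
small_world = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1, seed=42)

# A Barabasi-Albert graph approximates a scale-free network:
# a few highly connected hubs, while most nodes have very few connections.
scale_free = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

for name, g in [("small-world", small_world), ("scale-free", scale_free)]:
    clustering = nx.average_clustering(g)                # transitivity-like measure
    path_length = nx.average_shortest_path_length(g)     # grows roughly with log(n)
    print(f"{name}: clustering = {clustering:.3f}, average path length = {path_length:.2f}")
```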
In a social network, it is key to identify which agents or individuals exert the highest influence in the network, and which are the nodes that receive this impact. The eigenvector centrality proposed by Bonacich and Lloyd [2] has been extensively adopted as a measure of the relative importance of an individual in a social influence network. In this way, centrality scores are given to all the nodes in the network based on the premise that "the centrality or status of an individual is a function of the status of those who choose him"; for example, being chosen by someone powerful makes one more powerful in the eyes of the others. Thus, a node having a high eigenvector centrality means that it is connected to other nodes with high eigenvector centrality as well. This measure can be mathematically formalised as follows:
Definition 1 (Centrality [2]). Given an adjacency matrix $A = (a_{ij})$, where element $a_{ij}$ represents the degree of influence of node $i$ towards node $j$, and letting $v = (v_1, \ldots, v_i, \ldots, v_n)$ be the unknown vector of centrality scores for each node, then

$$v_i = v_1 a_{1i} + v_2 a_{2i} + \cdots + v_n a_{ni},$$

which can be expressed as the following eigenvector equation:

$$A^{T} v = v.$$

Notice that this eigenvector based centrality assessment may not be applicable to asymmetric networks. For these cases, a generalisation of this measure, denominated α-centrality, that allows every individual some status that does not depend only on his or her connection to others, has been proposed in [2]. This assumes, for example, that the popularity of each student in a class depends not only on her internal connections with her fellow students within the class but also on her independent external evaluation by others, such as her teachers. Therefore, given the vector $e$ that represents these external (exogenous) sources of status or information, the above expression can be extended as follows:

$$v = \alpha A^{T} v + e.$$
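As an illustration of Definition 1 and of α-centrality, the following minimal sketch, assuming numpy and a small illustrative adjacency matrix, computes the centrality scores via standard power iteration and a direct linear solve; the numerical scheme is a common generic choice, not one prescribed in [2].

```python
# Illustrative sketch (assumed library: numpy; the adjacency matrix is made up).
import numpy as np

# a[i, j] = degree of influence of node i towards node j
A = np.array([[0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])

def eigenvector_centrality(A, iterations=200):
    """Power iteration on A^T: the principal eigenvector gives the
    centrality scores of Definition 1, up to normalisation."""
    v = np.ones(A.shape[0])
    for _ in range(iterations):
        v = A.T @ v
        v = v / np.linalg.norm(v)
    return v

def alpha_centrality(A, e, alpha=0.1):
    """Solve v = alpha * A^T v + e, i.e. v = (I - alpha * A^T)^(-1) e,
    valid when alpha is small enough for the matrix to be invertible."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A.T, e)

e = np.ones(4)  # exogenous status, e.g. an external evaluation by a teacher
print("eigenvector centrality:", eigenvector_centrality(A))
print("alpha-centrality:      ", alpha_centrality(A, e))
```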
3. Trust and reputation systems
Trust and reputation management encompasses several disciplines such as data collection and storage, modelling, communication, evaluation and reputation safeguards [30]. Therefore, various research initiatives have been carried out in different sectors ranging from psychology, sociology and politics to economics, computer science and marketing. From the perspective of computer science, examples of existing applications that make use of reputation and/or trust approaches include peer-to-peer (P2P) networks, e-commerce, e-marketing, multi-agent systems, web search engines and GDM scenarios.
Manifestations of trust are almost obvious in our daily routine; nevertheless, finding an exact definition is challenging since trust can be represented in many different forms, leading to an ambiguous concept. In [22], the following definition of trust is provided: "trust is the extent to which one party is willing to depend on something or somebody in a given situation with a feeling of relative security, even though negative consequences are possible." The authors claim that this relatively vague definition encompasses the following aspects: "(i) dependence on the party that is trusted; (ii) reliability of the trusted entity; (iii) utility: a positive utility will result from a positive outcome, and negative utility will result from a negative outcome; (iv) risk attitude, which implies the trusting party is willing to accept a possible risk."
On the other hand, according to the Concise Oxford Dictionary, "reputation is what is generally said or believed about a person's or thing's character or standing." This definition is aligned with the one given by social network researchers, which states that "reputation is a quantity measure derived from the underlying social network which is globally visible to all members of the network" [13]. Therefore, reputation can be understood as the perception that an agent creates through past actions about its intentions and norms at a global level. Reputation can be assessed in relation to one individual or to a whole group. For example, group reputation can be obtained as the average of all its individual members' reputation values. Indeed, it has been observed that the fact that an individual belongs to a given group has an impact on his/her reputation, depending on the given group's reputation.
Notice that the concept of reputation is closely linked to that of trustworthiness, since dependence and reliability, which are key implications of the definition of trust, can be assessed through a person's reputation, for example, based on the evaluations given by the other members in the system. Therefore, trust can be established through the use of reputation; that is, one may trust someone who has a good reputation. However, the difference between trust and reputation resides in the fact that trust systems take into consideration as input general and subjective measures of trust (reliability) on a pairwise basis, whereas reputation systems consider information about objective events like specific transactions [22].
Trust manifestations in traditional systems include subjective notions of friendship, long-term knowledge or even intuition, which are not available in on-line systems. Thus, electronic measures are required to computerise these abstract concepts. Conversely, in the real physical world, trust and reputation systems are confined to small local communities, and so the proper use of IT technologies will allow the effective development of worldwide systems to exchange and collect trust based knowledge. Some challenges arise in this regard: (i) finding effective ways to model and recognise trust and reputation in the on-line world, i.e. to identify the available cues and develop mechanisms to fuse them; (ii) mechanisms to collect and propagate these measures in a scalable, secure, and robust way. According to Kamvar et al. [26], there are six main characteristics that any on-line trust and reputation system should address:
1. Self-policing. The system should rely on the information given by the users of the network and not on some central authority; therefore, ratings given by the users about current interactions should be collected and distributed.
2. Durability in time of the entities. After an interaction, it is normally assumed that a posterior interaction will take place in the future.
3. Anonymity. The reputation and trust should be linked to an ID.
4. No profit for newcomers. Reputation is built by constant good behaviour; there are no advantages for new members.
5. Minimal overhead. The computation of the trust and reputation values should not impose a significant computational burden on the whole system.
6. Robustness to malicious collectives. Users trying to abuse the trust system should be immediately recognised and blocked in the system.
In the following, the main characteristics of trust and reputation systems are analysed. To do so, in Section 3.1 the main reputation network architectures are presented. In Section 3.2, the procedures to compute reputation are outlined. In Section 3.3, the principal approaches to carry out trust propagation between peers are studied. Finally, in Section 3.4, an overview of various methods that use both reputation and trust is presented.
3.1. Reputation network architectures
In a reputation system, once a transaction between two agents is completed, the agents are required to rate the quality of the transaction (service). The architecture of these systems determines the way in which the ratings and reputation scores are collected, stored and shared between the members of the system. There exist two main types of reputation network architectures (see Fig. 1): centralised and distributed.
In a centralised reputation system (Fig. 1(a)), a central authority collects all the ratings and constantly updates each agent's reputation score as a function of the ratings the agent received. This type of system requires (i) a centralised communication protocol in charge of keeping the central authority updated with all the ratings; and (ii) a reputation calculation method for the central authority to estimate and update the reputation of each agent. In contrast, in a distributed reputation system (Fig. 1(b)), each agent individually collects and combines the ratings from the other agents. That is, an agent A who wants to transact with another target agent B has to request ratings from the other community members who have directly interacted with agent B. Consequently, given the distributed nature of the information, obtaining the ratings from all interactions with a given agent may be too expensive (time consuming), and so only a subset of the interactions, usually from the relying agent's network, is considered to calculate the reputation score. This type of system requires (i) a distributed communication protocol to allow agents to get information from the other agents they are considering transacting with; and (ii) a reputation computation method to estimate and update the reputation given the values of other agents (neighbours). A well known example of a distributed architecture is P2P networks, in which each agent acts as both client and server. It is noted that these networks may introduce a security threat since they could be used to propagate malicious software or to bypass firewalls. Therefore, the role of reputation in this particular case is crucial to determine which nodes in the network are most reliable and which ones should be avoided.
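A minimal sketch of the centralised architecture of Fig. 1(a); the class and method names are hypothetical and the mean-of-ratings update is chosen purely for illustration, not taken from any specific system in the literature.

```python
# Illustrative sketch of a centralised reputation authority (hypothetical names).
from collections import defaultdict

class CentralReputationAuthority:
    """Centralised architecture: a single authority stores all ratings."""

    def __init__(self):
        self._ratings = defaultdict(list)  # agent id -> list of ratings in [0, 1]

    def submit_rating(self, rated_agent, rating):
        """Called by an agent after completing a transaction with `rated_agent`."""
        self._ratings[rated_agent].append(rating)

    def reputation(self, agent):
        """Publicly visible reputation score: here, simply the mean of all ratings."""
        ratings = self._ratings[agent]
        return sum(ratings) / len(ratings) if ratings else None

# In a distributed architecture (Fig. 1(b)) there is no single authority: agent A
# would instead query the neighbours that interacted with B and aggregate only
# the subset of ratings it manages to collect.
authority = CentralReputationAuthority()
authority.submit_rating("agent_B", 0.9)
authority.submit_rating("agent_B", 0.7)
print(authority.reputation("agent_B"))  # 0.8
```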
3.2. Reputation calculation
In the following the most popular mechanisms to compute reputation are outlined.
Fig. 1. Architecture for reputation systems.
Counting
This simple technique consists of summing the positive ratings and subtracting the negative ones. This is the technique proposed by eBay [34]. An alternative, related approach is Amazon's, based on a weighted average of the ratings taking into consideration factors such as rater trustworthiness, distance between ratings and current scores [36].
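A minimal sketch of the two schemes just described, with illustrative numbers; the weighting in the second function is an assumption in the spirit of the Amazon approach, since the exact factors used in [36] are not reproduced here.

```python
# Illustrative sketch: counting and weighted-average reputation scores.

def counting_score(positive, negative):
    """Reputation as the number of positive ratings minus the negative ones (eBay-style)."""
    return positive - negative

def weighted_average_score(ratings, rater_trust):
    """Average of the ratings, weighted by how trustworthy each rater is assumed to be."""
    total_weight = sum(rater_trust)
    return sum(r * w for r, w in zip(ratings, rater_trust)) / total_weight

print(counting_score(positive=120, negative=5))                        # 115
print(weighted_average_score([5, 4, 1], rater_trust=[0.9, 0.8, 0.1]))  # untrusted rater barely counts
```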
Probabilistic models
These models use previous binary ratings, positive or negative, as input to estimate the probability of a future transaction being positive or negative [18,22]. The β-probability distribution is a family of continuous probability distributions that has been used in Bayesian analysis to describe initial knowledge concerning the probability of success. Its expression relies on the gamma function $\Gamma$ in the following way:

$$\beta(p \mid \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\; p^{\alpha - 1} (1 - p)^{\beta - 1},$$

where $0 \le p \le 1$ and $\alpha, \beta > 0$, with expectation value $E(p \mid \alpha, \beta) = \frac{\alpha}{\alpha + \beta}$. In the reputation framework of interest in this paper, $\alpha$ and $\beta$ represent the amount of positive and negative ratings, respectively.
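The following minimal sketch, assuming r positive and s negative past ratings (variable names are ours), evaluates the β density and the expected probability of a positive future transaction.

```python
# Illustrative sketch of the beta-distribution reputation model described above.
from math import gamma

def beta_pdf(p, alpha, beta):
    """beta(p | alpha, beta) as written above, using the gamma function."""
    return gamma(alpha + beta) / (gamma(alpha) * gamma(beta)) \
        * p ** (alpha - 1) * (1 - p) ** (beta - 1)

def beta_reputation(r, s):
    """Expected probability of a positive future transaction.

    Note: beta reputation systems commonly use alpha = r + 1 and beta = s + 1
    so that no evidence gives an expectation of 0.5; here we keep the simpler
    reading of the text above, alpha = r and beta = s (both assumed positive).
    """
    alpha, beta = r, s
    return alpha / (alpha + beta)

print(beta_reputation(r=8, s=2))       # 0.8
print(beta_pdf(0.8, alpha=8, beta=2))  # density of the distribution at p = 0.8
```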
Fuzzy models
These models use fuzzy numbers or linguistic ratings modelled as fuzzy sets, with membership functions describing the extent to which an agent can be considered trustworthy or not. Examples of this approach are the Regret system proposed by Sabater and Sierra [35] and the trust based GDM methodologies [45–47] that will be discussed in more detail in the next section.
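A generic illustration of the fuzzy idea, using triangular membership functions over the trust scale [0, 1]; this sketch is not the Regret system nor any particular GDM methodology, and the label definitions are assumptions made only for the example.

```python
# Illustrative sketch: linguistic trust ratings as triangular fuzzy sets.

def triangular(x, a, b, c):
    """Membership degree of x in the triangular fuzzy set defined by (a, b, c)."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic labels for trustworthiness on the scale [0, 1]
labels = {
    "low":    (0.0, 0.0, 0.5),
    "medium": (0.25, 0.5, 0.75),
    "high":   (0.5, 1.0, 1.0),
}

trust_value = 0.62
memberships = {name: triangular(trust_value, *abc) for name, abc in labels.items()}
print(memberships)  # degree to which the agent is considered low / medium / high trustworthy
```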
3.3. Trust propagation approaches
In a social network where the agents have expressed their trust in other agents, there might not be a direct trust relationship between every pair of agents of the network. The goal of a trust propagation system is to estimate unknown trust values between pairs of agents using the known and available trust values (see Fig. 2). Thus, given a network of $n$ agents, who may have expressed some of their levels of trust (and/or distrust), and $T = (t_{ij})$ being the matrix of trust values where $t_{ij} \in [0, 1]$ represents agent $i$'s trust value in agent $j$, it may be the case that some of the elements of matrix $T$ are unknown. Some authors have proposed to estimate the trust value between any two agents with no previous interaction between them using models that rely on some kind of transitive property of trust, via an iterative transitivity based aggregation along the different paths in the network that indirectly connect both agents, until the estimated trust scores become stable for all agents. These models are based on the premise that a user is more likely to trust the statements/recommendations/advice coming from a trusted user. In this way, trust may propagate (with appropriate mitigation) through the network [15]. These models are known as Flow Reputation Systems [17,22]. In what follows, a review of the two most representative mechanisms is presented.
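Before turning to the specific mechanisms, the following minimal sketch illustrates transitivity based propagation on a small trust matrix; the max-min aggregation is one illustrative choice among the many propagation and aggregation operators used in the surveyed models.

```python
# Illustrative sketch (assumed library: numpy): unknown entries of the trust
# matrix T are repeatedly estimated from two-step paths i -> k -> j.
import numpy as np

def propagate_trust(T, iterations=10):
    """Fill unknown entries (np.nan) of T, with all trust values t_ij in [0, 1]."""
    T = T.copy()
    n = T.shape[0]
    for _ in range(iterations):
        for i in range(n):
            for j in range(n):
                if i != j and np.isnan(T[i, j]):
                    # trust along an indirect path is limited by its weakest link (min),
                    # and the most trusted indirect path is kept (max)
                    paths = [min(T[i, k], T[k, j])
                             for k in range(n)
                             if k not in (i, j)
                             and not np.isnan(T[i, k]) and not np.isnan(T[k, j])]
                    if paths:
                        T[i, j] = max(paths)
    return T

# Direct trust statements; np.nan marks pairs of agents that never interacted.
T = np.array([[1.0,    0.9,    np.nan],
              [np.nan, 1.0,    0.6],
              [0.4,    np.nan, 1.0]])
print(propagate_trust(T))
```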
Fig. 2. Trust propagation problem.
Fig. 3. Guha et al.’s trust propagation approach.
Guha et al.'s model [15]
This approach estimates the missing trust values by carrying out atomic propagation of trust in four different ways (the first three are depicted in Fig. 3):