A two-level solution to fight against dishonest opinions in recommendation-based trust systems

06/09/2020
by Omar Abdel Wahab, et al.

In this paper, we propose a mechanism to deal with dishonest opinions in recommendation-based trust models at both the collection and processing levels. We consider a scenario in which an agent requests recommendations from multiple parties to build trust toward another agent. At the collection level, we allow agents to self-assess the accuracy of their recommendations and autonomously decide whether to participate in the recommendation process. At the processing level, we propose a recommendation-aggregation technique that is resilient to collusion attacks, followed by a credibility-update mechanism for the participating agents. The originality of our work stems from its treatment of dishonest opinions at both the collection and processing levels, which provides stronger and more persistent protection against dishonest recommenders. Experiments on the Epinions dataset show that our solution outperforms a competing model, which derives the optimal network of advisors from the agents' trust values, in protecting the recommendation process against Sybil attacks.
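The processing level described above combines two steps: aggregating recommendations weighted by each recommender's credibility, then updating those credibility scores based on how far each recommendation fell from the consensus. The sketch below illustrates this general pattern only; the function names, the learning rate, and the distance-based update rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of credibility-weighted aggregation followed by a
# credibility update. All names and constants are illustrative; the paper's
# actual aggregation and update rules differ.

def aggregate(recommendations, credibility):
    """Combine recommendations as a credibility-weighted average."""
    total = sum(credibility[agent] for agent in recommendations)
    return sum(credibility[agent] * score
               for agent, score in recommendations.items()) / total

def update_credibility(recommendations, credibility, consensus, lr=0.1):
    """Raise credibility for agents close to the consensus and lower it
    for outliers -- a simple proxy for penalizing dishonest recommenders."""
    updated = {}
    for agent, score in recommendations.items():
        error = abs(score - consensus)
        # Clamp scores to [0, 1]; agents with error > 0.5 lose credibility.
        updated[agent] = min(1.0, max(0.0,
                             credibility[agent] + lr * (0.5 - error)))
    return updated

# Three recommenders; a3 submits a dishonest (outlying) opinion.
recs = {"a1": 0.9, "a2": 0.85, "a3": 0.1}
cred = {"a1": 0.8, "a2": 0.7, "a3": 0.6}

trust = aggregate(recs, cred)
cred = update_credibility(recs, cred, trust)
```

After one round, the outlier a3's credibility drops while the consistent recommenders gain, so a3's influence on the next aggregation shrinks; repeated rounds make the aggregate progressively harder for colluding outliers to shift.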


Related research

06/03/2021
Discovering Chatbot's Self-Disclosure's Impact on User Trust, Affinity, and Recommendation Effectiveness
In recent years, chatbots have been empowered to engage in social conver...

09/17/2015
Efficient Task Collaboration with Execution Uncertainty
We study a general task allocation problem, involving multiple agents th...

11/13/2019
Getting recommendation is not always better
We present an extended version of the Iterated Prisoner's Dilemma game i...

12/28/2017
A non-biased trust model for wireless mesh networks
Trust models that rely on recommendation trusts are vulnerable to badmou...

02/11/2020
Trust dynamics and user attitudes on recommendation errors: preliminary results
Artificial Intelligence based systems may be used as digital nudging tec...

05/19/2021
POINTREC: A Test Collection for Narrative-driven Point of Interest Recommendation
This paper presents a test collection for contextual point of interest (...

05/16/2021
How Can Robots Trust Each Other For Better Cooperation? A Relative Needs Entropy Based Robot-Robot Trust Assessment Model
Cooperation in multi-agent and multi-robot systems can help agents build...
