Rating the Raters
Rating services play an increasingly important role in the digital world as more and more individuals rely on them to plan for the future, that is, to make critical or lifestyle-driven decisions, solve problems, or discover opportunities. In this context, the success of rating services will depend to a large extent on their ability to establish a trusted relationship with their users.
In fact, existing rating services generate a great deal of distrust among their users because they are fed by humans, who all too often discover and exploit flaws in the algorithms that underpin them. In addition, current rating services place the same amount of trust in every user, regardless of their sincerity or their level of experience or expertise. On top of that, businesses are devising strategies of all sorts to influence their customers’ ratings and reviews; some have gone so far as to pay targeted customers for positive ratings and reviews.
Basically, there are two major issues that need to be addressed:
– Existing rating services cannot defend themselves against individual and collaborative fake ratings and reviews, and are therefore open to manipulation.
– There is a lack of models for efficiently managing the trust placed in each rater.
Let’s take a closer look at each of these issues.
The inability to counter fake ratings
According to Bing Liu, a computer science professor at the University of Illinois at Chicago and a leading researcher in the area of fake online ratings, “for some products, up to 30 percent of ratings can be fake”. Moreover, Gartner has predicted that, by 2014, 10 to 15 percent of online ratings would be fake and paid for by companies. Fake ratings and reviews, a very cheap form of marketing, can be either excessively negative, bashing competitors’ products, services or brands, or overly positive, raving about the products, services or brands of specific businesses.
Fake ratings and reviews can be published directly by businesses or posted by customers who receive perks for their contributions. Fake ratings and reviews can also be obtained by relying on professional favorable ratings providers.
A great deal of effort and money is being spent on research and development to rise to this challenge and devise mathematical models that detect and combat fake ratings and reviews. For example, a team of researchers at the University of Rhode Island is working on a project to develop algorithms that can defend against collaborative, profit-driven manipulation of online rating services.
The lack of models to manage trust placed in raters
The rater space is completely “flat” in existing rating services: there is no hierarchy in terms of competence, that is, experience or expertise-based knowledge. Moreover, current rating services are not intrinsically organized to attract competent raters. As a result, anyone can rate anything in any domain. Although some rating services include features for tracking raters’ activities and giving a competence-based weight to each rating, this issue is far from being addressed adequately and effectively.
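To make the idea of competence-based weighting concrete, here is a minimal sketch. It assumes each rater carries a hypothetical competence score between 0 and 1 (not a feature of any existing service named here) and aggregates ratings as a weighted average:

```python
def weighted_rating(ratings, competences):
    """Aggregate ratings, weighting each one by its rater's competence.

    ratings and competences are parallel lists; a (hypothetical)
    competence of 0 silences a rater, 1.0 gives full weight.
    """
    total_weight = sum(competences)
    if total_weight == 0:
        return None  # no trusted input to aggregate
    return sum(r * c for r, c in zip(ratings, competences)) / total_weight

# A flat average treats all raters equally:
flat = sum([5, 5, 1]) / 3                             # ≈ 3.67
# Weighting by competence discounts two low-competence 5-star votes:
weighted = weighted_rating([5, 5, 1], [0.1, 0.1, 1.0])  # ≈ 1.67
```

The point of the sketch is only that the same three ratings yield very different aggregates once trust in each rater is taken into account; where the competence scores come from is exactly the open problem this section describes.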
At this point, it is interesting to draw a parallel with Google’s search engine. Considering that the web is a graph of documents, the power of Google’s search engine revolves around assigning a reputation index (a PageRank) to each document and ranking the documents by their reputation within the graph. These algorithms not only count the documents linking to a given document; they also let a document partially inherit the reputation of the documents that point to it.
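The inheritance mechanism described above can be sketched with the standard power-iteration form of PageRank. This is a simplified illustration (dangling pages are ignored, and the damping factor 0.85 is the conventional choice, not something from this article):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dangling pages are ignored in this sketch
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share  # inherit a share of reputation
        rank = new_rank
    return rank

# C is linked to by both A and B, so it inherits the most reputation;
# A in turn inherits some of C's high reputation, leaving B last.
graph = {"A": ["C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
```

Note that the ordering C > A > B comes not just from counting incoming links (A and C each have one) but from *whose* links they are, which is precisely the property the article is pointing at.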
By analogy with the graph of documents, rating services should maintain a graph of raters and implement algorithms similar to Google’s PageRank. But assigning a reputation index to each rater simply by counting incoming references would not be sufficient to estimate a rater’s real reputation, because:
– The graph of raters is more complex. Reference counting alone cannot express this complexity: the type of relationship a rater has with other users, and the level of influence the rater exerts within the community, should also be taken into account.
– The reputation index shouldn’t be a single scalar number. Viewed from the angle of knowledge, the graph of raters lives in a multi-dimensional space in which each domain of knowledge is a dimension, so the reputation index should take the form of a vector.
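The two points above can be combined in one sketch: endorsements between raters carry a weight (standing in for relationship type and influence, both hypothetical modeling choices here) and a knowledge domain, and reputation becomes a per-domain vector rather than a scalar. This is our illustration of the idea, not a description of any deployed system:

```python
from collections import defaultdict

def rater_reputation(endorsements, damping=0.85, iterations=50):
    """endorsements: list of (source, target, domain, weight) tuples,
    where weight in (0, 1] stands in for relationship type and influence.

    Returns {rater: {domain: score}} -- a reputation *vector* per rater,
    computed with a PageRank-style iteration run per knowledge domain.
    """
    raters = {r for s, t, _, _ in endorsements for r in (s, t)}
    domains = {d for _, _, d, _ in endorsements}
    n = len(raters)
    rep = {r: {d: 1.0 / n for d in domains} for r in raters}
    # Each rater's total outgoing endorsement weight, per domain.
    out_weight = defaultdict(float)
    for s, _, d, w in endorsements:
        out_weight[(s, d)] += w
    for _ in range(iterations):
        new = {r: {d: (1 - damping) / n for d in domains} for r in raters}
        for s, t, d, w in endorsements:
            # The target inherits a weighted share of the source's
            # reputation, but only in the endorsed domain.
            new[t][d] += damping * rep[s][d] * w / out_weight[(s, d)]
        rep = new
    return rep

# Hypothetical endorsements: bob is endorsed for food, cid for tech.
edges = [("ann", "bob", "food", 1.0), ("cid", "bob", "food", 0.5),
         ("bob", "cid", "tech", 1.0)]
rep = rater_reputation(edges)
```

In this toy run, bob ends up with high reputation in the "food" dimension and low reputation in "tech", while cid shows the opposite profile: exactly the kind of vector-valued reputation the bullet argues for.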
In summary, although a great deal of effort is going into making rating services more reliable, this goal is still a long way off.
We believe that a purely algorithmic approach is not sufficient to fix all of these problems. A computational approach needs to be combined with human oversight in order to substantially improve the overall trustworthiness of rating services.
Rafik Hanibeche & Adel Amri (Trustiser Founders)