All About Trust and Reputation in the Digital World

Tag: Big Data

Can We Trust Digital Psychiatry?

Over the past few years, digital psychiatry research has gone into hyperdrive. As recent studies from a number of renowned institutions (such as Harvard, the National Alliance on Mental Illness, and King's College London) reveal, psychiatric patients, even those with severe illnesses such as schizophrenia, can manage their conditions more efficiently with smartphones and wearable devices. Indeed, a range of technologies embedded within existing smartphones and wearables can collect and send valuable information to psychiatrists, allowing accurate, real-time monitoring and enabling far more efficient diagnosis and treatment plans.

For example, GPS data from a smartphone can give an accurate picture of a person's movements, which in turn can reveal clues about their mental health. As a 2016 study by the Center for Behavioral Intervention Technologies at Northwestern University in Chicago found, depressed people tend to stay at home more than when they are feeling well. Conversely, during a manic episode of bipolar disorder, patients are more active and on the move. Accelerometer data shed light on a person's movements and provide information about exercise patterns. The frequency of phone calls and text messages can indicate a change in mental state, and voice analysis technologies can help detect vocal patterns that might signal post-traumatic stress disorder or postpartum depression.
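To make the GPS idea concrete, here is a minimal Python sketch of one way such a behavioral signal could be computed. It is not taken from any of the studies cited above; the function names and the 100-meter "home" radius are our own illustrative assumptions.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def home_stay_fraction(gps_samples, home, radius_km=0.1):
    """Fraction of a day's GPS samples recorded within radius_km of home.

    A persistently high value could be one input, among many,
    to a model tracking withdrawal or low mobility.
    """
    near = sum(
        1 for lat, lon in gps_samples
        if haversine_km(lat, lon, home[0], home[1]) <= radius_km
    )
    return near / len(gps_samples)
```

In a real system this single number would of course be combined with many other signals and interpreted by a clinician, not used in isolation.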

Moreover, physiological data collected by some wearable devices, such as heart rate and temperature, can also reveal a person's mental well-being. For example, heart rate variability can be used to track the severity of bipolar disorder and schizophrenia.
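For readers unfamiliar with heart rate variability, here is a minimal Python sketch of RMSSD, a standard time-domain HRV measure computed from the intervals between successive heartbeats. The function name and sample values are illustrative assumptions, not code from any device or study mentioned above.

```python
from math import sqrt

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeat
    intervals (in milliseconds) -- a common time-domain measure of
    heart rate variability. Lower values indicate reduced variability.
    """
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))
```

A wearable would stream these beat-to-beat intervals continuously, so the metric can be tracked over hours or days rather than measured once in a clinic.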

Using data science and big data analytics to analyze all patient-related data streams provides therapists with valuable insights and actionable knowledge, enabling them to devise and execute personalized, precise treatment plans through dedicated apps that monitor patients' behavior and keep their treatment on track.

While the promise of digital psychiatry to better help those with mental illness is very enticing, many hurdles need to be addressed. One of the major challenges is trust. How can we make sure patients, some of them psychologically fragile, trust this whole new medical approach? Given the stigma associated with psychiatric illness, any security vulnerability in data collection, transmission, storage, or processing can lead to serious privacy breaches and leaks of confidential patient data, with negative consequences for both professional and personal life. On top of that, the temptation to sell app users' mental health information to corporations, especially data brokers and insurance companies, must be resisted, and more stringent legislation should be enacted.

Rafik Hanibeche & Adel Amri (Trustiser Founders)

Can We Trust the Crowd Miners?

The digital world is caught in a data deluge, caused to a large extent by the huge volume of actions, ratings, recommendations, opinions, and other information (in the form of text, audio, or video) generated every day by the citizens of the digital world. This phenomenon has not gone unnoticed by the research and commercial communities. As a result, many companies and universities have invested heavily in developing data mining techniques to harness the exaflood of data generated by the data deluge and discover valuable knowledge and relevant patterns.

Of particular interest is crowd mining, where gigantic databases of social information are mined to extract useful knowledge. One example is dishtip, a service offered by TipSense. TipSense devised a data mining algorithm that reveals the best dishes at restaurants by crunching millions of reviews, mentions, and photos of food.

Crowd mining looks very promising, but the data extracted from social databases can carry malicious content, such as fake ratings and recommendations, that corrupts the results of crowd mining tools. In this context, several approaches have been developed to fight malicious content by cleaning the data. In the realm of rating services, several universities (e.g. Cornell University) and companies (e.g. Google) are working hard to detect fake ratings.
However, we believe that fake rating detection algorithms are necessary but not sufficient to deliver high-quality data to crowd mining tools. Indeed, not all ratings are equal, which is why each rating has to be weighted by the trust placed in the user who produced it. In this context, Trustiser will push the envelope by providing crowd mining engines with reliable ratings generated by a community of members arranged hierarchically; the hierarchy is based on the trust placed in raters for each topic.
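As a sketch of the weighting idea, not of Trustiser's actual algorithm, here is a minimal Python example in which each rating is scaled by the trust placed in its rater; the function name and weights are hypothetical.

```python
def trust_weighted_rating(ratings):
    """Aggregate (rating, trust_weight) pairs into a single score.

    Each rating is multiplied by the trust placed in the member who
    produced it, so a highly trusted rater counts for more than an
    unknown or suspect one.
    """
    total_weight = sum(w for _, w in ratings)
    if total_weight == 0:
        raise ValueError("no trusted ratings to aggregate")
    return sum(r * w for r, w in ratings) / total_weight
```

Under this scheme a single 5-star rating from a highly trusted member can outweigh several fake ratings from accounts the system trusts very little, which is exactly the property plain averaging lacks.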

Rafik Hanibeche & Adel Amri (Trustiser Founders)
