
Journal of Artificial Intelligence

Year: 2022 | Volume: 15 | Issue: 1 | Page No.: 1-8
DOI: 10.3923/jai.2022.1.8
Role of Artificial Intelligence in Mental Wellbeing: Opportunities and Challenges
Bavly Samy Helmy Hanna and Andrew Samy Helmy Hanna

Abstract: COVID-19 has exposed the public to a heavy burden of mental disorders, especially amid social distancing and limited resources for mental health. The surge of AI marks a turning point not only in the diagnosis and treatment of mental disorders but also in the way we define mental health issues. AI offers many potential opportunities for mental health assessment and treatment. It can enhance mental wellbeing through internet-based cognitive behavioural therapy chatbots, intelligent virtual worlds and artificial companions, augmented reality applications, therapeutic computer games and electronic medical records. However, these opportunities come with several challenges concerning users’ privacy, data security, bias, consent, governance and regulation. With several AI options available, psychologists and psychiatrists must pick a suitable tool based on their needs, the resources available and the practicality of implementation. There is a need to develop, test and validate indigenous proprietary technology for mental health. Harmony between traditional and technology-based treatment must be achieved over time.


How to cite this article
Bavly Samy Helmy Hanna and Andrew Samy Helmy Hanna, 2022. Role of Artificial Intelligence in Mental Wellbeing: Opportunities and Challenges. Journal of Artificial Intelligence, 15: 1-8.

Keywords: Artificial Intelligence, chatbots, mental health, depression, mental disorders, data security and psychologists

INTRODUCTION

Artificial Intelligence (AI) is increasingly used in medicine for physical health applications, but mental health has been slower to adopt AI technology1,2. Compared with most non-psychiatric practitioners, mental health practitioners are more hands-on and patient-centred in their clinical practice, depending on "softer" abilities such as building connections with patients and personally observing patient behaviours and emotions3. Clinical data in the field of mental health is frequently in the form of subjective, qualitative patient remarks and written notes.

AI technology has great potential in the field of mental health4-8 and could change how we diagnose and understand mental disorders9. A person's holistic mental health is best captured by their bio-psycho-social profile10. However, our knowledge of the interconnections between these biological, psychological and social systems is limited.

The pathophysiology of mental disease is highly heterogeneous and the identification of biomarkers may allow for more objective and better classification of these illnesses. Using AI approaches, researchers may create better pre-diagnosis screening tools and risk models to assess a person's proclivity for, or risk of acquiring, a mental disorder7. As a long-term objective, we need computational techniques suited to large data to deliver customized mental healthcare.

AI allows mental illnesses to be identified at an earlier or prodromal stage, when interventions may be more effective, and treatments to be personalized based on an individual's unique characteristics. Nevertheless, caution is essential to prevent over-interpreting preliminary findings and more effort is needed to bridge the gap between AI research and clinical care in mental health.

The rest of the paper is organized as follows: first, a literature review on the use of artificial intelligence and machine learning in mental health; next, the applications of AI in mental health care; this is followed by a discussion of the challenges of AI applications in mental health care and, finally, the conclusion.

LITERATURE REVIEW

Despite efforts to identify variables linked to medication-placebo differences in antidepressant trials, there have been few consistent findings to guide participant selection in drug development settings and differential treatment in clinical practice. The failure to yield consistent results might be explained in part by limitations of the techniques used, notably the search for a single moderator while treating all other factors as noise. Zilcha-Mano et al.11 examined data from 174 unipolar depression patients aged 75 and up who were randomly allocated to citalopram or placebo. To find the most robust moderators of placebo vs. citalopram response, they used model-based recursive partitioning analysis.

They concluded that education, depression duration and baseline Hamilton Rating Scale for Depression (HRSD) score are major moderators of individual patients' HRSD slopes in the random forest analysis. Consequently, the strongest signal in favour of medication was detected among individuals with fewer years of schooling who had been depressed for a longer period since their first episode. Among more educated individuals, placebo produced the largest response relative to medication, to the point that placebo virtually surpassed active treatment.
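
As an illustration of this style of moderator analysis, the sketch below fits a random forest to synthetic trial data and inspects feature importances. It is a loose stand-in for the authors' model-based recursive partitioning, with every variable and effect size invented:

```python
# Loose, synthetic stand-in for a random-forest moderator analysis:
# predict each patient's symptom slope from baseline characteristics
# and treatment arm, then inspect feature importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 174
education = rng.integers(6, 21, n)        # years of schooling
illness_years = rng.uniform(0, 40, n)     # time since first episode
baseline_hrsd = rng.uniform(18, 30, n)    # baseline severity
drug = rng.integers(0, 2, n)              # 0 = placebo, 1 = citalopram
# Invented data-generating rule: drug helps most when education is
# lower and the illness is longer-standing.
slope = (-1.0 - drug * (0.1 * (20 - education) + 0.02 * illness_years)
         + rng.normal(scale=0.5, size=n))

X = np.column_stack([education, illness_years, baseline_hrsd, drug])
forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, slope)
for name, imp in zip(["education", "illness_years", "baseline_hrsd", "drug"],
                     forest.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
```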

The efficacy of antidepressant therapy is poor, although it might be increased by matching individuals to treatments. Clinicians currently lack empirically proven ways to determine if a patient with depression will respond to a particular antidepressant. Chekroud et al.12 designed an algorithm to determine whether patients would achieve symptomatic remission after 12-week citalopram treatment.

They analyzed patient-reported data from level 1 of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial to select the factors most predictive of treatment outcome and then trained a machine learning model on these variables to predict clinical remission. From 164 patient-reported characteristics, they identified the 25 that were most predictive of treatment outcome and used them to train the model. The model was internally cross-validated and accurately predicted outcomes in the STAR*D cohort with a considerably low margin of error. The escitalopram treatment group of the CO-MED trial provided external validation for the model.
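
A minimal sketch of this style of workflow, with synthetic stand-in data rather than the actual STAR*D items and with scikit-learn components chosen for illustration (the authors' exact feature-selection and modelling choices are not reproduced here):

```python
# Screen many patient-reported items for the most predictive subset,
# then cross-validate a classifier trained on that subset. Putting the
# selection step inside the pipeline keeps it inside cross-validation,
# avoiding information leakage.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 164))   # 164 synthetic patient-reported items
y = (X[:, :5].sum(axis=1) + rng.normal(size=1000) > 0).astype(int)  # remission

pipe = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=25)),  # keep top 25 items
    ("model", GradientBoostingClassifier()),
])
scores = cross_val_score(pipe, X, y, cv=5)
print(f"internal CV accuracy: {scores.mean():.2f}")
# External validation would apply the fitted pipeline, unchanged, to an
# independent cohort (as the authors did with CO-MED).
```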

Despite compelling evidence that depression is not a unitary entity, depressive severity is generally evaluated using total scores on questionnaires that encompass a wide variety of symptoms. Treatment efficacy is usually moderate when measured in aggregate and variations in efficacy amongst antidepressant treatments are minor. Chekroud et al.13 identified clusters of symptoms in a depressive symptom checklist using patient-reported data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial. They later replicated the findings in the CO-MED trial (Combining Medications to Improve Depression Outcomes).

Using intent-to-treat data from both trials, as well as seven additional placebo- and active-comparator phase 3 trials of duloxetine, mixed-effects regression analysis was used to test whether the identified symptom clusters have different response trajectories. Lastly, using machine-learning techniques, the outcomes for each cluster were calculated individually over the period 2014-2016.

They concluded that two popular depression severity assessments can create statistically valid clusters of symptoms. The response of these clusters to therapy varies both within and between antidepressant drugs. Choosing the appropriate medication for a specific cluster may provide a greater advantage than using an active molecule against a placebo.
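
A generic sketch of how checklist items can be grouped into symptom clusters, here via hierarchical clustering of inter-item correlations on synthetic scores; the authors' actual clustering procedure may differ:

```python
# Group questionnaire items into symptom clusters by hierarchically
# clustering their correlation structure. Synthetic item scores stand
# in for real depression-checklist responses.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
items = rng.integers(0, 4, size=(500, 16)).astype(float)  # 500 patients, 16 items
corr = np.corrcoef(items, rowvar=False)       # item-by-item correlation
dist = 1.0 - corr                             # correlation -> distance
condensed = dist[np.triu_indices_from(dist, k=1)]  # condensed form for linkage
Z = linkage(condensed, method="average")
labels = fcluster(Z, t=3, criterion="maxclust")    # force three clusters
print(labels)  # cluster membership for each of the 16 items
# Each cluster's response trajectory can then be modelled separately,
# e.g. with mixed-effects regression as described above.
```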

AI applications in mental health care: AI allows existing therapies to be delivered in innovative ways, potentially increasing their availability and effectiveness.

Internet-based cognitive behavioral therapy chatbot: Internet-based Cognitive-Behavioural Therapy (CBT) has been offered since the 1990s but has been characterized by low adherence. The development of CBT chatbots, which mirror a typical conversational style to deliver CBT, may improve adherence and offer other benefits. Fitzpatrick et al.14 found that one such chatbot, named 'Woebot', reduced both depression and anxiety in undergraduates over a multi-week course. Given the nascent state of the literature, the best application of such a chatbot is currently unknown. While some preliminary research suggests improvements in anxiety and depression, further research is needed. One study among Japanese employees observed a rise in anxiety and alcohol intake, which the authors ascribed to a greater awareness of abnormal thinking and drinking behavior15.

A chatbot is computer software that imitates human interaction using a chat interface, which can be text or voice-based16. The foundations of the underlying system might range from a collection of simple rule-based answers and keyword matching to sophisticated Natural Language Processing (NLP)17 and Machine Learning (ML) techniques18. NLP is concerned with the use of computers to interpret and manipulate natural language, whereas ML is concerned with self-learning computer systems that can develop and adapt to new data without being expressly designed to do so19. Regardless of the answering bot's real intelligence, there is something unique about the experience of a user submitting data and a bot responding.
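
The rule-based, keyword-matching end of this spectrum is simple enough to sketch directly. The keywords and canned replies below are invented for illustration and not drawn from any deployed system:

```python
# Minimal rule-based chatbot: match keywords in the user's message
# against hand-written rules and return the associated reply.
RULES = [
    ({"sad", "down", "depressed"},
     "I'm sorry you're feeling low. What do you think triggered this?"),
    ({"anxious", "worried", "nervous"},
     "That sounds stressful. Could you describe what you're worried about?"),
]
DEFAULT = "Tell me more about that."

def reply(message: str) -> str:
    words = set(message.lower().split())
    for keywords, answer in RULES:
        if words & keywords:      # any rule keyword present in the message?
            return answer
    return DEFAULT

print(reply("I have been feeling really sad lately"))
```

An NLP/ML-based system would replace the keyword test with intent and sentiment models learned from data, at the cost of far greater engineering and validation effort.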

Because a bot understands typical speech patterns, it may give the user the feeling of being in a genuine conversation. An app or a web search provides a direct response to a user's query, whereas a bot replicates a real-life conversation as if the user were conversing with another person; the feature's distinctiveness lies in the user's impression of the interaction20. Aside from the challenge of empowering chatbots with AI to mimic the structure of natural language conversation, another imperative feature, especially in a psychology/therapy setting, is emotional intelligence: the ability of chatbots to detect and reply appropriately to a person's emotional state. Affective computing has produced some recent work on emotionally intelligent AI21,22.
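
As a crude but accessible entry point to such emotion awareness, a lexicon-based sentiment scorer can flag a strongly negative tone and steer the reply; full affective computing goes well beyond this. A sketch using NLTK's VADER analyzer (the routing threshold is an arbitrary choice):

```python
# Score the emotional valence of a user message with a sentiment
# lexicon, then branch the chatbot's response on the result.
import nltk

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
scores = sia.polarity_scores("I feel hopeless and exhausted today.")
print(scores)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
if scores["compound"] < -0.5:   # strongly negative overall tone
    print("Routing to a supportive, empathic response template.")
```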

Between a simple conversational search and a chatbot that is the equal of a real mental health expert, there are various additional fascinating usage options. While bots that can sustain a rudimentary kind of conversation beyond one question, one input and one response are not yet smart enough to imitate a therapist, they are viable and have been used for a variety of purposes, such as a virtual dietitian for diabetic patients23, an educational system for students24 or an e-learning system that helps disabled individuals learn how to speak25. The deployment of such a chatbot would allow more conversational content to be collected from the user and the ability to analyze a larger amount of discussion presumably leads to better content suggestions.

Intelligent virtual worlds and artificial companions: Virtual reality simulation is another AI application that is gaining traction. Virtual reality is a type of human-computer interaction that allows the user to immerse themselves in a computer-generated virtual environment and interact with it26. The use of virtual reality for clinical assessment and therapy is known as clinical virtual reality27 and it has been utilized to treat a wide range of psychological illnesses28-31. In virtual worlds, AI is already being utilized to construct intelligent entities that can learn and interact with users, increasing versatility and realism. Furthermore, these artificially intelligent entities can now display emotion and engage in conversation with humans.

Virtual companions that are "biologically inspired," such as virtual home pets, may also have mental health advantages by improving mental well-being and helping people cope with loneliness. These can take the shape of virtual animals or humanoid robots that appear on a video screen. Animal robot companions, for example, have been created to provide treatment for dementia sufferers32. AI makes these artificial companions more lifelike, engaging and capable of adapting to a patient's requirements, much as it does with AI-augmented video games.

Augmented reality applications: By superimposing computer-generated images over live video imagery, augmented reality brings virtual reality and the real world together33. When coupled with other AI technologies, this technology has the potential to change how humans perceive and interact with their surroundings, as well as be utilized for several therapeutic objectives. It might, for instance, be used to produce anxiety-inducing virtual stimuli in the patient's real-world environment during prolonged exposure therapy or to provide real-time therapeutic virtual coaching on the screen to patients.

Mobile devices, such as smartphones, tablet PCs and other wearable devices, can also leverage the potential of augmented reality and other AI capabilities. For example, Google Glass, a wearable smart-glasses device, can connect people to the Internet for real-time data access and sharing, among other things. Researchers at the University of Washington (U.S.) and Aalto University (Finland) are also developing bionic contact lenses, which might one day lead to technology that allows people to scan the Internet and obtain data, such as medical information, on request34.

Therapeutic computer games: Computer games can be used for skills teaching, behaviour modelling, therapeutic diversion and other therapeutic objectives in mental health treatment. Increased patient engagement, greater treatment adherence and reduced stigma associated with psychiatric treatment are just a few of the therapeutic benefits of computer games35. Adolescents' self-confidence and problem-solving abilities have also been shown to increase with the use of therapeutic computer games36.

Many commercial computer games include AI technology, which has lately been applied to Internet-based online and social network games37. When applied to computer games, AI and machine learning technology improves realism, making the games more engaging, challenging and fun to play. Machine learning techniques also help make the games more adaptable to the demands of the patients.
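
One simple form such adaptivity can take is dynamic difficulty adjustment driven by the patient's recent success rate. The sketch below is generic and every parameter in it is an arbitrary illustration:

```python
# Nudge task difficulty toward a target success rate based on a
# rolling window of the player's recent outcomes.
from collections import deque

class AdaptiveDifficulty:
    def __init__(self, target: float = 0.7, window: int = 10):
        self.results = deque(maxlen=window)  # recent successes/failures
        self.target = target
        self.level = 1.0

    def record(self, success: bool) -> float:
        self.results.append(success)
        rate = sum(self.results) / len(self.results)
        # raise difficulty when the player succeeds too easily,
        # lower it when they struggle
        self.level = max(0.1, self.level + 0.1 * (rate - self.target))
        return self.level

game = AdaptiveDifficulty()
for outcome in [True, True, True, False, True]:
    print(f"difficulty -> {game.record(outcome):.2f}")
```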

Patients can be trained by virtual intelligent agents within games or other virtual environments such as Second Life, or AI technology can be used to control the gameplay so that the patient practices skills in areas of need38. For instance, Brigadoon is a virtual world in Second Life created specifically for persons with autism spectrum conditions. Users may engage with avatars in the simulation to learn and practice social skills in a safe setting39.

Electronic medical record: The incorporation of Artificial Intelligence (AI) into various clinical instruments used by mental health and other medical practitioners can improve convenience, accuracy and efficiency. Speech recognition technology has been utilized for medical dictation for some time.

An Electronic Medical Record (EMR) is a computer database that allows healthcare managers and physicians to record information about patients40. EMRs are increasingly being used by government and commercial medical providers because of their efficiency and accuracy in recording as compared to paper-based individual documentation40,41.

Around 85.9% of American physicians use electronic health/medical records in their offices, according to the National Center for Health Statistics42. EMR software solutions are now available that use Artificial Intelligence (AI) and Boolean logic to automate patient data input by recalling components from previous cases that are the same as, or similar to, the current one, increasing accuracy and saving time.
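
The "recall similar previous cases" idea can be sketched with plain text similarity: retrieve the most similar past note and offer it as a template. The notes below are invented and the use of TF-IDF cosine similarity is an illustrative assumption, not a description of any commercial EMR product:

```python
# Suggest a documentation template by retrieving the past note most
# similar to the new case, using TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_notes = [
    "Patient reports low mood and poor sleep; started CBT.",
    "Panic attacks twice weekly; breathing exercises assigned.",
    "Medication review; citalopram dose unchanged.",
]
vectorizer = TfidfVectorizer()
note_matrix = vectorizer.fit_transform(past_notes)

new_case = "Patient describes insomnia and persistently low mood."
sims = cosine_similarity(vectorizer.transform([new_case]), note_matrix)
best = sims.argmax()
print(f"suggested template (similarity {sims[0, best]:.2f}): {past_notes[best]}")
```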

When using an EMR system, there are concerns about client privacy, such as how much information should be stored in an EMR, especially when that record is available to professionals across an organization40,43-45.

To provide an example: because other health providers and administrators have access to client information, any document created by mental health experts can inadvertently or purposefully be revealed to related and unrelated providers and administrators alike43. As a result, experts stress the necessity of fair information practices to prevent any breaches of patient digital privacy40,41,46.

Another use might be an AI-based computer that listens to therapy or assessment sessions and intelligently summarizes them, obviating the need for clinical chart notes at the end of the session. This form of technology might be used on smartphones, tablets and other mobile platforms.

Challenges of AI applications in mental health care: The opportunities created by artificial intelligence come with several challenges of users’ privacy, data security, bias, consent, governance and regulation.

Privacy: AI may compel the treatment developer to make clear ethically problematic decisions. Automobile makers, for example, must now specify whether the driver and his or her passengers or a pedestrian's life are more important in preventing a collision while developing completely autonomous driving capabilities. Should the automobile be designed to avoid colliding with a pedestrian in any scenario, even if it means the driver's death? Although there are rarely such clear-cut conflicts in mental health care, the necessity to evaluate the potential adverse effects of medicine against the potential benefits implies that ethical concerns will arise when AI is used in mental health.

The problem of legal duty in employing an AI application is related to the ethics issue but has more immediate repercussions for the health professional. It is unclear who has legal responsibility for AI-based treatments that go awry. Who is to blame for such outcomes: the individual who uses AI, the algorithm's developer, or both?

Price concludes that if doctors exercise the standard of care, they are unlikely to face legal responsibility in the event of a bad outcome. As a result, if therapy has bad consequences but followed the standard of care, there is typically no legal liability47.

However, AI is unlikely to become the standard of treatment in most circumstances very soon. While this may change as proof of AI's usefulness grows, the healthcare practitioner now faces a larger risk of legal responsibility when utilizing an AI application that differs from the standard of care.

Data security: Data scientists working with data provided by others may lack the necessary grasp of the data's complexity to be aware of its limits. Furthermore, they may not believe it is their job to assess the data's accuracy and limits. Librenza-Garcia47 presents a thorough examination of ethical concerns surrounding the use of large data sets in AI.

Lawrie et al.48 examine the ethical problems surrounding the prediction of major mental disorders. They acknowledge that predictive algorithms are not yet precise enough, but that progress is being made. The authors express concerns about whether individuals want to know their risk of severe mental diseases, about individual and social attitudes towards such knowledge, the potential negative consequences of sharing such data and the influence of such data on early diagnosis and treatment49. They advocate for more study in this area.

Consent: Healthcare data is sensitive and mental health data especially so, because of the significant risk of stigmatization and prejudice if it is disclosed. The healthcare profession must gain the public's trust before people will accept this new technology, which is required for it to be effective. There have been high-profile examples of personal data abuse and similar incidents in the healthcare industry could have significant ramifications. Mental illness can impair capacity and, as a result, the ability to consent. Capacity levels might change over time.

As an example, a patient may initially consent to passive monitoring, but it is uncertain whether such permission remains valid if the patient loses capacity due to poor mental health. Patients would also have to consent to considerably larger volumes of data if AI were to be used; those who consent to medical notes being kept may not be as willing to consent to video, audio and other types of data being retained50.

Data bias: As with any AI application, the amount and quality of the data limit algorithm performance51. Overfitting of ML algorithms is extremely frequent for small sample sets8. The generalizability of results is limited because ML models are often tested only within the same population and not out-of-sample. These studies' predictive power is limited to the features used as input for the machine learning models, such as clinical data, demographics and biomarkers. Since no single study can be regarded as complete in this regard, the clinical relevance of the specific characteristics used to generate these models must be taken into account. It is also conceivable that the algorithms' outputs are only valid in certain circumstances or for a specific group of people. The relevance or practical value of the derived performance metrics was not always explicitly stated in these studies. To evaluate clinical value, performance accuracy should be compared with clinical diagnostic accuracy, rather than merely with chance52. Because binary classifiers are easier to train than regression models (which predict continuous scores), they are more commonly used in machine learning. However, one disadvantage of this technique is that it overlooks the severity of a condition53. Future research should aim to represent the severity of mental disorders on a scale.
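
The point about baselines can be made concrete: on an imbalanced synthetic dataset, a model can look accurate while performing no better than a trivial majority-class predictor. A sketch using scikit-learn:

```python
# Compare a real classifier against a majority-class baseline on
# imbalanced synthetic data where the features carry no signal.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 10))
y = (rng.random(800) < 0.1).astype(int)   # 10% positive class, pure noise

for name, model in [("majority baseline", DummyClassifier(strategy="most_frequent")),
                    ("logistic regression", LogisticRegression())]:
    score = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy").mean()
    print(f"{name}: balanced accuracy {score:.2f}")
# Plain accuracy would be about 0.90 for both models here; balanced
# accuracy (or comparison with clinical diagnostic accuracy) exposes
# that neither beats chance.
```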

These studies focused on features thought to be risk factors for mental disorders; future studies should look into protective variables such as wisdom, which can support an individual's mental health54,55. Research attempting to model uncommon occurrences (such as suicide) or rare illnesses faces the problem of extremely unbalanced datasets, in which an event occurs rarely or only a small percentage of the population develops the illness. In these cases, classifiers are more likely to predict the majority class's outcome (thereby missing uncommon occurrences such as suicidal ideation)56.

These researchers used a variety of techniques to address this challenge, including the following (a brief sketch follows the list):

Under-sampling: reducing the number of samples in the majority class57
Over-sampling: duplicating samples from the minority class to match the ratio of the majority and minority groups58
Ensemble learning methods: decreasing variance and improving predictions by merging several models59,60; only a few studies have used these methods
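
A sketch of the first two remedies using the imbalanced-learn Python toolbox, which appears in this paper's reference list, on synthetic data with a rare outcome:

```python
# Rebalance an imbalanced dataset by under-sampling the majority class
# or over-sampling the minority class with imbalanced-learn.
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.05).astype(int)   # ~5% minority class

X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)
X_over, y_over = RandomOverSampler(random_state=0).fit_resample(X, y)
print("original:      ", np.bincount(y))        # roughly [950, 50]
print("under-sampled: ", np.bincount(y_under))  # majority shrunk to match
print("over-sampled:  ", np.bincount(y_over))   # minority duplicated to match
# Ensemble remedies (e.g. imblearn's BalancedRandomForestClassifier)
# combine several models to reduce variance on the minority class.
```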

Governance and regulation: Mobile applications are far less regulated than medical devices and therapies. In 2015, for example, approximately 47,000 mental health applications were available for purchase in the United States. Most of these apps had not been validated and those that had were mostly verified through small-scale, short-term pilot studies. Patients are at risk when they use unvalidated smartphone applications because of low-quality information and potentially hazardous advice. Since these apps focus primarily on self-management and lifestyle, many of the apps available for mental health may not be classified as "medical goods"61.

Clinical governance is critical in this field to avoid unregulated usage and possible risk. The NHS in the United Kingdom has taken steps to implement this by developing the "NHS Digital Apps Library," which assigns "NHS Approved" badges to apps that have adequate proof of efficacy and safety.

New technologies cannot be expected to be uniformly beneficial, as they also have the potential to be harmful. While some people believe that passive monitoring gives them greater control over their disease, others may feel overwhelmed by the added responsibility or regard it as a continuous reminder that they are sick61. Some technologies may help some subpopulations while harming others and these distinctions must be established through extensive study.

CONCLUSION

Artificial intelligence technology offers both enormous potential and significant challenges in the field of mental health treatment. The successful integration of artificial intelligence into healthcare could have a significant impact on care quality. New technologies for diagnosis, monitoring and therapy in psychology may enhance patient outcomes while also rebalancing practitioner workload. While there is great promise, there are also substantial risks and obstacles, and careful navigation will be required to guarantee the proper adoption of this new technology.

SIGNIFICANCE STATEMENT

This study analyses the role of artificial intelligence in defining mental health issues and in diagnosing and treating mental disorders. The COVID-19 pandemic and the consequent lockdowns have resulted in an enormous burden of mental disorders with insufficient resources available for mental health. Thus, AI could assist mental health practitioners in redefining mental illnesses more objectively than the DSM-5, as well as in assessing and treating mental health disorders through internet-based cognitive behavioural therapy chatbots, intelligent virtual worlds and artificial companions, augmented reality applications, therapeutic computer games and electronic medical records.

REFERENCES

• Jiang, F., Y. Jiang, H. Zhi, Y. Dong and H. Li et al., 2017. Artificial intelligence in healthcare: Past, present and future. Stroke Vascular Neurol., 2: 230-243.

• Miller, D.D. and E.W. Brown, 2018. Artificial intelligence in medical practice: The question to the answer? Am. J. Med., 131: 129-133.

• Gabbard, G.O. and H. Crisp-Han, 2017. The early career psychiatrist and the psychotherapeutic identity. Acad. Psychiatry, 41: 30-34.

• Janssen, R.J., J. Mourão-Miranda and H.G. Schnack, 2018. Making individual prognoses in psychiatry using neuroimaging and machine learning. Biol. Psychiatry: Cognitive Neurosci. Neuroimaging, 3: 798-808.

• Luxton, D.D., 2014. Artificial intelligence in psychological practice: Current and future applications and implications. Professional Psychol.: Res. Pract., 45: 332-339.

• Mohr, D.C., M. Zhang and S.M. Schueller, 2017. Personal sensing: Understanding mental health using ubiquitous sensors and machine learning. Annual Rev. Clin. Psychol., 13: 23-47.

• Shatte, A.B.R., D.M. Hutchinson and S.J. Teague, 2019. Machine learning in mental health: A scoping review of methods and applications. Psychol. Med., 49: 1426-1448.

• Iniesta, R., D. Stahl and P. McGuffin, 2016. Machine learning, statistical learning and the future of biological research in psychiatry. Psychol. Med., 46: 2455-2465.

• Bzdok, D. and A. Meyer-Lindenberg, 2018. Machine learning for precision psychiatry: Opportunities and challenges. Biol. Psychiatry: Cognitive Neurosci. Neuroimaging, 3: 223-230.

• Jeste, D.V., D. Glorioso, E.E. Lee, R. Daly and S. Graham et al., 2019. Study of independent living residents of a continuing care senior housing community: Sociodemographic and clinical associations of cognitive, physical, and mental health. Am. J. Geriatric Psychiatry, 27: 895-907.

• Zilcha-Mano, S., S.P. Roose, P.J. Brown and B.R. Rutherford, 2018. A machine learning approach to identifying placebo responders in late-life depression trials. Am. J. Geriatric Psychiatry, 26: 669-677.

• Chekroud, A.M., R.J. Zotti, Z. Shehzad, R. Gueorguieva and M.K. Johnson et al., 2016. Cross-trial prediction of treatment outcome in depression: A machine learning approach. Lancet Psychiatry, 3: 243-250.

• Chekroud, A.M., R. Gueorguieva, H.M. Krumholz, M.H. Trivedi, J.H. Krystal and G. McCarthy, 2017. Reevaluating the efficacy and predictability of antidepressant treatments. JAMA Psychiatry, 74: 370-378.

• Fitzpatrick, K.K., A. Darcy and M. Vierhile, 2017. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, Vol. 4.

• Hamamura, T., S. Suganuma, M. Ueda, J. Mearns and H. Shimoyama, 2018. Standalone effects of a cognitive behavioral intervention using a mobile phone app on psychological distress and alcohol consumption among Japanese workers: Pilot nonrandomized controlled trial. JMIR Mental Health, Vol. 5.

• Abdul-Kader, S.A. and J.C. Woods, 2015. Survey on chatbot design techniques in speech conversation systems. Int. J. Adv. Comput. Sci. Appl., Vol. 6, No. 7.

• Chowdhury, G.G., 2003. Natural language processing. Ann. Rev. Inf. Sci. Technol., 37: 51-89.

• Glaz, A.L., Y. Haralambous, D.H. Kim-Dufor, P. Lenca and R. Billot et al., 2021. Machine learning and natural language processing in mental health: Systematic review. J. Med. Internet Res., Vol. 23.

• Thieme, A., D. Belgrave and G. Doherty, 2020. Machine learning in mental health: A systematic review of the HCI literature to support the development of effective and implementable ML systems. ACM Trans. Comput.-Hum. Interact., Vol. 27.

• Vaidyam, A.N., H. Wisniewski, J.D. Halamka, M.S. Kashavan and J.B. Torous, 2019. Chatbots and conversational agents in mental health: A review of the psychiatric landscape. Can. J. Psychiatry, 64: 456-464.

• Skowron, M., S. Rank, D. Garcia and J.A. Hołyst, 2017. Zooming in: Studying Collective Emotions with Interactive Affective Systems. In: Cyberemotions, Hołyst, J.A. (Ed.), Springer, Cham, pp: 279-304.

• Cominelli, L., D. Mazzei and D.E. De Rossi, 2018. Social emotional artificial intelligence based on Damasio's theory of mind. Front. Rob. AI, Vol. 5.

• Anselma, L. and A. Mazzei, 2020. Building a persuasive virtual dietitian. Informatics, Vol. 7.

• Wollny, S., J. Schneider, D.D. Mitri, J. Weidlich, M. Rittberger and H. Drachsler, 2021. Are we there yet? - A systematic literature review on chatbots in education. Front. Artif. Intell., Vol. 4.

• Smutny, P. and P. Schreiberova, 2020. Chatbots for learning: A review of educational chatbots for the Facebook Messenger. Comput. Educ., Vol. 151.

• Rizzo, A., T.D. Parsons, B. Lange, P. Kenny and J.G. Buckwalter et al., 2011. Virtual reality goes to war: A brief review of the future of military behavioral healthcare. J. Clin. Psychol. Med. Settings, 18: 176-187.

• Gorrindo, T. and J.E. Groves, 2009. Computer simulation and virtual reality in the diagnosis and treatment of psychiatric disorders. Acad. Psychiatry, 33: 413-417.

• Krijn, M., P.M.G. Emmelkamp, R.P. Olafsson and R. Biemond, 2004. Virtual reality exposure therapy of anxiety disorders: A review. Clin. Psychol. Rev., 24: 259-281.

• Reger, G.M., K.M. Holloway, C. Candy, B.O. Rothbaum, J. Difede, A.A. Rizzo and G.A. Gahm, 2011. Effectiveness of virtual reality exposure therapy for active duty soldiers in a military mental health clinic. J. Traumatic Stress, 24: 93-96.

• Freeman, D., S. Reeve, A. Robinson, A. Ehlers, D. Clark, B. Spanlang and M. Slater, 2017. Virtual reality in the assessment, understanding, and treatment of mental health disorders. Psychol. Med., 47: 2393-2400.

• Shibata, T. and K. Wada, 2011. Robot therapy: A new approach for mental healthcare of the elderly – a mini-review. Gerontology, 57: 378-386.

• Rohrbach, N., P. Gulde, A.R. Armstrong, L. Hartig and A. Abdelrazeq et al., 2019. An augmented reality approach for ADL support in Alzheimer's disease: A crossover trial. J. Neuroeng. Rehabil., Vol. 16.

• Lingley, A.R., M. Ali, Y. Liao, R. Mirjalili and M. Klonner et al., 2011. A single-pixel wireless contact lens display. J. Micromech. Microeng., Vol. 21.

• Kim, J., M. Kim, M.S. Lee, K. Kim and S. Ji et al., 2017. Wearable smart sensor systems integrated on soft contact lenses for wireless ocular diagnostics. Nat. Commun., Vol. 8.

• Coyle, D., M. Matthews, J. Sharry, A. Nisbet and G. Doherty, 2005. Personal investigator: A therapeutic 3D game for adolescent psychotherapy. Interact. Technol. Smart Educ., 2: 73-88.

• Fujita, H. and I.C. Wu, 2012. A special issue on artificial intelligence in computer games: AICG. Knowledge-Based Syst., 34: 1-2.

• von der Heiden, J.M., B. Braun, K.W. Müller and B. Egloff, 2019. The association between video gaming and psychological functioning. Front. Psychol., Vol. 10.

• Fleming, T.M., L. Bavin, K. Stasiak, E. Hermansson-Webb and S.N. Merry et al., 2017. Serious games and gamification for mental health: Current status and promising directions. Front. Psychiatry, Vol. 7.

• Steinfeld, B.I. and J.A. Keyes, 2011. Electronic medical records in a multidisciplinary health care setting: A clinical perspective. Professional Psychol. Res. Pract., 42: 426-432.

• Jarrett, M.P., 2017. Cybersecurity—a serious patient care concern. JAMA, Vol. 318.

• Wolfe, L., M.S. Chisolm and F. Bohsali, 2018. Clinically excellent use of the electronic health record: Review. JMIR Hum. Factors, Vol. 5.

• Magruder, J.A., B.S. Adams, P. Pohto and T.L. Smith, 2018. Clinicians' experiences of transition to electronic health records. J. Coll. Couns., 21: 210-223.

• Shenoy, A. and J.M. Appel, 2017. Safeguarding confidentiality in electronic health records. Cambridge Q. Healthcare Ethics, 26: 337-341.

• Yüksel, B., A. Küpçü and Ö. Özkasap, 2017. Research issues for privacy and security of electronic health services. Future Gener. Comput. Syst., 68: 1-13.

• Holmes, C.M. and C.A. Reid, 2018. Ethics in telerehabilitation: Looking ahead. J. Appl. Rehabil. Couns., 49: 14-23.

• Price, W.N., S. Gerke and I.G. Cohen, 2019. Potential liability for physicians using artificial intelligence. JAMA, 322: 1765-1766.

• Librenza-Garcia, D., 2019. Ethics in the Era of Big Data. In: Personalized Psychiatry: Big Data Analytics in Mental Health, Passos, I.C., B. Mwangi and F. Kapczinski (Eds.), Springer, Switzerland, pp: 161-172.

• Lawrie, S.M., S. Fletcher-Watson, H.C. Whalley and A.M. McIntosh, 2019. Predicting major mental illness: Ethical and practical considerations. BJPsych Open, Vol. 5.

• O'Loughlin, K., M. Neary, E.C. Adkins and S.M. Schueller, 2019. Reviewing the data security and privacy policies of mobile apps for depression. Internet Interventions, 15: 110-115.

• Miotto, R., F. Wang, S. Wang, X. Jiang and J.T. Dudley, 2017. Deep learning for healthcare: Review, opportunities and challenges. Briefings Bioinf., 19: 1236-1246.

• Park, S.H. and K. Han, 2018. Methodologic guide for evaluating clinical performance and effect of artificial intelligence technology for medical diagnosis and prediction. Radiology, 286: 800-809.

• Jordan, M.I. and T.M. Mitchell, 2015. Machine learning: Trends, perspectives and prospects. Science, 349: 255-260.

• Lee, E.E., C. Depp, B.W. Palmer, D. Glorioso and R. Daly et al., 2018. High prevalence and adverse health effects of loneliness in community-dwelling adults across the lifespan: Role of wisdom as a protective factor. Int. Psychogeriatrics, 31: 1447-1462.

• Jeste, D.V., 2018. Positive psychiatry comes of age. Int. Psychogeriatrics, 30: 1735-1738.

• Lemaitre, G., F. Nogueira and C.K. Aridas, 2017. Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning. J. Mach. Learn. Res., 18: 559-563.

• Kessler, R.C., I. Hwang, C.A. Hoffmire, J.F. McCarthy and M.V. Petukhova et al., 2017. Developing a practical suicide risk prediction model for targeting high-risk patients in the veterans health administration. Int. J. Methods Psychiatric Res., Vol. 26.

• Choi, S.B., W. Lee, J.H. Yoon, J.U. Won and D.W. Kim, 2018. Ten-year prediction of suicide death using cox regression and machine learning in a nationwide retrospective cohort study in South Korea. J. Affective Disord., 231: 8-14.

• Šimundić, A.M., 2009. Measures of diagnostic accuracy: Basic definition. EJIFCC, 19: 203-211.

• Wahle, F., T. Kowatsch, E. Fleisch, M. Rufer and S. Weidt, 2016. Mobile sensing and support for people with depression: A pilot trial in the wild. JMIR mHealth uHealth, Vol. 4.

• Glenn, T. and S. Monteith, 2014. Privacy in the digital world: Medical and health data outside of HIPAA protections. Curr. Psychiatry Rep., Vol. 16.

• Shen, N., L. Sequeira, M.P. Silver, A. Carter-Langford, J. Strauss and D. Wiljer, 2019. Patient privacy perspectives on health information exchange in a mental health context: Qualitative study. JMIR Mental Health, Vol. 6.
