Implications of the first VR Code of Ethics

Applications of virtual reality (VR) are rapidly growing beyond strictly game-based entertainment. The technology has already shown promise in the medical, military, educational, and automotive industries, to name a few. However, as VR becomes more available to researchers, professionals, and consumers, its ethical ambiguity grows. Every new technology carries risks that require a framework for ethical use. This is especially true for VR, as consumer products such as the Oculus Rift, HTC VIVE™, and Sony PlayStation VR® are already available for purchase.

Two scientists from Germany, Michael Madary and Thomas Metzinger, recently presented the first Code of Ethical Conduct for VR technology [1]. This article concentrates only on their “Recommendations for the Research Ethics of VR”; the second part of their Code contains “Recommendations for the Use of VR by the General Public.” For brevity, the Code of Ethical Conduct for VR will simply be referred to as the “Code.”

This article will assess their Code using utilitarian ethical principles. Utilitarianism is the idea that “the sole standard of right action is good consequences” [2]. It has a single moral requirement: “producing the most amount of good for the most people, giving equal consideration to everyone affected.” This principle encourages actions that limit suffering and produce the greatest good for the greatest number of individuals.
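The utilitarian calculus described above can be sketched as a simple aggregation rule: score each option by summing its utility (benefit minus harm) across everyone affected, weighting every individual equally, and choose the option with the greatest net good. The options, groups, and numbers below are hypothetical illustrations, not values from the article.

```python
def total_utility(effects):
    """Sum per-person utilities; positive values are good consequences,
    negative values are suffering. Everyone is weighted equally."""
    return sum(effects.values())

# Hypothetical utility estimates for two courses of action.
options = {
    "halt VR research": {"participants": 0, "future users": -5},
    "continue with safeguards": {"participants": -1, "future users": 8},
}

# The utilitarian choice is the option with the greatest net utility.
best = max(options, key=lambda name: total_utility(options[name]))
print(best)  # prints "continue with safeguards"
```

The point of the sketch is only that utilitarianism reduces each choice to a single aggregate score; the hard part in practice, as the rest of this article shows, is assigning those per-person utilities in the first place.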

Code of Conduct #1: Non-maleficence

The first recommendation by Madary and Metzinger is that of non-maleficence. They suggest that “No experiment should be conducted using virtual reality with the foreseeable consequence that it will cause serious or lasting harm to a subject.” Because VR is highly immersive, there is much debate about its short- and long-term psychological effects.

The use of virtual avatars is just one example of the potential negative effects of VR immersion. Through a virtual avatar, a user is presented with a different representation of self. This avatar can not only look and move differently from the user but also behave very differently. The experience can create illusions of embodiment, in which a user becomes emotionally and physically disconnected from his or her real-world body. Even early research suggested that individuals can be convinced to develop a sense of ownership over their self-avatars in the virtual world, just as they would over their real bodies [3]. Long-term exposure to certain types of VR could cause dissociation from the real world, depression, aggression, or dependency on VR technologies. With even newer immersion features in VR (olfactory, tactile, etc.), there is serious concern about how this will affect an individual’s self-recognition.

From a utilitarian perspective, conducting VR research is complicated. Causing harm in the name of research does not produce the greatest good for all concerned parties, and individuals could suffer lasting psychological and behavioral effects. However, harming a few individuals in the course of VR research would yield findings that could protect many more individuals in the future. On balance, conducting VR research would produce the greatest good for the greatest number of individuals.

Madary and Metzinger suggest offsetting this ethical load by selecting candidates who have pre-existing psychological conditions. However, such research would be inaccurate when applied to the general population. Additionally, if individuals already have psychological issues, long-term VR exposure could exacerbate their conditions. Accurate applied research would require candidates meeting a mean threshold of psychological stability. Utilitarianism strives for choices that minimize suffering; causing additional suffering to an already harmed individual is less ethical than causing some harm to a healthy one.

However, the level of potential harm to an individual also depends on the research study. As Madary and Metzinger illustrate, it may be necessary to include individuals with psychological traumas in research on areas such as PTSD or extreme phobias. Act utilitarianism holds that situations and people differ; weighing the associated outcomes should therefore be handled case by case to achieve the most positive result. This principle provides a workable ethical framework for such scenarios.

Overall, weighing the good consequences of VR research against the bad, continued research under careful consideration proves to be the most ethical option.

Code of Conduct #2: Informed consent

Madary and Metzinger’s recommendation on informed consent is to disclose the potential known and unknown effects of VR to all research candidates. The problem here is research candidates’ understanding of the possible psychological and behavioral effects. If researchers themselves currently struggle to understand these effects, how can a layperson be expected to?

Are any research candidates capable of informed consent? Are they capable of ensuring their own autonomy in giving it? With the right amount of education prior to research participation, it is plausible. However, current research practice places a “greater emphasis on satisfying a variety of legal and institutional needs than on facilitating study participants’ understanding” [4].

Utilitarianism cannot be satisfied by the current informed consent process, because candidates do not adequately understand their participation. However, Lentz et al. [4] and other researchers have proposed new guidelines to improve the informed consent process. Their recommendations include defining an effective informed consent process, training research staff, improving informed consent documentation, and exploring electronic and ongoing interactive consent.

Rule utilitarianism supports the idea that an established set of rules can bring about the greatest amount of good. By providing ongoing support and resources to candidates, the most utilitarian outcome can be achieved: individual autonomy is preserved, and institutions are prevented from favoring their own agendas. The greatest good for both individuals and institutions can be reached, provided the informed consent process follows stringent guidelines.

This is even more important at the consumer level, where VR products are already available without any informed consent. One solution to support consumer understanding would be a federal requirement that states comply with informed consent initiatives for VR products. For example, even though abortion is legal and available, the Informed Consent Project works to provide patients with medically accurate educational materials and support before any procedure [5]. Although the Informed Consent Project does not operate in a research capacity, it illustrates how individuals can be informed of potential adverse effects. Because it is not a federal mandate, only about half of U.S. states participate.

Code of Conduct #3: Transparency and Media Ethics

This part of the Code is related to previous recommendations on informed consent. It suggests keeping an open dialogue with both participants and the media about ongoing research. False hope, particularly in medical applications of VR, is a concern because participants may believe that “treatment using VR is better than traditional interventions merely due to the fact that it is a new technology [1].” Madary and Metzinger also suggest that research involving therapeutic and clinical applications should involve having certified medical personnel present at all times.

Popular media is an ethical concern because it can influence public opinion on VR research. For instance, evidence about the health benefits of wine remains clinically questionable, but the media often portrays wine as solely beneficial, which leads consumers to purchase and consume more [6]. The media has the power to directly influence ideas about the efficacy of research and products simply through its opinions and the frequency of its coverage. From a utilitarian perspective, media outlets are not concerned with the greatest good, only with their own popularity and influence. Researchers and medical personnel should work closely with the popular media before clinical tests are misinterpreted.

In addition, it could prove beneficial for federal media regulatory bodies to place disclaimers on VR products and on media responses to new research. A similar approach is currently used in the pharmaceutical industry for consumer marketing materials: the FDA states that “the law requires that product claim ads give a ‘fair balance’ of information about drug risks as compared with information about drug benefits.” This would keep popular media from creating a distorted view of VR technology and using consumers as a means to an end.

Code of Conduct #4: Dual Use

Dual use refers to when “technology can be used for something other than its intended purpose [1].” In a positive example of dual use, a VR training tool that lets users explore a supermarket could also serve as a screening tool for cognitive impairments [7]. At the other extreme, a VR program that helps autistic children with facial recognition could also be used for psychological warfare or torture.

Madary and Metzinger suggest that military applications of VR be closely monitored by policy makers. They state that torture in the VR world is ethically the same as torture in the real world and should never be allowed or condoned. Their solution is to create international policy to ensure that dual use technology is controlled. This would be the most utilitarian approach. However, there have been and will always be countries and states that decline to participate in such policies, and, although it is unethical, there is nothing stopping them from abusing the technology for their own dual use purposes.

Code of Conduct #5: Internet Research

Because VR tools leverage the internet for features like multiplayer gaming and data collection, privacy is a concern. VR technology can generate new kinds of user data, such as eye movement, emotion, and body-based information. This data can be used not only to easily identify an individual without consent but also to target that individual through neuromarketing. Madary and Metzinger suggest continually reminding participants that their data is being collected for research purposes. However, reminders about informed consent are not the issue: if data collection points exist in VR research, the scientific community must take steps to prevent that data from being exploited for dual use. A complete Code of Ethical Conduct for VR must address this. The most utilitarian approach would be to empower academic institutions to preserve privacy by outlining detailed security requirements and recommendations; federal funding may be necessary to meet minimum security requirements.

Conclusion

Overall, Madary and Metzinger provide a thorough proposed Code of Ethical Conduct for Virtual Reality. Their suggestions are well documented and grounded in recent VR research, and they concentrate on the research participant from a distinctly utilitarian perspective. However, these are only suggestions: the Code does not specify which regulatory bodies would oversee and enforce its ethical framework.

Additionally, they say little about the ethical implications of virtual environment design itself. Philip Brey identifies two potentially harmful factors in VR related to morality: representations and actions [8]. Representations are “the way in which objects, state-of-affairs and events are depicted or simulated,” while actions are the behaviors conducted in the virtual environment. Because both can be depicted differently than in the real world, there is always the potential for unethical behavior in the virtual world. And because VR can be so psychologically influential, there is a risk that this unethical behavior will manifest in the real world. For example, as Brey illustrates, a combat game that requires users to kill opponents is immoral, whereas one that allows users merely to wound and incapacitate an opponent offers more ethical choices.

This first Code of Ethical Conduct for Virtual Reality marks an important milestone in technology, but it should be evaluated further to ensure it promotes the greatest good for the greatest number of individuals.

References

[1] Madary, M. and Metzinger, T. K. (2016). Real Virtuality: A Code of Ethical Conduct. Recommendations for Good Scientific Practice and the Consumers of VR-Technology. Frontiers in Robotics and AI, Vol. 3, p. 3. doi:10.3389/frobt.2016.00003

[2] Martin, M. W. and Schinzinger, R. (1996). Ethics in Engineering (3rd Edition). McGraw-Hill.

[3] Botvinick, M. and Cohen, J. (1998). Rubber hands “feel” touch that eyes see. Nature, Vol. 391, p. 756. doi:10.1038/35784

[4] Lentz, J., Kennett, M., Perlmutter, J., and Forrest, A. (2016). Paving the way to a more effective informed consent process: Recommendations from the Clinical Trials Transformation Initiative. Contemporary Clinical Trials, Vol. 49, p. 65–69.

[5] Daniels, C. (2016). Informed Consent Project. http://informedconsentproject.com/

[6] Saliba, A. and Moran, C. (2010). The influence of perceived healthiness on wine consumption patterns. Food Quality and Preference, Vol. 21, p. 692–696.

[7] Zygouris, S., Giakoumis, D., Votis, K., Doumpoulakis, S., Ntovas, K., Segkouli, S., Karagiannidis, C., Tzovaras, D., and Tsolaki, M. (2014). Can a Virtual Reality Cognitive Training Application Fulfill a Dual Role? Journal of Alzheimer’s Disease, Vol. 44, p. 1333–1347. doi:10.3233/JAD-141260

[8] Brey, P. (1999). The Ethics of Representation and Action in Virtual Reality. Ethics and Information Technology, Vol. 1, p. 5–14. doi:10.1023/A:1010069907461