Hamacher, Kim, Cho, Pardeshi, Lee, Eun, and Whangbo: Application of Virtual, Augmented, and Mixed Reality to Urology
This article has been corrected. See "Application of Virtual, Augmented, and Mixed Reality to Urology" in Volume 20 on page 375.

ABSTRACT

Recent developments in virtual, augmented, and mixed reality have introduced a considerable number of new devices into the consumer market. This momentum is also affecting the medical and health care sector. Although many of the theoretical and practical foundations of virtual reality (VR) were already researched and experienced in the 1980s, the vastly improved displays, sensors, interactivity, and computing power of currently available devices offer a new field of applications to the medical sector, and to urology in particular. This review examines the extent to which VR technology has already influenced certain aspects of medicine, the applications currently in use in urology, and the development trends that may be expected.


    INTRODUCTION

    A new wave of virtual reality (VR) and augmented reality (AR) has begun. Benefiting from considerable media hype, the entertainment industry is trying to reach the mass market with new applications and games set in virtual environments. These new technological developments may reinforce a VR trend that has been the subject of research in the medical field for many decades.
    This article reviews the technologies available today and their application to the medical field, in particular to urology. It presents an overview of significant developments in medicine, and in the discipline of urology specifically, that use either AR or VR. Considering both the achievements and the remaining challenges, it aims to identify trends toward possible future developments.

    Concepts of Virtual, Augmented, and Mixed Reality

    Definitions and terms

    Several attempts have been made to find an appropriate definition for the different types of reality. Merriam-Webster defines virtual reality as "an artificial environment that is experienced through sensory stimuli (as sights and sounds) provided by a computer and in which one's actions partially determine what happens in the environment" [1]. One commonly quoted definition is the model introduced by Milgram and Kishino, which describes a continuum with a gradual transition, as shown in Fig. 1. The figure depicts the continuum from the real world to the virtual world, leaving space in between for AR as well as augmented virtuality, and considering everything between these two extremes to be mixed reality (MR) [2].
    AR generally refers to a system in which the user has a direct view of their environment and a specially constructed device blends additional information or graphical elements with the real environment in the form of an overlay. Examples of this technology are the early Google Glass, first released in 2013, the Epson SmartGlasses, and the Microsoft HoloLens. In principle, the concept of an information overlay was already present in the head-up displays (HUDs) used by pilots of military aircraft since the 1950s. Several automotive manufacturers offer miniaturized HUDs for instruments and speed indication in some of their recent models.
    Although AR describes a vision enhanced with artificial additions, the opposite is also conceivable. Diminished reality refers to a processed environment from which insignificant or unwanted parts are omitted. Objects can be removed from an image by using image processing, for example, when they are occluding the view of a more important or significant object [3].
    A more recent definition of VR was proposed by LaValle [4]: "Inducing targeted behavior in an organism by using artificial sensory stimulation, while the organism has little or no awareness of the interference." This definition broadens the scope of application considerably, so that VR can be seen as a general simulation environment, applicable not only to humans but to any organism. Moreover, it insists on the fact that the experience is deliberately created by an author with the intention to fool the senses of the user or organism. This concept removes the difficult term "reality," which is subject to various interpretations, both philosophical and behavioral. It draws VR closer to the definition of simulation, namely something designed for a particular purpose. In this sense, it rejoins a concept that has long been present in medicine and is proven for teaching and training.
    Considering the development in scientific research over the last decades, we discover that there has been a steady rise in the number of publications since the late 1990s. Fig. 2 shows the result of a keyword search in Google Scholar for the terms augmented reality, virtual reality, and mixed reality. This figure can be interpreted as a trend with its peak centered around 2010 for VR and MR, and around 2014 for AR. The present success of new technologies might imply that the results of many previous research efforts have now reached maturity of application.

    Brief history of development

    The history of VR can be traced back to the early years of the 20th century, when the first mechanical simulators were used in 1909 for the training of aircraft pilots. Sensorama, one of the first multisense simulators, entered the market in 1960. It was advertised as providing the user with a complete experience comprising stereo sound, movement, vibration, wind, smell, and 3-dimensional (3D) imagery in a virtual motorcycle ride through New York. The first trackers and haptic devices, such as the data glove, date back to 1970. One of the first head-mounted displays (HMDs) was developed by Callahan in 1983 and was considered a breakthrough [5]. Commercial devices became available shortly after. HMDs of varying degrees of sophistication brought miniaturized standard-definition monitors right before the users' eyes, combined with a stereophonic audio experience.

    New technical developments today

    Compared to the first generation of HMDs, significant changes have been observed in recent years, particularly concerning the quality of visual images. Early commercial HMDs generally provided a resolution comparable to that of standard TV, i.e., 800×600 pixels for each eye, with an average field of view of 30 degrees. In contrast, new-generation HMDs are capable of displaying full high-definition resolution or more and allow a field of view of at least 110 degrees. This significant change not only makes experiences appear more immersive and realistic, but also increases the level of detail and precision users might need for professional applications such as those in the medical field.
    On the computational side, mainstream computers have steadily followed Moore's law and now have sufficient power to perform graphical computation of high-quality images in real time. The underlying machine learning algorithms that can lead toward intelligent interaction are now supported by dedicated chips and manufacturer components [6].
    Furthermore, batteries and communication technologies have advanced to a level where huge amounts of video data can be transferred with almost no latency over dedicated cable connections or even wirelessly. Implementation of the next generation of LTE (long-term evolution) transmission technology makes ubiquitous networks conceivable [7].

    Technology of Virtual and Augmented Reality

    Virtual reality

    Recent years have seen the introduction of a wave of VR devices into the mass consumer market, offering image quality and performance that could not be matched before. Low-cost entry devices in the consumer market are dominated by 2 players, although other manufacturers have announced their intention to enter the competition. One pioneer of the new wave of VR is the Facebook-owned Oculus, an HMD with tracking and spatial sensors. The other player is the VIVE, manufactured by HTC (New Taipei City, Taiwan). The device is similar to the Oculus but contains an additional video capture device for AR applications. The HMD itself provides tracking, and external laser-based trackers are available to determine the user's position in space.
    The common principle of VR is the complete immersion of the user in a computer-generated environment. Each device requires a powerful computer with a modern graphics processing unit to provide fluent stereoscopic images with low latency and a high refresh rate.
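To make the stereoscopic rendering requirement concrete, the following minimal sketch (Python with NumPy; not the API of any particular headset) shows how a renderer can derive the two per-eye view matrices from a tracked head pose by offsetting each eye by half the interpupillary distance. The IPD value is an illustrative assumption.

```python
import numpy as np

IPD = 0.063  # assumed average interpupillary distance in meters

def eye_view_matrices(head_pose):
    """Given a 4x4 world-from-head pose, return (left, right) view matrices."""
    views = []
    for sign in (-1.0, +1.0):  # -1 = left eye, +1 = right eye
        offset = np.eye(4)
        offset[0, 3] = sign * IPD / 2.0   # shift along the head's x-axis
        eye_pose = head_pose @ offset     # world-from-eye transform
        views.append(np.linalg.inv(eye_pose))  # view matrix = eye-from-world
    return views[0], views[1]

left_view, right_view = eye_view_matrices(np.eye(4))
```

Each eye's scene is then rendered with its own view matrix at the display's refresh rate, which is why low latency and a powerful graphics processing unit matter.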

    Augmented reality

    Among the technologies focusing on AR, 2 types of devices are presently predominant:
    Glasses: The first type of AR device is based on glasses. These allow the user to see the physical environment with their own eyes while a layer of additional computer-generated graphics is blended over it. The overlaid images can be either monoscopic or stereoscopic; stereoscopic overlays have the advantage that they can be positioned in 3D space in front of the viewer. Google Glass and the Epson SmartGlasses are representative of this technology: lightweight glasses wired to a portable computer that the user can carry in a pocket. The recently released Microsoft HoloLens integrates the computer, additional cameras, and sensors into the headset itself. The device is powered by high-performance batteries and presently allows untethered operation for approximately 3 hours. Additionally, as a software manufacturer, Microsoft delivers a software development kit (SDK) giving access to lower-level algorithms and libraries that perform elementary AR functions, such as recognition of the surrounding space and collision detection, which facilitate placing virtual objects above or in front of, rather than inside, real-world objects.
    Tablets: Tablets present an alternative platform for deploying AR applications. These applications rely on the tablet's built-in sensors, such as the camera and the motion and gyroscopic sensors, to determine the user's position. The camera captures the physical world, and the display shows the augmented view with enhancements or additions provided by the software, which calculates the appropriate homography to match the computed image parts and overlay them on the captured images of the physical world (a sketch of this step is given below). Although basic AR can be performed with any tablet device, Intel now offers specific AR components for tablets with its RealSense™ SDK and R200 camera. Tablets equipped with this technology contain additional sensors and depth-capture devices to obtain a more accurate scan of the physical world and offer AR capability at the system level [8].
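As an illustration of the homography step described above, the following sketch (Python with OpenCV; the file names and parameter values are placeholders, not part of any cited system) matches features between a known planar marker and a camera frame, estimates the homography with RANSAC, and warps a virtual overlay into the captured image.

```python
import cv2
import numpy as np

marker = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)   # known planar reference
frame = cv2.imread("camera_frame.png")                     # captured camera image
overlay = cv2.imread("overlay.png")                        # virtual content, assumed
                                                           # marker-sized

# Detect and match ORB features between the marker and the camera frame.
orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(marker, None)
kp2, des2 = orb.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]

# Estimate the marker-to-frame homography robustly with RANSAC.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the virtual overlay onto the marker's location and blend it in.
warped = cv2.warpPerspective(overlay, H, (frame.shape[1], frame.shape[0]))
augmented = cv2.addWeighted(frame, 1.0, warped, 0.8, 0)
```

In a live application, this loop runs per frame, with the sensor data helping to stabilize tracking between frames.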

    Sensors

    As opposed to low-cost smartphone-based Google Cardboard-type VR devices, which use the internal sensors of smartphones to determine users’ movements, the dedicated HMDs possess their own accelerometer and gyroscope sensors.
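A common way such accelerometer and gyroscope readings are fused into a head orientation is a complementary filter. The sketch below (plain Python; a deliberate simplification of what commercial HMDs do, which typically involves Kalman-style filtering) illustrates the idea for the pitch axis only: the gyroscope is fast but drifts, while the accelerometer's gravity direction is slow but absolute.

```python
import math

ALPHA = 0.98  # assumed weighting between gyroscope and accelerometer terms

def update_pitch(pitch, gyro_rate, accel_y, accel_z, dt):
    """pitch in radians, gyro_rate in rad/s, accelerometer axes in m/s^2."""
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular rate (drifts)
    accel_pitch = math.atan2(accel_y, accel_z)   # gravity gives an absolute reference
    return ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch

# Usage with a synthetic sensor stream sampled at 1 kHz.
pitch = 0.0
for gyro_rate, accel_y, accel_z in [(0.01, 0.0, 9.81)] * 100:
    pitch = update_pitch(pitch, gyro_rate, accel_y, accel_z, dt=0.001)
```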

    Trackers

    The purpose of trackers is to identify the user's position in the environment to allow interaction. Oculus uses an infrared tracking system positioned to one side of the user, whereas the HTC VIVE uses a laser-based system in front of and behind the user and is thus able to track the user's movements with slightly greater precision. Trackers can also be used for interaction: held in the user's hands like markers, they allow the user's gestures and movements to be detected with greater precision.

    Interaction

    Interaction is a crucial factor for VR devices. Most devices use a game-console-style controller for interaction. The Microsoft HoloLens offers some interaction possibilities that are already part of the system and do not require programming at the application level. One system-wide gesture, opening the hand in front of the viewer, is shown in Fig. 3; known as "blooming," it allows the user to call the main menu or interrupt an application. Fig. 4 demonstrates the "air tap" gesture, which signifies a mouse click in the air.
    Beyond this translated desktop behavior, additional interaction using voice recognition is also possible. However, since the speech recognition relies on Microsoft's cloud services, privacy concerns must be addressed when sensitive data are transferred to remote data centers beyond the user's control.

    Overview of AR, VR, and MR in Medical Science

    Neuroscience/psychotherapy

    The NeuroVR open source platform is designed to meet the specific demands of a clinical or experimental setting. A screenshot of the platform editor is shown in Fig. 5. The NeuroVR platform can be used to create different environments, such as a living room, supermarket, or park, aimed at the behavioral rehabilitation of patients suffering from conditions such as fear of flying, agoraphobia, acrophobia, and eating disorders [9]. Among its features are open protocols for gathering biofeedback and real-time modification of objects in the virtual environment based on the patient's interaction. North et al. [10] described experiments and case studies conducted in 1998 using VR technology. One of their main conclusions is that experiences made in the virtual world may modify the patient's behavior in the real world.
    One of the limitations of working with virtual environments is the graphic complexity of the environment. Another drawback relates to safety. For fear of exposing patients to an excessive amount of flicker, which might increase the risk of a panic attack, sessions were limited to 20 minutes, and the patient was required to sit in order to avoid simulator sickness. This problem remains unresolved to this day [10,11].
    Whereas virtual environments require meticulously realistic reconstruction, another approach in behavioral therapy has used AR for phobias of small animals. In 2005, Juan et al. [12] reported research on the phobia of spiders and cockroaches. The use of AR for exposure therapy generally increased patients' acceptance because, instead of being exposed to the target animal in reality, they would only be facing a simulation. The AR system allowed exposure to specific situations, e.g., a spider crawling over a hand, as simulated in Fig. 6, or placing a dead cockroach in a box; technically, this is achieved with optical markers. The patient can see and use their hands for the interactions. AR therapy appears to be more efficient than comparable VR experiments, because the time needed to reduce the patient's fear seems to be shorter [13]. The authors conclude that AR might be a useful therapeutic tool for several other psychological disorders.

    Liver

    In 2013, the project MEVIS was initiated by the Fraunhofer Research Institute. The project involved the use of an iPad-based AR application to support liver operations. As doctors need to know as accurately as possible before and during an operation where blood vessels are located inside the organ, this AR application supports the surgeon by comparing the actual operation with the planning data based on 3D X-ray images. Fig. 7 shows an overlay of the planning data on the actual camera image, as if looking inside the organ.

    Orthopedics

    Various research studies have been carried out in the field of stroke rehabilitation to reduce the cost of rehabilitation training for the patient as well as for the rehabilitation facility. In a setup very similar to a game, the patient performs tasks directed by an application, and the results are measured with a game controller and virtual glove, as shown in Fig. 8 [14]. A randomized controlled trial with 46 patients concluded that VR-based rehabilitation combined with standard occupational therapy might be more effective than amount-matched conventional rehabilitation [14].

    Applications of VR, AR, and MR in Urology

    The question of applying VR to the field of urology was discussed by Shah et al. [15] as early as 2001. One of the main objectives of using simulators is to shorten the time needed for training and to give doctors and surgeons the opportunity to gain experience and improve performance outside the operating room. According to Shah et al. [15], a functioning medical simulator requires the following 4 elements: (1) visual reality, meaning that the visual simulation has to be sufficiently realistic and have the appearance of a true medical situation; (2) physical reality, meaning that, e.g., tissues must deform realistically when grasped and the simulator devices must respond correctly to the forces applied by the trainee; (3) physiological reality, meaning that muscles should contract and bleeding should occur as in real situations; and (4) tactile reality, in that resistance and forces need to be experienced realistically by the trainee to achieve a good simulation. However, recent publications reporting 3D image-guided surgery indicate that AR with visual cues to the subsurface anatomy could be a replacement in the case of minimally invasive surgery in the field of urology [16].

    Education

    Teaching anatomy

    Anatomy teaching in the field of urology can be a central application of VR. Datasets from computed tomography, magnetic resonance imaging, and tissue images made available by the National Library of Medicine in the USA as part of the Visible Human Project were used to create a virtual male and female dataset [17]. In 2006, Korean researchers responded to this model and created the Visible Korean Human for anatomy education. As part of this project, presegmented images were further segmented using the SURFdriver software to delineate the ureters, urinary bladder (inner boundary), urethra, testes, epididymides, ductus deferentes, seminal vesicles, prostate, rectum, anal canal, superior mesenteric artery, inferior vena cava, renal veins, and intervertebral disks. These data were used to model each anatomical structure as a separate 3D object. The result is an interactive 3D model, as shown in Fig. 9, intended to help medical students study anatomy. The model is also expected to assist urologists in explaining diseases to patients. The 3D images of this model have been made freely available by the researchers.
    Even though VR/AR applications are often thought of as enhancing an existing body or adding information and presence to an operating situation, a more general educational application lies in recent developments combining AR with medical textbooks. In this regard, the "Gunner Goggles" project aims to enhance textbooks with additional content such as movies or 3D objects, using special markers in the textbook and an associated application on a smartphone or tablet to visualize this content [18].

    Medical imaging visualizations: 3D modeling of medical imaging

    In 2015, Case Western Reserve University and the Cleveland Clinic announced their cooperation with Microsoft on using the AR headset HoloLens for teaching anatomy students [19]. The models were generated from patient MRI data, with color-coded areas assisting in the identification of tumorous zones in the brain. Gestures allow interaction with the 3D model to reveal organs that may be obscured by others.
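The article does not specify the reconstruction pipeline, but a common way to produce such models is to extract an isosurface mesh from a labeled MRI volume with the marching cubes algorithm. The sketch below (Python with nibabel and scikit-image; the file name and label value are assumptions) illustrates that step.

```python
import numpy as np
import nibabel as nib              # common library for reading MRI volumes
from skimage import measure

# Load a labeled (segmented) MRI volume; file name is a placeholder.
volume = nib.load("patient_mri_labels.nii.gz").get_fdata()
tumor_mask = (volume == 3).astype(np.uint8)   # assumed label id for the tumor

# Marching cubes extracts a triangle mesh at the 0.5 isolevel of the binary mask.
verts, faces, normals, values = measure.marching_cubes(tumor_mask, level=0.5)
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```

The resulting mesh can then be color-coded and loaded into the headset's rendering engine for interactive viewing.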

    Training

    Digital rectal examination (DRE) simulator. Prostate malignancies are the second most frequent cause of cancer death among men; thus, early diagnosis is of the highest importance. Before trainees practice on patients, simulation is necessary to reduce the discomfort patients might experience when a DRE is performed by an inexperienced trainee. Although many prostate training kits exist to familiarize trainees with performing a diagnosis by DRE, most simulation kits lack haptic feedback. Burdea et al. [20] undertook a human factor study in which medical students were trained on 12 virtual patients using VR and the PHANToM haptic interface. Although the results showed the feasibility of such a VR-based DRE simulator, further development is still required.
    A simulator for transrectal ultrasound-guided prostate biopsy was developed by the University of Grenoble in 2013 [21]. Recent systems have become available that allow the transrectal ultrasound to be mapped and targeted during a prostate biopsy. The simulator works with a haptic interface and allows trainees to practice with a large number of prostate image datasets as well as various clinical situations and specific tasks encountered in clinical practice.
    Transurethral resection of the prostate (TURP). Research published in 2001 stated the need for an adequate TURP simulator: the danger exists that fewer urologists are practicing these procedures and that their skills may consequently weaken (p. 366) [22]. TURP may be an ideal procedure to simulate, since it is performed in a fluid environment and involves a variety of bleeding situations, making it an ideal tool for practicing hemostasis. Presently, 5 TURP simulators have been validated, among them the GreenLight laser simulator, the Kansai HoLEP simulator, and UroSim HoLEP. A study of simulation-based training for prostate surgery concluded that all simulator models can be used to provide additional training sessions for trainee surgeons alongside traditional training methods [23].

    Therapy

    Beyond training applications of VR and AR, attempts have been made to treat erectile dysfunction using VR. Specially designed VR software allows patients to learn to identify the obstacles that cause their sexual dysfunction. After therapy sessions combined with VR sessions, the researchers reported an at least partially positive response rate of 73% [24], with a total response defined as the return of an erection adequate for the completion of sexual activity.

    Planning

    In the field of preoperative planning, several new technologies can be identified. Over the last 10 years, the use of 3D-printed models has grown in importance. Surgeons can use 3D-printed models of renal cell carcinomas, ureteral stents, and staghorn calculi for planning complicated procedures; such models also help improve patients' understanding of surgery [25]. Reports show that 3D-printed models seem to be useful, especially in complex tumor operations [26].
    The use of VR in preoperative planning has been researched over the last few decades. It seems that planning has become a central field of application for this technology. Reports have appeared in different areas of endoscopic surgery [27]. The researchers suggest that it offers additional tools for surgeons and helps to optimize surgical procedures and maximize functional preservation.
    As simulation methods become increasingly realistic, modern 3D models offer the possibility to show real deformations, e.g., those due to insufflation and respiration, as is the case in soft tissue laparoscopic surgery. Research indicated that improved quality of registration and model alignments was achievable and could be used to expand applications in intraoperative planning and image guidance [28].
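A building block behind the registration and model alignment mentioned above is rigid point-set registration; deformable registration, as used for soft tissue, extends this rigid core. The sketch below (Python with NumPy) implements the classic Kabsch algorithm for aligning a preoperative model to observed intraoperative landmarks.

```python
import numpy as np

def kabsch(model_pts, observed_pts):
    """Rotation R and translation t minimizing ||R @ model + t - observed||."""
    mu_m, mu_o = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)   # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t

# Usage with synthetic data: recover a known rigid transform.
rng = np.random.default_rng(0)
model = rng.normal(size=(20, 3))                       # preoperative model points
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
true_R *= np.sign(np.linalg.det(true_R))               # ensure a proper rotation
observed = model @ true_R.T + np.array([1.0, 2.0, 3.0])
R, t = kabsch(model, observed)
```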
    Eventually a symbiosis between preoperative planning and assistance during surgery would be observed. Patients’ individual 3D models used in conjunction with medical image reconstruction and guiding systems assist with forecasting possible complications during laparoscopic partial nephrectomy. Planning and real-time assistance during surgery appeared to reduce the time in the operation room [29].

    Assisting

    AR in nephrectomy and partial nephrectomy has gained importance in recent years because of the wider use of robot-assisted laparoscopy. As haptic feedback is lost with this technology, AR can play a significant role by compensating for this loss with enhanced visual information [30]. In this context, a study of over 60 cases by Hughes-Hallett et al. [31], in which AR is termed the "image-enhanced operating environment," shows that surgery is undergoing an important change, with minimally invasive surgery (MIS) becoming a new standard of care [32]. The proposed 3D image-guided surgery relies on 2 phases: planning and execution. Although the first phase requires a large amount of anatomical data, the second requires only a subset of this information, but at much higher accuracy. The 3D models needed during the planning phase are presented to the surgeon both on a tablet and on a da Vinci console.
    The tracking of instruments has also been improved. During the execution phase, optically registered intraoperative ultrasound is used for high-precision guidance and to create freehand 3D reconstructions that can be blended over the operative view via a tablet. A study by the same author concludes that the tablet can be a “low-barrier-to-entry” device with sufficient accuracy and with little impact on the surgical workflow [33].

    Possible Future Developments and Applications

    Telementoring forms part of telemedicine and consists of remote assistance by a specialist or surgeon, either within a broad training program or as direct mentoring. Low-latency AR systems can show the mentor what the surgeon on location sees at that moment, while the mentor's advice can be displayed directly within the view of the surgeon on location. In 2002, Rassweiler et al. [34] published a study of techniques relating to urological laparoscopy. Many of the handling and hardware problems they described could be resolved by new technologies such as the higher resolution provided by 4K cameras [35] and AR displays with stereoscopic capabilities, thereby eliminating the need for shutter glasses and heavy video helmets.

    Telesurgery

    In principle, telesurgery entails the remote control of a surgical robot. Feasibility studies have been carried out for both MIS and open surgery [36]. Possible applications include battlefields, army camps, and remote rural areas [37]. The first case of telesurgery was demonstrated in 2001 by Marescaux, who operated from New York on a patient located in a hospital in France [38,39]. One of the drawbacks is the requirement for dedicated communication networks to control the latency of the loop between a surgeon's action and its visible result. This latency has been the subject of research and should remain below 105 msec to avoid deterioration of performance and user experience [40]. This can presently only be achieved using private networks. Research on using the public Internet while securing high reliability and low latency was conducted by Obenshain and Tantillo [41] using an LTN overlay network with a da Vinci robot as the telesurgical device. Progress in image compression technologies and network capabilities can be expected to open this field to wider applications [42].
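As an illustration of the latency constraint, the following sketch (plain Python sockets; the host, port, and 4-byte echo protocol are placeholders, not a real telesurgery stack) measures the round-trip time of a command-and-feedback loop so it can be compared against the roughly 105-msec budget reported above.

```python
import socket
import time

LATENCY_BUDGET_MS = 105  # threshold reported in the study cited above

def measure_rtt(host, port, trials=20):
    """Return the mean round-trip time in milliseconds to an echo server."""
    samples = []
    with socket.create_connection((host, port), timeout=2.0) as sock:
        for _ in range(trials):
            start = time.perf_counter()
            sock.sendall(b"ping")
            sock.recv(4)  # wait for the echoed bytes (sketch: assumes full echo)
            samples.append((time.perf_counter() - start) * 1000.0)
    return sum(samples) / len(samples)

# e.g., rtt = measure_rtt("remote-site.example", 7)  # assumes an echo service
# acceptable = rtt < LATENCY_BUDGET_MS
```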

    Augmented biofeedback in pelvic floor muscle re-education

    Pelvic floor muscle (PFM) re-education is used to treat patients with urinary or fecal incontinence. Physical therapists rehabilitate the PFMs of women experiencing increased PFM tension or vaginal pain during intercourse, and of men suffering from chronic genital or groin pain, frequent urination, or a burning sensation when urinating [43].
    In these cases, biofeedback is used during rehabilitation sessions to improve PFM performance. Internal and external sensors exist, such as the electromyograph (EMG), surface EMG (SEMG), perineometer, and vaginal weights/cones. These sensors help patients gain greater awareness of physiological functions and support learning to control them consciously. AR could be used as a supporting device to communicate and visualize the biofeedback to the patient during therapy. A study on women's self-efficacy in performing pelvic muscle exercises indicates that biofeedback confirming that the exercises are performed correctly increases confidence that they will prevent unwanted urine loss [44].
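The signal-processing core of such SEMG biofeedback is typically an amplitude envelope of the muscle signal. The sketch below (Python with NumPy, using synthetic data; the sampling rate and window length are assumptions) computes a moving root-mean-square (RMS) envelope that a conventional display, or an AR overlay as suggested above, could map to a visual cue.

```python
import numpy as np

def rms_envelope(emg, window):
    """Moving RMS over `window` samples; a standard SEMG amplitude estimate."""
    squared = np.square(emg)
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

fs = 1000                                     # assumed sampling rate, Hz
t = np.arange(0, 5, 1 / fs)
burst = (t % 2 < 1).astype(float)             # simulated 1-second contractions
emg = burst * np.random.default_rng(1).normal(0, 0.5, t.size)
envelope = rms_envelope(emg, window=200)      # 200-ms window at 1 kHz
```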

    CONCLUSIONS

    Current technical progress is the driving force behind a new wave of VR and MR applications. Although conceived and developed over a decade ago, many applications have progressively found their way into clinical practice. As MIS has become the standard of care for many urological surgical interventions, robots and operating aids have become more common in the operating room. In recent years, these tools may have introduced a change in paradigm that also affects the work of surgeons. The concept of haptic realism mentioned by Shah et al. [15] in 2001 might give way to VR or AR, where augmented visual information introduces new tools and methods to the operating room. Surgeons are often compared to pilots in terms of the skills and level of responsibility their jobs require; the new paradigm could be the surgical equivalent of flying by instruments.
    This tendency, together with the increasing use of robots, places the surgeon in a de facto VR environment. Gradual technical modifications enable further situations and applications: using the remote controls with training software creates a virtual training environment, and increasing the distance between the remote control device and the robot creates telesurgery. VR could fuse training, surgery, and telemedicine, creating new opportunities for delivering high-quality medical competency in additional places. The same applies to telementoring: despite its virtuality, it offers the chance to build a human network across borders and countries for sharing best practices or seeking advice when needed.
    Although this development may not be applicable everywhere, surgeons could also take advantage of AR on low-entry-cost tablets to support many of their tasks, reducing risks and optimizing tasks in presurgical planning and saving operating time during execution.
    The long delay before these new technologies find their way into training institutions or the operating room indicates that they must meet medical requirements and be validated. Critical discussions regarding precision, operability, and latency are crucial, and such issues are not readily apparent to nonmedical professionals. VR can only perform as well as the real-life and work experience of the surgeons on which it is built.
    AR and VR have the potential to reduce risk through improved planning and relying on their assistance would reduce the time spent in the operating room. Although most studies and research conclude that all technologies need further improvements on many levels, they bear the potential to further increase efficiency in health care and to provide enhanced medical services to patients in the future.

    NOTES

    Grant Support
    This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (NRF-2014R1A2A1A11052321).
    Conflict of Interest
    No potential conflict of interest relevant to this article was reported.

    REFERENCES

    1. Virtual reality. In: Merriam-Webster dictionary [Internet]. Springfield (MA): Merriam-Webster; c2016 [cited 2016 Sep 10]. Available from: http://www.merriam-webster.com/dictionary/virtual%20reality.
    2. Milgram P, Colquhoun H Jr. A taxonomy of real and virtual world display integration. In: Mixed reality: merging real and virtual worlds. Proceedings of the International Symposium on Mixed Reality (ISMR 1999) [Internet]; [cited 2016 Sep 10]. Available from: https://www.researchgate.net/profile/Paul_Milgram/publication/263085124_ISMRpaper/links/0deec539c18fc18f4b000000.pdf.
    3. Herling J, Broll W. PixMix: a real-time approach to high-quality diminished reality. In: 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2012); 2012 Nov 5-8; Atlanta (GA). IEEE; 2012. p. 141-50.
    4. LaValle SM. Virtual reality. Champaign (IL): University of Illinois; 2016.
    5. Earnshaw RA. Virtual reality systems. London: Academic Press; 1994.
    6. Intel tunes its mega-chip for machine learning [Internet]. San Francisco (CA): PCWorld; c2016 [cited 2016 Sep 10]. Available from: http://www.pcworld.com/article/3091028/intel-tunes-its-megachip-for-machine-learning.html.
    7. 5 amazing things you'll be able to do with 5G [Internet]. San Francisco (CA): CNET; c2016 [cited 2016 Sep 10]. Available from: https://www.cnet.com/news/5-amazing-things-youll-be-able-to-dowith-5g/.
    8. Overview of Intel® RealSense™ SDK [Internet]. Santa Clara (CA): Intel Software; c2016 [cited 2016 Sep 10]. Available from: https://software.intel.com/en-us/intel-realsense-sdk.
    9. Riva G, Gaggioli A, Villani D, Preziosa A, Morganti F, Corsi R, et al. NeuroVR: an open source virtual reality platform for clinical psychology and behavioral neurosciences. Stud Health Technol Inform 2007;125:394-9. PMID: 17377310.
    10. North MM, North SM, Coble JR. Virtual reality therapy: an effective treatment for phobias. Stud Health Technol Inform 1998;58:112-9. PMID: 10350911.
    11. Huang MP, Alessi NE. Current limitations into the application of virtual reality to mental health research. Stud Health Technol Inform 1998;58:63-6. PMID: 10350929.
    12. Juan MC, Alcaniz M, Monserrat C, Botella C, Baños RM, Guerrero B. Using augmented reality to treat phobias. IEEE Comput Graph Appl 2005;25:31-7.
    13. MacIntyre B, Livingston MA. Moving mixed reality into the real world. IEEE Comput Graph Appl 2005;25:22-3.
    14. Shin JH, Kim MY, Lee JY, Jeon YJ, Kim S, Lee S, et al. Effects of virtual reality-based rehabilitation on distal upper extremity function and health-related quality of life: a single-blinded, randomized controlled trial. J Neuroeng Rehabil 2016;13:17. PMID: 26911438.
    15. Shah J, Mackay S, Vale J, Darzi A. Simulation in urology: a role for virtual reality? BJU Int 2001;88:661-5. PMID: 11890232.
    16. Ukimura O, Nakamoto M, Sato Y, Hashizume M, Miki T, Desai M. Augmented reality for image-guided surgery in urology. In: Dasgupta P, Fitzpatrick J, Kirby R, Gill IS, editors. New technologies in urology: new techniques in surgery series. London: Springer-Verlag; 2010. p. 215-22.
    17. Westwood JD. Medicine meets virtual reality: art, science, technology: healthcare (r)evolution. Amsterdam: IOS Press; 1998.
    18. Wang LL, Wu HH, Bilici N, Tenney-Soeiro R. Gunner Goggles: implementing augmented reality into medical education. Stud Health Technol Inform 2016;220:446-9. PMID: 27046620.
    19. Case Western Reserve University. HoloLens [Internet]. Cleveland (OH): Case Western Reserve University; c2015 [cited 2016 Sep 10]. Available from: http://case.edu/hololens/.
    20. Burdea G, Patounakis G, Popescu V, Weiss RE. Virtual reality-based training for the diagnosis of prostate cancer. IEEE Trans Biomed Eng 1999;46:1253-60. PMID: 10513131.
    21. Selmi SY, Fiard G, Promayon E, Vadcard L, Troccaz J. A virtual reality simulator combining a learning environment and clinical case database for image-guided prostate biopsy. In: Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems; 2013 Jun 20-22; Porto, Portugal. IEEE; 2013. p. 179-84.
    22. Westwood JD. Medicine meets virtual reality 2001: outer space, inner space, virtual space. Amsterdam: IOS Press; 2001.
    23. Khan R, Aydin A, Khan MS, Dasgupta P, Ahmed K. Simulation-based training for prostate surgery. BJU Int 2015;116:665-74. PMID: 24588806.
    24. Optale G, Munari A, Nasta A, Pianon C, Baldaro Verde J, Viggiano G, et al. Multimedia and virtual reality techniques in the treatment of male erectile disorders. Int J Impot Res 1997;9:197-203. PMID: 9442417.
    25. Powers MK, Lee BR, Silberstein J. Three-dimensional printing of surgical anatomy. Curr Opin Urol 2016;26:283-8. PMID: 26825651.
    26. Komai Y, Sugimoto M, Gotohda N, Matsubara N, Kobayashi T, Sakai Y, et al. Patient-specific 3-dimensional printed kidney designed for "4D" surgical navigation: a novel aid to facilitate minimally invasive off-clamp partial nephrectomy in complex tumor cases. Urology 2016;91:226-33. PMID: 26919965.
    27. Ukimura O, Gill IS. Imaging-assisted endoscopic surgery: Cleveland Clinic experience. J Endourol 2008;22:803-10. PMID: 18366316.
    28. Mountney P, Fallert J, Nicolau S, Soler L, Mewes PW. An augmented reality framework for soft tissue surgery. Med Image Comput Comput Assist Interv 2014;17(Pt 1):423-31. PMID: 25333146.
    29. Wang D, Zhang B, Yuan X, Zhang X, Liu C. Preoperative planning and real-time assisted navigation by three-dimensional individual digital model in partial nephrectomy with three-dimensional laparoscopic system. Int J Comput Assist Radiol Surg 2015;10:1461-8. PMID: 25577366.
    30. Hughes-Hallett A, Mayer EK, Marcus HJ, Cundy TP, Pratt PJ, Darzi AW, et al. Augmented reality partial nephrectomy: examining the current status and future perspectives. Urology 2014;83:266-73. PMID: 24149104.
    31. Hughes-Hallett A, Pratt P, Dilley J, Vale J, Darzi A, Mayer E. Augmented reality: 3D image-guided surgery. Cancer Imaging 2015;15(Suppl 1):O8.
    32. Nicolau S, Soler L, Mutter D, Marescaux J. Augmented reality in laparoscopic surgical oncology. Surg Oncol 2011;20:189-201. PMID: 21802281.
    33. Hughes-Hallett A, Pratt P, Mayer E, Martin S, Darzi A, Vale J. Image guidance for all: TilePro display of 3-dimensionally reconstructed images in robotic partial nephrectomy. Urology 2014;84:237-42. PMID: 24857271.
    34. Rassweiler J, Frede T. Robotics, telesurgery and telementoring: their position in modern urological laparoscopy. Arch Esp Urol 2002;55:610-28. PMID: 12224160.
    35. Nakasu E. Super Hi-Vision on the horizon: a future TV system that conveys an enhanced sense of reality and presence. IEEE Consum Electron Mag 2012;1:36-42.
    36. Kim KY, Song HS, Suh JW, Lee JJ. A novel surgical manipulator with workspace-conversion ability for telesurgery. IEEE/ASME Trans Mechatron 2013;18:200-11.
    37. Anvari M, McKinley C, Stein H. Establishment of the world's first telerobotic remote surgical service: for provision of advanced laparoscopic surgery in a rural community. Ann Surg 2005;241:460-4. PMID: 15729068.
    38. Surgeons in U.S. perform operation in France via robot [Internet]. Washington, DC: National Geographic Society; c1996-2015 [cited 2016 Sep 20]. Available from: http://news.nationalgeographic.com/news/2001/09/0919_robotsurgery.html.
    39. Marescaux J, Leroy J, Rubino F, Smith M, Vix M, Simone M, et al. Transcontinental robot-assisted remote telesurgery: feasibility and potential applications. Ann Surg 2002;235:487-92. PMID: 11923603.
    40. Kumcu A, Vermeulen L, Elprama SA, Duysburgh P, Platiša L, Van Nieuwenhove Y, et al. Effect of video lag on laparoscopic surgery: correlation between performance and usability at low latencies. Int J Med Robot 2016 Jun 3 [Epub]. http://dx.doi.org/10.1002/rcs.1758.
    41. Obenshain D, Tantillo T. Remote telesurgery: enabling remote telesurgery via overlay networks [Internet]. Baltimore (MD): Johns Hopkins University, Computer Science Department, Distributed Systems and Networks Lab; [cited 2016 Sep 10]. Available from: http://www.cnds.jhu.edu/courses/cs667-2011/RemoteTeleSurgery/.
    42. Cheng L, Zarki ME. Systems and methods for video compression for low bit rate and low latency video communications. United States patent US20060209964. 2016 Jan 12.
    43. Magovich M. Biofeedback for pelvic floor muscle re-education [PowerPoint slides on the Internet]. Cleveland (OH): Cleveland Clinic; [cited 2016 Sep 10]. Available from: http://my.clevelandclinic.org/ccf/media/files/Digestive_Disease/woc-spring-symposium-2013/biofeedback-for-pelvic-floor-muscle-reeducation.pdf.
    44. Tremback-Ball A, Levine AM, Dawson G, Perlis SM. Young women's self-efficacy in performing pelvic muscle exercises. J Womens Health Phys Therap 2012;36:158-63.

    Fig. 1. Reality continuum.
    Fig. 2. Keyword occurrence.
    Fig. 3. Bloom gesture.
    Fig. 4. Air tap gesture.
    Fig. 5. NeuroVR editor.
    Fig. 6. Concept of using augmented reality in therapy for phobias of small animals.
    Fig. 7. iPad used during an operation. Adapted with permission from Fraunhofer MEVIS.
    Fig. 8. Rehabilitation training with virtual reality.
    Fig. 9. Virtual Korean Human, pelvis after removal of the left iliac bone.