
XR/AR/VR Overview 1/2

We will introduce XR technology across two articles, this month and next. We have also included many interesting links, so be sure to check them out!

Combining Technologies to Tackle Real-World Challenges: Expanding Applications of XR, AI, and Sensing

This is Yoshinaga from the Data Integration and Development Division.

Throughout my career, I have focused on research into 3D visualization technologies centered on XR (Extended Reality). Today, I work at Avaxia Asia primarily in XR development and consulting, while continuing my research activities as a Visiting Researcher at the Institute of Systems, Information Technologies and Nanotechnologies (ISIT) in Kyushu. In this article, I will present my work before and after joining Avaxia Asia and share my vision for the future of XR combined with AI. Please see below for the articles I previously wrote on XR/AR:

Let’s create WebAR Content 1/3
Let’s create WebAR Content 2/3
Let’s create WebAR Content 3/3

My Journey — Starting with AR Applications in Healthcare

From my doctoral studies to the present, my primary research theme has been the medical application of visualization and image-processing technologies, particularly XR. I have focused especially on developing AR-assisted ultrasound diagnostic systems.

Note: XR is an umbrella term encompassing AR (Augmented Reality), VR (Virtual Reality), and MR (Mixed Reality). Each has its own distinct characteristics: AR overlays digital information onto the real world, VR immerses users in a fully virtual environment, and MR blends the two so that virtual objects appear anchored in real space.

In ultrasound diagnostics, a technician manipulates a probe (pressed against the patient) to capture cross-sectional images of the body’s interior. Understanding exactly which part of the body a given cross-section corresponds to requires considerable experience. To address this, I developed an AR system that visualizes the spatial relationship between the imaged cross-section and the patient’s body surface and internal organs. I also built a method that uses image processing to recognize the shape of the target organ and display it in AR.
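To make the geometry concrete: placing an ultrasound cross-section in an AR scene comes down to chaining a tracked probe pose with a fixed probe-to-image calibration. The sketch below is a minimal Python/NumPy illustration of that transform chain, with made-up poses and dimensions, not the actual system's code.

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical values: probe pose from the tracker (world <- probe) and a fixed
# calibration (probe <- image plane), obtained once, e.g. via phantom calibration.
T_world_probe = pose_matrix(np.eye(3), np.array([0.10, 0.00, 0.30]))
T_probe_image = pose_matrix(np.eye(3), np.array([0.00, 0.02, 0.00]))

# Corners of the ultrasound image plane in its own frame (metres), homogeneous coords.
w, h = 0.05, 0.07  # imaging width/depth (illustrative numbers)
corners_image = np.array([
    [-w / 2, 0, 0, 1],
    [ w / 2, 0, 0, 1],
    [ w / 2, 0, h, 1],
    [-w / 2, 0, h, 1],
]).T

# Chain the transforms (world <- probe <- image) to place the corners in world space.
corners_world = (T_world_probe @ T_probe_image @ corners_image)[:3].T
print(corners_world)  # feed these to the renderer to draw the cross-section in AR
```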

Even before remote communication technologies became commonplace through platforms like the metaverse, I was already researching remote support for ultrasound image acquisition. As shown in the figures referenced in the original article, an experienced remote physician's instructions are displayed visually via AR, guiding the probe operator so that even less experienced clinicians can obtain accurate diagnostic images. The goal was to extend this capability to rural and home-care settings.

At the time, head-mounted displays (HMDs) were not yet widely available, so AR guidance was displayed on a PC screen. Current research continues to explore applying these methods to HMD-based AR, as well as “markerless” probe tracking that eliminates the need for visual fiducial markers (black square targets).
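For reference, marker-based probe tracking of the kind described here is often prototyped with square fiducials such as ArUco. The sketch below shows the general idea using OpenCV's aruco module (assuming OpenCV 4.7+ with contrib modules, a placeholder image file, and made-up camera intrinsics); markerless tracking research aims to remove exactly this dependency on a printed target.

```python
import cv2
import numpy as np

# Assumed intrinsics; replace with your camera's actual calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
marker_len = 0.04  # marker side length in metres (assumed)

# 3D marker corners in the order detectMarkers reports them:
# top-left, top-right, bottom-right, bottom-left.
obj_pts = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   dtype=np.float64) * marker_len / 2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("frame.png")  # placeholder: one camera frame showing the marker
corners, ids, _ = detector.detectMarkers(frame)

if ids is not None:
    # Solve for the marker pose in the camera frame from its four corners.
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0][0].astype(np.float64),
                                  K, dist, flags=cv2.SOLVEPNP_IPPE_SQUARE)
    if ok:
        print("marker pose:", rvec.ravel(), tvec.ravel())
```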

Broadening My Research — Discovering Sensing Technologies

After joining the research institute as a full researcher, I also became involved in developing motion capture technology using wearable sensors. Unlike conventional camera-based systems, this approach is robust to occlusion and enables measurement of athletes’ or rehabilitation patients’ movements anytime, anywhere. This was carried out as a collaborative research project with a local private-sector company.
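As background on why wearable sensing can replace cameras: a typical building block is fusing a gyroscope (smooth but drifting) with an accelerometer (noisy but drift-free) to estimate a body segment's orientation, with no line of sight required. The complementary filter below is a minimal, generic sketch of that fusion, not the project's actual algorithm.

```python
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse gyro integration (smooth, drifts over time) with the accelerometer's
    gravity direction (noisy, drift-free) to estimate a pitch angle in radians."""
    gyro_angle = angle + gyro_rate * dt          # integrate angular velocity
    accel_angle = math.atan2(accel_x, accel_z)   # tilt implied by gravity
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Toy usage with made-up samples at 100 Hz.
angle = 0.0
for gyro_rate, ax, az in [(0.10, 0.05, 0.99), (0.12, 0.06, 0.99), (0.09, 0.07, 0.98)]:
    angle = complementary_filter(angle, gyro_rate, ax, az, dt=0.01)
print(angle)
```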

The technology was highly evaluated by the industry partner and led to a commercialized product. This experience convinced me that combining multiple technologies such as visualization, image processing, and sensing can generate new value beyond what any single technology offers. From then on, I became strongly committed to solving problems through the “combination of technologies.”

In addition to my core work on ultrasound AR, I pursued research in diverse fields: rehabilitation systems leveraging sensing of body and muscle movement, smart agriculture combining 3D plant shape sensing with environmental data (temperature, humidity, CO₂ concentration), radiation medicine education, civil engineering, and VR psychology. Through interdisciplinary collaborations, I applied my AR system-development skills to other fields while broadening my own knowledge.

As a personal passion project, I continue to prototype daily using AR and various sensing technologies. I will highlight two works that reflect my vision for the future of XR.

1. Real-Time 3D Scanning for Remote Communication

During the wave of metaverse popularity around 2020, users typically communicated through avatars: stylized or idealized digital representations. What I developed instead transmits a person's actual real-time appearance, so that remote participants can converse as though the other person were physically present. Think of it as “Zoom in 3D.”
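The core step in this kind of real-time 3D scanning is back-projecting each depth pixel into 3D using the camera intrinsics, yielding a point cloud that can be compressed and streamed. The NumPy sketch below illustrates that step with a synthetic depth frame and assumed intrinsics; it is a simplified illustration, not the app's actual pipeline.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into an (N, 3) point cloud using the
    pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Synthetic frame with assumed intrinsics; a real app would grab frames from a
# depth sensor (e.g. LiDAR/ToF) many times a second and stream the packed points.
depth = np.full((480, 640), 1.5, dtype=np.float32)
points = depth_to_points(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(points.shape)  # (307200, 3); compress/decimate before streaming to peers
```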

The app has been demonstrated at XR events, featured in TV coverage, and distributed on the App Store as an independent release. While the visual fidelity still has room for improvement, I believe that advances in 3D scanning technology and AI will enable an experience approaching the holographic communications depicted in Star Wars. I continue this as a personal research project.

2. XR-ifying Living Spaces

This demo links a room’s door to XR effects in real time. While the visual “doorway to another world” effect naturally draws attention, achieving it requires real-time sensing of the door’s open/closed state. Beneath the visually striking presentation lies foundational technology that connects physical objects to digital information, demonstrating the integration of the real world with XR.
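To illustrate the kind of real-to-digital plumbing this requires (the demo's actual implementation is not documented here), one common pattern is to publish the door's state over MQTT so the XR client can subscribe and trigger the effect. A minimal sketch, assuming a local broker, a hypothetical topic name, and a placeholder sensor read:

```python
import time
import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER = "localhost"             # assumed broker address
TOPIC = "home/door/front/state"  # hypothetical topic

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also needs a CallbackAPIVersion
client.connect(BROKER)
client.loop_start()

def read_door_sensor():
    """Placeholder for the real sensor read, e.g. a reed switch on a GPIO pin."""
    return "open"

last = None
while True:
    state = read_door_sensor()
    if state != last:                      # publish only on change
        client.publish(TOPIC, state, qos=1, retain=True)
        last = state                       # the XR client subscribes to TOPIC
    time.sleep(0.05)                       # and plays the portal effect on "open"
```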

IoT is now well established in factories and power plants. I envision a future where sensing becomes equally embedded in everyday living spaces and AR glasses become a daily accessory: displaying lock status on doors and windows, or showing real-time power consumption on appliances, seamlessly blending sensor data with the physical world.

In the next article, coming in April, we will introduce his challenges and projects at Avaxia Asia. Stay tuned!
