My current research pursuits are outlined below:
XR Systems and Human-Computer Interaction
My research broadly explores the intersection of XR technologies and advanced HCI methods, emphasizing the practical application and theoretical analysis of XR systems in complex environments and tasks. Central to my work is the application of XR to workforce training, particularly in high-risk or technically demanding fields such as construction, forestry machinery operation, and confined-space operations. My approach blends immersive virtual environments with interactive and intelligent training methodologies, aiming to enhance learning outcomes, improve user experience, and facilitate the acquisition and transfer of skills to real-world contexts.
One prominent research direction in my portfolio involves developing and validating sophisticated VR training simulators tailored for complex industrial machinery operation, notably for forestry harvesters. In this context, I have designed interactive VR simulations that leverage untethered VR systems like Meta Quest, providing immersive, safe, and cost-effective environments for users to practice essential machine operations such as navigation, manipulation of mechanical components, and procedural tasks. These simulators are meticulously tested for functional reliability and usability, reflecting an iterative design process closely guided by expert feedback and robust technical evaluations.
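To make the procedural-task component of such simulators concrete, the sketch below shows one simple way a trainer might validate step order during a machine-operation exercise. It is a minimal illustration, not code from my simulators: the class name, the step names, and the felling sequence are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ProcedureTrainer:
    """Tracks a trainee's progress through an ordered machine-operation procedure."""
    steps: list[str]
    completed: list[str] = field(default_factory=list)
    errors: int = 0

    def attempt(self, action: str) -> bool:
        """Accept the action only if it is the next expected step."""
        if self.finished:
            return False
        if action == self.steps[len(self.completed)]:
            self.completed.append(action)
            return True
        self.errors += 1  # out-of-order action, logged for post-session feedback
        return False

    @property
    def finished(self) -> bool:
        return len(self.completed) == len(self.steps)

# Hypothetical felling sequence for a harvester head (step names are illustrative).
trainer = ProcedureTrainer(steps=[
    "position_boom", "open_head", "grip_stem", "fell_cut", "delimb_feed", "cross_cut",
])
for action in ["position_boom", "grip_stem", "open_head"]:
    print(action, "->", "ok" if trainer.attempt(action) else "out of order")
```

Recording out-of-order attempts rather than blocking silently is what enables the post-session feedback and expert-guided iteration described above.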
Another significant area of my work involves VR-based training in construction, where my research addresses the imminent workforce shortage by providing immersive training systems for wooden light-frame construction. My studies empirically validate the effectiveness of immersive VR training compared to traditional video-based instruction, demonstrating that VR significantly enhances learning speed and skill retention without sacrificing performance accuracy. Through carefully structured training scenarios that integrate real-life procedures and safety measures into virtual environments, my research confirms the superior engagement and usability of VR training systems, suggesting a strong potential for widespread adoption within industry training programs.
I have also contributed extensively to research on enhancing worker safety and operational efficiency in confined and potentially hazardous environments. My approach involves both physical and virtual simulators designed to replicate real-world conditions of confined spaces, integrating interactive elements to simulate hazards such as poor visibility, limited access, and emergency conditions. This dual modality of training, physical simulation complemented by virtual reconstruction, has shown promise in improving workers' preparedness and situational awareness, ultimately reducing workplace accidents through increased familiarity and psychological readiness for handling unexpected events and risks.
Moreover, my studies place considerable emphasis on cognitive load and usability assessments in immersive learning environments. By employing comprehensive evaluation frameworks, I systematically measure how XR training impacts cognitive processes, skill acquisition, and overall user satisfaction. This methodological rigor ensures that XR training systems are not only technologically advanced but also cognitively aligned with user capabilities, thereby optimizing the educational effectiveness and practical utility of immersive technologies.
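As one concrete example of the instruments such evaluation frameworks typically build on, the sketch below computes raw and weighted NASA-TLX scores. The six subscales and the 15-point pairwise-comparison weighting follow the standard instrument; treating TLX as the specific framework used in my studies is an assumption for illustration.

```python
# Minimal raw and weighted NASA-TLX scoring (0-100 scale per subscale).
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings: dict[str, float]) -> float:
    """Raw TLX: unweighted mean of the six subscale ratings."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

def weighted_tlx(ratings: dict[str, float], weights: dict[str, int]) -> float:
    """Weighted TLX: weights come from 15 pairwise comparisons and sum to 15."""
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 30, "effort": 60, "frustration": 40}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(f"raw={raw_tlx(ratings):.1f}  weighted={weighted_tlx(ratings, weights):.1f}")
# raw=45.8  weighted=57.0
```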
Overall, my research in this domain represents a coherent effort to harness immersive technologies to address real-world challenges in education, workforce training, and operational safety. My work consistently showcases XR's transformative potential when strategically integrated with intelligent interaction design, cognitive ergonomics, and empirical validation, positioning XR as an indispensable tool for modern industrial training and education.
Assistive and Cognitive Augmentation Technologies
In this domain, my research focuses on designing, developing, and validating systems that enhance human cognitive and operational capabilities through immersive environments. Leveraging advancements in VR, AR, and MR, my work integrates adaptive cognitive assistants capable of providing real-time, context-aware support. Utilizing biosensors such as EEG, eye-tracking, and physiological measurements like heart rate variability, these systems dynamically assess and respond to users' cognitive load, attention, and emotional states, optimizing engagement and learning outcomes across industrial, educational, and rehabilitation contexts.
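A schematic sketch of this closed-loop adaptation idea appears below: normalized biosensor features are fused into a single load index, and a smoothed estimate drives a coarse adaptation decision. The feature set, equal weights, and thresholds are illustrative assumptions, not a validated model or the actual system.

```python
import statistics

def load_index(pupil_dilation: float, hrv_rmssd: float, blink_rate: float) -> float:
    """Fuse normalized features (each scaled to 0-1) into one load estimate.

    Higher pupil dilation, suppressed blinking, and lower heart rate variability
    are commonly associated with higher cognitive load; the equal weights are
    an illustrative choice.
    """
    return (pupil_dilation + (1.0 - hrv_rmssd) + (1.0 - blink_rate)) / 3.0

def adapt(load_history: list[float], low: float = 0.35, high: float = 0.7) -> str:
    """Map a smoothed load estimate to a coarse adaptation decision."""
    smoothed = statistics.fmean(load_history[-5:])  # moving average damps sensor noise
    if smoothed > high:
        return "add_guidance"      # e.g., surface step hints, slow the scenario
    if smoothed < low:
        return "raise_difficulty"  # e.g., hide hints, add task complexity
    return "hold"

print(round(load_index(pupil_dilation=0.8, hrv_rmssd=0.3, blink_rate=0.4), 2))  # 0.7
print(adapt([0.62, 0.68, 0.74, 0.78, 0.81]))  # -> add_guidance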
I have a strong interest in intelligent cognitive assistants powered by natural language processing, large language models, and machine learning algorithms, which offer adaptive, conversational guidance for complex tasks. Recent projects demonstrate effective integration of these technologies with computer vision to deliver flexible, context-aware instructional systems without the need for prior scene annotation. Such innovations enhance user autonomy and effectiveness, particularly in complex assembly and disassembly processes in industrial environments.
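A skeleton of this annotation-free guidance loop might look like the following: an off-the-shelf detector labels the live scene, and a language model turns the detections plus the task goal into the next instruction. The function names, prompt, and stub models are placeholders; the components of the real systems are not specified here.

```python
from typing import Callable

# Pluggable model interfaces (placeholders, not specific products): a detector
# returning (label, confidence) pairs, and a text-generation function.
Detector = Callable[[bytes], list[tuple[str, float]]]
Generator = Callable[[str], str]

def next_instruction(frame: bytes, goal: str,
                     detect: Detector, generate: Generator) -> str:
    """One step of annotation-free guidance: describe the scene from live
    detections, then ask the language model for the next action."""
    scene = ", ".join(f"{label} ({conf:.0%})" for label, conf in detect(frame)
                      if conf > 0.5)  # drop low-confidence detections
    prompt = (f"Task goal: {goal}\n"
              f"Objects currently visible: {scene}\n"
              f"Give the single next assembly step as one short imperative sentence.")
    return generate(prompt)

# Stub models so the sketch runs without any ML dependencies.
fake_detect: Detector = lambda _: [("gearbox housing", 0.93), ("hex bolt", 0.88)]
fake_generate: Generator = lambda p: "Insert the hex bolt into the gearbox housing."
print(next_instruction(b"<frame>", "assemble gearbox", fake_detect, fake_generate))
```

Because the scene description is built from detections at runtime, no prior annotation of the workspace is required, which is the property highlighted above.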
Additionally, my research addresses human perception and cognitive processing within immersive environments, particularly spatial characteristics and time perception. My studies reveal significant impacts of immersive VR environments on subjective experiences of time and spatial engagement, informing the design of cognitively aligned VR applications. Moving forward, I aim to bridge the existing gaps between experimental research and practical applications by developing standardized cognitive models, validated measurement approaches, and comprehensive frameworks for deploying cognitive augmentation technologies in dynamic real-world settings.
Educational Technologies and Immersive Learning
Here, my interest centers on leveraging advanced interactive technologies, particularly VR and AR, to enhance educational experiences across multiple STEM-related contexts. My work systematically explores how immersive visualization environments and digital interfaces can improve learning effectiveness, spatial understanding, and student engagement, especially within technical and engineering disciplines.
I am particularly interested in how immersive technologies influence cognitive load, usability, and overall user experience during educational activities. My studies have consistently shown that immersive VR can effectively support students in spatial tasks and help manage cognitive load, providing an engaging learning medium comparable to, and in some cases preferable over, traditional instructional methods. Through iterative design and robust evaluation, I've demonstrated that VR-based exercises can translate conventional engineering and design courses into effective remote or hybrid learning experiences.
Moreover, my research emphasizes the pedagogical integration of real-world data, interactive multimodal representations, and cross-reality (XR) platforms to foster authentic STEM learning. For instance, I've investigated teachers' perceptions of integrating sensor data from smart buildings into K-12 curricula, underscoring the importance of data literacy and interactive visualization in STEM education. These efforts aim not only to improve student outcomes but also to prepare educators for the effective use of advanced digital tools in instruction, thereby bridging theoretical knowledge and practical application.
Digital Twins and XR Geographical Environments
A significant thread of my research involves developing data-driven, procedural methods for constructing realistic, interactive visualizations of geographical environments, such as detailed forest ecosystems. This approach integrates ecological models, remote sensing data, and geospatial analytics to produce virtual environments that accurately represent real-world conditions. Such visualizations support various applications, including environmental monitoring, sustainable forestry management, ecosystem restoration planning, and immersive education. Utilizing sophisticated visualization platforms like Unreal Engine, Unity3D, and CesiumJS, these procedural techniques automate the creation of expansive digital environments, ensuring both precision and scalability.
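A toy version of this procedural idea is sketched below: tree positions are rejection-sampled against a per-cell density raster so the virtual stand statistically matches the source data. The raster values and sampling rule are illustrative, and the real pipelines target engines such as Unreal Engine, Unity3D, and CesiumJS rather than plain Python.

```python
import random

def scatter_trees(density: list[list[float]], cell_size: float = 10.0,
                  max_per_cell: int = 5, seed: int = 42) -> list[tuple[float, float]]:
    """Place trees by rejection sampling against a per-cell density raster.

    `density[row][col]` in [0, 1] is the fraction of the cell's carrying
    capacity (`max_per_cell`) to occupy, e.g. derived from remote-sensing
    canopy cover.
    """
    rng = random.Random(seed)  # fixed seed: reproducible stands for experiments
    trees = []
    for r, row in enumerate(density):
        for c, d in enumerate(row):
            for _ in range(max_per_cell):
                if rng.random() < d:  # accept a candidate with probability d
                    x = (c + rng.random()) * cell_size  # jitter within the cell
                    y = (r + rng.random()) * cell_size
                    trees.append((x, y))
    return trees

# Toy 2x2 raster (10 m cells): dense stand in the north-west, clearing south-east.
raster = [[0.9, 0.6],
          [0.4, 0.1]]
print(len(scatter_trees(raster)))  # expected count ~ 5 * (0.9+0.6+0.4+0.1) = 10
```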
Central to this domain is the convergence of immersive technologies with geospatial data management, aimed at enhancing user cognition, spatial awareness, and decision-making capabilities. I investigate interaction paradigms specifically tailored for navigating and engaging with expansive virtual environments, including urban spaces and natural landscapes. For instance, my empirical evaluations have refined teleportation techniques to address navigation challenges in dense urban virtual environments, introducing enhanced interaction methods such as Mini-Map, Portal Preview, and X-Ray Vision. These techniques improve spatial orientation and navigation efficiency, mitigate disorientation, and enrich user engagement in virtual urban exploration.
A further interest is the development of accurate digital twins—high-fidelity virtual representations that seamlessly integrate real-time data from diverse sources such as sensor networks, remote sensing, and geographic information systems (GIS). These digital twins facilitate dynamic simulations and scenario-based analyses, allowing users to experience, manage, and interact intuitively with complex, large-scale geographical and urban ecosystems.
Moreover, I place a strong emphasis on the temporal dimension within digital twins, ensuring these environments dynamically represent evolving conditions. By integrating time-sensitive data and predictive modeling techniques, my work allows stakeholders to explore environmental changes, urban development patterns, and ecological dynamics over time. This facilitates proactive planning, informed decision-making, and strategic management in sectors like urban planning, forestry, environmental conservation, and infrastructure management. The resulting spatiotemporal digital twins bridge the gap between static visualization and dynamic scenario-based planning, empowering users with comprehensive analytical tools to foresee potential outcomes and intervene effectively.
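One concrete reading of this temporal dimension is sketched below: timestamped observations for a twin attribute are stored in order and interpolated at an arbitrary query time. The scalar schema and linear interpolation are deliberate simplifications for illustration, not the data model of a deployed twin.

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class TemporalTwin:
    """Timestamped scalar observations for one twin attribute (e.g. canopy
    height), queryable at any time by linear interpolation between samples."""
    times: list[float] = field(default_factory=list)   # kept sorted on insert
    values: list[float] = field(default_factory=list)

    def observe(self, t: float, value: float) -> None:
        i = bisect.bisect(self.times, t)
        self.times.insert(i, t)
        self.values.insert(i, value)

    def at(self, t: float) -> float:
        """State at time t: clamp outside the observed range, interpolate inside."""
        if t <= self.times[0]:
            return self.values[0]
        if t >= self.times[-1]:
            return self.values[-1]
        i = bisect.bisect(self.times, t)
        t0, t1 = self.times[i - 1], self.times[i]
        v0, v1 = self.values[i - 1], self.values[i]
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Canopy height (m) observed in 2020 and 2024; query an intermediate year.
height = TemporalTwin()
height.observe(2020.0, 17.5)
height.observe(2024.0, 19.1)
print(round(height.at(2022.0), 2))  # -> 18.3, midway between the two surveys
```

Swapping the interpolation for a predictive model is what turns this retrospective view into the forward-looking, scenario-based planning described above.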
Finally, a distinctive aspect of my approach is the rigorous evaluation of these smart geographical environments and XR interfaces, ensuring usability, accessibility, and cognitive efficiency. Through iterative assessments involving stakeholders, domain experts, and end-users, I continuously refine the interactions and visualization fidelity to align with real-world demands and cognitive abilities. This holistic, user-centered evaluation strategy helps ensure that my digital twin systems and immersive geospatial environments not only push technological boundaries but also deliver tangible benefits for societal, ecological, and urban development goals.