HCI (Human-Computer Interaction) + Human-Centered AI (Explainable/Responsible)
Interactive, Mobile, Wearable, Ubiquitous Technology
Human-AI Teaming / XR Mobility / Vehicular-Metaverse / Actuated XR
XR Twin Platforms for AI-Infused Physical Systems (Vehicles, Robots, Cities)
AI Coaching & ActionSense / Somatosensory Intelligence / Visuo-Haptic Illusions & Telepresence
Development of XR Twin-based meta-mobility technology that extends hyper-realistic driving and walking mobility experiences
Human-centered intelligence technology for autonomous vehicles (e.g., passenger-centered decision-making and driving-strategy design, vehicle XAI evaluation tools)
Development of 3D meta-human generation technology based on stylized neural rendering
Convergence cultural virtual studio for AI-based metaverse implementation
HCI+AI convergence research for human-centered physical system design (GIST-MIT joint research)
Development of synesthetic augmentation and meta-human interaction technologies for metaverse spaces
Development of AI technology and an XR Twin platform to improve the visibility of public signage and public facilities
Development of physical-activity-linked educational content for children with developmental disabilities, with AI-based automated reinforcement and prompting
Research on Quality of Life Technology for older drivers and mobility-impaired travelers
Development of human-subject big-data analysis and visualization tools that support behavioral scientists in developing AI applications
Development of a cyber-learning system that adapts to learners' real-time cognitive load and gaze patterns
Development of a gamified crowdsourcing system
Development of an in-vehicle NUI (Natural User Interface) using eye tracking, speech recognition, gesture recognition, and a large-area windshield display
Giant Finger: Visuo-proprioceptive congruent virtual legs for flying actions in virtual reality
Seamless walking in VR via Redirected Walking (RDW)
Electrical, Vibrational, and Cooling Stimuli-Based Redirected Walking: Comparison of Various Vestibular Stimulation-Based Redirected Walking Systems
The Thousand Character Classic VR: Chinese Characters-adaptive VR Game with Treadmill for an Embodied Learning of Their Korean Rendering
Exploring the effectiveness of public displays on interactive transparent displays inside autonomous public transportation (2022).
Virtual Reality Interface Design to Improve Grasping Experience in Block Stacking Activities in Virtual Environments (2022)
The goal of this project is to create a simple, usable tool for handling time-series sensor data streams in applications that incorporate ML.
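As a minimal sketch of what such a stream-handling tool might offer (the class name, window size, and emitted features are illustrative assumptions, not the project's actual API), a sliding-window feature extractor over a sensor stream could look like:

```python
from collections import deque
from statistics import mean, stdev

class SlidingWindow:
    """Fixed-size sliding window over a sensor stream, emitting
    simple per-window features suitable for ML pipelines."""

    def __init__(self, size=32, step=16):
        self.size, self.step = size, step
        self.buf = deque(maxlen=size)      # keeps only the newest `size` samples
        self._since_emit = 0

    def push(self, sample):
        """Add one sample; return (mean, std) each time a full window is due."""
        self.buf.append(sample)
        self._since_emit += 1
        if len(self.buf) == self.size and self._since_emit >= self.step:
            self._since_emit = 0
            return mean(self.buf), stdev(self.buf)
        return None

w = SlidingWindow(size=4, step=2)
features = [f for s in [1.0, 2.0, 3.0, 4.0, 5.0, 6.0] if (f := w.push(s))]
```

The `step` parameter lets consecutive windows overlap, a common choice when downstream models need denser feature sequences than the raw emit rate would give.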
Annual StarCraft artificial intelligence (AI) competitions have promoted the development of successful AI players for complex real-time strategy games.
In these competitions, AI players are ranked based on their win ratio over thousands of head-to-head matches.
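The ranking rule described above, win ratio over all head-to-head matches, can be sketched as follows (the bot names and match list are illustrative):

```python
from collections import Counter

def rank_by_win_ratio(matches):
    """matches: list of (winner, loser) pairs from head-to-head games.
    Returns player names sorted by descending win ratio."""
    wins, games = Counter(), Counter()
    for winner, loser in matches:
        wins[winner] += 1       # winner gets a win and a game played
        games[winner] += 1
        games[loser] += 1       # loser gets only a game played
    return sorted(games, key=lambda p: wins[p] / games[p], reverse=True)

matches = [("BotA", "BotB"), ("BotA", "BotC"), ("BotB", "BotC"), ("BotC", "BotA")]
ranking = rank_by_win_ratio(matches)
```

Over thousands of matches, as in the actual competitions, the win ratio becomes a stable estimate of relative playing strength.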
A factory whose facilities and machines are equipped with Internet of Things (IoT) sensors
Design and develop variants of advanced wearable user interface prototypes, including joystick-embedded, potentiometer-embedded, motion-gesture, and contactless infrared user interfaces, for rapidly assessing the hands-on user experience of potential futuristic user interfaces.
Forecasting Future Turn-Based Strokes in Badminton Rallies
Portal Display: screen-based 3D stereoscopic conferencing system for immersive social telepresence
Deep Learning-Based Engagement Classification by Behavioral and Physiological Data of Children with Developmental Disability (2022)
REVES: Redirection Enhancement Using Four-Pole Vestibular Electrode Stimulation. CHI 2022 LBW
Take-Over Requests after Waking in Autonomous Vehicles (2022)
Building a first-person-view virtual driving system for further Human-Vehicle Interaction research
Real-time discrimination of the driver's task load -> classification of the driver's attentiveness level and condition -> design guidelines for situation-based hand-over notification interfaces for highly reliable control of autonomous driving vehicles
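The pipeline above (task-load discrimination feeding an attentiveness classifier that selects a hand-over notification design) might be sketched as below; the input signals, weights, thresholds, and modality labels are illustrative assumptions, not the project's actual model:

```python
def task_load(pupil_diameter_mm, steering_reversals_per_min):
    """Crude task-load score in [0, ~1] from two illustrative driver signals."""
    return 0.6 * pupil_diameter_mm / 8.0 + 0.4 * steering_reversals_per_min / 30.0

def attentiveness(load_score):
    """Map a task-load score to a coarse attentiveness level."""
    if load_score < 0.4:
        return "attentive"
    if load_score < 0.7:
        return "moderate"
    return "overloaded"

def handover_notification(level):
    """Pick a take-over notification modality for each attentiveness level."""
    return {"attentive": "visual",
            "moderate": "visual+auditory",
            "overloaded": "multimodal+haptic"}[level]

level = attentiveness(task_load(pupil_diameter_mm=5.0, steering_reversals_per_min=12.0))
notification = handover_notification(level)
```

The staged structure mirrors the arrow chain in the project description: a continuous load estimate is discretized into driver states, and each state maps to a notification strategy.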
Exploring which modality most positively influences the driver's attitude when a 'Why message' (e.g., construction site) is combined with a 'How message' (e.g., reduce driving speed) according to the vehicle's behavior in autonomous driving situations
Development of multi-modal NUI (Natural User Interface) based on voice command and touch gesture to improve user experience in vehicle
Proposal of a new annotation system for meaningful data extraction and accurate data convergence of cultural-heritage formative elements -> accumulation of annotation data and big-data analysis for diverse applications (2018 KOCCA)
The goal of this project is to create an in-car information system that adapts the delivery timings of HCI demands to drivers based on in-situ driving and cognitive load models for safe navigation.
In this project, we explored a novel navigation display system that uses an augmented reality (AR) projection to minimize cognitive distance by overlaying driving directions on the windshield and road.
While in-car navigation systems enhance situational awareness, they also increase drivers’ visual distraction and cognitive load. This project explores the efficacy of multi-modal route guidance cues for ‘safer’ driving.
The purpose of this project was to design features for car dashboard displays that are both functional and aesthetically pleasing.
This project aims to better support student learning by adapting computer-based tutoring to individual learning phases and real-time capabilities. In this manner, computer-based tutors may be more effective in supporting robust learning.
This project explored the effects of AR technology, when combined with a range of 3D prototype applications.
The goal of this project is to improve perception and performance during touch-based interaction with personal electronic devices. Specifically, we have identified the appropriate fusion of visual, audio, and haptic cues during fingertip interaction with touch-screen images.
This project aims to understand users-on-the-go in connected environments and to improve the quality of their ubiquitous HCI experience by enhancing machine intelligence to be more human-centered.
The goal of this project is to help drivers safely interact with ubiquitous HCI demands and benefit from proactive information services in cars. Our prior and ongoing projects primarily explore the ‘interruptive’ feature of ubiquitous HCI demands in cars. We have been rigorously addressing the issues of when to intervene by using our sensor-based assessment technologies that estimate drivers’ cognitive load in near real-time.
This project seeks to develop a sensor-based method for tracking variation in cognitive processing loads.
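One minimal way to track variation in such a load signal is an exponentially weighted running baseline with deviation flagging; the smoothing factor, threshold, and input values below are assumptions for illustration, not the project's method:

```python
class LoadTracker:
    """Track deviation of a cognitive-load proxy from its running baseline."""

    def __init__(self, alpha=0.1, threshold=0.2):
        self.alpha = alpha          # smoothing factor for the baseline
        self.threshold = threshold  # relative deviation that counts as a spike
        self.baseline = None

    def update(self, value):
        """Feed one normalized sensor reading; return True on a load spike."""
        if self.baseline is None:
            self.baseline = value   # first reading seeds the baseline
            return False
        spike = (value - self.baseline) / max(self.baseline, 1e-9) > self.threshold
        self.baseline += self.alpha * (value - self.baseline)  # EWMA update
        return spike

t = LoadTracker()
flags = [t.update(v) for v in [0.5, 0.52, 0.51, 0.9, 0.55]]
```

Because the baseline adapts slowly, gradual drift in the signal is absorbed while abrupt increases, the variation of interest here, are flagged.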
This project investigates how dialog-based HCI demands interact with driver interruptibility. We are refining our key technology, obtained from Project 1 in this document, to predict the duration of driver interruptibility.
This proposed project aims to make machine intelligence-driven physical and non-physical interventions in computer-assisted driving more acceptable and dependable.