Long-Term Vision:
The project is evolving toward an applied system that serves product teams and industry: a multimodal emotion engine and a UX workflow for designing, testing, and training agentic AI.
1. Multimodal Emotion Detection Layer
A unified sensing layer combining EEG, eye-tracking, heart-rate variability, and facial cues to estimate user state with higher confidence than any single signal provides.
The long-term aim is a deployable SDK that product teams can plug into usability studies or feature testing to measure cognitive load, frustration, calm, or focus.
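A minimal sketch of what such an SDK surface could look like, assuming hypothetical channel names and a simple confidence-weighted fusion; none of these interfaces exist in the project yet:

```python
from dataclasses import dataclass


@dataclass
class ChannelEstimate:
    """One modality's estimate of the user's state, with its own confidence."""
    source: str        # e.g. "eeg", "eye", "hrv", "face" (illustrative names)
    state: dict        # e.g. {"cognitive_load": 0.7, "frustration": 0.2}
    confidence: float  # 0.0 .. 1.0, how much to trust this channel right now


def fuse(estimates: list[ChannelEstimate]) -> dict:
    """Confidence-weighted average across modalities: dimensions reported by
    more channels, at higher confidence, dominate the fused estimate."""
    totals: dict[str, float] = {}
    weights: dict[str, float] = {}
    for est in estimates:
        for dim, value in est.state.items():
            totals[dim] = totals.get(dim, 0.0) + est.confidence * value
            weights[dim] = weights.get(dim, 0.0) + est.confidence
    return {dim: totals[dim] / weights[dim] for dim in totals if weights[dim] > 0}


# Example: EEG and HRV agree on high load; the facial channel is uncertain.
fused = fuse([
    ChannelEstimate("eeg", {"cognitive_load": 0.8, "frustration": 0.3}, 0.9),
    ChannelEstimate("hrv", {"cognitive_load": 0.7}, 0.6),
    ChannelEstimate("face", {"frustration": 0.6}, 0.3),
])
print(fused)
```

The design point is that each modality reports its own confidence, so a noisy facial read during head movement, for example, cannot override clean EEG and HRV signals.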
2. Emotion-Adaptive UI and Environment API
The Design Dictionary becomes a programmable API for adjusting parameters such as color temperature, spatial openness, contrast, motion, and environmental imagery in real time.
This supports products that aim to regulate stress, sustain attention, or personalize sensory environments across productivity, education, and wellness contexts.
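One hedged sketch of how the Design Dictionary could be exposed programmatically, assuming illustrative parameter names (color_temperature_k, spatial_openness, contrast, motion) and a hypothetical adjust() entry point fed by the fused state from the sensing layer:

```python
# Hypothetical sketch of a Design Dictionary API: map a fused emotional
# state onto environment parameters. Parameter names and rules are
# illustrative assumptions, not the project's actual vocabulary.

ENVIRONMENT_DEFAULTS = {
    "color_temperature_k": 4500,   # neutral white point
    "spatial_openness": 0.5,       # 0 = enclosed, 1 = open
    "contrast": 0.5,
    "motion": 0.5,                 # amount of ambient animation
}


def adjust(state: dict, params: dict | None = None) -> dict:
    """Nudge environment parameters toward a calmer, steadier setting when
    stress or load is high, and toward a more alerting one when focus drops."""
    params = dict(params or ENVIRONMENT_DEFAULTS)
    stress = state.get("frustration", 0.0)
    load = state.get("cognitive_load", 0.0)
    focus = state.get("focus", 0.5)

    if stress > 0.6 or load > 0.7:
        params["color_temperature_k"] = 3200           # warmer light
        params["motion"] = min(params["motion"], 0.2)  # reduce ambient movement
        params["spatial_openness"] = 0.8               # more visual breathing room
        params["contrast"] = 0.4
    elif focus < 0.3:
        params["color_temperature_k"] = 5500           # cooler, more alerting
        params["contrast"] = 0.7
    return params


# Example: a high-stress, high-load reading shifts the environment toward calm.
print(adjust({"frustration": 0.7, "cognitive_load": 0.8}))
```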
3. UX Workflow for Training Agentic AI
A structured pipeline for evaluating and improving emotionally responsive AI agents:
• run controlled tests
• collect multimodal signals
• evaluate AI decisions against ground-truth emotional shifts
• iterate to create safer, more aligned adaptive behavior
This positions the work as a foundation for human-centered agentic AI systems.
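A hedged sketch of that loop, using synthetic ground-truth labels and placeholder decision names; the real pipeline would substitute the project's own instruments, agents, and agreement metrics:

```python
import random

random.seed(0)  # reproducible synthetic example


def score_agreement(decision: str, shift: str) -> float:
    """1.0 if the agent's adaptation moved with the measured emotional shift,
    0.0 if it moved against it or failed to respond."""
    compatible = {
        ("calm_environment", "stress_rising"),
        ("increase_stimulation", "focus_dropping"),
        ("no_change", "stable"),
    }
    return 1.0 if (decision, shift) in compatible else 0.0


def run_iteration(policy: dict, observed_shifts: list) -> float:
    """One pass of the loop: apply the agent's policy to each trial, compare
    its decisions to ground-truth shifts, and return mean agreement."""
    scores = [score_agreement(policy.get(shift, "no_change"), shift)
              for shift in observed_shifts]
    return sum(scores) / len(scores)


# Synthetic ground-truth emotional shifts standing in for labeled multimodal data.
shifts = [random.choice(["stress_rising", "focus_dropping", "stable"]) for _ in range(20)]

# Iteration 1: a naive agent policy that never adapts.
print("baseline agreement:", run_iteration({}, shifts))

# Iteration 2: the policy is revised wherever it disagreed with ground truth.
revised = {"stress_rising": "calm_environment",
           "focus_dropping": "increase_stimulation",
           "stable": "no_change"}
print("revised agreement:", run_iteration(revised, shifts))
```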
4. Longitudinal Personalization
Develop adaptive profiles that learn how a user's emotional responses evolve over days or months.
Potential applications include focus tools, adaptive learning environments, mental-health platforms, and XR systems designed to reduce cognitive strain.
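A minimal sketch of how such a profile could be maintained, assuming an exponentially weighted per-dimension baseline as an illustrative choice rather than a committed design:

```python
from dataclasses import dataclass, field


@dataclass
class LongitudinalProfile:
    """Per-user baselines that drift slowly as sessions accumulate over days or months."""
    alpha: float = 0.05  # small alpha => slow, stable adaptation
    baselines: dict[str, float] = field(default_factory=dict)

    def update(self, session_state: dict) -> None:
        """Fold one session's fused state into the long-term baseline (EWMA)."""
        for dim, value in session_state.items():
            prev = self.baselines.get(dim, value)
            self.baselines[dim] = (1 - self.alpha) * prev + self.alpha * value

    def deviation(self, current_state: dict) -> dict:
        """How far today's state sits from this user's own norm."""
        return {dim: current_state[dim] - self.baselines.get(dim, current_state[dim])
                for dim in current_state}


profile = LongitudinalProfile()
for day in range(30):  # a month of sessions with roughly stable readings
    profile.update({"cognitive_load": 0.55, "frustration": 0.2})

print(profile.deviation({"cognitive_load": 0.85, "frustration": 0.25}))
```

The intent is that adaptation reacts to deviation from a user's own norm rather than to absolute thresholds, so the same reading can be treated differently for different users.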
5. Enterprise and Cross-Sector Deployment
Target sectors where emotional regulation and cognitive performance directly affect outcomes, including control rooms, clinical environments, training simulations, and industrial UX such as monitoring systems.
This aligns the research with operational impact and real-world product integration.