Emotion-Adaptive Virtual Environments (Ph.D. Thesis)

Role

Ph.D. Candidate

Year

2023 - Current

Snapshot

Question:

When someone is stressed or overloaded, can an interface that senses their state from EEG and updates AI-generated indoor visuals in real time help them stay in a calm, focused zone without compromising task performance, compared to static visuals or generic, non-personalized changes?

In more technical terms: can a real-time, EEG-driven closed loop that adapts AI-generated indoor imagery help people maintain a better balance between calm and focus, while preserving or improving cognitive task performance, compared to static visuals or non-personalized changes?

Method:

Within-subject experiment (planned N = 30 to 40 adults) with 3 conditions (Adaptive, Non-Adaptive scripted, Static baseline). Imagery updates every 10 to 15 seconds. Outcomes combine EEG-derived valence and arousal, task accuracy and reaction time, plus PANAS and NASA-TLX. Analysis uses linear mixed-effects models to compare conditions.

Result:

Current status: working end-to-end prototype (EEG sensing, real-time inference, Design Dictionary parameter mapping, image generation) plus a controlled study protocol with clear success metrics. The main study is the next step.

Impact:

A reusable blueprint for emotion-adaptive interfaces: a functional prototype, a replicable evaluation protocol, and an evidence-based Design Dictionary that links visual parameters (warmth, openness, clutter, lighting, texture) to measurable affective shifts, with direct relevance to adaptive UI, agentic assistants, and AR/VR comfort.

Project Overview

Emotion-Adaptive Virtual Environments is my doctoral research at NC State University, where I design and test a closed-loop human–AI system. The system reads emotional state in real time using EEG, then adapts AI-generated indoor imagery to help users maintain an optimal balance between calm and focus.

Problem:

Today's AI systems are static and impersonal: they respond identically whether you're stressed or focused, ignoring emotional context. While neurofeedback research shows visuals can regulate emotion, existing approaches use limited, fixed image sets and lack personalization.

Research question:

Can a closed-loop system that adapts AI-generated imagery based on real-time EEG feedback measurably improve emotional balance and task performance compared to static or non-adaptive interfaces?

Scope:

As a Ph.D. candidate in Design with a background in cognitive science and neuroscience, I bridge experimental research and practical interface design. This project builds and validates an EEG-driven adaptive system that uses generative AI to create responsive environments, moving beyond one-size-fits-all interaction.

Methods:
  • Mixed-Methods Research

  • Human–AI Interaction Evaluation

  • EEG recording and signal processing

  • Generative AI Prototyping

  • Behavioral and self-report measures

  • Statistical analysis

Research Design

This research develops and validates a closed-loop adaptive system that senses emotional state via EEG and responds by modifying AI-generated indoor environments in real time. The study examines whether such personalized adaptation can improve emotional regulation and cognitive focus compared to traditional static interfaces.

Research Gaps

Static AI Interaction:

Most AI systems today are reactive and text-dependent, meaning they ignore the user’s real-time emotional and cognitive state, offering the same experience to everyone.

Limited Personalization:

Neurofeedback and biofeedback tools often rely on fixed, pre-recorded visual or auditory stimuli, which cannot dynamically adapt to an individual’s shifting emotional needs.

Lack of Real-Time Adaptation:

While neuroscience confirms that visuals affect emotion, existing systems do not implement a real-time, data-driven loop that adjusts to a user’s moment-to-moment physiological state.

Research Aims

  • To map specific visual parameters, such as color warmth, spatial openness, and clutter, to measurable changes in emotional valence and arousal.


  • To quantify whether adaptive, EEG‑driven imagery significantly improves emotional balance compared to static or pre‑scripted visuals.


  • To deliver a replicable protocol and design dictionary that product teams can use to build and test emotion‑adaptive features.

System Diagram

This diagram shows the closed loop:

EEG is translated into valence and arousal, mapped to the Design Dictionary, used to generate a new room image, then measured again until the target emotional state is reached.
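For readers who think in code, here is a minimal, self-contained Python sketch of the loop shown in the diagram. Every helper below is a stub standing in for the real components described in the sections that follow; the target coordinates, tolerance, and update interval are illustrative assumptions, not the study's calibrated values.

```python
import math
import random

TARGET = (0.6, 0.4)   # assumed (valence, arousal) target on a normalized 0-1 scale
TOL = 0.1             # how close to the target counts as "reached"

# --- placeholder stubs for the real components described below ---
def read_eeg_window(seconds):            # Sense: would pull samples from the headband
    return [random.gauss(0, 1) for _ in range(int(seconds * 256))]

def estimate_valence_arousal(window):    # Infer: would run the real-time EEG pipeline
    return (random.random(), random.random())

def map_gap_to_parameters(state, target, params):   # Design Dictionary mapping
    return params

def generate_room_image(params):         # Generate: would call the diffusion model
    return None

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# --- the closed loop itself ---
params = {}
for cycle in range(20):
    state = estimate_valence_arousal(read_eeg_window(12))   # ~10-15 s per cycle
    if distance(state, TARGET) < TOL:
        break                             # target emotional state reached
    params = map_gap_to_parameters(state, TARGET, params)   # Refine
    generate_room_image(params)           # new image shown, then measured again
```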


How It Works:

The Closed-Loop System

The core of this research is a functional, real-time adaptive system. It moves from sensing a user's emotional state to generating a personalized visual response in a continuous feedback loop. Below is a breakdown of the five-stage architecture.

Sense

Physiological Data Capture

The loop begins by capturing the user's emotional state. An EEG headband records brainwave activity, focusing on frequency bands scientifically linked to emotional valence and arousal.


Alpha asymmetry reflects emotional valence: more left-frontal activity signals positive approach, more right-frontal signals negative withdrawal.

The beta/alpha ratio reflects arousal: higher ratios mean heightened alertness, lower ratios mean calmer states.
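As an illustration, below is a minimal sketch of how these two indices could be computed from a pair of frontal EEG channels with a Welch power spectrum; the sampling rate, band edges, and channel pairing are assumptions rather than the study's exact settings.

```python
import numpy as np
from scipy.signal import welch

FS = 256                                        # assumed sampling rate in Hz
BANDS = {"alpha": (8, 13), "beta": (13, 30)}    # conventional band edges

def band_power(signal, band, fs=FS):
    """Average power of `signal` within a frequency band, via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def frontal_indices(left_frontal, right_frontal):
    """Return (alpha_asymmetry, beta_alpha_ratio) for a frontal channel pair (e.g., F3/F4)."""
    alpha_left = band_power(left_frontal, BANDS["alpha"])
    alpha_right = band_power(right_frontal, BANDS["alpha"])
    beta_left = band_power(left_frontal, BANDS["beta"])
    beta_right = band_power(right_frontal, BANDS["beta"])

    # Frontal alpha asymmetry: ln(right alpha) - ln(left alpha). Since alpha is
    # inversely related to cortical activity, a larger value implies relatively
    # greater left-frontal activity (positive valence / approach).
    asymmetry = np.log(alpha_right) - np.log(alpha_left)

    # Beta/alpha ratio across the pair as an arousal proxy (higher = more alert).
    ratio = (beta_left + beta_right) / (alpha_left + alpha_right)
    return asymmetry, ratio
```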

Infer

Real-Time Emotion Classification

EEG data is processed through a Python pipeline (MNE-Python) to estimate the user’s current position on the valence–arousal model.
The system compares this state to a target emotional point, calculates the difference, and determines the direction of change needed (e.g., increase valence, lower arousal).
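A hedged sketch of this step: the two EEG indices are rescaled onto the valence–arousal plane and compared against the target point. The calibration bounds and target coordinates below are illustrative placeholders, not the values used in the study.

```python
import numpy as np

def to_unit_range(value, lo, hi):
    """Clip and rescale a raw index into [0, 1]; lo/hi are per-user calibration bounds."""
    return float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))

def current_state(asymmetry, beta_alpha_ratio):
    # Illustrative calibration bounds; in practice these come from a baseline recording.
    valence = to_unit_range(asymmetry, lo=-0.5, hi=0.5)
    arousal = to_unit_range(beta_alpha_ratio, lo=0.5, hi=2.0)
    return valence, arousal

def emotional_gap(state, target=(0.65, 0.40)):
    """Signed differences telling the loop which direction to push
    (e.g., positive dv means 'increase valence', negative da means 'calm down')."""
    dv = target[0] - state[0]
    da = target[1] - state[1]
    return dv, da
```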


Emotion Map
A simplified valence–arousal map showing common emotional states positioned by their affective intensity and positivity.


Target-State Trajectory
A visual example of how the system guides a user’s emotional state from the current position toward the desired valence–arousal target using iterative adjustments. Each ghost circle is one step in the system.

Generate

Parameterized AI Imagery

The system translates the emotional gap into updated visual parameters using the Design Dictionary. This dictionary is a structured set of spatial and visual features drawn from neuroarchitecture research. It links specific design attributes to predictable effects on valence and arousal. Examples include color temperature, spatial openness, visual clutter, lighting ratio, and texture density.
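To make the structure concrete, here is a hypothetical sketch of how Design Dictionary entries could be represented in code; the numeric effect weights and prompt wordings are invented placeholders, not the evidence-based values in the actual dictionary.

```python
from dataclasses import dataclass

@dataclass
class DesignParameter:
    name: str              # e.g., "color_temperature"
    value: float           # current setting, normalized to [0, 1]
    valence_effect: float  # expected valence change per unit increase (placeholder)
    arousal_effect: float  # expected arousal change per unit increase (placeholder)
    prompt_low: str        # wording used when the value is low
    prompt_high: str       # wording used when the value is high

# Illustrative entries only; real weights come from neuroarchitecture evidence.
DESIGN_DICTIONARY = [
    DesignParameter("color_temperature", 0.5, +0.3, -0.2, "cool blue-grey tones", "warm amber tones"),
    DesignParameter("spatial_openness",  0.5, +0.4, -0.3, "cozy enclosed nook", "open airy space"),
    DesignParameter("visual_clutter",    0.5, -0.4, +0.4, "minimal tidy surfaces", "busy layered decor"),
    DesignParameter("lighting_ratio",    0.5, +0.2, -0.3, "dim soft lighting", "bright even daylight"),
    DesignParameter("texture_density",   0.5, +0.1, -0.1, "smooth plain finishes", "rich natural textures"),
]
```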

These parameters are converted into a Stable Diffusion API request, producing a new environment that moves the user closer to the target emotional state.
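Below is a hedged sketch of how those parameters might be composed into a prompt and submitted for generation, reusing the DesignParameter entries sketched above. The endpoint URL, payload fields, and prompt template are hypothetical stand-ins for the project's actual Stable Diffusion API call.

```python
import requests

def build_prompt(dictionary):
    """Compose a prompt from each parameter's current value."""
    fragments = [
        p.prompt_high if p.value >= 0.5 else p.prompt_low
        for p in dictionary
    ]
    return "photorealistic indoor room, " + ", ".join(fragments)

def request_image(prompt):
    # Hypothetical endpoint and payload; the real system posts to a Stable Diffusion API.
    payload = {"prompt": prompt, "width": 1024, "height": 768, "steps": 30}
    response = requests.post("https://example.local/generate", json=payload, timeout=60)
    response.raise_for_status()
    return response.content   # image bytes to display to the participant
```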


Example of a Design Dictionary entry
"This dictionary is a structured set of spatial and visual features drawn from neuroarchitecture research. It links specific design attributes to predictable effects on valence and arousal. Examples include color temperature, spatial openness, visual clutter, lighting ratio, and texture density."


Evaluate

Measure the Impact

The generated image is shown to the user while EEG continues recording. The system detects whether the visual change moved the user closer to the target state.


Refine

Adaptive Iteration

The loop repeats, refining parameters and regenerating imagery until the user’s emotional state converges on the target.
This creates a continuously personalized adaptive environment driven by real-time physiological feedback.

Parameter Adjustment

The system compares the user’s latest EEG state with the target point and adjusts visual parameters accordingly. These changes guide the imagery toward the emotional direction needed, such as lowering arousal or increasing valence.

Cycle

After updating parameters, the system generates a new image and displays it back to the user. The environment shifts gradually, allowing the loop to observe micro-changes in EEG and evaluate whether the adjustment helped.

Convergence Check

The system evaluates whether the emotional state is trending toward the target. If progress slows or reverses, the loop fine-tunes parameters or shifts strategy, ensuring the environment continues moving toward the preferred emotional outcome.
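A minimal sketch of this refine step, reusing the DesignParameter entries and emotional-gap helper sketched earlier; the step size and stall threshold are illustrative assumptions.

```python
STEP = 0.15              # illustrative adjustment per cycle
STALL_THRESHOLD = 0.02   # minimum progress before the loop shifts strategy

def adjust_parameters(dictionary, gap):
    """Nudge each parameter in the direction that its (assumed) effect weights
    predict will shrink the valence/arousal gap."""
    dv, da = gap
    for p in dictionary:
        gradient = p.valence_effect * dv + p.arousal_effect * da
        step = STEP if gradient > 0 else -STEP
        p.value = min(1.0, max(0.0, p.value + step))
    return dictionary

def is_converging(previous_distance, current_distance):
    """True if the last cycle moved the user meaningfully closer to the target;
    otherwise the loop fine-tunes parameters or switches strategy."""
    return (previous_distance - current_distance) > STALL_THRESHOLD
```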

Adaptive Imagery in Action

A demonstration of parameter-driven adaptation. The system adjusts visual features such as color temperature, lighting, indoor plants, and spatial clutter in response to shifting emotional-state targets, creating a dynamic, responsive environment.


Validating the System:

Study Design

To move from a compelling prototype to validated research, I designed a controlled experiment that tests the core hypothesis: Does EEG-driven adaptive imagery improve emotional balance and cognitive focus compared to non-adaptive interfaces?

Experimental Conditions

The study uses a within-subjects design where each participant experiences all three conditions in a counterbalanced order to control for learning and sequence effects.

1. Adaptive (Experimental)

The full closed-loop system. The imagery updates in real time (every 10-15 seconds) based on the participant's live EEG-derived emotional state.


2. Non-Adaptive (Active Control)

A pre-scripted, "best-practice" sequence of imagery changes. This controls for the novelty of seeing images change, isolating the effect of personalization.


3. Static (Baseline Control)

A single, neutral environment remains unchanged. This establishes a baseline for emotional state and task performance.

Participants & Protocol

  • Sample: 30 to 40 adult participants (a mix of graduate students and remote professionals).


  • Session Length: ~60 minutes per participant, including setup, tasks, and debrief.


  • Primary Task: Participants complete standardized cognitive challenges (e.g., timed arithmetic, reading comprehension) under mild time pressure to elicit a measurable stress response.


  • Measures Collected: the study triangulates physiological, performance, and subjective data:


    • EEG: Continuous recording of valence (alpha asymmetry) and arousal (beta/alpha ratio).

    • Performance: Accuracy and reaction time on cognitive tasks.

    • Self-Report: Pre- and post-session surveys (PANAS for mood, NASA-TLX for perceived workload).

Analysis & Success Metrics

Analysis & Success Metrics

Primary Analysis: A series of linear mixed-effects models will compare the three conditions on the key outcome variables (emotional balance, task performance).
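For illustration, one such model could be specified in Python with statsmodels as below; the column names (participant, condition, balance_score) are assumed, and the full analysis will cover additional outcomes and covariates.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per participant x condition block (column names assumed).
df = pd.read_csv("study_trials.csv")   # participant, condition, balance_score, accuracy, rt

# Random intercept per participant; condition (Adaptive / Non-Adaptive / Static) as fixed effect.
model = smf.mixedlm(
    "balance_score ~ C(condition, Treatment(reference='Static'))",
    data=df,
    groups=df["participant"],
)
result = model.fit()
print(result.summary())
```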


Success will be demonstrated by:


  1. A statistically significant improvement in emotional balance (shift toward target valence/arousal) in the Adaptive condition versus both control conditions.


  2. Maintained or improved task performance in the Adaptive condition, proving the system supports—rather than disrupts—cognition.


  3. Subjective reports that align with the physiological data, indicating the adaptation felt helpful and appropriate.

Outcomes:

Deliverables & Insights

  • A validated, functional prototype of a closed-loop adaptive system using EEG and generative AI.

  • A replicable research protocol for testing emotionally intelligent interfaces.

  • An evidence-based Design Dictionary mapping specific visual parameters to measurable emotional outcomes.

Direct Product Applications

Adaptive AI & Agentic Systems

Enables AI assistants to adjust their tone, pace, or visual presentation when they detect user frustration or cognitive overload, building trust through responsiveness.

Digital Wellbeing & Productivity

Informs the development of tools that actively help users regulate focus and stress through contextual interface shifts, not just usage timers.

Dynamic User Interfaces

Provides a blueprint for interfaces that adjust layout, information density, and contrast based on real-time user state, moving beyond static design systems.

Immersive Computing (AR/VR)

Creates a foundation for environments that dynamically adapt to maintain comfort, reduce simulator sickness, and sustain engagement.

Next Steps & Future Work

Immediate Next Steps (Next 6 Months):

  1. Complete the Main Study: Recruit and run the controlled study with 30-40 participants.

  2. Data Analysis & Publication: Analyze results, submit findings to a leading HCI conference (e.g., CHI, UbiComp), and open-source the core system components.

  3. Framework Dissemination: Develop the first public version of the Adaptive Design Protocol for industry and academic feedback.

Long-Term Vision:

The project is evolving toward an applied system that serves product teams and industry: a multimodal emotion engine and a UX workflow for designing, testing, and training agentic AI.

1. Multimodal Emotion Detection Layer

A unified sensing layer combining EEG, eye-tracking, heart-rate variability, and facial cues to estimate user state with higher confidence.
The long-term aim is a deployable SDK that product teams can plug into usability studies or feature testing to measure cognitive load, frustration, calm, or focus.

2. Emotion-Adaptive UI and Environment API

The Design Dictionary becomes a programmable API for adjusting parameters such as color temperature, spatial openness, contrast, motion, and environmental imagery in real time.
This supports products that aim to stabilize stress, maintain attention, or personalize sensory environments across productivity, education, or wellness contexts.

3. UX Workflow for Training Agentic AI

A structured pipeline for evaluating and improving emotionally responsive AI agents:
• run controlled tests
• collect multimodal signals
• evaluate AI decisions against ground-truth emotional shifts
• iterate to create safer, more aligned adaptive behavior
This positions the work as a foundation for human-centered agentic AI systems.

4. Longitudinal Personalization

Develop adaptive profiles that learn how user emotional responses evolve over days or months.
Potential applications include focus tools, adaptive learning environments, mental-health platforms, and XR systems designed to reduce cognitive strain.

5. Enterprise and Cross-Sector Deployment

Target sectors where emotional regulation and cognitive performance directly affect outcomes, including control rooms, clinical environments, training simulations, and industrial UX such as monitoring systems.
This aligns the research with operational impact and real-world product integration.

If my work resonates with you, feel free to reach out. I’m always happy to connect.

nfatemi@ncsu.edu

© 2025