Emotion-Adaptive Virtual Environments (Ph.D. Thesis)

Role

Ph.D. Candidate

Year

2023 - Current

Project Overview

Emotion-Adaptive Virtual Environments is my doctoral research at NC State University, where I design and test a closed-loop human–AI system. The system reads emotional state in real time using EEG, then adapts AI-generated indoor imagery to help users maintain an optimal balance between calm and focus.

Problem:

Today's AI systems are static and impersonal: they respond identically whether you're stressed or focused, ignoring emotional context. While neurofeedback research shows visuals can regulate emotion, existing approaches use limited, fixed image sets and lack personalization.

Research question:

Can a closed-loop system that adapts AI-generated imagery based on real-time EEG feedback measurably improve emotional balance and task performance compared to static or non-adaptive interfaces?

Scope:

As a Ph.D. candidate in Design with a background in cognitive science and neuroscience, I bridge experimental research and practical interface design. This project builds and validates an EEG-driven adaptive system that uses generative AI to create responsive environments, moving beyond one-size-fits-all interaction.

Methods:
  • Mixed-Methods Research

  • Human–AI Interaction Evaluation

  • EEG Recording and Signal Processing

  • Generative AI Prototyping

  • Behavioral and Self-Report Measures

  • Statistical Analysis

Research Design

This research develops and validates a closed-loop adaptive system that senses emotional state via EEG and responds by modifying AI-generated indoor environments in real time. The study examines whether such personalized adaptation can improve emotional regulation and cognitive focus compared to traditional static interfaces.

Research Gaps

Static AI Interaction:

Most AI systems today are reactive and text-dependent; they ignore the user’s real-time emotional and cognitive state and offer the same experience to everyone.

Limited Personalization:

Neurofeedback and biofeedback tools often rely on fixed, pre-recorded visual or auditory stimuli, which cannot dynamically adapt to an individual’s shifting emotional needs.

Lack of Real-Time Adaptation:

While neuroscience confirms that visuals affect emotion, existing systems do not implement a real-time, data-driven loop that adjusts to a user’s moment-to-moment physiological state.

Research Aims

  • To map specific visual parameters, such as color warmth, spatial openness, and clutter, to measurable changes in emotional valence and arousal.


  • To quantify whether adaptive, EEG‑driven imagery significantly improves emotional balance compared to static or pre‑scripted visuals.


  • To deliver a replicable protocol and design dictionary that product teams can use to build and test emotion‑adaptive features.

System Diagram

This diagram shows the closed loop:

EEG is translated into valence and arousal, mapped to the Design Dictionary, used to generate a new room image, then measured again until the target emotional state is reached.

How It Works:

The Closed-Loop System

The core of this research is a functional, real-time adaptive system. It moves from sensing a user's emotional state to generating a personalized visual response in a continuous feedback loop. Below is a breakdown of the five-stage architecture.

Sense

Physiological Data Capture

The loop begins by capturing the user's emotional state. An EEG headband records brainwave activity, focusing on frequency bands scientifically linked to emotional valence and arousal.

Alpha asymmetry reflects emotional valence: more left-frontal activity signals positive approach, more right-frontal signals negative withdrawal.

The beta/alpha ratio reflects arousal: higher ratios mean heightened alertness, lower ratios mean calmer states.
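
As a rough illustration of how these two indices can be computed, the sketch below estimates band power on two frontal channels with SciPy. The channel roles, sampling rate, and band edges are assumptions chosen for illustration, not the exact thesis pipeline (which runs on MNE-Python).

    # Minimal sketch: valence and arousal proxies from two frontal EEG channels.
    # Sampling rate, band edges, and channel assignment are illustrative assumptions.
    import numpy as np
    from scipy.signal import welch

    FS = 256  # assumed sampling rate in Hz

    def band_power(signal, fs, band):
        """Mean power spectral density within a frequency band."""
        freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].mean()

    def emotion_indices(left_frontal, right_frontal, fs=FS):
        """Return (valence_index, arousal_index) from left/right frontal channels."""
        alpha, beta = (8, 13), (13, 30)
        # Frontal alpha asymmetry: ln(right alpha power) - ln(left alpha power);
        # higher values are read as more positive, approach-oriented valence.
        valence = (np.log(band_power(right_frontal, fs, alpha))
                   - np.log(band_power(left_frontal, fs, alpha)))
        # Beta/alpha ratio averaged over both channels as an arousal proxy.
        arousal = np.mean([band_power(ch, fs, beta) / band_power(ch, fs, alpha)
                           for ch in (left_frontal, right_frontal)])
        return valence, arousal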

Infer

Real-Time Emotion Classification

EEG data is processed through a Python pipeline (MNE-Python) to estimate the user’s current position on the valence–arousal model.
The system compares this state to a target emotional point, calculates the difference, and determines the direction of change needed (e.g., increase valence, lower arousal).
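
A minimal sketch of this comparison step is shown below. The EmotionState fields, their normalization, and the specific target values are assumptions chosen for illustration.

    # Sketch of the "infer" step: compare the current (valence, arousal) estimate
    # to a target point and derive the direction of change. Values are assumed.
    from dataclasses import dataclass

    @dataclass
    class EmotionState:
        valence: float  # normalized, -1 (negative) .. +1 (positive)
        arousal: float  # normalized,  0 (calm)     .. 1 (alert)

    TARGET = EmotionState(valence=0.4, arousal=0.5)  # assumed "calm focus" target

    def emotion_gap(current, target=TARGET):
        """Signed adjustment needed on each axis to reach the target."""
        return {
            "delta_valence": target.valence - current.valence,  # > 0: raise valence
            "delta_arousal": target.arousal - current.arousal,  # < 0: lower arousal
        }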

Emotion Map
A simplified valence–arousal map showing common emotional states positioned by their affective intensity and positivity.

Target-State Trajectory
A visual example of how the system guides a user’s emotional state from the current position toward the desired valence–arousal target using iterative adjustments. Each ghost circle represents one iteration of the loop.

Generate

Parameterized AI Imagery

The system translates the emotional gap into updated visual parameters using the Design Dictionary. This dictionary is a structured set of spatial and visual features drawn from neuroarchitecture research. It links specific design attributes to predictable effects on valence and arousal. Examples include color temperature, spatial openness, visual clutter, lighting ratio, and texture density.
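
The dictionary itself is a research deliverable; the entries below are a hypothetical sketch of how such a mapping could be structured. The parameter names come from the text, while the effect weights and prompt phrases are invented purely for illustration.

    # Hypothetical Design Dictionary entries. Effect weights describe the assumed
    # change in valence/arousal when a parameter moves toward its "high" end;
    # the numbers are illustrative, not validated values from the study.
    DESIGN_DICTIONARY = {
        "color_temperature": {  # low = warm light, high = cool light (Kelvin)
            "range": (2700, 6500),
            "valence_effect": -0.2,  # cooler light assumed slightly less pleasant
            "arousal_effect": +0.3,  # and more alerting
            "prompt_terms": {"low": "warm afternoon light", "high": "cool daylight"},
        },
        "spatial_openness": {
            "range": (0.0, 1.0),
            "valence_effect": +0.3,  # openness assumed pleasant
            "arousal_effect": -0.2,  # and calming
            "prompt_terms": {"low": "cozy enclosed nook", "high": "open, airy space"},
        },
        "visual_clutter": {
            "range": (0.0, 1.0),
            "valence_effect": -0.2,  # clutter assumed unpleasant
            "arousal_effect": +0.4,  # and arousing
            "prompt_terms": {"low": "minimal, tidy surfaces", "high": "busy, layered shelving"},
        },
    }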

These parameters are converted into a Stable Diffusion API request, producing a new environment that moves the user closer to the target emotional state.
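
Continuing the sketches above, the snippet below shows one plausible way to turn the emotional gap into a text prompt. The scoring heuristic is an assumption, and the exact request payload depends on which Stable Diffusion endpoint is used, so the API call itself is left out.

    # Sketch of the "generate" step, building on emotion_gap() and
    # DESIGN_DICTIONARY above. The sign heuristic is purely illustrative.
    def build_prompt(gap, dictionary=DESIGN_DICTIONARY):
        terms = ["photorealistic indoor living space"]
        for entry in dictionary.values():
            # score > 0 means pushing this parameter toward its "high" end
            # moves the user in the direction the gap requires.
            score = (gap["delta_valence"] * entry["valence_effect"]
                     + gap["delta_arousal"] * entry["arousal_effect"])
            terms.append(entry["prompt_terms"]["high" if score > 0 else "low"])
        return ", ".join(terms)

    prompt = build_prompt(emotion_gap(EmotionState(valence=-0.2, arousal=0.8)))
    # `prompt` is then sent as the text prompt of a Stable Diffusion request;
    # the endpoint URL and payload schema vary by provider.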

Example of a Design Dictionary entry


Evaluate

Measure the Impact

The generated image is shown to the user while EEG continues recording. The system detects whether the visual change moved the user closer to the target state.
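
A minimal sketch of this check, continuing the earlier snippets; treating the valence–arousal plane as Euclidean is an assumption.

    # Sketch of the "evaluate" step: did the new image move the user closer
    # to the target state?
    import math

    def distance_to_target(state, target=TARGET):
        return math.hypot(target.valence - state.valence,
                          target.arousal - state.arousal)

    def improved(before, after):
        """True if the post-image EEG window is closer to the target."""
        return distance_to_target(after) < distance_to_target(before)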

Refine

Adaptive Iteration

The loop repeats, refining parameters and regenerating imagery until the user’s emotional state converges on the target.
This creates a continuously personalized adaptive environment driven by real-time physiological feedback.

Parameter Adjustment

The system compares the user’s latest EEG state with the target point and adjusts visual parameters accordingly. These changes guide the imagery toward the emotional direction needed, such as lowering arousal or increasing valence.

Cycle

After updating parameters, the system generates a new image and displays it back to the user. The environment shifts gradually, allowing the loop to observe micro-changes in EEG and evaluate whether the adjustment helped.

Convergence Check

The system evaluates whether the emotional state is trending toward the target. If progress slows or reverses, the loop fine-tunes parameters or shifts strategy, ensuring the environment continues moving toward the preferred emotional outcome.
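
Putting the previous sketches together, the loop below illustrates the refine stage end to end. The iteration cap, convergence radius, and callback names are assumptions, not the implemented system.

    # Illustrative refine loop, continuing the earlier snippets.
    MAX_ITERATIONS = 20
    CONVERGENCE_RADIUS = 0.1  # assumed "close enough" distance to the target

    def adaptation_loop(read_state, render_image):
        """read_state() returns an EmotionState from the latest EEG window;
        render_image(prompt) generates and displays a new environment."""
        for _ in range(MAX_ITERATIONS):
            current = read_state()
            if distance_to_target(current) < CONVERGENCE_RADIUS:
                break  # converged on the target emotional state
            gap = emotion_gap(current)
            render_image(build_prompt(gap))
            # The next EEG window (read on the following iteration) shows whether
            # the adjustment helped; if not, the updated gap shifts the parameters.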

Adaptive Imagery in Action

The Adaptive Imagery

A demonstration of parameter-driven adaptation. The system adjusts visual features such as color temperature, lighting, indoor plants, and spatial clutter in response to shifting emotional-state targets, creating a dynamic, responsive environment.

Validating the System:

Study Design

To move from a compelling prototype to validated research, I designed a controlled experiment that tests the core hypothesis: Does EEG-driven adaptive imagery improve emotional balance and cognitive focus compared to non-adaptive interfaces?

Experimental Conditions

The study uses a within-subjects design where each participant experiences all three conditions in a counterbalanced order to control for learning and sequence effects.

1. Adaptive (Experimental)

The full closed-loop system. The imagery updates in real time (every 10-15 seconds) based on the participant's live EEG-derived emotional state.


2. Non-Adaptive (Active Control)

A pre-scripted, "best-practice" sequence of imagery changes. This controls for the novelty of seeing images change, isolating the effect of personalization.


3. Static (Baseline Control)

A single, neutral environment remains unchanged. This establishes a baseline for emotional state and task performance.
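
As a small illustration of the counterbalancing mentioned above, the sketch below cycles participants through the six possible orders of the three conditions. The assignment scheme is an assumption, not the registered protocol.

    # Sketch: assigning counterbalanced condition orders.
    from itertools import permutations

    CONDITIONS = ["Adaptive", "Non-Adaptive", "Static"]
    ORDERS = list(permutations(CONDITIONS))  # all 6 possible orders

    def order_for(participant_id: int):
        """Rotate participants through the six orders."""
        return ORDERS[participant_id % len(ORDERS)]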

Participants & Protocol

  • Sample: 30 adult participants (a mix of graduate students and remote professionals).


  • Session Length: ~60 minutes per participant, including setup, tasks, and debrief.


  • Primary Task: Participants complete standardized cognitive challenges (e.g., timed arithmetic, reading comprehension) under mild time pressure to elicit a measurable stress response.


  • Measures Collected: These triangulate physiological, performance, and subjective data:


    • EEG: Continuous recording of valence (alpha asymmetry) and arousal (beta/alpha ratio).

    • Performance: Accuracy and reaction time on cognitive tasks.

    • Self-Report: Pre- and post-session surveys (PANAS for mood, NASA-TLX for perceived workload).

Analysis & Success Metrics

Primary Analysis: A series of linear mixed-effects models will compare the three conditions on the key outcome variables (emotional balance, task performance).
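
Below is a minimal sketch of how such a model could be fit with statsmodels; the data-frame layout, column names, and file name are assumptions about how the trial data will be organized.

    # Illustrative mixed-effects model: emotional balance by condition, with a
    # random intercept per participant. Column and file names are assumed.
    import pandas as pd
    import statsmodels.formula.api as smf

    # hypothetical layout: one row per participant x condition
    df = pd.read_csv("study_trials.csv")

    model = smf.mixedlm(
        "emotional_balance ~ C(condition, Treatment(reference='Static'))",
        data=df,
        groups=df["participant"],  # random intercept per participant
    )
    print(model.fit().summary())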


Success will be demonstrated by:


  1. A statistically significant improvement in emotional balance (shift toward target valence/arousal) in the Adaptive condition versus both control conditions.


  2. Maintained or improved task performance in the Adaptive condition, demonstrating that the system supports rather than disrupts cognition.


  3. Subjective reports that align with the physiological data, indicating the adaptation felt helpful and appropriate.

Outcomes:

Deliverables & Insights

  • A validated, functional prototype of a closed-loop adaptive system using EEG and generative AI.

  • A replicable research protocol for testing emotionally intelligent interfaces.

  • An evidence-based Design Dictionary mapping specific visual parameters to measurable emotional outcomes.

Direct Product Applications

Adaptive AI & Agentic Systems

Enables AI assistants to adjust their tone, pace, or visual presentation when they detect user frustration or cognitive overload, building trust through responsiveness.

Digital Wellbeing & Productivity

Informs the development of tools that actively help users regulate focus and stress through contextual interface shifts, not just usage timers.

Dynamic User Interfaces

Provides a blueprint for interfaces that adjust layout, information density, and contrast based on real-time user state, moving beyond static design systems.

Immersive Computing (AR/VR)

Creates a foundation for environments that dynamically adapt to maintain comfort, reduce simulator sickness, and sustain engagement.

Next Steps & Future Work

Immediate Next Steps (Next 6 Months):

  1. Complete the Main Study: Recruit and run the controlled study with 30-40 participants.

  2. Data Analysis & Publication: Analyze results, submit findings to a leading HCI conference (e.g., CHI, UbiComp), and open-source the core system components.

  3. Framework Dissemination: Develop the first public version of the Adaptive Design Protocol for industry and academic feedback.

Long-Term Research Vision:

  • Explore Multi-Modal Sensing: Integrate eye-tracking, heart rate variability, or facial expression analysis to create a more robust emotional model.

  • Longitudinal Personalization: Develop systems that learn and adapt to an individual's responses over weeks or months, creating truly personalized adaptive profiles.

  • Cross-Cultural Validation: Investigate how the relationship between visual parameters and emotional response varies across different cultural contexts.

If my work resonates with you, feel free to reach out. I’m always happy to connect.

nfatemi@ncsu.edu

© 2025