Reaction Times Repository

This repository contains the implementations used in the work-in-progress paper "A Comparative Study on Reaction Times using Visual, Audio, and Haptic Stimuli with Direct, Encoded or Dynamic Deliveries". The hardware required for each modality is as follows:

  1. Direct Visual: Raspberry Pi 4, Millisecond timer IPC-3834-T, 1x Push-Button, Noise-cancelling headphones Sony WH-1000XM4.
  2. Direct Audio: Raspberry Pi 4, Millisecond timer IPC-3834-T, GPIO piezo-electric buzzer, 1x Push-Button.
  3. Direct Haptic: Millisecond timer IPC-3834-T, 2x Push-Buttons, Noise-cancelling headphones Sony WH-1000XM4.
  4. Encoded Visual: Raspberry Pi 4, Millisecond timer IPC-3834-T, HDMI Camera Antrica ANT-SP1080P60X20, H264 Encoder + Decoder Antrica ANT-7000, Monitor ASUS VP247HAE, 1x Push-Button, Noise-cancelling headphones Sony WH-1000XM4.
  5. Encoded Audio: Raspberry Pi 4, Millisecond timer IPC-3834-T, PC, Noise-cancelling headphones Sony WH-1000XM4, 1x Push-Button.
  6. Encoded Haptic: Raspberry Pi 4, Millisecond timer IPC-3834-T, PC, haptic vest bHaptics Tactsuit X40, 1x Push-Button, Noise-cancelling headphones Sony WH-1000XM4.
  7. Dynamic Visual: PC, Wheel + Pedals Logitech G29, VR Headset Meta Quest 3, Noise-cancelling headphones Sony WH-1000XM4.
  8. Dynamic Audio: PC, Wheel + Pedals Logitech G29, Noise-cancelling headphones Sony WH-1000XM4.
  9. Dynamic Haptic: PC, Wheel + Pedals Logitech G29, haptic vest bHaptics Tactsuit X40, Noise-cancelling headphones Sony WH-1000XM4.

Repository Structure:

Reaction_Times/
├── CAR3_2021_v2.0.5/          [Dynamic] Unity simulator
├── NODE_APP_v2.05/            [Dynamic] Node.js cockpit server
├── NAC_GW_v2.05/              [Dynamic] Python InfluxDB gateway
├── RPI/                       [Direct/Encoded] Raspberry Pi control scripts
├── MilisecondApp/             [Encoded] Unity UDP stimulus listener
└── README.md                  

Direct/Encoded Modality

Raspberry Pi Scripts

Location on device: /home/xruser/PRUEBAS2

script_a.py (Direct Visual and Encoded Visual)

Changes the relay position at a random time between t and t + 3 seconds:

cd /home/xruser/PRUEBAS2
python ./script_a.py 7

The argument 7 means the relay will trigger between 7 and 10 seconds.
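A minimal sketch of this behaviour, assuming the relay is driven with the RPi.GPIO library and a hypothetical RELAY_PIN (the actual script may use a different pin or library):

import random
import sys
import time

import RPi.GPIO as GPIO

RELAY_PIN = 24                                      # hypothetical; check the actual wiring

base_delay = int(sys.argv[1])                       # e.g. 7
delay = random.uniform(base_delay, base_delay + 3)  # trigger between t and t + 3 seconds

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

time.sleep(delay)
GPIO.output(RELAY_PIN, GPIO.HIGH)                   # the relay change starts the MS clock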

script_b.py (Direct Audio)

Similar to script_a.py, but also sounds the buzzer synchronized with the relay change:

python ./script_b.py 7
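Extending the sketch above, the buzzer (GPIO pin 23, per the Hardware Wiring section below) would be driven in the same block as the relay so that both fire together; again a sketch, not the actual script:

BUZZER_PIN = 23                                     # see the Hardware Wiring section

GPIO.setup(BUZZER_PIN, GPIO.OUT, initial=GPIO.LOW)

time.sleep(delay)
GPIO.output(RELAY_PIN, GPIO.HIGH)                   # start the MS clock ...
GPIO.output(BUZZER_PIN, GPIO.HIGH)                  # ... and sound the buzzer at the same time
time.sleep(0.5)                                     # hypothetical buzzer duration
GPIO.output(BUZZER_PIN, GPIO.LOW)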

script_c.py (Encoded Haptic)

Starts the millisecond clock (via the relay) after the randomized delay and, at the same moment, sends a UDP packet to the Unity MilisecondApp to activate the haptic signal on the Tactsuit:

python ./script_c.py 7 192.168.1.5 8051
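Here the two extra arguments are the IP address and UDP port of the PC running MilisecondApp. A minimal sketch of the trigger, reusing the same random-delay logic and assuming a simple text payload (the message format actually expected by MilisecondApp may differ):

import random
import socket
import sys
import time

base_delay = int(sys.argv[1])                       # e.g. 7
target_ip = sys.argv[2]                             # PC running MilisecondApp
target_port = int(sys.argv[3])                      # UDP port of the Unity listener

time.sleep(random.uniform(base_delay, base_delay + 3))

# ... activate the relay here to start the MS clock, as in script_a.py ...

# Hypothetical payload; MilisecondApp may expect a different message format.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"HAPTIC", (target_ip, target_port))
sock.close()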

script_d.py (Encoded Audio)

Similar to script_c.py, but plays a sound in the Unity MilisecondApp instead of triggering a haptic signal:

python ./script_d.py 7 192.168.1.5 8051

Hardware Wiring


Relay circuit:

  • Relay output cables → lower two connectors of the MS (millisecond) clock.
  • This starts the clock when the relay activates.

Response mechanism:

  • Push-buttons → upper two connectors of the MS clock.
  • Pressing the button stops the clock and records the reaction time.

Buzzer:

  • Connected to Raspberry Pi GPIO pin 23 (configurable in scripts).

Reaction Time Measurement

All sessions should be conducted in a quiet laboratory room (using noise-cancelling headphones is recommended) with controlled lighting and minimal visual or auditory distractions. Participants are required to press the primary push-button as quickly as possible when they detect the stimulus.

Direct stimuli are generated with minimal processing and delivered immediately. For visual tasks, a timestamp appears directly on a physical timer (IPC-3834-T) with millisecond precision. Audio tasks use a buzzer, while haptic tasks involve a direct, localized physical tap from the secondary push-button.

Encoded stimuli are delivered through specific devices, introducing a processing and transmission delay. For visual stimuli, the HDMI camera captures the physical clock, and after encoding/decoding, the timer is shown on a monitor. Encoded audio tasks are delivered via a PC app (/MilisecondApp) through headphones, and encoded haptic feedback is administered via the Tactsuit X40.

Latency Budget Measurement

To obtain the effective reaction time from the observed one, the latency budgets need to be calculated for each stimulus type.
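That is, the idea is to subtract the measured latency budget from the observed reaction time; a trivial example, assuming the ~100 ms Encoded Visual budget reported below:

observed_rt_ms = 350        # hypothetical value read from the MS clock
latency_budget_ms = 100     # e.g. the Encoded Visual budget measured below
effective_rt_ms = observed_rt_ms - latency_budget_ms   # 250 ms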

Direct Visual/Audio/Haptic

In the Direct modalities, processing latency is negligible (on the order of microseconds) for both the actuators (due to the use of unprocessed signals) and the push-button. For example, in the case of the Direct Audio actuator, inspecting the audio waveform shows that the stimulus signal generated by the Raspberry Pi (which activates the Timer and the Buzzer) and the stimulus representation (the buzzer sound) are synchronized:

LatencyBudget_DirAudio.mp4

Note: Video played at 0.1X speed.

Encoded Visual

Encoded Visual processing latency includes video capture, H264 encoding, Ethernet transport, H264 decoding, and display. To measure this latency, the camera is pointed directly at a timestamp (shown on a 240 Hz PC display for better precision than the 60 Hz display of the timer), and both the source timestamp and the decoded output are captured with an external 240 fps camera. An offset of around 100 ms is observed between the stimulus signal generated by the Raspberry Pi (synchronized with the Timer, as seen in the previous video) and the stimulus representation (the display):

LatencyBudget_EncVideo.mp4

Note: Video played at 0.1X speed.

Encoded Audio

The Encoded Audio processing latency includes the UDP communication from the Raspberry Pi to the PC and the time the sound card and headphones take to reproduce the sound generated in Unity. The methodology is similar to the one employed for the Direct Audio, but since the sound is reproduced in the headphones, the lower volume makes waveform analysis difficult, so the volume bar of the PC was used as the reference instead. The latency observed is around 230 ms between the stimulus generation in the Raspberry Pi and its representation in the headphones:

LatencyBudget_EncAudio.mp4

Note: Video played at 0.1X speed.

Encoded Haptic

In the case of the Encoded Haptic stimuli, the processing latency involves the UDP communication from the Raspberry Pi to the PC and the delay between the bHaptics API call and the execution of the specified vibration in the bHaptics Tactsuit. The methodology for this measurement is capturing the timestamp at which the haptic vest begins to vibrate, observed by analyzing the slow-motion (240 fps) video frame by frame. The latency observed in this case is around 40 ms between the stimulus generation and its representation (Tactsuit):

LatencyBudget_EncHaptic.mp4

Note: Video played at 0.1X speed.


Dynamic Modality

Key components:

  • CAR3_2021_v2.0.5/: Unity immersive driving simulator with integrated reaction-time measurement (Dynamic Visual via traffic-light changes, Dynamic Audio via horn sounds, and Dynamic Haptic via Logitech G29 + bHaptics Tactsuit vibrations).
  • NODE_APP_v2.05/: Node.js Express server managing the application backend and communication with Logitech G29.
  • NAC_GW_v2.05/: Python gateway that stores the events in InfluxDB.
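As a rough illustration of the gateway's role, the sketch below writes one reaction-time event to InfluxDB using the InfluxDB 2.x Python client (influxdb-client); the connection parameters, measurement, and field names are hypothetical, and the actual NAC_GW_Reaction_Time.py may use a different client or schema:

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Hypothetical connection parameters and schema.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

point = (
    Point("reaction_time")                 # hypothetical measurement name
    .tag("modality", "dynamic_visual")     # visual / audio / haptic
    .field("rt_ms", 312)                   # observed reaction time in ms
)
write_api.write(bucket="reaction_times", record=point)
client.close()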

Setup & Execution

Prerequisites

  1. Create a working directory:

    mkdir TOD_REACTION_TIME
    cd TOD_REACTION_TIME
  2. Extract the provided TOD_REACTION_TIME.zip to this directory. You will have:

    • CAR3_2021_v2.0.5/ (Unity project)
    • NODE_APP_v2.05/ (Node.js server)
    • NAC_GW_v2.05/ (Python gateway)

Step-by-step Execution

  1. Start Meta Quest Link on the PC.

  2. Launch the Unity simulator:

    • Open Unity Hub and add the CAR3_2021_v2.0.5/ project.
    • Open the project and press Play in the Unity Editor.
    • The application will stream to the Meta Quest 3 via Quest Link.
  3. Connect Logitech G29:

    • Start Logitech G Hub on the PC.
    • Connect the steering wheel via USB.
  4. Start the Node.js Cockpit Server:

    cd NODE_APP_v2.05
    node bin/COCKPIT_v2.0.js

    The server is now receiving stimulus events from the Unity simulator.

  5. Start the NAC Gateway (in a separate terminal):

    cd NAC_GW_v2.05
    python NAC_GW_Reaction_Time.py

    Events are now being written to InfluxDB.

Note: Execution order is flexible; all services are asynchronous and will connect as they start.

Reaction Time Measurement

As with the Direct/Encoded stimuli, measurements need to be conducted in a quiet laboratory room (using noise-cancelling headphones is recommended) with controlled lighting and minimal visual or auditory distractions. Participants are required to press the brake pedal of the Logitech G29 as quickly as possible when they detect the stimulus.


Dynamic stimuli are embedded in a real-time simulated driving scenario, where participants navigate through visual, auditory, and haptic cues. Users control a virtual vehicle with the wheel and pedals and are required to brake in the VR environment as soon as they perceive any target stimulus, mimicking real-world remote driving settings. The visual stimulus is a traffic light turning from green to red or vice versa; the audio stimulus is a distinctive horn sound reproduced through the headphones; and the haptic stimulus is a simultaneous vibration of both the Logitech G29 wheel and the Tactsuit X40 vest. Stimulus generation is randomized in time and type: at a random time within a configured interval, a randomly chosen stimulus type (visual, audio, or haptic) is presented.
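The simulator itself is implemented in Unity, but the scheduling logic can be illustrated with a short Python sketch (the waiting window and trial count are hypothetical):

import random
import time

STIMULI = ["visual", "audio", "haptic"]

def run_trials(num_trials, min_wait_s=7.0, max_wait_s=10.0):
    """Wait a random time within the configured window, then fire a random stimulus."""
    for _ in range(num_trials):
        time.sleep(random.uniform(min_wait_s, max_wait_s))
        stimulus = random.choice(STIMULI)
        print(f"trigger {stimulus} stimulus")   # the simulator would start the traffic-light
                                                # animation, horn sound, or vibration here

run_trials(num_trials=5)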

Display: Bottom-left corner of the virtual windshield, in real time.

Logging:

  • File: logs/log-events.log
  • Database: InfluxDB (via NAC Gateway)

Visualization: Grafana dashboards can query InfluxDB to display KPIs, response distributions, and historical trends.

Key Files

  • CAR3_2021_v2.0.5/Assets/ - Unity scenes, scripts, and 3D models
  • NODE_APP_v2.05/bin/COCKPIT_v2.0.js - Main server entry point
  • NODE_APP_v2.05/logs/log-events.log - Event log file
  • NAC_GW_v2.05/NAC_GW_Reaction_Time.py - Gateway processor

Latency Budget Measurement

To obtain the effective reaction time from the observed one, the latency budgets need to be calculated for each stimulus type.

Dynamic Visual

In the Dynamic Visual modality, the processing latency corresponds to the update of the traffic light, which is driven by a specific Unity animation. To measure this latency, the Unity application displays both a continuous timestamp (updated every frame) and the timestamp of the random stimulus generation, and the difference between the generated timestamp and the displayed one is compared. An external 240 fps camera pointed directly at the display is used to compare both timestamps at the moment the traffic light changes (by later analyzing the footage frame by frame). The results show an observed offset of around 120 ms between the stimulus signal generated in Unity and the stimulus representation (the traffic light change):

LatencyBudget_DynVideo.mp4

Note: Video played at 0.1X speed.

Dynamic Audio

In the Dynamic Audio modality, the processing latency mainly originates from the time the sound card and headphones take to reproduce the sound generated in Unity. The methodology is similar to the one employed for the Encoded Audio, using the volume bar of the PC as the reference. The latency observed is lower than in the Encoded Audio modality due to a more optimized audio signal: around 150 ms between the stimulus generation in Unity and its representation in the headphones:

LatencyBudget_DynAudio.mp4

Note: Video played at 0.1X speed.

Dynamic Haptic

Finally, for the Dynamic Haptic modality, the processing latency mainly includes the delay between the bHaptics API call and the execution of the specified vibration in the bHaptics Tactsuit. The Logitech G29 wheel also vibrates in the Dynamic Haptic modality, but it is synchronized with the Tactsuit in code. The methodology is similar to the one employed for the Encoded Haptic, comparing frame by frame the timestamp at which the haptic vest begins to vibrate with the timestamp at which the stimulus was generated. The latency observed is higher than in the Encoded Haptic modality due to the use of a different haptic pattern: around 80 ms between the stimulus generation in Unity and its representation in the haptic vest (and wheel):

LatencyBudget_DynHaptic.mp4

Note: Video played at 0.1X speed.

