A "retrofit" of my college project, for illustration purposes
Figure: Overview of the gaze-controlled Brain-Computer Interface (BCI) system framework. The patient observes a visual stimulus matrix where each item (character or icon) flickers at a unique frequency. Electroencephalography (EEG) signals are acquired via an electrode cap and processed using Independent Component Analysis (ICA) to isolate source components from noise. Frequency analysis algorithms then decode the dominant frequency to identify the user's intended target (e.g., "Water").
The proposed BCI system operates on the principle of Steady-State Visually Evoked Potentials (SSVEP). The workflow is illustrated in Figure 1 and consists of the following modules:
Visual Stimulus Interface: A matrix display presents alphanumeric characters and care-related icons (such as water or assistance). Each block in the matrix oscillates at a distinct flicker frequency, so the evoked brain response carries a frequency signature unique to the attended item.
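The one-frequency-per-item idea above can be sketched as a simple lookup table. The item names, base frequency, and spacing below are illustrative assumptions, not values from the original project:

```python
# Hypothetical stimulus layout: each on-screen block gets a unique
# flicker frequency so its SSVEP response is distinguishable.
def build_stimulus_map(items, base_hz=8.0, step_hz=0.5):
    """Map each display item to a distinct flicker frequency (Hz)."""
    return {item: base_hz + i * step_hz for i, item in enumerate(items)}

stimulus_map = build_stimulus_map(["Water", "Help", "A", "B"])
# e.g. {"Water": 8.0, "Help": 8.5, "A": 9.0, "B": 9.5}
```

In a real paradigm the spacing must exceed the spectral resolution of the analysis window so adjacent targets remain separable.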
Signal Acquisition: The patient's EEG data are captured with a multi-channel electrode cap, focusing on electrodes over the occipital region, where the visual cortex generates the SSVEP response.
Signal Processing (ICA): Raw EEG data often contains ocular and muscle artifacts. We employ Independent Component Analysis (ICA) to decompose the multi-channel data into independent source components, allowing for the effective separation of the SSVEP signal from background noise.
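To make the ICA step concrete, here is a minimal FastICA sketch in NumPy (whitening followed by a tanh-nonlinearity fixed-point update with symmetric decorrelation). This is a generic textbook variant for illustration, not the specific implementation used in the project:

```python
import numpy as np

def whiten(X):
    """Center and whiten channels-x-samples data (unit covariance)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    d, E = np.linalg.eigh(cov)
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    return W @ Xc

def fast_ica(X, n_iter=200, seed=0):
    """Minimal FastICA: estimate independent source components
    from mixed multi-channel recordings."""
    Z = whiten(X)
    n = Z.shape[0]
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        WZ = W @ Z
        G = np.tanh(WZ)               # contrast nonlinearity g(u)
        G_prime = 1.0 - G ** 2        # g'(u)
        W_new = G @ Z.T / Z.shape[1] - np.diag(G_prime.mean(axis=1)) @ W
        # Symmetric decorrelation keeps the unmixing rows orthogonal.
        U, _, Vt = np.linalg.svd(W_new)
        W = U @ Vt
    return W @ Z  # rows are the estimated source components
```

In practice one would inspect the recovered components and discard those matching ocular or muscle artifacts before reconstructing the cleaned EEG.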
Target Identification: The cleaned signal undergoes spectral analysis to locate the dominant frequency peak. This peak is matched against the known stimulus frequencies to determine which block the patient is gazing at, effectively decoding the intent (e.g., selecting the "Water" icon).
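The spectral-peak matching described above can be sketched with an FFT over a synthetic SSVEP-like signal. The sampling rate, epoch length, and stimulus frequencies below are illustrative assumptions:

```python
import numpy as np

def decode_target(signal, fs, stimulus_freqs):
    """Find the dominant frequency via FFT and return the item whose
    known flicker frequency is closest to that peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = freqs >= 1.0  # skip the DC / drift bins before peak picking
    peak_hz = freqs[mask][np.argmax(spectrum[mask])]
    return min(stimulus_freqs, key=lambda k: abs(stimulus_freqs[k] - peak_hz))

# Synthetic 8 Hz SSVEP response buried in noise (illustrative only).
fs = 250                              # assumed sampling rate, Hz
t = np.arange(0, 2, 1.0 / fs)         # 2-second analysis epoch
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 8.0 * t) + 0.5 * rng.standard_normal(t.size)
choice = decode_target(eeg, fs, {"Water": 8.0, "Help": 10.0})
# → "Water"
```

A 2-second epoch gives 0.5 Hz spectral resolution, which is why the illustrative stimulus frequencies are spaced well apart.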
