CedrusLNZ/SeizureVision
Seizure Vision Project

Seizure Dataset

The Seizure Dataset consists of 90 video recordings, with 45 positive cases and 45 negative cases. The dataset is curated for research on seizure detection and video-based medical analysis.

📂 Original Videos: The 90 raw videos provided by the Rajarshi Group (/mnt/SSD1/linazhang/OriginalVideos)

📂 Trimmed Videos: Videos trimmed to the annotated seizure start and end times for VLM processing (/mnt/SSD1/linazhang/Dataset)

📄 Annotation File: Feature annotations for each video (FeatureAnnotation_V3.csv)

📄 Prompt Files: Prompts used for feature extraction with VLMs.

📂 Related Research Papers: Publications and references related to seizure classification research.

📂 Meeting Records: Zoom meeting recordings with the Rajarshi Group.
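Trimming at the annotated seizure boundaries can be scripted with ffmpeg. A minimal sketch of building the trim command (the paths, file names, and timestamps below are illustrative, not taken from the annotation file, and ffmpeg must be installed to actually run it):

```python
import subprocess

def build_trim_command(src, dst, start_s, end_s):
    """Build an ffmpeg command that copies [start_s, end_s) without re-encoding."""
    return [
        "ffmpeg", "-y",
        "-ss", f"{start_s:.2f}",         # seek to seizure start (seconds)
        "-i", src,
        "-t", f"{end_s - start_s:.2f}",  # keep only the seizure duration
        "-c", "copy",                    # stream copy: fast, no re-encode
        dst,
    ]

# Illustrative example: trim one recording to its annotated seizure window
cmd = build_trim_command("OriginalVideos/case01.mp4",
                         "Dataset/case01_trim.mp4", 12.5, 97.0)
# subprocess.run(cmd, check=True)  # uncomment to run the trim
```

Note that stream copy cuts on keyframes, so boundaries may shift by a fraction of a second; re-encoding (e.g. `-c:v libx264`) gives frame-accurate cuts at the cost of speed.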

Directory Structure

This repository contains the code for video processing, feature extraction, and classification of the Seizure Dataset using Vision-Language Models (VLMs) and machine learning classification techniques.

├── video_processing/                                # Video preprocessing  
│   ├── cutvideos.py                                 # Trim videos based on EEG and doctor-annotated timestamps
│   ├── video_process_command.txt                    # Segmentation of all the videos
│   ├── pose_estimation.py                           # Add pose estimation for all the videos
│   ├── zoom_face.py                                 # Close-up of the patient's face     
│  
├── seizure_dataset/                                 # Seizure annotation dataset 
│   ├── FeatureAnnotation_V3.csv                     # Feature Annotation file
│   ├── xgblopo_annotation.py                        # XGB leave-one-patient-out classification metrics on the seizure annotation dataset
│   ├── xgb5foldcv_annotation.py                     # XGB 5-fold cross-validation classification metrics on the seizure annotation dataset
│ 
├── feature_extraction/                              # Feature extraction
│   ├── visual_feature/                              # Visual features extraction using VLM
│       ├── extract_feature_internvl3_apiserver.py   # Extract visual features with InternVL 3 (deployed as an API server)
│       ├── extract_feature_internvl2d5.py           # Extract visual features with InternVL 2.5
│       ├── extract_feature_minicpm.py               # Extract visual features with MiniCPM
│       ├── merge_feature.py                         # Merge per-segment extraction results
│       ├── feature_metrics.py                       # Compute feature extraction statistics
│       ├── highlight_prediction_error.py            # Highlight prediction errors
│       ├── vlm_apiserver_deploy_command.txt         # Deploy the InternVL VLM locally as an API server
│   ├── auditory_feature/                            # Auditory features extraction using ALM
│       ├── ictal_vocalization/                      # Extract the ictal_vocalization feature with Qwen2-Audio
│       ├── verbal_responsiveness/                   # Extract the verbal_responsiveness feature with Qwen2-Audio
│   ├── prompt.txt                                   # All feature extraction prompts
│
├── classification/                                  # Video classification
│   ├── XGB/                                         # Classify videos using XGBoost with 5-fold cross-validation and leave-one-patient-out
│   ├── KNN/                                         # Classify videos using KNN with 5-fold cross-validation and leave-one-patient-out
│   ├── DeepFM/                                      # Classify videos using DeepFM 
│     
├── results/                                         # Experiment results   
│   ├── Internvl2d5_segment_extraction/              # Visual feature extraction result using Internvl 2.5
│   ├── Internvl3_segment_extraction/                # Visual feature extraction result using Internvl 3
│   ├── Seizure_annotation_metrics/                  # Annotation dataset classification metrics
│   ├── XGB_classification_metrics/                  # XGB classification metrics for extracted features
│   ├── KNN_classification_metrics/                  # KNN classification metrics for extracted features
│   ├── ExperimentMetrics.csv                        # Classification metrics for all experiments
│   ├── FeatureExtractionStatics.csv                 # All feature extraction statistics
│  
└── README.md                                        # Project documentation  
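The classification scripts evaluate with both 5-fold cross-validation and leave-one-patient-out (LOPO), which holds out all of one patient's videos per fold so that no patient appears in both train and test. A minimal pure-Python sketch of LOPO splitting, assuming each video carries a patient ID (the IDs below are made up):

```python
def lopo_splits(patient_ids):
    """Yield (train_idx, test_idx) pairs, holding out one patient's videos per fold."""
    patients = sorted(set(patient_ids))
    for held_out in patients:
        test = [i for i, p in enumerate(patient_ids) if p == held_out]
        train = [i for i, p in enumerate(patient_ids) if p != held_out]
        yield train, test

# Example: 6 videos from 3 patients
ids = ["p1", "p1", "p2", "p2", "p3", "p3"]
folds = list(lopo_splits(ids))  # 3 folds, one held-out patient each
```

When scikit-learn is available, its `LeaveOneGroupOut` splitter produces the same folds with the patient IDs passed as `groups`.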
  

Experimental Steps and Results

  1. Run extractfeature.py to extract all features using a VLM, swapping in the desired VLM and video-sampling settings.
  2. Run xgbcross.py to perform classification with XGBoost and output the metrics.
  3. Run featureaccuracy.py to calculate the per-feature extraction accuracies.

ExperimentMetrics.csv summarizes all experimental classification metrics.
FeatureAccuracy.csv summarizes the accuracy of all feature extractions by VLM.
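Feature-extraction accuracy here means the fraction of videos for which the VLM's value for a feature matches the annotation. A minimal sketch of that computation (the feature values and the eye_deviation feature below are illustrative; only ictal_vocalization appears in the repository's file names):

```python
def feature_accuracy(predicted, annotated):
    """Per-feature accuracy: fraction of videos where the VLM value matches the annotation."""
    acc = {}
    for feat in annotated[0]:
        matches = sum(1 for p, a in zip(predicted, annotated) if p.get(feat) == a[feat])
        acc[feat] = matches / len(annotated)
    return acc

# Illustrative VLM outputs vs. ground-truth annotations for two videos
pred = [{"ictal_vocalization": "yes", "eye_deviation": "left"},
        {"ictal_vocalization": "no",  "eye_deviation": "none"}]
gold = [{"ictal_vocalization": "yes", "eye_deviation": "right"},
        {"ictal_vocalization": "no",  "eye_deviation": "none"}]
print(feature_accuracy(pred, gold))  # -> {'ictal_vocalization': 1.0, 'eye_deviation': 0.5}
```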
