SoulSync-Tm/MCF4_D14_AI-Engine

🎵 SoulSync AI Engine


Advanced AI/ML Backend Service for Emotion Recognition & Multi-Modal Analysis

Powering the SoulSync application with cutting-edge machine learning capabilities


🌟 Overview

SoulSync AI Engine is a sophisticated machine learning backend service that provides real-time emotion recognition and analysis across multiple modalities. Built with FastAPI and Python 3.11, it serves as the intelligent core of the SoulSync ecosystem, processing audio, video, and textual data to understand human emotions and preferences.

🎯 Key Capabilities

  • 🎵 Music Emotion Analysis - Extract emotional features from audio files using advanced signal processing
  • 👤 Human Emotion Detection - Multi-modal emotion recognition combining facial expressions and voice analysis
    • NEW: Deep Learning CNN for face emotion recognition (7 emotions)
    • NEW: LSTM-CNN hybrid for voice emotion recognition
    • NEW: Intelligent fusion with confidence weighting
  • 🗣️ Voice Emotion Recognition - Real-time analysis of vocal patterns and emotional states
  • 🌐 Language Detection - Automatic language identification from audio inputs
  • 📷 Face Detection & Analysis - Advanced facial recognition using OpenCV
  • 🔄 Real-time Processing - Asynchronous processing for optimal performance
  • 🌍 Cross-Origin Support - CORS-enabled API for seamless frontend integration

⚡ Recent Major Improvements

🚀 Enhanced Emotion Detection (March 2026)

  • Upgraded from basic heuristics to state-of-the-art deep learning models
  • Face Emotion: 4-block CNN with BatchNorm (65-70% accuracy on FER2013)
  • Voice Emotion: Hybrid CNN-BiLSTM with 180-dimensional audio features
  • Intelligent Fusion: Confidence-weighted multi-modal integration
  • Low Latency: All models run locally (150-300ms total processing)
  • 7 Emotions: Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral
  • See EMOTION_MODEL_IMPROVEMENTS.md for details

🏗️ Architecture & Technology Stack

Core Technologies

  • Backend Framework: FastAPI (async/await support)
  • Python Version: 3.11.14 (optimized for ML packages)
  • Environment Management: Conda
  • API Documentation: OpenAPI/Swagger + ReDoc

Machine Learning Stack

  • Audio Processing: Librosa, SoundFile
  • Computer Vision: OpenCV, MediaPipe
  • Data Science: NumPy, Pandas, Scikit-learn
  • Feature Extraction: MFCC, Chroma, Spectral Analysis
  • Model Support: Ready for PyTorch, TensorFlow, ONNX

Infrastructure

  • Database: PyMongo (MongoDB integration)
  • File Handling: aiofiles (async file operations)
  • HTTP Client: HTTPx
  • Configuration: python-dotenv
  • Deployment: Uvicorn ASGI server

📁 Project Structure

ai_engine/
│
├── 📱 app/                          # Main application directory
│   ├── 🚀 main.py                  # FastAPI application entry point
│   ├── 📍 routers/                 # API route definitions
│   │   ├── 🎵 music_router.py      # Music emotion analysis endpoints
│   │   ├── 👤 human_router.py      # Human emotion detection endpoints
│   │   └── 🗣️ language_router.py   # Language detection endpoints
│   │
│   ├── 🧠 services/                # Business logic & ML services
│   │   ├── 🎵 music_emotion_service.py     # Audio feature extraction & analysis
│   │   ├── 👤 human_emotion_service.py     # Multi-modal emotion recognition
│   │   ├── 🗣️ voice_emotion_service.py     # Voice pattern analysis
│   │   ├── 📷 face_emotion_service.py      # Facial expression analysis
│   │   └── 🌐 language_service.py          # Language identification
│   │
│   ├── 📊 models/                  # Data models & schemas
│   ├── 🛠️ utils/                   # Utility functions
│   └── ⚙️ config/                  # Configuration files
│       └── settings.py             # Application settings
│
├── 📋 requirements.txt             # Python dependencies
├── 🚀 activate.sh                 # Environment activation script
├── 🤖 face_detection_short_range.tflite  # MediaPipe model file
└── 📖 README.md                   # This comprehensive guide

🚀 Quick Start Guide

Quick Setup for Improved Emotion Detection (New!)

# 1. Install all dependencies
./install.sh
# OR manually:
# pip install -r requirements.txt

# 2. Setup emotion detection models
python setup_models.py

# 3. Verify installation
python test_system.py

# 4. Start the server
python -m uvicorn app.main:app --reload

Method 1: One-Click Setup (Recommended)

# Make the activation script executable
chmod +x activate.sh

# Activate environment and see all available commands
source activate.sh

# Start the development server
python app/main.py

Method 2: Manual Setup

# Activate the conda environment
conda activate soulsync_ai

# Navigate to app directory
cd app

# Start the server
python main.py

# Alternative: Use uvicorn directly
uvicorn main:app --reload --host 0.0.0.0 --port 8000

Method 3: Production Deployment

# Production server with optimized settings
uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 4

🔧 Environment Setup

Prerequisites

  • Conda/Miniconda installed on your system
  • Python 3.11+ (handled by conda environment)
  • macOS/Linux/Windows compatibility

Environment Configuration

| Setting | Value |
|---------|-------|
| Environment Name | soulsync_ai |
| Python Version | 3.11.14 |
| Location | /Users/vignesvm/miniconda3/envs/soulsync_ai |
| Package Manager | pip (within the conda env) |

Core Dependencies

Web Framework & API

fastapi>=0.108.0          # Modern, high-performance web framework
uvicorn>=0.24.0           # Lightning-fast ASGI server
pydantic>=2.5.0           # Data validation using Python type hints
python-multipart>=0.0.6   # Multipart form-data support
httpx>=0.25.0             # Async HTTP client

Data Science & Machine Learning

numpy>=1.24.0             # Numerical computing foundation
pandas>=2.1.0             # Data manipulation and analysis
scikit-learn>=1.3.0       # Machine learning algorithms

Audio/Video Processing

librosa                   # Audio analysis library
soundfile                 # Audio file I/O
opencv-python             # Computer vision library

Database & Utilities

pymongo>=4.6.0            # MongoDB driver
python-dotenv>=1.0.0      # Environment variable management
aiofiles>=23.2.0          # Asynchronous file operations

Advanced ML Packages (Optional)

For enhanced capabilities, install these packages as needed:

# Deep Learning Frameworks
pip install torch torchaudio     # PyTorch ecosystem
pip install transformers         # Hugging Face transformers
pip install tensorflow          # Google's ML framework

# Advanced Computer Vision & Audio
pip install mediapipe           # On-device ML pipelines (face/pose detection)
pip install onnxruntime         # ONNX model inference

🌐 API Documentation

Base URLs

  • Development: http://localhost:8000
  • Production: https://your-domain.com/api/v1

Core Endpoints

🏠 System Endpoints

| Method | Endpoint | Description | Response |
|--------|----------|-------------|----------|
| GET | / | Server status check | {"message": "SoulSync AI Engine Running"} |
| GET | /health | Health monitoring | {"status": "healthy", "service": "ai_engine"} |

🎵 Music Analysis Endpoints

| Method | Endpoint | Description | Parameters |
|--------|----------|-------------|------------|
| POST | /detect-song-emotion | Analyze musical emotion | file: UploadFile (audio) |

Request Example:

curl -X POST "http://localhost:8000/detect-song-emotion" \
     -H "accept: application/json" \
     -H "Content-Type: multipart/form-data" \
     -F "file=@song.mp3"

Response Example:

{
  "emotion": "happy",
  "confidence": 0.87,
  "features": {
    "tempo": 128.5,
    "energy": 0.76,
    "valence": 0.82
  }
}

👤 Human Emotion Endpoints

| Method | Endpoint | Description | Parameters |
|--------|----------|-------------|------------|
| POST | /detect-human-emotion | Multi-modal emotion analysis | image: UploadFile, audio: UploadFile |

Request Example:

curl -X POST "http://localhost:8000/detect-human-emotion" \
     -H "accept: application/json" \
     -H "Content-Type: multipart/form-data" \
     -F "image=@face.jpg" \
     -F "audio=@voice.wav"

Response Example:

{
  "overall_emotion": "neutral",
  "face_emotion": {
    "emotion": "neutral",
    "confidence": 0.78
  },
  "voice_emotion": {
    "emotion": "calm",
    "confidence": 0.65
  },
  "combined_confidence": 0.72
}

🌐 Language Detection Endpoints

| Method | Endpoint | Description | Parameters |
|--------|----------|-------------|------------|
| POST | /detect-language | Audio language identification | file: UploadFile (audio) |

📚 Interactive Documentation

With the server running, FastAPI automatically serves interactive API documentation:

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc

🎯 Usage Examples

1. Music Emotion Analysis

import requests

# Analyze emotion in a music file
with open('song.mp3', 'rb') as audio_file:
    response = requests.post(
        'http://localhost:8000/detect-song-emotion',
        files={'file': audio_file}
    )
    result = response.json()
    print(f"Detected emotion: {result['emotion']}")

2. Human Emotion Detection

import requests

# Analyze human emotion using face and voice
with open('face.jpg', 'rb') as img_file, open('voice.wav', 'rb') as audio_file:
    response = requests.post(
        'http://localhost:8000/detect-human-emotion',
        files={
            'image': img_file,
            'audio': audio_file
        }
    )
    result = response.json()
    print(f"Overall emotion: {result['overall_emotion']}")

3. Language Detection

// Frontend JavaScript example
const formData = new FormData();
formData.append('file', audioFile);

fetch('http://localhost:8000/detect-language', {
    method: 'POST',
    body: formData
})
.then(response => response.json())
.then(data => console.log('Detected language:', data.language));

🔬 Mathematical Foundations

This section details the mathematical models, formulas, and algorithms powering SoulSync's emotion recognition capabilities.

1. Audio Feature Extraction

1.1 Mel-Frequency Cepstral Coefficients (MFCC)

MFCCs are the primary features for audio emotion analysis, capturing the spectral envelope of sound.

Step 1: Pre-emphasis Filter

$$y[n] = x[n] - \alpha \cdot x[n-1]$$

where $\alpha \approx 0.97$ emphasizes higher frequencies.

Step 2: Frame Segmentation

$$x_m[n] = x[n] \cdot w[n - mH]$$

where $w[n]$ is a Hamming window:

$$w[n] = 0.54 - 0.46 \cos\left(\frac{2\pi n}{N-1}\right)$$

Step 3: Fast Fourier Transform (FFT)

$$X_m[k] = \sum_{n=0}^{N-1} x_m[n] e^{-j2\pi kn/N}$$

Step 4: Mel Filter Bank

Convert frequency to Mel scale:

$$m = 2595 \log_{10}\left(1 + \frac{f}{700}\right)$$

Apply triangular filters:

$$S_m[i] = \sum_{k=0}^{N/2} |X_m[k]|^2 \cdot H_i[k]$$

Step 5: Discrete Cosine Transform (DCT)

$$c[n] = \sum_{m=0}^{M-1} \log(S[m]) \cos\left(\frac{\pi n(m + 0.5)}{M}\right)$$

Typically extract 13-40 MFCC coefficients.
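Steps 1, 2, and 4 map directly onto a few lines of NumPy. The helper names below (pre_emphasis, hamming, hz_to_mel) are illustrative, not functions from this codebase:

```python
import numpy as np

def pre_emphasis(x, alpha=0.97):
    """Step 1: y[n] = x[n] - alpha * x[n-1] (first sample passed through)."""
    return np.append(x[0], x[1:] - alpha * x[:-1])

def hamming(N):
    """Step 2 window: w[n] = 0.54 - 0.46 * cos(2*pi*n / (N-1))."""
    n = np.arange(N)
    return 0.54 - 0.46 * np.cos(2 * np.pi * n / (N - 1))

def hz_to_mel(f):
    """Step 4 scale conversion: m = 2595 * log10(1 + f/700)."""
    return 2595.0 * np.log10(1.0 + f / 700.0)
```

In practice, librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13) runs the whole pipeline in one call.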

1.2 Chroma Features

Represent the 12 pitch classes of the musical octave:

$$C[p] = \sum_{k: \text{pitch}(k) = p} |X[k]|^2$$

where $p \in \{C, C\#, D, \ldots, B\}$.

1.3 Spectral Features

Spectral Centroid (brightness):

$$SC = \frac{\sum_{k=0}^{N-1} k \cdot |X[k]|}{\sum_{k=0}^{N-1} |X[k]|}$$

Spectral Rolloff (frequency below which 85% of energy is contained):

$$\sum_{k=0}^{K_r} |X[k]|^2 = 0.85 \sum_{k=0}^{N-1} |X[k]|^2$$

Zero Crossing Rate:

$$ZCR = \frac{1}{2N} \sum_{n=1}^{N-1} |\text{sgn}(x[n]) - \text{sgn}(x[n-1])|$$
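Two of these descriptors are simple enough to check directly against the formulas. The functions below are illustrative sketches (librosa.feature.spectral_centroid and librosa.feature.zero_crossing_rate are the production equivalents):

```python
import numpy as np

def spectral_centroid(mag):
    """SC = sum_k k * |X[k]| / sum_k |X[k]|, over FFT-bin indices k."""
    k = np.arange(len(mag))
    return np.sum(k * mag) / np.sum(mag)

def zero_crossing_rate(x):
    """ZCR = (1 / 2N) * sum_n |sgn(x[n]) - sgn(x[n-1])|."""
    signs = np.sign(x)
    return np.sum(np.abs(np.diff(signs))) / (2 * len(x))
```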

1.4 Tempo & Beat Tracking

Onset Strength Envelope:

$$O[n] = \sum_{k} \max(0, |X_n[k]| - |X_{n-1}[k]|)$$

Autocorrelation for Tempo:

$$R[\tau] = \sum_{n} O[n] \cdot O[n + \tau]$$

Tempo (BPM) is extracted from peaks in $R[\tau]$.
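The two formulas above can be sketched as follows (onset_strength and tempo_lag are illustrative names; librosa.beat.beat_track is the usual production route). The dominant lag then converts to BPM via bpm = 60 * frame_rate / lag:

```python
import numpy as np

def onset_strength(S):
    """O[n] = sum_k max(0, |X_n[k]| - |X_{n-1}[k]|); S has shape (frames, bins)."""
    diff = np.diff(S, axis=0)
    return np.sum(np.maximum(0.0, diff), axis=1)

def tempo_lag(onset, min_lag=1):
    """Lag tau maximizing the autocorrelation R[tau] = sum_n O[n] * O[n + tau]."""
    n = len(onset)
    R = [np.sum(onset[:n - tau] * onset[tau:]) for tau in range(min_lag, n)]
    return min_lag + int(np.argmax(R))
```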


2. Face Emotion Recognition (CNN)

2.1 Convolutional Layer

$$Y_{i,j,k} = \sigma\left(\sum_{c} \sum_{m,n} W_{m,n,c,k} \cdot X_{i+m, j+n, c} + b_k\right)$$

where:

  • $W$ = learnable kernel weights
  • $b$ = bias term
  • $\sigma$ = activation function (ReLU)

ReLU Activation:

$$\text{ReLU}(x) = \max(0, x)$$

2.2 Batch Normalization

Normalize activations across mini-batch:

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}$$

$$y_i = \gamma \hat{x}_i + \beta$$

where $\mu_B$ and $\sigma_B$ are batch mean and variance, $\gamma$ and $\beta$ are learnable parameters.
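A minimal NumPy check of these two equations (batch_norm is an illustrative helper, with scalar gamma and beta for brevity):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch axis, then scale and shift."""
    mu = x.mean(axis=0)            # batch mean, mu_B
    var = x.var(axis=0)            # batch variance, sigma_B^2
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```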

2.3 Max Pooling

$$Y_{i,j,k} = \max_{m,n \in \text{window}} X_{i \cdot s + m, j \cdot s + n, k}$$

Reduces spatial dimensions while retaining important features.

2.4 Softmax Classification

Final layer outputs probability distribution over 7 emotions:

$$P(y = c | x) = \frac{e^{z_c}}{\sum_{i=1}^{7} e^{z_i}}$$

where $z = W^T x + b$ are the logits.

2.5 Cross-Entropy Loss

$$L = -\sum_{c=1}^{7} y_c \log(\hat{y}_c)$$

where $y_c$ is one-hot encoded true label and $\hat{y}_c$ is predicted probability.
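Both equations in NumPy, with the standard max-shift for numerical stability (softmax and cross_entropy are illustrative helpers):

```python
import numpy as np

def softmax(z):
    """P(y=c|x) = exp(z_c) / sum_i exp(z_i); shifting by max(z) avoids overflow."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(y_true, y_pred, eps=1e-12):
    """L = -sum_c y_c * log(y_hat_c), with y_true one-hot."""
    return -np.sum(y_true * np.log(y_pred + eps))
```

For a one-hot label, the loss reduces to the negative log of the probability assigned to the true class.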


3. Voice Emotion Recognition (CNN-LSTM)

3.1 Feature Vector

Combines 180 audio features:

  • 40 MFCCs + deltas + delta-deltas = 120 features
  • 20 Chroma features
  • 20 Spectral features (centroid, rolloff, contrast, etc.)
  • 20 Additional prosodic features

$$\mathbf{f}_t = [\text{MFCC}_t, \text{Chroma}_t, \text{Spectral}_t, \text{Prosody}_t]^T$$

3.2 CNN Feature Learning

1D convolutions over time dimension:

$$h_t^{(l)} = \sigma\left(\sum_{\tau=0}^{k-1} W^{(l)}_\tau \cdot f_{t+\tau}^{(l-1)} + b^{(l)}\right)$$

3.3 Bidirectional LSTM

Forward LSTM:

$$\begin{aligned} f_t &= \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) \\ i_t &= \sigma(W_i \cdot [h_{t-1}, x_t] + b_i) \\ o_t &= \sigma(W_o \cdot [h_{t-1}, x_t] + b_o) \\ \tilde{C}_t &= \tanh(W_C \cdot [h_{t-1}, x_t] + b_C) \\ C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \\ h_t &= o_t \odot \tanh(C_t) \end{aligned}$$

Backward LSTM computes $\overleftarrow{h}_t$ in reverse direction.

Combined hidden state:

$$h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$$

3.4 Attention Mechanism

$$\begin{aligned} e_t &= v^T \tanh(W_h h_t + b_a) \\ \alpha_t &= \frac{\exp(e_t)}{\sum_{\tau} \exp(e_\tau)} \\ c &= \sum_t \alpha_t h_t \end{aligned}$$

Context vector $c$ is used for final classification.


4. Multi-Modal Fusion

4.1 Confidence-Weighted Fusion

For face emotion $E_f$ with confidence $c_f$ and voice emotion $E_v$ with confidence $c_v$:

Normalized Weights:

$$w_f = \frac{c_f^\beta}{c_f^\beta + c_v^\beta}, \quad w_v = \frac{c_v^\beta}{c_f^\beta + c_v^\beta}$$

where $\beta = 2$ increases separation between high and low confidence.

Emotion Score Fusion:

$$S_e = w_f \cdot P_f(e) + w_v \cdot P_v(e)$$

for each emotion class $e$.

Final Emotion:

$$E_{\text{final}} = \arg\max_e S_e$$

Combined Confidence:

$$C_{\text{combined}} = w_f \cdot c_f + w_v \cdot c_v$$

4.2 Fallback Strategy

If one modality fails (confidence < 0.3):

$$w_{\text{available}} = 1.0, \quad w_{\text{failed}} = 0.0$$

4.3 Emotion Mapping

Voice emotions are mapped to face emotion space:

| Voice Emotion | Face Emotion |
|---------------|--------------|
| calm, neutral | neutral |
| happy, excited | happy |
| sad | sad |
| angry, frustrated | angry |
| fearful | fear |
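The fusion rules in this section (normalized weights with beta = 2, score fusion, the fallback floor, and combined confidence) fit in a few lines of NumPy. fuse and the floor parameter are illustrative names, not the project's actual API:

```python
import numpy as np

# 7-class emotion space from the face model; order here is illustrative.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def fuse(face_probs, c_f, voice_probs, c_v, beta=2.0, floor=0.3):
    """Confidence-weighted fusion of two per-emotion probability vectors."""
    # Fallback: a modality whose confidence is below the floor gets weight 0.
    if c_f < floor and c_v >= floor:
        w_f, w_v = 0.0, 1.0
    elif c_v < floor and c_f >= floor:
        w_f, w_v = 1.0, 0.0
    else:
        denom = c_f**beta + c_v**beta
        w_f, w_v = c_f**beta / denom, c_v**beta / denom
    scores = w_f * np.asarray(face_probs) + w_v * np.asarray(voice_probs)
    combined = w_f * c_f + w_v * c_v
    return EMOTIONS[int(np.argmax(scores))], combined
```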

5. Music Emotion Analysis

5.1 Valence-Arousal Model

Music emotions are represented in 2D space:

$$\text{Valence} = f(\text{mode}, \text{key}, \text{chroma mean})$$

$$\text{Arousal} = f(\text{tempo}, \text{energy}, \text{loudness})$$

Energy Calculation:

$$E = \frac{1}{N} \sum_{n=1}^{N} |x[n]|^2$$

Spectral Flux (measure of change):

$$SF_n = \sum_k \left(|X_n[k]| - |X_{n-1}[k]|\right)^2$$

5.2 Emotion Classification

Based on quadrant in valence-arousal space:

$$\text{Emotion} = \begin{cases} \text{happy} & \text{if } V > 0.5 \text{ and } A > 0.5 \\ \text{calm} & \text{if } V > 0.5 \text{ and } A < 0.5 \\ \text{sad} & \text{if } V < 0.5 \text{ and } A < 0.5 \\ \text{angry} & \text{if } V < 0.5 \text{ and } A > 0.5 \end{cases}$$
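The quadrant rule as a direct translation (classify_quadrant is an illustrative name; points at exactly 0.5 fall to the lower-valence/lower-arousal branch in this sketch):

```python
def classify_quadrant(valence, arousal):
    """Map a (valence, arousal) point in [0, 1]^2 to its emotion quadrant."""
    if valence > 0.5:
        return "happy" if arousal > 0.5 else "calm"
    return "angry" if arousal > 0.5 else "sad"
```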

5.3 Harmonic-Percussive Separation

$$X[k] = H[k] + P[k]$$

where $H$ contains harmonic (tonal) components and $P$ contains percussive (rhythmic) components.

Median Filtering:

$$H[k, t] = \text{median}_{\text{time}}(X[k, t])$$ $$P[k, t] = \text{median}_{\text{freq}}(X[k, t])$$


6. Performance Metrics

6.1 Classification Accuracy

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

6.2 Weighted F1-Score

$$\text{F1}_c = 2 \cdot \frac{\text{Precision}_c \cdot \text{Recall}_c}{\text{Precision}_c + \text{Recall}_c}$$

$$\text{F1}_{\text{weighted}} = \sum_{c} w_c \cdot \text{F1}_c$$

where $w_c$ is the proportion of samples in class $c$.

6.3 Confusion Matrix Analysis

$$C_{ij} = \text{count}(\text{predicted} = i, \text{actual} = j)$$

Per-class accuracy:

$$\text{Acc}_c = \frac{C_{cc}}{\sum_i C_{ic}}$$
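Both metrics follow directly from the matrix convention above: C[i, j] counts predicted i against actual j, so per-class accuracy is the column-normalized diagonal (per-class recall). confusion_matrix and per_class_accuracy are illustrative helpers:

```python
import numpy as np

def confusion_matrix(pred, actual, n_classes):
    """C[i, j] = count(predicted == i, actual == j)."""
    C = np.zeros((n_classes, n_classes), dtype=int)
    for p, a in zip(pred, actual):
        C[p, a] += 1
    return C

def per_class_accuracy(C):
    """Acc_c = C[c, c] / sum_i C[i, c] (column sums = true class totals)."""
    return np.diag(C) / C.sum(axis=0)
```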


7. Model Training & Optimization

7.1 Adam Optimizer

$$\begin{aligned} m_t &= \beta_1 m_{t-1} + (1 - \beta_1) g_t \\ v_t &= \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \\ \hat{m}_t &= \frac{m_t}{1 - \beta_1^t} \\ \hat{v}_t &= \frac{v_t}{1 - \beta_2^t} \\ \theta_t &= \theta_{t-1} - \frac{\alpha}{\sqrt{\hat{v}_t} + \epsilon} \hat{m}_t \end{aligned}$$

Default hyperparameters: $\alpha = 0.001$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$
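The update loop can be exercised on a toy quadratic f(theta) = theta^2, whose gradient is 2*theta (adam_minimize is an illustrative sketch, not project code):

```python
import numpy as np

def adam_minimize(grad_fn, theta0, steps=500, alpha=0.1,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """One-parameter Adam loop implementing the five update equations."""
    theta, m, v = theta0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g        # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1**t)             # bias correction
        v_hat = v / (1 - beta2**t)
        theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta
```

Starting from theta = 3.0, the iterates are driven toward the minimum at 0.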

7.2 Learning Rate Scheduling

$$\alpha_t = \alpha_0 \cdot \gamma^{\lfloor t / s \rfloor}$$

where $\gamma = 0.5$ every $s = 10$ epochs.

7.3 Dropout Regularization

During training:

$$\tilde{x}_i = \begin{cases} 0 & \text{with probability } p \\ \frac{x_i}{1-p} & \text{with probability } 1-p \end{cases}$$

Typical $p = 0.5$ for fully connected layers.
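The 1/(1-p) scaling (inverted dropout) keeps the expected activation unchanged at train time, so no rescaling is needed at inference. A sketch, where dropout is an illustrative helper:

```python
import numpy as np

def dropout(x, p=0.5, rng=None):
    """Zero each unit with probability p; scale survivors by 1/(1-p)."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p        # True = keep the unit
    return np.where(mask, x / (1.0 - p), 0.0)
```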

7.4 Data Augmentation

For Audio:

  • Time stretching: $y[n] = x[\alpha \cdot n]$ where $\alpha \in [0.8, 1.2]$
  • Pitch shifting: Shift by $\pm 2$ semitones
  • Add noise: $\tilde{x}[n] = x[n] + \epsilon[n]$, $\epsilon \sim \mathcal{N}(0, \sigma^2)$

For Images:

  • Random rotation: $\theta \in [-15°, 15°]$
  • Horizontal flip with probability 0.5
  • Brightness/contrast adjustment: $I' = \alpha I + \beta$

⚙️ Configuration

Application Settings

Configuration is managed through app/config/settings.py:

class Settings:
    APP_NAME = "SoulSync AI Engine"
    VERSION = "1.0.0"
    
    # Audio processing limits
    MAX_AUDIO_DURATION = 30      # seconds
    MAX_VOICE_DURATION = 10      # seconds
    
    # Emotion analysis weights
    FACE_WEIGHT = 0.65          # Face emotion contribution
    VOICE_WEIGHT = 0.35         # Voice emotion contribution

Environment Variables

Create a .env file in the root directory:

# Server Configuration
HOST=0.0.0.0
PORT=8000
DEBUG=True

# Database Configuration
MONGODB_URL=mongodb://localhost:27017
DATABASE_NAME=soulsync

# ML Model Configuration
FACE_MODEL_PATH=face_detection_short_range.tflite
MIN_DETECTION_CONFIDENCE=0.5
MIN_SUPPRESSION_THRESHOLD=0.3

# Logging Configuration
LOG_LEVEL=INFO

🚀 Development Guidelines

Adding New Features

  1. New API Endpoints

    # Create new router in app/routers/
    touch app/routers/new_feature_router.py
    
    # Register in main.py
    app.include_router(new_feature_router)
  2. New ML Services

    # Create service in app/services/
    touch app/services/new_ml_service.py
    
    # Follow the existing pattern with extract_features() and predict() methods
  3. Data Models

    # Define Pydantic models in app/models/
    touch app/models/new_model.py

Code Style & Standards

  • Python Style: Follow PEP 8 guidelines
  • Type Hints: Use type annotations for all functions
  • Async/Await: Prefer async functions for I/O operations
  • Error Handling: Implement comprehensive exception handling
  • Documentation: Include docstrings for all classes and functions

Testing

# Install testing dependencies
pip install pytest pytest-asyncio httpx

# Run tests
pytest tests/

# Run with coverage
pytest --cov=app tests/

Performance Optimization

  • Use async/await for file operations
  • Implement caching for frequently used models
  • Optimize audio/video processing pipelines
  • Monitor memory usage with large files

🐳 Deployment

Docker Deployment

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

Production Considerations

  • Use a reverse proxy (nginx) for static files
  • Implement load balancing for multiple instances
  • Set up proper logging and monitoring
  • Configure SSL/TLS certificates
  • Implement rate limiting and authentication

🔧 Troubleshooting

Common Issues & Solutions

🐍 Environment Issues

# Check active environment
conda info --envs

# Recreate environment if corrupted
conda remove --name soulsync_ai --all
conda create -n soulsync_ai python=3.11 -y
conda activate soulsync_ai
pip install -r requirements.txt

📦 Package Installation Issues

# Force binary installation (avoid compilation)
pip install --only-binary=:all: package_name

# Clear pip cache
pip cache purge

# Use conda for problematic packages
conda install -c conda-forge package_name

🎵 Audio Processing Issues

# Install additional audio codecs
pip install ffmpeg-python

# For macOS audio issues
brew install ffmpeg

📷 Computer Vision Issues

# MediaPipe installation issues
pip uninstall mediapipe
pip install mediapipe --no-deps
pip install opencv-python

🔍 Debugging Tips

  1. Enable Debug Logging

    import logging
    logging.basicConfig(level=logging.DEBUG)
  2. Check File Permissions

    chmod 755 activate.sh
    ls -la temp_*  # Check temporary file creation
  3. Monitor Resource Usage

    # Memory usage
    ps aux | grep python
    
    # Disk space
    df -h

🤝 Contributing

We welcome contributions to the SoulSync AI Engine! Here's how you can help:

Getting Started

  1. Fork the Repository

    git clone https://github.com/yourusername/soulsync-ai-engine.git
    cd soulsync-ai-engine
  2. Set Up Development Environment

    source activate.sh
    pip install -r requirements.txt
  3. Create Feature Branch

    git checkout -b feature/amazing-new-feature

Contribution Types

  • 🐛 Bug Fixes - Help us squash bugs
  • New Features - Add exciting capabilities
  • 📚 Documentation - Improve our guides
  • 🧪 Tests - Increase code coverage
  • 🎨 UI/UX - Enhance user experience
  • Performance - Optimize algorithms

Submission Process

  1. Make your changes
  2. Add tests for new functionality
  3. Update documentation
  4. Submit a pull request with detailed description

⭐ If you find this project helpful, please give it a star! ⭐

Built with ❤️ by the SoulSync Team
