
feat: Implement proper sparse attention in API server#2

Open
AlexCuadron wants to merge 4 commits into main from feature/sparse-api-server-v2

Conversation

@AlexCuadron
Owner

This PR updates the API server to properly implement sparse attention from the core DoubleSparse implementation.

Changes

Core Changes

  • Replace basic model with proper sparse attention implementation from perplexity_eval.py
  • Add support for different model architectures (LLaMA, Mistral)
  • Add channel configuration support
  • Implement proper token streaming with past key/value caching
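The streaming-with-cache bullet above can be sketched as follows. This is a toy illustration of the decoding loop, not the server's actual code: `dummy_forward` stands in for the sparse-attention model's forward pass, and the names are hypothetical. The point is that after prefill, each step feeds only the newest token and reuses the growing key/value cache.

```python
from typing import List, Optional, Tuple

def dummy_forward(token: int, past: Optional[list]) -> Tuple[int, list]:
    # Stand-in for a transformer forward pass: returns the next token
    # and an updated key/value cache. The real server would call the
    # sparse-attention model here and pass `past_key_values` back in.
    past = (past or []) + [token]      # "cache" grows by one entry per step
    next_token = (token + 1) % 50000   # toy next-token rule, for illustration only
    return next_token, past

def stream_generate(prompt_tokens: List[int], max_new_tokens: int) -> List[int]:
    """Incremental decoding: earlier keys/values live in the cache, so
    each decode step processes a single token instead of the full prefix."""
    past = None
    token = prompt_tokens[0]
    # Prefill: run the prompt through once, building the cache.
    for t in prompt_tokens:
        token, past = dummy_forward(t, past)
    out = []
    # Decode: one token in, one token out, cache reused each step.
    for _ in range(max_new_tokens):
        token, past = dummy_forward(token, past)
        out.append(token)
    return out
```

In a streaming API, each token produced by the decode loop would be flushed to the client as soon as it is appended, rather than collected into a list.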

Configuration

  • Add MODEL_ARCHITECTURE setting for model type selection
  • Add CHANNEL setting for channel selection (q, k, qk)
  • Improve configuration documentation
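A minimal sketch of how the two settings above might be validated at startup. The setting names `MODEL_ARCHITECTURE` and `CHANNEL` come from the PR; reading them as environment variables, and the default values shown, are assumptions for illustration:

```python
import os

# Allowed values per the PR description: LLaMA/Mistral architectures,
# and channel selection q, k, or qk.
VALID_ARCHITECTURES = {"llama", "mistral"}
VALID_CHANNELS = {"q", "k", "qk"}

def load_settings(env=os.environ) -> dict:
    # Defaults here are placeholders, not the repository's actual defaults.
    arch = env.get("MODEL_ARCHITECTURE", "llama").lower()
    channel = env.get("CHANNEL", "qk").lower()
    if arch not in VALID_ARCHITECTURES:
        raise ValueError(f"unknown MODEL_ARCHITECTURE: {arch!r}")
    if channel not in VALID_CHANNELS:
        raise ValueError(f"unknown CHANNEL: {channel!r}")
    return {"architecture": arch, "channel": channel}
```

Validating once at startup turns a misconfigured server into an immediate, readable error instead of a failure deep inside the attention code.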

Implementation Details

  • Reuses the sparse attention mechanism from perplexity_eval.py rather than a separate reimplementation
  • Applies the configured channel selection (q, k, or qk) when scoring cached tokens
  • Memory-efficient: past key/values are cached so each decoding step processes only the new token
  • Raises clear errors for model-specific issues (e.g. unsupported architectures)
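The channel-based selection can be sketched as a toy NumPy example. This illustrates the general idea behind channel-restricted scoring (cheap approximate scores from a few selected channels, then exact attention over the top-k tokens); the function name, shapes, and `top_k` parameter are assumptions, not the repository's actual kernel:

```python
import numpy as np

def sparse_attention(q, K, V, channel_idx, top_k):
    """q: (d,) query; K, V: (seq_len, d) cached keys/values.
    channel_idx picks the few channels used for approximate scoring
    (which side supplies them corresponds to the q/k/qk setting)."""
    # Cheap approximate scores computed from the selected channels only.
    approx = K[:, channel_idx] @ q[channel_idx]   # (seq_len,)
    keep = np.argsort(approx)[-top_k:]            # indices of the top-k tokens
    # Exact softmax attention restricted to the kept tokens.
    scores = K[keep] @ q / np.sqrt(q.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V[keep]
```

The memory win comes from the full dot products and softmax touching only `top_k` cached tokens instead of the whole sequence.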

Commits

- Add detailed client guide with examples in multiple languages
- Add performance considerations and best practices
- Add detailed configuration documentation
- Add error handling documentation
- Replace basic model with proper sparse attention implementation
- Add support for different model architectures (LLaMA, Mistral)
- Add channel configuration support
- Update API to be fully OpenAI-compatible
- Add proper token streaming implementation
- Add configuration for sparse attention parameters
- Add Qwen2 sparse attention implementation
- Update configuration to support Qwen2
- Update documentation with Qwen2 support
- Improve architecture selection documentation
- Create single server script that matches perplexity_eval.py usage
- Automatic architecture detection from model config
- Same command-line arguments as perplexity_eval.py
- Remove need for manual architecture configuration
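Automatic architecture detection can be done by reading `model_type` from the checkpoint's `config.json`, which Hugging Face checkpoints ship alongside the weights. A minimal sketch (the function name is hypothetical; the supported set mirrors the architectures named in this PR):

```python
import json
from pathlib import Path

def detect_architecture(model_path: str) -> str:
    """Read `model_type` from the model's config.json instead of
    requiring a manual MODEL_ARCHITECTURE setting."""
    config = json.loads(Path(model_path, "config.json").read_text())
    model_type = config.get("model_type", "")
    supported = {"llama", "mistral", "qwen2"}
    if model_type not in supported:
        raise ValueError(f"unsupported model_type: {model_type!r}")
    return model_type
```

This is why the server script can take the same command-line arguments as perplexity_eval.py: the model path alone is enough to pick the right sparse attention variant.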
