Hi, thanks for open-sourcing this great project. mini-vLLM is a very helpful resource for understanding the core ideas behind vLLM.
I was wondering if there are any plans to support additional backends such as CPU or Apple MPS. Since this project is educational in nature, being able to run it on more platforms (e.g., laptops without CUDA GPUs) could make it much easier for people to experiment with and learn the concepts.
Even a slower fallback implementation (e.g., pure PyTorch attention) might already be very useful for learning purposes.
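For example, a device-agnostic fallback could be as simple as the sketch below (just an illustration, not part of mini-vLLM; the function name and the `(batch, heads, seq_len, head_dim)` layout are my assumptions):

```python
import torch

def naive_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Pure-PyTorch scaled dot-product attention.

    O(seq_len^2) memory and no paged KV cache, but it runs on any
    device PyTorch supports (cpu, mps, cuda). Expected shapes:
    (batch, heads, seq_len, head_dim).
    """
    scale = q.size(-1) ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale  # (batch, heads, seq, seq)
    probs = torch.softmax(scores, dim=-1)
    return probs @ v
```

Something like this would be slow compared to the CUDA kernels, of course, but it would let people step through the scheduler and KV-cache logic on a laptop.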
Thanks again for the great work!