This repository was archived by the owner on May 17, 2025. It is now read-only.
So this is how the UI is supposed to look. For starters, the chat section could just return the kNN docs, but ideally the user should get a text response containing the relevant information (extension)
This is frontend-to-backend connectivity - ChatGPT and other models use HTTP + SSE
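The HTTP + SSE transport mentioned above boils down to a `text/event-stream` response whose body is a series of `data:` frames separated by blank lines. A minimal sketch of the frame encoding (plain Java; the helper name `sseFrame` is illustrative, not from any framework):

```java
// Minimal sketch of the Server-Sent Events wire format used by
// streaming chat backends: each chunk is sent as a "data:" frame
// terminated by a blank line; the client reassembles them in order.
public class SseFrames {
    // Encode one token/chunk as an SSE frame (hypothetical helper).
    public static String sseFrame(String chunk) {
        // Multi-line payloads need one "data:" prefix per line.
        StringBuilder sb = new StringBuilder();
        for (String line : chunk.split("\n", -1)) {
            sb.append("data: ").append(line).append("\n");
        }
        sb.append("\n"); // blank line ends the event
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(sseFrame("Hello"));
        System.out.print(sseFrame("line1\nline2"));
    }
}
```

The frontend side can consume this with the browser's built-in `EventSource` API, which is what makes SSE a lighter choice than WebSockets for one-way token streaming.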
Spring provides Spring AI
Video:
[ChatClient provides RAG-type models out of the box](https://www.youtube.com/watch?v=umKbaXsiCOY&t=1415s)
Documentation:
Spring AI - https://spring.io/projects/spring-ai
Spring Embedding Creation - https://docs.spring.io/spring-ai/reference/api/embeddings.html
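Based on the Spring AI docs linked above, the chat and embedding calls look roughly like this. A sketch only: it assumes the Spring AI starter on the classpath with a model provider configured, so the auto-configured `ChatClient.Builder` and `EmbeddingModel` beans exist; `RagService` is a hypothetical name.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.embedding.EmbeddingModel;

// Sketch: wiring Spring AI's ChatClient and EmbeddingModel.
// Not compilable standalone without the Spring AI dependency.
public class RagService {
    private final ChatClient chatClient;
    private final EmbeddingModel embeddingModel;

    public RagService(ChatClient.Builder builder, EmbeddingModel embeddingModel) {
        this.chatClient = builder.build();
        this.embeddingModel = embeddingModel;
    }

    // Embed the user text for the kNN lookup.
    public float[] embed(String userText) {
        return embeddingModel.embed(userText);
    }

    // Ask the chat model, blocking until the full answer is available.
    public String answer(String prompt) {
        return chatClient.prompt().user(prompt).call().content();
    }
}
```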
Option 1: Chat Stuffing
Chat stuffing is just concatenating the retrieved kNN docs as a string onto the user request
ISSUE: models can only ingest so much contextual data (limited context window) + pricing of sending a ton of tokens
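The stuffing itself can be sketched in plain Java (class and method names hypothetical). Every retrieved doc is appended to the prompt, which is exactly where the context-window and pricing limits bite:

```java
import java.util.List;

// Sketch of "chat stuffing": prepend the kNN-retrieved docs to the
// user's question as one big prompt string. Every doc adds tokens,
// so cost and context limits grow with the number of docs stuffed.
public class ChatStuffing {
    public static String stuffPrompt(List<String> knnDocs, String userQuestion) {
        StringBuilder sb = new StringBuilder("Answer using only this context:\n");
        for (int i = 0; i < knnDocs.size(); i++) {
            sb.append("[doc ").append(i + 1).append("] ")
              .append(knnDocs.get(i)).append("\n");
        }
        sb.append("\nQuestion: ").append(userQuestion);
        return sb.toString();
    }
}
```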
Option 2:
Create a backend endpoint that streams data back to the client, handling:
Tokenization of user text
Tracking of token usage
Creation of embeddings of user text
User Authentication
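The tokenization and token-tracking steps above can be sketched without any model dependency. The ~4-characters-per-token rule below is a rough heuristic standing in for a real tokenizer, and the class/method names are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of per-user token accounting for the streaming endpoint.
// estimateTokens uses the rough "~4 characters per token" heuristic;
// a real backend would use the model's actual tokenizer instead.
public class TokenUsageTracker {
    private final Map<String, Long> usedByUser = new ConcurrentHashMap<>();

    public static int estimateTokens(String text) {
        return Math.max(1, text.length() / 4);
    }

    // Record usage for a request and return the user's running total,
    // which the endpoint can check against a quota before responding.
    public long record(String userId, String text) {
        return usedByUser.merge(userId, (long) estimateTokens(text), Long::sum);
    }
}
```

Embedding creation would call the Spring AI embedding API per the docs linked earlier, and authentication would typically sit in front of the endpoint as a Spring Security filter rather than inside it.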