Hi, lifeart. Thank you for a very nice application.
I'm running into a problem with large code repositories. On small repositories, code2prompt works great, but on large ones the generated prompt exceeds the model's context window and I hit token-overflow errors when interacting with the LLM.
So how should we deal with large code repositories? Sending only part of the source code loses context. Right now it seems only Gemini 1.5 Pro can handle around 2M tokens, and that is the upper limit.
Is there a way to tune code2prompt for large repositories? Or do you have any good suggestions?