I am curious... #63
Sophist-UK
started this conversation in
01-General
Replies: 1 comment 1 reply
-
Also, is the Caal Ministral LLM quantised? My GPU has 6 GB of VRAM, not 8 GB, so I would like to run a quantised version.
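For context, a rough back-of-the-envelope way to check whether a quantised model fits in 6 GB (the parameter counts, bits-per-weight, and overhead factor below are illustrative assumptions, not official figures for Ministral):

```python
# Rough VRAM estimate: weights = params * bits / 8, plus ~20% overhead
# for KV cache and runtime buffers (overhead factor is a guess).
def vram_gb(params_billion, bits_per_weight, overhead=1.2):
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# An 8B-parameter model at roughly 4.5 effective bits/weight
# (in the ballpark of a GGUF Q4 quant) vs. full FP16:
print(f"{vram_gb(8, 4.5):.1f} GB")  # ~5.4 GB: tight but plausible on 6 GB
print(f"{vram_gb(8, 16):.1f} GB")   # ~19.2 GB: FP16 clearly does not fit
```

Which is why I'm asking: FP16 is out of the question on this card, but a 4-bit quant looks borderline feasible.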
-
Just started reading about this.
I am conservative about security, so I tried getting openclaw running in a locked-down Docker environment on my new laptop (6 GB Nvidia GPU), but couldn't get it to work. I remain concerned about openclaw's security, yet I still want the benefits of an AI assistant.
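By "limited Docker environment" I mean roughly the following (a sketch only: the image name, tag, and port are placeholders, and the exact flags an agent needs will vary):

```shell
# Hypothetical locked-down run of an agent container: drop all capabilities,
# forbid privilege escalation, read-only root filesystem with a writable /tmp,
# cap memory and process count, pass the GPU through explicitly, and bind
# the port to localhost only. "openclaw:latest" is a placeholder image name.
docker run --rm \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --read-only --tmpfs /tmp \
  --memory=4g --pids-limit=256 \
  --gpus all \
  -p 127.0.0.1:8080:8080 \
  openclaw:latest
```

With restrictions like these the agent failed to start for me, and I couldn't work out which ones it actually needs relaxed.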
I also have a capable and underutilized TrueNAS server without a GPU.
My background is a lifetime (50+ years) in IT, so I believe in architecture and security, and I am keen to start using AI for open source coding and as a household assistant.
I plan to try out both the local version of Caal on my laptop and the Groq-backed version on my NAS, and see how capable it is.
But in the meantime, some questions...
Is there any benefit to setting up ollama with Redis to cache LLM calls? I suspect (but cannot demonstrate) that there are a whole bunch of repeated calls, and caching might speed things up.
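What I have in mind is roughly the following sketch. Here a plain dict stands in for Redis (a real setup would use a Redis key with a TTL), and `call_ollama` is a stub for the actual HTTP request to ollama's `/api/generate` endpoint:

```python
import hashlib
import json

cache = {}  # stand-in for Redis; a real setup would SET with an expiry

def cache_key(model, prompt):
    # Deterministic key over everything that affects the completion.
    payload = json.dumps({"model": model, "prompt": prompt}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def call_ollama(model, prompt):
    # Stub for POST http://localhost:11434/api/generate
    return f"<completion of {prompt!r} from {model}>"

def cached_generate(model, prompt):
    key = cache_key(model, prompt)
    if key not in cache:
        cache[key] = call_ollama(model, prompt)  # only hit the LLM on a miss
    return cache[key]
```

One caveat I realise: this only helps for byte-identical repeated prompts, and only makes sense with deterministic sampling (temperature 0), otherwise the cached answer isn't what the model would have said.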
I like Caal's concepts of architecture and predefined workflows for security, and I am assuming these workflows relate to limiting how it accesses other local resources in a secure way. Without having tried it yet, I am also assuming that for general intelligence that only requires internet access, Caal can still harness the full power of the LLM. Is there a feature comparison anywhere of Caal vs. openclaw? Can it do cron tasks? Can you give it a personality? Can it learn (or is that too dangerous)?
How feasible is it to have a hybrid local & remote LLM configuration, using a local model for simple stuff and a remote LLM like Kimi 2.5 for deep thinking?
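The hybrid setup I'm imagining is roughly this (a toy sketch: the routing heuristic, threshold, and backend stubs are all made up, not anything Caal provides):

```python
# Toy router: short prompts go to a local model; anything long, or anything
# explicitly flagged as needing deep reasoning, goes to a remote one.
def run_local(prompt):
    return ("local", prompt)   # stand-in for a call to a local ollama model

def run_remote(prompt):
    return ("remote", prompt)  # stand-in for a hosted-API call (e.g. Kimi)

def route(prompt, deep=False, local_word_budget=200):
    # Crude heuristic: word count as a proxy for task complexity.
    if deep or len(prompt.split()) > local_word_budget:
        return run_remote(prompt)
    return run_local(prompt)
```

Obviously word count is a poor proxy for difficulty; a smarter version might let the local model itself decide when to escalate. I'm asking whether anything like this routing already exists in Caal.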