
Research local LLMs that fit within 16GB VRAM and can respond within 5 seconds #34

@AnthonyvW

Description


Research local Large Language Models (LLMs) that fit within 16GB of VRAM, respond within 5 seconds (faster is better), and are licensed for commercial use. This involves researching both a back end to run the LLM and the LLM itself.
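As a starting point for the VRAM requirement, a rough back-of-the-envelope estimate is weight memory (parameter count × bytes per weight) plus an allowance for the KV cache and runtime overhead. A minimal sketch of that arithmetic follows; the 2 GB overhead figure is an assumption for illustration, not a measured value:

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead_gb: float = 2.0) -> float:
    """Rough VRAM estimate: weight memory plus a fixed allowance for the
    KV cache and runtime overhead (the 2 GB default is an assumption)."""
    weight_gb = params_billion * 1e9 * (bits_per_weight / 8) / 1024**3
    return weight_gb + overhead_gb

# A 13B-parameter model at 4-bit quantization needs roughly 6 GiB for
# weights and fits comfortably; the same model at 16-bit does not.
print(f"13B @ 4-bit:  {estimate_vram_gb(13, 4):.1f} GB")
print(f"13B @ 16-bit: {estimate_vram_gb(13, 16):.1f} GB")
```

Estimates like this only narrow the candidate list; actual usage depends on the backend and context length, so each shortlisted model still needs to be measured on the target GPU.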

The LLM will need to respond in JSON or YAML format, accurately name items from a given list, and give yes/no answers.
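Whichever backend and model are chosen, the output contract above can be checked mechanically during evaluation. A minimal validation sketch, assuming a hypothetical response shape with `item` and `answer` fields and an example item list (none of these names come from the issue):

```python
import json

# Hypothetical allow-list; in practice this would be the real item list.
ALLOWED_ITEMS = {"apple", "banana", "cherry"}

def validate_response(raw: str) -> dict:
    """Parse a model reply and enforce the contract from the issue:
    valid JSON, an item drawn from a known list, and a strict yes/no answer."""
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed JSON
    if data.get("item") not in ALLOWED_ITEMS:
        raise ValueError(f"unknown item: {data.get('item')!r}")
    if data.get("answer") not in ("yes", "no"):
        raise ValueError(f"answer must be yes/no, got {data.get('answer')!r}")
    return data

print(validate_response('{"item": "banana", "answer": "yes"}'))
```

A harness like this makes the model comparison objective: run the same prompts against each candidate and count how often the reply parses and passes validation within the 5-second budget.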

Metadata

Labels: No labels
Type: No type
Status: In Progress
Milestone: No milestone
Relationships: None yet
Development: No branches or pull requests
