mooreds 5 days ago:
Is the output as good? I'd love the ability to run the LLM locally, as that would make it easier to run on non-public code.
fforflo 5 days ago (reply):
It's decent enough, but you'd probably have to use a model like Llama 2, which may set your GPU on fire.
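If you want a feel for what running it locally looks like, here's a minimal sketch of querying a local model instead of a hosted API. It assumes you've installed llama-cpp-python and downloaded a quantized Llama 2 GGUF file; the file name, prompt, and parameters are just illustrative, not anything from the project itself.

    # Minimal sketch: completion against a locally hosted model.
    # Assumes llama-cpp-python is installed and a quantized Llama 2
    # GGUF file has been downloaded; paths and params are illustrative.
    from llama_cpp import Llama

    llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

    prompt = "Summarize what this function does:\n\ndef add(a, b):\n    return a + b\n"
    out = llm(prompt, max_tokens=128, temperature=0.2)
    print(out["choices"][0]["text"])

A quantized 7B model like that keeps VRAM use manageable, but on a typical laptop GPU it will still be a lot slower (and hotter) than calling a hosted API, which is what I meant above.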