andyfilms1 | a day ago
Sure, but unless you're training them yourself they can still be compromised with poisoning or bias. They're still black boxes even if you're running them locally.
lrvick | 14 hours ago | parent
Obviously, and that is no different from remote models. You do not and should not ever trust an LLM, but with proper handling they can still be super useful: give the LLM a dedicated OS to work in, let it do research or debugging and commit to branches, then review and clean up those branches from a trusted OS, sign the commits, and mark a PR as ready for review.
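A minimal sketch of that branch-review flow, using git in a throwaway repo. The branch name, file contents, and commit messages are made up for illustration; add `-S` to the commit and merge steps once a signing key is configured on the trusted OS.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main repo && cd repo
git config user.email you@example.com
git config user.name "Trusted Reviewer"
echo base > file.txt
git add file.txt && git commit -qm "base"

# Simulate what the untrusted LLM environment committed to its own branch.
git checkout -qb llm/debug-session
echo fix >> file.txt
git commit -qam "llm: candidate fix"

# From the trusted OS: inspect exactly what changed before accepting any of it.
git checkout -q main
git diff main..llm/debug-session

# Accept the reviewed work as an explicit merge commit
# (use `git merge -S ...` here to sign it with your key).
git merge -q --no-ff -m "reviewed: candidate fix" llm/debug-session
```

The point of the explicit merge (or a squash plus a fresh signed commit) is that nothing the LLM produced reaches the mainline without passing through a human review step on a machine the model never touched.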