asnyder 4 days ago

How does this compare to the already established open source solutions such as Chatbox (https://github.com/chatboxai/chatbox), or Lobechat (https://github.com/lobehub/lobe-chat)?

Been using both. I like Chatbox for how snappy it is, but it's local only, vs. LobeChat, which lets you set up a centralized host shared across clients but feels a bit clunkier.

CryptoBanker 4 days ago | parent [-]

One of the biggest differences I noticed off the bat is that llms includes prompt caching, which I'm not sure I've seen in any other self-hosted UI options.
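For context, provider-side prompt caching (as in the Anthropic Messages API) works by marking a large, stable prefix of the prompt as cacheable, so repeat requests reuse it instead of re-billing the full input. A minimal sketch of how a UI might build such a payload; the model name is an assumption, and other providers expose caching differently:

```python
import json


def build_cached_request(system_prompt: str, user_message: str) -> dict:
    """Build an Anthropic-style Messages API payload where the large,
    reusable system prompt is marked as a cacheable prefix."""
    return {
        "model": "claude-3-5-sonnet-20241022",  # assumption: any caching-capable model
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                # cache_control marks this block as a reusable cached prefix;
                # subsequent requests with an identical prefix hit the cache
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }


payload = build_cached_request("You are a helpful assistant. <long docs>", "Hello")
print(json.dumps(payload["system"][0]["cache_control"]))
```

A chat UI that supports this just needs a toggle that adds or omits the `cache_control` field when it assembles the request.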

asnyder 4 days ago | parent [-]

I see LobeChat and Chatbox both have prompt caching toggles. Are you referring to something else?

asnyder 4 days ago | parent | next [-]

I've mistakenly given Chatbox a new feature, sorry :). In LobeChat, after you select a particular model, it enables a mini-settings menu next to the model that lets you set caching, deep thinking, and thinking token consumption.

CryptoBanker 4 days ago | parent [-]

Ah, that must be new since the last time I tried LobeChat.

CryptoBanker 4 days ago | parent | prev [-]

Where do you see that? I can't seem to find it in the web or desktop apps for LobeChat.

EDIT: I also don't see it in Chatbox