phildougherty 10 hours ago

Honestly surprised something like this can get funded

Weves 10 hours ago | parent | next [-]

"Chat UI" can "feel" a bit thin from an eng/product when you initially think about, and that's something we've had to grapple with over time. As we've dug deeper, my worry about that has gone down over time.

For most people, the chat is the entrypoint to LLMs, and people are growing to expect more and more. So now it might be basic chat, web search, internal RAG, deep research, etc. Very soon, it will be more complex flows kicked off via this interface (e.g. cleaning up a Linear project). The same "chat UI" that is used for basic chat must (imo) support these flows to stay competitive.

On the engineering side, things like Deep Research are quite complex/open-ended, and there can be huge differences in quality between implementations (e.g. ChatGPT's vs Claude's). Code interpreter (done securely) is quite a tricky task as well.

gip 8 hours ago | parent | prev | next [-]

My understanding of YC is that they place more emphasis on the founders than the initial idea, and teams often pivot.

That being said, I think there is an opportunity for them to discover and serve an important enterprise use case as AI in enterprise hits exponential growth.

mritchie712 9 hours ago | parent | prev | next [-]

w24, those were different times.

koakuma-chan 9 hours ago | parent [-]

Yeah that's like so long ago. But yeah, good luck competing with ChatGPT.

hobofan 8 hours ago | parent [-]

There are many markets (Europe), and highly regulated industries with air-gapped deployments where the typical players (ChatGPT, MS Copilot) in the field are having a hard time.

On another axis, if you are able to offer BYOK deployments and the customers have huge staff with low usage, it's pretty easy to compete with the big players due to their high per-seat pricing.

Weves 8 hours ago | parent [-]

There are also many teams we work with that want to (1) retain model flexibility and (2) give everyone at the company the best model for the job. Every week, a model from a different provider comes out that is better at some tasks than anything else. It's not great to be locked out of using that model because you're a "ChatGPT company."

kurtis_reed 10 hours ago | parent | prev | next [-]

why?

phildougherty 5 hours ago | parent | next [-]

I wasn't trying to be a hater; I think it's great they got funded for this. It just felt like there are so many free options and alternatives out there addressing basically the same things (and looking almost exactly the same) that it genuinely surprised me.

xenospn 9 hours ago | parent | prev [-]

There are a million other projects just like this one, many of which are much more advanced and mature, including from Vercel. There's no moat.

Weves 9 hours ago | parent [-]

Agreed, there are a lot of other projects out there, but why do you say the Vercel option is more advanced/mature?

The common trend we've seen is that most of these other projects are okay for a true "just send messages to an AI and get responses" use case, but for most things beyond that they fall short / there are a lot of paper cuts.

For an individual, this might show up when they try more complex tasks that require multiple tool calls in sequence or when they have a research task to accomplish. For an org, this might show up when trying to manage access to assistants / tools / connected sources.

Our goal is to make sure Onyx is the most advanced and mature option out there. I think we've accomplished that, so if there's anything missing I'd love to hear about it.

elpakal 7 hours ago | parent [-]

Alright, let's say I'm tasked with building a fancy AI-powered research assistant and I need to choose between Onyx and Vercel's ai-chatbot SDK. Why would I reach for Onyx?

I have used Vercel for several projects and I'm not tied to it, but I'd like to understand how Onyx compares.

The benefits of Vercel for my use cases have been ease of installation, streaming support, model agnosticism, chat persistence, and blob support. I definitely don't like the vendor lock-in, though.

Weves 3 hours ago | parent | next [-]

> ease of installation, streaming support, model agnosticity, chat persistence and blob support

We have all of those!

> how onyx is comparable

For an AI-powered research assistant, Onyx might just work out of the box. We have ~45 connectors to common apps (https://github.com/onyx-dot-app/onyx/blob/main/backend/onyx/...), integrations with the most popular web search providers (https://github.com/onyx-dot-app/onyx/blob/main/backend/onyx/...), and a built-in tool-calling loop w/ deep research support (https://github.com/onyx-dot-app/onyx/blob/main/backend/onyx/...). If you wanted to customize, you could pretty easily tweak this / add additional tools (or even rip this out completely and build your own agent loop).
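For readers unfamiliar with the pattern, a "tool-calling loop" like the one mentioned above can be sketched roughly as follows. This is a generic illustration, not Onyx's actual implementation: the model stub, the `search` tool, and all names here are hypothetical stand-ins for a real LLM API call and real tool integrations.

```python
# Generic sketch of an agent tool-calling loop (illustrative, not Onyx's code).
# A stub "model" decides whether to call a tool or produce a final answer;
# a real system would call an LLM API at that step.

def search_tool(query: str) -> str:
    # Hypothetical tool: a real deployment would hit a web search provider.
    return f"results for: {query}"

TOOLS = {"search": search_tool}

def stub_model(messages: list[dict]) -> dict:
    # Stand-in for an LLM call: request one search, then answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search", "args": {"query": messages[-1]["content"]}}
    return {"answer": "Summary based on " + messages[-1]["content"]}

def agent_loop(user_message: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        decision = stub_model(messages)
        if "answer" in decision:
            return decision["answer"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[decision["tool"]](**decision["args"])
        messages.append({"role": "tool", "content": result})
    return "step limit reached"

print(agent_loop("latest Onyx release"))
```

Deep research builds on the same loop: the model is allowed many more steps, plans sub-queries, and synthesizes across the accumulated tool results.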

dbish 4 hours ago | parent | prev [-]

Not wanting to use Vercel is honestly a good enough reason. If you’re a heavy Vercel user you probably aren’t their target market since they’re aiming at enterprise types from what it looks like.
