CSMastermind 2 days ago

Some other fun things you'll find:

- The models perform differently when called via the API vs in the Gemini UI.

- The Gemini API will randomly fail about 1% of the time; retry logic is basically mandatory (a minimal sketch follows this list).

- API performance is heavily influenced by the whims of Google: we've observed spreads between 30 seconds and 4 minutes for the same query, depending on how Google is feeling that day.
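
A minimal sketch of what that retry logic can look like, assuming the failures surface as exceptions from whatever client library you use (the wrapper, attempt count, and delays here are illustrative, not anything Google recommends):

    import random
    import time

    def call_with_retries(fn, max_attempts=4, base_delay=1.0):
        # fn is any zero-argument callable wrapping the actual API request,
        # e.g. lambda: client.generate_content(prompt)  (hypothetical client)
        for attempt in range(1, max_attempts + 1):
            try:
                return fn()
            except Exception:  # narrow this to your client's transient error types
                if attempt == max_attempts:
                    raise
                # exponential backoff (1s, 2s, 4s, ...) plus jitter so parallel
                # callers don't retry in lockstep
                time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.5))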

hobofan 2 days ago | parent | next [-]

> The Gemini API will randomly fail about 1% of the time; retry logic is basically mandatory.

That is sadly true across the board for AI inference API providers. OpenAI and Anthropic API stability usually suffers around launch events. Azure OpenAI/Foundry serving regularly returns 500 errors during certain time periods.

For any production feature with high uptime guarantees, right now I would strongly advise picking a model you can get from multiple providers and setting up failover between clouds.
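
One rough shape that failover can take, assuming each provider's SDK is already wrapped in a callable that takes a prompt and returns text (the names below are placeholders, not real SDK calls):

    def call_with_failover(providers, prompt):
        # providers: ordered list of (name, callable) pairs, e.g.
        # [("gemini", gemini_complete), ("openai", openai_complete)]
        last_exc = None
        for name, complete in providers:
            try:
                return name, complete(prompt)
            except Exception as exc:  # log which cloud failed, then fall through
                last_exc = exc
        raise RuntimeError("all providers failed") from last_exc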

downsplat 2 days ago | parent [-]

Yeah, at $WORK we use various LLM APIs to analyze text; it's not heavy usage in terms of tokens, but maybe 10K calls per day. We've found that response times vary a lot, sometimes going over a minute for simple tasks, and random failures happen. Retry logic is definitely mandatory, and it's good to have multiple providers ready. We're abstracting calls across three different APIs (OpenAI, Gemini, and Mistral; we're getting pretty good results with Mistral, by the way) so we can switch workloads quickly if needed.
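
The abstraction doesn't need to be fancy; something like a dispatcher that also records latency per provider is enough to tell you when to move a workload (the provider names and wrapper callables below are placeholders):

    import collections
    import time

    PROVIDERS = {}  # e.g. {"openai": openai_complete, "gemini": gemini_complete, "mistral": mistral_complete}
    LATENCIES = collections.defaultdict(list)

    def analyze(provider, prompt):
        # Route the call and record how long it took, so a slow day shows up
        # in the numbers rather than in vibes.
        start = time.monotonic()
        try:
            return PROVIDERS[provider](prompt)
        finally:
            LATENCIES[provider].append(time.monotonic() - start)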

jwillp 2 days ago | parent | next [-]

I've been impressed by Ollama running locally for my work, which involves grouping short text snippets by semantic meaning using embeddings, as well as summarization tasks. Depending on your needs, a local GPU can sometimes beat the cloud: I get no failures and consistent response times, with no extra bill. Obviously YMMV, and it's not ideal for scaling up unless you love hardware.
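
A minimal sketch of that kind of local setup, assuming Ollama is running on its default port with an embedding model already pulled (the model name and similarity threshold here are illustrative assumptions, not a prescription):

    import requests

    OLLAMA_EMBED_URL = "http://localhost:11434/api/embeddings"  # Ollama's default local port

    def embed(text, model="nomic-embed-text"):
        # One embedding per snippet; Ollama responds with {"embedding": [floats]}
        resp = requests.post(OLLAMA_EMBED_URL,
                             json={"model": model, "prompt": text}, timeout=60)
        resp.raise_for_status()
        return resp.json()["embedding"]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = sum(x * x for x in a) ** 0.5 * sum(y * y for y in b) ** 0.5
        return dot / norm if norm else 0.0

    def group_snippets(snippets, threshold=0.8):
        # Greedy grouping: join the first existing group that is similar enough,
        # otherwise start a new group. Crude, but workable for short snippets.
        groups = []  # list of (representative_embedding, [snippets])
        for text in snippets:
            vec = embed(text)
            for rep, members in groups:
                if cosine(rep, vec) >= threshold:
                    members.append(text)
                    break
            else:
                groups.append((vec, [text]))
        return [members for _, members in groups]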

duckmysick 2 days ago | parent [-]

Which models have you been using?

phantasmish 2 days ago | parent | prev [-]

It'd be kinda nice if they exposed whatever queuing is going on behind the scenes, so you could at least communicate that to your users.

specproc 2 days ago | parent | prev | next [-]

I have also had some super weird stuff in my output (2.5-flash).

I'm passing docs for bulk inference via Vertex, and a small number of returned results will include gibberish in Japanese.

walthamstow 2 days ago | parent | next [-]

I had this last night from Flash Lite! My results were interspersed with random snippets of legible, non-gibberish English. It was like my results had gotten jumbled with someone else's.

gfdvgfffv 13 hours ago | parent | prev | next [-]

I’ve gotten Arabic randomly in Claude Code. Programming is becoming more and more like magic.

ashwindharne 2 days ago | parent | prev [-]

I get this a lot too; it has made most of the Gemini models essentially unusable for agent-esque tasks. I tested with 2.5 Pro and it still devolved into random gibberish pretty frequently.

halflings 2 days ago | parent | prev | next [-]

"The models perform differently when called via the API vs in the Gemini UI."

This shouldn't be surprising: the model != the product. It's the same way GPT-4o behaves differently from the ChatGPT product, even when ChatGPT is using GPT-4o.

akhilnchauhan 2 days ago | parent | prev | next [-]

> The models perform differently when called via the API vs in the Gemini UI.

This difference between API and UI responses is common across all the big players (Claude, GPT models, etc.).

The consumer chat interfaces are designed for a different experience than a direct API call, even if pinging the same model.
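
Concretely, the bare API gives you just the model; whatever the chat product layers on top (its own system prompt, sampling settings, history handling) is invisible to you and has to be supplied yourself. A rough, hypothetical illustration of that wrapping:

    def ui_style_call(api_call, user_message):
        # Roughly what a consumer chat product does around the same model.
        # None of this comes for free on a direct API call; the values here
        # are made up for illustration, not the real product's settings.
        return api_call(
            system="(the product's own instructions, which you never see)",
            temperature=0.7,  # illustrative; actual UI defaults are unknown
            messages=[{"role": "user", "content": user_message}],
        )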

DANmode 2 days ago | parent | prev | next [-]

So, not something for a production app yet.

ianberdin 2 days ago | parent | prev | next [-]

Even funnier: Pro 3 sometimes answers a previous message in my chat, just producing a duplicate answer in different words. Retrying helps, but…

te_chris 2 days ago | parent | prev [-]

The way the models behave in Vertex AI Studio vs the API is unforgivable. Totally different.