coldtea 18 hours ago

>But the author just took pictures of food & expected a realistic response? Is this genuinely what amounts to a study in AI?

If there are commercial services where you take pictures of food and are promised a realistic (paid for) response, then yes. And there are.

dahart 18 hours ago | parent | next [-]

And what’s the variance & accuracy of their responses? Isn’t comparing the models’ variance to baseline human variance what matters here? It seems like they didn’t do that, and I agree with parent’s call for that kind of baseline.

Having counted calories for years, I don’t think I could reliably estimate the calories or carbs in the example picture of a cheese sandwich. I can make assumptions about the bread and the cheese, but I might easily be off by 2-3x. Calorie counting apps that use text descriptions also have huge variance for the same thing. The problem might be the belief that a picture or description is enough, regardless of who or what is guessing…?

Edit: Ah, I see from sibling thread you meant commercial services are LLMs, I thought you meant there were human-backed services to compare to. Anyway, I totally agree there’s a problem if people rely on AI for safety, but I’m not sure LLMs are the core issue here, it seems like using vague information and guessing is the core issue.

swiftcoder 18 hours ago | parent [-]

> Isn’t comparing the models’ variance to baseline human variance what matters here?

You seem to be missing the context that this isn't just about diet apps - this is about apps claiming to track carbs accurately enough to be used in a medical context to dose insulin (a substance which can be lethal if incorrectly dosed).

dahart 16 hours ago | parent [-]

No, I understand apps are making dubious claims and implications; obviously claiming LLMs can accurately estimate carbs from a photo is just wrong. But that doesn’t necessarily change my question. Should people use photos to estimate carbs? Can people looking at photos do any better?

The presence of variance in the LLM output doesn’t actually prove anything, in fact I would expect and hope for variance when confidence is less than 1.0. I’m more curious about accuracy of the mean of guesses for different models, for example.

But should any diabetic expect photos to be reliable, regardless of whether it’s an app or an LLM or a human? I know some diabetics, and the people I know do not rely on photos for their safety. They don’t even rely on food labels either (which are far more accurate than photos), they measure their insulin.

It’s probably useful to raise awareness, and useful to scare app makers away from making bogus medical claims - products and scams built on bogus medical claims are of course as old as history. But we can still hold the studies and PR around this up to high standards, right? Even assuming this article & the paper behind it are right, there are reasonable questions here about how to demonstrate the problem and what the baselines are.

It’s worth keeping in mind that trying to prove the bogus apps wrong with a flawed methodology or questionable reasoning or just an overly heavy handed style can cause backlash and do damage to the cause. We’re already seeing that effect play out with respect to vaccinations.

endymion-light 18 hours ago | parent | prev [-]

But I don't see them using those commercial services in this study - instead, they're using frontier model companies? Is Gemini advertising that you get a realistic calorie count from a picture? Maybe so - in which case I'd take it back!

notahacker 18 hours ago | parent | next [-]

The commercial services likely also have frontier model dependencies...

The opening to the actual paper is quite explicit that (i) other studies have already tested commercial apps, with unimpressive results, and (ii) a popular open source app for carb counting directly relies on API calls to these frontier models, and this research tested the images using the exact same models and prompts as that app.

azakai 16 hours ago | parent [-]

A carb counting app might use API calls to these frontier models and then do some kind of analysis on top. It could check whether different models agree, or make multiple calls per photo and see how much variance there is.
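Something like this, as a rough sketch (estimate_carbs() is a hypothetical stand-in for one vision-model API call; the tolerance and the numbers are made up):

    import statistics

    def agrees(estimates, tolerance_g=10.0):
        # True if repeated carb estimates cluster tightly enough to act on
        mean = statistics.mean(estimates)
        stdev = statistics.stdev(estimates)
        print(f"mean={mean:.0f} g, stdev={stdev:.1f} g")
        return stdev <= tolerance_g

    # estimate_carbs() is hypothetical -- stands in for one vision-model API call:
    # estimates = [estimate_carbs("sandwich.jpg", m) for m in MODELS for _ in range(5)]
    estimates = [42.0, 55.0, 38.0, 61.0, 47.0]  # illustrative numbers, not real model output
    if not agrees(estimates):
        print("Models disagree too much to dose off this number.")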

So it would be more accurate to test the apps rather than the raw APIs, unless the goal is to warn people who just open ChatGPT and ask there.

notahacker 14 hours ago | parent [-]

The open source app could in theory do that, but the paper's authors could determine whether it actually does by reading its code - which they evidently did, since they replicated the API calls it makes with their own script.

(And of course it would be far more tedious to submit each picture 500 times through an app and log each response by hand than to use a script designed to collect the data automatically, as fast as API rate limits permit.)
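To illustrate the gap, a minimal sketch of that kind of script (query_model() is a dummy placeholder here, and a real script would back off on rate-limit errors rather than sleep a fixed delay):

    import time

    def query_model(model, image_path):
        # placeholder for the real vision-API call (hypothetical)
        return 42.0  # dummy grams-of-carbs estimate

    def collect(images, models, repeats=500, delay_s=0.5):
        # submit every picture to every model `repeats` times, throttled
        rows = []
        for model in models:
            for img in images:
                for i in range(repeats):
                    rows.append((model, img, i, query_model(model, img)))
                    time.sleep(delay_s)  # crude rate limiting; a real script backs off on 429s
        return rows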

coldtea 18 hours ago | parent | prev [-]

Are commercial services anything more than just UI facades on top of frontier model APIs?

endymion-light 18 hours ago | parent [-]

Great point - and I'd love a study to address that. If the study showed that X services fall squarely within the analysis's findings, I think that would be a fantastic study - enlightening & useful.

swiftcoder 18 hours ago | parent [-]

The app the study is based on is open-source, so you yourself can verify that it does indeed just call a frontier model with the same prompts used in the study

endymion-light 17 hours ago | parent [-]

That's not really the same thing as what I'm saying - which is to investigate the applications specifically advertising AI calorie counting capabilities

notahacker 14 hours ago | parent [-]

They investigated an open source application specifically advertising carb counting capabilities, replicated its prompts and API calls in a way optimised to collect data from 26000 queries (which is a lot to do using a GUI!). They also note other people have already done [necessarily] smaller scale studies of the commercial AI carb counting apps and been similarly unimpressed by the responses.

This is all in the first few paragraphs of a preprint paper, linked at the bottom of TFA, that describes the research in considerably more detail

Meta: enjoying nearly half this HN thread being arguments that surely people who care about what's in their food don't ask ChatGPT for comment instead of looking it up properly, and most of the rest being people who apparently care what's in a research paper asking HN for comment instead of looking it up :)