| ▲ | furyofantares 19 hours ago |
| From the text of the article I believe the author is implying that there are apps doing exactly this, which is why it was studied that way. Had the author written the article themselves rather than having an LLM write it, their motivation would probably have been clearer. |
|
| ▲ | Brendinooo 19 hours ago | parent | next [-] |
| > there are apps doing exactly this Yeah, for sure there are. And people will just ask ChatGPT as well. The funny thing is that for people who are just trying to lose weight, and not managing a health condition that demands precision, this kind of extreme variance doesn't really matter: in my experience counting calories, the mere act of consciously quantifying food consumption is the single biggest factor in weight-loss success. |
| |
| ▲ | criley2 18 hours ago | parent [-] | | I actually think "just asking ChatGPT" is fine, because A) the data in these apps is suspect at best and B) the data behind calorie counts is also pretty suspect (but we all play along because we can adjust other variables to make it all "work" well enough). Once or twice a year I spend a few weeks meticulously measuring ingredients and cooked foods and recording calories, and for complex recipes the apps are next to useless at producing accurate data. You're trying to input five or ten relevant ingredients, then weigh your cooked outcome and divide the ingredients by proportion. Frankly it's a mess; most people aren't doing it for home-cooked meals, and those who try get very lossy outcomes (weighing cooked chicken and logging it as raw chicken, etc.). With reasoning and tool calling (combined with me meticulously weighing before and after), it produces data that's fine for my purposes. | | |
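The bookkeeping criley2 describes can be sketched in a few lines: sum calories from the raw ingredient weights, then weigh the cooked dish and compute calories per cooked gram (the dish loses water, so raw and cooked weights differ). All ingredient names and nutrient numbers below are hypothetical, for illustration only.

```python
# (ingredient, grams used raw, kcal per 100 g raw) -- made-up example values
ingredients = [
    ("chicken thigh, raw", 500, 177),
    ("olive oil",           30, 884),
    ("onion",              150,  40),
]

# Total calories come from the raw weights.
total_kcal = sum(grams * kcal_per_100g / 100
                 for _, grams, kcal_per_100g in ingredients)

# The cooked dish is weighed after cooking; water loss means this is
# less than the sum of raw weights, so per-gram density must use it.
cooked_weight_g = 520
kcal_per_gram_cooked = total_kcal / cooked_weight_g

# One weighed serving of the finished dish.
portion_g = 200
portion_kcal = portion_g * kcal_per_gram_cooked

print(f"total: {total_kcal:.0f} kcal, portion: {portion_kcal:.0f} kcal")
```

This is also where the "weighing cooked chicken and logging it as raw chicken" error creeps in: cooked food is denser in calories per gram than raw, so applying raw-per-100-g values to a cooked weight understates the portion.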
| ▲ | ijk 17 hours ago | parent | next [-] | | I was complaining about AI-generated clothes being misleading marketing, deceiving customers as to whether the garment even exists. And then I learned that the pre-AI norms weren't any less fictional: they made an exemplar garment and did photoshoots, sure, but then they sent the pictures and patterns to the lowest-bidder factories with permission to make whatever edits were necessary to keep it cheap and manufacturable. The whole thing was already a simulacrum. | |
| ▲ | smoe 17 hours ago | parent | prev [-] | | I honestly think that, given the sorry state of the pre-GenAI internet, with all the SEO nonsense, clickbait, and supplement peddling everywhere, LLMs are for now actually better than Google for "doing your own research" on many things. At least at the entry level. Once you want to go in depth, the outcome is, in my experience, the same as with LLM use on any topic: it depends heavily on the domain knowledge of the prompter and their ability to steer the model. |
|
|
|
| ▲ | ozgung 18 hours ago | parent | prev [-] |
The author uses the prompts and method from an open-source app that connects to an insulin pump, a medical device. I think AI food identification is an experimental feature in the app. > The prompt was adapted from the one used in the iAPS open-source automated insulin delivery system — it’s a real production prompt, not a toy example. https://github.com/Artificial-Pancreas/iAPS I think these are the prompts in the app:
https://github.com/Artificial-Pancreas/iAPS/tree/5eabe22e7e2... |
| |
| ▲ | sjhatfield 17 hours ago | parent | next [-] | | Exactly. This is not paid software. We assume full responsibility for outcomes when using it. There's a reason it's not on any app store. I'm glad features like this are being experimented with. Not how I would use AI to estimate carbs... | |
| ▲ | Ancapistani 16 hours ago | parent | prev [-] | | True, but I'm working on a product that's "adjacent to" this sort of thing, and we also have a "food recognition" feature that's marked as experimental. Our users are using it, and now I plan to push fairly hard on at least measuring the accuracy and hopefully exposing those results to our users regardless of how well it performs. |
|