dyauspitr | 4 days ago
I’m the same. I haven’t read the responses to your comment yet, but I guarantee some folks are pulling their hair out in disbelief over what you could possibly be using LLMs for, how the code could never stand up to what they’re writing, and so on. I don’t understand how you can’t find ChatGPT useful. I use it at least 30 times on any given day.
somerandomqaguy | 4 days ago
It's hilariously wrong at times, but the problem is when people take what an LLM spits out as fact. One example: in a mild debate I was having about cars, the other person asked how fast a Golf Type R could get within a specific distance, and ChatGPT spat out a number the other person accepted as fact, but I already knew it was too high. What ChatGPT had done was take the published 0-60 time and extrapolate a linear distance-vs-velocity formula. Impressive, granted, but wrong; velocity as a function of distance is logarithmic at best. It's a great tool, but I think a lot of people just take what it spits out without slowing down to question whether the output makes sense.
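A quick sketch of why the linear extrapolation overshoots so badly. Even under the generous assumption of constant acceleration, speed grows with the square root of distance, not linearly. The numbers below (4.7 s to 60 mph, a roughly quarter-mile 400 m distance) are illustrative assumptions, not the car's real specs:

```python
import math

MPH_TO_MS = 0.44704  # miles per hour -> metres per second

def speed_after_distance(zero_to_sixty_s, distance_m):
    """Return (naive_linear_mph, const_accel_mph): two estimates of speed
    after covering distance_m, given only a 0-60 mph time."""
    v60 = 60 * MPH_TO_MS                 # 60 mph in m/s
    a = v60 / zero_to_sixty_s            # assume constant acceleration
    d60 = 0.5 * a * zero_to_sixty_s ** 2 # distance covered during the 0-60 run
    # Naive extrapolation: treat speed as proportional to distance (v = k*d),
    # which is what a "linear distance vs velocity formula" amounts to.
    naive_linear = 60 * distance_m / d60
    # Constant acceleration instead gives v = sqrt(2*a*d): speed grows with
    # the square root of distance, far slower than linearly.
    const_accel = math.sqrt(2 * a * distance_m) / MPH_TO_MS
    return naive_linear, const_accel

# Illustrative inputs: 4.7 s to 60 mph, 400 m travelled.
naive, physical = speed_after_distance(4.7, 400)
print(f"naive linear: {naive:.0f} mph, constant acceleration: {physical:.0f} mph")
```

The naive estimate comes out more than double the constant-acceleration one, and even the latter overshoots reality, since drag and power limits make acceleration fall off as speed rises.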
zifpanachr23 | 4 days ago
It's highly dependent on what you are using it for, so the variability in usefulness is totally predictable. You don't have to be some fancy scientist-level programmer (I'm definitely not) to find your attempts at using AI falling into that category... a lot of the time it's just down to niche platforms and libraries, things specific to our shop, the regulatory environment, or a thousand other issues of that nature.

I imagine similar issues are incredibly widespread for basically anybody who isn't doing somewhat isolated greenfield work at a young company, and isn't spending tens of millions on custom training for their specific environment. The "everything web, mostly open source, ship ship ship new code" style of work you tend to find at young startups is not as common as it seems if you gauge your view of technology jobs from Hacker News.

Given that training the most powerful models basically amounts to scraping the web, it's not at all surprising that they are seriously lacking in other areas. And I'm not sure to what extent they can seriously be expected to improve there. Beyond the obvious issue of uploading internal documentation to give an external LLM better prompting, the model still has to use public training data to make predictions about internal libraries and whatnot that may well be old, anachronistic, or batshit crazy, because the difference in volume between, say, your internal software and everything posted publicly on the internet is massive.