somerandomqaguy | 4 days ago
It's hilariously wrong at times, but the problem is when people take what an LLM spits out as fact. One example: in a mild debate I was having about cars, the other person asked how fast a Golf Type R could get in a specific distance, and ChatGPT spat out a number that they accepted as fact, but I already knew it was too high. What ChatGPT had done was take the published 0 to 60 time and extrapolate a linear distance-vs-velocity formula. Impressive, granted, but wrong: velocity as a function of distance grows sub-linearly (roughly as a square root under constant acceleration, and slower still once drag and gearing kick in). It's a great tool, but I think a lot of people just take what it spits out without slowing down to question whether the output makes sense.
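To see how far off a linear extrapolation gets, here's a minimal sketch comparing the two models. The 0-60 time used is a hypothetical placeholder (the thread never states the actual spec), and constant acceleration is itself an optimistic simplification, but it's enough to show the gap:

```python
import math

# Hypothetical 0-60 mph time for illustration only; real spec numbers differ.
ZERO_TO_SIXTY_S = 4.7
V60 = 60 * 0.44704                      # 60 mph expressed in m/s
MPS_TO_MPH = 1 / 0.44704

a = V60 / ZERO_TO_SIXTY_S               # implied constant acceleration (m/s^2)
d60 = 0.5 * a * ZERO_TO_SIXTY_S ** 2    # distance covered while reaching 60 mph

def speed_linear(d):
    """The flawed assumption: speed grows linearly with distance."""
    return V60 * d / d60

def speed_kinematic(d):
    """Constant-acceleration kinematics: v = sqrt(2*a*d).
    Still generous -- drag and power limits slow real cars further."""
    return math.sqrt(2 * a * d)

d = 4 * d60  # four times the 0-60 distance
print(round(speed_linear(d) * MPS_TO_MPH))     # 240 mph -- absurd
print(round(speed_kinematic(d) * MPS_TO_MPH))  # 120 mph -- only doubles
```

At four times the 0-60 distance, the linear model predicts four times the speed, while even the idealized square-root model only doubles it; a real car, fighting aerodynamic drag, would fall short of both.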