llmslave2 2 days ago:

Do you just believe everything everybody says? No quantifiable data required; as long as someone somewhere says it, it must be true? One of the reasons software is in decline is that it's all vibes: nobody has much interest in conducting research to find anything out. It doesn't have to be some double-blind, peer-reviewed meta-analysis, and the bar can still be low, but it should be higher than "I feel like"...
|
johnfn 2 days ago:

You don't seem to have answered my questions; you are just reiterating your own point (which I already responded to). Again I ask you: do you have studies proving that syntax highlighting is useful, or are you just using it because of vibes? Do you have research showing that writing in your language of choice is faster than Assembly?
llmslave2 2 days ago:

I actually prefer no syntax highlighting, and I certainly wouldn't make any claims about it being useful. But something being "useful" is often personal: I find IDEs useful, others find Vim useful. Maybe one is better or worse than the other, or maybe we're all different, our brains function in different ways, and that explains the difference. With Assembly versus, say, Go for writing a web server? That's trivially observable; good luck arguing against that one.
nfw2 2 days ago:

That's the whole point. "The sky is blue" is trivially observable. Any claim that someone has disproven something that is trivially observable should be met with skepticism. If you have something that needs to be done, and an agent goes and does the whole thing for you without mistakes, it is trivially observable that that is useful. That is the definition of usefulness.
llmslave2 2 days ago:

But "useful" in the context of these debates doesn't mean that it solves any single problem for someone. Nobody is arguing that LLMs have zero utility. So I don't really see what your point is.
|
nfw2 2 days ago:

Here are some:

https://resources.github.com/learn/pathways/copilot/essentia...
https://www.anthropic.com/research/how-ai-is-transforming-wo...
https://www.mckinsey.com/capabilities/tech-and-ai/our-insigh...
llmslave2 2 days ago:

They're all marketing slop lol. Go look at their methodology. Absolutely shite.
nfw2 2 days ago:

This is what you claimed the bar was: "it just should be higher than 'I feel like'". Now you are moving it because your statement is provably false. Your criticism of it is based on vibes. What specifically is wrong with the methodologies? One of them randomly split developers into two groups, one with access to AI and one without, timed them completing the same task, and compared the results. That seems fine? Any measurement of performance in a lab environment comes with caveats, but since you dismiss real-world accounts as vibes, that seems like the best you can do.
llmslave2 2 days ago:

I'm sorry, but I'm not going to take "research" about Claude seriously from Anthropic, the company that makes and sells Claude. I'm also not going to do that for Copilot from Microsoft, the company that makes and sells Copilot.
|