barrell | 4 days ago
There are also a bunch of us who do kick the tires very often and are consistently underwhelmed.

There are also those of us who have used them substantially, and seen the damage that causes to a codebase in the long run (in part due to the missing gains of having someone who understands the codebase). There are also those of us who just don't like the interface of chatting with a robot instead of solving the problem ourselves. There are also those of us who find each generation of model substantially worse than the previous one, and find the utility trending downwards.

There are also those of us who are concerned about the research coming out on the effects of LLM use on cognition. There are also those of us who appreciate craft, and take pride in what we do, and don't find that same enjoyment or pride in asking an LLM to do it. There are also those of us who worry about offloading our critical thinking to big corporations, and becoming dependent on a pay-to-play system that is currently being propped up by artificially lowered prices, with "RUG PULL" written all over it. There are also those of us who are really concerned about the privacy issues, and don't trust companies that are hundreds of billions of dollars in debt to some of the least trustworthy individuals with that data.

Most of these issues don't require much experience with the latest generation. I don't think the intention of your comment was to stir up FUD, but I feel like it's really easy for people to walk away with that from this sort of comment, so I just wanted to add my two cents and tell people they really don't need to be wasting their time every 6 weeks. They're really not missing anything. Can you do more than a few weeks ago? Sure? Maybe? But I can also do a lot more than I could a few weeks ago without using an LLM. I've learned and improved myself.
Chances are that if you're not already using an LLM, it's because you don't like it or don't want to, and that's really OK. If AGSI comes out in a few months, all the time you would have invested now would be out of date anyway. There's really no rush or need to be tapped in.
bigstrat2003 | 4 days ago
> There are also a bunch of us who do kick the tires very often and are consistently underwhelmed.

Yep, this is me. Every time people say "it's improved so much" I feel like I'm taking crazy pills. I try it every so often, and more often than not it still has the exact same issues it had back in the GPT-3 days. When the tool hasn't improved (in my opinion, obviously) in several years, why should I be optimistic that it'll reach the heights that advocates say it will?
libraryofbabel | 4 days ago
There are really three points mixed up in here:

1) LLMs are controlled by BigCorps who don't have users' best interests at heart.

2) I don't like LLMs and don't use them because they spoil my feeling of craftsmanship.

3) LLMs can't be useful to anyone, because I "kick the tires" every so often and am underwhelmed. (But what did you actually try? Do tell.)

#1 is obviously true and is a problem, but it's just capitalism. #2 is a personal choice, you do you etc., but it's also kinda betting your career on AI failing. You may or may not have a technical niche where you'll be fine for the next decade, but would you really in good conscience recommend a junior-ish web dev take this position? #3 is a rather strong claim, because it requires you to claim that a lot of smart, reasonable programmers who see benefits from AI use are deluded. (Not everyone who says they get some benefit from AI is a shill or charlatan.)