zwnow 16 hours ago

Also, it really baffles me how many people are actually on the hype train. It's a lot more than the crypto bros back in the day. Good thing AI still can't reason and innovate. Also, leaking credentials is a felony in my country, so I won't ever attach it to my codebases.

aspenmartin 15 hours ago | parent | next [-]

I think the issue is that folks talk past each other. People who find coding agents useful or enjoyable are labeled “on the hype train,” and folks for whom coding agents don't work, or don't fit their workflow, are considered luddites. There are an incredible number of contradictory claims and predictions out there as well, and I believe what we see is folks projecting their reaction to some amalgamation of them onto others. I see a lot of “they” language, and a lot of viral articles about business leadership “shoving AI down our throats,” and it becomes a divisive issue, like the American political scene, with really no one having a real conversation.

llmslave2 15 hours ago | parent | next [-]

I think the reason for the varying claims and predictions is that developers have wildly different standards for what constitutes working code. For developers with a lower threshold, AI is like crack, because gen AI's output is similar to what they would produce, and for them it really is a 10x speedup. For others, especially those who have to fix and maintain that code, it's more like a 10x slowdown.

Hence, in the same thread, you have one developer who claims that Claude writes 99% of their code and another who finds it totally useless. And of course others somewhere in the middle.

throw1235435 14 hours ago | parent | next [-]

There's also the effect of different models. Until the most recent models, especially for concise algorithms, I felt it was sometimes still easier to write the code myself (a good algorithm can be more concise than a lossy prompt) and leave the repetitive boilerplate to the LLM. At least for me, the latest models do feel like a step change: the problems can be bigger, and/or each problem requires less supervision, depending on the tradeoff you want.

remich 13 hours ago | parent | prev [-]

Have you considered that it's a bit dismissive to assume that developers who get use out of AI tools necessarily approve of worse code than you do, or have lower standards?

It's fine to be a skeptic. Or to have tried these tools and found that they don't work well for your particular use case at this moment in time. But you shouldn't assume that people who do get value out of them are not as good at the job as you, or dumber, or slower. That's just not good practice, and it's also rude.

llmslave2 12 hours ago | parent [-]

I never said anything about anyone being worse or dumber, and definitely not slower. And keep in mind that "worse" is subjective: if something doesn't require edge-case handling or strict correctness, and bugs can be tolerated, then code with those properties isn't worse, is it?

I'm just saying that since there is such a wide range of experiences with the same tools, it's likely that developers vary in how they evaluate the output.

remich 12 hours ago | parent [-]

Okay, I certainly agree with you that different use cases can dictate different outcomes when using AI tooling. I would just encourage everyone who thinks similarly to be cautious about assuming that someone who experiences a different result with these tools is less skilled or dealing with a less difficult use case, such as one with no edge cases or a greater tolerance for bugs. It's possible that this is the case, but it is just as possible that they have found a way to work with these tools that produces excellent output.

llmslave2 12 hours ago | parent [-]

Yeah, I agree. It doesn't really have to do with skill or different use cases; it's just where your threshold is for "working" or "good".

mhitza 5 hours ago | parent | prev | next [-]

It's hard to have a conversation when critics of LLM output so often receive replies like "What, you used last week's model?! No, no, no, this one is a generational leap."

Too many people are invested in AI's success to have a balanced conversation. Things will return to normal after a market shakeout takes out a few of the larger AI companies.

zwnow 15 hours ago | parent | prev [-]

It's all a hype train though. People still believe the "AI is going to bring utopia" bullshit while the current infrastructure is being built on debt. The only reason it still exists is that all these AI companies believe in some kind of revenue beyond subscriptions. So it's all about:

Owning the infrastructure and enshittifying it (ads) once enough products are built on AI.

It's the same chokehold Amazon has on its vendors.

fragmede 15 hours ago | parent | prev [-]

Your credentials shouldn't be in your codebase to begin with!

zwnow 15 hours ago | parent [-]

.env files are a thing in tons of codebases
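
A minimal sketch of that pattern, assuming the dotenv package and a hypothetical variable name: the secret lives in an untracked .env file and is read from the environment at runtime, instead of being hardcoded in source.

    // TypeScript sketch: load .env into process.env at startup
    import "dotenv/config";

    // Hypothetical variable name; the secret itself never appears in source
    const apiKey = process.env.PAYMENT_API_KEY;
    if (!apiKey) {
      throw new Error("PAYMENT_API_KEY is not set");
    }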

iwontberude 15 hours ago | parent | next [-]

But that's at runtime; secrets are deployed in a secure manner after the code is released.

zwnow 15 hours ago | parent [-]

.env files are used during development as well, and for some services like PayPal you don't have to change the credentials, you just enable sandbox mode. If I had an LLM attached to my codebase, it would be able to read those credentials from the .env file.

This has nothing to do with deployment. I never talked about deployment.
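
A sketch of the sandbox-mode setup described above, assuming a hypothetical PAYPAL_MODE flag (the hosts are PayPal's published sandbox and live API endpoints). Note that whichever mode is active, any tool with read access to the working directory sees the same .env values the application does.

    // Switch between PayPal's sandbox and live hosts via an env flag
    const base =
      process.env.PAYPAL_MODE === "sandbox"
        ? "https://api-m.sandbox.paypal.com" // test credentials
        : "https://api-m.paypal.com"; // live credentials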

Carrok 15 hours ago | parent [-]

If you have your PayPal creds in your repository, you are doing it wrong.

zwnow 5 hours ago | parent [-]

.gitignore is a thing
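
A minimal sketch of what that looks like: the entries below keep local secret files out of version control, though the files still sit on disk where any local tool can read them.

    # .gitignore
    .env
    .env.local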

Carrok 5 hours ago | parent [-]

Which every AI tool I'm aware of respects, ignoring those files by default.

zwnow 4 hours ago | parent [-]

Why is it that they can add new env variables then?

mkozlows 14 hours ago | parent | prev [-]

If your secrets are in your repo, you've probably already leaked them.