Draiken 14 hours ago

And you learned nothing and have no clue if what it spit out is good or not.

How can you even assume what it did is "better" if you have no knowledge of Kubernetes in the first place? It's mere hope.

Sure it gets you somewhere, but you learned nothing along the way and now depend on the LLM to maintain it forever, given you don't want to learn the skill.

I use LLMs to help verify my work, and they can sometimes spot something I missed (more often they don't, but it's at least something). I also automate some boring stuff like creating more variations of some tests, but even then I almost always have to read the output line by line to make sure the tests aren't completely bogus. Thinking about it now, it's likely better if I just ask what scenarios could be missing, because when they write the tests themselves, they screw it up in subtle ways.

It does save me some time in certain tasks like writing some Ansible, but I have to know/understand Ansible to be confident in any of it.

These "speedups" are mostly short term gains in sacrifice for long term gains. Maybe you don't care about the long term and that's fine. But if you do, you'll regret it sooner or later.

My theory is that AI is so popular because mediocrity is good enough to make money. You see the kind of crap that's built these days (even before LLMs) and it's mostly shit anyways, so whether it's shit built by people or machines, who cares, right?

Unfortunately I do, and I'd rather we improve the world we live in instead of making it worse for a quick buck.

IDK how or why learning and growing became so unpopular.

dpark 11 hours ago | parent | next [-]

> Sure it gets you somewhere, but you learned nothing along the way and now depend on the LLM to maintain it forever, given you don't want to learn the skill.

The kind of person who would vibe code a bunch of stuff and push it with zero understanding of what it does or how it does it is the kind of person who’s going to ruin the project with garbage and technical debt anyway.

Using an LLM doesn't mean you shouldn't look at the results it produces. You should still check its results. You should correct it when it doesn't meet your standards. You still need to understand it well enough to say "that seems right". This isn't about LLMs. This is just about basic care for quality.

But also, I personally don’t care about being an expert at every single thing. I think that is an unachievable dream, and a poor use of individual time and effort. I also pay people to do stuff like maintenance on my car and installing HVAC systems. I want things done well. That doesn’t mean I have to do them or even necessarily be an expert in them.

Bombthecat 11 hours ago | parent | prev | next [-]

I noticed this already after around 6 months of heavy usage. Skills decline, even information gathering etc.

jpadkins 9 hours ago | parent [-]

I think it is more accurate to say some skills are declining (or not developing) while a different set of skills are improving (the skill of getting an LLM to produce functional output).

It's similar to how someone who starts writing a lot of C may find their assembly coding skills declining (or at least not developing). I think all higher levels of abstraction create this effect.

llmslave2 7 hours ago | parent [-]

> while a different set of skills are improving (the skill of getting an LLM to produce functional output)

Lmaooooo

p410n3 13 hours ago | parent | prev [-]

I agree with both of your points, since I use LLMs for things I am not good at and don't give a single poop about. The only things I've done with LLMs are these three examples from the last two years:

- Some "temporary" tool I built years ago as a pareto-style workaround broke. (As temporary tools do after some years). Its basically a wrapper that calls a bunch of XSLs on a bmecat.xml every 3-6 months. I did not care to learn XSL back then and I dont care to do it now. Its arcane and non-universal - some stuff only works with certain XSL processors. I asked the LLM to fix stuff 20 times and eventually it got it. Probably got that stuff off my back another couple years.

- Some third party tool we use has a timer feature with a bug where it sets a cookie every time you see a timer, once per timer (for whatever reason... the timers are set to end at a certain time and there is no reason to attach them to a user). The cookies have a lifetime of one year. We run time-limited promotions twice a week, so that means two cookies a week for no reason. Eventually our WAF got triggered because it has a rule to block requests when headers are crazy long - which they were, because cookies. I asked an LLM to give me a script that clears the cookie when it's older than 7 days (see the second sketch after this list), because I remember the last time I hacked together cookie stuff it also felt very "wtf" in a javascript kinda way and I did not care to relive that pain. This stayed in place for some weeks, until the third party tool fixed the cookie lifetime.

- We list products on a marketplace. The marketplace has its own category system. We have our own category system. Frankly, theirs kinda sucks for our use case because it lumps a lot of stuff together, but we needed to "translate" the categories anyway. So I exported all the unique "breadcrumbs" we have and gave them + the categories from the marketplace to an LLM one by one, looping through the list (third sketch below). I then had an apprentice from another dept. with vastly more product knowledge than me look over the result in a day. The alternative would have been to have said apprentice do that stuff by hand, which is a task I would have personally HATED, so I tried to lessen the burden for them.
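
For the XSL wrapper in the first item, a minimal sketch of what such a tool could amount to, shelling out to xsltproc; the stylesheet names, their count, and the staging file names are all made up:

```typescript
// Hypothetical sketch: apply a chain of XSL stylesheets to a BMEcat
// catalog by shelling out to xsltproc. File names are assumptions.
import { execFileSync } from "node:child_process";
import { writeFileSync } from "node:fs";

const stylesheets = ["normalize.xsl", "map-fields.xsl", "export.xsl"]; // hypothetical
let input = "bmecat.xml";

for (const [i, xsl] of stylesheets.entries()) {
  const out = `stage${i + 1}.xml`;
  // xsltproc takes the stylesheet first, then the input document
  writeFileSync(out, execFileSync("xsltproc", [xsl, input]));
  input = out; // each stage feeds the next
}
```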
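For the cookie workaround in the second item, one way such a script could look. JS can't read a cookie's creation time, so this version remembers when each cookie was first seen; the `timer_` name prefix and the localStorage key are hypothetical stand-ins for whatever the third-party tool actually uses:

```typescript
// Hypothetical sketch: prune third-party timer cookies older than 7 days.
const MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000;
const SEEN_KEY = "cookieFirstSeen"; // assumed storage key

function pruneStaleTimerCookies(): void {
  const seen: Record<string, number> =
    JSON.parse(localStorage.getItem(SEEN_KEY) ?? "{}");
  const now = Date.now();
  for (const pair of document.cookie.split("; ")) {
    const name = pair.split("=")[0];
    if (!name.startsWith("timer_")) continue; // assumed prefix
    if (seen[name] === undefined) {
      seen[name] = now; // first sighting: start the clock
    } else if (now - seen[name] > MAX_AGE_MS) {
      // Expire the cookie by re-setting it with a date in the past.
      document.cookie = `${name}=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/`;
      delete seen[name];
    }
  }
  localStorage.setItem(SEEN_KEY, JSON.stringify(seen));
}

pruneStaleTimerCookies();
```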
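And for the category translation in the third item, a sketch of the loop using the openai npm client; the model name and prompt are assumptions, and the human review step (the apprentice) stays mandatory:

```typescript
// Hypothetical sketch: map our breadcrumbs onto marketplace categories
// one at a time via an LLM. Model and prompt wording are assumptions.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function mapCategories(
  breadcrumbs: string[],
  marketplaceCategories: string,
): Promise<Map<string, string>> {
  const mapping = new Map<string, string>();
  for (const crumb of breadcrumbs) {
    const res = await client.chat.completions.create({
      model: "gpt-4o-mini", // assumed; any cheap model would do
      messages: [
        {
          role: "system",
          content: `Pick the single best matching category from this list:\n${marketplaceCategories}\nAnswer with the category only.`,
        },
        { role: "user", content: crumb },
      ],
    });
    mapping.set(crumb, res.choices[0].message.content?.trim() ?? "");
  }
  return mapping; // a human still reviews this output afterwards
}
```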

All these examples ran on the free tier of whatever I used.

We also use vector search at work: 300,000 products, with weekly updates to the vector db.

We pay 250€/mo for all of the qdrant instances across all environments and like 5-10€ in openai tokens. And we can easily switch whatever embedding model we use at any time. We can even self-host a model.
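
A minimal sketch of what that weekly refresh could look like with the qdrant js client and openai embeddings; the collection name, model, endpoint URL, and product shape are assumptions, and swapping embedding models is just a matter of changing one parameter (or pointing at a self-hosted endpoint):

```typescript
// Hypothetical sketch: embed product texts with OpenAI, upsert into Qdrant.
import OpenAI from "openai";
import { QdrantClient } from "@qdrant/js-client-rest";

const openai = new OpenAI();
const qdrant = new QdrantClient({ url: "http://localhost:6333" }); // assumed URL

async function refreshProducts(products: { id: number; text: string }[]) {
  // Real code would batch: the embeddings API caps input size,
  // and 300k products won't fit in one call.
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small", // swappable at any time
    input: products.map((p) => p.text),
  });
  await qdrant.upsert("products", {
    wait: true,
    points: products.map((p, i) => ({
      id: p.id,
      vector: res.data[i].embedding,
      payload: { text: p.text },
    })),
  });
}
```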