not_that_d 3 days ago

My experience with the current tools so far:

1. It helps me get going with new languages, frameworks, utilities, or full greenfield stuff. After that I spend a lot of time parsing the code to understand what it wrote, to the point that I end up kind of "trusting" it because reviewing everything is too tedious and, well, "it works".

2. When working with languages or frameworks that I know, I find it makes me unproductive: the time I spend writing a good enough prompt with the correct context is almost the same as, or more than, writing the code myself. And to be honest, the solution it gives me works for the specific case, but it looks like junior code, with pitfalls that are not obvious unless you have the experience to spot them.

I used it with TypeScript, Kotlin, Java, and C++, for different scenarios: websites, ESPHome components (ESP32), backend APIs, Node scripts, etc.

Bottom line: useful for hobby projects, scripts, and prototypes, but for enterprise level code it is not there.

brulard 3 days ago | parent | next [-]

It was like this for me for about a year (using Cline with Sonnet and Gemini), until Claude Code came out and until I learned how to keep the context really clean. The key breakthrough was treating the AI as an architect/implementer rather than a code generator.

Most recently, I first ask CC to create a design document for what we are going to do. It has instructions to look into the relevant parts of the code and docs and to reference them. I review it, and after a few back-and-forths we have defined what we want to do. The next step is to chunk it into stages, and those into smaller steps. All this may take a few hours, but once it is well defined, I clear the context. I then let it read the docs and implement one stage. This mostly goes well, and if it doesn't, I either try to steer it to correct course or, if it's too far off, I improve the docs and start the stage over. After a stage is complete, we commit, clear context, and proceed to the next stage.

This way I spend maybe a day creating a feature that would otherwise take me two or three. And at the end we have a design document, unit tests, Storybook pages, and the things that usually get overlooked, like accessibility and ARIA attributes.

At the very end I like to have another model do a code review.

Even if this didn't make me faster right now, I would consider it future-proofing myself as a software engineer, since these tools are improving quickly.

imiric 3 days ago | parent | next [-]

This is a common workflow that most advanced users are familiar with.

Yet even following it to a T, and being really careful with how you manage context, the LLM will still hallucinate, generate non-working code, steer you in wrong directions and dead ends, and just waste your time in most scenarios. There's no magical workflow or workaround for avoiding this. These issues are inherent to the technology, and have been since its inception. The tools have certainly gotten more capable, and the ecosystem has matured greatly in the last couple of years, but these issues remain unsolved. The idea that people who experience them are not using the tools correctly is insulting.

I'm not saying that the current generation of this tech isn't useful. I've found it very useful for the same scenarios GP mentioned. But the above issues prevent me from relying on it for anything more sophisticated than that.

brulard 3 days ago | parent [-]

> These issues are inherent to the technology

That's simply false. Even if LLMs don't produce correct and valid code on the first shot 100% of the time, if you use an agent, it's simply a matter of iterations. I have Claude Code connected to Playwright and to context7 for docs, so it can iterate by itself if there are syntax errors, runtime errors, or problems with the data on the backend side. Currently I have near-zero cases where it does not produce valid working code. If it is incorrect in some aspect, it is not that hard to steer it toward a better solution or to fix it yourself.
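To make this concrete, here is a minimal sketch of the kind of Playwright check such an agent can run and re-run until it passes. The route, selectors, and expected text are hypothetical placeholders, not anything from the setup described above:

    // Illustrative only: a tiny end-to-end check an agent could execute and
    // iterate against. Route, selectors, and copy are hypothetical.
    import { test, expect } from '@playwright/test';

    test('feature page renders and saves without errors', async ({ page }) => {
      // Collect uncaught page errors so failures surface runtime problems too.
      const errors: string[] = [];
      page.on('pageerror', (err) => errors.push(err.message));

      await page.goto('http://localhost:3000/feature'); // hypothetical dev URL

      await page.getByRole('button', { name: 'Save' }).click();
      await expect(page.getByText('Saved')).toBeVisible();

      expect(errors).toEqual([]);
    });

A failing assertion or collected page error gives the agent something concrete to fix on the next iteration, which is the whole point of wiring Playwright into the loop.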

And even if it fails to implement most of the stages of the plan, it's not all wasted time. I have brainstormed ideas, formed the requirements and feature specifications, and have clear documentation, an implementation plan, unit tests, etc., which I can use to code it myself. So even in the worst-case scenario my development workflow is improved.

mathiaspoint 3 days ago | parent | next [-]

It definitely isn't. LLMs often end up stuck in weird corners they just don't get, and they need someone familiar with the theory of what they're working on to unstick them. If the agent is the same model as the code generator, it won't be able to do that on its own.

brulard 3 days ago | parent | next [-]

I was getting into stuck states with Gemini and, to a lesser extent, with Sonnet 4, but my cases were resolved by Opus. I think it is mostly due to the size of the task: if you split it into smaller chunks in advance, all these models have a much higher probability of resolving it.

sawjet 3 days ago | parent | prev [-]

Skill issue

nojs 3 days ago | parent | prev [-]

Could you explain your exact Playwright setup in more detail? I've found that Claude really struggles to end-to-end test complex features that require browser use. It gets stuck for several minutes trying to find the right button to click, for example.

brulard 3 days ago | parent [-]

No special setup, just something along the lines of "test with Playwright" in the process list. It can get stuck, but for me it doesn't happen often enough to care. If it happens, I push it in the right direction.

aatd86 3 days ago | parent | prev | next [-]

For me it's the opposite. As long as I ask for small tasks, or error checking, it can help. But I'd rather think of the overall design myself because I tend to figure out corner cases or superlinear complexities much better. I develop better mental models than the NNs. That's somewhat of a relief.

Also the longer the conversation goes, the less effective it gets. (saturated context window?)

brulard 3 days ago | parent [-]

I don't think that's the opposite. I have an idea of what I want and, to some extent, how I want it to be done. The design document starts with a brainstorming session where I throw all my ideas at the agent and we iterate together.

> Also the longer the conversation goes, the less effective it gets. (saturated context window?)

Yes, this is exactly why I said the breakthrough came for me when I learned how to keep the context clean. That means that multiple times in the process I ask the model to put the relevant parts of our discussion into an MD document, which I may review and edit, and then I reset the context with /clear. Then I have it read just the relevant things from the MD docs and we continue.

john-tells-all 3 days ago | parent | prev | next [-]

I've seen this referred to as Chain of Thought. I've used it with great success a few times.

https://martinfowler.com/articles/2023-chatgpt-xu-hao.html

ramshanker 3 days ago | parent | prev [-]

Same here. A small variation: I explicitly use the website to manage what context it gets to see.

brulard 3 days ago | parent [-]

What do you mean by website? An HTML doc?

ramshanker 3 days ago | parent [-]

I mean the website of AI providers. chatgpt.com , gemini.google.com , claude.ai and so on.

spaceywilly 3 days ago | parent [-]

I've had more success this way as well. I will use the model via the web UI, paste in the relevant code, and ask it to implement something. It spits out the code, I copy it back into the IDE, and build. I tried Claude Code but I find it goes off the rails too easily. I like the chat through the UI because it explains what it's doing like a senior engineer would.

brulard 3 days ago | parent [-]

Well, this is the way we have been able to do it for two years already, but basically you are acting as the transport layer for the process, which cannot be efficient. If you really want tight control over exactly what the LLM sees, then that's still an option. But you only get so far with this approach.

viccis 3 days ago | parent | prev | next [-]

I agree. For me it's a modern version of that good ol' "rails new" scaffolding with Ruby on Rails that got you started with a project structure. It makes sense, because LLMs are particularly good at tasks that require little more than near-perfect knowledge of the documentation of the tooling involved, and creating a well-organized scaffold for a greenfield project falls squarely in that area.

For legacy systems, especially ones in which a lot of the things they do are because of requirements from external services (whether that's tech debt or just normal growing complexity in a large connected system), it's less useful.

And for tooling that moves fast and breaks things (looking at you, Databricks), it's basically worthless. People have already brought attention to the fact that it will only be as current as its training data, so if a bunch of terminology, features, and syntax have changed since then (ahem, Databricks), you would have to do some kind of prompt engineering with up-to-date docs for it to have any hope of succeeding.

pvorb 3 days ago | parent [-]

I'm wondering what exact issue you are referring to with Databricks? I can't remember a time I had to change a line I wrote during the past 2.5 years I've been using it. Or are you talking about non-breaking changes?

viccis 2 days ago | parent [-]

They have changed a lot of their DLT (not even called that anymore lol, it's Lakeflow Pipelines now I think) syntax. I tried asking ChatGPT to convert a very simple Python one to Spark SQL, and it gave me a bunch of outdated SQL syntax.

Aside from that, if you use their Python connector package, it's a shit show to put it mildly. For example, 15.4 works with serverless but tells you (via deprecation warning) it doesn't and that you need to use 15.1 (which lacks a lot of variant stuff). So then you decide screw it I'm gonna just update to 16, except that serverless (which works on 15.4) doesn't work on 16.0 or 17.0.

jeremywho 3 days ago | parent | prev | next [-]

My workflow is to use Claude desktop with the filesystem mcp server.

I give Claude the full path to a couple of relevant files related to the task at hand, i.e., where the new code should hook in or where the current problem is.

Then I ask it to solve the task.

Claude will read the files, determine what should be done and it will edit/add relevant files. There's typically a couple of build errors I will paste back in and have it correct.

Current code patterns & style will be maintained in the new code. It's been quite impressive.

This has been with Typescript and C#.

I don't agree that what it has produced for me is hobby-grade only...

taberiand 3 days ago | parent | next [-]

I've been using it the same way. One approach that's worked well for me is to start a project and first ask it to analyse and make a plan with phases for what needs to be done, save that plan into the project, then get it to do each phase in sequence. Once it completes a phase, have it review the code to confirm if the phase is complete. Each phase of work and review is a new chat.

This way helps ensure it works on manageable amounts of code at a time and doesn't overload its context, but also keeps the bigger picture and goal in sight.

mnky9800n 3 days ago | parent [-]

I find that sometimes this works great, and sometimes it happily tells you everything works while your code fails successfully, and if you aren't reading all the code you would never know. It's kind of strange, actually. I don't have a good feel for when it will get everything correct and when it will fail, and that's what is disconcerting. I would be happy to be given advice on how to untangle when it's good and when it's not. I love chatting with Claude Code about code. It's annoying that it doesn't always get it right and also doesn't really react to failure like a human would. At least in my experience, anyway.

taberiand 3 days ago | parent [-]

Of course, everything needs to be verified - I'm just trying to figure out a process that enables it to work as effectively as it can on large code bases in a structured way. Committing each stage to git, fixing issues and adjusting the context still comes into play.

hamandcheese 3 days ago | parent | prev | next [-]

Any particular reason you prefer that over Claude code?

jeremywho 3 days ago | parent [-]

I'm on Windows. Claude Code via WSL hasn't been as smooth a ride.

JyB 3 days ago | parent | prev | next [-]

That's exactly how you should do it. You can also plug in an MCP for your CI or mention cli.github.com in your prompt to also make it iterate on CI failures.

Next, you use Claude Code instead and have several instances work on their own clones, in their own workspaces and branches, in the background, so you can still iterate yourself on some other topic in your personal clone.

Then you check its tab from time to time and optionally check out its branch if you'd rather do some updates yourself. It's so ingrained in my day-to-day flow now; it's been super impressive.

nwatson 3 days ago | parent | prev [-]

One can also integrate with, say, a running PyCharm instance via the JetBrains IDE MCP server. Claude Desktop can then interact directly with PyCharm.

alfalfasprout 3 days ago | parent | prev | next [-]

The bigger problem I'm seeing is that engineers who become over-reliant on vibe coding tools are starting to lose context on how systems are designed and how they work.

As a result, their productivity might go up on simple "ticket-like tasks" where it's basically just straightforward implementation (find the file(s) to edit, modify them, test them), but when they start using it for all their tasks, suddenly they don't know how anything works. Or worse, they let the LLM dictate and bad decisions are made.

These same people are also very dogmatic on the use of these tools. They refuse to just code when needed.

Don't get me wrong, this stuff has value. But I just hate seeing how it's made many engineers complacent and accelerated their ability to add to tech debt like never before.

pqs 3 days ago | parent | prev | next [-]

I'm not a programmer, but I need to write Python and Bash programs to do my work. I also have a few websites and other personal projects. Claude Code helps me implement those little projects I've been wanting to do for a very long time but couldn't, due to lack of coding experience and time. Now I'm doing them. I can also improve my Emacs environment, because I can create Lisp functions with ease. For me, this is the perfect tool, because now I can do those little projects I couldn't do before, making my life easier.

chamomeal 3 days ago | parent | next [-]

LLMs totally kick ass for making bash scripts

dboreham 3 days ago | parent [-]

Strong agree. Bash is so annoying that there have been many scripts that I wanted to have, but just didn't write (did the thing manually instead) rather than go down the rabbit hole of Bash nonsense. LLMs turn this on its head. I probably have LLMs write 1-2 bash scripts a week now, that I commit to git for use now and later.

unshavedyak 3 days ago | parent | next [-]

Similarly, my Nix[OS] environment had a ton of annoyances and needed updates that I didn't care to do. My first week with Claude saw tons of Nix improvements across my three machines (desktop, server, MacBook), and it's a much richer environment now.

Claude did great at Nix, something I struggled with due to the lack of documentation. It was far from perfect, but it usually pointed me toward the answer, which I could later refine with it. Felt magical.

elcritch 3 days ago | parent [-]

Similarly, I've been making Ansible playbooks using LLMs of late, often by converting shell scripts. Playbooks are pretty great and easier to make idempotent than shell. But without Claude I'd forget the syntax or commands and it'd take forever to set up.

int_19h 3 days ago | parent | prev [-]

Why not use a more sensible shell, e.g. Fish?

chamomeal 3 days ago | parent [-]

Also great at making fish scripts!

Bash scripts are pretty much universal though. I can send them to my coworkers. I can use them in my awful prod-debugging-helm environment.

zingar 3 days ago | parent | prev | next [-]

Big +1 to customizing Emacs! It used to feel so out of reach, but now I've basically rolled my own Cursor.

dekhn 3 days ago | parent | prev | next [-]

For context I'm a principal software engineer who has worked in and out of machine learning for decades (along with a bunch of tech infra, high performance scientific computing, and a bunch of hobby projects).

In the few weeks since I've started using Gemini/ChatGPT/Claude, I've

1. had it read my undergrad thesis and the paper it's based on, implementing correct PyTorch code for featurization and training, along with some aspects of the original paper that I didn't include in my thesis. I had been waiting until retirement to take on this task.

2. had it write a bunch of different scripts for automating tasks (typically scripting a few cloud APIs) which I then ran, cleaning up a long backlog of activities I had been putting off.

3. had it write a Yahtzee game and implement a decent "pick a good move" feature. It took a few tries, but then it output a fully functional PyQt5 desktop app that played the game. It beat my all-time top score in the first few plays.

4. tried to convert the Yahtzee game to an Android app so my son and I could play. This has continually failed with every chat agent I've tried, typically getting stuck on Gradle or the Android SDK. This matches my own personal experience with Android.

5. had it write Python and web-based G-code senders that allowed me to replace some tools I didn't like (UGS). Adding real-time visualization of the toolpath and objects wasn't that hard either. It took about 10 minutes and cleaned up a number of issues I saw with my own previous implementations (multithreading). It was stunning how quickly it can create fully capable web applications using JavaScript and external libraries.

6. had it implement a G-code toolpath generator for basic operations. At first I asked it to write Rust code, which turned out to be an issue (mainly because the OpenCascade bindings are incomplete): it generated mostly functional code but left it to me to implement the core algorithm. I asked it to switch to C++ and it spit out correct code the first time. I spent more time getting CMake working on my system than I did writing the prompt and waiting for the code.

7. had it write a script to extract subtitles from a movie, translate them into my language, and re-mux them back into the video. I was able to watch the movie less than an hour after having the idea, and most of that time was just customizing my prompt to get several refinements.

8. had it write a fully functional chemistry structure variational autoencoder that trains faster and is more accurate than any I previously implemented.

9. various other scientific/imaging/photography-related code, like implementing multi-camera rectification so I can view obscured objects head-on from two angled cameras.

With a few caveats (Android projects, Rust-based toolpath generation), I have been absolutely blown away by how effective the tools are (especially when used in an agent that has terminal and file read/write capabilities). It's like having a mini-renaissance in my garage, unblocking things that would have taken me a while or been so frustrating I'd have given up.

I've also found that AI summaries in Google search are often good enough that I don't click through to pages (Wikipedia, papers, tutorials, etc.). The more experience I get, the more limitations I see, but many of those limitations are simply due to the extraordinary level of unnecessary complexity required to do nearly anything on a modern computer (see my comments above about Android apps and Gradle).

MangoCoffee 3 days ago | parent | prev [-]

At the end of the day, all tools are made to make their users' lives easier.

I use GitHub Copilot. I recently did a vibe-coded hobby project: a command-line tool that displays my computer's IP, hard drives, disk space, CPU, etc. GPT-4.1 did the coding and Claude did the bug fixing.

The code it wrote worked, and I even asked it to create a PowerShell script to build the project for release.
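For a sense of scale, a tool of that shape fits in a few dozen lines of TypeScript on Node using only built-in modules. The sketch below is an illustrative approximation, not the actual Copilot-generated project; the field choices, the statfs path handling, and the Node 18.15+ requirement are assumptions:

    // Hypothetical sketch of a small system-info CLI (Node 18.15+, TypeScript).
    // Fields and formatting are illustrative, not the tool described above.
    import os from 'node:os';
    import { statfs } from 'node:fs/promises';

    function firstExternalIPv4(): string {
      for (const addrs of Object.values(os.networkInterfaces())) {
        for (const addr of addrs ?? []) {
          if (addr.family === 'IPv4' && !addr.internal) return addr.address;
        }
      }
      return 'unknown';
    }

    async function main(): Promise<void> {
      const gib = (bytes: number) => (bytes / 1024 ** 3).toFixed(1) + ' GiB';

      console.log(`Host:   ${os.hostname()} (${os.platform()} ${os.arch()})`);
      console.log(`IP:     ${firstExternalIPv4()}`);
      console.log(`CPU:    ${os.cpus()[0]?.model ?? 'unknown'} x${os.cpus().length}`);
      console.log(`Memory: ${gib(os.freemem())} free of ${gib(os.totalmem())}`);

      // statfs path is a simplification: root on Unix, a drive letter on Windows.
      const disk = await statfs(os.platform() === 'win32' ? 'C:\\' : '/');
      console.log(`Disk:   ${gib(disk.bavail * disk.bsize)} free of ${gib(disk.blocks * disk.bsize)}`);
    }

    main().catch((err) => { console.error(err); process.exit(1); });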

dfedbeef 3 days ago | parent [-]

Try typing Ctrl+Shift+Escape.

MangoCoffee 2 days ago | parent [-]

LOL. I just want to try out vibe coding on something small with the persona-based approach described here: https://humanwhocodes.com/blog/2025/06/persona-based-approac...

apimade 3 days ago | parent | prev | next [-]

Many who say LLMs produce “enterprise-grade” code haven’t worked in mid-tier or traditional companies, where projects are held together by duct tape, requirements are outdated, and testing barely exists. In those environments, enterprise-ready code is rare even without AI.

For developers deeply familiar with a codebase they've worked on for years, LLMs can be a game-changer. But in most other cases, they're best for brainstorming, creating small tests, or prototyping. When mid-level or junior developers lean heavily on them, the output may look useful... until a third-party review reveals security flaws, performance issues, and built-in legacy debt.

That might be fine for quick fixes or internal tooling, but it’s a poor fit for enterprise.

bityard 3 days ago | parent | next [-]

I work in the enterprise, although not as a programmer, but I get to see how the sausage is made. And describing code as "enterprise grade" would not be a compliment in my book. Very analogous to "contractor grade" when describing home furnishings.

typpilol 3 days ago | parent | prev | next [-]

I've found that having a ton of linting tools can help the AI write much better and more secure code.

My ESLint config is a mess, but the code it writes comes out pretty good. It takes a few iterations after the lint errors pop up for it to rewrite things, but the code it ends up with is way better.
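For illustration, a strict, type-aware flat config along the lines below gives the agent machine-checkable errors it can iterate on. This is a sketch using typescript-eslint, not the actual config described above; the rule choices are assumptions:

    // eslint.config.ts -- illustrative strict setup with typescript-eslint v8.
    // Rules are common type-aware picks, not the config referenced above.
    import tseslint from 'typescript-eslint';

    export default tseslint.config(
      ...tseslint.configs.strictTypeChecked,
      {
        languageOptions: {
          parserOptions: {
            projectService: true,                  // type-aware linting
            tsconfigRootDir: import.meta.dirname,  // requires Node 20.11+
          },
        },
        rules: {
          // Surface classic agent mistakes as hard errors it can fix itself.
          '@typescript-eslint/no-explicit-any': 'error',
          '@typescript-eslint/no-floating-promises': 'error',
          '@typescript-eslint/no-unused-vars': ['error', { argsIgnorePattern: '^_' }],
          'no-console': 'warn',
        },
      },
    );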

Aeolun 3 days ago | parent | prev [-]

Umm, Claude Code's output is a lot better than a lot of the enterprise-grade code I see. And it actually learns from mistakes with a properly crafted instruction xD

cube00 3 days ago | parent [-]

>And it actually learns from mistakes with a properly crafted instruction

...until it hallucinates and ignores said instruction.

hoppp 3 days ago | parent | prev | next [-]

I used it with TypeScript, Go, SQL, and Rust.

Using it with Rust is just horrible, IMHO. Lots and lots of errors; I can't wait to be done with this Rust project already. But the project itself is quite complex.

Go, on the other hand, is super productive, mainly because the language is already very simple. I can move 2x as fast.

TypeScript is fine; I use it for React components, and it will do the animations I'm too lazy to do...

SQL and PostgreSQL are fine. I can do it without the AI too; I just don't like writing stored functions because of the boilerplate-heavy syntax, and a little speed-up saves me from carpal tunnel.

jiggawatts 3 days ago | parent | prev | next [-]

Something I’ve discovered is that it may be worthwhile writing the prompt anyway, even for a framework you’re an expert with. Sometimes the AIs will surprise me with a novel approach, but the real value is that the prompt makes for excellent documentation of the requirements! It’s a much better starting point for doc-comments or PR blurbs than after-the-fact ramblings.

epolanski 3 days ago | parent | prev | next [-]

I find your experience strikingly different from mine. I'll share my flow:

- Step A: ask the AI to write a featureA-requirements.md file at the root of the project. I give it a general description of the task, then have it ask me as many questions as possible to refine the user stories and requirements. It generally comes up with a dozen or more questions, several of which I would not have thought about and would only have discovered much later. Time: between 5 and 40 minutes. It's very detailed.

- Step B: after we refine the requirements (functional and non-functional), we write a todo plan together as featureA-todo.md. I refine the plan again; this is generally shorter than the requirements, and I'm usually done in less than 10 minutes.

- Step C: implementation phase. Again the AI does most of the job; I correct it at each edit and point out flaws. Are there cases where I would have done it faster myself? Maybe. I can still jump into the editor and make the changes I want. This step generally includes comprehensive tests for all the requirements and edge cases we found in step A: functional, integration, and E2E. The time varies, but it is highly tied to the quality of phases A and B. It can be as little as a few minutes (especially when we really did come up with the most effective plan) and as much as a few hours.

- Step D: documentation and PR description. With all of this context (in the requirements and todos), updating any relevant documentation and writing the PR description is at this point a very quick step.

Throughout all of this, I have text files with precise coding style guidelines, comprehensive READMEs to give precise context, etc., that get referenced in the context.

Bottom line: you might be doing something profoundly wrong, because in my case all of this planning, requirements gathering, testing, documenting, etc. is pushing me to deliver much higher quality engineering work.

mcintyre1994 3 days ago | parent [-]

You'd probably like Kiro; it seems to be built specifically for this sort of spec-driven development.

epolanski 2 days ago | parent [-]

How would it be better than what I'm doing with Claude?

drums8787 3 days ago | parent | prev | next [-]

I guess my experience is the opposite. I am having a great time using Claude to quickly implement little "filler features" that require a good amount of typing and pulling from/editing different sources: nothing that requires much brainpower beyond remembering the details of some subsystem, finding the right files, and typing.

Once the code is written, review, test and done. And on to more fun things.

Maybe what has made it work is that these tasks have all fit comfortably within existing code patterns.

My next step is to break down bigger and more complex changes into Claude-friendly bites to save me more grunt work.

unlikelytomato 3 days ago | parent [-]

I wish I shared this experience. There are virtually no filler features for me to work on. When things feel like filler on my team, it's generally a sign of tech debt, and we wouldn't want to have it generate all the code that would take. What are some examples of filler features for you?

On the other hand, it does cost me about 8 hours a week debugging issues created by bad autocompletes from my team. The last 6 months have gotten really bad with that. But that is a different issue.

flowerthoughts 3 days ago | parent | prev | next [-]

I predict microservices will get a huge push forward. The question then becomes whether we're good enough at saying "Claude, this is too big now, you have to split it into two services" or not.

If LLMs maintain the code, the API boundary definitions/documentation and orchestration, it might be manageable.

urbandw311er 3 days ago | parent | next [-]

Why not just cleanly separated code in a single execution environment? No need to actually run the services in separate execution environments just for the sake of an LLM being able to parse it, that’s crazy! You can just give it the files or folders it needs for the particular services within the project.

Obviously there are still other reasons to create microservices if you wish, but this does not need to be another one.
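A minimal sketch of what such a boundary can look like inside one codebase, with hypothetical file names and APIs: each in-process "service" exposes a small interface file, and the LLM only needs to be given that file plus the folder it is actually changing.

    // Hypothetical modular-monolith boundary; names and layout are illustrative.

    // src/billing/api.ts -- the only file other modules (or the LLM) must read.
    export interface BillingService {
      createInvoice(customerId: string, amountCents: number): Promise<string>;
    }

    // src/billing/internal.ts -- implementation details stay behind the boundary.
    export class InMemoryBillingService implements BillingService {
      private invoices = new Map<string, number>();

      async createInvoice(customerId: string, amountCents: number): Promise<string> {
        const id = `${customerId}-${this.invoices.size + 1}`;
        this.invoices.set(id, amountCents);
        return id;
      }
    }

    // src/checkout/checkout.ts -- consumer depends only on the interface,
    // so no network hop is needed to keep the modules separable.
    export async function checkout(billing: BillingService, customerId: string) {
      return billing.createInvoice(customerId, 4999);
    }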

fsloth 3 days ago | parent | prev | next [-]

Why microservices? Monoliths with a code-golfed minimal implementation size (but high-quality architecture), implemented in a strongly typed language, would consume far fewer tokens (and thus would be cheaper to maintain).

arwhatever 3 days ago | parent | prev [-]

Won’t this cause [insert LLM] to lose context around the semantics of messages passed between microservices?

You could then put all services in one repo, or point the LLM at the X folders containing the source for all X services, but then it doesn't seem like you'll have gained anything, and at the cost of added network calls and more infra management.

stpedgwdgfhgdd 3 days ago | parent | prev | next [-]

For enterprise software development, CC is definitely there. A 100k-line Go PaaS platform with a microservices architecture in a monorepo is manageable.

The prompt needs to be good, but in plan mode it will iteratively figure it out.

You need to have automated tests. For enterprise software development that actually goes without saying.

dclowd9901 3 days ago | parent | prev | next [-]

It also steps right over easy optimizations. I was doing a query on some GitHub data (tedious work) and, rather than filtering down first using the GraphQL search method, it wanted to comb through all PRs individually. This seems like something it probably should have figured out.

mnky9800n 3 days ago | parent | prev | next [-]

Yea, that's right. It's kind of annoying how useful it is for hobby projects and how suddenly useless it is on anything at work. Haha. I love Claude Code for some stuff (like generating a notebook to analyse some data). But it really disconnects you from the problem you are solving unless you go through everything it writes. And I'm really bullish on AI coding tools, haha. For example:

https://open.substack.com/pub/mnky9800n/p/coding-agents-prov...

johnisgood 3 days ago | parent | prev | next [-]

> but for enterprise level code it is not there

It is good for me in Go but I had to tell it what to write and how.

sdesol 3 days ago | parent [-]

I've been able to create a very advanced search engine for my chat app that is more than enterprise-ready. I've spent a decade thinking about search, but in a different language. Like you, I needed to explain to the LLM what I knew about writing a search engine in Java so that it could write one in JavaScript using libraries I did not know, and it got me 95% of the way there.

It is also incredibly important to note that the 5% I needed to figure out was the difference between throwaway code and something useful. You absolutely need domain knowledge, but LLMs are more than enterprise-ready in my opinion.

Here is some documentation on how my search solution is used in my app to show that it is not a hobby feature.

https://github.com/gitsense/chat/blob/main/packages/chat/wid...

johnisgood 3 days ago | parent [-]

Thanks for your reply. I am in the same boat, and it works for me, as it seems to work for you. So as long as we are effective with it, why not? Of course, I am not doing things blindly and expecting good results.

tonyhart7 3 days ago | parent | prev | next [-]

It depends on the model, but Sonnet is more than capable for enterprise code.

When you're stuck with Claude doing dumb shit, it's because you didn't give the model enough context to know the system better.

After following spec-driven development, working with an LLM in a large codebase becomes so much easier than without it; it's a heaven-and-hell difference.

But token costs also increase exponentially, so there's that.

fpauser 3 days ago | parent | prev | next [-]

Same conclusion here. It is also good for analyzing existing codebases and generating documentation for undocumented projects.

j45 3 days ago | parent [-]

It's quite good at this; I have been tying in Gemini Pro for this too.

amelius 3 days ago | parent | prev | next [-]

It is very useful for small tasks like fixing network problems, or writing regexp patterns based on a few examples.

MarcelOlsz 3 days ago | parent [-]

Here's how YOU can save $200/mo!

risyachka 3 days ago | parent | prev | next [-]

Pretty much my experience too.

I usually go with option 2 and just write it myself, as it takes the same time but keeps my skills sharp.

fpauser 3 days ago | parent [-]

Not degenerating is really challenging these days. There are the bubbles that simulate multiple realities for us and try to untrain our logical thinking. And there are the LLMs that try to convince us that thinking for ourselves is unproductive. I wonder when this digitalophilia will suddenly turn into digitalophobia.

sciencejerk 3 days ago | parent [-]

It's happening, friend, don't let the AI hype fool you. I'm detecting quite a bit of reluctance and lack of 100% buy-in on AI coding tools and trends, even from your typically tech-loving Software Engineers.

therealpygon 3 days ago | parent | prev [-]

I mostly agree, with the caveat that I would say it can certainly be useful when used appropriately as an “assistant”. NOT vibe coding blindly and hoping what I end up with is useful. “Implement x specific thing” (e.g. add an edit button to component x), not “implement a whole new epic feature that includes changes to a significant number of files”. Imagine meeting a house builder and saying “I want a house”, then leaving and expecting to come back to exactly the house you dreamed of.

I get why: it's a test of just how intuitive the model can be at planning and execution, which drives innovation more than 1% differences in benchmarks ever will. I encourage that innovation in the hobby arena or when dogfooding your AI engineer. But as a replacement developer in an enterprise where an uncaught mistake could cost millions? No way. I wouldn't even want to be the manager of the AI engineering team when they come looking for the only real person to blame for the mistake not being caught.

For additional checks/tasks as a completely extra set of eyes, building internal tools, and scripts? Sure. It's incredibly useful for all sorts of non-application-development tasks. I've not written a batch or bash script myself in forever… you just don't really need to do it yourself much anymore. The linear flow of most batch/bash scripts (like you mentioned) couldn't be a more suitable domain.

Also, with a basic prompt, it can be an incredibly useful rubber duck. For example, I'll say something like "how do you think I should solve X problem?" (with tools for the codebase and such, of course), and then, having rejected and pushed back on every suggestion over time, I end up working through the problem and arrive at a more concrete mental design. Think "over-eager junior know-it-all that tries to be right constantly" without the person attached, and you get a better idea of what kind of LLM output to expect, including following false leads to test your ideas. For me it's less about wanting a plan from the LLM and more about talking through the problems I think my plan could solve better, when more things are considered outside the LLM's direct knowledge or access.

“We can’t do that, changing X would break Y external process because Z. Summarize that concern into a paragraph to be added to the knowledge base. Then, what other options would you suggest?”