| ▲ | jasonjei 7 hours ago |
| I’m not using AI to eliminate thinking but to free myself from rote, mundane code writing. AI is perfectly competent at writing code once a prototype is implemented. I write crude initial proof-of-concept prototypes (not commented, hardcoded variables, etc.), and AI does the productionizing. It has really allowed me to command a team of agents instead of keeping track of a bunch of humans of varying work ethic, skill, and ability to maintain high code quality. And AI is often very good at maintaining the patterns used in a code base, or even bringing them in line with industry best practices. When you use AI, you no longer write so much in programming languages; English, or whatever language you talk to the LLM in, becomes the main language. |
|
| ▲ | switchbak 5 hours ago | parent | next [-] |
| "AI is perfectly competent at writing code once a prototype is implemented" ... perfectly? I mean, it's certainly far from perfect; fixing the imperfections of the code-generating robot is where I spend most of my day. Granted, I'm not polishing up a prototype: I'm maintaining, evolving, and modernizing a non-trivial 8+-year-old product. |
| |
| ▲ | jasonjei 5 hours ago | parent [-] | | It's not perfect, but to give you an example: I needed a proxy broker to manage proxies for scraping data. GPT built me a broker that manages my fleet/inventory of proxies, buys bandwidth or new proxies as needed, and maintains a ledger of proxy health, while I worked on the core product. It helped me develop a scoring system to rank restaurants by scarcity (for example, I gather slot data for restaurants to see how scarce they are; a higher score means harder to book) as a signal of busyness. It also helped me build a backlog system that snapshots every restaurant's slot availability across 3 different providers (OpenTable, Tock, Resy). Maybe I misspoke; I'm not saying the code it wrote is perfect. But the code produced by the GPT-5.5 frontier model is easily miles better than a junior developer's. "Perfectly competent" was hyperbole, but the point stands: it is very competent. |
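(For readers curious what such a scarcity score could look like: a minimal sketch, assuming the two inputs described above, slot availability and how far out the date is. The function name, weights, and 7-day horizon are hypothetical illustrations, not the commenter's actual system.)

```python
def scarcity_score(total_slots: int, open_slots: int, days_out: int) -> float:
    """Hypothetical scarcity score in [0, 100]; higher = harder to book.

    Intuition from the thread: a restaurant that is fully booked far in
    advance is a stronger busyness signal than one that fills up day-of.
    """
    if total_slots <= 0:
        return 0.0  # no data for this restaurant/date
    fill_ratio = 1.0 - (open_slots / total_slots)   # how booked-up it is
    lead_weight = min(days_out, 7) / 7              # booked a week out counts more
    return round(100.0 * fill_ratio * (0.5 + 0.5 * lead_weight), 1)

# Fully booked a week out scores 100.0; half-booked same-day scores 25.0.
```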
|
|
| ▲ | hirvi74 5 hours ago | parent | prev | next [-] |
| What even is rote, mundane code? How much of this rote, mundane code do you honestly have in any given project? |
| |
| ▲ | jasonjei 5 hours ago | parent [-] | | I don't care to build a ledger or an ETL system. I don't care to spend my time debugging IaC Terraform systems. GPT let me focus on the core functionality of my code instead of investing hours in ancillary support systems that are well understood. Terraform is notoriously difficult even for the most skilled among us. It helped me manage and tune my Cloud Tasks queue for moving data from the scraping microservice into the main app. The cool thing is that I could let an agent manage this while I did the human task of steering it. Some of this work is novel, but much of it uses well-understood software patterns, such as scatter-gather. I absolutely do not enjoy using JavaScript or TypeScript, so it was a huge relief to get AI to help with UI work on Next.js and React Native/Expo. I've gotten it to help with a cross-cloud, cross-region Terraform setup with shared secrets on AWS and GCP, plus GHA workflows for sandbox, staging, and prod promotion deployments, and I have no background as a DevOps guy. Again, it didn't propose the exact shape of my system design, but chatting with it really allowed me to form that shape. It really is a bicycle for your brain. It helped me weigh the tradeoffs of different products (Pub/Sub vs Cloud Tasks, or the AWS equivalents). | | |
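(The scatter-gather pattern the comment names is simple to sketch: fan one request out per provider concurrently, then collect the results. A minimal Python asyncio version, where `fetch_slots` is a stand-in for real OpenTable/Tock/Resy API calls, not anything from the commenter's system:)

```python
import asyncio

async def fetch_slots(provider: str, restaurant: str) -> dict:
    # Stand-in for a real provider API call (OpenTable, Tock, Resy).
    await asyncio.sleep(0)  # simulate network I/O
    return {"provider": provider, "restaurant": restaurant, "slots": []}

async def snapshot(restaurant: str) -> list[dict]:
    providers = ["OpenTable", "Tock", "Resy"]
    # Scatter: launch one fetch per provider concurrently.
    tasks = [fetch_slots(p, restaurant) for p in providers]
    # Gather: wait for all of them; failed fetches come back as exceptions
    # instead of cancelling the whole snapshot.
    results = await asyncio.gather(*tasks, return_exceptions=True)
    return [r for r in results if not isinstance(r, Exception)]
```

A real version would add per-provider timeouts and retries, but the shape (fan out, await all, merge) is the whole pattern.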
| ▲ | skydhash 11 minutes ago | parent [-] | | If there's one thing I learned from engineering, and that my years in software development confirmed, it's that it's easy to make something that works, but way harder to put guarantees on it. That's why copy-pasting from Stack Overflow was a real strategy. |
|
|
|
| ▲ | lofaszvanitt 3 hours ago | parent | prev [-] |
| Most of the time is spent making plans and prototype outlines for the LLM to work with; otherwise the whole thing will be a horrible mess. You need elaborately crafted prompts, so you have to have a proper understanding of the underlying framework and language, or again the whole thing will be a horrible mess. I don't even know how people handle multiple agents, when a run usually finishes quite fast. And you can't do anything between runs, because you're in a constant "oh, one more minute and it finishes" state. And when it finishes, you have to evaluate the output. So you can't do deep thinking during "work", because the pattern is similar to social media: constant attention, almost instant gratification. So your attention span is, again and again, properly fucked. And the problem is that these plans are obliterated in a few hours, and then you have to analyze and iterate on the output to weed out the idiocies. And handling multiple agent outputs means continuous context switches. Well, good luck with that in the long term. If you let agents run wild and build whatever, the output will almost surely be a horrible mess. End of story. |