ddosmax556 3 hours ago

This article assumes that AI only has an impact on the development phase, which is certainly not true. It can speed up every part of the process, including ideation, legal, documentation, development, and deployment.

Ideation: Throw ideas back & forth, cross reference with knowledge bases, generate design documents. Documentation: Generate large parts of docs. Development: Clear. Deployment: Generate deployment manifests, tooling around testing, knowledge around cloud platforms.

Nearly every step can be done better & faster with AI. Not all of them, but a lot of them.

Even development. Yes, some part of your job involves understanding the problem better than anyone & making solutions. But some parts are also pure chore. If you know you need a button doing X, then designing that button, placing it, figuring out edge cases with hover & press states, connecting it to the backend, etc. - this is chore work that can be skipped. The same principle applies to almost all steps.

RaftPeople 3 hours ago | parent | next [-]

I tend to agree with the article.

A typical example of trying to add a significant new capability involves many meetings (days, weeks, months, etc.) with the business to understand how their work flows between systems X, Y, and Z, as well as all of the significant exceptions (e.g. we handle subset A this way and subset B that way, but for the final step we blend those groups together, except for subset C, which requires special process 97).

Then, with that understanding, comes the system solutioning across multiple systems, which can be a blend of internal and vendor systems, each with a different level of customizability that pushes the shape of the final solution in different directions.

There is certainly value in speeding up coding, but it's just one piece of the puzzle, and today LLMs can't help with gathering the domain information and defining a solution.

wise0wl 3 hours ago | parent [-]

I've seen proposals for Product Managers to define those conditions themselves by speaking with the LLM. A living architectural diagram is constructed and the graph is updated until all cases are covered, and then the LLM writes the code, writes the validations, pushes to CI environments, runs tests, schedules the prod deploy (by looking at the company event schedule), gets CAB approval, deploys the code, tests in prod, and fixes regressions.

I'm not saying this is the correct thing, but companies are implementing it and it is "working". I don't think keeping our heads in the sand is helping.
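For what it's worth, the loop being proposed fits in a few lines. Everything below is hypothetical: the class names, methods, and the idea that "coverage" is just a dict of handled cases are invented for illustration, not any real product's API.

```python
# Hypothetical sketch of the PM-driven pipeline described above.
# ToyLLM and ToyCI are stand-ins invented for illustration; a real
# system would wire these steps to an actual model, CI, and CAB process.

class ToyLLM:
    def design(self, case, graph):
        return f"component handling {case}"

    def write_code(self, graph):
        return f"code covering {len(graph)} cases"

    def write_validations(self, graph):
        return [f"validation for {case}" for case in graph]

class ToyCI:
    def push(self, code, tests):
        self.code, self.tests = code, tests

    def run_tests(self):
        return len(self.tests) > 0

def pm_llm_pipeline(cases, llm, ci):
    # 1. The PM iterates with the LLM until the architectural graph
    #    covers every known case.
    graph = {}
    for case in cases:
        graph[case] = llm.design(case, graph)
    # 2. Only then does the LLM generate code and validations.
    code = llm.write_code(graph)
    ci.push(code, llm.write_validations(graph))
    # 3. Deploy is gated on the validations passing.
    if not ci.run_tests():
        raise RuntimeError("validations failed; back to requirements")
    return code
```

The interesting (and contested) claim is step 1: that a PM conversation alone can drive the graph to full coverage.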

RaftPeople an hour ago | parent | next [-]

> I've seen proposals for Product Managers to define those conditions themselves by speaking with the LLM.

But the LLM is not aware of how the business works and why, so someone needs to work with the business to extract the information. Typically it's not well documented.

ijustlovemath 3 hours ago | parent | prev [-]

Is it working, though? The main outcome we've seen at companies that drink the AI Kool-Aid en masse is buggy, unstable systems. Clearly there's a level of rigor being sacrificed for ship velocity.

monkeydust 3 hours ago | parent | prev | next [-]

The article pretty much plays out what's happening in our place: heavy use of AI in software development, but we don't see ourselves shipping faster - about the same, or perhaps slower (for other reasons). It's a weird feeling, as we're waiting for this utopia to kick in, but it's not happening and we can't fully put our finger on why.

gravity2060 3 hours ago | parent | prev | next [-]

All of the above points align with our organization’s experience. But there is one more thing happening as well: we have more people in more roles able to create software solutions for issues that used to be brute-forced via physical processes. (We are a small manufacturing business.) While these aren’t big enterprise projects that require deep SWE experience, they are simple software tools that are improving process and productivity everywhere. It is pretty amazing what happens when your head of shipping can build a bespoke tool to solve a problem they previously dealt with by burning a lot of labor hours.

Avicebron 3 hours ago | parent | next [-]

I would be really interested in the details of these kinds of tools that are improving processes and productivity.

Are they reasonably documented/audited/put into any sort of version control, like a lot of internal tooling? Or are they the kind of thing that gets whacked together on the fly: "move spreadsheet data from A to B", "I want a list of people's schedules with custom highlighting" sorts of things?

Not doubting your productivity increase, I'm just curious how people quantify that when they say it.

xeromal 3 hours ago | parent | prev | next [-]

One of our BAs created a site that tests the effectiveness of copy/layout adjustments. I don't even know exactly what that's called, but he's able to do statistical analysis much faster on what works and what doesn't. It's really cool to watch him thrive, and I feel like some of the thinkers who were not devs are going to find themselves becoming devs within their specific domain in a few years.
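What the BA built sounds like A/B testing of copy and layout variants. The statistics behind "what works" can be as small as a two-proportion z-test; here is a minimal standard-library sketch with made-up numbers, and no claim about what his site actually does:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: do variants A and B convert at different rates?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants are equal.
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, written with erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Made-up example: variant B's copy converts at 16% vs. A's 12%.
z, p_value = two_proportion_z(120, 1000, 160, 1000)
```

Anything below the conventional p = 0.05 threshold is the usual "this layout change probably isn't noise" call.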

bjelkeman-again 3 hours ago | parent | prev | next [-]

Yes. In the same way that spreadsheets are the dev tools for non-devs, LLMs could step into that role, but with a much more powerful end result. With the caveat that, in the same way you can create a powerful foot-gun with a spreadsheet, you can probably create a foot-cannon with an LLM.

yieldcrv 3 hours ago | parent | prev [-]

Yeah, the Coinbase CEO gleefully pointed that out as well, and now the market thinks they're totally incompetent every time some UX quirk is found.

Looks like orgs have to have engineers on staff for optics, like having a legal staff with no lawyers, or a cybersecurity staff with no IT or certified people. Software has famously not needed state licenses or industry certification, but maybe that's a direction to consider to give utility to company optics.

echelon 3 hours ago | parent | prev | next [-]

The onus isn't on people using AI effectively to prove it to others.

In fact, these disagreements and disbeliefs create opportunities and salients in the market.

obsidianbases1 3 hours ago | parent [-]

Indeed. I suspect most effective AI users are quietly making real progress toward their objectives.

Anecdotally, I see a lot of problems/solutions content about AI that doesn't reflect the challenges I face at all. But trying to tell people that there are other ways of doing things, especially when it conflicts with token-maxxing, is a lost cause.

pkoird 3 hours ago | parent | prev [-]

Precisely. People don't realize that it's all numbers. Given that the average IQ of people involved in a project is 140, an AI with an IQ of 150 can replicate each and every such individual in the pipeline. People saying AI can't do this or AI can't do that should come to terms with the fact that this IQ gap is monotonously increasing.

fn-mote 2 hours ago | parent | next [-]

This is bizarre to me on so many fronts.

1: When was the last time you worked on a project where you thought the average IQ was 140? I don’t even think I have worked on a project where the maximum IQ was 140.

2: Who thinks the IQ of people on the project determines its success? There’s so much more to it than just “high capability team members” (to give IQ a generous interpretation).

3: (math joke) A sequence like (AI IQ - Human IQ) can be negative and monotonically increasing and still never reach 0.
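Point 3 checks out concretely: a_n = -1/n is the textbook witness, always negative and strictly increasing, so the gap narrows forever without ever reaching 0.

```python
# a_n = -1/n is negative for every n, monotonically increasing,
# and never reaches 0.
seq = [-1 / n for n in range(1, 1001)]

always_negative = all(a < 0 for a in seq)
strictly_increasing = all(a < b for a, b in zip(seq, seq[1:]))
```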

OccamsMirror 3 hours ago | parent | prev | next [-]

Funnily enough, though, I think it makes dumb people dumber.

icedchai 3 hours ago | parent [-]

I agree. Inexperienced people (not necessarily "dumb") are likely to accept everything at face value, not apply critical thinking skills, and not even check the AI generated output.

tovej 3 hours ago | parent | prev | next [-]

An AI does not have an IQ.

tonyedgecombe 3 hours ago | parent | prev [-]

Monotonically, although I do find the discourse on AI rather monotonous.