0xbadcafebee | 2 days ago
Yeah, it's def gonna be hard. So much of engineering is an amalgam of contexts, restrictions, intentions, best practices, and what you can get away with. An agent honed by a team of experts to keep all of those in mind (and force the user to answer important questions) would be invaluable.

Might be good to train multiple "personalities": one's a startup codebro that will tell you the easiest way to do anything; another will only give you best practice and won't let you cheat yourself. Let the user decide who they want advice from. Going further: input the business's requirements first and let those help decide?

Just today I was on a call where somebody wanted to manually deploy a single EC2 instance to run a big service. My first question was: if it goes down and takes 2+ days to bring back, is the business okay with that? The answer changes my advice.
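(A minimal sketch of what the "personalities" idea could look like, assuming a generic chat-completion message format; the persona prompts, requirement strings, and function names are invented for illustration:)

```python
# Hypothetical sketch: selectable advisor "personalities" as system prompts,
# with the business requirements injected up front so advice is conditioned
# on them. Not any particular product's API; just the message-building part.
PERSONAS = {
    "startup_codebro": (
        "You are a pragmatic startup engineer. Recommend the fastest "
        "workable solution and say plainly which corners are being cut."
    ),
    "best_practice": (
        "You are a strict reviewer. Only recommend best-practice solutions; "
        "refuse shortcuts and explain the risks they would carry."
    ),
}

def build_messages(persona: str, requirements: str, question: str) -> list[dict]:
    """Prepend the chosen persona and the business requirements,
    then the user's actual question."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "system", "content": f"Business requirements:\n{requirements}"},
        {"role": "user", "content": question},
    ]

# Usage: pass the result to whatever chat-completion client you use.
msgs = build_messages(
    "best_practice",
    "Can the business tolerate a 2+ day outage? No -> needs HA.",
    "Can I run this service on a single manually deployed EC2 instance?",
)
```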
nickpapciak | 2 days ago | parent
Yes definitely! That's why we believe that, for the time being, agents will act as great junior devs you can offload work onto; as they get better, they can slowly be promoted into more active roles.

The personalities approach sounds fun to experiment with. I'm wondering if you could use SAEs (sparse autoencoders) to scan for a "startup codebro" feature in language models. Alas, that's not something we get to look into until we think fine-tuning our own models is the best way to make them better. For now we are betting on in-context learning.

Business requirements are also incredibly valuable. Notion, Slack, and Confluence hold a lot of context, but it can be hard to find. That's something I think the subagents architecture is great for, though.
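(Purely to illustrate the SAE idea: a standard sparse autoencoder encodes an activation vector as ReLU(x @ W_enc + b_enc), so "scanning" for a trait amounts to reading off one learned feature's activation. Everything below, the shapes, the random weights, the feature index, is made up for the sketch:)

```python
import numpy as np

def feature_activation(resid: np.ndarray, W_enc: np.ndarray,
                       b_enc: np.ndarray, feature_idx: int) -> float:
    """SAE feature activations are ReLU(resid @ W_enc + b_enc);
    return the value of one hypothesized feature (e.g. 'startup codebro')."""
    acts = np.maximum(resid @ W_enc + b_enc, 0.0)  # shape: (n_features,)
    return float(acts[feature_idx])

# Toy shapes: d_model=512 residual stream, 4096-feature SAE, random weights
# standing in for a trained SAE's encoder.
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(512, 4096)) * 0.02
b_enc = np.zeros(4096)
resid = rng.normal(size=512)

print(feature_activation(resid, W_enc, b_enc, feature_idx=1234))
```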
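(And a rough sketch of the subagent fan-out for business context; the search_* functions are placeholders for real Notion/Slack/Confluence integrations, not actual APIs:)

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder retrieval subagents, one per knowledge source. In practice
# each would call the real Notion/Slack/Confluence API and summarize.
def search_notion(query: str) -> str:
    return f"(notion results for {query!r})"

def search_slack(query: str) -> str:
    return f"(slack results for {query!r})"

def search_confluence(query: str) -> str:
    return f"(confluence results for {query!r})"

SUBAGENTS = {
    "notion": search_notion,
    "slack": search_slack,
    "confluence": search_confluence,
}

def gather_business_context(query: str) -> dict[str, str]:
    """Fan the query out to every source subagent in parallel and collect
    whatever context comes back, keyed by source name."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, query) for name, fn in SUBAGENTS.items()}
        return {name: f.result() for name, f in futures.items()}

print(gather_business_context("uptime requirements for the billing service"))
```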