efitz 5 days ago

When I start a nontrivial coding task with AI, I add a “context” directory, put instructions in the tool prompts on how to use the files in that directory, and then spend a couple of hours using a thinking chat AI to generate the documentation I want (like “build me an API document for this library; the source code is at this URL and here are some URLs with good example code”).

I’ve had generally good results with this approach (I’m on project #3 using this method).
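A minimal sketch of what setting up such a “context” directory might look like (file names and contents are hypothetical, not from the comment):

```shell
# Sketch: a "context" directory the coding agent is told to read first.
# The README doubles as the instructions referenced from the tool prompt.
mkdir -p context
cat > context/README.md <<'EOF'
Agent: before writing code, read the files in this directory.
- api-notes.md: generated API documentation for the main library
- examples.md: links to known-good example code
EOF
```

The generated docs (API references, curated examples) then live alongside this README and travel with the repo.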

stillpointlab 4 days ago | parent | next [-]

I can confirm this is effective - I have done the same.

I haven't done extensive experiments, but I have noticed anecdotal benefits to asking the LLM how it wants things structured as well.

For example, for complex multi-stage tasks I asked Claude Code how best to communicate the objective, and it recommended a markdown file with the following sections: "High-level goal", "User stories", "Technical requirements", "Non-goals". I then created such a doc for a pretty complex task and asked Claude to review it and ask for any clarifications. I answer its questions (usually 5-7) and put the answers into a "Clarification" section.

I have also added a "Completion checklist" section that I use to ensure that Claude follows all of the rules in my subdirectory "README.md" files (I have one for each major sub-section of code, like my service layer, my router layer, my database, etc.). I usually do 2-3 rounds of Claude asking questions and me adding to the "Clarification" section, and then Claude is satisfied and ready to implement.

The bonus of this approach is that I now have a growing list of task specifications checked into a "tasks" directory, showing the history of how the code base came to be.
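For reference, the skeleton of such a task doc might look like this (section names are from the comment above; the placeholder contents are hypothetical):

```markdown
# Task: <short title>

## High-level goal
One paragraph on what this task should achieve.

## User stories
- As a <user>, I want <capability> so that <benefit>.

## Technical requirements
- Constraints, interfaces, and invariants the implementation must honor.

## Non-goals
- Things deliberately out of scope.

## Clarification
- Q: <question Claude asked> / A: <your answer>

## Completion checklist
- [ ] Rules in each relevant subdirectory README.md followed.
```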

asteroidburger 4 days ago | parent | next [-]

This sounds a lot like how Kiro works. Your requirements and design are in a .kiro directory inside the project, allowing you to commit them. The process is structured within Kiro to walk you through generating docs for each phase before beginning to write code. Ultimately, it generates a list of tasks, and you can run them one at a time and review/update between each.

pglevy 4 days ago | parent | prev | next [-]

My use case is a little different (mostly prototyping and building design ops tools) but +1 to this flow.

At this point, I typically do an LLM-readme at the branch level to document both planning and progress. At the project level I've started having it dump (and organize) everything in a work-focused Obsidian vault. This way I end up with cross-project resources in one place, it doesn't bloat my repos, and it can be used by other agents from where it is.

jellyotsiro 4 days ago | parent | prev [-]

oh damn interesting

theshrike79 4 days ago | parent | prev | next [-]

I have a "llm-shared" git submodule I add to all my projects.

In there I have generic advice on project management (use `gh` and GitHub issues for todo lists) and language-specific guidance in separate files, like which libraries to use, etc.

Then I have a common prompt template for different agents that tells them to look there for specific technology choices and create/update their own WHATEVER.md file in the repo.
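A hedged sketch of what such a common prompt template might say (file names other than WHATEVER.md are hypothetical):

```
You are working in a repo that includes the `llm-shared` git submodule.
Before making technology choices, read:
  - llm-shared/project.md     (project management: use `gh` and GitHub issues for todo lists)
  - llm-shared/<language>.md  (preferred libraries and conventions for this language)
Create or update your own WHATEVER.md in the repo root with your plan and notes.
```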

Gemini-cli is pretty efficient for creating specs and doesn't run out of context. With Context7 it can pull API specs into the documentation it creates, and with the Brave API it can search for other stuff.

After it's done, I can just tell Claude to make a step-by-step plan based on the specs and create GitHub issues for them with the appropriate labels.

Then I clear the context and get Claude working on the issues one by one.
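That issue-creation step might look like this with the `gh` CLI (`gh issue create` and its `--title`/`--body`/`--label` flags are real; the titles, labels, and file paths here are hypothetical):

```shell
# One issue per step of the plan, labeled so the agent can find them
gh issue create --title "Step 1: add service layer for uploads" \
  --body "See specs/uploads.md" --label "ai-plan"

# The queue Claude then works through one issue at a time
gh issue list --label "ai-plan"
```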

luckystarr 4 days ago | parent | prev | next [-]

For thorny problems I have the agent give me a simplified flow-chart in mermaid syntax. The LLM's brain-farts are easily visible then. I correct the flow-chart ("Ah, you're right!") and then let it translate it to code. Works wonders.
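A minimal example of the kind of mermaid flow-chart this produces (the logic shown is hypothetical):

```mermaid
flowchart TD
    A[Receive request] --> B{Token valid?}
    B -- yes --> C[Load user]
    B -- no --> D[Return 401]
    C --> E{User active?}
    E -- yes --> F[Handle request]
    E -- no --> D
```

A brain-fart like routing the "no" branch to the wrong node is much easier to spot in this form than buried in a few hundred lines of code.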

mrinterweb 4 days ago | parent | next [-]

I often provide mermaid diagrams in my prompt. Mermaid seems to be a good common markup for communicating relationships between humans and LLMs.

jellyotsiro 4 days ago | parent | prev [-]

that's smart! I’ve actually been thinking about integrating something like Mermaid flowcharts directly into Nia’s output: visual context can help tools like Cursor understand the problem way better. Have you found any particular types of problems where the flowchart approach really shines (or falls short)? Would love to hear more.

Incipient 3 days ago | parent | prev | next [-]

I do this for larger requests, but I find they end up giving me nonsense due to the amount of context: 10-odd relevant files plus design context.

I'm just using VS Code edit mode, so I expect I'm being too simple, mostly as I haven't found out how to make agent mode work with the front end and back end in separate Docker containers.

Would you mind sharing a bit of insight into how you've configured your environment such that you get good results?

jerpint 4 days ago | parent | prev | next [-]

I built a library to manage this exact workflow! I actually used the library to build the library.

https://github.com/jerpint/context-llemur

It’s MCP/CLI friendly, and wraps git around a context folder, so you can super easily load context anywhere using “ctx load” and ask LLMs to update and save context as things move along.

jellyotsiro 5 days ago | parent | prev [-]

yep, I used a similar approach a couple of months ago but found it really inefficient because of how much time it took

give Nia a try and use it on any docs, very curious to hear your feedback