zmgsabst 2 days ago
I’ve found managing the context is most of the challenge:
- creating the right context for parallel and recursive tasks;
- removing some steps (e.g., editing its previous response) to show only the corrected output;
- showing it its own output as my comment, when I want a response;
etc.
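Concretely, the second and third of those amount to rewriting the message list before each call. A minimal sketch in Python, assuming an OpenAI-style chat message format; the helper names and the "superseded" flag are my own illustration, not any library's API:

    def drop_edit_steps(messages):
        """Hide intermediate edit requests and drafts, keeping only the
        final, corrected output for any turn that was later revised."""
        curated = []
        for msg in messages:
            if msg.get("superseded"):  # assumed marker set when a later turn corrected this one
                continue
            curated.append({"role": msg["role"], "content": msg["content"]})
        return curated

    def reflect_output_as_user(messages, assistant_text):
        """Show the model its own output as if it were my comment, so the
        next turn responds to it instead of continuing it."""
        return messages + [{"role": "user", "content": assistant_text}]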
mccoyb 2 days ago
I've also found that relying on agents to build their own context _poisons_ it ... it's necessary to curate it constantly. There's kind of a <1 multiplicative thing going on: I can ask the agent to e.g. update CLAUDE.mds or TODO.mds in a somewhat precise way, and the agent will multiply my request into a lot of changes which (on the surface) appear well and good ... but if I repeat this process a number of times _without manual curation of the text_, I end up with "lower quality" than I started with (assuming I wrote the initial CLAUDE.md). The obvious conclusion: while the agent can multiply the amount of work I can do, there's a multiplicative reduction in quality, which means I need to account for that (I have to add time spent doing curation).
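To put toy numbers on that "<1 multiplicative thing" (purely illustrative, not measured): if each unreviewed agent pass retains only some fraction of the document's quality, it decays geometrically, which is why the curation time has to go back into the budget.

    q0, r = 1.0, 0.9  # starting quality and assumed per-pass retention factor
    for n in range(1, 6):
        # quality after n unreviewed agent passes decays as q0 * r**n
        print(f"after {n} unreviewed passes: quality ~ {q0 * r**n:.2f}")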
ModernMech 2 days ago
It's funny because things are finally coming full circle in ML. 10-15 years ago the challenge in ML/PR was "feature engineering": the careful crafting of rules that would define features in the data to draw the attention of the ML algorithm. Then deep learning came along and solved feature engineering; just throw massive amounts of data at the problem and the ML algorithms can discern the features automatically, without having to craft them by hand. Now we've gone as far as we can with massive data, and the problem seems to be that it's difficult to bring out the relevant details when there's so much data. Hence "context engineering": a manual, heuristic-heavy process guided by trial and error and intuition, more an art than a science. Pretty much the same thing that "feature engineering" was.