bradfa | 5 hours ago
Yes and no. There are many non-trivial things you have to solve when using an LLM to help with (or fully handle) writing code. For example, applying diffs to files. Since the LLM works on tokenized text for all of its input/output, the diffs it produces to modify a file sometimes aren't quite right: it may slightly mangle the context before/after the change, or introduce a small typo in the text being removed, so the edit may not apply cleanly. There are a variety of ways to deal with this, and most of the agentic coding tools have it mostly solved now (I guess you could just copy their implementation?). Also, sometimes the model will send back JSON or XML from tool calls that isn't valid, so your tool needs to handle that too. These fun implementation details don't come up that often in a coding session, but they come up often enough that you'd be driven mad trying to use a tool that didn't handle them seamlessly if you're doing real work.
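A minimal sketch of what "apply an edit that doesn't match exactly" can look like, assuming a simple search/replace edit format coming back from the model. The function name, window-based fallback, and threshold are illustrative assumptions, not taken from any particular agent's implementation.

```python
# Hypothetical helper: apply an LLM-proposed search/replace edit, falling back
# to a fuzzy match when the "search" text has a small typo and isn't found verbatim.
import difflib

def apply_edit(source: str, search: str, replace: str, threshold: float = 0.9) -> str:
    # Happy path: the model reproduced the original text exactly.
    if search in source:
        return source.replace(search, replace, 1)

    # Fallback: slide a window of the same line count over the file and pick
    # the region most similar to what the model asked us to replace.
    src_lines = source.splitlines(keepends=True)
    needle_lines = search.splitlines(keepends=True)
    n = len(needle_lines)
    best_ratio, best_start = 0.0, -1
    for start in range(len(src_lines) - n + 1):
        window = "".join(src_lines[start:start + n])
        ratio = difflib.SequenceMatcher(None, window, search).ratio()
        if ratio > best_ratio:
            best_ratio, best_start = ratio, start

    if best_start < 0 or best_ratio < threshold:
        raise ValueError(f"no region close enough to apply edit (best match {best_ratio:.2f})")

    return "".join(src_lines[:best_start]) + replace + "".join(src_lines[best_start + n:])
```

Real tools presumably layer more on top of this (whitespace normalization, re-prompting the model when the edit fails outright), but something along these lines is a common fallback for near-miss diffs.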
noupdates | 5 hours ago | parent
I'm part of the subset of developers that was not trained in machine learning, so I can't actually code up an LLM from scratch (yet). Some of us are already behind on AI. I think not getting involved in the foundational work of building coding agents will only leave more developers in the dust. We have to know how these things work inside and out. I'm only willing to deal with one black box at the moment, and that is the model itself.