| ▲ | xienze 5 hours ago |

> domain experts will be fine

But I don't see how this holds up to even the slightest amount of scrutiny. We're literally training LLMs to BE domain experts.
|
| ▲ | bwestergard 5 hours ago | parent | next [-] |
I think these arguments tend to reach an impasse because each side gravitates to one of two views:

1) My experiences with LLMs are so impressive that I consider their output to generally be better than what the typical developer would produce. People who can't see this have not gotten enough experience with the models I find so impressive, or are in denial about the devaluation of their skills.

2) My experiences with LLMs have been mundane. People who see them as transformative lack the expertise required to distinguish between mediocre and excellent code, leading them to deny there is a difference.
| ▲ | xienze 5 hours ago | parent | next [-] |

Not sure that's what I was getting at. People in camp 2 don't think an LLM can take over the job of a real software engineer. It's people in camp 1 that I wonder about. They're convinced that LLMs can accomplish anything and understand a codebase better than anyone (and that may be the case!). However, they're simultaneously convinced that they'll still be needed to do the prompting because ???reasons???
| ▲ | TheGRS 3 hours ago | parent | next [-] |

One explanation is that some think we might be reaching the limits of what an LLM can reasonably do. There are a lot of functions of any job that are not easily translated to an LLM and are much more about interacting with people or critical thinking in a way LLMs can't do. I'm not sure if that's everyone's rationale, but that's my personal view of the situation. Like, the jobs will change, but we likely won't be losing them to AI outright.
| ▲ | paleotrope 4 hours ago | parent | prev [-] |

I was thinking today that I need to pivot to making and selling shovels, but then the other issue is: is anyone going to need shovels in the future?
| ▲ | georgemcbay 4 hours ago | parent | prev [-] |

I was at 2) until the end of last year; then LLM/agent harnesses had a capability jump that didn't quite bring me to 1), but it was a big enough jump in that direction that I don't see why I shouldn't believe we get there soonish. So now I tend to think a lot of people are in heavy denial, believing that LLMs are going to stop getting better before they personally end up under the steamroller, but I'm not sure what this faith is based on.

I also think people tend to treat the "will LLMs replace <job>" question in too binary a manner. LLMs don't have to replace every last person who does a specific job to be wildly disruptive. If they replace 90% of the people in a particular job by making the remaining 10% much more productive, that's still a cataclysmic amount of job displacement in economic terms. Even if they replace just 10-30%, that's still a huge amount of displacement; for reference, the unemployment rate during the Great Depression was 25%.
|
|
| ▲ | jandrewrogers 4 hours ago | parent | prev [-] |
An enormous amount of domain expertise is not legible to LLMs. Their dependence on obtaining knowledge through someone else's writing is a real limitation; a lot of human domain expertise is not acquired that way. They still have a long way to go before they can master a domain from first principles, which constrains the level of mastery possible.
| ▲ | vharuck 3 hours ago | parent | next [-] |

People need to be careful about buying into the shorthand lingo around LLMs. They do not learn like we do. At the lowest level, they predict which tokens follow a body of tokens. This lets them emulate knowledge in a very useful way.

This is similar to a time series model of user activity: the time series model does not keep tabs on users to see when they are active, and it has not read studies about user behavior; it just reflects a mathematical relationship between points of data.

As for this "vague" domain expertise: even if none of an LLM's training material includes certain nuggets of wisdom, if the material includes enough cases of problems and the solutions offered by domain experts, we should expect the model to find a decent relationship between them. That the LLM has never ingested explicit documentation of the reasoning is irrelevant, because it does not perform reasoning.
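[Editor's note: the "predict which tokens follow a body of tokens" point above can be made concrete with a deliberately toy sketch. This is a bigram word model, not a transformer; real LLMs condition on a long context with a neural network, but the training objective is the same in spirit: next-token prediction over observed data, with no stored "knowledge" beyond those statistical relationships.]

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows each word in a
# tiny corpus, then predict the most frequent successor. The model
# "knows" nothing about cats or mats; it only reflects a mathematical
# relationship between points of data, as the comment above describes.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most frequently observed successor of `token`, or None."""
    counts = follows[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```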
| ▲ | jandrewrogers 2 hours ago | parent [-] |

The domain expertise I'm referring to isn't vague; it literally doesn't exist as training data. There are no cases of problems and solutions to study that are relevant to the state-of-the-art. In some cases this is by intent and design (e.g. trade secrets, national security, etc.), long before LLMs arrived on the scene. We even have some infamous "dark" domains in computer science where it is nearly impossible for a human to get to the frontier, because the research that underpins much of the state-of-the-art hasn't existed as public literature for decades. If you want to learn it, you either have to know a domain expert willing to help you or reinvent it from first principles.
| ▲ | bauerd 3 hours ago | parent | prev [-] |

> They still have a long way to go before they can master a domain from first principles, which constrains the mastery possible.

Mastery isn't necessary. Why do Waymos lack drivers? Not because self-driving cars have mastered driving, but because self-driving works sufficiently well that the economics don't play out for the cab driver.