| ▲ | staticassertion 18 hours ago |
| When it comes to novel work, LLMs become "fast typists" for me and little more. They accelerate testing phases, but that's it. The bar for novelty isn't very high either: "make this specific system scale in a way that others won't" isn't something an LLM can ever do on its own, though it can be an aid. LLMs are also quite bad for security. They can find simple bugs, but they don't find the really interesting ones that leverage the "gap between mental model and implementation" or a "combination of features and bugs," etc., which is where most of the interesting security work is, imo. |
|
| ▲ | gilbetron 12 hours ago | parent | next [-] |
| What was your take on this? https://aisle.com/blog/what-ai-security-research-looks-like-... |
|
| ▲ | asadm 18 hours ago | parent | prev | next [-] |
| I think your analysis is a bit outdated these days, or you may be holding it wrong. I am doing novel work with Codex, but it does need some prompting, i.e. exploring possibilities from the current codebase, adding papers to the prompt, etc. For security, I generally start a new thread before committing, to review from a security POV. |
| ▲ | staticassertion 18 hours ago | parent [-] |
| You can do novel work with an LLM. You can. The LLM can't. It can be an aid - exploring papers, gathering information, helping to validate, etc. It can't do the actual novel part; fundamentally, it is limited to what it is trained on. If you are relying on the LLM and context, then unless your context is a secret, your competitor is only ever one prompt behind you. If you're willing to pursue true novelty, you need a human, and you can leap beyond your competition. |
| ▲ | bdangubic 15 hours ago | parent [-] |
| Of course you need a human, but you don't need nearly as many humans as there are currently in the labor force. |
| ▲ | staticassertion 14 hours ago | parent [-] |
| Maybe, but I'm not really convinced. LLMs make some aspects of the job faster; mainly, I don't have to type anymore. But... that was always a relatively small portion of the job. Design, understanding constraints, maintaining and operating code, deciding what to do, what not to do, when to do it, gaining consensus across product, eng, support, and customers, etc. I do all of those things as an engineer. Coding faster is really awesome, it's so nice, and I can whip up POCs for the frontend now, and that's accelerating development... but that's it. The reality is that a huge portion of my time is spent doing that other work, and what LLMs largely do is pick up the smaller tasks or features that I may not have prioritized otherwise. Revolutionary in one sense; completely banal and a really minor part of my job in many others. |
| ▲ | bdangubic 12 hours ago | parent [-] |
| I think the core issue (evidenced by the constant stream of debates on HN) is that everyone's experience with LLMs is different. I think we can all agree that some experiences are like yours, while others are vastly different. Sometimes I hear "you just don't know how to use them," etc., as if there is some magic setup that makes them do shit, but the reality is that our actual jobs are drastically different even though we all technically have the same titles. I have been a contractor for a decade now and have been on projects that require real "engineers" doing real hardcore shit. I have also been on projects where tens of people are doing work I could train my 12-year-old daughter to be proficient in within a month. I would gauge that the percentage of the former is much smaller than the latter. |
|
| ▲ | truetraveller 16 hours ago | parent | prev [-] |
| This is basically my take as well! |