mikert89 8 hours ago:
As AI improves, most tasks will become something like this: environments set up where the model learns through trial and error. Any human endeavor that can be objectively verified in an environment like this can be completely automated.
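The loop the comment describes can be sketched minimally. This is a hypothetical illustration, not any specific lab's setup: `verify` stands in for the objective check the comment mentions, and random guessing stands in for the model's proposals (a real system would learn from feedback rather than sample blindly).

```python
import random


def verify(candidate: int, target: int) -> bool:
    """Objective pass/fail check -- the piece the comment says must exist
    for a task to be automatable this way."""
    return candidate == target


def trial_and_error(target: int, max_attempts: int = 10_000):
    """Hypothetical environment loop: propose, verify, repeat.

    Any task with an unambiguous verifier can drive a loop of this shape;
    the 'model' here is just a seeded random proposer for determinism.
    """
    rng = random.Random(0)  # seeded so the sketch is reproducible
    for attempt in range(max_attempts):
        candidate = rng.randint(0, 100)
        if verify(candidate, target):
            return candidate
    return None
```

The point of the sketch is that the environment and verifier, not the proposer, are the load-bearing parts: swap the random proposer for a model and the rest of the loop is unchanged.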
miki123211 33 minutes ago:
So much this. People make fun of prompt engineering, but I think "AI ops" will eventually become a real role at most, if not all, software companies. Harness Engineers and Agent Reliability Engineers will be just as important as something like DevOps is now.
NitpickLawyer 4 hours ago:
What's really interesting is that the LLMs are getting better and better at setting up the environments and tasks themselves. I had this surreal experience the other day: I was writing a prompt0n.md file (I try to log all my prompts in a .folder to keep track of what I prompt and the results I get), and the autocomplete in Antigravity kinda sorta wrote the entire prompt by itself. Granted, it had all the previous prompts in the same folder (I don't know exactly what it pulls into context by itself) and I was working on the next logical step, but it kept extracting the "good bits" from them and following the pattern quite nicely. I only edited minor things, and rejected just one line completion in the entire prompt.
wiz21c 34 minutes ago:
Don't forget the size of the search space...