j45 8 hours ago

I am lucky to count friends who are academics engaged in research, and one pattern I notice around AI is researchers with a non-tech background and/or little experience implementing, operationalizing, or commercializing technology in business, which can cloud these kinds of results.

I had been systemizing and automating businesses for a long time before LLMs came along, which generally wasn't very popular work.

It is really weird to see everyone get excited about this kind of automation, try to jump straight to the end state with something non-deterministic, and then wonder why it doesn't work like every other computer they've used (all or nothing).

Agents can self-generate skills, though maybe not effortlessly, and not by psychically reading between the lines (special exception for Claude). It's also about the framework and scaffolding within which skills get created so that they actually work, and about what can be fed back into the self-generation loop.

Without experience creating computer skills in general, attempting self-generating agent skills is a bit like using AI to autocomplete a sentence and then not liking how it went. With a fair amount of setup, though, it can be lined up to improve considerably.
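
To make the scaffolding point concrete, here's a minimal sketch (my own hypothetical framing, not any particular framework): a loop that asks a model to draft a skill, runs the draft against a known test case, and only registers it once it passes, feeding failures back into the next draft. `generate_skill` is a placeholder for whatever model call you'd use.

  # Hypothetical sketch of skill self-generation with scaffolding.
  # The point is that validation and feedback live in the framework,
  # not in the model; the model only proposes candidates.
  from typing import Callable

  def generate_skill(task: str, feedback: str | None) -> str:
      """Placeholder for an LLM call returning Python source for a skill."""
      raise NotImplementedError("wire up your model client here")

  def build_skill(task: str, test_input, expected, max_attempts: int = 3) -> Callable:
      feedback = None
      for _ in range(max_attempts):
          source = generate_skill(task, feedback)
          namespace: dict = {}
          try:
              exec(source, namespace)        # non-deterministic output: verify it
              skill = namespace["skill"]     # convention: the draft defines skill()
              result = skill(test_input)
              if result == expected:
                  return skill               # only register skills that pass
              feedback = f"Got {result!r}, expected {expected!r}"
          except Exception as exc:           # errors become next-round feedback
              feedback = f"Raised {exc!r}"
      raise RuntimeError(f"No working skill after {max_attempts} attempts: {feedback}")

The deterministic check is doing the real work here, which is the same discipline you'd apply to any automation built before LLMs existed.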

Right now there seems to be a 6-12 month lag between what gets shared/reported in the wild and studies like these.

Too often, researchers pick up something reported in the wild and try to study it; it may well work for some cases but not all, and the research ends up missing that distinction entirely.

With AI, it's incredibly important to follow "show, don't tell."

Sharing this out of genuine curiosity about whether it resonates with anyone, and if so, how/where.