Avamander an hour ago
The more you obfuscate a topic against LLMs, the lower the educational value of the challenge. The only things that work are novelty and obscurity. LLMs still suck at things mentioned in the footnotes of datasheets and manuals, things that deviate in subtle ways, unique constructions that alter something very, very common. It's hard for LLMs to avoid falling back on common assumptions while staying on track.