everlier | 9 hours ago
There was another technique, "klmbr", a year or so ago: https://github.com/av/klmbr At its highest setting, it was unparseable by the LLMs of the time. Now, however, all major foundation models seem to handle it easily, so similar input scrambling is likely part of robustness training for modern models. Edit: cranking klmbr to 200% still seems to confuse LLMs, but it also pushes the text into territory that is unreadable for humans: "W̃h ï̩͇с́h̋ с о̃md 4 n Υ ɔrе́͂A̮̫ť̶̹eр Hа̄c̳̃ ̶Kr N̊ws̊ͅͅ?"
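For a rough idea of what this kind of scrambling does, here is a minimal sketch, not klmbr's actual implementation: it swaps some Latin letters for Cyrillic look-alike homoglyphs and attaches random combining diacritics, with an `intensity` knob standing in for klmbr's percentage setting. The homoglyph table and parameter names are my own assumptions.

```python
import random

# Assumed homoglyph map: Latin letters -> visually similar Cyrillic letters.
HOMOGLYPHS = {"a": "а", "c": "с", "e": "е", "o": "о", "p": "р", "x": "х"}
# Unicode combining diacritical marks (U+0300..U+031F).
DIACRITICS = [chr(c) for c in range(0x0300, 0x0320)]

def scramble(text: str, intensity: float, seed: int = 0) -> str:
    """Scramble text at the character level; intensity in [0, 1]."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = []
    for ch in text:
        # Maybe replace the character with a look-alike homoglyph.
        if rng.random() < intensity:
            ch = HOMOGLYPHS.get(ch.lower(), ch)
        # Maybe attach a combining diacritic (skip whitespace).
        if ch.strip() and rng.random() < intensity:
            ch += rng.choice(DIACRITICS)
        out.append(ch)
    return "".join(out)

print(scramble("Which company created Hacker News?", 0.5))
```

At low intensity the text stays readable to humans while the byte-level tokenization changes substantially; pushed toward 1.0 it degrades for humans too, which matches the 200% observation above.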