pessimizer 5 days ago
Because they are. But stochastic parrots are awesome.
ripped_britches 5 days ago
I challenge you! Try giving this exact prompt to GPT-5-Thinking (medium or high reasoning if using the API). It is able, without external code tools, to solve a never-before-seen cypher that is not present in its training data. I think this pretty clearly demonstrates that "stochastic parrot" is no longer an apt description of its ability to generalize:

————

You are given a character-by-character decode table `mapping` and a `ciphertext`. Decode by replacing each ciphertext character `c` with `mapping[c]` (i.e., mapping maps ciphertext → plaintext). Do not guess; just apply the mapping. Return *ONLY* this JSON (no prose, no extra keys, no code fences):

{ "decoded_prefix": "<first 40 characters of the decoded plaintext>", "last_10": "<last 10 characters of the decoded plaintext>", "vowel_counts": {"a": <int>, "e": <int>, "i": <int>, "o": <int>, "u": <int>} }

Inputs use only lowercase a–z.

mapping = { "a":"c","b":"j","c":"b","d":"y","e":"w","f":"f","g":"l","h":"u","i":"m","j":"g", "k":"x","l":"i","m":"o","n":"n","o":"h","p":"a","q":"d","r":"t","s":"r","t":"v", "u":"p","v":"s","w":"z","x":"k","y":"q","z":"e" }

ciphertext = "nykwnowotyttbqqylrzssyqcmarwwimkiodwgafzbfippmndzteqxkrqzzophqmqzlvgywgqyazoonieqonoqdnewwctbsbighrbmzltvlaudfolmznbzcmoafzbeopbzxbygxrjhmzcofdissvrlyeypibzzixsjwebhwdjatcjrzutcmyqstbutcxhtpjqskpojhdyvgofqzmlwyxfmojxsxmb"

DO NOT USE ANY CODE EXECUTION TOOLS AT ALL. THAT IS CHEATING.
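For anyone who wants to check the model's answer against ground truth (outside the no-tools constraint the prompt imposes on the model), the task is a straight character substitution. A minimal sketch of the verification, using the mapping and ciphertext from the prompt above:

```python
# Decode table from the prompt: maps ciphertext char -> plaintext char.
mapping = {
    "a":"c","b":"j","c":"b","d":"y","e":"w","f":"f","g":"l","h":"u","i":"m","j":"g",
    "k":"x","l":"i","m":"o","n":"n","o":"h","p":"a","q":"d","r":"t","s":"r","t":"v",
    "u":"p","v":"s","w":"z","x":"k","y":"q","z":"e",
}

ciphertext = ("nykwnowotyttbqqylrzssyqcmarwwimkiodwgafzbfippmndzteqxkrqzzophqmqz"
              "lvgywgqyazoonieqonoqdnewwctbsbighrbmzltvlaudfolmznbzcmoafzbeopbz"
              "xbygxrjhmzcofdissvrlyeypibzzixsjwebhwdjatcjrzutcmyqstbutcxhtpjqs"
              "kpojhdyvgofqzmlwyxfmojxsxmb")

# Apply the substitution character by character.
decoded = "".join(mapping[c] for c in ciphertext)

# Build the exact JSON-shaped answer the prompt asks for.
answer = {
    "decoded_prefix": decoded[:40],
    "last_10": decoded[-10:],
    "vowel_counts": {v: decoded.count(v) for v in "aeiou"},
}
print(answer)
```

Note that the mapping is a permutation of a–z (every letter appears exactly once as a key and once as a value), so the decode is well-defined for any lowercase input.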