| ▲ | cheschire 7 hours ago |
| I suspect even typos have an impact on how the model functions. I wonder if there’s a pre-processor that runs to remove typos before processing. If not, that feels like a space that could be worked on more thoroughly. |
|
| ▲ | ruairidhwm 7 hours ago | parent | next [-] |
| I guess just a spell-check in the repo? But yes, I'd imagine that they have an effect. Even running the same input twice is non-deterministic. |
| |
| ▲ | cheschire 7 hours ago | parent | next [-] | | The ability of audio processing to infer spelling from context, especially with regard to acronyms that are pronounced as words, leads me to believe there’s potential for a more intelligent spell-check preprocess using a cheaper model. | |
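A crude version of that preprocess can be sketched without any model at all, by snapping unknown words to a known vocabulary; the point above is that a cheap LM could use context where plain edit distance cannot. The function name and toy vocabulary here are invented for illustration:

```python
import difflib

VOCAB = {"model", "functions", "typos", "preprocessor", "impact"}

def naive_spell_fix(text, vocab=VOCAB):
    """Toy pre-processor: snap each word to the closest known
    vocabulary entry by edit similarity. A real system would use
    a small LM with context, not a context-free string match."""
    fixed = []
    for word in text.split():
        match = difflib.get_close_matches(word.lower(), vocab, n=1, cutoff=0.75)
        fixed.append(match[0] if match else word)
    return " ".join(fixed)

print(naive_spell_fix("the modle functoins"))  # the model functions
```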
| ▲ | mathieudombrock 5 hours ago | parent | prev [-] | | The same input twice is only nondeterministic if you don't control the seed. |
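That holds for a pure software sampler: seed the RNG up front and the token draws repeat exactly. (On real serving stacks, batching and floating-point reduction order can still introduce variation even with a fixed seed.) A minimal sketch, with all names invented:

```python
import random

def sample_text(token_choices, seed):
    """Toy decoder: draws one token per step from weighted choices,
    using an RNG seeded once up front so runs are reproducible."""
    rng = random.Random(seed)
    return "".join(
        rng.choices(tokens, weights=weights, k=1)[0]
        for tokens, weights in token_choices
    )

# One weighted choice per "step" of generation.
steps = [(["a", "b"], [0.6, 0.4]), (["c", "d"], [0.3, 0.7])] * 5

# Same seed, same input -> identical output, every time.
assert sample_text(steps, seed=42) == sample_text(steps, seed=42)
```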
|
|
| ▲ | 0123456789ABCDE 7 hours ago | parent | prev [-] |
There is no pre-processor; I've had typos go through, with Claude asking to make sure I meant one thing instead of the other. |
| |
| ▲ | PhilipRoman 7 hours ago | parent [-] | | I strongly suspected that there was some pre/post-processing going on when trying to get it to output rot13("uryyb, jbyeq"), but it's probably just due to massively biased token probabilities. Still, it creates some hilarious output, even when you clearly point out the error:
  > Hmm, but wait — the original you gave was jbyeq not jbeyq: j→w, b→o, y→l, e→r, q→d = world
  > So the final answer is still hello, world. You're right that I was misreading the input. The result stands.
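For reference, rot13 is built into Python's codecs module, and the typo'd input really does decode to "wolrd", which is the detail the reply above keeps glossing over:

```python
import codecs

# rot13 is its own inverse, so "decoding" is just encoding again.
print(codecs.encode("uryyb, jbeyq", "rot13"))  # hello, world
print(codecs.encode("uryyb, jbyeq", "rot13"))  # hello, wolrd (the typo survives)
```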
|