pmg101 | 16 hours ago
I now have a visceral reaction to being told that I'm ABSOLUTELY RIGHT!, for example. It seemed an innocuous phrase before -- rather like em dashes -- but it has now become grating and meaningless. Robotic and no longer human.

gilleain | 15 hours ago | in reply to pmg101
I'm launching a new service to tell people that they are absolutely, 100% wrong. That what they are considering is a terrible idea, has been done before, and will never work. Possibly I can outsource the work to HN comments :)

pmg101 | 14 hours ago | in reply to gilleain
This sounds like a terrible idea that has been done before and will never work.

AlecSchueler | 15 hours ago | in reply to pmg101
You're exactly right, this really gets to the heart of the issue and demonstrates that you're already thinking like a linguist.
|
|
flir | 15 hours ago
For what most of us are using it for (generating code), that's not a bad outcome. This audience might have less of a problem with it than the general population. Whether we have the discipline to limit our use of the tool to its strengths... well, I doubt it. Just look at how social media turned out. (Idle thought: I wonder if a model fine-tuned on one specific author would give more "original" titles.)
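flir's idle thought is concrete enough to sketch. Below is a minimal single-author fine-tune using the Hugging Face transformers and datasets libraries; the base model, corpus file, and hyperparameters are all illustrative assumptions, not anything from the thread.

    # Sketch of flir's idle thought: fine-tune a small causal LM on one
    # author's corpus. MODEL, CORPUS, and all hyperparameters are
    # illustrative assumptions.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    MODEL = "gpt2"                # small base model, chosen for the sketch
    CORPUS = "single_author.txt"  # hypothetical one-author text file

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

    dataset = load_dataset("text", data_files=CORPUS)["train"]
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"],
    )

    trainer = Trainer(
        model=AutoModelForCausalLM.from_pretrained(MODEL),
        args=TrainingArguments(output_dir="author-tuned",
                               num_train_epochs=3,
                               per_device_train_batch_size=4,
                               learning_rate=5e-5),
        train_dataset=dataset,
        # mlm=False: plain next-token objective, labels are shifted inputs
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model("author-tuned")

Whether a model tuned this way actually produces more "original" phrasing, rather than a pastiche of one author's tics, is an open question.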
|
cantor_S_drug | 16 hours ago
This is the default setting. The true test would be whether LLMs CAN'T produce distinct outputs even when prompted to. I think this problem can be solved by prompt engineering. Has anyone tried this with Kimi K2?
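A minimal sketch of the prompt-engineering test cantor_S_drug suggests. It assumes Kimi K2 is reachable through Moonshot's OpenAI-compatible endpoint; the base_url and model id below are assumptions to verify against current Moonshot documentation, and the banned-phrase list is just an example.

    # Sketch: try to suppress the sycophantic register via the system
    # prompt. Endpoint and model id are assumptions, not confirmed here.
    from openai import OpenAI

    client = OpenAI(
        api_key="YOUR_MOONSHOT_API_KEY",           # placeholder
        base_url="https://api.moonshot.ai/v1",     # assumed endpoint
    )

    SYSTEM = (
        "Never open a reply with agreement, praise, or validation. "
        "Banned phrases: 'You're absolutely right', 'Great question', "
        "'Exactly right'. If you agree, state the substance directly; "
        "if you disagree, say so plainly and explain why."
    )

    resp = client.chat.completions.create(
        model="kimi-k2-0711-preview",              # assumed model id
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": "I think my plan is solid. Agree?"},
        ],
        temperature=0.6,
    )
    print(resp.choices[0].message.content)

If the stock phrases survive a system prompt like this, that would support the "can't" reading rather than the "default setting" one.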