pvtmert 3 hours ago
Honestly, the first paragraph sounds more human and sincere for sure. It also adds better context to the discussion than the usual claims and punchlines of marketing-speak. Maybe it's not the grammar itself so much as the overall structuring of the idea or thought. The regular output reads much more like a marketing piece or news coverage than like an individual anyway. I think people want to discuss things with people, not with a news editor.
Imustaskforhelp 3 hours ago
> I think, people wanna discuss things with people, not with a news-editor.

If I understand you correctly, then yes, I completely agree. But my worry is that this can also be emulated, as my comment shows, by models already available to us. Technically, nothing stops new accounts from using, say, Kimi with a system prompt designed to not sound like AI, and I feel that could be effective.

If that's the case, it raises the question of what we can actually detect as AI (which was my point). The grandparent comment suggests they sometimes use intentionally bad human writing to avoid being detected as AI, but what I'm saying is that AI can do that too. So is intentionally bad writing really a good indicator of being human? The bigger question: if bad writing isn't an indicator, then what is? Can there even be a good indicator, if the bot is cautious? And if there isn't, can we be sure the comments we read are human at all? That's essentially the dead-internet theory.

I feel like most websites have bots, but we know they are bots and we still don't care. Meanwhile we hold this misguided trust that if comments don't feel like obvious bots, they must be humans. My question is: what if that's wrong? It feels entirely possible with current tech/models like Kimi. Doesn't this lead to some big trust issues within the fabric of the internet itself? Personally, I don't feel the whole website is AI, but there's a real chance some sneaky-action-at-a-distance new accounts are LLMs, and we'd be none the wiser.
At the same time, real accounts are going to get questioned about whether they are LLMs if they are new (my account is almost 2 years old, fwiw, and people have essentially questioned whether it is AI). What this does do, however, is make people lose a bit of trust in each other and become a little cautious toward every message they read. (This comment is a little too conspiratorial for my liking, but I can't help but shake this feeling sometimes.)

It all feels so weird to me sometimes. I guess there's still an intuition about who's human and who's not, and the HN link/article itself shows that most people who deploy AI on HN with newer accounts use standard models without much care, which is why em-dashes get detected and may be a good detector for some time and for some people. That would also make the original OP's point about intentionally bad grammar sounding more human make sense, because em-dashes do have a higher probability of sounding AI than not. :/

It's a very weird situation that I'm not sure how to explain, where depending on which angle you look from, you can be right either way. You can hurt your grammar to sound more human, and that's a reasonable move. Or you can keep writing the way you are, because models are already capable of intentionally bad grammar too, so bad grammar isn't a benchmark for AI-or-not, and you'd be right as well. It's sort of a paradox, and I don't have any answers. :/

My suggestion right now is to not overthink it. If both approaches are right, then do whatever, imo. Just be human yourself, and then you can back that up with the plain truth that you are human, even if you get called AI.

So I guess, TL;DR: use good grammar or intentionally bad grammar, it doesn't matter; just write as a human, and that's enough, or it should be.