gkoberger 3 hours ago

I do agree... I sometimes use worse grammar (like that ellipses) and leave in typos just so my comments feel more "real" now.

goodmythical 2 hours ago | parent | next [-]

fun fact, grok and kimi are both pretty good at emulating "chat" responses with any number of prompts.

"respond like a twitter user", "pretend like we're texting", etc
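For what it's worth, style prompts like these are usually passed as the system message of an OpenAI-compatible chat API. A minimal sketch of building such a request (the model name and the helper itself are placeholders for illustration, not a real endpoint or recommendation):

```python
# Sketch: steering a chat model's register with a system prompt.
# Assumes an OpenAI-compatible /chat/completions request shape; the
# model name below is a placeholder, not a specific recommendation.

def build_chat_request(style: str, user_message: str,
                       model: str = "kimi-placeholder") -> dict:
    """Build a chat-completion payload asking the model to adopt a
    conversational style, e.g. "respond like a twitter user" or
    "pretend like we're texting"."""
    return {
        "model": model,
        "messages": [
            # The system message carries the style instruction...
            {"role": "system", "content": f"Adopt this style: {style}"},
            # ...and the user message is the actual prompt.
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("pretend like we're texting",
                             "hey, did you see that HN thread?")
print(payload["messages"][0]["content"])
```

The same payload works against any provider that accepts the `messages` list with `system`/`user` roles.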

Imustaskforhelp 2 hours ago | parent [-]

> fun fact, grok and kimi are both pretty good at emulating "chat" responses with any number of prompts.

> "respond like a twitter user", "pretend like we're texting", etc

+1 to this. I actually generated a reply to the parent comment itself using Kimi, and I'd say it was (sort of) a good emulation, fwiw.

2 hours ago | parent | prev | next [-]
[deleted]
_verandaguy 2 hours ago | parent | prev | next [-]

Same here, but it'll be a cold day in hell before you see me using the dreaded double-period-bang..!

Imustaskforhelp 2 hours ago | parent | prev [-]

soon were gonna be the ones adding random typos and grammer errors just to blend in. i skip apostrophes and mispell words on purpose already. its strange how fast sloppy writing starts feeling natural

(The line above was itself written by AI: https://www.kimi.com/share/19c96516-4032-8b73-8000-0000f45eb...)

I don't know whether worse grammar makes a difference beyond removing false negatives (i.e. nowadays people with good grammar get questioned about whether they're LLMs), but that itself doesn't mean worse grammar implies the text was written by a human. (This paragraph is written by me, a human. Hi :D)

pvtmert 2 hours ago | parent [-]

Honestly, the first paragraph sounds more human and sincere, for sure.

It also adds better "context" to the discussion than the usual marketing-speak claims and punchlines.

Maybe it's not the grammar itself so much as the overall structuring of the idea or thought. Regular model output sounds much more like a marketing piece or news coverage than an individual anyway. I think, people wanna discuss things with people, not with a news-editor.

Imustaskforhelp an hour ago | parent [-]

> I think, people wanna discuss things with people, not with a news-editor.

If I understand you correctly, then yes, I completely agree, but my worry is that this too can be "emulated" by models already available to us, as my comment showed. Technically there's nothing stopping new accounts from using, say, Kimi with a system prompt designed to not sound like AI, and I suspect that would be effective.

If that's the case, doesn't it raise the question of what we can actually detect as AI (which was my point)? The grandparent comment suggests using intentionally bad human writing to avoid being flagged as AI, but what I'm saying is that AI can do that too. So is intentionally bad writing really a good indicator of being human?

And a bigger question: if bad writing isn't an indicator, then what is?

Or can there even be a good indicator (if, say, the bot is cautious)? If there isn't, can we be sure whether the comments we read are AI or not?

Essentially the dead-internet theory. Most websites have bots, but we know they're bots and nobody seems to care; meanwhile we hold on to this misguided trust that if comments don't feel like obvious bots, they must be from humans.

My question is, what if that's wrong? It feels entirely possible with current tech/models, Kimi for example. Doesn't that lead to some big trust issues within the fabric of the internet itself?

Personally, I don't feel like the whole website is AI, but there's definitely a chance of some sneaky action-at-a-distance from new accounts that could be LLMs, and we'd be none the wiser.

At the same time, real accounts get questioned about whether they're LLMs just for being new (my account is almost 2 years old, fwiw, and people have essentially questioned whether it's AI).

But what this does do is make people lose a bit of trust in each other, and become a little cautious toward every message they read.

(This comment's a little too conspiratorial for my liking, but I can't shake this feeling sometimes)

It all just feels so weird to me sometimes. I guess there's still an intuition about who's human and who's not, and the HN link/article itself shows that most people who deploy AI on HN from newer accounts use standard models without much care, which is why em-dashes get detected and may remain a decent detector for some time, for some people. That also makes the original commenter's suggestion of intentionally bad grammar make sense, because em-dashes do make text more likely to read as AI :/

It's just a very weird situation, and I'm not sure how to explain it: depending on which angle you look from, you can be right either way.

You can hurt your grammar to sound more human, and you'd be right.

Or you can write the way you already do, because models are already capable of intentionally bad grammar too, so bad grammar isn't a benchmark for AI-or-not; keep using good grammar and you'd be right as well.

It's sort of a paradox and I don't have any answers :/ My suggestion right now is to not overthink it.

Because if both approaches are right, then do whatever, imo. Just be human yourself, and then you can back that up with the plain truth that you are human, even if you get called AI.

So I guess, TLDR: use good grammar or don't; just write like a human, and that's enough. Or it should be, I guess.