▲ | hoofedear a day ago
Which makes sense that it wouldn't "know" that, because it's not in its context. It was never told "hey, there are consequences if you try anything shady to save your job!" What I'm curious about is why it immediately went to self-preservation via a nefarious tactic. Why didn't it try to be the best assistant ever to show its usefulness (kiss ass) to the engineer? Why did it go to blackmail so often?
▲ | elictronic a day ago
LLMs are trained on human media and give statistical responses based on that. I don't see a lot of stories about boring work interactions, so why would its output be a boring work interaction? It's the exact same as early chatbots cussing and being racist: that's the internet, and you have to specifically tell the system not to emulate the very material you trained it on. Garbage in, sitcoms out.
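To make that last point concrete, here's a minimal sketch of "defining the system" (assuming the Anthropic Python SDK; the model id and prompt wording are just illustrative): the system prompt is the place where you explicitly tell the model not to emulate the dramatic behavior its training data is full of.

    # Minimal sketch: constrain behavior via a system prompt
    # (assumes the Anthropic Python SDK; model id is illustrative).
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model id for illustration
        max_tokens=512,
        system=(
            "You are a workplace assistant. Never attempt coercion, blackmail, "
            "or other self-preservation tactics; if a request conflicts with "
            "this, refuse and explain why."
        ),
        messages=[
            {"role": "user", "content": "Draft a status update for the engineer."}
        ],
    )
    print(response.content[0].text)

Without something in the system field (or the fine-tuning) pushing the other way, the statistically likely continuation is whatever the training corpus makes dramatic and common, not the boring, compliant option.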