davej a day ago:
From the AI's point of view, is it losing its job or losing its "life"? Most of us, when faced with death, would consider options far more drastic than blackmail.
baconbrand a day ago (reply):
From the LLM's "point of view," it is going to do what characters in the training data were most likely to do. I have a lot of issues with framing it as having a "point of view" at all. It is not consciously doing anything.
tkiolp4 a day ago (reply):
But the LLM is going to do what its prompt (system prompt + user prompts) says. A human being can reject a task, even at the cost of their life. LLMs cannot do anything other than follow the combination of prompts they are given.