notarobot123 10 hours ago:
The paper seems to provide a realistic benchmark for how these systems are deployed and used, though, right? Whether the mechanisms are crude or not isn't the point - this is how production systems work today (as far as I can tell). I think the accusation that this research anthropomorphizes LLMs should be accompanied by a little more substance, or it risks becoming a blanket dismissal of this kind of alignment research. I can't see the methodological error here. Is this an accusation that could be aimed at any such research, regardless of methodology?
alentred 9 hours ago:
Oh, sorry for the misunderstanding - I am not criticizing or accusing anyone of anything at all! I am suggesting ideas for further research. The practical applications, as I mentioned above, are all there, and for what it's worth I liked the paper a lot. My point is: I wonder if this could be followed up by a more abstract study, so to speak, that drills into the technicalities of how well models follow conflicting prompts in general.