Given that natural language is ambiguous, what happens if the LLM makes mistakes?
I'm wondering because, unlike a human, it can't take accountability or responsibility for them...