nerdjon | 3 hours ago
This was my first thought as well: all this does is further remove the user from the raw chat output and make the information appear concretely reliable. Is it really that shocking that you can have an LLM generate structured data and shove it into a visualizer? The concern is whether it is reliable, which we know it isn't.
ericmcer | 3 hours ago | parent
The further they can get people from the reality of `this just spits out whatever it thinks the next token will be`, the more they can push the agenda.
j45 | 3 hours ago | parent
It's a reasonable concern. It can often be mitigated by prompting in a manner that invokes research and verification instead of letting the model default to its corpus. Passive questions generate passive responses.
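A minimal sketch of what that prompting shift might look like. The helper name and the exact wrapper wording are illustrative assumptions, not anything from the thread or a specific API; the point is only the contrast between asking passively and explicitly instructing the model to check and cite.

```python
# Illustrative only: a hypothetical helper that rewrites a passive question
# into one that pushes the model toward research and verification instead of
# answering from its training corpus.

PASSIVE = "What year was the Brooklyn Bridge completed?"

def with_verification(question: str) -> str:
    """Wrap a passive question in instructions that ask the model to
    consult sources, quote them, and flag uncertainty or disagreement."""
    return (
        f"{question}\n\n"
        "Before answering: consult at least two independent sources, "
        "quote the relevant passage from each, and state your confidence. "
        "If the sources disagree, say so explicitly."
    )

print(with_verification(PASSIVE))
```

The transformed prompt still carries no guarantee of reliability; it only changes what the model is asked to do, which is the mitigation the comment describes.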