sfblah 12 hours ago
Things like this really favor models offered from countries that have fewer legal restrictions. I just don't think it's realistic to expect people not to have access to these capabilities. It would be reasonable to add a disclaimer. But as things stand I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street, meaning normal free-speech protections would apply.
sarchertech 4 hours ago | parent
>I think it's fair to consider talking to ChatGPT to be the same as talking to a random person on the street That’s not how companies market AI though. And the models themselves tend to present their answers in a highly confident manner. Without explicit disclaimers, a reasonable person could easily believe that ChatGPT is an authority in the law or medicine. That’s what moves the needle over to practicing the law/medicine without a license. | |||||||||||||||||||||||||||||
thinkingtoilet 12 hours ago | parent
What capabilities? The article says the study found it was entirely correct 31% of the time.