eru an hour ago:
> If you manipulate your AI to deliberately push political preferences, that is your right I guess, but IMO I'd appreciate some regulation saying you should be required to disclose that under penalty of perjury.

People (and especially companies) are already permitted to make legally enforceable guarantees and statements about their AI. Why do we need extra machinery? You can assume that anyone who doesn't make such strong statements took the easy way out.

To spell it out: there's no law saying companies can swear oaths, but they can write whatever statement they want to be liable for in their investor prospectus, and then in the US any enterprising lawyer can assemble a group of shareholders to bring a suit for securities fraud if the company is lying. Slightly more everyday, but with fewer legal teeth: the company can also make the relevant statements explicitly in their ads, and then they can be pursued under misleading-advertising laws.

If you notice that they use weasel wording in their ads or prospectus, instead of the simple, strong language a judge would nail them to, disregard the statements.
siliconc0w 6 minutes ago (reply):
xAI is private (and more and more companies are staying private for this and other reasons).