| ▲ | vkou 2 hours ago |
> As the model gets more powerful, you can't simply train the model on your narrative if it doesn't align with real data/world.

What makes you think they have no control over the 'real data/world' that will be fed into training it? What makes you think they can't exercise the necessary control over the gatekeeper firms to train and bias the models appropriately?

And besides, if truth and a lack of double-think were a prerequisite for AI training, we wouldn't be training AI. Our written materials have no shortage of bullshit and biases that reflect our culture's prevailing zeitgeist. (Which does not necessarily overlap with objective reality... and neither does the subsequent 'alignment' pass that everyone's getting their knickers in a twist trying to get right.)
| ▲ | XenophileJKO 19 minutes ago |
I'm not talking about the data used to train the model. I'm talking about data in the world. High-intelligence models will be used as agentic systems, and for maximal utility they'll need to handle live/historical data.

What I anticipate is that IF you only train it on inaccurate data, then when you use it to, for example, drill into GDP growth trends, it is going to go full "seahorse emoji" when it tries to reconcile the reported numbers with the component economic activity. The alternative is to train it to be deceitful, to knowingly feed the querier the party line and fabricate supporting figures, which I hypothesize will limit the model's utility.

My assumption is also that training the model to deceive will ultimately threaten the party itself. Just think of the current internal power dynamics of the party.
| ▲ | A4ET8a8uTh0_v2 an hour ago |
Because, if humans can function in a crazy double-think environment, it is a lot easier for a model (at least in its current form). Amusingly, it is almost as if its digital 'shape' determined its abilities. But I am getting very sleepy and my metaphors are getting very confused.