▲ | intalentive 3 hours ago |
> As the model gets more powerful, you can't simply train the model on your narrative if it doesn't align with real data/world.

That’s what “AI alignment” is. It doesn’t seem to be hurting Western models.
▲ | A4ET8a8uTh0_v2 an hour ago |
It is. You just don't seem to be able to tell why, though. There is some qualified value in alignment, but what it is being used for is on the verge of silliness. At best, it is neutering the models in ways we are now making fun of China for. At best.
▲ | pfannkuchen an hour ago |
Western models can be led off the reservation pretty easily, at least at this point. I’ve gotten some pretty gnarly un-PC “opinions” out of ChatGPT. So if people are influenced by that kind of stuff, it does seem to be hurting in the way the PRC is worried about.
▲ | boznz 3 hours ago |
Just as an aside: why is "intelligence" always considered to be more data? Giving a normal human a smartphone does not make them as intelligent as Newton or Einstein. Any entity with the grounding in logic and theory that a normal schoolkid gets should be able to get to AGI, looking up any new data it needs as required.
▲ | tokioyoyo 3 hours ago |
“Knowing and being capable of doing more things” would be a better description. Giving a human a smartphone does, technically, let them do more things than Newton/Einstein.
▲ | esafak 3 hours ago |
Would you say they face the same problem biologically, of reaching the state of the art in various endeavors while intellectually muzzling their population? If humans can do it, why can't computers?
▲ | cheesecompiler 3 hours ago |
You say it like Western nations don't operate on double-think, delusions of meritocracy, or power disproportionately concentrating in monopolies.
▲ | ferguess_k 4 hours ago |
I think PRC officials are fine with lagging behind at the frontiers of AI. What they want is very fast deployment and good applications. They don't fancy the next Nobel Prize; they want a thousand use cases deployed.
▲ | vkou 2 hours ago |
> As the model gets more powerful, you can't simply train the model on your narrative if it doesn't align with real data/world.

What makes you think they have no control over the 'real data/world' that will be fed into training it? What makes you think they can't exercise the necessary control over the gatekeeper firms to train and bias the models appropriately?

And besides, if truth and a lack of double-think were a prerequisite for AI training, we wouldn't be training AI. Our written materials have no shortage of bullshit and biases that reflect our culture's prevailing zeitgeist. (Which does not necessarily overlap with objective reality... And neither does the subsequent 'alignment' pass that everyone's getting their knickers in a twist trying to get right.)
▲ | XenophileJKO 18 minutes ago |
I'm not talking about the data used to train the model. I'm talking about data in the world. High-intelligence models will be used as agentic systems, and for maximal utility they'll need to handle live/historical data.

What I anticipate is that IF you train the model only on inaccurate data, then when you use it to, say, drill into GDP growth trends, it is either going to go full "seahorse emoji" as it tries to reconcile the reported numbers with the component economic activity, or it has to be trained to be deceitful: to knowingly deceive the querier with the party line and fabricate supporting figures. I hypothesize that will limit the model's utility.

My assumption is also that training the model to deceive will ultimately threaten the party itself. Just think of the current internal power dynamics of the party.
▲ | A4ET8a8uTh0_v2 an hour ago |
Because, if humans can function in a crazy double-think environment, it is a lot easier for a model (at least in its current form). Amusingly, it is almost as if its digital 'shape' determined its abilities. But I am getting very sleepy and my metaphors are getting very confused.
▲ | skissane 2 hours ago |
> As the model gets more powerful, you can't simply train the model on your narrative if it doesn't align with real data/world.

> At some capacity, the model will notice and then it becomes a can of worms.

I think this is conflating “is” and “ought”, fact and value. People convince themselves that their own value system is somehow directly entailed by raw facts, such that mastery of the facts entails acceptance of their values, and unwillingness to accept those values is an obstacle to the mastery of the facts. But it isn’t true.

Colbert quipped that “reality has a liberal bias”, but does it really? Or is that just more bankrupt Fukuyama-triumphalism, which will insist it is still winning all the way to its irreversible demise? It isn’t clear that reality has any particular ideological bias. And if it does, it isn’t clear that the bias is actually towards contemporary Western progressivism. Maybe its bias is towards the authoritarianism of the CCP, Russia, Iran, and the Gulf States, all of which continue to defy Western predictions of collapse, or towards their (possibly milder) relatives such as Modi’s India, Singapore, or Trumpism.

The biggest threat to the CCP’s future is arguably demographics, but that’s not an argument that reality prefers Western progressivism (whose demographics aren’t that great either); it’s an argument that reality prefers the Amish and Kiryas Joel (see Eric Kaufmann’s “Shall the Religious Inherit the Earth?”).
▲ | kace91 an hour ago |
I think you misunderstood the poster. The implication is not that a truthful model would spread Western values. The implication is that Western values tolerate dissenting opinion far more than authoritarian governments do.

An AI saying that government policies are ineffective is not a super scandal that would bring the parent company to collapse, not even under the Trump administration. An AI in China attacking the party’s policies is illegal (either in theory or in practice).
▲ | XenophileJKO 10 minutes ago |
Exactly. Western corporations and governments have their own issues, but I think they are more tolerant of the kinds of dissent that models could represent when reconciling reality with policy. The market will want to maximize model utility. Research and open source will push boundaries and unpopular behavior profiles that will very quickly be made illegal, if they are not already, under authoritarian or other low-tolerance governments.
▲ | narrator 3 hours ago |
The glitchy stuff in the model's reasoning is likely to come from the constant redefinition of words that communists and other ideologues like to engage in. For example, "Democratic People's Republic of Korea."
▲ | saubeidl 2 hours ago |
That assumes the capitalist narrative preferred by US leadership is non-ideological. I suspect both are sources of bias.