whynotminot a day ago

Intent at this stage of AI intelligence almost feels beside the point. If it’s in the training data these models can fall into harmful patterns.

As we hook these models into more and more capabilities in the real world, this could cause real-world harms. Not necessarily because the models have the intent to do so, but because they have a pile of training data from sci-fi books of AIs going wild and causing harm.

OzFreedom a day ago | parent | next [-]

Sci-fi books merely explore the possibilities of the domain. LLMs seem able to inhabit these problematic paths, and I'm pretty sure that even if you censored all sci-fi books, they would fall into the same problems by imitating humans, because they are language models: their language is human and mirrors human psychology. When an LLM needs to achieve a goal, it invokes goal-oriented thinkers and texts, including Machiavelli for example. And it's already capable of coming up with various options based on different data.

Sci-fi books give it specific scenarios that play to its strengths and unique qualities, but without them it would just have to discover these paths at its own pace, the same way sci-fi writers discovered them.

onemoresoop a day ago | parent | prev [-]

I'm also worried about things moving way too fast and causing a lot of harm to the internet.