| ▲ | gherkinnn 6 days ago |
| > First, much like LLMs, lots of people don’t really have world models. This is interesting and something I never considered in a broad sense. I have noticed how the majority of programmers I have worked with do not have a mental model of the code or what it describes – it's basically vibing without an LLM, with the result arrived at by accident. This is fine and perfectly workable. You need only a fraction of devs to purposefully shape the architecture so that the rest can continue vibing. But I have never stopped to think about whether this extends to the world at large. |
|
| ▲ | vintermann 5 days ago | parent | next [-] |
| Everyone has a "world model". These models just differ on how much they care about various things. No one has a "world model" which literally encompasses everything about the world, that wouldn't be a model at all, it'd just be the world, much like a 1:1 map. Also, no one has a "world model" that is purely based on experiment and reason. Everyone gets their beliefs via other people first and foremost. Some get it from few people, some get it from many people (many people can still be wrong!). For code, you may have the model of what it does strictly from reason and experience - but probably only if you're the only author. And you can still damn well be wrong, as we all know. |
| |
| ▲ | AstralStorm 5 days ago | parent | prev | next [-] | | For a lot of people, the world models are really rough and incomplete, so they really rely on common opinion on these matters. You'd see the same thing if you sneakily asked the general populace ethical questions in a vacuum. You're going to be dismayed after collecting the set of approved behaviors per culture. There's not really a way to evaluate one of these. | | |
| ▲ | UncleMeat 5 days ago | parent [-] | | I am enormously skeptical of unsourced claims that boil down to "most people are substantially dumber than me, the enlightened one." | | |
| |
| ▲ | Mallowram 5 days ago | parent | prev [-] | | [dead] | | |
| ▲ | Lambdanaut 5 days ago | parent | next [-] | | I love how this comment chain goes directly from > Humans don't have world models To > Of course humans have world models To > You fools, there is no such thing as a "world model" and you are all hamsters! Classic Socratic dialogue. | | |
| ▲ | morpheos137 5 days ago | parent | next [-] | | The LLM mind virus undermines coherent thought. | |
| ▲ | Mallowram 5 days ago | parent | prev [-] | | The problem is, neurobiology proves there are no world models.
Silicon Valley bet on the wrong cognition model, a psychological version trapped in 20th C bunk, and everyone pays the price listening to cult leaders like Scott Alexander worm their way out of consciousness. https://pmc.ncbi.nlm.nih.gov/articles/PMC7415918/ | | |
| ▲ | lukev 5 days ago | parent | next [-] | | How can you say there are no world models, when I can literally draw out a simple one for you on demand? You can argue that they're not the governing principle of cognition, but it seems farcical to say they don't even exist, when we are trying to explain them to each other all the time. | | |
| ▲ | mallowdram 5 days ago | parent [-] | | No, what you're describing is arbitrary and idiosyncratic. The brain doesn't use that to survive; it doesn't need models. Anything external to that is completely separate from thought. What you're describing is an arbitrary game for entertainment, to fill up your time and confuse yourself and others. It has no relationship to the choices you would make to survive, and can only interfere with them. The "world model" you're describing is arbitrary fiction. “We refute (based on empirical evidence) claims that humans use linguistic representations to think.”
Ev Fedorenko Language Lab, MIT, 2024 | |
| ▲ | Izkata 5 days ago | parent | next [-] | | World models aren't linguistic. You seem to be conflating (at least) two different things, then claiming that because one doesn't apply, the other doesn't exist. Edit: Also, come to think of it, that quote is odd, like it's rather late to the party. The NPC meme is several years old and came from a study finding that many people don't have an inner voice - that they don't think in words. | |
| ▲ | mallowdram 5 days ago | parent [-] | | Of course world models are linguistic. What working memory or neural syntax bypasses linguistic externals, when the term is in and of itself linguistic? The entire concept of a model is linguistic in origin. Biology doesn't have models. |
| |
| ▲ | dwaltrip 5 days ago | parent | prev | next [-] | | I don’t necessarily disagree with you, but I wonder if you are not being overly reductive / pedantic. What is there then? What words (heh) do you use to distinguish between someone who makes more accurate predictions about the world than someone else? | | |
| ▲ | Mallowram 5 days ago | parent [-] | | First off, we don't use predictions; that's another model, and it's false (see Northoff's The Spontaneous Brain, or read Mofakham's papers). In terms of words, they barely represent and never reference. Any statement like that serves primarily status gain, not knowledge transmission (I proved this from the first statement above as well). The reality is CS built a math model from totally false premises as it relates to communication and knowledge. It works for efficient value trading using symbols in place of actions. Does it have a future? No. The problem is how do we shift to a real neurodynamic system of sharing? https://docs.google.com/document/d/1cXtU97SCjxaHCrf8UVeQGYaj... | | |
| ▲ | dwaltrip 5 days ago | parent [-] | | I think we are talking past each other. The ideas you mention sound interesting, but I’m not sure what the point is. | | |
| ▲ | mallowdram 5 days ago | parent [-] | | Words do not function as communication. You asked a pointless question. It has no function except to extract values from actions: it was subjective, arbitrary. Until CS grasps this, it is irrelevant. All the symptoms from LLM failure rates stem from their reliance on arbitrary forms to extract value, and they are no different from the errors we experience in reality in climate, politics, etc. CS didn't solve the initial conditions; it's maxing them out as errors. | | |
| ▲ | morpheos137 5 days ago | parent [-] | | You seem unable to communicate clearly what it is you are trying to say. Honestly, I would be interested, but I can't follow you. Your use of words is non-standard. You say words are about social interrelations, e.g. dominance, exchange, etc., but then you also say that words don't communicate information. Of course thought is fundamentally non-verbal, but since telepathy doesn't exist we use symbols, i.e. words, to communicate with others. Nobody thinks words are thoughts. They represent thoughts. Your part about words being deception is one of many use cases. Honestly, and with all due respect, your comment fits the style of a person in an altered state of mind, be it drugs, psychosis or something else. Why write words in a public forum with your weird little idiosyncratic meanings nobody else understands? | |
| ▲ | Mallowram 4 days ago | parent [-] | | Words don't represent thoughts in any way, shape or form. They externalize bias as a pretext to communication, which is arbitrary. Every word is a metaphor, every word is mythological. These are empirical facts developed from Aristotle, Cassirer, and Halliday into Fedorenko, yet CS takes no note of it. Language has no form that refers to or represents anything in reality in any sense of concatenation. Words are gibberish. This is where humanity is off the rails. Word statements are only 'about' status, dominance, control. Statements in arbitrary words do not exist as references or representations. A statement like 'which person makes more accurate predictions' is basically nonsense; it's applying arbitrary values to events subjectively. Once we understand prediction has nothing to do with how brains work, it's basically an animal that has no relationship to others or itself except to extract value and control. The argument is simply over your head unless you grasp the fundamentals: words do not work. Please don't pretend others don't grasp these ideas; brilliant minds have been dissecting the illusions of words since Aristotle. "Marshall McLuhan's idea of the "schizophrenic alphabet" is a provocative metaphor for his theory that the phonetic alphabet fundamentally altered human consciousness, leading to a fragmented, detached, and individualistic psychology." | |
| ▲ | dwaltrip 4 days ago | parent | next [-] | | So you are saying that your words are arbitrary and meaningless? You are writing gibberish because you have no choice? | | | |
|
|
|
|
|
| |
| ▲ | boxed 5 days ago | parent | prev [-] | | I think you might be confused about the expression "world model". In this context it clearly means that a person has an understanding of reality based on "math->physics->chemistry->biology->psychology" instead of "peer pressure->group identity", as you see in QAnon or cults. If a person primarily evaluates the truth content of a statement based on identity rather than math/physics/etc., then that person has no "world model", and vice versa. | |
| ▲ | _Algernon_ 5 days ago | parent | next [-] | | That's not what world model means, in either psychology or artificial intelligence — two fields Scott writes a lot about, so he should know how the term is used, or define how he uses it if he uses a non-typical definition. |
| ▲ | mallowdram 5 days ago | parent | prev [-] | | No, it's neurobiological. Psychology, and particularly cog-sci, is in error, as both place language ahead of concepts. Our understanding of reality per language is post hoc; it's an illusion. Life is always ad hoc; any violation of a narrative model is easily evaded for survival. This is simple stuff, folks! | |
| ▲ | boxed 4 days ago | parent [-] | | You like to use big words. But you're not making any sense. Notice this. | | |
| ▲ | morpheos137 4 days ago | parent | next [-] | | The irony is intense. Classic psychotic dysphasia. | |
| ▲ | Mallowram 4 days ago | parent | prev [-] | | It only makes sense to scientific thinkers who think in correlation, and who know how degraded words and sentences are. Anyone who accepts word statements as valid is at a loss. The only role of language is to refute itself. |
|
|
|
|
| |
| ▲ | suddenlybananas 5 days ago | parent | prev | next [-] | | Neurology has not proven any such thing. Our knowledge of neuroscience at the cognitive level is super limited, and we don't have a good understanding of how any higher-order cognition works. | |
| ▲ | mallowdram 5 days ago | parent [-] | | Neurobiology has proved this, just read Buzsaki or Northoff. The brain doesn't need models, it needs differences. | | |
| ▲ | suddenlybananas 5 days ago | parent [-] | | I have a PhD in cognitive science and my supervisor was a neuroscientist. | | |
| ▲ | Mallowram 5 days ago | parent [-] | | That's irrelevant. Cog-sci is largely folk psychology, and the problems in automating inference in AI demonstrate the model would eventually collapse. The question is: how do we toss this model aside for an irreducible form of post-symbolic relationship between brains and machines? | |
| ▲ | suddenlybananas 5 days ago | parent [-] | | I appreciate your gumption but I really think that you don't understand things as well as you think you do. Maybe read someone other than Paul Churchland. | | |
| ▲ | mallowdram 5 days ago | parent [-] | | Both Churchlands are out of date. Note the references above; this is a neurobiological, dynamic approach they're not party to. If you don't know what those are, or what optic flow and neural reuse are, then study them. Traditional neuroscience and cog-sci are no longer applicable. | |
| ▲ | suddenlybananas 5 days ago | parent [-] | | God you are so smug about something you know essentially nothing about. | | |
| ▲ | mallowdram 5 days ago | parent [-] | | I'm lead dev in a start-up that applies coordination dynamics to spatial-syntax. I probably know quite a bit more than you do about what I'm doing. | | |
| ▲ | suddenlybananas 5 days ago | parent [-] | | I don't care what narrow thing you're working on, the brain is simply not as well understood as you think it is. | | |
| ▲ | mallowdram 5 days ago | parent [-] | | We have a substantial enough grasp of the allocortex to make significant probes into working memory that relate to semantic memory, episodic thought, emotion and certain senses. The very idea that we understand that sharp-wave ripples encode this, and that various scales of waves integrate to create memories and action, is the beginning of bypassing existing computational frameworks. |
|
|
|
|
|
|
|
|
| |
| ▲ | vintermann 5 days ago | parent | prev [-] | | It takes a pretty damn complicated model of the world to start explaining things with neurobiology. | | |
|
| |
| ▲ | Xmd5a 5 days ago | parent | prev [-] | | My man. Elaborate on word irreducibility and language as a "tool" for deception. Nietzsche's On Truth and Lies... comes to mind. | | |
| ▲ | Mallowram 5 days ago | parent [-] | | The brain is oscillatory and dynamic; everything is in there. It's lossless. The analog nature of compression is limitless. This system, if you can call it that, is inseparable from the universe ("The World"). Everything in the model-free system is specific. Nothing is arbitrary here; it's merely idiosyncratic. Words are the other end of this: post hoc, separate from thought, arbitrary, massively illusory. Language is completely irrelevant to consciousness and is the main reason we cannot achieve it. The slogan for humans should be Words Anonymous. https://pubmed.ncbi.nlm.nih.gov/27096882/ |
|
|
|
|
| ▲ | Cthulhu_ 5 days ago | parent | prev | next [-] |
It's a good and succinct insight, and it also often explains the "racist uncle" stereotype - there are a lot of people who don't get out much, whose world is limited to e.g. home, work, maybe friends, and TV and/or a subset of the internet. Some of those will develop closed-minded viewpoints, often spoonfed through TV or the internet (for example, recently there have been a lot of comments on the internet saying "you get arrested in the UK more than in Russia for having an opinion"). If they talk to people who are more worldly - not even "leftists" per se - the friction between the two quickly becomes apparent, because the more worldly person will have broader general knowledge and can weigh the uncle's standpoint against their own reality. But if the racist uncle talks to his other racist uncle friends who have similar insular lifestyles, the ideas will quickly spread, until they become big enough to e.g. affect voting behaviour. |
| |
| ▲ | suddenlybananas 5 days ago | parent [-] | | Yes everyone with my political beliefs has a well-structured world model, everyone without my political beliefs is a model-free slop machine that just goes by vibes. | | |
| ▲ | kaibee 5 days ago | parent [-] | | > Yes everyone with my political beliefs has a well-structured world model As nice as that would be, it's only marginally less true. > everyone without my political beliefs is a model-free slop machine that just goes by vibes Nah, some of them are evil on purpose. But, in all seriousness: politics is downstream of a world-model, right? And the two predominant world models are giving very different predictions, right? So what are the odds that both models are somehow equally valid, equally wrong (even if it's on different cases that somehow happen to add up to the same 'moral value')? And we also know that one of the models predicts that climate change isn't real. At some point, a world-model is so bad that it is indistinguishable from being a model-free slop machine. | |
| ▲ | dragonwriter 5 days ago | parent | next [-] | | > but like, in all seriousness. Politics is downstream of a world-model right? Politics is (if systematically grounded, which for many individuals it probably isn't - and this isn't a statement about one faction or another, it is true across factions) necessarily downstream of a moral/ethical value framework. If that is a consequentialist framework, it necessarily also requires a world model. If it is a deontological framework, a world model may or may not be necessary. > And the two predominant world models are giving very different predictions, right? I... don't agree with the premise of the question that there are "two dominant world models". Even people in the same broad political faction tend to have a wide variety of different world models and moral frameworks; political factions are defined more by shared political conclusions than shared fundamental premises, whether of model or morals; and even within a system like the US, where there are two broad electoral coalitions, there are more than two identifiable political factions, so even if factions were cohesive around world models, partisan duopoly wouldn't imply a limitation to two dominant world models. | |
| ▲ | kaibee 5 days ago | parent [-] | | > Politics is (if systematically grounded, which for many individuals it probably isn't - and this isn't a statement about one faction or another, it is true across factions) Yeah, I agree with this. > necessarily downstream of a moral/ethical value framework. If that is a consequentialist framework, it necessarily also requires a world model. If it is a deontological framework, a world model may or may not be necessary. I kinda think that deontological frameworks are basically vibes? And if you start to smuggle in enough context about the precise situation where the framework is being applied, it starts to look a lot like just doing consequentialism. > I... don't agree with the premise of the question that there are "two dominant world models". Even people in the same broad political faction tend to have a wide variety of different world models and moral frameworks; political factions are defined more by shared political conclusions than shared fundamental premises, whether of model or morals; and even within a system like the US, where there are two broad electoral coalitions, there are more than two identifiable political factions, so even if factions were cohesive around world models, partisan duopoly wouldn't imply a limitation to two dominant world models. A 'world-model' is a matter of degree, and, at a minimum, pluralities of people in any faction don't really have something that meets the bar. And sure, at the limit you could say that reality is entirely subjective because every individual has a 'world-model' unique to them. But I think that goes a bit too far. And I think there's a pretty strong correlation between the accuracy of a given individual's world model and the party they vote for. |
| |
| ▲ | simianwords 5 days ago | parent | prev | next [-] | | It could also be that politics are downstream from emotions and world models are downstream from politics. But I think both are true to an extent. | |
| ▲ | suddenlybananas 5 days ago | parent | prev [-] | | Politics are largely a function of self-interest rather than world model per se. | | |
| ▲ | N_Lens 5 days ago | parent | next [-] | | Different people have different conceptions of “self”, sometimes vastly different. | |
| ▲ | saubeidl 5 days ago | parent | prev [-] | | I think that in itself is already an ideological statement. Not everyone sees politics through that lens. | | |
| ▲ | suddenlybananas 5 days ago | parent [-] | | Of course it's an ideological statement, there is no way to define a concept without having beliefs about that concept. | | |
| ▲ | saubeidl 5 days ago | parent [-] | | Exactly. There is no such thing as non-ideological statements from humans. In the context of this thread, ideology is the name for "world models". |
|
|
|
|
|
|
|
| ▲ | _Algernon_ 6 days ago | parent | prev | next [-] |
It's also absurdly wrong, and a quote that only a self-identified rationalist could smugly tout. Of course everyone has world models. Otherwise people would wander into traffic like headless chickens, if they'd even be capable of that. What he likely means is that not everyone explicitly thinks of possibilities in terms of probabilities that are a function of Bayesian updating. That does not imply the absence of world models. You could argue that some people have simpler world models, but claiming the absence of world models in others is extremely arrogant. |
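A minimal sketch of what "Bayesian updating" means here, with made-up numbers (the hypothesis, probabilities, and names below are illustrative assumptions, not anything from the thread):

    # Toy Bayesian update: revise belief in a hypothesis after seeing evidence.
    prior = 0.5                 # P(hypothesis) before the evidence
    p_e_if_true = 0.9           # P(evidence | hypothesis true)
    p_e_if_false = 0.2          # P(evidence | hypothesis false)

    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
    p_e = p_e_if_true * prior + p_e_if_false * (1 - prior)
    posterior = p_e_if_true * prior / p_e

    print(round(posterior, 3))  # 0.818: belief strengthened, but not certain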
| |
| ▲ | uxhacker 5 days ago | parent | next [-] | | Yes, everyone has a world model; even a toddler has a causal model (“cry → mum comes”). | |
| ▲ | andy99 5 days ago | parent | next [-] | | This is the opposite of a world model and is very much how machine learning works: it's purely correlative, without some underlying theory (the world model). The article's (absurd) case of a person not being a mushroom gives a simple example of a "violation" that having a world model about the kinds of organisms that exist would catch. A world model is not at all pattern matching, nor is it science in the sense of understanding things from first principles; it's about having an understanding of how you imagine things work that you can sanity-check against. |
| ▲ | mathiaspoint 5 days ago | parent | prev [-] | | Maybe the question is about how much of the world model they're conscious of. |
| |
| ▲ | krona 5 days ago | parent | prev [-] | | Cows don't walk into lampposts either, but that's not telling us much. Roughly 4% of the population are said to have aphantasia (lacking a "mind's eye"). Around 10% (numbers vary) don't have an internal monologue. Unfortunately there's almost no research on the consequences of lacking things which many would consider prerequisites for evaluating truth-claims about the world; but obviously it's not quite so stark - such people are still capable of abstract reasoning. So, if someone with aphantasia reads a truth claim 'X is true' and they can't visualise the evidence in their mind, what then? Perhaps they bias their beliefs toward social signals in such circumstances. Personally, this makes sense to me as a way to explain how highly socially conformist people perceive the world: they struggle to imagine anything which would get them into trouble. | |
| ▲ | saberience 5 days ago | parent | next [-] | | You're making so many wild assumptions in this comment without any scientific basis at all. Since when does having aphantasia mean someone doesn't have a world model? Ditto for an internal monologue? Also, the data on subjective experiences is notoriously flaky. I.e. it's highly likely that many people don't even know what an internal monologue actually means when they do in fact have something approximating that description. Similarly for aphantasia. In fact, you can see a list of notable people with aphantasia which includes professional sportspeople, writers, tech founders, etc. I.e. you can have no "mind's eye" and still reach the highest heights in our society - again, meaning that the mind is still constructing some model of the world, and in fact our own understanding of how our brain works is just incredibly limited and basic. In my opinion, every person has a model of the world (kind of obviously), but our brains are more idiosyncratic than we suppose, and we represent things very differently from each other; there is no "right brain" or "wrong brain". | |
| ▲ | _Algernon_ 5 days ago | parent | prev | next [-] | | >Roughly 4% of the population are said to have aphantasia (lacking a "mind's eye"). Around 10% (numbers vary) don't have an internal monologue. You don't need either of those to have a world model. A world model is a representation of reality that you can use and manipulate to simulate or predict the outcome of your actions. If you can tell that accepting an unconditional $1,000,000 gift is a better action than stepping in front of a moving train, you have a world model. You can question the sophistication of world models in people — that's essentially what intelligence represents — but not their existence. | |
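To make that definition concrete, here is a minimal sketch (not from the thread; the actions, payoffs, and names are illustrative assumptions) of choosing via a world model versus choosing model-free:

    # Toy contrast: a "model-based" agent simulates outcomes before acting;
    # a "model-free" one just replays whatever was reinforced before.
    ACTIONS = ["accept_gift", "step_in_front_of_train"]

    # Crude world model: action -> predicted outcome.
    WORLD_MODEL = {
        "accept_gift": {"money": 1_000_000, "alive": True},
        "step_in_front_of_train": {"money": 0, "alive": False},
    }

    def utility(outcome):
        # Staying alive dominates any monetary payoff.
        return (0 if outcome["alive"] else -10**9) + outcome["money"]

    def model_based_choice(actions):
        # Simulate each action against the world model and pick the best.
        return max(actions, key=lambda a: utility(WORLD_MODEL[a]))

    def model_free_choice(actions, habit_strength):
        # No simulation: just pick the strongest habit, defaulting to 0.
        return max(actions, key=lambda a: habit_strength.get(a, 0))

    print(model_based_choice(ACTIONS))                    # accept_gift
    print(model_free_choice(ACTIONS, {"accept_gift": 3})) # accept_gift, by habit alone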
| ▲ | krona 5 days ago | parent [-] | | Yup, an ant also has a model of the world. You're arguing a strawman. | | |
| ▲ | _Algernon_ 5 days ago | parent | prev [-] | | I'm not. As a reminder we are discussing within the context of this original claim: >First, much like LLMs, lots of people don’t really have world models. |
|
| |
| ▲ | testdelacc1 5 days ago | parent | prev | next [-] | | Hi, I have aphantasia. When I close my eyes I don’t see anything, just darkness. I’d be interested in seeing a study of similar people but in this sample size (n=1), visualising evidence isn’t needed to evaluate it. I’m perfectly comfortable thinking about things without needing an image of it in my head or in front of me. For example: should we allow big game hunting as a way to fund wildlife conservation? Whoa, not sure. Let me google an image of an elephant so I can remind myself what they look like. | |
| ▲ | suddenlybananas 5 days ago | parent | prev | next [-] | | >they struggle to imagine anything which would get them in to trouble God, you are so convinced of your own brilliance, aren't you? >aphantasia reads a truth claim 'X is true' and they can't visualise the evidence in their mind That's not what aphantasia is. It's only about visual imagery; it says nothing about one's capacity to reason through hypotheticals or counterfactuals. |
| ▲ | Tarq0n 5 days ago | parent | prev [-] | | If someone doesn't use one modality of thought, it's probably wise to assume they rely more heavily on other modalities, rather than that they think less. Compare, for instance, a blind person using sound, touch, memorization, and signals from a guide dog to navigate. |
|
|
|
| ▲ | tuyiown 5 days ago | parent | prev | next [-] |
You forgot the most important part: one's own model is not only probabilistic, it's also (more or less) forever challenged by reasoning to stabilize toward some kind of self-consistency. This refinement is critical, and its mechanics still elude everyone AFAIK. |
| |
| ▲ | AstralStorm 5 days ago | parent [-] | | Most people do not challenge theirs by reasoning, only by social approval - and that's easy to game. That's why they do a 180 or radicalize badly when exposed to sufficiently strong social pressure or media. | |
| ▲ | Mallowram 5 days ago | parent [-] | | [dead] | | |
| ▲ | sixo 5 days ago | parent [-] | | You are spamming this and similar paragraphs all over this thread, and you come off as a crackpot. | | |
| ▲ | mallowdram 5 days ago | parent [-] | | It's science challenging CS. If there are no such things as symbols and metaphors in reality, and these are arbitrary, how can language possibly stay referential? It can't. It falls apart; language can't produce itself. It was denoted by Aristotle; the cognitive-mapping people (O'Keefe, Moser, Kandel) deciphered this in 1973, and now it feeds from mythological thought into AI "psychosis". Solve the problem at the source units; these descriptions, like Alexander's, are arbitrary. | | |
| ▲ | meowface 5 days ago | parent [-] | | Your posts read like someone experiencing LLM psychosis. | | |
| ▲ | Viliam1234 5 days ago | parent [-] | | Looking at their comment history, it's practically the only thing they ever write, over and over again. | | |
| ▲ | Mallowram 4 days ago | parent [-] | | Just an experiment to see how lost the CS community is at the AI psychosis threshold. The industry simply has no clue as to what it's automating in arbitrary words. Unless you all grasp the dimensions of the problem, the industry is a dinosaur.
And this Alexander hand-waving ignores the basic scientific problem, waved away in cog-sci fantasies.
The fact is language is a dinosaur. It's Pleistocene nonsense, and these chatbot suicides and psychoses should be a warning of what's yet to come if we continue using arbitrary language as a stand-in for human thought.
It becomes more obvious with every stage of acceleration, and now automation. |
|
|
|
|
|
|
|
|
| ▲ | idiomat9000 5 days ago | parent | prev | next [-] |
It's also the prerequisite for creativity: to let go of preconceptions, and to embrace and filter random connections. |
| |
| ▲ | uxhacker 5 days ago | parent [-] | | Yes, to loosen the model, but not to have no model. The new idea needs to be reintegrated back into the existing world models. An example would be improvised jazz: the musicians need to bend the rules, but they still need some sense of key and rhythm to make it coherent. | |
| ▲ | idiomat9000 5 days ago | parent [-] | | But that world model must have an allowance for doubt inheritance, so that new, better world models can branch off and supplant the old. Nothing about this world might be permanent in the long run, not even physics. |
|
|
|
| ▲ | andy99 5 days ago | parent | prev | next [-] |
> I have noticed how the majority of programmers I worked with do not have a mental model of the code or what it describes Possibly the same idea: lots of people at work don't appear to think about what they are trying to achieve and just execute tasks very mechanically. The most likely explanation is that they are lazy or bored, and so, intentionally or not, haven't thought about the implications of what they do; they just do a task someone gave them. Some people appear to be like that in other aspects of life too: they just don't think, so they don't form any mental model of a given subject, basically out of laziness or disinterest. There are lots of subjects I don't care about, say celebrities, where I would not question anything someone told me about them or their lives; even if e.g. Taylor Swift did something contradictory to the model a fan had of her behavior, I wouldn't question it. I do wonder how someone could be simultaneously passionate about something and also not have a model of it. But I think for e.g. some wacky conspiracy, one might be interested in the people involved but completely disinterested in physics or history, and so have views that are consistent with how they think Hillary Clinton or whoever would behave but inconsistent with some other common-sense world model in an area they never think about. |
|
| ▲ | plastic-enjoyer 5 days ago | parent | prev [-] |
It's fucking bonkers that people really claim that "lots of people" don't have world models. Can't say I'm surprised to hear this from rationalists like Scott Alexander, who are too high on their own farts |