dchuk 3 hours ago

I’m very bought into the idea that raw coding is now a solved problem with the current models and agentic harnesses, let alone what’s coming in the near term.

That being said, I think we’re in a weird phase right now where people’s obvious mental health issues are appearing as “hyper productivity” because these tools let them absolutely spam out code that is locally impressive but not necessarily broadly coherent. I’m watching multiple people, both publicly and privately, clearly breaking down mentally because of the “power” AI is bestowing on them. Their wires are completely crossed when it comes to the value of outputs vs outcomes, and they’re espousing generated nonsense as if it were thoughtful insight.

It’s an interesting thing to watch play out.

petesergeant 3 hours ago | parent | next [-]

> where people’s obvious mental health issues

I think the kids would call this "getting one-shotted by AI"

hahahahhaah 3 hours ago | parent | prev | next [-]

Yeah I am definitely trying to stay off hype and just use the damn tool

bkolobara 2 hours ago | parent | prev [-]

There is a lot of research on how words and language influence what we think, and even what we can observe, like the Sapir-Whorf hypothesis. If in a language there is one word for 2 different colors, speakers of it are unable to see the difference between the colors.

I have a suspicion that extensive use of LLMs can result in damage to your brain. That's why we are seeing so many mental health issues surfacing, and why we are getting a bunch of blog posts about "an agentic coding psychosis".

It could be that LLMs go from being bicycles for the brain to smoking for the brain, once we figure out their long-term effects.

BrenBarn an hour ago | parent | next [-]

> If in a language there is one word for 2 different colors, speakers of it are unable to see the difference between the colors.

That is quite untrue. It is true that people may be slightly slower or less accurate in distinguishing colors that are within a labeled category than those that cross a category boundary, but that's far from saying they can't perceive the difference at all. The latter would imply that, for instance, English speakers cannot distinguish shades of blue or green.

bkolobara an hour ago | parent [-]

The point I was trying to make is that the way our brain works is deeply connected to language and words, including how fast and how accurately you perceive colors [0][1]. And interacting with an LLM could have unexpected side effects, because we were never before exposed to "statistically generated language" in such amounts.

[0]: https://youtu.be/RKK7wGAYP6k?si=GK6VPP0yoFoGyOn3 [1]: https://youtu.be/I64RtGofPW8?si=v1FNU06rb5mMYRKj&t=889

jstanley 2 hours ago | parent | prev [-]

> If in a language there is one word for 2 different colors, speakers of it are unable to see the difference between the colors.

Perhaps you mean to say that speakers are unable to name the difference between the colours?

I can easily see differences between (for example) different shades of red. But I can't name them other than "shade of red".

I do happen to subscribe to the Sapir-Whorf hypothesis, in the sense that I think the language you think in constrains your thoughts - but I don't think it is strong enough to prevent you from being able to see different colours.

bkolobara an hour ago | parent [-]

No, if you show them two colors and ask them if they are different, they will tell you no.

EDIT: I have been searching for the source of where I saw this, but can't find it now :(

EDIT2: I found a talk touching on the topic, with a study: https://youtu.be/I64RtGofPW8?si=v1FNU06rb5mMYRKj&t=889

pverheggen an hour ago | parent | next [-]

You're probably thinking of the Himba tribe color experiment - which as it turns out, was mostly fabricated by a BBC documentary:

https://languagelog.ldc.upenn.edu/nll/?p=17970

bkolobara 44 minutes ago | parent [-]

Yes, I think this was it! Thanks for sharing the link. I had no idea that part was fabricated.

JumpCrisscross an hour ago | parent | prev | next [-]

> if you show them two colors and ask them if they are different, they will tell you no

The experiments I've seen seem to interrogate what the culture means by colour (versus shade, et cetera) more than what the person is seeing.

If you show me sky blue and Navy blue and ask me if they're the same colour, I'll say yes. If you ask someone in a different context if Russian violet and Midnight blue are the same colour, I could see them saying yes, too. That doesn't mean they literally can't see the difference. Just that their ontology maps the words blue and violet to sets of colours differently.

wongarsu 24 minutes ago | parent [-]

If you asked me if a fire engine and a ripe strawberry are the same color I would say yes. Obviously, they are both red. If you held them next to each other I would still be able to tell you they are obviously different shades of red. But in my head they are both mapped to the red "embedding". I imagine that's the exact same thing that happens to blue and green in cultures that don't have a word for green.

If on the other hand you work with colors a lot, you develop a finer mapping. If your first instinct when asked for the name of that wall over there is to say it's sage instead of green, then you would never say that a strawberry and a fire engine have the same color. You might even question the validity of the question, since fire engines come in all kinds of different colors (neon red being a trend lately).
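The coarse-vs-fine "embedding" idea above can be sketched as a toy nearest-category classifier. Everything here is an illustrative assumption (the palette names, the RGB values, the two object shades), not perceptual data: the point is only that with a coarse vocabulary two distinct shades collapse to one label, while a finer vocabulary keeps them apart.

```python
import math

# Coarse palette: everyday color words. Fine palette: a colorist's vocabulary.
# All names and RGB values are made up for illustration.
COARSE = {"red": (200, 30, 30), "green": (30, 160, 60), "blue": (40, 60, 200)}
FINE = {
    "crimson": (220, 20, 60),
    "scarlet": (255, 36, 0),
    "sage": (178, 190, 130),
    "navy": (0, 0, 128),
}

def nearest(rgb, palette):
    """Map an exact shade to the closest named category (its 'embedding')."""
    return min(palette, key=lambda name: math.dist(rgb, palette[name]))

fire_engine = (230, 30, 10)   # assumed shade
strawberry = (210, 30, 70)    # assumed shade

# Coarse vocabulary: both shades collapse to the same label.
print(nearest(fire_engine, COARSE), nearest(strawberry, COARSE))  # red red
# Finer vocabulary: the distinction survives.
print(nearest(fire_engine, FINE), nearest(strawberry, FINE))      # scarlet crimson
```

The information lost is the same either way; what differs is how early the exact shade gets discarded in favor of the category label.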

JumpCrisscross 15 minutes ago | parent [-]

> in my head they are both mapped to the red "embedding"

Sure. That's the point. These studies are studies of language per se, not of how language influences perception to a meaningful degree. Sapir-Whorf is a cool hypothesis, but it isn't true for humans.

(Out of curiosity, what is "embedding" doing that "word" does not?)

wongarsu 5 minutes ago | parent [-]

Word would imply that this only happens when I translate my thoughts to a chosen human language (or articulate thoughts in a language). I chose embedding because I think this happens much earlier in the pipeline: the information of the exact shade is discarded before the scene is committed to memory and before most conscious reasoning. I see this as something happening at the interface of the vision system, not the speech center.

Which is kind of Sapir-Whorf, just not the extreme version of "we literally can't see or reason about the difference". But I don't think Sapir-Whorf is completely off the mark either: the words we know and regularly use do influence how we categorize and think about things.

cthalupa an hour ago | parent | prev [-]

Our ability to look at a gradient of color and differentiate between shades, even without distinct names for them, seems to disprove this on its face.

Unless the question is literally the equivalent of someone showing you a swatch of crimson and a swatch of scarlet and being asked if both are red, in which case, well yeah sure.