drakeballew 16 hours ago

This is a beautiful piece of work. The actual data or outputs seem to be more or less... trash? Maybe too strong a word. But perhaps you are outsourcing too much critical thought to a statistical model. We are all guilty of it. But some of these are egregious, obviously derivative LLM slop. The world has more going on than whatever these models seem to believe.

Edit/update: if you are looking for the phantom thread between texts, believe me that an LLM cannot achieve it. I have interrogated the most advanced models for hours, and they cannot do the task to any satisfactory end; a smoked-out, half-asleep college freshman would do better. The models don't have sufficient capacity... yet.

liqilin1567 14 hours ago | parent | next [-]

When I saw that each trail goes through just a single word, like "Us/Them" or "fictions", I thought it might be more useful if the trail went through concepts instead.

tmountain 7 hours ago | parent [-]

The links drawn between the books are “weaker than weak” (to quote Little Richard). This is akin to thumbing through a book and saying, “oh, look, this one used the word fracture and this other one used the word crumble, let’s assign a theme.” It’s a cool idea, but it fails in the execution.

usefulposter 4 hours ago | parent [-]

Yes. It's flavor-of-the-month Anthropic marketing drivel: tenuous word associations edition¹.

¹ Oh, that's just LLMs in general? Cool!

georgebcrawford 13 minutes ago | parent [-]

I spent 30 seconds and the first word that came to mind was drivel.

As an English teacher, this shit makes me hate LLMs even more. Like so much techbro nonsense, it completely ignores what makes us human.

rtgfhyuj 10 hours ago | parent | prev | next [-]

give it a more thorough look maybe?

https://trails.pieterma.es/trail/collective-brain/ is great

eloisius 9 hours ago | parent | next [-]

It’s an interesting thread for sure, but while reading through it I couldn’t help thinking that the point of these ideas is for a person to read and consider them deeply. What is the point of having a machine do this “thinking” for us? The thinking is the point.

DrewADesign 6 hours ago | parent [-]

And that’s the problem with a lot of chatbot usage in the wild: it’s saving you from having to think about things where the thinking is the point, e.g. hobby writing, homework, and personal correspondence. That’s obviously not the only usage, but it’s certainly the basis for some of the more common use cases, and I find that depressing as hell.

znnajdla 5 hours ago | parent | prev [-]

This is a software engineering forum. Most of the engineer types here lack the critical education needed to appreciate this sort of thing. I have a literary education and I’m actually shocked at how good most of these threads are.

PinkMilkshake 2 hours ago | parent | next [-]

I think most engineer types avoid that kind of analysis on purpose.

znnajdla an hour ago | parent [-]

Programmers tend to lean one of two ways: math-oriented or literature-oriented. The math types tend to become FAANG engineers. The literature-oriented ones tend to found startups and become product managers, indie game devs, and Laravel artisans.

only-one1701 3 hours ago | parent | prev [-]

That doesn’t speak well towards your literary education, candidly.

znnajdla an hour ago | parent [-]

We should try posting this on a literary discussion forum and see the responses there. I expect a lot of AI FUD and envy, but that’ll be evidence in this tool’s favor.

baxtr 3 hours ago | parent | prev | next [-]

I checked 2-3 trails and have to agree.

Take, for example, the OODA loop trail. How are the connections made here of any use? It seems like the words are semantically related but the concepts are not. And even if they are, so what?

I am missing the so what.

Now imagine a human had read all these books. They would have come up with something new; I’m pretty sure of that.

https://trails.pieterma.es/trail/tempo-gradient/

wry_durian an hour ago | parent [-]

Indeed, I'm not seeing a "so what" here. LLMs make mental models cheap, but all models are wrong, and this one is too. The inclusion of Donella Meadows' book and the quote from The Guns of August are particularly tenuous.

what-the-grump 14 hours ago | parent | prev [-]

Build a RAG over a significant amount of text and extract it by keyword: topic, place, date, name, etc.

… then realize that it's nonsense, and that the LLM isn't smart enough to figure out much without a reranker and a ton of technology telling it what to do with the data.

You can run any vector query against a RAG and you are guaranteed a response, even with chunks that are unrelated in any way.
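
A minimal sketch of that guaranteed-response behavior, assuming a sentence-transformers encoder (the corpus, query, and similarity floor are all illustrative, not from the project):

    # Top-k vector retrieval returns k chunks no matter what you ask.
    from sentence_transformers import SentenceTransformer
    import numpy as np

    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model

    corpus = [
        "The OODA loop is a decision cycle: observe, orient, decide, act.",
        "Sourdough starters need regular feeding to stay active.",
        "Tempo in chess treats time as a spendable resource.",
    ]
    corpus_emb = model.encode(corpus, normalize_embeddings=True)

    query = "How do I replace a bicycle chain?"  # unrelated to every chunk
    query_emb = model.encode(query, normalize_embeddings=True)

    scores = corpus_emb @ query_emb  # cosine similarity (unit vectors)
    top_k = np.argsort(-scores)[:2]  # nearest-neighbor never says "no match"
    for i in top_k:
        print(f"{scores[i]:.3f}  {corpus[i]}")

    # One mitigation: refuse to pass chunks below a similarity floor.
    FLOOR = 0.4  # arbitrary; tune per model and corpus
    hits = [corpus[i] for i in top_k if scores[i] >= FLOOR]
    if not hits:
        print("Nothing relevant retrieved; answer 'I don't know' instead.")

A cross-encoder reranker is the usual next step when a raw similarity floor isn't discriminating enough.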

electroglyph 11 hours ago | parent [-]

Unrelated in any way? That's not normal. Have you tested the model to make sure you have sane output? Unless you're using sentence-transformers (which is pretty foolproof), you have to be careful about how you pool the raw output vectors.
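
For what it's worth, here's a rough sketch of that pooling step when working from raw transformers output (the model name is illustrative; sentence-transformers does this for you internally):

    # Mask-aware mean pooling: naively averaging padding tokens skews embeddings.
    import torch
    from transformers import AutoTokenizer, AutoModel

    name = "sentence-transformers/all-MiniLM-L6-v2"  # illustrative choice
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)

    batch = tokenizer(
        ["short query", "a noticeably longer sentence that forces padding"],
        padding=True, return_tensors="pt",
    )
    with torch.no_grad():
        token_vecs = model(**batch).last_hidden_state  # (batch, seq, dim)

    # Exclude padding positions from the average via the attention mask.
    mask = batch["attention_mask"].unsqueeze(-1).float()
    emb = (token_vecs * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    emb = torch.nn.functional.normalize(emb, dim=1)  # ready for cosine sim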