| ▲ | analogpixel 4 days ago |
| > I don’t write a daily blog to crank out a post every day. If that was the point, I’d have switched to AI long ago already. I write a daily blog to make sure I remember how to think.
|
| I'm always surprised when people say they use LLMs to do stuff in their Journal/Obsidian/Notion. The whole point of those systems is to make you think better, and then you just offload all of that to a computer. |
|
| ▲ | roughly 4 days ago | parent | next [-] |
| A friend noted that many people seem to be cosplaying their lives, and it’s hard not to see it once it’s pointed out. |
| |
| ▲ | rapidfl 4 days ago | parent | next [-] | | After reading this comment, I feel that leaning into the cosplaying will make me more productive and prolific. Many of the things I don't push through on are ones that seem superficial or a bit fake. That doesn't apply to everyone, but maybe there should be phases where you cosplay hard, then reflect and realign. | | |
| ▲ | albert_e 4 days ago | parent [-] | | I see your point. I am noticing that I am very quick to get excited about a new thing and just as quick to lose the motivation to pursue it to a meaningful level of understanding and mastery. Yesterday I was excited about something I wanted to build a proof-of-concept of and blog about proudly. It might take 2-3 days of intermittent effort, juggled between other things, but god was I excited to see it through. I got a great dopamine hit learning the first 30% of it by the end of the day. Today I wake up wondering what got me so excited yesterday. Now that I know the basics, parts of it even seem obvious; would anyone really be interested in me talking about it? If I had thrown my hat over the fence by cosplaying an active builder and blogger ... maybe I would have seen it through the 3 days of commitment? |
| |
| ▲ | jackyinger 4 days ago | parent | prev | next [-] | | Great point, this cosplay phenomenon goes far beyond LLM use. | | | |
| ▲ | yeahforsureman 4 days ago | parent | prev | next [-] | | Funny, I tend to use larping for similar analogies. Not a huge insight or anything, just crossed my mind... I guess there's also overlap, or at least some kind of similarity with cargo cults, too? :) EDIT: Trying to stay on topic and score some po--, cargo I mean... | | |
| ▲ | trashburger 4 days ago | parent [-] | | Call it larping, being performative, etc., but it's a concept as old as time. People emulate the interface of successful people without actually having the implementation of successful people. |
| |
| ▲ | alansaber 4 days ago | parent | prev [-] | | Most people do, some just more blatantly |
|
|
| ▲ | akoboldfrying 4 days ago | parent | prev | next [-] |
| IIUC, you believe that (a) using a tool like Notion is a useful brain-multiplying lever vs. struggling to keep everything in your head, but (b) using a tool like an LLM is a harmful brain-rotting exercise vs. struggling to do everything with your head. In your opinion, what is the differentiating factor? |
| |
| ▲ | lukeschlather 4 days ago | parent | next [-] | | Using a tool like Notion means organizing your thoughts into a coherent structure so that you can reexamine them. Running that coherent structure through an LLM replaces your structured thoughts with other thoughts you didn't have. You've gone to all this trouble to record your memories so they don't change, and then you run them through a machine that replaces your memories with randomly generated ones. I think AI is a great tool in certain circumstances, but this sounds like one of the clearest examples where it is brain rot. | |
| ▲ | 4 days ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | Kiro 4 days ago | parent | prev | next [-] |
> The whole point of those systems is to make you think better

I'm not using LLMs for my notes, but "think better" has never been a goal for me.
|
| ▲ | wtetzner 4 days ago | parent | prev | next [-] |
And what is the point of AI-generated posts when everyone has access to LLMs themselves? They can also generate whatever text they want.
|
| ▲ | jphorism 4 days ago | parent | prev | next [-] |
| It's possible to do both. I write in a small field journal to think better, then periodically use Wispr Flow to quickly transcribe it to Obsidian (where I can use LLMs on the writing). |
|
| ▲ | kettlecorn 4 days ago | parent | prev [-] |
Sometimes when working through difficult problems I will write pages of notes exploring a topic from a bunch of different angles until my brain is a bit exhausted. I've found LLMs work reasonably well for this: I copy-paste that blob of thoughts in and have them summarize the key points back to me in a more coherent form. |
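The copy-paste workflow described here is easy to script. A minimal sketch, assuming the official OpenAI Python SDK (`pip install openai`) and an `OPENAI_API_KEY` in the environment; the `build_prompt`/`summarize_notes` helpers, prompt wording, and model name are illustrative, not anything the commenter actually uses:

```python
def build_prompt(notes: str) -> str:
    """Wrap a raw blob of notes in a summarization instruction."""
    return (
        "Below is an unstructured blob of working notes. "
        "Summarize the key points and recurring themes in a coherent form.\n\n"
        + notes
    )

def summarize_notes(notes: str, model: str = "gpt-4o-mini") -> str:
    """Send the notes blob to an LLM and return its summary."""
    # Deferred import so build_prompt() works without the SDK installed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(notes)}],
    )
    return resp.choices[0].message.content
```

The interesting part is only the prompt framing; everything else is a plain chat-completions call, so any provider's SDK could be swapped in.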
| |
| ▲ | kaashif 4 days ago | parent [-] | | I find value in turning the unstructured blob of notes into structured and coherent thoughts myself, rather than with an LLM. If I understand something well, I can write something coherent easily. What you describe feels to me like studying for an exam by photocopying a textbook over and over. | | |
| ▲ | kettlecorn 3 days ago | parent | next [-] | | On some topics I write notes that are very explorative and rambling. For example, I have probably 100+ pages of notes on programming language design where I use my notes more as working memory than as a cohesive document. In other cases I'll do competitive market analysis by looking at most products in a category and scrawling down first impressions, strengths, and weaknesses. In some cases, yes, I'll synthesize that myself into something more coherent. In other cases an LLM can offer a summary of certain themes I keep coming back to, or a pseudo-outsider's take on what the core themes being explored are. If something is important to me I'll spend the time to understand it well enough to frame my own coherent argument, but if I'm doing extremely explorative thinking I'm OK with a rapid process that has an LLM in the loop. | |
| ▲ | ragequittah 4 days ago | parent | prev [-] | | Usually studying a textbook means reconceptualizing it in whatever way fits how you learn. For some people that's notes, for some it's flash cards, for some it's reading the textbook twice and they just get it. To imagine LLMs have no use case here seems dishonest. If I don't understand a particularly hard part of the subject matter and the textbook doesn't expand on it enough, I can tell the LLM to break it down further with sources. I know this works because I've been doing it with Google (slowly, very slowly) for decades. Now it's just way more convenient to get to the ideas you want to learn about and expand on them as far as you want to go. | | |
| ▲ | nunez 4 days ago | parent [-] | | My issue with using LLMs for this use case is that they can be wrong, and when they are, I'm doing the research myself anyway. | | |
| ▲ | ragequittah 3 days ago | parent [-] | | The times it's wrong have become vanishingly small, at least for the things I use it for (mostly technical). With ChatGPT on extended thinking, feeding it the docs URL or a PDF or three to start, you'll very rarely get an error, especially when compared to Google / Stack Exchange. |
|
|
|
|