GMoromisato 9 hours ago

I'm here to read what actual humans think. If I wanted to read what an LLM thinks, I could just ask it.

But here's where it gets tricky: Do I prefer low-effort, off-the-top-of-my-head reactions, as long as they are human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)? Or do I value authentic human output because I expect it to be of higher quality?

I confess that it is a little of both. But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.

altairprime 9 hours ago | parent | next [-]

> Do I prefer low-effort, off-the-top-of-my-head reactions, as long as they are human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

This is an artificial dichotomy. HN’s guidelines specify thoughtful, curious discussion as a specific goal. One-off / pithy / sarcastic throwaway comments are generally unwelcome, however popular they are. Insightful responses can be three words, ten seconds to write and submit, and still be absolutely invaluable. Well-thought-out responses are also always appreciated, even if they tend to attract fewer upvotes than a generic rabble-rousing sentiment about DRM or GPL or Apple that’s been copy-pasted to the past hundred posts about that topic. But LLM-enhanced responses are not only unwelcome but now outright prohibited.

Better an HN with fewer words than an HN with more AI-written words. Show HN has already drowned us in quantity, as proof of why.

GMoromisato 7 hours ago | parent [-]

But what if it turns out that human+LLM can produce more "thoughtful, curious discussion" than human alone?

That's the dichotomy: Do we prefer text with the right "provenance" over higher quality text?

[Perhaps you'll say that human+LLM text will never be as high-quality as human alone. But I'm pretty sure we've seen that movie before and we know how it ends.]

That said, you're right that because human+LLM is so much more efficient, we'll be drowning in material--and the average quality might even go down, even if the absolute quantity of high-quality content goes up.

I think, in the long term, we will have to come up with more sophisticated criteria for posting rather than just "must be unenhanced human".

Avicebron 7 hours ago | parent | next [-]

I think "must be unenhanced human" is probably the most sophisticated criteria even if it's simple. I don't think there's much value in trying to optimize the perfect "thoughtful, curious discussion", why would there be, it implies some ideal state for "thoughtful and curious" vs the reality that discussions between living breathing people is interesting by default as long as folks loosely follow some guidelines.

altairprime 7 hours ago | parent | prev | next [-]

> what if it turns out that

HN need not offer itself up as a Petri dish for AI writing experimentation. There are startups in that space, and at least one must be YC-funded, statistically speaking. Come back with the outcomes of the experiment you describe and make a case that they should change the rule. Maybe they will! As of today, though, they are apparently unconvinced.

> the average quality might even go down

We have a recent concrete analysis of Show HN indicating support for this possibility, resulting in the mods banning new users from posting to Show HN (something they’ve probably been resisting for close to twenty years, I imagine, given how frequent a spam vector that must be).

> Perhaps you’ll say that human+LLM text will never be as high-quality as human alone

Please don’t put words in my mouth, insinuating the tone of my reply before I’ve made it, and then use that rhetorical device to introduce a flamebait tangent to discredit me with. I’ve made no claims about future capabilities here, and I’m not going to address this irrelevance further.

> in the long term, we will have to come up with more sophisticated criteria

Our current criteria seem sophisticated already. Perhaps you could make a case that AI-assisted writing helps avoid guideline violations. This one tends to be especially difficult for us all today:

”Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith. Eschew flamebait. Avoid generic tangents.”

GMoromisato 7 hours ago | parent [-]

> Please don’t put words in my mouth, insinuating the tone of my reply before I’ve made it, and then use that rhetorical device to introduce a flamebait tangent to discredit me with. I’ve made no claims about future capabilities here, and I’m not going to address this irrelevance further.

I apologize--the "you" I meant was the person currently reading my post, not the person I was replying to. I was merely trying to answer a common objection that I've heard.

> HN need not offer itself up as a Petri dish for AI writing experimentation.

I'm not sure HN has a choice. I don't think we can prevent posters from experimenting with LLMs to post on HN--even if they adhere to the guidelines. For example, can I ask the LLM to come up with the strongest argument it can and then re-write it in my own words? That seems to be allowed by the guidelines. Would someone even be able to tell that's what I did? [NOTE: I did not do that.]

I think you're arguing that we should not encourage even more use of LLMs on HN. I get that. But I feel that this community is uniquely qualified to search for better solutions.

> Our current criteria seem sophisticated already.

I hope you're right! That implies that you believe the current guidelines are sufficient to keep HN as the place we all love despite the assault from LLMs. I'm skeptical, but I've been wrong plenty of times!

altairprime 6 hours ago | parent [-]

> I don't think we can prevent posters from experimenting with LLMs to post on HN

And yet, she persisted, we will still set guidelines; so that people know they’re unwelcome to do so when they do, so that they can’t argue that they didn’t know, and so that we as a social club can strive towards the standards we argue about and accept from the organizers. The point of guidelines is not that they prevent malicious intent; the point is that they inhibit behaviors that exceed the defined boundaries, however vague or precise those boundaries may be. Prevention of malice is an impossibility in all human social affairs, whether guidelines are defined or not; one must find reasons for rules other than prevention to understand why rules exist at all.

GMoromisato 6 hours ago | parent [-]

> And yet, she persisted, we will still set guidelines

I'm not sure if you're including or excluding me from the "we". If you're excluding me, then I feel our conversation has come to an end.

But if you're including me, then I think the guidelines need to evolve to deal with LLMs. Maybe not right now--maybe the current guidelines are sufficient for the next year or two or three. But I think we as a community are uniquely qualified to design and influence the future of internet social clubs in the face of LLMs.

altairprime 5 hours ago | parent [-]

> I'm not sure if you're including or excluding me from the "we".

“We” here refers to individual human beings that are members of the human social-entity constructs (‘social clubs’) that precipitate naturally out of human groups, both in general to all such groups and in specific to the group under discussion here today, HN participants.

Whether or not you’re a member of “we” HN participants is conditional on whether or not you are honoring the policy of no AI-assisted writing at HN that is in effect as of whenever you saw this post or the new guidelines. I have no judgment to offer you in that regard, and in any case you’re readily able to decide that for yourself. Separately, I’m not engaging with discussion about future policy; perhaps you should start a top-level thread about it, or write a blog post and submit it (after a few days have passed, so it doesn’t get topic-duped and so that passions have cooled somewhat).

davebranton 7 hours ago | parent | prev [-]

It doesn't matter.

The guidelines are perfectly clear, no matter the outcome of your thought experiment. Hacker News wants intelligent conversation between human beings, and that's the beginning and the end of it.

If you want LLM-enhanced conversation then I'm sure you will find places to have that desire met, and then some. Hacker News is not that place, and I pray that it will never become that place. In short, and in answer to "Do we prefer text with the right 'provenance' over higher quality text?":

Yes. Yes, we do.

customguy 5 hours ago | parent | prev | next [-]

> Do I prefer low-effort, off-the-top-of-my-head reactions, as long as they are human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

For me it's the first one every time. If only because LLMs don't learn from responses to them (much less so when the response is to a paste of their output). It's just not communication. From that perspective, the quality of even the most brilliant LLM output is zero, because it's (whatever high value) multiplied by zero.

Even a real person saying something really horrible and too dense to learn from any response at least gives me a signal about what humans exist. An LLM doesn't tell me anything, and if I wanted the reply of an LLM, I would simply feed my own posts into an LLM. A human doing that "for me" is very creepy and, to my sensibilities, boundary-violating. Okay, that may be too strong a word, but it feels gross in a way I can't quite put my finger on, yet reject wholeheartedly.

bittercynic 9 hours ago | parent | prev | next [-]

I like to read human comments because I'd like to know what my fellow humans think. I'd prefer not to read low-effort, throwaway comments, but other than that I want to know what people think about different topics.

GMoromisato 7 hours ago | parent [-]

I read HN both because I want to read what humans think, and because I want to read insightful discussion.

The tension is that as insightful discussion becomes easier/better with LLMs, there is less need to read HN. All I'm left with is provenance: reading because a human wrote it, not because it is uniquely insightful.

alpha_squared 9 hours ago | parent | prev | next [-]

> Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

I'd argue that anything insightful or well-thought-out doesn't use LLMs at all. We can quibble over whether discussions with an LLM lead to insightful responses, but that still isn't your own personal thought. Just type what's on your mind; it's not that hard. Nitpicking over this is just looking for ways to open up unnecessary opportunities for abuse.

rozal 9 hours ago | parent | next [-]

Often I think of a novel idea or solution to a problem, but use AI to communicate or adjust what I already wrote out so it's more comprehensible. Sometimes when I write, it's hard to understand.

davebranton 7 hours ago | parent | next [-]

The more you write, the less this will be true. The more you write, the better you will become at it. Using an LLM to write is like sending a robot to the gym for you.

The more you use an LLM to write for you, the worse you will become at writing yourself. There is simply no other possible outcome. It's even true of spellcheck - the more you use spellcheck, the worse you become at spelling. I know this for a fact because I can no longer spell for shit. However, spelling is to writing as arithmetic is to mathematics. I also can't add up, but I have a degree in pure mathematics.

LLMs are a cancer on human thought and expression.

briantakita 5 hours ago | parent [-]

> LLMs are a cancer on human thought and expression.

LLMs help to express what many people don't have the energy or ability to express. They also have a broader-scoped view of protocol... They don't have emotions, which often lead to less-than-optimal discourse.

In many ways, they help those who are challenged in discourse to better express themselves... rather than staying silent or being misunderstood.

rozal 2 hours ago | parent [-]

[dead]

jamiek88 6 hours ago | parent | prev | next [-]

How do you expect to get better at it then if you avoid the hard work and emotional weight of fixing it?

yellowapple 5 hours ago | parent [-]

So if you want to reply to a comment you read today, and you don't feel like your writing skill is up to snuff, you should be content with expecting to wait the requisite weeks or months or years of practice before even considering replying to it?

This seems especially relevant for non-English-fluent commenters, who are increasingly using LLMs to be able to communicate more effectively on an English-only site like Hacker News than they'd otherwise be able to do.

rukuu001 4 hours ago | parent [-]

I've noticed a considerable drop-off in HN commenters who are unable to deal with the substance of a comment if it contains errors in spelling or grammar, so I don't think this is the issue it used to be.

It's still daunting posting in a second language, and LLMs are an attractive solution to that (depending on your definition of 'solution').

yellowapple an hour ago | parent [-]

Is that an actual drop-off in commenters, or in comments? The latter is readily explainable by “commenters who would previously call out the errors now choose to not engage with those comments/posts at all”.

In any case, I don't think it's a bad thing to want to communicate as clearly as possible, and if an LLM helps you do that, I ain't one to judge. Sure, ideally I'd want to read folks' thoughts without the LLM-induced layer of vaseline smoothing them over, but even that's better than not reading them at all :)

sharken 7 hours ago | parent | prev [-]

In that sense, AI is a tool much like a dictionary: it enhances and, I'd say, improves the end result.

verdverm 6 hours ago | parent [-]

The difference is that I will retain what I drew from the dictionary the next time. If people use AI this way for writing, great! What many of the "enhanced-by-AI" arguments sound like is that this will be an indefinite outsourcing.

Use them to get better, like how reading good writing directly (not summarized) will also make you a much better writer. Learn from the before and after so that next time there isn't a need to reach for AI.

RhodesianHunter 9 hours ago | parent | prev [-]

There are many obvious ways in which this may not be true.

Anyone learning the language and some people with learning disabilities, for example, may communicate better via an LLM.

bonoboTP 9 hours ago | parent | next [-]

There is a sliding scale from that, to it being the LLM that communicates, not the person. LLMs can really reshuffle and change priorities and modify emphasis in a text. All the missing pieces will be filled in and rounded out and sandpapered off by the inner-average-corporate-HR-Redditor of the LLM.

postalcoder 9 hours ago | parent | prev [-]

I promise you, after this past year, you don’t know how happy I am to read issues and PRs in broken English.

jmull 7 hours ago | parent | prev | next [-]

If the goal is to read what actual humans think, it's hard to see how an LLM filter can do anything but obscure and degrade the content.

LLMs, as we know them, express things using the patterns they've been developed to prefer. There's a flattening, genericizing effect built in.

If there are people who find an LLM filter to be an enhancement, they can run everything through their favorite LLM themselves.

GMoromisato 7 hours ago | parent [-]

I think it's a spectrum:

1. I enter "Describe the C++ language" into an LLM and post the response on HN. This is obviously useless--I might as well just talk to an LLM directly.

2. I enter "Why did Stroustrup allow diamond inheritance? What scenario was he trying to solve?" and then I distill the response into my own words so that it's relevant to the specific post. This may or may not be insightful, but it's hardly worse than consulting Google before posting.

3. I spend a week creating a test language with a different trade-off for multiple inheritance. Then I ask an LLM to summarize the unique features of the language into a couple of paragraphs, and I post that to HN. This could be a genuinely novel idea, and the fact that it is summarized by an LLM does not diminish the novelty.

My point is that human+LLM can sometimes be better than human alone, just as human+hammer, human+calculator, human+Wikipedia can be better than human alone. Using a tool doesn't guarantee better results, but claiming that LLMs never help seems silly at this point.

Avicebron 7 hours ago | parent | next [-]

> 3. I spend a week creating a test language with a different trade-off for multiple inheritance. Then I ask an LLM to summarize the unique features of the language into a couple of paragraphs, and I post that to HN

I think where you are getting hung up is on the idea of "better results". We as a community don't need to strive for "better results"; we can easily say: hey, we just want HN to be between people. If you have the LLM generate this hypothetical test, just tell people about it in your own words. Maybe forcing yourself to go through that exercise is better in the long run for your own understanding.

GMoromisato 6 hours ago | parent | next [-]

My example was not great.

But my point is that I read HN partly because people here are insightful in a way I can't get in other places. If LLMs turn out to ultimately be just as insightful, then my incentive to read HN is reduced to just, "read what other people like me are thinking." That's not nothing, but I can get that by just talking with my friends.

Unless, of course, we could get human+LLM insightfulness in HN and then I'd get the best of both worlds.

xenophonf 6 hours ago | parent | prev [-]

If someone can't explain something in their own words, then they don't _really_ understand it. The process of taking time to think through a topic and check one's understanding, even if only for oneself and the rubber duck, will reveal mistakes or points of confusion.

Avicebron 6 hours ago | parent [-]

Which gets to the core of the issue nicely: I want to go on HN and talk to people who know things, or have thought about things, to the degree that they don't need a cheat sheet off to the side to discuss them.

jmull 6 hours ago | parent | prev [-]

How is it not better, in your third scenario, if you described what you think are the important and interesting aspects of your idea/demo?

And what motivated you to make it -- probably the most interesting thing to readers, and not something an LLM would know.

Believe me, I don't care what an LLM has to say about your thing. I care about what you have to say about your thing.

caconym_ 9 hours ago | parent | prev | next [-]

What is the value of this "output"? If I want to know what LLMs think about something, I can go ask an LLM any question I want. For a comment on [a site like] HN, either the substantive content of the comment originated inside a human mind, or there is no substantive content that I couldn't reproduce by feeding the comment's context into an LLM. At the extreme, I don't have any interest in reading or participating in a conversation between a bunch of LLMs.

neutronicus 9 hours ago | parent [-]

They’re referencing LLM-enhanced output.

The value proposition is that someone who is a lousy writer (perhaps only in English) with deep domain knowledge is going back and forth with the LLM to express some insight or communicate some information that the LLM would not produce on its own.

caconym_ 8 hours ago | parent | next [-]

> perhaps only in English

Wouldn't it work better to just write the thing in whatever language they can actually write in and then do a straightforward translation in a single pass?

> someone who is a lousy writer with deep domain knowledge going back and forth with the LLM to express some insight or communicate some information that the LLM would not produce on its own

This sounds reasonable on its face, but how often does it actually come up that somebody can't clearly express an idea in writing on their own but can somehow get an LLM to clearly express it by writing a series of prompts to the LLM?

And, if it does come up, why don't they just have that conversation with me, instead?

zajio1am 6 hours ago | parent [-]

> Wouldn't it work better to just write the thing in whatever language they can actually write in and then do a straightforward translation in a single pass?

Nontrivial translation tools are AI (neural net) based tools (although not necessarily LLMs). The whole transformer neural-net architecture was originally designed for translation.

caconym_ 5 hours ago | parent [-]

I don't have a problem with people using these tools to translate their writing into languages they aren't fluent/literate in. It's a completely different dynamic vs. having them write for you.

GMoromisato 6 hours ago | parent | prev [-]

Exactly!

Just as Google-enhanced output and Wikipedia-enhanced output have helped my writing/thinking, I believe LLM-enhanced output also helps me.

Plus, I personally gain more benefit from using an LLM as a researcher than as a writer.

caconym_ 6 hours ago | parent [-]

Using LLMs for research is completely different from using them to write for you. And if you're using them to write about the results of research, you're almost certainly getting a lot less value out of the whole exercise.

abtinf 8 hours ago | parent | prev | next [-]

By this logic, you might consider vibe coding a browser plugin that takes any HN comment less than 50 words and auto-expands it into an “insightful, well thought-out response.”
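
A minimal sketch of what such a plugin's userscript might look like, purely for illustration (the llmExpand helper and its example.com endpoint are invented; only the .commtext selector is real HN markup):

    // Hypothetical userscript sketch: auto-"enhance" short HN comments.
    // llmExpand() and its endpoint are made up for illustration.
    async function llmExpand(text: string): Promise<string> {
      const resp = await fetch("https://example.com/expand", { // hypothetical API
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          prompt: "Expand into an insightful, well-thought-out response: " + text,
        }),
      });
      return (await resp.json()).text;
    }

    // Find every comment under 50 words and "enhance" it in place.
    document.querySelectorAll<HTMLElement>(".commtext").forEach(async (el) => {
      const words = el.innerText.trim().split(/\s+/).length;
      if (words < 50) {
        el.innerText = await llmExpand(el.innerText);
      }
    });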

telotortium 6 hours ago | parent | next [-]

Delivered: https://github.com/telotortium/dotfiles/tree/27c11efd967eebc...

zahlman 8 hours ago | parent | prev [-]

Length is not insight. I understand this to be a community oriented towards people who are not impressed by such superficial things.

_se 8 hours ago | parent [-]

That's the point :)

munificent 4 hours ago | parent | prev | next [-]

> But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.

If your definition of "superior" includes some amount of "provides a meaningful connection to another living being", then LLM output will rarely be superior even when it's factually and grammatically correct.

kelnos 7 hours ago | parent | prev | next [-]

> Do I prefer low-effort, off-the-top-of-my-head reactions, as long as they are human? Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

Neither. I want insightful, well-thought-out, human comments.

It's a little sad that this might be too much to ask sometimes...

jedahan 9 hours ago | parent | prev | next [-]

I prefer low-effort human thought to low-effort LLM output.

gkfasdfasdf 8 hours ago | parent | prev | next [-]

> But here's where it gets tricky

Pretty sure this comment is AI

GMoromisato 6 hours ago | parent [-]

Now I know how the Salem witches felt. How can I prove that it's not AI?

yellowapple 5 hours ago | parent [-]

You can't. Nobody can. False positives are the inherent danger of these sorts of policies — especially when the LLMs were trained on the exact writing styles that have dominated online conversations and publications for decades.

amarble 8 hours ago | parent | prev | next [-]

The point of a discussion site is to hear what other people think and get different perspectives. Just getting an LLM's insightful, well-thought-out response isn't really a big draw; if one is looking for that, there's a pretty obvious way to get it. I posted this the other day (ignore the title; I realized later it's too clickbaity), but this is why, IMO, LLMs won't replace the workforce: people aren't looking for answers to things, they're looking for other people's takes. https://news.ycombinator.com/item?id=47299988

Ensorceled 8 hours ago | parent | prev | next [-]

> If I wanted to read what an LLM thinks, I could just ask it.

and

> Or do I want an insightful, well-thought-out response, even if it is LLM-enhanced?

What is the difference? What's the line between these two?

The prompt "Analyze <opinion> and respond" is pretty clearly "I would just ask it," and the prompt "here's my comment, please ONLY check the grammar and spelling" would probably be OK.

What about the prompt "I disagree with using LLMs for commenting at all, for <reasons>. Please expound on this and provide references and examples"? That would explode the word count for this site.

GMoromisato 6 hours ago | parent [-]

What about:

1. "Here is my answer to a comment. Give me the strongest argument against it."

2. "I think xyz. What are some arguments for and against that I may not have thought of."

3. "Is it defensible for me to say that xyz happened because of abc?"

All of these would help me to think through an issue. Is there a difference between asking a friend the above vs. an LLM? Do we care about provenance or do we care about quality?

verdverm 6 hours ago | parent [-]

The difference is in the journey to find the answer, rather than outsourcing it to man or machine. Spending more time reflecting before your first post will often answer the easy questions, so you can formulate more thoughtful ones.

js8 3 hours ago | parent | prev | next [-]

I agree there is a dichotomy. I personally think AIs are better debaters than humans, at the very least in their ability to make fewer logical mistakes and draw on wider knowledge. I would suggest everyone run their thoughts through an AI to get a constructive critique; it would certainly reduce a lot of wasted time.

And I find the decision to "ban" AI slightly ironic, when HN has a disdain (unlike its predecessor Slashdot) for funny or sarcastic comments, which require the reader to think more, rather than having a clear argument handed over on a silver platter. I mean, that is what truly human communication is like - deliberately not always crystal clear.

I suspect that HN will eventually be replaced by an AI-moderated site, because it will have more quality content.

GMoromisato 2 hours ago | parent [-]

There are huge advantages to AI-moderation. TBD what the unintended consequences are. But I think it's worth trying.

I believe banning AI is a temporary solution. Even today it is very hard to tell human from AI; in the future it will be impossible. We are in the Philip K. Dick future of "Do Androids Dream" (the book, not the movie). Does it matter if we can't tell human from AI? The book proposes that how we feel about the piece we're reading is the only thing that matters. How the piece got created is irrelevant.

bonoboTP 9 hours ago | parent | prev | next [-]

Humans have more variability and "edge". If a person is passionately arguing for some point of view (perhaps somewhat outside the usual), it signals to me that they probably thought about it, and that it is a distillation of a long thought process and real-life experience. One could say that the logical argument should stand alone, but reality doesn't work that way. There are many things you have to implicitly trust and believe when you read. Of course, lying and bullshitting already existed before ("nobody knows you're a dog", etc.). But LLMs will really eloquently defend X, not-X, X*0.5, and anything in between. There is no information content in it; it doesn't refer to an actual human life experience and opinion that someone wants to stand behind. It just means that someone made the LLM output a thing.

unsui 8 hours ago | parent | prev | next [-]

Gonna put out a blanket assertion about my preferences, to get a read on whether these are shared or not:

As humans, we have directives (genetic, cultural, societal, etc.) to prioritize humanistic endeavors (and output) above all else.

History has shown that humans are overwhelmingly chauvinistic in regards to their relationship to other animals in the animal kingdom, even to the point of structuring our moral/ethical/legal systems to prioritize human wellbeing over that of other animals (however correct/ethical that may ultimately be, e.g., given recent findings in animal cognition, such as recent attempts to outlaw boiling lobsters alive as per culinary tradition).

But it seems that some parties/actors are willing to subvert this long-standing convention (of prioritizing human interests) in the face of AI, because they benefit from doing so (even to the point of the now-farcical quote by Sam Altman that humans take far more nurturing than LLMs...)

So: should we be neglecting our historical and genetic directives, to instead prioritize AI over human interests? Or should we be unashamedly anthropic (pun intended), even at the cost of creating arbitrary barriers (i.e., the equivalent of guilds) intended to protect human interests over those of AI actors?

I strongly recommend the latter, particularly if the disruptions to human-centric conventions/culture/output are indeed as significant (and catastrophic) as they will likely be if unchecked.

paganel 7 hours ago | parent | prev | next [-]

> well-thought-out response, even if it is LLM-enhanced?

There's no insight nor well-thought-out response once a person decides to "LLM-enhance" their response. The only insight is that the person using the LLM is too limited to have a decent conversation with.

verdverm 6 hours ago | parent | prev | next [-]

> But it wouldn't surprise me if someday LLM-enhanced output becomes sufficiently superior to average human output that the choice to stick with authentic human output will be more painful.

My ideal vision is that instead of outsourcing indefinitely, we learn from the enhanced versions and become better independent writers.

relaxing 9 hours ago | parent | prev | next [-]

If you like reading LLM output, just talk directly to an LLM. Problem solved.

TacticalCoder 9 hours ago | parent | prev | next [-]

> Am I here to read authentic humans because I value authenticity for its own sake (like preferring Champagne instead of sparkling wine)?

Mate, Champagne is a sparkling wine. In French you can even at times hear people asking for "un vin mousseux de Champagne" meaning "a sparkling wine from Champagne" instead of the short form (just saying "un Champagne" or "du Champagne").

Now, granted, not all sparkling wines are Champagne.

The Wikipedia entry begins with: "Champagne is a sparkling wine originated and produced in the Champagne wine region of France...".

I've drunk enough of it to state my case, of which I'm certain!

P.S.: and btw, yup, authentic human content only here, even if it's of "low quality". If I want LLM, I've got my LLMs.

sireat 8 hours ago | parent [-]

Basically, you have Crémant-type sparkling wines, which are produced in regions of France other than Champagne. They are just like Champagne; it's just that other French regions like the Loire, Alsace, Bordeaux, etc. are not allowed to call them Champagne.

So, just as Armagnacs are like Cognacs at a lower price, a good Crémant will be cheaper and more enjoyable than a cheap Champagne (I've not had any really expensive Champagne).

Then you have Cava from Spain, which uses a similar process to Crémants and Champagne. The difference is in the type of grapes used. A friend of mine swears by Cavas just like I swear by Crémants from the Loire region. However, my wife hates Cava.

Then Proseccos from Italy are again similar, but quality varies more.

After that we get into more questionable, cheaper sparkling wines, which usually means some sort of out-of-bottle injection of CO2, and even worse versions include other modifications such as added sugar.

In general, to avoid literal headaches you want bruts. Anything semi-sweet or sweet is suspicious.

Again, I am not a full wine expert, but this is mostly years of, ahem, experience.

browningstreet 8 hours ago | parent | prev [-]

I keep wishing for a public place to put a formatted version of my LLM threads. I have long conversations with LLMs that usually result in some kind of documentation, tutorial, or dataset. Many of them are relatively novel, but I haven't created a place for them yet.

And no, I wouldn't think an HN post is it either. I'm just saying there should be a good place to post the output of good questions asked iteratively.

vova_hn2 8 hours ago | parent | next [-]

Have you ever read someone else's conversation with an LLM?

abustamam 8 hours ago | parent | next [-]

Not the OP, but I barely even read my own conversations with an LLM. ChatGPT was always so verbose, even when I told it to be succinct.

Claude is a bit better but still prone to rambling.

browningstreet 8 hours ago | parent | prev [-]

I hinted at "formatted" and "good". Add the words "curated" or "edited".

vova_hn2 3 hours ago | parent | next [-]

Well, you haven't really answered the question.

I think that if you actually try reading someone else's conversation with LLM, you'll find out that it's less exciting than it seems.

For the one having the conversation, the excitement comes mostly from the ability to steer it the way you want. The reader doesn't have this ability, so they are just forced to endure the excessive wordiness that is so typical of most LLMs.

If you learned something interesting, then why not express this knowledge in a normal article/blog post? What advantage does a conversation between you and an LLM have over normal text or, perhaps, text with pictures, diagrams, maybe some interactive illustrations, etc.?

jamiek88 6 hours ago | parent | prev [-]

Make a blog? Hardly a hard problem there, mate.

If you can’t even be arsed doing that, how much value is there, really?

Personally, the only thing less interesting to me than someone else’s conversations with an LLM is hearing about someone else’s dream from last night, but you never know; some people may be interested.

browningstreet 6 hours ago | parent [-]

Thanks for slagging.

But I was thinking less a blog and more an LLM research notebook, à la Jupyter: Jupyter for LLM prompts, outputs, refinements.

jamiek88 6 hours ago | parent [-]

No slagging meant, sorry. Reading it back, it does seem a bit like that; you are right.

verdverm 6 hours ago | parent | prev [-]

Simon Willison published something for turning Claude convos into something publishable. [1] I haven't tried it, so I cannot speak to the ergonomics.

Where to post it? Any blog site; probably a good few Show HNs too. Will anyone read it? I haven't read anyone else's; I'm more inclined to dock someone reputation for suggesting I read their AI session. Snippets of weird things shared on socials were interesting to me early on, but I'm over that now too.

[1] https://simonwillison.net/2025/Dec/25/claude-code-transcript...