surgical_fire 3 days ago

Reading what you write for editing does not make a text lose your voice. If anything, it amplifies it: you get to ensure that what you intended to say was actually said.

Not reading what you write smells more like laziness.

Same thing for spell checks, grammar checks, and even AI usage. If you use things lazily, the result will be lazy as well.

Instead of asking an AI tool to write your thoughts in your place, you can write them yourself and ask it to criticize your text: instruct it not to rewrite anything, only to give you an overall picture of the text's clarity, sentiment, etc.

But that of course would require more work. Asking ChatGPT to produce a text based on a lazily written bullet-point list of brainfarts is probably easier.

watwut 3 days ago | parent | next [-]

> instruct it to not rewrite anything, only give you an overall picture of text clarity, sentiment, etc.

LLMs can't really do that. They can help you produce a correct sentence where you struggle to create your own, but they don't have the capability to do what you suggest.

jmmv 3 days ago | parent | next [-]

It sounds like you haven't tried.

LLMs definitely can do this. The output tends to be overly positive though, claiming that any sort of rough draft you give them is "great, almost ready for publishing!". But the feedback you can get on clarity, narrative flow, weak spots... _is_ usually pretty good.

Now, following that feedback to the letter is going to end up with a diluted message and boring voice, so it's up to you to do with the feedback whatever you think best.

surgical_fire 3 days ago | parent [-]

Btw, this is precisely what I implied.

I never ask the LLM to evaluate my text in terms of being good or bad. Instead I try something like this:

"In this section I tried to explain X, I intended to sound in Y and Z fashion, and I want a reader to come out with at least W impression. Is the text achieving these goals? Do I communicate my ideas clearly and concisely, or are they too confused and meandering?"

I typically get useful feedback. I specifically preface the request by asking it not to rewrite anything, just to point out the bits it finds faulty and explain why.
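A minimal sketch of how that kind of critique-only prompt could be assembled programmatically, assuming the common chat-completion message format. The role names, wording, and the helper itself are only illustrative, not any particular vendor's API:

```python
def build_critique_prompt(text: str, goal: str, tone: str, impression: str) -> list[dict]:
    """Build a chat message list that asks for feedback without rewrites.

    All parameter names here are hypothetical; the point is the shape of
    the instruction: state your goals, then forbid rewriting.
    """
    instructions = (
        f"In this text I tried to explain {goal}. "
        f"I intended to sound {tone}, and I want a reader to come away "
        f"with at least this impression: {impression}. "
        "Is the text achieving these goals? Do I communicate my ideas "
        "clearly and concisely, or are they confused and meandering? "
        "Do NOT rewrite anything; only point out the parts you find "
        "faulty and explain why."
    )
    return [
        {"role": "system", "content": "You are a critical but honest editor."},
        {"role": "user", "content": instructions + "\n\n---\n\n" + text},
    ]
```

The returned list can then be passed to whatever chat-completion endpoint you use; the key design choice is that the "do not rewrite" constraint lives in the prompt itself, so the model returns critique rather than a replacement draft.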

Of course, the prompt is different if I am writing, for example, technical documentation, or if it is an attempt at creative writing.

surgical_fire 3 days ago | parent | prev [-]

What? LLMs are very capable of doing sentiment analysis. Hell, it's basically one of the things they actually excel at - understanding tone, nuance, context, etc.

I used it many times for exactly this, with good results. It points out ambiguous constructs, parts that are dissonant from the tone I intend, etc.

I have no idea why you think that LLMs can't do that lol

watwut 3 days ago | parent [-]

Sentiment analysis for the purpose of categorizing reddit comments, sure. For the purpose of giving you advice about nuance, overall clarity, and tone of your own long text, no.

surgical_fire 3 days ago | parent [-]

I tried it myself, and it did actually a good job.

There's nothing magical about a long text you write yourself vs a stream of reddit comments in a thread. It's all sentiment analysis on text. It can extract ambiguity, see how ideas are connected in context, categorize and summarize, etc.

You should try it and see it for yourself. Feed it some large text of a single author and ask it to do those things, see if the results are satisfactory.

jychang 3 days ago | parent | prev | next [-]

Great! That's a good thing. Embrace being human sometimes.

Plus, "lazy" would actually be just using AI to edit the writing.

fallinditch 3 days ago | parent | prev [-]

[dead]