koliber 13 hours ago

I sometimes get contracts, NDAs, or terms and conditions that I would normally accept automatically because they are low stakes and I don't have time to read them. At best, I would skim them.

Now I pass them through an LLM and ask it to point out interesting, unconventional, or surprising things, and to summarize the document in a few bullet points. It's quite good at this, and I can use what I discover later in my relationship with the counterparty in various ways.

I also use it to "summarize" a large log output and point out the interesting bits that are relevant to my inquiry.
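The prompt pattern behind both of these use cases can be sketched in a few lines. This is a minimal illustration; the function name and prompt wording are my own, not any particular product's:

```python
def build_review_prompt(doc_text: str) -> str:
    """Build a prompt asking an LLM to flag surprising points and
    summarize a document. The wording is illustrative, not canonical."""
    return (
        "Review the document below. Point out anything interesting, "
        "unconventional, or surprising, then summarize it in a few "
        "bullet points.\n\n---\n" + doc_text
    )

# The returned string would then be sent to whatever chat model you use,
# with the contract text or raw log output pasted in as doc_text.
prompt = build_review_prompt("Example NDA or log text...")
```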

Another use case is meeting notes. I use fireflies.ai for some of my meetings and the summaries are decent.

I guess summarization might not be the right word for all these cases, but it comes down to going through the haystack to find the needle.

gregates 12 hours ago | parent | next [-]

Do you go through the haystack yourself first, find the needle, and then use that to validate your hypothesis that the AI is good at accomplishing that task (because it usually finds the same needle)? If not, how do you know they're good at the task?

My own experience using LLMs is that we frequently disagree about which points are crucial and which can be omitted from a summary.

koliber 12 hours ago | parent [-]

It depends on how much time I have, and how important the task is. I've been surprised and I've been disappointed.

One particular time I was wrestling with a CI/CD issue. I could not for the life of me figure it out. The logs were cryptic and there were a lot of them. In desperation, I pasted the 10 or so pages of raw logs into ChatGPT and asked it whether it could spot the problem. It gave me three potential things to look at, and the first one was it.

By directing my attention it saved me a lot of time.

At the same time, I've seen it fail. I recently pasted about 10 meetings worth of conversation notes and asked it to summarize what one person said. It came back with garbage, mixed a bunch of things up, and in general did not come up with anything useful.

In some middle-of-the-road cases, what you said mirrors my experience: we disagree about what is notable and what is not. Still, this is a net positive. I take the stuff it gives me, discard the things I disagree with, and at least I have a partial summary. I generally check everything it spits out against the original and ask it to cite the original sources, so I don't end up with hallucinated facts. It takes less time than writing up a summary myself, and it's the kind of work I find more enjoyable than typing summaries.
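That check-against-the-original step can be partly automated: if you ask the model to return a verbatim quote alongside each summary point, a simple substring check flags quotes that don't actually appear in the source. A sketch, with hypothetical names and the simplifying assumption that normalizing whitespace is enough:

```python
def verify_quotes(source: str, quotes: list[str]) -> list[str]:
    """Return the quotes that do NOT appear verbatim in the source,
    i.e. likely hallucinations. Whitespace is normalized so line
    wrapping doesn't cause false positives."""
    norm = " ".join(source.split())
    return [q for q in quotes if " ".join(q.split()) not in norm]

source = "Either party may terminate\nwith 30 days notice."
quotes = ["may terminate with 30 days notice",   # present in source
          "a 90 day exclusivity period"]          # not in source
suspect = verify_quotes(source, quotes)  # flags only the second quote
```

Anything the check flags still needs a human look (the model may have paraphrased honestly), but nothing it passes is an invented quote.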

Still, the hit-to-miss ratio is good enough and the time savings on the hits are impressive, so I continue to use it in various situations where I need a summary or need it to direct my attention to something.

gregates 9 hours ago | parent [-]

I really don't see how it can save you time if you have to summarize the same source for yourself every time in order to learn whether the AI did a good job in this particular case.

notatoad 12 hours ago | parent | prev [-]

for your first one, if you're just feeding docs into a chatbot prompt and asking for a summary, i think that matches what the article would call a "chatbot product" rather than a summarization product.

fireflies.ai is interesting though, that's more what i was looking for. i've used the meeting summary tool in google meet before and it was hilariously bad, it's good to hear that there are some companies out there having success with this product type.

koliber 11 hours ago | parent [-]

I guess you’re right re chatbot for summaries. I was thinking about the use case and not the whole integrated product experience.

For example, for code gen I use agents like Claude Code, one-shot interfaces like Codex tasks, and chatbots like the generic ChatGPT. It depends on the task at hand, how much time I have, whether I am on the phone or on a laptop, and my mood. It’s all code gen though.