wilg 4 days ago
First, you don't have to spend the 60 seconds yourself, and you can parallelize it with something else, so you effectively get the answer instantly. Second, you're essentially establishing that if an LLM can get it done in less than 60 seconds, it's better than your manual approach, which is a huge win, because this will only get faster!
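To make the parallelization point concrete: you fire off the slow query and keep working while the model thinks, instead of blocking on it. A minimal sketch in Python, where `ask_llm` is a hypothetical stand-in for whatever API you'd actually call (its latency simulated here with a sleep):

    import asyncio

    async def ask_llm(question: str) -> str:
        # Hypothetical stand-in for a real LLM API call; the sleep
        # simulates the minutes of model latency.
        await asyncio.sleep(2.0)
        return f"answer to: {question}"

    async def other_work() -> None:
        # Whatever else you'd be doing while the model thinks.
        await asyncio.sleep(1.0)
        print("finished the other task")

    async def main() -> None:
        # Start the slow query, do other work, then collect the answer.
        pending = asyncio.create_task(ask_llm("some research question"))
        await other_work()
        print(await pending)

    asyncio.run(main())

The model's wait time only costs you if you sit idle during it; overlap it with anything useful and the effective latency shrinks toward zero.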
sigmoid10 4 days ago
For real. This is what it must have been like living in the early 20th century and hearing people say they prefer a horse to get groceries because it is so much more effort to crank-start a car. I look forward to the age when we gleefully reminisce about the time we had to deal with SEO spam manually.
lambda 4 days ago
There's no useful parallelization that could happen during this particular search. It took a couple of iterations of research via ChatGPT, then reading the results and checking the referenced sources; the total interaction time with ChatGPT was a similar 60 seconds or so. The main difference is the 3 minutes of waiting for it to generate answers versus maybe a couple of seconds for the searches.