chasd00 2 days ago

I would be curious to see what would happen if you wrote every query/response from an LLM to an HTML file and then served that directory of files back to Google with a simple webserver for indexing.

deadbabe 2 days ago | parent

I think the future will be:

1. Someone prompts.

2. Server searches for equivalent prompts; if something similar was asked before, return that response from the cache.

3. If the prompt is unique enough, return a response from the LLM and cache the new response.

4. If the user decides the response isn't specific enough, ask the LLM again and cache that.
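The steps above amount to a semantic cache in front of the LLM. A minimal sketch of that flow (the names `SemanticCache`, `lookup`, and `store` are hypothetical, and a toy word-count embedding stands in for a real sentence-embedding model):

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy embedding: a word-count vector. A real system would use a
    # sentence-embedding model to catch paraphrased prompts.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class SemanticCache:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries: list[tuple[Counter, str]] = []  # (embedding, response)

    def lookup(self, prompt: str):
        # Step 2: return a cached response if a similar prompt was seen.
        vec = embed(prompt)
        best = max(self.entries, key=lambda e: cosine(vec, e[0]), default=None)
        if best is not None and cosine(vec, best[0]) >= self.threshold:
            return best[1]
        return None

    def store(self, prompt: str, response: str) -> None:
        # Step 3: cache the fresh LLM response for a unique prompt.
        self.entries.append((embed(prompt), response))


def answer(cache: SemanticCache, prompt: str, llm) -> str:
    cached = cache.lookup(prompt)
    if cached is not None:
        return cached
    response = llm(prompt)
    cache.store(prompt, response)
    return response
```

Step 4 would just bypass `lookup` and call `llm` directly, then `store` the more specific response alongside the old one.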