deadbabe | 2 days ago
To combat this, maybe we could cache AI responses for common prompts and build a website where people search by keyword to find responses related to what they want, so they don’t have to spend tokens on an AI. It could be free.
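A minimal sketch of that kind of keyword lookup over a cache of prompt/response pairs (the cache contents and function name here are made up for illustration, not from any real service):

```python
from typing import Dict, List


def search_cache(cache: Dict[str, str], keywords: List[str]) -> List[str]:
    """Return cached prompts whose text contains every keyword (case-insensitive)."""
    hits = []
    for prompt in cache:
        text = prompt.lower()
        if all(k.lower() in text for k in keywords):
            hits.append(prompt)
    return hits


# Hypothetical cache: prompt -> previously generated response.
cache = {
    "how do I reverse a list in python": "Use list.reverse() or slicing: lst[::-1].",
    "what is a goroutine": "A lightweight thread managed by the Go runtime.",
}

print(search_cache(cache, ["python", "list"]))
# prints ['how do I reverse a list in python']
```

A real version would need ranking and fuzzier matching, but the core idea is just a lookup that runs before any tokens are spent.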
chasd00 | 2 days ago | parent
I would be curious to see what would happen if you wrote every query/response from an LLM to an HTML file and then served that directory of files back to Google for indexing with a simple webserver.
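That could be sketched roughly like this (the filenames, directory name, and page layout are my own assumptions, not anything from a real deployment):

```python
import hashlib
import html
from pathlib import Path


def save_exchange(prompt: str, response: str, out_dir: str = "llm_pages") -> Path:
    """Write one prompt/response pair as a static HTML page.

    The filename is a hash of the prompt, so repeating a prompt
    overwrites the same page instead of piling up duplicates.
    """
    Path(out_dir).mkdir(exist_ok=True)
    slug = hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:16]
    page = Path(out_dir) / f"{slug}.html"
    page.write_text(
        "<!doctype html><html><head>"
        f"<title>{html.escape(prompt[:80])}</title></head><body>"
        f"<h1>{html.escape(prompt)}</h1>"
        f"<pre>{html.escape(response)}</pre>"
        "</body></html>",
        encoding="utf-8",
    )
    return page


# The directory could then be served to crawlers with something as simple as:
#   python -m http.server --directory llm_pages 8000
```

Whether Google would actually rank pure LLM output is another question, but mechanically the "serve it back for indexing" part is this trivial.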