2001zhaozhao 7 hours ago
Maybe the best "index" will just be markdown files fed into a tiny LLM. Is anyone using small, low-latency LLMs to implement things like search as a RAG alternative? This could be the perfect use case for that Llama3 8B ASIC some company showed off a few months ago.
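A minimal sketch of the idea, under loose assumptions: instead of embedding-based retrieval, pack the markdown files directly into the prompt of a small local model and let it answer the query. `run_llm` is a hypothetical stub standing in for whatever small, low-latency model you actually run; the character budget and prompt wording are illustrative, not a real implementation.

```python
from pathlib import Path


def build_search_prompt(docs_dir: str, query: str, max_chars: int = 16000) -> str:
    """Pack markdown files into one prompt, truncated to a character budget."""
    parts = []
    for md in sorted(Path(docs_dir).glob("*.md")):
        # Prefix each file with its name so the model can cite a source.
        parts.append(f"## {md.name}\n{md.read_text()}")
    context = "\n\n".join(parts)[:max_chars]
    return (
        "You are a search engine over the documents below.\n\n"
        f"{context}\n\n"
        f"Question: {query}\n"
        "Answer with the most relevant file name and a one-line summary."
    )


def run_llm(prompt: str) -> str:
    # Hypothetical stub: swap in a call to your local small model here.
    raise NotImplementedError
```

This only works while the corpus fits in the model's context window; past that point you'd need chunk selection again, which is most of what RAG does anyway.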