danialasif 6 days ago
That's great feedback, and agreed: LLM output tends to be generic, especially when the model is missing the right context. To combat this, we're actively experimenting with which data we can pull, how to give it to an LLM cleanly, and which models improve inference (while staying within compliance boundaries). We've found the "chat" functionality especially useful for advisors: it surfaces insights in one clean output without them having to log into many different systems, as you pointed out.
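
To make the context-assembly idea concrete, here is a minimal sketch. The data sources (fetch_holdings, fetch_crm_notes) and the prompt shape are hypothetical stand-ins for illustration, not our actual pipeline:

    from textwrap import dedent

    # Hypothetical stubs standing in for the systems an advisor would
    # otherwise log into separately (e.g. custodian, CRM).
    def fetch_holdings(client_id: str) -> dict:
        return {"AAPL": 120, "VTI": 300}

    def fetch_crm_notes(client_id: str) -> list[str]:
        return ["2024-05-01: asked about 529 plans"]

    def build_prompt(client_id: str, question: str) -> str:
        """Pull data from each system and assemble it into one clean
        context block, so the model answers from specifics rather
        than generic priors."""
        holdings = fetch_holdings(client_id)
        notes = fetch_crm_notes(client_id)
        return dedent(f"""\
            You are an assistant for a financial advisor.
            Client holdings: {holdings}
            Recent CRM notes: {notes}
            Question: {question}
            Answer only from the context above; say so if it is insufficient.""")

    if __name__ == "__main__":
        # The assembled prompt would go to whichever model clears the
        # compliance review; here we just print it.
        print(build_prompt("client-42", "How concentrated is this portfolio?"))

The point of the sketch is the shape, not the details: all the per-client data lands in one prompt, and the instruction at the end nudges the model to stay grounded in it.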