I’m no expert, but it seems like self-updating weights would require a grounded understanding of the underlying subject matter, and that seems like a problem for current LLM systems.