megaman821 6 days ago:
The finetune would be an LLM where you say something like "my colors on the screen look too dark" and it points you to Displays -> Brightness. A relatively constrained problem like finding the system setting that solves your issue feels like a good fit for a tiny LLM.
canyon289 6 days ago:
This would be a great experiment. I'm not sure how the OS integration would work, but as a first pass you could finetune the model to take natural language like "my colors on the screen look too dark" and output "Displays -> Brightness", then expand to the other paths you would like the model to understand.
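
A minimal sketch of what that training data could look like, assuming a simple prompt/completion JSONL format; the example questions and settings paths here are made up, and the real set would come from the OS's settings tree:

    import json

    # Hypothetical question -> settings-path pairs; replace with pairs
    # drawn from the actual settings hierarchy you want the model to cover.
    EXAMPLES = [
        ("my colors on the screen look too dark", "Displays -> Brightness"),
        ("the text everywhere is tiny", "Accessibility -> Text Size"),
        ("my laptop dies too fast on battery", "Battery -> Power Mode"),
    ]

    def to_record(question: str, path: str) -> dict:
        # Instruction-style format: the model learns to emit only the path.
        return {
            "prompt": f"User: {question}\nSetting:",
            "completion": f" {path}",
        }

    with open("settings_finetune.jsonl", "w") as f:
        for question, path in EXAMPLES:
            f.write(json.dumps(to_record(question, path)) + "\n")

Keeping the output constrained to a single settings path is part of what makes this tractable for a tiny model: there is a small, closed set of valid completions to learn.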
gunalx 6 days ago:
Maybe use a larger model to generate synthetic question-path combos, and also to rephrase them into similar questions for a more varied training set.
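
A rough sketch of that synthetic-data step, assuming you have some larger model reachable behind a function you can call; the ask_large_model stub below is a placeholder to be wired to whatever API or local model you use, not a real library call, and the JSONL format matches the sketch above:

    import json

    def ask_large_model(prompt: str) -> str:
        # Placeholder: plug in your larger-model call here. Assumed to
        # return plain text with one paraphrase per line.
        raise NotImplementedError("wire this to your model of choice")

    # Seed pairs written by hand; the larger model multiplies the phrasings.
    SEED_PAIRS = [
        ("my colors on the screen look too dark", "Displays -> Brightness"),
    ]

    def expand(question: str, path: str, n: int = 5) -> list[dict]:
        prompt = (
            f"Rewrite the following question {n} different ways, "
            f"keeping the same intent, one per line:\n{question}"
        )
        lines = ask_large_model(prompt).splitlines()
        paraphrases = [line.strip() for line in lines if line.strip()]
        return [{"prompt": p, "completion": path} for p in paraphrases]

    if __name__ == "__main__":
        dataset = []
        for q, p in SEED_PAIRS:
            dataset.extend(expand(q, p))
        with open("synthetic_settings.jsonl", "w") as f:
            for record in dataset:
                f.write(json.dumps(record) + "\n")
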
|
|
|
hadlock 6 days ago:
It seems to fall into repeating itself pretty quickly on any task of real complexity.