▲ zoho_seni 8 hours ago
You could easily guard against bullshit issues, so you can focus on what matters. If the issue is legit, it goes on to a human reviewer. If it's a run-of-the-mill low-quality AI or irrelevant issue, just close it. Or even nicer: for false positives, let the person who opened the issue "argue" with the AI to further explain that it's a legit issue.
▲ nchmy 8 hours ago | parent
How is an LLM supposed to identify an LLM-generated bullshit issue...? It's the fox guarding the henhouse.