CMay · 2 days ago
The post basically seems to be talking about grounding. The model has enough procedural and behavioral knowledge about how things work that, if you give it the critical pieces that are supposed to work together, it will do a decent job of telling you how they fit together, why they work together, giving examples of them working together, and so on. Where it falls apart is when you provide it with authoritative examples of the truth, but it was trained so heavily on something inaccurate that it still insists on infusing that into the response, even though it contradicts what you gave it. Companies need to spend more effort reducing the chance of that, I think, because if they're going to use their smartest models as stepping stones to produce the next generation of synthetic data, they'll need those models to be able to resolve contradictions like that in a reasonable way.