therein 5 days ago
> The problem for Apple is that they have no secret sauce here: absent any ratfuckery, it would probably work just as well with competing headsets.

Yeah, I'd believe it. There is a good chance that is very much the case here.
foobar10000 4 days ago
Let’s just say there is. Scuttlebutt says there was at least a microphone pickup redesign and a timing redesign because the diarization model's loss curve was crap - and given what I hear from the rest of the industry about auto-diarization in conference rooms, I believe that easily. Basically, the AI guys tried to get it working with the standard data they had, and the loss curve stayed crap no matter how much compute they threw at it. So they had to go to the HW people and say "no bueno" - and someone had to redesign the time sync and swap out a microphone capsule.

For reference, we're seeing this more and more: sensor design changes made specifically to improve loss curve performance - there's even a term being bandied about, "AI-friendly sensor design". It has the nasty side effect of breaking the abstraction boundary, but that's the price you pay for the bitter lesson and letting the model come up with features instead of doing it yourself. (Basically, the sensor->computer abstraction eats details the model could otherwise use to infer stuff.)
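To make the time-sync point concrete, here's a toy sketch (my own illustration, not anything from Apple's pipeline) of why per-channel clock alignment matters for diarization-adjacent features. It simulates two mics hearing the same source with a small acoustic delay, then shows how a few samples of clock skew on one channel bias the estimated time difference of arrival, which is exactly the kind of spatial cue a who-is-speaking model leans on. All numbers here are made up for illustration.

  import numpy as np

  def estimate_delay(reference, delayed, fs):
      """Estimate how many seconds `delayed` lags `reference` via cross-correlation."""
      corr = np.correlate(delayed, reference, mode="full")
      lag = np.argmax(corr) - (len(reference) - 1)
      return lag / fs

  fs = 16_000                                 # sample rate (Hz)
  rng = np.random.default_rng(0)
  speech = rng.standard_normal(fs // 2)       # 0.5 s stand-in for a speech source

  true_delay = 8                              # ~0.5 ms acoustic path difference
  mic_a = speech
  mic_b = np.roll(speech, true_delay)

  # Perfectly synced channels: the TDOA comes back as ~0.5 ms, as expected.
  print("synced :", estimate_delay(mic_a, mic_b, fs) * 1e3, "ms")

  # Now channel B's clock drifts by 5 samples (bad time sync between capsules).
  mic_b_skewed = np.roll(mic_b, 5)
  print("skewed :", estimate_delay(mic_a, mic_b_skewed, fs) * 1e3, "ms")
  # The skewed estimate absorbs the clock error (~0.81 ms instead of 0.5 ms),
  # so any model relying on spatial cues ends up training against noise.

The point being: no amount of compute fixes that downstream, because the corrupted timing information never reaches the model in the first place.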