drob518 2 days ago

I’m really curious how this scales up. Bonsai delivers an 8B model in 1.15 GB. How large would a 27B or 35B model be? Would it still retain the accuracy of those large models? If the scaling holds, we could see 100+B models in 64 GB of RAM.
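The back-of-the-envelope math here is simple: 1.15 GB for 8B parameters works out to roughly 1.15 bits per parameter, and assuming file size scales linearly with parameter count (an assumption, not something the thread confirms), you can extrapolate to larger models. A quick sketch:

```python
# Back-of-the-envelope check of the scaling claim.
# Only the 8B / 1.15 GB figure comes from the thread; linear scaling
# with parameter count is an assumption. 1 GB is treated as 1e9 bytes.

def bits_per_param(size_gb: float, params_b: float) -> float:
    """Effective bits per parameter for a model file."""
    return size_gb * 1e9 * 8 / (params_b * 1e9)

def est_size_gb(params_b: float, bpp: float) -> float:
    """Estimated file size in GB at a given bits-per-parameter rate."""
    return params_b * 1e9 * bpp / 8 / 1e9

bpp = bits_per_param(1.15, 8)  # ~1.15 bits/param
print(f"{bpp:.2f} bits/param")
for n in (27, 35, 100):
    print(f"{n}B -> {est_size_gb(n, bpp):.1f} GB")
```

At that rate a 100B model would need only about 14 GB, so 64 GB of RAM would in principle hold well over 400B parameters, before accounting for activations and KV cache.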

cubefox 2 days ago | parent | next [-]

It also depends on how expensive these models are to train. Training is probably at least as expensive as for full-precision models; otherwise they would have mentioned it.

londons_explore 2 days ago | parent [-]

My guess is the training process is their secret sauce...

cubefox 2 days ago | parent [-]

Yes, but their training speed is not secret. If their process were fast, they would have said so.
