rdos | 3 days ago
There is no point in using a low-bandwidth card like the B50 for AI. Attempting to load a real model across 2x or 4x cards will result in poor performance and low generation speed. If you don't need a larger model, use a 3060 or 2x 3060 and you'll get significantly better performance than the B50, so much better that the higher power consumption won't matter (70 W for the B50 vs. 170 W for a single 3060). Higher VRAM alone won't make a card "better for AI".
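The bandwidth argument can be made concrete with a back-of-the-envelope calculation: during autoregressive decoding, every generated token requires streaming roughly all the model weights from VRAM, so memory bandwidth divided by model size gives an upper bound on tokens per second. A minimal sketch, with the bandwidth figures taken as approximate (check vendor specs; the Arc Pro B50 is commonly listed around 224 GB/s and the RTX 3060 around 360 GB/s):

```python
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Theoretical decode-speed ceiling: each token streams all weights once."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical model size: ~8 GB of weights (e.g. a 13B model at 4-bit quantization).
model_gb = 8.0
b50_ceiling = max_tokens_per_sec(224, model_gb)   # Arc Pro B50 class bandwidth
rtx3060_ceiling = max_tokens_per_sec(360, model_gb)  # RTX 3060 class bandwidth
print(f"B50 ceiling:  {b50_ceiling:.0f} tok/s")
print(f"3060 ceiling: {rtx3060_ceiling:.0f} tok/s")
```

Real throughput is lower than these ceilings (compute, KV cache, and inter-card transfers all cost time), but the ratio between the two cards holds, which is the point being made.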
bsder | 3 days ago | parent
> There is no point in using a low-bandwidth card like the B50 for AI.

People actually use fully loaded M-series Macs for some forms of AI work, so total memory does seem to matter in certain cases.
robotnikman | 3 days ago | parent
> 2x 3060

Are there any performance bottlenecks with using 2 cards instead of a single card? I don't think any of the consumer Nvidia cards support NVLink anymore, or at least they haven't for a while now.
vid | 3 days ago | parent
Who said anything about the B50? Plenty of people use e.g. 2, 4, or 6 3090s to run large models at acceptable speeds. Higher VRAM at decent speeds (much faster than DDR5 system memory) will make cards better for AI.
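The multi-3090 sizing above follows from simple VRAM arithmetic: weights take parameter count times bytes per parameter, plus some headroom for KV cache and activations. A rough sketch, where the 1.2x overhead factor and the per-model numbers are illustrative assumptions rather than measured values:

```python
import math

def cards_needed(params_billion: float, bytes_per_param: float,
                 vram_per_card_gb: float, overhead: float = 1.2) -> int:
    """Estimate how many GPUs are needed to hold a model's weights.

    overhead: assumed multiplier for KV cache / activations / fragmentation.
    """
    weights_gb = params_billion * bytes_per_param
    return math.ceil(weights_gb * overhead / vram_per_card_gb)

# A 70B model on 24 GB 3090s:
print(cards_needed(70, 0.5, 24))  # 4-bit quantization (~0.5 bytes/param)
print(cards_needed(70, 2.0, 24))  # fp16 (2 bytes/param)
```

This is why quantization level, not just parameter count, determines how many cards a given model needs.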