NVIDIA A100 experiences?

zero_skysilk

New member
Been testing some setups with A100s lately and honestly kind of surprised more folks aren't talking about the smaller players in the space. Everyone’s obsessed with the big three (AWS, GCP, Azure), but I've been getting better price-to-performance from a provider I found kind of randomly. Latency's low, bandwidth decent, and they don’t throttle the way some others do when you scale up concurrent jobs.

Not gonna pretend it's perfect (support can be a bit busy at times), but for deep learning training runs, especially with multi-GPU configs, it's been a solid experience. Worth poking around beyond the usual suspects; there are a few newer platforms flying under the radar with real A100s and decent pricing.
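If you want to sanity-check that a provider's "real A100s" actually report as A100s (model and VRAM), querying the driver is quick. A minimal sketch; the 8-GPU sample string below is illustrative, not from any particular provider:

```python
# On a live box, grab one GPU name per line with:
#   nvidia-smi --query-gpu=name --format=csv,noheader

def gpu_names(smi_output: str) -> list[str]:
    """Parse one GPU name per line from nvidia-smi csv,noheader output."""
    return [line.strip() for line in smi_output.splitlines() if line.strip()]

# Illustrative output from a hypothetical 8x A100-80GB node:
sample = "NVIDIA A100-SXM4-80GB\n" * 8
print(gpu_names(sample))
```

If the names come back as something like "A100-PCIE-40GB" when you paid for SXM 80GB parts, that's worth a support ticket.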



 

How low is the latency? Saw a test someone had done with a $5000 computer or so running DeepSeek, and it was slooowww.
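"Slow" is easier to compare if you put a number on it. A rough way to time any generation call and get tokens/sec; `fake_generate` is a hypothetical stand-in for whatever inference client you're actually calling:

```python
import time
from typing import Callable


def measure_generation(generate: Callable[[], int]) -> tuple[float, float]:
    """Time a generation call; return (elapsed seconds, tokens per second).

    `generate` is any zero-arg callable returning the number of tokens
    it produced (hypothetical stand-in for your inference client).
    """
    start = time.perf_counter()
    num_tokens = generate()
    elapsed = time.perf_counter() - start
    return elapsed, num_tokens / elapsed


# Hypothetical stand-in: pretend we "generated" 64 tokens in ~0.1 s.
def fake_generate() -> int:
    time.sleep(0.1)
    return 64


elapsed, tps = measure_generation(fake_generate)
print(f"{elapsed:.2f}s, {tps:.0f} tok/s")
```

Single-digit tok/s feels sluggish for interactive chat; hosted endpoints are usually well into the double or triple digits.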

Personally I use Groq. Fast and cheap.
 