====TL;DR====
Using multiple GPUs adds a lot of complexity, but it has a few benefits: possible speed-ups if the network is big enough to be split up effectively, the ability to train multiple networks at once (either copies of the same network or modified variants), and more memory for huge batches. Some frameworks (PyTorch, Caffe2) scale much better across multiple GPUs, while others are catching up.
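As a rough illustration of the two approaches above, here is a minimal sketch in PyTorch (chosen only because the page mentions it; the toy model and the <code>make_model()</code> helper are hypothetical):

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Hypothetical toy model; any nn.Module works the same way.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

if torch.cuda.device_count() > 1:
    # Data parallelism: each GPU receives a slice of the batch and the
    # gradients are averaged, which is how huge batches get split up.
    model = nn.DataParallel(model)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")

# Training multiple networks at once instead: pin each copy to its own GPU.
# model_a = make_model().to("cuda:0")  # make_model() is a hypothetical helper
# model_b = make_model().to("cuda:1")
</syntaxhighlight>

nn.DataParallel is the simplest option; newer PyTorch versions recommend DistributedDataParallel for better scaling, but the idea is the same.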
 
Todo: ask about software, dataset size, and development approach
==RAM==
* [http://timdettmers.com/2015/03/09/deep-learning-hardware-guide/ A Full Hardware Guide to Deep Learning]
* [https://www.oreilly.com/learning/build-a-super-fast-deep-learning-machine-for-under-1000 Cheap build]
* [https://medium.com/@SocraticDatum/getting-started-with-gpu-driven-deep-learning-part-1-building-a-machine-d24a3ed1ab1e How to build a GPU deep learning machine]

Questions to ask:
* What libraries will be used?
* Approx. dataset/batch size (see the sketch after this list)
* Development approach regarding multiple GPUs: splitting up large models vs. training multiple models
* Network card?
* DVD drive?
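For the dataset/batch-size question, a back-of-the-envelope memory estimate is often enough to decide how much GPU RAM to budget for. A minimal sketch; all numbers are illustrative assumptions, not measurements:

<syntaxhighlight lang="python">
# Rough GPU memory estimate for one batch of activations.
batch_size = 128
floats_per_sample = 3 * 224 * 224   # e.g. one RGB 224x224 image
bytes_per_float = 4                 # float32
overhead_factor = 3                 # crude multiplier for forward/backward buffers

batch_bytes = batch_size * floats_per_sample * bytes_per_float * overhead_factor
print(f"~{batch_bytes / 2**30:.2f} GiB for one batch")  # ~0.22 GiB here
</syntaxhighlight>

Weights, optimizer state, and framework overhead come on top of this, so treat the result as a lower bound.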