==GPU==
*[https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080-ti/ GTX 1080 Ti Specs]
* Since we are using TensorFlow, note that it doesn't scale well across multiple GPUs for a single model
* [http://timdettmers.com/2017/04/09/which-gpu-for-deep-learning/ Which GPU for deep learning (04/09/2017)]
# "I quickly found that it is not only very difficult to parallelize neural networks on multiple GPUs efficiently, but also that the speedup was only mediocre for dense neural networks. Small neural networks could be parallelized rather efficiently using data parallelism, but larger neural networks... received almost no speedup."
====TL;DR====
Using multiple GPUs adds a lot of complexity. It has a few benefits: possible speed-ups if the network can be split up (and is big enough); the ability to train multiple networks at once (either copies of the same network or modified networks); and more memory for huge batches. Some frameworks (PyTorch, and probably Caffe2) have much better multi-GPU performance than TensorFlow, and others are catching up. TensorFlow doesn't scale well across multiple GPUs for a single model, but it can still be used to train multiple networks at once.
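To make the data-parallelism trade-off above concrete, here is a minimal sketch in PyTorch (one of the frameworks noted above as scaling better). The toy model, layer sizes, and batch size are illustrative assumptions, not taken from the articles linked above.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Toy dense network; stands in for whatever model is actually trained.
model = nn.Sequential(
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

if torch.cuda.device_count() > 1:
    # Data parallelism: each GPU receives a slice of the batch and a
    # replica of the model; gradients are reduced onto the default GPU.
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# The batch must be large enough to keep every GPU busy; otherwise the
# communication overhead eats the speedup (the "mediocre" case quoted above).
x = torch.randn(256, 1024, device=device)
y = model(x)
print(y.shape)  # torch.Size([256, 10])
</syntaxhighlight>

The gradient synchronization after each step grows with the parameter count, which is one reason large dense networks (many parameters per unit of compute) see the "almost no speedup" behavior quoted above.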
==RAM==
* [https://www.oreilly.com/learning/build-a-super-fast-deep-learning-machine-for-under-1000 Cheap build]
* [https://medium.com/@SocraticDatum/getting-started-with-gpu-driven-deep-learning-part-1-building-a-machine-d24a3ed1ab1e How to build a GPU deep learning machine]
* [https://www.slideshare.net/PetteriTeikariPhD/deep-learning-workstation Deep Learning Computer Build] useful tips, long
Questions to ask:
* What libraries will be used?
* Approximate dataset/batch size?
* Development approach regarding multiple GPUs: splitting up large models vs. training multiple models independently (see the sketch after this list)?
* Network card?
* DVD drive?
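For the multiple-GPU question above, a common low-complexity approach is to skip in-framework parallelism entirely and train one independent model per GPU by pinning each process to a single device with CUDA_VISIBLE_DEVICES. A minimal sketch, assuming a hypothetical train.py script with a --config flag (both are placeholders, not from this page):

<syntaxhighlight lang="python">
import os
import subprocess

# Hypothetical experiment configs; train.py and --config are placeholders.
configs = ["baseline.yaml", "wider.yaml", "deeper.yaml"]

procs = []
for gpu_id, config in enumerate(configs):
    env = dict(os.environ)
    # Each process sees exactly one GPU, so no framework-level
    # multi-GPU support is needed to train several networks at once.
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    procs.append(subprocess.Popen(
        ["python", "train.py", "--config", config], env=env,
    ))

for p in procs:
    p.wait()
</syntaxhighlight>

This matches the TL;DR above: even if a framework scales poorly within a single model, multiple GPUs still pay off for training several networks at once.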