* [https://devtalk.nvidia.com/default/topic/743814/cuda-setup-and-installation/advice-on-single-vs-multi-gpu-system/ Advice on single vs multi-GPU system]
# You might want two graphics cards: one for development and one cheap card to drive the operating system's display
* [https://stackoverflow.com/questions/37732196/tensorflow-difference-between-multi-gpus-and-distributed-tensorflow Different uses of multiple GPUs]
# Intra-model parallelism: if a model has long, independent computation paths, you can split it across multiple GPUs and have each compute a part of it. This requires a careful understanding of the model and its computational dependencies (see the first sketch after this list).
# Replicated training: start up multiple copies of the model, train them in parallel, and synchronize their learning (the gradients applied to their weights and biases; see the second sketch after this list).
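
For intra-model parallelism, explicit device placement is enough to illustrate the idea. Below is a minimal TensorFlow sketch, not a recipe from the linked thread: the shapes and weights are made up, and the <code>/GPU:0</code> and <code>/GPU:1</code> device strings assume at least two GPUs are visible.

<syntaxhighlight lang="python">
import tensorflow as tf

# Hypothetical two-branch forward pass. Each branch is pinned to its own GPU;
# because the branches share no data dependencies, they can run concurrently.
x = tf.random.normal([32, 784])     # dummy input batch
w_a = tf.random.normal([784, 256])  # weights for branch A
w_b = tf.random.normal([784, 256])  # weights for branch B

with tf.device("/GPU:0"):
    a = tf.nn.relu(tf.matmul(x, w_a))  # branch A on GPU 0

with tf.device("/GPU:1"):
    b = tf.nn.relu(tf.matmul(x, w_b))  # branch B on GPU 1

# The concat is the synchronization point where both branches must finish.
out = tf.concat([a, b], axis=1)
</syntaxhighlight>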
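
For replicated training, TensorFlow 2's <code>tf.distribute.MirroredStrategy</code> handles the copy-and-synchronize cycle automatically (the linked 2016 answer predates this API). A minimal sketch with dummy data; with no GPUs present it simply falls back to a single replica.

<syntaxhighlight lang="python">
import numpy as np
import tensorflow as tf

# MirroredStrategy keeps one replica of the model per GPU, splits each batch
# across the replicas, and averages the gradients before applying them.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created inside the scope are mirrored onto every device.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# Dummy data stands in for a real dataset.
x = np.random.rand(1024, 784).astype("float32")
y = np.random.randint(0, 10, size=(1024,))
model.fit(x, y, batch_size=256, epochs=1)  # each batch is sharded across replicas
</syntaxhighlight>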
 
====TL;DR====
Using multiple GPUs adds a lot of complexity, but it has a few benefits: possible speed-ups if the network can be split up, the ability to train multiple networks at once (either copies of the same network or modified variants), and more memory for huge datasets.
===Other Builds===
* [https://news.ycombinator.com/item?id=14438472 Deep learning box for $1700] (links to https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415)