===GPU===
 
*[https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080-ti/ GTX 1080 Ti Specs]
* [http://timdettmers.com/2017/04/09/which-gpu-for-deep-learning/ Which GPU for deep learning (04/09/2017)]
  1. "I quickly found that it is not only very difficult to parallelize neural networks on multiple GPUs efficiently, but also that the speedup was only mediocre for dense neural networks. Small neural networks could be parallelized rather efficiently using data parallelism, but larger neural networks... received almost no speedup."
  2. Possible other use of multiple GPUs: training multiple different models simultaneously, "very useful for researchers, who want try multiple versions of a new algorithm at the same time."
  3. This source recommends GTX 1080 Tis and does cost analysis of it
  4. If the network doesn't fit in the memory of one GPU (11 GB),
  1. Might want to get two graphics cards, one for development, one (crappy card) for operating system
  1. Intra-model parallelism: If a model has long, independent computation paths, then you can split the model across multiple GPUs and have each compute a part of it. This requires careful understanding of the model and the computational dependencies.
  2. Replicated training: Start up multiple copies of the model, train them, and then synchronize their learning (the gradients applied to their weights & biases).
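
To make the replicated-training idea concrete, here is a minimal sketch using PyTorch's <code>nn.DataParallel</code>, which keeps one copy of the model per GPU, splits each batch across the copies, and gathers the gradients for a single weight update. The network, batch size, and hyperparameters below are placeholders, not choices made for this build.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Placeholder network and data; real sizes depend on the project.
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))

if torch.cuda.device_count() > 1:
    # Replicated training: one model copy per GPU, each copy sees a slice of
    # the batch, and gradients are accumulated back onto the primary GPU.
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(256, 1024, device=device)        # one large batch, split across GPUs
targets = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()    # each replica backpropagates on its slice of the batch
optimizer.step()   # single synchronized update of the shared weights
</syntaxhighlight>

This is the easy path for "copies of the same network"; intra-model parallelism instead means assigning different layers to different devices (e.g. <code>layer.to("cuda:1")</code>) and moving activations between them by hand.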

====TL;DR====

Using multiple GPUs adds a lot of complexity. It has a few benefits: possible speed-ups if the network can be split up (and is big enough), the ability to train multiple networks at once (either copies of the same network or modified networks), and more memory for huge batches. Some frameworks (PyTorch, Caffe2) already have much better multi-GPU performance, while others are catching up.
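
The "train multiple networks at once" benefit needs no special framework support: with two cards, independent models can simply be pinned to different devices, usually in separate processes. A hypothetical sketch, again in PyTorch with placeholder models, assuming at least two CUDA devices:

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Two unrelated experiments, one per GPU (assumes at least two CUDA devices).
model_a = nn.Linear(128, 2).to("cuda:0")   # e.g. the baseline network
model_b = nn.Linear(128, 2).to("cuda:1")   # e.g. a modified variant

x = torch.randn(64, 128)
out_a = model_a(x.to("cuda:0"))
out_b = model_b(x.to("cuda:1"))

# In practice each model gets its own training loop, typically launched as a
# separate process so the two runs do not slow each other down.
</syntaxhighlight>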

Todo: ask what software will be used, how large the dataset is, and what development approach is planned.

===Other Builds===

* [https://news.ycombinator.com/item?id=14438472 Deep learning box for $1700] (links to https://blog.slavv.com/the-1700-great-deep-learning-box-assembly-setup-and-benchmarks-148c5ebe6415)