Haas research computing for PhD students currently consists of '''bear.haas.berkeley.edu''' and '''postgres.haas.berkeley.edu'''. This page details these resources.
==Bear==
Bear is an 8-node research computing cluster. [http://groups.haas.berkeley.edu/HCS/research_computing/research-hwsw.html The official blurb] is out of date (it still says that bear has two sets of 5 compute nodes, one set with 64GB of RAM per node, and one with 16GB of RAM per node). All nodes have dual quad-core Xeon processors at about 3GHz.
To find out this information yourself (and for other reasons), you'd like to be able to shell onto a node. I currently can't (I get permission denied errors). However, you can dispatch a shell command to a node through bsub!
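For instance, to check the CPU model on a node (a sketch; `-I` and `-m` are standard LSF bsub options, and cn-06 is one of the compute nodes listed below):

```shell
# On bear, ship a one-off command to a compute node through LSF's bsub
# (-I = interactive job, so the output comes back to your terminal;
#  -m = restrict dispatch to the named host):
#   bsub -I -m cn-06 "grep 'model name' /proc/cpuinfo | sort -u"
# The quoted pipeline is ordinary shell; run locally, it reports the
# current machine's CPU model(s) instead:
grep 'model name' /proc/cpuinfo | sort -u
```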
The IP addresses of the nodes are:
*Bear (login node): 128.32.67.85, 128.32.67.86, 192.168.1.42, 192.168.1.43, 10.1.1.42, 10.1.1.43
*cn-06 to cn-11: 10.1.1.106 to 10.1.1.111
The memory and chips are:
*The login node(s) have 2 dual-core Xeon 5150 chips @2.66GHz, with 38GB of RAM
*cn-06 to cn-10 each have 2 quad-core Xeon 5570 chips @2.93GHz, with 48GB of RAM
*cn-11 has 8 physical cores (16 with hyperthreading) and 196GB of RAM
 
Previously, we also had:
*cn-01 to cn-05 (now disabled), with 2 single-core Xeon chips @3.2GHz and 16GB of RAM per node
To decide which node to target, you'll need to know who is running what where. To get the list of jobs run by '''all''' '''u'''sers, type:
bjobs -u all
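To see at a glance how loaded each node is, you can tally that listing per execution host. A small sketch, assuming the default bjobs output format (where EXEC_HOST is the sixth column):

```shell
# jobs_per_node: count jobs per execution host from a bjobs-style listing
# on stdin. Assumes default bjobs columns:
#   JOBID USER STAT QUEUE FROM_HOST EXEC_HOST JOB_NAME SUBMIT_TIME
jobs_per_node() {
  awk 'NR > 1 {print $6}' | sort | uniq -c | sort -rn
}
# On bear you would use it as:
#   bjobs -u all | jobs_per_node
```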
 
To see the actual utilization on a particular node you can run top on the node with the commands
top-cn-06
top-cn-07
etc. Note that top-cn-11 isn't defined (yet), so you can give the command directly:
bsub -m cn-11 -Ip top
Also, note that if the top command isn't being dispatched to the node, it is probably because the scheduler thinks the node is full. You can probably get around this by giving the top job higher priority.
 
If you are running many jobs simultaneously (and please, be respectful when you do) it seems that bsub will only submit 3 jobs to compute nodes at once, and will queue the rest up as pending. Submitting jobs to individual nodes bypasses this.
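One way to do that is a small loop over the node names. A sketch (the do-file names are placeholders, and the `echo` makes this a dry run — remove it to actually submit):

```shell
# Dry run: print one bsub submission per compute node, each pinned to a
# specific host with -m so the 3-job dispatch limit doesn't queue them.
for node in cn-06 cn-07 cn-08 cn-09 cn-10; do
  echo bsub -m "$node" "\"stata-se -b dofile_$node.do\""
done
```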
 
====Stata====
Note that there are both stata and stata-se versions on bear, in versions 9, 10, and 11. When you give the command
stata
or
stata-se
you automatically run the latest version, and the command is passed to bsub so that you are running Stata on one of the compute nodes.
 
If you want to manually control the bsub options, give the command directly (this one runs Stata-SE version 11):
bsub "/usr/local/stata11/stata-se -b dofile.do"
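A couple of hedged variations on the same submission (both flags are standard LSF bsub options; dofile.do stays a placeholder):

```shell
# Capture the scheduler/stdout output in a log file of your choosing:
#   bsub -o stata.log "/usr/local/stata11/stata-se -b dofile.do"
# Pin the batch job to a particular compute node:
#   bsub -m cn-07 "/usr/local/stata11/stata-se -b dofile.do"
```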
 
====Languages====