Research Computing At Haas

Latest revision as of 19:56, 26 October 2011

Haas research computing for PhD students currently consists of bear.haas.berkeley.edu and postgres.haas.berkeley.edu. This page details these resources.


Bear

Bear is an 8-node research computing cluster. The official blurb is out of date (it still says that bear has two sets of 5 compute nodes, one set with 64GB of RAM per node and one with 16GB of RAM per node). All nodes have quad-core Xeon processors at about 3GHz.

To find out this information yourself (and for other reasons), it is useful to be able to shell onto a node. I currently can't (I get permission denied errors). However, you can dispatch a shell command to a node through bsub!

The following commands are valuable:

free                #list memory available
cat /proc/meminfo   #see the memory specs
cat /proc/cpuinfo   #see the chip specs
uname -a            #see the system info
/sbin/ifconfig -a   #see the network tables
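For instance, the core count and total memory of a node can be pulled from /proc in two lines. A minimal sketch, assuming a standard Linux /proc layout (dispatch it through bsub to inspect a compute node rather than the login node):

```shell
# Count logical cores, then report total RAM (in kB) from /proc.
grep -c '^processor' /proc/cpuinfo
awk '/^MemTotal/ {print $2, $3}' /proc/meminfo
```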

The IP addresses of the nodes are:

  • Bear (login node): 128.32.67.85, 128.32.67.86, 192.168.1.42, 192.168.1.43, 10.1.1.42, 10.1.1.43
  • cn-06 to cn-11: 10.1.1.106 to 10.1.1.111

The memory and chips are:

  • The login node(s) have 2 dual-core Xeon 5150 chips @ 2.66GHz, with 38GB of RAM
  • cn-06 to cn-10 each have 2 quad-core Xeon 5570 chips @ 2.93GHz, with 48GB of RAM
  • cn-11 has 8 physical cores (16 cores through hyperthreading), with 196GB of RAM

Previously, we also had:

  • (Disabled) cn-01 to cn-05, which had 2 single-core Xeon chips @ 3.2GHz with 16GB of RAM

There are three ways of using bear:

Storing Data on Bear

Your "R" drive lives on bear. We tested access times to the R drive and found that they are much faster than to HCS-Data or other shares that you have access to (other than your C drive, though the speeds are actually comparable with those to C). You should use R:\bulk as your primary data storage area.

If your R drive isn't mapped already then map a network drive to:

\\bear\username$


SSH'ing into Bear

You can use a copy of PuTTY to SSH onto bear. PuTTY is a free SSH client that you can download from its author. You do not need to 'install' it - it is a standalone executable file. Details for the configuration are available from the howdoi section of the Haas website, but none are really needed.


The address to connect to bear is:

bear.haas.berkeley.edu


And the connection is on the standard port (22). You can set your username under:

Connection -> Data -> Auto-login Username


And save the connection settings if you want.

Screen

You may want to use the 'screen' command to create a screen that can be detached from your SSH session or to run multiple shells within a single session.

Just type:

screen

To control screen, rather than the shell you are given, use the Ctrl-A (C-A) commands. Type:

C-A ? 

to get the screen help. The command 'exit' will exit you from (your last) screen and dump you back to the original shell. Crucially, screens can be 'detached' and left running in the background: if your SSH (PuTTY) connection drops for some reason but you are running your commands within a screen, they will stay running and can be picked up again later on. Screens can also be shared across multiple users.

Useful commands are:

Within Screen:
C-a c  Create a new screen
C-a 0  Move to screen 0
C-a 1  Move to screen 1 (and so on)
C-a n  Move to next screen
C-a d  Detach screen from the SSH session and return to the shell
exit   Terminate screen and return to the shell

From the shell:
screen -ls              List all screen sessions available 
screen -r session_name  Reattach to a screen session


Bsub

If you are running scripts then you should use bsub to have them execute on the compute nodes, rather than the login node, as otherwise a runaway script can bring the whole of bear to a standstill. An example syntax for running a Perl script is:

bsub -Is "perl Script.pl"

When a process launches it reports the compute node that is being used (currently cn-06 to cn-11), and a process id (p_id). You can kill a process (that wasn't launched interactively) by typing:

bkill p_id

You can set the memory allocation and target a node. Memory is in KB, so 48GB is 50331648 (48 * 1024 * 1024) and 40GB is 41943040. To target a node use the -m switch and the node name. Nodes cn-06 to cn-10 have 48GB (see the specs above).

bsub -M 41943040 -m cn-06 -Is "..."
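Since the -M figure is easy to get wrong, a tiny helper can do the GB-to-KB arithmetic for you. A minimal sketch (gb_to_kb is an illustrative name of our own, not part of LSF):

```shell
# Convert a memory limit in GB to the KB figure that bsub's -M switch expects.
gb_to_kb() {
  echo $(( $1 * 1024 * 1024 ))
}

gb_to_kb 48   # prints 50331648 - the full 48GB of a cn-06 to cn-10 node
gb_to_kb 40   # prints 41943040 - a safer cap that leaves headroom for the OS
```

It can then be used inline, e.g. bsub -M $(gb_to_kb 40) -m cn-06 -Is "perl Script.pl".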

To decide which node to target, you'll need to know who is running what where. To get this list of jobs run by all users you type:

bjobs -u all
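If you want a per-node tally rather than the raw listing, an awk one-liner over that output works. This is a hypothetical sketch: the here-document stands in for real `bjobs -u all` output, and the column layout (execution host in column 6) is an assumption - check the header line your LSF version prints before relying on it:

```shell
# Tally jobs per execution host. Column 6 (EXEC_HOST) is assumed; verify it
# against the header line of your own `bjobs -u all` output.
tally_by_host() {
  awk 'NR > 1 { count[$6]++ } END { for (h in count) print h, count[h] }'
}

# Sample input standing in for `bjobs -u all | tally_by_host`:
tally_by_host <<'EOF'
JOBID USER  STAT QUEUE  FROM_HOST EXEC_HOST JOB_NAME SUBMIT_TIME
101   alice RUN  normal bear      cn-06     job1     Oct26
102   bob   RUN  normal bear      cn-06     job2     Oct26
103   carol RUN  normal bear      cn-07     job3     Oct26
EOF
```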

To see the actual utilization on a particular node you can run top on the node with the commands

top-cn-06
top-cn-07

etc... Note that top-cn-11 isn't defined (yet) so you can give the command directly:

bsub -m cn-11 -Ip top

Also, note that if the top command isn't being dispatched to the node then it is probably because it thinks the node is full. You can probably get around this by giving top higher priority.

If you are running many jobs simultaneously (and please, be respectful when you do) it seems that bsub will only submit 3 jobs to compute nodes at once, and will queue the rest up as pending. Submitting jobs to individual nodes bypasses this.

Stata

Note that there are both stata and stata-se versions on bear, in versions 9, 10, and 11. When you give the command

stata

or

stata-se

You automatically run the latest version, and the command is passed to bsub so that you are running Stata on one of the compute nodes.

If you want to manually control the bsub options, give the command directly (this one runs Stata-SE version 11):

bsub "/usr/local/stata11/stata-se -b dofile.do"
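To batch up several do-files, a simple loop around that bsub call works. A sketch with illustrative file names - it only prints the commands it would run; drop the echo to actually submit:

```shell
# Print (not run) one bsub command per do-file; remove `echo` to submit.
# The file names here are illustrative.
for f in first.do second.do; do
  echo bsub "/usr/local/stata11/stata-se -b $f"
done
```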


Languages

Available scripting languages include:

  • Perl
  • Python
  • R


There are also (apparently - I haven't tested them) compilers for:

  • C++ (GNU Cpp)
  • Fortran 77

Unix Commands

To see your files you might want the following simple commands:

  • ls -alt - list the files in the current directory in all their glory
  • cd bulk - change into the bulk directory
  • cd .. - change up a directory
  • top - look at the top processes
  • ps aux - show all processes running

Useful Tricks

If you want to change the color scheme used by the 'ls' command then:

cp /etc/DIR_COLORS.xterm .dir_colors

And edit the .dir_colors file to suit your tastes. Note that the changes won't take effect until you start a new session!

Using Xwindow Applications on Bear

There are copies of the following Xwindow applications ready for use on bear:

  • Matlab (matlab)
  • Stata (xstata)
  • Stata-SE (xstata-se)
  • SAS (sas)


To use these applications you need an Xwindows client. The eXceed client is available from software.berkeley.edu for download. Download it and install it. You can accept a typical installation for just one user. There is a very old set-up guide from HCS that may be useful, but the details below should suffice. Note that there is a security notice warning of risks when you use eXceed. We are partly protected from these risks by the design of the Haas network, but you might want to consider following the advanced configuration instructions.


Now save a bear configuration in PuTTy by entering the following parameters and hitting "Save":

Session -> Host Name (or IP address)  bear.haas.berkeley.edu
Connection -> Data -> Auto-login Username   Your_Username
Connection -> SSH -> Encryption Cipher  Move Blowfish to the top
Connection -> SSH -> X11 -> Enable X11 Forwarding   Tick the box
Session -> Saved Sessions   Bear + Click Save


Now start eXceed running (and leave it running in the background) and SSH onto bear using PuTTY. At the command line type the name of the program, for example "xstata-se", and the program will launch in an eXceed window on your desktop. Voila!

For example my terminal looked like this to launch STATA-SE:

[ed_egan@bear-b ~]$ xstata-se


There is a Best Practices blurb from HCS regarding these apps that specifically asks us not to background our processes using "&". That is, do not type: xstata-se &

If no licenses are available then you should be able to see who is running other copies (so you know who to complain about) by using the command:

bjobs -u all

Remember - Bear is your R drive, so the root of bear, when you login, is the root of your R drive!

Postgres.Haas.Berkeley.Edu

Postgres.Haas is a new and experimental database server for PhD students and faculty. It hosts a copy of PostgreSQL with support for R, Perl, and C++ scripting inside the RDBMS.

At present the server is being tested and new users are welcome.

Those with permission can see the Haas PhD Server Configuration page to view the set-up.

Details on how to work with this server are on the Working with PostgreSQL page.