Database Server Documentation


For help connecting, see Center_IT#Connecting_to_the_Database_Server

General configuration options:

  • hostname: McNairDBServ
  • user full name: McNair Center
  • username: mcnair
  • don't encrypt home directory
  • manual partitioning (see below for RAID 10 configuration)
  • no automatic updates
  • software: OpenSSH server, PostgreSQL database, and Samba file server

Database server drive partition specs:

  • RAID 10 the four drives
  • 1 MB out of each drive for bootloader
  • use all of M.2 drive as swap space
  • format the PCIe drive as a Linux partition (for the PostgreSQL data directory)

Installing Ubuntu (2/19/2016)

The first installation attempt failed at the partition disks step because the SSD and M.2 drives weren't being detected by the partitioner. I suspected that a driver was missing, but I don't see how we'd be able to install a driver without a proper bash shell (the installer does come with a shell, but it's very limited).

Kranthi and I took another look at the PCIe slots on the mobo, and it turns out that the two PCIe slots we had plugged the M.2 drive and SSD into were linked to CPU2, whereas the database server was configured with only one CPU, installed in CPU1. Following pages 2-19 and 2-20 of the mobo datasheet, we moved the M.2 drive to PCIE1 and the SSD to PCIE3, which are both linked to CPU1, and tried the installation process again.

This time, two new drives showed up in the partitioner (/dev/nvme0n1 and /dev/nvme1n1), but both were listed as unknown (their capacities, however, were displayed as 400.1 GB and 512.1 GB respectively, which identifies the first as the SSD and the second as the M.2 drive). Section 2.5 of the mobo datasheet suggests that we should assign each device in an expansion slot an IRQ (from my understanding, an IRQ is a number from 0 to 15 that identifies the device's interrupt line to the CPU). To do so, we have to use the BIOS menu, so I aborted the installation and pressed the 'Delete' key during boot-up to get to the BIOS menu.

But through the entire tree of menus, I was only able to find IRQ settings for Serial Port 1 and Serial Port 2, under NCT6779D Super IO Configuration. Wikipedia's article on interrupt requests notes that "PCI Express does not have physical interrupt lines at all, and uses MSI exclusively." So I guess IRQs are no longer a big deal (they "went the way of the dodo," as Ed says). With no other apparent way to label the PCIe drives from the BIOS menu, I exited and rebooted the computer to proceed with the installation.

Interesting side note: the mobo datasheet notes that the motherboard comes with utilities to set up RAID arrays, but there doesn't seem to be any benefit to doing it on the motherboard rather than in the Ubuntu partitioner.

In the partitioner, I deleted the partitions that were automatically generated and started each drive off with a new partition table. Then I partitioned the drives as follows:

  • /dev/nvme0n1 (aka the 400 GB Intel SSD)
    • entire space (~400 GB) as primary partition. use as ext4 filesystem, mount point: /, bootable flag off.
  • /dev/nvme1n1 (aka the 512 GB M.2 drive)
    • half (~256 GB) as primary partition. use as swap space (bootable flag off).
    • the other half (~256 GB) as primary partition. use as ext4 filesystem, mount: /var/postgresql (choose "Enter it manually" in the mount point selection menu), bootable flag off.
  • for each of the four 3.0 TB hard drives (aka /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd)
    • 10 MB primary partition, use as "reserved BIOS boot area" (bootable flag can't be changed, so leave it set to off)
    • the rest of the space as a primary partition, use as ext4 filesystem, mount to /bulk (choose "Enter it manually" in the mount point selection menu), bootable flag off

Then choose "configure software RAID" to set up the software RAID device. Confirm the partitions. Then wait for the disks to be partitioned.

Choose "Create MD device" and then "RAID10" to begin. Use 2 active devices and 2 spare devices. For the two active devices, choose /dev/sda2 and /dev/sdb2. For the two spare devices, choose /dev/sdc2 and /dev/sdd2. Confirm the partitions that will be changed. Then wait for the device to be created. When you get back to the software RAID configuration menu, choose "Delete MD device" and then choose the software RAID device that was created (there should only be one, and it should be named something like "md0_raid10"). Note the device name (in this case, it is /dev/md0) and check that the component devices are correct. If they're correct, choose "No" (to not delete the software RAID device). This will take you back to the software RAID configuration menu, where you can choose "Finish" to set up the RAID device, which should now appear at the top of the partitioner.

Choose the 3.0 TB partition in the RAID device and configure it to use as ext4 filesystem and mount point /bulk (choose "Enter it manually" in the mount point selection menu). Then choose "Finish partitioning and write changes to disk" at the bottom of the partitioner menu. Confirm that the partition changes to be made are correct. Then wait for the partitions to be formatted.

After the base system is installed, the installer will prompt for software selection: choose OpenSSH server, PostgreSQL database, and Samba file server. It will then ask whether to install the GRUB bootloader onto the master boot record. From my experience setting up the webserver, choose "No", and on the next screen install the bootloader to "/dev/sda /dev/sdb /dev/sdc /dev/sdd". If all goes well, the installation will finish; take the CD out, choose "Continue", and the system will reboot, at which point you should be able to boot into Ubuntu!
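
After the first boot it's worth sanity-checking the new array from the shell. A minimal check, assuming the array came up as /dev/md0 as above:

 # Show the status of all software RAID arrays (active/spare devices, sync progress)
 cat /proc/mdstat
 # Show detail for the array built during installation
 sudo mdadm --detail /dev/md0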


Install Postgres

PostgreSQL itself was selected during installation; PL/Perl ships as a separate package, named for the Postgres major version:

apt-get install postgresql-plperl-9.5
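
PL/Perl then has to be enabled in each database that uses it. A minimal sketch, run as the postgres superuser (the database name somedb is a placeholder):

su postgres -c "psql -d somedb -c 'CREATE EXTENSION plperl;'"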

Install and configure TightVNC and xfce desktop

Installation

 apt install xfce4 xfce4-goodies tightvncserver

Configure VNC server

Created a configuration file for VNC startup:

 nano ~/.vnc/xstartup
   
   #!/bin/bash
   xrdb $HOME/.Xresources
   startxfce4 &
  • The first command in the file, xrdb $HOME/.Xresources, tells VNC's GUI framework to read the server user's .Xresources file. .Xresources is where a user can make changes to certain settings of the graphical desktop, like terminal colors, cursor themes, and font rendering.
  • The second command simply tells the server to launch XFCE, which is where you will find all of the graphical software that you need to comfortably manage your server.

Granted executable privilege:

 sudo chmod +x ~/.vnc/xstartup 
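
Before the service can start a server for mcnair, the VNC password has to exist. Running vncserver once by hand (as mcnair) prompts for the password and stores it in ~/.vnc/passwd; the throwaway instance can then be killed:

 vncserver :1
 vncserver -kill :1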

Create a VNC service file

First, opened a new service file in /etc/init.d with nano:

 sudo nano /etc/init.d/vncserver

The first block declares some common settings that the service will refer to, like our username and the display resolution.

 #!/bin/bash
 PATH="$PATH:/usr/bin/"
 export USER="mcnair"
 DISPLAY="1"
 DEPTH="16"
 GEOMETRY="1920x1080"
 OPTIONS="-depth ${DEPTH} -geometry ${GEOMETRY} :${DISPLAY} -localhost"
 . /lib/lsb/init-functions

Next, we can start inserting the command instructions that will allow us to manage the new service. The following block binds the command needed to start a VNC server, plus a log message that it is being started, to the command keyword vncstart.

 case "$1" in
 vncstart)
 log_action_begin_msg "Starting vncserver for user '${USER}' on localhost:${DISPLAY}"
 su ${USER} -c "/usr/bin/vncserver ${OPTIONS}"
 ;;

The next block creates the command keyword vncstop, which will immediately kill an existing VNC server instance.

 vncstop)
 log_action_begin_msg "Stopping vncserver for user '${USER}' on localhost:${DISPLAY}"
 su ${USER} -c "/usr/bin/vncserver -kill :${DISPLAY}"
 ;;

The final block is for the command keyword vncrestart, which simply combines the two previous commands (vncstop and vncstart) into one.

 vncrestart)
 $0 vncstop
 $0 vncstart
 ;;
 esac
 exit 0

Made this service script executable:

 sudo chmod +x /etc/init.d/vncserver

Now try using the service to start a new VNC server instance:

 sudo service vncserver vncstart
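
The other two keywords defined in the service file work the same way:

 sudo service vncserver vncstop
 sudo service vncserver vncrestart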

Connect to VNC server

First, on your local computer, create an SSH tunnel that securely forwards a local port to the VNC server's localhost-only port. On Linux or OS X (this requires the password for the DB server's non-root user mcnair):

 ssh -L 5901:127.0.0.1:5901 -N -f -l mcnair 128.42.44.181

Now use a VNC client (Remmina, TightVNC Viewer, etc.) to connect to the VNC server. In Remmina, use localhost:5901. There is no username. Password is: Go-AsK-Ed

I am not an expert in the Windows shell and am not sure how to SSH on Windows. The TightVNC Viewer has an SSH channel option, but I failed to connect to our VNC server from Windows.
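
One untested option (an assumption, not something verified on this box) is PuTTY's command-line client plink, which accepts the same tunnel flags as OpenSSH:

 plink.exe -ssh -L 5901:127.0.0.1:5901 -N mcnair@128.42.44.181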

User management

Adding Root Accounts to the box

Assuming that you have root, you can create user accounts on the box and give them root too. This isn't necessary for regular users - they just need a Postgres user account (see below). To add users to the box, the process is:

First create the user's group, after checking /etc/group for the last group number used (5xx is the next one):

cat /etc/group
/usr/sbin/groupadd -g 5xx username 

Then add the user (it doesn't matter what you put for -p, it is going to be overwritten):

/usr/sbin/useradd -g username -G root -s /bin/bash -p xxxx -d /home/username -m username
      where g is the primary group, G is other groups, p sets a password, 
      d declares a home directory and m makes the directory

Change the user's password:

passwd username

And add the user to the sudoers file (visudo is the safer way to edit this file, since it checks the syntax, but this is the quick version):

echo 'username ALL=(ALL) ALL' >> /etc/sudoers
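
Putting the whole procedure together for a hypothetical user newuser (GID 510 is a placeholder; use the next free 5xx number):

/usr/sbin/groupadd -g 510 newuser
/usr/sbin/useradd -g newuser -G root -s /bin/bash -p x -d /home/newuser -m newuser
passwd newuser
echo 'newuser ALL=(ALL) ALL' >> /etc/sudoers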

Deleting a user

To delete a user:

/usr/sbin/userdel -r roger 
where r removes the home directory

And to remove their group

/usr/sbin/groupdel username

And remove their entry from the sudoers file too if they had root.

Setting up the Samba Server

Based on this Ubuntu page, but this guide is also helpful. Make a backup copy of the smb.conf configuration file before editing it:

$ sudo cp /etc/samba/smb.conf /etc/samba/smb.conf_backup
$ sudo vi /etc/samba/smb.conf

Change the workgroup parameter (under the [global] section) to MCNAIRDB. (Note that the security parameter was removed in Samba version 4.0.0).

Then go to the bottom of the configuration file and add a new section, [bulk]:

[bulk]
   comment = McNair File Server Share
   path = /bulk
   browseable = yes
   guest ok = no
   read only = no
   create mask = 0755

Once you're done editing the configuration file, test the file with testparm:

$ testparm /etc/samba/smb.conf

The /bulk directory should already exist from installation, so change its permissions:

$ sudo chmod 770 /bulk
$ sudo groupadd smbusers
$ sudo chown :smbusers /bulk

You also need to add the user to the Samba password database and to the smbusers group. I used the username "alexjiang" for the commands below, which can be replaced with whatever username needs to be added. Note the -a flag on usermod: without it, -G would replace the user's supplementary groups instead of appending to them.

$ sudo smbpasswd -a alexjiang
$ sudo usermod -a -G smbusers alexjiang
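
To check the result, list the Samba password database and the user's groups (pdbedit ships with Samba):

$ sudo pdbedit -L
$ id alexjiang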

Then restart the samba services:

$ sudo restart smbd
$ sudo restart nmbd

Then you can try using any Windows Explorer window to check if the file server is set up correctly.

Adding accounts for McNair Center Researchers

Note that this section is somewhat redundant. Most McNair Center researchers will log in with the 'researcher' account. This account's creation is described below.

Add a user group:

cat /etc/group
/usr/sbin/groupadd -g 5xx username 

Then add the user (it doesn't matter what you put for -p, it is going to be overwritten):

/usr/sbin/useradd -g username -G smbusers -s /bin/bash -p xxxx -d /home/username -m username
      where g is the primary group, G is other groups, p sets a password, 
      d declares a home directory and m makes the directory
passwd username

Add the user's samba password:

$ smbpasswd -a alexjiang

Then restart the samba services:

$ sudo restart smbd
$ sudo restart nmbd

Mapping a drive to the Dbase server's bulk directory

In Windows:

Go to My Computer -> Map Network Drive
Connect to:
 \\128.42.44.181\bulk
With username:
 MCNAIRDB\username

On a Mac:

Go to Finder and hit CMD-k
Enter server address: 
 smb://128.42.44.181/bulk
With username:
 MCNAIRDB\username

Adding a dbase user and creating a dbase

Log on to the box as root then:

groupadd -g 112 postgres 
useradd -g postgres -s /bin/bash -p xxxx -d /home/postgres -m postgres
passwd postgres
mkdir /var/postgresql/data
chown postgres /var/postgresql/data

Change to postgres and initialize the dbase

su postgres
cd /usr/lib/postgresql/9.5/bin/
./initdb -D /var/postgresql/data
/etc/init.d/postgresql stop

Test the right location:

/usr/lib/postgresql/9.5/bin/postgres -D /var/postgresql/data > logfile 2>&1 &
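
Once it is running, confirm that the server picked up the new data directory (SHOW reports the live setting):

su postgres -c "psql -c 'SHOW data_directory;'"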

As postgres, edit /etc/postgresql/9.5/main/postgresql.conf (see http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server)

data_directory = '/var/postgresql/data'
listen_addresses = '*'
port = 5432
max_connections = 10

shared_buffers = 100000MB
 The recommendation is to use around 25-40% of RAM for high-performance systems like ours (we have 256 GB, so 100000MB is ~40%). We might need to check that we are running 64-bit and that our kernel supports this.

work_mem = 500MB
 Kranthi: may not do much beyond some limit; the maximum allowed is 2GB.

maintenance_work_mem = 2GB
 This is how much VACUUM will use - it doesn't make a huge difference.

effective_cache_size = 198GB
 Roughly 3/4 of the machine's RAM.

Change /etc/postgresql/9.5/main/pg_hba.conf from:

 # "local" is for Unix domain socket connections only
 local   all             all           peer

to:

 local   all             all           trust

Restart postgres to reload the configuration:

/etc/init.d/postgresql restart  #Not found if properly removed

Delete the postgres-xc account:

userdel -r postgres-xc
vi /etc/group  #check the user's group is gone too

Create the researcher user

The password for the researcher account is the 'standard' internal password.

groupadd -g 505 researcher 
useradd -g researcher -G smbusers -s /bin/bash -p xxxx -d /home/researcher -m researcher 
passwd researcher 
smbpasswd -a researcher 

Note: if you forgot the -G smbusers (the smbusers group is GID 1001), add the user to it afterwards:

usermod -a -G smbusers researcher

The 'researcher' user on the dbase server is created as follows (su to postgres first):

createuser -P -s researcher 

Or

createuser --interactive researcher
(Answer y or n to whether you want the new role to be superuser, etc.)

The researcher user can create and drop databases with:

createdb DBname
dropdb DBname

Notes on individual users

If the user doesn't have an account on the box, then you'll need to add a database for them (use DBname of 'firstname_data' as a default):
 createdb -O username DBname
Otherwise, they can create databases themselves with:
 createdb DBname

Drop the dbase and the user with:

dropdb DBname
dropuser username

Fixing some security

When the box was built, we created an account called mcnair. This account isn't in the sudoers file but can still use sudo su. The password on this account was changed on 13th May 2016 to the new admin password. The root account has no password.

Editing Users

NOTE: The Postgres .conf files are in:

/etc/postgresql/9.5/main


To add a user to the database from the shell:

createuser username

To delete a user from the database from the shell:

dropuser username 

From the psql prompt, you can list the users of the database using:

\du

To update the password for any of these users, from the psql prompt use:

\password username

There is a superuser named postgres that has access to the server and the box. If you need a user/password combo for a script that needs access to both, you'll need these credentials.

username: postgres
password: ask Anne (This is not the password)

NOTE: To get this combination to work, we had to change a line in

/etc/postgresql/9.5/main/pg_hba.conf

The line was changed from

local      all      postgres     peer

to

local      all      postgres     md5
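
Changes to pg_hba.conf only take effect after a reload. A sketch of one way to reload and then test the md5 rule (the -W flag forces a password prompt):

su postgres -c "psql -c 'SELECT pg_reload_conf();'"
psql -U postgres -W -c 'SELECT 1;'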