
Users, Groups and Sudo

One thing that I did once I got my Raspberry Pi up and running was to add a user account other than the pi account that is there by default. I have an erick account on my other machines, so why not have one on the Pi.

useradd

So at the default pi prompt I used the useradd command to add erick as a user. I figured that I would not log in as pi, so I gave pi a strong password.

useradd -m erick

The -m option creates a home directory under /home and fills it with files and directories based on the /etc/skel directory. Note that useradd itself does not prompt for a password; set one afterward with passwd.
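As a minimal sketch of that sequence, run from the pi account (the -s /bin/bash shell is an assumption matching the usual Raspbian default, not something from the original setup):

sudo useradd -m -s /bin/bash erick   # create the account and a home directory populated from /etc/skel
sudo passwd erick                    # set the new account's password interactively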

Lots of Options for useradd

Usage: useradd [options] LOGIN

Options:
  -b, --base-dir BASE_DIR       base directory for the home directory of the
                                new account
  -c, --comment COMMENT         GECOS field of the new account
  -d, --home-dir HOME_DIR       home directory of the new account
  -D, --defaults                print or change default useradd configuration
  -e, --expiredate EXPIRE_DATE  expiration date of the new account
  -f, --inactive INACTIVE       password inactivity period of the new account
  -g, --gid GROUP               name or ID of the primary group of the new
                                account
  -G, --groups GROUPS           list of supplementary groups of the new
                                account
  -h, --help                    display this help message and exit
  -k, --skel SKEL_DIR           use this alternative skeleton directory
  -K, --key KEY=VALUE           override /etc/login.defs defaults
  -l, --no-log-init             do not add the user to the lastlog and
                                faillog databases
  -m, --create-home             create the user's home directory
  -M, --no-create-home          do not create the user's home directory
  -N, --no-user-group           do not create a group with the same name as
                                the user
  -o, --non-unique              allow to create users with duplicate
                                (non-unique) UID
  -p, --password PASSWORD       encrypted password of the new account
  -r, --system                  create a system account
  -s, --shell SHELL             login shell of the new account
  -u, --uid UID                 user ID of the new account
  -U, --user-group              create a group with the same name as the user
  -Z, --selinux-user SEUSER     use a specific SEUSER for the SELinux user mapping


Once I was able to log in under my new account, I tried setting up fswebcam to collect some timelapse video, and then I hit my first hitch: I needed to be part of the video group to run fswebcam.

id

The id command run from the command line…

id username

…will list not only the user's UID and GID, but all of the groups that the user belongs to. It has the options -u, -g and -G: -u lists the UID alone, -g lists the user's GID alone and -G lists all of the group IDs that the user belongs to.
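A hedged example of the kind of output it gives; the numeric IDs and group list here are illustrative, not copied from my Pi:

id erick
uid=1001(erick) gid=1004(erick) groups=1004(erick),44(video),115(admin)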

usermod

I was not part of the video group, so I would have to add myself, but I was not part of the admin group either, so I was not even able to run sudo.

So with a quick logout to the pi user, I was able to add myself to the admin group.

sudo usermod -a -G admin erick

The sudoers file

If you run sudo visudo, it opens the /etc/sudoers file for editing (via a temporary copy, /etc/sudoers.tmp, which is syntax-checked before it replaces the real file). At the bottom of this file there is a line that grants accounts in the admin group the right to run sudo.

# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL

Now that my account is in the admin group, I can log back in as erick and run…

sudo usermod -a -G video erick

…to add myself to the video group (the new group membership takes effect at the next login). Now I was off and running with fswebcam under my own account.

NFS and Users

With users there is the notion of the name and then there is the numerical UID. NFS uses the numerical UID to map users across machines. If you plan on using NFS on multiple machines, it pays to keep the UIDs lined up between them. For example, if you set up two Linux machines from scratch, the first user created on each will be at UID 1000; that would be you, whatever name you gave it. If you then use NFS to mount a directory from one machine to the other, it all lines up: the user at UID 1000 is the same on both machines, permissions work out, and files can be moved back and forth with no problems.

But consider the example of the Raspberry Pi above. The user pi is created when you load the Raspbian option from NOOBS, and it sits at UID 1000 and GID 1000. So if you add another user for yourself, guess what: it lands at UID 1001. That is something to keep in mind when using NFS. You can get around the mismatch using the methods laid out in the NFS post.

But it is much easier to keep the names and UIDs lined up from the beginning and not have to worry about that trickiness, even if it means adding a user to the Raspberry Pi and then moving the pi user to some other UID and putting yourself at UID 1000, GID 1000 so that it lines up with the other machines on your network. A sketch of that kind of renumbering follows.
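Here is a hedged sketch of that renumbering, assuming pi currently holds UID/GID 1000 and your own account sits at UID 1001 / GID 1004 as mine does; run it as root from a console where neither account has anything running, and treat it as an outline rather than a recipe:

usermod -u 1010 pi         # move the pi user to a free UID (1010 is arbitrary)
groupmod -g 1010 pi        # move the pi group to a free GID
usermod -u 1000 erick      # take over UID 1000
groupmod -g 1000 erick     # take over GID 1000
# Depending on the version of the tools, the primary GID recorded in /etc/passwd
# may need updating too: usermod -g 1000 erick (and usermod -g 1010 pi).
# usermod -u re-owns files in the home directory, but other files keep the old IDs:
find / -xdev -user 1001 -exec chown -h erick {} \;
find / -xdev -group 1004 -exec chgrp -h erick {} \;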


Reduce writes to the Raspberry Pi SD card

My Raspberry Pi server has had 5 months of solid up-time and has been running great. It takes a picture every hour and creates a timelapse video from them every day. It is also being used as a place to periodically drop files from other places on the network, a little bit of file storage. Eventually I will add more storage space to it and use it even more for network storage.

Recently, I started to think about the potential wear of the SD card as I came across several articles online dealing with the topic. I decided to make a few changes to the Raspberry Pi configuration to reduce the amount of writing to the SD card.

Write Saving #1: Using a tmpfs

I edited /etc/default/tmpfs. The comments in it state that /run, /run/lock and /run/shm are already mounted as tmpfs on the Pi by default, which I have observed; this was a change made a while ago for the Pi, according to the buzz online. I additionally set RAMTMP=yes to add /tmp to the directories put on tmpfs, which mounts /tmp with world read-write-execute permissions. There was also a suggestion I saw online to limit the sizes of the various directories, so I added that as well.

# These were recommended by http://raspberrypi.stackexchange.com/questions/169/how-can-i-extend-the-life-of-my-sd-card
# 07262015, mods for using less of the SD card, RAM optimization.
TMPFS_SIZE=10%VM
RUN_SIZE=10M
LOCK_SIZE=5M
SHM_SIZE=10M
TMP_SIZE=25M

The OS and some programs use /tmp, but so do I. I create a /tmp/web folder under it when the Raspberry Pi boots. Into this folder go files such as the hourly photo and the daily video that the scripts create for the attached webcam. I have reduced three hourly writes to just one photo: I keep only one on the SD card, as I don't want to risk losing a whole day's worth of photos by relying totally on the tmpfs. If I were using a UPS, I would have no problem keeping all of them on the tmpfs and occasionally backing up to the SD card or another device. The big saver is the daily timelapse.avi for the web, which is created every day from all of the hourly captured photos. It is many megs in size, gets written daily, and it doesn't matter if I lose it; it can be recreated from the photos at will. So it is the perfect kind of file to throw on a RAM file system.

I also store there the hourly and daily logs that I create with the cron-driven logcreate script that I run. The logcreate script creates an hourly log that is concatenated into a daily log on the tmpfs; then every day the daily log is concatenated into a full log on the SD card, which gets rotated, so I have a permanent record. (Need to put the link for this here!)
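Here is a rough sketch of that kind of scheme, not the actual logcreate script; the paths and the daily argument are made up for illustration:

#!/bin/bash
# logcreate sketch: append the hourly log to a daily log on the tmpfs;
# once a day, fold the daily log into a permanent, rotated log on the SD card.
HOURLY=/tmp/web/hourly.log
DAILY=/tmp/web/daily.log
PERMANENT=/home/erick/logs/full.log     # lives on the SD card

if [ "$1" = "daily" ]; then
    cat "$DAILY" >> "$PERMANENT" && > "$DAILY"     # run from a daily cron entry
else
    cat "$HOURLY" >> "$DAILY" && > "$HOURLY"       # run from an hourly cron entry
fi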

What is a tmpfs?

It is a RAM disk, a.k.a. RAM drive, that allows RAM to be used as if it were a hard drive. Obviously, when the power goes out, its contents go away, so we don't want anything important to go there. But for things like the hourly photos that my webcam takes, the video it makes daily and the logs, it is perfectly fine. It is not a big deal if the power goes out and I lose this information, as it will be recreated shortly anyway.

Caution

The only issue I see with having logs on a tmpfs is a situation where the Pi gets into a state of weirdness, starts rebooting itself, and then you have no logs to track down the problem. In that case, I suppose, it would just be a matter of changing /etc/fstab to put the logs back onto the SD card for a while to track the problem down. But for a Raspberry Pi like mine, which is running stable and which I am not doing many experiments with right now, having the logs in volatile memory is not something I worry about. Plus, it is easy to make a script that backs up the logs to the SD card or another computer when you manually reboot, so you can save them when you have control of the reboots.

Write Saving #2: Turning off swap

If the Raspberry Pi runs out of RAM (not likely if it is a server set up for light-duty usage), it will start to use swap, which is on the SD card, causing writes to the swap file. Mine rarely touches swap; I would rather tune the thing for better memory use than have it use swap.

It is possible to turn off swap usage using the command…

sudo dphys-swapfile swapoff

This is not persistent and needs to be done on every boot. It could be put into the root crontab by editing it with sudo crontab -e and adding the line below, or by creating a script for it along with other items that are to be run at startup.

@reboot dphys-swapfile swapoff

Online, people said that there was another way to turn it off, by reducing the swap file size to zero in a config file for swap (I can't remember the name). But it is claimed that when the Pi reboots it just overrides that and makes a default 100 MB swap file.

Write Saving #3: Moving /var/log to a tmpfs

One of the biggest offenders as far as periodic writes go is the logs that live under /var/log and its sub-directories. You can create an entry in /etc/fstab that mounts a tmpfs on /var/log. The only caution here is daemons, like Apache, that require a directory to exist under /var/log or else they will not start. Apt also has a directory under /var/log, but it creates it itself when apt runs, so that is no problem; the apt directory holds logs that keep track of what apt installs or uninstalls, good info to have. News seems to work fine creating a directory for itself too. So for me only Apache is a problem.

  1. Put an entry in /etc/fstab…
     tmpfs /var/log tmpfs defaults,noatime,mode=0755 0 0
  2. Found out that the news and apt folders create themselves when those programs run.
  3. Apache is the one thing that does not like a missing folder, so for now I made a kludge using ~/bin/setup-tmp.sh, in which I create /var/log/apache2 and chmod it 750; a sketch of that kind of script follows this list. Then I restart Apache using apachehup.sh, which just restarts it. Apache was failing to start when I pointed the log dir to /tmp via the export APACHE_LOG_DIR line in /etc/apache2/envvars.
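A minimal sketch of what a script like that might contain, assuming it runs at boot from root's crontab before the daemons come up; the ownership line is an assumption based on the usual Debian setup, not something copied from my actual setup-tmp.sh:

#!/bin/bash
# setup-tmp.sh sketch: recreate directories that live on tmpfs and vanish at reboot.
mkdir -p /tmp/web                # drop folder for the webcam photo and timelapse
mkdir -p /var/log/apache2        # Apache will not start without its log directory
chmod 750 /var/log/apache2
chown root:adm /var/log/apache2  # assumed typical Debian ownership for Apache logs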

Write Saving #4: noatime

As you can see above, one of the options used in the /etc/fstab entry is noatime. By default the Raspberry Pi uses this option for the mount of the SD card; if you add mount points of your own to the card, make sure noatime is used. Without it, Linux makes a small write each time a file is read, just to keep track of when it was last accessed. It is fine to use it on the tmpfs as well, as I am doing above; it saves a bit of time, since the system does not have to do a write when a file is only being read.

Another good use of noatime is for drives connected across the network. On NFS mounts, for example, noatime is a really good choice: the network is generally slower than devices attached directly to the PC, and having to send a write across every time a file is read slows things down when moving many files.
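For example, a hedged version of the NFS fstab entry from the NFS post below, with noatime added:

192.168.1.17:/home/erick    /home/erick/erick-pi    nfs    auto,nolock,noatime    0    0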

I have been running this setup with the RAM savings for a few months now with no problems. I hardly ever see the ACT light blinking on the Pi anymore.


The LEDs have the following meanings:

  • ACT – D5 (Green) – SD card access
  • PWR – D6 (Red) – 3.3 V power is present
  • FDX – D7 (Green) – Full duplex (LAN) connected
  • LNK – D8 (Green) – Link/Activity (LAN)
  • 100 – D9 (Yellow) – 100 Mbit (LAN) connected

From http://www.raspberrypi-spy.co.uk/2013/02/raspberry-pi-status-leds-explained/


Network File System (NFS)

For a while I have been using Samba to connect Windows computers to my Linux computers and one Linux file server; I can even connect my Linux laptop to my Linux server via Samba. But recently I bought a Raspberry Pi, and I got interested in using NFS for three reasons.

  1. The Raspberry Pi will be running 24/7, and I would like the option to automount its home folder and others from it on my Linux computers when I start them up.
  2. I would like to mount some folders from a “big” file-storage Linux server, which I can start remotely, onto the Raspberry Pi so that they act like expanded storage on the Pi. With the big server started remotely and its folders mounted over NFS, the Pi acts as a proxy: it is the machine connected to the web, and I can navigate to folders that actually live on the big server through the Pi.
  3. It would make it easy to back up Linux machines, including the Pi, to the big server periodically. Years ago I thought about NFS for mounting folders for backup, but I was pretty happy using a scripted FTP system for backups, so I shelved implementing NFS mounts back then.

Implementing NFS was a lot easier than I thought it would be. It was actually much easier than getting Samba to work the way I wanted it to.

The first step (in my opinion) is to have the machines that you will mount directories from and to on static IP addresses. On a home network it really does make things easier to have all the machines, other than guest machines, on static IPs. This can be done either by setting the machine itself to a static IP, or, depending on the router, by configuring the router to hand out the same IP address to the machine with that specific MAC ID. Effectively the results are the same.

Static IPs are useful because the actual IP addresses will be listed in the exports file. It may be possible to use names, but this depends on how DNS is handled on your network; using the actual IP addresses makes the initial setup nearly foolproof. An easy way to use names on any machine is to add the static IPs and names of the machines on the network to the /etc/hosts file.

Install NFS Support

To install NFS server support on the machine that will export directories, run…

sudo apt-get install nfs-kernel-server

Exports File on Server

Modify the server's /etc/exports file to suit your needs. Below is an example from my system on the Raspberry Pi. Remember to restart the NFS server when you have made changes to the file…

sudo service nfs-kernel-server restart
/etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#        to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/srv/homes       192.168.1.9(rw,sync,no_subtree_check)
/home/erick      192.168.1.0/24(rw,all_squash,anonuid=1001,anongid=1004,no_subtree_check)
/                192.168.1.9(rw,no_root_squash,anonuid=1001,anongid=1004,no_subtree_check)

My initial try at this was to semi-follow an example: create a /srv/homes directory and export it to one machine at 192.168.1.9. rw = read-write access; sync = reply to requests only after changes have been committed to stable storage; no_subtree_check = disables subtree checking, which avoids problems when a file is renamed while a client still has it open.

Then I decided to export my own home directory to all of the machines on the LAN, using 192.168.1.0/24, which allows access from any address on the 192.168.1.x subnet. This time I am using all_squash, which would normally map all UIDs and GIDs to nobody and nogroup, but by also setting anonuid=1001 (my UID on the Raspberry Pi) and anongid=1004 (my group on the Raspberry Pi), requests from the other machines get mapped to my own user and group on the Pi instead. Therefore I have no problem with read-write access to the NFS share as myself, even though my UID is not the same on every machine.

The next line exports the entire file system to one machine, but has no_root_squash set, which allows root on the client to access and create files as root on the server. This is one to be careful with; I use it only when I need to mount the entire file system, and usually I have to move things around as root anyway, but as always, be cautious.

Restart Required

After modifying the /etc/exports file the NFS server needs to be restarted using…

service nfs-kernel-server restart

Client Machine

You have to install the common code for NFS on the client machine.

sudo apt-get install nfs-common
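Once nfs-common is installed, a quick sanity check from the client is to ask the server what it is exporting; showmount comes with nfs-common, and the address below is the Pi's from this post:

showmount -e 192.168.1.17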

Mount Commands

My Home Directory on the Raspberry Pi

In this case I am mounting my home directory from the Pi under /home/erick-pi. I had to use the nolock option because I was getting an error without it; other than that it works fine.

Example of the mount command from the command line…

sudo mount -o nolock 192.168.1.17:/home/erick /home/erick-pi
Entire Root Directory

Occasionally I want to mount the entire file system of the Raspberry Pi at a location on one PC.

sudo mount -o nolock 192.168.1.17:/ /mnt/nfs/srv/
Mount Scripts

For situations that only apply occasionally, such as the above example of mounting the entire directory structure, I have created some scripts, placed them in the bin folder under my home folder and made them executable using chmod +x filename. Then I can run them as needed, using filenames that make sense to me, like the one for the code below, rasp-pi-mount-root.sh.

#! /bin/bash
sudo mount -o nolock 192.168.1.17:/ /mnt/nfs/srv/
intr option

Note that the intr option for NFS mounts is a good one if the computer loses its connection with the server. With the Raspberry Pi this is never an issue for me, but it is with other servers. It allows an interrupt to stop NFS requests if the server goes down or the connection is lost. If intr is not used, NFS will keep trying and the process will hang, requiring a reboot. I have had this occur mostly with servers that are on only part time: most likely I started the server and then put my computer on standby, and when I started the computer back up with the server off, NFS hung looking for mount points that had disappeared. This hangs not only the folder windows in X, but also any command in a terminal that has to touch NFS, such as df -h, which tries to look for something that is not there anymore.

Hard and Soft Options

The hard and soft options are like what they sound like. A soft mount gives up after a timeout and does not keep trying to read or write an NFS mount point that flakes out or goes down. Soft mounts should only be used for read-only mounts, because data being written can be corrupted if a soft mount gives up, where a hard mount will just keep trying until the server comes back. If a read-write mount point is flaking out or dropping, it is best to mount it hard (the default) and use the intr option.
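As a hedged illustration of the difference, reusing the Pi's address from above; the timeo and retrans values are arbitrary examples, not tuned numbers:

sudo mount -o ro,soft,timeo=100,retrans=3,nolock 192.168.1.17:/home/erick /home/erick-pi
sudo mount -o hard,intr,nolock 192.168.1.17:/home/erick /home/erick-pi

The first is a read-only soft mount that gives up after a few retries; the second is the hard style for read-write use, which waits for the server but can be interrupted thanks to intr.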

Mounting on Startup

It is possible to set up the /etc/fstab file to mount the NFS drive on startup. It is as simple as adding the following line to the file…

192.168.1.17:/home/erick    /home/erick-pi    nolock    0    0

On my laptop this did not work. I remembered that the wireless LAN is handled at the user level and is not up during bootup when the mount points in /etc/fstab are processed, so I got an error about the mount point not being found.

On my desktop running Lubuntu 14.04, connected via Ethernet cable, the line above did not work either, but I modified it as follows and it worked. The problem was most likely that I left out the nfs filesystem-type field; I had thought the OS could tell it was an NFS mount from the format, oh well. I also decided to mount it right under my home folder on the desktop…

192.168.1.17:/home/erick    /home/erick/erick-pi nfs   auto,nolock    0    0

On the desktop once the drive is mounted it will stay mounted even if I reboot the Rasp Pi while the desktop PC is running.

Delayed Mount on Startup

To mount an NFS drive on a machine that uses wireless, you have to mount it after the machine connects to the router, and by then the system is already running at the user level, so you have to trick it into waiting. There are multiple ways of doing this; I chose putting a line into the root crontab with a sleep 60 for the delay. After all, most mounting has to be done as root anyway.

So I put a 60 second delay in before the mount command executes in the root crontab using the @reboot directive…

@reboot bash -c "sleep 60; mount -o nolock 192.168.1.17:/home/erick /home/erick-pi"

To edit the root crontab, simply do…

sudo crontab -e

To list what is in the root crontab, which is how I cut and pasted the code above, simply do…

sudo crontab -l

Using Names

It is entirely possible to use names instead of IP addresses when you mount NFS drives, and even in the /etc/exports file. One caution: if DNS is down or flaky on your LAN, it could present a problem with reliably mounting drives, so I recommend adding the server names to your /etc/hosts file. On my LAN I take it a step further: the servers are all set to static IPs, and my router can always hand out the same IP to a machine with a specific MAC address, which I use for laptops and other devices that normally connect to the network. So in effect every regularly used device has a static IP, I can put them all in /etc/hosts, and I don't have to care about DNS on the LAN for those machines 99% of the time.

/etc/hosts comes with the top two entries; just add what you want to it. As you can see, commenting entries out works too. The erick-MS-6183 server is down, probably for good at this point!

 127.0.0.1    localhost
 127.0.1.1    erick-laptop
 #192.168.1.11    erick-MS-6183
 192.168.1.2    renee-pc
 192.168.1.9      erick-laptop
 192.168.1.10     ubuntuserver
 192.168.1.17     raspberrypi

Gotcha

One time I was backing up my laptop to a laptop-backup directory under my home folder on the big file server. The problem was that my home folder on the big file server was itself NFS-mounted as a folder under my home folder on the laptop, so the backup copied in circles until the hard drive filled up. Oh well, learned the hard way on that one! Be careful of NFS mounts, and even symlinks, when running backups.
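One hedged way to guard against that, if the backup is done with rsync (my original backup was scripted differently, and the server name and paths here are made up): keep the copy from crossing into other filesystems or into the mount point itself.

rsync -a --one-file-system --exclude='erick-pi/' /home/erick/ bigserver:/home/erick/laptop-backup/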

NFS and Users

NFS maps users by their numerical UID rather than by name, so it pays to keep UIDs lined up between machines that will share NFS mounts. See the “Users, Groups and Sudo” post above for the details and a sketch of how to renumber accounts.

Resources

Used this one to get started with NFS…

https://help.ubuntu.com/community/SettingUpNFSHowTo

Helped to figure out the whole user and group ID mapping…

NFS: Overview and Gotchas

 exports(5) – Linux man page

Easy to follow; I think I might have started with this one…

Setting up an NFS Server and Client on Debian Wheezy

I need to look at this one for a sanity check on the errors when I launch the NFS server on the Raspberry Pi…

Problem with NFS network