Category Archives: Linux

Posts covering information I have learned along the way, all related to Linux.

Upgrade from Ubuntu 13.04 to Ubuntu 14.04 LTS in 5 steps

When we got back from house sitting this winter and into my regular house, I finally got around to installing a newer version of Lubuntu Linux on my desktop at home. I wanted to get away from using Windows XP and didn’t feel like installing a new version of Windows. The original Wildcat video card in the PC didn’t support Linux well, so I installed an old GeForce card I had sitting in a box from a carcass machine. It was a moment where I said, why didn’t I think of this years ago! I first installed Linux on it 5 years ago, saw that the video didn’t work quite right, didn’t dig any deeper than trying a lot of settings changes, then gave up and lived with it. I used XP heavily on it, and I have to admit I was pretty OK with the way XP was working, so it was a case of leave it alone if it is OK.

I actually left the other video card in place (an expensive Wildcat card, a large unit that fills the full footprint of the bay; it originally cost the company that bought it $2K for CAD/CAM use). I found a setting in the BIOS for legacy detection of video; it was set to AGP, I switched it to Auto, and it seems to work!

But now I have it pretty much set up and working well. The machine has 2 identical hard drives that hold copies of the same files and 2 copies of Win XP, one on each drive. Plus I back it up to an external server, when I remember to; it’s been too long already! I can start it in Linux (Lubuntu 14.04 LTS or Ubuntu Server 12.04, for testing) or one of the 2 XP’s, but I am pretty much going to keep using Lubuntu Linux on it. It is faster than XP was and doesn’t crash; it just works better in general.

Tutorial Page

The tutorial page below worked well. I had the Lubuntu 13.04 CD from trying it out on one of my old servers a few years back. I used it to install on my desktop because the unit doesn’t have a DVD player and Lubuntu 14.04 LTS is just a tad oversized for a CD. When the install was going on, I just selected the root directory to go in the same place (on sdb5 in my case) as the old 9.10 Ubuntu install. At that question, the choice for me was OTHER, not side-by-side or wipe-drive.

Then I used the steps in the tutorial to migrate from Lubuntu 13.04 to Lubuntu 14.04 LTS. (I did try Lubuntu 15.04, which fits on a CD, but it did not run; I checked the disc and the MD5 sum too, so it just might not be compatible with the machine.) The only thing I did above and beyond the tutorial was to run…

sudo apt-get update && sudo apt-get upgrade

…before and after installing Lubuntu 14.04 LTS. And after the final update and upgrade I ran…

sudo apt-get autoremove

…to remove pieces of Lubuntu 13.04 that were not needed with Lubuntu 14.04 LTS.
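For reference, the core of that style of in-place upgrade is repointing the apt sources from the old release to the new one and then dist-upgrading. A minimal sketch, assuming stock 13.04 (raring) sources; the tutorial linked below covers the full step-by-step…

# Sketch only: swap the 13.04 (raring) archive names for 14.04 (trusty), then upgrade.
sudo sed -i 's/raring/trusty/g' /etc/apt/sources.list
sudo apt-get update && sudo apt-get dist-upgrade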

http://www.tuxtrix.com/2014/03/upgrade-from-ubuntu-1304-to-ubuntu-1404.html

Simple WebDAV

WebDAV: the DAV stands for Distributed Authoring and Versioning. In its simplest form it is a folder that can be accessed from the web, with a username and password to keep the content locked. There are basically two versions, plain and SSL; the SSL version is secure in that the data flowing in and out of the folder is encrypted as it moves through the web. In this post I am covering the simple non-SSL form for starters.

This post assumes that Apache is installed; if you need to install it, do…

sudo apt-get install apache2

Then load the Apache modules for DAV…

sudo a2enmod dav
sudo a2enmod dav_fs

Create a folder for WebDAV

I created a directory at…

/srv/homes/webdav

…the command…

sudo mkdir -p /srv/homes/webdav

…will allow the folders above webdav, such as homes, to be created if they do not exist.
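One extra step that I would suggest, an assumption on my part rather than something from the tutorial I followed: give the Apache user ownership of the folder so WebDAV uploads can actually be written. On Ubuntu, Apache runs as www-data…

# Assumption: Apache runs as the default www-data user/group
sudo chown -R www-data:www-data /srv/homes/webdav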

Edit the Apache default file

Access to the WebDAV folder is controlled by the sites-available/default file. To edit it, run…

sudo nano /etc/apache2/sites-available/default

Towards the bottom of the file right above the section that has the ScriptAlias for the /cgi-bin/ directory, I placed the following code…

Alias /webdav /srv/homes/webdav
<Location /webdav>
    Options Indexes
    DAV On
    AuthType Basic
    AuthName "webdav"
    AuthUserFile /etc/apache2/webdav.password
    Require valid-user
</Location>

Adding the Password

Use the htpasswd command to add a password to a webdav.password file; it will prompt you for the password. The file will contain hashed passwords, which are not human-readable.

sudo htpasswd -c /etc/apache2/webdav.password username

For an extra level of protection you can change ownership of the file to root with the group www-data, so no regular users can read the file, setting the permissions to read-write for owner root and read-only for the www-data group…

sudo chown root:www-data /etc/apache2/webdav.password
sudo chmod 640 /etc/apache2/webdav.password
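For the DAV modules and the edited default file to take effect, restart Apache…

sudo service apache2 restart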

Access the Folder

With everything set up, the folder will now appear at http://your-url-here.com/webdav; you can browse to it to test it out. You will be prompted for the username and password created earlier in the Adding the Password step.
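It can also be tested from the command line with curl, if you have that installed. A quick sketch with a hypothetical username and URL; -u prompts for the password, and -T uploads a file via PUT…

# List the folder (will prompt for the password)
curl -u username http://your-url-here.com/webdav/
# Upload a test file via PUT
curl -u username -T test.txt http://your-url-here.com/webdav/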

Further Potential for WebDAV

  • Set up multiple WebDAV folders.
  • Put a web folder on expanded storage on a Raspberry Pi, such as using a bind mount to point to a USB stick plugged into the Pi for extra storage space (see the sketch after this list).
  • It is possible to set up WebDAV with SSL, so that the data flowing in and out of the folder is secured from prying eyes. With my non-SSL WebDAV folder, I don’t put anything up there that is critical or really private.
  • It is possible to use DAV to support calendars across devices, something I will explore in the future.
  • There is an app for the iPhone that I have tried that allows easy uploading and downloading to the WebDAV folder. It makes it easy to drop attachments from email, etc. to the folder for access on a PC.
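For the Raspberry Pi idea in the list above, a minimal bind mount sketch; the USB stick path is hypothetical and assumed to be mounted already…

# Hypothetical: USB stick already mounted at /media/usbstick
sudo mount --bind /media/usbstick/webdav /srv/homes/webdav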

Resources

https://www.digitalocean.com/community/tutorials/how-to-configure-webdav-access-with-apache-on-ubuntu-12-04 

WebDAV Resources

 

Raspberry Pi

Network File System (NFS)

For a while I have been using Samba to remotely connect Windows computers to my Linux computers and one Linux file server. I can even connect my Linux laptop to my Linux server via Samba. But recently I bought a Raspberry Pi, and I got interested in using NFS for three reasons.

  1. The Raspberry Pi will be running 24/7, and I would like the option to automount its home folder and others on my Linux computers when I start them up.
  2. I would like to mount some folders from a server (a “big” file-storage Linux server that I can start remotely) onto the Raspberry Pi, so they act like expanded storage on the Pi. Then I can start the “big” server remotely, mount its folders on the Pi, and use the Pi as a proxy: the Pi is connected to the web, and I can navigate to folders that live on the big server via a connection to the Pi with NFS-mounted directories.
  3. It would make it easy to back up Linux machines, including the Pi, to the big server periodically. Years ago I thought about NFS for mounting folders for backup, but I was pretty happy using a scripted FTP system for backups, so I shelved implementing NFS mounts back then.

Implementing NFS was a lot easier than I thought it would be. It was actually much easier than getting Samba to work the way I wanted it to.

The first step (in my opinion) is to put the machines that you will mount directories from and to on static IP addresses. On a home network it really does make it easier to have all the machines, other than guest machines, on static IPs. This can be done either by setting the machine itself to a static IP, or, depending on the router, by configuring the router to hand out the same IP address to a machine with a specific MAC ID. Effectively the results are the same.

Static IPs are useful because the actual IP addresses will be listed in the exports file. It may be possible to use names, but this depends on how DNS is handled on your network; using the actual IP addresses makes initial setup nearly foolproof. An easy way to use names on any machine is to add the static IPs and names of the machines on the network to the /etc/hosts file.

Install NFS Support

To install NFS server support, run…

sudo apt-get install nfs-kernel-server

Exports File on Server

Modify the server’s /etc/exports file to suit your needs; below is an example from my system on the Raspberry Pi. Remember to restart the NFS server whenever you have made changes to the file…

sudo service nfs-kernel-server restart
/etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#        to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/srv/homes       192.168.1.9(rw,sync,no_subtree_check)
/home/erick      192.168.1.0/24(rw,all_squash,anonuid=1001,anongid=1004,no_subtree_check)
/                192.168.1.9(rw,no_root_squash,anonuid=1001,anongid=1004,no_subtree_check)

My initial try at this semi-followed an example: create a /srv/homes directory and export it to one machine at 192.168.1.9. rw = read-write access; sync = keep the folders on server and client in sync; no_subtree_check = keeps the machine from having to check the consistency of file names, which prevents problems if a file is renamed while open.

Then I decided to export my own home directory to all of the machines on the LAN, using 192.168.1.0/24, which allows access from 192.168.1.1-192.168.1.255. This time I am using all_squash, which maps all UIDs and GIDs to nobody and nogroup, and then setting anonuid=1001 (my UID on the Rasp Pi) and anongid=1004 (my group on the Rasp Pi), so they map over to the correct UID and GID for me on the other machines. Therefore I have no problem with read-write access, as the same user, from the other machines to the NFS share.

The next line exports the entire file system to one machine, but has no_root_squash set, which allows root on the client to access and create files as root on the server. This is one to be careful with; I use it only when I need to mount the entire file system, and usually I have to move things around as root anyway, but as always, be cautious.

Restart Required

After modifying the /etc/exports file the NFS server needs to be restarted using…

sudo service nfs-kernel-server restart
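Alternatively, the exports file can be re-read without fully restarting the service…

sudo exportfs -ra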

Client Machine

You have to install the common code for NFS on the client machine.

sudo apt-get install nfs-common

Mount Commands

My Home Directory on the Raspberry Pi

In this case I am mounting my home directory from the Pi under /home/erick-pi. I had to use the nolock option because I was getting an error without it; other than that, it works fine.

Example of the mount command from the command line…

sudo mount -o nolock 192.168.1.17:/home/erick /home/erick-pi
Entire Root Directory

Occasionally I want to mount the entire file system of the Raspberry Pi at a location on one PC.

sudo mount -o nolock 192.168.1.17:/ /mnt/nfs/srv/
Mount Scripts

For situations that only apply occasionally, such as the above example of mounting the entire directory structure, I have created some scripts, placed them in the bin folder under my home folder, and made them executable using chmod +x filename. Then I can run them as needed, with filenames that make sense to me; the one for the code below is rasp-pi-mount-root.sh.

#! /bin/bash
sudo mount -o nolock 192.168.1.17:/ /mnt/nfs/srv/
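A matching unmount script is handy too. This one is my guess at a sensible counterpart, call it rasp-pi-umount-root.sh or similar…

#! /bin/bash
# Unmount the Pi's root filesystem from the local mount point
sudo umount /mnt/nfs/srv/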
intr option

Note that the intr option for NFS mounts is a good one in case the computer loses its connection with the server. With the Raspberry Pi this is never an issue for me, but it is with other servers. It allows an interrupt to stop NFS requests if the server goes down or the connection is lost. If intr is not used, NFS will keep trying and the process will hang, requiring a reboot. I have had this occur mostly with servers that are on only part time. Most likely, I started the server and then put my computer on standby; when I start the computer and the server is off, NFS hangs looking for the mount points that have disappeared. This hangs not only the X window folders, but also any command in a terminal that has to touch NFS, such as df -h, which tries to look for something that is not there anymore.

Hard and Soft Options

The hard and soft options are what they sound like. A soft mount gives up after a timeout and doesn’t keep trying to read or write an NFS mount point that flakes out or goes down. Soft mounts should only be used for read-only mounts, because data being written can be corrupted if a soft mount gives up, where a hard mount will just keep trying until the mount comes back. If a read-write mount point is flaking out or dropping, it is best to mount hard (the default) and use the intr option.

Mounting on Startup

It is possible to set up the /etc/fstab file to mount the NFS drive on startup. It is as simple as adding the following line to the file…

192.168.1.17:/home/erick    /home/erick-pi    nolock    0    0

On my laptop this did not work. I remembered that wireless LAN is handled at the user level and is not up during boot, when mount points are handed out via the /etc/fstab file, so I got an error about the mount point not being found.

On my desktop running Lubuntu 14.04, connected via Ethernet cable, the line above did not work either, but I modified it as follows and it worked. It might be that I had left out the field with nfs, although I thought that the OS could tell it was an NFS mount from the format; oh well. I also decided to mount it right under my home folder on the desktop…

192.168.1.17:/home/erick    /home/erick/erick-pi nfs   auto,nolock    0    0

On the desktop once the drive is mounted it will stay mounted even if I reboot the Rasp Pi while the desktop PC is running.

Delayed Mount on Startup

To mount an NFS drive on a machine that has wireless, you have to mount it after the machine connects to the router, and by then the system is already running at the user level. You have to trick the system into waiting. There are multiple ways of doing this; I chose putting a line into the root crontab and used sleep 60 for the delay. After all, most mounting has to be done as root anyway.

So I put a 60 second delay in before the mount command executes in the root crontab using the @reboot directive…

@reboot bash -c "sleep 60; mount -o nolock 192.168.1.17:/home/erick /home/erick-pi"

To edit the root crontab, simply do…

sudo crontab -e

To list what is in the root crontab, which is how I cut and pasted the code above, do…

sudo crontab -l

Using Names

It is entirely possible to use names instead of IP addresses when you mount NFS drives, and even in the /etc/exports file. One caution: if DNS is down or flaky on your LAN, it could present a problem with reliably mounting drives. Therefore I recommend adding the server names to your /etc/hosts file. On my LAN I take it a step further: the servers are all set to static IPs, and my router has the ability to always hand out the same IP to a machine at a specific MAC address, so I use that for laptops and other machines that normally connect to the network. In effect, every normally used device has a static IP. Therefore I can put them all in /etc/hosts, and I don’t even have to care about DNS on the LAN for the machines on it 99% of the time.

/etc/hosts comes with the top two entries; just add what you want to it. As you can see, commenting entries out works too. The erick-MS-6183 server is down, probably for good at this point!

127.0.0.1        localhost
127.0.1.1        erick-laptop
#192.168.1.11    erick-MS-6183
192.168.1.2      renee-pc
192.168.1.9      erick-laptop
192.168.1.10     ubuntuserver
192.168.1.17     raspberrypi

Gotcha

One time I was backing up my laptop to a laptop-backup directory under my home folder on the big file server. The problem was that I had my home folder on the big file server set as an NFS mount under my home folder on the laptop. It copied in circles until the hard drive filled up. Oh well, learned the hard way on that one! Be careful of NFS mounts, and even symlinks, when running backups.

NFS and Users

With users there is the notion of the name, and then there is the numerical UID. NFS uses the numerical UID to map users across machines. If you plan on using NFS on multiple machines, it pays to keep the UIDs lined up between them. For example, if you set up 2 Linux machines from scratch, there will be a user at UID 1000; that would be you, whatever you called it by name. The first user is at 1000. If you use NFS to mount a directory from one machine to another, no problem, it all lines up: the user at UID 1000 is the same on both machines, permissions work out, and files can be moved back and forth with no problems.
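The id command shows the numeric UID and GID for a user, which makes it easy to confirm they line up from machine to machine. A quick sketch, with example output based on my IDs from the exports example above…

id erick
# uid=1001(erick) gid=1004(erick) ...   (on the Rasp Pi; a from-scratch machine would show 1000)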

Resources

Used this one to get started with NFS…

https://help.ubuntu.com/community/SettingUpNFSHowTo

Helped to figure out the whole user and group ID mapping

NFS: Overview and Gotchas

 exports(5) – Linux man page

Easy to follow; I think I might have started with this one,

Setting up an NFS Server and Client on Debian Wheezy

I need to look at this one for a sanity check on the errors when I launch the NFS server on the Raspberry Pi,

Problem with NFS network

 

SSH Keys

On one of my servers, once I got it set up right and running smoothly, there was rarely a need to log into it remotely via SSH. So I left SSH port 22 closed down in the router. When I really needed to log in, I would log into the router, hit the DMZ button to briefly open all the ports to the server, SSH in using the normal username and password combo, do my business, and lock down the ports again.

I learned about SSH keys a few years back while I was doing some volunteer work on a site. The owner of the server had SSH keys set up on it so that I could use WinSCP to move files up to it. He believed, rightly so, in keeping the security beefed up, and didn’t bother with FTP at all. Recently (February 2015) I purchased a Raspberry Pi. Eventually it will replace one of the servers I run; for now it is a test bed, and I would like to be able to log right in, no fiddling with the router! Plus, why not make it more secure? That is where SSH keys come in.

I hunted down the method to set up SSH keys online. Not hard at all. I followed one article that helped set up the keys, and it logged in great. But I could still also log in via username and password, so I had to apply another step beyond what the article explained.

Finally, once you open up port 22, many attempts to log in will occur on the port, and you can see this in your router log. Mine is set up to email me the router log, and I quickly noticed that I was being emailed logs one after the other. I decided to remap the default port 22 to an obscure number higher than 1024 by adjusting the port forwarding in the router.

Installing SSH Server

In case you haven’t installed the server part of SSH on your machine here is the command line directive…

sudo apt-get install openssh-server

Setting up SSH Keys (Public Key Authentication)

These are examples of the commands that I used to set up the keys from the client machine. It is best to try this while you are not too far from your machine physically, just in case something goes wrong and you need to get on the machine directly.

Create the RSA Public/Private Key on the client machine

You will be asked where you want the key stored; the default is the .ssh directory under your home folder, with a filename of id_rsa. Then you will be asked to provide a passphrase; hit enter if you do not want one. A passphrase provides an extra level of security: if someone gets hold of your machine or private key, they still need the passphrase to get anything going.

ssh-keygen -t rsa

You will get the following message. Depending on the machine, it may take a few seconds after the first line, while the machine is doing the calculation before you see the second line. The Raspberry Pi took about 3-4 seconds to spit out the second line.

Generating public/private rsa key pair.
Enter file in which to save the key (/home/pi/.ssh/id_rsa):

The default file location is fine; hit enter, unless you have another place that you need it and know what you are doing. I assume some config file in the system expects the key in the .ssh folder.

Next comes the passphrase question…

Enter passphrase (empty for no passphrase):

…and again….

Enter same passphrase again:

Finally the key is generated, along with a randomart image; interesting looking, but nothing we need for this operation…

Your identification has been saved in /home/pi/.ssh/id_rsa.
Your public key has been saved in /home/pi/.ssh/id_rsa.pub.
The key fingerprint is:
d7:33:ed:91:ab:00:a7:bd:15:8d:15:21:fe:ed:6b:df pi@raspberrypi
The key's randomart image is:
+--[ RSA 2048]----+
|            . o. |
|           . . . |
|            . .  |
|           . * o |
|        S o * * .|
|         *   = + |
|        . o . o .|
|           + . .o|
|          . . ..E|
+-----------------+

Copy the Public Key to the Server

Next you will copy the key up to the server using the ssh-copy-id command. It will log you in with the normal password for your login and then copy the key to the server. The example shows the user pi and IP 192.168.1.17; change these to your user and server IP.

ssh-copy-id pi@192.168.1.17

In this example I am installing it on the same machine where I created it, so this is what I see…

pi@raspberrypi ~ $ ssh-copy-id pi@192.168.1.17
The authenticity of host '192.168.1.17 (192.168.1.17)' can't be established.
ECDSA key fingerprint is 7e:f0:94:8a:bd:f2:95:44:f3:a5:36:ff:e3:64:48:a3.
Are you sure you want to continue connecting (yes/no)?
Warning: Permanently added '192.168.1.17' (ECDSA) to the list of known hosts.

…And the key is added.

Test It

Now you can log in to your server with the newly created keys. But you can still also log in via the username and password combo.
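Testing is just a normal SSH login, using the example user and IP from above…

ssh pi@192.168.1.17

If the keys are working, you will get in without the account password; if you set a passphrase, you will be asked for that instead.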

Making it SSH Key Only login

You need to set up sshd_config so that only public key authentication is allowed. This step requires editing the sshd_config file, whose location I didn’t remember, so I used…

sudo find / -name sshd_config

Edit it …

sudo nano /etc/ssh/sshd_config

Find the line that reads PasswordAuthentication; it is commented out by default, which leaves it set to yes.

Set it to no and make sure it is uncommented…

PasswordAuthentication no

Check to see that this is set also…

ChallengeResponseAuthentication no

Restart the SSH server…

sudo service ssh restart

Remapping the SSH Port 22 to something less obvious

If you don’t remap the port, lots of hits happen to it: login attempts that will fill up your router logs. In theory someone can still find the new port, but they would have to get lucky or scan the ports, so this does cut down on bogus login attempts significantly.

There are two ways of doing this: in the configuration file sshd_config, or by setting up the port forwarding in your router. I left sshd_config set at port 22 and made the change in the router. I only care about the port being mapped to something else for the outside world; on my LAN it can stay 22, so I can simply use ssh servername and not specify a port.

sshd_config mod method

There is a line near the beginning of the file, change the 22 to something else and restart the sshd server…

# What ports, IPs and protocols we listen for
Port 22

Router Port Forward Mapping Change Method

Or go into your router and find the port forwarding. In my Netgear router it was under Advanced –> Port Forwarding/Port Triggering. You will see a list that allows these settings to be changed…

Service Name   Ext. Start Port   Ext. End Port   Int. Start Port   Int. End Port   Internal IP Address

Set it up for a port other than 22 for the External Start and End Port, 5678 in this example…

SSH            5678              5678            22                22              192.168.1.170
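With the router remapping in place, connecting from outside just means telling SSH the external port; hypothetical address shown…

ssh -p 5678 user@your-url-here.com

Note that scp uses a capital -P for the same option.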

More Tightening of SSH Security

I have not done any of this yet on my machine, but FYI: under the spot in sshd_config where the port is set, there is a place where you can restrict which interfaces/addresses sshd will bind to and listen on…

# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0

It is also possible to further restrict the actual users that are allowed or denied access to the machine via SSH. This is accomplished using the AllowUsers, AllowGroups, DenyUsers, and DenyGroups directives.

Example…

AllowUsers joe bob naomi
AllowGroups workinggroup powerusergroup
DenyUsers tempuser1 tempuser2
DenyGroups gaming

You can also block the ability to log in as root, so that users have to su to root once logged in…

PermitRootLogin no

Encryption and Keys

Creating keys such as the RSA pair is an interesting mathematical concept. It falls in the realm of one-way functions: given the public key, you have only remote odds of generating the corresponding private key by brute-force search, but going the other way is easy. It’s kind of like glass breaking or throwing a cup of water in the ocean; in theory it is all still there, but putting it back together is nearly impossible.

In physics this is what makes “time” on the macro scale. On the quantum level, time really doesn’t matter; you can play particle interactions backwards and forwards and it all works out OK. Feynman diagrams work both ways. But on the macro level a lot of things just go one way, like the algorithms that generate the encryption keys. The same thing applies to hashes used to generate lookup tables: it is easy to go one way, from the hash to the lookup, but harder to go the other way. Ratchets, diodes and worm gears go one way but not the other.

Resources

How To Set Up SSH Keys

How do I force SSH to only allow users with a key to log in?

7 Default OpenSSH Security Options You Should Change in /etc/ssh/sshd_config

Ubuntu Server Guide OpenSSH Server

Alternatives to FTP

One server I have is fairly low on resources, so I opted not to run FTP; it would just mean yet another service running on a low-RAM unit. So to move files to and from this server I use scp or sftp from Linux, and WinSCP from Windows.

SCP Example

These examples assumes you can SSH into your server!

Using an FQDN

The following example shows downloading a directory’s contents from a remote server using a fully qualified domain name.

 scp -r username@serverlocation.com:/home/username/dir /home/username/dir

Using an IP address

This example, on the local network and using an IP address, copies from remote to local.

 scp -r user@192.168.1.101:/home/user/fswebcam /home/user/fswebcam

Here is an example of uploading a single file from the user’s home directory to a specific location under the user’s home directory tree on a remote computer; note that the tilde (~) means the user’s home directory.

scp ~/fswebcam/timelapse/dusk.avi user@12.34.56.78:/home/user/files/public/timelapse-video/dusk.avi

SFTP

To connect using sftp, file transfer over SSH, you can typically use “Connect to Server”, found for instance in Ubuntu under the Places menu.

  • Set the connection type to SSH
  • Set the server: IP address or FQDN
  • Port is set to 22, the standard SSH port
  • Folder is set to any folder that the user has permission to get into; /home/user is a safe bet
  • Username is set
Connect to Server in Ubuntu, Places Menu

  • You can add a bookmark to keep getting in to this connection
  • It will ask for your login password upon connecting

SFTP via Browser

Also, from a Firefox browser (haven’t tried this on others!) you can simply put sftp://user@serveraddress in the address bar. This will connect you to your home folder after you give the password at the prompt. I noticed that in Ubuntu it does the same thing that the “Connect to Server” option does: it shows a folder on the desktop, after connecting with the browser, that is the sftp connection.

WinSCP

From Windows I have used the tool WinSCP for years, as it supports FTP, SFTP and SCP. http://winscp.net/eng/index.php
It also adds support, perhaps by editing the registry, for using the sftp:// type of connection via Windows Explorer.

rsync

For Linux there is also the rsync command, which remotely synchronizes directories. I have only used it once or twice, so I don’t have much to say about it yet.
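Just to sketch the flavor of it, with paths borrowed from the scp examples above: rsync runs over SSH by default and only transfers what has changed…

# -a preserves permissions and times, -v is verbose, -z compresses in transit
rsync -avz /home/user/fswebcam/ user@192.168.1.101:/home/user/fswebcam/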

One more comment on SSH. Typically I leave SSH (port 22) closed on this server and open it up only when needed. I do this by remotely logging into my router and opening and closing the port. Alternatively, you could configure a firewall to allow only certain IP numbers to connect to SSH and deny all others. This can be done by directly editing the iptables (I will write more on this, TBD) or by using a tool such as UFW, or its graphical version GUFW, to handle it.

Fermented Figs Timelapse

Nothing beats combining two interesting things. I’ve been into fermenting foods and beverages since the late 1990’s, and have been experimenting with timelapse photography since late in 2013 (one of the first projects I set up an Ubuntu Server for). Combining them has been an interesting experience lately.

Recently I bought a 50.5 ounce glass container with a gasketed lid. I got the idea from reading The Art of Fermentation by Sandor Ellix Katz, a must-have if you are considering getting serious about fermentation; it covers a lot of territory on the fermentation landscape. This jar is ideal for some fermentation experiments, as any pressure built up in the container vents via the gasketed lid. A lot of times I don’t worry about pressure build-up, because I am close to home and can vent it manually, but this time I was going to be away, so this jar would be good to use. I decided to try to ferment some figs in it and create a timelapse video by taking photos every 4 minutes. I ran it for a week, almost 2400 frames.

The Ferment Mixture

The figs were a bit on the hard side, so they were not getting used up much, and I decided they would be good candidates for a fermentation experiment. The fermentation was started by cutting the figs into small bits and adding some sugar water and a pinch of bread yeast. Normally I would have let them ferment naturally, based on whatever wild yeasts were present on the fruit, but I wanted a vigorous fermentation that got going quickly, in order to capture the action for the timelapse video.

Timelapse Video Setup

The setup for the timelapse video was a laptop running Ubuntu with Apache, and a webcam. The program used to take the frames is fswebcam (which I cover in the post on the Bread Dough Rising Timelapse GIF). The frames were taken every four minutes and saved into a folder under my home directory. Additionally, a frame was copied to the /var/www directory to allow it to be seen on the web. Plus, I have a symbolic link from /var/www to a directory called fswebcam under my home directory; this directory holds the scripts that run fswebcam, and under it is a directory called timelapse which collects all of the frames. This allows me to flip through the frames from the web as well, so I can keep track of the fermentation progress.
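I am not reproducing my exact script here, but the gist of it can be sketched as a single crontab entry; the path and resolution are hypothetical, and note that % must be escaped as \% inside a crontab…

# Every 4 minutes, grab a frame with a date-stamped filename
*/4 * * * * fswebcam -r 640x480 --jpeg 85 /home/erick/fswebcam/timelapse/frame-$(date +\%Y\%m\%d-\%H\%M).jpg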

I went away for a few days while the timelapse frame capture was running, and it was nice to be able to view it to check on the progress. To get it online, I basically added a virtual server on the router for port 80, pointing to the internal IP address of the laptop, which was hooked to the router via WiFi. This worked flawlessly, and I was able to periodically check in on the fermentation while on the road.

Fig Fermentation Timelapse Photography Setup

Timelapse Video AVI

Fermenting Figs 1 frame every 4 minutes for 31s of video
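For anyone wondering how the frames become an AVI: one way, assuming ffmpeg is installed and the frames sit in one folder, is something like the following. This is a sketch, not my exact command; mencoder was another popular tool for this.

# Stitch the JPEG frames into a video at 25 frames per second
ffmpeg -framerate 25 -pattern_type glob -i '/home/erick/fswebcam/timelapse/*.jpg' -c:v mpeg4 -q:v 4 fermenting-figs.avi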


Wake on LAN via Windows

Windows

To wake a machine from a Windows computer there are a few choices.

wolcmd

wolcmd, a command line tool from Depicus.com, is good to use in scripts or by itself. It works 100% of the time for me.

 wolcmd yourmacaddr localserveripaddr 0.0.0.0 9

wolcmd, which can start the Linux server, is available from the download page at Depicus.com.

Wake on LAN GUI

A GUI version of the wolcmd tool from Depicus.com: WakeOnLanGui.

WOL Magic Packet Sender Tool

WOL Magic Packet Sender installs via a WOL Setup MSI file. I have used it quite a bit and it works nicely. It is the first one that I used, and I have it on my Windows machines.

Online

At Depicus.com you can also wake your machine directly from the Internet, without loading any application, via this page –> http://www.depicus.com/wake-on-lan/woli.aspx

There is even a way, with the Depicus site, to make up a URL that has the MAC address, IP address and port as parameters to send a magic packet. I’ve tried it and it works.


Automatic Server Status Page Creation

On one of the servers I ran in 2013-14, I used Webmin to keep track of what was going on with the server: memory usage, drive space and so on. It was a bit of overkill; I thought I would need it more than I really did.

The server I am trying this out on is resource-limited, low on RAM mostly, only 512MB. So I was concerned about too many processes weighing it down, and was trimming RAM use for Apache, MySQL and PHP. I wanted an easy, web-based way to look at what is going on with the server, so putting the info on a dynamically created page seemed like the way to go.

Restricting Directory Access with Apache

I don’t want just anyone to have access to the status directory; clever folks might gain too much insight from what is shown there, a potential hack risk. On the server, access to /var/www/status is restricted. What I mean is that I have edited the Apache default file to restrict access by IP, as I am only accessing this from a few IPs. Below is an example of the mod to the Apache default file. Obviously I want to allow my local net, so that is 192.168.1, ranging from 0-255; in the default file you don’t have to list the entire IP if you want to cover a range. Additionally, at the time there were a few IPs in the 74.67.XX.XX range that I wanted to allow for testing access, so I opened that range up as well. Basically you can add as many as you want. Another option would be to password protect the directory, but for now this is all I need.

To edit the Apache default file, make a backup copy first, then on Ubuntu at least…

sudo nano /etc/apache2/sites-available/default
Example code from the Apache default file to allow certain IPs access to a directory and deny all others:
<Directory /var/www/status/>
    Order deny,allow
    Allow from 192.168.1
    Allow from 74.67
    Deny from all
</Directory>

Logcreate

With this server, instead of using Webmin to look at the status of the server, I made a simple file called logcreate, run by putting it in the cron.hourly folder and chmodding it +x! It makes a status page at /var/www/status/log.txt. Also generated is /var/www/status/fulllog.txt, a concatenated version of log.txt that is added to on an hourly basis. I used dash instead of bash; it’s a slight improvement in memory use when called. Don’t use an extension; cron won’t run files such as logcreate.sh.

Logcreate basically gives you a synopsis of the server’s state in text form…

  • Date and time stamp on top ( date )
  • Tail of the syslog ( tail /var/log/syslog )
  • Memory usage ( free )
  • Drive space usage ( df -h )
  • Processes sorted by RAM usage ( ps aux | sort -nrk 4 | head )
  • Free standing copy of the process tree ( pstree )

 

The code for logcreate, the file to be placed in /etc/cron.hourly
#!/bin/dash
# Remove old log
rm /var/www/status/log.txt
# Print logged outputs into new log.txt 
# Starting with date stamp
date >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Grab the tail of the syslog file
tail /var/log/syslog >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Log RAM usage
free >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Disk Usage
df -h >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Top memory using processes http://www.commandlinefu.com/commands/view/3/display-the-top-ten-running-processes-sorted-by-memory-usage
#ps aux | sort -nk +4 | tail >> log.txt
#echo 'USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND ' >> log.txt
ps aux | sort -nrk 4 | head >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Copy log.txt into the full log that is collected from the hourly updates.
cat /var/www/status/log.txt >> /var/www/status/fulllog.txt
# Create a free standing copy of the process tree
pstree > /var/www/status/pstree.txt
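To try it out without waiting for the top of the hour, the script can be run directly, and run-parts can confirm that cron will actually pick it up, which also verifies the no-extension rule…

sudo run-parts --test /etc/cron.hourly
sudo /etc/cron.hourly/logcreate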

 

Resources

Figuring out a good command to list the running processes sorted by RAM use was something I needed help with; the link below is where I got my info.

Top memory using processes:  http://www.commandlinefu.com/commands/view/3/display-the-top-ten-running-processes-sorted-by-memory-usage

Backing up Windows User’s to Folders to a Linux File Server

Robust File Copy for Windows

robocopy.exe available as part of rktools <-download page, can be used to copy files across the network to a Linux machine into a folder setup using Samba.

The DOS batch file below is called serverbackup.bat on my machine. It can start the Linux server using wolcmd from Depicus.com (see the download page link below), and it will copy a user’s folder to a folder on the server, creating a log file at C:\tools\backup.log; then it shuts down the Windows PC with a 120 second delay. The delay is mostly there for testing: if the robocopy command goes belly up, the PC will try to jump right to the shutdown, which makes troubleshooting difficult. So leaving a delay is helpful, as one can abort a shutdown by executing…

shutdown /a

…from the command line in Windows to stop the machine from shutting down.

I had to review how robocopy worked before deploying it in a backup script and I found an exact example of what I was looking for on Jacob Surland’s photography blog Caught in Pixels. I reviewed his post How to create a backup script using Robocopy before writing the serverbackup.bat script. I have the script downloadable here named serverbackup.bat, rename and modify as needed.

REM Wake server; if it is already on, no harm done. It takes about 17 secs for the
REM server to start, so robocopy will get an error and should keep retrying.
REM Runs Depicus Wake on LAN from the command line. Validated, works!
wolcmd yourmacaddr localserveripaddr 0.0.0.0 9
REM Copy user folder to \files\user on the server via Samba links
REM /MIR = purge files from dest. that do not exist in source
REM /M = copy archivable files & reset attribute - not using this yet!!!
REM /XA:SH = exclude system and hidden files, important for user space
REM /FFT = fixes up timing for Linux, assumes FAT file times (2 sec granularity)
robocopy "C:\Documents and Settings\Erick_2" \\UBUNTUSERVER\Erick_Backup /MIR /XA:SH /XJ /FFT /W:1 /R:5 /LOG:C:\tools\backup.log
REM Shutdown PC when backups are done
shutdown -s -f -t 120

So far it works….

Resources

Windows Server 2003 Resource Kit Tools aka rktools

Depicus wolcmd download page

How to create a backup script using Robocopy

 

Creating a Bootable USB Drive

A bootable USB works great and can be very helpful at times. And it is now so easy to create a bootable USB drive with a Linux distro of your choice. The bootable USB stick works like the Live CD, but with the advantage of persistence. Persistence means you can load programs onto the USB drive, unlike the Live DVD; plus, settings are remembered. It also means that you can load tools onto it to help rescue a broken computer, Windows or Linux. I have rescued many a PC (Windows) by booting with Linux and copying files off, or you can replace bad files directly. It is like carrying a “computer” that you can keep in your pocket, plug into a PC, and have it boot right into an environment you have set up for yourself. Just be aware that certain PCs that run Windows 8 try to block the ability to boot off of external devices. You have to go into the BIOS and switch off or override this so-called boot protection. It usually requires one to enter a 4 digit code when you leave the BIOS; some PCs flash this briefly, too fast to see in my opinion.

I installed Linux Mint XFCE 32-bit on a USB drive recently. XFCE mostly because it is light and will run on just about any PC; 32-bit will run on both 32 and 64 bit machines. Mint because I haven’t tried it yet, and a bootable USB drive makes a good test drive, especially since I can do a lot more than I can with just the Live CD. The USB drive I bought for this was off of eBay; I opted for a USB 3.0 device for the speed on machines that have USB 3.0.

Linux command line using dd

From Linux, use dd to copy from the ISO to the USB drive; make sure you know which device the drive is when doing this via the command line. Use sudo fdisk -l to list all of your drives and mount points. Alternatively you can use lsblk, and you will see mounted and unmounted devices.

See sdb1 below, after all of the info about sda1 (the hard drive)…

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        1        3648    29296875    7  HPFS/NTFS
/dev/sda2         3648        9729    48850529+   5  Extended
/dev/sda5         9668        9729      497983+  82  Linux swap / Solaris
/dev/sda6         3648        9667    48352256   83  Linux

Partition table entries are not in disk order

Disk /dev/sdb: 16.1 GB, 16079781888 bytes
256 heads, 9 sectors/track, 13631 cylinders
Units = cylinders of 2304 * 512 = 1179648 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc3072e18

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        8       13631    15694808    7  HPFS/NTFS

 

To burn the ISO to sdb1, for example…

sudo dd if=~/Desktop/linuxmint.iso of=/dev/sdb1 oflag=direct bs=1048576

oflag=direct may not always work; leave it out if the copy fails. For more on doing this from Linux, see the link below.
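One caution worth adding: many how-tos write the ISO to the whole device (e.g. /dev/sdb) rather than a partition such as sdb1, and that is worth trying if the stick won’t boot. Either way, make sure all writes are flushed before pulling the stick out…

sync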

Mounting USB Drive from the Linux Command Line

First use fdisk -l or lsblk to find the location of the drive. Then, for example, to mount a USB drive at /dev/sdc1 to /mnt/sdc1 (create the mount point first with sudo mkdir -p /mnt/sdc1), use…

 sudo mount /dev/sdc1 /mnt/sdc1

You can choose a mount point other than /mnt; Linux can mount a drive to just about any folder you create. That is the beauty of it over lettered drives like Windows uses.

Universal USB Installer

The Universal USB Installer allows you to do all of this from Windows.

How to create a bootable Linux Mint USB drive using Windows…
http://www.everydaylinuxuser.com/2014/05/how-to-create-bootable-linux-mint-usb.html

Test Drive

I have tried the drive on a fast machine with USB 3.0: a Dell quad-core 2.4GHz with 8GB RAM. It boots fast; not as fast as a hard drive, but very reasonable. I was able to stream video and watch TV with it, play DVDs, and adjust the screen saver NOT to come on.