Tag Archives: Linux

FTP on Raspberry Pi. An easy way to make shared folders

The idea with FTP is to have folders that are reachable between Linux and Windows, locally and remotely, with a minimum of fuss. FTP is not secure, but it can be made secure; that info can be found on the web. For now I am covering the basics of FTP here.

For most things that I need to do, I don't need the files to be secure anyway; 90% of the time nothing critical is going back and forth remotely. When it is, I use a secure method of sending files over SSH, either SFTP or an SSHFS mount.

FTP is an old protocol but it just plain works and is compatible with Windows, Linux and Mac. I have tried WebDAV in the past but it is only compatible to a degree with the various Windows operating systems. I have had a hard time getting it working correctly on versions of Windows beyond XP, resorting to installing patches to Windows and so on. Generally it is not easy to implement.

I was also looking at FTP as a native tool typical of server installs. I have experimented with cloud setups such as OwnCloud and SparkleShare, but with FTP I was looking for something simple and quick to set up: no special software, no MySQL databases running on the Raspberry Pi, no special software on client PCs, that sort of thing.

vsFTP

sudo apt-get install vsftpd

Edit the configuration file

Back it up first then do an edit.

sudo cp /etc/vsftpd.conf /etc/vsftpd.orig
sudo nano /etc/vsftpd.conf

uncomment local_enable = YES

uncomment write_enable = YES

Find this and check that it is set this way…

local_umask=022
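After the edits, the relevant lines in /etc/vsftpd.conf end up looking like this (note that vsftpd does not want spaces around the equals sign):

local_enable=YES
write_enable=YES
local_umask=022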

Enabling PASV

I have read online that enabling the PASV capability for FTP is a good idea. Frequently when I have FTP'd to various ISPs' sites I have seen them operate in PASV mode. So it stands to reason that if the pros have it set up that way it may have its advantages.

Add the following lines to the /etc/vsftpd.conf file.

pasv_enable=YES
pasv_min_port=40000
pasv_max_port=40100

There is nothing magic about the numbers of the port range other than they should be unused by anything else that your setup might require; generally I have seen high numbers used for this. To work outside of your local network you must enable port forwarding of that range of port numbers through your router configuration.
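Restart vsftpd so it picks up the changes. Also, if a firewall such as ufw happens to be running on the Pi (an assumption, it is not enabled by default), the FTP port and the PASV range need to be let through as well, something along these lines…

sudo service vsftpd restart
sudo ufw allow 21/tcp
sudo ufw allow 40000:40100/tcp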

Changes to vsFTP

With the newer versions of vsFTP there is a change that has occurred since I wrote my previous post about vsFTP (  http://oils-of-life.com/blog/linux/server/additional-utilities-for-a-linux-server/ )

The change is that the root directory of the user has to be non-writable, and I have read online that it is best to make it owned by root as well. This is covered below, after the section on adding a user. You need to have a user first before modifying their permissions!

FTP User

To create an FTP user, create it in a way that it does not have a login shell, so that someone who logs in to the FTP account can't execute shell commands. The shell /sbin/nologin may not be listed in the /etc/shells file; in that case it needs to be added in there. The user basically has to be jailed in their directory and has to have no login shell.
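Adding the shell to /etc/shells can be done with something like the following; this is a sketch, and the path may be /usr/sbin/nologin on some systems, so check with which nologin first.

echo "/sbin/nologin" | sudo tee -a /etc/shells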

sudo useradd -m -s /sbin/nologin -d /home/user user

I added Documents and public_html directories to /home/user as well, then made the user's root folder, /home/user, owned by root and non-writable.

cd /home/user
sudo mkdir Documents public_html
sudo chown user:user Documents
sudo chown user:user public_html

# Make the user's root folder owned by root and non-writable
sudo chown root:root /home/user
sudo chmod a-w /home/user



FTPing on the PC

Now that ftp is set up on the server you will want to be able to connect to it!

Options for connecting…

Command Line, Windows and Linux

ftp yoursite.com

That gets you into FTP via the command line. The command prompt will now start with ftp> , which is how you know that you are within the ftp command shell.

It is archaic, but worth knowing when you have to stick a file up or pull it down right at the command line. The commands the ftp prompt accepts are basic, but good enough to get most work done. Type help at the prompt to get a list of commands.
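As a quick illustration, a session for pushing a file up and pulling one down might look something like this (the file names are just examples)…

ftp yoursite.com
ftp> cd Documents
ftp> put notes.txt
ftp> get report.txt
ftp> bye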

Via Folders

Linux

Just enter the address of the FTP server right into the location bar at the top of the file manager window and you will be prompted for a password and taken there.
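For example, something like this in the location bar (Ctrl+L opens it in most Linux file managers), substituting your own user and site…

ftp://user@yoursite.com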

Windows
Windows7/Vista:
  1. Open Computer by clicking the “Start” button, and then clicking Computer.
  2. Right-click anywhere in the folder, and then click Add a Network Location.
  3. In the wizard, select Choose a custom network location, and then click Next.
  4. To use a name and password, clear the Log on anonymously check box.

From: https://www.google.com/search?q=connect+to+ftp+windows+7&ie=utf-8&oe=utf-8

Automatic Server Status Page Creation Update

In January 2015 I created a post about automatically creating a status page for a Linux server that I have. Typically this is put under a restricted directory and allows you to see a snapshot of what is happening with the server. I run it by putting the scripts in the /etc/cron.hourly directory on a Linux PC and a Raspberry Pi running Linux.

It serves as a simple way to check up on the server without having to use a tool such as Webmin that requires a login. It also keeps a trail of log files that get rotated on a monthly basis, so there are always a few old ones around to track down problems and patterns in the operation.

I have found this information useful when I have traced down malfunctions that can occur when setting up a server and also when I was trying to get a webcam up and running and had the USB bus hang up a few times when the cam was overloaded with too much light.

In the new script file I fixed a bug by adding quotes around a line that I was trying to echo, and I added code to run the w command to show a quick picture of who is logged in, how long the server has been up and running, and the values for the average load on the server at the 1, 5 and 15 minute marks.

Logcreate Script

#!/bin/dash
# Remove old log
rm /var/www/status/log.txt
# Print logged outputs into new log.txt
date >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
tail /var/log/syslog >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
free >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
df -h >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Top memory using processes http://www.commandlinefu.com/commands/view/3/display-the-top-ten-running-processes-sorted-by-memory-usage
#ps aux | sort -nk +4 | tail >> log.txt
echo "USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND" >> /var/www/status/log.txt
ps aux | sort -nrk 4 | head >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Logged in User info using w command
w >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Copy log.txt into the full log that is collected
cat /var/www/status/log.txt >> /var/www/status/fulllog.txt
# Create a free standing copy of the process tree
pstree > /var/www/status/pstree.txt

Alternate Version

I also created a version of the script for a desktop Linux PC that does not have Apache installed. In it I use a DIR variable to hold the directory where I want the log.txt file stored.

#!/bin/dash

# User defined variables
# No trailing / on DIR!
DIR=/home/erick/status

# Remove old log
rm $DIR/log.txt
# Print logged outputs into new log.txt
date >> $DIR/log.txt
echo >> $DIR/log.txt
tail /var/log/syslog >> $DIR/log.txt
echo >> $DIR/log.txt
free >> $DIR/log.txt
echo >> $DIR/log.txt
df -h >> $DIR/log.txt
echo >> $DIR/log.txt
# Top memory using processes http://www.commandlinefu.com/commands/view/3/display-the-top-ten-running-processes-sorted-by-memory-usage
#ps aux | sort -nk +4 | tail >> log.txt
echo "USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND" >> $DIR/log.txt
ps aux | sort -nrk 4 | head >> $DIR/log.txt
echo >> $DIR/log.txt
# Logged in User info using w command
w >> $DIR/log.txt
echo >> $DIR/log.txt
echo >> $DIR/log.txt
# Copy log.txt into the full log that is collected
cat $DIR/log.txt >> $DIR/fulllog.txt
# Create a free standing copy of the process tree
pstree > $DIR/pstree.txt

Rotation of Log

In the /etc/cron.monthly directory I have created a file called status-log-rotate that saves backup copies of two months' worth of the full concatenated server status logs.

#! /bin/bash
DIR=/home/erick/status
mv $DIR/fulllog.txt.1 $DIR/fulllog.txt.2
mv $DIR/fulllog.txt $DIR/fulllog.txt.1
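On the very first run fulllog.txt.1 will not exist yet, so the first mv will complain; the error is harmless, but it can be silenced by sending it to /dev/null if preferred, for example…

mv $DIR/fulllog.txt.1 $DIR/fulllog.txt.2 2> /dev/null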

Tweaks for Raspberry Pi

The Raspberry Pi has an SD card that I am trying to be conscious of not writing to too often, so I have recently made some modifications to put the /tmp folder onto RAM using tmpfs. I create the hourly log underneath a folder there. Hourly, via the script in cron.hourly, it gets concatenated into a daily log, also in the temporary folder. Once per day that daily log gets appended to the full log, which sits under a status folder with restricted access and actually lives on the SD card. The end result: no multiple hourly writes to the log file, just one append to the full log per day. The only downside is that if the power drops, the log entries for that day will be lost.
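For reference, putting /tmp on RAM comes down to an /etc/fstab entry along these lines (the size here is just an example, tune it to the amount of RAM you can spare)…

tmpfs   /tmp   tmpfs   defaults,noatime,size=50m   0   0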

Logcreate runs from /etc/cron.hourly for Raspberry Pi

#!/bin/dash
# Set DIR, on Pi this is a temp location for log
DIR=/tmp/web

# Set fixed DIR FIXDIR for files that have to be stored on SD card
# Nevermind, just make a daily log and then copy that to the full log daily.
#FIXDIR=/var/www/status

# Remove old log

rm $DIR/log.txt
# Print logged outputs into new log.txt
date >> $DIR/log.txt
echo >> $DIR/log.txt
tail /var/log/syslog >> $DIR/log.txt
echo >> $DIR/log.txt
free >> $DIR/log.txt
echo >> $DIR/log.txt
df -h >> $DIR/log.txt
echo >> $DIR/log.txt
# Top memory using processes http://www.commandlinefu.com/commands/view/3/display-the-top-ten-running-processes-sorted-by-memory-usage
echo "USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND" >> $DIR/log.txt

ps aux | sort -nrk 4 | head >> $DIR/log.txt
echo >> $DIR/log.txt
# Logged in User info using w command
w >> $DIR/log.txt
echo >> $DIR/log.txt
echo >> $DIR/log.txt
# Copy log.txt into the full log that is collected
cat $DIR/log.txt >> $DIR/dailylog.txt
# Create a free standing copy of the process tree
pstree > $DIR/pstree.txt
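One thing to keep in mind with this version: since /tmp lives in RAM, the /tmp/web folder disappears at every reboot and the redirects above will fail until it exists again. A simple way to cover that, assuming you are fine with the script creating it, is a line near the top of the script…

mkdir -p $DIR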

dailylog-to-fulllog script, runs from /etc/cron.daily

#! /bin/bash

DIR=/tmp/web
FIXDIR=/var/www/status

echo "----------------------------------------------" >> $DIR/dailylog.txt
date >> $DIR/dailylog.txt
echo "----------------------------------------------" >> $DIR/dailylog.txt
cat $DIR/dailylog.txt >> $FIXDIR/fulllog.txt
rm $DIR/dailylog.txt

Logcreate Output from Raspberry Pi

Below is what the logcreate script will output to the log.txt file on a Raspberry Pi that I have running as a web server.

Sun Jul 12 14:17:01 EDT 2015

Jul 12 13:47:51 raspberrypi dhclient: DHCPACK from 192.168.1.1
Jul 12 13:47:52 raspberrypi dhclient: bound to 192.168.1.17 -- renewal in 40673 seconds.
Jul 12 13:59:01 raspberrypi /USR/SBIN/CRON[28010]: (erick) CMD (aplay /opt/sonic-pi/etc/samples/guit_e_fifths.wav)
Jul 12 13:59:07 raspberrypi /USR/SBIN/CRON[28009]: (CRON) info (No MTA installed, discarding output)
Jul 12 14:00:01 raspberrypi /USR/SBIN/CRON[28013]: (erick) CMD (/home/erick/fswebcam/cron-timelapse.sh >> timelapse.log)
Jul 12 14:00:23 raspberrypi /USR/SBIN/CRON[28012]: (CRON) info (No MTA installed, discarding output)
Jul 12 14:01:01 raspberrypi /USR/SBIN/CRON[28022]: (root) CMD (/home/erick/bin/usbreset /dev/bus/usb/001/004)
Jul 12 14:01:02 raspberrypi /USR/SBIN/CRON[28021]: (CRON) info (No MTA installed, discarding output)
Jul 12 14:09:01 raspberrypi /USR/SBIN/CRON[28053]: (root) CMD (  [ -x /usr/lib/php5/maxlifetime ] && [ -x /usr/lib/php5/sessionclean ] && [ -d /var/lib/php5 ] && /usr/lib/php5/sessionclean /var/lib/php5 $(/usr/lib/php5/maxlifetime))
Jul 12 14:17:01 raspberrypi /USR/SBIN/CRON[28064]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)

             total       used       free     shared    buffers     cached
Mem:        445804     424488      21316          0     106768     260516
-/+ buffers/cache:      57204     388600
Swap:       102396          0     102396

Filesystem      Size  Used Avail Use% Mounted on
rootfs          6.3G  3.1G  3.0G  51% /
/dev/root       6.3G  3.1G  3.0G  51% /
devtmpfs        214M     0  214M   0% /dev
tmpfs            44M  240K   44M   1% /run
tmpfs           5.0M  8.0K  5.0M   1% /run/lock
tmpfs            88M     0   88M   0% /run/shm
/dev/mmcblk0p5   60M   19M   41M  32% /boot

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root      2071  0.0  3.0  24896 13652 ?        Ss   Jun28   2:24 /usr/sbin/apache2 -k start
www-data 27745  0.0  1.5  25412  7084 ?        S    09:58   0:00 /usr/sbin/apache2 -k start
www-data 27744  0.0  1.5  24960  6760 ?        S    09:58   0:00 /usr/sbin/apache2 -k start
www-data 27743  0.0  1.5  25428  7116 ?        S    09:58   0:00 /usr/sbin/apache2 -k start
www-data 27742  0.0  1.5  25396  7036 ?        S    09:58   0:00 /usr/sbin/apache2 -k start
www-data 27538  0.0  1.5  25396  7032 ?        S    06:25   0:00 /usr/sbin/apache2 -k start
www-data 27502  0.0  1.5  25404  7036 ?        S    06:25   0:00 /usr/sbin/apache2 -k start
www-data 27501  0.0  1.5  25396  7044 ?        S    06:25   0:00 /usr/sbin/apache2 -k start
www-data 27747  0.0  1.3  24936  6188 ?        S    09:58   0:00 /usr/sbin/apache2 -k start
www-data 27746  0.0  1.3  24936  6188 ?        S    09:58   0:00 /usr/sbin/apache2 -k start

 14:17:02 up 14 days, 12:56,  1 user,  load average: 0.00, 0.01, 0.05
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
erick    pts/0    192.168.1.5      14:04   10:39   1.70s  1.70s -bash

Hourly Chime for Linux and Mac

It is easy to set up a simple CRON job to play a sound at the top of the hour by running aplay on a Linux machine. Something like this would also work on a Mac with minor changes: afplay is the default command line player for Mac, and CRON works the same. For Windows I have not tried it, but Task Scheduler set to every hour plus some easy command line player to fire off would do it; that kind of thing has been around since DOS.

aplay

aplay works with WAV files, so you can use oggdec to convert OGG files to WAV. A lot of sound theme files come as OGG or WAV. aplay ( part of the alsa-utils package ) and mplayer come installed by default in Ubuntu, at least in 14.04 LTS, which I am running. If not a simple…

sudo apt-get install alsa-utils

…or…

sudo apt-get install mplayer

…will get them installed.

oggdec

Oggdec is part of a very small install package, takes seconds to install.

To install…

sudo apt-get install vorbis-tools

To convert an OGG audio file to a WAV audio file…

oggdec filename.ogg

Sound Themes

The sound themes are located at /usr/share/sounds . If you go there and try out the sounds you might find one that sounds good to you for an hourly chime.

Two level tree of /usr/share/sounds, using the tree command. If you don't have it, you can get it in a few seconds using…

sudo apt-get install tree

Output of the tree command showing 2 levels below /usr/share/sounds

tree /usr/share/sounds -L 2

/usr/share/sounds
├── alsa
│   ├── Front_Center.wav
│   ├── Front_Left.wav
│   ├── Front_Right.wav
│   ├── Noise.wav
│   ├── Rear_Center.wav
│   ├── Rear_Left.wav
│   ├── Rear_Right.wav
│   ├── Side_Left.wav
│   └── Side_Right.wav
├── fLight__2.0
│   ├── Copyright
│   ├── index.theme
│   └── stereo
├── freedesktop
│   ├── index.theme
│   └── stereo
├── Fresh_and_CLean

Sound Theme Downloads

I went to a site (see link below) and downloaded two sound themes (fLight 2.0 and Fresh and Clean; the third one on the site was a dead link) and tried out the sounds. I found that the Message sound in the fLight 2.0 theme was pleasant but catchy enough to be heard at a distance and over any music I might be playing at the time the CRON job runs.

http://www.ubuntuvibes.com/2010/08/3-awesome-sound-themes-for-ubuntu.html

The CRON job that runs to do the hourly sound is..

00 09-23 * * * aplay /usr/share/sounds/fLight__2.0/stereo/Message.wav

It will produce a sound from 9AM to 11PM and uses the Message.wav which I converted from an ogg to a wav file…
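For reference, the line goes into the user's crontab, which is edited with…

crontab -e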

I have attached the Message.wav below for your listening pleasure!

So far I have not switched my overall sound theme from the Ubuntu default, but I might try out the two themes that I have downloaded for variety.

Resources

Create Cron Jobs on a Mac

Command Line Audio Player on a Mac

Automatic Server Status Page Creation

On one of the servers I ran in 2013-14, I used Webmin to keep track of what was going on with the server: memory usage, drive space and so on. It was a bit of overkill; I thought I would need it more than I really did.

The server I am trying this out on is resource limited, low on RAM mostly, only 512MB. So I was concerned about too many processes weighing it down and was trimming RAM use for Apache, MySQL and PHP. I wanted an easy way to look at what is going on with the server, web based, so putting the info on a dynamically created page seemed like the way to go.

Restricting Directory Access with Apache

I don’t want just anyone to have access to the status directory. Clever folks might gain too much insight from what is shown there, a potential hack risk. On the server the location /var/www/status is restricted. What I mean by this is that I have edited the Apache default file to restrict access by IP, as I am only accessing this from a few IPs. Below is an example of the mod to the Apache default file. Obviously I want to allow from my local net, so that is 192.168.1, ranging from 0-255; in the default file you don't have to list the entire IP if you want to cover a range. Additionally, at the time there were a few IPs in the 74.67.XX.XX range that I wanted to test access from, so I opened that range up as well. Basically you can add as many as you want. Another option would be to password protect the directory, but for now this is all I need.

To edit the Apache default file, make a backup copy first, then on Ubuntu at least…

sudo nano /etc/apache2/sites-available/default
Example code from the Apache default file to allow certain IPs access to a directory and deny all others:

<Directory /var/www/status/>
        Order deny,allow
        Allow from 192.168.1
        Allow from 74.67
        Deny from all
</Directory>
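Note that on newer Apache 2.4 based systems the access control directives changed; the equivalent of the above would be something along these lines (a sketch, untested here since this server runs the older Apache)…

<Directory /var/www/status/>
        Require ip 192.168.1
        Require ip 74.67
</Directory>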

Logcreate

With this server, instead of using Webmin to look at the status of the server, I made a simple file called logcreate, run by putting it in the cron.hourly folder and chmodding it +x! It makes a status page at /var/www/status/log.txt. Also generated is /var/www/status/fulllog.txt, a concatenated version of log.txt that is appended to on an hourly basis. I used dash instead of bash; it's a slight improvement in memory use when called. Don't use an extension, cron won't run files such as logcreate.sh.
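Getting it in place amounts to something like this, assuming the script was saved as logcreate in the current directory…

sudo cp logcreate /etc/cron.hourly/logcreate
sudo chmod +x /etc/cron.hourly/logcreate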

Logcreate basically gives you a synopsis of the server's state in text form…

  • Date and time stamp on top ( date )
  • Tail of the syslog ( tail /var/log/syslog )
  • Memory usage ( free )
  • Drive space usage ( df -h )
  • Processes sorted by RAM usage ( ps aux | sort -nrk 4 | head )
  • Free standing copy of the process tree ( pstree )

 

The code for logcreate, the file to be placed in /etc/cron.hourly
#!/bin/dash
# Remove old log
rm /var/www/status/log.txt
# Print logged outputs into new log.txt 
# Starting with date stamp
date >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Grab the tail of the syslog file
tail /var/log/syslog >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Log RAM usage
free >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Disk Usage
df -h >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Top memory using processes http://www.commandlinefu.com/commands/view/3/display-the-top-ten-running-processes-sorted-by-memory-usage
#ps aux | sort -nk +4 | tail >> log.txt
#echo 'USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND ' >> log.txt
ps aux | sort -nrk 4 | head >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Copy log.txt into the full log that is collected from the hourly updates.
cat /var/www/status/log.txt >> /var/www/status/fulllog.txt
# Create a free standing copy of the process tree
pstree > /var/www/status/pstree.txt

 

Resources

Figuring out a good command to list the running processes sorted by RAM use was something I needed some help with; the link below was where I got my info.

Top memory using processes:  http://www.commandlinefu.com/commands/view/3/display-the-top-ten-running-processes-sorted-by-memory-usage

Creating a Bootable USB Drive

A bootable USB drive works great and can be very helpful at times, and it is now very easy to create one with a Linux distro of your choice. The bootable USB stick works like a Live CD but with the advantage of persistence. Persistence means you can load programs onto the USB drive, unlike the Live DVD, plus settings are remembered. It also means that you can load tools on it to help rescue a broken computer, Windows or Linux. I have rescued many a PC (Windows) by booting Linux from the stick and copying files off, or you can replace bad files directly. It is like carrying a “computer” that you can keep in your pocket and plug into a PC and have it boot right into an environment you have set up for yourself. Just be aware that certain PCs that run Windows 8 try to block the ability to boot off of external devices. You have to go into the BIOS and switch off or override this behavior of the so-called boot protection. It usually requires you to enter a 4 digit code when you leave the BIOS. Some PCs flash this briefly, too fast to see in my opinion.

I installed Linux Mint XFCE 32-bit on a USB drive recently. XFCE mostly for the fact that XFCE is light and will run on just about any PC, and 32-bit will run on both 32 and 64 bit machines. Mint because I haven't tried it yet and a bootable USB drive makes a good test drive, especially since I can do a lot more than I can with just the Live CD. The USB drive I bought for this was off of eBay; I opted for a USB 3.0 device for the speed when a machine has USB 3.0.

Linux command line using dd

On Linux you use dd to copy from the ISO to the USB drive; make sure you know which device the drive shows up as when doing this via the command line. Use sudo fdisk -l to list all of your drives and partitions. Alternatively you can use lsblk and you will see mounted and unmounted devices.

See sdb1 below, after all of the info about sda ( the hard drive )…

Device Boot      Start         End      Blocks   Id  System
 /dev/sda1   *           1        3648    29296875    7  HPFS/NTFS
 /dev/sda2            3648        9729    48850529+   5  Extended
 /dev/sda5            9668        9729      497983+  82  Linux swap / Solaris
 /dev/sda6            3648        9667    48352256   83  Linux
Partition table entries are not in disk order
Disk /dev/sdb: 16.1 GB, 16079781888 bytes
 256 heads, 9 sectors/track, 13631 cylinders
 Units = cylinders of 2304 * 512 = 1179648 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disk identifier: 0xc3072e18
Device Boot      Start         End      Blocks   Id  System
 /dev/sdb1   *           8       13631    15694808    7  HPFS/NTFS

 

To burn ISO to sdb1 for example…

sudo dd if=~/Desktop/linuxmint.iso of=/dev/sdb1 oflag=direct bs=1048576

oflag=direct may not always work, leave it out if the copy fails. For more on doing this from Linux see the link below.
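One other note: many distribution ISOs are hybrid images meant to be written to the whole device rather than to a partition, so if the stick refuses to boot it may be worth pointing dd at the device itself, something like…

sudo dd if=~/Desktop/linuxmint.iso of=/dev/sdb oflag=direct bs=1048576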

Mounting USB Drive from the Linux Command Line

First use fdisk -l or lsblk to find the location of the drive. Then for example to mount a usb drive at /dev/sdc1 to /mnt/sdc1 use…

 sudo mount /dev/sdc1 /mnt/sdc1

You can choose a mountpoint other than /mnt; Linux can mount a drive to just about any folder you create. That is the beauty of it over lettered drives like Windows uses.
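If the mount point does not exist yet, create it first, and unmount the drive when you are done with it…

sudo mkdir -p /mnt/sdc1
sudo mount /dev/sdc1 /mnt/sdc1
sudo umount /mnt/sdc1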

Universal USB Installer

The Universal USB Installer allows you to do all of this from Windows.

How to create a bootable Linux Mint USB drive using Windows…
http://www.everydaylinuxuser.com/2014/05/how-to-create-bootable-linux-mint-usb.html

Test Drive

I have tried the drive on a fast machine with USB 3.0: a Dell quad core 2.4GHz with 8GB RAM. It does boot fast, not as fast as a hard drive but very reasonable. I was able to stream video and watch TV with it, play DVDs and adjust the screen saver NOT to come on.

Server Hardware Swaps

RAM Upgrade

When I initially built the server using a Dell Dimension 4200, I added 1GB of RAM on top of the 512MB that was factory installed. The board can support up to 2GB, but 1.5GB seems sufficient for what I am doing. One of the first steps was to run MEMTEST by booting off of a Linux CD that I had lying around. The test ran overnight (15+ hours) with no problems; it's always a good idea to run MEMTEST after any memory changes.

Memory upgrade, added a 1GB stick to the existing 512MB
Running MEMTEST to check for flaws in the RAM, before loading Ubuntu Server

Second Hard Drive
HD in floppy bay. Tight, a bit tough getting the screws into the holder.
Pulled lower CD burner, replaced with DVD drive
DVD drive goes in bottom slot, ready to load Ubuntu 12.04

I removed the floppy drive and added a second 120GB hard drive. I also replaced one of the CD drives with a DVD reader. Ubuntu Server 12.04 gets burned onto a DVD so I needed to boot off of the DVD. The other option would be to boot from a USB drive. I swapped the IDE connector off of one of the CD drives and used it for the secondary hard drive mounted in the floppy drive bay.

Disconnected CD drive, hooked IDE HD in same bus as DVD drive.

Swap 120GB hard drive for 500GB

I soon was well on my way to filling up the primary 200GB drive so it was time to consider putting in a bigger secondary hard drive in preparation for the future.

I have installed the 500GB drive in the server and formatted it for use with Linux. Linux can use a drive formatted as NTFS, but I formatted it as EXT4 so the disk checks and fixes can be more precise. EXT4 can handle extremely large drives, up to a 1EB partition size, an amount of data I cannot even imagine! EXT3 is good up to 32TB partitions, which is still very big. The new extra drive will give me much more space, as the main 200GB drive is almost full. I will move some files onto it, mostly the backups from the other computers at home; that is the primary goal with the second drive. There is a software manager in Linux called Logical Volume Management (LVM) that can manage drives, and the primary drive is managed using this feature. In theory I can create a “snapshot” of the drive and copy the image onto a backup external or the second drive, but I have not considered doing this yet. The primary drive contains the OS and whatever software I have loaded, which doesn't take up much space. The files that I have loaded onto OwnCloud take up a good deal of space, but 200GB will be plenty for the primary drive for a while.

To LVM or not?

At first I was going to join the first and second drives into one large “logical” drive using LVM. But there is a risk in having the system treat the 500GB + 200GB as a single 700GB logical drive: if one drive fails it can ruin the entire “logical” drive composed of both drives. One disk out of two failing and taking down everything is a bad risk, so I might leave the drives mounted separately and not as one big logical drive.
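Keeping the drives separate, the second one can still be mounted automatically at boot with an /etc/fstab entry along these lines (the UUID and mount point are placeholders; sudo blkid prints the real UUID)…

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   /mnt/backup   ext4   defaults   0   2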

500GB drive mounted in the floppy drive holder
120GB drive out and 500GB drive in
Resources

For more information on Linux file systems…

https://help.ubuntu.com/community/LinuxFilesystemsExplained

About files and the file system…

http://www.tldp.org/LDP/intro-linux/html/chap_03.html

CD recording from the Linux command line using cdrdao

I’ve been interested in a way to burn CDs right from the command line, with the possibility of using one of my Linux computers as a burn station: ideally I could throw in a CD, it would detect it, start the copy process and eject when done. This post is about a small step in that direction.

I researched it a bit and tried the example given by this page….

https://help.ubuntu.com/community/cdrdao

But I modded the $HOME/.cdrdao file a bit to include a list of CDDB servers that I pulled off of http://roozster.info/eac/06.html, plus added a timeout for the CDDB lookups set to 10 seconds.

The .cdrdao file goes in your home directory and it acts like a configuration file for the cdrdao program. The help site above goes into details. But, briefly the write buffer at 128, which is 128 seconds, at an 8x burn gives 16 seconds of under-run protection. The device has to be set correctly. My CD burner is at /dev/sr0. According to the help.ubuntu site above, running sudo cdrdao scanbus, will produce an output that yields the device name. For me it didn’t yield a /dev type of connection but rather a 1,0,0 bus attachment type of readout. But I hovered over the CD in the file manager and found out the device mount point from there which was /dev/sr0.

Output from running sudo cdrdao scanbus
Cdrdao version 1.2.2 - (C) Andreas Mueller <andreas@daneb.de>
  SCSI interface library - (C) Joerg Schilling
  Paranoia DAE library - (C) Monty

Check http://cdrdao.sourceforge.net/drives.html#dt for current driver tables.

Using libscg version 'ubuntu-0.8ubuntu1'

1,0,0 : QSI     , CDRW/DVD SBW-242, UD22

Paranoia Mode

Paranoia mode is interesting as it provides some repair of the ripped audio, from http://www.linuxcommand.org/man_pages/cdrdao1.html

--paranoia-mode mode
        Sets the correction mode for digital audio extraction.
        0: No checking, data is copied directly from the drive.
        1: Perform overlapped reading to avoid jitter.
        2: Like 1 but with additional checks of the read audio data.
        3: Like 2 but with additional scratch detection and repair.

        The extraction speed reduces from 0 to 3.

Below is the code that I pulled from the site and modded by adding the cddb_servers and cddb_timeout lines. This code is used to create the $HOME/.cdrdao file.

#---$HOME/.cdrdao --#
write_buffers: 128
write_device: "/dev/sr0"
write_driver: "generic-mmc"
read_device: "/dev/sr0"
read_driver: "generic-mmc"
read_paranoia_mode: 3
write_speed: 8
cddb_servers: "http://cddb.cddb.com:80/~cddb/cddb.cgi","http://sc.ca.us.cddb.com:80/~cddb/cddb.cgi","http://sc2.ca.us.cddb.com:80/~cddb/cddb.cgi","http://sj.ca.us.cddb.com:80/~cddb/cddb.cgi","http://sj2.ca.us.cddb.com:80/~cddb/cddb.cgi","http://us.cddb.com:80/~cddb/cddb.cgi"
cddb_timeout: 10

A few good command line options for cdrdao

I also fire the cdrdao command with the options --with-cddb to include the text info onto the burned CD and --eject to eject the CD on a completed burn.

sudo cdrdao copy --with-cddb --eject

This code can be put in a bash script. I created cdcopy.sh to make it simple to fire off from the command line.

Download as cdcopy.sh.txt -> cdcopy.sh

Rename cdcopy.sh.txt to cdcopy.sh, put it in your home directory and run chmod 755 on it to make it executable…

chmod 755 cdcopy.sh
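The attached file is what I actually use; a minimal sketch of what such a wrapper can contain is just the copy command from above…

#!/bin/bash
# Copy the CD in the drive, pulling disc info from CDDB and ejecting when finished
sudo cdrdao copy --with-cddb --eject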

 

So far I have used this method to burn about 10 CDs in the first week since I got this working. I tested them out in a CD player and ripped them with both Media Player and iTunes; all worked well!

UNIX and DOS endlines

I had a moment where I forgot about the entire UNIX and DOS endline incompatibility issue. I grabbed the autosuspend script with copy and paste, brought it into Emacs on Windows, saved it to my /files/public folder on the server and tried to execute it. Lots of "$'\r': command not found" errors.

The solution is to use dos2unix to convert the endlines, if you don’t have it just…

sudo apt-get install dos2unix

Then do dos2unix filename and it will modify the file in place, which is convenient, but beware of this default behavior. It does have other options, which can be explored using dos2unix --help.
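If you would rather not have the original modified in place, dos2unix can write the converted output to a new file instead, for example…

dos2unix -n oldfile.sh newfile.sh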

Dos2unix has one and only one job: to remove the CR-LF ( Carriage Return-Line Feed ) pair and leave just LF ( Line Feed ), as UNIX/Linux wants it to be. If a file acts screwy when brought in from Windows it is most likely this issue. I even had to do it on the autosuspend.conf file!

You can always check a file with the command

cat -e filename

BAD, Windows/DOS example…

#!/bin/bash^M$
^M$
# Source the configuration file^M$
. /etc/autosuspend.conf^M$
^M$

GOOD , UNIX/Linux example….

#!/bin/bash$
# Source the configuration file$
. /etc/autosuspend.conf$
$

The ^M$ is DOS; a bare $ is UNIX.

The Linux File System In General

A website that has an overview of the Linux file system can be found at…

http://www.tldp.org/LDP/intro-linux/html/chap_03.html

 

Samba on a Linux Server

Samba

Back up /etc/samba/smb.conf before toying with it! Copy it to something like /etc/samba/smb.orig for the original, and use .bak for copies of files that you are modding along the way to getting this working. I admit Samba was a bit of a pain to get working; I fussed around a bit on the server and the Windows machines until success occurred.
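The backup itself is just a copy…

sudo cp /etc/samba/smb.conf /etc/samba/smb.orig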

One mistake I made was to name the shares by the paths as they appear on the server. Bad idea: Microsoft Windows did not like the forward slashes and denied access to the folders. Slashes, and perhaps other non-alphanumeric characters, are a no-no in the share names.

Make Folders on the Server

I created folders named /files/public and /files/erick on the server; more can be added for additional users. What I am doing with the folders is backing up user profiles from Windows machines into the /files/user folders. The public folder is going to hold things like install files for the Windows machines, anti-malware and other tools, a C compiler, DOS-UNIX equivalent tools and so on.

I executed the following commands on the server…

sudo mkdir /files
cd /files
sudo mkdir public
sudo chmod 777 public
mkdir erick

I believe I did a chmod to 777 on /files as well. I made the erick directory with my own credentials, so I am the owner. The directory is created as 775 by default…

drwxrwxr-x 2 erick erick 4096 Dec 10 21:12 erick

Later on I created a renee folder. Same drill: after I created the account, I did an su to the user renee and ran mkdir renee under /files.

You need to create a Samba password for yourself and any users. Make it the same as the password that you use to log into the Windows machines; this is especially important if you want to access home folders.

The command for adding a Samba user and password is…

sudo smbpasswd -a user

Linux Users

While on the users topic adding a Linux user with a home directory is accomplished with the following command…

sudo useradd -d /home/username -m username

Add the password, and don't skip this; if you forget to do this it will cause problems down the road and it might take a while to figure the problems out.

sudo passwd username

There is a command that can copy the contents of the skeleton directory, /etc/skel, into a user's home directory. This sets up the default files and folders. Normally this happens when you use the -d /home/username -m options on useradd, but if you create a user without a home directory and add one later the following command may be helpful…

mkhomedir_helper username

I followed the method above to add a user renee and then created a /files/renee directory on the server.

Editing the smb.conf file

For the following, I opened my /etc/samba/smb.orig and /etc/samba/smb.conf files in the Emacs editor and differenced them. The gray lines and sections show the changes; I have highlighted them with red rounded rectangles for clarity. The biggest change is at the bottom of the file, where I added code to allow access to the /files/public, /files/erick and /files/renee directories.

Global Settings Changes in smb.conf
Changes under Global Settings in smb.conf
Authentication Section changes in smb.conf
Changes under Authentication in smb.conf
Share Definitions sections changes in smb.conf.

This is optional and will allow the home directories of the users to be made accessible with read/write access on the network. In this section the changes are for the most part the uncommenting of the grayed out lines that you see below. I think the only change beyond that was setting read only = no.

Share Definitions sections changes in smb.conf
Section added to tail of smb.conf for user defined directories

Follow this example to add your own directories to be accessible from the Windows network.

Don’t use any slashes in the names in the [brackets]. I imagine a lot of non-alphanumeric characters will make this fail; slashes were my problem. I was trying to be clever and using things like [/files/erick]. I also went to using an underscore instead of a space in the names. This makes it work better from the Windows CLI and scripts, as a space does not always translate well. I have had issues with scripts where it takes the first part of the folder name and thinks the 2nd part is a switch to the command or something, resulting in failure. Basically the DOS-like Windows CLI (Command Line Interface) environment does not like spaces!

I have not tried setting browsable to no. I imagine the share can then only be accessed by knowing its name, probably by navigating using the CLI from Windows. This would be acceptable for the two named directories as they are only backup directories and I don't imagine I would have to browse them often.

Section added to tail of smb.conf for user defined directories
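Since the above is a screenshot, here is a rough example of the kind of stanza involved, using the share names and paths that show up in the smbclient listing further below; treat it as a sketch rather than my exact settings…

[Public]
   comment = Public Files at /files/public
   path = /files/public
   browseable = yes
   read only = no
   guest ok = no

[Erick_Backup]
   comment = Erick's Files at /files/erick
   path = /files/erick
   browseable = yes
   read only = no
   valid users = erick
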
Restart

Samba needs to be restarted any time you change the smb.conf file. Use the command….

sudo service smbd restart

…to restart.

Windows Machine

The Windows machine needs to be set to the same workgroup. It is best to have the same user names and passwords for both the Windows users and the Samba users; in this manner everything will work, including home folder sharing. When you make changes, sometimes you have to log out and back in to the Windows user for them to take effect, or else you get errors like the folder not being accessible and others like it about permissions. Windows will prompt for a username and password to access folders as well, especially if the users and passwords do not match between Windows and the Samba server.

smbclient command

Running smbclient -L servername from the server is a good sanity check that the shares are showing up and that the server actually sees the Windows network. If this looks good generally you are in business with Samba at least from the server side.

erick@ubuntuserver:/etc/samba$ smbclient -L ubuntuserver
Enter erick's password:
Domain=[MSHOME] OS=[Unix] Server=[Samba 3.6.3]

        Sharename       Type      Comment
        ---------       ----      -------
        homes           Disk      Home Directories
        print$          Disk      Printer Drivers
        Erick_Backup    Disk      Erick's Files at /files/erick
        Renee_Backup    Disk      Renee's Files at /files/renee
        Public          Disk      Public Files at /files/public
        IPC$            IPC       IPC Service (ubuntuserver server (Samba, Ubuntu))
        erick           Disk      Home Directories
Domain=[MSHOME] OS=[Unix] Server=[Samba 3.6.3]

        Server               Comment
        ---------            -------
        RENEECOMPUTER        Renee's Computer
        UBUNTUSERVER         ubuntuserver server (Samba, Ubuntu)

        Workgroup            Master
        ---------            -------
        MSHOME               RENEECOMPUTER



smbstatus command

Executing smbstatus from the server command line can tell you what computers are connected and if any files are locked. Try executing it while file operations are in progress to see how it behaves; after seeing it in operation, what is going on becomes obvious for the most part. Without any computers connected to Samba folders, nothing interesting is reported. This means the tool won't be much help troubleshooting Samba if you can't even connect to the folders, but it may be of use for issues that arise after everything has been working OK. I also have a script that runs and allows the server to shut down when idle; it executes smbstatus as a test to see if any computers are using Samba, so the server won't shut down while Samba is in use.

It has command line options which I haven’t explored much myself yet.

For the man page on smbstatus

https://www.samba.org/samba/docs/man/manpages/smbstatus.1.html

 

The next topic in this series is…
Installing OwnCloud rounds out the server
Additional

 

There is a good YouTube tutorial online that runs through the basics of setting Samba up on Ubuntu Server 12.04. It worked for me.

Configure Static IP and installing NTP

One of the first steps when configuring a server post-install is to set up a static IP address. A resource that I followed to remember how to do it is linked below. The following instructions will vary based on your router; this is just a guideline.

How to Make an Ubuntu File Server With Samba

The following are the mods to the network config file using the nano editor. You can use pico or vi, or if you really want to you could move the file off the computer using FTP, edit it, and put it back. But I figure it is best to edit most things in place.

But make a backup first of a critical file like this one…

sudo cp /etc/network/interfaces /etc/network/interfaces.bak

then edit the file…

sudo nano /etc/network/interfaces

 

I found the broadcast and netmask by using the ifconfig command. The router address (gateway) I knew from installing the router; you can look it up in your router admin page. The network is the same address as the gateway with the last digit set to zero, in my case at least. The address is what I want the static IP of this server to be; 10 works OK for the last octet. 192.168.1.1 is the router, so 192.168.1.10 is the server.

ifconfig Output

For me, I commented out the line for DHCP, added the line

iface eth0 inet static

…and added the right values for address ( my static IP), netmask, network, broadcast and gateway…

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
# iface eth0 inet dhcp
iface eth0 inet static
address 192.168.1.10
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
# nameservers
dns-nameservers 8.8.8.8 8.8.4.4

Set DNS

There is a little trick I found somewhere online for putting name servers right into the interfaces file. Use Google’s DNS 8.8.8.8, 8.8.4.4 or use the ones provided by your ISP. You can usually find your ISP’s name-servers by looking at your router settings.

dns-nameservers 8.8.8.8 8.8.4.4

After the static IP is set restart the network…

sudo service networking restart

or, if the machine is rebooted, the changes will take effect.

Verify All is Well

Ping Google…

ping www.google.com

use ctrl-c to stop the pinging. It should give this kind of output if all is well…

erick@ubuntuserver:/etc/samba$ ping www.google.com
PING www.google.com (173.194.123.51) 56(84) bytes of data.
64 bytes from lga15s47-in-f19.1e100.net (173.194.123.51): icmp_req=1 ttl=53 time=37.9 ms
64 bytes from lga15s47-in-f19.1e100.net (173.194.123.51): icmp_req=2 ttl=53 time=37.6 ms
64 bytes from lga15s47-in-f19.1e100.net (173.194.123.51): icmp_req=3 ttl=53 time=34.6 ms
64 bytes from lga15s47-in-f19.1e100.net (173.194.123.51): icmp_req=4 ttl=53 time=37.9 ms
64 bytes from lga15s47-in-f19.1e100.net (173.194.123.51): icmp_req=5 ttl=53 time=37.5 ms
^C
--- www.google.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 34.649/37.151/37.961/1.272 ms

Or run the update and upgrade commands used in the earlier installation post again to see if all is well; they should complete without error.

sudo apt-get update
sudo apt-get upgrade

Since you executed them earlier ( see previous post) not much will happen but it is a good validation that the static IP is working correctly.

Installing NTP

Install NTP so that the computer's time can be synced with the network.

sudo apt-get install ntp
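Once it is installed, a quick way to check that the daemon is actually syncing against some time servers is…

ntpq -p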

Read more on ntp

Next, installing SSH and connecting beyond your LAN to the outside world…

Remote Operation of Server