When I initially built the server using a Dell Dimension 4200, I added 1GB of RAM on top of the 512MB that was factory installed. The board can support up to 2GB, but 1.5GB seems sufficient for what I am doing. One of the first steps was to run MEMTEST by booting off of a Linux CD that I had lying around. The test ran overnight (15+ hours) with no problems. It's always a good idea to run MEMTEST after any memory change.
Second Hard Drive
I removed the floppy drive and added a second 120GB hard drive. I also replaced one of the CD drives with a DVD reader, since Ubuntu Server 12.04 gets burned onto a DVD and I needed to boot from it. The other option would be to boot from a USB drive. I took the IDE connector off of one of the CD drives and used it for the secondary hard drive, mounted in the floppy drive bay.
Swap 120GB hard drive for 500GB
I was soon well on my way to filling up the primary 200GB drive, so it was time to put in a bigger secondary hard drive in preparation for the future.
I have installed the 500GB drive in the server and formatted it for use with Linux. Linux can use a drive formatted as NTFS, but I formatted it as EXT4 so that disk checks and fixes can be more precise. EXT4 can handle extremely large drives, up to a 1EB partition size, an amount of data I cannot even imagine! EXT3 is good up to 32TB partitions, which is still very big!

The new drive will give me much more space, as the main 200GB drive is almost full. I will move some files onto it, mostly the backups from the other computers at home; that is the primary goal for the second drive. Linux has a volume manager called Logical Volume Management (LVM), and the primary drive is managed using this feature. In theory I can create a "snapshot" of the drive and copy the image onto an external backup drive or the second drive, but I have not considered doing this yet. The primary drive contains the OS and whatever software I have loaded, which doesn't take up much space. The files that I have loaded onto OwnCloud take up a good deal of space, but 200GB will be plenty for the primary drive for a while.
To LVM or not?
At first I was going to join the first and second drives into one large "logical" drive using LVM. But there is a risk in having the system treat them as one 500GB + 200GB = 700GB logical drive: if either disk fails, it can ruin the entire logical drive composed of both. Losing everything when one disk out of two fails is a bad trade, so I will leave the drives mounted separately and not as one big logical volume.
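A minimal sketch of the separate-mount route, assuming the new drive shows up as /dev/sdb with one partition (check with sudo fdisk -l; the mount point name is my choice):

sudo mkfs.ext4 /dev/sdb1
sudo mkdir /mnt/data
sudo mount /dev/sdb1 /mnt/data

…and a line like this in /etc/fstab mounts it at every boot:

/dev/sdb1 /mnt/data ext4 defaults 0 2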
I do a lot of baking of fresh homemade bread, and I play around with time lapse photography from time to time. It seemed natural to put together a time lapse video of bread dough rising.
How it was done
I have used the Firestorm fswebcam program for Linux to trigger webcams to take periodic frames to monitor my house when I was away last winter. Being able to take periodic frames also makes fswebcam easy to use for time lapse photography.
Basically, for this GIF animation the camera is triggered 10 times per hour. I run it under Linux using a Bash shell script that time and date stamps the images when they are saved and creates folders based on the date. Then I open the GIMP graphics editor and use File -> Open as Layers to bring all the images in, 140 in this case. Then Filters -> Animation -> Optimize (for GIF) creates the animation. I then save it as a GIF with a 100ms delay between frames and looping allowed. I did find another way to create videos, described in the How To Make a movie section (the part about doing it via SSH on the Raspberry Pi) on the site How To Capture Time-Lapse Photography With Your Raspberry Pi and DSLR or USB Webcam.
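The same assembly can also be done from the command line with ImageMagick's convert tool instead of GIMP; this is not what I used for this animation, just a sketch. The wildcard assumes the date-named folders described above, and -delay is in hundredths of a second, so 10 gives the 100ms spacing while -loop 0 allows endless looping:

convert -delay 10 -loop 0 20*/*.jpg bread-rising.gif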
Bread Dough
The bread dough is a standard type of dough that I use often. It starts with 100g water and 100g white unbleached flour with a pinch of yeast as a poolish. This is left in a container overnight. Then 2 cups of flour, approximately 10 grams of salt and another pinch of yeast are added and thoroughly mixed. Water is added to the poolish; I start with about 1/2 cup. It is all mixed together, adding more water if needed. It forms a dough ball that is worked for 1-2 minutes. Then I let it sit for 15 minutes. Water is contained in the starch bonds, and this water is released during the working of the dough, with a bit of a time delay; it keeps releasing water for a while after it has been worked. Allowing it to rest for 15-30 minutes lets the water come out of the bonds, and at that point you can judge by feel whether the water/flour ratio is right. Then I let the dough rise in a bowl coated with olive oil. My standard practice would not be to let it rise in open air overnight; effectively this dough has over-proofed, but to take the pictures I decided to just let it go and do its thing. Normally it would be punched down a few times, and if I am not ready to bake, it might go in the fridge overnight.
How did it come out by just letting it go and rise on its own? Surprisingly, the end product was OK. I baked it right in the Pyrex bowl and it was a fairly good bread after all, baked at 400 F for about 40 minutes. I preheated the oven with small bowls of water in it to add moisture, leaving them in while the bread baked. This enhances the crust of the bread.
Technical details on capturing the frames
fswebcam
I used fswebcam to capture the images. It may have to be compiled from source code to get a recent version. Below are my notes related to fswebcam. I had a bit of a hard time getting it to run last year, but the essence of what I had to do is captured below.
————————————————————————————
fswebcam – Small and simple webcam software for *nix.
This is the program used to generate images for a webcam. It captures a number
of frames from any V4L or V4L2 compatible device, averages them to reduce noise
and draws the details on it using the GD Graphics Library which also handles
compressing the image to PNG or JPEG.
Installing fswebcam
sudo apt-get install fswebcam
Alternatively, install via the DEB packages below if you want a newer version than apt will install, especially if apt installs the 2009 version, which it will do on an older version of Ubuntu Linux. Try the latest one that works; I was able to get a new version that did not complain about missing packages.
I first ran fswebcam on Ubuntu 10.04 and ran into an issue with an old webcam (the SGRBG8 palette was not supported and I got an unsupported palette error), plus the advertised features of fswebcam, such as labeling the photos and printing a time stamp on them, were not working. That feature was probably in the works and didn't make it into the release, or something in my installation was not supporting it. But I was able to get the 20101118 version installed via a DEB package above.
If using dpkg to install one of the deb packages fails, or in case you want to work through compiling this code, follow the guidelines below, which do not cover all cases.

I did compile from scratch years ago to get a newer version of fswebcam installed, beyond what the package manager would install. I have found that dpkg will install version 20101118 on Ubuntu 10.04 without complaint. Beyond that version, dpkg will complain about missing dependencies. I was able to compile the version labelled 20110707 and that ran on 10.04. To find the version of fswebcam, use…
fswebcam --version
The versions currently run up to 20140113, the trusty release, as of 11/26/2014.
Installing GD Library, do this before installing fswebcam
This part gave me a bit of a hard time; as my notes below state, there were a few failed attempts to get fswebcam up and running.
sudo apt-get install libgd2-xpm-dev
./configure --prefix=/usr
make
sudo make install
NOT SURE IF THIS IS NEEDED
But I actually tried this first, instead of the command sequence above…
Downloaded libgd-2.1.0
cd to the libgd-2.1.0 directory
then ran…
./configure
make
sudo make install
Then I tried to compile fswebcam again and it complained about a missing JPEG GD package, so I did the GD compile again with this command sequence instead…
./configure --with-jpeg --with-png --with-freetype
make clean
make
sudo make install
FAILED
Tried this…
install php-gd
FAILED
So I think the first method of configure, make, install with libgd2-xpm-dev is the one to try first!
I only put the failed attempts here because I am not sure if doing them first pulled in JPEG, FreeType or something else that made things work once libgd2-xpm-dev was loaded.
Compile and Install fswebcam
Download the source code from http://www.sanslogic.co.uk/fswebcam/ into a usefully named directory under your home folder, and use tar to extract it.
tar xvzf fswebcam-DATECODE.tar.gz
It is best to use sudo make install so that the files can be put where they need to go.
Run the following commands in the source folder to build and install fswebcam:
./configure --prefix=/usr
make
sudo make install
Its only requirements are that the GD library be installed with JPEG, PNG and FreeType support.
Checking to see if the webcam is being read by the PC
Command to see what devices are hooked up to the USB for video…
ls /dev/video*
There is also a program called Cheese that can be installed via the package installer for Ubuntu. It lets you see live video from the webcam, which makes it easy to adjust distance, angles, lighting and focus while setting up the shot.
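Once a /dev/videoN device shows up, a quick test shot verifies the camera works end to end. A minimal sketch; the device path and resolution are assumptions, so adjust to what ls reported:

fswebcam -d /dev/video0 -r 640x480 test.jpg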
Autocam/Breadcam
I created autocam.sh to be called periodically by watch in order to snap a photo, name it with a year-month-day:time .jpg stamp, and put it in a folder created and labeled for the date. I then used it to copy the photos to a Wuala-mirrored path, which automatically loaded them onto a Wuala cloud drive. The script below has the Wuala stuff ripped out. At the bottom of this post there is an explanation of how to use Wuala as an NFS drive.
For example, to call the bash script every 10 seconds…
watch -n 10 bash autocam.sh
Remember to chmod u+x autocam.sh so that the script is executable.
Autocam.sh code
#!/bin/bash
# Run with: watch -n 3579 bash autocam.sh for an hourly rate (255 frames to image)
# or with: watch -n 350 bash autocam.sh for a 10x hourly rate
# Target layout: /%Y%m%d/hour-%H/%Y-%m-%d-%H:%M:%S

# now is the date stamp used in the jpg filename
now=$(date +"%Y-%m-%d-%H:%M:%S")

# dir is a folder named for the date of the picture;
# mkdir -p avoids the error from re-creating the same dir on every call
dir=$(date +"%Y%m%d")
mkdir -p "$dir"

# example filename: 20140101/2014-01-01-13:47:00.jpg
# The capture line itself did not survive into this post; a minimal
# stand-in (device, resolution and skip count are assumptions):
fswebcam -d /dev/video0 -r 640x480 -S 5 "$dir/$now.jpg"
To make a movie via the Linux command line, see the How To Make a movie section (the part about doing it via SSH on the Raspberry Pi) on the site…
From Some fun with a webcam, I like how the code to run the camera sets the font as white against a transparent footer banner. The code in bold makes it happen…
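The bold code itself is on that site, but from memory the effect comes from fswebcam's banner options, something along these lines; the colour values are #AARRGGBB with an alpha channel, and all flags here should be checked against fswebcam --help on your version:

fswebcam -r 640x480 --banner-colour '#FF000000' --line-colour '#FF000000' --text-colour '#FFFFFF' --timestamp '%Y-%m-%d %H:%M:%S' output.jpg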
I've been interested in a way to burn CDs right from the command line, with the possibility of using one of my Linux computers as a dedicated burn station. Ideally I could throw in a CD, it would detect it, start the copy process and eject when done. This post is about a small step in that direction.
I researched it a bit and tried the example given by this page….
But I modded the $HOME/.cdrdao file a bit to include a list of CDDB servers that I pulled off of http://roozster.info/eac/06.html, plus a timeout for CDDB set to 10 seconds.
The .cdrdao file goes in your home directory and acts as a configuration file for the cdrdao program. The help site above goes into details, but briefly: the write buffer at 128 means 128 seconds of audio, which at an 8x burn gives 16 seconds of under-run protection. The device has to be set correctly; my CD burner is at /dev/sr0. According to the help.ubuntu site above, running sudo cdrdao scanbus will produce an output that yields the device name. For me it didn't yield a /dev style name but rather a 1,0,0 bus attachment readout. But I hovered over the CD in the file manager and found the device mount point from there, which was /dev/sr0.
Output from running sudo cdrdao scanbus
Cdrdao version 1.2.2 - (C) Andreas Mueller <andreas@daneb.de>
SCSI interface library - (C) Joerg Schilling
Paranoia DAE library - (C) Monty
Check http://cdrdao.sourceforge.net/drives.html#dt for current driver tables.
Using libscg version 'ubuntu-0.8ubuntu1'
1,0,0 : QSI , CDRW/DVD SBW-242, UD22
--paranoia-mode
Sets the correction mode for digital audio extraction.
0: No checking, data is copied directly from the drive.
1: Perform overlapped reading to avoid jitter.
2: Like 1 but with additional checks of the read audio data.
3: Like 2 but with additional scratch detection and repair.
The extraction speed reduces from 0 to 3.
Below is the code that I pulled from the site and modded by adding the CDDB server list and a CDDB timeout; it is used to create the $HOME/.cdrdao file…
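For shape, a .cdrdao settings file along the lines described above would look something like this; the option names are from the cdrdao man page, while the server and values here are illustrative:

write_device: "/dev/sr0"
write_buffers: 128
write_speed: 8
cddb_server_list: "freedb.freedb.org"
cddb_timeout: 10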
Rename cdcopy.sh.txt to cdcopy.sh, put it in your home directory and run chmod 755 on it to make it executable.
chmod 755 cdcopy.sh
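I won't reproduce cdcopy.sh here, but the heart of such a script is a cdrdao copy invocation. A hedged sketch for a single-drive machine; cdrdao reads the disc to a temporary image, then asks for a blank:

cdrdao copy --device /dev/sr0 --speed 8 --buffers 128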
So far I have used this method to burn about 10 CDs in the first week since I got this working. I tested them in a CD player and ripped them with both Media Player and iTunes; all worked well!
The Linux File System In General
A website with an overview of the Linux file system can be found at…
I found another article on auto-suspending that requires only a simple bash script, which I have placed in /etc/cron.hourly.
WordPress does not like me uploading autosuspend.sh for security reasons and will give an error, so I have the script named autosuspend.sh.txt here -> autosuspend.sh. The file goes in /etc/cron.hourly named just autosuspend; cron won't run it if the filename has an extension.
The file must be owned by root and executable. So you have to use the following commands before running it.
sudo chown root:root autosuspend.sh
sudo chmod u+x autosuspend.sh
I used it as autosuspend.sh and ran it a few times manually with sudo ./autosuspend.sh, just to see it run properly, before renaming the file to autosuspend and placing it in /etc/cron.hourly.
And autosuspend.conf, named autosuspend.conf.txt here -> autosuspend.conf, goes in the /etc directory.
Both are UNIX formatted files, modify them accordingly for your use.
syslog
CRON logs to /var/log/syslog when it runs autosuspend, so you can execute…
tail /var/log/syslog
…to see the traces and check that everything is OK; the autosuspend script gives good, useful error messages. It also sends an email on the server to root@yourservername every time it runs. You can use mailx from the CLI (or some other program) to read the local mail. Mailx is very simple and good enough to quickly page through CRON emails, using return to move down through the unread ones.
The script, together with its autosuspend.conf file, seems to work; at least it has run fine so far with some mods.
Files
Once again below are the script and conf file from those sites, labeled with a txt extension. I put them here in case those sites disappear for some reason. This is good knowledge and it works so well, I’d hate to see it get lost.
The script taken from the Archlinux page requires systemd and uses systemctl suspend to suspend the machine; it is named autosuspend.sh.txt and formatted for UNIX/Linux.
The original autosuspend.sh that uses pm-utils, from the German ubuntuusers.de site, is named pm-utils_autosuspend.sh.txt, and the autosuspend.conf is named autosuspend.conf.txt. Both are formatted for UNIX/Linux.
I decided to modify the autosuspend.sh file rather than loading the package that it needed (systemd) to execute systemctl suspend, which is what the script file from the first article uses. The other option would be to use pm-utils as the second German article has the original autosuspend.sh formatted to use. For more info on pm-utils see https://wiki.archlinux.org/index.php/pm-utils
Instead of auto suspending, I decided that since the server starts fast enough from a cold boot (17 secs. to usable), why not just replace the…
systemctl suspend
…line with…
shutdown -P +5
This will shut the server down with a 5 minute warning that also acts as a guard band. I say guard band because it guards against a potential loop: if I play with the script more and make a mistake, I do not want to wind up with a server that boots, jumps to the script and starts shutting down immediately. I know I put the file in /etc/cron.hourly, so it will only kick off every hour, but I am just guarding against unforeseen things to be safe, and it's only 5 minutes of delay. If it goes to shut down while I am testing something, I have 5 minutes to execute shutdown -c to cancel.
I also put the line…
ethtool -s eth0 wol g
…before the shutdown line, because the same line, which I had put into rules.d, was not setting wake-on to g; when I ran ethtool, it was staying at d. Not sure why, but since I will be allowing this server to shut down by itself 90%+ of the time, I opted to put it right in the shutdown script. On second thought, I also put that line into /etc/rc.local (which runs at start up) as well, so it is armed even if I shut down manually! See the post Wake On Lan via Ubuntu Linux for more info on Wake on LAN.
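So the tail of the modified script, where the original called systemctl suspend, ends up looking something like this sketch:

ethtool -s eth0 wol g   # re-arm wake-on-LAN before powering off
shutdown -P +5          # power off in 5 minutes; cancel with shutdown -c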
Here is the modded autosuspend, called autoshutdown.txt. Remove the txt extension and place it into the /etc/cron.hourly folder; it is formatted for UNIX.
I forgot about the UNIX and DOS endlines being different while I was working on this. See my post on UNIX vs DOS file endlines, as I had a bit of brain fog and struggled a bit with this while working on the autoshutdown script.
Winbind
Once I got the autoshutdown script running, I realized that the Linux machine was not able to resolve the names of the Windows machines on the network. The server could only ping the Windows machines by IP address and not by name! I saw this after I had logged out of the server and logged in a while later: the shutdown script had recorded failed pings in syslog while checking whether the server was idle. The script correctly saw that no one was logged in by executing who | wc -l, which yielded a zero, and next it was testing for attached clients (the Windows machines named in the autosuspend.conf file) using ping $i -c1. And ping was failing, as the names were unreachable.
arp -a could see all the machines by IP address from both Linux and Windows.
net view on the Windows machine could see all the machines by name.
smbstatus could see every computer by name fine from my Linux server machine. And since I had installed Samba, the server's name is visible from the Windows PCs.
Samba must send out NetBIOS information about itself; I see in the Samba config file where it can act as a WINS server as well.
In order for the autosuspend/shutdown script to work, pinging by name is a must. To fix this, install winbind and configure /etc/nsswitch.conf.
sudo apt-get install winbind
In /etc/nsswitch.conf, add wins to the end of the line that starts with hosts:.
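On a stock Ubuntu install, the resulting line looks something like this; the mix of services before wins may differ on your system:

hosts: files mdns4_minimal [NOTFOUND=return] dns wins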
The autosuspend script does a test to see if anyone is accessing files using Samba via smbstatus. Smbstatus is great for seeing what is going on and is good for troubleshooting Samba when you can make connections. Once you play with it while various computers are accessing the server, it is interesting to understand what it is telling you.
But the script is just looking to see if computers are accessing Samba shares. The autosuspend.conf shows an IP address for the test, using $SAMBANETWORK as that value and grepping on it. I am not sure how this works, as I don't see any IP numbers when I run smbstatus. So for now I decided to use the word Public in autosuspend.conf instead of 192.168.1. Most likely, if a computer on my network is accessing Samba shares and its name is not one of the "clients" (my own machines at home, which are listed by name), it is only going to be accessing the Public Samba share. For now this seems to work!
Test used in autosuspend script to look for machines accessing Samba…
/usr/bin/smbstatus | grep $SAMBANETWORK | wc -l
Other conditions for shutdown
The other two tests that autosuspend does, IsRunning() and IsDaemonActive(), I have not validated; that is a TBD. So far, so good: the server has not shut down unexpectedly, and I have not seen it held up by IsRunning() yet based on its tests. If something is running when a shutdown occurs, a SIGTERM is generated as the system goes down, so anything in process should terminate cleanly, in theory. I'd like to test for OwnCloud activity at some point. I have shut the machine down on purpose with an OwnCloud file transfer in progress and restarted a few minutes later, and the transfer picks back up. I still have to figure out a test for this, TBD.
# Turning suspend by day (8 a.m. to 3 a.m.) off
DONT_SUSPEND_BY_DAY='no'
# Automatically reboot once a week when the system isn't in use
REBOOT_ONCE_PER_WEEK='yes'
DONT_SUSPEND_BY_DAY controls suspending by blocking it out during the day, between 8 a.m. and 3 a.m.; it uses /sys/class/rtc/rtc0/wakealarm. I wasn't interested in this, so I was fine with it being carved out.
REBOOT_ONCE_PER_WEEK uses a test along the lines of $(cat /proc/uptime | cut -d' ' -f1) / 3600 / 24 >= 7 to see if the machine has been running for more than one week, and then it reboots the next time it is idle. This is not of interest to me, as my machine shuts down rather than suspends, so it is not needed either.
Interestingly, I do see a test for whether power management is supported in the original autosuspend.sh that relies on pm-utils. This does not exist in the modified script that uses systemctl; perhaps it is unnecessary because calling systemctl is fine without it, or it was omitted because no such test exists when using systemctl.
/usr/bin/pm-is-supported
Basically I am fine with the simpler script; if I need to add features back in, so be it!
I have been using the shutdown script for over a month with no issues so far.
Follow Up
I have been using this code on two servers, one for almost three years and one for a year. The older one does not suspend, so it requires a shutdown; the newer one suspends nicely via systemctl suspend.
I decided to modify the code a bit to allow a hybrid-sleep and also allow for restarts when the system requires them. Read more about this here….
Autoshutdown Code Modded to hybrid-sleep and allow required restarts
Sometimes it is nice to have an FTP server. You might have Samba and ownCloud, but sometimes you really need FTP to do something. It is the right tool at the right time, and I can't imagine running a server without FTP installed.
sudo apt-get install vsftpd
Edit the configuration file
Back it up first then do an edit
sudo cp /etc/vsftpd.conf /etc/vsftpd.orig
sudo nano /etc/vsftpd.conf
uncomment local_enable=YES
uncomment write_enable=YES
In this manner you will be able to read and write to your home directory. With SSH and FTP you can do just about anything remotely on your server. You can FTP a file up to your home directory (put) and move it anywhere, and likewise in the opposite direction (get).
For example, I downloaded the zip file for the OwnCloud Music App on a Windows computer, FTP'd it to the Linux server into my home directory, then moved and unzipped it into the proper directory using SSH. Zip/unzip is not loaded by default with the Ubuntu Server disc; to get it, see below.
This is powerful, and with that power comes danger. You don't want just anyone to be able to SSH and FTP in, so be careful when opening these ports. I get "hits" on port 22 for SSH a lot; I don't even open port 21 for FTP outside of my LAN. By hits, I mean I can see IP addresses in my router's log that are from outside the US, found by looking them up or browsing to them. Sometimes pinging the IP gets a return from another IP. These cyber-criminals try to get in on open ports.
Editing shell or config files on a Windows machine presents you with the CR-LF versus LF issue, for Windows and UNIX respectively. Scripts won't run, and problems happen with config files, when they are not in the right format. I frequently encounter this when I copy and paste some code from the Web into eMacs or Notepad and then save it on the Linux server. Then I need to execute dos2unix on it to make it run right.
UNIX and DOS endlines
I had a brain dead moment where I forgot about the entire UNIX and DOS endline thing while I was working on getting the server to auto shutdown. I grabbed the autosuspend script with copy and paste, brought it into eMacs in Windows, saved it to my /files/public folder on the server and tried to execute it. Lots of "$'\r': command not found" errors.
The solution is to use dos2unix to convert the endlines, if you don’t have it, just do…
sudo apt-get install dos2unix
Then do dos2unix filename and it will modify the file in place, which is convenient, but beware of this default behavior. It does have other options, which can be explored using dos2unix --help.
Its one and only job is to remove CR-LF (Carriage Return-Line Feed) and leave just LF (Line Feed), as UNIX/Linux wants it to be. If a file acts screwy when brought in from Windows, it is most likely this issue. I even had to do it on the autosuspend.conf file!
You can always check a file with the command
cat -e filename
BAD example…
#!/bin/bash^M$
^M$
# Source the configuration file^M$
. /etc/autosuspend.conf^M$
^M$
GOOD example….
#!/bin/bash$
# Source the configuration file$
. /etc/autosuspend.conf$
$
The ^M$ ending is DOS; $ alone is UNIX.
Emails using ssmtp
It is great that CRON and other applications send email to root on a Linux server, which can be read simply by using mailx from the CLI. But what if you are not logging into the machine very often at all? Using ssmtp might work well for those situations. Even my Netgear N150 router has something similar for sending email: you input email account settings on it, and it will email you the log file and other information you would like at regular intervals. Ssmtp may be of interest to me for the server at some point, so I have noted it for reference.
It would be interesting and a great idea to have the server be able to send emails of certain things, issues it may be encountering.
This looks interesting, I might do this at some point….
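I have not actually set this up yet, but from the ssmtp docs the gist would be to install the package and point its config file at a mail provider; every value below is a placeholder, not my real setup:

sudo apt-get install ssmtp

Then /etc/ssmtp/ssmtp.conf gets something like…

root=myaccount@example.com
mailhub=smtp.example.com:587
AuthUser=myaccount@example.com
AuthPass=mypassword
UseSTARTTLS=YES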
The first line adds to the sources list for apt and affects the operation of the apt-get update command, so that more stuff related to OwnCloud gets applied. When I first did this, I accidentally hit the up arrow and return and pasted it in twice. The update command complained about this with a warning; the fix is to remove the extra copy from the bottom of the file in /etc/apt/sources.list.d.
Although the OwnCloud install page shows this second in line, I think I had to do it first, before the above command, or errors will happen regarding a missing key.
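I won't guess at the exact lines, since they come from the OwnCloud install page, but the general shape of adding such a repository is as follows; the URL here is a placeholder:

wget -qO - http://repo-url-from-owncloud-page/Release.key | sudo apt-key add -
echo 'deb http://repo-url-from-owncloud-page/ /' | sudo tee /etc/apt/sources.list.d/owncloud.list
sudo apt-get update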
When loading the OwnCloud repository, it failed on the first try. I forget the error, but update was failing; something was off base with my Ubuntu install and I could not update and upgrade correctly. I had to search the Internet for a fix, which involved running
sudo rm -fR /var/lib/apt/lists/*
which cleared out the lists that apt was running on then…
sudo apt-get update
…worked fine!
If you have LAMP installed (which you should), configure OwnCloud to use MySQL when the question comes up at first login at http://youraddr/owncloud.
Leave database as owncloud and localhost.
OwnCloud Apps
Some apps can be downloaded via the normal click and download/install as an administrator. But some are not available that way, for example Music.
Installing OwnCloud apps by downloading zips.
I went to install Music, which would not install via the web interface.
I had to download the zip file and put it in the folder by FTPing to the server. It is worth having vsftpd installed on the server, or at least an FTP client on the machine you are accessing the server from. With SSH and FTP it is easy to get a lot of work done.
Put the zip file at…
/var/www/owncloud/apps
zip/unzip do not come with Ubuntu server by default, use
sudo apt-get install zip
to get it. Then simply unzip the zip file in the apps folder; it will make its own folder. Then the app is installed and will appear in the menu.
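For the Music app the sequence was along these lines; the zip filename is illustrative:

cd /var/www/owncloud/apps
sudo unzip ~/music.zip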
Backup /etc/samba/smb.conf before toying with it! Copy it to something like /etc/samba/smb.orig for the original and /etc/samba/smb.bak for files that you are modding along the way to getting this working. I admit Samba was a bit of a pain to get working; I fussed around a bit on the server and the Windows machines until success occurred.
One mistake I made was to name the shares by the paths as they appear on the server. Bad idea: Microsoft Windows did not like forward slashes and denied access to the folders. Slashes, and perhaps other non-alphanumeric characters, are a no-no in the server share names.
Make Folders on the Server
I created folders named /files/public and /files/erick on the server; more can be added for additional users. What I am doing with the folders is backing up user profiles from Windows machines into the /files/user folders. The public folder will hold things like install files for the Windows machines, anti-malware and other tools, a C compiler, DOS-UNIX equivalent tools and so on.
I executed the following commands on the server…
sudo mkdir /files
cd /files
sudo mkdir public
sudo chmod 777 public
mkdir erick
I believe I did a chmod to 777 on /files as well. I made the erick directory with my own credentials, so I am the owner. A directory is created with 775 permissions by default…
drwxrwxr-x 2 erick erick 4096 Dec 10 21:12 erick
Later on I created a renee folder. Same drill: after I created the account, I did an su to log in as the user renee and ran mkdir renee under /files.
You need to create a Samba password for yourself and any users. Make it the same as the password you use to log into the Windows machines; this is especially important if you want to access home folders.
The command for adding a Samba user and password is…
smbpasswd -a user
Linux Users
While on the topic of users, adding a Linux user with a home directory is accomplished with the following command…
sudo useradd -d /home/username -m username
Add the password, and don't skip this: if you forget to do it, it will cause problems down the road that might take a while to figure out.
sudo passwd username
There is a command that can copy the contents of the skel directory, /etc/skel, into a user's home directory; this sets up the default files and folders. Normally this happens when you use the -d /home/username -m options on useradd, but if you create a user without a home directory and add one later, the following command may be helpful…
mkhomedir_helper username
I followed the method above to add a user renee and then created a /files/renee directory on the server.
Editing the smb.conf file
For the following, I opened my /etc/samba/smb.orig and /etc/samba/smb.conf files in the eMacs editor and differenced them. The gray lines and sections show the changes; I have highlighted them with red rounded rectangles for clarity. The biggest change is at the bottom of the file, where I added code to allow access to the /files/public, /files/erick and /files/renee directories.
Global Settings Changes in smb.conf
Authentication Section changes in smb.conf
Share Definitions sections changes in smb.conf.
This is optional and will allow the home directories of the users to be made accessible with read/write access on the network. In this section the changes are, for the most part, the uncommenting of the grayed out lines that you see below. I think the only change beyond that was setting read only = no.
Section added to tail of smb.conf for user defined directories
Follow this example to add your own directories to be accessible from the Windows network.
Don't use any slashes in the names in the [brackets]. I imagine a lot of non-alphanumeric characters will make this fail; slashes were my problem. I was trying to be clever, using names like [/files/erick]. I also went with underscores instead of spaces in the names. This works better from the Windows CLI and scripts, where a space does not always translate well. I have had issues with scripts where the first part of a folder name is taken as the folder and the second part is treated as a switch to the command, resulting in failure. Basically, the DOS-like Windows CLI (Command Line Interface) environment does not like spaces!
I have not tried setting browsable to no. I imagine the share can then only be accessed by knowing its name, probably by navigating from the Windows CLI. That would be acceptable for the two named directories, as they are only backup directories and I don't imagine I would browse them often.
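As a sketch of what the tail section looks like, reconstructed to match the share list shown later in this post (guest access on Public is an assumption; adjust paths and users to yours):

[Public]
   comment = Public Files at /files/public
   path = /files/public
   browsable = yes
   read only = no
   guest ok = yes

[Erick_Backup]
   comment = Erick's Files at /files/erick
   path = /files/erick
   browsable = yes
   read only = no
   valid users = erick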
Restart
Samba needs to be restarted any time you change the smb.conf file. Use the command….
sudo service smbd restart
…to restart.
Windows Machine
The Windows machine needs to be set to the same workgroup. It is best to have the same usernames and passwords for both the Windows users and the Samba users; in this manner everything will work, including home file sharing. When you make changes, sometimes you have to log out and back in to the Windows user for them to take effect, or else you get errors like "the folder is not accessible" and others like it about permissions. Windows will also prompt for a username and password to access folders, especially if the users and passwords do not match between Windows and the Samba server.
smbclient command
Running smbclient -L servername from the server is a good sanity check that the shares are showing up and that the server actually sees the Windows network. If this looks good, generally you are in business with Samba, at least from the server side.
erick@ubuntuserver:/etc/samba$ smbclient -L ubuntuserver
Enter erick's password:
Domain=[MSHOME] OS=[Unix] Server=[Samba 3.6.3]
Sharename Type Comment
--------- ---- -------
homes Disk Home Directories
print$ Disk Printer Drivers
Erick_Backup Disk Erick's Files at /files/erick
Renee_Backup Disk Renee's Files at /files/renee
Public Disk Public Files at /files/public
IPC$ IPC IPC Service (ubuntuserver server (Samba, Ubuntu))
erick Disk Home Directories
Domain=[MSHOME] OS=[Unix] Server=[Samba 3.6.3]
Server Comment
--------- -------
RENEECOMPUTER Renee's Computer
UBUNTUSERVER ubuntuserver server (Samba, Ubuntu)
Workgroup Master
--------- -------
MSHOME RENEECOMPUTER
smbstatus command
Executing smbstatus from the server command line tells you what computers are connected and whether any files are locked. Try executing it while file operations are in progress to see how it behaves; after seeing it in operation, what is going on becomes mostly obvious. Without any computers connected to Samba folders, nothing interesting is reported, which means this tool won't help much with troubleshooting Samba if you can't even connect to the folders. But it is useful for troubleshooting issues that arise after everything has been working OK. I also have a script that allows the server to shut down when idle; it executes smbstatus as a test to see whether any computers are using Samba, so the server won't shut down while Samba is in use.
It has command line options which I haven’t explored much myself yet.
At this point, I get off of the server; I mean, I disconnect the monitor and keyboard. But first, remember to configure the BIOS to ignore keyboard errors, which is important for unattended operation! I wait until at least the updates are done and I have tested the static IP before I "unhook". If you are setting up a firewall, it is best to do that sitting at the machine as well, because a mistake setting up the firewall can lock you out of connecting with SSH remotely! The firewall, set via iptables, can block or allow access to incoming or outgoing ports by passing or dropping packets. The firewall can be configured via tools such as ufw (uncomplicated firewall) to allow certain services through; IP addresses and ranges can be blocked or allowed as well. This can get complicated in a hurry. More on this later.
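As a taste of ufw, a rule set like this would limit SSH to the LAN. This is a sketch, not what I have deployed; the LAN range is an assumption matching the 192.168.1.x network used later in this post:

sudo ufw allow from 192.168.1.0/24 to any port 22
sudo ufw enable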
Logging into the machine remotely from Linux is done at the command prompt using either of these…
ssh machines-name
ssh machines-ip-address
From Windows, fire up Putty and put the machines-name or machines-ip-address in the appropriate spot. You will be presented with a CLI (Command Line Interface) prompting for a password on connection.
If the machine name doesn't work, the name is not mapping to the IP address locally; it is a DNS thing in this case. You can just go ahead and use the machine's IP, which you should have previously configured as static.
With both Windows and Linux you will get a warning the first time you SSH into the server. The warning has to do with not trusting the RSA key, which makes sense, given that it is the first time the connection is being made. The machines don't know each other, so just enter yes and they will be key-paired so that in the future you won't be presented with this question.
With SSH you can continue configuring the machine remotely. The next item on the list is Samba. If you are configuring remotely from a Windows machine, it is easy to check whether you are configuring Samba correctly. It can be tricky to get working; searching online, I found a lot of posts from folks struggling to get Samba to work.
Getting it to the outside world
So far all of this operation has occurred on the LAN. What if you want to make a website, or any other port, available to the outside world?
For me, I went into my router via its web config page and opened up port 80 to the outside world by forwarding the port, connecting the forwarded port to the local IP address of the server, along with port 22 for SSH. If you run Webmin, you can forward port 10000 for it. Now I could navigate to my external IP and see the web page of the web server from anywhere. Initially I also made port 8080 available so that I could log in to the router, but then I decided against it. I figure, why open more ports than you need? Keep it simple. How many times will I actually need to get to the router? It is mostly set it and forget it. The inexpensive Netgear N150 router has worked reliably with near perfect up-time so far.
Noip for a Static Address
Install noip2. I am not sure, my notes aren't clear, but I think I had to compile and install it after sudo apt-get install noip2 didn't work. This provides dynamic DNS support for the URL: the noip2 program runs at startup and periodically reports my ISP-assigned IP address to the noip servers, so the URL I picked out points to my server. Otherwise I would have to go to the actual IP address, and then find out what it is whenever it changes, which seems like a pain to do remotely, even while experimenting initially. Luckily my ISP does not change my address very often, so this step is optional for me. I did run noip with my last server, and I may run it for this one at some point. But the IP address stays the same for months, so it is not a pain even if I want to point a name at it. I could even do something clever like send myself an email when it changes.
Router support for noip or dyn-DNS
A firmware upgrade for my router has added support for noip, so now it would be possible to do this from the router itself. I haven't investigated yet, but check yours; it may be possible to use noip or dyn-DNS right from the router and not have to mess with the server at all.
Beware of opening ports
Having things like SSH and FTP, ports 22 and 21 respectively, open to the outside world can invite trouble. If I leave them open, my router logs routinely show attempts to access the SSH port by various IPs that trace to foreign countries, China mostly. I don't leave FTP open at all, and I am keeping SSH closed as well until I can firewall this server. For now, accessing SSH and FTP from the LAN is good enough. Ideally I want to modify iptables to allow only trusted IP addresses into SSH and drop packets from the rest as they arrive.
The attempts I see in the router log probably hammer the username and password with a bunch of guesses, or try obvious ones. These cyber-criminals are trying to jack into your machine and do whatever damage they can to the web, so be cautious.
Next do some file sharing with Windows machines using Samba…
One of the first steps when configuring a server post-install is to set up a static IP address. A resource that I followed to remember how to do it is this. The following instructions will vary based on your router; this is just a guideline.
The following are the mods to the network config file, /etc/network/interfaces, using the nano editor; you can use pico or vi, or if you really want to, you could move the file off the computer using FTP and put it back. But I figure it is best to edit most things in place.
But make a backup first; this is a critical file.
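Something like this, in the same spirit as the other config backups in these posts:

sudo cp /etc/network/interfaces /etc/network/interfaces.orig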
I found the broadcast and netmask using the ifconfig command. The router address (the gateway) I knew from installing the router; look it up in your router admin page. The network is the same as the gateway address with the last digit set to zero, in my case at least. The address is what I want the static IP of this server to be: 192.168.1.1 is the router, and changing the last digit to 10 gives the server 192.168.1.10, which works fine.
For me, I commented out the dhcp line, added the line

iface eth0 inet static

…and added the right values for address (my static IP), netmask, network, broadcast and gateway…
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
# iface eth0 inet dhcp
iface eth0 inet static
address 192.168.1.10
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
# nameservers
dns-nameservers 8.8.8.8 8.8.4.4
Set DNS
There is a little trick I found somewhere online for putting name servers right into the interfaces file. Use Google's DNS servers, 8.8.8.8 and 8.8.4.4, or the ones provided by your ISP. You can usually find your ISP's name servers by looking at your router settings.
dns-nameservers 8.8.8.8 8.8.4.4
After the static IP is set restart the network…
sudo service networking restart
or the changes will take effect when the machine is rebooted.
Verify All is Well
Ping Google…
ping www.google.com
Use ctrl-c to stop the pinging. It should give this kind of output if all is well…
erick@ubuntuserver:/etc/samba$ ping www.google.com
PING www.google.com (173.194.123.51) 56(84) bytes of data.
64 bytes from lga15s47-in-f19.1e100.net (173.194.123.51): icmp_req=1 ttl=53 time=37.9 ms
64 bytes from lga15s47-in-f19.1e100.net (173.194.123.51): icmp_req=2 ttl=53 time=37.6 ms
64 bytes from lga15s47-in-f19.1e100.net (173.194.123.51): icmp_req=3 ttl=53 time=34.6 ms
64 bytes from lga15s47-in-f19.1e100.net (173.194.123.51): icmp_req=4 ttl=53 time=37.9 ms
64 bytes from lga15s47-in-f19.1e100.net (173.194.123.51): icmp_req=5 ttl=53 time=37.5 ms
^C
--- www.google.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 34.649/37.151/37.961/1.272 ms
Or run the update and upgrade commands used in the earlier installation post again; they should complete without error if all is well.
sudo apt-get update
sudo apt-get upgrade
Since you executed them earlier (see previous post), not much will happen, but it is a good validation that the static IP is working correctly.
Installing NTP
Install NTP so that the computer's time can be synced with network time servers.
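On Ubuntu this is a one-liner; the ntp daemon starts and keeps the clock synced automatically once installed:

sudo apt-get install ntp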