Tag Archives: CRON


Cron Driven Morse Code Time Sounder

My goal was to make a new hourly chime, something with a bit more intelligence to it than just the same tone every hour. It was one of those random ideas that runs through your head while lying in bed trying to get in the mood to sleep.

A bit of history

Typically I have been using a preexisting system sound on a Linux PC to announce the top of the hour. This helps me keep track of how long I have been on the computer, and whether it is time to take a break. It also makes me aware of the time in case I am in the zone with coding or whatnot and need to know how time is flying. Once you have this, you start listening for it, even when you are in the room doing something other than computer work.

Up to now this is what I typically had in my crontab…

00 09-23 * * * aplay /usr/share/sounds/fLight__2.0/stereo/Message.wav

I restrict the hours down, just so the thing doesn’t wake me at night if I forget to turn off the PC.

Festival

Years ago I used Festival, the text-to-speech program, and made it announce the time. This works OK as long as you are at the PC. If you are away from it a bit, you don't always pick up what the machine-generated voice is saying. Morse code has an advantage over voice in that the tones are easy to pick up, even when faint. I have a ham license and have experienced this first hand, so that was my line of thinking in using Morse code to give me the time at the top of the hour. I can not only tell that it is the top of the hour, but also easily tell which hour it is. Plus it seemed like a fun twist.

I have even thought of putting something like this on my Pi, which runs 24/7 in my office. In our living room we have a bird clock with a different bird song for each hour, and that has been pretty useful at times, so that is what got me going on this whole hourly chime adventure years ago.

I was initially inclined to make my own tones, but a brief search turned up Stephen C. Phillips' site and his Morse Code Translator. Using it, I was able to create a Morse code "hour" sound file for each hour from 0-23, so the correct announcement plays every hour.

It is also possible to take any sound files, label them 0.wav through 23.wav, and get other types of sounds to chime at the top of the hour. Bird songs, train whistles, nature sounds, the possibilities are endless.

How it works

Using the Morse Code Translator, I created a series of Morse code translations from 0-23 in WAV file format. The WAV files go in a directory, morse-code-audio, under my home folder, and the following script gets called in place of the line shown above in my crontab…

#!/bin/bash

sounddir=/home/erick/morse-code-audio

# Grab the hours off of the date.
hour=$(date +"%H")

# Call the correct file based off of the hour.
aplay "$sounddir/$hour.wav"
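
The crontab entry then just points at the script instead of calling aplay directly; a sketch, assuming the script above is saved as /home/erick/bin/morse-hour.sh (the name and location are my own choice here) and marked executable:

# crontab -e entry: play the Morse code hour announcement from 9AM to 11PM
00 09-23 * * * /home/erick/bin/morse-hour.sh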

WAV Files for the Morse Code Time Sounder

I put the 0-23.wav files online, as a tar.gz.

morse-code-audio.tar

Compression, Signal Processing, Information Theory and Cryptography

Uncompressed, the 24 WAV files in the tar file total 348K; it is interesting just how compressible single-frequency tones are. All of them compress down to a 2.8K tar.gz. I had to check twice, I thought I was missing a bunch of files. But it stands to reason: a single frequency turned on and off slowly has very little bandwidth. Provided the keying is shaped, so that it is not a pure sine wave modulated by a square wave, the pulses have their corners rounded off to get the bandwidth below 100 Hz. Thinking in terms of the Fourier transform, a tone left on forever has a bandwidth of zero, and pulse-shaped on-off keying at 20 WPM is going to have a very narrow bandwidth, and is therefore highly compressible.

The opposite extreme, white noise, would not be compressed at all by gzip or any of the fancier types of compression. White noise is random and equally distributed across the spectrum, hence incompressible. But it makes for a hell of an encryption key for that very reason. The worlds of signal processing and cryptography do come together in interesting ways!
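
If you want to check the numbers yourself, a quick sketch, assuming the WAV files are sitting in the morse-code-audio directory:

# Total size of the uncompressed WAV files
du -ch morse-code-audio/*.wav | tail -1
# Size of the same files after tar and gzip, in bytes
tar -czf - morse-code-audio/*.wav | wc -c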

Resources

Stephen C. Phillips has an excellent website and blog that covers a lot of technical topics, of which the Morse Code Translator is one example. He also covers Python and the Raspberry Pi, for example.

http://blog.scphillips.com/

The featured image is from this site and it is an interesting read…

https://www.raspberrypi.org/learning/morse-code-virtual-radio/worksheet/

Automatic Server Status Page Creation Update

In January 2015 I created a post about automatically creating a status page for a Linux server that I have. Typically this is put under a restricted directory and allows you to see a snapshot of what is happening with the server. I run it by putting the scripts in the /etc/cron.hourly directory on a Linux PC and a Raspberry Pi running Linux.

It serves as a simple way to check up on the server without having to use a tool such as Webmin that requires a login. It also keeps a trail of log files that get rotated on a monthly basis, so there are always a few old ones around to track down problems and patterns in the operation.

I have found this information useful for tracking down malfunctions that can occur when setting up a server, and also when I was trying to get a webcam up and running and the USB bus hung up a few times because the cam was overloaded with too much light.

In the new script file I fixed a bug by adding quotes around a line that I was trying to echo, and I added a call to the w command to show a quick picture of who is logged in, how long the server has been up and running, and the average load on the server at the 1, 5 and 15 minute marks.

Logcreate Script

#!/bin/dash
# Remove old log
rm /var/www/status/log.txt
# Print logged outputs into new log.txt
date >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
tail /var/log/syslog >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
free >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
df -h >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Top memory using processes http://www.commandlinefu.com/commands/view/3/display-the-top-ten-running-processes-sorted-by-memory-usage
#ps aux | sort -nk +4 | tail >> log.txt
echo "USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND" >> /var/www/status/log.txt
ps aux | sort -nrk 4 | head >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Logged in User info using w command
w >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
echo >> /var/www/status/log.txt
# Copy log.txt into the full log that is collected
cat /var/www/status/log.txt >> /var/www/status/fulllog.txt
# Create a free standing copy of the process tree
pstree > /var/www/status/pstree.txt
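
To run it hourly, the script just gets dropped into /etc/cron.hourly; a sketch of the install, assuming the script is saved locally as logcreate (run-parts skips names that contain a dot, so leave any .sh extension off):

sudo cp logcreate /etc/cron.hourly/logcreate
sudo chown root:root /etc/cron.hourly/logcreate
sudo chmod 755 /etc/cron.hourly/logcreate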

Alternate Version

I also created a version of the script for a desktop Linux PC that does not have Apache installed. In it, I use a DIR variable to hold the directory where I want the log.txt file stored.

#!/bin/dash

# User defined variables
# No trailing / on DIR!
DIR=/home/erick/status

# Remove old log
rm $DIR/log.txt
# Print logged outputs into new log.txt
date >> $DIR/log.txt
echo >> $DIR/log.txt
tail /var/log/syslog >> $DIR/log.txt
echo >> $DIR/log.txt
free >> $DIR/log.txt
echo >> $DIR/log.txt
df -h >> $DIR/log.txt
echo >> $DIR/log.txt
# Top memory using processes http://www.commandlinefu.com/commands/view/3/display-the-top-ten-running-processes-sorted-by-memory-usage
#ps aux | sort -nk +4 | tail >> log.txt
echo "USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND" >> $DIR/log.txt
ps aux | sort -nrk 4 | head >> $DIR/log.txt
echo >> $DIR/log.txt
# Logged in User info using w command
w >> $DIR/log.txt
echo >> $DIR/log.txt
echo >> $DIR/log.txt
# Copy log.txt into the full log that is collected
cat $DIR/log.txt >> $DIR/fulllog.txt
# Create a free standing copy of the process tree
pstree > $DIR/pstree.txt

Rotation of Log

In the /etc/cron.monthly directory I have created a file called status-log-rotate that saves backup copies of two months' worth of the full concatenated server status logs.

#! /bin/bash
DIR=/home/erick/status
mv $DIR/fulllog.txt.1 $DIR/fulllog.txt.2
mv $DIR/fulllog.txt $DIR/fulllog.txt.1

Tweaks for Raspberry Pi

The Raspberry Pi runs off an SD card, and I am trying to be conscious of how often I write to it. I recently made some modifications to put the /tmp folder in RAM using tmpfs, and I create the hourly log under a folder there. The hourly script appends each snapshot to a daily log in that same temporary location, and once per day a script in /etc/cron.daily appends the daily log to the full log, which lives under a restricted-access status folder on the SD card. The end result: no hourly writes to the SD card, just one append to the full log per day. The only downside is that if the power drops, the log entries for that day are lost.
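
For reference, putting /tmp in RAM is a one-line change in /etc/fstab; a minimal sketch, with the size picked arbitrarily here. Because tmpfs starts out empty on every boot, the /tmp/web folder the scripts write to has to be recreated at startup, for example from /etc/rc.local.

# /etc/fstab entry mounting /tmp in RAM (the size here is an assumption, tune it to the Pi's memory)
tmpfs   /tmp   tmpfs   defaults,noatime,size=50m   0   0

# /etc/rc.local line recreating the log folder after each boot
mkdir -p /tmp/web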

Logcreate runs from /etc/cron.hourly for Raspberry Pi

#!/bin/dash
# Set DIR, on Pi this is a temp location for log
DIR=/tmp/web

# Set fixed DIR FIXDIR for files that have to be stored on SD card
# Nevermind, just make a daily log and then copy that to the full log daily.
#FIXDIR=/var/www/status

# Remove old log

rm $DIR/log.txt
# Print logged outputs into new log.txt
date >> $DIR/log.txt
echo >> $DIR/log.txt
tail /var/log/syslog >> $DIR/log.txt
echo >> $DIR/log.txt
free >> $DIR/log.txt
echo >> $DIR/log.txt
df -h >> $DIR/log.txt
echo >> $DIR/log.txt
# Top memory using processes http://www.commandlinefu.com/commands/view/3/display-the-top-ten-running-processes-sorted-by-memory-usage
echo "USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND" >> $DIR/log.txt

ps aux | sort -nrk 4 | head >> $DIR/log.txt
echo >> $DIR/log.txt
# Logged in User info using w command
w >> $DIR/log.txt
echo >> $DIR/log.txt
echo >> $DIR/log.txt
# Copy log.txt into the full log that is collected
cat $DIR/log.txt >> $DIR/dailylog.txt
# Create a free standing copy of the process tree
pstree > $DIR/pstree.txt

dailylog-to-fulllog script, runs from /etc/cron.daily

#! /bin/bash

DIR=/tmp/web
FIXDIR=/var/www/status

echo "----------------------------------------------" >> $DIR/dailylog.txt
date >> $DIR/dailylog.txt
echo "----------------------------------------------" >> $DIR/dailylog.txt
cat $DIR/dailylog.txt >> $FIXDIR/fulllog.txt
rm $DIR/dailylog.txt

Logcreate Output from Raspberry Pi

Below is what the logcreate script will output to the log.txt file on a Raspberry Pi that I have running as a web server.

Sun Jul 12 14:17:01 EDT 2015

Jul 12 13:47:51 raspberrypi dhclient: DHCPACK from 192.168.1.1
Jul 12 13:47:52 raspberrypi dhclient: bound to 192.168.1.17 -- renewal in 40673 seconds.
Jul 12 13:59:01 raspberrypi /USR/SBIN/CRON[28010]: (erick) CMD (aplay /opt/sonic-pi/etc/samples/guit_e_fifths.wav)
Jul 12 13:59:07 raspberrypi /USR/SBIN/CRON[28009]: (CRON) info (No MTA installed, discarding output)
Jul 12 14:00:01 raspberrypi /USR/SBIN/CRON[28013]: (erick) CMD (/home/erick/fswebcam/cron-timelapse.sh >> timelapse.log)
Jul 12 14:00:23 raspberrypi /USR/SBIN/CRON[28012]: (CRON) info (No MTA installed, discarding output)
Jul 12 14:01:01 raspberrypi /USR/SBIN/CRON[28022]: (root) CMD (/home/erick/bin/usbreset /dev/bus/usb/001/004)
Jul 12 14:01:02 raspberrypi /USR/SBIN/CRON[28021]: (CRON) info (No MTA installed, discarding output)
Jul 12 14:09:01 raspberrypi /USR/SBIN/CRON[28053]: (root) CMD (  [ -x /usr/lib/php5/maxlifetime ] && [ -x /usr/lib/php5/sessionclean ] && [ -d /var/lib/php5 ] && /usr/lib/php5/sessionclean /var/lib/php5 $(/usr/lib/php5/maxlifetime))
Jul 12 14:17:01 raspberrypi /USR/SBIN/CRON[28064]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)

             total       used       free     shared    buffers     cached
Mem:        445804     424488      21316          0     106768     260516
-/+ buffers/cache:      57204     388600
Swap:       102396          0     102396

Filesystem      Size  Used Avail Use% Mounted on
rootfs          6.3G  3.1G  3.0G  51% /
/dev/root       6.3G  3.1G  3.0G  51% /
devtmpfs        214M     0  214M   0% /dev
tmpfs            44M  240K   44M   1% /run
tmpfs           5.0M  8.0K  5.0M   1% /run/lock
tmpfs            88M     0   88M   0% /run/shm
/dev/mmcblk0p5   60M   19M   41M  32% /boot

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root      2071  0.0  3.0  24896 13652 ?        Ss   Jun28   2:24 /usr/sbin/apache2 -k start
www-data 27745  0.0  1.5  25412  7084 ?        S    09:58   0:00 /usr/sbin/apache2 -k start
www-data 27744  0.0  1.5  24960  6760 ?        S    09:58   0:00 /usr/sbin/apache2 -k start
www-data 27743  0.0  1.5  25428  7116 ?        S    09:58   0:00 /usr/sbin/apache2 -k start
www-data 27742  0.0  1.5  25396  7036 ?        S    09:58   0:00 /usr/sbin/apache2 -k start
www-data 27538  0.0  1.5  25396  7032 ?        S    06:25   0:00 /usr/sbin/apache2 -k start
www-data 27502  0.0  1.5  25404  7036 ?        S    06:25   0:00 /usr/sbin/apache2 -k start
www-data 27501  0.0  1.5  25396  7044 ?        S    06:25   0:00 /usr/sbin/apache2 -k start
www-data 27747  0.0  1.3  24936  6188 ?        S    09:58   0:00 /usr/sbin/apache2 -k start
www-data 27746  0.0  1.3  24936  6188 ?        S    09:58   0:00 /usr/sbin/apache2 -k start

 14:17:02 up 14 days, 12:56,  1 user,  load average: 0.00, 0.01, 0.05
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
erick    pts/0    192.168.1.5      14:04   10:39   1.70s  1.70s -bash

Hourly Chime for Linux and Mac

It is easy to set up a simple CRON job to play a sound at the top of the hour by running aplay on a Linux machine. Something like this would also work on a Mac with minor changes: afplay is the default command line player on a Mac, and CRON works the same. For Windows I have not tried it, but Task Scheduler can run something every hour, and there must be some easy command line player out there to fire off; that sort of thing has been around since DOS.
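
On a Mac the crontab entry would look much the same, just swapping in afplay and one of the built-in system sounds; a sketch, assuming the stock Glass.aiff is in its usual place:

# Mac version of the hourly chime, 9AM to 11PM
00 09-23 * * * afplay /System/Library/Sounds/Glass.aiff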

aplay

aplay works with WAV files, so you can use oggdec to convert OGG files to WAV; a lot of sound theme files come as OGG or WAV. aplay and mplayer come installed by default in Ubuntu, at least in 14.04 LTS, which I am running. If not, a simple…

sudo apt-get install alsa-utils    # aplay is part of the alsa-utils package

…or…

sudo apt-get install mplayer

…will get them installed.

oggdec

oggdec is part of a very small install package (vorbis-tools) and takes only seconds to install.

To install…

sudo apt-get install vorbis-tools

To convert an OGG audio file to a WAV audio file…

oggdec filename.ogg
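
To convert a whole theme's worth of files at once, a quick loop does it; a sketch, run from inside the directory holding the OGG files:

# Decode every .ogg in the current directory to a .wav alongside it
for f in *.ogg; do oggdec "$f"; done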

Sound Themes

The sound themes are located at /usr/share/sounds. If you go there and try out the sounds, you might find one that sounds good to you for an hourly chime.

Below is a two-level tree of /usr/share/sounds, using the tree command. If you don't have it, you can get it in a few seconds using…

sudo apt-get install tree

Output of the tree command showing two levels below /usr/share/sounds:

(tree -L 2 /usr/share/sounds)

/usr/share/sounds
├── alsa
│   ├── Front_Center.wav
│   ├── Front_Left.wav
│   ├── Front_Right.wav
│   ├── Noise.wav
│   ├── Rear_Center.wav
│   ├── Rear_Left.wav
│   ├── Rear_Right.wav
│   ├── Side_Left.wav
│   └── Side_Right.wav
├── fLight__2.0
│   ├── Copyright
│   ├── index.theme
│   └── stereo
├── freedesktop
│   ├── index.theme
│   └── stereo
├── Fresh_and_CLean

Sound Theme Downloads

I went to a site (see link below) and downloaded two sound themes (fLight 2.0 and Fresh and Clean; the third one on the site was a dead link) and tried out the sounds. I found that the Message sound in the fLight 2.0 theme was pleasant but catchy enough to be heard at a distance and over any music I might be playing when the CRON job runs.

http://www.ubuntuvibes.com/2010/08/3-awesome-sound-themes-for-ubuntu.html

The CRON job that runs the hourly sound is…

00 09-23 * * * aplay /usr/share/sounds/fLight__2.0/stereo/Message.wav

It will produce a sound from 9AM to 11PM and uses Message.wav, which I converted from an OGG to a WAV file…

I have attached the Message.wav below for your listening pleasure!

So far I have not switched my overall sound theme from the Ubuntu default, but I might try out the two themes that I have downloaded for variety.

Resources

Create Cron Jobs on a Mac

Command Line Audio Player on a Mac

U.S.D.A. Forest Service Webcam Image - Cloud Peak, WY

Active Desktop Wallpaper using wget

It is nice to have a desktop wallpaper that is not static; I like to see an outdoor scene with a good view and a dynamic sky. Wyoming certainly has ever-changing skies and nice terrain, so I have a wallpaper background set to show the Cloud Peak Wilderness in Wyoming that updates every hour.

It is possible to periodically load a JPG file from a source using the standard Linux wget command. In the example below, I am loading a scene from Cloud Peak, Wyoming that is captured by a US Forest Service webcam. It lands right in my home folder, but it could be put in any place that you prefer.

There is a nice bunch of pictures taken by the Forest Service from all over the country and they provide some nice high resolution scenery. See the links at the bottom of this post.

Code for script file

#! /bin/bash
# Remove the old copy of the image
rm /home/erick/cpwa1_large.jpg
# Fetch a new copy into the home folder (wget saves to the current directory)
cd /home/erick
wget http://www.fsvisimages.com/images/photos-large/cpwa1_large.jpg

The code first removes the old copy of the image and then it uses the wget command to fetch a new copy.

.wgetrc

It is not necessary to modify .wgetrc to use wget, but I put this here as an FYI. There is a configuration file for wget, located at /usr/local/etc/wgetrc (more info on wget locations). You can make a copy of it and put it in your home directory; once there, any modifications to it apply to your user profile. I have mine modified to do a few non-standard things. One is to use timestamping, which makes wget download only when the file it is fetching is newer than the local copy.

# Set this to on to use timestamping by default:
timestamping = on

Secondly, I added a line at the end of the file that limits wget's download rate. Otherwise wget runs as fast as possible and uses the entire bandwidth. The option can also be used on a case-by-case basis by passing it on the command line when wget is called. Limiting the rate keeps wget from bogging down your Internet connection and from hitting the server with high speed downloads, which matters when you are downloading multiple large files.

limit-rate=20k

It is also possible to add a bit of a delay between connections. This avoids hammering the server when downloading multiple files, which makes it easier on the server load and makes your download activity less likely to be obnoxious to the folks running the server. Obnoxious downloaders and site scrapers are more likely to get banned, I would imagine, if someone notices a spike in server load and pins it down to an IP address.

# It can be useful to make Wget wait between connections.  Set this to
# the number of seconds you want Wget to wait.
wait = 1
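
The same options can be passed directly on the command line for a one-off run instead of living in .wgetrc, for example:

# One-off polite download: rate limited to 20 KB/s with a 1 second wait between connections
wget --limit-rate=20k --wait=1 http://www.fsvisimages.com/images/photos-large/cpwa1_large.jpg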

Some sites go as far as prohibiting downloads unless the user agent contains an expected string. I haven't done this yet, as I have not had a problem with this issue, but it is possible to set the user agent via --user-agent="Acceptable String Here".

More on user-agent modification

CRON entry

01 08-22 * * * /home/erick/cpwa1/wget-cpwa1.sh

Using crontab -e, a line can be added to your CRON file to run the script periodically. The one above runs every hour, 1 minute after the hour, between 8AM and 10PM. There is no sense in loading nighttime pictures, so the times are bounded to load pictures during daylight hours (right now) for Mountain Daylight Time. The picture I load is updated around 59 minutes after the hour, so loading 1 minute after the hour provides a bit of a guard band.

USDA Forest Service Webcams

USDA Forest Service Real Time Image Description Page
USDA Forest Service Real Time Image Gallery

Auto-shutdown and Auto-suspending a Linux Server

The article below talks about auto-suspending…

http://rolandtapken.de/blog/2013-07/suspend-nas-when-idle

A Simpler Idea

I found another article on auto-suspending that requires only a simple bash script that I have placed in /etc/cron.hourly.

WordPress did not like me uploading autosuspend.sh (for security reasons it gives an error), so I have the script, named autosuspend.sh.txt, here -> autosuspend.sh. The file goes in /etc/cron.hourly named just autosuspend; cron won't run it if the filename has an extension.

The file must be owned by root and executable. So you have to use the following commands before running it.

sudo chown root:root autosuspend.sh
sudo chmod u+x autosuspend.sh

I kept it as autosuspend.sh and ran it a few times manually with sudo ./autosuspend.sh, just to see it run properly, before renaming the file to autosuspend and placing it into /etc/cron.hourly.
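
In other words, roughly:

sudo ./autosuspend.sh                                # a couple of manual test runs first
sudo mv autosuspend.sh /etc/cron.hourly/autosuspend  # installed without the extension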

The autosuspend.conf, named autosuspend.conf.txt here -> autosuspend.conf, goes in the /etc/ directory.

Both are UNIX formatted files, modify them accordingly for your use.

syslog

When CRON runs autosuspend, the script logs to /var/log/syslog, so you can execute…

tail /var/log/syslog

…to see if everything is OK by looking at the traces; the autosuspend script gives good, useful error messages. It also sends an email on the server to root@yourservername every time it runs. You can use mailx from the CLI (or some other program) to read the local mail. mailx is very simple and good enough to quickly page through CRON emails, using return to move down through the unread ones.

Resources

The article I got the script from is…

https://bbs.archlinux.org/viewtopic.php?id=157268

…it does not give you the autosuspend.conf file, but it references another article in German…

http://wiki.ubuntuusers.de/Skripte/AutoSuspend

…that does have the autosuspend.conf file and it seems to work, at least it runs fine so far with some mods.

Files

Once again below are the script and conf file from those sites, labeled with a txt extension. I put them here in case those sites disappear for some reason. This is good knowledge and it works so well, I’d hate to see it get lost.

The script taken from the Arch Linux page requires systemd and uses systemctl suspend to suspend the machine; it is named autosuspend.sh.txt and formatted for UNIX/Linux.

autosuspend.sh

The original autosuspend.sh that uses pm-utils, from the German ubuntuusers.de site, is named pm-utils_autosuspend.sh.txt, and the autosuspend.conf is named autosuspend.conf.txt. Both are formatted for UNIX/Linux.

pm-utils_autosuspend.sh
autosuspend.conf

Auto Shutdown – Mods

I decided to modify the autosuspend.sh file rather than loading the package that it needed (systemd) to execute systemctl suspend, which is what the script file from the first article uses. The other option would be to use pm-utils as the second German article has the original autosuspend.sh formatted to use. For more info on pm-utils see https://wiki.archlinux.org/index.php/pm-utils

Instead of auto suspending, I decided that since the server starts fast enough from a cold boot (17 secs. to usable), why not just replace the…

systemctl suspend

…line with…

shutdown -P +5

This shuts the server down with a 5 minute warning and guard band. I say guard band because it guards against a potential loop: if I play with the script more and make a mistake, I do not want to wind up with a server that starts, jumps to the script, and begins shutting down immediately. I know I put the file in /etc/cron.hourly, so it will only kick off every hour, but I am just guarding against unforeseen things to be safe, and it's only 5 minutes of delay. If it does go to shut down while I am testing at some point, I have 5 minutes to execute shutdown -c to cancel.

I also put the line…

ethtool -s eth0 wol g

…before the shutdown line, because I had tried putting that same piece of code into rules.d and it was not setting wake-on to g; when I ran ethtool it was staying at d. I am not sure why, but since I will be allowing this server to shut itself down 90%+ of the time, I opted to put it right in the shutdown script. On second thought, I also put that line into /etc/rc.local (which runs at start up), so it is armed even if I shut down manually! See the post on Wake On LAN via Ubuntu Linux for more info.
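
So the tail end of the modified script now looks roughly like this (eth0 is my interface name, adjust as needed):

# Re-arm Wake-on-LAN just before powering down
ethtool -s eth0 wol g
# Power off with a 5 minute warning/guard band; cancel with: shutdown -c
shutdown -P +5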

Here is the modded autosuspend, called autoshutdown.txt. Remove the txt extension and place it into the /etc/cron.hourly folder; it is formatted for UNIX.

I forgot that UNIX and DOS line endings are different while I was working on this. See my post on UNIX vs DOS line endings; I had a bit of brain fog and struggled with this while working on the autoshutdown script.

Winbind

Once I got the autoshutdown running, I realized that the Linux machine was not able to resolve the names of the Windows machines on the network. The server could only ping the Windows machines by IP address, not by name! I noticed this when I logged back into the server a while later and the shutdown script had recorded failed pings in syslog while checking whether the server was idle. The script correctly saw that no one was logged in by executing who | wc -l, which yielded zero, and next it tested for attached clients (the Windows machines named in the autosuspend.conf file) using ping $i -c1. The pings were failing because the names could not be resolved.

  • arp -a could see all the machines by IP address from both Linux and Windows.
  • net view on the Windows machine could see all the machines by name.
  • smbstatus could see every computer by name just fine from my Linux server. The server's own name is visible from the Windows PCs because I had installed Samba; Samba must send out NetBIOS information about itself, and I see in the Samba config file where it can act as a WINS server as well.

In order for the autosuspend/shutdown script to work, pinging by name is a must. To fix this, install winbind and configure /etc/nsswitch.conf.

sudo apt-get install winbind

In /etc/nsswitch.conf, add wins to the end of the line that starts with hosts:. Mine now reads…

hosts: files dns wins
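
With winbind installed and the hosts line updated, name resolution can be sanity checked from the shell; a quick sketch, where the machine name is just a placeholder for one of your Windows boxes:

# Resolve a Windows machine name through nsswitch (files, then dns, then wins)
getent hosts some-windows-pc
# Or ping it by name once, the same way the autosuspend script does
ping -c1 some-windows-pc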

I got the info from…
http://www.serenux.com/2009/09/howto-configure-ubuntu-to-be-able-to-use-and-respond-to-netbios-hostname-queries-like-windows-does/

Samba Connected test in shutdown script

The autosuspend script tests whether anyone is accessing files over Samba via smbstatus. smbstatus is great for seeing what is going on and for troubleshooting Samba once you can make connections; it is interesting to watch it while various computers are accessing the server, to understand what it is telling you. But the script is just looking to see whether computers are accessing Samba shares. The autosuspend.conf uses an IP prefix for this test, setting $SAMBANETWORK to that value and grepping on it. I am not sure how this works, as I don't see any IP numbers when I run smbstatus, so for now I decided to use the word Public in autosuspend.conf instead of 192.168.1. Most likely, if a computer on my network is accessing Samba shares and its name is not one of the listed "clients" (my own machines at home), it is only accessing the Public Samba share. For now this seems to work!

Test used in autosuspend script to look for machines accessing Samba…

/usr/bin/smbstatus | grep $SAMBANETWORK | wc -l

Other conditions for shutdown

I have not validated the other two tests that autosuspend does, IsRunning() and IsDaemonActive().

That is a TBD. So far, so good: the server has not shut down unexpectedly, and I have not seen it held up by IsRunning() yet, based on its tests. If something is running when a shutdown occurs, a SIGTERM is sent as the system goes down, so anything in process should terminate cleanly, in theory. I'd like to test for OwnCloud activity at some point; I have deliberately shut the machine down with an OwnCloud file transfer in progress, restarted a few minutes later, and it picks back up. I still have to figure out a test for this, TBD.

IsRunning() tests for the following applications…

 APPLICATIONS='"^nxagent$" "^rsnapshot$" "^wsus$" "^wget$" "^screen$" "^mlnetp$" "^apt-get$" "^aptitude$" "^dpkg$" "^cp$"'

The following features were not copied from the autosuspend.sh on the German site to the one on https://bbs.archlinux.org/viewtopic.php?id=157268 that I used…

# Turning suspend by day (8 a.m. to 3 a.m.) off
 DONT_SUSPEND_BY_DAY='no'
# Automatically reboot once a week when the system isn't in use
 REBOOT_ONCE_PER_WEEK='yes'
  • DONT_SUSPEND_BY_DAY blocks suspending during the day (8 a.m. to 3 a.m., per the comment in the script), using /sys/class/rtc/rtc0/wakealarm. I wasn't interested in this, so I was fine with it being carved out.
  • REBOOT_ONCE_PER_WEEK checks /proc/uptime (the first field, divided by 3600 and then by 24) to see whether the machine has been running for more than seven days, and reboots it the next time it is idle, roughly equivalent to the sketch below. This is not of interest to me since my machine shuts down rather than suspends, so it is not needed either.
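
A rough reconstruction of that uptime check, written out as a sketch (my paraphrase, not the original script's exact line):

# True if the machine has been up for seven days or more
up_days=$(awk '{ print int($1 / 3600 / 24) }' /proc/uptime)
if [ "$up_days" -ge 7 ]; then
    echo "up a week or more - reboot the next time the system is idle"
fi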

Interestingly, I do see a test for whether power management is supported in the original autosuspend.sh that relies on pm-utils. It does not exist in the modified script that uses systemctl; perhaps it is not necessary when calling systemctl, or it was omitted because no such test exists for systemctl.

  /usr/bin/pm-is-supported

Basically I am fine with the simpler script, if I need to add features back in, so be it!

I have been using the shutdown script for over a month with no issues so far.

Follow Up

I have been using this code on two servers, one for almost three years and one for a year. The older one does not suspend and requires a shutdown; the newer one suspends nicely via systemctl suspend.

I decided to modify the code a bit to allow a hybrid-sleep and also allow for restarts when the system requires them. Read more about this here….

Autoshutdown Code Modded to hybrid-sleep and allow required restarts

 

Configuring Posting via email for WordPress

I am testing out the ability to post via a secret email; this is how I created this post, which I then edited some more in WP.

I fussed with it for a bit, sending emails and expecting results. I didn't realize that WordPress' email reading has to be triggered, so I put together a cron job to trigger the mail check periodically (daily for the moment, which seems reasonable) via PHP, using…

php -q /home/yourcpanelusername/path-to-folder/wp-mail.php
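
The crontab entry that calls it ended up looking something like this (the daily 1AM schedule is the one I mention below; the path is the same placeholder as above):

# Check the secret mailbox for new post emails once a day at 1AM
0 1 * * * php -q /home/yourcpanelusername/path-to-folder/wp-mail.php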

This didn't work initially. I kept getting email from cron with markup in the body; it was an error message with this line at the bottom…

<p>Slow down cowboy, no need to check for new mails so often!</p>

Then I tried this

Manually triggering the mail check by going to the URL where wp-mail.php lives does turn the email into a post, listed as pending. To my mind this is not terribly useful; I would prefer to send an email and not have it end up as a pending post, since I want to post from email without needing to log in to WP. In other words, just post it already.

Mysteriously, after experimenting with sending a few posts by email, it started to work. I am not sure why, but checking the mail daily at 1AM, it either gets the messages, creates a pending post, deletes the copies on the mail server as it should, and reports this in a CRON email, or there are no emails for it and it reports that correctly. After the initial weirdness it has been working fine and as expected for several weeks.