Category Archives: Linux

Posts covering things I have learned, all related to Linux.

Linux Mail From the Command Line

Local Mail Using Postfix and Mail

Mail on the command line. This was once used much more often. When I was in college in the 1990s it was one of the easiest ways to get mail, on campus and off. It was taught to technical and non-technical people alike as part of orientation, and you were given written instructions right at registration time. The instructions came along with the email account that was made up for you, a student ID number followed by @binghamton.edu. Finally, in the last year I was there, they got rid of this cumbersome address format and started allowing users to have email with their real names.

Getting to mail back then from the command line involved logging into your Unix account from one of the many terminals spread throughout the campus, or remotely via dialup using rlogin, insecure but acceptable at the time. Then you could use either mail or pine, which was a bit more sophisticated, as it was based on the pico editor, of which the popular nano editor is a derivative. Pine at least had somewhat of an interface that accepted up and down arrow movement and displayed the shortcuts at the bottom. The standard mail program is a bit like vi: spartan but still useful.

Having access to mail locally on the command line is useful when you are running cron tasks or any other automated scripts that call other code, as it lets you get notified when they have run and, most importantly, whether they had errors.
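
For example, cron mails anything a job prints to stdout or stderr to the local user, and a MAILTO line picks the recipient. A minimal crontab sketch, with the script path being just an illustration:

MAILTO=yourusername
# output and errors from this nightly job land in the local mailbox
0 2 * * * /home/yourusername/scripts/nightly-job.sh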

The other alternative is to set up ssmtp and have mail sent out of the local machine using SMTP from another established account. Of course you can also set up a full-blown mail server, but that can be overkill if you are just monitoring what is happening on a few machines that you regularly log in to.
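
For reference, the ssmtp route is mostly one config file. A rough sketch of /etc/ssmtp/ssmtp.conf for relaying through an established account; the server name and credentials are placeholders, not something tested here:

root=you@example.com
mailhub=smtp.example.com:587
AuthUser=you@example.com
AuthPass=yourpassword
UseSTARTTLS=YES
hostname=yourmachine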

Setting up Local Mail

Below is a great GitHub post on how to set up local mail on a Linux machine. I followed it and added a variation to get local mail running using the command line mail program.

It works great. I really liked the instructions, very easy to follow. I was glad I found this, as I thought it might be tricky; with these instructions it was a few minutes on each machine.

Setup a Local Only SMTP Email Server (Linux, Unix, Mac)


My Additions

Server

For use with the mail program, it might not be necessary to have a localhost.com entry in the hosts file. I have not tested this.

I followed the tutorial up to and including the step of restarting postfix.

Then I installed mail instead of Thunderbird.

On the server, which only has a CLI, I wound up using mail instead of T-bird, installed via sudo apt-get install mailutils
It can be tested by sending a message to yourself by using…
mail -s "test" (your user name)@localhost

Hit Enter at the Cc: prompt to bypass it, type something, and end by using Ctrl-D.

Then enter the command

mail

…and you should see the email. Hit Enter at the ? prompt and the message is presented.

Enter q to quit.

Another test message can be sent to verify that a message to any address gets delivered to you. As long as the domain is localhost it will work and catch it. Other domains will fail and send you an email from the system reporting the message as undeliverable.
It works great for getting the cron messages on the machine.
Just type mail and you get a CLI email list. mail basics: q to quit, m to compose, Enter and the spacebar to move through messages. Entering a question mark, ?, brings up command help.
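
mail also works non-interactively, which is what makes it handy in scripts; piping in a body skips the Cc and message prompts entirely:

echo "backup finished" | mail -s "backup status" yourusername@localhost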

Desktop Install

One gotcha that caught me is that I already had T-bird installed, and therefore it had a default SMTP server already. For me this required what I would call step 6A, adding a local SMTP server.
6A. In the pane above “Account Actions”, scroll using the bar to the bottom entry, “Outgoing Server (SMTP)”.
Click Add
For description I wrote Local SMTP
Server Name: localhost
Port: 25
Username: (the user name)@localhost
Authentication Method: Password, transmitted insecurely
Connection Security: None

Then I went back into the account settings for the mail account set up in step 6 and set the Outgoing Server (SMTP) to the Local SMTP.

Host File Aliases

Also, in /etc/hosts you can put in localhost.com as an alias and it works fine, like this…
127.0.0.1       localhost localhost.com

This is the way to put aliases in a hosts file. For example, you can have the machine name and then a shortcut to it if the machine is set to a static IP. This way you can just type server to SSH to it, and use that as a short name wherever you want in scripts and the like.

192.168.1.10  Dell-Optiplex-620     server

127.0.0.1 localhost localhost.com


Getting CGI and Perl scripts up and running on Minimal Ubuntu

I was trying, again, to get this up and running. I have a piece of code, notestack-example.cgi, that uses Perl and the Perl CGI module. I had it working after fiddling with it the first time I flashed the SD card for a Pine 64 set up with minimal Ubuntu Xenial.

The problem is that I wrote down only sketchy instructions and had to re-figure it out. After flashing another card (the first one had a slowly spreading corruption that would cause the machine to halt after a while), I got clearer on the process.

I have posted it here for myself, in case I get stuck again, and for anyone else who might need to know the process. It is a rough outline. I copied the commands I issued from the shell history and added comments, plus some test code from a Raspberry Pi that has been running notestack-example.cgi among other items for years, so that was my baseline.

Outline getting CGI and Perl running for Apache

In this outline it is assumed that Apache2 is installed and configured.

Optional: to make it easier to get to the cgi directory from your home directory, create a symlink there so you can move into the cgi folder more easily.

ln -s /usr/lib/cgi-bin cgi

Enable CGI support for Apache…
sudo a2enmod cgi

Modify /etc/apache2/sites-enabled/000-default.conf

The code that enables CGI was taken from the Raspberry Pi's Apache site config and added into /etc/apache2/sites-enabled/000-default.conf, right above the </VirtualHost> line.

——————————————————
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
</Directory>

——————————————————

Test with a simple BASH CGI script

Use the following code as printenv.cgi in the /usr/lib/cgi-bin directory to test basic CGI with a bash script.
——————————————————
#!/bin/bash -f
echo "Content-type: text/html"
echo ""
echo "<PRE>"
env || printenv
echo "</PRE>"
exit 0
——————————————————

Test with the commands below; the script needs 755 (+x) permissions. This file is from the RPi and was used years ago to test it out.

sudo nano printenv.cgi
sudo chmod +x printenv.cgi
./printenv.cgi
If all is well this will also be accessible from the web…
curl http://localhost/cgi-bin/printenv.cgi

curl is installed in the minimal build.

If you are using this on your own intranet, OK; but if this machine is accessible from the web, get rid of printenv.cgi so the whole world can’t see the output if it gets found.
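
Removing it is one line:

sudo rm /usr/lib/cgi-bin/printenv.cgi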


Test Perl

Test Perl scripts next. Normally Perl is installed even in the minimal Ubuntu build; its presence can be verified using 'which perl'.
Use the following code as perl-test.cgi in the /usr/lib/cgi-bin directory.
——————————————————
#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "Hello CGI\n";
——————————————————

sudo nano perl-test.cgi
 sudo chmod 755 perl-test.cgi
 ./perl-test.cgi

Does it work via Apache?
  curl http://localhost/cgi-bin/perl-test.cgi

Perl CGI Module

Next, get Perl scripts that require the CGI Perl module (any Perl code that uses CGI.pm) running and web accessible…

I got ERROR #1, a missing CGI.pm, and after installing that module, a similar ERROR #2 complaining about a missing CGI/Util.pm; both are shown below.
ERROR 1 and 2
Can't locate CGI.pm in @INC (you may need to install the CGI module) (@INC contains: /etc/perl /usr/local/lib/aarch64-linux-gnu/perl/5.22.1 /usr/local/share/$
BEGIN failed--compilation aborted at ./notestack-example.cgi line 36.
Grabbed the CGI.pm and Util.pm from my Raspberry Pi by searching for them….

sudo find / -name CGI.pm

sudo find / -name Util.pm

and copying them with rcp to /tmp on the board I was setting up, a Pine 64 in this case.

rcp /usr/share/perl/5.14.2/CGI.pm ubuntu@192.168.1.31:/tmp
rcp /usr/share/perl/5.14.2/CGI/Util.pm ubuntu@192.168.1.31:/tmp

The code I was trying to run, the goal of all this work, was a script named notestack-example.cgi.
sudo cp /tmp/CGI.pm /etc/perl
./notestack-example.cgi      <— Got ERROR #1
sudo cp /tmp/Util.pm /etc/perl/
./notestack-example.cgi      <— Got ERROR #2
sudo mkdir /etc/perl/CGI
sudo mv /etc/perl/Util.pm /etc/perl/CGI
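
As an aside, on a stock Debian/Ubuntu setup with repository access, installing the packaged module should avoid the file copying altogether; I went the copy route on this minimal image, but the equivalent should be:

sudo apt-get install libcgi-pm-perl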

Final error ERROR 3

Can't use 'defined(@array)' (Maybe you should just omit the defined()?) at /etc/perl/CGI.pm line 528.
Compilation failed in require at ./notestack-example.cgi line 36.
BEGIN failed--compilation aborted at ./notestack-example.cgi line 36.

This one requires removing defined, as that usage is old and not compatible with the current version of Perl. I just removed the defined on line 528…

ubuntu@pine64:~$ diff /etc/perl/CGI.pm /tmp/CGI.pm
528c528
< if (@QUERY_PARAM && !$initializer) {

> if (defined(@QUERY_PARAM) && !defined($initializer)) {

I learned about this trick of removing the 'defined' from…
https://github.com/shenlab-sinai/diffreps/issues/6
https://rt.cpan.org/Public/Bug/Display.html?id=79917

Cloning Linux Mint Setups

Recently I swapped in an SSD to be the new primary drive on my media center PC which was running Linux Mint 18.0 on the spinning SATA drive.

This post is basically brief documentation of the basic steps involved in cloning or upgrading/cloning Linux Mint. Most likely this works fine for Ubuntu as well as Debian, as they share a common ancestry. There are most likely limits to this scheme; I imagine things would break badly trying to do this across 17.3 and 18, for example, since the base on those is a different version of Ubuntu, 14.04 vs 16.04. I might try a clone when the next whole-number version of Mint comes along: just pop in a drive that I don't care about, or do it on a VM such as VirtualBox, as an experiment.

Plans

The plan is to relieve some of the storage duties of the spinning drive, which was filling up, plus get a speed increase: the SSD can move data 4x faster than the spinning drive, but more importantly, with no moving parts its access time is minute in comparison to the spinning drive. Applications open much faster, boot time is cut by 75%, etc. With a fast disk, if the machine needs to use swap it won't grind to a halt either. This machine is a bit older, SATA II, but a Solid State Drive (SSD) still makes a big difference.

The idea is to clone over the home folder but exclude large data such as the ~/Music folder, leaving that on the old drive, which gets mounted as additional storage and reached via a symlink.

Old Setup: 160GB Spinning Drive
New Setup: 80GB Primary SSD

Goal

The goal of this post's example is to do an upgrade to Linux Mint 18.3 from 18, clone over my user settings, and reinstall all programs. Over the past year and a half that the machine has been in use, quite a few programs have been installed on it. Many of them run from the command line or are libraries related to some of the machine learning code that runs in the background on the machine. Needless to say, it would be very hard to remember them all, and a lot of little things would be broken.

Step 1: Install Linux Mint from USB Stick or DVD

This step is pretty basic and is covered elsewhere on the web…

Linux Mint, Ubuntu, Debian

But needless to say, you want to create a user that has the same name, User ID (UID) and Group ID (GID) as on the drive that you will be cloning from.
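
The IDs can be checked with id on the old setup and, if the installer assigned different ones, changed on the new install. A sketch, assuming 1000 was the old UID/GID; note usermod will refuse to run while that user has processes, so do this from another account or a console:

id yourusername                       # on the old setup, note uid= and gid=
sudo usermod -u 1000 yourusername     # on the new setup, if they differ
sudo groupmod -g 1000 yourusername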

Step 2: Log in on the new machine/drive setup, kill your home directory, and rsync the old one over

Mount the old drive; doing this from the GUI file manager is a fine way to do it. Make note of where it mounts. You can always execute df from the command line to find where it mounted as well.

It sounds crazy, but it does work. Log in, open a terminal and execute…

rm -rf /home/yourusername

Everything the OS needs right now to get through the next step is in memory, so nothing will crash, and this gives you a blank slate to work with.

Next, rsync over your home folder from the old drive (/dev/sda in my case), making sure that you use the archive option. Using the v and h options is helpful as well, to produce a lot of output in case you have to trace through a problem.

-v : verbose
-a : archive mode; copies files recursively and preserves symbolic links, file permissions, user & group ownerships and timestamps
-h : human-readable, output numbers in a human-readable format

Example:

For me it went something like this…

rsync -avh /media/erick/B0B807B9-E4FC-499E-81AD-CDD246817F16/home/erick /home/

Next log out and then back in. Almost like magic, everything should look familiar. The wallpaper on the desktop should look like whatever it was on the old setup; fonts and other desktop sizing customizations should be there. Open the browser and it should be as you left it in the old setup. It is almost as if you hibernated the old setup and woke it up after teleporting its soul into the new drive.

But wait, the software needs attention

Step 3: Bring over the software too, sort of…

More like apt-get install it over is closer to the truth. I tried following a post on how to do this (https://askubuntu.com/questions/25633/how-to-migrate-user-settings-and-data-to-new-machine), but what was described in it did not lead to success. The suggestion was the following…

oldmachine$ sudo dpkg --get-selections > installedsoftware
newmachine$ sudo dpkg --set-selections < installedsoftware
newmachine$ sudo apt-get --show-upgraded dselect-upgrade

It didn't work, but it at least did the collection part. So I wound up using the first part…

oldmachine$ sudo dpkg --get-selections > installedsoftware

…and then brute-forced an install by doing some grep, rev, cut, rev on the input file, which basically flips every line in the file backwards, cuts off the word “install” (now at the beginning, reversed), then flips each line back over.

The next line with the awk command prepends sudo apt-get install to the front of each line and saves the output to reinstall-software.sh

 installedsoftware-to-apt-get-install.sh
 #!/bin/bash
 cat installedsoftware | grep "install" | rev | cut -c 12- | rev > cleaned-installed-software
 awk '{printf "sudo apt-get install "$0"\n"}' cleaned-installed-software > reinstall-software.sh

Run the reinstall-software.sh script and it will do just what it says: install all of the software that was on the old setup. apt-get also has the -y option to pre-answer Yes when it asks the yes-or-no question about each install. I left it off so that I could review all the larger pieces of software being loaded. A few times I hit no by accident and had to re-run the script, no big deal.

Reboot is best now to avoid side-effects

Before going much further a reboot is probably in order as so much has changed on the machine.

For me, during the software install process, I was presented with a question about picking LightDM or another display manager. I picked LightDM because I think that is what I had been using. After I was all done, I put the machine in suspend and it had a bit of trouble coming out of it, throwing a temporary error related to the X session. A blue screen came up with a message about removing a temporary file. Just rebooting the machine cleared this up, as the /tmp directory is flushed. Apparently this was something that was set before the upgrade, clone and software install process and did not get unset. Other than that, I have seen no side effects from the process of upgrade, clone and software install.

Other Items

If you had files configured outside of the home directory, such as /etc/hosts, you will obviously have to copy those over. The same goes for anything in /etc/cron.hourly, cron.weekly or cron.monthly that you put on the old machine. Also, it pays to do a dump of crontabs using crontab -l > crontab-dump.txt on the old setup so they can be reconfigured to the same settings.
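
A rough checklist; the old drive's mount point is a placeholder from Step 2:

crontab -l > crontab-dump.txt                      # run on the old setup
diff /etc/hosts /media/erick/OLD-DRIVE/etc/hosts   # spot local customizations
ls /media/erick/OLD-DRIVE/etc/cron.hourly /media/erick/OLD-DRIVE/etc/cron.weekly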

Cloning old to new box

This entire process can be used to clone one computer setup to another, old box to new one for example. Which brings us to…

Final Thoughts: Twin Machines

It is entirely possible to keep two machines in sync using the methods outlined above. I have not tried this, but I am tempted to test it out at least. What I am thinking of is a laptop and desktop, for instance. The desktop, with its ability to hold multiple drives with ease, works nicely here. It has one drive with the same OS as the “twin” laptop and is set up as multi-OS boot. The steps above are executed, cloning the laptop setup and data to the desktop. It is entirely possible to keep cloning the home folder contents back and forth between the two to keep them synced. Even the software can be kept in sync using the method used above to re-install it.

It is possible to do this directly between them, both on at the same time, or through a server where they both get backed up to. The only caveat is overwriting and deletions, such as taking care when using the --delete option with rsync. There is a potential for a race condition of sorts if settings and files get changed and then clobbered by a sync operation. If I were to try this I would start with a one-direction sync: one device is the master and the other the slave, and deletions and settings changes get cloned from master to slave automatically only.
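
A one-direction master-to-slave sync could be as simple as the line below, run from the master; the hostname is a placeholder, and since --delete removes anything on the slave that is not on the master, a --dry-run pass first is cheap insurance:

rsync -avh --delete --dry-run /home/erick/ laptop:/home/erick/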

Raspberry Pi WiFi via USB device

Setting WiFi up via the command line on Raspberry Pi, using a USB Wireless Adapter

These are notes on how to set up WiFi on the Raspberry Pi. The R-Pi is a model 2 running Raspbian 4.1.19+.

In my case, and in the example that follows, the Raspberry Pi is connected to an Ethernet network with a static IP at 192.168.1.17. This is the address I am logging into via SSH to get to the command line to configure the USB WiFi adapter.

USB WiFi Adapters

Two USB WiFi adapters were tried, a Belkin N300 and an Edimax EW7811Un. Both use the Realtek RTL8192CU chip and work well with the R-Pi. Initial testing was with the Belkin, and the output from this device is used in this post for the command line examples.

The Edimax EW7811Un 150Mbps 11n WiFi USB adapter is nano sized and has a blue activity light. It works well; I can't imagine how small the antenna is in there and how they get RF to work out OK with these sub-wavelength antennas!

NOTE: Originally the adapters were tried by plugging them into a powered USB hub which plugged into the R-Pi. This allows for hot-plugging. If a device is hot-plugged directly into the R-Pi it will force a reboot, at least on the one that I have (R-Pi Model 2B). This is probably due to an inrush current that pulls down the power bus momentarily, I am guessing. The powered USB hub isolates the R-Pi from the devices connected to it as far as power is concerned, and things will hot-plug fine using it. I did realize later on that when I plugged the USB WiFi adapter directly into the R-Pi, I got more stable behavior, as in fewer strange dropouts of the WiFi from the network. It maintains a network connection better, for me at least, plugged in directly.

Initial Testing

The first steps involve checking to see that the adapter is detected, registered and the correct device driver is loaded. They are just confirmation that all is well. They can be skipped and then run later if problems arise and troubleshooting is needed.

Plug in the USB WiFi adapter and run lsusb and dmesg…

lsusb
erick@raspberrypi ~ $ lsusb
Bus 001 Device 002: ID 0424:9512 Standard Microsystems Corp. 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. 
Bus 001 Device 004: ID 2101:8500 ActionStar 
Bus 001 Device 005: ID 2101:8501 ActionStar 
Bus 001 Device 006: ID 154b:005b PNY 
Bus 001 Device 008: ID 050d:2103 Belkin Components F7D2102 802.11n N300 Micro Wireless Adapter v3000 [Realtek RTL8192CU]
Bus 001 Device 007: ID 174c:1153 ASMedia Technology Inc.

The device shows up fine using lsusb. Now on to dmesg to see if the correct driver loaded…

[156238.694964] usb 1-1.3.3: new high-speed USB device number 8 using dwc_otg
[156238.797146] usb 1-1.3.3: New USB device found, idVendor=050d, idProduct=2103
[156238.797188] usb 1-1.3.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[156238.797207] usb 1-1.3.3: Product: Belkin Wireless Adapter
[156238.797225] usb 1-1.3.3: Manufacturer: Realtek
[156238.797242] usb 1-1.3.3: SerialNumber: 00e04c000001
[156239.201673] usbcore: registered new interface driver rtl8192cu

Kernel driver rtl8192cu is loaded, all should be well with the adapter!

lsmod

The following lsmod was run and it confirms the kernel module is loaded for the 8192cu driver. It is just added confirmation that all is well.


erick@raspberrypi ~ $ lsmod
 Module                  Size  Used by
 xt_state                1434  1
 ipt_MASQUERADE          1220  1
 nf_nat_masquerade_ipv4     2814  1 ipt_MASQUERADE
 iptable_nat             2120  1
 nf_nat_ipv4             6162  1 iptable_nat
 nf_nat                 17132  2 nf_nat_ipv4,nf_nat_masquerade_ipv4
 8192cu                556175  0
 nfsd                  263815  11
 nf_conntrack_ipv4      14388  3
 nf_defrag_ipv4          1766  1 nf_conntrack_ipv4
 xt_conntrack            3420  1
 nf_conntrack           95316  6 nf_nat,xt_state,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_ipv4
 iptable_filter          1698  1
 ip_tables              12362  2 iptable_filter,iptable_nat
 x_tables               18590  5 ip_tables,ipt_MASQUERADE,xt_state,xt_conntrack,iptable_filter
 i2c_dev                 6386  4
 snd_bcm2835            22502  0
 snd_pcm                92861  1 snd_bcm2835
 snd_seq                58152  0
 snd_seq_device          5142  1 snd_seq
 snd_timer              22156  2 snd_pcm,snd_seq
 snd                    67534  5 snd_bcm2835,snd_timer,snd_pcm,snd_seq,snd_seq_device
 sg                     20575  0
 i2c_bcm2708             5988  0
 bcm2835_gpiomem         3703  0
 bcm2835_rng             2207  0
 uio_pdrv_genirq         3526  0
 uio                    10078  1 uio_pdrv_genirq


Setup the WiFi Connection

In this example it is a WPA type of security. I know the SSID and password and just put them into the wpa_supplicant configuration file. If you need to see what WiFi nodes are available on the network, run a scan first (if needed)…

sudo iwlist wlan0 scan

Check /etc/network/interfaces

Check that the section exists in the file that will allow the USB adapter to hot-plug, and also the wpa-roam line that points to the wpa_supplicant.conf file where the SSID and password will be stored in the next step.

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp

…any changes to the interfaces file will require a reboot or a restart of networking to take effect, via….

 sudo service networking restart

If you are running the R-Pi headless, this will disconnect SSH and require a re-login. If a mistake is made in the interfaces file, it might not come back, requiring a keyboard and monitor to be connected to fix it. The good news about having both a running eth0 and wlan0 is that if you make a mistake in only one of them, it will still be possible to connect via the other; less chance of being totally locked out by a small typo in the interfaces file. Sometimes a restart of network services will cause a non-recoverable dropout which requires a reboot to get an SSH connection going again.
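
When only wlan0 has changed, cycling just that interface is gentler than restarting all of networking and leaves an SSH session on eth0 alone:

sudo ifdown wlan0
sudo ifup wlan0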

Config wpa_supplicant
sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

Go to the bottom of the file and add the following type of entry, putting in the correct SSID and pwd:

network={
 ssid="SprintHotspot2.4-example"
 psk="thepassword"
 }

Save and Exit

Then execute the following to apply the new configuration…
wpa_cli -i wlan0 reconfigure

TEST IT:

ifconfig wlan0


The results show that the interface is up and running.

erick@raspberrypi ~/Music/music-test $ ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 74:df:e0:e0:0e:e0  
          inet addr:192.168.128.46  Bcast:192.168.128.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4023854 errors:0 dropped:1664207 overruns:0 frame:0
          TX packets:2955774 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:3003299485 (2.7 GiB)  TX bytes:1956045437 (1.8 GiB)

It should be working at this stage; trying to reach the net with something like ping google.com should give good results. If not, more troubleshooting is required. I had to do the next step to get it to reach the net over the wireless network, as it was trying to get out via the Ethernet connection to the router, which was set as the default gateway but was not hooked up to the web at all; just a router at 192.168.1.1 with no WAN port connection.

Additional Step

The following bit may or may not apply for everyone, but I am adding it here as it was not obvious at the moment I got WiFi up. I had to think on it a bit! Basically, a default route has to exist that takes traffic to a gateway to the Internet.

IN ORDER TO GET THE PI OUT ON THE INTERNET I NEEDED TO DO A…

sudo route add default gw 192.168.128.1 wlan0

As there was no route out, and it must pick eth0 by priority! It needs a route to the Sprint box through which I am connected to the net on the 192.168.128.0 network.

THE ABOVE WOULD HAVE TO HAPPEN ON EVERY REBOOT OR NETWORK REFRESH! Or just get rid of the default gateway on eth0 and it might pick up the gateway on wlan0 all of the time. If both are DHCP, the eth0 gateway will always be treated as preferred, so going static gets rid of it, which suited me as I was planning on using this R-Pi as a bridge from LAN to WLAN! This required editing /etc/network/interfaces to remove the 192.168.1.1 router as a gateway. The router was reconfigured by turning off DHCP on it, effectively making it an access point for WiFi. Essentially it becomes another path to the Internet along with the WiFi hotspot.
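
Alternatively, if the eth0 gateway needed to stay, a post-up line in /etc/network/interfaces should re-add the wlan0 default route every time the interface comes up. I did not go this route, so treat it as an untested sketch:

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
post-up route add default gw 192.168.128.1 wlan0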

erick@raspberrypi ~ $ route -n
 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
 0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
 192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
 192.168.128.0   0.0.0.0         255.255.255.0   U     0      0        0 wlan0
 erick@raspberrypi ~ $ sudo route add default gw 192.168.128.1 wlan0
 erick@raspberrypi ~ $ route -n
 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
 0.0.0.0         192.168.128.1   0.0.0.0         UG    0      0        0 wlan0
 0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
 192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
 192.168.128.0   0.0.0.0         255.255.255.0   U     0      0        0 wlan0
SHOWS UP IN /etc/resolv.conf as well….
erick@raspberrypi ~ $ cat /etc/resolv.conf
 domain router
 search router
 nameserver 192.168.128.1


ip route show displays it as well…
erick@raspberrypi ~ $ ip route show
 default via 192.168.128.1 dev wlan0
 default via 192.168.1.1 dev eth0
 192.168.1.0/24 dev eth0  proto kernel  scope link  src 192.168.1.17
 192.168.128.0/24 dev wlan0  proto kernel  scope link  src 192.168.128.46

 /etc/network/interfaces

The following /etc/network/interfaces file was edited to make the wlan0 connection which connected to the Internet work for the R-Pi.

Note the eth0 connection is set up static on the 192.168.1.0 wired network. The wlan0 connection is set for DHCP on the 192.168.128.0 network. The default gateway at 192.168.1.1 (the router) is commented out to allow the default to fall to the 192.168.128.1 WiFi router, which is a ZTE WiFi hotspot, basically a repeater from 4G LTE cell to WiFi.

Note the wpa-roam line points to the wpa_supplicant file that had the SSID and password entered earlier in this post to get the WiFi going.

erick@raspberrypi ~/Music/music-test $ sudo cat /etc/network/interfaces
[sudo] password for erick: 
auto lo

iface lo inet loopback
#iface eth0 inet dhcp

iface eth0 inet static
address 192.168.1.17
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
# Remove gateway to see if it fails-over to wlan0 gateway on 192.168.128.1 12242017
# gateway 192.168.1.1
# nameservers
dns-nameservers 8.8.8.8 8.8.4.4

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp

ALL UP AND RUNNING OK

Running ip a shows all interfaces. Note that the R-Pi is connected on wlan0 at 192.168.128.46 and on eth0 at 192.168.1.17; both networks are now available.

erick@raspberrypi ~ $ ip a
 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether b8:2a:eb:2a:a4:2a brd ff:ff:ff:ff:ff:ff
 inet 192.168.1.17/24 brd 192.168.1.255 scope global eth0
 valid_lft forever preferred_lft forever
 3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
 link/ether 08:8a:3c:ba:83:8a
 brd ff:ff:ff:ff:ff:ff
 inet 192.168.128.46/24 brd 192.168.128.255 scope global wlan0
 valid_lft forever preferred_lft forever

The connection to the WiFi node can be confirmed via iwconfig wlan0…

erick@raspberrypi ~ $ iwconfig wlan0

wlan0     IEEE 802.11bg  ESSID:"SprintHotspot2.4-example"  Nickname:"<WIFI@REALTEK>"
 Mode:Managed  Frequency:2.452 GHz  Access Point: 34:3A:87:3A:BA:3A
 Bit Rate:54 Mb/s   Sensitivity:0/0
 Retry:off   RTS thr:off   Fragment thr:off
 Power Management:off
 Link Quality=100/100  Signal level=63/100  Noise level=0/100
 Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
 Tx excessive retries:0  Invalid misc:0   Missed beacon:0

lo        no wireless extensions.

eth0      no wireless extensions.


Create a hidden WordPress page using bash on the command line

Recently I was searching around looking for a way to create a hidden page on a WordPress site. It is a hosted site, not on wordpress.com. It is on a Linux server to which I have shell access.

Initially I tried using a plugin that I found that hides pages and posts. Plugins, you have got to love or hate them: love them when they work great right out of the box, hate them when they take a long time to troubleshoot.

Rather than waste too much time with the plugin, I went straight to the command line.

Screenshot: making the hidden page

It turns out that if you publish a page and then log into the hosting server, make a directory somewhere under your public_html, change directory into it and execute…

 wget -x -nH your-page-url-to-hide-here


Screenshot: the page set to Draft or Private

…then go back in and make the page a draft or under review, so it “disappears” from the menu structure. It will still work as a “cached” HTML page that has been downloaded to the folder you created; pictures and whatnot that you have loaded in it will be fully functional.

Example of a hidden page

http://erick.heart-centered-living.org/hidden/i-am-a-hidden-page/

Once the original page is put into draft/under review or private mode, it is gone…

http://erick.heart-centered-living.org/i-am-a-hidden-page/

Caveat

I have noticed that caching can get in the way: if your server caches pages, wget may not see the page updated when you make changes. A quick remedy is to set the page to draft/pending review or private and delete the hidden page (I usually use rm -rf from the directory above it), then force wget to download the “404” page. Then you can publish the page, re-run wget, and it will be forced to fetch the fresh version. Keep note of the size of the file as a hint that it is getting the right one.
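
The whole refresh cycle, run from the directory above the hidden page's folder, using the example page above:

rm -rf i-am-a-hidden-page
# with the page still set to draft/private, this caches the "404" page:
wget -x -nH http://erick.heart-centered-living.org/i-am-a-hidden-page/
# publish the page in WordPress, re-run the wget to grab the fresh copy,
# then set the page back to draft/private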

Upcoming: Do this with a CGI Script

In an upcoming post, I will cover how to make a CGI script that will allow you to create a hidden page easily without having to use SSH to login to the server.


wget options used in this example, from the man page

-x
–force-directories
The opposite of -nd—create a hierarchy of directories, even if
one would not have been created otherwise.  E.g. wget -x
http://fly.srk.fer.hr/robots.txt will save the downloaded file to
fly.srk.fer.hr/robots.txt.

-nH
–no-host-directories
Disable generation of host-prefixed directories.  By default,
invoking Wget with -r http://fly.srk.fer.hr/ will create a
structure of directories beginning with fly.srk.fer.hr/.  This
option disables such behavior.

Wget Resources

https://www.lifewire.com/uses-of-command-wget-2201085

https://www.labnol.org/software/wget-command-examples/28750/

The Ultimate Wget Download Guide With 15 Awesome Examples

http://www.linuxjournal.com/content/downloading-entire-web-site-wget

Script to check and re-connect WiFi on Raspberry Pi

After having occasional dropouts of the Raspberry Pi WiFi (after I started using it as a bridge) and having to ifup wlan0 manually, I considered making a script to automate it. But a little searching online found this elegant solution to the problem.

mharizanov/WiFiCheck

https://gist.github.com/mharizanov/5325450
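
The script is meant to be run periodically from root's crontab (sudo crontab -e). An entry like the one below checks every five minutes; the script path is wherever you saved it:

*/5 * * * * /usr/local/bin/WiFiCheck.sh >> /var/log/wificheck.log 2>&1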

Install Slackware on a VM

Easy to follow tutorial on installing Slackware Linux onto a Virtual Machine

I have been interested in trying out Slackware for some time now. The Slackware Linux Essentials guide (a.k.a. the Slackbook) is an excellent review of Slackware and Linux in general. I went through it one winter a few years ago and was impressed, as it was a great refresher course on Linux. After a while I tend to forget some of the command line tricks that I do not use on a regular basis, and going over a manual like this is a good brush-up. Reading the book convinced me that I would have to try out Slackware someday.

Tutorial

I had no trouble following the tutorial and getting Slackware up and running on a VirtualBox VM. The current version, 14.2 (February 2018), is similar enough to the 13.0 install in the guide that the few differences are not a problem. The one difference that I noticed is that when the disk is partitioned, the option to mark it bootable did not appear for me as it did in the tutorial. I just went ahead and wrote the disk and it was fine. The tool might have some logic built in to decide what to do and no longer requires you to mark the partition bootable.

http://archive.bnetweb.org/index.php?topic=7370.0

Slackware DVD ISO Torrent Page

http://www.slackware.com/getslack/torrents.php

Slackware Live DVD/USB Stick

Live DVD/USB stick installs are relatively new for Slackware. In case you want to just go ahead and try it from a Live DVD or USB stick, it is now available as a download.
http://bear.alienbase.nl/mirrors/slackware/slackware-live/

Linux not booting, incorrect drive mapping, try hacking grub.cfg

Hacking /boot/grub/grub.cfg is not usually the first thing that comes to mind. But I ran into a problem when installing Kali Linux on a plug-in USB hard drive. I wanted to put it on the USB hard drive by installing from a USB boot stick. All went well with the install. On reboot, guess what: the USB stick and its multiple partitions were no longer “parked” on /dev/sdc-sdg, and Kali had installed thinking it was going to be at /dev/sdh. Pulling the USB stick put the USB hard drive at /dev/sdc on the next boot! So naturally, grub drops to a command prompt after it times out trying to find root on /dev/sdh, which has disappeared. When this happened, I puzzled on it for a few minutes before digging into grub.cfg; it was not my first thought, but it was the only thing I could think of that could be doing this.

When the machine boots and drops to the basic command line, find out what drives it thinks it has by running…

ls /dev/sd[a-z]2

…this will show all the sda2, sdb2, etc. Usually, but not always, the 2nd partition is the Linux file system; this is where the /boot directory lives with all of the GRUB files. Booting a live-install USB stick or the DVD is helpful at this point, as you can use sudo fdisk -l to list what drives are available, and you will need an editor to modify the grub.cfg file.

Hacking grub.cfg

The hack is to reboot the machine either with the USB stick/Live DVD or off of the hard drive resident in the machine and then….

chmod 644 /boot/grub/grub.cfg

…as it is read only…

-r--r--r-- 1 root root 14791 Jan  7 16:29 /boot/grub/grub.cfg

…remember to chmod it back to read only after editing using….

chmod 444 /boot/grub/grub.cfg

Make a backup copy of it first before editing!
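
Something like…

sudo cp /boot/grub/grub.cfg /boot/grub/grub.cfg.bak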

Editing grub.cfg

Once you have it in a read/write state, open it in an editor, emacs or something; even nano would work.

Yes it complains not to edit it, but when you can’t boot at all, because it can’t find where the root is, it’s worth a try!


#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

In a terminal window find out where the drive is really mapped using…

sudo fdisk -l

Example of second bootable disk at /dev/sdb…

Disk /dev/sdb: 149 GiB, 160000000000 bytes, 312500000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0007e21b

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sdb1  *         2048 279187455 279185408 133.1G 83 Linux
/dev/sdb2       279189502 312498175  33308674  15.9G  5 Extended
/dev/sdb5       279189504 312498175  33308672  15.9G 82 Linux swap / Solaris

The trick is to make the set root lines in grub.cfg for the OS drive that is failing to boot line up with where that drive is actually mapped now. In grub.cfg, search for lines like this…

set root='hd0,msdos1'

If it says hd0, it had better be the sda drive, for instance. In my case it showed hd7 (which maps to sdh) and I needed to edit hd7 to hd2 (sdc). This was done via a search and replace, sketched below.
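
sed can do the replacement in place and keep a backup copy; since hd7 could in theory appear in other contexts, diff the result before rebooting:

sudo sed -i.bak 's/hd7/hd2/g' /boot/grub/grub.cfg
diff /boot/grub/grub.cfg.bak /boot/grub/grub.cfg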

Also, for good measure and to keep all the commenting straight, search around for /dev/sd and make sure all the menuentry titles line up with the hd* values in set root. This is just so that you don't get confused later if set root is changed and the menus don't line up with reality.

menuentry 'Linux Mint 17.3 Xfce 64-bit, with Linux 3.19.0-32-generic (on /dev/sdb1)'

After this correction it booted fine. Now I will just have to pay attention to what happens with drive mapping in case I plug it into another machine. But it is just an experiment for now, nothing critical on the drive, so no worries!

Another note on Boot Repair

If grub gets totally screwed somehow, boot-repair will fix it.

For instance, I once had grub lose its brains on an EFI boot drive. The symptom was the BIOS error about not finding a bootable medium. Use boot-repair: boot using a live CD or USB stick with the exact same version as the one you are trying to fix, then run the repair and let it automatically fix things.

sudo add-apt-repository ppa:yannubuntu/boot-repair 

sudo apt-get update 

sudo apt-get install -y boot-repair && boot-repair


from: https://help.ubuntu.com/community/Boot-Repair


Additional Resources

Fix for Linux boot failure and grub rescue prompt following a normal system update

http://giantdorks.org/alain/fix-for-linux-boot-failure-and-grub-rescue-prompt-following-a-normal-system-update/

How do I run update-grub from a LiveCD?

Introduction to fstab

https://help.ubuntu.com/community/Fstab

Autoshutdown Code Modded to hybrid-sleep and allow required restarts

Hybrid Sleep Code

I decided to use hybrid-sleep instead of a suspend. I have been using the autoshutdown code both to shut down, using the shutdown command, and to suspend. A server that I have been using for a year now supports suspend, and I have used systemctl suspend successfully with it. But if the power goes out, the next time it behaves as if it were shut down and gets a fresh boot. The way around that is systemctl hybrid-sleep, which writes the RAM contents to swap and then suspends. This way, if the power goes out, it will just resume from hibernation.
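
Since hybrid-sleep writes the RAM contents to swap, the swap area generally needs to be at least as large as RAM for the resume-after-power-loss case to work. A quick check before relying on it:

free -h        # compare the Mem: and Swap: totals
swapon -s      # list the active swap areas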

Reboot code

After setting up the machine with hybrid-sleep, I realized that the machine needs a reboot once in a while after unattended updates, and thought it would be nice to automate that process. I looked online and found a piece of code that will reboot the machine if a reboot is required. This is done by detecting the presence of the reboot-required file. So far, testing once today, 8/12/2017, it is OK!

https://muffinresearch.co.uk/how-do-i-know-if-my-ubuntu-server-needs-a-restart/

Snippet of code added to the autosuspend (a.k.a. autoshutdown) code that was covered in the original post on this topic.

# If the reboot-required file is present, restart and log it
if [ -f /var/run/reboot-required ]
then
  logit "RESTART REQUIRED"
  echo 'Restart required' >> /var/www/html/shutdown.txt
  date >> /var/www/html/shutdown.txt
  echo "------------------------------------" >> /var/www/html/shutdown.txt
  systemctl reboot
fi

# Fall through and hybrid-sleep it!

systemctl hybrid-sleep
 # Switched to hybrid-sleep 08122017 systemctl suspend

Video Conversion Script for ffmpeg

Once in a while I have to convert a video made with Cheese, a .webm, or one made by my camera, a .mov, to an MP4, which takes up the least space (as of 2017) and seems to be supported across a lot of devices.

The script is both a script to run and a reminder of the syntax that ffmpeg expects, as I seem to occasionally forget and wanted a snippet online as a quick reference.

In this one, scaling is applied via the -s option and the bitrate is limited via the -b option. Plus, it lets you choose the filename of the output file.

If ffmpeg -i "$1" "$1".mp4 were used, for example, it would take the input file and convert it to MP4, tacking on the .mp4 extension, with no scaling or bitrate limiting.

#! /bin/bash
# usage: pass the input file and the output filename as the two arguments
#ffmpeg -i input.wmv -s 480x320 -b 1000k output.mp4
ffmpeg -i "$1" -s 480x320 -b 1000k "$2"
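
Saved as something like convert-video.sh (the name is arbitrary) and made executable, usage looks like this:

chmod +x convert-video.sh
./convert-video.sh input.mov output.mp4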

Better Yet Do It Batchwise

For example, with a for loop, this code will simply go through the directory and convert all .webm's to .mp4's; it is set up to do scaling too, if needed, using -s hd480. It also keeps the same filename, changing the extension to the appropriate one for the output file.

#!/bin/bash

# convert every .webm in the current directory to .mp4
for a in ./*.webm; do
#  ffmpeg -i "$a" -qscale:a 0 "${a%.webm}.mp4"   # variant without scaling
  ffmpeg -i "$a" -s hd480 -qscale:a 0 "${a%.webm}.mp4"
done