Tag Archives: bash

Create a hidden WordPress page using bash on the command line

Recently I was searching for a way to create a hidden page on a WordPress site. It is a hosted site, not on wordpress.com, and it is on a Linux server to which I have shell access.

Initially I tried using a plugin that I found that hides pages and posts. Plugins, you have to love or hate them: love them when they work great right out of the box, hate them when they take a long time to troubleshoot.

Rather than waste too much time with the plugin, I went straight to the command line.

Screenshot - Making the hidden page on the command line

It turns out that if you publish a page and then log into the hosting server, make a directory somewhere under your public_html, change directory into it and execute…

 wget -x -nH your-page-url-to-hide-here

 

Screenshot - Set to Draft or Private

…then go back in and set the page to Draft or Pending Review so it “disappears” from the menu structure. The copy that wget downloaded to the folder you created still works as a “cached” HTML page; pictures and whatnot that you have loaded into it will be fully functional.
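Putting the whole sequence together, it looks something like this. This is a minimal sketch using the example page from this post; the directory name and the assumption that public_html is your web root are placeholders to adapt to your own site:

 cd ~/public_html
 mkdir -p hidden
 cd hidden
 # -x recreates the URL's path and -nH drops the hostname prefix, so this saves
 # the page to ./i-am-a-hidden-page/index.html, served at /hidden/i-am-a-hidden-page/
 wget -x -nH http://erick.heart-centered-living.org/i-am-a-hidden-page/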

Example of a hidden page

http://erick.heart-centered-living.org/hidden/i-am-a-hidden-page/

Once the original page is put into draft/under review or private mode, it is gone…

http://erick.heart-centered-living.org/i-am-a-hidden-page/

Caveat

I have noticed that caching can get in the way. If your server caches pages, wget may not see the updated page when you make changes. A quick remedy is to set the page to Draft/Pending Review or Private and delete the hidden page; I usually use rm -rf from the directory above it. Then force wget to download the “404” page. Once that is done, publish the page and re-run wget, which forces it to get the fresh version. Keep note of the size of the file as a hint that it is getting the right one.
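In commands, the refresh sequence looks roughly like this (a sketch, using the same placeholder paths as above):

 cd ~/public_html/hidden
 rm -rf i-am-a-hidden-page          # clear the stale cached copy
 # with the page set to Draft, this fetch flushes the cached version
 wget -x -nH http://erick.heart-centered-living.org/i-am-a-hidden-page/
 # publish the page again in WordPress, then fetch the fresh copy
 wget -x -nH http://erick.heart-centered-living.org/i-am-a-hidden-page/
 ls -l i-am-a-hidden-page/          # check the file size as a sanity check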

Upcoming: Do this with a CGI Script

In an upcoming post, I will cover how to make a CGI script that will allow you to create a hidden page easily without having to use SSH to login to the server.

 

wget options used in this example, from the man page

-x
–force-directories
The opposite of -nd—create a hierarchy of directories, even if
one would not have been created otherwise.  E.g. wget -x
http://fly.srk.fer.hr/robots.txt will save the downloaded file to
fly.srk.fer.hr/robots.txt.

-nH
–no-host-directories
Disable generation of host-prefixed directories.  By default,
invoking Wget with -r http://fly.srk.fer.hr/ will create a
structure of directories beginning with fly.srk.fer.hr/.  This
option disables such behavior.

Wget Resources

https://www.lifewire.com/uses-of-command-wget-2201085

https://www.labnol.org/software/wget-command-examples/28750/

The Ultimate Wget Download Guide With 15 Awesome Examples

http://www.linuxjournal.com/content/downloading-entire-web-site-wget

Automating PHPList Process Queue

Frequently when using the non-command-line version of PHPList, which I have loaded on my service provider's server, I have to hit Process Queue multiple times for a message. I have read a lot of posts online complaining about this issue. I have done some troubleshooting of it myself by turning on the verbose feature in PHPList, and I am still at a loss as to what it gets stuck on before it stops processing the queue.

PHPList works great; I love it for all that it does and does well. My one complaint is the Process Queue issue, and from reading a bunch of posts online I have successfully patched together a solution.

This code can be executed on the server where PHPList lives if your host will let you run a wget command from a CRON entry; some won't. In my case I opted to simply run the command from a Raspberry Pi server that I have running 24/7. It is simply a looping script that does the equivalent of hitting the “Process Queue” button on PHPList at regularly timed intervals, and it detects when the queue has been processed. I have seen several examples of firing off wget every 5 minutes to do this, but that means it gets run every 5 minutes, all day, every day. Why not run something that can be scheduled for a time and then runs to completion, which is what this script does?

This script also produces a log file that can be copied to a location viewable via the server so you can double-check that all ran OK. I see this mostly as a debug feature, and it could potentially be turned off along with some of the echoes of text, except for the one in the else branch: it is an error to have an empty else clause in a bash script.

PHPList Automatic Process Queue Script

#!/bin/bash
# Controller Script for PHPlist Process Queue
# Erick Clasen blog.oils-of-life.com 05/16/2015

# User defined variables
# No trailing / on DIR!
DIR=/home/erick/php-proc
LOG=php-pq.log
# Time to sleep between loop iterations.
TIME=3m
# Max number of loops, so the script breaks out if for some reason it fails to complete.
LOOPS=100
# Publish a copy of the log to the following directory
PUBDIR=/home/erick/public_html

# wget was having trouble with some options, so simply switch to the dir we want it to work in.
cd $DIR

x=1
echo "Start-------------" >> $LOG

# While loop; breaks out when done, with LOOPS as a guard against an infinite loop.
while [ $x -le $LOOPS ]; do
    # Timestamp in DDMMYYYY_HHMM-SS (UTC)
    timestamp=$(date -u +"%d%m%Y_%H%M-%S")
    echo $x $timestamp
    echo $x $timestamp >> $LOG
    x=$(( $x + 1 ))

    # Use wget to run process queue with login and password provided
    wget 'your_url_here.com/phpList/admin/?page=processqueue&login=phplist&password=your_password_here'

    # Test to see if PHPlist process queue has completed.
    if grep -q "Finished, Nothing to do" $DIR/index.*
    then
        # Done: mark the log, publish it, remove the index files that wget fetched and break out.
        echo "All Done! Breaking Out"
        echo "All Done!---------" >> $LOG
        echo "------------------" >> $LOG
        # Publish Log, Optional
        cp $LOG $PUBDIR/$LOG
        rm $DIR/index.*
        break
    else
        # Nothing happens here, but bash requires at least one command in an else clause.
        echo "Keep Running..."
    fi

    # Sleep for TIME between iterations.
    echo "------- Sleeping Process --------"
    sleep $TIME
done

PHPList Automated Process Bounces Script

Process Bounces can be run from a script as well…

#!/bin/bash
# Controller Script for PHPlist Process Bounces
# Erick Clasen blog.oils-of-life.com 05/16/2015

# User defined variables
# No trailing / on DIR!
DIR=/home/erick/php-proc
LOG=php-pq.log
# Publish a copy of the log to the following directory
PUBDIR=/home/erick/public_html

# wget was having trouble with some options, so simply switch to the dir we want it to work in.
cd $DIR

# Use wget to run process bounces with login and password provided
wget 'your_url_here.com/phpList/admin/?page=processbounces&login=phplist&password=your_password_here'
echo "----------------------------" >> $LOG
echo "BOUNCES HAVE BEEN PROCESSED!" >> $LOG
echo "----------------------------" >> $LOG
# Publish Log, Optional
cp $LOG $PUBDIR/$LOG
rm $DIR/index.*

CRON entry to run these scripts

These entries run Process Queue every Monday, Wednesday and Friday (1,3,5) at 4:15AM and Process Bounces at 12:15AM on the first day of the month. Nothing magic about those settings, you can set them any way you wish. I figure starting the process in the wee hours of the morning will allow people to receive the email by the time that they wake and first check their mail. I set the time to 15 after so it won’t run at the same time as some processes that I have set to run on the hour. Sending mail on weekdays keeps people from having to deal with more email than they might like on the weekends.

15 04 * * 1,3,5 /home/erick/php-proc/php-pq.sh
15 00 1 * * /home/erick/php-proc/php-pb.sh

PHPList Automatic Process Queue Override

If you must make PHPList process the queue now and don't want to wait for the CRON task, two options are available: do it the normal way from the PHPList admin site, or use something as simple as a CGI script to force the php-pq.sh script to execute. Below is an example of a dirt-simple CGI script. It prints the tail of php-pq.log, the log of the previous automatic processing run, and then fires off the Process Queue override by running the script. Basically, the act of loading the page makes it run. I keep my cgi-bin directory locked down so no one outside of a few IP addresses can get to the files there, but it would be fairly harmless if someone did; the worst that happens is that the queue gets processed.

CGI Code for Automatic Process Queue Override

#!/bin/bash
# Dirt-simple CGI: show the tail of the last run's log, then kick off the process queue script.
# Note: the HTML tags below are a plausible reconstruction; the original markup was
# stripped out by the blog software.
echo "Content-type: text/html"
echo ""
echo "<html><head><title>PHPList Automatic Process Queue Override</title></head><body>"
echo "<p>PHPList Automatic Process Queue Override from host $(hostname -s)</p>"
echo "<pre>$(tail /home/erick/php-proc/php-pq.log)</pre>"
echo "<p>Information generated on $(date)</p>"
echo "<p>Processing...</p>"
echo "<pre>$(/home/erick/php-proc/php-pq.sh)</pre>"
echo "</body></html>"

 
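For the cgi-bin lockdown mentioned above, an Apache .htaccess along these lines is one way to do it. This is a sketch using Apache 2.2-style directives (Apache 2.4 uses Require ip instead), and the address shown is a placeholder:

 # allow only a few known IP addresses into cgi-bin; 203.0.113.5 is a placeholder
 Order deny,allow
 Deny from all
 Allow from 203.0.113.5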

PHP Config File Tweaks

I am working on some notes about changes I have made to the PHPList config.php file and will be posting on this in August 2015.

U.S.D.A. Forest Service Webcam Image - Cloud Peak, WY

Active Desktop Wallpaper using wget

It is nice to have a desktop wallpaper that is not static. I like to see an outdoor scene that has a good view and a dynamic sky. Wyoming certainly has some ever-changing skies and nice terrain, so I have a wallpaper background set to show the Cloud Peak Wilderness in Wyoming that updates every hour.

It is possible to load a JPG file periodically from a source using the wget command. In the example below, I am loading a scene from Cloud Peak, Wyoming that is captured by a US Forest Service webcam. It loads right into my home folder; it could be put in any place that you prefer.

There is a nice bunch of pictures taken by the Forest Service from all over the country, and they provide some nice high-resolution scenery. See the links at the bottom of this post.

Code for script file

#!/bin/bash
rm /home/erick/cpwa1_large.jpg
wget http://www.fsvisimages.com/images/photos-large/cpwa1_large.jpg

The code first removes the old copy of the image and then it uses the wget command to fetch a new copy.
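How the desktop actually picks up the new image depends on your environment. As one hypothetical example, on a GNOME-based desktop you could point the wallpaper at the downloaded file once and let the hourly fetch replace it in place:

 # Hypothetical GNOME example; other desktops have their own mechanism for this
 gsettings set org.gnome.desktop.background picture-uri file:///home/erick/cpwa1_large.jpg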

.wgetrc

It is not necessary to modify .wgetrc to use wget, but I put this here as an FYI. There is a configuration file for wget, located at /usr/local/etc/wgetrc. More info on wget locations. You can make a copy of it and put it in your home directory; once it is there, any modifications to it apply to your user profile. I have mine modified to do a few non-standard things. One is to use timestamping, which makes wget download a file only when it is newer than the local copy.
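Making the per-user copy is a one-liner (assuming the global file really is at /usr/local/etc/wgetrc; on many distros it lives at /etc/wgetrc instead):

 # copy the global wget config to a per-user config
 cp /usr/local/etc/wgetrc ~/.wgetrc

With that in place, the timestamping option looks like this: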

# Set this to on to use timestamping by default:
timestamping = on

Secondly, I added a line at the end of the file that sets an option limiting wget's download rate. Otherwise wget will run as fast as possible and use the entire bandwidth. This option can also be used on a case-by-case basis by putting it on the command line when wget is called. Doing this keeps wget from slowing down your connection to the Internet and from hitting the server hard with high-speed downloads, which is important if you are downloading multiple large files.

limit-rate=20k

It is also possible to add a bit of a delay between connections when downloading. This avoids hammering the server you are downloading from when fetching multiple files, makes things easier on the server load, and makes your download activity less likely to be obnoxious to the folks running the server. I would imagine obnoxious downloaders and site scrapers are more likely to get banned if someone notices a spike in server load and pins it down to an IP address.

# It can be useful to make Wget wait between connections.  Set this to
# the number of seconds you want Wget to wait.
wait = 1

Some sites go as far as prohibiting downloads unless the user agent contains a particular string. I didn't do this yet, as I have not had a problem with this issue, but it is possible to set the user agent via --user-agent="Acceptable String Here".
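Put together on a single command line, the options above look something like this. This is a sketch; the URLs and the user-agent string are placeholders:

 wget --limit-rate=20k --wait=1 --user-agent="Mozilla/5.0 (X11; Linux x86_64)" \
      http://www.example.com/photos/image1.jpg http://www.example.com/photos/image2.jpg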

More on user-agent modification

CRON entry

01 08-22 * * * /home/erick/cpwa1/wget-cpwa1.sh

Using crontab -e, a line can be loaded into your CRON file to run the script periodically. The one above runs every hour, 1 minute after the hour, between 8AM and 10PM. There is no sense in loading nighttime pictures, so the times are bounded to load pictures during daylight hours (right now, for Mountain Daylight Time). The picture I load is updated around 59 minutes after the hour, so loading 1 minute after the hour provides a bit of a guard band of time.

USDA Forest Service Webcams

USDA Forest Service Real Time Image Description Page
USDA Forest Service Real Time Image Gallery