Tag Archives: Linux

Hacker Cat

Using AWK to pull data from a field within a text file

Intelligence is the power which gives us the ability to distinguish when we are not conscious.

# awk -F" " '{print $11}' pressure-delta.txt
tail -1 $PLOTTABLE_DIR/$PRESSURE_DELTA_TAIL_FILE > $PLOTTABLE_DIR/pressure-delta-tail-one-line.txt

# Use awk with -F" ", which uses space as the field separator.
# The pressure delta in inHg is in the 11th field.
# Multiply by 1000 because bash does not do floating point math natively; working in milli-inHg lets bash compare plain integers.
# Operate on the file $PLOTTABLE_DIR/pressure-delta-tail-one-line.txt

pressure_delta=`awk -F" " '{print $11*1000}' $PLOTTABLE_DIR/pressure-delta-tail-one-line.txt`
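To give an idea of how that integer might then be used, here is a minimal sketch of a bash comparison, assuming a made-up alert threshold of -40 milli-inHg (a 0.040 inHg drop) and a placeholder echo for the alert action. The only tweak to the awk command above is printf "%.0f" to round the result to a whole number so the bash test is safe:

pressure_delta=`awk -F" " '{printf "%.0f\n", $11*1000}' $PLOTTABLE_DIR/pressure-delta-tail-one-line.txt`
# bash integer test; -40 and the echo are placeholders for a real threshold and alert action
if [ "$pressure_delta" -lt -40 ]; then
    echo "Pressure falling fast: ${pressure_delta} milli-inHg"
fi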

Penguin

Linux-vs-Windows

The greatest gift of all to mankind is the friendship and understanding that which we have cultivated with each other and in cooperation.

Nice site Tim. A little backstory on how I found myself here. I found your site while looking up Philip S. Callahan after reading about him in Dan Barber's book, The Third Plate. You have some interesting info on him, as well as what I have seen so far on calendar discrepancies. Clocks, calendars, and precision timekeeping are other interests of mine and I enjoyed those posts. After that I checked out your categories and that led me here to this post.

I will be speaking from personal experience with my own machines and others that I have worked on. There is a bit of a chronology to this as well.
Back when Windows started, I was a late adopter. I stayed in the command line, the DOS world, until Windows 95. It came out when I was in college and I briefly had Win 3.1 until I could install 95 on the machine I had at that point. At the same time I was using the university's computers; a bank of Win95 PCs was located in a convenient computer lab. The Internet was really coming on hard and fast, so the inevitable occurred: the room was packed to the gills with students and there was a waiting line most of the time. But there was another computer lab, mostly for computer science majors, full of Sun SPARCs running UNIX and barely used at all. The room was cooler and quieter too, a bonus. This was when I got a feel for what a non-Microsoft OS could be like. I wound up learning it well enough to use it with fair competency, a struggle at times to remember how to do something, but worth the effort to stick with it as it ran so smoothly. I wondered if there was anything like this that I could load on a PC. A few years went by and I started to do this with Linux.
The first few machines I used Linux on were set up with dual boot: Red Hat/W98 and later Ubuntu/XP combos on two separate machines, one after the other in time. Setting up Red Hat was a pain at the time and not for anyone who is not "good" with computers. Ubuntu was easy to set up, almost as easy as setting up Windows, but it was much easier to work with than the earlier Red Hat 9.0, and that was the key. It was easy enough for my non-technically-minded spouse to use; she was not lost in it, in other words, and could actually use it without a lot of questions or frustration. On top of that, the performance of both machines was hands down better with Linux. Things like the time from a cold boot to the time you could click and open a program were faster. More programs could be run simultaneously without bogging the machine down. Moving around on the screen and opening files went faster as well. On Linux there was minimal weird behavior and very infrequent total lockups requiring a reboot.

There was no degradation either. What I mean is that after having a Windows install running on a machine for years, loading programs on it one after the other over time, it seems to get more unstable and flaky to the point that a fresh install is needed. This has gotten better with Win 8 at least, I have noticed. The machine I had after the XP/Ubuntu one was to be my last Windows machine: a Xeon machine (XP/Lubuntu) with 1GB of RAM. It was expensive RDRAM and I chose to ride it out a while as-is; Linux seemed to run a bit better with less memory. In other words it would take longer to hit the out-of-RAM wall and start to swap to the drive, and when it did, it was less aggressive and didn't lock up for a long time like it did while running XP. A lockup meaning the time you have to just wait for the machine to start responding again while the disk just grinds. As I said, this was the final Windows machine for me; with expensive memory, it paid to toss the PC and get a newer used machine for the same amount of money as an upgrade.

This is the machine that I am on now, 6 years old and running Mint XFCE. Right now I am actually composing this on it while running Slackware in a VirtualBox, to test it out a bit. My spouse has basically the same machine, same age, same CPU, with Windows 7 (after a brief try at Windows 10, which was short-lived as the performance was sub-par, plus when it did updates it "inhaled" 100% of the bandwidth on my connection for long periods, which was frustrating). The speed difference is quite noticeable between the two machines, Win 7 vs Mint XFCE. On a cold start with Mint, I can click and open something like Firefox or a word processor as soon as the network card is recognized, about 9 seconds after boot. The Win 7 machine takes at least 3-4 times longer. It also performs much more sluggishly overall when it finally "arrives" after a few minutes. My estimate of the speed at which I can maneuver on the Win 7 machine is roughly equivalent to when I tried Ubuntu on a single-core Pentium 4 machine, circa 2004, so 14 years old. One final comparison: I had a neighbor with a new machine, a budget one, but new, with Windows 10, and it still moved a lot slower than the 6-year-old machine that I have with Mint.

The difference in performance is just what I have experienced, and it motivated me to move to Linux 100%. Not to mention the stability; less odd behavior and fewer virus and malware issues are bonuses. Linux has come of age. It was once a tool that was too technical for the common user, but at this point most people could get up to speed with it fairly quickly. A little learning up front is an investment that will save time in the long run, with all of the spare seconds saved over waiting for Windows to respond to human inputs.
Microsoft has had a few hits, XP and 7 come to mind, but the product seems to go off the rails badly almost every other release; Vista and 8 are examples. I wonder why 9 was skipped. Maybe it was going in the wrong direction early on and that was realized in-house before launch; I don't know the history with that.

To all the readers, happy computing to all, with whatever OS you run,
Erick

Random AI Picture

Windows Death Cross Malaga Bay

Imagination is the power to make a difference in yourself.

My comments on Windows -vs- Linux from Malaga Bay. https://malagabay.wordpress.com/

I thought this was worth a repost here. I came across the Malaga Bay site while doing some research on Philip S. Callahan, a very interesting fellow who studied, among other things, "why is it that crops which are grown on healthy soils never attract diseases and insects."
https://malagabay.wordpress.com/2016/03/07/philip-callahan-paramagnetism/

https://malagabay.wordpress.com/?s=Philip+S.+Callahan

Microsoft has had a few OK releases in my opinion; Windows XP and 7 come to mind. The rest seem like they have been wrong turns, or at least not fully baked in the oven of development and testing...
https://malagabay.wordpress.com/2016/12/30/windows-death-cross/comment-page-1/#comment-14768

Blue Screen of Death Again!

Wget an ISO or other large file in the background

Let us forget the past. And remember that the past is a gift of the present, not a substitute for the future.

I was trying to download the Debian testing DVD ISO and it looked like it would take many hours, and I wanted to power off the machine. This was back a while ago with a slower internet connection, but the topic is still relevant. Normally I use the torrent for a distro file, but for the testing branch of Debian none were available.

The solution

I have a Raspberry Pi running 24/7; let it do the work overnight and I can just power down my machine and not worry about the download.
Instead of downloading the file itself, I grabbed the link to the download location.
Then executed

wget -c https://gensho.ftp.acc.umu.se/cdimage/buster_di_alpha2/amd64/iso-dvd/debian-buster-DI-alpha2-amd64-DVD-1.iso
Output...
 --2018-02-07 18:15:27-- https://gensho.ftp.acc.umu.se/cdimage/buster_di_alpha2/amd64/iso-dvd/debian-buster-DI-alpha2-amd64-DVD-1.iso
 Resolving gensho.ftp.acc.umu.se (gensho.ftp.acc.umu.se)... 194.71.11.176, 2001:6b0:19::176
 Connecting to gensho.ftp.acc.umu.se (gensho.ftp.acc.umu.se)|194.71.11.176|:443... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 3864182784 (3.6G) [application/x-iso9660-image]
 Saving to: `debian-buster-DI-alpha2-amd64-DVD-1.iso'

Success!

Now all I have to do is put the task in the background via Ctrl-Z and then bg, detach from the SSH session into the R-Pi, and it will just download in the background to the hard drive tethered to its USB port. When you enter bg it will still print its progress to the screen, but the terminal can be closed out fine.
There is also a -b option for wget that will launch it into the background from the start.
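A minimal sketch of the -b route with the same ISO; when started with -b, wget detaches right away and appends its progress to a wget-log file in the current directory, which you can follow or just check later:

wget -b -c https://gensho.ftp.acc.umu.se/cdimage/buster_di_alpha2/amd64/iso-dvd/debian-buster-DI-alpha2-amd64-DVD-1.iso
# wget replies with something like: Continuing in background, pid 12438. Output will be written to 'wget-log'.
tail -f wget-log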

ps aux|grep wget

…will confirm that it is still running…

 erick 12438 7.0 2.2 13120 10996 ? S 18:15 2:46 wget -c https://gensho.ftp.acc.umu.se/cdimage/buster_di_alpha2/amd64/iso-dvd/debian-buster-DI-alpha2-amd64-DVD-1.iso

Watch

While in the directory that it is downloading to, a watch command can be used to see the progress of the download…

watch ls -l debian-buster-DI-alpha2-amd64-DVD-1.iso

 

Output…

Every 2.0s: ls -l debian-buster-DI-alpha2-amd64-DVD... Wed Feb 7 18:56:25 2018

-rw-r--r-- 1 erick erick 280608768 Feb 7 18:56 debian-buster-DI-alpha2-amd64-DVD-1.iso

This will show a progressive increase in file size, in case you want to monitor it.

 

Cloning Linux Mint Setups

Recently I swapped in an SSD to be the new primary drive on my media center PC which was running Linux Mint 18.0 on the spinning SATA drive.

This post is a brief documentation of the basic steps involved in cloning, or upgrading and cloning, Linux Mint. Most likely this works fine for Ubuntu as well as Debian, as they share a common ancestry. There are most likely limits to this scheme; I imagine things would break badly trying to do this across 17.3 and 18, for example, as those are based on different versions of Ubuntu, 14.04 vs 16.04. I might try a clone when the next whole-number version of Mint comes along: just pop in a drive that I don't care about, or do it on a VM such as VirtualBox, as an experiment.

Plans

The plan is to relieve some of the storage duties of the spinning drive, which was filling up. There is also a speed increase: the SSD can move data 4x faster than the spinning drive, but more importantly, with no moving parts its access time is minute in comparison to the spinning drive. Applications open much faster, boot time is cut by 75%, etc. If the machine needs to use swap, it won't grind to a halt the way it would with a slow disk. This machine is a bit older, SATA II, but a Solid State Drive (SSD) still makes a big difference.

The idea is to clone over the home folder but exclude large data such as the ~/Music folder: leave that on the old drive, mount the old drive as additional storage, and point a symlink at it, as sketched below.
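The rsync in Step 2 below can skip the folder with --exclude='Music/'; then a symlink points at the copy still sitting on the old drive. A rough sketch, assuming the old drive ends up mounted at /media/erick/old-drive (the mount point is just an example; adjust it to wherever your old drive lands):

# after the Step 2 rsync (run with --exclude='Music/'), link the new home to the old copy
ln -s /media/erick/old-drive/home/erick/Music /home/erick/Music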

Old Setup: 160GB Spinning Drive
New Setup: 80GB Primary SSD

Goal

The goal of this post's example is to do an upgrade from Linux Mint 18 to 18.3, clone over my user settings, and reinstall all programs. Over the year and a half that the machine has been in use, quite a few programs have been installed on it. Many of them run from the command line or are libraries related to the machine learning code that runs in the background on the machine. Needless to say, it would be very hard to remember them all, and without them a lot of little things would be broken.

Step 1: Install Linux Mint from USB Stick or DVD

This step is pretty basic and is covered elsewhere on the web…

Linux Mint, Ubuntu, Debian

But needless to say you want to create a user that has the same name and User ID (UID) and Group ID (GID) as on the drive that you will be cloning from.
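On a fresh install the first user typically gets UID and GID 1000 anyway, but it is worth a check; a small sketch, where erick and 1000 are just examples:

# on the old setup, note the numeric IDs
id erick                                  # e.g. uid=1000(erick) gid=1000(erick) ...
# on the new install, the same command should report the same numbers;
# if creating an extra user, the IDs can be pinned explicitly
sudo groupadd -g 1000 erick
sudo useradd -m -u 1000 -g 1000 -s /bin/bash erick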

Step 2: Log in on the new machine/drive setup, kill your home directory, and rsync the old one over

Mount the old drive; doing this from the GUI file manager is a fine way to do it. Make note of where it mounts. You can always execute df from the command line to find where it mounted as well.

It sounds crazy but, it does work. Login, open a terminal and execute…

rm -rf /home/yourusername

Everything the OS needs right now to get through the next step is in memory, so nothing will crash, and this gives you a blank slate to work with.

Next rsync over your home folder from the old drive (/dev/sda in my case), making sure that you use the archive option. Using the v and h options is helpful as well, to produce a lot of output in case you have to trace through a problem.

-v : verbose
-a : archive mode; copies recursively and preserves symbolic links, file permissions, user & group ownerships and timestamps
-h : human-readable, output numbers in a human-readable format

Example:

For me it went something like this…

rsync -avh /media/erick/B0B807B9-E4FC-499E-81AD-CDD246817F16/home/erick /home/

Next log out and then back in. Almost like magic, everything should look familiar. The wallpaper on the desktop should look like whatever it was on the old setup, and fonts and other desktop sizing customizations should be there. Open the browser and it should be like you left it in the old setup. It is almost like you hibernated the old setup and woke it up after teleporting its soul into the new drive.

But wait, the software needs attention

Step 3: Bring over the software too, sort of…

"apt-get install it over again" is closer to the truth. I tried following a post on how to do this (https://askubuntu.com/questions/25633/how-to-migrate-user-settings-and-data-to-new-machine) but what was described in it did not lead to success. The suggestion was the following…

oldmachine$ sudo dpkg --get-selections > installedsoftware
newmachine$ sudo dpkg --set-selections < installedsoftware
newmachine$ sudo apt-get --show-upgraded dselect-upgrade

It didn't work, but it at least did the collection part, so I wound up using the first command…

oldmachine$ sudo dpkg --get-selections > installedsoftware

…and then brute-forced an install by running grep, rev, cut, and rev again on the input file. This basically flips every line in the file, removes the word "install" (which is now at the beginning and backwards), then flips each line back over.

The next line with the awk command prepends sudo apt-get install to the front of each line and saves the output to reinstall-software.sh

 installedsoftware-to-apt-get-install.sh
 #!/bin/bash
 cat installedsoftware | grep "install" | rev | cut -c 12- | rev > cleaned-installed-software
 awk '{printf "sudo apt-get install "$0"\n"}' cleaned-installed-software > reinstall-software.sh

Run the reinstall-software.sh script and it will do just what it says: install all of the software that was on the old setup. apt-get does have a -y option to pre-answer Yes to the install prompts, but I left it off so that I could review the larger pieces of software being loaded. A few times I hit no by accident and had to re-run the script, no big deal.
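If you would rather let it run unattended, a small variation is to regenerate the script with -y and keep a log of what happened (file names as in the script above):

awk '{printf "sudo apt-get install -y "$0"\n"}' cleaned-installed-software > reinstall-software.sh
bash reinstall-software.sh 2>&1 | tee reinstall-software.log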

Reboot is best now to avoid side-effects

Before going much further a reboot is probably in order as so much has changed on the machine.

For me, during the software install process, I was presented with a question about picking LightDM or another display manager. I picked LightDM because I think that is what I had been using. After I was all done, I put the machine in suspend and it had a bit of trouble coming out of it, with a temporary error related to the display manager. A blue screen came up with a message about removing a temporary file. Just rebooting the machine cleared this up, as the /tmp directory is flushed on boot. Apparently this was something that was set before the upgrade, clone, and software install process and did not get unset. Other than that I have seen no side effects from the upgrade, clone, and software install process.

Other Items

If you had files configured outside of the home directory, such as /etc/hosts, you will obviously have to copy those over, along with any scripts you put in /etc/cron.hourly, cron.weekly or cron.monthly on the old machine. It also pays to do a dump of your crontab using crontab -l > crontab-dump.txt on the old setup so it can be reconfigured to the same settings; a sketch follows.
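Something along these lines works, assuming the old drive is still mounted at the same path used for the home-folder rsync (the UUID path below is from my setup; substitute your own mount point):

# system files live outside /home, so copy them over separately
sudo cp /media/erick/B0B807B9-E4FC-499E-81AD-CDD246817F16/etc/hosts /etc/hosts
sudo rsync -avh /media/erick/B0B807B9-E4FC-499E-81AD-CDD246817F16/etc/cron.hourly/ /etc/cron.hourly/
# on the old setup, dump the user crontab so it can be re-entered on the new one
crontab -l > crontab-dump.txt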

Cloning old to new box

This entire process can be used to clone one computer setup to another, old box to new one for example. Which brings us to…

Final Thoughts: Twin Machines

It is entirely possible to keep two machines in sync using the methods outlined above. I have not tried this, but I am tempted to at least test it out. What I am thinking of is a laptop and a desktop, for instance. The desktop, with its ability to hold multiple drives with ease, works nicely here: it has one drive with the same OS as the "twin" laptop and is set up as multi-OS boot. The steps above are executed, cloning the laptop setup and data to the desktop. It is entirely possible to keep cloning the home folder contents back and forth between the two to keep them synced. Even the software can be kept in sync using the method used above to re-install it.

It is possible to do this directly between them, both on at the same time, or through a server that they both get backed up to. The only caveat is overwriting and deletions, so take care when using the --delete option with rsync. There is a potential for a race condition of sorts if settings and files get changed and then clobbered by a sync operation. If I were to try this I would start with a one-direction sync: one device is the master and the other the slave, and deletions and settings changes get cloned from master to slave only.
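A sketch of that one-direction sync, pulled from the slave side over SSH; master-laptop and the paths are placeholders, and --dry-run shows what would change before anything is overwritten or deleted:

# preview the changes first; --delete makes the slave mirror the master exactly
rsync -avh --delete --dry-run erick@master-laptop:/home/erick/ /home/erick/
# rerun without --dry-run once the list of changes looks right
rsync -avh --delete erick@master-laptop:/home/erick/ /home/erick/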

Create a hidden WordPress page using bash on the command line

Recently I was searching around looking for a way to create a hidden page on a WordPress site. It is a hosted site, not on wordpress.com. It is on a Linux server to which I have shell access.

Initially I tried using a plugin that I found that hides pages and posts. Plugins, you have got to love or hate them. Love them when they work great right out of the box, hate them when they take a long time to troubleshoot.

Rather than waste too much time with the plugin, I went straight to the command line.

Screenshot: making the hidden page

It turns out that if you publish a page and then log into the hosting server, make a directory somewhere under your public_html, change directory into it and execute…

 wget -x -nH your-page-url-to-hide-here

 

Set to Draft or Private

…then go back in and set the page to draft or pending review, so it "disappears" from the menu structure. It will still work as a "cached" HTML page that has been downloaded to the folder that you have created; pictures and whatnot that you have loaded in it will be fully functional.
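Putting the steps together, a minimal sketch using the example page below (the hidden directory name and the URL are just the ones from this example; substitute your own):

# on the hosting server, somewhere under public_html
mkdir hidden && cd hidden
# pull the published page into this directory, keeping its path but not the hostname
wget -x -nH http://erick.heart-centered-living.org/i-am-a-hidden-page/
# then set the original page to Draft or Private in WordPress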

Example of a hidden page

http://erick.heart-centered-living.org/hidden/i-am-a-hidden-page/

Once the original page is put into draft/under review or private mode, it is gone…

http://erick.heart-centered-living.org/i-am-a-hidden-page/

Caveat

I have noticed that caching can get in the way. If your server caches pages, wget may not see the updated page when you make changes. A quick remedy is to set the page to draft/pending review or private and delete the hidden copy; I usually use rm -rf from the directory above it, then force wget to download the "404" page. Then you can publish the page, re-run wget, and it will be forced to get the fresh version. Keep note of the size of the file as a hint that it is getting the right one.
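The refresh dance, sketched with the same example page and directory names as above:

cd ~/public_html/hidden
# with the WordPress page set back to draft, this request comes back as the "404" page and busts the cache
rm -rf i-am-a-hidden-page
wget -x -nH http://erick.heart-centered-living.org/i-am-a-hidden-page/
# publish the page again in WordPress, then pull the fresh copy
rm -rf i-am-a-hidden-page
wget -x -nH http://erick.heart-centered-living.org/i-am-a-hidden-page/
ls -l i-am-a-hidden-page/index.html   # a sanity check on the size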

Upcoming: Do this with a CGI Script

In an upcoming post, I will cover how to make a CGI script that will allow you to create a hidden page easily without having to use SSH to log in to the server.

 

wget options used in this example, from the man page

-x
–force-directories
The opposite of -nd—create a hierarchy of directories, even if
one would not have been created otherwise.  E.g. wget -x
http://fly.srk.fer.hr/robots.txt will save the downloaded file to
fly.srk.fer.hr/robots.txt.

-nH
–no-host-directories
Disable generation of host-prefixed directories.  By default,
invoking Wget with -r http://fly.srk.fer.hr/ will create a
structure of directories beginning with fly.srk.fer.hr/.  This
option disables such behavior.

Wget Resources

https://www.lifewire.com/uses-of-command-wget-2201085

https://www.labnol.org/software/wget-command-examples/28750/

The Ultimate Wget Download Guide With 15 Awesome Examples

http://www.linuxjournal.com/content/downloading-entire-web-site-wget

Install Slackware on a VM

Easy to follow tutorial on installing Slackware Linux onto a Virtual Machine

I have been interested in trying out Slackware for some time now. Slackware Linux Essentials (aka the Slackbook) is an excellent review of Slackware and Linux in general. I went through it one winter a few years ago and was impressed; it is a great refresher course on Linux. After a while I tend to forget some of the command-line tricks that I do not use on a regular basis, and going over a manual like this is a good brush-up. Reading the book convinced me that I would have to try out Slackware someday.

Tutorial

I had no trouble following the tutorial and getting Slackware up and running on a VirtualBox VM. The current version, 14.2 (February 2018), is similar enough to the 13.0 install in the guide that the few differences are not a problem. The one difference that I noticed is that when the disk is partitioned, the option for bootable did not appear for me as it did in the tutorial. I just went ahead and wrote the disk and it was fine. The tool might have some logic built in to decide what to do and does not require you to tell it that it has to be set as bootable anymore.

http://archive.bnetweb.org/index.php?topic=7370.0

Slackware DVD ISO Torrent Page

http://www.slackware.com/getslack/torrents.php

Slackware Live DVD/USB Stick

Live DVD/USB stick installs are relatively new for Slackware. In case you want to just go ahead and try it from a live DVD or USB stick, it is now available as a download.
http://bear.alienbase.nl/mirrors/slackware/slackware-live/

Linux not booting, incorrect drive mapping, try hacking grub.cfg

Hacking /boot/grub/grub.cfg is not usually the first thing that comes to mind. But I ran into a problem when installing Kali Linux onto a plug-in USB hard drive, installing from a USB boot stick. All went well with the install. On reboot, guess what: the USB stick and its multiple partitions were no longer "parked" on /dev/sdc-sdg, and Kali had installed thinking it was going to be at /dev/sdh. Pulling the USB stick put the USB hard drive at /dev/sdc on the next boot! So naturally, when GRUB tried to boot the machine, it dropped to a command prompt after it timed out trying to find root on /dev/sdh, which had disappeared. When this happened I puzzled on it for a few minutes before digging into grub.cfg; it was not my first thought, but it was the only thing I could think of that could be doing this.

When the machine can't boot and drops to the basic command line, find out what drives it thinks it has by running…

ls /dev/sd[a-z]2

…this will show all the sda2, sdb2, and so on. Usually, but not always, the 2nd partition is the Linux file system; this is where the /boot directory lives with all of the GRUB files. Booting a USB stick with a live install, or the DVD, is helpful at this point, as you can use sudo fdisk -l to list what drives are available, and you will need an editor to modify the grub.cfg file.

Hacking grub.cfg

The hack is to reboot the machine either with the USB stick/Live DVD or off of the hard drive resident in the machine and then….

chmod 644 /boot/grub/grub.cfg

…as it is read only…

-r--r--r-- 1 root root 14791 Jan  7 16:29 /boot/grub/grub.cfg

…remember to chmod it back to read only after editing using….

chmod 444 /boot/grub/grub.cfg

Make a backup copy of it first before editing!

Editing grub.cfg

Once you have it in a read/write state, open it in an editor: emacs or something, even nano would work.

Yes it complains not to edit it, but when you can’t boot at all, because it can’t find where the root is, it’s worth a try!

 

#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

In a terminal window find out where the drive is really mapped using…

sudo fdisk -l

Example of second bootable disk at /dev/sdb…

Disk /dev/sdb: 149 GiB, 160000000000 bytes, 312500000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0007e21b

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sdb1  *         2048 279187455 279185408 133.1G 83 Linux
/dev/sdb2       279189502 312498175  33308674  15.9G  5 Extended
/dev/sdb5       279189504 312498175  33308672  15.9G 82 Linux swap / Solaris

The trick is to make the set root lines in grub.cfg line up with the drive that the OS you are trying to boot actually lives on. In grub.cfg search for lines like this…

set root='hd0,msdos1'

If it says hd0, it had better be the sda drive, for instance. In my case it showed hd7 (=sdh) and I needed to edit hd7 to hd2 (=sdc). This was done via a search and replace, sketched below.
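A quick way to do the search and replace, assuming, as in my case, that every hd7 reference should become hd2; check with grep first and keep the backup copy mentioned above:

grep -n "hd7" /boot/grub/grub.cfg      # see which lines would change
sudo sed -i 's/hd7/hd2/g' /boot/grub/grub.cfg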

Also, for good measure and to keep the commenting straight, search around for /dev/sd and make sure all the menuentry titles line up with the hd* values in set root. This is just so that you don't get confused later if set root is changed and the menus don't line up with reality.

menuentry 'Linux Mint 17.3 Xfce 64-bit, with Linux 3.19.0-32-generic (on /dev/sdb1)

After this correction, it booted fine. Now I will just have to pay attention to what happens with drive mapping in case I plug it into another machine. But, it is just an experiment for now, nothing critical on the drive so no worries!

Another note on Boot Repair

If grub gets totally screwed somehow, boot-repair will fix it.

For instance, I once had GRUB lose its brains on an EFI boot drive. The symptom was the BIOS error about not finding a bootable medium. Use boot-repair: boot a live CD or USB stick with the exact same version as the install you are trying to fix, then run the repair and let it automatically fix things.

sudo add-apt-repository ppa:yannubuntu/boot-repair 

sudo apt-get update 

sudo apt-get install -y boot-repair && boot-repair

 

from: https://help.ubuntu.com/community/Boot-Repair

 

Additional Resources

Fix for Linux boot failure and grub rescue prompt following a normal system update

http://giantdorks.org/alain/fix-for-linux-boot-failure-and-grub-rescue-prompt-following-a-normal-system-update/

How do I run update-grub from a LiveCD?

Introduction to fstab

https://help.ubuntu.com/community/Fstab

FTP on Raspberry Pi. An easy way to make shared folders

The idea with FTP is to have folders that are easily reachable between Linux and Windows, both locally and remotely. FTP is not secure, but it can be made secure; that info can be found on the web. For now I am covering the basics of FTP here.

For most things that I need to do, I don't need the transfer to be secure anyway; 90% of the time nothing critical is going back and forth remotely. If it were, I would use a secure method of sending files over SSH, such as SFTP or SSHFS.

FTP is an old protocol, but it just plain works and is compatible with Windows, Linux, and Mac. I have tried WebDAV in the past, but it is only partially compatible with various Windows operating systems. I have had a hard time getting it working correctly on versions of Windows beyond XP, resorting to installing patches to Windows and so on. Generally not easy to implement.

I was also looking at FTP as a native tool typical of server installs. I have experimented with cloud setups such as ownCloud and SparkleShare, but with FTP I was looking for something simple and quick to set up: no special software, no MySQL database running on the Raspberry Pi, no special software on client PCs, that sort of thing.

vsFTP

sudo apt-get install vsftpd

Edit the configuration file

Back it up first then do an edit.

sudo cp /etc/vsftpd.conf /etc/vsftpd.orig
sudo nano /etc/vsftpd.conf

uncomment local_enable = YES

uncomment write_enable = YES

Find this and check that it is set this way…

local_umask=022

Enabling PASV

I have read online that enabling the PASV capability for FTP is a good idea. Frequently, when I have FTP'd to various ISPs' sites, I have seen them operate in PASV mode. So it stands to reason that if the pros have it set up that way, it may have its advantages.

Add the following lines to the /etc/vsftpd.conf file.

pasv_enable=YES
pasv_min_port=40000
pasv_max_port=40100

There is nothing magic about the numbers in the port range, other than that they should be unused by anything else your setup might require; generally I have seen high numbers used. To work from outside of your local network, you must forward that range of ports through your router configuration.
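One extra line can matter behind NAT, as I understand it: vsftpd can advertise your public address to PASV clients with pasv_address (the address below is a placeholder):

# only needed if clients outside the LAN get passive-mode errors; use your public IP or hostname
pasv_address=203.0.113.10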

Changes to vsFTP

With newer versions of vsftpd there is a change that has occurred since I wrote my previous post about it (http://oils-of-life.com/blog/linux/server/additional-utilities-for-a-linux-server/).

The change has to do with the fact that the root directory of the user has to be non-writable and I have read online that it is best to make it owned by root as well. This is covered below, after the section on adding a user. You need to have a user first before modifying their permissions!

FTP User

To create an FTP user, create it in a way that it does not have a login shell, so that someone who logs in to the FTP account can't execute shell commands. The line /sbin/nologin may not be in the /etc/shells file, and in that case it needs to be added there; see the sketch after the useradd command below. The user basically has to be jailed in their directory and have no login shell.

sudo useradd -m -s /sbin/nologin -d /home/user user
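If /sbin/nologin is missing from /etc/shells, the FTP login may be refused by the PAM shells check, so add it; a small sketch (on some systems the path is /usr/sbin/nologin instead):

# append nologin to the list of valid shells only if it is not already there
grep -qx /sbin/nologin /etc/shells || echo /sbin/nologin | sudo tee -a /etc/shells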

I added Documents and public_html directories to /home/user as well, then made the user's root folder, /home/user, owned by root and non-writable.

cd /home/user
sudo chown user:user Documents
sudo chown user:user public_html

sudo chown root:root /home/user
# make the root of the user's home non-writable
sudo chmod a-w /home/user



FTPing on the PC

Now that ftp is set up on the server you will want to be able to connect to it!

Options for connecting…

Command Line, Windows and Linux

ftp yoursite.com

That gets you into FTP via the command line. The command prompt will now start with ftp>; that is how you know that you are within the ftp command shell.

It is archaic, but worth knowing when you have to stick a file up or pull it down right at the command line. The commands the ftp prompt accepts are basic, but good enough to get most work done. Type help at the prompt to get a list of commands.
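A sample of a typical session using the standard ftp client commands (the file names are placeholders):

ftp yoursite.com            # log in when prompted
ftp> cd public_html         # move to the target directory on the server
ftp> binary                 # binary mode avoids mangling non-text files
ftp> put report.pdf         # upload a local file
ftp> get notes.txt          # download a remote file
ftp> bye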

Via Folders

Linux

Just enter the location of the FTP server (for example, ftp://user@yoursite.com) into the location bar of the file manager and you will be prompted for a password and taken there.

Windows
Windows7/Vista:
  1. Open Computer by clicking the “Start” button, and then clicking Computer.
  2. Right-click anywhere in the folder, and then click Add a Network Location.
  3. In the wizard, select Choose a custom network location, and then click Next.
  4. To use a name and password, clear the Log on anonymously check box.

From: https://www.google.com/search?q=connect+to+ftp+windows+7&ie=utf-8&oe=utf-8