
Simple Linux Performance Benchmarking

Recently I did some very simple benchmarking of the CPU and disk drives on a few of my Linux PCs and a Raspberry Pi. This was a quick test to see how all of the machines compare.

CPU: I went for some very simple tests of CPU performance under load, calculating Pi. The code is not the best at calculating Pi; it is just there to exercise the processor(s) and provide a standard piece of executable code that can be tried on multiple machines. While the pi code runs it uses the processor at close to 100%, as seen in top.

13502 erick     20   0  2784 1964 1112 R  98.8  0.4   0:36.13 arm_pi

There are programs available that are better at calculating a lot of digits of Pi, fast. An example is Hyper Pi for Windows, which will certainly calculate Pi to many more digits in a given period of time than the pi program shown in this post can.

DISKS: To test disk speeds I used dd to write a stream of zeros from /dev/zero to a file on disk, then read the file back into the bit bucket (/dev/null), noting the speeds displayed.


CPU Benchmarking

CPU benchmarking was done by running an executable that calculates Pi to an arbitrary number of digits. Below is the link to the C file, which is easily compiled for the target machine. I downloaded it from this page… http://www.overclockers.com/forums/archive/index.php/t-402437.html

pi.c

For a Linux machine or the Raspberry Pi, downloading the file and executing…

gcc -o pi pi.c

or

gcc -o pi pi.c -lm

As the original post states, the -l flag links a library and m names the math library. It was not necessary on my machines, but if the first command fails to link, try the second.

…will create the executable pi.

To run it from the directory where it was created, just call pi with an argument of the number of digits to calculate. For example, Pi to 1000 digits is calculated via…

./pi 1000

The ./ in front of the program name is needed when you execute the program from the directory you are in; otherwise Linux will go off searching through the path for programs named pi. If you have created a bin directory under your home folder, you can put pi, or any executable, there and run it from other directories on the machine by calling the program without the ./
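
For example, a minimal sketch of that ~/bin setup (assuming your distribution adds ~/bin to the path when it exists, which Debian and Ubuntu do via the default ~/.profile at the next login):

mkdir -p ~/bin   # create a personal bin directory if it does not already exist
cp pi ~/bin/     # copy the compiled executable into it
pi 1000          # once ~/bin is in the path, pi runs from any directory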

Run of Pi program for 1000 digits

Approximation of PI to 1000 digits

3.1415926535  8979323846  2643383279  5028841971  6939937510  5820974944
5923078164  0628620899  8628034825  3421170679  8214808651  3282306647
0938446095  5058223172  5359408128  4811174502  8410270193  8521105559
6446229489  5493038196  4428810975  6659334461  2847564823  3786783165
2712019091  4564856692  3460348610  4543266482  1339360726  0249141273
7245870066  0631558817  4881520920  9628292540  9171536436  7892590360
0113305305  4882046652  1384146951  9415116094  3305727036  5759591953
0921861173  8193261179  3105118548  0744623799  6274956735  1885752724
8912279381  8301194912  9833673362  4406566430  8602139494  6395224737
1907021798  6094370277  0539217176  2931767523  8467481846  7669405132
0005681271  4526356082  7785771342  7577896091  7363717872  1468440901
2249534301  4654958537  1050792279  6892589235  4201995611  2129021960
8640344181  5981362977  4771309960  5187072113  4999999837  2978049951
0597317328  1609631859  5024459455  3469083026  4252230825  3344685035
2619311881  7101000313  7838752886  5875332083  8142061717  7669147303
5982534904  2875546873  1159562863  8823537875  9375195778  1857780532
1712268066  1300192787  6611195909  2164201989

Calculations Completed!
Time: 1 seconds


Using the pi program this way makes it possible to compare the CPU performance of the various machines that you own.

Minimal Boot

More advanced benchmarking of a machine would involve booting from a disk that loads only the command line and a minimal amount of background processes. I might try this at some point to see what difference it makes. I would boot from an Ultimate Boot CD (UBCD), go into the mode that loads the minimal Linux boot, and get pi.c loaded via a USB stick, maybe. It would be an experiment!

For more advanced testing that runs outside of the OS, it is possible to run the tools included on the UBCD on any machine that will boot from CD. The CD contains a suite of benchmarking, testing and stress-testing tools, in addition to other tools for working with hard drives and unlocking machines.

Pi Benchmarking Script

For benchmarking I close all running applications and open a single terminal to execute the pi program.

Code

Below is the code for a script file that runs the pi program in a loop, calculating results for 1000, 2000, 4000, 8000, 16000, 32000, 64000, 128000 and 256000 digits. I wanted to see if there would be any noticeable variation among the machines in the time taken to calculate the various digit counts. I didn't notice much deviation when I lined up plots of the various machines; they all held to nearly a constant speed ratio across the levels of pi calculation.

The script creates two temporary files: cols.txt, which lists the number of digits for each run, and results.txt, which captures the amount of time it took to calculate the corresponding number of digits.

#!/bin/bash
# Clear out results from any previous run (-f suppresses errors if the files are absent)
rm -f cols.txt results.txt

DIR=/home/erick/pi    # location of the pi executable
x=1000                # starting digit count

# Double the digit count each pass, up to 256000
while [ $x -le 300000 ]; do

    echo $x
    echo $x >> cols.txt

    # Keep only the last line of pi's output, the "Time: N seconds" line
    $DIR/pi $x > temp.txt
    tail -n 1 temp.txt >> results.txt

    x=$(( $x * 2 ))

done
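
To line the two temporary files up side by side after a run, paste prints each digit count next to its corresponding time, tab separated:

paste cols.txt results.txt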

Results of the Pi Benchmarking Script

Raspberry Pi Model B, single core CPU at 700MHz
Time: 1 seconds
Time: 1 seconds
Time: 5 seconds
Time: 21 seconds
Time: 86 seconds
Time: 352 seconds
Time: 1438 seconds
Time: 5835 seconds
Time: 23507 seconds
Dell Precision WorkStation 530-mt, Xeon dual core processor at 2.4GHz

The machine has 1GB RAM, which I don't think comes into play when running this program. The pi program only runs on one of the cores in this example.

Time: 0 seconds
Time: 0 seconds
Time: 1 seconds
Time: 3 seconds
Time: 14 seconds
Time: 58 seconds
Time: 232 seconds
Time: 932 seconds
Time: 3741 seconds
Dell Dimension 2400: Pentium 4 single core at 2.4GHz

Nearly identical performance to the Precision 530-mt. This machine has 1.5GB RAM.

Time: 0 seconds
Time: 0 seconds
Time: 1 seconds
Time: 4 seconds
Time: 13 seconds
Time: 56 seconds
Time: 223 seconds
Time: 898 seconds
Time: 3596 seconds

Surprise

I benchmarked two older laptops: an old Dell Inspiron (2003) running Ubuntu 10.04 and a not-so-old Toshiba Satellite A135 (2009) running Mint 17 Xfce, both with 1.6GHz processors.

A Pentium M on the Dell and a Celeron M on the Toshiba, with 333 and 533 MHz busses respectively. I thought they would be fairly similar in performance, but was pleasantly surprised to find the Toshiba a decent amount faster than all of the other machines! This was a machine that had only 512MB RAM and ran Vista poorly; I almost scrapped it, until I bought another stick of RAM and loaded Linux Mint 17 on it. It is like a miracle how much better it runs. It is a good test machine for trying out Mint, which I might consider for a future desktop machine.

Dell Inspiron

Time: 0 seconds
Time: 0 seconds
Time: 1 seconds
Time: 3 seconds
Time: 13 seconds
Time: 54 seconds
Time: 276 seconds
Time: 1134 seconds
Time: 4526 seconds

Toshiba Satellite

Time: 0 seconds
Time: 0 seconds
Time: 0 seconds
Time: 2 seconds
Time: 7 seconds
Time: 30 seconds
Time: 164 seconds
Time: 692 seconds
Time: 2747 seconds

Depending on the digit count, that is a ratio of roughly 1.6 to 1.8 times faster, not bad at all.

Disk Benchmarking

Disk benchmarking was done by writing a 1GB file of zeros from /dev/zero to disk, flushing the caches, and then reading the file back, discarding it into the /dev/null bit bucket.

Write Script

The following code, copied into a script file ending in .sh and made executable using chmod +x filename, will write 1GB of zeros to the file named by of=.

#!/bin/bash
dd if=/dev/zero of=/home/erick/testfile-1024x1M bs=1M count=1024

Read Script

The following code is copied into a script file ending in .sh and made executable using chmod +x filename. This script reads back the file created by the write script in 8k blocks, dumping it into the null device.

#!/bin/bash
dd if=/home/erick/testfile-1024x1M of=/dev/null bs=8k

Flush Caches

If the RAM on the machine is large enough that the write operation leaves the written data in the cache, you need to flush the cache in order to have the machine actually read the file from disk. Linux is pretty clever about using RAM that is not already in use by programs and the OS as a disk read/write cache. I remember in the old days of DOS there were a number of utility programs that could be loaded to use some RAM as a cache; effectively Linux does the same thing natively.

Take the following code, copy it into a file ending in .sh, such as flush-caches.sh, make it executable via the chmod +x filename command, and run it between writing and reading the disk.

#!/bin/bash
sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"
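
Putting the three steps together, a single benchmark run amounts to the sketch below (the file name and path are the ones from my scripts above; adjust to suit):

#!/bin/bash
# Write test: 1GB of zeros out to disk
dd if=/dev/zero of=/home/erick/testfile-1024x1M bs=1M count=1024

# Flush caches so the read test actually hits the disk and not RAM
sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"

# Read test: read the file back in 8k blocks, discarding it into the bit bucket
dd if=/home/erick/testfile-1024x1M of=/dev/null bs=8k

# Remove the 1GB test file when done
rm /home/erick/testfile-1024x1M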

Example Results

These disk write and read utilities can be used to test hard drives, USB sticks, SD cards, RAM drives (effectively testing RAM speed) and so forth. They can even be used to gauge network speed when a drive is mounted using NFS (or accessed using rsync, scp or sftp), as the network, rather than the drive's R/W speed, will usually be the constraint.

Dell Precision WorkStation 530-mt Primary Hard Drive

Write

erick@Precision-WorkStation-530-MT:~/bin$ ./harddrive-write-test.sh
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 24.0374 s, 44.7 MB/s

Read

erick@Precision-WorkStation-530-MT:~/bin$ ./harddrive-read-test-8k.sh
131072+0 records in
131072+0 records out
1073741824 bytes (1.1 GB) copied, 25.3427 s, 42.4 MB/s

Toshiba Satellite A135

Write

erick@erick-Satellite-A135 ~/bin $ ./harddrive-write-test.sh
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 36.6714 s, 29.3 MB/s

Read

erick@erick-Satellite-A135 ~/bin $ ./harddrive-read-test-8k.sh
131072+0 records in
131072+0 records out
1073741824 bytes (1.1 GB) copied, 35.093 s, 30.6 MB/s


Old, circa 1998 machine, with a 4GB primary hard drive (/dev/sda) and an 8GB Compact Flash card (/dev/sdb) as a secondary drive

The interesting thing here, besides just how slow the speeds are, is that the CF card is actually faster than the hard drive. The drive is also pretty loud on this machine. When cron runs at the top of the hour there is a small burst of sound, enough to serve as a reminder of the time if you are in the same room as the machine. When you SSH into it, you can hear it grind away for about 2 seconds as it checks the password! I used this machine for remote monitoring before I had a Raspberry Pi; it works well enough for that with its limited RAM and HD space. Now it is just a backup in case the Pi is down.

Disk /dev/sda: 4303 MB, 4303272960 bytes

Write: 1073741824 bytes (1.1 GB) copied, 144.733 s, 7.4 MB/s

Read: 1073741824 bytes (1.1 GB) copied, 138.498 s, 7.8 MB/s

Disk /dev/sdb: 8195 MB, 8195604480 bytes

Write: 1073741824 bytes (1.1 GB) copied, 91.4438 s, 11.7 MB/s

Read: 1073741824 bytes (1.1 GB) copied, 73.3132 s, 14.6 MB/s

Raspberry Pi SD Card

Using a shorter test via the following script to write…

#!/bin/bash
dd if=/dev/zero of=/home/erick/testfile-10000x8k bs=8k count=10000

and to read…

#!/bin/bash
dd if=/home/erick/testfile-10000x8k of=/dev/null bs=8k
Write To SD Card
erick@raspberrypi ~/bin $ ./sd-write-test.sh
10000+0 records in
10000+0 records out
81920000 bytes (82 MB) copied, 8.45663 s, 9.7 MB/s
Read Back from SD Card
erick@raspberrypi ~/bin $ ./sd-read-test-8k.sh
10000+0 records in
10000+0 records out
81920000 bytes (82 MB) copied, 4.38749 s, 18.7 MB/s
RAM Drive Performance

/tmp on my Raspberry Pi is set up as a RAM drive (ramfs), so this gives some indication of how fast the RAM can be written and read.
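
For reference, a sketch of one way such a mount can be set up with a line in /etc/fstab (the mode is a choice; note that a ramfs grows without limit, while the more common tmpfs accepts a size= option):

ramfs /tmp ramfs defaults,noatime,mode=1777 0 0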

Using the following write script…

#!/bin/bash
dd if=/dev/zero of=/tmp/testfile-1000x8k bs=8k count=1000

and read script…

#!/bin/bash
dd if=/tmp/testfile-1000x8k of=/dev/null bs=8k
Write to RAMFS
erick@raspberrypi ~/bin $ ./ram-write-test.sh
1000+0 records in
1000+0 records out
8192000 bytes (8.2 MB) copied, 0.0458338 s, 179 MB/s
Read from RAMFS
erick@raspberrypi ~/bin $ ./ram-read-test-8k.sh
1000+0 records in
1000+0 records out
8192000 bytes (8.2 MB) copied, 0.0339184 s, 242 MB/s


Reduce writes to the Raspberry Pi SD card

After 5 months of solid up-time, my Raspberry Pi server has been running great. It has been taking a picture every hour and creating a timelapse video from them every day. It is also being used as a place to periodically drop files from other places on the network, a little bit of file storage. Eventually I will add more storage space to it to use it even more for network storage.

Recently, I started to think about the potential wear of the SD card as I came across several articles online dealing with the topic. I decided to make a few changes to the Raspberry Pi configuration to reduce the amount of writing to the SD card.

Write Saving #1: Using a tmpfs

I edited /etc/default/tmpfs. The comments in it state that /run, /run/lock and /run/shm are already mounted as tmpfs on the Pi by default, which I have observed; according to the buzz online, this change was made for the Pi a while ago. I additionally set RAMTMP=Yes to add /tmp to the directories put on the tmpfs. This sets up access to /tmp with rwx-rwx-rwx permissions. There was a suggestion I saw online to limit the sizes of the various directories, so I added that as well.

# These were recommended by http://raspberrypi.stackexchange.com/questions/169/how-can-i-extend-the-life-of-my-sd-card
# 07262015, mods for using less of the SD card, RAM optimization.
TMPFS_SIZE=10%VM
RUN_SIZE=10M
LOCK_SIZE=5M
SHM_SIZE=10M
TMP_SIZE=25M

The OS and some programs use /tmp, but so do I. When the Raspberry Pi boots, I create a /tmp/web folder under it (see the sketch below). Into this folder go files such as the hourly photo and the daily video that scripts create for the attached webcam. I have reduced three hourly writes to just one photo. I keep only one photo on the SD card, as I don't want to risk losing a whole day's worth by relying totally on the tmpfs; if I were using a UPS, I would have no problem keeping all of them on the tmpfs and occasionally backing up to the SD card or another device. The big saver is the daily timelapse.avi for the web, created every day from all of the hourly captured photos. It is many megs in size, gets written daily, and it doesn't matter if I lose it, since it can be recreated from the photos at will. So it is the perfect kind of file to put on a RAM file system.
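
The boot-time creation of that folder is a one-liner. Here is a sketch using the user crontab (edited with crontab -e); /tmp/web is just my folder name:

@reboot mkdir -p /tmp/web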

I also store the hourly and daily logs that I create with the cron-driven logcreate script that I run. The logcreate script creates an hourly log that is concatenated into a daily log on the tmpfs; every day the daily log is concatenated into a full log on the SD card, which is rotated, so I have a permanent record. Need to put the link for this here!

What is a tmpfs?

It is a RAM disk, a.k.a. RAM drive, that allows RAM to be used as if it were a hard drive. Obviously when the power goes out it goes away, so we don't want anything important to live there. But it is perfectly fine for things like the hourly photos my webcam takes, the video it makes daily, and logs. It is not a big deal if the power goes out and I lose this information, as it will be recreated shortly anyway.

Caution

The only issue I see with having logs on a tmpfs would be a situation where the Pi got into a state of weirdness and started rebooting itself, leaving you with no logs to track down the problem. In that case it would just be a matter of changing the /etc/fstab file to put the logs back onto the SD card for a while. But for a Raspberry Pi like mine that is running stable, and that I am not doing many experiments with right now, having the logs in volatile memory is not something I worry about. Plus, when you control the reboots, it is easy to make a script that backs up the logs to the SD card or another computer before a manual reboot; a sketch follows.
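
A minimal sketch of such a backup script, assuming the logs live on the tmpfs under /tmp/logs and the permanent copies go under the home directory (both paths are hypothetical):

#!/bin/bash
# Copy logs off the tmpfs into a dated folder before a manual reboot
BACKUP_DIR=/home/erick/log-backups/$(date +%Y%m%d-%H%M)
mkdir -p "$BACKUP_DIR"
cp /tmp/logs/*.log "$BACKUP_DIR/"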

Write Saving #2: Turning off swap

If the Raspberry Pi runs out of RAM, which is not likely if it is a server set up for light duty, it will start to use swap, which lives on the SD card, causing writes to the swap file. Mine rarely touches swap. I would rather tune the thing for better memory use than have it use swap.

It is possible to turn off swap usage using the command…

sudo dphys-swapfile swapoff

This is not persistent and needs to be done on every boot. It can be put into the root crontab by editing it with sudo crontab -e and adding the line below, or by putting it in a script along with other items that are to be run at startup.

@reboot dphys-swapfile swapoff

Online, people said there is another way to turn it off, by reducing the swap file size to zero in a config file for swap (I can't remember the name), but it is claimed that on reboot the system just overrides that and recreates a default 100M swap file.

Write Saving #3: Moving /var/log to a tmpfs

One of the biggest offenders as far as periodic writes go is the logs that live under /var/log and its sub-directories. You can create an entry in /etc/fstab that sets up a tmpfs for /var/log. The only caution here concerns daemons, like Apache, that require a directory to exist under /var/log or else they will not start. Apt also has a directory under /var/log, but it recreates itself the first time apt runs, so that is no problem; the apt directory holds logs that keep track of what apt installs or uninstalls, good info to have. News also seems to work fine, creating a directory for itself. So for me only Apache is a problem.

  1. Put an entry in /etc/fstab…
     tmpfs /var/log tmpfs defaults,noatime,mode=0755 0 0
  2. Found out that the news and apt folders create themselves when these things run.
  3. Apache is the one thing that does not like a missing folder, so I made a kludge for now using ~/bin/setup-tmp.sh, in which I create /var/log/apache2 and chmod it 750, then restart Apache using apachehup.sh (sketched below). Apache was failing to load when I pointed the log dir to /tmp via the export APACHE_LOG_DIR directive in /etc/apache2/envvars.
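
For reference, a sketch of what the setup-tmp.sh kludge boils down to (run as root at boot; the service restart line stands in for my apachehup.sh):

#!/bin/bash
# Recreate the Apache log directory on the fresh tmpfs at boot
mkdir -p /var/log/apache2
chmod 750 /var/log/apache2

# Restart Apache now that its log directory exists again
service apache2 restart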

Write Saving #4: noatime

As you can see above, one of the options used in the /etc/fstab file is noatime. By default the Raspberry Pi uses this option for the mount of the SD card; if you add mount points of your own to the card, make sure noatime is used. Without it, Linux makes a small write each time a file is read, to keep track of when it was last accessed, and that obviously causes writes. It can also be used on the tmpfs mounts, as I am doing above; it saves a bit of time, as the system does not have to do a write when a file is only being read.

Another good use of noatime is for drives connected across the network. On NFS mounts, for example, noatime is a really good choice. The network is generally slower than locally attached devices, and having to send a write across it every time a file is read slows things down when moving many files.
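
For example, a sketch of an NFS entry in /etc/fstab with noatime set (the server name and paths are made up):

server:/export/share  /mnt/share  nfs  defaults,noatime  0  0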

I have been running this setup with the RAM savings for a few months now with no problems. I hardly ever see the ACT light blinking on the Pi anymore.


The LEDs have the following meanings:

  • ACT – D5 (Green) – SD Card Access
  • PWR – D6 (Red) – 3.3V Power is present
  • FDX – D7 (Green) – Full Duplex (LAN) connected
  • LNK – D8 (Green) – Link/Activity (LAN)
  • 100 – D9 (Yellow) – 100Mbit (LAN) connected

From http://www.raspberrypi-spy.co.uk/2013/02/raspberry-pi-status-leds-explained/

Server Hardware Swaps

RAM Upgrade

When I initially built the server using a Dell Dimension 4200, I added 1GB of RAM on top of the 512MB that was factory installed. The board can support up to 2GB, but 1.5GB seems sufficient for what I am doing. One of the first steps was to run MEMTEST by booting off of a Linux CD that I had lying around. The test ran overnight (15+ hours) with no problems; it is always a good idea to run MEMTEST after any memory changes.

Memory upgrade, added 1GB stick to the existing 512MB
Running MEMTEST to check for flaws in the RAM, before loading Ubuntu Server


Second Hard Drive
HD in floppy bay. Tight, a bit tough getting the screws into the holder.
Pulled lower CD burner, replaced with DVD drive
DVD drive goes in bottom slot, ready to load Ubuntu 12.04

I removed the floppy drive and added a second 120GB hard drive, and I replaced one of the CD drives with a DVD reader, since Ubuntu Server 12.04 gets burned onto a DVD and I needed to boot from it. The other option would have been to boot from a USB drive. I took the IDE connector from one of the CD drives and used it for the secondary hard drive mounted in the floppy drive bay.

Disconnected CD drive, hooked IDE HD in same bus as DVD drive.


Swap 120GB hard drive for 500GB

I was soon well on my way to filling up the primary 200GB drive, so it was time to put in a bigger secondary hard drive in preparation for the future.

I installed the 500GB drive in the server and formatted it for use with Linux. Linux can use a drive formatted as NTFS, but I formatted it as EXT4 so that disk checks and fixes can be more precise. EXT4 can handle extremely large drives, up to a 1EB partition size, an amount of data I cannot even imagine! EXT3 is good up to 32TB partitions, which is still very big. The new drive will give me much more space, as the main 200GB drive is almost full. I will move some files onto it, mostly the backups from the other computers at home; that is the primary goal for the second drive. Linux also has a software manager for drives, called Logical Volume Management (LVM), and the primary drive is managed using this feature. In theory I can create a "snapshot" of the drive and copy the image onto an external backup or the second drive, but I have not considered doing this yet. The primary drive contains the OS and whatever software I have loaded, which does not take up much space. The files that I have loaded onto OwnCloud take up a good deal of space, but 200GB will be plenty for the primary drive for a while.
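
The formatting and mounting steps amount to the sketch below (assuming the new drive shows up as /dev/sdb with one partition; always double-check the device name with fdisk -l first, since mkfs destroys whatever is on the target):

# Identify the new disk before formatting anything
sudo fdisk -l

# Create an EXT4 filesystem on the new drive's partition
sudo mkfs.ext4 /dev/sdb1

# Mount it under a new mount point (/mnt/backups is a hypothetical name)
sudo mkdir -p /mnt/backups
sudo mount /dev/sdb1 /mnt/backups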

To LVM or not?

At first I was going to connect the first and second drives into one large "logical" drive using LVM. But there is a risk in having the system treat the 500GB + 200GB as one 700GB logical drive: if one disk fails, it can ruin the entire logical drive composed of both. Losing everything when one disk out of two fails is a bad trade, so I will probably leave the drives mounted separately rather than as one big logical drive.

500GB drive mounted in the floppy drive holder
120GB drive out and 500GB drive in
Resources

For more information on Linux file systems…

https://help.ubuntu.com/community/LinuxFilesystemsExplained

About files and the file system…

http://www.tldp.org/LDP/intro-linux/html/chap_03.html