Archive for the ‘Ubuntu Jaunty’ Category

Sheevaplug

2, January 2011

Ok so I’ve had my Sheevaplug since January 2009 and have now bricked it at least three times trying to be too clever by half! As it’s getting to be a regular thing and it always takes me hours to work out what to do, here are my instructions.

I have used the information from the page below and added to/changed it to reflect my process. Please read both to make sure you understand what you are doing, as I take no responsibility for errors, etc. etc.

I found the “instructions”, script & packages on this site

SheevaPlug Installer Page;

http://plugcomputer.org/plugwiki/index.php/SheevaPlug_Installer

but to be honest they are difficult to follow for a bit of a noob, so I’ve adapted them.

The installer will reflash a bricked plug & can be used to install another distro [I’ve used Debian Squeeze this time] but if you just want it back to “factory settings” use the packages as they are.

My plug is the BFLS one, supplied with Ubuntu Jaunty on the internal flash, and the PC I will use to reflash it also runs Ubuntu Jaunty, so although the script includes Windows support I won’t cover it here; see the above page for details.

Note:
The runme.php script would not run on Ubuntu 10.10, as the version of Python shipped with 10.10 has deprecated some terms. I got round this at the time by installing the old 32-bit Karmic Ubuntu and using that, but since then I have found possible answers in this post

http://plugapps.com/forum/viewtopic.php?f=20&t=428#p3178

but I have not tested the solution yet, although I have saved copies of the files it talks about for future use; contact me if you need them.

If you can still get onto the plug then back up your stuff, as this script will delete the lot. Be warned!

First Download the tarball from

http://www.plugcomputer.org/index.php/us/resources/downloads?func=select&id=5

This includes all the files you need to reflash your Sheevaplug with the default install of Ubuntu Jaunty. Check out this post http://plugcomputer.org/plugforum/index.php?topic=878.0 if you want to install a Debian system instead; you can either download the pre-built Lenny or Squeeze rootfs.tar.gz files or use the script that mgillespie has created.

You now need to add the following packages to your host PC: cu, php5-cli, and libftdi1. Install them with the following commands in a [Terminal];

$ sudo apt-get install cu

$ sudo apt-get install php5-cli

and

$ sudo apt-get install libftdi1
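If you prefer, all three can be installed in one go;

$ sudo apt-get install cu php5-cli libftdi1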

adding libftdi1 fixes the error message “openocd/openocd: error while loading shared libraries: libftdi.so.1: cannot open shared object file: No such file or directory” later on.

You now connect the Sheevaplug to your PC with the USB lead [supplied] and issue this command in a [Terminal];

$ cu -s 115200 -l /dev/ttyUSB1

Note: you may need to try cu -s 115200 -l /dev/ttyUSB0 instead; my plug seemed to use either!
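If you are not sure which device the plug has appeared as, dmesg will usually tell you; a quick check I find useful;

$ dmesg | grep -i ttyusb

$ ls /dev/ttyUSB*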

If you get any sort of error message along the lines of

cu: open (/dev/ttyUSB1): No such file or directory
cu: /dev/ttyUSB1: Line in use

you need to issue the following commands in a [Terminal] to remove and then re-add the driver support, then retry.

$ sudo rmmod ftdi_sio

followed by

$ sudo modprobe ftdi_sio vendor=0x9e88 product=0x9e8f

If you can now get on then you can proceed to the task at hand; if not, it’s time to Google! However I found that I sometimes could not get onto the plug, and then realised that the plug end of the USB PC-to-plug cable was working itself out of the socket in the plug.

Installation

1. Prepare an empty USB stick that is FAT16/32 formatted. [Note: the two USB sticks I first tried were not detected by the plug, even though I had used one of them before to unbrick my plug, be warned]. Reformatting and unplugging safely etc. did not help.

2. Extract the tarball you downloaded earlier into a folder on your PC (for example: ~/plug)

3. Edit the ~/plug/uboot/uboot-env/uboot-mmc-custom.txt or uboot-nand-custom.txt file to set the correct MAC address according to the MAC address on the back of the plug (the default is set to ethaddr 00:50:43:01:c1:e6). If your system boots from the internal flash then it’s the nand file you change; if it boots from a flash card in the side slot it’s the mmc file you change (see the sketch just below for a command-line way of doing this).
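If you would rather make the change from a [Terminal], something like this should do it; a sketch only, assuming the file holds a plain “ethaddr 00:50:43:01:c1:e6” line as the default above suggests, and substituting the MAC address printed on your own plug (use the mmc file instead if that is how you boot);

$ grep ethaddr ~/plug/uboot/uboot-env/uboot-nand-custom.txt

$ sed -i 's/^ethaddr .*/ethaddr 00:50:43:xx:xx:xx/' ~/plug/uboot/uboot-env/uboot-nand-custom.txt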

4. Copy all the files from ~/plug/installer to the USB stick.
NOTE: the files should be written to the root directory of the USB stick. For example:

$ sudo cp -a ~/plug/installer/* /media/usb-pen/

5. You should now have a copy of the following;

  1. Init ramdisk (initrd)
  2. Kernel modules (modules.tar.gz)
  3. README.txt
  4. Root file-system (rootfs.tar.gz) Note: if you are installing Debian, replace this file with the Debian version you have either downloaded or created with mgillespie’s script.
  5. ubuntu-sheevaplug.sh
  6. Kernel (uImage)

6. Copy the uboot image (named uboot.bin) to the ~/plug/uboot/ directory if it’s not already there; mine was.

7. Safely remove the USB stick from the host PC, power off the plug and plug the USB stick into the plug’s USB host interface (not via a USB hub; I disconnected all other peripherals as well!)

8. Connect the plug to your PC with its USB cable.

9. On your PC in a [Terminal] again change to the working directory

$ cd ~/plug

and run the runme.php file with the command;

sudo php runme.php followed by either nand or mmc: nand if your system is on the internal flash, mmc if it is on an external card. In my case;

$ sudo php runme.php nand

If you get an error message along these lines of;

Error: unable to open ftdi device: device not found
Runtime error, file “command.c”, line 469:
****    openocd FAILED
****    Is the mini USB cable connected?
****    Try powering down, then replugging the Sheevaplug

then try issuing these commands in a [Terminal] to delete and reload the driver, then retry;

$ sudo rmmod ftdi_sio
$ sudo modprobe ftdi_sio vendor=0x9e88 product=0x9e8f

[and make sure the USB cable at the plug end is firmly in]

If all goes well the process should start, and a couple of minutes later end with a “beep” to indicate that the uboot install process has finished, along with the following message. [There was no beep on my plug.]

****   U-boot should be up and running now. Open your console …

Now open another [Terminal] and log in with this command;

$ cu -s 115200 -l /dev/ttyUSB1

if you get this message;

## Booting image at 00800000 …
Bad Magic Number

as I did when using the first USB stick, then the process has probably not found the USB stick and has not loaded the o/s. You could try running the command;

run recover1

but in my case it just did not like the USB stick and I had to use another.

If all goes well you should now see the o/s being installed with the final lines being..

* Starting kernel log daemon…
Ubuntu 9.04 ubuntu ttyS0

ubuntu login:                                                          [ OK ]
* Starting OpenBSD Secure Shell server sshd         [ OK ]
* Starting periodic command scheduler crond        [ OK ]
* Restarting OpenBSD Secure Shell server sshd     [ OK ]

with just an unhelpful flashing cursor in the centre of the last line!

After pressing [Enter] you get:

Ubuntu 9.04 ubuntu ttyS0

ubuntu login:

Just log in with the default user root and password nosoup4u

done!!

Ctrl+Alt+Backspace

21, February 2010
Ctrl+Alt+Backspace (the shortcut used to restart the X server) has to be enabled in a different way compared to previous releases of Ubuntu.

This is due to the fact that “DontZap” is no longer an option in the X server and has become an option in XKB instead.

Using GNOME

Go to [System], [Preferences], [Keyboard], select the [Layouts] tab and click on the [Layout Options…] button. Select the [Key sequence to kill the X server] option and enable “Control + Alt + Backspace” by ticking the box, then select [Close] and you’re done.
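If you prefer a [Terminal] one-liner, the same XKB option can also be set for the current session with setxkbmap (as far as I know this only lasts until you log out, so use the GNOME setting above if you want it permanent);

setxkbmap -option terminate:ctrl_alt_bksp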

Jalbum

21, February 2010

 

Update: Jalbum is now provided as a deb, see my latest post Jalbum, create Photo web albums

This is the software I use to create my photo album on the web. I used it when I ran XP and was glad to see it comes as a Linux option. It automatically creates the album in a variety of skins and there is then an option to FTP the result direct from the program to your site. Visit the jalbum.net site and, if you decide to use this program, download the file [JAlbuminstall.bin] to your [Home] folder.

JAlbum needs a Java 1.5 (or later) virtual machine to run, so first open the [Synaptic Package Manager], search for sun-java6-jre (or later) in the listed packages and install it. Assuming you downloaded the JAlbum install file to your [Home] folder, in a [Terminal] type or copy and paste;

./Jalbuminstall.bin
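If the command fails with a “Permission denied” message, the .bin probably just needs the execute bit set first; adjust the file name below to match whatever you actually downloaded;

chmod +x ./JAlbuminstall.bin

./JAlbuminstall.bin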

This will open the JAlbum installer which will guide you through the installation. When it asks which Java VM to use, select the java6 version. If the box is empty, as mine was, tell it to search for a Java VM (which took a while), then select it and move on. If this still fails [as mine did on Intrepid] open a [Terminal] and type or copy and paste;

sudo update-alternatives --config java

which should give you an output like this;

There are 2 alternatives which provide `java’.

  Selection    Alternative
-----------------------------------------------
*         1    /usr/lib/jvm/java-6-sun/jre/bin/java
+         2    /usr/lib/jvm/java-6-openjdk/jre/bin/java

Press enter to keep the default[*], or type selection number:

Select java-6-sun and note the path to it i.e.

/usr/lib/jvm/java-6-sun/jre/bin/java

now run the Jalbum installer again

./Jalbuminstall.bin

and use the [Choose Another] option to navigate to

/usr/lib/jvm/java-6-sun/jre/bin/java then click on [Next]

When it asks where you want to install, it will give the default path of /home/user/Jalbum. I change this to /home/user/.Jalbum to make the folder hidden (the leading “.” does this) while still keeping it within your [Home] folder. The program can now be run from the link in your [Home] folder.

Sheevaplug – automount USB drive at boot

17, February 2010

The problem with the Sheevaplug is that it boots too damn fast and the USB drive is too slow to be available for the fstab mount.

The solution: see here for my post on setting up the fstab file to mount your USB drive. When you edit the fstab on the Sheeva you will find it empty, but you should end up with an entry similar to this.

# UNCONFIGURED FSTAB FOR BASE SYSTEM
/dev/sda1 /mnt/usbdrive ext3 auto,user,rw,exec 0 0

My USB drive is at /dev/sda1 and I created a mount point called /mnt/usbdrive; the drive is formatted as ext3, but you can replace this with the format of your drive or just use the word auto to have it detected.
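If you are not sure of the device name or filesystem type of your drive, blkid will report both, and mount -a is a handy way of testing the fstab entry without rebooting; for example;

sudo blkid

sudo mkdir -p /mnt/usbdrive

sudo mount -a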

Once you have your fstab in place you need to create a script which will slow down the boot to give the USB drive time to start. Thanks to restamp on the plugcomputer.org forum; see here for the solution.
First navigate to the init.d folder, in a [Terminal] issue the command;

cd /etc/init.d

then using whatever text editor you have installed [I added nano], open a file called wait4usbdrive with;

nano wait4usbdrive

then copy restamp’s script into it;

#!/bin/sh
#
# If /etc/fstab has been configured to mount a USB drive, pause to give
# the USB drive devices time to show up in /dev. If this is not done,
# checkfs.sh will fail, requiring manual intervention...
#
case "$1" in
  start)
    grep -q ^/dev/sda /etc/fstab &&
    for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
    do
      [ -b /dev/sda ] && exit 0
      sleep 1
    done
    exit 1
    ;;
  stop)
    ;;
esac

Then make it executable with;

chmod +x wait4usbdrive

Then link it to the rcS.d folder with this command;

ln -s ../init.d/wait4usbdrive /etc/rcS.d/S25wait4usbdrive

Now you can shut down and restart the plug and the USB drive should auto-mount. My USB drive is a 1.5 TB Iomega with its own power supply.

Sheevaplug – install get_iplayer

10, February 2010

With the help of Percival P Plugsley’s post here

I managed to get get_iplayer working on my Sheevaplug with Ubuntu Jaunty installed on its internal flash memory.

You need to visit the download page at linuxcentre.net and download the get_iplayer perl script to a folder of your choice, e.g. /root/

As you cannot get the mplayer app from the Sheevaplug repositories you need flvstreamer from here, which you will need to compile for the Sheevaplug’s ARM architecture. First download the flvstreamer source flvstreamer-1.9.tar.gz from http://mirrors.aixtools.net/sv/flvstreamer/source/ and save it to the Sheevaplug, unpacking it into a directory of your choice, e.g. /root/flvtemp
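For reference, the download-and-unpack step might look like this in a [Terminal]; a sketch only, and note that depending on how the tarball is laid out you may end up with the sources in a sub-directory, in which case cd into it before carrying on;

mkdir -p /root/flvtemp
cd /root/flvtemp
wget http://mirrors.aixtools.net/sv/flvstreamer/source/flvstreamer-1.9.tar.gz
tar -xzf flvstreamer-1.9.tar.gz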

Now we need some other packages installed, in a [Terminal]

sudo apt-get install build-essential
sudo apt-get install ffmpeg
sudo apt-get install lame
sudo apt-get install perl
sudo apt-get install libwww-perl

I like to use nano rather than vi, so I also added;
sudo apt-get install nano

now move to the directory you saved the source to;

cd /root/flvtemp

create a new file called MakefileARM

sudo touch MakefileARM
now open it with;
nano MakefileARM
Then add the following to the file;

############## start #######################
CC=gcc
CXX=g++
LD=ld

CFLAGS=-Wall -D_FILE_OFFSET_BITS=64
CXXFLAGS=-Wall -D_FILE_OFFSET_BITS=64
LDFLAGS=-Wall -D_FILE_OFFSET_BITS=64

CXXFLAGS=
LDFLAGS=-Wl,-rpath=/opt/lib

all: flvstreamer

clean:
< TAB >rm -f *.o

streams: bytes.o log.o rtmp.o AMFObject.o rtmppacket.o streams.o parseurl.o
< TAB >$(CXX) $(LDFLAGS) $(ARMFLAGS) $^ -o $@_arm -lpthread

flvstreamer: bytes.o log.o rtmp.o AMFObject.o rtmppacket.o flvstreamer.o parseurl.o
< TAB >$(CXX) $(LDFLAGS) $(ARMFLAGS) $^ -o $@_arm

bytes.o: bytes.c bytes.h Makefile
log.o: log.c log.h Makefile
rtmp.o: rtmp.cpp rtmp.h log.h AMFObject.h Makefile
AMFObject.o: AMFObject.cpp AMFObject.h log.h rtmp.h Makefile
rtmppacket.o: rtmppacket.cpp rtmppacket.h log.h Makefile
flvstreamer.o: flvstreamer.cpp rtmp.h log.h AMFObject.h Makefile
parseurl.o: parseurl.c parseurl.h log.h Makefile
streams.o: streams.cpp log.h Makefile
############### end ######################

remove the text marked < TAB > and replace it with a real TAB character; copying and pasting the text into a document often replaces the TAB with spaces, and you will then get an error message when you compile, similar to;

Makefile:15: *** missing separator. Stop.
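A quick way to check that real TAB characters survived the paste is cat -A, which shows tabs as ^I at the start of each recipe line (if those lines start with spaces instead, fix them before compiling);

cat -A MakefileARM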

save the file and now run the command;

sudo make -f MakefileARM flvstreamer

This will create a file in the same directory called flvstreamer_arm, create a directory to store it in;

sudo mkdir /root/flvstreamer

copy flvstreamer_arm to it;

cp /root/flvtemp/flvstreamer_arm /root/flvstreamer

now move to the directory;

cd /root/flvstreamer

and make the file executable with;

chmod +x flvstreamer_arm

now test all is working with;

./flvstreamer_arm --help

If that works then you can move on to get_iplayer

create a directory for get_iplayer, copy the perl script you downloaded into it (run the cp from the folder you saved the script in; if you saved it as /root/get_iplayer, move or rename it first, as the new directory uses the same name) and make it executable;

sudo mkdir /root/get_iplayer
cp get_iplayer /root/get_iplayer/
sudo chmod 755 /root/get_iplayer/get_iplayer

move to the Directory and run get_iplayer
cd /root/get_iplayer
./get_iplayer

The first time you run it get_iplayer should update itself and end, the second time it will download the current BBC listings.

Now add the following prefs;

./get_iplayer --prefs-add --flvstreamer="/root/flvstreamer/flvstreamer_arm"
./get_iplayer --prefs-add --ffmpeg="/usr/bin/ffmpeg"
./get_iplayer --prefs-add --lame="/usr/bin/lame"
./get_iplayer --prefs-add --output="/PATH/To/Your/Download/Directory"

Check for a program;

./get_iplayer --type=radio "News"

check out the 5 digit index code of one of the programs and download it as follows

./get_iplayer --get 13119

Hope this helps

Folding@Home

20, December 2009

Just decided to give Folding@Home a try. It’s a project which uses the spare processing time of thousands of private users’ PCs to process calculations which help scientists study diseases and, in time, cure them. More information can be found on the Folding@Home site. The project runs on the client PCs using software which can be downloaded from the site for various operating systems. The software downloads sections of work from the project servers, works on each section using your PC’s spare processing time, then uploads the results before downloading another section to work on. As a user you gain points for the work your PC completes, which you can either assign to your own “Team” or you can join another team of users. You can then track your personal/team results. I have decided to join Team Ubuntu, which has an id of #45104. First you need a user name; check that yours is unique by checking it here.

I have decided to use my server to help this project when it is not processing work for myself. The server is currently running a 32-bit version of Ubuntu Jaunty [v 9.04]. I therefore downloaded the 32-bit version of the Folding@Home software from the above site, currently at version 6.02 [20/12/09]. I then created a new directory called Folding@Home in my home directory and extracted the contents of FAH6.02-Linux.tgz to this directory.
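For reference, the create-and-extract step looks something like this in a [Terminal], assuming the tarball was downloaded to your home folder;

~$ mkdir ~/Folding@Home

~$ tar -xzf ~/FAH6.02-Linux.tgz -C ~/Folding@Home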

I then opened a [Terminal] and cd to the new Directory.

~$ cd Folding@Home

Then ran the following;

~/Folding@Home$ sudo ./fah6

This then asks the following questions;

User name [Anonymous]? Type in your username, then press [Enter].

Team Number [0]? Type in the number 45104, then press [Enter].

Passkey []? Leave blank, press [Enter].

Ask before fetching/sending work (no/yes) [no]? to make things automatic, leave at no and press [Enter].

Use proxy (yes/no) [no]? I’m not behind a proxy, so I left at no, and pressed [Enter].

Acceptable size of work assignment and work result packets (bigger units may have large memory demands) — ‘small’ is <5MB, ‘normal’ is <10MB, and ‘big’ is >10MB (small/normal/big) [normal]? again I left at the default of normal, and pressed [Enter].

Change advanced options (yes/no) [no]? and again left at default, pressing [Enter]. The program then ran, updated, downloaded the first batch of work and started to process…..

Note: works in Ubuntu Jaunty v9.04.

Shared Directories with Samba

4, October 2009

Configuring the Samba conf.file [smb.conf]

First, create a shared folder by opening a [Terminal] and either typing or copy & pasting the following;

martin@linux:~$ sudo mkdir /home/my_share

make it available to all users;

martin@linux:~$ sudo chmod 0777 /home/my_share

install samba with;

martin@linux:~$ sudo apt-get install samba

stop samba running with;

martin@linux:~$ sudo /etc/init.d/samba stop

Under Lucid this is now; sudo service smbd stop

rename the current config file as a backup template with;

martin@linux:~$ sudo mv /etc/samba/smb.conf /etc/samba/smb.conf.backup

create a new config template;

martin@linux:~$ sudo touch /etc/samba/smb.conf

open new config file with;

martin@linux:~$ sudo gedit /etc/samba/smb.conf

now copy & paste in the details from the smb.conf file here, changing the details of the network name [my_network], computer name [my_linux_box] and the shared folder [my_share] as required.
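If you just want a feel for what goes into the file, here is a minimal sketch of the kind of global and share sections involved, using the placeholder names above; treat the linked smb.conf as the real template and adapt as needed;

[global]
   workgroup = my_network
   netbios name = my_linux_box
   security = user

[my_share]
   path = /home/my_share
   browseable = yes
   read only = no
   guest ok = no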

save and close the config file.

start samba again with;

martin@linux:~$ sudo /etc/init.d/samba start

Under Lucid this is now; sudo service smbd start

Add yourself as a samba user with;

martin@linux:~$ sudo smbpasswd -L -a my_name

enter your admin password when asked.

martin@linux:~$ sudo smbpasswd -L -e my_name
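To check everything is in order, testparm (installed with samba) will parse the config file for errors, and if you have the smbclient package installed you can list the shares as your new samba user;

martin@linux:~$ testparm

martin@linux:~$ smbclient -L localhost -U my_name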

Backup using rsync, SSH & cron

31, August 2009

Backup on Ubuntu.

Updated 09/11/09 to correct some errors!

This “How To” was written with the help of this article by Troy Johnson, here, and amended where necessary to fit my needs.

I have two PCs on a network and want to back up some files from one PC, the [remote_client], to the other PC, the [server]. To do this I want to run a script from the server. The process can be achieved using rsync & SSH and automated with a cron job on the server.

Step:1

We install on the server:

openssh-server and rsync

and on the client: openssh-client and rsync

This can be done via the Synaptic Package Manager or using the command line i.e.

~$ sudo apt-get install openssh-client

~$ sudo apt-get install rsync

you may also want to check that cron is installed, and if you want a GTK front end for rsync you could install gadmin-rsync.

So the files I want to back up are on the remote_client in the following folder; /home/martin/my_files

and I want to back them up on the server in the following folder; /home/martin/backups

Note: I have an account on both PC’s with the same name and password, and read access to both the source & destination folders.

On the server we test out a basic script called backup.sh, saved with the permissions set to “allow executing as a program” (check out my post here for help with that). The content of the script is;

#!/bin/sh

## first script to copy files from remote_client to server ##

rsync -avz -e ssh martin@remote_client:/home/martin/my_files/ /home/martin/backups

save the script and then run it by typing;

~$ ./backup.sh

in a [Terminal], assuming you saved the script in your home directory /home/martin and called it backup.sh. You should be prompted for your user password on the remote_client and then the process should start.

Step:2

Ok so that does the trick, but if I’m to automate this script it needs to work without SSH asking me for my password each time. To do this I need to generate a private/public key pair; this is much more secure than putting the actual password in the text of the script or elsewhere linked to it.

We now need to be logged onto the server;

To generate the key pair I open a [Terminal] and issue the command;

~$ mkdir /home/martin/.key

to create a hidden folder in my home folder called .key [the dot makes it hidden], then;

~$ ssh-keygen -t dsa -b 1024 -f /home/martin/.key/server-rsync-key

Generating public/private dsa key pair.

Enter passphrase (empty for no passphrase): [press Enter]

Enter same passphrase again: [press Enter]

Your identification has been saved in /home/martin/.key/server-rsync-key.

Your public key has been saved in /home/martin/.key/server-rsync-key.pub.

The key fingerprint is:

etc. etc.

I now have 2 files in the .key folder in my home folder called;

server-rsync-key

and

server-rsync-key.pub

As you will later set up a cron job which will run a script that needs access to the server-rsync-key file, we need to change the default Ubuntu file permissions to owner read/write, group & other none

$ chmod 600 server-rsync-key

Now copy the other key, server-rsync-key.pub over to the remote_client into my home folder /home/martin and log onto the remote_client.

Now on the remote_client, I issue the following command in a [Terminal]

~$ if [ ! -d .ssh ]; then mkdir .ssh ; chmod 700 .ssh ; fi

[this checks to see if the folder .ssh exists and if not creates it]

~$ mv server-rsync-key.pub .ssh/

[this moves the key from my home folder to the .ssh folder, hidden in my home folder]

~$ cd .ssh/

[we now move to the .ssh folder]

~$ if [ ! -f authorized_keys ]; then touch authorized_keys ; chmod 600 authorized_keys ; fi

[this checks to see if there is an authorized_keys file and if not creates it with the correct permissions]

~$ cat server-rsync-key.pub >> authorized_keys

[this appends the contents of the server-rsync-key.pub key to the authorized_keys file]

If we now run our original script on the server with the amendment below, it no longer asks for a password.

rsync -avz -e "ssh -i /home/martin/.key/server-rsync-key" martin@remote_client:/home/martin/my_files/ /home/martin/backups

Step:3

Now the key can be used to make connections to the remote_client, but these connections can be made from anywhere (that the ssh daemon on remote_client allows connections from) and they can do anything (that the remote user can do), which could be dangerous. To make this more secure, edit the ‘authorized_keys’ file in the remote_client’s /home/martin/.ssh folder with a text editor and modify the line ending user_name@server_name from this:

ssh-dss
yl6b2/cMmBVWO39lWAjcsKK/zEdJbrOdt/sKsxIK1/ZIvtl92DLlMhci5c4tBjCODey4yjLhApjWgvX9
D5OPp89qhah4zu509uNX7uH58Zw/+m6ZOLHN28mV5KLUl7FTL2KZ583KrcWkUA0Id4ptUa9CAkcqn/gW
kHMptgVwaZKlqZ+QtEa0V2IwUDWS097p3SlLvozw46+ucWxwTJttCHLzUmNN7w1cIv0w/OHh5IGh+wWj
V9pbO0VT3/r2jxkzqksKOYAb5CYzSNRyEwp+NIKrY+aJz7myu4Unn9de4cYsuXoAB6FQ5I8AAAEBAJSm
DndXJCm7G66qdu3ElsLT0Jlz/es9F27r+xrg5pZ5GjfBCRvHNo2DF4YW9MKdUQiv+ILMY8OISduTeu32
nyA7 etc....

to this:

from="10.1.1.1", command="/home/martin/.key/validate-rsync" ssh-dss bA402VuCsOLg
yl6b2/cMmBVWO39lWAjcsKK/zEdJbrOdt/sKsxIK1/ZIvtl92DLlMhci5c4tBjCODey4yjLhApjWgvX9
D5OPp89qhah4zu509uNX7uH58Zw/+m6ZOLHN28mV5KLUl7FTL2KZ583KrcWkUA0Id4ptUa9CAkcqn/gW
kHMptgVwaZKlqZ+QtEa0V2IwUDWS097p3SlLvozw46+ucWxwTJttCHLzUmNN7w1cIv0w/OHh5IGh+wWj
V9pbO0VT3/r2jxkzqksKOYAb5CYzSNRyEwp+NIKrY+aJz7myu4Unn9de4cYsuXoAB6FQ5I8AAAEBAJSm
DndXJCm7G66qdu3ElsLT0Jlz/es9F27r+xrg5pZ5GjfBCRvHNo2DF4YW9MKdUQiv+ILMY8OISduTeu32
nyA7 etc....

where “10.1.1.1” is the IP address of the server, and “/home/martin/.key/validate-rsync” points to a script (on the remote_client) called validate-rsync, as follows:

#!/bin/sh
case "$SSH_ORIGINAL_COMMAND" in
  *\&*)
    echo "Rejected"
    ;;
  *\(*)
    echo "Rejected"
    ;;
  *\{*)
    echo "Rejected"
    ;;
  *\;*)
    echo "Rejected"
    ;;
  *\<*)
    echo "Rejected"
    ;;
  *\`*)
    echo "Rejected"
    ;;
  *\|*)
    echo "Rejected"
    ;;
  rsync\ --server*)
    $SSH_ORIGINAL_COMMAND
    ;;
  *)
    echo "Rejected"
    ;;
esac

If the server has a variable address, or shares its address (via NAT etc.) with hosts you do not trust, omit the ‘from=”10.1.1.1″,’ part of the line (including the comma), but leave the ‘command’ portion. This way, only rsync will be possible from connections using this key. Make certain that the ‘validate-rsync’ script is executable by the remote user on remote_client, and test it.

So in my case the result looks like this:

command="/home/martin/.key/validate-rsync" ssh-dss bA402VuCsOLg9YS0NKxugT+o4UuIj
yl6b2/cMmBVWO39lWAjcsKK/zEdJbrOdt/sKsxIK1/ZIvtl92DLlMhci5c4tBjCODey4yjLhApjWgvX9
D5OPp89qhah4zu509uNX7uH58Zw/+m6ZOLHN28mV5KLUl7FTL2KZ583KrcWkUA0Id4ptUa9CAkcqn/gW
kHMptgVwaZKlqZ+QtEa0V2IwUDWS097p3SlLvozw46+ucWxwTJttCHLzUmNN7w1cIv0w/OHh5IGh+wWj
V9pbO0VT3/r2jxkzqksKOYAb5CYzSNRyEwp+NIKrY+aJz7myu4Unn9de4cYsuXoAB6FQ5I8AAAEBAJSm
DndXJCm7G66qdu3ElsLT0Jlz/es9F27r+xrg5pZ5GjfBCRvHNo2DF4YW9MKdUQiv+ILMY8OISduTeu32
nyA7 etc....

Our backup.sh script now looks like this:

#!/bin/sh

## second script to copy files from remote_client to server ##

rsync -avz -e "ssh -i /home/martin/.key/server-rsync-key" martin@remote_client:/home/martin/my_files/ /home/martin/backups

Step:4

I now need to run the script from a cron job. First move the backup.sh script into your $PATH; for help with this check out my post here.
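On Ubuntu a convenient place is ~/bin, which the default ~/.profile adds to your $PATH if the directory exists (you may need to log out and back in for it to be picked up); a sketch, assuming backup.sh is still in your home folder;

~$ mkdir -p ~/bin

~$ mv ~/backup.sh ~/bin/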

Now on the server open a [Terminal] and issue the command:

~$ crontab -e

This should open a text editor (vi or nano, depending on your default). I normally paste the following text into it as a reminder;

# leave a space between each option #
# the * is a wildcard option #
# dayofmonth is 1 to 31 #
# dayofweek can be 0 – 6, 0 is Sunday #
# or use mon tue wed etc #
#
# mins hours dayofmonth month dayofweek command
# *    *     *          *     *         command
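Below the reminder I then add the actual job line; for example, to run the backup every night at 22:30 (assuming backup.sh ended up in /home/martin/bin);

# m h dom mon dow command
30 22 * * * /home/martin/bin/backup.sh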

Then save and exit: in nano use Control+O followed by [Enter], then Control+X; in vi use :wq.

and test…

Cron jobs

31, August 2009

Crontabs can be used to automate jobs on your system by running programs or scripts at predefined times, dates or days. Each user can have their own crontab, or they can be run as root. To open a crontab, type the following in a [Terminal];

crontab -e

this will open in your default text editor (vi in my case), but if you prefer the gedit text editor, run the following command first;

export VISUAL=gedit

If your crontab opens in nano, use Control+O followed by [Enter] to save your tab, then Control+X to exit; in vi, save and quit with :wq.

When you run crontab -e, you will get an empty crontab with just the following [or similar] displayed;

# m h dom mon dow command

the headings are described below and I usually add these to the top of the file to make things easier to edit. So my crontab looks like this:

# m h dom mon dow command

# mins -> minutes (0-59)

# hours -> hour (0-23)

# DayOfMonth -> day of month (1-31)

# month -> month (1-12)

# DayOfWeek -> day of week (0-6) [Sunday is 0], or

# DayOfWeek -> day of week (mon – sun)

# You can use ',' or '-' to give you 1,2,3,5 or 1-5 etc

# The * is used as a wildcard i.e every month

# m h dom mon dow command

# * * * * * /command

Examples;

each of the six fields must be separated by a space; here are a few examples;

### runs cron 2 mins past every hour ###

2 * * * * /home/martin/bin/play_sound.sh

### runs cron every 2 minutes ###

# m h dom mon dow command

*/2 * * * * /home/martin/bin/play_sound.sh

#01 * * * * root run-parts /etc/cron.hourly

# The run-parts script is simple enough: it just runs all the executables in the specified directory. [Note the extra "root" user field in this line; that format is for the system-wide /etc/crontab, not for a user crontab.]

### runs cron every fri at 15:30 ###

# m h dom mon dow command

30 15 * * fri /home/martin/bin/backup.sh

Firefox crashing on flash sites

25, May 2009

Jaunty – Firefox – Gnome desktop

My Firefox constantly crashes when viewing flash-based sites such as BBC iPlayer & Flickr. It was fine under Intrepid, but the upgrade to Jaunty has done something, and it’s doing it on both my laptop and desktop, both recently upgraded to Jaunty. First I closed Firefox, then removed the installed plugins by typing the following into a Terminal;

sudo apt-get remove --purge swfdec-mozilla mozilla-plugin-gnash flashplugin-nonfree

Then I installed the adobe flash plugin, with;

sudo apt-get install adobe-flashplugin

Tested and failed! So I downloaded the Flash Player 10 .deb file for Ubuntu v8.04 from the Adobe web site, found here: Adobe Flash Player 10. Then I closed Firefox and removed adobe-flashplugin; in a Terminal type;

sudo apt-get remove --purge adobe-flashplugin

Now I navigated to my download and double-clicked on the install_flash_player_10_linux.deb file, which opened the [Package installer]; it complained there was a newer version in the repositories. I [OK]’d the install and retested Firefox, and so far all is well…
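If you would rather skip the graphical [Package installer], the same .deb can be installed from a Terminal with dpkg; a sketch, assuming the file is in your current directory;

sudo dpkg -i install_flash_player_10_linux.deb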