Category Archives: System

Monitoring a cluster of Raspberry Pi with Nagios

In this post, I’ll describe how I’ve set up the monitoring of my micro-cluster of Raspberry Pi with Nagios.

On the monitor host

Install and configure Nagios: see this article

Install the NRPE plugin:

sudo apt-get install nagios-nrpe-plugin

Define the services: edit /etc/nagios3/conf.d/services_nagios2.cfg

# NRPE Services
define service {
    hostgroup_name rpi-cluster
    service_description Current-Users-NRPE
    check_command check_nrpe_1arg!check_users
    use generic-service
    notification_interval 0
}

define service {
    hostgroup_name rpi-cluster
    service_description Current Load NRPE
    check_command check_nrpe_1arg!check_load
    use generic-service
    notification_interval 0
}

define service {
    hostgroup_name rpi-cluster
    service_description Disk Space NRPE
    check_command check_nrpe_1arg!check_all_disks
    use generic-service
    notification_interval 0
}

define service {
    hostgroup_name rpi-cluster
    service_description Zombie Processes NRPE
    check_command check_nrpe_1arg!check_zombie_procs
    use generic-service
    notification_interval 0
}

define service {
    hostgroup_name rpi-cluster
    service_description Total Processes NRPE
    check_command check_nrpe_1arg!check_total_procs
    use generic-service
    notification_interval 0
}

define service {
    hostgroup_name rpi-cluster
    service_description Swap NRPE
    check_command check_nrpe_1arg!check_swap
    use generic-service
    notification_interval 0
}

Define the new hostgroup: /etc/nagios3/conf.d/hostgroups_nagios2.cfg

define hostgroup {
        hostgroup_name  rpi-cluster
        alias           Raspberry PI Cluster
        members         rpi0,rpi1,rpi2
}

Define a new host file for each slave: /etc/nagios3/conf.d/rpi-cluster-xxx.cfg. The address directive contains the slave’s IP address.

define host {
        use                     generic-host
        host_name               rpixxx
        alias                   rpi-cluster-xxx
        hostgroups              rpi-cluster
        address                 192.168.0.xxx
}

Reload Nagios:

sudo service nagios3 reload
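If Nagios complains after the reload, the configuration can be checked for errors with the built-in verification mode (the path below is the Debian/Ubuntu default used throughout this post):

sudo nagios3 -v /etc/nagios3/nagios.cfg

It lists the objects it loaded and reports any syntax or reference error before you reload again.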

On the slave hosts

Install the NRPE server:

sudo apt-get install nagios-nrpe-server

Edit /etc/nagios/nrpe_local.cfg. The allowed_hosts directive contains the IP of the monitor.

######################################
# Do any local nrpe configuration here
######################################

allowed_hosts=127.0.0.1,192.168.0.xxx

command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10%
command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z
command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200
command[check_swap]=/usr/lib/nagios/plugins/check_swap -w 50% -c 25%
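You can also run any of these plugins by hand on the slave to see exactly what NRPE will report, for example:

/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20

It prints a one-line OK/WARNING/CRITICAL status followed by performance data.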

Restart the service:

sudo service nagios-nrpe-server restart
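Back on the monitor, you can verify that the slave accepts NRPE connections without waiting for the next scheduled check (check_nrpe comes with the nagios-nrpe-plugin package installed earlier; replace the IP with the slave’s address):

/usr/lib/nagios/plugins/check_nrpe -H 192.168.0.xxx

With no command argument, it simply returns the NRPE version running on the slave.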

Monitor

You can now monitor the slaves on Nagios:


Inspired by: LowEndBox

OVH awesome database backups

Yesterday, I made a huge mistake while testing a new version of my RSS syndication application: I ran the installation script, which has the effect of (re)creating all the tables used by the application.

This has reminded me of 2 things:

  • an installation script should not wipe existing tables, at least not without warning the user
  • a backup strategy would not be the worst idea

But knowing that and having a solution to my current problem are two completely different things.

Fortunately, OVH (the hosting service I’m using) is awesome and lets you retrieve a backup of your database, either from yesterday or from last week.

To create a dump of your database, connect to your hosting with SSH, then enter the following command:

mysqldump --host=your_host --user=your_user --password=your_password --port=3307 your_bdd > mybackup.sql

Port 3307 is used for yesterday’s backup, port 3317 is for last week’s.

This will create a dump of your database in the file mybackup.sql. To import it back, enter:

cat mybackup.sql | mysql --host=your_host --user=your_user --password=your_password your_bdd

And voilà, your database is back to the state it was in yesterday.
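As a side note, if only a few tables were damaged, mysqldump can also restrict the backup to specific tables, so nothing else is touched when you import it back (the table names below are purely hypothetical examples):

mysqldump --host=your_host --user=your_user --password=your_password --port=3307 your_bdd damaged_table1 damaged_table2 > tables_backup.sql

Importing the resulting file works exactly like above, with the same cat/mysql command.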



My backup strategy

I’ve been using multiple hard drives for around 15 years, since our second family computer. But it’s always been a case of filling up a drive, then buying a new one to store new stuff on, and so on.

Recently, I’ve realized (better late than never) that it would be a good idea to back up some of my data. Over the years, I’ve lost important files, pictures and more. Sometimes a hard drive just crashed; other times it was a stupid case of “There is probably nothing important on this drive, let’s format it!”. But all these losses could easily have been prevented if I had just backed up my important stuff, using my up to 4 internal and 2 external hard drives (yes, I’m a data hoarder…).

That’s why I’ve decided to implement a real backup process; so far, I’m only using it for my important documents and my pictures, but I intend to extend it to other files as I go along.

Pictures

This is the easiest process of the two. I always have 3 copies of all my pictures:

  • 1 on my laptop (the original one), in /media/laptop_hdd/Pictures
  • 2 on 2 different USB hard drives (the copies) in /media/backup_hdd1/Pics and /media/backup_hdd2/Pics

My 2 external hard drives each have a dedicated 250GB backup partition (this is currently enough; when available space becomes an issue, I’ll upgrade to new disks).

Every time I come back home after taking pictures, I immediately transfer them from the camera to my laptop (and I keep them on the camera for now). Once the transfer is done, I plug in my first USB drive and copy the pictures using rsync:

$ rsync -av /media/laptop_hdd/Pictures/ /media/backup_hdd1/Pics/

This will copy the content of the Pictures folder on my laptop into the Pics folder on the backup partition of my first USB drive.

A few notes on this rsync syntax:

  • -a means that the files are transferred in “archive” mode, which ensures that symbolic links, devices, attributes, permissions, ownerships,… are preserved in the transfer
  • -v means “verbose” (a log of the operation is displayed)
  • the trailing slash at the end of the first path (/media/laptop_hdd/Pictures/) is very important. Without it, rsync copies the Pictures directory itself inside Pics, instead of copying the content of Pictures inside Pics (you’d end up with /media/backup_hdd1/Pics/Pictures instead of /media/backup_hdd1/Pics/). The trailing slash at the end of the second path doesn’t matter.

This command won’t delete from the backup the pictures that were deleted from the source (you can force this behaviour with the --delete parameter).
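If you do use --delete, a dry run first (-n) is a cheap safety net: it lists what would be transferred or removed without actually touching anything:

$ rsync -avn --delete /media/laptop_hdd/Pictures/ /media/backup_hdd1/Pics/

Once the output looks right, re-run the command without -n.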

Once the copy is done, I repeat it on my second USB drive:

$ rsync -av /media/laptop_hdd/Pictures/ /media/backup_hdd2/Pics/

And here I am, with 3 copies of my pictures!

In the future, I’ll probably invest in a NAS, with at least 2 drives in RAID (RAID 1 for 2 disks, RAID 10 if I decide to invest in a 4+ disk solution). This would allow me to automate the backup (since the NAS is always on and connected, I wouldn’t have to connect the drives and run the commands manually) and simplify it (just one copy from my computer; the NAS would handle the replication on its drives).

Documents

This part of my backup is more involved. Ideally, I always have 4 copies of all my documents:

  • 1 on my laptop (the original one), in ~/Documents/Important (the Documents directory itself is not backed up, only the Important directory)
  • 2 on 2 different USB hard drives (the copies) in encrypted partitions
  • 1 on SpiderOak

Encrypted partitions

I’ve explained in a previous post how to create encrypted partitions on an external hard drive. You can find the instructions here.

My 2 external hard drives each have a dedicated 50GB encrypted backup partition.

Once I’ve mounted the encrypted partitions, I just run the same rsync command to copy my documents to both encrypted partitions:

$ rsync -av ~/Documents/Important/ /mnt/private_hdd1/Documents/
$ rsync -av ~/Documents/Important/ /mnt/private_hdd2/Documents/

Then I can unmount the encrypted partitions; my documents are now safely stored on both of them.
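The whole routine can be wrapped in a small script. This is only a sketch, assuming the two encrypted partitions are /dev/sdb2 and /dev/sdc2 (hypothetical device names, adjust to your setup) and that the mount points already exist:

#!/bin/sh
# Sketch: back up ~/Documents/Important to both encrypted partitions.
# /dev/sdb2 and /dev/sdc2 are hypothetical device names; check yours with lsblk.
set -e

backup_to() {
    dev=$1; name=$2
    sudo cryptsetup open "$dev" "$name"          # prompts for the passphrase
    sudo mount /dev/mapper/"$name" /mnt/"$name"
    rsync -av ~/Documents/Important/ /mnt/"$name"/Documents/
    sudo umount /mnt/"$name"
    sudo cryptsetup close "$name"
}

backup_to /dev/sdb2 private_hdd1
backup_to /dev/sdc2 private_hdd2

The cryptsetup open/close syntax is the 1.6.1 one described in my previous post; with 1.4.1 you would use luksOpen/luksClose instead.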

SpiderOak

SpiderOak is exactly like DropBox, except:

  • Condoleezza Rice is not a board member
  • all my files are encrypted on my computer, before they are transferred to the SpiderOak servers, which means that the company never sees a readable version of them
  • Edward Snowden recommends using SpiderOak, and dropping DropBox.

This ensures that my files will be securely stored on the SpiderOak servers. The downside is that if I lose my password, I’ll never see my files again (but that’s fine, since I already have local backups on my external hard drives).

SpiderOak offers a free account, with the only limitation being the storage space (they offer 2GB, which should be more than enough for my really important documents). The procedure is really simple:

  • create an account on their website
  • download and install the SpiderOak client on your computer
  • configure the client to back up specific folders
  • and that’s it, your folders will be automatically backed up when a change is made.

Like with DropBox, I can browse my documents in a web browser and download them (though, for security reasons, they recommend using the client rather than the web interface).

If 2GB is not enough for you, you can always upgrade to a paid account (with a really simple pricing rule: 1GB = $1/year). The paid plans start at 100GB and go up to more than 1TB.


This was a presentation of my backup process. I’m not saying it’s the best (far from it, since you always need to have multiple copies at multiple locations, which I don’t have for my pictures), but for now it’ll have to be enough. As I said before, in the future, I’d like to invest in a NAS (ideally 2, one at home and one somewhere else).

Photo: Information, by John McStravick via Flickr (CC BY)



Encrypting a partition on Ubuntu

I’ve decided to implement a better back-up strategy for all my important data.

I’ll come back later about the back-up itself, but for now I’ll explain what I use to keep the personal stuff (documents,…) private. This means encryption!

I plan to keep using mainly Ubuntu (or Ubuntu variants) in the future, and maybe have a Windows installation for the non-Linux games, so cross-platform compatibility of the encryption solution is not an issue for me. That’s why I started by searching for “encrypt hard drive ubuntu” on DuckDuckGo, which led me to the Ubuntu documentation. From there, I found a link to the ArchLinux documentation on dm-crypt.

dm-crypt is a transparent disk encryption subsystem in Linux kernel versions 2.6 and later and in DragonFly BSD. It is part of the device mapper infrastructure, and uses cryptographic routines from the kernel’s Crypto API.

This means that the encryption API is included in the kernel itself (which is a good sign for compatibility and maintenance). But to be able to create and activate encrypted volumes, we need a front-end, and in my case I’ll use cryptsetup. The main cryptsetup commands have changed between versions 1.4.1 (default on Ubuntu 12.04) and 1.6.1 (default on Ubuntu 14.04). For each cryptsetup command, I’ll give both versions.

Here is the workflow I use to create my encrypted partitions:

Create the partition using GParted

I’m a bit lazy, so I prefer to use GParted to create my basic partitions. You can easily find good documentation on how to create a partition with GParted.

The partition I’ll be using is an ext3 primary partition (the file system probably doesn’t matter, since we’ll overwrite this partition later).

Wipe the partition

Before we create our encrypted partition, we’ll wipe the existing partition with pseudo-random data.

All the commands have to be run as root (or with sudo).

Start by creating a temporary encrypted container on the partition (sdXY) you want to encrypt:

  • 1.4.1:
    # cryptsetup create my_container /dev/sdXY
    
  • 1.6.1:
    # cryptsetup open --type plain /dev/sdXY my_container
    

This will create a container for your partition at /dev/mapper/my_container.

Then check the container has been created correctly:

# fdisk -l
Disk /dev/mapper/my_container: 1000 MB, 1000277504 bytes
...
Disk /dev/mapper/my_container does not contain a valid partition table

Finally, wipe the container with pseudorandom data:

# dd if=/dev/zero of=/dev/mapper/my_container
dd: writing to ‘/dev/mapper/my_container’: No space left on device

This step will take a while (in my case, around 2 hours for a 50GB partition on a USB hard drive).
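If you find it too slow, writing in larger blocks usually speeds dd up noticeably (and on newer coreutils versions you can also add status=progress to see how far along it is):

# dd if=/dev/zero of=/dev/mapper/my_container bs=1M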

Now you can close your partition before the next step:

  • 1.4.1:
    # cryptsetup luksClose my_container
    
  • 1.6.1:
    # cryptsetup close my_container
    

Encrypting / decrypting the partition

Start by setting the LUKS (Linux Unified Key Setup) headers on your partition:

  • 1.4.1 / 1.6.1:
    # cryptsetup -v luksFormat /dev/sdXY
    

You can use other options in addition to -v (verbose); you can find a list here.
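To check that the LUKS header was written correctly, you can dump it (this only shows metadata such as the cipher and the key slots, never the keys themselves):

# cryptsetup luksDump /dev/sdXY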

Then unlock the partition with the Device Mapper:

  • 1.4.1:
    # cryptsetup luksOpen /dev/sdXY my_container
    
  • 1.6.1:
    # cryptsetup open /dev/sdXY my_container
    

which will create a container at /dev/mapper/my_container.

Now create a file system of your choice (this is why I said earlier that the file system didn’t matter in GParted). I’ve decided to create an ext3 partition, but this is up to you:

# mkfs.ext3 /dev/mapper/my_container
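Before mounting the container for the first time, the mount point has to exist (/mnt/my_container is simply the path used in the commands below):

# mkdir -p /mnt/my_container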

Now you can mount your partition:

  • 1.4.1:
    # cryptsetup luksOpen /dev/sdXY my_container
    # mount -t ext3 /dev/mapper/my_container /mnt/my_container
    
  • 1.6.1:
    # cryptsetup open --type luks /dev/sdXY my_container
    # mount -t ext3 /dev/mapper/my_container /mnt/my_container
    

And then unmount it:

  • 1.4.1:
    # umount /mnt/my_container
    # cryptsetup luksClose my_container
    
  • 1.6.1:
    # umount /mnt/my_container
    # cryptsetup close my_container
    

Easy access in your file manager

Some file managers (Nautilus, Thunar,…) allow you to unlock and mount your encrypted partitions without using the command line.

In your list of devices, just click on the encrypted partition, like you would do for any other partition.

You will be asked to enter the passphrase, and your encrypted partition will be mounted and ready to use.


I hope this will convince you to start using encrypted partitions for your personal and private data. It’s really easy to do, and can be very useful if your laptop, USB drive,… gets stolen.

I’ll come back soon to explain my back-up process.

Photo: System Lock, by Yuri Samoilov via Flickr (CC BY)



DynDNS and No-IP

For a few years now, I’ve been using a free DynDNS account to access my personal server (at home), even though I have a dynamic IP address. This means I’ve had a URL that always pointed to my server, even when my IP changed.

But yesterday I received an email from DynDNS:

For the last 15 years, all of us at Dyn have taken pride in offering you and millions of others a free version of our Dynamic DNS Pro product. What was originally a product built for a small group of users has blossomed into an exciting technology used around the world.
That is why with mixed emotions we are notifying you that in 30 days, we will be ending our free hostname program. This change in the business will allow us to invest in our customer support teams, Internet infrastructure, and platform security so that we can continue to strive to deliver an exceptional customer experience for our paying customers.
We would like to invite you to upgrade to VIP status for a 25% discounted rate, good for any package of Remote Access (formerly DynDNS Pro). By doing so, you’ll have access to customer support, additional hostnames, and more.

I’ve always been very happy with the service I’ve received from DynDNS, since I had a free account, and the only action required from me was to click on a link they sent me every month (I think) to keep my account active. But I don’t need this enough to pay for it.

So I’ve looked for a replacement for DynDNS, and I’ve found No-IP. It’s a very similar service that has a free plan and allows you to register up to 3 hosts. The sign-up process is self-explanatory, and lets you register 1 host.

After registering (and validating your account), you need to install an update client on your host, which will regularly (by default, every 30 minutes) contact the No-IP servers to update your IP/hostname mapping. This way, your URL will always point to your host, with a maximum delay of 30 minutes (don’t use a free account if you need good QoS with little or no “downtime”). The installation of the client is very well explained in the No-IP knowledge base: How to Install the Dynamic Update Client on Linux.
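For reference, once the client is built and installed, starting and checking it looks roughly like this (the binary usually lands in /usr/local/bin when built from their tarball, but the exact path and options depend on the version you installed):

sudo /usr/local/bin/noip2       # start the update client in the background
sudo /usr/local/bin/noip2 -S    # display the running instances and the hosts they update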

I’m now the proud owner of the URL http://remyg.no-ip.biz/, which allows me to access my personal server via SSH (or HTTP if I decide to use it to test my web projects). The total time spent for the registration, installation and configuration was less than 20 minutes, which shows how easy and well-explained the process is.

Photo: Core switch, by Seeweb via Flickr (CC BY-SA)
