Why do I do it?
Because my Nagios3 is running as a virtual machine on a Xen server, and I have less than 64 MB of RAM.
First I install lighttpd, to prevent the automatic apache2 installation during the nagios3 installation:
apt-get install lighttpd
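With lighttpd already in place, the web-server dependency of nagios3 is satisfied, so apache2 should not get pulled in any more. A sketch of the follow-up step:
apt-get install nagios3   # lighttpd now provides the httpd dependency, apache2 stays out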
physical volumes:
These are your physical disks or disk partitions, such as /dev/hda or /dev/hdb1 – the devices you would normally mount and unmount directly. Using LVM we can combine multiple physical volumes into volume groups.
volume groups:
A volume group is composed of real physical volumes, and is the storage used to create logical volumes which you can create/resize/remove and use. You can consider a volume group as a “virtual partition” made up of an arbitrary number of physical volumes.
logical volumes:
These are the volumes that you’ll ultimately end up mounting on your system. They can be added, removed, and resized on the fly. Since they are contained in the volume groups they can be bigger than any single physical volume you might have. (i.e. 4x 5Gb drives can be combined into one 20Gb volume group, and you can then create two 10Gb logical volumes.)
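As a quick sketch of that 4x 5Gb example (hypothetical device and volume names, LVM tools already installed):
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde    # four 5Gb disks become physical volumes
vgcreate vg0 /dev/sdb /dev/sdc /dev/sdd /dev/sde  # combined into one 20Gb volume group
lvcreate -n lv1 -L 10G vg0                      # two 10Gb logical volumes
lvcreate -n lv2 -L 10G vg0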
To get started, install the LVM tools:
apt-get update && apt-get install lvm2
Initialise your disk, partition, or RAID device (here /dev/md0) as a physical volume:
pvcreate /dev/md0
Once we’ve initialised the partitions or drives, we can create a volume group built from them:
vgcreate storm /dev/md0
If you’ve done this correctly you’ll be able to see it included in the output of:
vgscan
Create your first logical volume:
lvcreate -n data --size 300g storm
Your new logical volume will be accessible via:
/dev/storm/data # or /dev/mapper/storm-data
Create a file system:
mkfs.ext4 /dev/storm/data
Show created volumes and their sizes:
lvdisplay
Extend volume:
lvextend -L+10g /dev/storm/data
After extending the volume you should also grow the filesystem:
e2fsck -f /dev/storm/data
resize2fs /dev/storm/data
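Note that e2fsck needs the filesystem to be unmounted; a minimal sketch of the offline sequence, assuming the volume is mounted on /mnt/data:
umount /mnt/data
e2fsck -f /dev/storm/data
resize2fs /dev/storm/data
mount /dev/storm/data /mnt/data
When you are only growing an ext3/ext4 filesystem, resize2fs can also be run on the mounted volume.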
Remove volume:
lvremove /dev/storm/data
If you need some visual help, you can use the “system-config-lvm” utility to configure LVM.
First thing to say: it’s a kind of RAID 0. If you need a single big partition but only have multiple smaller disks, you can create an LVM stripe across all of them.
In this example I have 4x 4 TiB devices which will be combined into a single 16 TiB volume.
pvcreate /dev/sdb /dev/sdc /dev/sde /dev/sdd # verify with: pvdisplay
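Then build the volume group – here assumed to be the “backups” group used in the next command – from those physical volumes:
vgcreate backups /dev/sdb /dev/sdc /dev/sde /dev/sdd # verify with: vgdisplay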
lvcreate -l 100%FREE -n storage backups # to verify: lvdisplay
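Note that without a stripes option this creates a linear volume; to actually stripe across all four disks you can add -i (number of stripes) and optionally -I (stripe size), for example:
lvcreate -l 100%FREE -i 4 -n storage backups  # 4 stripes, one per physical volume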
Create a filesystem on your 16TiB device:
mkfs.xfs -L storage /dev/mapper/backups-storage
This upgrade from lenny to squeeze is not more complex than the update from etch to lenny.
If you read everything carefully, your server will still be running after the upgrade, too. :)
Before you go on please read the official Debian release notes:
Recording your session:
script -t 2>~/upgrade-squeezestep.time -a ~/upgrade-squeezestep.script
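Later you can replay the recorded session with scriptreplay (same file names as above assumed):
scriptreplay ~/upgrade-squeezestep.time ~/upgrade-squeezestep.script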
First you should update your running system:
aptitude update && aptitude dist-upgrade
Check the package state:
dpkg --audit
It will show any packages which have a status of Half-Installed or Failed-Config, and those with any error status.
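It can also be worth checking for packages on hold, since they can block a clean dist-upgrade:
dpkg --get-selections | grep 'hold$'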
If a kernel upgrade was installed, please reboot. After the updates have finished successfully, replace the sources from lenny to squeeze:
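A common way to do this (assuming a plain /etc/apt/sources.list without extra repositories) is:
sed -i 's/lenny/squeeze/g' /etc/apt/sources.list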
root@web2:$ rm pe-warn-*.bz2
-bash: /bin/rm: Argument list too long
This problem happens when you try to delete too many files in a directory at once – the shell expands the wildcard into a huge argument list, which exceeds the kernel’s limit for a single command (it is not rm itself that has the limit).
To solve the problem, use:
find . -name 'pe-warn-*.bz2' | xargs rm
or
1 | find . -name "pe-warn-*.bz2" -delete |
find . -name "pe-warn-*.bz2" -delete
Everyone has already lost some important files at some point, like photos or important documents.
After a normal Windows crash it’s not a problem to get all your data back.
I’ll show you some methods to get your data back.
These two examples are taken directly from the ddrescue info pages.
Example 1: Rescue an ext3 partition in /dev/hda2 to /dev/hdb2
dd_rescue /dev/hda2 /dev/hdb2 -l logfile.txt
e2fsck -v -f /dev/hdb2
mount -t ext3 -o ro /dev/hdb2 /mnt
If you have a damaged hard disk /dev/sda1 and a second disk /dev/sdb1 with enough empty space, you can rescue the data from /dev/sda1 into an image file on it (here /dev/sdb1 is assumed to be mounted on /mnt/backup):
dd_rescue /dev/sda1 /mnt/backup/backup.img
# To access the rescued data later, mount the image via a loop device:
mkdir -p /mnt/rescue
mount -t ext3 -o loop /mnt/backup/backup.img /mnt/rescue
Example 2: Rescue a CD-ROM in /dev/cdrom
ddrescue -b 2048 /dev/cdrom cdimage logfile
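If some sectors could not be read on the first pass, GNU ddrescue can retry just the bad areas with direct disc access, reusing the same logfile:
ddrescue -b 2048 -d -r3 /dev/cdrom cdimage logfile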