lsof is a tool for locating open files. What makes this especially useful is that in Linux, everything is treated as a file: pipes, directories, devices, inodes, sockets and so on.
lsof (no options) will list all files opened by any processes currently running. To restrict this to processes owned by username, use lsof -u username. Here's some sample output:
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
sshd 2354 nobs mem REG 254,0 14880 105723 /lib/libcap.so.1.10
sshd 2354 nobs DEL REG 0,8 127123574 /dev/zero
bash 2363 nobs cwd DIR 254,4 20480 7274497 /home/nobs
bash 2363 nobs txt REG 254,0 769368 4126 /bin/bash
bash 2363 nobs mem REG 254,0 97928 105698 /lib/ld-2.3.6.so
The FD column shows file descriptor information, or identifies other types of file. Here, cwd indicates the current working directory, and txt indicates program text. The TYPE column has filetype info (REG indicates a regular file). The NODE column may be useful if you're trying to recover a deleted file. See the man page for a full explanation of the output.
lsof filename shows which processes have a file of that name open. lsof +D /directory will show processes that have open files under that directory. You can use this if you're trying to unmount a filesystem but getting an 'in use' error, to find the processes using files on that filesystem, and kill them as required.
lsof -c processname will show all processes beginning with processname that have files open; lsof -p PID does the same thing for a process ID. Using lsof -i will get you information about IP sockets. Check out the man page for more detail and for the many other available options. lsof is really a fun one to play around with.
Cheers
Nobs
Monday, June 29, 2009
Recovering Deleted Files With lsof
One of the neater things you can do with the versatile utility lsof is use it to recover a file you've just accidentally deleted.
A file in Linux is a pointer to an inode, which contains the file data (permissions, owner and where its actual content lives on the disk). Deleting the file removes the link, but not the inode itself – if another process has it open, the inode isn't released for writing until that process is done with it.
To try this out, create a test file called testing.txt, save it and then type less testing.txt. Open another terminal window, and type rm testing.txt. If you try ls testing.txt you'll get an error message. But! less still has a reference to the file. So:
# lsof | grep testing.txt
less 4607 juliet 4r REG 254,4 21 8880214 /home/juliet/testing.txt (deleted)
The important columns are the second one, which gives you the PID of the process that has the file open (4607), and the fourth one, which gives you the file descriptor (4). Now, we go look in /proc, where there will still be a reference to the inode, from which you can copy the file back out:
# ls -l /proc/4607/fd/4
lr-x------ 1 juliet juliet 64 Apr 7 03:19
/proc/4607/fd/4 -> /home/juliet/testing.txt (deleted)
# cp /proc/4607/fd/4 testing.txt.bk
Note: don't use the -a flag with cp, as this will copy the (broken) symbolic link, rather than the actual file contents.
Now check the file to make sure you've got what you think you have, and you're done!
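If you want to see the mechanism without setting up less, the same trick can be reproduced with nothing but the shell and /proc. A minimal sketch (paths are just examples, Linux only):

```shell
# Hold a descriptor open, delete the file, then copy it back out of /proc.
echo "hello" > /tmp/demo.txt
exec 3< /tmp/demo.txt                  # fd 3 now plays the role of less
rm /tmp/demo.txt                       # the name is gone, the inode is not
cp "/proc/$$/fd/3" /tmp/demo.restored  # cp follows the /proc link to the data
exec 3<&-                              # only now is the inode released
cat /tmp/demo.restored
```

The cat at the end prints the original contents even though the file was deleted before the copy.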
Cheers
Nobs
Friday, June 26, 2009
Difference Between Soft Link and Hard Link
Hard Links:
1. All links have the same inode number.
2. ls -l shows the number of links in the link count (second) column.
3. Each link holds the actual file contents.
4. Removing any one link just reduces the link count; it doesn't affect the other links.
Soft Links (Symbolic Links):
1. The link and the original file have different inode numbers.
2. ls -l shows the link with a link count of 1, pointing to the original file.
3. The link holds only the path to the original file, not its contents.
4. Removing the soft link doesn't affect anything, but removing the original file leaves a "dangling" link that points to a nonexistent file.
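A quick way to see these differences for yourself (file names here are just examples):

```shell
# Compare inode numbers of a hard link and a symlink, then remove the original.
rm -f /tmp/orig.txt /tmp/hard.txt /tmp/soft.txt
echo data > /tmp/orig.txt
ln /tmp/orig.txt /tmp/hard.txt          # hard link: shares the inode
ln -s /tmp/orig.txt /tmp/soft.txt       # symlink: own inode, stores only a path
[ "$(stat -c %i /tmp/orig.txt)" = "$(stat -c %i /tmp/hard.txt)" ] && echo "same inode"
[ "$(stat -c %i /tmp/orig.txt)" != "$(stat -c %i /tmp/soft.txt)" ] && echo "different inode"
rm /tmp/orig.txt
cat /tmp/hard.txt                       # contents survive through the hard link
cat /tmp/soft.txt 2>/dev/null || echo "dangling link"
```

After the rm, the hard link still serves the data while the symlink is left dangling.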
Regards
Nobs
Wednesday, June 24, 2009
Migrating LVM Volumes Over Network (using snapshots)
We run a big share of Xen virtual servers spanned over multiple machines, and if you want the full or best capability of Xen, I would suggest LVM (Logical Volume Manager). It makes life a lot easier, especially for those who do not run a RAID setup (we run RAID10 on all VM nodes), as you can span a volume over multiple hard drives. I'm not going to cover setting up LVM, as there are loads of tutorials on how to do that; instead I will cover the best way to migrate an LVM volume.
First, we need to create a snapshot of the LVM volume, since we cannot safely image the live version. We run the following line:
lvcreate -L20G -s -n storageLV_s /dev/vGroup/storageLV
The 20G part is the size of the snapshot LV. I would suggest looking up the size of the real original LV and making the snapshot the same; you can find the size with this command:
lvdisplay /dev/vGroup/storageLV
There will be an "LV Size" field; take the value from there and put it in the command. The -n switch sets the name; I usually name snapshots the same as the LV with a trailing _s for snapshot. The last argument is simply the real LV that we want to make a snapshot of.
Afterwards, we will use dd in a different way. A single dd alternates between reading and writing, which makes it crawl; to get around this, we pipe a reading dd into a writing dd, so the copy runs at roughly the full speed of the slower hard drive. To speed it up a bit more, we use a block size of 64K.
dd if=/dev/vGroup/storageLV_s conv=noerror,sync bs=64k | dd of=/migrate/storageLV_s.dd bs=64k
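You can try the reader-dd piped into writer-dd pattern on ordinary files first (paths here are stand-ins for the LVs):

```shell
# A reading dd piped into a writing dd, with ordinary files standing in for LVs.
dd if=/dev/zero of=/tmp/src.img bs=64k count=4 2>/dev/null   # make a 256K "volume"
dd if=/tmp/src.img bs=64k 2>/dev/null | dd of=/tmp/dst.img bs=64k 2>/dev/null
cmp -s /tmp/src.img /tmp/dst.img && echo "images identical"
```

cmp confirms the destination is a byte-for-byte copy of the source.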
I won't cover the file transfer process, as there are multiple methods. If you want to use SCP, keep in mind that its encryption really slows things down. Our nodes usually have httpd installed, so I simply changed the configuration to listen on a different port (for security) and pointed DocumentRoot at /migrate.
Once you've got the file onto the target server, you'll need to re-create the LV there:
lvcreate -L20G -n storageLV vGroup
Keep the same size, use the same name (this time without the trailing _s, as it won't be a snapshot), and give the volume group at the end.
The last step is to actually restore the image using dd, again using our block-size & pipe tweak for better performance.
dd if=/migrate/storageLV_s.dd conv=noerror,sync bs=64k | dd of=/dev/vGroup/storageLV bs=64k
I have migrated around 16 LVs with this method without any problems: 13 of them were 20G each, two were 40G and one was 75G. Every part is fast, although I have to admit the slowest part was the file transfer. I would suggest using a Gbit crossover cable, or even better a Gbit switch; if you have neither but you're right next to the server, consider a spare USB 2.0 HDD, which is much faster than 100 Mbps Ethernet (USB 2.0 is around 480 Mbps).
Thanks
Nobs
Tuesday, June 23, 2009
Load Alert Script for Server
The following script will send a mail to the email addresses mentioned if the server load goes above 5. The mail will contain the top 20 CPU-consuming processes, the top 10 memory-consuming processes, memory and swap status, disk space information and the uptime.
1 . Login to the server as root
# vi /root/loadalert
2. Add the below script
-------------------------------------------
#!/bin/bash
#Wednesday, December 06 2006
EMAIL="test@gmail.com"
EMAIL1="test@yahoo.com"
SUBJECT="$(hostname) load is"
TEMPFILE="/tmp/$(hostname)"
echo "Load average has crossed the limits..." >> $TEMPFILE
echo "Hostname: $(hostname)" >> $TEMPFILE
echo "Local Date & Time : $(date)" >> $TEMPFILE
echo "| Uptime status: |" >> $TEMPFILE
echo "------------------" >> $TEMPFILE
/usr/bin/uptime >> $TEMPFILE
echo "------------------" >> $TEMPFILE
echo "| Top 20 CPU consuming processes: |" >> $TEMPFILE
ps aux | head -1 >> $TEMPFILE
ps aux --no-headers | sort -rnk3 | head -20 >> $TEMPFILE
echo "| Top 10 memory-consuming processes: |" >> $TEMPFILE
ps aux --no-headers | sort -rnk4 | head >> $TEMPFILE
echo "---------------------------" >> $TEMPFILE
echo "| Memory and Swap status: |" >> $TEMPFILE
/usr/bin/free -m >> $TEMPFILE
echo "------------------------------" >> $TEMPFILE
echo "| Disk Space information: |" >> $TEMPFILE
echo "---------------------------" >> $TEMPFILE
/bin/df -h >> $TEMPFILE
echo "------THE END----------------" >> $TEMPFILE
L05="$(uptime|awk '{print $(NF-2)}'|cut -d. -f1)"
if test $L05 -gt 5
then
mail -s "$SUBJECT $L05" "$EMAIL" < $TEMPFILE
mail -s "$SUBJECT $L05" "$EMAIL1" < $TEMPFILE
fi
rm -f $TEMPFILE
-----------------------------------
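To see what the L05 line in the script above actually extracts, here it is run against a canned uptime line (the load values are made up):

```shell
# $(NF-2) picks the 1-minute load average (with a trailing comma);
# cut -d. -f1 keeps only the integer part for the -gt comparison.
line=" 23:40:01 up 10 days,  2:05,  3 users,  load average: 6.15, 3.10, 1.05"
echo "$line" | awk '{print $(NF-2)}' | cut -d. -f1
```

This prints 6, which the script then compares against the threshold of 5.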
Change permission
3. chmod +x /root/loadalert
4. Add cron
# vi /var/spool/cron/root
* * * * * /root/loadalert >/dev/null 2>&1
5. Restart Cron
# /etc/init.d/crond restart
6. Check the cron log for error messages...
# tail -f /var/log/cron
Thanks
Nobs
Volume Labels
Volume labels make it possible for partitions to retain a consistent name. Each can be a maximum of 16 characters long. There are three tools to make volume labels: mke2fs, tune2fs and e2label.
e2label /dev/hda1 omega
tune2fs -L omega /dev/hda1
The above 2 commands will label the first partition of the drive “omega”. That label stays with that particular partition, even if the drive is moved to another controller or even another computer.
mke2fs -L omega /dev/hda1
This command also sets the label, but only as part of making a new filesystem. That means running it will delete any existing data in the partition.
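Once the label is set, you can refer to the partition by label rather than device name; for example, an /etc/fstab entry (mount point and filesystem type here are just examples):

```
LABEL=omega  /home  ext3  defaults  1  2
```

mount LABEL=omega /home works the same way from the command line, and keeps working even if the drive moves to a different controller.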
Cheers
Nobs
CSF Installation
Install
-------
rm -fv csf.tgz
wget http://www.configserver.com/free/csf.tgz
tar -xzf csf.tgz
cd csf
sh install.sh
If you would like to disable APF+BFD (which you will need to do if you have
them installed, otherwise they will conflict horribly):
sh disable_apf_bfd.sh
That’s it. You can then configure csf and lfd in WHM, or edit the files
directly in /etc/csf/*
CSF is pre configured to work on a cPanel server with all the standard cPanel
ports open. It also auto-configures your SSH port if it’s non-standard on
installation.
You should ensure that the kernel logging daemon (klogd) is enabled. Typically, VPS
servers have this disabled and you should check /etc/init.d/syslog and make
sure that any klogd lines are not commented out. If you change the file,
remember to restart syslog.
Uninstallation
------------------
Removing csf and lfd is even simpler:
cd /etc/csf
sh uninstall.sh
Thanks
Nobs
About Kernel Versioning
Command to show the running kernel version:
[root@wordsworth modules]# uname -r
2.6.9-42.0.3.ELsmp
Kernel Version Numbers:
The Linux kernel version numbers consist of three numbers separated by decimals, such as 2.2.14. The first number is the major version number. The second number is the minor revision number. The third number is the patch level version.
At any given time there is a group of kernels that are considered “stable releases” and another group that is considered “development.” If the second number of a kernel is even, then that kernel is a stable release. For example, the 2.2.14 kernel is a stable release because the second number is even. If the second number is odd, then that kernel is a development release. For example, 2.3.51 is a development release because the second number is odd.
Once the 2.3.x branch is considered finished, then it will become the 2.4.0 kernel. Patches will then appear for the 2.4.x branch and development work will begin on the 2.5.x branch. If the 2.3.x advancements are significant enough to be considered a major revision, the 2.3.x branch will become 3.0.0 and development work will begin on the 3.1.x branch.
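The even/odd rule is easy to apply from a script; a minimal sketch using the version string shown above:

```shell
# Apply the even-minor = stable rule to a kernel version string.
v="2.6.9-42.0.3.ELsmp"
major=$(echo "$v" | cut -d. -f1)
minor=$(echo "$v" | cut -d. -f2)
if [ $((minor % 2)) -eq 0 ]; then
    echo "$major.$minor branch: stable release"
else
    echo "$major.$minor branch: development release"
fi
```

For 2.6.9 this reports a stable release, since 6 is even.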
Thanks
Nobs
Monday, June 22, 2009
Disable Direct Root Login
In order to disable direct root login on a Linux server, you need to do the following:
1. vi /etc/ssh/sshd_config and in that file set
PermitRootLogin no
then save it.
2. Restart sshd service
#/etc/init.d/sshd restart
3. Now create a new user and set password for that user.
4. Add that user to the wheel group
# vi /etc/group
Add the user to the wheel group by appending the username to the end of the wheel entry in the 'group' file.
The entry should look like this:
wheel:x:10:root,user_here
5. Now log on to the server using that username and password, then run su - and provide the root password to get root access.
In order for this to work properly, these files should have their usual permissions:
chmod 4755 /bin/su
chmod 644 /etc/passwd
chmod 600 /etc/shadow
chmod 644 /etc/group
If anything is wrong with these permissions, you may get 'permission denied' or 'incorrect password' errors.
Regards
Nobs
DDOS Attack Detection
A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer resource unavailable to its intended users. One common method of attack involves saturating the target (victim) machine with external communications requests, such that it cannot respond to legitimate traffic, or responds so slowly as to be rendered effectively unavailable.
Using the command given below, you can find out the list of IPs which are DDoSing your server at a particular moment (it counts the TCP connections per remote IP).
netstat -anp|grep tcp|awk '{print $5}'| cut -d : -f1 | sort | uniq -c | sort -n
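The counting tail of that pipeline can be tried on canned input (these addresses are made up):

```shell
# cut strips the port, uniq -c counts connections per address,
# and sort -n puts the busiest IP on the last line.
printf '1.2.3.4:80\n1.2.3.4:443\n5.6.7.8:22\n' \
  | cut -d : -f1 | sort | uniq -c | sort -n
```

The last line of output shows the address with the most connections, here 1.2.3.4 with a count of 2.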
Regards
Nobs
Command to Find out pid of our Own shell
Do you know how to find the PID of your own shell on a server where there are hundreds of shell connections?
Well, try this:
echo $$
This command will give you the PID of your shell; if you kill that process ID, you will be logged out.
Cheers
Nobs
Yum Installation for Centos 5
Download URL
------------
http://mirror.centos.org/centos-5/5.0/os/i386/CentOS/
Download the following rpms
---------------------------
http://mirror.centos.org/centos-5/5.0/os/i386/CentOS/yum-3.0.5-1.el5.centos.2.noarch.rpm
http://mirror.centos.org/centos-5/5.0/os/i386/CentOS/python-elementtree-1.2.6-5.i386.rpm
http://mirror.centos.org/centos-5/5.0/os/i386/CentOS/python-sqlite-1.1.7-1.2.1.i386.rpm
http://mirror.centos.org/centos-5/5.0/os/i386/CentOS/python-urlgrabber-3.1.0-2.noarch.rpm
http://mirror.centos.org/centos-5/5.0/os/i386/CentOS/m2crypto-0.16-6.el5.1.i386.rpm
RPM Installation
----------------
rpm -ivh m2crypto-0.16-6.el5.1.i386.rpm
rpm -ivh python-urlgrabber-3.1.0-2.noarch.rpm
rpm -ivh python-sqlite-1.1.7-1.2.1.i386.rpm
rpm -ivh python-elementtree-1.2.6-5.i386.rpm
rpm -ivh yum-3.0.5-1.el5.centos.2.noarch.rpm
Update Yum
-----------
yum update
Thanks
Nobs
To Create a File with Specific Size
To create a test file with a specified size of 50 KB:
dd if=/dev/zero of=Testfile bs=1024 count=50
Where,
dd - a common UNIX program whose primary purpose is the low-level copying and conversion of raw data; the name comes from the "data definition" statement of IBM JCL
if - input file (here /dev/zero, an endless source of zero bytes)
of - output file (the name of the file to be created)
bs - block size (the default is 512 bytes)
count - the number of blocks to copy (50 blocks of 1024 bytes gives 50 KB)
This is helpful when we require a test file with some minimum size to check the download speed or to check network performance.
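You can verify the arithmetic: 50 blocks of 1024 bytes should come out at exactly 51200 bytes (file path is just an example):

```shell
# Create the test file, then print its size in bytes.
dd if=/dev/zero of=/tmp/Testfile bs=1024 count=50 2>/dev/null
stat -c %s /tmp/Testfile     # size in bytes: 50 * 1024 = 51200
```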
Cheers
Nobs
Sunday, June 21, 2009
Hide Commands in Shell
To hide the commands you are entering in shell, use "stty" command
#stty -echo
Now, all commands that you type are invisible.
To disable this mode, issue the following command at the shell prompt:
#stty echo
Cheers
Nobs
Rebuilding Rpmdb
Getting the following error while running up2date
rpmdb: Program version 4.2 doesn’t match environment version
error: db4 error(22) from dbenv->open: Invalid argument
error: cannot open Packages index using db3 - Invalid argument (22)
error: cannot open Packages database in /var/lib/rpm
Steps to resolve
1. Check for processes holding the rpm database open (usually in MUTEX/FUTEX states):
lsof | grep /var/lib/rpm
If it finds any, kill -9 them all.
2. Delete any temporary DB files:
rm -fv /var/lib/rpm/__*
3. Rebuild your RPM database:
rpm --rebuilddb -v -v
If you still have problems, a reboot is probably quickest, then repeat steps 2 and 3 above.
Regards
Nobs
Error : Maximum file limit has been reached
Many times we get an error saying the maximum number of files that can be opened has reached its limit.
To resolve this, log in as root on your server and edit the file /etc/sysctl.conf:
vi /etc/sysctl.conf
Add the line there as
fs.file-max = 22992
Save and exit from the file.
To apply the changes, run:
# sysctl -p
This will increase the maximum number of open files for your system.
Cheers
Nobs
ICMP IP Scan Using NMAP
Type the following command to run ICMP IP Scan
# nmap -sP -PI 192.168.1.0/24
Output:
Starting Nmap 4.20 ( http://insecure.org ) at 2008-01-29 23:40 IST
Host 192.168.1.1 appears to be up.
MAC Address: 00:18:39:6A:C6:8B (Cisco-Linksys)
Host 192.168.1.106 appears to be up.
......
...
....
Nmap finished: 256 IP addresses (2 hosts up) scanned in 5.746 seconds
Where,
* -sP : tells Nmap to perform only a ping scan (host discovery) and print out the available hosts that responded.
* -PI : tells Nmap to send ICMP echo requests.
Cheers
Nobs
Redirect Iptables Log to a Different Log File
According to man page:
Iptables is used to set up, maintain, and inspect the tables of IP packet filter rules in the Linux kernel. Several different tables may be defined. Each table contains a number of built-in chains and may also contain user defined chains.
By default, iptables logs messages to the /var/log/messages file. However, you can change this location. I will show you how to use a new logfile called /var/log/iptables.log; a separate file makes it easier to build statistics and to analyze the attacks.
Iptables default log file
For example, if you type the following command, it will display current iptables log from /var/log/messages file:
tail -f /var/log/messages
Output:
————————————————————————–
Oct 4 00:44:28 debian gconfd (anish-4435): Resolved address “xml:readonly:/etc/gconf/gconf.xml.defaults” to a read-only configuration source at position 2
Oct 4 01:14:19 debian kernel: IN=ra0 OUT= MAC=00:17:9a:0a:f6:44:00:08:5c:00:00:01:08:00 SRC=200.142.84.36 DST=192.168.1.2 LEN=60 TOS=0x00 PREC=0x00 TTL=51 ID=18374 DF PROTO=TCP SPT=46040 DPT=22 WINDOW=5840 RES=0x00 SYN URGP=0
—————————————————————————–
Procedure to log the iptables messages to a different log file
Open your /etc/syslog.conf file:
vi /etc/syslog.conf
Append following line
kern.warning /var/log/iptables.log
Save and close the file.
Restart syslogd. On Debian / Ubuntu Linux: /etc/init.d/sysklogd restart. On Red Hat / CentOS / Fedora Core Linux: /etc/init.d/syslog restart.
Now make sure you pass the log-level 4 option with log-prefix to iptables. For example:
DROP everything and Log it
iptables -A INPUT -j LOG --log-level 4
iptables -A INPUT -j DROP
For example, drop and log all connections from IP address 64.55.11.2 to your /var/log/iptables.log file:
iptables -A INPUT -s 64.55.11.2 -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix '** HACKERS **' --log-level 4
iptables -A INPUT -s 64.55.11.2 -j DROP
Where,
* --log-level 4: Level of logging. Level 4 is "warning".
* --log-prefix '*** TEXT ***': Prefix log messages with the specified prefix (TEXT), up to 29 characters; useful for distinguishing messages in the logs.
You can now see all iptables messages logged to the /var/log/iptables.log file:
tail -f /var/log/iptables.log
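Once the messages land in their own file, plain text tools can turn them into the statistics mentioned above. A minimal sketch (the SRC= field name matches the sample kernel log line shown earlier; the log path assumes the configuration above):

```shell
# Count connection attempts per source IP in the iptables log.
# Each kernel log line carries SRC=<ip>; pull it out and tally.
grep -o 'SRC=[0-9.]*' /var/log/iptables.log |
  cut -d= -f2 |
  sort | uniq -c | sort -rn | head
```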
Thanks
Nobs
History with Time and Date
The history command gives you the list of commands you executed earlier. By default, it does not show the execution date and time of each command.
If you want history entries with date and time, do the following.
Open the /etc/bashrc file and add the following line:
export HISTTIMEFORMAT="%h%d - %H:%M:%S"
After adding this line, log in to the shell again and run the history command.
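Since HISTTIMEFORMAT uses strftime codes, you can preview what the format string will produce by handing the same codes to date (%h is the abbreviated month name, equivalent to %b):

```shell
export HISTTIMEFORMAT="%h%d - %H:%M:%S "   # trailing space separates timestamp from the command
# Preview the same strftime codes with date:
date +"%h%d - %H:%M:%S"    # e.g. Jun29 - 12:15:08
```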
Regards
Nobs
Friday, June 19, 2009
Auto login Using Expect script
The three commands send, expect, and spawn are the building blocks of Expect. The send command sends strings to a process, the expect command waits for strings from a process, and the spawn command starts a process.
This script will automatically connect to the ftp server and will just do an ls on the server
#!/usr/sbin/expect
spawn ftp domain.com
expect "Name (domain.com:root):"
send "test@domain.com\r"
expect "Password:"
send "qwert\r"   ;# qwert is the password; \r sends the Enter key
expect "ftp>"
send "ls\r"
expect "ftp>"
send "bye\r"
expect eof
Save this as a file and give it permission 755.
After that, run it as:
# expect scriptname
Cheers
Nobs
Thursday, June 18, 2009
Difference Between ‘mount’ and ‘mount -a’
There is a slight difference between the commands "mount" and "mount -a".
1. When you type "mount" with no arguments, it displays the currently mounted filesystems, which is the same information kept in the file /etc/mtab.
For example,
# mount
/dev/sda5 on / type ext3 (rw,usrquota)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/sda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/sda7 on /home type ext3 (rw,usrquota)
/dev/sda8 on /tmp type ext3 (rw,noexec,nosuid)
/dev/sda3 on /usr type ext3 (rw,usrquota)
/dev/sda2 on /var type ext3 (rw,usrquota)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/tmp on /var/tmp type none (rw,noexec,nosuid,bind)
The content of the file /etc/mtab is:
# cat /etc/mtab
/dev/sda5 / ext3 rw,usrquota 0 0
none /proc proc rw 0 0
none /sys sysfs rw 0 0
none /dev/pts devpts rw,gid=5,mode=620 0 0
usbfs /proc/bus/usb usbfs rw 0 0
/dev/sda1 /boot ext3 rw 0 0
none /dev/shm tmpfs rw 0 0
/dev/sda7 /home ext3 rw,usrquota 0 0
/dev/sda8 /tmp ext3 rw,noexec,nosuid 0 0
/dev/sda3 /usr ext3 rw,usrquota 0 0
/dev/sda2 /var ext3 rw,usrquota 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
/tmp /var/tmp none rw,noexec,nosuid,bind 0 0
2. When you type the command "mount -a", it mounts every filesystem listed in the file /etc/fstab (skipping those already mounted or marked noauto).
# cat /etc/fstab
# This file is edited by fstab-sync - see 'man fstab-sync' for details
LABEL=/ / ext3 defaults,usrquota 1 1
LABEL=/boot /boot ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
LABEL=/home /home ext3 defaults,usrquota 1 2
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
LABEL=/tmp /tmp ext3 defaults 1 2
LABEL=/usr /usr ext3 defaults,usrquota 1 2
LABEL=/var /var ext3 defaults,usrquota 1 2
LABEL=SWAP-sda6 swap swap pri=0,defaults 0 0
Note: /etc/mtab reflects what is currently mounted, including temporary mounts such as USB drives, while /etc/fstab lists the filesystems that are configured to be mounted on the server.
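The /etc/mtab lines above are whitespace-separated fields: device, mount point, filesystem type, options, dump, and pass. A quick awk sketch that lists just device, mount point, and type, mimicking mount's own output:

```shell
# Print device, mount point, and filesystem type from /etc/mtab
awk '{ printf "%-12s on %-20s type %s\n", $1, $2, $3 }' /etc/mtab
```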
Changing Time Zone in Linux
Change Time Zone
1. Logged in as root, check which time zone your machine is currently using by executing `date`. You'll see something like Mon 17 Jan 2005 12:15:08 PM PST; PST in this case is the current time zone.
2. Change to the directory /usr/share/zoneinfo; here you will find a list of time zone regions. Choose the most appropriate region; if you live in Canada or the US, this is the "America" directory.
3. If you wish, backup the previous timezone configuration by copying it to a different location. Such as
mv /etc/localtime /etc/localtime-old
4. Create a symbolic link from the appropriate time zone to /etc/localtime. Example:
ln -s /usr/share/zoneinfo/Europe/Amsterdam /etc/localtime
5. If you have the utility rdate, update the current system time by executing
/usr/bin/rdate -s time.nist.gov
6. Set the ZONE entry in the file /etc/sysconfig/clock (e.g. "America/Los_Angeles").
7. Set the hardware clock by executing:
/sbin/hwclock --systohc
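The symlink in step 4 can be tried safely in a scratch location first; readlink confirms where the link points (the /tmp path here is illustrative):

```shell
zone=/usr/share/zoneinfo/Europe/Amsterdam
ln -sf "$zone" /tmp/localtime-test   # -f overwrites any existing link
readlink /tmp/localtime-test         # prints the zone path the link targets
```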
How to Change Date and Time
You can change the date and time on a Linux machine using the date command.
Eg: If you want to change the date to July 31, 11:16 pm, then type as follows:
date 07312316
If you want to change the year as well, you could type
date 073123161998
You can also use the following:
date -s "31 JULY 1998 23:16:00"
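The bare-numbers form is MMDDhhmm, optionally followed by the year (CCYY). With GNU date you can verify the encoding by formatting a known timestamp back into that shape (BSD/macOS date lacks -d):

```shell
# Render 31 July 1998 23:16 in MMDDhhmmCCYY form (GNU date)
date -d '1998-07-31 23:16:00' +%m%d%H%M%Y   # prints 073123161998
```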
Zombie process
A zombie process is an inactive computer process. According to the Wikipedia article: "…On Unix operating systems, a zombie process or defunct process is a process that has completed execution but still has an entry in the process table, allowing the process that started it to read its exit status. In the term's colorful metaphor, the child process has died but has not yet been reaped…"
Use the top or ps command to find zombie processes.
# top
OR
# ps aux | awk '{ print $8 " " $2 }' | grep -w Z
Output:
Z 4104
Z 5320
Z 2945
You cannot kill zombies, as they are already dead. But if you have too many zombies, then kill the parent process or restart the service.
You can kill a zombie's entry using the PID obtained from either of the above commands. For example, for the zombie process with PID 4104:
# kill -9 4104
Please note that kill -9 is not guaranteed to remove a zombie process. Write a script and schedule it as a cron job. The following is a script to kill zombie processes.
Code:
for each in `ps -ef | grep defunct | grep -v grep | awk '{ print $3 }'`; do
    # Kills every process whose ps line mentions the parent PID; crude, use with care
    for every in `ps -ef | grep $each | grep -v cron | awk '{ print $2 }'`; do
        kill -9 $every;
    done;
done
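The ps-based detection can be exercised on canned output. This sketch feeds fabricated `ps aux`-style lines (STAT is field 8, PID is field 2) through the same awk/grep pipeline used above:

```shell
printf '%s\n' \
  'root  4104  0.0  0.0  0 0 ?  Z  10:00  0:00 [httpd] <defunct>' \
  'nobs  5320  0.0  0.0  0 0 ?  Z  10:01  0:00 [perl] <defunct>'  \
  'nobs  2900  0.1  0.2  0 0 ?  S  10:02  0:00 bash'              |
  awk '{ print $8 " " $2 }' | grep -w Z
# prints:
# Z 4104
# Z 5320
```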
Useradd Options
Here are some useful useradd options I have come across. Check them out!
To create the user ‘nobs’ with home directory
# useradd -d /home/nobs nobs
To create a user with root privileges
# useradd -u 0 -o nobs
-u : UID 0 -> root
-o : Duplicate UID
# id nobs
uid=0(root) gid=502(nobs) groups=0(root)
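The uid=0 trick can be confirmed by reading the account's /etc/passwd entry, whose colon-separated fields are name:password:UID:GID:comment:home:shell. A sketch on a fabricated sample line:

```shell
# Third colon-separated field of a passwd entry is the UID
entry='nobs:x:0:502::/home/nobs:/bin/bash'
echo "$entry" | cut -d: -f3   # prints 0, i.e. root's UID
```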
The default home directory would be under /home; you can change it by editing the HOME directive in the file /etc/default/useradd:
HOME=/nobs
Regards
Nobs
How to Reset the MySQL Root Password (Unix)
In a Unix environment, the procedure for resetting the root password is as follows:
1. Log on to your system as either the Unix root user or as the same user that the mysqld server runs as.
2. Locate the .pid file that contains the server's process ID. The exact location and name of this file depend on your distribution, hostname, and configuration. Common locations are /var/lib/mysql/, /var/run/mysqld/, and /usr/local/mysql/data/. Generally, the filename has the extension .pid and begins with either mysqld or your system's hostname.
You can stop the MySQL server by sending a normal kill (not kill -9) to the mysqld process, using the pathname of the .pid file in the following command:
Code:
shell> kill `cat /mysql-data-directory/host_name.pid`
Note the use of backticks rather than forward quotes with the cat command; these cause the output of cat to be substituted into the kill command.
3. Create a text file and place the following command within it on a single line:
Code:
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MyNewPassword');
Save the file with any name. For this example the file will be ~/mysql-init.
4. Restart the MySQL server with the special --init-file=~/mysql-init option:
Code:
shell> mysqld_safe --init-file=~/mysql-init &
The contents of the init-file are executed at server startup, changing the root password. After the server has started successfully you should delete ~/mysql-init.
5. You should be able to connect using the new password.
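Steps 3 and 4 can be scripted. This sketch writes the init file to a temporary path rather than ~/mysql-init, to avoid clobbering anything; the mysqld_safe line is commented out because it needs a stopped server and the right privileges:

```shell
init=$(mktemp)   # stand-in for ~/mysql-init
cat > "$init" <<'EOF'
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MyNewPassword');
EOF
# mysqld_safe --init-file="$init" &   # run only with the server stopped
```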
How to Reset the Root Password (Any Platform)
This method works on any platform, but it is less secure:
1. Stop mysqld and restart it with the --skip-grant-tables --user=root options (Windows users omit the --user=root portion).
2. Connect to the mysqld server with this command:
shell> mysql -u root
3. Issue the following statements in the mysql client:
Code:
mysql> UPDATE mysql.user SET Password=PASSWORD('newpwd') WHERE User='root';
mysql> FLUSH PRIVILEGES;
Replace "newpwd" with the actual root password that you want to use.
4. You should be able to connect using the new password.
Cheers
Nobs
Monday, June 15, 2009
How to compile program under Linux / UNIX
Many new users find it difficult to compile programs in Linux. Usually the following steps are involved:
a] Download tar ball using wget
b] Untar tar ball using tar command
c] Compile program using make or configure command
d] Install software
Task: compiling program
Step # 1: Download program tar ball:
$ wget http://url-com/prog.tar.gz
Step # 2: Untar tar ball :
$ tar -zxvf prog.tar.gz
$ cd prog
Step # 3: Configure, compile, and install:
Configure program:
$ ./configure
Compile program:
$ make
Install program (must be run as root; log in using su or use sudo):
$ sudo make install
or
$ su -
$ make install
If you are still confused, look for the README file that comes with the package; every package ships one with proper compilation instructions.
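The untar step can be tried end to end on a throwaway archive (no network needed; the tarball here is created locally just to exercise the tar flags):

```shell
workdir=$(mktemp -d) && cd "$workdir"
mkdir prog && echo 'sample' > prog/README
tar -czf prog.tar.gz prog    # stand-in for the downloaded tar ball
rm -rf prog
tar -zxvf prog.tar.gz        # z = gunzip, x = extract, v = verbose, f = file
cd prog && cat README        # prints: sample
```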
Cheers
Nobs