Tuesday, August 28, 2012

Below are a few Solaris questions; I hope you find them useful.

1. Can a RAID level 5 volume in SVM withstand multiple disk failures?

Ans: Yes, provided enough disks are configured as hot spares. A RAID 5 volume can tolerate only one failed slice at a time, but if each failed slice is rebuilt onto a hot spare before the next failure occurs, the volume survives multiple failures over time.
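As a rough sketch (the volume and slice names below are only illustrative, not taken from a real system), a RAID 5 volume with an associated hot spare pool could be set up like this:

# metainit hsp001 c2t5d0s6 c2t6d0s6
# metainit d20 -r c2t1d0s6 c2t2d0s6 c2t3d0s6 -h hsp001

With hsp001 associated, a failed slice in d20 is automatically replaced and resynchronised from a spare, which is what allows the volume to ride out more than one failure over time.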

2. Will your Solaris operating system continue to function if all state database replicas are deleted?

Ans: Yes, Solaris continues to function normally; however, the system loses all Solaris Volume Manager configuration data if a reboot occurs with no state database replicas left on disk.
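To reduce the chance of losing every replica in the first place, replicas are usually spread across multiple disks. A hedged example, with the slices chosen purely for illustration:

# metadb -a -f -c 2 c0t0d0s7 c0t1d0s7
# metadb -i

The -c 2 option places two replicas on each listed slice, and metadb -i shows the status flags of all existing replicas.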

Latest update

Please use the link below, where I am uploading my new content.

Click here: http://solarigeek.blogspot.com

Solaris Volume Manager

Configuration file and Process details of SVM


I won't get into the details of Solaris Volume Manager, as there are plenty of articles available on the internet covering it.
Instead, I would like to share a few pieces of information that are handy for understanding the important concepts of SVM.
As you know, for each piece of software configured on a machine there are a few configuration files involved and a few processes associated with it.

So first, let's discuss the main configuration files for SVM.

The configuration files are located in /etc/lvm:

-rw-r--r--   1 root     sys          101 Aug 28 20:01 md.cf
-rw-r--r--   1 root     sys           95 Aug 28 20:35 mddb.cf


md.cf - metadevice configuration file 

The contents of the file are shown below:

# cat md.cf
# metadevice configuration file
# do not hand edit
d0 -m d10 d11 1
d10 1 1 c2t3d0s6
d11 1 1 c2t4d0s6

The same information can be displayed with the command below.

#metastat -p
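On this system the command should print essentially the same lines that md.cf holds:

d0 -m d10 d11 1
d10 1 1 c2t3d0s6
d11 1 1 c2t4d0s6
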
mddb.cf - metadevice database location details

Below is the output of cat on that file

# cat mddb.cf
#metadevice database location file do not hand edit
#driver minor_t daddr_t device id       checksum
sd      512     16      id1,sd@n6000c29d87e38ec3b5a13a31456d14c1/a      -3659
sd      512     8208    id1,sd@n6000c29d87e38ec3b5a13a31456d14c1/a      -11851
sd      512     16400   id1,sd@n6000c29d87e38ec3b5a13a31456d14c1/a      -20043

From the above output we can see that three state database replicas have been created on the same device.

Daemons for SVM

disabled         20:34:50 svc:/network/rpc/meta:default
disabled        20:34:50 svc:/system/metainit:default


You do not need to start these services manually. When you create the first state database replica on the machine, they come online on their own, and when you delete all the replicas, they go back into the disabled state.
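You can check the state of these services at any time with svcs:

#svcs svc:/network/rpc/meta:default svc:/system/metainit:default

A broader pattern match such as svcs -a | grep meta also works if you don't remember the full FMRIs.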

As you know, the state database is the foundation of every metadevice. If for some reason all state database replicas are corrupted, you lose the complete SVM configuration.

Recovering the SVM configuration from corrupted state database replicas

Follow the steps below in case all state database replicas are deleted or corrupted.

Step 1:

Recreate the state database replicas on the same slices that existed earlier, or use a different slice (metadb -a adds a replica, -f forces creation of the first one):

# metadb -a -f /dev/dsk/c2t3d0s0
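It is worth verifying the new replica before moving on (and, ideally, adding more than one for redundancy):

# metadb -i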

Step 2:

Now you need to rebuild the metadevice configuration. The first question that may arise in your mind is how to find out which metadevice was configured on which slice and whether it was mirrored. Don't worry: the configuration file (/etc/lvm/md.cf) explained earlier still has the complete details. You may be thinking that this file would also be emptied as a result of the state database corruption, but no; as mentioned earlier, SVM maintains the metadevice and state database configurations in different files, so even if you delete all the state database replicas, the metadevice configuration file remains intact.

# cat md.cf
# metadevice configuration file
# do not hand edit
d0 -m d10 d11 1
d10 1 1 c2t3d0s6
d11 1 1 c2t4d0s6

Now, using the above, you can easily rebuild your metadevices. As you can see, d0 is a mirror metadevice with two submirrors, d10 and d11:

#metainit  d10 1 1 c2t3d0s6
#metainit  d11 1 1 c2t4d0s6

#metainit d0 -m d10

#metattach d0 d11

So your metadevice is back in place.

NOTE: Do not create a new filesystem on this metadevice or you will lose all the previous data. Just mount the metadevice and access your old data.
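A minimal sketch of that last step, assuming the mirror held a UFS filesystem and /mnt is a free mount point (both assumptions, adjust for your environment):

# fsck -y /dev/md/rdsk/d0
# mount /dev/md/dsk/d0 /mnt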








Wednesday, August 22, 2012

Identifying HBA cards on Solaris servers

Installing and configuring an HBA on a server, or rather configuring multiple HBAs, is a very common task for a Unix administrator. There are two major manufacturers of HBA cards:
1. Emulex
2. Qlogic


For Sun-branded Qlogic and Emulex cards, the driver as well as support is provided by Sun. The difficult task is identifying whether a card installed on the server is Sun-branded or non Sun-branded.

A few commands are given below with which you can determine the type of the card and its WWPN number.

To display the connected card

#luxadm -e port

To display the wwn and wwpn details

#luxadm -e dump_map /devices/pci@23d,600000/SUNW,qlc@1/fp@0,0

prtpicl is a very powerful command that can be used to display the complete PCI hierarchy.

Prtpicl command usage

#prtpicl -v -c scsi-fcp
#prtpicl -v -c scsi-fcp | /usr/xpg4/bin/grep -e port-wwn -e devfs-path

For Solaris 10

#fcinfo hba-port
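The same command can also report per-port link error statistics, which is handy when a card is present but not logged into the fabric:

#fcinfo hba-port -l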


Monday, August 20, 2012

Solaris Flash image creation and installation

Creation and deployment of flar image:

-----------------------------------------------

Before deploying a flar image, you need to be sure that the clone system has the same hardware architecture: a flar image created on x86 cannot be deployed onto SPARC hardware. The filesystem must also have enough space to store the flar image, which raises the question of how much space is required to create one.

Run df -h and add up the used space of all the filesystems that will be included in the archive; that gives the approximate space required to create the flar image.
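For example, on a typical layout you might check just the filesystems going into the archive (the list below is only illustrative):

# df -h / /usr /var /opt /export/home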

Creating a flar image


#flar create -n Sol10 -c /var/opt/Sol10.flar

Here -n sets the archive name (Sol10) and -c compresses the archive; the archive name and the output file (Sol10.flar) can be given different names.

To list the contents of the flar image:

#flar info -l  /var/opt/Sol10.flar
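Running flar info without -l prints just the archive identification section (archive name, creation date, architecture and similar metadata) instead of the full file list:

#flar info /var/opt/Sol10.flar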


Once you have created the flar image, it can be deployed using NFS, HTTP, FTP, or a local tape.

Here I will explain the simple steps to deploy a flar image using NFS.

As you can see above, the flar image "Sol10.flar" is located in /var/opt.

Share /var/opt as an NFS share:

#share -F nfs -d "NFS share for flar image" /var/opt

That's it; the server side is done and the flar image is now shared.
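Before starting the install you can confirm the share is visible: share with no arguments lists the exported resources on the server, and showmount -e checks it from another host ("server" is the same placeholder used later in this post):

# share
# showmount -e server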

Installing the flar image:

Prerequisites:

1. The hardware has to be the same as the machine on which the flar image was created.
2. The machine should obviously be on the network.

Boot the server from CD-ROM, assuming you don't have a JumpStart server.

Select Interactive Install (i.e. option 1).

Continue with the installation process, which I believe everyone is familiar with.

Stop when you reach the steps below.


Here you need to select the second option, "Network File System".



You need to provide the absolute path of the flar image. "server" is the name of the server where the flar image is located; you can provide the server's IP address instead of its name.



And you are done: once you click Next, the installer installs the image and finally prompts you to reboot the server when it has completed.



