©2015 - 2022 Chad’s Technoworks

Chad’s TechnoWorks My Journal On Technology


Solaris NFS Server And Client Setup

NFS (Network File System) is a file-sharing technology originally developed by Sun Microsystems in the 1980s for the UNIX operating system. The concept is similar to the Windows shared folder that most users are familiar with. The majority of NFS implementations today are found on NAS devices, most of which run Linux. Since Linux is quite popular nowadays, most of my NFS server setups have been hosted on Linux. But there came a point when I hit a snag: the Linux server would either crash or hang. I found that the problem always lay in memory allocation and management when the Linux NFS server was stressed with hundreds of file updates. This scenario was a major setback in my implementation and would often require a lot of cleanup chores to restore the data integrity of the application. In my frustration, I decided to use the original NFS server engine available in the Solaris UNIX of Sun Microsystems (now Oracle). Nothing beats the original; it is just so reliable!

There are many reference documents available on the Internet on how to build a Solaris NFS server, such as the Solaris NFS FAQ. Below is the implementation for my lab setup, which will also be a useful reference for my future needs.



SETUP SOLARIS NFS SERVER

 

Create the directories to be shared on your NFS server:


pacific:oragrid> su -

Password:

Oracle Corporation      SunOS 5.10      Generic Patch   January 2005

# cd /dsk0/share

# mkdir crsdata1 crsdata2 crsdata3

# ls -l

total 8

drwxr-xr-x   2 root     root         512 Feb  5 16:35 crsdata1

drwxr-xr-x   2 root     root         512 Feb  5 16:35 crsdata2

drwxr-xr-x   2 root     root         512 Feb  5 16:35 crsdata3

drwxr-xr-x   4 oraem    orainst      512 Feb  3 14:45 oms


Optionally, you may change the ownership of the directories to a user of the client. This requires that the same OS user account exist, with the same UID and GID, on both the NFS server host and the NFS client host. In my case below, I had previously created the oragrid and oraem users on the NFS server host, matching the UIDs and GIDs of the client.


# chown oragrid:orainst crs*

# ls -l

total 8

drwxr-xr-x   2 oragrid  orainst      512 Feb  5 16:35 crsdata1

drwxr-xr-x   2 oragrid  orainst      512 Feb  5 16:35 crsdata2

drwxr-xr-x   2 oragrid  orainst      512 Feb  5 16:35 crsdata3

drwxr-xr-x   4 oraem    orainst      512 Feb  3 14:45 oms

#


Add the following entries to the file /etc/dfs/dfstab:


share -F nfs -o rw=s11node1:s11node2,root=s11node1:s11node2 /dsk0/share/crsdata1

share -F nfs -o rw=s11node1:s11node2,root=s11node1:s11node2 /dsk0/share/crsdata2

share -F nfs -o rw=s11node1:s11node2,root=s11node1:s11node2 /dsk0/share/crsdata3

share -F nfs -o rw /dsk0/share/oms
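Entries in /etc/dfs/dfstab take effect the next time the NFS server starts, but you can also publish them immediately and confirm the result. A minimal sketch, run as root on the server:

```shell
# Publish every share listed in /etc/dfs/dfstab without restarting the service
shareall

# List what is currently shared; each dfstab entry should appear here
share
```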


Manually start the NFS server service:


# /etc/init.d/nfs.server start
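On Solaris 10 and later, services are managed by SMF, so an alternative way to start the NFS server and make it persistent across reboots is through svcadm. A sketch, assuming the standard service name:

```shell
# Enable the NFS server service; this starts it now and at every boot
svcadm enable network/nfs/server

# Confirm the service state is "online"
svcs network/nfs/server
```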


Create a run-control symlink so the NFS server autostarts at boot:


# cd /etc/rc3.d

# ls

README          S16boot.server  S50apache       S80mipagent

# ln -s /etc/init.d/nfs.server S18nfs.server

# ls -l

total 16

-rw-r--r--   1 root     sys         1285 Jan 21  2005 README

-rwxr--r--   6 root     sys          474 Jan 21  2005 S16boot.server

lrwxrwxrwx   1 root     root          22 Feb  3 11:17 S18nfs.server -> /etc/init.d/nfs.server

-rwxr--r--   6 root     sys         2452 Jun 14  2013 S50apache

-rwxr--r--   6 root     sys          344 Jan 21  2005 S80mipagent

#




SETUP SOLARIS NFS CLIENT


Verify what is being shared from the server:


chad@s11node1:~$ showmount -e pacific

export list for pacific:

/dsk0/share/crsdata1 s11node1,s11node2

/dsk0/share/crsdata2 s11node1,s11node2

/dsk0/share/crsdata3 s11node1,s11node2

/dsk0/share/oms (everyone)

chad@s11node1:~$


As root, create the directories that will serve as mount points:


# mkdir -p /oem/app

# mkdir -p /ogrid/clusterdata1 /ogrid/clusterdata2 /ogrid/clusterdata3


Assign user ownership of the newly created mount-point directories:


# chown -R oraem:orainst /oem/app

# chown -R oragrid:orainst /ogrid/clusterdata*


Mount the shared directories:


mount -F nfs -o rw,hard,proto=tcp,vers=3,rsize=32768,wsize=32768 pacific:/dsk0/share/oms /oem/app

mount -F nfs -o rw,hard,proto=tcp,vers=3,rsize=32768,wsize=32768 pacific:/dsk0/share/crsdata1 /ogrid/clusterdata1

mount -F nfs -o rw,hard,proto=tcp,vers=3,rsize=32768,wsize=32768 pacific:/dsk0/share/crsdata2 /ogrid/clusterdata2

mount -F nfs -o rw,hard,proto=tcp,vers=3,rsize=32768,wsize=32768 pacific:/dsk0/share/crsdata3 /ogrid/clusterdata3


root@s11node1:~# df -h

Filesystem             Size   Used  Available Capacity  Mounted on

rpool/ROOT/solaris      34G   4.2G       4.0G    52%    /

/devices                 0K     0K         0K     0%    /devices

/dev                     0K     0K         0K     0%    /dev

ctfs                     0K     0K         0K     0%    /system/contract

proc                     0K     0K         0K     0%    /proc

mnttab                   0K     0K         0K     0%    /etc/mnttab

swap                   3.0G   1.6M       3.0G     1%    /system/volatile

objfs                    0K     0K         0K     0%    /system/object

sharefs                  0K     0K         0K     0%    /etc/dfs/sharetab

/usr/lib/libc/libc_hwcap1.so.1

                       8.2G   4.2G       4.0G    52%    /lib/libc.so.1

fd                       0K     0K         0K     0%    /dev/fd

rpool/ROOT/solaris/var

                        34G   230M       4.0G     6%    /var

swap                   3.2G   128M       3.0G     4%    /tmp

rpool/VARSHARE          34G    97K       4.0G     1%    /var/share

rpool/export            34G    32K       4.0G     1%    /export

rpool/export/home       34G    35K       4.0G     1%    /export/home

rpool/export/home/chad

                        34G   813K       4.0G     1%    /export/home/chad

rpool/export/home/oraem

                        34G    13G       4.0G    77%    /export/home/oraem

rpool/export/home/oragrid

                        34G   1.6G       4.0G    29%    /export/home/oragrid

rpool                   34G   5.0M       4.0G     1%    /rpool

rpool/VARSHARE/zones    34G    31K       4.0G     1%    /system/zones

rpool/VARSHARE/pkg      34G    32K       4.0G     1%    /var/share/pkg

rpool/VARSHARE/pkg/repositories

                        34G    31K       4.0G     1%    /var/share/pkg/repositories

/hgfs                   16G   4.0M        16G     1%    /hgfs

/tmp/VMwareDnD           0K     0K         0K     0%    /system/volatile/vmblock

pacific:/dsk0/share/oms

                        63G    18G        44G    30%    /oem/app

pacific:/dsk0/share/crsdata1

                        63G    18G        44G    30%    /ogrid/clusterdata1

pacific:/dsk0/share/crsdata2

                        63G    18G        44G    30%    /ogrid/clusterdata2

pacific:/dsk0/share/crsdata3

                        63G    18G        44G    30%    /ogrid/clusterdata3

root@s11node1:~#




For permanent NFS client mounts:


Edit /etc/vfstab to add the following entries:


pacific:/dsk0/share/crsdata1 - /ogrid/clusterdata1 nfs - yes rw,hard,proto=tcp,vers=3,rsize=32768,wsize=32768

pacific:/dsk0/share/crsdata2 - /ogrid/clusterdata2 nfs - yes rw,hard,proto=tcp,vers=3,rsize=32768,wsize=32768

pacific:/dsk0/share/crsdata3 - /ogrid/clusterdata3 nfs - yes rw,hard,proto=tcp,vers=3,rsize=32768,wsize=32768

pacific:/dsk0/share/oms - /oem/app nfs - yes rw,hard,proto=tcp,vers=3,rsize=32768,wsize=32768
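Once the vfstab entries are in place, you can test them without rebooting by mounting each filesystem by its mount point; mount fills in the device and options from /etc/vfstab. A sketch, assuming the mount points above:

```shell
# Mount by mount point; the NFS resource and options come from /etc/vfstab
mount /ogrid/clusterdata1
mount /ogrid/clusterdata2
mount /ogrid/clusterdata3
mount /oem/app

# Or mount every vfstab entry of type nfs in one step
mountall -F nfs
```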




To verify the current mount options and filesystem type:


mount -v | grep -i crsdata


 


Special NFS Mount Options for Oracle Grid Clusterware

If you are planning to use NFS for an Oracle Grid installation and clusterware configuration, the following are the suggested options for the NFS client mounts.

Reference doc: Oracle Grid Infrastructure Installation Guide For Solaris


On the cluster member nodes, you must set the values for the NFS buffer size parameters rsize and wsize to 32768.


The NFS client-side mount options for Oracle Grid Infrastructure binaries are:


rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,suid 0 0


If you have Oracle Grid Infrastructure binaries on an NFS mount, then you must include the suid option.


The NFS client-side mount options for Oracle Clusterware files (OCR and voting disk files) are:


rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio


Update the /etc/vfstab file on each node with an entry containing the NFS mount options for your platform.
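Putting the pieces together, vfstab entries for this lab using the Oracle-recommended options might look like the following. The hostname pacific and the paths are from my setup; treat this as a sketch to adapt, and note that the trailing "0 0" from the Oracle reference applies to Linux fstab format, not Solaris vfstab:

```
# Oracle Grid Infrastructure binaries (note the suid option)
pacific:/dsk0/share/oms - /oem/app nfs - yes rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,suid

# Oracle Clusterware files (OCR and voting disk files)
pacific:/dsk0/share/crsdata1 - /ogrid/clusterdata1 nfs - yes rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio
pacific:/dsk0/share/crsdata2 - /ogrid/clusterdata2 nfs - yes rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio
pacific:/dsk0/share/crsdata3 - /ogrid/clusterdata3 nfs - yes rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio
```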