** Gluster FS server **

Making a redundant, replicated 2-node Gluster service.

  * Check that every brick in the Gluster structure has the same data in its hosts file, or that all bricks resolve correctly from your DNS.
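For illustration, a minimal /etc/hosts carried on both bricks could look like this (the hostnames and the 192.168.120.x addresses are placeholders, adjust to your network):
<code>
192.168.120.10  gluster1.example.local  gluster1
192.168.120.11  gluster2.example.local  gluster2
</code>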
  * Install the EPEL repo: http://fedoraproject.org/wiki/EPEL
  * Allow traffic between bricks and from bricks to clients, at least TCP ports 24007:24047, 111 and 38465:38467, and UDP port 111; see the iptables sketch below.
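A sketch of matching iptables rules, assuming bricks and clients live in a 10.0.0.0/8 network (the source subnet is an illustration, not part of the original notes):
<code>
iptables -A INPUT -s 10.0.0.0/8 -p tcp -m multiport --dports 111,24007:24047,38465:38467 -j ACCEPT
iptables -A INPUT -s 10.0.0.0/8 -p udp --dport 111 -j ACCEPT
service iptables save
</code>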
  * Install packages and fire up Gluster on both servers / bricks:
<code>
yum install fuse fuse-libs glusterfs glusterfs-server glusterfs-fuse glusterfs-geo-replication
chkconfig glusterd on; service glusterd start
gluster peer probe <other node ip>
gluster peer status
</code>
  * You should see information about the other peer(s); check both bricks.
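For reference, healthy output looks roughly like this (the hostname and UUID are placeholders):
<code>
# gluster peer status
Number of Peers: 1

Hostname: <other node ip>
Uuid: 00000000-0000-0000-0000-000000000000
State: Peer in Cluster (Connected)
</code>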
  * Make and mount the volume to use for sharing (remember fstab; an example entry follows the code block below). You can create many logical volumes or a single one, and several Gluster volumes can live inside one logical volume to share the same capacity:
<code>
lvcreate -L1000G -n Glustervol1 VolGroup00
mkfs -t ext4 /dev/mapper/VolGroup00-Glustervol1   # the new LV needs a filesystem before mounting; ext4 assumed
mkdir /mnt/gluster
mount /dev/mapper/VolGroup00-Glustervol1 /mnt/gluster
mkdir /mnt/gluster/vol1
mkdir /mnt/gluster/vol2
gluster volume create volume1 replica 2 transport tcp <1st brick ip>:/mnt/gluster/vol1/ <2nd brick ip>:/mnt/gluster/vol1/
gluster volume create volume2 replica 2 transport tcp <1st brick ip>:/mnt/gluster/vol2/ <2nd brick ip>:/mnt/gluster/vol2/
# optionally restrict client access by IP:
# gluster volume set volume1 auth.allow 10.*
# gluster volume set volume2 auth.allow 10.*
gluster volume start volume1
gluster volume start volume2
gluster volume info volume1
gluster volume info volume2
</code>
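The fstab entry for the brick filesystem itself could look like this (ext4 assumed, matching the mkfs step above):
<code>
/dev/mapper/VolGroup00-Glustervol1  /mnt/gluster  ext4  defaults  1 2
</code>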

** Gluster native client **

Assuming we use only Ethernet for access.
  yum install glusterfs glusterfs-fuse
  modprobe fuse
  dmesg | grep -i fuse
 + 
  * There was some speculation that you should disable transparent hugepages to make Gluster run more stably; this can most likely be ignored, but just in case:
 + 
  echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
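This setting does not survive a reboot; one common approach (an assumption, not part of the original notes) is to append the same command to /etc/rc.local:

  echo 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled' >> /etc/rc.local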
 + 
  * Put the mount straight into /etc/fstab. Use _netdev so the mount happens later in startup, and never use the localhost IP address, even when server and client run on the same machine: startup may not be finished by the time the mount is attempted. Fstab entry:
 + 
  <brick ip>:/volume1  /home/directory  glusterfs  defaults,_netdev  0 0
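To verify without rebooting, the volume can be mounted by hand and checked (a quick sanity test, not from the original notes):

  mount /home/directory
  df -h /home/directory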
 + 
