Distributed File Systems: GridFS vs. GlusterFS vs. Ceph vs. HekaFS Benchmarks [closed]

I’m not sure your list is quite correct. It depends on what you mean by a file system. If you mean a file system that is mountable in an operating system and usable by any application that reads and writes files using POSIX calls, then GridFS doesn’t really qualify. It is just how MongoDB stores … Read more
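To make the distinction concrete, here is a minimal sketch (the database and file names are hypothetical) using the standard mongofiles tool and mongosh: the file ends up as a metadata document plus content chunks in ordinary MongoDB collections, not as something you can mount and read with POSIX calls.

    # Store a file in GridFS (hypothetical database and file names)
    mongofiles --db=demo put report.pdf

    # It is stored as one metadata document in fs.files plus ~255 kB chunks in fs.chunks
    mongosh demo --eval 'db.fs.files.findOne()'
    mongosh demo --eval 'db.fs.chunks.countDocuments({})'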

What exactly does Gluster do?

We recently started researching GlusterFS for our own usage, so this question was interesting to me. Gluster uses what are called ‘translators’ on the FUSE client to handle how you store data. There are several types of translators, which are outlined here: http://www.gluster.com/community/documentation/index.php/GlusterFS_Translators_v1.3 The one you are asking about specifically is called the Automatic File … Read more
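As a rough illustration of where the replication translator fits (the hostnames and brick paths below are made up), creating a two-way replicated volume with the gluster CLI looks like this; the replicate/AFR translator is what keeps the two bricks in sync, and the FUSE client assembles the translator stack when the volume is mounted.

    # Run on server1: add the second node to the trusted pool
    gluster peer probe server2

    # Create and start a volume replicated across two bricks (handled by the AFR translator)
    gluster volume create gv0 replica 2 \
        server1:/data/brick1/gv0 server2:/data/brick1/gv0
    gluster volume start gv0

    # Mount it on a client; the FUSE client loads the volume's translator stack
    mount -t glusterfs server1:/gv0 /mnt/gluster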

Security concerns with glusterfs?

TL;DR: Adding servers to the cluster (called a pool) is safe, because a third party cannot join an existing cluster on its own; it needs to be invited from within. But make sure to restrict which clients can mount the volumes and to encrypt the connections. There were no questions asked. I did nothing to tell server2 … Read more
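As a sketch of both hardening steps (the volume name, subnet, and certificate CNs are assumptions), client restriction and TLS can be turned on with volume options along these lines:

    # Only allow clients from a trusted subnet to mount the volume
    gluster volume set gv0 auth.allow 192.168.10.*

    # Enable TLS for client/brick traffic (expects glusterfs.pem/.key/.ca under /etc/ssl on each node)
    gluster volume set gv0 client.ssl on
    gluster volume set gv0 server.ssl on

    # Enable TLS on the management path as well
    touch /var/lib/glusterd/secure-access

    # Optionally pin which certificate common names may connect
    gluster volume set gv0 auth.ssl-allow 'client1.example.com,client2.example.com'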

How to monitor glusterfs volumes

This has been a request to the GlusterFS developers for a while now, and there is no out-of-the-box solution you can use. However, with a few scripts it’s not impossible. Pretty much the entire Gluster system is managed by a single gluster command, and with a few options you can write your own health-monitoring scripts. See … Read more
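For example, a minimal health-check script along those lines (the volume name and the exact fields parsed are assumptions; adjust them to your Gluster version's output) might look like this:

    #!/bin/bash
    # Minimal GlusterFS health check: fails if a peer is disconnected,
    # a brick is offline, or a replicated volume has entries pending heal.
    VOLUME=gv0

    gluster peer status | grep -q 'Disconnected' && { echo 'peer disconnected'; exit 1; }

    gluster volume status "$VOLUME" detail | grep -q 'Online.*: N' && { echo 'brick offline'; exit 1; }

    heals=$(gluster volume heal "$VOLUME" info | awk '/Number of entries:/ {sum += $4} END {print sum+0}')
    [ "$heals" -gt 0 ] && { echo "$heals entries pending heal"; exit 1; }

    echo OK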

Create and mount GlusterFS volume with Ansible

You should start the volume with state: started:

    - name: Configure Gluster volume.
      gluster_volume:
        state: started
        name: "{{ gluster.brick_name }}"
        brick: "{{ gluster.brick_dir }}"
        replicas: 2
        cluster: "{{ groups.glusterssl | join(',') }}"
        host: "{{ inventory_hostname }}"
        force: yes
      become: true
      become_user: root
      become_method: sudo
      run_once: true
      ignore_errors: true
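To also mount the started volume on the clients (the mount point, options, and choice of source host below are assumptions, not part of the original answer), a follow-up task using Ansible's mount module could look roughly like this:

    - name: Mount the Gluster volume on the clients.
      mount:
        path: /mnt/gluster
        src: "{{ groups.glusterssl | first }}:/{{ gluster.brick_name }}"
        fstype: glusterfs
        opts: defaults,_netdev
        state: mounted
      become: true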

Are ZFS clustered filesystems possible?

Why yes, it is possible to build an active/passive ZFS-based cluster using shared DAS and multipath SAS. Details at: https://github.com/ewwhite/zfs-ha/wiki The key to this high-availability storage design is a shared SAS-attached storage enclosure, or JBOD. While it’s not shared-nothing, this can be a useful way of bringing higher availability to a storage setup that can … Read more
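To make the active/passive idea concrete, a failover in such a design essentially moves the pool between the two heads that both see the shared SAS enclosure. Done by hand (the pool name is made up; the linked wiki wraps this in proper cluster tooling), it is roughly:

    # On the head giving up the pool (if it is still alive):
    zpool export tank

    # On the head taking over; -f is needed if the old head died without a clean export
    zpool import -f tank

    # Re-export the datasets (NFS shares etc.) from the new active head
    zfs share -a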