diff --git a/doc/Administration Guide/dynamic provisioning.md b/doc/Administration Guide/dynamic provisioning.md
new file mode 100644
index 0000000..0e8273c
--- /dev/null
+++ b/doc/Administration Guide/dynamic provisioning.md
@@ -0,0 +1,32 @@
+## Dynamic provisioning of persistent volumes
+Dynamic provisioning is a feature native to Kubernetes that lets the cluster provision a volume on demand when a user's PersistentVolumeClaim (PVC) does not match any pre-created PersistentVolume. For more information about dynamic provisioning, please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#dynamic).
+
+## Storage Classes
+To allow dynamic provisioning of volumes, an administrator must create storage classes that define the classes of storage a user can provision volumes from. For more information about storage classes, please refer to the [Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/).
+
+When Gluster Container Storage is deployed in the Kubernetes cluster, two storage classes are created by default:
+
+```
+$ kubectl get storageclass
+NAME                      PROVISIONER                    AGE
+glusterfs-csi (default)   org.gluster.glusterfs          70m
+glustervirtblock-csi      org.gluster.glustervirtblock   69m
+local-storage             kubernetes.io/no-provisioner   80m
+```
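+The storage class marked `(default)` is used when a PersistentVolumeClaim does not set `storageClassName` explicitly. If a different class should be the default, it can be switched with the standard Kubernetes annotation; for example (assuming `glustervirtblock-csi` should become the default):
+
+```
+$ kubectl patch storageclass glusterfs-csi \
+    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
+$ kubectl patch storageclass glustervirtblock-csi \
+    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
+```
+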
+#### glusterfs-csi Storage Class
+
+##### Parameters and default values
+_TODO: Document the parameters and their default values, including the thin arbiter setup and why to use it compared to a plain replica 3 configuration._
+
+#### glustervirtblock-csi Storage Class
+
+##### Parameters and default values
+
+#### Create your own Storage Class
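+
+Administrators can also define additional storage classes on top of the GCS provisioners. Below is a minimal sketch; the class name is arbitrary and the provisioner must be one of those listed by `kubectl get storageclass` above (driver-specific parameters are intentionally omitted, as the supported keys depend on the CSI driver version):
+
+```
+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: my-glusterfs-sc            # hypothetical name, pick your own
+provisioner: org.gluster.glusterfs # GlusterFS CSI provisioner from the listing above
+reclaimPolicy: Delete
+```
+
+Create it with `kubectl create -f <file>.yaml` and it becomes available for claims immediately.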
+
+## Create RWO PersistentVolumeClaim
+To provision a ReadWriteOnce (RWO) volume, specify one of the storage classes above as the claim's `storageClassName`, as shown in the sketch below.
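+
+A minimal example of an RWO claim, using the block volume storage class (the claim name is arbitrary):
+
+```
+---
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: glusterblock-csi-pv
+spec:
+  storageClassName: glustervirtblock-csi
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 100Mi
+```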
+
+## Create RWX PersistentVolumeClaim
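+
+For a shared, multi-writer volume, request `ReadWriteMany` from the file-based `glusterfs-csi` storage class (block volumes are limited to a single writer). A minimal sketch (the claim name is arbitrary):
+
+```
+---
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: glusterfs-csi-pv
+spec:
+  storageClassName: glusterfs-csi
+  accessModes:
+  - ReadWriteMany
+  resources:
+    requests:
+      storage: 100Mi
+```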
+
+## Create ROX PersistentVolumeClaim
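+
+A read-only-many claim follows the same pattern; only the access mode changes. A minimal sketch, assuming the file-based class is used (the claim name is arbitrary):
+
+```
+---
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: glusterfs-csi-rox-pv
+spec:
+  storageClassName: glusterfs-csi
+  accessModes:
+  - ReadOnlyMany
+  resources:
+    requests:
+      storage: 100Mi
+```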
diff --git a/doc/Administration Guide/index.md b/doc/Administration Guide/index.md
new file mode 100644
index 0000000..726a4cd
--- /dev/null
+++ b/doc/Administration Guide/index.md
@@ -0,0 +1,14 @@
+# Administration Guide
+
+1. Creating Persistent Volumes
+
+    * [Static provisioning](./static%20provisioning.md)
+    * [Dynamic provisioning](./dynamic%20provisioning.md)
+
+2. [Snapshot and Cloning of Volumes](./snapshot.md)
+
+3. [Scaling the Storage](./scaling.md)
+
+4. [Performance](./performance.md)
+
+5. [Uninstalling Gluster Container Storage](./uninstall.md)
diff --git a/doc/Administration Guide/performance.md b/doc/Administration Guide/performance.md
new file mode 100644
index 0000000..e69de29
diff --git a/doc/Administration Guide/scaling.md b/doc/Administration Guide/scaling.md
new file mode 100644
index 0000000..e69de29
diff --git a/doc/Administration Guide/static provisioning.md b/doc/Administration Guide/static provisioning.md
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/doc/Administration Guide/static provisioning.md
@@ -0,0 +1 @@
+
diff --git a/doc/Administration Guide/uninstall.md b/doc/Administration Guide/uninstall.md
new file mode 100644
index 0000000..e69de29
diff --git a/doc/Deployment Guide/deploy.md b/doc/Deployment Guide/deploy.md
new file mode 100644
index 0000000..294916d
--- /dev/null
+++ b/doc/Deployment Guide/deploy.md
@@ -0,0 +1,59 @@
+To deploy Gluster Container Storage (GCS), you need a Kubernetes cluster. If you do not already have one, you can deploy Kubernetes on CentOS ([guide](https://github.com/kotreshhr/gcs-setup/tree/master/kube-cluster)), CoreOS, and many other platforms.
+
+Once you have the Kubernetes cluster up and running, deploying GCS takes only a few simple steps.
+
+#### Deploy GCS on a Kubernetes Cluster
+
+The Python tool way:
+
+https://github.com/kotreshhr/gcs-setup/tree/master/kube-cluster
+https://github.com/kotreshhr/gcs-setup/tree/master/gcs-setup
+Note: Make sure that the devices do not have a file system and are not part of any PV/VG/LVM; otherwise, the device-addition step of the GCS deployment will fail.
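+
+A quick sanity check before adding a device (a sketch using standard Linux tools; replace `/dev/sdX` with the actual device, and note that `wipefs -a` is destructive):
+
+```
+$ lsblk -f /dev/sdX    # should show no filesystem or LVM signature
+$ pvs | grep sdX       # should return nothing
+$ wipefs -a /dev/sdX   # clears any leftover signatures (destroys existing data)
+```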
+
+Or the Ansible way:
+
+1. Install Ansible on the master node:
+ `$ yum install ansible`
+
+2. To ensure Ansible can SSH into the nodes, add the public key of the master node (kube1) as an authorized key on the other kube nodes:
+ `$ cat ~/.ssh/id_rsa.pub | ssh root@kube2 'cat >> ~/.ssh/authorized_keys'`
+
+3. On the Kubernetes master node, clone the GCS repo:
+ `$ git clone --recurse-submodules git@github.com:gluster/gcs.git`
+
+4. Create an inventory file to be used by the GCS Ansible playbook:
+ ```
+ $ cat ~/gcs/deploy/inventory.yml
+ kube1 ansible_host= gcs_disks='["", ""]'
+ kube2 ansible_host= gcs_disks='["", ""]'
+ kube3 ansible_host= gcs_disks='["", ""]'
+
+ ## Hosts that will be kubernetes master nodes
+ [kube-master]
+ kube1
+
+ ## Hosts that will be kubernetes nodes
+ [kube-node]
+ kube1
+ kube2
+ kube3
+
+ ## Hosts that will be used for GCS.
+    ## Systems grouped here need to define 'gcs_disks' as hostvars, which are the disks that will be
+    ## used by GCS to provision storage.
+ [gcs-node]
+ kube1
+ kube2
+ kube3
+ ```
+
+5. Deploy GCS on kubernetes cluster
+    Execute the deploy-gcs.yml playbook with Ansible from the master node:
+    `$ ansible-playbook -i <path to inventory file> <path to gcs>/deploy/deploy-gcs.yml`
+    e.g.: `$ cd ~/gcs ; ansible-playbook -i deploy/inventory.yml deploy/deploy-gcs.yml`
+
+6. Verify that the deployment was successful:
+    `$ kubectl get pods -n gcs`
+    All the pods should be in the `Running` state.
+
diff --git a/doc/Quickstart Guide/cluster.md b/doc/Quickstart Guide/cluster.md
new file mode 100644
index 0000000..0616048
--- /dev/null
+++ b/doc/Quickstart Guide/cluster.md
@@ -0,0 +1,99 @@
+The Vagrant setup in the GCS repository can be used to bring up a small three-node cluster with Kubernetes and Gluster Container Storage (GCS) already deployed. Please follow the instructions below to deploy the cluster:
+
+1. Clone the GCS repository
+ ```
+ $ git clone --recurse-submodules git@github.com:gluster/gcs.git
+ ```
+
+2. Set up the cluster
+ ```
+ $ cd gcs/deploy
+ $ vagrant up
+ ```
+
+    If there are any timeout failures with GCS components during the deployment, destroy and retry:
+ ```
+ $ vagrant destroy -f
+ $ vagrant up
+ ```
+
+3. Verify the cluster is up and running
+ ```
+ $ vagrant ssh kube1
+ [vagrant@kube1 ~]$ kubectl get pods -ngcs
+ NAME READY STATUS RESTARTS AGE
+ alertmanager-alert-0 2/2 Running 0 64m
+ alertmanager-alert-1 2/2 Running 0 64m
+ anthill-58b9b9b6f-c8rhw 1/1 Running 0 74m
+ csi-glusterfsplugin-attacher-0 2/2 Running 0 68m
+ csi-glusterfsplugin-nodeplugin-cvnzw 2/2 Running 0 68m
+ csi-glusterfsplugin-nodeplugin-h9hf6 2/2 Running 0 68m
+ csi-glusterfsplugin-nodeplugin-nhvr4 2/2 Running 0 68m
+ csi-glusterfsplugin-provisioner-0 4/4 Running 0 68m
+ csi-glustervirtblock-attacher-0 2/2 Running 0 66m
+ csi-glustervirtblock-nodeplugin-8kdjt 2/2 Running 0 66m
+ csi-glustervirtblock-nodeplugin-jwb77 2/2 Running 0 66m
+ csi-glustervirtblock-nodeplugin-q5cnp 2/2 Running 0 66m
+ csi-glustervirtblock-provisioner-0 3/3 Running 0 66m
+ etcd-988db4s64f 1/1 Running 0 73m
+ etcd-gqds2t99bn 1/1 Running 0 74m
+ etcd-kbw8rpxfmr 1/1 Running 0 74m
+ etcd-operator-77bfcd6595-zvrdt 1/1 Running 0 75m
+ gluster-kube1-0 2/2 Running 1 73m
+ gluster-kube2-0 2/2 Running 1 72m
+ gluster-kube3-0 2/2 Running 1 72m
+ gluster-mixins-62jkz 0/1 Completed 0 64m
+ grafana-9df95dfb5-tqrqw 1/1 Running 0 64m
+ kube-state-metrics-86bc74fd4c-mh6kl 4/4 Running 0 65m
+ node-exporter-855rg 2/2 Running 0 64m
+ node-exporter-bxwg9 2/2 Running 0 64m
+ node-exporter-tm98k 2/2 Running 0 65m
+ prometheus-operator-6c4b6cfc76-lsgxb 1/1 Running 0 65m
+ prometheus-prometheus-0 3/3 Running 1 65m
+ prometheus-prometheus-1 3/3 Running 1 63m
+ ```
+
+4. Create your first PVC on Gluster Container Storage
+
+ RWO PV:
+ ```
+ $ cat pvc.yaml
+ ---
+ kind: PersistentVolumeClaim
+ apiVersion: v1
+ metadata:
+ name: glusterblock-csi-pv
+ spec:
+ storageClassName: glustervirtblock-csi
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 100Mi
+ ```
+
+    RWX PV:
+    ```
+    $ cat pvc.yaml
+    ---
+    kind: PersistentVolumeClaim
+    apiVersion: v1
+    metadata:
+      name: glusterfs-csi-pv
+    spec:
+      storageClassName: glusterfs-csi
+      accessModes:
+      - ReadWriteMany
+      resources:
+        requests:
+          storage: 100Mi
+    ```
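+
+    Once the manifest is ready, create the claim and check that it gets bound (a short usage sketch; the file name is whatever was used above):
+    ```
+    $ kubectl create -f pvc.yaml
+    $ kubectl get pvc
+    ```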
+
+    Note: This is a test cluster with very limited resources allocated to its nodes, so any scale or stress testing will lead to failures and unresponsive nodes. If you need to run scale/stress tests, please use the Deployment Guide to deploy GCS on nodes that meet the resource requirements.
+
+5. Destroy the cluster
+    From the node where `vagrant up` was executed, run the following commands to destroy the cluster:
+ ```
+ $ cd gcs/deploy
+ $ vagrant destroy -f
+ ```
diff --git a/doc/Quickstart Guide/minikube.md b/doc/Quickstart Guide/minikube.md
new file mode 100644
index 0000000..77415a8
--- /dev/null
+++ b/doc/Quickstart Guide/minikube.md
@@ -0,0 +1 @@
+Steps to deploy GCS on minikube.