
Commit
docs: update README, RELEASE and quickstart
Signed-off-by: Abhinandan Purkait <[email protected]>
Abhinandan-Purkait committed Jan 31, 2025
1 parent 79b2923 commit 959f9d2
Showing 5 changed files with 181 additions and 130 deletions.
5 changes: 4 additions & 1 deletion README.md
@@ -9,14 +9,16 @@
## Overview

### What is OpenEBS ZFS LocalPV?

OpenEBS ZFS LocalPV is a [CSI](https://github.com/container-storage-interface/spec) plugin that implements [ZFS](https://en.wikipedia.org/wiki/ZFS)-backed persistent volumes for Kubernetes. It is a local storage solution, which means the device, the volume and the application are on the same host. It doesn't contain any data plane, i.e. it is simply a control plane for the kernel ZFS volumes. It mainly comprises two components, implemented in accordance with the CSI spec:

1. CSI Controller - Frontends the incoming requests and initiates the operation.
2. CSI Node Plugin - Serves the requests by performing the operations and making the volume available for the initiator.

### Why OpenEBS ZFS LocalPV?

1. Lightweight, easy-to-set-up storage provisioner for host-local volumes in the K8s ecosystem.
2. Makes ZFS stack available to k8s, allowing end users to use the ZFS functionalites like snapshot, restore, clone, thin provisioning, resize, encryption, compression, dedup, etc for their Persistent Volumes.
2. Makes the ZFS stack available to K8s, allowing end users to use ZFS functionalities like snapshot, restore, clone, thin provisioning, resize, encryption, compression, dedup, etc. for their Persistent Volumes.
3. Cloud native, i.e. based on the CSI spec, so certified to run on K8s.
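
The ZFS features above map onto plain ZFS dataset operations on the node. As a minimal sketch, here is how StorageClass-style parameters might be assembled into a `zfs create` invocation for a thin-provisioned zvol; the function name, parameter order and pool name are illustrative, not the driver's actual code:

```shell
# Hypothetical helper: build a `zfs create` command for a thin-provisioned
# zvol from StorageClass-style parameters. Names here are illustrative only.
build_zfs_create_cmd() {
  poolname="$1"; volname="$2"; size="$3"; compression="$4"; dedup="$5"
  # -s => sparse (thin-provisioned) volume, -V => zvol of the given size
  printf 'zfs create -s -V %s -o compression=%s -o dedup=%s %s/%s' \
    "$size" "$compression" "$dedup" "$poolname" "$volname"
}

build_zfs_create_cmd zfspv-pool pvc-abc123 4G on off
```

The real driver applies these properties on the node that owns the pool; the sketch only shows how familiar ZFS tunables (compression, dedup, thin provisioning) surface as per-volume options.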

### Architecture
@@ -26,6 +28,7 @@ LocalPV refers to storage that is directly attached to a specific node in the Ku
<b>Use Case</b>: Ideal for workloads that require low-latency access to storage or when data locality is critical (e.g., databases, caching systems).

#### Characteristics:

- <b>Node-bound</b>: The volume is tied to the node where the disk is physically located.
- <b>No replication</b>: Data is not replicated across nodes, so if the node fails, the data may become inaccessible.
- <b>High performance</b>: Since the storage is local, it typically offers lower latency compared to network-attached storage.
28 changes: 13 additions & 15 deletions RELEASE.md
@@ -1,39 +1,37 @@
# Release Process
zfs-localpv follows a on quaterly release cadence for minor version releases. The scope of the release is determined by contributor availability. The scope is published in the [Release Tracker Projects](https://github.com/orgs/openebs/projects/78).
LocalPV ZFS follows semantic versioning principles as specified at https://semver.org. It follows a quarterly release cadence for minor version releases. The scope of the release is determined by contributor availability. The scope is published in the [Release Tracker Projects](https://github.com/orgs/openebs/projects/78).

## Pre-release Candidate Verification Checklist

Every release has a prerelease tag that gets created on branch creation, explained further below. This prerelease tag is meant for all the below action items throughout the release process:
Every release has a pre-release tag that gets created on branch creation, explained further below. This pre-release tag is used for all of the action items below throughout the release process:
- Platform Verification
- Regression and Feature Verification Automated tests.
- Exploratory testing by QA engineers.
- Strict security scanners on the container images.
- Upgrade from previous releases.
- Beta testing by users on issues that they are interested in.
- Regression and Feature Verification Automated tests
- Exploratory testing by QA engineers
- Strict security scanners on the container images
- Upgrade from previous releases
- Beta testing by users on issues that they are interested in

If any issues are found during the above stages, they are fixed and the prerelease tag is overidden by the newer changes and are up for above action items again.
If any issues are found during the above stages, they are fixed, the pre-release tag is overridden by the newer changes, and the action items above are run again.

Once all the above tests are completed, a main release is created.

## Release Tagging

zfs-localpv is released with container images and a respective helm chart as the only recommended way of installation. Even though the [zfs-operator](./deploy/zfs-operator.yaml) is also published, it is generated by templating the helm chart itself.
LocalPV ZFS is released with container images and a respective helm chart as the only recommended way of installation. Even though the [zfs-operator](./deploy/zfs-operator.yaml) is also published, it is generated by templating the helm chart itself.

Before creating a release, the repo owner needs to create a separate branch from the active branch, which is `develop`. The name of the branch should follow the convention `release/2.7` if the release is for `2.7.x`.

Upon creation of a release branch, e.g. `release/2.7`, two automated PRs open up: one changes the chart versions in the `release/2.7` branch to `2.7.0-prerelease`, and the other changes `develop` to `2.8.0-develop`. After these two PRs are merged, the `2.7.0-prerelease` and `2.8.0-develop` tags are pushed to the respective Docker registries, and the corresponding helm charts are published against these tags. The prerelease versions increment via automated PRs on every release creation; for example, once `2.7.0` is published, a `2.7.1-prerelease` image and chart are published to allow testing of further patch releases, and so on.

The release tags follow semver. The final release tag has the format `X.Y.Z`, and the respective prerelease and develop tags are `X.Y.Z-prerelease` and `X.Y+1.0-develop`.
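
The naming scheme above can be sketched in shell. The version value and variable names are illustrative; the actual tags are produced by the repo's automation:

```shell
# Illustrative derivation of the branch and tag names for a 2.7.0 minor release.
VERSION="2.7.0"                      # final release tag, format X.Y.Z
BRANCH="release/${VERSION%.*}"       # branch cut from develop
PRERELEASE_TAG="${VERSION}-prerelease"
MAJOR="${VERSION%%.*}"
REST="${VERSION#*.}"; MINOR="${REST%%.*}"
DEVELOP_TAG="${MAJOR}.$((MINOR + 1)).0-develop"
echo "$BRANCH $PRERELEASE_TAG $DEVELOP_TAG"
# release/2.7 2.7.0-prerelease 2.8.0-develop
```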

Once the release is triggered, the freezed code undergoes stages as such linting, unit-tests and bdd-tests and the code coverage is updated accordingly. Post the former jobs, the image build is triggered with the specified tag, the chart is run though scripts that update the tags at places whereever deemed necessary and eventually publish the images and respective helm charts.
Once the release is triggered, the frozen code goes through stages such as linting, unit tests and BDD tests, and the code coverage is updated accordingly. After these jobs, the image build is triggered with the specified tag, the images are published, and the chart is run through scripts that update the image tags at the relevant places; eventually the helm charts are published.

The helm charts are hosted on github deployments for the corresponding releases.

Images and Helm charts are published at the following locations:

https://hub.docker.com/r/openebs/zfs-driver/tags
https://github.com/openebs/zfs-localpv/tree/gh-pages
The tagged images are published at: https://hub.docker.com/r/openebs/zfs-driver/tags
The release Helm charts are published at: https://github.com/openebs/zfs-localpv/tree/gh-pages

Once a release is created:
1. The repo owner updates the release page changelog with all the necessary content.
1. The repo owner updates the release page changelog with all the necessary content.
2. The repo owner updates the [CHANGELOG.md](./CHANGELOG.md) file with the changelog for the release.
2 changes: 1 addition & 1 deletion docs/faq.md
@@ -19,7 +19,7 @@ helm repo update
helm install openebs --namespace openebs openebs/openebs --create-namespace
```

Verify that the ZFS driver Components are installed and running using below command:
Verify that the LocalPV ZFS CSI driver components are installed and running using the command below:

```
$ kubectl get pods -n openebs -l role=openebs-zfs
