Add openebs docs & restructure directory of website
Signed-off-by: isamrish <[email protected]>

This commit will add:
- OpenEBS docs (implemented in Docusaurus)
- a restructured website directory
isamrish committed Jun 3, 2021
1 parent 4ff189b commit 0f10d26
Showing 357 changed files with 17,189 additions and 24 deletions.
24 changes: 1 addition & 23 deletions .gitignore
@@ -1,23 +1 @@
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.

# dependencies
/node_modules
/.pnp
.pnp.js

# testing
/coverage

# production
/build

# misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local

npm-debug.log*
yarn-debug.log*
yarn-error.log*
**/.DS_Store
2 changes: 1 addition & 1 deletion README.md
@@ -1,3 +1,3 @@
# OpenEBS Website

This repository contains the content and the static-site generator code for the OpenEBS Project website.
This repository contains the content and the static-site generator code for the OpenEBS Project website, as well as the OpenEBS docs.
20 changes: 20 additions & 0 deletions docs/.gitignore
@@ -0,0 +1,20 @@
# Dependencies
/node_modules

# Production
/build

# Generated files
.docusaurus
.cache-loader

# Misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local

npm-debug.log*
yarn-debug.log*
yarn-error.log*
33 changes: 33 additions & 0 deletions docs/README.md
@@ -0,0 +1,33 @@
# Website

This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.

## Installation

```console
yarn install
```

## Local Development

```console
yarn start
```

This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.

## Build

```console
yarn build
```

This command generates static content into the `build` directory, which can be served using any static content hosting service.

## Deployment

```console
GIT_USER=<Your GitHub username> USE_SSH=true yarn deploy
```

If you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.
3 changes: 3 additions & 0 deletions docs/babel.config.js
@@ -0,0 +1,3 @@
module.exports = {
presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
};
5 changes: 5 additions & 0 deletions docs/docs/additional-info/_category_.json
@@ -0,0 +1,5 @@
{
"label": "Additional Info",
"position": 6
}

7 changes: 7 additions & 0 deletions docs/docs/additional-info/faqs.md
@@ -0,0 +1,7 @@
---
sidebar_position: 2
---

# FAQs

Faq
7 changes: 7 additions & 0 deletions docs/docs/additional-info/knowledge-base.md
@@ -0,0 +1,7 @@
---
sidebar_position: 4
---

# Knowledge Base

knowledge base
7 changes: 7 additions & 0 deletions docs/docs/additional-info/kubernetes-upgrades.md
@@ -0,0 +1,7 @@
---
sidebar_position: 3
---

# Kubernetes Upgrades

Kubernetes upgrades
21 changes: 21 additions & 0 deletions docs/docs/additional-info/performance-testing.md
@@ -0,0 +1,21 @@
---
sidebar_position: 1
---

# Performance Testing

## Steps for performance testing

### Setup cStorPool and StorageClass

Choose the appropriate disks (SSDs, SAS, or cloud disks), create a cStor pool, and then create a StorageClass. Several performance tunings are available, and they can be added to the corresponding StorageClass before provisioning the volume; the individual tunables are described in the StorageClass section. A sketch follows below.
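For illustration, here is a minimal cStor StorageClass sketch that sets one tunable through the `cas.openebs.io/config` annotation. The class name, pool claim, and `QueueDepth` value below are assumptions for this example, not recommendations:

```shell
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # Hypothetical name; StoragePoolClaim and QueueDepth values are examples.
  name: openebs-cstor-perf
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "3"
      - name: QueueDepth
        value: "32"
provisioner: openebs.io/provisioner-iscsi
EOF
```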

Performance numbers vary based on the following factors:

- The number of OpenEBS replicas (1 vs 3) (latency between cStor target and cStor replica)

- Whether all the replicas are in one zone or across multiple zones

- The network latency between the application pod and iSCSI target (cStor target)

The steps for running FIO based Storage benchmarking and viewing the results are explained in detail [here](https://github.com/openebs/performance-benchmark/tree/master/fio-benchmarks).
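As a rough illustration (distinct from the benchmark suite linked above), a basic fio random-write run against a mounted OpenEBS volume might look like this; the mount path, size, and runtime are placeholders:

```shell
# Assumes the persistent volume is mounted at /mnt/openebs-vol in the test pod.
fio --name=randwrite --filename=/mnt/openebs-vol/testfile \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --iodepth=16 --numjobs=4 --size=1G \
    --runtime=60 --time_based --group_reporting
```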
4 changes: 4 additions & 0 deletions docs/docs/concepts/_category_.json
@@ -0,0 +1,4 @@
{
"label": "Concepts",
"position": 2
}
15 changes: 15 additions & 0 deletions docs/docs/concepts/cas-engines.md
@@ -0,0 +1,15 @@
---
sidebar_position: 3
---

# OpenEBS Storage Engines - cStor, Jiva and LocalPV

## Overview of a Storage Engine

A storage engine is the data plane component of the IO path of a persistent volume. In the CAS architecture, users can choose different data planes for different application workloads based on a configuration policy. A storage engine can be hardened to optimize a given workload, either with a feature set or for performance.

Operators or administrators typically choose a storage engine with a specific software version and build optimized volume templates that are fine-tuned for the type of underlying disks, the desired resiliency, the number of replicas, and the set of nodes participating in the Kubernetes cluster. Users can then choose an optimal volume template at the time of volume provisioning, providing maximum flexibility in running the optimum software and storage combination for all the storage volumes on a given Kubernetes cluster.

## Types of Storage Engines

OpenEBS provides three types of storage engines: cStor, Jiva, and Local PV.
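In practice, selecting an engine comes down to referencing the StorageClass that encodes the desired volume template in a PersistentVolumeClaim. A minimal sketch, assuming a hypothetical `openebs-cstor-perf` class:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol-claim
spec:
  # Hypothetical StorageClass that encodes the chosen engine and tunings.
  storageClassName: openebs-cstor-perf
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
```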
7 changes: 7 additions & 0 deletions docs/docs/concepts/container-attached-storage.md
@@ -0,0 +1,7 @@
---
sidebar_position: 1
---

# Container Attached Storage

In the CAS or Container Attached Storage architecture, storage runs within containers and is closely associated with the application it is bound to. Storage runs as a microservice and has no kernel-module dependencies. Orchestration systems such as Kubernetes orchestrate the storage volumes like any other microservice or container. CAS provides the benefits of both DAS and NAS.
12 changes: 12 additions & 0 deletions docs/docs/concepts/ndm.md
@@ -0,0 +1,12 @@
---
sidebar_position: 4
---

# Node Disk Manager

Node Disk Manager (NDM) is an important component of the OpenEBS architecture. NDM treats block devices as resources that need to be monitored and managed just like other resources such as CPU, memory, and network. It is a DaemonSet that runs on each node, detects attached block devices based on filters, and loads them into Kubernetes as BlockDevice custom resources. These custom resources are aimed at helping hyper-converged storage operators by providing abilities like:

- an easy-to-access inventory of block devices available across the Kubernetes cluster;
- predicting disk failures, to help with taking preventive actions;
- dynamically attaching/detaching disks to a storage pod without restarting the corresponding NDM pod running on the node where the disk is attached/detached.

By doing all of the above, NDM contributes to the overall ease of provisioning persistent volumes.
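For instance, once NDM is running, the discovered inventory can be inspected directly; this sketch assumes OpenEBS is installed in the default `openebs` namespace:

```shell
# List the BlockDevice custom resources NDM created for attached disks.
kubectl get blockdevice -n openebs

# Inspect a device's details (capacity, path, node it is attached to).
kubectl describe blockdevice <blockdevice-name> -n openebs
```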
7 changes: 7 additions & 0 deletions docs/docs/concepts/openebs-architecture.md
@@ -0,0 +1,7 @@
---
sidebar_position: 2
---

# OpenEBS Architecture

OpenEBS follows the Container Attached Storage (CAS) model. As part of this approach, each volume has a dedicated controller pod and a set of replica pods. The advantages of the CAS architecture are discussed on the CNCF blog [here](https://www.cncf.io/blog/2020/09/22/container-attached-storage-is-cloud-native-storage-cas/). OpenEBS is simple to operate and use, largely because it looks and feels like other cloud-native and Kubernetes-friendly projects.
5 changes: 5 additions & 0 deletions docs/docs/depreciated-releases/_category_.json
@@ -0,0 +1,5 @@
{
"label": "Depreciated Releases",
"position": 7
}

19 changes: 19 additions & 0 deletions docs/docs/depreciated-releases/release-1.x.md
@@ -0,0 +1,19 @@
---
sidebar_position: 1
---

# OpenEBS 1.x Deprecated Releases

## 1.7.0 - Feb 15 2020

Change summary:

- Fixes an issue where Jiva Replicas could get stuck in WO or NA state, when the size of the replica data grows beyond 300GB.

- Fixes an issue where unused custom resources from older versions are left in etcd even after OpenEBS is upgraded.

- Fixes an issue where cleanup of Jiva volumes on OpenShift 4.2 environment was failing.

- Fixes an issue where custom resources used by cStor Volumes fail to get deleted when the underlying pool was removed prior to deleting the volumes.

- Fixes an issue where a cStor Volume Replica would be incorrectly marked as invalid due to a race condition between a terminating pool pod and its corresponding newly launched pool pod.
5 changes: 5 additions & 0 deletions docs/docs/introduction/_category_.json
@@ -0,0 +1,5 @@
{
"label": "Introduction",
"position": 1
}

149 changes: 149 additions & 0 deletions docs/docs/introduction/intro.mdx
@@ -0,0 +1,149 @@
---
sidebar_position: 1
slug: /
---

import ImgCard from "@site/src/components/imgCard";

# Welcome to OpenEBS Documentation

## Introduction

OpenEBS is the leading open-source project for container-attached and container-native storage on Kubernetes. OpenEBS adopts the Container Attached Storage (CAS) approach, where each workload is provided with a dedicated storage controller. OpenEBS implements granular storage policies and isolation that enable users to optimize storage for each specific workload. OpenEBS is built completely in userspace, making it highly portable and able to run on any OS/platform.

OpenEBS is a collection of storage engines, allowing you to pick the right storage solution for your stateful workloads and your type of Kubernetes platform.

See OpenEBS **[Features & Benefits](https://openebs.io)** and **[OpenEBS Adoption stories](https://openebs.io)**.

## Quickstart

- When using synchronous replication, iSCSI is used to attach storage from OpenEBS to application pods. Hence OpenEBS requires the iSCSI client to be configured and the iscsid service to be running on the worker nodes. Verify that the **[iSCSI service is up](https://openebs.io)** and running before starting the installation.
- The default installation works in most cases. As a Kubernetes cluster admin, start the default installation using either

```shell
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs
```

More information about OpenEBS installation using different Helm versions can be found here.

or

```shell
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
```

For advanced installation steps, **[see Installation section](https://openebs.io)**.

- Verify that OpenEBS is installed successfully, and start provisioning OpenEBS volumes through the Kubernetes PVC interface; a quick verification sketch follows below.
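A minimal verification sketch, assuming the default `openebs` namespace and that iSCSI was configured as noted above:

```shell
# On each worker node: confirm the iscsid service is active.
systemctl status iscsid

# Confirm the OpenEBS control-plane pods are Running.
kubectl get pods -n openebs

# List the StorageClasses installed by OpenEBS; reference one in your PVCs.
kubectl get sc
```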

## OpenEBS Storage Engines

OpenEBS is a Kubernetes-native, hyperconverged storage solution. OpenEBS consumes the storage (disks, SSDs, cloud volumes, etc.) available on the Kubernetes worker nodes to dynamically provision Kubernetes persistent volumes.

OpenEBS can provision different types of Local PV for stateful workloads like Cassandra, MongoDB, Elasticsearch, etc. that are distributed in nature and have high availability built into them. Depending on the type of storage attached to your Kubernetes worker nodes, you can select from Dynamic Local PV - Hostpath, Device, ZFS, or Rawfile.

OpenEBS can provision persistent volumes with features like synchronous replication, snapshots and clones, and backup and restore, which can be used with stateful workloads like Percona/MySQL, Jira, GitLab, etc. Replication can also be set up across Kubernetes zones, resulting in high availability for cross-AZ setups. Depending on the type of storage attached to your Kubernetes worker nodes and your application's performance requirements, you can select from Jiva, cStor, or Mayastor.

See the following table for recommendations on which engine is right for you, depending on your application requirements and the storage available on your Kubernetes nodes.

<table>
<thead>
<tr>
<th>Application requirements</th>
<th>Storage</th>
<th>OpenEBS Volumes</th>
</tr>
</thead>
<tbody>
<tr>
<td>
Protect against node failures, Synchronous replication, Snapshots,
Clones, Thin provisioning
</td>
<td>Use Disks/SSDs/Cloud Volumes</td>
<td>OpenEBS cStor</td>
</tr>
<tr>
<td>
Protect against node failures, Synchronous replication, Thin
provisioning
</td>
<td>Use hostpath or external mounted storage</td>
<td>OpenEBS Jiva</td>
</tr>
<tr>
<td>Low latency, Local PV</td>
<td>Use hostpath or external mounted storage</td>
<td>Dynamic Local PV - Hostpath</td>
</tr>
<tr>
<td>Low latency, Local PV</td>
<td>Use Disks/SSDs/Cloud Volumes</td>
<td>Dynamic Local PV - Device</td>
</tr>
<tr>
<td>Low latency, Local PV, Snapshots, Clones</td>
<td>Use Disks/SSDs/Cloud Volumes</td>
<td>OpenEBS Dynamic Local PV - ZFS</td>
</tr>
</tbody>
</table>

OpenEBS is also developing the Mayastor and Dynamic Local PV - Rawfile storage engines, which are available for alpha testing.

## Run stateful applications on OpenEBS

<ImgCard
dataList={[
{
Svg: "../static/img/logos/redis.svg",
title: "Redis",
},
{
Svg: "../static/img/logos/minio.svg",
title: "MinIO",
},
{
Svg: "../static/img/logos/percona.svg",
title: "Percona",
},
{
Svg: "../static/img/logos/mongodb.svg",
title: "MongoDB",
},
{
Svg: "../static/img/logos/prometheus.svg",
title: "Prometheus",
},
{
Svg: "../static/img/logos/gitlab.svg",
title: "GitLab",
},
{
Svg: "../static/img/logos/mysql.svg",
title: "MySql",
},
{
Svg: "../static/img/logos/cassandra.svg",
title: "Cassandra",
},
{
Svg: "../static/img/logos/elasticsearch.svg",
title: "elasticsearch",
},
{
Svg: "../static/img/logos/nuodb.svg",
title: "NuoDB",
},
{
Svg: "../static/img/logos/postgresql.svg",
title: "PostgreSQl",
},
]}
/>

## See also

- [Container Attached Storage](../concepts/container-attached-storage.md)
- [CNCF CAS Blog](https://www.cncf.io/blog/2020/09/22/container-attached-storage-is-cloud-native-storage-cas/)
- [OpenEBS architecture](../concepts/openebs-architecture.md)
5 changes: 5 additions & 0 deletions docs/docs/stateful-applications/_category_.json
@@ -0,0 +1,5 @@
{
"label": "Stateful Applications",
"position": 4
}

11 changes: 11 additions & 0 deletions docs/docs/stateful-applications/prometheus.md
@@ -0,0 +1,11 @@
---
sidebar_position: 2
---

# Using OpenEBS as TSDB for Prometheus

## Introduction

DevOps engineers and SREs look for ease of deployment of their applications in Kubernetes. After a successful installation, they look at how easily the applications can be monitored, so that availability can be maintained in real time. By monitoring an application, they can take proactive measures before an issue arises. Prometheus is the most widely used application for scraping cloud-native application metrics. Prometheus and OpenEBS together provide a complete open-source stack for monitoring.

In this document, we explain how you can easily set up a monitoring environment in your K8s cluster using Prometheus, with OpenEBS Local PV as the persistent storage for the metrics. This guide covers installing Prometheus using Helm on dynamically provisioned OpenEBS volumes; a sketch follows below.
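For illustration only: the chart location and values below follow the community Prometheus Helm chart and are assumptions to verify against your setup; `openebs-hostpath` is the hostpath StorageClass that OpenEBS creates for Local PV.

```shell
# Add the community Prometheus chart repository (assumed chart source).
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install Prometheus, pointing its server volume at an OpenEBS StorageClass.
helm install prometheus prometheus-community/prometheus \
  --namespace monitoring --create-namespace \
  --set server.persistentVolume.storageClass=openebs-hostpath
```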