Fluent Bit Operator facilitates the deployment of Fluent Bit and provides great flexibility in building a logging layer based on Fluent Bit.
Once installed, the Fluent Bit Operator provides the following features:
- Fluent Bit Management: Deploy and destroy Fluent Bit DaemonSet automatically.
- Custom Configuration: Select input/filter/output plugins via labels.
- Dynamic Reloading: Update configuration without rebooting Fluent Bit pods.
Fluent Bit Operator defines six custom resources using CustomResourceDefinition (CRD):

- `FluentBit`: Defines Fluent Bit instances and their associated config. (It requires kubesphere/fluent-bit to support dynamic configuration.)
- `FluentBitConfig`: Selects input/filter/output plugins and generates the final config into a Secret.
- `Input`: Defines input config sections.
- `Parser`: Defines parser config sections.
- `Filter`: Defines filter config sections.
- `Output`: Defines output config sections.
Each `Input`, `Parser`, `Filter`, and `Output` represents a Fluent Bit config section, and they are selected by `FluentBitConfig` via label selectors. The operator watches those objects, constructs the final config, and creates a Secret to store it, which will be mounted by the Fluent Bit instances owned by `FluentBit`. The whole workflow can be illustrated as below:
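To make the selection mechanism concrete, here is a minimal, hypothetical `FluentBitConfig` sketch. It assumes the `logging.kubesphere.io/v1alpha2` API group used by this operator's CRDs; the label key is an illustrative placeholder, so follow the manifests in this repository for the exact labels.

```yaml
apiVersion: logging.kubesphere.io/v1alpha2
kind: FluentBitConfig
metadata:
  name: fluent-bit-config
spec:
  # Only Input/Filter/Output objects carrying this (illustrative) label
  # are rendered into the final config Secret by the operator.
  inputSelector:
    matchLabels:
      fluentbit.example/enabled: "true"
  filterSelector:
    matchLabels:
      fluentbit.example/enabled: "true"
  outputSelector:
    matchLabels:
      fluentbit.example/enabled: "true"
```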
To let Fluent Bit pick up and use the latest config whenever it changes, a wrapper called fluent-bit watcher is added to restart the fluent-bit process as soon as config changes are detected. This way the fluent-bit pod needn't be restarted to reload the new config. The config is reloaded this way because Fluent Bit itself has no reload interface; please refer to this known issue for more details.
Kubernetes v1.16.13+ is required to run Fluent Bit Operator, and using the latest Kubernetes version is always recommended.
The quick start deploys Fluent Bit with `dummy` as input and `stdout` as output, which is equivalent to running the binary with `fluent-bit -i dummy -o stdout`.
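For reference, the quick-start input and output can be expressed as CRs roughly like the sketch below. This is a simplified, hypothetical rendering of `manifests/quick-start`: the label key and field names are assumptions, and the `my_dummy` tag is chosen to match the sample log lines further down.

```yaml
apiVersion: logging.kubesphere.io/v1alpha2
kind: Input
metadata:
  name: dummy
  labels:
    fluentbit.example/enabled: "true"   # illustrative label; must match the inputSelector
spec:
  dummy:
    tag: my_dummy                       # tag shown in the sample log output below
---
apiVersion: logging.kubesphere.io/v1alpha2
kind: Output
metadata:
  name: stdout
  labels:
    fluentbit.example/enabled: "true"   # illustrative label; must match the outputSelector
spec:
  match: "*"                            # route every record to stdout
  stdout: {}
```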
```shell
kubectl apply -f manifests/setup
kubectl apply -f manifests/quick-start
```
Once everything is up, you'll observe messages like the following in the Fluent Bit pod logs:
```
[0] my_dummy: [1587991566.000091658, {"message"=>"dummy"}]
[1] my_dummy: [1587991567.000061572, {"message"=>"dummy"}]
[2] my_dummy: [1587991568.000056842, {"message"=>"dummy"}]
[3] my_dummy: [1587991569.000896217, {"message"=>"dummy"}]
[0] my_dummy: [1587991570.000172328, {"message"=>"dummy"}]
```
Success!
This guide provisions a logging pipeline for your work environment. It installs Fluent Bit as a DaemonSet to collect container logs, filter out unneeded fields, and forward them to the target destinations (e.g. Elasticsearch, Kafka, and Fluentd).
Note that you need a running Elasticsearch v5+ cluster to receive data before starting. Remember to adjust output-elasticsearch.yaml to match your Elasticsearch setup; otherwise Fluent Bit will keep logging errors. Kafka and Fluentd are optional and switched off by default.
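As a hedged sketch of what that adjustment might look like, the `Output` below assumes the `logging.kubesphere.io/v1alpha2` API group and Elasticsearch options in camelCase; the host, port, label key, and prefix are placeholders to replace with your own setup, and output-elasticsearch.yaml in the logging stack remains the authoritative version.

```yaml
apiVersion: logging.kubesphere.io/v1alpha2
kind: Output
metadata:
  name: es
  labels:
    fluentbit.example/enabled: "true"           # illustrative label; must match the outputSelector
spec:
  match: kube.*                                 # forward container logs only
  es:
    host: elasticsearch-logging.default.svc     # placeholder: your Elasticsearch host
    port: 9200
    logstashFormat: true
    logstashPrefix: ks-logstash-log             # yields indices like ks-logstash-log-2020.04.26
```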
```shell
kubectl apply -f manifests/setup
kubectl apply -f manifests/logging-stack
```
Within a couple of minutes, you should observe an index available:
```shell
$ curl localhost:9200/_cat/indices
green open ks-logstash-log-2020.04.26 uwQuoO90TwyigqYRW7MDYQ 1 1 99937 0 31.2mb 31.2mb
```
Success!
The Linux audit framework provides a CAPP-compliant (Controlled Access Protection Profile) auditing system that reliably collects information about any security-relevant (or non-security-relevant) event on a system. The example under `manifests/logging-stack/auditd` shows a method for collecting audit logs from the Linux audit framework.
```shell
kubectl apply -f manifests/setup
kubectl apply -f manifests/logging-stack/auditd
```
Within a couple of minutes, you should observe an index available:
```shell
$ curl localhost:9200/_cat/indices
green open ks-logstash-log-2021.04.06 QeI-k_LoQZ2h1z23F3XiHg 5 1 404879 0 298.4mb 149.2mb
```
The listing below shows the currently supported plugins, based on Fluent Bit v1.7.3. For more information, please refer to the API docs of each plugin.
Input, filter, and output plugins are connected by the mechanism of tagging and matching. For input and output plugins, always create a separate `Input` or `Output` instance for every plugin. Don't aggregate multiple inputs or outputs into one `Input` or `Output` object unless you have a good reason to do so. Take the demo logging stack for example: there is one yaml file for each output.
For filter plugins, however, the order of filters matters if you want a filter chain. You need to organize multiple filters into an array, as the demo logging stack suggests.
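A minimal sketch of such an ordered filter chain, assuming the `logging.kubesphere.io/v1alpha2` API group (the nested field names and label key are illustrative; see the demo logging stack for the exact spelling):

```yaml
apiVersion: logging.kubesphere.io/v1alpha2
kind: Filter
metadata:
  name: kubernetes
  labels:
    fluentbit.example/enabled: "true"   # illustrative label; must match the filterSelector
spec:
  match: kube.*
  # Filters are applied in array order: enrich records with Kubernetes
  # metadata first, then lift the nested "kubernetes" map to the top level.
  filters:
    - kubernetes:
        kubeURL: https://kubernetes.default.svc:443
    - nest:
        operation: lift
        nestedUnder: kubernetes
```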
File paths in the Fluent Bit config should follow a consistent convention. Fluent Bit Operator adopts the following layout internally.
| Dir Path | Description |
| --- | --- |
| `/fluent-bit/tail` | Stores tail-related files, e.g. the file tracking db. Using `fluentbit.spec.positionDB` will mount a file `pos.db` under this dir by default. |
| `/fluent-bit/secrets/{secret_name}` | Stores secrets, e.g. TLS files. Specify secrets to mount in `fluentbit.spec.secrets`, then you have access. |
| `/fluent-bit/config` | Stores the main config file and user-defined parser config file. |
Note that ServiceAccount files are mounted at `/var/run/secrets/kubernetes.io/serviceaccount`.
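As a hedged sketch of how these paths come into play on the `FluentBit` custom resource (the `positionDB` and `secrets` fields follow the convention described above; the image tag, host path, volume source shape, and secret name are placeholders):

```yaml
apiVersion: logging.kubesphere.io/v1alpha2
kind: FluentBit
metadata:
  name: fluent-bit
spec:
  image: kubesphere/fluent-bit:v1.7.3   # placeholder image tag
  # Persists the tail tracking db, surfaced as /fluent-bit/tail/pos.db in the pod.
  positionDB:
    hostPath:
      path: /var/lib/fluent-bit/        # placeholder host path
  # Each listed Secret is mounted under /fluent-bit/secrets/<secret_name>.
  secrets:
    - es-tls-secret                     # placeholder secret name
```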
To enable parsers, you must set the value of `FluentBitConfig.Spec.Service.ParsersFile` to `parsers.conf`. Your custom parsers will be included into the built-in parser config via `@INCLUDE /fluent-bit/config/parsers.conf`. Note that parsers.conf contains a few built-in parsers, for example, docker. Read parsers.conf for more information.
Check out the demo in the folder `/manifests/regex-parser` for how to use a custom regex parser.
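A hedged sketch of the pieces involved, assuming the `logging.kubesphere.io/v1alpha2` API group (the `parserSelector` and regex field names are assumptions modeled on the other selectors, and the regex, time format, and label key are illustrative; `manifests/regex-parser` remains the authoritative example):

```yaml
apiVersion: logging.kubesphere.io/v1alpha2
kind: FluentBitConfig
metadata:
  name: fluent-bit-config
spec:
  service:
    parsersFile: parsers.conf   # required so custom parsers get @INCLUDEd
  parserSelector:
    matchLabels:
      fluentbit.example/enabled: "true"
---
apiVersion: logging.kubesphere.io/v1alpha2
kind: Parser
metadata:
  name: my-regex-parser
  labels:
    fluentbit.example/enabled: "true"   # illustrative label; must match the parserSelector
spec:
  regex:
    regex: '^(?<time>[^ ]+) (?<level>[^ ]+) (?<message>.*)$'
    timeKey: time
    timeFormat: '%Y-%m-%dT%H:%M:%S'
```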
- Support custom parser plugins
- Support Fluentd as the log aggregation layer
- Support custom Input/Filter/Output plugins
- Integrate logging sidecar
- golang v1.13+.
- kubectl v1.16.13+.
- kubebuilder v2.3+ (the project is built with v2.3.2)
- Access to a Kubernetes cluster v1.16.13+
- Install CRDs: `make install`
- Run: `make run`
The API doc is generated automatically. To modify it, edit the comments above struct fields, then run `go run cmd/doc-gen/main.go`.
Most files under the folder manifests/setup are automatically generated from config. Don't edit them directly; run `make manifests` instead, then replace them properly.