All notable changes to this project will be documented in this file.
- New `nats_kv` cache type.
- The `nats_jetstream` input now supports `last_per_subject` and `new` deliver fallbacks.
- Field `error_patterns` added to the `drop_on` output.
- New `redis_scan` input type.
- Field `auto_replay_nacks` added to all inputs that traditionally automatically retry nacked messages, as a toggle for this behaviour (see the sketch after this list).
- New `retry` processor.
- New `noop` cache.
- Field `targets_input` added to the `azure_blob_storage` input.
- New `reject_errored` output.
- New `nats_request_reply` processor.
- New `json_documents` scanner.
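A minimal sketch of the new `auto_replay_nacks` toggle, assuming it sits at the root of an input config as the entry suggests (the surrounding connection fields are illustrative placeholders):

```yaml
input:
  nats_jetstream:
    urls: [ nats://127.0.0.1:4222 ] # placeholder
    subject: things                 # placeholder
    # When set to false, nacked messages are not automatically replayed by the
    # input and instead follow whatever error handling the pipeline defines.
    auto_replay_nacks: false
```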
- The `unarchive` processor no longer yields linting errors when the format `csv:x` is specified. This is a regression introduced in v4.25.0.
- The `sftp` input will no longer consume files when the watcher cache returns an error. Instead, it will reattempt the file upon the next poll.
- The `aws_sqs` input no longer emits error-level logs for visibility timeout refreshing errors.
- The `nats_kv` processor now allows NATS wildcards for the `keys` operation.
- The `nats_kv` processor `keys` operation now returns a single message with an array of found keys instead of a batch of messages.
- The `nats_kv` processor `history` operation now returns a single message with an array of objects containing the record fields instead of a batch of messages.
- Field `timeout` added to the `nats_kv` processor to specify the maximum period to wait on an operation before aborting and returning an error.
- Bloblang comparison operators (`>`, `<`, `<=`, `>=`) now match the precision of the compared integers when applicable.
- The `parse_form_url_encoded` Bloblang method no longer produces results with an unknown data type for repeated query parameters.
- The `echo` CLI command no longer fails to sanitise configs when encountering an empty `password` field.
- The log events from all inputs and outputs when they first connect have been made more consistent and no longer contain any information regarding the nature of their connections.
- Splitting message batches with a `split` processor (or custom plugins) no longer results in downstream error handling loops around nacks. This was previously implemented as a feature to ensure unbounded expanded and split batches don't flood downstream services in the event of a minority of errors. However, introducing more clever origin tracking of errored messages has eliminated the need for this undocumented behaviour.
- Field `credit` added to the `amqp_1` input to specify the maximum number of unacknowledged messages the sender can transmit.
- Bloblang now supports root-level `if` statements (see the sketch after this list).
- New experimental `sql` cache.
- Fields `batch_size`, `sort` and `limit` added to the `mongodb` input.
- Field `idempotent_write` added to the `kafka` output.
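A minimal sketch of a root-level `if` statement within a `mapping` processor (the field names are illustrative):

```yaml
pipeline:
  processors:
    - mapping: |
        root = this
        # Root-level if statements gate whole assignments.
        if this.type == "foo" {
          root.title = this.title.uppercase()
        }
```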
- The default value of the `amqp_1.credit` input field has changed from `1` to `64`.
- The `mongodb` processor and output now support extended JSON in canonical form for document, filter and hint mappings.
- The `open_telemetry_collector` tracer has had the `url` field of gRPC and HTTP collectors deprecated in favour of `address`, which more accurately describes the intended format of endpoints. The old style will continue to work, but eventually will have its default value removed and an explicit value will be required.
- Resource config imports containing `%` characters were being incorrectly parsed during unit test execution. This was a regression introduced in v4.25.0.
- Dynamic input and output config updates containing `%` characters were being incorrectly parsed. This was a regression introduced in v4.25.0.
- Fixed a regression in v4.25.0 where template based components were not parsing correctly from configs.
- Field `address_cache` added to the `socket_server` input.
- Field `read_header` added to the `amqp_1` input.
- All inputs with a `codec` field now support a new field `scanner` to replace it (see the sketch after this list). Scanners are more powerful as they are configured in a structured way similar to other component types rather than via a single string field; for more information check out the scanners page.
- New `diff` and `patch` Bloblang methods.
- New `processors` processor.
- A debug endpoint `/debug/pprof/allocs` has been added for profiling allocations.
- New `cockroachdb_changefeed` input.
- The `open_telemetry_collector` tracer now supports sampling.
- The `aws_kinesis` input and output now support specifying ARNs as the stream target.
- New `azure_cosmosdb` input, processor and output.
- All `sql_*` components now support the `gocosmos` driver.
- New `opensearch` output.
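A sketch of the new `scanner` field on an input that previously used `codec`; the `file` input and `lines` scanner shown here are assumptions for illustration:

```yaml
input:
  file:
    paths: [ ./data/*.jsonl ] # placeholder
    # Structured replacement for the old single-string `codec: lines` field.
    scanner:
      lines: {}
```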
- The `javascript` processor now handles module imports correctly.
- Bloblang `if` statements now provide explicit errors when query expressions resolve to non-boolean values.
- Some metadata fields from the `amqp_1` input were always empty due to a type mismatch; this should no longer be the case.
- The `zip` Bloblang method no longer fails when executed without arguments.
- The `amqp_0_9` output no longer prints a bogus exchange name when connecting to the server.
- The `generate` input no longer adds an extra second to `interval: '@every x'` syntax.
- The `nats_jetstream` input no longer fails to locate mirrored streams.
- Fixed a rare panic in batching mechanisms with a specified `period`, where data arrives in low volumes and is sporadic.
- Executing config unit tests should no longer fail due to output resources failing to connect.
- The `parse_parquet` Bloblang function, the `parquet_decode` and `parquet_encode` processors, and the `parquet` input have all been upgraded to the latest version of the underlying Parquet library. Since this underlying library is experimental it is likely that behaviour changes will result. One significant change is that encoding numerical values that are larger than the column type (`float64` into `FLOAT`, `int64` into `INT32`, etc.) will no longer be automatically converted.
- The `parse_log` processor field `codec` is now deprecated.
- WARNING: Many components have had their underlying implementations moved onto newer internal APIs for defining and extracting their configuration fields. It's recommended that upgrades to this version are performed cautiously.
- WARNING: All AWS components have been upgraded to the latest client libraries. Although lots of testing has been done, these libraries have the potential to differ in discrete ways in terms of how credentials are evaluated, cross-account connections are performed, and so on. It's recommended that upgrades to this version are performed cautiously.
- Field `idempotent_write` added to the `kafka_franz` output.
- Field `idle_timeout` added to the `read_until` input.
- Field `delay_seconds` added to the `aws_sqs` output.
- Fields `discard_unknown` and `use_proto_names` added to the `protobuf` processors.
- Bloblang error messages for bad function/method names or parameters should now be improved in mappings that use shorthand for `root = ...`.
- All Redis components now support usernames within the configured URL for authentication.
- The `protobuf` processor now supports targeting nested types from proto files.
- The `schema_registry_encode` and `schema_registry_decode` processors should no longer double escape URL unsafe characters within subjects when querying their latest versions.
- The `amqp_0_9` output now supports dynamic interpolation functions within the `exchange` field.
- Field `custom_topic_creation` added to the `kafka` output.
- New Bloblang method `ts_sub`.
- The Bloblang method `abs` now supports integers in and integers out.
- Experimental `extract_tracing_map` field added to the `nats`, `nats_jetstream` and `nats_stream` inputs.
- Experimental `inject_tracing_map` field added to the `nats`, `nats_jetstream` and `nats_stream` outputs.
- New `_fail_fast` variants for the `broker` output `fan_out` and `fan_out_sequential` patterns (see the sketch after this list).
- Field `summary_quantiles_objectives` added to the `prometheus` metrics exporter.
- The `metric` processor now supports floating point values for `counter_by` and `gauge` types.
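A sketch of one of the new `_fail_fast` broker patterns (the child outputs are illustrative placeholders):

```yaml
output:
  broker:
    # Behaves like fan_out but abandons the delivery attempt as soon as any
    # child output reports an error, rather than retrying until success.
    pattern: fan_out_fail_fast
    outputs:
      - stdout: {}          # placeholder
      - file:
          path: ./out.jsonl # placeholder
          codec: lines
```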
- Allow labels on caches and rate limit resources when writing configs in CUE.
- Go API: `log/slog` loggers injected into a stream builder via `StreamBuilder.SetLogger` should now respect formatting strings.
- All Azure components now support container SAS tokens for authentication.
- The `kafka_franz` input now provides properly typed metadata values.
- The `trino` driver for the various `sql_*` components no longer panics when trying to insert nulls.
- The `http_client` input no longer sends a phantom request body on subsequent requests when an empty `payload` is specified.
- The `schema_registry_encode` and `schema_registry_decode` processors should no longer fail to obtain schemas containing slashes (or other URL path unfriendly characters).
- The `parse_log` processor no longer extracts structured fields that are incompatible with Bloblang mappings.
- Fixed occurrences where Bloblang would fail to recognise `float32` values.
- The `-e/--env-file` CLI flag for importing environment variable files now supports glob patterns.
- Environment variables imported via `-e/--env-file` CLI flags now support triple quoted strings.
- New experimental `counter` function added to Bloblang (see the sketch after this list). It is recommended that this function, although experimental, should be used instead of the now deprecated `count` function.
- The `schema_registry_encode` and `schema_registry_decode` processors now support JSONSchema.
- Field `metadata` added to the `nats` and `nats_jetstream` outputs.
- The `cached` processor field `ttl` now supports interpolation functions.
- Many new properties fields have been added to the `amqp_0_9` output.
- Field `command` added to the `redis_list` input and output.
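A sketch of the experimental `counter` function as a replacement for the deprecated `count`, assuming the no-argument form yields an incrementing integer:

```yaml
pipeline:
  processors:
    - mapping: |
        root = this
        # Each execution of the mapping increments the counter.
        root.sequence = counter()
```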
- Corrected a scheduling error where the `generate` input with a descriptor interval (`@hourly`, etc.) had a chance of firing twice.
- Fixed an issue where a `redis_streams` input that is rejected from read attempts enters a reconnect loop without backoff.
- The `sqs` input now periodically refreshes the visibility timeout of messages that take a significant amount of time to process.
- The `ts_add_iso8601` and `ts_sub_iso8601` Bloblang methods now return the correct error for certain invalid durations.
- The `discord` output no longer ignores structured message fields containing underscores.
- Fixed an issue where the `kafka_franz` input was ignoring batching periods and stalling.
- The `random_int` Bloblang function now prevents instantiations where either the `max` or `min` arguments are dynamic. This is in order to avoid situations where the random number generator is re-initialised across subsequent mappings in a way that surprises map authors.
- Fields `client_id` and `rack_id` added to the `kafka_franz` input and output.
- New experimental `command` processor.
- Parameter `no_cache` added to the `file` and `env` Bloblang functions.
- New `file_rel` function added to Bloblang.
- Field `endpoint_params` added to the `oauth2` section of HTTP client components.
- Allow comments in single root and directly imported Bloblang mappings.
- The `azure_blob_storage` input no longer adds `blob_storage_content_type` and `blob_storage_content_encoding` metadata values as string pointer types, and instead adds these values as string types only when they are present.
- The `http_server` input now returns a more appropriate 503 service unavailable status code during shutdown instead of the previous 404 status.
- Fixed a potential panic when closing a `pusher` output that was never initialised.
- The `sftp` output now reconnects upon being disconnected by the Azure idle timeout.
- The `switch` output now produces error logs when messages do not pass at least one case with `strict_mode` enabled; previously these rejected messages were potentially re-processed in a loop without any logs depending on the config. An inaccuracy in the documentation has also been fixed in order to clarify behaviour when strict mode is not enabled.
- The `log` processor `fields_mapping` field should no longer reject metadata queries using `@` syntax.
- Fixed an issue where heavily utilised streams with nested resource based outputs could lock up when performing heavy resource mutating traffic on the streams mode REST API.
- The Bloblang `zip` method no longer produces values that yield an "Unknown data type" error.
- The `amqp_1` input now supports `anonymous` SASL authentication.
- New JWT Bloblang methods `parse_jwt_es256`, `parse_jwt_es384`, `parse_jwt_es512`, `parse_jwt_rs256`, `parse_jwt_rs384`, `parse_jwt_rs512`, `sign_jwt_es256`, `sign_jwt_es384` and `sign_jwt_es512` added (see the sketch after this list).
- The `csv-safe` input codec now supports custom delimiters with the syntax `csv-safe:x`.
- The `open_telemetry_collector` tracer now supports secure connections, enabled via the `secure` field.
- Function `v0_msg_exists_meta` added to the `javascript` processor.
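A sketch of one of the new JWT methods. This assumes the ES variants mirror the existing HS methods in taking the verification key as their only argument; the PEM content is a truncated placeholder, not a real key:

```yaml
pipeline:
  processors:
    - mapping: |
        # Verify an ES256 signed token and extract its claims.
        root.claims = this.token.parse_jwt_es256("""-----BEGIN PUBLIC KEY-----
        ... placeholder ...
        -----END PUBLIC KEY-----""")
```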
- Fixed an issue where saturated output resources could panic under intense CRUD activity.
- The config linter no longer raises issues with codec fields containing colons within their arguments.
- The `elasticsearch` output should no longer fail to send basic authentication passwords; this fixes a regression introduced in v4.19.0.
- Field `topics_pattern` added to the `pulsar` input.
- Both the `schema_registry_encode` and `schema_registry_decode` processors now support protobuf schemas.
- Both the `schema_registry_encode` and `schema_registry_decode` processors now support references for AVRO and PROTOBUF schemas.
- New Bloblang method `zip`.
- New Bloblang `int8`, `int16`, `uint8`, `uint16`, `float32` and `float64` methods.
- Errors encountered by the `gcp_pubsub` output should now present more specific logs.
- Upgraded the `kafka` input and output underlying sarama client library to v1.40.0 at the new module path github.com/IBM/sarama.
- The CUE schema for the `switch` processor now correctly reflects that it takes a list of clauses.
- Fixed the CUE schema for fields that take a 2d-array such as `workflow.order`.
- The `snowflake_put` output has been added back to 32-bit ARM builds since the build incompatibilities have been resolved.
- The `snowflake_put` output and the `sql_*` components no longer trigger a panic when running on a readonly file system with the `snowflake` driver. This driver still requires access to write temporary files somewhere, which can be configured via the Go `TMPDIR` environment variable. Details here.
- The `http_server` input and output now follow the same multiplexer rules regardless of whether the general `http` server block is used or a custom endpoint.
- Config linting should now respect fields sourced via a merge key (`<<`).
- The `lint` subcommand should now lint config files pointed to via `-r`/`--resources` flags.
- The `snowflake_put` output is now beta.
- Endpoints specified by `http_server` components using either the general `http` server block or their own custom server addresses should no longer be treated as path prefixes unless the path ends with a slash (`/`), in which case all extensions of the path will match. This corrects a behavioural change introduced in v4.14.0.
- Field `logger.level_name` added for customising the name of log levels in the JSON format.
- Methods `sign_jwt_rs256`, `sign_jwt_rs384` and `sign_jwt_rs512` added to Bloblang.
- HTTP components no longer ignore `proxy_url` settings when OAuth2 is set.
- The `PATCH` verb for the streams mode REST API no longer fails to patch over newer components implemented with the latest plugin APIs.
- The `nats_jetstream` input no longer fails for configs that set `bind` to `true` and do not specify both a `stream` and `durable` together.
- The `mongodb` processor and output no longer ignore the `upsert` field.
- The old `parquet` processor (now superseded by `parquet_encode` and `parquet_decode`) has been removed from 32-bit ARM builds due to build incompatibilities.
- The `snowflake_put` output has been removed from 32-bit ARM builds due to build incompatibilities.
- Plugin API: The `(*BatchError).WalkMessages` method has been deprecated in favour of `WalkMessagesIndexedBy`.
- The `dynamic` input and output have new endpoints `/input/{id}/uptime` and `/output/{id}/uptime` respectively for obtaining the uptime of a given input/output.
- Field `wait_time_seconds` added to the `aws_sqs` input.
- Field `timeout` added to the `gcp_cloud_storage` output.
- All NATS components now set the name of each connection to the component label when specified.
- Restored message ordering support to the `gcp_pubsub` output. This issue was introduced in 4.16.0 as a result of #1836.
- Specifying structured metadata values (non-strings) in unit test definitions should no longer cause linting errors.
- The `nats` input default value of `prefetch_count` has been increased from `32` to a more appropriate `524288`.
- Fields `auth.user_jwt` and `auth.user_nkey_seed` added to all NATS components.
- Bloblang: added a `ulid(encoding, random_source)` function to generate Universally Unique Lexicographically Sortable Identifiers (ULIDs) (see the sketch after this list).
- Field `skip_on` added to the `cached` processor.
- Field `nak_delay` added to the `nats` input.
- New `splunk_hec` output.
- Plugin API: New `NewMetadataExcludeFilterField` function and accompanying `FieldMetadataExcludeFilter` method added.
- The `pulsar` input and output are now included in the main distribution of Benthos again.
- The `gcp_pubsub` input now adds the metadata field `gcp_pubsub_delivery_attempt` to messages when dead lettering is enabled.
- The `aws_s3` input now adds `s3_version_id` metadata to versioned messages.
- All compress/decompress components (codecs, Bloblang methods, processors) now support `pgzip`.
- Field `connection.max_retries` added to the `websocket` input.
- New `sentry_capture` processor.
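A sketch of the new `ulid` function, assuming both parameters shown in the entry's signature are optional with sensible defaults:

```yaml
pipeline:
  processors:
    - mapping: |
        root = this
        # Generates a lexicographically sortable unique identifier.
        root.id = ulid()
```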
- The `open_telemetry_collector` tracer option no longer blocks service start up when the endpoints cannot be reached, and instead manages connections in the background.
- The `gcp_pubsub` output should see significant performance improvements due to a client library upgrade.
- The stream builder APIs should now follow `logger.file` config fields.
- The experimental `cue` format in the CLI `list` subcommand no longer introduces infinite recursion for `#Processors`.
- Config unit tests no longer execute linting rules for missing env var interpolations.
- Flag `--skip-env-var-check` added to the `lint` subcommand; this disables the new linting behaviour where environment variable interpolations without defaults throw linting errors when the variable is not defined.
- The `kafka_franz` input now supports explicit partitions in the field `topics`.
- The `kafka_franz` input now supports batching.
- New `metadata` Bloblang function for batch-aware structured metadata queries.
- Go API: Running the Benthos CLI with a context set with a deadline now triggers graceful termination before the deadline is reached.
- Go API: New `public/service/servicetest` package added for functions useful for testing custom Benthos builds.
- New `lru` and `ttlru` in-memory caches.
- Provide msgpack plugins through `public/components/msgpack`.
- The `kafka_franz` input should no longer commit offsets one behind the next during partition yielding.
- The streams mode HTTP API should no longer route requests to `/streams/<stream-ID>` to the `/streams` handler. This issue was introduced in v4.14.0.
- The `-e/--env-file` CLI flag can now be specified multiple times.
- New `studio pull` CLI subcommand for running Benthos Studio session deployments.
- Metadata field `kafka_tombstone_message` added to the `kafka` and `kafka_franz` inputs.
- Method `SetEnvVarLookupFunc` added to the stream builder API.
- The `discord` input and output now use the official chat client API and no longer rely on poll-based HTTP requests; this should result in more efficient and less erroneous behaviour.
- New Bloblang timestamp methods `ts_add_iso8601` and `ts_sub_iso8601`.
- All SQL components now support the `trino` driver.
- New input codec `csv-safe`.
- Added `base64rawurl` scheme to both the `encode` and `decode` Bloblang methods.
- New `find_by` and `find_all_by` Bloblang methods (see the sketch after this list).
- New `skipbom` input codec.
- New `javascript` processor.
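A sketch of the new `find_by` method (the data shape is illustrative):

```yaml
pipeline:
  processors:
    - mapping: |
        # Returns the first element for which the query resolves to true.
        root.first_pending = this.orders.find_by(order -> order.status == "pending")
```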
- The `find_all` Bloblang method no longer produces results that are of an `unknown` type.
- The `find_all` and `find` Bloblang methods no longer fail when the value argument is a field reference.
- Endpoints specified by HTTP server components using either the general `http` server block or their own custom server addresses should now be treated as path prefixes. This corrects a behavioural change that was introduced when both respective server options were updated to support path parameters.
- Prevented a panic caused when using the `encrypt_aes` and `decrypt_aes` Bloblang methods with mismatched key/iv lengths.
- The `snowpipe` field of the `snowflake_put` output can now be omitted from the config without raising an error.
- Batch-aware processors such as `mapping` and `mutation` should now report correct error metrics.
- Running `benthos blobl server` should no longer panic when a mapping with variable read/writes is executed in parallel.
- Speculative fix for the `cloudwatch` metrics exporter rejecting metrics due to `minimum field size of 1, PutMetricDataInput.MetricData[0].Dimensions[0].Value`.
- The `snowflake_put` output now prevents silent failures under certain conditions. Details here.
- Reduced the amount of pre-compilation of Bloblang based linting rules for documentation fields; this should dramatically improve the start up time of Benthos (~1s down to ~200ms).
- Environment variable interpolations with an empty fallback (`${FOO:}`) are now valid.
- Fixed an issue where the `mongodb` output wasn't using bulk send requests according to batching policies.
- The `amqp_1` input now falls back to accessing `Message.Value` when the data is empty.
- When a config contains environment variable interpolations without a default value (i.e. `${FOO}`), if that environment variable is not defined a linting error will be emitted. Shutting down due to linting errors can be disabled with the `--chilled` CLI flag, and variables can be specified with an empty default value (`${FOO:}`) in order to make the previous behaviour explicit and prevent the new linting error.
- The `find` and `find_all` Bloblang methods no longer support query arguments as they were incompatible with supporting value arguments. For query based arguments use the new `find_by` and `find_all_by` methods.
- Fixed vulnerability GO-2023-1571.
- New `nats_kv` processor, input and output.
- Field `partition` added to the `kafka_franz` output, allowing for manual partitioning.
- The `broker` output with the pattern `fan_out_sequential` will no longer abandon in-flight requests that are error blocked until the full shutdown timeout has occurred.
- Fixed a regression bug in the `sequence` input where the returned messages have type `unknown`. This issue was introduced in v4.10.0 (cefa288).
- The `broker` input no longer reports itself as unavailable when a child input has intentionally closed.
- Config unit tests that check for structured data should no longer fail in all cases.
- The `http_server` input with a custom address now supports path variables.
- Fixed a regression bug in the `nats` components where panics occur during a flood of messages. This issue was introduced in v4.12.0 (45f785a).
- Format `csv:x` added to the `unarchive` processor.
- Field `max_buffer` added to the `aws_s3` input.
- Field `open_message_type` added to the `websocket` input.
- The experimental `--watcher` CLI flag now takes into account file deletions and new files that match wildcard patterns.
- Field `dump_request_log_level` added to HTTP components.
- New `couchbase` cache implementation.
- New `compress` and `decompress` Bloblang methods.
- Field `endpoint` added to the `gcp_pubsub` input and output.
- Fields `file_name`, `file_extension` and `request_id` added to the `snowflake_put` output.
- Added interpolation support to the `path` field of the `snowflake_put` output.
- Added ZSTD compression support to the `compression` field of the `snowflake_put` output.
- New Bloblang method `concat`.
- New `redis` ratelimit.
- The `socket_server` input now supports `tls` as a network type.
- New Bloblang function `timestamp_unix_milli`.
- New Bloblang method `ts_unix_milli`.
- JWT based HTTP authentication now supports `EdDSA`.
- New `flow_control` fields added to the `gcp_pubsub` output.
- Added Bloblang methods `sign_jwt_hs256`, `sign_jwt_hs384` and `sign_jwt_hs512`.
- New Bloblang methods `parse_jwt_hs256`, `parse_jwt_hs384` and `parse_jwt_hs512`.
- The `open_telemetry_collector` tracer now automatically sets the `service.name` and `service.version` tags if they are not configured by the user.
- New Bloblang string methods `trim_prefix` and `trim_suffix` (see the sketch below).
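A sketch of the new trim methods (the field name and affixes are illustrative):

```yaml
pipeline:
  processors:
    - mapping: |
        # Strips a known prefix and suffix from a string when present.
        root.name = this.file_name.trim_prefix("export_").trim_suffix(".csv")
```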
- Fixed an issue where messages caught in a retry loop from inputs that do not support nacks (`generate`, `kafka`, `file`, etc.) could be retried in their post-mutation form from the `switch` output rather than the original copy of the message.
- The `sqlite` buffer should no longer print `Failed to ack buffer message` logs during graceful termination.
- The default value of the `conn_max_idle` field has been changed from 0 to 2 for all `sql_*` components in accordance with the `database/sql` docs.
- The `parse_csv` Bloblang method with `parse_header_row` set to `false` no longer produces rows that are of an `unknown` type.
- Fixed a bug where the `oracle` driver for the `sql_*` components was returning timestamps which were getting marshalled into an empty JSON object instead of a string.
- The `aws_sqs` input no longer backs off on subsequent empty requests when long polling is enabled.
- It's now possible to mock resources within the main test target file in config unit tests.
- Unit test linting no longer incorrectly expects the `json_contains` predicate to contain a string value only.
- Config component initialisation errors should no longer show nested path annotations.
- Prevented panics from the `jq` processor when querying invalid types.
- The `jaeger` tracer no longer emits the `service.version` tag automatically if the user sets the `service.name` tag explicitly.
- The `int64()`, `int32()`, `uint64()` and `uint32()` Bloblang methods can now infer the number base as documented here.
- The `mapping` and `mutation` processors should provide metrics and tracing events again.
- Fixed a data race in the `redis_streams` input.
- Upgraded the Redis components to `github.com/redis/go-redis/v9`.
- Field `default_encoding` added to the `parquet_encode` processor.
- Field `client_session_keep_alive` added to the `snowflake_put` output.
- Bloblang now supports metadata access via `@foo` syntax, which also supports arbitrary values (see the sketch after this list).
- TLS client certs now support both PKCS#1 and PKCS#8 encrypted keys.
- New `redis_script` processor.
- New `wasm` processor.
- Fields marked as secrets will no longer be printed with `benthos echo` or debug HTTP endpoints.
- Added a `no_indent` parameter to the `format_json` Bloblang method.
- New `format_xml` Bloblang method.
- New `batched` higher level input type.
- The `gcp_pubsub` input now supports optionally creating subscriptions.
- New `sqlite` buffer.
- Bloblang now has `int64`, `int32`, `uint64` and `uint32` methods for casting explicit integer types.
- Field `application_properties_map` added to the `amqp_1` output.
- Params `parse_header_row`, `delimiter` and `lazy_quotes` added to the `parse_csv` Bloblang method.
- Field `delete_on_finish` added to the `csv` input.
- Metadata fields `header`, `path`, `mod_time_unix` and `mod_time` added to the `csv` input.
- New `couchbase` processor.
- Field `max_attempts` added to the `nsq` input.
- Messages consumed by the `nsq` input are now enriched with metadata.
- New Bloblang method `parse_url`.
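A sketch of the new `@foo` metadata syntax (the metadata key names are illustrative):

```yaml
pipeline:
  processors:
    - mapping: |
        # Read a metadata value with the new shorthand.
        root.topic = @kafka_topic
        # Metadata values may now hold arbitrary (non-string) types.
        meta details = {"attempts": 3, "tags": ["a", "b"]}
```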
- Fixed a regression bug in the `mongodb` processor where message errors were not set any more. This issue was introduced in v4.7.0 (64eb72).
- The `avro-ocf:marshaler=json` input codec now omits unexpected logical type fields.
- Fixed a bug in the `sql_insert` output (see commit c6a71e9) where transaction-based drivers (`clickhouse` and `oracle`) would fail to roll back an in-progress transaction if any of the messages caused an error.
- The `resource` input should no longer block the first layer of graceful termination.
- The `catch` method now defines the context of argument mappings to be the string of the caught error (see the sketch after this list). In previous cases the context was undocumented, vague and would often bind to the outer context. It's still possible to reference this outer context by capturing the error (e.g. `.catch(_ -> this)`).
- Field interpolations that fail due to mapping errors will no longer produce placeholder values and will instead provide proper errors that result in nacks or retries similar to other issues.
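A sketch of the revised `catch` context (the field names are illustrative):

```yaml
pipeline:
  processors:
    - mapping: |
        # The argument context is now the caught error string.
        root.doc = this.body.parse_json().catch(err -> {"parse_error": err})
        # The outer context can still be captured explicitly.
        root.raw = this.body.parse_json().catch(_ -> this.body)
```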
- The `nats_jetstream` input now adds a range of useful metadata information to messages.
- Field `transaction_type` added to the `azure_table_storage` output, which deprecates the previous `insert_type` field and supports interpolation functions.
- Field `logged_batch` added to the `cassandra` output.
- All `sql` components now support Snowflake.
- New `azure_table_storage` input.
- New `sql_raw` input.
- New `tracing_id` Bloblang function.
- New `with` Bloblang method.
- Field `multi_header` added to the `kafka` and `kafka_franz` inputs.
- New `cassandra` input.
- New `base64_encode` and `base64_decode` functions for the awk processor.
- Param `use_number` added to the `parse_json` Bloblang method.
- Fields `init_statement` and `init_files` added to all sql components.
- New `find` and `find_all` Bloblang array methods.
- The `gcp_cloud_storage` output no longer ignores errors when closing a written file; this was masking issues when the target bucket was invalid.
- Upgraded the `kafka_franz` input and output to use github.com/twmb/franz-go@v1.9.0 since some bug fixes were made recently.
- Fixed an issue where a `read_until` child input with processors affiliated would block graceful termination.
- The `--labels` linting option no longer flags resource components.
- Go API: A new `BatchError` type added for distinguishing errors of a given batch.
- Rolled back the `kafka` input and output underlying sarama client library to fix a regression introduced in 4.9.0 😅 where `invalid configuration (Consumer.Group.Rebalance.GroupStrategies and Consumer.Group.Rebalance.Strategy cannot be set at the same time)` errors would prevent consumption under certain configurations. We've decided to roll back rather than upgrade as a breaking API change was introduced that could cause issues for Go API importers (more info here: IBM/sarama#2358).
- New `parquet` input for reading a batch of Parquet files from disk.
- Field `max_in_flight` added to the `redis_list` input.
- Upgraded the `kafka` input and output underlying sarama client library to fix a regression introduced in 4.7.0 where `The requested offset is outside the range of offsets maintained by the server for the given topic/partition` errors would prevent consumption of partitions.
- The `cassandra` output now inserts logged batches of data rather than the less efficient (and unnecessary) unlogged form.
- All `sql` components now support Oracle DB.
- All SQL components now accept an empty or unspecified `args_mapping` as an alias for no arguments.
- Field `unsafe_dynamic_query` added to the `sql_raw` output.
- Fixed a regression in 4.7.0 where HTTP client components were sending duplicate request headers.
- Field `avro_raw_json` added to the `schema_registry_decode` processor.
- Field `priority` added to the `gcp_bigquery_select` input.
- The `hash` Bloblang method now supports `crc32`.
- New `tracing_span` Bloblang function.
- All `sql` components now support SQLite.
- New `beanstalkd` input and output.
- Field `json_marshal_mode` added to the `mongodb` input.
- The `schema_registry_encode` and `schema_registry_decode` processors now support Basic, OAuth and JWT authentication.
- The streams mode `/ready` endpoint no longer returns status `503` for streams that gracefully finished.
- The performance of the Bloblang `.explode` method now scales linearly with the target size.
- The `influxdb` and `logger` metrics outputs should no longer mix up tag names.
- Fixed a potential race condition in the `read_until` connect check on terminated input.
- The `parse_parquet` Bloblang method and `parquet_decode` processor now automatically parse `BYTE_ARRAY` values as strings when the logical type is UTF8.
- The `gcp_cloud_storage` output now correctly cleans up temporary files on error conditions when the collision mode is set to append.
- New `squash` Bloblang method.
- New top-level config field `shutdown_delay` for delaying graceful termination.
- New `snowflake_id` Bloblang function.
- Field `wait_time_seconds` added to the `aws_sqs` input.
- New `json_path` Bloblang method.
- New `file_json_contains` predicate for unit tests.
- The `parquet_encode` processor now supports the `UTF8` logical type for columns.
- The `schema_registry_encode` processor now correctly assumes Avro JSON encoded documents by default.
- The `redis` processor `retry_period` field no longer shows linting errors for duration strings.
- The `/inputs` and `/outputs` endpoints for dynamic inputs and outputs now correctly render configs, both structured within the JSON response and as the raw config string.
- Go API: The stream builder no longer ignores `http` configuration. Instead, the value of `http.enabled` is set to `false` by default.
- Reverted the `kafka_franz` dependency back to `1.3.1` due to a regression in TLS/SASL commit retention.
- Fixed an unintentional linting error when using interpolation functions in the `elasticsearch` output's `action` field.
- Field `batch_size` added to the `generate` input.
- The `amqp_0_9` output now supports setting the `timeout` of publish.
- New experimental input codec `avro-ocf:marshaler=x`.
- New `mapping` and `mutation` processors.
- New `parse_form_url_encoded` Bloblang method.
- The `amqp_0_9` input now supports setting the `auto-delete` bit during queue declaration.
- New `open_telemetry_collector` tracer.
- The `kafka_franz` input and output now support no-op SASL options with the mechanism `none`.
- Field `content_type` added to the `gcp_cloud_storage` cache.
- The `mongodb` processor and output default `write_concern.w_timeout` empty value no longer causes configuration issues.
- Field `message_name` added to the logger config.
- The `amqp_1` input and output should no longer spam logs with timeout errors during graceful termination.
- Fixed a potential crash when the `contains` Bloblang method was used to compare complex types.
- Fixed an issue where the `kafka_franz` input or output wouldn't use TLS connections without custom certificate configuration.
- Fixed a structural cycle in the CUE representation of the `retry` output.
- Tracing headers from HTTP requests to the `http_server` input are now correctly extracted.
- The `broker` input no longer applies processors before batching as this was unintentional behaviour and counter to documentation. Users that rely on this behaviour are advised to place their pre-batching processors at the level of the child inputs of the broker.
- The `broker` output no longer applies processors after batching as this was unintentional behaviour and counter to documentation. Users that rely on this behaviour are advised to place their post-batching processors at the level of the child outputs of the broker.
- Fixed an issue where an `http_server` input or output would fail to register prometheus metrics when combined with other inputs/outputs.
- Fixed an issue where the `jaeger` tracer was incapable of sending traces to agents outside of the default port.
- The service-wide `http` config now supports basic authentication.
- The `elasticsearch` output now supports upsert operations.
- New `fake` Bloblang function (see the sketch after this list).
- New `parquet_encode` and `parquet_decode` processors.
- New `parse_parquet` Bloblang method.
- CLI flag `--prefix-stream-endpoints` added for disabling streams mode API prefixing.
- Field `timestamp_name` added to the logger config.
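A sketch of the new `fake` function, assuming it takes the name of a generator as its string argument (the generator names shown are assumptions):

```yaml
pipeline:
  processors:
    - mapping: |
        # Generate placeholder values, useful for testing pipelines.
        root.email = fake("email")
        root.name = fake("name")
```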
- Timestamp Bloblang methods are now able to emit and process `time.Time` values.
- New `ts_tz` method for switching the timezone of timestamp values.
- The `elasticsearch` output field `type` now supports interpolation functions.
- The `redis` processor has been reworked to be more generally useful; the old `operator` and `key` fields are now deprecated in favour of new `command` and `args_mapping` fields.
- Go API: Added component bundle `./public/components/aws` for all AWS components, including a `RunLambda` function.
- New `cached` processor.
- Go API: New APIs for registering both metrics exporters and open telemetry tracer plugins.
- Go API: The stream builder API now supports configuring a tracer, and tracer configuration is now isolated to the stream being executed.
- Go API: Plugin components can now access input and output resources.
- The `redis_streams` output `stream` field now supports interpolation functions.
- The `kafka_franz` input and outputs now support `AWS_MSK_IAM` as a SASL mechanism.
- New `pusher` output.
- Field `input_batches` added to config unit tests for injecting a series of message batches.
- Corrected an issue where Prometheus metrics from batching at the buffer level would be skipped when combined with input/output level batching.
- Go API: Fixed an issue where running the CLI API without importing a component package would result in template init crashing.
- The `http` processor and `http_client` input and output no longer have default headers as part of their configuration. A `Content-Type` header will be added to requests with a default value of `application/octet-stream` when a message body is being sent and the configuration has not added one explicitly.
- Logging in `logfmt` mode with `add_timestamp` enabled now works.
- Field `credentials.from_ec2_role` added to all AWS based components.
- The `mongodb` input now supports aggregation filters by setting the new `operation` field.
- New `gcp_cloudtrace` tracer.
- New `slug` Bloblang string method.
- The `elasticsearch` output now supports the `create` action.
- Field `tls.root_cas_file` added to the `pulsar` input and output.
- The `fallback` output now adds a metadata field `fallback_error` to messages when shifted.
- New Bloblang methods `ts_round`, `ts_parse`, `ts_format`, `ts_strptime`, `ts_strftime`, `ts_unix` and `ts_unix_nano` (see the sketch after this list). Most are aliases of (now deprecated) time methods with `timestamp_` prefixes.
- Ability to write logs to a file (with optional rotation) instead of stdout.
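A sketch of the renamed timestamp methods (the input layout and field names are illustrative):

```yaml
pipeline:
  processors:
    - mapping: |
        # Parse with a Go-style reference layout, round to the hour, re-format.
        let parsed = this.created_at.ts_parse("2006-01-02T15:04:05Z07:00")
        root.hour = $parsed.ts_round("1h".parse_duration()).ts_format("2006-01-02 15:04")
```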
- The default docker image no longer throws configuration errors when running streams mode without an explicit general config.
- The field `metrics.mapping` now allows environment functions such as `hostname` and `env`.
- Fixed a lock-up in the `amqp_0_9` output caused when messages sent with the `immediate` or `mandatory` flags were rejected.
- Fixed a race condition upon creating dynamic streams that self-terminate; this was causing panics in cases where the stream finishes immediately.
- The `nats_jetstream` input now adds headers to messages as metadata.
- Field `headers` added to the `nats_jetstream` output.
- Field `lazy_quotes` added to the CSV input.
- Fixed an issue where resource and stream configs imported via wildcard pattern could not be live-reloaded with the watcher (`-w`) flag.
- Bloblang comparisons between numerical values (including `match` expression patterns) no longer require coercion into explicit types.
- Reintroduced basic metrics from the `twitter` and `discord` template based inputs.
- Prevented a metrics label mismatch when running in streams mode with resources and `prometheus` metrics.
- Label mismatches with the `prometheus` metric type now log errors and skip the metric without stopping the service.
- Fixed a case where empty files consumed by the `aws_s3` input would trigger early graceful termination.
This is a major version release, for more information and guidance on how to migrate please refer to https://benthos.dev/docs/guides/migration/v4.
- In Bloblang it is now possible to reference the `root` of the document being created within a mapping query.
- The `nats_jetstream` input now supports pull consumers.
- Field `max_number_of_messages` added to the `aws_sqs` input.
- Field `file_output_path` added to the `prometheus` metrics type.
- Unit test definitions can now specify a label as a `target_processors` value.
- New connection settings for all sql components.
- New experimental `snowflake_put` output.
- New experimental `gcp_cloud_storage` cache.
- Field `regexp_topics` added to the `kafka_franz` input.
- The `hdfs` output `directory` field now supports interpolation functions.
- The CLI `list` subcommand now supports a `cue` format.
- Field `jwt.headers` added to all HTTP client components.
- Output condition `file_json_equals` added to config unit test definitions.
- The `sftp` output no longer opens files in both read and write mode.
- The `aws_sqs` input with `reset_visibility` set to `false` will no longer reset timeouts on pending messages during graceful shutdown.
- The `schema_registry_decode` processor now handles AVRO logical types correctly. Details in #1198 and #1161 and also in linkedin/goavro#242.
- All components, features and configuration fields that were marked as deprecated have been removed.
- The `pulsar` input and output are no longer included in the default Benthos builds.
- The field `pipeline.threads` now defaults to `-1`, which automatically matches the host machine CPU count.
- Old style interpolation functions (`${!json:foo,1}`) are removed in favour of the newer Bloblang syntax (`${! json("foo") }`).
- The Bloblang functions `meta`, `root_meta`, `error` and `env` now return `null` when the target value does not exist.
- The `clickhouse` SQL driver Data Source Name format parameters have been changed due to a client library update. This also means placeholders in `sql_raw` components should use dollar syntax.
- Docker images no longer come with a default config that contains generated environment variables; use `-s` flag arguments instead.
- All cache components have had their retry/backoff fields modified for consistency.
- All cache components that support a general default TTL now have a field `default_ttl` with a duration string, replacing the previous field.
- The `http` processor and `http_client` output now execute message batch requests as individual requests by default. This behaviour can be disabled by explicitly setting `batch_as_multipart` to `true`.
- Outputs that traditionally wrote empty newlines at the end of batches with >1 message when using the `lines` codec (`socket`, `stdout`, `file`, `sftp`) no longer do this by default.
- The `switch` output field `retry_until_success` now defaults to `false`.
- All AWS components now have a default `region` field that is empty, allowing environment variables or profile values to be used by default.
- Serverless distributions of Benthos (AWS Lambda, etc.) have had the default output config changed to reject messages when the processing fails; this should make it easier to handle errors from invocation.
- The standard metrics emitted by Benthos have been largely simplified and improved; for more information check out the metrics page.
- The default metrics type is now `prometheus`.
- The `http_server` metrics type has been renamed to `json_api`.
- The `stdout` metrics type has been renamed to `logger`.
- The `logger` configuration section has been simplified, with `logfmt` being the new default format.
- The `logger` field `add_timestamp` is now `false` by default.
- Field `parts` has been removed from all processors.
- Field `max_in_flight` has been removed from a range of output brokers as it is no longer required.
- The `dedupe` processor now acts upon individual messages by default, and the `hash` field has been removed.
- The `log` processor now executes for each individual message of a batch.
- The `sleep` processor now executes for each individual message of a batch.
- The `benthos test` subcommand no longer walks when targeting a directory; instead use triple-dot syntax (`./dir/...`) or wildcard patterns.
- Go API: The module name has changed to `github.com/benthosdev/benthos/v4`.
- Go API: All packages within the `lib` directory have been removed in favour of the newer APIs within `public`.
- Go API: Distributed tracing is now via the Open Telemetry client library.
- New `sql_raw` processor and output.
- Corrected a case where nested `parallel` processors that result in emptied batches (all messages filtered) would propagate an unack rather than an acknowledgement.
- The `sql` processor and output are no longer marked as deprecated and will therefore not be removed in V4. This change was made in order to provide more time to migrate to the new `sql_raw` processor and output.
- Field `nack_reject_patterns` added to the `amqp_0_9` input.
- New experimental `mongodb` input.
- Field `cast` added to the `xml` processor and `parse_xml` Bloblang method.
- New experimental `gcp_bigquery_select` processor.
- New `assign` Bloblang method.
- The `protobuf` processor now supports `Any` fields in protobuf definitions.
- The `azure_queue_storage` input field `queue_name` now supports interpolation functions.
- Fixed an issue where manually clearing errors within a `catch` processor would result in subsequent processors in the block being skipped.
- The `cassandra` output should now automatically match `float` columns.
- Fixed an issue where the `elasticsearch` output would collapse batched messages of matching ID rather than send them as individual items.
- Running streams mode with `--no-api` no longer removes the `/ready` endpoint.
- The `throttle` processor has now been marked as deprecated.
- Field `cors` added to the `http_server` input and output, for supporting CORS requests when custom servers are used.
- Field `server_side_encryption` added to the `aws_s3` output.
- Fields `use_histogram_timing` and `histogram_buckets` added to the `prometheus` metrics exporter.
- New duration string and back off field types added to plugin config builders.
- Experimental field `multipart` added to the `http_client` output.
- Codec `regex` added to inputs.
- Field `timeout` added to the `cassandra` output.
- New experimental `gcp_bigquery_select` input.
- Field `ack_wait` added to the `nats_jetstream` input.
- The old map-style resource config fields (`resources.processors.<name>`, etc.) are now marked as deprecated. Use the newer list based fields (`processor_resources`, etc.) instead.
- The `generate` input now supports zeroed duration strings (`0s`, etc.) for unbounded document creation.
- The `aws_dynamodb_partiql` processor no longer ignores the `endpoint` field.
- Corrected duplicate detection for custom cache implementations.
- Fixed a panic caused by invalid bounds in the `range` function.
- Resource config files imported now allow (and ignore) a `tests` field.
- Fixed an issue where the `aws_kinesis` input would fail to back off during unyielding read attempts.
- Fixed a linting error with `zmq4` input/output `urls` fields that was incorrectly expecting a string.
- Field `sync` added to the `gcp_pubsub` input.
- New input, processor, and output config field types added to the plugin APIs.
- Added a new experimental `parquet` processor.
- New Bloblang method `format_json`.
- Field `collection` in the `mongodb` processor and output now supports interpolation functions.
- Field `output_raw` added to the `jq` processor.
- The lambda distribution now supports a `BENTHOS_CONFIG_PATH` environment variable for specifying a custom config path.
- Field `metadata` added to `http` and `http_client` components.
- Field `ordering_key` added to the `gcp_pubsub` output.
- A suite of new experimental `geoip_` methods have been added.
- Added flag `--deprecated` to the `benthos lint` subcommand for detecting deprecated fields.
- The `sql` processor and output have been marked deprecated in favour of the newer `sql_insert` and `sql_select` alternatives.
- The input codec `chunked` is no longer capped by the packet size of the incoming streams.
- The `schema_registry_decode` and `schema_registry_encode` processors now honour trailing slashes in the `url` field.
- Processors configured within `pipeline.processors` now share processors across threads rather than clone them.
- Go API: Errors returned from input/output plugin `Close` methods no longer cause shutdown to block.
- The `pulsar` output should now follow authentication configuration.
- Fixed an issue where the `aws_sqs` output might occasionally retry a failed message send with an invalid empty message body.
- Field `json_marshal_mode` added to the MongoDB processor.
- Fields `extract_headers.include_prefixes` and `extract_headers.include_patterns` added to the `http_client` input and output and to the `http` processor.
- Fields `sync_response.metadata_headers.include_prefixes` and `sync_response.metadata_headers.include_patterns` added to the `http_server` input.
- The `http_client` input and output and the `http` processor field `copy_response_headers` has been deprecated in favour of the `extract_headers` functionality.
- Added a new CLI flag `--no-api` for the `streams` subcommand to disable the REST API.
- New experimental `kafka_franz` input and output.
- Added a new Bloblang function `ksuid`.
- All `codec` input fields now support custom CSV delimiters.
- Streams mode paths now resolve glob patterns in all cases.
- Prevented the `nats` input from error logging when acknowledgments can't be fulfilled due to the lack of message replies.
- Fixed an issue where GCP inputs and outputs could terminate requests early due to a cancelled client context.
- Prevented more parsing errors in Bloblang mappings with Windows style line endings.
- Fixed an issue where the `mongodb` output would incorrectly report upsert not allowed on valid operators.
- The `pulsar` input and output now support `oauth2` and `token` authentication mechanisms.
- The `pulsar` input now enriches messages with more metadata.
- Fields `message_group_id`, `message_deduplication_id`, and `metadata` added to the `aws_sns` output.
- Field `upsert` added to the `mongodb` processor and output.
- The `schema_registry_encode` and `schema_registry_decode` processors now honour path prefixes included in the `url` field.
- The `mqtt` input and output `keepalive` field is now interpreted as seconds; previously it was being erroneously interpreted as nanoseconds.
- The header `Content-Type` in the field `http_server.sync_response.headers` is now detected in a case insensitive way when populating multipart message encoding types.
- The `nats_jetstream` input and outputs should now honour `auth.*` config fields.
- New Bloblang method `parse_duration_iso8601` for parsing ISO-8601 duration strings into an integer.
- The `nats` input now supports metadata from headers when supported.
- Field `headers` added to the `nats` output.
- Go API: Optional field definitions added for config specs.
- New (experimental) `sql_select` input.
- New (experimental) `sql_select` and `sql_insert` processors, which will supersede the existing `sql` processor.
- New (experimental) `sql_insert` output, which will supersede the existing `sql` output.
- Field `retained_interpolated` added to the `mqtt` output.
- Bloblang now allows optional carriage returns before line feeds at line endings.
- New CLI flag `-w`/`--watcher` added for automatically detecting and applying configuration file changes.
- Field `avro_raw_json` added to the `schema_registry_encode` processor.
- New (experimental) `msgpack` processor.
- New `parse_msgpack` and `format_msgpack` Bloblang methods.
- Fixed an issue where the `azure_table_storage` output would attempt to send >100 size batches (and fail).
- Fixed an issue in the `subprocess` input where saturated stdout streams could become corrupted.
- `amqp_0_9` components now support TLS EXTERNAL auth.
- Field `urls` added to the `amqp_0_9` input and output.
- New experimental `schema_registry_encode` processor.
- Field `write_timeout` added to the `mqtt` output, and field `connect_timeout` added to both the input and output.
- The `websocket` input and output now support custom `tls` configuration.
- New output broker type `fallback` added as a drop-in replacement for the now deprecated `try` broker.
- Removed a performance bottleneck when consuming a large quantity of small files with the `file` input.
- Go API: New config field types `StringMap`, `IntList`, and `IntMap`.
- The `http_client` input, output and processor now include the response body in request error logs for more context.
- Field `dynamic_client_id_suffix` added to the `mqtt` input and output.
- Corrected an issue where the `sftp` input could consume duplicate documents before shutting down when run in batch mode.
- Fields `cache_control`, `content_disposition`, `content_language` and `website_redirect_location` added to the `aws_s3` output.
- Fields `cors.enabled` and `cors.allowed_origins` added to the server wide `http` config.
- For Kafka components the config now supports the `rack_id` field which may contain a rack identifier for the Kafka client.
- Allow mapping imports in Bloblang environments to be disabled.
- Go API: Isolated Bloblang environments are now honored by all components.
- Go API: The stream builder now evaluates environment variable interpolations.
- Field `unsafe_dynamic_query` added to the `sql` processor.
- The `kafka` output now supports `zstd` compression.
- The `test` subcommand now expands resource glob patterns (`benthos -r "./foo/*.yaml" test ./...`).
- The Bloblang equality operator now returns `false` when comparing non-null values with `null` rather than a mismatched types error.
- New experimental `gcp_bigquery` output.
- Go API: It's now possible to parse a config spec directly with `ParseYAML`.
- Bloblang methods and functions now support named parameters.
- Field `args_mapping` added to the `cassandra` output.
- For NATS, NATS Streaming and Jetstream components the config now supports specifying either `nkey_file` or `user_credentials_file` to configure authentication.
- The `mqtt` input and output now support sending a last will, configuring a keep alive timeout, and setting output messages as retained.
- Go API: New stream builder `AddBatchProducerFunc` and `AddBatchConsumerFunc` methods.
- Field `gzip_compression` added to the `elasticsearch` output.
- The `redis_streams` input now supports creating the stream with the `MKSTREAM` command (enabled by default).
- The `kafka` output now supports manual partition allocation using interpolation functions in the field `partition`.
- The Bloblang method `contains` now correctly compares numerical values in arrays and objects.
- Go API: Added ability to create and register `BatchBuffer` plugins.
- New `system_window` buffer for processing message windows (sliding or tumbling) following the system clock.
- Field `root_cas` added to all TLS configuration blocks.
- The `sftp` input and output now support key based authentication.
- New Bloblang function `nanoid`.
- The `gcp_cloud_storage` output now supports custom collision behaviour with the field `collision_mode`.
- Field `priority` added to the `amqp_0_9` output.
- Operator `keys` added to the `redis` processor.
- The `http_client` input when configured in stream mode now allows message body interpolation functions within the URL and header parameters.
- Fixed a panic that would occur when executing a pipeline where processor or input resources reference rate limits.
- The `elasticsearch` output now supports delete, update and index operations.
- Go API: Added ability to create and register `BatchInput` plugins.
- Prevented the `http_server` input from blocking graceful pipeline termination indefinitely.
- Removed an annoying nil error log from HTTP client components when parsing responses.
- The `redis_streams`, `redis_pubsub` and `redis_list` outputs now all support batching for higher throughput.
- The `amqp_1` input and output now support passing and receiving metadata as annotations.
- Config unit test definitions can now use files for both the input and expected output.
- Field `track_properties` added to the `azure_queue_storage` input for enriching messages with properties such as the message backlog.
- Go API: The new plugin APIs, available at `./public/service`, are considered stable.
- The streams mode API now uses the setting `http.read_timeout` for timing out stream CRUD endpoints.
- The Bloblang function `random_int` now only resolves dynamic arguments once during the lifetime of the mapping. Documentation has been updated in order to clarify the behaviour with dynamic arguments.
- Fixed an issue where registered plugins would return `failed to obtain docs for X type Y` linting errors.
- HTTP client components are now more permissive regarding invalid Content-Type headers.
- New CLI flag `--set` (`-s`) for overriding arbitrary fields in a config, e.g. `-s input.type=http_server` would override the config, setting the input type to `http_server`.
- Unit test definitions now support mocking components.
- The `nats` input now supports acks.
- The `memory` and `file` cache types now expose metrics akin to other caches.
- The `switch` output when `retry_until_success` is set to `false` will now provide granular nacks to pre-batched messages.
- The URL printed in error messages when HTTP client components fail should now show interpolated values as they were interpreted.
- Go Plugins API V2: Batched processors should now show in tracing, and no longer complain about spans being closed more than once.
- Algorithm `lz4` added to the `compress` and `decompress` processors.
- New experimental `aws_dynamodb_partiql` processor.
- Go Plugins API: New run opt `OptUseContext` for an extra shutdown mechanism.
- Fixed an issue where the `http_client` would prematurely drop connections when configured with `stream.enabled` set to `true`.
- Prevented closed output brokers from leaving child outputs running when they've failed to establish a connection.
- Fixed metrics prefixes in streams mode for nested components.
- CLI flag `max-token-length` added to the `blobl` subcommand.
- Go Plugins API: Plugin components can now be configured seamlessly like native components, meaning the namespace `plugin` is no longer required and configuration fields can be placed within the namespace of the plugin itself. Note that the old style (within `plugin`) is still supported.
- The `http_client` input fields `url` and `headers` now support interpolation functions that access metadata and contents of the last received message.
- Rate limit resources now emit `checked`, `limited` and `error` metrics.
- A new experimental plugins API is available for early adopters, and can be found at `./public/x/service`.
- A new experimental template system is available for early adopters; examples can be found in `./template`.
- New beta Bloblang method `bloblang` for executing dynamic mappings.
- All `http` components now support a beta `jwt` authentication mechanism.
- New experimental `schema_registry_decode` processor.
- New Bloblang method `parse_duration` for parsing duration strings into an integer.
- New experimental `twitter_search` input.
- New field `args_mapping` added to the `sql` processor and output for mapping explicitly typed arguments (see the sketch after this list).
- Added format `csv` to the `unarchive` processor.
- The `redis` processor now supports `incrby` operations.
- New experimental `discord` input and output.
- The `http_server` input now adds a metadata field `http_server_verb`.
- New Bloblang methods `parse_yaml` and `format_yaml`.
- CLI flag `env-file` added to Benthos for parsing dotenv files.
- New `mssql` SQL driver for the `sql` processor and output.
- New POST endpoint `/resources/{type}/{id}` added to Benthos streams mode for dynamically mutating resource configs.
- Go Plugins API: The Bloblang
ArgSpec
now returns a public error typeArgError
. - Components that support glob paths (
file
,csv
, etc) now also support super globs (double asterisk). - The
aws_kinesis
input is now stable. - The
gcp_cloud_storage
input and output are now beta. - The
kinesis
input is now deprecated. - Go Plugins API: the minimum version of Go required is now 1.16.
- Fixed a rare panic caused when executing a `workflow` resource processor that references `branch` resources across parallel threads.
- The `mqtt` input with multiple topics now works with brokers that would previously error on multiple subscriptions.
- Fixed initialisation of components configured as resources that reference other resources, where under certain circumstances the components would fail to obtain a true reference to the target resource. This fix makes it so that resources are accessed only when used, which will also make it possible to introduce dynamic resources in future.

- The streams mode endpoint `/streams/{id}/stats` should now work again provided the default manager is used.

- The `branch` processor now writes error logs when the request or result map fails.
- The `branch` processor (and `workflow` by proxy) now allow errors to be mapped into the branch using `error()` in the `request_map`.
- Added a linting rule that warns against having a `reject` output under a `switch` broker without `retry_until_success` disabled.
- Prevented a panic or variable corruption that could occur when a Bloblang mapping is executed by parallel threads.

- The `create` subcommand now supports a `--small`/`-s` flag that reduces the output down to only core components and common fields.
- Go Plugins API: Added method `Overlay` to the public Bloblang package.
- The `http_server` input now adds path parameters (`/{foo}/{bar}`) to the metadata of ingested messages.
- The `stdout` output now has a `codec` field.
- New Bloblang methods `format_timestamp_strftime` and `parse_timestamp_strptime`.
- New experimental `nats_jetstream` input and output.

- Go Plugins API: Bloblang method and function plugins now automatically resolve dynamic arguments.

- Fixed a regression where the `http_client` input with an empty `payload` would crash with a `url` containing interpolation functions.
- Broker output types (`broker`, `try`, `switch`) now automatically match the highest `max_in_flight` of their children. The field `max_in_flight` can still be manually set in order to enforce a minimum value for when inference isn't possible, such as with dynamic output resources.
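A minimal sketch of the inference behaviour described above; the child outputs, addresses and values here are placeholders:

```yaml
output:
  broker:
    pattern: fan_out
    # max_in_flight is now inferred as the highest value among the children
    # (10 here); set it explicitly only to enforce a minimum, e.g. when a
    # child is a dynamic output resource and inference isn't possible.
    outputs:
      - kafka:
          addresses: [ localhost:9092 ]
          topic: foo
          max_in_flight: 10
      - http_client:
          url: http://localhost:4195/post
```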
- Experimental `azure_renew_lock` field added to the `amqp_1` input.
- New beta `root_meta` function.
- Field `dequeue_visibility_timeout` added to the `azure_queue_storage` input.
- Field `max_in_flight` added to the `azure_queue_storage` output.
- New beta Bloblang methods `format_timestamp_unix` and `format_timestamp_unix_nano`.
- New Bloblang methods `reverse` and `index_of`.
- Experimental `extract_tracing_map` field added to the `kafka` input.
- Experimental `inject_tracing_map` field added to the `kafka` output.
- Field `oauth2.scopes` added to HTTP components.
- The `mqtt` input and output now support TLS.
- Field `enable_renegotiation` added to `tls` configurations.
- Bloblang `if` expressions now support an arbitrary number of `else if` blocks.
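A minimal sketch of the expanded syntax, shown here inside a `bloblang` processor (field names and values are illustrative):

```yaml
pipeline:
  processors:
    - bloblang: |
        # Any number of else if blocks can now be chained.
        root.tier = if this.score > 90 {
          "gold"
        } else if this.score > 50 {
          "silver"
        } else {
          "bronze"
        }
```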
- The `checkpoint_limit` field for the `kafka` input now works according to explicit messages in flight rather than the actual offset. This means it now works as expected with compacted topics.
- The `aws_kinesis` input should now automatically recover when the shard iterator has expired.
- Corrected an issue where messages prefixed with valid JSON documents or values were being decoded in truncated form when the remainder was invalid.

- The following beta components have been promoted to stable:
  - `ristretto` cache
  - `csv` and `generate` inputs
  - `reject` output
  - `branch`, `jq` and `workflow` processors

- Fixed an issue where the `kafka` input with partition balancing wasn't committing offsets.

- The `http_server` input now provides a metadata field `http_server_request_path`.
- New methods `sort_by` and `key_values` added to Bloblang.

- Glob patterns for various components no longer resolve to bad paths in the absence of matches.

- Fixed an issue where acknowledgements from the `azure_queue_storage` input would timeout prematurely, resulting in duplicated message delivery.
- Unit test definitions no longer have implicit test cases when omitted.

- Vastly improved Bloblang mapping errors.

- The `azure_blob_storage` input will now gracefully terminate if the client credentials become invalid.
- Prevented the experimental `gcp_cloud_storage` input from closing early during large file consumption.

- New (experimental) Apache Pulsar input and output.

- Field `codec` added to the `socket` output.
- New Bloblang method `map_each_key`.
- General config linting improvements.

- Bloblang mappings and interpolated fields within configs are now compile checked during linting.

- New output level `metadata.exclude_prefixes` config field for restricting metadata values sent to the following outputs: `kafka`, `aws_s3`, `amqp_0_9`, `redis_streams`, `aws_sqs`, `gcp_pubsub`.
- All NATS components now have `tls` support.
- Bloblang now supports context capture in query lambdas.

- New subcommand `benthos blobl server` that hosts a Bloblang editor web application.
- New (experimental) `mongodb` output, cache and processor.
- New (experimental) `gcp_cloud_storage` input and output.
- Field `batch_as_multipart` added to the `http_client` output.
- Inputs, outputs, processors, caches and rate limits now have a component level config field `label`, which sets the metrics and logging prefix.
- Resources can now be declared in the new `<component>_resources` fields at the root of config files (see the sketch below), the old `resources.<component>s.<label>` style is still valid for backwards compatibility reasons.
- Bloblang mappings now support importing the entirety of a map from a path using `from "<path>"` syntax.
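As a rough sketch of the new root-level style; the labels and child fields are hypothetical:

```yaml
cache_resources:
  - label: foocache
    memory: {}

rate_limit_resources:
  - label: foolimit
    local:
      count: 100
      interval: 1s
```

The equivalent old style would nest these under `resources.caches.foocache` and `resources.rate_limits.foolimit`.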
- Corrected ack behaviour for the beta `azure_queue_storage` input.
- Bloblang compressed arithmetic expressions with field names (`foo+bar`) now correctly parse.
- Fixed throughput issues with the `aws_sqs` input.
- Prevented using the `root` keyword within Bloblang queries, returning an error message explaining alternative options. Eventually `root` references within queries will be fully supported, and so returning clear error messages is a temporary fix.
- Increased the offset commit API version used by the `kafka` input to v0.8.2 when consuming explicit partitions.

- Go API: Component implementations now require explicit import from `./public/components/all` in order to be invokable. This should be done automatically at all plugin and custom build entry points. If, however, you notice that your builds have begun complaining that known components do not exist then you will need to explicitly import the package with `_ "github.com/Jeffail/benthos/v3/public/components/all"`. If this is the case then please report it as an issue so that it can be dealt with.

- Fixed a potential pipeline stall that would occur when non-batched outputs receive message batches.

- New `azure_queue_storage` input.
- All inputs with a `codec` field now support multipart.
- New `codec` field added to the `http_client`, `socket`, `socket_server` and `stdin` inputs.
- The `kafka` input now allows an empty consumer group for operating without stored offsets.
- The `kafka` input now supports partition ranges.

- The bloblang `encode` method algorithm `ascii85` no longer returns an error when the input is misaligned.

- The `catch` method now properly executes dynamic argument functions.

- New `http` fields `cert_file` and `key_file`, which when specified enforce HTTPS for the general Benthos server.
- Bloblang method `catch` now supports `deleted()` as an argument.

- Fixed an issue with custom labels becoming stagnant with the `influxdb` metrics type.
- Fixed a potential unhandled error when writing to the `azure_queue_storage` output.

- Experimental `sharded_join` fields added to the `sequence` input.
- Added a new API for writing Bloblang plugins in Go at `./public/bloblang`.
- Field `fields_mapping` added to the `log` processor.

- Prevented pre-existing errors from failing/aborting branch execution in the `branch` and `workflow` processors.
- Fixed `subprocess` processor message corruption with codecs `length_prefixed_uint32_be` and `netstring`.

- The `bloblang` input has been renamed to `generate`. This change is backwards compatible and `bloblang` will still be recognized until the next major version release.
- Bloblang more often preserves integer precision in arithmetic operations.
- Field `key` in output `redis_list` now supports interpolation functions.
- Field `tags` added to output `aws_s3`.
- New experimental `sftp` input and output.
- New input codec `chunker`.
- New field `import_paths` added to the `protobuf` processor, replaces the now deprecated `import_path` field.
- Added format `concatenate` to the `archive` processor.

- The `aws_lambda` processor now adds a metadata field `lambda_function_error` to messages when the function invocation suffers a runtime error.

- Fixed an issue with the `azure_blob_storage` output where `blob_type` set to `APPEND` could result in send failures.
- Fixed a potential panic when shutting down a `socket_server` input with messages in flight.
- The `switch` processor now correctly flags errors on messages that cause a check to throw an error.

- New bloblang method `bytes`.
- The bloblang method `index` now works on byte arrays.
- Field `branch_resources` added to the `workflow` processor.
- Field `storage_sas_token` added to the `azure_blob_storage` input and output.
- The bloblang method `hash` and the `hash` processor now support `md5`.
- Field `collector_url` added to the `jaeger` tracer.
- The bloblang method `strip_html` now allows you to specify a list of allowed elements.
- New bloblang method `parse_xml`.
- New bloblang method `replace_many`.
- New bloblang methods `filepath_split` and `filepath_join`.

- The `cassandra` output's `backoff.max_elapsed_time` field was unused and has been hidden from docs.

- Fields `content_type` and `content_encoding` added to the `amqp_0_9` output.
- Batching fields added to the `hdfs` output.
- Fields `codec_send` and `codec_recv` added to the `subprocess` processor.
- Methods `min`, `max`, `abs`, `log`, `log10` and `ceil` added to Bloblang.
- Added field `pattern_paths` to the `grok` processor.
- The `grok` processor now supports dots within field names for nested values.
- New `drop_on` output.

- The `xml` processor now supports non UTF-8 encoding schemes.

- The `drop_on_error` output has been deprecated in favour of the new `drop_on` output.

- New `influxdb` metrics target.
- New `azure_blob_storage` input.
- New `azure_queue_storage` output.
- The `bloblang` input field `interval` now supports cron expressions.
- New beta `aws_kinesis` and `aws_sqs` inputs.
- The `bool` bloblang method now supports a wider range of string values.
- New `reject` output type for conditionally rejecting messages.
- All Redis components now support clustering and fail-over patterns.
- The `compress` and `decompress` processors now support snappy.
- Fixed a panic on startup when using `if` statements within a `workflow` branch request or response map.
- The `meta` bloblang function error messages now include the name of the required value.
- Config unit tests now report processor errors when checks fail.

- Environment variable interpolations now allow dots within the variable name.

- The experimental `aws_s3` input is now marked as beta.
- The beta `kinesis_balanced` input is now deprecated.
- All Azure components have been renamed to include the prefix `azure_`, e.g. `blob_storage` is now `azure_blob_storage`. The old names can still be used for backwards compatibility.
- All AWS components have been renamed to include the prefix `aws_`, e.g. `s3` is now `aws_s3`. The old names can still be used for backwards compatibility.

- New field `retry_as_batch` added to the `kafka` output to assist in ensuring message ordering through retries.
- Field `delay_period` added to the experimental `aws_s3` input.
- Added service options for adding API middlewares and specifying TLS options for plugin builds.

- Method `not_empty` added to Bloblang.
- New `bloblang` predicate type added to unit tests.
- Unit test case field `target_processors` now allows you to optionally specify a target file.
- Basic auth support added to the `prometheus` metrics pusher.

- Unit tests that define environment variables and are run serially (`parallel: false`) will retain those environment variables during execution, as opposed to only at config parse time.
- Lambda distributions now look for config files relative to the binary location, allowing you to deploy configs from the same zip as the binary.

- Add `Content-Type` headers in streams API responses.
- Field `delete_objects` is now respected by the experimental `aws_s3` input.
- Fixed a case where resource processors couldn't access rate limit resources.

- Input files that are valid according to the codec but empty now trigger acknowledgements.

- Mapping `deleted()` within Bloblang object and array literals now correctly omits the values.

- New field `format` added to `logger` supporting `json` and `logfmt`.
- The `file` input now provides the metadata field `path` on payloads.

- The `output.sent` metric now properly represents the number of individual messages sent, even after archiving batches.
- Fixed a case where metric processors in streams mode pipelines and dynamic components would hang.

- Sync responses of >1 payloads should now get a correct rfc1341 multipart header.

- The `cassandra` output now correctly marshals float and double values.
- The `nanomsg` input with a `SUB` socket no longer attempts to set an invalid timeout.
- Added field `codec` to the `file` output.
- The `file` output now supports dynamic file paths.
- Added field `ttl` to the `cache` processor and output.
- New `sql` output, which is similar to the `sql` processor and currently supports Clickhouse, PostgreSQL and MySQL.
- The `kafka` input now supports multiple topics, topic partition balancing, and checkpointing.
- New `cassandra` output.
- Field `allowed_verbs` added to the `http_server` input and output.
- New bloblang function `now`, and method `parse_timestamp`.
- New bloblang methods `floor` and `round`.
- The bloblang method `format_timestamp` now supports strings in ISO 8601 format as well as unix epochs with decimal precision up to nanoseconds.

- The `files` output has been deprecated as its behaviour is now covered by `file`.
- The `kafka_balanced` input has now been deprecated as its functionality has been added to the `kafka` input.
- The `cloudwatch` metrics aggregator is now considered stable.
- The `sequence` input is now considered stable.
- The `switch` processor no longer permits cases with no processors.

- Fixed the `tar` and `tar-gzip` input codecs in experimental inputs.
- Fixed a crash that could occur when referencing contextual fields within interpolation functions.

- The `noop` processor can now be inferred with an empty object (`noop: {}`).
- Fixed potential message corruption with the `file` input when using the `lines` codec.

- The `csv` input now supports glob patterns in file paths.
- The `file` input now supports multiple paths, glob patterns, and a range of codecs.
- New experimental `aws_s3` input.
- All `redis` components now support TLS.
- The `-r` cli flag now supports glob patterns.

- Bloblang literals, including method and function arguments, can now be mutated without brackets regardless of where they appear.

- Bloblang maps now work when running bloblang with the `blobl` subcommand.
- The `ristretto` cache no longer forces retries on get commands, and the retry fields have been changed in order to reflect this behaviour.
- The `files` input has been deprecated as its behaviour is now covered by `file`.
- Numbers within JSON documents are now parsed in a way that preserves precision even in cases where the number does not fit a 64-bit signed integer or float. When arithmetic is applied to those numbers (either in Bloblang or by other means) the number is converted (and precision lost) at that point based on the operation itself. This change means that string coercion on large numbers (e.g. `root.foo = this.large_int.string()`) should now preserve the original form. However, if you are using plugins that interact with JSON message payloads you must ensure that your plugins are able to process the `json.Number` type. This change should otherwise not alter the behaviour of your configs, but if you notice odd side effects you can disable this feature by setting the environment variable `BENTHOS_USE_NUMBER` to `false` (`BENTHOS_USE_NUMBER=false benthos -c ./config.yaml`). Please raise an issue if this is the case so that it can be looked into.

- New input `subprocess`.
- New output `subprocess`.
- Field `auto_ack` added to the `amqp_0_9` input.
- Metric labels can be renamed for `prometheus` and `cloudwatch` metrics components using `path_mapping` by assigning meta fields.

- Metrics labels registered using the `rename` metrics component are now sorted before registering, fixing incorrect values that could potentially be seen when renaming multiple metrics to the same name.

- OAuth 2.0 using the client credentials token flow is now supported by the `http_client` input and output, and the `http` processor (see the sketch below).
- Method `format_timestamp` added to Bloblang.
- Methods `re_find_object` and `re_find_all_object` added to Bloblang.
- Field `connection_string` added to the Azure `blob_storage` and `table_storage` outputs.
- Field `public_access_level` added to the Azure `blob_storage` output.

- Bloblang now supports trailing commas in object and array literals and function and method parameters.

- The `amqp_1` input and output now re-establish connections to brokers on any unknown error.
- Batching components now more efficiently attempt a final flush of data during graceful shutdown.

- The `dynamic` output is now more flexible with removing outputs, and should no longer block the API as aggressively.
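A rough sketch of the client credentials flow on an `http_client` input; the URLs and credentials are placeholders, and the exact `oauth2` field names should be checked against the documentation:

```yaml
input:
  http_client:
    url: https://example.com/api/items
    oauth2:
      enabled: true
      token_url: https://example.com/oauth2/token
      client_key: ${CLIENT_KEY}
      client_secret: ${CLIENT_SECRET}
```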
- New cli flag `log.level` for overriding the configured logging level.
- New integration test suite (much more dapper and also a bit more swanky than the last).

- The default value for `batching.count` fields is now zero, which means adding a non-count based batching mechanism without also explicitly overriding `count` no longer incorrectly caps batches at one message. This change is backwards compatible in that working batching configs will not change in behaviour. However, a broken batching config will now behave as expected.
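For example, a purely period-based batching policy such as this sketch previously capped batches at one message due to the implicit `count` default, and now behaves as intended (the output and values are placeholders):

```yaml
output:
  kafka:
    addresses: [ localhost:9092 ]
    topic: foo
    batching:
      period: 5s # count now defaults to zero rather than one
```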
- Improved Bloblang parser error messages for function and method parameters.

- New methods `any`, `all` and `json_schema` added to Bloblang.
- New function `file` added to Bloblang.
- The `switch` output can now route batched messages individually (when using the new `cases` field, see the sketch below).
- The `switch` processor now routes batched messages individually (when using the new `cases` field).
- The `workflow` processor can now reference resource configured `branch` processors.
- The `metric` processor now has a field `name` that replaces the now deprecated field `path`. When used, the processor now applies to all messages of a batch and the name of the metric is now absolute, without being prefixed by a path generated based on its position within the config.
- New field `check` added to `group_by` processor children, which now replaces the old `condition` field.
- New field `check` added to `while` processor, which now replaces the old `condition` field.
- New field `check` added to `read_until` input, which now replaces the old `condition` field.
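A minimal sketch of the new `cases` field on the `switch` output; the checks and child outputs are placeholders:

```yaml
output:
  switch:
    cases:
      - check: this.type == "foo"
        output:
          kafka:
            addresses: [ localhost:9092 ]
            topic: foos
      - check: "" # an empty check passes everything remaining
        output:
          drop: {}
```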
- The `bloblang` input with an interval configured now emits the first message straight away.

- New function `range` added to Bloblang.
- New beta `jq` processor.
- New driver `clickhouse` added to the `sql` processor.

- New field `data_source_name` replaces `dsn` for the `sql` processor, and when using this field each message of a batch is processed individually. When using the field `dsn` the behaviour remains unchanged for backwards compatibility.
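A rough sketch of the new field on the `sql` processor; the DSN, query and arguments are placeholders, and the surrounding field set is an assumption based on the docs of that era:

```yaml
pipeline:
  processors:
    - sql:
        driver: postgres
        data_source_name: postgres://user:pass@localhost:5432/mydb?sslmode=disable
        query: "INSERT INTO footable (foo) VALUES ($1);"
        args:
          - ${! json("foo") }
```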
- Eliminated situations where an `amqp_0_9` or `amqp_1` component would abandon a connection reset due to partial errors.
- The Bloblang parser now allows naked negation of queries.

- The `cache` processor interpolations for `key` and `value` now cross-batch reference messages before processing.

- New Bloblang methods `not_null` and `filter`.
- New Bloblang function `env`.
- New field `path_mapping` added to all metrics types.
- Field `max_in_flight` added to the `dynamic` output.
- The `workflow` processor has been updated to use `branch` processors with the new field `branches`, these changes are backwards compatible with the now deprecated `stages` field.

- The `rename`, `whitelist` and `blacklist` metrics types are now deprecated, and the `path_mapping` field should be used instead.
- The `conditional`, `process_map` and `process_dag` processors are now deprecated and are superseded by the `switch`, `branch` and `workflow` processors respectively.

- Fixed `http` processor error log messages that would print incorrect URLs.
- The `http_server` input now emits `latency` metrics.
- Fixed a panic that could occur during the shutdown of an `http_server` input serving a backlog of requests.
- Explicit component types (`type: foo`) are now checked by the config linter.
- The `amqp_1` input and output should now reconnect automatically after an unexpected link detach.

- Improved parser error messages with the `blobl` subcommand.
- Added flag `file` to the `blobl` subcommand.
- New Bloblang method `parse_timestamp_unix`.
- New beta `protobuf` processor.
- New beta `branch` processor.
- Batching fields added to `s3` output.

- The `http` processor field `max_parallel` has been deprecated in favour of rate limits, and the fields within `request` have been moved to the root of the `http` namespace. This change is backwards compatible and `http.request` fields will still be recognized until the next major version release.
- The `process_field` processor is now deprecated, and `branch` should be used instead.

- Wholesale metadata mappings (`meta = {"foo":"bar"}`) in Bloblang now correctly clear pre-existing fields.
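For reference, a wholesale metadata mapping looks like the following sketch; with this fix any pre-existing metadata keys are dropped before `foo` is set:

```yaml
pipeline:
  processors:
    - bloblang: |
        root = this
        # Replaces the entire metadata set rather than merging into it.
        meta = {"foo": "bar"}
```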
- Prevented an issue where batched outputs would terminate at start up. Fixes a regression introduced in v3.24.0.

- Endpoint `/ready` added to streams mode API.
- Azure `table_storage` output now supports batched sends.
- All HTTP components are now able to configure a proxy URL.

- New `ristretto` cache.
- Field `shards` added to `memory` cache.

- Batch error handling and retry logic has been improved for the `kafka` and `dynamodb` outputs.
- Bloblang now allows non-matching not-equals comparisons, allowing `foo != null` expressions.

- Condition `check_interpolation` has been deprecated.

- Path segments in Bloblang mapping targets can now be quote-escaped.

- New beta `sequence` input, for sequentially chaining inputs.
- New beta `csv` input for consuming CSV files.
- New beta Azure `table_storage` output.
- New `parse_csv` Bloblang method.
- New `throw` Bloblang function.
- The `slice` Bloblang method now supports negative low and high arguments.

- Manual `mqtt` connection handling for both the input and output. This should fix some cases where connections were dropped and never recovered.
- Fixed Bloblang error where calls to a `.get` method would return `null` after the first query.
- The `for_each` processor no longer interlaces child processors during split processing.

- Added TLS fields to `elasticsearch` output.
- New Bloblang methods `encrypt_aes` and `decrypt_aes` added.
- New field `static_headers` added to the `kafka` output.
- New field `enabled` added to the `http` config section.
- Experimental CLI flag `-resources` added for specifying files containing extra resources.

- The `amqp_0_9` output now resolves `type` and `key` fields per message of a batch.

- New beta `bloblang` input for generating documents.
- New beta Azure `blob_storage` output.
- Field `sync_response.status` added to `http_server` input.
- New Bloblang `errored` function.

- The `json_schema` processor no longer lower cases fields within error messages.
- The `dynamodb` cache no longer creates warning logs for get misses.

- SASL config fields added to `amqp_1` input and output.
- The `lint` subcommand now supports triple dot wildcard paths: `./foo/...`.
- The `test` subcommand now supports tests defined within the target config file being tested.

- Bloblang boolean operands now short circuit.

- Fields `strict_mode` and `max_in_flight` added to the `switch` output.
- New beta `amqp_1` input and output added.

- Field `drop_empty_bodies` added to the `http_client` input.

- Fixed deleting and skipping maps with the `blobl` subcommand.

- New field `type` added to the `amqp_0_9` output.
- New bloblang methods `explode` and `without`.

- Message functions such as `json` and `content` now work correctly when executing bloblang with the `blobl` sub command.

- New bloblang methods `type`, `join`, `unique`, `escape_html`, `unescape_html`, `re_find_all` and `re_find_all_submatch`.
- Bloblang `sort` method now allows custom sorting functions.
- Bloblang now supports `if` expressions (see the sketch below).
- Bloblang now allows joining strings with the `+` operator.
- Bloblang now supports multiline strings with triple quotes.
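A small sketch combining the new `if` expressions and `+` string concatenation, embedded in a `bloblang` processor (the field names are illustrative):

```yaml
pipeline:
  processors:
    - bloblang: |
        root.desc = if this.count > 10 {
          "large batch"
        } else {
          "small batch of " + this.count.string()
        }
```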
- The `xml` processor is now less strict with XML parsing, allowing unrecognised escape sequences to be passed through unchanged.

- The bloblang method `map_each` now respects `Nothing` mapping by copying the underlying value unchanged.
- It's now possible to reference resource inputs and outputs in streams mode.

- Fixed a problem with compiling old interpolation functions with arguments containing colons (i.e. `${!timestamp_utc:2006-01-02T15:04:05.000Z}`).

- Flag `log` added to `test` sub command to allow logging during tests.
- New subcommand `blobl` added for convenient mapping over the command line.
- Lots of new bloblang methods.

- The `redis_streams` input no longer incorrectly copies message data into a metadata field.

- Bloblang is no longer considered beta. Therefore, no breaking changes will be introduced outside of a major version release.

- New `ascii85` and `z85` options have been added to the `encode` and `decode` processors.

- The `meta` function no longer reflects changes made within the map itself.
- Extracting data from other messages of a batch using `from` no longer reflects changes made within a map.
- Meta assignments are no longer allowed within named maps.

- Assigning `deleted()` to `root` now filters out a message entirely.
- Lots of new methods and goodies.

- New HMAC algorithms added to `hash` processor.
- New beta `bloblang` processor.
- New beta `bloblang` condition.

- Prevented a crash that might occur with high-concurrent access of `http_server` metrics with labels.
- The `http_client` output now respects the `copy_response_headers` field.

- Vastly improved function interpolations, including better batch handling and arithmetic operators.

- The `gcp_pubsub` output now supports function interpolation on the field `topic`.
- New `contains_any` and `contains_any_cs` operators added to the `text` condition.
- Support for input and output `resource` types.
- The `broker` and `switch` output types now allow async messages and batching within child outputs.
- Field `schema_path` added to the `avro` processor.
- The `redis` cache, `redis_list` inputs and outputs now support selecting a database with the URL path.
- New field `max_in_flight` added to the `broker` output.

- Benthos now runs in strict mode, but this can be disabled with `--chilled`.
- The Benthos CLI has been revamped, the old flags are still supported but are deprecated.

- The `http_server` input now accepts requests without a content-type header.

- Outputs that resolve function interpolations now correctly resolve the `batch_size` function.
- The `kinesis_balanced` input now correctly establishes connections.
- Fixed an auth transport issue with the `gcp_pubsub` input and output.

- Format `syslog_rfc3164` added to the `parse_log` processor.
- New `multilevel` cache.
- New `json_append`, `json_type` and `json_length` functions added to the `awk` processor.
- New `flatten` operator added to the `json` processor.

- Processors that fail now set the opentracing tag `error` to `true`.

- Kafka connectors now correctly set username and password for all SASL strategies.

- Field `delete_files` added to `files` input.
- TLS fields added to `nsq` input and output.
- Field `processors` added to batching fields to easily accommodate aggregations and archiving of batched messages.
- New `parse_log` processor.
- New `json` condition.
- Operators `flatten_array`, `fold_number_array` and `fold_string_array` added to `json` processor.

- The `redis_streams` input no longer flushes >1 fetched messages as a batch.

- Re-enabled Kafka connections using SASL without TLS.

- New `socket`, `socket_server` inputs.
- New `socket` output.
- Kafka connectors now support SASL using `OAUTHBEARER`, `SCRAM-SHA-256`, `SCRAM-SHA-512` mechanisms.
- Experimental support for AWS CloudWatch metrics.

- The `tcp`, `tcp_server` and `udp_server` inputs have been deprecated and moved into the `socket` and `socket_server` inputs respectively.
- The `udp` and `tcp` outputs have been deprecated and moved into the `socket` output.

- The `subprocess` processor now correctly flags errors that occur.

- New field `max_in_flight` added to the following outputs (see the sketch below): `amqp_0_9`, `cache`, `dynamodb`, `elasticsearch`, `gcp_pubsub`, `hdfs`, `http_client`, `kafka`, `kinesis`, `kinesis_firehose`, `mqtt`, `nanomsg`, `nats`, `nats_stream`, `nsq`, `redis_hash`, `redis_list`, `redis_pubsub`, `redis_streams`, `s3`, `sns`, `sqs`.

- Batching fields added to the following outputs: `dynamodb`, `elasticsearch`, `http_client`, `kafka`, `kinesis`, `kinesis_firehose`, `sqs`.

- More TRACE level logs added throughout the pipeline.

- Operator `delete` added to `cache` processor.
- Operator `explode` added to `json` processor.
- Field `storage_class` added to `s3` output.
- Format `json_map` added to `unarchive` processor.
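As a sketch of how these field groups look on one of the listed outputs; the values are placeholders:

```yaml
output:
  kafka:
    addresses: [ localhost:9092 ]
    topic: foo
    max_in_flight: 10 # maximum number of parallel in-flight requests
    batching:
      count: 100
      period: 1s
```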
- Function interpolated strings within the `json` processor `value` field are now correctly unicode escaped.
- Retry intervals for `kafka` output have been tuned to prevent circuit breaker throttling.

- New `try` output, which is a drop-in replacement for a `broker` with the `try` pattern.
- Field `successful_on` added to the `http` processor.
- The `statsd` metrics type now supports Datadog or InfluxDB tagging.
- Field `sync_response.headers` added to `http_server` input.
- New `sync_response` processor.
- Field `partitioner` added to the `kafka` output.

- The `http` processor now gracefully handles empty responses.

- The `kafka` input should now correctly recover from coordinator failures during an offset commit.
- Attributes permitted by the `sqs` output should now have parity with real limitations.

- Batching using an input `broker` now works with only one child input configured.
- The `zmq4` input now correctly supports broker based batching.

- New `workflow` processor.
- New `resource` processor.
- Processors can now be registered within the `resources` section of a config.

- The `mqtt` output field `topic` now supports interpolation functions.

- The `kafka` output no longer attempts to send headers on old versions of the protocol.

- New `regexp_expand` operator added to the `text` processor.
- New `json_schema` processor.

- New `amqp_0_9` output which replaces the now deprecated `amqp` output.
- The `broker` output now supports batching.

- The `memory` buffer now allows parallel processing of batched payloads.
- Version and date information should now be correctly displayed in archive distributions.
- The `s3` input now correctly unescapes bucket keys when streaming from SQS.

- Field `sqs_endpoint` added to the `s3` input.
- Field `kms_key_id` added to the `s3` output.
- Operator `delete` added to `metadata` processor.
- New experimental metrics aggregator `stdout`.
- Field `ack_wait` added to `nats_stream` input.
- New `batching` field added to `broker` input for batching merged streams.
- Field `healthcheck` added to `elasticsearch` output.
- New `json_schema` condition.

- Experimental `kafka_cg` input has been removed.
- The `kafka_balanced` input's underlying implementation has been replaced with the `kafka_cg` one.
- All inputs have been updated to automatically utilise >1 processing threads, with the exception of `kafka` and `kinesis`.

- New `is` operator added to `text` condition.
- New config unit test condition `content_matches`.
- Field `init_values` added to the `memory` cache.
- New `split` operator added to `json` processor.
- Fields `user` and `password` added to `mqtt` input and output.
- New experimental `amqp_0_9` input.

- Linting is now disabled for the environment var config shipped with docker images, this should prevent the log spam on start up.

- Go API: Experimental `reader.Async` component methods renamed.

- Prevented `kafka_cg` input lock up after batch policy period trigger with no backlog.

- New `redis` processor.
- New `kinesis_firehose` output.
- New experimental `kafka_cg` input.
- Go API: The `metrics.Local` aggregator now supports labels.

- The `json` processor no longer removes content moved from a path to the same path.

This is a major version release, for more information and guidance on how to migrate please refer to https://benthos.dev/docs/guides/migration/v3.
- The `json` processor now allows you to `move` from either a root source or to a root destination.
- Added interpolation to the `metadata` processor `key` field.
- Granular retry fields added to `kafka` output.

- Go modules are now fully supported, imports must now include the major version (e.g. `github.com/Jeffail/benthos/v3`).
- Removed deprecated `mmap_file` buffer.
- Removed deprecated (and undocumented) metrics paths.
- Moved field `prefix` from root of `metrics` into relevant child components.
- Names of `process_dag` stages must now match the regexp `[a-zA-Z0-9_-]+`.
- Go API: buffer constructors now take a `types.Manager` argument in parity with other components.
- JSON dot paths within the following components have been updated to allow array-based operations: `awk` processor, `json` processor, `process_field` processor, `process_map` processor, `check_field` condition, `json_field` function interpolation, `s3` input, `dynamodb` output.

- The `sqs` output no longer attempts to send invalid attributes with payloads from metadata.
- During graceful shutdown Benthos now scales the attempt to propagate acks for sent messages with the overall system shutdown period.
- The `s3` and `sqs` inputs should now correctly log handles and codes from failed SQS message deletes and visibility timeout changes.

- New `message_group_id` and `message_deduplication_id` fields added to `sqs` output for supporting FIFO queues.

- Metadata field `gcp_pubsub_publish_time_unix` added to `gcp_pubsub` input.
- New `tcp` and `tcp_server` inputs.
- New `udp_server` input.
- New `tcp` and `udp` outputs.
- Metric paths `output.batch.bytes` and `output.batch.latency` added.
- New `rate_limit` processor.

- The `json` processor now correctly stores parsed `value` JSON when using `set` on the root path.

- The `sqs` input now adds some message attributes as metadata.
- Added field `delete_message` to `sqs` input.
- The `sqs` output now sends metadata as message attributes.
- New `batch_policy` field added to `memory` buffer.
- New `xml` processor.

- The `prometheus` metrics exporter adds quantiles back to timing metrics.

- Capped slices from lines reader are now enforced.

- The `json` processor now correctly honours a `null` value.

- Disabled `kinesis_balanced` input for WASM builds.

- Field `codec` added to `process_field` processor.
- Removed experimental status from sync responses components, which are now considered stable.

- Field `pattern_definitions` added to `grok` processor.

- Simplified serverless lambda main function body for improving plugin documentation.

- Fixed a bug where the `prepend` and `append` operators of the `text` processor could result in invalid messages when consuming line-based inputs.

- Field `clean_session` added to `mqtt` input.
- The `http_server` input now adds request query parameters to messages as metadata.

- Prevent concurrent access race condition on nested parallel `process_map` processors.

- New beta input `kinesis_balanced`.
- Field `profile` added to AWS components credentials config.

- Improved error messages attached to payloads that fail `process_dag` post mappings.
- New `redis_hash` output.
- New `sns` output.

- Allow extracting metric `rename` submatches into labels.
- Field `use_patterns` added to `redis_pubsub` input for subscribing to channels using glob-style patterns.

- Go API: It's now possible to specify a custom config unit test file path suffix.

- New rate limit and websocket message fields added to `http_server` input.
- The `http` processor now optionally copies headers from the response into the resulting message metadata.
- The `http` processor now sets a `http_status_code` metadata value into resulting messages (provided one is received).

- Go API: Removed experimental `Block` functions from the cache and rate limit packages.

- New (experimental) command flags `--test` and `--gen-test` added.
- All http client components output now set a metric `request_timeout`.

- All errors caught by processors should now be accessible via the `${!error}` interpolation function, rather than just flagged as `true`.

- The `process_field` processor now propagates metadata to the original payload with the `result_type` set to discard. This allows proper error propagation.

- Field `max_buffer` added to `subprocess` processor.

- The `subprocess` processor now correctly logs and recovers subprocess pipeline related errors (such as exceeding buffer limits).

- New `json_delete` function added to the `awk` processor.

- SQS output now correctly waits between retry attempts and escapes error loops during shutdown.
- Go API: Add `RunWithOpts` opt `OptOverrideConfigDefaults`.

- The `filter` and `filter_parts` config sections now correctly marshall when printing with `--all`.

- Go API: A new service method `RunWithOpts` has been added in order to accommodate service customisations with opt funcs.

- New interpolation function `error`.

- New `number` condition.
- New `number` processor.
- New `avro` processor.
- Operator `enum` added to `text` condition.
- Field `result_type` added to `process_field` processor for marshalling results into non-string types.
- Go API: Plugin APIs now allow nil config constructors.

- Registering plugins automatically adds plugin documentation flags to the main Benthos service.

- Output `http_client` is now able to propagate responses from each request back to inputs supporting sync responses.
- Added support for Gzip compression to `http_server` output sync responses.
- New `check_interpolation` condition.

- New `sync_response` output type, with experimental support added to the `http_server` input.
- SASL authentication fields added to all Kafka components.
- The `s3` input now sets `s3_content_encoding` metadata (when not using the download manager).
- New trace logging for the `rename`, `blacklist` and `whitelist` metric components to assist with debugging.

- Ability to combine sync and async responses in serverless distributions.

- The `insert_part`, `merge_json` and `unarchive` processors now propagate message contexts.

- JSON processors no longer escape `&`, `<`, and `>` characters by default.

- The `http` processor now preserves message metadata and contexts.
- Any `http` components that create requests with messages containing empty bodies now correctly function in WASM.

- New `fetch_buffer_cap` field for `kafka` and `kafka_balanced` inputs.
- Input `gcp_pubsub` now has the field `max_batch_count`.

- Reduced allocations under most JSON related processors.

- Streams mode API now logs linting errors.

- New interpolation function `batch_size`.

- Output `elasticsearch` no longer reports index not found errors on connect.

- Input reader no longer overrides message contexts for opentracing spans.

- Improved construction error messages for `broker` and `switch` input and outputs.

- Plugins that don't use a configuration structure can now return nil in their sanitise functions in order to have the plugin section omitted.

- The `kafka` and `kafka_balanced` inputs now set a `kafka_lag` metadata field on incoming messages.
- The `awk` processor now has a variety of typed `json_set` functions: `json_set_int`, `json_set_float` and `json_set_bool`.
- Go API: Add experimental function for blocking cache and ratelimit constructors.

- The `json` processor now defaults to an executable operator (clean).

- Add experimental function for blocking processor constructors.

- Core service logic has been moved into new package `service`, making it easier to maintain plugin builds that match upstream Benthos.

- Experimental support for WASM builds.

- Config linting now reports line numbers.

- Config interpolations now support escaping.

- API for creating `cache` implementations.
- API for creating `rate_limit` implementations.

This is a major version released due to a series of minor breaking changes, you can read the full migration guide here.

- Benthos now attempts to infer the `type` of config sections whenever the field is omitted, for more information please read this overview: Concise Configuration.
- Field `unsubscribe_on_close` of the `nats_stream` input is now `false` by default.

- The following commandline flags have been removed: `swap-envs`, `plugins-dir`, `list-input-plugins`, `list-output-plugins`, `list-processor-plugins`, `list-condition-plugins`.
- Package `github.com/Jeffail/benthos/lib/processor/condition` changed to `github.com/Jeffail/benthos/lib/condition`.
- Interface `types.Cache` now has `types.Closable` embedded.
- Interface `types.RateLimit` now has `types.Closable` embedded.
- Add method `GetPlugin` to interface `types.Manager`.
- Add method `WithFields` to interface `log.Modular`.

- Ensure `process_batch` processor gets normalised correctly.

- New `for_each` processor with the same behaviour as `process_batch`, `process_batch` is now considered an alias for `for_each`.

- The `sql` processor now executes across the batch, documentation updated to clarify.

- Corrected `result_codec` field in `sql` processor config.

- New `sql` processor.

- Using `json_map_columns` with the `dynamodb` output should now correctly store `null` and array values within the target JSON structure.

- New `encode` and `decode` scheme `hex`.

- Fixed potential panic when attempting an invalid HTTP client configuration.

- Benthos in streams mode no longer tries to load directory `/benthos/streams` by default.

- Field `json_map_columns` added to `dynamodb` output.

- JSON references are now supported in configuration files.

- The `hash` processor now supports `sha1`.
- Field `force_path_style_urls` added to `s3` components.
- Field `content_type` of the `s3` output is now interpolated.
- Field `content_encoding` added to `s3` output.

- The `benthos-lambda` distribution now correctly returns all message parts in synchronous execution.

- Docker builds now use a locally cached `vendor` for dependencies.
- All `s3` components no longer default to enforcing path style URLs.

- New output `drop_on_error`.
- Field `retry_until_success` added to `switch` output.

- Improved error and output logging for `subprocess` processor when the process exits unexpectedly.

- The main docker image is now based on busybox.

- Lint rule added for `batch` processors outside of the input section.

- Removed potential `benthos-lambda` panic on shut down.

- The `redis` cache no longer incorrectly returns a "key not found" error instead of connection errors.

- Changed docker tag format from `vX.Y.Z` to `X.Y.Z`.

- Output `broker` pattern `fan_out_sequential`.
- Output type `drop` for dropping all messages.
- New interpolation function `timestamp_utc`.

- New `benthos-lambda` distribution for running Benthos as a lambda function.

- New `s3` cache implementation.
- New `file` cache implementation.
- Operators `quote` and `unquote` added to the `text` processor.
- Configs sent via the streams mode HTTP API are now interpolated with environment variable substitutes.

- All AWS `s3` components now enforce path style syntax for bucket URLs. This improves compatibility with third party endpoints.

- New `parallel` processor.

- The `dynamodb` cache `get` call now correctly reports key not found versus general request error.

- New `sqs_bucket_path` field added to `s3` input.

- The `sqs` input now rejects messages that fail by resetting the visibility timeout.
- The `sqs` input no longer fails to delete consumed messages when the batch contains duplicate message IDs.

- The `metric` processor no longer mixes label keys when processing across parallel pipelines.

- Comma separated `kafka` and `kafka_balanced` address and topic values are now trimmed for whitespace.

- Field `max_processing_period` added to `kafka` and `kafka_balanced` inputs.

- Compaction intervals are now respected by the `memory` cache type.

- Improved `kafka_balanced` consumer group connection behaviour.

- More `kafka_balanced` input config fields for consumer group timeouts.

- New config interpolation function `uuid_v4`.

- The `while` processor now correctly checks conditions against the first batch of the result of the last processor loop.

- Field `max_loops` added to `while` processor.

- New `while` processor.

- New `cache` processor.
- New `all` condition.
- New `any` condition.

- Function interpolation for field `subject` added to `nats` output.

- Switched underlying `kafka_balanced` implementation to sarama consumer.

- Always allow acknowledgement flush during graceful termination.

- Removed unnecessary subscription check from `gcp_pubsub` input.

- New field `fields` added to `log` processor for structured log output.

- Function interpolation for field `channel` added to `redis_pubsub` output.

- Field `byte_size` added to `split` processor.

- Field `dependencies` of children of the `process_dag` processor now correctly parsed from config files.
- Field `push_job_name` added to `prometheus` metrics type.
- New `rename` metrics target.

- Removed potential race condition in `process_dag` with raw bytes conditions.

- Field `max_batch_count` added to `s3` input.
- Field `max_number_of_messages` added to `sqs` input.

- New `blacklist` metrics target.
- New `whitelist` metrics target.
- Initial support for opentracing, including a new `tracer` root component.

- Improved generated metrics documentation and config examples.

- The `nats_stream` input now has a field `unsubscribe_on_close` that when disabled allows durable subscription offsets to persist even when all connections are closed.
- Metadata field `nats_stream_sequence` added to `nats_stream` input.

- The `subprocess` processor no longer sends unexpected empty lines when messages end with a line break.

- New `switch` processor.

- Printing configs now sanitises resource sections.

- The `headers` field in `http` configs now detects and applies `host` keys.

- New `json_documents` format added to the `unarchive` processor.
- Field `push_interval` added to the `prometheus` metrics type.

- Brokers now correctly parse configs containing plugin types as children.

- Output broker types now correctly allocate nested processors for `fan_out` and `try` patterns.

- JSON formatted loggers now correctly escape error messages with line breaks.

- Improved error logging for `s3` input download failures.
- More metadata fields copied to messages from the `s3` input.
- Field `push_url` added to the `prometheus` metrics target.

- Resources (including plugins) that implement `Closable` are now shutdown cleanly.

- New `json_array` format added to the `archive` and `unarchive` processors.
- Preliminary support added to the resource manager API to allow arbitrary shared resource plugins.

- The `s3` input now caps and iterates batched SQS deletes.

- The `archive` processor now interpolates the `path` per message of the batch.

- Fixed environment variable interpolation when combined with embedded function interpolations.

- Fixed break down metric indexes for input and output brokers.

- Input `s3` can now toggle the use of a download manager, switching off now downloads metadata from the target file.
- Output `s3` now writes metadata to the uploaded file.
- Operator `unescape_url_query` added to `text` processor.

- The `nats_stream` input and output now actively attempt to recover stale connections.
- The `awk` processor prints errors and flags failure when the program exits with a non-zero status.

- The `subprocess` processor now attempts to read all flushed stderr output from a process when it fails.

- Function `print_log` added to `awk` processor.

- The `awk` processor function `json_get` no longer returns string values with quotes.

- Processor `awk` codecs changed.

- Output type `sqs` now supports batched message sends.

- Functions `json_get` and `json_set` added to `awk` processor.

- Functions `timestamp_format`, `timestamp_format_nano`, `metadata_get` and `metadata_set` added to `awk` processor.

- New `sleep` processor.
- New `awk` processor.

- Converted all integer based time period fields to string based, e.g. `timeout_ms: 5000` would now be `timeout: 5s`. This may potentially be disruptive but the `--strict` flag should catch all deprecated fields in an existing config.

- Renamed `max_batch_size` to `max_batch_count` for consistency.

- New `max_batch_size` field added to `kafka`, `kafka_balanced` and `amqp` inputs. This provides a mechanism for creating message batches optimistically.

- New `subprocess` processor.

- API: The `types.Processor` interface has been changed in order to add lifetime cleanup methods (added `CloseAsync` and `WaitForClose`). For the overwhelming majority of processors these functions will be no-ops.
- More consistent `condition` metrics.

- New `try` and `catch` processors for improved processor error handling.

- All processors now attach error flags.

- S3 input is now more flexible with SNS triggered SQS events.

- Processor metrics have been made more consistent.

- New endpoint `/ready` that returns 200 when both the input and output components are connected, otherwise 503. This is intended to be used as a readiness probe.

- Large simplifications to all metrics paths.

- Fully removed the previously deprecated `combine` processor.
- Input and output plugins updated to support new connection health checks.

- Field `role_external_id` added to all S3 credential configs.
- New `processor_failed` condition and improved processor error handling which can be read about here.

- New `content_type` field for the `s3` output.

- New `group_by_value` processor.

- Lint errors are logged (level INFO) during normal Benthos operation.

- New `--strict` command flag which causes Benthos to abort when linting errors are found in a config file.

- New `--lint` command flag for linting config files.

- The `s3` output now attempts to batch uploads.
- The `s3` input now exposes errors in deleting SQS messages during acks.

- Resource based conditions no longer benefit from cached results. In practice this optimisation was easy to lose in config and difficult to maintain.

- Metadata is now sent to `kafka` outputs.
- New `max_inflight` field added to the `nats_stream` input.

- Fixed relative path trimming for streams from file directories.

- The `dynamodb` cache and output types now set TTL columns as unix timestamps.

- New `escape_url_query` operator for the `text` processor.

- Removed submatch indexes in the `text` processor `find_regexp` operator and added documentation for expanding submatches in the `replace_regexp` operator.

- Allow submatch indexes in the `find_regexp` operator for the `text` processor.

- New `find_regexp` operator for the `text` processor.

- New `aws` fields to the `elasticsearch` output to allow AWS authentication.

- Add max-outstanding fields to `gcp_pubsub` input.
- Add new `dynamodb` output.

- The `s3` output now calculates `path` field function interpolations per message of a batch.

- New `set` operator for the `text` processor.

- New `cache` output type.

- New `group_by` processor.
- Add bulk send support to `elasticsearch` output.

- New `content` interpolation function.

- New `redis` cache type.

- The `process_map` processor now allows map target path overrides when a target is the parent of another target.

- Fields `pipeline` and `sniff` added to the `elasticsearch` output.
- Operators `to_lower` and `to_upper` added to the `text` processor.

- Field `endpoint` added to all AWS types.

- Allow `log` config field `static_fields` to be fully overridden.

- New `process_dag` processor.
- New `static_fields` map added to log config for setting static log fields.

- JSON log field containing component path moved from `@service` to `component`.

- New `gcp_pubsub` input and outputs.
- New `log` processor.
- New `lambda` processor.

- New `process_batch` processor.
- Added `count` field to `batch` processor.
- Metrics for `kinesis` output throttles.

- The `combine` processor is now considered DEPRECATED, please use the `batch` processor instead.
- The `batch` processor field `byte_size` is now set at 0 (and therefore ignored) by default. A log warning has been added in case anyone was relying on the default.

- New `rate_limit` resource with a `local` type (see the sketch below).
- Field `rate_limit` added to `http` based processors, inputs and outputs.
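A rough sketch of declaring a `local` rate limit resource; the label and values are placeholders, and the field names follow the current documentation, so the exact spelling at the time of this release may have differed:

```yaml
resources:
  rate_limits:
    foolimit:
      local:
        count: 500
        interval: 1s
```

An `http` based processor, input or output then points at the resource by name with `rate_limit: foolimit`.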
- New
prefetch_count
field added tonats
input.
- New
bounds_check
condition type. - New
check_field
condition type. - New
queue
field added tonats
input. - Function interpolation for the
topic
field of thensq
output.
- The
nats
input now defaults to joining a queue.
- The redundant
nsq
output fieldmax_in_flight
has been removed. - The
files
output now interpolates paths per message part of a batch.
- New
hdfs
input and output. - New
switch
output. - New
enum
andhas_prefix
operators for themetadata
condition. - Ability to set
tls
client certificate fields directly.
- New
retry
output. - Added
regex_partial
andregex_exact
operators to themetadata
condition.
- The
kinesis
output fieldretries
has been renamedmax_retries
in order to expose the difference in its zero value behaviour (endless retries) versus otherretry
fields (zero retries).
- New
endpoint
field added tokinesis
input. - New
dynamodb
cache type.
- Function interpolation for the
topic
field of thekafka
output. - New
target_version
field for thekafka_balanced
input. - TLS config fields for client certificates.
- TLS config field
cas_file
has been renamedroot_cas_file
.
- New
zip
option for thearchive
andunarchive
processors.
- The
kinesis
output type now supports batched sends and per message interpolation.
- New
metric
processor.
- New
redis_streams
input and output.
- New
kinesis
input and output.
- The
index
field of theelasticsearch
output can now be dynamically set using function interpolation. - New
hash
processor.
- API: The
metrics.Type
interface has been changed in order to add labels.
- Significant restructuring of
amqp
inputs and outputs. These changes should be backwards compatible for existing pipelines, but changes the way in which queues, exchanges and bindings are declared using these types.
- New durable fields for
amqp
input and output types.
- Improved statsd client with better cached aggregation.
- New
tls
fields foramqp
input and output types.
- New
type
field forelasticsearch
output.
- New
throttle
processor.
- New
less_than
andgreater_than
operators formetadata
condition.
- New
metadata
condition type. - More metadata fields for
kafka
input. - Field
commit_period_ms
forkafka
andkafka_balanced
inputs for specifying a commit period.
- New
retries
field tos3
input, to cap the number of download attempts made on the same bucket item. - Added metadata based mechanism to detect final message from a
read_until
input. - Added field to
split
processor for specifying target batch sizes.
- Metadata fields are now per message part within a batch.
- New
metadata_json_object
function interpolation to return a JSON object of metadata key/value pairs.
- The
metadata
function interpolation now allows part indexing and no longer returns a JSON object when no key is specified, this behaviour can now be done using themetadata_json_object
function.
- Fields for the
http
processor to enable parallel requests from message batches.
- Broker level output processors are now applied before the individual output processors.
- The
dynamic
input and output HTTP paths for CRUD operations are now/inputs/{input_id}
and/outputs/{output_id}
respectively. - Removed deprecated
amazon_s3
,amazon_sqs
andscalability_protocols
input and output types. - Removed deprecated
json_fields
field from thededupe
processor.
- Add conditions to
process_map
processor.
- TLS config fields have been cleaned up for multiple types. This affects the
kafka
,kafka_balanced
andhttp_client
input and output types, as well as thehttp
processor type.
- New
delete_all
anddelete_prefix
operators formetadata
processor. - More metadata fields extracted from the AMQP input.
- HTTP clients now support function interpolation on the URL and header values, this includes the
http_client
input and output as well as thehttp
processor.
- New
key
field added to thededupe
processor, allowing you to deduplicate using function interpolation. This deprecates thejson_paths
array field.
- New `s3` and `sqs` input and output types. These replace the now deprecated `amazon_s3` and `amazon_sqs` types respectively, which will eventually be removed.
- New `nanomsg` input and output types. These replace the now deprecated `scalability_protocols` types, which will eventually be removed.
- Metadata fields are now collected from MQTT input.
- AMQP output writes all metadata as headers.
- AMQP output field `key` now supports function interpolation.
- New `metadata` processor and configuration interpolation function.
- New config interpolator function `json_field` for extracting parts of a JSON message into a config value.
- Log level config field no longer stutters; `logger.log_level` is now `logger.level`.
- Ability to create batches via conditions on message payloads in the `batch` processor.
- New `--examples` flag for generating specific examples from Benthos.
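  A rough sketch of condition based batching, assuming the `batch` processor accepts a `condition` block and that the `text` condition takes `operator` and `arg` fields:

  ```yaml
  - batch:
      condition:
        text:
          operator: contains
          arg: "END_OF_BATCH" # flush the batch when a payload contains this token
  ```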
- New `text` processor.
- Processor `process_map` replaced field `strict_premapping` with `premap_optional`.
- New `process_field` processor.
- New `process_map` processor.
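  A heavily simplified sketch of the idea: `premap` lifts payload paths into a temporary payload, child `processors` mutate it, and `postmap` maps results back. All paths here are illustrative and the exact mapping semantics are assumptions:

  ```yaml
  - process_map:
      premap:
        content: doc.body
      processors:
        - text:
            operator: to_upper # assumes the text processor offers such an operator
      postmap:
        doc.body_upper: content
  ```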
- Removed mapping fields from the `http` processor; this behaviour has been put into the new `process_map` processor instead.
- Renamed `content` condition type to `text` in order to clarify its purpose.
- Latency metrics for caches.
- TLS options for `kafka` and `kafka_partitions` inputs and outputs.
- Metrics for items configured within the `resources` section are now namespaced under their identifier.
- New `copy` and `move` operators for the `json` processor.
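  A minimal sketch, assuming the processor takes `operator`, `path` and `value` fields and that `move` treats `value` as the destination path:

  ```yaml
  - json:
      operator: move
      path: foo.bar # source path
      value: baz.qux # assumed destination path
  ```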
- Metrics for recording `http` request latencies.
- Improved and rearranged fields for the `http_client` input and output.
- More compression and decompression targets.
- New `lines` option for archive/unarchive processors.
- New `encode` and `decode` processors.
- New `period_ms` field for the `batch` processor.
- New `clean` operator for the `json` processor.
- New `http` processor, where payloads can be sent to arbitrary HTTP endpoints and the result constructed into a new payload.
- New `inproc` inputs and outputs for linking streams together.
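  A rough example of the processor, assuming it nests its client options under a `request` block with `url` and `verb` fields:

  ```yaml
  - http:
      request:
        url: http://localhost:8080/process
        verb: POST
  ```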
- New streams endpoint `/streams/{id}/stats` for obtaining JSON metrics for a stream.
- Allow comma separated topics for `kafka_balanced`.
- Support for the PATCH verb on the streams mode `/streams/{id}` endpoint.
- Sweeping changes were made to the environment variable configuration file. This file is now auto-generated along with its supporting document. This change will impact the Docker image.
- New `filter_parts` processor for filtering individual parts of a message batch.
- New field `open_message` for the `websocket` input.
- No longer setting a default input processor.
- New `root_path` field for service wide `http` config.
- New `regexp_exact` and `regexp_partial` content condition operators.
- The `statsd` metrics target will now periodically report connection errors.
- The `json` processor will now `append` array values in expanded form.
- More granular config options in the `http_client` output for controlling retry logic.
- New `try` pattern for the output `broker` type, which can be used in order to configure fallback outputs.
- New `json` processor, which replaces `delete_json`, `select_json` and `set_json`.
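  A sketch of a fallback chain with the `try` pattern, assuming the older explicit `type` config style (the child outputs are arbitrary examples):

  ```yaml
  output:
    type: broker
    broker:
      pattern: try # each output is only attempted if the previous one fails
      outputs:
        - type: http_client
          http_client:
            url: http://localhost:8080/post
        - type: file
          file:
            path: ./fallback.log
  ```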
- The `streams` API endpoints have been changed to become more "RESTy".
- Removed the `delete_json`, `select_json` and `set_json` processors; please use the `json` processor instead.
- New `grok` processor for creating structured objects from unstructured data.
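  A minimal sketch, assuming the processor takes a list of `patterns` and ships with the standard grok pattern library:

  ```yaml
  - grok:
      patterns:
        - '%{COMMONAPACHELOG}' # parse Apache access logs into structured fields
  ```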
- New `files` input type for reading multiple files as discrete messages.
- Increase default `max_buffer` for `stdin`, `file` and `http_client` inputs.
- Command flags `--print-yaml` and `--print-json` changed to provide sanitised outputs unless accompanied by the new `--all` flag.
- Badger based buffer option has been removed.
- New metrics wrapper for more basic interface implementations.
- New `delete_json` processor.
- New field `else_processors` for the `conditional` processor.
- New websocket endpoint for the `http_server` input.
- New websocket endpoint for the `http_server` output.
- New `websocket` input type.
- New `websocket` output type.
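  A small sketch of the input, assuming a `url` field alongside the `open_message` field mentioned above:

  ```yaml
  input:
    websocket:
      url: ws://localhost:4195/ws
      open_message: "hello" # sent once upon connecting
  ```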
- Goreleaser config for generating release packages.
- Back to using Scratch as base for Docker image, instead taking ca-certificates from the build image.
- New `batch` processor for combining payloads up to a number of bytes.
- New `conditional` processor, which allows you to configure a chain of processors to only be run if the payload passes a `condition` (see the sketch after this list).
- New `--stream` mode features:
  - POST verb for `/streams` path now supported.
  - New `--streams-dir` flag for parsing a directory of stream configs.
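  A rough sketch of the `conditional` processor, assuming the older explicit `type` style for conditions and processors (the condition and child processors are arbitrary examples):

  ```yaml
  - conditional:
      condition:
        type: text
        text:
          operator: contains
          arg: foo
      processors:
        - type: text
          text:
            operator: to_upper
      else_processors: [] # see the `else_processors` field noted above
  ```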
- The `condition` processor has been renamed `filter`.
- The `custom_delimiter` fields in any line reader types (`file`, `stdin`, `stdout`, etc.) have been renamed `delimiter`, where the behaviour is the same.
- Now using Alpine as base for Docker image, includes ca-certificates.