This plugin allows you to produce messages to Kafka from HSL.
**Important!**

Kafka uses an internal in-memory queue. If the application is restarted forcefully, queued messages may be lost. During a graceful shutdown, the system waits up to 60 seconds to drain the queues. If you need guaranteed transaction safety, it is recommended to write to an fsynced log file (e.g. using halon-extras-logger) and ship it with a tool such as filebeat.
Follow the instructions in our manual to add our package repository, then run one of the commands below.

Ubuntu (apt):

```
apt-get install halon-extras-kafka
```

RHEL (yum):

```
yum install halon-extras-kafka
```
This plugin can be controlled using the halonctl tool. The following commands are available through `halonctl plugin command kafka ...`.
| Command | Description |
|---|---|
| `dump <queue>` | Show debug information |
Example

```
halonctl plugin command kafka dump kafka1
```
For the configuration schema, see `kafka.schema.json`. Below is a sample configuration.
```
plugins:
  - id: kafka
    config:
      queues:
        - id: kafka1
          config:
            bootstrap.servers: kafka:9092
            # queue.buffering.max.messages: 100000
```
These functions need to be imported from the `extras://kafka` module path.
Params

- **id** (string, required) the id of the queue
- **topic** (string, required) the topic
- **value** (string) the value to be sent
- **key** (string or none) the key to be sent
- **headers** (array) a list of headers to be sent
- **partition** (number) the partition to use; specify -1 to use the configured partitioner
- **block** (boolean) whether the function should block when the internal message queue is full; the default is false
Returns

An array, currently containing `errno` and `errstr` only in case of errors. The most common error to handle is queue full (error -184).
```
import { kafka_producer } from "extras://kafka";

echo kafka_producer("kafka1", "test-topic", "myvalue", "mykey", [
    "foo" => "bar",
    "biz" => "buz",
], -1);
```
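Since the most common failure is a full internal queue (error -184), callers will typically inspect the returned array before deciding whether to retry or drop the message. Below is a minimal HSL sketch based on the return shape described above; the retry-or-drop branching is illustrative, not a prescribed pattern.

```
import { kafka_producer } from "extras://kafka";

$result = kafka_producer("kafka1", "test-topic", "myvalue");
if (isset($result["errno"])) {
    if ($result["errno"] == -184) {
        // Internal queue is full; the message was not enqueued.
        // A caller could retry later, or fall back to an fsynced
        // log file as suggested in the note above.
    } else {
        // Any other producer error; errstr carries the description.
        echo "kafka error: " . $result["errstr"];
    }
}
```

Alternatively, passing `true` for the `block` parameter makes the call wait for room in the queue instead of returning the queue-full error.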