Server Migration
websocket.zig is being redesigned to use epoll (Linux) or kqueue (macOS/BSD) where possible, and a naive thread-per-connection model where not (Windows). The API is changing both to accommodate this redesign and to hopefully improve usability.
websocket.zig exposes a comptime-safe way to determine whether it is running in blocking or nonblocking mode.
Instead of using `websocket.listen(H, allocator, &context, config)`, you must now create a `websocket.Server(H)` instance:
```zig
var server = websocket.Server(Handler).init(allocator, config);
defer server.deinit();
server.listen(&context); // blocking
```
A benefit of the above API is that `server.stop()` can now be called to stop the server. It is safe to call `server.stop()` from a separate thread.
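As a minimal sketch of how that might be wired up (the `Handler`, `config`, and `context` here are assumed to be defined as in the surrounding examples, and the shutdown condition is elided):

```zig
const std = @import("std");
const websocket = @import("websocket");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const allocator = gpa.allocator();

    var server = websocket.Server(Handler).init(allocator, config);
    defer server.deinit();

    // server.stop() is documented as safe to call from a separate thread,
    // so a watcher thread can shut the server down while listen() blocks.
    const stopper = try std.Thread.spawn(.{}, struct {
        fn run(s: *websocket.Server(Handler)) void {
            // ... wait for a shutdown condition (signal, admin command, ...) ...
            s.stop();
        }
    }.run, .{&server});
    stopper.detach();

    server.listen(&context); // blocks until stop() is called
}
```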
Previously, your `Handler` had to expose an `init` function and `handle` and `close` methods. The type of messages passed to `handle` depended on the `handle_ping`, `handle_pong` and `handle_close` configuration values.
Now websocket.zig will infer the behavior from your `Handler`.
`init` is unchanged and still required.
`close` behaves like the previous version, but is now optional. If it is defined, it is, as before, guaranteed to be called exactly once.
`handle` is renamed to `clientMessage`. This will only be called for text and binary messages. Four overloads are supported:
```zig
pub fn clientMessage(h: *H, data: []const u8) !void
pub fn clientMessage(h: *H, data: []const u8, tpe: ws.Message.TextType) !void
pub fn clientMessage(h: *H, allocator: Allocator, data: []const u8) !void
pub fn clientMessage(h: *H, allocator: Allocator, data: []const u8, tpe: ws.Message.TextType) !void
```
The `tpe` parameter is only useful if you care whether a `text` or `binary` message was sent. websocket.zig does not enforce the RFC's requirement that text messages be valid UTF-8 (it's expensive to do, and if a client cares, it can do it itself).
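A handler that distinguishes the two might look like the following sketch. The `.text`/`.binary` variant names and the `h.conn`/`writeBin` calls are assumptions based on the library's typical API, not confirmed by this page:

```zig
// Echo messages back, preserving their frame type.
pub fn clientMessage(h: *Handler, data: []const u8, tpe: ws.Message.TextType) !void {
    switch (tpe) {
        .text => try h.conn.write(data),      // echo as a text frame
        .binary => try h.conn.writeBin(data), // echo as a binary frame
    }
}
```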
The `allocator` acts like an arena allocator for the specific function call. It is actually a thread-local buffer with a fallback to an arena allocator, so it is particularly efficient. It is also particularly useful for the changes to `conn.writeBuffer`.
If a public method named `clientPing` is defined, it will be called for any ping message received from the client. If not defined, websocket.zig will automatically respond with an appropriate `pong`.
```zig
pub fn clientPing(h: *Handler, data: []const u8) !void
```
If a public method named `clientPong` is defined, it will be called for any pong message received from the client. If not defined, the message is ignored.
```zig
pub fn clientPong(h: *Handler) !void
```
If a public method named `clientClose` is defined, it will be called for any close message received from the client. If not defined, websocket.zig will echo the close message back to the client. You almost certainly don't want to define this method; you almost certainly want the `close` method instead.
```zig
pub fn clientClose(h: *Handler, data: []const u8) !void
```
The `afterInit` method remains optional. As a reminder, `afterInit` is called after the handshake response has been sent and is the first time it is safe to use `conn.write()`.
It now supports two overloads:
```zig
pub fn afterInit(h: *Handler) !void
pub fn afterInit(h: *Handler, ctx: anytype) !void
```
This is the same ctx passed to `init` and is meant for cases where the `ctx` is only needed when the initial connection is established.
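For instance, a handler might consult `ctx` once at connection time and never store it. This is a hypothetical sketch: the `send_greeting` field is made up for illustration, and it assumes the handler keeps its connection as `h.conn` (`conn.write()` is safe here, per the above):

```zig
pub fn afterInit(h: *Handler, ctx: anytype) !void {
    // ctx is only needed at connection setup, so it is not stored on the handler
    if (ctx.send_greeting) {
        try h.conn.write("welcome");
    }
}
```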
`conn.writeBuffer` now takes an allocator. When used with the optional allocator available to `clientMessage`, `deinit` does not need to be called:
```zig
pub fn clientMessage(h: *Handler, allocator: Allocator, data: []const u8) !void {
    var wb = conn.writeBuffer(.text, allocator);
    try std.fmt.format(wb.writer(), "it's over {d}!!!", .{9000});
    try wb.flush();
}
```
The configuration structure has changed significantly. Refer to the readme for a full list of configuration options. Of particular interest:
- `thread_pool.count` - the number of threads used to process messages. These are the threads which parse client messages and invoke your handler's methods.
- `thread_pool.buffer_size` - if you are using the optional allocator passed to the `clientMessage` overload, consider setting this to a value appropriate for your allocations. The allocator is a special fallback allocator which uses a thread-local buffer (of `buffer_size` size) and an arena allocator. This requires `thread_pool.count * thread_pool.buffer_size` memory, which is relatively small given how efficient it is for any dynamic allocations.
- `buffers.pool` - by default, each connection gets its own buffer to parse messages. Larger messages use the large buffer pool (`buffers.large_count` and `buffers.large_size`). The size of this default buffer is `buffers.size`. A per-connection buffer is particularly well suited when you expect clients to send a constant stream of messages. Obviously, this will take `# of connections * buffers.size` memory. If you're expecting clients to send infrequent messages, you can use a small buffer pool by setting `buffers.pool`, which can result in lower memory usage at the cost of a bit of overhead.
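Putting those options together, a configuration might be sketched like this. The numbers are purely illustrative, the nested-struct layout is inferred from the dotted option names above, and other fields (address, port, etc.) are omitted:

```zig
var server = websocket.Server(Handler).init(allocator, .{
    .thread_pool = .{
        // threads that parse client messages and invoke handler methods
        .count = 4,
        // thread-local buffer backing the optional clientMessage allocator;
        // total memory is roughly count * buffer_size
        .buffer_size = 32_768,
    },
    .buffers = .{
        .size = 4096,         // default message-parsing buffer size
        .pool = 64,           // opt into a shared pool instead of per-connection buffers
        .large_size = 65_536, // for messages larger than `size`
        .large_count = 8,
    },
});
```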
For a given handler, there will only be one call to `clientMessage`, `clientPing`, `clientPong`, `clientClose` or `close` at a time. As before, methods on the `conn` interface, including `close()`, are thread-safe.
Closing used to involve calling `conn.writeClose()` (or a variant) and then `conn.close()`. Even then, the connection would only be closed at some point after the call was made.
Now `conn.close(opts)` combines everything and will close the connection immediately. Default `opts` are: `.{.code = 1000, .reason = ""}`.
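For example, to close immediately with a non-default code and reason (the `try` assumes `close` can fail on the underlying write, which this page does not confirm):

```zig
// Close the connection right away with a "going away" status.
try conn.close(.{ .code = 1001, .reason = "going away" });
```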
`expectText()` was changed to `expectMessage(type, data)`, which is more generic for asserting any type of message.
`expectClose()` was added.