RFC 2: ExecutionContext #15302

Draft. Wants to merge 3 commits into base: master.
spec/std/spec_helper.cr (2 changes: 1 addition & 1 deletion)
@@ -43,7 +43,7 @@ def spawn_and_check(before : Proc(_), file = __FILE__, line = __LINE__, &block :

# This is a workaround to ensure the "before" fiber
# is unscheduled. Otherwise it might stay alive running the event loop
spawn(same_thread: true) do
spawn(same_thread: !{{flag?(:execution_context)}}) do
while x.get != 2
Fiber.yield
end
spec/std/thread_spec.cr (6 changes: 5 additions & 1 deletion)
@@ -46,7 +46,11 @@ pending_interpreted describe: Thread do

it "names the thread" do
{% if flag?(:execution_context) %}
Thread.current.name.should eq("DEFAULT")
{% if flag?(:mt) %}
Thread.current.name.should match(/^DEFAULT-\d+$/)
{% else %}
Thread.current.name.should eq("DEFAULT")
{% end %}
{% else %}
Thread.current.name.should be_nil
{% end %}
spec/support/mt_abort_timeout.cr (2 changes: 1 addition & 1 deletion)
@@ -5,7 +5,7 @@ private SPEC_TIMEOUT = 15.seconds
Spec.around_each do |example|
done = Channel(Exception?).new

spawn(same_thread: true) do
spawn(same_thread: !{{flag?(:execution_context)}}) do
begin
example.run
rescue e
src/crystal/system/thread.cr (12 changes: 12 additions & 0 deletions)
@@ -179,6 +179,18 @@ class Thread
Crystal::System::Thread.sleep(time)
end

# Delays execution for a brief moment.
@[NoInline]
def self.delay(backoff : Int32) : Int32
if backoff < 7
backoff.times { Intrinsics.pause }
backoff &+ 1
else
Thread.yield
0
end
end

# Returns the Thread object associated to the running system thread.
def self.current : Thread
Crystal::System::Thread.current_thread
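The Thread.delay helper added above is a small spin-then-yield backoff primitive: while the counter is below 7 it executes that many pause instructions and returns the incremented counter, otherwise it yields the thread and resets the counter to zero. A minimal sketch of the intended calling pattern, assuming a hypothetical ready? predicate (only Thread.delay itself comes from this patch):

# Sketch only: busy-wait on a condition with a growing backoff.
backoff = 0
until ready? # hypothetical predicate, not part of the patch
  # pauses `backoff` times and returns backoff + 1 while backoff < 7,
  # otherwise yields the thread and returns 0
  backoff = Thread.delay(backoff)
end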
src/crystal/tracing.cr (10 changes: 5 additions & 5 deletions)
@@ -59,7 +59,7 @@ module Crystal

n = value.size.clamp(..remaining)
value.to_unsafe.copy_to(@buf.to_unsafe + pos, n)
@size = pos + n
@size = pos &+ n
end

def write(value : String) : Nil
@@ -71,7 +71,7 @@
i = 0
value.each_byte do |byte|
chars[i] = byte
i += 1
i &+= 1
end
write chars.to_slice[0, i]
end
@@ -106,7 +106,7 @@
end

def write(value : Time::Span) : Nil
write(value.seconds * Time::NANOSECONDS_PER_SECOND + value.nanoseconds)
write(value.seconds &* Time::NANOSECONDS_PER_SECOND &+ value.nanoseconds)
end

def write(value : Bool) : Nil
@@ -204,7 +204,7 @@ module Crystal
private def self.each_token(slice, delim = ',', &)
while e = slice.index(delim.ord)
yield slice[0, e]
slice = slice[(e + 1)..]
slice = slice[(e &+ 1)..]
end
yield slice[0..] unless slice.size == 0
end
@@ -268,7 +268,7 @@ module Crystal
begin
yield
ensure
duration = System::Time.ticks - time
duration = System::Time.ticks &- time
Tracing.log(section.to_id, operation, time, **metadata, duration: duration)
end
else
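The tracing changes above only swap regular arithmetic for Crystal's wrapping operators (&+, &-, &*), which skip the overflow check and wrap around instead of raising. A short illustration of the difference, separate from the patch:

# Illustration only: &+ wraps on overflow instead of raising OverflowError.
x = Int32::MAX
# x + 1     # would raise OverflowError
p x &+ 1    # => -2147483648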
src/fiber/execution_context.cr (6 changes: 5 additions & 1 deletion)
@@ -20,7 +20,11 @@ module Fiber::ExecutionContext

# :nodoc:
def self.init_default_context : Nil
@@default = SingleThreaded.default
{% if flag?(:mt) %}
@@default = MultiThreaded.default(default_workers_count)
{% else %}
@@default = SingleThreaded.default
{% end %}
@@monitor = Monitor.new
end

src/fiber/execution_context/monitor.cr (74 changes: 47 additions & 27 deletions)
@@ -1,47 +1,66 @@
module Fiber::ExecutionContext
# :nodoc:
class Monitor
DEFAULT_EVERY = 5.seconds
struct Timer
def initialize(@every : Time::Span)
@last = Time.monotonic
end

def elapsed?(now)
ret = @last + @every <= now
@last = now if ret
ret
end
end

DEFAULT_EVERY = 10.milliseconds
DEFAULT_COLLECT_STACKS_EVERY = 5.seconds

@thread : Thread?
def initialize(
@every = DEFAULT_EVERY,
collect_stacks_every = DEFAULT_COLLECT_STACKS_EVERY,
)
@collect_stacks_timer = Timer.new(collect_stacks_every)

def initialize(@every = DEFAULT_EVERY)
# FIXME: should be an ExecutionContext::Isolated instead of bare Thread?
# it might print to STDERR (requires evloop) for example; it may also
# allocate memory, for example to raise an exception (gc can run in the
# thread, running finalizers) which is probably not an issue.
@thread = uninitialized Thread
@thread = Thread.new(name: "SYSMON") { run_loop }
end

# TODO: slow parallelism (MT): instead of actively trying to wakeup, which
# can be expensive and a source of contention, leading to waste more time
# than running the enqueued fiber(s) directly, the monitor thread could
# check the queues of MT schedulers every some milliseconds and decide to
# start or wake threads.
# TODO: slow parallelism: instead of actively trying to wake threads up, which
# can be expensive and a source of contention, wasting more time than running
# the enqueued fiber(s) directly, the monitor thread could check the queues of
# MT schedulers and decide to start/wake threads; it could also complain that
# a fiber has been asked to yield numerous times.
#
# TODO: maybe yield (ST/MT): detect schedulers that have been stuck running
# the same fiber since the previous iteration (check current fiber &
# scheduler tick to avoid ABA issues), then mark the fiber to trigger a
# cooperative yield, for example, `Fiber.maybe_yield` could be called at
# potential cancellation points that would otherwise not need to block now
# (IO, mutexes, schedulers, manually called in loops, ...); this could lead
# fiber execution time be more fair, and we could also warn when a fiber has
# been asked to yield but still hasn't after N iterations.
# TODO: detect schedulers that have been stuck running the same fiber since
# the previous iteration (check current fiber & scheduler tick to avoid ABA
# issues), then mark the fiber to trigger a cooperative yield, for example,
# `Fiber.maybe_yield` could be called at potential cancellation points that
# would otherwise not need to block now (IO, mutexes, schedulers, manually
# called in loops, ...), which could make fiber execution time fairer.
#
# TODO: event loop starvation: if an execution context didn't have the
# opportunity to run its event-loop since N iterations, then the monitor
# thread could run it; it would avoid a set of fibers to always resume
# TODO: if an execution context didn't have the opportunity to run its
# event-loop since the previous iteration, then the monitor thread may
# choose to run it; this would prevent a set of fibers from always resuming
# themselves at the expense of pending events.
#
# TODO: run GC collections on "low" application activity? when we don't
# allocate the GC won't try to collect memory by itself, which after a peak
# usage can lead to keep memory allocated when it could be released to the
# OS.
# TODO: run the GC on low application activity?
private def run_loop : Nil
every do |now|
collect_stacks
collect_stacks if @collect_stacks_timer.elapsed?(now)
end
end

# Executes the block at exact intervals (depending on the OS scheduler
# precision and overall OS load), without counting the time to execute the
# block.
#
# OPTIMIZE: exponential backoff (and/or park) when all schedulers are
# pending to reduce CPU usage; thread wake up would have to signal the
# monitor thread.
private def every(&)
remaining = @every

@@ -51,11 +51,12 @@ module Fiber::ExecutionContext
yield(now)
remaining = (now + @every - Time.monotonic).clamp(Time::Span.zero..)
rescue exception
Crystal.print_error_buffered("BUG: %s#every crashed", self.class.name, exception: exception)
Crystal.print_error_buffered("BUG: %s#every crashed",
self.class.name, exception: exception)
end
end

# Iterates each execution context and collects unused fiber stacks.
# Iterates each ExecutionContext and collects unused Fiber stacks.
#
# OPTIMIZE: should maybe happen during GC collections (?)
private def collect_stacks
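The Timer struct introduced above stores a monotonic timestamp and, through elapsed?(now), reports whether its interval has passed, resetting itself when it has; the monitor loop wakes every 10 milliseconds but only collects stacks once the 5-second timer elapses. A rough sketch of that pattern, assuming direct use of the internal Timer type and a placeholder periodic_work method (neither is part of the patch's public API):

# Sketch only: drive a periodic action from a polling loop via Timer.
timer = Fiber::ExecutionContext::Monitor::Timer.new(5.seconds)
loop do
  now = Time.monotonic
  periodic_work if timer.elapsed?(now) # fires roughly once per 5 seconds
  sleep 10.milliseconds
end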