Multithreading

General

Scheduling model

Type            Scheduling    Management Level   Parallelism
OS Threads      Preemptive    Kernel             Yes
Green Threads   Cooperative   Runtime/VM         No
Fibers          Cooperative   User space         No
Coroutines      Cooperative   Language level     No

Green Threads

Green threads are threads that are managed by a runtime library or virtual machine instead of the operating system. Key characteristics include:

  • Scheduled in user space rather than kernel space
  • Share a single operating system thread through cooperative concurrency
  • Cannot achieve true parallelism on multiple cores
  • Significantly faster than native threads for thread activation and synchronization
  • More memory efficient, since their stacks are allocated from the heap rather than as fixed-size OS-level stacks
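
The user-space scheduling can be illustrated with a minimal sketch in Python, assuming a toy generator-based model (green_thread and run are illustrative names, not a real green-thread runtime): every task yields back to a scheduler, and all of them share one OS thread.

    import threading

    def green_thread(name, steps):
        # A "green thread": a generator that yields back to the user-space scheduler.
        for i in range(steps):
            print(f"{name} step {i} on OS thread {threading.current_thread().name}")
            yield  # suspension point; the scheduler decides who runs next

    def run(tasks):
        # Minimal user-space round-robin scheduler: all tasks share a single OS thread.
        ready = list(tasks)
        while ready:
            task = ready.pop(0)
            try:
                next(task)          # resume the task until its next yield
                ready.append(task)  # still alive: put it back in the ready queue
            except StopIteration:
                pass                # task finished

    run([green_thread("a", 2), green_thread("b", 3)])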

Coroutines

Coroutines are program components that allow execution to be suspended and resumed. Their key features are:

  • Use cooperative multitasking where control is explicitly passed between routines
  • Local data values persist between successive calls
  • Execution suspends and resumes from the same point
  • Do not provide parallelism but enable concurrency
  • Typically don’t require synchronization primitives such as mutexes, since only one coroutine runs at a time
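
A minimal coroutine sketch in Python using a generator driven by send() (running_average is an illustrative name): the local variables persist across suspensions, and execution resumes exactly at the yield where it stopped.

    def running_average():
        # Local state (total, count) persists across suspensions.
        total, count = 0.0, 0
        average = None
        while True:
            value = yield average   # suspend here; resume when a value is sent in
            total += value
            count += 1
            average = total / count

    avg = running_average()
    next(avg)              # prime the coroutine: run to the first yield
    print(avg.send(10))    # 10.0
    print(avg.send(20))    # 15.0
    print(avg.send(30))    # 20.0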

Fibers

Fibers are lightweight threads of execution with these characteristics:

  • Use cooperative multitasking like coroutines
  • Have their own stack but share address space
  • Must be explicitly scheduled by the application
  • Cannot utilize multiple processors without using OS threads
  • Provide better performance for context switching compared to threads
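
Python has no built-in fiber type, so the sketch below approximates one with a hypothetical Fiber wrapper around a generator; the point is only that the application, not a runtime scheduler, decides when each fiber resumes.

    class Fiber:
        # Hypothetical fiber: wraps a generator and exposes an explicit resume().
        def __init__(self, func, *args):
            self._gen = func(*args)
            self.done = False

        def resume(self):
            # Run the fiber until its next yield; the caller chooses when.
            try:
                next(self._gen)
            except StopIteration:
                self.done = True

    def worker(name):
        for i in range(2):
            print(f"{name}: step {i}")
            yield  # give control back to whoever resumed us

    a, b = Fiber(worker, "a"), Fiber(worker, "b")
    # The application schedules explicitly, in whatever order it wants.
    for fiber in (a, a, b, a, b, b):
        if not fiber.done:
            fiber.resume()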

Use Cases

  • Green threads: Legacy systems and environments without native thread support
  • Coroutines: Implementing iterators, event loops, and cooperative tasks (a small iterator sketch follows this list)
  • Fibers: High-performance I/O operations and task scheduling systems
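
As a small illustration of the iterator use case, a Python generator (countdown is an illustrative name) suspends at each yield and resumes when the consumer requests the next value:

    def countdown(n):
        # Suspends at each yield; resumes here when the consumer asks for the next value.
        while n > 0:
            yield n
            n -= 1

    for value in countdown(3):
        print(value)   # 3, 2, 1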

Modern Implementations

Many modern programming languages provide these features:

  • Go implements goroutines (a form of green threads)
  • Kotlin and Python support coroutines
  • Ruby previously used green threads (before version 1.9)
  • PHP supports cooperative concurrency through fibers (the Fiber class, added in PHP 8.1)
  • Rust supports system threads natively and async I/O through libraries
  • Java platform threads are mapped 1:1 to native OS threads; virtual threads (introduced in Java 21) are scheduled by the JVM

Resource control

Aspect         Preemptive            Cooperative
CPU Control    OS controlled         Task controlled
Interruption   Can happen anytime    Only at yield points
Overhead       Higher                Lower
Complexity     More complex          Simpler

Preemptive Scheduling

Preemptive scheduling allows the operating system to interrupt and reassign CPU control at any time. Key characteristics include:

  • Tasks can be forcibly suspended by the scheduler
  • Higher priority tasks can interrupt lower priority ones
  • Requires context switching to store and restore thread states
  • Copes better when running unrelated software from different sources, since no single task can monopolize the CPU
  • Higher overhead due to frequent context switching
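
A sketch of the consequence for shared state, using Python's threading module (which maps onto OS threads): because the scheduler may preempt a thread between the read and the write of counter += 1, the update must be protected by a mutex.

    import threading

    counter = 0
    lock = threading.Lock()

    def worker():
        global counter
        for _ in range(100_000):
            # The OS may preempt this thread at any point, so the
            # read-modify-write on `counter` must be guarded by a lock.
            with lock:
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 400000 with the lock; without it, often less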

Cooperative Scheduling

Cooperative scheduling relies on tasks voluntarily giving up CPU control. Its main features are:

  • Tasks run until they explicitly yield control or block
  • No forced interruption of running tasks
  • Lower overhead due to minimal context switching
  • Works well when programs are designed to cooperate
  • Simpler implementation compared to preemptive scheduling
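
For contrast, a cooperative sketch using Python's asyncio (one possible cooperative model): a task is only ever interrupted at an explicit await, so the same counter update needs no lock.

    import asyncio

    counter = 0

    async def worker():
        global counter
        for _ in range(1000):
            counter += 1            # safe: a task is only interrupted at await points
            await asyncio.sleep(0)  # explicit yield back to the event loop

    async def main():
        await asyncio.gather(*(worker() for _ in range(4)))
        print(counter)  # 4000: no lock was needed

    asyncio.run(main())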

Use Cases

Preemptive Scheduling

  • Modern operating systems
  • Systems with multiple users
  • Applications requiring fair CPU distribution
  • Time-critical systems needing guaranteed response times

Cooperative Scheduling

  • Embedded systems with single-vendor software
  • Event-driven systems
  • State machine implementations
  • Systems where predictable execution is more important than responsiveness

Trade-offs

Preemptive scheduling provides better system responsiveness and fair CPU distribution but comes with higher overhead. Cooperative scheduling offers a simpler implementation and fewer synchronization issues, but a single task that blocks on I/O or refuses to yield can stall every other task.
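
A sketch of that CPU-hogging risk, again using asyncio as an assumed cooperative model: a task that calls the blocking time.sleep never yields, so every other task is stalled until it finishes.

    import asyncio, time

    async def misbehaving():
        time.sleep(2)        # blocking call: never yields, so the event loop is stalled
        print("misbehaving task done")

    async def polite():
        await asyncio.sleep(0.1)   # ready after 0.1 s, but must wait for the hog
        print("polite task done")

    async def main():
        await asyncio.gather(misbehaving(), polite())

    asyncio.run(main())
    # Output order: "misbehaving task done" first, then "polite task done",
    # even though the polite task only needed 0.1 seconds.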