Non-Blocking I/O - The Art of Not Waiting
Non-Blocking · Event Loop · Concurrency · Backend


Phuoc Nguyen · January 25, 2024 · 5 min read


The Problem with Blocking Code

In traditional programming, we often write blocking code. When performance becomes a problem, the usual fix is to let some tasks run in parallel. However, that costs more resources and brings its own set of concurrency problems:

Blocking vs Non-blocking

Common problems:

  • Race Condition - When multiple threads access the same resource
  • Critical Sections - Code regions that need protection
  • Deadlock - Threads waiting for each other indefinitely

Problem: When an incident causes high latency (e.g., a slow database or network), the blocked threads are held until the call completes, which leads to resource exhaustion.
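
To make this concrete, here is a minimal sketch of the blocking style in plain Java (an illustration, not code from any real system): each request borrows a pooled thread, and a hypothetical slowQuery() call, standing in for a slow DB or network dependency, holds that thread hostage for its full duration.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BlockingServer {

    // Hypothetical slow dependency: a DB or network call that takes 2 seconds.
    static String slowQuery(int requestId) {
        try {
            Thread.sleep(2_000); // the calling thread sits here doing nothing
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "result for request " + requestId;
    }

    public static void main(String[] args) throws InterruptedException {
        // One pooled thread per in-flight request: 100 slow requests
        // pin 100 threads until the I/O completes.
        ExecutorService pool = Executors.newFixedThreadPool(100);
        for (int i = 1; i <= 100; i++) {
            final int id = i;
            pool.submit(() -> System.out.println(slowQuery(id)));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

If latency on slowQuery() rises from 2 seconds to 20, every one of those 100 threads stays pinned ten times longer, and the pool runs out of capacity for new requests.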


Understanding Concurrency through a Coffee Shop Example

To understand the concepts clearly, imagine Hieu opens a coffee shop with one barista.

Coffee Shop Analogy

Customer Service Process:

| Step | Task | Time |
|------|------|------|
| 1 | Take order | 500ms |
| 2 | Make coffee | 500ms |
| 3 | Collect payment | 500ms |
| 4 | Serve drink | 500ms |

Scenario: Today, 100 customers arrive within a single minute. Since each customer takes 4 × 500ms = 2 seconds, one barista can serve at most 30 of them in that minute.

What is Concurrency?

Concurrency is when an application handles multiple requests at once: the CPU switches between them so quickly that we can treat them as running simultaneously.

Concurrency Diagram

Context Switching: a single-core CPU supports concurrency by continuously switching between threads to handle requests.


Parallelism - Adding Staff

To increase efficiency, Hieu hires another employee, Luu. The work is now done in parallel.

Parallelism

Comparing Concurrency vs Parallelism

| Concurrency | Parallelism |
|-------------|-------------|
| Handles many tasks "at once" | Tasks actually run in parallel |
| 1 CPU core, context switching | Multiple CPU cores |
| Like one person doing many things | Like many people doing many things |

Parallelism is simultaneous execution of requests on a multi-core CPU.
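
The difference can be sketched in plain Java (an illustration, not from the original article): starting more CPU-bound threads than cores forces the OS to context-switch between them, so all of them make progress concurrently, while at any instant only as many threads as there are cores run truly in parallel.

```java
public class ConcurrencyVsParallelism {
    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("CPU cores: " + cores);

        // Start more CPU-bound threads than cores: the OS scheduler context-switches
        // between them so all of them make progress "at once" (concurrency), but at
        // any instant at most `cores` of them are actually running (parallelism).
        Thread[] workers = new Thread[cores * 4];
        for (int i = 0; i < workers.length; i++) {
            final int id = i;
            workers[i] = new Thread(() -> {
                long sum = 0;
                for (long n = 0; n < 50_000_000L; n++) {
                    sum += n; // purely CPU-bound work
                }
                System.out.println("worker " + id + " finished (sum=" + sum + ")");
            });
            workers[i].start();
        }
        for (Thread w : workers) {
            w.join();
        }
    }
}
```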


Multi-threading - Optimizing the Process

Realizing payment collection can be automated, Hieu places a QR Code at the counter for customers to self-pay.

Automation

Multi-threading achieves:

  • Simultaneous use of multiple threads across multiple CPUs
  • Maximizing CPU performance
  • Reducing customer wait time
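
A minimal sketch of this multi-threaded coffee shop, assuming a hypothetical 500ms step() helper for each task in the table above: a fixed pool with one "barista" thread per core serves customers in parallel, and payment is left to the QR code rather than to a barista.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CoffeeShopThreadPool {

    // Hypothetical 500ms step, standing in for each task in the table above.
    static void step(String name) {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println(Thread.currentThread().getName() + ": " + name);
    }

    public static void main(String[] args) throws InterruptedException {
        // One "barista" thread per CPU core: customers are served in parallel,
        // and payment is handled by the QR code instead of a barista.
        int baristaCount = Runtime.getRuntime().availableProcessors();
        ExecutorService baristas = Executors.newFixedThreadPool(baristaCount);

        for (int customer = 1; customer <= 8; customer++) {
            final int c = customer;
            baristas.submit(() -> {
                step("take order for customer #" + c);
                step("make coffee for customer #" + c);
                step("serve drink to customer #" + c); // payment happens via QR code in parallel
            });
        }

        baristas.shutdown();
        baristas.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```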

Synchronous vs Asynchronous Programming

Synchronous

With the current operation, staff must perform work sequentially:

Take order → Make coffee → Serve drink → Next customer
Synchronous Flow

Problem: Each task must wait for the previous one to complete before it can start.
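
A tiny synchronous sketch (illustrative, reusing a hypothetical 500ms step() helper): everything happens on one thread, so each customer's drink delays everyone queued behind them.

```java
public class SynchronousCoffeeShop {

    // Hypothetical 500ms step, matching the coffee-shop flow above.
    static void step(String name) {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println(name);
    }

    public static void main(String[] args) {
        // Strictly sequential: customer 2 cannot even place an order
        // until customer 1's drink has been served.
        for (int customer = 1; customer <= 3; customer++) {
            step("take order #" + customer);
            step("make coffee #" + customer);
            step("serve drink #" + customer);
        }
    }
}
```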

Asynchronous

Hieu and Luu create a new process with help from Long:

Asynchronous Flow

| Person | Role | Equivalent in Code |
|--------|------|--------------------|
| Long | Takes orders, gives queue numbers | Event Loop |
| Queue number | Order tracking | Event Queue |
| Hieu | Makes coffee | Handler 1 |
| Luu | Serves drinks | Handler 2 |
| QR Code | Automatic payment | Async Handler |
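
One way to sketch this asynchronous flow in plain Java is with CompletableFuture (an illustration of the idea, not the article's code): the caller, playing Long, hands out a "queue number" and returns immediately, while Hieu and Luu, modeled here as single-threaded executors, finish the order in the background.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsynchronousCoffeeShop {

    // Hypothetical 500ms step, standing in for the coffee-shop work above.
    static String slowStep(String name) {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return name;
    }

    public static void main(String[] args) {
        ExecutorService hieu = Executors.newSingleThreadExecutor(); // Handler 1: makes coffee
        ExecutorService luu  = Executors.newSingleThreadExecutor(); // Handler 2: serves drinks
        List<CompletableFuture<Void>> orders = new ArrayList<>();

        for (int customer = 1; customer <= 5; customer++) {
            final int c = customer;
            System.out.println("Long: queue number " + c + " handed out"); // take order, return immediately
            orders.add(CompletableFuture
                .supplyAsync(() -> slowStep("Hieu: coffee ready for #" + c), hieu)
                .thenApplyAsync(s -> slowStep(s + " | Luu: drink served for #" + c), luu)
                .thenAccept(System.out::println));
        }

        // The caller was never blocked by coffee-making; we only wait here so the demo can exit cleanly.
        CompletableFuture.allOf(orders.toArray(new CompletableFuture[0])).join();
        hieu.shutdown();
        luu.shutdown();
    }
}
```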

Event Loop - The Heart of Non-Blocking

The Event Loop is an infinite loop that listens for events and dispatches them to handlers for processing.

Event Loop Architecture

Important Characteristics of Event Loop:

  1. Never blocked - Designed to always be ready to receive events
  2. Handle large volumes of events - Because it's not blocked
  3. Runs on a single core - Only one core is used at any given time
  4. Multicore deployment - Utilizing multiple cores requires starting multiple processes

This is the Reactor Pattern - the foundation of Non-blocking I/O.
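
Below is a deliberately simplified sketch of the Reactor pattern, not Vert.x's actual implementation: a single thread loops forever, takes events off a queue, and dispatches each one to a registered handler, while producers only enqueue events and never wait.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

public class MiniEventLoop {

    // An event carries a type (used to pick a handler) and a payload.
    record Event(String type, String payload) {}

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Event> eventQueue = new LinkedBlockingQueue<>(); // the "queue numbers"

        // Handlers must finish quickly and never block; slow work belongs elsewhere.
        Map<String, Consumer<Event>> handlers = Map.of(
            "order",   e -> System.out.println("Handler 1 (make coffee): " + e.payload()),
            "payment", e -> System.out.println("Async handler (QR payment): " + e.payload())
        );

        // The event loop itself: one thread, looping forever, dispatching events to handlers.
        Thread eventLoop = new Thread(() -> {
            try {
                while (true) {
                    Event e = eventQueue.take();                          // wait for the next event
                    handlers.getOrDefault(e.type(), ev -> {}).accept(e);  // dispatch, never block for long
                }
            } catch (InterruptedException stop) {
                // the loop ends when the thread is interrupted
            }
        }, "event-loop");
        eventLoop.start();

        // Producers only enqueue events; they never wait for processing to finish.
        eventQueue.put(new Event("order", "1 espresso for customer #1"));
        eventQueue.put(new Event("payment", "customer #1 paid via QR"));

        Thread.sleep(200); // let the demo run briefly
        eventLoop.interrupt();
    }
}
```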


Vert.x and Multi-Reactor Pattern

Vert.x is a toolkit (not a framework or application server) for building Reactive Applications.

Vert.x Architecture

Multi-Reactor Pattern in Vert.x

Vert.x Instance with Multi-Reactor:

| Event Loop | CPU Core | Role |
|------------|----------|------|
| Event Loop 1 | Core 1 | Processes events independently |
| Event Loop 2 | Core 2 | Processes events independently |
| Event Loop 3 | Core 3 | Processes events independently |

Each Event Loop runs on 1 CPU core, allowing maximum utilization of multi-core server resources.

A Vert.x instance can manage multiple Event Loops; the number typically depends on the CPU core count.
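
A small sketch of the Multi-Reactor idea using the public Vert.x API (assuming Vert.x 4.x on the classpath): deploying one verticle instance per core gives each instance its own Event Loop, so requests are spread across all cores.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // Each verticle instance is bound to one Event Loop; this handler runs on it.
        vertx.createHttpServer()
             .requestHandler(req -> req.response()
                 .end("Hello from " + Thread.currentThread().getName()))
             .listen(8080);
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Deploy one instance per core so every Event Loop (and core) gets its share of requests.
        int cores = Runtime.getRuntime().availableProcessors();
        vertx.deployVerticle(HelloVerticle.class, new DeploymentOptions().setInstances(cores));
    }
}
```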


Golden Rule: Don't Block the Event Loop!

Warning

Why is it important?

If you block all Event Loops in a Vert.x application, the application will STOP!

How long should a handler take?

Depends on traffic and number of Event Loops:

| Event Loops | Traffic | Max Processing Time |
|-------------|---------|---------------------|
| 1 | 1000 rps | 1ms |
| 2 | 1000 rps | 2ms |
| 4 | 1000 rps | 4ms |
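
The budget follows directly from the table: at 1000 rps on a single Event Loop, each event gets roughly 1ms (one second divided by 1000 events); with N loops the budget is roughly N ms. Here is a hedged sketch of how to respect that rule, assuming Vert.x 4.x and a hypothetical slowQuery() that takes ~2 seconds: the blocking work is pushed onto a worker thread with executeBlocking, so the Event Loop only handles the lightweight response.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class DontBlockTheEventLoop extends AbstractVerticle {

    // Hypothetical slow call (e.g. a legacy JDBC query) that takes ~2 seconds.
    private String slowQuery() {
        try {
            Thread.sleep(2_000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "rows";
    }

    @Override
    public void start() {
        vertx.createHttpServer().requestHandler(req -> {
            // WRONG: calling slowQuery() directly here would freeze this Event Loop
            // for 2 seconds and stall every other request assigned to it.

            // RIGHT: push the blocking work onto a worker thread and reply when it is done.
            vertx.<String>executeBlocking(promise -> promise.complete(slowQuery()))
                 .onSuccess(rows -> req.response().end(rows))
                 .onFailure(err -> req.response().setStatusCode(500).end("query failed"));
        }).listen(8080);
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new DontBlockTheEventLoop());
    }
}
```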

What is Reactive Programming?

Purpose:

  1. Clear architecture - Easy to scale and develop
  2. Handle appropriate traffic - Meet business requirements
  3. Follow The Reactive Manifesto - Proven best practices

Reactive Manifesto

How to build a Reactive Application:

| Approach | Tools | Complexity |
|----------|-------|------------|
| Use a toolkit | Vert.x, Netty | High |
| Code from the Java core | NIO, CompletableFuture | Very high |
| Use a framework | Spring WebFlux, Quarkus, Mutiny | Medium |
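
As a taste of the framework approach, here is a small non-blocking pipeline written against Project Reactor, the library underneath Spring WebFlux (an illustrative sketch, assuming Reactor is on the classpath): the pipeline is declared up front, elements are produced on a scheduler thread, and the main thread is never blocked while they arrive.

```java
import java.time.Duration;
import reactor.core.publisher.Flux;

public class ReactiveSketch {
    public static void main(String[] args) throws InterruptedException {
        // The pipeline is only a declaration; nothing runs until subscribe() is called,
        // and elements are then produced on a scheduler thread, not on the main thread.
        Flux.interval(Duration.ofMillis(100)) // emits 0, 1, 2, ... every 100ms
            .take(5)
            .map(i -> "order #" + i + " processed")
            .subscribe(System.out::println);

        System.out.println("main thread is free to do other work");
        Thread.sleep(700); // keep the JVM alive long enough to see all five events
    }
}
```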

Key Takeaways

Summary
  1. Concurrency ≠ Parallelism - Understand the difference clearly
  2. Event Loop is the heart of Non-Blocking I/O
  3. Never block the Event Loop! - Golden rule
  4. Multi-Reactor Pattern allows maximum utilization of server resources
  5. Vert.x is a powerful toolkit for building Reactive Applications

