Understanding Backpressure in WebSockets

Posted by Sascha, Deputy Administrator, Forum Team (registered 9 May 2015).

Backpressure in WebSockets refers to a flow-control mechanism that prevents a fast data producer (server or client) from overwhelming a slow consumer (the other side of the WebSocket). It ensures stability, avoids memory bloat, and keeps throughput consistent.

Let's go step by step: what backpressure is, why it happens, how to detect it, and how to handle it.

1. Core Concept


A WebSocket connection is a full-duplex TCP stream. Both sides can send data at any time.
However, network speed, client CPU, or I/O delays mean one side can’t always consume data as fast as the other produces it.

Backpressure occurs when:

  • The sender keeps writing messages faster than the receiver (or network) can process or transmit them.

This leads to:

  • Increasing memory usage (buffers fill up).
  • Higher latency.
  • Eventually, crashes or forced connection closures.
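The imbalance is easy to see with a toy simulation (plain JavaScript, no real sockets; the message rates are made up for illustration): a producer enqueues 10 messages per tick while the consumer drains only 3, so the backlog grows without bound.

```javascript
// Toy simulation: producer enqueues 10 messages per tick, the consumer
// drains only 3, so the backlog grows linearly with time.
function simulate(ticks, produced = 10, consumed = 3) {
  const queue = [];
  for (let t = 0; t < ticks; t++) {
    for (let i = 0; i < produced; i++) queue.push(`msg-${t}-${i}`);
    queue.splice(0, consumed); // consumer keeps up with only part of the load
  }
  return queue.length; // pending messages = memory the sender must hold
}

console.log(simulate(100)); // → 700 messages still queued after 100 ticks
```

Every tick adds a net 7 messages, which is exactly the "buffers fill up" failure mode above.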
2. How WebSockets Send Data Internally


When you call something like:


ws.send(data);




Internally:

  • The data is queued in a TCP send buffer.
  • The OS tries to send it over the network.
  • If the buffer is full, the send() call doesn’t immediately fail—it just queues the data.
  • As more sends happen, that buffer can grow in memory if the application doesn’t monitor it.

So, the write call being non-blocking is both good and bad:

  • ✅ Good: It keeps the app responsive.
  • ❌ Bad: The app might not realize that it’s flooding the buffer.
3. How Backpressure Builds Up


Consider this timeline:

Time | Sender Action | Receiver Condition | Result
t1 | ws.send() in a fast loop | Receiver processes slowly | Data accumulates
t2 | TCP buffer fills up | Network can't drain fast enough | OS backpressure
t3 | Sender still queues messages | Memory grows | Risk of crash
4. Detecting Backpressure


In Node.js and browser WebSockets:

  • In the Node.js ws library, .send() does not return a value; it accepts a completion callback, and each socket exposes a bufferedAmount property (bytes queued but not yet transmitted).
  • If bufferedAmount keeps growing, the receiver or the network is not keeping up.
  • Pause sending until the buffer drains; the send callback (or a bufferedAmount check) tells you when it is safe to resume.

Example (Node.js ws library):


const MAX_BUFFERED = 1024 * 1024; // pause above ~1 MiB of queued data

function sendData(ws, data) {
  if (ws.bufferedAmount > MAX_BUFFERED) {
    // Buffer full: retry shortly instead of queueing even more.
    setTimeout(() => sendData(ws, data), 50);
    return;
  }
  ws.send(data, { binary: false }, (err) => {
    if (err) console.error('Send error:', err);
  });
}




In browsers there is no drain event either, but the standard WebSocket API exposes the same bufferedAmount property, so you can poll it or throttle your send frequency manually (e.g., via intervals or queues).
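One way to throttle, sketched below, is to wait until bufferedAmount falls below a threshold before each send. The helper names (whenDrained, sendSafely) and the 1 MiB limit are illustrative; the code works against any object exposing bufferedAmount and send, browser or Node alike.

```javascript
// Sketch: hold off sending while the socket is congested, by polling
// bufferedAmount until it drops below `limit`.
function whenDrained(socket, limit, intervalMs = 50) {
  return new Promise((resolve) => {
    const timer = setInterval(() => {
      if (socket.bufferedAmount <= limit) {
        clearInterval(timer);
        resolve();
      }
    }, intervalMs);
  });
}

async function sendSafely(socket, messages, limit = 1024 * 1024) {
  for (const msg of messages) {
    if (socket.bufferedAmount > limit) await whenDrained(socket, limit);
    socket.send(msg);
  }
}
```

Polling is cruder than an event, but it is the only portable option when the platform gives you a byte counter and nothing else.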

5. Handling Backpressure

(a) Queue messages manually


Instead of calling ws.send() directly, you push data into a queue and only send if the buffer is ready.


const queue = [];
let sending = false;

function send(ws, message) {
  queue.push(message);
  if (!sending) processQueue(ws);
}

function processQueue(ws) {
  if (queue.length === 0) {
    sending = false;
    return;
  }
  sending = true;
  const message = queue.shift();
  // The completion callback fires once the frame has been handed off,
  // so the queue drains no faster than the socket can accept data.
  ws.send(message, (err) => {
    if (err) console.error('Send error:', err);
    processQueue(ws);
  });
}



(b) Limit per-client throughput


If multiple clients connect, apply per-client rate limiting:

  • Send N messages per second per connection.
  • Drop or batch messages beyond a limit.

Example: token bucket or leaky bucket algorithms.
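A minimal token bucket can be sketched like this (rate and capacity are per connection; the injectable clock parameter exists only to make the sketch testable):

```javascript
// Token bucket: allow `rate` messages per second, with bursts up to `capacity`.
class TokenBucket {
  constructor(rate, capacity, now = Date.now) {
    this.rate = rate;          // tokens refilled per second
    this.capacity = capacity;  // maximum burst size
    this.tokens = capacity;
    this.now = now;            // injectable clock, milliseconds
    this.last = now();
  }
  tryRemove() {
    const t = this.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((t - this.last) / 1000) * this.rate
    );
    this.last = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A connection handler would then call bucket.tryRemove() before each ws.send() and drop or batch the message when it returns false.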

(c) Apply backpressure at application layer


Instead of sending all game updates or chat messages in real time:

  • Compress or batch messages (e.g., send state diffs every 50ms).
  • Drop outdated data (like old player positions).
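Dropping stale data can be as simple as keying updates by entity id, so each flush window only carries the latest value. Below is a sketch; the {id, x, y} message shape and the 50 ms flush interval are assumptions, not part of any particular game protocol.

```javascript
// Coalesce updates per id: within a flush window, only the most recent
// update for each entity survives; older positions are silently dropped.
function makeCoalescer(send) {
  const latest = new Map();
  return {
    push(update) { latest.set(update.id, update); },
    flush() {
      if (latest.size > 0) {
        send([...latest.values()]);
        latest.clear();
      }
    },
  };
}

// Drive it on a timer, e.g. one batch every 50 ms:
// setInterval(() => coalescer.flush(), 50);
```

This turns an unbounded stream of updates into at most one batch per interval per connection, which is exactly the application-layer backpressure described above.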
6. In Redis or Message Queue Context


When using WebSockets with Redis or Kafka:

  • The backend may push data faster than the WebSocket can deliver.

  • Implement backpressure propagation:
    • If the WebSocket buffer is full, pause consuming from Redis.
    • Resume once the socket's buffered data has drained.

Example pattern:


redisSub.on('message', (channel, msg) => {
  ws.send(msg);
  // pause()/resume() are illustrative; use your Redis client's flow control.
  if (ws.bufferedAmount > MAX_BUFFERED) {
    redisSub.pause();
    const poll = setInterval(() => {
      if (ws.bufferedAmount <= MAX_BUFFERED) {
        clearInterval(poll);
        redisSub.resume();
      }
    }, 50);
  }
});



7. Monitoring Metrics in Production


Track:

  • outboundQueueLength (number of pending messages).
  • averageSendTime.
  • memoryUsage growth per connection.
  • TCP retransmissions and socket buffer sizes.

Use these to auto-scale or drop slow clients.
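A per-connection snapshot might be collected like this (a sketch: the field names mirror the list above, and the socket is anything that exposes bufferedAmount):

```javascript
// Sketch: collect the per-connection metrics listed above in one object.
function snapshotMetrics(socket, outboundQueue, sendTimesMs) {
  const averageSendTime = sendTimesMs.length > 0
    ? sendTimesMs.reduce((sum, t) => sum + t, 0) / sendTimesMs.length
    : 0;
  return {
    outboundQueueLength: outboundQueue.length,
    bufferedAmount: socket.bufferedAmount, // bytes still queued in the socket
    averageSendTime,                       // milliseconds per send
  };
}
```

Exporting such snapshots periodically (to logs or a metrics backend) is what makes "drop slow clients" a policy you can actually enforce.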

8. Summary Table

Concept | Description | Fix
Backpressure | Sender faster than receiver | Pause or queue sends
TCP buffer | OS-managed send queue | Monitor via bufferedAmount
Browser WebSocket | No drain event | Poll bufferedAmount, throttle manually
Node ws library | Per-send completion callback | Resume sending from the callback
Redis/Kafka integration | Upstream may flood the socket | Pause upstream on pressure
9. Key Takeaway


Backpressure = controlled data flow.
Without it, your WebSocket server becomes memory-heavy, latency increases, and you lose control over delivery rate.

You must:

  1. Detect pressure (buffer full or slow client).
  2. Stop sending.
  3. Wait for drain.
  4. Resume safely.


