Wednesday, 15 June 2011

How to ensure flow control in RabbitMQ is never triggered?


I have a publisher that enqueues messages at a slightly higher rate than the consumers can consume them. For small volumes this is fine, but with a very large number of messages RabbitMQ starts writing them to disk. At some point the disk fills up and flow control is triggered; from then on, rates are really slow. What is a way to reduce or share this load between cluster nodes? How should I design my application so that flow control is never triggered? I am using RabbitMQ 3.2.3 on three nodes with 13 GB RAM and 10 GB disk space each, clustered together. Two of them are RAM nodes, and the remaining one is a disc node, which also runs the RabbitMQ management plugin.

You can tweak the configuration, upgrade the hardware, etc., and in the end you will probably want to put a load balancer in front of more than one RabbitMQ node. But the core problem remains: if you are publishing at a higher rate than you are consuming, you will eventually run into this situation again and again.
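For reference, the flow-control thresholds involved here are governed by two settings in `rabbitmq.config` (classic Erlang-terms format, as used by RabbitMQ 3.2.x); the values below are illustrative, not recommendations:

```erlang
%% rabbitmq.config -- illustrative values only.
[
  {rabbit, [
    %% Block publishers when RabbitMQ uses more than 40% of
    %% available RAM (0.4 is the default).
    {vm_memory_high_watermark, 0.4},

    %% Block publishers when free disk space drops below the
    %% amount of installed RAM.
    {disk_free_limit, {mem_relative, 1.0}}
  ]}
].
```

Raising these limits only delays the alarm; it does not remove the underlying imbalance between publish and consume rates.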

I think the best way to prevent this is to add back-pressure on the publisher side: keep track of the number of messages waiting in the queue, and if that number exceeds some threshold X, have the publisher wait until it drops, or publish new messages at a slower rate. How you slow down depends, of course, on where the published messages come from; if they are submitted by users (for example through a browser or client), you can show a loading indicator while the message waits to be queued.
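The back-pressure idea above can be sketched broker-free. All names here are hypothetical: `queue_depth` stands in for however you measure the backlog, e.g. a call to RabbitMQ's management HTTP API (`GET /api/queues/<vhost>/<queue>`) or a local count of outstanding publisher confirms.

```python
import time

def publish_with_backpressure(messages, publish, queue_depth,
                              max_depth=1000, poll_interval=0.01):
    """Publish messages, pausing whenever the backlog exceeds max_depth."""
    for msg in messages:
        while queue_depth() >= max_depth:  # backlog too large:
            time.sleep(poll_interval)      # wait for consumers to catch up
        publish(msg)

# Tiny in-memory demonstration: a list stands in for the queue, and the
# depth check also simulates a consumer draining one message per poll.
queue = []

def fake_depth():
    if queue:
        queue.pop(0)  # simulated consumer takes one message
    return len(queue)

publish_with_backpressure(range(10), queue.append, fake_depth, max_depth=3)
print(len(queue))
```

The key point is that the publisher itself blocks before the broker has to, so RabbitMQ's own flow control never needs to engage.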

Ideally, though, you should also look at speeding up processing on the consumer side, and possibly scale that part out. But having something in place to throttle the publisher when a backlog builds up should help.
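Scaling out the consumer side can likewise be sketched without a broker: several workers draining a shared queue in parallel, analogous to attaching multiple RabbitMQ consumers to one queue (with a sensible prefetch). Names and counts here are illustrative.

```python
import queue
import threading

def run_consumers(work, handle, n_workers=4):
    """Drain `work` (a queue.Queue) with n_workers parallel handlers."""
    def worker():
        while True:
            try:
                item = work.get_nowait()
            except queue.Empty:
                return              # queue drained: worker exits
            handle(item)
            work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Demonstration: 100 items processed by 4 workers.
work = queue.Queue()
for i in range(100):
    work.put(i)

results = []
lock = threading.Lock()

def handle(item):
    with lock:                      # handlers may run concurrently
        results.append(item)

run_consumers(work, handle, n_workers=4)
print(len(results))
```

With a real broker, the equivalent move is simply to start more consumer processes on the same queue; RabbitMQ distributes deliveries among them.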

