When trying to get your head around how ActiveMQ’s networks of brokers work, it helps to have an understanding of the underlying mechanisms involved. I’ve always thought of it as the “Lego brick model” – once you understand the smaller pieces, you can reason about how they can be combined.
When a regular client connects to a broker and sets up a subscription on a destination, ActiveMQ sees it as a consumer and dispatches messages to it. On the consumer, the thread that receives messages is generally not the same one that triggers the event listener – there is an intermediate holding area for messages that have been dispatched from the broker but not yet picked up for processing – the prefetch buffer – and it is here that received messages are placed for consumption. The thread responsible for triggering the main business logic consumes messages from this buffer one by one, either through MessageConsumer.receive() or through a MessageListener.
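The decoupling between the broker’s dispatch path and the client’s processing thread can be sketched as a toy model (this is illustrative Python, not ActiveMQ code – the names and the prefetch size are made up):

```python
import queue
import threading

# Toy model: the prefetch buffer is a bounded queue sitting between the
# broker-side dispatch thread and the client thread running business logic.
PREFETCH_SIZE = 5
prefetch_buffer = queue.Queue(maxsize=PREFETCH_SIZE)

def broker_dispatch(messages):
    # Broker-side thread: pushes dispatched messages into the buffer;
    # blocks once the prefetch limit is reached.
    for msg in messages:
        prefetch_buffer.put(msg)
    prefetch_buffer.put(None)  # sentinel: no more messages

def consume():
    # Client-side thread: drains the buffer one message at a time,
    # much like calling MessageConsumer.receive() in a loop.
    processed = []
    while True:
        msg = prefetch_buffer.get()
        if msg is None:
            break
        processed.append(msg)  # business logic would go here
    return processed

dispatcher = threading.Thread(target=broker_dispatch, args=(range(10),))
dispatcher.start()
result = consume()
dispatcher.join()
print(result)  # all 10 messages, in dispatch order
```

The point of the bounded queue is that dispatch and processing proceed at their own pace, with the prefetch limit providing back-pressure on the broker.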
ActiveMQ dispatches messages out to interested consumers uniformly. In the context of queues, this means in a round-robin fashion. It does not make a distinction between whether that consumer is actually a piece of client code that triggers some business logic, or whether it is another broker.
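Round-robin dispatch over a queue can be sketched in a few lines; the consumer names here are purely illustrative:

```python
from itertools import cycle

# Toy model of queue dispatch: the broker hands each message to the next
# consumer in turn, regardless of whether that consumer is client code
# or another broker acting as a consumer.
def dispatch_round_robin(messages, consumers):
    buffers = {c: [] for c in consumers}
    turn = cycle(consumers)
    for msg in messages:
        buffers[next(turn)].append(msg)
    return buffers

buffers = dispatch_round_robin(range(6), ["client-consumer", "broker-A2"])
print(buffers["client-consumer"])  # [0, 2, 4]
print(buffers["broker-A2"])        # [1, 3, 5]
```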
Now, when you think about it, this implies that when two brokers (let’s call them A1 and A2) are connected in a network A1 -> A2, somewhere there must be a consumer with a prefetch buffer. A network is established by connecting a networkConnector to a transportConnector.
I have always found it useful to think of a
transportConnector as simply an inbound socket that understands a particular wire protocol. When you send a message into it as a client, the broker will take it, write it to a store, and send you a reply that all’s good, or send you a NACK if there’s a problem. So really it’s a straight-through mechanism; the message is dealt with immediately – there are no prefetch buffers here.
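In configuration terms, a transportConnector is just a listening endpoint declared in the broker’s activemq.xml; the name, host, and port below are placeholders:

```xml
<transportConnectors>
  <!-- An inbound socket speaking the OpenWire protocol -->
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
</transportConnectors>
```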
So if there’s no prefetch buffer on a transportConnector, for a network to work the prefetch buffer must actually be part of the networkConnector on the sending broker (A1). In fact, this is exactly the case.
When a network is established from A1 to A2, a proxy consumer (aka local consumer; I’m using Gary Tully’s terminology here) is created as part of the
networkConnector for each destination on which there is demand in
A2, or which is configured inside the
staticallyIncludedDestinations block. By default, each destination being pushed has one consumer, with its own prefetch buffer.
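As a sketch, the sending side of such a network is declared on A1 along these lines (the connector name, URI, and queue name are placeholders):

```xml
<networkConnectors>
  <!-- A1 pushes messages to A2; one proxy consumer (with its own
       prefetch buffer) is created per destination with demand -->
  <networkConnector name="a1-to-a2" uri="static:(tcp://a2-host:61616)">
    <staticallyIncludedDestinations>
      <!-- destinations forwarded even before any remote demand exists -->
      <queue physicalName="orders"/>
    </staticallyIncludedDestinations>
  </networkConnector>
</networkConnectors>
```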
The thread that consumes from this buffer takes each message one by one and (in the case of persistent messages) synchronously sends it to the remote broker. A2 then takes that message, persists it, and replies that the message has been consumed. The proxy consumer then marks the message as having been processed, at which point it is deemed to have been consumed from A1, is marked for deletion, and will not be redelivered to any other consumers. That’s the general principle behind store and forward.
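The store-and-forward handshake can be sketched as a toy model – again illustrative Python, not ActiveMQ internals:

```python
# Toy model of store and forward: a proxy consumer on A1 sends each
# persistent message synchronously and only deletes its local copy
# once A2 acknowledges that the message has been persisted.
class Broker:
    def __init__(self, name):
        self.name = name
        self.store = []

    def persist(self, msg):
        self.store.append(msg)
        return "ACK"  # reply sent back to the sending broker

def forward(local, remote):
    while local.store:
        msg = local.store[0]          # take the next message in order
        reply = remote.persist(msg)   # synchronous send to the remote broker
        if reply == "ACK":
            local.store.pop(0)        # safe to delete: the remote now owns it

a1, a2 = Broker("A1"), Broker("A2")
a1.store.extend(["m1", "m2", "m3"])
forward(a1, a2)
print(a1.store)  # [] - messages have been consumed from A1
print(a2.store)  # ['m1', 'm2', 'm3'] - now held by A2
```

Because each send waits for the acknowledgement before the local copy is deleted, a message exists in at least one broker’s store at all times – which is exactly why the loss-free guarantee holds, and why throughput is bounded by the round trip.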
Now, because the send from A1 to A2 is being performed by a single thread, which sends the messages one by one for each destination, you will see that the throughput across the
networkConnector can be quite slow relative to the rate at which messages are being placed onto
A1 in the first place. This is the cost of two guarantees that ActiveMQ gives you when it accepts messages:
- messages will not be lost when they are marked as persistent, and
- messages will be delivered in the order in which they are sent
If you have messages that need to be sent over the network at a higher rate than the networkConnector can achieve given these constraints, you can relax one of these two guarantees:
- By marking the messages as non-persistent when you send them, the send will be asynchronous (and so, super fast), but if you lose a broker any messages that are in-flight are gone forever. This is usually not appropriate, as in most use cases reliability trumps performance.
- If you can handle messages out of order, you can define N networkConnectors between A1 and A2. Each connector will have a proxy consumer created for the destinations being pushed over the network – N connectors means N consumers, each pushing messages. Keep in mind that message distribution in a broker is performed in a round-robin fashion, so you lose all guarantee of ordering before the messages even arrive on A2. Of course, you can opt to group related messages together using message groups, which will be honored both on A1 (between proxy consumers) and by the actual consumers on A2.
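As a sketch, relaxing ordering this way just means declaring several connectors to the same remote broker – the names and URI below are placeholders:

```xml
<networkConnectors>
  <!-- Two connectors to the same remote broker: two proxy consumers
       per destination, so messages flow in parallel but out of order -->
  <networkConnector name="a1-to-a2-1" uri="static:(tcp://a2-host:61616)"/>
  <networkConnector name="a1-to-a2-2" uri="static:(tcp://a2-host:61616)"/>
</networkConnectors>
```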
Hopefully this helps you develop a mental model of how the underlying mechanics of ActiveMQ’s broker networks work. Of course, there’s loads more functionality available – many more Lego bricks of different shapes – to help you build up your broker infrastructure.