RabbitMQ: Who Creates the Queues and Exchanges?

Messaging is a fundamental part of any distributed architecture. It allows a publisher to send a message to any number of consumers, without having to know anything about them. This is great for truly asynchronous and decoupled communications.

A very basic but typical RabbitMQ setup works as follows. A publisher publishes a message to an exchange. The exchange handles the logic of routing the message to the queues that are bound to it. For instance, if it is a fanout exchange, a copy of the message is placed on each bound queue. A consumer can then read messages from a queue and process them.
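The fanout behaviour can be pictured with a tiny in-memory model. This is not RabbitMQ code, just a sketch of the routing semantics, and the queue names are made up:

```python
class FanoutExchange:
    """Tiny in-memory stand-in for a fanout exchange (illustration only)."""

    def __init__(self):
        # Maps each bound queue's name to its list of pending messages.
        self.queues = {}

    def bind(self, queue_name):
        # In this toy model, binding a queue also creates it.
        self.queues.setdefault(queue_name, [])

    def publish(self, message):
        # Fanout semantics: every bound queue gets its own copy.
        for pending in self.queues.values():
            pending.append(message)

exchange = FanoutExchange()
exchange.bind("email-notifications")
exchange.bind("audit-log")
exchange.publish("order created")
# Both queues now hold their own copy of "order created".
```

Note what happens if `publish` is called before any `bind`: the loop finds no queues, and the message simply vanishes. This is exactly the message-loss scenario discussed below.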

An important assumption for this setup to work is that, by the time publishers and consumers run, all of this RabbitMQ infrastructure (i.e. the queues, exchanges and bindings) already exists. A publisher cannot publish to an exchange that does not exist, and neither can a consumer take messages from a nonexistent queue.

Thus, it is not unreasonable to have publishers and/or consumers create the queues, exchanges and bindings they need before beginning to send and receive messages. Let’s take a look at how this may be done, and the implications of each approach.

1. Split Responsibility

To have publishers and consumers fully decoupled from each other, ideally the publisher should know only about the exchange (not the queues), and the consumers should know only about the queues (not the exchange). The bindings are the glue between the exchange and the queues.

One possible approach is to have the publisher create the exchange, and the consumers create the queues they need and bind them to the exchange. This has the advantage of decoupling: as new queues are needed, the consumers that need them simply create and bind them, without the publisher knowing anything about them. It is not fully decoupled, though, as the consumers must know the exchange in order to bind to it.
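In code, the split might look like this. The sketch uses pika's channel methods (`exchange_declare`, `queue_declare`, `queue_bind`); the exchange and queue names are illustrative, and the channel itself would come from a real connection:

```python
# In a real application the channel would come from the pika client, e.g.:
#   import pika
#   channel = pika.BlockingConnection(
#       pika.ConnectionParameters("localhost")).channel()

def publisher_startup(channel):
    # The publisher declares only the exchange; it knows nothing
    # about the queues that will eventually be bound to it.
    channel.exchange_declare(exchange="orders", exchange_type="fanout",
                             durable=True)

def consumer_startup(channel, queue_name):
    # Each consumer declares its own queue and binds it. Note the
    # residual coupling: the consumer must know the exchange name.
    channel.queue_declare(queue=queue_name, durable=True)
    channel.queue_bind(queue=queue_name, exchange="orders")
```

Declarations in AMQP 0-9-1 are idempotent (as long as the arguments match), so both functions are safe to run on every startup.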

On the other hand, there is a very real danger of losing messages. If the publisher is deployed before any consumers are running, then the exchange will have no bindings, and any messages published to it will be lost. Whether this is acceptable depends on the application.

2. Publisher Creates Everything

The publisher could be configured to create all the necessary infrastructure (exchanges, queues and bindings) as soon as it runs. This has the advantage that no messages will be lost (because queues will be bound to the exchange without needing any consumers to run first).
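A sketch of this approach, again against pika's channel API with illustrative names; the list of queues now lives in the publisher:

```python
# Illustrative topology owned by the publisher. In a real application
# the channel would come from the pika client library.
QUEUES = ["order-emails", "order-audit", "order-analytics"]

def provision_all(channel):
    # The publisher declares the exchange, then every known queue and
    # its binding. Note the coupling: adding a new queue means editing
    # this list and redeploying the publisher.
    channel.exchange_declare(exchange="orders", exchange_type="fanout",
                             durable=True)
    for queue in QUEUES:
        channel.queue_declare(queue=queue, durable=True)
        channel.queue_bind(queue=queue, exchange="orders")
```

Because the queues and bindings exist as soon as the publisher starts, messages published immediately afterwards accumulate in the queues until consumers come online.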

However, this means that the publisher must know about all the queues that will be bound to the exchange, which is not a very decoupled approach. Every time a new queue is added, the publisher must be reconfigured and redeployed to create it and bind it.

3. Consumer Creates Everything

The opposite approach is to have consumers create the exchanges, queues and bindings they need, as soon as they run. Like the previous approach, this introduces coupling, because consumers must know the exchange that their queues are binding to. Any change in the exchange (renaming, for instance) means that all consumers must be reconfigured and redeployed. This complexity may be prohibitive when a large number of queues and consumers are present.
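The consumer-side mirror of the previous sketch, with the same illustrative names:

```python
# In a real application the channel would come from the pika client library.

def consumer_provisions_all(channel, queue_name):
    # The consumer declares the exchange too, so it can safely start
    # before any publisher exists. Renaming the exchange would mean
    # reconfiguring and redeploying every consumer that hard-codes it here.
    channel.exchange_declare(exchange="orders", exchange_type="fanout",
                             durable=True)
    channel.queue_declare(queue=queue_name, durable=True)
    channel.queue_bind(queue=queue_name, exchange="orders")
```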

4. Neither Creates Anything

A completely different option is for neither the publisher nor the consumer to create any of the required infrastructure. Instead, it is created beforehand using either the user interface of the Management Plugin or the Management CLI. This has several advantages:

  • Publishers and consumers can be truly decoupled. Publishers know only about exchanges, and consumers know only about queues.
  • This can easily be scripted and automated as part of a deployment pipeline.
  • Any changes (e.g. new queues) can be added without needing to touch any of the existing, already-deployed publishers and consumers.
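One way to script this is RabbitMQ's definitions import: the Management Plugin can export and import the whole topology as a JSON file, and recent RabbitMQ versions also support `rabbitmqctl import_definitions`. Below is a sketch that generates such a file; the names are illustrative, and the exact schema is best taken from an export of an already-configured broker:

```python
import json

# Illustrative topology: one fanout exchange with two bound queues.
QUEUE_NAMES = ("order-emails", "order-audit")

definitions = {
    "exchanges": [
        {"name": "orders", "vhost": "/", "type": "fanout",
         "durable": True, "auto_delete": False, "arguments": {}},
    ],
    "queues": [
        {"name": q, "vhost": "/", "durable": True,
         "auto_delete": False, "arguments": {}}
        for q in QUEUE_NAMES
    ],
    "bindings": [
        {"source": "orders", "vhost": "/", "destination": q,
         "destination_type": "queue", "routing_key": "", "arguments": {}}
        for q in QUEUE_NAMES
    ],
}

# Write the file; a deployment pipeline would then load it into the
# broker (e.g. rabbitmqctl import_definitions definitions.json).
with open("definitions.json", "w") as f:
    json.dump(definitions, f, indent=2)
```

Because the topology lives in version-controlled configuration rather than in application code, adding a queue is a change to this file and a re-import, with no publisher or consumer redeployment.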


Asynchronous messaging is a great way to decouple services in a distributed architecture, but to keep them decoupled, you need a sound strategy for maintaining the underlying messaging constructs (in the case of RabbitMQ, the queues, exchanges and bindings).

While publisher and consumer services may themselves take care of creating what they need, there could be a heavy price to pay in terms of initial message loss, coupling, and operational maintenance (in terms of configuration and deployment).

The best approach is probably to handle the messaging system configuration where it belongs: scripting it outside of the application. This ensures that services remain decoupled, and that the queueing system can change dynamically as needed without having to impact a lot of existing services.

11 thoughts on “RabbitMQ: Who Creates the Queues and Exchanges?”

  1. Your comments on this are based on AMQP 0.9.1, I guess. And I agree with you somewhat; in our case we script/create the exchanges before using them, and the consumers create their queues and bindings.

    I have also briefly looked at the AMQP 1.0 protocol and, as I understand it, some of your “worries” are removed there. But it is also my understanding that RabbitMQ will not support it.

  2. And who creates the users that the subscriber/publisher use?

    In a large microservices environment – where each service creates its own objects (which I like) – do we need to keep a big “admin” that all microservices should know (and that would create the users while deploying)? Anything else?

    1. That depends on implementation details that can vary with the application as well as the client technology, so consult the documentation. Creation of infrastructure in RabbitMQ can be automated via command-line scripting.

  3. I get the feeling there is too much of a focus on decoupling these days, with results like this that shift all the knowledge from the services to a central place of knowledge: the Ansible script that rolls out the queues and exchanges. Something that will then typically require manual coordination to get right when rolling out a new version that needs new queues. All that does is hide the dependencies; it doesn’t make them go away.

    The problem with lost messages in the split-responsibility approach is, in most cases, not a problem, because those lost messages represent messages that wouldn’t have been there otherwise in the first place. I.e. a service runs, then wants to support messaging; a new version is deployed that starts sending messages, and at some point consumers start listening. The “lost” messages would also have been lost had the service deployed 10 minutes later while waiting for the IT department to roll out the updated queue infrastructure. And if you really want to make sure nothing gets lost, you can first deploy a version that declares the exchange but doesn’t send anything, then deploy the consumers, and then the next version starts sending. Yes, a few more rollouts, but your rollout process should be oiled for quick rollouts anyway.

    “Manual” management of the queues seems to me like the worst approach, because it typically incurs more human synchronization, while a clear split-responsibility approach only needs some special attention in some cases – but within the same department/team/knowledge group that is already involved anyway. Plus you only need to change one place for one feature change – the service – not two: service and central queue configuration.

  4. I kinda like the software (instead of an admin panel) creating the exchanges/queues/bindings as that automatically handles RabbitMQ restarts and connections to new RabbitMQ servers (thinking automatic fail-over to new machines if the current one dies). With the manual intervention approach, if RabbitMQ dies in the middle of the night then comes back up but no human saw it, there may be many missed messages. (or is there a way for RabbitMQ to automatically create all of the exchanges/queues/bindings at restart?)

    But if software creates everything, we still have the issue of split responsibility. Then again, don’t we still have some kind of coupling? The producer and the client both have to know the IP addresses (or DNS names) of the broker so that has to be in some shared config that they can both get to (or a file that gets sent to each producer and consumer machine). Given that, sharing info about exchange names/types and queue info might not be so bad…. put it in the same file as the DNS info.

    I think I’m leaning towards having both publisher AND consumer create EVERYTHING. This way if the consumer starts first, it’ll have everything it needs to bind to whilst waiting for incoming messages. Same for the producer and since the queue will be created, no messages will be lost (just stuck in the queue until the consumer starts). If a new queue is needed, the producer would not have to restart.

  5. I ended up here because I was thinking about this exact problem.

    I cannot afford to lose messages and most definitely want to be as decoupled as possible.

    To keep things decoupled I do think that the exchange/queue/binding work should be external to both the writers and readers. This could be done via various mechanisms – manually via the UI (could be cumbersome) or via some setup scripting or configuration app. Then writers/readers could be also configured via json config to allow updates without recompile and deploy. Just an idea.

    I think it’s perfectly OK to have a chunk of code/script that manages all the “plumbing” – it basically only knows about configuration matters and how everything is plumbed together without caring about why the plumbing exists.

    Good article!
