Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 1

Integrating with external systems is a common requirement in Dynamics 365 Customer Engagement projects, but when the project involves an on-premises instance of Dynamics 365, routing requests from external systems through your firewall can present an additional challenge. Over the course of the next few posts, I will show how you can easily build a simple service relay with RabbitMQ and Python to handle inbound requests from external data interface consumers.

Here's how my approach works. A consumer writes a request to a cloud-hosted RabbitMQ request queue (either directly or through a proxy service) and starts waiting for a response. On the other end, a Python script monitors the request queue for inbound requests. When it sees a new one, it executes the appropriate request through the Dynamics 365 Web API and writes the response back to a client-specific RabbitMQ response queue. The consumer then picks up the response from the queue. This way the consumer doesn't need to know anything other than how to write the initial request, and no extra inbound firewall ports need to be opened.
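To make the flow concrete, here is a minimal sketch of the consumer side using the pika client. The queue names (`d365_requests`, `client1_responses`) and the message shape are my own illustrative assumptions for this sketch, not a fixed contract; we'll firm up the details later in the series.

```python
import json
import uuid

REQUEST_QUEUE = "d365_requests"  # assumed name of the shared request queue


def build_request(entity, query, response_queue):
    """Build a relay request message. The correlation id lets the consumer
    match the eventual response, and response_queue tells the relay where
    to publish the reply."""
    return {
        "correlation_id": str(uuid.uuid4()),
        "response_queue": response_queue,
        "entity": entity,   # e.g. "accounts"
        "query": query,     # OData query string for the Web API
    }


def main():
    import pika  # the de facto Python client for RabbitMQ

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    response_queue = "client1_responses"  # client-specific response queue
    channel.queue_declare(queue=REQUEST_QUEUE, durable=True)
    channel.queue_declare(queue=response_queue, durable=True)

    # Publish the request, then block until the relay writes a response
    request = build_request("accounts", "$select=name&$top=3", response_queue)
    channel.basic_publish(
        exchange="",
        routing_key=REQUEST_QUEUE,
        body=json.dumps(request),
    )

    for _method, _props, body in channel.consume(
        response_queue, auto_ack=True, inactivity_timeout=30
    ):
        if body is not None:
            print(json.loads(body))
        break

    connection.close()


if __name__ == "__main__":
    main()
```

Because the consumer only ever publishes to one queue and reads from another, it needs no network path to Dynamics 365 at all.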

This diagram shows an overview of the process:

[Diagram: simple service relay]

Although my original goal was to accelerate the deployment of data interfaces for on-premises Dynamics 365 CE instances, a simple service relay like this could also be useful for IFD or Dynamics 365 online deployments if you don't want to allow direct access to your organization. Because the queue monitoring process is single-threaded, it naturally throttles requests, but you can run multiple instances of the queue monitor script if you want to increase the number of concurrent requests the relay can process.

Why use this approach?

There are lots of message brokers and service bus offerings (Azure Service Bus, IBM MQ, Amazon SQS, etc.) you could use to build a service relay. In fact, there's even an Azure offering called Azure Relay that aims to solve exactly the same problem my approach does, but not just for Dynamics 365, so "why use this?" is a great question.

First, I think RabbitMQ is just a great tool, and I previously wrote a five-part series about using RabbitMQ with Dynamics 365 (back when it was still called Dynamics CRM). Second, using RabbitMQ instead of a cloud-specific service bus offering gives you maximum flexibility in where you host your request and response queues and how you choose to scale. For example, my RabbitMQ broker runs in a Docker container on a Digital Ocean VPS. If I ever decide to move off of Digital Ocean, I can easily switch to any IaaS or VPS provider. I can also configure a RabbitMQ cluster to achieve significantly faster throughput.

As for why I'm using Python instead of C#, which is probably more familiar to most Dynamics 365 developers, Python also makes this approach more flexible. Using Python means I'm not tied to the Dynamics 365 SDK client libraries or a Windows host for running my queue monitoring process, and I can easily package my monitoring process in a Docker image. (Although I highly recommend Python, there are RabbitMQ clients for .NET, and you can also find RabbitMQ tutorials for other languages, including Java, Ruby, and JavaScript, on the RabbitMQ site.)
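As a preview of what the monitoring script will look like, here is a hedged sketch of the relay loop: consume a request from the request queue, execute it against the Web API, and publish the result to the response queue named in the message. The Web API URL, queue names, and message fields are illustrative assumptions, and authentication is omitted; the `fetch` callable is injected so the core logic can be exercised without a live Dynamics 365 instance.

```python
import json

WEB_API_BASE = "http://crm/org/api/data/v9.0"  # assumed on-prem Web API root
REQUEST_QUEUE = "d365_requests"                # assumed request queue name


def handle_request(request, fetch):
    """Execute one relay request and return the response message.
    `fetch` takes a URL and returns (status_code, body_text)."""
    url = "{}/{}?{}".format(WEB_API_BASE, request["entity"], request["query"])
    status, body = fetch(url)
    return {
        "correlation_id": request["correlation_id"],
        "status": status,
        "body": body,
    }


def main():
    import pika
    import requests

    session = requests.Session()
    session.headers["Accept"] = "application/json"

    def fetch(url):
        resp = session.get(url)  # add authentication for a real deployment
        return resp.status_code, resp.text

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=REQUEST_QUEUE, durable=True)

    def on_message(ch, method, _props, body):
        request = json.loads(body)
        response = handle_request(request, fetch)
        ch.basic_publish(
            exchange="",
            routing_key=request["response_queue"],
            body=json.dumps(response),
        )
        ch.basic_ack(delivery_tag=method.delivery_tag)

    # Single-threaded consume loop - this is what throttles requests
    channel.basic_consume(queue=REQUEST_QUEUE, on_message_callback=on_message)
    channel.start_consuming()


if __name__ == "__main__":
    main()
```

To run more concurrent requests, you would simply start additional copies of this script against the same request queue; RabbitMQ distributes messages among the consumers.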

Wrapping up

That's it for now. In my next post in this series I will walk through the prerequisites for building the simple service relay.

How have you handled inbound data interfaces for on-premises Dynamics 365 CE organizations? Let us know in the comments!
