<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[C# - Alexander Development]]></title><description><![CDATA[C# - Alexander Development]]></description><link>https://alexanderdevelopment.net/</link><image><url>https://alexanderdevelopment.net/favicon.png</url><title>C# - Alexander Development</title><link>https://alexanderdevelopment.net/</link></image><generator>Ghost 1.20</generator><lastBuildDate>Fri, 24 Apr 2026 14:20:59 GMT</lastBuildDate><atom:link href="https://alexanderdevelopment.net/tag/c-sharp/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Building a custom Dynamics 365 data interface with OpenFaaS]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Over the past several months, I've been doing a lot of work with <a href="https://github.com/openfaas/faas">OpenFaaS</a> in my spare time, and in today's post I will show how you can use it to easily build and deploy a custom web service interface for data in a Dynamics 365 Customer Engagement online tenant.</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/07/05/building-a-custom-dynamics-365-data-interface-with-openfaas/</link><guid isPermaLink="false">5b3a415c97f5e30001931b7f</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[OpenFaaS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[C#]]></category><category><![CDATA[integration]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 05 Jul 2018 17:28:47 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/07/openfaas-d365-header.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img 
src="https://alexanderdevelopment.net/content/images/2018/07/openfaas-d365-header.png" alt="Building a custom Dynamics 365 data interface with OpenFaaS"><p>Over the past several months, I've been doing a lot of work with <a href="https://github.com/openfaas/faas">OpenFaaS</a> in my spare time, and in today's post I will show how you can use it to easily build and deploy a custom web service interface for data in a Dynamics 365 Customer Engagement online tenant.</p>
<h4 id="openfaas">OpenFaaS</h4>
<p>If you're not familiar with OpenFaaS, it's basically a serverless functions platform like <a href="https://azure.microsoft.com/en-us/services/functions/">Azure Functions</a> or <a href="https://aws.amazon.com/lambda/">AWS Lambda</a>, but you run it on Kubernetes or Docker Swarm on your own servers or in the cloud. What I particularly like about OpenFaaS compared to the various commercial serverless platforms is that in addition to offering more control over how/where it's deployed, OpenFaaS supports a wider variety of languages for writing serverless functions.</p>
<blockquote>
<p>OpenFaaS (Functions as a Service) is a framework for building serverless functions with Docker and Kubernetes which has first class support for metrics. Any process can be packaged as a function enabling you to consume a range of web events without repetitive boiler-plate coding.</p>
</blockquote>
<p>To follow along with the samples in this post, you'll need access to a cluster with OpenFaaS deployed, so if you don't already have one, now would be an excellent time to look at the OpenFaaS <a href="http://docs.openfaas.com/deployment/">deployment docs</a> or maybe even work through the <a href="https://github.com/openfaas/workshop">hands-on workshop</a>. I've also previously written about how to securely deploy OpenFaaS on a free <a href="https://alexanderdevelopment.net/post/2018/02/25/installing-and-securing-openfaas-on-a-google-cloud-virtual-machine/">Google Cloud VM with Docker Swarm</a> or on an <a href="https://alexanderdevelopment.net/post/2018/05/31/installing-and-securing-openfaas-on-an-aks/">Azure Kubernetes Service cluster</a>.</p>
<h4 id="preparingtobuildtheinterfacefunction">Preparing to build the interface function</h4>
<p>As soon as you have OpenFaaS running, it's time to look at the actual custom interface function.</p>
<p>My demo C# function does the following:</p>
<ol>
<li>Parse a JSON object sent in the client request for an access key and optional query filter</li>
<li>Validate the client-supplied access key to authorize or reject the request</li>
<li>Retrieve a Dynamics 365 OAuth access token using my <a href="https://alexanderdevelopment.net/post/2018/05/19/an-azure-ad-oauth2-helper-microservice/">Azure AD OAuth 2 helper microservice</a></li>
<li>Execute a query for contacts against the Dynamics 365 Web API</li>
<li>Return the Web API query results to the client in an array as part of a JSON object</li>
</ol>
<p>Because the OpenFaaS function uses my OAuth helper microservice instead of requesting an OAuth access token directly from Azure Active Directory, you need to deploy that microservice to your cluster before moving forward.</p>
<p>If you're using Kubernetes, you can create the deployment and corresponding service using the following YAML. You'll need to set the RESOURCE environment variable to the FQDN for your Dynamics 365 CE organization, but you can leave the CLIENTID and TOKEN_ENDPOINT values alone. <em>(While I used to think you needed to register a separate client application for every Dynamics 365 org to use OAuth authentication, I recently learned via a Twitter conversation that there is a <a href="https://twitter.com/bguidinger/status/1001796185798119424">&quot;universal&quot; CRM client id</a> you can use instead.)</em></p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azuread-oauth2-helper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azuread-oauth2-helper
    spec:
      containers:
      - name: azuread-oauth2-helper
        image: lucasalexander/azuread-oauth2-helper
        ports:
        - containerPort: 5000
        env:
        - name: RESOURCE
          value: &quot;https://XXXXXXXX.crm.dynamics.com&quot;
        - name: CLIENTID
          value: &quot;2ad88395-b77d-4561-9441-d0e40824f9bc&quot;
        - name: TOKEN_ENDPOINT
          value: &quot;https://login.microsoftonline.com/common/oauth2/token&quot;
---
apiVersion: v1
kind: Service
metadata:
  name: azuread-oauth2-helper
spec:
  ports:
  - port: 5000
  selector:
    app: azuread-oauth2-helper
</code></pre>
<p>Once you've deployed the microservice, you can expose it with a Kubernetes ingress like the one defined below. In this case my microservice is accessible on the same host as OpenFaaS (akskube.alexanderdevelopment.net), and it is secured with the same Let's Encrypt certificate. You'll want to update the configuration with the appropriate values for your specific situation.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: azuread-oauth2-helper-ingress
  annotations:
    kubernetes.io/tls-acme: &quot;true&quot;
    certmanager.k8s.io/issuer: letsencrypt-production
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - akskube.alexanderdevelopment.net
    secretName: faas-letsencrypt-production
  rules:
  - host: akskube.alexanderdevelopment.net
    http:
      paths:
      - path: /oauthhelper
        backend:
          serviceName: azuread-oauth2-helper
          servicePort: 5000
</code></pre>
<p>After the OAuth helper microservice is deployed, you should validate that you can get a token returned for a valid username/password combination. Here's what that looks like in Postman.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/07/microservice-validation-1.png#img-thumbnail" alt="Building a custom Dynamics 365 data interface with OpenFaaS"></p>
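<p>If you prefer a command-line check over Postman, the request and response shapes are simple. Based on the GetToken helper shown later in this post, the microservice expects a JSON body with username and password properties and returns a JSON object with an accesstoken property. The request body looks like this (the credential values are placeholders):</p>

```json
{
  "username": "someone@XXXXXX.onmicrosoft.com",
  "password": "XXXXXX"
}
```

<p>A successful response is a JSON object like <code>{"accesstoken": "..."}</code>, where the token value is the bearer token that will later be supplied to the Dynamics 365 Web API.</p>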
<h4 id="buildingtheinterfacefunction">Building the interface function</h4>
<p>If you've made it to this point, building and deploying the function is easy!</p>
<p>First the function gets its configuration data from environment variables that are set when the function is deployed. If you were actually using this function in production, it would be better to store sensitive values like the access key and the Dynamics 365 password as <a href="https://github.com/openfaas/faas/blob/master/guide/secure_secret_management.md">secrets</a>, but I've used environment variables here to keep this overview as simple as possible.</p>
<pre><code>//get configuration from env variables        
var username = Environment.GetEnvironmentVariable(&quot;USERNAME&quot;);
var userpassword = Environment.GetEnvironmentVariable(&quot;USERPASS&quot;);
var tokenendpoint = Environment.GetEnvironmentVariable(&quot;TOKENENDPOINT&quot;);
var accesskey = Environment.GetEnvironmentVariable(&quot;ACCESSKEY&quot;);
var crmwebapi = Environment.GetEnvironmentVariable(&quot;CRMAPI&quot;);
</code></pre>
<p>After the function gets its configuration data, it deserializes the client request using Json.Net to extract a client-supplied access key and an optional query filter. The client-supplied key is validated against the stored key value, and if they don't match, an error response is returned.</p>
<pre><code>var queryrequest = JsonConvert.DeserializeObject&lt;QueryRequest&gt;(input);

if(accesskey!=queryrequest.AccessKey)
{
    JObject outputobject = new JObject();
    outputobject.Add(&quot;error&quot;, &quot;Invalid access key&quot;);
    Console.WriteLine(outputobject.ToString());
    return;
}
</code></pre>
<p>After the access key is validated, the function then makes a request to the authentication helper microservice to get an access token.</p>
<pre><code>var token = GetToken(username, userpassword, tokenendpoint);

...
...
...

string GetToken(string username, string userpassword, string tokenendpoint){
    try
    {
        JObject tokencredentials = new JObject();
        tokencredentials.Add(&quot;username&quot;, username);
        tokencredentials.Add(&quot;password&quot;,userpassword);
        var reqcontent = new StringContent(tokencredentials.ToString(), Encoding.UTF8, &quot;application/json&quot;);
        var result = _client.PostAsync(tokenendpoint, reqcontent).Result;
        var tokenobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(
            result.Content.ReadAsStringAsync().Result);
        var token = tokenobj[&quot;accesstoken&quot;];
        return token.ToString();
    }
    catch(Exception ex)
    {
        return string.Format(&quot;Error: {0}&quot;, ex.Message);
    }
}
</code></pre>
<p>Once the token is returned from the microservice, the function executes the Web API query. The query is just a hardcoded OData query, <code>/contacts?$select=fullname,contactid</code>, plus any filter supplied by the client. The function expects the filter to be supplied in a Dynamics 365-supported OData format, such as <code>startswith(fullname,'y')</code>.</p>
<pre><code>var crmreq = new HttpRequestMessage(HttpMethod.Get, crmwebapi + crmwebapiquery);
crmreq.Headers.Add(&quot;Authorization&quot;, &quot;Bearer &quot; + token);
crmreq.Headers.Add(&quot;OData-MaxVersion&quot;, &quot;4.0&quot;);
crmreq.Headers.Add(&quot;OData-Version&quot;, &quot;4.0&quot;);
crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.maxpagesize=500&quot;);
crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.include-annotations=OData.Community.Display.V1.FormattedValue&quot;);
crmreq.Content = new StringContent(string.Empty, Encoding.UTF8, &quot;application/json&quot;);
var crmres = _client.SendAsync(crmreq).Result;

var crmresponse = crmres.Content.ReadAsStringAsync().Result;

var crmresponseobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(crmresponse);
</code></pre>
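<p>As a quick sketch of this query composition in isolation (the base URL and filter below are just the example values used elsewhere in this post), the final request URL is built like so:</p>

```csharp
using System;

class QueryComposer
{
    // Mirrors the function's query-building logic: a hardcoded base query
    // plus an optional client-supplied OData filter.
    static string Compose(string crmwebapi, string filter)
    {
        var crmwebapiquery = "/contacts?$select=fullname,contactid";
        if (!string.IsNullOrEmpty(filter))
            crmwebapiquery += "&$filter=" + filter;
        return crmwebapi + crmwebapiquery;
    }

    static void Main()
    {
        Console.WriteLine(Compose(
            "https://lucastest20.api.crm.dynamics.com/api/data/v9.0",
            "startswith(fullname,'y')"));
    }
}
```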
<p>Finally, the results are returned to the client in an array as part of a JSON object.</p>
<pre><code>JArray outputarray = new JArray();
foreach(var row in crmresponseobj[&quot;value&quot;].Children())
{
    JObject record = new JObject();
    record.Add(&quot;id&quot;, row[&quot;contactid&quot;]);
    record.Add(&quot;fullname&quot;, row[&quot;fullname&quot;]);
    outputarray.Add(record);
}
JObject outputobject = new JObject();
outputobject.Add(&quot;contacts&quot;, outputarray);
Console.WriteLine(outputobject.ToString());
</code></pre>
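<p>Concretely, the JSON written back to the client has this shape (the id and fullname values below are illustrative placeholders, not real record data):</p>

```json
{
  "contacts": [
    { "id": "00000000-0000-0000-0000-000000000000", "fullname": "Yvonne McKay (sample)" }
  ]
}
```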
<p>Here's the complete function.</p>
<pre><code>using System;
using System.Text;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.IO;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using System.Collections.Generic;

namespace Function
{
    public class FunctionHandler
    {
        private static HttpClient _client = new HttpClient();

        public void Handle(string input) {
            //get configuration from env variables        
            var username = Environment.GetEnvironmentVariable(&quot;USERNAME&quot;);
            var userpassword = Environment.GetEnvironmentVariable(&quot;USERPASS&quot;);
            var tokenendpoint = Environment.GetEnvironmentVariable(&quot;TOKENENDPOINT&quot;);
            var accesskey = Environment.GetEnvironmentVariable(&quot;ACCESSKEY&quot;);
            var crmwebapi = Environment.GetEnvironmentVariable(&quot;CRMAPI&quot;);
            
            //deserialize the client request
            var queryrequest = JsonConvert.DeserializeObject&lt;QueryRequest&gt;(input);
            
            //validate the client access key
            if(accesskey!=queryrequest.AccessKey)
            {
                JObject outputobject = new JObject();
                outputobject.Add(&quot;error&quot;, &quot;Invalid access key&quot;);
                Console.WriteLine(outputobject.ToString());
                return;
            }

            //get the oauth token
            var token = GetToken(username, userpassword, tokenendpoint);
            
            if(!token.ToUpper().StartsWith(&quot;ERROR:&quot;))
            {
                //set the base odata query
                var crmwebapiquery = &quot;/contacts?$select=fullname,contactid&quot;;
                
                //add a filter if the client included one in the request
                if(!string.IsNullOrEmpty(queryrequest.Filter))
                    crmwebapiquery+=&quot;&amp;$filter=&quot;+queryrequest.Filter;
                try
                {
                    //make the request to d365
                    var crmreq = new HttpRequestMessage(HttpMethod.Get, crmwebapi + crmwebapiquery);
                    crmreq.Headers.Add(&quot;Authorization&quot;, &quot;Bearer &quot; + token);
                    crmreq.Headers.Add(&quot;OData-MaxVersion&quot;, &quot;4.0&quot;);
                    crmreq.Headers.Add(&quot;OData-Version&quot;, &quot;4.0&quot;);
                    crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.maxpagesize=500&quot;);
                    crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.include-annotations=OData.Community.Display.V1.FormattedValue&quot;);
                    crmreq.Content = new StringContent(string.Empty, Encoding.UTF8, &quot;application/json&quot;);
                    var crmres = _client.SendAsync(crmreq).Result;
                    
                    //handle the d365 response
                    var crmresponse = crmres.Content.ReadAsStringAsync().Result;

                    var crmresponseobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(crmresponse);
                    
                    try
                    {
                        //build the function response
                        JArray outputarray = new JArray();
                        foreach(var row in crmresponseobj[&quot;value&quot;].Children())
                        {
                            JObject record = new JObject();
                            record.Add(&quot;id&quot;, row[&quot;contactid&quot;]);
                            record.Add(&quot;fullname&quot;, row[&quot;fullname&quot;]);
                            outputarray.Add(record);
                        }
                        JObject outputobject = new JObject();
                        outputobject.Add(&quot;contacts&quot;, outputarray);
                        
                        //return the response to the client
                        Console.WriteLine(outputobject.ToString());
                    }
                    catch(Exception ex)
                    {
                        JObject outputobject = new JObject();
                        outputobject.Add(&quot;error&quot;, string.Format(&quot;Could not parse query response: {0}&quot;, ex.Message));
                        Console.WriteLine(outputobject.ToString());
                    }
                }
                catch(Exception ex)
                {
                    JObject outputobject = new JObject();
                    outputobject.Add(&quot;error&quot;, string.Format(&quot;Could not query data: {0}&quot;, ex.Message));
                    Console.WriteLine(outputobject.ToString());
                }
            }
            else
            {
                JObject outputobject = new JObject();
                outputobject.Add(&quot;error&quot;, &quot;Could not get token&quot;);
                Console.WriteLine(outputobject.ToString());
            }
        }

        string GetToken(string username, string userpassword, string tokenendpoint){
            try
            {
                JObject tokencredentials = new JObject();
                tokencredentials.Add(&quot;username&quot;, username);
                tokencredentials.Add(&quot;password&quot;,userpassword);
                var reqcontent = new StringContent(tokencredentials.ToString(), Encoding.UTF8, &quot;application/json&quot;);
                var result = _client.PostAsync(tokenendpoint, reqcontent).Result;
                var tokenobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(
                    result.Content.ReadAsStringAsync().Result);
                var token = tokenobj[&quot;accesstoken&quot;];
                return token.ToString();
            }
            catch(Exception ex)
            {
                return string.Format(&quot;Error: {0}&quot;, ex.Message);
            }
        }
    }

    public class QueryRequest
    {
        public string AccessKey {get;set;}
        public string Filter{get;set;}
    }
}
</code></pre>
<p>Because the function relies on Json.Net, you need to add a reference to it in your .csproj file before you build the function.</p>
<pre><code>&lt;Project Sdk=&quot;Microsoft.NET.Sdk&quot;&gt;
  &lt;PropertyGroup&gt;
    &lt;TargetFramework&gt;netstandard2.0&lt;/TargetFramework&gt;
  &lt;/PropertyGroup&gt;
  &lt;PropertyGroup&gt;
    &lt;GenerateAssemblyInfo&gt;false&lt;/GenerateAssemblyInfo&gt;
  &lt;/PropertyGroup&gt;
  &lt;ItemGroup&gt;
    &lt;PackageReference Include=&quot;newtonsoft.json&quot; Version=&quot;11.0.2&quot; /&gt;
  &lt;/ItemGroup&gt;
&lt;/Project&gt;
</code></pre>
<p>Here is my function definition YAML file with environment variables included. You will need to update them with the appropriate values, and you will also need to change the image name if you're building your own function instead of just deploying mine from Docker Hub.</p>
<pre><code>provider:
  name: faas
  gateway: http://localhost:8080

functions:
  demo-crm-function:
    lang: csharp
    handler: ./demo-crm-function
    image: lucasalexander/faas-demo-crm-function
    environment:
      USERNAME: XXXXXX@XXXXXX.onmicrosoft.com
      USERPASS: XXXXXX
      TOKENENDPOINT: https://akskube.alexanderdevelopment.net/oauthhelper/requesttoken
      CRMAPI: https://lucastest20.api.crm.dynamics.com/api/data/v9.0
      ACCESSKEY: MYACCESSKEY
</code></pre>
<p>Once the function is deployed, you can execute it either through the OpenFaaS admin UI or with any tool that makes HTTP requests, like curl or Postman. Here's what an unfiltered query in Postman looks like for a Dynamics 365 org with sample data installed.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/07/unfiltered-query.png#img-thumbnail" alt="Building a custom Dynamics 365 data interface with OpenFaaS"></p>
<p>And here's a query with a filter included.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/07/filtered-query.png#img-thumbnail" alt="Building a custom Dynamics 365 data interface with OpenFaaS"></p>
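<p>The request bodies in both examples match the QueryRequest class from the function: an AccessKey that must equal the function's ACCESSKEY environment variable, plus an optional Filter. The filtered query above, for instance, would use a body like this:</p>

```json
{
  "AccessKey": "MYACCESSKEY",
  "Filter": "startswith(fullname,'y')"
}
```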
<h4 id="wrappingup">Wrapping up</h4>
<p>Once I got OpenFaaS running, writing and deploying the actual function only took about an hour. Obviously writing a more complex data interface to support real-world requirements would take longer, but using a serverless functions platform like OpenFaaS is definitely a significant accelerator for custom Dynamics 365 integration development.</p>
<p>What do you think about this approach? Are you using serverless functions with your Dynamics 365 projects? What do you think about OpenFaaS vs Azure Functions or AWS Lambda? Let us know in the comments!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Using Dynamics 365 virtual entities to show data from an external organization]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I was recently asked to be a guest on the third-anniversary episode of the <a href="https://crm.audio/">CRM Audio podcast</a>. While I was there George Doubinski challenged me to create a plugin in one Dynamics 365 organization to retrieve records from another Dynamics 365 organization so they could be displayed as virtual entities.</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/05/28/using-dynamics-365-virtual-entities-to-show-data-from-an-external-organization/</link><guid isPermaLink="false">5b05bc3c97f5e30001931b67</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[C#]]></category><category><![CDATA[integration]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Mon, 28 May 2018 12:55:09 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/05/PluginRegistration_2018-05-23_13-48-15.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/05/PluginRegistration_2018-05-23_13-48-15.png" alt="Using Dynamics 365 virtual entities to show data from an external organization"><p>I was recently asked to be a guest on the third-anniversary episode of the <a href="https://crm.audio/">CRM Audio podcast</a>. While I was there George Doubinski challenged me to create a plugin in one Dynamics 365 organization to retrieve records from another Dynamics 365 organization so they could be displayed as virtual entities. I was promised adulation on <a href="https://crmtipoftheday.com/">Dynamics CRM Tip of the Day</a> and fame beyond my wildest dreams, so naturally I accepted.</p>
<p><img src="https://alexanderdevelopment.net/content/images/2018/05/tumblr_inline_n4m5yj9nMP1qa7k0a.gif" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>To address the challenge, I wrote a simple Dynamics 365 plugin that calls the Web API in a different Dynamics 365 organization to retrieve records and return them to a virtual entity data provider. From there, configuration of the Dynamics 365 virtual entity is simple. Let's take a look at how I did it.</p>
<h4 id="theplugin">The plugin</h4>
<p>First you need to create a plugin to retrieve the data from the &quot;external&quot; Dynamics 365 org. Because this code connects directly to the Web API, you'll need to get an access token from Azure AD before you can make the request to Dynamics 365. Just like I showed in my <a href="https://alexanderdevelopment.net/post/2016/11/29/scheduling-dynamics-365-workflows-with-azure-functions-and-csharp/">&quot;Scheduling Dynamics 365 workflows with Azure Functions and C#&quot;</a> post back in 2016, my sample code does not use <a href="https://github.com/AzureAD/azure-activedirectory-library-for-nodejs">ADAL</a> to get the access token, but rather it issues a request directly to the Azure AD OAuth 2 token endpoint.</p>
<p>Here's the code for the plugin. There are some configuration values you'll need to set for your Dynamics 365 organization and whatever query you want to run. It's not a best practice to have any of this actually hardcoded in your plugin, but I've done it this way so it's easier to see how things work.</p>
<pre><code>using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Data.Exceptions;
using Microsoft.Xrm.Sdk.Extensions;
using Microsoft.Xrm.Sdk.Query;
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Newtonsoft.Json;

namespace VirtualEntityProvider
{
    public class RetrieveOtherOrgData : IPlugin
    {
        //set these values for your D365 instance, user credentials and Azure AD clientid/token endpoint
        string crmorg = &quot;https://XXXXX.crm.dynamics.com&quot;;
        string clientid = &quot;XXXXXXXXX&quot;;
        string username = &quot;lucasalexander@XXXXXX.onmicrosoft.com&quot;;
        string userpassword = &quot;XXXXXXXXXXXX&quot;;
        string tokenendpoint = &quot;https://login.microsoftonline.com/XXXXXXXXXXX/oauth2/token&quot;;

        //relative path to web api endpoint
        string crmwebapi = &quot;/api/data/v8.2&quot;;

        //web api query to execute - in this case all accounts that start with &quot;F&quot;
        string crmwebapipath = &quot;/accounts?$select=name,accountid&amp;$filter=startswith(name,'F')&quot;;

        public void Execute(IServiceProvider serviceProvider)
        {
            //basic plugin set-up stuff
            IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            IOrganizationServiceFactory servicefactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = servicefactory.CreateOrganizationService(context.UserId);
            ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

            try
            {
                //instantiate a new entity collection to hold the records we'll return later
                EntityCollection results = new EntityCollection();

                //build the authorization request for Azure AD
                var reqstring = &quot;client_id=&quot; + clientid;
                reqstring += &quot;&amp;resource=&quot; + Uri.EscapeUriString(crmorg);
                reqstring += &quot;&amp;username=&quot; + Uri.EscapeUriString(username);
                reqstring += &quot;&amp;password=&quot; + Uri.EscapeUriString(userpassword);
                reqstring += &quot;&amp;grant_type=password&quot;;

                //make the Azure AD authentication request
                WebRequest req = WebRequest.Create(tokenendpoint);
                req.ContentType = &quot;application/x-www-form-urlencoded&quot;;
                req.Method = &quot;POST&quot;;
                byte[] bytes = System.Text.Encoding.ASCII.GetBytes(reqstring);
                req.ContentLength = bytes.Length;
                System.IO.Stream os = req.GetRequestStream();
                os.Write(bytes, 0, bytes.Length);
                os.Close();

                HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
                StreamReader tokenreader = new StreamReader(resp.GetResponseStream());
                string responseBody = tokenreader.ReadToEnd();
                tokenreader.Close();

                //deserialize the Azure AD token response and get the access token to supply with the web api query
                var tokenresponse = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(responseBody);
                var token = tokenresponse[&quot;access_token&quot;];

                //make the web api query
                WebRequest crmreq = WebRequest.Create(crmorg+crmwebapi+crmwebapipath);
                crmreq.Headers = new WebHeaderCollection();

                //use the access token from earlier as the authorization header bearer value
                crmreq.Headers.Add(&quot;Authorization&quot;, &quot;Bearer &quot; + token);
                crmreq.Headers.Add(&quot;OData-MaxVersion&quot;, &quot;4.0&quot;);
                crmreq.Headers.Add(&quot;OData-Version&quot;, &quot;4.0&quot;);
                crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.maxpagesize=500&quot;);
                crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.include-annotations=OData.Community.Display.V1.FormattedValue&quot;);
                crmreq.ContentType = &quot;application/json; charset=utf-8&quot;;
                crmreq.Method = &quot;GET&quot;;

                HttpWebResponse crmresp = (HttpWebResponse)crmreq.GetResponse();
                StreamReader crmreader = new StreamReader(crmresp.GetResponseStream());
                string crmresponseBody = crmreader.ReadToEnd();
                crmreader.Close();

                //deserialize the response
                var crmresponseobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(crmresponseBody);

                //loop through the response values
                foreach (var row in crmresponseobj[&quot;value&quot;].Children())
                {
                    //create a new virtual entity of type lpa_otheraccount
                    Entity verow = new Entity(&quot;lpa_otheraccount&quot;);
                    //verow[&quot;lpa_otheraccountid&quot;] = Guid.NewGuid();
                    //verow[&quot;lpa_name&quot;] = ((Newtonsoft.Json.Linq.JValue)row[&quot;name&quot;]).Value.ToString();
                    verow[&quot;lpa_otheraccountid&quot;] = (Guid)row[&quot;accountid&quot;];
                    verow[&quot;lpa_name&quot;] = (string)row[&quot;name&quot;];

                    //add it to the collection
                    results.Entities.Add(verow);
                }

                //return the results
                context.OutputParameters[&quot;BusinessEntityCollection&quot;] = results;
            }
            catch (Exception e)
            {
                tracingService.Trace($&quot;{e.Message} {e.StackTrace}&quot;);
                if (e.InnerException != null)
                    tracingService.Trace($&quot;{e.InnerException.Message} {e.InnerException.StackTrace}&quot;);

                throw new InvalidPluginExecutionException(e.Message);
            }
        }
    }
}
</code></pre>
<p>Because the plugin uses JSON.Net, you'll need to use ILMerge to bundle the Newtonsoft.Json.dll assembly with your compiled plugin before you deploy it to Dynamics 365.</p>
<h4 id="settingupthevirtualentity">Setting up the virtual entity</h4>
<p>After you've deployed the plugin using the plugin registration tool, register a new data provider. When the data provider registration window opens, first create a new data source entity.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/05/2018-05-23_14-28-33.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Complete the details for the data source and save it.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/05/2018-05-23_13-53-40.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Complete the rest of the details for the data provider and save it. <img src="https://alexanderdevelopment.net/content/images/2018/05/2018-05-23_13-53-24.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>You should now see a new data provider and data source. <img src="https://alexanderdevelopment.net/content/images/2018/05/PluginRegistration_2018-05-23_13-53-57.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Open the Dynamics 365 web UI, and go to settings-&gt;administration-&gt;virtual entity data sources. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_13-54-52.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Click the &quot;new&quot; button to create a new virtual entity data source. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_13-55-21.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>In the window that pops up, select the data provider you created earlier. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_13-55-34.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Give your new virtual entity data source a name and save it. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_13-55-50.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Open your solution and create a new entity. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_13-56-22.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Configure your entity as a virtual entity that uses the virtual entity data source you created previously. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_13-58-31.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>Once you save and publish the virtual entity, you can open an advanced find view that will retrieve data from your other Dynamics 365 organization and display it. <img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-23_14-07-38.png#img-thumbnail" alt="Using Dynamics 365 virtual entities to show data from an external organization"></p>
<p>If you export this data to Excel and unhide the id column, you will see that the GUIDs match the records in the external system.</p>
<p>And that's all there is to it. Happy entity virtualizing!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Using ML.NET in an OpenFaaS function]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Last week at its annual Build conference, Microsoft announced <a href="https://www.microsoft.com/net/learn/apps/machine-learning-and-ai/ml-dotnet">ML.NET</a>, an &quot;open source and cross-platform machine learning framework&quot; that runs in .NET Core. I took a look at the <a href="https://www.microsoft.com/net/learn/apps/machine-learning-and-ai/ml-dotnet/get-started/windows">getting started</a> samples and realized ML.NET would be a great tool to use in OpenFaas functions.</p>
<p>I</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/05/17/using-ml-net-in-an-openfaas-function/</link><guid isPermaLink="false">5afe3ef397f5e30001931b56</guid><category><![CDATA[OpenFaaS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[C#]]></category><category><![CDATA[machine learning]]></category><category><![CDATA[text analysis]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Fri, 18 May 2018 03:20:22 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-17_22-16-59.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-17_22-16-59.png" alt="Using ML.NET in an OpenFaaS function"><p>Last week at its annual Build conference, Microsoft announced <a href="https://www.microsoft.com/net/learn/apps/machine-learning-and-ai/ml-dotnet">ML.NET</a>, an &quot;open source and cross-platform machine learning framework&quot; that runs in .NET Core. I took a look at the <a href="https://www.microsoft.com/net/learn/apps/machine-learning-and-ai/ml-dotnet/get-started/windows">getting started</a> samples and realized ML.NET would be a great tool to use in OpenFaas functions.</p>
<p>I decided to write a proof-of-concept function based on the ML.NET sentiment <a href="https://docs.microsoft.com/en-us/dotnet/machine-learning/tutorials/sentiment-analysis">analysis sample</a>. Because the function needs a trained model before it can run, you actually need to use a separate application to generate the model and save it as a file. Then you can include the model as part of your function deployment.</p>
<p>Here's a screenshot of my function in action. <img src="https://alexanderdevelopment.net/content/images/2018/05/Postman_2018-05-17_21-42-58.png#img-thumbnail" alt="Using ML.NET in an OpenFaaS function"></p>
<p>You can get the code for my OpenFaaS sentiment analysis function <a href="https://github.com/lucasalexander/faas-functions/tree/master/get_sentiment_mlnet">here</a>, and the code for the application that generates the model is available <a href="https://github.com/lucasalexander/mlnet-samples/tree/master/sentiment-analysis">here</a>.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 4]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is the final post in my series about building a service relay for Dynamics 365 CE with RabbitMQ and Python. In my previous <a href="https://alexanderdevelopment.net/post/2018/02/05/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-3/">post</a> in this series, I showed the Python code to make the service relay work. In today's post, I will show how you can use <a href="https://azure.microsoft.com/en-us/services/functions/">Azure</a></p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/02/07/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-4/</link><guid isPermaLink="false">5a788a53c86c8900016cf367</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[integration]]></category><category><![CDATA[Python]]></category><category><![CDATA[RabbitMQ]]></category><category><![CDATA[Azure]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 08 Feb 2018 04:00:42 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/02/simple-service-relay-2.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/02/simple-service-relay-2.png" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 4"><p>This is the final post in my series about building a service relay for Dynamics 365 CE with RabbitMQ and Python. 
In my previous <a href="https://alexanderdevelopment.net/post/2018/02/05/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-3/">post</a> in this series, I showed the Python code to make the service relay work. In today's post, I will show how you can use <a href="https://azure.microsoft.com/en-us/services/functions/">Azure Functions</a> to build a consumer service proxy in C# so client applications don't have to access your RabbitMQ broker directly, and I will also share some thoughts on security and scalability for this service relay architecture.</p>
<p>Although this simple service relay allows external consumers to get data from Dynamics 365 CE without needing to connect directly, the examples I've shown so far require that they can connect to a RabbitMQ broker. This may be problematic for a variety of reasons, so you would probably want external consumers to connect to a web service proxy that would write requests to and read responses from the RabbitMQ broker.</p>
<h4 id="buildingaserviceproxyfunction">Building a service proxy function</h4>
<p>You can build an Azure Functions service proxy with Python, but I don't recommend it for three reasons:</p>
<ol>
<li>Azure Functions Python support is still considered experimental.</li>
<li>Python scripts that use external libraries can run <a href="https://github.com/Azure/azure-functions-host/issues/1626">exceedingly slowly</a>.</li>
<li>Getting the environment set up is a bit of a hassle.</li>
</ol>
<p>On the other hand, building a service proxy function with C# was much easier, and it performed far better than a comparable Python function (roughly 0.5 seconds for C# compared to 5+ seconds for Python).</p>
<p>Here are the steps I took to build my C# service proxy function:</p>
<ol>
<li>Create a C# HTTP trigger function.</li>
<li>Create and upload a project.json file with a dependency on the RabbitMQ client (see below).</li>
<li>Take the &quot;RpcClient&quot; class from the <a href="https://www.rabbitmq.com/tutorials/tutorial-six-dotnet.html">RabbitMQ .Net RPC tutorial</a> and call it from within my function.</li>
</ol>
<p>Here's my project.json file:</p>
<pre><code>{
  &quot;frameworks&quot;: {
    &quot;net46&quot;:{
      &quot;dependencies&quot;: {
        &quot;RabbitMQ.Client&quot;: &quot;5.0.1&quot;
      }
    }
   }
}
</code></pre>
<p>And here's my run.csx file:</p>
<pre><code>using System.Net;
using System;
using System.Collections.Concurrent;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public static async Task&lt;HttpResponseMessage&gt; Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info(&quot;Processing request&quot;);

    // parse query parameter
    string query = req.GetQueryNameValuePairs()
        .FirstOrDefault(q =&gt; string.Compare(q.Key, &quot;query&quot;, true) == 0)
        .Value;

    // Get request body
    dynamic data = await req.Content.ReadAsAsync&lt;object&gt;();

    // Set name to query string or body data
    query = query ?? data?.query;

    var rpcClient = new RpcClient();
    
    log.Info(string.Format(&quot; [.] query start time {0}&quot;, DateTime.Now.ToString(&quot;MM/dd/yyyy hh:mm:ss.fff tt&quot;)));
    var response = rpcClient.Call(query);

    log.Info(string.Format(&quot; [.] query end time {0}&quot;, DateTime.Now.ToString(&quot;MM/dd/yyyy hh:mm:ss.fff tt&quot;)));
    rpcClient.Close();

    return req.CreateResponse(HttpStatusCode.OK, response);
}

public class RpcClient
{
    private readonly IConnection connection;
    private readonly IModel channel;
    private readonly string replyQueueName;
    private readonly EventingBasicConsumer consumer;
    private readonly BlockingCollection&lt;string&gt; respQueue = new BlockingCollection&lt;string&gt;();
    private readonly IBasicProperties props;

    public RpcClient()
    {
        var factory = new ConnectionFactory() { HostName = &quot;RABBITHOST&quot;, UserName=&quot;RABBITUSER&quot;, Password=&quot;RABBITUSERPASS&quot;  };

        connection = factory.CreateConnection();
        channel = connection.CreateModel();
        replyQueueName = channel.QueueDeclare().QueueName;
        consumer = new EventingBasicConsumer(channel);

        props = channel.CreateBasicProperties();
        var correlationId = Guid.NewGuid().ToString();
        props.CorrelationId = correlationId;
        props.ReplyTo = replyQueueName;

        consumer.Received += (model, ea) =&gt;
        {
            var body = ea.Body;
            var response = Encoding.UTF8.GetString(body);
            if (ea.BasicProperties.CorrelationId == correlationId)
            {
                respQueue.Add(response);
            }
        };
    }

    public string Call(string message)
    {
        var messageBytes = Encoding.UTF8.GetBytes(message);
        channel.BasicPublish(
            exchange: &quot;&quot;,
            routingKey: &quot;rpc_queue&quot;,
            basicProperties: props,
            body: messageBytes);

        channel.BasicConsume(
            consumer: consumer,
            queue: replyQueueName,
            autoAck: true);

        return respQueue.Take();
    }

    public void Close()
    {
        connection.Close();
    }
}
</code></pre>
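<p>The correlation-ID check inside the <code>Received</code> handler is the heart of this RPC pattern: any message arriving on the reply queue with a non-matching correlation ID is simply dropped. As a minimal sketch (the class and method names here are my own, and the RabbitMQ delivery is replaced with plain strings so no broker is needed), the filtering logic looks like this:</p>

```csharp
using System;
using System.Collections.Concurrent;

// Illustrative only: mimics the Received handler's correlation-ID filter
// from the RpcClient above, without any RabbitMQ dependency.
public static class CorrelationFilter
{
    // Adds the response to the blocking collection only when the
    // delivery's correlation ID matches the one sent with the request.
    public static void OnResponse(
        string expectedCorrelationId,
        string deliveredCorrelationId,
        string responseBody,
        BlockingCollection<string> respQueue)
    {
        if (deliveredCorrelationId == expectedCorrelationId)
        {
            respQueue.Add(responseBody);
        }
    }
}
```

<p>A caller that published with a given correlation ID then blocks on <code>respQueue.Take()</code> until a matching reply arrives, exactly as the <code>Call</code> method does above.</p>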
<p>Here's a screenshot showing me calling the C# function with Postman.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/02/Postman_2018-02-05_22-02-52.png#img-thumbnail" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 4"></p>
<p>Since I did build a working Python function, I'll share how I did it in case you're interested. Here are the steps I took:</p>
<ol>
<li>Create a Python HTTP trigger function.</li>
<li>Install Python 3.6 via site extensions (see steps 2.1-2.4 <a href="https://stackoverflow.com/a/47213859">here</a>).</li>
<li>Install the necessary libraries using pip via <a href="https://david-obrien.net/2016/07/azure-functions-kudu/">KUDU</a>.</li>
</ol>
<p>Here's the Python function code:</p>
<pre><code>import os
import sys
import json
import pika
import uuid
import datetime

class CrmRpcClient(object):
    def __init__(self):
        #RabbitMQ connection details
        self.rabbituser = 'RABBITUSERNAME'
        self.rabbitpass = 'RABBITUSERPASS'
        self.rabbithost = 'RABBITHOST' 
        self.rabbitport = 5672
        self.rabbitqueue = 'rpc_queue'
        rabbitcredentials = pika.PlainCredentials(self.rabbituser, self.rabbitpass)
        rabbitparameters = pika.ConnectionParameters(host=self.rabbithost,
                                    port=self.rabbitport,
                                    virtual_host='/',
                                    credentials=rabbitcredentials)

        self.rabbitconn = pika.BlockingConnection(rabbitparameters)

        self.channel = self.rabbitconn.channel()

        #create an anonymous exclusive callback queue
        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue

        self.channel.basic_consume(self.on_response, no_ack=True,
                                   queue=self.callback_queue)

    #callback method for when a response is received - note the check for correlation id
    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body

    #method to make the initial request
    def call(self, n):
        self.response = None
        #generate a new correlation id
        self.corr_id = str(uuid.uuid4())

        #publish the message to the rpc_queue - note the reply_to property is set to the callback queue from above
        self.channel.basic_publish(exchange='',
                                   routing_key=self.rabbitqueue,
                                   properties=pika.BasicProperties(
                                         reply_to = self.callback_queue,
                                         correlation_id = self.corr_id,
                                         ),
                                   body=n)
        while self.response is None:
            self.rabbitconn.process_data_events()
        return self.response

#instantiate an rpc client
crm_rpc = CrmRpcClient()

postreqdata = json.loads(open(os.environ['req']).read())
query = postreqdata['query']

print(&quot; [.] query start time %r&quot; % str(datetime.datetime.now()))
queryresponse = crm_rpc.call(query)
print(&quot; [.] query end time %r&quot; % str(datetime.datetime.now()))
response = open(os.environ['res'], 'w')
response.write(queryresponse.decode())
response.close()
</code></pre>
<p>Here's a screenshot showing me calling the Python function with Postman.<img src="https://alexanderdevelopment.net/content/images/2018/02/Postman_2018-02-05_22-10-20.png#img-thumbnail" alt="Building a simple service relay for Dynamics 365 CE with RabbitMQ and Python - part 4"></p>
<p>Note the difference in time between the two functions: 5.62 seconds for Python versus 0.46 seconds for C#!</p>
<h4 id="securityandscalability">Security and scalability</h4>
<p>If you decide to use this approach in production, I'd suggest you carefully consider both security and scalability. Obviously the overall solution will only be as secure as your RabbitMQ broker and communications between the broker and its clients, so you'll want to look at best practices for access control and securing the communications with TLS. Here are some links for further reading on those subjects:</p>
<ul>
<li>TLS - <a href="https://www.rabbitmq.com/ssl.html">https://www.rabbitmq.com/ssl.html</a></li>
<li>Access control - <a href="https://www.rabbitmq.com/access-control.html">https://www.rabbitmq.com/access-control.html</a></li>
</ul>
<p>As for scalability, the approach I've shown creates a separate response queue for each consumer, which can have problems scaling, especially if you are using a RabbitMQ cluster. You may want to look at the <a href="https://www.rabbitmq.com/direct-reply-to.html">&quot;direct reply-to&quot;</a> approach instead. For an interesting real-world overview of using direct reply-to, take a look at this <a href="https://facundoolano.wordpress.com/2016/06/26/real-world-rpc-with-rabbitmq-and-node-js/">blog post</a>.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>I hope you've enjoyed this series and that it has given you some ideas about how to implement service relays in your Dynamics 365 CE projects. As I worked through the examples, I certainly learned a few new things, especially when I created my Python service proxy in Azure Functions.</p>
<p>Here are links to all the previous posts in this series.</p>
<ol>
<li><a href="https://alexanderdevelopment.net/post/2018/01/30/relaying-external-queries-to-on-premise-dynamics-365-ce-orgs-with-rabbitmq-and-python/">Part 1</a> - Series introduction</li>
<li><a href="https://alexanderdevelopment.net/post/2018/02/01/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-2/">Part 2</a> - Solution prerequisites</li>
<li><a href="https://alexanderdevelopment.net/post/2018/02/05/building-a-simple-service-relay-for-dynamics-365-ce-with-rabbitmq-and-python-part-3/">Part 3</a> - Python code for the consumer and listener processes</li>
</ol>
<p>What do you think about this approach? Is it something you think you'd use in production? Let us know in the comments!</p>
</div>]]></content:encoded></item><item><title><![CDATA[A Dynamics 365 local message listener for web client notifications - part 2]]></title><description><![CDATA[<div class="kg-card-markdown"><p>In <a href="https://alexanderdevelopment.net/post/2017/07/19/a-dynamics-365-local-message-listener-for-web-client-notifications-part-1">part one</a> of this series, I discussed an approach for passing notifications from local applications to the Dynamics 365 web client through a message listener process that runs on an end user's PC. Today I will show the code I used to build the message listener and the code</p></div>]]></description><link>https://alexanderdevelopment.net/post/2017/07/21/a-dynamics-365-local-message-listener-for-web-client-notifications-part-2/</link><guid isPermaLink="false">5a5837246636a30001b978cd</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[integration]]></category><category><![CDATA[utilities]]></category><category><![CDATA[C#]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Fri, 21 Jul 2017 12:30:00 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>In <a href="https://alexanderdevelopment.net/post/2017/07/19/a-dynamics-365-local-message-listener-for-web-client-notifications-part-1">part one</a> of this series, I discussed an approach for passing notifications from local applications to the Dynamics 365 web client through a message listener process that runs on an end user's PC. Today I will show the code I used to build the message listener and the code to consume notifications in Dynamics 365.</p>
<h4 id="themessagelistener">The message listener</h4>
<p>My message listener is a lightweight web server built in C# using the &quot;WebServer&quot; class from this <a href="https://codehosting.net/blog/BlogEngine/post/Simple-C-Web-Server">blog post</a>.</p>
<p>I've made a couple of small modifications to the original web server code to enable cross-origin resource sharing (CORS). Otherwise requests from the Dynamics 365 web resource would fail because the sites have different origins. Here's my updated WebServer class:</p>
<pre><code>/*
 * The MIT License (MIT)
 * 
 * Copyright (c) 2013 David's Blog (www.codehosting.net) 
 * 
 * Permission is hereby granted, free of charge, to any person obtaining a copy of this software and 
 * associated documentation files (the &quot;Software&quot;), to deal in the Software without restriction, 
 * including without limitation the rights to use, copy, modify, merge, publish, distribute, 
 * sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is 
 * furnished to do so, subject to the following conditions:
 * 
 * The above copyright notice and this permission notice shall be included in all copies or 
 * substantial portions of the Software.
 * 
 * THE SOFTWARE IS PROVIDED &quot;AS IS&quot;, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, 
 * INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR 
 * PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE 
 * FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR 
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER 
 * DEALINGS IN THE SOFTWARE.
*/

//The code below is mostly taken from https://codehosting.net/blog/BlogEngine/post/Simple-C-Web-Server
//I've added the headers to enable CORS requests.

using System;
using System.Net;
using System.Threading;
using System.Linq;
using System.Text;

namespace SimpleWebServer
{
    public class WebServer
    {
        private readonly HttpListener _listener = new HttpListener();
        private readonly Func&lt;HttpListenerRequest, string&gt; _responderMethod;

        public WebServer(string[] prefixes, Func&lt;HttpListenerRequest, string&gt; method)
        {
            if (!HttpListener.IsSupported)
                throw new NotSupportedException(
                    &quot;Needs Windows XP SP2, Server 2003 or later.&quot;);

            // URI prefixes are required, for example 
            // &quot;http://localhost:8080/index/&quot;.
            if (prefixes == null || prefixes.Length == 0)
                throw new ArgumentException(&quot;prefixes&quot;);

            // A responder method is required
            if (method == null)
                throw new ArgumentException(&quot;method&quot;);

            foreach (string s in prefixes)
                _listener.Prefixes.Add(s);

            _responderMethod = method;
            _listener.Start();
        }

        public WebServer(Func&lt;HttpListenerRequest, string&gt; method, params string[] prefixes)
            : this(prefixes, method) { }

        public void Run()
        {
            ThreadPool.QueueUserWorkItem((o) =&gt;
            {
                //Console.WriteLine(&quot;Webserver running...&quot;);
                try
                {
                    while (_listener.IsListening)
                    {
                        ThreadPool.QueueUserWorkItem((c) =&gt;
                        {
                            var ctx = c as HttpListenerContext;
                            try
                            {
                                string rstr = _responderMethod(ctx.Request);
                                byte[] buf = Encoding.UTF8.GetBytes(rstr);
                                if (ctx.Request.HttpMethod == &quot;OPTIONS&quot;)
                                {
                                    ctx.Response.AddHeader(&quot;Access-Control-Allow-Headers&quot;, &quot;Content-Type, Accept, X-Requested-With&quot;);
                                    ctx.Response.AddHeader(&quot;Access-Control-Allow-Methods&quot;, &quot;GET, POST&quot;);
                                    ctx.Response.AddHeader(&quot;Access-Control-Max-Age&quot;, &quot;1728000&quot;);
                                }
                                ctx.Response.AddHeader(&quot;Access-Control-Allow-Origin&quot;, &quot;*&quot;);
                                ctx.Response.ContentLength64 = buf.Length;
                                ctx.Response.OutputStream.Write(buf, 0, buf.Length);
                            }
                            catch { } // suppress any exceptions
                            finally
                            {
                                // always close the stream
                                ctx.Response.OutputStream.Close();
                            }
                        }, _listener.GetContext());
                    }
                }
                catch { } // suppress any exceptions
            });
        }

        public void Stop()
        {
            _listener.Stop();
            _listener.Close();
        }
    }
}
</code></pre>
<p>Here is the code for a proof-of-concept application that runs the listener web server. It allows you to enter messages directly on the command line for easy testing, but ideally you would configure your message listener to run as a service or in some other mostly headless fashion. You will need the <a href="http://www.newtonsoft.com/json">JSON.Net</a> library to compile this code.</p>
<pre><code>using System;
using System.Collections.Generic;
using System.Net;
using SimpleWebServer;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

namespace CrmIntegrationServer
{
    class Program
    {
        static List&lt;string&gt; _messages;

        static void Main(string[] args)
        {
            _messages = new List&lt;string&gt;();
            WebServer ws = new WebServer(ProcessRequest, &quot;http://localhost:8080/&quot;);
            ws.Run();
            Console.WriteLine(&quot;Demo CRM integration server. Type 'CTRL+C' to exit.&quot;);
            while (true)
            {
                Console.WriteLine(&quot;Enter your message:&quot;);
                string receivedline = Console.ReadLine();
                _messages.Add(&quot;{\&quot;data\&quot;:\&quot;&quot;+receivedline+&quot;\&quot;}&quot;);
            }
        }

        public static string ProcessRequest(HttpListenerRequest request)
        {
            string receivedmessage = string.Empty;
            string response = string.Empty;
            if (request.HasEntityBody)
            {
                using (System.IO.Stream body = request.InputStream) // here we have data
                {
                    using (System.IO.StreamReader reader = new System.IO.StreamReader(body, request.ContentEncoding))
                    {
                        receivedmessage = reader.ReadToEnd();
                    }
                    JObject reqobject = JObject.Parse(receivedmessage);
                    switch (reqobject.Value&lt;string&gt;(&quot;action&quot;).ToUpper())
                    {
                        case &quot;QUEUE&quot;:
                            string messagebody = reqobject[&quot;messagebody&quot;].ToString(Formatting.None);
                            _messages.Add(messagebody);
                            response =  &quot;{\&quot;result\&quot;:\&quot;success\&quot;}&quot;;
                            break;
                        case &quot;READ&quot;:
                            response = JsonConvert.SerializeObject(_messages);
                            _messages.Clear();
                            break;
                    }
                }
            }
            return response;
        }
    }
}
</code></pre>
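<p>To make the listener's little protocol concrete, here are illustrative payloads matching the <code>ProcessRequest</code> switch above. A producer queues a message with an <code>action</code> of <code>queue</code> (the <code>messagebody</code> value here is just an example):</p>

```json
{ "action": "queue", "messagebody": { "data": "Build complete" } }
```

<p>The listener replies with <code>{&quot;result&quot;:&quot;success&quot;}</code>. A consumer then posts <code>{ &quot;action&quot;: &quot;read&quot; }</code> and receives the queued messages as a JSON array of strings, e.g. <code>[&quot;{\&quot;data\&quot;:\&quot;Build complete\&quot;}&quot;]</code>, which is why the web resource below calls <code>JSON.parse</code> on each array element.</p>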
<h4 id="thedynamics365clientjavascript">The Dynamics 365 client JavaScript</h4>
<p>Finally, here's the code for the Dynamics 365 web resource that reads the messages. I have it set to poll for new messages every 100 milliseconds, which seems plenty fast, but you can experiment to find the value that works best for you. Because it's just making local requests and not putting additional load on your Dynamics 365 org, you don't need to worry about negatively impacting performance.</p>
<pre><code>&lt;html&gt;  
&lt;head&gt;  
    &lt;title&gt;listener demo &lt;/title&gt;
    &lt;script src=&quot;ClientGlobalContext.js.aspx&quot; type=&quot;text/javascript&quot;&gt;&lt;/script&gt;
    &lt;script src=&quot;https://code.jquery.com/jquery-2.2.4.min.js&quot; type=&quot;text/javascript&quot;&gt;&lt;/script&gt;
    &lt;script&gt;
	var intervalId = null;
	var openrequest = false;
	var requestData = function(){
		if(!openrequest){
			openrequest = true;
			var request = $.ajax({
				url: 'http://localhost:8080',
				type: 'POST',
				contentType: 'application/json',
				dataType: &quot;json&quot;,
				data: JSON.stringify({action:'read'})
			});
			request.done(function( msg ) {
				openrequest = false;
				for(var i=0;i&lt;msg.length;i++){
					$('#messageDiv').append(JSON.parse(msg[i]).data + '&lt;br /&gt;')
				}
			});
			request.fail(function( jqXHR, textStatus ) {
				openrequest = false;
				clearInterval(intervalId);
				$('#messageDiv').append('Request failed: ' + textStatus );
			});
		}
	}
	var intervaltime = 100;
	$(function(){
		intervalId = setInterval(requestData, intervaltime);
    });
	
    &lt;/script&gt;
&lt;/head&gt;  
&lt;body&gt;  
	&lt;div id='messageDiv'&gt;&lt;/div&gt;
&lt;/body&gt;  
&lt;/html&gt;
</code></pre>
<h4 id="finalthoughts">Final thoughts</h4>
<ol>
<li>You could modify my proof-of-concept listener application to support outbound integrations with local workstation resources. For example, to start a local program based on a Dynamics 365 form event, you could post a specific kind of JSON request to the listener from Dynamics 365, and the listener would then start the program.</li>
<li>Keeping the queued messages as a list of strings is probably not the best long-term approach, especially if you have different types of messages passing through the listener that you want to handle differently. In that case, you'd want to store the messages in a data structure that allows you to retrieve just a particular kind of message.</li>
<li>The list of queued messages in my proof-of-concept application does not persist if the listener process stops running. This probably isn't a big deal because the whole idea of the message listener is to facilitate real-time communication, but I wanted to make the point explicitly clear.</li>
</ol>
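<p>To illustrate the second point, here is a minimal sketch of a message store keyed by message type instead of a flat list of strings. The class and its members are hypothetical, not part of the listener above:</p>

```csharp
using System.Collections.Generic;

// Hypothetical replacement for the flat _messages list: messages are
// grouped under a type key so a reader can drain just one kind.
public class TypedMessageStore
{
    private readonly Dictionary<string, List<string>> _messages =
        new Dictionary<string, List<string>>();

    // Queues a message body under the given type.
    public void Queue(string messageType, string messageBody)
    {
        if (!_messages.ContainsKey(messageType))
            _messages[messageType] = new List<string>();
        _messages[messageType].Add(messageBody);
    }

    // Returns and clears all queued messages of one type.
    public List<string> Read(string messageType)
    {
        if (!_messages.TryGetValue(messageType, out var list))
            return new List<string>();
        _messages.Remove(messageType);
        return list;
    }
}
```

<p>The &quot;QUEUE&quot; case in the listener could pass a type value taken from the request JSON, and the &quot;READ&quot; case could then drain only the requested type.</p>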
<p>What do you think about this approach? Can you see yourself using it on your projects? Let us know in the comments!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Scheduling Dynamics 365 workflows with Azure Functions and C#]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Over the past few days, I've shared two approaches for scheduling Dynamics 365 workflows using Azure Functions and the Dynamics 365 Web API. One uses <a href="https://alexanderdevelopment.net/post/2016/11/25/scheduling-dynamics-365-workflows-with-azure-functions/">Node.js</a>, and the other uses <a href="https://alexanderdevelopment.net/post/2016/11/29/scheduling-dynamics-365-workflows-with-azure-functions-and-python/">Python</a>. Because most Dynamics CRM developers are probably more familiar with C# than Node.js or Python, I also</p></div>]]></description><link>https://alexanderdevelopment.net/post/2016/11/29/scheduling-dynamics-365-workflows-with-azure-functions-and-csharp/</link><guid isPermaLink="false">5a5837246636a30001b97878</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[Azure]]></category><category><![CDATA[demonstrations]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Wed, 30 Nov 2016 02:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2016/11/chrome_2016-11-29_13-07-59.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2016/11/chrome_2016-11-29_13-07-59.png" alt="Scheduling Dynamics 365 workflows with Azure Functions and C#"><p>Over the past few days, I've shared two approaches for scheduling Dynamics 365 workflows using Azure Functions and the Dynamics 365 Web API. 
One uses <a href="https://alexanderdevelopment.net/post/2016/11/25/scheduling-dynamics-365-workflows-with-azure-functions/">Node.js</a>, and the other uses <a href="https://alexanderdevelopment.net/post/2016/11/29/scheduling-dynamics-365-workflows-with-azure-functions-and-python/">Python</a>. Because most Dynamics CRM developers are probably more familiar with C# than Node.js or Python, I also created an equivalent C# version. Just like with my previous examples, this version calls the Web API directly instead of using any SDK assemblies.</p>
<p>Here's my code. It does the following:</p>
<ol>
<li>Request an OAuth token using a username and password.</li>
<li>Query the Dynamics 365 Web API for accounts with names that start with the letter &quot;F.&quot;</li>
<li>Execute a workflow for each record that was retrieved in the previous step. The workflow that I am executing is the same workflow I used in my <a href="https://alexanderdevelopment.net/post/2016/11/25/scheduling-dynamics-365-workflows-with-azure-functions/">Node.js example</a> to create a note on an account.</li>
</ol>
<pre><code>#r &quot;Newtonsoft.Json&quot;

using System;
using System.Net;
using System.IO;
using Newtonsoft.Json;

//set these values to retrieve the oauth token
static string crmorg = &quot;https://CRMORG.crm.dynamics.com&quot;;
static string clientid = &quot;00000000-0000-0000-0000-000000000000&quot;;
static string username = &quot;xxxxxx@xxxxxxxx&quot;;
static string userpassword = &quot;xxxxxxxx&quot;;
static string tokenendpoint = &quot;https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/oauth2/token&quot;;

//set these values to query your crm data
static string crmwebapihost = &quot;https://CRMORG.api.crm.dynamics.com/api/data/v8.2&quot;;
static string crmwebapipath = &quot;/accounts?$select=name,accountid&amp;$filter=startswith(name,'F')&quot;;

static string workflowid = &quot;DC8519EC-F3CE-4BC9-BB79-DF2AD70217A1&quot;;

public static void Run(TimerInfo myTimer, TraceWriter log)
{
	//build the authorization request
	var reqstring = &quot;client_id=&quot; + clientid;
	//use EscapeDataString so reserved characters in the credentials are encoded correctly
	reqstring += &quot;&amp;resource=&quot; + Uri.EscapeDataString(crmorg);
	reqstring += &quot;&amp;username=&quot; + Uri.EscapeDataString(username);
	reqstring += &quot;&amp;password=&quot; + Uri.EscapeDataString(userpassword);
	reqstring += &quot;&amp;grant_type=password&quot;;

	WebRequest req = WebRequest.Create(tokenendpoint);
	req.ContentType = &quot;application/x-www-form-urlencoded&quot;;
	req.Method = &quot;POST&quot;;
	byte[] bytes = System.Text.Encoding.ASCII.GetBytes(reqstring);
	req.ContentLength = bytes.Length;
	System.IO.Stream os = req.GetRequestStream();
	os.Write(bytes, 0, bytes.Length);
	os.Close();

	HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
	StreamReader tokenreader = new StreamReader(resp.GetResponseStream());
	string responseBody = tokenreader.ReadToEnd();
	tokenreader.Close();
	var tokenresponse = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(responseBody);
	var token = tokenresponse[&quot;access_token&quot;];
	log.Info(&quot;got token&quot;);

	WebRequest crmreq = WebRequest.Create(crmwebapihost + crmwebapipath);
	crmreq.Headers = new WebHeaderCollection();
	crmreq.Headers.Add(&quot;Authorization&quot;, &quot;Bearer &quot; + token);
	crmreq.Headers.Add(&quot;OData-MaxVersion&quot;, &quot;4.0&quot;);
	crmreq.Headers.Add(&quot;OData-Version&quot;, &quot;4.0&quot;);
	crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.maxpagesize=500&quot;);
	crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.include-annotations=OData.Community.Display.V1.FormattedValue&quot;);
	crmreq.ContentType = &quot;application/json; charset=utf-8&quot;;
	crmreq.Method = &quot;GET&quot;;

	HttpWebResponse crmresp = (HttpWebResponse)crmreq.GetResponse();
	StreamReader crmreader = new StreamReader(crmresp.GetResponseStream());
	string crmresponseBody = crmreader.ReadToEnd();
	crmreader.Close();
	var crmresponseobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(crmresponseBody);
	foreach(var row in crmresponseobj[&quot;value&quot;].Children())
	{
		log.Info(row[&quot;name&quot;].ToString());
		runWorkflow(token.ToString(), new Guid(row[&quot;accountid&quot;].ToString()), log);

	}

}

static void runWorkflow(string token, Guid entityid, TraceWriter log)
{
	var crmwebapiworkflowpath = &quot;/workflows(&quot; + workflowid + &quot;)/Microsoft.Dynamics.CRM.ExecuteWorkflow&quot;;
	WebRequest req = WebRequest.Create(crmwebapihost + crmwebapiworkflowpath);

	log.Info(&quot;  calling workflow for &quot; + entityid);

	string reqobject = &quot;{ \&quot;EntityId\&quot;: \&quot;&quot; + entityid + &quot;\&quot;}&quot;;
    
	req.Headers.Add(&quot;Authorization&quot;, &quot;Bearer &quot; + token);
	req.Headers.Add(&quot;OData-MaxVersion&quot;, &quot;4.0&quot;);
	req.Headers.Add(&quot;OData-Version&quot;, &quot;4.0&quot;);
	req.Headers.Add(&quot;Prefer&quot;, &quot;odata.maxpagesize=500&quot;);
	req.Headers.Add(&quot;Prefer&quot;, &quot;odata.include-annotations=OData.Community.Display.V1.FormattedValue&quot;);
	req.ContentType = &quot;application/json; charset=utf-8&quot;;
    req.Method = &quot;POST&quot;;
	
	byte[] bytes = System.Text.Encoding.ASCII.GetBytes(reqobject);
	req.ContentLength = bytes.Length;
	System.IO.Stream os = req.GetRequestStream();
	os.Write(bytes, 0, bytes.Length);
	os.Close();

	HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
	StreamReader reader = new StreamReader(resp.GetResponseStream());
	string responseBody = reader.ReadToEnd();
	reader.Close();
	
    var responseobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(responseBody);
	if(resp.StatusCode == HttpStatusCode.OK)
	{
		log.Info(&quot;    success &quot; + entityid.ToString());
	}
	else
	{
		log.Info(&quot;    error &quot; + entityid.ToString());
	}
}
</code></pre>
<p>To set this up in your Azure tenant, set up a Functions App and a new C# timer trigger function like I described in the <a href="https://alexanderdevelopment.net/post/2016/11/25/scheduling-dynamics-365-workflows-with-azure-functions/">Node.js example</a>. Copy the C# code from above and paste it into the function editor window, update the configuration values at the top of the code for your own Dynamics 365 organization and click save. That's all there is to it.</p>
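<p>For reference, the schedule for a timer trigger function lives in the function's <code>function.json</code> file. A minimal sketch is below; the six-field CRON expression shown is just an example (it fires daily at 8:00 AM UTC), so adjust it to your own schedule. Note that the binding <code>name</code> must match the <code>TimerInfo myTimer</code> parameter in the <code>Run</code> signature above.</p>

```json
{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 8 * * *"
    }
  ],
  "disabled": false
}
```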
<p><em>If you're wondering about that <code>#r &quot;Newtonsoft.Json&quot;</code> at the top of the C# code, take a look <a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-csharp#referencing-external-assemblies">here</a>.</em></p>
</div>]]></content:encoded></item><item><title><![CDATA[Updated solution for scheduling recurring Dynamics CRM workflows]]></title><description><![CDATA[<div class="kg-card-markdown"><p>About three years ago I released an <a href="https://alexanderdevelopment.net/post/2013/05/19/scheduling-recurring-dynamics-crm-workflows-with-fetchxml/">open source Dynamics CRM solution for scheduling and executing recurring workflows</a>. My solution would execute a FetchXML query to return a set of records and then start a workflow for each of those records without requiring any external processes or tools. This is</p></div>]]></description><link>https://alexanderdevelopment.net/post/2016/09/19/updated-solution-for-scheduling-recurring-dynamics-crm-workflows/</link><guid isPermaLink="false">5a5837246636a30001b97843</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[utilities]]></category><category><![CDATA[FetchXML]]></category><category><![CDATA[C#]]></category><category><![CDATA[programming]]></category><category><![CDATA[process automation]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Mon, 19 Sep 2016 21:07:07 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2016/09/chrome_2016-09-19_15-27-18-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2016/09/chrome_2016-09-19_15-27-18-1.png" alt="Updated solution for scheduling recurring Dynamics CRM workflows"><p>About three years ago I released an <a href="https://alexanderdevelopment.net/post/2013/05/19/scheduling-recurring-dynamics-crm-workflows-with-fetchxml/">open source Dynamics CRM solution for scheduling and executing recurring workflows</a>. My solution would execute a FetchXML query to return a set of records and then start a workflow for each of those records without requiring any external processes or tools. 
This is a generalized approach to solving a class of problems that includes the following scenarios:</p>
<ol>
<li>The birthday greetings problem: How can you, on a daily basis, send an e-mail to every contact with a birthday = today (where the date value for today is obviously different every day)?</li>
<li>The monthly update problem: How can you, on a monthly basis, generate an activity for every account with status reason = X (where it's important that the process only runs on a certain day of the month based on status reason values as of that exact date)?</li>
</ol>
<p>Of all the CRM sample code and solutions I've ever shared, I think this was probably the most popular, but it had one glaring flaw. My original solution would only retrieve a maximum of 5,000 records per run because it didn't include any result paging code. Today I have released an updated version of my <a href="https://github.com/lucasalexander/AlexanderDevelopment.ProcessRunner/releases/tag/v1.1">solution for CRM 2016</a> that does include result paging, and I've also moved hosting for the solution source code to <a href="https://github.com/lucasalexander/AlexanderDevelopment.ProcessRunner">GitHub</a>.</p>
<h4 id="howitworks">How it works</h4>
<p>In case you're unfamiliar with the previous version of my solution, my approach requires three components (the names have changed in the updated version):</p>
<ol>
<li>A custom workflow activity (AlexanderDevelopment.WorkflowScheduler) that can execute a supplied FetchXML query and initiate the workflow for each retrieved record.</li>
<li>A custom entity (Recurring process) to hold the FetchXML query and scheduling details.</li>
<li>A workflow (Recurring workflow runner) to run the AlexanderDevelopment.WorkflowScheduler activity on a recurring schedule.</li>
</ol>
<p>A &quot;Recurring process&quot; record is created, which starts a corresponding &quot;Recurring workflow runner&quot; workflow in a timeout state. When the current time reaches the next run date, the &quot;Recurring workflow runner&quot; workflow advances the next run date and then initiates the AlexanderDevelopment.WorkflowScheduler activity with the FetchXML query and workflow lookup from the &quot;Recurring process&quot; record. The AlexanderDevelopment.WorkflowScheduler activity executes the FetchXML query, loops through the results and starts the workflow specified in the lookup for each record. A newly started &quot;Recurring workflow runner&quot; workflow then waits for the next run date to start the process again.</p>
<p>Here's what a &quot;Recurring process&quot; record looks like:<br>
<img src="https://alexanderdevelopment.net/content/images/2016/09/chrome_2016-09-19_15-27-18.png#img-thumbnail" alt="Updated solution for scheduling recurring Dynamics CRM workflows"></p>
<p>Here's the &quot;Recurring workflow runner&quot; workflow:<br>
<img src="https://alexanderdevelopment.net/content/images/2016/09/chrome_2016-09-19_15-28-10.png#img-thumbnail" alt="Updated solution for scheduling recurring Dynamics CRM workflows"></p>
<p>The default result page size is 1,000, but you can change it in the &quot;Recurring workflow runner&quot; workflow definition:<br>
<img src="https://alexanderdevelopment.net/content/images/2016/09/chrome_2016-09-19_15-28-57.png#img-thumbnail" alt="Updated solution for scheduling recurring Dynamics CRM workflows"></p>
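<p>Under the hood, FetchXML result paging works by adding <code>page</code>, <code>count</code> and <code>paging-cookie</code> attributes to the <code>&lt;fetch&gt;</code> element between retrievals. Here's a rough sketch of a helper for that step; the <code>AddPaging</code> name is mine for illustration, not taken from the solution source:</p>

```csharp
using System.Xml.Linq;

public static class FetchPaging
{
    // Injects page/count/paging-cookie attributes into a FetchXML string so the
    // next RetrieveMultiple call returns the following page of results.
    public static string AddPaging(string fetchXml, int page, int count, string pagingCookie)
    {
        var doc = XDocument.Parse(fetchXml);
        doc.Root.SetAttributeValue("page", page);
        doc.Root.SetAttributeValue("count", count);
        if (!string.IsNullOrEmpty(pagingCookie))
            doc.Root.SetAttributeValue("paging-cookie", pagingCookie);
        return doc.ToString(SaveOptions.DisableFormatting);
    }
}
```

<p>The workflow activity would call something like this in a loop: run the query, start workflows for each result, and if the returned EntityCollection reports more records, re-run with the returned paging cookie and an incremented page number.</p>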
<h4 id="puttingitalltogether">Putting it all together</h4>
<p>As mentioned above, the source code for the solution is available in a GitHub repository <a href="https://github.com/lucasalexander/AlexanderDevelopment.ProcessRunner">here</a>. You can also download a CRM solution ready to load into your system directly from the repository <a href="https://github.com/lucasalexander/AlexanderDevelopment.ProcessRunner/releases/latest">releases area</a>.</p>
<p>Happy workflow scheduling!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Get next case functionality for Dynamics CRM]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Dynamics CRM offers sophisticated tools for working with cases and service queues, but sometimes users just want a quick and simple way to get the next case to work. In today's post, I'll share an easy way to implement this functionality in your Dynamics CRM organization.</p>
<p>There are three components</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/10/01/get-next-case-functionality-for-dynamics-crm/</link><guid isPermaLink="false">5a5837226636a30001b9776b</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[C#]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[CRM 2015]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Fri, 02 Oct 2015 02:13:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/process-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/process-1.png" alt="Get next case functionality for Dynamics CRM"><p>Dynamics CRM offers sophisticated tools for working with cases and service queues, but sometimes users just want a quick and simple way to get the next case to work. In today's post, I'll share an easy way to implement this functionality in your Dynamics CRM organization.</p>
<p>There are three components to my approach:</p>
<ol>
<li>First, there is a custom workflow activity that searches a queue for unassigned cases and assigns the one that entered the queue earliest to a user. This activity returns the assigned case id and case number as output parameters.
</li><li>Second, there is a CRM action that acts as a wrapper for the custom workflow activity.
</li><li>Finally, there is a web resource that displays a button to call the CRM action and generate a link to the assigned case (or a message if there are no cases in the queue to assign).
</li></ol>
<p>Let's look at each of these pieces in more detail. If you'd rather get right to the code, scroll to the bottom of this post for a link.</p>
<h4 id="thecustomworkflowactivity">The custom workflow activity</h4>
<p>The custom workflow activity has the following parameters:</p>
<ol>
<li>Queue - input - Reference to the queue that contains the case
</li><li>User - input - Reference to the user who will be assigned the case
</li><li>Assign - input - Flag for whether to assign the case or just return the case details
</li><li>Remove from queue - input - Flag for whether to completely remove the case from the queue when it is assigned
</li><li>Case id - output - Case GUID as a string (string because it makes working with non-existing cases easier in the custom action)
</li><li>Case number - output - Case ticket number value
</li><li>Case found - output - Flag for whether a case was found
</li><li>Case title - output - Case title value
</li></ol>
<p>The activity then executes a FetchXML query to retrieve unassigned cases in ascending order of the date they entered the queue. If the &quot;assign&quot; flag is set to true, the first case is assigned to the user, and then the details are returned as output parameters. If the flag is set to false, the case details are returned as output parameters, but the case is not updated.</p>
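<p>The exact query is in the source code linked below, but conceptually it looks something like the following sketch; the attribute names here are my approximation of the queueitem entity, and 112 is the object type code for incident:</p>

```xml
<fetch mapping="logical" count="1">
  <entity name="queueitem">
    <attribute name="objectid"/>
    <attribute name="queueitemid"/>
    <filter type="and">
      <condition attribute="queueid" operator="eq" value="{queue-guid-here}"/>
      <condition attribute="workerid" operator="null"/>
      <condition attribute="objecttypecode" operator="eq" value="112"/>
    </filter>
    <order attribute="createdon" descending="false"/>
  </entity>
</fetch>
```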
<h4 id="thecrmaction">The CRM action</h4>
<p>The action is the simplest part of the solution. It just takes input parameters to pass to the custom activity and then returns the output parameters.</p>
<p><img src="https://alexanderdevelopment.net/content/images/2015/10/process.png#img-thumbnail" alt="Get next case functionality for Dynamics CRM"></p>
<h4 id="thewebresource">The web resource</h4>
<p>Finally, the web resource uses JavaScript to call the action and display a link to the assigned case. If no &quot;next&quot; case can be found, a message is displayed to the user instead. In my solution, the queue id and the assign/remove-from-queue flags are supplied as encoded query string parameters.</p>
<p>The image below shows the output after the get next button is clicked:<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/button.png#img-thumbnail" alt="Get next case functionality for Dynamics CRM"></p>
<p>The web resource approach allows you to embed the button in a dashboard or CRM form through an iframe, but you could also trigger the action from a ribbon button using similar JavaScript.</p>
<h4 id="thecode">The code</h4>
<p>You can download all the custom code and a CRM solution extract from my Crm-Sample-Code repository on <a href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmQueueGetNext">GitHub</a>. Happy case management!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 5]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is the final post in my five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ. In <a href="https://alexanderdevelopment.net/post/2015/01/20/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-3">part 3</a> and <a href="https://alexanderdevelopment.net/post/2015/01/22/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-4">part 4</a> I showed two approaches for building a Dynamics CRM plug-in that publishes notification messages to a RabbitMQ exchange. In today’s post I will show</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/01/27/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-5/</link><guid isPermaLink="false">5a5837236636a30001b977c7</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[CRM 2015]]></category><category><![CDATA[C#]]></category><category><![CDATA[JSON]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[RabbitMQ]]></category><category><![CDATA[integration]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Tue, 27 Jan 2015 18:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-1.png" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 5"><p>This is the final post in my five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ.
In <a href="https://alexanderdevelopment.net/post/2015/01/20/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-3">part 3</a> and <a href="https://alexanderdevelopment.net/post/2015/01/22/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-4">part 4</a> I showed two approaches for building a Dynamics CRM plug-in that publishes notification messages to a RabbitMQ exchange. In today’s post I will show how to create a Windows console application that reads messages from a queue and writes the data to Dynamics CRM. The code for this application is available on <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub</a> in the LeadWriterSample project under the LucasCrmMessageQueueTools solution.</p>
<h4 id="theapproach">The approach</h4>
<p>This application is extraordinarily simple. On startup it prompts the user to supply connection information for the RabbitMQ queue that it will monitor as well as a Dynamics CRM connection string. It then monitors the queue for new JSON-formatted messages. When new messages arrive, it attempts to deserialize them into a lightweight &quot;leadtype&quot; object, and then it creates new lead records in CRM. Once a message is successfully processed and a lead is created, the application then sends a confirmation back to RabbitMQ so that the message can be removed from the queue.</p>
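<p>Based on the field mappings in the code below, the lightweight &quot;leadtype&quot; object is essentially a simple POCO along these lines (a sketch; the actual class is in the GitHub project):</p>

```csharp
// Sketch of the lightweight object the queued JSON is deserialized into.
public class LeadType
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Topic { get; set; }    // mapped to the lead's subject field
    public string Company { get; set; }  // mapped to the lead's companyname field
}
```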
<p>The following code shows what happens after a connection to the RabbitMQ server is established:<pre><code>//wait for some messages
var consumer = new QueueingBasicConsumer(channel);
channel.BasicConsume(_queue, false, consumer);

Console.WriteLine(&quot; [*] Waiting for messages. To exit press CTRL+C&quot;);

//instantiate crm org service
using (OrganizationService service = new OrganizationService(_targetConn))
{
   while (true)
   {
     //get the message from the queue
     var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();

     var body = ea.Body;
     var message = Encoding.UTF8.GetString(body);

     try
     {
       //deserialize message json to object
       LeadType lead = JsonConvert.DeserializeObject&lt;LeadType&gt;(message);

       try
       {
         //create record in crm
         Entity entity = new Entity(&quot;lead&quot;);
         entity[&quot;firstname&quot;] = lead.FirstName;
         entity[&quot;lastname&quot;] = lead.LastName;
         entity[&quot;subject&quot;] = lead.Topic;
         entity[&quot;companyname&quot;] = lead.Company;
         service.Create(entity);

         //write success message to cli
         Console.WriteLine(&quot;Created lead: {0} {1}&quot;, lead.FirstName, lead.LastName);

         //IMPORTANT - tell the queue the message was processed successfully so it doesn't get requeued
         channel.BasicAck(ea.DeliveryTag, false);
       }
       catch (FaultException&lt;Microsoft.Xrm.Sdk.OrganizationServiceFault&gt; ex)
       {
         //return error - note no confirmation is sent to the queue, so the message will be requeued
         Console.WriteLine(&quot;Could not create lead: {0} {1}&quot;, lead.FirstName, lead.LastName);
         Console.WriteLine(&quot;Error: {0}&quot;, ex.Message);
       }
     }
     catch(Exception ex)
     {
       //return error - note no confirmation is sent to the queue, so the message will be requeued
       Console.WriteLine(&quot;Could not process message from queue&quot;);
       Console.WriteLine(&quot;Error: {0}&quot;, ex.Message);
     }
   }
}</code></pre></p>
<p>For production use I would have built this as a Windows service instead of a console application, but a console application makes it easy to try out different connection parameters.</p>
<h4 id="verifyingtheapplication">Verifying the application</h4>
<p>The queuewriter.js application in the node-app directory in the <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub repository</a> contains a sample web page that can be used to publish lead data to the CRM-Leads queue. If the application is running, you can access the web page at http://&lt;YOUR_SERVER_NAME&gt;:3000/leadform. When the form’s submit button is clicked, an AJAX call posts a JSON object to the Node.js POST endpoint I showed in my previous post. If the LeadWriterSample console application is running, it will take the message from the queue and you will see a new lead record created in CRM. The screenshots below show each piece working.</p>
<p><img src="https://alexanderdevelopment.net/content/images/2015/10/5-01-lead-form.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 5"><br>
<em>The lead has been submitted via the web form, and a success message has been received from the Node.js endpoint.</em></p>
<p><img src="https://alexanderdevelopment.net/content/images/2015/10/5-02-lead-queue.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 5"><br>
<em>The lead has landed in the CRM-Leads queue and is ready to be retrieved.</em></p>
<p><img src="https://alexanderdevelopment.net/content/images/2015/10/5-03-lead-processed.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 5"><br>
<em>The console application has retrieved and processed the submitted lead message.</em></p>
<p><img src="https://alexanderdevelopment.net/content/images/2015/10/5-04-lead-crm.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 5"><br>
<em>The lead record has been created in CRM.</em></p>
<p>One caveat about the demo lead form is that it has the RabbitMQ credentials embedded in the HTML source, so this code should not be used in production. My approach was originally formulated with the thought that a server-side process would build the JSON message to post to Node.js, so sensitive information would not be exposed. If you decide to use an AJAX post operation like the one shown here, you would want to modify the queuewriter.js application to contain the credentials so they do not need to be passed from the end user’s web browser.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>That does it for this series, but I’ve just barely explored the capabilities of RabbitMQ. There’s so much more you can do with it than what I’ve shown here, and I hope I’ve piqued your interest about how you can use RabbitMQ or any other message broker in your Dynamics CRM projects. If you have any questions or want to continue the discussion, please share your thoughts in the comments.</p>
<p><em>A version of this post was originally published on the HP Enterprise Services Application Services blog.</em></p>
</div>]]></content:encoded></item><item><title><![CDATA[Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 4]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Welcome back to my five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ. In my <a href="https://alexanderdevelopment.net/post/2015/01/20/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-3">last post</a> I showed how to build a Dynamics CRM plug-in that publishes notification messages to a RabbitMQ exchange using the <a target="_blank" href="https://www.rabbitmq.com/dotnet.html" rel="nofollow">official RabbitMQ .Net client library</a>. Unfortunately, that plug-in can’t</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/01/22/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-4/</link><guid isPermaLink="false">5a5837236636a30001b977bf</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[CRM 2015]]></category><category><![CDATA[C#]]></category><category><![CDATA[JSON]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[integration]]></category><category><![CDATA[RabbitMQ]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 22 Jan 2015 18:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-2.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-2.png" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 4"><p>Welcome back to my five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ. 
In my <a href="https://alexanderdevelopment.net/post/2015/01/20/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-3">last post</a> I showed how to build a Dynamics CRM plug-in that publishes notification messages to a RabbitMQ exchange using the <a target="_blank" href="https://www.rabbitmq.com/dotnet.html" rel="nofollow">official RabbitMQ .Net client library</a>. Unfortunately, that plug-in can’t successfully communicate with a RabbitMQ server if it’s executed inside the Dynamics CRM sandbox, so in today’s post I will show how to achieve the same results with a sandboxed plug-in. The code for this plug-in is available on <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub</a> in the MessageQueueSandboxPlugin project under the LucasCrmMessageQueueTools solution.</p>
<h4 id="theapproach">The approach</h4>
<p>As I mentioned in my previous post, last month I wrote a series of blog posts about how to create a near real-time streaming API using plug-ins and Node.js. That plug-in worked fine in the Dynamics CRM sandbox, and Node.js can easily publish messages to a RabbitMQ exchange, so today’s plug-in will post a JSON-formatted message to a Node.js application, and then that Node.js application will do the actual publishing to RabbitMQ. As a result, I only need to make a couple of minor modifications to <a href="https://alexanderdevelopment.net/post/2014/12/09/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-3/">my earlier Node.js message-posting plug-in</a> so that it can pass the RabbitMQ connection parameters to my Node.js application. Additionally, the Node.js application that I described in my earlier series only needs a few changes to publish the message to a RabbitMQ exchange instead of sending it to Socket.IO clients.</p>
<h4 id="theplugin">The plug-in</h4>
<p>The plug-in is registered for an operation (create, update, delete, etc.) with a FetchXML query in its unsecure configuration. When the plug-in step is triggered, its associated FetchXML query is executed, and then the resulting fields are serialized into a JSON object, which is then sent to a Node.js application called queuewriter.js via an HTTP POST request. The JSON object also needs to contain RabbitMQ connection details, so I pass them as part of the plug-in step’s unsecure configuration. Here’s the configuration XML fragment to enable case notifications:</p>
<pre><code>&lt;config&gt;
&lt;nodeendpoint&gt;http://lucas-ajax.cloudapp.net:3000/rabbit_post_endpoint&lt;/nodeendpoint&gt;
&lt;endpoint&gt;lucas-ajax.cloudapp.net&lt;/endpoint&gt;
&lt;exchange&gt;CRM&lt;/exchange&gt;
&lt;routingkey&gt;Case&lt;/routingkey&gt;
&lt;user&gt;rabbituser&lt;/user&gt;
&lt;password&gt;PASSWORDHERE&lt;/password&gt;
&lt;query&gt;&lt;![CDATA[
&lt;fetch mapping='logical'&gt;
&lt;entity name='incident'&gt;
&nbsp;&lt;attribute name='ownerid'/&gt;
&nbsp;&lt;attribute name='modifiedby'/&gt;
&nbsp;&lt;attribute name='createdby'/&gt;
&nbsp;&lt;attribute name='title'/&gt;
&nbsp;&lt;attribute name='incidentid'/&gt;
&nbsp;&lt;attribute name='ticketnumber'/&gt;
&nbsp;&lt;attribute name='createdon'/&gt;
&nbsp;&lt;attribute name='modifiedon'/&gt;
&nbsp;&lt;filter type='and'&gt;
&nbsp; &lt;condition attribute='incidentid' operator='eq' value='{0}' /&gt;
&nbsp;&lt;/filter&gt;
&lt;/entity&gt;
&lt;/fetch&gt;
]]&gt;
&lt;/query&gt;
&lt;/config&gt;</code></pre>
<p>Just like in my <a href="https://alexanderdevelopment.net/post/2014/12/09/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-3/">earlier Node.js plug-in</a>, the FetchXML is extracted from the configuration XML, and the query is executed against Dynamics CRM. The results are then serialized to JSON using <a target="_blank" href="http://james.newtonking.com/json" rel="nofollow">Json.NET</a> just like before, except the serialized CRM data is included as a &quot;message&quot; object inside a parent JSON object that also carries the RabbitMQ connection parameters. Here’s an example of the structure:<pre><code>{
   &quot;endpoint&quot;:&quot;lucas-ajax.cloudapp.net&quot;,
   &quot;username&quot;:&quot;rabbituser&quot;,
   &quot;password&quot;:&quot;XXXXXXXX&quot;,
   &quot;exchange&quot;:&quot;CRM&quot;,
   &quot;routingkey&quot;:&quot;Lead&quot;,
   &quot;message&quot;:{
     &quot;property1&quot;:&quot;value 1&quot;,
     &quot;property2&quot;:&quot;value 2&quot;,
     &quot;property3&quot;:&quot;value 3&quot;
   }
}</code></pre></p>
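<p>Before connecting, queuewriter.js needs every connection field plus the &quot;message&quot; body to be present in the posted object. A guard along these lines can reject incomplete requests early (the helper itself is hypothetical and not part of the published code, but the field names match the structure above):</p>
<pre><code>//Hypothetical helper: checks that a parsed request object carries
//everything needed to connect to RabbitMQ and publish a message.
function hasRequiredFields(requestobject) {
  //reject anything that isn't a plain object
  if (requestobject === null) return false;
  if (typeof requestobject !== 'object') return false;

  //every connection parameter plus the message body must be present
  var required = ['endpoint', 'username', 'password', 'exchange', 'routingkey', 'message'];
  return required.every(function (field) {
    return requestobject[field] !== undefined;
  });
}</code></pre>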
<p>Because this plug-in uses the Json.NET library, that library has to be merged with the plug-in assembly before the assembly is registered in Dynamics CRM. I’ve included a batch script called ilmerge.bat in the project directory on <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub</a>.</p>
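<p>The exact arguments are in ilmerge.bat, but an ILMerge invocation for this kind of merge is roughly of this shape (the key file, output path and assembly names below are placeholders, not the script’s actual values):</p>
<pre><code>ilmerge.exe /keyfile:key.snk /target:library /out:merged\MessageQueuePlugin.dll MessageQueuePlugin.dll Newtonsoft.Json.dll</code></pre>
<p>Note that the merged assembly must still be strong-name signed (hence /keyfile) before Dynamics CRM will accept it for registration.</p>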
<h4 id="thenodejsapplication">The Node.js application</h4>
<p>The Node.js application (queuewriter.js) waits to receive JSON messages via HTTP POST from a client. When it receives a POST request, it checks whether the message is valid JSON. If it is, the RabbitMQ connection parameters are extracted, and the notification &quot;message&quot; object is published to the RabbitMQ exchange. If everything is successful, it sends &quot;success&quot; back as a response to the client. If any errors are encountered, it sends back a descriptive error message. I am using the <a target="_blank" href="https://github.com/postwait/node-amqp" rel="nofollow">node-amqp</a> library for communicating with the RabbitMQ server, but the behavior isn’t that different from a .Net client. Here’s an extract with the relevant code:<pre><code>if (request.method == 'POST') {
  request.on('data', function(chunk) {
    //check if received data is valid json
    if (IsJsonString(chunk.toString())) {
      //convert message to json object
      var requestobject = JSON.parse(chunk.toString());

      //connect to rabbitmq
      var connection = amqp.createConnection({ host: requestobject.endpoint
      , port: 5672 //assumes default port
      , login: requestobject.username
      , password: requestobject.password
      , connectionTimeout: 0
      , authMechanism: 'AMQPLAIN'
      , vhost: '/' //assumes default vhost
      });

      //when connection is ready
      connection.on('ready', function () {
        //get the &quot;message&quot; property of the supplied request
        var message = JSON.stringify(requestobject.message);

        //open the existing exchange in confirm mode
        connection.exchange(requestobject.exchange, {passive: true, confirm: true}, function (exchange) {
          //post the message to the exchange with the supplied routing key
          exchange.publish(requestobject.routingkey, message, {mandatory: true, deliveryMode: 2}, function () {
            //if successful, write message to console
            console.log('Message published: ' + message);

            //send &quot;success&quot; back in response
            response.write('success');

            //close the rabbitmq connection and end the response
            connection.end();
            response.end();
          });
        });
      });

      //if an error occurs with rabbitmq
      connection.on('error', function () {
        //send error message back in response and end it
        response.write('failure writing message to exchange');
        response.end();
      });
    }
    else {
      //if request contains invalid json
      //send error message back in response and end it
      response.write(&quot;invalid JSON&quot;);
      response.end();
    }
  });
}</code></pre></p>
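<p>The IsJsonString helper used in the extract above isn’t shown. A minimal version (my sketch, not necessarily the exact implementation in queuewriter.js) simply lets JSON.parse do the validation:</p>
<pre><code>//returns true if the supplied string parses as JSON, false otherwise
function IsJsonString(str) {
  try {
    JSON.parse(str);
    return true;
  } catch (e) {
    //JSON.parse throws a SyntaxError on invalid input
    return false;
  }
}</code></pre>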
<p>The complete queuewriter.js application is contained in the node-app directory in the <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub repository</a>.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>In addition to registering the plug-in and a step to publish a notification message to RabbitMQ, you need to deploy and start the queuewriter.js application so messages can actually be published. Once that’s done, you can verify everything is working as expected either by looking at the Queues tab in the RabbitMQ management web UI or running the CliConsumer sample application I showed in <a href="https://alexanderdevelopment.net/post/2015/01/14/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-2">part 2</a>.</p>
<p>Obviously, using queuewriter.js as a message proxy adds an extra layer of complexity, and you have to make sure the application is up and running in order to process messages, but it also offers a couple of advantages. First, by using queuewriter.js instead of a direct connection, you can easily use this same plug-in with different message brokers like Apache ActiveMQ and Microsoft’s Azure Service Bus. Second, the queuewriter.js application isn’t limited to handling messages outbound from Dynamics CRM. You can also use it to process inbound messages without any changes. You just have to configure a client application to read messages from the queue and process them accordingly. A good example of this would be writing data submitted through a web form to Dynamics CRM via a RabbitMQ queue, and I will show that exact scenario in my next post!</p>
<p><em>A version of this post was originally published on the HP Enterprise Services Application Services blog.</em></p>
</div>]]></content:encoded></item><item><title><![CDATA[Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 3]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is the third post of a five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ.<br>
<a href="https://alexanderdevelopment.net/post/2015/01/14/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-2">Last time</a> I showed how to install and configure a RabbitMQ server to support passing messages to and from Dynamics CRM. Today I will show how to build a Dynamics</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/01/20/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-3/</link><guid isPermaLink="false">5a5837236636a30001b977b7</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[JSON]]></category><category><![CDATA[C#]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[RabbitMQ]]></category><category><![CDATA[CRM 2015]]></category><category><![CDATA[integration]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Tue, 20 Jan 2015 18:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-3.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-3.png" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 3"><p>This is the third post of a five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ.<br>
<a href="https://alexanderdevelopment.net/post/2015/01/14/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-2">Last time</a> I showed how to install and configure a RabbitMQ server to support passing messages to and from Dynamics CRM. Today I will show how to build a Dynamics CRM plug-in that publishes notification messages to a RabbitMQ exchange using the <a target="_blank" href="https://www.rabbitmq.com/dotnet.html" rel="nofollow">official RabbitMQ .Net client library</a>. The code for this plug-in is available on <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub</a> in the MessageQueuePlugin project under the LucasCrmMessageQueueTools solution.</p>
<p>Before going any further, let’s get some bad news out of the way. Plug-ins that execute in the Dynamics CRM sandbox cannot use the RabbitMQ .Net client library to publish messages to a RabbitMQ server, so you can’t use today’s plug-in approach from a CRM Online organization. In my next post, I will show an alternate mechanism for publishing messages that you can use from a sandboxed plug-in, but today I want to focus on the most direct integration method. Now that we’re clear on the limitations of this approach, let’s get started!</p>
<h4 id="theapproach">The approach</h4>
<p>Last month I wrote a series of blog posts about creating a near real-time streaming API using plug-ins and Node.js. For this plug-in, I’m going to reuse essentially the same logic I used in that series.</p>
<p><a href="https://alexanderdevelopment.net/post/2014/12/09/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-3/">This post</a> outlines the approach in detail, but if you don’t want to read the entire thing, the basic idea was to create a plug-in that is registered for an operation (create, update, delete, etc.) with a FetchXML query in its unsecure configuration. When the plug-in step is triggered, its associated FetchXML query is executed, and then the resulting fields are serialized into a JSON object, which is then sent to the Node.js application via an HTTP POST request. Today’s plug-in operates in the exact same way, except instead of sending the JSON object to a Node.js endpoint, the JSON object will be published as a message to a RabbitMQ exchange.</p>
<h4 id="configuringtheplugin">Configuring the plug-in</h4>
<p>To make the plug-in easily useable in any organization without needing to be recompiled, all the RabbitMQ connection parameters are stored in the unsecure configuration along with the FetchXML query for the data to retrieve. Here’s the configuration XML fragment to enable case notifications:</p>
<pre><code>&lt;config&gt;
&lt;endpoint&gt;lucas-ajax.cloudapp.net&lt;/endpoint&gt;
&lt;exchange&gt;CRM&lt;/exchange&gt;
&lt;routingkey&gt;Case&lt;/routingkey&gt;
&lt;user&gt;rabbituser&lt;/user&gt;
&lt;password&gt;PASSWORDHERE&lt;/password&gt;
&lt;query&gt;&lt;![CDATA[
&lt;fetch mapping='logical'&gt;
&lt;entity name='incident'&gt;
&nbsp;&lt;attribute name='ownerid'/&gt;
&nbsp;&lt;attribute name='modifiedby'/&gt;
&nbsp;&lt;attribute name='createdby'/&gt;
&nbsp;&lt;attribute name='title'/&gt;
&nbsp;&lt;attribute name='incidentid'/&gt;
&nbsp;&lt;attribute name='ticketnumber'/&gt;
&nbsp;&lt;attribute name='createdon'/&gt;
&nbsp;&lt;attribute name='modifiedon'/&gt;
&nbsp;&lt;filter type='and'&gt;
&nbsp; &lt;condition attribute='incidentid' operator='eq' value='{0}' /&gt;
&nbsp;&lt;/filter&gt;
&lt;/entity&gt;
&lt;/fetch&gt;
]]&gt;
&lt;/query&gt;
&lt;/config&gt;</code></pre>
<h4 id="generatingthenotificationmessage">Generating the notification message</h4>
<p>Just like in my Node.js plug-in, the FetchXML is extracted from the configuration XML, and the query is executed against Dynamics CRM. The results are then serialized to JSON using <a target="_blank" href="http://james.newtonking.com/json" rel="nofollow">Json.NET</a>.</p>
<h4 id="publishingthemessage">Publishing the message</h4>
<p>The endpoint, exchange name, RabbitMQ user, RabbitMQ password and routing key values from the configuration XML are then used to establish a connection to RabbitMQ and publish the notification message to the exchange like so:</p>
<pre><code>try
{
    //connect to rabbitmq
    var factory = new ConnectionFactory();
    factory.UserName = _brokerUser;
    factory.Password = _brokerPassword;
    factory.VirtualHost = "/";
    factory.Protocol = Protocols.DefaultProtocol;
    factory.HostName = _brokerEndpoint;
    factory.Port = AmqpTcpEndpoint.UseDefaultPort;
    using (var connection = factory.CreateConnection())
    {
        using (var channel = connection.CreateModel())
        {
            //tell rabbitmq to send confirmation when messages are successfully published
            channel.ConfirmSelect();

            //prepare message to write to queue
            var body = Encoding.UTF8.GetBytes(jsonMsg);

            var properties = channel.CreateBasicProperties();
            properties.SetPersistent(true);

            //publish the message to the exchange with the supplied routing key
            channel.BasicPublish(_exchange, _routingKey, properties, body);

            //block until rabbitmq confirms the publish (throws if it fails)
            channel.WaitForConfirmsOrDie();
        }
    }
}
catch (Exception e)
{
    tracingService.Trace("Exception: {0}", e.ToString());
    throw;
}</code></pre>
<p>If any errors are encountered, the message is captured via the tracing service, and then an exception is thrown.</p>
<p>Because this plug-in uses both the RabbitMQ .Net and Json.NET client libraries, they have to be merged with the plug-in assembly before registering it in Dynamics CRM. I’ve included a batch script called ilmerge.bat in the project directory on <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub</a>.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>After you register the plug-in and add a step to publish a notification message to RabbitMQ, you can verify everything is working as expected either by looking at the Queues tab in the RabbitMQ management web UI or running the CliConsumer sample application I showed in<br>
<a href="https://alexanderdevelopment.net/post/2015/01/14/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-2">part 2</a>.</p>
<p><em>A version of this post was originally published on the HP Enterprise Services Application Services blog.</em></p>
</div>]]></content:encoded></item><item><title><![CDATA[Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Welcome back to this five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ. In my <a href="https://alexanderdevelopment.net/post/2015/01/12/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-1">last post</a> I discussed why you would want to incorporate a message broker into your Dynamics CRM data interfaces, and today I will show how to install and configure RabbitMQ to</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/01/14/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-2/</link><guid isPermaLink="false">5a5837236636a30001b977af</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[CRM 2015]]></category><category><![CDATA[RabbitMQ]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[C#]]></category><category><![CDATA[integration]]></category><category><![CDATA[JSON]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Wed, 14 Jan 2015 18:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-4.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-4.png" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"><p>Welcome back to this five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ. 
In my <a href="https://alexanderdevelopment.net/post/2015/01/12/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-1">last post</a> I discussed why you would want to incorporate a message broker into your Dynamics CRM data interfaces, and today I will show how to install and configure RabbitMQ to support the examples I’ll be presenting in the rest of the series.</p>
<h4 id="installation">Installation</h4>
<p>First, you’ll need to download the installation files from here: <a target="_blank" href="http://www.rabbitmq.com/download.html" rel="nofollow">http://www.rabbitmq.com/download.html</a>. The RabbitMQ server runs on Windows, Linux, UNIX and Mac OS X, and there are installation guides for each supported platform. Because RabbitMQ is written in Erlang, you will need to install an Erlang VM before you can install RabbitMQ, but there is a download link provided in the installation guide. I set up my RabbitMQ server on a Windows 2012 server, and I was up and running in less than 10 minutes.</p>
<p>Once you’ve installed RabbitMQ and started the server, the easiest way to manage it is via the <a target="_blank" href="http://www.rabbitmq.com/management.html" rel="nofollow">web-based management interface</a> that’s included with the server distribution. Enable the management interface with the <a target="_blank" href="https://www.rabbitmq.com/man/rabbitmq-plugins.1.man.html" rel="nofollow">rabbitmq-plugins tool</a> by running the following command: <em>rabbitmq-plugins enable rabbitmq_management</em>.</p>
<p>After the management plugin is enabled, you can access the web management UI from your server at <a href="http://localhost:15672">http://localhost:15672</a>. The default username is &quot;guest&quot; with &quot;guest&quot; as the password.</p>
<p>You’ll also need to configure any firewall rules necessary to allow access to your RabbitMQ server if it’s running on a server separate from your Dynamics CRM server. The default port is 5672, but that can be changed if you like. <a target="_blank" href="https://www.rabbitmq.com/configure.html" rel="nofollow">This page</a> discusses RabbitMQ configuration in great detail.</p>
<h4 id="settingupusersqueuesandexchanges">Setting up users, queues and exchanges</h4>
<p>The first thing you should do after the install is complete is change your default guest user password via the management UI. Then you can add additional users as necessary. For the examples in the rest of this series, you’ll need a user with full permissions on the default &quot;/&quot; virtual host. Here is what my &quot;rabbituser&quot; user account looks like:<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-00-user.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<p>Next you need to create the entities required to broker the messages between publishers and consumers. Before continuing, I recommend you take a moment to skim this <a target="_blank" href="https://www.rabbitmq.com/tutorials/amqp-concepts.html" rel="nofollow">Advanced Message Queuing Protocol (AMQP) overview document</a>. If nothing else, at least read through the &quot;hello, world&quot; example section because it’s a great introduction to concepts that will be important throughout the rest of this series.</p>
<p><u>Queues</u><br>
In the management UI, navigate to the Queues tab, and create two new durable queues named CRM-Cases and CRM-Leads. (You can create any queues you want, but my examples in the rest of this series use queues with those names.) The screenshot below shows the queues in my system.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-01-queues.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<p><u>Exchanges</u><br>
After your queues are created, you can create an exchange and bindings to your queues so messages get routed correctly. Navigate to the Exchanges tab and create a new, durable exchange named CRM. After your CRM exchange is created, you should see something like the screenshot below.</p>
<p>Next, click on the name of the CRM exchange to open its edit screen. Scroll to the &quot;add binding&quot; section toward the bottom of the page, add a binding to the CRM-Cases queue with a routing key value of &quot;Case&quot; as shown in the following picture, and click &quot;bind.&quot;<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-02-exchanges-1.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<p>Do the same for the CRM-Leads queue with &quot;Lead&quot; as the routing key. You should then see the two queues bound to the exchange.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-02-exchanges-2.PNG" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
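<p>To make the binding behavior concrete, here’s a toy sketch of how a direct exchange routes a published message by routing key, mirroring the CRM exchange and bindings configured above. This is illustration only; RabbitMQ performs this matching server-side with far more capability.</p>
<pre><code>//deliver a message to every queue whose binding matches the exchange
//and routing key, exactly as the CRM exchange does for Case and Lead
function routeMessage(bindings, exchange, routingKey, message, queues) {
  bindings.forEach(function (binding) {
    if (binding.exchange !== exchange) return;
    if (binding.routingKey !== routingKey) return;
    if (!queues[binding.queue]) queues[binding.queue] = [];
    queues[binding.queue].push(message);
  });
}

var bindings = [
  { exchange: 'CRM', routingKey: 'Case', queue: 'CRM-Cases' },
  { exchange: 'CRM', routingKey: 'Lead', queue: 'CRM-Leads' }
];
var queues = {};
routeMessage(bindings, 'CRM', 'Case', 'sample case notification', queues);
//the message lands only in the CRM-Cases queue</code></pre>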
<h4 id="checkingtheconfiguration">Checking the configuration</h4>
<p>At this point you should have everything in place to start publishing and consuming messages. You can verify your configuration works with the CliProvider and CliConsumer sample applications included in my <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub repository</a> as part of the LucasCrmMessageQueueTools solution.</p>
<p>First, build and run the CliProvider application. You will be prompted to supply basic connection details, and then you can type a message to publish to your RabbitMQ server.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-03a-cliprovider.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<p>Once the message has been published, you can verify there’s a message waiting in the correct queue on the Queues tab of the management UI.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-03b-message-ready.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<p>Next, build and run the CliConsumer application. Once it connects to the CRM-Cases queue, the message will be retrieved and displayed.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-03c-cliconsumer.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<p>When the CliConsumer application processes a message, it sends a confirmation back to the queue that triggers removal of the message from the queue. You can check the Queues tab in the management UI to verify that the CRM-Cases queue is empty.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-03d-no-message-ready.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<h4 id="wrappingup">Wrapping up</h4>
<p>That’s it for today. Your RabbitMQ server is now fully configured and ready for use with the examples in the rest of this series. Next time I will show how to send messages to a RabbitMQ exchange from a plug-in using the RabbitMQ .Net client library. See you then!</p>
<p><em>A version of this post was originally published on the HP Enterprise Services Application Services blog.</em></p>
</div>]]></content:encoded></item><item><title><![CDATA[Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 1]]></title><description><![CDATA[<div class="kg-card-markdown"><p>One of the things I love about Dynamics CRM is how easy it is to create data interfaces to enable integration with other systems. If you’ve worked with Dynamics CRM for any length of time, you’ve probably seen multiple web service integrations that enable interoperability with other line-of-business</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/01/12/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-1/</link><guid isPermaLink="false">5a5837236636a30001b977a7</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[CRM 2015]]></category><category><![CDATA[C#]]></category><category><![CDATA[JSON]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[RabbitMQ]]></category><category><![CDATA[integration]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Mon, 12 Jan 2015 18:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-5.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-5.png" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 1"><p>One of the things I love about Dynamics CRM is how easy it is to create data interfaces to enable integration with other systems. If you’ve worked with Dynamics CRM for any length of time, you’ve probably seen multiple web service integrations that enable interoperability with other line-of-business and legacy systems. A typical pair of inbound and outbound integrations might look like the picture below.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound.png#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 1"></p>
<p>Using a tightly coupled connection between the source and target systems is usually the easiest (thus the quickest and cheapest) way to establish an integration, but this is often a bad idea. Consider the inbound scenario in which an external application is sending data to Dynamics CRM. What happens if the calling application misbehaves and starts sending thousands of requests per second? This has the potential to overload your CRM server and make it completely unusable. Now consider the outbound scenario in which a CRM plug-in calls an external web service. If the destination application’s web service is offline for a few minutes, the update from the CRM plug-in will not be received unless there’s some sort of error handling and retry logic built into the plug-in.</p>
<h4 id="analternateapproach">An alternate approach</h4>
<p>For these reasons, and lots of others (logging, security, scalability, just to name a few), it’s considered a best practice to create loosely coupled integrations that rely on a message broker that sits between the source and destination systems. Though the formal definition is more complicated, for our purposes a message broker can be thought of as a collection of queues that hold messages. Publishers write messages to queues, and then consumers pick up the messages and process them appropriately. Additionally, the message broker can be configured to keep messages in their queues until the consumers provide confirmation of successful processing.</p>
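<p>To make the queue-and-acknowledge idea concrete, here’s a toy, in-memory sketch of the pattern just described (illustration only; a real broker like RabbitMQ adds durable storage, exchanges, routing and delivery guarantees on top of this):</p>
<pre><code>//Toy in-memory broker: publishers write messages to named queues, and a
//message is removed only after the consumer acknowledges it.
function ToyBroker() {
  this.queues = {};
}

ToyBroker.prototype.publish = function (queue, message) {
  if (!this.queues[queue]) this.queues[queue] = [];
  this.queues[queue].push(message);
};

ToyBroker.prototype.consume = function (queue, handler) {
  var pending = this.queues[queue] || [];
  while (pending.length > 0) {
    //hand the oldest message to the consumer
    var acked = handler(pending[0]);
    //on failure, leave the message queued so it can be retried later
    if (!acked) break;
    //remove the message only after a successful acknowledgement
    pending.shift();
  }
};

var broker = new ToyBroker();
broker.publish('CRM-Cases', 'sample case notification');
broker.consume('CRM-Cases', function (message) {
  console.log('processed ' + message);
  return true; //acknowledge
});</code></pre>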
<p>Here’s an example of what the integrations I showed earlier would look like with a message broker.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker.png#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 1"></p>
<p>For the outbound call from the CRM plug-in, the plug-in writes the message to a broker. The message is routed to a queue where it waits to be processed. A separate processing service application retrieves the message from the queue and sends it to the destination application. For the inbound call to CRM, the process works exactly the same, except the source and destination applications are reversed.</p>
<h4 id="whyisamessagebrokerbetter">Why is a message broker better?</h4>
<p>In the inbound call scenario, an effective message broker would typically be expected to handle a larger volume of inbound messages than Dynamics CRM because all it’s doing is receiving and routing the data without any additional processing. The processing service can then work through the messages in the queue at a speed that doesn’t overload the Dynamics CRM server. In the case of the outbound call, the combination of a message broker and processing service can enable complex retry logic and custom logging without having to implement that logic in the plug-in layer. As an added bonus to either scenario, a message broker can guarantee that messages don’t get lost between the source and destination systems as long as the message is successfully published to the broker.</p>
<h4 id="wheredowegofromhere">Where do we go from here?</h4>
<p>Over the course of my next four blog posts, I will show how to use <a target="_blank" href="https://www.rabbitmq.com/" rel="nofollow">RabbitMQ</a> as a message broker in your Dynamics CRM data interfaces. I chose RabbitMQ for this series for several reasons:</p><ol><li>It’s open source.</li><li>It runs on multiple platforms.</li><li>It’s easy to install and configure.</li><li>It’s fast at processing messages.</li></ol><p></p>
<p>If you already have a different message broker in place in your organization or you would like to try a different message broker like Apache ActiveMQ or Microsoft’s Azure Service Bus, most of the approaches and a lot of the code I’m going to show in this series will still be applicable, with the notable exception of the post that discusses how to install and configure RabbitMQ.</p>
<p>Here’s the roadmap for the rest of the series:</p><ul><li>Part 2 – basic installation and configuration of a RabbitMQ server</li><li>Part 3 – creating a Dynamics CRM plug-in that publishes messages using the RabbitMQ .Net client library</li><li>Part 4 – creating a sandboxed Dynamics CRM plug-in that publishes messages to RabbitMQ via Node.js</li><li>Part 5 – reading messages from a queue and writing them to Dynamics CRM</li></ul><p></p>
<p>If you just can’t wait to dig into the code, I’ve already posted everything to my <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code#crmmessagequeuing" rel="nofollow">repository on GitHub</a>, so you can go ahead and take a look.</p>
<p>See you next time!</p>
<p><em>A version of this post was originally published on the HP Enterprise Services Application Services blog.</em></p>
</div>]]></content:encoded></item><item><title><![CDATA[Console application for moving Dynamics CRM access team templates]]></title><description><![CDATA[<div class="kg-card-markdown"><p>When Dynamics CRM 2013 was released, I thought access teams were the new killer feature in that version, and I even developed custom workflow activity code to make <a href="https://alexanderdevelopment.net/post/2014/01/09/managing-microsoft-dynamics-crm-2013-access-team-membership-using-connections-2/">managing access team membership easier by using connection records</a>. I have thus far not had an opportunity to use access teams in</p></div>]]></description><link>https://alexanderdevelopment.net/post/2014/12/12/console-application-for-moving-dynamics-crm-access-team-templates/</link><guid isPermaLink="false">5a5837226636a30001b9775d</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[programming]]></category><category><![CDATA[C#]]></category><category><![CDATA[FetchXML]]></category><category><![CDATA[CRM 2013]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Sat, 13 Dec 2014 00:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/team-mover-cli.PNG" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/team-mover-cli.PNG" alt="Console application for moving Dynamics CRM access team templates"><p>When Dynamics CRM 2013 was released, I thought access teams were the new killer feature in that version, and I even developed custom workflow activity code to make <a href="https://alexanderdevelopment.net/post/2014/01/09/managing-microsoft-dynamics-crm-2013-access-team-membership-using-connections-2/">managing access team membership easier by using connection records</a>. 
I have thus far not had an opportunity to use access teams in a real project, so I was disappointed to read <a href="https://crmbusiness.wordpress.com/2014/12/01/crm-2013-why-are-access-teams-marooned/">this blog post</a> by Ben Hosking (AKA &quot;The Hosk&quot;) about how Microsoft doesn't provide any out-of-the-box capabilities for moving access team templates between Dynamics CRM organizations. In that post, the Hosk says, &quot;It’s possible someone could build a console app to import the access team templates but as yet no one has created it.&quot; Challenge accepted.</p>
<p>My CRM Access Team Mover tool is available for download <a href="https://github.com/lucasalexander/CrmAccessTeamMover/releases/tag/v1.0">here</a>, and the source code is available on <a href="https://github.com/lucasalexander/CrmAccessTeamMover">GitHub</a>.</p>
<p>To copy/update access team templates from one organization to another using my tool, do the following:</p>
<ol>
<li>Execute the tool from the command line.</li>
<li>When prompted to enter the source connection string, supply a complete <a href="http://msdn.microsoft.com/en-us/library/gg695810.aspx">Dynamics CRM simplified connection string</a> for the source organization.</li>
<li>When prompted to enter the target connection string, supply a complete <a href="http://msdn.microsoft.com/en-us/library/gg695810.aspx">Dynamics CRM simplified connection string</a> for the target organization.</li>
<li>The tool will attempt to update existing team templates based on their IDs. If any source records don't already exist in the target environment, they will be created (with the same ID).</li>
<li>Any failures will be reported by the tool. Errors will be encountered if the target schema doesn't match the source schema for the relevant entities.</li>
</ol>
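<p>As a point of reference, a simplified connection string looks something like the examples below. The first form is for a Dynamics CRM Online organization and the second for an on-premises deployment with Active Directory; all of the URLs, user names, and passwords here are placeholders, so check the linked MSDN page for the full syntax:</p>

```text
Url=https://contoso.crm.dynamics.com; Username=jsmith@contoso.onmicrosoft.com; Password=passcode;
Url=http://crmserver/contoso; Domain=CONTOSO; Username=jsmith; Password=passcode;
```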
<p>Here's a screenshot of the tool being executed:</p>
<p><img src="https://alexanderdevelopment.net/content/images/2014/12/team-mover-cli.PNG#img-thumbnail" alt="Console application for moving Dynamics CRM access team templates"></p>
<p>And the image below shows access teams in a source and target system after I ran the tool. The source system has two access team templates for entities that don't exist in the target system, so they were not created.</p>
<p><img src="https://alexanderdevelopment.net/content/images/2014/12/team-mover.PNG#img-thumbnail" alt="Console application for moving Dynamics CRM access team templates"></p>
<h4 id="theapproachindetail">The approach in detail</h4>
<p>Access team templates are just regular Dynamics CRM records, and they can be accessed through the Dynamics CRM organization web service like most other records. (You can see a list of the messages and methods available for 2013 <a href="http://msdn.microsoft.com/en-us/library/dn481595%28v=crm.6%29.aspx">here</a>). Because of this, all I needed to do was query a source CRM organization for access teams, and then loop through the results to recreate each one in the target organization.</p>
<p>At first I tried to retrieve the teamtemplate records using a RetrieveMultiple query for all attributes, but that resulted in a strange service fault involving the issystem attribute that looks like a bug in CRM. I then decided to use a FetchXML query instead. To make it easy on myself, I first make a metadata request to retrieve all the teamtemplate attributes, and then I dynamically build a FetchXML query for everything except the issystem field. That query then gets executed. The code for all the source organization operations is below:</p>
<pre><code>using (OrganizationService service = new OrganizationService(sourceConn))
{
    try
    {
        //attributes to exclude from the query
        List&lt;string&gt; IgnoredAttributes = new List&lt;string&gt; { &quot;issystem&quot; };

        Console.WriteLine(&quot;Retrieving entity metadata . . .&quot;);
        RetrieveEntityRequest entityreq = new RetrieveEntityRequest
        {
            LogicalName = &quot;teamtemplate&quot;,
            EntityFilters = Microsoft.Xrm.Sdk.Metadata.EntityFilters.Attributes
        };
        RetrieveEntityResponse entityres = (RetrieveEntityResponse)service.Execute(entityreq);

        //dynamically build a FetchXML query for every attribute except the ignored ones
        string fetchXml = &quot;&lt;fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'&gt;&quot;;
        fetchXml += &quot;&lt;entity name='teamtemplate'&gt;&quot;;
        foreach (AttributeMetadata amd in entityres.EntityMetadata.Attributes)
        {
            if (!IgnoredAttributes.Contains(amd.LogicalName))
            {
                fetchXml += &quot;&lt;attribute name='&quot; + amd.LogicalName + &quot;' /&gt;&quot;;
            }
        }
        fetchXml += &quot;&lt;/entity&gt;&lt;/fetch&gt;&quot;;

        Console.WriteLine(&quot;&quot;);
        Console.WriteLine(&quot;Exporting data . . .&quot;);
        exported = service.RetrieveMultiple(new FetchExpression(fetchXml));
    }
    catch (FaultException&lt;Microsoft.Xrm.Sdk.OrganizationServiceFault&gt; ex)
    {
        Console.WriteLine(&quot;Could not export data: {0}&quot;, ex.Message);
        return;
    }
}
</code></pre>
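<p>For reference, the dynamically built query ends up looking like the FetchXML below. It is pretty-printed here for readability (the code concatenates it as a single line) and abbreviated to a few illustrative attributes; a real run includes every teamtemplate attribute except issystem:</p>

```xml
<fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'>
  <entity name='teamtemplate'>
    <attribute name='teamtemplateid' />
    <attribute name='teamtemplatename' />
    <attribute name='objecttypecode' />
    <attribute name='defaultaccessrightsmask' />
  </entity>
</fetch>
```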
<p>Once the teamtemplate records have been retrieved, they are then created in the target organization:</p>
<pre><code>using (OrganizationService service = new OrganizationService(targetConn))
{
    if (exported.Entities.Count &gt; 0)
    {
        foreach (Entity entity in exported.Entities)
        {
            try
            {
                //try to update first
                try
                {
                    service.Update(entity);
                }
                catch (FaultException&lt;Microsoft.Xrm.Sdk.OrganizationServiceFault&gt;)
                {
                    //if the update fails, the record doesn't exist yet, so create it
                    service.Create(entity);
                }
            }
            catch (FaultException&lt;Microsoft.Xrm.Sdk.OrganizationServiceFault&gt; ex)
            {
                //if both the update and the create fail, report the error
                Console.WriteLine(&quot;Error: {0} - {1}: {2}&quot;, entity.Id, entity[&quot;teamtemplatename&quot;], ex.Message);
            }
        }
    }
}
</code></pre>
<p>You'll note this code doesn't just create new records; it first tries to update existing ones. If a team template has changed in the source organization, the change will be carried over, provided the record exists in the target system with the same GUID.</p>
<p>What do you think about this approach? Would you have done anything differently?</p>
</div>]]></content:encoded></item><item><title><![CDATA[Creating a near real-time streaming interface for Dynamics CRM with Node.js – part 4]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is the final post in my four-part series about creating a near real-time streaming interface for Microsoft Dynamics CRM using Node.js and Socket.IO. In my <a href="https://alexanderdevelopment.net/post/2014/12/09/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-3">last post</a> I showed how to write the plug-in code to send messages from CRM to the Node.js application. In today’</p></div>]]></description><link>https://alexanderdevelopment.net/post/2014/12/11/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-4/</link><guid isPermaLink="false">5a5837236636a30001b977a2</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[C#]]></category><category><![CDATA[integration]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 11 Dec 2014 18:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/video-3.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/video-3.jpg" alt="Creating a near real-time streaming interface for Dynamics CRM with Node.js – part 4"><p>This is the final post in my four-part series about creating a near real-time streaming interface for Microsoft Dynamics CRM using Node.js and Socket.IO. In my <a href="https://alexanderdevelopment.net/post/2014/12/09/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-3">last post</a> I showed how to write the plug-in code to send messages from CRM to the Node.js application. 
In today’s post I will show how to configure a client to receive and process notifications from the Node.js application, and I’ll also discuss some general considerations related to this solution.</p>
<p>My <a href="https://alexanderdevelopment.net/post/2014/12/03/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-1">first post</a> in this series included a video that showed two clients connected to the Node.js application via Socket.IO. One was a web page that displayed notifications using JavaScript, and the other was a simple C# console application. You can find the code for both of the clients from the video in the “client-src” directory in the solution on <a href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmStreamingNotifications/" target="_blank">GitHub</a>.</p>
<h4 id="webpages">Web pages</h4>
<p>Creating a web page to display notifications received from the Node.js application is incredibly simple. For the web page I demonstrated in the introductory video, I first load JavaScript libraries for Socket.IO and jQuery:<pre><code>&lt;script src=&quot;https://cdn.socket.io/socket.io-1.2.0.js&quot;&gt;&lt;/script&gt;
&lt;script src=&quot;http://code.jquery.com/jquery-1.11.1.js&quot;&gt;&lt;/script&gt;</code></pre></p>
<p>Then I connect to the Socket.IO endpoint and register a callback function that appends the notification text to an element on the page with jQuery:<pre><code>var socket = io(&quot;http://lucas-ajax.cloudapp.net:3000&quot;);
socket.on('message', function(msg){
  $('#records').append($('&lt;li&gt;').text(msg));
});</code></pre></p>
<p>Of course you’re not limited to displaying just raw text. Once you parse the JSON message, it’s a fully-fledged object that you can work with as you like. For example you can create a web page client that lists case updates by displaying the case numbers hyperlinked to the record in Dynamics CRM.</p>
<p>The code for this is only marginally more complicated than the raw stream example above.<pre><code>var socket = io(&quot;http://lucas-ajax.cloudapp.net:3000&quot;);
socket.on('message', function(msg){
  var obj = jQuery.parseJSON( msg );
  if(obj.entity===&quot;incident&quot;){
    $('#records').append($('&lt;li&gt;'+obj.operation + ' - &lt;a href=&quot;https://lucas-ajax.cloudapp.net/Lucas01/main.aspx?etc=112&amp;pagetype=entityrecord&amp;id='+obj.id+'&quot; target=&quot;_blank&quot;&gt;'+obj.ticketnumber+'&lt;/a&gt;'));
  }
});</code></pre></p>
<p>As before, first the page creates a connection to the Socket.IO endpoint, and then it registers a callback function. This time, however, the callback function includes a check for incident entities, and the append step creates a hyperlinked case number. The code for this example and another one for contact updates is included in the solution source code on <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmStreamingNotifications/" rel="nofollow">GitHub</a>.</p>
<h4 id="otherclients">Other clients</h4>
<p>The C# console application I showed in the introductory video was also incredibly simple to create. First I needed a way to communicate with the Socket.IO endpoint. I used the <a target="_blank" href="https://github.com/Quobject/SocketIoClientDotNet" rel="nofollow">Socket.IO Client Library for .Net</a> (also available from NuGet -&gt; <em>Install-Package SocketIoClientDotNet</em>). Once I included the library in the project, the code ended up looking a lot like the JavaScript examples above.<pre><code>var socket = IO.Socket(&quot;http://lucas-ajax.cloudapp.net:3000/&quot;);
socket.On(Socket.EVENT_CONNECT, () =&gt;
{
    socket.On(&quot;message&quot;, (data) =&gt;
    {
        Console.WriteLine(data);
        //socket.Disconnect();
    });
});</code></pre></p>
<p>In addition to C#, you can create Socket.IO clients in other languages, too. The <a target="_blank" href="http://socket.io/docs/faq/" rel="nofollow">Socket.IO FAQ</a> has links to client libraries for Java and iOS clients.</p>
<h4 id="securityconsiderations">Security considerations</h4>
<p>As I mentioned in the <a href="https://alexanderdevelopment.net/post/2014/12/05/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-2">second post</a> in this series, this solution lacks any mechanism for authenticating or authorizing clients, but adding one is certainly possible. There are two typical approaches to securing Socket.IO interfaces: cookie-based and token-based. This article on <a target="_blank" href="https://auth0.com/blog/2014/01/15/auth-with-socket-io/">&quot;Token-based authentication with Socket.IO&quot;</a> gives a good overview of the problems with the cookie-based approach and goes into some detail about how to implement token-based authentication. In addition to securing the Socket.IO endpoint, you’d also want to consider securing the endpoint where Dynamics CRM posts notifications. You could use the same token-based approach for that, too.</p>
<p>I would also suggest that depending on how you’ve deployed Dynamics CRM and how you need to grant client access, you might not need to worry about security at all. For example, in the case notification example above the only information exposed via the interface is the case number and whether it was created or updated. To see anything else, the end user actually has to open the record, and regular Dynamics CRM security will do the rest.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>I hope you’ve enjoyed this series, and I hope I’ve given you some ideas about how you could implement and use a near real-time streaming API for Dynamics CRM in your own projects. If you have any questions or want to continue the discussion, please share your thoughts in the comments.</p>
<p><em>A version of this post was originally published on the HP Enterprise Services Application Services blog.</em></p>
</div>]]></content:encoded></item></channel></rss>