<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Node.js - Alexander Development]]></title><description><![CDATA[Node.js - Alexander Development]]></description><link>https://alexanderdevelopment.net/</link><image><url>https://alexanderdevelopment.net/favicon.png</url><title>Node.js - Alexander Development</title><link>https://alexanderdevelopment.net/</link></image><generator>Ghost 1.20</generator><lastBuildDate>Fri, 24 Apr 2026 03:30:09 GMT</lastBuildDate><atom:link href="https://alexanderdevelopment.net/tag/node-js/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Dynamics 365 and Node.js integration using the Web API - part 2]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Last year I wrote a <a href="https://alexanderdevelopment.net/post/2016/11/23/dynamics-365-and-node-js-integration-using-the-web-api/">post</a> that showed how to retrieve data from a Dynamics 365 Online organization in a Node.js application using the Web API. Today I will share sample code that shows how to update data from a Node.js application using the Web API.</p>
<h4 id="updatingasingleproperty">Updating a</h4></div>]]></description><link>https://alexanderdevelopment.net/post/2017/02/16/dynamics-365-and-node-js-integration-using-the-web-api-part-2/</link><guid isPermaLink="false">5a5837246636a30001b97889</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[programming]]></category><category><![CDATA[integration]]></category><category><![CDATA[Dynamics 365]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 16 Feb 2017 17:05:52 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card-markdown"><p>Last year I wrote a <a href="https://alexanderdevelopment.net/post/2016/11/23/dynamics-365-and-node-js-integration-using-the-web-api/">post</a> that showed how to retrieve data from a Dynamics 365 Online organization in a Node.js application using the Web API. Today I will share sample code that shows how to update data from a Node.js application using the Web API.</p>
<h4 id="updatingasingleproperty">Updating a single property</h4>
<p>To update a single property on a record in Dynamics 365, you can make a PUT request to the property's URL in the Web API. The raw HTTP request to update the first name for a contact would look like this:</p>
<pre><code>PUT [Organization URI]/api/data/v8.2/contacts(00000000-0000-0000-0000-000000000001)/firstname HTTP/1.1
Content-Type: application/json
OData-MaxVersion: 4.0
OData-Version: 4.0

{&quot;value&quot;: &quot;Demo-Firstname&quot;}
</code></pre>
<p>Assuming you have retrieved the OAuth token for authenticating to Dynamics 365 as I outlined in my earlier blog post, here is a sample Node function to make a PUT update request:</p>
<pre><code>function updateContactPut(token, contactid){
	var contactObj={};
	contactObj[&quot;value&quot;]=&quot;Firstname PUT&quot;;
	var requestdata = JSON.stringify(contactObj);
	var contentlength = Buffer.byteLength(JSON.stringify(contactObj));

    //set the crm request parameters and headers
    var crmrequestoptions = {
		path: '/api/data/v8.2/contacts('+contactid+')/firstname',
        host: crmwebapihost,
        method: 'PUT',
        headers: { 
			'Authorization': 'Bearer ' + token,
			'Content-Type': 'application/json',
			'Content-Length': contentlength,
			'OData-MaxVersion': '4.0',
			'OData-Version': '4.0'
		}
    };
	
	//make the web api request
    var crmrequest = https.request(crmrequestoptions, function(response) {
        //make an array to hold the response parts if we get multiple parts
        var responseparts = [];
        //response.setEncoding('utf8');
        response.on('data', function(chunk) {
            //add each response chunk to the responseparts array for later
            responseparts.push(chunk);      
        });
        response.on('end', function(){
            //once we have all the response parts, concatenate the parts into a single string - response should be empty for this, though
            var completeresponse = responseparts.join('');
			console.log(completeresponse);
			console.log(&quot;success&quot;);
        });
    });
    crmrequest.on('error', function(e) {
        console.error(e);
    });

	//send the data to update
	crmrequest.write(requestdata);
	
    //close the web api request
    crmrequest.end();
}
</code></pre>
<p>Although the Content-Length header is technically optional, omitting it causes the Node HTTPS module to send the request body using chunked transfer encoding, which the Web API does not handle correctly, so set it explicitly in your request.</p>
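<p>To see why the sample computes the length with <code>Buffer.byteLength</code> rather than the string's <code>length</code> property, consider a payload containing a non-ASCII character. This is a standalone sketch with a made-up value:</p>

```javascript
// Character count and UTF-8 byte count diverge for non-ASCII text,
// so Content-Length must be computed from bytes, not characters.
var payload = JSON.stringify({ value: 'Zoë' }); // 'ë' encodes as two bytes in UTF-8
var charLength = payload.length;                     // 15 characters
var byteLength = Buffer.byteLength(payload, 'utf8'); // 16 bytes
```

<p>Sending <code>charLength</code> as the Content-Length here would truncate the request body by one byte.</p>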
<h4 id="updatingmultipleproperties">Updating multiple properties</h4>
<p>To update multiple properties on a record in Dynamics 365, you must make a PATCH request to the Web API. The raw HTTP request to update the first name and last name for a contact would look like this:</p>
<pre><code>PATCH [Organization URI]/api/data/v8.2/contacts(00000000-0000-0000-0000-000000000001) HTTP/1.1
Content-Type: application/json
OData-MaxVersion: 4.0
OData-Version: 4.0

{
&quot;firstname&quot;: &quot;Demo-Firstname&quot;,
&quot;lastname&quot;: &quot;Demo-Lastname&quot;
}
</code></pre>
<p>Again, assuming you have retrieved the OAuth token for authenticating to Dynamics 365 as I outlined in my earlier blog post, here is a sample Node function to make a PATCH update request:</p>
<pre><code>function updateContactPatch(token, contactid){
	var contactObj={};
	contactObj[&quot;firstname&quot;]=&quot;Firstname test&quot;;
	contactObj[&quot;lastname&quot;]=&quot;Lastname test&quot;;
	var requestdata = JSON.stringify(contactObj);
	var contentlength = Buffer.byteLength(JSON.stringify(contactObj));

    //set the crm request parameters and headers
    var crmrequestoptions = {
		path: '/api/data/v8.2/contacts('+contactid+')',
        host: crmwebapihost,
        method: 'PATCH',
        headers: { 
			'Authorization': 'Bearer ' + token,
			'Content-Type': 'application/json',
			'Content-Length': contentlength,
			'OData-MaxVersion': '4.0',
			'OData-Version': '4.0'
		}
    };
	
	//make the web api request
    var crmrequest = https.request(crmrequestoptions, function(response) {
        //make an array to hold the response parts if we get multiple parts
        var responseparts = [];
        //response.setEncoding('utf8');
        response.on('data', function(chunk) {
            //add each response chunk to the responseparts array for later
            responseparts.push(chunk);      
        });
        response.on('end', function(){
            //once we have all the response parts, concatenate the parts into a single string - response should be empty for this, though
            var completeresponse = responseparts.join('');
			console.log(completeresponse);
			console.log(&quot;success&quot;);
        });
    });
    crmrequest.on('error', function(e) {
        console.error(e);
    });

	//send the data to update
	crmrequest.write(requestdata);
	
    //close the web api request
    crmrequest.end();
}
</code></pre>
<p>As with the earlier PUT sample, the Content-Length header is technically optional, but without it the Node HTTPS module sends the body using chunked transfer encoding, which the Web API does not handle correctly.</p>
<h4 id="furtherreading">Further reading</h4>
<p>For more information on updating and deleting data with the Dynamics 365 Web API, take a look at the <a href="https://msdn.microsoft.com/en-us/library/mt607664.aspx">&quot;Update and delete entities using the Web API&quot;</a> article on MSDN.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Scheduling Dynamics 365 workflows with Azure Functions and Node.js]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Earlier this week I showed an easy way to <a href="https://alexanderdevelopment.net/post/2016/11/23/dynamics-365-and-node-js-integration-using-the-web-api/">integrate a Node.js application with Dynamics 365 using the Web API</a>. Building on that example, I have created a scheduled workflow runner using Node.js and <a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-overview">Azure Functions</a>. Here's how I did it.</p>
<p>First, I created a workflow in Dynamics</p></div>]]></description><link>https://alexanderdevelopment.net/post/2016/11/25/scheduling-dynamics-365-workflows-with-azure-functions/</link><guid isPermaLink="false">5a5837246636a30001b97866</guid><category><![CDATA[Dynamics 365]]></category><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[Azure]]></category><category><![CDATA[demonstrations]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Fri, 25 Nov 2016 17:00:03 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2016/11/04-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2016/11/04-1.png" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"><p>Earlier this week I showed an easy way to <a href="https://alexanderdevelopment.net/post/2016/11/23/dynamics-365-and-node-js-integration-using-the-web-api/">integrate a Node.js application with Dynamics 365 using the Web API</a>. Building on that example, I have created a scheduled workflow runner using Node.js and <a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-overview">Azure Functions</a>. Here's how I did it.</p>
<p>First, I created a workflow in Dynamics 365 that creates a note on an account record. The screenshots below show what it looks like:</p>
<p><img src="https://alexanderdevelopment.net/content/images/2016/11/workflow01.png#img-thumbnail" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"></p>
<p><img src="https://alexanderdevelopment.net/content/images/2016/11/workflow02.png#img-thumbnail" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"></p>
<p>Next, I wrote Node.js code to do the following in an Azure Function.</p>
<ol>
<li>Request an OAuth token using a username and password.</li>
<li>Query the Dynamics 365 Web API for accounts with names that start with the letter &quot;F.&quot;</li>
<li>Execute a workflow for each record that was retrieved in the previous step.</li>
</ol>
<p><em>Most of this is regular Node.js, but there are a couple of nuances specific to Azure Functions. See the<br>
<a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-node">&quot;Azure Functions NodeJS developer reference&quot;</a> for more information.</em></p>
<pre><code>var https = require('https');

//set these values to retrieve the oauth token
//see http://alexanderdevelopment.net/post/2016/11/23/dynamics-365-and-node-js-integration-using-the-web-api/ for more details
var _crmorg = 'https://CRMORG...dynamics.com';  
var _clientid = 'OAUTH CLIENT ID';  
var _username = 'CRM USERNAME';  
var _userpassword = 'CRM PASSWORD';  
var _tokenendpoint = 'OAUTH TOKEN ENDPOINT FROM EARLIER';

//set these values to query your crm data
var _apipath = '/api/data/v8.2'; //web api version
var _workflowid = 'DC8519EC-F3CE-4BC9-BB79-DF2AD70217A1'; //guid for the workflow you want to execute
var _crmwebapihost = 'XXXX.api.crm.dynamics.com'; //crm api url (without https://)
var _crmwebapiquerypath = &quot;/accounts?$select=name,accountid&amp;$filter=startswith(name,'f')&quot;; //web api query

var _counter = 0; //variable to keep track of how many records retrieved and workflows started

module.exports = function (context, myTimer) {
	//remove https from _tokenendpoint url
	_tokenendpoint = _tokenendpoint.toLowerCase().replace('https://','');

	//get the authorization endpoint host name
	var authhost = _tokenendpoint.split('/')[0];

	//get the authorization endpoint path
	var authpath = '/' + _tokenendpoint.split('/').slice(1).join('/');

	//build the authorization request
	var reqstring = 'client_id='+_clientid;
	reqstring+='&amp;resource='+encodeURIComponent(_crmorg);
	reqstring+='&amp;username='+encodeURIComponent(_username);
	reqstring+='&amp;password='+encodeURIComponent(_userpassword);
	reqstring+='&amp;grant_type=password';

	//set the token request parameters
	var tokenrequestoptions = {
		host: authhost,
		path: authpath,
		method: 'POST',
		headers: {
			'Content-Type': 'application/x-www-form-urlencoded',
			'Content-Length': Buffer.byteLength(reqstring)
		}
	};

	//make the token request
	context.log('starting token request');
	var tokenrequest = https.request(tokenrequestoptions, function(response) {
		//make an array to hold the response parts if we get multiple parts
		var responseparts = [];
		response.setEncoding('utf8');
		response.on('data', function(chunk) {
			//add each response chunk to the responseparts array for later
			responseparts.push(chunk);		
		});
		response.on('end', function(){
			//once we have all the response parts, concatenate the parts into a single string
			var completeresponse = responseparts.join('');
			//context.log('Response: ' + completeresponse);
			context.log('token response retrieved');
			
			//parse the response JSON
			var tokenresponse = JSON.parse(completeresponse);
			
			//extract the token
			var token = tokenresponse.access_token;
			//context.log(token);
			
			//pass the token to our data retrieval function
			getData(context, token);
		});
	});
	tokenrequest.on('error', function(e) {
		context.log.error(e);
		context.done();
	});

	//post the token request data
	tokenrequest.write(reqstring);

	//close the token request
	tokenrequest.end();
}

function getData(context, token){
	//set the web api request headers
	var requestheaders = { 
		'Authorization': 'Bearer ' + token,
		'OData-MaxVersion': '4.0',
		'OData-Version': '4.0',
		'Accept': 'application/json',
		'Content-Type': 'application/json; charset=utf-8',
		'Prefer': 'odata.maxpagesize=500, odata.include-annotations=OData.Community.Display.V1.FormattedValue'
	};
	
	//set the crm request parameters
	var crmrequestoptions = {
		host: _crmwebapihost,
		path: _apipath+_crmwebapiquerypath,
		method: 'GET',
		headers: requestheaders
	};
	
	//make the web api request
	context.log('starting data request');
	var crmrequest = https.request(crmrequestoptions, function(response) {
		//make an array to hold the response parts if we get multiple parts
		var responseparts = [];
		response.setEncoding('utf8');
		response.on('data', function(chunk) {
			//add each response chunk to the responseparts array for later
			responseparts.push(chunk);		
		});
		response.on('end', function(){
			//once we have all the response parts, concatenate the parts into a single string
			var completeresponse = responseparts.join('');
			
			//parse the response JSON
			var collection = JSON.parse(completeresponse).value;
			
			//set counter length = number of records
			_counter = collection.length;

			//if no records matched, there is nothing to start
			if(_counter===0){
				context.log('no records retrieved');
				context.done();
				return;
			}

			//loop through the results and call the workflow for each one
			collection.forEach(function (row, i) {
				callWorkflow(context, token, row['accountid']);
			});
		});
	});
	crmrequest.on('error', function(e) {
		context.log.error(e);
		context.done();
	});
	//close the web api request
	crmrequest.end();
}

function callWorkflow(context, token, entityid){
	var crmwebapiworkflowpath = _apipath + &quot;/workflows(&quot;+_workflowid+&quot;)/Microsoft.Dynamics.CRM.ExecuteWorkflow&quot;;

	//set the web api request headers
	var requestheaders = { 
		'Authorization': 'Bearer ' + token,
		'OData-MaxVersion': '4.0',
		'OData-Version': '4.0',
		'Accept': 'application/json',
		'Content-Type': 'application/json; charset=utf-8'
	};
	
	//set the crm request parameters
	var crmrequestoptions = {
		host: _crmwebapihost,
		path: crmwebapiworkflowpath,
		method: 'POST',
		headers: requestheaders
	};

	//create an object to post to the executeworkflow action
	var reqobj = {};
	reqobj[&quot;EntityId&quot;] = entityid;
	
	//turn it into a string
	var reqjson = JSON.stringify(reqobj);
	
	//calculate the length to set the content-length header
	crmrequestoptions.headers['Content-Length'] = Buffer.byteLength(reqjson);
	
	//make the web api request
	context.log('starting workflow request for ' + entityid);
	var crmrequest = https.request(crmrequestoptions, function(response) {
		//make an array to hold the response parts if we get multiple parts
		var responseparts = [];
		response.setEncoding('utf8');
		response.on('data', function(chunk) {
			//add each response chunk to the responseparts array for later
			responseparts.push(chunk);		
		});
		response.on('end', function(){
			//once we have all the response parts, concatenate the parts into a single string
			var completeresponse = responseparts.join('');
			context.log('success ' + entityid);
			
			//decrement the counter
			_counter = _counter-1;
			
			//if nothing is left to start, we are done
			if(_counter==0){
				context.log('all workflows started');
				context.done();
			}
		});
	});
	crmrequest.on('error', function(e) {
		context.log.error(e);
		context.done();
	});
	crmrequest.write(reqjson);

	//close the web api request
	crmrequest.end();
}
</code></pre>
<p>Then in the Azure Portal, I configured an Azure Function app to query accounts and execute the workflow every five minutes. Here are the detailed steps to replicate that.</p>
<ol>
<li>Create a new Function app via New-&gt;Compute-&gt;Function App. <img src="https://alexanderdevelopment.net/content/images/2016/11/01.png#img-thumbnail" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"></li>
<li>Set the app name, resource group, etc. <img src="https://alexanderdevelopment.net/content/images/2016/11/02.png#img-thumbnail" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"></li>
<li>Once the new Function app is provisioned, open it. <img src="https://alexanderdevelopment.net/content/images/2016/11/03.png#img-thumbnail" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"></li>
<li>Select &quot;new function&quot; on the left. <img src="https://alexanderdevelopment.net/content/images/2016/11/04.png#img-thumbnail" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"></li>
<li>Set language to &quot;JavaScript&quot; and scenario to &quot;Core.&quot; Find the &quot;TimerTrigger-JavaScript&quot; template and select it. <img src="https://alexanderdevelopment.net/content/images/2016/11/05.png#img-thumbnail" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"></li>
<li>Give your function a name and set the schedule options. The <a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer#schedule-examples">schedule value</a> is a CRON expression that includes six fields: {second} {minute} {hour} {day} {month} {day of the week}. You can accept the default value of every five minutes and change it later. Click &quot;create&quot; to create the new function. <img src="https://alexanderdevelopment.net/content/images/2016/11/06.png#img-thumbnail" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"></li>
<li>Copy the Node.js code from above and paste it into the editor window. Set any specifics relative to your Dynamics 365 organization, and click save. (You can also use Git for deploying your code, but that's beyond the scope of today's post.) <img src="https://alexanderdevelopment.net/content/images/2016/11/07.png#img-thumbnail" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"></li>
<li>On the &quot;integrate&quot; tab, you can modify the timer schedule. The schedule shown (0 */5 * * * *) will execute the function every five minutes. <img src="https://alexanderdevelopment.net/content/images/2016/11/08.png#img-thumbnail" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"></li>
<li>The function will automatically execute at the next fifth minute, and the invocation log is available on the monitor tab. Selecting a specific invocation row shows detailed logging output on the right. <img src="https://alexanderdevelopment.net/content/images/2016/11/09.png#img-thumbnail" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"></li>
<li>This screenshot shows the process sessions for when the workflow was executed in Dynamics 365. <img src="https://alexanderdevelopment.net/content/images/2016/11/10.png#img-thumbnail" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"></li>
<li>This screenshot shows the note records that were created by the workflow. <img src="https://alexanderdevelopment.net/content/images/2016/11/11.png#img-thumbnail" alt="Scheduling Dynamics 365 workflows with Azure Functions and Node.js"></li>
</ol>
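<p>The six-field schedule from step 6 can be made concrete with a small helper. This is an illustrative sketch of my own (the function and field names are not part of the Azure Functions API):</p>

```javascript
// Split an NCRONTAB expression ({second} {minute} {hour} {day} {month} {day-of-week})
// into named fields so a schedule like '0 */5 * * * *' is easier to read.
function parseNcrontab(expression) {
	var names = ['second', 'minute', 'hour', 'day', 'month', 'dayOfWeek'];
	var parts = expression.trim().split(/\s+/);
	if (parts.length !== names.length) {
		throw new Error('expected ' + names.length + ' fields, got ' + parts.length);
	}
	var fields = {};
	names.forEach(function (name, i) {
		fields[name] = parts[i];
	});
	return fields;
}
```

<p>For the default schedule, <code>parseNcrontab('0 */5 * * * *')</code> returns <code>second: '0'</code> and <code>minute: '*/5'</code>, i.e. the function fires at second zero of every fifth minute.</p>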
<p>A few notes/caveats:</p>
<ol>
<li>My Node.js code has hardly any error handling right now. If the workflow execution call returns an error, the Node.js code will not recognize it as an error.</li>
<li>My CRM record retrieval is set to retrieve a maximum of 500 records. You would need to modify the Web API request logic to handle more.</li>
<li>Per the <a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-best-practices">&quot;Best Practices for Azure Functions&quot;</a> guide:</li>
</ol>
<blockquote>
<p>Assume your function could encounter an exception at any time. Design your functions with the ability to continue from a previous fail point during the next execution.</p>
</blockquote>
<p>This means you should put logic in your workflow to make sure that duplicate executions are avoided (unless that's what you intend to happen).</p>
<p>This sample just scratches the surface of what's possible with Azure Functions and Dynamics 365, and I'm looking forward to working with Azure Functions more in the future. Have you looked at Azure Functions yet? What do you think? Please let me know in the comments.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Dynamics 365 and Node.js integration using the Web API]]></title><description><![CDATA[<div class="kg-card-markdown"><p>I wrote a <a href="https://alexanderdevelopment.net/post/2015/01/24/authenticating-from-a-node-js-client-to-dynamics-crm-via-ad-fs-and-oauth2/">blog post</a> in early 2015 that showed how to access the Dynamics CRM organization data service from a Node.js application. Today I will show an easy way to retrieve data from a Dynamics 365 Online organization in a Node.js application using the Web API.</p>
<p>Unlike</p></div>]]></description><link>https://alexanderdevelopment.net/post/2016/11/23/dynamics-365-and-node-js-integration-using-the-web-api/</link><guid isPermaLink="false">5a5837246636a30001b97860</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[programming]]></category><category><![CDATA[integration]]></category><category><![CDATA[Dynamics 365]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Wed, 23 Nov 2016 16:17:09 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2016/11/cmd_2016-11-23_09-53-57-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2016/11/cmd_2016-11-23_09-53-57-1.png" alt="Dynamics 365 and Node.js integration using the Web API"><p>I wrote a <a href="https://alexanderdevelopment.net/post/2015/01/24/authenticating-from-a-node-js-client-to-dynamics-crm-via-ad-fs-and-oauth2/">blog post</a> in early 2015 that showed how to access the Dynamics CRM organization data service from a Node.js application. Today I will show an easy way to retrieve data from a Dynamics 365 Online organization in a Node.js application using the Web API.</p>
<p>Unlike the CRM organization service, the Dynamics 365 Web API does not allow you to authenticate directly with a user name and password. Instead you have to authenticate using OAuth to get a token, and then you pass that token to the Web API. Microsoft has created the <a href="https://github.com/AzureAD/azure-activedirectory-library-for-nodejs">&quot;Windows Azure Active Directory Authentication Library (ADAL) for Node.js&quot;</a> that can be used to get an OAuth2 token, but in my sample today, I will be making the token request without using ADAL.</p>
<p>Before you begin, you need to make sure you have registered an application in Azure Active Directory that can access your Dynamics 365 online organization. Follow the instructions on <a href="https://msdn.microsoft.com/en-us/library/mt622431.aspx">this page</a> to register a new Dynamics 365 application. Scroll down to the &quot;Register an application with Microsoft Azure&quot; section and make sure you register the application as a native client application instead of a web application.</p>
<p>After your application is registered, you need to find the endpoint where you can post an OAuth token request.</p>
<ol>
<li>
<p>If you are using the classic Azure management portal, you should click the &quot;develop applications&quot; button in the &quot;I want to&quot; section. Then click the &quot;View authentication and authorization endpoints&quot; link. <img src="https://alexanderdevelopment.net/content/images/2016/11/chrome_2016-11-23_08-59-20.png#img-thumbnail" alt="Dynamics 365 and Node.js integration using the Web API"> You should then see a list of various endpoints for your tenant. Copy the OAuth 2.0 token endpoint value. <img src="https://alexanderdevelopment.net/content/images/2016/11/chrome_2016-11-23_08-59-32.png#img-thumbnail" alt="Dynamics 365 and Node.js integration using the Web API"></p>
</li>
<li>
<p>If you are using the new Azure management portal, once you have your tenant selected in the AD management blade, select &quot;app registrations&quot; on the left and &quot;endpoints&quot; on the top. You should then see your tenant's endpoints to the right. Copy the OAuth 2.0 token endpoint value. <img src="https://alexanderdevelopment.net/content/images/2016/11/chrome_2016-11-23_09-01-17.png#img-thumbnail" alt="Dynamics 365 and Node.js integration using the Web API"></p>
</li>
</ol>
<p>The OAuth token endpoint should look like this <code>https://login.windows.net/SOME_GUID_VALUE/oauth2/token</code>.</p>
<p>Once you have your OAuth token endpoint and your application client id, you can prepare a client application. Here's a sample Node.js application I wrote to retrieve contacts from the Web API and display their names.</p>
<pre><code>'use strict';
var https = require('https');

//set these values to retrieve the oauth token
var crmorg = 'https://CRMORG...dynamics.com';
var clientid = 'CLIENT ID FROM EARLIER';
var username = 'CRM USERNAME';
var userpassword = 'CRM PASSWORD';
var tokenendpoint = 'OAUTH TOKEN ENDPOINT FROM EARLIER';

//set these values to query your crm data
var crmwebapihost = 'CRMORG.api.crm.dynamics.com';
var crmwebapipath = '/api/data/v8.2/contacts?$select=fullname,contactid'; //basic query to select contacts

//remove https from tokenendpoint url
tokenendpoint = tokenendpoint.toLowerCase().replace('https://','');

//get the authorization endpoint host name
var authhost = tokenendpoint.split('/')[0];

//get the authorization endpoint path
var authpath = '/' + tokenendpoint.split('/').slice(1).join('/');

//build the authorization request
//if you want to learn more about how tokens work, see IETF RFC 6749 - https://tools.ietf.org/html/rfc6749
var reqstring = 'client_id='+clientid;
reqstring+='&amp;resource='+encodeURIComponent(crmorg);
reqstring+='&amp;username='+encodeURIComponent(username);
reqstring+='&amp;password='+encodeURIComponent(userpassword);
reqstring+='&amp;grant_type=password';

//set the token request parameters
var tokenrequestoptions = {
	host: authhost,
	path: authpath,
	method: 'POST',
	headers: {
		'Content-Type': 'application/x-www-form-urlencoded',
		'Content-Length': Buffer.byteLength(reqstring)
	}
};

//make the token request
var tokenrequest = https.request(tokenrequestoptions, function(response) {
	//make an array to hold the response parts if we get multiple parts
	var responseparts = [];
	response.setEncoding('utf8');
	response.on('data', function(chunk) {
		//add each response chunk to the responseparts array for later
		responseparts.push(chunk);		
	});
	response.on('end', function(){
		//once we have all the response parts, concatenate the parts into a single string
		var completeresponse = responseparts.join('');
		//console.log('Response: ' + completeresponse);
		console.log('Token response retrieved . . . ');
		
		//parse the response JSON
		var tokenresponse = JSON.parse(completeresponse);
		
		//extract the token
		var token = tokenresponse.access_token;
		
		//pass the token to our data retrieval function
		getData(token);
	});
});
tokenrequest.on('error', function(e) {
	console.error(e);
});

//post the token request data
tokenrequest.write(reqstring);

//close the token request
tokenrequest.end();


function getData(token){
	//set the web api request headers
	var requestheaders = { 
		'Authorization': 'Bearer ' + token,
		'OData-MaxVersion': '4.0',
		'OData-Version': '4.0',
		'Accept': 'application/json',
		'Content-Type': 'application/json; charset=utf-8',
		'Prefer': 'odata.maxpagesize=500, odata.include-annotations=OData.Community.Display.V1.FormattedValue'
	};
	
	//set the crm request parameters
	var crmrequestoptions = {
		host: crmwebapihost,
		path: crmwebapipath,
		method: 'GET',
		headers: requestheaders
	};
	
	//make the web api request
	var crmrequest = https.request(crmrequestoptions, function(response) {
		//make an array to hold the response parts if we get multiple parts
		var responseparts = [];
		response.setEncoding('utf8');
		response.on('data', function(chunk) {
			//add each response chunk to the responseparts array for later
			responseparts.push(chunk);		
		});
		response.on('end', function(){
			//once we have all the response parts, concatenate the parts into a single string
			var completeresponse = responseparts.join('');
			
			//parse the response JSON
			var collection = JSON.parse(completeresponse).value;
			
			//loop through the results and write out the fullname
			collection.forEach(function (row, i) {
				console.log(row['fullname']);
			});
		});
	});
	crmrequest.on('error', function(e) {
		console.error(e);
	});
	//close the web api request
	crmrequest.end();
}
</code></pre>
<p>When I run this from my local PC against my Dynamics 365 org with sample data installed, I get this output:<br>
<img src="https://alexanderdevelopment.net/content/images/2016/11/cmd_2016-11-23_09-53-57.png#img-thumbnail" alt="Dynamics 365 and Node.js integration using the Web API"></p>
<p>Although my sample application isn't fancy, it shows that authenticating to Dynamics 365 and retrieving data without requiring special libraries is now much easier than it used to be in the pre-CRM 2016 days, and that is very exciting.</p>
<p>What do you think? Please let me know your thoughts in the comments.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Dynamics CRM and the Internet of Things - part 5]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is the fifth and final post in a five-part series on how I integrated a Raspberry Pi with Microsoft Dynamics CRM to recognize contacts using automobile license plates. Although the code samples are focused on license plate recognition, the solution architecture I used is applicable to any Dynamics CRM</p></div>]]></description><link>https://alexanderdevelopment.net/post/2016/01/18/dynamics-crm-and-the-internet-of-things-part-5/</link><guid isPermaLink="false">5a5837236636a30001b977fb</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Raspberry Pi]]></category><category><![CDATA[Internet of Things]]></category><category><![CDATA[Node.js]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Mon, 18 Jan 2016 22:17:57 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/12/streaming-interface-flow.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/12/streaming-interface-flow.png" alt="Dynamics CRM and the Internet of Things - part 5"><p>This is the fifth and final post in a five-part series on how I integrated a Raspberry Pi with Microsoft Dynamics CRM to recognize contacts using automobile license plates. Although the code samples are focused on license plate recognition, the solution architecture I used is applicable to any Dynamics CRM + Internet of Things (IoT) integration. In my <a href="https://alexanderdevelopment.net/post/2016/01/12/dynamics-crm-and-the-internet-of-things-part-4/">previous post</a>, I showed how to execute the license plate recognition and contact search with JavaScript directly in the web resource. 
Today I will show how to set up a streaming interface so the Raspberry Pi can take a picture, parse the plate number and trigger the web resource to search for and display a contact without any input from the end user.</p>
<h4 id="theapproach">The approach</h4>
<p>As I described in <a href="https://alexanderdevelopment.net/post/2015/12/14/dynamics-crm-and-the-internet-of-things-part-1/">part 1</a> of this series, in this approach the Raspberry Pi takes a picture, parses the plate number and writes it to a streaming interface using Socket.IO. A web page or client application picks up the plate numbers from that interface, and then it queries CRM for a contact with the returned license plate number to display the details to the user. Essentially this is just a variation on what I described in my <a href="https://alexanderdevelopment.net/post/2014/12/03/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-1/">&quot;Creating a near real-time streaming interface for Dynamics CRM with Node.js&quot;</a> series in late 2014.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/streaming-interface-flow.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 5"></p>
<h4 id="nodejscode">Node.js code</h4>
<p>In <a href="https://alexanderdevelopment.net/post/2015/12/21/dynamics-crm-and-the-internet-of-things-part-2/">part 2</a> I showed the complete Node.js code to support all the scenarios in this series, but I did not discuss the streaming interface. It behaves almost exactly like the non-streaming interface, except that instead of writing the JSON result object in the HTTP response, it emits the object to a Socket.IO interface.</p>
<pre><code>app.get('/check_plate_stream', function (req, res) {
	//generate a guid to use in the captured image file name
	var uuid1 = uuid.v1();

	//tell the webcam to take a picture and store it in the captures directory using the guid as the name
	exec('fswebcam -r 1280x720 --no-banner --quiet ./captures/' + uuid1 + '.jpg',
	  function (error, stdout, stderr) {
		if (error !== null) {
		  //log any errors
		  console.log('exec error: ' + error);
		}

		//now that the picture is saved, parse it with openalpr and return the results as json (the -j switch)
		//this call is nested in the fswebcam callback so alpr doesn't run before the image file exists
		exec('alpr -j ./captures/' + uuid1 + '.jpg',
		  function (error, stdout, stderr) {
			//create a json object based on the alpr output
			var plateOutput = JSON.parse(stdout.toString());

			//add an "image" attribute to the alpr json that has a path to the captured image
			//this is so the client can view the license plate picture to verify alpr parsed it correctly
			plateOutput.image = '/captures/' + uuid1 + '.jpg';

			//write the json to the socket.io interface
			io.emit('message', plateOutput);

			//return a response to the caller that the message was sent
			res.send('message sent');

			//log the response from alpr
			console.log('alpr response: ' + stdout.toString());

			if (error !== null) {
			  //log any errors
			  console.log('exec error: ' + error);
			}
		});
	});
});
</code></pre>
<p>The triggering mechanism is still a web route, so it can be called in a number of different ways without having to modify the base code. I've written a separate Node.js application that uses an <a href="http://amzn.com/B00SXZWMCS">HC-SR04 ultrasonic rangefinder</a> to detect when a license plate moves into view and then call the streaming interface URL to trigger the license plate capture and recognition. The motion detection script is available on GitHub <a href="https://github.com/lucasalexander/Crm-Sample-Code/blob/master/CrmLicensePlateRecognition/detectmotion.js">here</a>.</p>
<p>I am running the motion detection script on the same Raspberry Pi as the webcam, but it could easily be run on a separate piece of hardware, too. You could also use the same approach to connect it to any sort of other sensor or input.</p>
<p>Here is a picture of my motion detection rig with my unimpressed Great Dane behind it for scale.<br>
<img src="https://alexanderdevelopment.net/content/images/2016/01/20160118_154636.jpg#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 5"></p>
<h4 id="thewebresource">The web resource</h4>
<p>The web resource loads the Socket.IO library from a CDN.<br>
<code>&lt;script src=&quot;https://cdn.socket.io/socket.io-1.2.0.js&quot;&gt;&lt;/script&gt;</code></p>
<p>The web resource then connects to the streaming interface and listens for a message. Once it picks up a license plate number from the streaming interface, it then queries CRM to find a contact just like in <a href="https://alexanderdevelopment.net/post/2016/01/12/dynamics-crm-and-the-internet-of-things-part-4/">part 4</a>.</p>
<pre><code>var piRootPath = "http://192.168.1.112:3000";
var socket = io("http://192.168.1.112:3000");

socket.on('message', function(resultObj){
	$("#outputdiv").text("");
	if(resultObj.results.length > 0) {
		var plateNum = resultObj.results[0].plate;
		var imgUrl = resultObj.image;
		$("#outputdiv").append("Detected plate number: " + plateNum + "<br>");

		//show the captured plate image
		$("#outputdiv").append("&lt;img src='" + piRootPath + imgUrl + "' width='400' /&gt;");

		var oDataURI = Xrm.Page.context.getClientUrl()
			+ "/XRMServices/2011/OrganizationData.svc/"
			+ "ContactSet?$select=ContactId,FullName&$filter=lpa_Platenumber eq '" + plateNum + "'";

		var req = new XMLHttpRequest();
		req.open("GET", encodeURI(oDataURI), true);
		req.setRequestHeader("Accept", "application/json");
		//req.setRequestHeader("Content-Type", "application/json; charset=utf-8");
		req.onreadystatechange = function () {
			if (this.readyState == 4 /* complete */) {
				req.onreadystatechange = null; //avoids memory leaks
				if (this.status == 200) {
					successCrmCallback(JSON.parse(this.responseText).d.results);
				}
				else {
					errorCallback();
				}
			}
		};
		req.send();
	}
	else {
		$("#outputdiv").append("No plate detected<br>");
	}
});
</code></pre>
<p>If a matching contact is found, it's displayed. Otherwise a failure message is displayed instead.</p>
<pre><code>function successCrmCallback(contacts) {
	if(contacts.length > 0) {
		var contactUrl = Xrm.Page.context.getClientUrl() + "/main.aspx?etc=2&extraqs=&pagetype=entityrecord&id=%7b" + contacts[0].ContactId + "%7d";
		$("#outputdiv").prepend("Contact: &lt;a href='" + contactUrl + "' target='_blank'&gt;" + contacts[0].FullName + "&lt;/a&gt;&lt;br /&gt;");	
	}
	else {
		//otherwise display a message that no contact could be found
		$("#outputdiv").prepend("No contact found<br>");
	}
	$("#checkButton").prop('disabled', false);
}
</code></pre>
<p>The web resource (lpa_checkplatestream.htm) is included in my sample CRM solution along with the contact entity configured to store the license plate number. The sample CRM solution is available in my GitHub repository <a href="https://github.com/lucasalexander/Crm-Sample-Code/blob/master/CrmLicensePlateRecognition/LicensePlateDemo_0_0_0_1.zip">here</a>.</p>
<h4 id="demo">Demo</h4>
<p>Here's the CRM web resource opened in a new tab. Note there's no button to trigger the plate capture and recognition.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/stream-web-resource-before.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 5"></p>
<p>This is the web page used to trigger the plate capture and recognition, opened in Chrome.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/stream-trigger.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 5"> When using my motion detection script, this page is never shown to the end user.</p>
<p>Finally here is the web resource once it gets the message from the stream and looks up the contact details. If the plate recognition is triggered again, the page will update itself.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/stream-web-resource-after.jpg#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 5"></p>
<h4 id="wrappingup">Wrapping up</h4>
<p>I hope you've enjoyed reading this series as much as I've enjoyed writing it. Through the process of working out the various scenarios I've certainly learned a lot about how the Raspberry Pi can be combined with Dynamics CRM to support novel business processes.</p>
<p>Here are links to all the previous posts in this series.</p>
<ol>
<li><a href="https://alexanderdevelopment.net/post/2015/12/14/dynamics-crm-and-the-internet-of-things-part-1/">Part 1</a> - Series introduction</li>
<li><a href="https://alexanderdevelopment.net/post/2015/12/21/dynamics-crm-and-the-internet-of-things-part-2/">Part 2</a> - Node.js application</li>
<li><a href="https://alexanderdevelopment.net/post/2016/01/03/dynamics-crm-and-the-internet-of-things-part-3/">Part 3</a> - CRM custom assembly trigger</li>
<li><a href="https://alexanderdevelopment.net/post/2016/01/11/dynamics-crm-and-the-internet-of-things-part-4/">Part 4</a> - JavaScript in CRM web resource trigger</li>
</ol>
<p>Do you have plans to integrate your Dynamics CRM system with the Internet of Things? If so, how? Let us know in the comments!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Dynamics CRM and the Internet of Things - part 4]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is the fourth post in a five-part series on how I integrated a Raspberry Pi with Microsoft Dynamics CRM to recognize contacts using automobile license plates. Although the code samples are focused on license plate recognition, the solution architecture I used is applicable to any Dynamics CRM + Internet of</p></div>]]></description><link>https://alexanderdevelopment.net/post/2016/01/11/dynamics-crm-and-the-internet-of-things-part-4/</link><guid isPermaLink="false">5a5837236636a30001b977f6</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Raspberry Pi]]></category><category><![CDATA[Internet of Things]]></category><category><![CDATA[Node.js]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Mon, 11 Jan 2016 13:38:49 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/12/javascript-web-resource-flow.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/12/javascript-web-resource-flow.png" alt="Dynamics CRM and the Internet of Things - part 4"><p>This is the fourth post in a five-part series on how I integrated a Raspberry Pi with Microsoft Dynamics CRM to recognize contacts using automobile license plates. Although the code samples are focused on license plate recognition, the solution architecture I used is applicable to any Dynamics CRM + Internet of Things (IoT) integration. In my <a href="https://alexanderdevelopment.net/post/2016/01/03/dynamics-crm-and-the-internet-of-things-part-3/">previous post</a>, I showed how to trigger the license plate recognition process and then use the extracted license plate number to find a contact in my Dynamics CRM organization with a custom workflow activity. 
Today I'll show how to execute the license plate recognition and contact search with JavaScript directly in the web resource.</p>
<h4 id="theapproach">The approach</h4>
<p>As I described in <a href="https://alexanderdevelopment.net/post/2015/12/14/dynamics-crm-and-the-internet-of-things-part-1/">part 1</a> of this series, this approach uses JavaScript to call the Raspberry Pi to take a picture and return the parsed license plate number. Once a license plate number is retrieved, the JavaScript code then queries CRM for a contact with the returned license plate number and displays its details to the user. As long as the CRM end user's computer can access the Raspberry Pi, it doesn't matter if the Pi is visible to the CRM server, so this easily works with CRM Online.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/javascript-web-resource-flow.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 4"></p>
<h4 id="thewebresource">The web resource</h4>
<p>A web resource is used so the user can interactively trigger the license plate recognition and open the contact record if a match is found. The web resource also displays the image that is captured by the Raspberry Pi so the user can validate that the license plate number extracted by OpenALPR matches the actual license plate number.</p>
<p>Executing the license plate recognition on the Raspberry Pi just requires making a GET request to the Node.js web page described in <a href="https://alexanderdevelopment.net/post/2015/12/21/dynamics-crm-and-the-internet-of-things-part-2/">part 2</a> and then parsing the JSON response.</p>
<pre><code>//command to start checkplate call when button is pushed
function executeCheckplate() {
	$("#outputdiv").text("");
	$("#checkButton").prop('disabled', true);
	var checkplateURI = piRootPath + "/check_plate";
	var req = new XMLHttpRequest();
	req.open("GET", encodeURI(checkplateURI), true);
	req.setRequestHeader("Accept", "application/json");
	//req.setRequestHeader("Content-Type", "application/json; charset=utf-8");
	req.onreadystatechange = function () {
		if (this.readyState == 4 /* complete */) {
			req.onreadystatechange = null; //avoids memory leaks
			if (this.status == 200) {
				successPlateCallback(JSON.parse(this.responseText));
			}
			else {
				errorCallback();
			}
		}
	};
	req.send();
}
</code></pre>
<p>Once the response is returned from the Raspberry Pi, the web resource then queries CRM to find a contact, unless no plate was detected, in which case it displays a failure message.</p>
<pre><code>function successPlateCallback(resultObj) {
	if(resultObj.results.length > 0) {
		var plateNum = resultObj.results[0].plate;
		var imgUrl = resultObj.image;
		$("#outputdiv").append("Detected plate number: " + plateNum + "<br>");
		
		//show the captured plate image
		$("#outputdiv").append("&lt;img src='" + piRootPath + imgUrl+ "' width='400' /&gt;");
		
		var oDataURI = Xrm.Page.context.getClientUrl()
        + "/XRMServices/2011/OrganizationData.svc/"
        + "ContactSet?$select=ContactId,FullName&$filter=lpa_Platenumber eq '" + plateNum +"'";
		
		var req = new XMLHttpRequest();
		req.open("GET", encodeURI(oDataURI), true);
		req.setRequestHeader("Accept", "application/json");
		//req.setRequestHeader("Content-Type", "application/json; charset=utf-8");
		req.onreadystatechange = function () {
			if (this.readyState == 4 /* complete */) {
				req.onreadystatechange = null; //avoids memory leaks
				if (this.status == 200) {
					successCrmCallback(JSON.parse(this.responseText).d.results);
				}
				else {
					errorCallback();
				}
			}
		};
		req.send();
	}
	else {
		$("#outputdiv").append("No plate detected<br>");
		$("#checkButton").prop('disabled', false);
	}
}
</code></pre>
<p>If a matching contact is found, it's displayed. Otherwise a failure message is displayed instead.</p>
<pre><code>function successCrmCallback(contacts) {
	if(contacts.length > 0) {
		var contactUrl = Xrm.Page.context.getClientUrl() + "/main.aspx?etc=2&extraqs=&pagetype=entityrecord&id=%7b" + contacts[0].ContactId + "%7d";
		$("#outputdiv").prepend("Contact: &lt;a href='" + contactUrl + "' target='_blank'&gt;" + contacts[0].FullName + "&lt;/a&gt;&lt;br /&gt;");
	}
	else {
		//otherwise display a message that no contact could be found
		$("#outputdiv").prepend("No contact found<br>");
	}
	$("#checkButton").prop('disabled', false);
}
</code></pre>
<p>The web resource (lpa_checkplatejs.htm) is included in my sample CRM solution along with the contact entity configured to store the license plate number. The sample CRM solution is available in my GitHub repository <a href="https://github.com/lucasalexander/Crm-Sample-Code/blob/master/CrmLicensePlateRecognition/LicensePlateDemo_0_0_0_1.zip">here</a>.</p>
<h4 id="demo">Demo</h4>
<p>Here's the web resource opened in a new tab.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/js-web-resource-before.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 4"></p>
<p>Here's the result after I click the &quot;check plate&quot; button. The contact name is a hyperlink that will open the contact record in a new window.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/js-web-resource-after.jpg#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 4"></p>
<p>That's it for today. In my next and final post in this series, I'll show how to set up a streaming interface so the Raspberry Pi can take a picture and parse the plate number, which will then trigger the web resource to search for and display a contact without any input from the end user.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Dynamics CRM and the Internet of Things - part 3]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is the third post in a five-part series on how I integrated a Raspberry Pi with Microsoft Dynamics CRM to recognize contacts using automobile license plates. Although the code samples are focused on license plate recognition, the solution architecture I used is applicable to any Dynamics CRM + Internet of</p></div>]]></description><link>https://alexanderdevelopment.net/post/2016/01/03/dynamics-crm-and-the-internet-of-things-part-3/</link><guid isPermaLink="false">5a5837236636a30001b977f1</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Raspberry Pi]]></category><category><![CDATA[Internet of Things]]></category><category><![CDATA[Node.js]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Sun, 03 Jan 2016 23:19:06 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/12/custom-assembly-flow-2.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/12/custom-assembly-flow-2.png" alt="Dynamics CRM and the Internet of Things - part 3"><p>This is the third post in a five-part series on how I integrated a Raspberry Pi with Microsoft Dynamics CRM to recognize contacts using automobile license plates. Although the code samples are focused on license plate recognition, the solution architecture I used is applicable to any Dynamics CRM + Internet of Things (IoT) integration. In my <a href="https://alexanderdevelopment.net/post/2015/12/21/dynamics-crm-and-the-internet-of-things-part-2/">previous post</a>, I showed how I set up my Raspberry Pi to capture images and parse them for license plate numbers. 
In today's post, I will show how to trigger the license plate recognition process and then use the extracted license plate number to find a contact in my Dynamics CRM organization with a custom workflow activity.</p>
<h4 id="theapproach">The approach</h4>
<p>As I described in <a href="https://alexanderdevelopment.net/post/2015/12/14/dynamics-crm-and-the-internet-of-things-part-1/">part 1</a> of this series, this approach uses a web resource, a dialog or some other interactive mechanism to call a custom workflow activity that instructs the Raspberry Pi to take a picture and return the parsed license plate number. The code hosted in CRM then searches for a contact with the returned license plate number and displays its details to the user. This requires that the CRM server be able to communicate with the Raspberry Pi, which may be challenging for CRM Online deployments.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/custom-assembly-flow-1.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 3"></p>
<p>Today I'll show how to call the custom workflow activity from a web resource, which will require the use of a custom CRM action so that the web resource can execute the functionality with a JavaScript call.</p>
<h4 id="thecustomworkflowassembly">The custom workflow assembly</h4>
<p>Interacting with the Node.js web page just requires making a GET request and then parsing the JSON response, so the <a href="https://alexanderdevelopment.net/Postingprocessing-JSON-in-396ead03">approach to working with JSON data in custom workflow assemblies</a> that I've used in several other posts will work great for this.</p>
<p>There are only a few changes required to that sample:</p>
<ol>
<li>Modify the JSON response classes to match the JSON object returned by the Node.js application.</li>
<li>Modify the web request to be a GET instead of a POST.</li>
<li>Add logic to search for contacts by license plate number and return the contact id as a string.</li>
<li>Update the input/output parameters to return the contact, license plate and image details.</li>
</ol>
<p>The code for the custom workflow assembly is available in my Crm-Sample-Code repository on GitHub <a href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmLicensePlateRecognition/LicensePlateDemo">here</a>.</p>
<p>One thing to keep in mind is that because I wanted to register the assembly in isolation, I had to create a hosts entry for my Raspberry Pi on my CRM application server since sandboxed assemblies cannot make web requests to IP address URLs. Alternatively I could have created an entry on my LAN DNS server.</p>
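<p>For example, the hosts entry on the CRM application server might look like the following (the hostname is made up for illustration; the IP is my Pi's address), after which the workflow assembly would call http://crmpi:3000/check_plate instead of an IP-based URL:</p>

```
192.168.1.112    crmpi
```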
<h4 id="thecustomcrmaction">The custom CRM action</h4>
<p>A custom CRM action is used to wrap the workflow assembly so that it can be called from a web resource via JavaScript. The action simply passes a request to the workflow assembly and returns its response to the client.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/custom-action.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 3"></p>
<h4 id="thewebresource">The web resource</h4>
<p>Finally a web resource is used so the user can interactively trigger the license plate recognition and open the contact record if a match is found. The web resource also displays the image that is captured by the Raspberry Pi so the user can validate that the license plate number extracted by OpenALPR matches the actual license plate number.</p>
<p>The web resource (lpa_checkplate.htm) is included in my sample CRM solution along with the custom action, compiled plugin and contact entity configured to store the license plate number. The sample CRM solution is available in my GitHub repository <a href="https://github.com/lucasalexander/Crm-Sample-Code/blob/master/CrmLicensePlateRecognition/LicensePlateDemo_0_0_0_1.zip">here</a>.</p>
<h4 id="demo">Demo</h4>
<p>Here's the web resource opened in a new tab.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/web-resource-before-1.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 3"></p>
<p>Here's the result after I click the &quot;check plate&quot; button. The contact name is a hyperlink that will open the contact record in a new window.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/web-resource-after.jpg#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 3"></p>
<p>That's it for today. In my next post, I'll show how to execute the license plate recognition and contact search with JavaScript directly in the web resource.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Dynamics CRM and the Internet of Things - part 2]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is the second post in a five-part series on how I integrated a Raspberry Pi with Microsoft Dynamics CRM to recognize contacts using automobile license plates. As I mentioned in the <a href="https://alexanderdevelopment.net/post/2015/12/14/dynamics-crm-and-the-internet-of-things-part-1/">first post</a> of the series, the solution architecture I used is applicable to any Dynamics CRM + Internet of</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/12/21/dynamics-crm-and-the-internet-of-things-part-2/</link><guid isPermaLink="false">5a5837236636a30001b977ec</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Internet of Things]]></category><category><![CDATA[Raspberry Pi]]></category><category><![CDATA[Node.js]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Mon, 21 Dec 2015 16:50:02 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/12/node-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/12/node-1.png" alt="Dynamics CRM and the Internet of Things - part 2"><p>This is the second post in a five-part series on how I integrated a Raspberry Pi with Microsoft Dynamics CRM to recognize contacts using automobile license plates. As I mentioned in the <a href="https://alexanderdevelopment.net/post/2015/12/14/dynamics-crm-and-the-internet-of-things-part-1/">first post</a> of the series, the solution architecture I used is applicable to any Dynamics CRM + Internet of Things (IoT) integration. In today's post I will show how I set up my Raspberry Pi to handle the basic license plate capture and parsing operations.</p>
<h4 id="tooling">Tooling</h4>
<p>For this demonstration I am using the following tools:</p>
<ol>
<li><a href="https://www.raspberrypi.org/products/raspberry-pi-2-model-b/">Raspberry Pi 2 Model B</a> running <a href="https://www.raspberrypi.org/downloads/raspbian/">Raspbian</a> (Any Linux distribution that runs on the Raspberry Pi should work with the code I'm showing, but the configuration steps later might be different.)</li>
<li><a href="http://www.logitech.com/en-us/product/hd-webcam-c615">Logitech HD Webcam C615</a> to take photos</li>
<li><a href="http://www.openalpr.com/">OpenALPR</a> to recognize license plates</li>
<li><a href="https://nodejs.org">Node.js</a> to provide a web-based interface to OpenALPR (and post to <a href="http://socket.io/">Socket.IO</a> in the streaming interface)</li>
</ol>
<p>OpenALPR and Node.js both run on Windows, so I think it should be possible to create a comparable solution using <a href="https://www.raspberrypi.org/blog/windows-10-for-iot/">Windows 10 for IoT</a> on a Raspberry Pi 2. Also, if you don't have a Pi or comparable IoT device, you can try this out on any system where you can run the software.</p>
<h4 id="basicraspberrypiconfiguration">Basic Raspberry Pi configuration</h4>
<p>Configuring the Raspberry Pi is beyond the scope of today's post, but as long as you have a Raspberry Pi running Raspbian with network access you should be good to go. My Pi is on my LAN with an IP address of 192.168.1.112, and all ports are open.</p>
<h4 id="camera">Camera</h4>
<p>I am using a Logitech HD Webcam C615 because I just happened to have one available. Theoretically any Raspberry Pi-compatible webcam or the dedicated Raspberry Pi camera module should work. I am using the fswebcam package to interact with my webcam, and I followed the instructions <a href="https://www.raspberrypi.org/documentation/usage/webcams">here</a> to get it working. As you'll see from the captured image at the end of this post, the quality of the images I'm capturing leaves something to be desired, but things seem to be working well enough that I haven't explored different configuration options to improve the quality.</p>
<h4 id="openalpr">OpenALPR</h4>
<p>OpenALPR is the tool that parses images to find license plate numbers. You can download it from GitHub and follow the directions in the &quot;Easy Way&quot; section <a href="https://github.com/openalpr/openalpr/wiki/Compilation-instructions-(Ubuntu-Linux)#the-easy-way">here</a> to build and install it. It should also be possible to use Docker as described <a href="https://github.com/openalpr/openalpr#docker">here</a>, but I haven't tried it myself.</p>
<p>Once you have OpenALPR installed, you can test that it's working using sample images like so:</p>
<pre>wget http://plates.openalpr.com/ea7the.jpg
alpr -c us ea7the.jpg

wget http://plates.openalpr.com/h786poj.jpg
alpr -c eu h786poj.jpg</pre>
<p>OpenALPR can also run as a daemon that continuously checks a camera stream and outputs detected license plates, but I chose not to use it here because the open source version never stops: if you leave a license plate in front of the webcam, the daemon just keeps returning the parsed plate number over and over again. I believe the commercial version includes a motion detection feature, but since I wanted to put this sample together on the cheap, I didn't explore it.</p>
<h4 id="nodejs">Node.js</h4>
<p>Having put together a <a href="https://alexanderdevelopment.net/tag/node-js/">few proof-of-concept interfaces</a> using Node.js in the past, it was the first thing that came to mind to create a web-based interface to OpenALPR. You could use a variety of other options like Python, Perl, PHP or C# if you prefer, but Node.js made setting up this part of the sample extremely easy.</p>
<p>First you need to make sure Node.js is installed:</p>
<pre>sudo apt-get install nodejs</pre>
<p>Then you need to install the following modules with npm:</p>
<ol>
<li>node-uuid</li>
<li>express</li>
<li>socket.io</li>
</ol>
<h4 id="nodejscode">Node.js code</h4>
<p>Here is the complete Node.js code to support all the scenarios in this series:</p>
<pre><code>var http = require('http');
var express = require('express'),
    app = module.exports.app = express();
var server = http.createServer(app);
var io = require('socket.io').listen(server, {log:false, origins:'*:*'})
var uuid = require('node-uuid');
var sys = require('sys'),
    exec = require('child_process').exec;

//allow clients to directly view the images in the captures directory
app.use('/captures', express.static('captures'));

//route for the home page
app.get('/', function (req, res) {
	res.send('home page');
});

//route to handle a client calling node to check a plate
app.get('/check_plate', function (req, res) {
	//generate a guid to use in the captured image file name
	var uuid1 = uuid.v1();
	
	//tell the webcam to take a picture and store it in the captures directory using the guid as the name
	exec('fswebcam -r 1280x720 --no-banner --quiet ./captures/' + uuid1 + '.jpg',
	  function (error, stdout, stderr) {
		if (error !== null) {
		  //log any errors
		  console.log('exec error: ' + error);
		}

		//now that the picture is saved, parse it with openalpr and return the results as json (the -j switch)
		//this call is nested in the fswebcam callback so alpr doesn't run before the image file exists
		exec('alpr -j ./captures/' + uuid1 + '.jpg',
		  function (error, stdout, stderr) {
			//create a json object based on the alpr output
			var plateOutput = JSON.parse(stdout.toString());

			//add an "image" attribute to the alpr json that has a path to the captured image
			//this is so the client can view the license plate picture to verify alpr parsed it correctly
			plateOutput.image = '/captures/' + uuid1 + '.jpg';

			//set some headers to deal with CORS
			res.header("Access-Control-Allow-Origin", "*");
			res.header("Access-Control-Allow-Headers", "X-Requested-With");

			//send the json back to the client
			res.json(plateOutput);

			//log the response from alpr
			console.log('alpr response: ' + stdout.toString());

			if (error !== null) {
			  //log any errors
			  console.log('exec error: ' + error);
			}
		});
	});
});

//route to handle a request for a license plate capture to be written to a socket.io interface
//basically the same as the non-streaming interface except the output gets written somewhere different
app.get('/check_plate_stream', function (req, res) {
	//generate a guid to use in the captured image file name
	var uuid1 = uuid.v1();
	
	//tell the webcam to take a picture and store it in the captures directory using the guid as the name
	exec('fswebcam -r 1280x720 --no-banner --quiet ./captures/' + uuid1 + '.jpg',
	  function (error, stdout, stderr) {
		if (error !== null) {
		  //log any errors
		  console.log('exec error: ' + error);
		}

		//now that the picture is saved, parse it with openalpr and return the results as json (the -j switch)
		//this call is nested in the fswebcam callback so alpr doesn't run before the image file exists
		exec('alpr -j ./captures/' + uuid1 + '.jpg',
		  function (error, stdout, stderr) {
			//create a json object based on the alpr output
			var plateOutput = JSON.parse(stdout.toString());

			//add an "image" attribute to the alpr json that has a path to the captured image
			//this is so the client can view the license plate picture to verify alpr parsed it correctly
			plateOutput.image = '/captures/' + uuid1 + '.jpg';

			//write the json to the socket.io interface
			io.emit('message', plateOutput);

			//return a response to the caller that the message was sent
			res.send('message sent');

			//log the response from alpr
			console.log('alpr response: ' + stdout.toString());

			if (error !== null) {
			  //log any errors
			  console.log('exec error: ' + error);
			}
		});
	});
});

//start the server listening on port 3000
server.listen(3000, function () {
	console.log('App listening');
});
</code></pre>
<p>If you look at the &quot;check_plate&quot; route starting at line 19, you can see the script does the following things:</p>
<ol>
<li>Generate a new GUID to use in the captured image name.</li>
<li>Take a picture with the webcam and save it to the &quot;captures&quot; directory.</li>
<li>Execute OpenALPR to parse the image and return the results in a JSON object.</li>
<li>Modify the JSON object to include the image path.</li>
<li>Return the JSON object to the client.</li>
</ol>
<h4 id="tryingitout">Trying it out</h4>
<p>To try out the code, do the following:</p>
<ol>
<li>Save the Node.js script from above as app.js.</li>
<li>Upload it to a directory on your Pi.</li>
<li>Create a directory called &quot;captures&quot; under that directory.</li>
<li>Start the application using the following command: nodejs app.js.</li>
<li>Navigate to the URL for the Node.js application from a web browser.</li>
</ol>
<p>Here's a picture of my testing &quot;studio.&quot; The Pi and webcam are sitting on a TV tray in front of an old license plate propped up on a couch in my office.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/20151221_102014_resized-1.jpg#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 2"></p>
<p>When I call Node.js from my browser, this is what I see.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/web-page.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 2"></p>
<p>This is the JSON response formatted for easier reading.</p>
<pre><code>{
	"version":2,
	"data_type":"alpr_results",
	"epoch_time":1450713975002,
	"img_width":1280,
	"img_height":720,
	"processing_time_ms":2567.216553,
	"regions_of_interest":[],
	"results":[
	{
		"plate":"43C32Y3",
		"confidence":91.097946,
		"matches_template":0,
		"plate_index":0,
		"region":"",
		"region_confidence":0,
		"processing_time_ms":269.342682,
		"requested_topn":10,
		"coordinates":[
		{
			"x":414,
			"y":315
		},
		{
			"x":812,
			"y":319
		},
		{
			"x":812,
			"y":516
		},
		{
			"x":414,
			"y":511
		}],
		"candidates":[
		{
			"plate":"43C32Y3",
			"confidence":91.097946,
			"matches_template":0
		},
		{
			"plate":"43G32Y3",
			"confidence":81.515587,
			"matches_template":0
		},
		{
			"plate":"43C3ZY3",
			"confidence":81.408203,
			"matches_template":0
		},
		{
			"plate":"43C32YS",
			"confidence":80.856506,
			"matches_template":0
		},
		{
			"plate":"43C32Y",
			"confidence":79.70826,
			"matches_template":0
		},
		{
			"plate":"43G3ZY3",
			"confidence":71.825851,
			"matches_template":0
		},
		{
			"plate":"43G32YS",
			"confidence":71.274155,
			"matches_template":0
		},
		{
			"plate":"43C3ZYS",
			"confidence":71.166771,
			"matches_template":0
		},
		{
			"plate":"43G32Y",
			"confidence":70.1259,
			"matches_template":0
		},
		{
			"plate":"43C3ZY",
			"confidence":70.018524,
			"matches_template":0
		}]
	}],
	"image":"/captures/be8da820-a7fc-11e5-a63a-7bb250f23f1c.jpg"
}
</code></pre>
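<p>A consumer of this response usually only needs the top-ranked candidate. As a rough sketch (the helper name is mine, not part of OpenALPR), the parsing could look like this:</p>

```javascript
// Pick the highest-confidence plate from a raw alpr JSON string.
// parseAlprResult is a hypothetical helper name, not an OpenALPR API.
function parseAlprResult(alprJson) {
  var output = JSON.parse(alprJson);
  if (!output.results || output.results.length === 0) {
    return null; // alpr found no plate in the image
  }
  // alpr orders results and candidates by confidence, highest first
  var best = output.results[0];
  return { plate: best.plate, confidence: best.confidence };
}

// trimmed-down version of the response shown above
var sample = JSON.stringify({
  results: [
    { plate: '43C32Y3', confidence: 91.097946 },
    { plate: '43G32Y3', confidence: 81.515587 }
  ]
});
console.log(parseAlprResult(sample).plate); // 43C32Y3
```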
<p>Here is the Node.js output from the Raspberry Pi command line.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/node.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 2"></p>
<p>This is the actual image the webcam captured. The white balance is horrible, but the plate number is clear enough for OpenALPR to recognize.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/12/be8da820-a7fc-11e5-a63a-7bb250f23f1c.jpg#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 2"></p>
<p>That's it for now. In my next post, I'll show how to trigger the license plate recognition functionality from a custom assembly in Dynamics CRM.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Dynamics CRM and the Internet of Things - part 1]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Today's post is the first in a five-part series on how I integrated a Raspberry Pi with Microsoft Dynamics CRM to recognize contacts using automobile license plates. Although my solution is focused on the use of license plate numbers captured by a webcam, the solution architecture is applicable to any</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/12/14/dynamics-crm-and-the-internet-of-things-part-1/</link><guid isPermaLink="false">5a5837236636a30001b977e7</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Raspberry Pi]]></category><category><![CDATA[Internet of Things]]></category><category><![CDATA[Node.js]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Mon, 14 Dec 2015 16:02:17 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/12/javascript-web-resource-flow-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/12/javascript-web-resource-flow-1.png" alt="Dynamics CRM and the Internet of Things - part 1"><p>Today's post is the first in a five-part series on how I integrated a Raspberry Pi with Microsoft Dynamics CRM to recognize contacts using automobile license plates. Although my solution is focused on the use of license plate numbers captured by a webcam, the solution architecture is applicable to any Dynamics CRM + Internet of Things (IoT) integration. Over the course of this series, I will show how I configured my Raspberry Pi and how I built different integrations to use the license plate data in Dynamics CRM.</p>
<h4 id="background">Background</h4>
<p>Last year I was working on a Dynamics CRM project for one of the largest automakers in the world, and I got to thinking about whether it'd be possible to integrate license plate recognition with our CRM system. This would have given dealers a tool so that service advisers could immediately see customer details and service preferences as soon as a car drove into a service bay. My idea never went anywhere, and eventually I moved on to a different company, but I'd still think about it every once in a while.</p>
<p>Then I saw an <a href="http://arstechnica.com/business/2015/12/new-open-source-license-plate-reader-software-lets-you-make-your-own-hot-list/">article</a> last week about open-source license plate reader software called <a href="http://www.openalpr.com/">OpenALPR</a> that got me thinking about this again. Because I'd just gotten a Raspberry Pi 2, and I had an old webcam that was just gathering dust in my office, I finally had everything I needed to build a Dynamics CRM license plate reader integration.</p>
<h4 id="theapproach">The approach</h4>
<p>Thinking about this from the perspective of a CRM user who wants to be able to recognize a contact based on a license plate, there are a few obvious elements required:</p>
<ol>
<li>Camera to capture a license plate image</li>
<li>Software to parse the license plate number from the image (OpenALPR)</li>
<li>Code to trigger the license plate number parsing and return results to a consumer</li>
<li>Field to store license plate numbers in CRM</li>
<li>Code to retrieve and display the CRM contact details</li>
</ol>
<p>The right way to put these together is less obvious because there are several possible approaches, but none are appropriate in all scenarios. For example, if the CRM user is going to trigger the license plate recognition in CRM, there are at least three potential approaches:</p>
<ol>
<li>CRM custom assembly - Using a web resource, a dialog or some other interactive mechanism, a custom workflow activity or plug-in is called that instructs the Raspberry Pi to take a picture and return the parsed license plate number. The code hosted in CRM then searches for a contact with the returned license plate number and displays its details to the user. This approach requires that the CRM server be able to communicate with the Raspberry Pi, which may be challenging for CRM Online deployments.<img src="https://alexanderdevelopment.net/content/images/2015/12/custom-assembly-flow.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 1"></li>
<li>JavaScript in web resource - Using a web resource, JavaScript is called that instructs the Raspberry Pi to take a picture and return the parsed license plate number. Once a license plate number is retrieved, the JavaScript code then queries CRM for a contact with the returned license plate number and displays its details to the user. As long as the CRM end user's computer can access the Raspberry Pi, it doesn't matter if the Pi is visible to the CRM server, so this easily works with CRM Online.<img src="https://alexanderdevelopment.net/content/images/2015/12/javascript-web-resource-flow.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 1"></li>
<li>Business logic on the Raspberry Pi - Using a web resource, JavaScript is called that instructs the Raspberry Pi to take a picture and parse the license plate number. Once a license plate number is retrieved, the Raspberry Pi then queries CRM for a contact with the returned license plate number and returns its details to the user. I don't love this approach because it requires extra effort to get the Pi to communicate with CRM, and I prefer to keep the Pi as &quot;dumb&quot; as possible.<img src="https://alexanderdevelopment.net/content/images/2015/12/raspberry-pi-business-logic-flow.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 1"></li>
</ol>
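<p>For the JavaScript web resource approach, the CRM lookup itself boils down to an OData query against the contact entity. Here is a minimal sketch; the plate field name <code>new_platenumber</code> is a placeholder for whatever custom field you create:</p>

```javascript
// Build the OrganizationData.svc query a web resource might issue to find
// a contact by plate number. 'new_platenumber' is a hypothetical field name.
function buildContactQuery(orgUrl, plate) {
  return orgUrl + '/XRMServices/2011/OrganizationData.svc/ContactSet' +
    '?$select=FullName,ContactId' +
    "&$filter=new_platenumber eq '" + encodeURIComponent(plate) + "'";
}

console.log(buildContactQuery('https://crm.example.com', '43C32Y3'));
```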
<p>If the license plate recognition is triggered via camera motion detection, a physical button or a sensor without direct input from the CRM user, there are at least two potential approaches:</p>
<ol>
<li>Streaming interface - The Raspberry Pi takes a picture, parses the plate number and writes it to a streaming interface using Socket.IO or a similar mechanism. A web page or client application picks up the plate numbers from that interface, and then it queries CRM for a contact with the returned license plate number to display the details to the user. This approach is basically a variation on what I described in my <a href="https://alexanderdevelopment.net/post/2014/12/03/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-1/">&quot;Creating a near real-time streaming interface for Dynamics CRM with Node.js&quot; series</a> last year.<img src="https://alexanderdevelopment.net/content/images/2015/12/streaming-interface-flow.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 1"></li>
<li>Back-end data post - The Raspberry Pi takes a picture, parses the plate number and posts it to CRM. The data can be posted directly to CRM or use a <a href="https://alexanderdevelopment.net/post/2015/01/12/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-1/">message queue</a>. This approach is good if a CRM user doesn't need immediate access to the data.<img src="https://alexanderdevelopment.net/content/images/2015/12/backend-data-post-flow.png#img-thumbnail" alt="Dynamics CRM and the Internet of Things - part 1"></li>
</ol>
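<p>On the consuming side of the streaming approach, the client simply subscribes to the Socket.IO channel and pulls the plate numbers out of each message. A hedged sketch of just the handler logic (the Socket.IO wiring and the CRM lookup are assumed, not implemented here):</p>

```javascript
// Extract the plate numbers from one streamed alpr message so each one
// can be looked up in CRM. Assumes the message is the alpr JSON object.
function platesFromMessage(plateOutput) {
  return plateOutput.results.map(function (result) {
    return result.plate;
  });
}

// wiring sketch (assumes a connected Socket.IO client named 'socket'):
// socket.on('message', function (m) {
//   platesFromMessage(m).forEach(lookUpContactInCrm);
// });

console.log(platesFromMessage({ results: [{ plate: '43C32Y3' }] }));
```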
<p>In this series I will show how I built solutions that demonstrate three of the approaches above:</p>
<ol>
<li>CRM custom assembly</li>
<li>JavaScript in web resource</li>
<li>Streaming interface</li>
</ol>
<p>As for why I'm not showing the other approaches:</p>
<ol>
<li>I just don't think the business logic on the Raspberry Pi approach is useful here. I'm sure there are times it might make sense, but my hypothetical scenario here isn't one of them.</li>
<li>The back-end data post approach is almost the same as the streaming interface, except the Pi posts the data to CRM instead of Socket.IO.</li>
</ol>
<p>In my next post, I'll show how I set up my Raspberry Pi to handle the basic license plate capture and parsing operations. See you then!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 5]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is the final post in my five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ. In <a href="https://alexanderdevelopment.net/post/2015/01/20/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-3">part 3</a> and <a href="https://alexanderdevelopment.net/post/2015/01/22/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-4">part 4</a> I showed two approaches for building a Dynamics CRM plug-in that publishes notification messages to a RabbitMQ exchange. In today’s post I will show</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/01/27/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-5/</link><guid isPermaLink="false">5a5837236636a30001b977c7</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[CRM 2015]]></category><category><![CDATA[C#]]></category><category><![CDATA[JSON]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[RabbitMQ]]></category><category><![CDATA[integration]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Tue, 27 Jan 2015 18:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-1.png" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 5"><p>This is the final post in my five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ. 
In <a href="https://alexanderdevelopment.net/post/2015/01/20/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-3">part 3</a> and <a href="https://alexanderdevelopment.net/post/2015/01/22/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-4">part 4</a> I showed two approaches for building a Dynamics CRM plug-in that publishes notification messages to a RabbitMQ exchange. In today’s post I will show how to create a Windows console application that reads messages from a queue and writes the data to Dynamics CRM. The code for this application is available on <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub</a> in the LeadWriterSample project under the LucasCrmMessageQueueTools solution.</p>
<h4 id="theapproach">The approach</h4>
<p>This application is extraordinarily simple. On startup it prompts the user to supply connection information for the RabbitMQ queue that it will monitor as well as a Dynamics CRM connection string. It then monitors the queue for new JSON-formatted messages. When new messages arrive, it attempts to deserialize them into a lightweight &quot;leadtype&quot; object, and then it creates new lead records in CRM. Once a message is successfully processed and a lead is created, the application then sends a confirmation back to RabbitMQ so that the message can be removed from the queue.</p>
<p>The following code shows what happens after a connection to the RabbitMQ server is established:<pre><code>//wait for some messages
var consumer = new QueueingBasicConsumer(channel);
channel.BasicConsume(_queue, false, consumer);

Console.WriteLine(&quot; [*] Waiting for messages. To exit press CTRL+C&quot;);

//instantiate crm org service
using (OrganizationService service = new OrganizationService(_targetConn))
{
   while (true)
   {
     //get the message from the queue
     var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();

     var body = ea.Body;
     var message = Encoding.UTF8.GetString(body);

     try
     {
       //deserialize message json to object
       LeadType lead = JsonConvert.DeserializeObject&lt;LeadType&gt;(message);

       try
       {
         //create record in crm
         Entity entity = new Entity(&quot;lead&quot;);
         entity[&quot;firstname&quot;] = lead.FirstName;
         entity[&quot;lastname&quot;] = lead.LastName;
         entity[&quot;subject&quot;] = lead.Topic;
         entity[&quot;companyname&quot;] = lead.Company;
         service.Create(entity);

         //write success message to cli
         Console.WriteLine(&quot;Created lead: {0} {1}&quot;, lead.FirstName, lead.LastName);

         //IMPORTANT - tell the queue the message was processed successfully so it doesn't get requeued
         channel.BasicAck(ea.DeliveryTag, false);
       }
       catch (FaultException&lt;Microsoft.Xrm.Sdk.OrganizationServiceFault&gt; ex)
       {
         //return error - note no confirmation is sent to the queue, so the message will be requeued
         Console.WriteLine(&quot;Could not create lead: {0} {1}&quot;, lead.FirstName, lead.LastName);
         Console.WriteLine(&quot;Error: {0}&quot;, ex.Message);
       }
     }
     catch (Exception ex)
     {
       //return error - note no confirmation is sent to the queue, so the message will be requeued
       Console.WriteLine(&quot;Could not process message from queue&quot;);
       Console.WriteLine(&quot;Error: {0}&quot;, ex.Message);
     }
   }
}</code></pre></p>
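<p>On the publishing side, any producer just needs to emit JSON that deserializes cleanly into that LeadType object. A sketch in Node.js (the property names mirror the LeadType fields the C# code reads; the helper name is mine):</p>

```javascript
// Build the JSON message the LeadWriterSample console app expects.
// Property names match the LeadType fields used in the C# snippet above.
function buildLeadMessage(firstName, lastName, topic, company) {
  return JSON.stringify({
    FirstName: firstName,
    LastName: lastName,
    Topic: topic,
    Company: company
  });
}

console.log(buildLeadMessage('Jane', 'Doe', 'Web inquiry', 'Contoso'));
```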
<p>If this were to be used in production, I would have created a Windows service instead of a console application, but I wanted to make it easy to try out different connection parameters.</p>
<h4 id="verifyingtheapplication">Verifying the application</h4>
<p>The queuewriter.js application in the node-app directory in the <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub repository</a> contains a sample web page that can be used to publish lead data to the CRM-Leads queue. If the application is running, you can access the web page at http://&lt;YOUR_SERVER_NAME&gt;:3000/leadform. When the form’s submit button is clicked, an AJAX call posts a JSON object to the Node.js POST endpoint I showed in my previous post. If the LeadWriterSample console application is running, it will take the message from the queue and you will see a new lead record created in CRM. The screenshots below show each piece working.</p>
<p><img src="https://alexanderdevelopment.net/content/images/2015/10/5-01-lead-form.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 5"><br>
<em>The lead has been submitted via the web form, and a success message has been received from the Node.js endpoint.</em></p>
<p><img src="https://alexanderdevelopment.net/content/images/2015/10/5-02-lead-queue.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 5"><br>
<em>The lead has landed in the CRM-Leads queue and is ready to be retrieved.</em></p>
<p><img src="https://alexanderdevelopment.net/content/images/2015/10/5-03-lead-processed.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 5"><br>
<em>The console application has retrieved and processed the submitted lead message.</em></p>
<p><img src="https://alexanderdevelopment.net/content/images/2015/10/5-04-lead-crm.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 5"><br>
<em>The lead record has been created in CRM.</em></p>
<p>One caveat about the demo lead form is that it has the RabbitMQ credentials embedded in the HTML source, so this code should not be used in production. My approach was originally formulated with the thought that a server-side process would build the JSON message to post to Node.js, so sensitive information would not be exposed. If you decide to use an AJAX post operation like the one shown here, you would want to modify the queuewriter.js application to contain the credentials so they do not need to be passed from the end user’s web browser.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>That does it for this series, but I’ve just barely explored the capabilities of RabbitMQ. There’s so much more you can do with it than what I’ve shown here, and I hope I’ve piqued your interest about how you can use RabbitMQ or any other message broker in your Dynamics CRM projects. If you have any questions or want to continue the discussion, please share your thoughts in the comments.</p>
<p><em>A version of this post was originally published on the HP Enterprise Services Application Services blog.</em></p>
</div>]]></content:encoded></item><item><title><![CDATA[Authenticating from a Node.js client to Dynamics CRM via AD FS and OAuth2]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Last week I decided to finally take a look at using OAuth2 as an authentication protocol with Dynamics CRM. I wanted to understand how it could enable non-Windows clients to consume CRM data. As it turns out, I was unable to find any documentation or comprehensive code samples for non-Windows</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/01/23/authenticating-from-a-node-js-client-to-dynamics-crm-via-ad-fs-and-oauth2/</link><guid isPermaLink="false">5a5837226636a30001b97747</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[JSON]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[CRM 2015]]></category><category><![CDATA[Node.js]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Sat, 24 Jan 2015 00:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/adfs-diesel.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/adfs-diesel.jpg" alt="Authenticating from a Node.js client to Dynamics CRM via AD FS and OAuth2"><p>Last week I decided to finally take a look at using OAuth2 as an authentication protocol with Dynamics CRM. I wanted to understand how it could enable non-Windows clients to consume CRM data. As it turns out, I was unable to find any documentation or comprehensive code samples for non-Windows clients, so I put together my own Node.js client, and I've added the code to my Crm-Sample-Code repository on GitHub here: <a href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/NodeClientDemo">https://github.com/lucasalexander/Crm-Sample-Code/tree/master/NodeClientDemo</a>. 
Having endured a lot of frustration in getting this to work, I'd like to share some additional notes that might be helpful if you decide to start using OAuth2 with CRM.</p>
<p>If you're not already familiar with OAuth2, I suggest you take a look at this post on the Microsoft Dynamics CRM blog that explains how CRM uses OAuth at a high level: <a href="http://blogs.msdn.com/b/crm/archive/2013/12/12/use-oauth-to-authenticate-with-the-crm-service.aspx">http://blogs.msdn.com/b/crm/archive/2013/12/12/use-oauth-to-authenticate-with-the-crm-service.aspx</a>. Although there's no code in that post, it will help you understand how OAuth authentication works.</p>
<h4 id="infrastructureandenvironmentprep">Infrastructure and environment prep</h4>
<p>I am running Dynamics CRM 2015 and Active Directory Federation Services (AD FS) on a single Windows Server 2012 R2 Azure VM. CRM and AD FS are configured for IFD. The CRM website is running on port 443. AD FS is running on port 444.</p>
<p>Before writing any code, I completed all the prep work outlined in this CRM SDK walkthrough - <a href="https://msdn.microsoft.com/en-us/library/dn531010.aspx">https://msdn.microsoft.com/en-us/library/dn531010.aspx</a>. Specifically I enabled OAuth2 for CRM, and I registered an OAuth2 client application in AD FS. Although this was all done as part of an on-premise Dynamics CRM deployment, I don't see any reason that it won't work with CRM Online.</p>
<h4 id="thecode">The code</h4>
<p>Now, let's take a look at the Node.js application. I chose to write a Node.js client instead of something in C# for two main reasons. First, Node.js made it easy for me to inspect and modify all the HTTP headers that are required for getting OAuth2 to work. Second, Node.js is basically server-side JavaScript, so most of the work I've done can be easily ported to a client-side JavaScript implementation.</p>
<p>My Node.js application is written using the <a href="http://expressjs.com/">Express web framework</a>, and it serves four web pages via routes. They are:</p>
<ol>
<li>/ - This is the web application index page. It displays links to the login page and contacts display page.</li>
<li>/auth/login - This page handles redirection of the browser to the AD FS login page.</li>
<li>/auth/callback - This is the page to which AD FS redirects the user's browser after a successful login.</li>
<li>/authenticated/contacts - This page queries the CRM OrganizationData service for contacts using an OAuth token for authentication.</li>
</ol>
<p>The basic flow of the application is:</p>
<ol>
<li>A user starts on the index page. The page checks for an OAuth token in a session variable. If no token is present, a link to the login page is shown. If a token is present, a link to the contact display page is shown.</li>
<li>When a user navigates to the login page, it makes a request to the CRM OrganizationData service to request the correct URL to use for authentication and redirects the browser to that page. The client id, resource name and redirect uri are all included in the query string of the request to AD FS. (See <a href="https://github.com/nordvall/TokenClient/wiki/OAuth-2-Authorization-Code-grant-in-ADFS">https://github.com/nordvall/TokenClient/wiki/OAuth-2-Authorization-Code-grant-in-ADFS</a> for more information on how this works.)</li>
<li>The user authenticates with AD FS, and then AD FS redirects the user to the callback page with an authorization code in the query string.</li>
<li>The callback page parses the authorization code from the query string and sends it to AD FS to request a token. The token is stored in a session cookie, and then the user is redirected to the index page, which should now show a link to the contact display page.</li>
<li>The contact display page reads the token from the session cookie and makes an OData request to CRM with the token supplied as the authorization header. The results are then parsed and displayed.</li>
</ol>
<p>A few caveats:</p>
<ol>
<li>OAuth2 tokens eventually expire. The default AD FS OAuth2 token expiration value is 3600 seconds (one hour). It is possible to request a new token using a refresh token that is provided at the same time as the authorization token. Using the refresh token allows for reauthorization without needing to supply credentials again. My code sample does not demonstrate use of a refresh token.</li>
<li>My sample application doesn't have much in the way of error handling, and it has not been extensively tested. If you plan to use OAuth with CRM, I highly recommend you don't just deploy my code in production without any further testing.</li>
</ol>
</div>]]></content:encoded></item><item><title><![CDATA[Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 4]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Welcome back to my five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ. In my <a href="https://alexanderdevelopment.net/post/2015/01/20/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-3">last post</a> I showed how to build a Dynamics CRM plug-in that publishes notification messages to a RabbitMQ exchange using the <a target="_blank" href="https://www.rabbitmq.com/dotnet.html" rel="nofollow">official RabbitMQ .Net client library</a>. Unfortunately, that plug-in can’t</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/01/22/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-4/</link><guid isPermaLink="false">5a5837236636a30001b977bf</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[CRM 2015]]></category><category><![CDATA[C#]]></category><category><![CDATA[JSON]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[integration]]></category><category><![CDATA[RabbitMQ]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 22 Jan 2015 18:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-2.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-2.png" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 4"><p>Welcome back to my five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ. 
In my <a href="https://alexanderdevelopment.net/post/2015/01/20/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-3">last post</a> I showed how to build a Dynamics CRM plug-in that publishes notification messages to a RabbitMQ exchange using the <a target="_blank" href="https://www.rabbitmq.com/dotnet.html" rel="nofollow">official RabbitMQ .Net client library</a>. Unfortunately, that plug-in can’t successfully communicate with a RabbitMQ server if it’s executed inside the Dynamics CRM sandbox, so in today’s post I will show how to achieve the same results with a sandboxed plug-in. The code for this plug-in is available on <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub</a> in the MessageQueueSandboxPlugin project under the LucasCrmMessageQueueTools solution.</p>
<h4 id="theapproach">The approach</h4>
<p>As I mentioned in my previous post, last month I wrote a series of blog posts about how to create a near real-time streaming API using plug-ins and Node.js. That plug-in worked fine in the Dynamics CRM sandbox, and Node.js can easily publish messages to a RabbitMQ exchange, so today’s plug-in will post a JSON-formatted message to a Node.js application, and then that Node.js application will do the actual publishing to RabbitMQ. As a result, I only need to make a couple of minor modifications to <a href="https://alexanderdevelopment.net/post/2014/12/09/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-3/">my earlier Node.js message-posting plug-in</a> so that it can pass the RabbitMQ connection parameters to my Node.js application. Additionally, the Node.js application that I described in my earlier series only needs a few changes to publish the message to a RabbitMQ exchange instead of sending it to Socket.IO clients.</p>
<h4 id="theplugin">The plug-in</h4>
<p>The plug-in is registered for an operation (create, update, delete, etc.) with a FetchXML query in its unsecure configuration. When the plug-in step is triggered, its associated FetchXML query is executed, and then the resulting fields are serialized into a JSON object, which is then sent to a Node.js application called queuewriter.js via an HTTP POST request. The JSON object also needs to contain RabbitMQ connection details, so I pass them as part of the plug-in step’s unsecure configuration. Here’s the configuration XML fragment to enable case notifications:</p>
<pre><code>&lt;nodeendpoint&gt;http://lucas-ajax.cloudapp.net:3000/rabbit_post_endpoint&lt;/nodeendpoint&gt;
&lt;endpoint&gt;lucas-ajax.cloudapp.net&lt;/endpoint&gt;
&lt;exchange&gt;CRM&lt;/exchange&gt;
&lt;routingkey&gt;Case&lt;/routingkey&gt;
&lt;user&gt;rabbituser&lt;/user&gt;
&lt;password&gt;PASSWORDHERE&lt;/password&gt;
&lt;query&gt;&lt;![CDATA[
&lt;fetch mapping='logical'&gt;
&lt;entity name='incident'&gt;
&nbsp;&lt;attribute name='ownerid'/&gt;
&nbsp;&lt;attribute name='modifiedby'/&gt;
&nbsp;&lt;attribute name='createdby'/&gt;
&nbsp;&lt;attribute name='title'/&gt;
&nbsp;&lt;attribute name='incidentid'/&gt;
&nbsp;&lt;attribute name='ticketnumber'/&gt;
&nbsp;&lt;attribute name='createdon'/&gt;
&nbsp;&lt;attribute name='modifiedon'/&gt;
&nbsp;&lt;filter type='and'&gt;
&nbsp; &lt;condition attribute='incidentid' operator='eq' value='{0}' /&gt;
&nbsp;&lt;/filter&gt;
&lt;/entity&gt;
&lt;/fetch&gt;
]]&gt;
&lt;/query&gt;
&lt;/config&gt;</code></pre>
<p>Just like in my <a href="https://alexanderdevelopment.net/post/2014/12/09/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-3/">earlier Node.js plug-in</a>, the FetchXML is extracted from the configuration XML, and the query is executed against Dynamics CRM. The results are then serialized to JSON using <a target="_blank" href="http://james.newtonking.com/json" rel="nofollow">Json.NET</a> just like before, except the serialized CRM data is included as a &quot;message&quot; object that is part of a parent JSON object that includes the RabbitMQ connection parameters. Here’s an example of the structure:<pre><code>{<br>
   &quot;endpoint&quot;:&quot;lucas-ajax.cloudapp.net&quot;,<br>
   &quot;username&quot;:&quot;rabbituser&quot;,<br>
   &quot;password&quot;:&quot;XXXXXXXX&quot;,<br>
   &quot;exchange&quot;:&quot;CRM&quot;,<br>
   &quot;routingkey&quot;:&quot;Lead&quot;,<br>
   &quot;message&quot;:{<br>
     &quot;property1&quot;:&quot;value 1&quot;,<br>
     &quot;property2&quot;:&quot;value 2&quot;,<br>
     &quot;property3&quot;:&quot;value 3&quot;<br>
   }<br>
}</code></pre></p>
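<p>Before attempting to publish, the receiving application can check that every expected envelope field is present. The following validation helper is only an illustrative sketch; the field names come from the sample JSON above, but the helper itself is not part of the GitHub sample:</p>

```javascript
// Check that a parsed envelope contains every field the publisher needs.
// Field names match the sample envelope shown above.
function validateEnvelope(envelope) {
  var required = ['endpoint', 'username', 'password', 'exchange', 'routingkey', 'message'];
  var missing = required.filter(function (field) {
    return !(field in envelope);
  });
  return { valid: missing.length === 0, missing: missing };
}
```

<p>A check like this lets the Node.js application reject a malformed request immediately instead of failing later inside the RabbitMQ connection logic.</p>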
<p>Because this plug-in uses the Json.NET client library, it has to be merged with the plug-in assembly before registering it in Dynamics CRM. I’ve included a batch script called ilmerge.bat in the project directory on <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub</a>.</p>
<h4 id="thenodejsapplication">The Node.js application</h4>
<p>The Node.js application (queuewriter.js) waits to receive JSON messages via HTTP POST from a client. When it receives a POST request, it checks whether the message is valid JSON. If it is, the RabbitMQ connection parameters are extracted and then the notification &quot;message&quot; object is published to the RabbitMQ exchange. If everything is successful, it sends &quot;success&quot; back as a response to the client. If any errors are encountered, it sends back a descriptive error message. I am using the <a target="_blank" href="https://github.com/postwait/node-amqp" rel="nofollow">node-amqp</a> library for communicating with the RabbitMQ server, but the behavior isn’t that different from a .Net client. Here’s an extract with the relevant code:<pre><code>if (request.method == 'POST') {<br>
   request.on('data', function(chunk) {<br>
     //check if received data is valid json<br>
     if(IsJsonString(chunk.toString())){<br>
       //convert message to json object<br>
       var requestobject = JSON.parse(chunk.toString());<br>
      <br>
       //connect to rabbitmq<br>
       var connection = amqp.createConnection({ host: requestobject.endpoint<br>
       , port: 5672 //assumes default port<br>
       , login: requestobject.username<br>
       , password: requestobject.password<br>
       , connectionTimeout: 0<br>
       , authMechanism: 'AMQPLAIN'<br>
       , vhost: '/' //assumes default vhost<br>
       });<br>
      <br>
       //when connection is ready<br>
       connection.on('ready', function () {<br>
          //get the &quot;message&quot; property of the supplied request<br>
          var message = JSON.stringify(requestobject.message);<br>
         <br>
          //post it to the exchange with the supplied routing key<br>
          connection.exchange = connection.exchange(requestobject.exchange, {passive: true, confirm: true }, function(exchange) {<br>
            exchange.publish(requestobject.routingkey, message, {mandatory: true, deliveryMode: 2}, function () {<br>
              //if successful, write message to console<br>
              console.log('Message published: ' + message);<br>
             <br>
              //send &quot;success&quot; back in response<br>
              response.write('success');<br>
             <br>
              //close the rabbitmq connection and end the response<br>
              connection.end();<br>
              response.end();<br>
            });<br>
          });<br>
       });<br>
      <br>
       //if an error occurs with rabbitmq<br>
       connection.on('error', function () {<br>
          //send error message back in response and end it<br>
          response.write('failure writing message to exchange');<br>
          response.end();<br>
       });<br>
     }<br>
     else {<br>
       //if request contains invalid json<br>
       //send error message back in response and end it<br>
       response.write(&quot;invalid JSON&quot;);<br>
       response.end();<br>
     }<br>
   });<br>
}</code></pre></p>
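<p>The IsJsonString helper called in the extract above isn’t shown here. A straightforward implementation (the version in the GitHub repository may differ slightly) simply wraps JSON.parse in a try/catch:</p>

```javascript
// Returns true if the supplied string parses as valid JSON, false otherwise.
// Matches the IsJsonString helper name used in the queuewriter.js extract.
function IsJsonString(str) {
  try {
    JSON.parse(str);
    return true;
  } catch (e) {
    return false;
  }
}
```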
<p>The complete queuewriter.js application is contained in the node-app directory in the <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub repository</a>.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>In addition to registering the plug-in and a step to publish a notification message to RabbitMQ, you need to deploy and start the queuewriter.js application to publish messages. Once that’s done, you can verify everything is working as expected either by looking at the Queues tab in the RabbitMQ management web UI or by running the CliConsumer sample application I showed in <a href="https://alexanderdevelopment.net/post/2015/01/14/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-2">part 2</a>.</p>
<p>Obviously using queuewriter.js as a message proxy adds an extra layer of complexity, and you have to make sure that the application is up and running in order to process messages, but it also offers a couple of advantages. First, by using queuewriter.js instead of a direct connection, you can easily use this same plug-in with different message brokers like Apache ActiveMQ and Microsoft’s Azure Service Bus. Second, the queuewriter.js application isn’t limited to just handling messages outbound from Dynamics CRM. You can also use it to process inbound messages without any changes. You just have to configure a client application to read messages from the queue and process them accordingly. A good example of this would be writing data submitted through a web form to Dynamics CRM via a RabbitMQ queue, and I will show that exact scenario in my next post!</p>
<p><em>A version of this post was originally published on the HP Enterprise Services Application Services blog.</em></p>
</div>]]></content:encoded></item><item><title><![CDATA[Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 3]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is the third post of a five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ.<br>
<a href="https://alexanderdevelopment.net/post/2015/01/14/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-2">Last time</a> I showed how to install and configure a RabbitMQ server to support passing messages to and from Dynamics CRM. Today I will show how to build a Dynamics</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/01/20/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-3/</link><guid isPermaLink="false">5a5837236636a30001b977b7</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[JSON]]></category><category><![CDATA[C#]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[RabbitMQ]]></category><category><![CDATA[CRM 2015]]></category><category><![CDATA[integration]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Tue, 20 Jan 2015 18:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-3.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-3.png" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 3"><p>This is the third post of a five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ.<br>
<a href="https://alexanderdevelopment.net/post/2015/01/14/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-2">Last time</a> I showed how to install and configure a RabbitMQ server to support passing messages to and from Dynamics CRM. Today I will show how to build a Dynamics CRM plug-in that publishes notification messages to a RabbitMQ exchange using the <a target="_blank" href="https://www.rabbitmq.com/dotnet.html" rel="nofollow">official RabbitMQ .Net client library</a>. The code for this plug-in is available on <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub</a> in the MessageQueuePlugin project under the LucasCrmMessageQueueTools solution.</p>
<p>Before going any further, let’s get some bad news out of the way. Plug-ins that execute in the Dynamics CRM sandbox cannot use the RabbitMQ .Net client library to publish messages to a RabbitMQ server, so you can’t use today’s plug-in approach from a CRM Online organization. In my next post, I will show an alternate mechanism for publishing messages that you can use from a sandboxed plug-in, but today I want to focus on the most direct integration method. Now that we’re clear on the limitations of this approach, let’s get started!</p>
<h4 id="theapproach">The approach</h4>
<p>Last month I wrote a series of blog posts about how to create a near real-time streaming API using plug-ins and Node.js. For this plug-in I’m going to basically copy the logic I used for the plug-in in that series.</p>
<p><a href="https://alexanderdevelopment.net/post/2014/12/09/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-3/">This post</a> outlines the approach in detail, but if you don’t want to read the entire thing, the basic idea was to create a plug-in that is registered for an operation (create, update, delete, etc.) with a FetchXML query in its unsecure configuration. When the plug-in step is triggered, its associated FetchXML query is executed, and then the resulting fields are serialized into a JSON object, which is then sent to the Node.js application via an HTTP POST request. Today’s plug-in operates in the exact same way, except instead of sending the JSON object to a Node.js endpoint, the JSON object will be published as a message to a RabbitMQ exchange.</p>
<h4 id="configuringtheplugin">Configuring the plug-in</h4>
<p>To make the plug-in easily useable in any organization without needing to be recompiled, all the RabbitMQ connection parameters are stored in the unsecure configuration along with the FetchXML query for the data to retrieve. Here’s the configuration XML fragment to enable case notifications:</p>
<pre><code>&lt;config&gt;
&lt;endpoint&gt;lucas-ajax.cloudapp.net&lt;/endpoint&gt;
&lt;exchange&gt;CRM&lt;/exchange&gt;
&lt;routingkey&gt;Case&lt;/routingkey&gt;
&lt;user&gt;rabbituser&lt;/user&gt;
&lt;password&gt;PASSWORDHERE&lt;/password&gt;
&lt;query&gt;&lt;![CDATA[
&lt;fetch mapping='logical'&gt;
&lt;entity name='incident'&gt;
&nbsp;&lt;attribute name='ownerid'/&gt;
&nbsp;&lt;attribute name='modifiedby'/&gt;
&nbsp;&lt;attribute name='createdby'/&gt;
&nbsp;&lt;attribute name='title'/&gt;
&nbsp;&lt;attribute name='incidentid'/&gt;
&nbsp;&lt;attribute name='ticketnumber'/&gt;
&nbsp;&lt;attribute name='createdon'/&gt;
&nbsp;&lt;attribute name='modifiedon'/&gt;
&nbsp;&lt;filter type='and'&gt;
&nbsp; &lt;condition attribute='incidentid' operator='eq' value='{0}' /&gt;
&nbsp;&lt;/filter&gt;
&lt;/entity&gt;
&lt;/fetch&gt;
]]&gt;
&lt;/query&gt;
&lt;/config&gt;</code></pre>
<h4 id="generatingthenotificationmessage">Generating the notification message</h4>
<p>Just like in my Node.js plug-in, the FetchXML is extracted from the configuration XML, and the query is executed against Dynamics CRM. The results are then serialized to JSON using <a target="_blank" href="http://james.newtonking.com/json" rel="nofollow">Json.NET</a>.</p>
<h4 id="publishingthemessage">Publishing the message</h4>
<p>The endpoint, exchange name, RabbitMQ user, RabbitMQ password and routing key values from the configuration XML are then used to establish a connection to RabbitMQ and publish the notification message to the exchange like so:</p>
<pre><code>try
{
    //connect to rabbitmq
    var factory = new ConnectionFactory();
    factory.UserName = _brokerUser;
    factory.Password = _brokerPassword;
    factory.VirtualHost = "/";
    factory.Protocol = Protocols.DefaultProtocol;
    factory.HostName = _brokerEndpoint;
    factory.Port = AmqpTcpEndpoint.UseDefaultPort;
    using (var connection = factory.CreateConnection())
    {
        using (var channel = connection.CreateModel())
        {
            //tell rabbitmq to send confirmation when messages are successfully published
            channel.ConfirmSelect();

            //prepare message to write to queue
            var body = Encoding.UTF8.GetBytes(jsonMsg);

            var properties = channel.CreateBasicProperties();
            properties.SetPersistent(true);

            //publish the message to the exchange with the supplied routing key
            channel.BasicPublish(_exchange, _routingKey, properties, body);

            //block until rabbitmq confirms the publish (or throw on failure)
            channel.WaitForConfirmsOrDie();
        }
    }
}
catch (Exception e)
{
    tracingService.Trace("Exception: {0}", e.ToString());
    throw;
}</code></pre>
<p>If any errors are encountered, the message is captured via the tracing service, and then an exception is thrown.</p>
<p>Because this plug-in uses both the RabbitMQ .Net and Json.NET client libraries, they have to be merged with the plug-in assembly before registering it in Dynamics CRM. I’ve included a batch script called ilmerge.bat in the project directory on <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub</a>.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>After you register the plug-in and a step to publish a notification message to RabbitMQ, you can verify everything is working as expected either by looking at the Queues tab in the RabbitMQ management web UI or by running the CliConsumer sample application I showed in<br>
<a href="https://alexanderdevelopment.net/post/2015/01/14/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-2">part 2</a>.</p>
<p><em>A version of this post was originally published on the HP Enterprise Services Application Services blog.</em></p>
</div>]]></content:encoded></item><item><title><![CDATA[Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Welcome back to this five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ. In my <a href="https://alexanderdevelopment.net/post/2015/01/12/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-1">last post</a> I discussed why you would want to incorporate a message broker into your Dynamics CRM data interfaces, and today I will show how to install and configure RabbitMQ to</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/01/14/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-2/</link><guid isPermaLink="false">5a5837236636a30001b977af</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[CRM 2015]]></category><category><![CDATA[RabbitMQ]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[C#]]></category><category><![CDATA[integration]]></category><category><![CDATA[JSON]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Wed, 14 Jan 2015 18:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-4.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-4.png" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"><p>Welcome back to this five-part series on creating loosely coupled data interfaces for Dynamics CRM using RabbitMQ. 
In my <a href="https://alexanderdevelopment.net/post/2015/01/12/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-1">last post</a> I discussed why you would want to incorporate a message broker into your Dynamics CRM data interfaces, and today I will show how to install and configure RabbitMQ to support the examples I’ll be presenting in the rest of the series.</p>
<h4 id="installation">Installation</h4>
<p>First, you’ll need to download the installation files from here: <a target="_blank" href="http://www.rabbitmq.com/download.html" rel="nofollow">http://www.rabbitmq.com/download.html</a>. The RabbitMQ server runs on Windows, Linux, UNIX and Mac OS X, and there are installation guides for each supported platform. Because RabbitMQ is written in Erlang, you will need to install an Erlang VM before you can install RabbitMQ, but there is a download link provided in the installation guide. I set up my RabbitMQ server on a Windows 2012 server, and I was up and running in less than 10 minutes.</p>
<p>Once you’ve installed RabbitMQ and started the server, the easiest way to manage it is via the <a target="_blank" href="http://www.rabbitmq.com/management.html" rel="nofollow">web-based management interface</a> that’s included with the server distribution. You can enable the management interface with the <a target="_blank" href="https://www.rabbitmq.com/man/rabbitmq-plugins.1.man.html" rel="nofollow">rabbitmq-plugins tool</a>. Run the following command to enable it: <em>rabbitmq-plugins enable rabbitmq_management</em>.</p>
<p>After the management plugin is enabled, you can access the web management UI from your server at <a href="http://localhost:15672">http://localhost:15672</a>. The default username is &quot;guest&quot; with &quot;guest&quot; as the password.</p>
<p>You’ll also need to configure any firewall rules necessary to allow access to your RabbitMQ server if it’s running on a server separate from your Dynamics CRM server. The default port is 5672, but that can be changed if you like. <a target="_blank" href="https://www.rabbitmq.com/configure.html" rel="nofollow">This page</a> discusses RabbitMQ configuration in great detail.</p>
<h4 id="settingupusersqueuesandexchanges">Setting up users, queues and exchanges</h4>
<p>The first thing you should do after the install is complete is change your default guest user password via the management UI. Then you can add additional users as necessary. For the examples in the rest of this series, you’ll need a user with full permissions on the default &quot;/&quot; virtual host. Here is what my &quot;rabbituser&quot; user account looks like:<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-00-user.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<p>Next you need to create the entities required to broker the messages between publishers and consumers. Before continuing, I recommend you take a moment to skim this <a target="_blank" href="https://www.rabbitmq.com/tutorials/amqp-concepts.html" rel="nofollow">Advanced Message Queuing Protocol (AMQP) overview document</a>. If nothing else, at least read through the &quot;hello, world&quot; example section because it’s a great introduction to concepts that will be important throughout the rest of this series.</p>
<p><u>Queues</u><br>
In the management UI, navigate to the Queues tab, and create two new durable queues named CRM-Cases and CRM-Leads. (You can create any queues you want, but my examples in the rest of this series use queues with those names.) The screenshot below shows the queues in my system.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-01-queues.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<p><u>Exchanges</u><br>
After your queues are created, you can create an exchange and bindings to your queues so messages get routed correctly. Navigate to the Exchanges tab and create a new, durable exchange named CRM. After your CRM exchange is created, you should see something like the screenshot below.</p>
<p>Next, click on the name of the CRM exchange to open its edit screen. Scroll to the &quot;add binding&quot; section toward the bottom of the page, add a binding to the CRM-Cases queue with a routing key value of &quot;Case&quot; as shown in the following picture, and click &quot;bind.&quot;<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-02-exchanges-1.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<p>Do the same for the CRM-Leads queue with &quot;Lead&quot; as the routing key. You should then see the two queues bound to the exchange.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-02-exchanges-2.PNG" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
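<p>Assuming the CRM exchange is created as a direct exchange (the type that delivers a message to every queue whose binding key exactly matches the message’s routing key), the binding behavior can be modeled in a few lines of JavaScript. This is only a conceptual sketch; RabbitMQ performs this routing server-side:</p>

```javascript
// Toy model of direct-exchange routing. Given the two bindings created
// above, return the queues that receive a message with a given routing key.
var bindings = [
  { queue: 'CRM-Cases', routingKey: 'Case' },
  { queue: 'CRM-Leads', routingKey: 'Lead' }
];

function route(bindings, routingKey) {
  return bindings
    .filter(function (b) { return b.routingKey === routingKey; })
    .map(function (b) { return b.queue; });
}
```

<p>A message published to the exchange with routing key &quot;Case&quot; therefore lands only in CRM-Cases, and a message with an unbound routing key is delivered to no queue at all.</p>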
<h4 id="checkingtheconfiguration">Checking the configuration</h4>
<p>At this point you should have everything in place to start publishing and consuming messages. You can verify your configuration works with the CliProvider and CliConsumer sample applications included in my <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmMessageQueuing" rel="nofollow">GitHub repository</a> as part of the LucasCrmMessageQueueTools solution.</p>
<p>First, build and run the CliProvider application. You will be prompted to supply basic connection details, and then you can type a message to publish to your RabbitMQ server.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-03a-cliprovider.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<p>Once the message has been published, you can verify there’s a message waiting in the correct queue on the Queues tab of the management UI.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-03b-message-ready.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<p>Next, build and run the CliConsumer application. Once it connects to the CRM-Cases queue, the message will be retrieved and displayed.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-03c-cliconsumer.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<p>When the CliConsumer application processes a message, it sends an acknowledgement back to the queue, which triggers removal of the message from the queue. You can check the Queues tab in the management UI to verify that the CRM-Cases queue is empty.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/2-03d-no-message-ready.PNG#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 2"></p>
<h4 id="wrappingup">Wrapping up</h4>
<p>That’s it for today. Your RabbitMQ server is now fully configured and ready for use with the examples in the rest of this series. Next time I will show how to send messages to a RabbitMQ exchange from a plug-in using the RabbitMQ .Net client library. See you then!</p>
<p><em>A version of this post was originally published on the HP Enterprise Services Application Services blog.</em></p>
</div>]]></content:encoded></item><item><title><![CDATA[Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 1]]></title><description><![CDATA[<div class="kg-card-markdown"><p>One of the things I love about Dynamics CRM is how easy it is to create data interfaces to enable integration with other systems. If you’ve worked with Dynamics CRM for any length of time, you’ve probably seen multiple web service integrations that enable interoperability with other line-of-business</p></div>]]></description><link>https://alexanderdevelopment.net/post/2015/01/12/using-rabbitmq-as-a-message-broker-in-dynamics-crm-data-interfaces-part-1/</link><guid isPermaLink="false">5a5837236636a30001b977a7</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[CRM 2015]]></category><category><![CDATA[C#]]></category><category><![CDATA[JSON]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[RabbitMQ]]></category><category><![CDATA[integration]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Mon, 12 Jan 2015 18:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-5.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker-5.png" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 1"><p>One of the things I love about Dynamics CRM is how easy it is to create data interfaces to enable integration with other systems. If you’ve worked with Dynamics CRM for any length of time, you’ve probably seen multiple web service integrations that enable interoperability with other line-of-business and legacy systems. A typical pair of inbound and outbound integrations might look like the picture below.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound.png#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 1"></p>
<p>Using a tightly coupled connection between the source and target systems is usually the easiest (thus the quickest and cheapest) way to establish an integration, but this is often a bad idea. Consider the inbound scenario in which an external application is sending data to Dynamics CRM. What happens if the calling application misbehaves and starts sending thousands of requests per second? This has the potential to overload your CRM server and make it completely unusable. Now consider the outbound scenario in which a CRM plug-in calls an external web service. If the destination application’s web service is offline for a few minutes, the update from the CRM plug-in will not get received unless there’s some sort of error handling and retry logic built into the plug-in</p>
<h4 id="analternateapproach">An alternate approach</h4>
<p>For these reasons, and lots of others (logging, security, scalability, just to name a few), it’s considered a best practice to create loosely coupled integrations that rely on a message broker that sits between the source and destination systems. Though the formal definition is more complicated, for our purposes a message broker can be thought of as a collection of queues that hold messages. Publishers write messages to queues, and then consumers pick up the messages and process them appropriately. Additionally, the message broker can be configured to keep messages in their queues until the consumers provide confirmation of successful processing.</p>
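<p>The queue-with-confirmation behavior described above can be sketched as a toy in-memory queue. This is purely a conceptual model (a real broker implements all of this server-side), but it shows why a consumer crash before acknowledgement doesn’t lose the message:</p>

```javascript
// Conceptual model of a broker queue with consumer acknowledgements.
// A delivered message stays in "pending" until the consumer acks it.
function Queue() {
  this.ready = [];    // waiting to be delivered
  this.pending = [];  // delivered but not yet acknowledged
}

Queue.prototype.publish = function (msg) {
  this.ready.push(msg);
};

Queue.prototype.deliver = function () {
  var msg = this.ready.shift();
  if (msg !== undefined) this.pending.push(msg);
  return msg;
};

// Consumer confirms successful processing; only then is the message gone.
Queue.prototype.ack = function (msg) {
  this.pending = this.pending.filter(function (m) { return m !== msg; });
};

// Simulate a consumer failure: unacknowledged messages become deliverable again.
Queue.prototype.requeueUnacked = function () {
  this.ready = this.pending.concat(this.ready);
  this.pending = [];
};
```

<p>If the consumer dies after deliver() but before ack(), requeuing the unacknowledged messages makes them available to the next consumer, which is exactly the guarantee that makes a broker safer than a direct web service call.</p>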
<p>Here’s an example of what the integrations I showed earlier would look like with a message broker.<br>
<img src="https://alexanderdevelopment.net/content/images/2015/10/inbound-outbound-broker.png#img-thumbnail" alt="Using RabbitMQ as a message broker in Dynamics CRM data interfaces – part 1"></p>
<p>For the outbound call from the CRM plug-in, the plug-in writes the message to a broker. The message is routed to a queue where it waits to be processed. A separate processing service application retrieves the message from the queue and sends it to the destination application. For the inbound call to CRM, the process works exactly the same, except the source and destination applications are reversed.</p>
<h4 id="whyisamessagebrokerbetter">Why is a message broker better?</h4>
<p>In the inbound call scenario, an effective message broker would typically be expected to handle a larger volume of inbound messages than Dynamics CRM because all it’s doing is receiving and routing the data without any additional processing. The processing service can then process the messages in the queue at a speed that doesn’t overload the Dynamics CRM server. In the case of the outbound call, the combination of a message broker and processing service can enable complex retry logic and custom logging without having to build that logic into the plug-in layer. As an added bonus to either scenario, a message broker can guarantee that messages don’t get lost between the source and destination systems as long as the message is successfully published to the broker.</p>
<h4 id="wheredowegofromhere">Where do we go from here?</h4>
<p>Over the course of my next four blog posts, I will show how to use <a target="_blank" href="https://www.rabbitmq.com/" rel="nofollow">RabbitMQ</a> as a message broker in your Dynamics CRM data interfaces. I chose RabbitMQ for this series for several reasons:</p><ol><li>It’s open source.</li><li>It runs on multiple platforms.</li><li>It’s easy to install and configure.</li><li>It’s fast at processing messages.</li></ol>
<p>If you already have a different message broker in place in your organization or you would like to try a different message broker like Apache ActiveMQ or Microsoft’s Azure Service Bus, most of the approaches and a lot of the code I’m going to show in this series will still be applicable, with the notable exception of the post that discusses how to install and configure RabbitMQ.</p>
<p>Here’s the roadmap for the rest of the series:</p><ul><li>Part 2 – basic installation and configuration of a RabbitMQ server</li><li>Part 3 – creating a Dynamics CRM plug-in that publishes messages using the RabbitMQ .Net client library</li><li>Part 4 – creating a sandboxed Dynamics CRM plug-in that publishes messages to RabbitMQ via Node.js</li><li>Part 5 – reading messages from a queue and writing them to Dynamics CRM</li></ul>
<p>If you just can’t wait to dig into the code, I’ve already posted everything to my <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code#crmmessagequeuing" rel="nofollow">repository on GitHub</a>, so you can go ahead and take a look.</p>
<p>See you next time!</p>
<p><em>A version of this post was originally published on the HP Enterprise Services Application Services blog.</em></p>
</div>]]></content:encoded></item><item><title><![CDATA[Creating a near real-time streaming interface for Dynamics CRM with Node.js – part 4]]></title><description><![CDATA[<div class="kg-card-markdown"><p>This is the final post in my four-part series about creating a near real-time streaming interface for Microsoft Dynamics CRM using Node.js and Socket.IO. In my <a href="https://alexanderdevelopment.net/post/2014/12/09/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-3">last post</a> I showed how to write the plug-in code to send messages from CRM to the Node.js application. In today’</p></div>]]></description><link>https://alexanderdevelopment.net/post/2014/12/11/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-4/</link><guid isPermaLink="false">5a5837236636a30001b977a2</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[C#]]></category><category><![CDATA[integration]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 11 Dec 2014 18:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2015/10/video-3.jpg" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2015/10/video-3.jpg" alt="Creating a near real-time streaming interface for Dynamics CRM with Node.js – part 4"><p>This is the final post in my four-part series about creating a near real-time streaming interface for Microsoft Dynamics CRM using Node.js and Socket.IO. In my <a href="https://alexanderdevelopment.net/post/2014/12/09/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-3">last post</a> I showed how to write the plug-in code to send messages from CRM to the Node.js application. 
In today’s post I will show how to configure a client to receive and process notifications from the Node.js application, and I’ll also discuss some general considerations related to this solution.</p>
<p>My <a href="https://alexanderdevelopment.net/post/2014/12/03/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-1">first post</a> in this series included a video that showed two clients connected to the Node.js application via Socket.IO. One was a web page that displayed notifications using JavaScript, and the other was a simple C# console application. You can find the code for both of the clients from the video in the “client-src” directory in the solution on <a href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmStreamingNotifications/" target="_blank">GitHub</a>.</p>
<h4 id="webpages">Web pages</h4>
<p>Creating a web page to display notifications received from the Node.js application is incredibly simple. For the web page I demonstrated in the introductory video, I first load the JavaScript libraries for Socket.IO and jQuery:<pre><code>&lt;script src=&quot;https://cdn.socket.io/socket.io-1.2.0.js&quot;&gt;&lt;/script&gt;
&lt;script src=&quot;http://code.jquery.com/jquery-1.11.1.js&quot;&gt;&lt;/script&gt;</code></pre></p>
<p>Then I connect to the Socket.IO endpoint and register a callback function to append the notification text to an element on the page with jQuery:<pre><code>var socket = io(&quot;http://lucas-ajax.cloudapp.net:3000&quot;);
socket.on('message', function(msg){
  $('#records').append($('&lt;li&gt;').text(msg));
});</code></pre></p>
<p>Of course you’re not limited to displaying just raw text. Once you parse the JSON message, it’s a fully-fledged object that you can work with as you like. For example you can create a web page client that lists case updates by displaying the case numbers hyperlinked to the record in Dynamics CRM.</p>
<p>The code for this is only marginally more complicated than the raw stream example above.<pre><code>var socket = io(&quot;http://lucas-ajax.cloudapp.net:3000&quot;);
socket.on('message', function(msg){
  var obj = jQuery.parseJSON( msg );
  if(obj.entity===&quot;incident&quot;){
    $('#records').append($('&lt;li&gt;'+obj.operation + ' - &lt;a href=&quot;https://lucas-ajax.cloudapp.net/Lucas01/main.aspx?etc=112&amp;pagetype=entityrecord&amp;id='+obj.id+'&quot; target=&quot;_blank&quot;&gt;'+obj.ticketnumber+'&lt;/a&gt;'));
  }
});</code></pre></p>
<p>As before, first the page creates a connection to the Socket.IO endpoint, and then it registers a callback function. This time, however, the callback function includes a check for incident entities, and the append step creates a hyperlinked case number. The code for this example and another one for contact updates is included in the solution source code on <a target="_blank" href="https://github.com/lucasalexander/Crm-Sample-Code/tree/master/CrmStreamingNotifications/" rel="nofollow">GitHub</a>.</p>
<h4 id="otherclients">Other clients</h4>
<p>The C# console application I showed in the introductory video was also incredibly simple to create. First I needed a way to communicate with the Socket.IO endpoint. I used the <a target="_blank" href="https://github.com/Quobject/SocketIoClientDotNet" rel="nofollow">Socket.IO Client Library for .Net</a> (also available from Nuget -&gt; <em>Install-Package SocketIoClientDotNet</em>). Once I included the library in the project, the code ended up looking a lot like the JavaScript examples above.<pre><code>var socket = IO.Socket(&quot;http://lucas-ajax.cloudapp.net:3000/&quot;);
socket.On(Socket.EVENT_CONNECT, () =&gt;
{
    socket.On(&quot;message&quot;, (data) =&gt;
    {
        Console.WriteLine(data);
        //socket.Disconnect();
    });
});</code></pre></p>
<p>In addition to C#, you can create Socket.IO clients in other languages, too. The <a target="_blank" href="http://socket.io/docs/faq/" rel="nofollow">Socket.IO FAQ</a> has links to client libraries for Java and iOS clients.</p>
<h4 id="securityconsiderations">Security considerations</h4>
<p>As I mentioned in the <a href="https://alexanderdevelopment.net/post/2014/12/05/creating-a-near-real-time-streaming-interface-for-dynamics-crm-with-node-js-part-2">second post</a> in this series, this solution lacks any mechanism for authenticating or authorizing clients, but I noted that adding one is possible. There are two typical approaches to securing Socket.IO interfaces: cookie-based and token-based. This article on <a target="_blank" href="https://auth0.com/blog/2014/01/15/auth-with-socket-io/">&quot;Token-based authentication with Socket.IO&quot;</a> gives a good overview of the problems with the cookie-based approach and goes into some detail about how to implement token-based authentication. In addition to securing the Socket.IO endpoint, you’d also want to consider securing the endpoint where Dynamics CRM posts notifications. You could use the same token-based approach for that, too.</p>
<p>I would also suggest that depending on how you’ve deployed Dynamics CRM and how you need to grant client access, you might not need to worry about security at all. For example, in the case notification example above the only information exposed via the interface is the case number and whether it was created or updated. To see anything else, the end user actually has to open the record, and regular Dynamics CRM security will do the rest.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>I hope you’ve enjoyed this series, and I hope I’ve given you some ideas about how you could implement and use a near real-time streaming API for Dynamics CRM in your own projects. If you have any questions or want to continue the discussion, please share your thoughts in the comments.</p>
<p><em>A version of this post was originally published on the HP Enterprise Services Application Services blog.</em></p>
</div>]]></content:encoded></item></channel></rss>