<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[OpenFaaS - Alexander Development]]></title><description><![CDATA[OpenFaaS - Alexander Development]]></description><link>https://alexanderdevelopment.net/</link><image><url>https://alexanderdevelopment.net/favicon.png</url><title>OpenFaaS - Alexander Development</title><link>https://alexanderdevelopment.net/</link></image><generator>Ghost 1.20</generator><lastBuildDate>Mon, 24 Aug 2020 19:53:55 GMT</lastBuildDate><atom:link href="https://alexanderdevelopment.net/tag/openfaas/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Building a custom Dynamics 365 data interface with OpenFaaS]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Over the past several months, I've been doing a lot of work with <a href="https://github.com/openfaas/faas">OpenFaaS</a> in my spare time, and in today's post I will show how you can use it to easily build and deploy a custom web service interface for data in a Dynamics 365 Customer Engagement online tenant.</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/07/05/building-a-custom-dynamics-365-data-interface-with-openfaas/</link><guid isPermaLink="false">5b3a415c97f5e30001931b7f</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[OpenFaaS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[C#]]></category><category><![CDATA[integration]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 05 Jul 2018 17:28:47 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/07/openfaas-d365-header.png" medium="image"/><content:encoded><![CDATA[<div 
class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/07/openfaas-d365-header.png" alt="Building a custom Dynamics 365 data interface with OpenFaaS"><p>Over the past several months, I've been doing a lot of work with <a href="https://github.com/openfaas/faas">OpenFaaS</a> in my spare time, and in today's post I will show how you can use it to easily build and deploy a custom web service interface for data in a Dynamics 365 Customer Engagement online tenant.</p>
<h4 id="openfaas">OpenFaaS</h4>
<p>If you're not familiar with OpenFaaS, it's basically a serverless functions platform like <a href="https://azure.microsoft.com/en-us/services/functions/">Azure Functions</a> or <a href="https://aws.amazon.com/lambda/">AWS Lambda</a>, but you run it on Kubernetes or Docker Swarm on your own servers or in the cloud. What I particularly like about OpenFaaS compared to the various commercial serverless platforms is that in addition to offering more control over how/where it's deployed, OpenFaaS supports a wider variety of languages for writing serverless functions.</p>
<blockquote>
<p>OpenFaaS (Functions as a Service) is a framework for building serverless functions with Docker and Kubernetes which has first class support for metrics. Any process can be packaged as a function enabling you to consume a range of web events without repetitive boiler-plate coding.</p>
</blockquote>
<p>To follow along with the samples in this post, you'll need access to a cluster with OpenFaaS deployed, so if you don't already have one, now would be an excellent time to look at the OpenFaaS <a href="http://docs.openfaas.com/deployment/">deployment docs</a> or maybe even work through the <a href="https://github.com/openfaas/workshop">hands-on workshop</a>. I've also previously written about how to securely deploy OpenFaaS on a free <a href="https://alexanderdevelopment.net/post/2018/02/25/installing-and-securing-openfaas-on-a-google-cloud-virtual-machine/">Google Cloud VM with Docker Swarm</a> or on an <a href="https://alexanderdevelopment.net/post/2018/05/31/installing-and-securing-openfaas-on-an-aks/">Azure Kubernetes Service cluster</a>.</p>
<h4 id="preparingtobuildtheinterfacefunction">Preparing to build the interface function</h4>
<p>As soon as you have OpenFaaS running, it's time to look at the actual custom interface function.</p>
<p>My demo C# function does the following:</p>
<ol>
<li>Parse a JSON object sent in the client request for an access key and optional query filter</li>
<li>Validate the client-supplied access key to authorize or reject the request</li>
<li>Retrieve a Dynamics 365 OAuth access token using my <a href="https://alexanderdevelopment.net/post/2018/05/19/an-azure-ad-oauth2-helper-microservice/">Azure AD OAuth 2 helper microservice</a></li>
<li>Execute a query for contacts against the Dynamics 365 Web API</li>
<li>Return the Web API query results to the client in an array as part of a JSON object</li>
</ol>
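<p>For example, a client request carrying the access key and an optional filter might look like this (placeholder values):</p>
<pre><code>{
  "AccessKey": "MYACCESSKEY",
  "Filter": "startswith(fullname,'y')"
}
</code></pre>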
<p>Because the OpenFaaS function uses my OAuth helper microservice instead of requesting an OAuth access token directly from Azure Active Directory, you need to deploy that microservice to your cluster before moving forward.</p>
<p>If you're using Kubernetes, you can create the deployment and corresponding service using the following YAML. You'll need to set the RESOURCE environment variable to the FQDN for your Dynamics 365 CE organization, but you can leave the CLIENTID and TOKEN_ENDPOINT values alone. <em>(While I used to think you needed to register a separate client application for every Dynamics 365 org to use OAuth authentication, I recently learned via a Twitter conversation that there is a <a href="https://twitter.com/bguidinger/status/1001796185798119424">&quot;universal&quot; CRM client id</a> you can use instead.)</em></p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azuread-oauth2-helper
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azuread-oauth2-helper
    spec:
      containers:
      - name: azuread-oauth2-helper
        image: lucasalexander/azuread-oauth2-helper
        ports:
        - containerPort: 5000
        env:
        - name: RESOURCE
          value: &quot;https://XXXXXXXX.crm.dynamics.com&quot;
        - name: CLIENTID
          value: &quot;2ad88395-b77d-4561-9441-d0e40824f9bc&quot;
        - name: TOKEN_ENDPOINT
          value: &quot;https://login.microsoftonline.com/common/oauth2/token&quot;
---
apiVersion: v1
kind: Service
metadata:
  name: azuread-oauth2-helper
spec:
  ports:
  - port: 5000
  selector:
    app: azuread-oauth2-helper
</code></pre>
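<p>Assuming you save the manifest above in a file called azuread-oauth2-helper.yml (the filename is arbitrary), creating the deployment and service looks like this:</p>
<pre><code># deploy the OAuth helper deployment and service to the cluster
kubectl apply -f azuread-oauth2-helper.yml

# confirm the deployment and service were created
kubectl get deploy,svc azuread-oauth2-helper
</code></pre>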
<p>Once you've deployed the microservice, you need a Kubernetes ingress to expose it. Here's the definition I used. In this case the microservice is accessible on the same host as OpenFaaS (akskube.alexanderdevelopment.net), and it is secured with the same Let's Encrypt certificate. You'll want to update the configuration with the appropriate values for your specific situation.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: azuread-oauth2-helper-ingress
  annotations:
    kubernetes.io/tls-acme: &quot;true&quot;
    certmanager.k8s.io/issuer: letsencrypt-production
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - akskube.alexanderdevelopment.net
    secretName: faas-letsencrypt-production
  rules:
  - host: akskube.alexanderdevelopment.net
    http:
      paths:
      - path: /oauthhelper
        backend:
          serviceName: azuread-oauth2-helper
          servicePort: 5000
</code></pre>
<p>After the OAuth helper microservice is deployed, you should validate that you can get a token returned for a valid username/password combination. Here's what that looks like in Postman.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/07/microservice-validation-1.png#img-thumbnail" alt="Building a custom Dynamics 365 data interface with OpenFaaS"></p>
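<p>If you prefer the command line, an equivalent check with curl would look something like the following; substitute your own hostname and credentials for the placeholder values.</p>
<pre><code># request a Dynamics 365 access token from the helper microservice;
# a successful response contains an "accesstoken" value
curl -s -X POST https://akskube.alexanderdevelopment.net/oauthhelper/requesttoken \
  -H "Content-Type: application/json" \
  -d '{"username":"XXXXXX@XXXXXX.onmicrosoft.com","password":"XXXXXX"}'
</code></pre>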
<h4 id="buildingtheinterfacefunction">Building the interface function</h4>
<p>If you've made it to this point, building and deploying the function is easy!</p>
<p>First the function gets its configuration data from environment variables that are set when the function is deployed. If you were actually using this function in production, it would be better to store sensitive values like the access key and the Dynamics 365 password as <a href="https://github.com/openfaas/faas/blob/master/guide/secure_secret_management.md">secrets</a>, but I've used environment variables here to keep this overview as simple as possible.</p>
<pre><code>//get configuration from env variables        
var username = Environment.GetEnvironmentVariable(&quot;USERNAME&quot;);
var userpassword = Environment.GetEnvironmentVariable(&quot;USERPASS&quot;);
var tokenendpoint = Environment.GetEnvironmentVariable(&quot;TOKENENDPOINT&quot;);
var accesskey = Environment.GetEnvironmentVariable(&quot;ACCESSKEY&quot;);
var crmwebapi = Environment.GetEnvironmentVariable(&quot;CRMAPI&quot;);
</code></pre>
<p>After the function gets its configuration data, it deserializes the client request using Json.Net to extract a client-supplied access key and an optional query filter. The client-supplied key is validated against the stored key value, and if they don't match, an error response is returned.</p>
<pre><code>var queryrequest = JsonConvert.DeserializeObject&lt;QueryRequest&gt;(input);

if(accesskey!=queryrequest.AccessKey)
{
    JObject outputobject = new JObject();
    outputobject.Add(&quot;error&quot;, &quot;Invalid access key&quot;);
    Console.WriteLine(outputobject.ToString());
    return;
}
</code></pre>
<p>After the access key is validated, the function then makes a request to the authentication helper microservice to get an access token.</p>
<pre><code>var token = GetToken(username, userpassword, tokenendpoint);

...
...
...

string GetToken(string username, string userpassword, string tokenendpoint){
    try
    {
        JObject tokencredentials = new JObject();
        tokencredentials.Add(&quot;username&quot;, username);
        tokencredentials.Add(&quot;password&quot;,userpassword);
        var reqcontent = new StringContent(tokencredentials.ToString(), Encoding.UTF8, &quot;application/json&quot;);
        var result = _client.PostAsync(tokenendpoint, reqcontent).Result;
        var tokenobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(
            result.Content.ReadAsStringAsync().Result);
        var token = tokenobj[&quot;accesstoken&quot;];
        return token.ToString();
    }
    catch(Exception ex)
    {
        return string.Format(&quot;Error: {0}&quot;, ex.Message);
    }
}
</code></pre>
<p>Once the token is returned from the microservice, the function executes the Web API query. The query is just a hardcoded OData query in the form of <code>/contacts?$select=fullname,contactid</code> plus any filter supplied by the client. The function expects any filter to be provided in OData syntax supported by Dynamics 365, such as <code>startswith(fullname,'y')</code>.</p>
<pre><code>var crmreq = new HttpRequestMessage(HttpMethod.Get, crmwebapi + crmwebapiquery);
crmreq.Headers.Add(&quot;Authorization&quot;, &quot;Bearer &quot; + token);
crmreq.Headers.Add(&quot;OData-MaxVersion&quot;, &quot;4.0&quot;);
crmreq.Headers.Add(&quot;OData-Version&quot;, &quot;4.0&quot;);
crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.maxpagesize=500&quot;);
crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.include-annotations=OData.Community.Display.V1.FormattedValue&quot;);
crmreq.Content = new StringContent(string.Empty, Encoding.UTF8, &quot;application/json&quot;);
var crmres = _client.SendAsync(crmreq).Result;

var crmresponse = crmres.Content.ReadAsStringAsync().Result;

var crmresponseobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(crmresponse);
</code></pre>
<p>Finally, the results are returned to the client as an array inside a JSON object.</p>
<pre><code>JArray outputarray = new JArray();
foreach(var row in crmresponseobj[&quot;value&quot;].Children())
{
    JObject record = new JObject();
    record.Add(&quot;id&quot;, row[&quot;contactid&quot;]);
    record.Add(&quot;fullname&quot;, row[&quot;fullname&quot;]);
    outputarray.Add(record);
}
JObject outputobject = new JObject();
outputobject.Add(&quot;contacts&quot;, outputarray);
Console.WriteLine(outputobject.ToString());
</code></pre>
<p>Here's the complete function.</p>
<pre><code>using System;
using System.Text;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.IO;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using System.Collections.Generic;

namespace Function
{
    public class FunctionHandler
    {
        private static HttpClient _client = new HttpClient();

        public void Handle(string input) {
            //get configuration from env variables        
            var username = Environment.GetEnvironmentVariable(&quot;USERNAME&quot;);
            var userpassword = Environment.GetEnvironmentVariable(&quot;USERPASS&quot;);
            var tokenendpoint = Environment.GetEnvironmentVariable(&quot;TOKENENDPOINT&quot;);
            var accesskey = Environment.GetEnvironmentVariable(&quot;ACCESSKEY&quot;);
            var crmwebapi = Environment.GetEnvironmentVariable(&quot;CRMAPI&quot;);
            
            //deserialize the client request
            var queryrequest = JsonConvert.DeserializeObject&lt;QueryRequest&gt;(input);
            
            //validate the client access key
            if(accesskey!=queryrequest.AccessKey)
            {
                JObject outputobject = new JObject();
                outputobject.Add(&quot;error&quot;, &quot;Invalid access key&quot;);
                Console.WriteLine(outputobject.ToString());
                return;
            }

            //get the oauth token
            var token = GetToken(username, userpassword, tokenendpoint);
            
            if(!token.ToUpper().StartsWith(&quot;ERROR:&quot;))
            {
                //set the base odata query
                var crmwebapiquery = &quot;/contacts?$select=fullname,contactid&quot;;
                
                //add a filter if the client included one in the request
                if(!string.IsNullOrEmpty(queryrequest.Filter))
                    crmwebapiquery+=&quot;&amp;$filter=&quot;+queryrequest.Filter;
                try
                {
                    //make the request to d365
                    var crmreq = new HttpRequestMessage(HttpMethod.Get, crmwebapi + crmwebapiquery);
                    crmreq.Headers.Add(&quot;Authorization&quot;, &quot;Bearer &quot; + token);
                    crmreq.Headers.Add(&quot;OData-MaxVersion&quot;, &quot;4.0&quot;);
                    crmreq.Headers.Add(&quot;OData-Version&quot;, &quot;4.0&quot;);
                    crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.maxpagesize=500&quot;);
                    crmreq.Headers.Add(&quot;Prefer&quot;, &quot;odata.include-annotations=OData.Community.Display.V1.FormattedValue&quot;);
                    crmreq.Content = new StringContent(string.Empty, Encoding.UTF8, &quot;application/json&quot;);
                    var crmres = _client.SendAsync(crmreq).Result;
                    
                    //handle the d365 response
                    var crmresponse = crmres.Content.ReadAsStringAsync().Result;

                    var crmresponseobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(crmresponse);
                    
                    try
                    {
                        //build the function response
                        JArray outputarray = new JArray();
                        foreach(var row in crmresponseobj[&quot;value&quot;].Children())
                        {
                            JObject record = new JObject();
                            record.Add(&quot;id&quot;, row[&quot;contactid&quot;]);
                            record.Add(&quot;fullname&quot;, row[&quot;fullname&quot;]);
                            outputarray.Add(record);
                        }
                        JObject outputobject = new JObject();
                        outputobject.Add(&quot;contacts&quot;, outputarray);
                        
                        //return the response to the client
                        Console.WriteLine(outputobject.ToString());
                    }
                    catch(Exception ex)
                    {
                        JObject outputobject = new JObject();
                        outputobject.Add(&quot;error&quot;, string.Format(&quot;Could not parse query response: {0}&quot;, ex.Message));
                        Console.WriteLine(outputobject.ToString());
                    }
                }
                catch(Exception ex)
                {
                    JObject outputobject = new JObject();
                    outputobject.Add(&quot;error&quot;, string.Format(&quot;Could not query data: {0}&quot;, ex.Message));
                    Console.WriteLine(outputobject.ToString());
                }
            }
            else
            {
                JObject outputobject = new JObject();
                outputobject.Add(&quot;error&quot;, &quot;Could not get token&quot;);
                Console.WriteLine(outputobject.ToString());
            }
        }

        string GetToken(string username, string userpassword, string tokenendpoint){
            try
            {
                JObject tokencredentials = new JObject();
                tokencredentials.Add(&quot;username&quot;, username);
                tokencredentials.Add(&quot;password&quot;,userpassword);
                var reqcontent = new StringContent(tokencredentials.ToString(), Encoding.UTF8, &quot;application/json&quot;);
                var result = _client.PostAsync(tokenendpoint, reqcontent).Result;
                var tokenobj = JsonConvert.DeserializeObject&lt;Newtonsoft.Json.Linq.JObject&gt;(
                    result.Content.ReadAsStringAsync().Result);
                var token = tokenobj[&quot;accesstoken&quot;];
                return token.ToString();
            }
            catch(Exception ex)
            {
                return string.Format(&quot;Error: {0}&quot;, ex.Message);
            }
        }
    }

    public class QueryRequest
    {
        public string AccessKey {get;set;}
        public string Filter{get;set;}
    }
}
</code></pre>
<p>Because the function relies on Json.Net, you need to add a reference to it in your .csproj file before you build the function.</p>
<pre><code>&lt;Project Sdk=&quot;Microsoft.NET.Sdk&quot;&gt;
  &lt;PropertyGroup&gt;
    &lt;TargetFramework&gt;netstandard2.0&lt;/TargetFramework&gt;
  &lt;/PropertyGroup&gt;
  &lt;PropertyGroup&gt;
    &lt;GenerateAssemblyInfo&gt;false&lt;/GenerateAssemblyInfo&gt;
  &lt;/PropertyGroup&gt;
  &lt;ItemGroup&gt;
    &lt;PackageReference Include=&quot;newtonsoft.json&quot; Version=&quot;11.0.2&quot; /&gt;
  &lt;/ItemGroup&gt;
&lt;/Project&gt;
</code></pre>
<p>Here is my function definition YAML file with environment variables included. You will need to update them with your appropriate values, and you will also need to change the image name if you're building your own function instead of just deploying mine from Docker Hub.</p>
<pre><code>provider:
  name: faas
  gateway: http://localhost:8080

functions:
  demo-crm-function:
    lang: csharp
    handler: ./demo-crm-function
    image: lucasalexander/faas-demo-crm-function
    environment:
      USERNAME: XXXXXX@XXXXXX.onmicrosoft.com
      USERPASS: XXXXXX
      TOKENENDPOINT: https://akskube.alexanderdevelopment.net/oauthhelper/requesttoken
      CRMAPI: https://lucastest20.api.crm.dynamics.com/api/data/v9.0
      ACCESSKEY: MYACCESSKEY
</code></pre>
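<p>With the YAML above saved (I'll assume a filename of demo-crm-function.yml), building and deploying is a matter of two faas-cli commands:</p>
<pre><code># build the function image from the handler folder
# (skip this step if you're deploying my prebuilt image from Docker Hub)
faas-cli build -f demo-crm-function.yml

# deploy the function to the gateway defined in the YAML file
faas-cli deploy -f demo-crm-function.yml
</code></pre>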
<p>Once the function is deployed, you can execute it through the OpenFaaS admin UI or with any tool that makes HTTP requests, such as curl or Postman. Here's what an unfiltered query in Postman looks like for a Dynamics 365 org with sample data installed.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/07/unfiltered-query.png#img-thumbnail" alt="Building a custom Dynamics 365 data interface with OpenFaaS"></p>
<p>And here's a query with a filter included.<br>
<img src="https://alexanderdevelopment.net/content/images/2018/07/filtered-query.png#img-thumbnail" alt="Building a custom Dynamics 365 data interface with OpenFaaS"></p>
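<p>The equivalent curl requests look something like this, assuming the gateway address from the YAML file above (OpenFaaS serves each deployed function at the gateway's /function/FUNCTIONNAME route):</p>
<pre><code># unfiltered query (replace MYACCESSKEY with your own access key)
curl -s -X POST http://localhost:8080/function/demo-crm-function \
  -d "{\"AccessKey\":\"MYACCESSKEY\"}"

# filtered query - only contacts whose full name starts with "y"
curl -s -X POST http://localhost:8080/function/demo-crm-function \
  -d "{\"AccessKey\":\"MYACCESSKEY\",\"Filter\":\"startswith(fullname,'y')\"}"
</code></pre>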
<h4 id="wrappingup">Wrapping up</h4>
<p>Once I got OpenFaaS running, writing and deploying the actual function only took about an hour. Obviously writing a more complex data interface to support real-world requirements would take longer, but using a serverless functions platform like OpenFaaS is definitely a significant accelerator for custom Dynamics 365 integration development.</p>
<p>What do you think about this approach? Are you using serverless functions with your Dynamics 365 projects? What do you think about OpenFaaS vs Azure Functions or AWS Lambda? Let us know in the comments!</p>
</div>]]></content:encoded></item><item><title><![CDATA[Installing and securing OpenFaaS on an AKS cluster]]></title><description><![CDATA[<div class="kg-card-markdown"><p>A few months back, I wrote a <a href="https://alexanderdevelopment.net/post/2018/02/25/installing-and-securing-openfaas-on-a-google-cloud-virtual-machine/">guide</a> for installing and locking down <a href="https://www.openfaas.com/">OpenFaaS</a> in a Docker Swarm running on <a href="https://cloud.google.com/">Google Cloud Platform</a> virtual machines. Today I want to share a step-by-step guide that shows how to install OpenFaaS on a new <a href="https://azure.microsoft.com/en-us/services/container-service/kubernetes/">Azure Kubernetes Service</a> (AKS) cluster using an <a href="https://github.com/kubernetes/ingress-nginx">Nginx</a></p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/05/31/installing-and-securing-openfaas-on-an-aks/</link><guid isPermaLink="false">5b0e9d8797f5e30001931b70</guid><category><![CDATA[OpenFaaS]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Azure]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 31 May 2018 14:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/05/powershell_2018-05-30_16-29-47.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/05/powershell_2018-05-30_16-29-47.png" alt="Installing and securing OpenFaaS on an AKS cluster"><p>A few months back, I wrote a <a href="https://alexanderdevelopment.net/post/2018/02/25/installing-and-securing-openfaas-on-a-google-cloud-virtual-machine/">guide</a> for installing and locking down <a href="https://www.openfaas.com/">OpenFaaS</a> in a Docker Swarm running on <a href="https://cloud.google.com/">Google Cloud Platform</a> virtual 
machines. Today I want to share a step-by-step guide that shows how to install OpenFaaS on a new <a href="https://azure.microsoft.com/en-us/services/container-service/kubernetes/">Azure Kubernetes Service</a> (AKS) cluster using an <a href="https://github.com/kubernetes/ingress-nginx">Nginx</a> ingress controller to lock it down with basic authentication and free <a href="https://letsencrypt.org/">Let's Encrypt</a> TLS certificates.</p>
<h4 id="beforewebegin">Before we begin</h4>
<p>If you just want to do a quick deployment of OpenFaaS on AKS, there's a guide in the official AKS documentation <a href="https://docs.microsoft.com/en-us/azure/aks/openfaas">here</a>, however it does not show how to implement TLS encryption or authentication.</p>
<p>All the Azure configuration I'll show today is done via the command line, so if you don't already have the Azure CLI installed on your local system, install it from <a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest">here</a>. You could do everything through the Azure portal instead, but it's much faster with the CLI.</p>
<p>You will also need Git command-line tools installed so you can pull down the latest version of OpenFaaS from its repository.</p>
<p>Finally, in order to secure your OpenFaaS installation with TLS, you will need a domain and access to your DNS provider so you can point a hostname to your cluster's public IP address.</p>
<p>Ready? Let's get started.</p>
<h4 id="basicazureconfiguration">Basic Azure configuration</h4>
<ol>
<li>From the command line, log in to Azure using <code>az login</code>. Follow the prompts to complete your authentication.</li>
<li>If you don't have an existing resource group you want to use for your AKS cluster, create a new one with <code>az group create -l REGIONNAME -n RESOURCEGROUP</code>. Replace REGIONNAME and RESOURCEGROUP with appropriate values, but make sure you use a region where AKS is <a href="https://docs.microsoft.com/en-us/azure/aks/container-service-quotas">currently available</a>.</li>
<li>Create a new AKS cluster with <code>az aks create -g RESOURCEGROUP -n CLUSTERNAME --generate-ssh-keys</code>. The RESOURCEGROUP value is the same as before, and CLUSTERNAME is whatever you want it to be called. Note that the default virtual machine size for your cluster is Standard_DS1_v2. You can change this by setting the <code>--node-vm-size</code> parameter; I am personally using burstable Standard_B2s VMs for my AKS cluster.</li>
<li>Once the AKS cluster creation completes, use this command to get the credentials you need to manage the cluster with the Kubernetes CLI: <code>az aks get-credentials --resource-group RESOURCEGROUP --name CLUSTERNAME</code>.</li>
<li>Install the Kubernetes CLI (kubectl) with <code>az aks install-cli</code>.</li>
<li>Get the name of the node resource group that was created for your AKS cluster with this command: <code>az resource show --resource-group RESOURCEGROUP --name CLUSTERNAME --resource-type Microsoft.ContainerService/managedClusters --query properties.nodeResourceGroup -o tsv</code>. You should get output that looks like MC_resourcegroup_clustername_regionname. You will use this value in the next step.</li>
<li>Create a public IP address in the node resource group with this command: <code>az network public-ip create --resource-group MC_RESOURCEGROUP --name IPADDRESSNAME --allocation-method static</code>. You will get a JSON response that contains a &quot;publicIp&quot; object. Copy its &quot;ipAddress&quot; value and save it for later.<br>
<em>Note: you might be tempted to create a DNS name label for this IP address so you can avoid using a custom domain name, but *.cloudapp.azure.com host names don't work with Let's Encrypt.</em></li>
<li>Go to your DNS provider and register a new A record for a hostname that points to the public IP address you reserved in the previous step (mine is akskube.alexanderdevelopment.net). This will be the hostname you use to access OpenFaaS. You may also need to create a new CAA record to explicitly allow Let's Encrypt to issue certificates for your domain.</li>
</ol>
<h4 id="basicclusterconfiguration">Basic cluster configuration</h4>
<p>Once the basic Azure configuration work is done, it's time to configure the AKS cluster.</p>
<ol>
<li>If you don't already have the Helm client installed on your local system, install it by following the directions <a href="https://github.com/kubernetes/helm/blob/master/docs/install.md">here</a>. I am using a Windows dev workstation, so I installed <a href="https://chocolatey.org/">Chocolatey</a> and then installed the Helm client with <code>choco install kubernetes-helm</code>.</li>
<li>Install Helm components on your AKS cluster with <code>helm init --upgrade --service-account default</code>.</li>
<li>Install the Nginx ingress controller on your AKS cluster with <code>helm install stable/nginx-ingress --namespace kube-system --set rbac.create=false --set rbac.createRole=false --set rbac.createClusterRole=false --set controller.service.loadBalancerIP=STATICIPADDRESS</code>. Replace STATICIPADDRESS with the public IP address you created previously.</li>
<li>Install <a href="https://github.com/jetstack/cert-manager">cert-manager</a> to request and manage your TLS certificates <code>helm install --name cert-manager --namespace kube-system stable/cert-manager --set rbac.create=false</code>.</li>
</ol>
<h4 id="installingopenfaas">Installing OpenFaaS</h4>
<p>It's relatively easy to install OpenFaaS on your AKS cluster using Helm, and a detailed readme is available <a href="https://github.com/openfaas/faas-netes/blob/master/chart/openfaas/README.md">here</a>. Basically you need to download the <a href="https://github.com/openfaas/faas-netes">faas-netes</a> Git repository to your local system, create a couple of namespaces on the AKS cluster, and use the Helm chart in the repo you downloaded. Here's how I set it up on my AKS cluster.</p>
<pre><code>kubectl create ns openfaas
kubectl create ns openfaas-fn

git clone https://github.com/openfaas/faas-netes
cd faas-netes

helm install --namespace openfaas -n openfaas --set functionNamespace=openfaas-fn --set ingress.enabled=true --set rbac=false chart/openfaas/
</code></pre>
<p>Once OpenFaaS is installed, you need to create ingress resources to make it available externally.</p>
<h4 id="creatingtheingressresources">Creating the ingress resources</h4>
<p>Before creating your ingress resources, you need to create certificate issuer resources to get TLS certificates. Here's the YAML for a Let's Encrypt staging issuer:</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: EMAILADDRESS
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    http01: {}
</code></pre>
<p>Copy it, replace EMAILADDRESS with your email address and save it as faas-staging-issuer.yml. Then run <code>kubectl apply -f faas-staging-issuer.yml -n openfaas</code>.</p>
<p>Here's a corresponding production issuer:</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: EMAILADDRESS
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-production
    # Enable the HTTP-01 challenge provider
    http01: {}
</code></pre>
<p>Copy it, replace EMAILADDRESS with your email address and save it as faas-production-issuer.yml. Then run <code>kubectl apply -f faas-production-issuer.yml -n openfaas</code>.</p>
<p>Next you need to create a password file to implement basic authentication. If you are working on a system with apache2-utils installed, you can just use the htpasswd command. Otherwise, you can use a tool like <a href="http://aspirine.org/htpasswd_en.html">http://aspirine.org/htpasswd_en.html</a> to generate your htpasswd content. Once you have your htpasswd content generated, save it in a file named &quot;auth&quot; and run the following command: <code>kubectl create secret generic basic-auth --from-file=auth -n openfaas</code>.</p>
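<p>Another option, assuming openssl is available on your system, is to generate the htpasswd content yourself. Here's a sketch with placeholder credentials:</p>

```shell
# generate an htpasswd-style entry with openssl (apr1 is the algorithm
# htpasswd uses by default); "admin" and "changeme" are placeholder
# credentials - substitute your own
printf 'admin:%s\n' "$(openssl passwd -apr1 changeme)" > auth
cat auth
```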
<p>Now you can use the following YAML to create an ingress resource that exposes your OpenFaaS instance:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: faas-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-realm: &quot;Authentication Required&quot;
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
    kubernetes.io/tls-acme: &quot;true&quot;
    certmanager.k8s.io/issuer: letsencrypt-staging
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - HOSTNAME
    secretName: faas-letsencrypt-staging
  rules:
  - host: HOSTNAME
    http:
      paths:
      - path: /faas-admin
        backend:
          serviceName: gateway
          servicePort: 8080
</code></pre>
<p>Copy it, replace both instances of HOSTNAME with the hostname you created earlier and save it as faas-ingress.yml. Deploy it to your cluster with this command: <code>kubectl apply -f faas-ingress.yml -n openfaas</code>.</p>
<p>As the ingress starts up, it will request a staging certificate from Let's Encrypt, and then it will start listening for requests. It may take a few minutes, so now might be a good time to take a short break. Once everything is complete, you will be able to access your OpenFaaS UI from <a href="https://HOSTNAME/faas-admin/ui/">https://HOSTNAME/faas-admin/ui/</a>, and once you deploy functions, they will be available at <a href="https://HOSTNAME/faas-admin/functions/FUNCTIONNAME">https://HOSTNAME/faas-admin/functions/FUNCTIONNAME</a>. You should get a browser warning about the certificate because it's using a Let's Encrypt staging certificate, but that's OK for now. You should also be prompted for basic authentication credentials, which will be the username and password you created earlier.</p>
<p>If everything looks good, you can switch over to using a production TLS certificate. Take the faas-ingress YAML and replace &quot;letsencrypt-staging&quot; in the secretName and certmanager.k8s.io/issuer values with &quot;letsencrypt-production&quot; instead. Save it and deploy the update with <code>kubectl apply -f faas-ingress.yml -n openfaas</code>. Like before, the ingress will take a few minutes to restart and request a production TLS certificate from Let's Encrypt. Once that's done, you can access your OpenFaaS UI via the same URL, but now you should not get a warning about an invalid certificate.</p>
<p>At this point you have a locked-down OpenFaaS installation, but you might not want to use basic authentication to restrict access to your OpenFaaS functions. If that's the case you can create another ingress resource that exposes them outside the &quot;/faas-admin&quot; path. Here's the YAML for that resource:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: faas-function-ingress
  annotations:
    kubernetes.io/tls-acme: &quot;true&quot;
    certmanager.k8s.io/issuer: letsencrypt-production
    nginx.ingress.kubernetes.io/rewrite-target: /functions
spec:
  tls:
  - hosts:
    - HOSTNAME
    secretName: faas-letsencrypt-production
  rules:
  - host: HOSTNAME
    http:
      paths:
      - path: /functions
        backend:
          serviceName: gateway
          servicePort: 8080
</code></pre>
<p>Copy it, replace both instances of HOSTNAME with your DNS hostname from earlier and save it as faas-function-ingress.yml. Deploy it to your cluster with this command: <code>kubectl apply -f faas-function-ingress.yml -n openfaas</code>.</p>
<p>Once the ingress starts up and applies the TLS certificate, you will be able to access your functions at <a href="https://HOSTNAME/functions/FUNCTIONNAME">https://HOSTNAME/functions/FUNCTIONNAME</a> without authenticating.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>A few closing thoughts:</p>
<ol>
<li>I am still extremely new to AKS and Kubernetes, and I've tried to simplify this guide as much as possible for other newbies. In figuring out how to set this up, I relied heavily on the official <a href="https://docs.microsoft.com/en-us/azure/aks/">AKS docs</a>, and I encourage you to take a look at them if you want to dig in deeper.</li>
<li>This configuration does not expose the OpenFaaS Prometheus monitoring service. If you want to set that up, you will need to create a different DNS entry (either an A record or a CNAME record) and create another ingress resource in the openfaas namespace for that host name that points to the &quot;prometheus&quot; service on service port 9090.</li>
<li>The Nginx ingress controller configuration I showed here is extremely simple. If you want to use a more sophisticated configuration to enable advanced features such as rate limiting, take a look at <a href="https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress#configuration">https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress#configuration</a> and <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md</a>.</li>
</ol>
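<p>For reference, such a Prometheus ingress could be sketched as follows. This is untested and only illustrates the shape: MONITORINGHOSTNAME is a placeholder for the separate DNS name, and the annotations mirror the faas-ingress resource above, reusing the basic-auth secret so the monitoring UI isn't left open:</p>

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: faas-monitoring-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/issuer: letsencrypt-production
spec:
  tls:
  - hosts:
    - MONITORINGHOSTNAME
    secretName: faas-monitoring-letsencrypt
  rules:
  - host: MONITORINGHOSTNAME
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus
          servicePort: 9090
```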
</div>]]></content:encoded></item><item><title><![CDATA[Using ML.NET in an OpenFaaS function]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Last week at its annual Build conference, Microsoft announced <a href="https://www.microsoft.com/net/learn/apps/machine-learning-and-ai/ml-dotnet">ML.NET</a>, an &quot;open source and cross-platform machine learning framework&quot; that runs in .NET Core. I took a look at the <a href="https://www.microsoft.com/net/learn/apps/machine-learning-and-ai/ml-dotnet/get-started/windows">getting started</a> samples and realized ML.NET would be a great tool to use in OpenFaas functions.</p>
<p>I</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/05/17/using-ml-net-in-an-openfaas-function/</link><guid isPermaLink="false">5afe3ef397f5e30001931b56</guid><category><![CDATA[OpenFaaS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[C#]]></category><category><![CDATA[machine learning]]></category><category><![CDATA[text analysis]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Fri, 18 May 2018 03:20:22 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-17_22-16-59.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/05/chrome_2018-05-17_22-16-59.png" alt="Using ML.NET in an OpenFaaS function"><p>Last week at its annual Build conference, Microsoft announced <a href="https://www.microsoft.com/net/learn/apps/machine-learning-and-ai/ml-dotnet">ML.NET</a>, an &quot;open source and cross-platform machine learning framework&quot; that runs in .NET Core. I took a look at the <a href="https://www.microsoft.com/net/learn/apps/machine-learning-and-ai/ml-dotnet/get-started/windows">getting started</a> samples and realized ML.NET would be a great tool to use in OpenFaas functions.</p>
<p>I decided to write a proof-of-concept function based on the ML.NET sentiment <a href="https://docs.microsoft.com/en-us/dotnet/machine-learning/tutorials/sentiment-analysis">analysis sample</a>. Because the function needs a trained model before it can run, you actually need to use a separate application to generate the model and save it as a file. Then you can include the model as part of your function deployment.</p>
<p>Here's a screenshot of my function in action. <img src="https://alexanderdevelopment.net/content/images/2018/05/Postman_2018-05-17_21-42-58.png#img-thumbnail" alt="Using ML.NET in an OpenFaaS function"></p>
<p>You can get the code for my OpenFaaS sentiment analysis function <a href="https://github.com/lucasalexander/faas-functions/tree/master/get_sentiment_mlnet">here</a>, and the code for the application that generates the model is available <a href="https://github.com/lucasalexander/mlnet-samples/tree/master/sentiment-analysis">here</a>.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Installing and securing OpenFaaS on a Google Cloud virtual machine]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Here is a step-by-step guide that shows how to install <a href="https://www.openfaas.com/">OpenFaaS</a> on a new <a href="https://cloud.google.com/">Google Cloud Platform</a> virtual machine instance running Ubuntu Linux and how to secure it with <a href="https://www.nginx.com/">Nginx</a> as a reverse proxy using basic authentication and free SSL/TLS certificates from <a href="https://letsencrypt.org/">Let's Encrypt</a>.</p>
<p>As you look at this</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/02/25/installing-and-securing-openfaas-on-a-google-cloud-virtual-machine/</link><guid isPermaLink="false">5a9169a525028e00011718db</guid><category><![CDATA[OpenFaaS]]></category><category><![CDATA[Docker]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Sun, 25 Feb 2018 13:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/02/000-post-image.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/02/000-post-image.png" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"><p>Here is a step-by-step guide that shows how to install <a href="https://www.openfaas.com/">OpenFaaS</a> on a new <a href="https://cloud.google.com/">Google Cloud Platform</a> virtual machine instance running Ubuntu Linux and how to secure it with <a href="https://www.nginx.com/">Nginx</a> as a reverse proxy using basic authentication and free SSL/TLS certificates from <a href="https://letsencrypt.org/">Let's Encrypt</a>.</p>
<p>As you look at this guide, here are a few things to keep in mind:</p>
<ol>
<li>With the exception of a few steps at the beginning that are specific to using Google Cloud, this guide will work for (probably) any cloud hosting provider.</li>
<li>In order to secure your OpenFaaS installation with SSL/TLS, you will need a domain and access to your DNS provider so you can point your domain to your virtual machine instance's IP address.</li>
<li>Although OpenFaaS runs on Docker, this guide shows how to install Nginx as a service directly on the virtual machine instance instead of in a container. There's no reason you couldn't use a containerized Nginx proxy if you want.</li>
<li>If you're comfortable with Kubernetes (I am not yet), you might want to look at running OpenFaaS on Google Kubernetes Engine instead of setting things up the way I do here.</li>
<li>Finally, if you just want to get started playing around with OpenFaaS locally, there's no need to set up a reverse proxy. Instead you can just install OpenFaaS in your local environment and access it directly.</li>
</ol>
<h4 id="provisioningthevirtualmachineinstance">Provisioning the virtual machine instance</h4>
<p>Although my day job keeps me focused on the Microsoft/Azure stack, and I've recently started using Digital Ocean as my personal blog host, I decided to use Google Cloud as my OpenFaaS host because Google was offering $300 in trial credits. Once you have a Google Cloud Platform account, setting up a virtual machine instance is easy.</p>
<ol>
<li>From your project dashboard, go to Compute Engine-&gt;Images.</li>
<li>Select the Ubuntu 17.10 image and click &quot;create instance.&quot; <img src="https://alexanderdevelopment.net/content/images/2018/02/005-selectimage.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>On the create instance screen, fill out the necessary details. I am using a &quot;small&quot; instance for this demo. <img src="https://alexanderdevelopment.net/content/images/2018/02/010-createinstance.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"> <br>Make sure you open HTTP and HTTPS connections to the instance. <img src="https://alexanderdevelopment.net/content/images/2018/02/020-createinstance.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Once the instance is created, follow the steps here to reserve a static external IP address: <a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#reserve_new_static">https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#reserve_new_static</a></li>
<li>Then follow these steps to assign the static external IP to your new instance: <a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#IP_assign">https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#IP_assign</a></li>
<li>Finally you need to go to your DNS provider and register a new A record for your domain that points to the external IP you reserved in the previous step. (I am using faas.alexanderdevelopment.net.) While you're at it, you may want to create a new <a href="https://letsencrypt.org/docs/caa/">CAA record</a> to explicitly allow Let's Encrypt to issue certificates for your domain.</li>
</ol>
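<p>In standard zone-file syntax, a CAA record permitting Let's Encrypt might look like this (the host name and TTL are illustrative; enter the equivalent values in your DNS provider's interface):</p>

```
faas.alexanderdevelopment.net. 3600 IN CAA 0 issue "letsencrypt.org"
```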
<h4 id="installingdockerandinitializingyourswarm">Installing Docker and initializing your swarm</h4>
<p>At this point you should have a virtual machine instance with a static IP address and ports 80 and 443 open, and your domain should also be pointing to the static IP address.</p>
<p>Now it's time to install Docker. Out of the box, OpenFaaS runs on top of either <a href="https://docs.docker.com/engine/swarm/">Docker Swarm</a> or <a href="https://kubernetes.io/">Kubernetes</a>. To keep things simple, I am using Docker Swarm.</p>
<ol>
<li>SSH to your instance. You can do this directly through the VM instance details page. <img src="https://alexanderdevelopment.net/content/images/2018/02/030-ssh.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Run the following command to update your repository package lists: <code>sudo apt-get update</code>.</li>
<li>Install Docker by following the steps here: <a href="https://docs.docker.com/install/linux/docker-ce/ubuntu/">https://docs.docker.com/install/linux/docker-ce/ubuntu/</a>.</li>
<li>To initialize your Docker swarm, first run <code>ifconfig</code> to see what network interfaces you have available to use for the advertise-addr parameter when you initialize the swarm. This address is what other nodes in the swarm will use to connect if you add more. In my case (and probably in yours, too, if you are using Google Cloud), &quot;ens4&quot; is the correct interface.</li>
<li>Finally, initialize the swarm with this command: <code>sudo docker swarm init --advertise-addr ens4</code>. If necessary replace the &quot;ens4&quot; with whatever value is correct for you. Here's what this looks like on my instance: <img src="https://alexanderdevelopment.net/content/images/2018/02/050-swarminit.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
</ol>
<h4 id="installingopenfaas">Installing OpenFaaS</h4>
<p>After your swarm is initialized, installing OpenFaaS is easy. Just run these commands to get the latest copy of OpenFaaS from GitHub:</p>
<pre><code>git clone https://github.com/openfaas/faas
cd faas
git checkout -
sudo ./deploy_stack.sh
</code></pre>
<p>At this point OpenFaaS is running and listening on port 8080, but you can't connect to it remotely because the default firewall rules only allow traffic on ports 80 and 443. Now you need to install a reverse proxy to route requests from the public internet to OpenFaaS.</p>
<h4 id="basicnginxconfiguration">Basic Nginx configuration</h4>
<p>Although I've read about using a number of different reverse proxies with OpenFaaS, such as <a href="https://getkong.org/docs/">Kong</a>, <a href="https://traefik.io/">Traefik</a> and <a href="https://caddyserver.com/">Caddy</a>, I decided to use Nginx for this guide because I've used it previously with other projects, and it's relatively easy to configure it to handle basic authentication, HTTPS and rate limiting.</p>
<ol>
<li>Install Nginx by running this command: <code>sudo apt-get install nginx</code>. Assuming all the steps to this point were successful, you should now be able to navigate to your domain in your browser and see the default &quot;Welcome to nginx!&quot; page. <img src="https://alexanderdevelopment.net/content/images/2018/02/070-nginx-welcome.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Now you need to update the Nginx configuration to use a non-default site and directory for hosting pages. Although we're going to mainly use Nginx as a reverse proxy for connections to OpenFaaS, it will need to be able to serve pages to respond to validation challenges from Let's Encrypt so that you can get your certificates.</li>
<li>Create a new directory for your domain. Mine is /var/www/faas.alexanderdevelopment.net.</li>
<li>Create a basic hello world index.html page in this directory.</li>
<li>Update the content of your Nginx config file (/etc/nginx/sites-available/default) with the following, replacing the &quot;faas.alexanderdevelopment.net&quot; entries with the correct values for your domain and directory:</li>
</ol>
<pre><code>server {
        listen 80;

        server_name faas.alexanderdevelopment.net;

        root /var/www/faas.alexanderdevelopment.net;
        index index.html;

        location / {
                try_files $uri $uri/ =404;
        }
}
</code></pre>
<ol start="6">
<li>Reload your Nginx configuration with this command: <code>sudo service nginx reload</code>.</li>
<li>Refresh the browser window you opened earlier to verify the new test page is loaded.</li>
</ol>
<h4 id="basicauthentication">Basic authentication</h4>
<p>OpenFaaS has no built-in authentication mechanism, but you can use basic authentication in Nginx to restrict access to the admin areas of OpenFaaS to authenticated users. The next two steps show how to create an .htpasswd file that Nginx can use to authenticate and authorize users.</p>
<ol>
<li>Install apache2-utils <code>sudo apt-get install apache2-utils</code>.</li>
<li>Create the .htpasswd file and add a user named &quot;adminuser&quot; with this command <code>sudo htpasswd -c /etc/nginx/.htpasswd adminuser</code>. You can run the htpasswd command again if you want to create additional users.</li>
</ol>
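<p>For the curious, the entries htpasswd writes work because hashing a presented password with the salt stored in the entry reproduces the stored hash, which is essentially all Nginx does to check a login. Here's a rough sketch of that round trip using openssl, with placeholder salt and passwords:</p>

```shell
# Create a "stored" hash, then verify a login attempt by re-hashing the
# attempted password with the same salt and comparing the results.
stored=$(openssl passwd -apr1 -salt abcd1234 'correct-horse')
attempt=$(openssl passwd -apr1 -salt abcd1234 'correct-horse')
if [ "$stored" = "$attempt" ]; then result=match; else result=no-match; fi
echo "$result"
```

<p>A wrong password hashed with the same salt produces a different string, so the comparison fails and Nginx returns a 401.</p>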
<h4 id="securingyourendpointwithletsencrypt">Securing your endpoint with Let's Encrypt</h4>
<p>Now it's time to get a certificate from Let's Encrypt. We'll be using the <a href="https://certbot.eff.org/">Certbot</a> tool to automatically obtain a certificate and update the Nginx configuration to use HTTPS.</p>
<ol>
<li>Add the Certbot repository to your instance  <code>sudo add-apt-repository ppa:certbot/certbot</code>.</li>
<li>Update your repository package lists <code>sudo apt update</code>.</li>
<li>Get the Certbot tool <code>sudo apt-get install python-certbot-nginx</code>.</li>
</ol>
<p>Let's Encrypt <a href="https://letsencrypt.org/docs/rate-limits/">limits the number of requests</a> you can make against its production environment, so it's best to verify your configuration against the Let's Encrypt staging environment first. The staging environment will generate a certificate, but you'll get a certificate warning when you try to access your site, so you'll want to update your configuration to use a production certificate after you're sure everything works.</p>
<ol>
<li>Run this command to request a test certificate <code>sudo certbot --authenticator webroot --installer nginx --test-cert</code>. The first time you run the tool, you will be asked for your email and if you agree to the terms and conditions. After that, follow the prompts, and make sure you select the option to redirect all traffic to HTTPS in the final step. Here's what it looks like for me: <img src="https://alexanderdevelopment.net/content/images/2018/02/100-certbot-test.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Now you should be able to navigate to your domain's test index page using HTTPS to verify everything worked.</li>
<li>If you reopen your Nginx configuration file, you'll see that Certbot has added some sections as indicated by the &quot;managed by Certbot&quot; comments. <img src="https://alexanderdevelopment.net/content/images/2018/02/102-nginx-config-post-certbot.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
</ol>
<h4 id="updatingthenginxconfigurationtoworkwithopenfaas">Updating the Nginx configuration to work with OpenFaaS</h4>
<p>Now that your Nginx server is able to handle HTTPS traffic, you need to update your Nginx configuration to set up the reverse proxy to OpenFaaS. At this point you'll also want to enable the basic authentication for the admin areas of the OpenFaaS user interface, and you'll need to make a small change to the HTTP-&gt;HTTPS redirect that Certbot set up previously so that you can request a production certificate from Let's Encrypt later.</p>
<p>Replace the contents of your Nginx configuration file with the following, again replacing the &quot;faas.alexanderdevelopment.net&quot; entries with the correct values for your domain and root directory:</p>
<pre><code>server {
        server_name faas.alexanderdevelopment.net;

        root /var/www/faas.alexanderdevelopment.net;
        index index.html;

        #serve acme challenge files from actual directory
        location /.well-known {
                try_files $uri $uri/ =404;
        }

        #reverse proxy all &quot;function&quot; requests to openfaas and require no authentication
        location /function {
                proxy_pass      http://127.0.0.1:8080/function;
                proxy_set_header    X-Real-IP $remote_addr;
                proxy_set_header    Host      $http_host;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        #reverse proxy everything else to openfaas and require basic authentication
        location / {
                proxy_pass      http://127.0.0.1:8080;
                proxy_set_header    X-Real-IP $remote_addr;
                proxy_set_header    Host      $http_host;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                auth_basic &quot;Restricted&quot;;
                auth_basic_user_file /etc/nginx/.htpasswd;
        }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/faas.alexanderdevelopment.net-0001/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/faas.alexanderdevelopment.net-0001/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

#this block redirects all non-ssl traffic to the ssl version
server {
        listen 80;

        server_name faas.alexanderdevelopment.net;
        root /var/www/faas.alexanderdevelopment.net;

        #serve acme challenge files from actual directory
        location /.well-known {
                try_files $uri $uri/ =404;
        }

        #redirect anything other than challenges to https
        location / {
                return 301 https://$host$request_uri;
        }
}
</code></pre>
<p>A few notes on this configuration:</p>
<ol>
<li>The HTTP server section (starting on line 40) has been updated to not redirect requests to the &quot;/.well-known&quot; directory to HTTPS. With the 301 redirect in place for all requests, Certbot would return validation errors when I attempted to change over from my test certificate to a production certificate.</li>
<li>Basic authentication is enabled for all requests to the site except for the &quot;/.well-known&quot; directory and the &quot;/function&quot; path (see lines 28 and 29). The &quot;/.well-known&quot; directory needs to be accessible without authentication to handle Let's Encrypt validation requests, and the &quot;/function&quot; path has been left open with the assumption that you'll use some sort of <a href="https://github.com/openfaas/faas/tree/master/sample-functions/ApiKeyProtected">API key</a> mechanism for authenticating to your functions. If your function clients support passing basic auth credentials, you can secure the &quot;/function&quot; path with basic auth, too.</li>
<li>This configuration does not expose the Prometheus monitoring UI on port 9090 that is installed with OpenFaaS.</li>
</ol>
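<p>To illustrate the API key idea: the classic OpenFaaS watchdog passes request headers to the function process as environment variables prefixed with &quot;Http_&quot;, so a shell-based function could gate itself on an X-Api-Key header along these lines. The key value is a placeholder, and this is a sketch of the pattern rather than a hardened scheme:</p>

```shell
# The watchdog maps the X-Api-Key request header to $Http_X_Api_Key.
check_key() {
    if [ "$Http_X_Api_Key" = "s3cret-api-key" ]; then
        echo "authorized"
    else
        echo "unauthorized"
    fi
}

Http_X_Api_Key='s3cret-api-key'   # simulate a request that sends the header
result=$(check_key)
echo "$result"
```

<p>In a real function you'd compare against a secret injected at deploy time rather than a literal in the script.</p>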
<p>Once you update your configuration and reload Nginx, you should be able to test one of the default included functions with curl. <img src="https://alexanderdevelopment.net/content/images/2018/02/110-curl-validation.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"> <em>You'll note that I have passed the &quot;-k&quot; flag to curl to disable certificate validation, since the staging certificate is not trusted by default.</em></p>
<p>You should also be able to navigate to the OpenFaaS admin user interface by going to https://YOUR_DOMAIN_HERE/ui/. You will be prompted for credentials and presented with a warning about the certificate. If you use the &quot;adminuser&quot; credentials you created earlier and acknowledge the warning/continue, you will see the OpenFaaS main user interface screen.</p>
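<p>If you later want to script requests against the authenticated areas, note that the credentials from that prompt travel in an Authorization header whose value is just the base64 encoding of &quot;user:password&quot;; this is what curl's -u flag builds for you. A quick sketch with placeholder credentials:</p>

```shell
# Build the Authorization header for adminuser with password "s3cret".
encoded=$(printf '%s' 'adminuser:s3cret' | base64)
auth_header="Authorization: Basic $encoded"
echo "$auth_header"
```

<p>Because the encoding is trivially reversible, basic auth is only safe over the HTTPS connection you're setting up here.</p>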
<h4 id="gettingaproductioncertificate">Getting a production certificate</h4>
<p>If you've gotten to this point and everything works, you're ready to switch over from using a test SSL/TLS certificate to using a production certificate.</p>
<ol>
<li>Run Certbot without the --test-cert flag <code>sudo certbot --authenticator webroot --installer nginx</code>.</li>
<li>Select the option to renew and replace the existing certificate. <img src="https://alexanderdevelopment.net/content/images/2018/02/120-certbot-prod.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Follow the rest of the prompts like when you requested the test certificate, except when you get to the final step, instead of selecting the option to redirect all traffic, select option &quot;1: No redirect.&quot;</li>
<li>Close all your open browser windows and verify you can browse to the OpenFaaS UI by going to https://YOUR_DOMAIN_HERE/ui/. You should be prompted for credentials again, but this time you should not see any certificate warnings. <img src="https://alexanderdevelopment.net/content/images/2018/02/130-browser-verification.png#img-thumnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"> <br><br>You can also try running one of the default functions through <a href="https://www.getpostman.com/">Postman</a> to validate you receive no certificate errors. <img src="https://alexanderdevelopment.net/content/images/2018/02/140-postman-verfication.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
</ol>
<h4 id="wrappingup">Wrapping up</h4>
<p>At this point you have a secure OpenFaaS server, but there are still a few things you should do.</p>
<ol>
<li>Back up your Nginx configuration, htpasswd file and certificates.</li>
<li>Remove the default functions because they are not protected by an API key, and they are runnable by anyone who can access your &quot;/function&quot; path, which, if you use my configuration, is actually anyone.</li>
<li>Set up <a href="https://lincolnloop.com/blog/rate-limiting-nginx/">rate limiting</a> for the Nginx server.</li>
<li>Schedule automated <a href="https://certbot.eff.org/docs/using.html#renewal">certificate renewals</a> using cron.</li>
<li>Get started writing functions and have fun!</li>
</ol>
</div>]]></content:encoded></item></channel></rss>