<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Docker - Alexander Development]]></title><description><![CDATA[Docker - Alexander Development]]></description><link>https://alexanderdevelopment.net/</link><image><url>https://alexanderdevelopment.net/favicon.png</url><title>Docker - Alexander Development</title><link>https://alexanderdevelopment.net/</link></image><generator>Ghost 1.20</generator><lastBuildDate>Fri, 24 Apr 2026 14:14:42 GMT</lastBuildDate><atom:link href="https://alexanderdevelopment.net/tag/docker/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Installing and securing OpenFaaS on an AKS cluster]]></title><description><![CDATA[<div class="kg-card-markdown"><p>A few months back, I wrote a <a href="https://alexanderdevelopment.net/post/2018/02/25/installing-and-securing-openfaas-on-a-google-cloud-virtual-machine/">guide</a> for installing and locking down <a href="https://www.openfaas.com/">OpenFaaS</a> in a Docker Swarm running on <a href="https://cloud.google.com/">Google Cloud Platform</a> virtual machines. 
Today I want to share a step-by-step guide that shows how to install OpenFaaS on a new <a href="https://azure.microsoft.com/en-us/services/container-service/kubernetes/">Azure Kubernetes Service</a> (AKS) cluster using an <a href="https://github.com/kubernetes/ingress-nginx">Nginx</a></p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/05/31/installing-and-securing-openfaas-on-an-aks/</link><guid isPermaLink="false">5b0e9d8797f5e30001931b70</guid><category><![CDATA[OpenFaaS]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Azure]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Thu, 31 May 2018 14:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/05/powershell_2018-05-30_16-29-47.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/05/powershell_2018-05-30_16-29-47.png" alt="Installing and securing OpenFaaS on an AKS cluster"><p>A few months back, I wrote a <a href="https://alexanderdevelopment.net/post/2018/02/25/installing-and-securing-openfaas-on-a-google-cloud-virtual-machine/">guide</a> for installing and locking down <a href="https://www.openfaas.com/">OpenFaaS</a> in a Docker Swarm running on <a href="https://cloud.google.com/">Google Cloud Platform</a> virtual machines. Today I want to share a step-by-step guide that shows how to install OpenFaaS on a new <a href="https://azure.microsoft.com/en-us/services/container-service/kubernetes/">Azure Kubernetes Service</a> (AKS) cluster using an <a href="https://github.com/kubernetes/ingress-nginx">Nginx</a> ingress controller to lock it down with basic authentication and free <a href="https://letsencrypt.org/">Let's Encrypt</a> TLS certificates.</p>
<h4 id="beforewebegin">Before we begin</h4>
<p>If you just want to do a quick deployment of OpenFaaS on AKS, there's a guide in the official AKS documentation <a href="https://docs.microsoft.com/en-us/azure/aks/openfaas">here</a>; however, it does not show how to implement TLS encryption or authentication.</p>
<p>All the Azure configuration I'll show today is done via the command line, so if you don't already have the Azure CLI installed on your local system, install it from <a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest">here</a>. You can do it through the Azure portal, but it's much faster to do with the CLI.</p>
<p>You will also need Git command-line tools installed so you can pull down the latest version of OpenFaaS from its repository.</p>
<p>Finally, in order to secure your OpenFaaS installation with TLS, you will need a domain and access to your DNS provider so you can point a hostname to your cluster's public IP address.</p>
<p>Ready? Let's get started.</p>
<h4 id="basicazureconfiguration">Basic Azure configuration</h4>
<ol>
<li>From the command line, log in to Azure using <code>az login</code>. Follow the prompts to complete your authentication.</li>
<li>If you don't have an existing resource group you want to use for your AKS cluster, create a new one with <code>az group create -l REGIONNAME -n RESOURCEGROUP</code>. Replace REGIONNAME and RESOURCEGROUP with appropriate values, but make sure you use a region where AKS is <a href="https://docs.microsoft.com/en-us/azure/aks/container-service-quotas">currently available</a>.</li>
<li>Create a new AKS cluster with <code>az aks create -g RESOURCEGROUP -n CLUSTERNAME --generate-ssh-keys</code>. The RESOURCEGROUP value is the same as before, and CLUSTERNAME is whatever you want the cluster to be called. Note that the default virtual machine size for your cluster is Standard_DS1_v2. You can change this by setting the <code>--node-vm-size</code> parameter; I am personally using burstable Standard_B2s VMs for my AKS cluster.</li>
<li>Once the AKS cluster creation completes, use this command to get the credentials you need to manage the cluster with the Kubernetes CLI: <code>az aks get-credentials --resource-group RESOURCEGROUP --name CLUSTERNAME</code>.</li>
<li>Install the Kubernetes CLI (kubectl) with <code>az aks install-cli</code>.</li>
<li>Get the name of the node resource group that was created for your AKS cluster with this command: <code>az resource show --resource-group RESOURCEGROUP --name CLUSTERNAME --resource-type Microsoft.ContainerService/managedClusters --query properties.nodeResourceGroup -o tsv</code>. You should get output that looks like MC_resourcegroup_clustername_regionname. You will use this return value in the next step.</li>
<li>Create a public IP address in the node resource group with this command: <code>az network public-ip create --resource-group MC_RESOURCEGROUP --name IPADDRESSNAME --allocation-method static</code>. You will get a JSON response that contains a &quot;publicIp&quot; object. Copy its &quot;ipAddress&quot; value and save it for later.<br>
<em>Note: you might be tempted to create a DNS name label for this IP address so you can avoid using a custom domain name, but *.cloudapp.azure.com host names don't work with Let's Encrypt.</em></li>
<li>Go to your DNS provider and register a new A record for a hostname that points to the external IP you reserved in the previous step (mine is akskube.alexanderdevelopment.net). This will be the hostname you use to access OpenFaaS. You may also need to create a new CAA record to explicitly allow Let's Encrypt to issue certificates for your domain.</li>
</ol>
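<p>Before you move on, it's worth checking that the new A record has actually propagated. Here's a quick Python sketch you can run locally; the hostname and IP address you pass in are placeholders for the values from the steps above:</p>
<pre><code>import socket

# Return True if the hostname currently resolves to the expected IP address.
def resolves_to(hostname, expected_ip):
    try:
        return socket.gethostbyname(hostname) == expected_ip
    except socket.gaierror:
        return False
</code></pre>
<p>If it returns False, give DNS a little more time before you start requesting certificates.</p>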
<h4 id="basicclusterconfiguration">Basic cluster configuration</h4>
<p>Once the basic Azure configuration work is done, it's time to configure the AKS cluster.</p>
<ol>
<li>If you don't already have the Helm client installed on your local system, install it by following the directions <a href="https://github.com/kubernetes/helm/blob/master/docs/install.md">here</a>. I am using a Windows dev workstation, so I installed <a href="https://chocolatey.org/">Chocolatey</a> and then installed the Helm client with <code>choco install kubernetes-helm</code>.</li>
<li>Install Helm components on your AKS cluster with <code>helm init --upgrade --service-account default</code>.</li>
<li>Install the Nginx ingress controller on your AKS cluster with <code>helm install stable/nginx-ingress --namespace kube-system --set rbac.create=false --set rbac.createRole=false --set rbac.createClusterRole=false --set controller.service.loadBalancerIP=STATICIPADDRESS</code>. Replace STATICIPADDRESS with the public IP address you created previously.</li>
<li>Install <a href="https://github.com/jetstack/cert-manager">cert-manager</a> to request and manage your TLS certificates <code>helm install --name cert-manager --namespace kube-system stable/cert-manager --set rbac.create=false</code>.</li>
</ol>
<h4 id="installingopenfaas">Installing OpenFaaS</h4>
<p>It's relatively easy to install OpenFaaS on your AKS cluster using Helm, and a detailed readme is available <a href="https://github.com/openfaas/faas-netes/blob/master/chart/openfaas/README.md">here</a>. Basically, you need to download the <a href="https://github.com/openfaas/faas-netes">faas-netes</a> Git repository to your local system, create a couple of namespaces on the AKS cluster and use the Helm chart in the repo you downloaded. Here's how I set it up on my AKS cluster.</p>
<pre><code>kubectl create ns openfaas
kubectl create ns openfaas-fn

git clone https://github.com/openfaas/faas-netes
cd faas-netes

helm install --namespace openfaas -n openfaas --set functionNamespace=openfaas-fn --set ingress.enabled=true --set rbac=false chart/openfaas/
</code></pre>
<p>Once OpenFaaS is installed, you need to create ingress resources to make it available externally.</p>
<h4 id="creatingtheingressresources">Creating the ingress resources</h4>
<p>Before creating your ingress resources, you need to create certificate issuer resources to get TLS certificates. Here's the YAML for a Let's Encrypt staging issuer:</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: EMAILADDRESS
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    http01: {}
</code></pre>
<p>Copy it, replace EMAILADDRESS with your email address and save it as faas-staging-issuer.yml. Then run <code>kubectl apply -f faas-staging-issuer.yml -n openfaas</code>.</p>
<p>Here's a corresponding production issuer:</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-production
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: EMAILADDRESS
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-production
    # Enable the HTTP-01 challenge provider
    http01: {}
</code></pre>
<p>Copy it, replace EMAILADDRESS with your email address and save it as faas-production-issuer.yml. Then run <code>kubectl apply -f faas-production-issuer.yml -n openfaas</code>.</p>
<p>Next you need to create a password file to implement basic authentication. If you are working on a system with apache2-utils installed, you can just use the htpasswd command. Otherwise, you can use a tool like <a href="http://aspirine.org/htpasswd_en.html">http://aspirine.org/htpasswd_en.html</a> to generate your htpasswd content. Once you have your htpasswd content generated, save it in a file named &quot;auth&quot; and run the following command: <code>kubectl create secret generic basic-auth --from-file=auth -n openfaas</code>.</p>
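<p>If you'd rather not install extra tools or paste a password into a website, you can also generate the htpasswd content with a few lines of Python. This sketch uses the {SHA} scheme (base64 of the raw SHA-1 digest), one of the password formats Nginx's basic authentication supports; the username and password are placeholders:</p>
<pre><code>import base64
import hashlib

# Build one htpasswd entry in the {SHA} format Nginx understands.
def htpasswd_line(username, password):
    digest = hashlib.sha1(password.encode('utf-8')).digest()
    return username + ':{SHA}' + base64.b64encode(digest).decode('ascii')

# Write the 'auth' file expected by the kubectl create secret command.
with open('auth', 'w') as f:
    f.write(htpasswd_line('admin', 'CHANGEME') + '\n')
</code></pre>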
<p>Now you can use the following YAML to create an ingress resource that exposes your OpenFaaS instance:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: faas-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-realm: &quot;Authentication Required&quot;
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
    kubernetes.io/tls-acme: &quot;true&quot;
    certmanager.k8s.io/issuer: letsencrypt-staging
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - HOSTNAME
    secretName: faas-letsencrypt-staging
  rules:
  - host: HOSTNAME
    http:
      paths:
      - path: /faas-admin
        backend:
          serviceName: gateway
          servicePort: 8080
</code></pre>
<p>Copy it, replace both instances of HOSTNAME with the hostname you created earlier and save it as faas-ingress.yml. Deploy it to your cluster with this command: <code>kubectl apply -f faas-ingress.yml -n openfaas</code>.</p>
<p>As the ingress starts up, it will request a staging certificate from Let's Encrypt, and then it will start listening for requests. It may take a few minutes, so now might be a good time to take a short break. Once everything is complete, you will be able to access your OpenFaaS UI from <a href="https://HOSTNAME/faas-admin/ui/">https://HOSTNAME/faas-admin/ui/</a>, and once you deploy functions, they will be available at <a href="https://HOSTNAME/faas-admin/functions/FUNCTIONNAME">https://HOSTNAME/faas-admin/functions/FUNCTIONNAME</a>. You should get a browser warning about the certificate because it's using a Let's Encrypt staging certificate, but that's OK for now. You should also be prompted for basic authentication credentials, which will be the username and password you created earlier.</p>
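<p>You can also call the gateway programmatically by sending the same credentials in an Authorization header. Here's a minimal Python sketch; HOSTNAME and the credentials are placeholders, and the /faas-admin/system/functions path assumes the ingress definition shown earlier:</p>
<pre><code>import base64
import urllib.request

# Build an RFC 7617 basic auth value: base64 of 'username:password'.
def basic_auth_header(username, password):
    creds = base64.b64encode((username + ':' + password).encode('utf-8'))
    return 'Basic ' + creds.decode('ascii')

# Request the list of deployed functions through the authenticated ingress.
req = urllib.request.Request(
    'https://HOSTNAME/faas-admin/system/functions',
    headers={'Authorization': basic_auth_header('admin', 'CHANGEME')})
# urllib.request.urlopen(req) would return the gateway's JSON response.
</code></pre>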
<p>If everything looks good, you can switch over to using a production TLS certificate. Take the faas-ingress YAML and replace &quot;letsencrypt-staging&quot; with &quot;letsencrypt-production&quot; in both the secretName value and the certmanager.k8s.io/issuer annotation. Save it and deploy the update with <code>kubectl apply -f faas-ingress.yml -n openfaas</code>. Like before, the ingress will take a few minutes to restart and request a production TLS certificate from Let's Encrypt. Once that's done, you can access your OpenFaaS UI via the same URL, but now you should not get a warning about an invalid certificate.</p>
<p>At this point, you have a locked-down OpenFaaS installation, but you might not want to use basic authentication to restrict access to your OpenFaaS functions. If that's the case, you can create another ingress resource that exposes them outside the &quot;/faas-admin&quot; path. Here's the YAML for that resource:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: faas-function-ingress
  annotations:
    kubernetes.io/tls-acme: &quot;true&quot;
    certmanager.k8s.io/issuer: letsencrypt-production
    nginx.ingress.kubernetes.io/rewrite-target: /functions
spec:
  tls:
  - hosts:
    - HOSTNAME
    secretName: faas-letsencrypt-production
  rules:
  - host: HOSTNAME
    http:
      paths:
      - path: /functions
        backend:
          serviceName: gateway
          servicePort: 8080
</code></pre>
<p>Copy it, replace both instances of HOSTNAME with your DNS hostname from earlier and save it as faas-function-ingress.yml. Deploy it to your cluster with this command: <code>kubectl apply -f faas-function-ingress.yml -n openfaas</code>.</p>
<p>Once the ingress starts up and applies the TLS certificate, you will be able to access your functions at <a href="https://HOSTNAME/functions/FUNCTIONNAME">https://HOSTNAME/functions/FUNCTIONNAME</a> without authenticating.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>A few closing thoughts:</p>
<ol>
<li>I am still extremely new to AKS and Kubernetes, and I've tried to simplify this guide as much as possible for other newbies. In figuring out how to set this up, I relied heavily on the official <a href="https://docs.microsoft.com/en-us/azure/aks/">AKS docs</a>, and I encourage you to take a look at them if you want to dig in deeper.</li>
<li>This configuration does not expose the OpenFaaS Prometheus monitoring service. If you want to set that up, you will need to create a different DNS entry (either an A record or CNAME record) and create another ingress resource in the openfaas namespace for that hostname that points to the &quot;prometheus&quot; service on service port 9090.</li>
<li>The Nginx ingress controller configuration I showed here is extremely simple. If you want a more sophisticated configuration that enables advanced features like rate limiting, take a look at <a href="https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress#configuration">https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress#configuration</a> and <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md">https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md</a>.</li>
</ol>
</div>]]></content:encoded></item><item><title><![CDATA[An Azure AD OAuth 2 helper microservice]]></title><description><![CDATA[<div class="kg-card-markdown"><p>One of the biggest trends in systems architecture these days is the use of &quot;serverless&quot; functions like Azure Functions, AWS Lambda and OpenFaaS. Because these functions are stateless, if you want to use a purely serverless approach to work with resources secured using Azure Active Directory like Dynamics</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/05/19/an-azure-ad-oauth2-helper-microservice/</link><guid isPermaLink="false">5aff468b97f5e30001931b5d</guid><category><![CDATA[Microsoft Dynamics CRM]]></category><category><![CDATA[Dynamics 365]]></category><category><![CDATA[Python]]></category><category><![CDATA[serverless]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Sat, 19 May 2018 16:45:38 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/05/Postman_2018-05-18_22-58-04-1.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/05/Postman_2018-05-18_22-58-04-1.png" alt="An Azure AD OAuth 2 helper microservice"><p>One of the biggest trends in systems architecture these days is the use of &quot;serverless&quot; functions like Azure Functions, AWS Lambda and OpenFaaS. Because these functions are stateless, if you want to use a purely serverless approach to work with resources secured using Azure Active Directory like Dynamics 365 online, a new token will have to be requested every time a function executes. This is inefficient, and it requires the function to fully understand OAuth 2 authentication, which could be handled better elsewhere.</p>
<p>To address this problem, I've written a microservice in Python that can be used to request OAuth 2 tokens from Azure Active Directory, and it also handles refreshing them as needed. I've containerized it as a Docker image so you can easily run it without needing to build anything.</p>
<h4 id="howitworks">How it works</h4>
<p>When a request containing a username and password arrives for the first time, the microservice retrieves an OAuth2 access token from Azure AD and returns it to the requester. The microservice also caches an object that contains the access token, refresh token, username, password and expiration time.</p>
<p>When subsequent requests arrive, the microservice checks its cache for an existing token that matches the username and password. If it finds one, it checks if the token has expired or needs to be refreshed.</p>
<p>If the existing token has expired, a new one is requested. If the existing token has not expired, but it will expire within a specified period of time (10 minutes is the default value), the microservice will execute a refresh request to Azure AD, cache the updated token and return it to the requester. If there's an unexpired existing token that doesn't need to be refreshed, the cached access token will be returned to the requester.</p>
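<p>The decision logic described above can be modeled in a few lines of Python. This is a simplified sketch rather than the microservice's actual code, and the acquire and refresh callables stand in for the real Azure AD token and refresh requests:</p>
<pre><code>import time

REFRESH_THRESHOLD = 600  # seconds of validity left before a refresh is triggered
_cache = {}  # in-memory token cache keyed by (username, password)

# acquire and refresh must each return an (access_token, expires_at) tuple.
def get_token(username, password, acquire, refresh, now=None):
    now = time.time() if now is None else now
    entry = _cache.get((username, password))
    if entry is None or now >= entry['expires_at']:
        # no cached token, or it has expired: request a brand-new one
        token, expires_at = acquire(username, password)
    elif entry['expires_at'] - now < REFRESH_THRESHOLD:
        # still valid but close to expiry: refresh it
        token, expires_at = refresh(entry['token'])
    else:
        # valid and not near expiry: return the cached token as-is
        return entry['token']
    _cache[(username, password)] = {'token': token, 'expires_at': expires_at}
    return token
</code></pre>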
<p>Here's what a raw token request to and response from the microservice looks like in Postman:<br>
<img src="https://alexanderdevelopment.net/content/images/2018/05/Postman_2018-05-18_22-58-04.png#img-thumbnail" alt="An Azure AD OAuth 2 helper microservice"></p>
<p>Back in 2016 I shared <a href="https://alexanderdevelopment.net/post/2016/11/27/dynamics-365-and-python-integration-using-the-web-api/">some sample Python code</a> that showed how to authenticate to Azure AD and query the Dynamics 365 (then called Dynamics CRM) Web API. Here is an updated version of that sample code that uses this new microservice to acquire access tokens:</p>
<pre><code>import requests
import json

#set these values to retrieve the oauth token
username = 'lucasalexander@xxxxxx.onmicrosoft.com'
userpassword = 'xxxxxx'
tokenendpoint = 'http://localhost:5000/requesttoken'

#set these values to query your crm data
crmwebapi = 'https://xxxxxx.api.crm.dynamics.com/api/data/v8.2'
crmwebapiquery = '/contacts?$select=fullname,contactid'

#build the authorization request
tokenpost = {
    'username':username,
    'password':userpassword
}

#make the token request
print('requesting token . . .')
tokenres = requests.post(tokenendpoint, json=tokenpost)
print('token response received. . .')

accesstoken = ''

#extract the access token
try:
    print('parsing token response . . .')
    print(tokenres)
    accesstoken = tokenres.json()['accesstoken']

except KeyError:
    print('Could not get access token')

if(accesstoken!=''):
    crmrequestheaders = {
        'Authorization': 'Bearer ' + accesstoken,
        'OData-MaxVersion': '4.0',
        'OData-Version': '4.0',
        'Accept': 'application/json',
        'Content-Type': 'application/json; charset=utf-8',
        #combine both preferences in one header - a duplicate 'Prefer' key in a dict would silently drop the first value
        'Prefer': 'odata.maxpagesize=500,odata.include-annotations="OData.Community.Display.V1.FormattedValue"'
    }

    print('making crm request . . .')
    crmres = requests.get(crmwebapi+crmwebapiquery, headers=crmrequestheaders)
    print('crm response received . . .')
    try:
        print('parsing crm response . . .')
        crmresults = crmres.json()
        for x in crmresults['value']:
            print (x['fullname'] + ' - ' + x['contactid'])
    except KeyError:
        print('Could not parse CRM results')
</code></pre>
<p>Here's the output when I run the sample against my demo environment:<br>
<img src="https://alexanderdevelopment.net/content/images/2018/05/powershell_2018-05-19_11-34-16.png" alt="An Azure AD OAuth 2 helper microservice"></p>
<h4 id="runningthemicroservice">Running the microservice</h4>
<p>Pull the image from <a href="https://hub.docker.com">Docker Hub</a>: <code>docker pull lucasalexander/azuread-oauth2-helper:latest</code></p>
<p><em>Required environment variables</em></p>
<ol>
<li>RESOURCE - The URL of the service that is going to be accessed</li>
<li>CLIENTID - The Azure AD application client ID</li>
<li>TOKEN_ENDPOINT - The OAuth2 token endpoint from the Azure AD application</li>
</ol>
<p>Run the image with the following command (replacing the environment variables with your own).</p>
<p><code>docker run -d -p 5000:5000 -e RESOURCE=https://XXXXXX.crm.dynamics.com -e CLIENTID=XXXXXX -e TOKEN_ENDPOINT=https://login.microsoftonline.com/XXXXXX/oauth2/token --name oauthhelper lucasalexander/azuread-oauth2-helper:latest</code></p>
<p>You can also optionally supply an additional &quot;REFRESH_THRESHOLD&quot; environment variable that sets how close (in seconds) a token can get to its expiration time before the microservice refreshes it. The default value is 600 seconds.</p>
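<p>To make the variables' meanings concrete, here's a small Python sketch of how a service like this could read its configuration at startup. This is an illustration, not the microservice's actual code:</p>
<pre><code>import os

# Collect the settings listed above, failing fast on any that are missing.
def load_config(env=os.environ):
    missing = [k for k in ('RESOURCE', 'CLIENTID', 'TOKEN_ENDPOINT') if k not in env]
    if missing:
        raise RuntimeError('missing required environment variables: ' + ', '.join(missing))
    return {
        'resource': env['RESOURCE'],
        'client_id': env['CLIENTID'],
        'token_endpoint': env['TOKEN_ENDPOINT'],
        'refresh_threshold': int(env.get('REFRESH_THRESHOLD', '600')),  # default 600 seconds
    }
</code></pre>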
<h4 id="anoteonsecurity">A note on security</h4>
<p>Because the microservice caches usernames, passwords and access tokens in memory, this approach is vulnerable to heap inspection attacks, so you'll want to make sure your environment is appropriately locked down. You'll also want to make sure any communication between the code that requests tokens and the microservice is encrypted.</p>
<h4 id="wrappingup">Wrapping up</h4>
<p>Although I wrote this with Dynamics 365 in mind, it should work for any resource that is secured by Azure AD. If you'd like to take a closer look at the code, it's available on GitHub <a href="https://github.com/lucasalexander/azuread-oauth2-helper">here</a>.</p>
</div>]]></content:encoded></item><item><title><![CDATA[Installing and securing OpenFaaS on a Google Cloud virtual machine]]></title><description><![CDATA[<div class="kg-card-markdown"><p>Here is a step-by-step guide that shows how to install <a href="https://www.openfaas.com/">OpenFaaS</a> on a new <a href="https://cloud.google.com/">Google Cloud Platform</a> virtual machine instance running Ubuntu Linux and how to secure it with <a href="https://www.nginx.com/">Nginx</a> as a reverse proxy using basic authentication and free SSL/TLS certificates from <a href="https://letsencrypt.org/">Let's Encrypt</a>.</p>
<p>As you look at this</p></div>]]></description><link>https://alexanderdevelopment.net/post/2018/02/25/installing-and-securing-openfaas-on-a-google-cloud-virtual-machine/</link><guid isPermaLink="false">5a9169a525028e00011718db</guid><category><![CDATA[OpenFaaS]]></category><category><![CDATA[Docker]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Lucas Alexander]]></dc:creator><pubDate>Sun, 25 Feb 2018 13:00:00 GMT</pubDate><media:content url="https://alexanderdevelopment.net/content/images/2018/02/000-post-image.png" medium="image"/><content:encoded><![CDATA[<div class="kg-card-markdown"><img src="https://alexanderdevelopment.net/content/images/2018/02/000-post-image.png" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"><p>Here is a step-by-step guide that shows how to install <a href="https://www.openfaas.com/">OpenFaaS</a> on a new <a href="https://cloud.google.com/">Google Cloud Platform</a> virtual machine instance running Ubuntu Linux and how to secure it with <a href="https://www.nginx.com/">Nginx</a> as a reverse proxy using basic authentication and free SSL/TLS certificates from <a href="https://letsencrypt.org/">Let's Encrypt</a>.</p>
<p>As you look at this guide, here are a few things to keep in mind:</p>
<ol>
<li>With the exception of a few steps at the beginning that are specific to Google Cloud, this guide should work for just about any cloud hosting provider.</li>
<li>In order to secure your OpenFaaS installation with SSL/TLS, you will need a domain and access to your DNS provider so you can point your domain to your virtual machine instance's IP address.</li>
<li>Although OpenFaaS runs on Docker, this guide shows how to install Nginx as a service directly on the virtual machine instance instead of in a container. There's no reason you couldn't use a containerized Nginx proxy if you want.</li>
<li>If you're comfortable with Kubernetes (I am not yet), you might want to look at running OpenFaaS on Google Kubernetes Engine instead of setting things up the way I do here.</li>
<li>Finally, if you just want to get started playing around with OpenFaaS locally, there's no need to set up a reverse proxy. Instead you can just install OpenFaaS in your local environment and access it directly.</li>
</ol>
<h4 id="provisioningthevirtualmachineinstance">Provisioning the virtual machine instance</h4>
<p>Although my day job keeps me focused on the Microsoft/Azure stack, and I've recently started using Digital Ocean as my personal blog host, I decided to use Google Cloud as my OpenFaaS host because Google was offering $300 in trial credits. Once you have a Google Cloud Platform account, setting up a virtual machine instance is easy.</p>
<ol>
<li>From your project dashboard, go to Compute Engine-&gt;Images.</li>
<li>Select the Ubuntu 17.10 image and click &quot;create instance.&quot; <img src="https://alexanderdevelopment.net/content/images/2018/02/005-selectimage.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>On the create instance screen, fill out the necessary details. I am using a &quot;small&quot; instance for this demo. <img src="https://alexanderdevelopment.net/content/images/2018/02/010-createinstance.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"> <br>Make sure you open HTTP and HTTPS connections to the instance. <img src="https://alexanderdevelopment.net/content/images/2018/02/020-createinstance.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Once the instance is created, follow the steps here to reserve a static external IP address: <a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#reserve_new_static">https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#reserve_new_static</a></li>
<li>Then follow these steps to assign the static external IP to your new instance: <a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#IP_assign">https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#IP_assign</a></li>
<li>Finally you need to go to your DNS provider and register a new A record for your domain that points to the external IP you reserved in the previous step. (I am using faas.alexanderdevelopment.net.) While you're at it, you may want to create a new <a href="https://letsencrypt.org/docs/caa/">CAA record</a> to explicitly allow Let's Encrypt to issue certificates for your domain.</li>
</ol>
<h4 id="installingdockerandinitializingyourswarm">Installing Docker and initializing your swarm</h4>
<p>At this point you should have a virtual machine instance with a static IP address and ports 80 and 443 open, and your domain should also be pointing to the static IP address.</p>
<p>Now it's time to install Docker. Out of the box, OpenFaaS runs on top of either <a href="https://docs.docker.com/engine/swarm/">Docker Swarm</a> or <a href="https://kubernetes.io/">Kubernetes</a>. To keep things simple, I am using Docker Swarm.</p>
<ol>
<li>SSH to your instance. You can do this directly through the VM instance details page. <img src="https://alexanderdevelopment.net/content/images/2018/02/030-ssh.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Run the following command to update your repository package lists: <code>sudo apt-get update</code>.</li>
<li>Install Docker by following the steps here: <a href="https://docs.docker.com/install/linux/docker-ce/ubuntu/">https://docs.docker.com/install/linux/docker-ce/ubuntu/</a>.</li>
<li>To initialize your Docker swarm, first run <code>ifconfig</code> to see what network interfaces you have available to use for the advertise-addr parameter when you initialize the swarm. This address is what other nodes in the swarm will use to connect if you add more. In my case (and probably in yours, too, if you are using Google Cloud), &quot;ens4&quot; is the correct interface.</li>
<li>Finally, initialize the swarm with this command: <code>sudo docker swarm init --advertise-addr ens4</code>. If necessary replace the &quot;ens4&quot; with whatever value is correct for you. Here's what this looks like on my instance: <img src="https://alexanderdevelopment.net/content/images/2018/02/050-swarminit.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
</ol>
<h4 id="installingopenfaas">Installing OpenFaaS</h4>
<p>After your swarm is initialized, installing OpenFaaS is easy. Just run these commands to get the latest copy of OpenFaaS from GitHub:</p>
<pre><code>git clone https://github.com/openfaas/faas
cd faas
git checkout -
sudo ./deploy_stack.sh
</code></pre>
<p>At this point OpenFaaS is running and listening on port 8080, but you can't connect to it remotely because the default firewall rules only allow traffic on ports 80 and 443. Now you need to install a reverse proxy to route requests from the public internet to OpenFaaS.</p>
<h4 id="basicnginxconfiguration">Basic Nginx configuration</h4>
<p>Although I've read about using a number of different reverse proxies with OpenFaaS, such as <a href="https://getkong.org/docs/">Kong</a>, <a href="https://traefik.io/">Traefik</a> and <a href="https://caddyserver.com/">Caddy</a>, I decided to use Nginx for this guide because I've used it previously with other projects, and it's relatively easy to configure it to handle basic authentication, HTTPS and rate limiting.</p>
<ol>
<li>Install Nginx by running this command: <code>sudo apt-get install nginx</code>. Assuming all the steps to this point were successful, you should now be able to navigate to your domain in your browser and see the default &quot;Welcome to nginx!&quot; page. <img src="https://alexanderdevelopment.net/content/images/2018/02/070-nginx-welcome.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Now you need to update the Nginx configuration to use a non-default site and directory for hosting pages. Although we're going to mainly use Nginx as a reverse proxy for connections to OpenFaaS, it will need to be able to serve pages to respond to validation challenges from Let's Encrypt so that you can get your certificates.</li>
<li>Create a new directory for your domain. Mine is /var/www/faas.alexanderdevelopment.net.</li>
<li>Create a basic hello world index.html page in this directory.</li>
<li>Update the content of your Nginx config file (/etc/nginx/sites-available/default) with the following, replacing the &quot;faas.alexanderdevelopment.net&quot; entries with the correct values for your domain and directory:</li>
</ol>
<pre><code>server {
        listen 80;

        server_name faas.alexanderdevelopment.net;

        root /var/www/faas.alexanderdevelopment.net;
        index index.html;

        location / {
                try_files $uri $uri/ =404;
        }
}
</code></pre>
<ol start="6">
<li>Test and reload your Nginx configuration with this command: <code>sudo nginx -t &amp;&amp; sudo service nginx reload</code>.</li>
<li>Refresh the browser window you opened earlier to verify the new test page is loaded.</li>
</ol>
<h4 id="basicauthentication">Basic authentication</h4>
<p>OpenFaaS has no built-in authentication mechanism, but you can use basic authentication in Nginx to restrict the admin areas of OpenFaaS to authenticated users. The next two steps show how to create an .htpasswd file that Nginx can use to authenticate and authorize users.</p>
<ol>
<li>Install apache2-utils <code>sudo apt-get install apache2-utils</code>.</li>
<li>Create the .htpasswd file and add a user named &quot;adminuser&quot; with this command <code>sudo htpasswd -c /etc/nginx/.htpasswd adminuser</code>. You can run the htpasswd command again if you want to create additional users.</li>
</ol>
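<p>If you want to script user creation instead of answering the interactive prompt, <code>openssl passwd</code> can generate an entry in the same Apache MD5 (apr1) format that Nginx's <code>auth_basic</code> understands. This is a sketch with a placeholder password; on your server you would append the line to /etc/nginx/.htpasswd with sudo.</p>

```shell
# Build an htpasswd-style line for user "adminuser" with an apr1 hash
ENTRY="adminuser:$(openssl passwd -apr1 'changeme')"

# Written to a local example file here; on the server you would
# append it to /etc/nginx/.htpasswd instead
echo "$ENTRY" > ./htpasswd.example
cat ./htpasswd.example
```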
<h4 id="securingyourendpointwithletsencrypt">Securing your endpoint with Let's Encrypt</h4>
<p>Now it's time to get a certificate from Let's Encrypt. We'll be using the <a href="https://certbot.eff.org/">Certbot</a> tool to automatically obtain a certificate and update the Nginx configuration to use HTTPS.</p>
<ol>
<li>Add the Certbot repository to your instance  <code>sudo add-apt-repository ppa:certbot/certbot</code>.</li>
<li>Update your repository package lists <code>sudo apt update</code>.</li>
<li>Get the Certbot tool <code>sudo apt-get install python-certbot-nginx</code>.</li>
</ol>
<p>Let's Encrypt <a href="https://letsencrypt.org/docs/rate-limits/">limits the number of requests</a> you can make against its production environment, so it's best to verify your configuration against the Let's Encrypt staging environment first. The staging environment will generate a certificate, but you'll get a certificate warning when you try to access your site, so you'll want to update your configuration to use a production certificate after you're sure everything works.</p>
<ol>
<li>Run this command to request a test certificate <code>sudo certbot --authenticator webroot --installer nginx --test-cert</code>. The first time you run the tool, you will be asked for your email address and whether you agree to the terms and conditions. After that, follow the prompts, and make sure you select the option to redirect all traffic to HTTPS in the final step. Here's what it looks like for me: <img src="https://alexanderdevelopment.net/content/images/2018/02/100-certbot-test.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Now you should be able to navigate to your domain's test index page using HTTPS to verify everything worked.</li>
<li>If you reopen your Nginx configuration file, you'll see that Certbot has added some sections as indicated by the &quot;managed by Certbot&quot; comments. <img src="https://alexanderdevelopment.net/content/images/2018/02/102-nginx-config-post-certbot.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
</ol>
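<p>You can confirm which environment issued a certificate by reading its issuer with openssl. On your server the check runs against the Let's Encrypt files (the path below is illustrative; adjust it for your domain), and staging certificates show a &quot;Fake LE&quot; issuer while production ones are issued by Let's Encrypt. The runnable demonstration below applies the same command to a throwaway self-signed certificate:</p>

```shell
# On the server (adjust the path for your domain):
#   sudo openssl x509 -in /etc/letsencrypt/live/YOUR_DOMAIN/cert.pem -noout -issuer

# Demonstration of the same command on a throwaway self-signed certificate:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo.example" -keyout /tmp/demo.key -out /tmp/demo.crt
openssl x509 -in /tmp/demo.crt -noout -issuer
```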
<h4 id="updatingthenginxconfigurationtoworkwithopenfaas">Updating the Nginx configuration to work with OpenFaaS</h4>
<p>Now that your Nginx server is able to handle HTTPS traffic, you need to update your Nginx configuration to set up the reverse proxy to OpenFaaS. At this point you'll also want to enable the basic authentication for the admin areas of the OpenFaaS user interface, and you'll need to make a small change to the HTTP-&gt;HTTPS redirect that Certbot set up previously so that you can request a production certificate from Let's Encrypt later.</p>
<p>Replace the contents of your Nginx configuration file with the following, again replacing the &quot;faas.alexanderdevelopment.net&quot; entries with the correct values for your domain and root directory:</p>
<pre><code>server {
        server_name faas.alexanderdevelopment.net;

        root /var/www/faas.alexanderdevelopment.net;
        index index.html;

        #serve acme challenge files from actual directory
        location /.well-known {
                try_files $uri $uri/ =404;
        }

        #reverse proxy all &quot;function&quot; requests to openfaas and require no authentication
        location /function {
                proxy_pass      http://127.0.0.1:8080/function;
                proxy_set_header    X-Real-IP $remote_addr;
                proxy_set_header    Host      $http_host;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        #reverse proxy everything else to openfaas and require basic authentication
        location / {
                proxy_pass      http://127.0.0.1:8080;
                proxy_set_header    X-Real-IP $remote_addr;
                proxy_set_header    Host      $http_host;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                auth_basic &quot;Restricted&quot;;
                auth_basic_user_file /etc/nginx/.htpasswd;
        }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/faas.alexanderdevelopment.net-0001/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/faas.alexanderdevelopment.net-0001/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

#this block redirects all non-ssl traffic to the ssl version
server {
        listen 80;

        server_name faas.alexanderdevelopment.net;
        root /var/www/faas.alexanderdevelopment.net;

        #serve acme challenge files from actual directory
        location /.well-known {
                try_files $uri $uri/ =404;
        }

        #redirect anything other than challenges to https
        location / {
                return 301 https://$host$request_uri;
        }
}
</code></pre>
<p>A few notes on this configuration:</p>
<ol>
<li>The second server block (the one listening on port 80) has been updated so that requests for the &quot;/.well-known&quot; directory are not redirected to HTTPS. With the 301 redirect in place for all requests, Certbot returned validation errors when I attempted to change over from my test certificate to a production certificate.</li>
<li>Basic authentication is enabled for all requests to the site except for the &quot;/.well-known&quot; and &quot;/function&quot; locations. The &quot;/.well-known&quot; location needs to be accessible without authentication to handle Let's Encrypt validation requests, and the &quot;/function&quot; location has been left open with the assumption that you'll use some sort of <a href="https://github.com/openfaas/faas/tree/master/sample-functions/ApiKeyProtected">API key</a> mechanism for authenticating to your functions. If your function clients support passing basic auth credentials, you can secure the &quot;/function&quot; location with basic auth, too.</li>
<li>This configuration does not expose the Prometheus monitoring UI on port 9090 that is installed with OpenFaaS.</li>
</ol>
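<p>If your function clients can pass basic auth credentials, the function route can be locked down as well. Here is a sketch of the adjusted location block, using the same upstream and .htpasswd file as the rest of the configuration:</p>

```nginx
#reverse proxy "function" requests to openfaas, now requiring basic authentication
location /function {
        proxy_pass      http://127.0.0.1:8080/function;
        proxy_set_header    X-Real-IP $remote_addr;
        proxy_set_header    Host      $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
}
```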
<p>Once you update your configuration and reload Nginx, you should be able to test one of the default included functions with curl. <img src="https://alexanderdevelopment.net/content/images/2018/02/110-curl-validation.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"> <em>You'll note that I have passed the &quot;-k&quot; flag to curl to disable certificate validation.</em></p>
<p>You should also be able to navigate to the OpenFaaS admin user interface by going to https://YOUR_DOMAIN_HERE/ui/. You will be prompted for credentials and presented with a warning about the certificate. If you use the &quot;adminuser&quot; credentials you created earlier and acknowledge the warning/continue, you will see the OpenFaaS main user interface screen.</p>
<h4 id="gettingaproductioncertificate">Getting a production certificate</h4>
<p>If you've gotten to this point and everything works, you're ready to switch over from using a test SSL/TLS certificate to using a production certificate.</p>
<ol>
<li>Run Certbot without the --test-cert flag <code>sudo certbot --authenticator webroot --installer nginx</code>.</li>
<li>Select the option to renew and replace the existing certificate. <img src="https://alexanderdevelopment.net/content/images/2018/02/120-certbot-prod.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
<li>Follow the rest of the prompts as you did when requesting the test certificate, except in the final step: instead of selecting the option to redirect all traffic, select option &quot;1: No redirect.&quot;</li>
<li>Close all your open browser windows and verify you can browse to the OpenFaaS UI by going to https://YOUR_DOMAIN_HERE/ui/. You should be prompted for credentials again, but this time you should not see any certificate warnings. <img src="https://alexanderdevelopment.net/content/images/2018/02/130-browser-verification.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"> <br><br>You can also try running one of the default functions through <a href="https://www.getpostman.com/">Postman</a> to validate you receive no certificate errors. <img src="https://alexanderdevelopment.net/content/images/2018/02/140-postman-verfication.png#img-thumbnail" alt="Installing and securing OpenFaaS on a Google Cloud virtual machine"></li>
</ol>
<h4 id="wrappingup">Wrapping up</h4>
<p>At this point you have a secure OpenFaaS server, but there are still a few things you should do.</p>
<ol>
<li>Back up your Nginx configuration, htpasswd file and certificates.</li>
<li>Remove the default functions because they are not protected by an API key, and they are runnable by anyone who can access your &quot;/function&quot; route, which, if you use my configuration, is actually anyone.</li>
<li>Set up <a href="https://lincolnloop.com/blog/rate-limiting-nginx/">rate limiting</a> for the Nginx server.</li>
<li>Schedule automated <a href="https://certbot.eff.org/docs/using.html#renewal">certificate renewals</a> using cron.</li>
<li>Get started writing functions and have fun!</li>
</ol>
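<p>Rate limiting (item 3 above) can be sketched directly in the Nginx configuration. The zone name, shared-memory size and request rate below are illustrative values, not recommendations:</p>

```nginx
#in the http block (e.g. /etc/nginx/nginx.conf): define a shared zone
#keyed by client address, allowing 5 requests per second per client
limit_req_zone $binary_remote_addr zone=faaslimit:10m rate=5r/s;

#in the server block of your site configuration: apply the zone,
#queueing short bursts of up to 10 extra requests
location /function {
        limit_req zone=faaslimit burst=10;
        proxy_pass      http://127.0.0.1:8080/function;
}
```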
</div>]]></content:encoded></item></channel></rss>