This report details how RISCO Group and Microsoft teamed up to streamline and accelerate the deployment of dev/test cloud environments by building a self-service web portal for the RISCO R&D team. The implemented solution used Azure Resource Manager and Puppet.

The hackfest team members were from RISCO Group, Microsoft, and U-BTech Solutions (a Microsoft partner). Puppet provided remote assistance from the Puppet London offices.

The hackfest team:

  • Ido Vapner – Head of DevSecOps & Technology, RISCO Group
  • Eyal Shanan – Cloud Architect, RISCO Group
  • Tal Druckmann – Software Developer, RISCO Group
  • Elad Hayun (@U_BTech) – Development & Products Manager, U-BTech Solutions
  • Yuval Mazor – Cloud Solution Architect, Microsoft
  • Ori Zohar (@orizhr) – Senior Technical Evangelist, Microsoft
  • John Boero – Technical Solution Engineer, Puppet (remote support)

Customer profile

RISCO Group, an international company headquartered in Israel, produces high-quality, reliable security products for every type of security installation. From intrusion alarm systems for residential and commercial installations to large-scale access control and integrated security and building-management platforms, RISCO aspires to provide customers with a broad range of top-of-the-line security management solutions. RISCO’s cloud-based and software as a service (SaaS) solutions enable efficient and economical installations. A smartphone application offers users remote access to their security systems and enables self-monitoring, catering to a growing trend in the industry.

RISCO has been working with U-BTech Solutions, a Microsoft partner, to address some of its technological challenges, which is why U-BTech Solutions was part of this engagement.

Problem statement

In an envisioning session conducted by the team, RISCO described how the current deployment process suffered from long release cycles, inconsistencies between QA and production environments, and synchronization issues between the IT and R&D groups. To improve and optimize the release process, the R&D and IT teams have been planning to adopt several DevOps practices such as continuous integration (CI) and continuous delivery (CD). Adopting these practices would require significant effort from both teams at RISCO, so the company’s DevOps journey had to be taken in steps.

As the discussion moved forward, it quickly became clear that a major pain point to address was a bottleneck in the deployment of cloud environments for the R&D team, a process that took days and even weeks. Mitigating this pain point seemed like a good place to start because it could optimize an important part of the current flow and also pave the way for implementing CI/CD in the future.

Note: It is important to take the time to analyze the current DevOps pipeline in depth before choosing which DevOps practices to implement. A useful, recommended method for analyzing a DevOps pipeline is the value stream mapping (VSM) exercise. VSM allows practitioners to illustrate the current pipeline and identify potential areas of improvement and DevOps practices that can help optimize it. This engagement leveraged a prior analysis Microsoft conducted with RISCO, so the team did not perform a formal VSM exercise.

The team decided to focus on this key pain point, which can be described as follows:

When members of the RISCO R&D team want a virtual cloud environment for development or testing, they request specific environments from IT, which must create them manually. These environments are multitiered and include various Azure services, so manual deployment involves creating these services, setting network configurations, and installing various modules on Windows and Linux virtual machines. As a result, the time between an environment request and the environment’s availability to the requester can be as long as several days or even a week, depending on the IT department’s load. This has severe consequences for R&D processes. RISCO needs to automate the process and make dev/test environments available to developers on demand via an easy “one-click” process that takes minutes instead of days, thereby helping to optimize and streamline deployments. This process can also be extended in the future to be triggered automatically as part of a CI/CD process.

“We needed to provision each virtual machine manually. Windows updates, security hardening, and the deployment were manually configured. The deployment and configuration side took us a few days.”

– Ido Vapner, Head of DevSecOps & Technology, RISCO Group

Solution, steps, and delivery

The team designed a solution built around a self-service web portal that would automate and accelerate the delivery of cloud environments upon request. The solution as a whole is illustrated in the following diagram:

Solution diagram

  1. Users access the self-service portal, a web app deployed on Azure App Service.
  2. Users are authenticated with Azure Active Directory using their work accounts.
  3. Environments available for deployment are defined by Resource Manager templates that are stored in a blob storage container.
  4. The web app uses the Azure .NET SDK to request a new deployment from Azure Resource Manager based on the chosen template.
  5. Azure Resource Manager deploys the requested environment, which includes virtual machines running Puppet agents.
  6. An on-premises Puppet master serves configuration catalogs to the Puppet agents, which configure the virtual machines and perform necessary software installations.

The hack team at work

To provide a proof-of-concept implementation of the solution, a three-day hackfest took place and the team was divided into three subteams, each focusing on a different part of the solution:

  1. Implementing the self-service portal.
  2. Authoring a Resource Manager template to deploy as an example environment.
  3. Using Puppet to automate post-deployment configuration.

Implemented DevOps practices

  • Self-service environment. The solution as a whole implements the “self-service environment” DevOps practice because it allows users to trigger the deployment of new cloud environments in a fully automated manner.
  • Infrastructure as code. The use of Resource Manager templates allows RISCO to manage “infrastructure as code,” a DevOps practice that lets practitioners define and configure infrastructure (such as cloud resources) consistently while using software tools such as source control to version, store, and manage these configurations.

It was also important to the team to ensure that the deployment pipeline implemented by the self-service portal can be extended to serve a continuous integration and continuous delivery (CI/CD) process in the future. Automating the creation of new cloud environments is a key part of this process, and extending the self-service portal so that deployments can be triggered by a build system (such as Jenkins) via a webhook can help drive the adoption of these additional DevOps practices.

The self-service portal

The self-service portal

The portal was implemented as an ASP.NET web application deployed to Azure App Service as a web app. The application was connected to RISCO’s Azure Active Directory for user authentication and used the Azure .NET SDK to access Azure Resource Manager for deploying environments. The application code is available in the GitHub repository listed in the Source code section.

Configuring app permissions

For the web application to programmatically access Azure and trigger environment deployments, an Azure Active Directory application needed to be created and assigned the proper role, as described in the article Use portal to create Active Directory application and service principal that can access resources. The app ID and access key generated in this step are referenced in the code and are therefore placed in the web.config file. Once the application was created in Active Directory, it was given a Contributor role in the subscription where the templates would be deployed.
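
For reference, a similar Active Directory application and role assignment can also be created from the command line. The following is a minimal sketch assuming the cross-platform Azure CLI; it is not necessarily how the team performed this step, and the application name is illustrative.

# Creates an Azure AD application and service principal and assigns it the
# Contributor role (by default, on the current subscription). The appId and
# password in the output correspond to the app ID and access key placed in
# web.config. The application name below is illustrative.
az ad sp create-for-rbac --name "risco-selfservice-portal" --role Contributor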

Configuring the app role

User sign-in with Azure Active Directory

The web app was configured to authenticate users in the RISCO organization, as described in the article ASP.NET web app sign-in and sign-out with Azure AD. This allows users to sign in with their existing work credentials and lets the web application easily retrieve user profiles. The relevant reference code in the web application can be found in the Startup.Auth.cs and UserProfileController.cs files.
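
The exact configuration lives in the repository’s Startup.Auth.cs. As a rough, hedged sketch, the standard OWIN OpenID Connect setup for Azure AD, driven by the ida:* app settings shown below, looks something like this (the repository’s actual implementation may differ in its details):

    // Sketch of a typical Startup.Auth.cs for Azure AD sign-in with OWIN middleware.
    // Requires Microsoft.Owin.Security.Cookies, Microsoft.Owin.Security.OpenIdConnect,
    // Owin, and System.Configuration.
    public partial class Startup
    {
        public void ConfigureAuth(IAppBuilder app)
        {
            app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
            app.UseCookieAuthentication(new CookieAuthenticationOptions());

            // Authority is the Azure AD tenant; ClientId identifies the registered application.
            app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
            {
                ClientId = ConfigurationManager.AppSettings["ida:ClientId"],
                Authority = "https://login.microsoftonline.com/" + ConfigurationManager.AppSettings["ida:TenantId"],
                PostLogoutRedirectUri = ConfigurationManager.AppSettings["ida:PostLogoutRedirectUri"]
            });
        }
    }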

The web.config file includes the following app settings containing the client ID and client secret obtained while creating the Active Directory application.

<appSettings>
    <add key="webpages:Version" value="3.0.0.0" />
    <add key="webpages:Enabled" value="false" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
    <add key="ida:ClientId" value="[Your Azure application client ID]" />
    <add key="ida:AADInstance" value="https://login.microsoftonline.com/" />
    <add key="ida:ClientSecret" value="[Your Azure application client secret]" />
    <add key="ida:Domain" value="[Azure AD domain]" />
    <add key="ida:TenantId" value="[Your tenant id]" />
    <add key="ida:PostLogoutRedirectUri" value="[Azure application reply URL]" />    
</appSettings>

Using the Azure SDK to deploy an environment

The web application needs to send requests to Azure Resource Manager to deploy new environments. For this purpose, it uses the Azure .NET SDK to communicate programmatically with Azure Resource Manager and deploy the requested Resource Manager JSON template. See a step-by-step tutorial on how to deploy a template using .NET.

In the web app implemented by the team, calling Azure Resource Manager involved creating a resource group and deploying a template into the new resource group. Note that both requests require an access token previously obtained using the client ID and application key generated when the Active Directory application was created. The relevant code can be found in the Helpers.cs file:

    // Creates (or updates) the resource group that will contain the deployed environment.
    public static async Task<ResourceGroup> CreateResourceGroupAsync(TokenCredentials credential, string groupName,
        string subscriptionId, string location)
    {
        try
        {
            using (var resourceManagementClient = new ResourceManagementClient(credential))
            {
                resourceManagementClient.SubscriptionId = subscriptionId;
                var resourceGroup = new ResourceGroup { Location = location };
                return await resourceManagementClient.ResourceGroups.CreateOrUpdateAsync(groupName, resourceGroup);
            }
        }
        catch (Exception exception)
        {
            Console.WriteLine(exception.Message);
            throw;
        }
    }

    // Deploys the Resource Manager template (and its parameters file) into the given resource group.
    public static async Task<DeploymentExtended> CreateTemplateDeploymentAsync(TokenCredentials credential,
        string groupName, string deploymentName, string subscriptionId, string templateLink,
        string templateParametersLink)
    {
        var deployment = new Deployment
        {
            Properties = new DeploymentProperties
            {
                Mode = DeploymentMode.Incremental,
                TemplateLink = new TemplateLink(templateLink),
                ParametersLink = new ParametersLink(templateParametersLink)
            }
        };

        try
        {
            using (var resourceManagementClient = new ResourceManagementClient(credential))
            {
                resourceManagementClient.SubscriptionId = subscriptionId;
                return await resourceManagementClient.Deployments.CreateOrUpdateAsync(groupName, deploymentName,
                    deployment);
            }
        }
        catch (Exception exception)
        {
            Console.WriteLine(exception.Message);
            throw;
        }
    }
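
For context, the following is a hedged sketch of how these helpers might be called: the access token is acquired with ADAL using the Active Directory application’s client ID and secret, then wrapped in a TokenCredentials object. The method name, resource group name, and region below are illustrative and not part of the hackfest code.

    // Sketch only: acquire a service principal token with ADAL
    // (Microsoft.IdentityModel.Clients.ActiveDirectory) and invoke the helpers above.
    // Also requires System.Configuration and Microsoft.Rest.
    public static async Task DeployEnvironmentAsync(string subscriptionId, string templateLink,
        string templateParametersLink)
    {
        var tenantId = ConfigurationManager.AppSettings["ida:TenantId"];
        var clientId = ConfigurationManager.AppSettings["ida:ClientId"];
        var clientSecret = ConfigurationManager.AppSettings["ida:ClientSecret"];

        // Authenticate as the Active Directory application (service principal).
        var authContext = new AuthenticationContext("https://login.microsoftonline.com/" + tenantId);
        var authResult = await authContext.AcquireTokenAsync(
            "https://management.azure.com/", new ClientCredential(clientId, clientSecret));
        var credential = new TokenCredentials(authResult.AccessToken);

        // Create the resource group, then deploy the chosen template into it.
        await CreateResourceGroupAsync(credential, "dev-env-rg", subscriptionId, "West Europe");
        await CreateTemplateDeploymentAsync(credential, "dev-env-rg", "dev-env-deployment",
            subscriptionId, templateLink, templateParametersLink);
    }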

The full code of the application is available here.

Infrastructure as code using Resource Manager templates

Environments available for deployment by the self-service portal are defined as Resource Manager JSON templates. These templates are stored in an Azure blob storage container, where they are made available to the web application for deployment. In this hackfest, the team wanted to use a template that described a real environment RISCO already had in place to showcase how powerful Resource Manager templates can be in describing complex cloud environments. Exporting a template from an existing resource group can be achieved via the Azure portal or via the Azure CLI.
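
For example, assuming the cross-platform Azure CLI, the current state of a resource group can be exported to a template file with a single command (the resource group name here is illustrative):

# Export the resource group's resources as a Resource Manager template.
az group export --name risco-qa-environment > template.json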

Another easy way to get started authoring a template is by using a template from the quick-start template repository on GitHub.

Even though the exported template can be used as is to deploy an environment in Azure, it usually needs some editing to be useful in different scenarios. In the context of this hackfest, the exported template was a great starting point, but some editing was needed, including:

  • Moving some of the default parameters to be template variables with predefined values.
  • Using a naming convention for resources by defining resource names as variables.
  • Removing some resources that can be assumed to exist when the template is deployed (for example, a virtual network with a preconfigured VPN gateway).

The resulting template ended up including more than 40 resources, their dependencies, and configuration.
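
For illustration only (this is not RISCO’s actual template), the following fragment shows how fixed values can be moved into variables and how resource names can follow a naming convention. The variable names match those referenced in the extension example later in this article, but the values are placeholders:

    "variables": {
        "defaultLocation": "West Europe",
        "envPrefix": "risco-dev",
        "virtualMachine_name": "[concat(variables('envPrefix'), '-web-vm')]",
        "vmPuppetExt": "[concat(variables('virtualMachine_name'), '/puppet-agent')]"
    }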

To get more information on template syntax and features, see Authoring Azure Resource Manager templates.

Puppet Enterprise virtual machine extensions

As described previously, the team chose to use Puppet for automating post-deployment virtual machine configuration. This means that Puppet Enterprise agents need to be installed on the virtual machines defined in the Resource Manager templates. To make this process truly automatic, the agents need to be installed as soon as the virtual machines are created. This can be achieved by using Azure virtual machine extensions, which can perform an array of post-deployment operations on Windows or Linux virtual machines. PuppetLabs has collaborated with Microsoft to create a Puppet Enterprise virtual machine extension that installs a Puppet Enterprise agent on a target virtual machine and configures it to connect to a predefined Puppet master server.

The team added Puppet virtual machine extensions for all Windows machines defined in the template. Below is an example of a Puppet Enterprise virtual machine extension JSON definition added to the template.

    {
        "comments": "Puppet agent VM extension",
        "type": "Microsoft.Compute/virtualMachines/extensions",
        "name": "[variables('vmPuppetExt')]",
        "apiVersion": "2015-06-15",
        "location": "[variables('defaultLocation')]",
        "properties": {
            "publisher": "PuppetLabs",
            "type": "PuppetEnterpriseAgent",
            "typeHandlerVersion": "3.2",
            "settings": {
                "puppet_master_server": "[parameters('puppetMasterFQDN')]"
            },
            "protectedSettings": {
                "placeHolder": {
                    "placeHolder": "placeHolder"
                }
            }
        },
        "resources": [],
        "dependsOn": [
            "[resourceId('Microsoft.Compute/virtualMachines', variables('virtualMachine_name'))]"
        ]
    }

A few things to note about the preceding example:

  • The Puppet master server is defined via the puppet_master_server field (in this case, via a parameter given as input on template deployment).
  • The extension is defined to depend on the associated virtual machine via the dependsOn field.

See a simple example of a complete Resource Manager template that includes a virtual machine and Puppet virtual machine extension.

Configuring Puppet

Once virtual machines are deployed for the requested environment, post-deployment virtual machine configuration is handled by Puppet. For more information on Puppet, visit the Puppet official website and the Puppet Enterprise overview.

To briefly describe the way Puppet works in this scenario:

  1. Puppet agents attempt to connect to a Puppet master server.
  2. The master server then needs to accept certificates offered by the agent nodes.
  3. Nodes send the master server information about their environment and current state.
  4. The master classifies nodes according to predefined node groups and classes.
  5. A compiled catalog is sent back to the agents, which use it to configure the nodes and perform the necessary installations.

To better understand Puppet terms such as nodes, classes, and manifests, see the Puppet glossary.

Installing Puppet Enterprise

In this scenario, RISCO chose to use an on-premises Puppet Enterprise installation and make it accessible to the cloud environment using a VPN connection. To install Puppet Enterprise, see the official installation guide.

Alternatively, the Azure marketplace offers a preconfigured Puppet Enterprise template that allows users to deploy a Puppet Enterprise environment (including Puppet master server and UI console) within minutes.

Accepting agent certificates

For security reasons, a Puppet master needs to accept agents that attempt to connect to it. By default, this needs to be done manually using the console, as shown next.

Puppet console displaying unsigned agent certificates
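
Pending certificate requests can also be listed and signed from the Puppet master’s command line, for example:

# List pending certificate signing requests, then sign a specific agent.
sudo puppet cert list
sudo puppet cert sign <agent certname>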

To make the process truly automatic, the Puppet master needed to be configured to accept certificates automatically. This can be achieved by adding a line to the [master] block in the Puppet master configuration file, /etc/puppetlabs/puppet/puppet.conf. Accepting certificates automatically has security implications, so it is recommended to configure automatic certificate signing using a policy definition as described here.
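
As a hedged sketch of such a policy (not what was used in the POC): instead of autosign = true, the autosign setting in the [master] block can point to an executable, for example autosign = /etc/puppetlabs/puppet/autosign-policy.sh. Puppet runs the executable for every incoming certificate request, passing the agent’s certname as the first argument; a zero exit code signs the request. The domain pattern below is purely illustrative.

#!/bin/sh
# Illustrative autosign policy script: Puppet passes the requesting agent's
# certname as $1; exit 0 signs the request, any other exit code rejects it.
case "$1" in
  *.risco.local) exit 0 ;;   # sign only agents from the expected domain
  *)             exit 1 ;;
esac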

For the purposes of the POC, the team used a naive autosign policy that accepts any agent certificate. After adding autosign = true to the [master] block in the configuration file, the block looked like this:

...

[master]
node_terminus = classifier
storeconfigs = true
storeconfigs_backend = puppetdb
reports = puppetdb
certname = puppet.risco.local
always_cache_features = true
autosign = true

Once this configuration was in place, agents connecting to the master were automatically accepted into the master’s node inventory:

Puppet master node inventory

Configuring node groups and classes

For the purpose of the POC, the team decided to showcase three Puppet post-deployment actions to be executed on Windows Server 2012 virtual machines:

  1. Configuration of the Windows firewall.
  2. Installation of Chocolatey (a Windows package manager).
  3. Use of Chocolatey to install Firefox.

To perform the first two actions, the Chocolatey Puppet Forge module and Windows firewall Puppet Forge module were installed on the master. Installation was done by connecting via SSH to the Puppet master server and executing the following commands:

sudo puppet module install puppetlabs-chocolatey --version 2.0.0
sudo puppet module install puppet-windows_firewall --version 1.0.3

For the third action, the team created a simple custom module that uses the PowerShell provider to install Firefox via Chocolatey. The simple manifest that defines the new class included the following code:

# Simple manifest that installs Firefox using the PowerShell provider.
# The exec resource can run any PowerShell script or command.
class choco_test {
  exec { 'choco_test':
    command   => 'choco install firefox -y',
    provider  => powershell,
  }
}

After adding the above three modules to the Puppet master, the team defined a node group to include all the nodes that should be configured. The new group was defined to include all Windows nodes, and the module classes mentioned above were added to it. For more details, see this quick-start guide.

Puppet master node Windows group

To summarize, any Windows node connecting to the Puppet master will be added to the new group and have the above classes applied to it. This ensures that the Puppet agents on the virtual machines perform the desired installations and configurations.

To learn more, see Managing Windows configurations with Puppet.

“With Puppet Enterprise, we were able to deploy configurations, security hardening, and automate applications installation using Chocolatey.”

– Ido Vapner, Head of DevSecOps & Technology, RISCO Group

Conclusion

Despite not performing a formal VSM exercise, the team was able to build on the envisioning work performed before the hackfest to prioritize areas of the DevOps pipeline to optimize. The following DevOps practices were chosen for implementation:

  • Self-service environments
  • Infrastructure as code

The hackfest ended with a complete solution showcasing an end-to-end flow from user logon to environment deployment and configuration. The POC showed the following:

  1. A user logs on to the self-service portal using corporate credentials.
  2. The user chooses an environment to deploy and provides parameters such as region and environment name.
  3. The web portal uses the associated Resource Manager template to programmatically trigger a cloud environment deployment.
  4. Virtual machines deployed in the new environment run Puppet agents that connect to the on-premises Puppet Enterprise master.
  5. The Puppet agents perform the installations and configurations preconfigured on the master.

By automating the existing process, RISCO managed to optimize and accelerate an important part of the dev/test process that used to take days and required manual work.

“It took us a week to build a new environment. With our new self-service portal, we can provision an entire environment with a single click. The entire project helps us by saving over 200 hours per month.”

– Ido Vapner, Head of DevSecOps & Technology, RISCO Group

Looking forward, RISCO defined the following focus areas for extending the solution:

  • Adding workflow support to the web portal (for example, requiring R&D manager approval for deploying an environment).
  • Externalizing template parameters to be configured by the user.
  • Adding additional templates to be available to users.
  • Connecting the implemented solution to the CI/CD process.
  • Extending Puppet post-deployment configuration to Linux machines.

Source code

The following GitHub repository includes reference code for the self-service web portal as well as an example Resource Manager template that includes a Windows virtual machine with a Puppet virtual machine extension:

https://github.com/orizohar/risco-hackfest

Please note that for the code to run, Azure subscription information needs to be edited, specifically in the web.config file.

Appreciation

Thanks to Elad Hayun, from U-BTech Solutions, who was a key contributor to this engagement.

Also, thanks to the Puppet team in London for providing remote support during the hackfest.

Additional resources

Puppet resources