EDP Software is the creator of SchedulePro, a workforce scheduling system that is deployed for major customers such as Shell, Ford, Sodexo, Motiva, and others.

Microsoft recently collaborated with EDP Software to address challenges with its existing cloud solution, which was difficult to reason about in terms of pricing and technologies. The current cloud solution also has problems with scaling, making it difficult to increase or decrease the number of servers and to update or add components. Together with Microsoft, EDP Software worked toward platform as a service (PaaS) adoption, allowing EDP to concentrate on the project rather than the infrastructure.

Customer profile

EDP Software, based in Vancouver, Canada, has been developing workforce-scheduling solutions since 1987.

EDP’s SchedulePro solution is a highly configurable, cloud-based workforce scheduling system. Its workflow can be configured to meet contractual agreements set up with local unions. It can automate the overtime processes resulting from these agreements, saving time and reducing potential union grievances. It can manage large-scale refining/petrochemical schedules. The rules and workflows within SchedulePro can be customized to match the needs of a customer’s facility.

SchedulePro includes project management and implementation services, training, and support to ensure its successful deployment and use. SchedulePro can be integrated with a customer’s existing infrastructure so that it works with HR, payroll, and financial software.

Pain points

The solution uses the iWeb private cloud, which can be challenging in terms of pricing and technologies. The company has to maintain all of the infrastructure itself, including load balancing, servers, backups, and so on. Additionally, iWeb has some problems with scaling: it’s not easy to increase or decrease the number of servers or to update or add components. That’s why PaaS looks like a better fit and allows the partner to focus on the project.

The hackfest team:

Technical details

The current deployment includes two Windows Server machines for ASP.NET hosting, a load balancer, and two servers for SQL Server hosting.

Current deployment

Four web applications are on the servers:

  • ASP.NET Web Forms 4.5
  • ASP.NET Web API 4.5
  • ASP.NET web application with primarily static content
  • ASP.NET MVC application

Windows Identity Foundation is used for authentication.

Azure Application Insights is used for monitoring of web applications.

Additionally, the deployment has some Windows services based on a Topshelf solution. Thanks to Topshelf, the console applications can run as Windows services in parallel. RabbitMQ is used for communication between the Topshelf services and the web applications.
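
For context, a Topshelf-hosted service typically looks roughly like the following minimal sketch (the Worker class and the service name are hypothetical placeholders, not EDP’s actual code):

    using Topshelf;

    // Stand-in for one of the existing console-application services.
    public class Worker
    {
        public void Start() { /* begin processing work items */ }
        public void Stop()  { /* shut down gracefully */ }
    }

    public static class Program
    {
        public static void Main()
        {
            HostFactory.Run(cfg =>
            {
                cfg.Service<Worker>(s =>
                {
                    s.ConstructUsing(name => new Worker());
                    s.WhenStarted(w => w.Start());
                    s.WhenStopped(w => w.Stop());
                });
                // Run under the local system account with a descriptive service name.
                cfg.RunAsLocalSystem();
                cfg.SetServiceName("SchedulePro.Worker");
            });
        }
    }

Because the hosting concerns are isolated in this wrapper, the worker logic itself can later be reused almost unchanged inside an Azure Function.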

Finally, the deployment contains two servers running SQL Server 2014 that are managing 25-GB databases in database mirroring mode.

The deployment process is primarily manual, even though Visual Studio Online is the primary tool for development. Cost is the main concern that has prevented building and deploying from Visual Studio Online.

At least two of EDP Software’s customers have their own codebase and require separate deployments to specific datacenters.

Azure services

Here is a high-level schema for solution deployment in Azure:

High-level schema

The schema contains the following services:

  • Web Apps. Several Azure web apps can be deployed inside the same App Service plan, which we can scale based on performance. All of the existing ASP.NET applications can be published as Azure web apps.
  • Azure Functions. To avoid virtual machines and infrastructure as a service (IaaS), it’s important to move all Windows services to Azure Functions. This should not be a problem. Potentially, we can even avoid Azure Queue storage, but that depends on the current implementation. Azure Functions can run under the same App Service plan.
  • Azure Service Bus. To replace RabbitMQ, we can use Azure Service Bus. Initially, we wanted to use basic Azure Queue storage, but Azure Service Bus supports topics, so migration from RabbitMQ is not complicated (see the sketch after this list).
  • Azure SQL Database. SQL Database with an elastic pool is a much better fit than IaaS SQL Server running in a virtual machine. So, an important task for the hackfest is to check all databases for compatibility and migrate them to SQL Database. It’s important to use two SQL Database servers to implement geo-replication and availability. This service satisfies all requirements.
  • Azure Traffic Manager. It’s important to use this service in order to implement the migration process without delays. Thanks to Traffic Manager, we can route all traffic to the old or new deployment within seconds.
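
To illustrate the Service Bus substitution, here is a minimal sketch of how a web application could send a work item to a Service Bus queue instead of publishing to RabbitMQ. It assumes the WindowsAzure.ServiceBus NuGet package (Microsoft.ServiceBus.Messaging); the queue name, connection string, and message body are hypothetical:

    using Microsoft.ServiceBus.Messaging;

    // The connection string comes from the Service Bus namespace (Shared access policies).
    var connectionString =
        "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...";

    // "schedule-jobs" is a hypothetical queue name.
    var client = QueueClient.CreateFromConnectionString(connectionString, "schedule-jobs");

    // Where the code previously published to RabbitMQ, it now sends a BrokeredMessage.
    client.Send(new BrokeredMessage("RecalculateOvertime:42"));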

Migration plan

The migration process contains the following steps:

  1. Configure Traffic Manager for DNS migration (pre-production step).
  2. Create and configure SQL Database servers (pre-production step).
  3. Implement transactional replication for live migration (pre-production step).
  4. Create and deploy the web apps.
  5. Create and deploy Azure Functions and the Azure Service Bus queue.

We decided not to implement pre-production steps 1 and 3 at the prototype stage because of the additional time required and because they should be based on real data rather than on test data. Instead, we decided to focus our attention during the hackfest on steps 2, 4, and 5. (To help the partner with the migration to production, this report includes some information about Traffic Manager and transactional replication.)

So in order to build a prototype, we had to finish three separate tasks:

Three separate tasks

Based on the number of technical evangelists and EDP developers, we created two teams:

  • Team 1: Web Apps and SQL migration
  • Team 2: Topshelf and RabbitMQ migration

The next sections provide details about all of the steps (including some information about pre-production steps).

Traffic Manager (pre-production)

Traffic Manager works at the DNS level. It uses DNS responses to direct end-user traffic to globally distributed endpoints. Clients then connect to those endpoints directly.

Potentially, we can use Traffic Manager to guarantee availability for critical applications by deploying them to several datacenters. But it’s also possible to use Traffic Manager to migrate existing web applications to Azure.

The process is simple: we create a Traffic Manager profile for each application in the list and modify the CNAME record so that all traffic is redirected to <name>.trafficmanager.net. By modifying the endpoint in each profile, we can route customers to the new or the old web applications.

Modifying endpoint

So, once we change the domain settings, all traffic is still routed to the current version of the web applications. When we decide to switch clients to the new version, we only have to modify the profiles. In case of any problems, we can switch all users back within seconds.

If we migrate all components to Azure, we will be able to shut down Traffic Manager once we update the domain settings.

Live SQL Database migration (pre-production)

In order to avoid any problem with migration and guarantee availability, it’s important to set up transactional replication between SQL Server on-premises and SQL Database. It’s possible to do this for the current version of SQL Database (v12) and we can use it for live migration.

Live SQL Databases Migration

To support the replication mechanism, it’s important to make sure that the current version of SQL Server is from the list below:

  • SQL Server 2016 and above
  • SQL Server 2014 SP1 CU3 and above
  • SQL Server 2014 RTM CU10 and above
  • SQL Server 2012 SP2 CU8 and above
  • SQL Server 2012 SP3 and above

This link describes step by step how to set up transactional replication: http://www.sqlservercentral.com/blogs/john-sterrett/2016/07/26/azure-sql-database-live-migrations/

The following steps were implemented during the hackfest.

Azure SQL Database (the hackfest task)

SQL Azure

In order to create a SQL database, it’s important to first create a server and provide information about the pricing tier:

SQL Server Database

The pricing tier can be selected based on the number of DTUs.

A DTU is a unit of measure of the resources that are guaranteed to be available to a standalone Azure SQL database at a specific performance level within a standalone database service tier. A DTU is a blended measure of CPU, memory, and data I/O and transaction log I/O in a ratio determined by an OLTP benchmark workload designed to be typical of real-world OLTP workloads. Doubling the DTUs by increasing the performance level of a database equates to doubling the set of resources available to that database. For example, a Premium P11 database with 1,750 DTUs provides 350x more DTU compute power than a Basic database with 5 DTUs. To understand the methodology behind the OLTP benchmark workload used to determine the DTU blend, see SQL Database benchmark overview.

A DTU calculator is available here: http://dtucalculator.azurewebsites.net/

It’s possible to change the pricing tier later; the downtime when doing so is only about 4 seconds.

Learnings: Based on our calculations, 200 DTU/eDTU per database is enough. We can use a Standard elastic pool, which can provide up to 1,200 eDTU, to support all four databases (4 × 200 = 800 eDTU at peak, well within the pool limit). An elastic pool will save some money compared to four separate instances.

SQL Database automatically performs a combination of full database backups weekly, differential database backups hourly, and transaction log backups every five minutes to protect your business from data loss. These backups are stored in locally redundant storage for 35 days for databases in the Standard and Premium service tiers and seven days for databases in the Basic service tier. But in order to guarantee availability, we recommend using Active Geo Replication.

Active Geo Replication

The primary and the secondary servers should be paired and available. In this case we should not have any problems with availability.

Learnings: Azure has two datacenters in Canada that are paired. This is important for the partner because it can guarantee availability for local Canadian customers.

Before starting a DB migration (live or not), we have to check for any possible compatibility issues. SQL Server Data Tools for Visual Studio (SSDT) uses the most recent compatibility rules to detect SQL Database V12 incompatibilities. If incompatibilities are detected, you can fix detected issues directly in this tool. This is the recommended method to test and fix SQL Database V12 compatibility issues.

For tools and tips, see SQL Server database migration to SQL Database in the cloud.

Learnings

  • For the hackfest, we used SQL Server Management Studio to migrate the databases to SQL Database. This approach is not the best because it can generate an out-of-memory error on the local computer for big databases during the process. Additionally, the process may take a lot of time, so it’s not possible to switch off the local computer. Instead, we decided to generate a BACPAC file that we uploaded to Azure Storage right after the compatibility check. For the final migration, it’s better to use SQL Server Data Tools for Visual Studio to export/import the schema, and transactional replication to sync data with the production database.
  • It’s better to use a higher performance tier for the migration in order to execute all transactions faster. Right after the data is in Azure, it’s possible to switch the database back to a lower performance tier. We made the mistake of creating an S2 instance for the migration: the script ran the whole night, and we had to switch to a higher performance tier in the morning, which finished the process in several minutes.
  • Check all database users before the migration. There may be users that need to be fixed (users without logins), and avoid creating new Azure SQL users with names that duplicate names already in the database before the data migration. Otherwise, this can cause problems at the end of the process.

Azure web applications (the hackfest task)

SchedulePro

Creating a new web application is not challenging. Just make sure to use the same resource group for all components (a new group has to be created for the first portal) and to create the service plans in the same region. For example, it’s possible to use Canada Central as the region:

Creating a new application

To activate Application Insights, we would set the App Insights option to On, but we can do that later.

For all web applications, we can use just one App Service plan. Using the same plan allows us to share resources among all applications. In case we decide to use separate plans for each of the apps, we will be able to make changes through the portal.

Learnings: For the applications, we decided to use the S2 service plan and two instances there. That appears to be enough to run all the applications.

To use all of the needed features, we have to select at least a Standard plan, which by default supports features such as SSL, deployment slots, and custom domains.

Once the web applications are deployed, we have to configure them.

Make sure the right version of .NET Framework is selected. You can do this using the Application settings tab.

Application settings

Learnings: It was not possible to configure all web.config settings using the Settings tab. We used App Service Editor to modify some settings directly, and this approach worked fine. For production, it’s possible to preconfigure some Entity Framework and application settings in web.config. It’s better to configure the database connection string using the Application settings tab (as we did).
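
For reference, a connection string defined on the Application settings tab overrides a web.config connection string with the same name at runtime, so the application code keeps reading it in the usual way (the name "DefaultConnection" below is only an example):

    using System.Configuration;

    // Reads "DefaultConnection" from web.config; in App Service, the value set on the
    // Application settings tab under the same name is substituted at runtime.
    var connectionString =
        ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString;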

Custom Domain and SSL certificate features are available on the same tab. In this step, we have to upload a certificate only. The domain name should be assigned right before we are ready to make the solution publicly available. We are not expecting any problems there. Just make sure the certificate and access to the domain settings are available for the migration.

It’s necessary to upload a .pfx certificate at this stage. In order to generate .pfx based on .crt/.cer, it’s possible to use the openssl tool:

openssl pkcs12 -export -out domain.name.pfx -inkey domain.name.key -in domain.name.crt

Note that the private key that was used to generate the .crt/.cer file must be provided. If the key is lost, the certificate would have to be reissued from the certificate provider’s dashboard.

SSL certificates

We will use additional deployment slots in order to deploy the portals from Visual Studio Online to Azure. We will use this approach for testing purposes, not just for the migration but for future development as well. Once testing is done, we will be able to swap the slots, so a new slot should be created for each of the portals. More information about deployment slots is available here. Keep in mind that the slots are created in the same App Service plan as production, so the slots will share resources with the components in production. That’s why, if there are any load tests, the test slots should be moved to a separate App Service plan first.

Slots

If we were going to use FTP for any tasks, deployment credentials would need to be provided. But we do not recommend doing this, because the most important features are available through the Kudu console (for example, https://edpwebapptest.scm.azurewebsites.net/).

The deployment source for the testing slot should be configured to use Visual Studio Online.

Note that Visual Studio Online integration is activated only for the testing slot; the source should be deployed to the testing slot only. Once all testing activities are completed, it will be possible to swap the testing and production slots. We are not going to set up any deployment sources for the production slot.

Deployment source

Learnings: One of the four projects is based on the Visual Studio Team Services (VSTS) centralized source control system rather than on Git. It’s not currently possible to use that as a deployment source for a slot, so we decided to use msdeploy to deploy all applications. Considering that the partner uses a third-party tool for deployment that generates msdeploy packages, this is a great approach.

The applications may have additional settings, including connection strings. Using the portal, we can apply environment settings (App settings), handler mappings, default documents, and a new root folder:

Learnings: One of the web applications contains two virtual directories/applications. It’s possible to set up all applications using the Application settings panel.


Azure Functions and Service Bus (the hackfest task)

Hackfest

Azure Functions is a solution for easily running small pieces of code in the cloud, and those functions can be developed in C#. We can use Azure Functions to migrate the existing Windows services. Because the existing services are based on C# and Topshelf, we can move the existing code with only a few modifications. The most important modification concerns RabbitMQ, due to the migration to Azure Service Bus.

Azure Functions supports several invocation triggers, including the Service Bus queue trigger, the HTTP trigger, and the timer trigger. Because we are going to use Azure Service Bus to send all needed data between the web applications and Azure Functions, the Service Bus queue trigger is the best choice.
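
A C#-based function with a Service Bus queue trigger can be as small as the following sketch (C# script style; the binding for myQueueItem is configured in function.json, and the processing logic shown here is a hypothetical placeholder):

    // run.csx
    using System;

    public static void Run(string myQueueItem, TraceWriter log)
    {
        // myQueueItem contains the body of the Service Bus message sent by a web application.
        log.Info($"Processing queue message: {myQueueItem}");

        // The logic from the existing Topshelf worker can be invoked from here.
    }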

To save money, we recommend hosting Azure Functions inside the same App Service plan as the web applications. But it’s possible to use a dynamic (consumption) plan as well.

Azure Marketplace can be used to create an Azure Function:

Azure Marketplace

To use an existing App Service plan, it’s important to use the Classic option:

Even the Classic plan allows us to run several functions at the same time. Of course, the number of concurrently running functions can be limited by the amount of memory and other parameters.

Once an Azure Functions instance is created, it’s possible to create a new function:

Azure Function instance

After that, you can configure and test the function:

Test the function

Learnings:

Azure Functions: other differences between a dynamic (consumption) plan and a dedicated plan:

  • 5-minute limit on function executions on dynamic (unlimited on dedicated).
  • The App Service sandbox on dedicated (basic/standard/premium) is more relaxed, so PDF generation is more likely to work there. This also applies to WebJobs running in the basic+ plans.

App Service/WebJobs:

Service Bus:

Don’t forget to check out Azure Scheduler to see if it’s useful for you: https://azure.microsoft.com/en-us/services/scheduler/

Summary

To simplify further deployments, an Azure Resource Manager template was created based on the prototype deployment.

To best summarize the hackfest itself, here is a quote from Sachin Agrawal, the CEO of EDP Software:

This Hackfest accelerated our Azure onboarding tremendously. The guys came back to the office yesterday super-thrilled about the possibilities and knew how to proceed.

All in all, they were able to find answers to the “weird” aspects of our system. Our on-premises components were migrated to use Azure equivalents (Azure SQL, Azure Service Bus, WebJobs). We have a test deployment about 99% up and running with just a few internal tasks to finalize wiring some components up.