When building medical software, it is essential to be able to gather information from many different types of medical equipment. The main challenge is that many medical devices use custom protocols, so what is needed is a robust, universal architecture that gathers all the information in one place. Our customer, MedApp, has already created such a component, but needs to extend it to use IoT Hub as an output.

MedApp’s main product is CarnaLife, a portable telemedical system. Part of this system is an alerting solution. Sometimes with telemetry, we can receive information about critical, urgent, or special situations. These kinds of events can be detected directly by a measuring device, and of course, should be processed on a backend system as soon as possible.

A good example is a device, or "box" to be precise, for an automated external defibrillator (AED), often found in public places such as train stations, shopping malls, or museums. In an emergency, anyone should be able to use this device to provide treatment. In case of a serious heart condition, it is critical for paramedics to know when and where the defibrillator box was opened, because they need to get there as fast as possible. Even if the reason for opening the AED box turns out to be exaggerated, someone probably still needs urgent medical help. Of course, someone can call an ambulance "manually," but it is much better to have two signals: one from the European emergency number 112, and one from the device itself.

The final product needs to gather all the telemetry information in one main location and provide additional routes in case of special signals.


AED device with instructions at a train station in Toruń

Sample AED device


Why telemedicine?

Telemedicine (as per Wikipedia) is “the use of telecommunication and information technology to provide clinical health care from a distance. It has been used to overcome distance barriers and to improve access to medical services that would often not be consistently available in distant rural communities. It is also used to save lives in critical care and emergency situations.”

Recent European Union (EU) regulations set up a framework for new, innovative projects and also provide the required legal framework for telemedicine. To quote the European Commission: “If European countries are to meet the growing demands for healthcare services, a focus should be placed on finding ways of maximizing new technology, such as telemedicine solutions. This [approach] can help improve healthcare itself while increasing access to care and saving resources. To support this process, the EU is funding several telemedicine projects and pilots.”

Key technologies

Core team

From MedApp:

  • Mateusz Kierepka – CEO
  • Tomasz Kuciel – Vice President
  • Michał Adamczyk – Development Director
  • Mateusz Radecki – Team Leader
  • Wojciech Stadnicki – Android Developer
  • Łukasz Kierepka – Server Developer
  • Łukasz Kuźma – UWP Developer
  • Michał Dyndor – UWP Developer

From Microsoft:

  • Tomasz Kopacz – Principal Architect Evangelist, DX Poland
  • Karol Żak – Technical Evangelist, DX Poland


Team at work



Customer profile

MedApp is a company based in Poland that delivers innovative healthcare solutions. Their main product, CarnaLife, is a portable telemedical system focused on chronic diseases and conditions such as heart disease, diabetes, and obesity. In a world first, Professor Dudek presented the use of HoloLens glasses for cardiology at the 2016 New Frontiers in Interventional Cardiology (NFIC) workshop. Key product characteristics:

  • Compatible with Android, iOS, and Windows
  • Cloud solution for patient monitoring and 3D augmented reality (AR) simulations, including HoloLens
  • Offline analysis of recorded examinations
  • One summary result of all harvested exams
  • Advanced active AR simulations visualized during examination recording

MedApp systems consist of many different modules, applications, and solutions:

  • CarnaLife Holo. Application for rendering 3D and 4D results of different examinations such as CT, MR, and heart echo scan
  • CarnaLife Server. Azure-based data storage, SQL Server, and algorithms to analyze data and test results (mostly EKGs, but also pressure and temperature)
  • CarnaLife Lite. Android, iOS, and Windows app for patients; aggregates telemedical data and creates reports and statistics of their results; connects to a patient’s medical equipment and devices to gather data and test results and upload them to the server for further analysis
  • CarnaLife Pro. Android, iOS, and Windows app for doctors and specialists; manages and monitors patients and test results
  • CarnaLife System. Android, iOS, and Windows app for medical centers and organizations; manages doctors, specialists, and patients
  • Alerts system for medical devices (IoT). Solution for devices such as an automated external defibrillator (AED) that automatically sends alerts and notifies certain institutions in case of emergency


Short video about MedApp and their solutions



MedApp presentation at Microsoft event



In this article

This case study is divided into three parts: Xamarin, Mobile DevOps, and Internet of Things. Each part has its own problem statement and solution, steps, and delivery sections.

Xamarin

Problem statement

“Our team of developers is very small, which makes the development of CarnaLife on all three native platforms very hard to maintain. If that was not enough, we recently lost the last Swift iOS developer, and at the same time we hired a few new .NET developers…” —Mateusz Kierepka, CEO, MedApp S.A.

The team faced the following challenges:

  • How to maintain the same development speed and features for a mobile app on different platforms. The CarnaLife system includes client apps for Android, iOS, and Windows. The problem is that they’re all made with native tools for each platform, and because of limited time and human resources, each platform is developed at a different tempo. The perfect solution would be to rewrite them by using Xamarin to reduce the cost and time of development and also reduce feature differences between platforms.

  • How to reuse algorithms written in C# for CarnaLife’s ASP.NET server application within their mobile apps. The CarnaLife system includes a server (ASP.NET / C#) that runs a lot of heavy analytics and algorithms. The MedApp team recently decided to move some of the algorithms written in C# into their mobile apps. This was easy to achieve for the Windows app (because it’s written in C#), but rewriting the server’s C# logic in Java and Objective-C proved really difficult, mostly because of limited time and resources, but also because slight differences between C# and the other languages change the outputs of those algorithms.

  • How to reuse libraries, views, and custom controls that are currently used in the native implementation of the mobile app for each platform. MedApp had already developed their application for Android, iOS, and Windows by using native tools, libraries, and custom controls. In the switch to Xamarin, they would like to reuse as many of those native custom controls and libraries as possible.
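To illustrate the kind of logic that benefits from sharing, here is a minimal sketch of a platform-independent C# filter of the sort used to smooth measurement samples. The class and method names are hypothetical, not MedApp's actual code; the point is that pure C# like this can be referenced unchanged from the server, UWP, Xamarin.iOS, and Xamarin.Android projects.

```csharp
using System;
using System.Collections.Generic;

namespace CarnaLife.Shared
{
    // Hypothetical example of platform-independent logic that can live in a
    // shared C# project and be referenced by server, UWP, and Xamarin apps.
    public static class SignalFilters
    {
        // Simple moving average over a sliding window of 'windowSize' samples.
        public static IList<double> MovingAverage(IList<double> samples, int windowSize)
        {
            if (windowSize <= 0) throw new ArgumentOutOfRangeException(nameof(windowSize));
            var result = new List<double>(samples.Count);
            double sum = 0;
            for (int i = 0; i < samples.Count; i++)
            {
                sum += samples[i];
                if (i >= windowSize) sum -= samples[i - windowSize];
                // Until the window fills up, average over the samples seen so far.
                result.Add(sum / Math.Min(i + 1, windowSize));
            }
            return result;
        }
    }
}
```

Because the same assembly runs everywhere, the algorithm's output is identical on every platform, which is exactly the property that was lost when the logic was hand-ported to Java and Objective-C.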

Solution, steps, and delivery

State of Xamarin at MedApp

The MedApp team decided that for now the best approach for them would be to leave their Android Java application as is, and focus on rewriting the iOS application from Xcode Swift to C# with Xamarin.iOS.

After they successfully migrate the Xcode Swift iOS app to Xamarin.iOS, they will focus on migrating the Android Java app to Xamarin.Android and share C# logic and algorithms among iOS, Android, and Universal Windows Platform (UWP) by using the Xamarin.Native approach.

To speed up the development process for the Xamarin.iOS project, they will reuse logic written in C# for the UWP app.

It is also crucial to reuse as much as possible of what was already done in the Xcode iOS project, such as the Storyboards created earlier in Xcode.

Reusing existing Xcode Storyboards for Xamarin Studio Xamarin.iOS project

As can be seen in the following screenshot, the CarnaLife iOS application developed by using Swift in Xcode already consisted of multiple views developed with Storyboards.


CarnaLife Lite iOS application written with Swift in Xcode



To speed up the migration process, we wanted to import those views from Xcode to Xamarin Studio. Because there were no tutorials or documentation about how to do this properly, we tried to find our own approach to deal with it.

After some research, we found out that Storyboards and XIB (XML Interface Builder) files are defined by XML, and both Xcode and Xamarin/Visual Studio share the exact same schema, which means that Storyboards and XIB files can be used interchangeably between Xcode and Xamarin/Visual Studio.
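For illustration, a heavily trimmed, hypothetical fragment of a Storyboard file shows the plain XML that both IDEs read and write; element and attribute names follow the Interface Builder schema, and a real file carries many more attributes:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<document type="com.apple.InterfaceBuilder3.CocoaTouch.Storyboard.XIB" version="3.0">
  <scenes>
    <scene sceneID="s1">
      <objects>
        <viewController id="vc1" customClass="LoginViewController" sceneMemberID="viewController">
          <view key="view" id="v1">
            <subviews>
              <label id="lbl1" text="Login"/>
            </subviews>
          </view>
        </viewController>
      </objects>
    </scene>
  </scenes>
</document>
```

Because both tools serialize to this same schema, a `.storyboard` file can simply be copied from the Xcode project into the Xamarin.iOS project.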


Storyboards and XIB views defined in XML



For the proof of concept (POC), we prepared a simple Storyboard in Xcode with only four views defined inside.


Simple Storyboard with four view controllers for POC



After creating a new Xamarin.iOS project in Xamarin Studio, we simply added the MainCopy.storyboard file into the solution.


Adding MainCopy.storyboard to Xamarin.iOS project



We were able to open that imported Storyboard inside Xamarin Studio without any issues. All the views and controls sustained their original positions.


Xcode Storyboard opened in Xamarin Studio



View Controller details



To change the startup Storyboard for our application, we had to modify the Main Interface property in the Info.plist file.


Editing the Info.plist file



Every view in a Storyboard needs the UIViewController class for managing views and interactions and the logic behind them. Instead of creating the class from scratch for each of the views from the imported Storyboard, we found that there was an easier way to generate them automatically by using Xamarin Studio.

For Xamarin Studio to automatically generate the UIViewController class, it was crucial to change the value of the Class property, and then put the original value back in that text field:

  1. Press Ctrl+X to remove the value of the Class property.
  2. Press Enter to trigger the change and notify Xamarin Studio.
  3. Press Ctrl+V to paste the original value of the Class property back to the text field.
  4. Press Enter to trigger the change of value and automatically generate the .cs and .designer.cs files for the new UIViewController class.


Removing the value of the Class property



Replacing the value of the Class property



Xamarin Studio generates the .cs and .designer.cs files:

  • A .cs file implements a partial definition of the UIViewController class. That’s basically the code-behind that manages our view.

  • A .designer.cs file also includes a partial definition of the UIViewController class. However, it also registers this class to work with native Objective-C libraries and holds a definition of all the controls and actions defined in a view, so we can reference them from the code behind by simply using their names.
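Using the names that appear in the case study's screenshots (loginImageLabel, emergencyLoginTapped), the pair of generated files looks roughly like the following. This is an illustrative C# sketch of the structure, not the generated code verbatim:

```csharp
// LoginViewController.designer.cs -- regenerated by the IDE; do not edit by hand.
[Register("LoginViewController")]
partial class LoginViewController
{
    [Outlet]
    UIKit.UILabel loginImageLabel { get; set; }

    [Action("emergencyLoginTapped:")]
    partial void emergencyLoginTapped(Foundation.NSObject sender);
}

// LoginViewController.cs -- the code-behind, where we add our own logic.
public partial class LoginViewController : UIKit.UIViewController
{
    public LoginViewController(IntPtr handle) : base(handle) { }

    partial void emergencyLoginTapped(Foundation.NSObject sender)
    {
        // Implementation added by hand; without it, tapping the button
        // raises NSInvalidArgumentException at run time (see below in this article).
        loginImageLabel.Text = "Emergency login";
    }
}
```

The [Register] and [Action] attributes are what tie the C# class back to the Objective-C runtime, which is why the designer file must stay in sync with the Storyboard.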


A .designer.cs file with definitions of controls and actions linked with the view



As you can see in the previous and next screenshots, those definitions are linked 1:1 between the Storyboard and the UIViewController designer file. The designer file is regenerated by Xamarin Studio after each change in the Storyboard, so developers should not modify it manually.


Sample control with Name property set to loginImageLabel value



After properly generating UIViewController classes for all the views, we still faced two types of exceptions:

  1. NSUnknownKeyException

    NSUnknownKeyException



    The NSUnknownKeyException was caused by a value set for the Module property of the UIViewController in the Storyboard designer.


    Module property



    All we had to do to fix this issue was to completely remove that value from the Module field.


    Deleted value of Module property



    After that fix, our application ran successfully for the first time.


    First successful run



  2. NSInvalidArgumentException

    The NSInvalidArgumentException occurred every time we tried to click a button in our running application.


    NSInvalidArgumentException



    It turned out that all the actions/events from our view were defined in the .designer.cs file as partial void methods without any implementation.


    Designer and action definitions



    To solve this issue, we simply added implementations for those actions in the UIViewController class definition (.cs file).


    Creating emergencyLoginTapped action implementation



    Implementations for all the actions



After that last fix, our UI finally worked and behaved as we wanted.


Final running application



Steps for importing Storyboards from Xcode to Xamarin Studio

We used what we learned about generating UIViewController classes for all the views in MainCopy.storyboard to create simple step-by-step instructions for importing Storyboards from Xcode to Xamarin Studio:

  1. Add the Xcode Storyboard to the Xamarin Studio Xamarin.iOS project.

  2. For each view in the imported Storyboard, go to Identity properties (in the UI) and repeat steps 3 to 6 as follows.

  3. Press Ctrl+X to remove the value of the Class property, and press Enter to trigger the change.

  4. Remove the value of the Module property.

  5. Press Ctrl+V to paste the original value of the Class property back to the text field.

  6. Press Enter to trigger the change of value and automatically generate .cs and .designer.cs files for your UIViewController.

  7. For each of the newly generated .cs files, make sure to create an implementation of all the Actions declared as partial void methods in corresponding .designer.cs files.


Mobile DevOps

Problem statement

The team faced the following challenges:

  • How to speed up the development cycle to deliver new features and bug fixes faster. The full release cycle takes more than 30 days, which slows down the development process and the implementation of new features and possible bug fixes.

  • How to automate builds and distribution to testers. All the builds are done manually on a developer’s machine, and then are manually distributed to testers.

  • How to automate UI tests. All the UI tests are done manually by developers, and after each build, they waste a lot of time testing the same repeatable scenarios.

Solution, steps, and delivery

Value stream mapping

Prior to the hackfest event, we did a value stream mapping (VSM) session to reveal all the pain points and possible resource-wasting steps in the current MedApp workflow.


Value stream mapping session results



As you can see in the previous diagram, most of the tasks (building packages, sending them to testers, testing, releasing) were done 100% manually by the developers. Because of that, the current release cycle takes more than 30 days to complete. The VSM turned out to be an extremely valuable lesson for MedApp, and it exposed a lot of time-wasting steps we could possibly fix by implementing DevOps practices. Ideally, we would like to shorten the release cycle to two weeks.

We settled on implementing the following practices and improvements:

  • Visual Studio Team Services projects, repositories, and project management. One of the developers asked for help in reorganizing their Team Services projects and repositories for easier management.

  • Continuous integration (CI). Currently, developers waste up to 30 minutes building an app package, so we want to use Team Services to improve this by automatically building packages after each commit or merge in the repository.

  • Automated UI testing. Developers waste too much time doing the same manual UI tests over and over after each build, so we need to automate them by using Xamarin.UITest and Test Cloud.

  • Continuous deployment (CD). We’ll use Team Services releases to set up a process of running automated tests and distributing packages to testers (by using HockeyApp) and then later to beta and production environments. We’ll use pre-deployment approvals to make sure the entire process is secure and controlled by managers.

  • Application performance monitoring and management. Developers are already using HockeyApp to distribute packages to testers and beta users and to gather telemetry and crash reports. However, the problem with the current setup is that they find it very difficult to determine which crash reports come from which version of their app (dev, alpha, beta, etc.), so we’ll need to take a closer look at that issue.

Therefore, our target architecture should look something like the following.


DevOps cycle architecture



Visual Studio Team Services projects, repositories, and project management

As we started to work on implementing some Mobile DevOps practices for MedApp projects, it turned out that we first needed to focus on reorganizing their Visual Studio Team Services projects and repositories to make them more manageable for team leaders.

MedApp complained about two main pain points:

  1. Managing tasks between projects. It’s really difficult for them to manage tasks for programmers because whenever they want to add a new task for a UWP or Android app, they need to switch back and forth between projects.

  2. Viewing projects. They wanted to have one unified dashboard to view tasks and sprints from different projects on a single timeline.

Managing tasks between projects

It turned out that the root of the first issue was that they were using a different Team Services project for each repository instead of having one big project with multiple repositories representing different parts of the entire CarnaLife solution (such as Android apps, iOS apps, Windows apps, and server).


CarnaLife/ECG solution repositories split into multiple Team Services projects

Multiple Team Services projects


The easy fix for that was to simply migrate all the projects into one big Team Services project set up for CarnaLife solutions. There are numerous publications from ALM experts and MVPs stating that it is the best approach; for more information, see Merge Team Projects into one in TFS and Why You Should use a Single (Giant) TFS Team Project.

There’s also a Visual Studio Team Services extension that helps out in migrating data from one Team Services project to another and merging projects.

Viewing projects

Addressing the first problem solved the second one because after migrating from multiple Team Services projects to a single project, it was much easier to manage the team, tasks, and sprints. It was all possible thanks to one combined backlog instead of having multiple backlogs like before.

Nevertheless, to fully satisfy their needs, they started using the Delivery Plans extension. Delivery Plans is an organizational tool that helps users drive cross-team visibility and alignment by tracking work status on an iteration-based calendar. Users can tailor their plan to include any team or backlog level from across projects in the account. Furthermore, Field Criteria on Plans allows users to further customize their view, while Markers highlight important dates.

Continuous integration

To set up continuous integration (CI) for MedApp, we used the CarnaLife Lite Android application written in Java as an example. We decided to use the Windows hosted agent in Team Services to run builds because it was cheaper and easier than setting up and managing a special machine for that purpose on-premises.


Continuous integration schema



We started off by setting up a new build definition in Team Services and connecting it with the repository.


New Team Services build definition for Android Java app with Gradle



Setting up CI with develop branch

Setting up CI


At first we wanted to use the Android build task, but we quickly found out that it is now deprecated and we should instead use the Gradle build task.
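In practice, the Gradle task's inputs boil down to invoking the Gradle wrapper checked into the repository; roughly the following (the exact task name depends on the project's build types):

```
# What the Gradle build task effectively runs on the hosted agent
./gradlew assembleDebug    # or assembleRelease, per build configuration
```

Using the wrapper means the hosted agent always builds with the same Gradle version as the developers' machines.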


Using the Gradle build task to configure build for Android Java application



After successfully building the package for the Android application, we wanted to sign it (more about signing Android apps) with a proper keystore file by using the Android Signing task as shown in the following screenshot. We used build definition variables to safely pass in any sensitive data we needed for this task.
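The secret variables are then referenced from the signing task's inputs by name rather than as literal values; a sketch with hypothetical variable names:

```
# Build definition variables (marked secret with the padlock icon):
#   keystore-password, key-alias, key-password
#
# Android Signing task inputs reference them as:
#   Keystore Password:  $(keystore-password)
#   Alias:              $(key-alias)
#   Key Password:       $(key-password)
```

Secret variables are masked in build logs, so the keystore credentials never appear in plain text.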


Configuring task to sign Android app package



Using secret variables to hide sensitive data



The last thing we had to do to complete the build definition was to configure Copy files and Publish artifacts tasks. We needed to do this so we could later use those artifacts to deploy tests to Test Cloud and distribute apps to HockeyApp (more about artifacts in team build).


Configuring task to copy output files after build



Configuring task to publish copied files to drop for further use



Automated UI testing

One of the huge pain points that came out during the VSM session was the need for automated UI tests to save developers time. Manual testing was truly one of the biggest time-wasting steps of the MedApp workflow.

To create UI tests for the Android Java application, we used Xamarin Test Recorder and the Xamarin.UITest framework, which allowed us to easily generate tests written in C#. After creating a few sample tests, we decided to try them out, first locally on an emulator and then on real devices with Xamarin Test Cloud.

Creating tests with Xamarin Test Recorder is really fast and easy. All we had to do was to connect with our application and put Test Recorder into “record” mode. After that we could simply start using our application like we normally would to reproduce a certain UI test scenario. Xamarin Test Recorder followed each of our steps and generated a test scenario script written in C#.

While designing a test scenario, we often wanted to wait for a specific amount of time for something to happen in our UI; for example, when loading data from a web service or waiting for some animation to complete. That’s where we used “assertion mode,” which adds a WaitForElement step to the script.

Another very cool feature we used was the Screenshot method, which allowed us to take a screenshot from the UI at any time. It was very important in terms of using Test Cloud integration later because each screenshot is then used as a separate test step when presenting results.
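A recorded scenario, after light editing, looks roughly like the following sketch. The control names here are hypothetical, while Tap, WaitForElement, and Screenshot are standard Xamarin.UITest IApp methods:

```csharp
using NUnit.Framework;
using Xamarin.UITest;

[TestFixture]
public class LoginTests
{
    IApp app;

    [SetUp]
    public void BeforeEachTest()
    {
        // Starts the app on a local emulator/device, or in Test Cloud.
        app = ConfigureApp.Android.StartApp();
    }

    [Test]
    public void EmergencyLoginShowsDashboard()
    {
        app.Screenshot("App started");
        app.Tap(c => c.Marked("emergencyLoginButton"));     // hypothetical control name
        app.WaitForElement(c => c.Marked("dashboardView"),  // the "assertion mode" step
                           timeout: System.TimeSpan.FromSeconds(30));
        app.Screenshot("Dashboard visible");
    }
}
```

Each Screenshot call becomes a named step in the Test Cloud report, which is what makes the per-step results shown later in this article possible.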


Xamarin Test Recorder in action



Tests created with Xamarin Test Recorder can be:

  • Exported to a .cs file.
  • Exported to a Xamarin Studio UI test project.
  • Uploaded to the Xamarin Test Cloud.

Exporting to a Xamarin Studio UI test project is useful when you want to customize generated tests (add/remove steps, add screenshots, change timeout values on the WaitForElement method, etc.) and reuse them in the future.


New UI test project generated by Xamarin Test Recorder



As we planned to later reuse those tests, put them in a repo, and include them in our continuous deployment (CD) process, we exported all the tests to a UI test project in Xamarin Studio. After we customized our tests, we decided to run them on Test Cloud to check if they were ready to be used in the CD process.


First Test Cloud test run



Several minutes later, the test report was ready and we could check how the application behaved and how the UI looked in each step (that’s where the Screenshot method came in handy).


Completed Test Cloud test run results dashboard



Test Cloud test results displayed for multiple devices



Test Cloud test results displayed for specific device



Continuous deployment

We used continuous deployment (CD) to automatically run UI tests on Xamarin Test Cloud and also to deploy the Android app to alpha testers after successfully completing those tests.

To achieve that, we started by creating a new Team Services release definition, and in our first environment, we added a Deploy to Xamarin Test Cloud task.


Release definition with Deploy to Test Cloud task



The next thing we had to do was to make sure we had our artifacts set up properly. What we wanted to achieve here was to make use of what is published after each successful build by the CI build definition that we created earlier.


Setting up artifacts



Another crucial configuration was on the Triggers tab. That’s where we could set up continuous deployment to run the release after each successful build on a specific branch.

As you can see in the following diagram, you can also use Scheduled as a trigger and run the release automatically whenever you like (for example: once a day or once a week).


Setting up triggers for CD



A very cool feature of the environment configuration is that we can determine pre- and post-deployment approvers for each environment in the release definition. We used this to specify the team leader as an approver for each deployment to Test Cloud.


Setting up pre-deployment approvers



When we were done configuring the first environment (Deploy to Test Cloud), we added a new environment to distribute our application to alpha users through HockeyApp. Here again we used the pre-deployment approval option, specifying the Test Lead user to verify that the UI tests passed and the application could be deployed to HockeyApp.


Adding new environment with pre-deployment approvers



The release definition allows us to add multiple environments and define the specific order in which our application should be deployed to each of them. We took advantage of that: whenever the Xamarin Test Cloud task (first environment) fails, which means the tests did not complete successfully, Team Services will not deploy that faulty app to our users on HockeyApp (second environment).


Setting up Deploy to Hockey App task



In the newly created HockeyApp environment, we used the Deploy to HockeyApp task and configured it to connect with the MedApp HockeyApp account and specific App ID.


Multiple environments within the same release definition



It was very easy to set up a safe connection by using the service endpoint manager. We simply followed this HockeyApp guide to set up a service connection with the MedApp HockeyApp account.


HockeyApp Connection management



When we configured everything for the ECG_LITE_ANDROID-alpha-release release definition, we decided to give it a few test releases to check out how it works. A few minor mistakes and fixes later, we finally managed to make it work!


Completed test releases



Application performance monitoring and management

As I mentioned before, MedApp was already deeply integrated with HockeyApp prior to our cooperation, and as you may see in the following screenshot, they used HockeyApp quite extensively.


HockeyApp dashboard



While using HockeyApp, they found two issues that were rather annoying for them:

  1. All versions of their Android app (dev, alpha, etc.) were connected to only one HockeyApp App ID, which made it difficult to determine which version of their app sent the crash report.

  2. They couldn’t install multiple versions of the same application on a single test device. Example: if one of the testers installed a dev version of CarnaLife Lite on his test device and then wanted to install another version (alpha or release, for example), he had to overwrite that dev version.


HockeyApp crash reports dashboard



Both problems were connected to each other and share a similar solution:

  • To separate crash reports from different releases of applications, we need to create a new app in the HockeyApp dashboard for each release type that we want to collect crash reports from. Each app project created in HockeyApp has a different App ID value. We need to use those values inside the application logic where we set up crash reporting with the HockeyApp for Android SDK.

  • To install a few different releases of the same application (dev, alpha, production, etc.) on one device, we need to create a separate app in the HockeyApp dashboard for each release that we want to support, and also change the Bundle Identifier (iOS) and Package Name (Android) for each release. For more information, see How to organize development and production apps for distribution.
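For the Android app, both requirements can be sketched directly in the Gradle build configuration: a per-build-type applicationIdSuffix so versions install side by side, and a per-build-type HockeyApp App ID for the SDK. The App ID values below are placeholders:

```
// build.gradle sketch; App ID values are placeholders
android {
    defaultConfig {
        applicationId "pl.medapp.carnalife"
    }
    buildTypes {
        debug {
            applicationIdSuffix ".dev"   // installs alongside the release build
            resValue "string", "hockeyapp_app_id", "DEV_APP_ID"
        }
        release {
            resValue "string", "hockeyapp_app_id", "PROD_APP_ID"
        }
    }
}
```

The crash-reporting setup then reads the App ID from the generated string resource, so each installed variant reports to its own HockeyApp project.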

The easiest way to address both of these issues was to include a simple task in the CI build definition in Team Services. The tasks we used were simple find-and-replace scripts for XML/JSON files. There are multiple tasks in the Visual Studio Team Services Marketplace that can be used for this, and we found two of them to be the most useful and easy to use.


Sample transformation of package name for Android application



Internet of Things

Problem statement

The team faced the following challenges:

  • How to reliably send information about the state of the device
  • How to gather large data (images, MRI scans, and others)
  • How to send data from the box and show it in almost real time on the UWP dashboard (there is no SDK for Azure Service Bus for the Universal Windows Platform)
  • How to integrate messages (signals) with existing medical systems

Solution, steps, and delivery


Components and architecture

The main components are as follows (see IoT architecture diagram):

  • A box for the automated external defibrillator (the source of events: the box is opened)
  • A custom device (Kompakt-5) that sends information when someone opens the door; uses a custom TCP-based text protocol
  • Many types of medical equipment with detailed telemetry information; we started with EKG monitoring
  • MedApp server (technically, a sophisticated socket server) running on Azure Cloud Services; this kind of solution is more convenient because it can adapt to any technical requirements (binary or text protocols, as in the Kompakt-5 case, TCP sessions, additional headers). MedApp is also working on migrating from Cloud Services to Azure Service Fabric, a next-generation Cloud Services platform with many additional capabilities.
  • IoT Hub forwarder module to send information to IoT Hub, running on the same cloud service as the MedApp server
  • IoT Hub to receive events (telemetry, large images, special signals), and to store information about devices (using device twins), such as a physical (postal) address or special notes
  • IoT Hub routing to forward selected messages to Azure Service Bus topic subscriptions
  • UWP/WPF dashboard that uses device twins to get information about device location (and where to send paramedics team)
  • Azure Stream Analytics and Azure Data Lake for long-term message storage and analytics


IoT architecture diagram

IoT architecture


Protocol for Kompakt-5 devices

Kompakt-5 is a simple GSM bridge based on the Simcom 300C that can gather three GPIO signals and two analog inputs (voltage range 0–24 V), and send the data to the socket server.

Without going into details about the protocol and implementation, we still need to know a few things for further discussion. The device ID (called the object ID here, to distinguish it from the "device ID" used in IoT Hub) is a four-character string. If someone opens the box, Kompakt-5 sends a 13-character string containing the object ID and an event ID. The device also needs to periodically send a status packet (to keep the socket connection alive).

All this information is processed (and generated) by using the MedApp socket server, which forwards appropriate messages to IoT Hub.
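As a rough illustration of the server-side handling, an incoming event frame could be split like this. The field layout shown here is our assumption for the sketch; the real layout is defined in the Kompakt-5 protocol documentation.

```csharp
using System;

// Illustrative sketch only: the 4-character object ID prefix is given in the
// text; treating the remaining 9 characters as the event payload is our
// assumption about the frame layout.
public static class Kompakt5Protocol
{
    public const int FrameLength = 13;
    public const int ObjectIdLength = 4;

    public static (string ObjectId, string EventId) ParseEventFrame(string frame)
    {
        if (frame == null || frame.Length != FrameLength)
            throw new ArgumentException("Expected a 13-character Kompakt-5 event frame.");

        string objectId = frame.Substring(0, ObjectIdLength); // e.g., "AAAA"
        string eventId = frame.Substring(ObjectIdLength);     // remaining event payload
        return (objectId, eventId);
    }
}
```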

In the next version, instead of using the expensive Kompakt-5, we will probably use a device based on ESP8266 or ESP32 or another simple device with a GSM radio that is capable of directly sending messages to IoT Hub by using either the HTTP or MQTT protocol.


Part of documentation describing protocol between Kompakt-5 and TCP server

Protocol between Kompakt-5 and TCP server


Managing devices

From an IoT Hub perspective, each device needs its own device ID and key. Unfortunately, many medical devices have simplified identification; in the case of Kompakt-5, it is just four letters. Therefore, we need a translator that, based on the 4-character string delivered to the MedApp server, returns the correct IoT Hub connection for that particular device.

To set up this correlation, we used a simple JSON-based file with objectid and information for IoT Hub (iotdevicename and iotdevicekey):

 {
   "iothubname": "[iothubname].azure-devices.net",
   "devices": [
     {
       "iotdevicename": "defib01",
       "iotdevicekey": "deviceKey",
       "objectid": "AAAA"
     },
     {
       "iotdevicename": "defib03",
       "iotdevicekey": "deviceKey",
       "objectid": "AAAC"
     },
     {
       "iotdevicename": "defib02",
       "iotdevicekey": "deviceKey",
       "objectid": "AAAB"
     }
   ]
 }


IConfig is a simple interface with two methods: one (GetConnectionForObjectId) that returns a connection string for the device (based on the ObjectId string), and a second (GetDevices) that lists all devices.

  public interface IConfig {
    string GetConnectionForObjectId(string objectid);
    List<DeviceInfo> GetDevices();
  }
  public class DeviceInfo {
    public string IotDeviceName { get; set; }
    public string IotDeviceKey { get; set; }
    public string ObjectId { get; set; }
  }


Config is a class that implements IConfig:

  public class Config : IConfig
  {
    private string m_iotHubName;
    private List<DeviceInfo> m_devices;

    public string GetConnectionForObjectId(string objectid) {
      DeviceInfo di = m_devices.FirstOrDefault(p => p.ObjectId == objectid);
      if (di == null)
        throw new ArgumentException("Unknown object ID: " + objectid);
      return $"HostName={m_iotHubName};DeviceId={di.IotDeviceName};SharedAccessKey={di.IotDeviceKey}";
    }
    public List<DeviceInfo> GetDevices() { return m_devices; }
    public DeviceInfo GetDeviceInfoForObjectId(string objectid) {
      return m_devices.FirstOrDefault(p => p.ObjectId == objectid);
    }
    public Config(string file)
    {
      try
      {
        var obj = JObject.Parse(File.ReadAllText(file));
        m_iotHubName = obj["iothubname"].ToString();
        JArray devices = (JArray)obj["devices"];
        m_devices = new List<DeviceInfo>();
        foreach(var item in devices)
        {
          m_devices.Add(new DeviceInfo { IotDeviceKey = item["iotdevicekey"].ToString(), IotDeviceName = item["iotdevicename"].ToString(), ObjectId = item["objectid"].ToString() });
        }
      } catch (Exception ex)
      {
        throw new ApplicationException("Invalid configuration!",ex);
      }
    }
  }


The constructor (Config) simply iterates over JArray and reads all the configuration details.

A helper class provides semantics (methods) similar to deviceClient from the IoT Hub Client SDK, but as a parameter, receives an object ID (a small, 4-character ID for medical equipment).

To speed up the operation, we decided to cache the IoT Hub device clients by using a simple Dictionary<string, DeviceClient>().
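For example, the cache could be populated once at startup from the Config instance. This is a sketch; the IotSender constructor shown here is our assumption, based on how the helper is set up later in this section.

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Devices.Client; // IoT Hub device SDK

public class IotSender
{
    // One cached DeviceClient per 4-character medical object ID.
    private readonly Dictionary<string, DeviceClient> m_cache =
        new Dictionary<string, DeviceClient>();

    public IotSender(IConfig config)
    {
        foreach (DeviceInfo di in config.GetDevices())
        {
            // Build the connection string once and keep the client open.
            m_cache[di.ObjectId] = DeviceClient.CreateFromConnectionString(
                config.GetConnectionForObjectId(di.ObjectId));
        }
    }
}
```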

Thanks to that, the implementation to send an alert is very short:

  public async Task SendAlertAsync(string objectId,string alertType)
  {
    var deviceClient = m_cache[objectId];
    var msg = new Message();
    msg.Properties.Add(alertType, "1");
    await deviceClient.SendEventAsync(msg);
  }


Similarly, in UploadToBlobAsync and the other proxy methods, we get the device client from the cache and call the appropriate SDK method:

  public async Task UploadToBlobAsync(string objectId, string blobName, Stream image)
  {
    var deviceClient = m_cache[objectId];
    await deviceClient.UploadToBlobAsync(blobName,image);
  }


Using the library to call IoT Hub from the MedApp server is relatively simple. First, we need to set up the helper library:

  IotSender iot = new IotSender(new Config(@"C:\TS\ASCEND2017\MedApp\MedApp\IotLib\IotLib\devices.json"));


When the socket server receives a message about opening the box from an external device (such as Kompakt-5), sending an alert is relatively easy:

  await iot.SendAlertAsync("AAAA", "open");


If another device needs to send telemetry, the TCP server simply calls:

  await iot.SendEventAsync("AAAA",
    JsonConvert.SerializeObject(new
    {
      msgtype = "med",
      devicetype = "t1",
      devicename = "dev2",
      val1 = rnd.Next(100),
      val2 = rnd.NextDouble()
    }
  ));


There is also functionality to send messages in a batch. Some telemetry messages are sent at a very fast rate, and there is no point in sending them to IoT Hub one after another. We still need to process them in order in Stream Analytics or in custom code, but we can send them in a batch by using code similar to the following:

  msglst = new List<Message>();
  for (int i = 0; i < 100; i++)
  {
    var data = new
    {
      msgtype = "med",
      devicetype = "t1",
      devicename = "dev2", //For easy processing on stream analytics
      val1 = rnd.Next(100),
      val2 = rnd.NextDouble()
    };
    var str = JsonConvert.SerializeObject(data);
    var msg = new Message(UTF8Encoding.UTF8.GetBytes(str));
    msg.Properties.Add("devicename", "dev2"); 
    msglst.Add(msg); //Add to list
  }
  await iot.SendEventBatchAsync("AAAC", msglst); //Send list as batch


Sending images requires additional setup in IoT Hub. We need to set up an Azure Blob storage account to physically store large images (IoT Hub itself is ideal for receiving many small messages, up to 4 KB).

To do that, after creating IoT Hub, we need to point it to a new (or existing) Azure storage account. New files (images) will be uploaded to the appropriate folders.


Configuration of Azure storage in IoT Hub

Configuration of Azure storage in IoT Hub


Data from each device is saved to the appropriate path in the storage container

Data from each device is saved to the appropriate path in the storage container


To send an image from the MedApp server (where first1 is the name of the blob, and anystream is a stream with the image received from the socket server from device AAAA):

  await iot.UploadToBlobAsync("AAAA", "first1", anystream);


Further processing can be done by using the classic Storage SDK. Moreover, we can trigger additional logic in Azure Functions (binding to blob) or Azure WebJobs.
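As an example of the latter, a minimal Azure Functions sketch (C# script style; the function shape and log text are illustrative assumptions, and the blob path binding in function.json would point at the container configured for IoT Hub file uploads) could react to each newly uploaded image:

```csharp
// run.csx - sketch of a blob-triggered function. IoT Hub file upload stores
// blobs under {deviceId}/{blobName} in the configured container, so the
// trigger path can capture the device ID as part of the blob name.
using System.IO;

public static void Run(Stream image, string name, TraceWriter log)
{
    // Invoked once for every new image a device uploads through IoT Hub.
    log.Info($"New image received: {name} ({image.Length} bytes)");
    // ... further processing: thumbnails, notifications, medical pipeline ...
}
```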

Device twins and physical address

Device twins are used to store information about the device location. In this solution, we have four tags: address, city, country, and notes.

We used an approach similar to the configuration of device ID and keys: a JSON file with all the important details; see the following sample:

  {
    "devicesTags": [
      {
        "objectid": "AAAA",
        "address": "ul. Jedna",
        "city": "Warsaw",
        "country": "Poland",
        "notes": "Left side of building"
      },
      {
        "objectid": "AAAC",
        "address": "ul. Druga",
        "city": "Poznań",
        "country": "Poland",
        "notes": "Inside Costa Caffe"
      },
      {
        "objectid": "AAAB",
        "address": "ul. Trzecia",
        "city": "Wrocław",
        "country": "Poland",
        "notes": "Passage A, near restrooms"
      }
    ]
  }


To save that data inside IoT Hub, we used RegistryManager from IoT Hub. To update tags in twins, we need to:

  1. Read the existing twin (to get the ETag value used to update the record).
  2. Prepare a patch (JSON object), devPatchTags.
  3. Update the twin (UpdateTwinAsync), sending devPatchTags and the ETag value.

If someone else updates the twin between steps 1 and 3, we will get an exception in the last step, because the ETag value will differ from the one cached in step 1.

See the following short snippet:

  Config c = new Config(@"devices.json");
  string tags = @"setupdevices.json";
  RegistryManager rm = RegistryManager.CreateFromConnectionString(ConfigurationManager.AppSettings["ServiceConnection"]);

  var obj = JObject.Parse(File.ReadAllText(tags));
  JArray devices = (JArray)obj["devicesTags"];
  foreach (var item in devices) {
    var devName = c.GetDeviceInfoForObjectId(item["objectid"].ToString()).IotDeviceName;
    var devTwin = await rm.GetTwinAsync(devName); // Step 1
    var devPatchTags = new { //Step 2
      tags = new {
        address = item["address"].ToString(),
        city = item["city"].ToString(),
        country = item["country"].ToString(),
        notes = item["notes"].ToString()
      }
    };
    await rm.UpdateTwinAsync(devName, JsonConvert.SerializeObject(devPatchTags), devTwin.ETag); // Step 3
  }


Routing

When we send a message about a special state (opening the box), the message contains an empty body and additional metadata: a string property with the alert name (open) and the value 1. Therefore, we can set up IoT Hub routing to forward selected messages for further processing in Service Bus.
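The routing rule condition can then match on that application property. A query along these lines should work (hedged; see the IoT Hub message routing query syntax for the exact form):

```
open = '1'
```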


Routing and forwarding to Service Bus topic

Routing and forwarding to Service Bus topic


Service Bus topic definition with single subscription (all messages in this case)

Service Bus topic definition with single subscription


Azure Service Bus is a reliable information-delivery service based on brokered messages with transactional processing. For the actual processing, we used the competing consumer pattern. This mechanism assures us that important messages are correctly processed.

A Service Bus topic can have multiple subscriptions, which means that we can create separate receivers for many medical systems. A single message delivered by IoT Hub routing can trigger many parallel actions.

To communicate with Service Bus, we can use any of these protocols:

  • Advanced Message Queuing Protocol 1.0 (AMQP)
  • Service Bus Messaging Protocol (SBMP)
  • HTTP

AMQP is an open standard application layer protocol maintained by OASIS and widely used in many enterprise solutions; therefore, it is very convenient as an integration broker. For more information, see Best Practices for performance improvements using Service Bus Messaging.


UWP dashboard

Unfortunately, in UWP we are using .NET Core, which does not have full support for Service Bus topics. In the case of Windows Presentation Foundation (WPF) applications based on full .NET, subscriptions for topics can be done by using a standard SDK, such as the following:

  SubscriptionClient m_clientAll = SubscriptionClient.Create("open", "all");

  var m_msg = await m_clientAll.ReceiveAsync();
  if (m_msg != null)
  {
    // Process the brokered message here.
  }


In the case of the MedApp solution, there is a business need to have a dashboard written in UWP (so the code can run on both Windows and HoloLens).

Therefore, we need to implement a REST-based layer. The solution is based on the Service Bus HttpClient sample, adapted to UWP and MedApp requirements.

We need four main elements (see the files BrokerProperties.cs and HttpClientHelper.cs):

  • First, we need a special MessageState enumeration (used as one of the brokered properties):

      // Summary: Enumerates a message state.
      public enum MessageState {
        // Summary: Specifies an active message state.
        Active = 0,
        // Summary: Specifies a deferred message state.
        Deferred = 1,
        // Summary: Specifies the scheduled message state.
        Scheduled = 2
      }
    


  • Next, we need a class for storing the full message, which consists of the body, a set of mandatory brokered properties, and optional custom properties:

      class ServiceBusHttpMessage
      {
          public byte[] body;
          public string location;
          public BrokerProperties brokerProperties;
          public Dictionary<string, object> customProperties;

          public ServiceBusHttpMessage()
          {
              brokerProperties = new BrokerProperties();
              customProperties = new Dictionary<string, object>();
          }
      }
    


    The BrokerProperties class is used for storing mandatory information about a message. We are using DataContract, the built-in .NET serialization. For our solution, the most important properties are MessageId, which contains the ID of the brokered message, and LockToken, which is assigned when the message is received by a reader in PeekLock mode.

          [DataContract]
          class BrokerProperties
          {
              // ...

              [DataMember(EmitDefaultValue = false)]
              public int? DeliveryCount;

              [DataMember(EmitDefaultValue = false)]
              public Guid? LockToken;

              [DataMember(EmitDefaultValue = false)]
              public string MessageId;

              // ...

              public MessageState StateEnum;

              [DataMember(EmitDefaultValue = false)]
              public string State
              {
                  get { return StateEnum.ToString(); }

                  internal set { StateEnum = (MessageState)Enum.Parse(typeof(MessageState), value); }
              }

              // ...
          }
    


  • The HttpClientHelper class wraps the REST calls to Service Bus in easy-to-use functions. Its constructor takes the namespace, SAS key name, and key, and sets up additional headers on the httpClient instance: the Authorization header contains the SAS token, and ContentType is the default encoding format.

  • The GetSasToken method is used by the constructor to generate a SAS token based on the SAS key. Please remember that a token is valid only for a selected period; in our sample, it was 20 minutes. For more information about SAS and authorization, see Service Bus authentication with Shared Access Signatures.


Shared Access policy can be assigned to the entire Service Bus namespace or to a particular topic (or queue)

Shared Access policy for Service Bus


        const string ApiVersion = "&api-version=2012-03"; // API version 2012-03 works with Azure Service Bus and all versions of Service Bus for Windows Server.

        HttpClient httpClient;
        string token;

        // Create HttpClient object, get token, attach token to HttpClient Authorization header.
        public HttpClientHelper(string serviceNamespace, string keyName, string key)
        {
            this.httpClient = new HttpClient();
            this.token = GetSasToken(serviceNamespace, keyName, key);
            httpClient.DefaultRequestHeaders.Add("Authorization", this.token);
            httpClient.DefaultRequestHeaders.Add("ContentType", "application/atom+xml;type=entry;charset=utf-8");
        }

        // Create an SAS token. 
        public string GetSasToken(string uri, string keyName, string key)
        {
            // Set token lifetime to 20 minutes.
            DateTime origin = new DateTime(1970, 1, 1, 0, 0, 0, 0);
            TimeSpan diff = DateTime.Now.ToUniversalTime() - origin;
            uint tokenExpirationTime = Convert.ToUInt32(diff.TotalSeconds) + 20 * 60;

            string stringToSign = Uri.EscapeUriString(uri) + "\n" + tokenExpirationTime;
            HMACSHA256 hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key));

            string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
            string token = String.Format(CultureInfo.InvariantCulture, "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
                Uri.EscapeUriString(uri), Uri.EscapeDataString(signature), tokenExpirationTime, keyName);
            Debug.WriteLine(token);
            return token;
        }


Receiving messages

The Receive method is responsible for getting messages from the Service Bus topic subscription. With AMQP (or MQTT), we get a "callback" when a message arrives. With HTTP, the only option is to wait on an open HTTP connection, hoping a new message arrives within the timeout (60 seconds here). If not, we need to open the HTTP connection again.

  • The first parameter for Receive is an address: https://{servicebusname}.servicebus.windows.net/topicname/subscriptions/subscriptionname

  • The second (deleteMessage) points out whether we want to use ReceiveAndDelete or PeekLock mode.

In Service Bus, we have two ways of receiving messages: destructive read, where information is received and automatically deleted, and PeekLock, where after receiving, we need to explicitly send a delete statement once the message has been processed.

Receive and Delete Message (Destructive Read) uses the DELETE operation (the URI is https://{servicebusname}.servicebus.windows.net/topicname/subscriptions/subscriptionname/messages/head). In our scenario, this approach can't be used because we want confirmation that the message was delivered and processed on the backend system.

If we use the POST operation, as described in Peek-Lock Message (Non-Destructive Read), and a message arrives, we will get and lock the message (and, of course, receive the lock GUID in the LockToken property of ServiceBusHttpMessage.BrokerProperties).

The PeekLock receive mode is much better suited to our scenario. When an ambulance is successfully dispatched, we can delete the message. If some processing generates an exception, we can process the message again. Message locking is only temporary (for a short period); after that, the information is visible again to any reader (hence the name of this processing style: competing consumer).

        public async Task<ServiceBusHttpMessage> Receive(string address, bool deleteMessage)
        {
            // Retrieve message from Service Bus.
            HttpResponseMessage response = null;
            try
            {
                if (deleteMessage)
                {
                    response = await this.httpClient.DeleteAsync(address + "/messages/head?timeout=60");
                }
                else
                {
                    response = await this.httpClient.PostAsync(address + "/messages/head?timeout=60", new ByteArrayContent(new Byte[0]));
                }
                response.EnsureSuccessStatusCode();
            }
            catch (HttpRequestException ex)
            {
                if (deleteMessage)
                {
                    Debug.WriteLine("ReceiveAndDeleteMessage failed: " + ex.Message);
                }
                else
                {
                    Debug.WriteLine("ReceiveMessage failed: " + ex.Message);
                }
            }

            // Check if a message was returned.
            HttpResponseHeaders headers = response.Headers;
            if (!headers.Contains("BrokerProperties"))
            {
                return null;
            }

            // Get message body.
            ServiceBusHttpMessage message = new ServiceBusHttpMessage();
            message.body = await response.Content.ReadAsByteArrayAsync();

            // Deserialize BrokerProperties.
            IEnumerable<string> brokerProperties = headers.GetValues("BrokerProperties");
            DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(BrokerProperties));
            foreach (string key in brokerProperties )
            {
                using (MemoryStream ms = new MemoryStream(Encoding.ASCII.GetBytes(key)))
                {
                    message.brokerProperties = (BrokerProperties)serializer.ReadObject(ms);
                }
            }

            // Get custom properties.
            foreach (var header in headers)
            {
                string key = header.Key;
                if (!key.Equals("Transfer-Encoding") && !key.Equals("BrokerProperties") && !key.Equals("ContentType") && !key.Equals("Location") && !key.Equals("Date") && !key.Equals("Server"))
                {
                    foreach (string value in header.Value)
                    {
                        message.customProperties.Add(key, value);
                    }
                }
            }

            // Get message URI.
            if (headers.Contains("Location"))
            {
                IEnumerable<string> locationProperties = headers.GetValues("Location");
                message.location = locationProperties.FirstOrDefault();
            }
            return message;
        }


Deleting messages

To delete a message, we need to get messageId and LockId (lock token) and send the DELETE command to the correct address: https://{servicebusname}.servicebus.windows.net/topicname/subscriptions/subscriptionname/messages/messageId/locktoken

        // Delete message with the specified MessageId and LockToken.
        public async Task DeleteMessage(string address, string messageId, Guid LockId)
        {
            string messageUri = address + "/messages/" + messageId + "/" + LockId.ToString();
            await DeleteMessage(messageUri);
        }

        // Delete message with the specified URI. The URI is returned in the Location header of the response of the Peek request.
        public async Task DeleteMessage(string messageUri)
        {
            HttpResponseMessage response = null;
            try
            {
                response = await this.httpClient.DeleteAsync(messageUri + "?timeout=60");
                response.EnsureSuccessStatusCode();
            }
            catch (HttpRequestException ex)
            {
                Debug.WriteLine("DeleteMessage failed: " + ex.Message);
                throw; // Rethrow without resetting the stack trace.
            }
        }


Using that helper from the UWP application is pretty straightforward:

First, we need to create an instance of HttpClientHelper, where [servicebusnamespace] is the Service Bus namespace name:

  hc = new HttpClientHelper("[servicebusnamespace]", "RootManageSharedAccessKey", "SharedAccessKey");


To get messages, we need to run a loop (in a separate task or background task, depending on needs), where open is the name of the topic and all is the name of the subscription:

  m_msg = await hc.Receive("https://[servicebusnamespace].servicebus.windows.net/open/subscriptions/all", false);
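A minimal polling loop could look like the following sketch. The waitForMessage name, the m_cts cancellation source, and the UI handling are our assumptions; the helper's Receive call is used in PeekLock mode (second parameter false).

```csharp
// Sketch: background loop that long-polls the subscription until cancelled.
private async Task waitForMessage()
{
    const string address =
        "https://[servicebusnamespace].servicebus.windows.net/open/subscriptions/all";

    while (!m_cts.IsCancellationRequested) // m_cts: a CancellationTokenSource (assumed)
    {
        // Receive returns null when the 60-second long poll times out without a message.
        m_msg = await hc.Receive(address, false);
        if (m_msg == null)
            continue; // No alert yet; poll again.

        // A message arrived: show it and wait for the operator to confirm.
        break;
    }
}
```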


When the message is received, we can check if it was sent by IoT Hub. To do that, we need to check if we have a device ID in the custom property named iothub-connection-device-id. We can then read the twin based on that device ID.

Fortunately for IoT Hub, we have a good SDK library, so reading twins is not complicated:

  if (m_msg.customProperties.ContainsKey("iothub-connection-device-id")) {
    string deviceId = m_msg.customProperties["iothub-connection-device-id"]?.ToString().Replace("\"", String.Empty);
    var deviceTwin = await m_rm.GetTwinAsync(deviceId);
    StringBuilder sb = new StringBuilder();
    sb.AppendLine(deviceTwin.Tags["address"].ToString());
    sb.AppendLine(deviceTwin.Tags["city"].ToString());
    sb.AppendLine(deviceTwin.Tags["country"].ToString());
    sb.AppendLine(deviceTwin.Tags["notes"].ToString());
    txtInfo.Text = sb.ToString();
  }


Information about the location of a device can be shown directly on the dashboard, based on metadata from the device twins. If we need to update that information, it’s enough to update the proper tags.

Next, after processing (for example, dispatching an ambulance), we need to delete the message by calling the DeleteMessage helper:

  try {
    await hc.DeleteMessage("https://[servicebusnamespace].servicebus.windows.net/open/subscriptions/all", m_msg.brokerProperties.MessageId, m_msg.brokerProperties.LockToken.Value);
    txtInfo.Text = "";
  } catch (HttpRequestException) {
    // Too slow, lock timeout; the message was processed on another station
  }
  btnConfirm.IsEnabled = false;
  await Task.Factory.StartNew(waitForMessage);


If we get an exception, it means that someone (a competing consumer) already processed (or is currently processing) the message.

IoT Hub security summary

One of the most important features in Azure IoT Hub is its strong security model. We can explicitly grant permissions by using policies. Regardless of protocol (IoT Hub supports AMQP, MQTT, and HTTP), there are always per-device security credentials. An individual device can use a SAS token, a symmetric key, or even an X.509 certificate. For more information about security, see Control access to IoT Hub.

“One of the key reasons why we decided to use IoT Hub is the security model. Even if right now transmission between the device and our server is based on custom TCP protocol, in the future we can easily add Wi-Fi-based devices and directly communicate with IoT Hub. And we will be able to provide a secure way to address a single device.” —Mateusz Kierepka, CEO, MedApp

What is also crucial from the development perspective is that every step can be automated.

Automating IoT Hub creation

The creation of IoT Hub and devices can be automated by using Azure CLI 2.0.

  1. First, we need to log on to the subscription:

       az login
    
  2. We then select the correct subscription (here, it’s demotest):

      az account set --subscription DEMOTEST
    
  3. Third, we need to create a resource group:

      az group create --name rgMedApp --location westeurope
    
  4. Inside the resource group, create a new instance of IoT Hub (sku S1):

      az iot hub create --name iothMedApp01 --resource-group rgMedApp --sku S1
    


Thanks to the functionality in CLI 2.0, we can also work with elements inside IoT Hub, such as registering new devices:

  az iot device create --hub-name iothMedApp01 --device-id dev01
  az iot device create --hub-name iothMedApp01 --device-id dev02


To get connection details, we can either ask for the full connection string or run a simple query for primaryKey and deviceId (the two values required in the JSON file for our Config class):

  az iot device list --hub-name iothMedApp01 --output table --query "[].{ deviceId: deviceId, primaryKey: authentication.symmetricKey.primaryKey }"


To create the Service Bus topic and subscription, we can use an ARM template. To deploy, we simply need to call:

  az group deployment create --name med01 --resource-group rgMedApp --template-file servicebussubtopic.json --parameters @servicebussubtopic.parameters.json


Our template file (servicebussubtopic.json) looks like this:

  {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "serviceBusNamespaceName": {
        "type": "string",
        "metadata": {
          "description": "Name of the Service Bus Namespace"
        }
      },
      "serviceBusTopicName": {
        "type": "string",
        "metadata": {
          "description": "Name of the Service Bus Topic"
        }
      },
      "serviceBusTopicSubscriptionName": {
        "type": "string",
        "metadata": {
          "description": "Name of the Service Bus Topic Subscription"
        }
      }
    },
    "variables": {
      "sbVersion": "2015-08-01"
    },
    "resources": [
      {
        "apiVersion": "[variables('sbVersion')]",
        "name": "[parameters('serviceBusNamespaceName')]",
        "type": "Microsoft.ServiceBus/namespaces",
        "location": "[resourceGroup().location]",
        "properties": {
        },
        "resources": [
          {
              "apiVersion": "[variables('sbVersion')]",
              "name": "[parameters('serviceBusTopicName')]",
              "type": "Topics",
              "dependsOn": [
                  "[concat('Microsoft.ServiceBus/namespaces/', parameters('serviceBusNamespaceName'))]"
              ],
              "properties": {
                  "path": "[parameters('serviceBusTopicName')]"
              },
              "resources": [
                  {
                      "apiVersion": "[variables('sbVersion')]",
                      "name": "[parameters('serviceBusTopicSubscriptionName')]",
                      "type": "Subscriptions",
                      "dependsOn": [
                          "[parameters('serviceBusTopicName')]"
                      ],
                      "properties": {
                      },
                      "resources": [
                      ]
                  }
              ]
          }
        ]
      }
    ],	
    "outputs": {
    }
  }


And the parameters are (servicebussubtopic.parameters.json):

  {
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "serviceBusNamespaceName": {
        "value": "MedApp01"
      },
      "serviceBusTopicName": {
        "value": "open"
      },
      "serviceBusTopicSubscriptionName": {
        "value": "all"
      }
    }
  }


For more information about creating resources automatically with ARM templates, we can use the Azure portal. Many resources related to ARM templates can also be found at Azure Quickstart Templates.


The Automation script option allows generating ARM templates for existing resources directly from the Azure portal

Automation script


Sample reporting forms


Sample reporting form 1

Reporting Form 1


Sample reporting form 2

Reporting Form 2


Sample reporting form 3

Reporting Form 3


Conclusion

Future plans, going forward

Overall, the MedApp team was very happy with the outcome of our cooperation; we achieved great results in all three categories we addressed.

Most of the solutions from this case study are already being used in production, and the rest of them will be used to improve and evolve their products in the near future.

Our successful experiments with Xamarin.iOS will hopefully push MedApp to migrate all of their apps to Xamarin and C# for easier development and management.

The Mobile DevOps concepts we’ve worked on will greatly improve the development process of MedApp software, and all the reorganization we did in Visual Studio Team Services will make project management much easier.

The IoT solution for AED medical boxes is already moving into the MVP stage, and it has great potential to reach production very soon.

What we learned

We learned a great deal during the hackfest:

  • Xcode Storyboards and XIB files can be imported and reused in Xamarin Studio with Xamarin.iOS projects. The process is pretty simple, but surprisingly there are no tutorials, guides, or documentation about how to do it properly.
  • It was nice to see that Xamarin.UITest and Xamarin Test Recorder worked so well with the Android Java application.
  • Setting up proper DevOps practices and workflows might be very time-consuming at first, but it pays off in the long term.
  • In Visual Studio Team Services, it is easier to manage a single project with multiple repositories than separate projects for each repository.

Source code