Loan Credit Risk

Typical Workflow


When a financial institution examines a request for a loan, it is crucial to assess the risk of default to determine whether to grant it. This solution is based on simulated data for a small personal loan financial institution, containing the borrower's financial history as well as information about the requested loan.

This solution package shows how to pre-process data (cleaning and feature engineering), train prediction models, and perform scoring on the SQL Server machine.

To demonstrate a typical workflow, we’ll introduce you to a few personas. You can follow along by performing the same steps for each persona.

Step 1: Server Setup and Configuration with Danny the DB Analyst


Let me introduce you to Danny, the Database Analyst. Danny is the main contact for anything regarding the SQL Server database that contains borrower and loan data.

Danny was responsible for installing and configuring the SQL Server. He has added a user named rdemo with all the necessary permissions to execute R scripts on the server and modify the Loans_R database. This was done through the create_user.sql file.

This step has already been done on the VM you deployed using the 'Deploy to Azure' button on the Quick start page.

Step 2: Data Prep and Modeling with Debra the Data Scientist


Now let’s meet Debra, the Data Scientist. Debra’s job is to use historical data to build a model that predicts the risk of future loans. Debra prefers to develop her models in R and SQL. She uses Microsoft ML Services with SQL Server 2017 because it can process large datasets without the memory constraints of open-source R.
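
Because the heavy computation happens in the database, Debra points her R session at SQL Server before running the steps below. Here is a minimal sketch of that setup; the connection values are the assumed defaults described on this page (server localhost, user rdemo, database Loans_R), so substitute your own:

    library(RevoScaleR)

    # Connection to the Loans_R database; these are the assumed VM defaults.
    connection_string <- paste0("Driver=SQL Server;Server=localhost;",
                                "Database=Loans_R;UID=rdemo;PWD=<rdemo password>")

    # Run subsequent rx* computations inside SQL Server rather than in
    # local R memory.
    rxSetComputeContext(RxInSqlServer(connectionString = connection_string))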

Debra chooses to train a logistic regression, a standard model in the credit scoring industry. Unlike more complex models such as random forests or neural networks, it is easy to interpret through the simple scoring formula generated during training.
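
That "simple formula" is just a weighted sum of the input variables passed through the logistic function, P(bad) = 1 / (1 + exp(-(b0 + b1*x1 + ...))). The toy example below (fabricated data, not the solution's) shows how the coefficients of a fitted model can be read off directly:

    # Toy data only, for illustration: 200 fake loans.
    set.seed(42)
    toy <- data.frame(loanAmount = runif(200, 1000, 20000),
                      isBad      = rbinom(200, 1, 0.2))

    # The fitted coefficients define the scoring formula explicitly.
    fit <- glm(isBad ~ loanAmount, data = toy, family = binomial)
    coef(fit)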

Debra will work on her own machine, using R Client to execute these R scripts. R Client is already installed on the VM. She will also use an IDE to run R.

Now that Debra's environment is set up, she opens her IDE and creates a project. To follow along with her, open the Loans/R directory on the VM desktop. There you will see three files named Loan Credit Risk:

  • If you use Visual Studio, double-click on the Visual Studio SLN file.
  • If you use RStudio, double-click on the "R Project" file.

You are ready to follow along with Debra as she creates the scripts needed for this solution. If you are using Visual Studio, you will see these files in the Solution Explorer tab on the right. In RStudio, the files can be found in the Files tab, also on the right.

  • modeling_main.R defines the inputs and calls each of the steps described below (a sketch of these inputs follows this list). The inputs are pre-populated with the default values created for a VM deployed using the 'Deploy to Azure' button on the Quick start page. You must change the values accordingly for your implementation if you are not using the default server (localhost represents a server on the same machine as the R code). If you are connecting to an Azure VM from a different machine, the server name can be found in the Azure Portal under the "Network interfaces" section - use the Public IP Address as the server name.
  • To run all the steps described below, open and execute the file modeling_main.R. You may see some warnings regarding rxClose(). You can ignore these warnings.
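
A sketch of what those inputs look like is below. The variable names are illustrative and may differ from the actual names in modeling_main.R, so edit the real file rather than copying this verbatim:

    # Illustrative only -- check modeling_main.R for the actual names.
    # "localhost" works when R runs on the same machine as SQL Server;
    # from another machine, use the VM's Public IP Address instead.
    server_name <- "localhost"
    db_name     <- "Loans_R"
    user        <- "rdemo"
    password    <- "<rdemo password>"
    connection_string <- sprintf(
      "Driver=SQL Server;Server=%s;Database=%s;UID=%s;PWD=%s",
      server_name, db_name, user, password)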

Below is a summary of the individual steps invoked when running the main script.

  1. The first few steps prepare the data for training.
    • step1_preprocessing.R: Uploads data and performs preprocessing steps -- merging of the input data sets and missing value treatment.
    • step2_feature_engineering.R: Creates the label isBad based on the status of the loan, splits the cleaned data set into a Training and a Testing set, and bucketizes all the numeric variables, based on Conditional Inference Trees on the Training set.
  2. step3_train_score_evaluate.R will train a logistic regression classification model on the training set, and save it to SQL. In development mode, this script then scores the logistic regression on the test set and evaluates the tested model (a condensed sketch of this step follows this list). In production mode, the entire input data is used and no evaluation is performed.

  3. Finally, step4_operational_metrics.R computes the expected bad rate for various classification decision thresholds and applies a score transformation based on operational metrics.

  4. After step4, the development script runs copy_dev_to_prod.R to copy the model information from the dev folder to the prod folder for use in production or web deployment.

  5. A summary of this process and all the files involved is described in more detail on the For the Data Scientist page.
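
The condensed sketch below shows the core of the train/score/evaluate step. The feature names and the train_set/test_set data frames are assumptions for illustration; the real script builds its formula from the bucketized variables created in step 2:

    library(RevoScaleR)

    # Train a logistic regression on the training set (feature names assumed).
    logit_model <- rxLogit(isBad ~ loanAmount + interestRate + creditScore,
                           data = train_set)

    # Development mode: score the test set ...
    predictions <- rxPredict(logit_model, data = test_set,
                             predVarNames = "score", type = "response")

    # ... and evaluate the model with the area under the ROC curve.
    roc <- rxRoc(actualVarName = "isBad", predVarNames = "score",
                 data = cbind(test_set, predictions))
    rxAuc(roc)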

Step 3: Operationalize with Debra and Danny


Debra has completed her tasks. She has connected to the SQL database and executed code from her R IDE that pushed execution (in part) to the SQL Server machine to create and transform scores. She has also created a summary dashboard which she will hand off to Bernie - see below.

While this task is complete for the current set of borrowers, we will need to score new loans on an ongoing basis. Instead of going back to Debra each time, Danny can operationalize the code in T-SQL files which he can then run himself whenever new loans appear. Debra hands over her scripts to Danny, who adds the code to the database as stored procedures using embedded R code or SQL queries. You can see these procedures by logging into SSMS and opening the Programmability > Stored Procedures section of the Loans database.
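
Once the stored procedures are in place, any client that can reach the database can trigger scoring. For example, here is a minimal sketch of invoking one from R over ODBC; the procedure name is hypothetical, so substitute one of the names you see under Programmability > Stored Procedures:

    library(RODBC)

    # Assumed VM defaults; substitute your own server and credentials.
    ch <- odbcDriverConnect(paste0("Driver=SQL Server;Server=localhost;",
                                   "Database=Loans_R;UID=rdemo;PWD=<rdemo password>"))

    # "score_loans" is a hypothetical name -- use an actual stored
    # procedure from the Loans database.
    sqlQuery(ch, "EXEC dbo.score_loans")
    odbcClose(ch)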

Log into SSMS with SQL Server Authentication.

You can find these scripts in the SQLR directory, and execute them yourself by following the PowerShell Instructions. As noted earlier, this was already executed when your VM was first created.

Step 4: Deploy and Visualize with Bernie the Business Analyst


Now that the predictions are created we will meet our last persona - Bernie, the Business Analyst. Bernie will use the Power BI Dashboard to examine the test data to find an appropriate score cutoff, then use that cutoff value for a new set of loan applicants.

Test Data Tab

The output scores from the model have been binned according to percentiles: the higher the percentile, the higher the likelihood of default. Bernie uses the slider showing these percentiles at the top right to find a suitable level of risk for extending a loan. He sets the slider to 80-99 to show the top 20% of scores. The cutpoint value for this selection is shown in yellow, and he uses this score cutpoint to classify new loans - predicted scores higher than this number will be rejected.
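
The logic behind the slider can be sketched in a few lines of R. Here scores (test-set predicted probabilities) and new_scores (scored new loans) are assumed vectors, for illustration only:

    # Percentiles of the predicted scores.
    percentiles <- quantile(scores, probs = seq(0, 0.99, by = 0.01))

    # Slider at 80-99 keeps the top 20% of scores; the 80th percentile
    # is the cutpoint shown in yellow.
    cutpoint <- percentiles["80%"]

    # New loans scoring above the cutpoint are rejected.
    reject <- new_scores > cutpoint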

The Loan Summary table divides the loans classified as bad into two groups: those that were indeed bad (Bad Loan = Yes) and those that were in fact good although they were classified as bad (Bad Loan = No). For each of these two categories, the table shows the number, total and average amount, and average interest rate of the loans. This allows you to see the expected impact of choosing this cutoff value.
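
As a rough sketch, the same breakdown could be reproduced from the scored test data; the data frame and column names (test_scored, score, badLoan, amount, interestRate) are assumptions for illustration:

    # Loans the cutoff would classify as bad.
    rejected <- subset(test_scored, score > cutpoint)

    # Split into actually-bad vs. actually-good, then summarize each group:
    # count, total amount, average amount, average interest rate.
    by(rejected, rejected$badLoan, function(g)
      c(n         = nrow(g),
        total_amt = sum(g$amount),
        avg_amt   = mean(g$amount),
        avg_rate  = mean(g$interestRate)))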

New Loans Tab

Now Bernie switches to the New Loans tab to view some scored potential loans. He uses the cutoff value from the first tab and views information about these loans. He sees he will reject 9 of the 22 potential loans based on this criterion.