
September APEX: Digging Into the Modeling Lifecycle


In Pinnacle’s September 2019 APEX webinar, “Digging Into the Modeling Lifecycle,” we discussed the importance of understanding all of the steps involved in the modeling lifecycle, which extend far beyond model construction itself. Those steps include understanding the business question, anticipating the potential constraints of implementation, and staying ahead of model usage from a change management perspective. Managing those issues well is among the most critical contributors to a model’s success.

The following graphic depicts the key steps of the modeling lifecycle.

[Image: Key steps of the modeling lifecycle]

There are numerous potential applications for predictive modeling in the insurance industry. We polled our APEX audience to see how familiar they were with some of those uses; the results, shown below, indicated a good overall familiarity with the various uses of predictive modeling in insurance.

[Image: Poll results on audience familiarity with predictive modeling applications]

There are many examples of predictive modeling applications within these broad categories. Here are some that we noted in the APEX.

[Image: Examples of predictive modeling applications]

We then attempted to bring the modeling lifecycle to “life” by walking through a retention modeling example, complete with R code snippets and results. The retention example had two objectives: 1) provide a real-life example that echoed our discussion points and reinforced key ideas; and 2) gauge audience interest in future Pinnacle webinars focusing on R programming in an insurance industry context.

Retention analysis, also known as “attrition” or “churn” analysis, was chosen to illustrate an insurance application in the R language because it is a classification problem. As we noted in our webinar, many other insurance applications also fit within the classification framework. Radost highlighted the types of business questions underlying a retention analysis, the various types of attrition, and the different modeling techniques used to tackle those questions.
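As a hedged illustration of that framing (not the webinar’s actual code), the sketch below assumes a hypothetical policy-level data frame named policies with a 0/1 renewal indicator, and recodes the target as a two-level factor so that any classification technique can be applied:

```r
# Hypothetical data frame `policies` with a 0/1 renewal indicator `renewed`;
# the names are illustrative, not the webinar's actual data.
policies$churn <- factor(ifelse(policies$renewed == 1, "no", "yes"),
                         levels = c("no", "yes"))
policies$renewed <- NULL            # drop the raw indicator so it is not reused as a predictor

prop.table(table(policies$churn))   # overall attrition rate
```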

We moved from the retention modeling discussion to a demonstration of specific R functions for data exploration and variable assessment, an important theme throughout our APEX presentation. Numerical and graphical output accompanied each step of the analysis, followed by three statistical models for predicting customer churn: a logistic regression model, a decision tree, and a support vector machine. Prediction accuracy was assessed for all three models using two conceptually different approaches: a hold-out dataset and 10-fold cross-validation. The merits of each validation method were compared from practical and theoretical standpoints.
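The following is a minimal sketch of that workflow, not the code presented in the webinar. It continues the hypothetical policies data frame above, and the rpart, e1071 and caret packages stand in for whichever implementations were actually shown:

```r
library(rpart)   # decision trees
library(e1071)   # support vector machines
library(caret)   # cross-validation helpers

# --- Data exploration and variable assessment ---
str(policies)           # variable types and a preview of values
summary(policies)       # distributions and missing-value counts
table(policies$churn)   # how balanced is the target?

# --- Hold-out validation: 70% train / 30% test ---
set.seed(2019)
train_rows <- sample(nrow(policies), size = 0.7 * nrow(policies))
train_df <- policies[train_rows, ]
test_df  <- policies[-train_rows, ]

# --- Three candidate classifiers ---
fit_glm  <- glm(churn ~ ., data = train_df, family = binomial)
fit_tree <- rpart(churn ~ ., data = train_df, method = "class")
fit_svm  <- svm(churn ~ ., data = train_df)

# --- Prediction accuracy on the hold-out set ---
pred_glm  <- ifelse(predict(fit_glm, test_df, type = "response") > 0.5, "yes", "no")
pred_tree <- predict(fit_tree, test_df, type = "class")
pred_svm  <- predict(fit_svm, test_df)

mean(pred_glm  == test_df$churn)
mean(pred_tree == test_df$churn)
mean(pred_svm  == test_df$churn)

# --- 10-fold cross-validation of the logistic model, via caret ---
cv_glm <- train(churn ~ ., data = policies, method = "glm", family = binomial,
                trControl = trainControl(method = "cv", number = 10))
cv_glm$results   # cross-validated accuracy and kappa
```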

Radost concluded the retention example in R by underscoring the importance of a thorough analysis and outlined different ways of improving upon initial modeling results. Steps to increase the effectiveness of the models were divided into data-related adjustments and model-related enhancements, illustrated below.

[Image: Data-related adjustments and model-related enhancements]
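As a hedged example of each category, again continuing the hypothetical data above rather than reproducing the webinar’s code, a data-related adjustment might cap a heavy-tailed predictor, while a model-related enhancement might let the decision tree grow deeper before choosing a pruning point:

```r
library(rpart)

# Data-related adjustment: cap a heavy-tailed predictor at its 99th percentile.
# `premium` is a hypothetical column of the `train_df` data from the sketch above.
train_df$premium_capped <- pmin(train_df$premium,
                                quantile(train_df$premium, 0.99, na.rm = TRUE))

# Model-related enhancement: grow a deeper tree, then pick a pruning point
# from the cross-validated error table that rpart produces.
fit_tree2 <- rpart(churn ~ ., data = train_df, method = "class",
                   control = rpart.control(cp = 0.001, minsplit = 10))
printcp(fit_tree2)
```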

We also highlighted other models that can further enhance predictive accuracy, including neural networks, random forests, and Bayesian methods. Ideally, a modeler should supplement structured data analysis with unstructured data, for example through text mining and social media analytics, for a business to gain full competitive advantage. We noted an interesting statistic, reported by Gartner in 2017, estimating that approximately 80% of business data is unstructured; that figure captures the untapped potential of available data.
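For instance, a random forest could be fit to the same hypothetical churn data in a few lines of R. This is an illustrative sketch using the randomForest package, not code from the webinar:

```r
library(randomForest)

set.seed(2019)
fit_rf <- randomForest(churn ~ ., data = train_df, ntree = 500, importance = TRUE)

pred_rf <- predict(fit_rf, test_df)
mean(pred_rf == test_df$churn)   # hold-out accuracy, comparable to the earlier models
varImpPlot(fit_rf)               # which variables drive the predictions?
```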

We also shared several best practices to be used at various points of the lifecycle, including ways to ensure data quality, reasons to invest in developing metadata, and validation checks to confirm that the modeling work is correct and the model results are sound.
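As a small, hedged illustration of the data-quality side, checks of this kind can be scripted directly in R; the column names below, such as policy_id, premium and tenure, are hypothetical:

```r
# Basic data-quality and validation checks on the hypothetical `policies` data.
colSums(is.na(policies))                       # missing values by column
sum(duplicated(policies$policy_id))            # duplicate policy keys
range(policies$premium, na.rm = TRUE)          # values outside a plausible range?
stopifnot(all(policies$tenure >= 0, na.rm = TRUE))  # hard validation check
```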

Finally, we concluded by sharing key aspects of implementation and change management. Notably, we discussed the effort required to help end users understand the model, which builds trust and secures buy-in. The increasing use of predictive models in insurance means that regulators need to be engaged earlier in the process, which facilitates speed to market. We also noted the importance of model monitoring: although it appears at the end of the lifecycle, it should not be left solely for consideration at that end point. At times, monitoring may prompt a reconsideration or re-evaluation of business goals or, at the very least, a return to earlier portions of the lifecycle.

Pinnacle thanks all of those who joined us for our September APEX. We would welcome the opportunity to hear from you and discuss how predictive modeling can help your business.
