Work in SAS? Two SAS hacks I use every day (and you can too)!
Hannah Kaufmann


Hannah Kaufmann | October 09, 2019 | Posted in: Blog Posts, Predictive Analytics

Before starting at Pinnacle Actuarial Resources, I had never worked at a consulting firm and wasn’t exactly sure what to expect of the experience. 

But what I have gained as an actuarial analyst over the last two years has been immeasurable. My understanding of concepts and processes, as well as my general knowledge and skills, has grown tremendously. The opportunity to learn and grow professionally has absolutely been one of the best parts of working at Pinnacle, and I have strived to learn from Pinnacle’s other talented professionals. I had no idea when starting my career that I would be exposed to so many great learning and development opportunities from inside and outside the firm.

This past May, for example, I was able to attend the SAS Global Forum in Dallas, Texas. Like many in our industry, I work in SAS software every day; SAS is considered by some to be an industry-standard data and analytics application. I was grateful to attend the sessions in Dallas alongside thousands of other users and to learn so much more about SAS software tools and approaches.

With hundreds of sessions to choose from, it was difficult to narrow down which ones I would benefit from most. I had simply expected to learn a few new procedures to improve everyday processes, or maybe hoped to gain exposure to new (at least new to me) modeling approaches. After three days packed with sessions, I came away with plenty of new material to help me in my work. Two hacks, in particular, were so impactful that I thought others would like to know about them.

Making life simpler with macros

One of the first sessions I attended was called “Macros I Use Every Day (and You Can, Too!).” In SAS, a macro is a named, reusable block of code that the macro processor expands wherever it is invoked, so a single call can replace a repetitive series of statements. My hope for this particular session was to get exposure to different types of macros and perhaps walk away with a few ideas for implementing new macros in my code. The presentation included step-by-step code showing how to implement the example macros.
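As a minimal illustration of the idea (my own sketch, not code from the session), a macro lets you wrap a repeated step behind a single call:

```sas
/* Hypothetical example: print a frequency table for any variable
   with one macro call instead of rewriting PROC FREQ each time. */
%macro freq_table(ds, var);
    proc freq data=&ds.;
        tables &var.;
    run;
%mend freq_table;

/* Reuse the same logic for different variables in a SAS sample data set. */
%freq_table(sashelp.class, sex);
%freq_table(sashelp.class, age);
```

The data set and macro name here are just for illustration; the point is that each `%freq_table` call expands into a full PROC FREQ step.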

One of the macros that really stood out was titled “%EXPO_ROUND(VAR).” Its purpose was to round numbers to a specified place value; for example, it could round 5,329 to 5,300. Naturally, this is something that can be very useful in the mapping, or binning, process done for ratemaking projects. The macro can be adjusted for use with variables such as age, credit score, mileage, etc.
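A macro along these lines might look like the following sketch. The parameter names, the default rounding unit and the data set are my own assumptions, not the presenter’s code; SAS’s built-in ROUND function does the actual work:

```sas
/* Hypothetical sketch of an EXPO_ROUND-style macro: round a variable
   to a chosen unit (nearest 100 by default). */
%macro expo_round(var, unit=100);
    round(&var., &unit.)
%mend expo_round;

data binned;
    set policies;   /* hypothetical input data set */
    /* e.g., a mileage of 5329 becomes 5300 */
    mileage_bin = %expo_round(mileage, unit=100);
run;
```

Because the macro expands to an expression, it can be dropped into any DATA step assignment, which is what makes it reusable across mapping code.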

Implementing this one macro in the mapping code can simplify significant portions of the code and make it easier to review. The macro also creates a useful base for future projects: it can serve as a starting point for some of the judgments that need to be made when starting a new project or teaching a new analyst.


Tackling multicollinearity

Another session I found particularly interesting was titled “Regularization Techniques for Multicollinearity: Lasso, Ridge, and Elastic Nets.” Before this presentation, I had only briefly been introduced to these topics while studying for actuarial exams.

This session explained the statistical concept of multicollinearity and a few ways it can be detected. Multicollinearity occurs when there is strong correlation among two or more predictor variables in an analysis, which can make variables redundant and compromise the integrity of the results.

One approach to detecting multicollinearity is the variance inflation factor (VIF), which measures how much the variance of an estimated regression coefficient is inflated by the presence of multicollinearity; a predictor’s VIF equals 1/(1 − R²), where R² comes from regressing that predictor on all the others. As someone who had never formally tested for multicollinearity, I am most likely to start with VIF because it comes with standard rules of thumb for good or bad results: a VIF above 5 or 10 indicates collinearity in the model that should be addressed.

One of the biggest challenges I encounter with predictive analytics projects is having the background knowledge to judge whether a result is good or bad. The VIF approach helps me understand where that line is drawn and improves my understanding of what might cause collinearity in the future. Calculating VIF in SAS is straightforward and produces output that is easy to interpret: PROC REG includes an option to calculate the VIF for a set of variables in the desired data set. Having the capability to run this simple step can help you (and me) better understand common variables and the interactions they may have with other variables.
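Requesting VIFs in PROC REG is a one-option change on the MODEL statement. In this sketch the data set and variable names are hypothetical; only the `VIF` option itself is the point:

```sas
/* Sketch: request variance inflation factors alongside the regression.
   The VIF for each predictor appears in the parameter estimates table. */
proc reg data=ratemaking;   /* hypothetical data set */
    model loss_ratio = age credit_score mileage / vif;
run;
quit;
```

Predictors whose reported VIF exceeds the usual thresholds of 5 or 10 are candidates for removal, combination or regularization.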

I hope you find these two hacks as helpful as I have.

Before this development opportunity at the SAS event, I knew that I was not using a tool I work with every day to its full potential. I had assumed it would be difficult for me to get closer to tapping that potential. Happily, I was wrong. The experience underscores the value of taking advantage of every learning and development opportunity that comes your way. With just a few learnings at an industry event, I walked away with useful information to improve my approach, processes and projects, as well as the ease and quality of my day-to-day work.


Hannah Kaufmann is an Actuarial Analyst II with Pinnacle Actuarial Resources, Inc. in the Bloomington, IL office. She holds a Bachelor of Science degree in Actuarial Science from the University of Illinois at Urbana-Champaign. Hannah has experience in assignments involving predictive analytics, loss reserving and group captives. She is actively pursuing membership in the Casualty Actuarial Society (CAS) through the examination process.
