
Contributed by Stephen Penrice. Stephen took the NYC Data Science Academy 12-week full-time Data Science Bootcamp program from September 23 to December 18, 2015. This post is based on *his second class project (due in the fourth week of the program).*

I see Shiny as a tool for quickly and easily prototyping models without having to call in a development team. So I wanted to create an app that does non-trivial calculations on a set of user inputs and returns the results in a visual format. Modern Portfolio Theory, developed by Harry Markowitz in the 1950s, fit my requirements. I was also motivated by a desire to create an intuitive tool to help people understand the theory, something I wish I had had when I entered finance.

Modern Portfolio Theory begins with the assumption that returns on any risky asset are normally distributed. This in turn implies that any portfolio of risky assets has normally distributed returns, since a portfolio is simply a linear combination of individual assets. A normal distribution is described by two parameters, its mean and standard deviation, which can be graphed in a plane. Customarily in finance the standard deviation is referred to as volatility (or vol) and graphed on the horizontal axis, while the mean (or expected return) is graphed on the vertical axis. Modern Portfolio Theory states that for any set of risky assets, the points corresponding to all possible portfolios of those assets fall within a region bounded by a hyperbola (referred to as the “Efficient Frontier” or, less commonly, the “Markowitz Bullet”). The optimal portfolios are those whose points lie on the upper boundary of this region, since they represent the best possible expected return for a given level of volatility, assuming the investor holds only risky assets.
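To make the two parameters concrete, here is a small sketch of how a portfolio's expected return and volatility follow from its weights, the assets' mean vector, and their covariance matrix. All numbers are hypothetical toy inputs of my own, not data from the app:

```r
# Toy two-asset example (hypothetical annualized figures, not app data)
mu   <- c(0.08, 0.12)        # expected returns
vols <- c(0.15, 0.25)        # volatilities (standard deviations)
rho  <- 0.3                  # correlation between the two assets
covMat <- diag(vols) %*% matrix(c(1, rho, rho, 1), 2, 2) %*% diag(vols)

w <- c(0.6, 0.4)             # portfolio weights, summing to 1
port_mean <- sum(w * mu)                        # linear in the weights
port_vol  <- sqrt(drop(t(w) %*% covMat %*% w))  # quadratic form
# port_mean is 0.096; port_vol is about 0.153, less than the weighted
# average of the individual volatilities -- the diversification effect
```

Because the portfolio mean is linear in the weights while the variance is quadratic, mixing imperfectly correlated assets bends the set of attainable (volatility, return) points into the bullet shape described above.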

A wider range of portfolios becomes possible when an investor makes use of a risk-free asset (i.e. cash), either reducing volatility and return by holding cash or increasing them by borrowing cash to buy on margin. In this case, all the optimal portfolios lie on the line that crosses the vertical axis at the risk-free rate r (the point corresponding to an all-cash portfolio, with expected return r and volatility 0) and is tangent to the hyperbola.

Granted, this discussion is hard to follow without a visual, and hence the motivation for the Shiny app, which you can access here.

The app has a menu of checkboxes allowing the user to select assets from a list of 16 publicly traded stocks. Once the stocks are selected, pressing the “Calculate Efficient Frontier” button displays the Efficient Frontier along with the Capital Allocation Line. The Efficient Frontier is the curve that shows, for every possible expected return, the minimum volatility among the portfolios that have that expected return. The only portfolios of the given assets that are of interest are the ones corresponding to points on the upper half of the Efficient Frontier: if a portfolio’s volatility and expected return lie in the interior of the “Markowitz Bullet,” then there is a portfolio on the upper boundary that has the same volatility but a higher expected return.

One thing to play around with in the app is the mix of assets. For example, you can see that the left end of the bullet moves to lower volatility as you add assets. This demonstrates the concept that diversified portfolios are less risky than ones concentrated in a few assets. Notice that there is an option near the top of the page to “Hide Capital Allocation Line,” which may be helpful when exploring the effects of diversification on volatility.

But what is the Capital Allocation Line all about? It represents the volatility and expected return of optimal portfolios that include cash as well as risky assets. (Finance theory calls cash the “risk-free” asset, and the “risk-free rate” is the interest rate that an investor receives/pays to save/borrow cash.) An investor can lower portfolio volatility by holding cash, or increase volatility by borrowing money. (Buying stocks with borrowed money amplifies both positive and negative returns. Suppose you buy $100 of a stock using $50 of your own money and $50 of borrowed money. A $5 movement in the stock’s price represents a 5% change in price but a 10% profit or loss on your investment.)
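The margin arithmetic in that parenthetical can be spelled out directly, using the same numbers as the text:

```r
equity   <- 50                   # investor's own money
borrowed <- 50                   # margin loan
position <- equity + borrowed    # $100 of stock
price_change <- 0.05             # stock moves 5%
pnl <- position * price_change   # $5 gain (or loss if the move is down)
return_on_equity <- pnl / equity
# A 5% move in the stock is a 10% move on the investor's own capital
```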

The Capital Allocation Line is defined by two points. One corresponds to the most conservative portfolio possible, consisting entirely of cash. This portfolio has zero volatility, its expected return is the risk-free rate, and it is optimal because it is the only portfolio with zero volatility. The other is the point where the line through the all-cash point is tangent to the Efficient Frontier; we can call the corresponding portfolio the “Tangent Portfolio.” For any given level of volatility, the optimal portfolio is some mixture of cash and the Tangent Portfolio, and the point representing its volatility and expected return lies on the Capital Allocation Line.
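Any mix of cash and the Tangent Portfolio does indeed trace out this line, as a short sketch shows. The risk-free rate and the Tangent Portfolio's statistics below are hypothetical numbers, not output from the app:

```r
rf      <- 0.02    # risk-free rate (assumed)
mu_tan  <- 0.10    # Tangent Portfolio expected return (assumed)
vol_tan <- 0.18    # Tangent Portfolio volatility (assumed)

w <- seq(0, 1.5, by = 0.25)         # fraction in the Tangent Portfolio;
                                    # w > 1 means borrowing cash
mix_ret <- rf + w * (mu_tan - rf)   # expected return of each mix
mix_vol <- w * vol_tan              # cash contributes zero volatility
# Every (mix_vol, mix_ret) pair satisfies
#   mix_ret == rf + mix_vol * (mu_tan - rf) / vol_tan,
# i.e. all the mixes lie on the Capital Allocation Line
```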

The app illustrates these ideas nicely. The user can adjust the “Volatility Target” slider at the bottom of the page and watch the dot representing the optimal portfolio move up and down the Capital Allocation Line. The app also displays the optimal allocation (expressed as a percentage of the investor’s capital) given a choice of assets, a volatility target, and a risk-free rate. If you adjust the volatility target, you may notice that while the stock allocations change, their relative sizes remain fixed. That is because in any optimal portfolio the proportions of the risky assets are the same as in the Tangent Portfolio.
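The fixed relative sizes reflect a standard result sometimes called two-fund separation: without a long-only constraint, the risky part of every optimal portfolio is a scalar multiple of Σ⁻¹(μ − r·1). A sketch with hypothetical two-asset inputs (not the app's data):

```r
# Two-fund separation sketch: the direction of the risky allocation is
# solve(covMat) %*% (means - rf); scaling it up or down changes the
# portfolio's volatility but not the relative stock weights.
means  <- c(0.08, 0.12)
covMat <- matrix(c(0.0225, 0.01125, 0.01125, 0.0625), 2, 2)
rf     <- 0.02
d <- drop(solve(covMat) %*% (means - rf))
low_vol_weights  <- 0.5 * d / sum(d)   # half of capital in risky assets
high_vol_weights <- 1.5 * d / sum(d)   # levered risky allocation
# The ratio between the two stocks' allocations is identical in both
```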

Note that a portfolio might include negative allocations to cash or stocks. Negative cash means borrowing to finance asset purchases, and a negative stock allocation means “shorting,” i.e. borrowing a stock, selling it, and investing the proceeds in the hope that the price will go down, allowing the investor to repurchase it at the lower price and keep the difference after returning the borrowed shares.

It’s also interesting to think about what happens when the risk-free rate changes but the investor’s volatility target remains the same. If interest rates go up, the investor has an incentive to hold more cash. But this will decrease the portfolio’s volatility, so in order to compensate, the mix of stocks needs to be riskier. The app illustrates this as well. If you increase the risk-free rate using the slider, you will see both an increase in the “Tangent Portfolio Volatility” (calculated at the bottom of the page) and a rightward shift in the point of tangency on the Efficient Frontier. Likewise, a decrease in the risk-free rate creates an incentive to borrow, resulting in a need for a less risky mix of stocks, which can be seen in both the graph and the bar chart when you move the risk-free rate slider to the left. (A particularly good starting point for this exploration is to select Novartis, Merck, and Eli Lilly and set the target volatility to 21.5%. Remember to click the “Calculate Efficient Frontier” button!)

Using the “Long Only” checkbox at the top of the page, the user can also see the effect of restricting to long-only portfolios, i.e. portfolios that don’t make use of borrowed cash or shorted stocks.

Now let’s look at a few technical aspects of the app.

The interface has some features that help to prevent problematic inputs. If the user selects fewer than 2 assets, a message will be displayed indicating that at least two assets must be selected; this prevents an error in the function that calculates the optimal portfolios.

The range of values available on the risk-free rate slider is determined by the choice of assets. Since the efficient frontier is a hyperbola, it has an asymptote that the upper part of the frontier approaches as the volatility goes to infinity. The slider prevents the risk-free rate from exceeding the y-intercept of this asymptote.
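One way that cap could be computed (a sketch under my own assumptions; the app's code for this step isn't shown here): for the frontier hyperbola, the asymptote's y-intercept works out to B/A, where A = 1′Σ⁻¹1 and B = 1′Σ⁻¹μ, which is also the expected return of the global minimum-variance portfolio.

```r
# Y-intercept of the efficient frontier's asymptote -- equivalently, the
# expected return of the global minimum-variance portfolio (B/A)
asymptoteIntercept <- function(means, covMat){
  ones <- rep(1, length(means))
  Sinv <- solve(covMat)
  A <- drop(t(ones) %*% Sinv %*% ones)
  B <- drop(t(ones) %*% Sinv %*% means)
  B / A
}

# Hypothetical two-asset inputs for illustration
asymptoteIntercept(c(0.08, 0.12),
                   matrix(c(0.0225, 0.01125, 0.01125, 0.0625), 2, 2))
# -> 0.0872
```

This makes intuitive sense: a tangency from a point at or above the asymptote's intercept would never touch the upper branch of the frontier, so the slider has to stay below it.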

In order to keep the focus of this project on the model, I elected to choose a small set of assets, collect the return data one time, and store the data in an Rds file. For this I used the quantmod package. Specifically, I downloaded monthly returns for the 16 stocks from January 2007 to October 2015. These returns are stored in “long” format comprising three columns: the return, the month, and the company’s ticker symbol.

```r
library(dplyr)
library(quantmod)
library(tidyr)
library(stringr)

# Download monthly returns for one ticker and tag each row with the symbol
getReturns <- function(ticker){
  getSymbols(ticker)
  temp.returns <- periodReturn(get(ticker), period = 'monthly')
  temp.df <- data.frame(temp.returns)
  temp.df$date <- row.names(temp.df)
  temp.df$ticker <- ticker
  return(temp.df)
}

# Stack the per-ticker data frames into one "long" data frame
portfolioReturns <- function(tickers){
  dfs <- lapply(tickers, getReturns)
  stacked <- NULL
  for(i in 1:length(dfs))
    stacked <- rbind(stacked, dfs[[i]])
  return(stacked)
}

full.list <- portfolioReturns(c('PFE','NVS','MRK','LLY',
                                'GS','JPM','MS','PNC',
                                'TWX','CMCSA','DIS','DISCA',
                                'WMT','TGT','HD','COST'))

# Keep only the year and month from the date string, e.g. "2007-01"
full.list$date <- str_sub(full.list$date, 1, 7)
saveRDS(full.list, "eff_front_app/full_list.rds")
```

The model’s heavy lifting is done by the quadprog package. I used the following code to calculate the mean vector and covariance matrix from historical returns (after putting them in a “wide” data frame with a column for each ticker) and then calculate the minimum variance portfolio for a given target return. (The server calls these functions, as will be explained below.)

```r
library(quadprog)

# Annualized covariance matrix for the selected tickers
assetCov <- function(stacked, tickers){
  if(is.null(stacked))
    return(NULL)
  else {
    selected <- stacked[sapply(stacked$ticker, function(x) x %in% tickers),] %>%
      spread(., ticker, monthly.returns)
    covMat <- 12*cov(selected[,-1], use = "complete.obs")
    return(covMat)
  }
}

# Annualized mean return vector for the selected tickers
assetMean <- function(stacked, tickers){
  if(is.null(stacked))
    return(NULL)
  else {
    selected <- stacked[sapply(stacked$ticker, function(x) x %in% tickers),] %>%
      spread(., ticker, monthly.returns)
    meanVector <- colMeans(selected[,-1])
    return(12*array(meanVector))
  }
}

# Minimum-variance portfolio for a given target return, via quadratic
# programming; the constraints force the weights to sum to 1 and the
# portfolio mean to hit the target
mean.var.opt <- function(means, covMat, target, longOnly = FALSE){
  Dmat <- 2*covMat
  dvec <- matrix(0, nrow(covMat), 1)
  Amat <- cbind(rep(1, nrow(covMat)), means)
  bvec <- matrix(c(1, target), 1, 2)
  if(longOnly){
    Amat <- cbind(Amat, diag(nrow(covMat)))
    bvec <- cbind(bvec, matrix(0, 1, nrow(covMat)))
  }
  return(solve.QP(Dmat, dvec, Amat, bvec = bvec, meq = 2))
}
```

Note that `mean.var.opt` has an optional `longOnly` flag that adds non-negativity constraints.

For the most part the server acts as an intermediary between the interface and the portfolio optimizer. One of the more interesting pieces of server code is the function that calculates the Efficient Frontier.

```r
covMat <- reactive({assetCov(full.list, portfolio())})
assetMns <- reactive({assetMean(full.list, portfolio())})

riskyEF <- reactive({
  # With the long-only constraint, skip negative target returns when an
  # asset has a negative mean; otherwise start the sweep at the lowest
  # asset mean
  if(longOnlyFlag() & min(assetMns()) < 0)
    minmean <- 0
  else
    minmean <- min(assetMns())
  mult <- ifelse(longOnlyFlag(), 1, 2)
  mns <- seq(minmean, mult*max(assetMns()), by = 0.001)
  # Minimum volatility for each target return; solve.QP's second element
  # is the optimal objective value, i.e. the portfolio variance
  sds <- sapply(mns, function(x) mean.var.opt(assetMns(), covMat(), x, longOnlyFlag())[[2]]^0.5)
  data.frame(expRet = mns, Vol = sds, curve = "EF")
})
```

This function sets maximum and minimum return targets (which depend on the selected assets and whether the long-only option is selected) and creates a sequence of values extending from the minimum to the maximum in steps of 0.1%. Then the server uses sapply to call the optimizer on each of these target inputs to obtain the corresponding minimum volatility. The resulting points are stored in a data frame so they can be easily plotted with ggplot2.
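For the unconstrained case, the same sweep can be checked against the closed-form frontier (a sketch of the underlying math, not the app's code): the minimum variance at target return m is (Am² − 2Bm + C)/(AC − B²), with A = 1′Σ⁻¹1, B = 1′Σ⁻¹μ, and C = μ′Σ⁻¹μ.

```r
# Closed-form minimum volatility at target return m (no long-only constraint)
frontierVol <- function(m, means, covMat){
  ones <- rep(1, length(means))
  Sinv <- solve(covMat)
  A <- drop(t(ones) %*% Sinv %*% ones)
  B <- drop(t(ones) %*% Sinv %*% means)
  C <- drop(t(means) %*% Sinv %*% means)
  sqrt((A*m^2 - 2*B*m + C) / (A*C - B^2))
}

# Hypothetical two-asset inputs; sweep targets just like the server does
means  <- c(0.08, 0.12)
covMat <- matrix(c(0.0225, 0.01125, 0.01125, 0.0625), 2, 2)
mns <- seq(min(means), 2*max(means), by = 0.001)
sds <- sapply(mns, frontierVol, means = means, covMat = covMat)
# The lowest point of the curve is the global minimum-variance portfolio,
# whose variance is 1/A
```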

This app could be enhanced with more options for user input. Obviously, it could provide more asset options and gather return data on demand. More fundamentally, it could provide the user with the capability to specify a mean vector and covariance matrix based on his or her views, rather than taking historical returns as the best estimate of future performance. There could also be greater flexibility in specifying constraints, for example putting a cap on the proportion invested in a certain company or industry. In any case, the app provides a tool for visually exploring an important piece of finance theory.


Posted 10 May 2021

