The basic objective is to make the target as close as possible to the ideal alpha/signal. To this end, we can either maximize a similarity measure or minimize a distance measure.
Since the ideal target alpha/signal, denoted $\boldsymbol{\alpha}^o$, is in effect our best prediction of asset returns, this objective resembles the return-maximization part of the Markowitz portfolio optimization problem, i.e., $\min_{\mathbf{w}} -\boldsymbol{\mu}^\top\mathbf{w}$.
Expanding the squared distance $\frac{1}{2}\|\boldsymbol{\alpha}-\boldsymbol{\alpha}^o\|_2^2$ and dropping the constant term $\frac{1}{2}\boldsymbol{\alpha}^{o\top}\boldsymbol{\alpha}^o$ yields $\min_{\boldsymbol{\alpha}}\frac{1}{2}\boldsymbol{\alpha}^\top\boldsymbol{\alpha}-\boldsymbol{\alpha}^{o\top}\boldsymbol{\alpha}$, which combines the similarity-maximization term with the regularization term $\frac{1}{2}\boldsymbol{\alpha}^\top\boldsymbol{\alpha}=\frac{1}{2}\|\boldsymbol{\alpha}\|_2^2$.
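As a quick numerical sanity check (a numpy sketch with randomly generated toy data), the expanded quadratic objective differs from the squared distance only by a constant, so both are minimized at $\boldsymbol{\alpha} = \boldsymbol{\alpha}^o$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
alpha_o = rng.normal(size=n)  # ideal target alpha (toy data)

def distance_obj(alpha):
    # 1/2 * ||alpha - alpha_o||_2^2
    return 0.5 * np.sum((alpha - alpha_o) ** 2)

def expanded_obj(alpha):
    # 1/2 * alpha^T alpha - alpha_o^T alpha (constant term dropped)
    return 0.5 * alpha @ alpha - alpha_o @ alpha

# The two objectives differ by the constant 1/2 * ||alpha_o||^2 ...
alpha = rng.normal(size=n)
const = 0.5 * alpha_o @ alpha_o
assert np.isclose(distance_obj(alpha), expanded_obj(alpha) + const)
# ... so both are minimized at alpha = alpha_o.
```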
Alternatively, if you want to weight the dot-product (similarity) term against the regularization, you may introduce a regularization parameter $\lambda \in [0, 1]$:
where $\mathbf{S}$ is the specific risk matrix, which is diagonal. (The total covariance here follows a factor risk model, $\boldsymbol{\Sigma} = \mathbf{B}^\top\boldsymbol{\Omega}\mathbf{B} + \mathbf{S}$, with $\mathbf{B}$ the $k \times n$ factor loadings matrix and $\boldsymbol{\Omega}$ the factor covariance matrix.)
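Assembling the total covariance under the usual factor risk model $\boldsymbol{\Sigma} = \mathbf{B}^\top\boldsymbol{\Omega}\mathbf{B} + \mathbf{S}$ might look like the following numpy sketch (toy sizes and randomly generated inputs, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 3, 8  # factors, assets (toy sizes)

B = rng.normal(size=(k, n))             # factor loadings, k x n
A = rng.normal(size=(k, k))
Omega = A @ A.T + k * np.eye(k)         # factor covariance, made positive definite
S = np.diag(rng.uniform(0.01, 0.1, n))  # specific risk matrix, diagonal

Sigma = B.T @ Omega @ B + S             # total covariance, n x n

# Sigma is symmetric positive definite
assert np.allclose(Sigma, Sigma.T)
assert np.all(np.linalg.eigvalsh(Sigma) > 0)
```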
If the number of assets, $n$, is much larger than the number of factors, $k$, it is better to factorize the factor covariance $\boldsymbol{\Omega}$ using the Cholesky decomposition:
$$
\boldsymbol{\Omega} := \mathbf{L}^\top\mathbf{L}
$$
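Note that `np.linalg.cholesky` returns a lower-triangular $\mathbf{C}$ with $\boldsymbol{\Omega} = \mathbf{C}\mathbf{C}^\top$; taking $\mathbf{L} := \mathbf{C}^\top$ matches the $\boldsymbol{\Omega} = \mathbf{L}^\top\mathbf{L}$ convention above. The payoff is that the systematic variance can be computed through the small $k \times n$ matrix $\mathbf{L}\mathbf{B}$ instead of the $n \times n$ matrix $\mathbf{B}^\top\boldsymbol{\Omega}\mathbf{B}$ (a sketch with random toy data):

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 3, 500  # few factors, many assets

B = rng.normal(size=(k, n))      # factor loadings
A = rng.normal(size=(k, k))
Omega = A @ A.T + k * np.eye(k)  # factor covariance (positive definite)

# np.linalg.cholesky gives lower-triangular C with Omega = C @ C.T;
# L := C.T then satisfies Omega = L.T @ L, matching the convention above.
L = np.linalg.cholesky(Omega).T
assert np.allclose(Omega, L.T @ L)

# Systematic variance via the k x n matrix L @ B avoids ever forming
# the n x n matrix B.T @ Omega @ B.
alpha = rng.normal(size=n)
g = L @ B @ alpha  # k-dimensional
assert np.isclose(g @ g, alpha @ (B.T @ Omega @ B) @ alpha)
```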
Then we can add the minimization of the risk to the optimization objective:
$$
\min_{\boldsymbol{\alpha}} \frac{1}{2}\boldsymbol{\alpha}^\top\boldsymbol{\alpha} - \boldsymbol{\alpha}^{o\top}\boldsymbol{\alpha} + \frac{\gamma}{2}\left(\|\mathbf{L}\mathbf{B}\boldsymbol{\alpha}\|_2^2 + \mathbf{s}^\top(\boldsymbol{\alpha}\circ\boldsymbol{\alpha})\right)
$$
where $\gamma$ is a risk-aversion parameter and $\mathbf{s}$ is the vector version of the specific risk matrix, $\mathbf{s}:=\text{diag}(\mathbf{S})$.
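Assuming the risk-augmented objective takes the form $\min_{\boldsymbol{\alpha}} \frac{1}{2}\boldsymbol{\alpha}^\top\boldsymbol{\alpha} - \boldsymbol{\alpha}^{o\top}\boldsymbol{\alpha} + \frac{\gamma}{2}\boldsymbol{\alpha}^\top\boldsymbol{\Sigma}\boldsymbol{\alpha}$ with $\boldsymbol{\Sigma} = \mathbf{B}^\top\boldsymbol{\Omega}\mathbf{B} + \mathbf{S}$, the unconstrained minimizer has the closed form $(\mathbf{I} + \gamma\boldsymbol{\Sigma})\boldsymbol{\alpha} = \boldsymbol{\alpha}^o$; a numpy sketch with toy data:

```python
import numpy as np

rng = np.random.default_rng(3)
k, n, gamma = 3, 8, 2.0  # toy sizes; gamma is the risk aversion

B = rng.normal(size=(k, n))
A = rng.normal(size=(k, k))
Omega = A @ A.T + k * np.eye(k)
s = rng.uniform(0.01, 0.1, n)        # specific variances, s = diag(S)

Sigma = B.T @ Omega @ B + np.diag(s)

# Objective: 1/2 a^T a - alpha_o^T a + gamma/2 * a^T Sigma a.
# Setting the gradient to zero gives (I + gamma * Sigma) a = alpha_o.
alpha_o = rng.normal(size=n)
alpha = np.linalg.solve(np.eye(n) + gamma * Sigma, alpha_o)

grad = alpha - alpha_o + gamma * (Sigma @ alpha)
assert np.allclose(grad, 0.0)
```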
We may want to limit the exposure of our portfolio to selected factors. For instance, the factor exposures most commonly neutralized or limited are Momentum, Value, Growth, Size, and Volatility.
Let $Q$ be the set of indices of the selected factors, with lower and upper bound vectors $\mathbf{e}_L, \mathbf{e}_U \in \mathbb{R}^{|Q|}$:
$$
\mathbf{e}_L \le \mathbf{B}_{Q\cdot}\,\boldsymbol{\alpha} \le \mathbf{e}_U
$$
where $\mathbf{B}_{Q\cdot}$ is the sub-matrix of $\mathbf{B}$ containing only the rows indexed by $Q$; in numpy notation, $\mathbf{B}_{Q\cdot}$ is `B[Q, :]`.
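A minimal sketch of the exposure-bounded problem using `scipy.optimize.minimize` with SLSQP (toy data; bound values and factor indices are arbitrary, and a production setup would use a dedicated QP solver):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
k, n = 4, 10
B = rng.normal(size=(k, n))   # factor loadings (toy data)
alpha_o = rng.normal(size=n)  # ideal target alpha

Q = [0, 2]                    # indices of factors to limit (arbitrary choice)
BQ = B[Q, :]                  # sub-matrix: rows indexed by Q
e_L = np.full(len(Q), -0.1)   # lower exposure bounds
e_U = np.full(len(Q), 0.1)    # upper exposure bounds

res = minimize(
    lambda a: 0.5 * a @ a - alpha_o @ a,  # similarity + regularization
    x0=np.zeros(n),
    method="SLSQP",
    constraints=[
        {"type": "ineq", "fun": lambda a: BQ @ a - e_L},  # e_L <= B_Q a
        {"type": "ineq", "fun": lambda a: e_U - BQ @ a},  # B_Q a <= e_U
    ],
)
alpha = res.x
assert res.success
# Selected factor exposures stay within the bounds (up to solver tolerance).
assert np.all(BQ @ alpha <= e_U + 1e-6) and np.all(BQ @ alpha >= e_L - 1e-6)
```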
Limiting systematic risk
This caps the systematic risk so that more of the portfolio's risk budget, and hence more of its gain, comes from specific risk. Let $\mathbf{g} := \mathbf{L}\mathbf{B}\boldsymbol{\alpha}$ denote the systematic exposure vector, so that $\mathbf{g}^\top\mathbf{g} = \boldsymbol{\alpha}^\top\mathbf{B}^\top\boldsymbol{\Omega}\mathbf{B}\boldsymbol{\alpha}$ is the systematic variance; we bound it by a budget $\xi$:
$$
\mathbf{g}^\top\mathbf{g} \le \xi
$$
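A quick numpy check of this cap, taking $\mathbf{g} = \mathbf{L}\mathbf{B}\boldsymbol{\alpha}$ (an assumption consistent with the Cholesky factor above) so that $\mathbf{g}^\top\mathbf{g}$ is the systematic variance:

```python
import numpy as np

rng = np.random.default_rng(5)
k, n = 3, 8
B = rng.normal(size=(k, n))
A = rng.normal(size=(k, k))
Omega = A @ A.T + k * np.eye(k)
L = np.linalg.cholesky(Omega).T  # so that Omega = L.T @ L

alpha = rng.normal(size=n) * 0.01
xi = 1.0                         # systematic variance budget (arbitrary)

g = L @ B @ alpha                # systematic exposure (k-dimensional)
sys_var = g @ g                  # = alpha^T B^T Omega B alpha

# If the cap is violated, uniformly shrinking alpha restores feasibility
# (a crude fallback; normally this constraint is handed to the optimizer).
if sys_var > xi:
    alpha *= np.sqrt(xi / sys_var)
    g = L @ B @ alpha

assert g @ g <= xi + 1e-12
```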
If we want to limit the systematic risk relative to the total risk, we need an estimate of the total risk. This can be estimated from the total risk of the previous interval's target, $\boldsymbol{\alpha}_{prev}$, which leads to:
where $\boldsymbol{\alpha}_{prev}$ is the position vector at the previous time tick.
If we would like to work in numbers of shares instead of dollar terms, we need to account for price changes in $\boldsymbol{\alpha}_{prev}$. In that case:
where $\boldsymbol{\nu}_t$ is the average daily trading volume over the last N’ days, used for the trading limit. It is better to use a shorter averaging period for the trading limit than for the holding-volume limit: a short-period average of daily trading volume predicts the next day better than a longer one, but if the period used for the holding-volume limit is too short, the bound fluctuates too much and injects unnecessary noise into the final alpha value.
If we have a better prediction of the next day's expected trading volume, we may use it instead.
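One simple post-hoc way to apply the trading limit (a sketch with hypothetical parameter names; a real implementation would put this constraint inside the optimizer) is to clip each asset's trade at a fraction of its average daily volume:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 6
alpha_prev = rng.normal(size=n)    # previous positions (dollar terms, toy data)
alpha_target = rng.normal(size=n)  # desired new target
nu_t = rng.uniform(1e-2, 1e-1, n)  # N'-day average daily volume (assumed scale)
delta = 0.1                        # max participation rate (hypothetical)

# Cap today's trade in each asset at delta * average daily volume.
max_trade = delta * nu_t
trade = np.clip(alpha_target - alpha_prev, -max_trade, max_trade)
alpha_new = alpha_prev + trade

assert np.all(np.abs(alpha_new - alpha_prev) <= max_trade + 1e-12)
```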
Reducing transaction cost (expected slippage)
Let $\mathbf{c}$ be the slippage vector (for instance, the past N’’-day average of spread slippage):
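If the cost term penalizes turnover as $\mathbf{c}^\top|\boldsymbol{\alpha} - \boldsymbol{\alpha}_{prev}|$ (an assumed form, since the equation is not shown in the rendered text), evaluating it is straightforward:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
c = rng.uniform(1e-4, 1e-3, n)  # per-dollar slippage estimates (assumed scale)
alpha_prev = rng.normal(size=n)
alpha = rng.normal(size=n)

# Expected slippage is linear in the absolute trade size: c^T |alpha - alpha_prev|.
expected_cost = c @ np.abs(alpha - alpha_prev)
assert expected_cost >= 0.0
```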