Wednesday, September 30, 2009

Ch8 Performance Analysis (1)

1. Introduction
The goal of performance analysis is to distinguish skilled from unskilled investment managers
- time series analysis: separate skill from luck by measuring return and risk
- cross-sectional comparison: distinguish winners from losers

Performance analysis can help the manager avoid two major pitfalls in implementing an active strategy.
- incidental risk: e.g., a growth-stock bias -> concentration in certain industries and groups of stocks with high volatility
- incremental decision making: sequence of individual asset decisions

2. Skill & Luck
2-1. Dimensions of skill & luck
- Blessed (skilled & lucky)
- Insufferable (unskilled but lucky)
- Forlorn (skilled but unlucky)
- Doomed (unskilled & unlucky)
*the challenge is to separate the blessed & the insufferable

2-2. Standard Error of the Information Ratio (IR)

SE(IR) ~ 1 / sqrt(Y)
where:
Y = number of years of observation

**IR: the ratio of portfolio returns above the returns of a benchmark to the volatility of those returns. IR measures a portfolio manager's ability to generate excess returns relative to a benchmark and attempts to identify the consistency of the investor. Generally, portfolios with higher betas will tend to have lower information ratios, and vice versa. However, a higher beta enhances the portfolio's alpha, and contributes to a higher information ratio, if the benchmark underperforms the risk-free return during a period.

IR = (Rp - Ri) / Sp-i
where:
Rp = Return of the portfolio
Ri = Return of the benchmark
Sp-i = Tracking error (standard deviation of the difference between returns of the portfolio & the returns of the benchmark), residual risk
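A minimal Python sketch of the two measures above; the monthly return series are made-up numbers, not data from the text:

import math

port = [0.012, 0.008, -0.004, 0.015, 0.007, 0.001, 0.011, -0.002, 0.009, 0.013, 0.004, 0.006]
bench = [0.010, 0.006, -0.001, 0.011, 0.005, 0.002, 0.008, 0.000, 0.007, 0.010, 0.003, 0.005]

active = [p - b for p, b in zip(port, bench)]
mean_active = sum(active) / len(active)
# tracking error = sample standard deviation of the active returns
te = math.sqrt(sum((a - mean_active) ** 2 for a in active) / (len(active) - 1))
ir_monthly = mean_active / te
ir_annual = ir_monthly * math.sqrt(12)   # mean scales with 12, stdev with sqrt(12)
print(round(ir_monthly, 3), round(ir_annual, 3))
# SE(IR) ~ 1/sqrt(Y): with only Y = 1 year of data, the standard error is ~1.0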



3. Returns-Based Performance Analysis
3-1. Returns


3-2. Returns Regression
- regressing the time series of portfolio excess returns against benchmark excess returns
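A sketch of that regression in Python (numpy only); the excess-return series are invented for illustration:

import numpy as np

rp = np.array([0.021, -0.004, 0.013, 0.008, 0.017, -0.009, 0.012, 0.005])  # portfolio excess returns
rb = np.array([0.018, -0.006, 0.010, 0.007, 0.014, -0.011, 0.010, 0.004])  # benchmark excess returns

beta, alpha = np.polyfit(rb, rp, 1)      # slope = beta, intercept = alpha (per period)
resid = rp - (alpha + beta * rb)
omega = resid.std(ddof=2)                # residual risk; ddof=2 for the two fitted parameters
print(alpha, beta, omega)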

4. Cross-Sectional Comparisons
4-1. Drawbacks of cross-sectional comparisons
- do not represent the complete population of institutional investment managers
- survivorship bias
- ignore the size of the funds
- do not adjust for risk (cannot untangle luck & skill)

Saturday, September 26, 2009

Ch8 Portfolio Construction (2)

5. Portfolio Revisions
- trading decisions based on expected active return, active risk, & transaction costs
- underestimating transaction costs -> frequent trading -> suffering higher-than-expected transaction costs & lower-than-expected alpha
- as the horizon of the forecast alphas decreases, returns become noisier. Rebalancing for very short horizons would involve frequent reactions to noise, not signal, but the transaction costs stay the same whether we are reacting to signal or noise.


**individual stock's alpha
e.g.) Boeing's alpha = 0.54%, beta = 0.56, monthly risk-free rate = 0.4%
=> Rj = Rf + beta x (Rm - Rf) = Rf (1 - beta) + beta x Rm
Rf(1 - beta) = 0.4% x (1 - 0.56) = 0.18%
0.54% - 0.18% = 0.36% > 0 -> Boeing performed 0.36% per month better than expected
Annualized Excess Return = (1 + 0.0036) ^ 12 - 1 = 4.41%
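The same arithmetic as a quick check (all numbers from the example above):

rf, beta, intercept = 0.004, 0.56, 0.0054    # monthly risk-free rate, beta, regression alpha
capm_intercept = rf * (1 - beta)             # 0.18%
monthly_excess = intercept - capm_intercept  # 0.36% per month
annualized = (1 + monthly_excess) ** 12 - 1  # ~4.41%
print(capm_intercept, monthly_excess, round(annualized, 4))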

- we can capture the impact of new information, and decide whether to trade, by comparing the marginal contribution to value added for stock n, MCVAn, to the transaction costs. The marginal contribution to value added shows how value added, as measured by risk-adjusted alpha, changes as the holding of the stock is increased with an offsetting decrease in the cash position.
- as our holding in stock n increases, alpha n measures the effect on portfolio alpha
- the change in value added also depends upon the marginal impact on active risk of adding more of stock n, MCARn, which measures the rate at which active risk changes as we add more of stock n
- let PCn be the purchase cost and SCn the sales cost for stock n. Before new information arrives, holdings are optimal, so the no-trade condition -SCn <= MCVAn <= PCn holds

Friday, September 25, 2009

Ch8 Portfolio Construction (1)

1. Introduction
Implementation
includes both portfolio construction and trading.
*standard objective: maximize (active return - active risk penalty)

1-1. Inputs for portfolio construction
- the current portfolio (measurable with near certainty)
- alphas (often unreasonable and subject to hidden biases)
- covariance estimates (noisy estimates)
- transaction cost estimates (noisy estimates)
- active risk aversion

1-2. Active Portfolio Management
Maximizing the expected utility of the excess return over a chosen benchmark
- active managers attempt to beat the market by forming portfolios capable of producing actual returns that exceed risk-adjusted expected returns.

**passive portfolio management: it simply establishes a portfolio that tracks the chosen benchmark as closely as possible. Passive portfolio managers try to capture the expected return consistent with the risk level of their portfolios.

1-3. Benchmark Portfolio
It might be an equity fund (S&P 500 Index), a bond fund (Lehman Brothers Bond Fund), or a balanced fund (mix of stocks and bonds). In other applications, it could be a stream of liabilities, such as a pension fund. We assume that the selected benchmark carries only the market risk.


2. Alphas & Portfolio Construction
2-1. Constraints
Most active managers construct portfolios subject to certain constraints: no short selling, restrictions on the amount of cash held within the portfolio, asset coverage, etc. These limits can make the portfolio less efficient.

Managers often add their own restrictions to the process to make portfolio construction more robust: neutralizing economic sectors, restrictions on allocations to certain stocks, avoidance of positions based on a forecast of the benchmark portfolio's performance

2-2. Modified Alphas
Modified alphas address the various constraints that each manager might have.



*practical issues
i. risk aversion
- Aversion to specific factor risk: helps the manager address the risks associated with a position that has the potential for huge losses, and the potential dispersion across portfolios
- Quantifying risk aversion -> enables the manager to understand a client's utility in a mean-variance framework

e.g.) IR = 0.8, desired level of active risk = 10%
=> implied level of risk aversion = 0.8 / (2 x 10) = 0.04

**Utility = Excess Return - (Risk Aversion x Variance)

ii. optimal risk
- maximizing utility implies an optimal level of active risk: sigma* = IR / (2 x risk aversion); see the sketch below
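A minimal sketch of the utility tradeoff, using the numbers from the example above (IR = 0.8, active risk in percent):

ir, sigma_star = 0.8, 10.0
lam = ir / (2 * sigma_star)              # implied risk aversion = 0.04
# check: U(sigma) = ir*sigma - lam*sigma^2 is maximized at sigma* = ir / (2*lam)
sigma_opt = ir / (2 * lam)
utility = ir * sigma_opt - lam * sigma_opt ** 2
print(lam, sigma_opt, utility)           # 0.04, 10.0, 4.0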

iii. alpha coverage
- forecasting returns on stocks that are not in the benchmark -> expand the benchmark to include those stocks with zero weight; active weights can still be assigned to generate active alpha.
- a lack of forecast returns for stocks in the benchmark -> infer alphas from the alphas of the stocks with forecasts -> calculate the value-weighted fraction of stocks with forecasts & the average alpha for that group, N1:
-> subtract this measure from each alpha and set the alphas of stocks without forecasts to zero. These alphas are benchmark-neutral.


3. Alpha Analysis
3-1. Benchmark & Cash Neutral Alphas
- Benchmark-neutral alphas
*the benchmark portfolio has zero alpha by definition. Setting the benchmark alpha to zero ensures that the alphas are benchmark-neutral, and avoids benchmark timing.

**market timing: managers of actively managed mutual funds may shift investment policy from time to time in response to changes in the returns on their own portfolios and on the benchmark portfolio.

**abnormal returns


- Cash-neutral alphas
The alphas will not lead to any active cash position

- Modified Benchmark-Neutral Alpha = Modified Alpha - Beta * Benchmark Alpha
=> the alpha of the benchmark = 0



3-2. Scale the Alphas
- Alpha has a natural structure
Alpha = volatility * IC * score
where:
IC = information coefficient
volatility = residual risk

- e.g.) if the Std Dev of the modified alphas = 0.57%, the modification shrank the IC by 62%

*we expect the IC & volatility for a set of alphas to be approximately constant, with the score having mean zero & Std. Dev. one across the set
-> Alpha should have Mean = zero, Std Dev = IC * volatility

e.g.) IC = 0.05, residual risk = 30%
=> an alpha scale of 1.5% (= 0.05 x 30%) --> mean alpha = 0, 2/3 of stocks with alphas between -1.5% and +1.5%, & 5% of stocks with alphas larger than +3.0% or less than -3.0%; see the sketch below
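A sketch of the scaling rule with simulated scores (the 1,000 "stocks" are random draws, not real data):

import random
random.seed(0)

IC, resid_vol = 0.05, 0.30
scores = [random.gauss(0.0, 1.0) for _ in range(1000)]       # mean 0, stdev 1
alphas = [resid_vol * IC * s for s in scores]                # alpha scale = 1.5%
within = sum(abs(a) <= 0.015 for a in alphas) / len(alphas)  # ~2/3
beyond = sum(abs(a) > 0.030 for a in alphas) / len(alphas)   # ~5%
print(within, beyond)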

3-3. Trim Alpha Outliers
- Examine all stocks with alphas greater in magnitude than, say, three times the scale of the alphas
- A detailed analysis: alphas that depend upon questionable data -> set to zero (while others appear genuine) -> cap genuine alphas at three times the scale in magnitude
- An extreme approach: force the alphas into a normal distribution with benchmark alpha = 0 & the required scale factor -> this utilizes the ranking information in the alphas while ignoring their sizes -> recheck benchmark neutrality and scaling afterwards

3-4. Neutralization
- Neutralization: removing biases or undesirable bets from alphas. Benchmark neutralization means that the benchmark has 0 alpha.
- The multiple-factor approach to portfolio analysis separates return along several dimensions. A manager can identify each of those dimensions as either a source of risk or a source of value added. The manager does not have any ability to forecast the risk factors, so he should neutralize the alphas against the risk factors
- The neutralized alphas will only include information on the factors he can forecast plus specific asset information. Once neutralized, the alphas of the risk factors = zero

e.g.) industry alpha -> zero
=> each stock's alpha minus the (cap-weighted) average alpha of its industry; see the sketch below
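A sketch of the subtraction with made-up holdings (two industries, cap weights within each industry):

stocks = [  # (industry, cap weight within industry, alpha)
    ("auto", 0.6, 0.020), ("auto", 0.4, -0.010),
    ("bank", 0.5, 0.015), ("bank", 0.5, 0.005),
]
ind_avg = {}
for ind in {s[0] for s in stocks}:
    members = [s for s in stocks if s[0] == ind]
    ind_avg[ind] = sum(w * a for _, w, a in members) / sum(w for _, w, _ in members)
neutral = [(ind, a - ind_avg[ind]) for ind, _, a in stocks]
print(ind_avg)    # cap-weighted industry averages
print(neutral)    # each industry's neutralized alphas now average to zero (cap-weighted)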


4. Transactions Costs
- one-dimensional problem: finding the correct tradeoff between alpha & active risk
- two-dimensional problem: transaction costs added
- Amortize the transaction costs to compare them to the annual rate of gain from the alpha & the annual rate of loss from the active risk. The rate of amortization will depend on the anticipated holding period.
- Annualized Transaction Cost = Round-Trip Cost / Holding Period (in years)
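A two-line check of the amortization rule, with a made-up 2% round-trip cost:

round_trip_cost = 0.02
for holding_years in (0.5, 1.0, 2.0):
    print(holding_years, round_trip_cost / holding_years)   # 4%, 2%, 1% per year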

Ch5 Exotic Options

Ch5 Volatility Smiles

Ch5 Valuation of Mortgage-Backed Securities

Ch5 Mortgage Backed Securities(3)

5. Path Dependence
- path independent: the value of the CFs at a given point in time is independent of the path that interest rates followed up to that point

Ch5 Mortgage Backed Securities(2)

4. Valuation Models
4-1. Static Cash Flow Model
Assumption
Prepayment rates can be predicted as a function of the age of the mortgages in a pool.
*the prepayment rate increases gradually with mortgage age and then levels off at some constant prepayment rate

Process
Step 1: Compute the (static) cash flow yield, given the set of (static) cash flows & a market price
Step 2: Measure the nominal spread by comparing the cash flow yield on an MBS with that on comparable bonds.

Advantages
*simple to use
- Allowance for the calculation of YTM
- Prepayment is solely a function of mortgage age and future cash flows can be forecasted

Two severe problems
- the model is not a pricing model: the CFs of a mortgage are not fixed, owing to prepayments, etc. -> it does not specify an appropriate yield for a mortgage
- the model provides misleading price-yield and duration-yield curves since cash flows are not fixed

4-2. Implied Models
These models estimate the interest rate sensitivity of MBSs.

Assumptions
Mortgage sensitivity changes gradually over time.

Advantage
More advanced than the static cash flow model that uses YTM

Disadvantages
- They are not true pricing models.
- Mortgage sensitivity can change dramatically over time

4-3. Prepayment Models
More sophisticated models that actually employ two separate models: a turnover model & refinancing model
*historical information + prepayment function

- incentive functions: modeling refinancing activity based on the term structure of interest rates; lagged rates = past interest rates

Non-interest-rate factors
- Mortgage age
- Points paid
- Amount outstanding
- Season of the year
- Geography

Thursday, September 24, 2009

Ch5 Mortgage Backed Securities(1)

1. Overview
Definition: a loan that is collateralized with a specific piece of real property
- primary market
- secondary market; securitization
**MBS (mortgage-backed security), pass-through structure

2. Fixed-Rate, Level-Payment Mortgages
2-1. Conventional Mortgage: the most common residential mortgage

2-2. fixed-rate, level payment, fully amortized mortgage loans
*features
- principal portion of each payment increases as time passes
- interest portion of each payment decreases as time passes
- servicing fee declines as time passes
- prepayment risk

e.g.) 30 year, $500,000 level payment, fixed rate of 12% (1% per month, 360 months)
=> Xmonthly = 500000 x 1% x (1 + 1%) ^ 360 / ((1 + 1%) ^ 360 - 1) = 5,143.06
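The payment formula as a function, reproducing the 5,143.06 figure:

def level_payment(balance, annual_rate, years):
    r, n = annual_rate / 12, years * 12
    return balance * r * (1 + r) ** n / ((1 + r) ** n - 1)

print(round(level_payment(500_000, 0.12, 30), 2))   # 5143.06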

- scheduled principal repayment (scheduled amortization) -> incremental reduction of outstanding principal
- Mortgage Rate = Net Interest (Net Coupon) + Servicing Spread
**reduction in principal is unaffected by the servicing fee

3. Prepayment Risk
3-1. Repayment option
The prepayment option is valuable when mortgage rates have fallen. In that case, the value of an existing mortgage exceeds the principal outstanding.
- borrower: holds the equivalent of the call option embedded in a callable bond (an American call option on an otherwise identical, nonprepayable mortgage); Strike price = outstanding principal amount
- homeowner: very much in the position of an issuer of a callable bond
- current coupon rate: initial principal amount = P.V. of Mortgage CF - Value of Prepayment Option

**due on sale
**lock-in effect: if points (refinancing bank fees) are high, these will discourage the borrower from refinancing

3-2. Main factors to affect prepayments
i. prevailing mortgage rates
- Spread between the current mortgage rate and the original mortgage rate; historically, when mortgage rates fall by more than 2%, refinancing activity increases (media effect)
- Path of mortgage rates: burnout effect
- Level of mortgage rates: low interest rates increase the affordability of housing and increase housing turnover -> increased refinancing & prepayment rates


ii. characteristics of the underlying mortgage loans
- original mortgage rate
- amount of seasoning
- origination of loan (FHA/VA or conventional)
- type of loan (30-year fixed, 30-year balloon)
- geographical location

iii. seasonal factors

iv. general economic activity

v. others: natural disasters, default

Ch5 An Overview of Mortgages & Mortgage Market

1. Overview
1-1. Mortgage: A loan secured by property
- primary market (mortgage market) until 1970
- secondary market: mortgage-backed security (MBS), pass-through security

1-2. Lien Status: seniority in the event of foreclosure
- first-lien status: mortgage lender
- second-lien status: less than 80% ownership

1-3. Original Loan Term: 30-year, 15-year, balloon payments option

1-4. Interest-Rate Type
- Fixed-Rate Mortgages
- Adjustable-Rate Mortgages (ARMs)
- Hybrid ARMs

1-5. Credit Guarantees
- Government Sponsored Entities (GSEs)
- Federal Housing Administration (FHA)
- Department of Veterans Affairs (VA)

1-6. Loan Balance
- Underwriting standards are primarily concerned with the maximum LTV (loan-to-value) ratio, payment-to-income ratio, & loan amount
- Nonconforming mortgage loans: loans that fail to meet the agency's underwriting standards
- Jumbo loans: balances larger than the conforming limits

1-7. Borrower Type
- Traditional borrowers: high credit scores & stable incomes
- Subprime borrowers: impaired credit
- Alternative-A (Alt-A) borrowers: those who have decent credit but unstable income levels; they do not provide the same level of documentation as traditional borrowers

1-8. Major Players in the Mortgage Industry
- Direct lender: underwriters who fund the loans; work with loan brokers through wholesale channel or work with borrowers through retail channel
- Depository institutions: use deposits to fund lending <--> nondepository institutions, which sell the loans to investors in the secondary mortgage market
- Originators underwrite & support loan production. They usually lend for a short period of time and ultimately end up selling the loans to large banks

1-9. Loan Underwriting Process
i. evaluation of a borrower's creditworthiness
- credit score: FICO score (over 660 = prime credit)
- LTV (loan-to-value ratio):
LTV = Current Mortgage Amount / Current Appraised Value
**the lower the LTV ratio, the more comfortable the mortgage lender is in making the loan

- income ratios: the level of a borrower's income compared to the total size of the mortgage payment; Front ratios & Back ratios
- documentation: inc. income, employment, & tax returns; in the event of no or little documentation -> borrowers may not be denied credit, but the mortgage rate assigned will reflect the riskiness of the loan

**Front ratios: total monthly mortgage payments / pre-tax monthly income
**Back ratios: (mortgage payments + other borrower loan payments) / pre-tax monthly income



2. Basic Mortgage Mathematics
2-1. Level payment mortgage

Monthly Payment X = B(0) x r x (1 + r)^T / [(1 + r)^T - 1]
where:
r = monthly interest rate
T = loan term (in months)
B(0) = original loan balance; B(n) = balance remaining after n payments

***mortgage payment factor = r(1 + r)^T / [(1 + r)^T - 1]

**Monthly Payment Formula
- payments allocated more heavily to interest in the initial stages of the loan (fixed-rate loan)
- over time, the loan balance declines -> more of each payment goes toward principal (see the sketch below)
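A short amortization sketch for the same loan as the earlier example, showing the interest-heavy early payments:

def amortize(balance, annual_rate, years, months_to_show=6):
    r, n = annual_rate / 12, years * 12
    pay = balance * r * (1 + r) ** n / ((1 + r) ** n - 1)
    for m in range(1, months_to_show + 1):
        interest = balance * r        # interest accrued on the remaining balance
        principal = pay - interest    # the rest of the level payment reduces principal
        balance -= principal
        print(m, round(interest, 2), round(principal, 2), round(balance, 2))

amortize(500_000, 0.12, 30)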


2-2. ARM Payments

3. Mortgage & MBS Risks
3-1. Risk-Based Pricing
- separation of subprime borrowers from prime borrowers
- nontraditional borrowers: low FICO scores, riskier characteristics

3-2. Prepayment Risk
- payment in excess of the required payment or payment of the entire loan
- curtailments: partial prepayments, i.e., prepayments of less than the outstanding principal balance
- prepayments or curtailments reduce the amount of interest the lender receives over the life of the loan.
*prepayment penalties not allowed for residential mortgage

Prepayments occur for the following reasons:
- The sale of the property
- The destruction of the property by fire or other disaster
- A default on the part of the borrower
- Curtailments
- Refinancing

3-3. Mortgage Pass-Through Securities
- a claim against a pool of mortgages (securitized mortgage)
- mortgages in the pool have different maturities & rates
- WAM (weighted average maturity) = balance-weighted average of the remaining maturities of all the mortgages in the pool
- WAC (weighted average coupon) = weighted average of the mortgage rates in the pool
- pass-through rates: less than the average coupon rate of the underlying mortgages
**investment characteristics: a function of cash flow features & the strength of the government guarantee

- liquid securities (through securitization)
- more than one class of pass-through securities may be issued against a single mortgage pool
- timing difference between the time the mortgage service provider receives the mortgage payments and the time the cash flows are passed through to the security holders

3-4. Measuring Prepayment Speeds
Prepayments cause the timing & amount of cash flows from the mortgage pool and MBS to be uncertain. Prepayment behavior is not constant over the life of a loan; borrowers are unlikely to prepay immediately after taking the loan, but the propensity to prepay increases over time.

3-4-1. SMM & CPR
- single monthly mortality (SMM): measures the monthly principal prepayments on a mortgage portfolio as a percentage of the balance at the beginning of the month in question
- conditional prepayment rate (CPR): the most common metric used to describe prepayments; the annualized SMM. In a ramping framework, CPR increases at a constant and predetermined rate (ramp)

CPR = 1 - (1 - SMM)^12

3-4-2. PSA Model
The most common model for measuring prepayments in a ramping framework
*PSA(public security association -> bond market association)

- base PSA Model (100% of the model or 100% PSA)
- assumption: prepayments begin at a rate of 0.2% CPR in the first month, increase at a rate of 0.2% CPR per month until they reach 6.0% CPR in month 30, and remain at 6% CPR for the remaining term of the loan.
* 200% PSA: speeds double that of the base PSA model
- PSA model depends on the age of the loan or on the weighted-average loan age

e.g.) 4.0% CPR in month 20
=> 4.0% = 20 x 0.2% -> 100% PSA

e.g.) month-25 CPR & SMM for 150% PSA
=> CPR(month 25) = 25 x 0.2% = 5%
150% PSA = 1.5 x 5% = 7.5%
SMM = 1 - (1 - 7.5%)^(1/12) = 0.6476%
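Both examples in code (the PSA ramp and the nonlinear CPR-to-SMM conversion):

def psa_cpr(month, psa=1.0):
    return psa * min(0.002 * month, 0.06)   # 100% PSA: 0.2% per month, capped at 6% from month 30

def cpr_to_smm(cpr):
    return 1 - (1 - cpr) ** (1 / 12)        # nonlinear CPR -> SMM

print(psa_cpr(20))                          # 0.04: month 20 at 100% PSA
cpr = psa_cpr(25, psa=1.5)                  # 0.075: month 25 at 150% PSA
print(cpr, round(cpr_to_smm(cpr), 6))       # SMM ~ 0.006476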

**nonlinear relationship between CPR and SMM
***The PSA standard benchmark is nothing more than a market convention. It is not a model for predicting prepayment rates for MBS. Empirical studies have shown that actual CPRs differ substantially from those assumed by the PSA benchmark

3-5. Credit Risk
Agency MBS carry credit guarantees from GNMA or the GSEs, and senior classes of private-label securities are rated AAA because of credit enhancement; nevertheless, the analysis of mortgage credit is important for the following reasons:
- assessment of the credit quality of portfolio holdings & the adequacy of loss reserve level (lenders)
- evaluation of potential loss-adjusted returns (buyers of subordinated securities)
- an understanding of trends in mortgage lending and credit quality

3-6. Posteriori Evaluation of a Mortgage Pool
i. Stratification of weighted-average credit scores & LTV ratios, along with documentation style & geographic concentration
ii. Delinquencies measures
- percentage of the pool that is paying on time in relation to those who are delaying payments
- OTS (Office of Thrift Supervision) method: current (fewer than 30 days delinquent) and 30/60/90+ days delinquent
iii. Default measures
- defaults quantified
CDR (conditional default rate): annualized value of the unpaid principal balance of newly defaulted loans over the course of a month as a percentage of the total unpaid balance of the pool at the beginning of the month
CDX (cumulative default rate): proportion of the total face value of loans in the pool that have gone into default as a percentage of the total face value of the pool
iv. Severity
- face value of the losses on a loan after the foreclosure process is completed and the property is disposed of

Sunday, September 20, 2009

Ch5 The Science of Term Structure Models

1. Rate and Price Trees
1-1. video clips:
one step for option price: http://www.youtube.com/watch?v=kml52n2zmQs&feature=PlayList&p=1F93169FC44F4F23&playnext=1&playnext_from=PL&index=35
two step binomial: http://www.youtube.com/watch?v=YJls_RgTniw&feature=related

1-2. Binomial Model: A model that assumes that interest rates can take only one of two possible values in the next period
- interest rate tree: set of possible interest paths



2. Risk-Neutral Pricing
*interest rate drift: difference between the risk-neutral and true probabilities
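A one-step sketch of risk-neutral pricing on a binomial rate tree; the rates and the 0.5/0.5 risk-neutral probabilities are invented for illustration:

face = 1000.0                      # 1-year zero-coupon bond
r0 = 0.05                          # six-month rate today (annualized, semiannual compounding)
r_up, r_down = 0.06, 0.04          # possible six-month rates six months from now
q = 0.5                            # risk-neutral probability of the up move

v_up = face / (1 + r_up / 2)       # value in six months if rates rise
v_down = face / (1 + r_down / 2)   # value in six months if rates fall
price = (q * v_up + (1 - q) * v_down) / (1 + r0 / 2)
print(round(v_up, 2), round(v_down, 2), round(price, 2))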




3. Fixed-Income Securities & the Black-Scholes-Merton Model
3-1. BSM Model: equity option-pricing model
**Assumptions (each at odds with fixed income)
i. no upper limit to the price of the underlying asset <--> a bond's price is capped at the sum of its remaining cash flows
ii. constant risk-free rate <--> bond prices are driven by changes in short-term rates
iii. constant price volatility <--> a bond's price volatility decreases as the bond approaches maturity




4. Callable Bonds
A call option gives the issuer the right to buy back the bond at fixed prices at one or more points in the future, prior to the date of maturity.
- negative convexity as interest rates fall
- callability effectively caps the investor's capital gains as yields fall
- increased reinvestment risk when yields fall
- less price volatility




5. Putable Bonds
The put feature gives the bondholder the right to sell the bond back to the issuer at a set price.
The put price serves as a floor value for the price of the bond

Ch5 Key Rate and Bucket Exposures

1. Key Rate Shift
1-1. video clips:
http://www.youtube.com/watch?v=ZDjjI0YKDXg&feature=PlayList&p=B7A60E8C239C9A0C&index=0&playnext=1
http://www.youtube.com/watch?v=nQ4nbF0rfUA&feature=related
http://www.youtube.com/watch?v=SvAc1EF1GPE&feature=related

Friday, September 18, 2009

Ch5 Parametric Approach: Extreme Value (2)

2. POT (Peaks-Over-Threshold) Approach
- distribution of excess losses over a (high) threshold
- GPBdH(Gnedenko-Pickands-Balkema-deHaan) theorem: as u gets large, the distribution
Fu(x) converges to a GPD (generalized Pareto distribution):

G(x) = 1 - [1 + ksi x / beta]^(-1/ksi) if ksi != 0
G(x) = 1 - exp(-x / beta) if ksi = 0
where:
beta = positive scale parameter
ksi = shape or tail index parameter

- distribution function Fu(x): for a threshold u of the loss X & x > 0, it gives the probability that a loss exceeds the threshold u by at most x
**tradeoff of choosing the threshold:
it needs to be high enough so the GPBdH theory can apply, but it must be low enough so that there will be enough observations to apply estimation techniques to the parameters
- the distribution is defined for the following regions:
x >= 0 when ksi >= 0, and 0 <= x <= -beta/ksi when ksi < 0
- ksi & beta can be estimated using maximum likelihood approaches or semi-parametric approaches

2-1. VaR & Expected Shortfall
- all distributions of excess losses converge to the GPD (natural model for excess losses)

*VaR (in %) under the GPD, at confidence level cl:
VaR = u + (beta/ksi) x {[(n/Nu) x (1 - cl)]^(-ksi) - 1}
where:
u = threshold (in %)
n = number of observations
Nu = number of observations that exceed the threshold

*corresponding ES (in %):
ES = VaR / (1 - ksi) + (beta - ksi x u) / (1 - ksi)

- ES (expected shortfall, a.k.a. conditional VaR) viewed as an average or expected value of all losses greater than the VaR: E[L | L > VaR]
http://www.youtube.com/watch?v=eHGJFOjyzr4



2-2. GEV vs. POT
- one might be more natural in a given context than the other
- GEV: additional parameters & block maxima approach can involve some loss of useful data relative to the POT
- POT: problem of choosing the threshold



3. Multivariate EVT
We can easily see how extreme values can be dependent on each other with MEVT.
- similar relationship between the occurrence of a natural disaster and a decline in financial markets as well as markets for real goods and services
Multivariate EVT has the same goal as univariate EVT in that the objective is to move from the familiar central-value distributions to methods that estimate extreme events. The key issue is how to model the dependence structure of extreme events. For elliptical distributions, knowledge of variances and correlations suffices to specify the multivariate distribution. However, for non-elliptical distributions, correlation no longer suffices to describe the dependence structure.
MEVT tells us that the limiting distribution of multivariate extreme values will be a member of the family of EV copulas, and we can model multivariate EV dependence by assuming one of these EV copulas. The copulas can also have as many dimensions as appropriate and congruous with the number of random variables under consideration. However, there is a curse of dimensionality.
The occurrence of extreme events is governed by the tail dependence of the multivariate distribution. The tail dependence is the central focus of MEVT.

Ch5 Parametric Approach: Extreme Value (1)

1. GEV Theory
*video clip: http://www.youtube.com/watch?v=o-cpu1IH3tM

1-1. Extreme Events: events that are unlikely to occur, but can be very costly when they do (low-probability, high-impact events)

1-2. GEV (Generalized Extreme Value)
- Fisher-Tippett theorem (1928)
- as the sample size n gets large, the distribution of extremes, Mn, converges to the following (GEV) distribution:
F(x) = exp{-[1 + ksi(x - mu)/delta]^(-1/ksi)} if ksi != 0
F(x) = exp{-exp[-(x - mu)/delta]} if ksi = 0
defined where 1 + ksi(x - mu)/delta > 0
where:
mu = location parameter of the limiting distribution (a measure of the central tendency of Mn)
delta = scale parameter of the limiting distribution (a measure of the dispersion of Mn)
ksi = tail index, an indication of the shape (or heaviness) of the tail of the limiting distribution

- if ksi > 0, Frechet distribution, heavy-tailed; empirical estimates of ksi for financial returns are typically positive but less than 0.35
- if ksi = 0, Gumbel distribution, exponential tails (relatively light tail)
- if ksi < 0, Weibull distribution, lighter than normal tails



- both are skewed to the right, but the Frechet is more skewed than the Gumbel and has a longer right-hand tail
- most of the probability mass lies between x values of -2 and +6





where:
alpha = VaR confidence level associated with the threshold Mn*

1-3. Choice of distribution
- if the researcher is confident the parent distribution is a t-distribution -> ksi > 0
- if the researcher applies a statistical test and cannot reject the H0: ksi = 0 -> ksi = 0
- if the researcher wishes to be conservative & to avoid model risk -> assume ksi > 0


Ch5 VaR Mapping(2)

2. Mapping Fixed-Income Portfolio
2-1. Three Mapping System
i. Principal mapping: average portfolio maturity
ii. Duration mapping: portfolio duration
iii. Cash-flow mapping: maturity buckets, term-structure vertices




2-2. Stress Test
- assumption: all zeros are perfectly correlated
- decrease all zeroes' values by their VaR -> generate a new distribution of P.V. factors -> undiversified VaR

2-3. Benchmarking a Portfolio
- compute VaR in relative terms, that is, relative to a performance benchmark
- Tracking error VaR:

TE-VaR = z x sqrt[(x - x0)' SIGMA (x - x0)]
where:
x = vector of positions for the portfolio
x0 = vector of positions for the index
z = standard normal deviate for the confidence level, SIGMA = covariance matrix of returns

- tracking error can be measured in terms of variance reduction
e.g.) Absolute risk of the index (Absolute VaR)= $1.99, TE-VaR=$0.43
=> 1 - (0.43/1.99)^2 = 95.4%

- as correlations decrease for more distant maturities, we should expect that a duration-matched portfolio will have the lowest absolute risk for the combination of the most distant maturities (barbell portfolio)
**Barbell Portfolio: A bond portfolio that has high concentrations of bonds in both short-term and long-term fixed-income instruments with only a few in intermediate-term bonds. A barbell portfolio implements a trading strategy that concentrates on investments in the short- and long-term end of bond maturities. A barbell strategy is useful when short-term and long-term interest rates are higher relative to intermediate interest rates. This strategy allows investors to earn higher overall yields while still retaining the desired time frame for the bond portfolio.



3. Mapping Linear Derivatives
3-1. Forward Contracts

current value: ft = St x exp(-y x tau) - K x exp(-r x tau)
where:
St = spot price of one unit of the underlying cash asset
K = contracted forward price
r = domestic risk-free rate
y = income flow on the asset
tau = time to maturity
Ft = current forward rate

- appropriate for application of the delta-normal method; liner combinations of normally distributed risk factors
e.g.) a forward contract to purchase pounds for dollars one year from now
risk positions:
. a short position in a U.S. T-bill
. a long position in a one-year U.K. bond
. a long position in the British pound spot market

- the current value of the forward contract is the present value of the difference between the current forward rate and the locked-in delivery rate. -> the initial value of the contract is zero
- the value of the contract may change, creating market risk

3-2. Commodity Forwards
- more complex than for financial assets such as currencies, bonds, or stock indices
- most products consumable -> creating implied benefit (convenience yield)
*convenience yield is not tied to another financial variable -> highly variable, creating its own source of risk
- main driver of the value of the contract: current forward price for commodities
- commodities are much more volatile than typical financial assets
- volatilities decrease with maturity; the effect is strongest for less storable products (energy products).
**financial assets: volatilities driven primarily by spot prices (constant volatilities across contract maturities)

3-3. Forward Rate Agreements
- the buyer of an FRA locks in a borrowing rate; the seller locks in a lending rate
e.g.) Long 6 x 12 FRA = long 6-month bill + short 12-month bill
(investing for 6 months & borrowing for 12 months)
360-day spot rate = 5.8125%, 180-day rate = 5.6250%
=> (1 + F1,2 / 2) = (1 + 5.8125%) / (1 + 5.6250%/2) => F = 5.836%
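The FRA arithmetic in code (rates from the example, semiannual compounding):

r360, r180 = 0.058125, 0.056250
f = 2 * ((1 + r360) / (1 + r180 / 2) - 1)   # implied 6-month rate, six months forward
print(round(f, 5))                          # ~0.05836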

3-4. Interest Rate Swaps
Interest-rate swaps viewed in two different ways:
- a combined position in a fixed-rate bond and in a floating-rate bond or
- a portfolio of forward contracts

4. Mapping Options
- mapping process for nonlinear derivatives
- delta is not constant, which may make linear methods inappropriate for measuring the risk of options. Delta increases with the underlying spot price, and the relationship becomes more nonlinear for short-term options.
- for small changes in the underlying, the linear effect will dominate the nonlinear effect -> linear approximations may be acceptable for options with long maturities when the risk horizon is short

Long option = long (delta x S) in the underlying asset + short (delta x S - c) in bills

Thursday, September 17, 2009

Ch5 VaR Mapping(1)

1. Mapping for Risk Measurement
1-1. Mapping: A process by which the current values of the portfolio positions are replaced by exposures on the risk factors.
- many positions can be aggregated into, or simplified to, a small set of exposures, primitive risk factors, without loss of risk information: aggregation at the highest level
*such aggregation is not appropriate for the pricing of the portfolio
- mapping is the only solution when the characteristics of the instrument change over time.

1-2. video clips
mapping for a bond portfolio: http://www.youtube.com/watch?v=CYQ2_Xzr8uk
undiversified VaR: http://www.youtube.com/watch?v=SqZaAl5g_8U
diversified VaR: http://www.youtube.com/watch?v=cjguDOUswDA&feature=related
mapping for forward contracts: http://www.youtube.com/watch?v=Um8e_teI_dw
mapping for European stock option: http://www.youtube.com/watch?v=3wEBJbUuKfQ


1-3. Mapping as a solution to data problem
- no return history, e.g., a mutual fund with a strategy of investing in IPOs -> map to exposures on similar risk factors already in the system
- stale data -> longer time intervals or a regression of A returns on B returns if A price = stale price

1-4. Mapping process
- exact allocation of exposures on the risk factors
- estimated exposures
1-5. General & specific risk
The choice of the set of primitive factors
- tradeoff between better quality of the approximation & faster processing
- influences the size of specific risk: risk that is due to issuer-specific price movements, after accounting for general market factors

1-5-1. a portfolio of N stocks
Rp = beta_p x Rm + sum over i of (wi x eps_i)
- Variance of Rp = general market risk + aggregate of specific risk for the entire portfolio:
V(Rp) = beta_p^2 x V(Rm) + sum over i of (wi^2 x V(eps_i)), assuming uncorrelated residuals


1-5-2. a corporate bond portfolio (each bond's return mapped on government yields & credit spreads)
where:
Zj = a set of J government bond yields
Sk = a set of K credit spreads
**In practice, there may not be sufficient history to measure the specific risk of individual bonds, which is why it is often assumed that all issuers within the same risk class have the same risk

Wednesday, September 16, 2009

Ch5 Backtesting VaR

1. Backtesting
VaR models are only useful insofar as they predict risk reasonably well. -> Model validation
*Model validation: general process of checking whether a model is adequate; backtesting, stress testing, & independent review & oversight.

1-1. Definition
A formal statistical framework that consists of verifying that actual losses are in line with projected losses.

1-2. Backtesting VaR
- systematically comparing the history of VaR forecasts with associated portfolio returns
- reality checks -> reexamining for faulty assumptions, wrong parameters, or inaccurate modeling
- central to the Basel Committee's ground-breaking decision to allow internal VaR models for capital requirements.



2. Setup for Backtesting
i. number of exceptions (exceedences); too many exceptions -> the model underestimates risk
ii. the fit between the absolute value of the daily profit & loss and the 99% VaR (daily price volatility)
iii. VaR measures assume that the current portfolio is frozen over the horizon (static).
**In practice, the actual portfolio is contaminated by changes in its composition: intraday trades, fees, commissions, spreads, & net interest income. => backtesting usually is conducted on daily returns to minimize the contamination.
iv. Returns selection
- actual portfolio return
- hypothetical return: obtained from applying fixed positions to the actual returns
- cleaned return: actual return minus all non-mark-to-market items.
* choice to use either hypothetical or cleaned returns
v. both actual & hypothetical returns should be used for backtesting because both sets of numbers yield informative comparisons.
vi. passing backtesting with hypothetical but not actual returns => the problem lies with intraday trading
not passing backtesting with hypothetical returns => the modeling methodology should be reexamined





3. Model Backtesting with Exceptions
Confidence level & statistical decision problem, accept or reject decision
The choice of the confidence level for the test is not related to the quantitative level p selected for VaR. -> The decision rule may involve a 95% confidence level for backtesting VaR numbers, which are themselves constructed at some other confidence level, say, 99% for the Basel rules.



3-1. Failure Rates
- failure rate: N/T, where N = the number of exceptions, T = sample size (days)
- nonparametric: simply counting the number of exceptions
- classic testing framework for a sequence of successes & failures (Bernoulli trials)
- binomial probability distributions:
Expected Value: E(x) = pT & Variance: V(x) = p(1 - p)T
**central limit theorem & approximation of the binomial distribution by the normal distribution: z = (N - pT) / sqrt[p(1 - p)T]
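A sketch of the failure-rate test, with both the normal approximation and Kupiec's LRuc; the 8 exceptions are an illustrative input, not from the text:

import math

p, T, N = 0.01, 250, 8                       # 99% VaR, one year, 8 exceptions
z = (N - p * T) / math.sqrt(p * (1 - p) * T)
lr_uc = (-2 * math.log((1 - p) ** (T - N) * p ** N)
         + 2 * math.log((1 - N / T) ** (T - N) * (N / T) ** N))
print(round(z, 2), round(lr_uc, 2))          # reject at 95% if |z| > 1.96 or LRuc > 3.84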

3-2. Type I & Type II Error
Type I Error
- rejecting an accurate model


Type II Error
- accepting an inaccurate model


**tradeoff between Type I & Type II errors => a test is powerful if it keeps a low Type I error rate while achieving a very low Type II error rate (catching inaccurate models)
- the choice of the confidence level (the decision rule to reject the model) is not related to the quantitative level p selected for VaR.


***Kupiec's approximate 95% confidence regions are defined by the tail points of the log-likelihood ratio
LRuc = -2 ln[(1 - p)^(T-N) x p^N] + 2 ln[(1 - N/T)^(T-N) x (N/T)^N]
which is asymptotically distributed chi-square with one degree of freedom under the null hypothesis that p is the true probability. LRuc is the test statistic for unconditional coverage.
It is difficult to backtest VaR models constructed with higher confidence levels because detection of systematic biases becomes increasingly difficult for low values of p

e.g.) probability level p = 5%, if T = 252 then 6 < N < 20, if T = 1000 then 37 < N < 65
for T = 252 [6/252 = 0.024, 20/252 = 0.079]
for T = 1000 [37/1000 = 0.037, 65/1000 = 0.065]
=> as a proportion of the sample, the interval shrinks as the sample size extends

3-3. Holding period for VaR
two theories about choosing a holding period:
i. the holding period should correspond to the amount of time required to either liquidate or hedge the portfolio -> VaR calculates possible losses before corrective action can take effect
ii. the holding period should be chosen to match the period over which the portfolio is not expected to change due to nonrisk-related activity
**holding period is more significant than the confidence level

4. Basel Rules
The Basel rules for backtesting the internal-models approach are derived directly from the failure rate test. The current verification procedure consists of recording daily exceptions of the 99% VaR over the last year (250 days -> 2.5 expected exceptions).

**Basel Penalty Zones
- Green: 0 to 4
- Yellow: 5 to 9
- Red: 10+

The penalty for banks is subject to their supervisor's discretion. Four categories of causes for exceptions are:
- basic integrity of the model: positions reported incorrectly, an error in the program code => penalty applied
- model accuracy could be improved: the model does not measure risk with enough precision => penalty applied
- intraday trading: positions changed during the day => penalty considered
- bad luck: volatile market, correlations changed

4-1. High VaR confidence level
- Type I error: 10.8%, Type II error: 12.8% => not powerful

**increasing the power of the test
- lowering the required VaR confidence level to 95% -> sharply reduces the probability of not catching an erroneous model
- increasing the number of observations. e.g.) T= 100

4-2. Conditional Coverage Model
So far the framework focuses on unconditional coverage (ignoring time variation or conditioning in the data).
- with a 95% VaR confidence level: E(x) = 13; if we observed that 10 of these exceptions occurred over the last 2 weeks -> the verification system should be designed to measure proper conditional coverage.
- Christofferson's LRCC
LRCC = LRUC + LRind
where:
LRind = the serial independence of deviations using a log-likelihood ratio test

-> reject the model if LRCC > 5.99
If exceptions are determined to be serially dependent, then the model needs to be revised to incorporate the correlations that are evident in the current conditions.

Tuesday, September 15, 2009

Ch5 Modelling Dependence: Correlations and Copulas


1. Correlation as a Measure of Dependence
The most common way to measure the dependence is to use standard (linear) correlation.
The linear correlation is a good measure of dependence when random variables are distributed as multivariate elliptical.
*elliptical distributions: normal & t-distributions


1-1. Limitations even in an elliptical distribution
- If risks are independent, the correlation is zero. But, the reverse does not necessarily hold except in the special case where we have a multivariate normal distribution.
=> zero correlation does not imply that risks are independent unless we have multivariate normality.
- The correlation is not invariant to transformations of underlying variables. For instance, the correlation between X and Y will not in general be the same as the correlation between ln(X) and ln(Y). Hence, transformations of the data can affect correlation estimates.


1-2. Non-elliptical distribution
- correlation is not defined unless variances are finite (infinite variance); heavy-tailed distribution with an infinite variance or trended return series that are not co-integrated
- even where it is defined, we cannot count on correlation spanning the full [-1, 1] range outside the elliptical family
- marginal distributions & correlations no longer suffice to determine the joint multivariate distribution -> correlation does not tell us about dependence; spurious correlations (a correlation between two variables that does not result from any direct relation between them but from their relation to other variables) -> correlation does not imply causation
- outliers can affect correlations significantly




2. Copula Theory
2-1. Basics of Copula Theory
A copula is a function that joins a multivariate distribution function to a collection of univariate marginal distribution functions. Copulas enable us to extract the dependence structure from the joint distribution function and separate out the dependence structure from the marginal distribution functions.
**Modeling Joint Distribution Function
- Specify marginal distributions
- Choose a copula to represent the dependence structure
- Estimate parameters involved
- Apply the copula function to the marginals

2-2. Common Copulas
2-2-1. Simplest Copulas
- Independence Copula: X & Y are independent
- Minimum Copula: positively dependent or comonotonic
- Maximum Copula: negatively dependent or countermonotonic

2-2-2. Other Copulas
- Gaussian(Normal) Copula: The copula depends only on the correlation coefficient which confirms that the correlation coefficient is sufficient to determine the whole dependence structure; no closed-form solution
- t-Copula: generalization of the normal copula
- Gumbel or Logistic Copula: beta determines the amount of dependence between variables.
beta = 1 -> variables are independent
0 < beta < 1 -> limited dependence
beta -> 0 -> perfect dependence

- Elliptical & Archimedean Copulas
Archimedean: distribution function is strictly decreasing & convex; easy to use; it fits a wide range of dependence behavior

- EV(extreme-value) Copula: minimum & Gumbel copula, Gumbel II, Galambos copulas

2-3. Tail Dependence
Copulas can be used to investigate tail dependence, which is an asymptotic measure of the dependence of extreme values.
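A sketch of the recipe in 2-1 with a Gaussian copula (numpy only; the correlation and the exponential marginals are arbitrary choices for illustration):

import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
rho = 0.7
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=10_000)

norm_cdf = np.vectorize(lambda x: 0.5 * (1 + erf(x / sqrt(2))))
u = norm_cdf(z)                   # the copula step: correlated uniforms
losses = -np.log(1 - u)           # inverse-CDF into exponential(1) marginals
print(np.corrcoef(losses[:, 0], losses[:, 1])[0, 1])   # dependence survives the transform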

Sunday, September 13, 2009

Ch5 Measures of Financial Risk

1. Mean-Variance Framework
The traditional approach used to measure financial risks is the mean-variance framework. In risk management, we are concerned about outcomes in the left-hand tail.

1-1. Attractions of Normality
- central limit theorem
- straightforward formula for both cumulative probabilities and quantiles
- normal (elliptical) distribution requires only two parameters: mean & variance (expected return & risk)

1-2. Mean-Variance Efficient Frontier without a Risk-free Asset
The investor will choose some point along the upper edge of the feasible region, the efficient frontier. The point chosen depends on their risk-expected return preferences (utility or preference function).

1-3. Mean-Variance Efficient Frontier with a Risk-free Asset
*assumption: no short-selling constraints
The investor can achieve any point along a straight line running from the risk-free rate through a point or portfolio (the market portfolio) just touching the top of the attainable set.

1-4. Normality Assumption
If the distribution is skewed or has heavier tails, the normality assumption is inappropriate and the mean-variance framework can produce misleading estimates of risk. => the mean-variance framework can be applied conditionally on sets of parameters that might themselves be random. But, even with the greater flexibility, it's doubtful whether conditionally elliptical distribution can give sufficiently good fits to empirical return processes.
=> the mean-variance framework tells us to use the standard deviation as risk measure, but even with refinements such as conditionality, this is justified only in limited cases.

2. VaR
2-1. Basics of VaR

Saturday, September 12, 2009

Ch6 Credit Risks & Credit Derivatives (3)

4-4. CreditMetrics
* video clips: http://www.youtube.com/watch?v=CMn3q9gO3tM
http://www.youtube.com/watch?v=gyy0lXlXpCU&feature=related

Step 1: Figure out a rating class for the debt claim
Step 2: One-year rating transition matrix
Step 3: Specify Horizon
Step 4: Compute possible one-year forward values using one-year forward zero curves
*possible one-year forward value = Sum(CFi discounted at the one-year forward zero rates)
Step 5: Compute the expected bond value
Step 6: Compute the credit VaR for a given confidence level
Step 7: Transition Probabilities -> Cumulative Probability -> Compute Threshold Values
Step 9: Compute joint probability bivariate distribution
=BIVAR(BT1i, BT2i, rho)

**first step = gathering of inputs: calculating many measures such as PD, recovery rate statistics, factor correlations and their relationship to the obligor, yield curve data, and individual exposures that are distinct from the other inputs

**The company surplus variable Sj obeys the linear factor model:
Sj = wj x F + sqrt(1 - wj^2) x eps_j (F = common factor, eps_j = idiosyncratic term, both standardized)
The index model enables a straightforward calculation of pairwise asset correlations:
Cov(Si, Sj) = wi x wj




4-5. Moody's KMV Portfolio Manager
The KMV model calculates the expected default frequencies (EDFs) for each obligor. With KMV's model, the capital structure includes equity, short-term debt, long-term debt, & convertible debt. KMV solves for the firm value and volatility.
Advantage:
- probabilities of default are obtained using the current equity value
- any event that affects firm value translates directly into a change in the probability of default; probabilities of default change continually rather than only when ratings change.
-> accurate and timely information from the equity market provides a continuous credit monitoring process that is difficult and expensive to duplicate using traditional credit analysis

Features:
- Distance to default (ratio) determines the level of default risk: DD = [E(VT) - d*] / sigmaV
d* = short-term debt + 0.5 x long-term debt
VT = V0 exp[(mu - 0.5 sigmaV^2) T + sigmaV x sqrt(T) x Z]
- Ability to adjust to the credit cycle and ability to quickly reflect any deterioration in credit quality
- Work best in highly efficient liquid market conditions

Weaknesses:
- it requires some subjective estimation of the input parameters
- it is difficult to construct theoretical EDFs without the assumption of normality of asset returns
- private firms' EDFs can be calculated only by using some comparability analysis based on accounting data
- it does not distinguish among different types of long-term bonds according to their seniority, collateral, covenants, or convertibility

e.g.) E(VT) = $80, sigma = $10
=> 98% (confidence level) of observations lie between +2.33 and -2.33 standard deviations from the mean. Hence, there is a 1% chance that firm value will fall to $80 - 2.33 x $10 = $56.70 or below; alternatively, there is a 99% probability that the equity holder will lose less than $80 - $56.70 = $23.30 in value; that is, $23.30 can be viewed as the VaR on the equity at the 99% confidence level.
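The same arithmetic, plus a distance to default with a hypothetical default point d* (the $50 is a made-up number):

ev, sigma = 80.0, 10.0
cutoff = ev - 2.33 * sigma    # 56.70: 1% chance of falling here or below
var_99 = ev - cutoff          # 23.30: equity VaR at 99%
d_star = 50.0                 # hypothetical: short-term debt + 0.5 x long-term debt
dd = (ev - d_star) / sigma    # 3.0 standard deviations to the default point
print(cutoff, var_99, dd)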



Limitations of the Credit Portfolio Models
Models do not take into account changes in interest rates, credit spreads or current economic conditions.

5. Credit Derivatives
Credit derivatives are financial instruments whose payoffs are contingent on credit risk realizations.
Credit Events:
- failure to make a required payment
- restructuring that makes any creditor worse off (debatable)
- invocation of cross-default clause
- bankruptcy

* obligation acceleration, obligation default
**a credit agency downgrade is not a default event (if the rating does not fall below the relevant threshold)

5-1. Credit Default Put
A put on the firm value with the same maturity as the debt and with an exercise price equal to the face value of the debt

5-2. Credit Default Swap (CDS)
With CDS, party A makes a fixed annual payment to party B, while party B pays the amount lost if a credit event occurs.
** video clips: http://www.youtube.com/watch?v=P2cUh-e_Qkc

*cash delivery: Z = midpoint between bid and ask price
cash payment = (100 - Z)% of the notional principal

5-3. Total rate of return swaps(TROR)
*video clips: http://www.youtube.com/watch?v=cmUXTFggIa0
- Protection Buyer: TROR Payer (owns the reference asset, synthetically short the reference asset)
- Protection Seller: TROR Seller (synthetically long the reference asset)
- TROR Payer pays Total Return (Income + delta Value), e.g.) coupon + change in market value
- TROR Seller pays LIBOR + Spread (based on receiver's credit rating)
**buyer: transfers default risk, credit deterioration risk, & market risk to the seller
**seller: virtually eliminates funding costs
**spread depends on the credit risk of the reference asset, the creditworthiness of the receiver, and the correlation of credit quality between the reference asset issuer and the total-return swap receiver


6. Credit Risks of Derivatives
Vulnerable Option: an option with default risk
Without the default risk, the holder of the option at expiration receives:
Max(S - K,0)

*The payoff of the vulnerable option is:
Max[Min(V, S - K), 0]
where:
V = a firm's Value
S = underlying asset's price at expiration
K = exercise price

The correlation between firm's value & underlying asset value is important in the valuations of the vulnerable option.
- strongly negative correlation -> vulnerable option has little value
- strongly positive correlation -> no credit risk

If the option has credit risk, then a derivative can be written to eliminate the credit risk. If the price of the vulnerable option can be estimated then the price of the credit derivative to insure the vulnerable option can be determined.
The appropriate credit derivative is one that pays the difference between a call without default risk and the vulnerable call:
Max(S - K,0) - Max[Min(V, S - K), 0]

Alternative approach:
Vulnerable Option = [(1 -PD) x c] + (PD x RR x c)
where:
c = value of the option without default
PD = probability of default
RR = recovery rate


e.g.) PD = 5%, RR = 50%
=> (1 - 0.05) x c + 0.05 x 0.5 x c = 0.975c => the vulnerable option is worth 97.5% of the value of the option that is free of default risk
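A two-line check of the formula; c is a made-up default-free option value:

pd_, rr = 0.05, 0.50
factor = (1 - pd_) + pd_ * rr   # 0.975
c = 4.20                        # made-up default-free option value
print(factor, factor * c)       # the vulnerable option is worth 97.5% of c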

The credit risk in a swap can be reduced by requiring a margin or by netting the payments.
Netting means that the payments between the two counterparties are netted out, so that only a net payment has to be made (full two-way payment covenant).

Market Maker's payoff (absent default risk):
S - F (whether S > F or S < F)

Swap's payoff to Market Maker:
-Max[F - S, 0] + Max[Min(S, V) - F,0]

Ch6 Credit Risks & Credit Derivatives (2)

3-4. Interest Rate Dynamics
Unanticipated changes in interest rates can affect debt value:
- an increase in interest rates reduces the P.V. of promised coupon payments absent credit risk -> reduces the value of debt
- an increase in interest rates can also affect firm value itself (an empirical question)


**Vasicek model
The change in the spot interest rate over a period of length delta_t is:
delta_r = lambda x (k - rt) x delta_t + sigma_t x epsilon_t x sqrt(delta_t)
where:
lambda = speed that interest rate reverts to the long-run mean, k
k = long run equilibrium value towards which the interest rate reverts
rt = current spot interest rate
sigmat = interest rate volatility
epsilont = random error term (random shock)

* mean reversion: both a stock's high and low prices are temporary; prices tend to revert toward an average over time.
* basic idea of the Vasicek model: interest rates cannot rise/decrease indefinitely because at very high/low levels they would hamper economic activity & prompt a decrease/increase in interest rates => interest rates move in a limited range, showing a tendency to revert to a long-run value.
* lambda x (k - rt): expected instantaneous change in the interest rate at time t => when rt < k, the expected change is positive, generating a tendency for the interest rate to move upwards (toward equilibrium); when rt > k, the reverse holds

The value of risky debt is:

The value of the debt falls as the correlation between firm value and interest rate shocks increases. The impact of an increase in firm value on the value of the debt is more likely to be dampened by a simultaneous interest rate increase.
An increase in interest rate volatility and an increase in the speed of mean reversion reduce debt value.
At highly volatile interest rates, the value of the debt is less sensitive to changes in interest rates => hedge ratios depend on the parameters of the dynamics of interest rates

3-5. Application Difficulties
Empirical research:
- a naive model that predicts debt is riskless works better for investment-grade bonds than the Merton model
- the Merton model works better than the naive model for debt below investment grade
- the Merton model cannot predict credit spreads



4. Credit Risk Models
4-1. Merton Model
Challenge of measuring the risk of a debt portfolio:
- most debt instruments are not publicly traded
- the historical data is not reliable if securities are illiquid
- the distribution of bond returns is not normal
- debt is issued by creditors who do not have traded equity
- debt is not marked to market -> a loss is recognized only if default occurs

**assumptions:
- firm value is lognormally distributed with a constant volatility
- firm only has one liability, which is zero-coupon debt issue

PD (probability of default) & EL (expected loss):
PD = N(-d2), with d2 from the BSM formula applied to firm value; EL further depends on the shortfall of firm value below the face value of debt

**The following portfolio credit risk models resolve some of the difficulties of measuring a portfolio's probability of default and the amount of loss associated with default when using the Merton model.



4-2. Credit VaR
Credit VaR differs from market VaR in that it measures losses that are due specifically to default risk and credit deterioration risk.
**problems
- calculating changes in credit quality over a one-day period is difficult; credit VaR is usually calculated over a year
- changes in credit risk are highly skewed and do not follow a normal distribution; the loss distribution of changes in credit quality for investment grade bonds closely resembles a lognormal distribution



4-3. CreditRisk+
It measures the credit risk of a portfolio using a set of common risk factors for each obligor. It allows only two outcomes for each firm over the risk measurement period for a loss of a fixed size: default and no default. The probability of default for an obligor depends on its rating, the realization of K risk factors, and the sensitivity of the obligor to the risk factors. Conditional on the risk factors, defaults are uncorrelated across obligors. The risk factors can take only positive values and are scaled so that they have a mean of one. The model assumes that the risk factors follow a specific statistical distribution, the gamma distribution. If the kth risk factor has a realization above one, this increases the probability of default of firm i in proportion to the obligor's exposure to that risk factor, measured by wik. Once we have computed the probability of default for all the obligors, we can get the distribution of the total number of defaults in the portfolio.

Friday, September 11, 2009

Ch6 Credit Risks & Credit Derivatives (1)

1. Credit Risk
Two important roles in risk management programs:
- parts of the risks a firm tries to manage in a risk management program
e.g.) the riskiness of the debt claims it holds against third parties
- positions in derivatives for the express purpose of risk management
e.g.) the riskiness of counterparties in the position


2. Merton Model
2-1. Assumptions:
- only one debt issue with zero coupon
- no dividend
- perfect financial market
- no taxes, no bankruptcy costs, & no costs associated with enforcing contracts
- only debt holders & equity holders claims against the firm

2-2. Value of Equity
ST = Max (VT - F, 0)
where:
VT = the value of the firm at date T
F = the face value of the debt

2-3. Value of Debt
DT = F - Max(F - VT, 0)


2-4. Additional Assumptions to modify the Black-Scholes-Merton Option-Pricing Model
- firm value characterized by a lognormal distribution with constant variance
- constant interest rate
- perfect financial market with continuous trading

2-5. Value of Equity at Time t
St = S(V, F, T, t) = V x N(d1) - F x Pt(T) x N(d2)
d1 = ln[V / (F x Pt(T))] / (sigma x sqrt(T-t)) + sigma x sqrt(T-t) / 2
d2 = d1 - sigma x sqrt(T-t)
where:
V = value of the firm
F = face value of the firm's zero coupon debt maturing at T
Pt(T) = price at t of a zero-coupon bond that pays $1 at T
N(d) = cumulative distribution function evaluated at d
sigma = volatility of firm value
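**a numerical sketch of this valuation in Python, assuming a flat risk-free rate r so that Pt(T) = e^(-r(T-t)); the function name and inputs are illustrative:
```python
from math import exp, log, sqrt
from scipy.stats import norm

def merton_equity(V, F, r, sigma, tau):
    """Equity as a call on firm value; tau = T - t, flat risk-free rate r assumed."""
    P = exp(-r * tau)                              # Pt(T): price of a $1 zero-coupon bond
    d1 = log(V / (F * P)) / (sigma * sqrt(tau)) + 0.5 * sigma * sqrt(tau)
    d2 = d1 - sigma * sqrt(tau)
    S = V * norm.cdf(d1) - F * P * norm.cdf(d2)    # value of equity
    return S, V - S                                # debt is the residual claim: D = V - S

S, D = merton_equity(V=120.0, F=100.0, r=0.05, sigma=0.30, tau=5.0)
```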



3. Credit Spreads, Time to Maturity, & Interest Rates
3-1. Credit Spreads: the difference between the yield on a risky bond and the yield on a risk-free bond of the same maturity

Credit Spread = ln(F / D) / (T - t) - r

where:
D = current value of debt
F = face value of debt
T-t = remaining maturity
r = continuously compounded yield on the risk-free bond

**As time to maturity increases, credit spreads (of both high-rated & low-rated debt) tend to widen. For very risky debt, credit spreads narrow as maturity approaches. As interest rates increase, the expected value of the firm at maturity increases, the risk of default decreases, & the credit spread decreases.
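**a quick numerical check of the spread formula (a Python sketch; D, F, T-t, and r are illustrative):
```python
from math import log

D, F, tau, r = 70.0, 100.0, 5.0, 0.05   # current debt value, face value, T-t, risk-free rate
risky_yield = log(F / D) / tau          # continuously compounded yield on the risky bond
credit_spread = risky_yield - r         # about 0.0213, i.e. roughly 213 bps
```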

3-2. Firm Value & Volatility
Nontraded securities:
- we cannot observe firm value directly
- we cannot trade the firm to hedge a claim whose value depends on the value of the firm

With Merton's model, the only random variable that affects the value of claims on the firm is the total value of the firm. A portfolio consisting of delta units of firm value plus a short position in the risk-free asset replicates equity; inverting this relationship, we can use equity and the risk-free asset to construct a portfolio that replicates the firm as a whole. -> the delta of equity must be estimated to do this.
To compute the delta of equity from Merton's formula, N(d1), we need to know firm value, the volatility of firm value, the promised debt payment, the risk-free interest rate, and the maturity of the debt. -> If we have an estimate of delta & know the value of the firm's equity, then we can solve for firm value & the volatility of firm value.
e.g.) unknown delta, D(V, 100, t+5, t), value per share: $14.10, shares outstanding: 5M, interest rate: 10%
=> S(V, 100, t+5, t) = 14.10 x 5M = $70.5M
S(V, 100, t+5, t) = c(V, 100, t+5, t)
If there are options traded on the firm's equity, then we can get the volatility of equity using an option-pricing formula and deduce the volatility of the firm from the volatility of equity.
**The Black-Scholes formula does not apply to such a traded call option, because it is a call option on equity, which is itself a call on firm value => an option on an option (compound option). The volatility of equity is not constant (it changes with firm value) -> a violation of the Black-Scholes-Merton assumptions.
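**a Python sketch of the standard two-equation calibration: solve the Merton equity-value equation together with the delta relation sigma_E x E = N(d1) x sigma_V x V for firm value and firm volatility. The 60% equity volatility is an assumed input; the other numbers follow the example above:
```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def solve_firm_value_vol(E, sigma_E, F, r, tau):
    """Solve the Merton equity equation and the delta relation for (V, sigma_V)."""
    def equations(x):
        V, sigma_V = x
        d1 = (np.log(V / F) + (r + 0.5 * sigma_V**2) * tau) / (sigma_V * np.sqrt(tau))
        d2 = d1 - sigma_V * np.sqrt(tau)
        eq1 = V * norm.cdf(d1) - F * np.exp(-r * tau) * norm.cdf(d2) - E  # equity value
        eq2 = norm.cdf(d1) * sigma_V * V - sigma_E * E                    # delta links the volatilities
        return [eq1, eq2]
    return fsolve(equations, x0=[E + F, sigma_E])  # equity plus face value as a starting guess

# Numbers from the example above; the 60% equity volatility is an assumed input
V, sigma_V = solve_firm_value_vol(E=70.5, sigma_E=0.60, F=100.0, r=0.10, tau=5.0)
```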

Geske compound option model
- appropriate for compound options
- lognormal distribution with constant volatility
If we know the value of equity and the price of a traded call on that equity, we can obtain firm value and firm volatility using Geske's formula.

e.g.) assumed firm value per share: $25, firm volatility: 50%
=> value of the call option using the compound-option model = $6.0349 & value of equity = $15.50
If the actual call option price = $6.72 & the actual equity price = $14.10,
=> the assumed firm value is too high and the assumed firm volatility is too low. To produce model values equal to the observed values, firm value should be $21 per share and firm volatility 68.36%.

3-3. Subordinated Debt
An increase in firm volatility makes it more likely that subordinated debt will be paid off and hence increases the value of subordinated debt. <-> Senior debt always falls in value when firm volatility increases.
The value of the firm:
V = D(V, F, T, t) + SD(V, U, T, t) + S(V, U + F, T, t)
where:
U = face value of the subordinated debt
D = senior debt
SD = subordinated debt
S = equity = c(V, U + F, T, t)

-> D(V, F, T, t) = V - c(V, F, T, t)
SD(V, U, T, t) = V - c(V, F + U, T, t) - [V - c(V, F, T, t)] = c(V, F, T, t) - c(V, F + U, T, t)

Subordinated debt can be valued as a portfolio: a long position in a call option on the firm with an exercise price equal to the face value of senior debt, and a short position in a call option on the firm with an exercise price equal to the total principal due on all debt (see the sketch below).
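**a Python sketch of this decomposition, pricing each claim as a portfolio of Black-Scholes calls on firm value; the inputs are illustrative:
```python
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(V, K, r, sigma, tau):
    """Black-Scholes call on firm value V with strike K (building block)."""
    d1 = (log(V / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return V * norm.cdf(d1) - K * exp(-r * tau) * norm.cdf(d2)

V, F, U, r, sigma, tau = 150.0, 80.0, 40.0, 0.05, 0.35, 5.0  # illustrative inputs
senior = V - bs_call(V, F, r, sigma, tau)                     # D = V - c(V, F)
sub = bs_call(V, F, r, sigma, tau) - bs_call(V, F + U, r, sigma, tau)
equity = bs_call(V, F + U, r, sigma, tau)                     # S = c(V, F + U)
assert abs(senior + sub + equity - V) < 1e-9                  # the claims sum to firm value
```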

Ch6 Measuring & Marking Counterparty Risk

1. Definitions

Counterparty risk
A risk that a party to an OTC derivatives contract may fail to perform on its contractual obligations, causing losses to the other party
- replacement cost
- bilateral

Counterparty exposure
The larger of zero and the market value of the portfolio of derivative positions with a counterparty that would be lost if the counterparty were to default and there were zero recovery

Current exposure (CE)
The current value of the exposure to a counterparty

Potential future exposure (PFE)
The maximum amount of exposure expected to occur on a future date with a high degree of statistical confidence
- MPFE
- PFE(t)

Expected exposure (EE)
The average exposure on a future date
**expected exposure profile (=credit equivalent or loan equivalent exposure curve)
The expected exposure profile is derived from a Monte Carlo simulation by calculating the probability-weighted mean (average) exposure of the distribution of exposures at each future date for the portfolio of transactions. The EE profile is commonly used by banks that take a simulation approach to calculating exposures against their credit limits.

- a graph of EE(t) across time
- credit-equivalent/loan-equivalent exposure curve -> economic capital & credit pricing




Expected positive exposure (EPE)
The average of EE(t) for t in a certain interval; the weighted average over time of the expected exposure, where the weights are the proportion of the entire exposure horizon time interval that each individual expected exposure represents.
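**a Python sketch of the EE(t), PFE(t), and EPE calculations from simulated exposure paths; the mark-to-market dynamics are an arbitrary assumption:
```python
import numpy as np

# EE(t), PFE(t), and EPE from simulated exposure paths.
rng = np.random.default_rng(1)
n_paths, n_dates = 10_000, 12
mtm = np.cumsum(rng.standard_normal((n_paths, n_dates)), axis=1)  # simulated MtM paths

exposure = np.maximum(mtm, 0.0)                # exposure = larger of zero and MtM
ee_profile = exposure.mean(axis=0)             # EE(t): average exposure at each future date
pfe_95 = np.percentile(exposure, 95, axis=0)   # PFE(t) at 95% confidence
epe = ee_profile.mean()                        # EPE: equally weighted time average of EE(t)
```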

Right-way/wrong-way exposure
Positively/negatively correlated with the credit quality of the counterparty

Credit risk mitigants
Designed to reduce credit exposures, such as netting rights, collateral agreements, & early settlement provisions
- Liquidity puts: a "knock-in" barrier option where the barrier is a liquidity metric. An investor holds a bond and buys a liquidity put that is "knocked into" existence only if the bond's liquidity reaches some low barrier (e.g., trading volume falls below X for Y consecutive days). If the barrier is reached (the option knocks in), the investor has the right to sell the bond to the option seller at the market price.
- Credit triggers


2. Estimating PFE
*Components of a PFE measurement system include:
- Historical databases
- Monte Carlo simulation engines
- Trading pricing calculators
- Exposure calculators
- Reporting tools

2-1. Simulation engine
- normal diffusion process: e.g.) low interest rates
- lognormal diffusion process: e.g.) high interest rates, major foreign exchanges
- jump-diffusion process: e.g.) emerging markets => a normal, continuous price-diffusion process modelled by geometric Brownian motion with mean reversion and a volatility term structure
-> plus an abnormal, discontinuous jump process modelled by a Poisson distribution (see the sketch after this list)

- mean-reversion: a random-walk process that fluctuates around values determined by the cost of the asset and the level of demand
- risk-neutral probabilities: used for the arbitrage-free pricing of assets for which replication strategies exist.
- correlations among market risk factors
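**a Python sketch of one jump-diffusion path (GBM plus Poisson-driven jumps); all parameters are assumptions, and the mean reversion and volatility term structure mentioned above are omitted for brevity:
```python
import numpy as np

# One jump-diffusion price path: GBM diffusion plus Poisson-driven jumps.
rng = np.random.default_rng(2)
mu, sigma, lam, jump_vol = 0.05, 0.20, 0.5, 0.10  # drift, diffusion vol, jump intensity, jump-size vol
dt, n_steps, s0 = 1 / 252, 252, 100.0

log_s = np.log(s0)
for _ in range(n_steps):
    log_s += (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    n_jumps = rng.poisson(lam * dt)               # discontinuous jump component
    log_s += jump_vol * rng.standard_normal() * n_jumps
price = np.exp(log_s)
```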

2-2. Trading pricing
- all trades with the counterparty must be priced to calculate the exposure in the future market scenario

2-4. Exposure calculation
- fundamental concepts: netting nodes & margin nodes
- counterparty exposure is determined by (i) calculating the exposure in each netting node, (ii) adding all netting-node exposures, (iii) calculating the collateral posted/received for each margin node, (iv) adding all collateral posted/received, & (v) calculating the net exposure to the counterparty as (ii) - (iv), as sketched below
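**a toy Python version of steps (i)-(v); trade values and collateral are hypothetical, and margin nodes are assumed to coincide with netting nodes for simplicity:
```python
# Toy netting-node exposure aggregation with collateral.
netting_nodes = {"node_A": [12.0, -5.0, 3.0],     # trade values within each netting node
                 "node_B": [-8.0, 2.0]}
collateral_received = {"node_A": 4.0, "node_B": 0.0}

gross = sum(max(sum(v), 0.0) for v in netting_nodes.values())  # (i)-(ii) netted, floored at zero
collateral = sum(collateral_received.values())                  # (iii)-(iv) total collateral
net_exposure = max(gross - collateral, 0.0)                     # (v) net counterparty exposure
```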

2-5. Model validation & control
- standards comparable to front-office calculators


2-6. Applications of Exposure Modeling
PFE models are used for
- trade approvals against credit line limits
- credit risk valuation
- economic & regulatory capital

* gross-up factor = EC under market & credit uncertainty / EC under credit uncertainty (EC = economic capital)
**BASEL I: Capital = EAD x Counterparty Risk Weight x 8%, where EAD = CE + PFE & RW = F(PD, LGD, Maturity); EAD = exposure at default, RW = risk weight
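**a one-line numerical illustration of the BASEL I charge (Python); CE, PFE, and the risk weight are made up:
```python
ce, pfe, risk_weight = 4.0, 6.0, 0.5   # illustrative CE, PFE (in $MM) and counterparty risk weight
ead = ce + pfe                         # exposure at default = CE + PFE
capital = ead * risk_weight * 0.08     # BASEL I capital charge = 0.4 ($MM)
```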


3. Market Valuation of Credit Exposures
CVA (credit valuation adjustment) of an OTC derivatives portfolio with a given counterparty is the market value of the credit risk due to any failure to perform on agreements with that counterparty. CVA adjusts the mid-market value of the portfolio to reflect the credit risk of both counterparties.

**mid-market price: price of a security between its bid and offer prices, used in computing investment performance statistics

e.g.) Party A (the pound receiver) has an OTC pound-dollar currency swap with Party B; mid-market value to A = 150, market value of A's own default risk (to B) = 3, market value of B's default risk to A = 8
=> net downward credit risk adjustment = 8 - 3 = 5; hence, fair market value to A = 150 - 5 = 145

3-1. Risk premia
- the difference between the expected return on a security or portfolio and the riskless rate of interest
The risk-neutral mean loss rate includes an artificially high mean loss rate that reflects a risk premium for accepting the higher default risk.


3-2. Mean* exposure times mean* loss rate
Computing the total market value, V(t), of default risk during the t-th future time period:
i. Calculate EE*(t), the risk-neutral expected exposure for period t
ii. Calculate the risk-neutral mean default loss rate L*(t) associated with the period
iii. Obtain C(t), the price of a default-free zero-coupon bond of maturity t
iv. Calculate V(t) = EE*(t) x L*(t) x C(t)
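**a Python sketch of steps i-iv with hypothetical per-period inputs; the total market value of default risk is the sum of V(t) across periods:
```python
# Steps i-iv with hypothetical per-period inputs.
ee_star = [10.0, 12.0, 9.0]       # i.   risk-neutral expected exposure EE*(t)
l_star = [0.010, 0.012, 0.015]    # ii.  risk-neutral mean default loss rate L*(t)
c = [0.99, 0.97, 0.95]            # iii. default-free zero-coupon bond prices C(t)

v_t = [e * l * p for e, l, p in zip(ee_star, l_star, c)]  # iv. V(t) = EE*(t) x L*(t) x C(t)
total_default_risk_value = sum(v_t)                        # market value of default risk
```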

3-2-1. Mean Loss Rate; expected loss on a risky bond due to default risk
Mean Loss Rate = PD x (1 - Recovery Rate)

e.g.) A bond with face value of $100,000, probability of default = 40%, recovery rate = 50%, actual price = $70,000
=> Mean Loss Rate = 40% x (1 - 50%) = 20%
Assuming that interest rates are zero, the bond value at the mean loss rate = 100,000 x (1 - .2) = $80,000
Risk-neutral mean loss rate = 1 - 70,000 / 100,000 = 30%
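**the example's arithmetic in Python:
```python
face, price, pd_, recovery = 100_000, 70_000, 0.40, 0.50
mean_loss_rate = pd_ * (1 - recovery)             # 0.20
value_at_mean_loss = face * (1 - mean_loss_rate)  # $80,000 (zero interest rates assumed)
risk_neutral_loss_rate = 1 - price / face         # 0.30, higher than the mean loss rate
```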

**risk-neutral mean loss rate: a rate at which investors act as if they are risk-neutral because the rate includes an artificially high mean loss rate that reflects a risk premium for accepting the higher default risk
- credit spread: the difference between the risky bond rate and the risk-free rate
=> a proxy for the annualized risk-neutral loss rate
- default swap rate: a proxy for the risk-neutral mean loss rate paid by the protection buyer in a credit default swap agreement

3-3. General Monte Carlo approach
A risk-neutral Monte Carlo simulation is used to estimate the market value of credit risk in a bilateral OTC portfolio (a stylized implementation follows the steps below):
i. Initiate a new independently simulated scenario
ii. Simulate the next exposure of Counterparty A to default by Counterparty B
iii. Simulate whether or not B defaults at that date, and whether A defaults at that date
iv. If, at a given date, Counterparty B defaults and Counterparty A has not already defaulted, then simulate the fraction of the net exposure that is lost
v. Simulate the path of short-term interest rates
vi. Discount the losses to Counterparty A to present market value, using the compounded short-term interest rates for this scenario
vii. Return to step (i), unless a sufficiently large number of scenarios has been run to obtain the approximate effect of the law of large numbers
viii. Average the results of (vi) over all independently generated scenarios
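**a stylized Python implementation of steps i-viii; the exposure dynamics, default intensities, loss given default, and flat short rate are all assumptions:
```python
import numpy as np

# Stylized version of steps i-viii of the risk-neutral Monte Carlo approach.
rng = np.random.default_rng(3)
n_scen, n_steps, dt = 20_000, 20, 0.25
lam_B, lam_A, lgd = 0.02, 0.01, 0.60   # default intensities for B and A; loss given default
r, vol = 0.03, 2.0                     # flat short rate; exposure volatility

losses = np.zeros(n_scen)
for s in range(n_scen):                                        # i.   new independent scenario
    exposure, discount, a_defaulted = 0.0, 1.0, False
    for _ in range(n_steps):
        exposure += vol * np.sqrt(dt) * rng.standard_normal()  # ii.  exposure path
        b_defaults = rng.random() < lam_B * dt                 # iii. default draws for B and A
        a_defaulted |= rng.random() < lam_A * dt
        discount *= np.exp(-r * dt)                            # v.-vi. path-wise discounting
        if b_defaults and not a_defaulted:                     # iv.  loss if B defaults first
            losses[s] = lgd * max(exposure, 0.0) * discount
            break

credit_risk_value = losses.mean()                              # vii.-viii. average across scenarios
```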

3-4. General remarks on credit adjustments