\documentclass[12pt,thmsa]{article}
\begin{document}
\author{Jonathan Goodman\thanks{Courant Institute of Mathematical
Sciences at New York University. Partially supported by a grant from the
National Science Foundation.} and Daniel N. Ostrov\thanks{Department of
Mathematics and Computer Science at Santa Clara University. Partially
supported by a grant from the National Science Foundation under Grant No.
DMS-9704864.}}
\title{On the Early Exercise Boundary of the American Put Option}
\date{August 10, 2000}
\maketitle
\section{Introduction}
The key to determining the value of an American put option is finding the
``early exercise boundary'', which specifies the conditions under which the
option should be exercised before it expires. Unfortunately, the early
exercise boundary cannot be solved in closed form, and, as a result, the
option's value cannot be solved in closed form either. In this paper,
however, we will be able to determine a relatively simple integral equation
for the early exercise boundary by exploiting the especially strong
connections between the Black-Scholes equation governing \textit{the
derivative} with respect to time of the option's value and the classical
Stefan problem, which governs melting ice. Our integral equation is suitable
for iteration like nonlinear Volterra equations of the second kind --- which
it closely resembles --- and also will be used to find an analytic
approximation to the early exercise boundary when the option is close to
expiration. Our analytic approximation will be consistent with the rigorous
bounds determined by Barles et al.\ in \cite{Barles} but not entirely
consistent with the work of Kuske and Keller in \cite{KK} as will be
explained and accounted for below.
We begin with the Black-Scholes theory \cite{Merton} for the value of an
American put option, $f(S,t),$ where $S$ is the price of the stock
underlying the option and $t$ is the time remaining until the option
expires. The American --- as opposed to European --- put option may be
exercised at any time for a yield of $K-S,$ where $K$ is the strike price of
the option. This leads to our splitting the domain $\{S\in (0,\infty
),$ $t\in [0,\infty )\}$ into two regions, which are separated by the early
exercise boundary, $S=b(t)$: If $S\leq b(t)$ then the option should be
immediately exercised and therefore $f(S,t)=K-S;$ if $S>b(t)$ then the
option should not be exercised, and $f(S,t)$ --- which is greater than $K-S$
--- is governed by the Black-Scholes parabolic partial differential
equation:
\begin{equation}
f_{t}=\frac{\sigma ^{2}S^{2}}{2}f_{SS}+rSf_{S}-rf \label{1a}
\end{equation}
where $\sigma $ is the stock's volatility and $r$ is the risk free interest
rate. At $t=0$ the option must be exercised or discarded, so $f(S,0)=\max
(K-S,0).$ The final fact we need, which is possibly less intuitive but easy
to establish mathematically \cite{Merton}, is that $f_{S}(b(t),t)=-1\ $for $
t>0;$ i.e., $f$ is $C^{1}$ smooth on the early exercise boundary.
There are useful parallels between the Black-Scholes equation for $f$ and
the Stefan problem for the temperature of water bordering a melting block of
ice. The equations for both $f$ and the water temperature are parabolic
partial differential equations and, more importantly, both contain ``free
boundaries'', that is, boundaries that are not explicitly known, which
correspond to the early exercise boundary, $b(t),$ in the Black-Scholes case
and the location of the interface between the water and the ice in the
Stefan problem. While these parallels can be exploited to find integral
equations for $b(t)$ (see \cite{KK}), we will show that the parallels
between the formulation of the Black-Scholes equation for the time
derivative, $f_{t}$, and the Stefan problem are even stronger, since, in
addition to their both being free boundary parabolic partial differential
equation problems, their conditions on the free boundary and at $t=0$ also
heavily parallel each other. This will allow us to adapt standard techniques
for the Stefan problem, see, e.g., \cite{GL}, to determine the following
integral representation for $f_{t}(S,t)$ in the region $S>b(t):$
\begin{equation}
f_{t}(S,t)=\hat{G}\left( \ln \left( \frac{S}{K}\right) ,t\right) -\frac{2r}{
\sigma ^{2}}\int_{0}^{t}\frac{\dot{b}(s)}{b(s)}\hat{G}\left( \ln \left(
\frac{S}{b(s)}\right) ,t-s\right) ds \label{1b}
\end{equation}
where $\hat{G}$ is the following Green's function:
\[
\hat{G}\left( X,T\right) =\frac{K\sigma }{\sqrt{8\pi T}}\exp \left[ -\frac{
\left( X-\left( r-\frac{\sigma ^{2}}{2}\right) T\right) ^{2}}{2\sigma ^{2}T}
-rT\right] .
\]
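As a numerical sanity check on $\hat{G}$, note that $\hat{G}(\cdot ,T)$ is the constant $\frac{K\sigma ^{2}}{2}e^{-rT}$ times a Gaussian density in $X$, so it integrates in $X$ to that constant. The following Python sketch verifies this; the values $K=1$, $\sigma =0.4$, $r=0.06$ are illustrative choices, not parameters from the paper.

```python
import math

def G_hat(X, T, K=1.0, sigma=0.4, r=0.06):
    """The Green's function \\hat{G}(X,T) from the text."""
    drift = (r - sigma**2 / 2.0) * T
    return (K * sigma / math.sqrt(8.0 * math.pi * T)
            * math.exp(-(X - drift)**2 / (2.0 * sigma**2 * T) - r * T))

# Midpoint rule over a range wide enough that the Gaussian tails are negligible.
K, sigma, r, T = 1.0, 0.4, 0.06, 0.5
lo, hi, n = -5.0, 5.0, 20000
dX = (hi - lo) / n
mass = sum(G_hat(lo + (i + 0.5) * dX, T, K, sigma, r) for i in range(n)) * dX
exact = K * sigma**2 / 2.0 * math.exp(-r * T)
assert abs(mass - exact) < 1e-8
```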
Delicate problems may arise when letting $S\rightarrow b(t)$ in equations
similar to $\left( \ref{1b}\right) $; however, we will show that these
problems do not arise for $\left( \ref{1b}\right) ,$ allowing us to easily
obtain our integral equation for $b(t),$ which is
\begin{equation}
\hat{G}\left( \ln \left( \frac{b(t)}{K}\right) ,t\right) =-\frac{2r}{\sigma
^{2}}\int_{0}^{t}\frac{\dot{b}(s)}{b(s)}\hat{G}\left( \ln \left( \frac{b(t)}{
b(s)}\right) ,t-s\right) ds. \label{1c}
\end{equation}
(Note that in the next section $\left( \ref{1c}\right) $ is rewritten as $
\left( \ref{bp}\right) ,$ which has a simpler form due to the use of
scaled variables. We use unscaled variables in this section, which are more
useful for applications, and introduce scaled variables in
the next section, which are more useful mathematically. Both types of
variables commonly appear in the literature.) The closest equation to
$\left(
\ref{1c}\right)
$ that we have found in the literature is by Hou et al.\ (see equation
(3.11) in
\cite {L}). Both the Hou expression and ours relate $b(t)$ to a single
integral; their expression is more complex, but it has the advantage of not
containing
$\dot{b}(t)$.
We next consider the nature of $b(t)$ when $t$ is small. It is easy to
establish that $b(t)\rightarrow K$ as $t\rightarrow 0$.
\section{The integral equation}
We recall that $f(S,t)$ satisfies the Black-Scholes equation $\left( \ref{1a}
\right) $ in the region $S>b(t),$ together with $f$'s boundary conditions
\begin{eqnarray}
f(b(t),t) &=&K-b(t) \label{2a} \\
f_{S}(b(t),t) &=&-1 \label{2b}
\end{eqnarray}
and $f$'s initial condition
\begin{equation}
f(S,0)=\max (K-S,0). \label{2c}
\end{equation}
The characterization of $f_{t}$ is similar. Clearly the partial differential
equation is not changed:
\begin{equation}
\left( f_{t}\right) _{t}=\frac{\sigma ^{2}}{2}S^{2}\left( f_{t}\right)
_{SS}+rS\left( f_{t}\right) _{S}-r\left( f_{t}\right) . \label{2d}
\end{equation}
We need to find two boundary conditions to replace $\left( \ref{2a}\right) $
and $\left( \ref{2b}\right) $. Differentiating $\left( \ref{2a}\right) $
with respect to $t$ gives:
\[
f_{S}\dot{b}+f_{t}=-\dot{b}.
\]
Using $\left( \ref{2b}\right) $, this simplifies to
\begin{equation}
f_{t}(b(t),t)=0. \label{2e}
\end{equation}
This is the first of the two desired conditions. Substituting $\left( \ref
{2e}\right) $ into $\left( \ref{1a}\right) $ gives, with $S=b$,
\[
0=\frac{\sigma ^{2}}{2}S^{2}f_{SS}+rSf_{S}-rf.
\]
This, together with $\left( \ref{2a}\right) $ and $\left( \ref{2b}\right) $,
gives
\begin{equation}
f_{SS}(b(t),t)=\frac{2Kr}{\sigma ^{2}b^{2}(t)}, \label{2f}
\end{equation}
which was exploited in the introduction to find a rough estimate of $b(t)$.
Finally, differentiating $\left( \ref{2b}\right) $ with respect to $t$,
\[
f_{SS}\dot{b}+f_{St}=0,
\]
and using $\left( \ref{2f}\right) $ gives the second boundary condition:
\begin{equation}
\left( f_{t}\right) _{S}(b(t),t)=-\frac{2Kr}{\sigma ^{2}b^{2}(t)}\dot{b}(t).
\label{2g}
\end{equation}
For the purpose of deriving the integral equation, the initial data for $
f_{t}$ will be
\begin{equation}
f_{t}(S,0)=\frac{\sigma ^{2}K^{2}}{2}\delta (S-K) \label{2h}
\end{equation}
for the following reason: As will become clear below, we are only interested
in $\int_{b(t)}^{\infty }f_{t}(S,t)dS$ as $t\rightarrow 0.$ We can analyze
this integral using $\left( \ref{1a}\right) $ by noting first that $
f_{S}(S,t)$ and $f(S,t)$ are bounded and go to zero as $t\rightarrow 0$ for
any $S>K$. While $f_{SS}(S,t)$ also goes to zero as $t\rightarrow 0$ for any
$S>K,$ $f_{SS}(S,t)$ is not bounded as $t\rightarrow 0$ at $S=K.$ Rather, $
f_{S}(b(t),t)=-1$ so, for any small $\varepsilon >0,$ $\int_{b(t)}^{K+
\varepsilon }f_{SS}(S,t)dS\rightarrow 1$ as $t\rightarrow 0.$ It is in this
sense that we say that $f_{SS}(S,0)=\delta (S-K),$ and therefore, applying $
\left( \ref{1a}\right) ,$ we have $\left( \ref{2h}\right) .$
The analogy of the two boundary conditions for $f_{t}$ to those that arise
in the physical Stefan problem is close. In the physical Stefan problem we
specify the temperature at the boundary where the ice is melting. This is
close to $\left( \ref{2e}\right) $. We also relate the speed of the boundary
motion, which is proportional to the rate at which ice melts, to the
gradient of the temperature at the boundary. This is similar to $\left( \ref
{2g}\right) $.
It is more convenient to work in traditional normalized coordinates with the
stock price variable normalized by the strike price:
\begin{equation}
S=Ke^{x}, \label{var}
\end{equation}
and time variable normalized by the volatility:
\[
t=\frac{2}{\sigma ^{2}}\tau.
\]
The early exercise boundary now occurs at $x=\beta$ where
\[
\beta =\ln \left( \frac{b}{K}\right) ,
\]
and we will denote the unknown in these new $x$ and $\tau$ variables by
$u(x,\tau )$:
\[
u(x,\tau )=\frac{2}{K\sigma ^{2}}f_{t}(S,t)=\frac{2}{K\sigma ^{2}}
f_{t}\left( Ke^{x},\frac{2}{\sigma ^{2}}\tau\right) .
\]
In these new variables, the partial differential equation $\left( \ref{1a}
\right) $ takes the simplified constant coefficient form
\begin{equation}
u_{\tau }=u_{xx}+(\rho -1)u_{x}-\rho u, \label{2i}
\end{equation}
where $\rho $ is a nondimensional measure of the risk free rate:
\[
\rho =\frac{2r}{\sigma ^{2}}.
\]
The boundary condition $\left( \ref{2e}\right) $ becomes
\begin{equation}
u(\beta ,\tau )=0. \label{2i1}
\end{equation}
To translate the other boundary condition $\left( \ref{2g}\right) $, use $
\dot{b}/b=\dot{\beta}$. Also, $\dot{\beta}=\frac{d\beta }{dt}=\frac{\sigma
^{2}}{2}\frac{d\beta }{d\tau }$, so the right side of $\left( \ref{2g}
\right) $ is
\[
\frac{-Kr}{b}\beta ^{\prime }
\]
where $\beta ^{\prime }$ denotes $\frac{d\beta }{d\tau }\ $(which we will
use throughout the rest of this paper). Since $\left( f_{t}\right) _{S}=
\frac{1}{S}\left( f_{t}\right) _{x}$ and we have $S=b$ at the early exercise
boundary, $\left( \ref{2g}\right) $ becomes simply
\begin{equation}
u_{x}(\beta ,\tau )=-\rho \beta ^{\prime }. \label{2i2}
\end{equation}
Finally, the initial condition $\left( \ref{2h}\right) $ translated into our
new variables is
\begin{equation}
u(x,0)=\delta (x). \label{2i3}
\end{equation}
This completes the formulation of the problem in dimensionless variables.
The standard method for finding an integral equation for the free boundary
(see, e.g., \cite{GL}) starts with the Green's function for $\left( \ref{2i}
\right) $,
\[
G(X,T)\equiv \frac{1}{\sqrt{4\pi T}}\exp \left[ -\frac{(X+(\rho -1)T)^{2}}{4T
}-\rho T\right] ,
\]
which satisfies
\begin{equation}
G_{T}=G_{XX}+(\rho -1)G_{X}-\rho G. \label{2j}
\end{equation}
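Equation $\left( \ref{2j}\right) $ is easy to confirm numerically. The Python sketch below checks it by central finite differences at one test point; the values $\rho =0.6$, $X=0.3$, $T=0.5$ are arbitrary illustrative choices.

```python
import math

def G(X, T, rho=0.6):
    """Dimensionless Green's function G(X,T) from the text."""
    return (1.0 / math.sqrt(4.0 * math.pi * T)
            * math.exp(-(X + (rho - 1.0) * T)**2 / (4.0 * T) - rho * T))

# Central differences for G_T, G_X, G_XX at an arbitrary smooth point.
rho, X, T, h = 0.6, 0.3, 0.5, 1e-3
G_T  = (G(X, T + h, rho) - G(X, T - h, rho)) / (2.0 * h)
G_X  = (G(X + h, T, rho) - G(X - h, T, rho)) / (2.0 * h)
G_XX = (G(X + h, T, rho) - 2.0 * G(X, T, rho) + G(X - h, T, rho)) / h**2

# Residual of G_T = G_XX + (rho - 1) G_X - rho G should be O(h^2).
residual = G_T - (G_XX + (rho - 1.0) * G_X - rho * G(X, T, rho))
assert abs(residual) < 1e-4
```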
(Note that $G$ is proportional to $\hat{G},$ the Green's function used in
the introduction, once we transform to the original variables.) If there were
no boundary, we would have, for any $s\in [0,\tau ),$
\begin{equation}
u(x,\tau )=\int_{-\infty }^{\infty }G(x-y,\tau -s)u(y,s)dy, \label{2k}
\end{equation}
so if we set $s=0,$ we can describe $u(x,\tau )$ in terms of its initial
condition.
For our free boundary problem we consider the analogue of the integral in $
\left( \ref{2k}\right) $:
\begin{equation}
I(x,\tau ,s)=\int_{\beta (s)}^{\infty }G(x-y,\tau -s)u(y,s)dy. \label{2l}
\end{equation}
Note that as $s\rightarrow \tau ,$ $G(x-y,\tau -s)$ converges to a Gaussian
function whose variance shrinks to zero; i.e., $\lim\limits_{s\rightarrow
\tau }G(x-y,\tau -s)=\delta (x-y)$ and therefore for any $x>\beta (\tau )$
\begin{equation}
\lim\limits_{s\rightarrow \tau }I(x,\tau ,s)=u(x,\tau ). \label{2m}
\end{equation}
With this in mind it is not surprising that we can relate the solution, $
u(x,\tau )$, to the initial condition by integrating $I_{s}(x,\tau ,s)$
between $s=0$ and $s=\tau .$
We first differentiate $\left( \ref{2l}\right) $ yielding
\begin{eqnarray*}
I_{s}(x,\tau ,s) &=&-\beta ^{\prime }(s)G(x-\beta (s),\tau -s)u(\beta (s),s)
\\
&&-\int_{\beta (s)}^{\infty }G_{T}(x-y,\tau -s)u(y,s)dy \\
&&+\int_{\beta (s)}^{\infty }G(x-y,\tau -s)u_{s}(y,s)dy.
\end{eqnarray*}
The first term on the right vanishes because of $\left( \ref{2i1}\right) .$
In the third integral, substitute $\left( \ref{2i}\right) ,$ integrate by
parts to put all the derivatives on $G,$ then use $\left( \ref{2j}\right) .$
Three terms evaluated on the boundary remain. Two vanish by the boundary
condition $\left( \ref{2i1}\right) ,$ and we apply the other boundary
condition $\left( \ref{2i2}\right) $ to the remaining term to obtain
\begin{equation}
I_{s}(x,\tau ,s)=\rho G(x-\beta (s),\tau -s)\beta ^{\prime }(s). \label{2n}
\end{equation}
Next we look at $I(x,\tau ,0).$ From the form of $\left( \ref{2l}\right) $
it is now clear why the initial condition $\left( \ref{2h}\right) $ was
formulated using the integral from the early exercise boundary to infinity.
This allows us to now substitute $\left( \ref{2i3}\right) $ into $\left( \ref
{2l}\right) $ yielding
\[
I(x,\tau ,0)=G(x,\tau ).
\]
Now we can integrate $I_{s}(x,\tau ,s)$ from $s=0$ to $s=\tau $ to obtain
\begin{equation}
u(x,\tau )=G(x,\tau )+\rho \int_{0}^{\tau }\beta ^{\prime }(s)G(x-\beta
(s),\tau -s)ds \label{bo}
\end{equation}
which corresponds to $\left( \ref{1b}\right) $ in the introduction. We
stress that $\left( \ref{bo}\right) $ has only been established for $x>\beta
(\tau ).$ If $x=\beta (\tau ),$ then we see that as we try to proceed from $
\left( \ref{2l}\right) $ to $\left( \ref{2m}\right) $ we only integrate over
\emph{half} of the Gaussian, which means that if $x=\beta (\tau ),$ a factor
of $1/2$ must be inserted in front of $u$ on the left hand side of $\left(
\ref{bo}\right) $ for the equation to be valid. This ends up being
unimportant for us since $u(\beta (\tau ),\tau )=0$, so, letting $x=\beta
(\tau ),$ we have from $\left( \ref{bo}\right) $ that
\begin{equation}
G\left( \beta (\tau ),\tau \right) =-\rho \int_{0}^{\tau }\beta ^{\prime
}(s)G\left( \beta (\tau )-\beta (s),\tau -s\right) ds, \label{bp}
\end{equation}
but this factor of $1/2$ has important ramifications for the Kuske and Keller
paper \cite{KK} and, in fact, explains the differences between our results and theirs
\footnote{
In \cite{KK}, their conclusion that $\beta (\tau )\approx -\sqrt{-2\tau \ln
\left( c\tau \ln \left( \frac{1}{c\tau }\right) \right) }$ where $c=4\pi
\rho ^{2}$ follows from the second-to-last equation in the paper's third
section. This equation is the analogue of $\left( \ref{bo}\right) $ for the
Black-Scholes equation for $f$ (which is composed of equation $\left( \ref
{1a}\right) $ and conditions $\left( \ref{2a}\right) $--$\left( \ref{2c}
\right) )$. For the reasons given above, we believe that the left hand side
of their equation converges to $\rho \tau /2$, as opposed to $\rho \tau ,\ $
as $\tau $ decreases. Since the first integral on the right hand side of
their equation is also shown to converge to $\rho \tau /2$ (note that there
is a typo causing a negative sign to be missing), these two terms cancel
each other which significantly alters the asymptotics and the final
expression for $\beta (\tau ).$}. Expression $\left( \ref{bp}\right) $---
which corresponds to expression $\left( \ref{1c}\right) $ if we transform
back to our original variables --- is similar to a Volterra equation of the
second kind. Specifically, like Volterra equations of the second kind, we
can try to iteratively solve for $\beta (\tau )$ by inserting an initial
expression for $\beta (\tau )$ into the integral on the right hand side of $
\left( \ref{bp}\right) $, which leads to a new expression for $\beta (\tau )$
on the left hand side. We then insert the new $\beta (\tau )$ into the
integral on the right hand side and repeat the process as many times as is
necessary.
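This iteration can be exercised numerically. The Python sketch below inserts the leading-order boundary $\beta (s)=-\sqrt{-2s\ln (cs)}$ with $c=4\pi \rho ^{2}$ (the form appearing in the footnote above) into the right hand side of $\left( \ref{bp}\right) $ by midpoint quadrature and checks that the two sides approximately agree for small $\tau $. We choose $\rho =1$, so that the $\rho -1$ drift term vanishes, and an illustrative $\tau =10^{-12}$; since this $\beta $ is only the leading-order boundary, the two sides agree only approximately.

```python
import math

rho = 1.0                      # dimensionless rate 2r/sigma^2; rho = 1 kills the drift
c = 4.0 * math.pi * rho**2
tau = 1e-12                    # small illustrative dimensionless time

def G(X, T):
    return (1.0 / math.sqrt(4.0 * math.pi * T)
            * math.exp(-(X + (rho - 1.0) * T)**2 / (4.0 * T) - rho * T))

def beta(s):                   # leading-order early exercise boundary (v = 1)
    return -math.sqrt(-2.0 * s * math.log(c * s))

def beta_prime(s):
    return (math.log(c * s) + 1.0) / math.sqrt(-2.0 * s * math.log(c * s))

lhs = G(beta(tau), tau)

# Midpoint rule; the 1/sqrt(tau - s) singularity at s = tau is integrable.
n = 200000
ds = tau / n
rhs = -rho * sum(
    beta_prime(ds * (i + 0.5)) * G(beta(tau) - beta(ds * (i + 0.5)),
                                   tau - ds * (i + 0.5))
    for i in range(n)
) * ds

# With the leading-order boundary, both sides are close to rho for small tau.
assert abs(lhs - rho) < 0.05
assert abs(rhs - lhs) < 0.2 * lhs
```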
\section{Short time asymptotics}
By translating the Barles result $\left( \ref{1f}\right) $ for the short
time asymptotics of the early exercise boundary into our new coordinates, we
have that
\begin{equation}
\beta (\tau )= -\sqrt{-2\tau \ln \left( v^{2}c\tau \right) }
\label{3a}
\end{equation}
where $c=4\pi\rho ^{2}$ and
$$1\leq \lim\limits_{\tau\to 0}v^{2}(\tau) \quad\hbox{and}\quad
\lim\limits_{\tau\to 0}\frac{v^{2}(\tau)}{-\ln
(c\tau )}\leq 1. $$
We will now use the iteration algorithm described above to
determine the nature of
$v$: Letting $v_{0}=1$ be our initial guess for $v,$ we will insert $\left(
\ref {3a}\right) $ with $v=v_{0}$ into the right hand side of $\left( \ref{bp}
\right) $ which will yield a value for $v$ from the left hand side of $
\left( \ref{bp}\right) $ which we define to be $v_{1}.$ Then we insert $
\left( \ref{3a}\right) $ with $v=v_{1}$ into the right hand side of $\left(
\ref{bp}\right) $ to yield $v_{2},$ and repeat this process to establish a
sequence $v_{0},v_{1},v_{2},v_{3},...$ which converges to $v.$ In this
fashion we will establish that
\begin{equation}
v=1+\frac{1}{\ln (c\tau )}+O\left( \left( \frac{1}{\ln (c\tau )}\right)
^{2}\right) \label{3b}
\end{equation}
which is equivalent to $\left( \ref{1g}\right) $ from the introduction.
Our analysis begins with the left hand side of $\left( \ref{bp}\right) $.
Inserting $\left( \ref{3a}\right) $ into the left hand side of $\left( \ref
{bp}\right) $ yields
\begin{equation}
LHS=G\left( \beta (\tau ),\tau \right) =\frac{1}{\sqrt{4\pi \tau }}\exp
\left[ -\frac{\left( -\sqrt{-2\tau \ln \left( v^{2}c\tau \right) }+(\rho
-1)\tau \right) ^{2}}{4\tau }-\rho \tau \right] . \label{3c}
\end{equation}
As $\tau $ gets smaller, the $(\rho -1)\tau $ and $\rho \tau $ terms in $
\left( \ref{3c}\right) $ are dominated by the $\sqrt{-2\tau \ln \left(
v^{2}c\tau \right) }$ term, so
\[
LHS\approx \frac{1}{\sqrt{4\pi \tau }}\exp \left[ -\frac{\left( -\sqrt{
-2\tau \ln \left( v^{2}c\tau \right) }\right) ^{2}}{4\tau }\right] \qquad
\hbox{for }\tau \hbox{\ small.}
\]
Algebraic simplification of this expression yields
\[
LHS\approx \frac{v\sqrt{c}}{\sqrt{4\pi }}\qquad \hbox{for }\tau \hbox{\
small,}
\]
and therefore, since $c=4\pi \rho ^{2},$ we have
\begin{equation}
LHS\approx v\rho \qquad \hbox{for }\tau \hbox{\ small.} \label{3d}
\end{equation}
Now we consider the right hand side of $\left( \ref{bp}\right) $. The
primary contribution to the integral representing the right hand side of $
\left( \ref{bp}\right) $ comes from the values of $s$ near $\tau $ (no
matter what $v$ is), so we consider $\beta (s)$ and $\beta ^{\prime }(s)\ $
as Taylor polynomials around $s=\tau :$
\begin{eqnarray}
RHS &=&-\rho \int_{0}^{\tau }\beta ^{\prime }(s)G\left( \beta (\tau )-\beta
(s),\tau -s\right) ds \label{3e} \\
&\approx &-\rho \int_{0}^{\tau }\left[ \beta ^{\prime }(\tau )-\beta
^{\prime \prime }(\tau )\left( \tau -s\right) +h.o.t.\right] \nonumber \\
&&\times G\left( \beta ^{\prime }(\tau )\left( \tau -s\right) -\beta
^{\prime \prime }(\tau )\frac{\left( \tau -s\right) ^{2}}{2}+h.o.t.,\tau
-s\right) ds \nonumber \\
&\approx &\frac{-\rho }{\sqrt{4\pi }}\int_{0}^{\tau }\left[ \beta ^{\prime
}(\tau )-\beta ^{\prime \prime }(\tau )\left( \tau -s\right) +h.o.t.\right]
\hbox{~}\frac{1}{\sqrt{\left( \tau -s\right) }} \nonumber \\
&&\times \exp \left[ -\frac{\left(
\begin{array}{c}
\beta ^{\prime }(\tau )\left( \tau -s\right) -\beta ^{\prime \prime }(\tau )
\frac{\left( \tau -s\right) ^{2}}{2}+h.o.t. \\
+(\rho -1)\left( \tau -s\right)
\end{array}
\right) ^{2}}{4\left( \tau -s\right) }-\rho \left( \tau -s\right) \right] ds,
\nonumber
\end{eqnarray}
where $h.o.t.$ stands for ``higher order terms''. As with the analysis of
the left hand side, the $(\rho -1)\left( \tau -s\right) $ and $\rho \left(
\tau -s\right) $ terms in the exponential will be much smaller than the
other terms as $\tau \rightarrow 0$ so we can neglect them. Neglecting these
terms and substituting
\[
x=-\beta ^{\prime }(\tau )\sqrt{\frac{\tau -s}{2}}
\]
into $\left( \ref{3e}\right) $ yields
\begin{equation}
RHS\approx \sqrt{\frac{2}{\pi }}\rho \int_{0}^{-\beta ^{\prime }(\tau )\sqrt{
\frac{\tau }{2}}}\left( 1-2Dx^{2}+h.o.t.\right) \exp \left[ -\frac{1}{2}
\left( x-Dx^{3}+h.o.t.\right) ^{2}\right] dx \label{3f}
\end{equation}
where $D=\frac{\beta ^{\prime \prime }(\tau )}{\left( \beta ^{\prime }(\tau
)\right) ^{3}}.$ Even though (as we will see) $\lim\limits_{\tau \rightarrow
0}-\beta ^{\prime }(\tau )\sqrt{\frac{\tau }{2}}=\infty ,$ the primary
contribution to $\left( \ref{3f}\right) $ comes from the values of $x$ near $
0$ so, since $\lim\limits_{\tau \rightarrow 0}D=0$ (which we will also see),
we have that
\begin{eqnarray*}
&&RHS\approx \sqrt{\frac{2}{\pi }}\rho \int_{0}^{-\beta ^{\prime }(\tau )
\sqrt{\frac{\tau }{2}}}\left( 1-2Dx^{2}+h.o.t.\right) \exp \left[ -\frac{
x^{2}}{2}\right] \exp \left[ Dx^{4}+h.o.t.\right] dx \\
&\approx &\sqrt{\frac{2}{\pi }}\rho \int_{0}^{\infty }\left(
1-2Dx^{2}+h.o.t.\right) \exp \left[ -\frac{x^{2}}{2}\right] \left(
1+Dx^{4}+h.o.t.\right) dx
\end{eqnarray*}
and so
\begin{eqnarray}
&&RHS\approx \sqrt{\frac{2}{\pi }}\rho \int_{0}^{\infty }\left(
1-2Dx^{2}+Dx^{4}+h.o.t.\right) \exp \left[ -\frac{x^{2}}{2}\right] dx
\label{3g} \\
&=&\rho (1-2D+3D+h.o.t.) \nonumber \\
&=&\rho (1+D+h.o.t.). \nonumber
\end{eqnarray}
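The last step uses the half-line Gaussian moments $\int_{0}^{\infty }e^{-x^{2}/2}dx=\int_{0}^{\infty }x^{2}e^{-x^{2}/2}dx=\sqrt{\pi /2}$ and $\int_{0}^{\infty }x^{4}e^{-x^{2}/2}dx=3\sqrt{\pi /2}$. The Python sketch below checks these moments by quadrature and confirms that the combination in $\left( \ref{3g}\right) $ collapses to $1+D$; the value of $D$ used is an arbitrary illustration.

```python
import math

def half_gauss_moment(k, hi=10.0, n=100000):
    """Midpoint approximation of the integral of x^k exp(-x^2/2) over [0, inf)."""
    dx = hi / n
    return sum((dx * (i + 0.5))**k * math.exp(-(dx * (i + 0.5))**2 / 2.0)
               for i in range(n)) * dx

m0 = half_gauss_moment(0)
m2 = half_gauss_moment(2)
m4 = half_gauss_moment(4)
root = math.sqrt(math.pi / 2.0)
assert abs(m0 - root) < 1e-8
assert abs(m2 - root) < 1e-8
assert abs(m4 - 3.0 * root) < 1e-7

# Hence sqrt(2/pi) * (m0 - 2*D*m2 + D*m4) = 1 - 2D + 3D = 1 + D, as in (3g).
D = -0.1    # arbitrary illustrative value
combo = math.sqrt(2.0 / math.pi) * (m0 - 2.0 * D * m2 + D * m4)
assert abs(combo - (1.0 + D)) < 1e-7
```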
So, combining $\left( \ref{3d}\right) $ and $\left( \ref{3g}\right) $ we
have
\begin{equation}
v\approx 1+D+h.o.t. \label{3h}
\end{equation}
which is our desired recurrence relation.
Now we set $v=v_{0}=1$ in $\left( \ref{3a}\right) $, so we have
\begin{eqnarray*}
&&\beta (\tau )\approx -\sqrt{-2\tau \ln \left( c\tau \right) }, \\
&&\beta ^{\prime }(\tau )\approx \frac{\ln (c\tau )+1}{\sqrt{-2\tau \ln
\left( c\tau \right) }}, \\
&&\beta ^{\prime \prime }(\tau )\approx \frac{\ln ^{2}(c\tau )+1}{\left(
-2\tau \ln \left( c\tau \right) \right) ^{\frac{3}{2}}},
\end{eqnarray*}
and so
\[
D\approx \frac{\ln ^{2}(c\tau )+1}{\left( \ln (c\tau )+1\right) ^{3}}\approx
\frac{1}{\ln (c\tau )}.
\]
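These derivative formulas are straightforward to check numerically. The Python sketch below compares them against central finite differences, with the illustrative choices $\rho =1$ (so $c=4\pi $) and $\tau =10^{-6}$; it also confirms the exact identity $D=(\ln ^{2}(c\tau )+1)/(\ln (c\tau )+1)^{3}$.

```python
import math

c = 4.0 * math.pi            # c = 4*pi*rho^2 with the illustrative choice rho = 1

def beta(t):                 # beta(tau) with v = 1
    return -math.sqrt(-2.0 * t * math.log(c * t))

tau, h = 1e-6, 1e-9
L = math.log(c * tau)

# Closed-form first and second derivatives quoted in the text.
bp  = (L + 1.0) / math.sqrt(-2.0 * tau * L)
bpp = (L * L + 1.0) / (-2.0 * tau * L)**1.5

# Central finite differences.
bp_fd  = (beta(tau + h) - beta(tau - h)) / (2.0 * h)
bpp_fd = (beta(tau + h) - 2.0 * beta(tau) + beta(tau - h)) / h**2

assert abs(bp_fd - bp) < 1e-4 * abs(bp)
assert abs(bpp_fd - bpp) < 1e-2 * abs(bpp)

# D = beta''/(beta')^3 reduces exactly to (L^2 + 1)/(L + 1)^3.
D = bpp / bp**3
assert abs(D - (L * L + 1.0) / (L + 1.0)**3) < 1e-12
```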
(Note that $\lim\limits_{\tau \rightarrow 0}-\beta ^{\prime }(\tau )\sqrt{
\frac{\tau }{2}}=\infty\ $and $\lim\limits_{\tau \rightarrow 0}D=0$ as
indicated previously.) If we ignore the $h.o.t.,$ we have from $\left( \ref
{3h}\right) $ that $v_{1}=1+\frac{1}{\ln (c\tau )}.$ Setting $v=v_{1}$ in $
\left( \ref{3a}\right) ,$ and repeating these calculations, again shows that $
D\approx \frac{1}{\ln (c\tau )},$ and so, since this will be the case for as
many iterations as we want, we have that
\[
v=1+\frac{1}{\ln (c\tau )}+h.o.t.
\]
Unfortunately, we can easily show that $h.o.t.$ $=O\left( \left( \frac{1}{
\ln (c\tau )}\right) ^{2}\right) \ $so these higher order terms do influence
the value of $v$ unless $\tau $ is extremely small.
\section{Computational Methods and Results}
We computed the free boundary using a more or less standard method, with
details chosen to minimize the effect of round-off error in short-time
computation. To compute the free boundary near 0 rather than near $K$, we
use, as in $\left(\ref{var}\right)$, $S=Ke^{x}$ transforming the
Black-Scholes equation $\left(\ref{1a}\right)$ into the constant
coefficient form
$$F_t=\frac{\sigma^2}{2}F_{xx}+\left(r-\frac{\sigma^2}{2}\right)F_x -rF$$
with $$F(x,t)=\frac{1}{K}f(S,t)$$ where the initial condition
$\left(\ref{2c}\right)$ is
$$F(x,0)=F_{0}(x)\equiv\max (1-e^{x},0).$$
Note that since the American put can be exercised at any time, we always have
the intrinsic constraint
\begin{equation}
F(x,t)\geq F_0(x). \label{4a}
\end{equation}
To avoid round-off problems in computing $1-e^{x}$, we only use the built-in
exponential function in double precision for
$\left| x\right| >10^{-4}$; for $\left| x\right| \leq
10^{-4}$, we use the following Horner form of the Taylor polynomial
\[
1-e^{x}\approx -x\left( 1+\frac{x}{2}\left( 1+\frac{x}{3}\left( 1+\frac{x}{4}
\right) \right) \right)
\]
to reduce round-off error.
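A minimal Python sketch of this switch-over (mirroring the $\left| x\right| \leq 10^{-4}$ threshold from the text, though not the paper's actual code) compares the Horner form against the library function \texttt{expm1}, which computes $e^{x}-1$ without cancellation and so serves as a reference:

```python
import math

def one_minus_exp(x):
    """1 - e^x, switching to a Horner-form Taylor polynomial for tiny |x|."""
    if abs(x) > 1e-4:
        return 1.0 - math.exp(x)
    # Horner form of -(x + x^2/2 + x^3/6 + x^4/24); truncation error ~ x^5/120.
    return -x * (1.0 + x / 2.0 * (1.0 + x / 3.0 * (1.0 + x / 4.0)))

# -expm1(x) = 1 - e^x computed without the catastrophic cancellation of 1 - exp(x).
for x in (1e-4, -1e-4, 1e-6, -1e-6, 1e-9, 0.0, 0.5, -0.5):
    ref = -math.expm1(x)
    got = one_minus_exp(x)
    if ref != 0.0:
        assert abs(got - ref) <= 1e-12 * abs(ref)
    else:
        assert got == 0.0
```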
Each plotted value of $v(t)$ in figures 1--3 is produced by performing a
simulation up to time $t$ and recording the final $v$ value. A single run
could not be used to determine all of the data points at once
because the value of $v$ produced in the early stages of any specific
run (that is, for the first few iterations in time, regardless of the value of
$\Delta t$) was found not to be accurate for our purposes. Given a specific
stopping time, $t$, we chose $x_{\min}$ and $x_{\max}$, the lower and upper
bounds for the $x$ domain, by using formulas roughly motivated by the theory:
\[
x_{\min }=-\frac{3}{2}\sigma\sqrt{-t\ln (t)},\qquad x_{\max
}=4\sigma\sqrt{t}.
\]
We have checked that the lower bound, $x_{\min },$ is irrelevant in that
$\beta (t)$ never reached $x_{\min }$ in any of our computations. We also
checked that varying $x_{\max }$ (say, using
$x_{\max }=5\sigma\sqrt{t}$) has a negligible effect on the solution. The $x$
domain was divided into {\em nx} uniformly spaced grid points (therefore
$\Delta x=\left( x_{\max }-x_{\min }\right) /(\mbox{{\em nx}}-1)$), and a
corresponding time step of $\Delta t=\mbox{{\em CFL}}\cdot
\Delta x^{2}$ was chosen, where the {\em CFL} constant was
$0.9$ for all of our runs. We used $nx=8000$ for the data presented in
figures 1--3, which agree to beyond plotting accuracy with data we obtained
using
$nx=16,000$.
The numerical approximations were computed by the forward Euler/trinomial tree
method taking into account the intrinsic constraint
$\left(\ref{4a}\right)$:
$$F_{k}^{n+1} =\max \left( \widetilde{F}_{k}^{n+1},F_{0}(x)\right)$$
where
\begin{eqnarray}
\widetilde{F}_{k}^{n+1} &=&F_{k}^{n}+\frac{\sigma^2}{2}\frac{\Delta t}{\Delta
x^{2}}\left( F_{k+1}^{n}-2F_{k}^{n}+F_{k-1}^{n}\right) \label{4b}\\
&& +\left(r-\frac{\sigma^2}{2}\right)\frac{\Delta
t}{2\Delta x }\left( F_{k+1}^{n}-F_{k-1}^{n}\right) -r \Delta tF_{k}^{n}.
\nonumber
\end{eqnarray}
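A minimal Python sketch of the constrained forward Euler/trinomial step $\left( \ref{4b}\right) $ follows. The grid, model parameters, and the comparison against the European Black-Scholes put are illustrative choices for a sanity check, not the values or diagnostics used in the runs reported here; the American value should dominate both the payoff and the European value.

```python
import math

# Illustrative model and grid parameters (not the paper's run settings).
sigma, r = 0.4, 0.06
x_min, x_max, nx = -2.0, 2.0, 201
dx = (x_max - x_min) / (nx - 1)
CFL = 0.9
dt = CFL * dx * dx             # stable: (sigma^2/2)*dt/dx^2 = 0.072 < 1/2
t_final = 0.5

x = [x_min + k * dx for k in range(nx)]
payoff = [max(1.0 - math.exp(xk), 0.0) for xk in x]   # F_0(x) = max(1 - e^x, 0)
F = payoff[:]

a = 0.5 * sigma**2 * dt / dx**2            # diffusion weight
b = (r - 0.5 * sigma**2) * dt / (2.0 * dx) # drift weight

for _ in range(int(round(t_final / dt))):
    Fn = F[:]
    for k in range(1, nx - 1):             # endpoints held at their payoff values
        Ft = (Fn[k] + a * (Fn[k+1] - 2.0 * Fn[k] + Fn[k-1])
                    + b * (Fn[k+1] - Fn[k-1]) - r * dt * Fn[k])
        F[k] = max(Ft, payoff[k])          # intrinsic constraint (4a)

# European Black-Scholes put with S = K = 1 as a lower bound for the American value.
d1 = (r + 0.5 * sigma**2) * t_final / (sigma * math.sqrt(t_final))
d2 = d1 - sigma * math.sqrt(t_final)
N = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
euro = math.exp(-r * t_final) * N(-d2) - N(-d1)
american = F[nx // 2]                      # grid point x = 0, i.e. S = K

assert all(Fk >= pk for Fk, pk in zip(F, payoff))
assert euro - 2e-3 < american < euro + 0.05
```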
At each time step, $n$, the early exercise boundary is approximated by
$\beta_n=x_{k^*_n}$, the largest grid point
$x_k=x_{\min}+k\Delta x$ at which the intrinsic constraint
$\left(\ref{4a}\right)$ is still binding; that is, in theory\footnote{
Actually, the formula (\ref{4c}) gave trouble with computer arithmetic for
small $t.$ It was more reliable to use
\[
k_{*}^{n}=\max \left( k\mid \widetilde{F}_{k}^{n}