Deep Solutions of Partial Differential Equations

Partial differential equations (PDEs) arise in numerous fields of science and engineering, capturing the behavior of physical systems across various domains. Traditional methods of solving PDEs include analytical solutions and numerical approaches such as the finite element method (FEM) or finite difference methods (FDM). However, with the rise of deep learning, researchers have begun to harness the power of neural networks to solve PDEs.

 

Why Use Deep Learning? 

 

Neural networks are universal function approximators: given enough data and computational power, they can approximate solutions to a wide variety of PDEs without the need for domain-specific discretization schemes. By leveraging both simulation data and real-world measurements, neural networks can be trained to solve PDEs under various conditions. Finally, neural networks parallelize well across modern hardware infrastructure like GPUs, providing potential speed-ups for solving large-scale PDE problems.

 

 

The Black-Scholes PDE

The Black-Scholes PDE is a fundamental equation in mathematical finance, used for option pricing. The equation can be written as:

\frac{\partial V}{\partial t} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0

where V is the option price as a function of stock price S and time t, \sigma is the volatility of the stock, and r is the risk-free interest rate.
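For a European call with strike K and maturity T (the payoff is an assumption; the PDE itself does not fix it), this equation has the classical closed-form solution, which is useful as ground truth when validating a learned solver. A minimal Python sketch with illustrative parameter values:

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, t, K=1.0, T=1.0, r=0.05, sigma=0.2):
    """Closed-form European call price, valid for t < T.
    K, T, r, sigma are illustrative values, not taken from the article."""
    tau = T - t                                   # time to maturity
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

print(black_scholes_call(S=1.0, t=0.0))          # at-the-money price at t = 0
```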

Neural Networks for Black-Scholes PDE

One can design a neural network to approximate the solution V(S,t) by feeding in values of S and t as inputs and training the network to drive the residual of the Black-Scholes PDE toward zero. Here's how it can be done (a code sketch follows the list):

1. Architecture: Use a feed-forward neural network with multiple layers and an activation function suited to the regression task. Smooth activations such as tanh are preferable to ReLU (Rectified Linear Unit) here, because the PDE residual involves second derivatives of the network and ReLU's second derivative vanishes almost everywhere.

2. Training Data: Generate a set of (S,t) pairs across the domain of interest. These can be simulated or taken from real-world data.

3. Loss Function: Define a custom loss function that represents the residual of the Black-Scholes PDE, measuring the difference between the left-hand and right-hand sides of the equation.

4. Training: Train the neural network using optimization techniques like gradient descent to minimize the defined loss function.

5. Prediction: Once trained, the neural network can be used to predict option prices for any given S and t, providing a numerical solution to the Black-Scholes PDE.
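The following is a minimal PyTorch sketch of steps 1-5 in this residual-based style. Everything numeric is an illustrative assumption not taken from the article: the strike K, maturity T, rate r, volatility \sigma, and the European-call payoff used as terminal condition; tanh replaces ReLU for the reason noted in step 1.

```python
import torch
import torch.nn as nn

# Hypothetical market parameters (illustrative values only).
SIGMA, R, K, T = 0.2, 0.05, 1.0, 1.0

# Step 1: a small feed-forward network V_theta(S, t) with smooth activations.
net = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def pde_residual(S, t):
    """Residual of the Black-Scholes PDE at collocation points (S, t)."""
    S = S.requires_grad_(True)
    t = t.requires_grad_(True)
    V = net(torch.cat([S, t], dim=1))
    V_t = torch.autograd.grad(V.sum(), t, create_graph=True)[0]
    V_S = torch.autograd.grad(V.sum(), S, create_graph=True)[0]
    V_SS = torch.autograd.grad(V_S.sum(), S, create_graph=True)[0]
    return V_t + 0.5 * SIGMA**2 * S**2 * V_SS + R * S * V_S - R * V

# Steps 2-4: sample collocation points, build the loss, optimize.
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    S = torch.rand(256, 1) * 2 * K               # stock prices in [0, 2K]
    t = torch.rand(256, 1) * T                   # times in [0, T]
    S_T = torch.rand(256, 1) * 2 * K             # points on the terminal slice
    payoff = torch.relu(S_T - K)                 # assumed payoff g(S) = max(S - K, 0)
    V_T = net(torch.cat([S_T, torch.full_like(S_T, T)], dim=1))
    loss = (pde_residual(S, t) ** 2).mean() + ((V_T - payoff) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Step 5: predict the option price at any (S, t).
with torch.no_grad():
    print(net(torch.tensor([[1.0, 0.0]])))       # V(S = K, t = 0)
```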

Deep-Time Neural Networks

This RiskLab research article presents the Deep-Time Neural Network (DTNN), a novel and efficient deep-learning approach for solving partial differential equations (PDEs). DTNN leverages deep neural networks to approximate the solution for a class of quasi-linear parabolic PDEs. We demonstrate that DTNN significantly reduces computational cost and speeds up training compared to other models in the literature, making the architecture promising for the fast and accurate solution of time-dependent PDEs in scientific and engineering applications. The central idea is to address the need for explicit time information in the deeper layers of Artificial Neural Networks (ANNs): by integrating time into the hidden layers of the DTNN, convergence for high-dimensional PDE solutions improves markedly in both efficiency and speed over existing ANN-based solvers.
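The article does not list the architecture here, and one plausible reading of "integrating time into the hidden layers" is to concatenate t onto the input of every hidden layer, so depth does not wash out the time signal. The sketch below is a hypothetical PyTorch illustration of that idea, not the authors' reference implementation (which is on GitHub):

```python
import torch
import torch.nn as nn

class TimeInjectedMLP(nn.Module):
    """Feed-forward network that re-feeds the time variable into every hidden
    layer rather than only at the input. An illustrative reading of the DTNN
    idea, not the authors' reference implementation."""
    def __init__(self, dim_x, width=64, depth=4):
        super().__init__()
        self.inp = nn.Linear(dim_x + 1, width)
        # Each hidden layer sees the previous activations concatenated with t.
        self.hidden = nn.ModuleList(
            [nn.Linear(width + 1, width) for _ in range(depth)]
        )
        self.out = nn.Linear(width, 1)

    def forward(self, t, x):
        h = torch.tanh(self.inp(torch.cat([t, x], dim=1)))
        for layer in self.hidden:
            h = torch.tanh(layer(torch.cat([h, t], dim=1)))
        return self.out(h)

u_net = TimeInjectedMLP(dim_x=100)   # e.g. a 100-dimensional PDE
```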

 

Assume u(t, x): \mathbb{R} \times \mathbb{R}^d \to \mathbb{R} is a real-valued function, where t \in \mathbb{R} and x \in \mathbb{R}^d are the temporal and spatial variables, respectively, \mu(t,x) \in \mathbb{R}^d and \sigma(t,x) \in \mathbb{R}^{d \times d} are drift and diffusion coefficients, and \operatorname{H}_x u denotes the Hessian of u with respect to x. Define the operator \mathcal{L} as

\mathcal{L} u(t, x) = \frac{1}{2} \operatorname{Tr}\left(\sigma \sigma^{\mathrm{T}}(t, x)\left(\operatorname{H}_x u\right)(t, x)\right)+\nabla u(t, x) \cdot \mu(t, x)

Given the operator, the quasi-linear parabolic PDE with terminal condition u(T,x) = g(x) is:

\frac{\partial}{\partial t}u(t, x)+\mathcal{L} u(t, x)= f\left(t, x, u(t, x), \sigma(t, x)^{\mathrm{T}} \nabla u(t, x)\right)


We aim to find the solution at t = 0, x = X_0 \in \mathbb{R}^d.

The solution to the above PDE satisfies the following Backward Stochastic Differential Equation (BSDE):

u\left(t, X_t\right)-u\left(0, X_0\right) = \int_0^t f\Big(s, X_s, u\left(s, X_s\right), Z_s\Big) \, ds + \int_0^t Z_s \cdot dW_s

where Z_s = \sigma(s, X_s)^{\mathrm{T}} \nabla u(s, X_s) and X_t is given by:

X_t = X_0 + \int_0^t \mu\left(s, X_s\right) ds + \int_0^t \sigma\left(s, X_s\right) \, dW_s
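The forward process X_t can be sampled with an Euler-Maruyama scheme. A minimal PyTorch sketch, under the simplifying assumption that \sigma(t, x) acts elementwise (a diagonal diffusion); mu and sigma are user-supplied callables:

```python
import torch

def simulate_paths(x0, mu, sigma, T=1.0, N=20, batch=256):
    """Euler-Maruyama sampling of X_t. Assumes sigma(t, x) acts elementwise
    (diagonal diffusion); x0 is a 1-D tensor of shape (d,)."""
    dt = T / N
    X = x0.repeat(batch, 1)                       # (batch, d) copies of X_0
    paths, increments = [X], []
    for n in range(N):
        t = torch.full((batch, 1), n * dt)
        dW = torch.randn(batch, x0.shape[0]) * dt ** 0.5
        X = X + mu(t, X) * dt + sigma(t, X) * dW
        paths.append(X)
        increments.append(dW)
    return paths, increments
```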

Discretizing in time on a grid 0 = t_0 < t_1 < \cdots < t_N = T, with \Delta t_n = t_{n+1} - t_n and \Delta W_n = W_{t_{n+1}} - W_{t_n}, we have:

u\left(t_{n+1}, X_{t_{n+1}}\right) = u\left(t_n, X_{t_n}\right) + f\left(t_n, X_{t_n}, u\left(t_n, X_{t_n}\right), Z_{t_n}\right) \Delta t_n + Z_{t_n} \cdot \Delta W_n
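This recursion can be rolled forward along simulated paths in the style of deep BSDE solvers: a learnable value u_0 stands in for u(0, X_0), a network z_net approximates Z_{t_n}, and the terminal condition g(X_T) supplies the training signal. A hedged PyTorch sketch, again assuming a diagonal diffusion; all names are illustrative:

```python
import torch

def bsde_rollout(x0, f, g, mu, sigma, z_net, u0, T=1.0, N=20, batch=256):
    """One forward pass of the discretized BSDE. u0 is a learnable (1, 1)
    tensor standing in for u(0, X_0); z_net(t, X) approximates Z_t.
    Returns the mean squared mismatch with the terminal condition g(X_T)."""
    dt = T / N
    X = x0.repeat(batch, 1)
    u = u0.repeat(batch, 1)
    for n in range(N):
        t = torch.full((batch, 1), n * dt)
        Z = z_net(t, X)                                   # (batch, d)
        dW = torch.randn(batch, x0.shape[0]) * dt ** 0.5
        # u_{n+1} = u_n + f(...) dt + Z . dW  (the article's sign convention)
        u = u + f(t, X, u, Z) * dt + (Z * dW).sum(dim=1, keepdim=True)
        X = X + mu(t, X) * dt + sigma(t, X) * dW          # elementwise sigma
    return ((u - g(X)) ** 2).mean()
```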

As soon as we have a network to solve the PDE, we can estimate the solution of the Black-Scholes PDE for driver functionals f of various forms. For example, for the following representation:

f\left(t, x, u(t, x), \sigma^{\mathrm{T}}(t, x) \nabla u(t, x)\right) = r\big(t, x, u(t, x)\big)\, u(t,x) + h\big(t, x, u(t,x)\big)^{\mathrm{T}} \sigma(t,x) \nabla u(t, x)

where u(t, X_t) evolves as above, we can use the Deep-Time Neural Network to estimate the solution of the PDE. Our pseudocode showcases the iterative method for approximating the target function within a stochastic differential equation framework: we start with initial values and proceed through a series of nested loops, optimizing the neural network at each step. Our Deep-Time Neural Network is implemented in the Python and Julia programming languages and is available on GitHub.
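As a usage illustration only (the reference implementations are the Python and Julia codes on GitHub), the rollout sketched earlier could be trained with a driver of the above form by taking constant r and h and a diagonal \sigma, so that the term h^{\mathrm{T}} \sigma \nabla u collapses to h \cdot Z; the terminal payoff g below is hypothetical:

```python
import torch
# Assumes bsde_rollout from the sketch above is in scope.

d, r, h = 100, 0.05, 0.1                              # illustrative constants
x0 = torch.zeros(d)
mu = lambda t, x: torch.zeros_like(x)                 # driftless paths
sigma = lambda t, x: torch.full_like(x, 0.2)          # diagonal diffusion
f = lambda t, x, u, z: r * u + h * z.sum(dim=1, keepdim=True)
g = lambda x: (x ** 2).sum(dim=1, keepdim=True)       # hypothetical payoff

z_mlp = torch.nn.Sequential(                          # maps (t, x) -> Z in R^d
    torch.nn.Linear(d + 1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, d),
)
z_net = lambda t, x: z_mlp(torch.cat([t, x], dim=1))
u0 = torch.nn.Parameter(torch.zeros(1, 1))            # learnable u(0, X_0)

opt = torch.optim.Adam([u0, *z_mlp.parameters()], lr=1e-3)
for step in range(2000):                              # outer optimization loop
    loss = bsde_rollout(x0, f, g, mu, sigma, z_net, u0)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(u0.item())                                      # estimate of u(0, X_0)
```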