Original article can be found here (source): Deep Learning on Medium

# Intro to Neural ODEs: Part 2 — Euler’s Method

## In the second blog post of this series, we take one step closer towards discovering how Neural ODEs work.

Last time, I left you with a question about the equation that describes a residual neural network, or ResNet:

hₜ₊₁ = hₜ + f(hₜ, θₜ)

The question was: what does this equation look like?

The answer is the title of this post.

In case you don’t remember it from calculus class, Euler’s method is the simplest way to approximate the solution of a differential equation. Given the initial value problem

dy/dt = f(t, y), y(t₀) = y₀

we can find a numerical approximation with Euler’s method, defined below:

yₙ₊₁ = yₙ + h·f(tₙ, yₙ)
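To make this concrete, here is a minimal sketch of Euler’s method in Python. The function names and the example problem (dy/dt = y, whose exact solution is eᵗ) are my own illustrative choices, not from the original post.

```python
# Euler's method for the IVP dy/dt = f(t, y), y(t0) = y0.
def euler(f, y0, t0, h, n_steps):
    """Approximate the solution with a fixed step size h."""
    t, y = t0, y0
    trajectory = [y]
    for _ in range(n_steps):
        y = y + h * f(t, y)   # y_{n+1} = y_n + h * f(t_n, y_n)
        t = t + h
        trajectory.append(y)
    return trajectory

# Example: dy/dt = y, y(0) = 1. The exact solution is e^t, so
# after 10 steps of size 0.1 we should land near e ≈ 2.718.
approx = euler(lambda t, y: y, y0=1.0, t0=0.0, h=0.1, n_steps=10)
print(approx[-1])  # (1.1)^10 ≈ 2.5937 — an underestimate, as expected
```

Shrinking *h* (while taking more steps) drives the approximation toward the true solution, which is the key dial we’ll return to later in the series.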

Take a moment to compare Euler’s method with the ResNet equation at the top of the post. They are almost identical! The only difference is the step size, *h*, that multiplies the function. Because of this similarity, we can think of the ResNet as the approximate solution to an underlying differential equation. What is this diffeq? Well, we can work our way backwards. Instead of going from diffeq to Euler’s method, we can reverse engineer the problem: a ResNet layer is one Euler step with step size 1. Starting from the ResNet, the resulting differential equation is

dh(t)/dt = f(h(t), t, θ)
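The correspondence above can be checked numerically. The sketch below builds a toy “ResNet” whose blocks all apply the same residual function, and confirms that its forward pass matches Euler’s method with step size 1. The tanh residual and the weight matrix are illustrative assumptions on my part, not from the post.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))

def f(h):
    """A toy residual branch (illustrative choice)."""
    return np.tanh(h @ W)

def resnet_forward(h0, n_blocks):
    h = h0
    for _ in range(n_blocks):
        h = h + f(h)          # ResNet block: h_{t+1} = h_t + f(h_t)
    return h

def euler_forward(h0, n_steps, step):
    h = h0
    for _ in range(n_steps):
        h = h + step * f(h)   # Euler step for dh/dt = f(h)
    return h

h0 = rng.normal(size=(4,))
# A stack of n residual blocks is exactly n Euler steps of size 1:
print(np.allclose(resnet_forward(h0, 8), euler_forward(h0, 8, step=1.0)))
```

In other words, depth in the ResNet plays the role of (discretized) time in the differential equation, which is exactly the view Neural ODEs make explicit.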

This finally brings us to Neural ODEs, which will be described in further detail in the next post.