Original article was published on Artificial Intelligence on Medium
Infinite Steps CartPole Problem With Variable Reward
Modify Step Method of CartPole OpenAI Gym Environment Using Inheritance
In the last blog post, we wrote our first reinforcement learning application: the CartPole problem. We used a Deep Q-Network to train the agent. As we saw in that post, a fixed reward of +1 was given for every stable state, and a reward of 0 was given when the CartPole lost its balance. We also saw that, as the CartPole approached 200 steps, it tended to lose balance. We ended that post with a conjecture: the cap on the number of steps (which we set to 200) and the fixed reward may have caused this behavior. Today, let's remove the step limit, modify the reward, and see how the CartPole behaves.
CartPole Problem Definition
The CartPole problem is considered solved when the average reward is greater than or equal to 195.0 over 100 consecutive trials. This definition assumes the fixed reward of 1.0, so it made sense to keep a fixed reward of 1.0 for every balanced state and to cap an episode at 200 steps. It is pleasing to note that, under this definition, the problem was solved in the previous post.
The CartPole problem has the following conditions for episode termination:
- Pole angle is more than 12 degrees from vertical.
- Cart position is more than 2.4 units from the center, i.e. the center of the cart reaches the edge of the display.
Our goal here is to remove the number of steps limitation and give a variable reward to each state.
If x and θ represent the cart position and pole angle respectively, we define the reward as:
reward = (1 - (x ** 2) / 11.52 - (θ ** 2) / 288)
Here, 11.52 = 2 × 2.4² and 288 = 2 × 12², so the cart-position and pole-angle terms each contribute at most 0.5. This gives the two components equal weight and keeps the reward within the [0, 1] interval. Let's look at a 2D view of the 3D graph of this reward function.
The graph shows that when the CartPole is perfectly balanced (i.e. x = 0 and θ = 0), the maximum reward of 1 is achieved. As the absolute values of x and θ increase, the reward decreases, reaching 0 when |x| = 2.4 and |θ| = 12 simultaneously.
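As a quick sanity check, the reward above can be written as a small Python function (the function name is mine; x is the cart position in gym's units and theta_deg is the pole angle in degrees):

```python
def variable_reward(x: float, theta_deg: float) -> float:
    """Variable reward: 1.0 at perfect balance, 0.0 when both the cart
    position and the pole angle reach their extreme values.
    Note that 11.52 = 2 * 2.4**2 and 288 = 2 * 12**2, so each term
    contributes at most 0.5."""
    return 1.0 - (x ** 2) / 11.52 - (theta_deg ** 2) / 288.0


print(variable_reward(0.0, 0.0))   # perfectly balanced -> 1.0
print(variable_reward(2.4, 12.0))  # both at the extremes -> approximately 0.0
print(variable_reward(2.4, 0.0))   # only one extreme -> approximately 0.5
```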
Let's subclass the CartPole gym environment class (CartPoleEnv) with a custom class, CustomCartPoleEnv, and override its step method so that it returns the variable reward instead of the fixed one.
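A minimal sketch of what that override could look like (this is my reconstruction, not the article's exact code; it assumes the classic gym step API, and note that gym's observation reports the pole angle in radians, so we convert it to degrees before applying the formula):

```python
import math

from gym.envs.classic_control.cartpole import CartPoleEnv


class CustomCartPoleEnv(CartPoleEnv):
    """CartPole with the variable reward in place of the fixed +1."""

    def step(self, action):
        # Let the parent class compute the physics; discard its fixed reward.
        result = super().step(action)
        obs = result[0]
        x, theta = float(obs[0]), float(obs[2])  # cart position, pole angle (rad)
        theta_deg = math.degrees(theta)          # the reward formula uses degrees
        reward = 1.0 - (x ** 2) / 11.52 - (theta_deg ** 2) / 288.0
        # Re-pack whatever the parent returned (a 4- or 5-tuple, depending
        # on the gym version) with the new reward.
        return (obs, reward) + tuple(result[2:])
```

Instantiating CustomCartPoleEnv() directly, rather than going through gym.make, also sidesteps gym's TimeLimit wrapper, so there is no 200-step cap.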
With this custom environment in place, the TF-Agents components are built and the Deep Q-Network is trained as before. We see that the CartPole is even more balanced and stable over a large number of steps.
Let’s see the video of how our CartPole behaves after using the variable reward.
One episode lasts 35.4 seconds on average. Impressive, isn't it?
Here, the reward becomes zero only when both expressions (pole angle and cart position) reach their extreme values. We could instead employ a different reward function that returns zero as soon as either extreme is reached. I expect such a reward function to do even better, so readers are encouraged to try one and comment on how their CartPole behaves. Happy RLing!