I'm a student trying to finish my degree, and for that I'm learning about Reinforcement Learning. I'm implementing DQN in a Gazebo-ROS environment with a TurtleBot3 Burger model and a square as the goal. I wanted to feed the algorithm with a CNN (I already tried it with the LiDAR readings and it seems to work), but the CNN version is not learning and I don't know why. The images I pass to the net are 64x64x3 top-down views of the map, something like this: [Top view of map].
I'm using a learning rate of 0.0003 and a batch size of 64, and I train the algorithm every 128 steps in the environment (each episode lasts 500 steps); a simplified sketch of the training loop is included after the class below. Here is my implementation of the DQN agent:
class DQN():
    def __init__(self, n_actions=5, epsilon_min=0.01, gamma=0.99, lr=0.0003,
                 epsilon=1.0, max_size=100000, input_dims=[364], batch_size=64,
                 using_camera=0):
        self.using_camera = using_camera
        self.n_actions = n_actions
        self.batch_size = batch_size
        self.epsilon = epsilon
        self.epsilon_decay = gamma
        self.epsilon_min = epsilon_min
        self.lr = lr
        self.gamma = gamma
        if self.using_camera:
            max_size = 7000
        self.memory = ReplayBuffer(max_size, input_dims, n_actions, using_camera)
        self.model = CNNQNetwork(n_actions=self.n_actions, name="model")
        self.target_model = CNNQNetwork(n_actions=self.n_actions, name="target_model")
        self.model.compile(optimizer=Adam(learning_rate=self.lr))
        self.target_model.compile(optimizer=Adam(learning_rate=self.lr))

    def store_data(self, states, actions, rewards, new_states, dones):
        self.memory.store_data(states, actions, rewards, new_states, dones)

    def choose_action(self, observation):
        # Epsilon-greedy action selection
        if np.random.rand() < self.epsilon:
            return np.random.randint(self.n_actions)
        q_values = self.model(tf.expand_dims(observation, axis=0))
        return np.argmax(q_values)

    def update_target_model(self):
        self.target_model.set_weights(self.model.get_weights())

    def learn(self):
        if len(self.memory.states) < self.batch_size:
            return

        state_arr, action_arr, reward_arr, new_state_arr, dones_arr = \
            self.memory.generate_data(self.batch_size)

        states = tf.convert_to_tensor(state_arr, dtype=tf.float32)
        states_ = tf.convert_to_tensor(new_state_arr, dtype=tf.float32)
        rewards = tf.convert_to_tensor(reward_arr, dtype=tf.float32)
        actions = tf.convert_to_tensor(action_arr, dtype=tf.int32)
        dones = tf.convert_to_tensor(dones_arr, dtype=tf.float32)

        with tf.GradientTape() as tape:
            q_values = self.model(states)
            next_q_values = self.target_model(states_)
            max_next_q_values = tf.reduce_max(next_q_values, axis=1)
            # Bellman target: r + gamma * max_a' Q_target(s', a') for non-terminal transitions
            y = rewards + self.gamma * max_next_q_values * (1 - dones)
            one_hot_actions = tf.one_hot(actions, self.n_actions)
            q_values = tf.reduce_sum(q_values * one_hot_actions, axis=1)
            loss = tf.keras.losses.MSE(y, q_values)

        model_grads = tape.gradient(loss, self.model.trainable_variables)
        self.model.optimizer.apply_gradients(zip(model_grads, self.model.trainable_variables))

        if self.epsilon > self.epsilon_min:
            self.epsilon = max(self.epsilon * self.epsilon_decay, self.epsilon_min)
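For context, this is roughly how I drive the agent (a simplified sketch; `env`, `num_episodes` and the exact point where I sync the target network are placeholders for my actual ROS/Gazebo loop):

agent = DQN(n_actions=5, using_camera=1)

for episode in range(num_episodes):           # num_episodes: placeholder
    state = env.reset()                       # env: wrapper around the Gazebo/ROS world
    for step in range(500):                   # 500 steps per episode
        action = agent.choose_action(state)
        new_state, reward, done = env.step(action)
        agent.store_data(state, action, reward, new_state, done)
        state = new_state
        if step % 128 == 0:                   # train every 128 environment steps
            agent.learn()
        if done:
            break
    agent.update_target_model()               # sync target network (placement illustrative)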
In my ReplayBuffer I'm using a size of 7000; I can't use more because larger buffers give me memory-allocation errors (not enough RAM). I'm doing data augmentation and normalizing the images to [0, 1] (a sketch of this preprocessing follows the network definition). The net that I'm using is this:
class CNNQNetwork(Model):
    def __init__(self, n_actions, name, save_directory='/model_weights/dqn/'):
        super(CNNQNetwork, self).__init__()
        self.n_actions = n_actions
        self.net = Sequential([
            Conv2D(256, (8, 8), activation='relu', input_shape=(64, 64, 3)),
            MaxPooling2D(pool_size=(2, 2)),
            Dropout(0.2),
            Conv2D(256, (4, 4), activation='relu'),
            MaxPooling2D(pool_size=(2, 2)),
            Dropout(0.2),
            Conv2D(256, (3, 3), activation='relu'),
            MaxPooling2D(pool_size=(2, 2)),
            Dropout(0.2),
            Flatten(),
            Dense(64),
            Dense(self.n_actions, activation='linear')
        ])
        self.model_name = name
        self.save_directory = save_directory

    def call(self, state):
        return self.net(state)
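The preprocessing I mentioned above is essentially this (a simplified sketch; my actual augmentation code is more involved, the brightness jitter here is just illustrative):

import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Scale a raw uint8 camera frame (64, 64, 3) to float32 in [0, 1]."""
    return image.astype(np.float32) / 255.0

def augment(image: np.ndarray, rng=np.random) -> np.ndarray:
    """Illustrative augmentation: random brightness jitter, clipped back to [0, 1]."""
    return np.clip(image * rng.uniform(0.8, 1.2), 0.0, 1.0)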
But with all that, my loss is unstable; over different training steps I get values like these, for example:
[Loss graph during episodes]
This leads the robot to learn to just spin in the same direction. The action space has 5 actions and the observation space (the image) is 64x64x3. I don't know if this is enough information. Thanks!
I have tried different learning rates (0.0003, 0.01, 0.1, 0.00001), different batch sizes (64, 128, 256), different training schedules (training at the end of the episode, or every 256 / 128 / 64 timesteps), and different image sizes (128x128x3, 64x64x3).
This is the heuristic I use to compute the reward:
yaw_reward = []
for i in range(5):
    angle = -pi / 4 + heading + (pi / 8 * i) + pi / 2
    tr = 1 - 4 * math.fabs(0.5 - math.modf(0.25 + 0.5 * angle % (2 * math.pi) / math.pi)[0])
    yaw_reward.append(tr)

distance_rate = 2 ** (current_distance / self.goal_distance)
reward = ((round(yaw_reward[action] * 5, 2)) * distance_rate)
Here, heading is the current angle between the robot and the goal. The reward for reaching the goal is 400 and the reward for a collision is -400.
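The terminal rewards are applied roughly like this (a sketch; the actual collision and goal checks in my environment use the laser scan and the distance to the goal square):

if collision:            # e.g. minimum laser range below a threshold
    reward = -400
    done = True
elif goal_reached:       # robot inside the goal square
    reward = 400
    done = True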