I’m creating my own Gym environment to test the freeze-tag problem, and I’m trying to use Ray RLlib to run MAPPO. I have two problems:
1: My simulation is not rendering
2: It’s creating multiple PyGame windows
I’ve attached snippets of my render method and my training script.
# render function
def render(self):
    self.screen.fill((255, 255, 255))
    for agent in self.all_agents:
        if agent.status == 1:
            # active agent: drawn in its own color
            pygame.draw.circle(self.screen, agent.color, (agent.x, agent.y), agent.size)
        elif agent.status == 0:
            # frozen agent: drawn in cyan
            pygame.draw.circle(self.screen, (0, 255, 255), (agent.x, agent.y), agent.size)
    pygame.display.flip()
# Train_MAPPO_FTP.py
import ray
from ray.rllib.algorithms.ppo import PPOConfig
from ray.tune.registry import register_env
import gym_FTP as e
import pygame
import numpy as np

# Environment creation function
def env_creator(config):
    robots = 5
    adversaries = 2
    time_steps = 500
    # each call opens/reconfigures a Pygame display window
    screen = pygame.display.set_mode([1000, 1000])
    gym_ftp = e.gym_FTP(screen, robots, 0, adversaries, time_steps, 15)
    return gym_ftp
def train_and_evaluate(time_steps):
    # Initialize Ray
    ray.init(ignore_reinit_error=True)

    # Register environment
    register_env("Env_FTP", env_creator)

    # Configure algorithm
    config = (
        PPOConfig()
        .environment("Env_FTP")
        .rollouts(num_rollout_workers=1,
                  rollout_fragment_length=1,
                  create_env_on_local_worker=True)
        .training(
            train_batch_size=1,  # aggregate experiences before each training update
            sgd_minibatch_size=1,
            model={"fcnet_hiddens": [64, 64]},
        )
        .framework("torch")
        .evaluation(evaluation_num_workers=1)
        .resources(num_gpus=0)  # set the number of GPUs
    )

    # Build algorithm
    algo = config.build()
    # Parameters
    episodes = 5
    iterations = time_steps // 10
    for episode in range(episodes):
        for i in range(iterations):
            results = algo.train()
            print(f"Training iteration {i + 1} finished. mean_reward {results['episode_reward_mean']},"
                  f" total loss {results['info']['learner']['__all__']['total_loss']}")

    # Shutdown Ray
    ray.shutdown()
def main():
    time_steps = 500
    train_and_evaluate(time_steps)

if __name__ == "__main__":
    main()
I have done multiple checks to confirm that the agents’ velocities are being updated from the new actions and that their positions are being updated, so I’m sure that is not the issue. The environment also works when I test it with other algorithms: I can use its other features correctly, get it to render, and make it do interesting things. The problem appears to be specific to Ray.

My goal is to have n robots and m adversaries, and to get new actions for the n robots based on the state of the environment. I want to train for 500 timesteps per episode, collecting experience in batches of 10: the first 10 timesteps, then 10 more as experience, then 10 more, so there are at most 50 updates per episode. We will run 100 episodes.
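For reference, this is roughly how I was expecting that batching scheme to map onto the RLlib config. It is only a sketch based on my understanding of rollout_fragment_length, train_batch_size, and sgd_minibatch_size; the values are my assumption, not something I have verified works:

# Sketch: collect 10 timesteps per rollout and run one update per 10-step batch,
# so a 500-step episode would give at most 50 updates (assumed mapping, not verified).
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("Env_FTP")
    .rollouts(num_rollout_workers=1,
              rollout_fragment_length=10,   # gather 10 timesteps at a time
              create_env_on_local_worker=True)
    .training(train_batch_size=10,          # one training update per 10 collected timesteps
              sgd_minibatch_size=10,
              model={"fcnet_hiddens": [64, 64]})
    .framework("torch")
)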