Help in creating Adversarial Environment #37
10 comments · 3 replies
-
Hi @AizazSharif, an adversarial scenario was not explicitly evaluated/studied in the paper, but it is easy to add one to MACAD-Gym.
For your reference, the reward functions are implemented in reward.py. I will be happy to help if you need any further assistance in putting this together as a pull request.
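For instance, a minimal sketch of a zero-sum adversarial reward term could look like the snippet below; `compute_adversarial_reward`, `ego_progress`, and `collision_penalty` are hypothetical names for illustration, not MACAD-Gym's actual reward.py API:

```python
# A minimal zero-sum adversarial reward sketch (hypothetical names, not
# MACAD-Gym's actual reward.py API). The adversary is rewarded for
# whatever it takes away from the ego vehicle.
def compute_adversarial_reward(ego_progress, collision_penalty, is_adversary):
    """Return a scalar reward for one actor in an adversarial scenario."""
    ego_reward = ego_progress - collision_penalty
    # Zero-sum: the adversary's gain is exactly the ego agent's loss
    return -ego_reward if is_adversary else ego_reward
```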
-
Hi @praveen-palanisamy, in the urban driving through a signalized intersection scenario, I wanted to ask whether the multi-agent training is done through a shared policy or separate policies. I am interested in working on separate-policy training.
-
Yes, you could use separate policies for each of the agents. An example where different policy parameters are used for each Car actor is available in the MACAD-Agents repository. It's implemented using Ray and, specifically, the following lines show how you could use different policy parameters for each of the agents:

```python
"multiagent": {
    # One policy per Car actor, keyed by actor ID
    "policy_graphs": {
        id: default_policy()
        for id in env_actor_configs["actors"].keys()
    },
    # Map each agent ID to the policy of the same name
    "policy_mapping_fn":
        tune.function(lambda agent_id: agent_id),
},
```

You can also use a custom/different Deep RL algorithm for each of the agents to train the policy.
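If you want to go that route, a minimal sketch modeled on RLlib's multi_agent_two_trainers example, under the same old (pre-0.7) Ray API that MACAD-Agents uses, might look like the following; the `car1`/`car2` actor IDs and the observation/action spaces are assumptions for illustration, and the exact import paths depend on your Ray version:

```python
from gym.spaces import Box, Discrete
from ray import tune
from ray.rllib.agents.dqn import DQNAgent
from ray.rllib.agents.dqn.dqn_policy_graph import DQNPolicyGraph
from ray.rllib.agents.ppo import PPOAgent
from ray.rllib.agents.ppo.ppo_policy_graph import PPOPolicyGraph

# Hypothetical per-actor spaces (84x84 RGB observations and a discrete
# action set are assumptions for illustration)
obs_space = Box(low=0.0, high=255.0, shape=(84, 84, 3))
act_space = Discrete(9)

# One policy graph per Car actor, each backed by a different algorithm
policy_graphs = {
    "car1": (PPOPolicyGraph, obs_space, act_space, {}),
    "car2": (DQNPolicyGraph, obs_space, act_space, {}),
}

def policy_mapping_fn(agent_id):
    return agent_id  # agent IDs and policy IDs match one-to-one

# Each trainer updates only its own agent's policy ("policies_to_train"),
# so car1 is trained with PPO and car2 with DQN on the shared env.
ppo_trainer = PPOAgent(env="HomoNcomIndePOIntrxMASS3CTWN3-v0", config={
    "multiagent": {
        "policy_graphs": policy_graphs,
        "policy_mapping_fn": tune.function(policy_mapping_fn),
        "policies_to_train": ["car1"],
    },
})
dqn_trainer = DQNAgent(env="HomoNcomIndePOIntrxMASS3CTWN3-v0", config={
    "multiagent": {
        "policy_graphs": policy_graphs,
        "policy_mapping_fn": tune.function(policy_mapping_fn),
        "policies_to_train": ["car2"],
    },
})

for _ in range(10):
    print(ppo_trainer.train())  # one PPO optimization round for car1
    print(dqn_trainer.train())  # one DQN optimization round for car2
```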
-
This is great, thanks a lot for the help @praveen-palanisamy. Hopefully I am able to do this before I create a feature pull request.
-
Hi @praveen-palanisamy, any information would be helpful.
-
Hey @AizazSharif,
-
Thanks for the reply @praveen-palanisamy. If you have any suggestions or pointers regarding this issue, it would be really helpful. Thanks.
-
Okay, so that does sound specific to the Impala agent, which has its own resource requirements on top of the environment/CARLA. Since this is a different issue, can you open one on the MACAD-Agents repository to keep things organized?
-
@AizazSharif: If you need further assistance on the original topic (Help in creating Adversarial Environments), we can continue it as an Idea Discussion.
-
Hi @praveen-palanisamy, in macad-gym I can easily work on the reward function, but without an RL algorithm and a training environment it looks difficult; the furthest I get on my own is a fixed-action loop like the sketch below. Any help would be appreciated.
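A minimal sketch of such a loop, assuming the env ID from the MACAD-Gym README ("HomoNcomIndePOIntrxMASS3CTWN3-v0") and a reachable CARLA server; the fixed action index is an arbitrary stand-in for a trained policy:

```python
# Minimal MACAD-Gym rollout loop with a fixed action (no RL algorithm).
import gym
import macad_gym  # noqa: F401  (importing registers the MACAD-Gym envs)

env = gym.make("HomoNcomIndePOIntrxMASS3CTWN3-v0")
obs = env.reset()  # dict of observations keyed by actor ID
done = {"__all__": False}
while not done["__all__"]:
    # One action per actor; the index 3 is an arbitrary choice from the
    # default discrete action set, standing in for a policy's output
    actions = {actor_id: 3 for actor_id in obs}
    obs, rewards, done, info = env.step(actions)
env.close()
```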
-
Hi @praveen-palanisamy,
I wanted to try the adversarial multi-agent example mentioned in the related paper, but there are only 2 available examples and an adversarial one is not among them.
Kindly help with how to create such an environment and use it for training and testing.
Any information would be helpful.
Thanks