add cart pole rl env #100
base: main
Conversation
Pull request overview
This draft PR adds a CartPole reinforcement learning environment to the embodichain framework. The implementation follows the existing pattern established by the PushCubeRL environment, introducing a new task where an agent must balance a pole on a movable cart by controlling the cart's position.
Changes:
- New CartPole RL environment class with success/failure conditions based on pole angle and velocity
- Optional goal_pose handling in RLEnv base class to support tasks without explicit goal positions
- New pole_upward reward function to encourage keeping the pole upright (see the sketch after this list)
- Configuration files for training and environment setup
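As a rough illustration of the pole_upward reward listed above, a shaping term of this kind might look like the sketch below. The standalone signature and the way the pole angle is obtained are assumptions for illustration; only the idea of rewarding angles near 0 radians (vertical) comes from this PR summary.

```python
import torch

def pole_upward(pole_angle: torch.Tensor) -> torch.Tensor:
    """Reward keeping the pole near vertical (angle = 0 rad).

    A minimal sketch, assuming `pole_angle` is a per-env tensor of pole
    joint angles in radians; the real reward-manager signature in
    embodichain is not shown in this PR summary.
    """
    # Map |angle| in [0, pi/2] to a reward in [1, 0]: fully upright earns
    # 1.0, a pole lying at the truncation limit earns 0.0.
    return 1.0 - torch.abs(pole_angle).clamp(max=torch.pi / 2) / (torch.pi / 2)
```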
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| embodichain/lab/gym/envs/tasks/rl/cart_pole.py | Implements CartPoleEnv with pole balancing logic, success criteria (angle < 0.05, velocity < 0.1), and truncation when pole falls beyond π/2 radians (sketched below the table) |
| embodichain/lab/gym/envs/rl_env.py | Makes goal_pose optional in info dictionary by checking for existence before adding, supporting tasks without explicit goal positions |
| embodichain/lab/gym/envs/managers/rewards.py | Adds pole_upward reward function that rewards keeping pole angle close to vertical (0 radians) |
| embodichain/lab/gym/envs/__init__.py | Imports and registers CartPoleEnv for use in the framework |
| configs/agents/rl/cart_pole/train_config.json | PPO training configuration with hyperparameters, camera settings, and wandb integration |
| configs/agents/rl/cart_pole/gym_config.json | Environment configuration including robot URDF path, joint parameters, observations, rewards, and control settings |
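A minimal sketch of the success and truncation checks described for cart_pole.py in the table above. Only the thresholds (0.05 rad, 0.1 rad/s, π/2 rad) come from this summary; the function names and the batched-tensor interface are assumptions for illustration.

```python
import torch

def check_success(pole_angle: torch.Tensor, pole_velocity: torch.Tensor) -> torch.Tensor:
    """Per-env success: pole nearly vertical and nearly still."""
    return (torch.abs(pole_angle) < 0.05) & (torch.abs(pole_velocity) < 0.1)

def check_truncation(pole_angle: torch.Tensor) -> torch.Tensor:
    """Per-env failure: pole has fallen more than pi/2 rad from vertical."""
    return torch.abs(pole_angle) > torch.pi / 2
```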
| "elapsed_steps": self._elapsed_steps, | ||
| "goal_pose": self.goal_pose, | ||
| } | ||
| if hasattr(self, "_goal_pose") and self._goal_pose is not None: |
`_goal_pose` should be configurable @yhnsu
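As one hedged reading of this suggestion, the goal pose could be taken from the environment config with a `None` default, so tasks like CartPole simply omit it. The config key, constructor shape, and `get_info` method below are assumptions for illustration, not the PR's actual API:

```python
from typing import Any, Dict, Optional

class RLEnvSketch:
    """Sketch of an RL env whose goal pose comes from config, not hard-coding."""

    def __init__(self, cfg: Dict[str, Any]):
        self._elapsed_steps = 0
        # None means the task defines no explicit goal pose (e.g. CartPole).
        self._goal_pose: Optional[Any] = cfg.get("goal_pose")

    def get_info(self) -> Dict[str, Any]:
        info: Dict[str, Any] = {"elapsed_steps": self._elapsed_steps}
        # Only expose goal_pose for tasks that actually define one.
        if self._goal_pose is not None:
            info["goal_pose"] = self._goal_pose
        return info
```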
| "elapsed_steps": self._elapsed_steps, | ||
| "goal_pose": self.goal_pose, | ||
| } | ||
| if hasattr(self, "_goal_pose") and self._goal_pose is not None: |
This can be removed
Description
Add cart pole rl env.
Example:
```
python embodichain/agents/rl/train.py --config configs/agents/rl/basic/cart_pole/train_config.json
```
TODO:
Type of change
Screenshots
ep198_main_cam_allenvs.mp4
Checklist
I have run the `black .` command to format the code base.