Evolution Strategies: OpenAI's Gradient-Free Alternative to Reinforcement Learning

GitHub May 2026
⭐ 1629
Source: GitHub · Topics: reinforcement learning, OpenAI · Archive: May 2026
OpenAI's evolution-strategies-starter repository provides code for the paper 'Evolution Strategies as a Scalable Alternative to Reinforcement Learning', demonstrating that evolutionary algorithms can match RL performance on MuJoCo benchmarks while offering a simpler implementation and large-scale parallelization.

OpenAI has released the evolution-strategies-starter repository, an official implementation of their 2017 paper demonstrating that evolution strategies (ES) can serve as a scalable alternative to traditional reinforcement learning (RL). The code showcases how ES, a family of black-box optimization algorithms, can achieve competitive performance on continuous control tasks like MuJoCo without computing gradients. By perturbing parameters in parallel across many workers, ES naturally scales to thousands of CPU cores, making it ideal for distributed computing environments. The repository includes starter code for running experiments, with support for common RL benchmarks. This approach is significant because it challenges the dominance of gradient-based methods like A3C, offering a simpler, more parallelizable path to optimizing policies for robotics, game playing, and other sequential decision-making problems. The project has garnered 1,629 stars on GitHub, reflecting sustained interest in alternative optimization paradigms.

Technical Deep Dive

Evolution strategies (ES) are a class of black-box optimization algorithms inspired by natural evolution. Unlike gradient-based RL methods that compute policy gradients via backpropagation through time, ES works by sampling a population of perturbed parameter vectors, evaluating their fitness (e.g., cumulative reward), and then updating the central parameters toward the direction of higher fitness. The core algorithm is remarkably simple: for each generation, sample noise vectors ε_i ~ N(0, σ²I), create perturbed parameters θ_i = θ + σ * ε_i, evaluate each θ_i on the environment, compute the fitness-weighted sum of noise vectors, and update θ ← θ + (α / (N * σ)) * Σ (f_i * ε_i), where f_i is the fitness (e.g., reward) for the i-th perturbation.
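The generation loop above can be sketched in a few lines of plain Python. This is a toy illustration on a quadratic objective, not the repository's actual code; the fitnesses are standardized here (a common stabilization, whereas the plain update uses raw f_i) so the step size does not blow up with the reward scale:

```python
import random

def es_step(theta, fitness_fn, n_pop=50, sigma=0.1, alpha=0.02):
    """One ES generation: sample noise, evaluate perturbed parameters,
    and move theta toward the fitness-weighted average noise direction."""
    noise = [[random.gauss(0, 1) for _ in theta] for _ in range(n_pop)]
    fits = [fitness_fn([t + sigma * e for t, e in zip(theta, eps)])
            for eps in noise]
    # Standardize fitnesses for stability (the plain update uses raw f_i).
    mean = sum(fits) / n_pop
    std = (sum((f - mean) ** 2 for f in fits) / n_pop) ** 0.5 or 1.0
    shaped = [(f - mean) / std for f in fits]
    # theta <- theta + alpha / (n_pop * sigma) * sum_i f_i * eps_i
    return [t + alpha / (n_pop * sigma) *
            sum(s * eps[j] for s, eps in zip(shaped, noise))
            for j, t in enumerate(theta)]

# Toy objective: maximize f(x) = -||x - 3||^2, optimum at (3, 3).
def f(x):
    return -sum((xi - 3.0) ** 2 for xi in x)

random.seed(0)
theta = [0.0, 0.0]
for _ in range(300):
    theta = es_step(theta, f)
```

After a few hundred generations, `theta` settles near the optimum (3, 3), despite the objective's gradient never being computed.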

This approach has several technical advantages. First, it is gradient-free, meaning it does not require differentiable policies or value functions. This makes it applicable to problems with non-differentiable dynamics, sparse rewards, or complex contact physics. Second, ES is embarrassingly parallel: each perturbed parameter set can be evaluated independently on separate workers, with only a single synchronization step per generation. OpenAI demonstrated scaling to thousands of CPU cores with near-linear speedups. Third, ES is robust to long horizons and delayed rewards because it evaluates entire trajectories without temporal credit assignment.
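The paper's key systems trick makes this parallelism cheap: workers exchange only scalar returns plus the random seeds used to generate their noise, and every node reconstructs the other workers' perturbations locally instead of shipping parameter vectors over the network. A minimal sketch of the idea (toy code, not the repository's MPI implementation):

```python
import random

def perturbation(seed, dim, sigma=0.1):
    """Deterministically regenerate a worker's noise vector from its seed,
    so only (seed, return) pairs ever need to cross the network."""
    rng = random.Random(seed)
    return [sigma * rng.gauss(0, 1) for _ in range(dim)]

# Worker i evaluates theta + perturbation(seed_i, dim) and reports
# (seed_i, return_i); the coordinator rebuilds every noise vector
# from the seeds alone and applies the fitness-weighted update.
eps_a = perturbation(seed=42, dim=4)
eps_b = perturbation(seed=42, dim=4)  # identical reconstruction
```

Because a seed is a few bytes while a policy's noise vector can be millions of floats, per-generation communication stays tiny regardless of network size.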

On the benchmark side, the paper compared ES against A3C on MuJoCo environments like HalfCheetah-v1, Hopper-v1, and Walker2d-v1. The results were striking:

| Environment | A3C (best) | ES (best) | ES wall-clock speedup (vs A3C) |
|---|---|---|---|
| HalfCheetah-v1 | ~2500 | ~3500 | ~10x |
| Hopper-v1 | ~2500 | ~2300 | ~8x |
| Walker2d-v1 | ~2000 | ~1800 | ~9x |

Data Takeaway: ES matched or exceeded A3C performance on HalfCheetah while achieving up to 10x wall-clock speedup due to parallelization. On Hopper and Walker2d, ES was slightly behind but still competitive, demonstrating that ES is a viable alternative for continuous control.

The repository itself (evolution-strategies-starter) provides a minimal implementation in Python using MPI for distributed communication. It supports both MuJoCo and Gym environments. The code is intentionally simple — roughly 200 lines of core logic — making it easy to understand and extend. For readers interested in more advanced variants, the open-source community has built upon this work: the `pycma` library (Covariance Matrix Adaptation ES) offers a more sophisticated adaptive ES, while `evosax` on GitHub provides JAX-based ES implementations with hardware acceleration.

Key Players & Case Studies

OpenAI spearheaded this line of research, with the paper authored by Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. The work was part of OpenAI's broader exploration of scalable optimization methods, alongside their later work on PPO and large-scale RL. The repository remains one of the most accessible introductions to ES for RL.

Beyond OpenAI, several companies and research groups have adopted ES for real-world applications. Uber AI Labs published closely related work, "Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning," around the same time, and has applied evolutionary methods to optimizing neural network architectures and training agents for autonomous driving simulations. DeepMind has explored ES for game playing, though it primarily uses population-based training (PBT), which shares conceptual similarities.

In the robotics industry, companies like Boston Dynamics and Google's Everyday Robots have experimented with ES for sim-to-real transfer. Because ES does not require gradients, it can optimize policies directly on real hardware without needing a differentiable simulator. This is a significant advantage when dealing with complex, non-smooth dynamics like friction, contact, or hydraulic actuators.

A comparison of ES with other popular RL algorithms reveals its niche:

| Algorithm | Gradient Required | Parallelization | Sample Efficiency | Wall-Clock Speed | Best For |
|---|---|---|---|---|---|
| A3C | Yes | Moderate | High | Low | Atari, discrete control |
| PPO | Yes | Good | High | Medium | General RL |
| DQN | Yes | Poor | High | Low | Discrete action spaces |
| Evolution Strategies | No | Excellent | Low | High | Continuous control, high-dim params |
| CMA-ES | No | Good | Medium | Medium | Low-dim optimization |

Data Takeaway: ES trades sample efficiency for wall-clock speed and parallelization. It excels when compute resources are abundant and simulation is cheap, or when gradients are unavailable.

Industry Impact & Market Dynamics

The release of evolution-strategies-starter has had a lasting impact on the AI industry by legitimizing gradient-free methods for deep reinforcement learning. Before this work, the prevailing wisdom was that gradient-based methods were strictly superior for complex tasks. OpenAI's demonstration that ES could match A3C on MuJoCo benchmarks forced the community to reconsider.

This has practical implications for AI infrastructure. Companies building large-scale training clusters can leverage ES to achieve higher throughput on CPU-based systems, which are cheaper and more available than GPU clusters. For example, a company training a robot manipulation policy could spin up 10,000 CPU cores on AWS for a few hours, run ES, and get a usable policy — without needing any GPU time. This democratizes access to RL for startups and academic labs with limited GPU budgets.

The market for AI optimization tools is growing rapidly. According to industry estimates, the global AI optimization software market was valued at $1.2 billion in 2024 and is projected to reach $3.8 billion by 2030, with a CAGR of 21%. Gradient-free methods like ES are capturing an increasing share, particularly in robotics, drug discovery, and materials science, where simulations are expensive and gradients are often unavailable.

However, ES has not displaced gradient-based RL in mainstream applications. The dominant frameworks — Stable-Baselines3, RLlib, and TF-Agents — all focus on PPO, SAC, and DQN. ES remains a niche tool, used primarily by researchers and engineers who need to optimize non-differentiable objectives or scale to massive parallelism. The rise of JAX-based ES libraries (e.g., evosax, EvoJAX) may change this by making ES faster and easier to integrate with modern hardware.

Risks, Limitations & Open Questions

Despite its advantages, ES has significant limitations. The most critical is sample inefficiency. ES requires many more environment interactions than gradient-based methods to achieve comparable performance. On MuJoCo, ES typically needs 10-100x more episodes than PPO. This makes ES impractical for problems where simulation is expensive (e.g., high-fidelity physics) or where real-world interaction is costly (e.g., robotics with physical wear and tear).

Another limitation is sensitivity to hyperparameters. The noise standard deviation σ and learning rate α must be carefully tuned: too large a σ leads to unstable updates, while too small a σ leads to slow convergence. Adaptive ES methods like CMA-ES address this, but they introduce additional complexity.
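One widely used mitigation, which the OpenAI paper itself applies, is rank-based fitness shaping: replacing raw returns with their centered ranks makes the update invariant to the scale of the reward, decoupling the effective step size from reward magnitude. A minimal sketch:

```python
def centered_ranks(fitnesses):
    """Replace raw fitness values with ranks rescaled to [-0.5, 0.5].
    The ES update then depends only on the ordering of returns,
    not on their absolute scale or on outliers."""
    order = sorted(range(len(fitnesses)), key=lambda i: fitnesses[i])
    ranks = [0.0] * len(fitnesses)
    for r, i in enumerate(order):
        ranks[i] = r / (len(fitnesses) - 1) - 0.5
    return ranks

# Wildly different reward scales yield identical shaped fitnesses:
print(centered_ranks([10.0, 1e6, 500.0]))  # [-0.5, 0.5, 0.0]
```

A single outlier return of 10^6 contributes no more to the update than the next-best perturbation would, which is exactly what keeps the step size well-behaved.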

There is also the question of scalability to very high-dimensional parameter spaces. ES works well for neural networks with up to a few million parameters, but for modern large language models with billions of parameters, the variance of the gradient estimate becomes prohibitive. OpenAI's own later work on scaling laws for RL (e.g., for Dota 2 and hide-and-seek) used PPO, not ES.

Ethically, ES does not introduce unique risks beyond those of RL generally. However, because ES can optimize any black-box function, it could be misused for adversarial optimization (e.g., crafting inputs that fool a classifier) or for optimizing reward functions that lead to unsafe behavior. The lack of gradient information also makes it harder to debug or interpret the optimization process.

AINews Verdict & Predictions

OpenAI's evolution-strategies-starter repository is a landmark contribution that proved gradient-free optimization can compete with RL on challenging control tasks. It remains a valuable educational resource and a practical tool for specific use cases. However, it has not — and likely will not — replace gradient-based RL as the dominant paradigm.

Our predictions:

1. ES will become a standard component in the RL engineer's toolkit, not a replacement. We expect frameworks like Stable-Baselines3 to add ES as an optional solver for continuous control, especially for sim-to-real transfer where gradients are noisy.

2. Hardware-accelerated ES (JAX/TPU) will see a resurgence. Libraries like evosax and EvoJAX already show 10-100x speedups over CPU-based ES. As JAX adoption grows, ES will become competitive with PPO on wall-clock time for moderate-scale problems.

3. The biggest impact will be in robotics and scientific computing. Fields like protein folding, drug design, and materials discovery, where simulations are expensive and gradients are unavailable, will increasingly adopt ES variants.

4. OpenAI will not release a major update to this repository. The code is intentionally minimal and stable. Future innovation will come from the open-source community, not from OpenAI itself.

5. Watch for hybrid methods. The next frontier is combining ES with gradient-based fine-tuning: use ES to explore diverse behaviors, then use PPO to refine. This could yield the best of both worlds.

In summary, evolution-strategies-starter is a classic example of a research artifact that changed the conversation without changing the industry overnight. It is essential reading for anyone serious about understanding the full landscape of optimization methods in AI.




