PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning

📖 Overview

We introduce PaCoRe (Parallel Coordinated Reasoning), a framework that shifts the driver of inference from sequential depth to coordinated parallel breadth, breaking the model's context limitation and massively scaling test-time compute:

  • Think in Parallel: PaCoRe launches massively parallel exploration trajectories.
  • Coordinate over Multiple Rounds: It employs a message-passing architecture that compacts these thoughts into concise messages and synthesizes them to guide the next round.

Trained via large-scale, outcome-based reinforcement learning, PaCoRe masters the Reasoning Synthesis capabilities required to reconcile diverse parallel insights.

The approach yields strong improvements across diverse domains and notably pushes reasoning beyond frontier systems in mathematics: an 8B model reaches 94.5% on HMMT 2025, surpassing GPT-5's 93.2%, by scaling effective test-time compute (TTC) to roughly two million tokens.

We open-source model checkpoints, training data, and the full inference pipeline to accelerate follow-up work!


Figure 1 | Parallel Coordinated Reasoning (PaCoRe) performance. Left: On HMMT 2025, PaCoRe-8B demonstrates remarkable test-time scaling, yielding steady gains and ultimately surpassing GPT-5. Right: On LiveCodeBench, the RLVR-8B model fails to leverage increased test-time compute, while PaCoRe-8B effectively unlocks substantial gains as test-time compute increases.

Figure 2 | PaCoRe training dynamics. Left panels: the training reward and response length steadily increase, demonstrating training stability and effectiveness. Right panels: evaluation on HMMT 2025 and LiveCodeBench (2408-2505). Performance is reported using single-round coordinated reasoning in the PaCoRe inference setting with $\vec{K} = [16]$.

🔥 Releases

[2026/02/03] 🚀 PaCoRe Server is now open source!

[2025/12/09] We are excited to release the PaCoRe-8B ecosystem: model checkpoints, training data, and the full inference pipeline.

🔍 Experiments

| Model | HMMT 2025 | LiveCodeBench (2408-2505) | HLEtext | MultiChallenge |
|---|---|---|---|---|
| GPT-5 | 93.2 (16k) | 83.5 (13k) | 26.0 (14k) | 71.1 (5.0k) |
| Qwen3-235B-Thinking | 82.3 (32k) | 74.5 (21k) | 18.2 (23k) | 60.3 (1.6k) |
| GLM-4.6 | 88.7 (25k) | 79.5 (19k) | 17.2 (21k) | 54.9 (2.2k) |
| DeepSeek-v3.1-Terminus | 86.1 (20k) | 74.9 (11k) | 19.3 (18k) | 54.4 (1.1k) |
| Kimi-K2-Thinking | 86.5 (33k) | 79.2 (25k) | 23.9 (29k) | 66.4 (1.7k) |
| RLVR-8B | 75.4 (48k) | 70.6 (34k) | 9.3 (35k) | 33.3 (1.7k) |
| PaCoRe-8B (low) | 88.2 (243k) | 75.8 (188k) | 13.0 (196k) | 41.8 (13k) |
| PaCoRe-8B (medium) | 92.9 (869k) | 76.7 (659k) | 14.6 (694k) | 45.7 (45k) |
| PaCoRe-8B (high) | 94.5 (1796k) | 78.2 (1391k) | 16.2 (1451k) | 47.0 (95k) |

Table 1 | For each benchmark, we report accuracy together with total TTC (in thousands of tokens). For the low, medium, and high settings, we use the inference trajectory configurations $\vec{K}=[4]$, $[16]$, and $[32, 4]$, respectively.

Key Findings

  • Message Passing Unlocks Scaling. Without compaction, performance flatlines at the context limit. PaCoRe breaks the memory barrier and lets reasoning scale freely.
  • Breadth > Depth. Not all compute is equal: coordinated parallel reasoning delivers far higher returns than extending a single chain.
  • Data as a Force Multiplier. The PaCoRe corpus provides exceptionally valuable supervision—even baseline models see substantial gains when trained on it.

Getting Started 🚀

Data

The data is provided as a list[dict], where each entry represents a training instance with the following fields (see the sketch after this list):

  • conversation: The original problem/prompt messages.
  • responses: A list of cached generated responses (trajectories). These serve as the input messages ($M$) used during PaCoRe training.
  • ground_truth: The verifiable answer used for correctness evaluation.
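For illustration, a single entry might look like the sketch below (the message schema and contents are hypothetical; only the three field names come from the description above):

# Hypothetical training instance illustrating the three fields described above.
example_entry = {
    "conversation": [
        {"role": "user", "content": "Find the sum of all primes below 20."}
    ],
    "responses": [
        "Trajectory 1: the primes below 20 are 2, 3, 5, 7, 11, 13, 17, 19; their sum is 77.",
        "Trajectory 2: summing 2+3+5+7+11+13+17+19 also gives 77.",
    ],
    # Verifiable answer used for correctness evaluation.
    "ground_truth": "77",
}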

Model Serving

You can serve the model directly with vllm serve. Further inference details of PaCoRe are covered in the Inference Pipeline section below.
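For example, assuming the PaCoRe-8B checkpoint has been downloaded locally (the path and port below are placeholders), a typical command might look like:

vllm serve /path/to/PaCoRe-8B --host 0.0.0.0 --port 8000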

Inference Pipeline

Figure 3 | Inference pipeline of PaCoRe. Each round launches broad parallel exploration, compacts the resulting trajectories into compacted messages, and feeds these messages together with the question forward to coordinate the next round. Repeating this process $\hat{R}$ times yields multi-million-token effective TTC while respecting fixed context limits, with the final compacted message serving as the system’s answer.

This section explains the PaCoRe inference pipeline.
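As a rough conceptual sketch of the loop in Figure 3 (not the repository's actual implementation; generate and compact below are placeholders for the model's trajectory sampling and message-compaction calls):

# Conceptual sketch of multi-round parallel coordinated reasoning.
def pacore_infer(question, k_per_round, generate, compact):
    # k_per_round is the breadth configuration, e.g. [4] (low), [16] (medium), [32, 4] (high).
    messages = []  # compacted messages carried across rounds
    for k in k_per_round:
        # Breadth: launch k parallel exploration trajectories conditioned on the
        # question plus the messages compacted from previous rounds.
        trajectories = [generate(question, messages) for _ in range(k)]
        # Coordination: compact the trajectories into a concise message that guides
        # the next round (or, in the final round, serves as the system's answer).
        messages = [compact(question, trajectories)]
    return messages[-1]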

PaCoRe Server Mode (Recommended)

You can run PaCoRe as an OpenAI-compatible server that proxies requests through any upstream LLM provider (vLLM, OpenRouter, etc.) while applying the PaCoRe multi-round parallel reasoning pipeline.

Example: Using OpenRouter as the upstream provider

First, install this package:

pip install -e . 

Then do the following steps:

  1. Set your OpenRouter API key:
export OPENROUTER_API_KEY='sk-or-...'
  2. Start the PaCoRe server:
python playground/example_pacore_server_openrouter_step35_flash.py
  3. Send requests to the server:
import requests
import json

messages = [
    {"role": "user", "content": "Prove that there are infinitely many prime numbers."}
]

response = requests.post(
    url="http://localhost:8000/v1/chat/completions",
    headers={"Content-Type": "application/json"},
    data=json.dumps({
        "model": "stepfun/step-3.5-flash:free",
        "messages": messages,
        "reasoning": {"enabled": True}
    })
)

result = response.json()
print(result["choices"][0]["message"]["content"])

Configuration Options

The PaCoRe server can be configured via environment variables:

| Variable | Description | Default |
|---|---|---|
| PACORE_UPSTREAM_API_BASE | Upstream LLM endpoint URL | http://localhost:8000/v1/chat/completions |
| PACORE_HOST | Server host address | 0.0.0.0 |
| PACORE_PORT | Server port | 8000 |
| PACORE_UPSTREAM_TIMEOUT_SECONDS | Upstream request timeout (seconds) | 7200 |
| PACORE_UPSTREAM_RETRY_TIMES | Number of retries for upstream requests | 5 |
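For example, to point the server at a locally hosted vLLM endpoint and serve PaCoRe on a different port, you might set (values below are illustrative):

export PACORE_UPSTREAM_API_BASE='http://localhost:8001/v1/chat/completions'
export PACORE_PORT=8080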

Customizing the Server

You can also create your own server by extending the Exp base class:

import os

from pacore.server.base_exp import ChatCompletionRequest, Exp

class MyCustomServer(Exp):
    upstream_api_base = "https://your-api-endpoint.com/v1/chat/completions"
    num_responses_per_round = [16]  # PaCoRe breadth configuration

    def get_upstream_extra_headers(self, request: ChatCompletionRequest) -> dict[str, str]:
        # Placeholder: read your provider's API key, e.g. from an environment variable.
        your_api_key = os.environ["UPSTREAM_API_KEY"]
        return {"Authorization": f"Bearer {your_api_key}"}

if __name__ == "__main__":
    MyCustomServer().run()

The num_responses_per_round attribute controls the PaCoRe inference trajectory configuration $\vec{K}$ (see the sketch after this list):

  • [4] → PaCoRe-low
  • [16] → PaCoRe-medium
  • [32, 4] → PaCoRe-high
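For instance, to reproduce the PaCoRe-high setting, a subclass of the example above could simply override this attribute (a sketch; MyCustomServer is the class defined in the previous snippet):

class MyHighComputeServer(MyCustomServer):
    # Two coordination rounds: 32 parallel trajectories in round 1, then 4 in round 2.
    num_responses_per_round = [32, 4]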

Batch Inference Example

You can also run inference over a dataset in batch. In this case, we assume you serve the PaCoRe-8B model locally with vllm serve.

Next, you can run our example inference script with the PaCoRe-low inference setting:

python playground/example_batch_inference_pacore_low_1210.py

You can then find the dumped results in outputs/example_batch_inference_pacore_low_1210/results.jsonl.
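The results file is in JSON Lines format; a minimal way to inspect it (the per-record schema is not documented here, so the snippet only prints each record's keys):

import json

with open("outputs/example_batch_inference_pacore_low_1210/results.jsonl") as f:
    for line in f:
        record = json.loads(line)
        print(record.keys())  # inspect the fields of each dumped result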

🙏 Acknowledgements

  • This work was supported by computing resources and infrastructure provided by StepFun and Tsinghua University.
  • We are deeply grateful to our colleagues for their support:
    • Inference: Song Yuan, Wuxun Xie, Mingliang Li, Bojun Wang.
    • Training: Xing Chen, Yuanwei Lu, Changyi Wan, Yu Zhou.
    • Infra Operations: Shaoliang Pang, Changxin Miao, Xu Zhao, Wei Zhang, Zidong Yang, Junzhe Lin, Yuxiang Yang, Chen Xu, Xin Li, Bin Wang.
    • Data Management: Xiaoxiao Ren, Zhiguo Huang, and Kang An.
    • Helpful Discussions: Liang Zhao, Jianjian Sun, Zejia Weng, JingJing Xie.
  • We are grateful to colleagues from StepFun and Tsinghua University for their valuable feedback and contributions.
  • Our work builds on amazing open-source models and data; thanks again!

🔮 Future Work

We are just scratching the surface of parallel coordinated reasoning. Our roadmap includes:

  • Scaling the Extremes: We plan to apply PaCoRe to stronger foundation models, expand the task domains, and further scale up both the breadth (parallel trajectories) and depth (coordination rounds) to tackle challenges currently deemed unsolvable.
  • Boosting Token Intelligence Density: While we currently scale by volume, we aim to maximize the utility of every unit of compute spent. This involves enabling more efficient parallel exploration through better organization, cooperation, and division of labor among trajectories.
  • Emergent Multi-Agent Intelligence: We are interested in jointly training the synthesis policy and the message-passing mechanism, laying out a minimal yet rich cooperative multi-agent learning environment and offering a valuable playground for studying emergent communication, self-organization, and collective intelligence.
  • Ouroboros for Pre- and Post-Training: We intend to investigate advanced synthetic data generation techniques built on the PaCoRe pipeline to improve both current pre-training and post-training processes.

Advertisement Time 📣

We are currently seeking self-motivated engineers and researchers. If you are interested in our project and would like to contribute to scaling up reasoners all the way to AGI, please feel free to reach out to us at hanqer@stepfun.com


📜 Citation

@misc{pacore2025,
      title={PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning}, 
      author={Jingcheng Hu and Yinmin Zhang and Shijie Shang and Xiaobo Yang and Yue Peng and Zhewei Huang and Hebin Zhou and Xin Wu and Jie Cheng and Fanqi Wan and Xiangwen Kong and Chengyuan Yao and Kaiwen Yan and Ailin Huang and Hongyu Zhou and Qi Han and Zheng Ge and Daxin Jiang and Xiangyu Zhang and Heung-Yeung Shum},
      year={2026},
      eprint={2601.05593},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2601.05593}, 
}
