Update documentation

博惟 2025-05-28 19:44:24 +08:00
commit 011889ddd4
197 changed files with 23206 additions and 0 deletions

.buildinfo Executable file
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 043cebb0d5788ee01db077208c6c342e
tags: 645f666f9bcd5a90fca523b33c5a78b7

.nojekyll Normal file

Binary image files added (not shown): _images/algo_interface.png, _images/buffer_arch.png, _images/launch.png, _images/master_arch.png, _images/param_shard.png, and other figures.

_sources/contrib.md Executable file
@@ -0,0 +1,74 @@
# Contribution Guide
Thank you for your interest in contributing to AReaL! We welcome contributions from everyone, whether you're fixing bugs, improving documentation, or adding new system and algorithmic features.
## Setting Up Your Development Environment
New contributors do not have write permissions to the official repository. Please fork the repository and clone your fork locally. AReaL is fully Python-based, making installation straightforward.
```bash
git clone https://github.com/${your-username}/AReaL
cd AReaL
pip3 install -r requirements.txt
pip3 install -e .
```
## Issue Guidelines
### Issue Templates
Please follow the [issue template on GitHub](https://github.com/inclusionAI/AReaL/tree/main/.github/ISSUE_TEMPLATE). Issues can be:
- Bug reports
- Feature requests
- Refactor requests
The required fields in the template help reduce communication overhead when resolving issues. **Issues with arbitrary formatting may be ignored.**
## Pull Request Guidelines
There are no specific PR templates, but **pull requests should be related to a well-templated issue**. Your PR should:
- Explain how the issue is resolved
- Describe the benefits this change will provide
- Reference the related issue number
## Code Quality
### Code Formatting
Please format your code before opening a PR:
```bash
isort . && black .
```
### Running Tests
AReaL's unit tests are based on the `pytest` framework:
```bash
# Run all tests (excluding GPU tests)
pytest -m "not gpu"
# Run a specific test case
pytest tests/test_something.py
```
**Note**: Running all tests may take several hours to complete.
## Documentation
Writing documentation is an excellent starting point for new contributors. The documentation is located in the `docs` folder and built using [Jupyter Book](https://jupyterbook.org/en/stable/intro.html).
### Adding New Documentation
1. Create your documentation files in the `docs` folder
2. Add the file path to `docs/_toc.yaml`
3. Build the documentation:
```bash
jb build docs
```
4. Preview your changes by opening the HTML files in `docs/_build/html`
This process allows you to see how your documentation will appear before submitting your contribution.

@@ -0,0 +1,103 @@
# Algorithm, Interface & Backends
## Overview
![](algo_interface.png)
Model Interfaces define the computations that can be performed, such as training, inference, and generation. They provide abstract classes and implementations that decouple specific algorithms (e.g., PPO, SFT) from model backends (Megatron, SGLang, vLLM). Algorithm developers may be most interested in adding customized model interfaces.
Model backends integrate external libraries to wrap the model as a `PipelinableEngine`, providing efficient distributed training and inference capabilities.
## Registration
Backends and interfaces follow similar registration protocols:
```python
# Registration (at the end of each interface implementation):
model_api.register_interface("ppo", PPOActorInterface)
# Configuration (in experiment config file):
interface_config = ModelInterfaceAbstraction(
    type_="ppo",
    args=dict(eps_clip=0.2),
)
# Instantiation (in model worker):
interface = make_interface(interface_config)
```
## Customization
### Interfaces
An interface implementation essentially prepares the data and loss function (e.g., reward clipping, computing GAEs) required by a `PipelinableEngine`, calls the actual execution method (such as `PipelinableEngine.train_step`), and then runs post-processing according to the data protocol.
Custom interfaces can be created by subclassing the `ModelInterface` class and implementing the required methods for the desired training paradigm.
Example:
```python
from dataclasses import dataclass

@dataclass
class CustomInterface(model_api.ModelInterface):
    # Custom parameters
    custom_param: float = 1.0

    def train_step(self, model, data, mb_spec):
        module = model.module
        module.train()
        # Custom training logic; `custom_loss_function` is assumed to be defined elsewhere
        stats = module.train_batch(
            input_=data,
            loss_fn=custom_loss_function,
            loss_weight_fn=lambda x: x.data["mask"].count_nonzero(),
            token_normalize_scope="global",
            mb_spec=mb_spec,
            version_steps=model.version.global_step,
        )
        model.inc_version()
        return stats

    def save(self, model, save_dir):
        module = model.module
        module.save_to_hf(tokenizer=model.tokenizer, save_dir=save_dir)

# Register the interface
model_api.register_interface("custom", CustomInterface)
```
Required methods vary based on the interface purpose:
+ For training interfaces: `train_step()` and `save()`
+ For inference-only interfaces: `inference()`
+ For generation interfaces: `generate()`
The interface can be configured in the experiment configuration file, e.g., `ppo_math_exp.py`. Please refer to xxx for how to run unit tests on your implementation.
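As an illustration only (the exact config class and fields in your experiment file may differ), the `custom` interface registered above could be wired in the same way as the PPO example:
```python
# Hypothetical configuration of the custom interface from the example above.
custom_interface_config = ModelInterfaceAbstraction(
    type_="custom",
    args=dict(custom_param=2.0),
)
custom_interface = make_interface(custom_interface_config)
```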
### Backends
Implementing a backend requires overriding the `_initialize` method. Example:
```python
class FSDPEngine(PipelinableEngine):
    def train_step(self, *args, **kwargs):
        ...

class FSDPBackend(ModelBackend):
    def _initialize(self, model):
        module = model.module
        model.module: PipelinableEngine = FSDPEngine(module)
        return model

register_backend("fsdp", FSDPBackend)
```
## Existing Implementations
### Interfaces
+ `ppo_interface.py`: Implementation of the PPO actor and critic.
+ `sft_interface.py`: Implementation of SFT.
### Backends
+ `megatron.py`: Training wrapper based on Megatron Core's `DistributedDataParallel`.
+ `sglang.py`: A wrapper over an SGLang HTTP server for batched generation.
+ `vllm.py`: Deprecated SPMD vLLM backend.

@@ -0,0 +1,67 @@
# Allocation & Parallelism
## GPU Allocation
GPU allocation is controlled by the `allocation_mode` CLI parameter. The most common pattern looks like `"sglang.d2t2p1+d1t4p1"`, which means:
+ The first 4 GPUs are allocated to SGLang for inference with:
- 2-way tensor parallelism
- 2-way data parallelism
+ The remaining GPUs are allocated for training with 4-way tensor parallelism
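The GPU count implied by each segment is simply the product of its parallel degrees (2×2×1 = 4 for generation and 1×4×1 = 4 for training in the example above). A minimal sketch of this arithmetic, assuming the `d/t/p` spelling shown here (this is not the actual parser used by AReaL):
```python
import re

def gpus_in_segment(segment: str) -> int:
    """Multiply the data/tensor/pipeline degrees in a segment such as 'sglang.d2t2p1'."""
    dp, tp, pp = (int(x) for x in re.findall(r"[dtp](\d+)", segment.split(".")[-1]))
    return dp * tp * pp

spec = "sglang.d2t2p1+d1t4p1"
counts = [gpus_in_segment(s) for s in spec.split("+")]
print(counts, "total GPUs:", sum(counts))  # [4, 4] total GPUs: 8
```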
## Parallelism Strategies
### Training
AReaL supports three parallelism strategies for dense models, similar to Megatron:
+ Data Parallelism: Uses Megatron's DistributedDataParallel with AReaL's balanced DP partitioning algorithm (`SequenceSample.split`)
+ Tensor Parallelism: Fully replicates Megatron's `ColumnParallelLinear` and `RowParallelLinear`
+ Pipeline Parallelism: Developed in-house with 1F1B scheduling (planned to be replaced with an open-source implementation due to maintenance challenges)
### Inference
AReaL supports SGLang inference with intra-node tensor parallelism and customized data parallelism.
### Parameter Partitioning
Each model worker holds multiple model shards based on the allocation configuration.
Example: With 4 GPUs configured as:
+ Actor model: First half GPUs with tensor parallelism
+ Critic model: Second half GPUs with pipeline parallelism
+ Reference model: All GPUs with tensor and pipeline parallelism
The parameter distribution would be:
![](param_shard.png)
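For illustration (the precise rank-to-GPU assignment below is an assumption, not taken from the figure), the layout can be thought of as a per-GPU list of model shards:
```python
# Hypothetical shard layout for the 4-GPU example: model name -> GPU id -> parallel coordinates.
layout = {
    "actor":  {0: "tp=0", 1: "tp=1"},                 # first half of GPUs, tensor parallel
    "critic": {2: "pp=0", 3: "pp=1"},                 # second half of GPUs, pipeline parallel
    "ref":    {0: "tp=0,pp=0", 1: "tp=1,pp=0",
               2: "tp=0,pp=1", 3: "tp=1,pp=1"},       # all GPUs, tensor + pipeline parallel
}
for gpu in range(4):
    shards = [f"{name}[{coords[gpu]}]" for name, coords in layout.items() if gpu in coords]
    print(f"GPU {gpu}: " + ", ".join(shards))
```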
## Torch NCCL Communication Groups
During experiments, the following NCCL communication groups are created:
1. Global group: Includes all experiment GPUs (created in `global_comm.py`)
2. Parallelism Group: 3D parallel communication groups for a specific model (may match global group or be a subset, created in `topology.py`)
3. Data transfer groups: Groups between all data-parallel processes of any two models for data transfer (created in `data_manager.py`)
## Parallelism Ranks
Each model worker has a unique GPU index, but may have different parallel strategy coordinates under different model names (actor, critic, etc.).
Example: GPU 2 might have:
+ TP rank 1 for actor model
+ TP rank 0 for reference model
Parallel strategy coordinates are maintained in `realhf.base.constants` and accessed via:
```python
with constants.model_scope(ModelName("actor", 0)):
    dp_rank1 = constants.data_parallel_rank()

with constants.model_scope(ModelName("ref", 0)):
    dp_rank2 = constants.data_parallel_rank()
```
Note: Interface and backend methods are automatically called within a model scope, so the context manager can be omitted in those implementations.

@@ -0,0 +1,3 @@
# Launching Procedure
![Illustration of Experiment Launching](launch.png)

@@ -0,0 +1,29 @@
# Master Worker
## Overview
![](master_arch.png)
The worker architecture of AReaL consists of a single master worker coordinating multiple model workers.
An RL algorithm typically contains several model function calls (MFCs) that need to be executed in a certain order. For example in PPO,
1. `actor_gen` generates responses given a batch of user prompts;
2. `ref_inf` computes the log-probabilities of the tokens under the reference policy;
3. `rew_inf` computes the rewards of the responses;
4. `actor_train` updates the policy with the PPO learning objective.
Here, model function calls 2 and 3 depend on the output of 1, and model function call 4 depends on the outputs of 1, 2, and 3.
The MFCs are coordinated by a `FunctionExecutor` instance, which creates a `ModelFunctionCall` instance for each MFC. The actual computation is performed on model workers via remote procedure calls.
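For illustration only (AReaL derives this ordering from the experiment configuration rather than from a hard-coded mapping), the PPO dependencies above can be written as a map from each MFC to the MFCs whose outputs it consumes:
```python
# Hypothetical dependency map for the PPO example above.
PPO_MFC_DEPS = {
    "actor_gen": [],                                    # consumes only dataset prompts
    "ref_inf": ["actor_gen"],
    "rew_inf": ["actor_gen"],
    "actor_train": ["actor_gen", "ref_inf", "rew_inf"],
}
# Any topological order of this graph is a valid execution order:
# actor_gen -> {ref_inf, rew_inf} -> actor_train.
```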
## Buffer and MFC Execution Order
![](buffer_arch.png)
The master worker creates an `AsyncIOSequenceBuffer`, which is referenced by the `FunctionExecutor` and the `ModelFunctionCall` instances. The buffer is responsible for managing (meta)data and deciding the execution order of the MFCs.
Each datapoint can be seen as a `dict` of tensors. For example, the keys may include `packed_prompts` and `task_ids`. Recall that some MFCs rely on the outputs of others. For example, in PPO the MFC `ref_inf` requires `packed_input_ids`, which is not present initially; it appears as one of the outputs of the MFC `actor_gen`.
The buffer keeps track of the available keys of each datapoint. Each `ModelFunctionCall` instance obtains its next batch via `self.get_batch_for_rpc`, which waits for enough datapoints with all the required keys. This means an MFC does not start executing until all of its required keys are ready. After the model function call finishes, it calls `self.amend_batch` to update the corresponding datapoints with the new keys.
While some keys are produced by MFCs, others are loaded from the dataset via `FunctionExecutor.load_data`. Also note that instead of the actual data, the buffer stores only metadata (data indices, keys, etc.) to reduce the cost of data transfer.
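As a rough illustration of the key-tracking idea (not the actual `AsyncIOSequenceBuffer`, which is asynchronous and stores richer metadata), suppose each datapoint carries the set of keys currently available for it:
```python
# Toy key tracking: a datapoint is ready for an MFC once its available keys
# cover the MFC's required keys. All names below are illustrative.
datapoints = {
    0: {"packed_prompts", "task_ids"},
    1: {"packed_prompts", "task_ids", "packed_input_ids"},  # already processed by actor_gen
}

def ready_for(required_keys: set) -> list:
    """Return ids of datapoints whose available keys cover `required_keys`."""
    return [i for i, keys in datapoints.items() if required_keys <= keys]

print(ready_for({"packed_input_ids"}))   # [1] -- only datapoint 1 can feed ref_inf
datapoints[0].add("packed_input_ids")    # actor_gen amends datapoint 0 with its output
print(ready_for({"packed_input_ids"}))   # [0, 1]
```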

@@ -0,0 +1,73 @@
# Model Worker
## Master-Model Worker Interaction
The master worker sends remote procedure calls (RPCs) to model workers to execute actual computations like `actor_gen` and `actor_train`. The figure below illustrates their interaction throughout an experiment:
![](master-model-interaction.png)
Model worker "compute" involves running a model interface with a specific backend (covered in detail later). For PPO algorithms, model workers sequentially execute:
+ `actor_gen`: `actor` model with SGLang backend + `PPOActorInterface.generate`
+ `rew_inf`: `reward` model (can be null for RLVR) + `MultiTaskRewardInterface.inference`
+ `actor_train`: `actor` model with Megatron backend + `PPOActorInterface.train_step`
## Communication Protocol
### Request-Reply Pattern
The master worker and model workers communicate through a `request_reply_stream` channel that handles requests and metadata responses (actual data like `input_ids` transfers through other channels).
Master (client) can send these requests to model workers (servers):
+ **fetch**: Worker loads local dataset data and sends metadata (e.g., sequence length) to master for buffer storage
+ **spec**: Worker returns dataset specifications for master to calculate experiment steps
+ **model_config**: Worker provides transformer model configuration
+ **clear_data_cache**: Worker clears data transfer and GPU caches
+ **initialize**: Worker initializes parameters, gradient buffers, and optimizer states
+ **generate/inference/train_step**: Worker executes the corresponding computation (note: "inference" refers to a single forward pass)
### Request Hooks
Computation requests ("generate"/"inference"/"train_step") support pre- and post-hooks for:
+ Data transfer (pre-hook)
+ Evaluation
+ Offloading
+ Parameter reallocation
+ Checkpointing (post-hooks)
These hooks often require NCCL communication/synchronization between workers. Implementing them as dedicated hooks prevents deadlocks that could occur if these operations were interleaved with other NCCL communications.
### Request Types
+ **Blocking requests**: Long-running operations requiring NCCL synchronization. Workers cannot execute them immediately, since concurrent blocking requests may need coordinated data transfers. The master sends a "flush" request to indicate that all concurrent requests have been sent.
+ **Non-blocking requests**: Shorter operations without NCCL requirements that can be executed immediately.
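The sketch below illustrates the flush idea under these assumptions (it is not AReaL's actual worker loop): blocking requests are buffered until a "flush" marker arrives, while non-blocking requests run immediately.
```python
from collections import deque

# Toy request handling: buffer blocking requests until "flush" arrives, then
# execute them together; handle non-blocking requests right away.
BLOCKING = {"generate", "inference", "train_step"}

def handle_requests(incoming):
    executed, pending = [], deque()
    for req in incoming:
        if req == "flush":
            while pending:
                executed.append(pending.popleft())
        elif req in BLOCKING:
            pending.append(req)
        else:  # non-blocking, e.g. "model_config", "clear_data_cache"
            executed.append(req)
    return executed

print(handle_requests(["model_config", "generate", "train_step", "flush"]))
# ['model_config', 'generate', 'train_step']
```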
## Data Management
### Distributed Dataset Storage
Datasets are distributed across model workers without overlap. For each model:
+ Processes with PP rank = -1 and TP rank = 0 serve as DP heads
+ Data stores on DP heads of the model used in the first MFC (e.g., actor model DP heads for PPO)
During "fetch" requests:
1. DP head worker loads data into local buffer
2. Sends metadata to master
3. Master tracks metadata and later instructs workers which data to use for each MFC via computation request hooks
### Data Transfer Process
For each MFC:
1. The master specifies which data to use
2. The master provides data locations across workers
3. Workers redistribute the data using:
   - `Redistributor`: Generates the NCCL broadcast/gather/scatter communication plan
   - `DataManager`: Executes the plan
After redistribution, workers with the same DP rank receive identical input data.
### MFC Output Handling
Only workers with PP rank=-1 and TP rank=0 produce output data. These workers:
1. Store data locally
2. Notify master of data locations
3. Master generates new redistribution plans for subsequent MFCs based on this layout information

_sources/eval.md Executable file
@@ -0,0 +1,83 @@
# Evaluation
The evaluation code is located in the `evaluation` folder of the repository. Following the previous tutorial, trained checkpoints will be saved under `/storage/ray/experiments/checkpoints/root/`.
## Setup Evaluation Environment
Start a new container to execute the evaluation script. **Note**: Evaluation requires updates to certain Python libraries, so avoid using the training container for this task.
```bash
docker run -d --name areal-eval --privileged --gpus all --network host --shm-size 700g -v /storage:/storage ghcr.io/inclusionai/areal-runtime:v0.3.0 /bin/bash -c "tail -f /dev/null"
docker exec -it areal-eval bash
```
## Install Dependencies and Run Evaluation
Execute the following commands inside the Docker container:
```bash
cd /storage/codes/AReaL/evaluation
cd latex2sympy
pip install -e .
cd ..
pip install -r requirements.txt
pip install vllm==0.8.5 --no-build-isolation
pip install transformers==4.51.1
pip install prettytable timeout_decorator
mkdir -p /storage/ray/eval_output/
nohup python eval_and_aggregate.py \
--model_path /storage/ray/experiments/checkpoints/root/my-exp/my-trial/epoch1epochstep20globalstep20/ \
--output_path /storage/ray/eval_output/ \
--data_names "math_500,aime24,amc23" \
--max_gen_tokens 32768 &> /storage/ray/eval_output/eval_and_aggregate_parallel.log &
```
### Command Line Parameters
- **`--model_path`**: Path to the saved model parameters
- **`--output_path`**: Path to store generated answers and log files during evaluation
- **`--data_names`**: Dataset(s) to evaluate. Multiple datasets can be separated by commas. Available options: `math_500`, `math`, `gsm8k`, `train_amc_aime`, `aime24`, `amc23`
- **`--max_gen_tokens`**: Maximum length of generated answers (default: 32768)
## Evaluation Results
The evaluation script will output a results table in the terminal:
```
+----------+---------------+---------------+---------------+------------+---------------+--------+---------+
| dataset | num_questions | greedy_length | sample_length | greedy_acc | sample_pass@1 | pass@8 | pass@16 |
+----------+---------------+---------------+---------------+------------+---------------+--------+---------+
| math_500 | 500 | 6757.4 | 4139.5 | 84.4 | 92.7 | 97.3 | 97.7 |
| aime24 | 30 | 19328.0 | 13663.5 | 50.0 | 50.4 | 77.3 | 80.0 |
| amc23 | 40 | 8850.0 | 6526.2 | 80.0 | 90.5 | 96.8 | 98.8 |
+----------+---------------+---------------+---------------+------------+---------------+--------+---------+
```
### Metrics Explanation
- **`{greedy|sample}_length`**: Average answer length under greedy or random sampling strategy
- **`greedy_acc`**: Average accuracy under greedy sampling
- **`sample_pass@{k}`**: Probability of generating a correct answer within `k` attempts under random sampling
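For reference, pass@k is commonly estimated from `n` samples per question with `c` correct answers via the unbiased estimator 1 - C(n-c, k)/C(n, k); the sketch below illustrates this convention (the evaluation script may compute it differently):
```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate given n samples with c correct answers."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 32 samples, 20 correct: pass@1 = 0.625, pass@8 is close to 1.0
print(pass_at_k(32, 20, 1), pass_at_k(32, 20, 8))
```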
## Configuration Details
### Sampling Parameters
- The evaluation script defaults to averaging 32 samples with temperature 0.6
- We observed that the `enforce_eager` parameter in vLLM significantly impacts evaluation performance
- When `enforce_eager=True`, we can reproduce the model performance reported in previous work
- Without this setting, evaluation results may fall below reported performance
- Therefore, we enforce `enforce_eager=True` during evaluation
### Runtime Expectations
Due to the sampling requirements and `enforce_eager` setting, the evaluation process typically takes considerable time.
Runtime depends on several factors:
- Maximum generation length
- Number of questions in the dataset
- Model size
**Performance benchmarks** (on 8x H100 GPUs):
- **AIME dataset**: ~80 minutes
- **MATH_500 dataset**: ~160 minutes

_sources/installation.md Executable file
@@ -0,0 +1,78 @@
# Installation
## Prerequisites
### Hardware Requirements
The following hardware configuration has been extensively tested:
- **GPU**: 8x H800 per node
- **CPU**: 64 cores per node
- **Memory**: 1TB per node
- **Network**: NVSwitch + RoCE 3.2 Tbps
- **Storage**:
- 1TB local storage for single-node experiments
- 10TB shared storage (NAS) for distributed experiments
### Software Requirements
| Component | Version |
|---|:---:|
| Operating System | CentOS 7 / Ubuntu 22.04 or any system meeting the requirements below |
| NVIDIA Driver | 550.127.08 |
| CUDA | 12.8 |
| Git LFS | Required for downloading models, datasets, and AReaL code. See [installation guide](https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage) |
| Docker | 27.5.1 |
| NVIDIA Container Toolkit | See [installation guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) |
| AReaL Image | `ghcr.io/inclusionai/areal-runtime:v0.3.0` (includes runtime dependencies and Ray components) |
**Note**: This tutorial does not cover the installation of NVIDIA Drivers, CUDA, or shared storage mounting, as these depend on your specific node configuration and system version. Please complete these installations independently.
## Runtime Environment
We recommend using Docker with our provided image. The Dockerfile is available in the top-level directory of the AReaL repository.
Pull the Docker image:
```bash
docker pull ghcr.io/inclusionai/areal-runtime:v0.3.0
```
This image includes all training requirements for AReaL.
**For multi-node training**: Ensure shared storage is mounted to the `/storage` directory on every node. All downloads and resources will be stored in this directory, and the AReaL container will mount this directory to `/storage` within the container.
## Code Setup
Clone the AReaL project code to `/storage/codes`:
```bash
mkdir -p /storage/codes
cd /storage/codes/
git clone https://github.com/inclusionAI/AReaL
pip install -r AReaL/requirements.txt
```
## Dataset
Download the provided training dataset and place it in `/storage/datasets/`:
```bash
mkdir -p /storage/datasets/
cd /storage/datasets/
wget https://huggingface.co/datasets/inclusionAI/AReaL-RL-Data/resolve/main/data/boba_106k_0319.jsonl?download=true
```
## Model
We train using open-source models available on Hugging Face Hub. Here's an example using Qwen3 (ensure Git LFS is installed):
```bash
mkdir -p /storage/models
cd /storage/models
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/Qwen/Qwen3-1.7B
cd Qwen3-1.7B
git lfs pull
```
**Alternative**: You can also use the Hugging Face CLI to download models after installing the `huggingface_hub` package. Refer to the [official documentation](https://huggingface.co/docs/huggingface_hub/guides/cli) for details.

_sources/intro.md Executable file
@@ -0,0 +1,6 @@
# Overview
## Welcome to AReaL's documentation!
```{tableofcontents}
```

_sources/markdown-notebooks.md Executable file
@@ -0,0 +1,53 @@
---
jupytext:
formats: md:myst
text_representation:
extension: .md
format_name: myst
format_version: 0.13
jupytext_version: 1.11.5
kernelspec:
display_name: Python 3
language: python
name: python3
---
# Notebooks with MyST Markdown
Jupyter Book also lets you write text-based notebooks using MyST Markdown.
See [the Notebooks with MyST Markdown documentation](https://jupyterbook.org/file-types/myst-notebooks.html) for more detailed instructions.
This page shows off a notebook written in MyST Markdown.
## An example cell
With MyST Markdown, you can define code cells with a directive like so:
```{code-cell}
print(2 + 2)
```
When your book is built, the contents of any `{code-cell}` blocks will be
executed with your default Jupyter kernel, and their outputs will be displayed
in-line with the rest of your content.
```{seealso}
Jupyter Book uses [Jupytext](https://jupytext.readthedocs.io/en/latest/) to convert text-based files to notebooks, and can support [many other text-based notebook files](https://jupyterbook.org/file-types/jupytext.html).
```
## Create a notebook with MyST Markdown
MyST Markdown notebooks are defined by two things:
1. YAML metadata that is needed to understand if / how it should convert text files to notebooks (including information about the kernel needed).
See the YAML at the top of this page for example.
2. The presence of `{code-cell}` directives, which will be executed with your book.
That's all that is needed to get started!
## Quickly add YAML metadata for MyST Notebooks
If you have a markdown file and you'd like to quickly add YAML metadata to it, so that Jupyter Book will treat it as a MyST Markdown Notebook, run the following command:
```
jupyter-book myst init path/to/markdownfile.md
```

_sources/markdown.md Executable file
@@ -0,0 +1,55 @@
# Markdown Files
Whether you write your book's content in Jupyter Notebooks (`.ipynb`) or
in regular markdown files (`.md`), you'll write in the same flavor of markdown
called **MyST Markdown**.
This is a simple file to help you get started and show off some syntax.
## What is MyST?
MyST stands for "Markedly Structured Text". It
is a slight variation on a flavor of markdown called "CommonMark" markdown,
with small syntax extensions to allow you to write **roles** and **directives**
in the Sphinx ecosystem.
For more about MyST, see [the MyST Markdown Overview](https://jupyterbook.org/content/myst.html).
## Sample Roles and Directives
Roles and directives are two of the most powerful tools in Jupyter Book. They
are like functions, but written in a markup language. They both
serve a similar purpose, but **roles are written in one line**, whereas
**directives span many lines**. They both accept different kinds of inputs,
and what they do with those inputs depends on the specific role or directive
that is being called.
Here is a "note" directive:
```{note}
Here is a note
```
It will be rendered in a special box when you build your book.
Here is an inline directive to refer to a document: {doc}`markdown-notebooks`.
## Citations
You can also cite references that are stored in a `bibtex` file. For example,
the following syntax: `` {cite}`holdgraf_evidence_2014` `` will render like
this: {cite}`holdgraf_evidence_2014`.
Moreover, you can insert a bibliography into your page with this syntax:
The `{bibliography}` directive must be used for all the `{cite}` roles to
render properly.
For example, if the references for your book are stored in `references.bib`,
then the bibliography is inserted with:
```{bibliography}
```
## Learn more
This is just a simple starter to get you started.
You can learn a lot more at [jupyterbook.org](https://jupyterbook.org).

_sources/mymarkdown.md Executable file
@@ -0,0 +1,8 @@
# Here's my sample title
This is some sample text.
(section-label)=
## Here's my first section
Here is a [reference to the intro](intro.md). Here is a reference to [](section-label).

_sources/notebooks.ipynb Executable file
@@ -0,0 +1,122 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Content with notebooks\n",
"\n",
"You can also create content with Jupyter Notebooks. This means that you can include\n",
"code blocks and their outputs in your book.\n",
"\n",
"## Markdown + notebooks\n",
"\n",
"As it is markdown, you can embed images, HTML, etc into your posts!\n",
"\n",
"![](https://myst-parser.readthedocs.io/en/latest/_static/logo-wide.svg)\n",
"\n",
"You can also $add_{math}$ and\n",
"\n",
"$$\n",
"math^{blocks}\n",
"$$\n",
"\n",
"or\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"\\mbox{mean} la_{tex} \\\\ \\\\\n",
"math blocks\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"But make sure you \\$Escape \\$your \\$dollar signs \\$you want to keep!\n",
"\n",
"## MyST markdown\n",
"\n",
"MyST markdown works in Jupyter Notebooks as well. For more information about MyST markdown, check\n",
"out [the MyST guide in Jupyter Book](https://jupyterbook.org/content/myst.html),\n",
"or see [the MyST markdown documentation](https://myst-parser.readthedocs.io/en/latest/).\n",
"\n",
"## Code blocks and outputs\n",
"\n",
"Jupyter Book will also embed your code blocks and output in your book.\n",
"For example, here's some sample Matplotlib code:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from matplotlib import rcParams, cycler\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"plt.ion()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fixing random state for reproducibility\n",
"np.random.seed(19680801)\n",
"\n",
"N = 10\n",
"data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)]\n",
"data = np.array(data).T\n",
"cmap = plt.cm.coolwarm\n",
"rcParams['axes.prop_cycle'] = cycler(color=cmap(np.linspace(0, 1, N)))\n",
"\n",
"\n",
"from matplotlib.lines import Line2D\n",
"custom_lines = [Line2D([0], [0], color=cmap(0.), lw=4),\n",
" Line2D([0], [0], color=cmap(.5), lw=4),\n",
" Line2D([0], [0], color=cmap(1.), lw=4)]\n",
"\n",
"fig, ax = plt.subplots(figsize=(10, 5))\n",
"lines = ax.plot(data)\n",
"ax.legend(custom_lines, ['Cold', 'Medium', 'Hot']);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There is a lot more that you can do with outputs (such as including interactive outputs)\n",
"with your book. For more information about this, see [the Jupyter Book documentation](https://jupyterbook.org)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.0"
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"state": {},
"version_major": 2,
"version_minor": 0
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}

_sources/training.md Executable file
@@ -0,0 +1,108 @@
# Training
## Launch the Ray Cluster
### Start the Ray Head Node
On the first node, start the Ray Head with the following command:
```bash
docker run -d --name r1-ray-head --privileged --gpus all --network host --shm-size 700g -v /storage:/storage ghcr.io/inclusionai/areal-runtime:v0.3.0 /bin/bash -c "ray start --head --port=6379 && tail -f /dev/null"
```
### Start Ray Worker Nodes
On all other nodes, start the Ray Worker with the following command (skip this step for single-node setups):
```bash
# Replace with the actual IP address of the first node
RAY_HEAD_IP=xxx.xxx.xxx.xxx
docker run -d --name r1-ray-worker --privileged --gpus all --network host --shm-size 700g -v /storage:/storage ghcr.io/inclusionai/areal-runtime:v0.3.0 /bin/bash -c "ray start --address=$RAY_HEAD_IP:6379 && tail -f /dev/null"
```
### Verify Cluster Status
Once all nodes are running, check the Ray cluster status by entering the container on the first node:
```bash
docker exec -it r1-ray-head bash
ray status
```
You should see the Ray resource status displayed.
## Launch an Experiment
On the first node (where the Ray Head is located), run the following to launch an asynchronous PPO experiment:
```bash
docker exec -it r1-ray-head bash
cd /storage/codes/AReaL
pip3 install -e .
python3 training/main_async_ppo.py --config-name=async-ppo-1.7b-gpu8
```
This command will locate the YAML configuration file `async-ppo-1.7b-gpu8.yaml` in the `training/configs/async-ppo` folder. The meaning of each configuration entry can be found in `realhf/api/cli_args.py`. You can run asynchronous PPO, synchronous PPO, or SFT depending on the script you execute.
After starting, you'll see training launch information like this:
```
20250528-17:12:16.804 quickstart INFO: Running async-ppo-math experiment.
20250528-17:12:16.804 quickstart INFO: Logs will be dumped to /storage/experiments/logs/admin/async-ppo-1.7b-gpu8/my-trial
20250528-17:12:16.804 quickstart INFO: Experiment configs will be dumped to /storage/experiments/logs/admin/async-ppo-1.7b-gpu8/my-trial/config.yaml
20250528-17:12:16.804 quickstart INFO: Model checkpoints will be saved to /storage/experiments/checkpoints/admin/async-ppo-1.7b-gpu8/my-trial
20250528-17:12:19.261 quickstart INFO: Launching experiments with RAY...
```
**Note**: The saved YAML configuration at `/storage/experiments/logs/admin/async-ppo-1.7b-gpu8/my-trial/config.yaml` can be used to reproduce previous experiments.
## Command Line Options
To view all available options:
```bash
python3 -m realhf.apps.quickstart async-ppo-math --help
```
### Important Parameters
- **`mode`**: Always set to `ray`. Do not change this value when following this tutorial.
- **`{actor|critic|ref}.path`**: The path to the model files.
- **`dataset.path`**: The path to the dataset JSONL file.
- **`cluster.fileroot`**: The root path for saving training outputs.
- **`n_nodes`**: The number of nodes in the cluster.
- **`n_gpus_per_node`**: The number of GPUs per node.
- **`allocation_mode`**: The GPU allocation strategy and 3D parallelism configuration for the experiment. Format:
  - `sglang.d${DP1}m${TP1}p${PP1}+d${DP2}m${TP2}p${PP2}`: Configures the parallel strategies for SGLang generation and training respectively. Generation and training use separate GPU sets, and the total GPU count must satisfy DP1×TP1×PP1 + DP2×TP2×PP2 = #GPUs. For example, on 8 GPUs, `sglang.d2m2p1+d1m4p1` assigns 2×2×1 = 4 GPUs to generation and 1×4×1 = 4 GPUs to training.
### Training Control Parameters
- **`exp_ctrl.total_train_epochs`**: Number of training epochs (complete dataset iterations).
- **`exp_ctrl.save_freq_{epochs|steps|secs}`**: Frequency for saving model parameters to persistent storage. Set to null to disable saving.
- **`exp_ctrl.ckpt_freq_{epochs|steps|secs}`**: Frequency for saving temporary parameters for restart capability.
- **`dataset.train_bs_n_seqs`**: Training batch size (number of prompts sampled per training iteration).
- **`group_size`**: Number of responses sampled per prompt.
- **`{actor_train|ref_inf|actor_inf}.mb_spec.max_tokens_per_mb`**: Maximum tokens per mini-batch for forward/backward passes during reference model inference and actor model training. Reduce to avoid OOM errors.
- **`ppo.ppo_n_minibatches`**: Number of mini-batches for dividing data during each PPO update.
- **`ppo.recompute_logprob`**: Whether to compute proximal log probabilities for training.
- **`ppo.use_decoupled_loss`**: Use decoupled loss to stabilize asynchronous training.
- **`ppo.gen.max_new_tokens`**: Maximum tokens to generate per prompt (default: 16k).
- **`ppo.gen.min_new_tokens`**: Minimum tokens to generate per prompt (default: 0).
## Monitoring the Training Process
We recommend using Weights & Biases (wandb) for monitoring. Run `wandb login` or set the `WANDB_API_KEY` environment variable. Set `wandb.mode=True` in your configuration to upload training statistics.
The main log will be saved to `/storage/experiments/logs/admin/async-ppo-1.7b-gpu8/my-trial/main.log` and contains the statistics uploaded to wandb.
### Key Training Statistics
- **`Epoch 1/5`**: Indicates total epochs required and current epoch being trained.
- **`step 6/19`**: Shows current epoch has 19 steps, with the 6th step just completed.
- **`global step 6`**: Step count across all epochs.
- **`task_reward`**: Average reward value of all sampled responses in this step. Should steadily increase during training and eventually stabilize.
- **`importance_weight`**: Average importance sampling ratio across all tokens in the PPO loss. Typically close to 1.0.
- **`actor_clip_ratio`**: Ratio of clipped tokens in PPO loss to total tokens. Usually less than 0.1.
- **`actor_loss`**: PPO loss value. **Does not show clear trends during training** and should not be used as a performance indicator.
- **`avg_seq_len`**: Average length of all sequences (prompts with sampled responses) in this step.
- **`no_eos_ratio`**: Ratio of sampled responses truncated due to exceeding maximum generation length. An increase indicates longer average response lengths.

_sources/troubleshooting.md Executable file
@@ -0,0 +1,56 @@
# Troubleshooting
If the following content does not address your issue, feel free to raise a GitHub Issue.
## Automatic Recovery
With `recover_mode=auto`, if the experiment configuration remains unchanged, AReaL will attempt to discover previous checkpoints and recover the experiment from them.
### Recovery Failure Causes
If automatic recovery fails, check the following possibilities:
**Configuration Changes:**
- The `experiment_name` and `trial_name` in the training script differ from the previous run
- Changes in batch size (`dataset.train_bs_n_seqs` parameter)
- Changes in group size (`group_size` parameter)
- Changes in number of nodes (`n_nodes` parameter)
**Missing Recovery Checkpoints:**
Recovery checkpoints are generated under two conditions by default:
- After completion of the second step
- When a step completes and more than 600 seconds have passed since the last recovery checkpoint (controlled by `exp_ctrl.ckpt_freq_secs=600`)
### Verify Recovery Checkpoint Creation
You can confirm if a recovery checkpoint was generated by searching for the following message in the logs:
```bash
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-11:52:02.760 master worker INFO: Dumped recover info to file.
```
## Memory Issues
### torch.cuda.CudaOutOfMemoryError
The key to resolving this issue is identifying the phase where the error occurs:
#### During Initialization
- Check for idle processes on the GPU
- **Distributed scenarios**: Restart the Ray cluster
- **Single-machine scenarios**: Use `pkill` to terminate processes
#### During SGLang Generation
- Decrease the `actor.sglang.mem_fraction_static` parameter
- Increase the tensor parallelism degree
#### During `actor_inf` or `actor_train`
- **Adjust microbatch size**: Set parameters like `actor_train.mb_spec.max_tokens_per_mb=20480`. This parameter limits tokens per forward/backward pass and can be set as low as the maximum sequence length (including prompt)
- **Modify parallelism strategy**: Adjust `allocation_mode` by:
- Reducing data parallelism
- Increasing tensor or pipeline parallelism
- Preferring pipeline parallelism over tensor parallelism
### CUDA Error: Out of Memory
This issue may occur during data transfer. Try increasing `mem_per_xx_worker` in the CLI arguments.

@@ -0,0 +1,428 @@
# Tutorial
## Prerequisites
### Hardware Requirements
Check if your hardware meets these minimum requirements:
|**Model Size**| **1.5B** |**1.5B**|**1.5B**| **7B** | **7B** | **32B** |
|---|:---:|:---:|:---:|:-------------------------:|:---:|:---:|
| **Nodes** | **1** | **4** | **16** | **4** | **16** | **16** |
| GPU | 8x H800 |8x H800 per node| 8x H800 per node | 8x H800 per node | 8x H800 per node | 8x H800 per node |
| CPU | 48 cores |48 cores per node|48 cores per node| 48 cores per node | 48 cores per node| 48 cores per node|
| Memory | 1 TB |1 TB per node|1 TB per node| 1 TB per node | 1 TB per node| 1 TB per node|
| Network | NVSwitch |NVSwitch + RoCE 3.2 Tbps|NVSwitch + RoCE 3.2 Tbps| NVSwitch + RoCE 3.2 Tbps | NVSwitch + RoCE 3.2 Tbps| NVSwitch + RoCE 3.2 Tbps|
| Storage | 1TB |Shared storage (NAS) 10TB|Shared storage (NAS) 10TB| Shared storage (NAS) 10TB |Shared storage (NAS) 10TB| Shared storage (NAS) 10TB|
| BatchSize x GroupSize | 512x16 | 512x16 | 512x16 | 512x16 | 512x16 | 512x16|
| **Single-step Time (seconds)** | **3461** | **997** | **391** | **2275** | **815** | **6707**|
| **#Steps Until Convergence** | **~250** |**~250** |**~250** |**~400** |**~400** | - |
| **Total Time (Hours)** | **~240** | **~69** | **~27** | **~252** | **~90** | - |
Notes:
- GPUs need to have 80GB memory. Other GPU models with similar specs are acceptable.
- Single-node training can use local storage, but multi-node training requires shared storage.
- We haven't successfully trained a powerful 32B model, so we cannot estimate the required steps and time.
### Software Requirements
This tutorial provides a Docker image. Below are the tested software versions:
| | Version |
|---|:---:|
| OS | CentOS 7 / Ubuntu 22.04 or any other system that meets the software requirements below |
| NVIDIA Driver | 550.127.08 |
| CUDA | 12.5 |
| Git LFS | Refer to: https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage. Mainly used for downloading models, datasets, and AReaL project code. |
| Docker | 27.5.1 |
|NVIDIA Container Toolkit|[Installing the NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)|
| AReaL Image | `ghcr.io/inclusionai/areal-runtime:v0.2.0`. This image includes AReaL's runtime dependencies and Ray components. |
Since the installation of NVIDIA Drivers and CUDA, as well as the mounting of shared storage, depends on node configurations and system versions, please complete these installations independently. This tutorial does not cover their setup.
For multi-node training, ensure that the shared storage is mounted to the `/storage` directory on every node. All subsequent downloads and resources will be stored in this directory. The AReaL container will also mount this directory to `/storage` within the container, enabling seamless access during training.
## One-Click Environment Setup and Training Launch
This section provides a one-click setup script to automatically configure the node environment:
1. Install Docker, Git LFS, and NVIDIA Container Toolkit
2. Pull the AReaL image on each node
3. Download AReaL code, models, and datasets
4. Set up a Ray cluster
5. [Optional] Launch a training task within the Ray cluster
Please perform the following operations on any chosen node:
```bash
mkdir -p /storage/codes
cd /storage/codes/
git clone https://github.com/inclusionAI/AReaL.git
cd /storage/codes/AReaL
python ./examples/env/setup_env_and_start_train.py setup --private_key_file /path/to/ssh_key --ssh_port 22 --username root --hostnames NODE_IP_1 NODE_IP_2 NODE_IP_3 NODE_IP_4 --train_param 1.5B_n1
```
`setup_env_and_start_train.py setup` arguments
- `private_key_file`: SSH private key used to connect to the nodes.
- `ssh_port`: SSH port
- `username`: SSH username
- `hostnames`: Space-separated list of node IPs. Can contain 1, 4, or 16 node IPs
- `train_param`: [Optional] Training parameters used to launch a training task immediately after environment setup. Valid options are: `1.5B_n1`, `1.5B_n4`, `1.5B_n16`, `7B_n4`, `7B_n16`
If the script in this section fails to execute or encounters errors due to environmental discrepancies, you may manually configure the environment and launch training by following the instructions in the subsequent sections of this tutorial.
## Environment Setup
Since shared storage is used, downloading only needs to be done on one node.
### Code
Clone the AReaL project code to `/storage/codes`:
```bash
mkdir -p /storage/codes
cd /storage/codes/
git clone https://github.com/inclusionAI/AReaL
```
### Dataset
We provide a dataset for training. Download the dataset and place it in `/storage/datasets/`:
```bash
mkdir -p /storage/datasets/
cd /storage/datasets/
wget https://huggingface.co/datasets/inclusionAI/AReaL-RL-Data/resolve/main/data/boba_106k_0319.jsonl?download=true
wget https://huggingface.co/datasets/inclusionAI/AReaL-RL-Data/resolve/main/data/orz-zero_56k_0319.jsonl?download=true
```
### Model
We train based on open-source models, which can be downloaded directly from the Hugging Face Hub (please ensure that Git LFS is installed):
```bash
mkdir -p /storage/models
cd /storage/models
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
cd DeepSeek-R1-Distill-Qwen-7B
git lfs pull
```
You can also use the Hugging Face CLI to download models after installing the `huggingface_hub` package with pip. Refer to the [official documentation](https://huggingface.co/docs/huggingface_hub/guides/cli) for details.
### Launch the Ray Cluster
Before proceeding, pull the AReaL environment image, which already includes Ray components.
On the first node, start the Ray Head with the following command:
```bash
docker run -d --name r1-ray-head --privileged --gpus all --network host --shm-size 700g -v /storage:/storage ghcr.io/inclusionai/areal-runtime:v0.2.0 /bin/bash -c "ray start --head --port=6379 && tail -f /dev/null"
```
On all other nodes, start the Ray Worker with the following command (skip this step if you only have one node):
```bash
# RAY_HEAD_IP is the IP of the first node
RAY_HEAD_IP=xxx.xxx.xxx.xxx
docker run -d --name r1-ray-worker --privileged --gpus all --network host --shm-size 700g -v /storage:/storage ghcr.io/inclusionai/areal-runtime:v0.2.0 /bin/bash -c "ray start --address=$RAY_HEAD_IP:6379 && tail -f /dev/null"
```
Once all nodes are up, check the Ray cluster status by entering the container on the first node:
```bash
docker exec -it r1-ray-head bash
ray status
```
You should see the Ray resource status. The output will vary depending on your node count (e.g., a 16-node, 128-GPU cluster will show the following results).
```
======== Autoscaler status: 2025-02-22 14:08:51.061250 ========
Node status
---------------------------------------------------------------
Active:
1 node_d5634ae61bfe6732d957811bed65c8a39f13ece07e0326f941acbc4e
1 node_23b0c08045c9a39bc4c454cae298ee531d9a474215ac5e77a5b01e74
1 node_bc1016320658e92645f29cecb8aaf51c0b7e01a44e8ac9c814dfee59
1 node_4e7d15e9cee9ee0da5d65e45f1e346228c52bc0c557511c6eeab40dc
1 node_c5bcf15e28a00515be5d2a7e8e33d71f0f57cdfaf1003db9e0c74788
1 node_ec3f6ee8f6fdf3a5392bb4dac244668da75d094e084dcbb520ce2525
1 node_dc2f1eef88126ae4ac7902574714af9ab74b78ba037217e73e063639
1 node_a4728608c1fda187dc33bb24e831c42fe5c8a582ad428b6e595933bc
1 node_970379a3ba750ee3b13e31612b6a6b758d50bd4943555b2a13d1bd61
1 node_bf6b658bea9e437fcb642a2d881425662a689d668c92fe1545899b36
1 node_2c69511f410d9360f1d05893fde2c97dd32240e0315afea9b2d286a3
1 node_e4c90c17cc48ad469d123041d3302dcff1f7a82a4805279300812b19
1 node_3f772cbffb206c30b6ccedade83789d78397804bab874ee59563cb96
1 node_429bd5115b5590b612590bb455f2d3ed4f77055d746a184baf807655
1 node_75071820f2c16dc51fa271316b72cd45335ec877c06450d292ab7d54
1 node_6f4323f9038248d82b91321e2c4ca5fa99e65efa2d976c0b896a8964
Pending:
(no pending nodes)
Recent failures:
(no failures)
Resources
---------------------------------------------------------------
Usage:
0.0/2128.0 CPU
0.0/128.0 GPU
0B/21.08TiB memory
0B/2.91TiB object_store_memory
Demands:
(no resource demands)
```
## RL Training
Before starting distributed training, ensure the Ray cluster is up and running properly.
Then, on the first node (where the Ray Head is located), enter the container:
```bash
docker exec -it r1-ray-head bash
cd /storage/codes/AReaL
```
Choose a config file that matches your hardware environment and run it:
```bash
python3 -m realhf.apps.quickstart ppo-math --config ./examples/configs/7B-distill/ppo-7B-distill-gpus-128.yaml
```
After starting, check the training launch information:
```
╭─────────────────────────────────────────────────╮
│ Setting PPOMATHConfig with the Following Values │
╰─────────────────────────────────────────────────╯
───────────────────────── Current Configuration Begin ──────────────────────────
actor (ModelTrainEvalConfig)
actor.type (ModelFamily)
actor.type._class (str) - qwen2
actor.type.size (int) - 7
actor.type.is_critic (bool) - False
...
────────────────────────── Current Configuration End ───────────────────────────
20250222-10:26:34.877 quickstart INFO: Running ppo-math experiment.
20250222-10:44:15.581 quickstart INFO: Logs will be dumped to /storage/ray/experiments/logs/root/ppo-7B-distill-gpus-128/512x16
20250222-10:44:15.581 quickstart INFO: Model checkpoints will be saved to /storage/ray/experiments/checkpoints/root/ppo-7B-distill-gpus-128/512x16
20250222-10:26:36.408 quickstart INFO: Launching experiments with RAY...
```
If errors occur during execution (e.g., keywords like "Error" appear), refer to the troubleshooting section.
### Commandline Options
```bash
python3 -m realhf.apps.quickstart ppo-math --help
```
The descriptions of the important parameters are as follows:
+ `mode`: Always `ray`; do not change it to other values when following this tutorial.
+ `{actor|critic|ref}.path`: The path of the model.
+ `dataset.path`: The path of the dataset JSONL file.
+ `external_configs.cluster_config`: Cluster configuration; e.g., `fileroot` is the root path for saving training outputs.
+ `n_nodes`: The number of nodes
+ `n_gpus_per_node`: The number of GPUs per node
+ `allocation_mode`: The GPU allocation and 3D parallel strategy of the model in the experiment, mainly in the following form:
  + `sglang.d${DP1}m${TP1}p${PP1}+d${DP2}m${TP2}p${PP2}`: Configure the parallel strategies for SGLang generation and training respectively. The generation and training use disjoint sets of GPUs, and the sum of the GPUs used by the two must equal the total number of GPUs, i.e., DP1×TP1×PP1 + DP2×TP2×PP2 = #GPUs.
+ `exp_ctrl.total_train_epochs`: The number of training epochs (i.e., the number of times to iterate over the entire dataset)
+ `exp_ctrl.save_freq_{epochs|steps|secs}`: The frequency of saving the model parameters in persistent storage. If it is set to null, the model will not be saved.
+ `exp_ctrl.ckpt_freq_{epochs|steps|secs}`: The frequency of saving temporary parameters for restart
+ `dataset.train_bs_n_seqs`: The training batch size, that is, the number of prompts to be sampled each time during training
+ `group_size`: The number of answers to be sampled for each prompt
+ `{actor_train|ref_inf}.mb_spec.max_tokens_per_mb`: The maximum number of tokens in the data for each forward/backward pass during the inference of the reference model and the training of the actor model. It can be reduced to avoid OOM errors. These data will accumulate gradients for a single parameter update.
+ `ppo.ppo_n_minibatches`: The number of parts into which all the data will be divided for each PPO update to calculate the loss and update the parameters.
+ `ppo.gen.max_new_tokens`: The maximum number of tokens to be generated for a single prompt, default to 16k.
+ `ppo.gen.min_new_tokens`: The minimum number of tokens to be generated for a single prompt, default to 0.
### Monitoring the Training Process
Here, we use the logs from a 16-node run (the same applies to 1-node and 4-node setups) to explain several methods for observing training progress and results.
#### Training Progress
Search for the keyword `Epoch` in the logs to see the total number of Epochs and Steps:
```bash
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-11:11:56.997 master worker INFO: Epoch 1/1 step 1/19 (global step 1) finishes. Average #tokens per batch is 111847. #End to end# execution time: *2124.429*s. Total time consumption: 2283.862s.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-11:52:02.719 master worker INFO: Epoch 1/1 step 2/19 (global step 2) finishes. Average #tokens per batch is 111847. #End to end# execution time: *2405.716*s. Total time consumption: 4689.584s.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-12:27:25.084 master worker INFO: Epoch 1/1 step 3/19 (global step 3) finishes. Average #tokens per batch is 111847. #End to end# execution time: *2122.318*s. Total time consumption: 6811.949s. Estimated remaining time: 33957.093s.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-13:05:58.246 master worker INFO: Epoch 1/1 step 4/19 (global step 4) finishes. Average #tokens per batch is 111847. #End to end# execution time: *2313.134*s. Total time consumption: 9125.111s. Estimated remaining time: 33265.891s.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-13:44:14.349 master worker INFO: Epoch 1/1 step 5/19 (global step 5) finishes. Average #tokens per batch is 111847. #End to end# execution time: *2296.076*s. Total time consumption: 11421.214s. Estimated remaining time: 31413.800s.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-14:22:33.864 master worker INFO: Epoch 1/1 step 6/19 (global step 6) finishes. Average #tokens per batch is 111847. #End to end# execution time: *2299.448*s. Total time consumption: 13720.729s. Estimated remaining time: 29350.673s.
```
Six log entries are found. We explain the meaning of each field based on the last entry:
- `Epoch 1/1`: Indicates that a total of 1 Epoch is required, and the first Epoch is currently being trained. This example only trains for 1 Epoch. Normally, training should run for 10 Epochs or more.
- `step 6/19`: Indicates that the current Epoch has 19 Steps, and the 6th Step has just finished.
- `global step 6`: Represents the step count across all Epochs.
- `#End to end# execution time: *2299.448*s`: Indicates that the current Step took 2299.448 seconds to complete.
- `Total time consumption: 13720.729s`: The total time elapsed since training started is 13720.729 seconds.
- `Estimated remaining time: 29350.673s`: The estimated time remaining to complete training is 29350.673 seconds.
#### Model Performance
Search for the keyword `task_reward` in the logs.
```bash
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-11:11:56.991 master worker INFO: RPC name actor_train returns {'ppo_approx_kl': -2.2640759198111482e-05, 'actor_loss': 1.1128166761409375e-06, 'actor_clip_ratio': 2.1122002635820536e-07, 'importance_weight': 1.0000014305114746, 'task_reward': -0.2996826171875, 'kl_reward': -2.27004832709099e-07, 'final_reward': -0.30145370960235596, 'advantage': 0.003593671601265669, 'avg_seq_len': 7907.8955078125, 'avg_prompt_len': 105.845703125, 'n_tokens': 127828786.0, 'n_valid_tokens': 127828786.0, 'n_seqs': 16384.0, 'no_eos_ratio': 0.122802734375, 'disable_value': 1.0, 'mask_no_eos_with_zero': 0.0}
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-11:52:02.712 master worker INFO: RPC name actor_train returns {'ppo_approx_kl': -2.493159263394773e-05, 'actor_loss': -3.846728588996484e-07, 'actor_clip_ratio': 3.16789424914532e-07, 'importance_weight': 0.9999996423721313, 'task_reward': -0.6793212890625, 'kl_reward': -2.536311853873485e-07, 'final_reward': -0.6813737154006958, 'advantage': 0.004844569601118565, 'avg_seq_len': 8203.9453125, 'avg_prompt_len': 111.892578125, 'n_tokens': 132580185.0, 'n_valid_tokens': 132580185.0, 'n_seqs': 16384.0, 'no_eos_ratio': 0.13812255859375, 'disable_value': 1.0, 'mask_no_eos_with_zero': 0.0}
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-12:27:25.077 master worker INFO: RPC name actor_train returns {'ppo_approx_kl': -2.572356243035756e-05, 'actor_loss': -5.036404786551429e-07, 'actor_clip_ratio': 1.8960582792715286e-07, 'importance_weight': 0.9999992251396179, 'task_reward': -0.6280517578125, 'kl_reward': -2.988609537624143e-07, 'final_reward': -0.6303607225418091, 'advantage': 0.004505862481892109, 'avg_seq_len': 7834.6328125, 'avg_prompt_len': 108.900390625, 'n_tokens': 126578395.0, 'n_valid_tokens': 126578395.0, 'n_seqs': 16384.0, 'no_eos_ratio': 0.11761474609375, 'disable_value': 1.0, 'mask_no_eos_with_zero': 0.0}
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-13:05:58.239 master worker INFO: RPC name actor_train returns {'ppo_approx_kl': -2.4861981728463434e-05, 'actor_loss': 1.3935685672095133e-07, 'actor_clip_ratio': 3.02603467616791e-07, 'importance_weight': 0.9999998807907104, 'task_reward': -0.78857421875, 'kl_reward': -3.672174671009998e-07, 'final_reward': -0.791388750076294, 'advantage': 0.005053278990089893, 'avg_seq_len': 7773.39404296875, 'avg_prompt_len': 108.7890625, 'n_tokens': 125576883.0, 'n_valid_tokens': 125576883.0, 'n_seqs': 16384.0, 'no_eos_ratio': 0.117919921875, 'disable_value': 1.0, 'mask_no_eos_with_zero': 0.0}
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-13:44:14.342 master worker INFO: RPC name actor_train returns {'ppo_approx_kl': -2.516058702894952e-05, 'actor_loss': -7.665488510610885e-07, 'actor_clip_ratio': 1.9505058901359007e-07, 'importance_weight': 0.9999997615814209, 'task_reward': -0.6158447265625, 'kl_reward': -4.6867208425283025e-07, 'final_reward': -0.6195111274719238, 'advantage': 0.004475570283830166, 'avg_seq_len': 7928.50830078125, 'avg_prompt_len': 105.517578125, 'n_tokens': 128171874.0, 'n_valid_tokens': 128171874.0, 'n_seqs': 16384.0, 'no_eos_ratio': 0.12353515625, 'disable_value': 1.0, 'mask_no_eos_with_zero': 0.0}
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-14:22:33.857 master worker INFO: RPC name actor_train returns {'ppo_approx_kl': -2.4821250917739235e-05, 'actor_loss': -3.922649227661168e-07, 'actor_clip_ratio': 3.323623900541861e-07, 'importance_weight': 1.0000001192092896, 'task_reward': -0.7025146484375, 'kl_reward': -5.863367960046162e-07, 'final_reward': -0.7071446776390076, 'advantage': 0.004277692176401615, 'avg_seq_len': 8002.4873046875, 'avg_prompt_len': 105.951171875, 'n_tokens': 129376851.0, 'n_valid_tokens': 129376851.0, 'n_seqs': 16384.0, 'no_eos_ratio': 0.12286376953125, 'disable_value': 1.0, 'mask_no_eos_with_zero': 0.0}
```
The last entry is used to explain the meaning of key fields:
- `task_reward`: The average reward value of all sampled answers in this step. This value should steadily increase during training and eventually stabilize.
- `importance_weight`: The average importance sampling ratio across all tokens in the PPO loss. This value is typically close to 1.
- `actor_clip_ratio`: The ratio of tokens clipped in the PPO loss to the total number of tokens. This is usually less than 0.1.
- `actor_loss`: The PPO loss. **It does not show a clear upward or downward trend during training** and should not be used as a reference for model performance.
- `avg_seq_len`: The average length of all sequences (i.e., prompts with sampled answers) in this step. In a full multi-stage training process, this value will first decrease and then increase.
- `no_eos_ratio`: The ratio of sampled answers truncated due to exceeding the maximum generation length. An increase in this value indicates that the average length of answers is increasing.
## Evaluation
### Evaluation Process
The evaluation code is located in the `evaluation` folder of the repository. Following the tutorial above, trained checkpoints are saved under the path `/storage/ray/experiments/checkpoints/root/`, for example, `/storage/ray/experiments/checkpoints/root/ppo-zero-distill-7B-n16/1024x16-n16/actor/epoch1epochstep20globalstep20/`.
Start a new container to execute the evaluation script (note: evaluation requires updates to certain Python libraries; avoid using the training container for this task):
```
docker run -d --name r1-eval --privileged --gpus all --network host --shm-size 700g -v /storage:/storage ghcr.io/inclusionai/areal-runtime:v0.2.0 /bin/bash -c "tail -f /dev/null"
docker exec -it r1-eval bash
```
Run the following script inside the Docker container to evaluate (a loop for evaluating multiple checkpoints is sketched after the parameter list below):
```bash
cd /storage/codes/AReaL/evaluation
cd latex2sympy
pip install -e .
cd ..
pip install -r requirements.txt
pip install vllm --no-build-isolation
pip install transformers==4.47.0
pip install prettytable timeout_decorator
mkdir /storage/ray/eval_output/
nohup python eval_and_aggregate.py \
--model_path /storage/ray/experiments/checkpoints/root/ppo-zero-distill-7B-n16/1024x16-n16/actor/epoch1epochstep20globalstep20/ \
--output_path /storage/ray/eval_output/ \
--data_names "math_500,aime24,amc23" \
--max_gen_tokens 32768 &> /storage/ray/eval_output/eval_and_aggregate_parallel.log &
```
+ `--model_path`: Path to the saved model parameters.
+ `--output_path`: Path to store the generated answers and log files during evaluation.
+ `--data_names`: Specify the dataset(s) to evaluate. Multiple datasets can be separated by commas. Default is `math_500, math, gsm8k, train_amc_aime, aime24, amc23`.
+ `--max_gen_tokens`: Maximum length of generated answers. Default is `32768`.
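To evaluate several checkpoints from the same run, a simple loop over the saved `globalstep` directories is enough. This is a sketch: the checkpoint root matches the example path above, but your experiment and trial names will differ, and the evaluations run one after another.
```bash
CKPT_ROOT=/storage/ray/experiments/checkpoints/root/ppo-zero-distill-7B-n16/1024x16-n16/actor
for ckpt in "$CKPT_ROOT"/epoch*globalstep*; do
  out=/storage/ray/eval_output/$(basename "$ckpt")
  mkdir -p "$out"
  python eval_and_aggregate.py \
    --model_path "$ckpt" \
    --output_path "$out" \
    --data_names "math_500,aime24,amc23" \
    --max_gen_tokens 32768 &> "$out/eval.log"
done
```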
### Evaluation Results
The evaluation script will output a table in the terminal, for example:
```
+----------+---------------+---------------+---------------+------------+---------------+--------+---------+
| dataset | num_questions | greedy_length | sample_length | greedy_acc | sample_pass@1 | pass@8 | pass@16 |
+----------+---------------+---------------+---------------+------------+---------------+--------+---------+
| math_500 | 500 | 6757.4 | 4139.5 | 84.4 | 92.7 | 97.3 | 97.7 |
| aime24 | 30 | 19328.0 | 13663.5 | 50.0 | 50.4 | 77.3 | 80.0 |
| amc23 | 40 | 8850.0 | 6526.2 | 80.0 | 90.5 | 96.8 | 98.8 |
+----------+---------------+---------------+---------------+------------+---------------+--------+---------+
```
+ `{greedy|sample}_length`: Average answer length under greedy or random sampling strategy.
+ `greedy_acc`: Average accuracy under greedy sampling.
+ `sample_pass@{k}`: Probability of generating a correct answer on average per `k` attempts under random sampling.
### Additional Notes
#### Key Parameters
+ The evaluation script defaults to taking the average of 32 samples with temperature 0.6.
+ We observed that the `enforce_eager` parameter in vLLM significantly impacts evaluation performance. When `enforce_eager=True`, we can reproduce the model performance reported in previous work; otherwise, the evaluation results may fall below the reported numbers. Therefore, we force `enforce_eager` on during evaluation.
Due to the above reasons, the evaluation process typically takes a considerable amount of time.
#### Runtime
The runtime of the evaluation depends on factors such as the maximum generation length, the number of questions in the dataset, and the model size. On a machine with 8x H100 GPUs, evaluating `aime` and `math_500` takes approximately 80 minutes and 160 minutes, respectively.
## Troubleshooting
If the following content does not address your issue, feel free to raise a GitHub Issue.
### Automatic Recovery
When `recover_mode=auto` is set and the experiment configuration remains the same, AReaL will try to discover previous checkpoints and resume the experiment from the latest one.
If the automatic recover fails, please check the following possibilities:
* The `experiment_name` and `trial_name` in the training script differ from the previous run.
* The batch size (`dataset.train_bs_n_seqs`), group size (`group_size`), or number of nodes (`n_nodes`) changed.
* No recover checkpoint was created in the previous run. By default, recover checkpoints are generated under two conditions:
* A recover checkpoint is only created after the second step completes.
* A new recover checkpoint is created when a step finishes and more than 600 seconds have passed since the last one. This interval is set in `./examples/configs/*/*.yaml` as `exp_ctrl.ckpt_freq_secs=600`.
You can confirm if a recover checkpoint was generated by searching in the log:
```bash
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-11:52:02.760 master worker INFO: Dumped recover info to file.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-12:27:25.105 master worker INFO: Dumped recover info to file.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-13:05:58.264 master worker INFO: Dumped recover info to file.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-13:44:14.411 master worker INFO: Dumped recover info to file.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-14:22:33.883 master worker INFO: Dumped recover info to file.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-14:59:44.925 master worker INFO: Dumped recover info to file.
```
### Series of OutOfMemory Errors
While our scripts are designed to minimize OOM (Out of Memory) errors, they can still occur occasionally, especially as memory fragmentation accumulates and sequence lengths grow. Although these issues are often resolved by automatic restarts, the following targeted solutions may help when restarts become frequent.
#### torch.cuda.CudaOutOfMemoryError
The key to resolving this issue is identifying the phase in which the error occurs (see the sketch after this list).
- **If it occurs during initialization (before `actor_gen`):**
- Check if there are any idle processes on the GPU. In distributed scenarios, restart the Ray cluster. In single-machine scenarios, use `pkill`.
- **This error typically does not occur during the `actor_gen` phase.**
- **If it occurs during `ref_inf` or `actor_train`:**
- Adjust the microbatch size for the corresponding computation task. For example, set `actor_train.mb_spec.max_tokens_per_mb=20480`. This parameter limits the number of tokens per forward/backward pass and can be set as low as the maximum sequence length (including the prompt).
- Modify the parallelism strategy (`allocation_mode`) for the 7B model. Try reducing data parallelism and increasing tensor or pipeline parallelism.
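As a concrete sketch of the two adjustments above (assuming the quickstart launcher accepts dotted `key=value` overrides appended after `--config`, as the parameter names in this tutorial suggest; if your version only reads the YAML file, make the same edits there instead), a relaunch of the 7B/128-GPU config could look like:
```bash
python3 -m realhf.apps.quickstart ppo-math \
  --config ./examples/configs/7B-distill/ppo-7B-distill-gpus-128.yaml \
  actor_train.mb_spec.max_tokens_per_mb=20480 \
  ref_inf.mb_spec.max_tokens_per_mb=20480 \
  allocation_mode=sglang.d32m2p1+d8m4p2   # hypothetical split: 64 GPUs for generation + 64 for training
```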
#### CUDA error: out of memory
This issue may occur during vLLM's initialization of the CPU KV cache, indicating insufficient memory on the machine. To resolve this, reduce the value of `actor.vllm.swap_space`.
#### RuntimeError: Aborted due to the lack of CPU swap space.
This issue arises when the sequence length and KV cache demand exceed GPU memory, and the CPU swap space is insufficient. It is closely related to [Preemption errors](https://docs.vllm.ai/en/latest/performance/optimization.html). To resolve this, increase `actor.vllm.swap_space`. If the error persists, reduce `actor.vllm.max_num_seqs` and refer to the [vLLM documentation](https://docs.vllm.ai/en/latest/performance/optimization.html).
#### CUDA error: an illegal memory access was encountered
This error typically occurs during the vLLM generation phase and is another symptom of insufficient GPU memory. Solutions include (a sketch follows this list):
- Reduce the training batch size or the number of answers generated per prompt. Note that this may lower sample efficiency and extend training time.
- [Switch vLLM's attention backend to xformers](https://github.com/vllm-project/vllm/issues/5376).
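For the second option, vLLM selects its attention backend from the `VLLM_ATTENTION_BACKEND` environment variable, so exporting it before launching the job is usually sufficient (whether it takes effect still depends on your vLLM version and on how the generation workers inherit the environment):
```bash
# Ask vLLM to use the xformers attention backend instead of the default one.
export VLLM_ATTENTION_BACKEND=XFORMERS
```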

View File

@ -0,0 +1,423 @@
# Tutorial
## Prerequisites
### Hardware Requirements
To complete the training process successfully, please check the table below to confirm that your hardware meets the requirements:
| **Model Size** | **1.5B** | **1.5B** | **1.5B** | **7B** | **7B** | **32B** |
|---------------------|---|---|---|---------------------------|---|---|
| Nodes | 1 | 4 | 16 | 4 | 16 | 16 |
| GPUs | 8x H800 | 8x H800 per node | 8x H800 per node | 8x H800 per node | 8x H800 per node | 8x H800 per node |
| CPUs | 48 cores | 48 cores per node | 48 cores per node | 48 cores per node | 48 cores per node | 48 cores per node |
| Memory | 1 TB | 1 TB per node | 1 TB per node | 1 TB per node | 1 TB per node | 1 TB per node |
| Network | NVSwitch | NVSwitch + RoCE, 3.2 Tbps | NVSwitch + RoCE, 3.2 Tbps | NVSwitch + RoCE, 3.2 Tbps | NVSwitch + RoCE, 3.2 Tbps | NVSwitch + RoCE, 3.2 Tbps |
| Storage | 1 TB | Shared storage (NAS), 10 TB | Shared storage (NAS), 10 TB | Shared storage (NAS), 10 TB | Shared storage (NAS), 10 TB | Shared storage (NAS), 10 TB |
| Batch Size x Group Size | 512x16 | 512x16 | 512x16 | 512x16 | 512x16 | 512x16 |
| Time per training step (s) | **3461** | **997** | **391** | **2275** | **815** | **6707** |
| Steps to convergence | **~250** | **~250** | **~250** | **~400** | **~400** | - |
| Total training time (h) | **~240** | **~69** | **~27** | **~252** | **~90** | - |
Notes on the hardware requirements:
- The GPUs need 80 GB of memory; other GPU models of the same class can be used instead.
- Single-node training can use local storage, but multi-node training requires shared storage; otherwise training cannot proceed.
- The 32B model has not yet produced meaningful results, so the number of steps and the time to convergence cannot be estimated.
### Software Requirements
This tutorial provides a Docker image. The software versions below have been tested and can be used as a reference for your setup.
| | Version |
|---|---|
| OS | CentOS 7 / Ubuntu 22.04, or any other system that can run the software below |
| NVIDIA Driver | 550.127.08 |
| CUDA | 12.8 |
| Git LFS | See the [Git LFS installation guide](https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage); mainly used to download the model, dataset, and AReaL code |
| Docker | 27.5.1 |
| NVIDIA Container Toolkit | [NVIDIA Container Toolkit installation guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) |
| Image | ghcr.io/inclusionai/areal-runtime:v0.3.0, which contains the runtime dependencies and the Ray components |
Installing the NVIDIA driver and CUDA and mounting shared storage depend on your nodes and system versions, so please complete these steps yourself; this tutorial does not cover them.
For multi-node training, first mount the shared storage at `/storage` on every node. Everything downloaded later goes into this directory, and the AReaL container also mounts it at `/storage` inside the container so that it is accessible during training.
## One-Click Environment Setup and Training Launch
This section provides a one-click installation script that automatically configures the environment on the nodes:
1. Install Docker, Git LFS, and the NVIDIA Container Toolkit
2. Pull the AReaL image on every node
3. Download the AReaL code, model, and dataset
4. Set up the Ray cluster
5. [Optional] Launch a training job on the Ray cluster
Pick any node and run the following:
```bash
mkdir -p /storage/codes
cd /storage/codes/
git clone https://github.com/inclusionAI/AReaL.git
cd /storage/codes/AReaL
python ./examples/env/setup_env_and_start_train.py setup --private_key_file /path/to/ssh_key --ssh_port 22 --username root --hostnames NODE_IP_1 NODE_IP_2 NODE_IP_3 NODE_IP_4 --train_param 1.5B_n1
```
Arguments of `setup_env_and_start_train.py setup`:
- `private_key_file`: SSH private key file used to connect to the nodes
- `ssh_port`: SSH port
- `username`: SSH username
- `hostnames`: list of node IPs separated by spaces; it can contain 1/4/16 node IPs
- `train_param`: [Optional] training preset used to launch a training job right after the environment is set up. Valid values are `1.5B_n1`, `1.5B_n4`, `1.5B_n16`, `7B_n4`, and `7B_n16`
If the script in this section cannot run or fails because of environment differences, you can also follow the later sections of this tutorial to configure the environment and launch training manually.
## Environment Setup
Because shared storage is used, the downloads only need to be performed on one node.
### Code
Clone the AReaL repository into `/storage/codes`:
```bash
mkdir -p /storage/codes
cd /storage/codes/
git clone https://github.com/inclusionAI/AReaL.git
```
### Dataset
We provide the datasets used for training. Please download them and place them under `/storage/datasets/`:
```bash
mkdir -p /storage/datasets/
cd /storage/datasets/
wget https://huggingface.co/datasets/inclusionAI/AReaL-RL-Data/resolve/main/data/boba_106k_0319.jsonl?download=true
wget https://huggingface.co/datasets/inclusionAI/AReaL-RL-Data/resolve/main/data/orz-zero_56k_0319.jsonl?download=true
```
### Model
Training starts from an open-source model, which can be downloaded directly from the HuggingFace Hub (make sure Git LFS is installed):
```
mkdir -p /storage/models
cd /storage/models
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
cd DeepSeek-R1-Distill-Qwen-7B
git lfs pull
```
You can also download the model with the huggingface CLI after installing huggingface_hub from PyPI; see the [official documentation](https://huggingface.co/docs/huggingface_hub/guides/cli) for details. An example is sketched below.
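For instance, assuming `huggingface_hub` is installed, the following commands mirror the Git LFS download above (the `--local-dir` target simply matches the path used earlier; adjust it to your own layout):
```bash
pip install -U huggingface_hub
huggingface-cli download deepseek-ai/DeepSeek-R1-Distill-Qwen-7B \
  --local-dir /storage/models/DeepSeek-R1-Distill-Qwen-7B
```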
### Start the Ray Cluster
Before this step, pull the AReaL runtime image, which already contains the Ray components.
Run the following command on the first node to start the Ray head:
```bash
docker run -d --name r1-ray-head --privileged --gpus all --network host --shm-size 700g -v /storage:/storage ghcr.io/inclusionai/areal-runtime:v0.3.0 /bin/bash -c "ray start --head --port=6379 && tail -f /dev/null"
```
Run the following command on every node except the first to start a Ray worker (skip this step if you only have one node):
```bash
# RAY_HEAD_IP is the IP address of the first node
RAY_HEAD_IP=xxx.xxx.xxx.xxx
docker run -d --name r1-ray-worker --privileged --gpus all --network host --shm-size 700g -v /storage:/storage ghcr.io/inclusionai/areal-runtime:v0.3.0 /bin/bash -c "ray start --address=$RAY_HEAD_IP:6379 && tail -f /dev/null"
```
Once everything is started, enter the container on the first node via docker exec and check the status of the Ray cluster:
```bash
docker exec -it r1-ray-head bash
ray status
```
You can see Ray's resource status; the output looks like the following (this is a 16-node, 128-GPU cluster, so the output will differ depending on your number of nodes):
```
======== Autoscaler status: 2025-02-22 14:08:51.061250 ========
Node status
---------------------------------------------------------------
Active:
1 node_d5634ae61bfe6732d957811bed65c8a39f13ece07e0326f941acbc4e
1 node_23b0c08045c9a39bc4c454cae298ee531d9a474215ac5e77a5b01e74
1 node_bc1016320658e92645f29cecb8aaf51c0b7e01a44e8ac9c814dfee59
1 node_4e7d15e9cee9ee0da5d65e45f1e346228c52bc0c557511c6eeab40dc
1 node_c5bcf15e28a00515be5d2a7e8e33d71f0f57cdfaf1003db9e0c74788
1 node_ec3f6ee8f6fdf3a5392bb4dac244668da75d094e084dcbb520ce2525
1 node_dc2f1eef88126ae4ac7902574714af9ab74b78ba037217e73e063639
1 node_a4728608c1fda187dc33bb24e831c42fe5c8a582ad428b6e595933bc
1 node_970379a3ba750ee3b13e31612b6a6b758d50bd4943555b2a13d1bd61
1 node_bf6b658bea9e437fcb642a2d881425662a689d668c92fe1545899b36
1 node_2c69511f410d9360f1d05893fde2c97dd32240e0315afea9b2d286a3
1 node_e4c90c17cc48ad469d123041d3302dcff1f7a82a4805279300812b19
1 node_3f772cbffb206c30b6ccedade83789d78397804bab874ee59563cb96
1 node_429bd5115b5590b612590bb455f2d3ed4f77055d746a184baf807655
1 node_75071820f2c16dc51fa271316b72cd45335ec877c06450d292ab7d54
1 node_6f4323f9038248d82b91321e2c4ca5fa99e65efa2d976c0b896a8964
Pending:
(no pending nodes)
Recent failures:
(no failures)
Resources
---------------------------------------------------------------
Usage:
0.0/2128.0 CPU
0.0/128.0 GPU
0B/21.08TiB memory
0B/2.91TiB object_store_memory
Demands:
(no resource demands)
```
## RL Training
Before starting distributed training, make sure the Ray cluster has been started and is healthy.
Then, on the first node (where the Ray head runs), enter the container:
```
docker exec -it r1-ray-head bash
cd /storage/codes/AReaL
```
Pick a configuration that matches your hardware environment and run it:
```bash
python3 -m realhf.apps.quickstart ppo-math --config ./examples/configs/7B-distill/ppo-7B-distill-gpus-128.yaml
```
After launching, you will see the startup log in the terminal:
```
╭─────────────────────────────────────────────────╮
│ Setting PPOMATHConfig with the Following Values │
╰─────────────────────────────────────────────────╯
───────────────────────── Current Configuration Begin ──────────────────────────
actor (ModelTrainEvalConfig)
actor.type (ModelFamily)
actor.type._class (str) - qwen2
actor.type.size (int) - 7
actor.type.is_critic (bool) - False
...
────────────────────────── Current Configuration End ───────────────────────────
20250222-10:26:34.877 quickstart INFO: Running ppo-math experiment.
20250222-10:44:15.581 quickstart INFO: Logs will be dumped to /storage/ray/experiments/logs/root/ppo-7B-distill-gpus-128/512x16
20250222-10:44:15.581 quickstart INFO: Model checkpoints will be saved to /storage/ray/experiments/checkpoints/root/ppo-7B-distill-gpus-128/512x16
20250222-10:26:36.408 quickstart INFO: Launching experiments with RAY...
```
If errors occur during the run (for example, the keyword Error appears), refer to the Troubleshooting section to resolve them.
### Commandline Options
```bash
python3 -m realhf.apps.quickstart ppo-math --help
```
The important options are described below (see the override example after this list):
+ `mode`: always `ray`; do not change it when following this tutorial.
+ `{actor|critic|ref}.path`: path to the model.
+ `dataset.path`: path to the dataset jsonl file.
+ `external_configs.cluster_config`: cluster configuration; for example, `fileroot` is the root directory for training outputs.
+ `n_nodes`: number of nodes.
+ `n_gpus_per_node`: number of GPUs per node.
+ `allocation_mode`: GPU allocation and 3D parallelism strategy for the models in the experiment. The recommended strategy has the following form:
+ `sglang.d${DP1}m${TP1}p${PP1}+d${DP2}m${TP2}p${PP2}`: configures the parallelism of SGLang generation and of training separately. Generation and training are disaggregated onto two disjoint sets of GPUs, whose sizes must sum to the total GPU count, i.e., DP1xTP1xPP1 + DP2xTP2xPP2 = #GPUs. For example, on 16 GPUs, `sglang.d4m2p1+d2m2p2` gives 4x2x1 = 8 GPUs to generation and 2x2x2 = 8 GPUs to training.
+ `exp_ctrl.total_train_epochs`: number of training epochs (i.e., passes over the whole dataset).
+ `exp_ctrl.save_freq_{epochs|steps|secs}`: how often to save model parameters to persistent storage; if set to null, the model will not be saved.
+ `exp_ctrl.ckpt_freq_{epochs|steps|secs}`: how often to save temporary checkpoints used for restarts.
+ `dataset.train_bs_n_seqs`: training batch size, i.e., the number of prompts sampled per training step.
+ `group_size`: number of answers sampled per prompt.
+ `{actor_train|ref_inf}.mb_spec.max_tokens_per_mb`: maximum number of tokens per forward/backward pass during reference model inference and actor model training; can be lowered to avoid OOM errors. Gradients are accumulated across these micro-batches before a single parameter update.
+ `ppo.ppo_n_minibatches`: number of mini-batches the data is split into for loss computation and parameter updates within each PPO update.
+ `ppo.gen.max_new_tokens`: maximum number of tokens generated per prompt; 16k in the default training script.
+ `ppo.gen.min_new_tokens`: minimum number of tokens generated per prompt; 0 by default.
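As a sketch of how these options fit together (assuming dotted `key=value` overrides can be appended to the launch command, as the parameter names suggest; otherwise make the same edits in the YAML file directly), a run with a smaller batch and shorter generations might be launched like this:
```bash
python3 -m realhf.apps.quickstart ppo-math \
  --config ./examples/configs/7B-distill/ppo-7B-distill-gpus-128.yaml \
  dataset.train_bs_n_seqs=256 \
  group_size=8 \
  ppo.gen.max_new_tokens=8192 \
  exp_ctrl.total_train_epochs=10
```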
### Monitoring Training
Using the logs of a 16-node run as an example (1-node and 4-node runs work the same way), here are a few ways to monitor training progress and quality.
#### Checking Training Progress
Search the logs for the keyword Epoch to see the total number of epochs and steps:
```bash
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-11:11:56.997 master worker INFO: Epoch 1/1 step 1/19 (global step 1) finishes. Average #tokens per batch is 111847. #End to end# execution time: *2124.429*s. Total time consumption: 2283.862s.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-11:52:02.719 master worker INFO: Epoch 1/1 step 2/19 (global step 2) finishes. Average #tokens per batch is 111847. #End to end# execution time: *2405.716*s. Total time consumption: 4689.584s.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-12:27:25.084 master worker INFO: Epoch 1/1 step 3/19 (global step 3) finishes. Average #tokens per batch is 111847. #End to end# execution time: *2122.318*s. Total time consumption: 6811.949s. Estimated remaining time: 33957.093s.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-13:05:58.246 master worker INFO: Epoch 1/1 step 4/19 (global step 4) finishes. Average #tokens per batch is 111847. #End to end# execution time: *2313.134*s. Total time consumption: 9125.111s. Estimated remaining time: 33265.891s.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-13:44:14.349 master worker INFO: Epoch 1/1 step 5/19 (global step 5) finishes. Average #tokens per batch is 111847. #End to end# execution time: *2296.076*s. Total time consumption: 11421.214s. Estimated remaining time: 31413.800s.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-14:22:33.864 master worker INFO: Epoch 1/1 step 6/19 (global step 6) finishes. Average #tokens per batch is 111847. #End to end# execution time: *2299.448*s. Total time consumption: 13720.729s. Estimated remaining time: 29350.673s.
```
Six log entries appear; the fields of the last one are explained below (a quick extraction command is sketched after this list):
+ `Epoch 1/1`: a total of 1 epoch will be trained, and the 1st is currently in progress. This example trains only 1 epoch; a normal run should train 10 epochs or more.
+ `step 6/19`: the current epoch has 19 steps, and the 6th is currently in progress.
+ `global step 6`: the index of the current step across all epochs.
+ `#End to end# execution time: *2299.448*s`: the current step took 2299.448 seconds to train.
+ `Total time consumption: 13720.729s`: 13720.729 seconds have elapsed since training was launched.
+ `Estimated remaining time: 29350.673s`: an estimated 29350.673 seconds remain until training completes.
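A quick way to pull just these progress lines out of a long launcher log (a sketch; the path is hypothetical, so point it at the file where your launcher output is written):
```bash
LOG=/storage/ray/train.log   # hypothetical path; use your own launcher log
# One line per finished step, including timing and the remaining-time estimate.
grep -E "Epoch [0-9]+/[0-9]+ step" "$LOG"
```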
#### Checking Training Quality
Search the logs for the keyword `task_reward`:
```bash
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-11:11:56.991 master worker INFO: RPC name actor_train returns {'ppo_approx_kl': -2.2640759198111482e-05, 'actor_loss': 1.1128166761409375e-06, 'actor_clip_ratio': 2.1122002635820536e-07, 'importance_weight': 1.0000014305114746, 'task_reward': -0.2996826171875, 'kl_reward': -2.27004832709099e-07, 'final_reward': -0.30145370960235596, 'advantage': 0.003593671601265669, 'avg_seq_len': 7907.8955078125, 'avg_prompt_len': 105.845703125, 'n_tokens': 127828786.0, 'n_valid_tokens': 127828786.0, 'n_seqs': 16384.0, 'no_eos_ratio': 0.122802734375, 'disable_value': 1.0, 'mask_no_eos_with_zero': 0.0}
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-11:52:02.712 master worker INFO: RPC name actor_train returns {'ppo_approx_kl': -2.493159263394773e-05, 'actor_loss': -3.846728588996484e-07, 'actor_clip_ratio': 3.16789424914532e-07, 'importance_weight': 0.9999996423721313, 'task_reward': -0.6793212890625, 'kl_reward': -2.536311853873485e-07, 'final_reward': -0.6813737154006958, 'advantage': 0.004844569601118565, 'avg_seq_len': 8203.9453125, 'avg_prompt_len': 111.892578125, 'n_tokens': 132580185.0, 'n_valid_tokens': 132580185.0, 'n_seqs': 16384.0, 'no_eos_ratio': 0.13812255859375, 'disable_value': 1.0, 'mask_no_eos_with_zero': 0.0}
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-12:27:25.077 master worker INFO: RPC name actor_train returns {'ppo_approx_kl': -2.572356243035756e-05, 'actor_loss': -5.036404786551429e-07, 'actor_clip_ratio': 1.8960582792715286e-07, 'importance_weight': 0.9999992251396179, 'task_reward': -0.6280517578125, 'kl_reward': -2.988609537624143e-07, 'final_reward': -0.6303607225418091, 'advantage': 0.004505862481892109, 'avg_seq_len': 7834.6328125, 'avg_prompt_len': 108.900390625, 'n_tokens': 126578395.0, 'n_valid_tokens': 126578395.0, 'n_seqs': 16384.0, 'no_eos_ratio': 0.11761474609375, 'disable_value': 1.0, 'mask_no_eos_with_zero': 0.0}
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-13:05:58.239 master worker INFO: RPC name actor_train returns {'ppo_approx_kl': -2.4861981728463434e-05, 'actor_loss': 1.3935685672095133e-07, 'actor_clip_ratio': 3.02603467616791e-07, 'importance_weight': 0.9999998807907104, 'task_reward': -0.78857421875, 'kl_reward': -3.672174671009998e-07, 'final_reward': -0.791388750076294, 'advantage': 0.005053278990089893, 'avg_seq_len': 7773.39404296875, 'avg_prompt_len': 108.7890625, 'n_tokens': 125576883.0, 'n_valid_tokens': 125576883.0, 'n_seqs': 16384.0, 'no_eos_ratio': 0.117919921875, 'disable_value': 1.0, 'mask_no_eos_with_zero': 0.0}
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-13:44:14.342 master worker INFO: RPC name actor_train returns {'ppo_approx_kl': -2.516058702894952e-05, 'actor_loss': -7.665488510610885e-07, 'actor_clip_ratio': 1.9505058901359007e-07, 'importance_weight': 0.9999997615814209, 'task_reward': -0.6158447265625, 'kl_reward': -4.6867208425283025e-07, 'final_reward': -0.6195111274719238, 'advantage': 0.004475570283830166, 'avg_seq_len': 7928.50830078125, 'avg_prompt_len': 105.517578125, 'n_tokens': 128171874.0, 'n_valid_tokens': 128171874.0, 'n_seqs': 16384.0, 'no_eos_ratio': 0.12353515625, 'disable_value': 1.0, 'mask_no_eos_with_zero': 0.0}
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-14:22:33.857 master worker INFO: RPC name actor_train returns {'ppo_approx_kl': -2.4821250917739235e-05, 'actor_loss': -3.922649227661168e-07, 'actor_clip_ratio': 3.323623900541861e-07, 'importance_weight': 1.0000001192092896, 'task_reward': -0.7025146484375, 'kl_reward': -5.863367960046162e-07, 'final_reward': -0.7071446776390076, 'advantage': 0.004277692176401615, 'avg_seq_len': 8002.4873046875, 'avg_prompt_len': 105.951171875, 'n_tokens': 129376851.0, 'n_valid_tokens': 129376851.0, 'n_seqs': 16384.0, 'no_eos_ratio': 0.12286376953125, 'disable_value': 1.0, 'mask_no_eos_with_zero': 0.0}
```
The last entry is used to explain the key fields:
+ `task_reward`: the average reward of all answers sampled in this step. If training proceeds steadily, this value keeps rising and eventually plateaus.
+ `importance_weight`: the average importance-sampling ratio over all tokens in the PPO loss; usually close to 1.
+ `actor_clip_ratio`: the fraction of tokens clipped in the PPO loss; usually below 0.1.
+ `actor_loss`: the PPO loss. **It does not show a clear upward or downward trend during training** and should not be used as a reference for model performance.
+ `avg_seq_len`: the average length of all sequences (prompt plus answer) sampled in this step. In a full multi-stage training run, this value first decreases and then increases.
+ `no_eos_ratio`: the fraction of sampled answers truncated for exceeding the maximum generation length. An increase in this value also indicates that the average answer length is increasing.
## Evaluation
### Evaluation Process
The evaluation code is in the `evaluation` folder of the repository. Checkpoints trained by following the tutorial above are saved under `/storage/ray/experiments/checkpoints/root/`, for example `/storage/ray/experiments/checkpoints/root/ppo-zero-distill-7B-n16/1024x16-n16/actor/epoch1epochstep20globalstep20/`.
Start a new container to run the evaluation script (evaluation needs to update some Python libraries, so do not run it in the training container):
```
docker run -d --name r1-eval --privileged --gpus all --network host --shm-size 700g -v /storage:/storage ghcr.io/inclusionai/areal-runtime:v0.2.0 /bin/bash -c "tail -f /dev/null"
docker exec -it r1-eval bash
```
Run the following script inside the Docker container to evaluate:
```bash
cd /storage/codes/AReaL/evaluation
cd latex2sympy
pip install -e .
cd ..
pip install -r requirements.txt
pip install vllm --no-build-isolation
pip install transformers==4.47.0
pip install prettytable timeout_decorator
mkdir /storage/ray/eval_output/
nohup python eval_and_aggregate.py \
--model_path /storage/ray/experiments/checkpoints/root/ppo-zero-distill-7B-n16/1024x16-n16/actor/epoch1epochstep20globalstep20/ \
--output_path /storage/ray/eval_output/ \
--data_names "math_500,aime24,amc23" \
--max_gen_tokens 32768 &> /storage/ray/eval_output/eval_and_aggregate_parallel.log &
```
+ `--model_path`: path to the saved model parameters
+ `--output_path`: path for the answers and log files generated during evaluation
+ `--data_names`: the dataset(s) to evaluate; separate multiple datasets with commas. Defaults to math_500, aime24, amc23
+ `--max_gen_tokens`: maximum length of generated answers; defaults to 32768
### Evaluation Results
After the evaluation script finishes, it writes a table to the log file /storage/ray/eval_output/eval_and_aggregate_parallel.log, for example:
```
+----------+---------------+---------------+---------------+------------+---------------+--------+---------+
| dataset | num_questions | greedy_length | sample_length | greedy_acc | sample_pass@1 | pass@8 | pass@16 |
+----------+---------------+---------------+---------------+------------+---------------+--------+---------+
| math_500 | 500 | 6757.4 | 4139.5 | 84.4 | 92.7 | 97.3 | 97.7 |
| aime24 | 30 | 19328.0 | 13663.5 | 50.0 | 50.4 | 77.3 | 80.0 |
| amc23 | 40 | 8850.0 | 6526.2 | 80.0 | 90.5 | 96.8 | 98.8 |
+----------+---------------+---------------+---------------+------------+---------------+--------+---------+
```
+ `{greedy|sample}_length`: average answer length under greedy or random sampling
+ `greedy_acc`: average accuracy under greedy sampling
+ `sample_pass@{k}`: under random sampling, the probability that at least one of `k` sampled answers is correct, on average
### Additional Notes
#### Key Parameters
+ The provided evaluation script samples 32 answers per question by default and averages the results, with sampling temperature 0.6.
+ We found that vLLM's `enforce_eager` parameter strongly affects evaluation performance. Only with `enforce_eager=True` can we reproduce the model performance reported by previous work; otherwise the evaluation results fall below the reported numbers. We therefore force `enforce_eager` on when running `eval_and_aggregate_parallel.py`.
For the reasons above, evaluation usually takes a long time.
#### Runtime
The runtime of evaluation depends on the maximum generation length, the number of questions in each dataset, the model size, and so on. On one machine with 8x H100 GPUs, evaluating a 7B model on `math_500,aime24,amc23` with a generation length of 32768 takes about 5 hours.
## Troubleshooting
If the content below does not answer your question, feel free to ask in a GitHub Issue.
### Automatic Recovery
When `recover_mode=auto` is set and the training configuration is the same as before, AReaL will try to find previously generated checkpoints and resume training from them.
If automatic recovery fails, the possible causes are:
+ The `experiment_name` or `trial_name` in the training configuration differs from the previous run
+ The batch size (`dataset.train_bs_n_seqs`), group size (`group_size`), or number of nodes (`n_nodes`) changed
+ No recover checkpoint was created in the previous run. By default, recover checkpoints follow two rules:
+ A recover checkpoint is only created after the second step completes
+ A new recover checkpoint is created when a step finishes and more than 600 seconds have passed since the last one. This parameter is set in the `./examples/configs/*/*.yaml` files as `exp_ctrl.ckpt_freq_secs=600`.
You can search for `Dumped recover` to confirm whether a recover checkpoint was created:
```bash
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-11:52:02.760 master worker INFO: Dumped recover info to file.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-12:27:25.105 master worker INFO: Dumped recover info to file.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-13:05:58.264 master worker INFO: Dumped recover info to file.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-13:44:14.411 master worker INFO: Dumped recover info to file.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-14:22:33.883 master worker INFO: Dumped recover info to file.
(master_worker/0 pid=96390, ip=xxx.xxx.xxx.xxx) 20250222-14:59:44.925 master worker INFO: Dumped recover info to file.
```
### A Series of OutOfMemory Errors
Our scripts do their best to avoid OOM errors, but OOM can still occur occasionally as training progresses, memory fragmentation accumulates, and generated sequences grow longer. Although these problems can usually be resolved by automatic restarts, when restarts become frequent you can also try the following targeted fixes.
#### torch.cuda.CudaOutOfMemoryError
The key to resolving this issue is locating the phase in which the error occurs.
- If it occurs during initialization (before `actor_gen`):
- Check whether there are leftover processes on the GPUs. In distributed setups this can be resolved by restarting the Ray cluster; on a single machine, use `pkill`.
- This error usually does not occur during the `actor_gen` phase.
- If it occurs during `ref_inf` or `actor_train`:
- Reduce the micro-batch size of the corresponding computation, e.g., `actor_train.mb_spec.max_tokens_per_mb=20480`. This parameter means that each forward/backward pass contains at most 20480 tokens; it can be set as low as the maximum generated sequence length (including the prompt).
- Change the model's parallelism strategy (`allocation_mode`): try reducing data parallelism and increasing tensor or pipeline parallelism.
#### CUDA error: out of memory
This can happen while vLLM initializes the CPU KV cache and indicates that the machine is running out of host memory. Reduce `actor.vllm.swap_space` to resolve it.
#### RuntimeError: Aborted due to the lack of CPU swap space.
The cause is that sequences are long and the KV cache demand is high: when GPU memory is insufficient, the KV cache is offloaded to host memory, and the configured swap space is not large enough. This is closely related to [preemption errors](https://docs.vllm.ai/en/latest/performance/optimization.html). The solution is to increase `actor.vllm.swap_space`; if the same error persists, reduce `actor.vllm.max_num_seqs` and consult the [vLLM documentation](https://docs.vllm.ai/en/latest/performance/optimization.html).
#### CUDA error: an illegal memory access was encountered
This usually appears during the vLLM generation phase and is another symptom of insufficient GPU memory. Solutions include:
+ Reduce the training batch size or the number of answers generated per prompt; note that this lowers sample efficiency and lengthens training time
+ [Switch vLLM's attention backend to xformers](https://github.com/vllm-project/vllm/issues/5376)

View File

@ -0,0 +1,101 @@
// @ts-check
// Extra JS capability for selected tabs to be synced
// The selection is stored in local storage so that it persists across page loads.
/**
* @type {Record<string, HTMLElement[]>}
*/
let sd_id_to_elements = {};
const storageKeyPrefix = "sphinx-design-tab-id-";
/**
* Create a key for a tab element.
* @param {HTMLElement} el - The tab element.
* @returns {[string, string, string] | null} - The key.
*
*/
function create_key(el) {
let syncId = el.getAttribute("data-sync-id");
let syncGroup = el.getAttribute("data-sync-group");
if (!syncId || !syncGroup) return null;
return [syncGroup, syncId, syncGroup + "--" + syncId];
}
/**
* Initialize the tab selection.
*
*/
function ready() {
// Find all tabs with sync data
/** @type {string[]} */
let groups = [];
document.querySelectorAll(".sd-tab-label").forEach((label) => {
if (label instanceof HTMLElement) {
let data = create_key(label);
if (data) {
let [group, id, key] = data;
// add click event listener
// @ts-ignore
label.onclick = onSDLabelClick;
// store map of key to elements
if (!sd_id_to_elements[key]) {
sd_id_to_elements[key] = [];
}
sd_id_to_elements[key].push(label);
if (groups.indexOf(group) === -1) {
groups.push(group);
// Check if a specific tab has been selected via URL parameter
const tabParam = new URLSearchParams(window.location.search).get(
group
);
if (tabParam) {
console.log(
"sphinx-design: Selecting tab id for group '" +
group +
"' from URL parameter: " +
tabParam
);
window.sessionStorage.setItem(storageKeyPrefix + group, tabParam);
}
}
// Check is a specific tab has been selected previously
let previousId = window.sessionStorage.getItem(
storageKeyPrefix + group
);
if (previousId === id) {
// console.log(
// "sphinx-design: Selecting tab from session storage: " + id
// );
// @ts-ignore
label.previousElementSibling.checked = true;
}
}
}
});
}
/**
* Activate other tabs with the same sync id.
*
* @this {HTMLElement} - The element that was clicked.
*/
function onSDLabelClick() {
let data = create_key(this);
if (!data) return;
let [group, id, key] = data;
for (const label of sd_id_to_elements[key]) {
if (label === this) continue;
// @ts-ignore
label.previousElementSibling.checked = true;
}
window.sessionStorage.setItem(storageKeyPrefix + group, id);
}
document.addEventListener("DOMContentLoaded", ready, false);

1
_sphinx_design_static/sphinx-design.min.css vendored Executable file

File diff suppressed because one or more lines are too long

925
_static/basic.css Executable file
View File

@ -0,0 +1,925 @@
/*
* basic.css
* ~~~~~~~~~
*
* Sphinx stylesheet -- basic theme.
*
* :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
/* -- main layout ----------------------------------------------------------- */
div.clearer {
clear: both;
}
div.section::after {
display: block;
content: '';
clear: left;
}
/* -- relbar ---------------------------------------------------------------- */
div.related {
width: 100%;
font-size: 90%;
}
div.related h3 {
display: none;
}
div.related ul {
margin: 0;
padding: 0 0 0 10px;
list-style: none;
}
div.related li {
display: inline;
}
div.related li.right {
float: right;
margin-right: 5px;
}
/* -- sidebar --------------------------------------------------------------- */
div.sphinxsidebarwrapper {
padding: 10px 5px 0 10px;
}
div.sphinxsidebar {
float: left;
width: 270px;
margin-left: -100%;
font-size: 90%;
word-wrap: break-word;
overflow-wrap : break-word;
}
div.sphinxsidebar ul {
list-style: none;
}
div.sphinxsidebar ul ul,
div.sphinxsidebar ul.want-points {
margin-left: 20px;
list-style: square;
}
div.sphinxsidebar ul ul {
margin-top: 0;
margin-bottom: 0;
}
div.sphinxsidebar form {
margin-top: 10px;
}
div.sphinxsidebar input {
border: 1px solid #98dbcc;
font-family: sans-serif;
font-size: 1em;
}
div.sphinxsidebar #searchbox form.search {
overflow: hidden;
}
div.sphinxsidebar #searchbox input[type="text"] {
float: left;
width: 80%;
padding: 0.25em;
box-sizing: border-box;
}
div.sphinxsidebar #searchbox input[type="submit"] {
float: left;
width: 20%;
border-left: none;
padding: 0.25em;
box-sizing: border-box;
}
img {
border: 0;
max-width: 100%;
}
/* -- search page ----------------------------------------------------------- */
ul.search {
margin: 10px 0 0 20px;
padding: 0;
}
ul.search li {
padding: 5px 0 5px 20px;
background-image: url(file.png);
background-repeat: no-repeat;
background-position: 0 7px;
}
ul.search li a {
font-weight: bold;
}
ul.search li p.context {
color: #888;
margin: 2px 0 0 30px;
text-align: left;
}
ul.keywordmatches li.goodmatch a {
font-weight: bold;
}
/* -- index page ------------------------------------------------------------ */
table.contentstable {
width: 90%;
margin-left: auto;
margin-right: auto;
}
table.contentstable p.biglink {
line-height: 150%;
}
a.biglink {
font-size: 1.3em;
}
span.linkdescr {
font-style: italic;
padding-top: 5px;
font-size: 90%;
}
/* -- general index --------------------------------------------------------- */
table.indextable {
width: 100%;
}
table.indextable td {
text-align: left;
vertical-align: top;
}
table.indextable ul {
margin-top: 0;
margin-bottom: 0;
list-style-type: none;
}
table.indextable > tbody > tr > td > ul {
padding-left: 0em;
}
table.indextable tr.pcap {
height: 10px;
}
table.indextable tr.cap {
margin-top: 10px;
background-color: #f2f2f2;
}
img.toggler {
margin-right: 3px;
margin-top: 3px;
cursor: pointer;
}
div.modindex-jumpbox {
border-top: 1px solid #ddd;
border-bottom: 1px solid #ddd;
margin: 1em 0 1em 0;
padding: 0.4em;
}
div.genindex-jumpbox {
border-top: 1px solid #ddd;
border-bottom: 1px solid #ddd;
margin: 1em 0 1em 0;
padding: 0.4em;
}
/* -- domain module index --------------------------------------------------- */
table.modindextable td {
padding: 2px;
border-collapse: collapse;
}
/* -- general body styles --------------------------------------------------- */
div.body {
min-width: 360px;
max-width: 800px;
}
div.body p, div.body dd, div.body li, div.body blockquote {
-moz-hyphens: auto;
-ms-hyphens: auto;
-webkit-hyphens: auto;
hyphens: auto;
}
a.headerlink {
visibility: hidden;
}
a:visited {
color: #551A8B;
}
h1:hover > a.headerlink,
h2:hover > a.headerlink,
h3:hover > a.headerlink,
h4:hover > a.headerlink,
h5:hover > a.headerlink,
h6:hover > a.headerlink,
dt:hover > a.headerlink,
caption:hover > a.headerlink,
p.caption:hover > a.headerlink,
div.code-block-caption:hover > a.headerlink {
visibility: visible;
}
div.body p.caption {
text-align: inherit;
}
div.body td {
text-align: left;
}
.first {
margin-top: 0 !important;
}
p.rubric {
margin-top: 30px;
font-weight: bold;
}
img.align-left, figure.align-left, .figure.align-left, object.align-left {
clear: left;
float: left;
margin-right: 1em;
}
img.align-right, figure.align-right, .figure.align-right, object.align-right {
clear: right;
float: right;
margin-left: 1em;
}
img.align-center, figure.align-center, .figure.align-center, object.align-center {
display: block;
margin-left: auto;
margin-right: auto;
}
img.align-default, figure.align-default, .figure.align-default {
display: block;
margin-left: auto;
margin-right: auto;
}
.align-left {
text-align: left;
}
.align-center {
text-align: center;
}
.align-default {
text-align: center;
}
.align-right {
text-align: right;
}
/* -- sidebars -------------------------------------------------------------- */
div.sidebar,
aside.sidebar {
margin: 0 0 0.5em 1em;
border: 1px solid #ddb;
padding: 7px;
background-color: #ffe;
width: 40%;
float: right;
clear: right;
overflow-x: auto;
}
p.sidebar-title {
font-weight: bold;
}
nav.contents,
aside.topic,
div.admonition, div.topic, blockquote {
clear: left;
}
/* -- topics ---------------------------------------------------------------- */
nav.contents,
aside.topic,
div.topic {
border: 1px solid #ccc;
padding: 7px;
margin: 10px 0 10px 0;
}
p.topic-title {
font-size: 1.1em;
font-weight: bold;
margin-top: 10px;
}
/* -- admonitions ----------------------------------------------------------- */
div.admonition {
margin-top: 10px;
margin-bottom: 10px;
padding: 7px;
}
div.admonition dt {
font-weight: bold;
}
p.admonition-title {
margin: 0px 10px 5px 0px;
font-weight: bold;
}
div.body p.centered {
text-align: center;
margin-top: 25px;
}
/* -- content of sidebars/topics/admonitions -------------------------------- */
div.sidebar > :last-child,
aside.sidebar > :last-child,
nav.contents > :last-child,
aside.topic > :last-child,
div.topic > :last-child,
div.admonition > :last-child {
margin-bottom: 0;
}
div.sidebar::after,
aside.sidebar::after,
nav.contents::after,
aside.topic::after,
div.topic::after,
div.admonition::after,
blockquote::after {
display: block;
content: '';
clear: both;
}
/* -- tables ---------------------------------------------------------------- */
table.docutils {
margin-top: 10px;
margin-bottom: 10px;
border: 0;
border-collapse: collapse;
}
table.align-center {
margin-left: auto;
margin-right: auto;
}
table.align-default {
margin-left: auto;
margin-right: auto;
}
table caption span.caption-number {
font-style: italic;
}
table caption span.caption-text {
}
table.docutils td, table.docutils th {
padding: 1px 8px 1px 5px;
border-top: 0;
border-left: 0;
border-right: 0;
border-bottom: 1px solid #aaa;
}
th {
text-align: left;
padding-right: 5px;
}
table.citation {
border-left: solid 1px gray;
margin-left: 1px;
}
table.citation td {
border-bottom: none;
}
th > :first-child,
td > :first-child {
margin-top: 0px;
}
th > :last-child,
td > :last-child {
margin-bottom: 0px;
}
/* -- figures --------------------------------------------------------------- */
div.figure, figure {
margin: 0.5em;
padding: 0.5em;
}
div.figure p.caption, figcaption {
padding: 0.3em;
}
div.figure p.caption span.caption-number,
figcaption span.caption-number {
font-style: italic;
}
div.figure p.caption span.caption-text,
figcaption span.caption-text {
}
/* -- field list styles ----------------------------------------------------- */
table.field-list td, table.field-list th {
border: 0 !important;
}
.field-list ul {
margin: 0;
padding-left: 1em;
}
.field-list p {
margin: 0;
}
.field-name {
-moz-hyphens: manual;
-ms-hyphens: manual;
-webkit-hyphens: manual;
hyphens: manual;
}
/* -- hlist styles ---------------------------------------------------------- */
table.hlist {
margin: 1em 0;
}
table.hlist td {
vertical-align: top;
}
/* -- object description styles --------------------------------------------- */
.sig {
font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace;
}
.sig-name, code.descname {
background-color: transparent;
font-weight: bold;
}
.sig-name {
font-size: 1.1em;
}
code.descname {
font-size: 1.2em;
}
.sig-prename, code.descclassname {
background-color: transparent;
}
.optional {
font-size: 1.3em;
}
.sig-paren {
font-size: larger;
}
.sig-param.n {
font-style: italic;
}
/* C++ specific styling */
.sig-inline.c-texpr,
.sig-inline.cpp-texpr {
font-family: unset;
}
.sig.c .k, .sig.c .kt,
.sig.cpp .k, .sig.cpp .kt {
color: #0033B3;
}
.sig.c .m,
.sig.cpp .m {
color: #1750EB;
}
.sig.c .s, .sig.c .sc,
.sig.cpp .s, .sig.cpp .sc {
color: #067D17;
}
/* -- other body styles ----------------------------------------------------- */
ol.arabic {
list-style: decimal;
}
ol.loweralpha {
list-style: lower-alpha;
}
ol.upperalpha {
list-style: upper-alpha;
}
ol.lowerroman {
list-style: lower-roman;
}
ol.upperroman {
list-style: upper-roman;
}
:not(li) > ol > li:first-child > :first-child,
:not(li) > ul > li:first-child > :first-child {
margin-top: 0px;
}
:not(li) > ol > li:last-child > :last-child,
:not(li) > ul > li:last-child > :last-child {
margin-bottom: 0px;
}
ol.simple ol p,
ol.simple ul p,
ul.simple ol p,
ul.simple ul p {
margin-top: 0;
}
ol.simple > li:not(:first-child) > p,
ul.simple > li:not(:first-child) > p {
margin-top: 0;
}
ol.simple p,
ul.simple p {
margin-bottom: 0;
}
aside.footnote > span,
div.citation > span {
float: left;
}
aside.footnote > span:last-of-type,
div.citation > span:last-of-type {
padding-right: 0.5em;
}
aside.footnote > p {
margin-left: 2em;
}
div.citation > p {
margin-left: 4em;
}
aside.footnote > p:last-of-type,
div.citation > p:last-of-type {
margin-bottom: 0em;
}
aside.footnote > p:last-of-type:after,
div.citation > p:last-of-type:after {
content: "";
clear: both;
}
dl.field-list {
display: grid;
grid-template-columns: fit-content(30%) auto;
}
dl.field-list > dt {
font-weight: bold;
word-break: break-word;
padding-left: 0.5em;
padding-right: 5px;
}
dl.field-list > dd {
padding-left: 0.5em;
margin-top: 0em;
margin-left: 0em;
margin-bottom: 0em;
}
dl {
margin-bottom: 15px;
}
dd > :first-child {
margin-top: 0px;
}
dd ul, dd table {
margin-bottom: 10px;
}
dd {
margin-top: 3px;
margin-bottom: 10px;
margin-left: 30px;
}
.sig dd {
margin-top: 0px;
margin-bottom: 0px;
}
.sig dl {
margin-top: 0px;
margin-bottom: 0px;
}
dl > dd:last-child,
dl > dd:last-child > :last-child {
margin-bottom: 0;
}
dt:target, span.highlighted {
background-color: #fbe54e;
}
rect.highlighted {
fill: #fbe54e;
}
dl.glossary dt {
font-weight: bold;
font-size: 1.1em;
}
.versionmodified {
font-style: italic;
}
.system-message {
background-color: #fda;
padding: 5px;
border: 3px solid red;
}
.footnote:target {
background-color: #ffa;
}
.line-block {
display: block;
margin-top: 1em;
margin-bottom: 1em;
}
.line-block .line-block {
margin-top: 0;
margin-bottom: 0;
margin-left: 1.5em;
}
.guilabel, .menuselection {
font-family: sans-serif;
}
.accelerator {
text-decoration: underline;
}
.classifier {
font-style: oblique;
}
.classifier:before {
font-style: normal;
margin: 0 0.5em;
content: ":";
display: inline-block;
}
abbr, acronym {
border-bottom: dotted 1px;
cursor: help;
}
.translated {
background-color: rgba(207, 255, 207, 0.2)
}
.untranslated {
background-color: rgba(255, 207, 207, 0.2)
}
/* -- code displays --------------------------------------------------------- */
pre {
overflow: auto;
overflow-y: hidden; /* fixes display issues on Chrome browsers */
}
pre, div[class*="highlight-"] {
clear: both;
}
span.pre {
-moz-hyphens: none;
-ms-hyphens: none;
-webkit-hyphens: none;
hyphens: none;
white-space: nowrap;
}
div[class*="highlight-"] {
margin: 1em 0;
}
td.linenos pre {
border: 0;
background-color: transparent;
color: #aaa;
}
table.highlighttable {
display: block;
}
table.highlighttable tbody {
display: block;
}
table.highlighttable tr {
display: flex;
}
table.highlighttable td {
margin: 0;
padding: 0;
}
table.highlighttable td.linenos {
padding-right: 0.5em;
}
table.highlighttable td.code {
flex: 1;
overflow: hidden;
}
.highlight .hll {
display: block;
}
div.highlight pre,
table.highlighttable pre {
margin: 0;
}
div.code-block-caption + div {
margin-top: 0;
}
div.code-block-caption {
margin-top: 1em;
padding: 2px 5px;
font-size: small;
}
div.code-block-caption code {
background-color: transparent;
}
table.highlighttable td.linenos,
span.linenos,
div.highlight span.gp { /* gp: Generic.Prompt */
user-select: none;
-webkit-user-select: text; /* Safari fallback only */
-webkit-user-select: none; /* Chrome/Safari */
-moz-user-select: none; /* Firefox */
-ms-user-select: none; /* IE10+ */
}
div.code-block-caption span.caption-number {
padding: 0.1em 0.3em;
font-style: italic;
}
div.code-block-caption span.caption-text {
}
div.literal-block-wrapper {
margin: 1em 0;
}
code.xref, a code {
background-color: transparent;
font-weight: bold;
}
h1 code, h2 code, h3 code, h4 code, h5 code, h6 code {
background-color: transparent;
}
.viewcode-link {
float: right;
}
.viewcode-back {
float: right;
font-family: sans-serif;
}
div.viewcode-block:target {
margin: -1px -10px;
padding: 0 10px;
}
/* -- math display ---------------------------------------------------------- */
img.math {
vertical-align: middle;
}
div.body div.math p {
text-align: center;
}
span.eqno {
float: right;
}
span.eqno a.headerlink {
position: absolute;
z-index: 1;
}
div.math:hover a.headerlink {
visibility: visible;
}
/* -- printout stylesheet --------------------------------------------------- */
@media print {
div.document,
div.documentwrapper,
div.bodywrapper {
margin: 0 !important;
width: 100%;
}
div.sphinxsidebar,
div.related,
div.footer,
#top-link {
display: none;
}
}

4
_static/check-solid.svg Executable file
View File

@ -0,0 +1,4 @@
<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-check" width="44" height="44" viewBox="0 0 24 24" stroke-width="2" stroke="#22863a" fill="none" stroke-linecap="round" stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none"/>
<path d="M5 12l5 5l10 -10" />
</svg>

After

Width:  |  Height:  |  Size: 313 B

7
_static/clipboard.min.js vendored Executable file

File diff suppressed because one or more lines are too long

5
_static/copy-button.svg Executable file
View File

@ -0,0 +1,5 @@
<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-copy" width="44" height="44" viewBox="0 0 24 24" stroke-width="1.5" stroke="#000000" fill="none" stroke-linecap="round" stroke-linejoin="round">
<path stroke="none" d="M0 0h24v24H0z" fill="none"/>
<rect x="8" y="8" width="12" height="12" rx="2" />
<path d="M16 8v-2a2 2 0 0 0 -2 -2h-8a2 2 0 0 0 -2 2v8a2 2 0 0 0 2 2h2" />
</svg>

After

Width:  |  Height:  |  Size: 411 B

94
_static/copybutton.css Executable file
View File

@ -0,0 +1,94 @@
/* Copy buttons */
button.copybtn {
position: absolute;
display: flex;
top: .3em;
right: .3em;
width: 1.7em;
height: 1.7em;
opacity: 0;
transition: opacity 0.3s, border .3s, background-color .3s;
user-select: none;
padding: 0;
border: none;
outline: none;
border-radius: 0.4em;
/* The colors that GitHub uses */
border: #1b1f2426 1px solid;
background-color: #f6f8fa;
color: #57606a;
}
button.copybtn.success {
border-color: #22863a;
color: #22863a;
}
button.copybtn svg {
stroke: currentColor;
width: 1.5em;
height: 1.5em;
padding: 0.1em;
}
div.highlight {
position: relative;
}
/* Show the copybutton */
.highlight:hover button.copybtn, button.copybtn.success {
opacity: 1;
}
.highlight button.copybtn:hover {
background-color: rgb(235, 235, 235);
}
.highlight button.copybtn:active {
background-color: rgb(187, 187, 187);
}
/**
* A minimal CSS-only tooltip copied from:
* https://codepen.io/mildrenben/pen/rVBrpK
*
* To use, write HTML like the following:
*
* <p class="o-tooltip--left" data-tooltip="Hey">Short</p>
*/
.o-tooltip--left {
position: relative;
}
.o-tooltip--left:after {
opacity: 0;
visibility: hidden;
position: absolute;
content: attr(data-tooltip);
padding: .2em;
font-size: .8em;
left: -.2em;
background: grey;
color: white;
white-space: nowrap;
z-index: 2;
border-radius: 2px;
transform: translateX(-102%) translateY(0);
transition: opacity 0.2s cubic-bezier(0.64, 0.09, 0.08, 1), transform 0.2s cubic-bezier(0.64, 0.09, 0.08, 1);
}
.o-tooltip--left:hover:after {
display: block;
opacity: 1;
visibility: visible;
transform: translateX(-100%) translateY(0);
transition: opacity 0.2s cubic-bezier(0.64, 0.09, 0.08, 1), transform 0.2s cubic-bezier(0.64, 0.09, 0.08, 1);
transition-delay: .5s;
}
/* By default the copy button shouldn't show up when printing a page */
@media print {
button.copybtn {
display: none;
}
}

248
_static/copybutton.js Executable file
View File

@ -0,0 +1,248 @@
// Localization support
const messages = {
'en': {
'copy': 'Copy',
'copy_to_clipboard': 'Copy to clipboard',
'copy_success': 'Copied!',
'copy_failure': 'Failed to copy',
},
'es' : {
'copy': 'Copiar',
'copy_to_clipboard': 'Copiar al portapapeles',
'copy_success': '¡Copiado!',
'copy_failure': 'Error al copiar',
},
'de' : {
'copy': 'Kopieren',
'copy_to_clipboard': 'In die Zwischenablage kopieren',
'copy_success': 'Kopiert!',
'copy_failure': 'Fehler beim Kopieren',
},
'fr' : {
'copy': 'Copier',
'copy_to_clipboard': 'Copier dans le presse-papier',
'copy_success': 'Copié !',
'copy_failure': 'Échec de la copie',
},
'ru': {
'copy': 'Скопировать',
'copy_to_clipboard': 'Скопировать в буфер',
'copy_success': 'Скопировано!',
'copy_failure': 'Не удалось скопировать',
},
'zh-CN': {
'copy': '复制',
'copy_to_clipboard': '复制到剪贴板',
'copy_success': '复制成功!',
'copy_failure': '复制失败',
},
'it' : {
'copy': 'Copiare',
'copy_to_clipboard': 'Copiato negli appunti',
'copy_success': 'Copiato!',
'copy_failure': 'Errore durante la copia',
}
}
let locale = 'en'
if( document.documentElement.lang !== undefined
&& messages[document.documentElement.lang] !== undefined ) {
locale = document.documentElement.lang
}
let doc_url_root = DOCUMENTATION_OPTIONS.URL_ROOT;
if (doc_url_root == '#') {
doc_url_root = '';
}
/**
* SVG files for our copy buttons
*/
let iconCheck = `<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-check" width="44" height="44" viewBox="0 0 24 24" stroke-width="2" stroke="#22863a" fill="none" stroke-linecap="round" stroke-linejoin="round">
<title>${messages[locale]['copy_success']}</title>
<path stroke="none" d="M0 0h24v24H0z" fill="none"/>
<path d="M5 12l5 5l10 -10" />
</svg>`
// If the user specified their own SVG use that, otherwise use the default
let iconCopy = ``;
if (!iconCopy) {
iconCopy = `<svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-copy" width="44" height="44" viewBox="0 0 24 24" stroke-width="1.5" stroke="#000000" fill="none" stroke-linecap="round" stroke-linejoin="round">
<title>${messages[locale]['copy_to_clipboard']}</title>
<path stroke="none" d="M0 0h24v24H0z" fill="none"/>
<rect x="8" y="8" width="12" height="12" rx="2" />
<path d="M16 8v-2a2 2 0 0 0 -2 -2h-8a2 2 0 0 0 -2 2v8a2 2 0 0 0 2 2h2" />
</svg>`
}
/**
* Set up copy/paste for code blocks
*/
const runWhenDOMLoaded = cb => {
if (document.readyState != 'loading') {
cb()
} else if (document.addEventListener) {
document.addEventListener('DOMContentLoaded', cb)
} else {
document.attachEvent('onreadystatechange', function() {
if (document.readyState == 'complete') cb()
})
}
}
const codeCellId = index => `codecell${index}`
// Clears selected text since ClipboardJS will select the text when copying
const clearSelection = () => {
if (window.getSelection) {
window.getSelection().removeAllRanges()
} else if (document.selection) {
document.selection.empty()
}
}
// Changes tooltip text for a moment, then changes it back
// We want the timeout of our `success` class to be a bit shorter than the
// tooltip and icon change, so that we can hide the icon before changing back.
var timeoutIcon = 2000;
var timeoutSuccessClass = 1500;
const temporarilyChangeTooltip = (el, oldText, newText) => {
el.setAttribute('data-tooltip', newText)
el.classList.add('success')
// Remove success a little bit sooner than we change the tooltip
// So that we can use CSS to hide the copybutton first
setTimeout(() => el.classList.remove('success'), timeoutSuccessClass)
setTimeout(() => el.setAttribute('data-tooltip', oldText), timeoutIcon)
}
// Changes the copy button icon for two seconds, then changes it back
const temporarilyChangeIcon = (el) => {
el.innerHTML = iconCheck;
setTimeout(() => {el.innerHTML = iconCopy}, timeoutIcon)
}
const addCopyButtonToCodeCells = () => {
// If ClipboardJS hasn't loaded, wait a bit and try again. This
// happens because we load ClipboardJS asynchronously.
if (window.ClipboardJS === undefined) {
setTimeout(addCopyButtonToCodeCells, 250)
return
}
// Add copybuttons to all of our code cells
const COPYBUTTON_SELECTOR = 'div.highlight pre';
const codeCells = document.querySelectorAll(COPYBUTTON_SELECTOR)
codeCells.forEach((codeCell, index) => {
const id = codeCellId(index)
codeCell.setAttribute('id', id)
const clipboardButton = id =>
`<button class="copybtn o-tooltip--left" data-tooltip="${messages[locale]['copy']}" data-clipboard-target="#${id}">
${iconCopy}
</button>`
codeCell.insertAdjacentHTML('afterend', clipboardButton(id))
})
function escapeRegExp(string) {
return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // $& means the whole matched string
}
/**
* Removes excluded text from a Node.
*
* @param {Node} target Node to filter.
* @param {string} exclude CSS selector of nodes to exclude.
* @returns {DOMString} Text from `target` with text removed.
*/
function filterText(target, exclude) {
const clone = target.cloneNode(true); // clone as to not modify the live DOM
if (exclude) {
// remove excluded nodes
clone.querySelectorAll(exclude).forEach(node => node.remove());
}
return clone.innerText;
}
// Callback when a copy button is clicked. Will be passed the node that was clicked
// should then grab the text and replace pieces of text that shouldn't be used in output
function formatCopyText(textContent, copybuttonPromptText, isRegexp = false, onlyCopyPromptLines = true, removePrompts = true, copyEmptyLines = true, lineContinuationChar = "", hereDocDelim = "") {
var regexp;
var match;
// Do we check for line continuation characters and "HERE-documents"?
var useLineCont = !!lineContinuationChar
var useHereDoc = !!hereDocDelim
// create regexp to capture prompt and remaining line
if (isRegexp) {
regexp = new RegExp('^(' + copybuttonPromptText + ')(.*)')
} else {
regexp = new RegExp('^(' + escapeRegExp(copybuttonPromptText) + ')(.*)')
}
const outputLines = [];
var promptFound = false;
var gotLineCont = false;
var gotHereDoc = false;
const lineGotPrompt = [];
for (const line of textContent.split('\n')) {
match = line.match(regexp)
if (match || gotLineCont || gotHereDoc) {
promptFound = regexp.test(line)
lineGotPrompt.push(promptFound)
if (removePrompts && promptFound) {
outputLines.push(match[2])
} else {
outputLines.push(line)
}
gotLineCont = line.endsWith(lineContinuationChar) & useLineCont
if (line.includes(hereDocDelim) & useHereDoc)
gotHereDoc = !gotHereDoc
} else if (!onlyCopyPromptLines) {
outputLines.push(line)
} else if (copyEmptyLines && line.trim() === '') {
outputLines.push(line)
}
}
// If no lines with the prompt were found then just use original lines
if (lineGotPrompt.some(v => v === true)) {
textContent = outputLines.join('\n');
}
// Remove a trailing newline to avoid auto-running when pasting
if (textContent.endsWith("\n")) {
textContent = textContent.slice(0, -1)
}
return textContent
}
var copyTargetText = (trigger) => {
var target = document.querySelector(trigger.attributes['data-clipboard-target'].value);
// get filtered text
let exclude = '.linenos';
let text = filterText(target, exclude);
return formatCopyText(text, '', false, true, true, true, '', '')
}
// Initialize with a callback so we can modify the text before copy
const clipboard = new ClipboardJS('.copybtn', {text: copyTargetText})
// Update UI with error/success messages
clipboard.on('success', event => {
clearSelection()
temporarilyChangeTooltip(event.trigger, messages[locale]['copy'], messages[locale]['copy_success'])
temporarilyChangeIcon(event.trigger)
})
clipboard.on('error', event => {
temporarilyChangeTooltip(event.trigger, messages[locale]['copy'], messages[locale]['copy_failure'])
})
}
runWhenDOMLoaded(addCopyButtonToCodeCells)

73
_static/copybutton_funcs.js Executable file
View File

@ -0,0 +1,73 @@
function escapeRegExp(string) {
  return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // $& means the whole matched string
}

/**
 * Removes excluded text from a Node.
 *
 * @param {Node} target Node to filter.
 * @param {string} exclude CSS selector of nodes to exclude.
 * @returns {DOMString} Text from `target` with text removed.
 */
export function filterText(target, exclude) {
  const clone = target.cloneNode(true); // clone as to not modify the live DOM
  if (exclude) {
    // remove excluded nodes
    clone.querySelectorAll(exclude).forEach(node => node.remove());
  }
  return clone.innerText;
}

// Callback when a copy button is clicked. Will be passed the node that was clicked
// should then grab the text and replace pieces of text that shouldn't be used in output
export function formatCopyText(textContent, copybuttonPromptText, isRegexp = false, onlyCopyPromptLines = true, removePrompts = true, copyEmptyLines = true, lineContinuationChar = "", hereDocDelim = "") {
  var regexp;
  var match;

  // Do we check for line continuation characters and "HERE-documents"?
  var useLineCont = !!lineContinuationChar
  var useHereDoc = !!hereDocDelim

  // create regexp to capture prompt and remaining line
  if (isRegexp) {
    regexp = new RegExp('^(' + copybuttonPromptText + ')(.*)')
  } else {
    regexp = new RegExp('^(' + escapeRegExp(copybuttonPromptText) + ')(.*)')
  }

  const outputLines = [];
  var promptFound = false;
  var gotLineCont = false;
  var gotHereDoc = false;
  const lineGotPrompt = [];
  for (const line of textContent.split('\n')) {
    match = line.match(regexp)
    if (match || gotLineCont || gotHereDoc) {
      promptFound = regexp.test(line)
      lineGotPrompt.push(promptFound)
      if (removePrompts && promptFound) {
        outputLines.push(match[2])
      } else {
        outputLines.push(line)
      }
      gotLineCont = line.endsWith(lineContinuationChar) & useLineCont
      if (line.includes(hereDocDelim) & useHereDoc)
        gotHereDoc = !gotHereDoc
    } else if (!onlyCopyPromptLines) {
      outputLines.push(line)
    } else if (copyEmptyLines && line.trim() === '') {
      outputLines.push(line)
    }
  }

  // If no lines with the prompt were found then just use original lines
  if (lineGotPrompt.some(v => v === true)) {
    textContent = outputLines.join('\n');
  }

  // Remove a trailing newline to avoid auto-running when pasting
  if (textContent.endsWith("\n")) {
    textContent = textContent.slice(0, -1)
  }
  return textContent
}

101
_static/design-tabs.js Executable file
View File

@ -0,0 +1,101 @@
// @ts-check
// Extra JS capability for selected tabs to be synced
// The selection is stored in local storage so that it persists across page loads.

/**
 * @type {Record<string, HTMLElement[]>}
 */
let sd_id_to_elements = {};
const storageKeyPrefix = "sphinx-design-tab-id-";

/**
 * Create a key for a tab element.
 * @param {HTMLElement} el - The tab element.
 * @returns {[string, string, string] | null} - The key.
 *
 */
function create_key(el) {
  let syncId = el.getAttribute("data-sync-id");
  let syncGroup = el.getAttribute("data-sync-group");
  if (!syncId || !syncGroup) return null;
  return [syncGroup, syncId, syncGroup + "--" + syncId];
}

/**
 * Initialize the tab selection.
 *
 */
function ready() {
  // Find all tabs with sync data
  /** @type {string[]} */
  let groups = [];
  document.querySelectorAll(".sd-tab-label").forEach((label) => {
    if (label instanceof HTMLElement) {
      let data = create_key(label);
      if (data) {
        let [group, id, key] = data;
        // add click event listener
        // @ts-ignore
        label.onclick = onSDLabelClick;
        // store map of key to elements
        if (!sd_id_to_elements[key]) {
          sd_id_to_elements[key] = [];
        }
        sd_id_to_elements[key].push(label);
        if (groups.indexOf(group) === -1) {
          groups.push(group);
          // Check if a specific tab has been selected via URL parameter
          const tabParam = new URLSearchParams(window.location.search).get(
            group
          );
          if (tabParam) {
            console.log(
              "sphinx-design: Selecting tab id for group '" +
                group +
                "' from URL parameter: " +
                tabParam
            );
            window.sessionStorage.setItem(storageKeyPrefix + group, tabParam);
          }
        }
        // Check if a specific tab has been selected previously
        let previousId = window.sessionStorage.getItem(
          storageKeyPrefix + group
        );
        if (previousId === id) {
          // console.log(
          //   "sphinx-design: Selecting tab from session storage: " + id
          // );
          // @ts-ignore
          label.previousElementSibling.checked = true;
        }
      }
    }
  });
}

/**
 * Activate other tabs with the same sync id.
 *
 * @this {HTMLElement} - The element that was clicked.
 */
function onSDLabelClick() {
  let data = create_key(this);
  if (!data) return;
  let [group, id, key] = data;
  for (const label of sd_id_to_elements[key]) {
    if (label === this) continue;
    // @ts-ignore
    label.previousElementSibling.checked = true;
  }
  window.sessionStorage.setItem(storageKeyPrefix + group, id);
}

document.addEventListener("DOMContentLoaded", ready, false);

156
_static/doctools.js Executable file
View File

@ -0,0 +1,156 @@
/*
* doctools.js
* ~~~~~~~~~~~
*
* Base JavaScript utilities for all Sphinx HTML documentation.
*
* :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
"use strict";
const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([
"TEXTAREA",
"INPUT",
"SELECT",
"BUTTON",
]);
const _ready = (callback) => {
if (document.readyState !== "loading") {
callback();
} else {
document.addEventListener("DOMContentLoaded", callback);
}
};
/**
* Small JavaScript module for the documentation.
*/
const Documentation = {
init: () => {
Documentation.initDomainIndexTable();
Documentation.initOnKeyListeners();
},
/**
* i18n support
*/
TRANSLATIONS: {},
PLURAL_EXPR: (n) => (n === 1 ? 0 : 1),
LOCALE: "unknown",
// gettext and ngettext don't access this so that the functions
// can safely bound to a different name (_ = Documentation.gettext)
gettext: (string) => {
const translated = Documentation.TRANSLATIONS[string];
switch (typeof translated) {
case "undefined":
return string; // no translation
case "string":
return translated; // translation exists
default:
return translated[0]; // (singular, plural) translation tuple exists
}
},
ngettext: (singular, plural, n) => {
const translated = Documentation.TRANSLATIONS[singular];
if (typeof translated !== "undefined")
return translated[Documentation.PLURAL_EXPR(n)];
return n === 1 ? singular : plural;
},
addTranslations: (catalog) => {
Object.assign(Documentation.TRANSLATIONS, catalog.messages);
Documentation.PLURAL_EXPR = new Function(
"n",
`return (${catalog.plural_expr})`
);
Documentation.LOCALE = catalog.locale;
},
/**
* helper function to focus on search bar
*/
focusSearchBar: () => {
document.querySelectorAll("input[name=q]")[0]?.focus();
},
/**
* Initialise the domain index toggle buttons
*/
initDomainIndexTable: () => {
const toggler = (el) => {
const idNumber = el.id.substr(7);
const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`);
if (el.src.substr(-9) === "minus.png") {
el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`;
toggledRows.forEach((el) => (el.style.display = "none"));
} else {
el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`;
toggledRows.forEach((el) => (el.style.display = ""));
}
};
const togglerElements = document.querySelectorAll("img.toggler");
togglerElements.forEach((el) =>
el.addEventListener("click", (event) => toggler(event.currentTarget))
);
togglerElements.forEach((el) => (el.style.display = ""));
if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler);
},
initOnKeyListeners: () => {
// only install a listener if it is really needed
if (
!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS &&
!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS
)
return;
document.addEventListener("keydown", (event) => {
// bail for input elements
if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return;
// bail with special keys
if (event.altKey || event.ctrlKey || event.metaKey) return;
if (!event.shiftKey) {
switch (event.key) {
case "ArrowLeft":
if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break;
const prevLink = document.querySelector('link[rel="prev"]');
if (prevLink && prevLink.href) {
window.location.href = prevLink.href;
event.preventDefault();
}
break;
case "ArrowRight":
if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break;
const nextLink = document.querySelector('link[rel="next"]');
if (nextLink && nextLink.href) {
window.location.href = nextLink.href;
event.preventDefault();
}
break;
}
}
// some keyboard layouts may need Shift to get /
switch (event.key) {
case "/":
if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break;
Documentation.focusSearchBar();
event.preventDefault();
}
});
},
};
// quick alias for translations
const _ = Documentation.gettext;
_ready(Documentation.init);

View File

@ -0,0 +1,13 @@
const DOCUMENTATION_OPTIONS = {
  VERSION: '',
  LANGUAGE: 'en',
  COLLAPSE_INDEX: false,
  BUILDER: 'html',
  FILE_SUFFIX: '.html',
  LINK_SUFFIX: '.html',
  HAS_SOURCE: true,
  SOURCELINK_SUFFIX: '',
  NAVIGATION_WITH_KEYS: false,
  SHOW_SEARCH_SUMMARY: true,
  ENABLE_SEARCH_SHORTCUTS: true,
};

BIN
_static/file.png Executable file

Binary file not shown.

After

Width:  |  Height:  |  Size: 286 B

19
_static/images/logo_binder.svg Executable file
View File

@ -0,0 +1,19 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 23.0.1, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
viewBox="0 0 44.4 44.4" style="enable-background:new 0 0 44.4 44.4;" xml:space="preserve">
<style type="text/css">
.st0{fill:none;stroke:#F5A252;stroke-width:5;stroke-miterlimit:10;}
.st1{fill:none;stroke:#579ACA;stroke-width:5;stroke-miterlimit:10;}
.st2{fill:none;stroke:#E66581;stroke-width:5;stroke-miterlimit:10;}
</style>
<title>logo</title>
<g>
<path class="st0" d="M33.9,6.4c3.6,3.9,3.4,9.9-0.5,13.5s-9.9,3.4-13.5-0.5s-3.4-9.9,0.5-13.5l0,0C24.2,2.4,30.2,2.6,33.9,6.4z"/>
<path class="st1" d="M35.1,27.3c2.6,4.6,1.1,10.4-3.5,13c-4.6,2.6-10.4,1.1-13-3.5s-1.1-10.4,3.5-13l0,0
C26.6,21.2,32.4,22.7,35.1,27.3z"/>
<path class="st2" d="M25.9,17.8c2.6,4.6,1.1,10.4-3.5,13s-10.4,1.1-13-3.5s-1.1-10.4,3.5-13l0,0C17.5,11.7,23.3,13.2,25.9,17.8z"/>
<path class="st1" d="M19.2,26.4c3.1-4.3,9.1-5.2,13.3-2.1c1.1,0.8,2,1.8,2.7,3"/>
<path class="st0" d="M19.9,19.4c-3.6-3.9-3.4-9.9,0.5-13.5s9.9-3.4,13.5,0.5"/>
</g>
</svg>

After

Width:  |  Height:  |  Size: 1.2 KiB

BIN
_static/images/logo_colab.png Executable file

Binary file not shown.

After

Width:  |  Height:  |  Size: 7.4 KiB

View File

@ -0,0 +1 @@
<svg viewBox="0 0 128 128" fill="none" xmlns="http://www.w3.org/2000/svg"><path fill-rule="evenodd" clip-rule="evenodd" d="M0 128h52.512l29.539-11.077-11.077-43.487-34.051 3.693L0 128Z" fill="#0076D4"/><path fill-rule="evenodd" clip-rule="evenodd" d="M52.513 128s16.6-8.759 19.673-24.277c3.072-15.517-12.091-26.594-35.263-26.594 0-.41 20.343-28.718 20.343-28.718l49.4 1.435L95.71 107.7l-20.452 15.978L52.513 128Z" fill="#002868"/><path fill-rule="evenodd" clip-rule="evenodd" d="M0 60.718 41.025.001s1.006.01 3.282 0c16.082-.068 81.23 3.12 81.23 60.368 0 65.352-73.025 67.631-73.025 67.631s30.495-5.839 30.495-34.816c0-28.978-27.541-32.466-45.264-32.466H0Z" fill="#00A9FF"/></svg>

After

Width:  |  Height:  |  Size: 681 B

View File

@ -0,0 +1 @@
<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" width="38.73" height="50" viewBox="0 0 38.73 50"><defs><style>.cls-1{fill:#767677;}.cls-2{fill:#f37726;}.cls-3{fill:#9e9e9e;}.cls-4{fill:#616262;}.cls-5{font-size:17.07px;fill:#fff;font-family:Roboto-Regular, Roboto;}</style></defs><title>logo_jupyterhub</title><g id="Canvas"><path id="path7_fill" data-name="path7 fill" class="cls-1" d="M39.51,3.53a3,3,0,0,1-1.7,2.9A3,3,0,0,1,34.48,6a3,3,0,0,1-.82-3.26,3,3,0,0,1,1.05-1.41A3,3,0,0,1,37.52.86a2.88,2.88,0,0,1,1,.6,3,3,0,0,1,.7.93,3.18,3.18,0,0,1,.28,1.14Z" transform="translate(-1.87 -0.69)"/><path id="path8_fill" data-name="path8 fill" class="cls-2" d="M21.91,38.39c-8,0-15.06-2.87-18.7-7.12a19.93,19.93,0,0,0,37.39,0C37,35.52,30,38.39,21.91,38.39Z" transform="translate(-1.87 -0.69)"/><path id="path9_fill" data-name="path9 fill" class="cls-2" d="M21.91,10.78c8,0,15.05,2.87,18.69,7.12a19.93,19.93,0,0,0-37.39,0C6.85,13.64,13.86,10.78,21.91,10.78Z" transform="translate(-1.87 -0.69)"/><path id="path10_fill" data-name="path10 fill" class="cls-3" d="M10.88,46.66a3.86,3.86,0,0,1-.52,2.15,3.81,3.81,0,0,1-1.62,1.51,3.93,3.93,0,0,1-2.19.34,3.79,3.79,0,0,1-2-.94,3.73,3.73,0,0,1-1.14-1.9,3.79,3.79,0,0,1,.1-2.21,3.86,3.86,0,0,1,1.33-1.78,3.92,3.92,0,0,1,3.54-.53,3.85,3.85,0,0,1,2.14,1.93,3.74,3.74,0,0,1,.37,1.43Z" transform="translate(-1.87 -0.69)"/><path id="path11_fill" data-name="path11 fill" class="cls-4" d="M4.12,9.81A2.18,2.18,0,0,1,2.9,9.48a2.23,2.23,0,0,1-.84-1A2.26,2.26,0,0,1,1.9,7.26a2.13,2.13,0,0,1,.56-1.13,2.18,2.18,0,0,1,2.36-.56,2.13,2.13,0,0,1,1,.76,2.18,2.18,0,0,1,.42,1.2A2.22,2.22,0,0,1,4.12,9.81Z" transform="translate(-1.87 -0.69)"/></g><text class="cls-5" transform="translate(5.24 30.01)">Hub</text></svg>

After

Width:  |  Height:  |  Size: 1.7 KiB

199
_static/language_data.js Executable file
View File

@ -0,0 +1,199 @@
/*
* language_data.js
* ~~~~~~~~~~~~~~~~
*
* This script contains the language-specific data used by searchtools.js,
* namely the list of stopwords, stemmer, scorer and splitter.
*
* :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
var stopwords = ["a", "and", "are", "as", "at", "be", "but", "by", "for", "if", "in", "into", "is", "it", "near", "no", "not", "of", "on", "or", "such", "that", "the", "their", "then", "there", "these", "they", "this", "to", "was", "will", "with"];
/* Non-minified version is copied as a separate JS file, if available */
/**
* Porter Stemmer
*/
var Stemmer = function() {
var step2list = {
ational: 'ate',
tional: 'tion',
enci: 'ence',
anci: 'ance',
izer: 'ize',
bli: 'ble',
alli: 'al',
entli: 'ent',
eli: 'e',
ousli: 'ous',
ization: 'ize',
ation: 'ate',
ator: 'ate',
alism: 'al',
iveness: 'ive',
fulness: 'ful',
ousness: 'ous',
aliti: 'al',
iviti: 'ive',
biliti: 'ble',
logi: 'log'
};
var step3list = {
icate: 'ic',
ative: '',
alize: 'al',
iciti: 'ic',
ical: 'ic',
ful: '',
ness: ''
};
var c = "[^aeiou]"; // consonant
var v = "[aeiouy]"; // vowel
var C = c + "[^aeiouy]*"; // consonant sequence
var V = v + "[aeiou]*"; // vowel sequence
var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0
var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1
var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1
var s_v = "^(" + C + ")?" + v; // vowel in stem
this.stemWord = function (w) {
var stem;
var suffix;
var firstch;
var origword = w;
if (w.length < 3)
return w;
var re;
var re2;
var re3;
var re4;
firstch = w.substr(0,1);
if (firstch == "y")
w = firstch.toUpperCase() + w.substr(1);
// Step 1a
re = /^(.+?)(ss|i)es$/;
re2 = /^(.+?)([^s])s$/;
if (re.test(w))
w = w.replace(re,"$1$2");
else if (re2.test(w))
w = w.replace(re2,"$1$2");
// Step 1b
re = /^(.+?)eed$/;
re2 = /^(.+?)(ed|ing)$/;
if (re.test(w)) {
var fp = re.exec(w);
re = new RegExp(mgr0);
if (re.test(fp[1])) {
re = /.$/;
w = w.replace(re,"");
}
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1];
re2 = new RegExp(s_v);
if (re2.test(stem)) {
w = stem;
re2 = /(at|bl|iz)$/;
re3 = new RegExp("([^aeiouylsz])\\1$");
re4 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re2.test(w))
w = w + "e";
else if (re3.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
else if (re4.test(w))
w = w + "e";
}
}
// Step 1c
re = /^(.+?)y$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(s_v);
if (re.test(stem))
w = stem + "i";
}
// Step 2
re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step2list[suffix];
}
// Step 3
re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
suffix = fp[2];
re = new RegExp(mgr0);
if (re.test(stem))
w = stem + step3list[suffix];
}
// Step 4
re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;
re2 = /^(.+?)(s|t)(ion)$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
if (re.test(stem))
w = stem;
}
else if (re2.test(w)) {
var fp = re2.exec(w);
stem = fp[1] + fp[2];
re2 = new RegExp(mgr1);
if (re2.test(stem))
w = stem;
}
// Step 5
re = /^(.+?)e$/;
if (re.test(w)) {
var fp = re.exec(w);
stem = fp[1];
re = new RegExp(mgr1);
re2 = new RegExp(meq1);
re3 = new RegExp("^" + C + v + "[^aeiouwxy]$");
if (re.test(stem) || (re2.test(stem) && !(re3.test(stem))))
w = stem;
}
re = /ll$/;
re2 = new RegExp(mgr1);
if (re.test(w) && re2.test(w)) {
re = /.$/;
w = w.replace(re,"");
}
// and turn initial Y back to y
if (firstch == "y")
w = firstch.toLowerCase() + w.substr(1);
return w;
}
}

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: ar\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "بواسطة"
msgid "By"
msgstr "بواسطة"
msgid "Contents"
msgstr "محتويات"
msgid "Copyright"
msgstr "حقوق النشر"
msgid "Download notebook file"
msgstr "تنزيل ملف دفتر الملاحظات"
msgid "Download source file"
msgstr "تنزيل ملف المصدر"
msgid "Download this page"
msgstr "قم بتنزيل هذه الصفحة"
msgid "Edit this page"
msgstr "قم بتحرير هذه الصفحة"
msgid "Fullscreen mode"
msgstr "وضع ملء الشاشة"
msgid "Last updated on"
msgstr "آخر تحديث في"
msgid "Launch"
msgstr "إطلاق"
msgid "Open an issue"
msgstr "افتح قضية"
msgid "Print to PDF"
msgstr "طباعة إلى PDF"
msgid "Source repository"
msgstr "مستودع المصدر"
msgid "Sphinx Book Theme"
msgstr "موضوع كتاب أبو الهول"
msgid "Theme by the"
msgstr "موضوع بواسطة"
msgid "Toggle navigation"
msgstr "تبديل التنقل"
msgid "next page"
msgstr "الصفحة التالية"
msgid "open issue"
msgstr "قضية مفتوحة"
msgid "previous page"
msgstr "الصفحة السابقة"
msgid "repository"
msgstr "مخزن"
msgid "suggest edit"
msgstr "أقترح تحرير"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: bg\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "По"
msgid "By"
msgstr "От"
msgid "Contents"
msgstr "Съдържание"
msgid "Copyright"
msgstr "Авторско право"
msgid "Download notebook file"
msgstr "Изтеглете файла на бележника"
msgid "Download source file"
msgstr "Изтеглете изходния файл"
msgid "Download this page"
msgstr "Изтеглете тази страница"
msgid "Edit this page"
msgstr "Редактирайте тази страница"
msgid "Fullscreen mode"
msgstr "Режим на цял екран"
msgid "Last updated on"
msgstr "Последна актуализация на"
msgid "Launch"
msgstr "Стартиране"
msgid "Open an issue"
msgstr "Отворете проблем"
msgid "Print to PDF"
msgstr "Печат в PDF"
msgid "Source repository"
msgstr "Хранилище на източника"
msgid "Sphinx Book Theme"
msgstr "Тема на книгата Sphinx"
msgid "Theme by the"
msgstr "Тема от"
msgid "Toggle navigation"
msgstr "Превключване на навигацията"
msgid "next page"
msgstr "Следваща страница"
msgid "open issue"
msgstr "отворен брой"
msgid "previous page"
msgstr "предишна страница"
msgid "repository"
msgstr "хранилище"
msgid "suggest edit"
msgstr "предложи редактиране"

Binary file not shown.

View File

@ -0,0 +1,63 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: bn\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "দ্বারা"
msgid "By"
msgstr "দ্বারা"
msgid "Copyright"
msgstr "কপিরাইট"
msgid "Download notebook file"
msgstr "নোটবুক ফাইল ডাউনলোড করুন"
msgid "Download source file"
msgstr "উত্স ফাইল ডাউনলোড করুন"
msgid "Download this page"
msgstr "এই পৃষ্ঠাটি ডাউনলোড করুন"
msgid "Edit this page"
msgstr "এই পৃষ্ঠাটি সম্পাদনা করুন"
msgid "Last updated on"
msgstr "সর্বশেষ আপডেট"
msgid "Launch"
msgstr "শুরু করা"
msgid "Open an issue"
msgstr "একটি সমস্যা খুলুন"
msgid "Print to PDF"
msgstr "পিডিএফ প্রিন্ট করুন"
msgid "Source repository"
msgstr "উত্স সংগ্রহস্থল"
msgid "Sphinx Book Theme"
msgstr "স্পিনিক্স বুক থিম"
msgid "Theme by the"
msgstr "থিম দ্বারা"
msgid "Toggle navigation"
msgstr "নেভিগেশন টগল করুন"
msgid "next page"
msgstr "পরবর্তী পৃষ্ঠা"
msgid "open issue"
msgstr "খোলা সমস্যা"
msgid "previous page"
msgstr "আগের পৃষ্ঠা"

Binary file not shown.

View File

@ -0,0 +1,66 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: ca\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Per la"
msgid "By"
msgstr "Per"
msgid "Copyright"
msgstr "Copyright"
msgid "Download notebook file"
msgstr "Descarregar fitxer de quadern"
msgid "Download source file"
msgstr "Baixeu el fitxer font"
msgid "Download this page"
msgstr "Descarregueu aquesta pàgina"
msgid "Edit this page"
msgstr "Editeu aquesta pàgina"
msgid "Last updated on"
msgstr "Darrera actualització el"
msgid "Launch"
msgstr "Llançament"
msgid "Open an issue"
msgstr "Obriu un número"
msgid "Print to PDF"
msgstr "Imprimeix a PDF"
msgid "Source repository"
msgstr "Dipòsit de fonts"
msgid "Sphinx Book Theme"
msgstr "Tema del llibre Esfinx"
msgid "Theme by the"
msgstr "Tema del"
msgid "Toggle navigation"
msgstr "Commuta la navegació"
msgid "next page"
msgstr "pàgina següent"
msgid "open issue"
msgstr "número obert"
msgid "previous page"
msgstr "Pàgina anterior"
msgid "suggest edit"
msgstr "suggerir edició"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: cs\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Podle"
msgid "By"
msgstr "Podle"
msgid "Contents"
msgstr "Obsah"
msgid "Copyright"
msgstr "autorská práva"
msgid "Download notebook file"
msgstr "Stáhnout soubor poznámkového bloku"
msgid "Download source file"
msgstr "Stáhněte si zdrojový soubor"
msgid "Download this page"
msgstr "Stáhněte si tuto stránku"
msgid "Edit this page"
msgstr "Upravit tuto stránku"
msgid "Fullscreen mode"
msgstr "Režim celé obrazovky"
msgid "Last updated on"
msgstr "Naposledy aktualizováno"
msgid "Launch"
msgstr "Zahájení"
msgid "Open an issue"
msgstr "Otevřete problém"
msgid "Print to PDF"
msgstr "Tisk do PDF"
msgid "Source repository"
msgstr "Zdrojové úložiště"
msgid "Sphinx Book Theme"
msgstr "Téma knihy Sfinga"
msgid "Theme by the"
msgstr "Téma od"
msgid "Toggle navigation"
msgstr "Přepnout navigaci"
msgid "next page"
msgstr "další strana"
msgid "open issue"
msgstr "otevřené číslo"
msgid "previous page"
msgstr "předchozí stránka"
msgid "repository"
msgstr "úložiště"
msgid "suggest edit"
msgstr "navrhnout úpravy"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: da\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Ved"
msgid "By"
msgstr "Ved"
msgid "Contents"
msgstr "Indhold"
msgid "Copyright"
msgstr "ophavsret"
msgid "Download notebook file"
msgstr "Download notesbog-fil"
msgid "Download source file"
msgstr "Download kildefil"
msgid "Download this page"
msgstr "Download denne side"
msgid "Edit this page"
msgstr "Rediger denne side"
msgid "Fullscreen mode"
msgstr "Fuldskærmstilstand"
msgid "Last updated on"
msgstr "Sidst opdateret den"
msgid "Launch"
msgstr "Start"
msgid "Open an issue"
msgstr "Åbn et problem"
msgid "Print to PDF"
msgstr "Udskriv til PDF"
msgid "Source repository"
msgstr "Kildelager"
msgid "Sphinx Book Theme"
msgstr "Sphinx bogtema"
msgid "Theme by the"
msgstr "Tema af"
msgid "Toggle navigation"
msgstr "Skift navigation"
msgid "next page"
msgstr "Næste side"
msgid "open issue"
msgstr "åbent nummer"
msgid "previous page"
msgstr "forrige side"
msgid "repository"
msgstr "lager"
msgid "suggest edit"
msgstr "foreslå redigering"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: de\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Bis zum"
msgid "By"
msgstr "Durch"
msgid "Contents"
msgstr "Inhalt"
msgid "Copyright"
msgstr "Urheberrechte ©"
msgid "Download notebook file"
msgstr "Notebook-Datei herunterladen"
msgid "Download source file"
msgstr "Quelldatei herunterladen"
msgid "Download this page"
msgstr "Laden Sie diese Seite herunter"
msgid "Edit this page"
msgstr "Bearbeite diese Seite"
msgid "Fullscreen mode"
msgstr "Vollbildmodus"
msgid "Last updated on"
msgstr "Zuletzt aktualisiert am"
msgid "Launch"
msgstr "Starten"
msgid "Open an issue"
msgstr "Öffnen Sie ein Problem"
msgid "Print to PDF"
msgstr "In PDF drucken"
msgid "Source repository"
msgstr "Quell-Repository"
msgid "Sphinx Book Theme"
msgstr "Sphinx-Buch-Thema"
msgid "Theme by the"
msgstr "Thema von der"
msgid "Toggle navigation"
msgstr "Navigation umschalten"
msgid "next page"
msgstr "Nächste Seite"
msgid "open issue"
msgstr "offenes Thema"
msgid "previous page"
msgstr "vorherige Seite"
msgid "repository"
msgstr "Repository"
msgid "suggest edit"
msgstr "vorschlagen zu bearbeiten"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: el\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Από το"
msgid "By"
msgstr "Με"
msgid "Contents"
msgstr "Περιεχόμενα"
msgid "Copyright"
msgstr "Πνευματική ιδιοκτησία"
msgid "Download notebook file"
msgstr "Λήψη αρχείου σημειωματάριου"
msgid "Download source file"
msgstr "Λήψη αρχείου προέλευσης"
msgid "Download this page"
msgstr "Λήψη αυτής της σελίδας"
msgid "Edit this page"
msgstr "Επεξεργαστείτε αυτήν τη σελίδα"
msgid "Fullscreen mode"
msgstr "ΛΕΙΤΟΥΡΓΙΑ ΠΛΗΡΟΥΣ ΟΘΟΝΗΣ"
msgid "Last updated on"
msgstr "Τελευταία ενημέρωση στις"
msgid "Launch"
msgstr "Εκτόξευση"
msgid "Open an issue"
msgstr "Ανοίξτε ένα ζήτημα"
msgid "Print to PDF"
msgstr "Εκτύπωση σε PDF"
msgid "Source repository"
msgstr "Αποθήκη πηγής"
msgid "Sphinx Book Theme"
msgstr "Θέμα βιβλίου Sphinx"
msgid "Theme by the"
msgstr "Θέμα από το"
msgid "Toggle navigation"
msgstr "Εναλλαγή πλοήγησης"
msgid "next page"
msgstr "επόμενη σελίδα"
msgid "open issue"
msgstr "ανοιχτό ζήτημα"
msgid "previous page"
msgstr "προηγούμενη σελίδα"
msgid "repository"
msgstr "αποθήκη"
msgid "suggest edit"
msgstr "προτείνω επεξεργασία"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: eo\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Per la"
msgid "By"
msgstr "De"
msgid "Contents"
msgstr "Enhavo"
msgid "Copyright"
msgstr "Kopirajto"
msgid "Download notebook file"
msgstr "Elŝutu kajeran dosieron"
msgid "Download source file"
msgstr "Elŝutu fontodosieron"
msgid "Download this page"
msgstr "Elŝutu ĉi tiun paĝon"
msgid "Edit this page"
msgstr "Redaktu ĉi tiun paĝon"
msgid "Fullscreen mode"
msgstr "Plenekrana reĝimo"
msgid "Last updated on"
msgstr "Laste ĝisdatigita la"
msgid "Launch"
msgstr "Lanĉo"
msgid "Open an issue"
msgstr "Malfermu numeron"
msgid "Print to PDF"
msgstr "Presi al PDF"
msgid "Source repository"
msgstr "Fonto-deponejo"
msgid "Sphinx Book Theme"
msgstr "Sfinksa Libro-Temo"
msgid "Theme by the"
msgstr "Temo de la"
msgid "Toggle navigation"
msgstr "Ŝalti navigadon"
msgid "next page"
msgstr "sekva paĝo"
msgid "open issue"
msgstr "malferma numero"
msgid "previous page"
msgstr "antaŭa paĝo"
msgid "repository"
msgstr "deponejo"
msgid "suggest edit"
msgstr "sugesti redaktadon"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: es\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Por el"
msgid "By"
msgstr "Por"
msgid "Contents"
msgstr "Contenido"
msgid "Copyright"
msgstr "Derechos de autor"
msgid "Download notebook file"
msgstr "Descargar archivo de cuaderno"
msgid "Download source file"
msgstr "Descargar archivo fuente"
msgid "Download this page"
msgstr "Descarga esta pagina"
msgid "Edit this page"
msgstr "Edita esta página"
msgid "Fullscreen mode"
msgstr "Modo de pantalla completa"
msgid "Last updated on"
msgstr "Ultima actualización en"
msgid "Launch"
msgstr "Lanzamiento"
msgid "Open an issue"
msgstr "Abrir un problema"
msgid "Print to PDF"
msgstr "Imprimir en PDF"
msgid "Source repository"
msgstr "Repositorio de origen"
msgid "Sphinx Book Theme"
msgstr "Tema del libro de la esfinge"
msgid "Theme by the"
msgstr "Tema por el"
msgid "Toggle navigation"
msgstr "Navegación de palanca"
msgid "next page"
msgstr "siguiente página"
msgid "open issue"
msgstr "Tema abierto"
msgid "previous page"
msgstr "pagina anterior"
msgid "repository"
msgstr "repositorio"
msgid "suggest edit"
msgstr "sugerir editar"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: et\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Autor"
msgid "By"
msgstr "Kõrval"
msgid "Contents"
msgstr "Sisu"
msgid "Copyright"
msgstr "Autoriõigus"
msgid "Download notebook file"
msgstr "Laadige sülearvuti fail alla"
msgid "Download source file"
msgstr "Laadige alla lähtefail"
msgid "Download this page"
msgstr "Laadige see leht alla"
msgid "Edit this page"
msgstr "Muutke seda lehte"
msgid "Fullscreen mode"
msgstr "Täisekraanirežiim"
msgid "Last updated on"
msgstr "Viimati uuendatud"
msgid "Launch"
msgstr "Käivitage"
msgid "Open an issue"
msgstr "Avage probleem"
msgid "Print to PDF"
msgstr "Prindi PDF-i"
msgid "Source repository"
msgstr "Allikahoidla"
msgid "Sphinx Book Theme"
msgstr "Sfinksiraamatu teema"
msgid "Theme by the"
msgstr "Teema"
msgid "Toggle navigation"
msgstr "Lülita navigeerimine sisse"
msgid "next page"
msgstr "järgmine leht"
msgid "open issue"
msgstr "avatud küsimus"
msgid "previous page"
msgstr "eelmine leht"
msgid "repository"
msgstr "hoidla"
msgid "suggest edit"
msgstr "soovita muuta"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: fi\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Mukaan"
msgid "By"
msgstr "Tekijä"
msgid "Contents"
msgstr "Sisällys"
msgid "Copyright"
msgstr "Tekijänoikeus"
msgid "Download notebook file"
msgstr "Lataa muistikirjatiedosto"
msgid "Download source file"
msgstr "Lataa lähdetiedosto"
msgid "Download this page"
msgstr "Lataa tämä sivu"
msgid "Edit this page"
msgstr "Muokkaa tätä sivua"
msgid "Fullscreen mode"
msgstr "Koko näytön tila"
msgid "Last updated on"
msgstr "Viimeksi päivitetty"
msgid "Launch"
msgstr "Tuoda markkinoille"
msgid "Open an issue"
msgstr "Avaa ongelma"
msgid "Print to PDF"
msgstr "Tulosta PDF-tiedostoon"
msgid "Source repository"
msgstr "Lähteen arkisto"
msgid "Sphinx Book Theme"
msgstr "Sphinx-kirjan teema"
msgid "Theme by the"
msgstr "Teeman tekijä"
msgid "Toggle navigation"
msgstr "Vaihda navigointia"
msgid "next page"
msgstr "seuraava sivu"
msgid "open issue"
msgstr "avoin ongelma"
msgid "previous page"
msgstr "Edellinen sivu"
msgid "repository"
msgstr "arkisto"
msgid "suggest edit"
msgstr "ehdottaa muokkausta"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: fr\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Par le"
msgid "By"
msgstr "Par"
msgid "Contents"
msgstr "Contenu"
msgid "Copyright"
msgstr "droits d'auteur"
msgid "Download notebook file"
msgstr "Télécharger le fichier notebook"
msgid "Download source file"
msgstr "Télécharger le fichier source"
msgid "Download this page"
msgstr "Téléchargez cette page"
msgid "Edit this page"
msgstr "Modifier cette page"
msgid "Fullscreen mode"
msgstr "Mode plein écran"
msgid "Last updated on"
msgstr "Dernière mise à jour le"
msgid "Launch"
msgstr "lancement"
msgid "Open an issue"
msgstr "Ouvrez un problème"
msgid "Print to PDF"
msgstr "Imprimer au format PDF"
msgid "Source repository"
msgstr "Dépôt source"
msgid "Sphinx Book Theme"
msgstr "Thème du livre Sphinx"
msgid "Theme by the"
msgstr "Thème par le"
msgid "Toggle navigation"
msgstr "Basculer la navigation"
msgid "next page"
msgstr "page suivante"
msgid "open issue"
msgstr "signaler un problème"
msgid "previous page"
msgstr "page précédente"
msgid "repository"
msgstr "dépôt"
msgid "suggest edit"
msgstr "suggestion de modification"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: hr\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Od strane"
msgid "By"
msgstr "Po"
msgid "Contents"
msgstr "Sadržaj"
msgid "Copyright"
msgstr "Autorska prava"
msgid "Download notebook file"
msgstr "Preuzmi datoteku bilježnice"
msgid "Download source file"
msgstr "Preuzmi izvornu datoteku"
msgid "Download this page"
msgstr "Preuzmite ovu stranicu"
msgid "Edit this page"
msgstr "Uredite ovu stranicu"
msgid "Fullscreen mode"
msgstr "Način preko cijelog zaslona"
msgid "Last updated on"
msgstr "Posljednje ažuriranje:"
msgid "Launch"
msgstr "Pokrenite"
msgid "Open an issue"
msgstr "Otvorite izdanje"
msgid "Print to PDF"
msgstr "Ispis u PDF"
msgid "Source repository"
msgstr "Izvorno spremište"
msgid "Sphinx Book Theme"
msgstr "Tema knjige Sphinx"
msgid "Theme by the"
msgstr "Tema autora"
msgid "Toggle navigation"
msgstr "Uključi / isključi navigaciju"
msgid "next page"
msgstr "sljedeća stranica"
msgid "open issue"
msgstr "otvoreno izdanje"
msgid "previous page"
msgstr "Prethodna stranica"
msgid "repository"
msgstr "spremište"
msgid "suggest edit"
msgstr "predloži uređivanje"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: id\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Oleh"
msgid "By"
msgstr "Oleh"
msgid "Contents"
msgstr "Isi"
msgid "Copyright"
msgstr "hak cipta"
msgid "Download notebook file"
msgstr "Unduh file notebook"
msgid "Download source file"
msgstr "Unduh file sumber"
msgid "Download this page"
msgstr "Unduh halaman ini"
msgid "Edit this page"
msgstr "Edit halaman ini"
msgid "Fullscreen mode"
msgstr "Mode layar penuh"
msgid "Last updated on"
msgstr "Terakhir diperbarui saat"
msgid "Launch"
msgstr "Meluncurkan"
msgid "Open an issue"
msgstr "Buka masalah"
msgid "Print to PDF"
msgstr "Cetak ke PDF"
msgid "Source repository"
msgstr "Repositori sumber"
msgid "Sphinx Book Theme"
msgstr "Tema Buku Sphinx"
msgid "Theme by the"
msgstr "Tema oleh"
msgid "Toggle navigation"
msgstr "Alihkan navigasi"
msgid "next page"
msgstr "halaman selanjutnya"
msgid "open issue"
msgstr "masalah terbuka"
msgid "previous page"
msgstr "halaman sebelumnya"
msgid "repository"
msgstr "gudang"
msgid "suggest edit"
msgstr "menyarankan edit"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: it\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Dal"
msgid "By"
msgstr "Di"
msgid "Contents"
msgstr "Contenuti"
msgid "Copyright"
msgstr "Diritto d'autore"
msgid "Download notebook file"
msgstr "Scarica il file del taccuino"
msgid "Download source file"
msgstr "Scarica il file sorgente"
msgid "Download this page"
msgstr "Scarica questa pagina"
msgid "Edit this page"
msgstr "Modifica questa pagina"
msgid "Fullscreen mode"
msgstr "Modalità schermo intero"
msgid "Last updated on"
msgstr "Ultimo aggiornamento il"
msgid "Launch"
msgstr "Lanciare"
msgid "Open an issue"
msgstr "Apri un problema"
msgid "Print to PDF"
msgstr "Stampa in PDF"
msgid "Source repository"
msgstr "Repository di origine"
msgid "Sphinx Book Theme"
msgstr "Tema del libro della Sfinge"
msgid "Theme by the"
msgstr "Tema di"
msgid "Toggle navigation"
msgstr "Attiva / disattiva la navigazione"
msgid "next page"
msgstr "pagina successiva"
msgid "open issue"
msgstr "questione aperta"
msgid "previous page"
msgstr "pagina precedente"
msgid "repository"
msgstr "repository"
msgid "suggest edit"
msgstr "suggerisci modifica"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: iw\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "דרך"
msgid "By"
msgstr "על ידי"
msgid "Contents"
msgstr "תוכן"
msgid "Copyright"
msgstr "זכויות יוצרים"
msgid "Download notebook file"
msgstr "הורד קובץ מחברת"
msgid "Download source file"
msgstr "הורד את קובץ המקור"
msgid "Download this page"
msgstr "הורד דף זה"
msgid "Edit this page"
msgstr "ערוך דף זה"
msgid "Fullscreen mode"
msgstr "מצב מסך מלא"
msgid "Last updated on"
msgstr "עודכן לאחרונה ב"
msgid "Launch"
msgstr "לְהַשִׁיק"
msgid "Open an issue"
msgstr "פתח גיליון"
msgid "Print to PDF"
msgstr "הדפס לקובץ PDF"
msgid "Source repository"
msgstr "מאגר המקורות"
msgid "Sphinx Book Theme"
msgstr "נושא ספר ספינקס"
msgid "Theme by the"
msgstr "נושא מאת"
msgid "Toggle navigation"
msgstr "החלף ניווט"
msgid "next page"
msgstr "עמוד הבא"
msgid "open issue"
msgstr "בעיה פתוחה"
msgid "previous page"
msgstr "עמוד קודם"
msgid "repository"
msgstr "מאגר"
msgid "suggest edit"
msgstr "מציע לערוך"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: ja\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "によって"
msgid "By"
msgstr "著者"
msgid "Contents"
msgstr "目次"
msgid "Copyright"
msgstr "Copyright"
msgid "Download notebook file"
msgstr "ノートブックファイルをダウンロード"
msgid "Download source file"
msgstr "ソースファイルをダウンロード"
msgid "Download this page"
msgstr "このページをダウンロード"
msgid "Edit this page"
msgstr "このページを編集"
msgid "Fullscreen mode"
msgstr "全画面モード"
msgid "Last updated on"
msgstr "最終更新日"
msgid "Launch"
msgstr "起動"
msgid "Open an issue"
msgstr "問題を報告"
msgid "Print to PDF"
msgstr "PDFに印刷"
msgid "Source repository"
msgstr "ソースリポジトリ"
msgid "Sphinx Book Theme"
msgstr "スフィンクスの本のテーマ"
msgid "Theme by the"
msgstr "のテーマ"
msgid "Toggle navigation"
msgstr "ナビゲーションを切り替え"
msgid "next page"
msgstr "次のページ"
msgid "open issue"
msgstr "未解決の問題"
msgid "previous page"
msgstr "前のページ"
msgid "repository"
msgstr "リポジトリ"
msgid "suggest edit"
msgstr "編集を提案する"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: ko\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "에 의해"
msgid "By"
msgstr "으로"
msgid "Contents"
msgstr "내용"
msgid "Copyright"
msgstr "저작권"
msgid "Download notebook file"
msgstr "노트북 파일 다운로드"
msgid "Download source file"
msgstr "소스 파일 다운로드"
msgid "Download this page"
msgstr "이 페이지 다운로드"
msgid "Edit this page"
msgstr "이 페이지 편집"
msgid "Fullscreen mode"
msgstr "전체 화면으로보기"
msgid "Last updated on"
msgstr "마지막 업데이트"
msgid "Launch"
msgstr "시작하다"
msgid "Open an issue"
msgstr "이슈 열기"
msgid "Print to PDF"
msgstr "PDF로 인쇄"
msgid "Source repository"
msgstr "소스 저장소"
msgid "Sphinx Book Theme"
msgstr "스핑크스 도서 테마"
msgid "Theme by the"
msgstr "테마별"
msgid "Toggle navigation"
msgstr "탐색 전환"
msgid "next page"
msgstr "다음 페이지"
msgid "open issue"
msgstr "열린 문제"
msgid "previous page"
msgstr "이전 페이지"
msgid "repository"
msgstr "저장소"
msgid "suggest edit"
msgstr "편집 제안"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: lt\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Prie"
msgid "By"
msgstr "Iki"
msgid "Contents"
msgstr "Turinys"
msgid "Copyright"
msgstr "Autorių teisės"
msgid "Download notebook file"
msgstr "Atsisiųsti nešiojamojo kompiuterio failą"
msgid "Download source file"
msgstr "Atsisiųsti šaltinio failą"
msgid "Download this page"
msgstr "Atsisiųskite šį puslapį"
msgid "Edit this page"
msgstr "Redaguoti šį puslapį"
msgid "Fullscreen mode"
msgstr "Pilno ekrano režimas"
msgid "Last updated on"
msgstr "Paskutinį kartą atnaujinta"
msgid "Launch"
msgstr "Paleiskite"
msgid "Open an issue"
msgstr "Atidarykite problemą"
msgid "Print to PDF"
msgstr "Spausdinti į PDF"
msgid "Source repository"
msgstr "Šaltinio saugykla"
msgid "Sphinx Book Theme"
msgstr "Sfinkso knygos tema"
msgid "Theme by the"
msgstr "Tema"
msgid "Toggle navigation"
msgstr "Perjungti naršymą"
msgid "next page"
msgstr "Kitas puslapis"
msgid "open issue"
msgstr "atviras klausimas"
msgid "previous page"
msgstr "Ankstesnis puslapis"
msgid "repository"
msgstr "saugykla"
msgid "suggest edit"
msgstr "pasiūlyti redaguoti"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: lv\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Ar"
msgid "By"
msgstr "Autors"
msgid "Contents"
msgstr "Saturs"
msgid "Copyright"
msgstr "Autortiesības"
msgid "Download notebook file"
msgstr "Lejupielādēt piezīmju grāmatiņu"
msgid "Download source file"
msgstr "Lejupielādēt avota failu"
msgid "Download this page"
msgstr "Lejupielādējiet šo lapu"
msgid "Edit this page"
msgstr "Rediģēt šo lapu"
msgid "Fullscreen mode"
msgstr "Pilnekrāna režīms"
msgid "Last updated on"
msgstr "Pēdējoreiz atjaunināts"
msgid "Launch"
msgstr "Uzsākt"
msgid "Open an issue"
msgstr "Atveriet problēmu"
msgid "Print to PDF"
msgstr "Drukāt PDF formātā"
msgid "Source repository"
msgstr "Avota krātuve"
msgid "Sphinx Book Theme"
msgstr "Sfinksa grāmatas tēma"
msgid "Theme by the"
msgstr "Autora tēma"
msgid "Toggle navigation"
msgstr "Pārslēgt navigāciju"
msgid "next page"
msgstr "nākamā lapaspuse"
msgid "open issue"
msgstr "atklāts jautājums"
msgid "previous page"
msgstr "iepriekšējā lapa"
msgid "repository"
msgstr "krātuve"
msgid "suggest edit"
msgstr "ieteikt rediģēt"

Binary file not shown.

View File

@ -0,0 +1,66 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: ml\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "എഴുതിയത്"
msgid "By"
msgstr "എഴുതിയത്"
msgid "Copyright"
msgstr "പകർപ്പവകാശം"
msgid "Download notebook file"
msgstr "നോട്ട്ബുക്ക് ഫയൽ ഡൺലോഡ് ചെയ്യുക"
msgid "Download source file"
msgstr "ഉറവിട ഫയൽ ഡൗൺലോഡുചെയ്യുക"
msgid "Download this page"
msgstr "ഈ പേജ് ഡൗൺലോഡുചെയ്യുക"
msgid "Edit this page"
msgstr "ഈ പേജ് എഡിറ്റുചെയ്യുക"
msgid "Last updated on"
msgstr "അവസാനം അപ്‌ഡേറ്റുചെയ്‌തത്"
msgid "Launch"
msgstr "സമാരംഭിക്കുക"
msgid "Open an issue"
msgstr "ഒരു പ്രശ്നം തുറക്കുക"
msgid "Print to PDF"
msgstr "PDF- ലേക്ക് പ്രിന്റുചെയ്യുക"
msgid "Source repository"
msgstr "ഉറവിട ശേഖരം"
msgid "Sphinx Book Theme"
msgstr "സ്ഫിങ്ക്സ് പുസ്തക തീം"
msgid "Theme by the"
msgstr "പ്രമേയം"
msgid "Toggle navigation"
msgstr "നാവിഗേഷൻ ടോഗിൾ ചെയ്യുക"
msgid "next page"
msgstr "അടുത്ത പേജ്"
msgid "open issue"
msgstr "തുറന്ന പ്രശ്നം"
msgid "previous page"
msgstr "മുൻപത്തെ താൾ"
msgid "suggest edit"
msgstr "എഡിറ്റുചെയ്യാൻ നിർദ്ദേശിക്കുക"

Binary file not shown.

View File

@ -0,0 +1,66 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: mr\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "द्वारा"
msgid "By"
msgstr "द्वारा"
msgid "Copyright"
msgstr "कॉपीराइट"
msgid "Download notebook file"
msgstr "नोटबुक फाईल डाउनलोड करा"
msgid "Download source file"
msgstr "स्त्रोत फाइल डाउनलोड करा"
msgid "Download this page"
msgstr "हे पृष्ठ डाउनलोड करा"
msgid "Edit this page"
msgstr "हे पृष्ठ संपादित करा"
msgid "Last updated on"
msgstr "अखेरचे अद्यतनित"
msgid "Launch"
msgstr "लाँच करा"
msgid "Open an issue"
msgstr "एक मुद्दा उघडा"
msgid "Print to PDF"
msgstr "पीडीएफवर मुद्रित करा"
msgid "Source repository"
msgstr "स्त्रोत भांडार"
msgid "Sphinx Book Theme"
msgstr "स्फिंक्स बुक थीम"
msgid "Theme by the"
msgstr "द्वारा थीम"
msgid "Toggle navigation"
msgstr "नेव्हिगेशन टॉगल करा"
msgid "next page"
msgstr "पुढील पृष्ठ"
msgid "open issue"
msgstr "खुला मुद्दा"
msgid "previous page"
msgstr "मागील पान"
msgid "suggest edit"
msgstr "संपादन सुचवा"

Binary file not shown.

View File

@ -0,0 +1,66 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: ms\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Oleh"
msgid "By"
msgstr "Oleh"
msgid "Copyright"
msgstr "hak cipta"
msgid "Download notebook file"
msgstr "Muat turun fail buku nota"
msgid "Download source file"
msgstr "Muat turun fail sumber"
msgid "Download this page"
msgstr "Muat turun halaman ini"
msgid "Edit this page"
msgstr "Edit halaman ini"
msgid "Last updated on"
msgstr "Terakhir dikemas kini pada"
msgid "Launch"
msgstr "Lancarkan"
msgid "Open an issue"
msgstr "Buka masalah"
msgid "Print to PDF"
msgstr "Cetak ke PDF"
msgid "Source repository"
msgstr "Repositori sumber"
msgid "Sphinx Book Theme"
msgstr "Tema Buku Sphinx"
msgid "Theme by the"
msgstr "Tema oleh"
msgid "Toggle navigation"
msgstr "Togol navigasi"
msgid "next page"
msgstr "muka surat seterusnya"
msgid "open issue"
msgstr "isu terbuka"
msgid "previous page"
msgstr "halaman sebelumnya"
msgid "suggest edit"
msgstr "cadangkan edit"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: nl\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Door de"
msgid "By"
msgstr "Door"
msgid "Contents"
msgstr "Inhoud"
msgid "Copyright"
msgstr "auteursrechten"
msgid "Download notebook file"
msgstr "Download notebookbestand"
msgid "Download source file"
msgstr "Download het bronbestand"
msgid "Download this page"
msgstr "Download deze pagina"
msgid "Edit this page"
msgstr "bewerk deze pagina"
msgid "Fullscreen mode"
msgstr "Volledig scherm"
msgid "Last updated on"
msgstr "Laatst geupdate op"
msgid "Launch"
msgstr "Lancering"
msgid "Open an issue"
msgstr "Open een probleem"
msgid "Print to PDF"
msgstr "Afdrukken naar pdf"
msgid "Source repository"
msgstr "Bronopslagplaats"
msgid "Sphinx Book Theme"
msgstr "Sphinx-boekthema"
msgid "Theme by the"
msgstr "Thema door de"
msgid "Toggle navigation"
msgstr "Schakel navigatie"
msgid "next page"
msgstr "volgende bladzijde"
msgid "open issue"
msgstr "open probleem"
msgid "previous page"
msgstr "vorige pagina"
msgid "repository"
msgstr "repository"
msgid "suggest edit"
msgstr "suggereren bewerken"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: no\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Ved"
msgid "By"
msgstr "Av"
msgid "Contents"
msgstr "Innhold"
msgid "Copyright"
msgstr "opphavsrett"
msgid "Download notebook file"
msgstr "Last ned notatbokfilen"
msgid "Download source file"
msgstr "Last ned kildefilen"
msgid "Download this page"
msgstr "Last ned denne siden"
msgid "Edit this page"
msgstr "Rediger denne siden"
msgid "Fullscreen mode"
msgstr "Fullskjerm-modus"
msgid "Last updated on"
msgstr "Sist oppdatert den"
msgid "Launch"
msgstr "Start"
msgid "Open an issue"
msgstr "Åpne et problem"
msgid "Print to PDF"
msgstr "Skriv ut til PDF"
msgid "Source repository"
msgstr "Kildedepot"
msgid "Sphinx Book Theme"
msgstr "Sphinx boktema"
msgid "Theme by the"
msgstr "Tema av"
msgid "Toggle navigation"
msgstr "Bytt navigasjon"
msgid "next page"
msgstr "neste side"
msgid "open issue"
msgstr "åpent nummer"
msgid "previous page"
msgstr "forrige side"
msgid "repository"
msgstr "oppbevaringssted"
msgid "suggest edit"
msgstr "foreslå redigering"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: pl\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Przez"
msgid "By"
msgstr "Przez"
msgid "Contents"
msgstr "Zawartość"
msgid "Copyright"
msgstr "prawa autorskie"
msgid "Download notebook file"
msgstr "Pobierz plik notatnika"
msgid "Download source file"
msgstr "Pobierz plik źródłowy"
msgid "Download this page"
msgstr "Pobierz tę stronę"
msgid "Edit this page"
msgstr "Edytuj tę strone"
msgid "Fullscreen mode"
msgstr "Pełny ekran"
msgid "Last updated on"
msgstr "Ostatnia aktualizacja"
msgid "Launch"
msgstr "Uruchomić"
msgid "Open an issue"
msgstr "Otwórz problem"
msgid "Print to PDF"
msgstr "Drukuj do PDF"
msgid "Source repository"
msgstr "Repozytorium źródłowe"
msgid "Sphinx Book Theme"
msgstr "Motyw książki Sphinx"
msgid "Theme by the"
msgstr "Motyw autorstwa"
msgid "Toggle navigation"
msgstr "Przełącz nawigację"
msgid "next page"
msgstr "Następna strona"
msgid "open issue"
msgstr "otwarty problem"
msgid "previous page"
msgstr "Poprzednia strona"
msgid "repository"
msgstr "magazyn"
msgid "suggest edit"
msgstr "zaproponuj edycję"

Binary file not shown.

View File

@ -0,0 +1,75 @@
msgid ""
msgstr ""
"Project-Id-Version: Sphinx-Book-Theme\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: pt\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
msgid "By the"
msgstr "Pelo"
msgid "By"
msgstr "De"
msgid "Contents"
msgstr "Conteúdo"
msgid "Copyright"
msgstr "direito autoral"
msgid "Download notebook file"
msgstr "Baixar arquivo de notebook"
msgid "Download source file"
msgstr "Baixar arquivo fonte"
msgid "Download this page"
msgstr "Baixe esta página"
msgid "Edit this page"
msgstr "Edite essa página"
msgid "Fullscreen mode"
msgstr "Modo tela cheia"
msgid "Last updated on"
msgstr "Última atualização em"
msgid "Launch"
msgstr "Lançamento"
msgid "Open an issue"
msgstr "Abra um problema"
msgid "Print to PDF"
msgstr "Imprimir em PDF"
msgid "Source repository"
msgstr "Repositório fonte"
msgid "Sphinx Book Theme"
msgstr "Tema do livro Sphinx"
msgid "Theme by the"
msgstr "Tema por"
msgid "Toggle navigation"
msgstr "Alternar de navegação"
msgid "next page"
msgstr "próxima página"
msgid "open issue"
msgstr "questão aberta"
msgid "previous page"
msgstr "página anterior"
msgid "repository"
msgstr "repositório"
msgid "suggest edit"
msgstr "sugerir edição"

Some files were not shown because too many files have changed in this diff.