Custom Dataset refactoring + docs (#715)
EDIT: removed the specific new functions in hf_datasets.py, kept most of the doc changes, and will not go for a registration-based API.

Fixes #311

This PR describes the status quo of how new datasets should be registered today: there's an implicit assumption that people install torchtitan from source and update hf_datasets.py to support new datasets. As an example, I added the wikipedia dataset.

The main "nice" thing about this PR is that `class HuggingFaceDataset` is now agnostic to the c4 dataset, which makes it easier for new people to add datasets without reading the rest of the file.

There's another direction this PR could have gone in: allowing custom dataset registration. The benefit is that people can support new datasets without installing titan from source, but registration APIs can feel kind of "bureaucratic", and presumably people would need to register the dataset somewhere, probably `train.py`? Not totally sure which is more in line with the repo's goals, so opening this PR to discuss.

```python
def register_dataset(
    name: str,
    loader: Callable[[str, Dict[str, Any]], Any],
    processor: Callable[[Dict[str, Any]], str],
    path: Optional[str] = None,
) -> None:
    DATASET_LOADERS[name] = loader
    DATASET_TEXT_PROCESSORS[name] = processor


def wikipedia_loader(dataset_path: str, **kwargs):
    return load_dataset(
        dataset_path,
        name="20220301.en",
        split="train",
        streaming=True,
        trust_remote_code=True,
    )


def wikipedia_processor(sample: Dict[str, Any]) -> str:
    return f"{sample['title']}\n\n{sample['text']}"


register_dataset(
    name="wikipedia",
    loader=wikipedia_loader,
    processor=wikipedia_processor,
    path="wikipedia",
)
```
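For illustration only, here is a minimal sketch of how such registries might be consumed on the lookup side. `build_dataset_components` is a hypothetical helper, not actual torchtitan code, and it assumes `DATASET_LOADERS` / `DATASET_TEXT_PROCESSORS` are plain dicts keyed by dataset name:

```python
from typing import Any, Callable, Dict, Optional, Tuple

# Assumed registries: plain dicts keyed by dataset name (hypothetical).
DATASET_LOADERS: Dict[str, Callable[..., Any]] = {}
DATASET_TEXT_PROCESSORS: Dict[str, Callable[[Dict[str, Any]], str]] = {}


def build_dataset_components(
    name: str, dataset_path: Optional[str] = None, **kwargs
) -> Tuple[Any, Callable[[Dict[str, Any]], str]]:
    """Hypothetical lookup: resolve a registered loader/processor pair by name."""
    if name not in DATASET_LOADERS:
        raise ValueError(
            f"Dataset '{name}' is not registered; known datasets: {list(DATASET_LOADERS)}"
        )
    loader = DATASET_LOADERS[name]
    processor = DATASET_TEXT_PROCESSORS[name]
    dataset = loader(dataset_path or name, **kwargs)
    return dataset, processor
```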
Showing 4 changed files with 147 additions and 83 deletions.
# Custom Datasets in TorchTitan

TorchTitan is designed to work seamlessly with most HuggingFace datasets. While we provide the C4 dataset for numerics and convergence testing, you can easily add support for your own datasets. Here's how to do it using Wikipedia as an example.

## Quick Start

Locate the dataset configuration file:

```
torchtitan/datasets/hf_datasets/hf_datasets.py
```

## Adding Your Dataset

You'll need to add three components:
1. A dataset loader function
2. A sample processor function
3. A dataset configuration entry
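
These three pieces come together in a `DatasetConfig` entry. The exact definition lives in `hf_datasets.py`; as a rough sketch (field names follow the Key Points section below), it looks something like this:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass
class DatasetConfig:
    path: str  # default HuggingFace dataset path (can be overridden at training time)
    loader: Callable[..., Any]  # returns a (typically streaming) HuggingFace dataset
    text_processor: Callable[[Dict[str, Any]], str]  # maps one sample to a text string
```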

### 1. Define Dataset Loader

Create a function that specifies how to load your dataset:

```python
def load_wikipedia_dataset(dataset_path: str, **kwargs):
    """Load Wikipedia dataset with specific configuration."""
    logger.info("Loading Wikipedia dataset...")
    return load_dataset(
        dataset_path,
        name="20220301.en",
        split="train",
        streaming=True,
        trust_remote_code=True,
    )
```
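
If you want to sanity-check the loader before wiring it into TorchTitan, you can pull a single sample from the streaming dataset. This is purely illustrative; the available fields depend on the dataset you load:

```python
ds = load_wikipedia_dataset("wikipedia")
sample = next(iter(ds))  # streaming dataset: fetches just one sample lazily
print(sample.keys())     # for Wikipedia, expect fields such as 'title' and 'text'
```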

### 2. Define Sample Processor

Create a function that processes individual samples from your dataset:

```python
def process_wikipedia_text(sample: Dict[str, Any]) -> str:
    """Process Wikipedia dataset sample text."""
    return f"{sample['title']}\n\n{sample['text']}"
```
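
As a quick illustration, the processor just turns one sample dict into a single training string (the sample below is made up):

```python
sample = {"title": "PyTorch", "text": "PyTorch is an open source machine learning framework."}
print(process_wikipedia_text(sample))
# PyTorch
#
# PyTorch is an open source machine learning framework.
```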

### 3. Register Your Dataset

Add your dataset configuration to the `DATASETS` dictionary:

```python
DATASETS = {
    # ... existing datasets ...
    "wikipedia": DatasetConfig(
        path="wikipedia",  # default HuggingFace dataset path
        loader=load_wikipedia_dataset,
        text_processor=process_wikipedia_text,
    ),
}
```
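
Conceptually, `HuggingFaceDataset` resolves this entry by name; the lookup amounts to something like the following (a sketch, not the actual torchtitan code):

```python
config = DATASETS["wikipedia"]                 # resolved from the `dataset` config value
ds = config.loader(config.path)                # path may be overridden via the training config
text = config.text_processor(next(iter(ds)))   # one processed training string
```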

### 4. Configure Your Training

In your training configuration file (`.toml`), set your dataset:

```toml
dataset = "wikipedia"
```

That's it! Your custom dataset is now ready to use with TorchTitan.

## Key Points

- The `DatasetConfig` contains all necessary components for a dataset:
  - `path`: The default path to the dataset (can be overridden during training)
  - `loader`: Function to load the dataset
  - `text_processor`: Function to process individual samples
- The loader function should return a HuggingFace dataset object
- The processor function should return a string that combines the relevant fields from your dataset
- Use `streaming=True` for large datasets to manage memory efficiently

Now you can start training with your custom dataset!