
VerbalTS: The implementation code of "VerbalTS: Generating Time Series from Texts" 📈

Project page · Paper link

VerbalTS: Generating Time Series from Texts

ICML 2025

Contribution

1. Model Architecture

We propose VerbalTS, which consists of two key components: a multi-view noise estimator and a multi-focal text processor.

Our model treats the time series generation process from three perspectives: a temporal view, a spatial view, and a diffusion view. The textual description is processed through multi-focal reprogramming, which aggregates the relevant tokens via learnable anchor vectors. Finally, a condition adapter aligns the multi-semantic information from the text with the corresponding components of the time series across the three views. Together, these components enable fine-grained time series generation from textual descriptions.
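
As a rough illustration of the multi-focal reprogramming idea (a minimal sketch, not the repository's actual classes; the module name, dimensions, and the use of standard multi-head attention are assumptions), learnable anchor vectors can be viewed as queries that pool the relevant text tokens:

```python
# Minimal sketch of multi-focal reprogramming: learnable anchor vectors attend
# over text-token embeddings to pool condition information for one view.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class MultiFocalReprogramming(nn.Module):
    def __init__(self, num_anchors: int = 16, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Learnable anchor vectors, one per focal "slot".
        self.anchors = nn.Parameter(torch.randn(num_anchors, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_tokens: torch.Tensor) -> torch.Tensor:
        # text_tokens: (batch, num_tokens, dim) embeddings from the text encoder.
        batch = text_tokens.size(0)
        queries = self.anchors.unsqueeze(0).expand(batch, -1, -1)
        # Anchors (queries) aggregate the relevant text tokens (keys/values).
        focal_embeddings, _ = self.attn(queries, text_tokens, text_tokens)
        return focal_embeddings  # (batch, num_anchors, dim)

# Example: pool 77 token embeddings into 16 focal condition vectors.
tokens = torch.randn(2, 77, 256)
print(MultiFocalReprogramming()(tokens).shape)  # torch.Size([2, 16, 256])
```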

2. Experimental Results

We compare our method, VerbalTS, with the baselines on two synthetic datasets (Synth-M, Synth-U), two real-world datasets (Weather, BlindWays), and two real-world augmented datasets (ETTm1, Traffic). As shown in the table below, our method significantly improves both the fidelity and the semantic alignment of the generated time series.

3. Demo

Our method supports using verbal language to generate or edit time series.

demo.mp4

Installation

1. Environment

torch==2.2.1
pandas==2.0.3
pyyaml==6.0.2
linear_attention_transformer==0.19.1
tensorboard==2.14.0
scikit-learn==1.3.2

You can use the following command to prepare your environment.

pip install -r requirements.txt
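
As an optional sanity check (not part of the repository), the pinned packages can be imported to confirm the environment is ready:

```python
# Optional sanity check that the dependencies pinned in requirements.txt are
# importable; versions are printed for the packages that expose them.
import torch
import pandas
import yaml
import sklearn
import linear_attention_transformer  # noqa: F401

print(torch.__version__, pandas.__version__, sklearn.__version__)
```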

2. Dataset

Download the datasets from Google Drive.

Assume the datasets are in `/path/to/data/`. The directory should look like:
/path/to/data/:
    synthetic_m/:
        meta.json
        train_ts.npy
        train_attrs_idx.npy
        train_caps.npy
        valid_ts.npy
        valid_attrs_idx.npy
        valid_caps.npy
        ...
    Weather/:
        ...

NOTE: The arg --data_folder=/path/to/data/ should be passed to the training script.
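
As a quick way to inspect the downloaded arrays (illustrative only; the array shapes and the structure of meta.json are assumptions, the file names follow the layout above):

```python
# Illustrative inspection of the downloaded arrays; only the file names come
# from the directory layout above.
import json
import numpy as np

data_folder = "/path/to/data/synthetic_m"

with open(f"{data_folder}/meta.json") as f:
    meta = json.load(f)

train_ts = np.load(f"{data_folder}/train_ts.npy")            # time series samples
train_attrs = np.load(f"{data_folder}/train_attrs_idx.npy")  # attribute indices
# Captions may be stored as an object array, hence allow_pickle=True.
train_caps = np.load(f"{data_folder}/train_caps.npy", allow_pickle=True)

print(meta)
print(train_ts.shape, train_attrs.shape, train_caps.shape)
```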

3. Pretrained model checkpoints

Download LongCLIP from Hugging Face and put the model weights in /path/to/save/.

Download the checkpoints from Google Drive.

Assume the checkpoints are in `/path/to/save/`. The directory should look like:
/path/to/save/:
    [dataset_name]_cttp:
        ...
    [dataset_name]_eval:
        [run_id]:
            ckpts:
                model_best.pth
            train_configs.yaml
            eval_configs.yaml
            model_cond_configs.yaml
            model_diff_configs.yaml
        ...
    ...

NOTE: The arg --save_folder=/path/to/save/ should be passed to the training script.
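
To peek at a downloaded checkpoint and its configs (illustrative only; the concrete run directory name, config keys, and checkpoint structure are assumptions, only the file layout follows the tree above):

```python
# Illustrative peek at a downloaded checkpoint and its training config.
import torch
import yaml

# Substitute the dataset name and run id from the checkpoint layout above.
run_dir = "/path/to/save/[dataset_name]_eval/[run_id]"

with open(f"{run_dir}/train_configs.yaml") as f:
    train_cfg = yaml.safe_load(f)

state = torch.load(f"{run_dir}/ckpts/model_best.pth", map_location="cpu")

print(train_cfg)
print(type(state))  # e.g. a state_dict or a dict that wraps one
```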

Training

1. Training scripts

To pretrain the model on a specific dataset, run:

bash scripts/dataset_name/train.sh

2. Results

After training, check the results at the following paths:

{save_folder}/{run_id}/results_stat.csv
{save_folder}/{run_id}/results_stat_condgen.csv
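
These result files are plain CSVs, so they can be loaded with pandas for a quick look (only the file paths from the pattern above are assumed, not the column names):

```python
# Load the result tables with pandas for a quick look.
import pandas as pd

save_folder = "/path/to/save"
run_id = "run_id"  # substitute the run id of your training run

stat = pd.read_csv(f"{save_folder}/{run_id}/results_stat.csv")
condgen = pd.read_csv(f"{save_folder}/{run_id}/results_stat_condgen.csv")

print(stat.head())
print(condgen.head())
```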

3. Evaluate with checkpoints

To evaluate the model with the downloaded checkpoints, run:

bash scripts/dataset_name/eval.sh

4. Device

All code in this repository runs on GPU by default. If you need to run on the CPU, please modify the device-related parameters in the config file.
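
The exact parameter name depends on this repository's config files; as a generic illustration only, a CPU fallback for a device setting usually looks like this:

```python
# Generic CPU-fallback illustration; the actual device parameter name in this
# repository's config files may differ.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
```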

License

This project is licensed under the MIT License - see the LICENSE file for details.

Citation

If our work helps your research, please give us a star or cite us using the following:

@inproceedings{gu2025verbalts,
  title={VerbalTS: Generating Time Series from Texts},
  author={Gu, Shuqi and Li, Chuyue and Jing, Baoyu and Ren, Kan},
  booktitle={International Conference on Machine Learning},
  year={2025}
}
