DataParallel and distributed training utilities for torch.rb. Splits batches across multiple GPUs automatically.
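A minimal plain-Ruby sketch of the batch-scattering idea behind data parallelism: a batch is divided into near-equal chunks, one per device, before the forward pass. The `scatter_batch` helper and the device names are illustrative assumptions, not the gem's actual API.

```ruby
# Split a batch into per-device shards (hypothetical helper, for
# illustration only; real data parallelism moves tensors to each GPU).
def scatter_batch(batch, devices)
  chunk_size = (batch.length.to_f / devices.length).ceil
  batch.each_slice(chunk_size).zip(devices).map do |chunk, device|
    { device: device, samples: chunk }
  end
end

shards = scatter_batch((1..8).to_a, ["cuda:0", "cuda:1"])
shards.each { |s| puts "#{s[:device]}: #{s[:samples].inspect}" }
# cuda:0 receives [1, 2, 3, 4]; cuda:1 receives [5, 6, 7, 8]
```

After each device computes gradients on its shard, a real implementation would average them before the optimizer step; that reduction step is omitted here.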
Chris Hasinski
January 8, 2026 1:19am
MIT