python - Parallel training with CNTK and numpy interop


I'm training an autoencoder network that needs to read in 3 images per training sample (one input RGB image, two output RGB images). It was easy to make this work with Python and numpy interop, reading the image files in myself.
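For reference, a minimal single-worker sketch of the setup described above, feeding hand-loaded numpy arrays straight to the trainer. The image size, the toy two-headed autoencoder, and the `load_images()` / `batches_of_paths` helpers are assumptions for illustration, not part of the question:

```python
import numpy as np
import cntk as C

img_shape = (3, 64, 64)                       # assumed (channels, height, width)

x  = C.input_variable(img_shape)              # input RGB image
y1 = C.input_variable(img_shape)              # first target RGB image
y2 = C.input_variable(img_shape)              # second target RGB image

# toy autoencoder: one shared encoder, two decoder heads
h    = C.layers.Dense(512, activation=C.relu)(x)
out1 = C.reshape(C.layers.Dense(int(np.prod(img_shape)))(h), img_shape)
out2 = C.reshape(C.layers.Dense(int(np.prod(img_shape)))(h), img_shape)
loss = C.squared_error(out1, y1) + C.squared_error(out2, y2)

model   = C.combine([out1, out2])
learner = C.sgd(loss.parameters, lr=C.learning_rate_schedule(0.01, C.UnitType.minibatch))
trainer = C.Trainer(model, (loss, None), [learner])

for paths in batches_of_paths:                # your own batching of image-file paths
    x_np, y1_np, y2_np = load_images(paths)   # hypothetical loader -> float32 numpy arrays
    trainer.train_minibatch({x: x_np, y1: y1_np, y2: y2_np})
```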

How can I enable parallel/distributed training for this? Do I have to use the training session construct? Do I have to use the image reader / minibatch source for that?

There are the following options:

1) Use a distributed learner + a training session. You then need to either use ImageDeserializer, or implement your own MinibatchSource (this extensibility is available starting from RC2).

2) Use a distributed learner + write the training loop yourself (sketched below). In that case you have to take care of splitting the data (each worker should only read the images that correspond to its rank), and the conditions inside the loop should be based on trainer->TotalNumberOfSamples() (i.e. for checkpointing, if any).
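A hedged sketch of option 2, building on the single-worker sketch above: wrap the local learner in a data-parallel distributed learner, have each worker read only the files for its rank, and drive loop conditions off the globally aggregated sample count. The round-robin split, the batching/loading helpers, and the checkpoint interval are assumptions:

```python
import cntk as C
from cntk.train.distributed import data_parallel_distributed_learner, Communicator

# x, y1, y2, loss and model are built exactly as in the single-worker sketch above

# wrap the local learner so gradients are aggregated across all workers
local_learner = C.sgd(loss.parameters, lr=C.learning_rate_schedule(0.01, C.UnitType.minibatch))
learner = data_parallel_distributed_learner(local_learner)
trainer = C.Trainer(model, (loss, None), [learner])

# each worker reads only the image triples that correspond to its rank
rank, workers = Communicator.rank(), Communicator.num_workers()
my_samples = all_image_triples[rank::workers]        # round-robin split (an assumption)

checkpoint_every = 10000                             # samples; arbitrary example value
next_checkpoint = checkpoint_every
for paths in batches_of_paths(my_samples):           # hypothetical batching helper
    x_np, y1_np, y2_np = load_images(paths)          # hypothetical loader -> numpy arrays
    trainer.train_minibatch({x: x_np, y1: y1_np, y2: y2_np})

    # loop conditions (here: checkpointing) use the globally aggregated sample count
    if trainer.total_number_of_samples_seen >= next_checkpoint:
        trainer.save_checkpoint('autoencoder.ckpt')  # example path
        next_checkpoint += checkpoint_every

Communicator.finalize()                              # let every worker shut down cleanly
```

The workers are launched with MPI, e.g. `mpiexec -n 4 python train.py`, which is how CNTK distributed learners discover the rank and worker count.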

