Can someone explain logging device placement in the TensorFlow tutorial?
# Creates a graph.
with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
In the above example, the ops are pinned to cpu:0, and log_device_placement is set to True. As the output of the above code, the tutorial shows:
Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Tesla K40c, pci bus id: 0000:05:00.0
b: /job:localhost/replica:0/task:0/cpu:0
a: /job:localhost/replica:0/task:0/cpu:0
MatMul: /job:localhost/replica:0/task:0/gpu:0
[[ 22.  28.]
 [ 49.  64.]]
Now, here the constants a and b and the matmul operation c are all created inside the tf.device('/cpu:0') scope, and the device log indeed shows a and b on cpu:0. Why, then, has MatMul been executed on gpu:0?
That seems to be a bug in the documentation; the matmul operation is placed on the CPU in this case.
Indeed, running this code sample:
import tensorflow as tf

# Creates a graph.
with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))

# And, to prove that GPUs exist and can be used:
with tf.device('/gpu:0'):
    d = tf.random_normal([])
print(sess.run(d))
will show the following placements:
MatMul: (MatMul): /job:localhost/replica:0/task:0/cpu:0
b: (Const): /job:localhost/replica:0/task:0/cpu:0
a: (Const): /job:localhost/replica:0/task:0/cpu:0
random_normal/RandomStandardNormal: (RandomStandardNormal): /job:localhost/replica:0/task:0/gpu:0
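As a complementary check (a sketch, not from the original post): the .device attribute of a tensor shows only the placement requested in the graph definition, while log_device_placement reveals the device the placer actually chose at run time.

# .device reflects the explicit tf.device pinning above;
# exact string format may vary between TF 1.x versions.
print(a.device)  # e.g. '/device:CPU:0'
print(d.device)  # e.g. '/device:GPU:0'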
I think the documentation bug is that the c = tf.matmul(a, b) statement is supposed to be outside the with tf.device('/cpu:0') scope.
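For reference, here is a minimal sketch of what the tutorial presumably intended, with the matmul moved outside the CPU scope (assuming TF 1.x and at least one visible GPU):

import tensorflow as tf

# Pin only the inputs to the CPU.
with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
# Created outside the scope, so the placer is free to choose the GPU.
c = tf.matmul(a, b)

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))

On a machine with a GPU, the log then shows a and b on cpu:0 and MatMul on gpu:0, which matches the output quoted in the question.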