2016-11-21

I have two distinct groups of summaries: one collected once per batch, and one collected once per epoch. How can I use merge_all_summaries(key='???') to collect each group separately? Doing it manually is always an option, but there seems to be a better way. How do I use multiple summary collections in TensorFlow?

An illustration of how I think it should work:

    # once per batch
    tf.scalar_summary("loss", graph.loss)
    tf.scalar_summary("batch_acc", batch_accuracy)
    # once per epoch
    gradients = tf.gradients(graph.loss, [W, D])
    tf.histogram_summary("embedding/W", W, collections='per_epoch')
    tf.histogram_summary("embedding/D", D, collections='per_epoch')

    tf.merge_all_summaries()     # -> (MergeSummary...) :)
    tf.merge_all_summaries(key='per_epoch') # -> NONE    :(

I found this question first, but I was searching for two summary groups that are not separate collections. This approach https://stackoverflow.com/questions/42418029/unable-to-use-summary-merge-in-tensorboard-for-separate-training-and-evaluation is slightly simpler for a slightly different use case: you can simply use the names of the summaries. – Maikefer

Answer


Problem solved: the collections argument of a summary op should be a list. Solution:

# once per batch 
    tf.scalar_summary("loss", graph.loss) 
    tf.scalar_summary("batch_acc", batch_accuracy) 
    # once per epoch 
    tf.histogram_summary("embedding/W", W, collections=['per_epoch']) 
    tf.histogram_summary("embedding/D", D, collections=['per_epoch']) 

    tf.merge_all_summaries()    # -> (MergeSummary...) :) 
    tf.merge_all_summaries(key='per_epoch') # -> (MergeSummary...) :) 

Edit: syntax changes in newer TF versions:

# once per batch 
    tf.summary.scalar("loss", graph.loss) 
    tf.summary.scalar("batch_acc", batch_accuracy) 
    # once per epoch 
    tf.summary.histogram("embedding/W", W, collections=['per_epoch']) 
    tf.summary.histogram("embedding/D", D, collections=['per_epoch']) 

    tf.summary.merge_all()     # -> (MergeSummary...) :)
    tf.summary.merge_all(key='per_epoch') # -> (MergeSummary...) :)