Some notes about Tensorflow code (1)

These are some notes summarizing my learning of TensorFlow; I will keep editing them. Generally, in a deep learning framework a model has three parts (yeah, that's all):

    1. Loss function
    2. Inference part: from X to Y
    3. Optimization

The code looks like

class AwesomeModel(object):
  def __init__(self):
    """ init the model with hyper-parameters etc """

  def inference(self, x):
    """ This is the forward calculation from x to y """
    return some_op(x, name="inference")

  def loss(self, batch_x, batch_y=None):
    y_predict = self.inference(batch_x)
    self.loss_op = tf.loss_function(batch_y, y_predict, name="loss")  # supervised
    # self.loss_op = tf.loss_function(batch_x, y_predict)  # unsupervised
    return self.loss_op

  def optimize(self, batch_x, batch_y):
    return tf.train.optimizer.minimize(self.loss(batch_x, batch_y), name="optimizer")
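To make the template concrete, here is a toy instantiation of the same three parts in plain Python (no TensorFlow; `ToyLinearModel`, the data, and the learning rate are all made up for illustration): fitting y = w * x by gradient descent.

```python
# Toy instantiation of the three-part skeleton, in plain Python
# (not TensorFlow): fit y = w * x by gradient descent.

class ToyLinearModel:
    def __init__(self, lr=0.01):
        self.w = 0.0   # single trainable parameter
        self.lr = lr   # learning rate (hyper-parameter)

    def inference(self, x):
        """Forward pass: from x to predicted y."""
        return self.w * x

    def loss(self, batch_x, batch_y):
        """Mean squared error over the batch."""
        errs = [(self.inference(x) - y) ** 2 for x, y in zip(batch_x, batch_y)]
        return sum(errs) / len(errs)

    def optimize(self, batch_x, batch_y):
        """One gradient-descent step on the MSE loss."""
        grad = sum(2 * (self.inference(x) - y) * x
                   for x, y in zip(batch_x, batch_y)) / len(batch_x)
        self.w -= self.lr * grad

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]        # generated with true w = 2
model = ToyLinearModel(lr=0.02)
for _ in range(200):
    model.optimize(xs, ys)
print(round(model.w, 2))          # converges near 2.0
```

The same separation of concerns carries over to the TensorFlow skeleton: `inference` builds the forward graph, `loss` attaches a scalar objective to it, and `optimize` wires that objective into an optimizer op.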

1. Saver

Training on real data takes so long that we need to store the trained model and retrieve it later. Only the variables constructed before the Saver is created will be saved.

To store a computation graph, we can use

saver = tf.train.Saver()
saver.save(sess, checkpoints_file_name)

To restore a computation graph, we can use

saver = tf.train.import_meta_graph(checkpoints_file_name + '.meta')
saver.restore(sess, checkpoints_file_name)

2. tf.app.flags & FLAGS

Save the global variables in FLAGS and try to document them; FLAGS carries its own documentation by default, which saves time when there are many global variables.

flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_boolean("some_flag", False, "Documentation")


def main(_):
  # your code goes here...
  # use FLAGS.some_flag in the code.
  pass

if __name__ == '__main__':
  tf.app.run()
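For comparison, the same flag pattern can be sketched with Python's standard argparse (this is just an analogy, not the TensorFlow API; the flag names here are made up):

```python
# Flags-with-documentation pattern, sketched with the standard library:
# each flag gets a default value and a help string, so global
# configuration stays documented in one place.
import argparse

parser = argparse.ArgumentParser(description="Toy training script")
parser.add_argument("--some_flag", action="store_true",
                    help="Documentation for the flag")
parser.add_argument("--learning_rate", type=float, default=0.01,
                    help="Initial learning rate")

FLAGS = parser.parse_args([])   # parse an empty argv for this demo
print(FLAGS.some_flag, FLAGS.learning_rate)   # False 0.01
```

As with tf.app.flags, asking for `--help` prints every flag with its documentation, which is exactly what saves time once the number of global variables grows.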
3. Try to name your operations

We'd use

loss_tensor = tf.nn.softmax_cross_entropy_with_logits(logits, labels, dim=-1, name="loss")

to name the loss operation. We will no longer have the loss_tensor variable after restoring the model, but we can always call graph.get_operation_by_name("loss") to get the operation back.
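The underlying idea is name-based lookup: once the original Python variable is gone, the name is the only remaining handle. A toy analogy in plain Python (ToyGraph is invented for illustration; it is not the TensorFlow API):

```python
# Toy analogy: a "graph" that registers operations by name, so they
# can be recovered after the original Python references are gone.
class ToyGraph:
    def __init__(self):
        self._ops = {}

    def add_op(self, name, fn):
        self._ops[name] = fn
        return fn

    def get_operation_by_name(self, name):
        return self._ops[name]

graph = ToyGraph()
graph.add_op("loss", lambda logits, labels: sum(
    (l - t) ** 2 for l, t in zip(logits, labels)))

# Later (e.g. after "restoring"): the local lambda is out of scope,
# but the registered name still recovers the operation.
loss_op = graph.get_operation_by_name("loss")
print(loss_op([1.0, 2.0], [1.0, 3.0]))   # 1.0
```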

4. Summaries

We should keep in mind, while coding, to use summaries. TensorFlow provides scalar, image, audio, and histogram summaries. We use a summary like

# 1. Declare summaries that you'd like to collect.
tf.scalar_summary("summary_name", tensor, name="summary_op_name")

# 2. Construct a summary writer object for the computation graph, once all summaries are defined.
summary_writer = tf.train.SummaryWriter(summary_dir_name, sess.graph)

# 3. Group all previously declared summaries for serialization. Usually we want all summaries defined
# in the computation graph. To pick a subset, use tf.merge_summary([summaries]).
summaries_tensor = tf.merge_all_summaries()

# 4. At runtime, in appropriate places, evaluate the summaries_tensor, to assign value.
summary_value, ... =[summaries_tensor, ...], feed_dict={...})

# 5. Write the summary value to disk, using summary writer.
summary_writer.add_summary(summary_value, global_step=step)


