These are summary notes from my learning of TensorFlow, edited as I go. Generally, a deep learning framework has three parts (yeah, that's all):

1. Loss function
2. Inference part: from $X$ to $Y$
3. Optimization

The code looks like this:

```python
class AwesomeModel(object):
    def __init__(self):
        """ Init the model with hyper-parameters etc. """

    def inference(self, x):
        """ The forward calculation from x to y. """
        return some_op(x, name="inference")

    def loss(self, batch_x, batch_y=None):
        y_predict = self.inference(batch_x)
        # Store the loss tensor under a different attribute name, so it does
        # not clobber the loss() method itself.
        self.loss_op = tf.loss_function(batch_y, y_predict, name="loss")  # supervised
        # self.loss_op = tf.loss_function(batch_x, y_predict, name="loss")  # unsupervised

    def optimize(self, batch_x, batch_y):
        return tf.train.optimizer.minimize(self.loss_op, name="optimizer")
```
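To make the skeleton concrete, here is a framework-free sketch of the same three parts for 1-D linear regression. Everything here (`LinearModel`, the gradient-descent update) is illustrative plain Python, not TensorFlow API; the point is just the inference/loss/optimization split:

```python
class LinearModel(object):
    """Toy 1-D linear regression following the same three-part pattern."""

    def __init__(self, learning_rate=0.1):
        # __init__ holds hyper-parameters and parameters.
        self.w = 0.0
        self.b = 0.0
        self.learning_rate = learning_rate

    def inference(self, x):
        """Forward calculation from x to y."""
        return [self.w * xi + self.b for xi in x]

    def loss(self, batch_x, batch_y):
        """Mean squared error between predictions and targets."""
        y_predict = self.inference(batch_x)
        return sum((p - t) ** 2 for p, t in zip(y_predict, batch_y)) / len(batch_y)

    def optimize(self, batch_x, batch_y):
        """One gradient-descent step on the MSE loss."""
        n = len(batch_y)
        errors = [p - t for p, t in zip(self.inference(batch_x), batch_y)]
        grad_w = sum(2 * e * xi for e, xi in zip(errors, batch_x)) / n
        grad_b = sum(2 * e for e in errors) / n
        self.w -= self.learning_rate * grad_w
        self.b -= self.learning_rate * grad_b

model = LinearModel()
xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]  # data from y = 2x + 1
for _ in range(500):
    model.optimize(xs, ys)
```

After training, `model.w` and `model.b` approach 2 and 1; in TensorFlow the same roles are played by ops in the graph rather than explicit Python loops.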

1. Saver

Training on real data takes a long time, so we need to store the model and retrieve it later. Note that only variables constructed before the Saver command will be saved.

To store a computation graph, we can use

```python
saver = tf.train.Saver()
saver.save(sess, checkpoints_file_name)
```

To restore a computation graph, we can use

```python
saver = tf.train.import_meta_graph(checkpoints_file_name + '.meta')
saver.restore(sess, checkpoints_file_name)
```
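The store/retrieve idea itself is not TensorFlow-specific. As a framework-free analogy (the file name and parameter dict below are illustrative, not Saver's actual on-disk format), checkpointing a plain dict of parameters with the standard library looks like:

```python
import os
import pickle
import tempfile

params = {"w": 2.0, "b": 1.0}  # stand-in for the graph's variable values

# "Save": serialize the current parameter values to a checkpoint file.
ckpt = os.path.join(tempfile.mkdtemp(), "model.ckpt")
with open(ckpt, "wb") as f:
    pickle.dump(params, f)

# "Restore": load the values back, e.g. in a later process.
with open(ckpt, "rb") as f:
    restored = pickle.load(f)
```

Saver additionally stores the graph structure itself (the `.meta` file), which is why restoring is a two-step `import_meta_graph` then `restore`.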

2. tf.app.run() & FLAGS

Save the global variables in FLAGS and try to document them; FLAGS comes with built-in `--help` documentation, which saves time when there are a lot of global variables.

```python
tf.app.flags.DEFINE_boolean("some_flag", False, "Documentation")

FLAGS = tf.app.flags.FLAGS

def main(_):
    # Use FLAGS.some_flag in the code.
    pass

if __name__ == '__main__':
    tf.app.run()
```
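FLAGS plays much the same role as the standard library's `argparse`. A plain-Python equivalent of the snippet above (the flag name and help text are just the ones from the example; `parse_args` is called with an explicit argv here for demonstration):

```python
import argparse

# Mirrors tf.app.flags.DEFINE_boolean("some_flag", False, "Documentation").
parser = argparse.ArgumentParser()
parser.add_argument("--some_flag", action="store_true", default=False,
                    help="Documentation")

def main(flags):
    # Use flags.some_flag in the code.
    return flags.some_flag

# Normally: FLAGS = parser.parse_args(); argv passed explicitly for demonstration.
FLAGS = parser.parse_args(["--some_flag"])
```

Running the script with `--help` prints each flag with its documentation string, which is the time-saver when many global variables pile up.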

3. Try to name your operations

We can use

`loss_tensor = tf.nn.softmax_cross_entropy_with_logits(logits, labels, dim=-1, name="loss")`

to label the loss operation. We will no longer have `loss_tensor` when we restore the model, but we can always call `graph.get_operation_by_name("loss")` to get the operation.
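Why naming matters: after a restore, Python variables like `loss_tensor` are gone, and the operation's name is the only handle left. A toy sketch of name-based lookup (this `Graph` registry is illustrative, not TensorFlow's actual graph class):

```python
class Graph(object):
    """Toy registry resolving operations by name, like graph.get_operation_by_name."""

    def __init__(self):
        self._ops = {}

    def add_op(self, name, op):
        self._ops[name] = op

    def get_operation_by_name(self, name):
        return self._ops[name]

graph = Graph()
graph.add_op("loss",
             lambda logits, labels: sum((l - t) ** 2 for l, t in zip(logits, labels)))

# Later (e.g. after a restore) the original Python variable is gone,
# but the name still recovers the operation:
loss_op = graph.get_operation_by_name("loss")
```

In real TensorFlow, `get_operation_by_name("loss")` returns the operation; to fetch its output tensor instead, the name gets an output index suffix such as `"loss:0"`.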

4. Summaries

We should keep in mind to use summaries while coding. TensorFlow provides scalar, histogram, image, and audio summaries. We use summaries like this:

```python
# 1. Declare summaries that you'd like to collect.
tf.scalar_summary("summary_name", tensor, name="summary_op_name")

# 2. Construct a summary writer object for the computation graph, once all summaries are defined.
summary_writer = tf.train.SummaryWriter(summary_dir_name, sess.graph)

# 3. Group all previously declared summaries for serialization. Usually we want all summaries defined
# in the computation graph. To pick a subset, use tf.merge_summary([summaries]).
summaries_tensor = tf.merge_all_summaries()

# 4. At runtime, in appropriate places, evaluate the summaries_tensor, to assign value.
summary_value, ... = sess.run([summaries_tensor, ...], feed_dict={...})

# 5. Write the summary value to disk, using the summary writer.
summary_writer.add_summary(summary_value, global_step)
```