These words have appeared frequently in my research papers! I suddenly wanted to know what has been happening in my research, so it felt natural to dig into the words that appear most often in my publications. The result:

1. Bayesian
2. Dynamic
3. Learning
4. Inference
5. Network
6. Model

……….
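A quick way to reproduce such a count (the `titles` string here is a toy stand-in for real paper titles and abstracts):

```python
from collections import Counter
import re

def top_words(text, k=6, min_len=5):
    """Count the most frequent longer words in a blob of text,
    e.g. all paper titles/abstracts pasted together."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if len(w) >= min_len).most_common(k)

# Toy stand-in for the actual publication text
titles = ("Bayesian dynamic network inference "
          "Bayesian learning model dynamic inference network model")
print(top_words(titles))
```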

Hhh, I think it is worth keeping track of this to see what I am really pursuing and fighting for!

Today in the Princeton 544 course, we learned about the knowledge gradient. It is a one-period lookahead that maximizes the value of information: $V_{x}^{KG,n} = E\{\max_{y} F(y, B^{n+1}(x))\}-\max_{y}F(y,B^{n})$, where

• $B^{n}$ is the current belief state, and $B^{n+1}(x)$ is the updated belief (parameter estimates) after running the experiment;
• $\max_{y}F(y,B^{n})$ chooses the best design given what we currently know;
• $x$ is the proposed experiment;
• The expectation averages over the possible outcomes of the experiment (and over our uncertainty about the parameters).
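To make the formula concrete, here is a small Monte Carlo sketch of the KG value under independent normal beliefs with Gaussian measurement noise. This setup, and all the numbers, are my own illustrative assumptions, not from the lecture:

```python
import random

def kg_value(mu, sigma2, x, noise_var, n_samples=10000, seed=0):
    """Monte Carlo estimate of the knowledge-gradient value of measuring
    alternative x, with independent normal beliefs (mu, sigma2) and
    Gaussian observation noise of variance noise_var (assumed setup)."""
    rng = random.Random(seed)
    current_best = max(mu)  # max_y F(y, B^n): best design under current beliefs
    total = 0.0
    for _ in range(n_samples):
        # Simulate one observation of x: predictive distribution is
        # W ~ N(mu[x], sigma2[x] + noise_var)
        w = rng.gauss(mu[x], (sigma2[x] + noise_var) ** 0.5)
        # Conjugate normal update of the belief about x only
        post_var = 1.0 / (1.0 / sigma2[x] + 1.0 / noise_var)
        post_mean = post_var * (mu[x] / sigma2[x] + w / noise_var)
        # max_y F(y, B^{n+1}(x)): only the belief about x has changed
        best_after = max(post_mean, max(m for i, m in enumerate(mu) if i != x))
        total += best_after
    return total / n_samples - current_best

# Three alternatives: measuring the highly uncertain runner-up (x = 1)
# carries the most information
mu = [1.0, 0.9, 0.2]
sigma2 = [0.01, 1.0, 0.01]
kg = [kg_value(mu, sigma2, x, noise_var=0.5) for x in range(3)]
```

Note that the KG value is large only for alternative 1: it is nearly as good as the current best and very uncertain, so observing it can actually change our decision.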

This is new to me, and less applicable to my current project, but I would like to summarize the general content below: the four classes of policies in stochastic optimization:

1. Policy function approximation
2. Parametric cost function approximation
3. Value function approximation
4. Direct lookahead approximation

(1) and (2) require tuning parameters, but they are simpler and easier to implement: the work reduces to estimating the parameter $\theta$ in the policy $X^{\pi}(S, \theta)$. (3) and (4) require more complex modeling.
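As a toy illustration of what "tuning $\theta$" means for a policy function approximation, here is a simple order-up-to inventory rule whose parameters are chosen by simulating costs over a grid. The policy form, cost values, and demand model are all made-up for illustration:

```python
import random

def order_up_to_policy(inventory, theta):
    """PFA example: an (s, S) order-up-to rule X^pi(S_t, theta).
    theta = (s, S): if inventory falls below s, order up to S."""
    s, S = theta
    return max(0, S - inventory) if inventory < s else 0

def simulate_cost(theta, horizon=1000, seed=0,
                  hold=1.0, penalty=9.0, mean_demand=5):
    """Average per-period cost of running the policy against random
    demand (holding/penalty numbers are illustrative assumptions)."""
    rng = random.Random(seed)
    inv, cost = 10, 0.0
    for _ in range(horizon):
        inv += order_up_to_policy(inv, theta)
        demand = rng.randint(0, 2 * mean_demand)
        lost = max(0, demand - inv)          # unmet demand is penalized
        inv = max(0, inv - demand)
        cost += hold * inv + penalty * lost
    return cost / horizon

# "Tuning" theta = searching over a parameter grid, not solving a model
candidates = [(s, S) for s in range(0, 15, 2) for S in range(5, 30, 5) if S > s]
best_theta = min(candidates, key=simulate_cost)
```

The point is that the policy's structure is fixed in advance; all the effort goes into estimating $\theta$, in contrast to (3) and (4), where we would have to model values or lookaheads explicitly.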