Tensorflow Video tutorial

Since I have some free time, I would like to record a few videos showing how to use TensorFlow.

Tensorflow Tutorial 1 – A very simple Neural Network
Source code: https://github.com/jmlipman/LAID/blob/master/Tensorflow/Tutorial/1_nn.py

Tensorflow Tutorial 2 – A very simple Convolutional Neural Network
Source code: https://github.com/jmlipman/LAID/blob/master/Tensorflow/Tutorial/2_cnn.py

Tensorflow Tutorial 3 – Encapsulating CNN’s code with get_variable and variable_scope
Source code: https://github.com/jmlipman/LAID/blob/master/Tensorflow/Tutorial/3_cnn_pretty.py

Tensorflow Tutorial 4 – Batch Normalization
Source code: https://github.com/jmlipman/LAID/blob/master/Tensorflow/Tutorial/4_batchNormalization.py

Tensorflow Tutorial 5 – Customized activation function, RReLU
Source code: https://github.com/jmlipman/LAID/blob/master/Tensorflow/Tutorial/5_customizedActivationRReLU.py

Tensorflow Tutorial 6 – Autoencoder
Source code: https://github.com/jmlipman/LAID/blob/master/Tensorflow/Tutorial/6_autoencoder.py

Juan Miguel Valverde

"The only way to prove that you understand something is by programming it"

4 thoughts to “Tensorflow Video tutorial”

  1. Respected Lipman,
    I have a small doubt about your 6_autoencoder.py file.
    In the decoding section you coded as follows:

    with tf.variable_scope("conv3", reuse=True) as scope:
        W = tf.get_variable("W")
        layers.append(tf.nn.conv2d_transpose(layers[-1], W, [tf.shape(layers[-1])[0], 7, 7, 2], strides=[1, 2, 2, 1], padding="SAME"))
        layers.append(tf.nn.relu(layers[-1]))

    with tf.variable_scope("conv2", reuse=True) as scope:
        W = tf.get_variable("W")
        layers.append(tf.nn.conv2d_transpose(layers[-5], W, [tf.shape(layers[-5])[0], 14, 14, 4], strides=[1, 2, 2, 1], padding="SAME"))
        l5_act = tf.nn.relu(layers[-1])

    with tf.variable_scope("conv1", reuse=True) as scope:
        W = tf.get_variable("W")
        layers.append(tf.nn.conv2d_transpose(layers[-9], W, [tf.shape(layers[-9])[0], 28, 28, 1], strides=[1, 2, 2, 1], padding="SAME"))
        layers.append(tf.nn.relu(layers[-1]))

    How do layers[-5] and layers[-9] come into the decoding area?
    Why didn't you take the output produced by the previous layer? As per your code, it should be layers[-1].

    Please correct me.
    I am sorry if I am wrong.

    1. You are totally right, Syam! Thank you so much for noticing. I've already uploaded a fixed version where I changed the [-9] and [-5] in the first argument of conv2d_transpose to [-1]. The confusion arose because those indices should be used for the output shape, but not as the layer that is multiplied by the weights.

      Best regards and thanks again 🙂
      JM.
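
      The indexing issue can be illustrated without TensorFlow at all: since every op appends its output to the `layers` list, the most recent output is always `layers[-1]`, while a deeper negative index such as `layers[-5]` reaches back into earlier encoder outputs. A minimal sketch (the layer names below are just placeholders, not the tutorial's actual variable names):

      ```python
      # Each op in the network appends its output tensor to `layers`.
      # Here we use strings as stand-ins for the tensors.
      layers = []
      for name in ["conv1", "pool1", "conv2", "pool2", "conv3"]:
          layers.append(name)

      # The decoder should consume the latest output, not an encoder layer:
      assert layers[-1] == "conv3"   # output of the previous layer
      assert layers[-5] == "conv1"   # reaches back into the encoder
      ```

      This is why the decoder's conv2d_transpose calls should take `layers[-1]` as input; deeper indices are only useful for reading the shapes of earlier encoder layers.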
