Final review

Logistic regression

  • sigmoid curve
  • loss function and how the model learns (weight updates)
  • “gradient descent”
  • be comfortable with a vector of X values (not just a single value)
  • bias term (all sketched in the example after this list)
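
A minimal NumPy sketch of these points (the toy data and learning rate are made up): the sigmoid squashes the weighted sum of a feature vector plus a bias into a probability, cross-entropy is the loss, and gradient descent updates the weights:

    import numpy as np

    def sigmoid(z):
        # the S-shaped curve, squashing any real number into (0, 1)
        return 1.0 / (1.0 + np.exp(-z))

    # toy data: 4 examples, each a vector of 2 X values (not a single value)
    X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
    y = np.array([1.0, 1.0, 1.0, 0.0])

    w = np.zeros(2)  # one weight per feature
    b = 0.0          # bias term
    lr = 0.5         # learning rate

    for epoch in range(1000):
        p = sigmoid(X @ w + b)  # predicted probabilities
        # cross-entropy (log) loss
        loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
        # gradient descent: step each parameter downhill along its gradient
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)

    print(w, b, loss)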

TensorFlow

  • logistic regression in TensorFlow (sketch after this list)
  • “placeholder”, “variable”, epochs, loss, optimizer (“SGD”)
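
The same model as a TensorFlow 1.x-style sketch, assuming the classic placeholder/Session API (again with made-up toy data):

    import numpy as np
    import tensorflow as tf

    # "placeholder": an input slot we feed data into at each step
    x = tf.placeholder(tf.float32, shape=[None, 2])
    y = tf.placeholder(tf.float32, shape=[None, 1])

    # "variable": a parameter the optimizer is allowed to update
    W = tf.Variable(tf.zeros([2, 1]))
    b = tf.Variable(tf.zeros([1]))

    pred = tf.sigmoid(tf.matmul(x, W) + b)
    loss = tf.reduce_mean(-y * tf.log(pred) - (1 - y) * tf.log(1 - pred))
    train = tf.train.GradientDescentOptimizer(0.5).minimize(loss)  # "SGD"

    X_data = np.array([[0., 1.], [1., 0.], [1., 1.], [0., 0.]])
    y_data = np.array([[1.], [1.], [1.], [0.]])

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch in range(1000):  # one epoch = one pass over the training data
            _, l = sess.run([train, loss], feed_dict={x: X_data, y: y_data})
        print(sess.run(W), sess.run(b), l)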

Pandas

  • “dummies”, “one-hot” (example after this list)
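
A quick made-up example of pandas's get_dummies, which produces one-hot “dummy” columns from a categorical column:

    import pandas as pd

    df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})
    # one 0/1 column per category, with a 1 marking each row's category
    print(pd.get_dummies(df["color"]))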

Non-binary outputs

  • using softmax
  • what the loss function changes to
  • (MNIST code, 10 outputs; sketch after this list)
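
A NumPy sketch with made-up scores: softmax turns 10 raw outputs into probabilities that sum to 1, and the loss changes to categorical cross-entropy, the negative log of the true class's probability:

    import numpy as np

    def softmax(z):
        e = np.exp(z - np.max(z))  # subtract the max for numerical stability
        return e / e.sum()

    # 10 raw scores, one per class (like the 10 digits in MNIST)
    scores = np.array([1.0, 2.0, 0.5, 0.1, 0.0, 0.3, 0.2, 0.1, 3.0, 0.4])
    p = softmax(scores)
    print(p.sum())  # 1.0

    # categorical cross-entropy when the true class is 8
    print(-np.log(p[8]))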

Multi-layer networks

  • how this is coded in TensorFlow (not Keras)
  • how this is coded in Keras (sketch after this list)
  • how many weights/biases there are (parameters)
  • epochs, batch size, train/test splits, validation data
  • overfitting, underfitting
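
A minimal Keras sketch (the layer sizes are made up); in plain TensorFlow the same network would be explicit matmuls chained together, as in the logistic-regression sketch above. The comments show where the parameter counts come from: a Dense layer has inputs × units weights plus units biases:

    from tensorflow import keras

    model = keras.Sequential([
        # 784 inputs x 64 units + 64 biases = 50,240 parameters
        keras.layers.Dense(64, activation="relu", input_shape=(784,)),
        # 64 x 10 + 10 = 650 parameters
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="sgd", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()  # prints per-layer weight/bias (parameter) counts

    # training controls (X_train, y_train stand in for your data):
    # model.fit(X_train, y_train, epochs=10, batch_size=32,
    #           validation_split=0.2)
    # validation loss rising while training loss falls suggests overfitting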

Text processing

  • tokenization, bag of words (example after this list)
  • stop words
  • word2vec, “embeddings”
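
A sketch of tokenization, bag of words, and stop words using scikit-learn's CountVectorizer (the documents are made up, and scikit-learn is an assumption here; any tokenizer works). Contrast with word2vec, which maps each word to a dense learned embedding vector rather than a count:

    from sklearn.feature_extraction.text import CountVectorizer

    docs = ["the cat sat on the mat", "the dog chased the cat"]
    # tokenization splits each document into words; bag of words counts them,
    # ignoring order; stop_words="english" drops filler words like "the"
    vec = CountVectorizer(stop_words="english")
    bow = vec.fit_transform(docs)
    print(vec.get_feature_names_out())  # vocabulary (get_feature_names in older versions)
    print(bow.toarray())                # one count vector per document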

Deep learning

  • convolutions
    • how it works: sliding kernel
    • padding, stride, kernel size
    • if you see a statement like Convolution2D(32, kernel_size=(3, 3), …), what does it mean? (sketch after this list)
  • dropout
  • pooling
  • transfer learning
  • image loading and transformations (random zooming, rotation, etc.)
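
Two sketches (all sizes made up). First, the sliding kernel by hand in NumPy; then a Keras model showing what Convolution2D(32, kernel_size=(3, 3), …) means: 32 filters, each a 3×3 kernel slid over the image. Dropout and pooling appear as layers, and the commented ImageDataGenerator line is one common way to get random zooming and rotation:

    import numpy as np
    from tensorflow import keras

    # the sliding kernel: a 3x3 window moves across a 6x6 image (stride 1,
    # no padding), taking a weighted sum at each position -> a 4x4 output
    img = np.random.rand(6, 6)
    kernel = np.random.rand(3, 3)
    out = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)

    model = keras.Sequential([
        # 32 filters, each a 3x3 kernel; "same" padding keeps the 28x28 size
        keras.layers.Conv2D(32, kernel_size=(3, 3), strides=(1, 1),
                            padding="same", activation="relu",
                            input_shape=(28, 28, 1)),
        # pooling: keep only the max of each 2x2 window (downsampling)
        keras.layers.MaxPooling2D(pool_size=(2, 2)),
        # dropout: randomly zero 25% of activations to fight overfitting
        keras.layers.Dropout(0.25),
        keras.layers.Flatten(),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.summary()

    # random image transformations during loading, one common option:
    # gen = keras.preprocessing.image.ImageDataGenerator(rotation_range=20,
    #                                                    zoom_range=0.2)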
