# Neural Networks with TensorFlow
Master the Sequential and Functional APIs, understand layers and activations, configure training with optimizers and losses, and use callbacks to control the training process.
## The Sequential API
The Sequential API is the simplest way to build a neural network in Keras. You stack layers one after another in a linear pipeline:
```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(256, activation='relu', input_shape=(784,)),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(10, activation='softmax')
])

model.summary()  # Print architecture overview
```
## The Functional API
The Functional API gives you more flexibility. Use it when your model has multiple inputs, multiple outputs, shared layers, or non-linear topology (skip connections, residual blocks):
```python
from tensorflow import keras

# Define inputs
inputs = keras.Input(shape=(784,))

# Build layers with explicit connections
x = keras.layers.Dense(256, activation='relu')(inputs)
x = keras.layers.Dropout(0.3)(x)
x = keras.layers.Dense(128, activation='relu')(x)
x = keras.layers.Dropout(0.3)(x)
outputs = keras.layers.Dense(10, activation='softmax')(x)

# Create the model
model = keras.Model(inputs=inputs, outputs=outputs)

# Multi-input example: combining image and metadata
image_input = keras.Input(shape=(224, 224, 3), name='image')
meta_input = keras.Input(shape=(10,), name='metadata')

img_features = keras.layers.Flatten()(image_input)
img_features = keras.layers.Dense(128, activation='relu')(img_features)

combined = keras.layers.concatenate([img_features, meta_input])
output = keras.layers.Dense(1, activation='sigmoid')(combined)

multi_model = keras.Model(inputs=[image_input, meta_input], outputs=output)
```
## Common Layers
| Layer | Purpose | Key Parameters |
|---|---|---|
| Dense | Fully connected layer | units, activation |
| Dropout | Regularization (randomly drops neurons) | rate (0.0-1.0) |
| BatchNormalization | Normalize activations for faster training | momentum, epsilon |
| Flatten | Reshape multi-dim input to 1D | — |
| Embedding | Learn dense representations for categorical data | input_dim, output_dim |
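To build intuition for the Dropout row above: during training, inverted dropout zeroes a random fraction of activations and scales the survivors so the expected magnitude is unchanged. A minimal NumPy sketch of that idea (an illustration of the concept, not Keras's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate):
    """Inverted dropout sketch: zero a fraction `rate` of units and
    scale the survivors by 1/(1-rate) so the expected sum is unchanged."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones(10)
y = dropout(x, rate=0.3)
# Each entry of y is either 0 (dropped) or 1/0.7 ≈ 1.43 (kept and rescaled)
```

At inference time Keras disables dropout automatically; the rescaling during training is what makes that possible without changing the layer's expected output.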
## Training Configuration
The `compile()` step configures how your model learns. The three key choices are the optimizer, loss function, and metrics:
```python
# Classification with custom learning rate
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

# Regression
model.compile(
    optimizer='adam',
    loss='mse',
    metrics=['mae']
)

# Binary classification
model.compile(
    optimizer='adam',
    loss='binary_crossentropy',
    metrics=['accuracy', keras.metrics.AUC()]
)
```
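Under the hood, sparse categorical crossentropy is just the negative log of the probability the model assigned to the true class, averaged over the batch. A NumPy sketch of the math (illustrative only, not the Keras implementation):

```python
import numpy as np

def sparse_categorical_crossentropy(y_true, y_pred):
    """Mean of -log(p[true_class]) over the batch.
    y_true: integer class labels, shape (n,)
    y_pred: softmax probabilities, shape (n, num_classes)"""
    probs = y_pred[np.arange(len(y_true)), y_true]
    return -np.mean(np.log(probs))

y_true = np.array([2, 0])
y_pred = np.array([[0.1, 0.1, 0.8],
                   [0.7, 0.2, 0.1]])
loss = sparse_categorical_crossentropy(y_true, y_pred)
# loss = -(log(0.8) + log(0.7)) / 2 ≈ 0.290
```

The "sparse" variant takes integer labels directly; plain `categorical_crossentropy` expects one-hot vectors instead.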
## Callbacks
Callbacks let you hook into the training process to save checkpoints, stop early, adjust the learning rate, and more:
```python
callbacks = [
    # Stop training when validation loss stops improving
    keras.callbacks.EarlyStopping(
        monitor='val_loss',
        patience=5,
        restore_best_weights=True
    ),
    # Save the best model checkpoint
    keras.callbacks.ModelCheckpoint(
        'best_model.keras',
        monitor='val_accuracy',
        save_best_only=True
    ),
    # Reduce learning rate when loss plateaus
    keras.callbacks.ReduceLROnPlateau(
        monitor='val_loss',
        factor=0.5,
        patience=3
    ),
    # Log metrics for TensorBoard visualization
    keras.callbacks.TensorBoard(log_dir='./logs')
]

# Train with callbacks
history = model.fit(
    x_train, y_train,
    epochs=50,
    batch_size=32,
    validation_split=0.2,
    callbacks=callbacks
)
```
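The patience mechanic behind `EarlyStopping` can be sketched in a few lines of plain Python. This is a simplified illustration of the logic (ignoring `min_delta` and weight restoration), not the Keras source:

```python
def early_stop_epoch(val_losses, patience):
    """Return the 0-based epoch at which training would stop, or None.
    Stops after `patience` consecutive epochs without a new best val_loss."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            wait = 0  # improvement resets the counter
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

losses = [0.9, 0.7, 0.6, 0.61, 0.62, 0.63, 0.64, 0.65]
stop = early_stop_epoch(losses, patience=3)  # best was epoch 2; stops at epoch 5
```

With `restore_best_weights=True`, Keras additionally rolls the model back to the weights from the best epoch (epoch 2 here) rather than keeping the final, slightly overfit ones.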
Run `tensorboard --logdir=./logs` in a terminal to visualize training curves, model graphs, and more in your browser.
## Evaluating and Using Your Model
```python
# Evaluate on test data
test_loss, test_acc = model.evaluate(x_test, y_test)

# Make predictions
predictions = model.predict(x_new)

# Save and load the model
model.save('my_model.keras')
loaded_model = keras.models.load_model('my_model.keras')
```
## Next Up: CNNs & Computer Vision
Now that you understand neural network fundamentals in TensorFlow, let's build convolutional neural networks for image classification.
Lilly Tech Systems