Chapter 3 · Artificial Intelligence

Neural Networks

Brain-inspired Computing

A neural network is inspired by the neurons of the human brain. It consists of layers of interconnected nodes (neurons): an input layer, one or more hidden layers, and an output layer. Deep learning refers to neural networks with many hidden layers.

An Analogy

A neural network is like an office hierarchy: data (the employees) → middle managers (the hidden layers, doing the processing) → the CEO (the output, making the decision). Each manager does their own part of the work and passes it on!

Neural Network Structure

text
Input Layer      Hidden Layers      Output Layer
(Features)       (Processing)       (Prediction)

[Age]    ──┐
[Income] ──┼──► [Neuron] ──► [Neuron] ──► [Loan Approved?]
[Score]  ──┘    [Neuron] ──► [Neuron] ──► [Loan Denied?]
                [Neuron]

Each connection has a weight (importance).
Each neuron computes: weighted sum → activation function → output.
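The weighted-sum-then-activation step can be sketched in plain NumPy. The feature values, weights, and bias below are made-up illustrative numbers, not learned values:

```python
import numpy as np

# A single neuron: weighted sum of inputs, then an activation function.
# Illustrative values only; in practice the weights and bias are learned.
x = np.array([35.0, 60000.0, 720.0])   # features: age, income, credit score
w = np.array([0.02, 0.00001, 0.001])   # one weight per connection
b = -1.0                               # bias term

z = np.dot(w, x) + b                   # weighted sum
output = 1 / (1 + np.exp(-z))          # sigmoid activation → value in (0, 1)
print(output)
```

The output lands between 0 and 1, which is why a sigmoid neuron is a natural fit for a yes/no decision like loan approval.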

Activation Functions

  • ReLU (Rectified Linear Unit) — f(x) = max(0, x). The most popular choice for hidden layers.
  • Sigmoid — f(x) = 1/(1+e^-x). Output in (0, 1). Used for binary classification.
  • Softmax — Converts scores for multiple classes into probabilities that sum to 1. Used for multi-class classification.
  • Tanh — Output in (-1, 1). Commonly used in RNNs.
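These four functions are simple enough to implement directly. A minimal NumPy sketch, so you can see each one's output range on the same inputs:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)          # negatives become 0, positives pass through

def sigmoid(x):
    return 1 / (1 + np.exp(-x))      # squashes any value into (0, 1)

def softmax(x):
    e = np.exp(x - np.max(x))        # subtract max for numerical stability
    return e / e.sum()               # probabilities that sum to 1

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))       # → [0. 0. 3.]
print(sigmoid(z))    # each value in (0, 1); sigmoid(0) is exactly 0.5
print(np.tanh(z))    # each value in (-1, 1)
print(softmax(z))    # sums to 1; the largest input gets the largest probability
```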

Training — Backpropagation

How neural networks learn

text
1. Forward Pass: input flows through the layers → prediction
2. Loss Calculation: compare prediction vs. actual (how wrong was it?)
   Loss = (predicted - actual)²  (MSE for regression)
3. Backward Pass (Backpropagation):
   The error propagates backward through the network.
   How responsible is each weight for the error? (gradient)
4. Weight Update (Gradient Descent):
   weight = weight - learning_rate × gradient
5. Repeat thousands of times → weights improve → better predictions
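The five steps above can be seen end to end on a toy problem: one weight, no hidden layers, MSE loss. The data and learning rate are chosen purely for illustration (the true relationship is y = 2x, so training should drive w toward 2):

```python
import numpy as np

# Toy data: y = 2x, so gradient descent should learn w ≈ 2.
X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * X

w = 0.0        # initial weight
lr = 0.05      # learning rate

for _ in range(100):
    pred = w * X                        # 1. forward pass
    loss = np.mean((pred - y) ** 2)     # 2. MSE loss calculation
    grad = np.mean(2 * (pred - y) * X)  # 3. gradient of the loss w.r.t. w
    w = w - lr * grad                   # 4. gradient descent update
                                        # 5. repeat

print(round(w, 3))   # → 2.0
```

Backpropagation in a real network does the same thing, except the chain rule is used to compute a gradient for every weight in every layer.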

Simple Neural Network with Keras

python
import tensorflow as tf
from tensorflow import keras

# Define the model (X_train/y_train are assumed to be prepared features and labels)
model = keras.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(10,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')  # binary output
])

# Compile
model.compile(
    optimizer='adam',
    loss='binary_crossentropy',
    metrics=['accuracy']
)

# Train
model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.2)

# Predict
predictions = model.predict(X_test)
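Because the output layer uses a sigmoid, `model.predict` returns probabilities, not class labels. A common convention is to threshold at 0.5; the array below stands in for real model output:

```python
import numpy as np

# Example sigmoid outputs in place of real model.predict results.
predictions = np.array([[0.91], [0.13], [0.57]])

# Threshold at 0.5 to get binary class labels.
classes = (predictions > 0.5).astype(int)
print(classes.ravel())   # → [1 0 1]
```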

Key Points to Remember

  • Neurons: weighted inputs → activation function → output
  • Layers: Input → Hidden(s) → Output
  • Backpropagation: the error propagates backward and the weights are updated
  • Learning rate: too large = unstable training, too small = slow training
  • Deep Learning = neural networks with many hidden layers