📚 Transfer Learning for Rotary Machine Fault Diagnosis and Prognosis (2023)
1⃣ Join Channel Download:
https://yangx.top/+MhmkscCzIYQ2MmM8
2⃣ Download Book: https://yangx.top/c/1854405158/933
💬 Tags: #TransferLearning
👉 BEST DATA SCIENCE CHANNELS ON TELEGRAM 👈
# 📚 PyTorch Tutorial for Beginners - Part 3/6: Convolutional Neural Networks (CNNs) & Computer Vision
#PyTorch #DeepLearning #ComputerVision #CNNs #TransferLearning
Welcome to Part 3 of our PyTorch series! This comprehensive lesson dives deep into Convolutional Neural Networks (CNNs), the powerhouse behind modern computer vision applications. We'll cover architecture design, implementation tricks, transfer learning, and visualization techniques.
---
## 🔹 Introduction to CNNs
### Why CNNs for Images?
Traditional fully-connected networks (DNNs) scale poorly on images because:
- Parameter explosion: a 256x256 RGB image flattens to 196,608 input features, so even one modest hidden layer costs hundreds of millions of weights (see the sketch below)
- No spatial awareness: DNNs treat pixels as independent features and ignore local structure
- Translation variance: an object appearing at a new position must be re-learned from scratch
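To make the parameter explosion concrete, compare a single fully-connected layer against a convolutional layer on the same image (a minimal sketch; the layer widths are illustrative):

```python
import torch.nn as nn

# One hidden layer of 1,000 units on a flattened 256x256 RGB image
fc = nn.Linear(256 * 256 * 3, 1000)  # 196,608 input features
print(sum(p.numel() for p in fc.parameters()))  # 196,609,000 parameters

# 16 conv filters of 3x3 on the same image — cost is independent of image size
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
print(sum(p.numel() for p in conv.parameters()))  # 448 parameters
```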
### CNN Key Innovations
| Concept | Purpose |
|------------------------|----------------------------------------------------------------------------|
| Local Receptive Fields | Each unit processes a small region at a time (e.g., 3x3 windows) |
| Weight Sharing | The same filters are applied across the entire image (reduces parameters) |
| Hierarchical Features | Early layers detect edges → textures → object parts → whole objects |
---
## 🔹 Core CNN Components
### 1. Convolutional Layers
```python
import torch
import torch.nn as nn

# 2D convolution (for images)
conv = nn.Conv2d(
    in_channels=3,    # Input channels (RGB=3, grayscale=1)
    out_channels=16,  # Number of filters
    kernel_size=3,    # 3x3 filter
    stride=1,         # Filter movement step
    padding=1         # Preserves spatial dimensions (with stride=1)
)

# Shape transformation: (batch, channels, height, width)
x = torch.randn(32, 3, 64, 64)  # 32 RGB images of 64x64
print(conv(x).shape)            # → torch.Size([32, 16, 64, 64])
```
### 2. Pooling Layers
```python
# Max pooling (common for downsampling)
pool = nn.MaxPool2d(kernel_size=2, stride=2)
print(pool(conv(x)).shape)  # → torch.Size([32, 16, 32, 32])

# Adaptive pooling (useful for varying input sizes)
adaptive_pool = nn.AdaptiveAvgPool2d((7, 7))
print(adaptive_pool(x).shape)  # → torch.Size([32, 3, 7, 7])
```
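A quick way to see why adaptive pooling helps with varying input sizes: the same module maps different resolutions to one fixed output shape, so a fixed-size classifier head can follow it. A minimal check, reusing `adaptive_pool` from above:

```python
# Two different input resolutions → identical output shape
for size in (64, 97):
    imgs = torch.randn(8, 3, size, size)
    print(adaptive_pool(imgs).shape)  # → torch.Size([8, 3, 7, 7]) both times
```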
### 3. Normalization Layers
```python
# Batch Normalization: normalizes each channel across the batch
bn = nn.BatchNorm2d(16)  # num_features = out_channels of the preceding conv
x = conv(x)  # x is now (32, 16, 64, 64)
x = bn(x)

# Layer Normalization: normalizes each sample over the given shape
# (more common in NLP/sequence models)
ln = nn.LayerNorm([16, 64, 64])
```
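It's worth verifying what `BatchNorm2d` actually does: in training mode each channel is normalized to roughly zero mean and unit variance over the batch. A quick check on the `x` computed above:

```python
# Sanity check: per-channel statistics after batch norm (training mode)
print(x.mean(dim=(0, 2, 3)))  # ≈ 0 for each of the 16 channels
print(x.std(dim=(0, 2, 3)))   # ≈ 1 for each of the 16 channels
```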
### 4. Dropout
```python
# Spatial dropout: randomly zeroes entire feature maps (channels)
dropout = nn.Dropout2d(p=0.25)
```
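Like all dropout layers, `Dropout2d` is only active in training mode and becomes the identity in eval mode. A quick check using the `x` from above:

```python
dropout.train()  # dropout active (the default for a new module)
y = dropout(x)
# Per sample, ~25% of the 16 channels are zeroed; survivors are scaled by 1/(1-p)
print((y[0] == 0).flatten(1).all(dim=1))  # e.g. tensor([False, True, False, ...])

dropout.eval()   # identity at inference time
print(torch.equal(dropout(x), x))  # → True
```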
---
## 🔹 Building a CNN from Scratch
### Complete Architecture
```python
class CNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            # Block 1
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),
            # Block 2
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(2),
            # Block 3
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Linear(128 * 4 * 4, 512),  # 4x4 feature maps for 32x32 inputs (e.g., CIFAR-10)
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes)
        )

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)  # Flatten all dimensions except batch
        x = self.classifier(x)
        return x

# Usage
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = CNN().to(device)
print(model)
```
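To sanity-check the model end to end, here is a minimal single training step. This is a sketch: the random tensors below stand in for a real DataLoader batch of 32x32 RGB images (e.g., CIFAR-10, which matches the 128 * 4 * 4 classifier input above):

```python
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for a real DataLoader
images = torch.randn(32, 3, 32, 32, device=device)
labels = torch.randint(0, 10, (32,), device=device)

model.train()
optimizer.zero_grad()
outputs = model(images)            # → torch.Size([32, 10])
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
print(loss.item())
```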
### Shape Calculation Formula
For a layer with:
- Input size: (Hᵢₙ, Wᵢₙ)
- Kernel: K
- Padding: P
- Stride: S
Output dimensions:
Hₒᵤₜ = ⌊(Hᵢₙ + 2P - K)/S⌋ + 1
Wₒᵤₜ = ⌊(Wᵢₙ + 2P - K)/S⌋ + 1
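Worked example: the Conv2d used earlier had Hᵢₙ = 64, K = 3, P = 1, S = 1, giving ⌊(64 + 2·1 − 3)/1⌋ + 1 = 64, which is why the spatial size was preserved. The formula is easy to check against PyTorch itself; `conv_out` below is a small helper written just for this check:

```python
def conv_out(size, k, p, s):
    """Apply the output-size formula: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * p - k) // s + 1

print(conv_out(64, k=3, p=1, s=1))  # → 64 (padding=1 preserves size)
print(conv_out(64, k=2, p=0, s=2))  # → 32 (the MaxPool2d(2) case)

# Cross-check against an actual layer
layer = nn.Conv2d(3, 8, kernel_size=5, stride=2, padding=2)
print(layer(torch.randn(1, 3, 64, 64)).shape)  # → torch.Size([1, 8, 32, 32])
print(conv_out(64, k=5, p=2, s=2))             # → 32
```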
---
🌟 Vision Transformer (ViT) Tutorial – Part 3: Pretraining, Transfer Learning & Real-World Applications
Let's start: https://hackmd.io/@husseinsheikho/vit-3
✉️ Our Telegram channels: https://yangx.top/addlist/0f6vfFbEMdAwODBk
#VisionTransformer #TransferLearning #HuggingFace #ImageNet #FineTuning #AI #DeepLearning #ComputerVision #Transformers #ModelZoo
PyTorch Masterclass: Part 2 – Deep Learning for Computer Vision with PyTorch
Duration: ~60 minutes
Link: https://hackmd.io/@husseinsheikho/pytorch-2
#PyTorch #ComputerVision #CNN #DeepLearning #TransferLearning #CIFAR10 #ImageClassification #DataLoaders #Transforms #ResNet #EfficientNet #PyTorchVision #AI #MachineLearning #ConvolutionalNeuralNetworks #DataAugmentation #PretrainedModels
https://yangx.top/DataScienceM