# 📚 PyTorch Tutorial for Beginners - Part 1/6: Fundamentals & Tensors
#PyTorch #DeepLearning #MachineLearning #NeuralNetworks #Tensors

Welcome to Part 1 of our comprehensive PyTorch series! This beginner-friendly lesson covers core concepts, tensor operations, and your first neural network.

---

## 🔹 What is PyTorch?
PyTorch is an open-source deep learning framework developed by Facebook's AI Research Lab (FAIR). Key features:

✔️ Dynamic computation graphs (define-by-run)
✔️ GPU acceleration with CUDA
✔️ Pythonic syntax for intuitive coding
✔️ Automatic differentiation (autograd)
✔️ Rich ecosystem (TorchVision, TorchText, etc.)

```python
import torch

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
```


---

## 🔹 Tensors: The Building Blocks
Tensors are PyTorch's multi-dimensional arrays. They behave much like NumPy arrays, but add GPU support and automatic differentiation.

### 1. Creating Tensors
```python
# From a Python list
a = torch.tensor([1, 2, 3])               # 1D tensor (vector)

# 2D tensor (matrix)
b = torch.tensor([[1., 2.], [3., 4.]])

# Special tensors
zeros = torch.zeros(2, 3)                 # 2x3 matrix of zeros
ones = torch.ones_like(zeros)             # same shape as zeros, filled with 1s
rand = torch.rand(3, 3)                   # 3x3 matrix of uniform random values in [0, 1)
```
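
Since tensors mirror NumPy arrays, converting between the two is cheap. A quick sketch (note that on CPU the converted objects share memory, so an in-place change shows up on both sides):

```python
import numpy as np
import torch

arr = np.array([1.0, 2.0, 3.0])
t = torch.from_numpy(arr)    # NumPy -> tensor (shares memory on CPU)
back = t.numpy()             # tensor -> NumPy (also shares memory)

t[0] = 9.0                   # the change is visible in arr and back too
print(arr)                   # [9. 2. 3.]
```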


### 2. Tensor Attributes
```python
x = torch.rand(2, 3)
print(f"Shape: {x.shape}")        # torch.Size([2, 3])
print(f"Data type: {x.dtype}")    # torch.float32
print(f"Device: {x.device}")      # cpu or cuda:0
```


### 3. Moving Tensors to GPU
```python
if torch.cuda.is_available():
    x = x.to('cuda')                  # move to GPU
    print(f"Now on: {x.device}")      # cuda:0
```


---

## 🔹 Tensor Operations
### 1. Basic Math
```python
x = torch.tensor([1., 2., 3.])
y = torch.tensor([4., 5., 6.])

# Element-wise operations
add = x + y    # or torch.add(x, y)
sub = x - y
mul = x * y
div = x / y

# Matrix multiplication
mat1 = torch.rand(2, 3)
mat2 = torch.rand(3, 2)
matmul = torch.mm(mat1, mat2)    # or mat1 @ mat2
```
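
Element-wise operations also broadcast across compatible shapes, NumPy-style, so you rarely need to tile tensors by hand. A quick sketch:

```python
m = torch.rand(3, 4)
v = torch.rand(4)
print((m + v).shape)    # torch.Size([3, 4]) -- v is broadcast across the rows
```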


### 2. Reshaping Tensors
```python
x = torch.arange(6)          # [0, 1, 2, 3, 4, 5]
x_reshaped = x.view(2, 3)    # [[0, 1, 2], [3, 4, 5]]
x_flattened = x.flatten()    # back to 1D
```
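
One subtlety: `view()` never copies data, so it requires a compatible (contiguous) memory layout, while `reshape()` silently falls back to a copy when needed. A small sketch:

```python
x = torch.arange(6).view(2, 3)
t = x.t()               # transpose -> non-contiguous layout
flat = t.reshape(-1)    # works (copies under the hood)
# t.view(-1)            # would raise a RuntimeError on this non-contiguous tensor
```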


### 3. Indexing & Slicing
```python
x = torch.tensor([[1, 2], [3, 4], [5, 6]])
print(x[0, 1])    # 2 (first row, second column)
print(x[:, 0])    # tensor([1, 3, 5]) (all rows, first column)
```
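
Beyond integer indexing, boolean masks let you filter elements by condition. For example:

```python
x = torch.tensor([[1, 2], [3, 4], [5, 6]])
mask = x > 3
print(x[mask])    # tensor([4, 5, 6]) -- elements where the condition holds
```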


---

## 🔹 Autograd: Automatic Differentiation
PyTorch automatically computes gradients for tensors with `requires_grad=True`.

### 1. Basic Example
```python
x = torch.tensor(2.0, requires_grad=True)
y = x**2 + 3*x + 1
y.backward()     # compute gradients
print(x.grad)    # dy/dx = 2x + 3 → tensor(7.)
```
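
When you don't want gradient tracking (e.g. during evaluation), wrap the computation in `torch.no_grad()`. A quick sketch:

```python
x = torch.tensor(2.0, requires_grad=True)
with torch.no_grad():       # operations inside are not recorded for autograd
    y = x * 3
print(y.requires_grad)      # False
```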


### 2. Neural Network Context
```python
# Simple linear regression
w = torch.randn(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

# Forward pass
inputs = torch.tensor([[1.0], [2.0], [3.0]])
targets = torch.tensor([[2.0], [4.0], [6.0]])
predictions = inputs * w + b

# Loss and backward pass
loss = torch.mean((predictions - targets)**2)
loss.backward()    # computes dloss/dw and dloss/db

print(f"Gradient of w: {w.grad}")
print(f"Gradient of b: {b.grad}")
```


---

## 🔹 Your First Neural Network
Let's build a single-layer perceptron for binary classification.

### 1. Define the Model
```python
import torch.nn as nn

class Perceptron(nn.Module):
    def __init__(self, input_dim):
        super().__init__()
        self.linear = nn.Linear(input_dim, 1)    # 1 output neuron

    def forward(self, x):
        return torch.sigmoid(self.linear(x))     # sigmoid squashes the output to a probability

model = Perceptron(input_dim=2)
print(model)
```
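
Before training, it's worth sanity-checking the forward pass with a random batch (continuing from the model above):

```python
sample = torch.rand(4, 2)    # batch of 4 samples with 2 features each
probs = model(sample)        # forward pass -> probabilities in (0, 1)
print(probs.shape)           # torch.Size([4, 1])
```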


### 2. Synthetic Dataset
```python
# XOR-like dataset
X = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=torch.float32)
y = torch.tensor([[0], [1], [1], [0]], dtype=torch.float32)
```
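
Note that XOR is not linearly separable, so a single-layer perceptron cannot fit this dataset perfectly; it's the classic example of why multi-layer networks are needed. As a preview, a minimal training loop for this model might look like the following sketch (BCELoss and SGD are reasonable choices here, not the only ones):

```python
criterion = nn.BCELoss()                                   # binary cross-entropy on probabilities
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    optimizer.zero_grad()          # reset accumulated gradients
    preds = model(X)               # forward pass
    loss = criterion(preds, y)     # compare predictions to targets
    loss.backward()                # backpropagate
    optimizer.step()               # update the weights
```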