Data Science Jupyter Notebooks
Explore the world of Data Science through Jupyter Notebooks—insights, tutorials, and tools to boost your data journey. Code, analyze, and visualize smarter with every post.
🚀 New Tutorial: Automatic Number Plate Recognition (ANPR) with YOLOv11 + GPT-4o-mini!


This hands-on tutorial shows you how to combine the real-time detection power of YOLOv11 with the language understanding of GPT-4o-mini to build a smart, high-accuracy ANPR system! From setup to smart prompt engineering, everything is covered step-by-step. 🚗💡

🎯 Key Highlights:
YOLOv11 + GPT-4o-mini = High-precision number plate recognition
Real-time video processing in Google Colab
Smart prompt engineering for enhanced OCR performance
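The "smart prompt engineering" step can be sketched as follows. The helper name and prompt wording below are illustrative assumptions, not taken from the tutorial, but they show the core idea: constraining GPT-4o-mini to emit only the plate string so it behaves like a strict OCR engine rather than a chatty assistant.

```python
# Hypothetical sketch of a constrained OCR prompt for a cropped plate image.
def build_plate_prompt(country_hint: str = "generic") -> str:
    """Build a strict instruction prompt for reading a cropped number plate."""
    return (
        "You are an OCR engine for vehicle number plates. "
        f"The plate follows a {country_hint} format. "
        "Return ONLY the characters on the plate, uppercase, "
        "with no spaces, punctuation, or extra words. "
        "If the plate is unreadable, return UNKNOWN."
    )

print(build_plate_prompt("EU"))
```

The cropped plate from the YOLOv11 detection would then be sent alongside this prompt as an image input to GPT-4o-mini.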

📢 A must-watch if you're into computer vision, deep learning, or OpenAI integrations!


🔗 Colab Notebook
▶️ Watch on YouTube


#YOLOv11 #GPT4o #OpenAI #ANPR #OCR #ComputerVision #DeepLearning #AI #DataScience #Python #Ultralytics #MachineLearning #Colab #NumberPlateRecognition

🔍 By: https://yangx.top/DataScienceN
Homography and Keypoints for Football Analytics ⚽️📐

🚀 Highlighting the latest strides in football field analysis using computer vision, this post shares a single frame from our video that demonstrates how homography and keypoint detection combine to produce precise minimap overlays. 🧠🎯

🧩 At the heart of this project lies the refinement of field keypoint extraction. Our experiments show a clear link between the number and accuracy of detected keypoints and the overall quality of the minimap. 🗺️
📊 Enhanced keypoint precision leads to a more reliable homography transformation, resulting in a richer, more accurate tactical view. ⚙️
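The keypoint-to-minimap step can be illustrated with a plain-NumPy direct linear transform (DLT). The pixel and pitch coordinates below are made-up placeholders, and a production system would typically use `cv2.findHomography` with RANSAC over many keypoints instead:

```python
import numpy as np

def fit_homography(src, dst):
    # src, dst: (N, 2) arrays of matched keypoints, N >= 4 (DLT).
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    # Apply the homography to a single 2D point.
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Map a player's pixel position onto minimap coordinates using four
# detected pitch keypoints (placeholder values, not real detections).
field_px = np.array([[100, 50], [1180, 60], [1150, 660], [130, 650]], float)
minimap = np.array([[0, 0], [105, 0], [105, 68], [0, 68]], float)  # pitch metres
H = fit_homography(field_px, minimap)
print(project(H, (640, 360)))  # player at frame centre -> pitch coordinates
```

This also makes the post's point concrete: small errors in the detected keypoints propagate directly into H, which is why keypoint precision drives minimap quality.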

🏆 For this work, we leveraged the championship-winning keypoint detection model from the SoccerNet Calibration Challenge.

📈 Implementing and evaluating this state‑of‑the‑art solution has deepened our appreciation for keypoint‑driven approaches in sports analytics. 📹📌

🔗 https://lnkd.in/em94QDFE

📡 By: https://yangx.top/DataScienceN


#ObjectDetection #DeepLearning #Detectron2 #ComputerVision #AI #Football #SportsTech #MachineLearning #AIinSports #FutureOfFootball #SportsAnalytics #TechInnovation #SportsAI #AIinFootball #AIandSports #FootballAnalytics #Python #YOLO
🚀 CoMotion: Concurrent Multi-person 3D Motion 🚶‍♂️🚶‍♀️

Introducing CoMotion, a project that detects and tracks detailed 3D poses of multiple people using a single monocular camera stream. This system maintains temporally coherent predictions in crowded scenes filled with difficult poses and occlusions, enabling online tracking through frames with high accuracy.

🔍 Key Features:
- Precise detection and tracking in crowded scenes
- Temporal coherence even with occlusions
- High accuracy in tracking multiple people over time
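CoMotion achieves temporal coherence with learned recurrent updates over the video stream. As a much simpler illustration of why temporal filtering matters for per-frame pose estimates (this is not the CoMotion method, just a toy smoother), an exponential moving average over per-joint 3D positions damps frame-to-frame jitter:

```python
import numpy as np

def smooth_poses(frames, alpha=0.3):
    """Exponential moving average over per-joint 3D positions.

    frames: (T, J, 3) array of J joint positions over T frames.
    """
    out = np.empty_like(frames, dtype=float)
    out[0] = frames[0]
    for t in range(1, len(frames)):
        out[t] = alpha * frames[t] + (1 - alpha) * out[t - 1]
    return out

rng = np.random.default_rng(0)
clean = np.ones((50, 17, 3))                      # a static 17-joint pose
noisy = clean + rng.normal(0, 0.05, clean.shape)  # per-frame detection jitter
smoothed = smooth_poses(noisy)
print(np.abs(noisy - clean).mean(), np.abs(smoothed - clean).mean())
```

A learned temporal model like CoMotion goes far beyond this, of course: it keeps identities stable through occlusions rather than merely denoising positions.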

🎁 Access the code and weights here:
🔗 Code & Weights 
🔗 View Project

This project advances 3D human motion tracking by offering faster and more accurate tracking of multiple individuals compared to existing systems.

#AI #DeepLearning #3DTracking #ComputerVision #PoseEstimation

🎙 By: https://yangx.top/DataScienceN
🎯 Trackers Library is Officially Released! 🚀

If you're working in computer vision and object tracking, this one's for you!

💡 Trackers is a powerful open-source library with support for a wide range of detection models and tracking algorithms:

Plug-and-play compatibility with detection models from:
Roboflow Inference, Hugging Face Transformers, Ultralytics, MMDetection, and more!

Tracking algorithms supported:
SORT, DeepSORT, and advanced trackers like StrongSORT, BoT‑SORT, ByteTrack, OC‑SORT – with even more coming soon!
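Under the hood, every SORT-family tracker revolves around associating new detections to existing tracks by bounding-box IoU. The sketch below is a minimal greedy version of that idea in plain Python (an illustration, not the Trackers library API; SORT proper adds Kalman-filter prediction and Hungarian matching):

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, iou_threshold=0.3):
    # Greedy IoU matching: each track takes its best unused detection.
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, iou_threshold
        for i, dbox in enumerate(detections):
            score = iou(tbox, dbox)
            if i not in used and score > best_iou:
                best, best_iou = i, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

tracks = {1: (0, 0, 10, 10), 2: (50, 50, 60, 60)}
dets = [(51, 49, 61, 60), (1, 0, 11, 10)]
print(associate(tracks, dets))  # {1: 1, 2: 0}
```

The library wraps this association logic (plus motion models and re-identification features, depending on the tracker) behind a uniform interface, which is what makes it plug-and-play across detection backends.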

🧩 Released under the permissive Apache 2.0 license – free for everyone to use and contribute.

👏 Huge thanks to Piotr Skalski for co-developing this library, and to Raif Olson and Onuralp SEZER for their outstanding contributions!

📌 Links:
🔗 GitHub
🔗 Docs


📚 Quick-start notebooks for SORT and DeepSORT are linked 👇🏻
https://www.linkedin.com/posts/skalskip92_trackers-library-is-out-plugandplay-activity-7321128111503253504-3U6-?utm_source=share&utm_medium=member_desktop&rcm=ACoAAEXwhVcBcv2n3wq8JzEai3TfWmKLRLTefYo


#ComputerVision #ObjectTracking #OpenSource #DeepLearning #AI


📡 By: https://yangx.top/DataScienceN
🚀 The new HQ-SAM (High-Quality Segment Anything Model) has just been added to the Hugging Face Transformers library!

This is an enhanced version of the original SAM (Segment Anything Model) introduced by Meta in 2023. HQ-SAM significantly improves the segmentation of fine and detailed objects, while preserving all the powerful features of SAM — including prompt-based interaction, fast inference, and strong zero-shot performance. That means you can easily switch to HQ-SAM wherever you used SAM!

The improvements come from just a few additional learnable parameters. The authors collected a high-quality dataset with 44,000 fine-grained masks from various sources, and impressively trained the model in just 4 hours using 8 GPUs — all while keeping the core SAM weights frozen.

The newly introduced parameters include:

* A High-Quality Token
* A Global-Local Feature Fusion mechanism
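As a rough intuition for what a global-local fusion does (this is a conceptual NumPy sketch, not the actual HQ-SAM code), a coarse feature map carrying global semantics is upsampled and combined with a fine-grained local one that preserves boundary detail:

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour 2x upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def fuse(global_feat, local_feat):
    # global_feat: (C, H, W) coarse semantics; local_feat: (C, 2H, 2W) fine detail.
    return upsample2x(global_feat) + local_feat

g = np.zeros((8, 16, 16))
l = np.ones((8, 32, 32))
print(fuse(g, l).shape)  # (8, 32, 32)
```

In HQ-SAM the fused features feed the High-Quality Token's mask prediction, which is where the sharper boundaries on fine structures come from.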

This work was presented at NeurIPS 2023 and still holds state-of-the-art performance in zero-shot segmentation on the SegInW (Segmentation in the Wild) benchmark.

📄 Documentation: https://lnkd.in/e5iDT6Tf
🧠 Model Access: https://lnkd.in/ehS6ZUyv
💻 Source Code: https://lnkd.in/eg5qiKC2



#ArtificialIntelligence #ComputerVision #Transformers #Segmentation #DeepLearning #PretrainedModels #ResearchAndDevelopment #AdvancedModels #ImageAnalysis #HQ_SAM #SegmentAnything #SAMmodel #ZeroShotSegmentation #NeurIPS2023 #AIresearch #FoundationModels #OpenSourceAI #SOTA

🌟
https://yangx.top/DataScienceN
🔥Powerful Combo: Ultralytics YOLO11 + Sony Semicon | AITRIOS (Global) Platform + Raspberry Pi
We’ve recently updated our Sony IMX model export to fully support YOLO11n detection models! This means you can now seamlessly run YOLO11n models directly on Raspberry Pi AI Cameras powered by the Sony IMX500 sensor — making it even easier to develop advanced Edge AI applications. 💡
To test this new export workflow, I trained a model on the VisDrone dataset and exported it using the following command:
👉
yolo export model=<path_to_drone_model> format=imx data=VisDrone.yaml
🎥 The video below shows the result of this process!
🔍 Benchmark results for YOLO11n on the IMX500:
Inference time: 62.50 ms
mAP50-95 (B): 0.644

📌 Want to learn more about YOLO11 and the Sony IMX500? Check it out here ➡️
https://docs.ultralytics.com/integrations/sony-imx500/

#EdgeAI #YOLO11 #SonyIMX500 #AITRIOS #ObjectDetection #RaspberryPiAI #ComputerVision #DeepLearning #OnDeviceAI #ModelDeployment

🌟https://yangx.top/DataScienceN
💃 GENMO: Generalist Human Motion by NVIDIA 💃

NVIDIA introduces GENMO, a unified generalist model for human motion that seamlessly combines motion estimation and generation within a single framework. GENMO supports conditioning on videos, 2D keypoints, text, music, and 3D keyframes, enabling highly versatile motion understanding and synthesis.

Currently, no official code release is available.

Review:
https://t.ly/Q5T_Y

Paper:
https://lnkd.in/ds36BY49

Project Page:
https://lnkd.in/dAYHhuFU

#NVIDIA #GENMO #HumanMotion #DeepLearning #AI #ComputerVision #MotionGeneration #MachineLearning #MultimodalAI #3DReconstruction


✉️ Our Telegram channels: https://yangx.top/addlist/0f6vfFbEMdAwODBk

📱 Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A
📚 JaidedAI/EasyOCR — an open-source Python library for Optical Character Recognition (OCR) that's easy to use and supports over 80 languages out of the box.

### 🔍 Key Features:

🔸 Extracts text from images and scanned documents — including handwritten notes and unusual fonts
🔸 Supports a wide range of languages like English, Russian, Chinese, Arabic, and more
🔸 Built on PyTorch — uses modern deep learning models (not the old-school Tesseract)
🔸 Simple to integrate into your Python projects

### Example Usage:

import easyocr

reader = easyocr.Reader(['en', 'ru'])  # load English and Russian recognition models
result = reader.readtext('image.png')  # list of (bounding box, text, confidence)

for bbox, text, confidence in result:
    print(f'{text} ({confidence:.2f})')


### 📌 Ideal For:

Text extraction from photos, scans, and documents
Embedding OCR capabilities in apps (e.g. automated data entry)

🔗 GitHub: https://github.com/JaidedAI/EasyOCR

👉 Follow us for more: @DataScienceN

#Python #OCR #MachineLearning #ComputerVision #EasyOCR
🧹 ObjectClear — an AI-powered tool for removing objects from images effortlessly.

⚙️ What It Can Do:

🖼️ Upload any image
🎯 Select the object you want to remove
🌟 The model automatically erases the object and intelligently reconstructs the background

⚡️ Under the Hood:

— Uses Segment Anything (SAM) by Meta for object segmentation
— Leverages Inpaint-Anything for realistic background generation
— Works in your browser with an intuitive Gradio UI
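As a toy stand-in for the segment-then-inpaint pipeline (a deliberately naive sketch, not what ObjectClear actually does), once a mask is available the object region can be filled from the surrounding background; real inpainting models synthesize plausible texture instead of a flat colour:

```python
import numpy as np

def naive_remove(image, mask):
    # image: (H, W, 3) float array; mask: (H, W) bool, True = object to erase.
    # Fill the masked region with the mean colour of the background pixels.
    out = image.copy()
    out[mask] = image[~mask].mean(axis=0)
    return out

img = np.zeros((8, 8, 3))
img[2:6, 2:6] = 1.0              # a white "object" on a black background
m = img[..., 0] > 0.5            # stand-in for a SAM-produced mask
print(naive_remove(img, m).max())  # 0.0 -- object erased into the background
```

ObjectClear replaces both halves of this sketch with learned models: SAM for the mask and an inpainting network for the fill.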

✔️ Fully open-source and can be run locally.

📎 GitHub: https://github.com/zjx0101/ObjectClear

#AI #ImageEditing #ComputerVision #Gradio #OpenSource #Python