🔥 SAMWISE: Infusing Wisdom in SAM2 for Text-Driven Video Segmentation has been accepted at #CVPR2025! 🎉
SAMWISE makes #SegmentAnything wiser by enabling it to understand text prompts, all with just 4.9M additional trainable parameters.
🚀 The new HQ-SAM (High-Quality Segment Anything Model) has just been added to the Hugging Face Transformers library!
This is an enhanced version of the original SAM (Segment Anything Model) introduced by Meta in 2023. HQ-SAM significantly improves the segmentation of fine and detailed objects, while preserving all the powerful features of SAM — including prompt-based interaction, fast inference, and strong zero-shot performance. That means you can easily switch to HQ-SAM wherever you used SAM!
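To illustrate the drop-in swap, here is a minimal sketch assuming the SAM-HQ integration in Transformers mirrors the documented SamModel/SamProcessor point-prompt API; the class names (SamHQModel, SamHQProcessor) and the syscv-community/sam-hq-vit-base checkpoint are assumptions to verify against the documentation link below.

```python
# Minimal sketch: swapping SAM for HQ-SAM in Transformers.
# Assumes SamHQModel/SamHQProcessor and the checkpoint name below
# match the released integration; verify against the docs.
import torch
import requests
from PIL import Image
from transformers import SamHQModel, SamHQProcessor  # previously: SamModel, SamProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamHQModel.from_pretrained("syscv-community/sam-hq-vit-base").to(device)
processor = SamHQProcessor.from_pretrained("syscv-community/sam-hq-vit-base")

# Any RGB image; here a sample fetched over HTTP.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Prompt-based interaction works as with SAM: a single 2D point prompt.
input_points = [[[450, 600]]]  # one point on the object of interest
inputs = processor(image, input_points=input_points, return_tensors="pt").to(device)

with torch.no_grad():
    outputs = model(**inputs)

# Upscale the low-resolution predicted masks back to the original image size.
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
print(masks[0].shape, outputs.iou_scores)
```

Because the interface is unchanged, existing SAM pipelines should only need the model class and checkpoint name swapped.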
The improvements come from just a few additional learnable parameters. The authors collected a high-quality dataset with 44,000 fine-grained masks from various sources, and impressively trained the model in just 4 hours using 8 GPUs — all while keeping the core SAM weights frozen.
The newly introduced parameters include:
* A High-Quality Token
* A Global-Local Feature Fusion mechanism
This work was presented at NeurIPS 2023 and still holds state-of-the-art performance in zero-shot segmentation on the SegInW (Segmentation in the Wild) benchmark.
📄 Documentation: https://lnkd.in/e5iDT6Tf
🧠 Model Access: https://lnkd.in/ehS6ZUyv
💻 Source Code: https://lnkd.in/eg5qiKC2
#ArtificialIntelligence #ComputerVision #Transformers #Segmentation #DeepLearning #PretrainedModels #ResearchAndDevelopment #AdvancedModels #ImageAnalysis #HQ_SAM #SegmentAnything #SAMmodel #ZeroShotSegmentation #NeurIPS2023 #AIresearch #FoundationModels #OpenSourceAI #SOTA
🌟https://yangx.top/DataScienceN