Question 6 (Advanced):
Which of the following attention mechanisms is used in transformers?
A) Hard Attention
B) Additive Attention
C) Self-Attention
D) Bahdanau Attention
#Transformers #NLP #DeepLearning #AttentionMechanism #AI
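For reference, the transformer's core mechanism is self-attention (option C): queries, keys, and values are all derived from the same sequence and scored with scaled dot products. Below is a minimal PyTorch sketch; the function name and toy dimensions are illustrative, not taken from any library.

```python
import torch
import torch.nn.functional as F

# Minimal single-head self-attention sketch (illustrative, not a library API).
# Q, K, and V all come from the same input x -- hence "self"-attention.
def self_attention(x, w_q, w_k, w_v):
    q = x @ w_q                                    # (seq_len, d_k) queries
    k = x @ w_k                                    # (seq_len, d_k) keys
    v = x @ w_v                                    # (seq_len, d_v) values
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # scaled dot-product scores
    weights = F.softmax(scores, dim=-1)            # attention weights, rows sum to 1
    return weights @ v                             # weighted sum of values

# Toy usage: a sequence of 4 tokens with embedding dimension 8.
torch.manual_seed(0)
x = torch.randn(4, 8)
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)      # torch.Size([4, 8])
```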
Question 10 (Advanced):
In the Transformer architecture (PyTorch), what is the purpose of masked multi-head attention in the decoder?
A) To prevent the model from peeking at future tokens during training
B) To reduce GPU memory usage
C) To handle variable-length input sequences
D) To normalize gradient updates
#Python #Transformers #DeepLearning #NLP #AI
✅ By: https://yangx.top/DataScienceQ
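A quick illustration of the masking in Question 10 (option A): the decoder applies a causal mask so position i can only attend to positions up to i, preventing it from peeking at future tokens during training. A minimal PyTorch sketch with toy values:

```python
import torch
import torch.nn.functional as F

# Causal (look-ahead) mask sketch: hide future positions before the softmax
# so the decoder cannot peek at tokens it has not generated yet.
seq_len = 5
scores = torch.randn(seq_len, seq_len)  # toy raw attention scores

# Upper-triangular boolean mask: True marks future positions (j > i).
mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
scores = scores.masked_fill(mask, float('-inf'))

weights = F.softmax(scores, dim=-1)
print(weights)  # row i has zero weight on every column j > i
```

PyTorch also ships torch.nn.Transformer.generate_square_subsequent_mask(seq_len), which builds the same mask with -inf above the diagonal.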
Question 11 (Expert):
In Vision Transformers (ViT), how are image patches typically converted into input tokens for the transformer encoder?
A) Raw pixel values are used directly
B) Each patch is flattened and linearly projected
C) Patches are processed through a CNN first
D) Edge detection is applied before projection
#Python #ViT #ComputerVision #DeepLearning #Transformers
✅ By: https://yangx.top/DataScienceQ
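To make the patch-embedding step in Question 11 concrete (option B): each non-overlapping patch is flattened into a vector and passed through a shared linear projection. A minimal PyTorch sketch; the sizes follow the common ViT-Base setup (16x16 patches, 768-dim embedding) but are illustrative.

```python
import torch
import torch.nn as nn

# Patch-embedding sketch: flatten each patch, then linearly project it.
img_size, patch_size, in_ch, embed_dim = 224, 16, 3, 768
num_patches = (img_size // patch_size) ** 2          # 14 * 14 = 196

proj = nn.Linear(in_ch * patch_size * patch_size, embed_dim)

x = torch.randn(1, in_ch, img_size, img_size)        # one RGB image
# Cut the image into non-overlapping 16x16 patches, then flatten each one.
patches = x.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, num_patches, -1)
tokens = proj(patches)                               # (1, 196, 768) patch tokens
print(tokens.shape)
```

In practice this is often implemented as a single nn.Conv2d with kernel_size and stride equal to patch_size, which is mathematically equivalent to flatten-plus-linear.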
Are you preparing for AI interviews, or do you want to test your knowledge of Vision Transformers (ViT)? This quiz covers:
Basic Concepts (Q1–Q15)
Architecture & Components (Q16–Q30)
Attention & Transformers (Q31–Q45)
Training & Optimization (Q46–Q55)
Advanced & Real-World Applications (Q56–Q65)
Answer Key & Explanations
#VisionTransformer #ViT #DeepLearning #ComputerVision #Transformers #AI #MachineLearning #MCQ #InterviewPrep
✉️ Our Telegram channels: https://yangx.top/addlist/0f6vfFbEMdAwODBk
📱 Our WhatsApp channel: https://whatsapp.com/channel/0029VaC7Weq29753hpcggW2A