Question 8 (Advanced):
What is the time complexity of checking if an element exists in a Python set?
A) O(1)
B) O(n)
C) O(log n)
D) O(n^2)
#Python #DataStructures #TimeComplexity #Advanced
✅ By: https://yangx.top/DataScienceQ
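A minimal sketch (variable names are illustrative) of why sets win here: a set hashes the element, so the average-case membership test is O(1), while a list must scan linearly in O(n):

import timeit

data_list = list(range(100_000))
data_set = set(data_list)

# Membership test: the set hashes 99_999 directly; the list scans every element.
print(timeit.timeit(lambda: 99_999 in data_set, number=1_000))   # fast, ~O(1) per lookup
print(timeit.timeit(lambda: 99_999 in data_list, number=1_000))  # slow, ~O(n) per lookup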
Question 9 (Intermediate):
In SciPy, which function is used to solve ordinary differential equations (ODEs)?
A) scipy.optimize.minimize()
B) scipy.integrate.solve_ivp()
C) scipy.signal.lfilter()
D) scipy.linalg.solve()
#Python #SciPy #NumericalMethods #ODEs
✅ By: https://yangx.top/DataScienceQ
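For hands-on context, a minimal solve_ivp sketch (the decay equation and constants are illustrative) integrating dy/dt = -0.5y from y(0) = 2.0:

from scipy.integrate import solve_ivp
import numpy as np

# ODE right-hand side: dy/dt = -0.5 * y
def decay(t, y):
    return -0.5 * y

sol = solve_ivp(decay, t_span=(0, 10), y0=[2.0], t_eval=np.linspace(0, 10, 5))
print(sol.t)  # evaluation times
print(sol.y)  # solution values, shape (1, 5)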
Question 10 (Advanced):
In the Transformer architecture (PyTorch), what is the purpose of masked multi-head attention in the decoder?
A) To prevent the model from peeking at future tokens during training
B) To reduce GPU memory usage
C) To handle variable-length input sequences
D) To normalize gradient updates
#Python #Transformers #DeepLearning #NLP #AI
✅ By: https://yangx.top/DataScienceQ
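A minimal sketch of the causal (look-ahead) mask behind option A, using PyTorch's MultiheadAttention; the dimensions and dummy inputs are illustrative. Positions above the diagonal get -inf, so softmax assigns zero weight to future tokens:

import torch
import torch.nn as nn

seq_len = 5
# Causal mask: position i may attend only to positions <= i.
mask = torch.triu(torch.full((seq_len, seq_len), float('-inf')), diagonal=1)

attn = nn.MultiheadAttention(embed_dim=16, num_heads=4, batch_first=True)
x = torch.randn(2, seq_len, 16)  # (batch, seq, embed) dummy decoder input
out, weights = attn(x, x, x, attn_mask=mask)
print(weights[0])  # each row has zero weight on positions to its right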
Question 11 (Expert):
In Vision Transformers (ViT), how are image patches typically converted into input tokens for the transformer encoder?
A) Raw pixel values are used directly
B) Each patch is flattened and linearly projected
C) Patches are processed through a CNN first
D) Edge detection is applied before projection
#Python #ViT #ComputerVision #DeepLearning #Transformers
✅ By: https://yangx.top/DataScienceQ
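A minimal sketch of the patch-embedding step (ViT-Base-style sizes assumed; variable names are illustrative): each 16x16 patch is flattened into a vector and passed through a shared linear projection to form the input tokens:

import torch
import torch.nn as nn

img = torch.randn(1, 3, 224, 224)  # (batch, channels, H, W)
patch, dim = 16, 768               # 16x16 patches, ViT-Base embedding size

# Split the image into non-overlapping 16x16 patches, then flatten each one.
patches = img.unfold(2, patch, patch).unfold(3, patch, patch)  # (1, 3, 14, 14, 16, 16)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, -1, 3 * patch * patch)  # (1, 196, 768)

# A linear projection turns each flattened patch into a transformer token.
proj = nn.Linear(3 * patch * patch, dim)
tokens = proj(patches)
print(tokens.shape)  # torch.Size([1, 196, 768])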