In NLP tasks, transformer models perform well in multiple tasks due to their self-attention mechanism and parallel computing capability. Which of the following statements about transformer models are true?
Correct Answer: A,B,C
Transformers are designed for sequence modeling without recurrence or convolution.
* A: True - self-attention captures global dependencies efficiently, outperforming RNNs/CNNs in long-text processing.
* B: True - multi-head attention computes multiple attention projections in parallel, capturing features in different representation subspaces.
* C: True - the architecture is purely attention-based, with no recurrent or convolutional layers.
* D: False - positional encoding is required because self-attention does not inherently encode sequence order.
Exact Extract from HCIP-AI EI Developer V2.5: "The Transformer uses self-attention to model dependencies and multi-head attention to capture features in different subspaces. Positional encoding must be added to preserve sequence order."
Reference: HCIP-AI EI Developer V2.5 Official Study Guide - Chapter: Transformer Architecture
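The rationale for option D can be demonstrated concretely. Below is a minimal NumPy sketch (not from the study guide; function names are illustrative) of scaled dot-product self-attention and sinusoidal positional encoding. It shows that without positional encoding, self-attention is permutation-equivariant: shuffling the input tokens simply shuffles the output rows the same way, so word order carries no signal. Adding positional encodings breaks this symmetry.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax (numerically stabilized).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def sinusoidal_positional_encoding(seq_len, d_model):
    # PE[pos, 2i]   = sin(pos / 10000^(2i/d_model))
    # PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))        # 4 toy token embeddings, d_model = 8
perm = np.array([2, 0, 3, 1])          # a reordering of the tokens

# Without positional encoding: permuting inputs just permutes outputs.
out = scaled_dot_product_attention(x, x, x)
out_perm = scaled_dot_product_attention(x[perm], x[perm], x[perm])
print(np.allclose(out_perm, out[perm]))   # order-invariant

# With positional encoding: position is baked in, so order now matters.
pe = sinusoidal_positional_encoding(4, 8)
out_pe = scaled_dot_product_attention(x + pe, x + pe, x + pe)
out_pe_perm = scaled_dot_product_attention(x[perm] + pe, x[perm] + pe, x[perm] + pe)
print(np.allclose(out_pe_perm, out_pe[perm]))  # no longer equivariant
```

Multi-head attention (option B) simply runs several such attention computations in parallel on learned low-dimensional projections of Q, K, and V, then concatenates the results.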