You want to assign urgency and sentiment categories to a large number of customer emails. You want to get a valid JSON string output for creating custom applications. You decide to develop a prompt for this using generative AI hub. What is the main purpose of the following code in this context?

prompt_test = """Your task is to extract and categorize messages. Here are some examples:
{{?technique_examples}}
Use the examples when you extract and categorize the following message:
{{?input}}
Extract and return a JSON with the following keys and values:
- "urgency" as one of {{?urgency}}
- "sentiment" as one of {{?sentiment}}
- "categories" list of the best matching support category tags from: {{?categories}}
Your complete message should be a valid JSON string that can be read directly and only contains the keys mentioned in t"""  # prompt text truncated in source

import random
random.seed(42)
k = 3
examples = random.sample(dev_set, k)

example_template = """<example>
{example_input}"""  # remainder of the template truncated in source

examples = '\n---\n'.join([example_template.format(example_input=example["message"],
                                                   example_output=json.dumps(example[  # truncated in source
                           )])

f_test = partial(send_request, prompt=prompt_test, technique_examples=examples, **option_lists)
response = f_test(input=mail["message"])
Correct Answer: B
The provided code is designed to evaluate the performance of a language model in assigning urgency and sentiment categories to customer emails by utilizing few-shot learning within SAP's Generative AI Hub.

1. Few-Shot Learning in Prompt Engineering:
* Definition: Few-shot learning involves providing a language model with a limited number of examples to enable it to perform a specific task effectively. In this context, the model is given a few examples of categorized messages to learn how to assign urgency and sentiment to new, unseen emails.

2. Code Functionality:
* Prompt Template Creation: The prompt_test variable defines a template that instructs the model to extract and categorize messages, specifying the desired output format as a JSON string.
* Example Selection: The code randomly selects a subset of examples from a development set (dev_set) to include in the prompt, demonstrating the expected input-output pairs to the model.
* Model Interaction: The function f_test sends the constructed prompt, along with the input message, to the language model for processing.
* Response Handling: The model's response is expected to be a JSON string containing the assigned urgency, sentiment, and categories for the input message.

3. Purpose of the Code:
* Performance Evaluation: By using few-shot learning, the code evaluates how well the language model can generalize from the provided examples to accurately categorize new customer emails. This approach assesses the model's ability to understand and apply the categorization criteria based on minimal training data.
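The few-shot assembly described above can be sketched in plain Python, independent of any SDK. Everything here is illustrative: the dev_set records, the send_request stand-in, and the simplified placeholder syntax ({technique_examples} instead of the hub's {{?technique_examples}}) are assumptions, not the actual Generative AI Hub API.

```python
import json
import random
from functools import partial

# Hypothetical development set standing in for dev_set in the question.
dev_set = [
    {"message": "My invoice is wrong, please fix ASAP!",
     "answer": {"urgency": "high", "sentiment": "negative", "categories": ["billing"]}},
    {"message": "Thanks for the quick help yesterday.",
     "answer": {"urgency": "low", "sentiment": "positive", "categories": ["support"]}},
    {"message": "How do I reset my password?",
     "answer": {"urgency": "medium", "sentiment": "neutral", "categories": ["account"]}},
]

# Simplified prompt template (plain str.format placeholders for illustration).
prompt_test = """Your task is to extract and categorize messages. Here are some examples:
{technique_examples}
Use the examples when you extract and categorize the following message:
{input}
Return a JSON object with keys "urgency", "sentiment", and "categories"."""

# Randomly sample k few-shot examples, as in the question's code.
random.seed(42)
k = 2
sampled = random.sample(dev_set, k)

example_template = """<example>
{example_input}
{example_output}
</example>"""

# Render each example as an input/expected-output pair, separated by '---'.
examples = "\n---\n".join(
    example_template.format(example_input=ex["message"],
                            example_output=json.dumps(ex["answer"]))
    for ex in sampled
)

def send_request(*, prompt, technique_examples, input):
    # Stand-in for the model call: just assembles the final prompt string.
    return prompt.format(technique_examples=technique_examples, input=input)

# Pre-bind the static parts of the request, as partial() does in the question.
f_test = partial(send_request, prompt=prompt_test, technique_examples=examples)
final_prompt = f_test(input="Where is my order?")
print(final_prompt)
```

Binding the prompt and examples once with functools.partial mirrors the question's pattern: only the new customer message varies per call, so each email is categorized against the same fixed few-shot context.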