LLMs Usage FAQ in Mandarin

📝 Question: I am currently using AI tools such as Grok, ChatGPT, and Perplexity to collect financial data and capitalization figures for the top 10 companies in a specific industry worldwide, and to organize them into tables. I also need to attach links to the original data sources as references. However, the website links produced by all three AI tools are incorrect. Does anyone have a solution that prevents these language models from repeatedly generating wrong reference links?


💬 Reply: If you want to keep using a standard model rather than a smarter reasoning model, you can ask the AI to add a preliminary step before it draws conclusions: "First extract, summarize, and number the passages from the relevant web pages that bear on the answer, then answer the question based on those numbered passages." This grounding step reduces the chance that a weaker model hallucinates.<ref>[https://docs.anthropic.com/en/docs/test-and-evaluate/strengthen-guardrails/reduce-hallucinations Reduce hallucinations - Anthropic]</ref><ref>[https://the-learning-agency.com/the-cutting-ed/article/hallucination-techniques/ Improving AI-Generated Responses: Techniques for Reducing Hallucinations - The Learning Agency]</ref><ref>[https://www.godofprompt.ai/blog/9-prompt-engineering-methods-to-reduce-hallucinations-proven-tips 9 Prompt Engineering Methods to Reduce Hallucinations (Proven Tips) - Workflows] "Step-Back Prompting is a technique where you ask the AI to review its previous response and make sure it is accurate."</ref>
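The quote-first workflow described above can be sketched as a plain prompt template. This is a minimal illustration only: the function name, the sample passage, and the exact wording are assumptions, not tied to any specific API of Grok, ChatGPT, or Perplexity.

```python
def build_grounded_prompt(question, passages):
    """Number each verbatim source excerpt, then instruct the model to
    answer only from those numbered excerpts and cite them by number."""
    numbered = "\n".join(
        f"[{i}] {text}" for i, text in enumerate(passages, start=1)
    )
    return (
        "Step 1: Below are excerpts copied verbatim from source web pages.\n"
        f"{numbered}\n\n"
        "Step 2: Answer the question using ONLY the excerpts above, and "
        "cite the excerpt numbers you relied on. If no excerpt supports "
        "an answer, say so instead of guessing.\n\n"
        f"Question: {question}"
    )

# Hypothetical passage and question for illustration.
prompt = build_grounded_prompt(
    "What was Company A's 2023 revenue?",
    ["Company A reported 2023 revenue of $12.3B (annual report, p. 4)."],
)
print(prompt)
```

Because the model is told to cite excerpt numbers rather than invent URLs, you can attach the real source links yourself by mapping each cited number back to the page it was copied from.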


== Related articles ==
