If the information is too new or too niche to be found through real-time web search, the second method, uploading knowledge files, is recommended. Sometimes an AI will claim to understand AABB while it actually does not (pretending to know). In such cases, you can ask it basic conceptual questions to verify its understanding.
== How to Solve AI Models Generating Incorrect Website Links? ==
📝 Question: I am currently using AI tools such as Grok, ChatGPT, and Perplexity to collect financial data and capital status for the top 10 global companies in specific industries and organize them into tables. I also need to attach the original website links as reference sources for the data. However, all three AI tools generate completely incorrect website links. Does anyone have a solution to stop these language models from continually producing incorrect reference links?
💬 Response: If you want to keep using the base models rather than switching to stronger reasoning models, you can ask the AI to add a preliminary step before it gives its conclusion: "Please extract and number the webpage passages relevant to the answer, then answer the question based only on those passages." This grounding step reduces the probability of hallucinations in less capable models.<ref>[https://docs.anthropic.com/en/docs/test-and-evaluate/strengthen-guardrails/reduce-hallucinations Reduce hallucinations - Anthropic]</ref><ref>[https://the-learning-agency.com/the-cutting-ed/article/hallucination-techniques/ Improving AI-Generated Responses: Techniques for Reducing Hallucinations - The Learning Agency]</ref><ref>[https://www.godofprompt.ai/blog/9-prompt-engineering-methods-to-reduce-hallucinations-proven-tips 9 Prompt Engineering Methods to Reduce Hallucinations (Proven Tips) - Workflows] "Step-Back Prompting is a technique where you ask the AI to review its previous response and make sure it is accurate."</ref>
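As a sketch, the two-step "extract passages first, then answer" prompt described above might be assembled like this. The function name, wording, and tag format are illustrative assumptions, not a fixed template from any particular vendor:

```python
def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Build a prompt that asks the model to quote sources before answering.

    Asking the model to first extract and number relevant passages, and then
    answer only from those numbered passages, makes fabricated links or
    figures easier to spot: any claim without a passage number is suspect.
    """
    # Wrap each source document in a numbered tag so the model can cite it.
    sources = "\n\n".join(
        f'<source id="{i}">\n{doc}\n</source>'
        for i, doc in enumerate(documents, 1)
    )
    return (
        f"{sources}\n\n"
        f"Question: {question}\n\n"
        "Step 1: Extract and number the exact passages from the sources above "
        "that are relevant to the question. If no passage is relevant, say so.\n"
        "Step 2: Answer the question using ONLY the numbered passages, citing "
        "their numbers. Do not add information that is not in those passages."
    )


# Usage: the resulting string is sent as the user message to whichever
# chat model you are using (Grok, ChatGPT, Perplexity, etc.).
prompt = build_grounded_prompt(
    "What was Example Corp's 2023 revenue?",
    ["Example Corp reported revenue of $1.2B for fiscal year 2023."],
)
print(prompt)
```

The key design choice is that the answer step is constrained to the passages produced in the extraction step, so a wrong reference link cannot silently enter the final answer without a corresponding numbered quote.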
== Related articles ==