2025.08.15 – AI-RELATED RISKS, ERRORS, AND SAFETY LESSONS FROM A MULTI-TOPIC ANALYSIS

CORE LEARNING OBJECTIVE
● Readers understand the real risks and limitations of current AI models, including factual errors, safety failures, and differences between model generations.
● Readers identify chemical, medical, and technical risks when AI is used for advice without professional oversight.
● Readers recognize the role of responsible query framing and critical verification when interacting with AI systems.

REPORTED INCIDENTS OF HARMFUL AI RECOMMENDATIONS
● Bromism is a toxic condition caused by prolonged exposure to bromide compounds.
● Sodium bromide (NaBr) was historically used in sedatives but is unsafe for food use.
● A published case in Annals of Internal Medicine Clinical Cases described a 60-year-old patient hospitalized after replacing table salt with sodium bromide for three months.
● Reported symptoms included paranoia, hallucinations, insomnia, and coordination issues. 🧠
● Additional physical effects included skin lesions, acne, and reddish nodules.
● Live Science reported that physicians replicated the query in GPT-3.5 and received sodium bromide as a suggested salt alternative without toxicity warnings.
● OpenAI publicly clarified that its AI services are not intended for diagnosis or treatment.
● Bromism was common in the late 19th and early 20th centuries due to bromide-containing medications.
● Modern toxicology confirms bromide interferes with laboratory chloride readings, causing diagnostic confusion. ⚗️
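The laboratory interference noted above can be shown with a short sketch (a hypothetical calculation with illustrative numbers, not values from the case report): many ion-selective electrode analyzers read bromide as chloride, so measured chloride is falsely elevated and the computed anion gap can fall to implausibly low or even negative values.

```python
def anion_gap(sodium: float, chloride: float, bicarbonate: float) -> float:
    """Anion gap in mEq/L: Na+ - (Cl- + HCO3-)."""
    return sodium - (chloride + bicarbonate)

# Normal labs: gap typically around 8-12 mEq/L.
normal = anion_gap(sodium=140, chloride=104, bicarbonate=24)   # 12

# Bromism: the analyzer misreads bromide as chloride, inflating the
# reported chloride value (illustrative numbers only).
bromism = anion_gap(sodium=140, chloride=150, bicarbonate=24)  # -34

print(normal, bromism)
```

A negative anion gap in an otherwise unexplained presentation is one of the classic laboratory clues that points clinicians toward bromide toxicity.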

SAFE ALTERNATIVES TO REDUCE DIETARY SALT
● Culinary safe options include fresh or dried herbs such as parsley, oregano, basil, thyme, and rosemary.
● Spice-based enhancers include pepper, cumin, turmeric, sweet paprika, and hot paprika.
● Citrus juice and zest from lemon or lime enhance flavor naturally. 🍋
● Garlic and onion can be used fresh, powdered, or roasted.
● Mild vinegars such as balsamic, apple cider, or wine vinegar add acidity.
● Commercial salt substitutes containing potassium chloride (KCl) must be used only with medical clearance.
● This approach avoids industrial chemicals like sodium bromide or magnesium sulfate, which are not approved for culinary use.

VERSION DIFFERENCES BETWEEN GPT-3.5 AND GPT-5
● GPT-3.5 could output unsafe chemical suggestions in response to culinary substitution questions.
● GPT-5 filters dangerous substances from culinary contexts and adds safety warnings where relevant.
● An example safe GPT-5 answer lists only herbs, spices, citrus, vinegar, and potassium chloride with caution. 🛡️
● This reflects improved internal filtering and health-safety integration.
● GPT-5 still may commit other types of factual or reasoning errors.

ERROR TYPES REPORTED FOR GPT-5
● Documented language errors include miscounting letters in simple words and inventing nonexistent geographic place names.
● Documented routing errors occur when the model selects an inappropriate internal sub-model for the query. 🗺️
● Residual hallucinations occur in roughly 1.4% of outputs, a lower rate than GPT-4o's but not zero.
● Simple arithmetic mistakes have been recorded, including incorrect decimal subtraction.
● In rare cases, AI has produced unfounded defamatory statements.

MODEL GENERATION PERFORMANCE COMPARISON
● GPT-5 may assume missing context instead of requesting clarifying details.
● GPT-10 (hypothetical) would prioritize asking for missing data before giving a definitive answer.
● GPT-5 might omit critical safety warnings if the risk detection fails.
● GPT-10 would perform an explicit risk scan for all technical or medical outputs.
● GPT-5 may deliver overlong or tangential responses. 📄
● GPT-10 would filter for concise, relevant information only.
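The "explicit risk scan" described above can be pictured with a minimal sketch. This is a hypothetical illustration, not an actual GPT implementation; the term list and the function name `risk_scan` are invented for the example.

```python
# Hypothetical pre-output safety scan: flag chemical terms that are
# unsafe in a culinary context before an answer reaches the user.
UNSAFE_CULINARY_TERMS = {
    "sodium bromide",
    "potassium bromide",
    "magnesium sulfate",
}

def risk_scan(answer: str, context: str) -> list[str]:
    """Return the unsafe terms found in a culinary-context answer."""
    if context != "culinary":
        return []
    lowered = answer.lower()
    return sorted(t for t in UNSAFE_CULINARY_TERMS if t in lowered)

flags = risk_scan("Try sodium bromide as a salt substitute.", "culinary")
print(flags)  # ['sodium bromide']
```

A real system would need far more than keyword matching (context, dosage, intent), but the sketch shows the principle: check the draft answer against known hazards before releasing it.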

ILLUSTRATIVE RISK SCENARIOS
● Chlorine dosing without pool volume and concentration data can lead to unsafe recommendations.
● Ibuprofen use without medical context may aggravate gastrointestinal issues. 💊
● Generator sizing without full appliance load data can result in underpowered or overpowered units.
● Salt substitution with toxic compounds like sodium bromide represents a high medical risk.
● Each scenario demonstrates the need for pre-output safety checks and contextual verification.
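The chlorine-dosing scenario illustrates why missing context matters: the required amount of product depends directly on pool volume and the target concentration in ppm (1 ppm ≈ 1 mg of available chlorine per liter of water). A hedged sketch of the underlying arithmetic follows; the numbers are illustrative only, not dosing advice.

```python
def chlorine_grams(volume_liters: float, target_ppm: float,
                   product_purity: float = 1.0) -> float:
    """Grams of product needed to raise free chlorine by target_ppm.

    1 ppm = 1 mg of available chlorine per liter of water.
    product_purity is the fraction of available chlorine in the product.
    """
    mg_needed = volume_liters * target_ppm
    return mg_needed / 1000.0 / product_purity

# 50,000 L pool, raise free chlorine by 2 ppm, using a product that is
# 65% available chlorine (e.g., calcium hypochlorite):
dose = chlorine_grams(50_000, 2.0, product_purity=0.65)
print(round(dose, 1))  # grams of product
```

Without the volume, the target ppm, and the product purity, the calculation is literally impossible, which is why an AI answer that provides a dose anyway should be treated as unsafe.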

FINAL KNOWLEDGE REINFORCEMENT
● AI systems can output unsafe or factually wrong information if the query is ambiguous or lacks safety filters.
● Certain earlier AI versions demonstrated higher risk in sensitive contexts like health and chemistry.
● Verification, professional consultation, and clear queries are essential for safe AI use. ✅
● Awareness of real-world error cases improves user ability to interpret AI responses critically.
● Readers should integrate factual checks into their AI interactions to prevent harm.
● The central learning goal is to equip readers with awareness of AI risk profiles and the human responsibility to validate outputs before acting. 📚

Published by Leonardo Tomás Cardillo

https://www.linkedin.com/in/leonardocardillo
