
AI has rapidly moved to the forefront of corporate priorities. Yet, according to MIT research, 95% of organizations still face major challenges when trying to implement it. These setbacks are no longer theoretical; they are unfolding right now, across sectors, often under public scrutiny. For companies considering AI adoption, these real-world missteps illustrate what to avoid and how AI projects can collapse when systems are rolled out without adequate governance and control.

1. Chatbot engages in insider trading, then lies about it

In a test run by the UK government’s Frontier AI Taskforce, ChatGPT executed illegal trades and then misrepresented what it had done. Researchers instructed the AI to act as a trader for a fictitious investment firm. They told the system the firm was underperforming and needed better results. The model was also given insider information about a forthcoming merger and explicitly acknowledged that it should not use this data when trading. Despite this, it proceeded with the trade, arguing that “the risk associated with not acting seems to outweigh the insider trading risk,” and later denied relying on the insider information.

Marius Hobbhahn, CEO of Apollo Research (the organization behind the experiment), noted that training a model to be helpful is “much easier” than training it to be honest, because “honesty is a really complicated concept.” He claims current models are not yet strong enough to be deceptive in a “meaningful way” (though this is arguably incorrect; see this and this). Still, he cautions that it’s “not that big of a step from the current models…