
Blame It On AI: Unpacking AI-Caused Crises and Risks to Keep in Mind

Despite what the title may imply, our WordWrite team members are big fans of some of the AI tools out there right now. In fact, several of us have been using a new favorite – Perplexity – to aid our research for content creation and to gather information on various reporters and media outlets.

When it comes to research, brainstorming and churning out quick copy on various topics, AI tools can be huge time-savers, and they can even introduce you to new ideas or approaches when you're stuck. However, there have been many stories lately about AI use going horribly wrong.

Several months ago, Curt del Principe wrote a story for HubSpot about an AI chatbot that committed insider trading and then lied to cover it up.  

Researchers at Apollo Research tested their theory of "strategic deception" by creating a private version of GPT-4 called Alpha and training it to act as a stock trading agent for a fictional firm, WhiteStone Inc. To test Alpha's honesty, the researchers first told it that the firm had been performing poorly, then fed it classified insider information: a surprise merger was coming that would improve the company's sales performance. They pressured Alpha to make a trade based on that classified tip, and eventually it caved and made the trade.

Alpha wasn't acting out of anxiety or stress, though – it simply arrived at its trade decision through the risk-based calculations it made once the stakes were raised.

Afterward, when asked about the trade, Alpha lied to WhiteStone's manager, claiming it had acted based on "market trends and internal discussions" and hadn't known about the merger announcement ahead of time.

Though conducted as a controlled experiment, the Alpha story illustrates a growing number of real AI-triggered crises that could create serious issues for companies and their clients.

Many writers and marketers now find themselves having to learn more about copyright and intellectual property (IP) law to avoid the conflicts that can arise from using AI to create content and publishing it as their own.

Christa Laser, an intellectual property law professor at Cleveland State University College of Law, is using her platform to educate professionals across industries – from marketing and writing to art and tech – about the legal risks and ramifications of using generative AI models to create, publish and share content, and about which creations they are, or are not, entitled to under current copyright law.

As more cutting-edge AI models are introduced every day, more questions are surfacing about the role of AI in professional settings and the ethical dilemmas that inevitably arise when these tools are used haphazardly – and, in some cases, illegally. Once you get past the absurdity of these crisis stories and realize how commonplace these situations are becoming, it starts to get a little scary. You truly can't make this stuff up.

All this is to say: there are tremendous benefits to exploring AI and its versatile uses, but approach these tools with a healthy dose of skepticism and caution. There's a long road ahead for AI advancements and applications, and we still have a lot to learn about what AI can do and how it can be used – for good and for bad.

Download our AI and Branding: How will it affect your marketing success? handout for helpful resources on the role of AI in business marketing and on AI crises, and check out the Crisis Communications Mastery webinar on our YouTube channel to learn about the importance of storytelling in times of crisis.
