
AI Problems: Solving AI Issues with Better Prompts

Rebekah Carter
Technology Journalist

Over the last couple of years, Artificial Intelligence has evolved from little more than a curiosity into a driving force in the business landscape. Generative AI solutions, powered by Large Language Models (LLMs), are having a resounding impact on how we work and create. 

These models can “understand” human input and generate output based on the information we give them. However, despite their seemingly endless power, AI solutions still have limitations. Even the most advanced models can’t think like human beings, which means their responses don’t always align with our intent. 

Often, the most significant “AI problems” we face, such as hallucinations and inaccurate responses, stem from the prompts we give these models.

Exploring the AI Prompting Problem

For an example of how prompt issues can lead to frustrating problems with generative AI, consider the “Lone Banana Problem”, identified by Daniel Hook in 2023. Using the generative AI tool Midjourney, Hook attempted to create an image of a single banana. 

Unfortunately, no matter what he did, the model generated nothing but bunches of bananas. The issue was further explored by the University of Sydney Business School, which published a paper on the nature of generative AI. 

The paper’s authors suggested that the issue stems from the fact that LLMs don’t encode knowledge in the same way humans do. For instance, if a model is trained on examples of bananas that most commonly occur in bunches, it will treat bunches as the natural way to depict a banana. This doesn’t just apply to image-generating AI tools, either. 

Tools like ChatGPT, and countless other generative models used by businesses, can deliver incorrect responses based either on their training or on the prompts they receive. 

What makes the issue complex is that it’s often difficult to determine whether the problem lies in the prompt you’re giving the AI bot or in the model itself. 

Solving AI Problems: The Step-by-Step Guide

Ultimately, the easiest way to determine whether an incorrect answer stems from your prompt strategy or from the model itself is experimentation. Here are a few key tips to help you solve common AI problems. 

Step 1: Check if You’re Following Prompting Best Practices

The first, and most obvious, step in addressing incorrect AI responses is to check your prompting strategy. The exact way to structure a prompt can vary depending on the model you’re using. Most tools come with an FAQ or guides that advise you on how to tailor your prompts to get the responses you want. 

However, for the most part, there are some key rules you should follow, such as:

  • Be as clear and descriptive as possible in your request. 
  • Keep the language succinct, to avoid overwhelming the model with too much data.
  • Be explicit about any constraints (e.g., “Write an email response that is 200 words long”).
  • Add valuable context (such as writing an email response to a sales query).
  • Avoid negatives (instead of saying “don’t write more than 200 words”, say “write 200 words”).

Another good strategy is to check your vocabulary. Complex or jargon-filled sentences that could be interpreted in multiple ways can skew your results. 
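As an illustrative sketch only (the `build_prompt` helper below is hypothetical, not part of any official tool), the best practices above can be expressed as a small function that assembles a prompt from a clear task, added context, and positively phrased constraints:

```python
# Hypothetical helper that assembles a prompt following the best
# practices above: a clear task, valuable context, and explicit,
# positively phrased constraints.

def build_prompt(task, context=None, constraints=None):
    """Assemble a structured prompt from a task description,
    optional context, and a list of explicit constraints."""
    parts = [task.strip()]
    if context:
        parts.append(f"Context: {context.strip()}")
    if constraints:
        # State constraints positively ("write 200 words")
        # rather than as negatives ("don't write more than 200 words").
        parts.append("Constraints: " + "; ".join(constraints))
    return "\n".join(parts)

prompt = build_prompt(
    "Write an email response to a sales query.",
    context="The customer asked about pricing for our accounting software.",
    constraints=["write 200 words", "use a friendly tone"],
)
print(prompt)
```

The structure keeps each element of the prompt on its own line, which makes it easy to spot a missing constraint or a vague task before sending it to a model.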

Step 2: Ask If You’re Using the Right Tool

Today, we have dozens of different AI models to explore for different use cases. They all have their own unique quirks, capabilities, and limitations. For instance, ChatGPT is excellent at responding to text prompts, but it can’t generate images as effectively as Midjourney. 

If your prompting strategy isn’t the problem, but you’re still getting the wrong output from your AI model, it might be time to look more closely at the capabilities of the model itself. Ask yourself what kind of data the model was trained on, and what its core use cases are. 

Consider experimenting with other models to see if you can achieve better outcomes. For instance, swap ChatGPT for Bard and see if you get the same results.

Step 3: Invest in More Iteration

Even the most advanced AI models don’t always deliver the ideal response to a question straight away. If you ask ChatGPT to “Write a sales script for a new accounting software designed for smaller businesses”, it might produce something relatively generic to begin with. 

To ensure you get the best results, you need to think about your interaction with these models as a conversation. Based on the response given, ask your model to adapt the tone for a more friendly and playful business personality. Consider asking it to add more details, with contextual information about what your solution does. 

Keep giving the model feedback, and asking it to make edits to its response based on the output you want. Don’t just give up after the first response. 
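The conversational, feedback-driven approach described above can be sketched as a running message history. This is a minimal illustration: the role/content message format mirrors the structure common to chat-based LLM APIs, and `ask_model` is a hypothetical stand-in for a real API call.

```python
# A minimal sketch of iterative prompting as a running conversation.
# `ask_model` is a hypothetical placeholder; a real implementation
# would send `messages` to an LLM API and return its reply.

def ask_model(messages):
    return "DRAFT RESPONSE"  # placeholder reply

def refine(history, feedback):
    """Append feedback to the conversation and request a revision."""
    history.append({"role": "user", "content": feedback})
    reply = ask_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# Initial request and first draft.
history = [{"role": "user", "content":
            "Write a sales script for a new accounting software "
            "designed for smaller businesses."}]
history.append({"role": "assistant", "content": ask_model(history)})

# Keep giving feedback rather than settling for the first draft.
refine(history, "Adapt the tone to be more friendly and playful.")
refine(history, "Add more detail about what the software does.")
```

Keeping the full history in each request is what lets the model build on its earlier answers instead of starting from scratch every time.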

Step 4: Consider the Obscurity of Your Request

Many leading generative AI models are trained on huge volumes of data. However, that doesn’t mean they know absolutely everything there is to know about the world. If you ask your AI model to write an article about “Air Cover”, it could produce all kinds of information about air cover insurance or military tactics. 

The model is unlikely to assume that you’re using a relatively old sales and marketing term that refers to “creating brand awareness” through traditional channels. 

If your request is a little obscure or uses terms that most people wouldn’t be familiar with in today’s world, then you’ll need to provide a lot more context. You’ll need to explain exactly what you mean by “air cover” with a complete definition for the model. 

Step 5: Check if Your Prompt Is Too Simple or Complicated

AI “prompt engineering” is both an art and a science. It requires users to find the perfect balance between providing too much information, and not enough. As mentioned above, to get an accurate response, you need to provide your model with context and detail. 

However, if your prompt runs longer than about four lines and includes a lot of complex information, there’s a greater chance the model will get confused about what it should actually respond to. 

At the same time, you still need to ensure your prompt has enough information included to give your model the direction it needs. If your question is too generic or simple, the AI model will default to giving you the most “likely” answer based on the training it has. Ultimately, you need to ensure your prompts are well-structured and detailed, but also concise. 
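As a rough illustration of the balance described above, you could screen prompts with a simple heuristic before sending them. The thresholds below are hypothetical, chosen only to mirror the “more than four lines” and “too generic” guidance in this section:

```python
# A rough, hypothetical heuristic for spotting prompts that are
# too sparse or too sprawling. Thresholds are illustrative only.

def prompt_feedback(prompt):
    words = prompt.split()
    lines = prompt.strip().splitlines()
    if len(words) < 5:
        return "too simple: add context and constraints"
    if len(lines) > 4:
        return "too complex: trim to the essential details"
    return "ok"

print(prompt_feedback("Write an email."))
print(prompt_feedback(
    "Write a 200-word reply to a sales query about pricing, "
    "using a friendly tone."
))
```

A check like this won’t catch every problem, but it makes the “detailed yet concise” rule concrete enough to apply consistently.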

Overcoming Generative AI Issues

There’s no denying that generative AI is a powerful resource in today’s world. It can help streamline your go-to-market strategy, assist with creating valuable content, and even help you respond to customer queries. But no matter how competent and powerful these models seem, it’s important to remember they have their limitations.

Sometimes, the problem will be with the direction you give your bot via your prompt. Other times, the issue will simply be that you’re using the wrong AI tool for the job. The key to success is making sure you audit your own process, while still being mindful of the constraints these models have. 
