If you are trained as an architect or designer, you know the feeling of never-ending decision fatigue that comes with design choices. The same goes for crafting an email, where nuances in wording convey subtleties in meaning. When generative AI tools such as ChatGPT or DALL-E came along, it became easy to type in a prompt and leave the rest to “fate.” You are either wildly surprised and in awe of what the model produced or extremely disappointed that it didn’t understand your intentions.

As the digital era progresses, businesses across industries are continually exploring ways to harness the power of artificial intelligence (AI) to enhance their processes and outputs, whether that’s a business plan, a short movie trailer, or a sales email. By design, generative AI can create new content from scratch (written text, images, music, or any other form of digital content) while mirroring the style, context, and nuances of its training data, which can seem magical.

However, within this evolving landscape, it's essential to understand the intricacies of Generative AI and its role in shaping our perception of artificial intelligence outputs. At the heart of this is a phenomenon known as the 'Illusion of Completeness.'

The 'Illusion of Completeness' refers to the perceived accuracy or wholeness of AI-generated content, which might seem perfect to the untrained eye but may not exactly align with the specific requirements, subtleties, or context intended by the user. This illusion is influenced by various factors, including our brain's neurology, cognitive biases, and even our intrinsic appreciation for beauty. To understand this phenomenon, let's delve into some of the factors contributing to the Illusion of Completeness in Generative AI:

Value-to-Effort Ratio: 

In this context, the value-to-effort ratio refers to the disproportionate relationship between the minimal amount of user input and the substantial amount of AI output. Generative AI can produce large volumes of content, or highly complex outputs, from just a short prompt or a handful of parameters. Years ago, a speed-drawing challenge asked artists to sketch the same drawing in 10 minutes, 1 minute, and 10 seconds. With DALL-E, any drawing or image can be produced in about 20 seconds, the time it takes to generate a batch of 10 images.


Example of an eye drawing in 10 minutes, 1 minute, and 10 seconds (by rachelthellama).

Left: DALL-E 2 prompt (good prompting): “3d realistic render, maya, ambient studio light, splash page image, sci-fi, futurism, greenery, aerial view, a city of bikes, scooters, pedestrians friendly city”
Right: DALL-E 2 prompt (bad prompting): “Future of mobility workshop and symposium poster without text”

The image on the left could be categorized as low effort, high value because it is perceived as more coherent and labor-intensive. The image on the right is low effort, low value: we are better at recognizing errors in text, just as we immediately notice that the famous AI-generated six-fingered hand is not a hand we know. From a psychological standpoint, this abundance of output against minimal input can heighten our perception of completeness. It is seemingly the opposite of the effort-justification effect in cognitive dissonance theory: the individual experiences wonder despite the insignificance of their own input.

*Effort justification is a phenomenon whereby people come to evaluate a particular task or activity more favorably when it involves something that is difficult or unpleasant (APA).

Perceived Coherence: 

While generative AI models like ChatGPT and DALL-E can produce impressive outputs, they lack true comprehension and contextual understanding. Despite these limitations, they can generate outputs that seem internally coherent, visually complete, and contextually appropriate at times, yet fall short in other instances.

Often, the perception of coherence stems from the ambiguity and brevity of both the user’s input and the AI-generated output. Prompts provided to generative AI models can be concise and open-ended, leaving room for interpretation, yet users may assume that the AI fully understands their intentions and will generate outputs aligned with their expectations.

Here’s an example: 

In this example, the response attempts to convey coherence and relevance to the topic of AI's societal impacts. However, the content lacks depth, true understanding, and specific examples to substantiate the claims.

Example of ChatGPT output, achieving the illusion of completeness.

Example of how bolding and similar sentence lengths achieve the illusion of completeness.

Fill in the _______

From a neuroscientific viewpoint, our brains are naturally wired to fill in missing information; this survival-oriented mechanism helps us interpret the world around us. A well-known example is the phenomenon called “filling in,” in which the brain completes the missing information in a person’s visual blind spot. Reality is a construction of the brain, and the brain evolved for survival, not accuracy. Consequently, when we examine an AI output, our brain instinctively completes any apparent gaps, making the result seem “whole” even when it lacks certain aspects. Often, it is simply easier to believe that AI can do more than it actually can.

Emotional Attachment: 

When users witness generative AI producing something remarkable and aligned with their desires, they may develop an emotional attachment to the output. Specifically, the effort the user exerted in crafting the prompt that led to the generated output can create a feeling of “ownership” over that output, whether it is a text, an animation, or an image. This emotional response further reinforces the belief that the AI has comprehensively grasped their intent.

Confirmation Bias: 

Confirmation bias is a cognitive bias that affects our interaction with AI outputs: it causes us to process new information in a way that affirms our existing beliefs or expectations. If the AI’s output is somewhat aligned with what we expect, our brain is inclined to view it as more precise than it actually is. The certainty with which ChatGPT generates false information, for example, can trick many users. Conversely, if the output is not what we expected, we may dismiss it and give it less weight, or keep editing the prompt until the output coincides with our preexisting beliefs and expectations.

Summary: Illusion of Completeness in Generative AI

These cognitive biases are more complex than described in this article; what is presented here is merely a glimpse into why AI feels magical. Is it too good to be true? How can individuals and teams mitigate the Illusion of Completeness in Generative AI and think critically about how they use it?

  1. Promote Awareness: Users and consumers of generative AI should be educated about the technology's limitations. Schools, governments, local communities, and companies should teach how generative AI works. Understanding that AI models lack genuine comprehension and can produce unpredictable and incorrect results will foster more realistic expectations.
  2. Iterative Hybrid Initiatives: Encouraging collaboration between AI and human creators, where the AI assists and the human guides, can lead to more reliable and contextually accurate results. The human plays a critical role in providing feedback to fine-tune the machine, which allows more diligent tracking of AI outputs and greater user control over generated content.
  3. Clear Communication of Intent: Users should be educated about prompt design and encouraged to provide more specific and explicit prompts to avoid misinterpretation by the AI model. A bad prompt is misleading, unclear, and ambiguous, leaving much to be “misinterpreted” by the machine and leading to irrelevant, inappropriate, ineffective, misleading, inadequate, or biased outputs; the sketch after this list shows what the difference looks like in practice.
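
To make the third recommendation concrete, here is a minimal sketch of how prompt specificity narrows what a model has to guess. It assumes the OpenAI Python client and an API key in the environment; the model name and the prompt wording are illustrative assumptions, not the “right” prompt.

```python
# A minimal sketch contrasting a vague prompt with a specific one,
# using the OpenAI Python client. Assumes `pip install openai` and
# an OPENAI_API_KEY environment variable; the model name and the
# prompt wording are illustrative, not a prescribed recipe.
from openai import OpenAI

client = OpenAI()

# Vague prompt: audience, scope, and format are left open,
# so the model must guess, and we must guess at its guesses.
vague_prompt = "Write about the impact of AI on society."

# Specific prompt: states audience, length, scope, and the kind of
# evidence expected, shrinking the room for "misinterpretation."
specific_prompt = (
    "Write a 200-word summary of two documented impacts of "
    "generative AI on creative work, for a non-technical business "
    "audience. Give one concrete example per impact, and flag any "
    "claim that is speculative rather than documented."
)

for label, prompt in [("vague", vague_prompt), ("specific", specific_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of chat model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```

Comparing the two outputs side by side is a quick way for a team to experience the Illusion of Completeness firsthand: the vague prompt tends to return fluent but generic text, while the specific prompt makes gaps and unsupported claims easier to spot.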

The Illusion of Completeness in Generative AI stems from a combination of factors related to human cognition, AI limitations, and expectations. By being aware of these factors and adopting suitable strategies, we can harness the true potential of generative AI while maintaining realistic expectations about its capabilities. As AI continues to evolve, understanding the nuances of human-AI interactions becomes increasingly critical for creators and consumers of AI-generated content.