Why we consider Prompt Engineering to be overrated

At the end of 2022, ChatGPT dethroned a Chinese video portal as the fastest-growing app. Many in its rapidly growing (pun intended) user base were quite willing to project their uncertainty about AI systems – and about the complexity confronting them – onto a single question: “Are my prompts good?”. Remarkably, the number of AI experts grew at a no less record-breaking pace than the number of these users, who were also potential customers for commercialized offers around the topic of prompting.


Prompt You Well

You can usually recognize such offers by the people behind them, whose titles follow the template “Prompting” plus <<figure with special status from legend/fairy tale/religion>> – for example Prompting King, Prompting Sorcerer, Prompting Prodigy. A CPO (Chief Prompting Officer) has unfortunately not yet been sighted.

While it is undoubtedly essential to learn about artificial intelligence and, in many work environments, to build knowledge about how to collaborate effectively with AI, some dubious trends deserve a closer look. Today we examine one of them: the hyper-focus on prompting.

Trial and error

In a nutshell: what is prompt engineering?

Prompt engineering refers to crafting queries or inputs that instruct AI models to produce results that are as accurate and relevant as possible. It is undoubtedly useful, but at the same time its actual potential is quite limited. More complex and resource-intensive techniques include the popular Retrieval-Augmented Generation (RAG) architecture and model fine-tuning. Both require high data quality – though that is not a new insight within data-intensive projects.
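To make the contrast with plain prompting concrete, here is a minimal sketch of the RAG idea: retrieve the most relevant document for a query and prepend it to the prompt. All names and data are hypothetical, and a simple word-overlap score stands in for the vector-embedding search and LLM call a real system would use.

```python
def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Combine the retrieved context and the user query into one prompt."""
    context = retrieve(query, documents)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

# Hypothetical internal documents an LLM alone could not know about.
docs = [
    "Our vacation policy grants 30 days of paid leave per year.",
    "Expense reports must be filed within 14 days of travel.",
]
prompt = build_rag_prompt("How many days of paid leave do I get?", docs)
```

The point of the sketch: the hard work sits in data quality and retrieval, not in the wording of the final prompt.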

Prompt Engineering - The optimization process
When prompting, the end user can make use of a number of simple principles. In addition, the output can be controlled in a more granular way with somewhat elaborate strategies and instructions. A plethora of resources on this are freely available:
  • Online encyclopedias: introductory examples of prompt engineering techniques can already be found there.
  • Official documentation: Providers often publish detailed user manuals and best practices for interacting with their models.
  • Scientific contributions: Numerous academic papers are publicly available and provide insights into the theory and applications of Large Language Models (LLMs).
  • Online communities and forums: Platforms such as GitHub, Reddit and Stack Overflow host active communities where problems are discussed.

Basically, prompt engineering doesn’t require any technical background knowledge to get started. It’s more about learning through trial and error what types of prompts give the best results. This is something you can teach yourself through practice and engagement with the model, rather than through expensive courses or workshops.
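Those simple principles – assign a role, state the task, constrain the output format, give an example – can be sketched in a few lines. The structure below is illustrative; the exact wording and field names are assumptions, and refining them is precisely the kind of trial and error anyone can do on their own.

```python
def build_prompt(role: str, task: str, output_format: str, example: str) -> str:
    """Assemble a structured prompt from the usual building blocks."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Respond in this format: {output_format}\n"
        f"Example: {example}"
    )

prompt = build_prompt(
    role="a support agent for a software company",
    task="Summarize the customer complaint in one sentence.",
    output_format="a single plain-text sentence",
    example="The customer cannot reset their password.",
)
```

Swapping out the role, the format constraint, or the example and comparing the model's answers is the whole "optimization process" – no course required.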

The bigger picture

The hyper-focus on prompting can lead to misconceptions about what scientific, practical and socio-technical challenges exist in relation to the use of Generative AI¹. In the practical use of LLMs alone, a large number of factors need to be considered, some of which can be solved pragmatically, but which generally represent a strategic decision and require thorough examination. Examples:

  • I can pragmatically solve how I share a prompt that works well for specific tasks in my domain with my team.
  • Strategically, I want to decide how to shape the change to working with LLMs and support it with training or user groups.
  • I want to check very thoroughly how I can use LLMs safely.

¹: Experts will definitely be needed.

Demystifying AI: distraction from real challenges

The commercialization of esoteric knowledge, supposedly essential for unlocking the full potential of LLMs, exploits people who have little to no experience with AI and are unsettled by the narrative that they will soon be left behind and made redundant. Attention is thus diverted from substantial challenges in the field of AI² and from broader debates about structural problems – for example, with regard to how comprehensive digital skills can be taught (keyword: digital literacy). From a technical perspective, as described above, a broad repertoire of methods is indispensable in the majority of application scenarios.

²: For a general overview, reading this is recommended: https://llm-safety-challenges.github.io/challenges_llms.pdf

jum

AI solution engineer
sequire technology
