At the end of 2022, ChatGPT dethroned a Chinese video portal as the fastest-growing app. Its user base, which grew promptly (pun intended), was in part only too willing to channel its uncertainty about AI systems, and its sense of being overwhelmed by their complexity, into the question "Are my prompts good?". Remarkably, the number of AI experts grew at a no less record-breaking pace than the number of these users, who are also potential customers for commercial offerings around the topic of prompting.
You can usually recognize such offerings by the fact that they are fronted by people whose names follow the template "Prompting" plus a figure of special status from legend, fairy tale, or religion: Prompting King, Prompting Sorcerer, Prompting Prodigy, for example. A CPO (Chief Prompting Officer) has unfortunately not yet been sighted.
While it is undoubtedly essential to learn about artificial intelligence and, in many work environments, to build knowledge about how to collaborate effectively with AI, some dubious trends deserve a closer look. Today we examine the hyper-focus on prompting.
Prompt engineering refers to the crafting of queries or inputs that instruct AI models to produce results that are as accurate and relevant as possible. It is undoubtedly important, but at the same time its actual potential is limited. More complex and resource-intensive techniques include the popular retrieval-augmented generation (RAG) architecture and model fine-tuning. Both require high data quality, though this is hardly a new insight in data-intensive projects.
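To make the distinction concrete, here is a minimal sketch of the RAG idea: instead of relying on prompt wording alone, relevant documents are retrieved and prepended to the model input. The corpus, the word-overlap scoring function, and the prompt template are simplified assumptions for illustration; real systems use vector embeddings, a proper retriever, and an actual LLM call.

```python
def score(query: str, doc: str) -> int:
    """Toy relevance score: number of words shared between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical mini-corpus standing in for a company knowledge base.
corpus = [
    "The data platform exports reports every night at 02:00.",
    "Fine-tuning requires curated, high-quality training data.",
    "Office plants are watered on Fridays.",
]

print(build_prompt("What does fine-tuning require?", corpus))
```

The point of the sketch: the quality of the answer now depends on retrieval and data quality, not merely on how cleverly the question is phrased.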
The hyper-focus on prompting can lead to misconceptions about what scientific, practical and socio-technical challenges exist in relation to the use of Generative AI¹. In the practical use of LLMs alone, a large number of factors need to be considered, some of which can be solved pragmatically, but which generally represent a strategic decision and require thorough examination. Examples:
¹: Experts will definitely be needed.
The commercialization of esoteric knowledge, supposedly essential for unlocking the full potential of LLMs, exploits people who have little to no experience with AI and who are unsettled by the narrative that they will soon be left behind and made redundant. Attention is thus diverted from substantial challenges in the field of AI² and from broader debates about structural problems, for example how comprehensive digital skills can be taught (keyword: digital literacy). From a technical perspective, as described above, a broad repertoire of methods is indispensable in the majority of application scenarios.
²: For a general overview, this reading is recommended: https://llm-safety-challenges.github.io/challenges_llms.pdf
JULIA MASLOH
AI solution engineer
sequire technology