The application of LLMs in everyday life has gained momentum since the ChatGPT breakthrough in November 2022. While much attention has focused on engineering prompts for specific textual results, little emphasis has been placed on how such iterative prompt engineering techniques can be extended beyond the limitations of screens and into more tangible, physical realities. This shift can empower users, expanding the familiar concept of “end-user programming” to include any type of everyday artifact. Consequently, Prompting Realities envisions an experiential scenario where AI distributes the authoritative role of engineers and designers as creators to end-users, opening up new opportunities to contest, interrupt, resist, or manipulate everyday products in unfair situations: not through a system, but at the edge of usage.
The underlying pipeline follows a simple yet powerful approach: provide the large language model with a precise description of the prototype (its functionality, appearance, etc.). This enables the model to understand how the prototype’s computer variables correspond to the effects they can have on reality, in relation to the prototype’s functionality. Little emphasis is placed on the specific hardware or software solutions used in the current prototypes, as the project primarily focuses on offering a new experience for users rather than introducing a new AI-powered system. However, the prototypes utilize a range of technologies, including the OpenAI Assistant API, Telegram Bot Interface, and TU Delft IDE Connected Interaction Kit, all of which can be replaced by similar alternatives.
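The core of this pipeline can be sketched in a few lines. The prototype description below, the variable names, and the parsing logic are all hypothetical illustrations (the source does not publish its implementation); the real model call via the OpenAI API is stubbed out with a canned reply.

```python
import json

# Hypothetical sketch of the pipeline: a precise prototype description acts
# as the system prompt, the user's conversational request follows, and the
# model is asked to answer with concrete values for the prototype's variables.

PROTOTYPE_DESCRIPTION = """\
You control a desk lamp prototype. Its firmware exposes two variables:
- brightness: integer 0-255
- color: one of "warm", "neutral", "cool"
Reply ONLY with a JSON object setting these variables."""

def build_messages(user_request: str) -> list[dict]:
    """Assemble the chat history that would be sent to the model."""
    return [
        {"role": "system", "content": PROTOTYPE_DESCRIPTION},
        {"role": "user", "content": user_request},
    ]

def apply_reply(reply_text: str) -> dict:
    """Parse the model's JSON reply and clamp it to the hardware's real limits,
    so the LLM can never push the device outside its physical range."""
    state = json.loads(reply_text)
    state["brightness"] = max(0, min(255, int(state["brightness"])))
    if state["color"] not in ("warm", "neutral", "cool"):
        state["color"] = "neutral"
    return state

# A stubbed model reply stands in for the real API call:
fake_reply = '{"brightness": 300, "color": "warm"}'
print(apply_reply(fake_reply))  # brightness clamped to the hardware range
```

The clamping step is what lets end-users negotiate freely in conversation while the firmware keeps the final word on what the hardware can safely do.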
“AI can distribute the authoritative role of creators to end-users.”
Prompting Realities explores the intersection of AI and physical computing, expanding the application of large language models beyond text-based interactions into tangible, real-world contexts. By empowering users to interact with AI-driven prototypes through conversational interfaces, the project pushes classic notions of end-user programming, redistributing control and agency from engineers and designers to everyday users. This shift has the potential to democratize technology, enabling resistance, interruption, and subversion, and opening new avenues for human-AI collaboration. Through experiential AI prototyping, the project provokes speculative reflection on AI’s role in shaping our physical and digital environments, encouraging deeper engagement with the evolving relations between humans and machines.
A future with fewer AI-powered products but more AI-powered users is far preferable.
A series of tangible prototypes demonstrates how users can experience the developed pipeline. Each artifact explores a different aspect of this human-AI interaction pipeline that can be developed further. For instance, one prototype examines the physical appearance of artifacts and how that appearance can serve as a reference for manipulating hardware properties in an ecosystem of hardware and embodiments, resulting in various real-world effects. Another prototype delves into how LLMs can manipulate a constant sequence of values (e.g., colors in a light strip) to bring new representational qualities to the whole. Additionally, a haptic prototype empowers users to define their own sequences of vibrations, granting them more agency over the typically rigid boundaries set by an artifact’s source code. These prototypes demonstrate how conversational interactions between humans and physical objects, mediated by LLMs, can unlock creative exploration, iterative negotiation, and greater user empowerment over products traditionally constrained by computer code.
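The light-strip prototype implies a guard between the model's free-form suggestions and the fixed hardware. The sketch below is an assumption about how such a guard might look: the strip length, the hex-color format, and the function name are all illustrative, not taken from the project's code.

```python
import re

STRIP_LENGTH = 8  # pixels on a hypothetical LED strip

def validate_sequence(llm_reply: list[str]) -> list[str]:
    """Check that an LLM-proposed color sequence fits the strip before
    pushing it to the hardware: drop malformed entries, truncate to the
    fixed pixel count, and pad any shortfall with off-pixels."""
    hex_color = re.compile(r"^#[0-9a-fA-F]{6}$")
    colors = [c for c in llm_reply if hex_color.match(c)]
    colors = colors[:STRIP_LENGTH]
    colors += ["#000000"] * (STRIP_LENGTH - len(colors))  # pad with "off"
    return colors

# e.g. the model answered a request for "a sunset fading into night",
# but one entry is not a valid color:
proposed = ["#ff4500", "#ff8c00", "#4b0082", "#191970", "oops"]
print(validate_sequence(proposed))
```

The same pattern applies to the haptic prototype: user-defined vibration sequences are negotiated in conversation, then validated against the actuator's real limits before playback.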