Premise: An introduction to prompt engineering by exploring analogies to magic (defined as “ritual meant to effect changes in the world”)…

(continuing the theme of courses that teach generative AI in strange ways)

Voces Magicae = Glitch Tokens

Voces magicae (or their Greek cousins, Ephesia Grammata) are pronounceable but meaningless words that, if pronounced correctly, have the power to perform actions in Greco-Roman magical practice.

Explore the parallel to glitch tokens (and more glitch tokens)

Addendum 22 Jan 2024: People are starting to understand glitch tokens better: they seem to occur when a word or token is very common in the original, unfiltered dataset used to build the tokenizer but is then removed before model training, leaving the LLM knowing nothing about its semantics. Odd behavior ensues. As gleaned from a Hacker News thread.
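The mechanism above can be sketched in a toy simulation (not a real tokenizer or model; the token name and the "training" update are illustrative stand-ins): a token that exists in the vocabulary but never appears in the training corpus gets no gradient updates, so its embedding stays frozen at random initialization.

```python
import random

random.seed(0)

# Hypothetical vocabulary; " SolidGoldMagikarp" stands in for a token that was
# common in the tokenizer's source data but filtered out before model training.
vocab = ["the", "cat", "sat", "mat", " SolidGoldMagikarp"]
dim = 4

# Random-init embedding table, one vector per token.
init = {tok: [random.gauss(0, 1) for _ in range(dim)] for tok in vocab}
emb = {tok: vec[:] for tok, vec in init.items()}

# "Training": only tokens that actually occur in the corpus get their
# embeddings nudged (a stand-in for gradient updates).
corpus = ["the cat sat", "the cat sat on the mat"]
seen = {tok for doc in corpus for tok in doc.split()} & emb.keys()
for tok in seen:
    emb[tok] = [v + 0.5 for v in emb[tok]]

# The glitch token was never seen, so its vector is still untrained noise:
# the model has no learned semantics for it, and odd behavior ensues.
assert emb[" SolidGoldMagikarp"] == init[" SolidGoldMagikarp"]
assert emb["the"] != init["the"]
```

In a real LLM the untrained embedding sits in an arbitrary spot in representation space, which is why prompting with such a token produces unpredictable output.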

Hekhalot mysticism = Escape characters

Hekhalot mysticism in proto-rabbinic Judaism involved cosmic ascents with special magical passwords needed to bypass guardians along the way. (Unlike the Platonic ascent narratives that just require making good decisions and being virtuous…) If you do this, you gain secret knowledge about the way the universe works.

It was recently discovered that entering a long run of repeated tokens (“a a a a ….”) can cause ChatGPT to regurgitate memorized training data … a magical password that bypasses the guardians!

Addressative Magic = Prompt Engineering

There is a long tradition of Judeo-Christian magic (e.g., Sefer ha-Razim, the Testament of Solomon, etc.) concerning the proper way to command angels and daemons to do your bidding—mostly by knowing their correct name, but also by being able to define an airtight contract with them.

What is this, if not prompt engineering?

I agree with Lilian Weng’s “spicy take”—most of prompt engineering is just simple tricks or wishful thinking. (But she does a nice job of illustrating what is demonstrably useful…)

That being said, there are many interesting ideas to explore. promptingguide.ai is a good resource (it is even linked from the OpenAI Cookbook).

Agents and Daemons

LLM-based agents (including chemical examples like ChemCrow) as golem-like creations … or independently minded daemons.

It may be more insightful to use the language of role-playing as a framework for discussing dialogue agents, as this still lets us use folk-psychological concepts (beliefs, desires, goals, ambitions, emotions, etc.) without falling into the trap of anthropomorphism. But there may be some parallels here as well, for example in the Aquinian/Scholastic theory of angels (and, I suspect, in Iamblichus's and Proclus's treatments of the various levels of divine entities, but I need to do some more homework on this), insofar as they do not have the same types of mental states as humans. For example, according to Aquinas, following Augustine (S.T. I.58.1), angels cannot “learn” (the intellect of the angel is not in potentiality with respect to certain forms of knowledge).

È solo un trucco! From magic to natural magic to science…

(with reference to “È solo un trucco!” — “It’s just a trick!” — from La Grande Bellezza)

An exploration of the evolution of the category of natural magic in the late-medieval to early-modern period, with stops along the way for Aquinas’ De operationibus occultis naturae.

It’s just a trick. There’s no magic. It’s just linear algebra. How might an understanding of “natural magic” assist us in thinking about this and understanding societal responses? Use the historical ways that society has treated magic in the past as a way to think about possible ways society will respond and regulate generative AI.

Fordham Stuff

Possible field trips

  • NYPL has a collection of Late Antique Aramaic curse tablets. Rivka Elitzur-Leiman, a scholar (at U Chicago?) who has been researching them, gave a talk on the topic at Fordham.

Parerga and Paralipomena

Practical guidance

  • COSTAR (with examples from ref)
    • Context: Embed relevant context within your prompt to guide the model’s understanding of the task. (e.g., Context: Analyze the sentiment of a user review for a movie.)
    • Output Format: Specify the desired format of the model’s response to align with your objectives. (e.g., Output Format: Provide a sentiment label (positive, negative, or neutral) along with a brief justification.)
    • Specifications: Clearly define the task specifications and constraints to guide the model’s focus. (e.g., Specifications: Focus on the overall sentiment without getting into specific details. Consider both explicit and implicit expressions.)
    • Task Examples: Include task-specific examples in the prompt to illustrate the desired behavior. (e.g., Task Examples: If the user expresses joy about the story-line, the model should identify and label it as a positive sentiment.)
    • Additional Information: Supplement the prompt with any additional information that enhances the model’s understanding. (e.g., Additional Information: The movie genre is a romantic comedy.)
    • Restrictions: Set boundaries and restrictions to guide the model’s behavior within specific parameters. (e.g., Restrictions: Limit the response to a maximum of three sentences.)
  • The fabric repository has some useful prompting patterns for analyzing documents, etc.
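The six COSTAR sections above can be assembled mechanically into a single prompt string. A minimal sketch (the function name and section labels follow the list above; the filled-in values reuse the movie-review examples):

```python
def build_costar_prompt(context, output_format, specifications,
                        task_examples, additional_info, restrictions):
    """Concatenate the six labeled COSTAR-style sections into one prompt."""
    sections = [
        ("Context", context),
        ("Output Format", output_format),
        ("Specifications", specifications),
        ("Task Examples", task_examples),
        ("Additional Information", additional_info),
        ("Restrictions", restrictions),
    ]
    return "\n\n".join(f"{label}: {text}" for label, text in sections)

# Filled in with the movie-review sentiment example from the list above.
prompt = build_costar_prompt(
    context="Analyze the sentiment of a user review for a movie.",
    output_format="Provide a sentiment label (positive, negative, or "
                  "neutral) along with a brief justification.",
    specifications="Focus on the overall sentiment without getting into "
                   "specific details; consider both explicit and implicit "
                   "expressions.",
    task_examples="If the user expresses joy about the story-line, identify "
                  "and label it as a positive sentiment.",
    additional_info="The movie genre is a romantic comedy.",
    restrictions="Limit the response to a maximum of three sentences.",
)
print(prompt)
```

The resulting string would be sent as the user (or system) message to whatever chat-completion API you are using; keeping the sections as named parameters makes it easy to vary one component at a time when testing prompts.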