Promptology [AI at Work Challenge]
Prompt Journeys

Thoughts, Observations, and Updates about Promptology

Mega-Prompts

Prompts, thus far, have been simple. A couple of sentences. Maybe a data grid is included. Sometimes a few rules, such as "don't format the output with numbered lists."
Something is happening that belies this general prompting pattern. Prompts are upsizing at an astounding rate. This new type of prompt contains a lot of detail, instructions, and, perhaps, domain-specific thought processes. Large prompts are not new, but they have been constrained mostly by token costs. Using large prompts to embody elaborate domain expertise is not something you see often.
This update dives deep into the emerging realm of mega- and hyper-prompts.
I believe hyper-prompts are tomorrow’s knowledge objects. They will be curated, shared, subclassed, injected with real-time data, and even licensed as proprietary executable “code”.
Coda, as it turns out, is an ideal place to build and test hyper-prompts, and Promptology provides the underlying framework for building reusable ones.
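If hyper-prompts really do become knowledge objects, the lifecycle described above - curated, subclassed, injected with real-time data - can be pictured in a few lines of code. This is a minimal, hypothetical sketch (none of these class or field names come from Promptology): a base prompt object, a subclassed domain variant, and live data injected at render time.

```python
from dataclasses import dataclass, field

@dataclass
class HyperPrompt:
    """A prompt treated as a reusable knowledge object (hypothetical model)."""
    role: str
    rules: list = field(default_factory=list)

    def render(self, **live_data) -> str:
        # Inject real-time data into the prompt at render time.
        rules = "\n".join(f"- {r}" for r in self.rules)
        facts = "\n".join(f"{k}: {v}" for k, v in live_data.items())
        return f"{self.role}\nRules:\n{rules}\nContext:\n{facts}"

@dataclass
class FinanceAnalystPrompt(HyperPrompt):
    """A 'subclassed' domain variant that extends the base rules."""
    def __post_init__(self):
        self.rules.append("Cite the fiscal quarter for every figure.")
```

A subclass inherits the shared rules and layers on domain expertise, while `render` keeps the real-time data separate from the curated prompt body - the separation that would make such objects shareable and licensable.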

Generating Coda Packs

13-Jul-2023 On a hunch, today I decided to test the hypothesis that a well-constructed prompt could generate a functional Coda Pack that would build and execute without modification. Within minutes of providing a reasonable Pack example and three prompt adjustments, the code was generated, built, and executed correctly.
This is just one very simple test, but it demonstrates that with a suitable prompt, Coda AI can generate code in the flavor and style that the Coda SDK requires.
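The recipe used in this test - one working example plus a short list of corrective adjustments - can be sketched as a simple prompt assembler. This is a hypothetical helper, not the actual prompt from the experiment:

```python
def build_codegen_prompt(task, example, adjustments):
    """Assemble a code-generation prompt from a working example plus
    a short list of corrective adjustments (hypothetical helper)."""
    parts = [
        "Generate a Coda Pack that satisfies the task below.",
        f"Task: {task}",
        "Imitate the style of this working Pack example:",
        example,
    ]
    # Number each adjustment so the model can track the corrections.
    for i, adj in enumerate(adjustments, 1):
        parts.append(f"Adjustment {i}: {adj}")
    return "\n\n".join(parts)
```

The point of the structure is that each failed build produces one more adjustment line rather than a rewritten prompt, which keeps the example-plus-corrections history legible.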

Quick-Start Prompt Workbench Video

10-Jul-2023 If you want to start using Promptology in three minutes, watch the quick-start video at 2x speed.
It covers just the basics.

ChatGPT popularized AI but [Google's] Gemini May Be a Much Bigger Milestone

09-Jul-2023 It won't be long before Coda AI supports more than a single LLM (large language model). The evidence is clear: a milestone bigger than OpenAI's 2022 landmark launch is going to sneak up on us soon.
When that happens, we'll need to think about prompts in the context of different models. Some will work unchanged across different models. Many won't.
Even today, subtle prompt differences exist between OpenAI's GPT-3 and GPT-4. It's difficult enough creating high-performing prompts, let alone trying to remember the nuances of different models.
Promptology has been designed with an eye toward a future where Coda AI supports multiple models and uses its AI capabilities to provide automated prompt variants targeting specific models. This capability has not been exposed in the Promptology UI [yet], but you can see evidence of it in hidden fields (e.g., Prompt Guide).
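One way to picture automated per-model prompt variants is a lookup keyed by model name with a generic fallback. This is a hedged sketch; the model IDs and variant strings are illustrative and are not Promptology's hidden fields:

```python
# Hypothetical: store per-model variants of one logical prompt alongside
# a default, so the right wording targets whichever LLM is active.
PROMPT_VARIANTS = {
    "default": "Summarize the table in plain prose.",
    "gpt-3.5-turbo": "Summarize the table in plain prose. Do not use lists.",
    "gpt-4": "Summarize the table concisely in plain prose.",
}

def prompt_for(model: str) -> str:
    # Fall back to the generic wording for models without a tuned variant.
    return PROMPT_VARIANTS.get(model, PROMPT_VARIANTS["default"])
```

The fallback is the important design choice: a prompt library stays usable on a brand-new model while tuned variants accumulate over time.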
You are free to use Prompt Guide for any purposes you might have. The premise of this field is to create a cascading review of the entire prompt as it is sequentially constructed from each prompt component. Each stage in this progression performs a prompt review of all prompt guidance to that component point. It's like asking the question -
How would you build this next part of the prompt given all that has been provided for all components up to this component?
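That cascading review can be sketched as a loop that, at each component, frames exactly that question over everything assembled so far. All names here are hypothetical, not Promptology's internals:

```python
def cascading_review(components):
    """For each prompt component, frame a review question over all
    guidance assembled up to that point (hypothetical sketch)."""
    reviews = []
    assembled = []
    for comp in components:
        context = "\n".join(assembled)
        reviews.append(
            "Given this guidance so far:\n" + context +
            f"\nHow should the next part be built?\nNext component: {comp}"
        )
        assembled.append(comp)
    return reviews
```

Each stage sees only the components before it, so the review questions grow with the prompt - the "cascading" part of the idea.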
We must consider the fluidity of prompt dependencies as we inch closer to AI systems that are model agnostic.

Prompt Reusability - Keeping it Simple

07-Jul-2023 One of the things my team demanded was extreme simplicity to leverage other team members' prompts. Promptology achieves this by making it one click to recall a prompt that can then be modified and tested as a new prompt. This one requires only two small changes.
Important Tip: Note the use of color and bold formatting in the prompt. Coda makes this possible, allowing prompt creators to format the text so that other users can understand what might need to change when reusing a prompt.
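The recall-then-modify flow boils down to copying an existing prompt record and applying a couple of small changes. A minimal sketch, assuming a simple dict-shaped record rather than Coda's actual data model:

```python
import copy

def recall_as_new(prompt: dict, **changes) -> dict:
    """One-click style reuse: deep-copy an existing prompt record and
    apply a few small edits to make a new one (hypothetical record shape)."""
    new_prompt = copy.deepcopy(prompt)
    new_prompt.update(changes)
    return new_prompt
```

The deep copy matters: the original team member's prompt stays untouched no matter how the recalled variant is edited and tested.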

Search Queries and AI Prompting

05-Jul-2023 In 1997, Google Search made it clear that as the number of information artifacts grew, humans would have to become more discrete to find anything on the Interwebs. Back then, we struggled to learn how to search. Some of us got good at it; some, not so much, and many still struggle today.
In contrast to modern search, artificial general intelligence is based on ALL the information artifacts plus ... ALL the ways all the information artifacts could ever possibly be used, remixed, joined, and analyzed. This is a massive difference and certainly a giant leap forward in information technology.
To say that prompt engineering is about discrete and articulate queries is almost laughable. Have we ever faced a more critical moment in history where we must choose words more wisely?
Over the years, have modern search systems become more forgiving of poorly crafted queries? Nope. AI queries will likely follow the same arc.
If anything, modern search systems have become more sensitive, producing less helpful outputs year over year. And this is why those who need to find stuff are turning to conversational UIs and UXs that can sustain context, perhaps nudging the productivity curve in their favor.
While search and AI productivity share similar challenges, prompt engineering subtly tosses a new monkey into the wrench. The slightest change in a prompt can alter outcomes so significantly that they become almost unrecognizable. Unfortunately, this - and this alone - suggests we need to shape prompts like we craft legal documents.
With great power comes the responsibility to advance future information systems that leverage this technology. We must tame prompts through well-understood change management processes.
