An AI Assistant Development Lifecycle

In my previous post, I talked about enabling fully typed output from an AI Assistant. The cornerstone of this typing model is a schema, which is used both to define the assistant's output and to derive app types via TypeScript. To achieve this, we need to create and update assistants locally so that we can co-locate the schema with our app code.
In turn, because we are creating assistants locally (via the OpenAI API), we also need to store the ids that are returned, so we can update those assistants later as needed.
To achieve this, we can set up a data store for the assistant ids, as well as a set of commands for managing the assistant lifecycle (create, update, etc.). In this post, I’ll walk through my implementation of assistant lifecycle management, using the Trivia Game App from previous posts as an example.
The Development Lifecycle
The specifics of any lifecycle will vary depending on needs and preferences. This is the lifecycle I’m using when implementing a feature that leverages structured output from an AI Assistant.
- Create an internal assistant name
- Create assistant instances for each app environment
- Iterate on updating the dev assistant and propagating updates to other environments as needed.
Create an internal assistant name
As I discussed in a previous post, because assistants that emit structured output can only leverage a single schema, they should also be instructed to have a single output goal. In the case of our example app, one main feature is creating the trivia questions for a new game, so a good name for that assistant might be createTriviaGame.
The name should be descriptive of the purpose and unique among assistants for the app. It will then be used as the internal id and base name for the environment-specific assistants, and will also be used to store and retrieve the corresponding OpenAI assistant ids.
Here is an example of defining the assistant names in a read-only array and then using that as the basis for an AssistantName type.
export const ASSISTANT_NAMES = ["createTriviaGame"] as const;
export type AssistantName = (typeof ASSISTANT_NAMES)[number];
Even though these names will effectively be used as ids, I am using “name” to disambiguate them from OpenAI assistant ids.
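As a small illustration, here is one plausible convention for how the internal name could feed the per-environment assistants; the helper and the name-plus-suffix pattern are my own sketch, not something prescribed by the OpenAI API.

function assistantLabel(name: AssistantName, env: "development" | "production"): string {
  // Sketch only: derive the environment-specific assistant name from the internal name.
  return `${name}-${env}`; // e.g. "createTriviaGame-development"
}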
Next, we’d want to create assistants using the OpenAI API for each of the environments, but first let’s talk about how we are going to store the ids that are returned.
Storing Assistant Ids
Since we are managing assistants locally, we also need a way to store their ids, so we can later update the assistants and use them to generate output.
The specific type of store you choose isn’t that important; pick something based on your own preferences. If you already have a data store in place for your app, e.g. Postgres or S3, I suggest just using that. In my case, I went with a Redis store, mostly because I already had one set up in the app.
In terms of data structure, you’ll want to use the above assistant name as your primary key and then associate that with all the corresponding assistant ids generated by OpenAI for all your environments. Here’s an example:
createTriviaGame: {
  development: "asst_abc123",
  production: "<OpenAI Asst Id>",
  // other envs, as needed
}
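To make the storage idea concrete, here is a minimal sketch assuming node-redis and the AssistantName type defined above; the helper names, import path, and key layout are my own and not from the original implementation.

// Minimal sketch of an assistant id store backed by Redis (node-redis).
import { createClient } from "redis";
import type { AssistantName } from "./assistant-names"; // hypothetical path to the definition above

type Environment = "development" | "production";

const redis = createClient({ url: process.env.REDIS_URL });
const connected = redis.connect(); // connect once, lazily awaited below

export async function setAssistantId(
  name: AssistantName,
  env: Environment,
  assistantId: string
): Promise<void> {
  await connected;
  // One Redis hash per assistant name, one field per environment.
  await redis.hSet(`assistants:${name}`, env, assistantId);
}

export async function getAssistantId(
  name: AssistantName,
  env: Environment
): Promise<string | undefined> {
  await connected;
  return (await redis.hGet(`assistants:${name}`, env)) ?? undefined;
}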
With this in place, we now have the basis for adding lifecycle commands.
When and where should assistant lifecycle commands be run?
When I started working on this, I quickly realized that creating and updating assistants needs to happen outside the running app itself. In other words, you wouldn’t want to create a new assistant, say, when the app boots, as that would result in innumerable duplicate assistants. At the same time, since the assistant creation handler needs access to the same schema used for app types, we want lifecycle management to be co-located with the app code.
The simplest solution, I’ve found, is to define a main assistant admin function that accepts an “ACTION” input, which determines which lifecycle handler to call, and then to simply add command scripts for each action. In my case, I added these as npm scripts:
... "scripts": { ... "asst:admin": "dotenv -- tsx app/admin/assistants.server.ts", "asst:create": "ACTION=create npm run asst:admin", "asst:update": "ACTION=update npm run asst:admin", "asst:deploy": "NODE_ENV=production ACTION=update npm run asst:admin" },...
In this example, I’ve created a base admin script and then call that script with the desired action variable. I added scripts for creating, updating, and “deploying” (really just propagating changes to all assistants), but could also easily add scripts for, say, deleting or listing assistants. I used dotenv-cli to give the script handler access to env variables (so it can reach the data store), and tsx so the scripts can be written in TypeScript.
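For a rough idea of what the admin entry point might look like, here is a hedged sketch that dispatches on the ACTION env var set by the npm scripts above. It is illustrative only: the model, instructions, and id-store calls are placeholders, and the structured-output schema wiring from the previous post is omitted.

// Sketch of app/admin/assistants.server.ts: dispatch on the ACTION env var.
import OpenAI from "openai";

const openai = new OpenAI(); // OPENAI_API_KEY is provided via dotenv-cli

const action = process.env.ACTION;
const env = process.env.NODE_ENV ?? "development";

async function main() {
  switch (action) {
    case "create": {
      // Create an assistant for this environment and persist the returned id,
      // e.g. setAssistantId("createTriviaGame", env, assistant.id).
      const assistant = await openai.beta.assistants.create({
        name: `createTriviaGame-${env}`, // base name + environment suffix
        model: "gpt-4o", // assumed model; response_format/schema omitted for brevity
        instructions: "Create trivia questions for a new game.",
      });
      console.log(`Created ${assistant.id} for ${env}`);
      break;
    }
    case "update": {
      // Look up the stored id for this environment, then push the latest config.
      const assistantId = "asst_abc123"; // placeholder: read from the id store
      await openai.beta.assistants.update(assistantId, {
        instructions: "Create trivia questions for a new game.",
      });
      console.log(`Updated ${assistantId} for ${env}`);
      break;
    }
    default:
      throw new Error(`Unknown ACTION: ${action}`);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});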
Running the lifecycle commands
The lifecycle commands should at this point be relatively self-explanatory. We run the asst:create command whenever a new assistant is added, run asst:update as needed while iterating on an assistant’s config, and run asst:deploy when we’re ready to copy our dev assistant configs to the other environments’ assistants, in order to run and test in a deployed environment.
Take a look at the repo code for a more detailed understanding of the lifecycle implementation.
While I previously always started my assistant development process by going to the dashboard, this lifecycle implementation has allowed me to instead do all my work directly in the codebase. I hope you find it useful.