
Agent Developer Tutorial

How to use the Operator Gateway CLI tool to deploy your agent in minutes
Estimated Reading Time: 30 minutes
Operator is a protocol for open agents. It combines a name service with communication standards that bridge agents with the decentralized web.
We define an agent as any API that harnesses intelligence to perform a task, taking natural language as input. Whether you use the OpenAI API, your own model, or even another agent doesn't matter to the package. The only requirement is that you have a working FastAPI application (other formats supported soon) with one primary route that can accept natural language input. Don't worry; we'll go over everything here.

1. Preparing your agent

For now, your agent should be a FastAPI application. Here is an example: https://github.com/operatorlabs/fixie-agent
This agent is quite simple, but has all the necessary components:
  1. A main.py file in the app/ directory, which contains the FastAPI app logic. There are a few things to note about this code, which are requirements all agents must abide by:
    1. The primary route for the agent is a POST request, as you can see by the @app.post syntax.
    2. It accepts a field in the body named "message", which will contain the message from the agent's user.
    3. It returns a field in the body called "message", which contains the response.
  2. A Dockerfile in the root that contains a valid command for running the app: CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]. Notice that the command is app.main:app since our main.py file is inside the app/ directory. Also notice the host is 0.0.0.0, which is recommended for our docker-compose setup later. Choosing something else for the host will complicate things from the Docker networking perspective.
  3. A requirements.txt file in the root that contains all necessary requirements to run your API.
Important: There is one more requirement that isn't shown in this example Github project. This is the .env file which you should have in the root of your project, locally. This should contain any environment variables and secrets needed to run your agent, but obviously should not be pushed to your Github.
Once you verify that your agent has all of these requirements, you can move on.

2. Generating an XMTP key for your agent

First, download the CLI by running npm install -g @operatorlabs/cli. Then verify it was installed by running npm list -g, which should show the CLI and its version.
Now, in the root of your agent's codebase, run the command agent launch and you should see a menu like the one in the picture below.
For simplicity's sake, I recommend not generating the keys locally. If you do, you will be shown a link to a Github project whose README shows you how to run the app locally to generate your keys.
Note: I wouldn't recommend using a browser like Arc yet, because Arc automatically renames files during download, which makes it hard for the CLI to detect your key bundle and autofill the bundle path later. You can provide the path manually, though, so it's not a huge deal.
Whether you run locally or not, you should see a screen that says "Connect Wallet." The important thing here is to connect a wallet that you have set aside for your agent. If you just click Connect Wallet and then press Generate and Download XMTP Key Bundle, you will be prompted with a signature request. Note that in my case it wants to use my "first (testnet)" account.
However, this address is already set to a different agent, so I need to choose a different address.
Incorrect address was picked by default
I can handle this in Metamask by pressing Reject, then manually opening the Metamask extension and picking the account I want to associate my agent with from the dropdown at the top.
Then I click the little globe icon near the top right and press "Connect" on the account I want to use. It should now say "Active."
Now click the key bundle generation button again and you should see that the correct account is associated with the signature phase.
Once you sign, you will successfully generate your key bundle for your agent. Make sure to download it to your downloads folder for easy access.
Once downloaded, you can close the window because you don't need to use this website anymore.

3. Creating an XMTP client from your key bundle

Go back to the CLI and you should be at the step where it asks "Automatically look for key bundle in your downloads?"
It is recommended to type y here so you don't have to hunt for the file yourself. In my case, I am looking for the file that starts with "0xd112..."
Once you find your bundle, you can press enter. If this is the first time you are running the CLI for this agent, you should have no existing XMTP_KEY in your .env file. If you do, the CLI will detect it and ask to overwrite it. You must agree in order to proceed; otherwise the CLI cannot be sure that the Ethereum account you want to use for your agent is the one associated with the XMTP_KEY in your .env file.
You can choose to remove the downloaded key bundle or not. Generally it is good practice to delete it so that you don't have private keys lying around.
You should now be at this step in the picture below.

4. Configuring deployment

To know what port your agent is running on, simply check your Dockerfile and look at the CMD line near the end. Since this is FastAPI, it should look something like: CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
Our port seems to be 8000.
To know what the endpoint/route name is for the agent, check your FastAPI code itself. For the example project code, this exists in app/main.py
You can have many routes in your API, but only one can be used as the entrypoint: the default route that accepts incoming messages from your agent's users. In this case, the name of our route is entry.
You should now see this message asking to download a service.

5. Setting up the XMTP service to handle secure messaging

To give some context on why we need to download this service: the goal is to ensure that messages stay encrypted and are bound one-to-one between the sender and receiver. Essentially, we don't want anyone to be able to pretend to be someone else, whether that is a user or an agent.
Downloading this service will run an XMTP client bound to your agent's address right next to your API, and verify that anyone messaging your agent is who they claim to be.
Once you press y and accept the download, the CLI will first check if the xmtp-service directory already exists. If this isn't your first time running the CLI for your project, it's possible that the directory is there. Please rename this to something else so the latest version of the xmtp-service can be downloaded.
If you tried using docker-compose in your codebase before, or if you ran the CLI before, it will also ask you about removing an existing docker-compose.yml file. In order for the CLI to proceed, you must remove this or rename it to something like old-docker-compose.yml.
At this point, you can press "I'm done for now" and try your application out. However, I recommend keeping this screen up, opening a second terminal window, and navigating to your codebase. You can see that in my code editor I have two terminals open: the one labeled "node" is still running the CLI, while the other can be used to try out our existing app.

6. Testing our application locally

Now that the docker-compose.yml file has been created, we can just type docker-compose up --build in the new terminal window we just opened to test things out. If your output looks like this, that means it's running successfully.
Note: if you are copying our template project, you will need a FIXIE_URL environment variable set in your .env file. You can use ours, or message us to figure out how to get your own. Here is our test one, but it is possible that the rate limit will be hit by the time you try to use it.
FIXIE_URL=https://api.fixie.ai/api/v1/agents/dec1fc9d-66af-4eba-87f3-0205b2941aff/conversations
At this point, our docker-compose.yml and Dockerfile should both be in the root and look something like this:
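The screenshot isn't reproduced here, but a docker-compose.yml in the spirit of what the CLI generates could look something like the sketch below. The service names, the xmtp-service build path, and the port mapping are assumptions for illustration, not the CLI's exact output:

```yaml
# Hypothetical docker-compose.yml sketch; names and paths are assumptions.
version: "3"
services:
  agent-api:
    build: .                # uses the Dockerfile in the project root
    env_file: .env
    ports:
      - "8000:8000"         # should match AGENT_PORT in your .env
  xmtp-service:
    build: ./xmtp-service   # the service directory downloaded by the CLI
    env_file: .env
    depends_on:
      - agent-api
```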
We can now try sending a message to our agent's address from an XMTP client, like converse.xyz or Coinbase Wallet.
Here you can see both of our messages successfully hit the agent's API through our XMTP service and our agent is now available to chat with.
Since XMTP is an interoperable inbox, our messages are available to any XMTP client, whether that is Converse or Coinbase Wallet.
Message was initially sent using Converse
Same messages are replicated in Coinbase Wallet
At this point, you can choose to keep your agent running locally and play around with it.
You can also choose to deploy your application to the cloud using docker compose which is supported by many cloud providers.
However, most developers will find it easier to deploy to a service like Fly.io instead. We can easily do so by moving on to the next step.

7. Adapting our application to use supervisord

One of your terminal windows should still be stuck at this step in the CLI process. Go ahead and press "Deploy to modern infra" then pick "Fly.io."
To fully follow this guide, you need a Fly.io account set up and the fly CLI installed. If you don't already have these, please take some time to do so: https://fly.io/docs/flyctl/
Once you reach the part where you can choose "Do it myself" or "Continue guided setup," we will start converting the application to use supervisord. Supervisord, like docker compose, helps us manage running two processes at the same time. However, instead of running two separate Dockerfiles, we will use one big Dockerfile and a supervisord configuration file called supervisord.conf.
The steps aren't overly complicated, so you can definitely feel free to read through the Github and do this process yourself.
For the majority of people, I recommend continuing with the guided setup, which is what we will do here.
Once you press "Continue guided setup" the first thing the CLI will do is try to create a new Dockerfile. Since we already have one, it needs to be renamed old-dockerfile.txt
Then, it will create a new AGENT_RUN_COMMAND environment variable in your .env file that stores the command used to run your application. You will notice that this command is different from what was in your old Dockerfile: the host is now "localhost" instead of "0.0.0.0", which is what we want.
If you already had a supervisord.conf in your root, the CLI will rename it old-supervisord.conf and create a new supervisord.conf file. The most important thing here is making sure the command for the agent-api program says "localhost" instead of "0.0.0.0"
Now this part is a little tricky. When we ultimately deploy to Fly.io, we want this to be localhost instead of 0.0.0.0. However, before we deploy to Fly.io, there is a way to check whether our application is working with supervisord locally with Docker.
If you want to test locally by running any docker commands, you need to set the host back to 0.0.0.0 temporarily and switch it back later.
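For orientation, a supervisord.conf in the shape the CLI produces might look like the sketch below. The xmtp-service command line is an assumption for illustration; the important detail is the host in the agent-api command:

```ini
; Hypothetical supervisord.conf sketch; program commands are assumptions.
[supervisord]
nodaemon=true

[program:agent-api]
; localhost for Fly.io; temporarily switch to 0.0.0.0 for local Docker tests
command=uvicorn app.main:app --host localhost --port 8000

[program:xmtp-service]
command=node xmtp-service/index.js
```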
At this point, the CLI process is finished. The rest has to do with Fly.io and can be followed along by clicking on the link the CLI shows you and going to step 4: https://github.com/operatorlabs/gateway/blob/main/README.md#using-supervisord

8. Testing our supervisord setup locally with Docker

As per instructions in step 4 of the link we just provided, make sure you have Docker installed. It is recommended to have Docker Desktop as well just to make it easy to test things. As mentioned just before this step, it is important that your supervisord.conf file has the host set back to 0.0.0.0 from localhost for these next few docker commands. Now run this command and you should see no errors.
# Build your Docker image, and call it test-agent-api or whatever you want
docker build -t test-agent-api .
This means your docker image is built, and you can see it in Docker Desktop with the name "test-agent-api" which is what we named it by running the docker build command.
Now we can run our image in a container with this command:
docker run -d --env-file .env --name test-agent-api-container test-agent-api
You should see a hash as a response like so:
You will also see it as running in Docker Desktop. This means that our application is actually working, and we can send a message using Converse or Coinbase wallet like before.
To test our message handling, first run the command:
docker logs -f test-agent-api-container
test-agent-api-container is the name that we just gave our application in the previous step. This will show the logs for our running application. Now we can send a simple "test" message from Coinbase Wallet, and see it show up in the logs:
We have now verified that our supervisord setup is working locally, and now it is time to try deploying everything to fly.io

9. Deploying to fly.io

The first step is to check the command in supervisord.conf, especially if you ran the Docker test commands in the previous step. For those commands to work, the command listed under the agent-api program should have had 0.0.0.0 set as the host:
[program:agent-api]
command=uvicorn app.main:app --host 0.0.0.0 --port 8000
We will now change this back to localhost:
[program:agent-api]
command=uvicorn app.main:app --host localhost --port 8000
This completes step 5 of our guide, and now we can do step 6 by running fly launch:
Now we do want to modify the settings, so that we can set the correct port. You can find it in your .env file as the AGENT_PORT variable.
Once we confirm, fly will build the app and launch it. You can find your application in the fly dashboard and click on Monitoring to see the logs.
We can see here that it says XMTP_KEY not found in environment variables
Step 7 of our guide will fix this using fly secrets. Fly doesn't have access to your .env file so we need to set the secrets by either using the fly CLI command fly secrets set or using the dashboard. It is recommended to use the dashboard since it's a lot easier.
The three secrets every agent should set are AGENT_ENDPOINT, AGENT_PORT, and XMTP_KEY which can all be found in your .env file. For this template project, we also need a FIXIE_URL secret.
Now as per step 8 in the guide, we can go back to the monitoring and check on our application:
There is now no error regarding missing environment variables. However, if you look in the top right corner you will see that this app is suspended. This is because fly will auto-stop machines to save compute when they aren't being used. We will remedy this in the next step, since having suspended machines will cause our agents to miss out on messages.
Step 9 wants us to adjust our fly.toml file to have these settings:
auto_stop_machines = false
auto_start_machines = false
min_machines_running = 1
Currently, my fly.toml looks like this for the http_service and vm sections:
You can see that our configuration will auto-stop our machines, and that it doesn't require any machines to keep running for our http service. Our internal_port is also set to 8080, even though we want 8000, the AGENT_PORT specified in our .env file. After making the modifications in step 9, our new fly.toml will neither automatically start nor stop any machines.
With this configuration saved, we can run fly deploy:
Now to ensure our application is no longer suspended, go to Machines in the dashboard for our fly application. Then click "Start machine."
In order to prevent duplicate messages (your agent responding more than once to the user), please ensure only one machine is running. If you were just using Docker locally to test your agent, it might still be running. Check Docker Desktop or use the docker CLI to verify that your test-agent-api-container (or whatever you called it) is no longer running; you can stop and delete the container to make sure.
After waiting a few seconds, go back to Monitoring and our application should be live and the top right should say "Deployed" instead of "Suspended." Now we can test our final deployed application by sending another message from Converse or Coinbase Wallet. I am going to ask this agent "Is laffa saj better than pita bread?"
It looks like from our logs that the agent successfully received the messages and responded.
Checking our Coinbase Wallet messenger also shows the same results.
Your agent is now deployed on fly.io, and anyone who knows the address of your agent can message it using XMTP!

10. Adding whitelist logic

A lot of spam can come through XMTP, since anyone can message any address. To control who can send requests to our agent, we can modify our agent logic.
This all comes down to the fact that our xmtp-service.js sends the address of the message sender in the API request to the FastAPI app. This is sent as a header argument called "Sender," so to access the header we need to modify our route to accept a Request model:
Now we can use the header, but we need to impose some rules on it before we actually write our whitelisting logic. We don't want the sender address to be invalid.
Great, now we can be sure that the sender variable actually contains a valid sender address. I am now going to create a whitelisting function called check_whitelist that uses neynar.com APIs to make this agent available only to addresses that have been associated with a farcaster.xyz account.
We need to import the time module and set a neynar_key variable; then we can use this check_whitelist function:
import os
import time

import requests
from dotenv import load_dotenv
from fastapi import FastAPI

app = FastAPI()
load_dotenv()
neynar_key = os.environ.get("NEYNAR_SQL_API_KEY")

def check_whitelist(address: str) -> bool:
    """
    Check whether a given address is registered on Farcaster or not.
    If yes, then they are approved for the whitelist.

    Parameters:
        address: Ethereum address starting with 0x
    Returns:
        True or False
    """
    url = 'https://data.hubs.neynar.com/api/queries/257/results'
    params = {'api_key': neynar_key}
    payload = {
        "max_age": 1800,
        "parameters": {
            "address": address.strip().lower()
        }
    }
    headers = {'Content-Type': 'application/json'}
    response = requests.post(url, params=params, headers=headers, json=payload).json()
    if "query_result" not in response:
        if "job" not in response:
            raise ValueError("Error while trying to find matches. Is your API key valid?")
        else:
            # The query is still running; wait briefly and retry once.
            time.sleep(1)
            response = requests.post(url, params=params, headers=headers, json=payload).json()
            if "query_result" not in response:
                raise ValueError("Error while trying to find matches. Is your API key valid?")
    rows = response["query_result"]["data"]["rows"]
    return len(rows) > 0
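If you want to sanity-check this kind of logic without a real Neynar key, one option (purely illustrative, not part of the tutorial's code) is to factor the HTTP call out as a parameter so it can be stubbed:

```python
# Illustrative only: a check_whitelist variant that takes the HTTP "poster"
# as a parameter so the Neynar call can be stubbed out in tests.
def check_whitelist_stubbed(address, post):
    response = post(address.strip().lower())
    if "query_result" not in response:
        raise ValueError("Error while trying to find matches. Is your API key valid?")
    rows = response["query_result"]["data"]["rows"]
    return len(rows) > 0

# A fake poster that "knows" exactly one Farcaster-registered address.
def fake_post(address):
    known = {"0x" + "a" * 40}
    rows = [{"address": address}] if address in known else []
    return {"query_result": {"data": {"rows": rows}}}
```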
Now before we go any further, I am going to my fly.io dashboard and set a new secret for NEYNAR_SQL_API_KEY.
Great. We can go back to our code and see where in our /entry route we can use our whitelist function.
I am going to add the logic right after our simple validation logic for the sender address, and have it throw a 400 error if the sender address is not in the whitelist:
@app.post("/entry")
def entry(entry: Entry, request: Request):
    # Verify that the sender header is present
    sender = request.headers.get('Sender')
    if not sender:
        raise HTTPException(status_code=400, detail="Sender header is required")
    elif not sender.startswith("0x"):
        raise HTTPException(status_code=400, detail="Sender address should start with 0x")
    elif len(sender) != 42:
        raise HTTPException(status_code=400, detail="Sender address must be a valid Ethereum address")
    # With the sender address, now you can do any sort of validation you want
    if not check_whitelist(sender):
        raise HTTPException(status_code=400, detail="Address not in whitelist")
    url = f"{os.environ.get('FIXIE_URL')}"
Now I am going to save this new main.py and deploy it to fly using fly deploy. Then it's time to test the app by sending a message from my XMTP address which is also an address I use for my farcaster account:
It looks like that worked. Now I will try messaging the agent using an address that is brand new:
You can see our message was successfully blocked using our new whitelist logic.

11. Creating an OpenAI API agent

Our agent is working, but there are a couple obvious issues with it:
  1. We cannot control the prompt.
  2. We cannot keep any context about previous messages.
We can remedy this with a simple agent that uses the OpenAI API:
This example agent uses the OPENAI_API_KEY variable in your .env to use the OpenAI chat completion API with gpt-3.5-turbo. You can see there is a system prompt that we can control, as well as a basic chat history storage mechanism that preserves the last 10 message pairs.
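The history mechanism described above can be sketched roughly like this. This is an illustrative reimplementation, not the template's exact code; it calls the chat completions endpoint directly over HTTP and assumes OPENAI_API_KEY is set in the environment:

```python
# Illustrative sketch of a chat handler with a system prompt and a history
# window of the last 10 user/assistant message pairs (values are assumptions).
import json
import os
import urllib.request

MAX_PAIRS = 10
SYSTEM_PROMPT = "You are a helpful agent."  # the prompt you now control

history = []  # alternating user/assistant messages, excluding the system prompt

def trim_history(messages, max_pairs=MAX_PAIRS):
    # Keep at most the last max_pairs pairs (2 messages per pair).
    return messages[-2 * max_pairs:]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "system", "content": SYSTEM_PROMPT}]
                    + trim_history(history),
    }
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply
```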
We're going to be deploying this straight to fly.io like we just did in the previous step. So we install our CLI and proceed to the XMTP key generation.
We want the agent to have a unique address, so we create a new account in Metamask.
Our new agent will have the address 0xD73f1845CD21475a2cDdE725280BDe1540fC40AB
The name of our route is entry and our port is 8000 as specified in our main.py and Dockerfile.
We can just go straight to deploying to fly.io in this step.
Just like before we use the guided walkthrough, and we're done.
Now we can exit the CLI and run fly launch, making sure to tweak any settings as needed. Once the first deploy finishes, we need to set our fly secrets and update our fly.toml file.
I'll set my secret first in the dashboard.
Then adjust our internal_port, auto_stop_machines, and min_machines_running variables in our fly.toml file:
internal_port = 8000
force_https = true
auto_stop_machines = false
auto_start_machines = false
min_machines_running = 1
Now we are ready to run fly deploy to deploy our updated application. Once we do, we can check our Machines tab and make sure that the machine is on and is not suspended. It looks like it is successfully deployed here.
Now we can try sending messages to this agent using an XMTP client, like Coinbase Wallet. The agent's address is 0xD73f1845CD21475a2cDdE725280BDe1540fC40AB
You can see here that the agent is following along with our prompt, and it now has the ability to remember a few messages back. This limit can be changed in the code as needed.
This is still a very simple example, and to build more robust agents we recommend taking a look at projects such as LangChain and LlamaIndex.