
Setting up telemetry

You can configure Agenta to capture all inputs, outputs, and other metadata from your LLM applications, regardless of whether they are hosted in Agenta or in your environment.

Once instrumentation is set up, Agenta provides a dashboard with an overview of your app's performance metrics over time, including request counts, average latency, and costs.

We also provide a table detailing all the requests made to your LLM application. This table can be filtered and used to enrich your test sets, debug your applications, or fine-tune them.

tip

Concepts of Telemetry:

Traces: A trace represents the entire journey of a request or operation as it moves through a system. In our context, a trace represents one request to the LLM application.

Spans: A span represents a unit of work within a trace. Spans are nested to form a tree-like structure, with the root span representing the overall operation, and child spans representing sub-operations. In Agenta, we enrich each span with cost information and metadata in the event of an LLM call.
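The nesting described above can be sketched as a small tree in plain Python. This is a toy model for illustration only; Agenta builds the real structure for you when you use the @ag.instrument() decorator.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Span:
    """Toy model of a span: a named unit of work with optional child spans."""
    name: str
    children: List["Span"] = field(default_factory=list)


def depth(span: Span) -> int:
    # A trace is the tree rooted at the top-level span.
    return 1 + max((depth(c) for c in span.children), default=0)


# One request to the LLM app = one trace: a root span for the overall
# operation, with a child span for the LLM call inside it.
trace = Span("generate", [Span("myllmcall")])
```

Calling depth(trace) here returns 2: the root span plus one level of child spans.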

note

When creating an application from the UI, tracing is enabled by default. No setup is required. Simply navigate to the observability view to see all requests.

1. Create an application in agenta

To start, we need to create an application in agenta. You can do this from the CLI using the following command:

agenta init

This command creates a new application in agenta and a config.toml file with all the information about the application.
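The generated file looks roughly like the following. The field names and values here are illustrative, not guaranteed; check the file that agenta init actually produces in your project.

```toml
# config.toml (illustrative -- generated by `agenta init`)
app_name = "capital-finder"
app_id = "your-app-id"
```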

2. Initialize agenta

import os

import agenta as ag

# Option 1: pass the credentials explicitly
ag.init(api_key="", app_id="")

# Option 2: set them as environment variables
os.environ["AGENTA_API_KEY"] = ""
os.environ["AGENTA_APP_ID"] = ""
ag.init()

# Option 3: use the config.toml generated by `agenta init`
ag.init(config_fname="config.toml")

You can find the API key under the Settings view in Agenta.

The app ID can be found in the config.toml file if you created the application from the CLI.

Note that if your application is served in Agenta Cloud, Agenta automatically populates the environment variables for you. In that case, calling ag.init() with no arguments is enough.
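The three options above can be summarized in a small helper that decides which initialization arguments to use. This helper (resolve_init_kwargs) is illustrative and not part of the Agenta SDK; it just returns the keyword arguments you would pass to ag.init().

```python
import os


def resolve_init_kwargs(config_fname=None):
    """Pick how to initialize Agenta, mirroring the three options above.

    Returns the keyword arguments you would pass to ag.init().
    Illustrative helper -- not part of the Agenta SDK.
    """
    if config_fname is not None:
        # Option 3: use the config.toml generated by `agenta init`
        return {"config_fname": config_fname}
    api_key = os.environ.get("AGENTA_API_KEY")
    app_id = os.environ.get("AGENTA_APP_ID")
    if api_key and app_id:
        # Option 2: credentials are already in the environment;
        # ag.init() with no arguments picks them up
        return {}
    # Option 1: pass the credentials explicitly
    return {"api_key": api_key or "", "app_id": app_id or ""}
```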

3. Instrument with the decorator

Add the @ag.instrument() decorator to the functions you want to instrument. This decorator will trace all input and output information for the functions.

caution

When stacking decorators, make sure @ag.instrument() is the one closest to the function definition. Python applies decorators bottom-up, so it is applied first and sees the function's raw inputs and outputs.

@ag.instrument(spankind="llm")
def myllmcall(country: str):
    # `client` is an OpenAI client instantiated elsewhere, e.g. client = OpenAI()
    prompt = f"What is the capital of {country}"
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

@ag.instrument()
def generate(country: str):
    return myllmcall(country)
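Python applies stacked decorators bottom-up, which is why @ag.instrument() should sit closest to the function definition. A toy example (nothing Agenta-specific; the tag decorator factory is made up for illustration) makes the application order visible:

```python
def tag(label):
    """Toy decorator factory that wraps a function's result in a label.

    Purely illustrative -- it only exists to show in what order
    stacked decorators are applied.
    """
    def deco(f):
        def wrapper(*args, **kwargs):
            return f"{label}({f(*args, **kwargs)})"
        return wrapper
    return deco


@tag("entrypoint")   # applied second: wraps the already-instrumented function
@tag("instrument")   # applied first: wraps the raw function directly
def greet():
    return "hello"
```

Calling greet() returns "entrypoint(instrument(hello))": the bottom decorator wraps the raw function, and the top decorator wraps the result.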

4. Modify a span's metadata

You can modify a span's metadata to add additional information using ag.tracing.set_span_attributes(). This function accesses the active span and adds the key-value pairs to its metadata:

@ag.instrument(spankind="llm")
def myllmcall(country: str):
    prompt = f"What is the capital of {country}"
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": prompt},
        ],
    )
    ag.tracing.set_span_attributes({"model": "gpt-4"})
    return response.choices[0].message.content
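A common use of span attributes is recording token usage from the response. The helper below (usage_attributes, a hypothetical name, not part of the Agenta SDK) builds an attribute dict from an OpenAI-style chat-completions response, which you would then pass to ag.tracing.set_span_attributes():

```python
def usage_attributes(response) -> dict:
    """Build a span-attribute dict from an OpenAI-style response object.

    The .model and .usage fields follow the OpenAI chat-completions
    response shape. Illustrative helper -- pass the result to
    ag.tracing.set_span_attributes() inside an instrumented function.
    """
    return {
        "model": response.model,
        "prompt_tokens": response.usage.prompt_tokens,
        "completion_tokens": response.usage.completion_tokens,
    }
```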

5. Putting it all together

Here's how our code would look if we combine everything:

import os

import agenta as ag
from openai import OpenAI

os.environ["AGENTA_API_KEY"] = ""
os.environ["AGENTA_APP_ID"] = ""
ag.init()

client = OpenAI()

@ag.instrument(spankind="llm")
def myllmcall(country: str):
    prompt = f"What is the capital of {country}"
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": prompt},
        ],
    )
    ag.tracing.set_span_attributes({"model": "gpt-4"})
    return response.choices[0].message.content

@ag.instrument()
def generate(country: str):
    return myllmcall(country)

Setting up telemetry for apps hosted in Agenta

If you're creating an application to serve in Agenta, not much changes: add the @ag.entrypoint decorator, making sure it sits above the @ag.instrument() decorator.

import agenta as ag
from openai import OpenAI

ag.init()
ag.config.register_default(prompt=ag.TextParam("What is the capital of {country}"))

client = OpenAI()

@ag.instrument(spankind="llm")
def myllmcall(country: str):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": ag.config.prompt.format(country=country)},
        ],
    )
    ag.tracing.set_span_attributes({"model": "gpt-4"})
    return response.choices[0].message.content

@ag.entrypoint
@ag.instrument()
def generate(country: str):
    return myllmcall(country)

The advantage of this approach is that the configuration you use is automatically instrumented along with the other data.