Advanced Function Tracing with Murnitur

Using murnitur.trace, you get advanced function tracing: every step inside a single function of your LLM application is recorded, including sub-function calls, RAG embeddings, and retrievals. By analyzing these details, developers can optimize performance, debug efficiently, and fine-tune models with precision.
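
Decorating only the top-level function is enough: calls it makes to helper functions are recorded as part of the same trace. A minimal sketch of the idea (the function names and bodies here are illustrative, not part of the Murnitur API):

import murnitur

def retrieve_context(query: str) -> str:
    # Helper: the work done here (embeddings, retrievals, model calls)
    # is captured as part of the parent function's trace.
    return "...retrieved documents..."

@murnitur.trace
def answer_question(query: str) -> str:
    context = retrieve_context(query)  # sub-function call appears in the trace
    return f"Answer grounded in: {context}"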

Example


import murnitur
import anthropic
from openai import OpenAI

def generate_yoda_response():
    # Not decorated itself: this call is recorded as a sub-function
    # call within the trace of detect_yoda_response below.
    client = anthropic.Anthropic()

    message = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1000,
        temperature=0.0,
        system="Respond only in Yoda-speak.",
        messages=[
            {"role": "user", "content": "How are you today?"}
        ],
    )

    return message.content[0].text


@murnitur.trace
def detect_yoda_response():
    client = OpenAI()
    yoda_response = generate_yoda_response()
    result = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=1,
        messages=[
            {
                "role": "system",
                "content": "Detect if the response was in Yoda-speak.",
            },
            {"role": "user", "content": f"Response: {yoda_response}"},
        ],
        max_tokens=300,
    )
    return result.choices[0].message.content


if __name__ == "__main__":
    murnitur.set_api_key("mt-ey...")

    murnitur.init("murnix-trace", murnitur.Environment.DEVELOPMENT)
    print(detect_yoda_response())

Trace Generator

The tracer context manager in the Murnitur library lets you trace arbitrary operations within your LLM application. It is particularly useful for monitoring, logging, and diagnosing issues in your system.

Usage

To use the tracer context manager, wrap the code block you want to trace within a with statement. You can set a custom name for each trace to identify different operations.


import murnitur

def customer_support(question: str):
    with murnitur.tracer(name="Customer support") as trace:
        ...  # application logic
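
The trace starts when the with block is entered and is finalized automatically when the block exits, so there is nothing to close manually.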

Setting the output

Using the tracer context, you can set the final result of the trace with set_result.

import murnitur
from openai import OpenAI

client = OpenAI()

def customer_support(question: str):
    with murnitur.tracer(name="Customer support") as trace:
        preset = murnitur.load_preset("customer_support")
        prompt = murnitur.format_prompt(
            messages=preset.active_version.messages,
            params={"question": question},
        )
        result = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=prompt,
        )
        output = result.choices[0].message.content
        trace.set_result(output)

Custom metadata

You can also attach custom key-value metadata to a trace with set_metadata:

import murnitur
from openai import OpenAI

client = OpenAI()

def ask_question(question="What is the answer to life?") -> str:
    with murnitur.tracer(name="Philosophical question") as trace:
        content = client.chat.completions.create(
            model="gpt-3.5-turbo",
            temperature=1,
            max_tokens=200,
            messages=[{"role": "user", "content": question}],
        )
        result = content.choices[0].message.content
        trace.set_result(result)
        trace.set_metadata({"question": question})
    return result

Full Example

Consider a scenario where you have a basic chatbot that interacts with users and you want to trace this operation to monitor its performance and log the result.

import murnitur
from openai import OpenAI

client = OpenAI()

PROMPT = """You are a movie genius who can guess the title of a
            movie from a one liner."""

def run_chatbot():
    """A simple chatbot"""
    conversation = [{"role": "system", "content": PROMPT}]

    print(f"What's the one-liner?. Type `exit` to end the conversation.")

    user_input = input("::")

    conversation.append({"role": "user", "content": user_input})

    with murnitur.tracer(name="Simple Trivia Chatbot") as trace:
        while user_input != "exit":
            completion = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=conversation,
            )
            content = completion.choices[0].message.content
            conversation.append({"role": "assistant", "content": content})

            print(content)
            user_input = input("::")

            if user_input != "exit":
                conversation.append({"role": "user", "content": user_input})

        trace.set_result(conversation[-1]["content"])

if __name__ == "__main__":
    murnitur.set_api_key("mt-ey...")

    murnitur.init("murnix-trace", murnitur.Environment.DEVELOPMENT)

    run_chatbot()
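
Because the conversation loop runs inside the tracer block, the whole multi-turn exchange is captured as a single trace, and set_result records the chatbot's final reply as its output.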


By integrating the tracer context manager into your application, you can strengthen your monitoring and debugging capabilities, leading to a more reliable LLM application.