

How it works

The Anthropic integration uses direct mode. Because Anthropic’s API is not OpenAI-compatible, Cognisafe wraps the messages.create method on the Anthropic client instead of routing through the proxy. After the model responds:
  1. Your code receives the response immediately
  2. The SDK fires a background task that ships the request and response payload to POST /internal/log on the FastAPI backend
  3. The backend queues a scoring job in Redis
No proxy is involved. The background shipping adds no perceptible latency to your application.
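The direct-mode flow above can be sketched as a plain wrapper around the client method. This is an illustrative simplification, not Cognisafe's actual implementation: the helper names and payload shape are stand-ins, and the real SDK posts to the backend rather than printing.

```python
import threading

def log_to_backend(payload):
    # Stand-in for the background POST to /internal/log on the FastAPI backend.
    print("logged request for model:", payload["request"].get("model"))

def patch_messages_create(client):
    """Wrap client.messages.create so each call ships its payload in the background."""
    original = client.messages.create

    def wrapped(**kwargs):
        response = original(**kwargs)  # 1. the caller gets the response immediately
        payload = {"request": kwargs, "response": response}
        # 2. a background task ships the payload; the caller never waits on it
        threading.Thread(target=log_to_backend, args=(payload,), daemon=True).start()
        return response

    client.messages.create = wrapped
    return client
```

Because the logging thread is started after the response is already in hand, the wrapper adds nothing to the caller's critical path, which is why no perceptible latency is introduced.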

Installation

pip install "cognisafe[anthropic]"

Setup

import cognisafe
import anthropic

cognisafe.configure(
    api_key="csk_your_key_here",
    project_id="my-app",
)
cognisafe.patch_anthropic()

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain quantum entanglement in one paragraph."}
    ],
)

print(message.content[0].text)

Supported capabilities

Capability                        Supported  Notes
Messages API (messages.create)    Yes        Full request and response logged
Streaming                         Yes        Captured after stream completes
System prompts                    Yes        Included in logged payload
Tool use                          Yes        Tool definitions and results logged
Vision (image inputs)             Yes        Image payloads included in request body
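"Captured after stream completes" implies the streamed deltas are accumulated into a single final payload before anything is shipped. A hypothetical sketch of that accumulation (the event names mirror Anthropic's streaming event types, but the helper and the simplified event tuples are illustrative, not Cognisafe's API):

```python
def assemble_streamed_text(events):
    """Collect content_block_delta events into the final response text.

    `events` is a list of (event_type, text) pairs, a simplified stand-in
    for Anthropic's server-sent streaming events.
    """
    chunks = []
    for event_type, text in events:
        if event_type == "content_block_delta":
            chunks.append(text)
        elif event_type == "message_stop":
            break
    return "".join(chunks)

# Once the stream ends, the assembled text is what would be shipped for logging.
events = [
    ("message_start", ""),
    ("content_block_delta", "Quantum "),
    ("content_block_delta", "entanglement..."),
    ("message_stop", ""),
]
print(assemble_streamed_text(events))  # Quantum entanglement...
```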

Scoring note

Safety scoring always uses the model configured via SCORER_MODEL (default: gpt-4o-mini), regardless of which Claude model served the original call. PyRIT’s scorers are model-agnostic: they receive the text of the prompt and response, not a reference to the originating model.
Claude models and the scoring model are completely independent. You can use claude-opus-4-5 in your product and GPT-4o mini as the safety scorer with no conflict.
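A minimal sketch of that separation, assuming SCORER_MODEL is read from the environment (the variable name comes from above; the two helper functions are hypothetical, shown only to illustrate that a scorer's inputs are text, never the originating model):

```python
import os

def scorer_model() -> str:
    # The safety scorer's model comes from SCORER_MODEL, never from the
    # model used in the original Claude call.
    return os.environ.get("SCORER_MODEL", "gpt-4o-mini")

def score_exchange(prompt: str, response: str) -> dict:
    # A model-agnostic scorer sees only the text of the exchange; the
    # Claude model name is not part of its inputs.
    return {"scorer": scorer_model(), "prompt": prompt, "response": response}
```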

Async client

patch_anthropic() also wraps the async client:
import asyncio
import cognisafe
import anthropic

cognisafe.configure(api_key="csk_...", project_id="my-app")
cognisafe.patch_anthropic()

async def main():
    client = anthropic.AsyncAnthropic()
    message = await client.messages.create(
        model="claude-haiku-4-5",
        max_tokens=512,
        messages=[{"role": "user", "content": "Hello."}],
    )
    print(message.content[0].text)

asyncio.run(main())