
# AI Badgr (Budget/Utility, OpenAI-compatible)

## Overview

| Property | Details |
|----------|---------|
| Description | AI Badgr is a budget-friendly, utility-focused provider offering OpenAI-compatible APIs for LLMs. Ideal for cost-conscious applications and high-volume workloads. |
| Provider Route on LiteLLM | `aibadgr/` |
| Link to Provider Doc | AI Badgr Documentation ↗ |
| Base URL | `https://aibadgr.com/api/v1` |
| Supported Operations | `/chat/completions`, `/embeddings`, `/v1/messages` |


We support **all** AI Badgr models; just set `aibadgr/` as a prefix when sending completion requests.

## Available Models

AI Badgr offers four tiers of models optimized for different use cases and budgets:

| Model | Description | Use Case |
|-------|-------------|----------|
| `aibadgr/budget` | Most cost-effective tier | High-volume, simple tasks |
| `aibadgr/basic` | Entry-level model for simple tasks | Basic text generation, simple Q&A |
| `aibadgr/normal` | Balanced performance and cost | General-purpose applications |
| `aibadgr/premium` | Best performance tier | Complex reasoning, production workloads |

:::info
OpenAI model names are accepted and automatically mapped to appropriate AI Badgr models.
:::

## Required Variables

Environment Variables

```python
import os

os.environ["AIBADGR_API_KEY"] = ""  # your AI Badgr API key
```

Get your API key from the AI Badgr dashboard.

## Usage - LiteLLM Python SDK

### Non-streaming

AI Badgr Non-streaming Completion

```python
import os
from litellm import completion

os.environ["AIBADGR_API_KEY"] = ""  # your AI Badgr API key

messages = [{"content": "What is the capital of France?", "role": "user"}]

# AI Badgr call with the premium tier
response = completion(
    model="aibadgr/premium",
    messages=messages,
)

print(response)
```

### Streaming

AI Badgr Streaming Completion

```python
import os
from litellm import completion

os.environ["AIBADGR_API_KEY"] = ""  # your AI Badgr API key

messages = [{"content": "Write a short poem about AI", "role": "user"}]

# AI Badgr call with streaming
response = completion(
    model="aibadgr/premium",
    messages=messages,
    stream=True,
)

for chunk in response:
    print(chunk)
```

### Using Different Tiers

AI Badgr Tier Selection

```python
import os
from litellm import completion

os.environ["AIBADGR_API_KEY"] = ""  # your AI Badgr API key

# Basic tier for simple tasks
basic_response = completion(
    model="aibadgr/basic",
    messages=[{"role": "user", "content": "Say hello"}],
)

# Normal tier for general use
normal_response = completion(
    model="aibadgr/normal",
    messages=[{"role": "user", "content": "Explain photosynthesis"}],
)

# Premium tier for complex tasks
premium_response = completion(
    model="aibadgr/premium",
    messages=[{"role": "user", "content": "Write a detailed technical explanation"}],
)
```

### Function Calling

AI Badgr Function Calling

```python
import os
from litellm import completion

os.environ["AIBADGR_API_KEY"] = ""  # your AI Badgr API key

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather in a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["location"],
            },
        },
    }
]

response = completion(
    model="aibadgr/premium",
    messages=[{"role": "user", "content": "What's the weather like in New York?"}],
    tools=tools,
    tool_choice="auto",
)

print(response)
```
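Because responses follow the OpenAI schema, tool calls come back as a `tool_calls` array on the assistant message and can be parsed the usual way. A minimal sketch of extracting the function name and arguments, using a hand-written message dict in OpenAI wire format standing in for `response.choices[0].message` (the `get_weather` call mirrors the example above):

```python
import json


def extract_tool_calls(message: dict) -> list[tuple[str, dict]]:
    """Return (name, parsed-arguments) pairs from an assistant message dict
    in OpenAI wire format (e.g. response.choices[0].message.model_dump())."""
    calls = []
    for call in message.get("tool_calls") or []:
        calls.append(
            (call["function"]["name"], json.loads(call["function"]["arguments"]))
        )
    return calls


# Example message shaped like a tool-call response to the request above:
assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_1",
            "type": "function",
            "function": {
                "name": "get_weather",
                "arguments": '{"location": "New York, NY", "unit": "fahrenheit"}',
            },
        }
    ],
}

for name, args in extract_tool_calls(assistant_message):
    print(name, args["location"])  # get_weather New York, NY
```

Note that `arguments` arrives as a JSON *string*, not a dict, so it must be decoded before dispatching to your own function.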

## Usage - LiteLLM Proxy

Add the following to your LiteLLM Proxy configuration file:

config.yaml

```yaml
model_list:
  - model_name: budget-basic
    litellm_params:
      model: aibadgr/basic
      api_key: os.environ/AIBADGR_API_KEY

  - model_name: budget-normal
    litellm_params:
      model: aibadgr/normal
      api_key: os.environ/AIBADGR_API_KEY

  - model_name: budget-premium
    litellm_params:
      model: aibadgr/premium
      api_key: os.environ/AIBADGR_API_KEY
```

Start your LiteLLM Proxy server:

Start LiteLLM Proxy

```shell
litellm --config config.yaml

# RUNNING on http://0.0.0.0:4000
```

AI Badgr via Proxy - Non-streaming

```python
from openai import OpenAI

# Initialize the client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # your proxy URL
    api_key="your-proxy-api-key",  # your proxy API key
)

# Non-streaming response
response = client.chat.completions.create(
    model="budget-premium",
    messages=[{"role": "user", "content": "Explain quantum computing in simple terms"}],
)

print(response.choices[0].message.content)
```

AI Badgr via Proxy - Streaming

```python
from openai import OpenAI

# Initialize the client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # your proxy URL
    api_key="your-proxy-api-key",  # your proxy API key
)

# Streaming response
response = client.chat.completions.create(
    model="budget-premium",
    messages=[{"role": "user", "content": "Write a Python function to sort a list"}],
    stream=True,
)

for chunk in response:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```

For more detailed information on using the LiteLLM Proxy, see the LiteLLM Proxy documentation.

## Embeddings

AI Badgr supports OpenAI-compatible embeddings for RAG, vector search, and semantic similarity tasks.

AI Badgr Embeddings

```python
import os
from litellm import embedding

os.environ["AIBADGR_API_KEY"] = ""  # your AI Badgr API key

# Generate embeddings
response = embedding(
    model="aibadgr/text-embedding-ada-002",  # or your embedding model name
    input=["The quick brown fox", "jumps over the lazy dog"],
)

print(response.data[0].embedding)  # access the embedding vector
```
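The returned vectors can then be compared directly, e.g. with cosine similarity for semantic search. A small self-contained sketch (no AI Badgr call involved; the toy vectors stand in for `response.data[i].embedding`):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Toy vectors standing in for real embeddings:
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
print(cosine_similarity(v1, v2))  # identical vectors -> 1.0
```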

### Supported Embedding Models

AI Badgr supports OpenAI-compatible embedding models. Use the model name that matches your AI Badgr embedding model:

```python
import litellm

# Example embedding call
response = litellm.embedding(
    model="aibadgr/text-embedding-ada-002",
    input=["Your text here"],
)
```

## Claude-Compatible Endpoint

AI Badgr supports the Anthropic /v1/messages endpoint for Claude-compatible API calls.

AI Badgr Claude-Compatible Messages

```python
import os
import litellm

os.environ["AIBADGR_API_KEY"] = ""  # your AI Badgr API key

# Use the Claude-compatible messages endpoint
response = litellm.completion(
    model="aibadgr/premium",
    messages=[{"role": "user", "content": "Hello!"}],
    custom_llm_provider="aibadgr",
)

print(response)
```
:::info
The /v1/messages endpoint automatically translates between OpenAI and Anthropic message formats, allowing seamless compatibility with tools that use Claude-style APIs.
:::

## Supported OpenAI Parameters

AI Badgr supports the following OpenAI-compatible parameters:

| Parameter | Type | Description |
|-----------|------|-------------|
| `messages` | array | **Required.** Array of message objects with `role` and `content` |
| `model` | string | **Required.** Model ID (e.g., `basic`, `normal`, `premium`) |
| `stream` | boolean | Optional. Enable streaming responses |
| `temperature` | float | Optional. Sampling temperature (0.0 to 2.0) |
| `top_p` | float | Optional. Nucleus sampling parameter |
| `max_tokens` | integer | Optional. Maximum tokens to generate |
| `frequency_penalty` | float | Optional. Penalize frequent tokens |
| `presence_penalty` | float | Optional. Penalize tokens based on presence |
| `stop` | string/array | Optional. Stop sequences |
| `n` | integer | Optional. Number of completions to generate |
| `tools` | array | Optional. List of available tools/functions |
| `tool_choice` | string/object | Optional. Control tool/function calling |
| `response_format` | object | Optional. Response format specification |
| `seed` | integer | Optional. Random seed for reproducibility |
| `user` | string | Optional. User identifier |
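Several of these parameters are commonly combined in a single request. One convenient pattern is to assemble them into a kwargs dict and range-check the sampling parameters before sending; `build_request` below is an illustrative helper for this document, not part of LiteLLM:

```python
def build_request(model: str, messages: list[dict], **params) -> dict:
    """Assemble keyword arguments for a completion call, range-checking the
    sampling parameters from the table above. Illustrative helper only."""
    t = params.get("temperature")
    if t is not None and not (0.0 <= t <= 2.0):
        raise ValueError("temperature must be within [0.0, 2.0]")
    p = params.get("top_p")
    if p is not None and not (0.0 <= p <= 1.0):
        raise ValueError("top_p must be within [0.0, 1.0]")
    return {"model": model, "messages": messages, **params}


req = build_request(
    "aibadgr/normal",
    [{"role": "user", "content": "Summarize this in one line."}],
    temperature=0.2,
    max_tokens=64,
    stop=["\n"],
)
# The dict can then be splatted into a call: completion(**req)
```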

## Advanced Usage

### Custom API Base

If you need to override the API base URL:

Custom API Base

```python
import litellm

response = litellm.completion(
    model="aibadgr/premium",
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://custom-aibadgr-endpoint.com/api/v1",
    api_key="your-api-key",
)
```

Or use the environment variable:

Custom API Base via Environment

```python
import os
import litellm

os.environ["AIBADGR_BASE_URL"] = "https://custom-aibadgr-endpoint.com/api/v1"
os.environ["AIBADGR_API_KEY"] = "your-api-key"

response = litellm.completion(
    model="aibadgr/premium",
    messages=[{"role": "user", "content": "Hello"}],
)
```

## Best Practices

### Choose the Right Tier

- `budget`: Most cost-effective option for high-volume, simple tasks
- `basic`: Use for simple, high-volume tasks where speed and cost matter more than sophistication
- `normal`: Good balance for most applications
- `premium`: Best for complex reasoning, production-critical workloads, or when quality is paramount
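If you route many kinds of tasks through one code path, the guidance above can be captured in a small helper that maps a rough task category to a model name. The categories here are illustrative, not a LiteLLM feature:

```python
def pick_tier(task: str) -> str:
    """Map a rough task category to an AI Badgr model name, following the
    tier guidance above. Categories are this document's own, for illustration."""
    tiers = {
        "bulk": "aibadgr/budget",      # high-volume, simple tasks
        "simple": "aibadgr/basic",     # basic generation, simple Q&A
        "general": "aibadgr/normal",   # most applications
        "complex": "aibadgr/premium",  # complex reasoning, production-critical
    }
    return tiers.get(task, "aibadgr/normal")  # default to the balanced tier


print(pick_tier("complex"))  # aibadgr/premium
```

The returned string can be passed straight to `completion(model=..., ...)`.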

### Cost Optimization

AI Badgr is designed for budget-conscious applications:

- Start with the `budget` or `basic` tier and upgrade only if needed
- Use the `normal` tier for most production workloads
- Reserve the `premium` tier for complex reasoning tasks
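One way to apply "start low, upgrade only if needed" is an escalation loop that retries a failed call on the next tier up. In the sketch below the `call` parameter stands in for `litellm.completion` so the example stays self-contained; in real use you would pass `completion` itself and catch specific LiteLLM exceptions:

```python
TIERS = ["aibadgr/budget", "aibadgr/basic", "aibadgr/normal", "aibadgr/premium"]


def complete_with_escalation(call, messages, start="aibadgr/basic"):
    """Try the cheapest acceptable tier first and move up only on failure.
    `call(model=..., messages=...)` stands in for litellm.completion."""
    last_error = None
    for model in TIERS[TIERS.index(start):]:
        try:
            return call(model=model, messages=messages)
        except Exception as exc:  # in practice, catch specific litellm errors
            last_error = exc
    raise last_error


# Demo with a fake backend that only "succeeds" on the premium tier:
def fake_call(model, messages):
    if model != "aibadgr/premium":
        raise RuntimeError("quality check failed")
    return f"answered by {model}"


print(complete_with_escalation(fake_call, [{"role": "user", "content": "hi"}]))
# answered by aibadgr/premium
```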

## Additional Resources