Radient API Reference
Welcome to the Radient API documentation. This section provides detailed information about our API endpoints, authentication methods, request/response formats, and usage examples.
Overview
The Radient API allows developers to programmatically interact with Radient services, including:
- Managing your Radient Pass and credits.
- Accessing AI models and tools directly.
- Integrating Radient's capabilities into your own applications and workflows.
Authentication
Authentication with the Radient API is typically done via API keys. You will need to generate an API key from your Radient account dashboard.
Include your API key in the `Authorization` header as a Bearer token:

`Authorization: Bearer YOUR_RADIENT_API_KEY`
Certain public endpoints, like listing models or downloading agents from the marketplace, may not require authentication.
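For example, in Python with `requests` (a minimal sketch; the base URL below is a placeholder, not the confirmed production URL):

import requests

api_key = "YOUR_RADIENT_API_KEY"
base_url = "https://api.radient.com/v1"  # Placeholder; replace with the actual Radient API base URL

# Every authenticated request carries the key as a Bearer token.
headers = {"Authorization": f"Bearer {api_key}"}

response = requests.get(f"{base_url}/images/providers", headers=headers)
print(response.status_code, response.json())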
Endpoints
This section details the direct HTTP endpoints available through the Radient API. For higher-level client libraries and tool integrations, see the "Client Libraries & Tools" section.
(The Radient API is continuously evolving. More endpoints will be documented here as they become available.)
Chat Completions
These endpoints allow you to interact with chat-based language models.
POST /v1/chat/completions
Proxies OpenAI-compatible chat completions requests.
Request Body: OpenAIChatCompletionRequest
(Refer to OpenAI documentation for standard fields. Radient-specific extensions or behaviors will be noted here.)
Field | Type | Description | Required |
---|---|---|---|
model | string | The ID of the model to use (e.g., `auto`). | Yes |
messages | array of objects | A list of message objects. Each with role and content . | Yes |
frequency_penalty | number (optional) | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency. | No |
logit_bias | map (optional) | Modify the likelihood of specified tokens appearing. | No |
logprobs | boolean (optional) | Whether to return log probabilities of output tokens. | No |
top_logprobs | integer (optional) | Number of most likely tokens to return at each token position. | No |
max_tokens | integer (optional) | Maximum number of tokens to generate in the chat completion. | No |
n | integer (optional, default: 1) | How many chat completion choices to generate for each input message. | No |
presence_penalty | number (optional) | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far. | No |
response_format | object (optional) | An object specifying the format that the model must output. e.g. { "type": "json_object" } | No |
seed | integer (optional) | Seed for deterministic sampling. | No |
stop | string or array of strings (optional) | Up to 4 sequences where the API will stop generating further tokens. | No |
stream | boolean (optional, default: false) | If set, partial message deltas are streamed back as they are generated instead of a single complete response. | No |
temperature | number (optional, default: 1) | Sampling temperature, between 0 and 2. Higher values like 0.8 will make the output more random. | No |
top_p | number (optional, default: 1) | Nucleus sampling. The model considers results of tokens with top_p probability mass. | No |
tools | array of objects (optional) | A list of tools the model may call. | No |
tool_choice | string or object (optional) | Controls which (if any) tool is called by the model. | No |
user | string (optional) | A unique identifier representing your end-user. | No |
parallel_tool_calls | boolean (optional, default: true) | Whether to enable parallel tool calls. | No |
Responses:
- 200 OK: `OpenAIChatCompletionResponse` (matches OpenAI's response structure).
  - `id` (string): A unique identifier for the chat completion.
  - `object` (string): The object type, typically `chat.completion`.
  - `created` (integer): The Unix timestamp (in seconds) of when the chat completion was created.
  - `model` (string): The model used for the chat completion.
  - `choices` (array): A list of chat completion choices. Each choice has:
    - `index` (integer): The index of the choice in the list.
    - `message` (object): The message generated by the model, with `role` and `content`.
    - `finish_reason` (string): The reason the model stopped generating tokens (e.g., `stop`, `length`, `tool_calls`).
  - `usage` (object, optional): Usage statistics for the request.
    - `prompt_tokens` (integer)
    - `completion_tokens` (integer)
    - `total_tokens` (integer)
- 400 Bad Request: Invalid request payload.
- 401 Unauthorized: API key is missing or invalid.
- 500 Internal Server Error: An error occurred on the server.
Example Request (Python, using `requests`):
import requests

api_key = "YOUR_RADIENT_API_KEY"
base_url = "https://api.radient.com/v1"  # Replace with the actual Radient API base URL

headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json"
}

payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
}

response = requests.post(f"{base_url}/chat/completions", headers=headers, json=payload)

if response.status_code == 200:
    print(response.json())
else:
    print(f"Error: {response.status_code}")
    print(response.text)
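Example Streaming Request (Python, using `requests`) — a sketch only: because this endpoint proxies OpenAI-compatible requests, streaming responses are assumed to arrive as OpenAI-style server-sent events (`data: {...}` lines terminated by `data: [DONE]`); verify this against your account's actual behavior. The base URL is a placeholder.

import json
import requests

api_key = "YOUR_RADIENT_API_KEY"
base_url = "https://api.radient.com/v1"  # Placeholder; replace with the actual base URL

payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Write a haiku about the sea."}],
    "stream": True,
}

# stream=True keeps the HTTP connection open so chunks can be read as they arrive.
with requests.post(
    f"{base_url}/chat/completions",
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    json=payload,
    stream=True,
) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        # Assumed SSE framing: each event line starts with "data: ".
        if not line or not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        print(delta.get("content", ""), end="", flush=True)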
POST /v1/completions
Proxies OpenAI-compatible legacy completions requests. This endpoint is generally for older models or specific use cases. Prefer `/v1/chat/completions` for newer models.
Request Body: OpenAICompletionRequest
Field | Type | Description | Required |
---|---|---|---|
model | string | The ID of the model to use. | Yes |
prompt | string or array of strings/tokens | The prompt(s) to generate completions for. | Yes |
best_of | integer (optional, default: 1) | Generates best_of completions server-side and returns the "best". | No |
echo | boolean (optional, default: false) | Echo back the prompt in addition to the completion. | No |
frequency_penalty | number (optional, default: 0) | Number between -2.0 and 2.0. | No |
logit_bias | map (optional) | Modify the likelihood of specified tokens appearing. | No |
logprobs | integer (optional) | Include the log probabilities on the logprobs most likely tokens. | No |
max_tokens | integer (optional, default: 16) | The maximum number of tokens to generate in the completion. | No |
n | integer (optional, default: 1) | How many completions to generate for each prompt. | No |
presence_penalty | number (optional, default: 0) | Number between -2.0 and 2.0. | No |
seed | integer (optional) | Seed for deterministic sampling. | No |
stop | string or array of strings (optional) | Up to 4 sequences where the API will stop generating further tokens. | No |
stream | boolean (optional, default: false) | Whether to stream back partial progress. | No |
suffix | string (optional) | The suffix that comes after a completion of inserted text. | No |
temperature | number (optional, default: 1) | Sampling temperature, between 0 and 2. | No |
top_p | number (optional, default: 1) | Nucleus sampling. | No |
user | string (optional) | A unique identifier representing your end-user. | No |
Responses:
- 200 OK: `OpenAICompletionResponse` (matches OpenAI's legacy completion response structure).
  - `id` (string)
  - `object` (string): Typically `text_completion`.
  - `created` (integer)
  - `model` (string)
  - `choices` (array): Each choice has:
    - `text` (string): The completion text.
    - `index` (integer)
    - `logprobs` (object, optional)
    - `finish_reason` (string)
  - `usage` (object, optional)
- 400 Bad Request, 401 Unauthorized, 500 Internal Server Error
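Example Request (Python, using `requests`) — a brief sketch; the base URL is a placeholder and the model ID shown is illustrative only, not a confirmed Radient model:

import requests

api_key = "YOUR_RADIENT_API_KEY"
base_url = "https://api.radient.com/v1"  # Placeholder; replace with the actual base URL

payload = {
    "model": "gpt-3.5-turbo-instruct",  # Illustrative model ID
    "prompt": "Say hello in three languages:",
    "max_tokens": 64,
    "temperature": 0.7,
}

response = requests.post(
    f"{base_url}/completions",
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    json=payload,
)

if response.ok:
    data = response.json()
    # Legacy completions return the generated text directly on each choice.
    print(data["choices"][0]["text"])
else:
    print(f"Error: {response.status_code} {response.text}")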
Image Generation (via API Endpoint)
These endpoints allow direct interaction with image generation services.
POST /v1/images/generate
Generates images based on a prompt and other parameters. This endpoint supports both synchronous and asynchronous operation. For synchronous requests (`sync_mode: true`), the response will contain the image data directly if generation is quick. For asynchronous requests (`sync_mode: false`, or if generation takes longer), the response will include a `request_id` which can be used with the `/v1/images/status` endpoint to poll for results.
Request Body: ImageGenerationRequest
Field | Type | Description | Required | Default |
---|---|---|---|---|
prompt | string | Text description of the image to generate. | Yes | |
num_images | integer | Number of images to generate. | No | 1 |
image_size | string | Desired image size/aspect ratio. Supported: square_hd , square , portrait_4_3 , portrait_16_9 , landscape_4_3 , landscape_16_9 . | No | square_hd |
source_url | string | URL or base64 data URI of an image for image-to-image generation. | No | |
strength | float | For image-to-image, controls influence of source_url (0.0 to 1.0). | No | |
sync_mode | boolean | If true , attempts direct results. If false or lengthy, returns request_id for polling. | No | true |
provider | string | Specify a particular image generation provider. | No | |
seed | integer | Seed for reproducible generation. | No | |
guidance_scale | float | How closely the generation should follow the prompt. | No | |
num_inference_steps | integer | Number of diffusion steps. | No | |
Responses:
- 200 OK: `ImageGenerationResponse`
  - `request_id` (string): Unique ID for the generation request.
  - `status` (string): Current status (e.g., "COMPLETED", "PROCESSING", "FAILED").
  - `images` (array of `RadientImage`, optional): List of generated images if `status` is "COMPLETED".
    - `url` (string): URL to the generated image.
    - `width` (integer, optional): Image width.
    - `height` (integer, optional): Image height.
  - `provider` (string, optional): The provider used for generation.
  - `error` (string, optional): Error message if generation failed.
- 400 Bad Request, 401 Unauthorized, 500 Internal Server Error
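Example Request (Python, using `requests`) — a minimal sketch with a placeholder base URL:

import requests

api_key = "YOUR_RADIENT_API_KEY"
base_url = "https://api.radient.com/v1"  # Placeholder; replace with the actual base URL

payload = {
    "prompt": "A watercolor painting of a lighthouse at dawn",
    "num_images": 1,
    "image_size": "landscape_16_9",
    "sync_mode": True,  # Ask for results directly if generation is quick
}

response = requests.post(
    f"{base_url}/images/generate",
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
result = response.json()

if result.get("status") == "COMPLETED":
    for image in result.get("images", []):
        print(image["url"])
else:
    # Fall back to polling /v1/images/status with this ID (see below).
    print(f"Pending request: {result['request_id']} (status: {result.get('status')})")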
GET /v1/images/status
Retrieves the status and results of an image generation request.
Query Parameters:
Parameter | Type | Description | Required |
---|---|---|---|
request_id | string | The ID of the image generation request (obtained from /images/generate ). | Yes |
provider | string | The provider used for the original request, if specified. | No |
Responses:
- 200 OK: `ImageGenerationResponse` (same structure as the response from `/v1/images/generate`).
- 400 Bad Request, 401 Unauthorized, 404 Not Found, 500 Internal Server Error
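Example Polling Loop (Python, using `requests`) — a sketch: the 2-second interval and 30-attempt limit are arbitrary illustration values, and the base URL is a placeholder:

import time
import requests

api_key = "YOUR_RADIENT_API_KEY"
base_url = "https://api.radient.com/v1"  # Placeholder; replace with the actual base URL
request_id = "REQUEST_ID_FROM_GENERATE"  # Returned by /v1/images/generate

headers = {"Authorization": f"Bearer {api_key}"}

# Poll every 2 seconds for up to ~1 minute (arbitrary limits for illustration).
for _ in range(30):
    response = requests.get(
        f"{base_url}/images/status",
        headers=headers,
        params={"request_id": request_id},
    )
    response.raise_for_status()
    result = response.json()
    status = result.get("status")
    if status == "COMPLETED":
        for image in result.get("images", []):
            print(image["url"])
        break
    if status == "FAILED":
        print(f"Generation failed: {result.get('error')}")
        break
    time.sleep(2)
else:
    print("Timed out waiting for the image to be generated.")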
GET /v1/images/providers
Lists available image generation providers.
Responses:
- 200 OK: `ProviderListResponse`
  - `providers` (array of strings): List of provider IDs/names.
- 401 Unauthorized, 500 Internal Server Error
Web Search (via API Endpoint)
These endpoints allow direct interaction with web search services.
GET /v1/search
Performs a web search using the specified query and parameters.
Query Parameters:
Parameter | Type | Description | Required | Default |
---|---|---|---|---|
query | string | The search query. | Yes | |
max_results | integer | Maximum number of search results to return. | No | 10 |
provider | string | Specify a particular web search provider. | No | |
include_raw | boolean | Whether to include full raw content of search results (if available from the provider). | No | false |
search_depth | string | Search depth (e.g., "basic", "advanced"). Provider-dependent. | No | |
domains | string | Comma-separated list of domains to restrict the search to. | No | |
Responses:
- 200 OK: `WebSearchResponse`
  - `query` (string): The original search query.
  - `results` (array of `RadientSearchResult`): List of search results.
    - `title` (string): Title of the search result.
    - `url` (string): URL of the search result.
    - `content` (string): Snippet or summary of the content.
    - `raw_content` (string, optional): Full raw content if `include_raw` was true and available.
  - `provider` (string, optional): The provider used for the search.
  - `error` (string, optional): Error message if the search failed.
- 400 Bad Request, 401 Unauthorized, 500 Internal Server Error
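Example Request (Python, using `requests`) — a short sketch with a placeholder base URL:

import requests

api_key = "YOUR_RADIENT_API_KEY"
base_url = "https://api.radient.com/v1"  # Placeholder; replace with the actual base URL

response = requests.get(
    f"{base_url}/search",
    headers={"Authorization": f"Bearer {api_key}"},
    params={"query": "open source agent frameworks", "max_results": 5},
)
response.raise_for_status()
data = response.json()

for result in data.get("results", []):
    print(f"{result['title']}\n  {result['url']}\n  {result['content']}\n")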
GET /v1/search/providers
Lists available web search providers.
Responses:
- 200 OK: `ProviderListResponse`
  - `providers` (array of strings): List of provider IDs/names.
- 401 Unauthorized, 500 Internal Server Error
Client Libraries & Tools (Python Example)
Radient provides client libraries and tool integrations to simplify interaction with the API. The following examples demonstrate usage with the Python `local-operator` library.
Image Generation Tool
The `generate_image` tool allows you to create images from text prompts.
Function Signature (Conceptual):
generate_image(prompt: str, image_size: str = "landscape_4_3", num_inference_steps: int = 28, seed: Optional[int] = None, guidance_scale: float = 5.0, num_images: int = 1) -> RadientImageGenerationResponse
Parameters:
- `prompt` (str): Text description of the image.
- `image_size` (str, optional): Size/aspect ratio. Defaults to `"landscape_4_3"`.
  - Supported: `"square_hd"`, `"square"`, `"portrait_4_3"`, `"portrait_16_9"`, `"landscape_4_3"`, `"landscape_16_9"`.
- `num_inference_steps` (int, optional): Number of diffusion steps. Defaults to 28.
- `seed` (Optional[int], optional): Seed for reproducibility.
- `guidance_scale` (float, optional): How closely to follow the prompt. Defaults to 5.0.
- `num_images` (int, optional): Number of images to generate. Defaults to 1.
Returns: `RadientImageGenerationResponse` (see schema under the `/v1/images/generate` endpoint).
Example (using `local-operator` tools):
# Assuming 'tools' is an initialized ToolRegistry from local_operator
# and radient_client is configured.
try:
    response = tools.generate_image(
        prompt="A futuristic cityscape at sunset, synthwave style",
        image_size="landscape_16_9",
        num_images=1
    )
    if response.images:
        for image in response.images:
            print(f"Generated image URL: {image.url}")
            # Code to download and save the image would go here
    else:
        print(f"Image generation status: {response.status}")
        if response.error:
            print(f"Error: {response.error}")
except RuntimeError as e:
    print(f"An error occurred: {e}")
Alter Image Tool
The `generate_altered_image` tool allows you to modify an existing image based on a text prompt.
Function Signature (Conceptual):
generate_altered_image(image_path: str, prompt: str, strength: float = 0.95, num_inference_steps: int = 40, seed: Optional[int] = None, guidance_scale: float = 7.5, num_images: int = 1) -> RadientImageGenerationResponse
Parameters:
- `image_path` (str): Path to the local image file to modify.
- `prompt` (str): Text description of how to modify the image.
- `strength` (float, optional): Strength of modification (0.0-1.0). Defaults to 0.95.
- `num_inference_steps` (int, optional): Number of inference steps. Defaults to 40.
- `seed` (Optional[int], optional): Seed for reproducibility.
- `guidance_scale` (float, optional): How closely to follow the prompt. Defaults to 7.5.
- `num_images` (int, optional): Number of images to generate. Defaults to 1.
Returns: `RadientImageGenerationResponse` (see schema under the `/v1/images/generate` endpoint).
Example (using `local-operator` tools):
# Assuming 'tools' is an initialized ToolRegistry
try:
    # First, ensure you have an image file, e.g., 'input_image.jpg'
    response = tools.generate_altered_image(
        image_path="path/to/your/input_image.jpg",
        prompt="Make the cat wear a wizard hat",
        strength=0.8
    )
    if response.images:
        for image in response.images:
            print(f"Altered image URL: {image.url}")
            # Code to download and save the image
    else:
        print(f"Image alteration status: {response.status}")
        if response.error:
            print(f"Error: {response.error}")
except FileNotFoundError:
    print("Error: Input image not found.")
except RuntimeError as e:
    print(f"An error occurred: {e}")
Web Search Tool
The `search_web` tool allows you to perform internet searches.
Function Signature (Conceptual):
search_web(query: str, search_engine: str = "google", max_results: int = 20) -> RadientSearchResponse
Parameters:
- `query` (str): The search query.
- `search_engine` (str, optional): Search engine to use (e.g., `"google"`, `"bing"`). Defaults to `"google"`. (Provider-dependent.)
- `max_results` (int, optional): Maximum number of results. Defaults to 20.
Returns: `RadientSearchResponse` (see schema under the `/v1/search` endpoint).
Example (using `local-operator` tools):
# Assuming 'tools' is an initialized ToolRegistry
try:
    response = tools.search_web(
        query="latest advancements in AI",
        max_results=5
    )
    print(f"Search Query: {response.query}")
    if response.results:
        for i, result in enumerate(response.results):
            print(f"\nResult {i+1}:")
            print(f"  Title: {result.title}")
            print(f"  URL: {result.url}")
            print(f"  Snippet: {result.content}")
    elif response.error:
        print(f"Search Error: {response.error}")
    else:
        print("No results found.")
except RuntimeError as e:
    print(f"An error occurred: {e}")
We are working diligently to bring you a robust and well-documented API. Please check back for updates.
If you have specific requirements or questions about upcoming API capabilities, feel free to contact us or email us.