Playground
The Playground is an interactive environment for testing API requests safely and conveniently, without writing code. It provides a user-friendly interface for experimenting with AI models, configuring parameters, and generating ready-to-use code snippets for your applications.
Getting Started
Accessing the Playground
To access the Playground:
- Log in to your Binom.Router account
- Navigate to Playground in the main menu
- Alternatively, visit `/user/playground` directly
Authentication Requirement
The Playground requires authentication. Make sure you are logged in before accessing the page.
What You Can Do
The Playground enables you to:
- Test API requests interactively without writing code
- Experiment with different AI models to find the best fit for your use case
- Fine-tune parameters to optimize model responses
- Generate code snippets in multiple languages for your applications
- View conversation history to track your testing sessions
Chat Interface
The Chat Interface is the main component of the Playground, providing a conversational environment for testing the chat completions API.
Interface Components
1. Message Area
The central area displays the conversation history:
- User messages appear on the right side (styled in primary color)
- Assistant messages appear on the left side (styled in neutral color)
- Messages are displayed in chronological order
- Markdown content is rendered with syntax highlighting
2. Input Field
Located at the bottom of the interface:
- Multi-line text area for composing messages
- Send button (paper plane icon) to submit your message
- Keyboard shortcut: Press `Enter` to send (use `Shift + Enter` for a new line)
3. Streaming Indicators
When streaming mode is enabled:
- A typing indicator appears while the model generates responses
- Responses appear token-by-token in real-time
- A completion indicator shows when generation is finished
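The token-by-token display corresponds to consuming a server-sent-event stream. As a rough illustration of how such a stream decodes into text (assuming an OpenAI-compatible `delta` chunk format; this is a sketch, not the Playground's exact internals):

```python
import json

def parse_sse_tokens(lines):
    """Collect content tokens from "data:" server-sent-event lines."""
    tokens = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":  # sentinel marking the end of the stream
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        if "content" in delta:
            tokens.append(delta["content"])
    return tokens

# A tiny example stream, in the shape the Playground would receive:
stream = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
print("".join(parse_sse_tokens(stream)))  # Hello
```

Each decoded token is appended to the visible message as it arrives, which is why responses appear to "type themselves out".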
Conversation Features
| Feature | Description |
|---|---|
| Context Retention | The model maintains context from previous messages in the session |
| Auto-scroll | Chat automatically scrolls to the latest message |
| Copy to Clipboard | Click on any message to copy its content |
| Markdown Support | Code blocks, tables, lists, and formatting are rendered |
| Clear Chat | Reset the conversation history with a single click |
Using the Chat Interface
Step 1: Type your message in the input field
Explain quantum computing in simple terms
Step 2: Press Enter or click the Send button
Step 3: View the response in the chat area
Step 4: Continue the conversation by typing follow-up questions
How does it differ from classical computing?
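Behind the scenes, each turn maps onto the chat completions `messages` array; the model "remembers" earlier turns only because the full history is resent with every request. A minimal sketch (the assistant reply shown is a stand-in, not real model output):

```python
# Turn 1: the first user message.
messages = [
    {"role": "user", "content": "Explain quantum computing in simple terms"},
]

# After the model replies, the answer and your follow-up are appended:
messages.append({"role": "assistant", "content": "Quantum computing uses qubits..."})
messages.append({"role": "user", "content": "How does it differ from classical computing?"})

# The next request carries all three messages as context.
payload = {"model": "gpt-4o", "messages": messages}
print(len(payload["messages"]))  # 3
```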
Conversation History
The Playground maintains a session-based conversation history:
- History persists during your current session
- Clear the history anytime using the "Clear Chat" button
- Export conversations for reference (coming soon)
Model Selection
Choosing the right AI model is crucial for achieving optimal results. The Playground provides easy access to all available models.
Available Models
The Playground supports multiple AI model families:
| Model Family | Models | Best For |
|---|---|---|
| GPT | gpt-4o, gpt-4o-mini, gpt-3.5-turbo | General purpose, coding, analysis |
| Claude | claude-3-opus, claude-3-sonnet | Complex reasoning, long context |
| Gemini | gemini-2.5-flash, gemini-2.5-pro | Multimodal tasks, creative content |
How to Select a Model
Locate the Model Dropdown
- Find the model selector at the top of the Playground
- It displays the currently selected model
Browse Available Models
- Click on the dropdown to see all available models
- Models are grouped by provider (OpenAI, Anthropic, Google)
View Model Details
- Hover over any model to see a tooltip with:
- Model description
- Context window size
- Pricing information
- Recommended use cases
Select a Model
- Click on your desired model
- The dropdown updates to show your selection
- All subsequent requests will use this model
Model Comparison
| Feature | GPT-4o | Claude-3 Opus | Gemini 2.5 Pro |
|---|---|---|---|
| Context Window | 128K | 200K | 1M |
| Speed | Fast | Medium | Fast |
| Cost | Medium | High | Medium |
| Best For | General tasks | Complex reasoning | Multimodal |
Switching Models
You can switch models at any time during a conversation:
- The new model will be used for future messages
- Previous messages remain in the chat history
- Note that different models may interpret context differently
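In request terms, switching models changes only the `model` field; the accumulated history is resent unchanged. A hedged sketch of what that looks like:

```python
history = [
    {"role": "user", "content": "Explain quantum computing"},
    {"role": "assistant", "content": "Quantum computing uses qubits..."},
]

def build_request(model, history, new_message):
    # Only the model field changes; the history is carried over as-is.
    return {
        "model": model,
        "messages": history + [{"role": "user", "content": new_message}],
    }

before = build_request("gpt-4o", history, "Summarize that in one sentence")
after = build_request("claude-3-opus", history, "Summarize that in one sentence")
print(before["messages"] == after["messages"])  # True
```

Because the new model receives messages it did not generate, its interpretation of that context may differ from the original model's.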
Parameter Tuning
The Parameter Tuning panel allows you to fine-tune model behavior by adjusting various configuration options. Access it from the sidebar on the right side of the Playground.
Accessing the Parameter Panel
- Click the "Parameters" button in the toolbar
- The panel slides out from the right side
- Adjust parameters as needed
- Changes apply immediately to new requests
Core Parameters
Temperature
Range: 0.0 - 2.0
Default: 0.7
Controls the randomness of the model's responses:
| Value | Behavior | Use Case |
|---|---|---|
| 0.0 - 0.3 | Very focused, deterministic | Factual queries, code generation |
| 0.4 - 0.7 | Balanced, creative but controlled | General conversation, explanations |
| 0.8 - 1.2 | Highly creative, varied | Creative writing, brainstorming |
| 1.3 - 2.0 | Very random, experimental | Unusual use cases, testing |
Example:
Temperature: 0.2 → "Paris is the capital of France."
Temperature: 1.0 → "Paris, the glittering jewel of France, serves as the nation's capital..."
Max Tokens
Range: 1 - 4096 (varies by model)
Default: 2048
Sets the maximum length of the model's response:
- Lower values (100-500): Short, concise answers
- Medium values (500-1500): Detailed explanations
- Higher values (1500+): Long-form content, essays
Tip: Adjust based on your expected response length to optimize costs.
Top P (Nucleus Sampling)
Range: 0.0 - 1.0
Default: 1.0
Alternative to temperature for controlling diversity:
- 1.0: Considers all tokens (default behavior)
- 0.9: Considers top 90% of probability mass
- 0.5: More conservative, focused responses
Note: Generally, use either Temperature OR Top P, not both.
System Message
Type: Text input
Default: "You are a helpful assistant."
Sets the behavior and personality of the assistant:
Examples:
"You are a senior software engineer helping with code reviews."
"You are a creative writing assistant specializing in science fiction."
"You are a math tutor explaining concepts to beginners."
Advanced Parameters
Frequency Penalty
Range: -2.0 to 2.0
Default: 0.0
Reduces repetition of the same content:
| Value | Effect |
|---|---|
| Negative (-2.0 to -0.1) | Encourages repetition |
| 0.0 | No effect (default) |
| Positive (0.1 to 2.0) | Discourages repetition |
Presence Penalty
Range: -2.0 to 2.0
Default: 0.0
Encourages talking about new topics:
| Value | Effect |
|---|---|
| Negative (-2.0 to -0.1) | Stays on current topic |
| 0.0 | No effect (default) |
| Positive (0.1 to 2.0) | Introduces new concepts |
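Both penalties are plain numeric fields on the request; a small sketch (the `set_penalties` helper is illustrative, not part of any SDK) that also enforces the documented range:

```python
def set_penalties(payload, frequency_penalty=0.0, presence_penalty=0.0):
    """Attach repetition controls to a request; both must lie in [-2.0, 2.0]."""
    for name, value in [("frequency_penalty", frequency_penalty),
                        ("presence_penalty", presence_penalty)]:
        if not -2.0 <= value <= 2.0:
            raise ValueError(f"{name} must be between -2.0 and 2.0, got {value}")
        payload[name] = value
    return payload

request = set_penalties(
    {"model": "gpt-4o", "messages": []},
    frequency_penalty=0.5,  # discourage repeating the same phrasing
)
print(request["presence_penalty"])  # 0.0
```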
Stop Sequences
Type: Array of strings
Default: Empty
Specifies sequences where the model should stop generating:
Example:
["\n\n", "END:", "###"]
The model stops when it encounters any of these sequences.
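The cutoff happens server-side, but the effect is easy to picture: the output ends just before the first stop sequence that would have appeared. A client-side illustration of that rule:

```python
def truncate_at_stop(text, stop_sequences):
    # Generation ends just before the first stop sequence that appears.
    cut = min((text.find(s) for s in stop_sequences if s in text),
              default=len(text))
    return text[:cut]

print(truncate_at_stop("Fact one.\n\nFact two.", ["\n\n", "END:", "###"]))  # Fact one.
```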
Saving Parameter Presets
You can save your parameter configurations as presets:
- Configure your desired parameters
- Click "Save Preset" in the parameter panel
- Enter a name for the preset
- Click Save
Your presets appear in a dropdown for quick access.
Code Generation
The Playground can automatically generate ready-to-use code snippets based on your configured requests. This feature accelerates development by providing working code in your preferred programming language.
Accessing Code Generation
After completing a request in the Playground:
- Click the "Code" button in the message toolbar
- A modal appears with generated code snippets
- Select your preferred programming language
- Copy the code to your clipboard
Supported Languages
| Language | Use Case |
|---|---|
| cURL | Command-line testing, shell scripts |
| Python | Data science, ML applications |
| C# | .NET applications, web services |
| JavaScript | Node.js, frontend applications |
| Java | Enterprise applications |
cURL Example
```bash
curl -X POST https://api.binom.router/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer binom_sk_your_api_key_here" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Explain quantum computing"}
    ],
    "temperature": 0.7,
    "max_tokens": 2048
  }'
```
Python Example
```python
import requests

url = "https://api.binom.router/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer binom_sk_your_api_key_here"
}
data = {
    "model": "gpt-4o",
    "messages": [
        {"role": "user", "content": "Explain quantum computing"}
    ],
    "temperature": 0.7,
    "max_tokens": 2048
}

response = requests.post(url, headers=headers, json=data)
result = response.json()
print(result["choices"][0]["message"]["content"])
```
C# Example
```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class ChatCompletionClient
{
    private static readonly HttpClient client = new HttpClient();
    private const string ApiKey = "binom_sk_your_api_key_here";
    private const string Endpoint = "https://api.binom.router/v1/chat/completions";

    public async Task<string> SendChatRequestAsync(string userMessage)
    {
        var requestBody = new
        {
            model = "gpt-4o",
            messages = new[]
            {
                new { role = "user", content = userMessage }
            },
            temperature = 0.7,
            max_tokens = 2048
        };

        var jsonContent = JsonConvert.SerializeObject(requestBody);
        var content = new StringContent(jsonContent, Encoding.UTF8, "application/json");

        client.DefaultRequestHeaders.Clear();
        client.DefaultRequestHeaders.Add("Authorization", $"Bearer {ApiKey}");

        var response = await client.PostAsync(Endpoint, content);
        var responseString = await response.Content.ReadAsStringAsync();

        dynamic result = JsonConvert.DeserializeObject(responseString);
        return result.choices[0].message.content;
    }
}
```
JavaScript Example
```javascript
const axios = require('axios');

const API_KEY = 'binom_sk_your_api_key_here';
const ENDPOINT = 'https://api.binom.router/v1/chat/completions';

async function sendChatRequest(userMessage) {
  try {
    const response = await axios.post(ENDPOINT, {
      model: 'gpt-4o',
      messages: [
        { role: 'user', content: userMessage }
      ],
      temperature: 0.7,
      max_tokens: 2048
    }, {
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${API_KEY}`
      }
    });
    return response.data.choices[0].message.content;
  } catch (error) {
    console.error('Error:', error.response?.data || error.message);
    throw error;
  }
}

// Usage
sendChatRequest('Explain quantum computing')
  .then(response => console.log(response))
  .catch(error => console.error(error));
```
Java Example
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

import com.fasterxml.jackson.databind.ObjectMapper;

public class ChatCompletionClient {
    private static final String API_KEY = "binom_sk_your_api_key_here";
    private static final String ENDPOINT = "https://api.binom.router/v1/chat/completions";

    private final HttpClient client;
    private final ObjectMapper mapper;

    public ChatCompletionClient() {
        this.client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(30))
                .build();
        this.mapper = new ObjectMapper();
    }

    public String sendChatRequest(String userMessage) throws Exception {
        String requestBody = String.format("""
                {
                  "model": "gpt-4o",
                  "messages": [
                    {"role": "user", "content": "%s"}
                  ],
                  "temperature": 0.7,
                  "max_tokens": 2048
                }
                """, userMessage.replace("\"", "\\\""));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(ENDPOINT))
                .header("Content-Type", "application/json")
                .header("Authorization", "Bearer " + API_KEY)
                .POST(HttpRequest.BodyPublishers.ofString(requestBody))
                .build();

        HttpResponse<String> response = client.send(
                request,
                HttpResponse.BodyHandlers.ofString()
        );

        // Parse and return the message content
        // (Implementation depends on your JSON library)
        return response.body();
    }
}
```
Code Customization
The generated code includes:
- Your selected model
- Configured parameters (temperature, max tokens, etc.)
- Your API key placeholder (replace with your actual key)
- Proper error handling
Note: Remember to replace `binom_sk_your_api_key_here` with your actual API key before using the code in production.
Best Practices
Testing Strategies
1. Start Simple
Begin with basic requests to understand model behavior:
"What is 2 + 2?"
2. Iterate on Prompts
Refine your prompts based on results:
- V1: "Write code for a REST API"
- V2: "Write C# code for a REST API using ASP.NET Core"
- V3: "Write C# code for a REST API using ASP.NET Core with authentication"
3. Test Edge Cases
Verify behavior with unusual inputs:
- Empty messages
- Very long messages
- Special characters
- Multiple languages
Parameter Optimization
For Code Generation
```json
{
  "temperature": 0.2,
  "max_tokens": 1500,
  "top_p": 0.9
}
```
For Creative Writing
```json
{
  "temperature": 0.9,
  "max_tokens": 2048,
  "frequency_penalty": 0.5
}
```
For Factual Queries
```json
{
  "temperature": 0.1,
  "max_tokens": 500,
  "top_p": 0.5
}
```
Cost Management
Use Smaller Models for Testing
- Test with `gpt-4o-mini` or `gpt-3.5-turbo` first
- Switch to larger models only when needed
Limit Token Usage
- Set appropriate `max_tokens` values
- Avoid unnecessarily long responses
Batch Similar Requests
- Test multiple similar prompts in sequence
- The model maintains context, reducing repetition
Security Considerations
Never Share API Keys
- API keys in generated code are placeholders
- Store keys securely (environment variables, secret managers)
Sanitize Inputs
- Validate user input before sending to the API
- Remove sensitive information
Review Generated Code
- Always review generated code before deployment
- Test thoroughly in your environment
Troubleshooting
Common Issues
| Issue | Solution |
|---|---|
| No response | Check your API key and internet connection |
| Unexpected behavior | Review system message and temperature settings |
| Slow responses | Try a faster model or reduce max_tokens |
| Repetitive output | Increase frequency_penalty or presence_penalty |