
LucidQuery AI API Documentation

Complete reference for integrating with our advanced AI models via a unified API

Introduction

Welcome to the official documentation for the LucidQuery AI API. This unified API provides access to our suite of advanced AI models, each optimized for different use cases while maintaining a consistent, OpenAI-compatible interface.

Our API supports multiple state-of-the-art models through a single endpoint, allowing you to choose the best model for your specific needs without changing your integration code. Whether you need advanced reasoning capabilities or specialized coding assistance, we have you covered.

Key Advantage: One API, multiple specialized models. Simply change the model parameter in your request to access different AI capabilities while maintaining the same request/response format you're familiar with.

Available Models

Choose the right model for your use case. Each model is optimized for different scenarios while maintaining the same API interface.

lucidnova-rf1-100b
  Specialization: General Reasoning
  Best for: Complex analysis, research, general tasks
  Key features:
    • Hybrid diffusion reasoning + autoregressive generation
    • Fast, transparent reasoning process
    • Real-time web access
    • Self-tuning parameters

lucidquery-nexus-coder
  Specialization: Code Generation
  Best for: Programming, debugging, code review
  Key features:
    • Hybrid autoregressive reasoning + diffusion generation
    • Specialized programming knowledge
    • Multi-language code generation
    • Advanced debugging and optimization

Pricing: Both models use the same token-based pricing structure, so you can switch between them freely without worrying about different billing rates.

Core Features

Our unified API provides consistent access to specialized AI capabilities:

Unified API

Access multiple specialized AI models through a single OpenAI-compatible endpoint. Switch between models by changing one parameter.

Specialized Models

Choose between advanced reasoning capabilities or specialized coding assistance, each optimized for their respective use cases.

Streaming Support

Real-time response streaming for better user experience, with full OpenAI compatibility for easy integration.

Transparent Reasoning

Get detailed reasoning pathways via the <think> section to understand how the AI arrives at its conclusions.

Getting Started

To begin using the LucidQuery AI API, follow these simple steps:

  1. Create an account on the LucidQuery platform if you don't already have one.
  2. Generate an API key from the API management page in your dashboard.
  3. Install the client library for your preferred programming language, or use standard HTTP request libraries.
  4. Make your first API call following the examples in this documentation.

Tip: Our API is compatible with the OpenAI API format, making it easy to switch from other AI providers with minimal changes to your code.

Authentication

All requests to the LucidQuery AI API require authentication using an API key. Your API key should be included in the HTTP headers of each request.

Add your API key to requests using the Authorization header:

HTTP Header
Authorization: Bearer YOUR_API_KEY
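
For illustration, here is a minimal sketch of an authenticated request using Python's requests library; the endpoint, header format, and model name all come from this documentation:

# A minimal authenticated request using the requests library
import requests

response = requests.post(
    "https://lucidquery.com/api/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
    json={
        "model": "lucidnova-rf1-100b",
        "messages": [{"role": "user", "content": "Hello World!"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])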

Important: Never share your API keys or include them in client-side code. If you believe your API key has been compromised, you should revoke it immediately from your dashboard and generate a new one.

API Endpoints

POST https://lucidquery.com/api/v1/chat/completions

This endpoint allows you to generate AI responses for any type of query. The system automatically determines the optimal approach based on the content of your request.

Example request:

Request Body
{
  "model": "lucidnova-rf1-100b",
  "messages": [
    {
      "role": "user",
      "content": "Hello World!"
    }
  ],
  "stream": false
}

In this example, the system will process the query and respond appropriately. Our AI models automatically adjust their internal parameters based on the query context, providing an optimal response without requiring any special configuration.

Request Parameters

The API accepts the following parameters in your request:

model (string): The AI model to use. Options: lucidnova-rf1-100b (general reasoning) or lucidquery-nexus-coder (coding specialist).
messages (array): An array of message objects representing the conversation history. Each message has a role (either "user" or "assistant") and content (the message text).
stream (boolean): If set to true, responses will be streamed as they're generated. Default is false.
max_tokens (integer): The maximum number of tokens to generate in the response. Default is 8000.
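
For example, a request that combines these parameters might look like this (the max_tokens value shown is illustrative):

Request Body
{
  "model": "lucidquery-nexus-coder",
  "messages": [
    {
      "role": "user",
      "content": "Write a Python function to calculate fibonacci numbers"
    }
  ],
  "stream": false,
  "max_tokens": 2000
}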

Note: Unlike other AI APIs, our models do not use a temperature parameter. The models' self-tuning capabilities automatically adjust contextual parameters based on your query type for optimal response quality.

Response Format

The API returns responses in JSON format, with a structure similar to other AI APIs for easy integration:

Standard Response (Non-Streaming)

Response Body
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1681841142,
  "model": "lucidnova-rf1-100b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "\nAnalyzing the greeting 'Hello World!'...\nThis is a simple greeting that doesn't require any specialized knowledge or real-time information.\nI'll respond with an appropriate friendly greeting.\n\n\nHello! How can I assist you today? I'm here to help with a wide range of tasks. Feel free to ask me anything!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 74,
    "total_tokens": 84
  }
}

Note: The response includes a <think>...</think> section that shows the model's reasoning process. This provides transparency into how the AI arrived at its answer. If you don't want this in your application's output, you can strip it programmatically.
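
A minimal sketch of stripping the reasoning section in Python, assuming the tags always appear as a single matched <think>...</think> pair:

# A minimal sketch for removing the <think>...</think> reasoning section;
# assumes the tags appear as one matched pair.
import re

def strip_reasoning(content: str) -> str:
    return re.sub(r"<think>.*?</think>\s*", "", content, flags=re.DOTALL).strip()

# Example:
raw = "<think>\nAnalyzing...\n</think>\n\nHello! How can I assist you today?"
print(strip_reasoning(raw))  # -> "Hello! How can I assist you today?"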

Streaming Response

When using streaming mode (stream: true), the API will send partial responses as they are generated, following the same format as OpenAI's streaming API:

Streaming Response Format
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1681841142,"model":"lucidnova-rf1-100b","choices":[{"delta":{"content":""},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1681841142,"model":"lucidnova-rf1-100b","choices":[{"delta":{"content":"Analyzing the greeting 'Hello World!'..."},"index":0,"finish_reason":null}]}

... more chunks ...

data: [DONE]
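
If you are not using an OpenAI-compatible client, the stream can also be consumed by hand. Here is a minimal sketch using Python's requests library; it assumes only the data: line format shown above:

# A minimal sketch for parsing the stream without a client library
import json
import requests

with requests.post(
    "https://lucidquery.com/api/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "lucidnova-rf1-100b",
        "messages": [{"role": "user", "content": "Hello World!"}],
        "stream": True,
    },
    stream=True,
) as resp:
    for line in resp.iter_lines():
        if not line:
            continue  # skip SSE keep-alive blank lines
        payload = line.decode("utf-8").removeprefix("data: ")
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"].get("content")
        if delta:
            print(delta, end="", flush=True)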

Standard Request Examples

Below are examples of how to make standard (non-streaming) requests to the LucidQuery AI API in various programming languages:

# Using the OpenAI Python client library
from openai import OpenAI

# Initialize the client with your API key and base URL.
# Note: the client appends /chat/completions to the base URL itself.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://lucidquery.com/api/v1"
)

# Example 1: General reasoning task
reasoning = client.chat.completions.create(
    model="lucidnova-rf1-100b",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
)

# Example 2: Coding task
coding = client.chat.completions.create(
    model="lucidquery-nexus-coder",
    messages=[
        {"role": "user", "content": "Write a Python function to calculate fibonacci numbers"}
    ]
)

print(reasoning.choices[0].message.content)
print(coding.choices[0].message.content)
// Using Node.js with the OpenAI npm package
import OpenAI from 'openai';

// Note: the client appends /chat/completions to the base URL itself.
const openai = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://lucidquery.com/api/v1'
});

async function main() {
  // Example 1: General reasoning task
  const reasoning = await openai.chat.completions.create({
    model: 'lucidnova-rf1-100b',
    messages: [
      { role: 'user', content: 'Explain quantum computing in simple terms' }
    ]
  });

  // Example 2: Coding task
  const coding = await openai.chat.completions.create({
    model: 'lucidquery-nexus-coder',
    messages: [
      { role: 'user', content: 'Write a Python function to calculate fibonacci numbers' }
    ]
  });

  console.log(reasoning.choices[0].message.content);
  console.log(coding.choices[0].message.content);
}

main();
# Example 1: General reasoning task
curl -X POST https://lucidquery.com/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "lucidnova-rf1-100b",
    "messages": [
      {
        "role": "user",
        "content": "Explain quantum computing in simple terms"
      }
    ]
  }'

# Example 2: Coding task
curl -X POST https://lucidquery.com/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "lucidquery-nexus-coder",
    "messages": [
      {
        "role": "user",
        "content": "Write a Python function to calculate fibonacci numbers"
      }
    ]
  }'

Streaming Request Examples

Below are examples of how to make streaming requests to the LucidQuery AI API in various programming languages. Streaming provides a better user experience for longer responses as content appears progressively:

# Using the OpenAI Python client library
from openai import OpenAI

# Initialize the client with your API key and base URL
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://lucidquery.com/api/v1/chat/completions"
)

# Streaming example
stream = client.chat.completions.create(
    model="lucidnova-rf1-100b",
    messages=[
        {"role": "user", "content": "Hello World!"}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)
// Using Node.js with the OpenAI npm package
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://lucidquery.com/api/v1'  // the client appends /chat/completions itself
});

async function main() {
  // Streaming example
  const stream = await openai.chat.completions.create({
    model: 'lucidnova-rf1-100b',
    messages: [
      { role: 'user', content: 'Hello World!' }
    ],
    stream: true
  });
  
  for await (const chunk of stream) {
    if (chunk.choices[0]?.delta?.content) {
      process.stdout.write(chunk.choices[0].delta.content);
    }
  }
}

main();
# Streaming request
curl -X POST https://lucidquery.com/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  --no-buffer \
  -d '{
    "model": "lucidnova-rf1-100b",
    "messages": [
      {
        "role": "user",
        "content": "Hello World!"
      }
    ],
    "stream": true
  }'

Pricing

The LucidQuery AI API uses a unified token-based pricing model for all models. Tokens are the basic text units processed by our AI (roughly 4 characters or 0.75 words in English). We charge different rates for input and output tokens:

Pay As You Go

For occasional or variable usage

$2.00 per 1M input tokens
$5.00 per 1M output tokens

No minimum commitment, pay only for what you use.

  • All features included
  • Real-time web access
  • Fast diffusion-based reasoning

Enterprise

For high-volume or custom needs

Custom pricing

Tailored solutions for enterprise requirements.

  • Volume-based discounts
  • Dedicated account manager
  • SLA guarantees
  • Higher rate limits

Pricing note: The <think> section tokens are counted as part of the output tokens.
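
As a worked example, per-request cost can be estimated from the usage block of a response. This sketch uses the Pay As You Go rates above:

# A minimal sketch estimating Pay As You Go cost from a response's usage block
INPUT_RATE = 2.00 / 1_000_000   # dollars per input token
OUTPUT_RATE = 5.00 / 1_000_000  # dollars per output token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    # completion_tokens already includes any <think> section tokens
    return prompt_tokens * INPUT_RATE + completion_tokens * OUTPUT_RATE

# Usage figures from the example response above (10 prompt, 74 completion):
print(f"${estimate_cost(10, 74):.6f}")  # -> $0.000390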

Rate Limits

To ensure system stability and fair resource allocation, the LucidQuery AI API implements the following rate limits across all models:

Requests per Minute: 10 (maximum number of API calls allowed per minute)
Requests per Day: 1,000 (maximum number of API calls allowed per day)
Tokens per Minute: 1,000,000 (maximum number of tokens, input + output, processed per minute)
Tokens per Day: 10,000,000 (maximum number of tokens, input + output, processed per day)
Max Input Tokens: 120,000 (maximum size of a single input request)
Max Output Tokens (maximum size of a single response, per model):
  • lucidnova-rf1-100b: 8,000
  • lucidquery-nexus-coder: 60,000

If you exceed these limits, the API will return a 429 error with a message indicating which limit was reached. Enterprise customers can request higher rate limits.

Error Handling

The API uses standard HTTP status codes and returns detailed error messages:

400 invalid_request: The request was malformed or missing required parameters
401 unauthorized: Authentication failed (invalid or missing API key)
429 rate_limit_exceeded: You've exceeded one of the rate limits
500 api_error: An internal server error occurred

Error response example:

Error Response
{
  "error": {
    "message": "Token rate limit exceeded. Maximum 1000000 tokens per minute.",
    "type": "rate_limit_exceeded",
    "param": null,
    "code": "rate_limit_exceeded"
  }
}
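
A minimal sketch of inspecting an error response with Python's requests library; the field names match the error body above, and the empty messages array is only an assumption about what triggers a 400:

# A minimal sketch of reading the structured error body
import requests

resp = requests.post(
    "https://lucidquery.com/api/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"model": "lucidnova-rf1-100b", "messages": []},  # assumed invalid
)
if resp.status_code != 200:
    err = resp.json()["error"]
    print(f"{resp.status_code} {err['type']}: {err['message']}")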

Best Practices

Follow these recommendations to get the most out of the LucidQuery AI API:

  1. Be Specific and Clear

    The more specific and clear your input, the better the response will be. While our AI models have strong contextual understanding, explicitly stating what you want often yields the best results.

  2. Manage Token Usage

    Keep your inputs concise to conserve tokens. Remember that both input and output tokens count toward your usage limits. If you don't need the reasoning pathway in responses, you can strip out the <think> sections programmatically.

  3. Use Streaming for Long Responses

    For queries that might generate lengthy responses, use streaming to provide a better user experience with progressive content display.

  4. Implement Retry Logic

    Implement exponential backoff retry logic for rate limit errors and transient server issues, as shown in the sketch after this list.

  5. Leverage Conversation History

    For context-dependent tasks, include relevant previous messages in the conversation history to maintain context without repeating information.
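
For item 4, here is a minimal sketch of exponential backoff using Python's requests library; the retry count and delay values are illustrative, not prescribed:

# A minimal sketch of exponential backoff for 429 and transient 5xx errors
import time
import requests

def post_with_retry(payload: dict, api_key: str, max_retries: int = 5) -> dict:
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.post(
            "https://lucidquery.com/api/v1/chat/completions",
            headers={"Authorization": f"Bearer {api_key}"},
            json=payload,
        )
        if resp.status_code not in (429, 500):
            resp.raise_for_status()  # surface non-retryable errors
            return resp.json()
        time.sleep(delay)  # back off before retrying
        delay *= 2
    raise RuntimeError("Exhausted retries")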

Frequently Asked Questions

How are LucidQuery AI models different from other AI models?

Our models feature innovative hybrid architectures. LucidNova RF1 combines diffusion-based reasoning with autoregressive generation for fast, transparent reasoning. LucidQuery Nexus Coder uses the reverse approach: autoregressive reasoning with diffusion generation, optimized for programming tasks. Both feature real-time capabilities and self-tuning parameters.

Do LucidQuery AI models have a knowledge cutoff date?

No. Unlike many other AI models, our models have real-time web access capabilities that allow them to retrieve current information when needed, without requiring explicit instructions to do so.

Why doesn't the API have a temperature parameter?

Our AI models feature self-tuning parameters that automatically adjust based on the query context. The system determines the optimal approach for each type of request without requiring manual parameter tuning.

What programming languages do LucidQuery AI models understand?

Our AI models understand and can generate code in a wide range of programming languages, including but not limited to: Python, JavaScript, TypeScript, Java, C#, C++, PHP, Ruby, Go, Rust, Swift, Kotlin, and SQL. The LucidQuery Nexus Coder model is especially optimized for programming tasks.

Support

If you need assistance with the LucidQuery AI API, you can contact our support team.

For Enterprise customers, please contact your dedicated account manager for priority support.