LucidNova RF1 API Documentation

Complete reference for integrating with our advanced multimodal AI system

Introduction

Welcome to the official documentation for the LucidNova RF1 API. This guide will help you integrate our advanced multimodal AI system into your applications, products, and services.

LucidNova RF1 is a state-of-the-art, unified AI model with 100 billion parameters that combines multiple capabilities into a single powerful system. Unlike traditional AI services that separate different capabilities into distinct models, LucidNova RF1 provides a seamless experience by intelligently activating appropriate capabilities based on the context of your query.

Key Advantage: LucidNova RF1 features a hybrid architecture combining a fast diffusion-based reasoning layer with a traditional autoregressive transformer. This approach delivers transparent reasoning that's significantly faster than typical reasoning models while maintaining high-quality outputs.

Core Features

LucidNova RF1 is a unified AI that offers several powerful capabilities:

Hybrid Architecture

Combines diffusion-based reasoning (for speed) with autoregressive generation (for quality), eliminating the typical performance tradeoff in reasoning systems.

Real-time Web Access

Automatically retrieves current information from the web when needed, without requiring explicit instructions to do so.

Self-Tuning Parameters

Dynamically adjusts internal parameters based on query context for optimal performance without manual configuration.

Transparent Reasoning

Provides a detailed reasoning pathway via the <think> section, showing how the AI arrives at its conclusions without slowing down the system.

Getting Started

To begin using the LucidNova RF1 API, follow these simple steps:

  1. Create an account on the LucidQuery platform if you don't already have one.
  2. Generate an API key from the API management page in your dashboard.
  3. Install the client library for your preferred programming language, or use standard HTTP request libraries.
  4. Make your first API call following the examples in this documentation.

Tip: Our API is compatible with the OpenAI API format, making it easy to switch from other AI providers with minimal changes to your code.

Authentication

All requests to the LucidNova RF1 API require authentication using an API key. Your API key should be included in the HTTP headers of each request.

Add your API key to requests using the Authorization header:

HTTP Header
Authorization: Bearer YOUR_API_KEY

Important: Never share your API keys or include them in client-side code. If you believe your API key has been compromised, you should revoke it immediately from your dashboard and generate a new one.
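In Python, the header can be assembled like this (a minimal sketch; `YOUR_API_KEY` is a placeholder, not a real credential):

```python
import os

# Prefer reading the key from the environment so it never lands in source control.
# Falls back to a placeholder here purely for illustration.
API_KEY = os.environ.get("LUCIDNOVA_API_KEY", "YOUR_API_KEY")

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```

Every request shown in the examples below carries these two headers.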

API Endpoints

POST https://lucidquery.com/api/swiftapi.php/chat/completions

This endpoint allows you to generate AI responses for any type of query. The system automatically determines the optimal approach based on the content of your request.

Example request:

Request Body
{
  "model": "lucidnova-rf1-100b",
  "messages": [
    {
      "role": "user",
      "content": "Hello World!"
    }
  ],
  "stream": false
}

In this example, the system will process the query and respond appropriately. LucidNova RF1 automatically adjusts its internal parameters based on the query context, providing an optimal response without requiring any special configuration.

Request Parameters

The API accepts the following parameters in your request:

Parameter | Type | Description
model | string | The LucidNova model to use. Currently, only lucidnova-rf1-100b is available.
messages | array | An array of message objects representing the conversation history. Each message has a role (either "user" or "assistant") and content (the message text).
stream | boolean | If set to true, responses will be streamed as they're generated. Default is false.
max_tokens | integer | The maximum number of tokens to generate in the response. Default is 8000.

Note: Unlike other AI APIs, LucidNova RF1 does not use a temperature parameter. The model's self-tuning capabilities automatically adjust contextual parameters based on your query type for optimal response quality.
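Putting the parameters together, a request body can be built as a plain dictionary and serialized to JSON (a sketch; note there is deliberately no temperature key):

```python
import json

# Assemble a request body using the parameters documented above.
payload = {
    "model": "lucidnova-rf1-100b",
    "messages": [
        {"role": "user", "content": "Hello World!"},
    ],
    "stream": False,     # set True to receive streamed chunks
    "max_tokens": 8000,  # the default, shown explicitly
}

body = json.dumps(payload)
```

This `body` string is what goes in the POST request to the chat/completions endpoint.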

Response Format

The API returns responses in JSON format, with a structure similar to other AI APIs for easy integration:

Standard Response (Non-Streaming)

Response Body
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1681841142,
  "model": "lucidnova-rf1-100b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "\nAnalyzing the greeting 'Hello World!'...\nThis is a simple greeting that doesn't require any specialized knowledge or real-time information.\nI'll respond with an appropriate friendly greeting.\n\n\nHello! How can I assist you today? I'm LucidNova RF1, a multimodal AI assistant ready to help with a wide range of tasks. Feel free to ask me anything!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 74,
    "total_tokens": 84
  }
}

Note: The response includes a <think>...</think> section that shows the model's reasoning process. This provides transparency into how the AI arrived at its answer. If you don't want this in your application's output, you can strip it programmatically.
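Stripping the reasoning section can be done with a small regular expression (a sketch; `strip_think` is our own helper name, not part of any client library):

```python
import re

def strip_think(text: str) -> str:
    """Remove the <think>...</think> reasoning section, plus any
    whitespace immediately after it, leaving only the final answer."""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)

answer = strip_think(
    "<think>\nAnalyzing the greeting 'Hello World!'...\n</think>\n\n"
    "Hello! How can I assist you today?"
)
```

Text without a `<think>` section passes through unchanged, so the helper is safe to apply to every response.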

Streaming Response

When using streaming mode (stream: true), the API will send partial responses as they are generated, following the same format as OpenAI's streaming API:

Streaming Response Format
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1681841142,"model":"lucidnova-rf1-100b","choices":[{"delta":{"content":""},"index":0,"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1681841142,"model":"lucidnova-rf1-100b","choices":[{"delta":{"content":"Analyzing the greeting 'Hello World!'..."},"index":0,"finish_reason":null}]}

... more chunks ...

data: [DONE]
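Each line of the stream can be decoded with a small helper like the following (a sketch assuming the OpenAI-style `data:` framing shown above; `parse_sse_line` is our own name, not a library function):

```python
import json

def parse_sse_line(line: str):
    """Extract the content delta from one line of a streaming response.

    Returns None for blank lines, the terminal [DONE] marker, and
    chunks that carry no content (such as the initial empty delta).
    """
    line = line.strip()
    if not line.startswith("data:"):
        return None
    data = line[len("data:"):].strip()
    if data == "[DONE]":
        return None
    chunk = json.loads(data)
    return chunk["choices"][0]["delta"].get("content") or None
```

Feeding each received line through this function and printing non-None results reproduces the progressive output shown in the streaming examples below.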

Standard Request Examples

Below are examples of how to make standard (non-streaming) requests to the LucidNova RF1 API in various programming languages:

# Using the OpenAI Python client library
from openai import OpenAI

# Initialize the client with your API key and base URL
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://lucidquery.com/api/swiftapi.php"
)

# Standard request example
response = client.chat.completions.create(
    model="lucidnova-rf1-100b",
    messages=[
        {"role": "user", "content": "Hello World!"}
    ]
)

print(response.choices[0].message.content)

// Using Node.js with the OpenAI npm package
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://lucidquery.com/api/swiftapi.php'
});

async function main() {
  // Standard request example
  const response = await openai.chat.completions.create({
    model: 'lucidnova-rf1-100b',
    messages: [
      { role: 'user', content: 'Hello World!' }
    ]
  });
  
  console.log(response.choices[0].message.content);
}

main();

# Standard request
curl -X POST https://lucidquery.com/api/swiftapi.php/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "lucidnova-rf1-100b",
    "messages": [
      {
        "role": "user",
        "content": "Hello World!"
      }
    ]
  }'

Streaming Request Examples

Below are examples of how to make streaming requests to the LucidNova RF1 API in various programming languages. Streaming provides a better user experience for longer responses as content appears progressively:

# Using the OpenAI Python client library
from openai import OpenAI

# Initialize the client with your API key and base URL
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://lucidquery.com/api/swiftapi.php"
)

# Streaming example
stream = client.chat.completions.create(
    model="lucidnova-rf1-100b",
    messages=[
        {"role": "user", "content": "Hello World!"}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)

// Using Node.js with the OpenAI npm package
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'https://lucidquery.com/api/swiftapi.php'
});

async function main() {
  // Streaming example
  const stream = await openai.chat.completions.create({
    model: 'lucidnova-rf1-100b',
    messages: [
      { role: 'user', content: 'Hello World!' }
    ],
    stream: true
  });
  
  for await (const chunk of stream) {
    if (chunk.choices[0]?.delta?.content) {
      process.stdout.write(chunk.choices[0].delta.content);
    }
  }
}

main();

# Streaming request
curl -X POST https://lucidquery.com/api/swiftapi.php/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  --no-buffer \
  -d '{
    "model": "lucidnova-rf1-100b",
    "messages": [
      {
        "role": "user",
        "content": "Hello World!"
      }
    ],
    "stream": true
  }'

Pricing

LucidNova RF1 uses a token-based pricing model similar to other AI APIs. Tokens are the basic text units processed by our AI (roughly 4 characters or 0.75 words in English). We charge different rates for input and output tokens:

Pay As You Go

For occasional or variable usage

$2.50 per 1M input tokens
$9.00 per 1M output tokens

No minimum commitment, pay only for what you use.

  • All features included
  • Real-time web access
  • Fast diffusion-based reasoning
Get Started

Volume Discount

For moderate to heavy usage

$1.50 per 1M input tokens
$7.00 per 1M output tokens

25% savings when you prepay for 20M+ tokens.

  • All features included
  • Prepaid token packages
  • Tokens valid for 12 months
  • Priority support
View Plans

Enterprise

For high-volume or custom needs

Custom pricing

Tailored solutions for enterprise requirements.

  • Volume-based discounts
  • Dedicated account manager
  • SLA guarantees
  • Higher rate limits
Contact Sales

Pricing note: Tokens generated in the <think> section are billed as output tokens.
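As an illustration, the pay-as-you-go rates above translate into a simple per-request cost estimate (`estimate_cost` is our own helper; the rates are hard-coded from the table and will need updating if pricing changes):

```python
# Pay-as-you-go rates, in USD per token.
INPUT_RATE = 2.50 / 1_000_000
OUTPUT_RATE = 9.00 / 1_000_000

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the pay-as-you-go cost of one request in USD.

    completion_tokens already includes any <think> section tokens,
    since those are billed as output.
    """
    return prompt_tokens * INPUT_RATE + completion_tokens * OUTPUT_RATE
```

For the example response earlier (10 prompt tokens, 74 completion tokens), this works out to well under a tenth of a cent.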

Rate Limits

To ensure system stability and fair resource allocation, the LucidNova RF1 API implements the following rate limits:

Limit Type | Default Value | Description
Requests per Minute | 10 | Maximum number of API calls allowed per minute
Requests per Day | 1,000 | Maximum number of API calls allowed per day
Tokens per Minute | 10,000 | Maximum number of tokens (input + output) processed per minute
Tokens per Day | 100,000 | Maximum number of tokens (input + output) processed per day
Max Input Tokens | 8,000 | Maximum size of a single input request
Max Output Tokens | 8,000 | Maximum size of a single response

If you exceed these limits, the API will return a 429 error with a message indicating which limit was reached. Enterprise customers can request higher rate limits.
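One way to stay under the requests-per-minute limit client-side is a sliding-window throttle (a sketch under the 10 requests/minute default; `RequestLimiter` is our own class, not part of any SDK):

```python
import time
from collections import deque
from typing import Optional

class RequestLimiter:
    """Sliding-window throttle for the 10 requests/minute default limit."""

    def __init__(self, max_requests: int = 10, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()  # send times still inside the window

    def wait_time(self, now: Optional[float] = None) -> float:
        """Seconds to wait before the next request may be sent (0.0 if free)."""
        now = time.monotonic() if now is None else now
        # Drop send times that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            return 0.0
        return self.window - (now - self.timestamps[0])

    def record(self, now: Optional[float] = None) -> None:
        """Record that a request was just sent."""
        self.timestamps.append(time.monotonic() if now is None else now)
```

Before each call, sleep for `wait_time()` seconds and then `record()` the send; note this guards only the per-minute request limit, not the token limits.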

Error Handling

The API uses standard HTTP status codes and returns detailed error messages:

Status Code | Error Type | Description
400 | invalid_request | The request was malformed or missing required parameters
401 | unauthorized | Authentication failed (invalid or missing API key)
429 | rate_limit_exceeded | You've exceeded one of the rate limits
500 | api_error | An internal server error occurred

Error response example:

Error Response
{
  "error": {
    "message": "Token rate limit exceeded. Maximum 10000 tokens per minute.",
    "type": "rate_limit_exceeded",
    "param": null,
    "code": "rate_limit_exceeded"
  }
}
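A common way to handle 429 responses is exponential backoff with jitter (a sketch; `RateLimitError` here is a stand-in for whatever exception your HTTP client raises on a 429, and `with_backoff` is our own helper):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the exception your HTTP client raises on a 429."""

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Call `call()` and retry on rate-limit errors, doubling the delay
    each attempt (1s, 2s, 4s, ...) with a little random jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

The same wrapper is reasonable for transient 500 errors; 400 and 401 errors should not be retried, since repeating an unchanged request cannot succeed.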

Best Practices

Follow these recommendations to get the most out of the LucidNova RF1 API:

  1. Be Specific and Clear

    The more specific and clear your input, the better the response will be. While LucidNova RF1 has strong contextual understanding, explicitly stating what you want often yields the best results.

  2. Manage Token Usage

    Keep your inputs concise to conserve tokens. Remember that both input and output tokens count toward your usage limits. If you don't need the reasoning pathway in responses, you can strip out the <think> sections programmatically.

  3. Use Streaming for Long Responses

    For queries that might generate lengthy responses, use streaming to provide a better user experience with progressive content display.

  4. Implement Retry Logic

    Implement exponential backoff retry logic for rate limit errors and transient server issues.

  5. Leverage Conversation History

    For context-dependent tasks, include relevant previous messages in the conversation history to maintain context without repeating information.
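The conversation-history recommendation amounts to appending each completed exchange to the messages array and resending the full list (a sketch; `add_turn` is our own helper name):

```python
# Maintain conversation history by appending each exchange to the
# messages array and sending the whole list with every request.
def add_turn(history, user_text, assistant_text):
    """Append one completed user/assistant exchange to the running history."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

history = []
add_turn(history, "What is a token?", "Roughly 4 characters of English text.")

# The next request sends the prior turns plus the new user message:
next_messages = history + [{"role": "user", "content": "And how are they billed?"}]
```

Since both input and output tokens are billed, it is worth trimming or summarizing old turns once the history no longer needs them.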

Frequently Asked Questions

How is LucidNova RF1 different from other AI models?

LucidNova RF1 stands out with its hybrid architecture that combines a diffusion-based reasoning layer with an autoregressive transformer. This approach enables fast, transparent reasoning without the performance penalty typical of other reasoning models. The system also features real-time web access and self-tuning parameters that automatically optimize for different types of queries.

Does LucidNova RF1 have a knowledge cutoff date?

No. Unlike many other AI models, LucidNova RF1 has real-time web access capabilities that allow it to retrieve current information when needed, without requiring explicit instructions to do so.

Why doesn't the API have a temperature parameter?

LucidNova RF1 features self-tuning parameters that automatically adjust based on the query context. The system determines the optimal approach for each type of request without requiring manual parameter tuning.

What programming languages does LucidNova RF1 understand?

LucidNova RF1 understands and can generate code in a wide range of programming languages, including but not limited to: Python, JavaScript, TypeScript, Java, C#, C++, PHP, Ruby, Go, Rust, Swift, Kotlin, and SQL.

Support

If you need assistance with the LucidNova RF1 API, you can contact our support team.

For Enterprise customers, please contact your dedicated account manager for priority support.