Chat Create

Start a conversation with an LLM using existing OpenAI schemas.

POST /async/chat/create
Body

model · string · enum · Required
reasoning_effort · string · enum · Optional · Default: medium
frequency_penalty · number · min: -2 · max: 2 · Optional · Default: 0
max_completion_tokens · number · min: -1 · max: 600 · Optional · Default: 128
n · number · enum · Optional · Default: 1
presence_penalty · number · min: -2 · max: 2 · Optional · Default: 0
seed · number · Optional · Default: 786721
service_tier · string · enum · Optional · Default: default
stop · string[] · Optional
stream · boolean · Optional · Default: false
temperature · number · max: 2 · Optional · Default: 1
top_p · number · max: 1 · Optional · Default: 1
tool_choice · any of: string · enum, or … · Optional
parallel_tool_calls · boolean · Optional · Default: true
Responses

200 · Default Response · application/json
400 · Default Response · application/json
404 · Default Response · application/json
500 · Default Response · application/json
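For clients assembling the request body by hand, the sketch below is a hypothetical helper (not part of any official SDK) that builds a payload and enforces the numeric bounds from the parameter list above; only model and messages are required, everything else falls back to the server-side defaults.

```python
def build_chat_create_body(model, messages, **options):
    """Assemble a JSON body for POST /async/chat/create.

    `model` and `messages` are the only required fields. Optional
    fields are passed as keyword arguments and checked against the
    documented min/max bounds before being included.
    """
    # (min, max) bounds taken from the parameter list; None means unbounded.
    bounds = {
        "frequency_penalty": (-2, 2),
        "presence_penalty": (-2, 2),
        "max_completion_tokens": (-1, 600),
        "temperature": (None, 2),
        "top_p": (None, 1),
    }
    body = {"model": model, "messages": messages}
    for key, value in options.items():
        lo, hi = bounds.get(key, (None, None))
        if lo is not None and value < lo:
            raise ValueError(f"{key} must be >= {lo}")
        if hi is not None and value > hi:
            raise ValueError(f"{key} must be <= {hi}")
        body[key] = value
    return body
```

The returned dict can be serialized with `json.dumps` and sent as the request body shown in the example below.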
Example request
POST /async/chat/create HTTP/1.1
Host: 
Content-Type: application/json
Accept: */*
Content-Length: 584

{
  "messages": [
    {
      "role": "system",
      "content": "text",
      "name": "text"
    }
  ],
  "model": "Llama-3.1 8B",
  "reasoning_effort": "medium",
  "metadata": {
    "ANY_ADDITIONAL_PROPERTY": "text"
  },
  "frequency_penalty": 0,
  "max_completion_tokens": 128,
  "n": 1,
  "modalities": [
    "text"
  ],
  "presence_penalty": 0,
  "seed": 786721,
  "service_tier": "default",
  "stop": [
    "text"
  ],
  "stream": false,
  "stream_options": {
    "include_usage": true
  },
  "temperature": 1,
  "top_p": 1,
  "tools": [
    {
      "type": "function",
      "function": {
        "description": "text",
        "name": "text",
        "parameters": {
          "ANY_ADDITIONAL_PROPERTY": "text"
        },
        "strict": false
      }
    }
  ],
  "tool_choice": "none",
  "parallel_tool_calls": true
}
Example response:

{
  "id": "text"
}
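Chat Create only returns a job id; the completed generation is retrieved separately via the Chat Result endpoint (see the next page). The loop below is a minimal polling sketch with the transport injected as a callable, so the retrieval URL (an assumption here, not specified on this page) stays out of the loop logic.

```python
import time

def poll_for_result(job_id, fetch, interval=1.0, timeout=60.0):
    """Poll fetch(job_id) until it returns a result or the timeout expires.

    `fetch` is any callable that returns the completed payload, or None
    while the job is still running -- for example, an HTTP GET against
    the Chat Result endpoint (path assumed; see the Chat Result page).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch(job_id)
        if result is not None:
            return result
        time.sleep(interval)  # back off between polls
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

A longer `interval` reduces request volume at the cost of latency; for streaming-style responsiveness, use the synchronous OpenAI-compatible API with `stream: true` instead.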