LLM Completion Request
Send a prompt to an AI language model (LLM) and receive a generated response. This action lets you apply AI capabilities to tasks like text generation, summarization, and data extraction, returning the result either as plain text or as structured data.
Input
- User prompt (STRING, Required) A clear, concise instruction or question that you want the AI to respond to. This is the primary input for the AI.
- User prompt placeholders (OBJECT, Optional)
If your "User prompt" contains placeholders like {{NAME}}, you can define their values here. The AI will see the prompt with these placeholders replaced by their corresponding values.
- Example: If your prompt is "Hello, {{NAME}}!" and you set NAME to "Alice", the AI will receive "Hello, Alice!".
- System prompt (STRING, Optional)
Initial instructions that guide the AI's behavior and persona throughout the conversation. This helps set the context and tone for the AI's responses.
- Example: "You are a helpful assistant that provides concise answers."
- System prompt placeholders (OBJECT, Optional)
Similar to user prompt placeholders, these replace {{PLACEHOLDER}} values within your "System prompt".
- Model (SELECT_ONE, Required)
Choose the specific AI language model you want to use. Different models have varying capabilities, costs, and performance characteristics.
- Available options include: gpt-5, gpt-5-mini, gpt-4o, gpt-4o-mini, claude-sonnet, claude-haiku, claude-opus, mistral-medium, mistral-large, llama-4, gemini-2.5-flash, and many others.
- Files (ARRAY of FILE, Optional)
A list of files (e.g., documents, images) that you want the AI to analyze or refer to when generating its response. This feature is only available for certain models.
- Visibility: This input only appears if you select a model that supports file attachments (e.g., specific GPT, Claude, Llama, Mistral, or Gemini models).
- Api token (PASSWORD, Optional) Provide your own API token for the selected AI model. If you leave this blank, the platform will use your available AI credits.
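Conceptually, placeholder substitution amounts to simple string replacement before the prompt is sent to the model. A minimal Python sketch of this behavior (the function name is illustrative, not part of the platform):

```python
def fill_placeholders(template: str, values: dict) -> str:
    """Replace every {{KEY}} placeholder in the template with its value."""
    for key, value in values.items():
        template = template.replace("{{" + key + "}}", str(value))
    return template

# Matches the documented example: NAME set to "Alice".
greeting = fill_placeholders("Hello, {{NAME}}!", {"NAME": "Alice"})
# greeting == "Hello, Alice!"
```

The same substitution applies independently to the user prompt and the system prompt, each with its own placeholder object.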
Output
- Result (VARIABLE)
The variable where the AI's response will be stored. If a 'Response format' is specified, the output will be an OBJECT matching that structure. Otherwise, it will be a plain STRING.
- Response format (DATA_FORMAT, Optional)
Specify a predefined data structure (like a database table schema) that the AI should use to format its response. If provided, the AI will attempt to return its answer as a structured OBJECT conforming to this format. Leave this blank if you expect a plain text response.
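When a 'Response format' is set, the model's reply is expected to parse into an OBJECT matching that structure. A hedged Python sketch of what such a check could look like (the helper and the schema mapping are illustrative assumptions, not the platform's actual validation logic):

```python
import json

def validate_response(raw: str, schema: dict) -> dict:
    """Parse a model reply as JSON and verify each required field's type."""
    data = json.loads(raw)
    for field, expected_type in schema.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"field '{field}' is missing or has the wrong type")
    return data

# A schema mirroring a DATA_FORMAT with STRING and NUMBER fields.
schema = {"productName": str, "rating": (int, float), "summary": str}
result = validate_response(
    '{"productName": "SmartWatch Pro", "rating": 4, "summary": "Solid."}',
    schema,
)
```

If the model returns text that does not conform, a real platform would typically retry or surface an error rather than store a malformed object.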
Execution Flow
Real-Life Examples
Example 1: Generating a Marketing Slogan
Scenario: You need a catchy slogan for a new eco-friendly coffee brand.
- Inputs:
- User prompt: "Generate 5 short, catchy marketing slogans for a new eco-friendly coffee brand called 'Green Bean Brew'. Focus on sustainability and great taste."
- System prompt: "You are a creative marketing expert specializing in brand messaging."
- Model: gpt-4o-mini
- Result: CoffeeSlogans
- Result: The variable CoffeeSlogans will contain a string with five suggested slogans, such as: "1. Green Bean Brew: Sip Sustainably, Taste Exceptionally. 2. Your Cup, Our Planet: Green Bean Brew. 3. Earth-Friendly Coffee, Unforgettable Flavor. 4. Green Bean Brew: Good for You, Good for Earth. 5. Taste the Future, Sustain the Planet with Green Bean Brew."
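Behind the scenes, chat-style LLM APIs generally take the system and user prompts as separate messages. A sketch of how the two inputs above typically combine (the role/content message format is a common convention and an assumption here, not a description of this platform's internals):

```python
def build_messages(system_prompt, user_prompt):
    """Assemble a chat-style message list from the two prompt inputs."""
    messages = []
    if system_prompt:  # the system prompt is optional
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages(
    "You are a creative marketing expert specializing in brand messaging.",
    "Generate 5 short, catchy marketing slogans for 'Green Bean Brew'.",
)
```

Omitting the system prompt simply yields a single user message, which is why that input is optional.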
Example 2: Extracting Structured Data from Customer Reviews
Scenario: You have customer reviews and want to automatically extract the product name, rating, and a summary of the feedback into a structured format for analysis.
- Inputs:
- User prompt: "Analyze the following customer review: 'I absolutely love the new 'SmartWatch Pro'! The battery life is incredible, lasting days on a single charge. However, the strap feels a bit cheap. Overall, a solid 4 out of 5 stars.' Extract the product name, star rating, and a brief summary of the feedback."
- System prompt: "You are an expert in customer feedback analysis. Always output in JSON format."
- Model: gpt-4o
- Response format: A DATA_FORMAT named ReviewAnalysis with the following fields: productName (STRING), rating (NUMBER), summary (STRING)
- Result: AnalyzedReview
- Result: The variable AnalyzedReview will contain an OBJECT structured as follows:
{
  "productName": "SmartWatch Pro",
  "rating": 4,
  "summary": "Excellent battery life, but the strap quality could be improved."
}
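Because the Result here is a structured OBJECT rather than free text, downstream workflow steps can read individual fields directly. A small sketch of consuming it (the dict below simulates the stored result, and the branching threshold is illustrative):

```python
import json

# Simulate the stored AnalyzedReview object using the JSON shown above.
analyzed_review = json.loads(
    '{"productName": "SmartWatch Pro", "rating": 4, '
    '"summary": "Excellent battery life, but the strap quality could be improved."}'
)

# A later step might branch on the extracted rating.
if analyzed_review["rating"] >= 4:
    label = "positive"
else:
    label = "needs attention"
```

This field-level access is the main advantage of supplying a Response format instead of parsing a plain-text answer yourself.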
Example 3: Summarizing a Document with Specific Instructions
Scenario: You have a long project proposal document and need a concise summary focusing on the budget and timeline sections for a quick management review.
- Inputs:
- User prompt: "Summarize the key budget allocations and project timeline details from the attached document. Focus on the total budget, major cost categories, and critical milestones."
- System prompt: "You are a project management assistant. Provide a clear and concise summary suitable for executives."
- Model: claude-opus-4-0 (or another model that supports file input)
- Files: Project_Proposal_Q4_2024.pdf (a PDF file containing the proposal)
- Result: ProposalSummary
- Result: The variable ProposalSummary will contain a string summarizing the budget and timeline information extracted from the Project_Proposal_Q4_2024.pdf document.