
This guide introduces flow-graphs, the fundamental building blocks of the HyperFlow AI development platform.
HyperFlow is a no-code environment where you create AI applications and workflows by visually designing their logic. Instead of writing code, you draw flow-graphs, which are executable diagrams representing your application's processes. The HyperFlow development environment is built around creating, running, and refining these visual representations.
Flow-graphs are used in nearly every aspect of building AI applications in HyperFlow. You can use them for developing chatbots, importing and processing content for knowledge bases, integrating with external services, setting up agentic workflows where multiple AI models collaborate, and orchestrating multiple independent AI services via HyperFlow's control and API plugin system. This makes HyperFlow a great hub for complex, composite AI systems.
Example flow-graphs
To understand flow-graphs better, we will explore examples that are automatically installed with your first project. You can find them in the flow-graph directory on the left-hand side of the development environment. This directory shows all flow-graphs in the current project. The examples, grouped under the "Examples" tag, showcase seven distinct workflows, highlighting various applications. We will examine each, explaining its purpose and structure.
To view a flow-graph, simply select it from the directory, and it will load into the main flow-graph editor. Let's begin by examining what is likely the most basic flow-graph imaginable: a simple, one-shot chatbot.

This first flow-graph has only two components, called nodes. Each node is a specific step in the workflow. Here, the first node is executed, then the second. The purple lines show the order of execution, defining the process flow. Blue lines show the data flow, defining how data moves between nodes.
In this flow-graph, the first node requests a user input, like a prompt or query. The purple line then directs the flow to the second node, a call to a Large Language Model (LLM). The blue line between them shows the user input being passed to the LLM. The LLM processes the prompt and generates a response, which is then displayed.
This two-node structure creates a "one-shot" chatbot. The user provides input, and the LLM returns a single response.
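Although HyperFlow itself is no-code, the logic of this two-node flow-graph can be sketched in ordinary Python. This is a minimal sketch, not HyperFlow's actual implementation; `call_llm` is a hypothetical stand-in for the LLM call node.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for the LLM call node; a real node would invoke a hosted model.
    return f"LLM response to: {prompt!r}"

def one_shot_chatbot(user_input: str) -> str:
    # Node 1 receives the user's prompt; node 2 passes it to the LLM.
    # The purple line is the call order; the blue line is the argument passing.
    return call_llm(user_input)

print(one_shot_chatbot("What is a flow-graph?"))
```

The function returns after a single LLM call, mirroring the one-shot structure: there is no loop back to the input node.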
Next, we will break down the looping chatbot and then move on to the knowledge-based chatbot.
Building on the one-shot chatbot, we can create a more dynamic and interactive experience by implementing a looping chatbot.

This flow-graph introduces a crucial modification: the process flow, indicated by the purple line, now loops back after the LLM call. This loop allows for continuous interaction, mimicking the behavior of popular chatbots like ChatGPT and Gemini.
In addition to the loop, another node has been added. This new node enables the injection of instructions into the LLM call. These instructions guide the LLM on how to behave and respond to user requests, a critical aspect of what is commonly referred to as "prompt engineering."
Further, a data flow loop has been added, feeding the LLM's responses back into its Prior responses input. This is essential because LLMs generally lack long-term memory. To maintain context, the application must keep a history of the conversation and pass the entire history to the LLM with each interaction. This data flow feedback provides the LLM with the necessary conversational context.
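The looping structure and the conversation-history feedback can be sketched as follows. This is an illustrative approximation, assuming a simple list-based history; the `call_llm` helper is hypothetical and stands in for the LLM call node with its Instructions and Prior responses inputs.

```python
def call_llm(instructions, history, user_input):
    # Placeholder LLM node; a real call would send instructions, the
    # conversation history, and the new input to a hosted model.
    return f"reply #{len(history) // 2 + 1} to {user_input!r}"

def chat_loop(instructions, turns):
    history = []   # the "Prior responses" data-flow feedback: full context so far
    replies = []
    for user_input in turns:                # the purple process-flow loop
        reply = call_llm(instructions, history, user_input)
        # Append both sides of the exchange so the next call sees the context.
        history += [("user", user_input), ("assistant", reply)]
        replies.append(reply)
    return replies
```

Because the LLM itself is stateless, the growing `history` list is what preserves conversational context across iterations, just as the data-flow loop does in the flow-graph.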
This example demonstrates how to create a chatbot that can leverage a knowledge base to provide more informed and contextually relevant responses.

This type of chatbot is often referred to as a Retrieval Augmented Generation (RAG) chatbot. The core structure retains the user input, LLM call, and instruction nodes from the previous example. However, a significant addition is the Search knowledge DB node. The user's prompt is now passed both to this new node and to the main LLM call node, as indicated by the blue data flow lines.
The knowledge base lookup node retrieves relevant information from a pre-built knowledge base, guided by the user's prompt. The retrieved knowledge is then injected into the Knowledge input of the LLM call node. This process effectively augments the LLM's response generation with domain knowledge specific to the user’s query. The LLM is instructed to utilize this retrieved knowledge when formulating its response to the user's initial query.
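The retrieval-and-augmentation step can be sketched in Python. This is a toy illustration: real knowledge bases typically use embedding-based similarity search, whereas the `search_knowledge_db` function below uses naive word overlap, and the instruction node is omitted for brevity.

```python
def search_knowledge_db(query, db):
    # Toy stand-in for the Search knowledge DB node: return entries that
    # share at least one word with the query.
    terms = set(query.lower().split())
    return [text for text in db if terms & set(text.lower().split())]

def rag_answer(user_prompt, db):
    # The user's prompt flows to both the retrieval node and the LLM call.
    knowledge = search_knowledge_db(user_prompt, db)
    # The retrieved chunks feed the Knowledge input of the LLM call node.
    context = " | ".join(knowledge)
    return f"Answer to {user_prompt!r} using: {context}"
```

The key point mirrored here is the forked blue data flow: the same prompt drives both the lookup and the generation, and the lookup's result is injected alongside the prompt.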
This example introduces two more nodes. The first, Message to user, is a node that sends an opening message to the user. Since it is positioned outside the main loop, this message is only sent once, to provide an introductory message to start the conversation. The second addition is a Prompt buttons node. This allows for the specification of a set of initial recommended or suggested prompts that the user can select from, providing a more guided and user-friendly experience, particularly for domain-specific or knowledge-based chatbots.
This knowledge-based chatbot flow-graph demonstrates the ability of HyperFlow to create sophisticated chatbots that leverage external knowledge to enhance their responses and provide a more enriching user experience.
The ability to create and manage a knowledge base is crucial for developing custom, knowledge-driven applications. HyperFlow allows for the creation of workflows that are not directly related to running chatbots but are essential for supporting them. This example demonstrates a workflow specifically designed for building such a knowledge base.

This flow-graph sets out a workflow for 1) importing content, 2) dividing it into manageable knowledge chunks, and 3) constructing a searchable knowledge base. It is a good example of how HyperFlow can be used to create workflows that serve purposes beyond implementing interactive chatbots.
At the end of the knowledge-database building flow, a small test loop is included. This loop allows for testing the newly built knowledge base. A test user prompt is input, a search is performed against the knowledge base, and the results are displayed, looping back for more test queries. This testing loop provides a way to quickly verify the effectiveness of the constructed knowledge base.
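The import-chunk-index-test pipeline can be sketched in Python. This is a simplified model under stated assumptions: chunking is by word count and search is plain word overlap, whereas a production knowledge base would normally chunk by semantic boundaries and search with embeddings.

```python
def chunk_text(text, max_words=50):
    # Divide imported content into manageable knowledge chunks.
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def build_index(documents, max_words=50):
    # Collect all chunks into a searchable structure (a plain list here).
    index = []
    for doc in documents:
        index.extend(chunk_text(doc, max_words))
    return index

def search(index, query):
    # The small test loop at the end of the flow-graph: run a test prompt
    # against the freshly built knowledge base and inspect the hits.
    terms = set(query.lower().split())
    return [chunk for chunk in index if terms & set(chunk.lower().split())]
```

Running a few queries through `search` plays the same role as the flow-graph's test loop: a quick sanity check that the knowledge base returns relevant chunks before wiring it into a chatbot.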
In essence, this flow-graph demonstrates a workflow dedicated entirely to the creation and testing of a knowledge base, highlighting that HyperFlow's capabilities extend beyond chatbot development to encompass a wide range of supporting tasks necessary for building robust AI applications.
This example showcases a different type of workflow, moving away from chatbots and knowledge databases and into the realm of agentic workflows. This particular flow-graph illustrates how two Large Language Models (LLMs) can collaborate to perform a complex task, in this case, iterative image generation and refinement.

The core idea involves using one LLM to generate an image based on a user's request and a second LLM to critique and refine that image. The user begins by providing a prompt describing the desired image. This prompt is passed to the first LLM, an image-capable model such as DALL-E 3 or Stable Diffusion, which produces a candidate image from the user's prompt.
The generated image is then fed as input to the second LLM which is given specific instructions: to review the generated image in conjunction with the original user prompt. If the image does not fully meet the criteria outlined in the prompt, the second LLM is tasked with generating a modified or improved image-generation prompt.
This improved prompt is then fed back to the first LLM, creating a feedback loop. The first LLM generates a new image based on the refined prompt, and the process repeats. Essentially, once a user requests a particular image, this agentic system enters a loop, with each iteration aimed at improving the generated image. The first LLM acts as the image generator, responding to modified prompts, while the second LLM acts as a critic, evaluating the image and suggesting prompt refinements.
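The generator-critic loop can be sketched in Python. Both helper functions are hypothetical stand-ins: `generate_image` tags a fake "image" with its prompt, and `critique` uses a toy acceptance criterion (the prompt mentions lighting) in place of a real reviewing model.

```python
def generate_image(prompt):
    # Hypothetical generator model: returns an "image" tagged with its prompt.
    return f"image<{prompt}>"

def critique(image, original_request):
    # Hypothetical critic LLM: compares the image against the original
    # request. Here the toy rule is: accept once the prompt adds lighting;
    # otherwise return a refined image-generation prompt.
    if "lighting" in image:
        return None                               # image accepted
    return image[6:-1] + ", dramatic lighting"    # refined prompt

def refine_loop(user_request, max_iterations=5):
    prompt = user_request
    for _ in range(max_iterations):
        image = generate_image(prompt)            # generator role
        refined = critique(image, user_request)   # critic role
        if refined is None:
            return image
        prompt = refined                          # feedback loop to generator
    return image                                  # give up after a bounded number of passes
```

Note the `max_iterations` bound: a real agentic loop needs a termination condition so that a hard-to-satisfy critic cannot iterate forever.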
This flow-graph provides a clear, if simplified, illustration of a classic agentic workflow. Multiple LLMs are working in concert, each fulfilling a specific role, to accomplish a desired task, in this instance, content generation with iterative refinement. This demonstrates the power of agentic workflows in automating complex processes through the coordinated efforts of multiple AI agents.
This example demonstrates HyperFlow's capabilities beyond chatbot development, showcasing a flow-graph designed for batch processing. This type of workflow is useful for handling large sets of data and automating repetitive tasks.

In this flow-graph, the goal is to process a number of scanned images. The workflow begins by importing these images in a batch. It then processes each image one at a time. Each image is sent to a Large Language Model (LLM) for processing. The LLM might perform Optical Character Recognition (OCR) to extract text, generate a descriptive summary, or perform a transcription if the image contains handwritten notes or other forms of text.
The output from the LLM, whether extracted text, a description, or a transcription, is then directed to an Add content node, which saves the output from the LLM in the project’s asset store for later use. The workflow then loops back to process the next image in the batch, continuing until all items have been processed. The Loop controller node cooperates with the Batch data node to exit the loop it controls once all items from the batching node have been processed.
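The batch loop reduces to a simple iterate-process-save pattern, sketched below. `process_with_llm` is a hypothetical placeholder for the LLM node (OCR, description, or transcription), and the asset store is modeled as a plain list.

```python
def process_with_llm(image):
    # Placeholder for the LLM node: OCR, descriptive summary, or transcription.
    return f"text extracted from {image}"

def run_batch(images):
    asset_store = []                       # stand-in for the project's asset store
    for image in images:                   # Loop controller + Batch data nodes:
        result = process_with_llm(image)   #   one LLM call per batch item
        asset_store.append(result)         # Add content node: save the output
    return asset_store                     # loop exits when the batch is exhausted
```

The for-loop's natural termination corresponds to the Loop controller exiting once the Batch data node reports that every item has been consumed.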
This flow-graph shows a batch processing loop implemented within HyperFlow, tailored for content processing and content management. It highlights that HyperFlow can be used to build a range of workflows beyond interactive chatbots, including automating tasks related to AI technology or in support of building broader AI applications.
This example demonstrates HyperFlow's support for integrating tool-using LLMs into workflows. This capability allows LLMs to leverage external tools to broaden their functionality and improve responses.

In this flow-graph, the LLM can assess user prompts and determine whether using a specific tool would be helpful. The example shows a scenario with two defined tools: one that retrieves real-time weather information and another that performs mathematical calculations.
The tool-using LLM examines the user's prompt and, based on its content, decides whether one of its defined tools could contribute to a more informative response. If a tool is deemed relevant, the LLM directs the LLM tool agent node to call the tool, supplying the data the tool needs. For instance, if a user asks about the weather in some location, the LLM would recognize the relevance of the weather information tool and use it to fetch and incorporate current weather data, passing the requested location as input to the tool. Similarly, if a user's prompt involves a mathematical problem, the LLM would engage the calculation tool to compute the answer.
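The dispatch step can be sketched in Python. Everything here is a hypothetical stand-in: real tool selection is performed by the LLM itself from tool descriptions, whereas this sketch substitutes a simple prefix check, and the weather tool returns canned data instead of calling an external API.

```python
def get_weather(location):
    # Hypothetical weather tool; a real one would call an external weather API.
    return f"22°C and sunny in {location}"

def calculate(expression):
    # Hypothetical calculator tool (restricted eval, for illustration only).
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"weather": get_weather, "calculator": calculate}

def answer(prompt):
    # Stand-in for the LLM's tool-selection step: a keyword check decides
    # which tool, if any, the LLM tool agent node should invoke.
    if prompt.startswith("weather in "):
        return TOOLS["weather"](prompt[len("weather in "):])
    if prompt.startswith("calc "):
        return TOOLS["calculator"](prompt[len("calc "):])
    return "answered directly by the LLM"   # no tool deemed relevant
```

The important structural point is the fallback branch: when no tool is relevant, the prompt is answered by the LLM alone, so tools extend rather than replace the base chatbot behavior.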
This integration of a tool-using LLM within a looping chatbot structure exemplifies an agent-based AI application. It demonstrates how LLMs can be augmented with external capabilities to extend their functionality.
This concludes the overview of the example flow-graphs. While these examples will be explored in more detail in subsequent guides, they provide a sense of the range of workflows that can be constructed within HyperFlow, as well as a visual sense of their structure. Remember that the purple lines represent the process flow, or order of operations, while the blue lines indicate the data flow between nodes within the flow-graph.