Building Agentic Systems: A Hands-On Guide with LangChain and LangGraph

Agentic Ticket System

In the rapidly evolving world of AI, agentic systems have emerged as a powerful architectural pattern that brings together intelligent decision-making and real-world utility. This post dives into what agentic systems are, introduces the concept of agents, and walks you through setting up a complete working prototype: an Agentic Ticket System powered by FastAPI, Streamlit, and locally run LLMs via Ollama.

What Are Agentic Systems?

An agentic system refers to an architecture where one or more agents—autonomous AI components—observe, reason, and act to achieve goals. These systems are built to operate independently or semi-independently, adapting to their environment and making context-aware decisions.
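
As a mental model, the observe-reason-act cycle can be sketched as a simple loop. Everything below (the llm, tools, and get_observation names, the stop condition) is illustrative pseudo-structure, not the API of any particular framework:

    # A minimal, illustrative observe-reason-act loop.
    # `llm`, `tools`, and `get_observation` are hypothetical stand-ins
    # for the model, tool set, and environment of a real system.
    def run_agent(llm, tools, get_observation, goal, max_steps=10):
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):
            observation = get_observation()              # observe
            decision = llm.decide(history, observation)  # reason
            if decision.is_final:
                return decision.answer
            result = tools[decision.tool](decision.args)  # act
            history.append(f"{decision.tool} -> {result}")
        return None  # give up after max_steps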

Agent Definition according to IBM

An artificial intelligence (AI) agent refers to a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools.

Source: https://www.ibm.com/think/topics/ai-agents


Project Overview: Agentic Ticket System

Here is a practical example of an agentic system: the Agentic Ticket System, a prototype that lets you interact with LLM agents to create support tickets.

👉 GitHub Repository: Agentic Ticket System

The project uses:

  • FastAPI for the backend agent logic and API endpoints.
  • Streamlit for a reactive frontend UI.
  • Ollama to run powerful LLMs like granite3.1-dense directly on your machine.
  • LangChain for building applications powered by language models through modular components like prompt templates, memory, and tools.
  • LangGraph, a library built on top of LangChain, for building stateful, multi-agent applications with graph-based execution flows.
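
To make the LangGraph part concrete, here is a minimal sketch of a one-node graph with typed state. The node logic is a placeholder (the real project calls an LLM there), but the StateGraph wiring follows the standard LangGraph pattern:

    from typing import TypedDict

    from langgraph.graph import END, START, StateGraph

    class TicketState(TypedDict):
        request: str
        category: str

    def classify(state: TicketState) -> dict:
        # Placeholder: the real app would have an LLM pick the category.
        return {"category": "support"}

    builder = StateGraph(TicketState)
    builder.add_node("classify", classify)
    builder.add_edge(START, "classify")
    builder.add_edge("classify", END)
    graph = builder.compile()

    print(graph.invoke({"request": "My login is broken", "category": ""}))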

System Architecture

Here’s a high-level overview of the stack:

Interface (Streamlit) 🧑‍💻
        ↓
FastAPI Backend 🔁
        ↓
Local LLMs via Ollama 🧠
        ↓
(Optional) SQLite via TablePlus 🗃️

How to Set It Up

Step 1: Install Ollama and pull models

To run models locally using Ollama:

  1. Download and install Ollama for macOS
  2. Pull a model, e.g.: ollama pull granite3.1-dense:latest
  3. Check if the model is installed: ollama list
  4. Optionally test it: ollama run granite3.1-dense:latest "Hello"
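
Once the model is pulled, you can also smoke-test it from Python. This assumes Ollama is serving locally and the langchain-ollama integration package is installed (uv pip install langchain-ollama):

    # Quick smoke test of the local model via LangChain's Ollama integration.
    from langchain_ollama import ChatOllama

    llm = ChatOllama(model="granite3.1-dense:latest")
    response = llm.invoke("Hello")
    print(response.content)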

Step 2: Set Up the Backend

  1. Navigate to the backend directory.
  2. Create a virtual environment: uv venv .venv --python=/opt/homebrew/bin/python3.12
  3. Activate it: source .venv/bin/activate
  4. Install dependencies: uv pip install -r requirements.txt
  5. Go back to the root directory: cd ../
  6. Copy .env.sample to .env and configure any models or credentials.
  7. Run the FastAPI app: uvicorn backend.main:app --reload
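
For orientation, a stripped-down FastAPI backend looks roughly like the sketch below. The route name and payload shape are assumptions for illustration; the repository's backend.main wires in the actual agent logic as well:

    # Minimal, illustrative backend sketch; run with:
    #   uvicorn backend.main:app --reload
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class TicketRequest(BaseModel):
        description: str

    @app.post("/tickets")
    def create_ticket(req: TicketRequest) -> dict:
        # The real app would let the agent classify the request and
        # persist a ticket; this stub just echoes it back.
        return {"status": "created", "description": req.description}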

Step 3: Set Up the Frontend

  1. Navigate to the frontend directory.
  2. Create a virtual environment (uv venv) and activate it (source .venv/bin/activate).
  3. Install dependencies: uv pip install -r requirements.txt
  4. Start the frontend app: streamlit run streamlit_app.py
  5. To run the ticket dashboard app: streamlit run streamlit_ticket_dashboard.py
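
The Streamlit side can be equally small. The endpoint URL and JSON payload below are assumptions matching the backend sketch above, not necessarily the repository's exact contract:

    # Minimal, illustrative frontend sketch; run with:
    #   streamlit run streamlit_app.py
    import requests
    import streamlit as st

    st.title("Agentic Ticket System")

    description = st.text_area("Describe your issue")
    if st.button("Create ticket") and description:
        resp = requests.post(
            "http://localhost:8000/tickets",
            json={"description": description},
            timeout=60,
        )
        st.json(resp.json())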

Step 4 (Optional): Connect to SQLite via TablePlus

  1. Download and install TablePlus.
  2. Create a new SQLite connection and name it ticketapi.
  3. Select the ticket.db file in your backend folder.
  4. Test the connection and connect.
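
If you prefer a quick look without a GUI, Python's built-in sqlite3 module can inspect the same file. The path backend/ticket.db is assumed from the step above:

    # Inspect the ticket database without TablePlus.
    import sqlite3

    conn = sqlite3.connect("backend/ticket.db")
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'"
    ).fetchall()
    print(tables)  # tables the backend has created so far
    conn.close()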

Conclusion

Agentic systems are not just theoretical—they are rapidly becoming the core of next-gen applications. Whether it’s customer support, data analysis, or workflow automation, autonomous agents powered by LLMs will be integral to future software.

To try it yourself, find the repository here:

GitHub: Agentic Ticket System
