AI-Native · Enterprise-Ready · Ships Real Code

The AI that
ships fixes,
not suggestions.

Tickets·Agent is an autonomous AI engineering pipeline that takes a bug report from triage to a merged pull request — with human approval at every critical gate.

4 hrs
avg time-to-fix
10×
throughput / dev
100%
audit coverage
AGENT
CORE
🔍Find
🧠Plan
📥Intake
✏️Edit
🔐Gate
🚀Push
🎯Tests
🔬Analyze
The Problem

Engineering teams are
drowning in bug debt.

0
Hours lost per developer weekly
Engineers spend nearly half their time on ticket triage, context-switching, and manually hunting down bugs in unfamiliar code — before writing a single line of the fix.
$0K
Average cost per critical bug
When you factor in on-call hours, delayed features, customer churn, and compounding technical debt, every unresolved critical bug costs the business dearly.
4.2d
Mean time to resolve
The average bug takes over four days from report to merged fix — not because engineers are slow, but because the process is broken. Tickets wait, context is lost, reviews stall.
The Solution

One ticket in.
Merged PR out.

Tickets·Agent is a multi-node AI pipeline that understands your codebase, locates the root cause, drafts the fix, writes the tests, and pushes a branch — all while keeping humans in the loop at critical approval gates.

tickets-agent — agent session #4821
Starting agent for #BUG-2841: Memory leak in WebSocket handler
[intake] Parsed ticket — severity: CRITICAL, component: websocket
[understand] Analyzing error signature + stack trace...
[find_code] Zoekt search: 12 relevant files found
[root_cause] ← EventEmitter listeners not removed on disconnect
[plan_fix] 3-step fix plan generated. Awaiting approval...
✓ [APPROVED] Proceeding to edit_code
[edit_code] Patching src/ws/handler.ts — cleanup() on disconnect
[gen_tests] 3 unit tests + 1 integration test written
✓ [APPROVED] Pushing branch fix/bug-2841-ws-leak
✓ MR created: gitlab.com/acme/app/-/merge_requests/1247
The Pipeline

10-node autonomous
engineering pipeline

Every node is independently configurable. Approval gates enforce human oversight. Custom agents reroute the graph for your workflow.

📥
Intake
01
🔍
Understand
02
💬
Clarify Check
03
🗂️
Find Code
04
🧠
Root Cause
05
📋
Plan Fix
06
🔐
Approval
Gate 1
✏️
Edit Code
07
🧪
Gen Tests
08
🔐
Approval
Gate 2
🚀
Push Branch
09
Analysis Nodes
Code Execution
Human Approval Gate
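The node sequence above can be sketched as a typed pipeline with blocking approval gates. This is an illustrative model only — node ids follow the custom-agent config shown further down the page, but the real orchestration runs on LangGraph and its actual graph API differs:

```typescript
// Illustrative sketch of the pipeline as ordered, typed nodes.
// An approval gate blocks every node after it until a human signs off.
type NodeKind = "analysis" | "execution" | "approval_gate";

interface PipelineNode {
  id: string;
  kind: NodeKind;
}

const PIPELINE: PipelineNode[] = [
  { id: "intake", kind: "analysis" },
  { id: "understand", kind: "analysis" },
  { id: "clarify_check", kind: "analysis" },
  { id: "find_code", kind: "analysis" },
  { id: "analyze_root_cause", kind: "analysis" },
  { id: "plan_fix", kind: "analysis" },
  { id: "approval_gate_1", kind: "approval_gate" },
  { id: "edit_code", kind: "execution" },
  { id: "gen_tests", kind: "execution" },
  { id: "approval_gate_2", kind: "approval_gate" },
  { id: "push_branch", kind: "execution" },
];

// Returns the prefix of the pipeline that may run given which gates
// a human has approved so far.
function nextRunnable(approved: Set<string>): PipelineNode[] {
  const runnable: PipelineNode[] = [];
  for (const node of PIPELINE) {
    if (node.kind === "approval_gate" && !approved.has(node.id)) break;
    runnable.push(node);
  }
  return runnable;
}
```

With no approvals, execution halts after `plan_fix`; approving gate 1 unlocks `edit_code` and `gen_tests` but still blocks the push until gate 2 is signed off.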
The Workspace

A full IDE experience,
built for AI-human collaboration

Monaco editor with file tree, AI assistant panel, diff viewer, and real-time agent execution — in a single unified interface.

Tickets·Agent Workspace — BUG-2841: Memory leak in WebSocket handler
CRITICAL
Explorer
📁src
📁ws
📄handler.ts
📄client.ts
📄types.ts
📁api
📄router.ts
📁tests
📄ws.test.ts
📁config
📄index.ts
handler.ts
ws.test.ts
types.ts
 1  // WebSocket connection handler
 2  import { EventEmitter } from 'events';
 3  import type { WebSocket, IncomingMessage } from 'ws';
 4
 5  export class WSHandler extends EventEmitter {
 6    private clients = new Map<string, WebSocket>();
 7
 8    handleConnection(ws: WebSocket, req: IncomingMessage) {
 9      const id = generateId();
10      this.clients.set(id, ws);
11      // ⚠ BUG: listeners never removed → memory leak
12      ws.on('message', this.onMessage);
13      ws.on('error', this.onError);
14      // missing: ws.on('close', () => this.cleanup(id))
15    }
16  }
AI Assistant · Full Fix Agent
Why is memory usage growing with each connection?
📍 Root Cause Lines 12-13: EventEmitter listeners are attached to the socket but never removed when it closes. Each disconnected client leaves orphaned references.
🔧 Fix Plan Add a close listener that calls cleanup(id) — removes client from map + unbinds all listeners via ws.removeAllListeners().
Writing fix...
Ask about the code... ⌘↵
Differentiation

Four moats that
matter to enterprise.

🤖
01
End-to-End Autonomous Execution
Not a Copilot that autocompletes. Not a chatbot that suggests. Tickets·Agent reads the ticket, finds the code, writes the fix, tests it, and opens a pull request — end-to-end, with zero manual steps in between.
✦ Competes with: GitHub Copilot, Devin, Cursor
🔐
02
Enterprise-Grade Human Approval Gates
Two mandatory approval checkpoints — one before code edits, one before branch push — make this safe for regulated industries. Organizations subject to HIPAA, SOC 2, and financial regulations can deploy confidently. The AI never pushes without sign-off.
✦ Enterprise & compliance ready
🧩
03
Skills System — Institutional Knowledge
Encode your team's best practices, architectural patterns, and domain knowledge as reusable AI skills. Skills inject into specific agent nodes. Built-in library of 16 starter skills. Admins approve; developers suggest. Your org's knowledge becomes your AI's advantage.
✦ Knowledge compounds over time
⚙️
04
Custom Agents — Workflow Flexibility
Different teams need different pipelines. A security team needs OWASP review. A startup needs Quick Triage. Run any subset of the 10 nodes, swap LLM providers per agent, inject custom system prompts. Ship different "modes" for different workflows.
✦ Seeded: Quick Triage · Full Fix · Security Review
Feature Set

Everything a team needs.
Nothing it doesn't.

🔎
Semantic Code Search
Zoekt-powered full-text search runs before the LLM — never relying on model guesswork. Files are found, not hallucinated.
📊
Sprint Planning Dashboard
KPIs, aging buckets, per-developer capacity, at-risk tickets, throughput trends. Real-time engineering health at a glance.
🔭
Full Observability
Every agent run traced via Phoenix/Arize (OpenInference). Token budgets, latency, LLM calls, and audit logs — all queryable.
🦊
GitLab Native Integration
Pushes branches, creates merge requests, links commits. PAT-injected secure git — no shell strings, no injection vectors.
🤝
Multi-Provider LLM
Anthropic, OpenAI, or local models (LM Studio / Ollama). Per-agent model assignment. Air-gap deployable with local Devstral.
🛡️
RBAC + Audit Logs
Super Admin / Admin / Developer / Auditor roles. Every file view, agent action, and approval gate timestamped and recorded.
Intelligence Layer

The smarter it runs,
the smarter it gets.

Skills System
Institutional knowledge,
encoded as AI fuel.
Admins encode architectural rules, security constraints, and domain patterns as skills. They inject into specific pipeline nodes via a drag-to-assign interface.
🔒 OWASP Top 10
⚡ Performance
🧪 Test Coverage
📝 API Design
🗃️ DB Patterns
🔁 Retry Logic
♿ Accessibility
📦 Bundle Size
16 built-in skills · 9 categories · Scoped to nodes
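Conceptually, node-scoped skill injection looks like the sketch below. The field names (`instructions`, `scopedNodes`) are hypothetical, not the product's actual schema — the point is that each pipeline node's prompt is augmented only with the skills an admin scoped to it:

```typescript
// Hypothetical shape of a skill record; field names are illustrative.
interface Skill {
  id: string;
  category: string;
  instructions: string;  // best practice encoded as prompt text
  scopedNodes: string[]; // pipeline nodes this skill injects into
}

const skills: Skill[] = [
  {
    id: "owasp-top10",
    category: "security",
    instructions: "Check every input path against the OWASP Top 10.",
    scopedNodes: ["analyze_root_cause", "plan_fix"],
  },
  {
    id: "retry-logic",
    category: "reliability",
    instructions: "Wrap network calls in bounded exponential backoff.",
    scopedNodes: ["edit_code"],
  },
];

// At runtime, a node asks only for the skills scoped to it.
function skillsFor(nodeId: string): Skill[] {
  return skills.filter((s) => s.scopedNodes.includes(nodeId));
}
```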
Custom Agents
The same pipeline,
infinite configurations.
Configure any subset of nodes, override system prompts, assign LLM providers per agent, scope to teams or projects. Three seeded starters ship out of the box.
{
  "name": "Security Review",
  "enabledNodes": ["intake","understand",
    "find_code","analyze_root_cause","plan_fix"],
  "modelProvider": "anthropic",
  "modelName": "claude-opus-4-6",
  "autoSkillIds": ["owasp-top10","input-validation"],
  "scope": "PROJECT"
}
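A sketch of how a config like the one above narrows the pipeline — property names follow that JSON, while `ALL_NODES` and `activeNodes` are illustrative, not the product's internals. A read-only agent like Security Review never reaches the edit or push nodes:

```typescript
// Illustrative: derive the active node subset from an agent config.
interface AgentConfig {
  name: string;
  enabledNodes: string[];
  modelProvider: string;
  modelName: string;
  autoSkillIds: string[];
  scope: "PROJECT" | "TEAM" | "GLOBAL";
}

// Canonical node order (assumed here for the sketch).
const ALL_NODES = [
  "intake", "understand", "clarify_check", "find_code",
  "analyze_root_cause", "plan_fix", "edit_code", "gen_tests",
  "push_branch",
];

// Keep canonical order, drop anything the agent didn't enable.
function activeNodes(config: AgentConfig): string[] {
  const enabled = new Set(config.enabledNodes);
  return ALL_NODES.filter((n) => enabled.has(n));
}

const securityReview: AgentConfig = {
  name: "Security Review",
  enabledNodes: ["intake", "understand", "find_code",
    "analyze_root_cause", "plan_fix"],
  modelProvider: "anthropic",
  modelName: "claude-opus-4-6",
  autoSkillIds: ["owasp-top10", "input-validation"],
  scope: "PROJECT",
};
```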
By the Numbers

The results speak
for themselves.

96%
FASTER RESOLUTION
vs. manual triage + fix
10×
THROUGHPUT / DEV
bugs resolved per sprint
4h
MEAN TIME-TO-FIX
down from 4.2 days
100%
AUDIT COVERAGE
every action logged
Technical Foundation

A moat built
in the stack.

The architecture choices aren't accidental — each layer was chosen to create compounding advantages that are expensive for competitors to replicate.

🦜
LangGraph
Persistent checkpointed graph execution
Orchestration
🔭
Phoenix / Arize
OpenInference OTLP observability
Observability
🔍
Zoekt
Trigram-indexed code search
Search
FastAPI + Next.js
Async Python backend, App Router frontend
Platform
🧩
Knowledge Compounds
Every skill added by your team makes future agents smarter. The system improves the longer you use it — a flywheel competitors can't copy because it's your institutional knowledge.
🔒
Air-Gap Deployable
Full local LLM support via LM Studio / Ollama means banks, defense contractors, and regulated industries can run Tickets·Agent entirely on-premises — no data ever leaves the building.
📍
Checkpoint-Resumable
LangGraph's PostgresSaver persists agent state across restarts. Crashed runs resume from the last checkpoint — not from scratch. Enterprise reliability without custom infrastructure.
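The resume-from-checkpoint idea can be shown with a dependency-free sketch. The product uses LangGraph's PostgresSaver; the in-memory saver, node list, and `crashAfter` flag below are all illustrative stand-ins:

```typescript
// Sketch: persist state after each node, resume from the last checkpoint.
interface Checkpoint {
  lastNode: string;
  state: Record<string, unknown>;
}

class InMemorySaver {
  private store = new Map<string, Checkpoint>();
  save(runId: string, cp: Checkpoint) { this.store.set(runId, cp); }
  load(runId: string): Checkpoint | undefined { return this.store.get(runId); }
}

const NODES = ["intake", "understand", "plan_fix", "edit_code"];

// Runs the pipeline, checkpointing after each node; `crashAfter`
// simulates the process dying mid-run. Returns the nodes executed
// in this invocation.
function run(saver: InMemorySaver, runId: string, crashAfter?: string): string[] {
  const resume = saver.load(runId);
  const start = resume ? NODES.indexOf(resume.lastNode) + 1 : 0;
  const executed: string[] = [];
  for (const node of NODES.slice(start)) {
    executed.push(node);
    saver.save(runId, { lastNode: node, state: {} });
    if (node === crashAfter) break; // simulated crash
  }
  return executed;
}
```

A run that dies after `understand` restarts at `plan_fix`, not at `intake` — the checkpoint, not the caller, decides where work resumes.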
🎯
Search Before LLM
Zoekt pre-search runs before any LLM call. The model never guesses file locations. Zero hallucinated paths. This structural accuracy advantage is invisible to users but transformative in practice.
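The search-before-LLM pattern reduces to this: file paths come from a deterministic index, never from the model. In the sketch below, `searchIndex` is a stub standing in for a Zoekt query (the real Zoekt API differs), and the corpus entries are invented examples:

```typescript
// Sketch: prompt context is built only from index hits, so the model
// cannot hallucinate a file location.
type SearchHit = { path: string; line: number; snippet: string };

function searchIndex(query: string): SearchHit[] {
  // Stand-in for a trigram-index lookup (e.g. Zoekt).
  const corpus: SearchHit[] = [
    { path: "src/ws/handler.ts", line: 12, snippet: "ws.on('message', ..." },
    { path: "src/ws/client.ts", line: 40, snippet: "reconnect()" },
  ];
  return corpus.filter((h) => h.snippet.includes(query) || h.path.includes(query));
}

// Every path cited in the prompt was returned by the index, verbatim.
function buildPrompt(ticket: string, query: string): string {
  const hits = searchIndex(query);
  const context = hits
    .map((h) => `${h.path}:${h.line} — ${h.snippet}`)
    .join("\n");
  return `Ticket: ${ticket}\nRelevant code (from index, verbatim):\n${context}`;
}
```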
The Opportunity

The future of
engineering ops
is autonomous.

Every engineering team in the world has a bug backlog. We've built the AI that systematically eliminates it. Join us in making manual bug triage a relic of the past.

Seed
FUNDING STAGE
$2M
RAISING
18mo
RUNWAY
GTM
USE OF FUNDS