
Applied AI Research

We build tools and methods for a world where agents do the work and humans do the thinking.

Agents grow more capable by the day, but our tools, workflows, and management practices are not keeping up. We are building new tools for you and for your agents.

Tools for humans
Tools for agents
Production methodology
Open source by default

Thesis

Agent capability is no longer the only bottleneck.

The surrounding system matters now: tools, workflows, review loops, permissions, management practices, and shared language for what good agent work looks like.

Aesoteric studies that system and turns the useful parts into software, methods, and public artifacts.

agent-workflow.ts

agent_run.create({
  owner: "human judgment",
  scope: "bounded production work",
  exit: "evidence + decision"
})

method.ship({
  tools: "observable",
  review: "targeted",
  safety: "designed into the workflow"
})

Tools

We build for the person and the agent.

Agent-native software needs to help humans think clearly while giving agents the structure they need to act reliably.

Agent workbenches

Interfaces for assigning work, following progress, reviewing evidence, and deciding what deserves human attention.

Delegation systems

Patterns for splitting goals into bounded agent work with clear ownership, constraints, and review points.
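A delegation pattern like this could be modeled as plain data. The sketch below is illustrative only; the type and function names are hypothetical, not part of any Aesoteric tool:

```typescript
// Hypothetical shape for one bounded unit of delegated agent work.
interface Delegation {
  goal: string;            // what the agent is asked to achieve
  owner: string;           // the human accountable for the outcome
  constraints: string[];   // tools, data, and actions the agent may use
  reviewPoints: string[];  // where a human inspects before work continues
}

// Splitting a goal into delegations forces the constraints to be
// stated explicitly before any production work begins.
function delegate(
  goal: string,
  owner: string,
  constraints: string[],
  reviewPoints: string[]
): Delegation {
  if (constraints.length === 0) {
    throw new Error("a delegation must state its constraints before work begins");
  }
  return { goal, owner, constraints, reviewPoints };
}
```

The point of the shape is that ownership and scope are data, not tribal knowledge: a delegation with no constraints is rejected outright.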

Evaluation loops

Methods for testing agent output against intent, quality, safety, reliability, and the cost of oversight.
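One way to make the cost of oversight part of the loop is to treat review time as a first-class metric. A minimal sketch, with all names hypothetical:

```typescript
// Hypothetical evaluation record for one piece of agent output.
interface EvalResult {
  matchesIntent: boolean;   // does the output do what was asked?
  safe: boolean;            // does it stay inside the safety boundaries?
  oversightMinutes: number; // human time spent reviewing this output
}

// An output passes only if it meets intent and safety AND the review
// cost stays under an explicit budget — cheap-to-verify is a feature.
function passes(result: EvalResult, oversightBudgetMinutes: number): boolean {
  return (
    result.matchesIntent &&
    result.safe &&
    result.oversightMinutes <= oversightBudgetMinutes
  );
}
```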

Traceable production

Operational practices for systems where the work product may be generated, changed, and verified by agents.

Safety boundaries

Guardrails for permissions, secrets, data access, approvals, and reversibility when agents act on real systems.
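A guardrail of this kind can be expressed as a simple policy check before an agent acts. This is a sketch under assumed semantics (protected targets, reversibility, human approval), not a real permission system:

```typescript
// Hypothetical description of an action an agent wants to take.
interface Action {
  target: string;      // the system acted on, e.g. "prod-db" or "staging"
  reversible: boolean; // can this action be rolled back?
  approved: boolean;   // has a human explicitly approved it?
}

// Policy: on protected targets, an action must be reversible or
// human-approved; everywhere else the agent may proceed.
function allowed(action: Action, protectedTargets: Set<string>): boolean {
  if (!protectedTargets.has(action.target)) return true;
  return action.reversible || action.approved;
}
```

The useful property is asymmetry: irreversible, unapproved actions on real systems are blocked by default rather than caught in review.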

Public artifacts

Open-source tools, reference workflows, and field notes that make agent-native work easier to inspect and repeat.

Methodology

Not just faster software. Correct software.

We care about methodology: how work is delegated, how correctness is established, how safety boundaries are maintained, and how humans stay responsible without becoming the bottleneck.

01

Define the human judgment

Start from the decision the human should make, then design the agent work around producing the right evidence for that decision.

02

Bound the agent's authority

Give agents enough room to work, but make the scope, tools, data, and exit criteria explicit before production work begins.

03

Instrument the work

Capture the plan, actions, files, approvals, and verification trail so agent work can be audited without reading every line.

04

Verify outcomes, not vibes

Measure whether the result is correct, durable, and safe to ship, then feed that learning back into the workflow.
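The four steps above could be sketched as one instrumented run. Everything here is illustrative; none of these names come from a real Aesoteric API:

```typescript
// Illustrative shape of one agent run, following the four steps:
// define the judgment, bound the authority, instrument, verify.
interface AgentRun {
  decision: string;  // 01: the human judgment this run exists to serve
  scope: string[];   // 02: tools and data the agent may touch
  trail: string[];   // 03: recorded plan, actions, and approvals
  verified: boolean; // 04: outcome checked against the decision
}

// 03: every action lands in the audit trail, so the run can be
// reviewed from evidence without reading every line of output.
function record(run: AgentRun, event: string): AgentRun {
  return { ...run, trail: [...run.trail, event] };
}

// 04: verification is a check over the evidence, and its result
// feeds back into whether the work is safe to ship.
function verify(run: AgentRun, check: (trail: string[]) => boolean): AgentRun {
  return { ...run, verified: check(run.trail) };
}
```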

Research questions

The practical questions are still open.

We work on the parts that become urgent when agents do meaningful production work: oversight, trust, management, verification, and the shape of the tools themselves.

  • What does production look like when no human writes or reads the code?
  • What does good people management look like for agents?
  • How can agents operate safely and reliably with minimal oversight?

Open source

We work in public as much as we can.

Most of what we build is open source. That matters because agent methodology should be inspectable, reusable, and improved by the people doing the work.

Code

Tools agents and humans can both use.

Methods

Repeatable workflows for agent-native production.

Notes

Research logs, decisions, and lessons from practice.

Build for the agent-native future.

Talk to Aesoteric about tools, methods, research, or production workflows for a world where agents do more of the work.