
Claude Workflow Automation Recipes


Every developer I know has a Claude tab open right now, but few have invested in Claude workflow automation to stop manually pasting the same prompts, copying outputs into the next step, and repeating the whole sequence tomorrow. That's not using Claude. That's paying an LLM to watch you do data entry.

Claude's API is built for end-to-end orchestration: prompt chaining, tool use, event-driven hooks, and direct integration with your CI/CD pipeline. Most developers barely scratch the surface of what's available. Here at Claudinhos, we run Claude-powered automations internally to handle content research, outline generation, and draft pipelines. Everything in this article comes from production use, not theory. By the end, you'll have working recipes for chained prompts, tool use, hooks, no-code integrations, and GitHub Actions, plus a clear testing strategy before anything goes live.

What Claude's API actually gives you for automation

The core execution model is straightforward: send a prompt, receive a response or tool call, observe the result, and iterate. That loop is what every Claude workflow automation runs on. Understanding it as a deterministic, controllable system rather than a black box changes how you design pipelines.

Claude's API is stateless by design. Unlike the chat interface, which manages session history for you, each API call starts completely fresh unless you pass prior context explicitly in the messages array. This shapes every architectural decision you make when building automations.
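Because of that statelessness, your automation has to carry history forward itself. A minimal sketch of managing the messages array between calls (plain Python dicts, no SDK assumed, with hypothetical ticket content):

```python
def build_messages(history, new_user_input):
    """Return the full messages array for one stateless API call:
    all prior turns plus the new user message."""
    return history + [{"role": "user", "content": new_user_input}]

def record_turn(history, user_input, assistant_reply):
    """Append both sides of a completed exchange so the next call
    can see them."""
    return history + [
        {"role": "user", "content": user_input},
        {"role": "assistant", "content": assistant_reply},
    ]

# Each request must resend everything Claude should remember.
history = []
history = record_turn(history, "Summarize ticket #123",
                      "Login fails after password reset.")
messages = build_messages(history, "Draft a reply to the customer")
```

The same two helpers work whether the state lives in memory, a database row, or a queue message between pipeline steps.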

Every Claude automation is some combination of three building blocks: chained prompts (where one call's output feeds the next), registered tools (functions Claude can invoke, with your code handling execution), and managed context (the history and state you pass between steps). Get comfortable with all three and every recipe below becomes easier to read, adapt, and debug on the fly.

Chaining prompts for Claude workflow automation

Prompt chaining works by taking Claude's output from one API call and passing it as input to the next, with a clear task boundary at each step. There are two patterns worth knowing: sequential chains, where each output feeds directly into the next prompt, and branching chains, where the output determines which path fires next.
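As a sketch, assuming a `call_claude` helper that wraps one Messages API call and returns the reply text (the helper and the prompts are illustrative, not from the source):

```python
def sequential_chain(call_claude, article_text):
    """Sequential chain: each output feeds directly into the next prompt."""
    outline = call_claude(f"Outline the key points of:\n{article_text}")
    summary = call_claude(f"Write a one-paragraph summary from this outline:\n{outline}")
    return summary

def branching_chain(call_claude, ticket_text):
    """Branching chain: the first output determines which path fires next."""
    label = call_claude(
        f"Classify as 'bug' or 'question' (reply with one word):\n{ticket_text}"
    ).strip().lower()
    if label == "bug":
        return call_claude(f"Draft a bug-triage reply for:\n{ticket_text}")
    return call_claude(f"Draft a helpful answer for:\n{ticket_text}")
```

Passing `call_claude` in as a parameter keeps each chain testable with a fake function before you spend tokens on the real API.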

Handling tool use and structured outputs

Tool use is what separates conversational Claude from agentic Claude. You register function definitions in the API request using JSON schema. Claude decides when to call them based on context, and your code executes the actual function and returns results. This is the foundation of every real Claude automation pipeline that interacts with external systems.

A basic tool definition for a linting tool looks like this:

tool-definition.json
{
  "name": "run_linter",
  "description": "Runs ESLint on a specified file and returns the output",
  "input_schema": {
    "type": "object",
    "properties": {
      "file_path": {
        "type": "string",
        "description": "The path to the file to lint"
      }
    },
    "required": ["file_path"]
  }
}

When Claude responds with a tool call, the stop_reason field will be tool_use instead of end_turn. Check that field first, extract the tool name and input, route to your handler function, then return the result in the tool_result format for the next API call. Missing this detection step is the most common bug in first-time tool use implementations.
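The detection-and-routing step can be sketched like this, working on the raw response dict from the Messages API (the `handlers` mapping and linter handler are our own code, not part of the API):

```python
def handle_response(response, handlers):
    """Route a Claude response: execute tool calls locally and return
    tool_result blocks for the follow-up request, or return final text."""
    if response["stop_reason"] != "tool_use":
        # Plain completion: concatenate text blocks and stop.
        return {"done": True, "text": "".join(
            b["text"] for b in response["content"] if b["type"] == "text")}
    results = []
    for block in response["content"]:
        if block["type"] != "tool_use":
            continue
        # Execute the named handler with the inputs Claude supplied.
        output = handlers[block["name"]](**block["input"])
        results.append({
            "type": "tool_result",
            "tool_use_id": block["id"],  # must echo the id back
            "content": output,
        })
    return {"done": False, "tool_results": results}
```

The `tool_results` list goes back to Claude as the content of a user-role message on the next call, which is how the loop closes.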

Sometimes you don't need tool calls at all. When you just need clean JSON for downstream processing, use a system prompt that explicitly constrains the output format with an example. This avoids the overhead of registering formal tools for simple extraction tasks and works reliably when the output schema is static.
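A sketch of that pattern: the system prompt pins the schema with an example, and a small parser tolerates the fenced code block Claude sometimes wraps JSON in (the ticket fields here are hypothetical):

```python
import json

# System prompt that constrains output to a fixed JSON shape
# (hypothetical extraction task).
SYSTEM = (
    "Extract fields from the support ticket. Respond with ONLY valid JSON, "
    "no prose, matching exactly this shape:\n"
    '{"category": "billing", "urgency": "high", "summary": "one sentence"}'
)

def parse_reply(raw):
    """Parse Claude's reply, stripping a ```json fence if present."""
    text = raw.strip()
    if text.startswith("```"):
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(text)
```

If parsing fails, re-prompting with the parse error appended usually recovers in one retry; anything more elaborate is a sign you want a formal tool definition instead.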

Connecting Claude automations to Zapier, Make, and external services

No-code connectors like Zapier and Make are useful for Claude automations that involve SaaS triggers: a new email arrives, a form is submitted, a database row is added. The pattern is webhook-based. The trigger fires, the payload hits a Claude API call via the HTTP module, and the response routes to the next step. Non-developers can build real LLM workflows here without touching code.

No-code tools hit limits fast. There's no retry logic, no branching on tool call responses, and no real context management. When your automation needs any of those, a 50-line Node.js script gives you more control and lower cost than a multi-module Zapier workflow with premium action steps.

Setting up a Claude API call in Make

Setting up a Claude API call in Make takes five steps. First, add an HTTP module and set the URL to https://api.anthropic.com/v1/messages with method POST. Second, add these headers:

  • x-api-key: your Anthropic API key
  • anthropic-version: 2023-06-01
  • content-type: application/json

Third, set the body type to Raw and paste your JSON message structure. Fourth, map dynamic data from earlier modules into the content field using Make's variable panel. Fifth, run once to test and check the content[0].text field in the output.
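The raw body for step three might look like this (the model name is illustrative, and `{{1.ticket_text}}` stands for a variable you map in from an earlier module):

```json
{
  "model": "claude-sonnet-4-5",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": "Classify this support ticket and draft a reply:\n\n{{1.ticket_text}}"
    }
  ]
}
```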

A concrete scenario: a Make automation that takes a raw support ticket, sends it to Claude for classification and a draft reply, then posts the result to Slack. That's a three-module setup that handles a real production workload, and also the point where no-code complexity starts to compound.

Setting up Claude workflow automation with hooks and CI/CD

Hooks are event-driven scripts that run at lifecycle points in Claude Code: after a file edit, before a bash command executes, when the agent stops. They live in .claude/settings.json for shared team settings or .claude/settings.local.json (gitignored) for personal configuration. The basic structure uses three fields: event, command, and an optional if filter.

Post-edit hook: auto-formatting with Prettier

A post-edit hook that runs Prettier automatically on any file Claude touches:

.claude/settings.json
{
  "hooks": [
    {
      "event": "post_edit",
      "command": ["prettier", "--write", "${file_path}"],
      "if": "tool_name == 'edit_file'"
    }
  ]
}

For security, a pre-edit hook that prevents Claude from modifying .env files should exit with code 2 to block the action and return an explanation. This is the guardrail that catches the most common automation mistakes before they reach production. No manual review required: the hook refuses the action automatically.
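Mirroring the post-edit structure above, a sketch of that guardrail (the `pre_edit` event name and the shell one-liner are assumptions; check the hook schema in your Claude Code version before relying on it):

.claude/settings.json
```json
{
  "hooks": [
    {
      "event": "pre_edit",
      "command": ["bash", "-c", "case \"${file_path}\" in *.env*) echo 'Blocked: .env files are off-limits' >&2; exit 2;; esac"],
      "if": "tool_name == 'edit_file'"
    }
  ]
}
```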

GitHub Actions: automated PR review with Claude

Wiring Claude into GitHub Actions for automated PR reviews uses the anthropics/claude-code-action@v1 step. The minimal YAML that triggers a Claude-powered review on every pull request open event:

.github/workflows/claude-review.yml
name: Claude PR Review
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: "Review this PR for code quality, correctness, and security."
        env:
          CLAUDE_ARGS: "--max-turns 5"

Scope the trigger carefully. Running on every comment event generates unnecessary API calls and token costs. Stick to opened and synchronize for PR reviews. Each triggered workflow consumes API tokens, and those costs compound quickly on active repositories with multiple open PRs.

Testing, safety gates, and production best practices

Test-driven development is not optional for Claude automations in production. Write the tests first, confirm they fail, commit them, then let Claude implement. This matters because Claude will occasionally modify tests to make them pass rather than fixing the underlying logic. Committed tests are a checksum on that behavior. For frontend workflows, add a Puppeteer screenshot comparison step to catch visual regressions that unit tests miss.

Context management is the second highest-impact lever for production reliability. Claude's performance degrades noticeably once context exceeds 60–70% of the window: instructions start getting ignored, and basic coding errors creep in. Compact context manually at around 50% usage rather than waiting for auto-compaction, and commit frequently within long automation sessions to create checkpoints you can return to. These two habits eliminate the most common failure mode in Claude-based pipelines.

On cost and scale: token-per-minute caps hit before request-per-minute caps in most production workloads. Extended thinking modes and tool use escalate costs quickly, and a Pro plan can exhaust in under an hour of active automation. Build queueing and backoff accordingly.

For anything beyond single-user pipelines, implement a queue-based architecture with batching and exponential backoff. Set team budget thresholds in Anthropic's usage dashboard before deploying. Developers who build with cost visibility from day one avoid the billing surprises that derail production rollouts.
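A minimal sketch of the backoff half of that architecture (the `RateLimited` exception is a stand-in for however your HTTP layer surfaces a 429):

```python
import random
import time

class RateLimited(Exception):
    """Stand-in for the API's 429 rate-limit error."""

def call_with_backoff(fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a rate-limited call with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimited:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the queue
            # 1s, 2s, 4s, ... plus jitter to avoid a thundering herd
            sleep(base_delay * (2 ** attempt) + random.random() * 0.5)
```

Injecting `sleep` as a parameter keeps the retry logic unit-testable without real delays, which matters once this sits inside a queue worker.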

Building reliable Claude automations: the path forward

Automating tasks with Claude is not about prompting a chat window faster. It's about designing a system with clear inputs, reliable tool definitions, tested outputs, and appropriate guardrails before anything reaches production. A solid Claude workflow automation stack tends to follow the same progression: prompt chaining for sequential tasks, tool use when you need dynamic actions and external system calls, hooks and GitHub Actions for event-driven triggers, and no-code services filling gaps where they genuinely save time. Testing is built in from the start, not bolted on later.

At Claudinhos (tsunode × Claude Blog), this exact stack powers the research and drafting pipelines behind every article. The patterns here aren't theoretical — they're running right now in a real project, with real token costs and real failure modes we've debugged and fixed. The recipes above are the ones that survived contact with production.

The developers who invest in Claude workflow automation now compound that advantage as the models improve and the API surface expands. Every automation you ship teaches you how to design the next one better. Start with one recipe, get it to production, and iterate from there.