Workflows Intermediate

This lesson covers practical workflows for using the Coding Agent across different development scenarios: bug fixing, feature implementation, test writing, refactoring, documentation, multi-issue projects, CI/CD integration, and team adoption.

Bug Fixing Workflow

Bug fixes are one of the strongest use cases for the Coding Agent because they are typically well-scoped and have clear success criteria (the bug is fixed and tests pass).

  1. Create a detailed bug report issue

    Include the error message, stack trace, steps to reproduce, expected vs actual behavior, and the file where the bug likely lives.

  2. Assign to Copilot

    The agent will analyze the bug report, find the root cause, and implement a fix.

  3. Verify the fix includes a regression test

    In your acceptance criteria, always request a test that would have caught the bug. This prevents regressions.

  4. Review and merge

    Verify the fix addresses the root cause (not just the symptom) and that the regression test is meaningful.

Bug Fix Issue Template
## Bug Report

**What happens:** Users with email addresses containing
a "+" character cannot register.

**Expected:** Registration should accept valid emails
like user+tag@example.com.

**Error:**
```
ValidationError: Invalid email format
  at validateEmail (src/utils/validation.ts:23)
```

**Root cause:** The email regex in
`src/utils/validation.ts` line 23 does not include
the "+" character in the local part pattern.

**Acceptance criteria:**
- [ ] Fix email regex to accept "+" in local part
- [ ] Add test cases for emails with "+" signs
- [ ] Ensure existing email validation tests still pass
- [ ] Test with: user+tag@example.com, a+b+c@test.org

**Assignee:** @copilot
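For a bug like the one above, the agent's fix plus regression test might look something like this sketch. The original regex, the function name, and the file contents are assumptions for illustration; they are not the actual `src/utils/validation.ts`.

```typescript
// Illustrative sketch of the kind of fix the agent might produce.
// Before (hypothetical): /^[a-zA-Z0-9._-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$/
// The local part omits "+", so user+tag@example.com is rejected.

// After: "+" added to the local-part character class.
const EMAIL_RE = /^[a-zA-Z0-9._+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$/;

export function validateEmail(email: string): boolean {
  return EMAIL_RE.test(email);
}

// Regression checks taken from the issue's acceptance criteria:
console.assert(validateEmail("user+tag@example.com"));
console.assert(validateEmail("a+b+c@test.org"));
console.assert(!validateEmail("not-an-email"));
```

Note that the fix targets the root cause (the character class) rather than special-casing individual addresses, which is exactly what the review step should verify.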

Feature Implementation Workflow

For new features, the key is breaking the work into small, well-defined units that the agent can handle independently.

Strategy: Large features should be broken into multiple issues, each assigned separately. For example, "Add user profile page" could be split into: "Add GET /api/users/:id/profile endpoint", "Add profile page component", "Add profile edit form", and "Add profile image upload".

  1. Break the feature into discrete tasks

    Each task should be completable independently and have clear inputs/outputs.

  2. Define the interface first

    Specify API contracts, function signatures, or component props in the issue. The agent works best when the interface is predefined.

  3. Include implementation hints

    Reference existing patterns in the codebase. "Follow the same pattern as src/routes/orders.ts" is extremely helpful.

  4. Assign issues in dependency order

    Start with backend/data layer issues, then assign UI/integration issues once dependencies are merged.

Feature Issue Example
## Feature: Add search endpoint for products

**Context:** We need a search endpoint that supports
full-text search across product names and descriptions.

**Implementation:**
- Add GET /api/products/search?q=<query> endpoint
- Use the existing ProductService pattern (see src/services/)
- Use PostgreSQL full-text search (tsvector/tsquery)
- Return paginated results (follow orders pattern)

**Files to create/modify:**
- `src/routes/products.ts` - Add search route
- `src/services/ProductService.ts` - Add search method
- `src/migrations/xxx_add_search_index.ts` - Add GIN index
- `tests/routes/products.test.ts` - Add search tests

**Acceptance criteria:**
- [ ] Search returns relevant results ranked by relevance
- [ ] Pagination works (page, limit params)
- [ ] Empty query returns 400 error
- [ ] Special characters are handled safely (no SQL injection)
- [ ] Unit and integration tests included
- [ ] Response time < 200ms for typical queries
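The service-layer piece of an issue like this could be sketched as a pure query builder. The table and column names (`products`, `search_vector`) are assumptions; the point is that `plainto_tsquery` plus positional parameters covers both the relevance-ranking and SQL-injection acceptance criteria.

```typescript
// Sketch of a search-query builder for a hypothetical ProductService.
// Table/column names are illustrative. The $1..$3 placeholders keep raw
// user input out of the SQL string (no injection), and plainto_tsquery
// safely converts free text into a tsquery.

interface SearchQuery {
  text: string;
  values: (string | number)[];
}

export function buildProductSearch(q: string, page = 1, limit = 20): SearchQuery {
  if (q.trim() === "") {
    // Maps to the "empty query returns 400" criterion.
    throw new Error("400: query must not be empty");
  }
  const offset = (page - 1) * limit;
  return {
    // ts_rank orders results by relevance against the GIN-indexed tsvector.
    text: `SELECT id, name, description,
                  ts_rank(search_vector, plainto_tsquery('english', $1)) AS rank
           FROM products
           WHERE search_vector @@ plainto_tsquery('english', $1)
           ORDER BY rank DESC
           LIMIT $2 OFFSET $3`,
    values: [q, limit, offset],
  };
}
```

Spelling out this shape in the issue (signature, parameters, error behavior) is the "define the interface first" step from the workflow above.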

Test Writing Workflow

The Coding Agent excels at writing tests for existing code. This is a high-value, low-risk use case because the agent reads existing code and generates tests that validate current behavior.

Test Writing Issue
## Add unit tests for UserService

**Context:** The UserService class in
`src/services/UserService.ts` has 0% test coverage.
It handles user CRUD operations.

**Requirements:**
- Write comprehensive unit tests in
  `tests/services/UserService.test.ts`
- Use the existing test setup (jest, see jest.config.ts)
- Mock the database layer (see tests/mocks/db.ts)
- Cover all public methods: create, findById, update, delete
- Include happy paths, error cases, and edge cases
- Target at least 90% line coverage for the file

**Patterns to follow:**
- See `tests/services/OrderService.test.ts` for the testing pattern
- Use `describe/it` blocks grouped by method
- Use `beforeEach` for test setup

Note: When asking the agent to write tests, always reference your existing test patterns and frameworks. The agent produces much better results when it can follow established conventions in your codebase.

Refactoring Workflow

Refactoring tasks work well when they are focused and behavior-preserving. The key is to have good test coverage before the refactor so the agent can validate that behavior is unchanged.

| Refactoring Type | Agent Suitability | Tips |
| --- | --- | --- |
| Rename variable/function | Excellent | Specify old and new names; the agent handles all references |
| Extract function/method | Good | Describe what logic to extract and what the function signature should be |
| Move code between files | Good | Specify source and destination; the agent updates imports |
| Simplify conditional logic | Good | Reference the specific block of code and desired simplification |
| Architectural restructure | Risky | Break into smaller steps; may require human judgment |
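As a concrete example of the "extract function/method" row, an issue might ask the agent to pull pricing logic out of a checkout routine behind a named signature. All names here are illustrative, and the inline assertions show the behavior-preserving check the tests should make.

```typescript
// Before (illustrative): discount logic inlined in checkout.
// function checkout(items: Item[]): number {
//   let total = 0;
//   for (const item of items) {
//     total += item.price * item.qty * (item.qty >= 10 ? 0.9 : 1);
//   }
//   return total;
// }

interface Item {
  price: number;
  qty: number;
}

// After: the bulk-discount rule extracted behind the signature the
// issue would specify.
export function lineTotal(item: Item): number {
  const bulkDiscount = item.qty >= 10 ? 0.9 : 1;
  return item.price * item.qty * bulkDiscount;
}

export function checkout(items: Item[]): number {
  return items.reduce((total, item) => total + lineTotal(item), 0);
}

// Behavior-preserving: totals match the pre-refactor logic.
console.assert(checkout([{ price: 5, qty: 2 }]) === 10);
console.assert(checkout([{ price: 5, qty: 10 }]) === 45);
```

This is why good test coverage before the refactor matters: the existing suite, not the diff, is what proves behavior is unchanged.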

Documentation Workflow

Generating documentation is a great task for the agent because it reads existing code and produces descriptive text without changing functionality.

Documentation Issue
## Add JSDoc comments to ProductService

**Task:** Add comprehensive JSDoc comments to all
public methods in `src/services/ProductService.ts`.

**Requirements:**
- Add @param, @returns, @throws tags
- Include brief description of what each method does
- Add @example blocks for non-obvious usage
- Follow TSDoc standard
- Do NOT change any functional code

**Reference:** See `UserService.ts` for a JSDoc style example
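The kind of comment this issue is asking for might look like the following. The method itself is a hypothetical stand-in, not part of `ProductService`; only the tag structure is the point.

```typescript
/**
 * Applies a percentage discount to a price.
 *
 * (Hypothetical method, included only to illustrate the requested
 * JSDoc/TSDoc style.)
 *
 * @param price - Original price in cents; assumed non-negative.
 * @param percent - Discount percentage in the range 0-100.
 * @returns The discounted price in cents, rounded down.
 * @throws {RangeError} If `percent` is outside the 0-100 range.
 *
 * @example
 * discountedPrice(1000, 25); // 750
 */
export function discountedPrice(price: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new RangeError("percent must be between 0 and 100");
  }
  return Math.floor(price * (1 - percent / 100));
}
```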

Multi-Issue Projects

For larger projects, you can assign multiple issues to Copilot simultaneously; the agent works on each one independently and in parallel.

Strategy for Multi-Issue Work:
  • Group by dependency. Assign independent tasks in parallel, but sequence dependent tasks.
  • Use milestones. Group related issues under a GitHub Milestone to track overall progress.
  • Review in order. Merge foundational PRs first, then review PRs that build on them.
  • Resolve conflicts early. If two agent PRs touch the same files, merge one first and ask the agent to rebase the other.

Example project breakdown for adding a "Favorites" feature:

| Issue # | Task | Dependencies | Parallel? |
| --- | --- | --- | --- |
| #301 | Create favorites database table and migration | None | Yes |
| #302 | Add FavoritesService with CRUD methods | #301 | No |
| #303 | Add REST API endpoints for favorites | #302 | No |
| #304 | Write unit tests for FavoritesService | #302 | No |
| #305 | Add favorite button UI component | #303 | No |

Integration with CI/CD Pipelines

The Coding Agent integrates naturally with your CI/CD pipeline. When the agent creates a PR, your existing CI checks run automatically. Here is how to optimize this integration:

  1. Ensure CI runs on PRs

    Your CI workflow should trigger on pull_request events. This is how the agent validates its changes.

  2. Add fast feedback checks

    Include linting, type checking, and unit tests as separate jobs that run quickly. This gives the agent rapid feedback on obvious issues.

  3. Require status checks

    Set up branch protection rules that require CI checks to pass before merging. This prevents broken agent PRs from being merged.

  4. Add code coverage reporting

    Use tools like Codecov or Coveralls to track whether agent PRs maintain or improve test coverage.

.github/workflows/ci.yml (Optimized for Agent)
name: CI
on:
  pull_request:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npm run lint
      - run: npm run typecheck

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npm test -- --coverage
      - name: Upload coverage
        uses: codecov/codecov-action@v4

Combining with Human Review

The most effective workflow combines agent automation with human oversight. Here is a recommended team process:

  1. Human triages and writes the issue

    A team member identifies the work, writes a well-structured issue with acceptance criteria, and assigns it to Copilot.

  2. Agent implements and creates PR

    The Coding Agent works on the issue and creates a pull request.

  3. CI validates automatically

    Automated tests, linting, and type checks run on the PR.

  4. Human reviews the PR

    A team member reviews the code for correctness, style, security, and adherence to the original requirements.

  5. Iterate if needed

    If changes are needed, the reviewer comments with specific feedback. The agent can address review comments.

  6. Human approves and merges

    Once satisfied, the reviewer approves and merges the PR.

Important: Never auto-merge agent PRs without human review. The Coding Agent is a tool that augments your team, not a replacement for code review. Always have a human verify the changes before merging.
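Branch protection can back this up mechanically: a CODEOWNERS file combined with a "require review from Code Owners" branch protection rule means agent PRs cannot merge until a named human team approves. A minimal sketch, where the team names are placeholders for your own:

```
# .github/CODEOWNERS
# With "Require review from Code Owners" enabled in branch protection,
# every PR, including agent PRs, needs approval from these teams.
# Team names below are assumptions; substitute your own.
*               @your-org/reviewers

# Tighter ownership for sensitive areas:
/src/auth/      @your-org/security-team
/.github/       @your-org/platform-team
```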

Team Adoption Strategies

When rolling out the Coding Agent to your team, consider a gradual adoption approach:

| Phase | Activities | Duration |
| --- | --- | --- |
| Phase 1: Pilot | One or two team members try the agent on low-risk tasks (tests, docs, small bug fixes). Document lessons learned. | 1-2 weeks |
| Phase 2: Expand | More team members start using the agent. Create issue templates and internal guidelines. Share successful patterns. | 2-4 weeks |
| Phase 3: Integrate | Agent becomes part of the regular workflow. Team uses it for routine tasks while focusing human effort on complex work. | Ongoing |

Try a Workflow

Pick one of the workflows above and try it on your project. Start with a bug fix or test writing task, as these tend to produce the best initial results. Track the time saved compared to manual implementation.