Creating Tasks
The quality of the Coding Agent's output depends heavily on how well you describe the task. This lesson covers how to assign issues, write effective descriptions, specify acceptance criteria, monitor progress, and review the generated pull requests.
How to Assign an Issue to Copilot
There are two ways to assign a task to the Coding Agent:
Method 1: Direct Assignment
1. Open or create a GitHub issue. Navigate to the repository's Issues tab and create a new issue or open an existing one.
2. Assign to Copilot. In the right sidebar, click Assignees and select Copilot from the dropdown. Copilot appears as an assignee option in repositories where the Coding Agent is enabled.
Method 2: @copilot Mention
You can also trigger the agent by mentioning @copilot in an issue comment:
```
@copilot Please implement the changes described in this issue.

# Or with additional context:
@copilot Fix this bug. The error happens in the validateEmail() function
in src/utils/validation.ts. The regex doesn't handle emails with plus signs.
```
Writing Good Issue Descriptions
A well-written issue is the single most important factor in getting good results from the Coding Agent. Think of it as writing a specification for a junior developer who has access to your codebase but doesn't know the full context.
Essential Components
| Component | Why It Matters | Example |
|---|---|---|
| Clear title | Gives the agent an immediate understanding of the task scope | "Add pagination to GET /api/products endpoint" |
| Context | Explains the background and motivation for the change | "The products list currently returns all records, causing slow load times for large catalogs" |
| Requirements | Specifies exactly what needs to be built or changed | "Accept page and limit query params, default to page=1, limit=20" |
| File references | Points the agent to the right files to modify | "Modify src/routes/products.ts and src/services/ProductService.ts" |
| Acceptance criteria | Defines how to verify the change is correct | "All existing tests pass, new pagination tests added" |
Specifying Acceptance Criteria
Acceptance criteria tell the agent when the task is "done." Be explicit about what success looks like:
## Acceptance Criteria
- [ ] GET /api/products accepts `page` and `limit` query parameters
- [ ] Default pagination is page=1, limit=20
- [ ] Response includes `total`, `page`, `limit`, and `totalPages` metadata
- [ ] Invalid page/limit values return 400 with descriptive error message
- [ ] SQL query uses OFFSET/LIMIT (not fetching all records)
- [ ] Unit tests cover: default pagination, custom values, edge cases, invalid input
- [ ] Existing tests continue to pass
- [ ] TypeScript types are updated for the new response shape
The agent treats these as a checklist and will try to satisfy each item. This also makes it easy for reviewers to verify the PR against the original requirements.
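Criteria like these map directly onto code. As a rough sketch of what a resulting implementation might look like (the `parsePagination` and `buildMeta` helpers are illustrative assumptions, not code from any real repository):

```typescript
// Illustrative sketch of pagination logic satisfying the criteria above.
interface PageMeta {
  total: number;
  page: number;
  limit: number;
  totalPages: number;
}

// Parse and validate page/limit query params, defaulting to page=1, limit=20.
// Throws on invalid input so the route can answer 400 with a clear message.
function parsePagination(
  query: Record<string, string | undefined>
): { page: number; limit: number } {
  const page = query.page === undefined ? 1 : Number(query.page);
  const limit = query.limit === undefined ? 20 : Number(query.limit);
  if (!Number.isInteger(page) || page < 1) throw new Error(`Invalid page: ${query.page}`);
  if (!Number.isInteger(limit) || limit < 1) throw new Error(`Invalid limit: ${query.limit}`);
  return { page, limit };
}

// Build the response metadata; the SQL layer would then fetch only one page
// using OFFSET (page - 1) * limit LIMIT limit rather than all records.
function buildMeta(total: number, page: number, limit: number): PageMeta {
  return { total, page, limit, totalPages: Math.ceil(total / limit) };
}
```

Keeping the parsing and metadata logic in small pure functions like this also makes the "unit tests cover edge cases" criterion straightforward to satisfy.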
Including Context
The more relevant context you provide, the better the agent performs. Here are effective ways to add context:
Code References
Link to specific files, functions, or lines of code that are relevant to the task:
## Relevant Code
- The products route handler is in `src/routes/products.ts` (lines 45-60)
- The ProductService class is in `src/services/ProductService.ts`
- The Product model is defined in `src/models/Product.ts`
- Similar pagination exists in `src/routes/orders.ts` - follow the same pattern

## Related Issues
- See #142 for how pagination was implemented for the orders endpoint
- Closes #205 (slow product listing)
Screenshots and Error Messages
For bug fixes, include error messages, stack traces, and screenshots when available:
## Error
When a user enters an email like `user+tag@example.com`, the validation fails with:

```
ValidationError: Invalid email format
    at validateEmail (src/utils/validation.ts:23)
    at UserController.register (src/controllers/UserController.ts:45)
```

## Expected Behavior
Emails with + signs should be accepted as valid.

## Steps to Reproduce
1. POST /api/auth/register
2. Body: { "email": "test+dev@example.com", "password": "..." }
3. Returns 400 instead of 201
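A fix matching this report might look like the following. The issue doesn't show the original regex, so this is a hedged sketch of a plus-sign-tolerant validator, not the project's actual code:

```typescript
// Hypothetical fix for src/utils/validation.ts: a permissive email check
// that accepts local parts containing "+" (e.g. user+tag@example.com).
// Production code may prefer a dedicated validation library instead.
function validateEmail(email: string): boolean {
  // One run of non-space/non-@ chars, "@", then a domain with at least one dot.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}
```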
Using Labels and Templates
Labels help the agent understand the nature and priority of the task. Consider creating labels specifically for agent assignments:
| Label | Purpose |
|---|---|
| copilot-agent | Marks issues intended for the Coding Agent |
| bug | Tells the agent this is a fix (look for root cause) |
| enhancement | Tells the agent this is a new feature |
| tests | Focus on writing tests rather than features |
| documentation | Focus on docs and README updates |
You can also standardize these fields with issue templates stored in `.github/ISSUE_TEMPLATE/`, so every agent-bound issue includes context, requirements, and acceptance criteria.
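As an illustration, a GitHub issue form could enforce the structure described in this lesson. The filename and field ids below are hypothetical:

```yaml
# Hypothetical issue form: .github/ISSUE_TEMPLATE/agent-task.yml
name: Coding Agent Task
description: A well-scoped task for the Copilot Coding Agent
labels: ["copilot-agent"]
body:
  - type: textarea
    id: context
    attributes:
      label: Context
      description: Background and motivation for the change
    validations:
      required: true
  - type: textarea
    id: requirements
    attributes:
      label: Requirements
      description: Exactly what needs to be built or changed, with file references
    validations:
      required: true
  - type: textarea
    id: acceptance
    attributes:
      label: Acceptance criteria
      description: Checklist defining when the task is done
      placeholder: "- [ ] ..."
    validations:
      required: true
```

Marking the fields `required: true` prevents assigning the agent an issue that is missing the components from the table above.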
Monitoring Agent Progress
After assigning an issue, the agent posts status updates as issue comments. Here is what to expect:
1. Acknowledgment. The agent posts a comment confirming it has started working on the issue, typically within a few seconds.
2. Analysis phase. The agent reads the issue, explores the codebase, and identifies relevant files. It may post comments about what it has found.
3. Implementation phase. The agent writes code, makes changes, and runs tests. This is the longest phase and may take several minutes.
4. Validation phase. The agent runs the test suite and CI pipeline. If tests fail, it may iterate and fix the issues.
5. PR creation. Once validation passes, the agent creates a pull request and links it to the issue.
Reviewing the Generated PR
Agent-generated PRs should be reviewed with the same rigor as human-authored PRs. Pay special attention to:
Correctness
Does the code actually solve the problem described in the issue? Check edge cases and error handling paths.
Test Quality
Are the tests meaningful or just checking happy paths? Ensure negative cases, boundary conditions, and error scenarios are covered.
Code Style
Does the code follow your project's conventions? Check naming, formatting, file organization, and architectural patterns.
Security
Look for potential security issues: SQL injection, XSS, hardcoded credentials, improper input validation, or missing auth checks.
Providing Feedback and Requesting Changes
If the PR needs changes, you can request modifications through standard PR review comments. The agent can respond to feedback:
```
# Specific and actionable (good):
@copilot This pagination query will be slow on large tables. Add an index
on the `created_at` column and use cursor-based pagination instead of
OFFSET/LIMIT.

# Request a specific change:
@copilot Please add error handling for the case where `page` is greater
than `totalPages`. Return an empty array with the correct metadata instead
of a 404.

# Too vague (bad):
@copilot This doesn't look right. Fix it.
```
Good vs Bad Task Descriptions
Compare these examples to understand what makes an effective task description:
Bad Example
Description: The API is broken. Users are complaining. Please fix it.
Why it fails:
- No specific endpoint or error identified
- No reproduction steps
- No expected vs actual behavior
- No file references
- No acceptance criteria
Good Example
Description:
Bug: When fetching a user who has no profile picture set (profile_image_url is null), the API returns a 500 error instead of the user data with a null image URL.
Root cause (likely): The `UserSerializer` in `src/serializers/UserSerializer.ts` calls `.toString()` on `profile_image_url` without null checking.
Steps to reproduce:
1. Create a user without setting a profile picture
2. GET /api/users/:id
3. Response: 500 Internal Server Error
Acceptance criteria:
- Users without profile pictures return normally with `profile_image_url: null`
- Add a regression test for this case
- Existing user serialization tests still pass
Why it works:
- Specific endpoint and error identified
- Likely root cause and file referenced
- Clear reproduction steps
- Explicit acceptance criteria
- Actionable and scoped
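The fix the agent would likely produce for the good example can be sketched as follows. The `UserRecord` and serializer shapes are hypothetical, inferred from the issue text:

```typescript
// Hypothetical sketch of the fix for src/serializers/UserSerializer.ts.
interface UserRecord {
  id: number;
  name: string;
  profile_image_url: string | null;
}

// Before (buggy): user.profile_image_url.toString() throws a TypeError
// when the column is null, which surfaced as the 500 in the issue.
// After: pass null through unchanged, per the acceptance criteria.
function serializeUser(user: UserRecord) {
  return {
    id: user.id,
    name: user.name,
    profile_image_url:
      user.profile_image_url === null ? null : user.profile_image_url.toString(),
  };
}
```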
Practice Exercise
Pick a real issue in your repository and rewrite it using the guidelines above. Include a clear title, context, file references, and checklist-style acceptance criteria. Then assign it to Copilot and compare the results with a vaguely written issue.