Advanced Best Practices
Get the most out of the Coding Agent by following these best practices for issue writing, repository setup, security, cost management, and team scaling. This lesson also covers common pitfalls and frequently asked questions.
Writing Effective Issues for the Agent
The quality of the agent's output is directly proportional to the quality of the issue description. Follow these guidelines consistently:
| Do | Don't |
|---|---|
| Be specific about what files to change | Say "fix the app" without pointing to code |
| Include acceptance criteria as a checklist | Leave success criteria ambiguous |
| Reference existing patterns to follow | Assume the agent knows your conventions |
| Scope tasks to 1-3 files when possible | Create "kitchen sink" issues touching 20+ files |
| Provide error messages and stack traces | Say "it's broken" without details |
| Specify the testing approach | Hope the agent figures out your test framework |
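Putting these guidelines together, a well-scoped issue might look like the following (all file paths, error text, and criteria are illustrative):

```markdown
## Fix: profile avatar 404s after username change

**Context:** After a user renames their account, `GET /api/users/:id/avatar`
returns 404. The route handler is in `src/routes/users.js`; avatar URLs are
built in `src/services/avatarService.js`.

**Error:** `NotFoundError: avatar key "old-name.png" missing` (stack trace attached)

**Acceptance criteria:**
- [ ] Avatar keys are derived from the immutable user ID, not the username
- [ ] Regression test added in `tests/services/avatarService.test.js`
- [ ] `npm test` and `npm run lint` pass

**Testing:** Run `npm test -- avatarService` (Jest).
```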
Repository Setup for Best Results
Your repository structure significantly impacts the agent's effectiveness. Invest in these areas for the best results:
Test Coverage
Good test coverage is the single most important factor for agent success. Tests serve as a safety net that lets the agent validate its changes.
```
# Minimum recommended test coverage for agent repos:
#   - 70%+ line coverage overall
#   - 90%+ on critical business logic
#   - Unit tests for all services and utilities
#   - Integration tests for API endpoints
#   - Test commands documented in package.json / Makefile

# package.json scripts example:
{
  "scripts": {
    "test": "jest",
    "test:coverage": "jest --coverage",
    "test:watch": "jest --watch",
    "lint": "eslint src/",
    "typecheck": "tsc --noEmit"
  }
}
```
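Jest can enforce these floors automatically so a coverage regression fails the build rather than slipping past review. A minimal sketch (the `./src/services/` path is an illustrative stand-in for your critical business logic):

```javascript
// jest.config.js: fail the test run when coverage drops below the targets above.
const config = {
  collectCoverage: true,
  coverageThreshold: {
    global: { lines: 70 },            // 70%+ line coverage overall
    "./src/services/": { lines: 90 }, // 90%+ on critical business logic
  },
};

module.exports = config;
```

With this in place, `npm run test:coverage` exits non-zero when any threshold is missed, which gives the agent the same hard feedback a human contributor would get.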
CI/CD Pipeline
A robust CI pipeline gives the agent fast feedback on its changes. Include:
- Linting — catches style and formatting issues
- Type checking — catches type errors (TypeScript, mypy, etc.)
- Unit tests — validates individual components
- Integration tests — validates system behavior
- Coverage reporting — tracks whether new code is tested
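A GitHub Actions workflow covering these stages might look like the following sketch (it assumes the `package.json` scripts shown earlier; adapt names to your repo):

```yaml
# .github/workflows/ci.yml: sketch of a pipeline that gives the agent fast feedback
name: CI
on: [push, pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run lint           # linting
      - run: npm run typecheck      # type checking
      - run: npm run test:coverage  # unit/integration tests + coverage report
```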
Clear Project Structure
Consistent, well-organized code helps the agent navigate your codebase and follow existing patterns:
```
# Clear, conventional structure helps the agent
src/
  routes/        # API route handlers
  services/      # Business logic
  models/        # Data models
  utils/         # Shared utilities
  middleware/    # Express/Koa middleware
tests/
  routes/        # Mirrors src/ structure
  services/
  utils/
  mocks/         # Shared test mocks
  fixtures/      # Test data
.github/
  copilot-setup-steps.yml
  workflows/
    ci.yml
```
Documentation
Keep these files up to date to help the agent understand your project:
- README.md — Project overview, setup instructions, architecture notes
- CONTRIBUTING.md — Code style, PR process, testing requirements
- .github/copilot-setup-steps.yml — Environment setup for the agent
- Inline comments — Non-obvious business logic should be commented
Security Considerations
The Coding Agent runs in a sandboxed environment, but you should still follow security best practices:
- Never expose secrets. Don't grant the agent access to production secrets. Use test/development credentials only.
- Review all PRs. Never auto-merge agent PRs. Always have a human review for security issues like SQL injection, XSS, or auth bypasses.
- Limit network access. Keep the default firewall restrictions. Only allow access to package registries you actually use.
- Use branch protection. Require PR reviews and passing status checks before any merge to the default branch.
- Monitor agent activity. Regularly audit the agent's actions through GitHub's audit log.
- Restrict sensitive files. Use CODEOWNERS to require manual review for changes to security-critical files (auth, payments, etc.).
```
# .github/CODEOWNERS
# Require senior dev review for security-critical paths
src/auth/              @your-org/security-team
src/payments/          @your-org/security-team
src/middleware/auth*   @your-org/security-team
.github/               @your-org/platform-team
*.yml                  @your-org/platform-team
```
Cost Management
The Coding Agent uses compute resources and LLM tokens. Here are strategies to manage costs effectively:
| Strategy | Impact | Implementation |
|---|---|---|
| Scope tasks tightly | High | Smaller, focused issues use fewer tokens and compute time than large, ambiguous ones |
| Provide file references | Medium | Pointing to specific files reduces the agent's search time and token usage |
| Use setup steps efficiently | Medium | Optimize copilot-setup-steps.yml to use caching and avoid unnecessary builds |
| Monitor usage | Medium | Track agent usage in your organization's billing dashboard to identify patterns |
| Limit to appropriate tasks | High | Don't use the agent for tasks that would be faster to do manually |
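As an example of the setup-steps strategy, dependency caching avoids re-downloading packages on every agent run. A sketch, assuming your setup steps use GitHub Actions step syntax:

```yaml
# Illustrative setup steps with dependency caching
steps:
  - uses: actions/setup-node@v4
    with:
      node-version: 20
      cache: npm   # restores the npm cache keyed on package-lock.json
  - run: npm ci    # fast when the cache hits
```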
When to Use the Agent vs Manual Coding
The Coding Agent is not the right tool for every task. Use this decision framework:
Use the Agent For
- Well-defined bug fixes with clear reproduction steps
- Adding tests for existing code
- Small, scoped features with clear interfaces
- Documentation and code comments
- Routine refactoring (renames, extractions)
- Boilerplate code generation
- Dependency updates with straightforward migrations
Code Manually For
- Architectural decisions and system design
- Security-sensitive code (auth, payments, encryption)
- Performance-critical optimizations
- Complex business logic with subtle edge cases
- Large refactors spanning many interconnected files
- Tasks requiring external service integration/testing
- Exploratory prototyping and R&D
Scaling AI-Assisted Development
As your team gets comfortable with the Coding Agent, here is how to scale its usage effectively:
- **Standardize issue templates.** Create GitHub Issue templates specifically for agent tasks. Include sections for context, requirements, acceptance criteria, file references, and testing instructions.
- **Establish review guidelines.** Define what reviewers should check on agent PRs: correctness, test quality, security, performance, and adherence to coding standards.
- **Track metrics.** Measure time-to-merge for agent PRs vs human PRs, PR acceptance rate, number of review iterations, and issues resolved per week.
- **Build a knowledge base.** Document what types of tasks work well with the agent and which don't. Share learnings across the team.
- **Invest in test infrastructure.** Better tests lead to better agent results. Prioritize test coverage improvements as a multiplier for agent effectiveness.
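A standardized agent-task template can be expressed as a GitHub issue form. A sketch (the file name and field set are suggestions, not a required schema):

```yaml
# .github/ISSUE_TEMPLATE/agent-task.yml: illustrative issue form for agent tasks
name: Agent task
description: Scoped task for the Copilot Coding Agent
labels: [agent]
body:
  - type: textarea
    id: context
    attributes:
      label: Context
      description: What is broken or missing, and where (files, error messages)?
    validations:
      required: true
  - type: textarea
    id: acceptance
    attributes:
      label: Acceptance criteria
      description: Checklist defining "done"
    validations:
      required: true
  - type: input
    id: testing
    attributes:
      label: Testing instructions
      description: Commands to run (e.g. npm test)
```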
Common Pitfalls
**Vague issues.** Writing issues like "Fix the dashboard" without specifying what is broken, where, or what "fixed" looks like. The agent will guess, and it will often guess wrong.
**Solution:** Always include specific files, error messages, and acceptance criteria.

**Weak test coverage.** Assigning tasks in repositories with minimal tests. The agent cannot validate its changes, leading to PRs that may look correct but contain bugs.
**Solution:** Invest in test coverage before relying on the agent. Start by assigning test-writing tasks to bootstrap coverage.

**Oversized tasks.** Assigning issues that require changes across 10+ files or involve complex cross-cutting concerns.
**Solution:** Break large tasks into smaller, independent issues. Each issue should be completable in isolation.

**Auto-merging agent PRs.** Setting up automation to merge agent PRs without human review, leading to subtle bugs or security issues in production.
**Solution:** Always require at least one human review. Use CODEOWNERS for critical paths.

**Ignoring CI failures.** Merging agent PRs even when CI checks fail, assuming the failures are unrelated.
**Solution:** Treat CI failures on agent PRs the same as human PRs. If tests fail, the PR is not ready.
Frequently Asked Questions
**How long does the agent take to complete a task?** Typically 5-30 minutes, depending on the complexity of the task, the size of the codebase, and the duration of your CI pipeline. Simple bug fixes may complete in under 5 minutes, while feature implementations with extensive tests may take 20-30 minutes.

**Can the agent work on multiple issues at once?** Yes. The agent can work on multiple issues in parallel, each in its own branch. However, if two issues touch the same files, there may be merge conflicts. It's best to merge one PR before the other starts working on overlapping files.

**Which programming languages does the agent support?** The agent supports all major programming languages, including JavaScript/TypeScript, Python, Java, C#, Go, Ruby, Rust, PHP, Swift, Kotlin, and more. It works best with languages that have strong ecosystem tooling (linters, type checkers, test frameworks).

**Does the agent work on private repositories?** Yes. The Coding Agent works on both public and private repositories within organizations that have Copilot Enterprise or Business subscriptions. Your code is processed securely and not used to train models.

**What happens if the agent's PR has merge conflicts?** If the target branch changes after the agent creates its PR, merge conflicts may occur. You can ask the agent to rebase by commenting @copilot rebase on the PR, or resolve the conflicts manually.

**How is my code handled?** The agent processes your code within GitHub's infrastructure, using the same data handling and security policies as other GitHub features. Code is sent to LLM providers (OpenAI, Anthropic) for processing, but is not used to train their models. Review GitHub's Copilot privacy documentation for full details.

**Can the agent access external services?** By default, no. The agent runs in a firewalled VM with restricted network access. It can reach GitHub and configured package registries, but not arbitrary external services. If your tests require external services, use mocks or configure the setup steps to provide local test instances.

**Does the agent work in monorepos?** The agent can work in monorepos, but it's most effective when issues clearly specify which package or subdirectory to work in. Include the relevant path in the issue description and reference the package's specific test and build commands.
Course Complete!
You now have a comprehensive understanding of GitHub Copilot Coding Agent. Start by optimizing your repository (tests, CI, documentation), write a well-structured issue, assign it to Copilot, and experience autonomous AI coding firsthand.