Deployment & Quality Assurance
This document outlines the deployment process and quality assurance practices we follow at Kirana Labs to ensure reliable, high-quality software delivery.
Deployment Environments
We maintain multiple environments to ensure proper testing and verification:
Development Environment
- Purpose: Daily development and integration testing
- Audience: Developers and internal QA
- Deployment Frequency: Automatic upon merge to `dev` branch
- Data: Test data, refreshed regularly
- URL Pattern: `dev-[project].kiranalabs.com`
Staging Environment
- Purpose: Client testing and verification
- Audience: Clients, stakeholders, and QA team
- Deployment Frequency: On-demand, after QA approval in development
- Data: Production-like data, anonymized if necessary
- URL Pattern: `staging-[project].kiranalabs.com`
Production Environment
- Purpose: Live application used by end users
- Audience: End users
- Deployment Frequency: Scheduled releases after thorough testing
- Data: Real production data
- URL Pattern: `[project].com` or client domain
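The URL patterns above can be captured in a small helper. This is a hypothetical sketch: the environment names and the `kiranalabs.com` domain come from this document, but the function itself (`envUrl`) is illustrative, not part of any real codebase.

```typescript
// Illustrative mapping from environment to URL, per the patterns above.
type Env = "dev" | "staging" | "production";

function envUrl(env: Env, project: string, clientDomain?: string): string {
  switch (env) {
    case "dev":
      return `dev-${project}.kiranalabs.com`;
    case "staging":
      return `staging-${project}.kiranalabs.com`;
    case "production":
      // Production uses the client's own domain when one is provided.
      return clientDomain ?? `${project}.com`;
  }
}
```

For example, `envUrl("staging", "acme")` yields the staging host for a project named `acme`, while passing a `clientDomain` overrides the default production pattern.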
Deployment Process
Development Deployment
- Trigger: Automatic upon merge to `dev` branch
- Process:
  - CI/CD pipeline builds the application
  - Run tests
  - Deploy to development environment
  - Run smoke tests
- Notification: Team is notified of successful/failed deployment via Teams
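The post-deploy smoke-test step can be sketched as a small health check. Everything here is an assumption for illustration: the endpoint paths, the injectable `fetchFn`, and the `smokeCheck` helper are not our actual pipeline code.

```typescript
// Sketch of a post-deploy smoke check over a list of health endpoints.
// The fetch function is injected so the check is easy to test in isolation.
type FetchLike = (url: string) => Promise<{ ok: boolean; status: number }>;

async function smokeCheck(
  baseUrl: string,
  paths: string[],
  fetchFn: FetchLike,
): Promise<string[]> {
  const failures: string[] = [];
  for (const path of paths) {
    try {
      const res = await fetchFn(`${baseUrl}${path}`);
      if (!res.ok) failures.push(`${path} -> HTTP ${res.status}`);
    } catch {
      failures.push(`${path} -> unreachable`);
    }
  }
  return failures; // an empty array means the deployment looks healthy
}
```

A CI step could run this against the development URL after deploy and fail the job (and trigger the Teams notification) when any path is reported.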
Staging Deployment
- Preparation:
  - Create a PR from `dev` to `staging`
  - Tech Lead reviews changes
  - QA verifies critical functionality in development environment
- Approval:
  - Tech Lead approves the PR
  - Product Manager confirms readiness
- Deployment:
  - Merge PR to `staging` branch
  - CI/CD pipeline deploys to staging environment
  - QA performs full regression testing
- Client Review:
  - Client reviews changes in staging
  - Feedback is collected and addressed
  - Final approval is given by the client
Production Deployment
- Preparation:
  - Create a PR from `staging` to `main`
  - Tech Lead and Product Manager review
  - All tests have passed in staging
- Planning:
  - Schedule deployment during low-traffic periods
  - Prepare rollback plan
  - Create release notes
  - Notify relevant stakeholders
- Deployment:
  - Merge PR to `main` branch
  - CI/CD pipeline deploys to production
  - Perform smoke tests
  - Monitor application metrics
- Post-Deployment:
  - Verify critical functionality
  - Monitor for errors or anomalies
  - Be available for urgent fixes if needed
  - Update documentation if necessary
Deployment Tools and Technologies
We use the following tools for deployment:
- GitHub Actions: CI/CD pipeline
- Railway: Infrastructure and deployment platform
- AWS/GCP/Azure: Cloud infrastructure (project-specific)
- Docker: Containerization
- Biome: Code quality and formatting
- Jest/Playwright: Testing
Quality Assurance Process
Quality assurance is integrated throughout our development process:
Testing Levels
- Developer Testing:
  - Unit tests for business logic
  - Integration tests for critical flows
  - End-to-end tests for key user journeys
  - Manual testing of implemented features
- QA Testing:
  - Functional testing based on acceptance criteria
  - Regression testing
  - Cross-browser/device compatibility
  - Performance testing
  - Accessibility testing
- Client Acceptance Testing:
  - Client verifies the feature meets requirements
  - Client provides feedback or approval
Testing Process
During Development
- Developers write unit and integration tests
- Feature is tested against acceptance criteria
- PR validation includes test runs
- Code coverage is reviewed
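Coverage review during PR validation can be automated by enforcing a floor in the Jest configuration. The sketch below uses Jest's real `coverageThreshold` option, but the specific numbers are illustrative assumptions, not an agreed team standard.

```typescript
// jest.config.ts — sketch of enforcing a coverage floor during PR validation.
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    // Jest fails the run if global coverage drops below these percentages.
    global: {
      branches: 70,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};

export default config;
```

With this in place, the CI test step fails automatically when a PR lowers coverage below the threshold, so the review becomes a gate rather than a manual check.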
Before Staging Deployment
- QA performs feature testing in development environment
- Critical user journeys are validated
- Issues are reported and fixed
- Regression testing of related functionality
Before Production Deployment
- Full regression testing in staging environment
- Performance testing for critical paths
- Security testing (where applicable)
- Cross-browser and device testing
- Client approval
Bug Reporting and Tracking
All issues found during testing are:
- Reported in Plane with:
  - Clear steps to reproduce
  - Expected vs. actual results
  - Environment details
  - Screenshots or videos
  - Severity level
- Prioritized based on:
  - Critical: Blocking issue affecting core functionality
  - High: Major feature not working as expected
  - Medium: Non-critical feature issues
  - Low: Minor issues, UI improvements
- Fixed and Verified:
  - Developer fixes the issue
  - QA verifies the fix
  - Regression testing ensures no new issues
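The severity levels above give a natural triage order. This is an illustrative helper only: the `Issue` shape and field names are assumptions for the sketch, not Plane's actual API.

```typescript
// Orders reported issues by the severity levels defined above.
const SEVERITY_RANK = { critical: 0, high: 1, medium: 2, low: 3 } as const;

type Severity = keyof typeof SEVERITY_RANK;

interface Issue {
  title: string;
  severity: Severity;
}

function triage(issues: Issue[]): Issue[] {
  // Copy before sorting; equal-severity issues keep their reported order.
  return [...issues].sort(
    (a, b) => SEVERITY_RANK[a.severity] - SEVERITY_RANK[b.severity],
  );
}
```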
Monitoring and Observability
After deployment, we monitor applications using:
Application Monitoring
- Error Tracking: Sentry for real-time error monitoring
- Performance: Lighthouse and custom performance metrics
- User Analytics: Project-specific analytics tools
Infrastructure Monitoring
- Server Metrics: CPU, memory, disk usage
- Database Performance: Query times, connection counts
- API Performance: Response times, error rates
- Resource Utilization: Scaling needs and bottlenecks
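An API error-rate check of the kind listed above can be reduced to a simple threshold test. The 1% default threshold and the metric window shape are assumptions for illustration, not values from our monitoring setup.

```typescript
// Minimal sketch of an error-rate alert condition over an API metrics window.
interface ApiWindow {
  requests: number;
  errors: number;
}

function errorRateExceeded(window: ApiWindow, threshold = 0.01): boolean {
  if (window.requests === 0) return false; // no traffic, nothing to alert on
  return window.errors / window.requests > threshold;
}
```

In practice a check like this would run over a sliding time window and feed an alerting channel rather than being evaluated ad hoc.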
Rollback Procedures
In case of critical issues after deployment:
Immediate Assessment
- Assess the severity and impact of the issue
- Determine if a rollback is necessary or if a hotfix is possible
Rollback Process
- Quick Rollback Option:
  - Revert the PR in GitHub
  - Deploy the previous stable version
- Database Considerations:
  - If schema changes were made, prepare downgrade migrations
  - Back up data before rolling back if necessary
- Communication:
  - Notify the team via Teams
  - Update stakeholders on status
  - Document the issue and resolution
Post-Incident Analysis
After resolving the issue:
- Conduct a blameless post-mortem
- Identify root causes
- Implement process improvements
- Update testing procedures if needed
Feature Flags
For complex features or risky changes:
- Implement feature flags to enable/disable functionality
- Deploy features behind flags for controlled rollouts
- Test features in production with limited user groups
- Gradually roll out features to all users
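The gradual rollout above is commonly implemented by bucketing each user into a stable percentage slot. This is a sketch under assumptions: the flag names, the FNV-1a hash choice, and the `isEnabled` helper are illustrative, not a real flag service.

```typescript
// Deterministically assigns a user to a bucket 0..99 for a given flag,
// so the same user always gets the same answer as the rollout widens.
function bucket(flagName: string, userId: string): number {
  // FNV-1a hash over "flag:user" — simple and stable across calls.
  let hash = 2166136261;
  for (const ch of `${flagName}:${userId}`) {
    hash ^= ch.charCodeAt(0);
    hash = Math.imul(hash, 16777619);
  }
  return (hash >>> 0) % 100;
}

function isEnabled(
  flagName: string,
  userId: string,
  rolloutPercent: number,
): boolean {
  return bucket(flagName, userId) < rolloutPercent;
}
```

Raising `rolloutPercent` from 10 to 50 to 100 enables the feature for progressively larger user groups without flipping anyone back and forth.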
Documentation
For each release, maintain:
- Release Notes: Summary of changes and new features
- Deployment Records: When, what, and by whom deployments were made
- Known Issues: Any outstanding issues with workarounds
- Configuration Changes: Any environment or configuration updates
Compliance and Security
For projects with specific compliance requirements:
- Follow industry-specific compliance procedures
- Document deployment approvals as required
- Maintain audit trails of changes
- Perform security scans before production deployments
Continuous Improvement
We regularly review and improve our deployment and QA processes:
- Review deployment metrics (frequency, success rate, time to deploy)
- Analyze bug escape rates and root causes
- Adjust test coverage based on findings
- Automate repetitive testing tasks
- Update this documentation with learned best practices
By following these deployment and quality assurance practices, we ensure reliable, high-quality software delivery to our clients and end users.