Deployment & Quality Assurance

This document outlines the deployment and quality assurance practices we follow at Kirana Labs to ensure reliable, high-quality software delivery.

Deployment Environments

We maintain multiple environments to ensure proper testing and verification:

Development Environment

  • Purpose: Daily development and integration testing
  • Audience: Developers and internal QA
  • Deployment Frequency: Automatic upon merge to dev branch
  • Data: Test data, refreshed regularly
  • URL Pattern: dev-[project].kiranalabs.com

Staging Environment

  • Purpose: Client testing and verification
  • Audience: Clients, stakeholders, and QA team
  • Deployment Frequency: On-demand, after QA approval in development
  • Data: Production-like data, anonymized if necessary
  • URL Pattern: staging-[project].kiranalabs.com

Production Environment

  • Purpose: Live application used by end users
  • Audience: End users
  • Deployment Frequency: Scheduled releases after thorough testing
  • Data: Real production data
  • URL Pattern: [project].com or client domain

Deployment Process

Development Deployment

  1. Trigger: Automatic upon merge to dev branch
  2. Process:
    • The CI/CD pipeline builds the application
    • Automated tests run against the build
    • The build is deployed to the development environment
    • Smoke tests run against the deployment (see the sketch below)
  3. Notification: The team is notified of deployment success or failure via Teams
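
For reference, a development smoke test might look like the minimal Playwright sketch below. The BASE_URL fallback and the /health endpoint are illustrative assumptions; each project defines its own smoke-test targets.

```typescript
import { test, expect } from '@playwright/test';

// BASE_URL and /health are hypothetical targets for this sketch.
const BASE_URL = process.env.BASE_URL ?? 'https://dev-example.kiranalabs.com';

test('health endpoint responds', async ({ request }) => {
  const response = await request.get(`${BASE_URL}/health`);
  expect(response.ok()).toBeTruthy();
});

test('home page renders', async ({ page }) => {
  await page.goto(BASE_URL);
  await expect(page).toHaveTitle(/\S/); // any non-empty title
});
```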

Staging Deployment

  1. Preparation:

    • Create a PR from dev to staging
    • Tech Lead reviews changes
    • QA verifies critical functionality in the development environment
  2. Approval:

    • Tech Lead approves the PR
    • Product Manager confirms readiness
  3. Deployment:

    • Merge PR to staging branch
    • CI/CD pipeline deploys to staging environment
    • QA performs full regression testing
  4. Client Review:

    • Client reviews changes in staging
    • Feedback is collected and addressed
    • Final approval is given by client

Production Deployment

  1. Preparation:

    • Create a PR from staging to main
    • Tech Lead and Product Manager review
    • Confirm that all tests have passed in staging
  2. Planning:

    • Schedule deployment during low-traffic periods
    • Prepare rollback plan
    • Create release notes
    • Notify relevant stakeholders
  3. Deployment:

    • Merge PR to main branch
    • CI/CD pipeline deploys to production
    • Perform smoke tests (a health-check sketch follows this list)
    • Monitor application metrics
  4. Post-Deployment:

    • Verify critical functionality
    • Monitor for errors or anomalies
    • Be available for urgent fixes if needed
    • Update documentation if necessary
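
For the smoke-test step, a short script along these lines can confirm that key endpoints respond after the deploy. This is a minimal sketch: the endpoint list is hypothetical, and Node 18+ is assumed for the global fetch.

```typescript
// Minimal post-deployment health check. The endpoints below are
// placeholders; each project maintains its own list.
const endpoints = [
  'https://example.com/health',
  'https://example.com/api/status',
];

async function checkEndpoints(): Promise<void> {
  let failed = false;
  for (const url of endpoints) {
    try {
      const res = await fetch(url);
      console.log(`${res.ok ? 'OK  ' : 'FAIL'} ${res.status} ${url}`);
      if (!res.ok) failed = true;
    } catch (err) {
      console.error(`FAIL ${url}: ${(err as Error).message}`);
      failed = true;
    }
  }
  if (failed) process.exit(1); // non-zero exit lets CI flag the deployment
}

checkEndpoints();
```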

Deployment Tools and Technologies

We use the following tools for deployment:

  • GitHub Actions: CI/CD pipeline
  • Railway: Infrastructure and deployment platform
  • AWS/GCP/Azure: Cloud infrastructure (project-specific)
  • Docker: Containerization
  • Biome: Code quality and formatting
  • Jest/Playwright: Testing

Quality Assurance Process

Quality assurance is integrated throughout our development process:

Testing Levels

  1. Developer Testing:

    • Unit tests for business logic (see the Jest sketch after this list)
    • Integration tests for critical flows
    • End-to-end tests for key user journeys
    • Manual testing of implemented features
  2. QA Testing:

    • Functional testing based on acceptance criteria
    • Regression testing
    • Cross-browser/device compatibility
    • Performance testing
    • Accessibility testing
  3. Client Acceptance Testing:

    • Client verifies the feature meets requirements
    • Provides feedback or approval
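
To make the developer-testing level concrete, here is a minimal Jest sketch; calculateOrderTotal is a hypothetical business-logic function defined inline so the example is self-contained.

```typescript
import { describe, expect, it } from '@jest/globals';

// Hypothetical business-logic function under test.
type LineItem = { price: number; qty: number };

function calculateOrderTotal(items: LineItem[], taxRate: number): number {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.qty, 0);
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}

describe('calculateOrderTotal', () => {
  it('applies the tax rate to the subtotal', () => {
    expect(calculateOrderTotal([{ price: 10, qty: 2 }], 0.16)).toBe(23.2);
  });

  it('returns 0 for an empty order', () => {
    expect(calculateOrderTotal([], 0.16)).toBe(0);
  });
});
```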

Testing Process

During Development

  • Developers write unit and integration tests
  • Feature is tested against acceptance criteria
  • PR validation includes test runs
  • Code coverage is reviewed

Before Staging Deployment

  • QA performs feature testing in the development environment
  • Critical user journeys are validated
  • Issues are reported and fixed
  • Related functionality is regression tested

Before Production Deployment

  • Full regression testing in the staging environment
  • Performance testing for critical paths
  • Security testing (where applicable)
  • Cross-browser and device testing (see the Playwright config sketch below)
  • Client approval
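
For the cross-browser and device pass, a playwright.config.ts along the lines below runs the same suite across engines; the specific browser and device choices are illustrative.

```typescript
import { defineConfig, devices } from '@playwright/test';

// Browser and device choices are illustrative; each project picks the
// matrix its users actually need.
export default defineConfig({
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
    { name: 'mobile-android', use: { ...devices['Pixel 5'] } },
  ],
});
```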

Bug Reporting and Tracking

All issues found during testing are:

  1. Reported in Plane with:

    • Clear steps to reproduce
    • Expected vs. actual results
    • Environment details
    • Screenshots or videos
    • Severity level
  2. Prioritized based on:

    • Critical: Blocking issue affecting core functionality
    • High: Major feature not working as expected
    • Medium: Non-critical feature issues
    • Low: Minor issues, UI improvements
  3. Fixed and Verified:

    • Developer fixes the issue
    • QA verifies the fix
    • Regression testing ensures no new issues

Monitoring and Observability

After deployment, we monitor applications using:

Application Monitoring

  • Error Tracking: Sentry for real-time error monitoring (initialization sketch below)
  • Performance: Lighthouse and custom performance metrics
  • User Analytics: Project-specific analytics tools
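
A minimal Sentry initialization for a Node service might look like the following sketch; the DSN source, environment handling, and sample rate are project-specific assumptions.

```typescript
import * as Sentry from '@sentry/node';

// DSN, environment, and sample rate are project-specific assumptions.
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV ?? 'development',
  tracesSampleRate: 0.1, // sample 10% of transactions for performance data
});

// Hypothetical operation, shown only to illustrate manual capture where
// automatic instrumentation does not apply.
function riskyOperation(): void {
  throw new Error('example failure');
}

try {
  riskyOperation();
} catch (err) {
  Sentry.captureException(err); // reported to Sentry in real time
}
```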

Infrastructure Monitoring

  • Server Metrics: CPU, memory, disk usage
  • Database Performance: Query times, connection counts
  • API Performance: Response times and error rates (see the middleware sketch below)
  • Resource Utilization: Scaling needs and bottlenecks
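
One way to collect response times and error rates is a small middleware in the API itself. The Express sketch below is illustrative; recordMetric stands in for whatever metrics sink the project uses.

```typescript
import express from 'express';

// Placeholder for the project's metrics sink (StatsD client, log
// pipeline, etc.); console output keeps the sketch self-contained.
function recordMetric(name: string, value: number, tags: Record<string, string>): void {
  console.log(name, value.toFixed(1), tags);
}

const app = express();

// Measure every request's duration and record it with route and status,
// which is enough to derive response times and error rates.
app.use((req, res, next) => {
  const start = process.hrtime.bigint();
  res.on('finish', () => {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    recordMetric('http.request.duration_ms', ms, {
      route: req.path,
      status: String(res.statusCode),
    });
  });
  next();
});

app.get('/health', (_req, res) => {
  res.json({ ok: true });
});

app.listen(3000);
```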

Rollback Procedures

In case of critical issues after deployment:

Immediate Assessment

  1. Assess the severity and impact of the issue
  2. Determine if a rollback is necessary or if a hotfix is possible

Rollback Process

  1. Quick Rollback Option:

    • Revert the PR in GitHub
    • Deploy the previous stable version
  2. Database Considerations:

    • If schema changes were made, prepare downgrade migrations (see the sketch after this list)
    • Backup data before rolling back if necessary
  3. Communication:

    • Notify the team via Teams
    • Update stakeholders on status
    • Document the issue and resolution
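
Where a project uses Knex (or a comparable migration tool), shipping every schema change with a working down migration keeps the rollback path open. A minimal sketch, assuming a hypothetical orders table change:

```typescript
import type { Knex } from 'knex';

// Hypothetical change to an orders table. Pairing every `up` with a
// working `down` means a rollback can also revert the schema.
export async function up(knex: Knex): Promise<void> {
  await knex.schema.alterTable('orders', (table) => {
    table.string('tracking_code').nullable();
  });
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.alterTable('orders', (table) => {
    table.dropColumn('tracking_code');
  });
}
```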

Post-Incident Analysis

After resolving the issue:

  1. Conduct a blameless post-mortem
  2. Identify root causes
  3. Implement process improvements
  4. Update testing procedures if needed

Feature Flags

For complex features or risky changes:

  • Implement feature flags to enable/disable functionality
  • Deploy features behind flags for controlled rollouts
  • Test features in production with limited user groups
  • Gradually roll out features to all users (see the rollout sketch below)
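
Projects may rely on a hosted flag service, but the core of a gradual rollout is a deterministic bucketing function. The sketch below is illustrative; the flag name, user ID, and percentage are assumptions.

```typescript
import { createHash } from 'node:crypto';

// Deterministic percentage rollout: hash (flag, userId) into a bucket
// 0-99; the user is in the rollout group if the bucket falls below the
// configured percentage. The same user stays enabled as the percentage
// grows, so experiences do not flip-flop between releases.
function isEnabled(flag: string, userId: string, rolloutPercent: number): boolean {
  const hash = createHash('sha256').update(`${flag}:${userId}`).digest();
  const bucket = hash.readUInt32BE(0) % 100;
  return bucket < rolloutPercent;
}

// Hypothetical usage: "new-checkout" enabled for roughly 25% of users.
if (isEnabled('new-checkout', 'user-42', 25)) {
  // render the new checkout flow
} else {
  // render the existing flow
}
```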

Documentation

For each release, maintain:

  • Release Notes: Summary of changes and new features
  • Deployment Records: What was deployed, when, and by whom
  • Known Issues: Any outstanding issues with workarounds
  • Configuration Changes: Any environment or configuration updates

Compliance and Security

For projects with specific compliance requirements:

  • Follow industry-specific compliance procedures
  • Document deployment approvals as required
  • Maintain audit trails of changes
  • Perform security scans before production deployments

Continuous Improvement

We regularly review and improve our deployment and QA processes:

  • Review deployment metrics (frequency, success rate, time to deploy)
  • Analyze bug escape rates and root causes
  • Adjust test coverage based on findings
  • Automate repetitive testing tasks
  • Update this documentation with learned best practices

By following these deployment and quality assurance practices, we ensure reliable, high-quality software delivery to our clients and end users.