QA

Learn how SRE.ai can be used to facilitate QA

Overview

Quality issues found after deployment are expensive.

They require rework, rollback coordination, and often a second deployment cycle to resolve.

QA is about moving that discovery earlier: catching issues in the pipeline, before changes reach production.

SRE.ai addresses this with configurable quality gates at each pipeline stage.

Gates enforce Apex test execution at a configurable test level, block changes that don't meet a minimum code coverage threshold, and surface code analysis findings with severity-based blocking rules.

A change can't advance to the next stage until it meets the standards you've defined for that stage.
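The gate behavior described above can be pictured as a simple predicate. The sketch below is illustrative only (the function name and inputs are ours, not SRE.ai's internals), and it simplifies coverage to a single org-wide percentage:

```python
# Illustrative sketch (not SRE.ai internals): the pass/fail decision a
# stage-level quality gate applies before a change can advance.

def gate_allows_promotion(failed_tests, coverage_pct, min_coverage):
    """Return True only when every Apex test passed and coverage
    meets the stage's configured threshold."""
    if failed_tests:  # any failing test blocks promotion outright
        return False
    return coverage_pct >= min_coverage

# A change with one failing test is blocked regardless of coverage:
print(gate_allows_promotion(["AccountTriggerTest.testBulkInsert"], 92, 75))  # False
# A clean run at 82% coverage clears a 75% gate:
print(gate_allows_promotion([], 82, 75))  # True
```

Both conditions must hold: a failing test blocks even a well-covered change, and a fully passing run is still blocked if coverage falls short.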

Apex test execution

Scenario

Problem:

A team writes Apex tests to validate their Salesforce customizations, but has no way to ensure those tests run and pass before changes reach higher environments.

Without enforcement, test failures surface after deployment rather than before, creating rework and risk.

SRE.ai's fit:

SRE.ai runs Apex tests automatically as part of every deployment and enforces results through configurable quality gates on each pipeline stage. Teams choose which test level runs at each stage and set whether failures block promotion.


Who this is for

Teams with Apex test classes in their org who want test results to gate promotion between pipeline stages.


What you'll need

  • At least one Pipeline configured with stages mapped to Salesforce environments (see Pipelines documentation)

  • Apex test classes present in the target Salesforce org

Setup

Configure the Deploy Test Level quality gate on each stage where you want Apex tests enforced.

  1. Navigate to Pipelines and select your active pipeline.

  2. Click on the stage where you want test execution enforced to open the Stage Details panel.

  3. Under Quality gates, locate the Deploy Test Level field and select the test scope for this stage:

    • No Test Run: skips all tests. Use only for stages targeting non-Apex metadata.

    • Run Specified Tests: runs only the test classes included in the deployed package.

    • Run Local Tests: runs all test classes defined locally in the target org. This is the recommended default for integration and staging stages.

    • Run All Tests in Org: runs every test class in the target org, including those from installed packages. Use this for pre-production gates that require full coverage.

  4. Under Quality gates, locate the Block Code Coverage Percentage field and select the minimum coverage threshold required before changes can advance:

    • Available thresholds: 75%, 80%, 85%, or 90%.

    • Deployments that don't meet the threshold are blocked at this stage.

  5. Repeat for any additional stages where you want test execution enforced. Teams typically apply stricter test levels and higher coverage thresholds as changes move closer to production.

  6. Save the stage configuration.
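A pipeline configured this way can be summarized as one gate per stage, with strictness escalating toward production. The snippet below is a hypothetical representation for illustration; the dictionary keys and function names are ours, not SRE.ai's API. The level names and threshold values come from the options listed above:

```python
# Hypothetical model of per-stage quality-gate settings; names are
# illustrative, not SRE.ai's actual configuration schema.

VALID_TEST_LEVELS = {
    "No Test Run",
    "Run Specified Tests",
    "Run Local Tests",
    "Run All Tests in Org",
}
VALID_THRESHOLDS = {75, 80, 85, 90}  # the selectable coverage gates

def validate_stage_gate(test_level, coverage_threshold):
    """Reject settings outside the options the UI exposes."""
    if test_level not in VALID_TEST_LEVELS:
        raise ValueError(f"unknown test level: {test_level}")
    if coverage_threshold not in VALID_THRESHOLDS:
        raise ValueError(f"threshold must be one of {sorted(VALID_THRESHOLDS)}")
    return {"deploy_test_level": test_level,
            "block_coverage_pct": coverage_threshold}

# Stricter gates as changes approach production:
pipeline = [
    validate_stage_gate("Run Local Tests", 75),       # integration
    validate_stage_gate("Run Local Tests", 80),       # staging
    validate_stage_gate("Run All Tests in Org", 90),  # pre-production
]
```

The escalation pattern (local tests with a modest threshold early, a full org sweep with a high threshold late) mirrors the typical setup noted in step 5.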


Example workflow

  1. A developer promotes a change to the configured pipeline stage.

  2. SRE.ai runs the Apex tests at the configured test level against the target environment.

  3. Two progress indicators track execution in real time: Deployment Progress (components deployed) and Test Execution Progress (tests passed vs. failed).

  4. When execution completes, test results are surfaced in the deployment view:

    • Test Failures: failed test methods with failure messages and stack traces.

    • Test Successes: passing test methods.

    • Code Coverage: per-class coverage percentages against the configured threshold.

  5. If all tests pass and coverage meets the threshold, the change advances through the quality gate. If any test fails or coverage falls short, the change is blocked at this stage until the issues are resolved.
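The grouping of results into the views above can be sketched as follows. The record fields (`passed`, `message`, `stack`) are assumptions for illustration, not SRE.ai's actual result format:

```python
# Sketch of grouping raw Apex test results into the Test Failures and
# Test Successes views; field names are illustrative assumptions.

def summarize_results(results):
    """Split test method results into failures (with diagnostic
    details) and successes."""
    failures = [r for r in results if not r["passed"]]
    successes = [r for r in results if r["passed"]]
    return {"Test Failures": failures, "Test Successes": successes}

results = [
    {"method": "LeadConvertTest.testHappyPath", "passed": True},
    {"method": "LeadConvertTest.testNullOwner", "passed": False,
     "message": "System.NullPointerException", "stack": "Class.LeadConvert..."},
]
summary = summarize_results(results)
print(len(summary["Test Failures"]))  # 1
```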

Result

Every deployment through the configured stage is validated by Apex tests before advancing. Test failures and coverage gaps are surfaced with class-level detail and stack traces, giving developers the information they need to fix issues before they reach production.

Regression suite execution

Scenario

Problem:

Before any production deployment, a team needs to confirm that no existing functionality has broken.

Limiting the test run to only the changed components risks missing regressions in untouched code that depends on the deployed changes.

SRE.ai's fit:

Configuring the Run All Tests in Org test level on the staging stage ensures that every Apex test class in the org executes before changes can advance to production, providing a full regression sweep as a mandatory gate.


Who this is for

Teams that require a full Apex test sweep as a mandatory checkpoint before promoting changes to production.


What you'll need

  • A Pipeline configured with a staging or pre-production stage that precedes the production stage (see Pipelines documentation)

  • Apex test classes present in the staging org

Setup

Configure the staging stage to run every test class in the org before changes can advance.

  1. Navigate to Pipelines and select your active pipeline.

  2. Click on your staging (or pre-production) stage to open the Stage Details panel.

  3. Under Quality gates, set the Deploy Test Level to Run All Tests in Org.

    • This runs every test class in the target org (not just the tests in the deployed package), ensuring full coverage of all existing functionality.

  4. Under Quality gates, set the Block Code Coverage Percentage to your team's required threshold (75%, 80%, 85%, or 90%).

    • Changes that don't meet the threshold are blocked at this stage, even if all tests pass.

  5. Save the stage configuration.


Example workflow

  1. A developer promotes a change to the staging stage.

  2. SRE.ai triggers a full test run against the staging org. Every Apex test class executes, not just the tests in the deployed package.

  3. The deployment view tracks execution in real time, showing test successes and failures as they complete.

  4. When execution finishes, the deployment view displays:

    • Test Failures: any test methods that failed, with failure messages and stack traces.

    • Code Coverage: per-class coverage percentages against the configured threshold, including a list of classes that fall below the requirement.

  5. If all tests pass and every class meets the coverage threshold, the change is cleared to advance to production. If any test fails or coverage falls short, the change is blocked at the staging stage until the issues are resolved.
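The class-level coverage rule in step 5 can be sketched as a check that reports exactly which classes miss the threshold. The data shapes below are assumptions for illustration, not SRE.ai's actual output:

```python
# Illustrative per-class coverage check: every class must meet the
# stage's threshold, and the classes that miss it are reported as
# blockers. Data shapes are assumptions, not SRE.ai's output format.

def classes_below_threshold(coverage_by_class, threshold):
    """Return the sorted names of classes whose coverage is below
    the configured percentage."""
    return sorted(
        name for name, pct in coverage_by_class.items() if pct < threshold
    )

coverage = {
    "OpportunityService": 91.0,
    "QuoteCalculator": 78.5,
    "LegacyBatchJob": 62.0,
}
blockers = classes_below_threshold(coverage, 75)
print(blockers)  # ['LegacyBatchJob'] -> change is blocked at staging
```

An empty blocker list (together with zero test failures) is what clears the change to advance to production.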

Result

Production deployments only proceed after every test class in the staging org passes and coverage requirements are met. Teams catch regressions in dependent code (not just the changed components) before any change reaches production.
