Product Verification#

The product verification activity is conducted by an accredited assessor organization and establishes the product implementation score. The goal is to confirm that the product meets the claims the RTP has made about it, that is: “does the product prevent unintended outcomes?”

For initial product submissions and extensive changes in a {term}`product revision submission`, the full product verification process will be used to determine, or redetermine, the appropriate scores. For other, smaller product revisions, this activity will be streamlined because the changes have been determined to pose a lower risk to the system.

Methodology#

For more information about what is expected for the product verification activity, see the provider submission activity and the RABET-V security requirements.

For initial product submissions, a full system test is performed. A full system test will review automated test results and perform a systemwide functional test and penetration test.

For product revision submissions, the test plan determination activity outlines required tests.

Inputs#

Outputs#

  • Results of verification test methods

  • Product implementation score based on the product implementation rubric

Verification Methods#

An accredited assessor organization will use one or more of the following techniques, as indicated in the test plan. The scope of the testing (i.e., which components to test) will also be indicated by the test plan.

Artifact Review#

This method will review an artifact provided by the RTP. The review will look for gaps or concerns in relevant controls based on the information provided. Each type of artifact will have various indicators of acceptability. Types of RTP artifacts include:

  • Automated source code unit test results

  • Automated vulnerability test results

  • Automated configuration verification results

  • Security event audit logs

  • Third-party security analysis results (automated or manual)

The artifacts must have been evaluated as “reliable” during the organizational assessment activity in order to be used for product verification.

Automated Testing#

Automated testing is a broad type of testing that relies on software to perform test routines against the product or product component. Automated testing will execute the testing software against its target and produce results which will be evaluated by the accredited assessor organization. The type of automated test will depend on the target. Types of automated testing may include:

  • Configuration testing

  • Vulnerability analysis

  • Source code analysis

  • Accessibility testing

  • Browser compatibility testing
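As an illustration of this pattern, the following sketch runs a set of automated checks against a product build and bundles the results into a single report for assessor review. The tool commands, arguments, and target are hypothetical placeholders; the actual tools and acceptance criteria are dictated by the test plan.

```python
# Minimal sketch: run a set of automated checks and collect their results for
# assessor evaluation. All commands and targets below are placeholders.
import json
import subprocess
from datetime import datetime, timezone

# Hypothetical commands; a real test plan names specific tools and targets.
CHECKS = {
    "source_code_analysis": ["static-analyzer", "--project", "./src"],
    "vulnerability_analysis": ["vuln-scanner", "--target", "https://staging.example.test"],
    "configuration_testing": ["config-audit", "--profile", "baseline.yml"],
}

def run_checks(checks: dict[str, list[str]]) -> dict:
    """Execute each check and record its exit status and captured output."""
    results = {}
    for name, cmd in checks.items():
        try:
            proc = subprocess.run(cmd, capture_output=True, text=True, timeout=1800)
            results[name] = {"passed": proc.returncode == 0, "output": proc.stdout[-2000:]}
        except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
            results[name] = {"passed": False, "error": str(exc)}
    return results

if __name__ == "__main__":
    report = {"generated": datetime.now(timezone.utc).isoformat(), "results": run_checks(CHECKS)}
    print(json.dumps(report, indent=2))
```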

Penetration Testing#

Penetration tests evaluate the product to find security vulnerabilities that an attacker could exploit. The scope of a penetration test may be the product’s network, computer systems, hardware components, or software application(s). Penetration testing is typically a combination of manual and automated testing. Automated tools help with web application pen tests but must be used by skilled and experienced testers.

RABET-V relies on OWASP’s Web Application Security Testing Guide for web application and web service penetration testing options.

In addition to a full penetration testing option, the following web application penetration testing subtypes are supported:

  • Configuration and deployment

  • Identity management

  • Authentication

  • Authorization

  • Session management

  • Input validation

  • Error handling

  • Cryptography

Limited penetration testing may be used if the changes do not warrant full penetration testing.
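As a small example of the configuration and deployment subtype, the sketch below probes a target endpoint for common security response headers and checks that plain HTTP is redirected to HTTPS. The target URL and header list are illustrative assumptions; a real penetration test combines many such automated probes with manual testing by skilled testers.

```python
# Minimal sketch of an automated probe used within the "configuration and
# deployment" penetration-testing subtype. URL and header set are assumptions.
import requests

TARGET = "https://product.example.test"  # hypothetical product endpoint
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
]

def probe(target: str) -> list[str]:
    """Return findings for missing security headers or unredirected HTTP access."""
    findings = []
    resp = requests.get(target, timeout=10)
    for header in EXPECTED_HEADERS:
        if header not in resp.headers:
            findings.append(f"Missing response header: {header}")
    # Plain-HTTP access should redirect to HTTPS (or be refused outright).
    try:
        http_resp = requests.get(target.replace("https://", "http://", 1),
                                 timeout=10, allow_redirects=False)
        redirected = (300 <= http_resp.status_code < 400 and
                      http_resp.headers.get("Location", "").startswith("https://"))
        if not redirected:
            findings.append("HTTP access is not redirected to HTTPS")
    except requests.ConnectionError:
        pass  # port 80 closed entirely is acceptable
    return findings

if __name__ == "__main__":
    for finding in probe(TARGET):
        print(finding)
```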

Product Implementation Rubric#

The product implementation rubric provides a maturity score for each of the control families based on how well the product meets the requirements within those families. The scores range from zero to three, where three is the best.

Each requirement is evaluated as a binary pass or fail. Any assumptions made about the configuration or setup of the product must be documented with the result.

The scores are calculated by summing the percentage of applicable requirements that pass at each maturity level. For instance, meeting 100% of requirements at maturity level one, 25% at maturity level two, and 0% at maturity level three would result in a score of 1.25.
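This calculation can be expressed as a short sketch; the pass/fail data below reproduces the illustrative example from the preceding paragraph, not actual results.

```python
# Minimal sketch of the rubric score calculation: sum the fraction of
# applicable requirements that pass at each maturity level.
def maturity_score(results: dict[int, list[bool]]) -> float:
    """results maps maturity level (1-3) to pass/fail outcomes for the
    applicable requirements at that level."""
    score = 0.0
    for level in (1, 2, 3):
        outcomes = results.get(level, [])
        if outcomes:  # skip levels with no applicable requirements
            score += sum(outcomes) / len(outcomes)
    return round(score, 2)

# 100% at level one, 25% at level two, 0% at level three -> 1.25
example = {
    1: [True] * 6,
    2: [True, False, False, False],
    3: [False] * 3,
}
assert maturity_score(example) == 1.25
```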

Security Test Method Descriptions#

  • Fuzzing - Test of the application’s ability to accept a wide variety of inputs without causing it to enter an unexpected or undefined state (a brief sketch follows this list)

  • Penetration Testing - Testing that verifies the extent to which a system, device or process resists active attempts to compromise its security [NIST SP 800-152]

  • Functional - Test that evaluates the functionality of a component against a design specification. Can be automated, but because the function will be implemented differently by each product, a custom test script may be required for each

  • Web Testing - A functional test that exercises one or more parts of the web stack and verifies the expected output

  • Failover and Restore Testing - Test that evaluates the resiliency of a system by making components of the system inoperable and evaluating the result

  • Code Analysis - A white box test that uses code artifacts, such as source code or unobfuscated binaries, to verify certain properties

  • Bill of Materials (BOM) Analysis - Analysis of the bill of materials, such as software components and their versions

  • Configuration Audit - Test to verify that a component is configured as required

  • Data Audit - Test to verify the presence or absence of certain records, such as inappropriately collected PII or missing authentication logs. Can be combined with functional testing to provide a higher level of confidence

  • Artifact Review - Review of RTP-supplied artifacts from their development, testing, integration, and deployment process or artifacts provided by the RTP’s hosting environment

  • Documentation Audit - Review of the RTP-supplied documentation for presence of required content or presence of poor guidance (e.g., direction to use an insecure password)

  • Vendor Attestation - A statement made by the vendor indicating the existence of one or more security controls
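The brief fuzzing sketch referenced above is shown here. The parsing function is a hypothetical stand-in for a product input-handling routine; the point is that cleanly rejected input is acceptable, while any other failure is a finding.

```python
# Minimal fuzzing sketch: feed random byte strings to an input handler and
# flag any failure outside its documented failure modes.
import random

def parse_record(data: bytes) -> dict:
    """Hypothetical stand-in for a product's input-handling routine."""
    text = data.decode("utf-8")            # may raise UnicodeDecodeError
    name, _, value = text.partition("=")
    if not name:
        raise ValueError("empty field name")
    return {name: value}

def fuzz(target, iterations: int = 10_000, max_len: int = 256) -> list[bytes]:
    """Return the inputs that caused the target to fail unexpectedly."""
    crashes = []
    expected = (UnicodeDecodeError, ValueError)   # documented failure modes
    for _ in range(iterations):
        blob = bytes(random.getrandbits(8)
                     for _ in range(random.randint(0, max_len)))
        try:
            target(blob)
        except expected:
            pass                                  # clean rejection is fine
        except Exception:                         # anything else is a finding
            crashes.append(blob)
    return crashes

if __name__ == "__main__":
    print(f"{len(fuzz(parse_record))} unexpected failures")
```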

Accessibility Test Method Descriptions#

  • Conformance - Test that validates the conformance of a component, page, or application to a specific accessibility standard. Conformance testing can be automated during development to test components and after development to test full applications. For example, tools like Accessibility Insights can check Android, web, and Windows applications; eslint-plugin-jsx-a11y can perform static analysis on React applications; axe DevTools can be used to test web applications; and SiteImprove is a paid service that can automate accessibility, spell-checking, and readability checks on web applications

  • Functional - Test that evaluates the functionality of a component against a set of accessibility expectations. This must include the ability to interact with only keyboard navigation and should include testing with assistive technology (e.g., screen reader, braille display) and plain-language analysis (e.g., ideal Flesch-Kincaid score)

  • Artifact Review - Review of RTP-supplied artifacts from their automated, functional, or third-party testing

  • Vendor Attestation - A statement made by the vendor indicating the adherence to one or more accessibility controls

Usability Test Method Descriptions#

  • Artifact Review - Review of RTP-supplied artifacts from their functional or third-party testing

  • Vendor Attestation - A statement made by the vendor indicating the adherence to one or more usability controls

Product Verification Baseline#

RABET-V uses baseline scoring in organizational, architecture, and product verification to determine whether a product is Verified, Conditionally Verified, or Returned. The product verification baseline contains all the security requirements at level one and 50% of the requirements from levels two and three combined. When the RTP completes the product claims workbook, they will identify at least 50% of the level two and level three requirements from each of the control families. These claimed requirements will be part of the testing in the basic and streamlined testing scenarios.

The overall minimum baseline for verification is 106 of 153 requirements: all level one requirements plus 50% of the combined level two and three requirements in each of the security control families.
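A short sketch of this per-family check is shown below; the data structure is an assumption, and the example counts mirror the authentication family in the first table that follows.

```python
# Minimal sketch of the baseline check: every level one requirement must pass,
# and at least 50% of the combined level two and three requirements must pass
# in each security control family.
from math import ceil

def meets_baseline(results: dict[str, dict[int, list[bool]]]) -> bool:
    """results maps control family -> maturity level -> pass/fail outcomes."""
    for by_level in results.values():
        level_one = by_level.get(1, [])
        higher = by_level.get(2, []) + by_level.get(3, [])
        if not all(level_one):
            return False                # every level one requirement must pass
        if sum(higher) < ceil(len(higher) / 2):
            return False                # at least half of levels two and three
    return True

# Example: authentication has 6 level one and 12 level two/three requirements;
# all level one and 6 of 12 higher-level requirements pass.
example = {
    "Authentication": {
        1: [True] * 6,
        2: [True] * 4 + [False] * 4,
        3: [True] * 2 + [False] * 2,
    },
}
print(meets_baseline(example))  # True
```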

The authentication baseline requires passing all level one requirements (below) and 6 of 12 level two and three requirements.

| Security Control Family | Requirement |
| --- | --- |
| Authentication | 1.1.1 Default passwords are not used or are automatically changed as part of setup |
| Authentication | 1.1.2 Authentication is applied consistently through the application |
| Authentication | 1.1.3 Encrypt or hash all authentication credentials |
| Authentication | 1.1.4 Customer admins have access to an inventory of their user accounts |
| Authentication | 1.1.5 Implement protections against brute force attacks |
| Authentication | 1.1.6 Require multi-factor authentication for all administrative access |

The authorization baseline requires passing all level one requirements (below) and 4 of 8 level two and three requirements.

| Security Control Family | Requirement |
| --- | --- |
| Authorization | 2.1.1 Platform provides an authorization system |
| Authorization | 2.1.2 Applications and middleware should run with minimal privileges |
| Authorization | 2.1.3 Apply the principle of least privilege |
| Authorization | 2.1.4 Use tokens to prevent forged requests |

The boundary protection baseline requires passing all level one requirements (below) and 6 of 12 level two and three requirements.

| Security Control Family | Requirement |
| --- | --- |
| Boundary Protection | 3.1.1 Deny communications with known malicious IP addresses |
| Boundary Protection | 3.1.2 Deny communication over unauthorized ports |
| Boundary Protection | 3.1.3 Deploy network-based IDS sensors |
| Boundary Protection | 3.1.4 Document traffic configuration rules |
| Boundary Protection | 3.1.5 Use MFA for managing network infrastructure |
| Boundary Protection | 3.1.6 Configure perimeter devices to prevent common types of attacks |
| Boundary Protection | 3.1.7 Disable wireless access on devices if it is not required |
| Boundary Protection | 3.1.8 Documentation clearly identifies wireless capabilities |
| Boundary Protection | 3.1.9 Provide dedicated wireless networks |
| Boundary Protection | 3.1.10 Disable wireless peripheral access to devices |

The data confidentiality and integrity baseline requires passing all level one requirements (below) and 8 of 15 level two and three requirements.

| Security Control Family | Requirement |
| --- | --- |
| Data Confidentiality | 4.1.1 Use valid HTTPS certificates from a reputable certificate authority |
| Data Confidentiality | 4.1.2 Encrypt transmittal of username and authentication credentials |
| Data Confidentiality | 4.1.3 Use the strict-transport-security header |
| Data Confidentiality | 4.1.4 Disable data caching using cache control headers and autocomplete |
| Data Confidentiality | 4.1.5 Updated TLS configuration on servers |
| Data Confidentiality | 4.1.6 Use TLS everywhere |
| Data Confidentiality | 4.1.7 Disable HTTP access for all TLS enabled resources |
| Data Confidentiality | 4.1.8 Do not disclose too much information in error messages |
| Data Confidentiality | 4.1.9 Display generic error messages |
| Data Confidentiality | 4.1.10 Store user passwords using a strong, iterative, salted hash |

The system availability baseline requires passing all level one requirements (below) and 3 of 6 level two and three requirements.

| Security Control Family | Requirement |
| --- | --- |
| System Availability | 5.1.1 Ensure regular automated backups |
| System Availability | 5.1.2 Backup data should be restorable |
| System Availability | 5.1.3 Local distributed storage capability |
| System Availability | 5.1.4 Local distributed processing capability |

The injection prevention baseline requires passing all level one requirements (below) and 4 of 7 level two and three requirements.

| Security Control Family | Requirement |
| --- | --- |
| Injection Prevention | 6.1.1 Use secure HTTP response headers |
| Injection Prevention | 6.1.2 Validate uploaded files |
| Injection Prevention | 6.1.3 Set the encoding for your application |
| Injection Prevention | 6.1.4 Validate all input |

The logging and alerting baseline requires passing all level one requirements (below) and 7 of 14 level two and three requirements.

| Security Control Family | Requirement |
| --- | --- |
| Logging and Alerting | 7.1.1 Activate audit logging |
| Logging and Alerting | 7.1.2 Ensure adequate storage for logs |
| Logging and Alerting | 7.1.3 Log all authentication activities |
| Logging and Alerting | 7.1.4 Log all privilege changes |
| Logging and Alerting | 7.1.5 Do not log inappropriate data |
| Logging and Alerting | 7.1.6 Store logs securely |
| Logging and Alerting | 7.1.7 Log and alert on changes to administrative group membership |

The secret management baseline requires passing all level one requirements (below) and 3 of 5 level two and three requirements.

| Security Control Family | Requirement |
| --- | --- |
| Secret Management | 8.1.1 Don’t hardcode credentials |
| Secret Management | 8.1.2 Store credentials securely |
| Secret Management | 8.1.3 Credentials for non-production and production environments are different |

The system integrity baseline requires passing all level one requirements (below) and 6 of 12 level two and three requirements.

| Security Control Family | Requirement |
| --- | --- |
| System Integrity | 9.1.1 Install the latest stable version of any security-related updates on all network devices |
| System Integrity | 9.1.2 Ensure anti-malware software and signatures are updated |
| System Integrity | 9.1.3 Configure devices to not auto-run content |
| System Integrity | 9.1.4 Use port protectors on unused ports |
| System Integrity | 9.1.5 Configure anti-malware scanning of removable devices |

The user session baseline requires passing all level one requirements and 3 of 6 level two and three requirements.