uniform_resource (table) Content
- Start Row: 0
- Rows per Page: 50
- Total Rows: 96
- Current Page: 1
- Total Pages: 2
01K5C21MQN04S4NX8X626Z42WP | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compliance-testing/test-plan/qf-plan.md | 4586f5cfa0fc76ac0681e36cd41097c6fde16acc | --- id: PLN-002 name: "compliance Test Plan" description: "To validate that the system complies with the FHIR standards by testing its API endpoints and data exchange processes against the specifications defined in the Implementation Guide (IG)." created_by: "qa-lead@example.com" created_at: "2024-11-01" tags: ["compliance testing"] related_requirements: ["REQ-101", "REQ-102"] --- Ensure that the system complies with the FHIR standards by testing its API endpoints and data exchange processes against the specifications defined in the Implementation Guide (IG). ## Test Execution Steps ### Analyze FHIR Implementation Guide (IG) Requirements - Review the IG to identify key compliance requirements, including resource profiles, extensions, and terminology bindings. ### Validate FHIR Resource Profiles - Verify that all FHIR resources conform to the structure and constraints specified in the IG. - Test for compliance with required, must-support, and optional elements. ### Verify Extensions and Modifications - Validate custom extensions and modifications to ensure they are defined and implemented according to the IG. ### Terminology Binding Validation - Confirm that codable concepts and value sets adhere to the terminology bindings specified in the IG. ### Check Conformance Statements - Validate system compliance with declared FHIR capabilities (e.g., search parameters, read, write, or operation support). ### Validate Interactions and APIs - Test API interactions (read, create, update, delete) to ensure they align with IG requirements. ### Test for Cardinality Rules - Verify adherence to cardinality rules (minimum and maximum occurrence constraints) specified in the IG. ### HTTP Status Code Validation - Validate API responses to ensure appropriate HTTP status codes are returned based on FHIR operations. ### Data Validation Against IG Examples - Compare FHIR resource instances against examples provided in the IG for accuracy and adherence. ## Decision Point - **If validation fails**: Proceed to "Defect Logging." - **If validation passes**: Proceed to "Documentation of Compliance." ## Defect Log in Jira & Xray - Log identified defects in Jira and link them to corresponding Xray test cases for traceability. ## Issue Fixes - Address non-compliance issues to ensure the API meets FHIR IG standards. ## Retesting & Regression Testing - Retest resolved issues to confirm compliance. - Conduct regression testing to verify no new issues were introduced. ## Test Report Generation - Generate a consolidated test report summarizing validation outcomes, including test success rates, defects, logs, and screenshots. ## Deliverables 1. **Test Report**: Summary of test execution, success rates, defects, and screenshots. 2. **Defect Management Records**: Complete traceability of logged defects from identification to resolution. 
| md | 2881 | 2024-12-30 15:39:08 UTC | { "frontMatter": "---\nid: PLN-002\nname: \"compliance Test Plan\"\ndescription: \"To validate that the system complies with the FHIR standards by testing its API endpoints and data exchange processes against the specifications defined in the Implementation Guide (IG).\"\ncreated_by: \"qa-lead@example.com\"\ncreated_at: \"2024-11-01\"\ntags: [\"compliance testing\"]\nrelated_requirements: [\"REQ-101\", \"REQ-102\"]\n---\n", "body": "\nEnsure that the system complies with the FHIR standards by testing its API endpoints and data exchange processes against the specifications defined in the Implementation Guide (IG).\n\n## Test Execution Steps\n\n### Analyze FHIR Implementation Guide (IG) Requirements\n\n- Review the IG to identify key compliance requirements, including resource profiles, extensions, and terminology bindings.\n\n### Validate FHIR Resource Profiles\n\n- Verify that all FHIR resources conform to the structure and constraints specified in the IG.\n- Test for compliance with required, must-support, and optional elements.\n\n### Verify Extensions and Modifications\n\n- Validate custom extensions and modifications to ensure they are defined and implemented according to the IG.\n\n### Terminology Binding Validation\n\n- Confirm that codable concepts and value sets adhere to the terminology bindings specified in the IG.\n\n### Check Conformance Statements\n\n- Validate system compliance with declared FHIR capabilities (e.g., search parameters, read, write, or operation support).\n\n### Validate Interactions and APIs\n\n- Test API interactions (read, create, update, delete) to ensure they align with IG requirements.\n\n### Test for Cardinality Rules\n\n- Verify adherence to cardinality rules (minimum and maximum occurrence constraints) specified in the IG.\n\n### HTTP Status Code Validation\n\n- Validate API responses to ensure appropriate HTTP status codes are returned based on FHIR operations.\n\n### Data Validation Against IG Examples\n\n- Compare FHIR resource instances against examples provided in the IG for accuracy and adherence.\n\n## Decision Point\n\n- **If validation fails**: Proceed to \"Defect Logging.\"\n- **If validation passes**: Proceed to \"Documentation of Compliance.\"\n\n## Defect Log in Jira & Xray\n\n- Log identified defects in Jira and link them to corresponding Xray test cases for traceability.\n\n## Issue Fixes\n\n- Address non-compliance issues to ensure the API meets FHIR IG standards.\n\n## Retesting & Regression Testing\n\n- Retest resolved issues to confirm compliance.\n- Conduct regression testing to verify no new issues were introduced.\n\n## Test Report Generation\n\n- Generate a consolidated test report summarizing validation outcomes, including test success rates, defects, logs, and screenshots.\n\n## Deliverables\n\n1. **Test Report**: Summary of test execution, success rates, defects, and screenshots.\n2. 
**Defect Management Records**: Complete traceability of logged defects from identification to resolution.\n", "attrs": { "id": "PLN-002", "name": "compliance Test Plan", "description": "To validate that the system complies with the FHIR standards by testing its API endpoints and data exchange processes against the specifications defined in the Implementation Guide (IG).", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "compliance testing" ], "related_requirements": [ "REQ-101", "REQ-102" ] } } | { "id": "PLN-002", "name": "compliance Test Plan", "description": "To validate that the system complies with the FHIR standards by testing its API endpoints and data exchange processes against the specifications defined in the Implementation Guide (IG).", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "compliance testing" ], "related_requirements": [ "REQ-101", "REQ-102" ] } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
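The plan's "Validate Interactions and APIs" and "HTTP Status Code Validation" steps lend themselves to automation with Playwright, the tool named elsewhere in these rows. A minimal sketch follows, assuming a hypothetical FHIR base URL and a Patient read interaction; it is an illustration of the technique, not the project's actual test code.

```ts
// Minimal sketch of the plan's HTTP status-code and read-interaction checks.
// FHIR_BASE and the Patient ids are placeholders, not values taken from this RSSD.
import { test, expect } from '@playwright/test';

const FHIR_BASE = process.env.FHIR_BASE ?? 'http://localhost:8080/fhir';

test('read interaction returns 200 and a FHIR resource', async ({ request }) => {
  const res = await request.get(`${FHIR_BASE}/Patient/example`, {
    headers: { Accept: 'application/fhir+json' },
  });
  expect(res.status()).toBe(200); // a successful read must be 200, not just any 2xx
  const body = await res.json();
  expect(body.resourceType).toBe('Patient'); // every FHIR resource carries resourceType
});

test('read of a missing resource returns 404', async ({ request }) => {
  const res = await request.get(`${FHIR_BASE}/Patient/does-not-exist`, {
    headers: { Accept: 'application/fhir+json' },
  });
  expect(res.status()).toBe(404); // reads of unknown ids should fail explicitly
});
```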
01K5C21MQN5WHKW3KG9700PYWV | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compliance-testing/qf-suite.md | 400970aa7c748a5eeea6f845c90a14a06f24e85b | --- id: SUT-002 projectId: PRJ-001 name: "Compliance Test Suite" description: "To validate that the system complies with the FHIR standards by testing its API endpoints and data exchange processes against the specifications defined in the Implementation Guide (IG)." created_by: "qa-lead@example.com" created_at: "2024-11-01" tags: ["compliance testing"] --- ## Test Execution Steps ### Analyze FHIR Implementation Guide (IG) Requirements - Review the IG to identify key compliance requirements, including resource profiles, extensions, and terminology bindings. ### Validate FHIR Resource Profiles - Verify that all FHIR resources conform to the structure and constraints specified in the IG. - Test for compliance with required, must-support, and optional elements. ### Verify Extensions and Modifications - Validate custom extensions and modifications to ensure they are defined and implemented according to the IG. ### Terminology Binding Validation - Confirm that codable concepts and value sets adhere to the terminology bindings specified in the IG. ### Check Conformance Statements - Validate system compliance with declared FHIR capabilities (e.g., search parameters, read, write, or operation support). ### Validate Interactions and APIs - Test API interactions (read, create, update, delete) to ensure they align with IG requirements. ### Test for Cardinality Rules - Verify adherence to cardinality rules (minimum and maximum occurrence constraints) specified in the IG. ### HTTP Status Code Validation - Validate API responses to ensure appropriate HTTP status codes are returned based on FHIR operations. ### Data Validation Against IG Examples - Compare FHIR resource instances against examples provided in the IG for accuracy and adherence. ## Decision Point - **If validation fails**: Proceed to "Defect Logging." - **If validation passes**: Proceed to "Documentation of Compliance." ## Defect Log in Jira & Xray - Log identified defects in Jira and link them to corresponding Xray test cases for traceability. ## Issue Fixes - Address non-compliance issues to ensure the API meets FHIR IG standards. ## Retesting & Regression Testing - Retest resolved issues to confirm compliance. - Conduct regression testing to verify no new issues were introduced. ## Test Report Generation - Generate a consolidated test report summarizing validation outcomes, including test success rates, defects, logs, and screenshots. ## Deliverables 1. **Test Report**: Summary of test execution, success rates, defects, and screenshots. 2. **Defect Management Records**: Complete traceability of logged defects from identification to resolution. 
| md | 2673 | 2024-12-30 15:35:08 UTC | { "frontMatter": "---\nid: SUT-002\nprojectId: PRJ-001\nname: \"Compliance Test Suite\"\ndescription: \"To validate that the system complies with the FHIR standards by testing its API endpoints and data exchange processes against the specifications defined in the Implementation Guide (IG).\"\ncreated_by: \"qa-lead@example.com\"\ncreated_at: \"2024-11-01\"\ntags: [\"compliance testing\"]\n---\n", "body": "\n## Test Execution Steps\n\n### Analyze FHIR Implementation Guide (IG) Requirements\n\n- Review the IG to identify key compliance requirements, including resource profiles, extensions, and terminology bindings.\n\n### Validate FHIR Resource Profiles\n\n- Verify that all FHIR resources conform to the structure and constraints specified in the IG.\n- Test for compliance with required, must-support, and optional elements.\n\n### Verify Extensions and Modifications\n\n- Validate custom extensions and modifications to ensure they are defined and implemented according to the IG.\n\n### Terminology Binding Validation\n\n- Confirm that codable concepts and value sets adhere to the terminology bindings specified in the IG.\n\n### Check Conformance Statements\n\n- Validate system compliance with declared FHIR capabilities (e.g., search parameters, read, write, or operation support).\n\n### Validate Interactions and APIs\n\n- Test API interactions (read, create, update, delete) to ensure they align with IG requirements.\n\n### Test for Cardinality Rules\n\n- Verify adherence to cardinality rules (minimum and maximum occurrence constraints) specified in the IG.\n\n### HTTP Status Code Validation\n\n- Validate API responses to ensure appropriate HTTP status codes are returned based on FHIR operations.\n\n### Data Validation Against IG Examples\n\n- Compare FHIR resource instances against examples provided in the IG for accuracy and adherence.\n\n## Decision Point\n\n- **If validation fails**: Proceed to \"Defect Logging.\"\n- **If validation passes**: Proceed to \"Documentation of Compliance.\"\n\n## Defect Log in Jira & Xray\n\n- Log identified defects in Jira and link them to corresponding Xray test cases for traceability.\n\n## Issue Fixes\n\n- Address non-compliance issues to ensure the API meets FHIR IG standards.\n\n## Retesting & Regression Testing\n\n- Retest resolved issues to confirm compliance.\n- Conduct regression testing to verify no new issues were introduced.\n\n## Test Report Generation\n\n- Generate a consolidated test report summarizing validation outcomes, including test success rates, defects, logs, and screenshots.\n\n## Deliverables\n\n1. **Test Report**: Summary of test execution, success rates, defects, and screenshots.\n2. 
**Defect Management Records**: Complete traceability of logged defects from identification to resolution.\n", "attrs": { "id": "SUT-002", "projectId": "PRJ-001", "name": "Compliance Test Suite", "description": "To validate that the system complies with the FHIR standards by testing its API endpoints and data exchange processes against the specifications defined in the Implementation Guide (IG).", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "compliance testing" ] } } | { "id": "SUT-002", "projectId": "PRJ-001", "name": "Compliance Test Suite", "description": "To validate that the system complies with the FHIR standards by testing its API endpoints and data exchange processes against the specifications defined in the Implementation Guide (IG).", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "compliance testing" ] } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQN4KZR3VPJJHFQKTVB | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compliance-testing/compliance-testcase/TC-0023.run-1.result.json | f2503ac359e04171a0c1c7e0b9f6791838718ec5 | { "test_case_fii": "TC-0023", "title": "Ensure that the response body adheres to the schema defined in the Implementation Guide (IG) for the API endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a request (GET/POST/PUT depending on the API) to the target endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Capture the JSON response from the API.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" }, { "step": 3, "stepname": "Compare the response body against the schema defined in the IG using a schema validation tool.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" }, { "step": 4, "stepname": "Verify that all required fields are present and adhere to the expected data types and formats.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } | json | 1418 | 2024-12-17 08:47:00 UTC | 2025-09-17 14:42:50 | UNKNOWN | |||||||||
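TC-0023's step 3 ("compare the response body against the schema defined in the IG using a schema validation tool") can be expressed with any JSON Schema validator; the sketch below uses Ajv as a stand-in for the tool, and the schema shown is an illustrative placeholder rather than the IG's real profile.

```ts
// Sketch of TC-0023 step 3: validate a captured response body with Ajv.
// The schema is an illustrative stand-in for the IG-defined schema.
import Ajv from 'ajv';

const igSchema = {
  type: 'object',
  required: ['resourceType', 'id', 'status'],
  properties: {
    resourceType: { type: 'string' },
    id: { type: 'string' },
    status: { type: 'string' },
  },
  additionalProperties: true, // tighten to false if the IG forbids extra fields
};

export function validateAgainstIg(responseBody: unknown): string[] {
  const ajv = new Ajv({ allErrors: true });
  const validate = ajv.compile(igSchema);
  if (validate(responseBody)) return [];
  // Return readable error strings so a failed run can be logged as a defect.
  return (validate.errors ?? []).map(e => `${e.instancePath || '/'} ${e.message}`);
}

// Example: validateAgainstIg({ resourceType: 'Observation', id: '1' })
// -> ["/ must have required property 'status'"]
```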
01K5C21MQNS68S0AEZKGGXKG6T | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compliance-testing/compliance-testcase/TC-0023.case.md | 0521888e2e21e9da9c82e4c1b83501d7f58b7841 | --- FII: TC-0023 groupId: GRP-002 projectId: PRJ-001 test_execution: EXE-002 planId: PLN-002 title: Ensure that the response body adheres to the schema defined in the Implementation Guide (IG) for the API endpoint. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["compliance testing"] priority: "High" --- ### Description This test verifies that the JSON response from the API matches the schema requirements outlined in the IG. Any deviations from the schema should be identified and flagged as errors. ### Pre-Conditions: - API endpoint is accessible and operational. - The expected response schema is defined in the IG. - A tool or script for schema validation (e.g., Postman, JSON Schema Validator) is available. ### Test Steps: 1. **Step 1**: Send a request (GET/POST/PUT depending on the API) to the target endpoint. 2. **Step 2**: Capture the JSON response from the API. 3. **Step 3**: Compare the response body against the schema defined in the IG using a schema validation tool. 4. **Step 4**: Verify that all required fields are present and adhere to the expected data types and formats. ### Expected Result: - The response body conforms to the schema specified in the IG, including: - Required fields are present. - Data types and formats match schema definitions. - No additional or unexpected fields are included unless allowed by the schema. | md | 1422 | 2024-12-19 16:50:28 UTC | { "frontMatter": "---\nFII: TC-0023\ngroupId: GRP-002\nprojectId: PRJ-001\ntest_execution: EXE-002\nplanId: PLN-002\ntitle: Ensure that the response body adheres to the schema defined in the Implementation Guide (IG) for the API endpoint.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"compliance testing\"]\npriority: \"High\"\n---\n", "body": "\n### Description\nThis test verifies that the JSON response from the API matches the schema requirements outlined in the IG. Any deviations from the schema should be identified and flagged as errors.\n\n### Pre-Conditions:\n- API endpoint is accessible and operational.\n- The expected response schema is defined in the IG.\n- A tool or script for schema validation (e.g., Postman, JSON Schema Validator) is available.\n\n### Test Steps:\n\n1. **Step 1**: Send a request (GET/POST/PUT depending on the API) to the target endpoint. \n2. **Step 2**: Capture the JSON response from the API. \n3. **Step 3**: Compare the response body against the schema defined in the IG using a schema validation tool. \n4. 
**Step 4**: Verify that all required fields are present and adhere to the expected data types and formats.\n\n### Expected Result:\n- The response body conforms to the schema specified in the IG, including:\n - Required fields are present.\n - Data types and formats match schema definitions.\n - No additional or unexpected fields are included unless allowed by the schema.", "attrs": { "FII": "TC-0023", "groupId": "GRP-002", "projectId": "PRJ-001", "test_execution": "EXE-002", "planId": "PLN-002", "title": "Ensure that the response body adheres to the schema defined in the Implementation Guide (IG) for the API endpoint.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "compliance testing" ], "priority": "High" } } | { "FII": "TC-0023", "groupId": "GRP-002", "projectId": "PRJ-001", "test_execution": "EXE-002", "planId": "PLN-002", "title": "Ensure that the response body adheres to the schema defined in the Implementation Guide (IG) for the API endpoint.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "compliance testing" ], "priority": "High" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQPQPXMAJ3P0XDGKCTM | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compliance-testing/compliance-testcase/TC-0023.run.md | 03bcb637cfe78689717c2f4c46ff4d8f463f1c3c | --- FII: "TR-0023" test_case_fii: "TC-0023" projectId: PRJ-001 test_execution: EXE-002 planId: PLN-002 run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. | md | 224 | 2024-12-19 16:49:06 UTC | { "frontMatter": "---\nFII: \"TR-0023\"\ntest_case_fii: \"TC-0023\"\nprojectId: PRJ-001\ntest_execution: EXE-002\nplanId: PLN-002\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0023", "test_case_fii": "TC-0023", "projectId": "PRJ-001", "test_execution": "EXE-002", "planId": "PLN-002", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0023", "test_case_fii": "TC-0023", "projectId": "PRJ-001", "test_execution": "EXE-002", "planId": "PLN-002", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQPWH9EVE7ZG7SDVYC0 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compliance-testing/compliance-testcase/qf-case-group.md | e1324f2216c16e9b3e82825e4c0235a3826b7912 | --- id: GRP-002 SuiteId: SUT-002 planId: ["PLN-002"] name: "Compliance Test Cases" description: "Comprehensive FHIR Conformance Validation and Testing" created_by: "arun-ramanan@netspective.in" created_at: "2024-11-01" tags: ["Compatability testing"] --- ## Description To ensure adherence to FHIR (Fast Healthcare Interoperability Resources) standards based on the Implementation Guide (IG). The test aims to validate conformance to FHIR profiles, security standards, and operational best practices across all relevant endpoints and resources. ## Test Cases to Execute ### 1. Resource-Level Validation - Verify adherence to mandatory and optional FHIR elements for specific resources. - Ensure support for required extensions and custom profiles per the IG. - Validate use of terminology bindings to specified Value Sets. ### 2. Capability Statement Validation - Verify the API's Capability Statement includes all mandatory elements defined in the IG. - Confirm supported operations, interactions, and resource types. ### 4. Terminology Services Testing - Verify the use of correct Value Sets, Code Systems, and terminology bindings. - Test `$validate-code` and `$expand` operations for terminology validation. ### 10. Audit Logging - Validate that all operations are logged as per FHIR security standards. - Confirm compliance with audit event structures defined in FHIR. ## Environment - **Test Environment:** Test server with FHIR-compliant configuration. - **FHIR Version:** As specified in the IG (e.g., R4, R5). - **Scope Host URL/IP:** Defined API endpoint or test instance. ## Tools Utilized - **FHIR Validator:** Validate resource conformance to FHIR profiles. ## Objectives - Validate compliance with FHIR Implementation Guide requirements. - Identify deviations from FHIR standards and IG conformance. ## Execution Strategy ### Pre-Test Preparation: - Review the FHIR IG and confirm test prerequisites. - Load test data based on IG-compliant resource examples. ### Test Execution: - Run automated tests for conformance using the FHIR Validator. ### Post-Test Reporting: - Document test results and any deviations from expected behavior. - Provide actionable recommendations for resolving identified issues. | md | 2243 | 2024-12-26 11:56:54 UTC | { "frontMatter": "---\nid: GRP-002\nSuiteId: SUT-002\nplanId: [\"PLN-002\"]\nname: \"Compliance Test Cases\"\ndescription: \"Comprehensive FHIR Conformance Validation and Testing\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-11-01\"\ntags: [\"Compatability testing\"]\n---\n", "body": "\n## Description\n\nTo ensure adherence to FHIR (Fast Healthcare Interoperability Resources) standards based on the Implementation Guide (IG). The test aims to validate conformance to FHIR profiles, security standards, and operational best practices across all relevant endpoints and resources.\n\n## Test Cases to Execute\n\n### 1. Resource-Level Validation\n\n- Verify adherence to mandatory and optional FHIR elements for specific resources.\n- Ensure support for required extensions and custom profiles per the IG.\n- Validate use of terminology bindings to specified Value Sets.\n\n### 2. 
Capability Statement Validation\n\n- Verify the API's Capability Statement includes all mandatory elements defined in the IG.\n- Confirm supported operations, interactions, and resource types.\n\n### 4. Terminology Services Testing\n\n- Verify the use of correct Value Sets, Code Systems, and terminology bindings.\n- Test `$validate-code` and `$expand` operations for terminology validation.\n\n### 10. Audit Logging\n\n- Validate that all operations are logged as per FHIR security standards.\n- Confirm compliance with audit event structures defined in FHIR.\n\n## Environment\n\n- **Test Environment:** Test server with FHIR-compliant configuration.\n- **FHIR Version:** As specified in the IG (e.g., R4, R5).\n- **Scope Host URL/IP:** Defined API endpoint or test instance.\n\n## Tools Utilized\n\n- **FHIR Validator:** Validate resource conformance to FHIR profiles.\n\n## Objectives\n\n- Validate compliance with FHIR Implementation Guide requirements.\n- Identify deviations from FHIR standards and IG conformance.\n\n## Execution Strategy\n\n### Pre-Test Preparation:\n\n- Review the FHIR IG and confirm test prerequisites.\n- Load test data based on IG-compliant resource examples.\n\n### Test Execution:\n\n- Run automated tests for conformance using the FHIR Validator.\n\n### Post-Test Reporting:\n\n- Document test results and any deviations from expected behavior.\n- Provide actionable recommendations for resolving identified issues.\n", "attrs": { "id": "GRP-002", "SuiteId": "SUT-002", "planId": [ "PLN-002" ], "name": "Compliance Test Cases", "description": "Comprehensive FHIR Conformance Validation and Testing", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-11-01", "tags": [ "Compatability testing" ] } } | { "id": "GRP-002", "SuiteId": "SUT-002", "planId": [ "PLN-002" ], "name": "Compliance Test Cases", "description": "Comprehensive FHIR Conformance Validation and Testing", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-11-01", "tags": [ "Compatability testing" ] } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
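GRP-002's "Terminology Services Testing" names the `$validate-code` and `$expand` operations; a hedged sketch of driving `$expand` through Playwright's request context is shown below, with the terminology base and ValueSet canonical URL as placeholders.

```ts
// Sketch of the GRP-002 "$expand" check: ask the terminology endpoint to expand a ValueSet
// and confirm an expansion comes back. TERMINOLOGY_BASE and VALUE_SET_URL are placeholders.
import { test, expect } from '@playwright/test';

const TERMINOLOGY_BASE = process.env.TERMINOLOGY_BASE ?? 'http://localhost:8080/fhir';
const VALUE_SET_URL = 'http://example.org/fhir/ValueSet/example'; // hypothetical canonical URL

test('$expand returns an expanded ValueSet', async ({ request }) => {
  const res = await request.get(
    `${TERMINOLOGY_BASE}/ValueSet/$expand?url=${encodeURIComponent(VALUE_SET_URL)}`,
    { headers: { Accept: 'application/fhir+json' } },
  );
  expect(res.status()).toBe(200);
  const body = await res.json();
  expect(body.resourceType).toBe('ValueSet');
  expect(body.expansion?.contains?.length ?? 0).toBeGreaterThan(0); // at least one expanded code
});
```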
01K5C21MQPNV0C04KDJGHF9MF4 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-integrity-testing/test-plan/qf-plan.md | 30b20718b7b721653f6c9ac582e6d3eb30138a12 | --- id: PLN-004 name: "Integrity Test Plan" description: "To validate the seamless integration of APIs, ensuring proper database connectivity and the functionality of authentication retrieval mechanisms. defined in the Implementation Guide (IG)." created_by: "qa-lead@example.com" created_at: "2024-11-01" tags: ["compliance testing"] related_requirements: ["REQ-101", "REQ-102"] --- ## Objective Ensure to validate the seamless integration of APIs, ensuring proper database connectivity and the functionality of authentication retrieval mechanisms. ## Scope This test plan focuses on verifying the following integration aspects: - **API Connectivity**: Ensuring smooth communication between APIs. - **Database Integration**: Validating the accuracy and functionality of database operations (e.g., data storage, retrieval, and updates). - **Authentication Retrievals**: Testing the reliability of API tracking mechanisms, especially for authentication retrieval processes. ## Test Environment - **Test Environment**: Test - **Database**: Connected to the live replication of the production database schema. - **API Version**: v1.0 - **Tool**: Playwright (Automation for API Endpoints) | md | 1192 | 2024-12-26 12:11:04 UTC | { "frontMatter": "---\nid: PLN-004\nname: \"Integrity Test Plan\"\ndescription: \"To validate the seamless integration of APIs, ensuring proper database connectivity and the functionality of authentication retrieval mechanisms. defined in the Implementation Guide (IG).\"\ncreated_by: \"qa-lead@example.com\"\ncreated_at: \"2024-11-01\"\ntags: [\"compliance testing\"]\nrelated_requirements: [\"REQ-101\", \"REQ-102\"]\n---\n", "body": "\n## Objective\n\nEnsure to validate the seamless integration of APIs, ensuring proper database connectivity and the functionality of authentication retrieval mechanisms.\n\n## Scope\n\nThis test plan focuses on verifying the following integration aspects:\n\n- **API Connectivity**: Ensuring smooth communication between APIs.\n- **Database Integration**: Validating the accuracy and functionality of database operations (e.g., data storage, retrieval, and updates).\n- **Authentication Retrievals**: Testing the reliability of API tracking mechanisms, especially for authentication retrieval processes.\n\n## Test Environment\n\n- **Test Environment**: Test\n- **Database**: Connected to the live replication of the production database schema.\n- **API Version**: v1.0\n- **Tool**: Playwright (Automation for API Endpoints)\n", "attrs": { "id": "PLN-004", "name": "Integrity Test Plan", "description": "To validate the seamless integration of APIs, ensuring proper database connectivity and the functionality of authentication retrieval mechanisms. defined in the Implementation Guide (IG).", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "compliance testing" ], "related_requirements": [ "REQ-101", "REQ-102" ] } } | { "id": "PLN-004", "name": "Integrity Test Plan", "description": "To validate the seamless integration of APIs, ensuring proper database connectivity and the functionality of authentication retrieval mechanisms. 
defined in the Implementation Guide (IG).", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "compliance testing" ], "related_requirements": [ "REQ-101", "REQ-102" ] } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
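PLN-004's "Authentication Retrievals" scope can be smoke-tested with the same Playwright request fixture. The sketch below assumes a hypothetical login endpoint, credential fields, and protected path; none of these names come from the plan itself.

```ts
// Sketch of an "Authentication Retrievals" check for PLN-004: obtain a token, then call a
// protected endpoint with and without it. Endpoint paths and field names are hypothetical.
import { test, expect } from '@playwright/test';

const API_BASE = process.env.API_BASE ?? 'http://localhost:3000';

test('protected endpoint rejects anonymous calls and accepts a retrieved token', async ({ request }) => {
  // Without a token the API should refuse the request.
  const anonymous = await request.get(`${API_BASE}/assets`);
  expect(anonymous.status()).toBe(401);

  // Retrieve a token from the (hypothetical) login endpoint.
  const login = await request.post(`${API_BASE}/auth/login`, {
    data: {
      username: process.env.API_USER ?? 'qa-user',
      password: process.env.API_PASSWORD ?? 'change-me',
    },
  });
  expect(login.ok()).toBeTruthy();
  const { token } = await login.json();

  // The same request succeeds once the token is attached.
  const authed = await request.get(`${API_BASE}/assets`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  expect(authed.status()).toBe(200);
});
```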
01K5C21MQQQF4KHH5RV5EWJF2J | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-integrity-testing/qf-suite.md | 9c75112a9c3ca36725bd1fc4f66ae2c16d8d7745 | --- id: SUT-004 projectId: PRJ-001 test_execution_id: ["EXE-004"] name: "Integrity Test Suite" description: "To validate the seamless integration of APIs, ensuring proper database connectivity and the functionality of authentication retrieval mechanisms. " created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" tags: ["integrity testing"] --- ## Objective To validate the seamless integration of APIs, ensuring proper database connectivity and the functionality of authentication retrieval mechanisms. ## Scope This test plan focuses on verifying the following integration aspects: - **API Connectivity**: Ensuring smooth communication between APIs. - **Database Integration**: Validating the accuracy and functionality of database operations (e.g., data storage, retrieval, and updates). - **Authentication Retrievals**: Testing the reliability of API tracking mechanisms, especially for authentication retrieval processes. ## Test Environment - **Test Environment**: Test - **Database**: Connected to the live replication of the production database schema. - **API Version**: v1.0 - **Tool**: Playwright (Automation for API Endpoints) | md | 1157 | 2024-12-24 14:16:08 UTC | { "frontMatter": "---\nid: SUT-004\nprojectId: PRJ-001\ntest_execution_id: [\"EXE-004\"]\nname: \"Integrity Test Suite\"\ndescription: \"To validate the seamless integration of APIs, ensuring proper database connectivity and the functionality of authentication retrieval mechanisms. \"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntags: [\"integrity testing\"]\n---\n", "body": "\n## Objective\n\nTo validate the seamless integration of APIs, ensuring proper database connectivity and the functionality of authentication retrieval mechanisms.\n\n## Scope\n\nThis test plan focuses on verifying the following integration aspects:\n\n- **API Connectivity**: Ensuring smooth communication between APIs.\n- **Database Integration**: Validating the accuracy and functionality of database operations (e.g., data storage, retrieval, and updates).\n- **Authentication Retrievals**: Testing the reliability of API tracking mechanisms, especially for authentication retrieval processes.\n\n## Test Environment\n\n- **Test Environment**: Test\n- **Database**: Connected to the live replication of the production database schema.\n- **API Version**: v1.0\n- **Tool**: Playwright (Automation for API Endpoints)\n", "attrs": { "id": "SUT-004", "projectId": "PRJ-001", "test_execution_id": [ "EXE-004" ], "name": "Integrity Test Suite", "description": "To validate the seamless integration of APIs, ensuring proper database connectivity and the functionality of authentication retrieval mechanisms. ", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "tags": [ "integrity testing" ] } } | { "id": "SUT-004", "projectId": "PRJ-001", "test_execution_id": [ "EXE-004" ], "name": "Integrity Test Suite", "description": "To validate the seamless integration of APIs, ensuring proper database connectivity and the functionality of authentication retrieval mechanisms. ", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "tags": [ "integrity testing" ] } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQQ3TDVC4GEP6XG11WA | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-integrity-testing/integrity-testcase/TC-0025.run-1.result.json | 33e0fde3b6d4b6e8592db99c44bb07109232feff | { "test_case_fii": "TC-0025", "title": "Ensure that an appropriate error is returned when an incorrect API endpoint is provided.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a GET request to an incorrect endpoint (e.g., `/logins`).", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Observe the response status code and verify it matches the expected error code (e.g., `404 Not Found`) and Confirm that the error message does not reveal sensitive server or API information.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } | json | 959 | 2024-12-17 08:47:00 UTC | 2025-09-17 14:42:50 | UNKNOWN | |||||||||
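The two steps recorded for TC-0025 translate directly into a short Playwright check: request a wrong path, expect `404 Not Found`, and confirm the body does not leak server internals. `/logins` mirrors the example in the test case; `API_BASE` is a placeholder.

```ts
// Sketch of TC-0025: a request to an incorrect endpoint should return 404 with a terse error body.
import { test, expect } from '@playwright/test';

const API_BASE = process.env.API_BASE ?? 'http://localhost:3000';

test('incorrect endpoint returns 404 without leaking server details', async ({ request }) => {
  const res = await request.get(`${API_BASE}/logins`); // intentionally wrong endpoint
  expect(res.status()).toBe(404);

  const text = await res.text();
  // The error body should not expose stack traces or other server internals.
  expect(text).not.toMatch(/stack trace|exception/i);
});
```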
01K5C21MQQT8M2VEMT6P9G01M5 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-integrity-testing/integrity-testcase/TC-0024.case.md | ccba3fb1fbbe85b87cb1f678130e7d108a0fbaf8 | --- FII: TC-0024 groupId: GRP-006 title: "Verify if server connection is refused, it should throw an error." created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["integrity testing"] priority: "High" --- ### Description This test case ensures that when the server connection is refused, the application or client properly handles the scenario by throwing an appropriate error message. ### Pre-Conditions: - The server endpoint should be temporarily set to reject connections (e.g., through firewall rules or server configurations). - Test environment with necessary tools (e.g., Postman or playwright scripts) must be configured. ### Test Steps: 1. **Step 1**: Send a request (GET/POST/PUT depending on the API) to the target endpoint. 2. **Step 2**: Capture the JSON response from the API. 3. **Step 3**: Compare the response body against the schema defined in the IG using a schema validation tool. 4. **Step 4**: Verify that all required fields are present and adhere to the expected data types and formats. ### Expected Result: - The response body conforms to the schema specified in the IG, including: - Required fields are present. - Data types and formats match schema definitions. - No additional or unexpected fields are included unless allowed by the schema. | md | 1328 | 2024-12-26 12:11:26 UTC | { "frontMatter": "---\nFII: TC-0024\ngroupId: GRP-006\ntitle: \"Verify if server connection is refused, it should throw an error.\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"integrity testing\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case ensures that when the server connection is refused, the application or client properly handles the scenario by throwing an appropriate error message.\n\n### Pre-Conditions:\n\n- The server endpoint should be temporarily set to reject connections (e.g., through firewall rules or server configurations).\n- Test environment with necessary tools (e.g., Postman or playwright scripts) must be configured.\n\n### Test Steps:\n\n1. **Step 1**: Send a request (GET/POST/PUT depending on the API) to the target endpoint.\n2. **Step 2**: Capture the JSON response from the API.\n3. **Step 3**: Compare the response body against the schema defined in the IG using a schema validation tool.\n4. 
**Step 4**: Verify that all required fields are present and adhere to the expected data types and formats.\n\n### Expected Result:\n\n- The response body conforms to the schema specified in the IG, including:\n - Required fields are present.\n - Data types and formats match schema definitions.\n - No additional or unexpected fields are included unless allowed by the schema.\n", "attrs": { "FII": "TC-0024", "groupId": "GRP-006", "title": "Verify if server connection is refused, it should throw an error.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "integrity testing" ], "priority": "High" } } | { "FII": "TC-0024", "groupId": "GRP-006", "title": "Verify if server connection is refused, it should throw an error.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "integrity testing" ], "priority": "High" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
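For the behaviour TC-0024 is titled after (a refused server connection should surface as an error rather than a silent success), a minimal sketch is shown below. It assumes nothing is listening on the target port; Playwright's request fixture throws on network-level failures such as a refused connection.

```ts
// Sketch of TC-0024's titled behaviour: a refused connection should raise an error.
// Port 1 on localhost is assumed to be closed; adjust to whatever endpoint is blocked in the test setup.
import { test, expect } from '@playwright/test';

test('refused connection surfaces as an error', async ({ request }) => {
  let failure: unknown;
  try {
    await request.get('http://127.0.0.1:1/', { timeout: 5_000 });
  } catch (err) {
    failure = err;
  }
  // The call must fail loudly instead of hanging or returning a fabricated response.
  expect(failure).toBeInstanceOf(Error);
});
```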
01K5C21MQQ5JKRGCMGMDSQDDC0 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-integrity-testing/integrity-testcase/TC-0024.run-1.result.json | ca086d5104994ffa528e71ccac13f25af1fb65f7 | { "test_case_fii": "TC-0024", "title": "Verify if server connection is refused, it should throw an error.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a request to the API endpoint using a REST client like Postman or playwright script.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Observe and record the response from the client/application.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" }, { "step": 3, "stepname": "Verify if the error code and message align with the expected format (e.g., `503 Service Unavailable` or `Connection Refused`).", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } | json | 1151 | 2024-12-17 08:47:00 UTC | 2025-09-17 14:42:50 | UNKNOWN | |||||||||
01K5C21MQQ9A48ZSHG0AN2ECFD | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-integrity-testing/integrity-testcase/TC-0025.case.md | dc173dd1f903759f15db04dc9785f888361a9c1b | --- FII: TC-0025 groupId: GRP-006 title: "Ensure that an appropriate error is returned when an incorrect API endpoint is provided." created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["integrity testing"] priority: "High" --- ### Description This test verifies that the API responds with an error message and the appropriate HTTP status code when a request is made to an invalid or incorrect endpoint. ### Pre-Conditions: 1. API server is running and accessible. ### Test Steps: 1. **Step 1**: Send a GET request to an incorrect endpoint (e.g., `/logins`). 2. **Step 2**: Observe the response status code and verify it matches the expected error code (e.g., `404 Not Found`) and Confirm that the error message does not reveal sensitive server or API information. ### Expected Result: - The server responds with an HTTP status code of `404 Not Found`. - The response body includes an error message such as `{"error": "Endpoint not found"}`. - No sensitive server or API details are exposed in the error response. | md | 1066 | 2024-12-26 12:11:34 UTC | { "frontMatter": "---\nFII: TC-0025\ngroupId: GRP-006\ntitle: \"Ensure that an appropriate error is returned when an incorrect API endpoint is provided.\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"integrity testing\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test verifies that the API responds with an error message and the appropriate HTTP status code when a request is made to an invalid or incorrect endpoint.\n\n### Pre-Conditions:\n\n1. API server is running and accessible.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to an incorrect endpoint (e.g., `/logins`).\n2. **Step 2**: Observe the response status code and verify it matches the expected error code (e.g., `404 Not Found`) and Confirm that the error message does not reveal sensitive server or API information.\n\n### Expected Result:\n\n- The server responds with an HTTP status code of `404 Not Found`.\n- The response body includes an error message such as `{\"error\": \"Endpoint not found\"}`.\n- No sensitive server or API details are exposed in the error response.\n", "attrs": { "FII": "TC-0025", "groupId": "GRP-006", "title": "Ensure that an appropriate error is returned when an incorrect API endpoint is provided.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "integrity testing" ], "priority": "High" } } | { "FII": "TC-0025", "groupId": "GRP-006", "title": "Ensure that an appropriate error is returned when an incorrect API endpoint is provided.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "integrity testing" ], "priority": "High" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQR4XBSPBBX3Y5QZ9ZS | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-integrity-testing/integrity-testcase/TC-0024.run.md | cf3f511c37868b5d681d5390588d2c70efe23f13 | --- FII: "TR-0024" test_case_fii: "TC-0024" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. | md | 165 | 2024-12-17 08:47:00 UTC | { "frontMatter": "---\nFII: \"TR-0024\"\ntest_case_fii: \"TC-0024\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0024", "test_case_fii": "TC-0024", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0024", "test_case_fii": "TC-0024", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQRN6Z3KV8ATKCM330W | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-integrity-testing/integrity-testcase/qf-case-group.md | 874b2074818e6cd05b7190ca7b04bd6323fadad6 | --- id: GRP-006 SuiteId: SUT-004 planId: ["PLN-004"] name: "Integrity Test Cases" description: "integration of APIs Validation and Testing" created_by: "arun-ramanan@netspective.in" created_at: "2024-11-01" tags: ["Compatability testing"] --- ## Description To validate the seamless integration of APIs with the server, ensuring proper database connectivity, authentication retrievals, and tracking mechanisms function as expected. ## Scope **Primary Focus:** - Integration of APIs with the Server. - API tracking system validation. - Validate seamless integration of APIs with the system. - Ensure proper functionality of database connectivity to support real-time operations. - Confirm accuracy and consistency of authentication retrievals via API endpoints. - Verify that API tracking mechanisms capture and log relevant transactions accurately. | md | 854 | 2024-12-26 12:11:22 UTC | { "frontMatter": "---\nid: GRP-006\nSuiteId: SUT-004\nplanId: [\"PLN-004\"]\nname: \"Integrity Test Cases\"\ndescription: \"integration of APIs Validation and Testing\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-11-01\"\ntags: [\"Compatability testing\"]\n---\n", "body": "\n## Description\n\nTo validate the seamless integration of APIs with the server, ensuring proper database connectivity, authentication retrievals, and tracking mechanisms function as expected.\n\n## Scope\n\n**Primary Focus:**\n\n- Integration of APIs with the Server.\n- API tracking system validation.\n- Validate seamless integration of APIs with the system.\n- Ensure proper functionality of database connectivity to support real-time operations.\n- Confirm accuracy and consistency of authentication retrievals via API endpoints.\n- Verify that API tracking mechanisms capture and log relevant transactions accurately.\n", "attrs": { "id": "GRP-006", "SuiteId": "SUT-004", "planId": [ "PLN-004" ], "name": "Integrity Test Cases", "description": "integration of APIs Validation and Testing", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-11-01", "tags": [ "Compatability testing" ] } } | { "id": "GRP-006", "SuiteId": "SUT-004", "planId": [ "PLN-004" ], "name": "Integrity Test Cases", "description": "integration of APIs Validation and Testing", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-11-01", "tags": [ "Compatability testing" ] } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQRMEMPTVSWK8AFC0FP | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-integrity-testing/integrity-testcase/TC-0025.run.md | 1f6a95a5bafb2c8a518f46deaab4054f77516649 | --- FII: "TR-0025" test_case_fii: "TC-0025" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. | md | 165 | 2024-12-17 08:47:00 UTC | { "frontMatter": "---\nFII: \"TR-0025\"\ntest_case_fii: \"TC-0025\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0025", "test_case_fii": "TC-0025", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0025", "test_case_fii": "TC-0025", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQR2V0T6BX8KHJ76CF2 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compatibility-testing/compatibility-testcase/TC-0021.run.md | 5d934b82e932b2e4c5f95a8991bb537138c3ecaa | --- FII: "TR-0021" test_case_fii: "TC-0021" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. | md | 168 | 2024-12-26 11:55:04 UTC | { "frontMatter": "---\nFII: \"TR-0021\"\ntest_case_fii: \"TC-0021\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "\n### Run Summary\n\n- Status: Passed\n- Notes: All steps executed successfully.\n", "attrs": { "FII": "TR-0021", "test_case_fii": "TC-0021", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0021", "test_case_fii": "TC-0021", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQRPWW8BFFM0YSQ1JJZ | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compatibility-testing/compatibility-testcase/TC-0022.run.md | 12c8fa7654f922e3219f25cd703b14cda63986eb | --- FII: "TR-0022" test_case_fii: "TC-0022" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. | md | 168 | 2024-12-26 11:55:36 UTC | { "frontMatter": "---\nFII: \"TR-0022\"\ntest_case_fii: \"TC-0022\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "\n### Run Summary\n\n- Status: Passed\n- Notes: All steps executed successfully.\n", "attrs": { "FII": "TR-0022", "test_case_fii": "TC-0022", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0022", "test_case_fii": "TC-0022", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQSX92XGADX1PYDW15W | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compatibility-testing/compatibility-testcase/TC-0022.case.md | 1d17958f36c8df3c1d629f7338e90024a2828898 | --- FII: TC-0022 groupId: GRP-001 title: "Verify that the API server is connected and running successfully on a Linux machine." created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["compatability testing"] priority: "High" --- This test ensures that the API server can start, establish connections, and respond to API requests when hosted on a Linux machine. It validates basic server functionality and compatibility with the Linux environment. ### Pre-Conditions: - The Linux machine must have the required dependencies installed. - The API server code and configuration files must be deployed correctly. - The server's network settings must allow incoming and outgoing connections. ### Test Steps: 1. **Step 1**: Start the API server by connecting the tailscale connection. 2. **Step 2**: Open the automation script and send a GET request to the API endpoint. 3. **Step 3**: Review the API response for a valid status code (`200 OK`) and response body. ### Expected Result: - The API server starts without errors and logs indicate successful initialization. - The API response includes: - Status code: `200 OK` - Response body: Contains a valid JSON object or expected response content. | md | 1245 | 2024-12-26 11:55:24 UTC | { "frontMatter": "---\nFII: TC-0022\ngroupId: GRP-001\ntitle: \"Verify that the API server is connected and running successfully on a Linux machine.\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"compatability testing\"]\npriority: \"High\"\n---\n", "body": "\nThis test ensures that the API server can start, establish connections, and respond to API requests when hosted on a Linux machine. It validates basic server functionality and compatibility with the Linux environment.\n\n### Pre-Conditions:\n\n- The Linux machine must have the required dependencies installed.\n- The API server code and configuration files must be deployed correctly.\n- The server's network settings must allow incoming and outgoing connections.\n\n### Test Steps:\n\n1. **Step 1**: Start the API server by connecting the tailscale connection.\n2. **Step 2**: Open the automation script and send a GET request to the API endpoint.\n3. **Step 3**: Review the API response for a valid status code (`200 OK`) and response body.\n\n### Expected Result:\n\n- The API server starts without errors and logs indicate successful initialization.\n- The API response includes:\n - Status code: `200 OK`\n - Response body: Contains a valid JSON object or expected response content.\n", "attrs": { "FII": "TC-0022", "groupId": "GRP-001", "title": "Verify that the API server is connected and running successfully on a Linux machine.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "compatability testing" ], "priority": "High" } } | { "FII": "TC-0022", "groupId": "GRP-001", "title": "Verify that the API server is connected and running successfully on a Linux machine.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "compatability testing" ], "priority": "High" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQSX8VJ1G1TDEK65PP2 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compatibility-testing/compatibility-testcase/TC-0022.run-1.result.json | 3c8e708ade2f2e4044de5148235a96fd7eb84124 | { "test_case_fii": "TC-0022", "title": "Verify that the API server is connected and running successfully on a Linux machine.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Start the API server by connecting the tailscale connection.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Open the automation script and send a GET request to the API endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" }, { "step": 3, "stepname": "Review the API response for a valid status code (`200 OK`) and response body.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } | json | 1102 | 2024-12-17 08:47:00 UTC | 2025-09-17 14:42:50 | UNKNOWN | |||||||||
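TC-0022's steps 2 and 3 (send a GET to the running server and check for `200 OK` with a valid body) reduce to a single request-fixture test. The sketch assumes the Tailscale link is already up and that `API_BASE` points at the Linux host; both are placeholders.

```ts
// Sketch of TC-0022 steps 2-3: one GET against the running server should return 200 with a JSON body.
import { test, expect } from '@playwright/test';

test('API server on the Linux host responds', async ({ request }) => {
  const res = await request.get(process.env.API_BASE ?? 'http://localhost:3000/');
  expect(res.status()).toBe(200);
  expect(res.headers()['content-type']).toContain('application/json'); // body should be JSON
});
```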
01K5C21MQS8JPVP9FHNJZJGM85 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compatibility-testing/compatibility-testcase/TC-0021.case.md | fccfb1c9c69e968b18be7d03355c990abae3f361 | --- FII: TC-0021 groupId: GRP-001 title: Ensure that the API endpoint is accessible under the Linux platform using Chrome, and Firefox browsers. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["compatability testing"] priority: "High" --- ### Description This test verifies the accessibility of the all API endpoint from the Linux platform when accessed via different browsers (Chrome, Firefox, and Edge). The test ensures the endpoint responds correctly and consistently across all specified browsers. ### Pre-Conditions: - The Linux system is set up with Chrome, Firefox installed. - The test environment is configured, and the API endpoint is available. - Network connectivity is established. ### Test Steps: 1. **Step 1**: Launch the Chrome browser on the Linux system. 2. **Step 2**: Enter the URL for the all API endpoint and press Enter. 3. **Step 3**: Observe the response and validate it matches the expected output. 4. **Step 4**: Repeat steps 1-3 using Firefox browsers. ### Expected Result: - All API endpoint should return a valid JSON response in all browsers. | md | 1133 | 2024-12-26 12:07:18 UTC | { "frontMatter": "---\nFII: TC-0021\ngroupId: GRP-001\ntitle: Ensure that the API endpoint is accessible under the Linux platform using Chrome, and Firefox browsers.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"compatability testing\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test verifies the accessibility of the all API endpoint from the Linux platform when accessed via different browsers (Chrome, Firefox, and Edge). The test ensures the endpoint responds correctly and consistently across all specified browsers.\n\n### Pre-Conditions:\n\n- The Linux system is set up with Chrome, Firefox installed.\n- The test environment is configured, and the API endpoint is available.\n- Network connectivity is established.\n\n### Test Steps:\n\n1. **Step 1**: Launch the Chrome browser on the Linux system.\n2. **Step 2**: Enter the URL for the all API endpoint and press Enter.\n3. **Step 3**: Observe the response and validate it matches the expected output.\n4. **Step 4**: Repeat steps 1-3 using Firefox browsers.\n\n### Expected Result:\n\n- All API endpoint should return a valid JSON response in all browsers.\n", "attrs": { "FII": "TC-0021", "groupId": "GRP-001", "title": "Ensure that the API endpoint is accessible under the Linux platform using Chrome, and Firefox browsers.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "compatability testing" ], "priority": "High" } } | { "FII": "TC-0021", "groupId": "GRP-001", "title": "Ensure that the API endpoint is accessible under the Linux platform using Chrome, and Firefox browsers.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "compatability testing" ], "priority": "High" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
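TC-0021's "repeat steps 1-3 using Firefox" is what Playwright projects are for: the same spec runs once per configured browser on the Linux host. A configuration sketch follows; the `baseURL` is a placeholder for the endpoint under test.

```ts
// playwright.config.ts sketch for TC-0021-style runs: execute the same checks under
// Chromium and Firefox. baseURL is a placeholder, not the project's real endpoint.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  use: {
    baseURL: process.env.API_BASE ?? 'http://localhost:3000',
  },
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
  ],
});
```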
01K5C21MQS0Q1VV9PKXFWCNJYX | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compatibility-testing/compatibility-testcase/qf-case-group.md | 850e32a688de8f58ed364ba67c1640cef7b1f7a2 | --- id: GRP-001 SuiteId: SUT-001 planId: ["PLN-001"] name: "Compatability Test Cases" description: "Comprehensive Cross-Browser Testing for APIs " created_by: "arun-ramanan@netspective.in" created_at: "2024-11-01" tags: ["Compatability testing"] --- ### Description This testing initiative validates the compatibility and performance of APIs and web interfaces across popular browsers (Chromium, Microsoft Edge, Mozilla Firefox) and Linux-based operating systems. The focus is on delivering a consistent and seamless user experience by ensuring adherence to industry standards and resolving potential compatibility issues. ### Key Areas Covered - **Cross-Browser Functionality**: Validate rendering, responsiveness, and feature parity across Chromium, Edge, and Firefox. - **Operating System Compatibility**: Test API and UI functionality on Linux platforms. - **UI Rendering Consistency**: Ensure uniform design, layout, and responsiveness. - **API Behavior**: Verify API request handling and expected responses. - **Session Management**: Test cookies, local storage, and session behaviors. - **Media/File Handling**: Validate file upload/download functionality. - **Form Input and Error Handling**: Ensure proper input validation and uniform error management. ### Environment Details - **Test Environment**: Test - **Browsers**: Chromium, Firefox - **Operating System**: Linux - **API Version**: v1.0 - **Host URL/IP**: http://localhost ### Tools Used - Microsoft Playwright for automated browser testing. ### Objectives - Achieve consistent functionality across browsers. - Address compatibility issues for an improved user experience. - Maintain compliance with industry standards. | md | 1696 | 2024-12-26 11:59:44 UTC | { "frontMatter": "---\nid: GRP-001\nSuiteId: SUT-001\nplanId: [\"PLN-001\"]\nname: \"Compatability Test Cases\"\ndescription: \"Comprehensive Cross-Browser Testing for APIs \"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-11-01\"\ntags: [\"Compatability testing\"]\n---\n", "body": "\n### Description\n\nThis testing initiative validates the compatibility and performance of APIs and web interfaces across popular browsers (Chromium, Microsoft Edge, Mozilla Firefox) and Linux-based operating systems. 
The focus is on delivering a consistent and seamless user experience by ensuring adherence to industry standards and resolving potential compatibility issues.\n\n### Key Areas Covered\n\n- **Cross-Browser Functionality**: Validate rendering, responsiveness, and feature parity across Chromium, Edge, and Firefox.\n- **Operating System Compatibility**: Test API and UI functionality on Linux platforms.\n- **UI Rendering Consistency**: Ensure uniform design, layout, and responsiveness.\n- **API Behavior**: Verify API request handling and expected responses.\n- **Session Management**: Test cookies, local storage, and session behaviors.\n- **Media/File Handling**: Validate file upload/download functionality.\n- **Form Input and Error Handling**: Ensure proper input validation and uniform error management.\n\n### Environment Details\n\n- **Test Environment**: Test\n- **Browsers**: Chromium, Firefox\n- **Operating System**: Linux\n- **API Version**: v1.0\n- **Host URL/IP**: http://localhost\n\n### Tools Used\n\n- Microsoft Playwright for automated browser testing.\n\n### Objectives\n\n- Achieve consistent functionality across browsers.\n- Address compatibility issues for an improved user experience.\n- Maintain compliance with industry standards.\n", "attrs": { "id": "GRP-001", "SuiteId": "SUT-001", "planId": [ "PLN-001" ], "name": "Compatability Test Cases", "description": "Comprehensive Cross-Browser Testing for APIs ", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-11-01", "tags": [ "Compatability testing" ] } } | { "id": "GRP-001", "SuiteId": "SUT-001", "planId": [ "PLN-001" ], "name": "Compatability Test Cases", "description": "Comprehensive Cross-Browser Testing for APIs ", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-11-01", "tags": [ "Compatability testing" ] } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQS8W04VK23GXAYRN1G | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compatibility-testing/compatibility-testcase/TC-0021.run-1.result.json | 7cf5c8840ab7fb78f5e5b3e9c86f6e40f949fcef | { "test_case_fii": "TC-0021", "title": "Ensure that the API endpoint is accessible under the Linux platform using Chrome, and Firefox browsers.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Launch the Chrome browser on the Linux system.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Enter the URL for the all API endpoint and press Enter.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" }, { "step": 3, "stepname": "Observe the response and validate it matches the expected output.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" }, { "step": 4, "stepname": "Repeat steps 1-3 using Firefox browsers.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } | json | 1312 | 2024-12-17 08:47:00 UTC | 2025-09-17 14:42:50 | UNKNOWN | |||||||||
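TC-0021 above repeats the same endpoint check in Chrome and Firefox on Linux. Since the case group names Microsoft Playwright as the automation tool, one plausible sketch is to loop the same assertion over Playwright's browser launchers; the endpoint URL below is illustrative only (the stored case refers to it simply as "the all API endpoint"):

```ts
import { test, expect } from '@playwright/test';
import { chromium, firefox } from 'playwright';

// Illustrative endpoint URL; replace with the real "all" API endpoint under test.
const ENDPOINT_URL = process.env.API_ENDPOINT_URL ?? 'http://localhost/dashboard';

// TC-0021-style check: load the endpoint in each browser and expect a JSON response.
for (const browserType of [chromium, firefox]) {
  test(`endpoint is reachable in ${browserType.name()}`, async () => {
    const browser = await browserType.launch();
    const page = await browser.newPage();
    const response = await page.goto(ENDPOINT_URL);

    expect(response?.status()).toBe(200);
    expect(response?.headers()['content-type']).toContain('application/json');

    await browser.close();
  });
}
```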
01K5C21MQTY2EN7CYXPHVQGSWF | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compatibility-testing/test-plan/qf-plan.md | 19f709eec0643ca72da1b51ee4eb1e92da783933 | --- id: PLN-001 name: "Compatibility Test Plan" description: "To validate that the API system is compatible across a range of widely used browsers and operating system (OS) platforms, ensuring consistent functionality and user experience." created_by: "qa-lead@example.com" created_at: "2024-11-01" tags: ["Compatibility testing"] version: "1.0" related_requirements: ["REQ-101", "REQ-102"] status: "Draft" --- This plan ensures comprehensive validation of compatibility across supported browsers and operating systems while addressing any identified issues systematically. ### 1. Browser Compatibility Testing - Test API functionality on the following widely used browsers: - **Chromium** - **Edge** - **Firefox** ### 2. OS Platform Compatibility Testing - Verify API compatibility on the following operating systems: - **Linux** - Validate on different distributions (e.g., Ubuntu, CentOS) to ensure broad compatibility. ### 3. Functional Validation Across Platforms - Execute the core functionality of API across all supported browsers and OS combinations, ensuring consistent performance: - **API Connectivity**: Validate the ability to establish a secure connection. - **UI Rendering**: Ensure UI elements render correctly across all browsers. - **Response Validation**: Check for accurate API responses and error handling. ## Test Case Execution - Use **Xray test cases** to document compatibility outcomes: - Record browser and OS configurations for traceability. - Capture and compare expected vs. actual results. ## Defect Logging for Incompatibilities - Log defects identified during compatibility testing in **Jira**, following the Defect Log Format (refer to Table 4): - Link defects with relevant Xray test cases for traceability. - Include screenshots of compatibility issues (e.g., UI rendering failures or functional discrepancies). ## Issue Fix and Retesting - **Resolve compatibility defects** to ensure seamless operation across all browsers and OS platforms. - Retest to confirm the resolution of defects. - Conduct **regression testing** to verify no new issues were introduced. ## Test Report Generation - Generate a consolidated report summarizing compatibility results, including: - Success rates across browsers and OS platforms. - Details of defects and their resolutions. - Logs and screenshots for documentation. ## Deliverables 1. **Defect Management**: - Detailed records of identified issues, their resolutions, and supporting evidence. 2. **Test Report**: - Summarized results with success rates, defects, and supporting logs/screenshots. | md | 2628 | 2024-12-26 11:53:32 UTC | { "frontMatter": "---\nid: PLN-001\nname: \"Compatibility Test Plan\"\ndescription: \"To validate that the API system is compatible across a range of widely used browsers and operating system (OS) platforms, ensuring consistent functionality and user experience.\"\ncreated_by: \"qa-lead@example.com\"\ncreated_at: \"2024-11-01\"\ntags: [\"Compatibility testing\"]\nversion: \"1.0\"\nrelated_requirements: [\"REQ-101\", \"REQ-102\"]\nstatus: \"Draft\"\n---\n", "body": "\nThis plan ensures comprehensive validation of compatibility across supported browsers and operating systems while addressing any identified issues systematically.\n\n### 1. 
Browser Compatibility Testing\n\n- Test API functionality on the following widely used browsers:\n - **Chromium**\n - **Edge**\n - **Firefox**\n\n### 2. OS Platform Compatibility Testing\n\n- Verify API compatibility on the following operating systems:\n - **Linux**\n - Validate on different distributions (e.g., Ubuntu, CentOS) to ensure broad compatibility.\n\n### 3. Functional Validation Across Platforms\n\n- Execute the core functionality of API across all supported browsers and OS combinations, ensuring consistent performance:\n - **API Connectivity**: Validate the ability to establish a secure connection.\n - **UI Rendering**: Ensure UI elements render correctly across all browsers.\n - **Response Validation**: Check for accurate API responses and error handling.\n\n## Test Case Execution\n\n- Use **Xray test cases** to document compatibility outcomes:\n - Record browser and OS configurations for traceability.\n - Capture and compare expected vs. actual results.\n\n## Defect Logging for Incompatibilities\n\n- Log defects identified during compatibility testing in **Jira**, following the Defect Log Format (refer to Table 4):\n - Link defects with relevant Xray test cases for traceability.\n - Include screenshots of compatibility issues (e.g., UI rendering failures or functional discrepancies).\n\n## Issue Fix and Retesting\n\n- **Resolve compatibility defects** to ensure seamless operation across all browsers and OS platforms.\n- Retest to confirm the resolution of defects.\n- Conduct **regression testing** to verify no new issues were introduced.\n\n## Test Report Generation\n\n- Generate a consolidated report summarizing compatibility results, including:\n - Success rates across browsers and OS platforms.\n - Details of defects and their resolutions.\n - Logs and screenshots for documentation.\n\n## Deliverables\n\n1. **Defect Management**:\n\n - Detailed records of identified issues, their resolutions, and supporting evidence.\n\n2. **Test Report**:\n - Summarized results with success rates, defects, and supporting logs/screenshots.\n", "attrs": { "id": "PLN-001", "name": "Compatibility Test Plan", "description": "To validate that the API system is compatible across a range of widely used browsers and operating system (OS) platforms, ensuring consistent functionality and user experience.", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "Compatibility testing" ], "version": "1.0", "related_requirements": [ "REQ-101", "REQ-102" ], "status": "Draft" } } | { "id": "PLN-001", "name": "Compatibility Test Plan", "description": "To validate that the API system is compatible across a range of widely used browsers and operating system (OS) platforms, ensuring consistent functionality and user experience.", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "Compatibility testing" ], "version": "1.0", "related_requirements": [ "REQ-101", "REQ-102" ], "status": "Draft" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
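The PLN-001 plan above calls for running the same checks across Chromium, Edge, and Firefox on Linux. With Playwright this browser/OS matrix is usually expressed as per-browser projects rather than duplicated test code; the following config is a sketch under that assumption (project names and base URL are illustrative, and Edge is omitted because the recorded test environment only lists Chromium and Firefox):

```ts
import { defineConfig, devices } from '@playwright/test';

// Sketch of a per-browser project matrix for the compatibility plan.
export default defineConfig({
  use: {
    // Assumed host; the suite's environment details list http://localhost.
    baseURL: process.env.API_BASE_URL ?? 'http://localhost',
  },
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
  ],
});
```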
01K5C21MQTR1W5CMG6QB6RME4S | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-compatibility-testing/qf-suite.md | 87faac57ec72dbd688e75478f98819e5828a3d63 | --- id: SUT-001 projectId: PRJ-001 name: "Compatibility Test Suite" description: "To validate that the API system is compatible across a range of widely used browsers and operating system (OS) platforms, ensuring consistent functionality and user experience." created_by: "qa-lead@example.com" created_at: "2024-11-01" tags: ["Compatibility testing"] --- ## Test Execution Steps ### 1. Browser Compatibility Testing - Test API functionality on the following widely used browsers: - **Chromium** - **Edge** - **Firefox** ### 2. OS Platform Compatibility Testing - Verify API compatibility on the following operating systems: - **Linux** - Validate on different distributions (e.g., Ubuntu, CentOS) to ensure broad compatibility. ### 3. Functional Validation Across Platforms - Execute the core functionality of API across all supported browsers and OS combinations, ensuring consistent performance: - **API Connectivity**: Validate the ability to establish a secure connection. - **UI Rendering**: Ensure UI elements render correctly across all browsers. - **Response Validation**: Check for accurate API responses and error handling. ## Test Case Execution - Use **Xray test cases** to document compatibility outcomes: - Record browser and OS configurations for traceability. - Capture and compare expected vs. actual results. ## Defect Logging for Incompatibilities - Log defects identified during compatibility testing in **Jira**, following the Defect Log Format (refer to Table 4): - Link defects with relevant Xray test cases for traceability. - Include screenshots of compatibility issues (e.g., UI rendering failures or functional discrepancies). ## Issue Fix and Retesting - **Resolve compatibility defects** to ensure seamless operation across all browsers and OS platforms. - Retest to confirm the resolution of defects. - Conduct **regression testing** to verify no new issues were introduced. ## Test Report Generation - Generate a consolidated report summarizing compatibility results, including: - Success rates across browsers and OS platforms. - Details of defects and their resolutions. - Logs and screenshots for documentation. ## Deliverables 1. **Defect Management**: - Detailed records of identified issues, their resolutions, and supporting evidence. 2. **Test Report**: - Summarized results with success rates, defects, and supporting logs/screenshots. | md | 2433 | 2024-12-26 11:53:16 UTC | { "frontMatter": "---\nid: SUT-001\nprojectId: PRJ-001\nname: \"Compatibility Test Suite\"\ndescription: \"To validate that the API system is compatible across a range of widely used browsers and operating system (OS) platforms, ensuring consistent functionality and user experience.\"\ncreated_by: \"qa-lead@example.com\"\ncreated_at: \"2024-11-01\"\ntags: [\"Compatibility testing\"]\n---\n", "body": "\n## Test Execution Steps\n\n### 1. Browser Compatibility Testing\n\n- Test API functionality on the following widely used browsers:\n - **Chromium**\n - **Edge**\n - **Firefox**\n\n### 2. OS Platform Compatibility Testing\n\n- Verify API compatibility on the following operating systems:\n - **Linux**\n - Validate on different distributions (e.g., Ubuntu, CentOS) to ensure broad compatibility.\n\n### 3. 
Functional Validation Across Platforms\n\n- Execute the core functionality of API across all supported browsers and OS combinations, ensuring consistent performance:\n - **API Connectivity**: Validate the ability to establish a secure connection.\n - **UI Rendering**: Ensure UI elements render correctly across all browsers.\n - **Response Validation**: Check for accurate API responses and error handling.\n\n## Test Case Execution\n\n- Use **Xray test cases** to document compatibility outcomes:\n - Record browser and OS configurations for traceability.\n - Capture and compare expected vs. actual results.\n\n## Defect Logging for Incompatibilities\n\n- Log defects identified during compatibility testing in **Jira**, following the Defect Log Format (refer to Table 4):\n - Link defects with relevant Xray test cases for traceability.\n - Include screenshots of compatibility issues (e.g., UI rendering failures or functional discrepancies).\n\n## Issue Fix and Retesting\n\n- **Resolve compatibility defects** to ensure seamless operation across all browsers and OS platforms.\n- Retest to confirm the resolution of defects.\n- Conduct **regression testing** to verify no new issues were introduced.\n\n## Test Report Generation\n\n- Generate a consolidated report summarizing compatibility results, including:\n - Success rates across browsers and OS platforms.\n - Details of defects and their resolutions.\n - Logs and screenshots for documentation.\n\n## Deliverables\n\n1. **Defect Management**:\n\n - Detailed records of identified issues, their resolutions, and supporting evidence.\n\n2. **Test Report**:\n - Summarized results with success rates, defects, and supporting logs/screenshots.\n", "attrs": { "id": "SUT-001", "projectId": "PRJ-001", "name": "Compatibility Test Suite", "description": "To validate that the API system is compatible across a range of widely used browsers and operating system (OS) platforms, ensuring consistent functionality and user experience.", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "Compatibility testing" ] } } | { "id": "SUT-001", "projectId": "PRJ-001", "name": "Compatibility Test Suite", "description": "To validate that the API system is compatible across a range of widely used browsers and operating system (OS) platforms, ensuring consistent functionality and user experience.", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "Compatibility testing" ] } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQTZV93YT7PY5XKN9KB | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0010.run-1.result.json | 77b50d5d81db788d86e3d46a36eedcba06ae19fb | { "test_case_fii": "TC-0010", "title": "Verify that the Dashboard Count in the response is greater than 0.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server and confirm it contains the `extension` field, which should include a `Dashboard_Count_Url` entry whose `valueInteger` is greater than 0.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } | json | 918 | 2024-12-17 08:47:00 UTC | 2025-09-17 14:42:50 | UNKNOWN | |||||||||
01K5C21MQT8X65KQN811EZH1D8 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0010.run.md | 1c567aecb77a4954d3cedbd180dd54ecf23bab05 | --- FII: "TR-0010" test_case_fii: "TC-0010" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. | md | 165 | 2024-12-17 08:47:00 UTC | { "frontMatter": "---\nFII: \"TR-0010\"\ntest_case_fii: \"TC-0010\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0010", "test_case_fii": "TC-0010", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0010", "test_case_fii": "TC-0010", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQVEW6YZMWEMVY3BWMG | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0006.case.md | ff7c2be4cc0952f1b03ded70e2697823d33d7aba | --- FII: TC-0006 groupId: GRP-003 title: Verify that the Dashboard Count field is present inside the Response Data when sending a GET request to the /dashboard endpoint. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["cycle-count"] priority: "High" --- ### Description This test case ensures that the API endpoint /dashboard returns the Dashboard Count field as part of the response data. The field should be present inside the extension array, with a valid value of type integer. ### Pre-Conditions: - Ensure that the API is up and running and accessible. - The **/dashboard** endpoint should return a valid response with relevant data. ### Test Steps: 1. **Step 1**: Send a **GET** request to the `/dashboard` endpoint. 2. **Step 2**: Inspect the response to check if the **Dashboard Count** field is included inside the **extension** array in the returned JSON data. ### Expected Result: 1. If the **Dashboard Count** field is present, the response should have the following format: - Status Code: `200 OK` | md | 1072 | 2024-12-26 12:06:56 UTC | { "frontMatter": "---\nFII: TC-0006\ngroupId: GRP-003\ntitle: Verify that the Dashboard Count field is present inside the Response Data when sending a GET request to the /dashboard endpoint.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case ensures that the API endpoint /dashboard returns the Dashboard Count field as part of the response data. The field should be present inside the extension array, with a valid value of type integer.\n\n### Pre-Conditions:\n\n- Ensure that the API is up and running and accessible.\n- The **/dashboard** endpoint should return a valid response with relevant data.\n\n### Test Steps:\n\n1. **Step 1**: Send a **GET** request to the `/dashboard` endpoint.\n2. **Step 2**: Inspect the response to check if the **Dashboard Count** field is included inside the **extension** array in the returned JSON data.\n\n### Expected Result:\n\n1. If the **Dashboard Count** field is present, the response should have the following format:\n - Status Code: `200 OK`\n", "attrs": { "FII": "TC-0006", "groupId": "GRP-003", "title": "Verify that the Dashboard Count field is present inside the Response Data when sending a GET request to the /dashboard endpoint.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } } | { "FII": "TC-0006", "groupId": "GRP-003", "title": "Verify that the Dashboard Count field is present inside the Response Data when sending a GET request to the /dashboard endpoint.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
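TC-0006 above describes the procedure concretely: GET `/dashboard`, then confirm the Dashboard Count entry exists inside the FHIR-style `extension` array with an integer value. A minimal sketch of that assertion with Playwright's `request` fixture; the host and the exact extension URL fragment are assumptions based on the names used in the test cases:

```ts
import { test, expect } from '@playwright/test';

// Shape of the extension entries described in the test cases.
interface Extension {
  url: string;
  valueInteger?: number;
}

test('TC-0006-style check: Dashboard Count extension is present', async ({ request }) => {
  // Host from the suite's environment details; adjust as needed.
  const response = await request.get('http://localhost/dashboard');
  expect(response.status()).toBe(200);

  const body = (await response.json()) as { extension?: Extension[] };
  const dashboardCount = body.extension?.find((e) => e.url.includes('Dashboard_Count'));

  // The Dashboard Count entry must exist inside the extension array,
  // and its valueInteger must be a number.
  expect(dashboardCount).toBeDefined();
  expect(typeof dashboardCount?.valueInteger).toBe('number');
});
```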
01K5C21MQVC5K5PK18Q2EKGCMN | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0008.run-1.result.json | d29b6e0135d42e5cb8d0ea252e2a7860f5b4d540 | { "test_case_fii": "TC-0008", "title": "Verify that if the Dashboard Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server and check whether the valueInteger field under the Dashboard_Count_Url extension contains a value that is not an integer (e.g., special characters, decimals, or leading zeros).", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } | json | 1045 | 2024-12-17 08:47:00 UTC | 2025-09-17 14:42:50 | UNKNOWN | |||||||||
01K5C21MQVH6MQGJ3G9C0B8MZX | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0005.run-1.result.json | ca84e8ba90109e54770c3731277a0b1a5abd43f6 | { "test_case_fii": "TC-0005", "title": "Verify that the Daily Cycle Count field is present inside the Response Data.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a GET request to the `/dashboard` API endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Check the response to verify that the `Daily_Cycle_Count_Url` is present in the `extension` section of the returned JSON data.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" }, { "step": 3, "stepname": "Ensure that the corresponding value of `valueInteger` is 3 under the `Daily_Cycle_Count_Url`", "status": "passed", "start_time": "2024-12-15T08:45:13.042Z", "end_time": "2024-12-15T08:45:13.045Z" } ] } | json | 1157 | 2024-12-17 17:44:34 UTC | 2025-09-17 14:42:50 | UNKNOWN | |||||||||
01K5C21MQVCZW15549GVH87T4K | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0008.case.md | 2b97864bffc1ad41b6b1525118af9b46f0314a49 | --- FII: TC-0008 groupId: GRP-003 title: Verify that if the Dashboard Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["cycle-count"] priority: "High" --- ### Description This test case is designed to verify the behavior of the API when the "Dashboard Count" field in the response is provided with non-integer values, such as special characters, decimals, or leading zeros. The API should throw a 400 Bad Request error when the field contains invalid data. ### Pre-Conditions: - The API endpoint `/dashboard` should be functional. - The API should be accessible and return a valid JSON response. ### Test Steps: 1. **Step 1**: Send a GET request to `/dashboard` API endpoint. 2. **Step 2**: Review the JSON response from the server that the "valueInteger" field under the "Dashboard_Count_Url" extension contains a value that is not an integer (e.g., contains special characters, decimals, or leading zeros). ### Expected Result: - The response should contain a 400 Bad Request status code. - An error message should be provided indicating that the "Dashboard Count" value is invalid. | md | 1275 | 2024-12-26 12:06:54 UTC | { "frontMatter": "---\nFII: TC-0008\ngroupId: GRP-003\ntitle: Verify that if the Dashboard Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case is designed to verify the behavior of the API when the \"Dashboard Count\" field in the response is provided with non-integer values, such as special characters, decimals, or leading zeros. The API should throw a 400 Bad Request error when the field contains invalid data.\n\n### Pre-Conditions:\n\n- The API endpoint `/dashboard` should be functional.\n- The API should be accessible and return a valid JSON response.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to `/dashboard` API endpoint.\n2. 
**Step 2**: Review the JSON response from the server that the \"valueInteger\" field under the \"Dashboard_Count_Url\" extension contains a value that is not an integer (e.g., contains special characters, decimals, or leading zeros).\n\n### Expected Result:\n\n- The response should contain a 400 Bad Request status code.\n- An error message should be provided indicating that the \"Dashboard Count\" value is invalid.\n", "attrs": { "FII": "TC-0008", "groupId": "GRP-003", "title": "Verify that if the Dashboard Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } } | { "FII": "TC-0008", "groupId": "GRP-003", "title": "Verify that if the Dashboard Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
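TC-0008 (and the related TC-0007) expects the API to reject count values containing special characters, decimals, or leading zeros with a `400 Bad Request`. The sketch below illustrates the kind of strict-integer check that expectation implies; it is a hypothetical helper for illustration, not the service's actual validation code:

```ts
// Strict count check of the kind TC-0007/TC-0008 expect the server to apply:
// reject special characters, decimals, and leading zeros before accepting a count.
function isStrictCount(raw: string): boolean {
  // "0" alone is allowed; otherwise the value must not start with 0,
  // and only digits are permitted (no sign, decimal point, or symbols).
  return /^(0|[1-9][0-9]*)$/.test(raw);
}

// Tiny self-check of the helper's behaviour.
const samples: Array<[string, boolean]> = [
  ['3', true],
  ['42', true],
  ['0', true],
  ['03', false],  // leading zero
  ['3.5', false], // decimal
  ['3#', false],  // special character
];

for (const [value, expected] of samples) {
  console.assert(isStrictCount(value) === expected, `unexpected result for "${value}"`);
}
```

A server applying a check like this would respond with `400 Bad Request` whenever `isStrictCount` returns `false`, which is the behaviour these two test cases verify.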
01K5C21MQV1JA3QGHZRRMVKP55 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0012.run.md | 3cfaf60686842a5e515630219f55d3e6165d0601 | --- FII: "TR-0012" test_case_fii: "TC-0012" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. | md | 165 | 2024-12-17 08:47:00 UTC | { "frontMatter": "---\nFII: \"TR-0012\"\ntest_case_fii: \"TC-0012\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0012", "test_case_fii": "TC-0012", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0012", "test_case_fii": "TC-0012", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQVREFTFRK6NJXH3R45 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0007.case.md | 79b4ccfb263eb9aa2974b40b00b69f0fc4b5b019 | --- FII: TC-0007 groupId: GRP-003 title: Verify that if the System Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["cycle-count"] priority: "High" --- ### Description This test case is designed to verify the behavior of the API when the "System_count" field in the response is provided with non-integer values, such as special characters, decimals, or leading zeros. The API should throw a 400 Bad Request error when the field contains invalid data. ### Pre-Conditions: - The API endpoint `/dashboard` should be functional. - The API should be accessible and return a valid JSON response. ### Test Steps: 1. **Step 1**: Send a GET request to `/dashboard` API endpoint. 2. **Step 2**: Review the JSON response from the server that the "valueInteger" field under the "System_count_Url" extension contains a value that is not an integer (e.g., contains special characters, decimals, or leading zeros). ### Expected Result: - The response should contain a 400 Bad Request status code. - An error message should be provided indicating that the "System_count" value is invalid. | md | 1263 | 2024-12-26 12:06:56 UTC | { "frontMatter": "---\nFII: TC-0007\ngroupId: GRP-003\ntitle: Verify that if the System Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case is designed to verify the behavior of the API when the \"System_count\" field in the response is provided with non-integer values, such as special characters, decimals, or leading zeros. The API should throw a 400 Bad Request error when the field contains invalid data.\n\n### Pre-Conditions:\n\n- The API endpoint `/dashboard` should be functional.\n- The API should be accessible and return a valid JSON response.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to `/dashboard` API endpoint.\n2. 
**Step 2**: Review the JSON response from the server that the \"valueInteger\" field under the \"System_count_Url\" extension contains a value that is not an integer (e.g., contains special characters, decimals, or leading zeros).\n\n### Expected Result:\n\n- The response should contain a 400 Bad Request status code.\n- An error message should be provided indicating that the \"System_count\" value is invalid.\n", "attrs": { "FII": "TC-0007", "groupId": "GRP-003", "title": "Verify that if the System Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } } | { "FII": "TC-0007", "groupId": "GRP-003", "title": "Verify that if the System Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQWD5J75CMQ9JVM75BA | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0007.run.md | cfebf75ab8115a11e8893842c39f7cafec41a9cf | --- FII: "TR-0007" test_case_fii: "TC-0007" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Failed - Notes: Step 2 failed; see TC-0007.run-1.result.json for details. | md | 168 | 2025-01-01 09:48:22 UTC | { "frontMatter": "---\nFII: \"TR-0007\"\ntest_case_fii: \"TC-0007\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "\n### Run Summary\n\n- Status: Failed\n- Notes: Step 2 failed; see TC-0007.run-1.result.json for details.\n", "attrs": { "FII": "TR-0007", "test_case_fii": "TC-0007", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0007", "test_case_fii": "TC-0007", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQW7XTQ27Z5J202ZE6S | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0011.case.md | 3cf54a16b185b5557cb1833c39c1b8aef4fe4a1f | --- FII: TC-0011 groupId: GRP-003 title: Verify that the Dashboard Count is always greater than or equal to the System Count. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["cycle-count"] priority: "High" --- ### Description This test verifies that the Dashboard Count, which represents the total number of cycle counts done up to the current date, is always greater than or equal to the System Count, except for the first time. ### Pre-Conditions: - The `/dashboard` API endpoint is operational and accessible. - Authentication (if required) is provided, and the request can be successfully executed. - The API response contains the `extension` field with the expected data structure. ### Test Steps: 1. **Step 1**: Send a GET request to `/dashboard` API endpoint. 2. **Step 2**: Review the JSON response from the server response contains the `extension` field, which should include a `System_Count_Url` and `Dashboard_Count_Url` value. ### Expected Result: - The response body contains the proper structure: | md | 1069 | 2024-12-26 12:06:52 UTC | { "frontMatter": "---\nFII: TC-0011\ngroupId: GRP-003\ntitle: Verify that the Dashboard Count is always greater than or equal to the System Count.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test verifies that the Dashboard Count, which represents the total number of cycle counts done up to the current date, is always greater than or equal to the System Count, except for the first time.\n\n### Pre-Conditions:\n\n- The `/dashboard` API endpoint is operational and accessible.\n- Authentication (if required) is provided, and the request can be successfully executed.\n- The API response contains the `extension` field with the expected data structure.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to `/dashboard` API endpoint.\n2. **Step 2**: Review the JSON response from the server response contains the `extension` field, which should include a `System_Count_Url` and `Dashboard_Count_Url` value.\n\n### Expected Result:\n\n- The response body contains the proper structure:\n", "attrs": { "FII": "TC-0011", "groupId": "GRP-003", "title": "Verify that the Dashboard Count is always greater than or equal to the System Count.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } } | { "FII": "TC-0011", "groupId": "GRP-003", "title": "Verify that the Dashboard Count is always greater than or equal to the System Count.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQW8F2W2WKS823N7N93 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0011.run-1.result.json | 2205aa115c6ae5244f4ed67a494b805de048781d | { "test_case_fii": "TC-0011", "title": "Verify that the Dashboard Count is always greater than or equal to the System Count.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server and confirm it contains the `extension` field, which should include both a `Dashboard_Count_Url` and a `System_Count_Url` value.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } | json | 911 | 2024-12-17 08:47:00 UTC | 2025-09-17 14:42:50 | UNKNOWN | |||||||||
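TC-0011 asserts a relationship between two values in the same response: the Dashboard (lifetime) count must never fall below the System count. A minimal sketch of that comparison, again using Playwright's `request` fixture; the host and the extension URL fragments are assumptions based on the names in the stored cases:

```ts
import { test, expect } from '@playwright/test';

interface Extension {
  url: string;
  valueInteger?: number;
}

// Helper: pull a count out of the extension array by a fragment of its URL.
function countFor(extensions: Extension[], urlFragment: string): number | undefined {
  return extensions.find((e) => e.url.includes(urlFragment))?.valueInteger;
}

test('TC-0011-style check: Dashboard Count >= System Count', async ({ request }) => {
  const response = await request.get('http://localhost/dashboard'); // assumed host
  expect(response.ok()).toBeTruthy();

  const body = (await response.json()) as { extension?: Extension[] };
  const dashboard = countFor(body.extension ?? [], 'Dashboard_Count');
  const system = countFor(body.extension ?? [], 'System_Count');

  expect(dashboard).toBeDefined();
  expect(system).toBeDefined();
  // The dashboard (lifetime) count should never fall below the system count.
  expect(dashboard!).toBeGreaterThanOrEqual(system!);
});
```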
01K5C21MQW4TV6VZEW98C8434N | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0006.run.md | 0a2834db28c798de31501dc2182e917e71e28c77 | --- FII: "TR-0006" test_case_fii: "TC-0006" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. | md | 165 | 2024-12-17 08:47:00 UTC | { "frontMatter": "---\nFII: \"TR-0006\"\ntest_case_fii: \"TC-0006\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0006", "test_case_fii": "TC-0006", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0006", "test_case_fii": "TC-0006", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQW74EKGWZMH8M5ZHB5 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0006.run-1.result.json | b50493741c2880674dc7f2a6c84cdd635289da0b | { "test_case_fii": "TC-0006", "title": "Verify that the Dashboard Count field is present inside the Response Data when sending a GET request to the /dashboard endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Inspect the response to check if the Dashboard Count field is included inside the extension array in the returned JSON data.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } | json | 923 | 2024-12-17 08:47:00 UTC | 2025-09-17 14:42:50 | UNKNOWN | |||||||||
01K5C21MQX6DJZ3K60X7SS4YB1 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0008.run.md | a7eebddb1e66b2a032e10eafffffec673b41cd7b | --- FII: "TR-0008" test_case_fii: "TC-0008" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. | md | 165 | 2024-12-17 08:47:00 UTC | { "frontMatter": "---\nFII: \"TR-0008\"\ntest_case_fii: \"TC-0008\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0008", "test_case_fii": "TC-0008", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0008", "test_case_fii": "TC-0008", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQXSX8D7W9CZFE33J8E | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0005.run.md | b5e87c0c5019c999f2103597ed1b3f3aa9fd9b7e | --- FII: "TR-0005" test_case_fii: "TC-0005" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. | md | 165 | 2024-12-17 08:47:00 UTC | { "frontMatter": "---\nFII: \"TR-0005\"\ntest_case_fii: \"TC-0005\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0005", "test_case_fii": "TC-0005", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0005", "test_case_fii": "TC-0005", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQXBQ3NCG5KXT0QJFP1 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0009.run-1.result.json | 1653bdef4d0bb67913658e6d9e0e593bc0ca67bf | { "test_case_fii": "TC-0009", "title": "Verify that the System Count in the response is greater than 0.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server and confirm it contains the `extension` field, which should include a `System_Count_Url` entry whose `valueInteger` is greater than 0.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } | json | 912 | 2024-12-17 08:47:00 UTC | 2025-09-17 14:42:50 | UNKNOWN | |||||||||
01K5C21MQX9FXW694P0ACA4CBX | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0005.case.md | ef7a5fd23bfebdcb24247a424a03b06d719ebfe0 | --- FII: TC-0005 groupId: GRP-003 title: Verify that the dashboard Count field is present inside the Response Data. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Manual" tags: ["cycle-count"] priority: "High" --- ### Description This test case verifies that the API endpoint `/dashboard` returns the correct response data, specifically ensuring that the `Daily_Cycle_Count` field is present in the JSON response. ### Pre-Conditions: - The API endpoint `/dashboard` is accessible. - The API server is running and available to respond to GET requests. - Proper authentication, if required, is provided for the request. ### Test Steps: 1. **Step 1**: Send a GET request to the `/dashboard` API endpoint. 2. **Step 2**: Check the response to verify that the `Dashboard_Count_Url` is present in the `extension` section of the returned JSON data. 3. **Step 3**: Ensure that the corresponding value of `valueInteger` is 3 under the `Dashboard_Count_Url`. ### Expected Result: - If the cycle count is available, the API should return a `200` status code with the following JSON data: | md | 1117 | 2025-01-01 09:48:50 UTC | { "frontMatter": "---\nFII: TC-0005\ngroupId: GRP-003\ntitle: Verify that the dashboard Count field is present inside the Response Data.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Manual\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case verifies that the API endpoint `/dashboard` returns the correct response data, specifically ensuring that the `Daily_Cycle_Count` field is present in the JSON response.\n\n### Pre-Conditions:\n\n- The API endpoint `/dashboard` is accessible.\n- The API server is running and available to respond to GET requests.\n- Proper authentication, if required, is provided for the request.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to the `/dashboard` API endpoint.\n2. **Step 2**: Check the response to verify that the `Dashboard_Count_Url` is present in the `extension` section of the returned JSON data.\n3. **Step 3**: Ensure that the corresponding value of `valueInteger` is 3 under the `Dashboard_Count_Url`.\n\n### Expected Result:\n\n- If the cycle count is available, the API should return a `200` status code with the following JSON data:\n", "attrs": { "FII": "TC-0005", "groupId": "GRP-003", "title": "Verify that the dashboard Count field is present inside the Response Data.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "cycle-count" ], "priority": "High" } } | { "FII": "TC-0005", "groupId": "GRP-003", "title": "Verify that the dashboard Count field is present inside the Response Data.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "cycle-count" ], "priority": "High" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQXMYG1M8GNC2YYE3AH | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0012.run-1.result.json | afc09e4ac90ba00e6ad2f1cd287d377b2559c699 | { "test_case_fii": "TC-0012", "title": "Verify that the response for System & Dashboard is of integer type.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server to ensure the System and Dashboard count fields are integer values.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } | json | 855 | 2024-12-17 08:47:00 UTC | 2025-09-17 14:42:50 | UNKNOWN | |||||||||
01K5C21MQX2J0GHE04H8QABVH1 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0010.case.md | 09d7a6f7941260155b8ff4fa6baa990c1c0e5b37 | --- FII: TC-0010 groupId: GRP-003 title: Verify that the Dashboard Count in the response is greater than 0. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Manual" tags: ["cycle-count"] priority: "High" --- ### Description This test case ensures that the API response for the `/dashboard` endpoint includes a valid `Dashboard_Count` value that is greater than 0. The test will confirm that the value in the response is correctly populated. ### Pre-Conditions: - The `/dashboard` API endpoint is operational and accessible. - Authentication (if required) is provided, and the request can be successfully executed. - The API response contains the `extension` field with the expected data structure. ### Test Steps: 1. **Step 1**: Send a GET request to `/dashboard` API endpoint. 2. **Step 2**: Review the JSON response from the server response contains the `extension` field, which should include a `Lifetime_Cycle_Count_Url` entry and the `valueInteger` should be greater than 0. ### Expected Result: - The value of `valueInteger` should be greater than 0 (in this example, `2`). | md | 1118 | 2025-01-01 09:49:00 UTC | { "frontMatter": "---\nFII: TC-0010\ngroupId: GRP-003\ntitle: Verify that the Dashboard Count in the response is greater than 0.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Manual\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case ensures that the API response for the `/dashboard` endpoint includes a valid `Dashboard_Count` value that is greater than 0. The test will confirm that the value in the response is correctly populated.\n\n### Pre-Conditions:\n\n- The `/dashboard` API endpoint is operational and accessible.\n- Authentication (if required) is provided, and the request can be successfully executed.\n- The API response contains the `extension` field with the expected data structure.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to `/dashboard` API endpoint.\n2. **Step 2**: Review the JSON response from the server response contains the `extension` field, which should include a `Lifetime_Cycle_Count_Url` entry and the `valueInteger` should be greater than 0.\n\n### Expected Result:\n\n- The value of `valueInteger` should be greater than 0 (in this example, `2`).\n", "attrs": { "FII": "TC-0010", "groupId": "GRP-003", "title": "Verify that the Dashboard Count in the response is greater than 0.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "cycle-count" ], "priority": "High" } } | { "FII": "TC-0010", "groupId": "GRP-003", "title": "Verify that the Dashboard Count in the response is greater than 0.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "cycle-count" ], "priority": "High" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQY7JH1EAH0ZR1VFQD3 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0007.run-1.result.json | c5b775c00f6c283e6390b3cfb349aa563b329b87 | { "test_case_fii": "TC-0007", "title": "Verify that if the System Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.", "status": "failed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server and check whether the `valueInteger` field under the `System_count_Url` extension contains a value that is not an integer (e.g., special characters, decimals, or leading zeros).", "status": "failed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } | json | 999 | 2025-01-01 09:56:18 UTC | 2025-09-17 14:42:50 | UNKNOWN | |||||||||
01K5C21MQYJYPEW1W29WSBGJT4 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/qf-case-group.md | 2a0ec055a283bc36ac13e4db6b526472834875c4 | --- id: GRP-003 SuiteId: SUT-003 planId: ["PLN-003"] name: "Dashboard Test Cases" description: "Group of test cases designed to validate the integration capabilities of Dashboard, focusing on data ingestion, processing integrity, and reporting functionalities to ensure compliance and performance standards are met." created_by: "arun-ramanan@netspective.in" created_at: "2024-11-01" tags: ["integration testing", "data validation", "reporting"] --- ### Overview This test case group is structured to ensure the seamless functionality and reliability of the dashboard by validating key integration points and performance metrics: - **Data Ingestion**: Verifying capability to handle multiple data formats (JSON, CSV, XML) without errors or data loss. - **Data Processing Integrity**: Ensuring that all ingested data is accurately processed and retains integrity throughout. - **Reporting Accuracy**: Validating that generated reports reflect the processed data accurately and meet compliance requirements. - **Performance Under Load**: Testing the system's ability to handle concurrent ingestion requests and maintain performance benchmarks. - **Automated Testing**: Facilitating integration into CI/CD pipelines for consistent testing and validation of new releases. | md | 1267 | 2024-12-27 12:43:34 UTC | { "frontMatter": "---\nid: GRP-003\nSuiteId: SUT-003\nplanId: [\"PLN-003\"]\nname: \"Dashboard Test Cases\"\ndescription: \"Group of test cases designed to validate the integration capabilities of Dashboard, focusing on data ingestion, processing integrity, and reporting functionalities to ensure compliance and performance standards are met.\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-11-01\"\ntags: [\"integration testing\", \"data validation\", \"reporting\"]\n---\n", "body": "\n### Overview\n\nThis test case group is structured to ensure the seamless functionality and reliability of the dashboard by validating key integration points and performance metrics:\n\n- **Data Ingestion**: Verifying capability to handle multiple data formats (JSON, CSV, XML) without errors or data loss.\n- **Data Processing Integrity**: Ensuring that all ingested data is accurately processed and retains integrity throughout.\n- **Reporting Accuracy**: Validating that generated reports reflect the processed data accurately and meet compliance requirements.\n- **Performance Under Load**: Testing the system's ability to handle concurrent ingestion requests and maintain performance benchmarks.\n- **Automated Testing**: Facilitating integration into CI/CD pipelines for consistent testing and validation of new releases.\n", "attrs": { "id": "GRP-003", "SuiteId": "SUT-003", "planId": [ "PLN-003" ], "name": "Dashboard Test Cases", "description": "Group of test cases designed to validate the integration capabilities of Dashboard, focusing on data ingestion, processing integrity, and reporting functionalities to ensure compliance and performance standards are met.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-11-01", "tags": [ "integration testing", "data validation", "reporting" ] } } | { "id": "GRP-003", "SuiteId": "SUT-003", "planId": [ "PLN-003" ], "name": "Dashboard Test Cases", "description": "Group of test cases designed to validate the integration capabilities of Dashboard, focusing on data ingestion, processing integrity, and reporting functionalities to ensure compliance and performance standards are met.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-11-01", "tags": [ "integration testing", "data validation", "reporting" ] } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQY5F02M5AWQ3TGVCK5 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0012.case.md | 64b638d1c32e2e9d6ea4c73a375a12fc8721785f | --- FII: TC-0012 groupId: GRP-003 title: Verify that the response for System & Dashboard is of integer type. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Manual" tags: ["cycle-count"] priority: "High" --- ### Description This test case checks the API response to ensure that the values for "System" and "Dashboard" are returned as integer values. The API should return the expected response structure, with "valueInteger" containing integer values. ### Pre-Conditions: - The `/dashboard` API endpoint is operational and accessible. - Authentication (if required) is provided, and the request can be successfully executed. - The API response contains the `extension` field with the expected data structure. ### Test Steps: 1. **Step 1**: Send a GET request to the `/dashboard` API endpoint. 2. **Step 2**: Review the JSON response from the server to ensure that the System and Dashboard fields contain integer values. ### Expected Result: - The `valueInteger` fields for System and Dashboard contain integer values. | md | 1029 | 2025-01-01 09:56:38 UTC | { "frontMatter": "---\nFII: TC-0012\ngroupId: GRP-003\ntitle: Verify that the response for System & Dashboard is of integer type.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Manual\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case checks the API response to ensure that the values for \"System\" and \"Dashboard\" are returned as integer values. The API should return the expected response structure, with \"valueInteger\" containing integer values.\n\n### Pre-Conditions:\n\n- The `/dashboard` API endpoint is operational and accessible.\n- Authentication (if required) is provided, and the request can be successfully executed.\n- The API response contains the `extension` field with the expected data structure.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to the `/dashboard` API endpoint.\n2. **Step 2**: Review the JSON response from the server to ensure that the System and Dashboard fields contain integer values.\n\n### Expected Result:\n\n- The `valueInteger` fields for System and Dashboard contain integer values.\n", "attrs": { "FII": "TC-0012", "groupId": "GRP-003", "title": "Verify that the response for System & Dashboard is of integer type.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "cycle-count" ], "priority": "High" } } | { "FII": "TC-0012", "groupId": "GRP-003", "title": "Verify that the response for System & Dashboard is of integer type.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "cycle-count" ], "priority": "High" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
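TC-0012 is recorded as a manual check, but the same assertion could be scripted. The following is a rough sketch under the assumption that both counts are exposed as `extension` entries with `valueInteger` fields, as the description suggests; the base URL and the URL fragments used to pick out the two entries are illustrative, not confirmed names.

```typescript
// Hypothetical automated version of TC-0012: both the System and Dashboard counts
// returned by /dashboard must be integers in their `valueInteger` fields.
// The "System" / "Dashboard" URL fragments are assumptions.
async function verifyCountsAreIntegers(baseUrl: string): Promise<void> {
  const res = await fetch(`${baseUrl}/dashboard`);
  if (!res.ok) throw new Error(`GET /dashboard returned HTTP ${res.status}`);

  const body = await res.json();
  const extensions: { url: string; valueInteger?: unknown }[] = body.extension ?? [];

  for (const fragment of ["System", "Dashboard"]) {
    const ext = extensions.find((e) => e.url.includes(fragment));
    const value = ext?.valueInteger;
    if (typeof value !== "number" || !Number.isInteger(value)) {
      throw new Error(`${fragment} count is missing or not an integer: ${JSON.stringify(value)}`);
    }
  }
}
```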
01K5C21MQY64WPVJBAMX71443R | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0009.run.md | 849b91f7d3bf331abd6296ad34fa56d37da871b0 | --- FII: "TR-0009" test_case_fii: "TC-0009" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. | md | 165 | 2024-12-17 08:47:00 UTC | { "frontMatter": "---\nFII: \"TR-0009\"\ntest_case_fii: \"TC-0009\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0009", "test_case_fii": "TC-0009", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0009", "test_case_fii": "TC-0009", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQZR1MC59V484HGZ5E8 | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0011.run.md | 3519c683a2721ab3e0cbb1c84f00f407cd8eafaa | --- FII: "TR-0011" test_case_fii: "TC-0011" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. | md | 165 | 2024-12-17 08:47:00 UTC | { "frontMatter": "---\nFII: \"TR-0011\"\ntest_case_fii: \"TC-0011\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0011", "test_case_fii": "TC-0011", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0011", "test_case_fii": "TC-0011", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
01K5C21MQZGS661B3T0TJEV0XB | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0009.case.md | 74c8cbe25dda2a7be37dca56a04d3beef29c3ee0 | --- FII: TC-0009 groupId: GRP-003 title: Verify that the System Count in the response is greater than 0. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["cycle-count"] priority: "High" --- ### Description This test case ensures that the API response for the `/dashboard` endpoint includes a valid `System_Count` value that is greater than 0. The test will confirm that the value in the response is correctly populated. ### Pre-Conditions: - The `/dashboard` API endpoint is operational and accessible. - Authentication (if required) is provided, and the request can be successfully executed. - The API response contains the `extension` field with the expected data structure. ### Test Steps: 1. **Step 1**: Send a GET request to the `/dashboard` API endpoint. 2. **Step 2**: Review the JSON response from the server to confirm that it contains the `extension` field, which should include a `Daily_Cycle_Count_Url` entry whose `valueInteger` is greater than 0. ### Expected Result: - The value of `valueInteger` should be greater than 0 (in this example, `2`). | md | 1113 | 2024-12-26 12:06:54 UTC | { "frontMatter": "---\nFII: TC-0009\ngroupId: GRP-003\ntitle: Verify that the System Count in the response is greater than 0.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case ensures that the API response for the `/dashboard` endpoint includes a valid `System_Count` value that is greater than 0. The test will confirm that the value in the response is correctly populated.\n\n### Pre-Conditions:\n\n- The `/dashboard` API endpoint is operational and accessible.\n- Authentication (if required) is provided, and the request can be successfully executed.\n- The API response contains the `extension` field with the expected data structure.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to the `/dashboard` API endpoint.\n2. **Step 2**: Review the JSON response from the server to confirm that it contains the `extension` field, which should include a `Daily_Cycle_Count_Url` entry whose `valueInteger` is greater than 0.\n\n### Expected Result:\n\n- The value of `valueInteger` should be greater than 0 (in this example, `2`).\n", "attrs": { "FII": "TC-0009", "groupId": "GRP-003", "title": "Verify that the System Count in the response is greater than 0.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } } | { "FII": "TC-0009", "groupId": "GRP-003", "title": "Verify that the System Count in the response is greater than 0.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } | 2025-09-17 14:42:50 | UNKNOWN | |||||||
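TC-0009 is tagged as an automation case; a minimal sketch of that assertion follows. The `Daily_Cycle_Count_Url` fragment and the `valueInteger` field come from the test steps above, while the base URL and the surrounding wiring are assumptions rather than the actual automation code.

```typescript
// Hypothetical automation sketch for TC-0009: the count reported by /dashboard
// under the Daily_Cycle_Count_Url extension must be greater than 0.
async function verifyCountGreaterThanZero(baseUrl: string): Promise<number> {
  const res = await fetch(`${baseUrl}/dashboard`);
  if (!res.ok) throw new Error(`GET /dashboard returned HTTP ${res.status}`);

  const body = await res.json();
  const ext = (body.extension ?? []).find(
    (e: { url: string }) => e.url.includes("Daily_Cycle_Count_Url"),
  );
  const count = ext?.valueInteger;

  if (typeof count !== "number" || count <= 0) {
    throw new Error(`Expected a count greater than 0, got ${JSON.stringify(count)}`);
  }
  return count; // e.g., 2 in the sample referenced by the case
}
```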
01K5C21MQZEZ0KD9M1HC0A4BGV | 01K5C21MQGZ4A2Z1E7HKH6H92D | 01K5C21MQJ6NX4XB3Q3278766R | 01K5C21MQK2Q6DB505XT6FVCMT | /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/login/TC-0004.run.md | fcc7407fd9c379ebb178610fe89c6f4ca5a3c02a | --- FII: "TR-0004" test_case_fii: "TC-0004" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Failed - Notes: All steps executed successfully. | md | 168 | 2025-01-01 09:47:52 UTC | { "frontMatter": "---\nFII: \"TR-0004\"\ntest_case_fii: \"TC-0004\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "\n### Run Summary\n\n- Status: Failed\n- Notes: All steps executed successfully.\n", "attrs": { "FII": "TR-0004", "test_case_fii": "TC-0004", "run_date": "2024-12-15", "environment": "Test" } } | { "FII": "TR-0004", "test_case_fii": "TC-0004", "run_date": "2024-12-15", "environment": "Test" } | 2025-09-17 14:42:50 | UNKNOWN |
(Page 1 of 2)