uniform_resource (table) Content

  • Start Row: 0
  • Rows per Page: 50
  • Total Rows: 96
  • Current Page: 1
  • Total Pages: 2

01JNPWZ2WY3VV8NR52GZ42133B 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-performance-testing/qf-suite.md 84cc678bdc895d6a27cf28e93e1b5853e693df94 --- id: SUT-005 projectId: PRJ-001 test_execution_id: ["EXE-005"] name: "Performance Test Suite" description: "To evaluate the performance, load handling, and scalability of the system. While single-user interactions are expected per device, performance benchmarks will be established to ensure the system operates efficiently under defined conditions." created_by: "qa-lead@example.com" created_at: "2024-11-01" tags: ["performance testing"] --- ## Test Execution Steps ### 1. Define Performance Metrics Establish performance benchmarks to evaluate system behavior, including: - **Response Time**: Measure the time taken for API endpoints to respond under normal and peak load conditions. - **Throughput**: Assess the number of transactions or API requests handled per second. - **Error Rate**: Monitor the frequency of failed API requests during testing. - **Endurance Testing**: Evaluate the system's stability and reliability under prolonged usage. - **Resource Utilization**: Analyze CPU, memory, and network usage during testing. ### 2. Create Test Scenarios Design realistic scenarios for single-user interactions and simulated load testing to identify performance bottlenecks. - Include scenarios to validate system behavior during: - Normal operations. - Simulated load spikes (e.g., multiple device interactions over a short period). - Extended usage periods to test endurance. ### 3. Test Environment Setup - **Environment**: Replicate the production environment for accurate results. - **Tools**: Utilize the performance testing tool **JMeter** to simulate load and monitor metrics. ### 4. Execute Performance Tests - Perform baseline tests to measure system performance under single-user interactions. 
- Gradually increase the load to identify performance thresholds. - Conduct endurance tests to evaluate stability and resource utilization over time. ### 5. Analyze Results - Compare test results against performance benchmarks. - Identify deviations, bottlenecks, and potential areas for optimization. - Document findings for further investigation and resolution. ### 6. Recommendation Based on Findings - **If benchmarks are met**: Confirm system readiness and proceed to documentation. - **If benchmarks are not met**: Log identified issues, recommend optimizations, and perform retesting after adjustments. ## Documentation Document test results, including: - Response times across different loads. - Throughput and error rates. - Resource utilization trends. - Observations during endurance testing. ## Tools Utilized - **JMeter**: For performance and load testing. - **System Monitoring Tools**: For tracking CPU, memory, and network usage. ## Performance Metrics | **Metric** | **Definition** | | ------------------------ | ------------------------------------------------------------------------------ | | **Response Time** | Time taken by the API to respond to requests under various conditions. | | **Throughput** | Number of successful API transactions handled per second. | | **Error Rate** | Frequency of failed API requests under load. | | **Endurance Testing** | System reliability and stability under continuous load for extended durations. | | **Resource Utilization** | CPU, memory, and network usage trends during performance tests. | ## Deliverables - **Performance Benchmarks**: Baseline and target metrics for system evaluation. - **Performance Test Report**: Comprehensive summary of test results, observations, and recommendations. 
md 3686 2024-12-24 14:17:02 UTC { "frontMatter": "---\nid: SUT-005\nprojectId: PRJ-001\ntest_execution_id: [\"EXE-005\"]\nname: \"Performance Test Suite\"\ndescription: \"To evaluate the performance, load handling, and scalability of the system. While single-user interactions are expected per device, performance benchmarks will be established to ensure the system operates efficiently under defined conditions.\"\ncreated_by: \"qa-lead@example.com\"\ncreated_at: \"2024-11-01\"\ntags: [\"performance testing\"]\n---\n", "body": "\n## Test Execution Steps\n\n### 1. Define Performance Metrics\n\nEstablish performance benchmarks to evaluate system behavior, including:\n\n- **Response Time**: Measure the time taken for API endpoints to respond under normal and peak load conditions.\n- **Throughput**: Assess the number of transactions or API requests handled per second.\n- **Error Rate**: Monitor the frequency of failed API requests during testing.\n- **Endurance Testing**: Evaluate the system's stability and reliability under prolonged usage.\n- **Resource Utilization**: Analyze CPU, memory, and network usage during testing.\n\n### 2. Create Test Scenarios\n\nDesign realistic scenarios for single-user interactions and simulated load testing to identify performance bottlenecks.\n\n- Include scenarios to validate system behavior during:\n - Normal operations.\n - Simulated load spikes (e.g., multiple device interactions over a short period).\n - Extended usage periods to test endurance.\n\n### 3. Test Environment Setup\n\n- **Environment**: Replicate the production environment for accurate results.\n- **Tools**: Utilize the performance testing tool **JMeter** to simulate load and monitor metrics.\n\n### 4. 
Execute Performance Tests\n\n- Perform baseline tests to measure system performance under single-user interactions.\n- Gradually increase the load to identify performance thresholds.\n- Conduct endurance tests to evaluate stability and resource utilization over time.\n\n### 5. Analyze Results\n\n- Compare test results against performance benchmarks.\n- Identify deviations, bottlenecks, and potential areas for optimization.\n- Document findings for further investigation and resolution.\n\n### 6. Recommendation Based on Findings\n\n- **If benchmarks are met**: Confirm system readiness and proceed to documentation.\n- **If benchmarks are not met**: Log identified issues, recommend optimizations, and perform retesting after adjustments.\n\n## Documentation\n\nDocument test results, including:\n\n- Response times across different loads.\n- Throughput and error rates.\n- Resource utilization trends.\n- Observations during endurance testing.\n\n## Tools Utilized\n\n- **JMeter**: For performance and load testing.\n- **System Monitoring Tools**: For tracking CPU, memory, and network usage.\n\n## Performance Metrics\n\n| **Metric** | **Definition** |\n| ------------------------ | ------------------------------------------------------------------------------ |\n| **Response Time** | Time taken by the API to respond to requests under various conditions. |\n| **Throughput** | Number of successful API transactions handled per second. |\n| **Error Rate** | Frequency of failed API requests under load. |\n| **Endurance Testing** | System reliability and stability under continuous load for extended durations. |\n| **Resource Utilization** | CPU, memory, and network usage trends during performance tests. 
|\n\n## Deliverables\n\n- **Performance Benchmarks**: Baseline and target metrics for system evaluation.\n- **Performance Test Report**: Comprehensive summary of test results, observations, and recommendations.\n", "attrs": { "id": "SUT-005", "projectId": "PRJ-001", "test_execution_id": [ "EXE-005" ], "name": "Performance Test Suite", "description": "To evaluate the performance, load handling, and scalability of the system. While single-user interactions are expected per device, performance benchmarks will be established to ensure the system operates efficiently under defined conditions.", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "performance testing" ] } } { "id": "SUT-005", "projectId": "PRJ-001", "test_execution_id": [ "EXE-005" ], "name": "Performance Test Suite", "description": "To evaluate the performance, load handling, and scalability of the system. While single-user interactions are expected per device, performance benchmarks will be established to ensure the system operates efficiently under defined conditions.", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "performance testing" ] } 2025-03-06 23:34:33 UNKNOWN
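The metric definitions recorded in the SUT-005 suite above (response time, throughput, error rate) translate directly into arithmetic over raw request samples. As a minimal illustration, not part of the recorded resource, here is a Python sketch that derives those metrics from hypothetical `(elapsed_ms, success)` pairs; the data shape and function name are assumptions for the example only:

```python
def summarize(samples, duration_s):
    """Compute basic performance metrics from (elapsed_ms, success) samples.

    samples:    list of (elapsed_ms, success_bool) tuples, one per request
    duration_s: wall-clock length of the test in seconds
    """
    elapsed = sorted(ms for ms, _ in samples)
    failures = sum(1 for _, ok in samples if not ok)
    n = len(samples)
    return {
        "avg_ms": sum(elapsed) / n,
        # Simple index-based 95th percentile over the sorted samples.
        "p95_ms": elapsed[min(n - 1, int(n * 0.95))],
        "throughput_rps": n / duration_s,
        "error_rate_pct": 100.0 * failures / n,
    }

# Hypothetical run: 10 requests over 2 seconds, one failure.
samples = [(100, True)] * 9 + [(900, False)]
m = summarize(samples, duration_s=2.0)
```

With the hypothetical data above, the sketch reports 5 requests/second of throughput and a 10% error rate, which is exactly the kind of summary the suite's Documentation section asks to record.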
01JNPWZ2WYS4MZZMRJK5957ZD9 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-performance-testing/test-plan/qf-plan.md 8d3ccbbe683d79f4d86e8f77c4d70e87f06eccbf --- id: PLN-005 name: "Performance Test Plan" description: "To evaluate the performance, load handling, and scalability of the system. While single-user interactions are expected per device, performance benchmarks will be established to ensure the system operates efficiently under defined conditions." created_by: "qa-lead@example.com" created_at: "2024-11-01" tags: ["performance testing"] related_requirements: ["REQ-101", "REQ-102"] --- ## Objective Validate the performance, load handling, and scalability of the system. While single-user interactions are expected per device, performance benchmarks will be established to ensure the system operates efficiently under defined conditions. ### 1. Define Performance Metrics Establish performance benchmarks to evaluate system behavior, including: - **Response Time**: Measure the time taken for API endpoints to respond under normal and peak load conditions. - **Throughput**: Assess the number of transactions or API requests handled per second. - **Error Rate**: Monitor the frequency of failed API requests during testing. - **Endurance Testing**: Evaluate the system's stability and reliability under prolonged usage. - **Resource Utilization**: Analyze CPU, memory, and network usage during testing. ### 2. Create Test Scenarios Design realistic scenarios for single-user interactions and simulated load testing to identify performance bottlenecks. - Include scenarios to validate system behavior during: - Normal operations. - Simulated load spikes (e.g., multiple device interactions over a short period). - Extended usage periods to test endurance. ### 3. Test Environment Setup - **Environment**: Replicate the production environment for accurate results. 
- **Tools**: Utilize the performance testing tool **JMeter** to simulate load and monitor metrics. ### 4. Execute Performance Tests - Perform baseline tests to measure system performance under single-user interactions. - Gradually increase the load to identify performance thresholds. - Conduct endurance tests to evaluate stability and resource utilization over time. ### 5. Analyze Results - Compare test results against performance benchmarks. - Identify deviations, bottlenecks, and potential areas for optimization. - Document findings for further investigation and resolution. ### 6. Recommendation Based on Findings - **If benchmarks are met**: Confirm system readiness and proceed to documentation. - **If benchmarks are not met**: Log identified issues, recommend optimizations, and perform retesting after adjustments. ## Documentation Document test results, including: - Response times across different loads. - Throughput and error rates. - Resource utilization trends. - Observations during endurance testing. ## Tools Utilized - **JMeter**: For performance and load testing. - **System Monitoring Tools**: For tracking CPU, memory, and network usage. ## Performance Metrics | **Metric** | **Definition** | | ------------------------ | ------------------------------------------------------------------------------ | | **Response Time** | Time taken by the API to respond to requests under various conditions. | | **Throughput** | Number of successful API transactions handled per second. | | **Error Rate** | Frequency of failed API requests under load. | | **Endurance Testing** | System reliability and stability under continuous load for extended durations. | | **Resource Utilization** | CPU, memory, and network usage trends during performance tests. | ## Deliverables - **Performance Benchmarks**: Baseline and target metrics for system evaluation. - **Performance Test Report**: Comprehensive summary of test results, observations, and recommendations. 
md 3919 2024-12-26 17:32:06 UTC { "frontMatter": "---\nid: PLN-005\nname: \"Performance Test Plan\"\ndescription: \"To evaluate the performance, load handling, and scalability of the system. While single-user interactions are expected per device, performance benchmarks will be established to ensure the system operates efficiently under defined conditions.\"\ncreated_by: \"qa-lead@example.com\"\ncreated_at: \"2024-11-01\"\ntags: [\"performance testing\"]\nrelated_requirements: [\"REQ-101\", \"REQ-102\"]\n---\n", "body": "\n## Objective\n\nValidate the performance, load handling, and scalability of the system. While single-user interactions are expected per device, performance benchmarks will be established to ensure the system operates efficiently under defined conditions.\n\n### 1. Define Performance Metrics\n\nEstablish performance benchmarks to evaluate system behavior, including:\n\n- **Response Time**: Measure the time taken for API endpoints to respond under normal and peak load conditions.\n- **Throughput**: Assess the number of transactions or API requests handled per second.\n- **Error Rate**: Monitor the frequency of failed API requests during testing.\n- **Endurance Testing**: Evaluate the system's stability and reliability under prolonged usage.\n- **Resource Utilization**: Analyze CPU, memory, and network usage during testing.\n\n### 2. Create Test Scenarios\n\nDesign realistic scenarios for single-user interactions and simulated load testing to identify performance bottlenecks.\n\n- Include scenarios to validate system behavior during:\n - Normal operations.\n - Simulated load spikes (e.g., multiple device interactions over a short period).\n - Extended usage periods to test endurance.\n\n### 3. Test Environment Setup\n\n- **Environment**: Replicate the production environment for accurate results.\n- **Tools**: Utilize the performance testing tool **JMeter** to simulate load and monitor metrics.\n\n### 4. 
Execute Performance Tests\n\n- Perform baseline tests to measure system performance under single-user interactions.\n- Gradually increase the load to identify performance thresholds.\n- Conduct endurance tests to evaluate stability and resource utilization over time.\n\n### 5. Analyze Results\n\n- Compare test results against performance benchmarks.\n- Identify deviations, bottlenecks, and potential areas for optimization.\n- Document findings for further investigation and resolution.\n\n### 6. Recommendation Based on Findings\n\n- **If benchmarks are met**: Confirm system readiness and proceed to documentation.\n- **If benchmarks are not met**: Log identified issues, recommend optimizations, and perform retesting after adjustments.\n\n## Documentation\n\nDocument test results, including:\n\n- Response times across different loads.\n- Throughput and error rates.\n- Resource utilization trends.\n- Observations during endurance testing.\n\n## Tools Utilized\n\n- **JMeter**: For performance and load testing.\n- **System Monitoring Tools**: For tracking CPU, memory, and network usage.\n\n## Performance Metrics\n\n| **Metric** | **Definition** |\n| ------------------------ | ------------------------------------------------------------------------------ |\n| **Response Time** | Time taken by the API to respond to requests under various conditions. |\n| **Throughput** | Number of successful API transactions handled per second. |\n| **Error Rate** | Frequency of failed API requests under load. |\n| **Endurance Testing** | System reliability and stability under continuous load for extended durations. |\n| **Resource Utilization** | CPU, memory, and network usage trends during performance tests. 
|\n\n## Deliverables\n\n- **Performance Benchmarks**: Baseline and target metrics for system evaluation.\n- **Performance Test Report**: Comprehensive summary of test results, observations, and recommendations.\n", "attrs": { "id": "PLN-005", "name": "Performance Test Plan", "description": "To evaluate the performance, load handling, and scalability of the system. While single-user interactions are expected per device, performance benchmarks will be established to ensure the system operates efficiently under defined conditions.", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "performance testing" ], "related_requirements": [ "REQ-101", "REQ-102" ] } } { "id": "PLN-005", "name": "Performance Test Plan", "description": "To evaluate the performance, load handling, and scalability of the system. While single-user interactions are expected per device, performance benchmarks will be established to ensure the system operates efficiently under defined conditions.", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "performance testing" ], "related_requirements": [ "REQ-101", "REQ-102" ] } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2WZ28WE7BBJTXWZB7WB 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-performance-testing/performance-testcase/TC-0017.run.md c8ab0a2b29f9eef84abe5015d9eeaa2ea848cd6d --- FII: "TR-0017" test_case_fii: "TC-0017" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: ToDo - Notes: Not yet executed. md 148 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0017\"\ntest_case_fii: \"TC-0017\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: ToDo\n- Notes: Not yet executed.", "attrs": { "FII": "TR-0017", "test_case_fii": "TC-0017", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0017", "test_case_fii": "TC-0017", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2WZTWERXDA96B7WAQ32 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-performance-testing/performance-testcase/qf-case-group.md 3da5ddc0e4fa0f8af896d6efc3dd08e05cdf6d29 --- id: GRP-007 SuiteId: SUT-005 planId: ["PLN-005"] name: "Performance Test Cases" description: "API Performance Testing Plan for Single-User Interactions" created_by: "arun-ramanan@netspective.in" created_at: "2024-11-01" tags: ["Compatibility testing"] --- ## Description This document outlines the execution of performance testing for the API to ensure it meets operational requirements under expected usage scenarios. The focus is on validating the system's ability to handle anticipated workloads, particularly single-user interactions per device, while ensuring optimal performance and reliability. ## **Exclusion Justification** ### **Scope Limitation** Performance testing is designed to simulate realistic single-user interactions per device as this aligns with the anticipated usage model. ## **Execution Details** ### **Test Objectives** - Validate response times under typical single-user interaction scenarios. - Assess throughput for API operations to ensure acceptable processing rates. - Measure the error rate under sustained usage to verify robustness. - Conduct endurance testing to identify potential bottlenecks during prolonged activity. - Evaluate resource utilization (CPU, memory, disk I/O) to ensure efficient performance. ### **Performance Benchmarks** - **Response Time:** - Expected: < 1 second for 95% of requests. - **Throughput:** - Minimum: 10 requests per second for sustained single-user scenarios. - **Error Rate:** - Target: < 1% of total requests. - **Endurance Testing:** - Duration: 12 hours of continuous operation without degradation. - **Resource Utilization:** - CPU: < 80% utilization during peak operations. - Memory: < 80% of allocated resources. 
### **Environment Details** - **Test Environment:** Performance Test Environment. - **API Version:** v1.0. - **Simulated Workload:** Single-user interactions. ### **Tools Used** - **Apache JMeter:** For generating requests and monitoring performance metrics. ### **Test Scenarios** 1. **Single API Request Execution:** Measure response time for individual endpoints. 2. **Sequential API Calls:** Simulate real-world scenarios involving multiple consecutive API interactions. 3. **Error Rate Validation:** Induce and measure system response under faulty requests. 4. **Endurance Testing:** Simulate continuous usage over extended periods. 5. **Resource Utilization Monitoring:** Observe system behavior under sustained usage. ## **Deliverables** - **Test Results:** Consolidated report of response times, throughput, error rates, and resource utilization. - **Analysis:** Identification of any bottlenecks or performance issues. md 2678 2024-12-26 12:17:38 UTC { "frontMatter": "---\nid: GRP-007\nSuiteId: SUT-005\nplanId: [\"PLN-005\"]\nname: \"Performance Test Cases\"\ndescription: \"API Performance Testing Plan for Single-User Interactions\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-11-01\"\ntags: [\"Compatibility testing\"]\n---\n", "body": "\n## Description\n\nThis document outlines the execution of performance testing for the API to ensure it meets operational requirements under expected usage scenarios. 
The focus is on validating the system's ability to handle anticipated workloads, particularly single-user interactions per device, while ensuring optimal performance and reliability.\n\n## **Exclusion Justification**\n\n### **Scope Limitation**\n\nPerformance testing is designed to simulate realistic single-user interactions per device as this aligns with the anticipated usage model.\n\n## **Execution Details**\n\n### **Test Objectives**\n\n- Validate response times under typical single-user interaction scenarios.\n- Assess throughput for API operations to ensure acceptable processing rates.\n- Measure the error rate under sustained usage to verify robustness.\n- Conduct endurance testing to identify potential bottlenecks during prolonged activity.\n- Evaluate resource utilization (CPU, memory, disk I/O) to ensure efficient performance.\n\n### **Performance Benchmarks**\n\n- **Response Time:**\n - Expected: < 1 second for 95% of requests.\n- **Throughput:**\n - Minimum: 10 requests per second for sustained single-user scenarios.\n- **Error Rate:**\n - Target: < 1% of total requests.\n- **Endurance Testing:**\n - Duration: 12 hours of continuous operation without degradation.\n- **Resource Utilization:**\n - CPU: < 80% utilization during peak operations.\n - Memory: < 80% of allocated resources.\n\n### **Environment Details**\n\n- **Test Environment:** Performance Test Environment.\n- **API Version:** v1.0.\n- **Simulated Workload:** Single-user interactions.\n\n### **Tools Used**\n\n- **Apache JMeter:** For generating requests and monitoring performance metrics.\n\n### **Test Scenarios**\n\n1. **Single API Request Execution:** \n Measure response time for individual endpoints.\n2. **Sequential API Calls:** \n Simulate real-world scenarios involving multiple consecutive API interactions.\n3. **Error Rate Validation:** \n Induce and measure system response under faulty requests.\n4. **Endurance Testing:** \n Simulate continuous usage over extended periods.\n5. 
**Resource Utilization Monitoring:** \n Observe system behavior under sustained usage.\n\n## **Deliverables**\n\n- **Test Results:** Consolidated report of response times, throughput, error rates, and resource utilization.\n- **Analysis:** Identification of any bottlenecks or performance issues.\n", "attrs": { "id": "GRP-007", "SuiteId": "SUT-005", "planId": [ "PLN-005" ], "name": "Performance Test Cases", "description": "API Performance Testing Plan for Single-User Interactions", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-11-01", "tags": [ "Compatibility testing" ] } } { "id": "GRP-007", "SuiteId": "SUT-005", "planId": [ "PLN-005" ], "name": "Performance Test Cases", "description": "API Performance Testing Plan for Single-User Interactions", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-11-01", "tags": [ "Compatibility testing" ] } 2025-03-06 23:34:33 UNKNOWN
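The benchmarks recorded for GRP-007 above (p95 response time < 1 s, throughput ≥ 10 rps, error rate < 1%, resource utilization < 80%) amount to a simple pass/fail gate over measured values. A hedged sketch of that gate in Python, with thresholds copied from the group definition and all function and key names invented for the example:

```python
# Benchmark thresholds taken from the GRP-007 "Performance Benchmarks" section.
THRESHOLDS = {
    "p95_response_s": 1.0,     # 95% of requests under 1 second
    "min_throughput_rps": 10.0,
    "max_error_rate_pct": 1.0,
}

def meets_benchmarks(p95_response_s, throughput_rps, error_rate_pct):
    """Return per-benchmark pass/fail flags for one measured test run."""
    return {
        "response_time": p95_response_s < THRESHOLDS["p95_response_s"],
        "throughput": throughput_rps >= THRESHOLDS["min_throughput_rps"],
        "error_rate": error_rate_pct < THRESHOLDS["max_error_rate_pct"],
    }

# Hypothetical measured run that satisfies all three benchmarks.
result = meets_benchmarks(p95_response_s=0.8, throughput_rps=12.0, error_rate_pct=0.5)
```

This mirrors the suite's "Recommendation Based on Findings" step: if every flag is true the run meets the benchmarks; any false flag is an issue to log and retest after optimization.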
01JNPWZ2WZM5X2HXMF306FWKMA 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-performance-testing/performance-testcase/TC-0019.run.md ce3b7edd2e6ee798bf617d524bd7b5c47250e7b4 --- FII: "TR-0019" test_case_fii: "TC-0019" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: ToDo - Notes: Not yet executed. md 148 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0019\"\ntest_case_fii: \"TC-0019\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: ToDo\n- Notes: Not yet executed.", "attrs": { "FII": "TR-0019", "test_case_fii": "TC-0019", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0019", "test_case_fii": "TC-0019", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2WZ93FR9VBSHDTR4PGF 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-performance-testing/performance-testcase/TC-0018.case.md 5fd35b806e3d04b89b540e318c94e92a71a5ebf7 --- FII: TC-0018 groupId: GRP-007 title: "Ensure that the API responds without any errors while fetching the complete response, maintaining an error rate of 0%." created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["performance testing"] priority: "High" --- ### Description This test verifies that the API operates without generating any errors during its operation under normal load conditions. The focus is to ensure stability and reliability by keeping the error rate at zero throughout the transaction. ### Pre-Conditions: - The server is online and reachable. - API endpoint is accessible and configured in the test environment. - Necessary performance testing tools are installed and configured. ### Test Steps: 1. **Step 1**: Send a request to the API endpoint. 2. **Step 2**: Track the error rate using the JMeter testing tool and capture the results, including response time, throughput, and error rate. ### Expected Result: - The API should fetch responses successfully for all requests. - The error rate during the test should remain 0%. md 1125 2024-12-26 12:17:48 UTC { "frontMatter": "---\nFII: TC-0018\ngroupId: GRP-007\ntitle: \"Ensure that the API responds without any errors while fetching the complete response, maintaining an error rate of 0%.\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"performance testing\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test verifies that the API operates without generating any errors during its operation under normal load conditions. 
The focus is to ensure stability and reliability by keeping the error rate at zero throughout the transaction.\n\n### Pre-Conditions:\n\n- The server is online and reachable.\n- API endpoint is accessible and configured in the test environment.\n- Necessary performance testing tools are installed and configured.\n\n### Test Steps:\n\n1. **Step 1**: Send a request to the API endpoint.\n2. **Step 2**: Track the error rate using the JMeter testing tool and capture the results, including response time, throughput, and error rate.\n\n### Expected Result:\n\n- The API should fetch responses successfully for all requests.\n- The error rate during the test should remain 0%.\n", "attrs": { "FII": "TC-0018", "groupId": "GRP-007", "title": "Ensure that the API responds without any errors while fetching the complete response, maintaining an error rate of 0%.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "performance testing" ], "priority": "High" } } { "FII": "TC-0018", "groupId": "GRP-007", "title": "Ensure that the API responds without any errors while fetching the complete response, maintaining an error rate of 0%.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "performance testing" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
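TC-0018's pass condition (error rate stays at 0%) can be checked directly against a JMeter results file. A minimal sketch, assuming a JTL saved in the default CSV format, whose header includes a `success` column holding `true`/`false` per sample; the function name and inline sample data are invented for illustration:

```python
import csv
import io

def error_rate_from_jtl(jtl_text):
    """Percentage of failed samples in a JMeter JTL results file (CSV format).

    Assumes the default JTL CSV header, which includes a 'success'
    column with 'true' or 'false' for each recorded sample.
    """
    rows = list(csv.DictReader(io.StringIO(jtl_text)))
    failures = sum(1 for r in rows if r["success"].lower() != "true")
    return 100.0 * failures / len(rows)

# Hypothetical two-sample JTL fragment (abbreviated header).
jtl = (
    "timeStamp,elapsed,label,success\n"
    "1,120,GET /assets,true\n"
    "2,130,GET /assets,true\n"
)
rate = error_rate_from_jtl(jtl)
```

For TC-0018 the run passes only when the computed rate is exactly 0.0; any non-`true` sample in the JTL pushes it above the threshold.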
01JNPWZ2X0RK6VYTV7K5Y920DB 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-performance-testing/performance-testcase/TC-0020.run.md 5e678b4d6a5b0b20bb310d73077e588663518bbc --- FII: "TR-0020" test_case_fii: "TC-0020" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: ToDo - Notes: Not yet executed. md 148 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0020\"\ntest_case_fii: \"TC-0020\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: ToDo\n- Notes: Not yet executed.", "attrs": { "FII": "TR-0020", "test_case_fii": "TC-0020", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0020", "test_case_fii": "TC-0020", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X0ZBRE118GSE62MBTC 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-performance-testing/performance-testcase/TC-0019.case.md 4310717e3a9afb86219a77f8f54ba03dd478355f --- FII: TC-0019 groupId: GRP-007 title: "Verify that the API responds to each transaction in less than 10 seconds under defined load conditions." created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["performance testing"] priority: "High" --- ### Description This test measures the time taken by the API to process and respond to a transaction. The goal is to ensure the API meets performance standards by maintaining a response time of less than 10 seconds for each transaction under specified load scenarios. ### Pre-Conditions: - The server is online and reachable. - API endpoint is accessible and configured in the test environment. - Necessary performance testing tools are installed and configured. ### Test Steps: 1. **Step 1**: Send a request to the API endpoint. 2. **Step 2**: Record the response time for each transaction during test execution, then analyze the results to calculate the average, median, and 95th percentile response times. ### Expected Result: - The API responds to each transaction in less than 10 seconds under the defined load conditions. md 1175 2024-12-26 12:17:56 UTC { "frontMatter": "---\nFII: TC-0019\ngroupId: GRP-007\ntitle: \"Verify that the API responds to each transaction in less than 10 seconds under defined load conditions.\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"performance testing\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test measures the time taken by the API to process and respond to a transaction. 
The goal is to ensure the API meets performance standards by maintaining a response time of less than 10 seconds for each transaction under specified load scenarios.\n\n### Pre-Conditions:\n\n- The server is online and reachable.\n- API endpoint is accessible and configured in the test environment.\n- Necessary performance testing tools are installed and configured.\n\n### Test Steps:\n\n1. **Step 1**: Send a request to the API endpoint.\n2. **Step 2**: Record the response time for each transaction during test execution, then analyze the results to calculate the average, median, and 95th percentile response times.\n\n### Expected Result:\n\n- The API responds to each transaction in less than 10 seconds under the defined load conditions.\n", "attrs": { "FII": "TC-0019", "groupId": "GRP-007", "title": "Verify that the API responds to each transaction in less than 10 seconds under defined load conditions.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "performance testing" ], "priority": "High" } } { "FII": "TC-0019", "groupId": "GRP-007", "title": "Verify that the API responds to each transaction in less than 10 seconds under defined load conditions.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "performance testing" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
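Step 2 of TC-0019 asks for the average, median, and 95th percentile of the recorded response times. A minimal sketch of that analysis using Python's standard `statistics` module; the nearest-rank percentile method and the sample values are assumptions for the example:

```python
import statistics

def response_time_stats(elapsed_ms):
    """Average, median, and 95th-percentile response time (ms)."""
    ordered = sorted(elapsed_ms)
    # Nearest-rank 95th percentile: smallest value with at least
    # 95% of samples at or below it, i.e. index ceil(n * 0.95) - 1.
    idx = max(0, -(-len(ordered) * 95 // 100) - 1)
    return {
        "avg_ms": statistics.mean(ordered),
        "median_ms": statistics.median(ordered),
        "p95_ms": ordered[idx],
    }

# Hypothetical per-transaction response times from one test run.
stats = response_time_stats([120, 110, 150, 9000, 130])
```

Note how one slow outlier inflates the average while the median stays representative, which is exactly why the test case asks for all three figures rather than the average alone.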
01JNPWZ2X0E8QKCRTQKYPQ6SCP 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-performance-testing/performance-testcase/TC-0020.case.md 0d77b4733cbcf31269d6749b63afd8c2d8d47bd1 --- FII: TC-0020 groupId: GRP-007 title: Verify Endurance Test Response for a Specified Test Duration. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["performance testing"] priority: "High" --- ### Description The endurance test will validate the system's reliability, stability, and ability to handle prolonged usage without performance degradation, memory leaks, or resource exhaustion. ### Pre-Conditions: - The server is online and reachable. - API endpoint is accessible and configured in the test environment. - Necessary performance testing tools are installed and configured. ### Test Steps: 1. **Step 1**: Set up the performance test environment, ensuring all test prerequisites are met (e.g., hardware and software configurations). 2. **Step 2**: Configure the endurance test in the JMeter performance testing tool: specify the test duration (e.g., 8 hours) and set the expected load (e.g., number of concurrent users, transactions per second). 3. **Step 3**: Start the endurance test, monitor the system’s behavior throughout the specified duration, and capture and analyze key performance metrics, such as response time, throughput, memory usage, CPU utilization, and error rates. 4. **Step 4**: Compare the observed results with expected thresholds for response times and other performance metrics. ### Expected Result: - The system should maintain consistent response times within acceptable thresholds throughout the test duration. - No significant increase in error rates, memory leaks, or resource utilization (e.g., CPU, disk, and memory). - All transactions should be successfully completed without degradation in service quality. 
md 1716 2024-12-26 12:18:26 UTC { "frontMatter": "---\nFII: TC-0020\ngroupId: GRP-007\ntitle: Verify Endurance Test Response for a Specified Test Duration.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"performance testing\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThe endurance test will validate the system's reliability, stability, and ability to handle prolonged usage without performance degradation, memory leaks, or resource exhaustion.\n\n### Pre-Conditions:\n\n- The server is online and reachable.\n- API endpoint is accessible and configured in the test environment.\n- Necessary performance testing tools are installed and configured.\n\n### Test Steps:\n\n1. **Step 1**: Set up the performance test environment, ensuring all test prerequisites are met (e.g., hardware and software configurations).\n2. **Step 2**: Configure the endurance test in the JMeter performance testing tool: specify the test duration (e.g., 8 hours) and set the expected load (e.g., number of concurrent users, transactions per second).\n3. **Step 3**: Start the endurance test, monitor the system’s behavior throughout the specified duration, and capture and analyze key performance metrics, such as response time, throughput, memory usage, CPU utilization, and error rates.\n4. 
**Step 4**: Compare the observed results with expected thresholds for response times and other performance metrics.\n\n### Expected Result:\n\n- The system should maintain consistent response times within acceptable thresholds throughout the test duration.\n- No significant increase in error rates, memory leaks, or resource utilization (e.g., CPU, disk, and memory).\n- All transactions should be successfully completed without degradation in service quality.\n", "attrs": { "FII": "TC-0020", "groupId": "GRP-007", "title": "Verify Endurance Test Response for a Specified Test Duration.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "performance testing" ], "priority": "High" } } { "FII": "TC-0020", "groupId": "GRP-007", "title": "Verify Endurance Test Response for a Specified Test Duration.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "performance testing" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
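The endurance run in TC-0020 boils down to sampling metrics over the test window and flagging threshold breaches. A minimal sketch, assuming a hypothetical `sample_metrics` callback in place of real JMeter output, with a deliberately shortened window so it runs instantly:

```python
import time

def run_endurance(sample_metrics, duration_secs, interval_secs, max_response_secs):
    """Poll a metrics source for the test window and collect threshold breaches.

    `sample_metrics` is a hypothetical callback returning one reading; a real
    run would pull response times and error rates from the load tool's output
    (e.g., a JMeter results file) instead.
    """
    breaches = []
    deadline = time.monotonic() + duration_secs
    while time.monotonic() < deadline:
        reading = sample_metrics()
        if reading["response_time"] > max_response_secs or reading["error_rate"] > 0.01:
            breaches.append(reading)
        time.sleep(interval_secs)
    return breaches

# Deliberately shortened window with a stub that always reports healthy values
healthy = lambda: {"response_time": 0.4, "error_rate": 0.0}
breaches = run_endurance(healthy, duration_secs=0.05, interval_secs=0.01,
                         max_response_secs=0.5)
```

An 8-hour run would use the same loop with `duration_secs=8 * 3600` and a much coarser sampling interval.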
01JNPWZ2X05HSSGFMY1CS0QSXB 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-performance-testing/performance-testcase/TC-0017.case.md 2f907defd16e8b6a7ab5b01eb9472b3c267da90f --- FII: TC-0017 groupId: GRP-007 title: "Verify that the API responds within the benchmark time when communicating with the server." created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["performance testing"] priority: "High" --- ### Description This test case ensures that the API meets the performance benchmark by responding within the expected time when interacting with the server. It validates that the response time is consistent and does not exceed acceptable limits under specified conditions. ### Pre-Conditions: - The server is online and reachable. - API endpoint is accessible and configured in the test environment. - Benchmark response time is predefined (e.g., ≤ 500ms). - Necessary performance testing tools are installed and configured. ### Test Steps: 1. **Step 1**: Send a request to the API endpoint. 2. **Step 2**: Verify the response time for each request in the JMeter testing tool. ### Expected Result: - The API responds within the benchmark time for all requests (e.g., ≤ 500ms). - No significant deviations or delays are observed across multiple iterations. md 1144 2024-12-26 12:17:44 UTC { "frontMatter": "---\nFII: TC-0017\ngroupId: GRP-007\ntitle: \"Verify that the API responds within the benchmark time when communicating with the server.\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"performance testing\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case ensures that the API meets the performance benchmark by responding within the expected time when interacting with the server. 
It validates that the response time is consistent and does not exceed acceptable limits under specified conditions.\n\n### Pre-Conditions:\n\n- The server is online and reachable.\n- API endpoint is accessible and configured in the test environment.\n- Benchmark response time is predefined (e.g., ≤ 500ms).\n- Necessary performance testing tools are installed and configured.\n\n### Test Steps:\n\n1. **Step 1**: Send a request to the API endpoint.\n2. **Step 2**: Verify the response time for each request in the JMeter testing tool.\n\n### Expected Result:\n\n- The API responds within the benchmark time for all requests (e.g., ≤ 500ms).\n- No significant deviations or delays are observed across multiple iterations.\n", "attrs": { "FII": "TC-0017", "groupId": "GRP-007", "title": "Verify that the API responds within the benchmark time when communicating with the server.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "performance testing" ], "priority": "High" } } { "FII": "TC-0017", "groupId": "GRP-007", "title": "Verify that the API responds within the benchmark time when communicating with the server.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "performance testing" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
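TC-0017's pass criterion (response time within a predefined benchmark, e.g. ≤ 500 ms) can be sketched as follows; the timed call here is a fast local stub standing in for the real server round trip, and the benchmark constant is an assumption taken from the pre-conditions:

```python
import time

# Predefined benchmark from the pre-conditions (e.g., <= 500 ms); an assumption here
BENCHMARK_SECS = 0.5

def measure(call):
    """Time one invocation of `call`, a stand-in for a single API request."""
    start = time.perf_counter()
    call()
    return time.perf_counter() - start

def within_benchmark(elapsed_secs, benchmark_secs=BENCHMARK_SECS):
    """True when the measured response time meets the benchmark."""
    return elapsed_secs <= benchmark_secs

# Illustrative: a cheap local computation in place of the real server request
elapsed = measure(lambda: sum(range(1000)))
```

In practice JMeter records these timings per sampler; the check above is the same comparison its assertions apply.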
01JNPWZ2X1BWQQCMP2SNBK89SE 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-performance-testing/performance-testcase/TC-0018.run.md 6bac3f5f71c5ebfee25b9e89ffd772ba8036882a --- FII: "TR-0018" test_case_fii: "TC-0018" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: To Do - Notes: Not yet executed. md 148 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0018\"\ntest_case_fii: \"TC-0018\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: To Do\n- Notes: Not yet executed.", "attrs": { "FII": "TR-0018", "test_case_fii": "TC-0018", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0018", "test_case_fii": "TC-0018", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X1WS0J9RFTNBMNB3XQ 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/qf-suite.md 4f3050914553b5ea050444f49bf73ceb56de2727 --- id: SUT-003 projectId: PRJ-001 name: "Functional Test Suite" description: "To validate the functionality, reliability, and accuracy of the 10 APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation." created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" tags: ["functional testing"] --- ## Scope of Work The testing will cover the following key activities across 10 APIs: ### Functional Testing - Verify the accuracy of each API endpoint against defined test cases and the provided API documentation. - Validate input and response parameters, including headers and status codes. - Conduct boundary value analysis and test edge cases, such as handling empty requests, invalid inputs, and other unexpected scenarios. - Confirm the correctness and completeness of the data retrieved by APIs. - Ensure APIs effectively handle edge cases like invalid serial numbers or missing data. 
md 954 2024-12-26 12:01:22 UTC { "frontMatter": "---\nid: SUT-003\nprojectId: PRJ-001\nname: \"Functional Test Suite\"\ndescription: \"To validate the functionality, reliability, and accuracy of the 10 APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation.\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntags: [\"functional testing\"]\n---\n", "body": "\n## Scope of Work\n\nThe testing will cover the following key activities across 10 APIs:\n\n### Functional Testing\n\n- Verify the accuracy of each API endpoint against defined test cases and the provided API documentation.\n- Validate input and response parameters, including headers and status codes.\n- Conduct boundary value analysis and test edge cases, such as handling empty requests, invalid inputs, and other unexpected scenarios.\n- Confirm the correctness and completeness of the data retrieved by APIs.\n- Ensure APIs effectively handle edge cases like invalid serial numbers or missing data.\n", "attrs": { "id": "SUT-003", "projectId": "PRJ-001", "name": "Functional Test Suite", "description": "To validate the functionality, reliability, and accuracy of the 10 APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "tags": [ "functional testing" ] } } { "id": "SUT-003", "projectId": "PRJ-001", "name": "Functional Test Suite", "description": "To validate the functionality, reliability, and accuracy of the 10 APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "tags": [ "functional testing" ] } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X1FNAXG3GG7Q6NW03P 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0008.case.md 2b97864bffc1ad41b6b1525118af9b46f0314a49 --- FII: TC-0008 groupId: GRP-003 title: Verify that if the Dashboard Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["cycle-count"] priority: "High" --- ### Description This test case is designed to verify the behavior of the API when the "Dashboard Count" field in the response is provided with non-integer values, such as special characters, decimals, or leading zeros. The API should throw a 400 Bad Request error when the field contains invalid data. ### Pre-Conditions: - The API endpoint `/dashboard` should be functional. - The API should be accessible and return a valid JSON response. ### Test Steps: 1. **Step 1**: Send a GET request to the `/dashboard` API endpoint. 2. **Step 2**: Review the JSON response from the server to verify that the "valueInteger" field under the "Dashboard_Count_Url" extension contains a value that is not an integer (e.g., contains special characters, decimals, or leading zeros). ### Expected Result: - The response should contain a 400 Bad Request status code. - An error message should be provided indicating that the "Dashboard Count" value is invalid. 
md 1275 2024-12-26 12:06:54 UTC { "frontMatter": "---\nFII: TC-0008\ngroupId: GRP-003\ntitle: Verify that if the Dashboard Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case is designed to verify the behavior of the API when the \"Dashboard Count\" field in the response is provided with non-integer values, such as special characters, decimals, or leading zeros. The API should throw a 400 Bad Request error when the field contains invalid data.\n\n### Pre-Conditions:\n\n- The API endpoint `/dashboard` should be functional.\n- The API should be accessible and return a valid JSON response.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to the `/dashboard` API endpoint.\n2. **Step 2**: Review the JSON response from the server to verify that the \"valueInteger\" field under the \"Dashboard_Count_Url\" extension contains a value that is not an integer (e.g., contains special characters, decimals, or leading zeros).\n\n### Expected Result:\n\n- The response should contain a 400 Bad Request status code.\n- An error message should be provided indicating that the \"Dashboard Count\" value is invalid.\n", "attrs": { "FII": "TC-0008", "groupId": "GRP-003", "title": "Verify that if the Dashboard Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } } { "FII": "TC-0008", "groupId": "GRP-003", "title": "Verify that if the Dashboard Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.", "created_by": 
"arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
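TC-0008 hinges on values that only differ from a valid integer at the text level (a leading-zero string such as "007", or a decimal). A sketch of that validation; the extension layout with a `Dashboard_Count_Url` entry follows the test case's description and is otherwise an assumption:

```python
import json
import re

# A strict integer: "0" or an optionally signed sequence with no leading zero
STRICT_INT = re.compile(r"^(0|-?[1-9][0-9]*)$")

def dashboard_count_is_valid(raw_json):
    """Check the Dashboard Count against the strict-integer rule of TC-0008.

    The parsed value is rendered back to text and matched against a strict
    integer pattern, rejecting decimals (3.5) and string-typed values such
    as "007". Note that a bare leading-zero number like 007 is already
    invalid JSON and would make json.loads raise.
    """
    doc = json.loads(raw_json)
    for ext in doc.get("extension", []):
        if ext.get("url") == "Dashboard_Count_Url":
            return STRICT_INT.match(str(ext.get("valueInteger"))) is not None
    return False  # a missing field also fails the check

good = '{"extension": [{"url": "Dashboard_Count_Url", "valueInteger": 42}]}'
bad = '{"extension": [{"url": "Dashboard_Count_Url", "valueInteger": "007"}]}'
```

A test harness would expect the server itself to reject such payloads with 400 Bad Request; the function above is the client-side counterpart of that rule.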
01JNPWZ2X1RPXM5WNXP9E2Q4M2 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0005.run-1.result.json ca84e8ba90109e54770c3731277a0b1a5abd43f6 { "test_case_fii": "TC-0005", "title": "Verify that the Daily Cycle Count field is present inside the Response Data.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a GET request to the `/dashboard` API endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Check the response to verify that the `Daily_Cycle_Count_Url` is present in the `extension` section of the returned JSON data.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" }, { "step": 3, "stepname": "Ensure that the corresponding value of `valueInteger` is 3 under the `Daily_Cycle_Count_Url`", "status": "passed", "start_time": "2024-12-15T08:45:13.042Z", "end_time": "2024-12-15T08:45:13.045Z" } ] } json 1157 2024-12-17 17:44:34 UTC 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X2SQWZJ4FEKTV9F41N 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0012.case.md 64b638d1c32e2e9d6ea4c73a375a12fc8721785f --- FII: TC-0012 groupId: GRP-003 title: Verify that the response for System & Dashboard is of integer type. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Manual" tags: ["cycle-count"] priority: "High" --- ### Description This test case checks the API response to ensure that the values for "System" and "Dashboard" are returned as integer values. The API should return the expected response structure, with "valueInteger" containing integer values. ### Pre-Conditions: - The `/dashboard` API endpoint is operational and accessible. - Authentication (if required) is provided, and the request can be successfully executed. - The API response contains the `extension` field with the expected data structure. ### Test Steps: 1. **Step 1**: Send a GET request to the `/dashboard` API endpoint. 2. **Step 2**: Review the JSON response from the server to ensure the System and Dashboard fields contain integer values. ### Expected Result: - The response body contains integer values for the System and Dashboard fields. md 1029 2025-01-01 09:56:38 UTC { "frontMatter": "---\nFII: TC-0012\ngroupId: GRP-003\ntitle: Verify that the response for System & Dashboard is of integer type.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Manual\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case checks the API response to ensure that the values for \"System\" and \"Dashboard\" are returned as integer values. 
The API should return the expected response structure, with \"valueInteger\" containing integer values.\n\n### Pre-Conditions:\n\n- The `/dashboard` API endpoint is operational and accessible.\n- Authentication (if required) is provided, and the request can be successfully executed.\n- The API response contains the `extension` field with the expected data structure.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to the `/dashboard` API endpoint.\n2. **Step 2**: Review the JSON response from the server to ensure the System and Dashboard fields contain integer values.\n\n### Expected Result:\n\n- The response body contains integer values for the System and Dashboard fields.\n", "attrs": { "FII": "TC-0012", "groupId": "GRP-003", "title": "Verify that the response for System & Dashboard is of integer type.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "cycle-count" ], "priority": "High" } } { "FII": "TC-0012", "groupId": "GRP-003", "title": "Verify that the response for System & Dashboard is of integer type.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "cycle-count" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X208P04FVYQZS576GN 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0006.run.md 0a2834db28c798de31501dc2182e917e71e28c77 --- FII: "TR-0006" test_case_fii: "TC-0006" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. md 165 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0006\"\ntest_case_fii: \"TC-0006\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0006", "test_case_fii": "TC-0006", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0006", "test_case_fii": "TC-0006", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X22PBW5PC022Q81E72 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0009.run-1.result.json 1653bdef4d0bb67913658e6d9e0e593bc0ca67bf { "test_case_fii": "TC-0009", "title": "Verify that the System Count in the response is greater than 0.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server to confirm it contains the `extension` field, which should include a `System_Count_Url` entry, and that the `valueInteger` is greater than 0.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } json 912 2024-12-17 08:47:00 UTC 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X292KGGWQJ5P37TN4N 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0008.run.md a7eebddb1e66b2a032e10eafffffec673b41cd7b --- FII: "TR-0008" test_case_fii: "TC-0008" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. md 165 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0008\"\ntest_case_fii: \"TC-0008\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0008", "test_case_fii": "TC-0008", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0008", "test_case_fii": "TC-0008", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X2626R3X8MGXX2ZHK4 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/qf-case-group.md 2a0ec055a283bc36ac13e4db6b526472834875c4 --- id: GRP-003 SuiteId: SUT-003 planId: ["PLN-003"] name: "Dashboard Test Cases" description: "Group of test cases designed to validate the integration capabilities of Dashboard, focusing on data ingestion, processing integrity, and reporting functionalities to ensure compliance and performance standards are met." created_by: "arun-ramanan@netspective.in" created_at: "2024-11-01" tags: ["integration testing", "data validation", "reporting"] --- ### Overview This test case group is structured to ensure the seamless functionality and reliability of dashboard by validating key integration points and performance metrics: - **Data Ingestion**: Verifying capability to handle multiple data formats (JSON, CSV, XML) without errors or data loss. - **Data Processing Integrity**: Ensuring that all ingested data is accurately processed and retains integrity throughout. - **Reporting Accuracy**: Validating that generated reports reflect the processed data accurately and meet compliance requirements. - **Performance Under Load**: Testing the system's ability to handle concurrent ingestion requests and maintain performance benchmarks. - **Automated Testing**: Facilitating integration into CI/CD pipelines for consistent testing and validation of new releases. 
md 1267 2024-12-27 12:43:34 UTC { "frontMatter": "---\nid: GRP-003\nSuiteId: SUT-003\nplanId: [\"PLN-003\"]\nname: \"Dashboard Test Cases\"\ndescription: \"Group of test cases designed to validate the integration capabilities of Dashboard, focusing on data ingestion, processing integrity, and reporting functionalities to ensure compliance and performance standards are met.\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-11-01\"\ntags: [\"integration testing\", \"data validation\", \"reporting\"]\n---\n", "body": "\n### Overview\n\nThis test case group is structured to ensure the seamless functionality and reliability of dashboard by validating key integration points and performance metrics:\n\n- **Data Ingestion**: Verifying capability to handle multiple data formats (JSON, CSV, XML) without errors or data loss.\n- **Data Processing Integrity**: Ensuring that all ingested data is accurately processed and retains integrity throughout.\n- **Reporting Accuracy**: Validating that generated reports reflect the processed data accurately and meet compliance requirements.\n- **Performance Under Load**: Testing the system's ability to handle concurrent ingestion requests and maintain performance benchmarks.\n- **Automated Testing**: Facilitating integration into CI/CD pipelines for consistent testing and validation of new releases.\n", "attrs": { "id": "GRP-003", "SuiteId": "SUT-003", "planId": [ "PLN-003" ], "name": "Dashboard Test Cases", "description": "Group of test cases designed to validate the integration capabilities of Dashboard, focusing on data ingestion, processing integrity, and reporting functionalities to ensure compliance and performance standards are met.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-11-01", "tags": [ "integration testing", "data validation", "reporting" ] } } { "id": "GRP-003", "SuiteId": "SUT-003", "planId": [ "PLN-003" ], "name": "Dashboard Test Cases", "description": "Group of test cases designed 
to validate the integration capabilities of Dashboard, focusing on data ingestion, processing integrity, and reporting functionalities to ensure compliance and performance standards are met.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-11-01", "tags": [ "integration testing", "data validation", "reporting" ] } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X3NWJ3AJ3E6TKQVGJ6 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0009.case.md 74c8cbe25dda2a7be37dca56a04d3beef29c3ee0 --- FII: TC-0009 groupId: GRP-003 title: Verify that the System Count in the response is greater than 0. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["cycle-count"] priority: "High" --- ### Description This test case ensures that the API response for the `/dashboard` endpoint includes a valid `System_Count` value that is greater than 0. The test will confirm that the value in the response is correctly populated. ### Pre-Conditions: - The `/dashboard` API endpoint is operational and accessible. - Authentication (if required) is provided, and the request can be successfully executed. - The API response contains the `extension` field with the expected data structure. ### Test Steps: 1. **Step 1**: Send a GET request to the `/dashboard` API endpoint. 2. **Step 2**: Review the JSON response from the server to confirm it contains the `extension` field, which should include a `System_Count_Url` entry, and that the `valueInteger` is greater than 0. ### Expected Result: - The value of `valueInteger` should be greater than 0 (in this example, `2`). md 1113 2024-12-26 12:06:54 UTC { "frontMatter": "---\nFII: TC-0009\ngroupId: GRP-003\ntitle: Verify that the System Count in the response is greater than 0.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case ensures that the API response for the `/dashboard` endpoint includes a valid `System_Count` value that is greater than 0. 
The test will confirm that the value in the response is correctly populated.\n\n### Pre-Conditions:\n\n- The `/dashboard` API endpoint is operational and accessible.\n- Authentication (if required) is provided, and the request can be successfully executed.\n- The API response contains the `extension` field with the expected data structure.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to the `/dashboard` API endpoint.\n2. **Step 2**: Review the JSON response from the server to confirm it contains the `extension` field, which should include a `System_Count_Url` entry, and that the `valueInteger` is greater than 0.\n\n### Expected Result:\n\n- The value of `valueInteger` should be greater than 0 (in this example, `2`).\n", "attrs": { "FII": "TC-0009", "groupId": "GRP-003", "title": "Verify that the System Count in the response is greater than 0.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } } { "FII": "TC-0009", "groupId": "GRP-003", "title": "Verify that the System Count in the response is greater than 0.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
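Extracting `valueInteger` from the `extension` list, as the System Count check describes, might look like this; the response shape (an `extension` list of objects keyed by `url`) mirrors what the test cases describe, and the sample payload is illustrative rather than a recorded server response:

```python
import json

def extension_value(doc, url):
    """Return `valueInteger` for the extension entry with the given `url`.

    Returns None when no entry matches, so callers can distinguish a
    missing field from a zero count.
    """
    for ext in doc.get("extension", []):
        if ext.get("url") == url:
            return ext.get("valueInteger")
    return None

# Illustrative payload, not a recorded server response
sample = json.loads('{"extension": [{"url": "System_Count_Url", "valueInteger": 2}]}')
system_count = extension_value(sample, "System_Count_Url")
```

The pass criterion is then simply `system_count is not None and system_count > 0`.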
01JNPWZ2X39AEPMDZB7EG6NB3R 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0007.run-1.result.json c5b775c00f6c283e6390b3cfb349aa563b329b87 { "test_case_fii": "TC-0007", "title": "Verify that if the System Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.", "status": "failed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server to verify that the `valueInteger` field under the `System_Count_Url` extension contains a value that is not an integer (e.g., contains special characters, decimals, or leading zeros).", "status": "failed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } json 999 2025-01-01 09:56:18 UTC 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X33SG7X6X3XNAZYD4R 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0011.run.md 3519c683a2721ab3e0cbb1c84f00f407cd8eafaa --- FII: "TR-0011" test_case_fii: "TC-0011" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. md 165 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0011\"\ntest_case_fii: \"TC-0011\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0011", "test_case_fii": "TC-0011", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0011", "test_case_fii": "TC-0011", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X35G4CAJHT0T3CK06F 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0011.run-1.result.json 2205aa115c6ae5244f4ed67a494b805de048781d { "test_case_fii": "TC-0011", "title": "Verify that the Dashboard Count is always greater than or equal to the System Count.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server to confirm it contains the `extension` field, which should include `Dashboard_Count_Url` and `System_Count_Url` values.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } json 911 2024-12-17 08:47:00 UTC 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X3G3YC2R44QB08HJEV 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0010.case.md 09d7a6f7941260155b8ff4fa6baa990c1c0e5b37 --- FII: TC-0010 groupId: GRP-003 title: Verify that the Dashboard Count in the response is greater than 0. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Manual" tags: ["cycle-count"] priority: "High" --- ### Description This test case ensures that the API response for the `/dashboard` endpoint includes a valid `Dashboard_Count` value that is greater than 0. The test will confirm that the value in the response is correctly populated. ### Pre-Conditions: - The `/dashboard` API endpoint is operational and accessible. - Authentication (if required) is provided, and the request can be successfully executed. - The API response contains the `extension` field with the expected data structure. ### Test Steps: 1. **Step 1**: Send a GET request to the `/dashboard` API endpoint. 2. **Step 2**: Review the JSON response from the server to confirm it contains the `extension` field, which should include a `Lifetime_Cycle_Count_Url` entry whose `valueInteger` is greater than 0. ### Expected Result: - The value of `valueInteger` should be greater than 0 (in this example, `2`). md 1118 2025-01-01 09:49:00 UTC { "frontMatter": "---\nFII: TC-0010\ngroupId: GRP-003\ntitle: Verify that the Dashboard Count in the response is greater than 0.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Manual\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case ensures that the API response for the `/dashboard` endpoint includes a valid `Dashboard_Count` value that is greater than 0. 
The test will confirm that the value in the response is correctly populated.\n\n### Pre-Conditions:\n\n- The `/dashboard` API endpoint is operational and accessible.\n- Authentication (if required) is provided, and the request can be successfully executed.\n- The API response contains the `extension` field with the expected data structure.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to the `/dashboard` API endpoint.\n2. **Step 2**: Review the JSON response from the server to confirm it contains the `extension` field, which should include a `Lifetime_Cycle_Count_Url` entry whose `valueInteger` is greater than 0.\n\n### Expected Result:\n\n- The value of `valueInteger` should be greater than 0 (in this example, `2`).\n", "attrs": { "FII": "TC-0010", "groupId": "GRP-003", "title": "Verify that the Dashboard Count in the response is greater than 0.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "cycle-count" ], "priority": "High" } } { "FII": "TC-0010", "groupId": "GRP-003", "title": "Verify that the Dashboard Count in the response is greater than 0.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "cycle-count" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X42QWJE8B0XMHAVT57 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0011.case.md 3cf54a16b185b5557cb1833c39c1b8aef4fe4a1f --- FII: TC-0011 groupId: GRP-003 title: Verify that the Dashboard Count is always greater than or equal to the System Count. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["cycle-count"] priority: "High" --- ### Description This test verifies that the Dashboard Count, which represents the total number of cycle counts done up to the current date, is always greater than or equal to the System Count, except on the first run. ### Pre-Conditions: - The `/dashboard` API endpoint is operational and accessible. - Authentication (if required) is provided, and the request can be successfully executed. - The API response contains the `extension` field with the expected data structure. ### Test Steps: 1. **Step 1**: Send a GET request to the `/dashboard` API endpoint. 2. **Step 2**: Review the JSON response from the server to confirm it contains the `extension` field, which should include `System_Count_Url` and `Dashboard_Count_Url` values. 
### Expected Result: - The response body contains the proper structure: md 1069 2024-12-26 12:06:52 UTC { "frontMatter": "---\nFII: TC-0011\ngroupId: GRP-003\ntitle: Verify that the Dashboard Count is always greater than or equal to the System Count.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test verifies that the Dashboard Count, which represents the total number of cycle counts done up to the current date, is always greater than or equal to the System Count, except on the first run.\n\n### Pre-Conditions:\n\n- The `/dashboard` API endpoint is operational and accessible.\n- Authentication (if required) is provided, and the request can be successfully executed.\n- The API response contains the `extension` field with the expected data structure.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to the `/dashboard` API endpoint.\n2. **Step 2**: Review the JSON response from the server to confirm it contains the `extension` field, which should include `System_Count_Url` and `Dashboard_Count_Url` values.\n\n### Expected Result:\n\n- The response body contains the proper structure:\n", "attrs": { "FII": "TC-0011", "groupId": "GRP-003", "title": "Verify that the Dashboard Count is always greater than or equal to the System Count.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } } { "FII": "TC-0011", "groupId": "GRP-003", "title": "Verify that the Dashboard Count is always greater than or equal to the System Count.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X4XHE0RER0PP375ETA 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0008.run-1.result.json d29b6e0135d42e5cb8d0ea252e2a7860f5b4d540 { "test_case_fii": "TC-0008", "title": "Verify that if the Dashboard Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server to verify that the valueInteger field under the Dashboard_Count_Url extension contains a value that is not an integer (e.g., special characters, decimals, or leading zeros).", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } json 1045 2024-12-17 08:47:00 UTC 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X4VEKETQYWD160835N 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0007.case.md 79b4ccfb263eb9aa2974b40b00b69f0fc4b5b019 --- FII: TC-0007 groupId: GRP-003 title: Verify that if the System Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["cycle-count"] priority: "High" --- ### Description This test case is designed to verify the behavior of the API when the "System_count" field in the response is provided with non-integer values, such as special characters, decimals, or leading zeros. The API should throw a 400 Bad Request error when the field contains invalid data. ### Pre-Conditions: - The API endpoint `/dashboard` should be functional. - The API should be accessible and return a valid JSON response. ### Test Steps: 1. **Step 1**: Send a GET request to the `/dashboard` API endpoint. 2. **Step 2**: Review the JSON response from the server to verify that the "valueInteger" field under the "System_count_Url" extension contains a value that is not an integer (e.g., special characters, decimals, or leading zeros). ### Expected Result: - The response should contain a 400 Bad Request status code. - An error message should be provided indicating that the "System_count" value is invalid. 
md 1263 2024-12-26 12:06:56 UTC { "frontMatter": "---\nFII: TC-0007\ngroupId: GRP-003\ntitle: Verify that if the System Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case is designed to verify the behavior of the API when the \"System_count\" field in the response is provided with non-integer values, such as special characters, decimals, or leading zeros. The API should throw a 400 Bad Request error when the field contains invalid data.\n\n### Pre-Conditions:\n\n- The API endpoint `/dashboard` should be functional.\n- The API should be accessible and return a valid JSON response.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to the `/dashboard` API endpoint.\n2. **Step 2**: Review the JSON response from the server to verify that the \"valueInteger\" field under the \"System_count_Url\" extension contains a value that is not an integer (e.g., special characters, decimals, or leading zeros).\n\n### Expected Result:\n\n- The response should contain a 400 Bad Request status code.\n- An error message should be provided indicating that the \"System_count\" value is invalid.\n", "attrs": { "FII": "TC-0007", "groupId": "GRP-003", "title": "Verify that if the System Count field in the response is not an integer and contains special characters, decimals, or leading zeros, it should throw an error.", "created_by": 
"arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X44RJD3GSBNNN5M10S 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0012.run-1.result.json afc09e4ac90ba00e6ad2f1cd287d377b2559c699 { "test_case_fii": "TC-0012", "title": "Verify that the response for System & Dashboard is of integer type.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server to ensure the System and Dashboard fields contain integer values.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } json 855 2024-12-17 08:47:00 UTC 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X4V3APT5780QG5GVSB 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0009.run.md 849b91f7d3bf331abd6296ad34fa56d37da871b0 --- FII: "TR-0009" test_case_fii: "TC-0009" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. md 165 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0009\"\ntest_case_fii: \"TC-0009\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0009", "test_case_fii": "TC-0009", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0009", "test_case_fii": "TC-0009", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X45KN7APZC6C055273 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0007.run.md cfebf75ab8115a11e8893842c39f7cafec41a9cf --- FII: "TR-0007" test_case_fii: "TC-0007" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Failed - Notes: One or more steps did not pass. md 168 2025-01-01 09:48:22 UTC { "frontMatter": "---\nFII: \"TR-0007\"\ntest_case_fii: \"TC-0007\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "\n### Run Summary\n\n- Status: Failed\n- Notes: One or more steps did not pass.\n", "attrs": { "FII": "TR-0007", "test_case_fii": "TC-0007", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0007", "test_case_fii": "TC-0007", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X4KFHDYFFW9X0HQSAX 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0006.run-1.result.json b50493741c2880674dc7f2a6c84cdd635289da0b { "test_case_fii": "TC-0006", "title": "Verify that the Dashboard Count field is present inside the Response Data when sending a GET request to the /dashboard endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Inspect the response to check if the Dashboard Count field is included inside the extension array in the returned JSON data.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } json 923 2024-12-17 08:47:00 UTC 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X551VC0WMDA952A55S 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0010.run-1.result.json 77b50d5d81db788d86e3d46a36eedcba06ae19fb { "test_case_fii": "TC-0010", "title": "Verify that the Dashboard Count in the response is greater than 0.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/dashboard` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server to confirm it contains the `extension` field, which should include a `Dashboard_Count_Url` entry whose `valueInteger` is greater than 0.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } json 918 2024-12-17 08:47:00 UTC 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X5A4JRDPFZNWK7Y9PG 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0012.run.md 3cfaf60686842a5e515630219f55d3e6165d0601 --- FII: "TR-0012" test_case_fii: "TC-0012" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. md 165 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0012\"\ntest_case_fii: \"TC-0012\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0012", "test_case_fii": "TC-0012", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0012", "test_case_fii": "TC-0012", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X5THSFDQ2XBW7GKBMQ 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0010.run.md 1c567aecb77a4954d3cedbd180dd54ecf23bab05 --- FII: "TR-0010" test_case_fii: "TC-0010" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. md 165 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0010\"\ntest_case_fii: \"TC-0010\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0010", "test_case_fii": "TC-0010", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0010", "test_case_fii": "TC-0010", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X5FBT1GN8P91HCHJPJ 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0006.case.md ff7c2be4cc0952f1b03ded70e2697823d33d7aba --- FII: TC-0006 groupId: GRP-003 title: Verify that the Dashboard Count field is present inside the Response Data when sending a GET request to the /dashboard endpoint. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["cycle-count"] priority: "High" --- ### Description This test case ensures that the API endpoint /dashboard returns the Dashboard Count field as part of the response data. The field should be present inside the extension array, with a valid value of type integer. ### Pre-Conditions: - Ensure that the API is up and running and accessible. - The **/dashboard** endpoint should return a valid response with relevant data. ### Test Steps: 1. **Step 1**: Send a **GET** request to the `/dashboard` endpoint. 2. **Step 2**: Inspect the response to check if the **Dashboard Count** field is included inside the **extension** array in the returned JSON data. ### Expected Result: 1. If the **Dashboard Count** field is present, the response should have the following format: - Status Code: `200 OK` md 1072 2024-12-26 12:06:56 UTC { "frontMatter": "---\nFII: TC-0006\ngroupId: GRP-003\ntitle: Verify that the Dashboard Count field is present inside the Response Data when sending a GET request to the /dashboard endpoint.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case ensures that the API endpoint /dashboard returns the Dashboard Count field as part of the response data. 
The field should be present inside the extension array, with a valid value of type integer.\n\n### Pre-Conditions:\n\n- Ensure that the API is up and running and accessible.\n- The **/dashboard** endpoint should return a valid response with relevant data.\n\n### Test Steps:\n\n1. **Step 1**: Send a **GET** request to the `/dashboard` endpoint.\n2. **Step 2**: Inspect the response to check if the **Dashboard Count** field is included inside the **extension** array in the returned JSON data.\n\n### Expected Result:\n\n1. If the **Dashboard Count** field is present, the response should have the following format:\n - Status Code: `200 OK`\n", "attrs": { "FII": "TC-0006", "groupId": "GRP-003", "title": "Verify that the Dashboard Count field is present inside the Response Data when sending a GET request to the /dashboard endpoint.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } } { "FII": "TC-0006", "groupId": "GRP-003", "title": "Verify that the Dashboard Count field is present inside the Response Data when sending a GET request to the /dashboard endpoint.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "cycle-count" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X5PFS0F31AHQKZ7PMH 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0005.run.md b5e87c0c5019c999f2103597ed1b3f3aa9fd9b7e --- FII: "TR-0005" test_case_fii: "TC-0005" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. md 165 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0005\"\ntest_case_fii: \"TC-0005\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0005", "test_case_fii": "TC-0005", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0005", "test_case_fii": "TC-0005", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X6D45NGENJ0AFZ6QBB 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/dashboard-count/TC-0005.case.md ef7a5fd23bfebdcb24247a424a03b06d719ebfe0 --- FII: TC-0005 groupId: GRP-003 title: Verify that the Dashboard Count field is present inside the Response Data. created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Manual" tags: ["cycle-count"] priority: "High" --- ### Description This test case verifies that the API endpoint `/dashboard` returns the correct response data, specifically ensuring that the `Daily_Cycle_Count` field is present in the JSON response. ### Pre-Conditions: - The API endpoint `/dashboard` is accessible. - The API server is running and available to respond to GET requests. - Proper authentication, if required, is provided for the request. ### Test Steps: 1. **Step 1**: Send a GET request to the `/dashboard` API endpoint. 2. **Step 2**: Check the response to verify that the `Dashboard_Count_Url` is present in the `extension` section of the returned JSON data. 3. **Step 3**: Ensure that the corresponding value of `valueInteger` is 3 under the `Dashboard_Count_Url`. 
### Expected Result: - If the cycle count is available, the API should return a `200` status code with the following JSON data: md 1117 2025-01-01 09:48:50 UTC { "frontMatter": "---\nFII: TC-0005\ngroupId: GRP-003\ntitle: Verify that the Dashboard Count field is present inside the Response Data.\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Manual\"\ntags: [\"cycle-count\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case verifies that the API endpoint `/dashboard` returns the correct response data, specifically ensuring that the `Daily_Cycle_Count` field is present in the JSON response.\n\n### Pre-Conditions:\n\n- The API endpoint `/dashboard` is accessible.\n- The API server is running and available to respond to GET requests.\n- Proper authentication, if required, is provided for the request.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to the `/dashboard` API endpoint.\n2. **Step 2**: Check the response to verify that the `Dashboard_Count_Url` is present in the `extension` section of the returned JSON data.\n3. **Step 3**: Ensure that the corresponding value of `valueInteger` is 3 under the `Dashboard_Count_Url`.\n\n### Expected Result:\n\n- If the cycle count is available, the API should return a `200` status code with the following JSON data:\n", "attrs": { "FII": "TC-0005", "groupId": "GRP-003", "title": "Verify that the Dashboard Count field is present inside the Response Data.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "cycle-count" ], "priority": "High" } } { "FII": "TC-0005", "groupId": "GRP-003", "title": "Verify that the Dashboard Count field is present inside the Response Data.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "cycle-count" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X612PHGEPZA1FEC1ZG 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/test-plan/dashboard-qf-plan.md 41e896e81a2568a8b4e9001966dd00ff1465b0c4 --- id: PLN-003 name: "Functional Dashboard Test Plan" description: "To validate APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. " created_by: "qa-lead@example.com" created_at: "2024-11-01" tags: ["compliance testing"] related_requirements: ["REQ-101", "REQ-102"] --- Validate the APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. ## Scope of Work The testing will cover the following key activities across 10 APIs: ### Functional Testing - Verify the accuracy of each API endpoint against defined test cases and the provided API documentation. - Validate input and response parameters, including headers and status codes. - Conduct boundary value analysis and test edge cases, such as handling empty requests, invalid inputs, and other unexpected scenarios. - Confirm the correctness and completeness of the data retrieved by APIs. - Ensure APIs effectively handle edge cases like invalid serial numbers or missing data. md 1051 2024-12-26 17:31:34 UTC { "frontMatter": "---\nid: PLN-003\nname: \"Functional Dashboard Test Plan\"\ndescription: \"To validate APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. 
\"\ncreated_by: \"qa-lead@example.com\"\ncreated_at: \"2024-11-01\"\ntags: [\"compliance testing\"]\nrelated_requirements: [\"REQ-101\", \"REQ-102\"]\n---\n", "body": "\nValidate the APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation.\n\n## Scope of Work\n\nThe testing will cover the following key activities across 10 APIs:\n\n### Functional Testing\n\n- Verify the accuracy of each API endpoint against defined test cases and the provided API documentation.\n- Validate input and response parameters, including headers and status codes.\n- Conduct boundary value analysis and test edge cases, such as handling empty requests, invalid inputs, and other unexpected scenarios.\n- Confirm the correctness and completeness of the data retrieved by APIs.\n- Ensure APIs effectively handle edge cases like invalid serial numbers or missing data.\n", "attrs": { "id": "PLN-003", "name": "Functional Dashboard Test Plan", "description": "To validate APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. ", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "compliance testing" ], "related_requirements": [ "REQ-101", "REQ-102" ] } } { "id": "PLN-003", "name": "Functional Dashboard Test Plan", "description": "To validate APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. ", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "compliance testing" ], "related_requirements": [ "REQ-101", "REQ-102" ] } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X6SG9XXW30RCENEWVA 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/test-plan/search-qf-plan.md 303e80517b281b5b19d6ad3e2266342ce27e5d04 --- id: PLN-008 name: "Functional Search Test Plan" description: "To validate APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. " created_by: "qa-lead@example.com" created_at: "2024-11-01" tags: ["compliance testing"] related_requirements: ["REQ-101", "REQ-102"] --- Validate the APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. ## Scope of Work The testing will cover the following key activities across 10 APIs: ### Functional Testing - Verify the accuracy of each API endpoint against defined test cases and the provided API documentation. - Validate input and response parameters, including headers and status codes. - Conduct boundary value analysis and test edge cases, such as handling empty requests, invalid inputs, and other unexpected scenarios. - Confirm the correctness and completeness of the data retrieved by APIs. - Ensure APIs effectively handle edge cases like invalid serial numbers or missing data. md 1048 2024-12-26 12:03:38 UTC { "frontMatter": "---\nid: PLN-008\nname: \"Functional Search Test Plan\"\ndescription: \"To validate APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. 
\"\ncreated_by: \"qa-lead@example.com\"\ncreated_at: \"2024-11-01\"\ntags: [\"compliance testing\"]\nrelated_requirements: [\"REQ-101\", \"REQ-102\"]\n---\n", "body": "\nValidate the APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation.\n\n## Scope of Work\n\nThe testing will cover the following key activities across 10 APIs:\n\n### Functional Testing\n\n- Verify the accuracy of each API endpoint against defined test cases and the provided API documentation.\n- Validate input and response parameters, including headers and status codes.\n- Conduct boundary value analysis and test edge cases, such as handling empty requests, invalid inputs, and other unexpected scenarios.\n- Confirm the correctness and completeness of the data retrieved by APIs.\n- Ensure APIs effectively handle edge cases like invalid serial numbers or missing data.\n", "attrs": { "id": "PLN-008", "name": "Functional Search Test Plan", "description": "To validate APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. ", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "compliance testing" ], "related_requirements": [ "REQ-101", "REQ-102" ] } } { "id": "PLN-008", "name": "Functional Search Test Plan", "description": "To validate APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. ", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "compliance testing" ], "related_requirements": [ "REQ-101", "REQ-102" ] } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X7DD883CB7SRGVPFB8 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/test-plan/login-qf-plan.md fdefbde86f784991b9c24f6a4f6dd058e2cdd3a0 --- id: PLN-007 name: "Functional Login Test Plan" description: "To validate APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. " created_by: "qa-lead@example.com" created_at: "2024-11-01" tags: ["compliance testing"] related_requirements: ["REQ-101", "REQ-102"] --- Validate the APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. ## Scope of Work The testing will cover the following key activities across 10 APIs: ### Functional Testing - Verify the accuracy of each API endpoint against defined test cases and the provided API documentation. - Validate input and response parameters, including headers and status codes. - Conduct boundary value analysis and test edge cases, such as handling empty requests, invalid inputs, and other unexpected scenarios. - Confirm the correctness and completeness of the data retrieved by APIs. - Ensure APIs effectively handle edge cases like invalid serial numbers or missing data. md 1047 2024-12-26 17:31:42 UTC { "frontMatter": "---\nid: PLN-007\nname: \"Functional Login Test Plan\"\ndescription: \"To validate APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. 
\"\ncreated_by: \"qa-lead@example.com\"\ncreated_at: \"2024-11-01\"\ntags: [\"compliance testing\"]\nrelated_requirements: [\"REQ-101\", \"REQ-102\"]\n---\n", "body": "\nValidate the APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation.\n\n## Scope of Work\n\nThe testing will cover the following key activities across 10 APIs:\n\n### Functional Testing\n\n- Verify the accuracy of each API endpoint against defined test cases and the provided API documentation.\n- Validate input and response parameters, including headers and status codes.\n- Conduct boundary value analysis and test edge cases, such as handling empty requests, invalid inputs, and other unexpected scenarios.\n- Confirm the correctness and completeness of the data retrieved by APIs.\n- Ensure APIs effectively handle edge cases like invalid serial numbers or missing data.\n", "attrs": { "id": "PLN-007", "name": "Functional Login Test Plan", "description": "To validate APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. ", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "compliance testing" ], "related_requirements": [ "REQ-101", "REQ-102" ] } } { "id": "PLN-007", "name": "Functional Login Test Plan", "description": "To validate APIs by executing functional test cases and ensuring alignment with defined requirements and API documentation. ", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "compliance testing" ], "related_requirements": [ "REQ-101", "REQ-102" ] } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X7421F1Z38S16934BD 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/search/qf-case-group.md 65c0142c440f61917c69a0fda1392e6cb53ef096 --- id: GRP-005 SuiteId: SUT-003 planId: ["PLN-008"] name: "Search API Test Cases" description: "Group of test cases designed to validate the integration capabilities of Search API, focusing on data ingestion, processing integrity, and reporting functionalities to ensure compliance and performance standards are met." created_by: "qa-lead@example.com" created_at: "2024-11-01" tags: ["integration testing", "data validation", "reporting"] --- ### Overview This test case group is structured to ensure the seamless functionality and reliability of Search API by validating key integration points and performance metrics: - **Data Ingestion**: Verifying Search API's capability to handle multiple data formats (JSON, CSV, XML) without errors or data loss. - **Data Processing Integrity**: Ensuring that all ingested data is accurately processed and retains integrity throughout. - **Reporting Accuracy**: Validating that generated reports reflect the processed data accurately and meet compliance requirements. - **Performance Under Load**: Testing the system's ability to handle concurrent ingestion requests and maintain performance benchmarks. - **Automated Testing**: Facilitating integration into CI/CD pipelines for consistent testing and validation of new releases. 
md 1275 2024-12-26 17:34:48 UTC { "frontMatter": "---\nid: GRP-005\nSuiteId: SUT-003\nplanId: [\"PLN-008\"]\nname: \"Search API Test Cases\"\ndescription: \"Group of test cases designed to validate the integration capabilities of Search API, focusing on data ingestion, processing integrity, and reporting functionalities to ensure compliance and performance standards are met.\"\ncreated_by: \"qa-lead@example.com\"\ncreated_at: \"2024-11-01\"\ntags: [\"integration testing\", \"data validation\", \"reporting\"]\n---\n", "body": "\n### Overview\n\nThis test case group is structured to ensure the seamless functionality and reliability of Search API by validating key integration points and performance metrics:\n\n- **Data Ingestion**: Verifying Search API's capability to handle multiple data formats (JSON, CSV, XML) without errors or data loss.\n- **Data Processing Integrity**: Ensuring that all ingested data is accurately processed and retains integrity throughout.\n- **Reporting Accuracy**: Validating that generated reports reflect the processed data accurately and meet compliance requirements.\n- **Performance Under Load**: Testing the system's ability to handle concurrent ingestion requests and maintain performance benchmarks.\n- **Automated Testing**: Facilitating integration into CI/CD pipelines for consistent testing and validation of new releases.\n", "attrs": { "id": "GRP-005", "SuiteId": "SUT-003", "planId": [ "PLN-008" ], "name": "Search API Test Cases", "description": "Group of test cases designed to validate the integration capabilities of Search API, focusing on data ingestion, processing integrity, and reporting functionalities to ensure compliance and performance standards are met.", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "integration testing", "data validation", "reporting" ] } } { "id": "GRP-005", "SuiteId": "SUT-003", "planId": [ "PLN-008" ], "name": "Search API Test Cases", "description": "Group of test cases 
designed to validate the integration capabilities of Search API, focusing on data ingestion, processing integrity, and reporting functionalities to ensure compliance and performance standards are met.", "created_by": "qa-lead@example.com", "created_at": "2024-11-01", "tags": [ "integration testing", "data validation", "reporting" ] } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X7QFFJ9Q7V3ATHTJYS 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/search/TC-0013.run.md 9fb0d905fc34e2a3ec91533ea311dfc44c44782e --- FII: "TR-0013" test_case_fii: "TC-0013" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. md 165 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0013\"\ntest_case_fii: \"TC-0013\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0013", "test_case_fii": "TC-0013", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0013", "test_case_fii": "TC-0013", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X79YZ98AFQZ9MAHS32 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/search/TC-0014.run-1.result.json 118fb1646e71c2ea50aa7ddc85ffccbdf570b239 { "test_case_fii": "TC-0014", "title": "Ensure that the `Performed Date & Time` of the procedure is present in the response data.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/search` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server response should include the `PerformedDateTime` field with the correct date and time.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } json 883 2024-12-17 08:47:00 UTC 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X7BT8M82CQ6SSP6H2T 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/search/TC-0013.case.md 1808862e899172433bae31d8161a070f443bdd72 --- FII: TC-0013 groupId: GRP-005 title: "Ensure that the information retrieved for the search listing." created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Manual" tags: ["wild-cycle"] priority: "High" --- ### Description This test case verifies the accuracy and completeness of the information retrieved from the `/search` endpoint. The objective is to ensure that the data returned for the last procedure cycle is consistent, up-to-date, and complete, including all relevant details about the procedures performed, device information, and the status of each procedure. ### Pre-Conditions: - The `/search` API endpoint is operational and accessible. - Authentication (if required) is provided, and the request can be successfully executed. - The device has completed the search API procedure, and relevant data is available in the system. ### Test Steps: 1. **Step 1**: Send a GET request to the `/search` API endpoint. 2. **Step 2**: Review the JSON response from the server. The response should include a JSON object with a "resourceType" of "Bundle," containing an array of entries with relevant details about each procedure performed. The procedures should have accurate "performedDateTime," "outcome," "status," and associated references. ### Expected Result: - The response should contain a "Bundle" resource with "Procedure" entries showing the completed procedures, accurate "performedDateTime" and "outcome" data. - No missing or incorrect data for any procedure performed. 
md 1502 2025-01-01 09:49:10 UTC { "frontMatter": "---\nFII: TC-0013\ngroupId: GRP-005\ntitle: \"Ensure that the information retrieved for the search listing.\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Manual\"\ntags: [\"wild-cycle\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test case verifies the accuracy and completeness of the information retrieved from the `/search`. The objective is to ensure that the data returned for the last procedure cycle is consistent, up-to-date, and complete, including all relevant details about the procedures performed, device information, and the status of each procedure.\n\n### Pre-Conditions:\n\n- The `/search` API endpoint is operational and accessible.\n- Authentication (if required) is provided, and the request can be successfully executed.\n- The device has completed the search API procedure, and relevant data is available in the system.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to `/search` API endpoint.\n2. **Step 2**: Review the JSON response from the server response should include a JSON object with a \"resourceType\" of \"Bundle,\" containing an array of entries with relevant details about each procedure performed. 
The procedures should have accurate \"performedDateTime,\" \"outcome,\" \"status,\" and associated references.\n\n### Expected Result:\n\n- The response should contain a \"Bundle\" resource with \"Procedure\" entries showing the completed procedures, accurate \"performedDateTime\" and \"outcome\" data.\n- No missing or incorrect data for any procedure performed.\n", "attrs": { "FII": "TC-0013", "groupId": "GRP-005", "title": "Ensure that the information retrieved for the search listing.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "wild-cycle" ], "priority": "High" } } { "FII": "TC-0013", "groupId": "GRP-005", "title": "Ensure that the information retrieved for the search listing.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Manual", "tags": [ "wild-cycle" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
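The expected-result check in TC-0013 above (a FHIR-style "Bundle" whose "Procedure" entries carry "performedDateTime", "status", and "outcome") can be sketched as a response-shape validator. This is a hypothetical helper, not part of Qualityfolio or surveilr; the field names come from the case text above, and the sample payload is illustrative.

```python
# Hypothetical validator for the TC-0013 expected result: the /search
# response should be a Bundle whose Procedure entries each carry
# performedDateTime, status, and outcome. Returns a list of findings;
# an empty list means the shape check passed.
def validate_search_bundle(body: dict) -> list[str]:
    errors = []
    if body.get("resourceType") != "Bundle":
        errors.append("resourceType is not 'Bundle'")
    for i, entry in enumerate(body.get("entry", [])):
        resource = entry.get("resource", {})
        if resource.get("resourceType") != "Procedure":
            continue  # only Procedure entries are checked here
        for field in ("performedDateTime", "status", "outcome"):
            if field not in resource:
                errors.append(f"entry {i}: missing {field}")
    return errors

# Illustrative payload matching the expected shape.
sample = {
    "resourceType": "Bundle",
    "entry": [{"resource": {"resourceType": "Procedure",
                            "performedDateTime": "2024-12-15T08:45:00Z",
                            "status": "completed",
                            "outcome": {"text": "successful"}}}],
}
print(validate_search_bundle(sample))  # → []
```

A check like this is what turns the manual "Step 2: review the JSON response" into something an automated run (as in the `.run-1.result.json` records) can assert on.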
01JNPWZ2X8QZK60C1W0D7NTQW0 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/search/TC-0016.case.md 8ce6ba624c2c3ef8d2df35c286603fb8714c68ea --- FII: TC-0016 groupId: GRP-005 title: "Ensure that the 'Result' field exists within the response data when a GET request is sent to the `/search` endpoint." created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["wild-cycle"] priority: "High" --- ### Description This test verifies that the 'Result' field is included in the response data returned by the `/search` endpoint. The test ensures the correct structure and expected values are returned, demonstrating successful execution and compliance with the API requirements. ### Pre-Conditions: - The `/search` API endpoint is operational and accessible. - Authentication (if required) is provided, and the request can be successfully executed. - The device has completed the wild cycle procedure, and relevant data is available in the system. ### Test Steps: 1. **Step 1**: Send a GET request to the `/search` API endpoint. 2. **Step 2**: Review the JSON response from the server. Verify that the 'Result' field exists within the response data. ### Expected Result: - The response data includes the 'Result' field with the expected structure. md 1122 2024-12-26 12:10:38 UTC { "frontMatter": "---\nFII: TC-0016\ngroupId: GRP-005\ntitle: \"Ensure that the 'Result' field exists within the response data when a GET request is sent to the `/search` endpoint.\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"wild-cycle\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test verifies that the 'Result' is included in the response data returned by the `/search` endpoint. 
The test ensures the correct structure and expected values are returned, demonstrating successful execution and compliance with the API requirements.\n\n### Pre-Conditions:\n\n- The `/search` API endpoint is operational and accessible.\n- Authentication (if required) is provided, and the request can be successfully executed.\n- The device has completed the wild cycle procedure, and relevant data is available in the system.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to `/search` API endpoint.\n2. **Step 2**: Review the JSON response from the server response data contains the 'Result' field exists within the response data.\n\n### Expected Result:\n\n- The response data includes the proper structure.\n", "attrs": { "FII": "TC-0016", "groupId": "GRP-005", "title": "Ensure that the 'Result' field exists within the response data when a GET request is sent to the `/search` endpoint.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "wild-cycle" ], "priority": "High" } } { "FII": "TC-0016", "groupId": "GRP-005", "title": "Ensure that the 'Result' field exists within the response data when a GET request is sent to the `/search` endpoint.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "wild-cycle" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X899DZ2JACMT4CAEH9 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/search/TC-0013.run-1.result.json b78652a225ff82324bad55a079a68772faca7883 { "test_case_fii": "TC-0013", "title": "Ensure that the information retrieved for the search listing.", "status": "passed", "start_time": "2024-12-15T08:45:11.753Z", "end_time": "2024-12-15T08:45:13.250Z", "total_duration": "1.50 seconds", "steps": [ { "step": 1, "stepname": "Send a **GET** request to the `/search` endpoint.", "status": "passed", "start_time": "2024-12-15T08:45:11.762Z", "end_time": "2024-12-15T08:45:13.003Z" }, { "step": 2, "stepname": "Review the JSON response from the server response should include a JSON object with a resourceType of Bundle, containing an array of entries with relevant details about each procedure performed. The procedures should have accurate performedDateTime, outcome, status, and associated references.", "status": "passed", "start_time": "2024-12-15T08:45:13.014Z", "end_time": "2024-12-15T08:45:13.041Z" } ] } json 1022 2024-12-17 08:47:00 UTC 2025-03-06 23:34:33 UNKNOWN
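The `TC-00xx.run-1.result.json` records above share one shape: a case-level status plus per-step timings. A minimal sketch of producing that shape from a list of step callables follows; the function name and step callables are hypothetical, not part of Qualityfolio, and only the output keys mirror the records above.

```python
import json
import time
from datetime import datetime, timezone

def run_case(test_case_fii: str, title: str, steps):
    """Execute (stepname, callable) pairs and emit a result record in the
    same shape as the TC-00xx.run-1.result.json files: case status, ISO-8601
    step timings, and a total duration string."""
    def now() -> str:
        ts = datetime.now(timezone.utc).isoformat(timespec="milliseconds")
        return ts.replace("+00:00", "Z")

    started = time.monotonic()
    record = {"test_case_fii": test_case_fii, "title": title,
              "status": "passed", "start_time": now(), "steps": []}
    for i, (stepname, fn) in enumerate(steps, start=1):
        entry = {"step": i, "stepname": stepname, "start_time": now()}
        try:
            fn()  # a step passes unless it raises AssertionError
            entry["status"] = "passed"
        except AssertionError:
            entry["status"] = "failed"
            record["status"] = "failed"
        entry["end_time"] = now()
        record["steps"].append(entry)
    record["end_time"] = now()
    record["total_duration"] = f"{time.monotonic() - started:.2f} seconds"
    return record

# Illustrative run mirroring TC-0016's single automated check.
result = run_case("TC-0016",
                  "Ensure that the 'Result' field exists within the response data.",
                  [("Check 'Result' field in /search response", lambda: None)])
print(json.dumps(result, indent=2))
```

Writing run artifacts through one helper like this keeps every `.result.json` file structurally identical, which is what lets them be ingested as uniform resources in the first place.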
01JNPWZ2X8E13Z3JDYYH0J9TSF 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/search/TC-0016.run.md 36ed98fce8233f83d2ed079c0af8d944f46815b0 --- FII: "TR-0016" test_case_fii: "TC-0016" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. md 165 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0016\"\ntest_case_fii: \"TC-0016\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0016", "test_case_fii": "TC-0016", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0016", "test_case_fii": "TC-0016", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X8DKXPE9950H7B7967 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/search/TC-0014.run.md 5796f1228ec7ed6c34799f2ddcc57ae9f82692c4 --- FII: "TR-0014" test_case_fii: "TC-0014" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. md 165 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0014\"\ntest_case_fii: \"TC-0014\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0014", "test_case_fii": "TC-0014", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0014", "test_case_fii": "TC-0014", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X8PK1XS461VKM8FZCE 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/search/TC-0015.run.md f5f5f61582bbd9f695dce91a7e28e405f8d6dffc --- FII: "TR-0015" test_case_fii: "TC-0015" run_date: "2024-12-15" environment: "Test" --- ### Run Summary - Status: Passed - Notes: All steps executed successfully. md 165 2024-12-17 08:47:00 UTC { "frontMatter": "---\nFII: \"TR-0015\"\ntest_case_fii: \"TC-0015\"\nrun_date: \"2024-12-15\"\nenvironment: \"Test\"\n---\n", "body": "### Run Summary\n- Status: Passed\n- Notes: All steps executed successfully.", "attrs": { "FII": "TR-0015", "test_case_fii": "TC-0015", "run_date": "2024-12-15", "environment": "Test" } } { "FII": "TR-0015", "test_case_fii": "TC-0015", "run_date": "2024-12-15", "environment": "Test" } 2025-03-06 23:34:33 UNKNOWN
01JNPWZ2X9TPAGWYK23R88C8BT 01JNPWZ2WT1HS5PPCHX2MQ2N7G 01JNPWZ2WV8HV0BYSPRCAF8B48 01JNPWZ2WWFPPBS6XC8C883632 /app/www.surveilr.com/lib/service/qualityfolio/rssd/synthetic-asset-tracking/api-functional-testing/search/TC-0015.case.md d98eb787cd256f14da27b859b78f00b43cef713c --- FII: TC-0015 groupId: GRP-005 title: "Ensure that the `MRC result` field exists within the response data when a GET request is sent to the `/search` endpoint." created_by: "arun-ramanan@netspective.in" created_at: "2024-12-15" test_type: "Automation" tags: ["wild-cycle"] priority: "High" --- ### Description This test verifies that the `MRC result` field is included in the response data returned by the `/search` endpoint. The test ensures the correct structure and expected values are returned, demonstrating successful execution and compliance with the API requirements. ### Pre-Conditions: - The `/search` API endpoint is operational and accessible. - Authentication (if required) is provided, and the request can be successfully executed. - The device has completed the wild cycle procedure, and relevant data is available in the system. ### Test Steps: 1. **Step 1**: Send a GET request to the `/search` API endpoint. 2. **Step 2**: Review the JSON response from the server. Verify that the response data contains the `MRC result` object with the expected attributes. ### Expected Result: - The response data includes the `MRC result` object with the expected attributes. md 1132 2024-12-26 12:10:34 UTC { "frontMatter": "---\nFII: TC-0015\ngroupId: GRP-005\ntitle: \"Ensure that the `MRC result` field exists within the response data when a GET request is sent to the `/search` endpoint.\"\ncreated_by: \"arun-ramanan@netspective.in\"\ncreated_at: \"2024-12-15\"\ntest_type: \"Automation\"\ntags: [\"wild-cycle\"]\npriority: \"High\"\n---\n", "body": "\n### Description\n\nThis test verifies that the `MRC result` is included in the response data returned by the `/search` endpoint. 
The test ensures the correct structure and expected values are returned, demonstrating successful execution and compliance with the API requirements.\n\n### Pre-Conditions:\n\n- The `/search` API endpoint is operational and accessible.\n- Authentication (if required) is provided, and the request can be successfully executed.\n- The device has completed the wild cycle procedure, and relevant data is available in the system.\n\n### Test Steps:\n\n1. **Step 1**: Send a GET request to `/search` API endpoint.\n2. **Step 2**: Review the JSON response from the server response data contains the `MRC result` object with the expected attributes.\n\n### Expected Result:\n\n- The response data includes the proper structure.\n", "attrs": { "FII": "TC-0015", "groupId": "GRP-005", "title": "Ensure that the `MRC result` field exists within the response data when a GET request is sent to the `/search` endpoint.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "wild-cycle" ], "priority": "High" } } { "FII": "TC-0015", "groupId": "GRP-005", "title": "Ensure that the `MRC result` field exists within the response data when a GET request is sent to the `/search` endpoint.", "created_by": "arun-ramanan@netspective.in", "created_at": "2024-12-15", "test_type": "Automation", "tags": [ "wild-cycle" ], "priority": "High" } 2025-03-06 23:34:33 UNKNOWN

(Page 1 of 2)