Abstract

Regression testing repeatedly executes test cases from previous builds to validate that new changes have not affected existing features. It is the type of software testing that seeks to uncover new bugs in existing areas of a system after changes have been made to them. In recent years, regression testing has grown in importance with the increasing adoption of agile development methodologies, which stress its central role in maintaining software quality. The ideal practice in an agile context is to run the regression suite at the end of each sprint and release, which is costly and time-consuming. In this master's thesis, we present an automated, scalable agile regression testing approach at both the sprint and release levels. At the sprint level, the proposed approach introduces a Weighted Sprint Test case Prioritization technique (WSTP) that prioritizes test cases based on several agile parameters carrying real practical weight for testers. At the release level, two approaches are proposed: (1) Cluster-based Release Test case Selection (CRTS), which clusters user stories based on the similarity of their covered modules to address the scalability issue; test cases are then selected, using text-mining techniques, based on the issues logged for failed test cases; and (2) Regression Testing Reduction and Prioritization (RTRP), which reduces the number of test cases used in the regression phase based on the similarity of the issues exposed by different test cases, taking user story coverage into consideration, and then prioritizes the reduced test cases using user-provided weighted agile parameters. The three proposed approaches are evaluated using different evaluation metrics for each technique.
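The clustering step of CRTS groups user stories whose covered modules overlap. The thesis does not specify the algorithm in this abstract; the following is only an illustrative sketch using Jaccard similarity and a greedy grouping rule (all function and variable names are hypothetical, not taken from the thesis):

```python
def jaccard(a, b):
    """Jaccard similarity of two sets of module names."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_stories(coverage, threshold=0.5):
    """Greedy clustering of user stories by covered-module similarity.

    coverage: dict mapping a user story id to the set of modules it covers.
    A story joins the first cluster whose representative module set is at
    least `threshold`-similar to its own; otherwise it starts a new cluster.
    """
    clusters = []  # list of (representative module set, member story ids)
    for story, mods in coverage.items():
        for rep, members in clusters:
            if jaccard(rep, mods) >= threshold:
                members.append(story)
                break
        else:
            clusters.append((mods, [story]))
    return [members for _, members in clusters]

# Hypothetical example: US1 and US2 share login/auth modules, US3 does not.
groups = cluster_stories({
    "US1": {"login", "auth"},
    "US2": {"login", "auth", "ui"},
    "US3": {"report"},
})
# groups -> [["US1", "US2"], ["US3"]]
```

Grouping similar stories lets the selection step operate per cluster instead of over all stories at once, which is one plausible way the scalability issue mentioned above can be mitigated.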
The prioritization technique improves the effectiveness of test case prioritization, with an average APFD of 0.78 across the different parameters. The selection technique improves the effectiveness of the selected test cases, with an average F-measure of 0.79 across the different releases and for different numbers of word occurrences. Moreover, the reduction and prioritization technique improves the test suite reduction (TSR) rate by an average of 6% while retaining an average of 96.5% of the fault detection capability across the three datasets used. As for prioritization, results show an average APFD of 0.802 using different weights for the provided parameters. In addition, the execution time of the implemented system is compared to the users' manual execution time; the comparison shows that the system saves time and, in turn, reduces the cost of regression testing.
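APFD (Average Percentage of Faults Detected), the metric behind the 0.78 and 0.802 figures above, has a standard closed form: for n tests, m faults, and TF_i the 1-based position of the first test revealing fault i, APFD = 1 - (ΣTF_i)/(n·m) + 1/(2n). A minimal sketch (names hypothetical; assumes every fault is detected by at least one test in the ordering):

```python
def apfd(order, fault_matrix):
    """APFD for a test-case ordering.

    order: list of test ids in execution order (length n)
    fault_matrix: dict mapping test id -> set of fault ids it detects
    """
    n = len(order)
    faults = set().union(*fault_matrix.values())
    m = len(faults)
    # TF_i: 1-based position of the first test that reveals fault i
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for f in fault_matrix.get(test, ()):
            first_pos.setdefault(f, pos)
    return 1 - sum(first_pos[f] for f in faults) / (n * m) + 1 / (2 * n)

# Hypothetical example: an ordering that runs the fault-dense test last
# scores lower than one that runs it first.
detects = {"t1": {"f1"}, "t2": {"f2"}, "t3": {"f1", "f2"}}
```

Higher APFD means faults are exposed earlier in the run, which is why it is a natural effectiveness measure for both WSTP and the RTRP prioritization step.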