Software Testing Lecture Notes (English)

SOFTWARE TESTING

“Testing is the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements”

 Testing is a process used to help identify the correctness, completeness and quality of developed computer software.

 Overall, testing objectives can be summarized as:

- Testing is a process of executing a program with the intent of finding an error.

- A good test is one that has a high probability of finding an as yet undiscovered error.

- A successful test is one that uncovers an as yet undiscovered error.
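
A hypothetical illustration of these objectives (the discount function and its seeded defect below are invented for this example, not taken from the lecture): a boundary-value test has a much higher probability of uncovering the defect than a typical "happy path" value.

```python
# Hypothetical example: the function and its seeded defect are invented here.

def is_eligible_for_discount(order_total: float) -> bool:
    """Intended rule: orders of 100 or more get a discount."""
    return order_total > 100   # seeded defect: should be >= 100

# A typical "happy path" value exercises the code but reveals nothing new:
print(is_eligible_for_discount(150))   # True, as the rule intends

# A good test targets the boundary, where an undiscovered error is most likely.
# This test is "successful" in the sense above because it uncovers the defect:
print(is_eligible_for_discount(100))   # False, although the intended rule says True
```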

 

Example business rule: if the Interest to be Paid is more than 8% and the Tenor of the deposit exceeds one month, then the system should give a warning. To populate the Interest to be Paid field of a deposit, we can enter 9.5478 and set the Tenor to two months for a particular deposit; this will trigger the warning in the application.

Test Conditions

Test conditions are all the possible combinations and validations that can be attributed to a requirement in the specification. Determining the conditions matters for:
- Deciding on the architecture of the testing approach
- Evolving the design of the test scenarios
- Ensuring test coverage

The possible condition types that can be built are:
- Positive condition: the value given for the test is chosen to comply with the condition.
- Negative condition: the value given for the test is chosen not to comply with the condition.
- Boundary condition: the value given for the test is chosen to assess the extreme values of the condition.
- User perspective condition: the value given for the test is chosen to analyse the practical usage of the condition.
(A small code sketch applying these condition types to the interest-warning rule appears at the end of this section.)

Software Defects

A defect is an improper program condition that is generally the result of an error. Not all errors produce program defects, as with incorrect comments or some documentation errors. Conversely, a defect could result from such non-programmer causes as improper program packaging or handling.

Defect categories:
- Wrong: the specifications have been implemented incorrectly.
- Missing: a specified requirement is not in the built product.
- Extra: a requirement was incorporated into the product that was not specified.

Designing Test Cases from Use Cases

Step 1: Identify the module to which the Use Case belongs.
Step 2: Identify the functionality of the Use Case with respect to the overall functionality of the system.
Step 3: Identify the Actors involved in the Use Case.
Step 4: Identify the pre-conditions.
Step 5: Understand the Business Flow of the Use Case.
Step 6: Understand the Alternate Business Flow of the Use Case.
Step 7: Identify any post-conditions and special requirements.
Step 8: Identify the Test Conditions from the Use Case / Business Rules and make a Test Condition Matrix document, module-wise, for each and every Use Case.
Step 9: Identify the main functionality of the module and document a complete Test Scenario document for the Business Flow (including any actions made in the alternate business flow, if applicable).
Step 10: For every test scenario, formulate the test steps based on the navigational flow of the application, together with the test condition matrix, in a specific test case template.

Role of Documentation in Testing

- Testing practices should be documented so that they are repeatable.
- Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented.
- Change management for documentation should be used if possible.
- Ideally, a system should be developed for easily finding and obtaining documents and determining which documentation holds a particular piece of information.
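To make the condition types and Steps 8-10 above concrete, here is a minimal Python sketch of positive, negative and boundary conditions for the interest-warning business rule. The function should_warn and its signature are assumptions made for this illustration; they merely stand in for the application under test.

```python
# Minimal sketch: condition types applied to the interest-warning business rule.
# should_warn is a hypothetical stand-in for the application under test.

def should_warn(interest_rate_percent: float, tenor_months: int) -> bool:
    """Business rule: warn when interest to be paid > 8% and tenor exceeds one month."""
    return interest_rate_percent > 8.0 and tenor_months > 1

# Positive condition: values comply with the rule, so the warning is expected.
assert should_warn(9.5478, 2) is True

# Negative condition: values do not comply, so no warning is expected.
assert should_warn(7.5, 2) is False

# Boundary conditions: exercise the extreme values of the rule.
assert should_warn(8.0, 2) is False    # "more than 8%" excludes exactly 8%
assert should_warn(9.0, 1) is False    # tenor must exceed one month

print("All test conditions passed")
```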
Invalid Bug Reports

When a bug report is marked invalid, the question is: what comments did the developer leave to indicate that it is invalid? If there are none, you need to discuss this with the developer. The reasons they may have are many:
1) You didn't understand the system under test correctly, because
   1a) the requirements have changed, or
   1b) you don't get the whole picture.
2) You were testing against the wrong version of the software or configuration, with the wrong OS or the wrong browser.
3) You made an assumption that was incorrect.
4) Your bug was not repeatable (in which case they may mark it as "works for me"), or it was repeatable only because memory was already corrupted after the first instance and you can't reproduce it on a clean machine (again, possibly a "works for me" bug).
Just remember that a bug report isn't you writing a law that the developers must conform to; it is a form of communication. If you didn't communicate the bug correctly, the bug report being in this state is just as much your fault as it is the developer's. And since it is a communication, use it to communicate, not to accuse or indict.

Traceability Matrix

A Traceability Matrix ensures that each requirement has been traced to a specification in the Use Cases and Functional Specifications, to a test condition/case in the test scenario, and to the defects raised during test execution, thereby achieving one-to-one test coverage. The entire traceability process is time consuming. To simplify it, a tool such as Rational RequisitePro or TestDirector can maintain the specifications of the documents, which are then mapped correspondingly; the specifications have to be loaded into the tool by the user. Even though it is time consuming, traceability helps in finding the 'ripple' effect of altering a specification: the impacts on test conditions can immediately be identified using the trace matrix. A traceability matrix should be prepared between requirements and test cases. Simplifying the above: A = Business Requirement, B = Functional Specification, C = Test Conditions; i.e., A = B and B = C, therefore A = C. (A small sketch of such a matrix appears after the Test Requirements list below.)

What is Test Management?

Test management is a method of organizing application test assets and artifacts, such as:
- Test requirements
- Test plans
- Test documentation
- Test scripts
- Test results
to enable easy accessibility and reusability. Its aim is to deliver quality applications in less time. Test management is firmly rooted in the concepts of better organization, collaboration and information sharing.

Test Strategy

- Scope of Testing
- Types of Testing
- Levels of Testing
- Test Methodology
- Test Environment
- Test Tools
- Entry and Exit Criteria
- Test Execution
- Roles and Responsibilities
- Risks and Contingencies
- Defect Management
- Test Deliverables
- Test Milestones

Test Requirements

The test team gathers the test requirements from the following baselined documents:
- Customer Requirements Specification (CRS)
- Functional Specification (FS) – Use Case, Business Rule, System Context
- Non-Functional Requirements (NFR)
- High Level Design Document (HLD)
- Low Level Design Document (LLD)
- System Architecture Document
- Prototype of the application
- Database Mapping Document
- Interface Related Document
- Other project-related documents, such as e-mails and minutes of meetings
- Knowledge transfer sessions from the Development Team
- Brainstorming sessions within the Test Team
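As a purely illustrative sketch of the Traceability Matrix described above (the requirement, test case and defect IDs are invented for this example), the mapping from requirements to test cases to defects can be represented and queried like this:

```python
# Illustrative sketch of a requirements-to-test-case traceability matrix,
# with a simple coverage check and a 'ripple' (impact) query. All IDs are invented.

requirement_to_tests = {
    "REQ-001": ["TC-101", "TC-102"],   # business requirement -> test cases
    "REQ-002": ["TC-201"],
    "REQ-003": [],                     # no test case yet: a coverage gap
}

test_to_defects = {
    "TC-101": ["DEF-9"],               # test case -> defects raised during execution
    "TC-102": [],
    "TC-201": [],
}

# Coverage check: requirements that are not yet traced to any test case.
uncovered = [req for req, tests in requirement_to_tests.items() if not tests]
print("Requirements without test coverage:", uncovered)

# Ripple check: if a requirement changes, which test cases and defects are affected?
def ripple(req_id: str):
    tests = requirement_to_tests.get(req_id, [])
    defects = [d for t in tests for d in test_to_defects.get(t, [])]
    return tests, defects

print("Impact of changing REQ-001:", ripple("REQ-001"))
```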
Configuration Management

Software Configuration Management (SCM) is an umbrella activity that is applied throughout the software process. SCM identifies, controls, audits and reports modifications that invariably occur while software is being developed and after it has been released to a customer. All information produced as part of software engineering becomes part of the software configuration, and the configuration is organized in a manner that enables orderly control of change.

The following is a sample list of software configuration items:
- Management plans (Project Plan, Test Plan, etc.)
- Specifications (Requirements, Design, Test Cases, etc.)
- Customer documentation (Implementation Manuals, User Manuals, Operations Manuals, on-line help files)
- Source code (PL/1, Fortran, COBOL, Visual Basic, Visual C, etc.)
- Executable code (machine-readable object code, exe's, etc.)
- Libraries (runtime libraries, procedures, %include files, APIs, DLLs, etc.)
- Databases (data being processed, data a program requires, test data, regression test data, etc.)
- Production documentation

Automated Testing Tools

- WinRunner, LoadRunner and TestDirector from Mercury Interactive
- QARun and QALoad from Compuware
- Rational Robot, SiteLoad and SQA Manager from Rational
- SilkTest and SilkPerformer from Segue
- e-Tester, e-Load and e-Monitor from RSW Software

Test Attributes

To different degrees, good tests have these attributes:
- Power: when a problem exists, the test will reveal it.
- Valid: when the test reveals a problem, it is a genuine problem.
- Value: it reveals things your clients want to know about the product or project.
- Credible: your client will believe that people will do the things that are done in this test.
- Representative: of events most likely to be encountered by the user (cf. Musa's Software Reliability Engineering).
- Non-redundant: this test represents a larger group that addresses the same risk.
- Motivating: your client will want to fix the problem exposed by this test.
- Performable: it can be performed as designed.
- Maintainable: easy to revise in the face of product changes.
- Repeatable: it is easy and inexpensive to reuse the test.
- Pop (short for Karl Popper): it reveals things about our basic or critical assumptions.
- Coverage: it exercises the product in a way that isn't already taken care of by other tests.
- Easy to evaluate.
- Supports troubleshooting: provides useful information for the debugging programmer.
- Appropriately complex: as the program gets more stable, you can hit it with more complex tests and more closely simulate use by experienced users.
- Accountable: you can explain, justify, and prove you ran it.
- Cost: this includes time and effort, as well as direct costs.
- Opportunity cost: developing and performing this test prevents you from doing other work.

Roles & Responsibilities

Test Project Manager: customer interface, Master Test Plan, test strategy, project technical contact, interaction with the Development Team, review of test artifacts, defect management.
Test Lead: module technical contact, test plan development, interaction with the module team, review of test artifacts, defect management, test execution summary, defect metrics reporting.
Test Engineers: prepare test scenarios, develop test conditions/cases, prepare test scripts, test coverage matrix, execute tests as scheduled, defect log.
Test Tool Specialist: prepare automation strategy, capture and playback scripts, run test scripts, defect log.

Support Group for Testing

Domain Expert, Development Team, Software Quality Assurance Team, Software Configuration Management, Support Group – Technology, Architecture and Design Team.
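As a small illustration of the defect log and defect metrics reporting duties listed above (all field names and records are invented for this example), a defect log can be kept as structured records from which simple metrics are derived:

```python
# Illustrative sketch of a defect log and simple defect metrics. Field names,
# records and IDs are invented for this example.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Defect:
    defect_id: str
    module: str
    severity: str      # e.g. "High", "Medium", "Low"
    status: str        # e.g. "Open", "Fixed", "Closed", "Invalid"
    category: str      # "Wrong", "Missing" or "Extra" (see Defect Categories above)

defect_log = [
    Defect("DEF-1", "Deposits", "High",   "Open",    "Wrong"),
    Defect("DEF-2", "Deposits", "Medium", "Closed",  "Missing"),
    Defect("DEF-3", "Reports",  "Low",    "Invalid", "Extra"),
]

# Defect metrics: counts by severity, status and module, as a Test Lead might report.
print("By severity:", Counter(d.severity for d in defect_log))
print("By status:  ", Counter(d.status for d in defect_log))
print("By module:  ", Counter(d.module for d in defect_log))
```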
