
Testing Terminology

Acceptance Criteria: The exit criteria that a component or system must satisfy in order to be accepted by a user, customer or other authorized entity.

Acceptance Testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.


Accessibility Testing: Testing to determine the ease by which users with disabilities can use a component or system.

Ad hoc Testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results, and arbitrariness guides the test execution activity.

Back-to-back Testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared and analyzed in cases of discrepancies.
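
As a minimal sketch (both variants and the test inputs below are assumed for illustration): two implementations of the same calculation are run with identical inputs and their outputs are compared.

```python
# Two hypothetical variants of the same calculation, e.g. an optimized
# rewrite checked against a simple reference implementation.
def mean_reference(values):
    return sum(values) / len(values)

def mean_optimized(values):
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

# Back-to-back test: same inputs to both variants, outputs compared.
def test_back_to_back():
    for inputs in ([1, 2, 3], [10.0, -4.5], [0, 0, 0, 7]):
        assert mean_reference(inputs) == mean_optimized(inputs)
```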

Behavior: The response of a component or system to a set of input values and preconditions.

Beta Testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.

Black box Testing: Testing, either functional or non-functional, of a component or system without reference to its internal structure.

Blocked Test Case: A test case that cannot be executed because the preconditions for its execution are not fulfilled.

Bottom-up Testing: An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. See also integration testing.

Boundary Value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

Boundary Value Analysis: A black box testing design technique in which test cases are designed based on boundary values.
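
A minimal sketch, assuming a hypothetical `is_valid_age` check that accepts ages 18 through 65 inclusive: the boundary values to test are 17, 18, 65 and 66.

```python
# Hypothetical validator: accepts ages 18..65 inclusive (assumed for illustration).
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# Boundary value analysis: test the edges of the valid range
# and the values immediately outside them.
def test_boundary_values():
    assert is_valid_age(18) is True    # lower boundary
    assert is_valid_age(65) is True    # upper boundary
    assert is_valid_age(17) is False   # just below lower boundary
    assert is_valid_age(66) is False   # just above upper boundary
```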

Capture/Playback Tool: A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.

Cause-effect Graph: A graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

Classification Tree Method: A black box test design technique in which test cases, described by means of a classification tree, are designed to execute combinations of representatives of input and/or output domains.

Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.

Component: A minimal software item that can be tested in isolation.

Component Integration Testing: Testing performed to expose defects in the interfaces and interaction between integrated components.

Condition: A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition.

Condition Coverage: The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.
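
For illustration, a sketch assuming a hypothetical discount rule whose single decision contains two conditions; across the three tests each condition takes both the outcome True and the outcome False.

```python
# Hypothetical rule with two conditions in one decision (assumed for illustration).
def gets_discount(is_member: bool, total: float) -> bool:
    return is_member and total > 100

# Across these tests, each single condition is exercised as both True and False:
#   is_member:   True (tests 1 and 2), False (test 3)
#   total > 100: True (test 1),        False (test 2)
def test_condition_coverage():
    assert gets_discount(True, 150) is True    # is_member=True,  total>100=True
    assert gets_discount(True, 50) is False    # is_member=True,  total>100=False
    assert gets_discount(False, 150) is False  # is_member=False
```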

Condition Testing: A white box test design technique in which test cases are designed to execute condition outcomes.

Coverage Analysis: Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.

Code Coverage Tool: A tool that provides objective measures of what structural elements, e.g. statements, branches have been exercised by a test suite.

Cyclomatic Complexity: The number of independent paths through a program. Cyclomatic complexity is defined as L − N + 2P, where L = the number of edges/links in a graph, N = the number of nodes in a graph, and P = the number of disconnected parts of the graph (e.g. a called graph or subroutine).

Daily Build: A development activity where a complete system is compiled and linked every day (usually overnight), so that a consistent system is available at any time including all latest changes.
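
As a worked illustration of the cyclomatic complexity formula above (the function and its control flow graph are assumed for the example):

```python
# Hypothetical function with a single decision point.
def absolute(x):
    if x < 0:
        return -x
    return x

# A control flow graph for absolute() can be drawn with
#   N = 4 nodes  (the decision, the two returns, the exit)
#   L = 4 edges  (decision -> each return, each return -> exit)
#   P = 1 connected part
# Cyclomatic complexity = L - N + 2P = 4 - 4 + 2 = 2,
# matching the two independent paths (x < 0 and x >= 0).
```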

Data Driven Testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools.
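
A minimal sketch in plain Python (the data table and the `add` function are assumed for illustration): the test data lives in a table, and a single control script walks through every row.

```python
# Hypothetical function under test.
def add(a, b):
    return a + b

# The "table": each row holds the test inputs and the expected result.
test_data = [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
    (10, -5, 5),
]

# A single control script iterates over the table and checks every row.
def test_add_data_driven():
    for a, b, expected in test_data:
        assert add(a, b) == expected, f"add({a}, {b}) expected {expected}"
```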

Decision Table Testing: A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.
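
A minimal sketch, assuming a hypothetical loan-approval rule: the decision table lists each combination of conditions (causes) together with the expected outcome, and every row becomes a test case.

```python
# Hypothetical rule used only to illustrate decision table testing.
#   has_income | good_credit || approved
#   -----------+-------------++---------
#     True     |   True      ||  True
#     True     |   False     ||  False
#     False    |   True      ||  False
#     False    |   False     ||  False
def approve_loan(has_income: bool, good_credit: bool) -> bool:
    return has_income and good_credit

decision_table = [
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]

def test_decision_table():
    for has_income, good_credit, expected in decision_table:
        assert approve_loan(has_income, good_credit) is expected
```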

Defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Defect Density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).
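
For example (illustrative figures only): if 12 defects are identified in a component of 4,000 lines of code, the defect density is 12 ÷ 4 = 3 defects per KLOC.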

Defect Management: The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact.

Entry Criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.

Equivalence Partitioning: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
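
A minimal sketch, assuming the same kind of hypothetical age check used above (valid ages 18 through 65): the input domain splits into three partitions, and one representative value is drawn from each.

```python
# Hypothetical validator: accepts ages 18..65 inclusive (assumed for illustration).
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# Three equivalence partitions, one representative value per partition:
# below the valid range, inside the valid range, above the valid range.
def test_equivalence_partitions():
    assert is_valid_age(10) is False   # representative of "too young"
    assert is_valid_age(40) is True    # representative of "valid age"
    assert is_valid_age(80) is False   # representative of "too old"
```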

Error Guessing: A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

Exit Criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.

Expected Result: The behavior predicted by the specification, or another source, of the component or system under specified conditions.

Experience-based Test Technique: Procedure to derive and/or select test cases based on the tester’s experience, knowledge and intuition.

Exploratory Testing: An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.

Fail: A test is deemed to fail if its actual result does not match its expected result.

False-Positive Result: A test result in which a defect is reported although no such defect actually exists in the test object.

False-Negative Result: A test result which fails to identify the presence of a defect that is actually present in the test object.

Fault Tolerance: The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface.

Functional Requirement: A requirement that specifies a function that a component or system must perform.

Functional Test Design Technique: Procedure to derive and/or select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure.

Functional Testing: Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing.

High Level Test Case: A test case without concrete (implementation level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available.

Incremental Testing: Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.

Inspection: A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure.

Integration Testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.

Interoperability: The capability of the software product to interact with one or more specified components or systems.

Isolation Testing: Testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.

Key Performance Indicator (KPI): A high-level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development.

Load Testing: A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.

Low Level Test Case: A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators.

Maintainability: The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.

Mean Time Between Failures (MTBF): The arithmetic mean (average) time between failures of a system. The MTBF is typically part of a reliability growth model that assumes the failed system is immediately repaired, as a part of a defect fixing process.

Metric: A measurement scale and the method used for measurement.

Negative Testing: Tests aimed at showing how a system behaves in error conditions. Negative testing is related to the testers’ attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions and expecting negative results.

Non-functional Requirement: A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.

Non-functional Testing: Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.

Pairwise Testing: A black box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters.
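
As an illustrative sketch (the parameter names and values are assumed): with three parameters of two values each, exhaustive testing needs 2 × 2 × 2 = 8 combinations, but the four cases below already cover every pair of values; the snippet verifies that.

```python
from itertools import combinations

# Hypothetical parameters: OS, browser, language (two values each).
# Exhaustive testing would need 8 test cases; these 4 cover every pair.
pairwise_cases = [
    ("Windows", "Chrome",  "EN"),
    ("Windows", "Firefox", "DE"),
    ("macOS",   "Chrome",  "DE"),
    ("macOS",   "Firefox", "EN"),
]

def test_all_pairs_covered():
    values = [("Windows", "macOS"), ("Chrome", "Firefox"), ("EN", "DE")]
    # For every pair of parameters, every combination of their values
    # must appear together in at least one test case.
    for (i, vi), (j, vj) in combinations(enumerate(values), 2):
        for a in vi:
            for b in vj:
                assert any(case[i] == a and case[j] == b for case in pairwise_cases)
```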

Pass: A test is deemed to pass if its actual result matches its expected result.

Pass/Fail Criteria: Decision rules used to determine whether a test item (function) or feature has passed or failed a test.

Peer Review: A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.

Performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.

Performance Profiling: Definition of user profiles in performance, load and/or stress testing. Profiles should reflect anticipated or actual usage based on an operational profile of a component or system, and hence the expected workload.

Postcondition: Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.

Quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

Quality Assurance: Part of quality management focused on providing confidence that quality requirements will be fulfilled.

Quality Gate: A special milestone in a project. Quality gates are located between those phases of a project strongly depending on the outcome of a previous phase. A quality gate includes a formal check of the documents of the previous phase.

Random Testing: A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.

Regression Testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

Reliability Testing: The process of testing to determine the reliability of a software product.

Requirement: A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.

Re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

Review: An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough.

Risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood.

Risk Analysis: The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).

Risk-based Testing: An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and the use of risk levels to guide the test process.

Robustness: The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.

Sanity Test: See Smoke Test.

Scalability: The capability of the software product to be upgraded to accommodate increased loads.

Scalability Testing: Testing to determine the scalability of the software product.

Security Testing: Testing to determine the security of the software product.

Session-based Test Management: A method for measuring and managing session-based testing, e.g. exploratory testing.

Session-based Testing: An approach to testing in which test activities are planned as uninterrupted sessions of test design and execution, often used in conjunction with exploratory testing.

Smoke Test: A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices.

Software Quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs.

State Transition Testing: A black box test design technique in which test cases are designed to execute valid and invalid state transitions.
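
A minimal sketch, assuming a hypothetical two-state door model: one test exercises a valid transition, another checks that an invalid transition is rejected.

```python
# Hypothetical state machine used only for illustration.
class Door:
    def __init__(self):
        self.state = "CLOSED"

    def open(self):
        if self.state != "CLOSED":
            raise ValueError("invalid transition")
        self.state = "OPEN"

    def close(self):
        if self.state != "OPEN":
            raise ValueError("invalid transition")
        self.state = "CLOSED"

# Valid transition: CLOSED -> OPEN.
def test_valid_transition():
    door = Door()
    door.open()
    assert door.state == "OPEN"

# Invalid transition: closing an already closed door must be rejected.
def test_invalid_transition():
    door = Door()
    try:
        door.close()
        assert False, "expected the invalid transition to be rejected"
    except ValueError:
        pass
```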

Statement Testing: A white box test design technique in which test cases are designed to execute statements.

Static Testing: Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static analysis.

Stress Testing: A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified work loads, or with reduced availability of resources such as access to memory or servers.

Stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
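
A minimal sketch, assuming a hypothetical `OrderService` that depends on a payment gateway: the real gateway is replaced by a stub that returns a canned answer, so the calling component can be tested in isolation.

```python
# Hypothetical component under test; it calls a payment gateway it depends on.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        return "confirmed" if self.gateway.charge(amount) else "rejected"

# Stub: a special-purpose replacement for the real payment gateway.
# It returns a fixed, predictable answer instead of contacting anything.
class PaymentGatewayStub:
    def charge(self, amount):
        return True

def test_place_order_with_stub():
    service = OrderService(PaymentGatewayStub())
    assert service.place_order(100) == "confirmed"
```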

System Integration Testing: Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).

System Testing: The process of testing an integrated system to verify that it meets specified requirements.

Technical Review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken.

Test Approach: The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.

Test Automation: The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.

Test Case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Case Specification: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.

Test Condition: An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.

Test Design Specification: A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases.

Test Driven Development: A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
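
A minimal sketch of the cycle (the function name and behaviour are assumed for illustration): the test is written first and fails, then just enough code is written to make it pass.

```python
# Step 1 (red): the test is written first, before slugify() exists,
# and fails because the production code is missing.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): just enough production code is written to make the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3 (refactor): the code is cleaned up while the test keeps passing.
```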

Test Environment: An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

Test Evaluation Report: A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.

Test Execution: The process of running a test on the component or system under test, producing actual result(s).

Test Execution Automation: The use of software, e.g. capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.

Test Harness: A test environment comprised of stubs and drivers needed to execute a test.

Test Implementation: The process of developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.

Test Level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test.

Test Log: A chronological record of relevant details about the execution of tests.

Test Management: The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

Test Oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), other software, a user manual, or an individual’s specialized knowledge, but should not be the code.

Test Plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

Test Process: The fundamental test process comprises test planning and control, test analysis and design, test implementation and execution, evaluating exit criteria and reporting, and test closure activities.

Test Specification: A document that consists of a test design specification, test case specification and/or test procedure specification.

Test Strategy: A high-level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects).

Test Suite: A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.

Testability: The capability of the software product to enable modified software to be tested.

Testable Requirements: Requirements stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met.

Top-down Testing: An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

Traceability: The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.

Unreachable Code: Also known as dead code; code that cannot be reached and therefore is impossible to execute.

Usability: The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.

Usability Testing: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.

Use Case: A sequence of transactions in a dialogue between an actor and a component or system with a tangible result, where an actor can be a user or anything that can exchange information with the system.

Use Case Testing: A black box test design technique in which test cases are designed to execute scenarios of use cases.

V-model: A framework to describe the software development lifecycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development lifecycle.

Walkthrough: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content.

White-box Testing: Testing based on an analysis of the internal structure of the component or system.

Thanks for reading Testing Terminology.
