Certified Tester Foundation Level - CTFL Exam Notes: ISTQB

Section 1.1 of the Certified Tester Foundation Level Syllabus v4.0 is titled "What is Testing?" This section provides an overview of testing and its objectives. It explains that testing is the process of evaluating a system or component to determine whether it satisfies the specified requirements. The section also discusses the relationship between testing and debugging, highlighting that testing is distinct from debugging, which focuses on identifying and fixing defects.

Additionally, this section emphasizes the importance of testing by explaining why it is necessary. It outlines the contributions of testing to the success of a project, including improving the quality of the software, reducing risks, and providing confidence in the system's behavior. The section also introduces the concepts of errors, defects, failures, and root causes, highlighting the need for testing to identify and address these issues.

Testing is important because it helps ensure the success of a software project. It contributes to the quality of the software by finding errors, defects, and failures.

Errors are mistakes made by people (e.g., developers), while defects (also called faults or bugs) are flaws introduced into a work product that can cause the software to not work correctly. Failures occur when the software deviates from its expected behavior during execution.
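
As a minimal sketch of that chain (the `average` function below is hypothetical, not from the syllabus): a developer's error introduces a defect into the code, and executing the defective code produces an observable failure.

```python
def average(values):
    # Defect: dividing by len(values) + 1 instead of len(values),
    # introduced by a developer's error (a human mistake).
    return sum(values) / (len(values) + 1)

# Failure: executing the defective code yields a wrong result.
result = average([1, 2, 3])
print(result)         # prints 1.5, but the expected result is 2.0
print(result == 2.0)  # False - the failure is observable
```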

By testing the software, we can identify these errors, defects, and failures, and understand their root causes. This allows us to fix them and improve the quality of the software. Testing also helps in quality assurance, which means making sure that the software meets the required standards and specifications.

In summary, testing is necessary to find and fix errors, defects, and failures in software, and to ensure that it meets the required quality standards.

"Testing Principles." This section covers the fundamental principles that guide the testing process. It emphasizes the importance of testing as an essential activity in software development and highlights the key principles that should be followed during testing. These principles include:

  1. Testing shows the presence of defects: Testing is conducted to identify defects or errors in the software. It aims to uncover issues and ensure that the software meets the desired quality standards.
  2. Exhaustive testing is impossible: It is practically impossible to test every possible input and scenario for a software system. Therefore, testing efforts should be focused on areas that are most likely to contain defects.
  3. Early testing: Testing should be started as early as possible in the software development lifecycle. This helps in identifying and fixing defects at an early stage, reducing the cost and effort required for later stages.
  4. Defect clustering: It is observed that a small number of modules or components usually contain the majority of defects. Therefore, testing efforts should be concentrated on these high-risk areas.
  5. Pesticide paradox: Repeating the same set of tests over and over again can lead to the diminishing effectiveness of those tests. To overcome this, test cases should be regularly reviewed and updated to ensure they remain effective.
  6. Testing is context-dependent: The testing approach and techniques used may vary depending on the specific context of the software project, such as the technology used, the domain, and the project constraints.
  7. Absence of errors fallacy: The absence of reported errors in a software system does not guarantee its success. Verifying the software against its specification and fixing all the defects found is not enough if the system does not meet the users' needs and expectations.

These principles provide a foundation for effective testing practices and help testers in making informed decisions throughout the testing process.

1.4 Test Activities, Testware, and Test Roles:

  • This section discusses the various activities involved in the testing process, including test planning, test design, test execution, and test completion.
  • It explains the concept of testware, which refers to the artifacts produced during the testing process, such as test cases, test scripts, and test data.
  • It also describes the different roles in testing, focusing on two principal roles: the test management role and the testing role.

To achieve a "shift-left" approach in testing, there are several good practices that can be followed:

  1. Early involvement: Start testing activities as early as possible in the software development lifecycle. This includes participating in requirements gathering, design discussions, and code reviews.
  2. Test automation: Implement automated testing tools and frameworks to enable early and frequent testing. This allows for faster feedback and helps identify issues early in the development process.
  3. Collaboration: Foster close collaboration between developers, testers, and other stakeholders. Encourage open communication, knowledge sharing, and joint problem-solving to ensure a shared understanding of requirements and quality expectations.
  4. Continuous integration and continuous testing: Integrate testing activities into the continuous integration and delivery pipeline. Run automated tests with every code change to catch issues early and ensure the stability of the software.
  5. Shift-left security: Incorporate security testing practices early in the development process. Perform security code reviews, vulnerability scanning, and penetration testing to identify and address security risks as early as possible.
  6. Test-driven development (TDD): Adopt TDD practices where tests are written before the code is developed. This helps in defining clear requirements and ensures that the code meets those requirements (see the sketch after this list).
  7. Early defect detection: Use static code analysis tools and techniques to identify potential defects and code quality issues early in the development process. This helps in reducing the number of defects that make their way into the later stages of testing.
  8. Continuous learning and improvement: Encourage a culture of continuous learning and improvement within the testing team. Regularly review and analyze testing processes, identify areas for improvement, and implement changes to enhance the effectiveness and efficiency of testing activities.

These practices can help organizations shift testing activities to the left, enabling early detection and resolution of issues, reducing rework, and improving overall software quality.
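
To make the TDD practice from item 6 concrete, here is a minimal, hypothetical sketch (the `slugify` function and its test are illustrative, not from the syllabus): the test is written first and fails, then just enough code is written to make it pass.

```python
import unittest

class TestSlugify(unittest.TestCase):
    # TDD step 1: this test is written first and initially fails,
    # because slugify() does not exist yet.
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# TDD step 2: write just enough code to make the test pass.
def slugify(text):
    return "-".join(text.lower().split())

if __name__ == "__main__":
    unittest.main()
```

In a full TDD cycle, the code would then be refactored while the test is kept passing.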

Test levels refer to the different stages or phases of testing that are performed during the software development lifecycle (SDLC). Each test level has specific objectives and focuses on different aspects of the software. Test levels are closely aligned with the different phases of the SDLC and are performed to verify the quality and functionality of the software at each stage.

In the syllabus, section 2.2.1 describes the following five test levels:

  1. Component testing (also known as unit testing): This level focuses on testing components in isolation. It is typically performed by developers in their development environments and may require specific support such as test harnesses or unit test frameworks.
  2. Component integration testing (also known as unit integration testing): This level focuses on testing the interfaces and interactions between components. It is heavily dependent on integration strategy approaches like bottom-up, top-down, or big-bang.
  3. System testing: This level focuses on the overall behavior and capabilities of an entire system or product. It includes functional testing of end-to-end tasks and non-functional testing of quality characteristics. Some non-functional testing, such as usability, is preferably done on a complete system in a representative test environment.
  4. System integration testing: This level focuses on testing the interfaces and interactions between the system under test and other systems or external services. It requires suitable test environments, ideally similar to the operational environment.
  5. Acceptance testing: This level focuses on determining whether a system satisfies the acceptance criteria and meets the needs of the stakeholders. It is typically performed by end-users or customers.

The triggers for maintenance and maintenance testing can be classified as follows:

  • Modifications: This includes planned enhancements, corrective changes, or hot fixes to the software.
  • Upgrades or migrations of the operational environment: This involves changes in the platform or environment on which the software operates, such as moving from one platform to another.
  • Retirement: When an application reaches the end of its life, it may require testing of data archiving and restore procedures.

Confirmation and Regression Testing

Depending on the identified risks, there are several ways to test the fixed version of the software. These include:

  1. Selecting testers with the right level of experience and skills for the specific risk type.
  2. Applying an appropriate level of independence in testing.
  3. Conducting reviews and performing static analysis to identify any potential issues.
  4. Applying the appropriate test techniques and coverage levels to ensure thorough testing.
  5. Using the appropriate test types that address the affected quality characteristics.
  6. Performing dynamic testing, including regression testing, to ensure that the fixed version functions correctly (see the sketch below).

These testing approaches help mitigate the identified risks and ensure that the fixed version of the software meets the required quality standards.
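
As a hedged illustration of confirmation and regression testing (the `parse_price` function and the defect ID below are hypothetical): confirmation testing re-runs the test that exposed the original defect, while regression testing re-runs existing tests to check that the fix did not break previously working behavior.

```python
import unittest

def parse_price(text):
    # Fixed version: the (hypothetical) defect was that surrounding
    # whitespace caused parsing to fail.
    return float(text.strip())

class ConfirmationTest(unittest.TestCase):
    # Confirmation testing: re-run the test that originally failed
    # to confirm the defect is actually fixed.
    def test_defect_1234_whitespace_input(self):
        self.assertEqual(parse_price(" 9.99 "), 9.99)

class RegressionTests(unittest.TestCase):
    # Regression testing: previously passing behavior still works.
    def test_plain_input_still_works(self):
        self.assertEqual(parse_price("10.50"), 10.5)

if __name__ == "__main__":
    unittest.main()
```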

Static testing: in static testing, the software under test does not need to be executed. Instead, code, process specifications, system architecture specifications, or other work products are evaluated through manual examination (e.g., reviews) or with the help of a tool (e.g., static analysis). The objectives of static testing include improving quality, detecting defects, and assessing characteristics like readability, completeness, correctness, testability, and consistency. Static testing can be applied for both verification and validation. Testers, business representatives, and developers work together during example mappings, collaborative user story writing, and backlog refinement sessions to ensure that user stories and related work products meet defined criteria. Static analysis can identify problems prior to dynamic testing while often requiring less effort; it is used to detect specific code defects, to evaluate maintainability and security, and to reduce overall project costs.
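
As a hedged sketch of the kind of issues static analysis can find without executing the code (hypothetical snippet; the exact findings depend on the tool used, e.g., a linter or a type checker):

```python
# None of this code needs to run for the issues below to be found.

def find_user(users: dict[str, str], name: str) -> str:
    user = users.get(name)  # dict.get may return None
    return user.upper()     # a type checker can flag a possible
                            # None dereference here

unused_total = 0            # a linter can flag an unused variable
```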

a common misperception: testing is only executing software and checking results (i.e., test execution)

but testing also includes test planning, analyzing, designing and implementing tests, reporting test progress and results, and evaluating the quality of the test object.

testing also involves reviewing work products such as requirements, user stories, and source code.

dynamic testing: execution of component or system

static testing: doesn't involve execution of component or system

objectives of testing:

  1. prevent defects by evaluating work products such as requirements, user stories, and code
  2. verify whether all specified requirements have been fulfilled
  3. check whether the test object is complete and validate that it works as users and other stakeholders expect
  4. build confidence in the quality of the test object
  5. find defects and failures, and thus reduce the level of risk of inadequate software quality
  6. provide sufficient information to stakeholders to allow them to make informed decisions regarding the level of quality of the test object
  7. comply with contractual, legal, or regulatory requirements or standards, and/or verify the test object's compliance with such requirements or standards

objectives may change depending on the context of the SDLC, for example:

during component testing, an objective may be to find as many failures as possible so that defects are identified and fixed early; another objective may be to increase the code coverage of the component tests.

during acceptance testing, an objective may be to confirm that the system works as expected; another may be to give stakeholders information about the risk of releasing the system.

testing and debugging are different. testing shows failures caused by defects in the software; debugging finds, analyzes, and fixes such defects.

In agile and other SDLC, testers may be involved in debugging too.

Examples of how testing contributes to success:

  1. having testers involved in requirements reviews or user story refinement reduces the risk of untestable features being developed.
  2. having testers work closely with system designers while the system is being designed increases each party's understanding of the design and can reduce the risk of fundamental design defects at an early stage.
  3. having testers work closely with developers while the code is under development increases each party's understanding of the code and can reduce the risk of defects within the code.
  4. having testers verify and validate the software prior to release detects failures and supports the removal of defects, which increases the likelihood that the software meets stakeholder requirements.

QA and testing are not the same, but they are related; quality management ties them together. Quality management includes both quality assurance (QA) and quality control (QC).

errors occur for reasons like:

  1. time pressure
  2. human fallibility
  3. inexperienced people
  4. miscommunication within the team, including miscommunication about requirements
  5. complexity of the code, design, architecture, or the problem to be solved
  6. misunderstandings about intra-system and inter-system interfaces, especially when they are large in number
  7. new, unfamiliar technologies
  8. environmental conditions like radiation, electromagnetic fields, pollution

false positives are test results reported as defects that are not actually defects.

false negatives are actual defects that are not detected and reported by testing.

root causes of defects are early actions/conditions that contributed to creating defects.

root cause analysis can lead to process improvements that prevent similar defects from occurring in the future.

7 testing principles

  1. testing shows presence of defects, not their absence
  2. exhaustive testing is impossible - testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases.
  3. early testing saves time and money - both static and dynamic testing should be started as early as possible. early testing is also known as shift-left.
  4. defects cluster together - a small number of modules usually contains most of the defects and is responsible for most failures.
  5. beware of the pesticide paradox - if the same tests are repeated over and over again, they eventually stop finding new defects.
  6. testing is context dependent - testing is done differently in different contexts, e.g., safety-critical industrial control software vs. an e-commerce mobile app.
  7. absence-of-errors is a fallacy - principles 1 and 2 tell us that running all possible tests and finding all defects is impossible; it is also a mistaken belief that verifying and fixing many defects by itself guarantees a successful system, since the system must also meet the users' needs.

factors influencing the test process of an organization include:

  1. the SDLC model and project management methodology being used
  2. the test levels and test types being considered
  3. product and project risks
  4. the business domain
  5. operational constraints, such as budget, timescales, complexity, and contractual requirements
  6. organizational policies and practices
  7. required internal and external standards

A test process consists of the following groups of test activities and tasks:

test planning

test monitoring/ control

test analysis

test design

test implementation

test execution

test completion

test planning - involves activities that define the objectives of testing and the approach for meeting those objectives within the constraints imposed by the context

test monitoring & control - ongoing comparison of actual progress against planned progress, test control involves taking action to meet objectives of test plan

evaluation of exit criteria may include

  • checking test results and logs against specified coverage criteria
  • assessing level of component or system quality
  • determining whether more tests are needed

test analysis - determines "what to test" in terms of measurable coverage criteria

  • analyzing the test basis, e.g.:
      • requirement specifications, such as business requirements, functional requirements, system requirements, and user stories
      • software architecture diagrams and design specifications
  • evaluating the test basis and test items to identify defects such as:
      • ambiguities
      • omissions
      • inconsistencies
      • inaccuracies
      • contradictions
      • superfluous statements
  • identifying features and sets of features to be tested
  • defining and prioritizing test conditions for each feature
  • capturing bidirectional traceability between each element of the test basis and the associated test conditions

the application of black-box, white-box, and experience-based test techniques can be useful in test analysis.

in test-first approaches such as Behavior Driven Development (BDD) and Acceptance Test Driven Development (ATDD), test conditions and test cases are derived early from acceptance criteria.

Test Design - “how to test”

  • designing and prioritizing test cases and sets of test cases
  • identifying the test data needed to support the test conditions and test cases
  • designing the test environment and identifying the required infrastructure and tools
  • capturing bidirectional traceability between the test basis, test conditions, and test cases

Test implementation - “do we have everything in place to run the tests?”

  • developing and prioritizing test procedures, and creating automated test scripts
  • creating test suites from the test procedures and automated test scripts
  • arranging the test suites within a test execution schedule
  • building the test environment and verifying that everything is set up correctly
  • preparing test data and ensuring it is properly loaded into the test environment
  • verifying and updating bidirectional traceability between the test basis, test conditions, test cases, and test procedures

test design and test implementation tasks are often combined.

Test Execution

  • recording the IDs and versions of the test items
  • executing tests manually or using test execution tools
  • comparing actual results with expected results
  • analyzing anomalies to establish their likely causes
  • reporting defects based on the failures observed
  • logging the outcome of test execution
  • repeating test activities, e.g., confirmation testing and regression testing
  • verifying and updating bidirectional traceability between the test basis, test conditions, test cases, test procedures, and test results

Test completion - collects data from completed test activities to consolidate experience, testware, and other relevant information. Test completion activities occur at project milestones.

  • checking whether all defect reports are closed
  • creating a test summary report to be communicated to stakeholders
  • archiving the test environment, test data, test infrastructure, and other testware for later reuse
  • handing over testware to maintenance teams or other teams that may benefit from its use
  • analyzing lessons learned to determine changes needed for future iterations and releases
  • using the information gathered to improve test process maturity

Traceability between Test Basis and Test work products

Good traceability supports:

  • analyzing the impact of changes
  • making testing auditable
  • meeting IT governance criteria
  • improving test progress reports and test summary reports (e.g., which requirements have passed, failed, or still have pending tests)
  • relating technical aspects of testing to stakeholders in understandable terms
  • providing information to assess product quality, process capability, and project progress against business goals
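
A minimal sketch of what bidirectional traceability data can look like (the requirement and test case IDs below are hypothetical): each test basis element maps to its test cases and back, so change impact and per-requirement status can be derived directly.

```python
# Hypothetical traceability between requirements and test cases.
req_to_tests = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
}
test_results = {"TC-1": "passed", "TC-2": "failed", "TC-3": "pending"}

# Reverse direction: which requirement does each test case cover?
test_to_req = {tc: req for req, tcs in req_to_tests.items() for tc in tcs}

# Per-requirement status for a test progress report:
for req, tcs in req_to_tests.items():
    print(req, "->", {tc: test_results[tc] for tc in tcs})
```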

human psychology of testing

confirmation bias - e.g., developers find it difficult to accept that their own code is incorrect

SDLC models:

  • sequential development models
  • iterative and incremental development models

Test levels:

  1. acceptance testing
  2. component testing - unit/ module testing
  3. system testing
  4. integration testing

TDD is a test-first approach.

component integration testing is often the responsibility of developers, while system integration testing is often the responsibility of testers.

formal review roles:

author, management, facilitator, review leader, reviewer, scribe

4 most common review types

informal

walkthrough

technical review

inspection

individual review techniques

ad hoc

checklist based

scenarios and dry runs

perspective based

role based

probe effect - the unintended influence on the component or system caused by intrusive test tools; the act of measuring can affect the outcome

black box, white box, experience based testing techniques

black box test techniques

  1. equivalence partitioning

% coverage = (number of equivalence partitions exercised by at least one test case / total number of identified equivalence partitions) x 100
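
A minimal equivalence partitioning sketch (the age rule and `ticket_category` function are hypothetical): identify the partitions, pick one representative value per partition, and compute coverage as in the formula above.

```python
# Hypothetical rule: age < 0 is invalid; 0-17 is "minor";
# 18-64 is "adult"; 65 and above is "senior". Four partitions.

def ticket_category(age):
    if age < 0:
        raise ValueError("invalid age")
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

# One representative test value per partition (including the
# invalid partition, whose expected outcome is an error):
tests = [(-5, "error"), (10, "minor"), (30, "adult"), (80, "senior")]
covered = len(tests)  # partitions exercised by at least one test: 4
total = 4             # partitions identified: 4
print(f"EP coverage: {covered}/{total} = {covered / total:.0%}")
```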

  2. boundary value analysis - an extension of equivalence partitioning that can only be used when the partitions are ordered, i.e., consist of numeric or sequential data.
  3. decision table testing

notation for conditions:

  • Y (also T, True, or 1) means the condition is satisfied
  • N (also F, False, or 0) means the condition is not satisfied
  • — (also N/A) means the value of the condition doesn't matter

notation for actions:

  • X (also Y, T, True, or 1) means the action should occur
  • blank (also —, N, F, False, or 0) means the action should not occur

a common minimum coverage standard for decision table testing is to have at least one test case per decision rule in the table.

% coverage = (number of decision rules exercised by at least one test case / total number of decision rules) x 100
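
A minimal decision table sketch (the free-shipping rule is hypothetical): two conditions give four decision rules, and one test case per rule achieves the minimum coverage described above.

```python
# Hypothetical rule: free shipping if the customer is a member OR
# the order total is at least 50.

def free_shipping(member, total):
    return member or total >= 50

# Decision table: (member, total >= 50) -> free shipping (X / blank)
rules = {
    (True,  True):  True,   # rule 1: X
    (True,  False): True,   # rule 2: X
    (False, True):  True,   # rule 3: X
    (False, False): False,  # rule 4: blank
}

# One test case per decision rule = 4/4 = 100% coverage.
for (member, big_order), expected in rules.items():
    total = 60 if big_order else 10
    assert free_shipping(member, total) == expected
print("decision rule coverage: 4/4 = 100%")
```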

  4. state transition testing (see the sketch after this list)
  5. use case testing
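
A minimal state transition testing sketch for item 4 (the document workflow states and events are hypothetical): tests exercise each valid transition and check that an invalid transition is rejected.

```python
# Hypothetical state machine: draft -> review -> published.
TRANSITIONS = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
}

def next_state(state, event):
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"invalid transition: {state} + {event}")
    return TRANSITIONS[(state, event)]

# One test per valid transition (all-transitions coverage):
assert next_state("draft", "submit") == "review"
assert next_state("review", "approve") == "published"
assert next_state("review", "reject") == "draft"

# A negative test: an invalid transition must be rejected.
try:
    next_state("published", "submit")
except ValueError:
    print("invalid transition correctly rejected")
```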

white box testing techniques

  1. statement testing and coverage
  2. decision testing and coverage
the value of statement and decision testing:

achieving 100% statement coverage provides a weaker guarantee than achieving 100% decision coverage.

100% decision coverage guarantees 100% statement coverage (but not vice versa).

statement testing can find defects in code that was not exercised by other tests.

decision testing can find defects in code where other tests have not exercised both the true and false outcomes of decisions.
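
A minimal sketch of the difference (the `apply_discount` function is hypothetical): a single test can reach 100% statement coverage while decision coverage remains incomplete.

```python
def apply_discount(price, is_member):
    if is_member:
        price = price * 0.5  # the only statement inside the decision
    return price

# Test 1 executes every statement (100% statement coverage), but the
# decision "is_member" has only taken its True outcome, so decision
# coverage is 1/2 = 50%.
assert apply_discount(100, True) == 50.0

# Test 2 exercises the False outcome, bringing decision coverage to
# 100% - which in turn guarantees 100% statement coverage.
assert apply_discount(100, False) == 100
```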

experience based testing

  1. error guessing
  2. exploratory testing
  3. checklist based testing