Introduction to Software Testing
Difficulty: Beginner
Duration: 34m
Students: 170
Ratings: 5/5
Description

This course introduces new software developers to the concepts of software testing, debugging, and logging. These concepts are common to most programming languages, making this foundational knowledge.

Learning Objectives

  • The purpose of unit testing
  • The purpose of integration testing
  • The concept of code coverage
  • The concept of software debugging
    • And some of the different debugging techniques
  • The concept of software logging
    • And why some information shouldn’t be included inside log entries

Intended Audience 

  • New software developers

Prerequisites

  • You should have at least a conceptual understanding of programming
    • It’s okay if you’re not yet developing complex software
    • As long as you’re comfortable with the concepts of functions, classes, and methods, you’re likely ready for this course
Transcript

Software can be a difficult creative medium. Even the most basic of applications require a non-trivial amount of code. Applications are built using tens, hundreds, or even thousands of individual units of code which work together towards some shared purpose. 

With so much potential complexity, how do you ensure that the software behaves as expected? The short answer is: Software Testing.

Software testing is a wide-ranging topic which spans multiple job roles. Software testing has two basic flavors: functional testing and non-functional testing. 

  • Functional testing is used to ensure that software behaves as expected. 

  • Non-functional testing is used to ensure that the software is usable, secure, performant, etc.

In this lesson, we’re going to focus on the developer-centric forms of testing. Specifically, two forms of functional testing called: unit testing and integration testing.

Let’s summarize these two forms of testing before exploring each form a bit further.

Imagine software development as a sort of assembly line. Individual units of code are written to accomplish specific tasks. Then these individual units are brought together to form the final product. 

  • Unit testing is intended to validate the code logic inside an individual unit of code.

  • Integration testing is intended to ensure that disparate units of code work well together towards their shared purpose.

 Let’s explore these a bit further starting with unit testing. 

Unit tests are likely the most common form of testing used by developers. Let’s reiterate the purpose of unit testing. 

  • The purpose of unit testing is to validate the code logic inside an individual unit of code.

What exactly does that mean, an individual unit of code? The term unit in the context of a unit test is a bit ambiguous, because different programming languages include different units of code.

For example: 

  • A class or method could be a unit when using an object-oriented programming language.

  • A function could be a unit when using a functional programming language.

  • Some languages may use all of these concepts to represent a unit.

  • Or a unit could be something else entirely.

While there might not be a single definition for a unit of code, the intent is that it should represent the smallest practical piece of code. 

If software consists of individual units that are combined together, then ideally we want to be able to verify that the units behave as expected before combining them. 

Unit tests are used to verify that given a specific input, a unit of code will produce a specific output. For example, imagine a function that returns the sum of two numbers. When called with the arguments 1 and 2 we expect the result to be 3. If for some reason the result isn’t 3 then we know that there’s a problem with the function.

Unit tests can help to identify bugs that exist inside a unit of code. In the example of the sum function, we expect that 1 plus 2 equals 3. If it doesn’t then we know that that unit of code contains a bug. 
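The sum example can be sketched as a unit test using Python’s built-in unittest module. The function name add_numbers is hypothetical, but the idea is exactly as described: given a specific input, assert a specific output.

```python
import unittest

def add_numbers(a, b):
    """Return the sum of two numbers."""
    return a + b

class TestAddNumbers(unittest.TestCase):
    def test_one_plus_two_equals_three(self):
        # Given the inputs 1 and 2, we expect the output 3.
        self.assertEqual(add_numbers(1, 2), 3)

    def test_negative_inputs(self):
        # Another input/output pair, checking a different case.
        self.assertEqual(add_numbers(-1, -2), -3)

# Typically run from the command line with: python -m unittest
```

Once written, this test can be re-run at any time; if a future change breaks the function, the test fails and flags the bug.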

Unit tests can also help to identify edge cases inside a unit of code. Edge cases are conditions or problems that occur when input is different from the average or expected input.

For example, imagine you have a function which applies a discount to a purchase price. The function applies the discount and returns the adjusted price. 

The discount_percent parameter expects a percentage to be specified as a number between 0 and 1. 

For example, if we call this function with the arguments 200 and 0.15 then we expect the result to be 170.

This function is written without any consideration for edge cases. For example, if we call this function with the arguments 200 and 2, the result is negative 200, which means we now owe this purchaser some money, in addition to whatever they’ve purchased.
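The discount function described above might look something like this sketch (the function names are hypothetical). It shows both the expected case and the edge case, along with a checked version that a unit test would push you toward:

```python
def apply_discount(price, discount_percent):
    # discount_percent is expected to be a number between 0 and 1.
    return price * (1 - discount_percent)

# Expected case: 15% off of 200 is 170.
assert round(apply_discount(200, 0.15), 2) == 170.0

# Edge case: nothing stops a discount greater than 1,
# producing a negative price.
assert apply_discount(200, 2) == -200

# A unit test exercising out-of-range input would catch this;
# the fix is to validate the argument.
def apply_discount_checked(price, discount_percent):
    if not 0 <= discount_percent <= 1:
        raise ValueError("discount_percent must be between 0 and 1")
    return price * (1 - discount_percent)
```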

The more complex a function is, the greater the potential for bugs and edge cases. Unit tests are a codified mechanism for ensuring that a unit of code handles all of its expected cases, as well as likely edge cases.

Once a unit test is created it can be run any time to verify that the unit of code being tested behaves as expected.

Unit tests are intended to validate the logic for an individual unit of code. They’re meant to test a unit in isolation, which means the unit shouldn’t be interacting with external resources such as network connections, databases, files, etc.

To help ensure that a unit can be tested in isolation, it’s common for developers to use a stand-in for external resources. There are different types of stand-ins, such as fakes, mocks, and spies, depending on the programming language and testing tools.

Using stand-ins for external resources can allow a unit of code to be run without the code actually interacting with anything external. This keeps the focus on the unit being tested and allows the tests to be run much more quickly because units don’t need to wait on external resources.
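The stand-in idea can be sketched with Python’s built-in unittest.mock module. The function and names here are hypothetical: the unit’s own logic is just the summing, so a Mock replaces the database and the test never touches a real one.

```python
from unittest.mock import Mock

def cart_total(database, user_id):
    # In production, `database` would be a real connection.
    # The unit's own logic is only the summing below.
    prices = database.get_prices(user_id)
    return sum(prices)

# In the unit test, a Mock stands in for the database.
fake_db = Mock()
fake_db.get_prices.return_value = [100, 250, 50]

# The unit runs instantly, with no external resource involved.
assert cart_total(fake_db, 42) == 400

# We can also verify how the unit used its collaborator.
fake_db.get_prices.assert_called_once_with(42)
```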

Once individual units of code have been tested then they’re ready to be integrated together with other units of code. Unit tests only ensure that a unit of code behaves correctly in isolation. Once integrated together with other units of code and external resources, there are now many more places for bugs and edge conditions to hide.

Unit tests run quickly because they don’t actually interact with external resources. Integration tests, by contrast, are intended to verify that the holistic application behaves as expected.

This adds a bit of testing overhead because if the application requires an external service, such as a database, those need to exist before the tests can be run. 

Where unit tests are focused on verifying that a given input produces the expected output, integration tests are focused on how the full implementation behaves once its pieces are assembled.

For example: imagine that you have a web application. And you want to ensure that the homepage is functioning as expected. 

Your integration test might make an HTTP request to the home page and then make assertions regarding the response data. Assertions such as the response’s status code being 200. An HTTP status code of 200 indicates that everything was successful. You might also want to ensure that some specific text exists in the HTML for the homepage.
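Such a test might be sketched with only Python’s standard library. Here a tiny local server stands in for the real application, and the page text is made up; in practice the request would target a running instance of your app.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HomePage(BaseHTTPRequestHandler):
    """A stand-in web application serving a single homepage."""
    def do_GET(self):
        body = b"<html><body><h1>Welcome to the store</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), HomePage)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The integration test: make a real HTTP request, then assert
# on the status code and the page content.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as response:
    html = response.read().decode()
    assert response.status == 200
    assert "Welcome to the store" in html

server.shutdown()
```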

Integration tests are much more general than unit tests and may interact with a wide range of external resources.

It’s important to note that integration testing is intended to be performed using non-production systems and services. Integration tests should be performed using isolated copies of any required external resources. 

Sometimes the external resources required aren’t practical to be used outside a production environment. For example a cloud-based service such as a managed database. 

In these cases, stand-ins are an option. The types of stand-in used for unit testing can be used in some cases for integration testing. However, when it comes to integration testing the stand-in could be a separate application, API, service, etc.

Alright, so both unit and integration tests help developers to reduce bugs and edge cases. 

As software grows and evolves it can be useful to understand how much of the code is actually executed by the different tests. This is referred to as code coverage. 

Code coverage is the percentage of lines of code that are executed by a collection of tests. The idea is that the more of the code is executed during testing, the better the coverage.

In reality, code coverage is a useful contextual metric. However, blindly striving for 100% coverage in every application shouldn’t be the goal. For example, an application with 60% coverage could be enough, if that 60% verifies that the most important logic behaves as expected.

How much coverage matters depends on the application. For critical applications such as flight control systems, code coverage is a rather important metric. However, it’s less important for something such as a blog page.

Software testing is an important means of producing more robust software. However, it’s also a non-trivial amount of effort. The amount of code used to test an application tends to exceed the amount of code used in the application itself. 

Over my roughly two decades in technology, I’ve come to realize that all software should be considered broken until tests prove otherwise. It’s far too easy to develop code that works only within its limited expected scope. Which means it’s going to behave in unexpected ways when it encounters unexpected input. 

It’s not uncommon for some developers or teams to forgo comprehensive testing due to the level of effort. One of the more common reasons for this is that testing is said to be too time consuming or difficult for the given software.

This is often true when software isn’t written with testing in mind. In order for software to be more easily testable it needs to be more modular. For example: here are two functions which both calculate the same basic thing. They both accept arguments for price and tax. 

One of them accepts an argument for a fee and the other extracts that information from a database.

The function that accepts all of its required input as arguments is much easier to test than the one that connects to the database. The version without the database can be tested by simply passing different arguments to the function and verifying the results. 

The function relying on the database has coupled its functionality to the existence of the database, which means the function will need to be patched to use a fake version of the database for unit testing. That adds unnecessary development overhead when the purpose of the function is to perform a calculation, not connect to a database.
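The two versions described above might look something like this sketch. The names and the fee calculation are hypothetical, and SQLite stands in for whatever database the coupled version depends on:

```python
import sqlite3

def total_price(price, tax, fee):
    """Easy to test: everything it needs arrives as arguments."""
    return price + (price * tax) + fee

def total_price_from_db(price, tax, connection):
    """Harder to test: the fee is coupled to a database lookup."""
    row = connection.execute("SELECT amount FROM fees LIMIT 1").fetchone()
    return price + (price * tax) + row[0]

# The argument-only version is tested by simply calling it
# with different inputs and verifying the results.
assert round(total_price(100, 0.08, 5), 2) == 113.0

# The database version can't even run until a database exists
# (or is patched with a fake).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fees (amount REAL)")
conn.execute("INSERT INTO fees VALUES (5)")
assert round(total_price_from_db(100, 0.08, conn), 2) == 113.0
```

Both produce the same result, but only the first keeps the calculation decoupled from where its inputs come from.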

Testing can be difficult because it’s part art form and part science. If you’re finding your code difficult to test due to tightly coupled resources it’s worth reevaluating the structure of the code. It may be time to refactor the code so that it’s easier to test. 

Okay, this seems like a natural stopping point. Here are your key takeaways for this lesson:

  • The purpose of unit testing is to validate the code logic inside an individual unit of code.

  • The purpose of integration testing is to ensure that disparate units of code work well together towards their shared purpose.

  • Code coverage is a metric used to describe the percentage of code executed when running a collection of tests.

    • It’s a useful contextual metric which hints at the overall value of the tests

Okay, that's going to be all for this lesson. Thanks so much for watching. And I’ll see you in another lesson!

About the Author

Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.
