Python - Testing
In Python unit tests, you can simulate and verify exceptions using the unittest framework's assertRaises method. It checks whether a specific exception is raised while a given block of code executes, letting you confirm that the code under test raises the expected exception under the right conditions.
Example Program:
import unittest

# Function that raises an exception
def divide(a, b):
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

# Unit test class
class TestExceptionSimulation(unittest.TestCase):

    # Test case to check if ValueError is raised
    def test_divide_by_zero_exception(self):
        with self.assertRaises(ValueError) as context:
            # Code block that should raise the exception
            result = divide(10, 0)
        # Access the exception and assert on its details
        self.assertEqual(str(context.exception), "Cannot divide by zero")

    # Test case to check if TypeError is raised
    def test_divide_non_numeric_exception(self):
        with self.assertRaises(TypeError):
            # Code block that should raise the exception
            result = divide("10", 2)

if __name__ == '__main__':
    unittest.main()
Run the tests using the command line:
# Run the tests
# $ python test_exception_simulation.py
Output:
..
----------------------------------------------------------------------
Ran 2 tests in 0.001s

OK
In this example:
- The divide function raises a ValueError when attempting to divide by zero.
- The TestExceptionSimulation class defines two test cases that use the assertRaises method to check that specific exceptions are raised.
- The output shows that both tests passed successfully, indicating that the expected exceptions were raised during the tests.
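A closely related variant is assertRaisesRegex, which checks the exception type and matches its message against a regular expression in a single assertion. Below is a minimal sketch, assuming the divide function from the example above is in scope:
import unittest
# Assumes divide is defined (or imported) as in the example above

class TestExceptionMessage(unittest.TestCase):

    def test_divide_by_zero_message(self):
        # Verify the exception type and its message in one assertion
        with self.assertRaisesRegex(ValueError, "Cannot divide by zero"):
            divide(10, 0)

if __name__ == '__main__':
    unittest.main()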
Purpose of Code Coverage in Testing:
Code coverage is a metric used in software testing to measure the extent to which the source code of a program is executed during testing. The goal of code coverage analysis is to identify areas of the code that have not been exercised by the test cases. This metric helps developers and testers assess the quality of their test suite and identify gaps in test coverage. A higher code coverage percentage indicates that more parts of the code have been tested, but it doesn't guarantee the absence of bugs.
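As a quick, hypothetical illustration of that last point, the test below executes every line of multiply (so a coverage tool would report 100% line coverage) yet never catches the bug, because the single input it checks happens to give the same result for addition and multiplication:
# content of buggy_module.py (hypothetical example)
def multiply(a, b):
    return a + b  # bug: should be a * b

import unittest

class TestMultiply(unittest.TestCase):

    def test_multiply(self):
        # Every line of multiply runs, so coverage reports 100%,
        # but 2 + 2 == 2 * 2, so the wrong operator goes undetected
        self.assertEqual(multiply(2, 2), 4)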
Measuring Code Coverage in Python:
In Python, you can use tools like coverage.py to measure code coverage. This tool provides detailed information about which lines, statements, and branches of your code were executed during the tests. You can integrate it with your test suite to generate coverage reports.
Example Program:
# content of example_module.py
def add(a, b):
    if a > 0:
        return a + b
    else:
        return b

# content of test_example_module.py
import unittest
from example_module import add

class TestAddFunction(unittest.TestCase):

    def test_add_positive_numbers(self):
        result = add(2, 3)
        self.assertEqual(result, 5, "Should be 5")

    def test_add_negative_numbers(self):
        result = add(-2, 3)
        self.assertEqual(result, 3, "Should be 3")

if __name__ == '__main__':
    unittest.main()
Run the tests with coverage:
# Install coverage.py (if not already installed)
# $ pip install coverage
# Run tests with coverage
# $ coverage run test_example_module.py
Generate and view the coverage report:
# Generate coverage report
# $ coverage report -m
Output:
Name                     Stmts   Miss  Cover   Missing
------------------------------------------------------
example_module.py            5      1    80%   4
test_example_module.py      11      0   100%
------------------------------------------------------
TOTAL                       16      1    94%
The coverage report provides information about the coverage percentage for each module and details about lines that were missed during testing.
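coverage.py can also render the same results as an annotated HTML report, which makes it easier to see exactly which lines were missed:
# Generate an HTML report (written to the htmlcov/ directory by default)
# $ coverage html
# Then open htmlcov/index.html in a browser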
Embedding Tests in Documentation with doctest:
The doctest module in Python allows you to embed tests directly within the documentation of your code. The tests are written in a format similar to interactive Python sessions, making it easy to keep the documentation and tests in sync. The doctest module automatically extracts and executes the tests, reporting any failures.
Example Program:
# content of example_module.py
def add(a, b):
    """
    This function adds two numbers.

    >>> add(2, 3)
    5
    >>> add(-2, 3)
    1
    >>> add(0, 0)
    0
    """
    return a + b

if __name__ == "__main__":
    # Run doctests
    import doctest
    doctest.testmod()
Run the doctests using the command line:
# Run doctests
# $ python example_module.py -v
Output:
Trying:
    add(2, 3)
Expecting:
    5
ok
Trying:
    add(-2, 3)
Expecting:
    1
ok
Trying:
    add(0, 0)
Expecting:
    0
ok
1 items had no tests:
    example_module
1 items passed all tests:
   3 tests in example_module.add
3 tests in 2 items.
3 passed and 0 failed.
Test passed.
In this example:
- The add function includes embedded doctests in its docstring, specifying the expected output for different input cases.
- The if __name__ == "__main__" block allows running the doctests when the module is executed directly.
- The output shows that all tests passed successfully, indicating that the function produces the expected results for the specified input cases.
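If you prefer not to add an if __name__ == "__main__" block, the same doctests can also be run without modifying the module by invoking the doctest module from the command line:
# Run doctests directly from the command line
# $ python -m doctest example_module.py -v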
Role of Continuous Integration (CI) in the Testing Process:
Continuous Integration is a software development practice that involves regularly integrating code changes from multiple contributors into a shared repository. The primary goal is to detect and address integration issues early in the development process. CI helps ensure that the software remains in a functional state by automatically building, testing, and validating the codebase after each commit. This practice enhances collaboration, reduces integration problems, and provides faster feedback to developers.
Example Program:
Let's consider a simple Python project with a unit test suite. We'll use a CI tool called GitHub Actions to automate the testing process whenever changes are pushed to the repository.
# content of example_module.py
def add(a, b):
    return a + b

# content of test_example_module.py
import unittest
from example_module import add

class TestAddFunction(unittest.TestCase):

    def test_add_positive_numbers(self):
        result = add(2, 3)
        self.assertEqual(result, 5, "Should be 5")

    def test_add_negative_numbers(self):
        result = add(-2, 3)
        self.assertEqual(result, 1, "Should be 1")
Create a GitHub Actions workflow file (e.g., .github/workflows/python-test.yml):
name: Python Test

on:
  push:
    branches:
      - main

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi

      - name: Run tests
        run: python -m unittest discover
Commit and push the changes to GitHub:
# Commit changes
# $ git add .
# $ git commit -m "Add example Python project with tests"
# $ git push origin main
View the CI workflow on GitHub:
Visit the "Actions" tab in your GitHub repository to see the CI workflow in progress. GitHub Actions will automatically run the defined tests whenever changes are pushed to the repository.
Output:
✓ Run tests
This example demonstrates the integration of continuous testing into the development process using GitHub Actions. The CI workflow runs the unit tests whenever code changes are pushed, providing fast feedback on the status of the codebase.
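The two ideas above can also be combined: the workflow's final step can be swapped for a coverage run so that every push reports how much of the code the tests exercised. A minimal sketch (assuming coverage.py is acceptable as a CI dependency), replacing the "Run tests" step of the workflow above:
      - name: Run tests with coverage
        run: |
          pip install coverage
          coverage run -m unittest discover
          coverage report -m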