Getting Started Testing
Ned Batchelder (@nedbat)
Why Test?
- Know if your code works
- Don't think of testing as just an item to check off a list
 
 
- Save time
 
- Better code
 
- Remove fear
 
- "Debugging is hard, testing is easy"
 
- Really good way to write really good code
 
Roadmap
- Growing tests
 
unittest 
- Mocks
 
First Principles: Growing Tests
First test: interactive
- play with your code in the shell
 
- Good

- Bad
    - not repeatable
    - labor intensive
    - is it right?
 
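A sketch of what that interactive session amounts to, with a hypothetical Thing class standing in for the code under test:

```python
# A hypothetical Thing class, standing in for the code you are playing with.
class Thing:
    def __init__(self):
        self.value = 1

# In the shell you type expressions one at a time and eyeball each result:
t = Thing()
print(t.value)   # is 1 right? You decide, by eye, every single time.
```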
 
Second test: standalone
- Good
    - testing the code
    - repeatable
    - low effort
 
 
- Bad
    - results still checked by eye
 
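Roughly what the standalone script might look like (same hypothetical Thing):

```python
# test_thing.py -- the same pokes, saved as a script.
class Thing:                      # hypothetical code under test
    def __init__(self):
        self.value = 1

# Running the script repeats the checks exactly and cheaply, but a human
# still has to read the printed output and judge whether it is right.
thing = Thing()
print("thing.value:", thing.value)
```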
Third test: expected results
Fourth test: check results automatically
...we want to find a library to run tests for us
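Sketched together, steps three and four (hypothetical Thing again):

```python
class Thing:                      # hypothetical code under test
    def __init__(self):
        self.value = 1

thing = Thing()
expected = 1                      # third test: write down the expected result
# fourth test: have the computer compare actual to expected
assert thing.value == expected, f"got {thing.value}, expected {expected}"
print("ok")                       # a traceback here would mean failure
```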
Good Tests
- Automated
 
- Fast
 
- Reliable
 
- Informative
 
- Focused
 
unittest
- in the Python standard library
 
- infrastructure for well-structured tests
 
- patterned on xUnit
 
Create a test_thing.py file
    import unittest
    from code import Thing
    class TestThing(unittest.TestCase):
        def test_some_things(self):
            thing = Thing()
            self.assertEqual(thing.value, 1)
Run it
    $ python -m unittest test_thing
unittest will create new instances of the TestCase subclass for each test* method, for isolation.
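A small demo of that isolation (hypothetical test class):

```python
import unittest

class TestIsolation(unittest.TestCase):
    # unittest builds a brand-new TestIsolation instance for each
    # test_* method, so state can't leak from one test to another.
    def test_one(self):
        self.stash = "set in test_one"

    def test_two(self):
        # Runs on a fresh instance: test_one's attribute is not here.
        self.assertFalse(hasattr(self, "stash"))
```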
Pro Tip: Write your own TestCase base class for a project
- Good place to write really good domain-specific helpers
 
- People often view/write tests as rote boilerplate, rather than actual, factorable code
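One possible shape for such a base class (all names here are made up):

```python
import unittest

class Thing:                        # stand-in for the real class under test
    def __init__(self):
        self.value = 1

class ProjectTestCase(unittest.TestCase):
    # Hypothetical project-wide base class: one home for domain-specific
    # helpers, written (and factored) like any other real code.
    def assertValidThing(self, thing):
        self.assertTrue(hasattr(thing, "value"))
        self.assertGreaterEqual(thing.value, 0)

class TestThing(ProjectTestCase):
    def test_fresh_thing(self):
        self.assertValidThing(Thing())
```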
 
Can't simply call functions that we expect to raise exceptions
    # use TestCase.assertRaises as a context manager
    with self.assertRaises(TypeError):
        Thing().some_method('bad input')
setUp method called before each test* method
- do all your setup there
 
- clean up in tearDown, if necessary, e.g. when tests create files
- these help with isolation
 
- look into fixtures if you need lots of data
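A sketch of the setUp/tearDown pattern with a temporary file (hypothetical test):

```python
import os
import tempfile
import unittest

class TestWithFile(unittest.TestCase):
    # Each test gets its own file from setUp; tearDown removes it,
    # even if the test fails, so tests stay isolated.
    def setUp(self):
        fd, self.path = tempfile.mkstemp()
        os.write(fd, b"hello")
        os.close(fd)

    def tearDown(self):
        os.remove(self.path)

    def test_file_contents(self):
        with open(self.path, "rb") as f:
            self.assertEqual(f.read(), b"hello")
```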
 
Tests are real code!
- test your test helpers, too!
 
Mocks
- "The problem with testing missile launch sites is missile launches take a long time."
 
- Don't want dependencies in your tests.
 
- The question is: assuming my dependency is working, does my code work?
 
Fake implementation of methods
- Good: test results are predictable
 
- Bad: some code isn't tested
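For example, a hypothetical missile-site class with its dangerous method faked in a subclass:

```python
class MissileSite:
    def launch(self):                       # the real thing: slow, one-shot
        raise RuntimeError("not in a test!")

    def fire_if_armed(self, armed):         # the logic we actually want to test
        if not armed:
            return "held"
        self.launch()
        return "fired"

class FakeSite(MissileSite):
    launched = False
    def launch(self):                       # fake: instant and predictable
        self.launched = True

site = FakeSite()
assert site.fire_if_armed(True) == "fired"  # the logic ran...
assert site.launched                        # ...but no real launch happened
```

The untested part is the real launch() itself: predictable results, at the cost of some code never running.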
 
Fake deeper methods, instead
- Monkeypatch urllib.request.urlopen, for example
 
- Insert an object that behaves just like the real piece
 
- Good
    - stdlib is stubbed
    - all our code is run
 
 
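A hand-rolled monkeypatch might look like this (page_size is a made-up function under test):

```python
import io
import urllib.request

def page_size(url):
    # Hypothetical code under test: depends on the network through urlopen.
    with urllib.request.urlopen(url) as resp:
        return len(resp.read())

def fake_urlopen(url):
    # Behaves just like the real piece, but returns canned bytes instantly.
    return io.BytesIO(b"<html>hi</html>")

real_urlopen = urllib.request.urlopen
urllib.request.urlopen = fake_urlopen       # monkeypatch the stdlib
try:
    assert page_size("http://example.com") == 15
finally:
    urllib.request.urlopen = real_urlopen   # always restore the real one
```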
Better: Mock
Test Doubles
- powerful: isolates code
 
- focuses test
 
- removes speed bumps
 
- BUT: fragile tests; what if the backend changes?
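The same idea with unittest.mock.patch instead of a hand-rolled patch (page_size is again a made-up function):

```python
from unittest import mock
import urllib.request

def page_size(url):
    # Hypothetical code under test, same network dependency as before.
    with urllib.request.urlopen(url) as resp:
        return len(resp.read())

with mock.patch("urllib.request.urlopen") as fake:
    # Configure the double: the context manager yields a fake response.
    fake.return_value.__enter__.return_value.read.return_value = b"data!"
    assert page_size("http://example.com") == 5
    # Focused, but fragile: this pins down exactly how urlopen was called.
    fake.assert_called_once_with("http://example.com")
```

mock.patch restores the real urlopen automatically when the `with` block exits.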
 
addCleanup: nicer than tearDown 
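A sketch of addCleanup (hypothetical test):

```python
import os
import tempfile
import unittest

class TestCleanup(unittest.TestCase):
    def test_makes_a_file(self):
        fd, path = tempfile.mkstemp()
        os.close(fd)
        # Register cleanup right where the resource is created -- it runs
        # after the test, even on failure, with no tearDown bookkeeping.
        self.addCleanup(os.remove, path)
        self.assertTrue(os.path.exists(path))
```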
doctest: only for testing docs!!! 
nose, py.test: better test runners 
ddt: data-driven tests 
coverage: great tool! 
Selenium: browser tests 
Jenkins, TravisCI: run tests all the time