All credit for the original Go Gopher goes to Renee French (Creative Commons Attribution 3.0 licensed).
I really enjoy Go as a language and as a tool for building solutions. A key aspect of getting things done is making sure that a solution actually works. That's where testing comes in.
If you haven't read my posts on Effective Unit Testing or Test Driven Development (TDD), some of the points expressed there are reiterated here, but those posts go a lot deeper into the concepts. This one is about the testing support that exists for Go (Golang) and how to structure your tests in practice to make them more maintainable and less brittle. I do expect that you either have a working familiarity with Go's syntax and language constructs, or that you're just curious about Go. All are welcome.
Do you test?
As a quick level set, I understand that some folks aren't writing tests. The prevailing perspective is that it takes too long, or that tests break all the time, or the code base is super old, or insert any number of reasons here including just not knowing how to write good tests. I'm not going to try to convince anyone to test their code today, but I will say I've found that writing good tests is just as much of a disciplined craft as writing good code. I'll do my best to show some of that here.
My Approach to Testing
Here are a few pragmatic ideas to start with, to understand the scope of testing and to stay on track with the highest priorities.
- 100% test coverage makes a much better goal than a hard standard in most cases. Obviously, if there's a risk to human safety or potential legal issues, or if the functionality is rather trivial, then maybe that's a different story.
- If you're using TDD, then most of your critical code will be pretty well tested already, but that approach might not work for your situation, especially if you don't have a lot of practice with it. So it might help to first identify the "critical paths" through the logic of your application to understand what must be tested vs. what should be tested.
- Global or package global variables can make testing more difficult. Try to avoid them for anything other than constants or behavior switches specifically for testing (as seen later).
- Tests stop bugs. Bug reports and their fixes should always result in more testing.
- Ideally, tests should break only when there is an actual problem. When tests are too familiar with the inner workings of the implementation - instead of making assertions about an abstraction - then tests can get too brittle and difficult to change. Sometimes it's really hard to find such a subjective middle ground.
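As a toy illustration of that middle ground (my own example, not from the task tracker), consider a store with an unexported slice: a test that inspects the field directly breaks the moment the internal representation changes, while a test that goes through the exported behavior survives a refactor.

```go
// memStore is a toy type with internal state we might later change
// (e.g. swap the slice for a map or a file).
type memStore struct {
	entries []string
}

func (m *memStore) Save(e string) {
	m.entries = append(m.entries, e)
}

// Count is part of the abstraction; tests should lean on this.
func (m *memStore) Count() int {
	return len(m.entries)
}

// Brittle: asserting on m.entries couples the test to the slice itself.
// Robust: asserting on m.Count() couples it only to promised behavior.
```

If the slice is later replaced by another structure, the behavioral assertion keeps passing while the field-level one stops compiling.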
Where to start?
Go includes support for testing in the standard library, so we'll start there before moving to a framework, to understand what's happening under the covers. Before we write test code, we'll need something to test. I'll be coding and testing a simple file-based persistence mechanism for task information (maybe you can incorporate it into your todo list app). The code for this example will be posted to GitHub here.
Identifying Scope
First, let's list a few things that our feature should be able to do, and then we can identify the critical path. This would certainly be a small piece of a wider feature set.
A time entry should be able to have a free-form description.
This indicates that the description is largely unbounded. Cool, then a simple string should do. We may want to prevent abuse of this and limit the length, but that constraint can be added later.
A time entry will have a start and an end time.
Our data structure is already taking shape with start time, end time, and description.
All time entries for the day can be shown with a "list" command.
We can assume that we need to provide some way to list the day's tasks and that's about it.
Pen to paper
We'll stick with the simple-to-write case of a JSON file with a hard-coded path for now. You'll see how we can code up our tests to not break (much) if we have to change direction.
I like to create packages with clear, discrete sets of functionality, so I'll create a persistence package to start encompassing our feature. Here's the directory structure so far.
<ROOT_DIR>
+-persistence
| +-dto.go
| +-store.go
| +-store_test.go
+-main.go
+-go.mod
+-go.sum
And the scaffolding in store.go.
// Main data type
type TimeEntry struct {
    Start       time.Time `json:"start"`
    End         time.Time `json:"end"`
    Description string    `json:"description"`
}

// Interface to define interactions
type EntryStore interface {
    ListEntriesToday() ([]string, error)
    SaveEntry(entry *TimeEntry) error
}

// Pre-defined error types
type ErrFailRead error
type ErrFailWrite error
It's generally useful to start defining interfaces early, especially if we want to change the method of persistence later. This allows us to write tests against our abstraction and swap out the simplified implementation later while still enforcing the same set of expectations, making our tests less brittle and more maintainable. If this is difficult to do, sometimes it's faster and more efficient to just stop and think about the design. There's more that goes into applications than code, and a well-thought-out design is critical to efficient delivery.
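To make that concrete, here's a sketch of writing the shared expectations against the EntryStore abstraction once, so a later JSON-to-database swap can reuse the same checks. checkEntryStore and memEntryStore are my own illustrations, not code from the repo, and the types are mirrored here so the sketch stands alone.

```go
import (
	"fmt"
	"time"
)

// Types mirrored from store.go so this sketch is self-contained.
type TimeEntry struct {
	Start       time.Time
	End         time.Time
	Description string
}

type EntryStore interface {
	ListEntriesToday() ([]string, error)
	SaveEntry(entry *TimeEntry) error
}

// checkEntryStore encodes the shared expectations once; any
// implementation we write later should satisfy it unchanged.
func checkEntryStore(store EntryStore) error {
	entry := &TimeEntry{
		Start:       time.Now().Add(-5 * time.Second),
		End:         time.Now(),
		Description: "demo",
	}
	if err := store.SaveEntry(entry); err != nil {
		return fmt.Errorf("SaveEntry failed: %w", err)
	}
	entries, err := store.ListEntriesToday()
	if err != nil {
		return fmt.Errorf("ListEntriesToday failed: %w", err)
	}
	if len(entries) != 1 {
		return fmt.Errorf("expected 1 entry, got %d", len(entries))
	}
	return nil
}

// memEntryStore is a throwaway in-memory implementation used only to
// show the same checks running against a second backend.
type memEntryStore struct {
	lines []string
}

func (m *memEntryStore) SaveEntry(e *TimeEntry) error {
	m.lines = append(m.lines, e.Description)
	return nil
}

func (m *memEntryStore) ListEntriesToday() ([]string, error) {
	return m.lines, nil
}
```

A real test would call checkEntryStore from a *testing.T test for each implementation and fail on a non-nil error.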
Skeleton Implementation
Now that we have the interface designed, we need a consistent way to get an implementation and ensure that it's always aligned with any interface changes. We also need to ensure that the expected runtime behavior doesn't conflict with a test run so we can test the tool as well as use it in our day-to-day.
var trackerFileName = ".tracker.json"

var _ EntryStore = (*jsonEntryStore)(nil)

type jsonEntryStore struct{}

func (j *jsonEntryStore) ListEntriesToday() ([]string, error) {
    panic("implement me")
}

func (j *jsonEntryStore) SaveEntry(entry *TimeEntry) error {
    panic("implement me")
}

func GetEntryStore() (store EntryStore, err error) {
    return nil, errors.New("implement me")
}
We're going to use trackerFileName as a way to change the file loaded during testing. This is the "test switch" I alluded to earlier, and a case where a package-level variable is actually really helpful. Also note the next line.
var _ EntryStore = (*jsonEntryStore)(nil)
That's a helpful idiom that lets the compiler check interface compliance for us: the code can't compile if that var cannot be assigned, and no variable is actually created because we set the name to _. Since we're assigning a nil pointer, there's no allocation either. Here's the structure for reference.
var _ InterfaceType = (*implementationType)(nil)
Testing the interface
Finally, the test code! You might have noticed that testing is as much about the code design as the test cases themselves. Here's a simple test to ensure base level functionality.
func TestGetEntryStore(t *testing.T) {
    trackerFileName = "testfile.json"

    defer func() {
        if r := recover(); r != nil {
            debug.PrintStack()
            t.Fatalf("Panic recovered: %v\n", r)
        }
    }()

    store, err := GetEntryStore()
    if err != nil {
        t.Fatalf("Failed to get store: %v\n", err)
    }
    if store == nil {
        t.Fatalf("Store is nil")
    }
}
- Note that I set trackerFileName to something other than the default. I don't want to accidentally overwrite/delete the actual data store.
- Also remember that this change will persist across test cases. Changing trackerFileName here will change it for other test cases, but we don't know in what order the test cases will be run, so we'll set it in all test cases.
- The panic handler does little more than recover the panic, print a descriptive message, and fail the test. If your IDE supports it, I'd recommend creating a shortcut for this snippet because it comes up a lot.
- Next we call our GetEntryStore function (which will fail, of course) and we can get started on the implementation with the confidence that comes with test coverage.
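One way to keep that override from leaking between test cases is a small helper of my own devising (setTestFile isn't in the repo): it swaps the file name and returns a function that restores the previous value.

```go
// Mirrors the package variable from store.go so the sketch stands alone.
var trackerFileName = ".tracker.json"

// setTestFile points the store at a throwaway file and returns a
// restore function; call it as: defer setTestFile("testfile.json")()
func setTestFile(name string) (restore func()) {
	prev := trackerFileName
	trackerFileName = name
	return func() { trackerFileName = prev }
}
```

On Go 1.14 and later, t.Cleanup(setTestFile("testfile.json")) inside a test achieves the same thing without the defer.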
You should see something like this if you're following along and you run the test.
=== RUN TestGetEntryStore
store_test.go:20: Failed to get store: implement me
--- FAIL: TestGetEntryStore (0.00s)
FAIL
Process finished with exit code 1
I ended up with this for GetEntryStore. I left the actual store methods unimplemented because I don't have testing ready for them yet. (Notice that jsonEntryStore has also picked up an entries field to hold the loaded data.)
func GetEntryStore() (store EntryStore, err error) {
    homeDir, err := os.UserHomeDir()
    if err != nil {
        return nil, ErrFailRead(errors.New("unable to locate user home dir"))
    }

    storePath := path.Join(homeDir, trackerFileName)
    storeFile, err := os.OpenFile(storePath, os.O_CREATE|os.O_RDWR, 0644)
    if err != nil {
        return nil, ErrFailRead(fmt.Errorf("unable to open JSON store: %v\n", err))
    }
    defer storeFile.Close()

    data, err := ioutil.ReadAll(storeFile)
    if err != nil {
        return nil, ErrFailRead(fmt.Errorf("unable to read store file: %v\n", err))
    }
    if len(data) == 0 {
        return &jsonEntryStore{}, nil
    }

    var entries []*TimeEntry
    err = json.Unmarshal(data, &entries)
    if err != nil {
        return nil, ErrFailRead(fmt.Errorf("incompatible or corrupted store file: %v\n", err))
    }

    return &jsonEntryStore{entries: entries}, nil
}
This is definitely more of a TDD flow. At this step in the process I have a passing test for a specific feature, so now I'll refactor and commit my change.
Testing Frameworks
Our test above isn't that complex, but it's easy to see how adding a bunch of if statements for every condition we're checking could get tiresome, especially if we're confirming things about the internal structure of each element in our test store. In that case it isn't a simple if; the idiom is more like this.
if got, want := len(store.entries), 0; got != want {
    t.Fatalf("Expected store to have len %d, but had length %d\n", want, got)
}
The more complex the comparison, the more fields that we're checking, the more boilerplate that needs to be typed.
Luckily, there are solutions to this problem. One such solution is Testify, which has become a bit of a standard add-on to many projects for me, but there are many alternatives. It can be added to your project like so.
> go get -u github.com/stretchr/testify
go: github.com/stretchr/testify upgrade => v1.7.0
go: github.com/davecgh/go-spew upgrade => v1.1.1
go: github.com/stretchr/objx upgrade => v0.3.0
go: gopkg.in/yaml.v3 upgrade => v3.0.0-20210107192922-496545a6307b
go: downloading github.com/stretchr/objx v0.3.0
go: downloading gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b
Now we can change the test case above to be more readable.
import (
    /* ... */

    // Funky import to make initialization more readable.
    testify "github.com/stretchr/testify/require"

    /* ... */
)

func TestGetEntryStore(t *testing.T) {
    defer func() {
        if r := recover(); r != nil {
            debug.PrintStack()
            t.Fatalf("Panic recovered: %v\n", r)
        }
    }()

    trackerFileName = "testfile.json"

    // Initialize testify with the *testing.T provided to the test case.
    assert := testify.New(t)

    store, err := GetEntryStore()

    // Simple to write (and read) assertions.
    assert.NoError(err, "Failed to get store: %v\n", err)
    assert.NotNil(store)
}
That's not a big difference in this case, but it definitely pays off for more complex verification.
Note the import at the top. I haven't seen this used a lot in the wild, but it's something I personally do with Testify to ensure I'm always using the require portion, and the name assert makes the test read well with less noise. Plus it's easier to switch between plain assert and require because it only requires changing the import line. See the docs on Testify's require module.
Writing the rest of the store
Now we'll really see the benefits of Testify when we implement the rest of the store methods. I'll also show a common pattern for writing positive and negative tests. In case you're not familiar with the terms:
- Positive tests: Test the happy path of a function call. Ensures that valid parameters are considered valid.
- Negative tests: Just the opposite. Ensure that sufficient input validation exists to prevent failure scenarios. We want an error to be returned in this case.
Happy Path
First, I usually implement the happy path by laying out the base procedure. Then I run the test to ensure it fails and get coding. This is the test case I came up with, after adding a bit here and there to ensure that most code paths are covered.
// Consistent formatting
func TestFormatEntry(t *testing.T) {
    assert := testify.New(t)

    start := time.Now().Add(-5 * time.Second)
    end := time.Now()
    entry := &TimeEntry{
        Start:       start,
        End:         end,
        Description: "Some description",
    }

    output := formatEntry(entry)
    assert.Equal(fmt.Sprintf("%s - %s: Some description", start.Format("15:04:05"), end.Format("15:04:05")), output)

    assert.Panics(func() {
        formatEntry(nil)
    })
}

// Full flow
func TestJsonEntryStore_SaveAndRetrieve(t *testing.T) {
    assert := testify.New(t)
    trackerFileName = "testfile.json"
    defer cleanupTestFile()
    defer func() {
        if r := recover(); r != nil {
            _ = os.Remove(trackerFileName)
            debug.PrintStack()
            t.Fatalf("Panic recovered: %v\n", r)
        }
    }()

    store, err := GetEntryStore()
    assert.NoError(err, "Failed to get store: %v\n", err)
    assert.NotNil(store)

    entry := &TimeEntry{
        Start:       time.Now().Add(-5 * time.Second),
        End:         time.Now(),
        Description: "A helpful description",
    }
    err = store.SaveEntry(entry)
    assert.NoError(err)

    // Confirm I/O operations as well by loading entries fresh
    newStore, err := GetEntryStore()
    assert.NoError(err, "Failed to get new store: %v\n", err)
    assert.NotNil(newStore)

    entries, err := newStore.ListEntriesToday()
    assert.NoError(err)
    assert.Len(entries, 1)
    assert.Equal(formatEntry(entry), entries[0])
}

func cleanupTestFile() {
    // Delete the file when we're done
    storePath, err := getStorePath()
    if err != nil {
        fmt.Printf("Unable to find tracker path: %v\n", err)
    }
    fmt.Printf("Deleting store file %s\n", storePath)
    _ = os.Remove(storePath)
}
This relies on a formatEntry function that should be present in alternate implementations. In essence, we expect that the function exists and that it formats things the same way.
Leaky Abstraction?
It would be fair to say that cleanupTestFile knows too much about the file-based implementation and maybe shouldn't be part of a general testing approach. I justify this direction with the fact that I/O operations fit well within the critical path for the functionality at hand. If the app can't read from or write to the file, then it's fundamentally broken. Feel free to battle it out in the comments if you feel so inclined. :)
Positive/Negative tests
Lastly for this post, I'll show an example of table-based tests, also called data-driven tests. This is a technique I've found very useful for checking many different failure scenarios, and for establishing a solid foundation to add new scenarios as the app matures.
Let's say we want to ensure that an entry passes a certain set of validations before it can be added to the store. This can go a long way toward preventing bugs in the future, regardless how bulletproof the interface is.
func TestJsonEntryStore_SaveEntryNeg(t *testing.T) {
    // Like I said, this comes up a lot.
    defer func() {
        if r := recover(); r != nil {
            debug.PrintStack()
            t.Fatalf("Panic recovered: %v\n", r)
        }
    }()

    tests := map[string]*TimeEntry{
        "Nil entry": nil,
        "Missing start time": {
            Start:       zero, // "zero" is defined in dto.go as `var zero = time.Time{}`
            End:         time.Now(),
            Description: "abc",
        },
        "Missing end time": {
            Start:       time.Now(),
            End:         zero,
            Description: "abc",
        },
        "Missing description": {
            Start:       time.Now().Add(-5 * time.Second),
            End:         time.Now(),
            Description: "",
        },
        "End before start": {
            Start:       time.Now(),
            End:         time.Now().Add(-5 * time.Second),
            Description: "abc",
        },
    }

    for name, entry := range tests {
        t.Run(name, func(t *testing.T) {
            assert := testify.New(t)
            trackerFileName = "testfile.json"
            defer cleanupTestFile()

            store, err := GetEntryStore()
            assert.NoError(err, "Failed to get store: %v\n", err)
            assert.NotNil(store)

            err = store.SaveEntry(entry)
            assert.Error(err)
            _, ok := err.(ErrValidation)
            assert.True(ok, "Error should be an ErrValidation")
        })
    }
}
This is perfect for situations where many different inputs are applied to the same set of validations. t.Run allows us to parameterize this logic, and Testify makes it a breeze to write concise assertions.
Note that I created a new error type to clearly differentiate persistence errors from validation errors. A simple type assertion is all we need to verify this.
_, ok := err.(ErrValidation)
assert.True(ok, "Error should be an ErrValidation")
Once these are all passing, there's even a separate output for each one showing their individual passing statuses.
=== RUN TestJsonEntryStore_SaveEntryNeg/Nil_entry
Deleting store file /your/home/path/testfile.json
--- PASS: TestJsonEntryStore_SaveEntryNeg (0.01s)
--- PASS: TestJsonEntryStore_SaveEntryNeg/Missing_start_time (0.00s)
--- PASS: TestJsonEntryStore_SaveEntryNeg/Missing_end_time (0.00s)
--- PASS: TestJsonEntryStore_SaveEntryNeg/Missing_description (0.00s)
--- PASS: TestJsonEntryStore_SaveEntryNeg/End_before_start (0.00s)
--- PASS: TestJsonEntryStore_SaveEntryNeg/Nil_entry (0.00s)
PASS
Process finished with exit code 0
Conclusion
Hopefully this was informative and gave you some tips to improve your own testing efforts. I'm sure there are some neat tips and tricks out there that I don't know about, so feel free to share them in the comments below. Thanks!
Oh, and let me know if you want to see a complete time tracker CLI app to see the full scope of testing. If so, I'll be sure to keep posting the rest of the process. :)