Peanut Butter, Jelly, and Code Quality

Guest post by Tim Rosenblatt, Senior Engineer at MoovWeb


Software imperfections, better known as bugs, have been around as long as there has been software. In fact, the best-known bug was arguably more hardware than software: Grace Hopper's famous moth, found in a relay of the Harvard Mark II and taped into the logbook.

Engineers have always looked for ways to minimize bugs, and they've come up with many clever solutions. Some programming languages are designed with strict rules so that certain kinds of errors are exposed while the code is being written (type checking). Other solutions add an extra step to the build and run special tools to find common issues (static analysis, linters). Yet another technique is to write special code that automates checking that the application works as expected (unit and integration testing). This is why my company, Ship.io, makes software that automates all of this.
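
To make those categories concrete, here is a minimal sketch of the last one, a unit test. Everything in it (the formatPrice function, the test names) is hypothetical and invented for illustration; the pattern is what matters: code whose only job is to confirm that other code behaves as expected.

```swift
import Foundation
import XCTest

// Hypothetical function under test: formats a price in cents
// as a display string. Invented here purely for illustration.
func formatPrice(cents: Int) -> String {
    return String(format: "$%d.%02d", cents / 100, cents % 100)
}

final class PriceFormattingTests: XCTestCase {
    // The compiler's type checking already guarantees that formatPrice
    // takes an Int and returns a String; these tests check the behavior
    // that the types can't express.
    func testWholeDollars() {
        XCTAssertEqual(formatPrice(cents: 500), "$5.00")
    }

    func testOddCents() {
        XCTAssertEqual(formatPrice(cents: 1299), "$12.99")
    }
}
```

A continuous integration service can then run suites like this on every change, which is exactly the kind of monotonous checking best left to machines.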

These kinds of solutions tend to be popular because they share one thing in common: they don't require a human to spend time checking that the code works the way it should. We all know the standard criticisms of wetware: humans are slower than computers, less consistent, and more expensive.

The truth is that we've done a great job of removing some of the more monotonous and time-consuming aspects of software testing. Still, there is no substitute for a human applying a little common sense and judgment when deciding whether a piece of software is good enough to release to the end user.

This leads me to the title of this post: peanut butter, jelly, and code quality. Peanut butter and jelly, as individual ingredients, are delicious. If you haven’t recently eaten two pieces of soft bread with peanut butter and jelly between them…maybe you should. They’re both good on their own, but there’s something so special about combining them that the combination is simply referred to by its initials to save time: PB and J.

To me, that’s a fantastic metaphor for the subject of automatic and manual testing. Both are good, neither is enough, and the combination is better than the sum of its parts.

One angle on the subject is worth keeping in mind: automated testing is not going anywhere, and its tools get better all the time. Manual testing is also not going anywhere, but its tools don't change much.

In most cases, bug reporting hasn't changed much in the past half century: a human sees something that doesn't look right and sends the coder a message attempting to explain what they did to trigger the bug and what the undesirable effect was. The coder then tries to follow those instructions to reproduce the bug in a special debugging environment, hoping to find and fix the problem. Sometimes this process results in better software, but all too often it takes a long time and ends in those dreaded words: "works for me."

Actually, there is one thing that has changed with respect to bug reporting in the last 50 years: the time spent by a programmer attempting to reproduce the bug has gotten a lot more expensive.

Since we know that manual testing is here to stay, let’s set our sights on making it more productive.

Some techniques involve writing special debugging code that logs what a user was doing. This is a good technique, and a frequently used one, but it has real costs: extra code must be written, and the resulting debug logs still require a programmer to spend time deciphering them.
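
As a rough sketch of what that looks like in practice, here is a tiny "breadcrumb" logger that records recent user actions so they can be attached to a bug report. Every name in it is hypothetical, not drawn from any particular library, and a production version would need thread safety and persistence.

```swift
import Foundation

// A minimal in-memory breadcrumb log: records what the user did so a
// bug report can be paired with the actions that led up to it.
// Illustrative only; a real implementation would synchronize access
// and persist entries across launches.
final class BreadcrumbLog {
    static let shared = BreadcrumbLog()
    private var entries: [String] = []
    private let maxEntries = 200  // keep only the most recent actions

    private init() {}

    func record(_ action: String) {
        entries.append("\(Date()) \(action)")
        if entries.count > maxEntries {
            entries.removeFirst()
        }
    }

    // Dumped and attached to a bug report when something goes wrong.
    func dump() -> String {
        return entries.joined(separator: "\n")
    }
}

// Usage: sprinkle calls at interesting points in the UI code, e.g.
//   BreadcrumbLog.shared.record("tapped Checkout button")
//   BreadcrumbLog.shared.record("network request failed: timeout")
```

Capping the buffer keeps the attachment small, since only the actions immediately before a failure usually matter, but a programmer still has to read and interpret the output.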

Another technique is to release special builds of the software that record what the user does: not just the debug logs, but the screen itself, the way Testfire does. (Actually, Testfire captures debug logs in addition to video, which is a nice bonus.)

Speaking as an engineer, I know that I don't always get what I need from a user's explanation of a bug, and I don't always get what I need from the debug logs. I also know that if I can watch a user go through the process, I'll understand it more quickly, and I may find ways to improve the user experience at the same time.

Using a tool like Testfire means I get the most information possible, so I can quickly fix the bugs that make it past my automated tests. It gives me another layer of assurance that I'm going to release great software, and it frees me to spend my time on more valuable things…like making a PB and J sandwich.
