Friday, June 18, 2021

I don't think you should only test manually

In response to my post "When it's Ok to break the build", some people thought I was trying to argue against using CI/CD tooling to test changes.

This isn't the case.

Running tests only manually, or only on your main development machine, can be slow and a waste of resources.

Creating a draft PR so that CI can run a large test suite against it, and only promoting it to a full PR once everything has been confirmed as ok, is a good way of working.

Pushing local changes to a private branch that has a CI/CD process running tests against it is another good way of working.

Either of the above is also a good way of running extensive test suites, or tests that need multiple environments or varied configurations that would be impractical to set up or maintain on each developer's machine.
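As a concrete illustration, the draft-PR flow might look something like this with Git and the GitHub CLI. This is a sketch, not a prescription: the branch name and messages are invented, and `gh` is only one way to drive it.

```shell
# Work on a feature branch and push it so the CI process can see it.
git switch -c my-feature
git commit -am "Implement the change"
git push -u origin my-feature

# Open a *draft* PR: CI runs the full test suite against it, but
# reviewers can see the change isn't ready for full review yet.
gh pr create --draft --title "My change" --body "Let CI run the full suite first"

# Once everything is confirmed green, promote the draft to a full PR.
gh pr ready
```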

My objections and concerns come when a full (non-draft) PR is raised before tests have been run.

BTW. I'm currently planning a book all about testing. It will focus more on the concepts than the details, but I think I can bring some new ideas to the subject. (Hopefully, I'll do a better job at avoiding confusion than I did in that other post.) If you want to know more, be sure to sign up for my (very occasional) newsletter. 

Thursday, June 17, 2021

When it's Ok to break the build

For some teams, and some developers, breaking the build is a terrible thing that should be avoided at all costs. (Not just because you may have to buy everyone donuts if you do.)

[Image: a Dunkin' Donuts donut box]

I used to be fine with anyone breaking the build (and not just because I like donuts.)

This came with some qualifications:

  • You shouldn't do it deliberately.
  • You should be making small, frequent changes that are less likely to break anything.
  • You should fix the build ASAP if you do. (You don't go home having broken the build as you'll be blocking other people.)
But, all in all, I'd not sweat it if someone broke the build. I'd rather they were doing that because they were shipping code and not sitting on massive changes or holding back because they were worried about breaking it.

For the most part, this approach has worked well for many years.

But recently, I've encountered behavior that's making me reconsider this idea.

The behavior I've seen is developers using the CI process as their only way of validating their code.

I've seen attitudes like:

"If it fails, I'll find out when the CI process tries to build it as part of the PR automation."

"I won't bother running the tests myself as the CI process will do it for me."

"If the CI process fails, it will create a new issue, and if I fix that, it looks like I'm closing more issues."

These thoughts scare me:

  • They indicate a lack of professionalism and pride in the work being done.
    (Why wouldn't you check that it works before saying you're finished?)

  • They indicate a lack of respect for the value of testing.
    (Why wouldn't you check that there are no unintended consequences of the changes you've made by running the tests? Ok, it may take a while, but surely it's better to be safe than sorry. If running them takes a long time, isn't it better to know about problems sooner? And especially to know about them before you get engrossed in whatever the next thing is.)

  • They suggest you can work the system to look more valuable and productive, and presumably, the organization rewards this.
    (This sounds like you've found a way to make your life simpler at a cost to the product, the company, and the team/your peers. That doesn't sound healthy. It sounds like you're manipulating the data to appear to be closing more bugs when actually you're the cause of many of them.)

Don't be like that.
Make an effort to ensure that your code works, doesn't break anything else or have unintended side effects, and that you're helping your team to all be as productive as possible.

So, when is it ok to break the build?

If it's an accident that you tried to avoid, and one that you'll fix quickly.

When is it not Ok to break the build?

There are many reasons, but my current bugbear is this: it's not ok to break the build out of laziness.

Saturday, June 05, 2021

Is this TDD? (When you can't do: Red-Green-Refactor)

I've been thinking a lot about testing recently.

Specifically, I've been thinking about using TDD with legacy code. (Yes, I've got Michael Feathers's book, Working Effectively with Legacy Code, and will need to reread it. This is probably covered there.)

So, I've just received a bug report that I decide to work on and fix:

  1. I created some tests that recreate the issue. Running them verified that they reproduced the issue (they failed), so my tests were RED.
  2. I then changed the code so the tests passed. Now my tests were all GREEN.
  3. Then I tidied up the code a bit more. (I REFACTORED the code.)
  4. Finally, I ran the tests again to make sure they all still pass. They did.

  5. But then I thought of another edge case that wasn't currently tested but probably should be.
  6. So, I created a test. And it passed!
Oh, no! It didn't go Red first, so I can't be doing "proper TDD"™.
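The six steps above can be sketched in Python with pytest-style tests. The `word_count` function and its bug are invented for illustration; only the workflow matters.

```python
def word_count(text):
    # Fixed version: the (invented) reported bug was that None crashed
    # with an AttributeError instead of being treated as empty input.
    if text is None:
        return 0
    return len(text.split())

# Step 1: a test that recreates the reported issue. Against the buggy
# code it failed (RED); after the fix in step 2 it passes (GREEN).
def test_none_is_treated_as_empty():
    assert word_count(None) == 0

# Steps 5-6: a new edge case thought of afterwards. It passes
# immediately -- it never went RED first.
def test_whitespace_only_counts_zero_words():
    assert word_count("   ") == 0
```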

I'm happy that the code does correctly handle (and document) this test case but how should I test it "properly"?

  • Change the code so the test fails, and then change it back?
  • Change the test so it fails, and then change it back?
  • Just leave it?
  • Something else?
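The first option, "change the code so the test fails, and then change it back", can be checked mechanically without editing the real source: run the test against a deliberately broken "mutant" of the function and confirm the test notices. This is essentially what mutation-testing tools automate. A minimal hand-rolled sketch, with the function and mutant invented for illustration:

```python
def word_count(text):
    # Invented example function: counts whitespace-separated words.
    if text is None:
        return 0
    return len(text.split())

def test_whitespace_only_counts_zero_words(fn=word_count):
    assert fn("   ") == 0

# The test passes against the real implementation (it is GREEN)...
test_whitespace_only_counts_zero_words()

# ...and we can still watch it go RED by running it against a mutant,
# instead of editing (and then un-editing) the real code.
def mutant(text):
    return 1  # deliberately wrong: pretends whitespace is one word

try:
    test_whitespace_only_counts_zero_words(fn=mutant)
except AssertionError:
    print("mutant caught: the test can fail, so it is testing something")
```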

What would you do?

BTW. I'm currently planning a book all about testing. It will focus more on the concepts than the details, but I think I can bring some new ideas to the subject. (Despite what you may think from me asking this question.) If you want to know more, be sure to sign up for my (very occasional) newsletter.