Wednesday, August 27, 2025

Is writing a test a good contribution to an open source project?

The cliché used to be that "contributing to docs is a great way to get started in open source." Now I'm starting to hear people suggest that writing a test can also be a good entry point to a project.

But is this a good idea? I'm not sure...

"writing hand" and a test tube
Say you have a piece of code that isn't covered by any tests. It's fair to say this isn't an ideal situation to be in. All things being equal, having tests for this code would be better.

But not all code and not all tests are created equal.

Is adding a test for a piece of code that is never expected to change in the lifetime of the project valuable?

Is it valuable to write tests for code that is so clearly understandable that if anyone changed it, lots of things would obviously be wrong and a manual review would easily spot the problem?

Is it valuable to add tests for only some scenarios or paths through a piece of code? Sometimes. Sometimes not.

Is adding tests that ensure the code can handle all possible input a good addition? Maybe, but if the project has been around a while, most such inputs have likely been encountered already, and any that caused problems have most likely been dealt with.
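For illustration, here's a minimal sketch of what that kind of "handle any input" test might look like, using Python's hypothesis property-based testing library. The parse_version function and its behaviour are hypothetical stand-ins for untested project code, not from any real project:

```python
# A minimal sketch of an "all possible inputs" test, using the
# hypothesis property-based testing library.
from hypothesis import given, strategies as st


def parse_version(text: str) -> tuple[int, int] | None:
    """Hypothetical project code: parse "major.minor" or return None."""
    parts = text.split(".")
    if len(parts) != 2 or not all(p.isdecimal() for p in parts):
        return None
    return int(parts[0]), int(parts[1])


@given(st.text())
def test_parse_version_handles_any_string(text):
    # The property under test: any string either parses to a tuple
    # or returns None; nothing should ever raise.
    result = parse_version(text)
    assert result is None or isinstance(result, tuple)
```

Running this under pytest throws hundreds of generated strings at the function. On a mature project, any input that would fail such a test has probably already been found and fixed the hard way.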

You may be able to create a lot of tests very quickly, especially if using AI. But is it worth running them? If they don't run quickly, are they worth the delays and the money and energy they cost?

Coded tests must also be reviewed like any other code contribution, and reviewing PRs is a common bottleneck in many open source projects.

I'm not against tests. I think automated tests are great, and everyone should write more of them. I just think that after the fact is the wrong time to add them: it's harder to do well, and they risk being low value. The best time to write (or at least document) all the required tests is before you start coding.

Of course, if there's a project with documented manual test steps and you want to write code to automate them, then that sounds like a very valuable contribution. (Just as long as it doesn't require modifying the underlying code to make that possible.)
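As a sketch of what that kind of contribution might look like: suppose the project's docs say "run the tool with --version and check the output names the tool." An automated version in pytest might be (the tool name, flag, and expected output here are all hypothetical, standing in for whatever the manual test doc describes):

```python
# A minimal sketch of turning a documented manual test step into an
# automated one, without touching the underlying code.
import subprocess


def test_version_flag_prints_tool_name():
    # Mirrors the documented manual step: run the tool, then check
    # it exits cleanly and the output starts with the tool's name.
    result = subprocess.run(
        ["mytool", "--version"],
        capture_output=True,
        text=True,
        check=False,  # assert on the exit code explicitly below
    )
    assert result.returncode == 0
    assert result.stdout.startswith("mytool")
```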

Or, if you want to help with the testing of a project, look at some open issues and start documenting how to test those features when they are implemented.

As with any open source project, the best contributions are the ones the owners and maintainers are asking for, and anything of significant size should never start with a PR but with a discussion or an issue.

Thursday, August 07, 2025

Miscellaneous AI-related questions

[Image: a question mark with an AI sparkle]

No answers. "Just" questions I'm aware of and considering:

  • If working with AI means communicating with machines more like we do with other humans, how do we avoid things going the other way, too, and treating people more like machines?
  • Are agents "the future of [all] work"? And, if not all, how do we identify the work that can change or be replaced?
  • If "AI is only as good as your data", why isn't there as much effort being put into ensuring the quality and accuracy of the data as there is hype about AI?
  • At what point does AI not need human oversight? All the educational material highlights human oversight, but the futurists don't include it...
  • What is in the middle-ground between traditional GUIs and "just" a text box?
  • As feedback is highlighted as essential when developing tools with AI, is there a way for feedback from a tool to be passed back to those creating the underlying models?
  • If there's a GUI for something, does it automatically need (and benefit from?) an equivalent interface that's accessible via command line, API, and Agent/MCP?
  • As the speed and rate of change is a common complaint among all types of people doing disparate tasks, how do you factor this in when introducing AI-powered tools?
  • If people are generally reluctant to read instructions, why will they happily read the text-based response from an AI tool telling them how to do something?
  • Asking good questions is hard. How people ask questions of AI-powered tools greatly impacts the quality of results. In training people to use AI, are they also being taught to ask good questions?