Friday, May 17, 2024

Specific or vague answers to specific and vague questions

If the question is vague, is a specific or vague answer more helpful?

Actually, is either helpful?
Would clarifying the question (or scoping it) be a better response?


If the question is specific, a specific answer can be helpful, but a vague answer may reveal the broader picture or expand the scope of the question.

Both can help or be useful in different contexts.


This feels like the opposite of how we treat data.

With data, we want to try to accept a broad range of options and return something specific or at least consistently formatted.
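As a sketch of that idea, here's a hypothetical parser that accepts several date formats but always returns one consistent shape (the function name and the particular formats are made up for illustration):

```python
from datetime import datetime

def parse_date(text: str) -> str:
    """Accept a broad range of date formats; always return YYYY-MM-DD."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%d %B %Y", "%B %d, %Y"):
        try:
            return datetime.strptime(text.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue  # try the next accepted format
    raise ValueError(f"Unrecognised date: {text!r}")
```

Liberal on the way in, strict and consistent on the way out.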

With questions, we want the question to be narrow (if not specific) and to allow a potentially broad range of answers.


Now, how do I stop bringing all my thoughts back to software development? Or, more specifically, do I need to consider whether this matters?

Thursday, May 16, 2024

Is it really a "workaround"?

Not using something is not a workaround.

A workaround is an alternative solution to a problem: normally slower, longer, or more convoluted than the desired solution (the one that has an issue).

I think words matter.
I want to help people get past their problems. Even if that means doing the work of fixing things.

Some people seem more interested in arguing that a reported problem isn't something they need to do anything about than actually addressing whatever is making something hard to use.

Sometimes, I wonder if there's a gap in the market for customer service training for developers who respond to public issues and discussions on GitHub.

Thursday, May 02, 2024

Retrying failing tests?

Doing the same thing repeatedly (and expecting, or hoping for, different results) is apparently a definition of madness.

That might be why trying to debug tests that sometimes fail can feel like it drives you mad.


It's a common problem: how do you know how much logging to have by default? Specifically when running automated tests or as part of a CI process.


Here's my answer:

Run with minimal logging and automatically retry with maximum logging verbosity if something fails.


It's not always easy to configure, but it has many benefits.
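As a sketch of the idea (a hypothetical helper, not tied to any particular test framework or logging setup):

```python
import logging

def run_with_retry(test_fn, logger_name="app"):
    """Run test_fn with minimal logging; on failure, retry at full verbosity.

    Returns (passed, attempts) so a pass-on-retry can be recorded
    and a follow-up investigation task raised.
    """
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.ERROR)      # minimal logging for the first run
    try:
        test_fn()
        return True, 1
    except Exception:
        logger.setLevel(logging.DEBUG)  # maximum verbosity for the retry
        try:
            test_fn()
            return True, 2              # probably transient: investigate!
        except Exception:
            return False, 2
```

A result of `(True, 2)` is the interesting one: the test "passed", but only on the second attempt, so something transient is going on.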


Not only does this help with transient issues, but it also helps provide more details to identify the cause of issues that aren't transient.

An issue is probably transient if it fails on the first run but passes on the second (with extra logging enabled). This approach also helps identify issues that only occur when no logging is configured. It can happen; ask me how I know. ;)

Be careful not to hide recurring transient errors. If errors can occur intermittently during testing and the reason is not known, what's to stop them from happening intermittently for end users, too?

Record that a test only passed on the second attempt, and raise a follow-up task to investigate why. Ideally, you want no transient errors or things that don't work when no logging is configured.


This doesn't just apply to tests. 

It can also be applied to build logging verbosity. 

You only really want (or rather need) verbose log output when something goes wrong. Such as a build failing...




Wednesday, May 01, 2024

A lesson in algorithms from Guess Who

Earlier this week, I attended the DotNetNotts user group meeting. The second talk of the night was by Simon Painter and was about some of the algorithms used in Machine Learning.

As part of his explanation of decision trees, he used an example based on the game Guess Who?

A screen capture of the display of the 24 characters in the game Guess Who
Here's a screenshot, from YouTube, where the hybrid part of the meetup was relayed.

If you're not familiar with the game, you have to ask yes or no questions to identify one of the characters:

Alex, Alfred, Anita, Anne, Bernard, Bill, Charles, Claire
David, Eric, Frans, George, Herman, Joe, Maria, Max
Paul, Peter, Philip, Richard, Robert, Sam, Susan, Tom

As part of his talk, Simon stated that the best strategy and ideal scenario for any decision tree is to divide all the available options in half (a binary split). However, for this game, there are no characteristics of the characters that make this possible (and hence the challenge of the game). 

Simon did, however, point out that there is the possibility of using compound questions to have a greater chance of success by more evenly dividing the groups in half each time.
So, instead of limiting questions to the form of "is the person wearing a hat?" you use questions like "does the person present as female OR have facial hair?" or "does the person have blue eyes, a hat, OR red hair?"


Such questions were relevant for the rest of the talk, but it got me wondering.

I looked at all those people and their names and thought I saw something...

About half the people seem to have names containing two vowels...

It turns out that 15 people have names containing two vowels. This is better than any visual differentiator but is still far from perfect.

Ideally, you want to divide the group in half each time.
So, we'd go from 24 > 12 > 6 > 3

When you get to only having three options left, there are myriad ways (questions) to differentiate any of the options in this version of the game, but (without having done the math), it's just as quick to guess each person/option in turn.

What we need to maximize our chance of winning, and probably remove all fun from the game, is a set of questions that will divide the group in half until it gets to a group of 3.


It was only on my way home that I realized that if I'm going to look at the letters in the people's names, there are probably more and better questions that could be used to play a perfect game of Guess Who without using compound questions.

And so, (because I'm "fun") I tried.

It actually turned out to be really easy to find such questions. And by "really easy", I mean it took me less time than it's so far taken me to type this.


Here are the questions:

Question 1: Does the person's name start with a letter that comes before 'H' in the alphabet?

This is a simple split.
If they answer yes, you get the people Alex - George.
If they answer no, you are left with Herman - Tom.

If the first answer was Yes, question 2 is: Does the person's name start with a letter that comes before 'C' in the alphabet?

Another simple split.
If they answer yes, you get the people Alex - Bill.
If they answer no, you are left with Charles - George.


If the answers so far are Yes & Yes, the 3rd question is: Does the person have facial hair?

If the answer to this question is Yes, you're left with Alex, Alfred & Bernard
If the answer to this question is No, you're left with Anita, Anne & Bill.


If the answers so far are Yes & No, the 3rd question is: Does the person's name start with a letter that comes before 'E' in the alphabet?

If the answer to this question is Yes, you're left with Charles, Claire & David
If the answer to this question is No, you're left with Eric, Frans & George.


If the first answer was No, the next question was the hardest to identify. Question 2 is: Does the person's name contain fewer than 3 consonants?

Another simple split.
If they answer yes, you get the people Joe, Maria, Max, Paul, Sam & Tom.
If they answer no, you are left with Herman, Peter, Philip, Richard, Robert & Susan.


If the answers so far are No & Yes, the 3rd question is: Does the person's name start with a letter that comes before 'P' in the alphabet?

If the answer to this question is Yes, you're left with Joe, Maria & Max
If the answer to this question is No, you're left with Paul, Sam & Tom.


If the answers so far are No & No, the 3rd question is: Does the person's name start with a letter that comes before 'R' in the alphabet?

If the answer to this question is Yes, you're left with Herman, Peter & Philip
If the answer to this question is No, you're left with Richard, Robert & Susan.
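The letter-based splits above can be sanity-checked with a few lines of code (the facial-hair question is skipped, since the characters' appearances aren't encoded here):

```python
# The 24 characters from the game, as listed above.
NAMES = ["Alex", "Alfred", "Anita", "Anne", "Bernard", "Bill", "Charles", "Claire",
         "David", "Eric", "Frans", "George", "Herman", "Joe", "Maria", "Max",
         "Paul", "Peter", "Philip", "Richard", "Robert", "Sam", "Susan", "Tom"]

def consonant_count(name):
    return sum(1 for c in name.lower() if c not in "aeiou")

# Question 1: does the name start with a letter before 'H'? A perfect 12/12 split.
yes1 = [n for n in NAMES if n[0] < "H"]
no1 = [n for n in NAMES if n[0] >= "H"]

# Question 2 (after a No): fewer than 3 consonants? A perfect 6/6 split.
yes2 = [n for n in no1 if consonant_count(n) < 3]

# And the earlier observation: 15 of the 24 names contain exactly two vowels.
two_vowels = [n for n in NAMES if sum(c in "aeiou" for c in n.lower()) == 2]
```

Running this confirms the 12/12 and 6/6 splits, and the 15-name vowel count.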



Yes, this was a questionable use of my time ;)

If you don't think questions about the letters in the person's name are appropriate or allowed, please keep that opinion to yourself.



Anyway, if you want to know more about machine learning the whole event is available online.

Further silly posts about over-analyzing games are unlikely to follow. "Normal" service will resume shortly.



Wednesday, April 24, 2024

Emotional intelligence

Be more emotional.

But not all the time. Use your intelligence to decide when. ;) 

What's the value of reviewing a Draft PR?

Unless specifically requested, whose time is it the best use of?


Saturday, April 20, 2024

Trivial feedback and micro-optimizations in code reviews

When these are given, is it because there is nothing more important to say?

Or, does such feedback come from people who don't understand the bigger picture or want a quick way to show they've looked at the code without having to take the time and mental effort to truly understand it?

Tuesday, April 16, 2024

Before commenting

Are you just repeating what's already been said before?

Where/when appropriate, are you correctly attributing others?

Does your comment add value?

Are you adding a new/different/unique perspective?

Have you read the other comments first?

Have you thought about who (possibly many people) will read the comment?

Monday, April 15, 2024

MAUI App Accelerator - milestone of note

MAUI App Accelerator - 10,000 installs

In the last few days, MAUI App Accelerator passed ten thousand "official" unique installs.

This doesn't include the almost eight thousand installs included via the MAUI Essentials extension pack. (Installs via an extension pack are installed in a different way, which means they aren't included in the individual extension install count.)

While big numbers are nice (and apparently worth celebrating) I'm more interested in how it's used.
The numbers for that are lower, but still noteworthy. 

It's currently used to create about 25 new apps each day. Which is nice.
I'm also trying to improve my ability to use App Insights so I can get more and better statistics too.


More updates are coming. Including the most potentially useful one...



Saturday, April 13, 2024

"I'm not smart enough to use this"

 I quite often use the phrase "I'm not smart enough to use this" when working with software tools.


This is actually a code for one or more of the following:

  • This doesn't work the way I expect/want/need.
  • I'm not sure how to do/use this correctly.
  • I'm disappointed that this didn't stop me from doing the wrong thing.
  • I don't understand the error message (if any) that was displayed. 
  • Or the error message didn't help me understand what I should do.


Do your users/customers ever say similar things?

Would they tell you?

Are you set up to hear them?

And ready to hear this?


Or will you tell me that I'm "holding it wrong"?


Friday, April 12, 2024

Don't fix that bug...yet!

An AI generated image of a computer bug - obviously

A bug is found.
A simple solution is identified and quickly implemented.
Sounds good. What's not to like?


There are more questions to ask before committing the fix to the code base. 
Maybe even before making the fix.

  • How did the code that required this fix get committed previously? 
  • Is it a failure in a process?
  • Have you fixed the underlying cause or just the symptoms?
  • Was something not known then that is now?
  • Could a test or process have found this bug before it entered the code base?
  • Are there other places in the code that have the same issue?
  • Are there places in the code that do something similar that may also be susceptible to the same (or a variation of the) issue?
  • How was the bug reported? Is there anything that can be done to make this easier/faster/better in the future?
  • How was the bug discovered? Can anything be done to make this easier, more reliable, or automated for other bugs in the future?
  • In addition to fixing this bug, what can be done to prevent similar bugs from happening in the future?
  • Is there anything relating to this issue that needs sharing among the team?



As a developer, your job isn't just to fix bugs; it's to ensure a high-quality code base that's as easy (or as easy as possible/practical) to maintain and provides value to the people who use it.
At least, I hope it is.



Thursday, April 11, 2024

Formatting test code

Should all the rules for formatting and structuring code used in automated tests always be the same as those used in the production code?

Of course, the answer is "it depends!"


I prefer my test methods to be as complete as possible. I don't want too many details hidden in "helper" methods, as this means the details of what's being tested get spread out.


As a broad generalization, I may have two helpers called from a test.

One to create the System Under Test.

And, one for any advanced assertions. These are usually to wrap multiple checks against complex objects or collections. I'll typically create these to provide more detailed (& specific) information if an assertion fails. (e.g. "These string arrays don't match" isn't very helpful. "The strings at index 12 are of different lengths" helps me identify where the difference is and what the problem may be much faster.)
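For example, a minimal version of such an assertion helper might look like this (the names and messages are illustrative, not from any real code base):

```python
def assert_string_arrays_equal(expected, actual):
    """Compare two lists of strings, reporting exactly where they differ."""
    assert len(expected) == len(actual), (
        f"Arrays have different lengths: {len(expected)} vs {len(actual)}")
    for i, (e, a) in enumerate(zip(expected, actual)):
        if len(e) != len(a):
            raise AssertionError(
                f"The strings at index {i} are of different lengths")
        if e != a:
            raise AssertionError(
                f"The strings at index {i} differ: {e!r} vs {a!r}")
```

When this fails, the message points straight at the offending index rather than just saying the collections don't match.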


A side-effect of this is that I may have lots of tests that call the same method. If the signature of that method needs to change, I *might* have to change it everywhere (all the tests) that call that method.

I could move some of these calls into other methods called by the tests and then only have to change the helpers, but I find this makes the tests harder to read on their own.

Instead, where possible, I create an overload of the changed method that uses the old signature, and which calls the new one.
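A sketch of that overload approach, using made-up helper names (in C# this would be a true overload; it's sketched here in Python as a thin forwarding function):

```python
class SystemUnderTest:
    """Stand-in for whatever the tests construct."""
    def __init__(self, data_source, cache_enabled):
        self.data_source = data_source
        self.cache_enabled = cache_enabled

def create_sut_with_cache(data_source, cache_enabled):
    """The new signature, used by tests for the new requirement."""
    return SystemUnderTest(data_source, cache_enabled)

def create_sut(data_source):
    """The old signature, kept so existing tests don't change.

    Forwards to the new helper with the old (pre-change) behaviour.
    """
    return create_sut_with_cache(data_source, cache_enabled=False)
```

Old tests keep calling `create_sut` untouched; only new tests need the new parameter.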


If the old tests are still valid, we don't want to change them.

If the method signature has changed because of a new requirement, add new tests for the new requirements.

Tuesday, April 09, 2024

Varying "tech talks" based on time of day

I'll be speaking at DDD South West later this month.

I'm one of the last sessions of the day. It's a slot I've not had before.

In general, if talking as part of a larger event, I try to include something to link back to earlier talks in the day. Unless I'm first, obviously.

At the end of a day of talks, where most attendees will have already heard five other talks that day, I'm wondering about including something to draw together threads from the earlier sessions and provide a conclusion that also ties in with what I'm talking about. I have a few ideas...

I've seen someone do a wonderful job of this before, but it's not something I've ever heard mentioned in advice to (or books on) presenting... I guess if you're there, you'll see what I do.

Monday, April 08, 2024

Comments in code written by AI

The general "best-practice" guidance for code comments is that they should explain "Why the code is there, rather than what it does."

When code is generated by AI/LLMs (CoPilot and the like) via a prompt (rather than line completions), it can be beneficial to include the command (prompt) provided to the "AI". This is useful as the generated code isn't always as thoroughly reviewed as code written by a person. There may be aspects of it that aren't fully understood. It's better to be honest about this.

What you don't want is to come to some code in the future that doesn't fully work as expected, not be able to work out what it does, not understand why it was written that way originally, and for Copilot's explanation of the code to not be able to adequately explain the original intent.

// Here's some code. I don't fully understand it, but it seems to work.
// It was generated from the prompt: "..."
// The purpose of this code is ...

No, you don't always need that first line.


Maybe xdoc comments should include different sections.

"Summary" can be a bit vague.

Maybe we should have (up to) 3 sections in the comments on a class or method:

  • Notes for maintainers
  • Notes for consumers
  • Original prompt


Writing a comment like this may require some bravery the first few times you write such a thing, but it could be invaluable in the future.
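What those three sections might look like in practice. It's sketched here as a Python docstring; in .NET, the same idea could be custom sections in XML doc comments. The function and prompt are made up for illustration:

```python
def normalise_customer_ids(raw_ids):
    """Return the unique customer IDs, sorted and upper-cased.

    Notes for consumers:
        Input order is not preserved; duplicates are removed.

    Notes for maintainers:
        Upper-casing happens before de-duplication so that 'ab1' and
        'AB1' collapse to a single ID.

    Original prompt:
        "Write a function that takes a list of customer IDs and returns
        them de-duplicated, sorted, and upper-cased."
    """
    return sorted({i.upper() for i in raw_ids})
```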

Sunday, April 07, 2024

Reflecting on 1000+ blog posts

1200 posts, 1026 published, 172 drafts, 2 scheduled
These are the current numbers for my blog. (Well, at the time I first started drafting this. I expect it will be several days before it's posted and then they will be different.)

These are the numbers I care about.

The one most other people care about is 2,375,403. That's the number of views the articles have had.

But this isn't a post about statistics. This is a post about motivation and reward.


 

I started writing this blog for me.

That other people have read it and got something from it is a bonus.

If I were writing for other people, I would write about different topics, I would care about SEO and promotion, and I would have given up writing sooner.

I get lots of views each day on posts that I can't explain.

I know that most views of this blog come from "the long tail," and Google points people here because there is a lot of content. The fact that I've been posting for 17+ years also gives me a level of SEO credibility.

There have been periods where I have written very little. This is fine by me. By not forcing myself to publish on a particular schedule, the frequency of posting doesn't hold me back or force me to publish something for the sake of it.

I publish when and if I want to.

Some people need and/or benefit from forcing themselves to publish on a regular schedule. If that works for you, great. If it doesn't, that's okay, too.

Others might think a multi-month gap in posting is bad, but if that's what I want or need, it's okay. Over a long enough period, the gaps are lost in the overall volume of posts.

I'm only interested in writing things that don't already exist anywhere else. This probably holds me back from getting more views than if that were my goal, but it probably helps me show up in the long tail of niche searches.

And yet, some people still regularly show up and read everything I write. Thank you. I'm glad you find it interesting.

Will I keep writing here? I can't say for certain but I have no plans on stopping.


I'm only publishing this post because I thought I might find it useful to reflect on all that I've written, and 1000 posts felt like a milestone worth noting, even if not fully celebrating. Originally, I thought I'd want to write lots about this, but upon starting it feels a bit too "meta" and self-reflective. I don't know what the benefit is of looking at the numbers. What I find beneficial is doing the thinking to get my ideas in order such that they make sense when written down. That's, primarily, why I write. :)




Friday, April 05, 2024

Does code quality matter for AI generated code?

Code quality, and the use of conventions and standards to ensure readability, has long been considered important for the maintainability of code. But does it matter if "AI" is creating the code and can provide a more easily understandable description of it if we really need to read and understand it?

If we get good enough at defining/describing what the code should do, let "AI" create that code, and then we verify that it does do what it's supposed to do, does it matter how the code does whatever it does, or what the code looks like?

Probably not.

My first thought as a counterpoint to this was about the performance of the code. But that's easy to address with "AI":

"CoPilot, do the following:

- Create a benchmark test for the current code.

- Make the code execute faster while still ensuring all the tests still pass successfully.

- Create a new benchmark test for the time the code now takes.

- Report how much time is saved by the new version of the code.

- Report how much money that time-saving saves or makes for the business.

- Send details of the financial benefit to my boss."


Thursday, April 04, 2024

Where does performance matter?

Performance matters.

Sometimes.

In some cases.

It's really easy to get distracted by focusing on code performance.

It's easy to spend far too much time debating how to write code that executes a few milliseconds faster.

How do you determine/decide/measure whether it's worth discussing/debating/changing some code if the time spent thinking about, discussing, and then changing that code takes much more time than will be saved by the slightly more performant code?

Obviously, this depends on the code, where it runs, for how long, and how often it runs.


Is it worth a couple of hours of developer's time considering (and possibly making) changes that may only save each user a couple of seconds over the entire time they spend using the software?
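As a back-of-the-envelope example of that trade-off (using the hypothetical numbers from the question above):

```python
# Two hours of developer time vs. two seconds saved per user, ever.
# Ignores that an hour of developer time and an hour of scattered
# user-seconds are worth very different amounts to the business.
dev_time_seconds = 2 * 60 * 60        # time spent considering/making the change
saving_per_user_seconds = 2           # total saving per user
users_to_break_even = dev_time_seconds / saving_per_user_seconds
```

That's 3,600 users just to break even on raw time, before asking whether anyone actually notices two seconds.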


What are you optimizing for?

How do you ensure developers are spending time focusing on what matters?

The performance of small pieces of code can be easy to measure.
The real productivity of developers is much harder to measure.


How do you balance getting people to focus on the hard things (that may also be difficult to quantify) and easy things to review/discuss (that they're often drawn to, and can look important from the outside) but don't actually "move the needle" in terms of shipping value or making the code easier to work with?

Wednesday, April 03, 2024

Thought vs experiment

Scenario 1: Consider the options and assess each one's impact on usage before shipping the best idea.
Scenario 2: Ship something (or multiple things) and see what the response is. Then iterate and repeat.

Both scenarios can lead to the same result (in about the same amount of time.)

Neither is necessarily better than the other. However, it doesn't often feel that way.

Always using one isn't the best approach.


#NotesToMyself


Hat Tip: David Kadavy

Monday, April 01, 2024

The Visual Studio UI Refresh has made me more productive

Visual Studio is having a UI refresh. In part, this is to make it more accessible.

I think this is a very good thing. 

If you want to give feedback on another possible accessibility improvement, add your support, comments, and thoughts here.

Anyway, back to the current changes.

They include increasing the spacing between items in the menu.

Two images of the main toolbar in Visual Studio in the dark theme. The top image shows a snapshot of Visual Studio today where the bottom image is a mockup of the toolbar which has more spacing and is a bit wider with less crowding.

There are some objections to this as it means that fewer items can be displayed at once.

Instead of complaining, I took this as an opportunity to revisit what I have displayed in the toolbar in my VS instances.

I used to have a lot there. 

I knew that some of those things I didn't need and, in some cases, had never used. I just didn't want to go to the trouble of customising them.

"If they're there by default, it must be for a reason, right?" Or so I thought.

A better question is "Are they there for reasons I have?" In many cases, they weren't.

So I went through and spent (what turned out only to be) a few minutes customising the toolbars so they only contained (showed) the items I wanted, needed and used.

That was several weeks ago, and it has been a massive improvement.

  • I feel I'm working a bit faster.
  • I'm not spending time looking for the things I need.
  • I'm not distracted by things I don't need or don't recognise.
  • I feel encouraged and empowered as I've made my work environment more productive.


A system change to improve things for others encouraged me to improve things for myself. I see that as a win-win.



When the developers no longer seem intelligent

There are companies where the managers don't know what developers do. The developers have some special knowledge that makes the computers do what the business needs. Sometimes, these managers ask the developers a question, and they give a technical answer the manager doesn't understand.
Very soon (maybe already), the developers will be using AI to do some of the technical work the company wants/needs.
Eventually, the AI will do something unexpected, and the business managers will want to know why. The developers will not be able to explain.
The managers will want to guarantee that the bad or unexpected thing will not happen again, but the developers will not be able to do that.
It's the nature of the non-deterministic AIs that are now being built.
This may be ok.

A possible takeaway is to notice that the appearance of intelligence (by giving a technical-sounding answer that the listener doesn't really understand) isn't going to be enough.

If you can't explain what you're doing now, how will you explain what AI is doing and that you can't guarantee what it will do?


Friday, March 29, 2024

How can you make manual testing easier?

 I found this still in the printer when I bought my early morning coffee the other day.

Printout with all different supported text options displayed

It reminded me of a time when I used to have 5 (yes, five) different printers on my desk (well, on a table next to it) to enable testing support for all the different and specific label printers the software I worked on had to support.

More importantly, it reminded me of the importance of making it as easy as possible to manually test things that can only (practically) be manually tested.

I assume the above output is intended to verify that the printer is working correctly.
I also assume it's used at the start of each day to verify (check) that the printer (in this case, on the automated checkout) is working correctly.

That printout is currently stuck up in front of my desk. It's a reminder to myself to regularly ask:

What could I do to make this easier to test?

Thursday, March 28, 2024

Does AI mean software development will appeal to different people?

Certain personality types are attracted to software development.

The logic and absolute certainty appeals to those people.

AI removes that certainty. It's non-deterministic.

Will this put off some people who like (or need?) the absolutes?

Might it attract different people with other interests and personality types?



The most important question on a PR checklist is...

Admit it, you thought I was going to say something about Testing. Didn't you?

While testing is super important and should be part of every PR, the most important question to ask when working on something as part of a team is:


Does this change include anything you need to communicate to the rest of the team?


I've added this to PR templates and have been frustrated when "circumstances" have prevented me from doing so.


Why does this matter?

  • Because if it's something low-level or fundamental, then everyone is likely to need to know it. You shouldn't rely on everyone reviewing the PR or checking everything that gets merged. 
  • Because the change(s) in the PR might break the code other team members are working on.
  • Because a "team" implies working together towards a common goal. Keeping secrets and not sharing things that "teammates" will benefit from knowing hurts the team and the project.


Not sharing important knowledge and information creates frustration, resentment, wasted effort, and more.

Wednesday, March 27, 2024

Doing things differently in CI builds

Say you have a large complex solution.

Building everything takes longer than you'd like. Especially when building as part of CI checks, such as for a gated merge.

To make the CI build faster, it can be tempting to have a filtered version of the solution that only builds 'the most important parts' or filters out some things that take the most time (and theoretically change least often).

When such a filter exists, it can be tempting for developers in a team to use the filtered version to do their work and make their changes. If it makes the CI faster, it can make their development build times faster too.


However, there's an unstated trade-off happening here:

Shortening the time to build on (as part of) the CI

creates 

A reliance on individual developers to check that their changes work correctly for the whole solution.


If you get it wrong, developers (mostly?) only work with a portion of the code base, and errors can be overlooked. These errors (including things like the whole solution failing to build) can then exist in the code base for unknown periods of time...



Saturday, March 23, 2024

Prompt engineering and search terms?

The prompts given to LLMs are often compared to the text we enter when searching the web.

Over time, people have learned to write search queries that will help them get better answers.

There's an argument that, over time, people will learn to write better prompts when interacting with LLMs.

However, over time, websites have also changed the way they expose, format, and provide the data that's fed into search engines, so that it will be more likely to be what is shown in the results for specific searches. (A crude description of SEO, I admit.)

What's the equivalent of providing data that will be used to train an LLM?

How do (or will) people make or format data to get the outcomes they want when it becomes part of the training data for the LLM?

Yes, this is like SEO for LLMs. 

LLM-Optimization? LLMO?

GPTO?

Wednesday, March 13, 2024

Setting up a new machine with WinGet DSC - my experience

TLDR (for Clint 😜)

  • Mostly positive, would recommend
  • Dev Home still crashes lots - but it is in preview, so I'll let it off
  • My scripts are at github.com/mrlacey/my-config
  • Many sample config scripts (from the official repo) wouldn't run (failed) for me :( 
  • VS doesn't like me.

Tuesday, March 12, 2024

Reverse engineering a workshop

I've been working on writing a technical workshop. I've not done it before, and I couldn't find any good, simple guidelines for creating such a thing.

Having asked a few people who've delivered workshops in the past, the advice I got was very generic and more about the workshops they've proctored rather than how to structure one or put one together.

So, rather than make it up, I started by trying to reverse engineer what good workshops do.

I want the workshop to be fully self-paced and self-guided. If it can be used in group or "instructor-led" scenarios, that'll be good, too, but I don't have any plans (yet) for this.

From looking at many workshops I've completed and thinking back to those I've participated in in the past, I was struck by how many take the approach of showing a completed project and then simply listing the steps to create it. I often find this approach disappointing.
Yes, as a participant, I get the satisfaction of having created something but it's not something new or necessarily specific to my needs. More importantly, the reasons for each individual step weren't explained, and the reason for taking an approach when others are available (or even what the other approaches are) wasn't given. This means that I don't get the wider knowledge I likely need to be successful. Is the intention that in completing a workshop, you have the knowledge to go and build other things and the confidence to do so, having done it once before? It probably should be. 

What I find many workshops end up doing (intentionally or otherwise) is providing a series of steps to recreate a piece of software and assuming that's enough for the participants to then go off and successfully create anything else.

Yes, saying, "Anyone can follow our workshop and create X", is great. But that's not the same as a workshop that teaches reusable skills and provides the knowledge needed to go and create your own software.

I want to create a workshop as a way of teaching skills and introducing a different way of thinking about a topic.


Aside: what's the difference between a workshop and a tutorial? I think it's that workshops are longer. Possibly a workshop is made up of a series of tutorials.


After initially struggling, I eventually concluded that a workshop is like teaching anything else. With clear learning goals and a structure, it's a lot easier to plan and create.

In this way, writing the workshop was a lot like writing a book. Only without an editor chasing me for progress ;)

More thoughts on this topic another day. Maybe.
Although, it has got me thinking about what I'll write next...


If you're interested in how my efforts turned out, you can see the results here.

Sunday, March 10, 2024

If you only write one document

If you only write one document (as part of developing software), make it the test plan.
Do it before you start writing the code.
This applies to new features and bug fixes.

  • This also becomes the spec.
  • It allows you to know when the code you've written does everything it should.
  • It will make working as part of a team easier.
  • It will make code reviews faster and easier.
  • It will make testing faster.
  • It will make creating automated tests easier. (You can even write them before the code, TDD-style.)
  • It will make things easier when the customer/client/boss changes their mind or the requirements change.
  • It will make future maintenance and changes faster.
  • It will make creating complete and accurate documentation easier.
  • Without it, you are more likely to be creating technical debt that you're unaware of.


If you don't:
  • There will be lots of guessing.
  • There will be lots of unvalidated assumptions.
  • There will be lots of repetition of effort. (Working out what needs to be done and how to do it.)
  • More effort will be wasted on things that aren't important.
  • Code reviews and testing of the code will be slower and involve more discussion and clarification.
  • You are more likely to ship with bugs.
  • Future changes (bug fixes or adding new features) will be slower.
  • Future changes are more likely to introduce other bugs or regressions.


Who said it doesn't matter

That someone admitted [bad practice the business would not like to admit to] is not the issue.

The point isn't to blame whoever said it.

The problem is that it's the culture.

Trying to hide the issue or blaming someone for admitting it doesn't help. It encourages bad practice, which really only makes things worse.


 

Saturday, March 09, 2024

Detecting relevant content of interest

When AI is actually a bit dumb. 

Consider something I'm seeing a lot:
  • Content (e.g. news) app shows lots of different content.
  • You read an article in the app within a specific category.
  • Several hours later, an automated backend process tries to prompt re-engagement.
  • The backend process looks at what categories of content you've looked at recently.
  • It notices a recent article that is in that category and is getting lots of views.
  • The backend process then sends a notification about that content as you're likely to be interested. (It knows you read content in the category, and lots of people are reading this new article. It should be of interest.)
  • But the "assumption" was based on a lack of consideration for articles already read. (It was of interest, that's why you read it several hours ago.)
  • Enough people click on these notifications to make them look like they're doing a good job of promoting re-engagement.
  • People click the notifications because they sound related to, or like a follow-up to, what they read earlier, not realizing it is the exact same article.
  • Analytics only track opens from the notifications, not how much time is spent reading the article being used to promote re-engagement.

Analysis of analytics doesn't flag this, and the opacity of "the algorithm" doesn't make it clear this is what's happening.
All the while, many people get wise to these pointless notifications and turn them off, and so miss something actually useful in the future.
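
The missing check in the scenario above is small. Here's a minimal sketch (all names are invented for illustration; real recommendation pipelines are far more involved): before sending a notification, filter out anything the user has already read.

```python
def pick_notification(candidates, already_read):
    """Pick the most-viewed trending article the user has NOT already read.

    candidates: list of (article_id, views) pairs from the categories
    the user follows.
    already_read: set of article_ids the user has recently opened.
    Returns an article_id, or None if there's nothing new worth sending.
    """
    unread = [(aid, views) for aid, views in candidates if aid not in already_read]
    if not unread:
        # Better to send nothing than to resend the exact same article.
        return None
    # Most-viewed unread article wins.
    return max(unread, key=lambda pair: pair[1])[0]
```

One set lookup per candidate is all it takes to avoid notifying people about the article they read several hours ago.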


I know there are more complex and varied scenarios than mentioned above, including how the above could be considered a successful "engagement" in other ways.

The main thing I take away from this is that I don't want to create software that tries to do something clever and useful without being able to tell, accurately, that it is succeeding and providing value.
Creating something that tries to be clever but comes across as stupid, because it doesn't make wise or appropriate use of the information it already has, does not excite or interest me.

Friday, March 08, 2024

Software development = basketball

This might be a leap but go with me for a minute.


"Lessons in basketball are lessons in life"

 

It's a clichéd phrase that was drilled into me at basketball training camps and through basketball-related movies when I was young. We weren't just being encouraged (forced?) to learn lessons that would help us play better basketball; these lessons would help throughout our lives.


Thinking today about the importance of fundamentals, I wonder if the world would be a better place if more developers (had) played basketball.


Thursday, March 07, 2024

Sticktoitiveness and software development

I recently heard that there is a common character trait among many developers in that they won't stop working on a problem until they've solved it.

I've always identified as having a similar but different trait: I won't give up when trying to solve a problem.

I came to this trait as a result of some of my first jobs in the industry. Due to the internet and world being as they were, and in combination with the companies, teams, and projects I was working on/with, there was no option to say "I don't know how" and give up. The problem needed to be solved, there was no one to ask who might know, and so I had to figure it out. That's what I was there for. That's what I was paid for.



Wednesday, March 06, 2024

Did this bug report get me banned from Visual Studio?

 As an avid user of Visual Studio and a developer of many Visual Studio extensions, I have a strong interest in enhancing the discoverability and user-friendliness of extensions. I was pleased to learn about the recent implementation of a requested feature and eagerly went to explore it.

Recently, I've also been exploring the use of WinGet DSC to configure a new laptop and have been experimenting with .vsconfig files to streamline the process.

During these investigations, I encountered an issue regarding the use of extensions containing "Extension Packs" (references to other extensions that should also be installed). Unfortunately, attempting to include them resulted in installation failures without any accompanying explanation for this limitation. Through a process of elimination, I confirmed that the inclusion of extension packs was the cause.

I submitted a bug report detailing my findings, which can be found [link to the original report, which was unfortunately removed]. Regrettably, I discovered that my access to the site has since been restricted, citing violations of the Community Code of Conduct.

Upon revisiting my initial post, I can only speculate that my direct and passionate writing style may have been misunderstood as impolite or disrespectful, but am unsure if this is the issue. I acknowledge the importance of maintaining politeness and respect in online interactions and am committed to improving in this regard.

I am left wondering if utilizing AI to refine my expressions to ensure a consistently polite and respectful tone may be a helpful approach moving forward. Perhaps this precautionary measure could prevent unintentional misinterpretations. 


Below is what I posted.

I share it here as an example (and warning?) to others. Be polite and respectful!


Themes of DDD North 2024

This last weekend, I was excited to get to speak at the DDD North conference again.

As a one-day, five-track conference there was a lot going on and a lot of varied content.

Of the sessions I attended and the discussions I had with other attendees, I noticed lots of mentions of:

  • AI
  • Testing (in a positive light)
  • General good development practices, rather than talks about specific tools or technologies.


Yes, I recognise that the talk I gave about the importance of documentation and testing as we use more AI tooling while developing software likely skewed my thinking and what I was more inclined to notice. It was just nice to not be the only person saying positive things about testing software. (Although at least two speakers did make jokes about writing tests so there's still a long way to go.)

The increased focus on generally applicable "good" practices was also good to see. While learning about a new framework or technology is useful in the short term or for specific tasks, spending time on things that will be valuable whatever the future holds feels like a better use of time.

While I'm still waiting for the official feedback from my talk (sorry, no video), upon reflection I'm glad I did it, and it was a good thing for me to do. I don't want to give a talk that anyone could give, so basing it on my experiences (& stories) is better than reading official descriptions of technologies, describing APIs, or showing trivial demos. I also want to do in-person events in ways that benefit from being "in person". This talk wouldn't have worked the same way as a recording, and I wouldn't have got as much from it either. If I could just have recorded myself talking about the subject and released it as audio or video, I'd have done that, but it wouldn't have been the same or as good. Although it might have been less work. Maybe I'll do that in the future, though.

 Here's me during the talk in front of a perfectly timed slide ;)

Me standing in front of a slide that says "I'm NOT perfect"



Sunday, March 03, 2024

Lack of nuance

 No nuance is almost always incorrect!


Yes, "almost" is very important in that statement.


If you get a response/answer/instruction without any acknowledgement of the nuances, you're almost certainly not getting the full picture. 

How do you know the importance of what is missing, if you don't know what's missing?



Saturday, March 02, 2024

Always use a regex rather than string manipulation

 Although, maybe not always:


It's almost as if absolute claims aren't always correct.



Friday, March 01, 2024

The required amount of documentation and testing will vary

The amount of documentation and the number of tests required will vary between:

  • code bases
  • the stage of development
  • the people working on it
  • what the code does
  • more....

Wednesday, February 28, 2024

Reviewing documentation is like reviewing code

 Two quick, but key points.


1. What is it meant to do? And, where/what/who is it for?

You can't review code fully unless you know what it's meant to do. You might be able to point out if something doesn't compile or account for an edge case that hasn't been covered, but if you can't say if the code does what it was meant to do, you can't provide a useful review.

It's the same with documentation. If you don't know what the document was intended to achieve, communicate, or teach, how do you know it is correct, appropriate, or does what it's meant to?


2. Take advantage of tools before you involve people.

Use spelling and grammar checkers before asking someone to review it.

It's like asking for a code review on code that doesn't compile or meet coding standards.



Tuesday, February 27, 2024

"LGTM" isn't automatically a bad code review comment

What's the reason for doing a code review?


It's to check that the code does what it is supposed to and that the reviewer is happy to have it as part of the code base.


If the code changes look fine and the reviewer is happy, they shouldn't be expected or obliged to give (write) more feedback than is necessary.


What's not good are pointless comments, or references to things that weren't changed as part of what is being reviewed.


A reviewer should not try to prove they've looked at the code by providing unnecessary or unnecessarily detailed feedback.

It's not a good use of time for the person doing the review.

Dealing with (responding to) those unnecessary comments is also not a good use of the time for the person who requested the review.


Writing something, even if it's just a few characters (or an emoji), that indicates the approval wasn't fully automated or done by accident is fine by me.

Of course, if all someone ever did was comment on code they're reviewing this way then that should raise different concerns.



Don't write more than you need to or for the sake of it.

Don't comment just to show you've looked at something.



Monday, February 26, 2024

Before you "add AI" to your software...

Is your software amazing?

Are there no usability issues?

Do the people using your software never have problems?

Are there "niggly" little issues that can be frustrating but have never been given enough priority to actually be fixed?

Do these things annoy, frustrate, disappoint, upset, or turn off users?


If your software (app/website/whatever) can't get the basics right, it might still be tempting to "add AI" (in whatever form), but chasing the hype may not be worth it.

Yes, AI is powerful and can do amazing things, but if the people using your software see you failing to get the basics working correctly, how will they feel when you add more sophisticated features?

If you've demonstrated a failure to do simple things, will they trust you to do the complex things correctly?

Especially if you can't explain how the new feature works or directly control it?


I'm not saying don't use AI. I'm asking that if you demonstrate to your customers that you can't do simpler things without issue, should you really be doing more complex things?


Sunday, February 25, 2024

Do you run tests before checking in?

If not, why not?

That the CI will (should) run them is not an excuse for you not to.

"I think it's right, but I haven't checked" is not professional or good enough.

Even worse than simply relying on the CI to run all the tests of the work you've done is asking for someone else to review the changes before the CI has even run the automated checks against it. 
How would you feel if I asked you to review some of the work I've done but haven't verified it's correct and doesn't break some other part of the system?

The same applies to linting or ensuring that code meets the defined standards and formats. You shouldn't be relying on something checking this after you've said you have finished.
You really should be using tooling that does this as you go.



Some test suites do take a very long time to run. Have you tried running only the tests related to the area you're working on?


If your test suite takes too long to run:

- Have you run the bits relevant to your changes?

- What work is being done (or planned) by you and your team to make the test suite run faster?



Why you shouldn't use LangVersion = latest

Unless you absolutely must.
Don't do this:

  <PropertyGroup>
    <LangVersion>latest</LangVersion>
  </PropertyGroup>

Never being one to be concerned about being controversial, this is why I don't think you should ever use the LangVersion of "latest" in your C# projects.


TLDR: "Latest" can mean different things on different machines and at different times. You avoid the potential for inconsistencies and unexpected behavior by not using it.


I've had the (let's call it) privilege of working as part of a certain distributed team on a large code base.

There was a global setting, hidden away somewhere, which set the LangVersion to latest. This was mostly fine until one member of the "team" updated the code to use some language features from version 10 on the day it was released. They committed the code, and other team members pulled down the changes.

But now the other team members (who hadn't yet updated their tooling; it was, after all, only release day) were getting errors and couldn't understand why the code was suddenly breaking. The use of the new language features wasn't necessary, and the confusion and wasted time/effort trying to work out where the errors had come from could have been avoided if an actual version number had been set, rather than relying on a value that changes over time and from machine to machine. The rest of the team would still have had to update their tooling, but at least they would have gotten a useful error message.


And then, there was another project I worked on.

We were using an open-source library and as part of some "clever" MSBuild task that they included in the package, it was doing a check for the LangVersion. At some point in the past, they needed to do a workaround for a specific version number and also added that workaround for "latest". In time, the underlying issue was fixed, and so when the package was used in a project that tried to use the "Latest" LangVersion, the package was trying to apply a fix that was no longer necessary and actually caused an exception. Yes, this was a pain to resolve. And yes, by "pain", I mean a long, frustrating waste of time and effort.


"Latest" may be useful if you're doing cutting-edge work that is only supported by that LangVersion setting. For all other cases, you should specify an actual number.



Of course, you're free to ignore these arguments. If you do, I'd love to know why.



Friday, February 23, 2024

If XAML has problems, why not just abandon it?

I often hear that XAML has problems, so you should use C# instead.

If you really strongly object to using XAML and can't be persuaded otherwise, feel free to only use C#. I'm not going to stop you. I'm more interested in you producing high-quality software that you can maintain, that provides value to people, and that they can use easily.

If you're not going to make a purely emotional decision, read on.

  • If you already have existing XAML code, rewriting it probably isn't the best use of your time.
  • "XAML or C#" is an artificial distinction. It was never the case that you were intended to only ever use XAML, and not C#, for the UI of anything but the most basic of applications. There are some things that are very hard in XAML, and there are some things that _should_ feel strange and unnatural when done in C#.
  • There are better ways to write XAML than the very basic ways shown in almost all instructions and training material.





"Quickies"?

With the social media app/platform space as it now is, I don't have a single place where it feels right to post my shorter thoughts and ideas.

These thoughts are also often longer than the default character limits of most platforms.

So, I'm experimenting with posting them here and tagging them "quickie". No promises I'll keep it up, but this is my site, so I thought it was a suitable place to experiment.


Thursday, February 22, 2024

Why AI (LLMs) is not the solution to your problems with XAML

If you work with XAML (or you've tried to) you might think of it as being verbose, long, and hard to maintain.


For many people, AI is a solution to many technical problems. It works on the basis that if you can describe what you want, or start writing it, the "AI" can generate what it thinks you likely want.

This is great if you're doing something that has been done many times before or you want to do something new in a very similar way to what already exists.

If, however, you aren't keen on what already exists, or have complaints or concerns about the way things are currently done, this isn't going to help. "AI" works by looking at existing data (for general-purpose, public AI services, mostly the contents of the internet) and creating something based on that.

But, coming back to XAML: the criticisms aren't so much about writing the code; they are more focused on reading and modifying it. Having the code written more quickly doesn't address the problems of reading, understanding, and maintaining it, tasks that are widely accepted to be where most developers spend most of their time.

If you want XAML that is easier to read, understand, maintain, and modify without unexpected consequences, you need to think about writing it differently. 

"AI" is great at giving you things similar what already exists, but if you don't want more of the same (and when it comes to XAML I don't think you do) now might be the time to start thinking about writing XAML in a different way....




Wednesday, February 21, 2024

Working with time is hard - why the film starts in "1 hour and 60 minutes"

Recently, I saw a display inside a cinema that said a film was starting in "1 hour and 60 minutes".

Be careful how you round times and parts of times before displaying them!


As 1 hour is 60 minutes, why didn't the display say the film starts in "2 hours"?

I can't say for sure, but I'd guess that the actual time until the start is 1 hour, 59 minutes, and more than 30 seconds.


Because the difference between now and the start time is less than 2 hours, the hours were reported as 1. With the hours dealt with, it moved on to calculating the minutes. Because seconds aren't shown, the number of minutes was rounded to the nearest value. As the timespan contains more than 30 seconds, this was rounded up to 60.
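
The two rounding strategies can be sketched in a few lines (this is a guess at the logic behind the display; the function names are mine). Rounding each unit independently reproduces the bug; rounding the total once and then splitting it does not:

```python
def countdown_buggy(total_seconds):
    # Rounds hours and minutes independently: with 1h 59m 31s to go,
    # the hours truncate to 1, and the remaining 59m 31s rounds up to 60 minutes.
    hours = total_seconds // 3600
    minutes = round((total_seconds % 3600) / 60)
    return f"{hours} hour(s) and {minutes} minute(s)"

def countdown_fixed(total_seconds):
    # Round the whole value to the nearest minute FIRST, then split it.
    total_minutes = round(total_seconds / 60)
    hours, minutes = divmod(total_minutes, 60)
    return f"{hours} hour(s) and {minutes} minute(s)"

# With 1 hour, 59 minutes, and 31 seconds to go:
print(countdown_buggy(7171))  # 1 hour(s) and 60 minute(s)
print(countdown_fixed(7171))  # 2 hour(s) and 0 minute(s)
```

The lesson generalizes: convert to the smallest displayed unit, round once, then decompose for display.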


Ok, this is an edge case that happens in a very small window of time. Most people (customers & potential customers) will never see this.


Why does this matter?

Why care about such potentially trivial issues?


If your customers can't trust the simple things they can see, why should they trust you with things they can't see?

If your app is slow, unresponsive, crashes, displays duplicate information, displays obviously incorrect information, has usability or accessibility issues, or any of dozens more things, why should people trust you with things like:

  • Keeping data secure?
  • Keeping data accurate?
  • Making calculations correctly?
  • Meeting regulatory requirements correctly?
  • Charging correctly?
  • Maintaining a reliable service?


If the visible parts don't show care and attention to detail, why should the people using your software assume that you've spent more time and focus on the things people can't see?


Thursday, February 01, 2024

That's not what code reviews are for!

So, you're reviewing some code.

Here are some things you shouldn't be doing:

  • Leaving comments just to prove you've looked at it.
  • Commenting on existing parts of the code base not directly related to the specific change/PR.
  • Checking the code builds - CI should do this before you review anything.
  • Checking the tests run (& pass) - CI should do this before you review anything.
  • Checking that coding/styling/formatting conventions are met - tooling should enforce this, not you!


Instead, here's what you should be doing:

  • Verifying the code does what it's supposed to do.
  • Verifying the code does not do anything it shouldn't.
  • Checking the code doesn't introduce any new issues/bugs/potential future problems.
  • Confirming that you'd be happy to support and maintain this as part of the code base.
  • Ensuring that any follow-up work (including accrued technical debt) is appropriately logged. 


These are not complete lists, but hopefully, they are still useful.


Wednesday, January 31, 2024

I don't want to be interesting

Do Interesting: Notice. Collect. Share - by Russell Davies

This is a great book. I heartily recommend it. However, I don't think it's interesting.


I don't think it's interesting because I don't like that word. 

"Interesting" is vague.

"Interesting" is meaningless.

"Interesting" is a word people use when they don't know what else to say.

Try it. Next time someone tells you about something "interesting", ask them what made it interesting or why they thought it was interesting.

Or consider when someone tells you they "have something 'interesting' to tell you." Is it really interesting? Or is it gossip? Or is it something they don't have a better description for?


"Interesting" is unspecific and unconsidered. Not the book, the concept.


 That's not what I want to be, or do, or be thought of.


Here are some much better adjectives (in no particular order):

challenging
inspiring
thought-provoking
troubling
upsetting
motivating
fascinating
amusing
entertaining
captivating
encouraging
intriguing
inviting
gripping
impressive
restorative
engrossing
enchanting
enthralling
spellbinding
diverting
attractive
rousing
persuasive
provocative
stimulating
stirring
exceptional
exciting
unforgettable


Don't those sound more appealing?


It's the first three items on that list (challenging, inspiring, thought-provoking) that I think apply to that book, too.



Wednesday, January 24, 2024

Lessons from mobile notifications applied to IDE Extensions

TLDR: If you want to prompt "the user" to do something, let them get value from what you provide first.

Dog with pleading eyes

Photo by Jennifer Latuperisa-Andresen on Unsplash

Mobile apps want ratings and reviews. These are also valuable for open-source projects. This applies in a marketplace/store or as libraries/bundles/packages for download.

Increasingly, in the open-source world, the issue of sustainability is also a consideration.

Among other things that contribute to sustainability is financial support.

I use a variety of approaches across my open-source projects to try and encourage such compensation via GitHub Sponsors.

It doesn't make a massive impact on my finances, but it has been enough to make a difference, and it was also the only way I could afford an unexpected tax bill when I was out of work during the pandemic!
Regardless of the amount or duration, I will always be grateful to those who have sponsored (and still are sponsoring) me. Thank YOU!

Yes, I ignore the amount and duration. All my sponsors get access to the same benefits, be it a one-off amount of $1 or recurring amounts of much more. (I previously did more analysis of these differing durations and amounts.)


Anyway, one of the approaches I use to encourage people to consider becoming a sponsor is a message displayed in the output window asking them to. (If they do, I also tell them how to make the message not appear. No phoning home. No personal data is collected. 😉)

This approach isn't appreciated by everyone but seems to be very effective, as I have more sponsors than most other casual open-source contributors I'm aware of.

My approach to monetizing my projects is very much inspired by donationware, and most software made available in this way doesn't hold back from asking for donations. 

I had been following this approach but wondered if a different technique might be more effective.

In the latest update to my C# Inline Color Vizualizer extension, I changed the behavior determining how visible the encouragement to become a sponsor is.

Instead of always displaying the message, it now loads it in the background but only actively shows it once at least 7 days have passed since the extension was first used and it has been used to annotate at least 100 files. The goal is to let the person become familiar with the benefits of using the tool before asking anything of them. The theory is that they will then be more inclined to respond to the message.
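
The gating logic itself is tiny. A sketch of roughly how such a check might look (the thresholds come from my description above; the function and parameter names are invented, not the extension's actual code):

```python
from datetime import datetime, timedelta

def should_show_sponsor_message(first_used: datetime,
                                files_annotated: int,
                                now: datetime) -> bool:
    """Only ask for sponsorship after the user has had time to get value:
    at least 7 days since first use AND at least 100 files annotated."""
    return (now - first_used >= timedelta(days=7)) and files_annotated >= 100
```

Both conditions must hold, so a heavy user still isn't prompted in their first week, and an occasional user isn't prompted until the tool has demonstrably done something for them.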

I must have said and heard the same advice applied to mobile notifications hundreds of times but had somehow overlooked its wider application.

If you're not familiar with the extension, it shows an inline example of each color specified in your code. Like this. Why not give it a try?

Partial screenshot of the VS editor showing what the specified color looks like.

I plan to add similar changes of behavior to my other Visual Studio extensions as is appropriate to the way they work and what they do. I'll share details here if it proves effective or I learn anything insightful.


It's not only about the money I get. I like that this encourages more people to think about the sustainability of their tools. I think such messages do this, and the fact that most of my sponsors aren't sponsoring other people (yet) makes me hope that I'm just the first of many, or that they'll find other ways to support the software they use (if not rely on).


Sunday, January 14, 2024

What could I write about here that could help you?

Serious question.
No promises or guarantees.

I'm just aware that everything I've written here is because I wanted to write it.


But, I wonder if there are things you're interested in me writing about that would also be interesting for me to write about...?