Thursday, March 04, 2021

Deprecating something that's no longer needed is both a sad and a happy thing.

A couple of years ago, I built a tool called UWP Design-Time Data (GitHub | NuGet).

It attempted to provide the design-time data functionality that was available in Xamarin.Forms (XF) but not in UWP.
I saw someone from Microsoft demonstrate the functionality available to XF developers and immediately wondered two things: When will that be available for UWP? And how can I recreate it until then?

[Partial screenshot of the Visual Studio XAML editor and designer showing the functionality described above]

Given the limited XAML extensibility of Visual Studio, I did the best I could. It wasn't perfect, but it was better than nothing.

Mine was never meant to be a permanent solution.
I always expected and hoped for it to go away. The last time I gave a conference talk about XAML tooling (in Feb 2020), I even talked about mine being a temporary solution.

Version 16.7 of Visual Studio (actually released in the middle of last year) added this functionality alongside several other XAML-related improvements.
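For anyone who hasn't seen it, this is roughly what that built-in support looks like: properties prefixed with the d: (designer) namespace are applied only in the Visual Studio designer and ignored at runtime. This is just a minimal sketch; the page, view model, and property names are made up for illustration.

```xml
<Page
    x:Class="SampleApp.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">

    <!-- d:Text is shown in the designer only; the x:Bind value is used at runtime. -->
    <TextBlock Text="{x:Bind ViewModel.CustomerName}" d:Text="Jane Appleseed" />
</Page>
```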

Today I finally got round to updating the GitHub repo and NuGet listings.

I'm not going to take them down, but they're essentially deprecated, and I don't ever expect to update them again.


It's sad to think that this is the end of this project.
All the time and effort I spent on this has no future value.
Hopefully, at least some of what I learned in the creation and support of this will be useful in the future.
But, it's good to know that this small but useful functionality is now available to all VS users, not just those who hear about and install my extension.
It's also good to think that I was able to help some people for a while.


Today is a time to celebrate what was and look again at what comes next. 🥂

Never just "should"!

TL;DR: never use "should" without explaining why.

It should never have come to this. (Photo by Brett Sayles on Pexels)

How many times today have you used the word "should"?

"You should do X."
"They should do Y."
"It shouldn't be like that."

Or any variation on that theme?

"should just" is even worse.


"You should just do X."

This use of "just" implies it's simple or easy to do, and that little effort will be needed. But that isn't always the case.
What may be simple for you may not be simple for someone else.
What seems simple from the outside can hide what's actually required to make it happen.

Avoid saying "just" when it implies simplicity unless it's something you're doing and already know how to do.


"Should" is more complicated.

You may be familiar with the MoSCoW method of prioritization. I was taught this at university and have never liked how vague and open to interpretation the categories are. I've been in far too many meetings throughout my career that have come down to different understandings of what needs to be delivered. In these situations, "should" is taken to mean one of the following:
  • It would be nice if this was included, but it isn't strictly necessary.
  • It identifies something that can be deferred until another time.
  • It's something that can't be left out without a VERY good reason.
"Should" can cause confusion.

Or maybe you're used to seeing SHOULD in documents where it takes its definition from RFC 2119.

3. SHOULD

This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.

This is the crux of my issue. It admits that there are one or more reasons why something should or shouldn't be done.

If you say that "something should be done," why should it be done?
Because of your personal preference?
Because of a contractual reason?
Because of knock-on effects and dependencies or consequences elsewhere?

Without knowing the reason(s) that something "should" be done, we're forced to guess.

If the person saying "we should do X" has the ultimate authority to make decisions, then knowing why matters less. It might be helpful, useful, or empowering to understand why, but it's not essential.
In this scenario, different wording is more appropriate. "Please do X." is more specific and more helpful. Even if it's a bit abrupt, it's better than adding unnecessary and potentially confusing words.

If it's not a direct instruction, knowing why is essential. If it's an opinion, we may be free to ignore it. If it's a contractual requirement, we need to know so we can make sure it happens. If it's a pressing priority, is time-sensitive, or will have considerable consequences if not known about and acted upon, then we need to know.

It might not be as severe as "you should stop that small child walking into traffic," but explaining why something "should" be done is considerate.

"You should do X" is rude.
"If you do X, it will help me..." or "if X is done first, it will unblock this other work" or "we need to make sure that we do X" are respectful and collaborative. 


Words matter.

Monday, March 01, 2021

You must be insane to be a software developer

"The definition of insanity is doing the same thing over and over again and expecting a different result." - Probably not Albert Einstein.
Albert Einstein - pixabay

When debugging, I repeat myself frequently. Doing the same thing over and over again and hoping for a different result.

There are times I wish I had alternatives, but I know no other way, and so I rerun the code. Each time hoping that something will be different.

Rarely it is.

But it's those rare occasions that make this approach acceptable, if not necessary.


Note. I'm not talking about Heisenbugs. They're a separate, additional thing.


Sometimes repetition is the only option. An issue is observed or reported as happening only "some of the time." In such scenarios, all that's available is to run the code or perform an action repeatedly to see when the bad thing occurs, then try to identify what was different when it happened.

This is important because, without a consistent reproduction of an issue, it's impossible to be 100% confident that an applied fix has actually worked.

I know many developers and organizations who dismiss reports of occasional issues. If the problem doesn't happen all the time, so their thinking goes, it must not be with their software. Or they assume that the responsibility for fully identifying the cause of the bug is not theirs. Or they think their time would be wasted trying to reproduce such bugs consistently.

This comes down to perspective and priority.

Is the developer tasked only with writing the code? Or is the code a means to an end?  

I think the goal of writing code is to deliver value. I don't see any value being delivered by dismissing a bug report because it's not consistently reproducible. Instead, I see this as a challenge.

I assume that the person reporting the problem has given all the information they can. Now the challenge is for me to investigate the situation and find the solution. I'm a detective. 🔍

Being a bug detective might (and often does) mean repeating tasks to see when there is an inconsistent response and then trying to work out the cause. Such problems usually come down to one of four things.

  • It's time (or date) dependent.
  • It's machine/OS/configuration dependent.
  • It's a race condition.
  • It's dependent on the number of times the function is executed.

That time it failed, but the time before it worked, even though the code hadn't changed. So what else has changed? And how can I test that?
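As a contrived sketch (all the names are made up, and it's deliberately simplified), here's the kind of code that fails only some of the time, illustrating two of the causes above:

```csharp
using System;

// Hypothetical example: nothing here comes from a real project.
public class ReportGenerator
{
    private static int _runCount;

    public string BuildHeader()
    {
        _runCount++;

        // Time-dependent: DateTime.Now is read twice, so if the minute rolls over
        // between the two reads, the header is wrong. It only fails in that tiny window.
        var header = $"Generated at {DateTime.Now:HH}:{DateTime.Now:mm}";

        // Execution-count dependent: a limit that's only hit after enough calls,
        // so a short test run never sees it.
        if (_runCount > 1000)
        {
            throw new InvalidOperationException("Internal cache is full.");
        }

        return header;
    }
}
```

The code is identical on every run; what changes between runs is the time and the accumulated call count, and those are exactly the differences to look for.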


Note. I know there are obscure edge-case bugs that aren't worth anybody's time to address. Let's assume that here I'm talking about bugs the business deems worth spending time/money/effort fixing.


I wonder if there are better ways to investigate such bugs.

I wonder if there are tools to help with this. (I've found references to a few things that no longer exist but nothing current. 😢)


I will frequently run unaltered code and hope for a different result. Surely I'm not the only one...

Sunday, February 28, 2021

Tried everything? - Methodical debugging

I've been thinking about "methodical debugging" and would love to hear your thoughts and experiences.

Early in my career, I worked in a head office IT department. While the main focus of my role was software development (mobile, desktop, and web applications), I was also the last line of support for any problems.

If a customer had a problem, they'd contact the branch. If no-one at the branch could help (some branches had their own IT staff), it would be escalated to the helpdesk. If the helpdesk couldn't solve the problem, they'd contact me.

When contacted by the helpdesk about an issue I couldn't immediately answer, I'd ask, "What have you tried?" The answer to this was always "everything!"

This bugged me for two reasons.

  1. It definitely wasn't true because if they'd tried everything, they'd have found the answer. Or if there was no answer, they'd know that too.
  2. It meant they didn't have a list of what they'd tried, so I'd have to go right back to the beginning, assume nothing, work through all the possible issues, and find the resolution that way. This duplication of effort felt like a big waste of time.

In retrospect, I wish I'd taken the opportunity to define a list of questions to ask and things to document when asking them.

But I don't have a time-machine and can only think about improving things in the future.

And I have been thinking about the future. Specifically, about debugging.

My debugging has a tendency to be a bit haphazard. I'll skip from trying one possible idea to another until something works. I'll do my best to try and only change one thing at a time, but I can't be certain that's always the case. Especially if the issue isn't quickly resolved, I can easily forget all the things I've tried and suspect I may end up trying things more than once.

While this approach gets me to an answer, I'm not sure it is efficient, effective, or the best way to learn the most about what doesn't work or why.

There isn't a lot (I can find) written on the subject, and the little there is typically starts by acknowledging the lack of writing in this area.


This feels like an area that, if better understood, could greatly help many people in the industry.


What about you? Do you do anything other than what I've described above?

If you document things as you debug, I'd love to know more about the process or technique you use and how it's helped you.




Aside: The role I mentioned at the top is where I developed the attitude that I must find the solution if there's a problem because there isn't anyone else to ask. This was before YouTube and StackOverflow. When I got truly stuck, there were only web-based bulletin boards and forums to ask for help, and these rarely proved fruitful. Even now, when I hit a problem, I usually stop and try and work it out myself before searching for error messages or the like. I find this approach valuable as it forces me to think more widely about the problem and explore ways to solve it. I've become a better developer by taking an attitude of "I don't know how to solve this yet(!), and in learning how to solve this, I will gain knowledge that helps me in other ways too." This is in contrast to an attitude I often see in others, which is to get the answer to the specific problem they are currently facing as quickly as possible and then move on.

Saturday, February 13, 2021

100,000+ "thank-you"s

At some point in the last couple of days, I passed a milestone. 

The public* packages I have made available on NuGet have now been downloaded over one hundred thousand times.


28 Packages | 101,283 Total downloads of packages

In the grand scheme of things, this isn't a large number. There are many packages with hundreds of thousands of downloads. Some of those packages get more downloads on an average day than my packages have had over several years.

So does this seem like a milestone even worth acknowledging?

I think so.

The above figure represents thousands of people (developers--they are people too) who I've been able to help save time and effort. It may be with a trivial, one-off task, or it may be something they use multiple times a day and in apps used by thousands of people. 

Download numbers like this are only relevant to the packages and the people who make them. I've known some developers who were over the moon because they never expected even a few hundred people to use the thing they created. In contrast, I've known others complain that the thing they created didn't get millions of downloads in the first few days.

This is a bigger number than I ever expected to reach, but I also expect to produce many more packages in the future. Little by little, as I stick around and keep doing the work, the numbers go up.

These tools may not have made a dent in the universe, but they probably contribute to a small dimple. Maybe, over time, more dimples will add up to a dent.


They may not be the most popular packages in the world, but that's not what I set out to create. I saw gaps where tools would be useful and produced tools to help in those situations. A small gap in the market that one person can quickly fill will rarely become massively popular, so I set my expectations accordingly.

However, a niche product can be really useful to people who need it.

Many of these packages are now redundant because they relate to platforms that no longer exist (they aren't publicly available or supported in any way), but they serve as a reminder to myself (and maybe others) that I'm here for the long term. I haven't just shown up in the open-source world, made something quickly, and then gone away. They're also a reminder that small gaps can be worth filling. Not only do the tools help with the things I work on, but they're useful and helpful to others too.


I think the above figures help quantify part of my contribution to making the world a better place through open-source software.

Of course, my contributions aren't just niche tools for underserved platforms and environments. I also contribute to much larger projects, some with many millions of downloads.
The combination of covering a breadth of smaller scenarios with my own tools and working on much larger projects as part of a group of contributors helps me become a better developer.


* There are a couple of private ones I have removed ("unlisted") because of name and dependency changes. They definitely shouldn't be used.