Wednesday, November 27, 2024

Open Source Software and "the Church of England's problem"

No, this isn't a religious post.


I was listening to an interview regarding marketing challenges for big brands trying to reinvent themselves. The context was large, older brands that people don't want to go away but whose products they aren't going to buy (any more).

A comparison was made with the Church of England. (It was a very English discussion.) The speakers said:

  • They wanted it to keep existing.
  • They weren't going to go.
  • And, they certainly weren't going to pay for it.


I immediately saw a comparison with Open Source Software:

  • People want it to exist (and be maintained/updated)
  • (Most) people are unlikely to participate. (Beyond using it.)
  • And, they're certainly not going to pay for it.


The challenges facing the CofE and OSS have similarities, but I think the solutions each will seek are very different.

That's all, except to say that the outcome is likely to be the same for anything that people want to exist but aren't willing to pay for or participate in.

Tuesday, November 26, 2024

Even more testing nonsense with generated tests

So, following on from the last post I wrote.

I wrote that post just before I checked in the code.

As a final check before committing the code, I did some manual checks.

I found a[n obvious] scenario that didn't work.

I thought I'd created a test case for it. Turns out I hadn't. So, I added one, and it failed. Then I added an extra clause to the function and made sure all the tests passed.


I wonder if the Copilot-generated tests cover this scenario?

It turns out they do and they don't.

There is a generated test with a name that implies it covers the scenario, but the actual contents of the test method do not do what the name implies.

It's not enough to have the test; it also needs to be valid and correct.

Review generated tests more thoroughly than generated code.
If the test is wrong it leads to code that is wrong.
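
As a purely hypothetical illustration of the kind of mismatch I mean (this isn't the actual generated test, and the type name is made up), imagine something like:

[TestMethod]
public void Intersects_ZeroLengthInputAtEndOfRange_ReturnsTrue()
{
    // 'NumericRange' is a made-up stand-in for the range type from the last post.
    var sut = new NumericRange(10, 5); // covers 10..15

    // Nothing in here exercises a zero-length input or the end boundary,
    // so this test passing tells us nothing about the scenario in its name.
    Assert.IsTrue(sut.Intersects(11, 2));
}

A quick skim sees the name, assumes the scenario is covered, and moves on.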


This also highlights the need to have other people review code (and test plans).

I know I'm not perfect and I make mistakes and omissions.
That's part of the reason for code reviews: to have an extra pair of eyes double check things. (It's just a shame this was all on a personal project.)


This experience really has reinforced my belief that test cases should be written before code.


The future of software development is testing

Or, to be more precise, the future of work for a lot of people doing "software development" depends on the need to test code.


AI is great. You can use it to write code without having to know how to program.

Except, the real value of a good software developer is knowing what code to write.

Yes, you may be able to eventually get AI to generate code that initially appears to produce what you want, but how can you determine if it's good, or efficient, or correctly covers all scenarios you need?

It's not just a case of having code that seems to work, you also (potentially among other things) need to know if the code is correct. 


AI tools can be useful to software developers by generating code more quickly than the developer could have written it themselves. This is true for almost all developers. Even me.


Today, I needed to write a function to see if two numeric ranges overlapped.

It's quite a trivial thing. There are lots of examples of this type of method in existence. I could easily write such a function, but I decided to let Copilot create it for me.


The code it produced looked fine:


public bool Intersects(int start, int length)
{
    return (start >= this.Start && start < this.Start + this.Length) ||
        (start + length >= this.Start && start + length < this.Start + this.Length);
}


All it needed was some boundary checks to return false if either of the inputs was negative.

That should be good enough, I thought.


But, how can I be sure?

Does it matter if it's not?


I know this is just the type of code where it's easy to make a mistake, such mistakes are hard to spot, and any mistake is likely to produce annoying bugs.

So, I wanted to make sure the function does all that I need. That means writing some coded tests.

I also know that Copilot can generate tests for a piece of code.

So, I decided on a challenge: Could I write a better or more complete set of tests?


I went first and I came up with 15 test cases. 

One of them failed.

But, it was easy to spot the issue with the code, make the change, and rerun the tests to see them all pass.


Then I let Copilot have a go. It came up with 5 test cases.

They all passed. First with the unmodified function.

Then, with the modified function, the tests all still passed.

Being able to change the logic within the function and not see any change in test results is a clear sign that there is insufficient test coverage.


Not that this is about the number of tests. This is about the quality of tests. 

When it comes to testing, quality is significantly more important than quantity.

The question should not be "Are there tests?" or "How many tests are there?"

The question you need to ask is "Do these tests give confidence that everything works correctly?"


How do you ensure that AI-generated code is 100% what you want and intend? You have to be incredibly precise and consider all scenarios.

Those scenarios make good test cases.

If you're working with code, whether it's written by a person or AI, you need to test it thoroughly. The challenging part of thorough testing, and the bit you can't (yet?) totally outsource to AI, is determining all the scenarios (test cases) to consider and account for.




What was the case that failed and how did the code need to be changed?

I needed the code to work with an input of a length of zero. If this was at the end of the existing range, I needed to consider it as intersecting, but the initially generated code considered it not to be.

The fix was to change the final "less than" to be a "less than or equals" and then it did what I needed.
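
For completeness, here's roughly what the adjusted function looks like, based on my description above (a sketch rather than a copy of the real code):

public bool Intersects(int start, int length)
{
    // Boundary checks: negative inputs are never valid.
    if (start < 0 || length < 0)
    {
        return false;
    }

    // The final comparison is now "<=" so that a zero-length range starting
    // exactly at the end of this range counts as intersecting.
    return (start >= this.Start && start < this.Start + this.Length) ||
        (start + length >= this.Start && start + length <= this.Start + this.Length);
}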


Having now written this up, I also realize there's a potential for an overflow with internal sums that are greater than Int.Max, but as the reality of this code is that it will only be used with maximum input values of a few thousand this shouldn't be an issue...for now! ;)


This wasn't the end of the story: https://www.mrlacey.com/2024/11/even-more-testing-nonsense-with.html

Thursday, November 21, 2024

Open a web page when a project can't be built

When writing the Uno book, we wanted a way to have the project fail with a useful message if a required dependency hadn't also been set up correctly.

Below is what I came up with.


<Target Name="CheckForSfChart">
    <!-- If the companion SfChart project hasn't been set up, open the readme in a browser and fail the build with a pointer to it. -->
    <Exec Condition="!Exists('..\..\..\syncfusion-Uno.SfChart\Src\Syncfusion.SfChart.Uno.csproj')" Command="cmd.exe /c start https:////github.com/PacktPublishing/Creating-cross-platform-applications-with-Uno/tree/main/Chapter6/readme.md" />
    <Error Condition="!Exists('..\..\..\syncfusion-Uno.SfChart\Src\Syncfusion.SfChart.Uno.csproj')" Text="The SfChart project is also needed. See https://github.com/PacktPublishing/Creating-cross-platform-applications-with-Uno/tree/main/Chapter6/readme.md" />
</Target>

See the real thing in GitHub.


If the required other project is missing, it:

  • Opens a webpage with more details.
  • Outputs a message with details in the build output.


Points of note:

  • Yes, msbuild will fail to compile the solution with a missing project anyway. This is just additional info to help in debugging.
  • The URL has repeated slashes ("https:////" instead of "https://") because they appear to get escaped when the command is executed.
  • Because this calls cmd.exe, it will only work on Windows.
  • Not a perfect solution but better than nothing.



Documenting this now before I forget.


The lack of a visual designer is actually offensive

One of the biggest complaints and requests relates to a lack of a "visual designer" for creating UIs with XAML for WinUI3 and .NET MAUI applications.

For a long time, Microsoft had no plans to build one. Although that may be changing...

There are other, related tools that they are working on, but these are not the designer people are asking for.

I've even heard it said (by people who are employed by Microsoft) that "these tools are better than a designer so you don't need one".

That's what I find offensive.

It's the claim that MS know what people want better than they know themselves. And, the implication that people are wrong about what they say they want. More important (and concerning) is that I don't think MS are listening to the "real" request.

Yes, the Hot and Live tools are great for working with existing code when the app is running, but there's also a need to help with a completely blank app (or page, or window...)

There needs to be a better way to help people get started and give them confidence that what is in the source is correct before they start debugging (or even compiling?) the code.


I have worked with people creating drag-and-drop-related tools for XAML UIs, so I know what's possible and how hard it is. That's also why I'm not trying to build one myself.

I think there's another way. It's not necessarily better at first look, but it's certainly very helpful, powerful, and able to offer more productivity than arguing about why something that isn't what people are asking for isn't what they want or need.



Logging in Testing

Once upon a time, I was working with a codebase that contained significant and detailed levels of logging. It supported registering multiple loggers and various levels of detail, depending on the required "LogLevel".

While looking at how this code is (and can be) tested, I made a discovery that I hope to bring to many other projects in the future.


Taking a step back, there are a few broad questions to think about in terms of testing and logging:

  • Ignoring logging, does the code do everything it is supposed to?
  • Are the correct things logged when they should be?
  • Does the code do everything it should when there are no loggers registered?
  • Does the code do everything it should when there are one or more loggers registered?
  • Are we sure that any exceptions in the code are not logged and ignored?
  • How does logging code affect the code coverage of tests?

I only started thinking about this when I had to deal with a "weird" bug where functionality appeared to be working, but the lack of a logger meant a code path failed silently, as it was configured to quietly log the exception and then do nothing. :(


My solution: add a wrapper around all tests so that they're run multiple times, with different test-specific loggers attached. (A rough sketch of the idea follows the list below.)
  • One test execution had no loggers attached. (To find code that was unexpectedly dependent upon always having at least one logger.)
  • One test execution with a logger that listened at all levels but did nothing. (To verify that the presence of a logger had no impact on correct functionality and to ensure that all logging code paths were followed.)
  • One test execution with a logger that would cause the test to fail if an error or exception was logged. (Because we shouldn't log errors as part of normal execution.)
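
Here's a rough sketch of how that wrapper idea can be expressed. This isn't the original code; the ILogger shape, the logger classes, and the Widget type are all hypothetical stand-ins. It uses MSTest's DynamicData to run each test once per logger configuration:

using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical logging abstraction, standing in for the one in the codebase.
public interface ILogger
{
    void Log(string level, string message, Exception exception = null);
}

// Listens at every level but does nothing, to prove that merely having a
// logger registered doesn't change behaviour (and that logging paths run).
public class SilentLogger : ILogger
{
    public void Log(string level, string message, Exception exception = null) { }
}

// Fails the test if an error or exception is logged during normal execution.
public class FailOnErrorLogger : ILogger
{
    public void Log(string level, string message, Exception exception = null)
    {
        if (level == "Error" || exception != null)
        {
            Assert.Fail($"Unexpected error logged: {message}");
        }
    }
}

// Hypothetical system under test that accepts zero or more loggers.
public class Widget
{
    private readonly IReadOnlyList<ILogger> loggers;

    public Widget(IReadOnlyList<ILogger> loggers) => this.loggers = loggers;

    public bool Process()
    {
        foreach (var logger in this.loggers)
        {
            logger.Log("Information", "Processing started");
        }

        return true; // the real work would happen here
    }
}

[TestClass]
public class WidgetTests
{
    // Each test runs three times: no loggers, a do-nothing logger,
    // and a logger that fails the test if an error is logged.
    public static IEnumerable<object[]> LoggerConfigurations =>
        new[]
        {
            new object[] { Array.Empty<ILogger>() },
            new object[] { new ILogger[] { new SilentLogger() } },
            new object[] { new ILogger[] { new FailOnErrorLogger() } },
        };

    [DataTestMethod]
    [DynamicData(nameof(LoggerConfigurations))]
    public void Process_CompletesSuccessfully(ILogger[] loggers)
    {
        var sut = new Widget(loggers);

        Assert.IsTrue(sut.Process());
    }
}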

This was all on code that ran hundreds of tests in a couple of seconds.
The impact of running multiple times was negligible.
The fact that the total number of tests went up considerably was nice. (It also helped people reluctant to write tests feel good, as every test they did write was now counted as three, so they felt more productive.)
This gave us more realistic code coverage reports, as the logging paths were now included in the coverage. (Previously, most of the logging code wasn't covered, making it harder to identify areas with particularly low coverage.)


Tests for logic specific to the loggers, or reliant on particular logging behaviour, were put in classes that were excluded from the above.

Adding those different test scenarios found more issues than the one that led me to the original investigation.

I'll definitely do this again.

It's also good to have all these different test and logging combinations configured early in development. Hopefully, by the time you think development is finished, there shouldn't be any exceptions being thrown. It is hard to verify that nothing bad happens when an exception is thrown (and logged). You don't want the handling (logging) of an exception to cause another exception and end up in an infinite loop. These things can be hard to check for, but I've found that having all the different logging combinations available (without me really having to think about them) has made it possible to catch some logging issues I might otherwise have missed.

How do you name "theme" level resources?

This would have been a social media post, but it quickly became too long.

If you have some software that:

a) Has a specific color (or colors) that is (are) used for branding reasons.

b) Adjusts to "light" and "dark" themes.


I have some questions:

  • What do you call that (those) color(s)? 
  • Do you have different versions of those colors for when in a dark or light theme? And if so, how do you name them?

I've seen lots of variations in the ways the above are answered, but no consensus.





Monday, November 18, 2024

Here's how I wasted $249 on code signing, so you don't have to!

I don't have positive experiences with code signing.

As of today I have another tale of confusion, frustration, anger, disappointment, and expense.


I'll try and keep this to just the facts, so I don't let my emotions carry me away:

  • I sign my VSIX (Visual Studio extension) and NuGet packages with a code signing certificate.
  • I use a hosted Code Signing solution from DigiCert.
  • Their certificates are initially limited to signing 1000 files per year. (Additional issuances are available for $249 for another 1000.)
  • I got through my initial thousand MUCH faster than I was expecting.  
  • I found out I'd used my allocation when signing suddenly started failing.
  • It turns out that, when signing a .vsix file, the default behaviour is to also (first) sign all the files inside the package and then sign the package as a whole.
  • Even if the internal files are already signed.
  • Regardless of where the files in the package are from.
  • So, when I thought I was signing one file, under the hood it was signing many more.
  • In some cases it was signing 30-40 files each time.
  • In the past, I bought a certificate, installed it on my build machine, and it didn't matter how many files I signed or how many times I signed a file (or multiple packages in a file).
  • Now that everything is a subscription (and especially for expensive ones), it becomes even more important to understand how the things you're using work and how you may end up being billed for using them.
  • No, the documentation on this is far from extensive, clear, or useful.
  • I worked out the solution based on the description here "Some artifacts are containers that contain other signable file types."
  • Then I found the setting `--filelist` in the documentation at https://learn.microsoft.com/en-us/visualstudio/extensibility/dotnet-sign-cli-reference-vsix?view=vs-2022#options 
  • I've now updated all my signing scripts to only sign what's absolutely necessary. As an example see this commit.
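
For illustration, a full signing command then ends up looking something like this (the same Sign CLI format as in my KeyLocker post further down, with a file list added; the file path here is made up, and the exact option and file syntax are in the linked documentation, so treat this as a sketch):

sign code certificate-store "D:\output\MyExtension.vsix" --filelist "D:\build\vsix-files-to-sign.txt" -cfp {SHA256-fingerprint} -csp "DigiCert Software Trust Manager KSP" -k {certificate-friendly-name} -u "http://timestamp.digicert.com"

As I understand it, the file that --filelist points to controls which of the signable files inside the container actually get signed, which is how you stop a single .vsix from quietly consuming dozens of issuances.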



Not mentioned: dealing with DigiCert support and how they made it hard for me to pay them money to buy more "issuances". :'(

Wednesday, November 13, 2024

I've used future versions of Visual Studio for years

At least, that's how I've convinced myself to think about it. 

What I'm referring to is the large number of extensions I've built for Visual Studio and how, slowly, many of them become built-in features of the product itself.
It's not that my past effort has become unnecessary. It's that everyone else is catching up.


Visual Studio version 17.12 is now out. At the top of the list of new productivity features is the ability to copy just the description from the Error List, by default, rather than the full row.

I have an extension in the marketplace that does this. It's called ErrorHelper & it's been in existence for more than five years!

partial screenshot showing the context menu options that the extension provides

My extension also does more!

It allows searching for the error message/description (in the search engine of your choice) directly from Visual Studio.
Yes, VS now makes it easier to copy only the text of the description, but without my extension, you still have to go to the browser and the search engine and paste it there. Or, with my extension, you could do it in one click.

As some error descriptions include file paths that can impact results or be something you don't want to share, there's also the option to do the search with the path removed. No need to manually remove the path from the text you've copied.

Additionally, as some error descriptions include a URL to get more information, you can directly open the URL in your default browser. No need to manually extract the URL from the other text and then paste that into your browser.


Having the Search and Open URL features has made me feel more productive every time I've used them. Yes, working with error descriptions isn't a massively complicated process but it previously felt slow and frustrating. It was a "paper-cut" that interrupted my flow and felt like more effort than it needed to be. That's why I first made the extension.

It's great that this feature has been added to the product, but it's sad that Microsoft has only gone so far.
They've said that they made this change in response to listening to feedback from developers who have told them that it was harder than necessary to search for error messages with the old functionality.
What has been implemented seems like only part of a solution to that problem. Copying the text to search for is only the first step in searching for an answer. Were they really listening that closely to the real problem if they only implemented half a solution?


They've also said that when choosing which features to add, they avoid adding things that are already possible via existing extensions. However, in their examples of this, they only mention existing extensions that were created by Microsoft staff. I guess my extensions don't count. :(


I also have many other extensions available in the Visual Studio marketplace that you could begin using to improve your productivity now rather than wait for something similar (but possibly not as good) to be added to VS itself.


Naming is important, but consistency possibly more so

Over on the MAUI repo, I've just started a discussion relating to the naming of code, particularly in relation to what is provided by default and the impact it can have.

Yes, I know that naming is difficult and subjective.

I started the discussion less to debate the names used but more to focus on the consequences of inventing new terms or using terms other than those that the population of likely users is most likely to already be familiar with.

There may be times when new or different terminology is required or preferred, but when that's the case:

  • It is essential to explain why the change has been made, or the alternative is being used.
  • The new convention needs to be used consistently. This includes updating existing documentation, samples, and instructions.
  • There needs to be a clear reason/justification for the extra work (& short-term confusion) a change in terminology will create.
  • Filtering out personal preferences, justifications based on "how we've always done it", and the urge to "try something new" must also be factored in.

This is probably also a good reminder to me that "those who show up make the difference."

I also need to remember to motivate myself over the things I care about so that the enthusiastic but under-informed squeaky wheels don't cause avoidable issues for others.

Monday, November 11, 2024

cool vs better

Some people want to make cool things.
But, what if it's only cool for the sake of being cool?
It may be cool because it's new, but not measurably better (an improvement) in any way.
But what if the cool new thing is so different and new that it's not easily comparable with what already exists?
What if the cool new thing prompts responses like "it looks good, but I'm not sure what we'd ever [pay to] use it for"?
What if you make a cool new thing but can't justify why someone should use it, beyond it being "cool"?

But,
Sometimes the benefits of a cool new thing aren't obvious. Sometimes the benefits aren't seen for a long time, or until someone with very different ideas and experiences finds a new use, or combines it with something else.

So,
How do you balance spending time on cool new things that may not have any short-term benefit against spending it on incremental, measurable improvements to what already exists?

Friday, November 08, 2024

Are your programming language criticisms misdirected?

Let's say you use a programming language that you don't like or have other issues with. Is that the language's fault or yours?
Why not use a different language? Or technology?

If tooling makes the language easier to work with, is not using that tooling the language's fault?

If a language wasn't meant to be written by hand, is writing it by hand (typing it directly, without other tools) the fault of the language?

If the language is significantly easier to use via a tool and the company responsible for the language don't make good tools, is that the fault of the language or the company?

If a new tool makes directly working with the programming language unnecessary, does that indicate a failure of the language or that it shouldn't have been used that way? Or that the wrong tool/language was being used?

If a language was originally designed as a way of storing output from a tool and you're now writing that "output" directly, does it make sense to write it the same way as the tool did? Or would it be better to write it in a way that's easier for you to read and write?

If you are unhappy with a language or how it's used, shouldn't you look at alternatives to that language or other ways of using/writing it?



Are you blaming someone or something else for your choices or lack of action?


Does the above offend you? Or is it a cause for action?


Thursday, November 07, 2024

Nick is awesome. Be more like Nick!

Nick shows appreciation to the people that help them.

Nick knows that useful, high-quality software takes time, effort, knowledge, and skill.

Nick encourages people who do things that are beneficial to others.

Nick uses some of the benefits of having a well paid job to help others financially. 

Nick says "thank you".



One of the ways Nick (and many others like them) does this is through sponsoring open source development.

Wednesday, November 06, 2024

Superstition, placebos, belief, and "playing with code"

Another brief post to help me think by writing.

I recently heard that "there's no merit to length in writing." They were talking about reducing the length of their book from 220K words to 180K, but the idea stands. The value isn't in the number of words, the value is in what it makes you think or do as a response to what you've read.

I've also recently recognized that when I'm thinking about a new idea, I spend a lot of time focusing on how to first express that idea and make little progress in developing it. Once I start writing and articulating the idea, I make progress in thinking about its consequences and applications.

So, ...

I've recently been struck by the idea that "superstition is a placebo for belief." There's lots to unpick there and maybe that will come in time.

Beliefs are also strong and hard to change. Ironically, this is especially true when people think they are very logical and intelligent.


I've recently encountered a lot of developers who are reluctant to embrace significantly new ways of doing things.

Side note: there's an irony in developers being responsible for creating change (by producing or altering software) while being reluctant to embrace change themselves.

They will quickly try something new (or "play with a new technology") and then quickly decide that it's not as good as what they currently use (or do).


Maybe "doing things the way I've become accustomed to doing them" ("always done them"?) is a superstition about being productive and a believe that it's the best way to do something.

Briefly "playing with something" (Yes, I dislike this term) is unlikely to provide the time or chance to learn the nuances of something dramatically different to what they've used before. It's also likely that it won't enable the opportunity to fully appreciate all the potential benefits.
Or, maybe the time to "ramp-up" on using something new means that it's never adopted because "there isn't the time" now to slow down while learning something new, even if it means being able to save time and move faster in the future. 
Or, maybe it comes down to not appreciating the possibilities of a new technology that requires thinking about usage in a fundamentally (and conceptually?) different way.

I've seen the above happen over and over again as new technologies come along. For example, asserting that "the new technology is slow" when using it in the way an existing technology would be used, which is far from optimal for the new technology.

It's not the technology, it's what you do with it. Unfortunately this isn't always easy to identify without spending a lot of time using it.


And the placebo? That there can be a perceived performance (or other) benefit in not changing, compared with the short-term impact/cost of slowing down to learn to do something new.


Yes, this applies to AI/LLMs, but to many other things too...






Tuesday, November 05, 2024

Insanity, LLMs, determination and the future of software development

Insanity Is Doing the Same Thing Over and Over Again and Expecting Different Results

The above quote may or may not be by Albert Einstein. It doesn't really matter.

If you've been around software development for more than about seven minutes, you've probably heard it quoted in some form or another.

It's often used as an argument against blindly repeating yourself, especially when trying to recreate a bug with vague repro steps.

This quote also points to a fundamental difference developers need to consider as the use of LLMs and "AI" becomes more a part of the software being developed (not just as a tool for developing the code).

AI (& LLMs in particular) has (have) a level of non-determinism about it (them).

Repeatedly asking the same thing of an LLM won't always produce the same result.

There's a random element that you almost certainly can't control as an input.

Many software developers like the certainty of software: (in theory) the same inputs always produce the same outputs. That level of certainty and determinism is reassuring.

Businesses like this too. Although it may not always be as clear.

Looking at the opposite situation highlights the impact on businesses.

Business: "How (or why) did this [bad thing] happen?"

Developer: "We can't say for sure."

Business: "How do we stop it from happening again?"

Developer: "We can add handling for this specific case."

Business: "But how do we stop similar, but not identical, things happening?"

Developer: "We can try and update the training for the AI, or add some heuristics that run on the results before they're used (or shown to the user), but we can't guarantee that we'll never get something unexpected."


Fun times ahead...

Monday, November 04, 2024

not writing about the cutting edge

I think and write slowly - often in combination.
Others are keen to talk/write/speculate about the latest shiny things.
I want to understand how the latest shiny things fit into the larger, overall/long-term trends.

My first book followed a structure from a presentation first given 6 years earlier and then refined over that time (and the 18 months I spent writing it.)

My next book is in 3 parts.
Part 1 has its roots in a document I wrote over 20 years before.
Part 2 is based on my own checklist, which I've been building for more than 5 years.
Part 3 is based on a revelation I had and then spent 2 years actively trying to disprove through conversations with hundreds of developers. (Repeatedly asking people to tell me why I'm wrong was a new, but ultimately enlightening and encouraging, experience when no one had an answer.)

Anyway, there's another book coming.
Progress is slow, but I keep discovering new things that must be included, and the result wouldn't have been as good if I'd missed them out or tried to retrofit them into something already written.
Yes, this might be me trying to find excuses for not seeming to have made more progress.
Still, I'm excited for the book and look forward to sharing its lessons in the (hopefully not too distant) future.

Sunday, November 03, 2024

where did the SOCIAL in social media go?

I joined Twitter, and Facebook, and MySpace (although I was a bit late there) on the same day in 2007. Until then I didn't see the point.

MySpace was already pretty much dead by that point. Over the years, Facebook became a way of keeping up with family and friends I knew locally, while Twitter became a place to connect with, meet, share, and learn from others with similar interests.

With many people, and the friends and acquaintances I'd made over the years who shared those interests, mostly gathered in one place, it was possible to keep up with announcements, updates, and just general chit-chat. I found it reasonably easy to keep up with what was going on in the areas I was interested in. It was, of sorts, a community.

And then a bomb was set off under twitter.
With people leaving at different times and going off in different directions to different alternative apps, it became impossible to keep track of everyone who moved to a new platform/app. (Especially with the misinformation about sharing usernames and accounts on other platforms.)
I now have a "presence" in multiple apps (Mastadon, Blue sky, Instagram, Threads, and yes still X -- all profile links 😉) 
But none of them seem a patch on the community that existed before.
In each app there are a few accounts that I used to follow all in one place, but it seems an uncomfortable and unnecessary effort to keep opening and scrolling through each one on the chance of finding something important and/or relevant. Plus, each now has a terrible signal-to-noise ratio that is off-putting.
I've tried cross-posting across apps, but the expectations of content on each seem so different. Although I know others treat them as interchangeable--with varying results.
If I just feel the need to say something that I think/hope will get a response I'll go to Twitter/X, but then I'll feel bad because of all the people being vocal elsewhere about why they left and closed their accounts.

Yes, what Elon did to Twitter and what X has become are far from great, but I don't want to be another voice complaining.
How can an online community be created that's anything like what we had in the past?
I know a bit about building communities IRL, but where and how are online communities really built?
Or should I just give up, pick one app, and start making connections again...

Wednesday, October 16, 2024

Code signing a VSIX Package with a certificate from DigiC**t

Let's skip over why you might want to do it, but if you need to sign a VSIX package with a certificate from the DigiCert KeyLocker (using their hosted hardware security module service), referenced via a certificate stored in the Windows Certificate Manager, I have important details for you.

A VSIX Installer showing a signed package

Here's the thing.

DigiCert claim that you can use their certificates to sign a .vsix file using SignTool.exe. You can't.

SignTool does not support signing VSIX files.

Previously, the recommended way to sign a VSIX package was with VsixSignTool, but this has now been deprecated.

The current (October 2024) recommended solution is to use the Sign CLI tool instead.

That's all well and good, but there aren't any clear instructions (anywhere!) that explain how to do this with a code signing certificate hosted in a DigiCert KeyLocker.

If you're trying to do this, I'd recommend not contacting DigiCert support as they're likely to tell you something like:

It seems our documentation is correct, it is supported, but does not specify the "how". As that would be listed as a third party custom configuration, which is something that is not supported at this time.

That's not at all helpful.

They may also point you to this (devblogs) blog post, but that still doesn't contain a complete working example for this scenario.


Here's what I recommend (based on what I've managed to get working and now use--don't ask how long it took to get working as it's very depressing.):

  • Set up your machine following DigiCert's instructions until you get to a point where you can successfully sign a .dll file with smctl.exe.
  • Install the Sign CLI tool.
  • Install KeyStore Explorer.
  • Use KeyStore Explorer to get the SHA256 version of the fingerprint for the certificate you wish to use. (and remove the colons between values)
  • Sign the VSIX with a command like this:
sign code certificate-store {Path-to-VSIX-file} -cfp {SHA256-fingerprint} -csp "DigiCert Software Trust Manager KSP" -k {certificate-friendly-name} -u "http://timestamp.digicert.com"

e.g. (some values shortened)

sign code certificate-store "D:\output\MyExtension.vsix" -cfp 4AD4D3E4...7C2A -csp "DigiCert Software Trust Manager KSP" -k key_7...670 -u "http://timestamp.digicert.com"


I hope this helps someone.

Yes, using something like AzureKeyVault is probably preferable. If you have detailed, up-to-date instructions on how to set this up, please share them.




Thursday, October 10, 2024

It's been a while

I'd guess that the majority of personal blogs in existence have a final post that apologizes for not posting in a while and then promises that this will change and they'll start posting again soon...


Yes, I've been quiet for a while. 

At the start of the year, I made plans to leave the project I was working on to spend some time reassessing what was important and what I wanted to do in the future. That time was interrupted by me rupturing the Achilles tendon in my right leg and being forced into 3 months of virtual immobility. This not only ruined my summer plans (and those for the rest of the year) but also kept me away from my desk and my computer(s). There were positives and negatives to this.


But now I'm back. As I start to get back into things I'm planning on working through the many, many draft blog posts I have and finishing and publishing them where appropriate.

If the next few posts seem very random and unrelated, that'll be why.

Developers like us pledge to support open-source

I was recently contributing to a project when something broke on the CI builds because a referenced library was misconfigured.

I didn't realize the library was being used. It was configured in a way that meant it didn't show up inside Visual Studio when working with the solution.

I also knew this library had a special (moral) license. This "required" those using it to support the project financially, but the business wasn't doing so.

I wasn't using the functionality of the library, but when I discovered this, I did two things:

1. I made a personal financial contribution to the project.

2. I highlighted this to the business and indicated that they should be financially supporting the project if they wished to keep using the library. (The person who originally added the library pleaded ignorance--"But, it's open source, so we don't need to pay.")


People like us (developers like us):

  • Support the people writing open-source software. Financially if possible (and requested).
  • Respect the spirit of open-source licenses. Not just the minimum, enforceable legal requirements.


This incident raises other, bigger questions, but I'll discuss them at another time—maybe.


However, I mention this because I recently heard about the Open Source Pledge. An initiative to help encourage companies to "Do the right thing, support Open Source".

No, it's not going to solve the problem of funding and support for open source maintainers, but it will help.

Friday, June 07, 2024

My unique value

Having been ill most of this week, I'm feeling unmotivated and disappointed that I haven't achieved much in the last few days.

 One thing that has cheered me up a bit is the idea that:

Other people may know or be able to communicate individual ideas better than me, but I'm the only one who can put these disparate ideas together this way to produce this unique and valuable result.


I think this applies to coding, writing, events, and more... :)

Friday, May 17, 2024

Specific or vague answers to specific and vague questions

If the question is vague, is a specific or vague answer more helpful?

Actually, is either helpful?
Would clarifying the question (or scoping it) be a better response?


If the question is specific, a specific answer can be helpful, but a vague answer may help see the broader picture or help expand the view of the question.

Both can help or be useful in different contexts.


This feels like the opposite of the way we treat data.

With data, we want to try and accept a broad range of options and return something specific or at least consistently formatted.

With questions, we want the question to be narrow (if not specific) and to allow a potentially broad range of answers.


Now, how do I stop bringing all my thoughts back to software development? Or, more specifically, do I need to consider whether this matters?

Thursday, May 16, 2024

Is it really a "workaround"?

Not using something is not a workaround.

A workaround is an alternative solution to the problem. Normally it's slower, longer, or more convoluted than the desired solution (the one that has an issue).

I think words matter.
I want to help people get past their problems. Even if that means doing the work of fixing things.

Some people seem more interested in arguing that a reported problem isn't something they need to do anything about than actually addressing whatever is making something hard to use.

Sometimes, I wonder if there's a gap in the market for customer service training for developers who respond to public issues and discussions on GitHub.

Thursday, May 02, 2024

Retrying failing tests?

Doing the same thing repeatedly (and expecting--or hoping for--different results) is apparently a definition of madness.

That might be why trying to debug tests that sometimes fail can feel like it drives you mad.


It's a common problem: how do you know how much logging to have by default? Specifically when running automated tests or as part of a CI process.


Here's my answer:

Run with minimal logging and automatically retry with maximum logging verbosity if something fails.


It's not always easy to configure, but it has many benefits.


Not only does this help with transient issues, but it also helps provide more details to identify the cause of issues that aren't transient.

An issue is probably transient if it fails on the first run but passes on the second (with extra logging enabled.) It also helps identify issues that occur when no logging is configured. -- It can happen; ask me how I know. ;)

Be careful not to hide recurring transient errors. If errors can occur intermittently during testing and the reason is not known, what's to stop them from happening intermittently for end users, too?

Record that a test only passed on the second attempt, and raise a follow-up task to investigate why. Ideally, you want no transient errors or things that don't work when no logging is configured.
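
The shape of the idea, as a rough sketch in code (my own illustration; RunAllChecks is a made-up stand-in for whatever actually executes the tests or build step):

using System;

public static class QuietThenVerbose
{
    // Run the body with minimal logging first; if it fails, record that a
    // retry was needed and run it again with maximum verbosity.
    public static void Run(Action<bool> body, Action<string> recordRetry)
    {
        try
        {
            body(false); // first attempt: minimal logging
        }
        catch (Exception firstFailure)
        {
            recordRetry($"First attempt failed ({firstFailure.Message}); retrying with verbose logging.");
            body(true); // second attempt: log everything
        }
    }
}

// Example usage:
// QuietThenVerbose.Run(
//     verbose => RunAllChecks(loggingVerbosity: verbose ? "diagnostic" : "minimal"),
//     message => Console.WriteLine(message));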


This doesn't just apply to tests. 

It can also be applied to build logging verbosity. 

You only really want (or rather need) verbose log output when something goes wrong. Such as a build failing...




Wednesday, May 01, 2024

A lesson in algorithms from Guess Who

Earlier this week, I attended the DotNetNotts user group meeting. The second talk of the night was by Simon Painter and was about some of the algorithms used in Machine Learning.

As part of his explanation of decision trees, he used an example based on the game Guess Who?

A screen capture of the display of the 24 characters in the game Guess Who
Here's a screenshot from YouTube, where the hybrid part of the meetup was relayed.

If you're not familiar with the game, you have to ask yes or no questions to identify one of the characters:

Alex, Alfred, Anita, Anne, Bernard, Bill, Charles, Claire
David, Eric, Frans, George, Herman, Joe, Maria, Max
Paul, Peter, Philip, Richard, Robert, Sam, Susan, Tom

As part of his talk, Simon stated that the best strategy and ideal scenario for any decision tree is to divide all the available options in half (a binary split). However, for this game, there are no characteristics of the characters that make this possible (and hence the challenge of the game). 

Simon did, however, point out that there is the possibility of using compound questions to have a greater chance of success by more evenly dividing the groups in half each time.
So, instead of limiting questions to the form of "is the person wearing a hat?" you use questions like "does the person present as female OR have facial hair?" or "does the person have blue eyes, a hat, OR red hair?"


Such questions were relevant for the rest of the talk, but it got me wondering.

I looked at all those people and their names and thought I saw something...

About half the people seem to have names containing two vowels...

It turns out that 15 people have names containing two vowels. This is better than any visual differentiator but is still far from perfect.

Ideally, you want to divide the group in half each time.
So, we'd go from 24 > 12 > 6 > 3

When you get to only having three options left, there are myriad ways (questions) to differentiate any of the options in this version of the game, but (without having done the math), it's just as quick to guess each person/option in turn.

What we need to maximize our chance of winning, and probably remove all fun from the game, is a set of questions that will divide the group in half until it gets to a group of 3.


It was only on my way home that I realized that, if I'm going to look at the letters in the people's names, there are probably more and better questions that could be used to play a perfect game of Guess Who without using compound questions.

And so, (because I'm "fun") I tried.

It actually turned out to be really easy to find such questions. And by "really easy", I mean it took me less time than it's so far taken me to type this.


Here are the questions:

Question 1: Does the person's name start with a letter that comes before 'H' in the alphabet?

This is a simple split.
If they answer yes, you get the people Alex - George.
If they answer no, you are left with Herman - Tom.

If the first answer was Yes, question 2 is: Does the person's name start with a letter that comes before 'C' in the alphabet?

Another simple split.
If they answer yes, you get the people Alex - Bill.
If they answer no, you are left with Charles - George.


If the answers so far are Yes & Yes, the 3rd question is: Does the person have facial hair?

If the answer to this question is Yes, you're left with Alex, Alfred & Bernard
If the answer to this question is No, you're left with Anita, Anne & Bill.


If the answers so far are Yes & No, the 3rd question is: Does the person's name start with a letter that comes before 'E' in the alphabet?

If the answer to this question is Yes, you're left with Charles, Claire & David
If the answer to this question is No, you're left with Eric, Frans & George.


If the first answer was No, the next question was the hardest to identify. Question 2 is: Does the person's name contain fewer than 3 consonants?

Another simple split.
If they answer yes, you get the people Joe, Maria, Max, Paul, Sam & Tom.
If they answer no, you are left with Herman, Peter, Philip, Richard, Robert & Susan.


If the answers so far are No & Yes, the 3rd question is: Does the person's name start with a letter that comes before 'P' in the alphabet?

If the answer to this question is Yes, you're left with Joe, Maria & Max
If the answer to this question is No, you're left with Paul, Sam & Tom.


If the answers so far are No & No, the 3rd question is: Does the person's name start with a letter that comes before 'R' in the alphabet?

If the answer to this question is Yes, you're left with Herman, Peter & Philip
If the answer to this question is No, you're left with Richard, Robert & Susan.
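
Being the sort of person who likes to double-check these things in code, here's a quick sketch (my own, written after the fact) that confirms the first split is 12/12 and that the "fewer than 3 consonants" question picks out Joe, Maria, Max, Paul, Sam & Tom:

using System;
using System.Linq;

public static class GuessWhoCheck
{
    private static readonly string[] Names =
    {
        "Alex", "Alfred", "Anita", "Anne", "Bernard", "Bill", "Charles", "Claire",
        "David", "Eric", "Frans", "George", "Herman", "Joe", "Maria", "Max",
        "Paul", "Peter", "Philip", "Richard", "Robert", "Sam", "Susan", "Tom",
    };

    public static void Main()
    {
        // Question 1: does the name start with a letter before 'H'?
        var beforeH = Names.Where(n => n[0] < 'H').ToList();
        Console.WriteLine($"Before 'H': {beforeH.Count}, rest: {Names.Length - beforeH.Count}");

        // Counts the non-vowel letters in a name.
        static int Consonants(string name) => name.Count(c => !"aeiouAEIOU".Contains(c));

        // The question used for the second half of the board.
        var fewerThanThreeConsonants = Names.Where(n => n[0] >= 'H' && Consonants(n) < 3);
        Console.WriteLine(string.Join(", ", fewerThanThreeConsonants));
    }
}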



Yes, this was a questionable use of my time ;)

If you don't think questions about the letters in the person's name are appropriate or allowed, please keep that opinion to yourself.



Anyway, if you want to know more about machine learning, the whole event is available online.

Further silly posts about over-analyzing games are unlikely to follow. "Normal" service will resume shortly.



Saturday, April 20, 2024

trivial feedback and micro-optimizations in code reviews

When these are given, is it because there is nothing more important to say?

Or, does such feedback come from people who don't understand the bigger picture or want a quick way to show they've looked at the code without having to take the time and mental effort to truly understand it?

Tuesday, April 16, 2024

Before commenting

Are you just repeating what's already been said before?

Where/when appropriate, are you correctly attributing others?

Does your comment add value?

Are you adding a new/different/unique perspective?

Have you read the other comments first?

Have you thought about who (possibly many people) will read the comment?

Monday, April 15, 2024

MAUI App Accelerator - milestone of note

MAUI App Accelerator - 10,000 installs

In the last few days MAUI App Accelerator passed ten thousand "official" unique installs.

This doesn't include the almost eight thousand installs that came via the MAUI Essentials extension pack. (Installs via an extension pack happen in a different way, which means they aren't included in the individual extension's install count.)

While big numbers are nice (and apparently worth celebrating) I'm more interested in how it's used.
The numbers for that are lower, but still noteworthy. 

It's currently used to create about 25 new apps each day. Which is nice.
I'm also trying to improve my ability to use App Insights so I can get other and better statistics too.


More updates are coming. Including the most potentially useful one...



Saturday, April 13, 2024

"I'm not smart enough to use this"

 I quite often use the phrase "I'm not smart enough to use this" when working with software tools.


This is actually code for one or more of the following:

  • This doesn't work the way I expect/want/need.
  • I'm not sure how to do/use this correctly.
  • I'm disappointed that this didn't stop me from doing the wrong thing.
  • I don't understand the error message (if any) that was displayed. 
  • Or the error message didn't help me understand what I should do.


Do your users/customers ever say similar things?

Would they tell you?

Are you set up to hear them?

And ready to hear this?


Or will you tell me that I'm "holding it wrong"?


Friday, April 12, 2024

Don't fix that bug...yet!

An AI generated image of a computer bug - obviously

A bug is found.
A simple solution is identified and quickly implemented.
Sounds good. What's not to like?


There are more questions to ask before committing the fix to the code base. 
Maybe even before making the fix.

  • How did the code that required this fix get committed previously? 
  • Is it a failure in a process?
  • Have you fixed the underlying cause or just the symptoms?
  • Was something not known then that is now?
  • Could a test or process have found this bug before it entered the code base?
  • Are there other places in the code that have the same issue?
  • Are there places in the code that do something similar that may also be susceptible to the same (or a variation of the) issue?
  • How was the bug reported? Is there anything that can be done to make this easier/faster/better in the future?
  • How was the bug discovered? Can anything be done to make this easier, more reliable, or automated for other bugs in the future?
  • In addition to fixing this bug, what can be done to prevent similar bugs from happening in the future?
  • Is there anything relating to this issue that needs sharing among the team?



As a developer, your job isn't just to fix bugs; it's to ensure a high-quality code base that's as easy as possible (or practical) to maintain and provides value to the people who use it.
At least, I hope it is.



Thursday, April 11, 2024

Formatting test code

Should all the rules for formatting and structuring code used in automated tests always be the same as those used in the production code?

Of course, the answer is "it depends!"


I prefer my test methods to be as complete as possible. I don't want too many details hidden in "helper" methods, as this means the details of what's being tested get spread out.


As a broad generalization, I may have two helpers called from a test.

One to create the System Under Test.

And, one for any advanced assertions. These are usually to wrap multiple checks against complex objects or collections. I'll typically create these to provide more detailed (& specific) information if an assertion fails. (e.g. "These string arrays don't match" isn't very helpful. "The strings at index 12 are of different lengths" helps me identify where the difference is and what the problem may be much faster.)
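
As a rough, made-up illustration of the kind of assertion helper I mean (using MSTest-style asserts):

private static void AssertStringArraysMatch(string[] expected, string[] actual)
{
    Assert.AreEqual(expected.Length, actual.Length, "The arrays have different lengths.");

    for (var i = 0; i < expected.Length; i++)
    {
        Assert.AreEqual(
            expected[i].Length,
            actual[i].Length,
            $"The strings at index {i} are of different lengths.");

        Assert.AreEqual(expected[i], actual[i], $"The strings at index {i} are different.");
    }
}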


A side-effect of this is that I may have lots of tests that call the same method. If the signature of that method needs to change, I *might* have to change it everywhere (all the tests) that call that method.

I could move some of these calls into other methods called by the tests and then only have to change the helpers, but I find this makes the tests harder to read on their own.

Instead, where possible, I create an overload of the changed method that uses the old signature, and which calls the new one.
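
A hypothetical sketch of that pattern (the names here are made up): the helper gains a new parameter, and the old signature becomes an overload that forwards to it, so the existing tests stay untouched.

// New signature, used by tests written for the new requirement.
private static Widget CreateSut(bool enableCaching) => new Widget(enableCaching);

// Old signature, kept so the existing (still valid) tests don't need to change.
private static Widget CreateSut() => CreateSut(enableCaching: false);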


If the old tests are still valid, we don't want to change them.

If the method signature has changed because of a new requirement, add new tests for the new requirements.

Tuesday, April 09, 2024

Varying "tech talks" based on time of day

I'll be speaking at DDD South West later this month.

I'm one of the last sessions of the day. It's a slot I've not had before.

In general, if talking as part of a larger event, I try to include something that links back to earlier talks in the day. Unless I'm first, obviously.

At the end of a day of talks, where most attendees will have already heard five other talks that day, I'm wondering about including something to draw together threads from the earlier sessions and provide a conclusion that also ties in with what I'm talking about. I have a few ideas...

I've seen someone do a wonderful job of this before, but it's not something I've ever heard mentioned in advice to (or books on) presenting... I guess if you're there you'll see what I do.

Monday, April 08, 2024

Comments in code written by AI

The general "best-practice" guidance for code comments is that they should explain "Why the code is there, rather than what it does."

When code is generated by AI/LLMs (CoPilot and the like) via a prompt (rather than line completions), it can be beneficial to include the command (prompt) provided to the "AI". This is useful as the generated code isn't always as thoroughly reviewed as code written by a person. There may be aspects of it that aren't fully understood. It's better to be honest about this.

What you don't want is to come to some code in the future that doesn't fully work as expected, not be able to work out what it does, not understand why it was written that way originally, and for Copilot's explanation of the code to not be able to adequately explain the original intent.

// Here's some code. I don't fully understand it, but it seems to work.
// It was generated from the prompt: "..."
// The purpose of this code is ...

No, you don't always need that first line.


Maybe xdoc comments should include different sections.

"Summary" can be a bit vague.

Maybe we should have (up to) 3 sections in the comments on a class or method:

  • Notes for maintainers
  • Notes for consumers
  • Original prompt
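
A rough, hypothetical sketch of what that might look like on a method (the class, prompt, and rules here are all invented for illustration):

public class ShippingCalculator
{
    /// <summary>
    /// Calculates the shipping cost for an order subtotal.
    /// </summary>
    /// <remarks>
    /// Notes for maintainers: the flat rate and threshold are placeholders until confirmed.
    /// Notes for consumers: returns zero when shipping is free.
    /// Original prompt: "Write a C# method that returns a flat 4.99 shipping cost
    /// unless the order subtotal is 50 or more, in which case shipping is free."
    /// </remarks>
    public decimal CalculateShipping(decimal orderSubtotal) => orderSubtotal >= 50m ? 0m : 4.99m;
}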


Writing a comment like this may require some bravery the first few times you write such a thing, but it could be invaluable in the future.

Sunday, April 07, 2024

Reflecting on 1000+ blog posts

1200 posts, 1026 published, 172 drafts, 2 scheduled
These are the current numbers for my blog. (Well, at the time I first started drafting this. I expect it will be several days before it's posted and then they will be different.)

These are the numbers I care about.

The one most other people care about is 2,375,403. That's the number of views the articles have had.

But this isn't a post about statistics. This is a post about motivation and reward.


 

I started writing this blog for me.

That other people have read it and got something from it is a bonus.

If I were writing for other people, I would write about different topics, I would care about SEO and promotion, and I would have given up writing sooner.

I get lots of views each day on posts that I can't explain.

I know that most views of this blog come from "the long tail," and Google points people here because there is a lot of content. The fact that I've been posting for 17+ years also gives me a level of SEO credibility.

There have been periods where I have written very little. This is fine by me. By not forcing myself to publish on a particular schedule, the frequency of posting doesn't hold me back or force me to publish something for the sake of it.

I publish when and if I want to.

Some people need and/or benefit from forcing themselves to publish on a regular schedule. If that works for you, great. If it doesn't, that's okay, too.

Others might think a multi-month gap in posting is bad, but if that's what I want or need, it's okay. Over a long enough period, the gaps are lost in the overall volume of posts.

I'm only interested in writing things that don't already exist anywhere else. This probably holds me back from getting more views than if that were my goal, but it probably helps me show up in the long tail of niche searches.

And yet, some people still regularly show up and read everything I write. Thank you. I'm glad you find it interesting.

Will I keep writing here? I can't say for certain but I have no plans on stopping.


I'm only publishing this post because I thought I might find it useful to reflect on all that I've written, and 1000 posts felt like a milestone worth noting, even if not fully celebrating. Originally, I thought I'd want to write lots about this, but upon starting it feels a bit too "meta" and self-reflective. I don't know what the benefit is of looking at the numbers. What I find beneficial is doing the thinking to get my ideas in order such that they make sense when written down. That's, primarily, why I write. :)




Friday, April 05, 2024

Does code quality matter for AI generated code?

Code quality, and the use of conventions and standards to ensure readability, has long been considered important for the maintainability of code. But does it matter if "AI" is creating the code and can provide a more easily understandable description of it if we really need to read and understand it?

If we get good enough at defining/describing what the code should do, let "AI" create that code, and then we verify that it does do what it's supposed to do, does it matter how the code does whatever it does, or what the code looks like?

Probably not.

My first thought as a counterpoint to this was about the performance of the code. But that's easy to address with "AI":

"CoPilot, do the following:

- Create a benchmark test for the current code.

- Make the code execute faster while still ensuring all the tests still pass successfully.

- Create a new benchmark test for the time the code now takes.

- Report how much time is saved by the new version of the code.

- Report how much money that time-saving saves or makes for the business.

- Send details of the financial benefit to my boss."
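
To be slightly less flippant about the benchmark step: assuming a .NET codebase, a first pass at that benchmark might look something like the sketch below. Everything here (the PriceCalculator class, the numbers) is hypothetical, and BenchmarkDotNet is just one way to do it.

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Hypothetical code under test; in reality this would be whatever the AI generated.
public static class PriceCalculator
{
    public static decimal Calculate(decimal unitPrice, int quantity) => unitPrice * quantity;
}

[MemoryDiagnoser]
public class PriceCalculatorBenchmarks
{
    // Baseline = true means any later, "faster" version gets reported relative to this one.
    [Benchmark(Baseline = true)]
    public decimal CurrentImplementation() => PriceCalculator.Calculate(9.99m, 42);
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<PriceCalculatorBenchmarks>();
}

Turning the resulting numbers into a figure for how much money the time saving makes or saves for the business (and emailing the boss) is still very much left to the human.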


Thursday, April 04, 2024

Where does performance matter?

Performance matters.

Sometimes.

In some cases.

It's really easy to get distracted by focusing on code performance.

It's easy to spend far too much time debating how to write code that executes a few milliseconds faster.

How do you determine/decide/measure whether it's worth discussing/debating/changing some code if the time spent thinking about, discussing, and then changing that code takes much more time than will be saved by the slightly more performant code?

Obviously, this depends on the code, where it runs, for how long, and how often it runs.


Is it worth a couple of hours of developer's time considering (and possibly making) changes that may only save each user a couple of seconds over the entire time they spend using the software?
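
As a rough, made-up example: two hours of a developer's time is 7,200 seconds. If the change saves each user two seconds in total, you need roughly 3,600 users before the raw time even balances out, and that's before accounting for a developer-hour costing far more than a user-second, or for the risk of introducing a bug while making the change.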


What are you optimizing for?

How do you ensure developers are spending time focusing on what matters?

The performance of small pieces of code can be easy to measure.
The real productivity of developers is much harder to measure.


How do you balance getting people to focus on the hard things (which may also be difficult to quantify) against the easy things to review/discuss (which they're often drawn to, and which can look important from the outside) but which don't actually "move the needle" in terms of shipping value or making the code easier to work with?

Wednesday, April 03, 2024

Thought vs experiment

Scenario 1: Consider the options and assess each one's impact on usage before shipping the best idea.
Scenario 2: Ship something (or multiple things) and see what the response is. Then iterate and repeat.

Both scenarios can lead to the same result (in about the same amount of time).

Neither is necessarily better than the other. However, it doesn't often feel that way.

Always using only one of them isn't the best approach.


#NotesToMyself


Hat Tip: David Kadavy

Monday, April 01, 2024

The Visual Studio UI Refresh has made me more productive

Visual Studio is getting a UI refresh. In part, this is to make it more accessible.

I think this is a very good thing. 

If you want to give feedback on another possible accessibility improvement, add your support, comments, and thoughts here.

Anyway, back to the current changes.

They include increasing the spacing between items in the menu.

Two images of the main toolbar in Visual Studio in the dark theme. The top image shows a snapshot of Visual Studio today where the bottom image is a mockup of the toolbar which has more spacing and is a bit wider with less crowding.

There are some objections to this as it means that fewer items can be displayed at once.

Instead of complaining, I took this as an opportunity to revisit what I have displayed in the toolbar in my VS instances.

I used to have a lot there. 

I knew that some of those things I didn't need and, in some cases, had never used. I just didn't want to go to the trouble of customising them.

"If they're there by default, it must be for a reason, right?" Or so I thought.

A better question is "Are they there for reasons I have?" In many cases, they weren't.

So I went through and spent (what turned out only to be) a few minutes customising the toolbars so they only contained (showed) the items I wanted, needed and used.

That was several weeks ago, and it has been a massive improvement.

  • I feel I'm working a bit faster.
  • I'm not spending time looking for the things I need.
  • I'm not distracted by things I don't need or don't recognise.
  • I feel encouraged and empowered as I've made my work environment more productive.


A system change to improve things for others encouraged me to improve things for myself. I see that as a win-win.



When the developers no longer seem intelligent

There are companies where the managers don't know what developers do. They have some special knowledge to make the computers do what the business needs. Sometimes, these managers ask the developers a question, and they give a technical answer the manager doesn't understand.
Very soon (maybe already), the developers will be using AI to do some of the technical work the company wants/needs.
Eventually, the AI will do something unexpected, and the business managers will want to know why. The developers will not be able to explain.
The managers will want to guarantee that the bad or unexpected thing will not happen again, but the developers will not be able to do that.
It's the nature of the non-deterministic AIs that are now being built.
This may be ok.

A possible takeaway is to notice that the appearance of intelligence (by giving a technical-sounding answer that the listener doesn't really understand) isn't going to be enough.

If you can't explain what you're doing now, how will you explain what AI is doing and that you can't guarantee what it will do?


Friday, March 29, 2024

How can you make manual testing easier?

 I found this still in the printer when I bought my early morning coffee the other day.

Printout with all different supported text options displayed

It reminded me of a time when I used to have 5 (yes, five) different printers on my desk (well, on a table next to it) to enable testing of all the different and specific label printers the software I worked on had to support.

More importantly, it reminded me of the importance of making it as easy as possible to manually test things that can only (practically) be manually tested.

I assume the above output is intended to verify that the printer is working correctly.
I also assume it's used at the start of each day to verify (check) that the printer (in this case, on the automated checkout) is working correctly.

That printout is currently stuck up in front of my desk. It's a reminder to myself to regularly ask:

What could I do to make this easier to test?

Thursday, March 28, 2024

Does AI mean software development will appeal to different people?

Certain personality types are attracted to software development.

The logic and absolute certainty appeal to those people.

AI removes that certainty. It's non-deterministic.

Will this put off some people who like (or need?) the absolutes?

Might it attract different people, with other interests and personality types?



The most important question on a PR checklist is...

Admit it, you thought I was going to say something about testing. Didn't you?

While testing is super important and should be part of every PR, the most important question to ask when working on something as part of a team is:


Does this change include anything you need to communicate to the rest of the team?


I've added this to PR templates before, and I've been frustrated when "circumstances" have prevented me from doing so.


Why does this matter?

  • Because if it's something low-level or fundamental, then everyone is likely to need to know it. You shouldn't rely on everyone reviewing the PR or checking everything that gets merged. 
  • Because the change(s) in the PR might break the code other team members are working on.
  • Because a "team" implies working together towards a common goal. Keeping secrets and not sharing things that "teammates" will benefit from knowing hurts the team and the project.


Not sharing important knowledge and information creates frustration, resentment, wasted effort, and more.

Wednesday, March 27, 2024

Doing things differently in CI builds

Say you have a large complex solution.

Building everything takes longer than you'd like. Especially when building as part of CI checks, such as for a gated merge.

To make the CI build faster, it can be tempting to have a filtered version of the solution that only builds 'the most important parts' or filters out some things that take the most time (and theoretically change least often).

When such a filter exists, it can be tempting for developers in a team to use the filtered version to do their work and make their changes. If it makes the CI faster, it can make their development build times faster too.


However, there's an unstated trade-off happening here:

Shortening the time to build on (as part of) the CI

creates 

A reliance on individual developers to check that their changes work correctly for the whole solution.


If you get it wrong, developers (mostly?) only work with a portion of the code base, and errors can be overlooked. These errors (including things like the whole solution no longer building) can then exist in the code base for unknown periods of time...



Saturday, March 23, 2024

Prompt engineering and search terms?

The prompts given to LLMs are often compared to the text we enter when searching the web.

Over time, people have learned to write search queries that will help them get better answers.

There's an argument that, over time, people will learn to write better prompts when interacting with LLMs.

However, over time, websites have also changed the way they expose, format, and provide the data that's fed into search engines so that it's more likely to be shown in the results for specific searches. (A crude description of SEO, I admit.)

What's the equivalent of providing data that will be used to train an LLM?

How do (or will) people make or format data to get the outcomes they want when it becomes part of the training data for the LLM?

Yes, this is like SEO for LLMs. 

LLM-Optimization? LLMO?

GPTO?

Tuesday, March 12, 2024

Reverse engineering a workshop

I've been working on writing a technical workshop. I've not done it before, and I couldn't find any good, simple guidelines for creating such a thing.

Having asked a few people who've delivered workshops in the past, the advice I got was very generic, and more about the workshops they'd run than about how to structure one or put one together.

So, rather than make it up, I started by trying to reverse engineer what good workshops do.

I want the workshop to be fully self-paced and self-guided. If it can be used in group or "instructor-led" scenarios, that'll be good, too, but I don't have any plans (yet) for this.

From looking at many workshops I've completed, and thinking back to those I've participated in in the past, I was struck by how many take the approach of showing a completed project and then simply listing the steps to create it. I often find this approach disappointing.
Yes, as a participant, I get the satisfaction of having created something, but it's not something new or necessarily specific to my needs. More importantly, the reasons for each individual step weren't explained, and the reason for taking one approach when others are available (or even what the other approaches are) wasn't given. This means that I don't get the wider knowledge I likely need to be successful.

Is the intention that, in completing a workshop, you gain the knowledge to go and build other things and the confidence to do so, having done it once before? It probably should be.

What I find many workshops end up doing (intentionally or otherwise) is providing a series of steps to recreate a piece of software and assuming that's enough for the participants to then go off and successfully create anything else.

Yes, saying, "Anyone can follow our workshop and create X", is great. But that's not the same as a workshop that teaches reusable skills and provides the knowledge needed to go and create your own software.

I want to create a workshop as a way of teaching skills and introducing a different way of thinking about a topic.


Aside: what's the difference between a workshop and a tutorial? I think it's that workshops are longer. Possibly a workshop is made up of a series of tutorials.


After initially struggling, I eventually concluded that a workshop is like teaching anything else. With clear learning goals and a structure, it's a lot easier to plan and create.

In this way, writing the workshop was a lot like writing a book. Only without an editor chasing me for progress ;)

More thoughts on this topic another day. Maybe.
Although, it has got me thinking about what I'll write next...


If you're interested in how my efforts turned out, you can see the results of them here.

Sunday, March 10, 2024

If you only write one document

If you only write one document (as part of developing software), make it the test plan.
Do it before you start writing the code.
This applies to new features and bug fixes.

  • This also becomes the spec.
  • It allows you to know when you've written the code to do everything it should.
  • It will make working as part of a team easier.
  • It will make code reviews faster and easier.
  • It will make testing faster.
  • It will make creating automated tests easier. (You can even write them before the code, TDD-style; see the sketch after these lists.)
  • It will make things easier when the customer/client/boss changes their mind or the requirements.
  • It will make future maintenance and changes faster.
  • It will make creating complete and accurate documentation easier.
  • Without this, you are more likely to be creating technical debt that you're unaware of.


If you don't:
  • There will be lots of guessing.
  • There will be lots of unvalidated assumptions.
  • There will be lots of repetition of effort. (Working out what needs to be done and how to do it.)
  • More effort will be wasted on things that aren't important.
  • Code reviews and testing of the code will be slower and involve more discussion and clarification.
  • You are more likely to ship with bugs.
  • Future changes (bug fixes or adding new features) will be slower.
  • Future changes are more likely to introduce other bugs or regressions.


Who said it doesn't matter

That someone admitted [bad practice the business would not like to admit to] is not the issue.

The problem isn't who said it, and blaming them isn't the point.

The problem is that it's part of the culture.

Trying to hide the issue or blaming someone for admitting it doesn't help. It encourages bad practice, which really only makes things worse.


 

Saturday, March 09, 2024

Detecting relevant content of interest

When AI is actually a bit dumb. 

Consider something I'm seeing a lot:
  • Content (e.g. news) app shows lots of different content.
  • You read an article in the app within a specific category.
  • Several hours later, an automated backend process tries to prompt re-engagement.
  • The backend process looks at what categories of content you've looked at recently.
  • It notices a recent article that is in that category and is getting lots of views.
  • The backend process then sends a notification about that content as you're likely to be interested. (It knows you read content in the category, and lots of people are reading this new article. It should be of interest.)
  • But the "assumption" was based on a lack of consideration for articles already read. (It was of interest, that's why you read it several hours ago.)
  • Enough people click on these notifications to make them look like they're doing a good job of promoting re-engagement.
  • People click on the notifications because they think the content might be related to, or follow on from, what they read earlier, not realizing that it is the exact same article.
  • Analytics only tracks openings from the notifications, not how much time is spent reading the article being promoted.
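
A hypothetical sketch of the backend logic being described, with every name made up; the commented-out Where clause is the check the system described above appears to be missing:

using System.Collections.Generic;
using System.Linq;

public record Article(string Id, string Category, int RecentViews);

public class ReEngagementNotifier
{
    // Pick something to notify the user about: a trending article in a category they read.
    public Article? PickArticleToPromote(
        IEnumerable<Article> trendingArticles,
        IReadOnlyCollection<string> categoriesUserReads,
        IReadOnlyCollection<string> articleIdsUserAlreadyRead)
    {
        return trendingArticles
            .Where(a => categoriesUserReads.Contains(a.Category))
            // .Where(a => !articleIdsUserAlreadyRead.Contains(a.Id))  // <- the missing check
            .OrderByDescending(a => a.RecentViews)
            .FirstOrDefault();
    }
}

One line of filtering would avoid notifying someone about the exact article they read a few hours earlier, but nothing in the analytics described above would ever tell you that line is missing.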

Analysis of analytics doesn't flag this, and the opacity of "the algorithm" doesn't make it clear this is what's happening.
All the while, many people get wise to these pointless notifications and turn them off, and so will miss something actually useful in the future.


I know there are more complex and varied scenarios than mentioned above, including how the above could be considered a successful "engagement" in other ways.

The main thing I take away from this is that I don't want to be creating software that tries to do something clever and useful without being able to accurately tell whether it is being successful and providing value.
Creating something that tries to be clever but comes across as stupid, because it doesn't make wise or appropriate use of the information it already has, does not excite or interest me.

Friday, March 08, 2024

Software development = basketball

This might be a leap but go with me for a minute.


"Lessons in basketball are lessons in life"

 

It's a cliched phrase that was drilled into me at basketball training camps and through basketball-related movies when I was young. We weren't just being encouraged (forced?) to learn lessons that would help us play better basketball, these lessons would help throughout our lives.


Thinking today about the importance of fundamentals, I wonder if the world would be a better place if more developers (had) played basketball.


Thursday, March 07, 2024

Sticktoitiveness and software development

I recently heard that a common character trait among developers is that they won't stop working on a problem until they've solved it.

I've always identified as having a similar but different trait: I won't give up when trying to solve a problem.

I came to this trait as a result of some of my first jobs in the industry. Due to the internet and world being as they were, and in combination with the companies, teams, and projects I was working on/with, there was no option to say "I don't know how" and give up. The problem needed to be solved, there was no one to ask who might know, and so I had to figure it out. That's what I was there for. That's what I was paid for.



Wednesday, March 06, 2024

Did this bug report get me banned from Visual Studio?

 As an avid user of Visual Studio and a developer of many Visual Studio extensions, I have a strong interest in enhancing the discoverability and user-friendliness of extensions. I was pleased to learn about the recent implementation of a requested feature and eagerly went to explore it.

Recently, I've also been exploring the use of WinGet DSC to configure a new laptop and have been experimenting with .vsconfig files to streamline the process.

During these investigations, I encountered an issue regarding the use of extensions containing "Extension Packs" (references to other extensions that should also be installed). Unfortunately, attempting to include them resulted in installation failures without any accompanying explanation for this limitation. Through a process of elimination, I confirmed that the inclusion of extension packs was the cause.

I submitted a bug report detailing my findings, which can be found [link to the original report, which was unfortunately removed]. Regrettably, I discovered that my access to the site has since been restricted, citing violations of the Community Code of Conduct.

Upon revisiting my initial post, I can only speculate that my direct and passionate writing style may have been misunderstood as impolite or disrespectful, but am unsure if this is the issue. I acknowledge the importance of maintaining politeness and respect in online interactions and am committed to improving in this regard.

I am left wondering if utilizing AI to refine my expressions to ensure a consistently polite and respectful tone may be a helpful approach moving forward. Perhaps this precautionary measure could prevent unintentional misinterpretations. 


Below is what I posted.

I share it here as an example (and warning?) to others. Be polite and respectful!


Themes of DDD North 2024

This last weekend, I was excited to get to speak at the DDD North conference again.

It was a one-day, five-track conference, so there was a lot going on and a lot of varied content.

Of the sessions I attended and the discussions I had with other attendees, I noticed lots of mentions of:

  • AI
  • Testing (in a positive light)
  • General good development practices, rather than talks about specific tools or technologies.


Yes, I recognise that the talk I gave about the importance of documentation and testing as we use more AI tooling while developing software likely skewed my thinking and what I was more inclined to notice. It was just nice to not be the only person saying positive things about testing software. (Although at least two speakers did make jokes about writing tests, so there's still a long way to go.)

The increased focus on generally applicable "good" practices was also good to see. While learning about a new framework or technology is useful in the short term or for specific tasks, spending time on things that will be valuable whatever the future holds feels like a better use of time.

While I'm still waiting for the official feedback from my talk (sorry, no video), upon reflection I'm glad I did it, and it was a good thing for me to do. I don't want to give a talk that anyone could give, so basing it on my experiences (& stories) is better than reading official descriptions of technologies, describing APIs, or showing trivial demos. I also want to do in-person events in ways that benefit from being "in person". This talk wouldn't have worked the same way as a recording, and I wouldn't have got as much from it either. If I could have just recorded myself talking about the subject and released it as audio or video, I'd have done that, but it wouldn't have been the same or as good. Although it might have been less work. Maybe I'll do that in the future.

 Here's me during the talk in front of a perfectly timed slide ;)

Me standing in front of a slide that says "I'm NOT perfect"



Sunday, March 03, 2024

Lack of nuance

 No nuance is almost always incorrect!


Yes, "almost" is very important in that statement.


If you get a response/answer/instruction without any acknowledgement of the nuances, you're almost certainly not getting the full picture. 

How do you know the importance of what is missing, if you don't know what's missing?



Wednesday, February 28, 2024

Reviewing documentation is like reviewing code

Two quick but key points.


1. What is it meant to do? And, where/what/who is it for?

You can't review code fully unless you know what it's meant to do. You might be able to point out if something doesn't compile or account for an edge case that hasn't been covered, but if you can't say if the code does what it was meant to do, you can't provide a useful review.

It's the same with documentation. If you don't know what the document was intended to achieve, communicate, or teach, how do you know it is correct, appropriate, or does what it's meant to?


2. Take advantage of tools before you involve people.

Use spelling and grammar checkers before asking someone to review it.

It's like asking for a code review on code that doesn't compile or meet coding standards.



Tuesday, February 27, 2024

"LGTM" isn't automatically a bad code review comment

What's the reason for doing a code review?


It's to check that the code does what it is supposed to and that the reviewer is happy to have it as part of the code base.


If the code changes look fine and the reviewer is happy, they shouldn't be expected or obliged to give (write) more feedback than is necessary.


What's not good is pointless comments or references to things that weren't changed as part of what is being reviewed.


A reviewer should not try to prove they've looked at the code by providing unnecessary or unnecessarily detailed feedback.

It's not a good use of time for the person doing the review.

Dealing with (responding to) those unnecessary comments is also not a good use of time for the person who requested the review.


Writing something, even if it's just a few characters (or an emoji), that indicates the approval wasn't fully automated or done by accident is fine by me.

Of course, if all someone ever did was comment on code they're reviewing this way, then that should raise different concerns.



Don't write more than you need to or for the sake of it.

Don't comment just to show you've looked at something.