Wednesday, April 24, 2024

Saturday, April 20, 2024

trivial feedback and micro-optimizations in code reviews

When these are given, is it because there is nothing more important to say?

Or, does such feedback come from people who don't understand the bigger picture or want a quick way to show they've looked at the code without having to take the time and mental effort to truly understand it?

Tuesday, April 16, 2024

Before commenting

Are you just repeating what's already been said before?

Where/when appropriate, are you correctly attributing others?

Does your comment add value?

Are you adding a new/different/unique perspective?

Have you read the other comments first?

Have you thought about who (possibly many people) will read the comment?

Monday, April 15, 2024

MAUI App Accelerator - milestone of note

MAUI App Accelerator - 10,000 installs

In the last few days MAUI App Accelerator passed ten thousand "official" unique installs.

This doesn't include the almost eight thousand installs that came via the MAUI Essentials extension pack. (Extensions installed via an extension pack are installed in a different way, which means they aren't counted in the individual extension's install count.)

While big numbers are nice (and apparently worth celebrating) I'm more interested in how it's used.
The numbers for that are lower, but still noteworthy. 

It's currently used to create about 25 new apps each day. Which is nice.
I'm also trying to improve my ability to use App Insights so I can get more, and better, statistics.


More updates are coming. Including the most potentially useful one...



Saturday, April 13, 2024

"I'm not smart enough to use this"

 I quite often use the phrase "I'm not smart enough to use this" when working with software tools.


This is actually code for one or more of the following:

  • This doesn't work the way I expect/want/need.
  • I'm not sure how to do/use this correctly.
  • I'm disappointed that this didn't stop me from doing the wrong thing.
  • I don't understand the error message (if any) that was displayed. 
  • Or the error message didn't help me understand what I should do.


Do your users/customers ever say similar things?

Would they tell you?

Are you set up to hear them?

And ready to hear this?


Or will you tell me that I'm "holding it wrong"?


Friday, April 12, 2024

Don't fix that bug...yet!

[Image: an AI-generated picture of a computer bug - obviously]

A bug is found.
A simple solution is identified and quickly implemented.
Sounds good. What's not to like?


There are more questions to ask before committing the fix to the code base. 
Maybe even before making the fix.

  • How did the code that required this fix get committed previously? 
  • Is it a failure in a process?
  • Have you fixed the underlying cause or just the symptoms?
  • Was something not known then that is now?
  • Could a test or process have found this bug before it entered the code base?
  • Are there other places in the code that have the same issue?
  • Are there places in the code that do something similar that may also be susceptible to the same (or a variation of the) issue?
  • How was the bug reported? Is there anything that can be done to make this easier/faster/better in the future?
  • How was the bug discovered? Can anything be done to make this easier, more reliable, or automated for other bugs in the future?
  • In addition to fixing this bug, what can be done to prevent similar bugs from happening in the future?
  • Is there anything relating to this issue that needs sharing among the team?



As a developer, your job isn't just to fix bugs; it's to ensure a high-quality code base that's as easy as possible (or practical) to maintain and that provides value to the people who use it.
At least, I hope it is.



Thursday, April 11, 2024

Formatting test code

Should all the rules for formatting and structuring code used in automated tests always be the same as those used in the production code?

Of course, the answer is "it depends!"


I prefer my test methods to be as complete as possible. I don't want too many details hidden in "helper" methods, as this means the details of what's being tested get spread out.


As a broad generalization, I may have two helpers called from a test.

One to create the System Under Test.

And, one for any advanced assertions. These are usually to wrap multiple checks against complex objects or collections. I'll typically create these to provide more detailed (& specific) information if an assertion fails. (e.g. "These string arrays don't match" isn't very helpful. "The strings at index 12 are of different lengths" helps me identify where the difference is and what the problem may be much faster.)
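
As a rough sketch of the kind of assertion helper I mean (the names are made up, and I'm assuming MSTest, but the idea applies to any test framework):

// (Assumes: using Microsoft.VisualStudio.TestTools.UnitTesting;)
// Hypothetical helper: wraps multiple checks and reports *where* two string
// arrays differ, rather than just that they differ.
private static void AssertStringArraysMatch(string[] expected, string[] actual)
{
    Assert.AreEqual(expected.Length, actual.Length, "The arrays are different lengths.");

    for (var i = 0; i < expected.Length; i++)
    {
        Assert.AreEqual(
            expected[i].Length,
            actual[i].Length,
            $"The strings at index {i} are of different lengths.");

        Assert.AreEqual(expected[i], actual[i], $"The strings at index {i} don't match.");
    }
}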


A side effect of this is that I may have lots of tests that call the same method. If the signature of that method needs to change, I *might* have to change every test that calls it.

I could move some of these calls into other methods called by the tests and then only have to change the helpers, but I find this makes the tests harder to read on their own.

Instead, where possible, I create an overload of the changed method that uses the old signature, and which calls the new one.
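
As a rough sketch (with made-up names): if the helper that creates the System Under Test gains a new parameter, the old signature stays as a thin overload:

// Hypothetical example: the new signature adds a parameter...
private static ReportBuilder CreateSystemUnderTest(string culture, bool useCache)
{
    return new ReportBuilder(culture, useCache);
}

// ...and the old signature remains as an overload that calls the new one,
// so the existing tests don't need to change.
private static ReportBuilder CreateSystemUnderTest(string culture)
    => CreateSystemUnderTest(culture, useCache: false);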


If the old tests are still valid, we don't want to change them.

If the method signature has changed because of a new requirement, add new tests for the new requirements.

Tuesday, April 09, 2024

Varying "tech talks" based on time of day

I'll be speaking at DDD South West later this month.

I'm one of the last sessions of the day. It's a slot I've not had before.

In general, if talking as part of a larger event, I try to include something that links back to earlier talks in the day. Unless I'm first, obviously.

At the end of a day of talks, where most attendees will have already heard five other talks that day, I'm wondering about including something to draw together threads from the earlier sessions and provide a conclusion that also ties in with what I'm talking about. I have a few ideas...

I've seen someone do a wonderful job of this before, but it's not something I've ever heard mentioned in advice to (or books on) presenting... I guess if you're there, you'll see what I do.

Monday, April 08, 2024

Comments in code written by AI

The general "best-practice" guidance for code comments is that they should explain "Why the code is there, rather than what it does."

When code is generated by AI/LLMs (Copilot and the like) via a prompt (rather than line completions), it can be beneficial to include the command (prompt) provided to the "AI". This is useful because the generated code isn't always as thoroughly reviewed as code written by a person. There may be aspects of it that aren't fully understood. It's better to be honest about this.

What you don't want is to come to some code in the future that doesn't fully work as expected, not be able to work out what it does, not understand why it was written that way originally, and for Copilot's explanation of the code to not be able to adequately explain the original intent.

// Here's some code. I don't fully understand it, but it seems to work.
// It was generated from the prompt: "..."
// The purpose of this code is ...

No, you don't always need that first line.


Maybe xdoc comments should include different sections.

"Summary" can be a bit vague.

Maybe we should have (up to) 3 sections in the comments on a class or method (sketched after the list below):

  • Notes for maintainers
  • Notes for consumers
  • Original prompt
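
A rough sketch of what that might look like (the member and notes are made up, and as xdoc has no dedicated tags for these sections I'm just putting them in <remarks>):

/// <summary>
/// Converts imported order data into the format the report generator expects.
/// </summary>
/// <remarks>
/// Notes for maintainers: the date parsing deliberately ignores time zones.
/// Notes for consumers: call Validate() on the result before using it.
/// Original prompt: "..."
/// </remarks>
public ReportData ConvertImportedOrders(OrderData input)
{
    // ...
}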


Writing a comment like this may require some bravery the first few times, but it could be invaluable in the future.

Sunday, April 07, 2024

Reflecting on 1000+ blog posts

1200 posts, 1026 published, 172 drafts, 2 scheduled
These are the current numbers for my blog. (Well, at the time I first started drafting this. I expect it will be several days before it's posted and then they will be different.)

These are the numbers I care about.

The one most other people care about is 2,375,403. That's the number of views the articles have had.

But this isn't a post about statistics. This is a post about motivation and reward.


 

I started writing this blog for me.

That other people have read it and got something from it is a bonus.

If I were writing for other people, I would write about different topics, I would care about SEO and promotion, and I would have given up writing sooner.

I get lots of views each day on posts that I can't explain.

I know that most views of this blog come from "the long tail," and Google points people here because there is a lot of content. The fact that I've been posting for 17+ years also gives me a level of SEO credibility.

There have been periods where I have written very little. This is fine by me. By not forcing myself to publish on a particular schedule, the frequency of posting doesn't hold me back or force me to publish something for the sake of it.

I publish when and if I want to.

Some people need and/or benefit from forcing themselves to publish on a regular schedule. If that works for you, great. If it doesn't, that's okay, too.

Others might think a multi-month gap in posting is bad, but if that's what I want or need, it's okay. Over a long enough period, the gaps are lost in the overall volume of posts.

I'm only interested in writing things that don't already exist anywhere else. This probably holds me back from getting more views than if that were my goal, but it probably helps me show up in the long tail of niche searches.

And yet, some people still regularly show up and read everything I write. Thank you. I'm glad you find it interesting.

Will I keep writing here? I can't say for certain but I have no plans on stopping.


I'm only publishing this post because I thought I might find it useful to reflect on all that I've written, and 1000 posts felt like a milestone worth noting, even if not fully celebrating. Originally, I thought I'd want to write lots about this, but upon starting it feels a bit too "meta" and self-reflective. I don't know what the benefit is of looking at the numbers. What I find beneficial is doing the thinking to get my ideas in order such that they make sense when written down. That's, primarily, why I write. :)




Friday, April 05, 2024

Does code quality matter for AI generated code?

Code quality, and the use of conventions and standards to ensure readability, has long been considered important for the maintainability of code. But does it matter if "AI" is creating the code and can provide a more easily understandable description of it if we really need to read and understand it?

If we get good enough at defining/describing what the code should do, let "AI" create that code, and then we verify that it does do what it's supposed to do, does it matter how the code does whatever it does, or what the code looks like?

Probably not.

My first thought as a counterpoint to this was about the performance of the code. But that's easy to address with "AI":

"CoPilot, do the following:

- Create a benchmark test for the current code.

- Make the code execute faster while ensuring all the tests still pass.

- Create a new benchmark test for the time the code now takes.

- Report how much time is saved by the new version of the code.

- Report how much money that time-saving saves or makes for the business.

- Send details of the financial benefit to my boss."
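
For reference, the kind of benchmark test that first step refers to could look something like this (a sketch only; I'm assuming BenchmarkDotNet, and the code being measured is a made-up stand-in):

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Made-up stand-in for "the current code".
public class PriceCalculator
{
    public decimal TotalFor(int items)
    {
        decimal total = 0;
        for (var i = 0; i < items; i++) { total += 9.99m; }
        return total;
    }
}

public class PriceCalculatorBenchmarks
{
    private readonly PriceCalculator calculator = new();

    // Baseline measurement of the existing implementation, so any "faster"
    // version generated later can be compared against it.
    [Benchmark(Baseline = true)]
    public decimal Current() => calculator.TotalFor(10_000);
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<PriceCalculatorBenchmarks>();
}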


Thursday, April 04, 2024

Where does performance matter?

Performance matters.

Sometimes.

In some cases.

It's really easy to get distracted by focusing on code performance.

It's easy to spend far too much time debating how to write code that executes a few milliseconds faster.

How do you determine/decide/measure whether it's worth discussing/debating/changing some code if the time spent thinking about, discussing, and then changing that code takes much more time than will be saved by the slightly more performant code?

Obviously, this depends on the code, where it runs, for how long, and how often it runs.


Is it worth a couple of hours of a developer's time considering (and possibly making) changes that may only save each user a couple of seconds over the entire time they spend using the software?
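
As a back-of-envelope illustration (with made-up numbers): two hours of developer time is 7,200 seconds. If the change saves each user 2 seconds in total and there are 1,000 users, that's 2,000 seconds saved - less than the time spent making the change. With 100,000 users it's 200,000 seconds, and the trade-off looks very different.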


What are you optimizing for?

How do you ensure developers are spending time focusing on what matters?

The performance of small pieces of code can be easy to measure.
The real productivity of developers is much harder to measure.


How do you balance getting people to focus on the hard things (which may also be difficult to quantify) against the things that are easy to review and discuss (which they're often drawn to, and which can look important from the outside) but don't actually "move the needle" in terms of shipping value or making the code easier to work with?

Wednesday, April 03, 2024

Thought vs experiment

Scenario 1: Consider the options and assess each one's impact on usage before shipping the best idea.
Scenario 2: Ship something (or multiple things) and see what the response is. Then iterate and repeat.

Both scenarios can lead to the same result (in about the same amount of time).

Neither is necessarily better than the other. However, it doesn't often feel that way.

Always using the same one isn't the best approach.


#NotesToMyself


Hat Tip: David Kadavy

Monday, April 01, 2024

The Visual Studio UI Refresh has made me more productive

Visual Studio is having a UI refresh. In part, this is to make it more accessible.

I think this is a very good thing. 

If you want to give feedback on another possible accessibility improvement, add your support, comments, and thoughts here.

Anyway, back to the current changes.

They include increasing the spacing between items in the menu.

[Image: two screenshots of the main toolbar in Visual Studio in the dark theme - the top shows Visual Studio today; the bottom is a mockup of the toolbar with more spacing, a bit wider and less crowded]

There are some objections to this as it means that fewer items can be displayed at once.

Instead of complaining, I took this as an opportunity to revisit what I have displayed in the toolbar in my VS instances.

I used to have a lot there. 

I knew that some of those things I didn't need and, in some cases, had never used. I just didn't want to go to the trouble of customising them.

"If they're there by default, it must be for a reason, right?" Or so I thought.

A better question is "Are they there for reasons I have?" In many cases, they weren't.

So I went through and spent (what turned out only to be) a few minutes customising the toolbars so they only contained (showed) the items I wanted, needed and used.

That was several weeks ago, and it has been a massive improvement.

  • I feel I'm working a bit faster.
  • I'm not spending time looking for the things I need.
  • I'm not distracted by things I don't need or don't recognise.
  • I feel encouraged and empowered as I've made my work environment more productive.


A system change to improve things for others encouraged me to improve things for myself. I see that as a win-win.



when the developers no longer seem intelligent

There are companies where the managers don't know what the developers do, beyond that they have some special knowledge that makes the computers do what the business needs. Sometimes, these managers ask the developers a question, and they get a technical answer they don't understand.
Very soon (maybe already), the developers will be using AI to do some of the technical work the company wants/needs.
Eventually, the AI will do something unexpected, and the business managers will want to know why. The developers will not be able to explain.
The managers will want a guarantee that the bad or unexpected thing will not happen again, but the developers will not be able to give one.
It's the nature of the non-deterministic AIs that are now being built.
This may be ok.

A possible takeaway is to notice that the appearance of intelligence (by giving a technical-sounding answer that the listener doesn't really understand) isn't going to be enough.

If you can't explain what you're doing now, how will you explain what AI is doing and that you can't guarantee what it will do?