Be more emotional.
But not all the time. Use your intelligence to decide when. ;)
When these are given, is it because there is nothing more important to say?
Or, does such feedback come from people who don't understand the bigger picture or want a quick way to show they've looked at the code without having to take the time and mental effort to truly understand it?
Are you just repeating what's already been said before?
Where/when appropriate, are you correctly attributing others?
Does your comment add value?
Are you adding a new/different/unique perspective?
Have you read the other comments first?
Have you thought about who (possibly many people) will read the comment?
In the last few days MAUI App Accelerator passed ten thousand "official" unique installs.
This doesn't include the almost eight thousand installs that came via the MAUI Essentials extension pack. (Extensions included in an extension pack are installed in a different way, which means those installs aren't counted in the individual extension's install numbers.)
While big numbers are nice (and apparently worth celebrating) I'm more interested in how it's used.
The numbers for that are lower, but still noteworthy.
It's currently used to create about 25 new apps each day. Which is nice.
I'm also trying to improve my ability to use App Insights so I can get other and better statistics too.
More updates are coming. Including the most potentially useful one...
I quite often use the phrase "I'm not smart enough to use this" when working with software tools.
This is actually code for one or more of the following:
Do your users/customers ever say similar things?
Would they tell you?
Are you set up to hear them?
And ready to hear this?
Or will you tell me that I'm "holding it wrong"?
Should all the rules for formatting and structuring code used in automated tests always be the same as those used in the production code?
Of course, the answer is "it depends!"
I prefer my test methods to be as complete as possible. I don't want too many details hidden in "helper" methods, as this means the details of what's being tested get spread out.
As a broad generalization, I may have two helpers called from a test.
One to create the System Under Test.
And, one for any advanced assertions. These are usually to wrap multiple checks against complex objects or collections. I'll typically create these to provide more detailed (& specific) information if an assertion fails. (e.g. "These string arrays don't match" isn't very helpful. "The strings at index 12 are of different lengths" helps me identify where the difference is and what the problem may be much faster.)
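As a rough sketch of the kind of assertion helper I mean (the names here, and the MSTest-style Assert calls, are illustrative rather than from a real project):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class CustomAssert
{
    // Fail with a message that identifies *where* two string arrays differ,
    // not just *that* they differ.
    public static void StringArraysMatch(string[] expected, string[] actual)
    {
        if (expected.Length != actual.Length)
        {
            Assert.Fail($"Expected {expected.Length} strings but found {actual.Length}.");
        }

        for (var i = 0; i < expected.Length; i++)
        {
            if (expected[i].Length != actual[i].Length)
            {
                Assert.Fail($"The strings at index {i} are of different lengths.");
            }

            Assert.AreEqual(expected[i], actual[i], $"The strings at index {i} don't match.");
        }
    }
}
```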
A side-effect of this is that I may have lots of tests that call the same method. If the signature of that method needs to change, I *might* have to change it everywhere (all the tests) that call that method.
I could move some of these calls into other methods called by the tests and then only have to change the helpers, but I find this makes the tests harder to read on their own.
Instead, where possible, I create an overload of the changed method that uses the old signature, and which calls the new one.
If the old tests are still valid, we don't want to change them.
If the method signature has changed because of a new requirement, add new tests for the new requirements.
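A hypothetical sketch of what that looks like in practice (WidgetProcessor and the retryCount parameter are made up for illustration):

```csharp
// Hypothetical System Under Test.
public class WidgetProcessor
{
    public WidgetProcessor(int retryCount) { }
}

// In the test class: the old signature stays, so all the existing tests
// compile unchanged. It simply forwards to the new version with the
// previous (default) behavior.
private static WidgetProcessor CreateSut()
    => CreateSut(retryCount: 3);

// The new signature, used by the tests for the new requirement.
private static WidgetProcessor CreateSut(int retryCount)
    => new WidgetProcessor(retryCount);
```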
I'll be speaking at DDD South West later this month.
I'm one of the last sessions of the day. It's a slot I've not had before.
In general, if talking as part of a larger event, I try to include something that links back to earlier talks in the day. Unless I'm first, obviously.
At the end of a day of talks, where most attendees will have already heard five other talks that day, I'm wondering about including something to draw together threads from the earlier sessions and provide a conclusion that also ties in with what I'm talking about. I have a few ideas...
I've seen someone do a wonderful job of this before, but it's not something I've ever heard mentioned in advice to (or books on) presenting... I guess if you're there, you'll see what I do.
The general "best-practice" guidance for code comments is that they should explain "Why the code is there, rather than what it does."
When code is generated by AI/LLMs (CoPilot and the like) via a prompt (rather than line completions), it can be beneficial to include the command (prompt) provided to the "AI". This is useful as the generated code isn't always as thoroughly reviewed as code written by a person. There may be aspects of it that aren't fully understood. It's better to be honest about this.
What you don't want is to come to some code in the future that doesn't fully work as expected, not be able to work out what it does, not understand why it was written that way originally, and find that Copilot's explanation of the code can't adequately convey the original intent.
// Here's some code. I don't fully understand it, but it seems to work.
// It was generated from the prompt: "..."
// The purpose of this code is ...
No, you don't always need that first line.
Maybe xdoc comments should include different sections.
"Summary" can be a bit vague.
Maybe we should have (up to) 3 sections in the comments on a class or method, along the lines of the example above.
Writing a comment like this may require some bravery the first few times you write such a thing, but it could be invaluable in the future.
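Purely as a hypothetical sketch of how those sections might look (C# allows custom tags in XML doc comments, but these tag names, the type, and the prompt are all made up; they're not an established convention):

```csharp
// A hypothetical example; the type, method, tags, and prompt are all made up.
public record ReportData(string[] Values);

public class ReportExporter
{
    /// <summary>Converts the report data into the legacy export format.</summary>
    /// <generatedFrom>Copilot prompt: "convert this data to the v1 export format"</generatedFrom>
    /// <confidence>Passes the current tests, but the escaping logic hasn't been fully reviewed.</confidence>
    public string ConvertToLegacyFormat(ReportData data)
        => string.Join(",", data.Values);
}
```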
These are the numbers I care about.
The one most other people care about is 2,375,403. That's the number of views the articles have had.
But this isn't a post about statistics. This is a post about motivation and reward.
I started writing this blog for me.
That other people have read it and got something from it is a bonus.
If I were writing for other people, I would write about different topics, I would care about SEO and promotion, and I would have given up writing sooner.
I get lots of views each day on posts that I can't explain.
I know that most views of this blog come from "the long tail," and Google points people here because there is a lot of content. The fact that I've been posting for 17+ years also gives me a level of SEO credibility.
There have been periods where I have written very little. This is fine by me. By not forcing myself to publish on a particular schedule, the frequency of posting doesn't hold me back or force me to publish something for the sake of it.
I publish when and if I want to.
Some people need and/or benefit from forcing themselves to publish on a regular schedule. If that works for you, great. If it doesn't, that's okay, too.
Others might think a multi-month gap in posting is bad, but if that's what I want or need, it's okay. Over a long enough period, the gaps are lost in the overall volume of posts.
I'm only interested in writing things that don't already exist anywhere else. This probably holds me back from getting more views than if that were my goal, but it probably helps me show up in the long tail of niche searches.
And yet, some people still regularly show up and read everything I write. Thank you. I'm glad you find it interesting.
Will I keep writing here? I can't say for certain but I have no plans on stopping.
I'm only publishing this post because I thought I might find it useful to reflect on all that I've written, and 1000 posts felt like a milestone worth noting, even if not fully celebrating. Originally, I thought I'd want to write lots about this, but upon starting it feels a bit too "meta" and self-reflective. I don't know what the benefit is of looking at the numbers. What I find beneficial is doing the thinking to get my ideas in order such that they make sense when written down. That's, primarily, why I write. :)
Code quality, and the use of conventions and standards to ensure readability, has long been considered important for the maintainability of code. But does it matter if "AI" is creating the code and can provide an easily understandable description of it when we really need to read and understand it?
If we get good enough at defining/describing what the code should do, let "AI" create that code, and then we verify that it does do what it's supposed to do, does it matter how the code does whatever it does, or what the code looks like?
Probably not.
My first thought as a counterpoint to this was about the performance of the code. But that's easy to address with "AI":
"CoPilot, do the following:
- Create a benchmark test for the current code.
- Make the code execute faster while still ensuring all the tests still pass successfully.
- Create a new benchmark test for the time the code now takes.
- Report how much time is saved by the new version of the code.
- Report how much money that time-saving saves or makes for the business.
- Send details of the financial benefit to my boss."
Performance matters.
Sometimes.
In some cases.
It's really easy to get distracted by focusing on code performance.
It's easy to spend far too much time debating how to write code that executes a few milliseconds faster.
How do you determine/decide/measure whether it's worth discussing/debating/changing some code if the time spent thinking about, discussing, and then changing that code takes much more time than will be saved by the slightly more performant code?
Obviously, this depends on the code, where it runs, for how long, and how often it runs.
Is it worth a couple of hours of developer's time considering (and possibly making) changes that may only save each user a couple of seconds over the entire time they spend using the software?
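As a toy example of the arithmetic (all the numbers here are made up):

```csharp
// Two hours of developer time spent on the optimization...
var developerTimeSpent = TimeSpan.FromHours(2);

// ...versus a total saving of two seconds per user.
var savingPerUser = TimeSpan.FromSeconds(2);

// Raw break-even point, ignoring whose time costs more:
var usersToBreakEven = developerTimeSpent.TotalSeconds / savingPerUser.TotalSeconds;

Console.WriteLine(usersToBreakEven); // 3600 users before any time is really "saved"
```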
What are you optimizing for?
How do you ensure developers are spending time focusing on what matters?
The performance of small pieces of code can be easy to measure.
The real productivity of developers is much harder to measure.
How do you balance getting people to focus on the hard things (which may also be difficult to quantify) against the things that are easy to review and discuss (which they're often drawn to, and which can look important from the outside) but that don't actually "move the needle" in terms of shipping value or making the code easier to work with?
Visual Studio is having a UI refresh. In part, this is to make it more accessible.
I think this is a very good thing.
If you want to give feedback on another possible accessibility improvement, add your support, comments, and thoughts here.
Anyway, back to the current changes.
They include increasing the spacing between items in the menu.
There are some objections to this as it means that fewer items can be displayed at once.
Instead of complaining, I took this as an opportunity to revisit what I have displayed in the toolbar in my VS instances.
I used to have a lot there.
I knew that I didn't need some of those things and, in some cases, had never used them. I just didn't want to go to the trouble of customising them.
"If they're there by default, it must be for a reason, right?" Or so I thought.
A better question is "Are they there for reasons I have?" In many cases, they weren't.
So I went through and spent (what turned out only to be) a few minutes customising the toolbars so they only contained (showed) the items I wanted, needed and used.
That was several weeks ago, and it has been a massive improvement.
A system change to improve things for others encouraged me to improve things for myself. I see that as a win-win.
I found this still in the printer when I bought my early morning coffee the other day.
Certain personality types are attracted to software development.
The logic and absolute certainty appeals to those people.
AI removes that certainty. It's non-deterministic.
Will this put off some people who like (or need?) the absolutes?
Might it attract different people, with other interests and personality types?
Admit it, you thought I was going to say something about Testing. Didn't you?
While testing is super important and should be part of every PR, the most important question to ask when working on something as part of a team is:
Does this change include anything you need to communicate to the rest of the team?
I've added this to PR templates and have been frustrated when "circumstances" have prevented me from doing so.
Why does this matter?
Not sharing important knowledge and information creates frustration, resentment, wasted effort, and more.
Say you have a large complex solution.
Building everything takes longer than you'd like. Especially when building as part of CI checks, such as for a gated merge.
To make the CI build faster, it can be tempting to have a filtered version of the solution that only builds 'the most important parts' or filters out some things that take the most time (and theoretically change least often).
When such a filter exists, it can be tempting for developers in a team to use the filtered version to do their work and make their changes. If it makes the CI faster, it can make their development build times faster too.
However, there's an unstated trade-off happening here:
Shortening the build time as part of the CI
creates
a reliance on individual developers to check that their changes work correctly for the whole solution.
If you get this wrong, developers (mostly?) only work with a portion of the code base, and errors can be overlooked. These errors (including things like the whole solution no longer building) can then exist in the code base for unknown periods of time...
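(In the Visual Studio world, this kind of filtering is often done with a solution filter (.slnf) file. A minimal sketch, with made-up names:)

```json
{
  "solution": {
    "path": "BigSolution.sln",
    "projects": [
      "src\\Core\\Core.csproj",
      "src\\Api\\Api.csproj"
    ]
  }
}
```

The slow, "rarely changing" projects simply aren't listed, so they aren't built, and any errors in them (or caused in them by your changes) aren't seen.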
The prompts given to LLMs are often compared to the text we can enter when searching the web.
Over time, people have learned to write search queries that will help them get better answers.
There's an argument that, over time, people will learn to write better prompts when interacting with LLMs.
However, over time, websites have also changed the way they expose, format, and provide data so that it's more likely to be what search engines show in the results for specific searches. (A crude description of SEO, I admit.)
What's the equivalent of providing data that will be used to train an LLM?
How do (or will) people make or format data to get the outcomes they want when it becomes part of the training data for the LLM?
Yes, this is like SEO for LLMs.
LLM-Optimization? LLMO?
GPTO?
TLDR (for Clint 😜)
I've been working on writing a technical workshop. I've not done it before, and I couldn't find any good, simple guidelines for creating such a thing.
Having asked a few people who've delivered workshops in the past, the advice I got was very generic, and more about the workshops they'd proctored than about how to structure or put one together.
So, rather than make it up, I started by trying to reverse engineer what good workshops do.
I want the workshop to be fully self-paced and self-guided. If it can be used in group or "instructor-led" scenarios, that'll be good, too, but I don't have any plans (yet) for this.
From looking at many workshops I've completed, and thinking back to those I've participated in over the years, I was struck by how many take the approach of showing a completed project and then simply listing the steps to create it. I often find this approach disappointing.
Yes, as a participant, I get the satisfaction of having created something, but it's not something new or necessarily specific to my needs. More importantly, the reasons for each individual step aren't explained, and neither is the reason for taking one approach when others are available (or even what the other approaches are). This means I don't get the wider knowledge I likely need to be successful. Isn't the intention that in completing a workshop, you gain the knowledge to go and build other things, and the confidence to do so, having done it once before? It probably should be.
What I find many workshops end up doing (intentionally or otherwise) is providing a series of steps to recreate a piece of software and assuming that's enough for the participants to then go off and successfully create anything else.
Yes, saying, "Anyone can follow our workshop and create X", is great. But that's not the same as a workshop that teaches reusable skills and provides the knowledge needed to go and create your own software.
I want to create a workshop as a way of teaching skills and introducing a different way of thinking about a topic.
Aside: what's the difference between a workshop and a tutorial? I think it's that workshops are longer. Possibly a workshop is made up of a series of tutorials.
After initially struggling, I eventually concluded that a workshop is like teaching anything else. With clear learning goals and a structure, it's a lot easier to plan and create.
In this way, writing the workshop was a lot like writing a book. Only without an editor chasing me for progress ;)
More thoughts on this topic another day. Maybe.
Although, it has got me thinking about what I'll write next...
If you're interested in how my efforts turned out, you can see the results of them here.
That someone admitted [bad practice the business would not like to admit to] is not the issue.
The person who said it isn't the problem either.
The problem is the culture.
Trying to hide the issue, or blaming someone for admitting it, doesn't help. It encourages the bad practice to continue, which only makes things worse.
This might be a leap but go with me for a minute.
"Lessons in basketball are lessons in life"
It's a cliched phrase that was drilled into me at basketball training camps and through basketball-related movies when I was young. We weren't just being encouraged (forced?) to learn lessons that would help us play better basketball, these lessons would help throughout our lives.
Thinking today about the importance of fundamentals, I wonder if the world would be a better place if more developers (had) played basketball.
I recently heard that there is a common character trait among many developers in that they won't stop working on a problem until they've solved it.
I've always identified as having a similar but different trait: I won't give up when trying to solve a problem.
I came to this trait as a result of some of my first jobs in the industry. Due to the internet and world being as they were, and in combination with the companies, teams, and projects I was working on/with, there was no option to say "I don't know how" and give up. The problem needed to be solved, there was no one to ask who might know, and so I had to figure it out. That's what I was there for. That's what I was paid for.
As an avid user of Visual Studio and a developer of many Visual Studio extensions, I have a strong interest in enhancing the discoverability and user-friendliness of extensions. I was pleased to learn about the recent implementation of a requested feature and eagerly went to explore it.
Recently, I've also been exploring the use of WinGet DSC to configure a new laptop and have been experimenting with .vsconfig files to streamline the process.
During these investigations, I encountered an issue regarding the use of extensions containing "Extension Packs" (references to other extensions that should also be installed). Unfortunately, attempting to include them resulted in installation failures without any accompanying explanation for this limitation. Through a process of elimination, I confirmed that the inclusion of extension packs was the cause.
I submitted a bug report detailing my findings, which can be found [link to the original report, which was unfortunately removed]. Regrettably, I discovered that my access to the site has since been restricted, citing violations of the Community Code of Conduct.
Upon revisiting my initial post, I can only speculate that my direct and passionate writing style may have been misunderstood as impolite or disrespectful, but am unsure if this is the issue. I acknowledge the importance of maintaining politeness and respect in online interactions and am committed to improving in this regard.
I am left wondering if utilizing AI to refine my expressions to ensure a consistently polite and respectful tone may be a helpful approach moving forward. Perhaps this precautionary measure could prevent unintentional misinterpretations.
Below is what I posted.
I share it here as an example (and warning?) to others. Be polite and respectful!
This last weekend, I was excited to get to speak at the DDD North conference again.
As a one-day, five-track conference there was a lot going on and a lot of varied content.
Of the sessions I attended and the discussions I had with other attendees, I noticed lots of mentions of:
Yes, I recognise that the talk I gave about the importance of documentation and testing as we use more AI tooling while developing software likely skewed my thinking and what I was more inclined to notice. It was just nice to not be the only person saying positive things about testing software. (Although at least two speakers did make jokes about writing tests so there's still a long way to go.)
The increased focus on generally applicable "good" practices was also good to see. While learning about a new framework or technology is useful in the short term, or for specific tasks, spending time on things that will be valuable whatever the future holds feels like a better use of time.
While I'm still waiting for the official feedback from my talk (sorry, no video), upon reflection, I'm glad I did it, and it was a good thing for me to do. I don't want to give a talk that anyone could give, so basing it on my experiences (& stories) feels better than reading out official descriptions of technologies, describing APIs, or showing trivial demos. I also want to do in-person events in ways that benefit from being "in person". This talk wouldn't have worked the same way as a recording, and I wouldn't have got as much from it either. If I could have just recorded myself talking about the subject and released it as audio or video, I'd have done that, but it wouldn't have been the same or as good. Although it might have been less work. Maybe I'll do that in the future.
Here's me during the talk in front of a perfectly timed slide ;)
Yes, "almost" is very important in that statement.
If you get a response/answer/instruction without any acknowledgement of the nuances, you're almost certainly not getting the full picture.
How do you know the importance of what is missing, if you don't know what's missing?
Two quick, but key points.
1. What is it meant to do? And, where/what/who is it for?
You can't review code fully unless you know what it's meant to do. You might be able to point out if something doesn't compile or account for an edge case that hasn't been covered, but if you can't say if the code does what it was meant to do, you can't provide a useful review.
It's the same with documentation. If you don't know what the document was intended to achieve, communicate, or teach, how do you know it is correct, appropriate, or does what it's meant to?
2. Take advantage of tools before you involve people.
Use spelling and grammar checkers before asking someone to review a document.
It's like asking for a code review on code that doesn't compile or meet coding standards.
What's the reason for doing a code review?
It's to check that the code does what it is supposed to and that the reviewer is happy to have it as part of the code base.
If the code changes look fine and the reviewer is happy, they shouldn't be expected or obliged to give (write) more feedback than is necessary.
What's not good is pointless comments or references to things that weren't changed as part of what is being reviewed.
A reviewer should not try to prove they've looked at the code by providing unnecessary or unnecessarily detailed feedback.
It's not a good use of time for the person doing the review.
Dealing with (responding to) those unnecessary comments is also not a good use of the time for the person who requested the review.
Writing something, even if it's a few characters (or an emoji) that indicates that the approval wasn't fully automated or done by accident is fine by me.
Of course, if all someone ever did was comment on code they're reviewing this way then that should raise different concerns.
Don't write more than you need to or for the sake of it.
Don't comment just to show you've looked at something.
Is your software amazing?
Are there no usability issues?
Do the people using your software never have problems?
Are there "niggly" little issues that can be frustrating but have never been given enough priority to actually be fixed?
Do these things annoy, frustrate, disappoint, upset, or turn off users?
If your software (app/website/whatever) can't get the basics right, it might be tempting to "add AI" (in whatever form) to your app but the hype may not be worth it.
Yes, AI is powerful and can do amazing things, but if the people using your software see you failing to get the basics working correctly, how will they feel when you add more sophisticated features?
If you've demonstrated a failure to do simple things, will they trust you to do the complex things correctly?
Especially if you can't explain how the new feature works or directly control it?
I'm not saying don't use AI. I'm asking that if you demonstrate to your customers that you can't do simpler things without issue, should you really be doing more complex things?
If not, why not?
That the CI will (should) run them is not an excuse for you not to.
"I think it's right, but I haven't checked" is not professional or good enough. The same applies to linting or ensuring that code meets the defined standards and formats. You shouldn't be relying on something checking this after you've said you've finished.
You really should be using tooling that does this as you go.
Some test suites do take a very long time to run. Have you tried running only the tests related to the area you're working on?
If your test suite takes too long to run:
- Have you run the bits relevant to your changes?
- What work is being done (or planned) by you and your team to make the test suite run faster?
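With .NET tooling, for example, running just the tests for the area you're working on can be as simple as this (the namespace in the filter is a placeholder):

```
dotnet test --filter "FullyQualifiedName~MyApp.Parsing"
```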
Never one to be concerned about being controversial, here's why I don't think you should ever use the LangVersion of "latest" in your C# projects.
TLDR: "Latest" can mean different things on different machines and at different times. You avoid the potential for inconsistencies and unexpected behavior by not using it.
I've had the (let's call it) privilege of working as part of a certain distributed team on a large code base.
There was a global setting, hidden away somewhere, which set the LangVersion to "latest". This was mostly fine until one member of the "team" updated the code to use some language features in version 10 on the day it was released. They committed the code, and other team members pulled down the changes. But now the other team members (who hadn't yet updated their tooling; it was, after all, only release day) were getting errors and couldn't understand why the code was suddenly breaking.
The use of the new language features wasn't necessary, and the confusion and wasted time/effort trying to work out where the errors had come from could have been avoided if an actual version number had been set, rather than relying on a value that changes over time and from machine to machine. The rest of the team would still have had to update their tooling, but at least they would have gotten a useful error message.
And then, there was another project I worked on.
We were using an open-source library, and a "clever" MSBuild task included in the package was doing a check on the LangVersion. At some point in the past, the maintainers had needed a workaround for a specific version number and had also applied that workaround to "latest". In time, the underlying issue was fixed, so when the package was used in a project with the "latest" LangVersion, it tried to apply a fix that was no longer necessary and actually caused an exception. Yes, this was a pain to resolve. And yes, by "pain", I mean a long, frustrating waste of time and effort.
"Latest" may be useful if you're doing cutting-edge work that is only supported by that LangVersion setting. For all other cases, you should specify an actual number.
Of course, you're free to ignore these arguments. If you do, I'd love to know why.
I often hear that XAML has problems, so you should use C# instead.
If you really, strongly object to using XAML and can't be persuaded otherwise, feel free to only use C#. I'm not going to stop you. I'm more interested in you producing high-quality software that you can maintain, that provides value to people, and that they can easily use.
If you're not going to make a purely emotional decision, read on.
With the social media app/platform space as it now is, I don't have a single place where it feels right to post my shorter thoughts and ideas.
These thoughts are also often longer than the default character limits of most platforms.
So, I'm experimenting with posting them here and tagging them "quickie". No promises I'll keep it up, but this is my site, so I thought it was a suitable place to experiment.
If you work with XAML (or you've tried to) you might think of it as being verbose, long, and hard to maintain.
For many people, AI is a solution to many technical problems, working on the basis that if you can describe what you want, or start writing it, the "AI" can generate what it thinks is most likely what you want.
This is great if you're doing something that has been done many times before or you want to do something new in a very similar way to what already exists.
If, however, you aren't keen on what already exists, or have complaints or concerns about the way things are currently done, this isn't going to help. "AI" works by looking at existing data (mostly the contents of the internet, for general-purpose and public AI services) and creating based on that.
But, to come back to XAML: the criticisms aren't so much about writing the code; they're more focused on reading and modifying it. Having the code written more quickly doesn't address the problems of reading, understanding, and maintaining it: tasks that are widely accepted to be where most developers spend most of their time.
If you want XAML that is easier to read, understand, maintain, and modify without unexpected consequences, you need to think about writing it differently.
"AI" is great at giving you things similar what already exists, but if you don't want more of the same (and when it comes to XAML I don't think you do) now might be the time to start thinking about writing XAML in a different way....
Recently, I saw a display inside a cinema that said a film was starting in "1 hour and 60 minutes".
Be careful how you round times and parts of times before displaying them!
As 1 hour is 60 minutes, why didn't the display say the film starts in "2 hours"?
I can't say for sure, but I'd guess that the actual time until the start was 1 hour, 59 minutes, and more than 30 seconds.
Because the difference between now and the start time was less than 2 hours, the hours value was reported as 1. With the hours dealt with, the code moved on to calculating the number of minutes. Because seconds weren't shown, the number of minutes was rounded to the nearest whole value. As there were more than 30 seconds remaining, this was rounded up to 60.
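A minimal sketch of how this kind of bug happens (assuming the display logic worked something like this), and one way to avoid it:

```csharp
// 1 hour, 59 minutes, and 31 seconds until the film starts.
var untilStart = new TimeSpan(hours: 1, minutes: 59, seconds: 31);

// The buggy approach: truncate the hours, then round the remaining minutes.
var hours = (int)untilStart.TotalHours;                 // truncates to 1
var remainder = untilStart - TimeSpan.FromHours(hours); // 00:59:31
var minutes = (int)Math.Round(remainder.TotalMinutes);  // rounds up to 60!

Console.WriteLine($"{hours} hour and {minutes} minutes"); // "1 hour and 60 minutes"

// One fix: round the whole TimeSpan to the nearest minute first,
// then take both parts from the already-rounded value.
var rounded = TimeSpan.FromMinutes(Math.Round(untilStart.TotalMinutes));
Console.WriteLine($"{(int)rounded.TotalHours} hours and {rounded.Minutes} minutes"); // "2 hours and 0 minutes"
```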
Ok, this is an edge case that happens in a very small window of time. Most people (customers & potential customers) will never see this.
Why does this matter?
Why care about such potentially trivial issues?
If your customers can't trust the simple things they can see, why should they trust you with things they can't see?
If your app is slow, unresponsive, crashes, displays duplicate information, displays obviously incorrect information, has usability or accessibility issues, or any of dozens more things, why should people trust you with things like:
If the visible parts don't show care and attention to detail, why should the people using your software assume that you've spent more time and focus on the things people can't see?
So, you're reviewing some code.
Here are some things you shouldn't be doing:
Instead, here's what you should be doing:
These are not complete lists, but hopefully, they are still useful.
This is a great book. I heartily recommend it. However, I don't think it's interesting.
I don't think it's interesting because I don't like that word.
"Interesting" is vague.
"Interesting" is meaningless.
"Interesting" is a word people use when they don't know what else to say.
Try it. Next time someone tells you about something "interesting", ask them what made it interesting or why they thought it was interesting.
Or consider when someone tells you they "have something 'interesting' to tell you." Is it really interesting? Or is it gossip? Or is it something they don't have a better description for?
"Interesting" is unspecific and unconsidered. Not the book, the concept.
That's not what I want to be, or do, or be thought of.
Here are some much better adjectives (in no particular order):
challenging
inspiring
thought-provoking
troubling
upsetting
motivating
fascinating
amusing
entertaining
captivating
encouraging
intriguing
inviting
gripping
impressive
restorative
engrossing
enchanting
enthralling
spellbinding
diverting
attractive
rousing
persuasive
provocative
stimulating
stirring
exceptional
exciting
unforgettable
Don't those sound more appealing?
It's the first three items on that list (challenging, inspiring, thought-provoking) that I think apply to that book, too.
TLDR: If you want to prompt "the user" to do something, let them get value from what you provide first.
Mobile apps want ratings and reviews. These are also valuable for open-source projects. This applies in a marketplace/store or as libraries/bundles/packages for download.
Increasingly, in the open-source world, the issue of sustainability is also a consideration.
Among other things that contribute to sustainability is financial support.
I use a variety of approaches across my open-source projects to try and encourage such compensation via GitHub Sponsors.
It doesn't make a massive impact on my finances but it has been enough to make a difference and was also the only way I could afford an unexpected tax bill when I was out of work during the pandemic!
Regardless of the amount or duration, I will always be grateful to those who have sponsored (and still are sponsoring) me. Thank YOU!
Yes, I ignore the amount and duration. All my sponsors get access to the same benefits, be it a one-off amount of $1 or recurring amounts of much more. (I previously did more analysis of these differing durations and amounts.)
Anyway, one of the approaches I use to encourage people to consider becoming a sponsor is a message displayed in the output window asking them to. (If they do, I also tell them how to make such a message not appear. No phoning home. No personal data is collected. 😉)
This approach isn't appreciated by everyone, but it seems to be very effective, as I have more people sponsoring me than most other casual contributors to open-source projects that I'm aware of.
My approach to monetizing my projects is very much inspired by donationware, and most software made available in this way doesn't hold back from asking for donations.
I had been following this approach but wondered if a different technique might be more effective.
In the latest update to my C# Inline Color Vizualizer extension, I changed the behavior determining how visible the encouragement to become a sponsor is.
Instead of always displaying the message to the user, the extension loads it in the background but only actively shows it once it's at least 7 days since the extension was first used, and it has been used to annotate at least 100 files. The goal is to let the person see the benefits of using the tool before asking anything of them. The theory is that they will then be more inclined to respond to the message.
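In pseudo-real terms, the check is something like this (a simplified sketch; the names and exact thresholds are illustrative rather than the extension's actual code):

```csharp
private const int MinimumDaysInstalled = 7;
private const int MinimumFilesAnnotated = 100;

// Only ask for sponsorship once the person has had a real chance
// to get value from the tool.
public bool ShouldShowSponsorshipMessage(DateTime firstUsedUtc, int filesAnnotated)
    => (DateTime.UtcNow - firstUsedUtc).TotalDays >= MinimumDaysInstalled
       && filesAnnotated >= MinimumFilesAnnotated;
```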
I must have said and heard the same advice applied to mobile notifications hundreds of times but had somehow overlooked its wider application.
If you're not familiar with the extension, it shows an inline preview of the colors used in your C# code. Why not give it a try?
I plan to add similar changes of behavior to my other Visual Studio extensions as is appropriate to the way they work and what they do. I'll share details here if it proves effective or I learn anything insightful.
It's not only about the money I get. I like that this encourages more people to think about the sustainability of their tools. I think such messages do this, and the fact that most of my sponsors aren't sponsoring other people (yet) makes me hope that I'm just the first of many, or that they find other ways to support the software they use (if not rely on).