Thursday, August 07, 2025

Miscellaneous AI-related questions

Question mark with AI-sparkle

No answers. "Just", questions I'm aware of and considering:

  • If working with AI means communicating with machines more like we do with other humans, how do we avoid things also going back the other way and treating people more like machines?
  • Are "agents the future of [all] work"? And, if not all, how to identify the work that can change or be replaced?
  • If "AI is only as good as your data", why isn't there as much effort being put into ensuring the quality and accuracy of the data as there is hype about AI?
  • At what point does AI not need human oversight? All the education highlights human oversight, but the futurists don't include it...
  • What is in the middle-ground between traditional GUIs and "just" a text box?
  • As feedback is highlighted as being essential when developing tools with AI, is there a way for feedback from a tool to be passed back to those creating the underlying models?
  • If there's a GUI for something, does it automatically need (and benefit from) an equivalent interface that's accessible via command line, API, and Agent/MCP?
  • As the speed/rate of change is a common complaint among people of all types, doing very different tasks, how do you factor this in when introducing AI-powered tools?
  • If people are generally reluctant to read instructions, why will they happily read the text-based response from an AI tool telling them how to do something?
  • Asking good questions is hard. How people ask questions of AI-powered tools greatly impacts the quality of results. In training people to use AI, are they also being taught to ask good questions?





Wednesday, July 30, 2025

Windows Apps London (formerly Windows Phone User Group) - it was good while it lasted

TLDR: User groups were great. I miss organising and going to them. Maybe I should revisit my plans about this...

Average Rating 4.8 (from 275 reviews)

I've organized over 100 user group events / meetups. I've also attended and spoken at many others.

The one I had the most to do with was the Windows Phone User Group, which later evolved/became Windows Apps London.

I "ran" this for as long as it existed. It all seems a very long time ago, but as I finish shutting up the virtual shop on the group (Stop paying for things--like domains--that I really don't need and no one looks at) I wanted to take a moment to reflect.

Here are a few of many pieces of similar feedback.


“Great to hear dev thoughts & experiences & see some interesting apps demo’ed”

“Met lots of cool people, and was well worth the trip.”

“Great bunch of people. Lots of enthusiasm and the usual witty banter”

“Meeting was great – fantastic bunch of WP7 developers, designers and officianados!”

“I really enjoyed everyone’s demos; even the games, which is not my domain, provided some interesting info about phone dev.”

“Thanks very much for putting on the event. I found it really useful as well as wonderfully motivating.”

“A great opportunity to meet and socialise with other developers.”

“We had a great time, really informative stuff, we learnt several things both from the talks and from general networking that we’re going to apply to our current and forthcoming projects.”

“Really appreciate the effort put into the event, great to meet everyone”

“It was great being able to network with intellectual individuals”

“It was really awesome. I now have the knowledge to create a better app.”

“It was very informative and enjoyable”

“Fantastic group. Always learn new things and pick up information”

“I like the format where someone knowledgeable [teaches] us something we didn’t know already”

“One of the most interesting meetings I’ve been to”

“Wealth of knowledge to gain, recommended this to all developers”

“Good format, very useful.”

“Great event! Enjoyed the learning and the interaction with the other participants.”

“Thoroughly enjoyed the evening. Learnt a lot and looking forward to the next one.”

“Lots of fun, excellent talk and great people”

“I think this is a great event necessary for the platform. The atmosphere was good and enabling for sharing ideas.”

“Entertaining and inspiring talk”

“Really enjoyed the format! Great to hear everyone’s thoughts.”

“My mind is pretty blown away right now, very interesting evening with so much to takeaway and think about”

“Probably one of the best groups. Each meeting is useful for learning new things and getting a different point of view”

“Would love to see some more of these events”

“Fantastic presentation. Really helpful to pick up new tips.”

“Interesting conversation was flowing freely around the table. A really good night.”

“Great food, company, conversation and laughs!”



I couldn't let all the history of something that was a big part of my life for a very long time go away completely, so I've created an archive of the website at https://mrlacey.github.io/winappsldn/
Not that I really expect this to be of much interest or use to anyone any more, but it felt too important (to me) to just let it disappear.



Monday, July 28, 2025

Why developers should be excited about implementing migrations


I often hear that developers aren't keen on doing work to upgrade the frameworks/platforms/tools that they use (or the software they're building is using). It's not writing code, and so it's not considered "real development" work.

I think that doing the work to support upgrades or migrations is one of the most important and valuable things a developer can do:
  • It's often a simple way to fix potential security vulnerabilities.
  • It normally brings performance improvements.
  • Updates bring new capabilities and options for things to add to the software.
  • Keeping up with the latest versions makes future updates/migrations easier.
  • It can help you learn and get hands-on experience with the latest technologies.
  • It can take you into large, varied, or obscure parts of the code base, helping you to learn more about the code you're working on.
  • It can make things easier for other developers in the team.

So, a high-impact task that helps you learn while supporting the team by doing things they don't want to do? Sounds like a great thing to prioritise if you get the chance.

Friday, July 25, 2025

How quickly can a Windows application launch?

Following on from my recent post asking How much does start-up time matter when choosing a framework? I've now published some figures and the code I used.

With some minor tweaks, I now have an updated graph of the data:


This graph is based on the release builds of the simplest, most minimal apps I could reasonably come up with, built to be as close to identical as possible across all the frameworks.
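For reference, the "finished loading" measurement needs very little code inside each app. Here's a minimal sketch of the idea (not the actual code from the repo; the helper name is invented):

using System;
using System.Diagnostics;

// A minimal sketch of in-app launch timing: call ReportLoaded() from whatever
// event indicates the first window has rendered (e.g. Loaded in WPF/WinUI/MAUI,
// or Shown in WinForms).
public static class LaunchTiming
{
    public static void ReportLoaded()
    {
        // Time between the OS starting the process and the app reporting "loaded".
        var startedAt = Process.GetCurrentProcess().StartTime;
        var elapsed = DateTime.Now - startedAt;

        Debug.WriteLine($"Finished loading {elapsed.TotalMilliseconds:F0} ms after process start.");
    }
}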

The code and more details can be found at https://github.com/mrlacey/WinAppLaunchCompare

From that repo:

Miscellaneous observations:

  • As expected, WinForms was super fast.
  • WPF (both .NET and Framework versions) was surprisingly (disappointingly) slow.
  • The difference between MAUI and WinUI is surprising given MAUI uses WinUI to create the Windows version of apps. I expected these to be closer.
  • Of the cross-platform options (and WinUI), the difference is basically irrelevant. Having clicked the button to launch the apps many, many times, I didn't perceive any real difference, never feeling that one was slower or faster than the others.

My takeaways:

  • For choosing a cross-platform framework, there's hardly anything in it in terms of the time it takes to launch the apps.
  • I wouldn't base a decision to use a particular framework on these (or similar) tests/results.
  • I also looked at the time until the App class was loaded. This varied but didn't seem to be related to the overall time taken.
  • Performing tests like these can easily become an infinite rabbit hole. There are always potential tweaks and optimizations that could be done. If you have such an interest, please go ahead.


Feel free to experiment with this code as you wish, and suggest ways it could be improved or ways to address anything that's artificially slowing down a version of an app.



What makes a good teacher of technology?

I guess it comes down to different people wanting different things. Or, maybe, I'm looking for different things than the majority of people are looking for.

This is on my mind because I'm trying to learn a variety of new technologies at the moment, and I'm also writing things to try and find a new way of teaching something I'm very familiar with.


Here's what I've seen a lot lately:

Teachers (either qualified educational professionals or people whose job it is to teach and who are recognised as experts at teaching new technology) will explain that there are different options or ways of doing things, and then say, "I always use X, so you should too."


Is it that the people learning are just looking for a seemingly authoritative answer, and so are happy to use/do X because they've been told that's ok?

Or, is it that explaining the nuanced differences between options and where/why/how you'd use each and what each is intended/best for is much more difficult and so people don't (or can't) try?

Or a combination of the two?


hammer

I find it a bit like being told, "There are lots of potential tools in the toolbox, but I always use a hammer, so you should just use a hammer."


Simple answers are attractive, but they don't provide the knowledge needed to use something other than a hammer, or even to tell when using a hammer is not appropriate.

I want that deeper knowledge.

Maybe others are happy with a simple solution. When their hammer stops working or can't be used, they'll come back with questions about alternatives. 

I prefer to know in advance if I'm going to be using the wrong tool or what the potential negative consequences of what I'm planning will be.


I want all the learning up front. If you're selling education (or views), then holding back some of the knowledge until later has a benefit for you. But, who is the lesson for?




Monday, July 21, 2025

what "everyone" gets wrong about "write once run everywhere"

✏️1️🏃🌐

It was never meant to be about the ability to write a single app that runs on any/all devices.

It is about the ability to use the skills/tools/technologies to build software that can run on multiple operating systems, devices, hardware, etc.


There are a very small number of scenarios where you want the exact same code running on every imaginable device. Even when you do want this, there needs to be logic within the software to account for the differences:

  • Different input devices (not just touch, mouse, & keyboards)
  • Different output devices (not just size of screen, or none)
  • Different sensors or physical capabilities
  • Different usage scenarios
  • Different connectivity or storage capabilities
  • Different user permissions or account settings
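To illustrate the kind of logic that list describes, here's a small hedged sketch using .NET's built-in OperatingSystem checks (the method and hint text are made up for the example; a real app would check actual capabilities rather than just the OS):

using System;

public static class PlatformHints
{
    // Picks wording appropriate to the likely input device on this platform.
    public static string ChooseInputHint()
    {
        if (OperatingSystem.IsAndroid() || OperatingSystem.IsIOS())
        {
            return "Tap an item to select it";
        }

        if (OperatingSystem.IsWindows() || OperatingSystem.IsMacCatalyst())
        {
            return "Click an item, or use the arrow keys";
        }

        return "Select an item";
    }
}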


There's the dream scenario where you build a piece of software for a specific OS and/or device type, but then decide it would be nice if it ran somewhere else too, and you hope that tooling can magically make it happen for you.

Sometimes this works. To a point. But you'll almost always want some customisation or need to handle different scenarios or capabilities the new device(s) present.


What's useful is this: when you know you need to build software that has to run in lots of different places/ways, you can benefit from not needing to learn/support/maintain different technologies to build all that software.

It's not just about the reuse of code once written, it's also about the reuse of skills.

Friday, July 18, 2025

How much does start-up time matter when choosing a framework?

I got nerd-sniped.

I was asked about measuring and possibly benchmarking different .NET frameworks for making native apps in terms of performance.

Apparently, this is an important factor for some people when choosing a framework.

I think I know better. I know you can write very bad (hence slow) code in any language or with any framework. I also know there are almost infinite things you can do to make code faster.

A basic comparison didn't seem very useful. I knew that if I saw such a thing, I wouldn't care very much, as it all comes down to optimising appropriately in a real app, given the constraints and requirements of/for that particular app.

I tried to find a way to get excited about the prospect, but couldn't see what could be exciting.


However, two questions persisted:

  1. Do other people care?
  2. How much difference is there?

It's hard to know if other people care, but is there a significant difference?
Time for a quick experiment...

Doing the simplest and quickest thing I could, I did a quick test to see how long it takes to launch a trivially simple app and for it to report that it has finished loading.
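If you're curious how such a test can work, here's a minimal sketch of one approach: a launcher that starts the app and waits for it to signal that it has finished loading. This isn't the code I actually used, and the named event ("AppLaunchCompare.Loaded") is invented for the example:

using System;
using System.Diagnostics;
using System.Threading;

class LaunchTimer
{
    static void Main(string[] args)
    {
        var exePath = args[0]; // path to the app being measured

        // The app under test sets this named event once it has finished loading.
        using var loaded = new EventWaitHandle(
            false, EventResetMode.ManualReset, "AppLaunchCompare.Loaded");

        var stopwatch = Stopwatch.StartNew();
        using var process = Process.Start(exePath);

        if (loaded.WaitOne(TimeSpan.FromSeconds(30)))
        {
            stopwatch.Stop();
            Console.WriteLine($"Launched in {stopwatch.ElapsedMilliseconds} ms");
        }
        else
        {
            Console.WriteLine("The app didn't signal that it had loaded.");
        }

        if (process is { HasExited: false })
        {
            process.Kill();
        }
    }
}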

Here are the initial results:
WinForms is fastest. WPF is really slow. WinUI next fastest, then Avalonia, MAUI, and Uno but not a lot in it

The Windows Forms perf was to be expected and was included as a reference for the others.

Why is there so much difference?

Is this interesting?
Do you care?
Should I go deeper in investigating more realistic scenarios and considering simple optimizations (like AOT)?


Let me know if you want to know more.


Tuesday, July 01, 2025

Why feed pizza to developers at meetups?

TLDR: If you've ever eaten free food at a meetup (even though you could have afforded a meal), why not help out those who are not as fortunate? "Free" food at developer events is about practicality, not a reason to attend.
pizza, wedges and salad!
Mmmm, Pizza!
It's a cliché that developers like pizza.
Look, here are some developers enjoying pizza at a previous event I organised.

This:

Quickly becomes this:

I've even heard developers be described as people who turn pizza (& coffee) into code.


I was recently talking with someone who was organising a meetup but was complaining about the lack of signups.

    "We're providing pizza, why haven't more people signed up?

They actually said that! As if people were coming for the food, and the technical talks, networking, socialising, and community building were all secondary.

Pizza isn't provided at evening meetups as a reason for people to come.

Pizza (or any other food) is provided at meetups so that people don't have to think (or worry) about food, and so that it isn't a reason for people not to come.


Pizza (or any other food or drink) isn't provided because of a concern for a lack of money to buy food. Developers are typically very well paid and able to afford to eat. 


[Side Note. I have had people come to events where there were concerns about some people only coming for the food, but I certainly wasn't going to turn people away based on this assumption. People attend for myriad reasons that are more varied and complex than anyone can imagine. You don't know what's going on in everyone's life, and even if you asked, they may not want or be able to tell you. Based on where and when these meetings were happening, there were other ways to get food if that's what they really needed but couldn't afford. Sitting through several hours of technical talks as a way to get a drink and a couple of slices of pizza is unlikely to be a good trade-off for anyone not interested in the technology.]


It's about convenience.

Pizza (or other food) is provided so that those attending don't have to think about when or where they will eat and how it fits around event attendance. As event organizers, it's necessary to consider situations like:

  • If this person is coming straight from work, will they have a chance to eat beforehand?
  • If they have to wait until after the event to eat, is hunger going to distract them during it?
  • If they go somewhere to eat first, could they end up getting distracted and not come?

If it's a full-day event and people leave at lunchtime to find food, there's a high chance that some of them won't come back in the afternoon.

Then there are events deliberately intended to fit around when people are eating. A breakfast or lunchtime event would have to be much shorter if it also needed to allow time for attendees to find food. The potential for missing a meal may also put off some attendees.

There's also a social benefit to sharing a meal (or even just a drink) with other people. With so many meetups calling themselves communities, it's great to be able to develop relationships between people based on more than a shared interest and location. Eating together can be a social lubricant to help start building relationships.


A lot of reasons and thought go into providing food for developers at events, and it's not about saving money or appealing to people through food.


Over the years, I've personally spent thousands buying pizza, other food, and drinks to help enable events to run smoothly. Only on a couple of occasions at smaller events did we experiment with asking for contributions. Being well paid at the time, this wasn't an issue. I expect that the majority of people reading this are people working in the software development industry who are well paid and never need to worry about being able to afford to pay for a meal.

But that's not the situation for everyone.

Food insecurity is a massive and growing problem, and it can be hard to imagine that you can make a difference.


However, if you're in a position where you're well paid and you've ever been to an event where food was provided, please consider making a donation to Bankuet!

Bankuet is a social impact company who "make it easier to give to food banks." 

They maximize donations by letting food banks request the items they most need and then buying in bulk, so that your donation goes further and waste is minimized.

You can either donate to a food bank in your area or make a general donation to where the need is currently greatest.


Please join me in supporting the excellent work they're doing.


https://www.bankuet.co.uk/givenow



"Testing in production"

test tube - question mark

It doesn't mean that you discover if something works when it's live and in use. You should test that it does what it's supposed to do before making it publicly available. (It's not an excuse for not verifying your work before saying it's done.)


It doesn't mean that you first learn about any problems with a change when it's in production. Changes should be reviewed before release, and possible or likely problems addressed in advance.


It can mean that you release new functionality behind feature flags and roll it out gradually to everyone.


It can mean that you A/B test new functionality in real scenarios.


It can mean that you monitor your systems in production to ensure reliability and performance.


It means that you need a way to gather feedback and errors from live systems.


It also means recognizing that things may happen in production that may not happen elsewhere.
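To make the feature-flag point above concrete, a gradual rollout can be as simple as a deterministic percentage check. This is a generic sketch, not tied to any particular feature-flag product, and the flag and user names are invented:

using System;
using System.Security.Cryptography;
using System.Text;

public static class FeatureFlags
{
    // Returns true for roughly 'rolloutPercentage' percent of users, and always
    // gives the same user the same answer for a given flag, so their experience
    // doesn't flip between requests. In a real system the percentage would come
    // from configuration so it can change without a redeploy.
    public static bool IsEnabled(string flagName, string userId, int rolloutPercentage)
    {
        var hash = SHA256.HashData(Encoding.UTF8.GetBytes($"{flagName}:{userId}"));
        var bucket = hash[0] % 100; // 0..99 (close enough to uniform for a sketch)

        return bucket < rolloutPercentage;
    }
}

// Usage: show the (hypothetical) new checkout flow to 10% of users.
// if (FeatureFlags.IsEnabled("new-checkout", currentUserId, 10)) { ... }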


Everyone should test in production. No one should only test in production.

Wednesday, June 18, 2025

Maybe don't think of MCP servers as an alternative to your API

Yesterday, I posted on LinkedIn about how I created an MCP server to return details from an experimental XAML analysis tool.

I've since been experimenting with it a bit more.

Without my tool, initially, Copilot told me that the file I was using for testing was great. (Sadly, I didn't get a screenshot of that.) But after running my tool a few times and then asking Copilot the same question without my MCP server, it started to suggest the possibility of improvements.

There is some duplication in animation and style usage, which could be refactored for maintainability. The file is well-structured for a modern WinUI 3 application.

Overall, the file is functional and adheres to good practices, but refactoring and optimization could improve maintainability and performance. Let me know if you'd like to implement any of these improvements!

With a bit more pushing, it even started to be able to suggest things comparable to what my tool was providing:

Duplications to Reduce in HomePage.xaml:

  1. Animation Sets (ShowTransitions and HideTransitions):
      • Both animation sets (ShowTransitions and HideTransitions) use similar structures with OffsetAnimation and OpacityAnimation. The only difference is the direction and duration values.
      • Improvement: Create a parameterized or reusable animation resource to avoid duplication.
  2. Repeated Styles for Controls:
      • SelectorBarItem: Both instances use Style="{StaticResource TokenViewSelectorBarItemStyle}".
      • GridView: All three GridView controls share the following properties:
          • ItemTemplate="{StaticResource ControlItemTemplate}"
          • IsItemClickEnabled="True"
          • ItemClick="OnItemGridViewItemClick"
          • ContainerContentChanging="OnItemGridViewContainerContentChanging"
          • IsSwipeEnabled="False"
          • Loaded="OnItemGridViewLoaded"
          • SelectionMode="None"
      • Improvement: Define a default style or template for GridView and SelectorBarItem in the resource dictionary.
The formatting it applied was even better than my simple string output.

But then I started asking about other files and it went back to giving generic feedback.


This whole experience has made me think about MCP servers in a new way.

If I have a tool or API that performs some specific task then I'll call that directly. Why further complicate things by getting an agent/AI/LLM involved?
If I have an API or tool that provides data or information that could be useful to the agent, then exposing it as an MCP server (or another tool that an agent/AI can call) might be appropriate.

If I'm using an agent, it might be appropriate to say, "run this tool and make changes based on what it returns." In doing this, the actions of the agent may not always be the same. Such is the nature of a non-deterministic system like an LLM-based AI. Adjusting to the change from highly deterministic systems to those that include a level of random variation may just be the hardest part of understanding "AI-based" computing.

If I know I want consistent results that are always presented/formatted the same way and don't have any random variations then I'll use a specific tool directly. If I want to make additional information available to the agent, or allow it to trigger external tasks, then an MCP server is highly appropriate.

I suspect this has some parallels with businesses that are threatened with being made redundant by an agent using an MCP wrapper around their API. If all you're doing is providing data, then where's the business? If you're creating/collating/gathering the data, then that could be valuable. If your value comes from analysing or formatting data, then AI could become a threat when it can do that analysis or formatting itself....



Monday, June 16, 2025

Sometimes we need people to run ahead

With so much seemingly changing at any one time, it can feel hard to keep up.

But, if all we ever do is try to keep up, how can we ever get ahead? Or even just prepare for what's coming?

sign posts on the road ahead

I've recently been thinking about the benefits of thinking about the future in ways that some people consider extreme or unnecessary. But, I've found that thinking deeply about what is or could be a long way off actually helps with thinking about the short term too.

If someone runs miles ahead, they can have an excellent idea of whether the next few meters are in the right direction.

Knowing what is, or could be, a way down the road also helps you know if you'd benefit from extra planning or preparation before you get there.
Are there lots of hills ahead? Better build up the muscles to make climbing them easier.
Is the road ahead dangerous? Better pick up some safety equipment before you get there.
Does the surface change? Do we need some different tyres, or even a different mode of transport for the next part of the journey?


Draw the analogies as you see fit. ;)

Wednesday, June 04, 2025

Have LLMs made code-coverage a meaningless statistic?

TLDR: If AI can easily generate code to increase test code coverage, has it become a meaningless metric?

Example code coverage report output

I used to like code coverage (the percentage of the code executed while testing) as a metric.

I was interested in whether it was very high or very low.

Either of these was a flag for further investigation.

Very low would indicate a lack of testing.

Very high would be suspicious or encouraging (if the code was written following TDD).

Neither was a deal breaker, as neither was an indication of the quality or value of the tests.


Now tests are easy. Anyone can ask an AI tool to create tests for a codebase.


This means very low code coverage indicates a lack of use of AI as a coding tool, which probably also suggests a lack of other productivity tools and time-saving techniques.

Now, very high code coverage can mean nothing. There may very well be lots of tests or tests that cover a lot of the code, but these are very likely to only be unit tests and are also very likely to be low-value tests.


There are two approaches to tests. Asking:

  1. Are there inputs or options that cause the code to break in unexpected or unintended ways?
  2. Does the code do what it's supposed to? (What the person/user/business wants?)


Type 1 tests are easy, and they're the type AI can produce, as they can be written just by looking at the code. These are tests like: "What if this function is passed an empty string?"

Type 2 tests verify that the code behaves as intended. These are the kind that can't be written without knowledge that exists outside the codebase. These are tests like: "Are all the business rules met?"


Type 1 tests are about the reliability of the code. Type 2 tests are about whether you have the right code.

Type 1 tests are useful and necessary. Type 2 tests require understanding the business, the app, and the people who will be using it.

Type 1 tests are generic. Type 2 tests will vary for each piece of software.

Type 1 tests are boring. Type 2 tests are where a lot of the challenge of software development lives. That's the fun bit.
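To make the two types concrete, here's a sketch using xUnit. The OrderCalculator class and the free-delivery rule are invented for the example:

using Xunit;

// A deliberately tiny, made-up class so the tests have something to exercise.
public class OrderCalculator
{
    private const decimal DeliveryCharge = 4.99m;
    private const decimal FreeDeliveryThreshold = 50m;

    public decimal Total(decimal subTotal, string? voucherCode)
    {
        // Voucher handling is omitted to keep the sketch short.
        return subTotal > FreeDeliveryThreshold ? subTotal : subTotal + DeliveryCharge;
    }
}

public class OrderCalculatorTests
{
    // Type 1: can be written just by looking at the code.
    // Does it cope with an unexpected input?
    [Fact]
    public void Total_WithEmptyVoucherCode_DoesNotThrow()
    {
        var calculator = new OrderCalculator();

        var exception = Record.Exception(() => calculator.Total(subTotal: 20m, voucherCode: string.Empty));

        Assert.Null(exception);
    }

    // Type 2: needs knowledge from outside the codebase - in this made-up case,
    // the business rule that orders over £50 get free delivery.
    [Fact]
    public void Total_OverFiftyPounds_DoesNotAddDeliveryCharge()
    {
        var calculator = new OrderCalculator();

        var total = calculator.Total(subTotal: 60m, voucherCode: null);

        Assert.Equal(60m, total);
    }
}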


Them: "We've got loads of tests."

Me: "But are they useful?"

Them: "Umm..."


I've recently started experimenting by keeping AI-generated tests separate from the ones I write myself. I'm hoping this will help me identify where value is created by AI and where it's from me.




Tuesday, June 03, 2025

The problem with multi-word terms (including "vibe coding")

TLDR: I think it's worth being clear about the meaning of the words we use. Maybe compound terms cause more confusion than we realise.

Not wanting to sound too pessimistic, but I think it's fair to say that we are lazier than we realise and not as smart as we think.

We hear a term made up of multiple words we recognise, and we assume a meaning for the overall term based on our understanding of the individual words.
confused speech emojis
Let me give you three examples.

1. "Vibe coding"

Originally, it was defined to describe people "going with the vibe" and letting the AI/LLM do all the work. You just tell the AI what you want and keep going until it has produced all the code and deployed the resultant software without having a care or knowledge about how it works.
But some developers heard the term and presumably thought, "I know what coding is and I know what good vibes are, so if I put them together, that must mean 'using AI to produce code that gives me good vibes.'"
The result: there are lots of different understandings of the meaning, and so whenever it's used, it's necessary to clarify what's meant. Yes, there can be lots of different meanings and I'm not going to argue that one is more valid than the others.

2. "Agile development" 

The original manifesto had some flexibility and left some things open to interpretation or implementation appropriate to specific circumstances. However, I suspect, there were a lot of people who thought "I know what development is and I know what it means to be agile so I'll just combine the two."
The result: everyone has their own understanding of what it means to "do agile development". Some of those variations are small and some are massive. I've yet to meet two different teams "doing agile development" who do things exactly the same. Does that matter? Probably not. It's just important to clarify what people mean when they use the term.


3. "Minimal viable product" (MVP)

Yes, you may know what all the words mean individually. You may even have an idea about the term as a whole, but the internet is bursting with explanations of what it actually means. My experience also tells me that if you have a development background, your understanding is highly likely to be very different from someone in product or marketing.
Does it matter? It depends on whether all the people using the term are in agreement. It might be fine if you're using it as an alternative term for "beta", or you mean it must have a particular set of features, or it requires a certain level of visual polish. I think that being able to prove it's viable, based on customer actions, is more important. But, again, if all the people on your project can agree on the meaning, I trust you'll work it out. (Confession: I left one job because the three people in charge--an issue for another time--all had a different understanding of what MVP meant, but refused to give their definition or acknowledge that their definition was different from the others. It made the work impossible.)



Some people (or maybe all people, sometimes--I have done this myself) will hear a word, assume a meaning, and not ask any questions.

I've observed a similar thing with headlines. People make assumptions based on headlines or TLDRs, and so don't get to appreciate the nuance. Or maybe don't even appreciate that there might be more than a simple explanation.

Nuance matters. It's the detail where the devil hides. It's the 80% of edge cases accompanying the 20% of the obvious in a 'simple' scenario.

Words matter. I probably spend far too much of my time thinking about words because they're a foundation of communication.

Yes, for many people, words don't matter.

But, going back to thinking about "vibe coding", words are how we communicate with machines. While the trend has always been for "higher-level" languages, we didn't previously go all the way to our spoken languages because of:
A) technical limitations 
B) the lack of precision in our spoken/written languages 

AI/LLMs overcome some of the technical limitations and can make some reasonable guesses to work around the lack of precision.

Relying solely on natural language to express all the subtle details and specifics required for software, using only a few sentences or even paragraphs, doesn't seem appropriate.

Some people think 'Nuance doesn't matter' until the software doesn't do exactly what they expect in an edge case scenario.

Producing software that isn't as good as I want/expect may just be part of the enshittification of life.

I think many people believe (or think they can get away with) acting like "Close enough" is the new "Good enough".

Magpies are very vocal. And, maybe they're right. Perhaps we should just focus on the new and shiny.

Or if using AI/LLMs saves money and cuts costs, that's all that matters. Well, matters to some. I definitely don't think it's all that's important.



Then I wonder about choosing names for things. 
If there are such potential problems when combining existing words, maybe it's better to use made-up words, or words with no direct correlation to the thing the name is used for....




Now, I'll just wait for the comments that tell me I don't understand the above terms correctly...



Friday, May 30, 2025

Will AI make all code eventually look like demo code?

AI + learning code = ?

In the future, more code will be written by AI.

That AI will be trained by looking at existing code and documentation on the web.

For new libraries, APIs, functionality, etc. AI will first learn from the documentation, because there are no other examples to learn from.

Documentation and demo code contain unnecessary details and comments. They're also usually verbose and don't include many important considerations, like security, performance, logging, error handling, etc.

If that's all there is, how will the AI learn what better will look like?


It's already happening. 
When I have AI generate test cases, it frequently includes comments that only make sense in the context of teaching someone how to write test code. Why else would you include the single word "Assert" in a comment above a line that asserts whether a condition is met? Is such a comment necessary when the next line starts "Assert.IsTrue("? The AI does this because that's what it "saw" in the training data. And that training data included these comments because it was an instructional example, or it was code copied directly from examples like that.
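For illustration, this is the kind of test I mean. The Calculator class is made up, and the comments are the sort the AI adds, mirroring instructional examples rather than adding any information:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_TwoNumbers_ReturnsSum()
    {
        // Arrange
        var calculator = new Calculator();

        // Act
        var result = calculator.Add(2, 3);

        // Assert
        Assert.IsTrue(result == 5);
    }
}

// A minimal made-up class so the example compiles.
public class Calculator
{
    public int Add(int a, int b) => a + b;
}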


Maybe it doesn't matter.

Maybe AI will learn to overcome this.

Maybe it's an indication that while AI may be able to write code that appears to work, it will still require people who have a different (and broader) understanding to ensure it works well, efficiently, and does everything that's needed. Not just what you might see in a demo.

Thursday, May 29, 2025

I went to Interesting 2025


I went to Interesting 2025 (in London, the other week). Fourteen talks, from fifteen speakers, on very different topics.

But, I didn't find it interesting. I found it:

  • Inspiring
  • Curious
  • Terrifying
  • Intriguing
  • Surprising
  • Sad
  • Heart-warming
  • Hunger-inducing
  • Hopeful
  • Misleading
  • Unbelievable
  • Educational
  • Inspiring (again)
  • Triggering
  • Motivating
  • Wondering

which was really the point!


Thank you to: Alice, Rosa (& dad), Julia, Cate, Zoe, Lisa, Daniel, Rachel, Jackie, Terry, Lauren, Luyanda, Clare, Daria, Anthony, Helen, Rebekah, and especially Russel.


As one of the hosts said, "That was really interesting. Actually!" 


I went to Interesting 2025, I committed a rudeness, and I suffered as a result. (IYKYK😉)

Thank YOU (Russel)


Wednesday, May 28, 2025

Tuesday, May 27, 2025

I'm making Visual Studio less secure because I won't pay a 3rd party for security theatre

Which is less secure:

  • preventing the application of security patches?
  • not having signed assemblies?
It depends on context. So, what if I reframe the questions: 

How much would you pay to update an assembly that was previously signed?

If you don't pay, you can't update that assembly.

You can release another version (with a different name), but existing users won't get a notification of an available update.

no more extensions?


For "reasons", I previously signed the libraries I released through NuGet and the extensions I released through the Visual Studio Marketplace.

The cost to renew my code signing certificate this year was over $800. This is more than I want (or have) to spend on this. (Especially given the quality of the support they provide. Or not.)


Microsoft offer an alternative at a reasonable price, but it's only available to companies registered in the USA or Canada. I'm not, so that's no help.


NuGet makes it possible to release unsigned updates to previously signed packages. So, that's good.

Sadly, Visual Studio does not. I even asked very nicely. However, I was told that for security reasons, they do not allow this.

I guess the consequences of extensions not getting updated to address security threats or security vulnerabilities in their dependencies aren't a problem.

If they didn't allow the uploading and distribution of unsigned extensions, it wouldn't be such an issue. But, because I was previously trying to be good and sign things on the basis that it was better for security, I (and anyone wanting updates) suffer now.

Unless someone wants to sponsor me enough to cover the cost of a certificate, the 50 extensions I have in the marketplace will never be updated. If you're waiting on a fix (or a security update), I'm not sure what to tell you.


I'm still wondering what to do. Hopefully, I'll have an announcement in the coming weeks...




ENAMEL RFC Revision 1

I've just released the first update to the RFC for the ENAMEL language.

This is only the first of several expected revisions and adds a few new options and clarifications.

Changes in this revision:

  • Trailing semicolon is optional with inline C# (clarification)
  • Nested loops are supported (clarification)
  • Nested AUTOGRIDs are not supported (clarification)
  • Add the ExpandGridDefinitions setting
  • Add the SET keyword


As before, I appreciate any feedback on these changes or anything related.

Additionally, as I continue to explore these ideas and gain feedback from others, it's looking more and more likely that I will eventually release some tooling to support this language. I'd love your thoughts on this (including if you want to be an early tester) at https://forms.gle/DXVP8fjfyics74Sj6

Friday, May 23, 2025

AI will solve all our problems....?

 Apparently, "AI will solve all our problems!"

Even the ones that we know are being created by using AI now?


I can see two responses:

  1. We hope that will be the case. (By the time we encounter the severe negative consequences, "AI" will have been developed to deal with those problems.)
  2. We plan and prepare for these known issues and limit or mitigate their consequences before they become a problem. (Hope isn't always the best strategy - when it comes to technology.)



Thursday, May 22, 2025

Improve .NET MAUI's AppThemeBinding?

 The AppThemeBinding in .NET MAUI is great. But wouldn't it be better if it required a lot less text?

<Label Text="default way (verbose)" TextColor="{AppThemeBinding Light={StaticResource Primary},Dark={StaticResource White}}" />  <Label Text="my way (shorter)" TextColor="{lightDark:Primary_White}" />

I'm a massive fan of code that is clear, expressive, and as short as possible without obscuring any important details.

The default syntax in the top AppThemeBinding example above is indicative of what you'd expect from regular XAML use. It does everything you need, the names are clear, and anyone familiar with XAML will not struggle to read it.

But, I look at it and think:

  • That's a lot of text
  • Why the repetition?
  • Why are so many braces needed? (I know why but wish they weren't.)
  • Can't there be a better, simpler way of telling both the compiler/framework and any developer looking at the code in the future what I want to happen?


As you can see from the example above, I have a better way.


Of course, I don't have all possible color combinations pre-defined. I used a Source Generator to create what I wanted by including this attribute somewhere in my code.

[AppThemeColorResource(AppColors.Primary, AppColors.White)]

This example also uses my RapidXaml.CodeGen.Maui library to generate constants for the names of the color resources defined in the Resource Dictionary. I could use hard-coded strings for the names, but that risks other possible maintainability issues.


I also have other variants of the attribute, so I can also do things like this:

[AppThemeNamedColor(nameof(Colors.Aqua), nameof(Colors.HotPink))]

[AppThemeHexColor("#FF00FF", "#8B0057", "PinkOrPurple")]

[AppThemeNamedBrush(AppBrushes.SecondaryBrush, AppBrushes.Gray200Brush)]


Each generates a MarkupExtension that can be used in place of an AppThemeBinding and is named based on the names of the provided brushes or a specifically provided name. (As in the AppThemeHexColor example above.)
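To give an idea of the shape of that output, here's a hedged sketch of roughly what one generated extension could look like. It's not the actual generated code, and it assumes the generator can simply delegate to MAUI's built-in AppThemeBindingExtension (the real implementation may differ):

using System;
using Microsoft.Maui.Controls;
using Microsoft.Maui.Controls.Xaml;
using Microsoft.Maui.Graphics;

namespace MyApp.Theming; // mapped to the 'lightDark' xmlns alias in XAML

// Usable as {lightDark:Primary_White} anywhere an AppThemeBinding would go.
public sealed class Primary_WhiteExtension : IMarkupExtension
{
    public object ProvideValue(IServiceProvider serviceProvider)
    {
        // Delegate to the framework's theme-aware binding so switching between
        // light and dark themes at runtime still works.
        var inner = new AppThemeBindingExtension
        {
            Light = Application.Current?.Resources["Primary"], // simplified resource lookup
            Dark = Colors.White,
        };

        return ((IMarkupExtension)inner).ProvideValue(serviceProvider);
    }
}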



The above example uses 'lightDark' as the xmlns alias. I tried various naming styles and conventions for the alias and the generated MarkupExtension, but I currently like this best. I think it communicates the maximum amount of helpful information with the least amount of text.

Examples of other things I've experimented with include:
{ifLight:WhiteElseBlack}
{color:PrimaryIfLight_PrimaryDarkIfDark}
{if:LightUseFF0000_DarkUseDD3333}
{appThemeBrush:Green_ifLightElse_DarkGreen}

What would you use?


If you see something like the above, you might be tempted to think, "Yes, it's shorter and easier to read, but what's the downside? I bet it's slower."

If you've never measured how long it actually takes to load different XAML content, that is a reasonable assumption.

Previously, I would have thought the same too. However, I recently built a test harness that makes it easy to measure how long it takes to load different XAML files, and the results shocked me.

I created two pages. Each contains 100 Labels (in a VerticalStackLayout). In one page, each label uses the AppThemeBinding as shown above. The other page uses my syntax. 

In a quick test, loading each page multiple times and in a variety of scenarios, the version of the page that used my syntax loaded 25% faster!


So, my version:

  • is shorter
  • is easier to read/understand
  • has exactly the same functionality
  • and is faster!!!


Now, tell me why you wouldn't want to do this?



NOTE: If you're looking to hire someone to help you build apps with .NET MAUI, and you appreciate code that is easy to read, understand, and maintain, get in touch, as I'm looking for work.



creating and making words

I recently came across someone who was using creating and inventing (& creator and inventor) interchangeably. I found this confusing. There must be a difference, and it may be useful to understand it.

I came up with these definitions for various creativity-related terms. They're not perfect, and there are exceptions, but I'm sharing them here as others may find them useful.

Make - To produce something
Create - To make something that has existed before
Invent - To create something that has never been created before
Innovate - To create something by combining existing things or ideas in a new way
Design - To create something for a specific need or purpose

But what about "creativity'?
In my experience, I believe this is mostly used as a shorthand for idea creation. Being creative means coming up with lots of ideas, possibly using invention or innovation, and seeing how they can meet a need or requirement.

Monday, May 12, 2025

When I built six apps in under 4 weeks

While looking for a new job, I've been thinking about how I quantify the work I've done previously, and I remember the time I built six apps in less than four weeks!

6 / 4
We didn't start from scratch, but compared to everything I have as a reference, it was fast work. I miss working on projects where the goal was to ship high-quality products regularly, and they would be used by lots of people.

So, backing up, the project was a port of apps that already existed for other mobile platforms. The apps were "city guides" for different cities and were based on published pocket guides for those cities. The apps contained interactive maps of each city to enable the exploration of places of interest. It was also possible to create a custom itinerary of places to visit. Not massively complicated, but for a recognised brand that expected the highest quality.
Six apps, one for each of six cities.

Before we started, all the existing data and assets from the existing apps were gathered, a full review of the existing apps was performed (by everyone involved), and the designer from the agency created visual mock-ups for each of the "pages" in the app.

The [book] publisher hired an agency to build the apps, and the agency sub-contracted out the development to me. As a highly recognised agency, who did work I admired, they were one of the few companies I had always wanted to work for/with.

Prior to starting development, we held a meeting with the designer, product manager, agency owner, and myself to go through everything, ensure all requirements and details were known, and that everything was understood.

The plan was to build a single "core app" that could be "white-labelled". By providing different data files and visual assets, each of the six apps could be produced from a single codebase. I'd used a similar technique before when needing to build localised versions of an app for different countries, each with its own unique content and language.


Here's how the work broke down week by week.

Week 1: Build the custom map control. Without this, there would have been no app. It was the centrepiece of the application: unique functionality that differentiated it from other available apps. Yes, the whole first week was spent entirely on building a single control. It was, obviously, a complex control that contained a lot of functionality. All the functionality had to be easily testable too. It wasn't practical to travel to each city (on 3 different continents) to manually test the functionality.

Week 2: Get all the data in a consistent format. The provided data was in multiple formats. Mostly a mix of CSV files and SQLite databases. So, before building the actual mobile apps, I built a console app that would take all the different data sources and produce standardised, consistently formatted data that could be used by each app the same way. The existing apps embedded and used the raw data files, but it meant that they had to handle all the variations in file formats, data formatting, and incomplete data. Having data in a known and consistent format meant the code in the mobile app didn't have to account for as many variations or possible error conditions. It meant less code and less error handling. When updated data files were available, they could be reprocessed by the console app, and the files it produced were tested for consistency and correctness.

Week 3: Build the app. With all the complexities handled in the previous week, this became a case of building pages for the app(s) that displayed and allowed interaction with the data (& map control). I recall there being fewer than twenty pages in the app. Many pages were reused for different scenarios where the same basic structure could show very different data. These were pretty simple pages to display and interact with a fixed set of data. The challenging work in determining how the pages should look and the possible variations in data had all been considered prior to actually building the UI. This was key to building fast and correctly.

Week 4: Test the app. While doing some final tweaks and checks, I checked and retested everything while the agency's dedicated tester also did the same. Some additional automated checks/tests were also added. Across the entire code base, there were over 32000 tests, but many of these were for checking the data's validity, formatting, & consistency for each place of interest.


One particular memory I have was with the person from the agency responsible for testing the apps before release. They couldn't find any problems with any of the apps. However, like many people responsible for testing software, they didn't want to find no problems. They looked into the data that was being used and raised a single bug that some of the latitude and longitude details (of some places of interest) were specified to an unnecessary number of decimal places. A quick update to the data formatting app, the addition of an extra test that no lat/long values had unnecessary precision, and then the regeneration of all the data and the app was ready to ship. Once released, no bugs were reported.


Small apps with little feedback and interaction from the people using them aren't always desirable, but it's good to know (remember) that I can build them with impressive speed.


Friday, May 09, 2025

When process improvements can be more valuable than code changes

While looking for a new job, I've been thinking about how I quantify the work I've done previously. It reminds me of a role I had where I introduced positive changes far beyond the assigned coding tasks.

I was working in a team (of 8) building desktop software that the business sold. It was high-value software for a specific task and industry. The product was more than a decade old (when I joined), and updates were released (approximately) every three months. Updates would add fixes and new features. Some customers also paid for their own custom functionality.

positive graph showing a line going "up and to the right"

I collaborated with the sales team to provide technical planning support for custom work and, as part of the general development team, implemented planned changes for each release.
It wasn't the most mentally taxing work and didn't break new technical ground. But, it did allow me the time to think about the broader picture of what we were doing and how we were doing it.

Here are five things I did in that role that others on the team couldn't or wouldn't have done.

  1. Streamlined work distribution. When I joined, the work to be done for each release would be decided in advance, and then individual tasks would be given out to developers one at a time as each task was completed. This way of working had been in place for years, but I quickly identified multiple issues. I persuaded the manager to group related tasks (typically by area of the code base) and give them all to a single developer. This change resulted in faster changes (so more work could be done for each release), fewer bugs, and developers gaining a deeper understanding of the codebase. Because each developer was spending more time in one area of code, they got to know it better, could make multiple changes at the same time, and could avoid the conflicts or rework that happened when multiple developers tried to change the same area of code at the same time.
  2. Increased documentation accuracy and quantity. The company used to have a set of Word documents that detailed developer processes and important information. These were hosted in a read-only form on an intranet. The process for changing or adding a document was slow, and so it wasn't done as frequently as would have been beneficial. I migrated the existing system to a wiki-based solution, which led to more documentation being created and it being kept up to date.
  3. Simplified and automated the release process.  Releases were an important time, but they used to be very slow. Originally, a release would require a "release week" where all the developer team were involved in preparing the release or working on projects away from the main codebase. Creating a release build was a slow, manual process that took three days to complete. This would then be manually tested while custom builds were created for customers with unique features. After going through this process once, I saw the issues and began automating it. I reduced it to a 25-minute process that also included all custom builds. This was run multiple times a day as work was committed to the main branch.
  4. Introduced automated testing. I joined a company with a dedicated manual tester and a technical director who refused to accept that coded/automated tests were a good use of anyone's time. When I was tasked with work that involved multiple complex calculations, I knew I couldn't complete the task without creating coded tests. There were too many variables and scenarios for me to remember everything, and even following a manual script would be slow and prone to errors. I created the tests anyway and even identified many previously unknown existing bugs in the calculations. At our next weekly meeting, I admitted what I'd done and showed how having the tests had not only saved me time and improved the quality of the code, but it also made future changes to this part of the code easier and with less risk of introducing unintended side effects. There was initial scepticism from the manual tester who felt threatened, but once they saw how it freed them up to do other work and reduced the bottleneck of manual testing from the development process, everyone got on board and creating coded tests soon became the norm.
  5. Restructured weekly progress meetings. Every Thursday afternoon, the whole team would gather for a meeting. Initially, this was primarily dominated by each developer individually reporting what they had been working on and giving progress feedback to the manager. Most of the progress feedback to the manager was irrelevant to the rest of the team, and so it wasn't a good use of time to gather everyone in a room for a series of 1-on-1 conversations. I suggested moving the progress reporting to email, which happened before the meeting. This freed up the meeting to collectively address any concerns and discuss wider issues or areas for improvement. I particularly remember the final meeting I attended on my penultimate day with the company. When I suggested a new process improvement and explained the benefits, another member of the team asked why I cared when I was about to leave. I replied that I wanted the best for the company and the team, even when I wouldn't be there. And, while I was there, I wanted to do everything I could to make it as good a place to work as possible.
The code changes I made could, arguably, have been made by any of the other developers on the team. That's part of the nature of coding. I expect that the increased use of AI/LLMs as part of software development will further reduce the distinction between the code produced by different developers. The distinguishing factor between developers may come down to their ability to do more than produce acceptable code. Being able to understand how the required task fits into the broader picture and identifying areas for improvement is a crucial skill. A knowledge of the wider business and its processes can also be valuable. Not that individual developers should always question every decision and attempt to change business processes, but they should be able to see and understand the broader environment and offer suggestions when appropriate.
I'm looking forward to competing in this developer marketplace.

Thursday, May 08, 2025

When a one second saving was worth £10K each day

While looking for a new job, I've been thinking about how I quantify the work I've done previously. It reminds me of the time we worked out how a small performance improvement was worth approximately ten thousand pounds (Sterling/GBP) to the company on each day of operation. Six days a week times 52 weeks a year, that's over £3 million per year!

clock and bank notes
I was working for a courier/delivery company and was responsible (among other things) for the software used to scan parcels in the depots and by drivers when making deliveries. First thing in the morning was a very busy time: lots of parcels needed to be scanned and loaded onto many vehicles, often for deliveries that were far away or on tight deadlines. Time was definitely an essential factor. Saving time during this process would:

  • Allow more time for time-sensitive deliveries
  • Reduce the number of late deliveries
  • Allow for increased capacity
  • Reduce stress among people loading vehicles and drivers with tight deadlines

We (obviously) wanted to save any time we could.
A combination of changes to multiple pieces of software involved in the process (preparing the list of deliveries for each route, the scanning software, and the software used by controllers to organise and dispatch routes/drivers) resulted in time savings equivalent to one second per parcel.

One second per parcel might not sound like a lot, but there were a lot of deliveries being made each day. In addition to potentially saving time, that one second per parcel was, on average, enough to enable the addition of one extra delivery to each route each day.
Based on the number of routes and the average price of a delivery, that amounted to approximately ten thousand pounds. Per day. Definitely a successful project.


The monetary value is a notable figure, but no, I didn't see any of that or even get a bonus. My reward was knowing I'd done my job well and improved things for the people using the software I was responsible for.
I enjoyed the satisfaction of solving a complex technical problem (how to make the process take less time) that also contributed to a positive benefit for the people using the software.
More than simply being able to put a financial figure on the result of a change, I was able to use my broader knowledge of the company and its processes and also learn more about it during the project.