Thursday, July 28, 2016

Book update

3 little updates on the book I'm writing.
  1. The title's going to be changed (slightly) - official announcement shortly
  2. It's half price today (July 28th 2016) - go to https://www.manning.com/dotd and use code dotd072816au - (if you're reading this in the future there's a different discount code here)
  3. It's half finished - I know the MEAP* still only shows 4 of 14 chapters available but I'm working hard to get you more chapters ASAP.
*MEAP - Stands for Manning Early Access Program. Think of it as a cross between the in-progress part of Leanpub and a traditional publishing process. You get the advantage of access to chapters while it's still being written but with the benefits of an editorial team helping improve what I write.

Tuesday, July 12, 2016

11 things you're doing wrong when testing code

Most developers I meet aren't big fans of testing. A few are but most don't do it, would rather not do it, or only do it begrudgingly. I love testing and happily spend more time on tests than writing new code. I'd argue that it's because of the focus on testing I can spend less time writing new code or fixing bugs and still be very productive.

If you're unsure about writing tests or don't do a lot of it, the following will point you in a better direction.

  1. Not doing it. It's an easy trap to fall into but one without an excuse. Make plans to start adding tests to the code you're working on now and add them to future projects from the start.
     
  2. Not starting testing from the beginning of a project.
    It's harder to go back and add tests retrospectively, and doing so may require architecture changes, which ultimately means it takes longer to have code you can be confident in. Adding tests from the start saves time and effort over the lifetime of a project.
     
  3. Writing failing tests.
    The popularity of the TDD methodology has brought the idea of Red-Green-Refactor to the software testing world. This is commonly misunderstood to mean that you should "start by writing a failing test". This is not the case. The purpose of creating a test before you write the code is to define what the correct behavior of the system should be. In many cases this will be a failing test (indicated in red) but it may also be represented by an inconclusive or unimplemented test.
     
  4. Being afraid of unimplemented tests.
    A big problem in software development is the separation between code and any documentation about what the system should actually do. By having a test with a name that clearly defines the intended behavior that you will eventually implement, you will get some value from a test even if how it will be written is currently unknown.
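For illustration, here's what a named-but-unimplemented test can look like in Python's unittest (the class, the test name, and the discount behavior are all hypothetical):

```python
import unittest

class OrderTotalTests(unittest.TestCase):
    # The name documents intended behavior even though nothing is
    # implemented yet. Skipping keeps the test inconclusive rather
    # than failing, so it still records what the system should do.
    @unittest.skip("not yet implemented")
    def test_discount_is_applied_when_order_exceeds_threshold(self):
        self.fail("write this once the discount rules exist")

# Running the suite reports the test as skipped, not failed.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrderTotalTests)
result = unittest.TestResult()
suite.run(result)
print("skipped:", len(result.skipped), "failures:", len(result.failures))
```

The skipped test shows up in every test run as a reminder of behavior still to be built, without polluting the results with failures.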
     
  5. Not naming the tests well.
    Naming things in software is famously difficult to do well and this applies to tests as well. There are several popular conventions on how to name tests. The one you use isn't important as long as it's used consistently and accurately describes what is being tested.
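As a sketch, one widely used convention names each test with the unit of work, the scenario, and the expected result (the `add` function below is invented for illustration):

```python
# Hypothetical function under test.
def add(a, b):
    return a + b

# Convention: <unit>_<scenario>_<expected result>.
# The failure report is meaningful from the name alone.
def test_add_two_positive_numbers_returns_their_sum():
    assert add(2, 3) == 5

def test_add_number_and_zero_returns_the_number():
    assert add(7, 0) == 7

test_add_two_positive_numbers_returns_their_sum()
test_add_number_and_zero_returns_the_number()
print("naming example tests passed")
```

Whichever convention you pick, the point is that the name, not the body, tells you what broke.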
     
  6. Having tests that do too much.
    Long, complicated names are a good indication that you're trying to test more than one thing at once. An individual test should only test a single thing. If it fails, it should give a clear indication of what went wrong in the code; you should not need to look at which part of the test failed to see what the problem is. This doesn't mean that you should never have multiple asserts in a test, but they should be tightly related. For instance, it's OK to have a test that looks at the output of an order processing system and verifies both that there is a single line item in it and that it contains a specific item. It's not OK to have a single test that verifies that the same system creates a specific item, logs it to the database, and also sends a confirmation email.
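A minimal sketch of that order-processing example (the `Order` class here is invented purely for illustration):

```python
# Hypothetical order model, just enough to illustrate test scope.
class Order:
    def __init__(self):
        self.line_items = []

    def add_item(self, name, quantity=1):
        self.line_items.append((name, quantity))

# OK: two asserts, but both describe one thing -- the order's contents.
def test_processed_order_contains_the_single_requested_item():
    order = Order()
    order.add_item("widget")
    assert len(order.line_items) == 1
    assert order.line_items[0][0] == "widget"

# Not OK would be one test that also checked database logging and
# that a confirmation email was sent -- a failure there wouldn't
# say which of the three behaviors broke.

test_processed_order_contains_the_single_requested_item()
print("focused test passed")
```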
     
  7. Not actually testing the code.
    It's common to see people who are new to testing creating overly complicated mocks and setup procedures that don't end up testing the actual code. They might verify that the mock code is correct, or that the mock does the same as the real code, or just execute the code without ever asserting anything. Such "tests" are a waste of effort, especially if they exist only to boost the level of code coverage.
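A sketch of the trap, using Python's unittest.mock (the price-lookup dependency and discount function are hypothetical):

```python
from unittest.mock import Mock

# Real code under test: applies a 10% discount to a looked-up price.
def discounted_price(lookup, item):
    return lookup(item) * 0.9

lookup = Mock(return_value=100)

# BAD: this only proves the mock returns what we configured it to
# return -- discounted_price is never exercised at all.
assert lookup("widget") == 100

# GOOD: the mock stands in for the dependency, but the assertion is
# about the result of the real code.
assert discounted_price(lookup, "widget") == 90.0
print("real code tested:", discounted_price(lookup, "widget"))
```

If deleting the code under test wouldn't make the test fail, the test isn't testing anything.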
     
  8. Worrying about code coverage.
    The idea of code coverage is noble but often has limited actual value. Knowing how much of the code is executed when the tests run should be useful, but because it doesn't consider the quality of the tests executing that code, it can be meaningless. Code coverage is only interesting if it is very high or very low. Very high coverage suggests that more of the code is probably being tested than will bring value. Very low coverage suggests that there probably aren't enough tests for the code. With this ambiguity, some people struggle to know if an individual piece of code should be tested. I use a simple question to determine this: does the code contain non-trivial complexity? If it does, then you need some tests. If it doesn't, then you don't. If you can't look at a piece of code and instantly see everything it does, then it's non-trivial. Testing property accessors is a waste of time; if they fail, there's something more fundamentally wrong with your system than the code you're writing. This doesn't just apply to code as you write it. If you revisit code at any point after it's been written, then it needs tests. If a bug is ever found in existing code, that's confirmation that there weren't sufficient tests for the complexity of that area of the code.
     
  9. Focusing on just one type of testing.
    Once you do start testing, it can be easy to get drawn into just one style of test. This is a mistake. You can't adequately test all parts of a system with one type of test. You need unit tests to confirm individual components of the code work correctly. You need integration tests to confirm the different components work together. You need automated UI tests to verify the software can be used as intended. Finally, you need manual tests for any parts that can't easily be automated, and for exploratory testing.
     
  10. Focusing on short-term tests.
    The majority of the value from tests is obtained over time. Tests shouldn't just exist to verify that something has been written correctly, but that it continues to function correctly as time passes and other changes are made to the codebase. Whether the problems are regression errors or new exceptions, tests should be run repeatedly to detect them as early as possible, as that makes them quicker, cheaper, and easier to fix. Having tests that can be automated and executed quickly, without variation (human error), is why coded tests are so valuable.
     
  11. Being a developer relying on someone else to run (or write) the tests.
    Tests have very little value if not run. If tests can't be run then they won't be, and bugs that could have been caught will be missed. Having as many tests as possible run automatically (as part of a continuous integration system) is a start, but anyone on a project should be able to run any test at any time. If you need special setup, machines, permissions, or configurations to run tests, these will only serve as barriers to the tests being executed. Developers need to be able to run tests before they check in code, so they need access to, and the ability to run, all relevant tests. Code and tests should be kept in the same place and any setup needed should be scripted. One of the worst examples I've seen of this being done badly was on a project where a sub-team of testers would periodically take a copy of the code the developers were working on. They'd modify the code so they could execute a series of tests, which the developers didn't have access to, on a specially configured (and undocumented) machine, and then send a single large email to all developers listing any issues they'd found. Not only is this a bad way to test, it's a bad way to work as a team. Do not do this.
    Having code that executes correctly is part of what it means to be a professional developer. The way to guarantee the accuracy of the code you write is with appropriate tests that accompany it. You cannot be a professional developer and rely solely on other people to write tests for and run tests on your code. 

If none of the above apply to you, congratulations. Carry on making robust, valuable software.

If some of the above do apply to you, now's a great time to start doing something about it.


Friday, July 08, 2016

How much to develop an app?

So, how much will it cost to get my idea for an app designed and built?
I've been asked this a few times recently so thought it would be worth writing up a "proper" answer.

It's a simple question so it deserves a simple answer. Here you go then: 17 inches.

Hang on. That's the answer to "How long is a piece of string?"

The questions are roughly the same.

There is no single answer.
There's also no simple answer.
Like almost every question, the answer is that "it depends™."

It depends because not every app is the same.

I think the press has done us all a disservice. Over the years, a big deal has been made of the fact that "anyone can make an app." The catch is that not everyone can create the same app, and not every app is the same. The other misconception about apps is that the normal rules for a product or business don't apply, but they do.

Let's say for a minute that you didn't want an app but wanted someone to design and build a house. Would you ask "how much will it cost to design and build a house?" and expect anyone to be able to just answer with a figure?

I hope not. There are lots of things that would influence the cost of designing and building a house. This would include:
  • How big should it be? 
  • What will it be made of? 
  • Where will it be? 
  • Who do you want to design it? Someone merely capable or a renowned designer? 
  • What do you need in terms of the number of rooms, layout, etc.? 
  • What about how it looks from the outside? 
  • What about the grounds and landscaping? 
  • Is there a specific timeframe for the project? 
  • Any special requirements or laws that need to be considered? 
  • What external services and utilities are available at the site and/or are required? 
  • Any implications for the long-term management of the built house? 
  • Does the site have good access? 
  • And many more factors as well. 
These questions matter because not all houses are the same. A single-room building that sleeps a whole family, in a slum with no utilities, made from whatever materials can be found, is obviously very different from a 10-bedroom, 12-bathroom mansion set in 15 acres of prime real estate, complete with tennis courts and a swimming pool.

A similar extreme exists in apps. At one end of the spectrum, building an app means taking some pre-existing content and bundling it up into an app for side-loading on a small number of corporately owned and managed devices that are all the same make and model.

At the other end, it means creating a custom backend, related services, an admin system and mobile client apps for multiple platforms and distribution through even more stores.

The variations in size and complexity make a single answer impossible.

What if you have a tight budget?

Maybe you've read this far and are still waiting for me to give you a figure in the hope that it will be less than the one you plucked from the sky and have in mind.

You may be able to achieve this, but you need to find someone with suitable skills.

There are people in the world who will work for a few dollars an hour who may be able to build what you're after. There are other people who will charge several hundred dollars an hour. Price isn't always a direct indication of what you'll get back.

How good what they make will be, how easy it will be to work with them, and what it will be like to maintain and adapt in the future can all vary regardless of what you pay.

Some apps will require different combinations of skills and these may or may not be available from a single person or company.
There really is a very broad scope. At one extreme, I've built simple apps by myself for a few hundred pounds. At the other, I've worked on an app that took about 40 man-years of effort and ended up costing several million dollars.

Some companies and developers are only interested in the original development of apps. Others will help maintain and support it in the future too.
I'm still regularly surprised by how much some companies will charge and how little some people are willing to pay. The prices some companies charge seem very high based on what I know is possible--I've been doing this a while. ;) And some people clearly recognize the need for specialist skills to build something that will (hopefully) establish a profitable, long-term business, yet that recognition doesn't correspond with how little they're willing to pay to make it possible.

Oh, you've got an idea for a business you claim will turn over several million dollars a year but you're only willing to pay a thousand dollars to build all the software to make it possible. - Yeah, something's just not right about that.

How much will it cost to build your app?

Some people are willing to put a price on it and when they do, they normally say to budget around $30,000.

I say - It depends!


Rising in the store charts


Are you an app developer?
Do you dream of getting your app high in the charts and all the downloads that will bring?
Maybe you think the new and rising category is a reasonable place to hope to be. If you make something good and it's popular that sounds like a realistic goal, doesn't it?

There are a lot of people monitoring what happens with the iOS App Store and the Google Play Store. The Windows app store seems to get overlooked. So, for the last few months, I've been tracking what's been going on in it and have made some interesting discoveries.

Background: Each day I've been recording the first page of results (top 72 entries) for each of the different device, type, and chart combinations in the US store.

There are lots of interesting things in the data, but let's start simple.

The "new and rising" chart is the most interesting to me for a couple of reasons.

  1. Developers tend to think that's where they have a chance of getting their app listed if it's well received.
  2. It sounds like it should be a good place to go for anyone wanting to find new, interesting apps. I'd expect to be able to return there regularly and find new apps that might be relevant.

Here are four stats based on what happened in the "new and rising" chart for mobile apps in the US store between March 1st and June 30th:

  1. Over the 122 days, there were 202 new entries in the chart. That's an average of 1.66 new apps each day. (Mode and Median are both 1.) On 22 of those days (18%) there were no new entries.

  2. Of the apps that entered the chart during that time, 65 were re-entries. That is, they entered, left, and re-entered; some entered three or four times. Subtracting re-entries leaves 137 distinct new apps, which, added to the 72 in the chart on day one, means there were 209 different apps listed in the chart over the 4 month period.

  3. Apps that entered the top of the chart were there for an average of 39.35 days. (Mode 1 and Median 14.)

  4. There were only ten apps in the chart on the first day of the period in question that weren't also there on the last day. Yes, that means 62 of the top 72 were still in the "new and rising" chart after 4 months.
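The derived figures above can be double-checked with a little arithmetic (a sketch using the numbers quoted in the post):

```python
days = 122              # March 1st to June 30th inclusive
entries = 202           # chart entries over the period
re_entries = 65         # entries by apps that had already charted
day_one_apps = 72       # apps on the first page on day one

avg_entries_per_day = entries / days
distinct_new_apps = entries - re_entries
distinct_apps_total = distinct_new_apps + day_one_apps
avg_new_apps_per_day = distinct_new_apps / days

print(round(avg_entries_per_day, 2))   # 1.66 entries a day
print(distinct_apps_total)             # 209 different apps
print(round(avg_new_apps_per_day, 1))  # 1.1 genuinely new apps a day
```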

Some of those figures are very surprising. But, what can we learn from them?

For developers, I think this means you need to be realistic about the chances of breaking into this chart easily. On average, only about 1.1 genuinely new apps (excluding re-entries) make it into the "new and rising" chart each day.

For anyone looking to find new apps that might be of interest, the "new and rising" chart is unlikely to be a good place to look.

As this isn't a good place for people to find new apps, developers also need to promote their apps elsewhere rather than relying on the charts alone.

That the charts aren't a good place to find new apps may indicate an opportunity for providing somewhere else that is. I'll leave this to your imagination for now.

For me, it makes me unsure what "new" and "rising" really mean. Most of the apps listed in that chart appear to be old and stationary.


Is this interesting?
Would you like to see more analysis of other charts and time periods?
Let me know.


Thursday, July 07, 2016

Projects and products in the Microsoft world

This was explained to me recently by someone at Microsoft and I thought it was worth sharing as it's a useful distinction.

Microsoft create a lot of "projects". You might recognise some of these:
murphy | astoria | oxford | centennial | islandwood | madeira | natick | spark | scorpio | florence | bletchley | spartan | rome | westminster | natal

What's important to note about these projects is that they are not fully fledged products.

Projects are not final. They are experiments or explorations. They come with no guarantees about lifetime, support or stability. They may be around for a long time. They may get cancelled early. They may turn into actual products.

Products are supported and have a future. (And maybe a price tag.)
The lesson is to maybe not pin all your hopes and plans on something that's still a project.