Thursday, February 04, 2016

I think you're thinking about Continuum all wrong

tl;dr: developers and press: be aware that not everyone in the world is like you, and not every product is designed for you. Continuum for phones is almost certainly one such product.
[Image: phone connected to a desktop monitor - from https://www.microsoft.com/en-gb/windows/Continuum as they have no press shots of this setup]

If you aren't aware, Continuum for Windows 10 Mobile is a way to connect your phone to a monitor and use the apps on that monitor as if it were a PC. More on Microsoft's site.

It takes advantage of the adaptable and responsive nature of the user interface in Universal Windows Apps that can be built for Windows 10.
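
As a rough illustration (my own sketch, not anything from Microsoft's Continuum documentation), a Universal Windows App page can react to the size of the display it finds itself on. The "NarrowState"/"WideState" state names and the 720-pixel breakpoint below are made up for the example; SizeChanged and VisualStateManager.GoToState are the standard UWP pieces.

```csharp
// Minimal sketch of a responsive UWP page. It assumes a matching MainPage.xaml
// that defines "NarrowState" and "WideState" visual states; the 720px
// breakpoint is just an illustrative value.
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public sealed partial class MainPage : Page
{
    public MainPage()
    {
        InitializeComponent();

        // Re-evaluate the layout whenever the available size changes,
        // e.g. when the phone starts driving a monitor via Continuum.
        SizeChanged += (s, e) =>
        {
            var state = e.NewSize.Width >= 720 ? "WideState" : "NarrowState";
            VisualStateManager.GoToState(this, state, useTransitions: true);
        };
    }
}
```

The same page can therefore present a phone-friendly layout on the device and a more desktop-like one when projected to a big screen.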

The idea is appreciated as novel but has come under some criticism:

  • It's only available on expensive, "high-end" devices.
  • It requires a special dock.
  • It doesn't work with all apps.
  • "If I go anywhere where I might need to work on something where a large screen is beneficial or required I'd have my laptop with me anyway."
  • Even if you could use it with, say, a monitor/TV in a hotel room, you'd need to take a dock, keyboard, mouse and related cables with you, so it would be easier to just take a laptop.

Because of these arguments, the concept is often dismissed as interesting but not really relevant by the people who talk about such things: people who are often developers or members of the tech press who focus on [Windows] phones and devices.

Here's the thing though. What if it isn't intended for them as an audience?

Indulge me for a minute.
As a feature, the functionality necessary for this is probably baked deep into the core of Windows 10.
Work on Windows 10 probably started about 18 months to 2 years ago.
Around that timeframe (I know it was a long time ago but trust me, I remember) there was much discussion and speculation that the opportunity for Windows Phone lay in "emerging markets" and with the so-called "next billion". These people would own a smartphone as their first phone and probably their first computing device.

Do you see where we are?
We have people considering their first computing device.
These people have no historic commitment to an existing ecosystem and will be more willing to decide among all providers/options.
Even if they do only buy a phone, they will probably be aware that some computing functions are easier and preferable to do on a larger screen.
Microsoft are offering a unique differentiator that appeals to and brings extra value to a new market in ways that the competition doesn't.

But it's so expensive. How could this be for emerging markets?
I suspect it's expensive because it's new. As we've seen time and time again, technological solutions get cheaper over time. Expect this in a wider range of cheaper phones in the not-too-distant future. The need for a special dock will also likely go away before long.

But everything is in the cloud and computers are constantly getting cheaper?
Yes.
But a slightly higher priced phone may still be cheaper than a separate PC/laptop.
It should not even be necessary to buy a separate monitor, as a TV could be used, so the only extra expense is a keyboard and mouse, and those can be obtained very cheaply.
Even when "everything" is in the cloud there are still benefits to having some types of content stored locally. Especially if you're in a part of the world where connectivity isn't as cheap, reliable or available.
As more services are delivered to the web first, being able to use any service/site on a large monitor may be preferable. Especially if there isn't a dedicated app--yet.

If that's the case, why try to sell the feature with high-end devices in mature markets?
If the high end devices are currently required due to the early stages of the technology involved, then that's the only market available.
These are also the devices that developers use. If the hope is that, when cheaper devices come to these new markets, they will have this functionality, it will be important that apps have already been built to work with it. Due to the lead times of app development and updates, that may mean starting to build and update such apps now.
If this is the plan, Microsoft need developers to get into the habit of considering large-screen display for their mobile apps now. While the story about using devices in hotel rooms may not be a great one, it might be enough to get the ball rolling. Plus, if developers have to design their UI to work on both small and large screens, it might help them think about everything in between too.

Is that all?
Well, there's another possible benefit to all this too.
Windows app developers have traditionally been much more focused on phones than on desktops and tablets. Perhaps now, if they have to consider a desktop-style interface layout for their phone/mobile apps, the fact that little extra effort is then needed to get the app working and available on desktop will encourage them to make it available there.
I know Windows on phones has suffered from an "app gap" but it's even worse on the desktop. Creating more apps that work on both desktop and mobile can only be a good thing for the platform, consumers and ultimately developers.


Well, that's what I think. Does this seem reasonable?
Or do you have other ideas?


**Update**
(Because Twitter makes it almost impossible to get across a complex response.)

My argument is twofold.

Firstly, it's about price. Yes, YOU could buy a cheap ($99) tablet or a low-priced laptop. But even that can be very expensive when a person can only earn the equivalent of a few dollars a day. Prices will come down and wages will go up, but I admit this is the biggest argument against my above reasoning. If this isn't a move to compete against the cost of owning multiple devices, then why create this feature?

Secondly, having more devices isn't always better. This can be hard to understand when we are used to having lots of devices and using them for specialised tasks. Most consumers' computer-based tasks aren't all that specialised though. If you're a developer then yes, you absolutely need a high-powered computer to complete your tasks, and it is perfectly reasonable to be unable to imagine a scenario where a phone could be powerful enough for all your computing needs. In places where people aren't already using other computing devices this isn't the case. If a phone can meet most of your computing needs, it's much simpler to have everything in that one place and, on the rare occasion you need something more, to plug in a bigger screen to work on an office document or view content in a browser. Compare this with a secondary device that is only used for the occasional task. I can easily imagine a scenario which goes along the lines of:
  • Where is it?
  • Ok, found it, now is it charged?
  • Oh, the OS/app needs an update.
  • Oh, I first need to reconnect and sync content.
  • And so on...
Compare this with just plugging in and already being set up and ready to go.

For the same amount of money (currently) you could have multiple devices that allow the same result, only at a cost to the smoothness of the experience.
Ah, you may say, but if they had a tablet as well they would find other uses for, and value from, it too.
That's a fair point and I'm sure they would. But it ignores my prediction that the cost difference will become much smaller by the time devices are available to them. And again, if this does nullify my conjecture about why Continuum was created and who it is ultimately for, what is the reason for its existence? (Assuming that there is a reason and Microsoft didn't do it on a whim, with no end users in mind and no market research to justify the investment in cost and development.)
Just having more devices isn't automatically a better thing if there are practical and cognitive costs to switching between them. Having more technological devices also has a cost to the environment through the resources they require.


**Update 2**

Why does this even matter?

I assume Continuum was created because within Microsoft a need or opportunity was identified and Continuum for phones was seen as a way of meeting that need and benefiting from that opportunity. I also assume that much work was done to validate the need and confirm that Continuum for phones would meet it in a beneficial way.

Whether Continuum is a success or not (however that is defined) doesn't really matter to me. I'm much more interested in learning about the market, how it will develop, what future needs there will be and how they might be met. Understanding the specific need may help me better meet it and see what other opportunities it may point to.
I'm not convinced that the perceived need was for a world where people would be hot-desking with their phones as their only computer, and that the best way to prepare for that was to start marketing to people who are already carrying multiple devices. I may be wrong but it'd be good to know.


Wednesday, January 13, 2016

Promote your Windows 10 app with AppRaisin - here's why


Have a look at the above graph. It shows the number of downloads for a Windows 10 Mobile app. The app had had no promotion at this point; it was "soft-launched", ready for a big promotional push at the end of January.
There was one place where this app had been mentioned publicly though. On the 8th of January it was included in AppRaisin. Notice what happened on that day--a big spike in downloads.

AppRaisin is a Windows 10 app that helps people discover new Windows 10 apps and games. Anyone can submit an app when it is newly released or updated. This includes the people who have created the app.

Have you built, or are you building, an app for Windows 10? Want more people to hear about it (and hopefully download it)? Then what have you got to lose by submitting it to AppRaisin?





Tuesday, January 12, 2016

5 tips on working with technical debt

After listening to a recent .Net Rocks show about Technical Debt, I thought I'd share some tips I have on working with technical debt.

I worry if "Technical debt" is just a label we put on things so we don't have to think about them. e.g. "We know this will create technical debt and we can come back to it at some point in the future." This doesn't require that we do the best to mitigate the consequences of that debt now though.

At its simplest, technical debt is just a name for the consequences or side effects of the decisions that are made (intentionally or accidentally) when developing software. Dealing with technical debt therefore comes down to dealing with the consequences of past decisions. Fortunately, there are lots of ways of developing software that make it easier to deal with these consequences and lessen their impact.
  1. Have automated tests for the system at a high level.
    Regardless of what you call them, you need tests that verify that the application/system works at a high level, not just at a method/class level. When it is possible to verify that the system does what it should, it removes the risks associated with changing how it does it: changing the code to remove the debt without breaking the overall functionality. This is key to avoiding regressions and negative side effects of addressing debt. (There's a sketch of what I mean by a high-level test after this list.)
  2. Remember the boy scout rule. (Leave the camp/code better than you found it.) If, while working on something, you find some existing code that needs improving, and you can improve it without it being a major distraction from the task that was your original focus, you should make such improvements. Of course, this depends on a working culture that allows code gardening.
  3. Allow code gardening.
    Just as with a real garden, where you might tidy up some leaves, pull up a weed or prune a rogue branch from a hedge as part of general maintenance, there are general maintenance tasks that can also be done to code. This is very similar to the boy scout rule but with an important difference: it must be allowed for code unrelated to what you're supposed to be working on. Suppose you were working on Feature Y and it involved modifying Class X. With the boy scout rule, you may make other modifications to Class X that improve it. With code gardening it's also potentially acceptable to make changes in the completely unrelated Class Z. Say you just happened to glance at Class Z and noticed an improvement you could easily make. If code gardening is allowed then you can improve it now. If code gardening is not allowed then the best that might happen is a bug gets raised to improve Class Z at some point in the future. The reality, though, is that it's a low-priority bug and so never makes it onto the schedule. The opportunity to make an incremental improvement to the software is missed and the slow degradation in overall quality continues. For this to be possible, small, frequent check-ins must be allowed (not a one-check-in-per-feature policy), along with a branching strategy that lets such small improvements be made easily in isolation.
  4. Enforce code reviews on non-trivial changes.
    Code reviews can alleviate the risk of a low bus factor on an area of code, or of code that no one working on a project wants to risk touching. Code reviews have a lot of benefits, but the important point here is that the person who signs off on a review agrees that they will be able to support the code in future, should the original author not be available. This is not about assigning responsibility or blame. It is about ensuring that the original code was clear enough that another developer can understand what has been created. If no one is willing to sign off on a review because they wouldn't be able to support maintaining it, improve the code until someone agrees that they could.
  5. Use "why-and" comments in code to explain intentional debt.
    Comments in code are often a bad thing. All things being equal, code should be self-explanatory: you shouldn't need comments to explain what a piece of code is doing or how it is doing it. Where there is a genuine need for comments, though, is when it isn't clear why a piece of code was written the way it was. From a business perspective, sometimes deadlines are critical and you may have to put in a dirty hack to get something working in time. This is an example of where comments are useful. Just adding a comment that acknowledges a hack (or other form of technical debt) may help avoid people thinking badly of you in the future, but it isn't enough to actually help other developers or the project. In such a scenario you should add a comment that acknowledges WHY the intentional debt is there AND what should be done to address it. Addressing the debt may mean more time is needed or waiting on a dependency. The important thing is to provide information that will help when someone comes back to work on the area of the debt. The aim is to avoid someone looking at the code and asking "why is it like that?" and also to provide context and information that will help when someone does have to modify it. (There's a sketch of such a comment after this list.)
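
To make point 1 above a little more concrete, here's a minimal sketch of the sort of high-level test I mean, written with xUnit. OrderStore, OrderService and the stock numbers are all invented for the example; the point is that the assertion describes what the system does, not how it does it, so the internals can be reworked to pay down debt without the test changing.

```csharp
// Minimal sketch of a high-level (behavioural) test using xUnit.
// OrderStore and OrderService are toy stand-ins for your own system; the
// assertion is about observable behaviour ("placing an order reduces stock"),
// not about how the internals happen to be written today.
using System.Collections.Generic;
using Xunit;

public class OrderStore
{
    private readonly Dictionary<int, int> _stock = new Dictionary<int, int>();
    public void SetStock(int productId, int quantity) => _stock[productId] = quantity;
    public int AvailableStock(int productId) => _stock[productId];
    public void ReduceStock(int productId, int quantity) => _stock[productId] -= quantity;
}

public class OrderService
{
    private readonly OrderStore _store;
    public OrderService(OrderStore store) => _store = store;

    // However messily this ends up being implemented (or later refactored),
    // the test below only cares that stock goes down by the ordered amount.
    public void PlaceOrder(int productId, int quantity) => _store.ReduceStock(productId, quantity);
}

public class OrderProcessingTests
{
    [Fact]
    public void Placing_an_order_reduces_available_stock()
    {
        var store = new OrderStore();
        store.SetStock(productId: 1, quantity: 10);
        var service = new OrderService(store);

        service.PlaceOrder(productId: 1, quantity: 3);

        Assert.Equal(7, store.AvailableStock(productId: 1));
    }
}
```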
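
And for point 5, here's a sketch of what a "why-and" comment might look like. The discount rule, ticket number and pending supplier API are all made up for illustration; the shape of the comment (WHY the debt exists AND what should be done about it) is the bit that matters.

```csharp
// Sketch of a "why-and" comment around some intentional debt. Everything here
// (the discount rule, the ticket number, the supplier API) is invented purely
// to illustrate the shape of the comment.
public static class PricingCalculator
{
    public static decimal ApplyDiscount(decimal price, string customerType)
    {
        // WHY: The supplier's discount API wasn't ready before the March release
        //      deadline, so the "gold" discount rate is hard-coded here (ticket #1234).
        // AND: When the supplier API ships, replace this with a call to it and
        //      delete the hard-coded rate. Until then, any rate change means
        //      editing this file.
        if (customerType == "gold")
        {
            return price * 0.9m;
        }

        return price;
    }
}
```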


Just a few points but they've helped me in numerous projects in the past. Let me know what you think or if you have any other tips to make working with technical debt easier.



Tuesday, December 15, 2015

What if security in software isn't really an issue?

This is a thought exercise as much as anything. I do think that software security is massively important and more should be done about it.

[Image: anonymous hacker]

What if security in software isn't really an issue?

Yes, a few big names have been in the news recently but they must be the exception or it wouldn't be newsworthy. It's not like everyone is getting hacked all the time. Is it?

I don't worry too much about the security of my car. It doesn't have a fancy alarm and satellite tracking. There are many other cars that are easier to steal or are worth more to someone who wanted to steal a car. Isn't it the same with software?

Even if I, as a business owner, suffered a security breach, there wouldn't be any real consequences.
  • There may be some negative press for a short time--but all press is good press, isn't it?
  • There may be a small (relatively) financial consequence in terms of fines or legal bills.
  • No one of any note who has been hacked previously has suffered any major, long term, negative consequences.

It's said that education is the answer to solving software security issues but where's the motivation?
  • If there's no real consequence to security breaches, then why spend time and money educating people to prevent them?
  • If security isn't an issue, then we can get more developers into the industry faster, as that's one less thing new developers have to be taught.
  • It's not just a developer education problem. Even if developers knew how to make more secure software, they wouldn't always be given the time and resources to do so if their superiors don't think it's important, so you need to persuade the whole business of the importance of software security.
Trying to sell a solution to a technical problem (software security) that someone might not have yet, to a non-technical stakeholder (someone higher up in the business than a developer), can be tricky. In trying to persuade them to fix a problem they don't have now, you're selling risk/insurance:
"Let us spend more time now to prevent an issue that we might have at some point in the future."
This may or may not work based on political, financial or other business constraints.

Then there are issues of accountability, liability and due diligence.
If there is a security breach, who's responsible? The developer? Or the more senior person(s) in the company who didn't ensure developers had the time, knowledge and resources to do what's best for the company?
There's also no way to be certain you're secure. So how much effort should be put into having more security? When do you stop taking more time and expense to increase security, for an uncertain return?

Even the systems we have in place to try and ensure some level of security aren't brilliant. A few years ago (yes, noting that things may have changed in the intervening time) I was working on a website that had to go through a PCI compliance check. I was shocked at how little the check actually covered. Yes, it meant the site was not doing some bad things, but it doesn't mean it was doing only good things. The checks missed a lot of what I saw as possible security vulnerabilities--which I ensured were addressed anyway.

Let's just forget about all this though. Software security doesn't really matter as there are no real consequences to the business and the only people who seem to talk about it are developers pointing out what they think are the things the other developers didn't do or did wrong.


But wait, could capitalism solve this problem for us?
Education (of developers) is largely claimed to be the solution here, but is capitalism, not education, the way to get change? If more companies get hacked then insurance claims, and therefore premiums, will go up--eventually to a level which makes a difference to the company. At that point there will be incentives for being more secure, and even for proving it. If a company could do things to prove it was serious about preventing software security issues, it might then be able to get a discount on the related insurance.
What if a business could get cheaper insurance for software-related security issues by signing up to a service from a security company which would continuously be checking for breaches?

  • The insurers would benefit if they didn't find anything, as they'd be less likely to face related claims.
  • The insurers would benefit if they did find something, as they could put up the premium and hopefully the company could implement a fix before it was exploited, and so not have to make a claim.
  • The company would benefit if no vulnerabilities were found, as they'd pay lower premiums. Plus their user data and business continuity would be protected.
  • The company would benefit if something was found, as they'd have the opportunity to fix it before it was exploited.

Those doing the testing would be incentivized to find exploits and disincentivized from missing something that is later exploited by another party.

Could this be done now?
Unfortunately, I think not. It depends on the cost of having security experts work for the insurers, paid for (either directly or indirectly) by the companies taking out insurance.
Sadly, I think we'll need more exploits, pushing up insurance premiums further, before this becomes financially viable.

Things look like they will get worse before they get better. :(