Category Archives: networking

Hosted desktop, what’s the point?

I keep coming across small businesses that are buying into hosted desktop services because someone told them it was a good idea.

What am I missing? I really don’t get the value this adds. Quite the reverse: I look on in incredulity at the self-inflicted pain of a user trying to run YouTube in a web browser on a “hosted desktop” service streamed back to their local desktop.

Of course there is short-term value in rolling out thin-client hardware and managing centrally hosted desktops for some larger organisations. These are the poor folks who, through inertia, regulatory pressure, or just a downright lack of imagination, are stuck running labyrinthine suites of legacy Windows executable software conceived in a previous era. Presenting and maintaining a complex configuration of cranky desktop software, often customised on a per-user-role basis, is probably best done by turning the whole steaming heap into a hosted service in its own right, consumed from a simplified desktop. I get that.

What I really don’t understand is why more agile, otherwise unencumbered businesses are being sold desktop PCs on which they then consume hosted desktops, on which they then consume other cloud services over the Internet. What is the point of that in 2015? Why not have them consume the hosted web-based services directly on a local thin desktop? When I ask this question of the folks selling this stuff I get a lot of hand-waving about access control, security, and central configuration, none of it at all convincing. As far as I can see the value is somewhat asymmetric, as in: “it is our only way of getting a slice of the x per seat per month market; after all, if we don’t sell customers a hosted desktop then all the valuable recurring revenue just goes to Microsoft, Google et al. and we are left in the commodity business of swapping out dead mice and keyboards”.

That’s all very well, but at some stage the SME customer is going to work out that rendering the UI of their largely web-based and increasingly media-rich applications through some external server, only slower than if they consumed the service directly in a local client, is the reverse of adding value.

There must be some compelling pain point that these services resolve to make them saleable. So, folks who are deploying them: what is the big enhancement they bring?

When will Metcalfe’s law kill the telephone?

Metcalfe’s law is a perfect fit for explaining business phenomena like how WhatsApp built a $19bn valuation for themselves last year and, in doing so, removed an estimated $30bn a year of global SMS messaging revenue from carriers within just a few weeks of the application achieving critical mass.

Bob Metcalfe is a pretty smart guy: he invented Ethernet, founded one-time industry giant 3Com (now part of HP), and proposed an equation that estimates the value of any communication network based on the number of participants. The value he came up with was n(n-1)/2, the number of possible pairwise connections, which in plain English means: the value of a communication network is proportional to the square of the number of participants. Various folks have proposed tweaks to this over the years, and of course it only gives the relative value of a small number of participants versus a larger number on the same platform. It doesn’t deliver a quantitative monetary value without knowing a lot more about what the network facilitates, and it assumes all nodes have the same value. It is, however, widely accepted as the rule that explains how networks grow rapidly (quadratically) in desirability and value once they achieve a critical mass. It’s why WhatsApp and iMessage get such a big share of the messaging “market” when there are hundreds of other messaging apps that do the same thing. This is all pretty intuitive stuff really: we don’t need a mathematical equation to model the fact that I’m going to choose a messaging app where I can find most of my friends.
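
To make the shape of this concrete, here is a minimal sketch in Python (the function name and sample sizes are mine, purely for illustration) of how the pairwise connection count runs away as a network grows:

```python
# Metcalfe's law: the number of possible pairwise connections in a
# network of n participants is n(n-1)/2, which grows with n squared.
def metcalfe_value(n: int) -> int:
    """Distinct pairwise links in a network of n nodes."""
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} users -> {metcalfe_value(n):>12,} possible connections")
```

Ten times the users means roughly a hundred times the possible connections, which is the intuition behind critical mass.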

As the cannibalisation of SMS shows, Metcalfe’s law isn’t all about growth inflexion points. Curves that go up rapidly can come down again pretty quickly too. Here’s the interesting bit though: I don’t think it is as simple as a bidirectional application of Metcalfe would imply. I’m sure there is plenty of hysteresis, or rather a different equation, on the way back down.

When a technology with real users has been incumbent for a long time, a ‘long tail’ develops as the numbers on the network reduce to a successively harder core of highly entrenched end-users. Objectively, Metcalfe’s law still applies, but I suspect that as a network shrinks there is an extra factor in the value perception: an individual bias based on the length of time each user has habitually chosen the technology. It isn’t just the established preferences of individuals that give established networks their inertia; there are (only occasionally rational) economic reasons for larger entities to continue using a network in which they have made a substantial capital or organisational investment. This means the decline of things like telephone calls probably won’t be the mirror image of the rapid growth new communication networks enjoy.
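
Purely to illustrate that speculation, a toy model might scale the plain Metcalfe term by an inertia factor; the factor and its constant are my own invention, not anything Metcalfe proposed:

```python
# Hypothetical sketch: perceived value on the way down is the raw
# connection count scaled by an inertia term that grows with how long
# the remaining users have habitually used the technology.
def perceived_value(n: int, avg_tenure_years: float, k: float = 0.1) -> float:
    connections = n * (n - 1) / 2          # plain Metcalfe term
    inertia = 1 + k * avg_tenure_years     # entrenched users value it more
    return connections * inertia

# A shrinking network of long-tenured users can still "feel" far more
# valuable than a young network of the same size:
print(perceived_value(1_000, avg_tenure_years=40))  # hard-core phone users
print(perceived_value(1_000, avg_tenure_years=1))   # same size, new app
```

If anything like that holds, the decline flattens into a long tail instead of mirroring the steep climb on the way up.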

There is now no real doubt that use of the public switched telephone network for primary communication is declining across the board, and it has probably never existed at all for the youngest generation, who are about to enter the workplace. Will this fall reach a certain inflection point and then drop off a cliff, as Metcalfe’s law would imply, or will we see a slow, lingering decline over many decades? What is your view?

Google Hangouts and XMPP – is cloud harming the Internet?

Nearly a week ago Google announced that their new messaging app, Hangouts, would not support exchanging messages with users of other systems via XMPP. My first thoughts were that Google dropping XMPP from their new messaging platform is a big deal, definitely bad for customers, and may or may not be a good idea when looked at from their own perspective. Since then much smarter people than I have written many more words about this, and I don’t need to re-hash those, but I do think there is a bigger picture here.
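
For anyone who hasn’t used it, the point of XMPP here is federation: a user on one provider can message a user on a completely different one. Here is a minimal sketch using the slixmpp Python library; the accounts, password, and hostnames are hypothetical placeholders:

```python
# Sketch of XMPP federation with slixmpp: alice's server relays the
# message to bob's server over standard server-to-server XMPP, so the
# two users don't need accounts with the same provider.
from slixmpp import ClientXMPP

class FederatedHello(ClientXMPP):
    def __init__(self, jid, password, recipient):
        super().__init__(jid, password)
        self.recipient = recipient
        self.add_event_handler("session_start", self.start)

    async def start(self, event):
        self.send_presence()
        await self.get_roster()
        self.send_message(mto=self.recipient,
                          mbody="Hello across providers!",
                          mtype="chat")
        self.disconnect()

xmpp = FederatedHello("alice@example.org", "secret", "bob@another.example")
xmpp.connect()
xmpp.process(forever=False)
```

Dropping XMPP removes exactly this property: Hangouts users will only be able to reach other Hangouts users.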

Google have a track record of making a positive contribution to our industry; they employ smart people and have historically encouraged them to act with technical integrity rather than suppressing it for short-term product-marketing goals. This approach generally produces good outcomes. I think they are doing the right thing, for example, with WebRTC and their attempts to ensure the next generation of Internet communication infrastructure is built out of open technology in spite of spoilers from patent trolls.

No matter how noble individuals are and how much latitude they are given, like any company, when push comes to shove they have to act in what they perceive to be the best interests of their business. I suspect that Google’s Hangouts team didn’t wake up one day and collectively think: “I know, let’s throw a spanner in the works to stop everyone leeching off our messaging service”. More likely they faced a tough engineering call because they couldn’t get some neat killer feature to work well within an XMPP architecture (my bet is on the conversation “watermarks”). The choice was probably between a killer feature they thought would help them achieve consumer market domination for their cloud application, and a key interop protocol that would improve the utility of this class of application across all vendors. The former clearly won.

The problem here is that in a cloud environment where total dominance by one service provider is possible, there are only limited commercial incentives to play nicely. If a provider thinks its communication service can achieve critical mass, then its own user interface becomes the only interop point it needs to care about, and communication over the public network is demoted to the role of delivering eyeballs. This promotes a form of collapsed-backbone architecture for applications, where the communication actually takes place inside the core of proprietary implementations rather than on a distributed basis.

Nobody is surprised when a context-free startup with megalomaniac plans pushes this kind of architecture, but the fact that Google have found this logic inescapable may be a key marker of a shift in direction, one that could reverse decades’ worth of progress in the communications industry.

As an example from history, SMTP achieved traction because vendor e-mail implementations needed to exchange messages in order to build a network of usable diameter. Even back then there were what would now be called “hosted” or “cloud” implementations (CompuServe, anyone?) as well as “premise-based” solutions. Eventually the hosted solutions that were holding out for market domination had to give in and admit that being an island is untenable when the service you are providing is communication.
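
The same open-protocol pattern survives in every standard library today: a sender speaks one protocol and the recipient can be on any domain. A minimal sketch with Python’s smtplib (the hostnames and addresses are made-up placeholders):

```python
# SMTP interop in miniature: one open protocol, any recipient domain,
# whether the far end is hosted or premise-based.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@premise-based.example"
msg["To"] = "bob@hosted.example"
msg["Subject"] = "Interop, 1980s style"
msg.set_content("No walled garden required.")

# Real delivery would look up the recipient domain's MX record and speak
# SMTP to that server directly; here we simply submit via a local relay.
with smtplib.SMTP("smtp.premise-based.example") as smtp:
    smtp.send_message(msg)
```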

After many years of consolidating this shift to open federation and interoperability as the default way of doing things, it looks like those assumptions are now being re-tested in a very different commercial landscape.

Customers may have to work hard if they want today’s dominant cloud providers to come to the same conclusions about the benefits of open federation. I don’t think it is being dramatic to say that we stand to lose the “inter” bit of the Internet if they fail to do so.
