• Hosted desktop, what’s the point?

    I keep coming across small businesses that are buying into hosted desktop services because someone told them it was a good idea.

    What am I missing? I really don’t get the value this adds. Quite the reverse: I look on in incredulity at the self-inflicted pain of a user trying to run YouTube in a web browser on a “hosted desktop” service, streamed back to their local desktop.

    Of course there is short-term value in rolling out thin-client hardware and managing central hosted desktops for some larger organisations. These are the poor folks who, through inertia, regulatory pressures, or just downright lack of imagination, are stuck running labyrinthine suites of legacy Windows executable software conceived in a previous era. Presenting and maintaining a complex configuration of cranky desktop software, often customised on a per-role basis, is probably best done by turning the whole steaming heap into a hosted service in its own right, consumed from a simplified desktop. I get that.

    What I really don’t understand is why more agile, otherwise unencumbered businesses are being sold desktop PCs on which they then consume hosted desktops, on which they then consume other cloud services over the Internet. What is the point of that in 2015? Why not have them consume the hosted web-based services directly on a local thin desktop? When I ask this question of the folks selling this stuff I get a lot of hand-waving about access control, security and central configuration, none of it at all convincing. As far as I can see the value is a bit asymmetric, as in: “it is our only way of getting a slice of the x per seat per month market; after all, if we don’t sell customers a hosted desktop then all the valuable recurring revenue just goes to Microsoft, Google et al. and we are left in the commodity business of swapping out dead mice and keyboards”.

    That’s all very well, but at some stage the SME customer is going to work out that rendering the UI of their largely web-based and increasingly media-rich applications through some external server, only slower than if they consumed the service directly in a local client, is the reverse of adding value.

    There must be some compelling pain point that these services resolve to make them saleable. So, folks who are deploying them: what is the big enhancement they bring?

  • When will Metcalfe’s law kill the Telephone?

    Metcalfe’s law is a perfect fit for explaining business phenomena like how WhatsApp built a $19bn valuation last year and, in doing so, removed an estimated $30bn a year of global SMS messaging revenue from carriers within just a few weeks of the application achieving critical mass.

    Bob Metcalfe is a pretty smart guy: he invented Ethernet, founded one-time industry giant 3Com (now part of HP), and proposed an equation that estimates the value of any communication network based on the number of participants. The formula he came up with was n(n-1)/2, the number of possible pairwise connections between n participants, which in English means: the value of a communication network is roughly proportional to the square of the number of participants. Various folks have proposed tweaks to this over the years, and of course it only gives the relative value of a smaller versus a larger number of participants on the same platform. It doesn’t deliver a quantitative monetary value without knowing a lot more about what the network facilitates, and it assumes all nodes have the same value. It is, however, widely accepted as the rule that explains how networks grow rapidly, almost exponentially, in desirability and value once they achieve a critical mass. It’s why WhatsApp and iMessage get such a big share of the messaging “market” when there are hundreds of other messaging apps that do the same thing. This is all pretty intuitive stuff really; we don’t need a mathematical equation to model the fact that I’m going to choose a messaging app where I can find most of my friends.
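    The arithmetic behind that intuition is easy to sketch. Here is a minimal illustration in TypeScript (the participant numbers are purely illustrative): doubling the user base roughly quadruples the relative value.

    ```typescript
    // Relative Metcalfe value of a network: the number of possible
    // pairwise connections between n participants.
    function metcalfeValue(n: number): number {
      return (n * (n - 1)) / 2;
    }

    // Doubling the user base roughly quadruples the relative value.
    const small = metcalfeValue(1_000_000); // ~5.0e11
    const large = metcalfeValue(2_000_000); // ~2.0e12
    console.log(large / small);             // ~4: quadratic growth in action
    ```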

    As the cannibalisation of SMS shows, Metcalfe’s law isn’t all about growth inflexion points. Curves that go up rapidly can come down again pretty quickly too. Here’s the interesting bit though: I don’t think it is as simple as a bidirectional application of Metcalfe would imply. I’m sure there is lots of hysteresis, or rather a different equation, on the way back down.

    When a technology with real users has been incumbent for a long time, a ‘long tail’ emerges as numbers on the network reduce to a successively harder core of highly entrenched end-users. Objectively Metcalfe’s law still applies, but I suspect that as a network shrinks there is an extra factor in the perceived value: an individual bias based on the length of time each user has habitually chosen the technology. And it isn’t just the established preferences of individuals behind the inertia that established networks have; there are (only occasionally rational) economic reasons for larger entities to continue to use a network they have made a substantial capital or organisational investment in. This means that the decline of things like telephone calls probably won’t be the mirror image of the exponential growth new communication networks enjoy.
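    One way to picture that asymmetry, purely as a sketch: weight the raw Metcalfe value by a tenure-based inertia term. The bias factor and its weighting below are my own assumptions for illustration, not anything Metcalfe proposed.

    ```typescript
    // Illustrative only: perceived value of a shrinking network where the
    // remaining users carry an inertia bias proportional to their tenure.
    // The bias factor and its weighting are assumptions, not Metcalfe's law.
    function perceivedValue(
      n: number,
      meanTenureYears: number,
      bias = 0.1
    ): number {
      const metcalfe = (n * (n - 1)) / 2;
      // Entrenched users over-weight the network they have always used, so
      // perceived value decays more slowly than the raw curve on the way down.
      return metcalfe * (1 + bias * meanTenureYears);
    }
    ```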

    There is now no real doubt that use of the public switched telephone network for primary communication is declining across the board, and it has probably never existed at all for the youngest generation about to enter the workplace. Will this fall reach a certain inflection point and then drop off a cliff, as Metcalfe’s law would imply, or will we see a slow, lingering decline over many decades? What is your view?

  • Is the future of WebRTC in IoT?

    If conference titles are anything to go by, IoT and WebRTC are both big things at the moment. If you look at hackathon output, WebRTC and IoT seem practically coincident (disclaimer: at my day job, we did a drone WebRTC hack at TadHack London this year). Everyone is talking about WebRTC and IoT in the same sentence, but how relevant is WebRTC to real-life IoT applications?

    WebRTC is really just a clever collection of on-the-wire Internet protocols and driver implementations that capture high-quality real-time media streams and move them between peer devices reliably and securely. It scavenges suitable connectivity by adapting paths to the network environment it finds itself in, and does this securely by using mandatory cryptography and consent tokens. WebRTC is also a standardised JavaScript API (hence the Web bit), but the wire implementation and web browser interface are separable, to the extent that it is possible to use WebRTC fully as a transport with a browser at one end only, or even with no browsers involved at all.

    The media part of WebRTC sits behind an API called PeerConnection, which can be hooked to local media sources by a browser or application to send things like audio, camera output and screen captures to any peer device it can get packets to. Without going into too much boring detail, WebRTC includes lots of smarts to get those media streams between clients as reliably as is currently possible, through Internet roadblocks like NAT, firewalls and limited-bandwidth connections.
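    As a minimal browser-side sketch in TypeScript (assuming DOM/WebRTC typings; all signalling is omitted here and comes up below), hooking a camera to a PeerConnection looks something like this:

    ```typescript
    // Minimal sketch: capture local media and hand it to a PeerConnection.
    const pc = new RTCPeerConnection();

    async function startCall(): Promise<void> {
      const stream = await navigator.mediaDevices.getUserMedia({
        audio: true,
        video: true,
      });

      // Each captured track is handed to WebRTC for transport to the peer.
      for (const track of stream.getTracks()) {
        pc.addTrack(track, stream);
      }

      // Create an offer; it must reach the peer via your own signalling channel.
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
    }
    ```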

    On top of PeerConnection, WebRTC also has DataChannels, which can be used to move blobs of data for things like text, binary files etc. around using the same “go anywhere” connections.
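    Continuing the sketch above, a DataChannel rides on the same PeerConnection (the “telemetry” label is an arbitrary example):

    ```typescript
    // A DataChannel on the same PeerConnection, usable for text, JSON or
    // binary blobs over the same NAT-traversing transport as the media.
    const channel = pc.createDataChannel("telemetry");

    channel.onopen = () => {
      channel.send(JSON.stringify({ type: "hello", ts: Date.now() }));
    };

    channel.onmessage = (event) => {
      console.log("peer said:", event.data);
    };
    ```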

    Security is a big part of WebRTC, and as long as the control channel you are using is secure it is very hard indeed for a third party to snoop on either the peer-to-peer media or the data.

    This point about the control channel is important: WebRTC doesn’t in any way help with how the two peers find each other in the first place, or indeed how they communicate all of the information that allows them to subsequently set up a PeerConnection. To do this, a signalling protocol is needed, and that is something WebRTC stays out of. The underlying application architecture has to provide this signalling between the two ends in order to bootstrap the WebRTC connection. Typically this is done by having both parties to a WebRTC session contact a third-party server in the first instance to log their connection request and pass parameters to each other via this trusted third party. The signalling protocol and associated servers also typically handle all of the authentication and pass the tokens that allow the communicating devices to know they are talking to the right party. Providing a secure signalling protocol is therefore key to the operation of WebRTC. If it isn’t there, PeerConnections can’t be established, and if it is insecure then any security or confidentiality of the subsequent WebRTC conversations is an illusion.
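    In code, that bootstrap might look something like the following sketch, carrying on from the PeerConnection above (the wss:// URL and JSON message shapes are placeholders, not any standard):

    ```typescript
    // Both peers first connect to a rendezvous server and relay session
    // descriptions and ICE candidates through it until the PeerConnection
    // is up. The server URL and message format are illustrative.
    const signalling = new WebSocket("wss://signal.example.com/session/1234");

    // Ship our ICE candidates to the peer via the trusted server.
    pc.onicecandidate = (event) => {
      if (event.candidate) {
        signalling.send(
          JSON.stringify({ type: "candidate", candidate: event.candidate })
        );
      }
    };

    // Apply whatever the other side relays back.
    signalling.onmessage = async (event) => {
      const msg = JSON.parse(event.data);
      if (msg.type === "answer") {
        await pc.setRemoteDescription(msg.answer);
      } else if (msg.type === "candidate") {
        await pc.addIceCandidate(msg.candidate);
      }
    };
    ```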

    The “NAT busting” features of WebRTC, combined with its inherent security, may make it a suitable technology for communication between legions of small IoT devices on diverse scavenged connectivity. WebRTC is clever: if two devices share the same connectivity it will find the best connection that links them, for example using the local LAN to pass data rather than communicating all the way back to a central server. If devices are behind simple NAT routers then its protocols can find ways of engineering two-way direct communication through them and, if all else fails, it will fall back to centralised relay servers as a last resort. The latter are expensive to provide at scale and therefore unattractive to application providers, but a lot better than no communication in circumstances where this is the only way things will work!
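    That connectivity ladder is expressed through ICE server configuration. A sketch (the STUN/TURN URLs and credentials are placeholders):

    ```typescript
    // ICE tries direct and STUN-derived (NAT-traversed) candidate pairs
    // first; the TURN relay is only selected when nothing else works.
    const iotPeer = new RTCPeerConnection({
      iceServers: [
        { urls: "stun:stun.example.com:3478" },
        {
          urls: "turn:turn.example.com:3478",
          username: "device-42",
          credential: "not-a-real-secret",
        },
      ],
    });
    ```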

    It is, however, a great big sledgehammer and, because of all the code that does network probing, its behaviour is dynamic and adaptive, which means it works very well a lot of the time but is really quite hard to debug when it doesn’t. If all you need is a bit of command and control, or a transfer mechanism for any amount of non-media data to central servers, then it is highly unlikely that WebRTC will add any value to your IoT communication architecture.

    Remember that signalling protocol? In order to use WebRTC you need to have established a secure, asynchronous communication mechanism between all of your clients and central servers anyway. In 2015 this is likely to be WebSockets, or perhaps a raw TCP connection or a UDP transaction protocol. If your application needs to pass any amount of data client/server, or small amounts of non-timely client/client data, then you don’t actually need the overhead of WebRTC; just send your peer data directly via the signalling server.
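    A sketch of that simpler approach, using nothing but the WebSocket you already have (the endpoint and message shapes are again illustrative):

    ```typescript
    // Command and control over the signalling channel alone: the secure
    // WebSocket is already there, so no PeerConnection is required.
    const control = new WebSocket("wss://iot.example.com/devices/42");

    control.onopen = () => {
      // Report state upstream over the same socket.
      control.send(JSON.stringify({ type: "status", battery: 87 }));
    };

    control.onmessage = (event) => {
      const command = JSON.parse(event.data);
      if (command.type === "reboot") {
        // ...act on the command locally
      }
    };
    ```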

    The only time WebRTC starts to earn its lunch is when you need to pass non-trivial amounts of data peer to peer without hitting central servers, or need to pass bandwidth-hungry real-time media streams across the Internet. IoT devices with video cameras will certainly have a use for WebRTC, but would you incorporate DataChannel just to do command and control when you already need a signalling protocol in place to bootstrap it? As a party piece at a hackathon, maybe; in real life, perhaps not.
