
    All set…


[Map: Nightride Route]

for tonight’s 100K nightride around London. I’ve got a start time of 00:25 tomorrow morning, and I managed nearly 11 hours’ sleep last night (I may try to get another couple later) to prepare.

Now I just need to stick some new batteries in my bike lights and I think I’m all set.

Of course there is a point to all of this: I’m doing it to raise funds for The ToyBox Charity’s work with street children. I’m still a little below my target at the moment and would value any support I can get via my JustGiving page.


    Google Hangouts and XMPP – is cloud harming the Internet?

Nearly a week ago Google announced that their new messaging app, Hangouts, would not support exchanging messages with users of other systems via XMPP. My first thoughts on this were: Google dropping XMPP from their new messaging platform is a big deal, definitely bad for customers, and may or may not be a good idea when looked at from their own perspective. Since then much smarter people than I have written many more words about this and I don’t need to re-hash those, but I do think that there is a bigger picture here.
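For context, the interoperability being dropped is simple at the protocol level: XMPP servers federate by relaying XML stanzas addressed across domains. A minimal sketch of a cross-domain chat message, built with Python’s standard library (the addresses are illustrative, and a real stanza also carries an `id` and the stream-supplied `jabber:client` namespace):

```python
import xml.etree.ElementTree as ET

# Build a minimal XMPP chat stanza addressed across two domains.
# Federation means alice's server relays this to bob's server directly.
msg = ET.Element("message", {
    "from": "alice@gmail.com",   # sender on one provider
    "to": "bob@jabber.org",      # recipient on an unrelated, federated server
    "type": "chat",
})
ET.SubElement(msg, "body").text = "Hello across the federation"

print(ET.tostring(msg, encoding="unicode"))
```

Without federation, a message like this simply has nowhere to go once the recipient lives outside the provider’s own silo.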

Google have a track record of making a positive relative contribution to our industry; they employ smart people and historically appear to have encouraged them to act with technical integrity rather than suppressing this for short-term product marketing goals. This approach generally produces good outcomes. I think they are doing the right thing, for example, with WebRTC and their attempts to ensure the next generation of internet communication infrastructure is built out of open technology in spite of spoilers from patent trolls.

No matter how noble individuals are and how much latitude they are given, like any company, when push comes to shove they have to act in what they perceive to be the best interests of their business. I suspect that Google’s Hangouts team didn’t wake up one day and collectively think: “I know, let’s throw a spanner in the works to stop everyone leeching off our messaging service”. More likely they faced a tough engineering call because they couldn’t get some neat killer feature to work well within an XMPP architecture (my bet is on the conversation “watermarks”). The choice was probably: a killer feature which we think will help us achieve consumer market domination for our cloud application, versus a key interop protocol that will improve the utility of this class of application across all vendors. The former clearly won.

    The problem here is that in a cloud environment where total dominance of one service provider is possible, there are only limited commercial incentives to play nicely. If a provider thinks their communication service can achieve critical mass then its own user interface becomes the only interop point they need to care about and communication over the public network is demoted to the role of delivering eyeballs to them. This promotes a form of collapsed backbone architecture for applications where the communication actually takes place in the core of proprietary implementations rather than on a distributed basis.

Nobody is surprised if a context-free startup with megalomaniac plans pushes this kind of architecture, but the fact that Google have found this logic inescapable may be a key marker of a shift in direction that could be set to reverse decades’ worth of progress in the communications industry.

As an example from history, SMTP achieved traction because it was necessary for vendor e-mail implementations to exchange messages in order to build a network of a usable diameter. Even back then there were what would now be called “hosted” or “cloud” implementations (CompuServe, anyone?) but also “premise-based” solutions. Eventually the hosted solutions that were holding out for market domination had to give in and admit that being an island is untenable when the service you are providing is communication.

    After many years of consolidating this shift to open federation and interoperability as the default way of doing things, it looks like those assumptions are now being re-tested in a very different commercial landscape.

Customers may have to work hard if they want today’s dominant cloud providers to come to the same conclusions about the benefits of open federation. I don’t think it is being dramatic to say that we stand to lose the “inter” bit of the Internet if they fail to do so.


    Why the WebRTC video codec choice is important

Real-time peer-to-peer communication on the web has had a good couple of months. At the end of January Chrome and Firefox demonstrated interop of two independent WebRTC implementations, and then a couple of weeks ago Google squelched the MPEG LA attack on the open status of its key VP8 codec in a deal that granted them the full rights to any MPEG LA pool IP related to VP8 (if any ever existed) and used this to grant a free licence to other implementations.

Most sensible folks can see why it is necessary to agree on one mandatory video codec for the web which is openly available to all implementers. The next generation of applications need to be able to rely on endpoints talking the same language, which means that the technology chosen has to be universal.

The trouble is that nobody can agree on what this should be. Established players, especially those who own codec IP, want this to be H.264 as this works in their commercial favour and puts new entrants at a disadvantage. Others who see the need to build standards out of open, freely implementable technologies favour VP8, which was built from the ground up to avoid the patent thicket around H.264.
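The disagreement plays out concretely in SDP negotiation: each endpoint lists the codecs it supports as rtpmap attributes, and a call only works if the sets intersect, which is why a single mandatory-to-implement codec matters. A minimal sketch of reading the offered codecs out of an SDP fragment (the fragment itself is illustrative):

```python
import re

# A fragment of a hypothetical SDP offer advertising both contested codecs.
sdp = """m=video 9 UDP/TLS/RTP/SAVPF 96 97
a=rtpmap:96 VP8/90000
a=rtpmap:97 H264/90000
"""

def offered_video_codecs(sdp_text):
    """Return the codec names listed in the offer's rtpmap attributes."""
    return [m.group(1) for m in re.finditer(r"a=rtpmap:\d+ ([A-Za-z0-9]+)/", sdp_text)]

print(offered_video_codecs(sdp))  # ['VP8', 'H264']
```

If neither side is required to implement a common codec, nothing guarantees this list ever overlaps between two arbitrary endpoints.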

There are some marginal differences between H.264 and VP8 on encoding efficiency, but the only real argument for H.264 that stands up is that it would make it easier for new WebRTC implementations to talk directly to existing, mostly embedded hardware endpoints that currently implement only H.264. Whilst this is valid, it won’t be a common scenario on tomorrow’s Internet and there are plenty of ways to achieve interoperability. Relatively speaking there really aren’t that many existing H.264 embedded hardware implementations – hands up if you have a video phone on your desk? Certainly not compared to the billions of WebRTC endpoints that will exist in released web browsers within months. With H.264 as an optional or plugin codec, vendors with legacy H.264 devices could simply take care to use an endpoint with optional H.264 support in their application, upgrade their current embedded hardware to support VP8, or if all else fails transcode on their proprietary MCUs. Encumbering the whole Internet with H.264 to accommodate this one use case is unacceptable collateral damage.
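The fallback logic just described — talk directly if a common codec exists, otherwise transcode on an intermediary — can be sketched as follows (the codec lists are illustrative):

```python
def negotiate(offer, answer):
    """Pick the first codec both sides support, else signal that a transcode is needed."""
    common = [codec for codec in offer if codec in answer]
    return ("direct", common[0]) if common else ("transcode", None)

# A WebRTC browser offering only VP8 against a legacy H.264-only endpoint:
print(negotiate(["VP8"], ["H264"]))          # ('transcode', None)

# The same browser against an endpoint that added optional H.264 support:
print(negotiate(["VP8", "H264"], ["H264"]))  # ('direct', 'H264')
```

The point of the argument above is that the cost of the transcode path falls on the vendors with legacy hardware, rather than on every implementation on the Internet.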

Just when it seemed that the VP8 vs H.264 tradeoff would go VP8’s way after the Google/MPEG LA announcement, Nokia served a fairly astonishing spoiler this week when it bowled an IP disclosure into the IETF claiming its own rights in VP8, which it wouldn’t be prepared to licence on any acceptable terms. It later admitted that this was deliberately done to derail the efforts to standardise on VP8. If Nokia’s claims are genuine and it really does own significant IP in VP8 then it is probably a good thing that it did disclose it at this stage, although it would be interesting to know why it didn’t do so very much earlier. If its claims are found to be weak then it is a pretty shocking way to try and manipulate a standards process.

Rather perversely, my company is developing software that gateways between WebRTC and legacy SIP video phones, among other things, so it would actually be good for us if H.264 were mandated in WebRTC. It would be very bad for the Internet though, so I’m really hoping that the questions around Nokia’s VP8 claims are quickly resolved in the right direction!
