
  • Boosting internet in mobile: the return of the browser proxies (mobile megatrend series)

    (Browser proxies are back in fashion… guest blogger Fredrik Ademar looks at the limitations of today’s mobile web and how browser proxies have resurfaced to bring the internet to the masses. Part of our Mobile Megatrends 2008 series.)

    Struggling with the limitations of the mobile web

    Numerous attempts, more or less successful and well-known, have been made over the past years to replicate in the mobile context the browsing experience provided on a desktop device. Latency, low bandwidth, limited input capabilities and small screens have typically been the main hurdles to overcome in getting anything remotely close to the original web experience. An interesting trend in the mobile browsing space, now going through an exciting renaissance, is the concept of bringing in network proxies to intercept and optimize web-to-mobile traffic. The most well-known example is probably Opera Mini, which has truly made a significant impact on how the mobile web is perceived by the masses. But Opera is only one of many contenders in this space, and there is a set of different initiatives providing similar functionality and benefits (although in slightly different packages), such as Bitstream ThunderHawk, InfoGin, Flash Networks, Novarra, WiderWeb, Google Wireless Transcoder etc. The trend seems clear going forward – this could indeed be the answer to the quest for a truly pervasive web experience across mobile and desktop. Or maybe we are hoping for too much?

    Ways to address the problem

    To begin with, one should note that the solutions provided are not by any means new concepts. The ideas date back to the early WAP days, and many of the issues that now attract attention were in fact exactly the ones that WAP attempted to address with the original WAP gateways. In retrospect, one of the major problems with WAP was that the ambitions stretched too far. For instance, using SMS and USSD as transport mechanisms was a bad idea from the very beginning, and this seriously harmed the priorities and technology trade-offs made. However, one important assumption was right: the insight that simply applying the classical W3C standards to the mobile space was not going to do the job; that is still the case today. Standards like HTTP and HTML (with JavaScript, CSS etc.) are simple and straightforward, but also pretty verbose formats, quite unfit for a mobile environment. Applying these on top of standard TCP as transport does not really match the need for a responsive and user-friendly mobile web service.

    To some extent it is a no-brainer to identify potential solutions, and the most straightforward and natural approach is to introduce an intermediate proxy which translates and optimizes the traffic over the air interface, while maintaining the legacy structure and protocols on the server side. Typical functionalities included in the available solutions are page pre-rendering and reformatting, image and data compression, intelligent proxy caching, image size reduction, session tracking etc. These functionalities can basically be categorized into the following three technology segments (based on the excellent taxonomy of browser proxies at the S60 browser blog):

    – Speed proxies. Purpose: image compression, efficient page content caching, HTTP and content pipelining.
      Examples: Bytemobile, NSN, Flash Networks (NettGain), Venturi VServer, Novarra.
    – Adaptation (transcoding) proxies. Purpose: page reformatting, image reduction, menu simplification, session tracking, SSL session handling, XHTML/MP adaptation.
      Examples: ByteMobile, InfoGin IMP, Google Wireless Transcoder (ex Req Wireless), Novarra nweb, Volantis Transcoder, WiderWeb, Greenlight Wireless Skweezer, Clicksheet.
    – Server-based (pre-rendering) proxies. Purpose: pre-renders the page before sending it and improves navigation.
      Examples: Opera Mini, Bitstream ThunderHawk.

    A speed proxy typically makes mobile browsing faster and reduces data to a certain extent, while still preserving the full page. Adaptation and server-based browser proxies, on the other hand, will drastically reduce the amount of data sent over the air, but at a significant cost, since the page will no longer be the original web experience. Often the page is re-formatted into one long narrow column (like e.g. Opera SSR), and dynamic effects like drop-down menus and pop-up windows will not work.

    Bringing in the proxies: pros and cons

    When benchmarking these products in terms of performance, the improvements are indeed often significant. Content size is reduced to 10-50% of the original, and typical sites can be downloaded in half the normal browser download time (ballpark figures from Opera Mini). Since much of the heavy lifting is done in the network, an interesting side effect is that the CPU and memory requirements on the device are much lower. It is even possible to deploy solutions to devices post-sales that make them mobile-web capable, even if they did not have that kind of support from the beginning (using e.g. Java-based approaches, as with Opera Mini).

    OK, this sounds great – are there really no weaknesses with the browser proxy approach? Yes, there are. A common problem highlighted is the lack of true end-to-end security, as well as the problem of ensuring the integrity of the transferred data. These problems are difficult to get around given the nature of the architectural setup. Another very relevant problem is that when you apply automatic, intelligent conversion algorithms to content, you do tend to violate the original intent of the content author. You can never replicate 100% of the desktop web experience, and in many cases content gets optimized away completely (e.g. Flash content). Another typical comment is that networks and device hardware are getting more capable each year, and that solutions including anything other than standard web browser technology will quickly become obsolete. I think this assumption is completely wrong: there will always be a gap between mobile and desktop web – the mobile device will always be more limited and therefore needs to be treated differently.

    Building a business case

    As always, the technology roll-out also needs to be coupled with a sustainable business model. Where is the money in all this? Besides providing the core browser experience, there are lots of value-added services like billing, content filtering etc. that can be applied, but the true value lies in the fact that companies in this space sit right in the middle of a giant flow of very targeted user data going back and forth. Carefully cultivated, this asset can prove far more valuable than the original service; with that said, it is really no surprise to find Google (Google Wireless Transcoder) among the contenders in this segment.
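    Before looking ahead, it is worth making the speed and adaptation ideas concrete. Below is a minimal, purely illustrative sketch in Python (not modelled on any of the products above) of what such a proxy does at its simplest: fetch the page on behalf of the handset, strip what the handset cannot use, and compress the rest before it crosses the air interface.

        # Minimal illustrative transcoding proxy: fetch a page for the client,
        # strip <script> blocks and comments, then gzip the response.
        # Not production code; real browser proxies add image recompression,
        # pre-rendering, pagination and caching on top of this basic pattern.
        import gzip
        import re
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import urlopen


        class TranscodingProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                # Expect requests of the form /?url=http://example.com/
                target = self.path.partition("=")[2]
                if not target:
                    self.send_error(400, "use /?url=<address>")
                    return
                html = urlopen(target).read().decode("utf-8", errors="replace")

                # "Adaptation": drop scripts and comments the handset cannot use.
                html = re.sub(r"(?is)<script.*?</script>", "", html)
                html = re.sub(r"(?s)<!--.*?-->", "", html)

                # "Speed": compress before sending over the (slow) air link.
                body = gzip.compress(html.encode("utf-8"))
                self.send_response(200)
                self.send_header("Content-Type", "text/html; charset=utf-8")
                self.send_header("Content-Encoding", "gzip")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)


        if __name__ == "__main__":
            HTTPServer(("", 8080), TranscodingProxy).serve_forever()

    Even this naive fetch-transform-compress loop shrinks a text-heavy page substantially (gzip alone often more than halves HTML); the commercial speed, adaptation and pre-rendering proxies differ mainly in how aggressively they transform the page before it leaves the network.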
    A megatrend going forward?

    As an outlook for 2008, mobile browser proxies will make an increasingly important contribution to the mobile web experience, especially in harnessing the value of the long tail. This time there is no doubt the proxy-based browser model is here to stay, but it will typically not be perceived as a ground-breaking, revolutionary step – more as a natural and obvious evolution. We will also likely see a consolidation of technical solutions, as some players in the space today are not providing sufficiently scalable and competitive solutions. Comments?

    – Fredrik

  • Execution engines: understanding the alphabet soup of ARM, .NET, Java, Flash …

    [Mobile development platforms, execution engines, virtualisation, Flash, Java, Android, Flex, Silverlight… guest blogger Thomas Menguy demystifies the alphabet soup of mobile software development.]

    The news at All About Symbian raised a few thoughts about low-level software: Red Five Labs has just announced that their Net60 product, which enables .NET applications from the Windows world to run unchanged under S60, is now available for beta testing. .NET on S60 3rd Edition now a reality? This is really interesting: even the battle for languages and execution environments is not settled!

    For years mobile coding was tightly coupled with assembly code, then C and, to a lesser extent, C++. The processor of choice is the ARM family (some others exist, but hardly any more in the phone industry)… this was before Java. Basically Java (the language) is no more than a virtual processor with its own instruction set, and this virtual processor, also called a virtual machine (a JVM in the case of Java), simply does what every processor does: it processes assembly-like code describing the low-level actions to be performed to execute a given program or application. On the PC, other execution engines have been developed. The first, obvious, native one is the venerable x86 instruction set, thanks to which all PC applications are “binary compatible”. Then came Java, and more recently the Macromedia/Flash runtime (yes, Flash is compiled into byte code which defines its own instruction set). Another big contender is the .NET runtime… with, you guessed it, its own instruction set. In the end it is easy to categorize the execution engines:

    – The “native” ones: the hardware directly executes the actions described in a program, compiled from source code into a machine-dependent format. A native ARM application running on an ARM processor is an example, or partially a Java program running on an ARM with Jazelle (where some Java byte codes are implemented directly in hardware).
    – The “virtual” ones: Java, .NET, JavaScript/Flash (or ActionScript, not so far from JavaScript: the two languages will be merged with the next version, ActionScript 3 == JavaScript 2 == ECMAScript 4), where the source code is compiled into a machine-independent binary format (often called byte code)… And what would you call an ARM emulator running on an x86 PC? You guessed it: virtual.

    So why bother with virtual execution engines? Java was built on the premise of the now famous (and defunct) “write once, run everywhere”, because at that time (and I really don’t know why) people thought it was enough to reduce the “cross-platform development issue” to low-level binary compatibility, simply allowing the code to be executed. We now know that is not enough! Once the binary issue was fixed, the next really big one was APIs (and, to be complete, the programming model)… and the nightmare begins. When we say Java we only name the language, not the available services; the same goes for JavaScript, C# or ActionScript. So development platforms started to emerge: CLDC/J2ME, the .NET framework, Flash, Adobe Flex, Silverlight, JavaScript+Ajax, Yahoo widgets… but after all, what are GNOME, KDE, Windows, MacOS, S60, WinMob? Yes, development platforms too. The Open Source community quickly demonstrated that binary compatibility was not that important for portability: once you have the C/C++ source code and the needed libraries, plus a way to link everything, you can simply recompile for ARM/x86 or any other platform.
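    To make the “virtual processor” idea above concrete, here is a deliberately tiny stack-based byte code interpreter (an illustrative sketch only, nowhere near a real JVM, CLR or Flash runtime):

        # A toy stack-based virtual machine: the "byte code" below is an
        # instruction set for a processor that exists only in software,
        # which is all a JVM, the .NET CLR or the Flash runtime is at heart
        # (minus JIT compilation, garbage collection, verification, ...).
        PUSH, ADD, MUL, PRINT, HALT = range(5)

        def run(program):
            stack, pc = [], 0
            while True:
                op = program[pc]
                if op == PUSH:
                    stack.append(program[pc + 1])
                    pc += 2
                elif op == ADD:
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
                    pc += 1
                elif op == MUL:
                    b, a = stack.pop(), stack.pop()
                    stack.append(a * b)
                    pc += 1
                elif op == PRINT:
                    print(stack[-1])
                    pc += 1
                elif op == HALT:
                    return

        # Machine-independent "byte code" for (2 + 3) * 4: it runs unchanged
        # wherever the interpreter itself has been ported (ARM, x86, ...).
        run([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT])

    The program is portable because only the interpreter needs to be ported; everything above it is the same bits everywhere, which is exactly the binary-compatibility promise discussed here.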
    I’ve made a big assumption above: that you have “a way to link everything”. And this is really a big assumption: on many platforms you don’t have dynamic linking, a library repository or dynamic service discovery… so how do you cleanly expose your beloved APIs? This is why OSGi was introduced, much like COM, CORBA, some .NET mechanisms, etc.: it is about component-based programming, encapsulating a piece of code around what it offers (an API, some resources) and what it uses (APIs and resources). Basically an execution engine has to:

    – Allow binary compatibility: abstract the raw hardware, i.e. the processor, using a virtual machine and/or a clean build environment
    – Allow clean binary packaging
    – Allow easy use and exposure of services/APIs

    It is not impossible for virtual engines to dissociate the language(s) from the engine: Java… well, Java for Java, ActionScript for Flash, all the #-languages for .NET. An execution engine is nothing without the associated build chain and development chain around the supported languages. In fact this is key, as all those modern languages have a strong common point: developers do not have to bother with memory handling, and as any C/C++ coder will tell you, that means around 80% fewer bugs, so a BIG productivity boost. But also (and this is something a tier-one OEM confirmed): it is far easier to train and find “low cost” coders for those high-level languages than C/C++ experts!… another development cost gain. A virtual execution engine basically brings productivity gains and lower development costs thanks to modern languages… but we are far, far away from “write once, run everywhere”. As discussed before, that is not enough, and here come the real development environments built on virtual execution engines:

    – .NET framework platform: a .NET VM at heart, with a very big set of APIs (this is what I would like to know about the Red Five Labs S60 .NET port: which APIs are exposed)
    – Silverlight: also a .NET VM at heart, plus some APIs and a nice UI framework
    – J2ME: a JVM + JSRs +… well, different APIs for each platform
    – J2SE: a JVM + a lot of APIs
    – J2EE: a JVM + “server side” frameworks
    – Flex: Adobe ActionScript Tamarin VM + Flex APIs
    – Google Android: a Java VM + Google APIs… but more interestingly also C++: as Android uses IDL interface descriptions, C++/Java interworking will work (I will have to cover this at length in another post)
    …and the list goes on.

    What really matters is the development environment as a whole, not simply a language (for me this is where Android may be interesting). For example, the Mono project (which aims to bring .NET execution to Linux) was of limited interest before they ported Windows Forms (the big set of APIs for building graphical applications in the .NET framework) and made it available in their .NET execution engine.

    What I haven’t mentioned is that the development cost gain allowed by modern languages comes at a price: performance. Even if Java/.NET/ActionScript JITs (just-in-time compilers: VM technology that translates virtual byte code into real machine code before execution) have partially helped on the CPU side, it is still not the case for RAM, and in the embedded world Moore’s law doesn’t help you: it only helps to reduce silicon die size and chipset cost. So using a virtual engine will actually force you to… upsize your hardware, increasing the BOM of your phone.
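    To illustrate the component idea from earlier in this post (a piece of code encapsulated around what it offers and what it uses), here is a minimal service-registry sketch; it is illustrative only, and vastly simpler than OSGi, COM or any real component framework:

        # Minimal sketch of component-based programming: components declare what
        # they offer and what they use, and a registry wires them up at runtime
        # instead of relying on static linking.
        class Registry:
            def __init__(self):
                self._services = {}

            def register(self, name, provider):
                self._services[name] = provider      # what a component offers

            def lookup(self, name):
                return self._services[name]          # what a component uses


        registry = Registry()

        # A "codec" component offers an audio decoding service.
        registry.register("audio.decode", lambda data: data.lower())

        # A "player" component uses the service by name; it never links
        # against the codec directly, so either side can be swapped or updated.
        def play(track_bytes):
            decode = registry.lookup("audio.decode")
            print("playing:", decode(track_bytes))

        play("SOME ENCODED AUDIO")

    The player never links against the codec directly, so either side can be swapped, updated or discovered at runtime; that property is what component platforms bring to phone software.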
    And the need to upsize isn’t a vague assumption: when your phone has to be produced in the 10-million-unit range, using 2MB of RAM, 4MB of flash and an ARM7-based chipset helps you a lot to make money selling at low cost… and some nights and days have been spent very recently optimizing things to make it all happen smoothly…

    Just as an example, what was done first at Open-Plug was a low-cost execution engine, not a virtual one, running “native code” on ARM and x86, with service discovery and a dedicated toolchain: a component platform for low-cost phones. It then became possible to add a development environment with tools and mid- to high-level services. A key opportunity may be a single framework with multiple execution engines, for easy adaptation to legacy software and a productivity boost for certain projects, hardware, or parts of the software.

    And in this area the race is not over, because another beast may come in: “virtualization”. In the discussion above another execution engine benefit was omitted: it is a development AND execution sandbox. This notion of a sandbox, together with the last argument about performance, becomes essential when you need to run time-critical code on the one hand and a full-blown “fat” OS on the other; to be more specific, if you need to run a GSM/UMTS stack written on a legacy RTOS and an open OS (like Linux) on a single-core chipset. Today this is not possible, or very difficult: it may be achieved with low-level tricks if one entity masters the whole system (like when Symbian OS was running in a Nokia NOS task), or with real virtualization technologies like what VirtualLogix is doing with NXP’s high-end platforms. And in that case the cost gain is obvious: a single-core versus a dual-core chipset…

    But why bother with virtualization rather than rewriting the stacks for other OSes? Because this is simply not achievable in our industry’s time frame (nearly all the chipset vendors have tried and failed). And again the desktop was first in this area (see VMware and others): Intel and AMD should introduce hardware to help this process… to have multiple virtual servers running on a single CPU (or more).

    So where are all those technologies leading us? Maybe more freedom for software architects, more productivity, but above all more reuse of disparate pieces of software, because it no longer seems possible to build a full platform from scratch, and making those pieces run in clean sandboxes is mandatory, as they haven’t been designed to work together. Anyway, once you know how to cleanly write code that runs independently of the hardware, you have to offer a programming model, implying how to share resources between your modularized pieces of code… and in that respect execution engines are of no help; you need an application framework (like Hiker from ACCESS; Android is about that, but so are S60, Windows Mobile, Open-Plug ELIPS, …). It will abstract the notion of resources for your code: screen, keypad, network, CPU, memory… but this is another story, for another post. Feel free to comment!

    – Thomas

  • Nokia's Ovi equals S60 squared

    Launched in 2002, S60 has been Nokia’s software platform, delivering an application framework, key middleware, core applications and a user interface on top of the Symbian OS platform. For the last few years, the vast majority (circa 65%) of Symbian devices have shipped with S60 on top, and in the form of Nokia’s own devices. But I’m digressing.

    S60 has been Nokia’s strategy to extend its market share in the value chain beyond its own 40%. The manufacturer has long realised that extending far beyond 40% of the mobile device market is pretty hard. As such, Nokia developed S60, an in-house software platform that can be licensed to other manufacturers. In creating this strategy, Nokia envisaged that many OEMs would take up S60, which would translate into a meaningful addition to its revenue base. It’s worth noting that, contrary to the S40 software platform, S60 incurs far greater costs in maintaining and upholding APIs, catering to developer needs and meeting handset OEM differentiation requirements. S60 has therefore been Nokia’s strategy to extend well beyond its own device market share and reap licensing revenues from competing OEMs. As history has taught, very few models and volumes of non-Nokia devices based on S60 have shipped to date, compared with the 100M+ Nokia S60 devices.

    Visualising Nokia’s Ovi strategy

    Interestingly, Ovi is an extension of S60 for the connected device age. Ovi is about channeling services (e.g. music and video sharing, widgets, location services, and storage-in-the-cloud services) onto mobile devices. In this sense, Ovi is an extension of S60, but with lower costs: to deliver an Ovi service, you need an enabling client application, not a complete software platform. What’s more, Ovi is about extending service delivery to connected devices beyond mobile: PCs, set-top boxes, home entertainment and other appliances. And it’s about bringing those services to the consumer irrespective of the device (mobile or fixed) or the medium (over the cable or over the air). If we were to represent mobile devices as one dimension and the spectrum of connected devices as another, a very revealing relationship between Ovi and S60 emerges, which lends itself well to visualising Nokia’s Ovi strategy. Ovi = S60 squared.

    Thoughts?

    – Andreas

  • Do we really need femto cells?

    A femto cell is currently the smallest implementation of a cellular network. It is designed to be placed in the home and enable ordinary mobile handsets to communicate with the mobile network through a broadband connection, such as cable or xDSL. Femto cells operate on the same licensed spectrum used by macro and micro cells, but only have a range of tens of meters, to cover the area within the home. They bring a whole new value proposition to mobile operators and enable them to enter a previously unreachable market: the home environment. But do we really need femto cells?

    The Femto Forum was formed in July 2007 by seven early femto cell innovators, mostly in the UK (including IPAccess and Ubiquisys), and attracted several heavyweights during the summer of 2007, including ZTE, NEC, Alcatel-Lucent, Nokia Siemens Networks and Motorola. The forum currently consists of 50 members distributed across the mobile value chain. It has created four working groups tackling technical, business and marketing issues, and aims to minimize fragmentation in this new market.

    Why is there a need for such small cells? The most efficient way to increase network capacity in a cellular network is to shrink the cell size – OK, there are other ways, including getting new spectrum, sectorization and adaptive scheduling algorithms, but all are semi-disruptive and cannot compete with a smaller cell size. However, in an archetypal mobile network, the cost of deploying many small cells in data-hungry areas is prohibitive. Femto cells piggyback on broadband connections, are relatively inexpensive, and can effectively form a distributed high-capacity network. In a much simpler usage case, femto cells can provide coverage where ordinary cells cannot, in highly populated areas where propagation issues are a concern. (Although pico and femto cells may appear similar, a pico cell connects to a base station controller to extend coverage in areas without it, e.g. enterprise locations. Femto cells may include some form of base station controller and are more intelligent.)

    What femto cells really propose is revolutionary for mobile and fixed operators, assuming that they aim to provide more than just coverage in the home. That said, femto cell application is most likely to depend on the region it is deployed in: Western Europe is most likely to use femto cells for advanced data services, while North America is more likely to see femto cells used for coverage in remote areas where low traffic does not justify a typical base station.

    Are femto cells valuable as currently marketed? First of all, I am not convinced that fixed operators will be happy to see mobile operators piggybacking on their broadband connections and generating revenue through them, cannibalizing bandwidth that could otherwise be used for fixed services. Although it is likely that some form of agreement will take place between mobile and fixed operators, it is still early to discuss this when there could be serious technical difficulties facing femto cells. A serious technical issue is interference, with femto cells interfering with each other and with the macro/micro cells in the main mobile network. Simon Saunders, chairman of the Femto Forum, affirms that major femto cell developers have made their products environment-aware and intelligent enough not to interfere.
    This may be the case, but I would like to see how femto cells will interact when there are tens of them in the vicinity, all trying to work in the same spectrum. Another issue is whether the mobile network will be able to cope with so many distributed base stations accessing the core elements of the mobile network, including the central switches, location registers, softswitches, media gateways etc. These may have been designed to cope with hundreds of base stations in dense urban areas, but the number of base stations may escalate to several thousand if the mobile operator deploys femto cells.

    As far as usage is concerned, I can’t see a solid scenario for femto cells. They can bring mobile wireless data to the home with the added benefit that users can access the new applications with a device they are already familiar with. However, I don’t see how a mobile device can compete with a PC or a notebook computer for the data services most commonly accessed at home: web, email, social networking and multimedia. Especially if mobile operators build WiFi into a femto cell box to enable computer networking, I think fixed operators will get quite alarmed. I can see three ways for mobile operators to bring something of interest to end users with femto cells:

    – New services: Mobile operators can release new services that target mobile devices with very high speed connections. Intelligent architectures that distribute intelligence to the edge of the network (including IMS) are ideal for this setting, but then again user behavior is nearly impossible to predict, and deploying these kinds of services would require heavy capital expenditure on the part of the mobile operator.
    – New terminals: This is a far more radical approach. Mobile operators can promote devices with increased display and input capabilities to be used in femto cells and outdoors. This would be possible only when proof of concept has been achieved and economies of scale are in place to justify the need to change handsets (or get an additional one).
    – Or they could simply add coverage where there isn’t any to start with, and build a stable of applications after end users are familiar with cell-at-home solutions.

    Do we really need femto cells? Femto cells may be a good thing. After all, distributed is the way forward: FON and Meraki enjoy success with little overhead compared to traditional network providers by giving more power to the end user. I am not saying that the mobile operator will give more power to the end user, but it will enable more advanced applications and perhaps cheaper basic mobile services, including voice and SMS, at home. There is a lot of work to be done to make sure that:

    – Fragmentation is managed and technical issues are resolved (e.g. Nokia Siemens has released a femto gateway that talks to other vendors’ femto cells via a proprietary interface).
    – Operators market (and subsidize) the devices very carefully.
    – Mobile operators work with fixed operators to set up some form of cooperation to enable femto cells, or assess whether they should offer fixed services themselves.
    – End users are educated that health risks are minimal (as with guideline-compliant macro/micro cells).

    However, as it stands (and in the short-term future) I wouldn’t pay anything to have a femto cell at home, when I can enjoy voice calls over circuit-switched (or VoIP) connections practically for free and have a very fast broadband connection with WiFi. Would you?

  • The significance of Google's Android

    Google makes money by building inventory (i.e. white space on web, print and radio) and auctioning that inventory off to advertisers. Search is merely the means to create a boundless amount of inventory and attract billions of eyeballs to it. All Google products, including Docs, Maps, iGoogle, Gmail, GTalk and News Alerts, are strategies to increase the amount of inventory and attract more eyeballs. The Android operating system for mobile phones is no different. It’s a platform for building and channeling inventory, much like a web browser. In fact we could say that Android is a browser on steroids, in that it allows developers to easily build any connected handset application anywhere within the mobile user journey, and within those applications create more inventory.

    So why is Google spending more than 200 man-years building a complete operating system, instead of building just a browser for mobile phones, or even a downloadable application like an on-device portal? Because browsers on mobile handsets are used for a tiny percentage of the time, probably less than 5% of the time the user spends on their phone. 95% or more of the user journey is taken up by the contacts application, idle screen, main menu, calendar, inbox and settings.

    In parallel, with Android Google is addressing handset manufacturers’ need for an operating system they can control (it’s licensed under APL2), that’s low-cost (it’s free), and that reduces time to market for variants (see the declarative XML UI framework and developer platform). Plus, Android is backed by Google, a heavyweight vendor that can support OEMs through multi-million-unit launches.

    What’s so special about Android? Android is different from other OSes, including Windows Mobile, Symbian/S60/UIQ, the Linux variants and proprietary OSes (Nucleus, EMP, BREW, etc.), in several ways:

    – The declarative XML UI framework enables developers and handset manufacturers to rapidly develop the user interface for new applications.
    – The Android SDK is an environment for building connected applications. Every application (including the dialler, idle screen, SMS, contacts, etc.) can consume and produce content. Every application on Android is a Web 2.0 citizen.
    – The Android source code will be licensed under the Apache 2.0 license, a non-copyleft license which allows handset manufacturers to modify the source code without being forced to share back their modifications. This is in complete contrast to GPLv2 and GPLv3, which are copyleft licenses (see our white paper); Sun applied the GPLv2 license to its Java ME implementation, which is the reason not a single handset OEM is using it.
    – Android allows developers to program against the familiar Java SE library of APIs (the desktop version of the Java libraries), which is much broader and more powerful than Java ME, the mobile version. Much like SavaJe (now Sun’s Java FX Mobile), Android is a Java SE-like platform built on a Linux kernel, but more importantly one where the Java platform is deeply integrated with the underlying Linux support package. In other words, the Java SE-like platform is a native application platform for Android phones. Symbian may arrogantly dismiss Android as yet another Linux initiative, but the breadth and depth of Java APIs is something Symbian never managed to get right. And unlike the FX Mobile platform, Android has several OEMs planning to build handsets on it.
    – Android is a departure not only from the Java ME development model, but also from Linux development.
      Funnily enough, operators like Vodafone and Telefonica, who have committed to supporting Linux as a preferred platform, would not be counting Android in (thanks Guy!).
    – Android uses Dalvik, a ‘proprietary’ (non-Sun-endorsed) Java virtual machine, which means that Android developers can use Java SE APIs while Google does not have to pay any royalties to Sun for TCK certification, as they’re not claiming this is a Java environment. As Stefano writes, Google doesn’t claim that Android is a Java platform, although it can run some programs written in the Java language and against some derived version of the Java class library. This is a slap in the face of Sun.
    – Google is paying developers $10 million to write applications for Android, which is a smart move to motivate developers, especially when no phones are out yet. It’s worth noting that $10 million exceeds the yearly marketing budget of most operating system vendors.

    The Open Handset Alliance (OHA) is formed by an array of complementary participants: operators (covering the US, Europe, Asia, Japan and Latin America), handset OEMs covering all global regions (including HTC, the second-biggest smartphone OEM after Nokia), as well as hardware and software vendors covering complementary constituents of a mobile handset.

    So is Android mature, and will it be adopted by OEMs? Google has dedicated an estimated 200+ man-years to building the platform (since the Android acquisition), but there are still bugs (see this report). HTC has confirmed it is launching one handset in 2H08 and reportedly plans to release a total of 2 or 3 Android-based handsets in 2008. Moreover, according to a WSJ report, T-Mobile US has committed to releasing a phone in 2008 that will be based on Android. For a new Linux initiative, this level of commercial support is extremely rare.

    What’s in it for Google? Android is a service access platform, not a delivery platform. It’s about growing the pie of mobile advertising inventory, not necessarily growing Google’s share. There’s nothing to stop Yahoo taking Android and launching a phone with Motorola that bundles Yahoo Go!, flickr and eBay. I’m guessing, however, that Google has some sort of agreement with OHA-participant handset manufacturers and operators about bundling Google services with Android handsets by default. Moreover, Google might want to bundle the gPay payment system (see this Times Online article), or connect the physical world to Google advertisers via its ZebraCrossing QR reader technology for mobile phones. What’s even more interesting is that Android may provide a channel for feeding customer analytics back to Google, such as presence, contacts, call logs, SMS messages and a wealth of user profile information that can be used to build extremely detailed digital footprints.

    Another important impact of Android is that it will catalyse the development of white-label phones, i.e. phones ready to be customised by consumer brands like MTV, Nike, Gucci and Tag Heuer. Rapid software customisation is the bottleneck that hampers the scalability of customised design manufacturers like ModeLabs today. All in all, Android seems to be the only non-proprietary operating system with a strong chance of wider commercial adoption. Motorola is losing interest in LiMo (it committed to Qtopia APIs, whereas LiMo supports rival GTK). The LiPS forum doesn’t really have a route to market, apart from Chinese ODMs, and is a partial OS.
    All other mobile Linux operating systems are either at the alpha stage (Celunite, ALP, A la Mobile), not shrink-wrapped (Greensuite), or not backed by a big services firm (Purple Labs). Symbian is dominated by Nokia and DoCoMo; outside Japan, the overwhelming majority (volume-wise and model-wise) of Symbian handsets are Nokia’s, whereas in Japan the vast majority of the 30 million Symbian-based shipments are DoCoMo’s (60 out of 66 models). And Windows Mobile is for enterprise segments only (at least up to version 6). Plus, Android ticks several boxes on OEM checklists, including control, time to market and cost.

    Thoughts?

    – Andreas

  • Prepaid roaming: an underhyped opportunity

    But what is causing all of this turmoil? Prepaid users far outnumber post-paid users (especially in developing markets, where prepaid may account for up to 90% of subscriptions), and operators are expected to harness their roaming potential in order to combat declining revenues due to competition, regulation (the Eurotariff), increased mobile phone penetration, cheap fixed telephony services, VoIP and several other threats to their revenue streams. Informa estimates that approximately 62% of mobile subscribers worldwide are prepaid, counting more than 1.5 billion as of July 2006.

    The technologies needed to implement prepaid roaming are heavily fragmented, much more so than for post-paid roaming, mainly because a prepaid user requires authorisation before each call, based on their credit, and this procedure has to take place in near real time. There are several ways to enable prepaid roaming:

    – Call-back with USSD: Unstructured Supplementary Service Data is the simplest method of enabling prepaid roaming. It relies on entering a short code on the handset to query the current balance and enable communication. An example of a USSD code is *99#phone number#. It is hardly user-friendly and in most cases complicates use beyond the reach of most users. On the other hand, it is practically costless to implement, but it is not considered a long-term solution, except in some developing markets where revenues do not permit an integrated roaming solution or roaming is not seen as a revenue driver.
    – CAMEL: Customised Applications for Mobile networks Enhanced Logic is a set of standards published by ETSI which describe services that operate on top of a GSM or UMTS network and are based on Intelligent Network standards. Its use is completely transparent to the end user, who uses the mobile phone as in their home network. However, CAMEL requires heavy expenditure to deploy (some vendors quote a cost of 7-8 per subscriber) and requires both home and visited networks to be CAMEL-enabled. The evolved mobile markets of Western Europe have implemented CAMEL widely for prepaid roaming.
    – Proprietary solutions: Several vendors have released roaming solutions that either mimic or translate CAMEL signalling between home and visited networks to enable prepaid roaming.
    – Prepaid hubs: A solution vendor establishes a roaming ecosystem through multiple agreements with mobile operators, international traffic carriers, signalling providers and other players in the roaming value chain. An operator that seeks to enter the roaming market only has to form an agreement with the hub provider and enjoys several advantages: pricing transparency, wide reach and a broad customer base, without the need for complex and extensive bilateral agreements.

    It appears that prepaid hubs are the most efficient and cost-effective way forward in a fragmented roaming world, but market dynamics do not suggest a simple migration. In advanced markets CAMEL is already established, but operators will still want the flexibility of hubs in markets where CAMEL is unavailable. What is more, CAMEL gives the operator the choice of partner networks abroad, so that subscribers can be steered to a quasi-controlled environment allowing both operators to benefit. On the other hand, in developing markets across Asia, Africa and South America, the value of prepaid hubs may be priceless in the eyes of operators whose prepaid subscriber base in most cases accounts for more than 90% of all subscribers.
    It is these markets that are leading the prepaid hub evolution. In the rapidly changing world of roaming, it seems that operators are closer to breaking away from the ambiguity of roaming charges and providing transparent service and pricing to end users. Hubs are expected to change the roaming landscape, but to what extent will operators want to shift from their established semi-walled gardens to a more flexible and cost-effective offering for end users? Hubs are expected to play a major part in enabling prepaid roaming, but some players in the industry, including service providers, vendors and mobile operators, feel that hubs threaten their established roaming business. One thing is for sure though: with the advent of the Eurotariff and distributed solutions like hubs, the shift towards realistic, competitive charges and pricing transparency for end users is slowly becoming a reality.

    – Dimitris

  • Managing software as lego bricks: the industry side of mobile software management

    Mobile software management (MSM) is an umbrella of emerging technologies which encompasses firmware over-the-air (FOTA), user interface management and enterprise device management, which have traditionally been considered applications of mobile device management (MDM). However, MSM extends beyond MDM by enabling management of software at the individual component level, and doing so at any point in time: not only post-sales, but also pre-sales and pre-manufacturing.

    In this second part of the series on mobile software management, I take a look at the industry side of this new umbrella of MSM technologies, digging into the software lifecycle, the industry benefits of software management and the multitude of vendors claiming a slice of the pie. For the user benefits of mobile software management, see part one of this series. This article contains extracts from our recent report titled Mobile Software Management: Advances and Opportunities in Service Delivery.

    The handset software lifecycle

    The handset lifecycle consists of three main stages: pre-load (i.e. from concept to in-ROM), post-load (from in-ROM to in-shop), and post-sales (from the time of sale to handset retirement). Mobile software begins life 18-24 months before the handset leaves the factory. Software requirements are captured as use cases and translated into specific technology, hardware and system dependencies. The process of requirements gathering, collation, prioritisation, definition and agreement often takes up to six months. Thereafter follows the software development process, comprising configuration management, integration, testing and quality assurance. This phase of software development and integration typically requires another six months. Adding in handset-specific hardware requirements, generation of variants for different channels, operators and regions, acceptance testing, interoperability testing and resolution of last-minute bugs results in another six months before the software is finally embedded into the ROM. After flashing to ROM, there are additional development requirements around channel customisation and the addition of specific applications and settings. The handset lifecycle post-sales is much more familiar; the user can personalise the handset with ringtones or games. We can also envisage new handset features being delivered over the air (as in the case of the iPhone).

    Why the mobile industry cares about MSM

    The industry does care about MSM; mobile software management can address a large number of diverse challenges for the various industry players, as we will see next.

    Handset manufacturers. Handset manufacturers use software to develop, deliver and manage handset features and services across the handset lifecycle. There are multiple challenges for manufacturers:

    – Software reuse: Software delivery across multiple product lines is inherently complex and costly for manufacturers. MSM, and specifically modularisation technologies, offer easier software reuse across handsets and componentised software development, integration, testing and delivery.
    – Variant management: Handset manufacturers have to address the continually increasing number of channels, operator customers and end user segments, each of which requires the creation of a unique handset variant. For example, it is believed that Nokia currently manages around 10,000 software releases every month, i.e. discrete versions of applications (SMS, email, browser, Java) for a certain handset model, for a certain region and channel.
      MSM technologies enable not only pre-load but also post-load delivery and management of software modules, addressing the channel, customer and user needs of handset manufacturers.
    – Post-sales services: Tier-1 manufacturers have since 2006 been packaging after-sales services onto their handsets (e.g. Nokia Catalogs, Motorola Screen 3 and Sony Ericsson’s TrackID) via specialised clients. MSM would allow this client software to be easily manageable and scalable across handset models.

    Mobile network operators. MNOs face continual challenges in delivering services, particularly as these services are increasingly dependent on handset software enablers:

    – Variant creation: Similarly to OEMs, operators must offer a variety of software features to target different end user segments. Operator solutions must ideally be capable of managing hundreds of different device models, a challenge that MSM technologies cater to with post-sales software update and componentisation capabilities. In general, MSM can decouple the service lifecycle from the handset delivery lifecycle, so that new services can be provisioned directly to the handset at any time in the handset lifecycle. For example, operator-customised applications could be delivered to the handset post-load via software component updates.
    – Device base enablement: New operator services (e.g. i-mode, an open web strategy or HSDPA network upgrades) are often not supported by the installed device base. MSM technologies can enable the existing device base to support newly launched services, thus generating additional revenues throughout the post-sales device lifecycle.
    – Post-load handset specification: Typically operators will provide hardware and software requirements to handset manufacturers as fixed, one-off requirements. This is a process manufacturers struggle with, given that operator specifications often exceed 4,000 requirements and are refreshed every six months. By moving part of the handset tailoring or variant creation process to the post-load phase, operators can achieve faster time to market.

    Independent software vendors (ISVs). ISVs develop software for the device pre- and post-load by working directly with the OS provider, handset manufacturer or operator, or alternatively by providing applications to the device in an after-sales market. Software management challenges for ISVs are evident in pre-load, post-load and post-sales cases:

    – Pre-load integration: ISVs report that pre-load integration and acceptance testing of software is both complex and resource-intensive. An ISV that we spoke with indicated that the lead time for software acceptance testing is 4-8 weeks for the S60 platform, 8 weeks for Java and 10-12 weeks for Windows Mobile. MSM stands to reduce that time-to-market cost by decoupling software acceptance testing from device delivery, i.e. allowing testing and integration to occur post-load and post-sales.
    – Post-load / post-sales variant management: ISVs catering to consumer applications have to deliver thousands of variants of each application, created for the equally numerous flavours and types of software platform, particularly due to Java fragmentation. For example, software development house Glu Mobile generates approximately 5,000 variants (SKUs) of each of its top-selling games in order to port them to 700 device models. As another example, Jamdat shipped 57,000 SKUs of its game titles in 2006.
      MSM technologies stand to deliver a difference here by allowing application variants to be bundled into a single application that checks the platform version and adapts itself accordingly. For comparison, in the PC environment, where software management is more advanced, Electronic Arts, the largest game developer, has to produce 70 variants of each game.

    Enterprises. Enterprises are actively asking for MSM today in the context of specific enterprise applications such as device policy management, the ability to manage and update software and applications on the device, and inventory reporting. Mobile phones need to be managed as IT assets, much like the PCs on the enterprise network. The requirement here is very much for an end-to-end solution; key to this is security and reliability, with service level agreements becoming the norm. MSM provides the ability to granularly manage software on employee devices in the field.

    End users. Last but not least, mobile software management enables a wide range of scenarios that offer value to end users, such as:

    – Buy, pick & mix.
    – Accessorise me.
    – Dress me up.
    – Fix me.
    – Check me up.
    – Supersize me.

    For a detailed discussion of these user-centric scenarios see part one of this series on mobile software management.

    Behind the industry scenes: deployments and vendors

    Based on 20+ interviews with software vendors, network operators and handset manufacturers that we conducted for our VisionMobile report on Mobile Software Management, we understand that at least eight trials of mobile software management technologies are underway as of H1 2007. The supply part of the MSM market is in a nascent stage. Despite the early stage of the market, a large, diverse range of vendors is moving to exploit revenue opportunities in mobile software management. The following list summarises key vendors from each category of actors playing in the MSM market:

    – OS vendors, e.g. S60/Symbian, Windows Mobile, EMP, Mentor Graphics (Nucleus)
    – Modular operating systems, e.g. BREW and Open-Plug
    – System integrators, e.g. Teleca, Sasken, SysOpen Digia
    – Software component management vendors, e.g. Red Bend
    – Retailers, e.g. Carphone Warehouse
    – Distributors, e.g. Brightpoint, Brightstar, Cellstar
    – Software vendors, e.g. Abaxia, Cibenix
    – MDM incumbents, e.g. HP, InnoPath, mFormation, Nokia (Intellisync), Sicap, SmartTrust, Smith Micro, Synchronica, WDS Global
    – Application environment vendors, e.g. Java, Flash Lite

    Winners and Losers

    Amidst this emerging market, we argue that the winners are operators with an advanced handset software strategy, such as Orange and Vodafone, and manufacturers who have embarked on a complete software redesign and service-focused modularity, like Motorola with its Linux-Java and UIQ platforms. Vendors we have high expectations of include Open-Plug, which offers tools for software modularity; Red Bend Software, which provides a solution for software component updating on mobile devices and is already the preferred firmware update partner for several major OEMs; mFormation, which has grown its product portfolio to offer a full set of MDM and MSM services while continually attracting venture capital and securing global deals with tier-1 operators; and Abaxia, which has pioneered the use of SIM cards for service delivery post-factory. The challenge these providers will face is to build relationships and strike deals with major operators and handset OEMs early on, while the market is still figuring out how to solve today’s issues.
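    Returning to the variant-management point above (bundling application variants into a single application that detects the platform and adapts itself), here is a deliberately simplified sketch; the platform names and capability flags are invented for the illustration:

        # Illustrative only: one shipped application that adapts to the platform
        # it finds itself on, instead of one pre-built SKU per device model.
        # Platform names and capability flags here are invented for the sketch.
        DEVICE_PROFILES = {
            "low_end_s40":  {"heap_kb": 512,  "screen": (128, 160), "has_3d": False},
            "mid_range":    {"heap_kb": 2048, "screen": (240, 320), "has_3d": False},
            "smartphone":   {"heap_kb": 8192, "screen": (320, 480), "has_3d": True},
        }

        def configure_game(profile_name):
            caps = DEVICE_PROFILES[profile_name]
            return {
                "texture_pack": "hi-res" if caps["heap_kb"] >= 4096 else "lo-res",
                "renderer":     "3d" if caps["has_3d"] else "sprite",
                "layout":       "%dx%d" % caps["screen"],
            }

        for name in DEVICE_PROFILES:
            print(name, configure_game(name))

    The point of the sketch is that one shipped package adapts itself, instead of one pre-built SKU per device model; that is precisely the cost that the Glu Mobile and Jamdat figures above illustrate.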
    – Andreas

    The VisionMobile research report Mobile Software Management: Advances and Opportunities in Service Delivery dissects the complex array of MSM technologies, reviews eight major vendors, presents several operator case studies and uncovers key market trends within mobile software management. The report is available as a free download from www.visionmobile.com/whitepapers.

  • Google’s Android: boring, exciting or breakthrough?

    Why Google’s Android is boring:

    – The Open Handset Alliance is another alliance designed to bring openness to the world (like OMTP, LiMo, LiPS, GMAE, the MontaVista partner programme, Trolltech’s Greensuite alliance & integration project, and many more).
    – OHA is another industry alliance for building better phones, but with zero phones out in the market. If it weren’t for the G company, it would probably be discounted as slideware.
    – Android is another Linux stack. We’ve already got WindRiver, MontaVista, Purple Labs, ALP, Mizi Research, Trolltech Greensuite, Celunite, Applix, OpenMoko, A la Mobile… do we need one more?
    – The Android OS (in connection with Google’s OpenSocial) will help Google compete against Nokia’s Ovi, the umbrella of (mostly unannounced) mobile services. Nothing new here.
    – It shows that Google is not going down the RIA route, but the native application route.
    – It’s Google’s way of bringing advertising to mobile. Duh.

    Why Android is exciting:

    – It is the first software stack to use an open source license that actually makes sense. Android will be made available as open source via the Apache v2 license, which is a non-copyleft license. As such, OEMs, operators, distributors, etc. can add proprietary functionality to their products based on Android without needing to contribute anything back to the platform.
    – Handset producers (new term?) can add or remove functionality more easily, without being restricted by component-specific licenses, as is the case with Symbian OS and the Windows CE stack, for example. It allows white-label phones to be created by design, not as an afterthought.
    – As Nomura points out, it’s Google’s attempt to reach mass-market, mid-range phones, where Nokia’s S40 and Sony Ericsson’s EMP control the service game.
    – It will allow not only developer innovation, but also user innovation. “Users will be able to fully tailor the phone to their interests. They can swap out the phone’s homescreen, the style of the dialer, or any of the applications.” (source). See what happened to Facebook, which was designed to enable both developer and user innovation; it scaled and scaled beyond all expectations.

    Why Android is a breakthrough:

    – It’s the first time a mobile Linux stack gets a major long-term partner, which is what HTC’s chairman said was needed back in September. No other stack can come close to the multi-billion cash reserves that Google amasses.
    – The Android OS will be offered to OEMs for free, which makes Android the first true disruption for mobile phone operating systems, as it accelerates the commoditisation of mobile OSes and pushes the value line several thousand lines of code higher.
    – It’s the first time that core apps will be equal citizens to downloadable apps. This is a VERY big step forward for two reasons: a) It’s extremely challenging for any open OS developer to design a downloadable app that can replace the dialer, idle screen, inbox, calendar or contacts. There are very few open OS apps that can replace the idle screen or contacts, and they need serious know-how, access to manufacturer ABIs (hidden binary interfaces), lots of trial & error and the licking of OEM boots. Out of 1+ billion phones a year, there has been no innovation on core apps other than the Vodafone Simply phones, the LG Prada, the Samsung D900, the Windows Mobile 6 dialer and the iPhone. Android is now making this possible BY DESIGN, not as an afterthought, contrary to all other open OSes.
      b) By replacing core apps with third-party apps, it will be possible (and far easier) to design visually consistent UIs, where the usage experience feels like a single personality, from the startup screen through the idle screen, dialer, contacts and shut-down screen, without breaking the user experience. And hopefully designing a Barbie UI will be exactly the same drag & drop process as designing a BMW UI. This is happening in Japan already.
    – It is the first true service platform that allows content to be inserted at any point of the user journey, aka ‘widgets in any application’, aka Magpie (the 2002 Symbian project that was way ahead of its time). This allows core apps (dialer, inbox, contacts, calendar, etc.) to come alive, allowing internet services, ads/infomercials, content, alerts etc. to be inserted where relevant and in a *context-specific* way. Imagine seeing a weather icon next to a calendar travel entry, or location whereabouts next to a contact. Android brings your entire connected world of services onto your mobile – in the same way that Facebook brings it onto your ‘me-portal’ on the web. Only Android does this by enriching the familiar user journey, not by redefining it, acting like a ‘parasite’ or sitting on its periphery.

    The OS of the future might have arrived early. Fingers crossed.

    – Andreas

    [Update: a few readers have written back to question my optimism for Google’s Android. I could easily have criticised Google’s announcement, as many industry observers have done. I chose not to. I believe that the OHA and Android are strategic initiatives from Google which are more credible than previous Linux forums (LiPS, LiMo and GMAE included), and that there are likely several phones coming out from leading OEMs in 2H08. The New York Times seems to confirm this, reporting that “mobile phones based on Google’s software are not expected to be available until the second half of next year. They will be manufactured by a variety of handset companies, including HTC, LG, Motorola and Samsung and be available in the United States through T-Mobile and Sprint. The phones will also be available through the world’s largest mobile operator, China Mobile, with 332 million subscribers in China, and the leading carriers in Japan, NTT DoCoMo and KDDI, as well as T-Mobile in Germany, Telecom Italia in Italy and Telefónica in Spain.”]

  • What if handset features could shape and evolve with the user? the user side of mobile software management

    Today’s mobile handsets are highly underutilized; beyond calling and texting, tens of typical handset features go unused. Are handsets over-featured, crammed with capabilities that leave most users indifferent? Why can’t the user today pick a handset based on style and then choose the features they would like to include, much like choosing the extras for a new car? Why are today’s handsets so limited; why can’t you transfer a game from a friend’s handset? Why can’t you get FM radio functionality on a new 400 smartphone?

    The answers lie in how the handset software is designed, built and managed through the handset lifetime. While the software inside most phones is highly sophisticated, it is at the same time practically shaped into a rigid monolith. In a sense, the phone software from birth to retirement suffers from chronic arteriosclerosis; for all its PC similarities, the software is mostly immutable and unmanageable, fit only for the narrowly defined purpose for which it was designed two years before being sold. At the same time, the user is little interested in whether the handset menus can be coloured red, orange, blue or magenta by the mobile operator, but rather in how the phone could be made a bit more friendly and a bit more personalised.

    A shallow dive into mobile software management

    Mobile software management (MSM) is a new wave of technologies that allow the handset software to be turned from a monolith into soft clay. MSM technologies treat the software as malleable, from the design stage and embedding on the device, to configuring at the point of sale, installing features post-sale and prolonging its use until the handset is retired. From a technical perspective, MSM enables the management (deployment, installation, activation, update, de-activation and removal) of software components (applications, handset features and their dependencies) on any device, throughout the software lifecycle (from architectural design to manufacturing and post-sale). The umbrella of MSM technologies encompasses firmware over-the-air (FOTA), user interface management and enterprise device management (applications traditionally considered within the scope of mobile device management, MDM), and extends into software variant development, service lifecycle management, feature customization and dependency management. The next diagram shows the taxonomy of technologies under the umbrella of MSM and their relationship with mobile device management.

    Figure: Taxonomy of MSM and MDM applications (source: VisionMobile research)

    One fundamental difference between MDM and MSM is that, while MDM offers control of the higher-level functionality of the device (e.g. skins, ringtones, contacts backup, antivirus and device detection), MSM extends far deeper into the handset, allowing any part of the internal device software to be manipulated. MSM is a much more powerful and much more complex set of technologies, especially for mass-market (non-open-OS) handsets; the deeper you go into the handset to manage software components, the more surgery you need to perform. Adding or removing software is like performing organ surgery where each organ is highly interconnected, making interventions a highly complex operation. Mobile software management is a relatively new and still emerging market, both technically and with regard to commercial deployments. But demand and supply are both ramping up to exploit the benefits that MSM brings to both the user and the mobile industry.

    What does all this mean for the user?
What does all this mean for the user?

The MSM umbrella of technologies brings several distinct, welcome and newfound benefits to the user:

– Buy, pick & mix. MSM goes far beyond installing ringtones at the time of handset purchase. At the operator retail shop, the user can personalise the handset features, for example adding FM radio functionality, upgrading the camera resolution or adding a stereo enhancer to the built-in mp3 player.

– Accessorise me. A month after handset purchase, the user can log in to the manufacturer's website and accessorise their phone with an automatic photo panorama function or a real-camera upgrade for an instant click-to-shoot experience. The features are automagically downloaded and installed on the handset in a matter of seconds.

– Dress me up. MSM goes far beyond changing wallpapers. Through a dedicated on-device portal the user can preview and buy complete UI themes; themes that change the way the phone looks and feels, from dialling up a contact to texting and taking pictures. Purchasing a new real-theme morphs the user experience from a pink Barbie look to a sleek, silver BMW look; and themes can change to reflect the user's mood.

– Fix me. Operator Telefonica uses MSM technologies like firmware OTA to offer customer reassurance; the user does not need to go into a repair shop when there's something wrong with their handset; instead they can call up customer services and have the software fixed over the air in a matter of minutes. In the near future, handset software will be able to be fixed or upgraded in a matter of seconds.

– Check me up. Automatic, proactive fault detection means that the user can rest assured that their contract includes a monthly health-status check-up and monitoring. Like a car's regular check-up, only in the case of handsets it happens overnight, and without an inconvenient trip to the service centre.

– Supersize me. With MSM the user can get the latest and best features and exclusive promotions on the handset before their friends do. Supersizing can be bundled or offered as part of an add-on monthly subscription.

Easing the industry's headaches

Naturally, mobile software management aims primarily to solve many of the challenges facing handset manufacturers, mobile operators, service providers and software developers. As such, MSM technologies enable a range of scenarios and associated revenue opportunities for industry players:

– Configuration of the handset software as part of 1-to-1, just-in-time customer segmentation at the point of sale, installing, removing and updating features based on the specific customer profile.

– Ability to install and update a new application environment like Java MIDP3 or Flash Lite on the existing handset installed base.

– Ability to personalise the handset user interface twice a month as part of the user's subscription package.

– Upgrade of the handset installed base to support a new 4G network technology and associated data services introduced by the operator.

– Granular monitoring and management of the services supported by handsets in the field by an enterprise.

– Repurposing of handsets for a new channel and customisation for a specific market segment, after the handset has left the factory.

– Component-based software integration and testing as the handset parts move along the value chain, reducing cost and time-to-market for the handset OEM.
Behind the industry scenes: deployments and vendors

Based on 20+ interviews with software vendors, network operators and handset manufacturers that we conducted for our VisionMobile report on Mobile Software Management, we understand that at least eight trials of mobile software management technologies are underway as of H1 2007. The supply side of the MSM market is at a nascent stage. An evolving puzzle of tens of vendors, principally active in MDM and handset software development, is moving to exploit the revenue opportunities in MSM. The next diagram shows how vendors from multiple solution categories are participating in mobile software management.

MDM incumbents HP, InnoPath, mFormation, Nokia, Sicap, SmartTrust, Smith Micro, Synchronica and WDS Global are introducing applications that increasingly extend into MSM, such as service lifecycle management and user interface management. Software platform providers Nokia (S60), Symbian, Microsoft (Windows Mobile), EMP, Qualcomm (BREW) and Mentor Graphics (Nucleus) are enhancing their platform modularity, reshaping the software architecture into configurable, updatable building blocks. SIM card manufacturers like Gemalto are showcasing SIM-driven software configuration. Software houses like Abaxia and Cibenix are launching SIM-based software configuration and on-device portals for browsing and buying software, respectively. Vendors Red Bend and Open Plug are offering solutions for granular feature customization at the software platform level, throughout the software lifecycle and across phone tiers. This amalgam of MSM technology and service vendors will continue to expand and evolve over the next three years, to exploit the revenue opportunities ingrained in the many user and industry scenarios opening up.

– Andreas

Coming next: a detailed view of the technologies, players and benefits of mobile software management from the industry perspective. The VisionMobile research report Mobile Software Management: Advances and Opportunities in Service Delivery dissects the complex array of MSM technologies, reviews eight major vendors, presents several operator case studies and uncovers key market trends within mobile software management. The paper is available as a free download from www.visionmobile.com/whitepapers.

  • Carnival of the Mobilists #97

    Welcome to the 97th edition of the Carnival of the Mobilists! This week's Carnival is hosted at the VisionMobile Forum. It's been another busy week for mobile industry observers. Om Malik analyses the volume of LBS deals in 1999-2007 and shows how the number of deals is really peaking in 2007; not a coincidence given the numerous GPS-capable handset models on manufacturer roadmaps for 2008. Openwave, once the unshakeable market-share leader in mobile browsers, revealed its 1Q08 revenues and a steep drop in license and service revenue. Mozilla announces Prism, a tool that gives web applications their own window/desktop presence and shows that “the desktop isn't dead at all and that a hybrid approach is a successful way to go”, according to ZDNet's Ryan Stewart. The blogosphere has also been buzzing with debate as to how soon Java ME will be eclipsed or superseded by Java FX Mobile (aka SavaJe). So let's look at what's in store at this week's Carnival of the Mobilists.

One of my favourite analysts, Chetan Sharma, has written a very detailed and analytical CTIA Wireless IT and Entertainment 2007 Roundup. Chetan writes about the openness touted by Facebook, Microsoft and RIM, the progress in mobile advertising, WiMAX picking up steam, the anachronistic pitches of US operators and how mobile video has (not really) changed. Chetan also comments on the recent activity in the LBS landscape: “I have been working in or following this space since 1995 and it finally feels that there is going to be some activity in this space after years of posturing, delays, and hype”, which strikes a chord with my thinking; it seems that built-in GPS support by major handset manufacturers in 2008 is acting like a magnet for a horde of LBS startups and deals.

On the subject of location services, Tarek Abu-Esber talks about how Google Maps still has minor glitches. Tarek puts his engineering hat on and Fixes GPS for Google Maps on the HTC TyTN II.

Abhishek Tiwari postulates the structure of Google's rumoured mobile OS in GPhone If I Built It. His analysis suggests that the OS would consist of three layers: base OS (where the OpenMoko distribution may be used), messaging/productivity/media (where we're likely to see an integration of Gmail, Gtalk, Orkut and GrandCentral) and application ecosystem/revenue enablement. Abhishek writes thoughtfully: “I see a lot of power within the contacts list. The contact list is the user's true social graph, which can offer much more than just phone numbers.” I totally agree that the contacts list will become the centre of the user journey, and Google might just show us how.

C. Enrique Ortiz at the mobility weblog writes about Interaction Triggers in Mobile Applications. He breaks down triggers into dial + voice, texting, URL, visual tags (2D codes, etc.) and radio tags (NFC, etc.). Somehow I feel there is an important lesson in Enrique's abstraction of interaction triggers, but the article is very terse.

Martin Sauter writes about a popular topic in the mobile industry, IMS vs. Naked SIP. Martin analyses the features and capabilities which the Naked SIP protocol lacks, but which exist in the ‘operator sanctioned’ IMS architecture. Interestingly, Martin notes that Naked SIP is “already implemented in some 3G phones such as Nokia N-Series and E-Series S60 phones”, continuing to say “I have yet to see an IMS capable terminal in the wild”. This is yet another reminder that mobile operators always finish the innovation race last.
Dennis Bournique at WAP Review writes about Opera Link, Opera Mini 4 Beta 3 and Opera 9.5. Dennis attended the Rock Opera party in San Francisco (sounds cool!) and writes about how Opera Link can keep your web surfing activity synchronised across all the browsers you use, on multiple desktops and mobile devices, even if they aren't all running Opera browsers. Dennis explains how Opera Link works and discusses the many new features in the latest versions of Opera Mini, which according to Dennis “delivers a mobile browsing experience rivaling the best browsers on the latest smartphones”.

Jason Devitt at Skydeck writes about how Sprint Will Start Unlocking Phones. Following a class-action lawsuit, Sprint Nextel has agreed to unlock customers' phones at the end of their contracts and to activate non-Sprint phones on the network – however all is not lost for Sprint. Jason's short analysis talks about how Sprint could benefit from this change, in terms of net adds and lower CPGA. I like Jason's realistic view of the repercussions: “Data services may not work, but those looking for the cheapest option in the market won't care that they can't subscribe to VCast.”

Steve Litchfield at All About Symbian writes in fury about why Motorola and Sony Ericsson need to ‘get’ it. Steve recounts his frustrating experiences from visiting the first Sony Ericsson store in London and trying to snap a picture of the Motorola Z10 at the Symbian Show. “Motorola are appalling, quite appalling at keeping journalists informed and resourced. While, in contrast, Nokia consistently go out of their way to keep a flow of press releases coming, to provide all press materials needed, to run a sumptuous blogger relations program, to think of new and innovative ways to fire people's imaginations, and so on.” I was also at the Symbian Show and I have to agree with Steve: what on earth were the Motorola PR/AR people thinking when they put those basketball jugglers there, especially in London of all cities?

My friend Ajit Jaokar at Open Gardens writes about Widget once run anywhere and Opera Widgets on KDDI handsets. Ajit makes a thought-provoking observation when he says “widgets are a much more likely driver of client side service convergence”; in other words, if operators can ensure that the same widgets are available across handsets and terminals, they have a better chance of delivering service convergence and reducing churn.

Antoine RJ Wright deliberates on the relevance of Web/Mobile 2.0. Antoine pauses to think outside the box of the ‘2.0 hype’ and concludes: “that is where I see a lot of the web/mobile 2.0 movement failing. There are a ton of services and applications out there. But very little that has made Joe and Suzie Consumer run out and try it.”

And finally, for the fashion-conscious reader, Doris Chua asks What's your favourite colour for a phone?

The post of the week award goes to Chetan Sharma's very detailed and analytical CTIA Wireless IT and Entertainment 2007 Roundup. And if you are still reading, stop over to read our lengthy analysis, Motorola's UIQ: Diversion or U-Turn?, which postulates why Motorola's Linux strategy has been facing an uphill struggle and why the UIQ investment does make sense as a medium-term diversion.

Next week tune in to Michael Mace's excellent Mobile Opportunity for the 98th installment of the best of mobile blogging! Which reminds me that it's only three weeks until the Carnival hits the magic number 100!

– Andreas

  • Motorola's UIQ: Diversion or U-Turn?

    In a surprise announcement last week, Motorola agreed to buy 50% of UIQ Holdings from Sony Ericsson. Pending regulatory approval, Motorola's co-ownership of UIQ calls into question the US-based OEM's vision for mobile Linux handsets. Motorola has shipped more than 9 million Linux-based handsets to date, while in August it reaffirmed its commitment to base as much as 60% of its device portfolio on a Linux operating system by 2012. So did Motorola have a sudden change of heart, and is this a diversion or a U-turn from Linux?

The dent in Motorola's Linux vision

Motorola has to date made huge investments in building a mobile Linux platform. The investment began in 2001 with Mark Vandenbrink's Beijing-based team, tasked with developing an in-market, for-market operating system with reduced costs for the manufacturer. The OS, initially known as EZX, is based on MontaVista's Linux-based kernel and uses Trolltech's Qt/E (now Qtopia) for graphics and application framework, both of which have been heavily customised by Vandenbrink's team. Over time, Motorola replaced the 2.4.20 kernel used in EZX with a newer 2.6.10 kernel and renamed the platform L-J (for Linux-Java), which uses Sun's KVM virtual machine for supporting third-party applications. During Motorola's six-year Linux history, the OEM has launched around 15 models (A1200, A728, A732, A760, A768, A780, A910, E680, E680g, E680i, E895, MING, ROKR E2, ROKR E6) for the Chinese market and recently the RAZR2 V8 for the US and European markets. At LinuxWorld in August 2007, Motorola re-baptised the L-J platform with the more brand-policy-friendly name MOTOMAGX and reaffirmed that “in the next few years, up to 60% of Motorola's handset portfolio is expected to be based on Linux”. The remainder of Motorola's portfolio would probably be powered by Windows Mobile for enterprise devices, UIQ for high-end handsets, and TTPCom's Ajar for low-end handsets.

Yet only two months later, Motorola bought into a 50/50 ownership of UIQ from Sony Ericsson. As a Symbian spin-off, UIQ (along with its Symbian OS base) is practically a competitor to Motorola's MOTOMAGX. Is this a dual-supplier strategy for Motorola? Hardly, as the OEM is under extreme financial pressure and needs to urgently trim its cost base. Motorola's operating profits have suffered a major blow in the last year, dropping from an operating profit of US$ 819 million in 3Q06 to an operating loss of US$ 332 million in 2Q07, according to Fitch Ratings. At the same time its market share dropped from 21.1% in 2006 to 14.6% in 2Q07, according to Gartner. In response, Motorola announced a reduction of 15% in R&D budgets. Is the UIQ announcement a tactical move? Certainly not. Motorola could easily continue licensing UIQ from Sony Ericsson as it did for its five Symbian OS-based handset models to date: the A1000, M1000, A925, A920 and the recent Z8.

Analysing Motorola's change of heart

Motorola's financial troubles are only the trigger for the change of heart in its Linux single-platform strategy. I would argue that there are four reasons for Motorola's rethink of its Linux strategy.

1. Motorola has been quietly trying to develop a single-core version of its MOTOMAGX platform, in order to reach a planned 50-60% of its handset portfolio, as indicated by the announced-but-never-released MotoRizr Z6. The move from a dual-core to a single-core architecture would mean a major re-architecture, as a single-core OS has to run both the applications and the modem stack.
Virtualisation techniques (see WindRiver, Trango and VirtualLogix) are designed to facilitate single-core OS development, but I suspect Motorola would still have to re-architect major parts of its Linux-based OS even if it used virtualisation. [updated: a reader reports that the MotoRizr Z6 has been released in China and is based on Freescale's Starcore single-core CPU. According to the same source, the Z6 performs far better than the ROKR E2. This finding would imply that Motorola is 6-9 months ahead of any other OEM in launching a single-core-based Linux stack. Well, it turns out that Motorola's Z6 is based on a dual-core Freescale MXC275-30 SoC (single-chip) architecture, and not a single-core one. This is according to a Freescale presentation which you can find here. The model name has been changed from MotoRizr Z6 to MotoRokr Z6, according to a LinuxDevices report, while the handset appears to be available in 20 countries according to the same report. Moreover, given that no single-core Linux handset has been released by Motorola, this strengthens the argument that single-core Linux remains a challenge for the Linux OEM champion.]

2. Motorola has invested man-centuries into building MOTOMAGX, based on MontaVista's Mobilinux Linux support package and Qt/E (an old version of Trolltech's Qtopia). Motorola has had to add lots of glue and optimisations on top of Mobilinux and Qt/E, so a migration away from these components would mean significant re-investment. Yet this is exactly what Motorola would have to do if it is to keep its costs down for equipping more than half of its product portfolio with Linux and at the same time migrate to a new scalable, end-to-end UI framework architecture, as other handset OEMs are doing. MontaVista and Trolltech make money by selling developer seats, and in 2Q06 Motorola had ordered 200 developer seat licenses (in addition to the 100 it already had).

3. Motorola's 9 million Linux-based phones up to mid-2007 have shipped in China and Latin America, primarily due to the relaxed device testing/certification and operator customisation requirements in these countries. The RAZR2 V8, which started shipping in July 2007, has been the first Linux-based phone for Western and European markets. It is likely that Moto's Linux platform strategy met with long delays due to the increased requirements for handset certification (GCF in Europe and FCC in the US), the stringent network interoperability testing requirements (particularly with US operators) and the need to comply with voluminous operator customisation requirements (in Europe these run to 4,000 lines of requirements per handset and change twice yearly).

4. Motorola has been the leading force behind the foundation of LiMo, providing its chairman and its chief architect, according to Nomura's Richard Windsor. In addition, Motorola has committed to contributing multiple software components to LiMo (package model, application execution model, architecture, registry (with Samsung), security policy, certificate manager, event system and input method (with Panasonic/NEC)). However, the LiMo Foundation has recently more than doubled in size, while its complex licensing models and unfamiliar processes for new contributions have likely resulted in more delays and higher resource commitments for Motorola. As the leader in LiMo, Motorola may have deemed that incorporating LiMo requirements into its MOTOMAGX platform would prove too costly.
Why Motorola invested in UIQ

The number one priority for handset manufacturers is to make money from handsets. Handsets come first, while software and hardware platform strategies come second. At a time of extreme financial pressure and competition from Nokia and Samsung, Motorola had to stick to its product commitments and seek an alternative software platform for launching its handsets in 2008 and beyond. Ajar, its other in-house platform, acquired alongside TTPCom, is designed for low-end handsets, not mid-range or high-end ones, and was intended to complement the L-J platform. The manufacturer has also launched Windows Mobile-based handsets, but these have been targeted at enterprises, not consumers. Motorola has also launched five handset models based on UIQ (the M1000, A1000, A925, A920 and Z8), so why not continue pursuing a typical platform licensing strategy?

Let's do the math. Motorola has been looking to scale its Linux-based handsets from an estimated 2-3% portfolio share in 2007 to a claimed 50-60% in 2012. Assuming linear growth and a steady 15% mobile device market share, this would mean that the OEM would need to build around 225 million handsets based on Linux in the next five years, of which around 50 million in the next two years. Now assuming that a tenth of Motorola's 2008 and 2009 portfolio of Linux handsets would have to move to UIQ, at a $3 royalty this means a cost of $15 million to Motorola. How much is UIQ worth? Symbian's financial statements do not yet account for the UIQ sale, so it's difficult to tell. Nomura reports that 1.2 million UIQ phones shipped in 2006 (a paltry 2.3% of Symbian-based smartphones), which at around $3 per-unit royalties implies $3.6 million in annual revenues; at a x10 valuation factor, UIQ is worth around $30 million. Therefore the value of Motorola's acquisition of 50% of UIQ is about the same as the licensing cost for 10% of its 2008/9 planned portfolio of devices, were Motorola to replace Linux with UIQ on those devices. With 1.2 million devices shipped in 2006 and nearly 150 staff, UIQ is definitely a loss-making business.
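For readers who want to follow the arithmetic, here is the same back-of-the-envelope calculation as a short Python sketch. Every input is an assumption quoted above (the ~$3 per-unit royalty, the x10 revenue multiple, Nomura's 1.2 million UIQ units and the 50-million two-year Linux portfolio), not reported data.

```python
# Back-of-the-envelope check of the figures above; all inputs are the post's
# own assumptions, not audited numbers.

linux_units_2008_2009 = 50_000_000   # planned Linux handsets over the next two years
share_moving_to_uiq = 0.10           # assume a tenth of those would move to UIQ
royalty_per_unit = 3.0               # ~$3 per-unit UIQ royalty

licensing_cost = linux_units_2008_2009 * share_moving_to_uiq * royalty_per_unit
print(f"Licensing UIQ for 10% of the 2008/9 Linux portfolio: ${licensing_cost / 1e6:.0f}m")
# -> $15m

uiq_units_2006 = 1_200_000           # UIQ phones shipped in 2006 (Nomura)
uiq_annual_revenue = uiq_units_2006 * royalty_per_unit
uiq_valuation = 10 * uiq_annual_revenue   # crude x10 revenue multiple
print(f"Implied UIQ valuation: ${uiq_valuation / 1e6:.0f}m; a 50% stake is ~${uiq_valuation / 2e6:.0f}m")
# -> $36m, which the post rounds down to ~$30m; either way a 50% stake costs
#    roughly the same as the licensing fees it would replace.
```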
Therefore, Motorola's move was clearly strategic, with the OEM hoping that the UIQ software will help reduce platform TCO and time-to-market at a time of a challenged Linux strategy. Longer term, Motorola would benefit from reduced licensing costs, a major stake in defining UIQ's roadmap and hopefully a profitable licensing business once the UIQ one-handed interface penetrates higher-volume devices. The 50/50 split shows an alignment of incentives between Motorola and Sony Ericsson, but also hints at the fragile balance between the two competing manufacturers down the road. The sale of 50% of UIQ also makes sense for Sony Ericsson, which is dealing with a massive and rapidly growing cost base, with employee numbers more than doubling from 142 staff in February to over 350 in October, and new offices in Budapest and London.

There are two more noteworthy consequences here:

1. To support Linux developers at a low cost, Motorola essentially handed off development of its Linux SDK to Trolltech. The Norwegian software vendor was probably eager to accept, given the poor performance of Qtopia's dual-licensing strategy and the industry alignment behind the competing GTK graphics library (LiPS, LiMo and GMAE all endorse GTK, not Qtopia). [updated: I spoke to Trolltech, who clarified that they don't have direct responsibility for Motorola's SDK. Motorola licenses Trolltech's Qtopia SDK and is free to make it available to handset application developers.]

2. Motorola's UIQ strategy will likely lead to a reduced presence in LiMo, particularly since the OEM is heavily resource-constrained and because LiMo has specified a GTK-based graphics stack. To keep its membership in LiMo and save face, Motorola would most easily contract an ODM to release Motorola-branded devices based on other distributions, such as that of Linux stack vendor Celunite (who recently joined LiMo), rather than re-architect its own stack.

Repercussions for Symbian

In the short term, Symbian's industry valuation and prospects have been significantly strengthened as a result of Motorola's UIQ co-ownership. In the medium term however, Motorola does not have any ownership in Symbian, and so I doubt the co-ownership of UIQ will impact Nokia's near-majority influence (read: control) over Symbian OS. Longer term, the OS game for UIQ stakeholders becomes quite interesting. The Symbian stack provides little value above the kernel and drivers (Symbian has essentially become akin to a board support package) – read the latest specs of S60 (here) to see that not only the application suites and UI frameworks, but also the vast majority of middleware, have been drawn out of Symbian OS. In other words, the value of the Symbian OS software stack is similar to that of a zero-royalty Linux-based stack from the likes of MontaVista and WindRiver. [updated: I realise that the above argument is not backed up with hard data. I hope to delve into some research to detail the components in the Symbian stack versus those in the MontaVista/WindRiver distros and what the *sale* value of each component is.] However, while academically it's possible to replace Symbian with a Linux support package (kernel, hardware drivers and base OS functions), it is an expensive undertaking, of the order of $50 million. Given that Sony Ericsson and Motorola are facing difficult times competing with the Nokia giant on launching successful handsets, a large-scale investment in a software platform is hardly the priority these days.

Diversion or U-turn?

Motorola's acquisition of 50% of UIQ is clearly a strategic initiative, which will likely continue with the launch of several high-end handsets powered by UIQ in the next year. At the same time, Motorola has far too much invested in Linux. It was only in late 2006 that Motorola's Christy Wyatt said that “there isn't a group within Motorola's 70,000 workforce that isn't impacted by Linux and open source in one way, shape or form”. Consequently, UIQ is a diversion, not a U-turn for Motorola. While R&D investments are being curtailed, the manufacturer will naturally try to re-use its assets and existing Linux-based research, especially since its mobile Linux software know-how is second to none in Western markets. Medium term, Motorola will likely pursue a dual-OS strategy (UIQ and MOTOMAGX), with the more ambitious and demanding handset projects (especially in Western markets) powered by the mature UIQ platform. Longer term, the fate of Motorola's Linux platform strategy will depend on the success of the UIQ investment and on how UIQ helps the OEM constrain the total cost of ownership, cost of variant creation and time-to-market for its new handsets. Time will tell.

– Andreas

  • Three reasons for a Google-phone

    The cat's out of the bag. Google is creating an operating system for mobile phones, according to mainstream news media. A phone OS would make sense for the search and paid-advertising giant (after all, Google is a software company), but it doesn't make sense for Google to branch out into making phones, right? Wrong – and there are three good reasons for a Google phone.

Beyond the gossip and rumours, details on Google's mobile phone OS have hit the mainstream media. According to the NY Times: “In short, Google is not creating a gadget to rival the iPhone, but rather creating software that will be an alternative to Windows Mobile from Microsoft and other operating systems, which are built into phones sold by many manufacturers. And unlike Microsoft, Google is not expected to charge phone makers a licensing fee for the software.” So why would it make sense for the Google software company to branch out into making actual phones? There are three good reasons:

1. To seed the market. Google needs to scale its mobile platform if it is to have any relevance to advertisers. Google can do that by seeding the market with its own phone, hoping that others will follow.

2. To create a reference platform. Google creates a commercial proof-of-concept phone that is robust and cheap enough to produce, so that it forms an enticing proposition for any ODM (or even OEM) who wants to use the Google software and service platform (as in ‘look, we did it, and so can you’). A Google-phone would not only be a proof of concept, but also a proof of viability and cost of ownership, and sugar candy for ODMs and OEMs who want to get into the mobile service business (basically, everyone who doesn't have Nokia's operating profit margins).

3. To set pricing and marketing norms. A good platform strategy player needs to also introduce a product that runs on top of the platform; this is a vital constituent of a platform strategy that depends on product complementors, so as to set a precedent and the norms for product pricing and marketing. (For case studies see the HBR article With Friends Like These: The Art of Managing Complementors.)

(updated: there's actually a fourth reason)

4. To productise the Linux-based software stack. According to the same NY Times article, Google's software stack is based on open source Linux. As experience shows, open source projects cannot be productised without the help of a commercial sponsor. What a sponsor adds is not necessarily money, but commitment and the disciplined drive to move from beta-state software to a finished, tested, working product. While individual contributors to open source software care about scratching their own particular ‘itches’, commercial sponsors usually care about productising the software in a form that works out of the box, with zero tinkering. In the case of Google's Linux-based software stack, hardware integration, testing and quality assurance are essential in order to move the open source stack from ‘in a working condition’ to ‘ready to integrate’ status. A Google phone would provide this much-needed productisation and the finishing touches to the open source software stack.

A Google-phone would make a lot of sense indeed.

– Andreas

  • GPLv2 and GPLv3: licensing dynasty or end of the road?

    The GNU GPLv3 license, successor to the pervasive GPLv2 license, was published in June 2007. Following publication, several discussions have sprung up regarding GPLv3's interpretation, as well as its perceived benefits and drawbacks compared with GPLv2. So why the debates and disagreements, and is GPLv3 really important anyway?

Firstly, one must consider GPLv3 in the context of GPLv2. To date, GPLv2 licenses the vast majority, typically 60-70%, of all FOSS (free and open source software) projects. Moreover, the Linux kernel is licensed under GPLv2 and is used in increasing numbers of consumer electronics and mobile devices, thus furthering the proliferation of GPLv2. These attributes give GPLv2 a privileged position in the league of FOSS licenses. Therefore any successor license has the capacity to greatly impact the FOSS community.

Historically, GPLv2 was a watershed when first published 16 years ago due to its copyleft properties. These are intended to ensure that users of the license continue to receive the source code and derivatives of GPLv2-covered code, thus preserving users' “freedom to run, copy, distribute, study, change and improve the software”. As successful as it has been, GPLv2 has also attracted a certain amount of criticism. The criticisms concern the difficulty of interpreting the license due to the lack of formally defined terms, differing views about what makes a derivative work, and an ambiguous patent license grant.

In writing GPLv3, the FSF (the Free Software Foundation, original publisher of GPLv2) set out to rectify these concerns as well as advance the license in light of the contemporary themes of patents and digital rights management. Specifically, GPLv3 introduces new terms regarding the DMCA (Digital Millennium Copyright Act), a new patent provision and new anti-Tivoisation mechanisms. Firstly, the section titled “Protecting Users' Legal Rights From Anti-Circumvention Law” is intended to prevent GPLv3-covered code from being included in technology or products that would be used to enforce the DMCA. Secondly, there is an explicit patent provision in GPLv3, but some argue that the wording used is not particularly clear or straightforward. Thirdly, the anti-Tivoisation section appears to place very specific additional obligations on distributors to provide source code and its installation information.

So what is the impact of these new terms? Is GPLv2 better than GPLv3? What are the differences? What are the similarities? If I were starting a FOSS project now, would I use GPLv2 or GPLv3? In our just-published white paper, GPLv2 versus GPLv3, The Two Seminal Open Source Licenses: Their Roots, Consequences and Repercussions, we explore these issues in detail. There are many issues that need to be considered in making such decisions, and these criteria are explored and reviewed further in the paper.

The end of the road seems unlikely, particularly given that nearly 600 mature open source projects have already moved from GPLv2 to GPLv3, a transfer rate of about 10% of existing software projects (see Palamida's website for more details). Additionally, on 10th September 2007 the Open Source Initiative (OSI) announced its approval of GPLv3, thus providing formal endorsement of the license. Whilst only time will tell whether GPLv3 continues the successful legacy of its predecessor, we can for now analyse the issues and contemplate its future.

– Liz

P.S. We are at the OSiM conference in Madrid this week; come and speak to us if you are there too.

  • Sun's open source Java policy will mean very little for the mobile industry

    [last part of the series on five traits of open source and its impact on the mobile industry. See also part 1, part 2, part 3 and part 4.]

In early November 2006, Sun proclaimed the most significant shift in Java strategy since the launch of the software platform in 1995. The US software giant announced that it is licensing several key components of the Java for mobile (Java ME) and desktop (Java SE) platforms under an open source license. With this move, Sun provides the Java ME and Java SE platform reference implementations not only under its traditional commercial license terms, but also under open source license (GPL) terms. Furthermore, Sun has created a web-based repository for open source Java projects (the Mobile & Embedded community), and announced a governance model for it. It is worth stressing that the Java Community Process (JCP), the process by which third parties can play a role in the future of the Java platforms, remains unaffected. With the open source strategy, Sun's goal is likely to incentivise the industry into adopting a single reference Java implementation and to mitigate the threat from the rapid market penetration of Adobe's Flash as well as other competing application environments such as BREW.

The fundamentals of open source Java

To estimate the implications that Sun's Java open source strategy will have on the mobile industry, we should consider four fundamental elements of Sun's licensing and trademark policies.

1. Sun's choice of the GPL license requires third-party modifications to also be distributed under the GPL at no charge. This dis-incentivises handset manufacturers from even accessing the GPL code, due to IP contamination concerns.

2. Sun's open source Java phoneME Feature and phoneME Advanced projects are optimised reference implementations, but ones which have not been optimised for specific phone hardware. Handset manufacturers today compete heavily on Java platform optimisation to accelerate the performance of Java applications and games on each handset, as a means of differentiation. Even if an OEM takes Sun's optimised implementation of the MIDP2 JVM, it would have to further tweak the JVM to adapt it to its particular hardware and add its own memory or speed optimisations.

3. Sun Microsystems retains the trademark to the Java term and the copyright on the cup-and-steam logo. Sun requires handset OEMs and Java implementation vendors to pass TCK certification tests for the base CLDC and CDC platforms (at a considerable cost) in order to be able to claim that their handsets are Java Compatible.

4. Sun's phoneME GPL branch excludes about 5% of the source code, corresponding to ‘IPR-encumbered’ code which Sun does not have the right to release under the GPL. This means that you can't build the full phoneME project from source code.

On the other hand, Sun did a couple of things right. Firstly, contributors to the GPL branch of phoneME have to assign their copyrights, thereby allowing Sun to integrate these changes directly into the commercial branch. This prevents the divergence of the two code branches in a dual-licensing model, as happened to Trolltech's Qtopia. Secondly, Sun receives direct feedback from developers and is able to use that feedback in product planning.
Too much effort, too small a change

Overall, I believe that Sun's Java open source policy will change very little in the mobile industry; the choice of the GPL license protects Sun's revenue stream from licensing optimised implementations, but scares off handset OEMs (in fact I recently had a conversation with an exec at a top-5 OEM that confirmed just that). Sun also maintains its revenue stream from TCK licensing, thanks to its Java trademark.

Was Sun too greedy in choosing the GPL? Perhaps, as Sun's optimised implementation is mostly licensed to ODMs; OEMs use their own in-house optimised implementations. Did Sun have any other choice? Probably. It could have licensed the ‘core’ JVM under the GPL and the hardware-dependent code under a non-copyleft license like the Apache license, so that OEMs could optimise for their hardware. When I mentioned this to Sun last week they sounded interested in the idea, but said it might be too complex from an organisational perspective.

I would also argue that Motorola's intention to release Java MIDP3 under open source will likely lead to more uniform adoption and more consistent implementation of MIDP3 across mobile handsets. This assumes Motorola sticks to its promise of releasing the full MIDP3 source code under the liberal Apache License 2.0; Motorola wants first to test the waters by releasing the code under the Motorola Extensible License, which is a copyleft license. Motorola will also release the TCK under an open source license, which means it will be cheaper for OEMs to certify their implementations as MIDP3-compliant.

– Andreas

[this article has been updated following a briefing with Sun at the Informa Open Source in Mobile conference]
