Sunday, April 21, 2013

Takeaways from Xamarin Evolve, Part 2: Platform as a Strategy

In the previous entry in this series, I roughly sketched and reacted to the value proposition of the Xamarin platform. And a platform it is: "CLI everywhere else that matters" might be considered Xamarin's target market, while their marketing goal is to "delight developers."

If that last catchphrase sounds familiar to you, you're not alone.  I was not the first among my peers at the conference to suggest that a Microsoft acquisition of Xamarin would make sense, and there was a significant contingent from Redmond in attendance. To potential customers, such interest from a big, established company might lend an air of credibility or permanence, at least at face value. Such a marriage, however, would inevitably be a disservice to Xamarin's customers, Mono, the geniuses who brought .NET into the world, and even Microsoft.

In this article I hope to elucidate the considerations in choosing Xamarin by explaining why a Microsoft acquisition would destroy what they've created.

The primary impetus for creation in the open source community is to "scratch your own itch", followed closely by a need to be recognized among your peers, a currency sometimes referred to as "ego boo".  Though Xamarin has its roots in open source in a big way, it's a business, one that must profit from the work of its many simians, whose number continues to grow.  The direction they have chosen to attain that profit speaks directly to the niche they've carved for themselves.

The underlying platforms to which Xamarin provides access are a moving target: living, breathing platforms themselves, they are updated continuously and update end-user expectations just as often.  So, Xamarin's tools will never be done, and therefore they sell a perpetual license to use their tools with only a year of free updates to the same.  This is a healthy, virtuous model that mirrors customer expectations.

Xamarin sells tools to target a platform, one I'll continue to refer to as "CLI everywhere else that matters". Microsoft sells platforms with tools that support and enrich them.  Xamarin's profit motive is based on ensuring its customers have the broadest reach with their toolset as possible.  Microsoft's profit motive is based on vendor lock-in.

Granted, if you've written a lot of code targeting Xamarin.iOS or Xamarin.Android, you don't have any other options (at the moment) to take that code to market.  However, Xamarin has built their business around making sure you'll buy the next version of their tools, something you'll have to do when the platforms change. The relationship is symbiotic, not subordinate.  (Apple is a fiefdom.  Azure is a platform.)

The real sweet spot for Xamarin is companies building "universal" applications.  If you need a non-trivial client on several platforms (including iOS, Android, Windows, Windows Phone, and Mac OS X), Xamarin is your best and only option.  They had a scare some time ago when Apple changed its iOS terms to exclude from the App Store apps that hadn't been written with a C/C++ compiler.  Apple recanted, but one could imagine a similar dust-up should Microsoft acquire Xamarin.  Today Xamarin occupies a role similar to Adobe's, that of a tool vendor.  As long as they remain such, it's a safe bet to build your applications with their tools.

Central to all of this discussion are the CLI and C#.  From its early days the CLI was an ECMA standard, and a FreeBSD implementation existed as far back as I can recall, though more as an academic exercise.  With Xamarin's work, the CLI is a commercially viable substrate on a much broader spectrum of devices, essentially validating the work done by the great folks within (and around) Microsoft who created such a powerful and eminently implementable language infrastructure.

A fairly modern language with open implementations, a well-defined standard, largely unencumbered by IP baggage, and a huge user base, C# stands strong facing the future.  Continued investment in it seems a good hedge against the inevitable volatility of the technology sector; Xamarin will provide the leverage.

Friday, April 19, 2013

Takeaways from Xamarin Evolve, Part 1: The Value Proposition

In my day job we have several different consumer apps targeting lots of platforms.  For some time now I have been exploring the option of moving some of that development to Xamarin's toolchain.  Ostensibly, we could reduce our technology footprint, with concomitant benefits to our engineering costs.  In this article we examine the value of the Xamarin approach vis-à-vis native toolchains and HTML5.

Let's first turn our attention to the nominal benefits of a reduced technology footprint.  The Xamarin approach means you can use one IDE and one language to target some pretty diverse platforms: Windows Phone, Windows, iOS, Mac OS X, and Android.  As most products involve a server component, we get even more leverage from our choice, targeting great platforms like ServiceStack. This benefit is not without some caveats, however.

Code reuse, for example, seems like an obvious benefit to using one language across the tiers and targets that comprise your ecosystem.  Indeed, sharing DTOs, ViewModels, business logic and the like is entirely practicable and very valuable.  Whole swaths of engineering cost go away: service responses need to be mapped into client objects only once for all platforms, client validation logic can be written once, etc. There remain, however, some important functional areas that are not amenable to reuse.

Presentation logic, specifically the View logic in MVVM, remains inextricably tied to the platform.  To realize the performance advantages of a native application and the usability of established human interface guidelines, the View logic must be very specific to the platform.  While the MVVM pattern enables reuse of the interaction and business logic, View logic remains necessarily tied to a given platform target; a native application's strength is the degree to which it looks and feels native.  Reusable Views, such as those achieved with HTML5, provide a diminished experience compared to native in all but the simplest user interfaces.
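To make the split concrete, here is a language-agnostic sketch of that MVVM division, shown in JavaScript for brevity; the flight-search names and the stubbed search service are illustrative only, not part of any real API:

```javascript
// Shared layer: the ViewModel carries state and interaction logic, and can
// be reused verbatim on every platform target.
function FlightSearchViewModel(searchService) {
  this.searchService = searchService;
  this.origin = '';
  this.destination = '';
  this.results = [];
}

// Interaction/business logic: written once, shared everywhere.
FlightSearchViewModel.prototype.search = function () {
  this.results = this.searchService.find(this.origin, this.destination);
  return this.results;
};

// View layer: necessarily per-platform.  Each binding function attaches the
// same ViewModel to native widgets (UIKit on iOS, Views/Activities on
// Android); only this layer must be rewritten for each target.
function bindIosView(vm) { /* platform-specific: outlets, actions */ }
function bindAndroidView(vm) { /* platform-specific: layouts, listeners */ }
```

Everything above the View bindings travels with you; everything below is the platform-specific tax discussed next.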

The need to write platform-specific View logic belies our reduced technology footprint.  One must be intimately familiar with the target platform; knowledge of Intents versus ViewControllers versus Pages, of the higher-level interaction metaphors, and of the low-level details (e.g. CoreAnimation) is necessary to create great apps. In short, your organization will still have folks who specialize in the various platforms.

Furthermore, in all likelihood you will need to add new capabilities and specializations.  Understanding how Xamarin.Android and Xamarin.iOS work will ultimately be vital to your success in using these platforms.  Architecting your apps to maximize reuse potential across them also requires some specialized consideration. Choosing Xamarin signals an expansion in the number of technologies your organization must master.

In gaining that mastery, you get a number of benefits:

  • an excellent Android UI designer,
  • Android applications that potentially run faster than those written in Java targeting Dalvik,
  • an iOS storyboard editor (in alpha) that is in some respects superior to Interface Builder,
  • full-speed native apps with full access to the features offered on the platforms,
  • access to Xamarin Components.
The Xamarin Component experience is quite good, providing very simple access to an ecosystem of reusable components.  Besides the availability of some non-normative UI elements that have been popularized by first-tier app vendors, the Component Store offers a host of tools that abstract away the details of some common concerns.  Among these is Xamarin.Mobile; it provides a common API surface to access e.g. location and camera features.

I am dubious of the value of many of these components over the long run.  Grabbing UI components off the shelf could certainly make your app look like an also-ran, and abstracting away the details of device features cannot be without compromise.  The first thing said in favor of these components is that they allow you to build apps quickly, but experience teaches us that sustainable velocity beats a fast start every time.  One-size-fits-all components are great for prototypes, but purpose-built components must follow quickly for continued success.

Though we must distinguish between UI components that have expanded the normative lexicon (flyout navigation, progress HUD, etc.) and those that in some sense define the apps that invented them like Path's satellite menu, by any measure the simple, one-stop-cross-platform-shopping experience of Xamarin's Components is a win.

Components integration is just one great feature of Xamarin Studio.  As someone who uses Visual Studio + Resharper + VsVim on a daily basis, I felt right at home.  Many of the better ideas from Eclipse, IntelliJ and Visual Studio seem to have been incorporated. While most developers using their toolset will likely continue to use Visual Studio (and must do so if they intend to target Windows Phone), Xamarin's continued investment in Xamarin Studio is an effective barometer for their health as a company, IMO. There are things I miss, but I felt productive right away; there's a level of polish here that was sorely lacking in previous incarnations.  

Speaking of polish, discovering how to develop for the various platforms with Xamarin's tools is a joy.  The websites are first class and provide a nice corpus of material to get started.  Documentation matters, and Xamarin gets it right.

Adopting Xamarin's tools is a fantastic gateway for C# (and F#) developers to start developing native apps for Android and iOS.  If you're targeting multiple platforms, HTML5 should be considered first, as the vast majority of line-of-business applications won't take advantage of the native capabilities.  If your apps need native capability or polish, you should look at Xamarin.

In fact, I think Xamarin is a viable alternative for C# developers even when targeting a single platform, though perhaps as a stopgap to get into the market.  The iOS ecosystem, for example, is large and complex.  Learning the platform's APIs, tools, and paradigms is challenge enough without picking up a new, dynamic, and sophisticated language like Objective-C.  Over time you'll have translated enough Objective-C from WWDC videos, Stack Overflow posts, and the iOS Dev Center to know the language pretty well, at which point the Xamarin.iOS value proposition starts to make less sense on a single platform.

To sum it all up, there is no silver bullet.  There are trade-offs with the Xamarin approach, but they are mostly non-technical.  It may make a lot of sense for your organization depending on your goals and staff.

Thursday, January 31, 2013

bash versus Powershell

Occasionally, I find myself dumbfounded at how difficult something is in Powershell that is just brain-dead simple in bash.  Then, I remember that the various Unix shell flavors and their POSIX toolset have had many more years to mature than even I have had thus far. The Unix shell tool philosophy is "do one thing and do it well". Doing things well in this context certainly means enabling the most common usage scenarios with minimum ceremony and surprise.

So, let's say you wanted a one-liner that gave the number of lines in a bunch of C# files:
find . -name '*.cs' | xargs wc -l

Now, let's see how to get the equivalent output from Powershell:

gci -Filter *.cs -Recurse | 
select @{Name="Lines";Expression={(gc $_.FullName | measure).Count }}, @{Name="Path";Expression={ resolve-path $_.FullName -Relative }} | 
sort Lines | 
ft -HideTableHeaders -Auto

Believe it or not, that is all one line. Of course, wc also gave us a summary row in its output, and we can get that with Powershell, too.
gci -Filter *.cs -Recurse | % {gc $_.FullName | measure }  | measure Count -Sum | select -expand Sum

Now we have a two-liner and tired fingers!

To be fair, I am not playing to Powershell's strengths, i.e. .NET accessibility, structured scripting, object pipelining, and a wonderful extensibility story.  With Powershell you won't need to reach for analogs of Perl, sed, and awk when the built-in shell functions reveal their limitations.

In fact our Powershell script is a lot more impressive in one respect than its competition; it allowed me to cobble together a "light" version of wc. I was nearly able to duplicate its output. If the roles were reversed, one could almost certainly contrive some Powershell I/O that would make bash look like the verbose, stilted challenger.

So, which shell wins? There's certainly a lot to be said for bash (and most Unix shells); its power, succinctness, and ubiquity have been honed over more than 30 years. At the end of the day, I'm grateful that Github for Windows shipped a shell that incorporates Powershell and the POSIX tools, so I don't have to choose.

Recommended: The Unreasonable Effectiveness of C

Sunday, August 19, 2012

What Works on the Web

I can't find the quote, but I remember reading around the time of the dotcom bubble a secondhand quote from a Japanese business executive--paraphrasing here--that the only things that make money on the Internet are sex and gambling.  He was wrong; advertising is very profitable.

It's ironic that at the height of the dotcom era, if you had a business plan with a monetization strategy of selling ads, you were often quickly dismissed.  Witness the latter-day success of Google and Facebook. I bought a Google beanbag, before they invented AdWords, because I so desperately needed their product.  Monetization is emergent from network effects.

Back when collegiate Internet stalking (aka The Facebook) was first blowing up, I started thinking about the Internet resources I found indispensable in my brief college career.  In the mid-90s, before Napster/Limewire/Kazaa/torrents, if you wanted free music downloads, you fired up an IRC client, joined #mp3 or #mp3central on Freenode or EFNet, and the friendly bots there would DCC you a list of the mp3 files their owners had made available.  With patience you could find just about anything you'd want to play at a party.

While I was partying, my roommates were studying and doing homework.  Some of the grad student TAs had set up class websites with Java applet chat rooms.  Other studious individuals would share their answers--and more importantly--how they came to those answers.  These chat rooms were busiest the night before homework was due.

You see, the Internet has always been social.  Anyone wanting to get anything done has always needed other people: tutor, muse, compatriot, nemesis, partner, friend.  At the core of every human interaction is a give and take; prosaically, a transaction of time or attention, but in a more important sense a transaction of an intangible but indelible part of our selves, the currency that makes us human: ego-boo, whuffie, brownie points, karma, influence, power, etc.

If you want to start a company that exists primarily on the web, you'll be engaged in a kind of arbitrage of this human currency.  You must understand your particular arbitrage strategy and be able to articulate it.  This allows you to focus on those features of your service(s) that provide the best leverage and/or growth.

What works on the web?  Making people useful to each other.

Sunday, July 29, 2012

Can Github Save Your Life?

As of late I've had a renewed interest in open-source medicine, specifically evidence-based medicine.  The interest was engendered by my exposure while at Zynx Health and by a talk given on OpenEMR, and it was recently rekindled by these articles: The Most Important Social Network: GitHub and How we use Pull Requests to build GitHub.

Briefly, the connection is that Github is a language-agnostic content management system with strong versioning and collaboration capabilities.  In evidence-based medicine, treatment and diagnostic decisions are based on rigorous statistical analysis of outcomes. In reality, though, physicians are, like the rest of us, innumerate and busy.  (cf. Do physicians understand cancer screening statistics...). Practicing evidence-based medicine therefore requires distilling the research and analysis into clinical pathways that can be employed at the point of care.  Github could be used to author those clinical pathways.

In fact, git as a distributed, decentralized, collaborative, open, secure content management system is ideally suited for this task, when leveraged by the powerful collaboration features offered by Github.  On Github every day, the foremost experts in their fields, collaborating from nearly every country on Earth, produce tools that they go on to employ in their work.  The value of this activity then filters down to the rest of the practitioners through third-party systems.

I'm describing software, but I could be describing evidence-based medicine.  

Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. Evidence-based Medicine[...]

In a world where clinical pathways are created by the foremost experts in their field and made readily available to medical professionals in the field, we make all practitioners more effective; we get lower costs and better outcomes; we can afford to save lives.

Open-source clinical pathways would provide grist for the mill of healthcare innovation.  Entrepreneurs would create mobile applications that combine location, biometric, and symptom data to suggest diagnostics and treatments with high specificity.  As with the Linux kernel, clinicians could use "builds" from a distributed trust model, where updates to clinical pathways are controlled through the kind of real-world trust and control models that keep us safe.

What is needed is a programming ecosystem for clinical pathways.  I mean that literally.  We need a "byte code" (portability standard), languages, compilers, linkers, theorem provers, IDEs, package managers, etc.  We need all of this to create a bazaar of healthcare innovations, and we need it soon.

Your life may depend on it.

Thursday, June 14, 2012

Learning French in a Hurry: Some iPhone Hacks to Try

My best friend and I—that is, my wife and I—are headed to Paris this Fall.  Unfortunately, no parlez français.  There's this awful rumor (okay, myriad anecdotes) going around that the French are a bit jingoistic, or, perhaps more fairly, intolerant of those who don't bother to learn any French at all before visiting the country, particularly of those Americans who simply expect everyone to speak English in pursuit of the (no-longer-almighty) American Dollar.

Think what you will of this attitude or the veracity of its justification; it's their country.  When in Paris...

So what is the busy American with about three months to learn French for a trip to Paris to do? First, and above all else, know that only a genius can learn a language this quickly, so suck it up, be humble when you go, and do your best. Here are some hacks that we are trying out.

Change your iPhone language
You are surely extremely familiar with navigating your iPhone. Plus, the icons make it incredibly easy to find the app you're looking for, despite its caption. Changing the language your iPhone uses exposes you to French words and phrases as often as you check your phone. Since you already know what most of their counterparts are when the phone is using English, you'll surely pick up some new vocabulary and keep it. Immersion is the key to quick language acquisition.
Enable VoiceOver on your iPhone
Reading words is one thing, but it's hardly sufficient. You need to hear and speak them to improve retention and to make practical use of them. This is where the accessibility features of the iPhone come into play. Turn on VoiceOver to have all of the menu items, titles, buttons, etc. spoken with a clear French accent. Adjust the slider controlling how quickly the words are spoken to a rate you're comfortable interpreting, probably as slow as possible. Finally (and this is important), make sure you set the Triple-click Home feature to toggle VoiceOver. This is the most convenient way to silence the new Frenchman in your phone, but with VoiceOver on it is nearly impossible to do any texting.
Text message to your language buddy exclusively in French
My wife and I are using Google Translate in concert with iMessage for all of our text messaging. Here's the pattern:
  • Type your English in Google Translate
  • Listen to Google's French translation
  • Important! Manually type the translation into iMessage. You could copy/paste if you are in a huge hurry, but this is where you will practice writing and recall of spelling.
  • Copy/paste your interlocutor's response into Google Translate and get the English translation. Yes, do this first; you need to know what the words mean before hearing or writing them.
  • Back in iMessage now, use Triple-click Home to enable VoiceOver. Select the response you just translated to English and listen to it a couple of times, repeating it each time. Turn VoiceOver off before repeating these steps.
News in Slow French
This podcast is available for free in iTunes. As a beginner you won't understand much, but it is delivered in a way that is both entertaining and accessible with a didactic slant that sometimes makes it seem a little silly. Remember, immersion is key.
Create a French Radio Station for the Pandora app
We've had decent luck with Carla Bruni (thanks to this Yahoo! answer). The real trick is to cull all of the non-French songs that come up using the thumbs-down button. Give it some time and you should get a pretty good stream of French language music.
Put Google Translate on your Home Screen
This one almost goes without saying, but the Add to Home Screen feature of Safari is under-utilized in my opinion. Quick access to a standalone view of the Google Translate web app will save you a lot of time and frustration in employing these hacks.

Install Jibbigo for offline translation
Don't expect miracles here, but when all else fails this app could save you. Jibbigo interprets speech and translates bi-directionally! This means you can speak in English and hear a French translation, and vice versa. The important distinction between this app and Google Translate is that it works offline.

There are two major drawbacks to this app that demand comment. First, it is really, really slow. Painfully. I'm using the iPhone 4, not the faster 4S, but I suspect it will still be awkward to use this in conversation, so don't rely on it. Second, the French speech-to-text function seems unreliable. As I don't speak French, I played Google Translate audio to the phone and got pretty bad results. Not bad for a universal translator, just imperfect enough to frustrate conversation. There is an option to type the words you want translated, so a patient interlocutor can succeed.

Besides an emergency translation, this app is useful in learning French when Google Translate is unavailable for any reason.

Pimsleur on your commute and in the gym
Get the Pimsleur French audio lessons into your iTunes and listen to them on your commute and at the gym. You really need to repeat these lessons, so I suggest doing a new lesson at the gym, or your commute home, whichever is first. Then listen to that same lesson on your morning commute. The sleep between these two periods will help things stick. If you are not sure about your commitment level, buy French, Conversational: Learn to Speak and Understand French with Pimsleur Language Programs (Pimsleur Instant Conversation). Otherwise, get French I, Comprehensive: Learn to Speak and Understand French with Pimsleur Language Programs, since there is a big overlap between the two products.

Those are the hacks we've come up with so far. Do you have a language learning hack for the iPhone?

Tuesday, June 5, 2012

Web Performance: Measure the Right Thing

By attempting to improve your site's page performance, you can actually make it slower. There are numerous well-developed techniques to improve the performance of your web sites and applications.  Steve Souders and others have put together best practices to improve your page performance, and companies have put together turn-key solutions to implement these practices. With these and other ingenious tools that help measure page performance, you'd think implementing a program to improve page performance would be straightforward, but you would be wrong.

Performance improvement is an optimization game, and optimization is the act of maximizing (or minimizing) some measurable quantity.  The conventional wisdom for page performance holds that the time from first byte until the window's load event is fired by the browser (i.e. time-to-onload, or TTO) is the best quantity to optimize, as it roughly approximates the wait until the page is interactive. While this seems like a worthwhile goal, it isn't; many of the techniques used to optimize TTO have deleterious effects on the user experience, including making perceived performance much slower.
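For reference, the conventional quantity is easy to read from the browser's (classic) Navigation Timing API; here is a minimal sketch, guarded so it simply returns null where that timing data is unavailable:

```javascript
// Compute time-to-onload (TTO) in milliseconds from the Navigation Timing
// API.  Returns null when timing data is unavailable (e.g. outside a
// browser, or before the load event has fired).
function timeToOnload() {
  if (typeof performance === 'undefined' || !performance.timing) {
    return null;
  }
  var t = performance.timing;
  // loadEventStart remains 0 until the load event actually fires.
  if (t.loadEventStart === 0) {
    return null;
  }
  return t.loadEventStart - t.navigationStart;
}
```

The ease of capturing this number is exactly why it gets optimized; as argued below, it is often the wrong number.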

Consider the practice of deferring the loading of various JavaScript files until after the load event.  This generally has a very positive effect on TTO, but it can have negative side effects. At a former client, a major airline, developers and management became fixated on TTO (as measured internally and by a third party).  The result was that the page would fully render in under a second on average, but some customers would be unable to use the flight search widget properly for twenty seconds or more.

In a somewhat common scenario, the HTML corpus of the page would download, along with the major image assets (hero shot and sprites), and the page would render completely above the fold. The deferred load and execution of the various JavaScript assets would then commence.  Unfortunately, the assets that powered the flight search widget would sometimes take an inordinate amount of time to download, resulting in a much diminished, if not jolting, user experience.

The solution is simple; the prescriptive guidance from Souders and others is to put these assets, those that spark the interactivity of critical features of your pages, inline.  Specifically, including the JavaScript for the flight search widget in a script tag at the bottom of the HTML payload would have made the widget interactive in a timely manner. By optimizing the wrong quantity, the team made things perceptibly worse.

What should have been the optimized quantity? The answer to that question could only come from user-experience thinking.  UX is a kind of optimization itself: personas are developed; their goals explored; and the interactions are designed to best help them meet those goals while balancing the personas' agendas against each other. This kind of user-centered exploration of the problem space gives us a tangible quantity to optimize in any given interaction.

In the particular case of an airline's homepage, many agendas begin with searching for a flight; it is clearly the most important interaction. Concordantly, it is the time-to-interactivity of the flight search widget on the homepage that should have been optimized. By prioritizing the interactions on a given screen, we create a ranked list of performance measures that will provide tangible benefits to users of our web sites.

These measures, like time-to-interactivity of the flight search widget, are couched in domain-specific terms. We have to instrument our JavaScript to make them measurable in ways not unlike the DOM onload event.  By segmenting traffic and sending instrumentation data back to a beacon, we can monitor and optimize the real experiences of our customers, instead of a browser event.
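That instrumentation can be sketched quite simply; the metric name and the `/beacon` endpoint below are hypothetical, and the image-beacon technique stands in for whatever reporting channel your segmentation uses:

```javascript
// Domain-specific instrumentation sketch: mark when the page starts, mark
// when a critical widget becomes interactive, and report the delta.
var marks = {};

function markStart(metric) {
  marks[metric] = Date.now();
}

function markInteractive(metric) {
  var elapsed = Date.now() - marks[metric];
  sendToBeacon(metric, elapsed);
  return elapsed;
}

function sendToBeacon(metric, ms) {
  // Classic image-beacon technique: the query string carries the
  // measurement back to the server ('/beacon' is a hypothetical endpoint).
  if (typeof Image !== 'undefined') {
    new Image().src = '/beacon?metric=' +
      encodeURIComponent(metric) + '&ms=' + ms;
  }
}
```

In the airline example, an inline script at the top of the page would call markStart('flightSearch'), and the widget's initialization code would call markInteractive('flightSearch') once its event handlers are bound, giving us the domain-specific measure to monitor and optimize.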