Android Fragmentation, UI Testing and You

Last year, Eddie Vassallo over at Gigaom.com joined the rest of the well-informed tech bloggers in agreeing that, at least as far as developers are concerned, "[it's] 2014, and Android fragmentation is no longer a problem." While developers have tools and patterns that allow them to build appealing, rich experiences for any screen size and to support devices many years past their prime, the mere term "Android fragmentation" can intimidate people trying to decide which phone or tablet to buy. The developer can access powerful adaptive interface layout APIs and rely on Google Play Services to update crucial dependencies such as Maps and Push Notifications. That behind-the-scenes flexibility is meaningless to an end user staring at a dizzying array of devices from OEMs hellbent on stamping their unique mark on the user experience. Looking at phones such as the Samsung Galaxy Note Edge alongside the HTC One M8 or the Motorola Moto X can make it hard for the uninitiated observer to believe the same OS is powering each of them, to say nothing of car infotainment units, wristwatches, televisions, and magical little HDMI dongles.
You. Yeah you; the one who laughed when I said "dongles". We're all adults here, so please keep it together or leave the blog while the grown-ups talk.
Where was I? Oh yes, fragmentation. Fragmentation, therefore, doesn't just affect developers. It arose from OEMs seeking to offer the benefit of choice by distinguishing themselves visually and, to a lesser extent, functionally from one another. But there is another, less talked about group impacted by fragmentation; a group not unlike the end user but better informed about the underlying technology, yet crazy enough to feign ignorance in order to ensure a quality product.
You might be thinking I'm referring to the mighty yet poorly appreciated Black Box Tester. While their role is often to empathize with the user, as intellectual and observant technicians they are able to apply reason and intuition to rise above variations in system interfaces to accomplish things like setting the system time or date and turning Airplane Mode on or off. No, it is not the Black Box Tester who has to truly wrestle the digital OctoBear. I'm referring instead to the pitiable but diligent Black Box Automator. There are many tools out there for driving the system UI programmatically but, to be honest, I hate every one of them that is not UiAutomator. I've written about UiAutomator before, but the key reason for its centrality in any discussion of Android fragmentation and Black Box Automation comes from two realities:
- It uses the Accessibility API to know what is on the screen, what to click on, what to scroll, etc. Think of it as a system meant to enable users WHO CANNOT SEE your carefully designed, beautiful, buttery smooth GUI to still interact with your app through the touch screen. That means your tests are about as well informed about what they're doing to your app as a blind person would be. Sooo good luck.
- Because UiAutomator is slow to execute and somewhat poorly informed about what it is interacting with, the best justification for running your UI tests with it versus the new hotness (Espresso 2.0, so hot right now, Espresso 2.0) is that, unlike Espresso or any other JUnit-based framework compiled within your application's namespace, UiAutomator can touch ANYTHING. A. USER. CAN. Which, while amazingly powerful from an automation perspective, carries the heavy price of having to deal with every tiny difference the OEMs choose to make in their flavor of Android.
Remember, simple, readable, maintainable test cases are the goal of any good automation writer. When you write test automation, you are not writing test instructions for an experienced and well-informed Black Box Tester; you must assume you are dealing with a pedantic, hyper-literal child who doesn't care whether they do it right. Remember writing exercises where you had to learn how to communicate without making assumptions about whether your audience understands your typical idiomatic style? This is a lot like that, except your job is at stake. No pressure.
So let me pause briefly to illustrate what kinds of things constitute system UI fragmentation, particularly in the context of how it might affect a test case. Once we're done looking at pretty pictures, I'll lay out one approach to handling system UI fragmentation in code: using the Factory Design Pattern. For some of you, this could be really useful. For the rest, this could just be another relentless ordeal that makes you feel like you picked a bad week to quit smoking.
Minor Fragmented UI Example: Date and Time Settings Views
Exhibit A: Stock Android Lollipop date and time settings view
|Holy whitespace, Batman!|
Exhibit B: Same view, this time on a Samsung Galaxy S5
|Notice minor text differences, nothing too scary right?|
Moderate Fragmented UI Example: DatePicker Dialogs
Exhibit C: DatePicker Dialog in TouchWiz on Samsung Galaxy S5
|Okay, looks simple. I like simple. I don't know what up does though.|
Exhibit D: DatePicker Dialog on Nexus 7 2012 running Lollipop
|Well crap. That looks... ...complicated. Okay, I can probably swipe but how do I change years?|
Major Fragmented UI Example: TimePicker Dialogs
Exhibit E: Samsung's TouchWiz TimePicker
|Looks pretty safe but why does the dialog title match the content exactly?|
Exhibit F: Oh come on, Lollipop!
|If you think this is bad, try setting minutes.|
Exhibit G: Nope
|Mommy, why does the text in the screenshot not match the text in the Node Detail?|
Exhibit H: FUUUUuuuuu...
|Accessibility affordance: 0. Nil. Zilch. Nada.|
Exhibit I: Nope nope nope nope
|Nope nope nope nope nope nope nope nope nope nope nope|
Writing Clean, Concise Test Cases That Can Run Anywhere
Test automation should function like documentation for how the application should work. That means that within the test methods themselves, you should minimize any code that doesn't serve to enrich or clarify those expectations. It might be tempting to write custom test cases for each device variation and then carefully select which ones to run based on the test target. That gets to be a lot of code to write and maintain for each new device you add to your matrix. There has to be a better way. In order to achieve the goals of simple, readable, and maintainable test cases, we need to write a single case for each behavior we want to test, not one for each device. And that's where the Factory Design Pattern comes into play.
Rather than explain exhaustively how the Factory Design Pattern works in paragraph after paragraph, I'll just supply you with this handy link and summarize in the following way:
- Knowing that your test case will only call a single set of methods which need to map to a variety of helpers, you add an interface defining those shared methods.
- Then you add a factory class with a create() method whose return type is that interface.
- Now you write your helper classes that implement that interface, INCLUDING ALL methods declared by it.
Any test class with tests that need those helper methods should declare a member variable of the interface type and assign it via the factory's create() method in setUp(). In a traditional Factory Design Pattern example, the call to create() passes a parameter which selects the desired helper class. In my case, since I'm talking about inherent system UI variations, I keep things simpler by getting the Build.FINGERPRINT string and comparing it to a whitelist of known, tested devices for which I've already written the appropriate helper classes. The end result is that the test case knows exactly what device it is running on and the framework automatically selects the appropriate code for exercising the same path in the UI. Take THAT, Android system UI fragmentation!
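To make the shape of that concrete, here's a minimal, device-independent sketch of the pattern. The class and method names are hypothetical (they are not the ones from my sample project), the helper bodies are stubs where the real UiAutomator interactions would live, and the fingerprint is passed in as a plain String so the factory can be exercised off-device; on a real device you'd hand it Build.FINGERPRINT.

```java
// Hypothetical interface: the single set of methods every test case calls.
interface DateTimeHelper {
    void setSystemTime(int hour, int minute);
}

// One helper per tested device flavor; each knows its own system UI quirks.
class StockLollipopHelper implements DateTimeHelper {
    public void setSystemTime(int hour, int minute) {
        // UiAutomator interactions for the stock Lollipop TimePicker would go here.
    }
}

class TouchWizHelper implements DateTimeHelper {
    public void setSystemTime(int hour, int minute) {
        // UiAutomator interactions for Samsung's TouchWiz TimePicker would go here.
    }
}

class DateTimeHelperFactory {
    // On a device this argument would be android.os.Build.FINGERPRINT;
    // taking it as a parameter keeps the factory testable anywhere.
    static DateTimeHelper create(String fingerprint) {
        if (fingerprint.startsWith("samsung/")) {
            return new TouchWizHelper();
        }
        if (fingerprint.startsWith("google/")) {
            return new StockLollipopHelper();
        }
        // Fail loudly for devices not yet on the whitelist.
        throw new IllegalArgumentException("Untested device: " + fingerprint);
    }
}
```

A test's setUp() would then do nothing more than `helper = DateTimeHelperFactory.create(Build.FINGERPRINT);`, and every test method stays identical across devices.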
Enough Words, Monkey Boy, Show Me Some Sample Code
I'll do even better. I've posted a sample project to GitHub for a workshop I'm hosting in Seattle. I wanted to demonstrate a test case that needed to exercise the system UI, such as setting the time. Because this is a Black Box Automation project, I don't need to own the code for the app under test; I can just download one. I chose Google Keep. It is a fantastic app for jotting down quick notes, making shopping lists, etc. It uses Google Play Services to sync notes across your account, has a fantastic widget, and I use it all the time already. You should check it out.
It took some digging to figure out certain shortcuts like launching the date and time settings activities directly or launching Google Keep directly via shell commands rather than tapping around the UI even more than needed. I'll write up another post sometime that lays out those tricks. For now, let's just focus on the guts of the test case.
The test scenario is simple:
- Set the system time to something convenient
- Launch the app
- Create a note
- Set a reminder for that note to some convenient time in the future
- Set the system time to that time
- Verify that a notification is generated containing the content of the note
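The steps above can be sketched as a single scenario that delegates every device-specific action to a helper behind the factory-selected interface. Everything here is illustrative rather than the project's actual code: the interface, the RecordingHelper fake (which just logs each step so the ordering can be checked without an Android device), and the times are all made up for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper interface; real implementations would drive the
// system UI and Google Keep via UiAutomator.
interface ReminderScenarioHelper {
    void setSystemTime(String hhmm);
    void launchApp();
    void createNote(String text);
    void setReminder(String hhmm);
    boolean notificationContains(String text);
}

// Fake helper that records each step, standing in for a real device.
class RecordingHelper implements ReminderScenarioHelper {
    final List<String> steps = new ArrayList<>();
    private String noteText;
    public void setSystemTime(String hhmm) { steps.add("time:" + hhmm); }
    public void launchApp() { steps.add("launch"); }
    public void createNote(String text) { noteText = text; steps.add("note"); }
    public void setReminder(String hhmm) { steps.add("reminder:" + hhmm); }
    public boolean notificationContains(String text) {
        steps.add("verify");
        return text.equals(noteText); // pretend the notification echoes the note
    }
}

class ReminderScenario {
    // Runs the six steps of the scenario against whatever helper the
    // factory handed us; returns the final verification result.
    static boolean run(ReminderScenarioHelper helper, String note) {
        helper.setSystemTime("12:00");
        helper.launchApp();
        helper.createNote(note);
        helper.setReminder("12:05");
        helper.setSystemTime("12:05");
        return helper.notificationContains(note);
    }
}
```

Note that the scenario itself contains no device-specific code at all; that's the whole point of pushing the fragmentation down into the helpers.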
To whet your appetite, here are GitHub Gists for the factory and interface classes. Check out the sample project on GitHub and try it on your own device: make sure you have Google Keep installed (consider disabling sync for Google Keep in your Google Account settings before running any automation against it), import the project into Eclipse, update the helper and factory classes for your device(s), and extend the project with additional test scenarios. Keep in mind the handy included build/deploy/test shell script assumes you're able to run BASH and have added adb to your PATH environment variable.
Exhibit J: Factory Class:
Exhibit K: Interface Class:
What Was That Bit About 0 Accessibility in the Lollipop TimePicker?
Ah, you're very astute. I almost forgot to clarify. Yes, you *COULD* have just looked at the helper classes in the supplied example project, but that wouldn't tell you anything about how I figured it out. UiAutomator DOES require the Accessibility API to send touch events through the UI. That is true. Without any accessibility affordance in those dialogs, I could have been really screwed, forced to use click-by-coordinates methods like some kind of animal. However, because I didn't have a device running Lollipop yet when I first looked into it, I had to figure out what was going on in that interface via an emulator. And that's when I found the magic.
That view has listeners for other kinds of events, not just touch events. Because I am lazy and prefer to use my keyboard to enter text on an emulator instead of tapping around the on-screen keyboard with a mouse, I checked the box for "Hardware keyboard present" when I created that emulator. That view listens for key events too. I only discovered that critical fact when I was tapping around idly on the emulator and out of impotent rage and frustration, started typing the number keys on my keyboard. Suddenly things started happening. Magical things. Things that probably should have been more visibly documented somewhere. Hooray for undocumented features.
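In helper code, the keyboard trick boils down to translating the digits of the target time into key events and sending them to the dialog. Here's a small sketch of just the translation step; the class and method names are mine, not the project's. The one hard fact it relies on is that `android.view.KeyEvent.KEYCODE_0` is 7 and the digit keycodes 0 through 9 are contiguous.

```java
import java.util.ArrayList;
import java.util.List;

// Translates a digit string like "405" (for 4:05) into the Android
// keycode sequence the Lollipop TimePicker responds to.
class TimePickerKeys {
    static final int KEYCODE_0 = 7; // value of android.view.KeyEvent.KEYCODE_0

    static List<Integer> keycodesFor(String digits) {
        List<Integer> codes = new ArrayList<>();
        for (char c : digits.toCharArray()) {
            if (c < '0' || c > '9') {
                throw new IllegalArgumentException("Not a digit: " + c);
            }
            // Digit keycodes are contiguous, so offset from KEYCODE_0.
            codes.add(KEYCODE_0 + (c - '0'));
        }
        return codes;
    }
}
```

On a device, a helper would then feed each code to the dialog with UiAutomator's `UiDevice.pressKeyCode(code)`.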
Exhibit L: Check this box always