Widget properties: XML layout vs code

Just stumbled upon another annoyance: you can’t trust XML properties. Let’s take the EditText. We could try to disable dictionary suggestions:
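The original snippet is missing here; presumably it was something like the following layout attribute (the id is a placeholder), combining text with textNoSuggestions as the documentation quoted below suggests:

```xml
<EditText
    android:id="@+id/edit_text"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:inputType="text|textNoSuggestions" />
```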


Well, does it work? Probably not; at least it didn't on the devices I tested. But when we do exactly the same thing in code:
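The code version is also missing from this copy; it presumably looked roughly like this (assuming the EditText has already been looked up):

```java
EditText editText = (EditText) findViewById(R.id.edit_text);
editText.setInputType(InputType.TYPE_CLASS_TEXT
        | InputType.TYPE_TEXT_FLAG_NO_SUGGESTIONS);
```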


It works like a charm! The documentation clearly says that these two are equivalent:


Can be combined with text and its variations to indicate that the IME should not show any dictionary-based word suggestions. Corresponds to TYPE_TEXT_FLAG_NO_SUGGESTIONS.

Well, not quite… 🙂 


ExpandableListView group indicator bounds nonsense

Digging deeper into ExpandableListView I came across another weird API design issue. This one is about ExpandableListView group indicator images.

These images are displayed on the left side of a group row in the list. As the name suggests, they indicate whether the group is expanded or collapsed. Obviously there can be two different images here, and Android supports it. You can write an XML drawable selector (set via setGroupIndicator(Drawable groupIndicator)) with two states: state_empty for a collapsed group and state_expanded for an expanded one.

Well, what is also obvious to me is that the two images can have two different sizes, right? Apparently it was not so obvious to the people who designed this part of the Android API. You can have two different images for the two group states, but… both images will be stretched to fit the size you set with ExpandableListView's setIndicatorBounds(int left, int right) method.

What options do you have? You can set the indicator bounds (via setIndicatorBounds(int left, int right)) to match the wider of your two images and compensate for the difference using this kind of selector (the drawable names here are placeholders):

<selector xmlns:android="http://schemas.android.com/apk/res/android">

    <item android:state_empty="true">
        <inset android:drawable="@drawable/group_collapsed"
            android:insetRight="20dp"
            android:insetTop="8dp"
            android:insetBottom="8dp" />
    </item>

    <item android:state_expanded="true">
        <inset android:drawable="@drawable/group_expanded"
            android:insetTop="8dp"
            android:insetBottom="8dp" />
    </item>

</selector>

In the above example the expanded image is the wider one, so we compensate with a 20dp right inset on the collapsed image. As you can see, there are also some sample top and bottom insets for both images.
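Setting the bounds themselves might then look like this (a sketch; the drawable name and the 10px left offset are assumptions, and note that setIndicatorBounds works in pixels, not dp):

```java
Drawable wider = getResources().getDrawable(R.drawable.group_expanded);
int left = 10; // example left offset in pixels
expandableListView.setIndicatorBounds(left, left + wider.getIntrinsicWidth());
```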

The second option is to convert your images to 9-patches, carefully selecting the stretchable and non-stretchable parts. As this requires image editing by hand, I consider this solution impractical.

You can also set your indicator to @null in the selector items and put your own indicator views inside the XML row layouts. You will have to write a custom ExpandableListAdapter and show or hide the indicator views after inflating them from your row layout in the getGroupView implementation.

I haven’t tested it yet, but this looks like the best solution. Ditch this useless feature of group indicators and do it your way.
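A sketch of what that getGroupView override could look like (the layout and view ids are hypothetical):

```java
@Override
public View getGroupView(int groupPosition, boolean isExpanded,
        View convertView, ViewGroup parent) {
    if (convertView == null) {
        convertView = inflater.inflate(R.layout.group_row, parent, false);
    }
    // Show the indicator matching the current state; both ImageViews
    // live in the row layout, so each keeps its own natural size.
    convertView.findViewById(R.id.indicator_expanded)
            .setVisibility(isExpanded ? View.VISIBLE : View.GONE);
    convertView.findViewById(R.id.indicator_collapsed)
            .setVisibility(isExpanded ? View.GONE : View.VISIBLE);
    // ... bind the rest of the group row here ...
    return convertView;
}
```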

Android hates animations

I think the main reason the iPhone is still the most popular smartphone, after all these years of so-called 'iPhone killers' arriving every now and then from almost every other smartphone manufacturer, is Apple's attention to visual detail. Let's consider animations. Other manufacturers seem to treat animations as a nice addition to the user interface: pleasant to the eye, nice to have, but not really that important. I can see this even on my 2012 Google Nexus, a state-of-the-art Android tablet. Animations there are OK in general, but quite frequently you can observe minor glitches, small animation hiccups that ruin the sense of responsiveness.

The problem Google does not seem to understand is that animations are much more than pleasant visual effects. In the real world, everything we physically interact with moves smoothly. When you throw a ball, it does not appear at its destination without first travelling the distance to get there. Even if the movement is fast, the smooth transition is still there, perceptible to our senses. When people approach you, or talk to you and gesture, you can see their bodies and limbs move smoothly. You can see the smooth movement of a drawer, and the changing patterns of light inside, when you open it. Movement and inertia are ubiquitous around us.

That is the very reason our brains are hardwired to observe and analyze smooth movement. Modern neuroscience is discovering sophisticated prediction mechanisms built into our brains to cope with movement (http://www.ted.com/talks/daniel_wolpert_the_real_reason_for_brains.html). Smooth movements make us comfortable: they allow the brain to predict the transition and anticipate the final result of the movement.

Remember the shock when you think you are alone in a room and suddenly hear someone talking behind your back? You feel adrenaline rushing through your veins because your brain realizes that the image of the outside world it carefully constructed from sensory input (an empty room) has instantly turned out to be false, and that may be dangerous.

Now let's look at movement in a phone's user interface. Suppose you have an ExpandableListView: a ListView with expandable groups that can contain rows, like this:


Suppose there is no expand animation when you tap a group. You see three groups, tap one and, snap, several new items appear. This is very unnatural: there is no smooth transition, so the brain cannot predict the final result. Instead, it is confronted with a completely new state, only partly predicted by the logical reasoning of your consciousness. You know that tapping a group row will make it expand, but you have no idea whether the list will scroll or where exactly the sub-rows will appear; without animation you cannot properly predict the visual result of the tap.

Another example: deleting items from a simple ListView:


Try implementing removal of the first 15 items on tap, without animation. It really takes some time to realize which items have been deleted! The advantage of animation is even more obvious here. With a smooth transition you would be able to see which rows disappear and which move upward to take the place of the deleted ones, even if the animation were quite fast.

So, to sum up: animation is not only for visual pleasure. It conveys information and makes the UI's interaction with the brain more efficient by taking advantage of neurological mechanisms shaped by millennia of human evolution.

Apple understood this from the very beginning of the iPhone. Core Animation is wired deeply into UIKit; practically everything is animatable, including deleting and adding rows in lists.

Not so in Android (note: I'm speaking of Android 2.x, which still has more than 50% market share in the Google Play market as of 2013). I couldn't believe that such a basic UX animation as expanding a group in ExpandableListView can't be done with the standard widgets! But it is true. The same goes for deleting rows from a ListView. Of course you can implement fading, scaling and movement of an individual row, but you won't be able to make all the rows below the deleted one move up smoothly. They just snap into place after you call notifyDataSetChanged() on the adapter.

You can try to animate the rows below the deleted one by getting the ListView's children with getChildAt(). But to make it look natural you would have to animate rows that are not yet visible. If you don't, you will see an empty gap moving up from the bottom edge of the screen: ListView loads its rows lazily, and views that are not yet visible cannot be animated.
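A sketch of that partial approach, animating only the rows that are currently on screen below the deleted position (the 300 ms duration is an arbitrary choice); it also illustrates why the gap scrolls in from the bottom:

```java
// Slide every *visible* row below the deleted position up by one row height.
// Rows that are still off-screen cannot be reached through getChildAt(),
// which is why an empty gap moves up from the bottom edge.
int first = listView.getFirstVisiblePosition();
int last = listView.getLastVisiblePosition();
for (int pos = deletedPosition + 1; pos <= last; pos++) {
    View row = listView.getChildAt(pos - first);
    if (row == null) continue;
    TranslateAnimation slideUp =
            new TranslateAnimation(0, 0, 0, -row.getHeight());
    slideUp.setDuration(300);
    row.startAnimation(slideUp);
}
```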

What options do you have? You can implement your own ListView with your own row-reuse mechanism. You can try to subclass ListView. You can even take the ListView source, copy some parts into your custom class, modify others, and come up with something that works. Yes, hacking Android source code to implement common animations. That's the "Android way" of doing things, I suppose.

EditText filtering – overcomplicated and poorly documented

Suppose you have an EditText in your application. That's fairly common; most applications need some kind of text input from their users. Now let's also assume you want to restrict the characters the user can type, or limit the length. You think about filtering the user's input.

You have two options here:

  • the TextWatcher interface
  • InputFilter objects

And this in itself is poor API design. Why have two? Why not one well-thought-out delegate interface that tells the programmer when the EditText content is about to change and lets them discard or modify the changes? What do you need two options for? This creates confusion, makes developers wonder "which should I choose? which is better?", and makes the API more complicated than it needs to be (remember Occam's razor?).
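For illustration, here is the same length cap done both ways (a sketch; the EditText variable and the 10-character limit are assumptions):

```java
// Option 1: an InputFilter that rejects input past 10 characters.
editText.setFilters(new InputFilter[] {
        new InputFilter.LengthFilter(10)
});

// Option 2: a TextWatcher that trims the text after the fact.
editText.addTextChangedListener(new TextWatcher() {
    public void beforeTextChanged(CharSequence s, int start, int count, int after) {}
    public void onTextChanged(CharSequence s, int start, int before, int count) {}
    public void afterTextChanged(Editable s) {
        if (s.length() > 10) {
            s.delete(10, s.length());
        }
    }
});
```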

But that is just the beginning of the nonsense. Let's see what happens when you decide to implement this filtering using, say, an InputFilter. After a quick glance at the documentation you know you have to implement this method in your InputFilter class:

public abstract CharSequence filter (CharSequence source, int start, int end, Spanned dest, int dstart, int dend)

Since: API Level 1

This method is called when the buffer is going to replace the range dstart … dend of dest with the new text from the range start … end of source. Return the CharSequence that you would like to have placed there instead, including an empty string if appropriate, or null to accept the original replacement. Be careful to not to reject 0-length replacements, as this is what happens when you delete text. Also beware that you should not attempt to make any changes to dest from this method; you may only examine it for context. Note: If source is an instance of Spanned or Spannable, the span objects in the source should be copied into the filtered result (i.e. the non-null return value). copySpansFrom(Spanned, int, int, Class, Spannable, int) can be used for convenience.

You may think about implementing this method in a straightforward way: create some kind of StringBuilder or buffer from source, remove the characters you do not want in your EditText, and return that buffer as a String.
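The core of that naive approach, stripped of the Android types so it stands alone (here filtering out everything except letters and digits; the character policy is just an example):

```java
public class NaiveFilter {
    // Copy the accepted range of source into a plain StringBuilder,
    // dropping every character that is not a letter or a digit.
    public static String filter(CharSequence source, int start, int end) {
        StringBuilder filtered = new StringBuilder();
        for (int i = start; i < end; i++) {
            char c = source.charAt(i);
            if (Character.isLetterOrDigit(c)) {
                filtered.append(c);
            }
        }
        return filtered.toString();
    }

    public static void main(String[] args) {
        System.out.println(NaiveFilter.filter("ab!c 12#", 0, 8)); // prints "abc12"
    }
}
```

Returning a plain String like this is exactly what breaks down once source carries spans.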

Then you notice that little note in the documentation: "If source is an instance of Spanned or Spannable (…)". Your straightforward approach won't work. Modern versions of Android (4.0 and up, I guess) introduced the dictionary suggestions displayed above the keyboard. If the word the user is typing is being displayed above the keyboard, your source parameter is an instance of the SpannableStringBuilder class (which implements the Spanned interface); otherwise it appears to be just a plain String.

So from the documentation you know you may get a Spanned or Spannable, whose spans you have to copy. You also want to actually filter the input, so you decide to use a SpannableStringBuilder as your buffer.

Here is what your first attempt at this task might look like:

if (source instanceof Spanned) {
    SpannableStringBuilder sourceCopy = new SpannableStringBuilder();
    TextUtils.copySpansFrom((Spanned) source, start, end, source.getClass(), sourceCopy, 0);
    /* ... do some filtering on your sourceCopy ... */
    return sourceCopy;
}
Looks nice, but doesn't work. It turns out that copySpansFrom doesn't copy the actual text. The docs for SpannableStringBuilder say that you can copy both the text and the spans using a constructor. You give it a chance:

if (source instanceof Spanned) {
    SpannableStringBuilder sourceCopy = new SpannableStringBuilder(source, start, end);
    /* ... do some filtering on your sourceCopy ... */
    return sourceCopy;
}

And guess what? Yes, it copies the text, but now when the user types a single letter, the EditText appends all the letters of the word they are currently typing to the end of the text!

You tried your best. You read the docs carefully and tried to implement filtering the way it should be done, but failed. It is not you who failed, however. You simply stumbled upon one of the many FAILs of the Android API. Now you are on your own; the poorly written Android API documentation won't help you anymore.

It turns out that you can modify the source in place if it is a SpannableStringBuilder (code sample here), but the only way to arrive at this solution is by experimentation and guessing. And this is not how the development process for a major mobile platform should work…
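For the record, the in-place variant might look roughly like this (a sketch based on the behaviour described above, not on any documented guarantee; the letters-and-digits policy is again just an example):

```java
public CharSequence filter(CharSequence source, int start, int end,
        Spanned dest, int dstart, int dend) {
    if (source instanceof SpannableStringBuilder) {
        // Delete unwanted characters directly in the source buffer,
        // iterating backwards so the remaining indices stay valid.
        SpannableStringBuilder builder = (SpannableStringBuilder) source;
        for (int i = end - 1; i >= start; i--) {
            if (!Character.isLetterOrDigit(builder.charAt(i))) {
                builder.delete(i, i + 1);
            }
        }
        return builder;
    }
    // Plain String case: the straightforward buffer approach works fine.
    StringBuilder filtered = new StringBuilder();
    for (int i = start; i < end; i++) {
        char c = source.charAt(i);
        if (Character.isLetterOrDigit(c)) {
            filtered.append(c);
        }
    }
    return filtered.toString();
}
```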