How we made a user-friendly school picker

One of the things I love about working in technology is building refined and intuitive tools that make things easier for people. Most of the time, those tools arise from wrestling with real challenges.

The school picker challenge

One of the challenges we ran into was letting users (who were coaches for a program) pick the school their running club was involved with. This was important because some of the schools had scholarships allocated to them, and we needed to make sure those scholarships could be selected without intervention from the client — in this case Marathon Kids. The other thing we had to limit was bad data — the existing records were difficult to work with because school names didn’t match up and the addresses contained typos.

What didn’t work

Our initial solution was to provide a list of states, cities and districts with scholarship schools. This worked fine for those existing schools, but it was clunky and didn’t work well for the schools that would not be eligible for a scholarship (a much larger number). This forced us into a split registration process — one flow for coaches from existing schools and another for new schools. It also created confusion for new coaches at existing schools.

The “aha” moment for us was realizing that the natural thing for users to do is just enter the name of their school. If each school had a unique name, and we had a list of all the schools, then we’d be done. We wouldn’t need to ask about any of the other options.

Still, there was one remaining issue – in the real world, there are many schools with the same name. What we really needed was for the system to figure out which school the user probably wants.

What worked

Our first step toward guessing the correct school was finding an open data source of all the public schools in the country. Once we had that, I immediately put the data into a MySQL table and wired up a jQuery UI autocomplete control to it.
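
That first pass behaved roughly like this sketch: a plain prefix match over the school list, the in-memory equivalent of a `LIKE 'term%'` query against the MySQL table. The school names here are hypothetical stand-ins for the open data set.

```javascript
// Hypothetical sample rows standing in for the imported school table.
const schools = [
  { name: "Lincoln Elementary", city: "Austin", state: "TX" },
  { name: "Lincoln Middle School", city: "Dallas", state: "TX" },
  { name: "Washington Elementary", city: "Austin", state: "TX" },
];

// Roughly: SELECT ... WHERE name LIKE 'term%' LIMIT 10
function suggest(term, limit = 10) {
  const t = term.toLowerCase();
  return schools
    .filter((s) => s.name.toLowerCase().startsWith(t))
    .slice(0, limit);
}
```

With a national data set behind it and no ranking beyond the prefix match, this is exactly the kind of query that returns your school at position 30.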

It was a great way to find out what we needed to fix and where the pain points were, but letting our users play around with it before it worked well turned out to be a big mistake. One of the things I’ve slowly learned through my many years in software development is that interface interaction has an emotional component, and once your product generates a negative emotional reaction, it is difficult to recover.

That’s exactly what our slow, dense school picker did to us. I left that basic implementation hooked in while I scrambled to build other parts of the system. In the meantime, the client started writing instructions around the shortcomings of the initial implementation.

Eventually, I had time to put the final solution together. First, it needed to be fast. Twitter’s typeahead.js library took care of things on the browser side, with a smart cache on the client, and it no longer hammered the server with outdated queries. On the server side, I moved from MySQL to Solr because I knew I could tune the filtering with much more control, and it had the performance I needed.
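
The “stop hammering the server with outdated queries” part comes down to a small idea that’s worth sketching: tag each request with a sequence number and drop any response that arrives after a newer query has been issued. `fetchSuggestions` here is a hypothetical stand-in for the Solr call, not our actual client code.

```javascript
// Wraps a suggestion fetcher so stale responses are discarded.
// `fetchSuggestions` is a hypothetical async function (term) => results.
function makeSuggester(fetchSuggestions) {
  let latest = 0;
  return async function suggest(term, onResults) {
    const seq = ++latest; // this request's sequence number
    const results = await fetchSuggestions(term);
    if (seq === latest) onResults(results); // ignore out-of-date replies
  };
}
```

Combined with typeahead.js’s client-side caching of earlier result sets, this keeps keystrokes from turning into a pile of competing server round-trips.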

We got the same list, but now it leapt into place as you started typing. My local elementary still showed up at result number 30, but at least it didn’t take 3 seconds to appear anymore. Getting the user’s location was the next trick. Asking for it would have been a distraction on top of an already lengthy form, so I knew I wanted to avoid querying the browser for the location.

That left GeoIP lookup. I’d written my own GeoIP lookup component in C# for a click scoring engine in the past, but given that we didn’t need to handle millions of lookups a second, I figured I’d see what other options were out there. I found some Node packages for GeoIP lookup and a Solr client relatively easily.

That let me bring location into the mix. Now things are working well. The autocomplete is really responsive, and when the IP lookup data is close, the right school is almost always the top suggestion after a few characters are typed.
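
The location boost itself can be sketched simply: among schools whose names match the typed prefix, sort by great-circle distance from the GeoIP estimate. This is an illustrative version using the haversine formula, not our actual Solr spatial filter, and the school coordinates are hypothetical.

```javascript
// Great-circle distance in km between two {lat, lon} points (haversine).
function haversineKm(a, b) {
  const toRad = (d) => (d * Math.PI) / 180;
  const R = 6371; // mean Earth radius, km
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Prefix-match on name, then rank nearest-first from the GeoIP estimate.
function rankSchools(schools, term, userLoc) {
  const t = term.toLowerCase();
  return schools
    .filter((s) => s.name.toLowerCase().startsWith(t))
    .sort((x, y) => haversineKm(userLoc, x) - haversineKm(userLoc, y));
}
```

Even a rough GeoIP position is usually enough: two identically named schools hundreds of miles apart sort decisively, which is why the local one now surfaces after just a few keystrokes.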

One quirk that validated our approach: the GeoIP data places our office three states away, and even with that error, the speed still makes choosing local schools simple.

As far as meeting our original goals, we’ve run into a few hitches with the data itself.

  1. Schools are segmented as public, private and charter, and for some schools it isn’t clear which category they belong in.
  2. Full school names sometimes don’t match when we only have an abbreviated name.
  3. We don’t have all the schools — and probably never will — so this is a convenience, and some users just fill out all the data manually.

Overall, it works well, failing to match in only a small percentage of the registered clubs.

Lessons learned

  1. Autocomplete is awful without good performance. If you don’t have the performance to back up the design, users won’t trust or like it.
  2. Be careful with guessing algorithms. A good algorithm seems like it reads the user’s mind. A bad algorithm grates on your nerves like an annoying prat.
  3. Supporting IE8 was a huge pain.

If I were to do things over again, I would have focused on a smaller data set for initial testing, so that users would have gotten a more realistic emotional response to the picker. They would have been fine with limited data for the first pass as long as the picker showed them exactly what they were looking for. Loading all the data was exactly the right choice for the development environment, just not for demoing to the client.

How can UX improve your organization's value with customers? Schedule a short conversation to find out how Standard Beagle's TRU/X process would work for you.