IMAGE ANNOTATION | MPx

Alegion Image Annotation Tool

Led the research effort, collaborated with cross-disciplinary stakeholders and end users to define needs, and conducted usability sessions and interviews to ensure successful delivery of the most efficient annotation tooling.

WHO

Data Annotators

WHY

Alegion’s clients have large amounts of image and video data that needs to be labeled accurately at the lowest possible cost. Alegion’s workforce needs to be able to annotate this data efficiently.

WHAT

Image and video annotation tooling on the Computer Vision Platform, a.k.a. the Worker Portal.

Who is Alegion?

Alegion provides customers with high quality labeled data to use in training machine learning models.

 

Alegion's core white-glove service allowed customers to offload their data assets to our Customer Success team, who would then create an appropriate workflow and task template design so the data could be labeled efficiently and accurately by Alegion’s global workforce and BPOs. Customer Success team members created tasks in the “Admin portal,” which was one of the user experiences I managed. The other side of the platform was the “Worker portal,” which enabled a user to log in, view available tasks, and complete them for compensation. The Worker portal supported a variety of tasks, from simple ones, like inputting address fields from a given website, to complex image and video annotation tasks. We found that Computer Vision (image and video annotation) was a valuable investment, so that is what I spent the majority of my time on at Alegion.

[Image: Simplified Alegion explainer]

Building Image Annotation

My first major project was to design an image annotation experience that was as efficient and accurate as possible for workers. The field of computer vision was completely new to me, so I spent a lot of time reading white papers, exploring open-source tools, and asking our Chief Data Scientist a ton of questions.

Working with the Product Owner, we determined the image annotation tooling requirements, and I began the first ideations of the image annotation experience (a rough sketch of how these requirements might map to a data model follows the list below).

Tooling Requirements

  • Support low and high image resolutions

  • Localize a target object

  • Edit an existing shape

  • Adjust view space to see fine details

  • Classify a localized object

  • Add other defining details about a localized object
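
To make the requirements concrete, here is a minimal sketch of how a single image annotation might be represented. The field names are my own illustration for this write-up, not Alegion's actual schema.

```typescript
// Hypothetical shape of one image annotation, illustrating the requirements above.
// These names are illustrative only; they are not Alegion's actual data model.

type Point = { x: number; y: number };

interface Annotation {
  id: string;
  // "Localize a target object": a box or polygon in image pixel coordinates.
  geometry:
    | { kind: 'box'; topLeft: Point; bottomRight: Point }
    | { kind: 'polygon'; vertices: Point[] };
  // "Classify a localized object": the label chosen by the annotator.
  classification: string;
  // "Add other defining details": attributes such as occlusion or color.
  attributes: Record<string, string | number | boolean>;
}

interface AnnotationTask {
  imageUrl: string;          // works for low- and high-resolution images alike
  imageSize: { width: number; height: number };
  annotations: Annotation[]; // editable: shapes can be adjusted or deleted later
}
```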

Version 1

The first phase took us about three months to implement. The Product and Engineering teams had to incorporate these brand-new features into the platform while simultaneously adding small one-off features and functionality tweaks based on client requests. With all this on our plate, there wasn’t time to do extensive research on tooling libraries or to test with users. Ultimately, Engineering selected a library called Leaflet. While not built for drawing, Leaflet is a robust open-source mapping library designed for viewing and editing maps. It was functional...but “user-friendly” wasn’t a phrase I felt comfortable using to describe the first release.
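
For context, here is a minimal sketch (my own, not our production code) of how Leaflet can be pointed at a flat image instead of a map, with the leaflet-draw plugin supplying the explicit draw and edit modes described above. The element id, image path, and dimensions are placeholders.

```typescript
import * as L from 'leaflet';
import 'leaflet-draw';

// Treat the image as a flat coordinate plane rather than a geographic map.
const map = L.map('annotation-canvas', { crs: L.CRS.Simple, minZoom: -3 });

// Placeholder image and dimensions; a real task would load these from the platform.
const bounds: L.LatLngBoundsExpression = [[0, 0], [1080, 1920]];
L.imageOverlay('/tasks/example-frame.png', bounds).addTo(map);
map.fitBounds(bounds);

// Leaflet.draw supplies the separate draw / edit / delete modes.
const drawnItems = new L.FeatureGroup();
map.addLayer(drawnItems);
map.addControl(
  new L.Control.Draw({
    edit: { featureGroup: drawnItems },
    draw: {
      rectangle: {},
      polygon: {},
      circle: false,
      marker: false,
      polyline: false,
      circlemarker: false,
    },
  })
);

// Each completed shape becomes a layer the worker can later edit or delete.
map.on(L.Draw.Event.CREATED, (event: any) => {
  drawnItems.addLayer(event.layer);
});
```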

Our first clients to use the tool had relatively simple use cases, and our annotation tooling got the job done. As more complex project requests came through, we had to get a little more creative.

One client had precise definitions of the items they needed annotated. Many of these objects were difficult for us to decipher, let alone the non-native English-speaking annotators we needed to complete the tasks with high accuracy. As a workaround, we added a radio-button list to the task layout, because select-list items supported rich media tooltips, which let us display image examples on hover.

Version 2

For the second big release of image annotation we added the concept of relationships. The following video shows me demoing the new features. We were still using the mapping library, Leaflet, to power our drawing tools, and you can see how clunky the interactions are.
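
A relationship between two annotations can be thought of as a simple directed link between localized objects. The sketch below is my own illustration of the concept, not the platform's actual schema.

```typescript
// Illustrative only: one way to represent a relationship between two localized
// objects, e.g. "this helmet is worn by that person". Not Alegion's actual schema.
interface AnnotationRelationship {
  id: string;
  fromAnnotationId: string; // e.g. the helmet's bounding box
  toAnnotationId: string;   // e.g. the person's bounding box
  label: string;            // e.g. "worn-by"
}
```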

Key Takeaways

  • Having a true North Star and socializing my design process with all stakeholders, especially Engineering, more often would have eased a lot of tension and helped us deliver an initial version sooner.
     

  • The implemented library required a user to choose a mode (draw, delete, edit, or save) each time they wanted to change anything. This resulted in excessive mouse movement and a non-intuitive user experience. To say the least, we still had a lot of work to do to meet our goal of having the most efficient annotation tool.
     

  • It was clear that microinteractions, like adjusting an existing shape or changing a relationship, were the next big challenge to tackle to gain efficiency and increase usability.

Future Versions

With each subsequent feature release for image annotation, we aimed to increase worker usability and efficiency. One of my favorite examples of how we accomplished this is the SmartPoly tool. This feature used AI to assist the worker in drawing tedious polygons. Check out the clickable Figma prototype here.
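
To illustrate the idea only (not the actual implementation), an AI-assisted polygon tool roughly follows this flow: the worker supplies a rough hint, a model proposes a polygon, and the worker refines it. The endpoint, names, and response shape below are hypothetical assumptions.

```typescript
// Hypothetical sketch of an AI-assisted polygon flow; the endpoint and response
// shape are illustrative assumptions, not SmartPoly's actual API.
type Point = { x: number; y: number };

async function proposePolygon(imageUrl: string, roughBox: [Point, Point]): Promise<Point[]> {
  // A segmentation model turns the worker's rough box into a candidate polygon.
  const response = await fetch('/api/smartpoly/propose', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ imageUrl, roughBox }),
  });
  const { vertices } = await response.json();
  return vertices as Point[];
}

// The worker then only nudges misplaced vertices instead of clicking out every
// point by hand, which is where the dramatic time savings noted below come from.
```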

[Image: SmartPoly tool]

Key Takeaways

  • Annotators went from taking 45-90 minutes to create one polygon to taking 20-60 seconds.

Conclusion

My biggest takeaway from the first release of image annotation was the importance of sharing even your earliest ideas with Engineering as soon as possible. In an effort not to disturb the engineers, I kept my head down and iterated on my own and with other internal stakeholders. By the time I shared the first Sketch mock-ups with an engineering lead, he felt blindsided and overwhelmed by everything I was showing him. We ended up having a few one-on-one discussions to define a process for how and when Design should include Engineering in developing new features.
