Source: Deep Learning on Medium
Data management for image annotation: how to make the life of an ML engineer easier?
This year has been great for us. We were super efficient in learning the new tricks required by the fast-growing field of computer vision, and especially image annotation. As a result, we have upgraded the app and the annotation tools. These updates have enabled us to ensure higher accuracy and precision through touch mobile annotation and collaborative teamwork. We also must admit that working closely with engineers on client projects is a very valuable experience. It helped us learn how to better organize our own data workflow and make the cycle of data training, testing and validation easier.
We are excited to release our new portal:
- Efficient self-managed service for our clients.
- Access to results and real-time progress monitoring.
- Customizable project management. We improved quality control and became even more flexible.
What we learned:
ML models in computer vision are getting more complex to serve more and more use cases. We are moving into an era where computer vision solves complex real-life problems, for instance security, traffic management, and object recognition in industrial facilities. Image annotation tools and practices must therefore evolve with them. So now our tools and data management portal allow:
- A multi-tagging system for hybrid pipelines (segmentation, object detection and classification algorithms).
- Layers of attributes associated with polygons or bounding boxes.
- Ranking systems.
- Image-level attributes to allow categorization/classification.
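To make the list above concrete, here is a minimal sketch of what an annotation record supporting these features might look like. The class and field names (`Shape`, `ImageAnnotation`, `rank`, etc.) are purely illustrative assumptions, not our actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Shape:
    """One annotated object: a polygon or bounding box with layered metadata."""
    kind: str                                   # "polygon" or "bbox"
    points: List[Tuple[int, int]]               # vertices (x, y)
    labels: List[str]                           # multi-tagging: several labels per shape
    attributes: Dict[str, str] = field(default_factory=dict)  # attribute layers on the shape
    rank: int = 0                               # ranking within the image

@dataclass
class ImageAnnotation:
    """All annotations for one image, plus image-level attributes."""
    image_id: str
    image_attributes: Dict[str, str] = field(default_factory=dict)  # image-level categorization
    shapes: List[Shape] = field(default_factory=list)

# Example: a night-time traffic frame with one multi-tagged bounding box.
ann = ImageAnnotation(
    image_id="frame_0001.jpg",
    image_attributes={"scene": "traffic", "lighting": "night"},
    shapes=[
        Shape(kind="bbox", points=[(10, 20), (110, 220)],
              labels=["vehicle", "truck"], attributes={"occluded": "yes"}, rank=1),
    ],
)
```

A structure like this lets one record feed segmentation, detection and classification training at the same time.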
Don’t just deliver training data for AI. Work with engineers throughout data training, testing and validation of the ML model.
This is precisely why we now provide ways to query data by annotation status and class. This makes it possible to upload pre-trained model results and add more data while machine learning is under way: versioning, class queries, and real-life class statistics.
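The kind of query described above can be sketched roughly as follows. This is an assumption about how such a filter might work, with illustrative record fields (`status`, `labels`), not a real portal API.

```python
def query(records, status=None, label=None):
    """Filter annotation records by workflow status and/or class label.

    Hypothetical sketch: `records` are plain dicts with "status" and
    "labels" keys; real systems would query a database instead.
    """
    out = []
    for rec in records:
        if status is not None and rec["status"] != status:
            continue
        if label is not None and label not in rec["labels"]:
            continue
        out.append(rec)
    return out

records = [
    {"image": "a.jpg", "status": "annotated", "labels": ["car"]},
    {"image": "b.jpg", "status": "pending",   "labels": ["truck"]},
    {"image": "c.jpg", "status": "annotated", "labels": ["truck", "car"]},
]

# All fully annotated images containing the "truck" class:
trucks = query(records, status="annotated", label="truck")
```

Being able to slice the dataset this way is what lets a team add data for under-represented classes mid-training instead of re-annotating from scratch.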
Everyone knows it, but in practice it means a little more than a correct label. We now train ML models in complex contextual environments, so understanding the context and sharing that understanding across the annotation team requires an organized workflow. Our updated portal can validate and reject annotations object by object rather than image by image. Moreover, it automatically flags components that are not annotated correctly and sends a message both to the group and directly to the annotator responsible for the flagged error. Significantly, the client also has access to this error messaging feature, ensuring full understanding of the desired output through real-time review.
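The object-by-object review loop above can be sketched like this. Everything here is a hypothetical illustration: the function names, the validity check, and the notification callback are assumptions standing in for the portal's internal logic.

```python
def review_objects(objects, is_valid, notify):
    """Validate each annotated object individually; flag failures and
    notify the responsible annotator (hypothetical sketch)."""
    flagged = []
    for obj in objects:
        if not is_valid(obj):
            obj["status"] = "rejected"
            flagged.append(obj)
            # Message goes directly to the annotator of the flagged object.
            notify(obj["annotator"], f"Object {obj['id']} flagged: please re-check the label.")
        else:
            obj["status"] = "accepted"
    return flagged

messages = []
objects = [
    {"id": 1, "annotator": "alice", "label": "car"},
    {"id": 2, "annotator": "bob",   "label": ""},   # missing label -> should be flagged
]
flagged = review_objects(
    objects,
    is_valid=lambda o: bool(o["label"]),            # stand-in for the real QA check
    notify=lambda who, msg: messages.append((who, msg)),
)
```

The point of per-object status is that one bad polygon no longer sends a whole image back for re-annotation; only the flagged object returns to its annotator.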