Original article was published on Deep Learning on Medium
Today in AI: May 21, 2020
83 papers appeared on Arxiv today.
NVIDIA released Hierarchical Multi-Scale Attention for Semantic Segmentation, an image segmentation architecture that uses a form of hierarchical attention to combine multi-scale predictions. This approach extends an earlier 2015 paper, Attention to Scale: Scale-Aware Semantic Image Segmentation, except that the fusion operator is applied only pairwise between neighboring scales instead of explicitly over a fixed set of scales. The advantages include faster training and more flexible inference, where we get to choose the number of scales to combine. More broadly, this trick, or class of approaches, can be applied in settings where a pool of experts with different strengths and weaknesses is available: a small (attention-based) network can be trained to select which expert to trust conditioned on the input.
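To make the pairwise idea concrete, here is a minimal NumPy sketch of that fusion operator. This is an illustration of the general scheme, not the paper's implementation: the attention logits would come from a small learned head in practice, and all function names here are hypothetical; predictions are assumed to be resized to a common grid before fusion.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_pair(fine, coarse, attn_logits):
    """Fuse class logits from two neighboring scales.

    fine, coarse: (H, W, C) predictions, already resized to the same grid.
    attn_logits: (H, W, 1) output of a (hypothetical) small attention head.
    """
    alpha = sigmoid(attn_logits)  # per-pixel trust in the coarser scale
    return alpha * coarse + (1.0 - alpha) * fine

def fuse_scales(preds, attn_maps):
    """Hierarchically fold a list of predictions, coarsest first,
    applying the pairwise operator only between neighboring scales."""
    fused = preds[0]
    for fine, attn in zip(preds[1:], attn_maps):
        fused = fuse_pair(fine, fused, attn)
    return fused
```

Because fusion is a fold over neighboring scales, inference can use however many scales you like without retraining a scale-specific fusion head.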
symjax released its whitepaper for Release 0.0.1 on Arxiv today. Honestly, it’s a bit hard to read and interpret, and the differences w.r.t. Jax are not well explained. Reddit seemed a bit confused as well, as were others in the Github project’s issues. The stated goal, “our plan is to greatly augment Jax with deep learning and IO functionalities to allow easy and rapid test of ideas for practitioners,” sounds like it was generated by GPT-2. Indeed, Jax is already quite good for rapid testing of ideas out of the box. For now, symjax looks like the Theano API with Jax running the show under the hood. The story is developing.
Reproducibility at EMNLP 2020, a guest post by Jesse Dodge and Noah A. Smith on the EMNLP website, introduces the reproducibility checklist and the reproducibility challenge. The former follows in spirit the machine learning reproducibility checklist, but adapts it further to NLP. In summary, it invites authors to consider whether they have fully explained their datasets, approach, and evaluation so that others may follow in their footsteps without long threads of private emails. This is especially important in areas like NLP that are heavily empirical, and where results carry a large weight in the acceptance decision for a paper.
The reproducibility challenge is equally welcome, explicitly challenging participants to reproduce a paper of their choice and write up the results.
Honestly, not much else happened in AI today, so I will end this note by highlighting Panoptic Instance Segmentation of Pigs. It’s really exactly what it sounds like, so there you go. This is what it looks like to segment pigs:
With that — signing off.