Source: Deep Learning on Medium
Self-driving cars. Less biased crowdsourced data. Automatically generated historical accounts. These are some of the topics data science researchers across the world tackled and published to the arXiv research aggregator out of Cornell University Library in October.
Learn about revelations researchers made and how they applied machine learning, deep learning, and natural language processing in these settings.
Maximizing Accuracy of Crowdsourced Data for Machine Learning
One Switzerland-based research team proposed an algorithm to maximize fairness and accuracy of crowdsourced data.
Developers have recently grown hesitant to train machine learning algorithms on crowdsourced data because of demonstrated racial, gender, and political discrimination within it. One example is the apparent racial bias of facial recognition software trained on undiverse, open source datasets.
The Swiss research team proposed a new algorithm to assign tasks to human crowdworkers in cases where those crowdworkers determine data labels through their own observations. The model learns how to optimize the sampling probability distribution over a set of crowdworkers. This maximizes the data’s expected accuracy and ensures computational errors don’t discriminate unfairly against any social groups.
Task assignment studies are often dominated by graph matching algorithms. But this study approaches optimization as a linear problem to make it easier to solve and analyze. Ultimately, the study authors experimented with the algorithm and showed it performs well on real-world data.
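The core idea, a sampling distribution over workers, can be illustrated with a toy sketch (this is not the paper's algorithm): weight workers by estimated accuracy while enforcing a minimum sampling probability, a stand-in for the fairness constraints the study applies to social groups:

```python
# Toy sketch (not the paper's algorithm): build a sampling distribution over
# crowdworkers that favors accurate workers while enforcing a probability
# floor, standing in for the study's group-level fairness constraints.
def worker_distribution(accuracies, floor=0.05):
    """Sampling probabilities proportional to estimated accuracy,
    but never below `floor` for any single worker."""
    n = len(accuracies)
    assert floor * n <= 1.0, "floor must be feasible"
    total = sum(accuracies)
    probs = [a / total for a in accuracies]
    deficit = sum(max(0.0, floor - p) for p in probs)
    if deficit == 0:
        return probs
    surplus = sum(max(0.0, p - floor) for p in probs)
    # Lift workers below the floor; pay for it proportionally from the rest.
    return [floor if p < floor else p - deficit * (p - floor) / surplus
            for p in probs]

dist = worker_distribution([0.9, 0.8, 0.3], floor=0.2)
print([round(p, 3) for p in dist])  # → [0.422, 0.378, 0.2]
```

The actual study formulates this as a linear program over the probabilities; the floor-and-renormalize step above is just a minimal way to see the accuracy/fairness trade-off in action.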
Autonomous Driving with Deep Learning
Recent self-driving car studies have attempted to apply reinforcement learning methods. One October study found some success.
A team in China developed a vision-based lateral control system through deep learning and reinforcement learning. Lateral control systems generally help drivers change lanes safely, park, and avoid collisions.
Researchers use two methods to teach autonomous vehicles to perceive their environments: end-to-end modeling, which maps an observation directly to the desired output with a single classifier or regressor; and perception and control separation, which splits the vision-based lateral control system into the two modules it's named for.
- The perception module takes a driver-view image as input. It uses features like lane boundaries, distance to boundaries, vehicle poses, and road curvature to locate the vehicle. The multi-task learning neural network produces that location as an output.
- The control module takes the previous module’s location output and determines the best direction to follow the desired trajectory.
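The two-module split can be sketched as follows; the function names, state fields, and the simple proportional steering law are illustrative stand-ins, not the paper's implementation:

```python
# Illustrative sketch of perception/control separation (names and values are
# hypothetical). Perception maps a driver-view image to a compact state;
# control maps that state to a steering command.
def perceive(image):
    """Stand-in perception module: in the study this is a multi-task neural
    network; here we fake its outputs for illustration."""
    return {
        "dist_to_left_boundary": 1.2,   # meters (dummy values)
        "dist_to_right_boundary": 2.3,
        "heading_error": -0.05,         # radians off the lane direction
        "road_curvature": 0.01,
    }

def control(state, k_lateral=0.4, k_heading=1.5):
    """Stand-in control module: a proportional law that centers the car and
    corrects heading (the paper instead learns this with RL)."""
    lateral_offset = (state["dist_to_right_boundary"]
                      - state["dist_to_left_boundary"]) / 2.0
    return k_lateral * lateral_offset + k_heading * state["heading_error"]

steering = control(perceive(image=None))
print(round(steering, 3))  # → 0.145
```

The value of the split is visible even in this toy: the control module never touches pixels, only the compact state the perception module emits.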
The vision-based lateral control framework includes the perception module, the control module, and the VTORCS environment. Image from study.
The researchers built their deep learning model on this state-of-the-art perception and control separation approach.
The control module is commonly modeled with a linear quadratic regulator, fuzzy logic, or model predictive control. These model-based methods can be cost-prohibitive for large-scale applications because they depend on expensive light detection and ranging (lidar) sensors. Instead, this study tested a model-free method: the control module was trained with reinforcement learning on raw sensor data from a simple camera.
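To make "model-free" concrete, here is a toy tabular Q-learning loop for a one-dimensional lane-keeping task (an assumption for illustration; the study trains a deep network on camera observations in a driving simulator):

```python
import random

# Toy model-free sketch (not the paper's network): tabular Q-learning on a
# 1-D lane, where the observation is just the discretized lateral position
# and the actions steer left, straight, or right.
random.seed(0)
POSITIONS = range(-2, 3)          # lane cells; 0 is the lane center
ACTIONS = (-1, 0, 1)              # steer left / straight / right
Q = {(s, a): 0.0 for s in POSITIONS for a in ACTIONS}

alpha, gamma, eps = 0.5, 0.9, 0.1
for _ in range(500):
    s = random.choice([-2, -1, 1, 2])
    for _ in range(20):
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda b: Q[(s, b)]))
        s2 = max(-2, min(2, s + a))               # move, clipped to the lane
        r = 1.0 if s2 == 0 else -abs(s2)          # reward for staying centered
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# After training, the greedy policy should steer back toward the center.
policy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in POSITIONS}
print(policy)  # expect negative actions right of center, positive left of it
```

The key point mirrored here is that no model of the vehicle dynamics is ever built; the controller improves purely from observed rewards, which is what lets the study replace lidar-dependent model-based control with a camera.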
The resulting automated driving controller reliably inferred a track's features and the vehicle's location on it. In fact, it outperformed linear quadratic regulator and model predictive control controllers across different tracks.
Using Story Salads to Generate Stories with Natural Language Processing
Computational documentation of history: that’s what one natural language processing study looks to achieve with “story salads.”
Multiple news sources may write about the same event, but their stories will include different details and topics. University of Texas at Austin researchers wanted to determine whether they could use natural language processing to combine these different sources to create meaningful and comprehensive accounts of events.
“Story salads” are mixtures of multiple documents that can be generated at scale and then used to train neural models. The Texas researchers used New York Times and Wikipedia story salads to train unsupervised clustering and neural network-based supervised clustering algorithms. Then they tested whether the resulting groupings could reconstruct a logical and complete picture of the events described within those story salads.
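A minimal sketch of the setup (the construction and the clustering heuristic are assumptions for illustration, not the paper's pipeline): shuffle sentences from two documents into a "salad," then try to recover the source groupings by word overlap:

```python
import random

# Toy "story salad": mix sentences from two documents, losing their labels,
# then split them back apart with a simple word-overlap clustering.
doc_a = ["the rover landed on mars",
         "the rover sent photos from mars",
         "mars dust covered the rover"]
doc_b = ["the senate passed the budget",
         "the budget vote split the senate",
         "senate leaders praised the budget"]

random.seed(1)
salad = doc_a + doc_b
random.shuffle(salad)              # the salad: source labels are lost

def overlap(sentence, cluster):
    """Shared-word count between a sentence and a cluster's vocabulary."""
    vocab = {w for s in cluster for w in s.split()}
    return len(set(sentence.split()) & vocab)

# Seed two clusters with the most dissimilar sentence pair, then assign
# the remaining sentences greedily by word overlap.
pairs = [(i, j) for i in range(len(salad)) for j in range(i + 1, len(salad))]
i, j = min(pairs, key=lambda p: overlap(salad[p[0]], [salad[p[1]]]))
clusters = [[salad[i]], [salad[j]]]
for sent in (s for k, s in enumerate(salad) if k not in (i, j)):
    scores = [overlap(sent, c) for c in clusters]
    clusters[scores.index(max(scores))].append(sent)

for c in clusters:
    print(sorted(c))
```

Word overlap suffices here because the two toy documents share almost no vocabulary; the study's finding is precisely that real story salads with similar topics need global context beyond such local similarity.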
Ultimately, they found that these clusters need global context, whether the models are driven by natural language alone or by event tuple representations (created by humans). When contextualized, sentence-based models produced more accurate groupings on story salads containing very similar topics. Event-based models were less accurate overall, though they struggled less with topically similar events.
What’s more, when finding the prevailing narratives in a story salad requires knowledge from an external source, machine learning can outperform humans: it can pull in that knowledge quickly and automatically, whereas humans can absorb only so much in a short span of time. The researchers noted this is unlike most natural language processing tasks, where human performance is the gold standard that machine learning tries to reach.
Machine learning, deep learning, and natural language processing will continue to advance the capabilities of resources we utilize every day. Learn more about cutting-edge research through a quick glance at some of September’s most compelling research, including AI-identified healthcare policy, hate speech detection, and sarcasm detection.
— — — — — — — — — — — — — — — — — —
Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday.