Original article can be found here (source): Deep Learning on Medium


Regulation → has to happen in some form → in the new media age, AI can not only create fake media, it can also help regulate the industry. (content management).

Online platforms have grown → they are now integral → everyone depends on them to get by → but harmful material → must be kept off them.

Content that promotes → abuse or nudity → cannot all be filtered manually, so AI tools might be the solution. (machines doing human-like tasks). (there is a lot of investment in the AI media world).

The usual supervised learning → methods have general applications here. (more AI approaches will follow).

Content moderation → Gmail already uses machine learning for spam detection → but things can get much more complicated → now video, audio and much more must be managed.

A broad understanding of the content is needed, and more. (image and audio analysis → are critical).

Now live videos have to be moderated as well → this requires computer vision methods to process all of the information. (but removing content also has negative effects → since platforms end up deciding what can be said and where)

AI → has to identify harmful material → this is more or less outlier detection → since we want to classify whether content is harmful.
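The outlier-detection framing above can be sketched in a few lines. This is a minimal toy example, not the method any platform actually uses: it assumes content has already been reduced to numeric feature vectors (a hypothetical preprocessing step) and simply flags rows that sit far from the mean.

```python
import numpy as np

def flag_outliers(features, threshold=3.0):
    """Flag rows whose z-score in any dimension exceeds `threshold`,
    a crude stand-in for marking unusual (possibly harmful) content."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-9  # avoid division by zero
    z = np.abs((features - mean) / std)
    return z.max(axis=1) > threshold

# Three "normal" items and one extreme item (hypothetical feature vectors)
feats = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.18], [9.00, 9.50]])
flags = flag_outliers(feats, threshold=1.5)  # only the last row is flagged
```

In practice the flagged items would then be routed to a classifier or a human reviewer rather than removed automatically.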

The problem is → human input will always be needed → yet viewing harmful content takes a toll on the reviewers. (there are two approaches → pre-moderation and post-moderation → each with a different workflow).

There is a lot of potential when it comes to industry impact and traffic. (keyword filtering is one technique → additionally, hash matching is used as well).
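The two techniques just mentioned can be sketched together. The keyword list and the "known bad" payload below are hypothetical placeholders; real systems use curated term lists and perceptual hashes (e.g. PhotoDNA-style), not a plain SHA-256 as shown here.

```python
import hashlib

BLOCKED_KEYWORDS = {"spamword", "scamlink"}  # hypothetical blocklist
BLOCKED_HASHES = {hashlib.sha256(b"known bad file").hexdigest()}

def keyword_filter(text):
    """Flag text containing any blocked keyword (case-insensitive)."""
    return any(w in BLOCKED_KEYWORDS for w in text.lower().split())

def hash_match(payload):
    """Flag a byte payload whose hash matches a known-bad item."""
    return hashlib.sha256(payload).hexdigest() in BLOCKED_HASHES

flagged_text = keyword_filter("click this scamlink now")  # True
flagged_file = hash_match(b"known bad file")              # True
clean_file = hash_match(b"harmless file")                 # False
```

Keyword filters catch novel text cheaply but generate false positives; hash matching is precise but only catches exact re-uploads of already-known material, which is why platforms combine both with learned models.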

RNNs → can process video over time → but they are still far from perfect → all of this is very challenging.
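To make the temporal-processing idea concrete, here is a minimal vanilla RNN over a sequence of video frames, sketched with NumPy. The frame features, dimensions, and random weights are all hypothetical; in practice the frames would come from a CNN encoder and the weights would be learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input: 8 video frames, each reduced to a 4-dim feature vector
frames = rng.normal(size=(8, 4))

# Tiny vanilla RNN: one hidden state carried from frame to frame
W_xh = rng.normal(scale=0.1, size=(4, 3))  # input -> hidden weights
W_hh = rng.normal(scale=0.1, size=(3, 3))  # hidden -> hidden weights
b_h = np.zeros(3)

h = np.zeros(3)
for x in frames:  # process frames in temporal order
    h = np.tanh(x @ W_xh + h @ W_hh + b_h)

# The final hidden state summarizes the whole clip;
# a trained linear head on top of it could score harmfulness.
score = h.sum()
```

The key point is that the hidden state `h` accumulates context across frames, which is what lets an RNN judge a video as a sequence rather than frame by frame.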

Different problems → need different kinds of training material → identifying bullying is also a problem in itself.

The reason this matters is → there is a human being behind every click and every bit of traffic. (and what happens on the internet affects how we live, think and feel). (there are other examples → such as style transfer and more).

GANs → are the best tools for generating content → which might be bad or fake content. (so an AI can first → flag the advertisements → and then a human can take a deeper look → the degree of harmfulness has to be measured as well, and across different languages).

Traffic is growing daily and holds a lot of potential → building a company around moderation might therefore be a good idea.

Websites are generally → more or less content distribution → and advertisements are the key source of profit → hence they need someone who can regulate this content.

Prevent potentially bad content from being posted in the first place. (AI can also be used to encourage positive engagement! → auto-filling → it can suggest words for you → or if a draft looks harmful → a warning can be shown before it is sent)
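The pre-send warning idea above can be sketched as a tiny check that runs before a message is posted. The term list and wording are hypothetical; a real system would use a trained toxicity classifier rather than string matching.

```python
# Hypothetical list of terms that trigger a nudge before posting
HARMFUL_TERMS = {"insult", "idiot"}

def pre_send_check(draft):
    """Return a warning string before posting, or None if the draft looks fine."""
    if any(term in draft.lower() for term in HARMFUL_TERMS):
        return "This message may be hurtful. Post anyway?"
    return None

warning = pre_send_check("what an insult you are")  # a warning is returned
ok = pre_send_check("have a nice day")              # None: nothing to flag
```

This is the proactive workflow: instead of removing content after the fact, the user is nudged to reconsider before anything is published.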

AI-based chatbots → are another idea for moderation.

Twitter is a great example of how to set → policies → creating policies on these matters is a tricky thing. (improving the performance of moderation is the final goal).

Sharing data between different social media platforms can be helpful → since together they hold a lot of data. (NLP systems → also play a critical role when it comes to understanding text).

Thinking about how AI can be used to encourage beneficial behavior → is another interesting area to study.

GPUs → thanks to their advancement → we are now able to process huge amounts of data in an instant → and deep learning is the best path toward this kind of automation.

DeepMind → AlphaGo → this was the start of everything → along with the 2014 ImageNet challenge. (at the end of the day → understanding what the content means is critical → something we humans do naturally)

However, using AI → brings bias and a lack of explainability → it is not a silver bullet → and there are weaknesses → but it is definitely the way forward. (freedom of speech must be preserved).

This is what we are going to cover → as the book goes on.

Additionally, we are going to look at what will be hard to implement.