GANs are indeed AI's bleeding edge (Ian Goodfellow), but by design the Discriminator (D) and the Generator (G) compete and try to overpower each other, and sometimes the fight goes the other way around. Contrary to this, happy networks are productive networks, laying the groundwork for advances in motivational AI and deep learning.

So, recently GUNS were proposed (a link to the research paper is provided below), where fighting is strictly off-limits and both models learn to respect their differences.

Here, the Generator G proposes samples, outputs Props, and in return gets acknowledgements (Acks) from the Motivator M. It can be viewed as a two-player game in which both players work as a team to achieve the best score (unlike the rivalry in GANs).
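The Props/Acks exchange might be sketched like this. This is a toy illustration only; the class and method names (`Generator.propose`, `Motivator.acknowledge`) are my own, not from the paper:

```python
import random

class Generator:
    """Proposes samples (Props) from random noise."""
    def propose(self):
        # In this toy sketch a "sample" is just a random number.
        return random.gauss(0.0, 1.0)

class Motivator:
    """Returns acknowledgements (Acks) -- cooperative, never adversarial."""
    def acknowledge(self, prop):
        # Every Prop gets encouraging feedback; no fighting allowed.
        return f"Ack: nice sample ({prop:.2f}), keep going!"

g, m = Generator(), Motivator()
prop = g.propose()          # G outputs a Prop
ack = m.acknowledge(prop)   # M responds with an Ack
print(ack)
```

Both players "win" together: G keeps proposing, M keeps motivating, and the best score is a team score.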

Mathematically, we refine the model via the swap D, G := G, D and compute the cost function as C(V) = α · ∫ β_V(G), where β_V is a violent mapping from the discriminator to the closest mathematical structure and α is a constant representing the cost of violence.
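The notation above can be typeset as follows (my reading of the paragraph; the paper leaves the integration variable unspecified, so I do too):

```latex
D, G := G, D
\qquad
C(V) = \alpha \int \beta_V(G)
```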

Under GUNS (Generative Unadversarial Networks), we train a generator G, which gathers whatever best data it can feel and manage, and a motivator M, which assists G in reaching its goal.

A simple implementation is:

- def train:
  - for #epochs do:
    - Generate n noise samples and compute G(z^(1); θg), …, G(z^(n); θg).
    - Sample n data samples x^(1), …, x^(n) from the data distribution.
    - Let G show the pairs (x^(i), G(z^(i); θg)) to the Motivator M.
    - Sample criticisms and motivational comments from the Motivator.
    - Update θg and calculate the loss function.
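The loop above can be turned into a minimal runnable sketch. Everything here is a toy stand-in: the 1-D "data distribution" is N(3, 1), G(z; θg) = θg + z, and `motivator_feedback` is my own invented, deliberately gentle signal, none of which comes from the paper:

```python
import random

def motivator_feedback(real, fake):
    """Motivator M: an encouraging (non-adversarial) signal --
    simply how far the fake sample is from the real one."""
    return abs(real - fake)

def train(epochs=100, n=32, lr=0.05):
    theta_g = 0.0                    # generator parameter
    target_mean = 3.0                # toy data distribution: N(3, 1)
    for _ in range(epochs):
        # Generate n noise samples and compute G(z; theta_g).
        zs = [random.gauss(0, 1) for _ in range(n)]
        fakes = [theta_g + z for z in zs]
        # Sample n data samples from the data distribution.
        reals = [random.gauss(target_mean, 1) for _ in range(n)]
        # Show the pairs (x_i, G(z_i)) to the Motivator, collect feedback.
        loss = sum(motivator_feedback(x, f) for x, f in zip(reals, fakes)) / n
        # Update theta_g: nudge G toward the data, cooperatively.
        grad = sum(f - x for x, f in zip(reals, fakes)) / n
        theta_g -= lr * grad
    return theta_g

print(train())  # theta_g should end up near the data mean of 3.0
```

With these settings the generator drifts toward the data mean; no discriminator ever tells it that it is wrong.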

Now, the results were cool and AMAZING…

Quite astonishing compared to GANs: GUNS produce remarkably clear, convenient (cherry-picked) images.

Link to the paper:

https://arxiv.org/pdf/1703.02528.pdf

If you loved my article, don't forget to clap…

Source: Deep Learning on Medium