Building AGI by learning to learn.


In a few weeks, our team @astrum will be launching in private beta for users and business owners who want to derive value from their datasets. The idea is pretty straightforward: we use a brain to recursively create mini brains.

A sketch I made in 5 mins explaining the process 🙂

If the algorithm were a conversation, here’s how the dialogue would pan out:

Big brain: *builds a child brain*

Child brain: *learns something* Hey big brain! I just learned X from the data you gave me!

Big brain: *rolls eyes and proceeds to create a better child brain*

New child brain: *learns new things* Hey big brain! I just learned X, Y, and Z from the data you gave me!

Big brain: *visibly impressed* Good job! *consumes child brain to improve its own capabilities*

Big brain after consuming its own child 🙁

As we work with more diverse data, our models learn to improve themselves, and start to pick up the art of building custom brains for each type of dataset. (Keep in mind that this is not the same as hyper-parameter search like AutoML or other cloud ML features.)
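To make the loop a bit more concrete, here’s a deliberately simplified toy sketch in Python. It only captures the skeleton of the big-brain/child-brain conversation above; the names (BigBrain, ChildBrain, absorb), the tiny design space, and the scoring function are all made up for the example. Our production pipeline is quite different, and in particular the big brain carries what it learns across many datasets rather than just searching within one.

```python
# Illustrative toy sketch of the "big brain builds child brains" loop.
# All names and the scoring function are hypothetical -- this is NOT our
# production code, just the shape of the conversation in code form.
import random

class ChildBrain:
    """A child model described by a small set of design choices."""
    def __init__(self, design):
        self.design = design          # e.g. {"depth": 3, "width": 64}
        self.score = None

    def learn(self, dataset):
        # Stand-in for real training: score the design against the dataset's
        # (toy) preferred depth/width. In practice this would be a full fit.
        self.score = (-abs(self.design["depth"] - dataset["best_depth"])
                      - abs(self.design["width"] - dataset["best_width"]) / 32)
        return self.score

class BigBrain:
    """The controller: proposes children, then absorbs the best one."""
    def __init__(self):
        self.beliefs = {"depth": 4, "width": 128}   # current best guess

    def build_child(self):
        # Propose a design near the current beliefs (a little exploration).
        design = {
            "depth": max(1, self.beliefs["depth"] + random.randint(-2, 2)),
            "width": max(8, self.beliefs["width"] + random.choice([-64, -32, 0, 32, 64])),
        }
        return ChildBrain(design)

    def absorb(self, child):
        # "Consume" the child: shift beliefs toward its design.
        self.beliefs = dict(child.design)

def run_pipeline(dataset, generations=10, children_per_gen=8):
    big = BigBrain()
    best = None
    for _ in range(generations):
        children = [big.build_child() for _ in range(children_per_gen)]
        for child in children:
            child.learn(dataset)
        top = max(children, key=lambda c: c.score)
        if best is None or top.score > best.score:
            best = top
            big.absorb(top)           # big brain improves from its best child
    return best                       # the model the customer keeps

if __name__ == "__main__":
    toy_dataset = {"best_depth": 6, "best_width": 256}
    winner = run_pipeline(toy_dataset)
    print("Best child design:", winner.design, "score:", round(winner.score, 3))
```

In this toy version the big brain only nudges its beliefs toward whichever child scored best; the interesting part of the real system is that those beliefs persist and improve as it sees more kinds of data.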

Meanwhile, businesses that use our platform feed their data through our pipeline and receive a well-generalized model in return. (They basically get to keep the best child brain for a fee.)

We’ll be blogging about our progress periodically, and plan to release the product iteratively, with features accumulating over time. Our MVP is launching soon, and if you’re interested in the value you might unlock through our platform, leave us a comment down below. We can’t wait to get our MVP into your hands so you can test it for yourselves, and we look forward to seeing what you do with it!

-The Astrum team 🙂