Original article was published on Artificial Intelligence on Medium
Q Blocks: Ubiquitous Supercomputing has arrived
The 20th century was the century of the Personal Computer. The 21st century will be the century of the Personal Supercomputer.
Twenty years ago, at the beginning of this new century, nobody imagined the amazing technologies that would be born in the following decades. The smartphones we carry in our pockets today have more computing power than the spacecraft that made Armstrong and Aldrin the first men to step on the Moon.
New technologies like the smartphone come with a massive paradigm shift: computers went from being massive devices restricted to a very few people to tiny, carry-anywhere things that fit in our pockets. They have transformed the world, dramatically increasing our quality of life.
Today I want to speak about one such new technology, one that promises a paradigm shift comparable to the one we experienced with smartphones. When it is fully developed, you will be able to launch incredibly costly computational tasks from your laptop, grab a coffee, and have them completed by the time you are back. No more waiting for hours while a Machine Learning model trains, or spending a fortune to speed up the process.
This technology will allow anybody, from anywhere in the world, to access extraordinary computing capabilities that until now have only been achievable by supercomputers, by leveraging a concept that has been amazingly disruptive in recent decades and has set the path for the future of many technologies: Distributed Computing.
What is distributed computing?
Distributed computing is the field of computer science that studies systems whose components are located on separate, networked computers that work towards a common goal. A well-designed distributed system has no single point of failure, and can therefore keep operating if any node in the network goes down.
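As a toy illustration of the idea, here is plain Python with worker threads standing in for the remote nodes — a sketch of the concept only, not anything resembling Q Blocks' actual implementation:

```python
# Toy illustration of distributed computing: split one big job into
# chunks, let independent "nodes" (worker threads here) compute
# partial results, then combine them into the final answer.
from concurrent.futures import ThreadPoolExecutor

def node_work(chunk):
    # Each node works on its own slice of the data independently.
    return sum(x * x for x in chunk)

def distributed_sum_of_squares(data, n_nodes=4):
    size = max(1, len(data) // n_nodes)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        partial_results = pool.map(node_work, chunks)
    # The combined answer is identical to what one machine would compute.
    return sum(partial_results)

print(distributed_sum_of_squares(list(range(1000))))  # 332833500
```

If one "node" disappears, its chunk can simply be handed to another — that re-assignment is exactly what real distributed systems automate.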
Heard about Bitcoin? The whole concept is based on blockchain technology: a distributed ledger that tracks all the transactions of the currency and is managed by a distributed, peer-to-peer network of miners. We will speak about these miners later, so keep them in mind.
Distributed computing, however, does not limit its magic to currency applications. In fact, the beauty of this technology is how promising its uses are elsewhere. Ethereum is a very good example of this, allowing for the creation of powerful, decentralised applications.
Wouldn’t it be awesome if from anywhere in the world, we could access some sort of distributed network of computing resources that work together to crunch the big numbers in our Machine Learning jobs?
By adding up the computing resources of all the nodes in the network, we could access extraordinary computing capabilities that until now have been limited to a lucky few. Anyone would be able to harness the full power of supercomputing.
What is Supercomputing?
Supercomputing refers to the use of enormous processing capabilities, obtained by harnessing the resources of a large number of individual computing systems working in parallel. The resulting system can deliver tremendous performance and be applied to profoundly complex problems like weather forecasting or particle physics.
Supercomputing is fundamental for solving tasks that would simply be impossible for standard computers, so to this day it has only been accessible to those who needed to carry out these kinds of tasks: governments, gigantic enterprises, or top-notch research teams at universities.
Some examples of the amazing tasks supercomputers are used for: simulating the evolution of the Universe since the Big Bang, unravelling the mysteries of protein folding, or mimicking the human brain. These kinds of applications require processing and memory capabilities that no common computer can achieve by itself.
What would happen if anybody could have access to supercomputing-like resources? Imagine if every scientist, engineer, designer, or CGI artist could have these amazing capabilities at their disposal. Their work would be so much faster, less painful, and more efficient.
The Future of computing: Distributed Supercomputing
Distributed computing and supercomputing meet by harnessing an insanely large network of individual, remote computers that pool their computing power, building up to the capabilities of a supercomputer.
This concept of distributed supercomputing isn't new. It is used in projects like SETI@home by UC Berkeley, which relies on a large peer-to-peer network of computers to help in the Search for Extra-terrestrial Intelligence (SETI), and Folding@home from Stanford University, a distributed computing project that uses PCs from all over the world for disease research. The latter has played a large role in investigating the molecular structure of the virus behind COVID-19, achieving over 1 exaflops of computing power.
In the previous paragraph, however, we can spot the problem with supercomputing. Both of these projects are hosted by top universities and backed by big tech companies like NVIDIA. Supercomputing has never been accessible to the masses. We've never been able to use this kind of power for our own projects or applications.
Until now. What if you, my dear reader, could access these insane computing capabilities for the price of a Starbucks coffee?
This is exactly what Q Blocks aims to build: an affordable, accessible, and easy-to-use platform that allows anybody to harness the power of distributed supercomputing. A platform that can bring supercomputing into the hands of the general public. Let's find out how.
Q Blocks: the democratisation of Supercomputing
Q Blocks aspires to put amazing computing capabilities at everybody's disposal. By using distributed computing to connect a vast network of individual computers across the world, it puts the power of a real supercomputer at your fingertips.
It does this at a fraction of the cost of cloud platforms like Amazon Web Services or Microsoft Azure, while also improving performance and usability.
How, you might ask? Remember the Bitcoin miners we mentioned earlier? Here is where they come back. People with idle or under-utilised computing resources can become mini cloud hosts on the Q Blocks network and earn meaningful economic incentives for keeping those computing nodes online 24/7.
Their software enables people with powerful computers to rent out their computing power to data scientists, designers, and anybody else who can benefit from a vast amount of computing resources.
In the past decade, companies like Uber and Airbnb made it evident that the future is moving towards a sharing economy that takes advantage of under-utilised resources so that everyone involved is happy: users get to use an asset that is not theirs, and the owners of such assets get financially rewarded.
Just like Uber or Airbnb, Q Blocks helps crypto-miners and gamers make money from their idle machines, while allowing Data Scientists or Engineers from all over the world to use incredible computing capabilities in an affordable manner. Everybody wins.
Q Blocks' mission is to achieve this in a staged and secure manner. Some technologies still need to be built to provide this highly scalable yet affordable supercomputing experience. However, everything has been conscientiously planned.
Also, like Elon Musk's master plan, the route is laid out so that everybody can see what the path is and where it's heading. Let's take a look at it.
The Vision and Stages
The goal of Q Blocks is to change the world by bringing the power of supercomputing into the hands of every creative mind on the planet. Their entire vision is mapped out in multiple stages, with Stage 1 (the current stage) dedicated to demonstrating the power of peer-to-peer computing.
The path towards distributed supercomputing is divided into three main phases or stages. Each stage builds on the previous ones, constituting an upgrade in the computing capabilities accessible on the Q Blocks platform. These stages are the following:
- Stage 1: Peer-to-peer computing. By using a distributed network of miners/volunteers, Q Blocks can provide anyone with access to a powerful GPU to power their applications in a transparent manner, while also rewarding the owners of those GPUs. This is the stage we are in now, where you can use one of those powerful, remote GPUs for the cost of a CPU.
- Stage 2: Multi-GPUs. In the second stage, Q Blocks users will be able to access a cluster of GPUs (30–50) to run their computationally expensive applications in the time it takes to make a coffee.
- Stage 3: The arrival of the Personal Supercomputer. In the final stage of this exciting path, Q Blocks aims to give users access to a 1,000-GPU cluster, dwarfing the speed of any computer you have used before and allowing you to train your Machine Learning models in the blink of an eye.
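To put rough numbers on what those clusters could mean, here is a back-of-the-envelope model using Amdahl's law. The parallel fraction below is an illustrative assumption of mine, not a Q Blocks benchmark:

```python
# Amdahl's law: the ideal speedup from n workers is limited by the
# part of the job that cannot be parallelised.
def amdahl_speedup(n_workers, parallel_fraction):
    """Ideal speedup when only `parallel_fraction` of a job parallelises."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# Assume a training job that is 95% parallelisable (illustrative):
print(round(amdahl_speedup(50, 0.95), 1))    # ~14.5x on a Stage 2 cluster
print(round(amdahl_speedup(1000, 0.95), 1))  # ~19.6x on a Stage 3 cluster
```

Notice how the serial 5% caps the gains — which is why scaling to 1,000 GPUs takes real engineering, not just more machines.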
As I mentioned earlier, this path is still being paved, as numerous technologies need to be built before reaching Stage 3. However, some amazing work has already been carried out to reach the first stage.
First, Q Blocks developed an easy-to-use application that lets people with spare or unused computing resources onboard their GPUs onto the Q Blocks network. The application benchmarks these machines to make sure they satisfy certain requirements; if they do, the machines are accepted onto the network and automatically configured with the required dependencies and frameworks.
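An onboarding benchmark could work along these lines. The workload and acceptance threshold below are purely illustrative assumptions on my part, not Q Blocks' actual criteria:

```python
# Toy benchmark in the spirit of the onboarding step: time a fixed
# numeric workload, then accept the machine only if it finishes
# within a threshold. Workload and threshold are illustrative only.
import time

def run_benchmark(n=120):
    """Time an n x n matrix multiplication in pure Python (seconds)."""
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    start = time.perf_counter()
    _ = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    return time.perf_counter() - start

def meets_requirements(elapsed_seconds, max_seconds=30.0):
    """Accept the machine if the benchmark completed fast enough."""
    return elapsed_seconds <= max_seconds

elapsed = run_benchmark()
print(f"benchmark took {elapsed:.2f}s, accepted: {meets_requirements(elapsed)}")
```

A real onboarding check would of course measure the GPU, not the CPU, and verify drivers and memory too — but the accept/reject shape is the same.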
As a user, when you access the platform, all you have to do is create and configure an instance: choose the desired computing capacity and programming framework, and select an access method (JupyterLab or SSH). From there, you can start your instance and start programming! The following video shows how easy it is.
In the background, the Q Blocks application handles all the heavy lifting: setting up the instances, configuring them with the chosen frameworks (TensorFlow, PyTorch, Scikit-learn…), and launching the desired environment to run your applications in an inexpensive yet very powerful manner.
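Once inside your environment, a quick sanity check confirms the framework actually sees the GPU. PyTorch is shown here as one example; the check degrades gracefully if the library happens to be missing:

```python
# Sanity check: does the ML framework on this instance see a GPU?
# Uses PyTorch if it is installed; returns False otherwise.
import importlib.util

def gpu_visible():
    """Return True only if PyTorch is installed and reports a CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return False  # framework not installed on this machine
    import torch
    return torch.cuda.is_available()

print("GPU ready:", gpu_visible())
```

If this prints `GPU ready: True`, your training code will run on the remote GPU without any further changes.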
Even with a long way to go to reach the final stage, you can already use a powerful GPU for the cost of a standard CPU. The road towards bringing affordable supercomputing to everyone is slowly being paved. Want to take your first steps on it? Read on to learn how.
How can I try it out?
You are probably already biting your nails, asking yourself when you will be able to send your first computing job to the GPU in Jacob's computer in Manchester. Well, you are in luck. You can do it today.
To showcase the power of their service and help users try it out, Q Blocks offers free computing hours to early-access users. If you want to be part of the distributed supercomputing revolution from the earliest days, save time and money, and never again sit in your chair staring at the ceiling while you wait for your Convolutional Neural Network to train, go on and check it out!
I have to admit, I am very excited about something like this being built. The part of being a Data Scientist that I hate the most is waiting for hours for a model to train, feeling helpless as the epochs go by. I'm sure many of you (designers, scientists, and engineers) feel the same way, having to spend a significant part of your days waiting for something to execute, train, or load.
Well, it looks like those days are over.
Distributed computing is about collaboration, about achieving things together. You can all be part of this journey: the journey to bring supercomputers into the hands of everybody.
There's still a lot of work to be done, but when Stage 3 of this journey arrives, a brilliant scientist in Nigeria will be able to get a supercomputing experience for the cost of a Starbucks coffee. And you and I will be proud to have been on this journey since the beginning.
Become an early access user and begin the journey here: The Q Blocks Cloud.
Image Source: Q Blocks.