One of the analogies I use to explain enterprise IT to people is that it’s like a classical music orchestra. Think about it for a second. An orchestra has musicians, instruments, a composer, and some sheet music. An enterprise IT organization has a similar structure.
The musicians are the different roles in IT. The instruments are the tools and platforms. The composer, well, that could be the architect or the department head. The sheet music is the process they use to build, ship, and run applications.
They also have similar goals. An orchestra’s goal is to perform a symphony. An IT organization’s goal is the same, except that instead of music, it’s shipping an app. Either way, both are using their tools and their talent as best they can, trying to create something cohesive and purposeful that appeals to their audience.
And, like an orchestra, any IT department knows that the hardest thing to do is get everyone on the same page, let alone playing the same song in the right key. Recent advancements in technology and process have made things easier.
Containers have made developing, deploying, and running software simpler and more consistent. DevOps has made the lifecycles of applications more adaptable, responsive, and resilient. And for machine learning to function at the enterprise IT level, it must mesh with how the rest of the organization deploys applications. Whether you agree or disagree, that’s the reality.
Machine Learning: a new set of instruments
There’s a new section being introduced to the orchestra: machine learning. That’s because data scientists and the instruments they use have emerged as practical tools — tools that provide quantifiable business benefits and are therefore moving deeper into the enterprise. According to a report published by Narrative Science entitled Outlook on Artificial Intelligence in the Enterprise 2018, “AI adoption grew by over 60% in 2017”. That’s a tremendous increase in traction.
If we return to our analogy, think about what it was like when the brass section was introduced to the orchestra. The instruments themselves weren’t new. Horns had been in use for centuries before composers like Beethoven (who, by the way, was the first major composer to incorporate a brass section into his symphonic works).
However, before the brass section could be accepted, two things needed to change. The first thing that needed to change was musical tastes. Second, the instruments themselves needed to improve. Machine Learning is in a similar spot. It’s been around for a long time. And it has seen staggering improvement. More importantly, if reports like the one mentioned above are proof, tastes have evolved as well. The challenge is writing this new section into the music to improve the overall symphony.
Introducing RiseML: the simplest way to adopt machine learning in enterprise IT
This is where RiseML comes into play. RiseML makes it incredibly easy for IT organizations to implement the technology with little to no disruption. RiseML provides a simple yet powerful abstraction for machine learning engineers that leverages Kubernetes, an existing container orchestration engine already used by numerous IT departments. The beauty of RiseML is not just the tech itself, but the fact that it hides the working details of the underlying cluster. It eliminates the need for the data scientist to become an ops person.
Making things simple is paramount for success. And that’s where RiseML shines. Containers and DevOps helped solve a very prominent issue in IT: Ops doesn’t like developing applications, and developers don’t like being bogged down by system administration. In the same vein, a data scientist doesn’t like doing either of those things. They’re interested in the data, and in validating hypotheses with that data. In other words, a woodwind player wouldn’t do well on a brass instrument, and someone from the brass section wouldn’t do well in the woodwind section.
RiseML frees up data scientists so they don’t have to bother with either of those things, and it does so in a way that lets them play along with everyone else without disrupting existing processes. It creates a seamless way for the data scientist to add value to the business.
To get a sense of what I mean, let’s take a look at the RiseML architecture.
The answer is in the RiseML Architecture
RiseML has three main components: the CLI, the backend, and the Kubernetes container orchestration platform. And it’s the harmonious — see what I did there? — integration of these three areas that makes RiseML such a powerful solution for any organization looking to adopt machine learning or deepen its use in the business.
RiseML — The CLI
The CLI abstracts away many of the tasks associated with interacting with an orchestration engine like Kubernetes. This means the data scientist can stay in their area of expertise without disrupting the ops environment. It leverages Kubernetes, something your ops person is already familiar with. Plus, the RiseML CLI is couched in language the data scientist already knows (like “train”, “status”, etc.). For the data scientist to run an experiment, all they need to do is run “riseml train”. Beautiful.
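To make that concrete, here is a minimal sketch of what a project configuration driving “riseml train” might look like. The file name and every key below (project, train, resources, run) are illustrative assumptions on my part, not the verified schema — consult the RiseML documentation for the actual format.

```yaml
# riseml.yml -- hypothetical sketch of a training configuration.
# Key names are illustrative assumptions, not the verified schema.
project: sentiment-classifier
train:
  resources:
    cpus: 2
    mem: 4096      # memory in MB
    gpus: 1
  run: python train.py --epochs 10
```

With a file like this in the project directory, “riseml train” would submit the experiment and “riseml status” would report its progress — the data scientist never has to touch the cluster directly.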
RiseML — The Backend
The backend portion of the RiseML solution is a series of components installed on top of the Kubernetes platform. The backend automatically takes care of things like versioning, executing experiments, collecting logs, and more. It also exposes a fully addressable REST API, so it can be extended to fit individual needs if necessary.
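As a sketch of what “addressable” means in practice, the snippet below builds URLs for two hypothetical backend resources. The endpoint paths (/experiments/&lt;id&gt; and /logs) and the base address are my own illustrative assumptions, not RiseML’s documented API — the point is simply that a plain REST interface can be scripted against from any language.

```python
# Sketch of addressing a REST backend like RiseML's.
# NOTE: the endpoint paths below are hypothetical illustrations,
# not the documented RiseML API routes.

def experiment_url(base: str, experiment_id: int) -> str:
    """Build the URL for a single experiment resource (hypothetical path)."""
    return f"{base.rstrip('/')}/experiments/{experiment_id}"

def logs_url(base: str, experiment_id: int) -> str:
    """Build the URL for an experiment's collected logs (hypothetical path)."""
    return experiment_url(base, experiment_id) + "/logs"

if __name__ == "__main__":
    base = "http://riseml.example.internal/api"  # placeholder backend address
    print(experiment_url(base, 42))
    # Actually fetching would need a live backend, e.g.:
    #   import requests
    #   resp = requests.get(logs_url(base, 42))
```

A custom dashboard or CI job could use exactly this kind of call to pull experiment status into existing tooling.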
This critical component is what makes things relatively seamless. The data scientist doesn’t have to spend days or weeks trying to hack their own environment together. This becomes especially important when the lifecycle of the ML experiment comes into play.
Operations teams will love this, because it prevents fragmentation. Machine learning is inherently resource-intensive, and it’s this backend component that keeps workloads running in an orderly, sequential fashion, so they don’t drive up costs by consuming unnecessary resources across various environments.
RiseML — GPU enabled Kubernetes
Part of the genius of the RiseML solution is that it doesn’t add a custom orchestration engine into the mix. That means there’s no additional layer of complexity left for the ops person to maintain. GPU-enabled Kubernetes is possible on its own, but as of today it requires a lot of customization. RiseML has taken care of all that for you. Data scientists can install and deploy a fully capable cluster, including the additional RiseML components, with just one command (a follow-up blog post will cover this).
In closing, if your company is exploring or already using machine learning and you’re wondering how to do it at scale with minimal risk, RiseML is definitely something to check out. Adopting a new technology can be daunting and can seem to add a lot of risk. But it doesn’t have to be that way. RiseML has created a solution that minimizes the risks and challenges of bringing data science into your existing IT environment, while leveraging technologies you’re likely already using.
“The music” is changing, and solutions like this are rare. Leveraging something like RiseML may be exactly what you need to be the next Beethoven.
Source: Deep Learning on Medium