Ivy unifies existing deep learning frameworks, providing the building blocks for framework-agnostic functions, layers and libraries.
More links available at ivy-dl.org
In this post, we introduce Ivy, a new templated deep learning framework which supports Jax, TensorFlow, PyTorch, MXNet, and NumPy. We also introduce four Ivy libraries in the areas of mechanics, 3D vision, robotics, and differentiable environments. We’re very excited for you to try these out!
If you want to dive straight into some useful code, then we suggest you check out our set of applied libraries, which build on top of the Ivy framework. The README files and documentation are good starting points, and also explain how to run some demos. Visualizations of these demos are given in the Ivy Libraries section at the end of this post, so feel free to scroll straight down!
On the other hand, if you want to hear more about the motivation behind the Ivy framework itself, and its broader uses and potential, then keep reading!
Ivy maximizes the portability of deep learning codebases, by simplifying code sharing and extending code lifespan.
Diving straight into some code, we first show how Ivy can be dropped into any existing project, and used directly alongside your favourite framework:
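The snippet here was an image in the original post. As a rough, self-contained sketch of the pattern (using NumPy as the stand-in for your favourite framework, and a toy `concatenate` dispatcher in place of the real `ivy` module, since exact Ivy signatures vary by version):

```python
import numpy as np

# Toy stand-in for ivy.concatenate (hypothetical, for illustration):
# one unified call signature, dispatched to whichever framework module
# is passed in via `f`.
def concatenate(xs, axis, f=np):
    if f.__name__ == "numpy":
        return f.concatenate(xs, axis=axis)
    if f.__name__ == "torch":  # torch uses `dim` rather than `axis`
        return f.cat(xs, dim=axis)
    raise NotImplementedError(f.__name__)

# Native framework code and the framework-agnostic call sit side by side.
x = np.ones((1, 3))
y = np.zeros((1, 3))
z = concatenate([x, y], axis=0)
print(z.shape)  # (2, 3)
```

The same call site would work unchanged if `f` were a different backend module, which is the essence of the drop-in usage described above.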
We only consider ivy.concatenate above, but there are ~100 similar core tensor functions provided by Ivy.
Based on this short code sample alone, you may wonder, why is this helpful? Don’t most developers stick to just one framework for a project? This is indeed the case, and the benefit of Ivy is NOT the ability to combine different frameworks in a single project.
Ivy’s strength arises when we want to maximize the usability of our code.
With deep learning and gradient-based optimization increasingly finding their way into all kinds of different fields, let’s suppose you need to implement a set of functions in one of these fields for your own DL project. The topic could be anything, such as Bayesian inference, medical imaging, fluid mechanics, particle physics, economics, etc. For the purpose of this example, let’s assume you need to implement some functions for Bayesian inference.
With Ivy, you can implement these functions once, and immediately have a set of functions which simultaneously supports Jax, TensorFlow, PyTorch, MXNet, and NumPy. When open-sourcing these functions, the Ivy abstraction therefore makes them available to almost all deep learning developers at once.
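As a minimal sketch of what such a portable function might look like (a hypothetical pattern, not the literal Ivy API): the function body is written once against a NumPy-like backend module `f`, and because the modern frameworks expose near-identical tensor operations, the same body runs under each backend's namespace.

```python
import numpy as np

def gaussian_log_pdf(x, mean, std, f=np):
    """Log-density of a diagonal Gaussian, written once against a
    NumPy-like backend module f. Jax, TensorFlow, PyTorch, MXNet and
    NumPy all expose sum/log with this signature, so the same body
    works under each backend."""
    var = std ** 2
    return f.sum(-0.5 * f.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var))

x = np.zeros(3)
log_p = gaussian_log_pdf(x, mean=np.zeros(3), std=np.ones(3))
print(float(log_p))  # ≈ -2.7568, i.e. 3 × log N(0; 0, 1)
```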
From a personal perspective, this has the benefit of maximizing your audience, as well as your clones, forks, and stars (and all other GitHub metrics you hold dear!). From a community perspective, it has the benefit of accelerating collaboration, code sharing, and deep learning research.
Aside from supporting the existing frameworks, we are also committed to updating Ivy behind the scenes to remain fully compatible with future deep learning frameworks as they become available. So with your functions and libraries written in Ivy, you can rest assured that your codebase won’t need to be re-implemented when the next deep learning framework is released!
The number of open source deep learning projects has grown considerably in recent years, as can be seen from the rapidly increasing number of GitHub repos containing the term “Deep Learning” over time. These projects are written in a vast array of different frameworks.
While this is a wonderful thing for researchers and developers, when we also consider the speed at which the frameworks are evolving, code sharability is significantly hindered: projects become outdated in a matter of months if not rigorously maintained against the newest frameworks, and the newest framework versions.
For software development pipelines where rapid prototyping and collaboration are vital, this is a significant bottleneck. As new frameworks become available, framework-specific code quickly becomes outdated and obsolete, and users of these frameworks are constantly re-inventing the wheel.
If our desire is to provide a new framework which simultaneously supports all of the modern frameworks in a simple and scalable manner, then we must determine exactly where the common ground lies between them.
Finding common ground between the existing frameworks is essential in order to design a simple, scalable, and universal abstraction.
In the search for common ground, considering the language first, we can see that Python has become the clear frontrunner. Looking a little deeper at these Python frameworks, we find that they all follow the same core principles of operation, exposing almost identical core functional APIs, but with unique syntax and arguments. There are only so many ways to manipulate a tensor, and unsurprisingly these fundamental tensor operations are consistent between frameworks. The functions exposed by each framework follow very similar conventions to those of NumPy, first released in 2006.
A very simple and scalable abstraction therefore presents itself. Ivy is a thin templated and purely functional framework, which wraps existing deep learning frameworks to provide consistent call signatures and syntax for the core tensor operations.
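This wrapping idea can be illustrated in a few lines. The following is a hypothetical toy, not the real Ivy internals: a tiny backend object that maps one unified function name onto each framework's native name and convention, which the real Ivy does for roughly 100 core functions.

```python
import numpy as np

# Hypothetical illustration of a "thin templated wrapper": a backend
# object resolving unified names to each framework's native names.
class Backend:
    def __init__(self, module, aliases):
        self._module = module
        self._aliases = aliases  # unified name -> native name

    def __getattr__(self, name):
        # fall back to the native module, renaming where needed
        return getattr(self._module, self._aliases.get(name, name))

numpy_backend = Backend(np, aliases={"concatenate": "concatenate"})
# a torch backend would instead map {"concatenate": "cat"} and
# adapt the `axis` argument to torch's `dim`

x = np.ones((2, 2))
out = numpy_backend.concatenate([x, x], 0)
print(out.shape)  # (4, 2)
```

Because the abstraction is only a name-and-argument mapping over functions that already behave identically, it stays thin, and keeping it current with each framework release is a small, mechanical task.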
You may be thinking: what about Keras? This is a very fair question! We are by no means the first to have the idea of creating abstractions at the level of deep learning frameworks, and Keras certainly saw a huge amount of success when it was released. Multi-backend Keras is now deprecated, but while in active development it supported TensorFlow, CNTK, and Theano.
In contrast to Keras, we return to our principle of finding common ground, and argue that it is simpler, more scalable, and more maintainable to abstract purely at the level of the core APIs for tensor operations, as opposed to attempting to abstract the entire wider learning process.
Ivy simplifies and reduces the abstraction to just the level of the core tensor API, enabling complex and dedicated libraries to be built on top of Ivy in a very scalable and maintainable manner.
Keras focused on abstractions at the level of classes and models, which hides the underlying framework from the developer. This provided a very useful tool for early deep learning developers to prototype quickly. However, with this design, it is very difficult to maintain pace with fast evolving frameworks like TensorFlow, PyTorch, MXNet and Jax as they undergo significant architectural changes at the higher class levels.
Existing python DL frameworks create a layer of abstraction over the efficient pre-compiled backend C++ operations, but they still allow development at this lower C++ level for cases where more control is desired. In a similar way, Ivy abstracts over the DL frameworks themselves, but still allows development at the framework-specific level, and also the C++ level when required.
Given this flexibility in usage, and Ivy’s purely functional form, Ivy is well suited to supplement existing projects in existing frameworks, where developers can maintain full control of their training pipelines. The helpful classes adopted by PyTorch, MXNet, TensorFlow, and Flax can all still be fully utilized, with Ivy functions “dragged-and-dropped” only where necessary.
The low-level functional Ivy abstraction maximizes developer control.
Indeed, the Ivy libraries are predominantly targeted at users whose projects consist mainly of native code. For example, a user could construct a trainable PyTorch model with a 32³ voxel grid of learnt features with a single “drag-and-drop” from the Ivy vision library like so:
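The snippet here was an image in the original post. As a rough NumPy stand-in for the idea (a hypothetical helper, not the real `ivy_vision.coords_to_voxel_grid` signature): scatter per-point features into a dense 32³ grid, which your native framework can then wrap as a learnable parameter.

```python
import numpy as np

# Toy stand-in for an ivy_vision-style coords-to-voxel-grid step
# (hypothetical helper, not the real API): scatter per-point features
# into a dense res³ grid of accumulated features.
def coords_to_voxel_grid(coords, feats, res=32):
    grid = np.zeros((res, res, res, feats.shape[-1]))
    # normalize world coords into integer cell indices in [0, res)
    mins, maxs = coords.min(0), coords.max(0)
    idx = ((coords - mins) / (maxs - mins + 1e-8) * (res - 1)).astype(int)
    for (i, j, k), feat in zip(idx, feats):
        grid[i, j, k] += feat
    return grid

coords = np.random.uniform(-1, 1, (100, 3))  # 100 world-space points
feats = np.ones((100, 4))                    # 4 features per point
voxels = coords_to_voxel_grid(coords, feats)
print(voxels.shape)  # (32, 32, 32, 4)
```

In a real project, the resulting grid would be registered as a trainable tensor in your framework of choice, with the Ivy function sitting inline in the otherwise-native model code.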
This is not to say that the only way of using Ivy is to drag-and-drop individual functions from existing Ivy libraries. Simple network layers such as dense and convolutional are also supported by Ivy in functional form, with explicit input of the learnable parameters. The Ivy abstraction can then go deeper into your own project, with custom trainable pure-Ivy classes if desired:
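The class example here was an image in the original post. As an illustrative sketch of the shape such a class takes (a plain NumPy module stands in for the real Ivy backend handles, and the class itself is hypothetical): the backend is stored in a placeholder `self._f`, and the learnable parameters are passed around explicitly in functional style.

```python
import numpy as np

# Sketch of a custom "pure Ivy"-style trainable class (illustrative only).
class TinyNetwork:
    def __init__(self, f):
        # backend placeholder; in real Ivy this would be e.g. ivy.torch
        self._f = f
        # learnable parameters, held and passed explicitly
        self.v = {"w": np.random.randn(3, 1) * 0.1, "b": np.zeros(1)}

    def __call__(self, x, v=None):
        v = v or self.v  # parameters can be overridden per call
        f = self._f
        return f.tanh(f.matmul(x, v["w"]) + v["b"])

net = TinyNetwork(f=np)
out = net(np.ones((4, 3)))
print(out.shape)  # (4, 1)
```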
In the example above, different Ivy backends can be passed to the network constructor, such as ivy.torch, ivy.tensorflow, ivy.mxnd, ivy.jax, or ivy.numpy. See the “Writing Ivy” section of the Ivy documentation for more details on writing effective Ivy code, including the use of placeholders like self._f above.
While Ivy can be used to implement parameter-based trainable network classes as shown above, the current Ivy libraries focus on parameter-free computation.
If end-to-end training is the goal, these parameter-free functions must be written in a framework with automatic differentiation (AD) support. All modern deep learning frameworks support AD, and because Ivy abstracts over these frameworks, Ivy supports AD too, and can therefore be used to create fully differentiable libraries.
One of the key benefits of Ivy is its utility for creating differentiable libraries of parameter-free functions.
We have so far implemented four Ivy libraries in the areas of: Mechanics, 3D Vision, Robotics, and Differentiable Environments, with more in the pipeline. We run through some demos from these libraries now, and encourage you to pip install the libraries and run the demos yourself if you like what you see!
Ivy mechanics provides functions for conversions of orientation, pose, and positional representations, as well as transformations and some other more applied functions. The orientation module is the largest, with conversions to and from all Euler conventions, quaternions, rotation matrices, rotation vectors, and axis-angle representations.
We show demos of the methods ivy_mech.target_facing_rotation_matrix and ivy_mech.polar_to_cartesian_coords below.
Ivy vision focuses predominantly on 3D vision, with functions for image projections, coordinate frame transformations, forward warping, inverse warping, optical flow, depth generation, voxel grids, point clouds, and others.
We show demos of the methods ivy_vision.coords_to_voxel_grid and ivy_vision.render_pixel_coords below.
Ivy robot provides functions and classes for gradient-based trajectory optimization and motion planning. Classes are provided both for mobile robots and robot manipulators.
We show demos of the methods ivy_robot.sample_spline_path, ivy_robot.Manipulator.sample_links, and ivy_robot.RigidMobile.sample_body in applications of drone and manipulator gradient-based motion planning below.
Ivy gym provides differentiable implementations of the control environments provided by OpenAI Gym, as well as a new “Swimmer” task which illustrates the simplicity of creating new tasks. The differentiable nature of the environments means that the cumulative reward can be directly optimized in a supervised manner, without the need for reinforcement learning. Ivy gym opens the door for intersectional research between supervised learning, trajectory optimization, and reinforcement learning.
We show demos of each of the environments cartpole, mountain_car, pendulum, reacher, and swimmer, solved using both direct trajectory optimization and supervised learning via a policy network. In the case of trajectory optimization, we optimize for a specific starting state of the environment, whereas for policy optimization we train a policy which is conditioned on the environment state, and the starting state is then randomized between training steps.
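The idea behind direct trajectory optimization in a differentiable environment can be sketched in a few lines. This toy 1D point-mass environment is purely illustrative (NumPy with hand-derived gradients standing in for a real AD framework): because every environment step is differentiable, the action sequence can be optimized by plain gradient descent on the rollout cost, with no reinforcement learning involved.

```python
import numpy as np

# Toy differentiable "environment": a 1D point mass with dynamics
# x' = x + a. Cost = squared distance to a target, summed over the
# rollout. Hand-derived gradients stand in for a real AD framework.
target, horizon, lr = 1.0, 5, 0.05
actions = np.zeros(horizon)

for _ in range(500):
    # forward rollout through the environment
    xs, x = [], 0.0
    for a in actions:
        x = x + a
        xs.append(x)
    # backward pass: d(cost)/d(a_t), cost = sum_t (x_t - target)^2;
    # action a_t influences every state from step t onward
    grads = np.zeros(horizon)
    for t in range(horizon):
        grads[t] = sum(2 * (xs[s] - target) for s in range(t, horizon))
    actions -= lr * grads  # gradient descent on the action sequence

final_x = sum(actions)
print(round(final_x, 3))  # converges to the target position 1.0
```

The optimum jumps straight to the target on the first step and then stays put; the same direct-optimization recipe, with AD supplying the gradients, is what the trajectory-optimization demos above perform in the full environments.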
Despite this slightly optimistic sub-heading, we hope it’s not too far from the truth (otherwise, please feel free to be as critical as you like in a comment!). So, what next?
Well, this depends on whether you see yourself more likely as an Ivy library user or an Ivy library contributor in the short term.
If you see yourself more as a user, then the first step would be to pip install all the Ivy libraries on offer, give the demos a run, check out the online docs, and see if any of the provided functions look to be of use for your own projects.
If you see yourself more as a contributor, meaning you would like to implement your own portable Ivy library, then we suggest you pip install Ivy, and check out the page “Writing Ivy” in the Ivy docs. This page explains the best coding practices for creating your own portable Ivy library. We will include links to any community-written Ivy libraries in our official docs!
We have very high aspirations for the role Ivy could play in the intersectional deep learning landscape, but this is dependent on fostering a community of Ivy developers with unique sets of expertise.
We therefore call upon any developers who see utility in the Ivy abstraction to consider implementing the more general parts of their own projects as Ivy functions, either in brand new libraries, or as pull requests to existing ones.
Once a function is in an Ivy library, it will remain there for all to use; we will ensure it stays compatible with the latest frameworks and versions, and you will be credited for its inclusion.
If contributing to Ivy libraries still doesn’t sound like your cup of tea, then not to worry! We hope the Ivy libraries currently on offer can be of use to you.
For our final point, we want to reiterate that if you have any questions, thoughts, or critiques about Ivy, please don’t hesitate to leave a comment. We will strive to reply to all of these!
Thanks a lot for sticking with this blog post,
The Ivy Team.