Wrapping Up Stage 1: An Honest Reflection
After spending the first half of our summer immersing ourselves in the field of deep learning through numerous online courses, professor lectures, YouTube walkthroughs, and random tutorials, we still considered ourselves rookies who had only grazed the surface of a complex field.
However, this was exactly what we expected. We knew where our starting line was, and we knew how limited our time was for this journey.
While we now understood the fundamental concepts and logic behind ML and neural networks, such as the definition of supervised learning, how to build a convolutional neural network, and what forward propagation and gradient descent are, there was just so much to learn in so little time. This was especially true considering where we stood at the beginning of this learning process, when we were asking ourselves, "What even is machine learning?"
Moreover, deep learning is a math-intensive field. For two freshmen who had only completed basic fundamental engineering math courses, the advanced calculus, linear algebra, and probability were quite a headache to digest. Nonetheless, we genuinely enjoyed engineering maths in university and simply treated this difficulty as the perfect chance to explore an area of our passion while keeping our mathematics skills sharp during the quarantine season.
Now, time for a massive thank you to Grant Sanderson of 3Blue1Brown! This is a YouTube channel with superb visual-first explanations for a variety of mathematical and computer science topics. 3Blue1Brown is THE channel we, as first-year engineering students, referred to every other day for the most concise, beautifully demonstrated, and clearly explained concepts in calculus and linear algebra. Not surprisingly, TensorFlow also refers novices to these videos in their online resources section, and so 3Blue1Brown was our go-to place whenever our brains were knotted up with complex math.
On a similar note, if anyone reading this is also a beginner looking to explore the area of ML, we highly recommend starting your journey on the official TensorFlow website. There are several written tutorials, interactive Google Colab notebooks, and suggested resources to help you get started. The "ML Zero to Hero" video series presented by Laurence Moroney on the TensorFlow YouTube channel was my personal favorite.
Being resourceful and efficient learners was key to succeeding in our fellowship, and we found these two resources the most effective at conveying knowledge to someone with our level of understanding.
By this stage, we finally felt confident enough to delve into common ML tools, TensorFlow being one. It was now time to put our knowledge to the test, and this is where TensorFlow's Neural Structured Learning (NSL) framework comes in. Our fellowship required us to 1. learn and 2. contribute to open-source communities. Stage 1 covered much of the first step; stage 2 was our time to apply that knowledge while making meaningful contributions to the NSL community.
Neural Structured Learning: What Is It?
As mentioned above, TensorFlow's public learning material is perfect for audiences of all levels. Their NSL YouTube series presented by Arjun Gopalan and Da-Cheng Juan is no exception. Each video includes engaging graphics and animations that greatly enhanced my learning. I also personally loved watching Arjun and Da-Cheng explain concepts visually, because it provided a sense of "connection" between me as a learner and them as teachers. YouTube tutorials on similar topics can be uninteresting and hard to take in for a multitude of reasons, such as poor clarity, long monotone narration, or hard-to-follow diagrams. The NSL video series, however, was excellently put together.
This four-part learning series is what we used to familiarize ourselves with the framework and to further solidify our deep learning knowledge by forming connections between our previously established knowledge base and the NSL paradigm, summarized below:
Neural networks are an incredibly effective approach for a multitude of machine learning tasks. In the typical case where an image requires classification, it is fed through the layers of neurons in a neural network. Each layer activates in turn, and together they form numerous activation paths that ultimately determine the image's classification. With Neural Structured Learning, neural nets are given structured signals in addition to the input image features, enabling better training. In other words, the NSL framework uses the structured signals that exist among collections of samples to improve model quality and robustness.
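To make the paradigm concrete, here is a minimal numpy sketch of the kind of training objective NSL describes: a supervised loss plus a penalty that pulls each sample's representation toward that of its structured neighbor. This is only our illustration of the idea, not the actual NSL API; the function and variable names here are our own.

```python
import numpy as np

def nsl_style_loss(supervised_loss, embeddings, neighbor_embeddings, multiplier=0.1):
    """Illustrative NSL-style objective (not the real API): the usual
    supervised loss plus a neighbor term that encourages samples linked
    in the structure to have similar embeddings."""
    # Mean squared distance between each sample and its structured neighbor
    neighbor_loss = float(np.mean((embeddings - neighbor_embeddings) ** 2))
    return supervised_loss + multiplier * neighbor_loss

# Toy usage: identical neighbors add no penalty; distant neighbors do.
emb = np.ones((4, 8))
print(nsl_style_loss(0.5, emb, emb))                # -> 0.5
print(nsl_style_loss(0.5, emb, np.zeros((4, 8))))   # -> 0.5 + 0.1 * 1.0 = 0.6
```

In the real framework, the structured signals can come from an explicit graph over samples or be generated adversarially, but the shape of the objective is the same: a task loss regularized by a neighbor-similarity term.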
Sound interesting? If you have some time, be sure to check out their 45-minute presentation on “Neural structured learning in TensorFlow” from TF World ’19. https://www.youtube.com/watch?v=2Ucq7a8CY94&t=2s
The Finish Line: Our Contributions to the NSL Community
The Project Aqua Fellowship was meant to finish by September; however, at the beginning of August we had a panicked realization. In the past few months, we had paved a path in this field to continue our learning exploration and potentially spark a new career interest. While we had fulfilled part 1 of the fellowship and successfully built a strong basis of deep learning knowledge with applied practice, our stumped realization was: "What could we possibly contribute to the community as NSL rookies?" After looking at some open issues in the GitHub repository, we found it difficult to understand what each issue even described. How were we supposed to discover similar issues with the framework when we had only just recently understood what NSL was and how it was used?
Luckily, Denys Linkov, the fellowship lead and our mentor, who had guided us through every week of the process, once again helped us find a direction. While we might not be able to discover and fix bugs in the source code, open-source contributions of any kind are meant to benefit the NSL community. We, along with others in the NSL YouTube audience, had noticed a lack of accuracy and depth in some of the documentation, and we decided this was the most feasible contribution we could provide. The community had many articles, videos, and interactive tutorials, but what was lacking was the connection between them. Although each video description linked to its corresponding tutorial, a complete beginner might find that the concepts taught in the videos were not well integrated into the tutorials. The different resources were not cohesively presented for rookie learners.
Based on this discovery, we wrapped up the Project Aqua Fellowship by spending our last few weeks submitting PRs that improved the NSL tutorial documentation in the ways outlined below:
- Embedded conceptual references from other NSL resources
- Expanded on terminology/concepts that beginners may find difficult
- Added useful links to related resources to further reader understanding
- Improved overall reader clarity
Feel free to check out the different contributions in our PRs here: Mine, Zoey's
And that is a wrap! With our newfound knowledge of deep learning and our journey from zero, we applied our background and beginner perspective to help curate tutorials that were truly "beginner-friendly", hopefully improving the learning experience of others in the Neural Structured Learning community. As for us, we had one incredibly productive summer with this fellowship and, moving forward, can't wait to discover what awaits us in this ever-evolving field of Artificial Intelligence.