In the field of software development, it seems almost necessary to stay up to date with the latest trends, technologies, and even programming languages in order to take advantage of their benefits and stay ahead of the competition. This is especially true in web development, where a shiny new JavaScript framework seems to be released every other week. With that in mind, I decided to pick up Vue.js over the weekend to learn something new and brush up on my web development skills before my upcoming internship.
As I pulled myself out of the vortex of Shark Tank clips on YouTube and decided to finally start, a thought crossed my mind.
How do I usually learn the basics of a new technology, and is it the right way?
My process of learning a new technology X usually goes something like this:

- Someone tells me about how great X is or I stumble upon it online.
- I read up on the benefits of X and its popularity.
- I open up X’s official documentation and navigate to the “Get Started” section (see the snippet after this list).
- I look for introductory online courses on X (Udacity, Udemy, edX, Coursera), which usually end up playing in the background as I go through the documentation.
- Once I feel like I’ve read/watched enough, I build. I think of a small application that I can build using X and get started on it.
- I continue the cycle of Read ⇄ Build until I feel satisfied with the amount I’ve learnt as well as what I’ve built.
This process, of course, relies on my foundational knowledge of Computer Science to help me learn the basics of X in an accelerated fashion (since most software principles remain the same from language to language and framework to framework).
Sometimes, an additional step in my learning process is conducting an introductory workshop on X for students at my university. I strongly believe in the concept of docendo discimus (Latin for “by teaching, we learn”) and usually end up clarifying my own understanding of X as I prepare the material and conduct dry runs of the workshop.
Obviously, this entire process is only meant to achieve an initial exposure to X. Further depth requires more experience working with X and a deeper understanding of X’s features & inner workings.
But is this the right way to learn? Let’s look at the pros & cons.
Pros
- This learning process is efficient (barring one’s procrastination habits) and doesn’t usually get boring.
- There is a constant application of the concepts learned and there is an end goal to work towards.
- The final application built is “proof” of one’s newly learned skills.
Cons
- It’s not that easy to come up with ideas for applications to build using X when you’re still learning. There are only so many todo list or note-taking applications one can build in a lifetime.
- There is a tendency for Google searches and StackOverflow to take precedence over documentation during the “building” phase, which is not always bad but can lead to a shallow understanding of X.
- The lack of deadlines in self-learning means it generally requires a high level of motivation.
In conclusion, learning and building concurrently makes the process less mundane and allows for creativity, but a conscious effort has to be made to learn “properly” and avoid hacky fixes & workarounds.
But is this the right way to learn? Honestly, I still don’t know, but it has worked alright for me so far, so I’ll stick to it, for now at least.