Public Learning

project-centric learning for becoming a software engineer

Week 4: A Recap

Forcing Inspiration

Since the beginning of January, I had kept a list of ideas. They were rough, spitballed notions that came to mind, each supposed to be a starting point for extending my thinking in a certain direction: inspirations that could lead to a full project idea worth pursuing. Items on that list were, for example, “Collaborative, real-time X” or “self-hosted, personal cloud backup/password manager/whatever”.
As you can see, nothing of value could ever have been gained from that list; those items were way too vague to be of use. With the start date of my project set in stone (the 27th of January, today!), I had to actually make an effort to find something suitable. And while last week might have been disappointing from a pure learning standpoint (with the fewest hours spent on actual programming and reading, by a huge margin), the numbers don’t tell the whole story. I’m not going to lie: banging one’s head against a wall in the hopes that a brick might come loose is no fun, but that’s how most of last week felt.
Since finding a good project idea will be a concern for anyone attempting a similar undertaking, I want to take the time to present my approach and how I ultimately came up with something that I hope is worth the time investment.

Projects Galore

Launch School Capstone projects are always problem-centric: they identify a problem that does not have an obvious solution. That sounds rather simple, but to get that far, you need to dig deep into a potential problem space and hope for a problem to emerge. It’s a game of chance - and the reason why I decided to move away from focusing on just one particular area (distributed systems). The upfront cost of deeply familiarizing myself with that problem space was simply too high, given the uncertainty of actually finding something worth solving.
My broader learning approach instead exposed me to a few key technologies (like NoSQL data stores, containers, or React) that can each serve as a tool for solving one part of a bigger problem, which is undoubtedly going to be helpful. But the elephant in the room started trumpeting loudly and grew harder and harder to ignore: I needed an idea, and I needed it now!

A useful piece of advice I received about finding a project idea was: “Don’t be afraid to initially copy something existing, and then spin it into your own direction”. This ties into another great piece of advice: “Existing competitors validate the usefulness of an idea”.
For too long, I had tried to pull some revolutionary idea out of thin air. That approach only produced the list mentioned above and was clearly not working. So I changed plans: I started looking for something I could copy. Not in the sense that I would just replicate some existing piece of software, but with the intent of building upon something that’s already there. For that purpose, I started digging through the “Show HN” section on Hacker News.
I’m not exaggerating when I say that I probably spent a good ten or twelve hours sifting through all those crazy and smart, big and small, wonderfully nerdy and deeply practical side projects posted there. Did you know there is a RESTful Doom, allowing you to control Doom by sending HTTP requests? Now you do!
I didn’t count much of that time toward my study hours, but I was exposed to lots of cool projects. The great thing about Show HN is the immediate feedback visible in the upvote count and the comment section. Sure, it’s a popularity contest, but it clearly showed me what struck a nerve and what didn’t. And after those many hours and a few more, I had a list that was far more substantial than my previous one.

Zeroing In

Here’s what was on my list after a bit of pruning and consideration, along with the reasons why I ultimately didn’t go with each idea.

Real-time, Collaborative Maze Solving Game

That is the only item that more or less survived from my initial list. Fellow Capstone student Ian Evans had reminded me of the wonderful book Mazes for Programmers (Pragmatic Bookshelf, 2015) and the idea of a collaborative maze-solving game - incidentally, a general idea I had been toying with for a while myself. I like procedural generation and I have a soft spot for game development, so it’s no wonder I was considering something game-like as my big project.
Potential negative connotations during my job search aside (“A game? What a waste of time!”), the project’s scope felt too large: the maze generation and solving algorithms were probably the simplest part, but building a multiplayer game around them comes with a big array of problems, some of which resemble whole Capstone projects in themselves. In the end, it felt too risky. A game needs to be polished enough to be impressive, and there were many issues that I had no idea how to even approach (client-server sync, lag compensation, graphical representation [WebGL?], animations, etc.).

Self-Hosted, Privacy-Focused Web Analytics

This is an idea I seriously considered for a while. The basic premise was an easy-to-use analytics tool that provides both the small script to embed in your website and the actual backend that stores and renders comprehensive statistics about your site’s visitors. Think Google Analytics meets GDPR: something that values privacy and preserves the anonymity of website visitors - no tracking, no ad targeting, just analytics.
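Just to illustrate the embeddable half of that idea: the script could be as small as a single beacon call that reports a pageview and nothing else - no cookies, no fingerprinting. A tiny hypothetical sketch (the endpoint URL is made up):

```typescript
// Hypothetical embed snippet: report the page path on load, nothing else.
// No cookies, no fingerprinting; the /collect endpoint URL is made up.
window.addEventListener("load", () => {
  navigator.sendBeacon(
    "https://analytics.example.com/collect",
    JSON.stringify({
      path: location.pathname,
      referrer: document.referrer || null,
    })
  );
});
```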
I still think it’s a neat idea, but it felt a tad too uninspired and frontend-oriented. Web analytics is, unfortunately, not something I can get too excited about.

A Terminal-To-GIF Converter and Sharing Solution

From a technical perspective, this was one of the most enticing ideas I had. Unfortunately, I quickly found out that there are very professional and mature open-source solutions like asciinema that do everything I intended to do (and much more).
The idea was to write a CLI application that could record terminal input and output and create an easily shareable video or GIF, e.g. to demonstrate a script or share some shell wizardry. But not only were there existing alternatives (and no unique focus for mine), it would also have been the project most removed from web development, making it ill-suited as a portfolio item for my job search.

A Hosted Key-Value Store

While browsing Show HN, I stumbled upon EasyDB, a one-click, ephemeral key-value store for testing purposes or small projects. I had thought about creating my own Redis clone before, taking inspiration from the Capstone project CorvoStore. I figured I could improve on a few of its limitations and make a hosted version akin to EasyDB, although there was a good chance that I was underestimating the complexity (it was a full Capstone project from two people, after all).
But I kept thinking about the general idea, and it led me to my actual project idea …

An Easy-To-Use, Ephemeral API-As-A-Service for Mocking, Testing and Learning

I kept thinking about EasyDB and found myself wondering: what if I took the same approach, but with a RESTful backend API instead? My initial idea was along the lines of providing a one-click REST API with no predefined endpoints; instead, it would create them on the fly from your HTTP requests: make a POST request to /users and it saves the request body as a new user. Issue a GET request to /users afterwards and it returns a collection containing that one previously created user.
The API could make educated guesses to increase the usefulness of that schema-less approach. If there’s an id field in the request body when creating an item, chances are it should serve as an identifier to access that particular item later. So if the POST request for user creation includes an id field with the value 1, that item should be returned by a GET request to /users/1.
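To make that behavior concrete, here’s a minimal sketch of how such a schema-less handler might work. This is just my own illustration of the idea, not the final design: it assumes Express and a plain in-memory store, and the id-guessing rule is exactly the educated guess described above.

```typescript
// Minimal sketch of the schema-less endpoint idea, assuming Express
// and a plain in-memory store (one Map per collection).
import express from "express";

const app = express();
app.use(express.json());

const collections = new Map<string, Map<string, unknown>>();

// POST /:collection -> store the request body as a new item.
app.post("/:collection", (req, res) => {
  const name = req.params.collection;
  const items = collections.get(name) ?? new Map<string, unknown>();
  collections.set(name, items);

  // Educated guess: if the body carries an id field, use it as the key;
  // otherwise fall back to a simple numeric key.
  const body = req.body as Record<string, unknown>;
  const id = body.id !== undefined ? String(body.id) : String(items.size + 1);
  items.set(id, body);

  res.status(201).json(body);
});

// GET /:collection -> return every item in the collection.
app.get("/:collection", (req, res) => {
  const items = collections.get(req.params.collection);
  res.json(items ? [...items.values()] : []);
});

// GET /:collection/:id -> return a single item by its (inferred) id.
app.get("/:collection/:id", (req, res) => {
  const item = collections.get(req.params.collection)?.get(req.params.id);
  if (item === undefined) {
    res.sendStatus(404);
  } else {
    res.json(item);
  }
});

app.listen(3000);
```

With that in place, a POST to /users with the body { "id": 1, "name": "Ada" } followed by a GET to /users/1 behaves exactly as described above.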

Coupled with the idea of a one-click solution (click a button and get your API root path), this sounded like a fun project. It would be ephemeral like EasyDB, with any data persisted for somewhere around 3 to 7 days; some rate limiting would make sense, as would other restrictions (e.g. a maximum of 100 items per collection). So if you’re developing frontend code and don’t want to worry about the backend yet, you could just use this API to keep developing and get real data in and out of the system.
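None of those numbers are settled yet, but sketched out as configuration, the restrictions I have in mind might look something like this (all names and values are placeholders, not final decisions):

```typescript
// Hypothetical restrictions for each ephemeral API instance.
// Every value here is a placeholder I still need to validate.
const limits = {
  ttlDays: { min: 3, max: 7 },  // data is deleted after 3 to 7 days
  maxItemsPerCollection: 100,   // keep individual collections small
  maxRequestsPerMinute: 60,     // simple per-API rate limit
  maxBodySizeBytes: 16 * 1024,  // reject oversized payloads
} as const;
```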

There are, of course, a few existing projects that do something similar. Mock APIs exist that can either serve some generic data (users, posts, comments, …) or return specific data for certain endpoints based on predefined rules. MockAPI is one of them, allowing neat rule-based endpoint creation. Mockoon is a desktop application with tons of features for mocking an API and inspecting requests, while Mocky is a lightweight solution to define one-off endpoints with a particular response. Beeceptor is another one, combining both endpoint mocking and payload inspection.
Some of those are paid services with a free tier, but none provide the ease of use I have in mind. Sure, there are lots of API endpoints that can’t be intelligently mocked (like /login), but if you’re just writing your first Todo app with React, chances are that /items is all you need to get started.

Here Be Dragons

There is a ton of work ahead to fully validate this idea and solve a few core problems, but this is where my exploration starts today.
There is the question of how each API should be modeled: as one monolithic service that serves and stores data by inspecting the full request path? Or should each API be an independent service, orchestrated by a reverse proxy that routes incoming requests? In addition to that, how should the data be stored? With the kind of limitations I have in mind, one central PostgreSQL database might be all that’s needed, but there are alternatives to consider.
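To give an idea of the central-database option, here’s a rough sketch of how one shared PostgreSQL table could keep every API instance’s data schema-less via a JSONB column. The table layout and the node-postgres usage are assumptions for illustration, not a settled design:

```typescript
// Sketch of the "one central PostgreSQL database" option, using
// node-postgres. All API instances share a single items table;
// the JSONB column keeps the stored data schema-less.
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from PG* env vars

// Hypothetical table: one row per item, scoped by API instance and
// collection. created_at makes the 3-to-7-day expiry a simple
// periodic DELETE ... WHERE created_at < now() - interval '7 days'.
const schema = `
  CREATE TABLE IF NOT EXISTS items (
    api_id     uuid        NOT NULL,
    collection text        NOT NULL,
    item_id    text        NOT NULL,
    body       jsonb       NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (api_id, collection, item_id)
  )
`;

export async function init(): Promise<void> {
  await pool.query(schema);
}

// Insert or overwrite an item; node-postgres serializes the body
// object to JSON for the jsonb column.
export async function saveItem(
  apiId: string,
  collection: string,
  itemId: string,
  body: object
): Promise<void> {
  await pool.query(
    `INSERT INTO items (api_id, collection, item_id, body)
     VALUES ($1, $2, $3, $4)
     ON CONFLICT (api_id, collection, item_id)
     DO UPDATE SET body = EXCLUDED.body`,
    [apiId, collection, itemId, body]
  );
}
```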

Those are just a few of the questions I’ll try to answer this week, along with probably building a quick prototype to see how my schema-less idea pans out in practice. I’m pretty excited to have a meaty idea to sink my teeth into, and I hope that I can report substantial progress in next week’s recap.

Summary

😄

Time spent this week: 24.5 hours¹


  1. For the sake of brevity (and because I really want to start working), I didn’t report much on my actual learning last week. The number of hours reflects my time spent on Docker concepts and usage, as well as a quick WebSockets introduction. However, it does not include time spent on finding the project idea, which was hard to track: browsing Hacker News and sitting in my chair, thinking and looking up existing solutions for an idea, didn’t feel like something that should count toward my learning time. I also took most of the weekend off and had to provide technical support at my previous job for almost a full day, so I was a bit tight on time last week. ↩︎