Public Learning

project-centric learning for becoming a software engineer

Week 13: A Recap

With A Little Help From My Server-Sent Events

With a healthy dose of discipline (and two tablespoons full of grit), I have had a productive week working on 200 OK. First, quite some time was spent on finalizing my approach for the live debugging of the request/response cycle. Let me quickly summarize my intent:
When you have a black box-like backend API, there are no error logs or response messages tailored to your use case. While I try to be as helpful as possible with concrete error messages for failed requests, some of them might not make sense to a user, be it because the request was made with a different intention than expected or because of an undetected mistake in the user’s application code. So it makes sense to provide tools for debugging the request/response cycle, allowing a user to identify and correct any mistakes.

For the backend, this means a rather complicated flow of information. Ever since I decided to completely separate my frontend application (including - confusingly, I know - its backend) into a new Node application, there has been no direct communication path between the API backend (located here) and the user-facing administration app (here). Let’s illustrate the current architecture:

[Diagram: 200 OK information flow]

Let’s assume that there is Telemachus, an aspiring web developer who has created an API named tactful-tesla with 200 OK. Telemachus has now written a fancy social media app with React that uses this API as its backend. But the code for displaying comments in that app produces some strange results and Telemachus wants to find out why. Currently, nginx acts as the reverse proxy that distributes requests to 200ok.app depending on whether they are made to a subdomain (like Telemachus’ tactful-tesla.200ok.app subdomain) or not (like the configuration frontend on 200ok.app). So when Telemachus’ app makes a request to tactful-tesla.200ok.app/users/2/comments/42, both the request and the response sent by the responsible Node application should be available for inspection by Telemachus in the frontend.
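
To make that routing a bit more concrete, here is a rough sketch of how such a subdomain split could look in an nginx config (the ports are illustrative, not my actual production setup):

```nginx
# Any subdomain (like tactful-tesla.200ok.app) is proxied to the API backend.
server {
    listen 80;
    server_name *.200ok.app;

    location / {
        proxy_pass http://localhost:3000;   # assumed port of the API backend
        proxy_set_header Host $host;        # keep the subdomain visible to Node
    }
}

# The bare domain serves the configuration frontend.
server {
    listen 80;
    server_name 200ok.app;

    location / {
        proxy_pass http://localhost:4000;   # assumed port of the frontend app
        proxy_set_header Host $host;
    }
}
```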
As you can see, the only connection between that frontend and the backend API application is the database. So the first idea would be to store both requests and responses in MongoDB. That might even work, but it’s not an optimal solution. The backend already has to perform quite a few Mongo operations, so I’m hesitant to add another one. Also, most of the requests and responses would be stored without anyone ever inspecting them, creating the need for a regular cleanup operation to prevent old data from piling up inside MongoDB.

Now, ever since my initial Capstone learning phase in January, I have been enamored of Redis, an in-memory key-value store that is blazingly fast and very lightweight compared to other data stores. Redis also has support for a publish/subscribe mode: you can publish information to a specifically named channel, and any client can subscribe to such a channel and receive the corresponding information. No persistence is involved in that mode: if there is no subscriber reading from a channel at the moment of publishing, the message is simply discarded.
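To illustrate the mechanism, here is a minimal publish/subscribe example using the node_redis client (shown with the classic v3 callback API; the channel name is just an example):

```js
// Minimal publish/subscribe with the node_redis client (v3-style callback API).
const redis = require('redis');

const subscriber = redis.createClient();
const publisher = redis.createClient();

// A connection in subscriber mode can only subscribe, so a second client publishes.
subscriber.on('message', (channel, message) => {
  console.log(`Received on ${channel}:`, JSON.parse(message));
});

subscriber.subscribe('tactful-tesla');

// If nobody is subscribed at this moment, Redis simply drops the message.
publisher.publish('tactful-tesla', JSON.stringify({ hello: 'Telemachus' }));
```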
This is a perfect use case for what I intended to do, and it’s exactly how my approach works. The API backend publishes each request/response cycle to a Redis channel named after the API, so tactful-tesla in our example. When Telemachus decides to debug his app’s requests, he logs into the configuration frontend. At that point, that Node application subscribes to the API’s Redis channel and receives every request/response cycle without noticeable delay. Really, Redis is so fast that adding this additional step had no measurable impact on the response times of either backend.
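
On the API side, the publishing step can be as small as a piece of Express middleware. The following is only a sketch with an assumed payload shape, not my exact implementation:

```js
// Sketch of an Express middleware in the API backend: once the response has
// finished, publish the cycle to a channel named after the API's subdomain.
const redis = require('redis');

const publisher = redis.createClient();

function publishCycle(req, res, next) {
  res.on('finish', () => {
    const apiName = req.subdomains[0];   // 'tactful-tesla' for tactful-tesla.200ok.app
    if (!apiName) return;

    const cycle = {
      request: {
        method: req.method,
        path: req.originalUrl,
        headers: req.headers,
        body: req.body,
      },
      response: {
        statusCode: res.statusCode,
        headers: res.getHeaders(),
      },
      timestamp: Date.now(),
    };

    // Fire and forget: without a subscriber, Redis discards the message.
    publisher.publish(apiName, JSON.stringify(cycle));
  });

  next();
}

module.exports = publishCycle;
```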

The only question left was: how do I transport the information from the frontend Node application to Telemachus’ browser? There is always the option of using WebSockets, but that creates quite a bit of overhead: the server needs to handle the WebSocket protocol alongside plain HTTP, and the client needs a library to communicate with it properly. Plus, it would be overkill for this use case: WebSockets provide a duplex connection, but there is no need for the browser to send anything to the server.
Enter Server-Sent Events, a rather underappreciated way of creating a unidirectional data flow from a server to a browser client. Server-sent events leverage a normal HTTP connection and the special text/event-stream content type to let the server push data to the client whenever it likes, even at long intervals. The great thing about it is the simplicity: since it uses a normal HTTP connection, the only Node-side addition is an extra route that sets the appropriate headers and then forwards everything received from the Redis subscriber.
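Stripped down to its essence, such a route looks roughly like this (the route path and payload handling are simplified for illustration):

```js
// Sketch of the extra route in the frontend's Node app: open an event stream
// and forward whatever the Redis subscriber receives on the API's channel.
const express = require('express');
const redis = require('redis');

const router = express.Router();

router.get('/apis/:apiName/events', (req, res) => {
  // These headers turn the response into a server-sent event stream.
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  res.flushHeaders();

  const subscriber = redis.createClient();
  subscriber.subscribe(req.params.apiName);

  subscriber.on('message', (_channel, message) => {
    // SSE wire format: "data:" line(s) terminated by a blank line.
    res.write(`data: ${message}\n\n`);
  });

  // Close the Redis connection when the browser drops the stream.
  req.on('close', () => subscriber.quit());
});

module.exports = router;
```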
On the frontend, the situation is similarly simple. With a browser feature called EventSource, the client can easily receive those events, add listeners for them and display the data sent by the server. There is no need for any library; it’s just vanilla JavaScript with a pretty small footprint.
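The browser side is equally short; roughly something like this (the endpoint and the #debug-log element are placeholders):

```js
// Browser side: subscribe to the stream and render each cycle as it arrives.
const source = new EventSource('/apis/tactful-tesla/events');

source.addEventListener('message', (event) => {
  const cycle = JSON.parse(event.data);
  const entry = document.createElement('pre');
  entry.textContent = `${cycle.request.method} ${cycle.request.path} -> ${cycle.response.statusCode}`;
  document.querySelector('#debug-log').appendChild(entry);
});

source.addEventListener('error', () => {
  // EventSource reconnects on its own; this is just for visibility.
  console.warn('Event stream interrupted, retrying ...');
});
```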

I have become a huge fan of solutions that add a minimal amount of complexity, and server-sent events fit right into that mode of thinking. I still have a few issues with the persistent HTTP connection on the live version of 200ok.app (which is why the feature is currently still disabled there), but other than that, the whole live debugging works flawlessly.

Caught In React

Regular readers of my weekly recap will know of my pain with client-side code. The actual dynamic frontend portion of 200 OK is small compared to the rest, but it ate huge chunks of my time budget last week. Initially, I envisioned myself using React, mainly because I already had some knowledge of it and because it is a marketable skill in today’s job landscape. Just two weeks ago, I renounced those plans and instead jabbered about using vanilla JavaScript. Being well aware of the comedic effect, I will now renounce my renouncement. What happened, you might ask?

Well, fellow Launch School student Simon Dein has offered his help with the frontend code concepts, and I happily agreed to let him investigate the possibilities of using JavaScript without any framework. At that point I had created simple placeholder scripts for both the front page behavior and the dashboard, and both had quite a bit of code smell, even at their relatively small size. Interactivity and lots of DOM manipulation create a need for some structure if one doesn’t want to get lost in a bad spaghetti mix of JavaScript and HTML.
Simon fearlessly dove into finding a potential solution and quickly stumbled upon Web Components and the principles behind them. In theory, web components sound awesome, like an equivalent to React components (but with a couple hundred kilobytes less JavaScript). Unfortunately, they do not seem to be well-suited for my purposes. They are encapsulated building blocks, consisting of a shadow DOM (an internal DOM hidden from outside manipulation) and a tight scope, restricting all CSS and JS code to the component without anything leaking out (or in). But that closed scope shuts out my global Bulma stylesheets as well: they cannot style anything inside a component and would only take effect if the shadow DOM were removed. Without the shadow DOM, the normal CSS cascade applies again, but then the whole templating gets messed up. And the core problem is still not solved: how to achieve interactivity between such different components.
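For illustration, here is what such an encapsulated component looks like in its minimal form (the element name is made up); note how styles inside the shadow root stay inside, and global stylesheets stay outside:

```js
// A minimal custom element with a shadow DOM. Global stylesheets like Bulma
// never reach the markup behind attachShadow().
class RouteCard extends HTMLElement {
  constructor() {
    super();
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>
        /* Scoped: outside CSS cannot override this, and it cannot leak out. */
        .card { border: 1px solid #ccc; padding: 1rem; }
      </style>
      <div class="card">
        <slot></slot>
      </div>
    `;
  }
}

customElements.define('route-card', RouteCard);
// Usage in HTML: <route-card>GET /users/2/comments/42</route-card>
```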
Even with web components, a lot of custom behavior code would need to be written, essentially reinventing parts of the ubiquitous frontend frameworks. At that point, it is better to rely on those frameworks in the first place.

So I began weighing my options again, and then I remembered Preact, a tiny framework with React’s API but none of its heavy weight. What matters most to me is still a really small footprint for any client using 200 OK’s frontend, and Preact fits that requirement quite well. It also relieves the burden of the complex React tooling ecosystem by providing an alternative to JSX (the JS/HTML syntax that requires a build step to compile). HTM is a JSX-like templating syntax that leverages ES6 template literals to achieve the same effect without requiring a compiler like Babel to produce valid syntax. Together, the library including both Preact and HTM weighs 12.3 kB (non-gzipped!), which is currently even way smaller than the CSS stylesheet file I serve. And it works without any change to the Node/Express backend: I can drop in the tiny library and just write JavaScript like before.
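
For a taste of what that looks like, here is a minimal Preact component written with HTM instead of JSX (the CDN imports and the component itself are illustrative):

```js
// A tiny Preact component written with HTM, so no Babel or bundler is involved.
import { h, render } from 'https://unpkg.com/preact?module';
import htm from 'https://unpkg.com/htm?module';

// Bind HTM's tagged template syntax to Preact's createElement.
const html = htm.bind(h);

function CycleList({ cycles }) {
  return html`
    <ul>
      ${cycles.map(
        (c) => html`<li>${c.request.method} ${c.request.path} -> ${c.response.statusCode}</li>`
      )}
    </ul>
  `;
}

render(html`<${CycleList} cycles=${[]} />`, document.querySelector('#app'));
```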
That means I need to learn React, though. So I have gritted my teeth and started reading the Fullstack React book I had already bought. That is $39 not going to waste, so hooray for that!

Summary

Time spent this week: 46 hours