Published by Weisser Zwerg Blog on
Reflection on the state of web development or the story of Media Artists AG (1997).
This blog post is part of the Odysseys in Software Engineering series.
In my first year at university, I studied physics and computer science in parallel. After that first year I decided to stop the computer science lectures and start working in a start-up instead. This was 1997, the time of the dot-com boom, when everybody wanted a place on the internet, nobody knew what the internet was, and we gave them the internet. The start-up was called Media Artists AG. There were two founders, and I was their first employee.
While all of this was working fine, the incompatibilities between different versions of Internet Explorer 4 were horrific. Even after the beta phase for Internet Explorer 4 ended and it was supposed to be stable, every (even minor) update brought countless incompatibilities that we had to account for in our code base with a lot of “if-then-else” logic branching on IE4 version numbers.
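In spirit, that version juggling looked something like the following (a reconstructed sketch in modern JavaScript, not the original JScript; the version strings and strategy names are illustrative):

```javascript
// Reconstructed sketch: branch on the IE4 minor version parsed from the
// user-agent string, because each minor update changed behaviour.
// The strategy names are made up for illustration.
function parseIEVersion(userAgent) {
  // IE4-era user agents looked like "Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)"
  const match = /MSIE (\d+)\.(\d+)/.exec(userAgent);
  return match ? { major: Number(match[1]), minor: Number(match[2]) } : null;
}

function pickPositioningStrategy(userAgent) {
  const v = parseIEVersion(userAgent);
  if (!v || v.major !== 4) return "standards"; // everything else: hope for the best
  if (v.minor === 0) return "workaround-4.0";  // MSIE 4.0 behaved one way ...
  return "workaround-4.01";                    // ... 4.01 and later another
}

console.log(pickPositioningStrategy("Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)"));
```

Multiply this pattern across every DOM quirk in the product and you get an idea of the maintenance burden.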
This experience was so painful that for a long time I did not touch front-end technologies again and preferred to stay on the back-end. Only in 2019, when my daughter had an idea for a social start-up of her own, did I decide it was time to revisit the topic. So what follows is the story of my rediscovery of web technologies after a 20-year break. And just to complete the story about Media Artists AG: we switched from HTML and JScript to Java, where I worked with early versions of Java Swing, before it became an integral part of the JDK. At least Java Swing stayed backward compatible between releases, so we could focus on the product rather than on incompatibilities.
Package management systems like npm are something I also count as a positive. I am aware that people are sometimes critical of package managers and complain that they “download the whole internet”. But managing external libraries in the past was always a major pain! You had to download them from somewhere, mostly as a zip archive, and then track upstream changes yourself. In addition, you had to identify and download all the transitive dependencies yourself. A package manager makes all of that a repeatable and stable process.
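For comparison, declaring a dependency today is a single line in `package.json` (a minimal, illustrative example; the package choices and versions are arbitrary), and the package manager resolves and fetches all transitive dependencies for you, pinned reproducibly by the lockfile:

```json
{
  "name": "example-app",
  "version": "1.0.0",
  "dependencies": {
    "react": "^17.0.2",
    "redux": "^4.1.0"
  }
}
```

One `npm install` later, the entire dependency tree is in place — no manual zip archives, no hunting for transitive dependencies.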
Another nice encounter was Bootstrap: finally, some useful defaults as a starting point to build upon. But I was a bit disappointed by the state of CSS in general. CSS already existed in 1997, and the improvements since are limited. Why are pre-processors like Sass or Less still needed? Why is their functionality not already part of the core standard language?
I also enjoyed working with React due to its functional nature. You take one state and transform it into another state. Nice and clean. I always thought that UIs should be built like computer games: a big while loop that takes inputs from the periphery, updates the game state, and draws the scene as a derivative of that state. This is the exact opposite of those tangled balls of mud with UI state scattered across the whole codebase and event chains ending in infinite loops. While not strictly part of React itself, state management via Redux, together with redux-persist to handle situations where users inadvertently close tabs or press F5, also felt like an improvement.
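The game-loop idea can be sketched in a few lines (illustrative JavaScript; the state shape and inputs are made up): each frame reads the inputs, derives the next state as a pure function of the previous one, and renders purely from that state.

```javascript
// Pure update: the next state is derived only from the previous state plus
// the inputs, never mutated in place -- the same idea React/Redux apply to UI.
function update(state, inputs) {
  return {
    ...state,
    x: state.x + (inputs.right ? 1 : 0) - (inputs.left ? 1 : 0),
    ticks: state.ticks + 1,
  };
}

function render(state) {
  // Drawing is a pure projection of the state; here just a string.
  return `tick ${state.ticks}: player at x=${state.x}`;
}

// The "big while loop": read inputs, update, draw.
let state = { x: 0, ticks: 0 };
for (const inputs of [{ right: true }, { right: true }, { left: true }]) {
  state = update(state, inputs);
  console.log(render(state));
}
```

Because `update` is pure, the whole UI history is just a fold over the input stream — which is exactly what makes Redux-style state easy to persist, replay, and debug.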
While this is not part of the core web technologies, and I am still not 100% sure if I should count it as a positive, I found serverless security rules intriguing. I had always wondered how a serverless infrastructure would handle security in the absence of any protected area like a back-end. But writing and testing these rules, and being sure that they actually achieve the intended goal, was a major headache. Perhaps people will come up with better solutions in the future.
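To make this concrete, here is what such rules can look like — assuming a Firebase/Cloud Firestore-style setup, which is one common example (the post above doesn't name a specific platform): a rule restricting each user document to its authenticated owner.

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Only the authenticated owner may read or write their own document.
    match /users/{userId} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```

The rules are the security boundary: there is no back-end code in front of the database, so any case the rules miss is directly exposed to every client.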
The “bad” is easily summarized as “layers on top of layers of indirection” (or turtles all the way down). It took me quite a lot of time to understand what is (roughly) going on. I won’t pretend that I understand it all.
People often say that computer science is a young field and therefore a lot of progress is still being made. This also nurtures the narrative of fast-paced technological progress and the constant need to keep learning and stay ahead of the curve so as not to become irrelevant. I would argue that most of the relevant work in computer science was done before the 2000s (with a few exceptions like DNNs and similar) and that what we perceive as fast-paced technological progress is mostly bike-shedding. Typically, for the difficult problems, there is only one way to do it. For the trivial problems, there are hundreds of ways to do it, and that invites the myriad web frameworks and short-lived hypes out there. I still remember when GWT was the hottest thing on the planet. Then it was dropped by Google and most of its users. Then came “newer technologies” like Polymer, AngularJS, Flutter with Dart, Angular, … and this is only an enumeration of Google technologies. The same is true in the Facebook/React camp with Gatsby, Next.js, Create React App, … and that does not even touch the tens of thousands of other web frameworks out there.
Why are there still no agreed-upon best practices? I may exaggerate here, but as far as I can tell, you have to throw away your front-end every one to two years, only because the web framework on which you originally built your site was abandoned by its creators for no better reason than the next fad. How are you supposed to build stable, production-ready software that delivers business value like that?
The constant stream of security alerts worries me especially. There are “solutions” out there like GitHub’s Dependabot that at least warn you about known vulnerabilities, but the process for dealing with them is still not clear to me. I tried to read quite a bit about it, and there are articles like How to fix Security Vulnerabilities in NPM Dependencies in 3 Minutes, but following these recipes almost never leads to the desired result, and you’re left with the message:
X vulnerabilities required manual review and could not be updated
The one tool that seems to at least sometimes do the right thing is npm-check-updates. But why is there no canonical, well-described process that every front-end developer knows by heart and can apply in their sleep? At least I did not find anything like that.
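For reference, the usual remediation flow looks roughly like this (commands to run inside a project with a `package.json` and lockfile; the outcome depends entirely on your dependency tree, which is exactly the problem described above):

```
npm audit                 # list known vulnerabilities from the advisory database
npm audit fix             # apply semver-compatible updates automatically
npm audit fix --force     # also allow breaking major-version bumps (risky)
npx npm-check-updates -u  # rewrite package.json ranges to the latest versions
npm install               # re-resolve and reinstall against the new ranges
```

When the vulnerable package sits deep in the transitive tree and no compatible fix exists upstream, none of these steps help, and you end up at the “manual review” message quoted above.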
I would even argue that most of the (security) problems in web applications stem from the fact that you use a web application in the first place. Why did rich-clients on desktop computers fall so far out of favour? On mobile platforms like iOS and Android, native rich-clients are still very common, but on desktop computers rich-clients are a threatened species. In a rich-client you don’t have to worry about XSS or about what you may or may not store in the browser’s localStorage. You know the operating system is making sure that no other process has unauthorized access to your process memory. In addition, rich-clients could be so much more responsive and productive. Have you recently worked with O365, the cloud version of the Atlassian tools, or any other “cloud native” application for that matter? While waiting for the hundreds of little background requests to make the application usable, my feet fall asleep. This was not the case in the past, when a local rich-client application did something genuinely useful: it let you start working immediately and synced with a server in the background.
Perhaps it will help if we introduce a new buzzword like edge computing to make something we already did 20 years ago sound like technological progress.
All in all I am quite underwhelmed by the state of front-end web development in 2021.
On the one hand, I hope that rich-clients will gain favour again, to avoid many of the problems I describe above; problems you wouldn’t have had if you hadn’t used a web application in the first place. Such a rich-client may actually come as a web application in disguise, via something like Electron. But things like Java Web Start (by now removed from the JDK but living on as OpenWebStart) and JavaFX deserve their place, too.
On the other hand, I would hope for a reduction of the layers upon layers of indirection. It seems the people behind the Modern Web project are thinking in a similar direction and are promoting buildless approaches and workflows. The same people are behind the sister project Open Web Components, which promotes components independent of any framework. There are several base libraries and some component libraries that look promising, at least at first sight.
I would hope for something simpler (fewer layers of abstraction) and more stable, with a longer life-cycle (longevity), to protect businesses’ investments rather than forcing them to chase one short-lived hype after another for no good reason.
- “Uncle” Bob Martin: The Future of Programming
- Jonathan Blow: Preventing the Collapse of Civilization
- Bert Hubert: How Tech Loses Out over at Companies, Countries and Continents