's just for layout purposes, inherently held no semantic meaning.
Modern Alternatives
As you would expect, there are modern alternatives that make this layout trivial:
Flexbox
With Flexbox, it really is this simple:
.container {
display: flex;
}
.left-column, .right-column {
flex: 1;
}
CSS Grid
With CSS Grid it's arguably even easier, as the columns are defined on the container and the child elements align to them by default, with no extra CSS required on the children:
.container {
display: grid;
grid-template-columns: repeat(2, 1fr); /* 1fr = 1 fraction unit */
}
Can I still use it?
Errr, silly question, no, not at all! Look how effortless the modern alternatives above are! Imagine how good it feels to rip out the old faux column code from a legacy codebase and replace it with 1 or 2 lines of CSS!
Faux Columns Summary
The Faux Columns technique was one of those clever hacks we leaned on back when CSS didn’t give designers and developers much to work with. It did the job, but it was fragile and fiddly, and you were always one layout change away from breaking it. These days, it’s more of a historical curiosity. Flexbox and Grid have long since made it obsolete, and with newer tools like Subgrid and Container Queries coming through the standards process, we’ve moved on from trickery to browser tools that are actually built for layout.
Zoom Layouts (using CSS zoom for responsiveness)
Back when responsive design was first emerging, a technique called Zoom Layouts appeared as a way to scale whole sections of a UI at once. It came about because responsive CSS layout techniques at the time were very limited.
Example
A simple example of this is given below:
.container {
zoom: 0.8;
}
This CSS is easy to understand: it simply scales the entire .container to 80% of its original size.
When was it useful?
This technique was useful when you needed to shrink or enlarge an entire layout without refactoring a fixed-width design. It also came in handy when working with legacy layouts that could not adapt with fluid widths or media queries. Lastly, it was used as a workaround before widespread browser support for the transform: scale() CSS property or relative units like rem, em, %, vw, and vh.
Why is it outdated?
There are a number of reasons as to why the Zoom Layout technique is now outdated. These include:
zoom is non-standard and inconsistent: For most of its life, the zoom property wasn't part of any official CSS specification. It began as a proprietary Internet Explorer feature that WebKit and Chromium-based browsers later copied, while Firefox went years without supporting it at all, making cross-browser layouts built on the technique very tricky.
Causes accessibility issues: zoom does scale the layout, but it doesn't interact well with user-initiated zoom or accessibility scaling preferences. Using this technique can therefore create barriers for users with visual impairments who rely on native browser zooming or OS-level zooming tools.
Breaks layout semantics: zoomed elements don't always reflow correctly; for example, text can overflow its container, images can become blurry, and form elements may not align correctly when scaled.
Modern CSS has better solutions: As with most outdated techniques in this post, modern browsers now support much better layout techniques and relative units that make responsive design far more consistent and easier to maintain. These include Flexbox, CSS Grid, and the rem, em, %, vw, and vh units. Along with media queries and container queries, this gives developers the ability to adapt individual elements proportionally, rather than resorting to scaling the entire UI.
Performance issues: The use of zoom can cause serious performance issues, especially on low-powered devices since the rescaling causes the browser to scale rasterised layers rather than reflow content natively, which increases UI repaint costs.
Can I still use it?
Seriously, only if you hate your users and love additional maintenance. In practical terms, using it would not be a responsible choice; avoid it. If you come across a critical legacy site using this approach, plan to refactor it with modern techniques. Build your layouts using CSS Grid or Flexbox for flexibility across breakpoints, implement fluid typography with clamp and viewport units, adopt container queries for component-level responsiveness, rely on viewport-based units for consistency, and always test with browser zoom and assistive technologies to ensure accessibility and adaptability for all users.
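To make that advice a little more concrete, here's a minimal sketch of the kind of CSS you'd reach for instead of zoom (the class names and values are purely illustrative):
/* Fluid typography: scales smoothly between 16px and 20px based on the viewport width */
body {
  font-size: clamp(1rem, 0.9rem + 0.5vw, 1.25rem);
}
/* Component-level responsiveness with a container query (hypothetical .card component) */
.card-wrapper {
  container-type: inline-size;
}
@container (min-width: 40rem) {
  .card {
    display: grid;
    grid-template-columns: 1fr 2fr; /* two columns only when the container has room */
  }
}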
Zoom Layouts Summary
Using zoom for layout responsiveness is an outdated, non-standard technique that can compromise accessibility, compatibility, and performance. Modern responsive design principles provide far more robust, scalable, and accessible solutions.
If you require a transition approach for legacy systems still using Zoom Layouts, consider refactoring incrementally to CSS Grid and Flexbox combined with relative units like rem or percentages to modernise their responsiveness. Luckily for you, this isn’t the last time you’ll hear about the infamous proprietary zoom property, as it makes quite a few appearances later in the blog post when we dive into those classic IE layout quirks.
Nested ems instead of Pixels
Before the CSS rem unit was added to the CSS Values and Units Module Level 3, developers used the em unit as a responsive strategy to avoid fixed pixel font sizes. Having used it for years, I can confirm it was a real pain in the ass to work with (pardon my French!). With em units, sizes compound as elements nest: an em value for font-size resolves against the parent element's font size, while em values used for spacing resolve against the element's own computed font size.
For example, given this HTML:
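<!-- a minimal structure matching the selectors in the CSS below (the text content is just a placeholder) -->
<body>
  <div class="container">
    <p class="child">Some nested text</p>
  </div>
</body>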
And this CSS:
body { font-size: 1em; }
.container { font-size: 1.2em; } /* relative to body */
.child { font-size: 1.2em; } /* relative to .container, so compounded */
Can you guess what the .child font size of the text is in pixels?
Better get your Math(s) hat on, let's go through it!
Default body font size = 16px so 1em x 16px = 16px.
The .container DIV is relative to the body font size so:
.container font size = 1.2em x 16px = 19.2px.
The .child paragraph is relative to the .container font size so:
.child font size = 1.2em x 19.2px = 23.04px
That's right, that well-known font size 23.04px!
Now this is just a very basic example; imagine including em units for margins and padding too, and layering on additional nesting! Hopefully you are starting to realise how painful em units were to use across a website, especially when the only viable alternatives were percentages (which had the same relative-nesting issue and were even less intuitive than em) or the CSS keywords, e.g. font-size: small, medium, large, x-large, etc. As you can see, there weren't many viable or maintainable options for responsive typography and spacing in the early responsive design era (around 2010-2013).
Why is it outdated?
Complexity and unpredictability: Nested ems lead to compounded calculations as we saw in the simple example I gave above, making sizing unpredictable in deeply nested components. A small change in a parent font size cascades unexpectedly and could completely obliterate your well-crafted layout.
Maintenance overhead: Adjusting layouts or typography with nested ems quickly creates brittle CSS and significant technical debt, especially when ems are used for spacing like margins and padding.
Inconsistent UI scales: Components may render differently in different contexts if they rely on em units, especially in large applications with diverse layout containers.
Modern Alternatives
There are several modern options you can use instead of nested em units (with a short sketch after the list). These include:
rem units for consistent global scaling relative to the root font size
Clamp-based fluid typography for responsive design, for example Utopia.fyi.
CSS custom properties (variables) for consistent, maintainable scales
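As a rough sketch of how those options can work together (the scale values here are purely illustrative):
:root {
  /* a simple type scale held in custom properties */
  --step-0: 1rem; /* 16px at the default root size */
  --step-1: clamp(1.25rem, 1.1rem + 0.75vw, 1.5rem); /* fluid between 20px and 24px */
}
.container {
  font-size: var(--step-0);
}
.child {
  font-size: var(--step-1); /* rem-based values resolve against the root, so nesting never compounds */
}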
Can I use them today?
You could, but I have no idea why you would! When more viable alternatives exist today like rem units for global scaling, clamp for fluid typography, and CSS variables for maintainable scales, why make life harder than it needs to be??
Nested ems Summary
Using nested em units is outdated. It adds unnecessary complexity and unpredictability. For modern responsive design you are far better off using rems for consistent global scaling, or taking advantage of the clamp CSS function if you are feeling adventurous. Lastly, you could always use modern CSS variables for more consistent and maintainable code.
Setting the browsers base font size to 62.5%
As a direct follow-on from the nested em technique earlier in the post, developers came up with an alternative to simplify the math(s) behind em and percentage sizing (percentages had the same "relative to parent" issue as ems). They often set the font size on the root html element to:
html { font-size: 62.5%; } /* the default 16px base now computes to 10px, making em units easier to work with (base-10 rather than base-16) */
.container { font-size: 1.6em; } /* 16px */
.container { font-size: 2.4em; } /* 24px */
.container { font-size: 3.6em; } /* 36px */
This avoided complicated fractional calculations when using em units:
Without the percentage: 1em = 16px → 24px = 1.5em.
With the percentage: 1em = 10px → 24px = 2.4em.
You still had the problem with nested elements, but that was later fixed by using rem units (root em).
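To make that concrete, here's a small sketch of how the combination looked once rem arrived (the selectors are illustrative):
html { font-size: 62.5%; } /* root now computes to 10px */
.container { font-size: 1.6rem; } /* 16px */
.child { font-size: 2.4rem; } /* always 24px, however deeply nested, because rem resolves against the root */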
Why are these techniques less common today?
It overrides user defaults: Some users increase their base font size above 16px for accessibility reasons; hard-coding the base size to 62.5% undermines that user preference.
Modern teams work with rem: Most developers and teams now accept that 1rem = 16px and use design tokens, variables, or a spacing scale instead of forcing a base-10 (62.5% hack) mental model.
Simplicity from Modern tooling: Design systems, utility classes, and CSS variables handle sizing scales more predictably without the 62.5% hack.
Can I still use it today?
No, not really, mainly due to the list of reasons I've given above. font-size: 62.5% was merely a developer convenience hack to make 1em / 1rem equal 10px for easy math(s). Look at the short list of modern alternatives I have listed above instead.
Base font size Summary
As mentioned above, this math(s) hack for easier font sizing is no longer required on the modern web, in fact it should be avoided due to the impact it has on users who change their base font size for accessibility reasons. Look to use one of the more modern techniques mentioned in the "Why are these techniques less common today?" section above.
Fixed-Width Fonts for Responsive Text
Fixed-width fonts are better known as monospaced fonts; they allocate the same horizontal space to each character. For example:
.mono-spaced-font { font-family: "Courier New", Courier, "Lucida Sans Typewriter", "Lucida Typewriter", monospace; }
The example above renders text in a monospaced font and defines a monospaced font stack that will work on most systems. The reason I say "most" is because, according to CSSFontStack, Courier New is available on 99.73% of Windows machines and 95.68% of OSX machines, which is why it is listed first in the stack. For the minority of users whose system doesn't have it, the browser will look for Courier, and so on down the stack, until the final entry simply tells the browser to use any monospace font the system has available.
Historically, monospaced fonts were used for:
Terminal emulation.
Code editors for alignment.
Early web design, where the layout predictability was prioritised over aesthetics or responsiveness.
Why was the technique used in responsive text?
Developers and designers struggled with the web platform's limitations at the time due to a lack of suitable tools. So monospaced text was usually used for:
Consistent character spacing across browsers.
Easier text alignment in table-based layouts.
Simplifying calculations for layout sizing, since browser layout strategies were much less mature than they are today.
Why is the technique outdated?
The technique is now considered outdated for various reasons, including:
It limits design flexibility. Modern responsive design has moved on from fixed typography: fluid typography is now possible, and it is better served by proportional fonts that adapt visually to varying screen sizes and reading contexts.
Monospaced fonts are harder to read, especially for paragraphs or long text blocks, and readability is critical for accessibility-focused design on the modern web.
Instead of outdated methods, modern CSS offers enhanced tools and support for contemporary layout techniques. Flexbox and CSS Grid, coupled with various typography scaling units like rem, em, vw, vh, and clamp(), enable more predictable and reliable layout control.
There's no performance difference between modern proportional fonts and monospaced fonts, they both have similar browser overhead, so why choose a technique that is harder to maintain and comes with a whole host of other disadvantages?
What's a modern replacement?
There are a number of modern alternatives, some of which we touched on above (with a short sketch after the list). These include the use of:
Fluid typography with CSS clamp() and viewport units to ensure text scales responsively across devices.
Proportional fonts with font fallback stacks to optimise readability and layout adaptability.
Only using monospaced fonts for semantic or functional reasons, not aesthetics. Code blocks and tabular data are prime examples of where monospaced fonts should be used to enhance readability in those specific areas of a website. Adventurous designers can even transform a web UI into a retro terminal window with these elements, though readability must be carefully considered.
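As a rough sketch of that advice (the font stacks below are examples, not recommendations):
/* proportional font with fluid sizing for body copy */
body {
  font-family: system-ui, -apple-system, "Segoe UI", Roboto, sans-serif;
  font-size: clamp(1rem, 0.95rem + 0.4vw, 1.2rem);
}
/* monospaced fonts reserved for code and tabular data, where alignment genuinely helps */
code, pre, kbd {
  font-family: ui-monospace, "Cascadia Code", Consolas, "Courier New", monospace;
}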
Can I still use it?
As stated repeatedly in this section of the blog post, while technically possible, it would be highly illogical to employ this technique. Given the numerous disadvantages outlined earlier in this section, utilising such an antiquated method on the modern web would be ill-advised.
Fixed-Width Fonts Summary
There are so many font options available to developers and designers today. There is no way you should ever use a monospaced font for anything besides sections of code, or possibly text in a data table, depending on what type of data you are wishing to display. In both of these cases, a monospaced font can enhance readability if used correctly.
4. The Plugin Era – Flash and Friends
Flash-Based Content
I distinctly remember having a conversation with a then colleague regarding iOS not supporting Flash content and how it was the beginning of the end of Adobe Flash (Flash) on the web. At the time he refused to believe it, but thankfully for the web, my prediction came true!
What was Flash-based content?
Flash was a proprietary multimedia software platform developed by Adobe. It was used to:
Deliver animations, video, and interactive content via a plugin in the web browser.
Enable rich media applications embedded in websites.
Power early interactive interfaces on the web; this was way before the web platform matured and could support these types of interactivity natively.
I, personally, remember it for flash-based advertisements of which I created many when I was first starting out in web development!
Why was it popular?
Flash was hugely popular at the time due to the fact that:
Cross-browser multimedia support was lacking on the web platform (i.e. no native support)
Advanced vector animation support
In 2005, Flash was the sole method for streaming audio and video on the web, as exemplified by YouTube's reliance on it.
Interaction was programmed through the use of ActionScript. If that sounds very similar to JS, that's because it is: both are based on the ECMAScript standard. That's a massive oversimplification, but if you are curious, read all about it on Wikipedia.
In the late 1990s there was a popular trend on the web of having completely pointless Flash intros that would load and play automatically before you entered a site. There are countless examples of these intros on YouTube if you are interested!
Why was it deprecated?
There are many reasons as to why Flash is now deprecated. These include:
Flash was well-known for serious security vulnerabilities, which were often used for malicious software and browser takeovers.
Flash content often consumed significant CPU and memory, leading to poor performance and excessive battery drain on mobile devices.
As mentioned earlier, Apple refused to support Flash on iOS, citing security, performance, and stability concerns, which contributed heavily to its decline.
Adobe's proprietary Flash technology was incompatible with open web standards, hindering accessibility, interoperability, and sustainability.
Lastly, open web standards and the web platform evolved to replace Flash with native (and non-proprietary) functionality like:
Native video and audio playback (the <video> and <audio> elements)
CSS animations and transitions
Canvas and WebGL for interactive graphics and games
SVG for scalable vector graphics
Flash met its end with the advent of modern web APIs, including HTML5, CSS3, and modern JS.
In 2017 Adobe announced that Flash's end of life would be in 2020. In December 2020 Adobe released the final update for Flash. By January 2021, major browsers disabled Flash by default and eventually blocked Flash content entirely.
Can I still use it?
At last! A straightforward answer to this question: no, it's impossible to use Flash on the modern web, as Flash content is no longer supported in any modern browser. Simple! RIP Adobe (Macromedia) Flash, 1996 to 2020. You won't be missed.
Modern Alternatives
As mentioned above, there are a number of native, browser-based alternatives to Flash functionality (HTML5, CSS3, modern JS). These are:
- Native video and audio playback (the <video> and <audio> elements)
- CSS animations and transitions
- Canvas and WebGL for interactive graphics and games
- SVG for scalable vector graphics
Scalable Inman Flash Replacement (sIFR)
Before the introduction of the @font-face at-rule, which is now defined in the CSS Fonts Module Level 4 specification, web designers and frontend developers were desperately seeking to expand the limited number of cross-browser, cross-operating-system fonts available on the web. To achieve that, a number of workarounds and some generally ingenious browser hackery were built. From the name of this technique, you may also be able to guess the answer to the "Can I still use it?" section!
What is sIFR?
Scalable Inman Flash Replacement (sIFR) was a creative technique that used JS and Flash together to replace HTML text elements with Flash-rendered text. It allowed developers to embed custom fonts directly within the Flash file; they could then modify the HTML text, and the Flash file would dynamically render the updated content.
This workaround was required at the time because there was very limited support for custom web fonts via the use of @font-face. Surprisingly, @font-face was first introduced by Microsoft in Internet Explorer 4 in 1997, using Embedded OpenType (EOT) as the font format. This was proprietary to IE, so no other browsers supported it. Since there wasn't a cross-browser way to use custom fonts, alternative techniques like sIFR emerged.
sIFR Popularity
sIFR emerged in the early to mid-2000s, with its first public version released around 2004-2005, and it was widely used until around 2009-2010, especially for headings and branded typography. Its popularity grew during that period because the technique preserved SEO and accessibility: the original HTML text remained in the Document Object Model (DOM), so it could still be read by search engines and assistive technology. Once set up, it was simple to update the underlying text and sIFR took care of the rest. There was also the added bonus that the text remained selectable, so it could be copied and pasted when needed. It sounds like a great solution, so where did it all go wrong?
Why is it outdated?
There are several reasons why the sIFR technique is now outdated. We covered the main one in the previous "Flash-Based Content" technique above:
It relies on the Adobe Flash Player browser plugin, that is now deprecated and blocked in all major browsers due to security vulnerabilities and performance issues.
It slowed down page performance by increasing page load time, due to having to download the Flash assets.
Although it was partially accessible from an HTML perspective, it certainly wasn't perfect as it introduced accessibility and compatibility issues on devices without Flash support.
Web standards came to the rescue. CSS3 brought with it native cross-browser support for @font-face for custom fonts, without the need for any browser plugins. The new standards supported Web Open Font Format (WOFF and WOFF2) font formats which are a standardised and optimised custom font format for the delivery of fonts on the web. Basically, HTML5 and CSS working together simply removed the need for plugin-based typography workarounds.
Can I still use it?
As I mentioned above, this is a pretty simple question to answer… No, not at all; the removal of Flash from all modern browsers in use today guarantees that!
Modern Alternative
There's only one modern alternative that should be used on the web today: @font-face. An example of its usage is given below:
@font-face {
font-family: "MyCustomFont";
src: url("fonts/MyCustomFont.eot"); /* IE9 Compat Modes */
src: url("fonts/MyCustomFont.eot?#iefix") format("embedded-opentype"), /* IE6-IE8 */
url("fonts/MyCustomFont.woff2") format("woff2"), /* Super modern browsers */
url("fonts/MyCustomFont.woff") format("woff"), /* Modern browsers */
url("fonts/MyCustomFont.ttf") format("truetype"); /* Safari, Android, iOS */
font-weight: normal;
font-style: normal;
}
Thankfully, modern browsers widely support WOFF, which simplifies the above code:
@font-face {
font-family: "MyCustomFont";
url("fonts/MyCustomFont.woff2") format("woff2"), /* Super modern browsers */
url("fonts/MyCustomFont.woff") format("woff"); /* modern browsers */
font-weight: normal;
font-style: normal;
}
In fact, any modern web browser that supports WOFF also supports WOFF2. Therefore, the code you should use today is as follows:
@font-face {
font-family: "MyCustomFont";
url("fonts/MyCustomFont.woff2") format("woff2");
font-weight: normal;
font-style: normal;
}
In all instances above you'd use the custom font like so:
.myfontclass {
font-family: "MyCustomFont", /* other "fallback" fonts here */;
}
The browser will take care of the rest!
Note: you should always provide a font fallback in your font-family value, just in case the font file fails to load or is accidentally deleted from your server. There is so much more to the use of @font-face; if you are interested in advanced topics around its usage, you should definitely check out Zach Leatherman's copious work on the subject over the years!
Cufón
Much like sIFR above, Cufón was created due to the lack of options when it came to using custom fonts on the web. It was popular around the same time as sIFR (the late 2000s and early 2010s) and was essentially solving the same problem, but with a different cross-browser technique. Whereas sIFR used Flash, Cufón worked like so:
Fonts were converted into vector graphics, and canvas (or VML in older versions of IE) was then used to render the text in place.
JS then replaced the HTML text with the custom-font-rendered version of that text.
Since it was JS-based, there was no need for any plugins (like Flash).
Why was it used?
As previously noted with sIFR, browser support for CSS @font-face was inadequate or inconsistent at the time. Designers and developers wanted to use custom fonts for branding and stylistic reasons without users having to install a plugin or the fonts locally. Cufón was attractive because it:
Didn't require a plugin for it to work.
Provided near pixel-perfect rendering of the custom font.
Was easy to integrate with minimal JS setup.
Why is it outdated?
Modern browsers all support @font-face, a much better solution as it allows the direct use of web fonts like WOFF or WOFF2 files without the use of JS hacks.
Its usage impacted accessibility. Due to Cufón replacing text in the DOM with rendered graphics, screen readers couldn't interpret these replacements as text, thus degrading a site's accessibility.
Cufón caused web performance issues as the text replacement script was run after page load, which increased page render time, blocked interactivity and degraded the overall performance, especially on slower devices.
Although Cufón attempted to preserve the replaced text in the DOM, the results were often inconsistent, mainly because search engine crawlers at the time had inconsistent results when parsing JS-replaced content.
Cufón didn't work well with responsive design: once rendered, the replaced text didn't scale correctly unless the page was reloaded at the new size.
Can I still use it?
Although the site is still available here, and the cufon.js script is still available to download, the font generator has been taken down and is no longer maintained, so to get it working you'd need to jump through quite a few hoops! So, what I'm really trying to say is: yes, you can, but it isn't worthwhile. Even the original author, Simo Kinnunen, says on the website:
Seriously, though you should be using standard web fonts by now.
Modern Alternatives
Rather than repeat myself, I'll refer you to the same section from the sIFR methodology above.
GIF Text Replacements
Although I am aware that this is not a plug-in, it seemed like the most appropriate section to include it in since we are discussing font replacement techniques. Of all the custom font techniques I've listed, this one is by far the worst in my opinion. The technique was popular in the late 1990s and into the early 2000s, when there were very few other options for using custom fonts on the web.
What is a GIF Text Replacement?
It's a self-explanatory name. In order to use a custom font, a designer would create a static asset (usually in Photoshop or similar), then the developer would cut the text out as a GIF. That image was then used in place of the HTML text on the page, to make it look like a custom font was in use. Example code for this technique can be seen below:
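Something along these lines, using the classic CSS image-replacement pattern (the file name and dimensions are purely illustrative):
h1.page-title {
  background: url("images/page-title.gif") no-repeat;
  width: 320px; /* must match the GIF's dimensions exactly */
  height: 40px;
  text-indent: -9999px; /* push the real heading text off-screen so only the GIF shows */
  overflow: hidden;
}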
Note: Some readers may wonder why a GIF was used instead of a PNG, since both support transparency. The main reason is that Internet Explorer offered poor support for transparent PNGs and required a complicated hack to make them work, which I will explain later in the post.
Why was it so bad?
It was time-consuming and maintenance-heavy. Should the design change, all the GIFs had to be manually cut out of the design files again.
It was bad for Accessibility. Screen readers are unable to process text embedded within GIFs or images that lack meaningful alt text. The absence or outdated nature of alt text therefore created an exclusionary experience for users with visual impairments.
It was bad for SEO. Search engines could not index text within images, harming discoverability. The technique also relied on developers providing accurate alt text, which wasn't always the case.
It was bad for performance. At the time of its popularity, the web was transitioning from HTTP/1.0 to HTTP/1.1. Although HTTP/1.1 handled TCP connections better than HTTP/1.0, those connections were still very expensive in web performance terms, and each of these GIF replacements added another request (and often another connection) to the page, which increased load times.
It was terrible for responsiveness. Although the responsive web was still a few years away when this was popular, the key difference between images and rendered text is that text scales cleanly across different devices and screen sizes. Images simply couldn't do that, leading to poor rendering and pixelation on some devices.
GIF only supports 256 colours (8-bit), and for the GIF to be transparent, one of those colours has to be reserved as transparent. So if your text had a complex colour palette, it either wouldn't work or would just look terrible.
Can I still use it?
No, it's as simple as that. It's a technique with so many negatives and so few positives, it should be confined to the interesting history of the web platform!
Modern Alternatives
Again, rather than repeat myself, I'll refer you to the same section from the sIFR methodology above.
Adobe AIR
I remember going to a conference around 2007 / 2008, and there was so much hype about Adobe AIR. It was going to be the "next big thing", due to the fact it could enable developers to create rich desktop and mobile apps using only web skills and technologies.
What was Adobe AIR?
The AIR in Adobe AIR stood for Adobe Integrated Runtime. It was a cross-platform runtime developed by Adobe. It allowed developers to use: HTML, JS, Adobe Flash, Flex, and ActionScript. All combined they could run as a standalone desktop or mobile application. It supported Windows and macOS for desktop, and later Android and iOS on mobile.
Furthermore, it also enabled:
Running Flash-based applications outside the browser.
Rich multimedia, animations, and offline capabilities.
Why is it outdated?
It relied on Flash and ActionScript. With Flash reaching its end of life in late 2020, due to persistent security vulnerabilities and the momentum behind open standards like HTML5, CSS3, and JS (ES6+), AIR lost its core technology.
A shift to modern cross-platform frameworks. The market moved towards more efficient and performant technologies like:
React Native
Flutter
Electron (for desktop apps)
The advantage of these frameworks is they use native components or JS runtimes, without the heavy reliance on Flash. This offered developers and users greater performance, maintainability, security, and community support.
Lack of Adobe support. Adobe handed AIR over to Harman (a subsidiary of Samsung) in June 2019 for ongoing maintenance. Support is still provided by Harman, but only for enterprises with legacy applications they still rely on; there's no active innovation, and no new features are being added to AIR by Harman.
Security concerns. As with Flash in the browser, security was always an ongoing issue, and this continued in AIR since Flash was the backbone of its core functionality. Continuing to build on AIR poses security risks and compatibility limitations with modern browsers and operating systems.
Lack of developer interest and ecosystem. Developers on the modern web tend to favour open ecosystems with an active community for support and updates. Adobe AIR’s ecosystem has completely stagnated.
Can I still use it?
As with any other Flash-based technology, I'm afraid not: it is no longer supported, and even if you could, there are more modern open frameworks you could use, like React Native, Flutter, or Electron (for desktop applications). AIR is now history, and if an AIR application is still part of your digital estate, you should prioritise migrating away from it, due to its high maintenance burden, poor security, and the lack of available developers.
Yahoo Pipes
It is easy to forget just how dominant Yahoo was on the web during the late 1990s. Before Google emerged as the leading search engine, Yahoo was one of the primary gateways to the internet. Its peak influence was between 1996 and 2000, when it played a central role in how people accessed and navigated the web. It was the default starting point for most web users thanks to its combination of a curated directory, news, and email services. It was also a technology leader on the web, as I mention later in the blog post when I look at their extensive JS library: Yahoo! User Interface (YUI).
I remember using Yahoo Pipes for combining my many RSS feeds at the time, it really was a fantastic visual tool for data manipulation.
What was it?
Yahoo Pipes was a visual data mashup tool, released in 2007, that allowed developers and non-developers alike to aggregate, manipulate, and filter data from around the web. It provided a drag-and-drop interface where users could connect various manipulation modules together by creating pipes between them. You were essentially piping data "through" the tool (hence the name!) and the manipulated data would come out the other end. It was considered highly innovative at the time and was used a lot for rapid prototyping.
Why is it outdated?
Have a look at the Yahoo! homepage today and you will see it is a shadow of its former self; it looks more like a news aggregation service now than a popular search engine. This was because Yahoo made a giant shift in business strategy, moving away from developer tools and open web utilities to concentrate on advertising and media products. Although Yahoo Pipes was popular with technology enthusiasts, it was never a mainstream product for Yahoo, so the operational expenses versus usage statistics didn't align with Yahoo's business priorities. Lastly, the web evolved beyond Yahoo Pipes, and it couldn't keep pace with the changes. Modern APIs, JSON-based services, and JS frameworks allowed developers to build similar data transformations programmatically with greater flexibility. Due to all these factors, Yahoo Pipes was sadly shut down in 2015.
Can I still use it today?
Nope, it was shut down by Yahoo in 2015, with no further support or hosting.
Modern Alternatives
While innovative at the time, visual mashups have been replaced by:
Dedicated data transformation tools (e.g. Zapier, Integromat/Make).
Serverless functions (AWS Lambda, Azure Functions, Cloudflare Workers, and Fastly Compute@Edge) for real-time data processing.
Low-code platforms with integrated API management.
The web has also become a lot more complicated when it comes to web scraping and feed aggregation. Anti-scraping measures, authentication, and API rate limits weren't a concern when Yahoo Pipes was created, so the techniques it employed couldn't support the robust backend processes now required to handle them. Although Yahoo Pipes was innovative at the time, it has long been discontinued and is now considered an obsolete part of web platform history.
PhoneGap / Apache Cordova
The one thing that sticks in my mind when I think about PhoneGap is when I saw a talk from one of the Nitobi engineers back in 2009 / 2010, he said something along the lines of:
We are using PhoneGap to bridge the current gap for developers in creating cross-platform mobile applications. Our goal is for it to become obsolete once native platforms fully support these capabilities.
This really impressed me at the time, spending so much time on a product with the aim for it to become obsolete.
What was PhoneGap?
PhoneGap was a mobile development framework created by a Canadian company called Nitobi in 2009; it was later acquired by Adobe in 2011. PhoneGap allowed web developers to build cross-platform mobile applications using familiar web technologies: HTML, CSS, and JS. The applications were packaged into native containers, allowing them to run as mobile apps while also having access to device APIs via JS.
Why was it required?
At the time of its release, the mobile web was incredibly popular and getting bigger month on month. It's important to remember that the first version of the iPhone had been released only two years earlier (June 2007). This really was an exciting time in the web platform's history. If you wanted to release a cross-platform application at the time and wanted to support Android, iOS, and Windows Phone, you needed developers with knowledge of multiple programming languages:
Android required Java.
iOS required Objective-C.
Windows Phone required C#.
Finding a single developer with all these skills would be incredibly hard, so to build and maintain all 3 platforms usually required a whole team of developers.
One of the main advantages of PhoneGap was that all 3 platforms had a single central codebase, which reduced development time and maintenance.
PhoneGap leveraged Cordova under the hood, which it essentially branded and wrapped for broader adoption.
Why is it outdated?
PhoneGap apps performed poorly. This was especially true for graphics-intensive, or animation-heavy interfaces. This is because PhoneGap apps ran within a WebView container rather than as a native application.
Adobe stopped supporting it. This seems to be a common theme in this blog post… Adobe ended support for PhoneGap in October 2020. At the time developers were advised to either migrate to Apache Cordova or consider other frameworks.
Alternatives evolved. As the mobile platform expanded, so did the availability of other frameworks to help developers build apps. These alternatives included:
React Native allowing near-native performance with JS and React paradigms.
Flutter enabling high-performance apps with a single Dart codebase and native compilation.
Progressive Web Apps (PWAs) reducing the need for wrapping web apps as native apps in many use cases.
Capacitor (by Ionic) providing modern native bridging with a streamlined developer experience compared to PhoneGap/Cordova.
PhoneGap's ecosystem growth stalled. As newer frameworks were released and Adobe stopped supporting it, the community moved away and PhoneGap's plugin ecosystem stagnated.
Can I still use it?
No, there are several alternatives listed above that you should consider instead. PhoneGap served its initial purpose as a bridge, enabling developers to build cross-platform mobile applications, and, true to its mission, it became obsolete once native platforms and newer frameworks fully incorporated those capabilities.
Microsoft Silverlight
NOTE: I never used Silverlight (I do remember it being announced) I'm just adding it to the post for completeness.
What was Silverlight?
Silverlight was a rich internet application (RIA) framework introduced by Microsoft in 2007. It was conceptually similar to Adobe Flash, designed to deliver interactive multimedia, animations, and streaming video inside the browser.
It used a subset of the .NET Framework, with applications typically written in C# or VB.NET, and presentation defined using XAML (an XML-based UI markup language). Developers could reuse existing .NET skills, which made Silverlight attractive in Microsoft-centric enterprises.
Silverlight was often used for:
Media streaming (notably Netflix in its early streaming days)
Interactive dashboards and line-of-business web apps
Cross-browser, cross-platform applications delivered via a browser plugin (Windows and Mac were supported, but Linux support lagged)
Why is it considered legacy?
Plugin dependency: By the 2010s, browser vendors had moved away from browser plugins in favour of newly developed web platform technologies. Plugins were often insecure, unstable, and inaccessible.
Limited cross-platform reach: Although Silverlight was well supported on Microsoft platforms (as you would expect!) and also had support on Mac, it had limited support on Linux (via the Moonlight project) and no support on mobile devices (Android, iOS).
Rise of open web standards: HTML5, CSS3, and JavaScript rapidly gained native capabilities for audio, video, and advanced graphics (via canvas), so plugins were no longer required.
End of support: Given the points above, Microsoft finally ended support in October 2021, although browser vendors had pulled the plug long before: Chrome dropped it in 2015, Firefox ended support in March 2017, and Edge never supported it at all.
Can I still use it?
Well, this is another easy one. No, you can't: Microsoft has dropped support and no modern browsers support it either.
Modern Alternatives
The answer to this is basically native web platform APIs.
Specifics include:
For video streaming:
The HTML5 <video> element with adaptive bitrate streaming (HLS, MPEG-DASH).
DRM is handled via Encrypted Media Extensions (EME).
For interactive apps and dashboards:
Modern JavaScript frameworks such as React, Angular, Vue, or Svelte.
WebAssembly (Wasm) for near-native performance, including options like Blazor (from Microsoft) which lets you run .NET in the browser without plugins.
For graphics, animation, and UI:
CSS3 animations and transforms for UI transitions.
Canvas API and WebGL for 2D and 3D graphics.
SVG for scalable vector graphics.
WebGPU (emerging) for modern GPU-accelerated rendering.
Silverlight Summary
Silverlight is legacy because it relied on a now-obsolete plugin model, had poor cross-platform support, and was outpaced by open web standards. Today, everything Silverlight did can be done more securely and portably with HTML5, CSS, JavaScript frameworks, and WebAssembly.
Java Applets
NOTE: I never used Java Applets (although I remember them!) I'm just adding it to the post for completeness.
What were Java Applets?
Java Applets were small applications written in Java that could be embedded into web pages and run inside the browser through a special Java Plug-in (based on the NPAPI plugin architecture). Introduced in the mid-1990s, they were part of Sun Microsystems’ vision of “write once, run anywhere” – letting developers build interactive content and complex functionality that browsers of the time (pre-HTML5) could not support natively.
They were often used for:
Interactive educational content and simulations
Online games
Financial tools like mortgage calculators or trading dashboards
Enterprise intranet applications
Why is it considered legacy?
Plugin dependency: Using Applets required users to install the Java Runtime Environment (JRE) browser plugin and keep it updated. I distinctly remember those nuisance update prompts!
Security risks: The Java plugin was a frequent target of exploits and malware, leading browsers and enterprises to actively block or disable it.
Performance and user experience: Applets often loaded slowly, had inconsistent UI integration with web pages, and required clunky permission dialogs.
Decline of NPAPI support: Browsers started phasing out NPAPI (the plugin technology Applets relied on). Chrome dropped NPAPI in 2015, Firefox dropped it in 2017 (except for Flash, until 2021), and Microsoft Edge never supported NPAPI at all.
Official deprecation: Oracle deprecated the Java browser plugin in Java 9 (2017) and removed it entirely in later releases.
Can I still use it?
Nope! Modern browsers no longer support it, and Oracle stopped supporting the plugin in 2017.
Modern Alternatives
This list is basically native web platform APIs. I don't want to repeat myself, so refer to the Silverlight Modern Alternatives section from earlier in the post.
Java Applet Summary
Java Applets are legacy because they relied on a fragile plugin model that posed significant security risks and is no longer supported by modern browsers. Today, HTML5, JavaScript, and WebAssembly provide richer, faster, and safer alternatives without requiring any plugins.
5. The JavaScript Framework Explosion
DHTML Beginnings (1997)
Dynamic HTML (DHTML) was all the rage around 1997-1998. By combining the primary web technologies, HTML, CSS, JS, and the DOM, developers realised that they could make a web page interactive and "dynamically" update it without needing to reload the entire page. This new technique was frequently employed for animated HTML elements, such as image rollovers and dynamic navigation menus. It also provided immediate user feedback, particularly for form validation, for instance checking whether the user had entered a valid email address.
If JS wasn't available, the fallback was standard HTML and CSS to show a user-friendly error message.
DHTML wasn't necessarily bad if implemented in moderation; unfortunately, it always seemed like the wild west. What worked in Internet Explorer 4 (IE4) wouldn't always work in Netscape Navigator, because Microsoft and the Netscape Communications Corporation (NCC) had different levels of support for JavaScript; in fact, Microsoft had their own implementation of the ECMAScript standard called JScript. This led to lots of maintenance headaches for developers, as the solution often involved forking code for different browsers.
Secondly, it turned into a bit of copy/paste madness. Since any developer could simply "View Source" on a web page, copy the code, and add it to their own website, you often ended up with a mishmash of different interactions and animations all over a site! Thankfully, the trend eventually died out!
Can I still use it?
A web developer could technically still use DHTML on the modern web, but doing so would be strongly discouraged for any serious or production-level work. This is because in using it, a developer wouldn’t be following modern best practices like separating concerns with structured HTML, CSS and JS, building accessible and performant interfaces, using modular and maintainable code, or leveraging modern frameworks and tooling that enforce consistency, security and scalability.
Modern alternatives
There are several modern alternatives to DHTML, including:
Using Frameworks like React, Vue, or Svelte
Native browser APIs
Modern CSS techniques (see the sketch after this list)
Progressive enhancement and accessibility standards
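For instance, the image rollovers mentioned earlier need no JS at all today. A rough sketch (the selectors and image path are illustrative):
/* a modern, JS-free replacement for the classic DHTML image rollover */
.nav-link {
  background-image: url("images/button.png");
  transition: filter 0.2s ease;
}
.nav-link:hover,
.nav-link:focus-visible {
  filter: brightness(1.2); /* simple hover/focus feedback without swapping images */
}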
Early frameworks and libraries
Prototype.js (2005)
Prototype.js version 1.0 was released in February 2005 and was initially developed to help simplify JS tasks in Ruby on Rails (RoR) projects. Its key features were a host of DOM manipulation utilities, an AJAX abstraction to make XMLHttpRequest easier to handle cross-browser, and class-based inheritance in the form of lightweight object-oriented programming (OOP) in JS. But by far its most influential feature was its shorthand DOM element selector, $(), later popularised by jQuery.
I remember trying to learn and use Prototype a few times, but as a JS beginner, I found the name confusing. Especially since the prototype object sits at the heart of JavaScript, with almost the whole language hanging off it through things like property and method inheritance.
Can I still use it?
I mean, technically you could, but be warned: it hasn't been updated in almost 10 years! Given the significant evolution of JS and the availability of modern, maintained alternatives that leverage the latest browser JS APIs, it would not be a wise choice.
Modern Alternatives
It really depends on what a developer was using Prototype.js for, as it had quite a range of functionality:
DOM manipulation: native JS (ES6+), Cash, Umbrella JS
AJAX: fetch, Axios, Ky
Utility functions: Lodash, Rambda
Templating: Handlebars, ES6 template literals
Full replacement: jQuery, React, Vue
Prototype.js was incredibly powerful. My preference would be to utilise several micro-JS libraries for specific functionalities rather than adopting an extensive framework such as React, but that's just my opinion, given the complexity of the React ecosystem.
Script.aculo.us (2005)
Script.aculo.us v1.0 was released in 2005 as an extension to Prototype.js. It built on Prototype.js by delivering a powerful set of visual effects, animations, and UI components, and featured drag-and-drop support out of the box, as well as sortable lists and autocompletion widgets. As with Prototype.js, Script.aculo.us's popularity was partly because it was bundled with RoR, giving it widespread adoption within the "Rails" community.
I still remember Script.aculo.us for its distinctive URL, its bright, animated homepage, and how it really embodied the spirit of ‘web 2.0,’ making the web feel more alive. It's a library that left a lasting legacy, influencing later libraries like jQuery UI.
Can I still use it?
I really wouldn't recommend it, as it hasn't been updated in over 15 years! However, if you're curious to delve into some web 2.0 history, the site is still live (on HTTP, not HTTPS).
Modern Alternatives
Assuming you are only looking for a JS library for animations, here are some options (with a small CSS sketch after the list):
GSAP (GreenSock Animation Platform)
Motion One (Vanilla JS)
Popmotion
Anime.js
CSS + Web Animations API (WAAPI)
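For many of the simple fades and slides Script.aculo.us was used for, plain CSS now does the job. A rough sketch (the class names are illustrative, and toggling .is-visible takes a single line of JS):
.panel {
  opacity: 0;
  transform: translateY(8px);
  transition: opacity 0.3s ease, transform 0.3s ease;
}
.panel.is-visible {
  opacity: 1;
  transform: translateY(0);
}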
Dojo Toolkit (2005)
Dojo Toolkit was one of the first major cross-browser toolkits, released in March 2005 as version 0.1. It was created by Alex Russell and maintained by a wider community of developers, and it's open source and still available on GitHub. It was one of the earliest frameworks to help build rich web applications by simplifying DOM manipulation, AJAX, event handling, animations, and internationalisation (i18n), among many other cutting-edge features. Not only that, it was an early advocate for Asynchronous Module Definition (AMD) and modular JS, and it was one of the first JS libraries with strong accessibility (a11y) support. In 2018, Dojo was rewritten as Dojo 2, which supports TypeScript, reactive patterns, a virtual DOM, and modern build systems. Dojo 1.x is still in use in some long-running enterprise applications on the web; its last stable release came in 2022 and was mainly bug fixes and security updates. New feature development has now shifted purely to Dojo 2+.
Dojo 1.x was incredibly influential on later JS libraries like jQuery, MooTools, and Prototype, especially when it came to governance. It was governed by the Dojo Foundation, which later merged with the jQuery Foundation to form the JS Foundation (now part of the OpenJS Foundation).
Can I still use it?
Using version 1.x would be a bad idea, but you could technically still use the latest version (8.0.0), although that may not be wise either, given there hasn't been a new release in over three years. It's most likely better to stick with more modern framework alternatives.
Modern Alternatives
There are a number of modern alternatives you could consider, keeping in mind that Dojo was very focussed on Accessibility (A11y) and Internationalisation (i18n):
React: maintained by Meta (Facebook).
A11y: Strong support, but it's developer-driven. ARIA roles and keyboard navigation must be implemented explicitly by developers.
i18n: Excellent ecosystem (react-intl, formatjs, lingui, etc.).
Vue.js (v3): maintained by Evan You and the Vue core team.
A11y: Good defaults; still developer-led, but accessible components are emerging.
i18n: vue-i18n is well-maintained and powerful.
Angular: maintained by Google.
A11y: Arguably the best among mainstream frameworks. The Angular Material team publishes a11y guidance, and many baked-in best practices exist.
i18n: Built-in i18n support, including message extraction and compile-time translation.
Svelte / SvelteKit: maintained by Rich Harris and the Svelte core team.
A11y: Improving, but not as mature as React or Angular. Accessible components need to be explicitly chosen or built.
i18n: Community libraries exist (svelte-i18n), but official support is not as comprehensive.
Yahoo! User Interface (YUI) (2006)
YUI was first released publicly in February 2006 as version 2.0.0; the version number reflects the extensive internal development and usage within Yahoo! before its public release. It was originally designed to standardise frontend development at Yahoo and provide the team with a solid cross-browser foundation on which to build feature-rich web applications. It contained a custom loader system that only loaded the components that were needed, an ingenious approach in the pre-ECMAScript 6 (ES6) module era. Furthermore, it came with a whole host of feature-rich UI widgets, cross-browser abstractions, event handling, DOM utilities, animations, and CSS tools (which heavily influenced the later Normalize.css), and it was one of the first libraries (after Dojo 1.x) to prioritise internationalisation (i18n) and accessibility, specifically Accessible Rich Internet Applications (ARIA).
What I remember most about YUI 2.x is just how huge it was! Not just the sheer number of UI modules, but the file size too: 300–350 KB minified, or 90–120 KB gzipped! This was before the widespread availability of fast broadband, and when hardware and browsers were significantly less optimised. A full build of the library could easily exceed those figures, too. This is why Yahoo also provided a combination aware CDN service to help reduce the number of requests made and bundle only the components needed. This was a practice that was way ahead of its time!
Can I still use it?
No, not really; it hasn't been updated since 2014. The reason for this is detailed in this Yahoo Engineering announcement from the time.
Modern alternatives
Given YUI's extensive nature as a framework, and to avoid repetition, I recommend referencing my notes on Prototype.js modern alternatives as an initial guide.
moo.fx (2005)
Moo.fx was a lightweight animation library designed to be unobtrusive, and it worked well with Prototype.js. It focused purely on DOM animations like height transitions and fading. It was part of a movement in JS towards modular codebases for lighter and more responsive interactions, and a distinct departure from larger, heavier animation libraries like Script.aculo.us.
I believe moo.fx was one of the first animation libraries I ever saw. Annoyingly the site hasn't been archived on archive.org. I do remember it having a simple and colourful homepage with examples of the animations you could add by only adding a tiny library to your page (3 KB).
It also had a fantastic URL: "http://moo.fx". What's so cool about the name, you may ask? Well, the .fx country code top-level domain (ccTLD) has now been rendered obsolete and is no longer available. It was originally reserved for Metropolitan France, but it was never officially delegated or made available for registration; France later adopted .fr as its official ccTLD. I can't fathom how Valerio Proietti, the creator of moo.fx, managed to register the name, but it's all true. As proof, only a single record can be found on archive.org, dating back to 2007, and it links to the site's robots.txt file.
Can I still use it?
I can't even find an archived copy of the homepage, let alone the library itself! So, it definitely falls into the "no, you can't still use it" category!
Modern Alternatives
Assuming we are only looking for a modern animation library, it's best to refer to the list I gave above for Script.aculo.us modern alternatives.
MooTools (2006)
The author of moo.fx, Valerio Proietti, wanted more than an animation library. He aimed to develop a complete JS framework with object-oriented programming, modularity, and extensibility as first-class features. Thus, MooTools was born! v1.0 was released in September 2006, and it packed some fantastic features into a lightweight size: a modular core with separate components you could include for extra functionality, an advanced class system (predating ES6 classes), powerful DOM manipulation utilities, Ajax handling, effects (moo.fx), and custom events. It was both performant (for the time) and syntactically elegant. Development ceased in the mid-2010s with v1.5, the final active release. moo.fx and MooTools hold a special place in my memory as some of the first JS libraries I learnt as a junior developer.
Can I still use it?
Well, the website still exists here, but considering it hasn't been updated since January 2016, it's probably best to look for a modern alternative.
jQuery (core) (2006)
In 2006, a truly revolutionary JS library called jQuery was released. It was developed by the legendary John Resig. It offered a lightweight, chainable API to simplify tasks like DOM traversal and Ajax, alongside many other helpful tools and methods in a fantastically well-written package.
jQuery always prided itself on its easy-to-use API and its ability to abstract away the many cross-browser bugs arising from the different vendors' implementations of JS (Mozilla) / JScript (Microsoft). It finally gave developers a "stable" API to start building JS-powered websites without all the stress of cross-browser hacks and forked code to make features work in every browser. With jQuery, it just worked!
I must admit, I absolutely adored jQuery (and still do)! The API was so clean and readable. The complete opposite to the DOM and the JS API! This library has saved more than just my code, it’s rescued entire projects and probably saved my sanity in the process! Especially working in Digital Marketing, as I did at the time. In those days, clients were constantly after the newest, flashiest animations, regardless of usability. It was all about chasing trends. And when the client's paying, you just nod and make that already bouncing button pulse and change colour!
Can I still use it?
Finally, I can say "yes" to this question! Assuming you plan to use v3.x of jQuery, as the 1.x and 2.x branches are no longer supported or maintained. In fact, version 4.0 is currently in beta, according to the Support page. Amazingly, almost a full 19 years after its first release, it is still under active maintenance! Not only that, according to the Web Almanac 2024, it's still the most popular JavaScript library in use on the web! Truly impressive work by the jQuery team and community!
Later era frameworks
Ext.js (2007)
Ext.js version 1.0 was released in April 2007. It was initially developed as an extension to the YUI Library before becoming a fully standalone framework. Its key features included a comprehensive suite of rich UI components, a powerful event model, and advanced layout management capabilities that far exceeded most contemporaries. It introduced a highly structured approach to building web applications, with a strong emphasis on reusable widgets and object-oriented design. But by far its most distinctive contribution was its fully integrated, desktop-like component model for the web. Something rarely seen at the time, and which set the tone for many enterprise-grade JS frameworks that followed.
I never used YUI personally, as its sheer size and breadth of functionality simply didn’t align with the kind of work I was doing at the time. As a result, Ext.js (which as mentioned above, was initially built upon YUI) wasn’t on my radar either. That being said, I’ve included it here for completeness, as it clearly played a significant role in the evolution of rich client-side application frameworks. During my research I discovered how Ext.js transformed into an enterprise-grade toolkit under the Sencha brand. Its strong emphasis on data-driven UIs distinguished it from other lightweight libraries of that period.
Can I still use it?
Yes, Ext.js is still a viable option that you can use on the modern web, if you are building a web application. It continues to be actively maintained by Sencha and even offers a React Extension, allowing for seamless integration of Ext.js components into React applications. However, be aware that Ext.js is now a paid library, with a per-year, per-developer licensing model that can be costly. While a free community version exists, it appears to have a very limited feature set.
jQuery UI (2007)
jQuery UI emerged in 2007 as an official companion library to jQuery, at a time when the JS ecosystem was fragmented and riddled with browser inconsistencies. It was developed to bring a unified, extensible suite of widgets, effects, and interactions to the web. jQuery UI offered an easy-to-integrate API that drastically lowered the barrier for implementing rich UIs, with full cross-browser compatibility. It played a crucial role in making dynamic front-end behaviour accessible to developers at all skill levels, becoming a staple in both enterprise and amateur hobbyist applications during the formative years of modern web development.
Although I was a big fan of jQuery and used it extensively across many projects, I never really had the opportunity to use jQuery UI in its entirety. When I did, it was typically for a single component, such as a date picker or drag-and-drop functionality. These components were reliable and well-supported, but required a lot of JS to function, and added complexity that I never felt was acceptable for a single feature. Especially when there were plenty of alternative micro-frameworks available, offering small, focused libraries that solved one problem well. I was far more inclined to take that modular approach than to include an entire suite of UI components unnecessarily.
One resource I found invaluable at the time was MicroJS. It hasn't been updated in quite some time, but it remains a powerful illustration of how easy it is to cherry-pick only the exact functionality you need, without burdening your page with hundreds of kilobytes of JS.
Can I still use it?
For this question, just as I did with jQuery, my answer is yes, you can still use it. It isn't updated very often, but it is still updated! The last release was in October 2024, with version 1.14.1.
To put the enduring popularity of jQuery and jQuery UI into perspective, the 2024 Web Almanac reports that jQuery is still the most-used JS library on the web, appearing on 74% of pages in the dataset analysed. jQuery UI comes in fourth, with a 22% usage rate. Though described as mostly deprecated, it's a clear reminder of how quickly modern tools become legacy software that must be maintained for decades. The latest release came nearly 18 years after jQuery UI's first version. That's an incredible achievement by the jQuery UI team, talk about dedication!
AngularJS (2010)
AngularJS and Angular share a name and lineage but are otherwise fundamentally different frameworks. AngularJS (1.x) was based on a Model-View-Controller (MVC) architecture with two-way data binding, written in JS with support for ECMAScript 5 (ES5) and some ECMAScript 2015 (ES6) features. Angular (2+) was built entirely differently: it has a component-based architecture that boasts stronger modularity, supporting two-way binding while promoting unidirectional data flow. Another major difference is that Angular is written in TypeScript, a superset of JS that enables better tooling and type safety, much like Java and many other statically typed languages. Angular remains actively used and maintained on the modern web today.
I distinctly remember when Google released AngularJS because I was in Melbourne, Australia, working at a digital media agency. One of the Tech Directors there was raving about it, and how it was going to change Frontend development entirely. In hindsight, I agree with him, but I personally don't believe it was a positive change. Single Page Apps (SPAs) have had such a huge negative impact on Web Performance and Accessibility. Plus, a lot of the code in many of these SPA frameworks essentially reinvents functionality that's already built into the browser and the Web Platform as a whole. Let's not overcomplicate things with framework-managed page state; we have perfectly good back and forward buttons, thank you very much!
As you can probably tell, I'm not a big fan of SPAs. Admittedly, they have their place under some circumstances; however, I genuinely think they're excessive and unnecessarily complicate frontend web development for most applications. But I guess complex is the new simple, right? Anyway, rant over!
Can I still use it?
AngularJS, no. It is no longer actively maintained. Angular, absolutely: it is still a very popular framework on the modern web, although, according to the State of JS 2024 survey, it has been overtaken by Vue.js in usage for the 2nd year running. Frontend developers can be fickle, often chasing the next shiny framework like a kitten distracted by a dangling set of keys. It will be interesting to see if this decline in Angular usage continues in future JS surveys.
Backbone (2010)
Backbone.js was released in 2010 by Jeremy Ashkenas, who also created Underscore.js and CoffeeScript. As with AngularJS, it was one of the first JS libraries to bring the MVC architecture pattern to client-side JS. The key features of Backbone were its models, collections, views, router, events, and sync functionality, which allowed it to communicate easily with RESTful APIs.
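As a quick illustration of those building blocks, here's a minimal sketch using Backbone's model and collection APIs (the Todo naming and endpoint are invented for the example):
// A model with some sensible defaults
var Todo = Backbone.Model.extend({
  defaults: {
    title: '',
    done: false
  }
});

// A collection of those models, synced against a RESTful endpoint
var TodoList = Backbone.Collection.extend({
  model: Todo,
  url: '/api/todos' // hypothetical endpoint
});

var todos = new TodoList();
todos.fetch(); // issues a GET request to /api/todos via Backbone.sync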
I've only ever worked on a Backbone project once, and it was a rapid prototype website for a major airline based in Asia. Given the tight timelines and the client's high design expectations, we ultimately opted for static HTML, as Backbone's complexity wasn't advantageous for rapid prototyping. In hindsight, had it got to the production stage, I could see the architecture of Backbone being very useful.
Can I still use it?
Backbone is nowhere near as popular as modern frameworks like React, Vue, or Angular, and it is now mostly in use on legacy systems. The last version, v1.6.1, was released on April 1st 2025, but looking through the releases, it seems to only get one update per year. According to Wappalyzer, it is still in use by around 521,000 websites, the biggest of those being Atlassian. So, in my opinion, you should avoid using it and opt for a more popular framework with a more active community. Refer to the modern alternatives I listed for the Dojo Toolkit as a starting point.
Knockout (2010)
During the early 2010s, Knockout.js was a popular JS library for building dynamic UIs using the Model-View-ViewModel (MVVM) pattern. It offered features like declarative bindings and two-way data synchronisation, which made it easier to keep the UI in sync with underlying data without manually manipulating the DOM. Its simplicity, ease of learning, and lack of required tooling (you could just drop in a single script tag and start building) made it very appealing at the time.
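To show what those declarative bindings looked like in practice, here's a minimal sketch of the Knockout pattern (the view model and markup are invented for the example):
<p>Hello, <span data-bind="text: name"></span>!</p>
<input data-bind="value: name">

<script>
  // The observable keeps the span in sync with the input automatically
  var viewModel = {
    name: ko.observable('World')
  };
  ko.applyBindings(viewModel);
</script>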
Explanation of the Condition Syntax:
IE matches any version of Internet Explorer.
IE 6, IE 7, IE 8, etc. match specific versions.
lte = less than or equal to
gte = greater than or equal to
!IE = not Internet Explorer
Note the use of the extra <!--> after the opening condition and the <!-- before <![endif]-->, which keep the markup valid while still revealing the content to non-IE browsers. This was known as a downlevel-revealed comment.
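For completeness, here's a reconstructed sketch of what these conditional comments typically looked like (the stylesheet names are purely illustrative):
<!--[if IE 6]>
  <link rel="stylesheet" href="ie6-fixes.css">
<![endif]-->

<!--[if lte IE 8]>
  <link rel="stylesheet" href="oldie.css">
<![endif]-->

<!--[if !IE]><!-->
  <link rel="stylesheet" href="modern.css">
<!--<![endif]-->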
As you can see from the examples above, it was fairly simple to target very specific (and multiple) versions of Internet Explorer. A significant drawback, however, was the increased maintenance burden and the clutter they introduced within the site's <head> tag.
Thankfully, Microsoft removed the parsing of conditional comments in IE10 and IE11 before they eventually introduced a whole new browser called Microsoft Edge. Edge initially used a proprietary rendering engine called EdgeHTML. However, the browser was subsequently rewritten to incorporate the same open-source engine as Google Chrome. This new version, based on Chromium 79, was released as Microsoft Edge 79 on January 15, 2020.
IE CSS Selector Limit
This issue, of all those detailed in this section, is arguably one of the most random, as well as one of the least noticeable! IE6 to IE9 had a limit of 4,095 selectors per stylesheet. Now, that may sound like a lot, but it was very straightforward to go over this limit, especially when grouping selectors. For example:
/* This counts as a single selector */
.my-selector {
margin: 20px;
}
That's all straightforward, but then you look at something like this:
/* This block counts as three selectors */
.button-primary, .button-secondary, .button-tertiary {
margin: 20px;
}
Once you started grouping selectors for easier maintenance it became far too easy to hit the limit, especially on large websites. If you were a user of Bootstrap or Foundation at the time you could hit this limit unintentionally, without even knowing it!
And that brings me onto my next point: What happened when that limit was reached? Well... nothing really, IE just didn't parse any CSS beyond the 4,095 selector limit. Would it warn you that this was happening? Absolutely not!
Developers were often only fortunate enough to discover the issue when testing pages styled later in the stylesheet; Internet Explorer itself would simply fail silently, with no error messages or warnings.
And, to make it even more confusing: it would only impact the specific stylesheet that had exceeded the limit, not the page as a whole. For example:
In IE6 - IE9 this is what would happen:
base.css loads perfectly fine, as it is under the limit.
theme.css: only the first 4,095 selectors are parsed; the rest are silently ignored.
overrides.css would load fully, since it is under the limit.
This behaviour creates a partial styling issue. Elements relying on theme.css won't be styled correctly beyond the 4,095 selector limit. Most pages will appear normal until one relies on selectors 4,096 and beyond from the theme file, at which point those styles silently fail. And of course, if you were unlucky enough to be working with IE6 or IE7, then you had no developer tools to even debug the issue either!
Solution
So what was the solution? Well, with the invention of preprocessors like Sass, and tooling like Grunt, Gulp, or PostCSS, you could automate the splitting of stylesheets at the 4,095 selector limit.
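As a rough illustration of how that automation might work today, here's a minimal Node sketch using the PostCSS parser to count selectors and flag when a stylesheet crosses the old limit (the file name is made up, and a real build tool would also handle the actual splitting):
const fs = require('fs');
const postcss = require('postcss');

const OLD_IE_LIMIT = 4095;

const css = fs.readFileSync('theme.css', 'utf8');
const root = postcss.parse(css);

let selectorCount = 0;
root.walkRules((rule) => {
  // Each comma-separated selector counts individually towards the limit
  selectorCount += rule.selectors.length;
});

console.log(`theme.css contains ${selectorCount} selectors`);
if (selectorCount > OLD_IE_LIMIT) {
  console.warn('Over the old IE 4,095 selector limit - this file would need splitting');
}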
Or another solution was to supply a simplified UI to IE browsers only and serve those CSS files via IE's Conditional Comments. But can you imagine the maintenance involved in updating multiple different stylesheets? Just for the slightest UI change!
The final approach involved reducing reliance on external stylesheets by inlining critical CSS directly into the <head> of the page, specifically for above-the-fold content (we'll come back to that strategy later, as it's not as relevant today). Even thinking about these different maintenance options, and their implications, gives me a headache!
OldIE hacks Summary
As you can imagine, CSS files around this time were a bit of a mess, with all these random cross-browser hacks and workarounds! Thankfully, Microsoft recognised it was an issue, so decided to implement conditional comments in IE5-IE9 to make this madness a little easier (in terms of organisation, not coding).
7. Markup of the Past
XHTML 1.1 and 2.0
I remember having a conversation with a friend about how he was converting his website to a new standard that had just come out. This was around 2001, and the new standard was XHTML 1.1. The most obvious difference at the time was the DOCTYPE at the top of the page source. From HTML 4.01 Strict:
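<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">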
to:
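<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">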
What did XHTML 1.1 aim to achieve?
The goal of this new standard was to modularise XHTML and enforce stricter XML compliance. It was based on XHTML 1.0 Strict, but split into modules for better reusability and extensibility. It also required documents to be well-formed XML, and it enforced stricter syntax than HTML, such as requiring all tags to be closed and all attributes to be quoted.
Unfortunately for XHTML 1.1, it came with a number of limitations that doomed the specification from the start. There was very little browser support for serving XHTML 1.1 as application/xhtml+xml, and, more critically, it broke backwards compatibility in many real-world use cases. Lastly, many developers continued to write XHTML but serve it as text/html, which entirely defeated the point of writing XHTML in the first place!
Because it never gained wide browser support and required a very strict syntax, it eventually became obsolete and is now mostly of interest for historical or academic reasons.
What did XHTML 2.0 aim to achieve?
XHTML 2.0 was never officially released or used in any production browsers. Had it done so, this would have been the DOCTYPE:
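(reconstructed from the W3C working draft, so treat the exact DTD URI as illustrative)
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 2.0//EN" "http://www.w3.org/MarkUp/DTD/xhtml2.dtd">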
Development began on the specification in the early 2000s, with its key goals being:
a clean break from HTML 4 and XHTML 1.x.
planned to introduce entirely new ideas, such as dropping the dedicated <a> element in favour of allowing an href attribute on any element for links.
high up on its priority list was for it to be device-agnostic and semantically pure.
The XHTML 2.0 specification ultimately failed for the following reasons:
It lacked practical browser support and implementation.
HTML5 (developed by WHATWG) gained real traction by improving the existing HTML and maintaining compatibility.
And the gigantic final nail in XHTML 2.0's coffin was its complete lack of backwards compatibility. It broke the entire ecosystem of existing web pages and tools.
The W3C officially halted XHTML 2.0 in July 2009 and shifted efforts to HTML5.
This then brings us onto the modern standard still in use today, and finally a DOCTYPE that was easy to remember! Look how developer-friendly it is:
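<!DOCTYPE html>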
It is so simple, in fact, that it is case-insensitive and doesn't require a cumbersome URI pointing to a Document Type Definition (DTD), which was there to define the structure, rules, and legal elements and attributes used on a page.
Inline JavaScript
There are 2 distinct methods of using inline JS in an HTML document. They both have specifications in the HTML Standard (WHATWG).
Inline Script Block
The inline script block is defined in section 4.12.1. The syntax is simple and familiar. An example usage, placed in the head of a page, is as follows:
<head>
  <title>Title here</title>
  <script>
    // Runs immediately when the parser reaches it, blocking rendering until it finishes
    console.log('Inline script block executed');
  </script>
</head>
It's perfectly fine to use an inline script block in this way, and it is in no way a legacy / outdated technique. But it does come with a few things worth considering before using it. Inline script blocks like this cannot use the async or defer attributes; those attributes only apply when loading external scripts using the src attribute.
Can I still use it?
Yes, you can, but it comes with a few caveats. A script in this position in the <head> executes immediately and synchronously, and it will block page rendering until the JS code completes. So make sure you don't overload an inline script block, as your website will pay the price in terms of frontend web performance.
Inline Event Handler or HTML Event Attribute
Inline event handlers are defined in section 3.2.6 and section 8.1 of the HTML Standard (WHATWG). They are now considered a legacy pattern in modern web development. This is due to several reasons, including the security risk posed by JS execution in the global scope, the potential for Cross-Site Scripting (XSS) vulnerabilities, and the way they clutter the markup. An example of an inline event handler is below:
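<a href="/about" onclick="alert('About clicked'); return false;">About</a>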
The simple example above captures the onclick event from the anchor, and instead of taking you to the about page, as you would expect, it simply brings up an alert box with the "About clicked" string.
There are other reasons why this technique is considered legacy too: it conflicts with the principle of separation of concerns, and it makes debugging, testing, and the implementation of accessibility best practices harder. Lastly, a Content Security Policy (CSP) can also disallow inline event handlers unless they are explicitly allowed (another big security risk!).
Can I still use it?
No! Don't use this outdated technique to add JS interactivity to your page. Instead you should move towards external scripts and unobtrusive JavaScript. An example of which is given below:
// assuming the .myclass element is already in the DOM
const el = document.querySelector(".myclass");
el.addEventListener('click', () => {
// Do JS stuff here!
});
This assumes that the .myclass element is already in the DOM. If it isn't, document.querySelector will return null, and calling addEventListener on it will result in a TypeError. The safest way around this is to use the DOMContentLoaded event. An example of this is given below:
document.addEventListener("DOMContentLoaded", () => {
const el = document.querySelector(".myclass");
if (el) {
el.addEventListener("click", () => {
// Do JS stuff here!
});
}
});
If you're thinking that's a lot of code just to add a click event to a single element! Then you'd be correct, hence why libraries like jQuery were so incredibly popular for event additions and basic DOM manipulation.
Document.write()
I must admit, I don't think I've ever actually used document.write() on a webpage, probably because I've never seen a sane reason to use it! It's a JS method provided by the browser's DOM that allows you to write HTML or text directly into the page. A simple example is given below:
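<script>
  document.write('<h1>Hello, world!</h1>');
</script>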
Wherever this code is placed in the page, it will simply output an h1 with the content "Hello, world!". Now, there's a reason I was so harsh on the method above, and that's because it comes with some horrible side effects. These include the following:
As with any Inline Script Block, it blocks page rendering until the content is written to the page.
It runs synchronously and can block scripts and other resources from loading efficiently.
Lastly, and this has to be the best (and most horrifying) feature! If used after the page has fully loaded, it can erase the entire DOM and replace it with whatever was passed to the method. That would be "Hello, world!" in the example given above.
It's important to consider the security implications of its usage, as it is similar to eval() (MDN link) in some ways (but not all). Both methods can enable cross-site scripting (XSS) if user input is injected without sanitisation.
When should it be used?
On the modern web the simple answer is never! It's best seen as a historical curiosity that some legacy systems may still use that haven't been modernised yet. It could also be used in elementary educational examples or demos. There are a number of much safer (and robust) modern alternatives that should be used instead. These include:
element.innerHTML (MDN Link).
element.textContent (MDN Link).
document.createElement() (MDN link) in conjunction with appendChild and insertBefore (a short sketch follows this list).
Modern frameworks or libraries for manipulating the DOM and updating the UI.
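As a minimal sketch of the document.createElement() approach (the #content container is invented for the example):
// Build the heading safely, without parsing arbitrary HTML strings
const heading = document.createElement('h1');
heading.textContent = 'Hello, world!';

// Append it to a known container instead of overwriting the document
const container = document.querySelector('#content'); // hypothetical container
if (container) {
  container.appendChild(heading);
}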
Fixed Viewport Meta Tags
Fixed viewport meta tags were used in early mobile responsive development. An example of what it looks like is below:
<head>
  <meta name="viewport" content="width=320, user-scalable=no">
  <title>My Fixed width Mobile Site</title>
</head>
In the example above the viewport meta tag is telling the browser that:
This website should be rendered at a fixed width of 320px
It should disable user scaling, so the layout is locked to a specific dimension, regardless of the actual screen size.
Why use this approach?
In the early days of mobile development, desktop websites were very difficult to view and interact with on small mobile screens. In order to "fix" this issue, developers often built mobile-specific websites that would sit alongside the desktop website. 320px was a popular width at the time because the early iPhones (the original iPhone and the iPhone 3G) had 320px-wide screens. In order to maintain maximum control over the layout's appearance on these mobile devices, developers frequently prevented users from zooming into these sites. These restrictions also helped avoid layout shifts when the viewport changed dynamically, e.g. device orientation (portrait vs landscape), pinch-to-zoom gestures, or changes in browser UI elements (address bar or toolbars).
Why was this bad?
There were a number of reasons why this technique was bad. The number one being that it was terrible for accessibility: a user with a visual impairment, on a mobile site that disabled pinch-to-zoom (user-scalable=no), had no way to read the site's content. Secondly, by dictating a screen width, you are harming adaptability and making assumptions about a user's device. Devices come in all shapes, sizes, and pixel densities: mobile, tablet, desktop, and every resolution in-between. There is effectively an infinite number of possible screen dimensions; many would be impractical beyond a certain range, but it's impossible to maintain fixed versions for all of them, so this technique quickly became outdated. Lastly, fixed sizes can lead to performance issues, as they may cause unnecessary UI reflows and repaints when used with older layout methods such as tables or fixed-position elements.
Modern Best Practice
With the development of responsive design practices by Ethan Marcotte in 2010 (Responsive Web Design: A List Apart), fixed layouts quickly fell out of fashion, as with responsive design you could develop a single UI that worked on all devices, no matter what their screen size or pixel density. One UI to rule them all! It comes with the huge advantage of much less maintenance for developers, and, more importantly, a huge usability improvement for all users, no matter what device they are viewing a website on. The recommended viewport meta tag for modern websites is this:
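<meta name="viewport" content="width=device-width, initial-scale=1">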
This tells a user's browser:
to match the screen’s actual width (width=device-width).
to set the base zoom level (initial-scale=1).
to allow users to zoom the viewport (user-scalable=yes, which is the default if not set).
You may come across fixed layouts in legacy applications, and if you do, you should seriously consider:
Replacing fixed viewport meta tags with scalable ones.
Refactoring CSS layout logic to use flexible grids, fluid typography, and media queries.
Ensuring accessibility standards are upheld, especially zooming support (up to 200% as specified in WCAG Success Criterion 1.4.4).
Web Safe Fonts Only (before @font-face)
Fonts are arguably the most crucial component of the web. Without them, there would be no content, and consequently, no internet. This fundamental importance explains the extensive nature of the CSS Fonts Module Level 4 documentation. Fonts present a challenge due to their vast variety and subjective nature. What one person finds legible, another may not.
Web-safe fonts are typefaces that are broadly supported and consistently rendered across most web browsers and operating systems, eliminating the need for users to install additional fonts. They are distinguished by three primary characteristics:
They come pre-installed on most devices across all operating systems (Windows, macOS, Linux, iOS, and Android).
They render consistently across browsers, devices, and operating systems.
If a font isn't available on a certain device, there's a viable alternative that can be used by default; this is called "fallback safety".
A common fallback font family using the font-family CSS property is displayed below:
.class {
font-family: Arial, Helvetica Neue, Helvetica, sans-serif;
}
According to CSS Font Stack this combination of fonts is supported by 99.84% of devices on Windows, and 98.74% on Mac. Notice how it gives the browser a list of fonts, the primary choice being Arial, and if Arial isn't available then Helvetica Neue will be used, all the way down to sans-serif. This is basically saying "if none of the preceding fonts are available, then choose any sans-serif font on the device". This guarantees that a font will always be available on any device, so even though different fonts will be used depending on the operating system, the page content will still be rendered and readable for all users.
Common Web Safe Fonts
Arial
Times New Roman
Verdana
Georgia
Courier New
Trebuchet MS
Lucida Console
The issue with these fonts is that they are very limiting, especially for the design community. Designers have very strong opinions on fonts; it is their "bread and butter", after all, so that's to be expected! For years, both developers and designers have strived to bring all fonts, not just web-safe ones, to the web. In doing so, many people came up with different ways to use non-web-safe fonts. These include the methods I mentioned earlier in the post:
Scalable Inman Flash Replacement (sIFR)
Cufón
GIF Text Replacements
As mentioned earlier all of these methods worked, but they had limitations, be that with Accessibility, Performance, Maintenance, Security, or SEO.
In order to mitigate these limitations, a modern, standardised method was required for browsers to load custom fonts.
Enter @font-face
@font-face is a CSS rule that allows web developers to load custom fonts on a webpage. Unlike the methods listed above it's a native browser feature that brings typographic control to the web, while also preserving Accessibility, SEO, Maintenance, Security, and Performance (if implemented correctly).
The @font-face rule has a notable history, as it was initially implemented by Microsoft in Internet Explorer 4 in 1997. At the time it used Embedded OpenType (EOT) fonts. This was a proprietary solution by Microsoft and not a part of the CSS standard at the time, so adoption outside of Microsoft browsers was non-existent. It wasn't until the W3C developed and standardised the CSS Fonts Module Level 3, that browser support across different vendors started to improve.
Although work on the CSS Fonts Module Level 3 began in the early 2000s (the first working draft was published in July 2001), true standardisation took time as browser vendors adopted open formats like TTF, OTF, and later WOFF and WOFF2. The module was not released as a W3C Recommendation until September 2018.
Usage
So how do we actually use @font-face? Well, it's pretty straightforward:
Declaring the font
@font-face {
font-family: 'MyFont';
src: url('/fonts/myfont.woff2') format('woff2'),
url('/fonts/myfont.woff') format('woff');
/* other font formats here */
font-weight: normal;
font-style: normal;
}
Although you can define other font formats, this is no longer recommended, since the combination of WOFF and WOFF2 will cover all popular browsers. In fact, depending on your user analytics data, you may even be able to drop to only listing WOFF2, since it is now supported by 96.2% of browsers used on the internet according to Can I Use.
Using the Font
body {
font-family: 'MyFont', sans-serif;
}
Here is where we apply the custom font to the page elements using standard CSS selectors.
IMPORTANT: note how sans-serif has also been set as a fallback, i.e. later in the font list. This is best practice because we are loading an external font file to render the text on the page. If that font no longer exists on the server (or simply fails to load), users could otherwise be left with missing or unpredictable text rendering. The fallback ensures that, even if the custom font isn't available, the browser will "fall back" to an appropriate web-safe font.
Now there are a number of web performance points that should be considered when using web fonts, but I won't go into them here. Instead, I will point you towards Zach Leat's excellent "The Five Whys of Web Font Loading Performance" article from November 2018, and it also links to his Performance.now() conference talk on the same subject. Well worth a watch if you have a spare 46 minutes!
8. Tools and Workflow Relics
SVN (subversion, largely replaced by Git)
SVN (Subversion) is a centralised version control system that was widely used in the 2000s and early 2010s. It was the first versioning system I used at one of the digital agencies I worked at in the late 2000s. The memories that stick with me most about SVN are:
Every folder and sub-folder had an annoying .svn directory within it. This directory contained all the metadata needed by SVN to manage the versioned files.
Branching and merging in SVN was a painful experience!
Although, in all honesty, I haven't used it in over a decade, so both of these points may have changed and now be invalid? Actually, I doubt it, for backwards-compatibility reasons with older SVN repositories.
The key word in the paragraph above is "centralised". In a version control context, that means that with SVN there's a single central repository that all version history and file management operations are built around.
In comparison, Git (and hosting services like GitHub) is decentralised. When you clone a repository onto your local machine, you have the whole history of all the files; you can modify them while offline, then synchronise with other developers' modifications once you're back online.
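A quick, illustrative comparison of the two workflows (the repository URLs are made up):
# SVN: a working copy checked out from, and dependent on, the central server
svn checkout https://svn.example.com/repo/trunk my-project
svn commit -m "Change goes straight to the central repository"

# Git: a full clone of the repository, including its entire history
git clone https://github.com/example/my-project.git
git commit -m "Change is recorded locally"   # works offline
git push origin main                         # synchronise when back online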
Legacy Development Practices
If SVN is being used in 2025, it could imply certain things about the codebase and teams working practices. These include:
The tooling being used is likely to be old (e.g. Eclipse plugins, shell scripts).
Continuous Integration (CI) / Continuous Delivery (CD) is likely to be very basic or missing entirely.
Due to the complexity of the branching and merging process in SVN, this type of workflow will likely be minimal if used at all!
Team Cultural Indicators
There are also red flags in terms of engineering culture if SVN is still being used. It typically indicates that:
The engineering team has a conservative engineering culture.
The team have a risk-averse attitude to change.
The team may have a backlog of technical debt that has accumulated over many years.
Recruitment of developers wanting to use SVN is likely to be challenging, as recent surveys indicate that SVN has a 5.18% share of the Version Control System (VCS) market. It is a distant second to Git, which dominates with a 93.87% market share. This is also likely to impact retention of developers, since Git / GitHub are the dominant tools in most industries (although not all) in 2025.
What to look out for
Should you happen to encounter a project that still uses SVN for version control, you should:
Expect resistance to adopt modern workflows (e.g., GitFlow, CI/CD).
Investigate whether the tooling supports migration to Git e.g. git svn or if a full rewrite might be needed.
Evaluate whether SVN is tightly embedded in the build and deployment process.
Prepare yourself for recruitment and retention issues as mentioned above.
Migration
Assuming you've stumbled across a legacy project that uses SVN, and modernisation and migration is a goal for the project, it's worth knowing that:
SVN to Git migration tools exist (git svn, or tools like SubGit), but edge cases can be painful (a sketch of the basic git svn workflow follows this list).
You will likely need to retrain teams and completely refactor deployment automation.
Start with a small component or project to see whether modularisation is a feasible option.
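As a rough sketch of what the git svn route looks like (the repository URL is invented, and the layout flag depends on how the SVN repository is organised):
# One-off clone that converts SVN history into a Git repository
# --stdlayout assumes the conventional trunk/branches/tags structure
git svn clone https://svn.example.com/repo --stdlayout my-project

cd my-project

# Later, pull in any new SVN commits and rebase local work on top
git svn rebase

# Once the team has fully switched, push to a new Git remote
git remote add origin git@github.com:example/my-project.git
git push origin --all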
Can I still use it?
TL;DR: Git / GitHub is the way to go, for modern web development best practice.
Chrome Frame
Chrome Frame was an ambitious project to bridge the gap between modern web standards and the many limitations of old IE versions (IE6 - IE8). It was released by Google as a plugin for IE in late 2009. It essentially embedded the Chrome browser engine into older IE versions, thus allowing the IE / Chrome hybrid to use modern web standards, modern JS frameworks, HTML5 features, all while still being compatible with an IE-centric environment. This proved particularly beneficial for larger enterprise organisations, which were reliant on older versions of IE due to legacy infrastructure and were unable to adopt more modern browsers.
While it sounded great in theory, unfortunately it came with a whole host of downsides, especially when it came to adoption. It required admin access to machines in order to install the plugin; many companies locked down the use of plugins due to the security risk involved in installing 3rd-party code; and lastly, it introduced complexity for IT support and Quality Assurance (QA) teams due to the hybrid nature of the rendering engine.
Can I still use it?
No, as it was ultimately deprecated by Google in 2013, and support ended in 2014. The reasons for this were that web standards had improved, and IE itself had improved with the release of IE9 and later. A significant change in browser releases was the move to "evergreen browsers". These browsers update automatically in the background, without user intervention, and their releases were untethered from specific operating system versions (Safari being a notable exception).
Although Chrome Frame only saw limited success, it certainly helped initiate discussions on migrating from legacy browsers in large enterprise environments.
I distinctly remember when it was announced I thought it would solve all our IE problems (finally!). It was only when it was released that I realised there was no way it would be able to solve the issue because:
It was complex to install (e.g. required admin access)
The majority of users at the time using older versions of IE were likely neither technically capable nor even interested in installing it as a plugin.
Note: This isn't meant to sound elitist, but at the time, most people would have likely identified the internet as the "blue 'e' icon" on their desktop. Outside the web development community, few knew (or cared) what a web browser was, let alone which one they used! And I'd say this statement most likely holds true on the modern web too!
CSS Resets
Before we get into the details, what is a CSS reset? It's essentially a set of CSS selectors and properties used to "reset" all styles across browsers to a common baseline. Think of it as a solid foundation on which to build your website. In theory, if all browsers render elements identically from the start, then the site will be easier to build and maintain, because all those nasty minor cross-browser CSS differences will have been dealt with. That's the theory, anyway.
Just to be clear, CSS resets are still around, but they have evolved into something more forward-thinking, minimal, and focussed only on common pain points. The first CSS Reset, released in January 2007, was Eric Meyer's classic CSS Reset. It quickly became one of the most widely adopted resets for standardising styling across the major browsers of the time: Internet Explorer, Firefox, and Safari. It did this by resetting margins, padding, borders, and font styling to a common baseline. It could either be included within your own CSS file, or added as a separate CSS file loaded first in the <head> tag. The order is crucial because you're establishing a standardised foundation; this allows subsequent CSS to override the resets, either through direct duplication, leveraging the cascade, or by increasing specificity. For example:
/* Basic reset of the body styling: Specificity score 0,0,1 */
body {
line-height: 1.5;
font-family: system-ui, sans-serif;
background: #fff;
color: #000;
}
/* Here this selector overrides the one above because it comes after it in the cascade: Specificity score 0,0,1 */
body {
font-family: "Comic Sans MS", Impact, sans-serif;
}
/* Here we are using CSS specificity to override the page background colour: Specificity score 0,1,1 */
body.colored-background {
background: #ff0000;
}
This is why ordering your CSS correctly is best done right at the start of a project. If you bring in a reset file at the end, it will either do nothing at all, due to higher-specificity CSS selectors before it, or it will completely undo lots of your styling, simply because it "wins" by coming last in the cascade (i.e. it's the last CSS file loaded).
As mentioned above, CSS resets have evolved over the years. Normalize.css is a very common one in use on the modern web, as it works differently by preserving useful default browser styles and only fixing CSS styles that need to be fixed to maximise CSS consistency across modern browsers.
Other notable mentions are more modern, minimal resets that only focus on certain pain points in cross-browser rendering like box-sizing, responsive images, and font inheritance. These include Andy Bell's: A (more) Modern CSS Reset and Josh Comeau's: A Modern CSS Reset.
Kudos to the authors of CSS Resets! Their dedication makes CSS authoring significantly smoother for millions of developers across the world.
Hover-Only Interactions
Hover-only interactions are a legacy practice that suited desktop-only contexts but fails in today's multi-device environment. An example of a hover-only interaction is:
.button:hover {
background-color: #ff0000;
}
Hover-only Interactions come with the following issues:
Not accessible on touch devices: Touchscreens do not have a hover state. This means hover-only functionality becomes inaccessible on phones and tablets, leading to broken user experiences.
Lack of fallback interaction: Many legacy implementations didn't provide alternative means (like a click or focus) to trigger the same behaviour, effectively hiding essential UI or functionality.
Keyboard accessibility problems: Hover interactions are not always accessible via keyboard unless explicitly paired with :focus or JS handling.
Poor progressive enhancement: Relying solely on hover effects often ignored the principle of progressive enhancement, especially when essential content was hidden using CSS unless hovered.
Inconsistent browser behaviour: Legacy browsers had quirks in how they handled hover states, particularly with complex layouts or when mixing JS and CSS interactions.
Modern Best Practice
UIs need to be device-agnostic and align with inclusive design principles. Hover-based interactions should be a supplementary interaction, not the primary one. In order to align with modern best practice, you should:
Avoid hover-only interactions for essential functionality.
Use :focus alongside :hover, and consider adding :focus-visible to better support keyboard navigation.
Support click or tap events explicitly for mobile compatibility.
Provide visible indicators or alternative access methods (e.g. always-visible menus on small screens).
An example in CSS is:
.button:hover, /* mouse users */
.button:focus, /* element focused via click or tab */
.button:focus-visible { /* user likely on keyboard, or other assistive technology */
background-color: #ff0000;
}
The above can be simplified to avoid redundant styling, as combining :focus and :focus-visible can sometimes cause overlapping or unnecessary duplication of visual effects. The recommended approach is to use the following, as it keeps your styling clean and scoped, applying just what's needed based on the user's input method:
.button:hover,
.button:focus-visible {
background-color: #ff0000;
}
By avoiding redundant styling, you reduce:
Overlapping in CSS rules.
Maintenance complexity.
Risk of inconsistent behaviour between browsers.
Slightly less to download.
9. Legacy Web Strategies
Blackhat SEO
Blackhat SEO refers to a collection of techniques that people tried to use to manipulate search engine rankings, mostly in ways that violate search engine guidelines, especially for guidelines laid out by Google.
Intent
So why would people want to use Blackhat SEO? Well, its sole focus was prioritising rapid results over sustainable growth. Consultants selling these techniques focussed on exploiting weaknesses in search engine algorithms, rather than creating genuine value for users. Being at the top of the Google results page was the primary goal, and companies were willing to try these techniques to get an edge on their competition. That was until search engines got wise to what was happening and started to penalise sites that employed these techniques.
Examples
Let's look over a few outdated examples and how they worked:
Keyword stuffing: This was essentially stuffing as many keywords as possible into a page in an attempt to trick the search engine into pushing it to the top of the results for more searches. So even if the keywords weren't at all related to the actual content, they were included anyway. Thankfully, search engines got wise to this tactic and cracked down on sites that used it.
Cloaking: This is where you show different content to search engines than you do to users. Search engines would be shown pages with detailed, keyword-rich content about a specific product in order to trick a search engine into ranking the page highly. But the page shown to users was minimal and mainly promotional with very little or no helpful information on the page related to what it was being ranked on.
Hidden text and links: This is the one I remember the most: using CSS or HTML to hide text on a page that was only intended for search engines. Think white text on a white background, it was that simple (a tiny CSS sketch follows this list)! It was also straightforward to spot, as you'd get pages where the scrollbar was huge, but the visible content was very short. The overflow in the vertical direction was the hidden text, which you could easily reveal by highlighting it with your cursor!
Link farms and paid link schemes: This is where companies would create hundreds, or even thousands, of low-quality pages linking back to a specific page, in the hope that the search engine would rank that page highly because of all the backlinks. There were (and most likely still are) whole businesses set up that promised to get you to the top of search results by essentially spamming the web like this. If you ever ran a WordPress blog without the Akismet plugin, you'd see this spam pile up rapidly! WordPress was highly vulnerable due to its support for Trackbacks and Pingbacks (XML-RPC). Have a look at any old, unmaintained WordPress blog, and you are likely to see this. They were so common at one point that a new term, "splogs" (spam blogs), was coined for them. Wired.com wrote a blog post all about them back in September 2006: "Spam + Blogs = Trouble".
Duplicate content: This is a simple strategy, copy another site's high-quality content and pass it off as your own. Thankfully, this now triggers de-ranking in search engines.
Automated content: Basically automating the production of low-quality and spammy content. I anticipate that this technique will see a resurgence, given the recent surge in AI tools.
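For illustration, the hidden-text trick mentioned above often amounted to nothing more sophisticated than this (the class name is made up, and it's shown purely as a historical example, not something to use):
/* White text on a white background: invisible to visitors, visible to crawlers */
.seo-keywords {
  color: #fff;
  background-color: #fff;
  font-size: 1px; /* some went further and shrank the text too */
}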
"This is why we can't have nice things!" The phrase echoes in my ears as I recall all the techniques mentioned above.
Why are they Outdated?
There are a number of reasons why these techniques are no longer used:
Google and other search engines have significantly enhanced their algorithms to identify and penalise such manipulative tactics.
Modern SEO is geared much more towards user-first metrics: content relevance, quality, user experience, and even web performance are now taken into account when ranking a web page.
Sites found to be using these Blackhat tactics are almost certain to be heavily penalised and may even be de-indexed completely from search engines.
Companies found to be using these tactics on the modern web are very likely to suffer reputational and credibility damage. Some sectors that are heavily regulated will likely have legal implications too.
Can I still use it?
Fortunately, these blackhat techniques are no longer effective on the web. They are detrimental to both users and the internet, yet some individuals persist in attempting to use them.
For example, Google now factors usability into how it ranks web pages; one of these metrics is Cumulative Layout Shift (CLS). It measures the visual stability of a page while it is loading: websites that "shift around" while loading (so-called "jank") aren't scored as highly as those that are more stable.
I recently saw a new SEO technique using JS that would mask an entire page with a transparent element in order to trick the browser's CLS measurement into thinking the page was completely stable. After page load, this element would be deleted and the page could be interacted with as usual. Basically, a modern-day cloaking technique aimed at improving a page's Layout Instability API score.
So yes, it still happens and "this is why we still can't have nice things!".
“Above the Fold” obsession
The concept of "above the fold" is an outdated technique. The fold's position is not fixed; rather, it varies depending on the device used to view a web page. If we take this to the extreme. Viewing a website on a desktop widescreen device vs a mobile device, there's never going to be a common "fold" in this situation. Consider the vast number of device widths, ranging from a large desktop widescreen to a mobile device—literally thousands along the x-axis. If you then factor in the viewport height (y-axis), you're looking at millions of possible viewport permutations. Past assumptions are no longer true:
User behaviour: Users scroll instinctively now. The old belief that users don’t scroll is no longer valid.
Web performance evolution: Modern performance metrics (like Largest Contentful Paint (LCP) and Interaction to Next Paint (INP)) reward real user-perceived speed, not just fast above-the-fold content.
Lazy loading and streaming: The web has moved towards prioritising meaningful content dynamically, rather than front-loading everything visible “above the fold”.
Can I still use it?
It depends. “Above the fold optimisation” is an older performance technique that focuses on rendering the visible portion of the page as quickly as possible. When used thoughtfully, it can still improve perceived load speed, especially in critical user flows. However, relying on it too heavily can narrow the focus to just a fragment of the overall experience.
Today, the more effective and sustainable approach is to optimise for end-to-end, user-centric performance. This includes not only what appears first on-screen, but also how quickly the page becomes usable and interactive. A strategy focused on delivering a consistently fast page experience will naturally improve the content visible without scrolling, regardless of the device.
Superseded compatibility approaches
Graceful Degradation
The technique of graceful degradation involves building a website to take advantage of all the modern features of a browser, and, once completed, adding "fallbacks" for browsers that don't support those modern features.
Examples
An example of graceful degradation is: a developer builds a website where the main layout uses CSS Grid, but if a browser doesn't support Grid, it "falls back" to a simpler layout system like Flexbox or even a float-based layout (depending on the site's browser support requirements).
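One way to express that fallback in plain CSS relies on the fact that browsers simply ignore declarations they don't understand (the class name is invented for this sketch):
/* Simpler fallback layout, which Grid-capable browsers will override below */
.layout {
  display: flex;
  flex-wrap: wrap;
}

/* Browsers that understand Grid apply this; older browsers ignore the unknown values */
.layout {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  gap: 1rem;
}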
Another example is a feature-rich, JS-enhanced input form that falls back to a basic HTML form if JS is disabled or fails to load, for example due to a poor or unstable network connection. In this case, core functionality (such as form submission) remains available, even though advanced features (like real-time validation or dynamic UI elements) are unavailable.
Why is it Legacy?
Graceful degradation is increasingly considered a legacy approach in modern web development, as it has largely been superseded by progressive enhancement, which takes the opposite approach.
Why is it outdated?
Graceful degradation assumes a modern baseline, which fails to acknowledge the true diversity of browsers and devices currently in use. Furthermore, comprehensive testing is challenging due to the difficulty of covering all scenarios involving older or limited browsers, and adding "fallbacks" further increases the complexity of an already intricate, full-featured initial build. Most importantly, graceful degradation negatively impacts accessibility and resilience: pages employing this technique frequently fail in low-capability environments, such as older browsers, devices, or poor connections.
Can I still use it?
There are a few scenarios where it may still be useful, these include:
Legacy enterprise environments: for example, a company that mandates the use of older browsers like Internet Explorer. A notable example of this is banks and other financial institutions in South Korea. For a country that ranks 12th in internet adoption among its citizens (97.4% in 2025), it's a pretty surprising legacy issue they are still trying to tackle!
Modernisation: If a website is in a transition phase of being modernised, and it still needs to support older browsers for a limited period.
Non-critical enhancements: If a site has non-critical enhancements like animations or media features that are optional and don't impact access to the site's core content.
What should I use instead?
Progressive Enhancement is now the preferred approach for modern web development, offering a more robust, inclusive, and future-proof way to build websites and web applications. While Graceful Degradation was a useful technique for older browsers, it has now been superseded.
Browser Sniffing
This is the practice of detecting information about a user's browser, like its specific version number or the operating system it is running on. Once detected, a developer can use this information to "fork" their code, e.g. decide which bug workarounds should, or shouldn't, be applied, or even tailor the user experience for a specific version of a browser. Two very common uses of this technique in the past were redirecting users to the mobile version of a site (when mobile and desktop sites were built separately), or even blocking the usage of a site on "unsupported" browsers. An example of how you'd do this in JS is below:
if (navigator.userAgent.includes('Chrome')) {
// Apply Chrome-specific behaviour or simply block other browsers if you are feeling malicious
}
This code highlights a significant problem with browser sniffing, and demonstrates why it's considered an outdated technique. In the code, the whole functionality hinges on the fact that the browser's User-Agent string happens to include the string "Chrome". But what happens if Google one day decides to change this to lowercase "chrome", or change it completely? Well, the code depending on this detection will break!
Now, you could modify the above code to tackle the case issue like so:
if (navigator.userAgent.toLowerCase().includes('chrome')) {
// Apply Chrome-specific behaviour
}
But as you can see, this has only made the code more complex and fragile.
It's also worth mentioning that this code won't do what you expect it to either, as all Chromium-based browsers will return true. For example:
Google Chrome
Microsoft Edge (Chromium-based)
Opera (also Chromium-based)
Brave
Vivaldi
At the time of writing, each of the User-Agent strings for the above browsers contains Chrome/115.0.0.0 (as well as other information that I have removed for the example).
All contain "Chrome" in their User-Agent, so will all run the code.
What's worse is that Chrome on iOS will return false and not run the code. On iOS, all browsers, including what appears to be Chrome on the home screen, are forced to use WebKit (Safari's engine). Consequently, "Chrome" in this instance isn't truly Chrome, and the name isn't reflected in its User-Agent string.
Other issues
Fragility isn't the only issue seen when using this technique. It can also:
add a maintenance burden for developers, as this logic will need to be updated as browsers evolve.
create browser feature mismatch, two versions of the same browser don't always support the same features.
cause accessibility risks, leading to user exclusion. A user on a less-common browser or assistive technology could inadvertently receive a degraded experience, or even be blocked completely.
Can I still use it?
Realistically, no. You should aim to avoid browser sniffing entirely and, instead of asking which browser a user is using, ask: what can their browser do? Essentially, you want to detect the features that the user's browser supports. For example, to detect if a browser supports the Service Worker API, you can do this:
if ('serviceWorker' in navigator) {
// The users browser supports the 'serviceWorker' API, so do Service Worker stuff!
}
Browser Sniffing Summary
In summary, browser sniffing is a legacy technique that should be avoided on the modern web. In order to create a more resilient and inclusive web, you should use Feature Detection, Graceful Degradation, and Progressive Enhancement instead.
Modernizr
I was a big fan of Modernizr (with its very Web 2.0 name!). For readers who've not used or heard of it, Modernizr is an HTML5 and CSS3 feature-detection library. Rather than relying on browser user-agent strings, which can be unreliable and misleading, Modernizr actually tests whether the browser being used supports a whole host of features.
It was released in 2009 at version 1.0 and, since then, it has had 27 releases and 300 contributors. So how does it work, and how exactly do you use it? Here's an example of how it detects flexbox support in a user's browser.
// Adds a new test to the Modernizr object under the key 'flexbox'
Modernizr.addTest('flexbox', function () {
// Create a new HTML div element to test CSS properties on
var testElement = document.createElement('div');
// Attempt to set the display property to 'flex'
testElement.style.display = 'flex';
// Check if the browser retains the value 'flex' for the display property
// If supported, the style will remain 'flex'; otherwise, it may remain empty or be changed
return testElement.style.display === 'flex';
});
And here's how you would use that in your website. There are 2 methods:
CSS
Modernizr adds a class to the html element; in our case above, a supporting browser would end up with:
<html class="js flexbox">
(and a non-supporting browser would get no-flexbox instead).
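You can then branch your styling off those classes in plain CSS; a small hypothetical example:
.flexbox .gallery {
  display: flex;
}
.no-flexbox .gallery {
  /* simple non-flex fallback for older browsers */
  overflow: hidden;
}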
By default, the markup assumes that the browser doesn't support JS (the html element starts with class="no-js"). When the parser gets to the inline Modernizr script tag, the script executes, and, as this proves that JS is supported, it swaps the no-js class for a js class. That class can then be used in your CSS styling, just as you would any other Modernizr class, as demonstrated above.
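JS
The second method is in JS, where each detection result is exposed as a boolean property on the global Modernizr object; a small hypothetical example:
if (Modernizr.flexbox) {
  // Safe to rely on flexbox-based layout enhancements
} else {
  // Enable or load a simpler fallback layout instead
}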
10. Tests and Standards of Yesteryear
Acid2 and Acid3 Tests
The Acid2 and Acid3 tests were really clever ways to test a browser's compliance with the ever-evolving rendering standards of the time. They were both created by the Web Standards Project (WaSP), which was founded in 1998, when the web was a battleground between two main browsers:
Microsoft (Internet Explorer 4 at the time)
Netscape (Netscape Navigator 4.05 at the time)
The Web Standards Project aimed to promote web standards that made development simpler, more accessible, and future-proof, working closely with browser vendors and development toolmakers to achieve this. When the team posted their final blog post in March 2013, their mission was largely complete: they had succeeded in getting browser vendors to support the standards set by the World Wide Web Consortium (W3C). As of 2025, the W3C remains active in setting new standards to ensure the web continues to support communication, commerce, and knowledge-sharing for all, with a strong focus on accessibility, diversity, and inclusion.
Acid2 (2005)
The Acid2 test was created to test compliance with HTML 4.01, CSS 1 & 2, and PNG rendering standards (e.g. alpha transparency). It did this by focussing on the following key areas of browser rendering:
Box model
Absolute and relative positioning
Float behaviour
Table layout
PNG alpha transparency
Data URLs
This was achieved through a highly innovative browser test: creating a simple cartoon face, similar to a smiling emoji. How compliant the browser was determined how well the face was rendered. It's much simpler if I just show you!
Rendering reference
This is what the output of the test is supposed to look like across all browsers.
Internet Explorer 6
This "face," rendered with IE6, appears as if the individual suffered a severe accident!
Netscape 4.8
Netscape performed similarly to IE6 at the time.
Mozilla Deer Park Alpha 2 (later Firefox 3)
Finally a browser that actually rendered a face!
There were far too many variations and versions of browsers around at the time to list here, but if you are interested in how other browsers rendered the Acid2 test, check out this ancient blog post by author Mark "Tarquin" Wilton-Jones. Mark, thank you for safeguarding this significant and captivating piece of web standards history!
Acid3 (2008)
After the success of the Acid2 browser test in putting pressure on browser vendors to improve standards support in their browsers, it was decided by the WaSP team to create another browser test, this time focussing on a different set of browser technologies. These included:
DOM Level 2 and 3
ECMAScript (JS) behaviour
CSS 3 selectors
SVG rendering
Data URIs
Animation timing and rendering
WebFonts via @font-face
Rendering reference
The Acid3 test took a more traditional testing route and simply scored a browser taking the test between 0 (no support) and 100 (perfect support).
In my opinion, the Acid3 test, while practical and easy to interpret, lacked the entertainment value of the Acid2 facial disfigurement test!
The release of the more challenging Acid3 test coincided with a surge in browser competition, particularly among Firefox, Safari, Opera, and Google Chrome (which was released later in 2008).
Scores
At the time of release (March 2008) the Acid3 scores for each major browser were as follows:
IE7: 12 / 100.
IE8 Beta 1: 18 / 100.
Firefox 2: 50 / 100.
Firefox 3 Beta 4: 71 / 100.
Opera 9.5 Beta: between 60–70 / 100.
Safari 3.1: between 75–90 / 100.
Google Chrome: Not yet released in March 2008.
Google Chrome 0.2 Beta: first release: 77 / 100.
Google Chrome 1.0: 100 / 100.
Google Chrome quickly improved its Acid3 score shortly after its initial release. This rapid improvement was mainly due to its use of the Safari WebKit engine, which already scored 75–90 out of 100 at the time.
Legacy
Neither test is maintained or relevant to the modern web, but they played a key role in pushing browser vendors toward better standards support. Today, browser compliance is measured using the Web Platform Tests (WPT), a much broader and actively maintained suite developed by the vendors themselves with input from WHATWG and W3C.
11. What Still Matters – Progressive Enhancement
Not legacy but often forgotten
Congratulations! You've made it! After discussing countless legacy approaches and techniques best left in the past, you've finally arrived at a truly timeless and incredibly important methodology. More than two decades after Steve Champeon and Nick Finck introduced it in their talk "Inclusive Web Design For the Future" at SXSW in 2003, the Progressive Enhancement (PE) methodology remains one of the most robust and future-ready methods for modern web development.
What is Progressive Enhancement?
There's a ubiquitous diagram that is always shown whenever PE is mentioned in a blog post. And this blog post will be no different in using it, as it actually explains the concept incredibly well.
Here we have the well known Progressive Enhancement pyramid.
HTML
The HTML is at the bottom of the pyramid as it gives the website a solid foundation on which to build. The HTML layer is the most resilient layer in the web development stack. Without the HTML there is no content, no links, no images, no website! This layer is by far the most important layer in the pyramid. Just to give you an idea of how resilient HTML is in web browsers, let's take a look at the very first website. Back on Tuesday, August 6, 1991, Sir Tim Berners-Lee, the inventor of the Web, published the very first website! Now, this statistic makes me feel ancient: the first website was published almost 34 years ago, at the time of writing! And what do you notice about the page? Well, most importantly, it still renders correctly, and the content can still be read perfectly well after all this time. If you take a peek at the page's source code, you will notice a few oddities, like:
The complete lack of a DOCTYPE, and no <html> tag.
No links to external stylesheets or JavaScript (they weren't invented yet!)
Anchors existed to link to other pages, but they had a strange NAME=[integer] attribute.
All elements were written in uppercase, e.g. <HEADER>, <TITLE>, <BODY>, <A>.
Lack of Semantic Markup. This was to come later once the WWW had matured.
To put it into perspective, this website has been around longer than nearly half of the entire global population; that's over 4 billion people younger than this single page on the internet! Which other digital format on the planet can boast that form of robustness and ease of accessibility? Just think about all the storage media formats that have come and gone in that time:
5.25-inch floppy disk (the disk that was actually floppy)
3.5-inch floppy disk (the 3D "save icon")
CD-ROM / CD-R / CD-RW
MiniDisc (data version)
CompactFlash (CF)
Zip disk
Jaz drive
DVD-ROM / DVD±R / DVD±RW
Blu-ray
HD DVD
The list goes on… The point is clear: if you need a long-term, reliable storage solution that just works, plain HTML on the web is hard to beat! FYI: Of course, these web assets ultimately reside on physical hardware in data centres, but that’s not the point. What matters is the resilience and accessibility the web platform offers, regardless of the underlying infrastructure.
CSS
The second layer in the pyramid is the CSS, or Cascading Style Sheets. When the World Wide Web (W3) was first invented back in the early 90s, CSS simply didn't exist. It wasn't until 1996 / mid-1997 that browsers started to support the CSS Level 1 Specification. The browsers at the time were Internet Explorer 3 and Netscape Navigator 4, both of which had partial (and mostly buggy) implementations. Up until this point the web had been completely "naked" in terms of design. Just pages full of text, images, and the odd animated GIF. Nothing at all like the modern web we see today.
CSS constitutes the second layer of the pyramid because, frankly, it is a "nice to have." Browsers are equipped with default stylesheets (as previously discussed in the CSS Resets section), which enable HTML content to display correctly and remain readable even in the absence of a website's custom styles. A company or brand must ensure their CSS is available for browsers to download so that their website renders correctly. Without it, many users would assume the site is broken, especially given how modern websites are expected to appear. But in the unlikely event the CSS fails to load, users will still receive the HTML content in a perfectly readable format. While it may appear unappealing, it remains fully functional across all current (and all future) web browsers and assistive technologies.
The beauty of Progressive Enhancement lies in establishing a foundational layer (HTML) and then progressively adding desired features. This method ensures that if any subsequent layer fails, the underlying content and functionality remain accessible to users.
A prime example of this in action: since April 9, 2006, CSS Naked Day has been observed, where for a 50-hour period website owners disable their site's CSS, allowing users to experience the semantic HTML without styling. It started as a push for web standards and semantic markup, and gave site owners an excuse to flaunt their sexy <body>. Gotta love a good HTML pun!
JavaScript
The final layer of the pyramid, and the final piece of the Web stack puzzle, is JS. This is the interaction layer that is added to a site last, after the foundation (HTML) and design (CSS) have been added. It's difficult to believe that just three technologies, all of which have been discussed in this section, form the entirety of the web. There is truly nothing more to it than these three foundational components. Ultimately, the output of both frontend and backend development invariably consists of standard HTML, CSS, and JS. Although a multitude of tools and languages are available for web developers to use, with endless paths to choose from, they eventually all lead to identical HTML, CSS, and JS as their final output. It all comes down to using the right tool and technology for the job!
JS is deliberately placed as the final enhancement layer in the pyramid. This is not incidental. JS, while powerful, is the least resilient layer in the web stack. Its execution depends on multiple fragile components:
the network
the parser
the runtime environment
the integrity of the code itself.
A single misplaced character, for example, an errant semicolon or an undefined variable, can render entire swathes of interaction inoperable. This fragility is not a hypothetical risk. It manifests regularly across production environments all over the web, particularly where sites are heavily reliant on client-side code for core user journeys.
Progressive Enhancement Summary
The modern web has increasingly drifted away from the principles of Progressive Enhancement, often placing JS as the foundation rather than the finishing touch. Single Page Applications are a prime example, where even basic navigation and content rendering require full JS execution. This inversion of the pyramid not only risks total inoperability in degraded environments but also introduces avoidable accessibility and performance issues.
From a resilience and user experience standpoint, over-reliance on JS creates brittleness. Unlike HTML and CSS, which both degrade gracefully, JS fails noisily and catastrophically. If a CSS file fails to load, a page might look plain but will still remain usable. If a JS bundle fails, the entirety of the website's features may be lost, with little to no fallback available.
The web’s reach includes users with:
unreliable networks
older devices
constrained data plans
assistive technologies
A heavy dependence on JS frequently excludes these users or significantly worsens their experience. Progressive Enhancement is not about supporting “no JavaScript” users as a niche edge case. It’s about ensuring a robust baseline that works for everyone, every time, demonstrating empathy for all users regardless of how they access the internet.
While JS is a vital tool in a web developer’s toolkit, it must be handled with care. Its position at the top of the Progressive Enhancement pyramid reflects its power, but also its fragility. It should be used responsibly, with the awareness that its failure often leads to a broken experience. True resilience comes from building upwards from stable foundations, not downwards from brittle interactions.
Importance in government services
Having worked at GDS for 6 years, I can't tell you how many times I had to defend the frontend community's stance on Progressive Enhancement! Thankfully, it's all written in black and white in the Service Manual for all to read. However, some departments and developers found ways to work around the methodology or opted for alternative approaches. This was most likely driven by two things:
A team had made significant progress with their JS dependent service and were expressing concerns about meeting the requirements for their future service assessment(s).
Some Frontend Developers in the department were enthusiastic about adopting the latest client-side frameworks, with less emphasis on assessing their maturity or suitability for the service, and its users.
For point 1, it always amazed me that teams were able to get so far into prototyping before it became an issue. As depressing as it may be to me, maybe the Service Manual and its guidance isn't as well known across government as I'd like to believe?
For point 2, I 100% get it, new technology on the web is fun to play with and also to have on your CV / Resume! The real question is whether this new technology is truly the right choice for a critical public service that every UK taxpayer depends on and has a fundamental right to access?
Technology Suitability Check (Progressive Enhancement Focus):
Does the technology's core functionality allow the service to work without JS enabled?
Can the service still function reliably on low-powered or older devices when using this technology?
Is the final output from the technology easily accessible and usable with assistive technologies, regardless of the device used?
Does the technology degrade gracefully in poor network conditions, such as on a 3G connection or in rural areas?
Are all critical user journeys still functional when JS fails to load, or is blocked?
If the answer to any of the questions above is "no", then the technology probably isn't a great fit for a public service that needs to be maintained for years (or even decades!).
The last point in the list above is incredibly important:
Are all critical user journeys still functional when JS fails to load, or is blocked?
I think this is where there was a lot of misunderstanding in terms of Progressive Enhancement in government. I continually strive to highlight that a JavaScript-only journey doesn't require a direct 1-to-1 correlation with its progressively enhanced foundation. As long as for each user journey a user can complete their task, quickly and easily, then the use of JavaScript is fine.
For example, consider a feature-rich JavaScript dashboard built to enter user data into a backend database. If a simple HTML form with a submit button can achieve the same outcome (which it often can), then the dashboard is acceptable only if the HTML form provides a reliable fallback for situations where JavaScript is unavailable, such as when it fails to load due to a limited data plan, a poor connection, or a low specification device.
During a service assessment, the key question is whether the dashboard meaningfully improves data entry and user interaction, or whether it exists purely for the sake of using new technology. Adopting new tools without clear justification is not acceptable for a government service. I consider such an approach to be driven by a desire to boost a developer's CV or LinkedIn profile. E.g. CDD (CV-Driven Development) or LDD (LinkedIn-Driven Development).
12. Lessons for the Future
What these legacy practices teach us today
If there is one takeaway from this post, it is that Frontend Development has never stood still. What counts as best practice today can feel outdated or even “legacy” tomorrow. That constant state of reinvention is what first drew me in back in the late 90s, and it is what continues to excite me now. Backend development always seemed a little too steady, too predictable. Frontend, on the other hand, has always lived on the edge of change.
But what is different today is the scale of change ahead of us. With the rise of Artificial Intelligence (AI), we may be standing on the edge of a shift as significant as the birth of the modern Web itself. Just as the early internet reshaped how we live and work, the combination of AI and the Web could redefine what it even means to build, design, and interact online. The coming years are not just going to be interesting, they could mark a turning point in the history of our craft.
Applying lessons to modern frontend work
Core principle
Choose optimal solutions for the enduring parts of the stack: HTML, CSS, and JavaScript are the stable contract. Prioritise the most straightforward and maintainable approach to delivering clean, accessible HTML, efficient CSS, and lightweight, well scoped JavaScript. The further you move away from those three, the harder accessibility, maintenance, and performance become. Let the browser and the web platform do the heavy lifting.
Practical rules to work by
1. Start from the Web Platform
Prefer native elements and platform features before adding libraries. Use form controls, semantic HTML, CSS layout, media queries, inert, details and summary, dialog, fetch, URLPattern, IntersectionObserver, and Web Components where they fit.
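For example, a scroll-reveal effect that might otherwise pull in a library can lean on the native IntersectionObserver API. A minimal sketch (the data-animate attribute and is-visible class are purely illustrative names):
// Reveal elements as they scroll into view using the platform, no library required
const revealObserver = new IntersectionObserver((entries) => {
for (const entry of entries) {
if (entry.isIntersecting) {
entry.target.classList.add('is-visible');
revealObserver.unobserve(entry.target);
}
}
});
document.querySelectorAll('[data-animate]').forEach((el) => revealObserver.observe(el));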
2. Progressive enhancement as a default
Deliver meaningful HTML first, enhance with CSS, then layer JavaScript for interactivity. Critical journeys should still work when scripts fail or load slowly.
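As a rough sketch of what this looks like in practice (the data-enhance attribute is just an illustrative hook), a standard HTML form that posts normally can be enhanced with fetch, falling back to a full page submit if anything goes wrong:
// Baseline: a normal HTML form POST works with no JavaScript at all
const form = document.querySelector('form[data-enhance]');
if (form && 'fetch' in window) {
form.addEventListener('submit', async (event) => {
event.preventDefault();
try {
const response = await fetch(form.action, { method: 'POST', body: new FormData(form) });
if (!response.ok) throw new Error('Submission failed');
// Success: update the page in place here
} catch (err) {
// Anything went wrong: fall back to the non-enhanced journey
form.submit();
}
});
}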
3. Ship less code
Adopt a dependency diet. Each abstraction must earn its keep through measurable value. Small utilities over frameworks by default. If a framework is chosen, configure it to output lean HTML, CSS, and JS.
4. Accessibility first, not last
Use semantic structure, proper labels, roles only when needed, visible focus, real buttons and links, reduced motion preferences, and test with keyboard and screen readers. Performance is 100% an accessibility feature.
5. Performance budgets and baselines
Set budgets for bundle size, interaction latency, and memory. Track Core Web Vitals from real users. Fail builds that exceed budgets. Optimise for first input delay, input responsiveness, and low CPU use on mid-range devices.
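As a sketch of the RUM side, assuming the open-source web-vitals package and a hypothetical /rum collection endpoint:
import { onCLS, onINP, onLCP } from 'web-vitals';
// Send each Core Web Vitals metric from real users to an analytics endpoint
function reportMetric(metric) {
navigator.sendBeacon('/rum', JSON.stringify({ name: metric.name, value: metric.value, id: metric.id }));
}
onCLS(reportMetric);
onINP(reportMetric);
onLCP(reportMetric);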
6. Keep the build simple
Prefer standard tooling that converges to web standards. Use the minimum build steps required. Long pipelines increase failure modes and slow iteration.
7. Design for resilience
Favour server rendering for first paint, hydrate only what is interactive, cache well, and handle partial failure gracefully. Make error states explicit.
8. Document the escape hatches
Where you choose abstractions, document how to reach the underlying HTML, CSS, and JS. Future teams should be able to debug without learning a bespoke stack.
9. Measure before you change
Add observability. Use Real User Monitoring (RUM) to guide work. Optimise the slowest real user paths, not synthetic microbenchmarks.
10. Plan for upgrades
Last, but not least, prefer tools with clear deprecation policies and migration paths. Avoid lock-in. Isolate framework code behind simple boundaries so you can replace parts without rewriting the product.
A quick decision test
Can this be done with native HTML or CSS alone?
If not, can a few lines of vanilla JS do it without a dependency?
If not, does a library reduce long term cost and keep output close to the platform?
If a framework is still justified, can it produce accessible HTML by default and degrade gracefully?
Closing thought
Technologies come and go, but the contract with the browser remains. Choose the simplest path that produces high quality HTML, CSS, and JavaScript. The closer you stay to the platform, the easier your product will be to maintain, to make accessible, and to run fast at scale.
Post Summary
I have to admit, my posts always seem to take on a life of their own and end up being longer than I plan. Concise writing might be a goal for another day. If you made it all the way here, congratulations, you’ve officially joined the “end of post club”! I really hope you found this journey as enjoyable to read as it was for me to write. Revisiting these ideas was a real trip down memory lane, and it reminded me of things I hadn’t thought about in years.
Your thoughts and feedback are always welcome. If I’ve overlooked a method or technique you think deserves a mention, let me know and I’ll happily credit you in the changelog. Thanks again for sticking with me to the very end, and if you’d like to share your thoughts, you can do so here.
Post changelog:
26/08/25: Initial post published.
27/08/25: Fixed number ordering of headers.
27/08/25: Added Table of Contents for easier navigation!
28/08/25: Added Silverlight and Java Applets to the post (Thanks to an AI hallucination, regarding browser plugins from the past)
17/09/25: Thanks to Sven Kannengiesser for drawing my attention to the use of ‘here’ anchors. I have now revised them to be contextual and accessible.
--- End: Hack to the Future - Frontend
--- Start: Configuring your Content-Security-Policy on your development environment in 11ty
Published on: 04 February 2025
https://nooshu.com/blog/2025/02/04/configuring-your-content-security-policy-on-your-development-environment-in-11ty/
Main Content:
This is just a short post to discuss how I improved my Content-Security-Policy (CSP) on my local development environment in 11ty. This is essentially a follow-on from the Securing your static website with HTTP response headers post I wrote last year.
Note: this post isn’t exclusive to 11ty—it’s just the static site generator I use for this site. The code can be easily adapted for any static site running on a Node.js-based server—or practically any local web server!
Why is this useful?
It's incredibly useful for me because my Cloudflare build currently takes 7 minutes and 30 seconds to complete! I've no idea why it takes this long, and as of yet I've had absolutely no response from Cloudflare on why this is! What I do know is that something happened on the 19th December 2024 between 12:33AM and 12:19PM because my Cloudflare Pages build time increased by a massive 460%! See the screenshots below for proof of that fact:
Before
Note the date, time, and git hash in the screenshot above: 12:33AM December 19, 2024, hash efd9011. Now let's compare it to the screenshot below:
After
So in this screenshot above, we have the same git hash: efd9011, but 12 hours have passed, it is now 12:19PM December 19, 2024.
Clearly my code hasn't changed because the git hash is identical and nothing else has been committed, but for some reason my build time has increased by 460%! It's my guess that something changed on Cloudflare's side in this 12-hour period to drastically increase the build time. Even though I posted about it on the Cloudflare Discord server and the Cloudflare Community Forums, I've had no response to tell me what was changed and if it is possible to fix it. So if anyone from Cloudflare is reading, I'm begging you, please, can you investigate what changed in those 12 hours and tell me if there's something I can change to drop my build time back to 1 minute 30 seconds!
So you're probably asking: "What on earth has any of this got to do with a Content-Security-Policy response header?" Well, a 1-minute 30-second rebuild time to update my site's CSP in the 11ty _headers file is a reasonable time to wait to test for issues. But waiting 7 minutes 30 seconds for every little change just isn't efficient or workable! Quite frankly, it's incredibly frustrating. And if you've ever worked on writing a CSP before, you will realise how convoluted and unreasonably lengthy they can become! There had to be a better way to do this, and there is! So let me take you through the Node.js / 11ty solution I used below to make my latest CSP changes.
The code
Let's have a look at the code:
// This is my site's Content Security Policy.
// Modify this CSP, don't just copy / paste it! It will break your site!
// You can also use `var` or `let` depending on your coding style; they all work
const CSP = `
base-uri 'self';
child-src 'self';
connect-src 'none';
default-src 'none';
img-src 'self' https://v1.indieweb-avatar.11ty.dev/;
font-src 'self';
form-action 'self' https://webmention.io https://submit-form.com/DmOc8anHq;
frame-ancestors 'self';
frame-src 'self' https://player.vimeo.com/ https://www.slideshare.net/ https://www.youtube.com/ https://giscus.app/ https://www.google.com/;
manifest-src 'self';
media-src 'self';
object-src 'none';
script-src 'self' https://ajax.cloudflare.com https://giscus.app/ https://www.google.com/ https://www.gstatic.com/;
style-src 'self' https://giscus.app/;
worker-src 'self';`.replaceAll('\n', ' ');
// This is the middleware for our 11ty development server
eleventyConfig.setServerOptions({
middleware: [(req, res, next) => {
if (req.url.endsWith('.html') || req.url === '/') {
res.setHeader('Content-Type', 'text/html; charset=UTF-8');
res.setHeader('Content-Security-Policy', CSP);
}
next();
}]
});
As you will see I have formatted the CSP using template literals to make the CSP more readable, and thus easier to modify and test on my local environment. You can find the public gist for the code above here.
A couple of notes on this code:
Notice how I removed the newlines from the final CSP value. Using const signals to other developers that it shouldn't change; since strings are immutable in JavaScript, the .replaceAll() call simply produces a new single-line string, which is what gets assigned as the valid CSP header value.
I've added detection for both "naked URLs" and those ending in /index.html. This is just for convenience really, as it saves having to add the index.html to URLs before the response header is sent.
Lastly, the above CSP in its current form will break 11ty's live reload functionality, because the connect-src in the CSP is set to 'none', which tells the browser to block all network requests initiated by the page.
The following network communication technologies are blocked by this directive:
Fetch API
XMLHttpRequest
WebSockets
EventSource (Server-Sent Events)
WebRTC connections
In order to fix point 3 there are 2 options:
1) Modify the connect-src directive
This is the obvious fix for the issue:
// add the following to your CSP variable
connect-src 'self' ws: wss: http://localhost:* ws://localhost:*;
By modifying the connect-src directive, we are allowing connections on localhost via HTTP and WebSockets (which 11ty's live reload script relies on to push updates to the browser without having to reload the whole page). And we are also allowing WebSocket connections that are both encrypted (wss:) and unencrypted (ws:).
2) Modify the default-src directive
This is a less obvious fix for the issue:
// add the following to your CSP variable
default-src 'self' ws: wss: http://localhost:* ws://localhost:*;
The reason this works is that default-src is the "fallback" directive. If your CSP doesn't include a connect-src directive at all, connections fall back to default-src, so the ws: wss: http://localhost:* ws://localhost:* values above would still be applied.
IMPORTANT: don't use default-src if your connect-src is set to none! The value of none will still block all connections and override the default-src settings.
The advantage of modifying the default-src directive is that it also acts as the fallback for any other fetch directives you haven't explicitly set (for example script-src or img-src, if they were missing). So it really depends on your CSP requirements for your local development environment! For most readers, I'd guess modifying the connect-src directive will be enough. But I've added both options for completeness.
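Once you've chosen an option, a quick way to check that the header is actually being sent by the dev server is to inspect the response headers, for example with curl (this assumes 11ty's default dev server port of 8080):
curl -I http://localhost:8080/index.html
You should see the Content-Security-Policy header in the output for any URL ending in .html, because that's what the middleware above is checking for.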
Summary
Believe it or not, I kept the blog post short this time! I hope this code snippet makes configuring your 11ty CSP a breeze! As always, thanks for reading and if you have any feedback or comments, please let me know!
Post changelog:
04/02/25: Initial post published.
09/02/25: Thanks to vrugtehagel from discord for his couple of tweaks to the const CSP code. The small changes can be seen in the gist revisions if you are interested.
--- End: Configuring your Content-Security-Policy on your development environment in 11ty
--- Start: Lets create a plaintext RSS feed with 11ty
Published on: 29 January 2025
https://nooshu.com/blog/2025/01/29/lets-create-a-plaintext-rss-feed-with-11ty/
Main Content:
Before writing this blog post I had 3 types of RSS feeds for my blog posts:
Atom
RSS
JSON
In this post, I'm going to add a 4th type: Plaintext.
Origin
A few months ago I was browsing my contacts and past colleagues looking for people to follow for my blogroll page. In doing so I just so happened to remember Terence Eden who I worked with at Government Digital Service (GDS). Terence (or edent, as he is known on most social networks), is a prolific blogger! I honestly don't know where he gets the time to blog so much and work, eat, sleep, blink, and breathe! While looking at the source code of his blog for his RSS feed, I just so happened to notice he had a plaintext version of his RSS feed. At the time it was responding with a 404, so I let him know on Mastodon, and he fixed it in a matter of minutes. Once fixed, I could see how it was formatted, and I set it as a personal challenge to replicate this feature on this blog (once all other more important functionality had been added). Now you may be asking, why would you need a plaintext version? So, let's answer that question first!
Why add a plaintext RSS feed?
Other than a personal challenge, what are the reasons for adding a plaintext version of your RSS feed?
Greater syndication
Now I have no idea how people like to read my content (I'm just very glad they do!). So if there's any way I can make their lives easier by adding multiple formats of the same RSS feed, then I'm happy to do it. After all, I'm always looking for ways to make this blog more open and inclusive. Even if it's only for a handful of readers, then it's worth adding the functionality, as it really is "set it and forget it" once deployed!
Command-line feed readers
I had no idea until I started researching the topic that there are users who use their terminal window as an RSS feed reader. For this they use programs like:
newsboat: "An RSS/Atom feed reader for text terminals" — It looks to have a very active repository on GitHub with 217 forks and 3,100 stars!
Liferea: "Liferea (Linux Feed Reader), a newsreader for GTK/GNOME" — Again an active GitHub repository with 129 forks and 830+ stars. I also noticed the project is over 20 years old (looking at some of the commits)!
RSS Guard: "Feed reader (podcast player and also Gemini protocol client) which supports RSS/ATOM/JSON and many web-based feed services." — another RSS client with an active GitHub repository with 130 forks and 1,800 stars. It's not surprising really considering the number of operating systems it supports!
So if you were in any doubt that plaintext RSS feed users exist, I think the numbers above prove they do! (Before you get angry, I do realise that not all users of these three programs will be plaintext feed users!)
Privacy and Security Concerns
You only have to look at the Web Almanac 2024's chapters on Privacy and Security to realise how important these two topics are on the modern web. So it's completely understandable that RSS users are concerned about this when they subscribe to RSS feeds. Thankfully, this is where plaintext RSS feeds excel. By using plaintext RSS feeds, users can avoid loading external content such as images or scripts, thus reducing the risk of tracking and also enhancing their privacy.
Accessibility
I've seen accessibility brought up as a plus for plaintext RSS feeds, but honestly, I’m a little sceptical. Sure, a screen reader can read the content easily enough, but when it comes to navigation and overall user experience for someone with accessibility needs, navigating a plaintext feed seems like it’d be a step backwards. Wouldn’t something like HTML or XML, with its semantic markup and built-in navigation, be a way better option? That said, I’m totally open to being wrong here—if I’ve got this all incorrect, feel free to let me know.
Data Efficiency
This one doesn't need much discussion, really. You can't get much more minimal in terms of bandwidth usage than a plaintext RSS feed. No images, no JavaScript, no markup, just pure content. It certainly makes for an excellent source of content for users with limited internet access or those operating in low-bandwidth environments.
Customisation and Integration
Tools like RSS Fulltext Proxy and Five Filters allow for text extraction from any website to convert partial feeds into full-text. This allows for integration into any feed reader, without plugins, or additional configuration. These are exceptional options if you want to integrate a particular feed into one of your workflows.
Reliability and Readability
You literally can't get a more robust format than plaintext. It is universally supported and small. And in terms of the resulting readability, there's no cookie banners, adverts, or newsletter signup forms that pop up halfway through an article. Thankfully, none of these annoyances are possible in plaintext, so you can consume all the content without interruption. It reminds me of how the World Wide Web used to be before it was commercialised beyond recognition! Take a read of the world's first website, and you get an idea of what Sir Tim Berners-Lee had in mind when he invented it in 1989 at CERN.
The web was originally conceived and developed to meet the demand for automated information-sharing between scientists in universities and institutes around the world.
The WorldWideWeb (W3) is a wide-area hypermedia information retrieval initiative aiming to give universal access to a large universe of documents.
Anyway, enough of the "why's" let's crack on with the 11ty implementation!
The code
eleventy.config.js
First, let's modify the 11ty config file:
Here we are telling 11ty to process .txt files as template files.
eleventyConfig.addTemplateFormats("txt");
Next we need to configure 11ty and tell it how to handle .txt files with a custom handler.
eleventyConfig.addExtension("txt", {
outputFileExtension: "txt",
compile: async function (inputContent) {
return async (data) => inputContent;
},
});
The code is fairly self-explanatory: outputFileExtension is obvious, the resulting output will have a .txt extension. The compile function is a simple pass-through: it takes the txt input and returns the original content unchanged.
feed.txt.njk
Next up is the template for the plaintext feed output. I'm using Nunjucks as that's the main templating language I use across this blog, but I'm sure you'd be able to use any other templating language 11ty supports too. This file sits in my /content/feed/ directory, so 11ty will process it as an njk file in the output.
---
permalink: feed/feed.txt
eleventyComputed:
layout: null
---
# {{ metadata.title }} - {{ metadata.author.name }} - {{ metadata.description }}
## {{ metadata.fulldescription }}
URL: {{ metadata.url }}
{% for post in collections.posts | reverse -%}
{% if loop.index0 < 10 -%}
--- Start: {{ post.data.title | safe }}
Published on: {{ post.date | readableDate }}
{{ metadata.url }}{{ post.url }}
Main Content:
{{ post.templateContent | striptags(true) | decodeHtmlEntities | safe }}
{% if not loop.last %}
--- End: {{ post.data.title | safe }}
{% endif -%}
{% endif -%}
{% endfor -%}
A Gist for this template is here, if you find that easier to read.
There's a fair amount going on in this template file, but it is essentially:
Setting where I want the feed to sit in my final site output (permalink)
Overriding the default layout dynamically during the build process (layout: null)
Populating the top of the feed with my basic blog metadata.
Looping through all my blog posts in reverse (newest first).
Limiting the number of posts in the feed to 10.
Outputting the parts of each post I want in the feed using Nunjucks and cleaning up the output using various filters, which I will go through next.
Code Notes:
You may be looking at this template and wondering what is going on with all the space between the logic. Well, since this template is outputting plaintext, this is all intentional spacing to make the final output more readable.
I'm using loop.index0, which is the current iteration of the loop but 0 indexed, because it was the only way that seemed to work for me to limit the number of posts. I think I must have a weird collections.posts setup somewhere because regardless of what I tried, nothing worked. Likewise, I believe the native slice(0, 10) should do the same thing, but for me, the loop stopped working altogether. I tried debugging it for a couple of hours and eventually stuck with the loop.index0 setup because it actually worked. However, if anyone has any ideas on why the native slice() function on a collections.posts doesn't work, please let me know! I'd love to get to the bottom of what the issue is! (One possible workaround is sketched after these notes.)
You may notice my use of the -%} in the template, this is me telling Nunjucks to trim any whitespace (spaces, tabs, newlines) directly following the tag. This was required because it was adding numerous new lines in the resulting output. It's when I discover little intricacies in Nunjucks like this, that I realise what a versatile templating language it actually is!
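On the slice() note above, one possible workaround (a sketch, assuming your posts are tagged "posts") is to expose a pre-limited collection from the 11ty config and loop over that instead:
// Newest 10 posts, newest first, with no loop.index0 logic needed in the template
eleventyConfig.addCollection("latestPosts", (collectionApi) => {
return collectionApi.getFilteredByTag("posts").slice(-10).reverse();
});
The template loop would then become {% for post in collections.latestPosts %}, with no index check needed.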
filters
Most of the filters used in the above template are pretty standard Nunjucks filters:
safe
reverse
striptags(true)
I only added one custom 11ty filter for this functionality.
decodeHtmlEntities
For some reason, I had some very odd character encoding issues, and the only way I could resolve the issue was to create a custom 11ty filter to search and replace them throughout the content. I looked through the default Nunjucks filters for a solution, but filters like escape and forceescape didn't work. I could have probably used Nunjucks' native replace filter, but that would have cluttered up the template, so I decided to move it to an 11ty filter instead:
// clean up the HTML entities in the RSS feed text
eleventyConfig.addFilter("decodeHtmlEntities", function(text) {
return text.replace(/&([^;]+);/g, function(match, entity) {
const entities = {
'amp': '&',
'apos': "'",
'lt': '<',
'gt': '>',
'quot': '"',
'nbsp': ' '
};
// Handle numeric entities
if (entity[0] === '#') {
const code = entity[1] === 'x'
? parseInt(entity.slice(2), 16)
: parseInt(entity.slice(1), 10);
return String.fromCharCode(code);
}
return entities[entity] || match;
});
});
It's basically just a fancy way to search and replace for multiple encoding issues it finds in the plaintext output.
Just to be clear, I wrote a really clunky version that did the same thing as this first. Basically, a chain of text.replaceAll('&amp;', '&').replaceAll('&apos;', "'")... and so on, I think you get the idea.
It worked fine, but then I asked ChatGPT to optimise it, and this is the result it came up with! It's certainly not as readable, but it still does the job, and it also handles numeric entities too! Off the back of this example, I think AI used correctly can be a fantastic teaching tool, especially when it comes to random little filter functions like this.
Adding the feed to your <head>
Once you have the feed up and running, it's time to make it visible to your users by adding it to your pages' <head>. It's dead simple, just like your other feeds, only the type is set to text/plain.
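For reference, the link element looks something like this (the title text is just a placeholder; the href matches the permalink set in the template above):
<link rel="alternate" type="text/plain" title="Plaintext Feed" href="/feed/feed.txt">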
I also have links to all my feed formats in my footer so they are easy to find. So, finally, why not look at the result of the above code now by looking at my plaintext RSS feed.
Sample output
Here's an example of the output from the code above:
# Nooshu - Matt Hobbs - Frontend web developer, turned engineering manager.
## This is the website of Matt Hobbs, who is a Frontend Engineering Manager from Oxfordshire, UK.
URL: https://nooshu.com
--- Start: The Speed Trifecta: 11ty, Brotli 11, and CSS Fingerprinting
Published on: 23 January 2025
https://nooshu.com/blog/2025/01/23/the-speed-trifecta-11ty-brotli-11-and-css-fingerprinting/
Main Content:
So recently, I have written two 11ty related blog posts:
Using an 11ty Shortcode to craft a custom CSS pipeline
Cranking Brotli up to 11 with Cloudflare Pro and 11ty
...more blogpost content here...
Post changelog:
23/01/25: Initial post published.
--- End: The Speed Trifecta: 11ty, Brotli 11, and CSS Fingerprinting
Summary
Whew! Another blog post in the books, and another shiny new feature added—this time, a working plaintext RSS feed! (That’s one more thing off the to-do list—always a win!)
Thanks for stopping by! I hope you found this post interesting, maybe even useful? As always, if you’ve got thoughts, or feedback, please let me know!
Post changelog:
29/01/25: Initial post published.
--- End: Lets create a plaintext RSS feed with 11ty
--- Start: The Speed Trifecta: 11ty, Brotli 11, and CSS Fingerprinting
Published on: 23 January 2025
https://nooshu.com/blog/2025/01/23/the-speed-trifecta-11ty-brotli-11-and-css-fingerprinting/
Main Content:
So recently, I have written two 11ty related blog posts:
Using an 11ty Shortcode to craft a custom CSS pipeline
Cranking Brotli up to 11 with Cloudflare Pro and 11ty
If you haven't read them, no problem, they are always there if you're ever struggling to sleep!
TL;DR:
In the first post I look at how I've automated CSS fingerprinting on production (Cloudflare), resulting in the ability to use long-life Cache-Control directives like max-age=31536000 and the immutable browser hint.
In the second post, I look at how you can improve Brotli compression by manually compressing your assets to Brotli setting 11 (max) then serving them via a Cloudflare Pro plan. This gives a significant improvement in file size over the Brotli compression setting of 4 that Cloudflare uses for dynamic (on the fly) compression.
While working on both posts, it dawned on me that blending the approaches they cover would result in the ideal CSS configuration for my little blog. In this post, I'll show you exactly how I've done it. Let's get started!
Original code
So I'm going to be using the code I wrote in the CSS fingerprinting blog post I wrote recently. You can see it below, or in this Gist.
import dotenv from "dotenv";
import CleanCSS from 'clean-css';
import fs from 'fs';
import crypto from 'crypto';
import path from 'path';
dotenv.config();
// create a single instance of the CleanCSS function
// to be used in file loops. Add additional optimisation settings in here.
const cleanCSS = new CleanCSS({
level: {
2: {
removeDuplicateRules: true // turns on removing duplicate rules
}
}
});
export function manipulateCSS(eleventyConfig) {
eleventyConfig.addShortcode("customCSS", async function(cssPath) {
// output the file with no fingerprinting if on the dev environment
// (allows auto-reload when the CSS is modified)
if (process.env.ELEVENTY_ENV === 'development') {
return `<link rel="stylesheet" href="${cssPath}">`;
}
// Using path.join for better cross-platform compatibility
const inputFile = path.join('./public', cssPath);
const outputDirectory = path.join('./_site', 'css');
const cacheDirectory = path.join('./.cache', 'css');
try {
// Check if input file exists first
if (!fs.existsSync(inputFile)) {
console.error(`Input CSS file not found: ${inputFile}`);
return '';
}
// Ensure both cache and output directories exist
for (const dir of [cacheDirectory, outputDirectory]) {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
}
// Read the input CSS file
const inputCSS = await fs.promises.readFile(inputFile, 'utf8');
// Initialises a new hashing instance
const hash = crypto.createHash('sha256')
// Feed CSS data into the hash function
.update(inputCSS)
// Specify the hash should be returned as a hexadecimal string
.digest('hex')
// Only take the first 10 characters of the hash
.slice(0, 10);
// Generate our CSS Cache name
const cacheKey = `${hash}-${cssPath.replace(/[\/\\]/g, '-')}`;
// This is where the file will be written
const cachePath = path.join(cacheDirectory, cacheKey);
// store our manipulated CSS in this variable
let processedCSS;
// check we have a cache directory
if (fs.existsSync(cachePath)) {
// read the cached CSS file
processedCSS = await fs.promises.readFile(cachePath, 'utf8');
} else {
// Use the memoized cleanCSS instance to minify the input CSS
processedCSS = cleanCSS.minify(inputCSS).styles;
await fs.promises.writeFile(cachePath, processedCSS);
}
// Split the input file path into its components (directory, filename, extension)
const parsedPath = path.parse(inputFile);
// Use path.join for output paths
const finalFilename = path.join(outputDirectory, `${parsedPath.name}-${hash}${parsedPath.ext}`);
// Write the optimised CSS to the final output location with the hash in the filename
await fs.promises.writeFile(finalFilename, processedCSS);
// path manipulation for final URL
const hashedPath = finalFilename.replace(path.join('./_site'), '').replace(/\\/g, '/');
// return our final link element with optimised and fingerprinted CSS.
return `<link rel="stylesheet" href="${hashedPath}">`;
} catch (err) {
console.error("Error processing CSS:", err);
return "";
}
});
}
Import zlib
The first thing we shall do is pull in the zlib module, as this is what we are going to use to compress the CSS file all the way up to 11 on production. There's nothing to install from npm here, as zlib is built into Node.js.
Next, we import it into our ESM code above:
import zlib from 'zlib';
Don't worry, I'm not going to go this slow all the way through the code. Feel free to skip straight to the final code if you so wish!
Setting the default compression level
This is a completely optional step; it really depends on how and where you store all your build variables. But I'm going to be storing the Brotli compression level I want to use on production both in the JS (as a default to fall back on if the environment variable doesn't exist), and also as an environment variable on production. In the JavaScript below, it's set at 6 to show it can be any value between 0-11 (3 or 4 is what most CDNs use for dynamic compression). Cloudflare's compression is set to 4:
// Default Brotli compression level if not set in the environment
const DEFAULT_BROTLI_COMPRESSION_LEVEL = 6;
And also in my .env file in development:
BROTLI_COMPRESSION_LEVEL=11
Now we set our compression level as a constant in our JavaScript for use later in the code:
const brotliCompressionLevel = parseInt(process.env.BROTLI_COMPRESSION_LEVEL || DEFAULT_BROTLI_COMPRESSION_LEVEL, 10);
Since our environment variable is currently set, compression will be set to 11, but if that variable isn't set, it will fall back to 6, the default value set above.
Cloudflare and Brotli
Before we go through the Brotli compression code, it's a good time to mention that Cloudflare does something automatically that is very useful for compressed files. Cloudflare Pages will automatically serve Brotli-compressed files (.br) if available, with no additional configuration required on the Cloudflare side. See the Content Compression Documentation on Cloudflare for more information.
This means we don't need to rename the compressed CSS file from index-hash.css.br back to index-hash.css for a user's browser to recognise it as a CSS file. Handy huh! Don't worry if you don't use Cloudflare Pages, I also have a version written that renames the CSS file back to index-hash-compressed.css for non-Cloudflare Pages readers.
Important: While testing the documented method above, I could not find a way to confirm that my static compressed version was being served automatically. So in the code below I have removed this assumption and explicitly added my statically compressed version to the link tag. Please do let me know if I'm interpreting the documentation for this functionality wrong, as I'd love to see it in action, and also how you verify the .br file is actually being served automatically.
Brotli Compression code
Now that I've covered that bit of (potential) Cloudflare Pages automation, let's have a look at the actual compression code:
// Brotli compression
// The output filename for the compressed file
const brotliFilename = `${finalFilename}.br`;
// Only compress to Brotli if the file doesn't exist
if (!fs.existsSync(brotliFilename)) {
// Set our zlib Brotli options here; the quality (compression level) goes in the params object
const brotliOptions = {
params: {
// Use the level specified in the environment (0-11)
[zlib.constants.BROTLI_PARAM_QUALITY]: brotliCompressionLevel
}
};
// zlib does its compression magic!
const brotliBuffer = zlib.brotliCompressSync(Buffer.from(processedCSS), brotliOptions);
// Write the compressed code to the output filename defined above
await fs.promises.writeFile(brotliFilename, brotliBuffer);
}
Generate the CSS tag
The last thing to do now is mostly the same as what I had in the original code set out above:
// path manipulation for final URL
const hashedPath = brotliFilename.replace(path.join('./_site'), '').replace(/\\/g, '/');
// return our final link element with optimised and fingerprinted CSS.
return `<link rel="stylesheet" href="${hashedPath}">`;
Only this time I'm modifying the path to the Brotli compressed filename rather than the original uncompressed CSS file! Simple!
On Development:
Once added to the 11ty config as per my previous post, my development CSS looks like this:
// uncompressed CSS with no fingerprinting, auto-reload still functioning
On Production:
The CSS looks like this:
// Brotli 11 compressed, minified and fingerprinted CSS
This is the version I have running on the site at the moment, why not view the page source and take a look.
Finishing touches
That's the bulk of the work done, but if you were to run this code on production at present you'd get a page full of naked HTML and no styling. This is because the Cloudflare Pages _headers file needs to be modified, as Cloudflare Pages is now serving my Brotli 11 compressed CSS file, not the uncompressed version we had before.
Simply add the following to your _headers file:
/css/*
Content-Encoding: br
Vary: Accept-Encoding
Content-Type: text/css
It's important to remember that this headers file is looking at the path from which the file is served on the production website! It isn't (and can't) use an extension wildcard, as Cloudflare Pages doesn't support it. It took me a while to figure this out, I must admit!
So for example, the following doesn't work:
*.css
Content-Encoding: br
Vary: Accept-Encoding
Content-Type: text/css
When I first migrated to Cloudflare Pages, I tried it and couldn't work out why it wasn't working. /css/* is basically saying: "Any file that resides in the css directory on production will be served with the following headers".
Lastly, remember to add the following to your Cloudflare Pages environment variables. This can be plaintext if you like, since the variable is only storing a single integer (not a secret API key):
BROTLI_COMPRESSION_LEVEL=11
Once added, you should be good to go!
Final code
As promised, below is the final code that I just described above:
import zlib from 'zlib';
import dotenv from "dotenv";
import CleanCSS from 'clean-css';
import fs from 'fs';
import crypto from 'crypto';
import path from 'path';
dotenv.config();
// An example of how you could add additional CleanCSS settings if required
const cleanCSS = new CleanCSS({
level: {
2: {
removeDuplicateRules: true
}
}
});
// Default Brotli compression level if not set in the environment
const DEFAULT_BROTLI_COMPRESSION_LEVEL = 6;
export function manipulateCSS(eleventyConfig) {
eleventyConfig.addShortcode("customCSS", async function(cssPath) {
if (process.env.ELEVENTY_ENV === 'development') {
return `<link rel="stylesheet" href="${cssPath}">`;
}
const inputFile = path.join('./public', cssPath);
const outputDirectory = path.join('./_site', 'css');
const cacheDirectory = path.join('./.cache', 'css');
// Get compression level from the environment or use the default
const brotliCompressionLevel = parseInt(process.env.BROTLI_COMPRESSION_LEVEL || DEFAULT_BROTLI_COMPRESSION_LEVEL, 10);
try {
if (!fs.existsSync(inputFile)) {
console.error(`Input CSS file not found: ${inputFile}`);
return '';
}
for (const dir of [cacheDirectory, outputDirectory]) {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
}
const inputCSS = await fs.promises.readFile(inputFile, 'utf8');
const hash = crypto.createHash('sha256').update(inputCSS).digest('hex').slice(0, 10);
const cacheKey = `${hash}-${cssPath.replace(/[\/\\]/g, '-')}`;
const cachePath = path.join(cacheDirectory, cacheKey);
let processedCSS;
if (fs.existsSync(cachePath)) {
processedCSS = await fs.promises.readFile(cachePath, 'utf8');
} else {
processedCSS = cleanCSS.minify(inputCSS).styles;
await fs.promises.writeFile(cachePath, processedCSS);
}
const parsedPath = path.parse(inputFile);
const finalFilename = path.join(outputDirectory, `${parsedPath.name}-${hash}${parsedPath.ext}`);
await fs.promises.writeFile(finalFilename, processedCSS);
// Brotli compression
// The output filename for the compressed file
const brotliFilename = `${finalFilename}.br`;
// Only compress to Brotli if the file doesn't exist
if (!fs.existsSync(brotliFilename)) {
// Set our zlib Brotli options here; the quality (compression level) goes in the params object
const brotliOptions = {
params: {
// Use the level specified in the environment (0-11)
[zlib.constants.BROTLI_PARAM_QUALITY]: brotliCompressionLevel
}
};
// zlib does its compression magic!
const brotliBuffer = zlib.brotliCompressSync(Buffer.from(processedCSS), brotliOptions);
// Write the compressed code to the output filename defined above
await fs.promises.writeFile(brotliFilename, brotliBuffer);
}
const hashedPath = brotliFilename.replace(path.join('./_site'), '').replace(/\\/g, '/');
return `<link rel="stylesheet" href="${hashedPath}">`;
} catch (err) {
console.error("Error processing CSS:", err);
return "";
}
});
}
And here's a Gist for the code as well!
With file renaming
Now I understand not everyone will want to serve their CSS files using the .br extension. So there's a version below that also renames the file for you back to .css, and adds the -compressed suffix to the filename as well. The rest of the code is identical to the version above:
import zlib from 'zlib';
import dotenv from "dotenv";
import CleanCSS from 'clean-css';
import fs from 'fs';
import crypto from 'crypto';
import path from 'path';
dotenv.config();
// An example of how you could add additional CleanCSS settings if required
const cleanCSS = new CleanCSS({
level: {
2: {
removeDuplicateRules: true
}
}
});
// Default Brotli compression level if not set in the environment
const DEFAULT_BROTLI_COMPRESSION_LEVEL = 6;
export function manipulateCSS(eleventyConfig) {
eleventyConfig.addShortcode("customCSS", async function(cssPath) {
if (process.env.ELEVENTY_ENV === 'development') {
return `<link rel="stylesheet" href="${cssPath}">`;
}
const inputFile = path.join('./public', cssPath);
const outputDirectory = path.join('./_site', 'css');
const cacheDirectory = path.join('./.cache', 'css');
// Get compression level from the environment or use the default
const brotliCompressionLevel = parseInt(process.env.BROTLI_COMPRESSION_LEVEL || DEFAULT_BROTLI_COMPRESSION_LEVEL, 10);
try {
if (!fs.existsSync(inputFile)) {
console.error(`Input CSS file not found: ${inputFile}`);
return '';
}
for (const dir of [cacheDirectory, outputDirectory]) {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
}
const inputCSS = await fs.promises.readFile(inputFile, 'utf8');
const hash = crypto.createHash('sha256').update(inputCSS).digest('hex').slice(0, 10);
const cacheKey = `${hash}-${cssPath.replace(/[\/\\]/g, '-')}`;
const cachePath = path.join(cacheDirectory, cacheKey);
let processedCSS;
if (fs.existsSync(cachePath)) {
processedCSS = await fs.promises.readFile(cachePath, 'utf8');
} else {
processedCSS = cleanCSS.minify(inputCSS).styles;
await fs.promises.writeFile(cachePath, processedCSS);
}
const parsedPath = path.parse(inputFile);
const finalFilename = path.join(outputDirectory, `${parsedPath.name}-${hash}${parsedPath.ext}`);
await fs.promises.writeFile(finalFilename, processedCSS);
// Brotli compression with renaming
const compressedFilename = path.join(outputDirectory, `${parsedPath.name}-${hash}-compressed${parsedPath.ext}`);
// Only compress to Brotli if the file doesn't exist
if (!fs.existsSync(compressedFilename)) {
// Set our zlib Brotli options here; the quality (compression level) goes in the params object
const brotliOptions = {
params: {
// Use the level specified in the environment (0-11)
[zlib.constants.BROTLI_PARAM_QUALITY]: brotliCompressionLevel
}
};
// zlib does its compression magic!
const brotliBuffer = zlib.brotliCompressSync(Buffer.from(processedCSS), brotliOptions);
// Write the compressed code to the output filename defined above
await fs.promises.writeFile(compressedFilename, brotliBuffer);
}
const hashedPath = compressedFilename.replace(path.join('./_site'), '').replace(/\\/g, '/');
return `<link rel="stylesheet" href="${hashedPath}">`;
} catch (err) {
console.error("Error processing CSS:", err);
return "";
}
});
}
The Gist for the code above is here.
The above code will do the following:
On Development:
It will simply output:
Exactly as it did in the previous version.
On Production:
This is the output that will be seen:
In this instance, the CSS file has been Brotli compressed to 11, but it has also been renamed with the -compressed suffix and the .br extension has been removed. Technically, you don't need the suffix, but I've added it to emphasise (and remind myself) that it isn't a "standard" CSS file in plain text.
And as I mentioned in the previous CSS Shortcode blog post. You'll now be able to use long cache life headers like:
/[css]/*
Cache-Control: public, max-age=31536000, immutable
Safe in the knowledge that updating your CSS will generate a brand-new filename, effectively nullifying the existing cache. Check out the previous post if you want more details on the above _headers file code for Cloudflare.
Summary
Fantastic! You made it to the end of yet another 11ty blog post of mine, congrats! I'm not sure if there's anything else I can do to optimise my CSS delivery to users at the moment?
Maybe preloading or the Speculation Rules API? Which Cloudflare already supports, and is currently enabled on my site. This functionality is called "Speed Brain" in the Cloudflare dashboard (under the "Speed" then "Content Optimization"). It's worth noting that this functionality is currently in Beta. But I don't think I need to do anything manually to use it, since Cloudflare rolled it out to all plans (including the free plan!), back in September 2024.
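For anyone who wants to add document rules manually rather than relying on Speed Brain, a minimal sketch looks something like this (the eagerness value and URL pattern are illustrative choices, not recommendations):
<script type="speculationrules">
{
"prefetch": [{ "where": { "href_matches": "/*" }, "eagerness": "moderate" }]
}
</script>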
But perhaps there are other optimisations I can make? Unknown unknowns, etc… As always, thanks for reading. I hope you found the post useful, and if you have any feedback or comments, let me know!
Post changelog:
23/01/25: Initial post published.
--- End: The Speed Trifecta: 11ty, Brotli 11, and CSS Fingerprinting
--- Start: Using an 11ty Shortcode to craft a custom CSS pipeline
Published on: 12 January 2025
https://nooshu.com/blog/2025/01/12/using-an-11ty-shortcode-to-craft-a-custom-css-pipeline/
Main Content:
I know I'm still a bit of an 11ty n00b, so I hope this isn't frowned upon in the community, but on rebuilding my blog using 11ty, I decided not to use the standard Bundle plugin that was added in v3.0.0. Instead, I decided to write a custom Shortcode to customise my CSS output. In this blog post, I will go through the code I have written in the hope that it will help others and, more importantly, gather feedback from the community, to see if any improvements can be made to the code. Please do let me know if you have any feedback.
My Requirements
First, what am I hoping to achieve with this Shortcode solution?
Maintain the auto-reload functionality that comes with 11ty when you modify your CSS locally (this is a super helpful feature I've been a big fan of since LiveReload was released back in February 2011, and later BrowserSync).
Filename fingerprinting to allow me to use long-life Cache-Control response headers without having to worry about cache becoming outdated. (e.g. max-age=31536000, immutable)
The ability to optimise my CSS output using the excellent clean-css Node plugin. It does more than just minify! Check out all the optimisations it can help you with, if you haven't seen them.
Minimal impact on the 11ty build process by leveraging build caching in some way.
Set it and forget it: once it's added, it requires minimal (or no) ongoing maintenance, apart from updating dependencies every so often!
The Code
I'll quit the waffling and just show you the code I currently have building my CSS for this site. This code sits in its own file called css-manipulation.js in a _helpers directory in my 11ty root (i.e. the same level as _data).
/**
* CSS Manipulation Module for Eleventy
*
* This module provides CSS processing, minification, and compression functionality
* for an Eleventy static site generator. It handles:
* - CSS minification using CleanCSS
* - Content-based file hashing for cache busting
* - Brotli compression for optimized delivery
* - Build-time caching to avoid redundant processing
* - Concurrent request handling to prevent duplicate work
*/
// Node.js built-in modules for file system operations, path manipulation, and hashing
import crypto from "crypto"; // Used to create SHA-256 hashes of CSS content for cache busting
import fs from "fs"; // File system operations (reading/writing files, checking existence)
import path from "path"; // Cross-platform path manipulation and joining
// Node.js zlib module for Brotli compression
// Brotli is a modern compression algorithm that provides better compression ratios than gzip
import { brotliCompressSync, constants } from "zlib";
// CleanCSS library for CSS minification and optimization
// This performs a wide range of optimizations to reduce file size
import CleanCSS from "clean-css";
// Environment configuration to check if we're in local development mode
import env from "../_data/env.js";
/**
* CleanCSS Configuration
*
* CleanCSS performs multi-level CSS optimization. This configuration enables
* comprehensive minification while maintaining CSS functionality.
*
* Level 1 optimizations (basic cleanups):
* - cleanupCharsets: Remove unnecessary @charset declarations
* - normalizeUrls: Normalize and optimize URL() references
* - optimizeBackground: Optimize background properties
* - optimizeBorderRadius: Shorten border-radius values
* - optimizeFilter: Optimize filter() functions
* - optimizeFont: Optimize font-family declarations
* - optimizeFontWeight: Convert font-weight to numeric values
* - optimizeOutline: Optimize outline properties
* - removeEmpty: Remove empty CSS rules
* - removeNegativePaddings: Remove invalid negative padding values
* - removeQuotes: Remove unnecessary quotes from URLs and identifiers
* - removeWhitespace: Remove unnecessary whitespace
* - replaceMultipleZeros: Shorten multiple zero values
* - replaceTimeUnits: Optimize time unit values (e.g., 0s → 0)
* - replaceZeroUnits: Remove units from zero values (e.g., 0px → 0)
* - roundingPrecision: Round numeric values to 2 decimal places
* - selectorsSortingMethod: Sort selectors in a standard order
* - specialComments: Remove all comments (set to "none")
* - tidyAtRules: Clean up @-rules
* - tidyBlockScopes: Clean up block scopes
* - tidySelectors: Clean up and optimize selectors
* - transform: Custom transformation function (empty, no custom transforms)
*
* Level 2 optimizations (advanced restructuring):
* - mergeAdjacentRules: Combine adjacent CSS rules with same selectors
* - mergeIntoShorthands: Convert long-form properties to shorthand
* - mergeMedia: Combine @media rules with same conditions
* - mergeNonAdjacentRules: Merge duplicate rules even if not adjacent
* - mergeSemantically: Disabled to avoid potentially breaking semantic merges
* - overrideProperties: Remove overridden properties
* - removeEmpty: Remove empty rules at level 2
* - reducePadding: Optimize padding values
* - reducePositions: Optimize position values
* - reduceTimingFunctions: Optimize animation timing functions
* - reduceTransforms: Optimize transform functions
* - restructureRules: Disabled to avoid aggressive restructuring that might break CSS
* - skipProperties: Empty array means optimize all properties
*
* Format options (output formatting):
* - All breaks set to false: Output as single line (no line breaks)
* - indentBy: 0 (no indentation for minimal file size)
* - indentWith: "space" (if indentation were enabled)
* - All spaces set to false: Remove unnecessary spaces
* - wrapAt: false (no line wrapping)
*
* Other options:
* - inline: ["none"] - Don't inline any @import rules
* - rebase: false - Don't rebase URLs (keep original paths)
* - returnPromise: false - Use synchronous API (we handle async ourselves)
*/
const cleanCSS = new CleanCSS({
level: {
1: {
cleanupCharsets: true,
normalizeUrls: true,
optimizeBackground: true,
optimizeBorderRadius: true,
optimizeFilter: true,
optimizeFont: true,
optimizeFontWeight: true,
optimizeOutline: true,
removeEmpty: true,
removeNegativePaddings: true,
removeQuotes: true,
removeWhitespace: true,
replaceMultipleZeros: true,
replaceTimeUnits: true,
replaceZeroUnits: true,
roundingPrecision: 2,
selectorsSortingMethod: "standard",
specialComments: "none",
tidyAtRules: true,
tidyBlockScopes: true,
tidySelectors: true,
transform: function () {},
},
2: {
mergeAdjacentRules: true,
mergeIntoShorthands: true,
mergeMedia: true,
mergeNonAdjacentRules: true,
mergeSemantically: false,
overrideProperties: true,
removeEmpty: true,
reducePadding: true,
reducePositions: true,
reduceTimingFunctions: true,
reduceTransforms: true,
restructureRules: false,
skipProperties: [],
},
},
format: {
breaks: {
afterAtRule: false,
afterBlockBegins: false,
afterBlockEnds: false,
afterComment: false,
afterProperty: false,
afterRuleBegins: false,
afterRuleEnds: false,
beforeBlockEnds: false,
betweenSelectors: false,
},
indentBy: 0,
indentWith: "space",
spaces: {
aroundSelectorRelation: false,
beforeBlockBegins: false,
beforeValue: false,
},
wrapAt: false,
},
inline: ["none"],
rebase: false,
returnPromise: false,
});
/**
* Default Brotli Compression Level
*
* Brotli compression levels range from 0-11:
* - 0: Fastest, least compression
* - 11: Slowest, best compression (maximum)
*
* Level 11 is used by default for production builds where file size matters
* more than compression time. This can be overridden via BROTLI_COMPRESSION_LEVEL
* environment variable.
*/
const DEFAULT_BROTLI_COMPRESSION_LEVEL = 11;
/**
* In-Memory Build Cache
*
* This Map caches CSS processing results during a single build run.
* It serves two purposes:
* 1. Prevents re-processing the same CSS file when multiple pages reference it
* 2. Handles concurrent requests by storing Promises during processing
*
* Structure:
* - Key: cssPath (the original CSS file path)
* - Value: Either a Promise (if processing is in progress) or an object with:
* - hash: Content hash of the CSS file
* - processedCSS: The minified CSS string
* - htmlOutput: The final HTML tag string
*
* This cache is cleared between builds (per-process, not persisted).
*/
const cssBuildCache = new Map();
/**
* Directory Creation Tracking
*
* Tracks which directory combinations have been created during this build.
* This prevents redundant fs.existsSync() and fs.mkdirSync() calls when
* multiple CSS files are processed.
*
* Structure: Set of strings like "cacheDir:outputDir"
*/
const directoriesCreated = new Set();
/**
* Main Function: manipulateCSS
*
* Registers a custom Eleventy shortcode that processes CSS files.
* The shortcode can be used in templates like:
*
* Processing Pipeline:
* 1. Check if in local development mode (skip processing)
* 2. Check in-memory cache (avoid duplicate processing)
* 3. Read and hash the CSS file
* 4. Check disk cache for minified CSS
* 5. Minify CSS if not cached
* 6. Write processed CSS with hash-based filename
* 7. Compress with Brotli
* 8. Return HTML tag with hashed filename for cache busting
*
* @param {object} eleventyConfig - The Eleventy configuration object
*/
export function manipulateCSS(eleventyConfig) {
/**
* Register the "customCSS" shortcode with Eleventy
* This makes it available in all templates as
*
* @param {string} cssPath - The relative path to the CSS file (from /public directory)
* @returns {Promise} - HTML string containing a tag pointing to the processed CSS
*/
eleventyConfig.addShortcode("customCSS", async function (cssPath) {
/**
* Stage 1: Local Development Short-Circuit
*
* In local development, skip all processing to speed up builds.
* Just return a simple tag pointing to the original file.
* This allows for faster iteration during development.
*/
if (env.isLocal) {
return `<link rel="stylesheet" href="${cssPath}">`;
}
/**
* Stage 2: In-Memory Cache Check
*
* Check if we've already processed this CSS file during this build.
* This handles the common case where multiple pages reference the same CSS file.
*
* The cache can contain:
* - A Promise: Processing is currently in progress, wait for it
* - An object: Processing is complete, return the cached HTML output
*/
if (cssBuildCache.has(cssPath)) {
const cached = cssBuildCache.get(cssPath);
// If it's a promise (in progress), wait for it
// This handles concurrent requests - multiple pages calling this shortcode
// simultaneously for the same CSS file will all wait for the same processing
if (cached instanceof Promise) {
return await cached;
}
// Otherwise return cached result immediately (already HTML string)
// This is the fast path for subsequent page builds
return cached.htmlOutput;
}
/**
* Stage 3: Initialize Processing
*
* Set up file paths and configuration for processing.
*/
// Construct full paths for input/output/cache directories
const inputFile = path.join("./public", cssPath); // Source CSS file location
const outputDirectory = path.join("./_site", "css"); // Where processed CSS goes (build output)
const cacheDirectory = path.join("./.cache", "css"); // Where minified CSS cache is stored
/**
* Stage 4: Get Brotli Compression Level
*
* Read compression level from environment variable or use default.
* This allows fine-tuning compression vs. speed trade-off per environment.
* Higher levels = better compression but slower processing.
*/
const brotliCompressionLevel = parseInt(
process.env.BROTLI_COMPRESSION_LEVEL || DEFAULT_BROTLI_COMPRESSION_LEVEL,
10, // Base 10 parsing
);
/**
* Stage 5: Create Processing Promise
*
* Wrap all processing in an async IIFE (Immediately Invoked Function Expression).
* This allows us to:
* 1. Handle concurrent requests by caching the Promise itself
* 2. Catch errors at the processing level
* 3. Return the same Promise to multiple concurrent callers
*/
const processingPromise = (async () => {
try {
/**
* Stage 5.1: Validate Input File Exists
*
* Check if the source CSS file exists before attempting to process it.
* If missing, log an error and return empty string (fails gracefully).
*/
if (!fs.existsSync(inputFile)) {
console.error(`Input CSS file not found: ${inputFile}`);
return "";
}
/**
* Stage 5.2: Ensure Directories Exist
*
* Create output and cache directories if they don't exist.
* Uses a Set to track which directory combinations have been created
* during this build to avoid redundant checks and operations.
*
* recursive: true ensures parent directories are created if needed.
*/
const dirKey = `${cacheDirectory}:${outputDirectory}`;
if (!directoriesCreated.has(dirKey)) {
for (const dir of [cacheDirectory, outputDirectory]) {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
}
directoriesCreated.add(dirKey);
}
/**
* Stage 5.3: Read and Hash CSS File
*
* Read the source CSS file and generate a content-based hash.
* The hash is used for:
* - Cache busting (filename changes when content changes)
* - Disk cache key (identify if we've seen this content before)
*
* SHA-256 is used for strong collision resistance.
* Only first 10 characters of hash are used (sufficient for cache busting).
*
*/
const inputCSS = await fs.promises.readFile(inputFile, "utf8");
const hash = crypto
.createHash("sha256") // Use SHA-256 algorithm
.update(inputCSS) // Hash the CSS content
.digest("hex") // Get hexadecimal representation
.slice(0, 10); // Take first 10 chars (sufficient for uniqueness)
/**
* Stage 5.4: Generate Cache Key and Path
*
* Create a unique cache key combining:
* - The content hash (identifies file contents)
* - The file path (normalized to avoid path separator issues)
*
* This allows different CSS files with same content to share cache,
* but also handles edge cases where paths matter.
*/
const cacheKey = `${hash}-${cssPath.replace(/[/\\]/g, "-")}`;
const cachePath = path.join(cacheDirectory, cacheKey);
/**
* Stage 5.5: Minify CSS (or Load from Disk Cache)
*
* Check if we've already minified this exact CSS content before.
* Disk cache persists between builds, so unchanged CSS files don't
* need re-minification even after restarting the build process.
*
* If cached:
* - Load the pre-minified CSS from disk
* - Skip the expensive minification step
*
* If not cached:
* - Run CleanCSS minification (expensive operation)
* - Save result to disk cache for next time
*/
let processedCSS;
if (fs.existsSync(cachePath)) {
// Cache hit - load pre-minified CSS
processedCSS = await fs.promises.readFile(cachePath, "utf8");
} else {
// Cache miss - minify and save
processedCSS = cleanCSS.minify(inputCSS).styles;
await fs.promises.writeFile(cachePath, processedCSS);
}
/**
* Stage 5.6: Write Processed CSS to Output Directory
*
* Write the minified CSS to the build output directory with a
* hash-based filename. The hash in the filename enables:
* - Cache busting (browser cache invalidates when content changes)
* - Long-term caching (files with same hash are unchanged)
*
* Filename format: original-name-hash.css
* Example: main-a1b2c3d4e5.css
*
* Only write if file doesn't exist to avoid redundant disk I/O
* (useful when multiple pages reference the same CSS).
*/
const parsedPath = path.parse(inputFile); // Parse original filename
const finalFilename = path.join(
outputDirectory,
`${parsedPath.name}-${hash}${parsedPath.ext}`, // name-hash.css
);
if (!fs.existsSync(finalFilename)) {
await fs.promises.writeFile(finalFilename, processedCSS);
}
/**
* Stage 5.7: Brotli Compression
*
* Compress the minified CSS using Brotli algorithm.
* Brotli provides better compression than gzip, especially for text.
*
* The .br extension indicates Brotli-compressed files.
* Web servers can serve these directly to browsers that support Brotli
* (most modern browsers do via Accept-Encoding header).
*
* Compression happens synchronously (brotliCompressSync) because:
* - It's fast enough for build-time processing
* - Simplifies error handling
* - Build processes typically prefer synchronous operations
*
* Only compress if file doesn't exist (avoid redundant compression).
*/
const brotliFilename = `${finalFilename}.br`; // Add .br extension
if (!fs.existsSync(brotliFilename)) {
const brotliOptions = {
// Brotli quality (0-11) is set via params; a top-level "level" key would be ignored by zlib
params: {
[constants.BROTLI_PARAM_QUALITY]: brotliCompressionLevel,
},
};
const brotliBuffer = brotliCompressSync(
Buffer.from(processedCSS), // Convert string to Buffer
brotliOptions,
);
await fs.promises.writeFile(brotliFilename, brotliBuffer);
}
/**
* Stage 5.8: Generate Final HTML Output
*
* Create the HTML tag that will be inserted into the page.
* The path is relative to the site root and uses forward slashes
* (normalized for web URLs, works on all platforms).
*
* Note: The HTML references the .br file, assuming the web server
* can serve Brotli-compressed files when the browser supports it.
* If your server doesn't handle .br files, you may need to modify
* this to point to the uncompressed file or configure your server.
*/
const hashedPath = brotliFilename
.replace(path.join("./_site"), "") // Remove build directory prefix
.replace(/\\/g, "/"); // Normalize path separators for web URLs
const result = `<link rel="stylesheet" href="${hashedPath}">`;
/**
* Stage 5.9: Cache Result in Memory
*
* Store the processing result in the in-memory cache for subsequent
* calls during this build. This includes:
* - hash: For reference/debugging
* - processedCSS: In case we need the CSS string later
* - htmlOutput: The final HTML tag (what we return)
*
* Replace the Promise in cache with the actual result object.
*/
cssBuildCache.set(cssPath, { hash, processedCSS, htmlOutput: result });
return result;
} catch {
/**
* Error Handling
*
* Catch any errors during processing and fail gracefully.
* Return empty string to prevent one CSS file error from breaking the entire build.
*/
return "";
}
})();
/**
* Stage 6: Handle Concurrent Requests
*
* Store the processing Promise in the cache BEFORE awaiting it.
* This ensures that if multiple pages call this shortcode simultaneously
* for the same CSS file, they all get the same Promise and wait for
* the same processing to complete (no duplicate work).
*
* Once processing completes, the Promise in cache is replaced with
* the result object (see Stage 5.9).
*/
cssBuildCache.set(cssPath, processingPromise);
return await processingPromise;
});
}
It's all a little clumsy to have the code only in the blog post, so I've created a gist on GitHub here for easier reading & copying and pasting (should you wish to use or modify it yourself).
Usage
So how would you use this code in 11ty? Let's go through each of the files you'd need to modify.
eleventy.config.js
Just your standard ESM 11ty config file with an import and the execution.
// Custom CSS manipulation
import { manipulateCSS } from './_helpers/css-manipulation.js';
export default async function(eleventyConfig) {
// All your other 11ty config above...
// execute the CSS manipulation
manipulateCSS(eleventyConfig);
// All your other 11ty config below...
}
.env
It sits in the 11ty root directory and is added to .gitignore.
ELEVENTY_ENV=development
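The css-manipulation.js module shown earlier imports ../_data/env.js and checks env.isLocal. I haven't included that data file here, but a minimal sketch of what such a file could look like is below; the dotenv usage and property names are my assumptions rather than my exact file.
// _data/env.js — a minimal sketch, not the exact file used on this site.
// Assumes the dotenv package is used to load the .env file shown above.
import dotenv from "dotenv";

dotenv.config();

const environment = process.env.ELEVENTY_ENV || "production";

export default {
  environment,
  // css-manipulation.js checks this flag to skip processing during local development
  isLocal: environment === "development",
};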
HTML template
I personally keep my HTML separate, sitting in its own Nunjucks partial, but it will work with any template setup really. The Shortcode below uses Nunjucks because that's what I use on this blog (spaces have been added inside the braces to stop 11ty from actually rendering the output here). The index.css file is the one and only CSS file I use on this blog, and it is passed into the Shortcode. It is either used unmodified (in development) or manipulated (in production).
{ % customCSS "/css/index.css" % }
Output
For my development environment the HTML output is:
There's no fingerprinting until it is built on production (in my case Cloudflare Pages), where the output looks like this:
As you can see, the index.css file that contains all the CSS for this site now has a unique 10-character hash suffix. If I were to change a single byte of data in the CSS file, it would generate an entirely new hash. For those wondering, a SHA-256 hash truncated to its first 10 hex characters still has 1,099,511,627,776 (16^10) unique combinations, so the likelihood of a collision is pretty slim!
Since the name of my CSS file is now guaranteed to have a unique name I can use the following response headers for my CSS:
_headers file
We can now start to use Cache-Control directives that maximise the time that the CSS is stored in a user's browser cache.
/[css]/*
Cache-Control: public, max-age=31536000, immutable
Here we are telling the user's browser that:
public: This response can be cached by any cache, it isn't restricted to only a private cache (like a user's browser cache). Examples of other caches are CDNs, and proxies.
max-age=31536000: 31536000 seconds is the number of seconds in a year. So, this directive tells caches that they can store and reuse this response for up to a year without the need for revalidation.
immutable: This is a hint to the browser or cache system that the content will never change for the lifetime of the cache. Even if the user reloads the page, the browser won't bother checking with the server for updates.
There's a lot involved in caching on the web, so if you want to know a lot more about this fascinating topic, check out Cache-Control for Civilians by Harry Roberts.
For those wondering about what happens to the old "orphaned" files in the cache, in this instance:
The old file version will stay in the cache for up to 1 year (until the cache duration expires).
But it's more than likely that the browser will clean up the cache before the year-long expiry time to free up storage space. This is especially likely on devices with limited resources, like older phones, which need to be more aggressive with their resource management.
In either case, this isn't anything a user has to worry about. Browsers will take care of this automatically, although a user can manually clear their browser cache should they wish to.
Summary
There we go, another 11ty blog post! We've reviewed my current CSS setup, which works well for now, but as mentioned earlier, feel free to suggest improvements or point out anything that strays from the "11ty way". After all, the path to true enlightenment starts with uncovering the unseen gaps in our (my) understanding. I read that last sentence in Master Yoda's voice! As always, thanks for taking the time to read the post. I hope you found it useful and informative, and if you have any comments or feedback, please do let me know.
Post changelog:
12/01/25: Initial post published.
02/11/25: Updated post and code after shortcode was found to run for every page generation in the build process. This was inefficient and is now limited to a single run per build.
--- End: Using an 11ty Shortcode to craft a custom CSS pipeline
--- Start: Cloudflares Mirage 2.0 broke my images on all mobile devices
Published on: 07 January 2025
https://nooshu.com/blog/2025/01/07/cloudflares-mirage-2-0-broke-my-images-on-all-mobile-devices/
Main Content:
I'm writing this (hopefully short) blog post to warn others about the situation I found myself in the other day. As I've mentioned in my previous posts, I recently migrated everything from Jekyll and GitHub Pages to 11ty and Cloudflare Pages. This was quite a significant migration that took a while to complete! When I started the migration, I was on Cloudflare's Free plan, as I had been for several years. This is important, and I will revisit this point later!
Once the initial migration was completed, I tested the new code, design, and functionality on every device I could get my hands on, just to make sure there weren't any obvious bugs I'd missed. In doing so, I actually found a cross-browser layout bug where the latest Firefox and latest Chrome differed. I was quite surprised by this, but it shows you should always test on different browsers! Especially ones with differing rendering engines! Anyway, I'm rambling off-topic again!
By this point, I thought everything was looking pretty stable. I had the basics in place, the site was performing well and was accessible (according to Lighthouse anyway!), and I'd locked it down in terms of security (another important point!).
New year, New Cloudflare Plan
Unknown to me, I'd set up an unfortunate chain of decisions that would lead to my images being broken on all mobile devices for exactly 3 weeks (I know, a bit dramatic, and a very 1st world problem!).
In December, I'd decided to pay annually for the Pro Cloudflare plan for 2025. Unfortunately for me, what they don't mention when this happens is that certain "Pro-only" functionality is enabled with this upgrade process. One of those features happens to be a performance optimisation called Mirage 2.0. I do love the image that they have used in that blog post, and its clever play on words.
Mirage 2.0
Mirage 2.0 is the latest version of Cloudflare's image optimisation library. Mirage 1.0 was released in June 2012.
Cloudflare's Mirage 2.0 is a web performance optimisation tool focused on enhancing image delivery for mobile users by reducing page load times and bandwidth usage. It uses techniques like lazy loading and adaptive image resizing to serve images tailored to users' devices and network conditions, ensuring faster load times even on slow connections. Mirage dynamically prioritises image loading, starting with low-resolution placeholders and upgrading to higher-quality versions as needed, optimising user experience while conserving data. Integrated into Cloudflare's CDN, Mirage 2.0 provides seamless optimisation without extra effort from website operators.
The emphasis in the above paragraph is mine. Just to be clear, I had no idea Mirage 2.0 even existed until the day all my images were randomly broken on mobile devices!
Event timeline
So what exactly happened to get to this point? Well, I:
Migrated from Jekyll and GitHub Pages to 11ty and Cloudflare Pages.
Completed the site migration, optimised web performance, accessibility, and security.
Tested my shiny new site in numerous browsers (including mobile browsers).
Upgraded to the Cloudflare Pro Plan.
Mistakenly (maybe?), didn't retest the site across browsers after the upgrade (since I assumed nothing would change regarding the platform and environment!).
Made sure site still rendered perfectly on Desktop (yay!), and discovered the Pro plan even allowed me to enable Brotli compression for all assets! Winning!
Was then informed by Matteo Contrini that my images were broken on mobile off the back of announcing my new blog post on Bluesky.
Panicked, confusion… argh!
It's point 8 where the self-doubt creeps in:
"Have the images ever worked on the site??"
"I'm 100% sure I tested on mobile!"
"Surely someone would have mentioned this to me if they'd never worked??"
"Why does it work on desktop but not on mobile??"
"Why do the images work when I request the desktop site on mobile!!"
"Is it the new eleventyImageTransformPlugin? If so, surely, it would have been spotted and fixed already!"
"How on earth am I going to debug and resolve this issue?"
That list of quotes above was basically my thought process for a couple of hours after the image problem was reported!
Web Inspector for iOS
Thankfully, Matteo jumped in and helped with the final quote on the list above when he pointed me in the direction of the fantastic Web Inspector App for iOS. Once installed and enabled as a Safari extension, it allows you to easily debug web pages on iOS Safari. It comes with tabs for the:
DOM
Elements
Console
Network
Resources
In fact, if you look at the code seen in the inspector in the image, you will see how I finally worked out what was going on!
Script Injection
On upgrading to Cloudflare Pro, unbeknown to me, the Mirage functionality is automatically enabled within the Cloudflare dashboard! So every image on every page has the Mirage 2.0 script injected into the DOM (but only on mobile! Remember my added emphasis in the Mirage 2.0 section earlier). Also remember how I'd locked down my blog's security using HTTP response headers, which just so happened to include a Content Security Policy (CSP)!
Script Injection + CSP equals pain
I think most readers may realise where I'm going with this now! On the one hand, it's fantastic to know that the CSP did its job and blocked the 3rd party script from ajax.cloudflare.com! I mean, that's what it's there for, after all! But what's not so great is the fact that this injection only happens on a mobile device, where there are no console errors to be seen (by default anyway). It's also worth mentioning that at this point it was still a complete guess that this was the issue, as even with the Web Inspector Extension installed, there was no sign of the usual CSP error (Content Security Policy: The page's settings blocked the loading of a resource at...) in the console like you get in a desktop browser! So I was just hoping that this was the issue!
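For context, a simplified sketch of the kind of CSP response header that produces this behaviour is below. It is not my exact policy, just an illustration in the same Cloudflare Pages _headers format shown elsewhere in these posts: with script-src limited to 'self', a script injected from ajax.cloudflare.com is refused.
/*
  Content-Security-Policy: default-src 'self'; script-src 'self'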
Disabling Mirage
So, the easiest way to resolve the issue is to disable Mirage 2.0 from the Cloudflare Dashboard, which is easier said than done at times! Cloudflare, I adore your platform and the features you offer, but your dashboard navigation is just nuts! Please consider a major overhaul of the whole UI! Furthermore, don't hesitate to put me down as a beta tester as well!
So to disable Mirage, first select the domain you want to change, then look for the "Speed" dropdown on the left, then click the "Optimization" link:
Next, look for the "Image Optimization" tab on the page:
Lastly, scroll to the bottom of this tab, and you will find the Mirage setting you can toggle on and off:
Unsurprisingly, the setting is now turned off on my dashboard! Also note the "Beta" tag in orange next to the title…
What have we learned?
So, what have we learned from my stressful little adventure?
CSPs actually work really well for blocking third-party JavaScript code injection!
Be sure to check your website often, especially if you happen to change your Cloudflare subscription plan.
Never fully trust anyone, not even your CDN or hosting provider! Even they may choose to meddle with your site's code!
The Web Inspector App for iOS is really useful for debugging mobile Safari issues.
Steps Cloudflare can take to prevent this
So, I'm pretty sure this isn't my fault, right?? I'd like to think I've just been unlucky with having a particular setup and a unique set of circumstances. Saying that, I'm certain that Cloudflare can introduce some changes to stop this from happening to others! My recommendations would be:
Ensure that Mirage 2.0 is Opt-in only and not Opt-out. Especially for a feature that is still in Beta!
When a user upgrades their plan, consider emailing them, or pointing out the new features that are available to them with the new plan (maybe within their dashboard?)
Consider some sort of automation or logging to check to see if Mirage (or other functionality) is causing, or could cause issues if the site has a CSP in place.
Don't assume that your paying customers know everything about every product on your platform.
Finally, it's also worth mentioning that there have been similar image issues caused by Mirage in the past, documented here and here. Although, admittedly, these are related to Vercel and Next.js, so a fairly different setup!
I hope that this blog post isn't coming across too harshly. As I've said before, I really love Cloudflare as a company, and the services they provide. It's just frustrating when they overstep the mark and make assumptions about the websites they are hosting!
One final point: I briefly chatted to Andy Davies from Speedcurve on Bluesky. He mentioned that they (Speedcurve) recommend all clients that use Cloudflare disable both Mirage AND Rocket Loader — Source
Summary
If you made it to the end of this post, congratulations—you deserve a medal! Meanwhile, I've learned I couldn't write a short blog post if my life depended on it!
Nevertheless, I hope you found it a useful read, and it helps someone who stumbles across this completely unrelated set of circumstances! As always, thanks for reading, and if you have any feedback or comments, please do let me know!
Post changelog:
07/01/25: Initial post published.
--- End: Cloudflares Mirage 2.0 broke my images on all mobile devices
--- Start: Cranking Brotli up to 11 with Cloudflare Pro and 11ty
Published on: 05 January 2025
https://nooshu.com/blog/2025/01/05/cranking-brotli-up-to-11-with-cloudflare-pro-and-11ty/
Main Content:
As with the past few blog posts I've written, this post is about my migration from GitHub Pages and Jekyll to Cloudflare Pages and 11ty. Once the migration was completed, I decided to have a look at how I could optimise the web performance of my blog. As with all web performance projects I start, I begin with the basics. For me, that's the 3 C's:
For assets being delivered to your user, make sure they are properly:
Cached
Concatenated
Compressed
Before
Upon looking at the DevTools Network Panel in Firefox (my main browser of choice 😍), I could see that only my HTML document was being Brotli compressed:
Before I get into the details, let's discuss what the difference between Gzip and Brotli is. If you already know all this, feel free to skip this section.
What is Brotli compression?
Brotli is a "new" compression algorithm that was released by Google in September 2015 (hence the quotes around the new). Brotli came with the following key features:
Lossless Compression — No data is lost during compression / decompression.
High Compression Ratios — It achieves higher compression ratios compared to other popular compression algorithms used on the web, like gzip, and deflate.
Optimal for Web Performance — Brotli is specifically optimised for HTTP compression; its predefined static dictionary is tuned for patterns often found in web content. Its high compression ratios and fast decompression make it a perfect choice for HTTP compression.
Static and Dynamic Compression — Brotli supports both dynamic (real-time) and static (pre-compressed) data compression, offering a balance between server CPU usage and optimal file size. Dynamic compression suits real-time needs, while static compression, using the highest setting (11), maximises asset compression during the build process. Pre-compressed assets can then be delivered with minimal CPU overhead.
It's these key features among many others that have made Brotli one of the preferred compression algorithms on the modern web. According to Can I Use, as of today, 97.79% of browsers on the web support Brotli Accept-Encoding/Content-Encoding.
I, personally, love that Brotli has compression settings ranging from 0 to 11, where 0 is the fastest compression speed but the lowest compression ratio, and 11 has the highest compression ratio but requires more computational resources and time. I'd love to know if the Google developers got the idea for a compression level of 11 from the very famous scene in the 1984 film "This Is Spinal Tap".
If you are interested in learning more about Brotli compression, here's some further reading for you:
RFC 7932.
Google's Brotli Repository.
Brotli: The New Era of Data Compression on Dev.to.
Better than Gzip Compression with Brotli.
Brotli Compression: A Fast Alternative to GZIP Compression from Kinsta.
Brotli and Cloudflare
Cloudflare has been a strong advocate for Brotli for many years, introducing support for it on September 21, 2016. Remarkably, this was just over a year after Google open-sourced Brotli on September 22, 2015, and only a few months after its formal standardisation in RFC 7932 in July 2016. Demonstrating impressive speed, Cloudflare rolled out Brotli support in just over two months, significantly ahead of one of their main competitors, Akamai, which implemented support on March 19, 2018—almost 18 months later.
At first, they were very explicit with their support, offering their users the option to toggle it on and off via the "Optimization" menu in their site dashboard:
Since then, Brotli has become so ubiquitous on the web that this toggle was removed from the dashboard in May 2024, as announced on their community forum here. It is now enabled by default for a range of common file types. Unfortunately, when moving from beta to live, Cloudflare decided that Brotli compression wouldn't be the default compression used on the free plan. Although, saying that, the free plan is still incredible for web performance! This sadly means that if you want all your website assets dynamically compressed (on the fly) by Cloudflare, then you will need to be on the Pro plan.
Another point to know about the Pro plan and Brotli compression is that it uses dynamic "on the fly" compression. It therefore uses a reduced compression setting. A happy medium between compression size and CPU usage happens to be around the 3 or 4 setting (out of 11). Cloudflare has chosen 4 to be the setting they use for dynamic compression. Depending on the content, Brotli's compression ratio at level 4 is approximately 2x to 4x, which is considered balanced compression. This level provides a good balance of speed and compression.
But since we are controlling the compression level on our static blog, let's crank it all the way up to 11! This is defined as maximum compression, and offers a compression ratio of approximately 4x to 7x smaller than the original file size. The use case for this level of compression is it's often used for static assets that don't change regularly, such as font files or infrequently updated files. In my case, once written, I don't plan to update the minimal amount of JavaScript in use on my blog. Hence, the JavaScript files are the perfect candidate for extreme compression. So let's get started.
Compression
Installation
The simplest way to compress your assets is by using a command-line tool in the terminal. To get started, you'll first need to install the Brotli Command-line Interface (CLI). On macOS, you can do this by either:
Using Homebrew
You may or may not have Homebrew already installed. Here I assume you haven't. To install it, you'd run the following commands in the terminal.
Install:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Ensure Homebrew is up-to-date:
brew update
Install Brotli:
brew install brotli
Verify Installation:
brotli --version
This will return the installed Brotli version. You are now ready to compress your assets using Brotli.
Using MacPorts
First, follow the MacPorts installation here.
Then update and install:
sudo port selfupdate
sudo port install brotli
Verify Installation:
brotli --version
These are just 2 ways of installing Brotli on OSX, others include:
Build from source (using Xcode).
Install via Python's pip.
Download Precompiled Binaries (via the Brotli releases page).
Use Docker.
Each method has its benefits depending on your environment and preferences. Once installed, the usage is the same for all methods listed above.
Usage
To compress a file, it's as simple as:
brotli [filename-here]
Or if you want to control the compression settings:
brotli --quality=[0-11] [filename-here]
This will output a Brotli version of the file: file.txt is left untouched, and a new file called file.txt.br is created alongside it. A shortcut for --quality=11 is --best:
So these commands would produce the same output:
brotli --quality=11 file.txt
brotli --best file.txt
For lots more information on the command-line options available, use:
brotli -h
Compression script — a single file
Now you can either do this manually, which is long and laborious, or we can semi-automate the task by writing a shell script (Thanks to ozcoder for the idea, and initial code!).
compress.sh
#!/bin/bash
# Check if the user provided a file name
if [ -z "$1" ]; then
echo "Usage: $0 "
exit 1
fi
# Get the file name from the first argument
fname="$1"
# Compress the file using Brotli at quality 11
brotli --quality=11 --output="${fname}.br" "${fname}"
# Notify the user
if [ $? -eq 0 ]; then
echo "File compressed successfully: ${fname}.br"
else
echo "Error occurred during compression."
exit 1
fi
Remember to make the script executable using chmod +x compress.sh. Then you can use it like so:
./compress.sh filename.js
A Brotli compressed file (filename.js.br) will now be located in the same directory.
Compression script — whole directory
If you find manually compressing each file using the compress.sh script above tedious, then we can modify the shell script to compress all JavaScript assets in a single directory (compress-directory.sh):
compress-directory.sh
#!/bin/bash
# Check if the user provided a directory
if [ -z "$1" ]; then
echo "Usage: $0 "
exit 1
fi
# Get the directory from the first argument
directory="$1"
# Check if the specified directory exists
if [ ! -d "$directory" ]; then
echo "Error: Directory '$directory' does not exist."
exit 1
fi
# Compress all .js files in the specified directory (modify for other file types)
for file in "$directory"/*.js; do
# Check if there are .js files in the directory
if [ ! -e "$file" ]; then
echo "No .js files found in '$directory'."
exit 0
fi
# Compress the file using Brotli
brotli --quality=11 --output="${file}.br" "$file"
# Notify the user of success or failure
if [ $? -eq 0 ]; then
echo "Compressed: $file -> ${file}.br"
else
echo "Error compressing: $file"
fi
done
echo "Compression complete."
Once either of these scripts has completed, you will have one or more Brotli encoded files, e.g. filename.js AND filename.js.br. You can now delete the original JavaScript file and remove the .br extension from the newly compressed file. Assuming you are using Git (which, of course, everyone does, right?), you can always retrieve the original JS file from the Git history should you need the uncompressed version again, or you can use the brotli --decompress filename.js.br command. Remember that Brotli is a lossless compression algorithm: there's no data loss during compression and decompression, so you can always decompress to get the original file back.
My scripts are broken on localhost
So now that your JavaScript has been Brotli compressed, it's no longer in a plaintext format. Brotli compressed files are actually in a binary format, so they aren't directly editable in your code editor. You'll see errors in the console because the local browser is expecting to receive the JavaScript in plaintext, so when it doesn't, it will let you know via the console with an error that says something like Uncaught SyntaxError: illegal character U+FFFD. Now, this is a scary looking error, but there's an easy fix. We need to tell our browser that these JavaScript files are no longer plaintext; they are Brotli compressed. To achieve that, we are going to modify our 11ty config file (e.g. eleventy.config.js) and add in some Node middleware that our local 11ty development server will use.
// The rest of your eleventy.config.js code above...
eleventyConfig.setServerOptions({
middleware: [(req, res, next) => {
if (req.url.endsWith('.js')) {
res.setHeader('Access-Control-Allow-Origin', '*');
res.setHeader('Content-Encoding', 'br');
res.setHeader('Content-Type', 'application/javascript');
res.setHeader('Vary', 'Accept-Encoding');
}
next();
}]
});
// The rest of your eleventy.config.js code below...
The code above will hopefully be fairly self-explanatory. It's basically saying: when you serve a JavaScript file from localhost, make sure to include the above response headers. The critical one is the 'Content-Encoding', 'br' response header, which tells the browser that the files are now Brotli compressed. It then recognises that the response body has been compressed using the Brotli algorithm. The browser will then decompress the Brotli-compressed data before interpreting or rendering the content, as it would do with a plaintext version.
Cloudflare Compression rules setup (Pro account)
Now that the development environment has been fixed, it's time to dive into the Cloudflare dashboard and make sure we have everything setup in there!
If you have a pro account, you will see a "Compression Rules" menu item under the "Rules" section of the navigation.
Once inside this menu, you will see an option to create a new compression rule:
You will see I've already created a "Brotli Compression" rule in the image, and it is enabled.
Within this section, you will see a long page with 2 main sections:
I added a name and just left the option on the default content types to be compressed.
Next, we need to set the compression settings we want to apply to these files being served to our users:
You will see in my settings I've been very explicit in the order of compression I would like: "Brotli, Zstandard, Gzip, Auto". Zstandard (Zstd) is generally considered better than Gzip in terms of performance and compression ratio for most use cases. Refer to this page here for more information about Zstandard.
After
Once all the settings have been updated and we have cleared the CDN cache, we can see the final result from the network panel below:
Looking at the far-right column from the network panel, we can see that all my assets are now Brotli compressed.
Although the panel doesn't show it, the assets are being compressed using a mixture of dynamic and static compression:
HTML document — Dynamic (level 4) — by Cloudflare.
CSS file — Dynamic (level 4) — by Cloudflare.
JavaScript files — Static (level 11) — by me.
Favicon — Dynamic (level 4) — by Cloudflare.
If I wanted to, I could statically compress the favicon.ico file too, since that's never going to change. But I'll leave that to you the reader to figure out how to do that, should you wish to!
Summary
It's a real shame that you need a Cloudflare Pro account to use Brotli compression for all static assets served by Cloudflare. But their free account does give you a tremendous number of features for web performance and security out of the box, including:
HTTP/2, and HTTP/3 + QUIC.
0-RTT.
Cloudflare Workers.
DNSSEC.
Excellent cacheability of static assets.
Email security and forwarding.
Free hosting on Cloudflare Pages.
Content Delivery Network (CDN) with global caching.
Page Rules for fine-grained cache control.
Improved security.
Image optimisation.
Improved web performance.
DDoS protection.
Basic Web Application Firewall (WAF) rules.
SSL/TLS encryption (Free Universal SSL).
Bot mitigation.
IP masking via reverse proxy.
Free DNS hosting with fast resolution.
plus many other features too…
I'm genuinely interested to know if this can all be achieved on the free plan?
For example, by compressing your assets with Brotli (11), then setting your Pages _headers file to serve the 'Content-Encoding', 'br' response header along with these compressed assets. If so, I probably wouldn't recommend it, as it may be against one of the Cloudflare Website and Online Services Terms of Use, and it sounds like a quick way to get your account deleted by Cloudflare! If anyone tries this, please do let me know! I really wish Cloudflare had a referral program; I'd probably get my Pro account paid for in no time!
And there we have it, another blog post off the back of my migration to 11ty. As always, thanks for reading, and I hope you found it useful and informative! If you have any feedback or comments, please do let me know!
Post changelog:
05/01/25: Initial post published.
05/01/25: Thanks to Barry Pollard for pointing out that I'd broken the shell scripts by adding a comment before the shebang! Doh!
--- End: Cranking Brotli up to 11 with Cloudflare Pro and 11ty
--- Start: Refactoring a Web Performance Snippet for Security and Best Practice
Published on: 02 January 2025
https://nooshu.com/blog/2025/01/02/refactoring-a-web-performance-snippet-for-security-and-best-practice/
Main Content:
In this blog post, I'm going to discuss a little Web Performance Snippet that I've seen a few WebPerf evangelists use on their websites. As you will have seen in a previous blog post, I've recently overhauled my blog, both in terms of design and also static hosting. In doing so, I've completely rewritten almost everything from my old site by either migrating it across and "cleaning it up", or simply realising the feature was no longer useful and discarding it, thus reducing technical debt in order to streamline maintenance in the future.
Security
In my last blog post, I touched on improving the security of this blog. And this is the reason for this post (in a very roundabout way!). For the past few years, since I started strengthening my Web Performance knowledge, I've had a little code snippet in my footer. It simply queries the Performance API and outputs an elementary Page Load time via the loadEventEnd property. Unfortunately, these are now deprecated features of the PerformanceTiming API. But as they still work, I've had no reason to change it. I'd love to say I came up with this idea and code myself, but I didn't. I actually copied this feature from Tim Kadlec several years ago, and it can still be seen in the footer of his website today (bottom-right corner):
Examining the code that outputs this paragraph to the page, I can see it looks like this:
window.onload = function() {
setTimeout(function() {
window.performance = window.performance || window.mozPerformance || window.msPerformance || window.webkitPerformance || {};
var t = performance.timing || {};
if (!t) {
return;
}
var start = t.navigationStart,
end = t.loadEventEnd,
loadTime = (end - start) / 1000;
var copy = document.querySelectorAll('.copy');
copy[0].innerHTML += "This page loaded in " + loadTime + " seconds.";
}, 0);
}
And this is the code I had running for many years on my old website. I liked having this code on my website as it gave me an extremely basic statistic on how well the site loaded on any given device I happened to use (e.g. an older mobile device, or when I'm using an unstable connection in a rural area of the country). It could also be used to verify there are no obvious performance problems. For example, if I see that the metric is anywhere over 500 ms (on my laptop) then that's a good indication that something isn't performing as it should be. Over 500 ms for such a simple static page on a 2020 MacBook Air (M1) would be a real worry!
Updating the code
Unfortunately, there are a few issues with this code that wouldn't work with my newly updated website. The biggest of all is the use of setTimeout(), which is a type of eval() and is a huge security risk. You only need to look at the MDN page for eval, and it comes with a big red warning at the top of the page:
They even have a whole section on the page as to why you shouldn't do this.
For my newly built blog, I planned to use a fairly strict Content Security Policy (CSP). As I didn't want to rely on the unsafe-inline CSP directive, and didn't want to go down the Nonce-based strict CSP or Hash-based strict CSP route due to the additional complexity they add to the site, and also because I fancied something to do over the Christmas Holiday period, I thought I'd just refactor the code and use something more modern, like the PerformanceObserver API.
In doing so, I'd remove the need for the setTimeout code:
setTimeout(function() {}, 0);
The above code uses something called "macro-task scheduling". This is where the browser adds this function to the end of the event loop queue. In doing so, it ensures:
The load event handler has finished.
The browser has had time to populate all the performance timing values.
The performance.timing.loadEventEnd property has a valid value.
Without this macro-task scheduling code, the loadEventEnd value won't have been populated by the time it is required, which would likely result in a Not-A-Number (NaN) value being displayed on the page (or, alternatively, a non-helpful value of 0).
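Here's a minimal illustration of that behaviour (for demonstration only, not code from either site):
window.addEventListener('load', function () {
  // Read synchronously inside the load handler: the load event hasn't finished yet,
  // so loadEventEnd is still 0 at this point
  console.log('immediate:', performance.timing.loadEventEnd); // 0
  setTimeout(function () {
    // Deferred to the end of the event loop queue: the value has now been populated
    console.log('deferred:', performance.timing.loadEventEnd); // a real timestamp
  }, 0);
});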
Refactored Code
In the updated code below I refactored it by:
using the Optional chaining operator for detecting window.performance and window.performance.getEntriesByType.
removing the setTimeout "macro-task scheduling" code for greater security (no more eval!).
using a modern API for optimal code execution (PerformanceObserver API).
wrapping the code in an Immediately Invoked Function Expression (IIFE) to ensure the browser's global scope isn't polluted with random functions and variables.
using Template literals (Template strings) for concatenating the final string that is injected into the DOM.
replacing the deprecated PerformanceTiming API with the modern PerformanceNavigationTiming API.
// webperf.js
(function() {
/* check to see if the browser is modern enough to support the
PerformanceNavigationTiming API */
// use "optional chaining operator" modern replacement for
// (if (!window.performance || !window.performance.getEntriesByType))
if (!window.performance?.getEntriesByType) {
console.warn('Performance API not supported');
return;
}
const observer = new PerformanceObserver((list) => {
const entries = list.getEntriesByType('navigation');
const pageLoadTime = entries[0].duration;
// Create the paragraph element dynamically if it doesn't exist
let loadTimeElement = document.getElementById('pl');
if (!loadTimeElement) {
loadTimeElement = document.createElement('p');
loadTimeElement.id = 'pl';
// Find the footer element and insert the paragraph as the first child
const footer = document.querySelector('footer');
if (footer) {
footer.insertBefore(loadTimeElement, footer.firstChild);
}
}
loadTimeElement.textContent = `This page loaded in: ${pageLoadTime} milliseconds.`;
});
// buffered: true - The observer will also receive entries that occurred before it started observing
observer.observe({
type: 'navigation',
buffered: true
});
})();
In refactoring Tim's code, I deliberately chose to exclude certain older browsers and their users from accessing this feature. The refactored code includes a check for the PerformanceNavigationTiming API, ensuring the web performance snippet is only added to the footer if the browser supports this feature. This prevents older browsers from displaying a partially broken footer. Interestingly, this approach aligns with a modern take on a methodology Tim already uses on his website. His website follows a progressive enhancement technique known as "cutting the mustard", a concept introduced by the BBC News web development team in April 2013!
Cutting the mustard is a technique that involves categorising browsers into two groups: older "feature browsers" with limited capabilities and modern "smart browsers" that support advanced features. By adopting a two-tiered approach to responsive web design, a core experience is delivered to all users, while additional enhancements are applied for more capable browsers. This ensures broad accessibility while providing a refined and feature-rich experience for modern devices like smartphones and larger screens. I actually used my own version of this back in November 2019, when I implemented Webmentions purely in client-side JavaScript in my blog post: "Implementing Webmentions on this blog". TL;DR: Gist here.
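For reference, the classic BBC-style test looked something like the sketch below (the enhanced.js bundle name is just a placeholder of mine):
// Simplified "cutting the mustard" feature test, in the style of the original BBC check
if ('querySelector' in document && 'localStorage' in window && 'addEventListener' in window) {
  // "Smart" browser: load the enhanced JavaScript experience
  var enhanced = document.createElement('script');
  enhanced.src = '/js/enhanced.js'; // placeholder bundle name
  document.head.appendChild(enhanced);
}
// Older "feature" browsers simply get the core HTML and CSS experience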
Thankfully, most modern browsers now auto-update every 6 weeks or so without users even knowing, so these older browsers are becoming less and less of an "issue".
For example, let's have a look at the refactored features I mentioned above:
PerformanceObserver API
This API allows developers to asynchronously observe and collect performance-related metrics, such as resource loading times, navigation events, and custom user timing marks. It listens for entries in the Performance Timeline, filtering them by type (e.g. resource, navigation, paint) and executes a callback function whenever new entries are available. This API is useful for real user monitoring (RUM) and analytics tooling, e.g. Google Analytics.
The PerformanceObserver API was introduced into most major browsers between 2016 and 2017, Microsoft Edge was the last to introduce the API in January 2020 when they migrated over to using the Chromium browser engine, rather than EdgeHTML.
According to Can I Use, 96.64% of users globally now use a browser that supports the PerformanceObserver API.
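As a quick illustration of the same API observing a different entry type (a sketch only, separate from the footer snippet), here's how you could log how long each resource on a page took to load:
// Sketch: log how long each resource (CSS, JS, images, etc.) took to load
const resourceObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name} took ${entry.duration.toFixed(1)} ms`);
  }
});
// buffered: true also delivers entries recorded before observation started
resourceObserver.observe({ type: 'resource', buffered: true });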
Template Literals
Template literals in JavaScript are string literals enclosed by backticks (`) that allow for embedded expressions using the ${expression} syntax. They support multi-line strings, string interpolation, and special constructs like tagged templates for advanced processing of literals.
Template Literals were introduced into the JavaScript Language in ECMAScript 6 (ES6) which was "released" in 2015. Amazingly, most major browsers managed to implement the new specifications in the same year (2015), including iOS Safari 9.2 AND Edge 13 (EdgeHTML).
According to Can I Use, 97.97% of users globally now use a browser that supports the Template Literals language feature.
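A tiny before-and-after, purely for illustration:
const loadTime = 123.4;
// Old-style string concatenation
const oldMessage = 'This page loaded in: ' + loadTime + ' milliseconds.';
// Template literal: backticks and ${} interpolation (multi-line strings also work)
const newMessage = `This page loaded in: ${loadTime} milliseconds.`;
console.log(oldMessage === newMessage); // true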
PerformanceTiming API
This API is part of the deprecated Navigation Timing API, which provides detailed timestamps for key events in the navigation and page load process, such as DNS lookup, server response, DOM processing, and resource loading. Accessible via performance.timing, it allows developers to measure metrics like page load time, Time to First Byte (TTFB), and DOM readiness. While useful for diagnosing performance bottlenecks, it has been largely replaced by the more precise and structured PerformanceNavigationTiming API, available through the performance.getEntriesByType('navigation') method.
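For comparison, here's a minimal sketch of the deprecated API in use (shown for illustration only; don't reach for this in new code):
// Wait for the load event, then defer slightly so loadEventEnd is populated
window.addEventListener('load', () => {
  setTimeout(() => {
    const t = performance.timing;
    const ttfb = t.responseStart - t.navigationStart;    // Time to First Byte
    const pageLoad = t.loadEventEnd - t.navigationStart; // full page load time
    console.log(`TTFB: ${ttfb} ms, page load: ${pageLoad} ms`);
  }, 0);
});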
The PerformanceTiming API is now deprecated, but it was first supported by all major browsers back in 2015. Incredibly, IE9 even supported it way back in 2011!
It's not recommended that the API be used today because at some point browser vendors may consider removing it. According to Can I Use, 97.38% of users globally now use a browser that supports the PerformanceTiming API.
PerformanceNavigationTiming API
This API is a modern, high-precision replacement for the deprecated PerformanceTiming API, offering detailed metrics on navigation and page load performance. Accessible via performance.getEntriesByType('navigation'), it provides timestamps for events like DNS lookup, server response, DOM processing, and resource loading, along with new attributes like transferSize and decodedBodySize for enhanced analysis. It simplifies performance monitoring with a single object and is ideal for modern web applications, including Single Page Apps (SPAs), making it the preferred choice for navigation performance insights.
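A minimal sketch of reading the same sort of metrics from the navigation entry (the attribute choices here are just examples):
// All timestamps are relative to the start of the navigation (0)
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  console.log(`TTFB: ${nav.responseStart.toFixed(0)} ms`);
  console.log(`DOM ready: ${nav.domContentLoadedEventEnd.toFixed(0)} ms`);
  console.log(`Transferred: ${nav.transferSize} bytes`);    // one of the newer size attributes
  console.log(`Decoded body: ${nav.decodedBodySize} bytes`);
}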
If you know you are still using the PerformanceTiming API, it is recommended that you update to the PerformanceNavigationTiming API. Although it's unlikely that support for the older API will be fully removed, the new API offers far more functionality and is much easier to use. You may as well take advantage of the features and functionality built into all modern browsers.
According to Can I Use, 96.06% of users globally now use a browser that supports the PerformanceNavigationTiming API. It was supported all the way back in Edge 12 (EdgeHTML), and was finally supported in iOS Safari in version 15.3 back in December 2021.
Data-Driven Decision-Making
So when it comes to browser support for modern features like this, how best should you approach the old-browser issue? In my opinion, you should treat it as a data collection exercise. If your data shows that browsers approaching a decade old (at the time of writing) are still visiting your site, you should absolutely continue to "cut the mustard" to support those users.
Although I believe there's a very valid counterargument too: by supporting such old and outdated browsers, you're arguably harming those users' experience of the web as a whole by enabling them to keep using an outdated browser. Also, can you imagine how vulnerable a user still running a browser like IE9 would be on the modern web? Final point: the web performance barrier to entry of modern JavaScript-driven websites would make the internet completely unusable on these browsers! With a median JavaScript payload of 650 KB(!) as of December 2024, even a modern desktop browser would struggle!
Summary
So there we go, another blog post related to updating nooshu.com to 11ty and the steps I took to shed the years of technical debt I'd accumulated and move to a more maintainable and forward-facing blog. Big thanks to Tim Kadlec for writing the original code snippet and inspiring me to add it to my blog! As always, I hope you enjoyed reading my random ramblings, and please do let me know if you have any comments or feedback.
Post changelog:
02/01/25: Initial post published.
--- End: Refactoring a Web Performance Snippet for Security and Best Practice
--- Start: Securing your static website with HTTP response headers
Published on: 28 December 2024
https://nooshu.com/blog/2024/12/28/securing-your-static-website-with-http-response-headers/
Main Content:
In this post, I'm going to go into how I secured my 11ty blog (this site) using Cloudflare Pages and HTTP response headers.
If you aren't bothered about all the information below, you can just jump straight to the code.
What is a response header?
An HTTP response header is a part of the response sent from a web server to a client (like your browser) when the client requests a resource, such as a webpage, image, or data.
The response header will contain additional information (metadata) about the request being made. Response headers come in three main categories:
Information about the response, for example the status code of the resource being requested, e.g. 200 for an OK response or 404 for Not Found.
Details about the resource, for example a resource's Content Type e.g. text/html, text/css, application/javascript†.
Instructions for the client, for example how long to store a resource in a browser's cache (caching rules).
†: I realise that technically Content-Type isn't a response header; it's a representation header, which provides metadata about the message body, such as its length, type, and encoding, helping the recipient understand and process the data correctly. But for simplicity's sake we'll consider it a response header in this post.
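If you want to see these three categories for yourself, you can inspect any response in your browser's dev tools (Network tab), or with a couple of lines of JavaScript in the console. A rough sketch, using this site's homepage purely as an example URL:
// Fetch the homepage and log one example from each category above
fetch('https://nooshu.com/').then((response) => {
  console.log(response.status);                       // 1. information about the response, e.g. 200
  console.log(response.headers.get('content-type'));  // 2. details about the resource, e.g. text/html
  console.log(response.headers.get('cache-control')); // 3. instructions for the client (caching rules)
});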
For the rest of the blog post, we are going to focus on number 3: "Instructions for the client". As with the security HTTP response headers, we will be instructing a user's browser as to what it can, and can't do with resources on a webpage.
Why? It's a static website!
Now I know what some of you will be thinking: since it's just static HTML hosted on a basic web server (with nothing dynamic), surely there's nothing to secure? I mean, it's not like a hacker can break into the Content Management System (CMS) and steal a username and password, because in most cases there isn't one! This is true, but it's important to remember that static websites can be used for anything. I'm sure plenty of developers use static websites to build transactional sites that accept sensitive user data, like an address, Social Security Number (US), National Insurance Number (UK) or credit card details. So really, the security you are implementing isn't to protect you, it's to protect your users.
Here is a list of reasons:
Preventing Cross-Site Scripting (XSS) Attacks
See the Content-Security-Policy (CSP) section for details.
Securing Sensitive User Data
See the Strict-Transport-Security (HSTS) section for details.
Reducing Information Leakage
See the Referrer-Policy section for details.
Isolating Contexts
See the Cross-Origin-Opener-Policy (COOP) and Cross-Origin-Embedder-Policy (COEP) sections for details.
Blocking Unintended Features
See the Permissions-Policy section for details.
Preventing Clickjacking
See the X-Frame-Options section for details.
Improving Browser Behaviour
See X-Content-Type-Options section for details.
Enhancing Trust and SEO
A secure website builds trust with users and browsers. Search engines also rank secure websites, potentially improving your site's SEO rankings.
It's important to remember that even static websites can serve as entry points for attackers, host phishing pages, or be used in more complex attack chains.
Just show me the code
It's worth mentioning that the examples I'm giving aren't specific to Cloudflare Pages or 11ty. They can be used with any static website, or indeed any website or web server (e.g. Apache, Nginx, IIS, Tomcat, Node.js).
The code below sits in my _headers file in the public assets directory of my 11ty blog on Cloudflare Pages. There's a tonne of information in the Cloudflare documentation on headers here if you're interested.
/*
Access-Control-Allow-Origin: https://nooshu.com
Cache-Control: public, s-maxage=31536000, max-age=31536000
Content-Security-Policy: base-uri 'self';child-src 'self';connect-src 'self';default-src 'none';img-src 'self' https://v1.indieweb-avatar.11ty.dev/;font-src 'self';form-action 'self' https://webmention.io https://submit-form.com/DmOc8anHq;frame-ancestors;frame-src 'self' https://player.vimeo.com/ https://www.slideshare.net/ https://www.youtube.com/ https://giscus.app/ https://www.google.com/;manifest-src 'self';media-src 'self';object-src 'none';script-src 'self' https://giscus.app/ https://www.google.com/ https://www.gstatic.com/;style-src 'self' 'unsafe-inline' https://giscus.app/;worker-src 'self';upgrade-insecure-requests;
Cross-Origin-Opener-Policy: same-origin
Permissions-Policy: accelerometer=(), ambient-light-sensor=(), autoplay=(), camera=(), display-capture=(), document-domain=(), encrypted-media=(), fullscreen=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), midi=(), navigation-override=(), payment=(), picture-in-picture=(), publickey-credentials-get=(), screen-wake-lock=(), sync-xhr=(), usb=(), web-share=(), xr-spatial-tracking=()
Referrer-Policy: strict-origin-when-cross-origin
Cross-Origin-Resource-Policy: cross-origin
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
X-Content-Type-Options: nosniff
X-DNS-Prefetch-Control: off
X-Frame-Options: DENY
X-Permitted-Cross-Domain-Policies: none
Origin-Agent-Cluster: ?1
Or I've created a GitHub Gist here if that's easier to copy, paste, and modify.
Just remember to change the Content-Security-Policy (CSP) and ensure it allows any 3rd-party assets you load on your website. For this, you are specifically looking at the following CSP directives:
form-action for form actions to a third party, e.g. a contact form.
script-src to allow the execution of scripts from a third party, e.g. Google Analytics.
img-src to allow images to load from a third party, e.g. Webmention images.
frame-src to allow